Twenty-eight countries including the US, UK and China have agreed to work together to ensure artificial intelligence is used in a “human-centric, trustworthy and responsible” way, in the first global commitment of its kind.
The pledge forms part of a communiqué signed by major powers including Brazil, India and Saudi Arabia, at the inaugural AI Safety Summit. The two-day event, hosted and convened by British prime minister Rishi Sunak at Bletchley Park, started on Wednesday.
Called the Bletchley Declaration, the document recognises the “potential for serious, even catastrophic, harm” to be caused by advanced AI models, but adds such risks are “best addressed through international co-operation”. Other signatories include the EU, France, Germany, Japan, Kenya and Nigeria.
The communiqué represents the first global statement on the need to regulate the development of AI, but at the summit there are expected to be disagreements about how far such controls should go.
Country representatives attending the event include Hadassa Getzstain, Israeli chief of staff at the ministry of innovation, science and technology, and Wu Zhaohui, Chinese vice minister for technology.
Gina Raimondo, US commerce secretary, gave an opening speech at the summit and announced a US safety institute to evaluate the risks of AI. This comes on the heels of a sweeping executive order announced on Monday by President Joe Biden, intended to curb the risks posed by the technology.
Tech executives including OpenAI’s Sam Altman, Elon Musk, Salesforce’s Marc Benioff, Google’s James Manyika and Demis Hassabis, and Arm’s Rene Haas are also in attendance.
Michelle Donelan, the UK secretary of state for science, innovation and technology, said that the summit marked a “historic moment not just for the UK but the world”, adding that China’s presence was “monumental”.
Asked whether the UK was now lagging behind the US in rolling out clear regulation for companies developing advanced AI technology, she said: “I don’t think it’s helpful to set arbitrary deadlines on regulation . . . there needs to be an empirical approach”.
“Are we ruling out legislation? Absolutely not. We’re trying to do things that are faster than legislation,” Donelan said, pointing to her government’s work to encourage companies to publish their AI safety policies.
Wu, representing China, also spoke at the opening, saying that all the actors “need to respect international law” and work together in the fight against the malicious use of AI. He said that AI technologies are “uncertain, unexplainable and lack transparency”.
A series of roundtables during the first day will cover the risks of AI misuse to global safety, the loss of control of AI, as well as what national policymakers and the international community can do in relation to the risks and opportunities of AI, according to a schedule seen by the Financial Times.
Musk will be at a session on the risks from loss of control of AI, which is likely to cover so-called artificial general intelligence, a term for AI that matches or surpasses human intelligence. Representatives from OpenAI, Nvidia, Anthropic and Arm will also be there.
Altman will attend an afternoon discussion on what AI developers can do to scale responsibly, alongside Hassabis, chief executive of Google DeepMind, and Dario Amodei, chief executive of Anthropic.
South Korea will co-host a mini virtual summit on AI over the next six months while France will host the next in-person summit in a year, the UK government announced.