
AI Summit: A Start, But Global Agreement Remains a Distant Hope – By Reuters
By Martin Coulter and Paul Sandle
LONDON – British Prime Minister Rishi Sunak celebrated a series of significant agreements after hosting the inaugural artificial intelligence (AI) safety summit, yet an international framework for regulating the technology remains elusive.
During two days of discussions involving global leaders, business figures, and researchers, prominent executives like Elon Musk and OpenAI’s Sam Altman mingled with U.S. Vice President Kamala Harris and European Commission President Ursula von der Leyen to deliberate on future AI regulation strategies.
Leaders from 28 countries, including China, endorsed the Bletchley Declaration, a collaborative statement recognizing the risks associated with AI technology. Both the U.S. and the UK revealed plans to establish their own AI safety institutes, and two further summits are scheduled in South Korea and France next year.
While there was some agreement on the necessity of AI regulation, significant differences remain regarding how these regulations should be implemented and which entities should take the lead.
Concerns about the rapid advancements in AI have heightened for policymakers since Microsoft-backed OpenAI released ChatGPT to the public last year. The chatbot's exceptional ability to generate human-like responses has prompted some experts to advocate for a pause in further development, warning of the potential for systems to gain autonomy and pose threats to humanity.
Sunak expressed his "privilege and excitement" at hosting Musk, although European officials cautioned against the concentration of technology and data within a limited number of companies in the U.S. "Having just one single country with all of the technologies, all of the private companies, all the devices, all the skills, will be a failure for all of us," stated Bruno Le Maire, France’s Minister of the Economy and Finance.
The UK has adopted a distinct approach compared to the EU by proposing a lighter touch in AI regulation, in contrast to Europe’s forthcoming AI Act, which will impose stricter controls on developers of applications deemed "high risk." Vera Jourova, Vice President of the European Commission, indicated that while she did not expect other nations to adopt the EU’s legislation directly, a consensus on global regulatory standards is critical. "If the democratic world will not be rule-makers, and we become rule-takers, the battle will be lost," she emphasized.
Despite projecting a unified front, key players at the summit — namely the U.S., the EU, and China — were seen vying for influence. Some participants noted that Harris might have overshadowed Sunak by announcing the U.S. AI safety institute shortly after the UK revealed its own initiative. She delivered a speech in London that focused on the immediate risks of AI, diverging from the summit’s overarching emphasis on existential dangers.
"It was fascinating that just as we announced our AI safety institute, the Americans announced theirs," reflected Nigel Toon, CEO of British AI firm Graphcore.
British officials hailed China’s participation in the summit and its decision to sign the Bletchley Declaration as a diplomatic success. The Chinese Vice Minister of Science and Technology conveyed the country’s willingness to collaborate on AI governance, although he added that "countries, regardless of their size and scale, have equal rights to develop and use AI," highlighting existing tensions between China and the West.
In closed-door discussions, many participants raised concerns about the risks of open-source AI, whose underlying code is publicly accessible. Some experts warned that open-source models could be exploited by malicious actors to create dangerous applications.
At a live event in London, Musk warned that "open-source AI might approach or even exceed human-level intelligence," expressing uncertainty about the best course of action.
Yoshua Bengio, a renowned AI pioneer tasked with leading a major report under the Bletchley Declaration, underscored the importance of addressing open-source AI risks. He cautioned that such technology could fall into the wrong hands and be manipulated for harmful purposes, arguing that releasing powerful systems as open source without adequate safeguards could compromise public safety.