The potential of artificial intelligence (AI) raises high hopes, especially in medicine; yet the technology is widely seen as progressing largely unchecked.
AI surged onto the political agenda in early November 2023, when the G7 issued voluntary code-of-conduct guidelines, Rome hosted a trilateral meeting of the French, German and Italian ministers, US President Joe Biden signed an executive order, and a global summit on “AI Security and Safety” was held at Bletchley Park, north of London.
In late October 2023, the United Nations formed an expert panel to deliver recommendations by year-end on managing the use of AI, warning that a technology with “stunning capabilities” may pose serious risks to democracy and human rights.
UN Secretary-General António Guterres tasked the panel with “racing against time” to advise on how to steer the use of AI and to identify the hazards and opportunities it presents.
Voluntary Code of Conduct
The G7 recently agreed on a draft of global guiding principles and a voluntary code of conduct for companies and institutions developing advanced AI systems under the Hiroshima AI Process.
The guidelines aim to mitigate the risks AI technology poses to privacy and intellectual property.
The plan proposed by the group seeks to promote the development of “safe” global AI systems, “mitigate risks” and “preserve humanity at its core”, according to a joint statement that also called on all players in the sector to commit to it.
While underscoring advanced AI models’ “innovative and transformative potential”, particularly generative programmes like ChatGPT, the G7 recognised the need to “safeguard individuals, society and shared values.”
European Commission President Ursula von der Leyen welcomed the G7 guidelines and voluntary code of conduct, which reflect EU values for developing trustworthy AI, and invited all AI developers to promptly sign and implement the code.
Von der Leyen added that the EU is helping to erect global guardrails and governance for AI safety, the bloc being a pioneer in regulating the technology – a reference to the proposed EU Artificial Intelligence Act.
Excessive Regulation Hurts Competition
Germany, France and Italy seek closer cooperation on AI so Europe can more effectively compete with the US and China.
The ministers representing the three largest EU economies welcomed the bloc’s upcoming AI law, said it should be based on an innovation-friendly approach and called for increasing investment in this emerging technology.
At a recent Rome meeting, German Minister of Economic Affairs Robert Habeck, French Minister of Economy Bruno Le Maire, and Italian Minister of Business and Made in Italy Adolfo Urso welcomed the European Union’s upcoming artificial intelligence law – the world’s first covering this field – which is expected to be finalised by the end of the year.
In a joint statement, the ministers stressed the importance of crafting the EU law without unnecessary bureaucracy and red tape.
The bill aims to regulate AI according to risk level: the higher the threat to individual rights or health, for instance, the greater the obligations imposed on the system.
Moreover, the ministers stressed Europe can maintain its global standing on AI.
German Minister Robert Habeck said: “We don’t have to hide. In many areas, we have better companies than the tech giants in America.” However, he also called for quick decisions at the EU level, adding: “If it takes three and a half years of waiting, we no longer stand a chance…We’ll end up regulating a market that no longer exists.”
French Minister Bruno Le Maire noted that US investment in AI is currently ten times that of Europe: America invested $53 billion in AI last year, compared with $5.3 billion by the EU and $10.6 billion by China.
The three ministers called for streamlining cross-border operations to assist European startups. Italian Minister Adolfo Urso said AI would be a priority on his country’s G7 agenda when Italy assumes the rotating presidency in 2024.
In late October 2023, Bulgarian Minister of Electronic Government Alexander Yulovski said the EU must regulate AI technology use as it poses a broad risk to fundamental rights and European values, but without over-regulating.
In 2020, Slovenia opened the UNESCO-sponsored International Research Centre for AI in Ljubljana, which hosted the first European AI summer school, with over 630 participants from 42 countries. Slovenian Digital Transformation Minister Emilija Stojmenova Duh said that while smaller states cannot match the deep pockets of major players, specialisation and dedication in education and research can provide a competitive edge in individual AI areas.
United States: “Historic” Regulatory Decision
In early November 2023, US President Joe Biden issued an executive order on AI regulation to position America at the vanguard of global efforts in managing risks associated with this technology.
Biden’s order directs federal agencies to develop safety and security standards for AI systems. It also requires AI system developers to share safety testing results and other important information with the US government, according to a White House statement.
Bletchley Park Summit: “Beginnings” or “Missed Opportunities”?
At the first global AI Safety Summit, held over two days at Britain’s Bletchley Park – a birthplace of computing, where Nazi codes were cracked during WWII – several countries, including Britain, the US and China, agreed on the “need for international action.”
British Technology Minister Michelle Donelan said the declaration “represents the first time the world has united to define this problem.” The summit proposed forming a global expert committee to produce periodic reports for follow-up.
Defending China’s summit invitation despite accusations of technological espionage, UK Prime Minister Rishi Sunak said that “a serious strategy can’t be forged without including global capabilities” in this sector.
Citing the recent Italy-Germany-France agreement, Italian Minister Adolfo Urso described the Bletchley Park summit as the launch of a process aimed at building a “new global alliance” around rules and safeguards to be adopted for AI.
In an open letter to Sunak, over 100 experts, organisations and activists from Britain and abroad called the summit a “missed opportunity”, purposefully designed to serve “Big Tech.”
The coalition behind the letter, including trade unions, human rights groups like Amnesty International and other technology community voices, warned that “communities and workers most impacted by AI have been sidelined.”