In a significant week for AI policymakers, the European Union has finalised its AI Act, marking a milestone in the regulation of artificial intelligence. Meanwhile, in Seoul, South Korea, 16 leading companies signed the “Frontier AI Safety Commitments,” and multiple countries pledged to collaborate on mitigating AI-related risks.
The Rise of AI and Global Concerns
The past year has seen artificial intelligence transition from a niche technology to a mainstream topic of discussion, largely spurred by the launch of ChatGPT in late 2022. This shift has fueled both excitement and anxiety about AI’s potential, with some experts and industry leaders warning of catastrophic risks if AI development continues unchecked. These concerns are not merely theoretical; the proliferation of AI has already raised issues related to bias, surveillance, and the spread of misinformation.
Despite these warnings, the pace of AI development remains rapid, with tech companies racing to release new AI products. In response, policymakers are grappling with how to keep regulation abreast of a technology that is advancing faster than their deliberations.
Collaborative Efforts and Future Directions
At the AI Seoul Summit in South Korea, a coalition of countries, including Germany, France, Italy, Spain, Switzerland, the UK, Turkey, and the Netherlands, along with the EU, agreed to collaborate on setting thresholds for severe AI risks. This agreement aims to address extreme threats, such as the use of AI in creating biological and chemical weapons.
The “Frontier AI Safety Commitments,” signed by 16 influential AI companies, including the French startup Mistral AI, represent a voluntary effort to manage AI risks. These commitments involve identifying and mitigating risks throughout the AI lifecycle and establishing processes for handling risks that exceed defined thresholds.
Looking ahead, the AI regulatory landscape continues to evolve. The AI Seoul Summit follows the AI Safety Summit held in the UK six months earlier, and the upcoming AI Action Summit in Paris in February 2025 reflects a shift towards proactive regulation, one that aligns with President Macron’s vision of positioning Paris as a hub for artificial intelligence.
Mark Rodseth, VP of Technology, EMEA at CI&T, emphasised the need for global collaboration and more frequent regulatory updates to keep pace with rapid AI advancements. While the EU’s consensus-driven regulatory process may pose challenges, the newly signed AI Act signifies a commitment to robust AI governance.