As negotiators gather for key discussions on Friday, preceding the final talks set for December 6, sources reveal that “foundation models,” or generative AI, have emerged as the primary obstacle in the negotiations concerning the European Union’s proposed AI Act. The insiders, requesting anonymity due to the confidential nature of the discussions, highlighted the significance of addressing this issue in the ongoing talks.
Foundation models, such as the one developed by Microsoft-backed OpenAI, refer to AI systems trained on extensive datasets, capable of learning from new data to execute diverse tasks.
Following two years of negotiations, the European Parliament endorsed the bill in June. The next step involves reaching an agreement on the draft AI rules through discussions among representatives from the European Parliament, the Council, and the European Commission.
Experts from EU countries are set to convene on Friday to discuss their stance on foundation models, access to source codes, penalties, and other related topics. Concurrently, lawmakers from the European Parliament will also gather to finalize their positions.
Failure to reach an agreement poses the risk of the AI Act being delayed due to time constraints before the upcoming European parliamentary elections next year.
Opinions diverge among experts and lawmakers: some propose a tiered regulatory approach to foundation models, with stricter rules reserved for models exceeding thresholds such as 45 million users, while others argue that even smaller models could pose comparable risks.
The primary obstacle to reaching an agreement has come from France, Germany, and Italy, which advocate allowing developers of generative AI models to self-regulate rather than face stringent rules.
During a meeting of the economy ministers from these countries in Rome on October 30, France successfully persuaded Italy and Germany to back this proposal, according to insider sources.
Up until this point, negotiations had been progressing relatively smoothly, with lawmakers reaching compromises in several other contentious areas, including the regulation of high-risk AI applications.
Criticism of self-regulation has been voiced by European parliamentarians, EU Commissioner Thierry Breton, and numerous AI researchers. In an open letter, researchers like Geoffrey Hinton have cautioned that self-regulation is likely to fall short of the necessary standards for ensuring the safety of foundation models.
AI companies, such as Mistral in France and Aleph Alpha in Germany, have opposed the tiered approach to regulating foundation models, receiving support from their respective countries. Mistral, for instance, prefers strict rules for products rather than the technology itself, according to a source close to the company.
The growing legal uncertainty, despite stakeholders’ efforts to keep negotiations on track, is seen as unhelpful for European industries, creating challenges for business planning. Pending issues in the talks include the definition of AI, fundamental rights impact assessment, law enforcement exceptions, and national security exceptions.
Lawmakers have struggled with divisions over the use of AI systems by law enforcement agencies for biometric identification in publicly accessible spaces. Spain, which currently holds the EU presidency, has proposed compromises to expedite the process. If no deal is reached in December, the incoming Belgian presidency will have limited time before the legislation is likely shelved ahead of the European elections.
Mark Brakel, director of policy at the Future of Life Institute, noted that issues which initially seemed headed for compromise have become more contentious in recent weeks.