The European Union's landmark AI Act entered its second compliance phase this week, imposing stringent documentation and transparency requirements on providers of large foundation models — a move that has drawn sharp criticism from smaller developers and open-source advocates who say the rules favor established technology giants.
Under the new requirements, companies deploying foundation models with more than 45 million users or significant compute capacity must submit detailed technical documentation to EU authorities, including information about training data sources, model architecture, and risk mitigation measures. The rules, which took effect Monday, represent the most concrete implementation of the AI Act since its passage last year.
A spokesperson for the European Commission defended the phased approach, saying the documentation requirements are essential to ensuring public safety and accountability in AI deployment. "These measures are proportionate and necessary," the spokesperson said in a written statement. "We have designed the framework to balance innovation with fundamental rights protection."
The Commission has granted large providers a six-month window to compile and submit their documentation packages, with initial reviews expected to begin in the third quarter. Companies that fail to comply face fines of up to 3 percent of global annual revenue.
Smaller Players Voice Concerns
The regulatory burden has sparked pushback from European startups and open-source communities, which argue that compliance costs will cement the dominance of well-funded American and Chinese competitors. A representative from a European open-source foundation, speaking on condition of anonymity to discuss regulatory strategy, said the documentation requirements are "disproportionately expensive for organizations without dedicated legal teams."
"We're talking about hundreds of hours of work to document training processes that were never designed with this level of bureaucratic oversight in mind," the representative said. "The irony is that open-source models, which are inherently more transparent, face the same paperwork as closed commercial systems."
The foundation has joined several industry groups in requesting a simplified compliance pathway for open-source projects and models released under permissive licenses. The groups argue that publicly available code and training data should satisfy transparency requirements without additional documentation.
Implementation Challenges
EU member states have begun establishing national AI offices to process compliance filings and coordinate enforcement, but several countries have reported staffing shortages and technical capacity gaps. A senior official at a national data protection authority in a Northern European country said regulators are "learning as we go" and acknowledged that initial reviews may take longer than anticipated.
The Commission has allocated approximately €150 million to support member state implementation efforts, including funding for technical expertise and cross-border coordination mechanisms.
Industry analysts expect the documentation phase to reveal significant variations in how companies have approached AI safety and risk management. Some large providers have been preparing compliance materials for more than a year, while others have only recently begun systematic documentation efforts.
The AI Act's third compliance phase, scheduled for early next year, will impose operational requirements on high-risk AI systems in sectors including healthcare, law enforcement, and critical infrastructure. Those rules are expected to affect a broader range of companies and public institutions across the bloc.