The Landmark AI Act Could Set the Framework for Future Compliance Practices and Other Countries’ Legislation
Last week, the EU’s groundbreaking AI law came into effect, with significant implications for companies operating within the European Union and beyond. This comprehensive legislation, known as the AI Act, has been in development for years and introduces new regulations focused on the safety, sustainability, and impartiality of AI systems, particularly those deemed high-risk, such as systems used in law enforcement, hiring, or critical infrastructure. The law officially took effect on August 1, with certain provisions set to be implemented over the next two years.
Much like the EU’s General Data Protection Regulation, the AI Act could have widespread effects on technology companies conducting business in European markets, setting a potential standard for other regions to follow. According to a briefing from the European Parliament, companies offering high-risk AI products in the EU will be required to conduct a conformity assessment before entering the market. This assessment includes “testing, data training, and cybersecurity” requirements and may also involve a “fundamental rights impact assessment.”
A May report from the Brookings Institution suggested that AI companies operating internationally are likely to comply with these regulations to maintain access to the lucrative European market, potentially leading them to develop region-specific models. “Given the importance of the European market, international companies could be expected to align some of their AI governance practices with the AI Act to maintain access to the European Union’s internal market,” the report’s authors noted.
In response to the evolving regulatory environment, some companies are adjusting their strategies. When Microsoft partnered with French AI startup Mistral in February, the move was seen as potentially signaling more geographically strategic partnerships in light of the global regulatory landscape. Similarly, IBM has collaborated with German AI startup Aleph Alpha.
In contrast, Meta recently announced it would not release its upcoming multimodal Llama AI models in the EU, as reported by Axios last month. This decision is reportedly linked to GDPR restrictions related to social media posts used in training data, rather than the AI Act itself.
Efforts to align AI regulations globally may position the AI Act as a model for other countries. However, while President Joe Biden signed a broad executive order last fall addressing many of the same topics as the AI Act, progress on specific federal AI legislation in the United States has stalled.