GitHub and others call for more open-source support in EU AI law

In a paper sent to EU policymakers, a group of companies including GitHub, Hugging Face, and Creative Commons is calling for more support for the open-source development of AI models as lawmakers finalize the AI Act. EleutherAI, LAION, and Open Future also cosigned the paper.

Their suggestions to the European Parliament ahead of the final rules include clearer definitions of AI components, clarification that hobbyists and researchers working on open-source models are not commercially benefiting from AI, allowances for limited real-world testing of AI projects, and proportional requirements for different foundation models.

GitHub senior policy manager Peter Cihon tells The Verge that the paper aims to give lawmakers guidance on the best way to support the development of AI. He says companies want their voices heard as other governments draft their own versions of AI laws. “As policymakers put pen to paper, we hope that they can follow the example of the EU.”

Regulations around AI have been a hot topic for many governments, with the EU among the first to begin seriously discussing proposals. But the EU’s AI Act has been criticized for being too broad in its definitions of AI technologies while still focusing too narrowly on the application layer. 

“The AI Act holds promise to set a global precedent in regulating AI to address its risks while encouraging innovation,” the companies write in the paper. “By supporting the blossoming open ecosystem approach to AI, the regulation has an important opportunity to further this goal.”

The Act is meant to encompass rules for different kinds of AI, though most of the attention has been on how the proposed regulations would govern generative AI. The European Parliament passed a draft policy in June.

Some developers of generative AI models have embraced the open-source ethos, sharing access to their models so the larger AI community can experiment with them and build trust. Stability AI released an open-source version of Stable Diffusion, and Meta released its large language model Llama 2 as nominally open source. Because Meta doesn’t share where it got its training data and restricts who can use the model for free, Llama 2 technically doesn’t meet open-source standards.

Open-source advocates believe AI development works better when people don’t have to pay for access to models and when there is more transparency into how a model is trained. But openness has also caused problems for the companies building these frameworks: OpenAI decided to stop sharing much of its research around GPT, citing competition and safety concerns.

The companies behind the paper said some of the currently proposed rules for models deemed high-risk, regardless of how big or small the developer is, could be detrimental to developers without considerable financial resources. For example, requiring third-party auditors “is costly and not necessary to mitigate the risks associated with foundation models.”

The group also insists that sharing AI tools on open-source libraries does not constitute commercial activity, so such sharing should not come under the Act’s regulatory measures.

Rules prohibiting the testing of AI models in real-world circumstances, the companies said, “will significantly impede any research and development.” They argued that open testing provides lessons for improving how models function. Under the current proposal, AI applications cannot be tested outside of closed experiments, a restriction meant to prevent legal issues arising from untested products.

Predictably, AI companies have been vocal about what should be part of the EU’s AI Act. OpenAI lobbied EU policymakers against harsher rules for generative AI, and some of its suggestions made it into the most recent version of the act.



Source: The Verge
