
The extent to which open-source artificial intelligence can truly be considered open.

Examining the AI Act's oversight of openly available Artificial Intelligence technologies


The Open Source AI Definition (OSAID), a community-driven framework developed by the Open Source Initiative (OSI) and the Carnegie Mellon University-led Open Forum for AI (OFAI), aims to bring clarity to the world of open-source AI. The OSAID requires that all AI components (model architecture, inference code, model parameters or weights, and related artifacts) be openly available for an AI system to be considered truly open source.

However, the relationship between the OSAID and the European Union's AI Act (EU AI Act) is complex. The EU AI Act exempts certain open-source AI systems in specific contexts, aiming to support small, independent developers. Yet the Act lacks detailed definitions, leaving interpretational gaps in which the OSAID could become an informal standard that shapes legal decisions.

A key concern is that the EU AI Act does not explicitly define open source. This ambiguity means a definition such as the OSAID could end up shaping what the law treats as open source, and with it the scope and enforcement of the Act. At the same time, the relatively 'soft' nature of the OSAID, in particular its omission of a requirement for transparency on training data, raises concerns that it cannot guarantee full reproducibility or accountability, potentially weakening the regulatory framework if relied upon exclusively.

The EU AI Act mentions exemptions for open-source AI projects, but the regulation leaves room for interpretation. For instance, recent consultations by the Commission suggest that models hosted on platforms such as Hugging Face and GitHub will be considered 'put into service'. This raises questions about who bears responsibility when a third party puts an open-source AI system into service, and about the precise regulatory scope for open-source models hosted on such platforms.

In summary, the OSAID provides a clearer, multi-component definition of open-source AI, covering code, model, and weights transparency, but does not mandate full training-data disclosure. The EU AI Act, by contrast, offers exemptions for certain open-source AI systems but lacks detailed definitions, leaving interpretational gaps in which the OSAID could become an informal standard that influences legal decisions.

This situation creates uncertainty about the treatment of open-source AI under the Act, especially regarding responsibility for AI systems put into service by third parties and the precise regulatory scope for open-source models hosted on platforms. As the debate continues, an open-source regulatory carve-out that fails to balance risks and innovation could create more risk for people without providing additional opportunity for innovation.

Artificial-intelligence systems that comply with the OSAID could thus influence legal interpretations of the EU AI Act, as the OSAID offers a clearer definition of open-source AI components and could serve as an informal standard. Yet because the OSAID does not require full transparency on training data, relying on it exclusively raises concerns about reproducibility and accountability, and could weaken the regulatory framework.
