Utilization of Public Computing Infrastructure
In the rapidly evolving world of Artificial Intelligence (AI), the growing divide in compute resources is a significant concern. The largest AI models today use 10 billion times more training compute than the largest models in 2010, with training compute for the largest models doubling every five to six months. As a result, access to compute is increasingly concentrated in a handful of large technology companies, leaving smaller entities and public-good projects at a disadvantage.
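The scale of this growth follows directly from the doubling time. The short Python sketch below is a back-of-the-envelope check using illustrative assumptions (a 2010–2024 comparison window and the five-to-six-month doubling times quoted above), not figures taken from any specific dataset; it shows how such doubling rates compound to growth in the hundreds-of-millions to tens-of-billions range.

```python
# Back-of-the-envelope check: cumulative growth implied by a given doubling time.
# The 14-year window (2010 to ~2024) and the doubling times are illustrative assumptions.

YEARS = 14  # assumed comparison window

for doubling_months in (5, 6):
    doublings = YEARS * 12 / doubling_months      # number of doublings in the window
    growth_factor = 2 ** doublings                # total compute growth factor
    print(f"doubling every {doubling_months} months -> "
          f"~{growth_factor:.1e}x growth over {YEARS} years")

# Roughly 1e10x at a 5-month doubling time and 3e8x at 6 months, so the
# ten-billion-fold figure sits at the faster end of this range.
```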
To address this issue, public sector intervention is crucial. Public compute policies, such as the UK Government's £900 million investment in a new AI Research Resource (AIRR), aim to provide world-class compute to UK-based researchers. The AIRR, hosted by the University of Bristol, could promote safe, sustainable, and socially beneficial AI activities.
However, realizing the potential of public compute policies for a more plural and public-interest AI development model faces several challenges. The first is the insufficient scale of public compute resources. Current public programs, like the U.S. National AI Research Resource (NAIRR), provide computing capacity far below industry levels and inadequate for large universities or broader public-good projects, limiting their ability to meet growing AI research demands.
Another challenge is the lack of clarity and concrete implementation plans. Initiatives such as California's CalCompute and New York's Empire AI consortium exist, but they often lack detailed public information or tangible access frameworks, making it unclear how effectively they serve diverse public research needs.
The complexity of intellectual property (IP) and licensing issues around open-source and open-weight AI models also poses a significant challenge. While these models advance transparency and innovation, they introduce legal and compliance challenges related to IP protection, license obligations, and international variations in law, complicating the development and sharing of publicly accessible AI technologies.
Regulatory focus on compute may also misalign with actual risks. Many current AI policies emphasize compute capacity as a proxy for risk, but some experts argue that contextual factors like training data and application use cases are more effective for governance. This misalignment can lead to regulatory frameworks that do not optimally support public interest or pluralistic innovation.
Overcoming these challenges requires more than simply increasing public computational capacity. Realizing a plural, public-interest AI model demands substantive public investment in large-scale, well-governed compute infrastructure accessible to varied public-good actors; clear legal frameworks to navigate complex open-source and IP environments; governance models that emphasize data transparency and AI use contexts over simplistic compute thresholds; and policy guardrails that prevent privatization and promote inclusivity, openness, and accountability in AI development.
In conclusion, expanding public compute is one way of exerting greater public control over the AI sector; leaving AI development wholly to market forces will fail to realize its full benefits. The UK Government's AIRR initiative is a step in the right direction, but it will require careful policy design to ensure that it does not reinforce monopolization and enclosure of AI infrastructure by Big Tech, which would undermine open, competitive, and trustworthy AI ecosystems that reflect diverse societal interests.
- To foster a more diverse, public-interest-driven Artificial Intelligence (AI) development model, substantial public investment in scalable, well-governed compute and data infrastructure is necessary; the capacities provided by current public compute programs often lag behind industry standards, limiting their utility for larger universities and public-good projects.
- The lack of clear legal frameworks and concrete implementation plans surrounding public compute resources and open-source AI models poses a significant challenge: it introduces legal and compliance complexities that hinder the development and sharing of publicly accessible AI technologies, necessitating comprehensive governance models and policy guardrails.