Data on artificial intelligence's environmental footprint may not be enough to overcome inertia on climate change

Closing data gaps alone will not be enough to address the ongoing climate catastrophe

In the rapidly evolving world of artificial intelligence (AI), concerns about its environmental impact are growing. Mitigating the tech industry's environmental footprint requires a holistic approach, built on adaptable frameworks that can act on existing knowledge and incorporate new data as it becomes available.

The economist Mariana Mazzucato has criticised the lack of transparency around tech companies' energy consumption, arguing that big tech should not be let off the hook for its environmental impact. That opacity stems from narrow or outmoded metrics and from the purchase of renewable energy credits, both of which can obscure companies' true environmental footprint.

One of the key challenges is the lack of comprehensive reporting and standards. There are no standardized, comprehensive metrics for AI’s energy and environmental impacts across the entire lifecycle, from model training and inference to data centre infrastructure. While some regions have sustainability reporting requirements, these often lack specificity for AI and do not mandate granular disclosures.
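To illustrate the kind of granular accounting such metrics would require, the sketch below estimates the energy and carbon of a single training run using the common first-order approximation energy = power × time × PUE, emissions = energy × grid carbon intensity. All inputs are illustrative assumptions, not figures disclosed by any provider.

```python
# Illustrative sketch: a first-order estimate of one training run's energy and
# carbon footprint. All inputs below are hypothetical placeholders, not
# disclosed figures from any provider.

def training_emissions_kgco2e(avg_gpu_power_kw: float,
                              num_gpus: int,
                              hours: float,
                              pue: float,
                              grid_kgco2e_per_kwh: float) -> float:
    """Energy (kWh) = power x time x PUE; emissions = energy x grid intensity."""
    energy_kwh = avg_gpu_power_kw * num_gpus * hours * pue
    return energy_kwh * grid_kgco2e_per_kwh

# Example with made-up inputs: 1,000 GPUs averaging 0.4 kW each for 30 days,
# a data-centre PUE of 1.2, on a grid emitting 0.4 kgCO2e per kWh.
print(training_emissions_kgco2e(0.4, 1000, 24 * 30, 1.2, 0.4))  # ~138,240 kgCO2e
```

Even this simple estimate depends on operational data that is rarely disclosed, and it omits embodied emissions, water use, and inference, which is precisely why lifecycle-wide reporting standards matter.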

The environmental costs of AI development and use have been described as 'out of control'. AI’s environmental impact extends beyond energy and carbon: hyperscale data centres consume hundreds of thousands of gallons of water daily, and the disposal of obsolete hardware generates substantial e-waste. The extraction of critical minerals for AI hardware is linked to environmental degradation, corruption, and conflict, yet these externalities are rarely included in accountability frameworks.

To address these challenges, policymakers should establish comprehensive, standardized metrics for AI’s environmental impact, requiring disaggregated reporting by model, usage, and infrastructure. This could be enforced through mandatory annual reports and by conditioning access to public markets, government funding, or procurement on compliance.
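As a rough illustration of what "disaggregated reporting by model, usage, and infrastructure" could look like in practice, the sketch below defines a hypothetical disclosure record; the field names and example values are assumptions for illustration, not an existing reporting standard.

```python
# Illustrative sketch of a per-model, per-phase environmental disclosure record.
# The schema and the example values are assumptions, not an existing standard.

from dataclasses import dataclass

@dataclass
class AIFootprintDisclosure:
    model_name: str
    phase: str                 # "training" or "inference"
    energy_kwh: float          # metered energy for the reporting period
    emissions_kgco2e: float    # location-based, not offset-adjusted
    water_litres: float        # cooling and on-site water consumption
    ewaste_kg: float           # retired hardware sent to disposal or recycling
    data_centre_region: str    # where the workload physically ran
    renewable_share: float     # verifiable on-site/contracted share, 0.0 to 1.0

report = AIFootprintDisclosure(
    model_name="example-model",
    phase="inference",
    energy_kwh=1.2e6,
    emissions_kgco2e=4.8e5,
    water_litres=3.0e6,
    ewaste_kg=1.5e3,
    data_centre_region="eu-north",
    renewable_share=0.35,
)
```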

Governance frameworks must address the entire AI lifecycle, including the environmental and social costs of mineral extraction, manufacturing, data centre operation, and end-of-life disposal. This includes mandating verifiable, site-specific renewable energy use and tracking the fate of retired hardware.

Federal agencies should integrate AI’s resource footprint into national energy and grid planning, ensuring that AI growth does not exacerbate strain on local ecosystems or infrastructure. AI itself can be used to enhance accountability: remote sensing and machine learning can monitor illicit mining, track supply chains, and provide early warnings of environmental harm.

Sustainability rating agencies should expand criteria to include AI-specific impacts, creating market pressure for better disclosure. Regulators could also introduce carbon-aware scheduling, aligning AI workloads with periods of high renewable energy availability to minimise carbon intensity.
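A minimal sketch of carbon-aware scheduling is shown below: a deferrable workload is started only when grid carbon intensity drops below a threshold. The intensity readings and the threshold are made-up placeholders standing in for a real grid-carbon data feed.

```python
# Minimal sketch of carbon-aware scheduling: a deferrable AI workload is started
# only once grid carbon intensity falls below a threshold. The sample readings
# below are made up and stand in for a real grid-carbon data feed.

import time
from typing import Callable, Iterator

CARBON_THRESHOLD_G_PER_KWH = 200.0  # run only when the grid is relatively clean
POLL_INTERVAL_SECONDS = 0           # set to e.g. 900 for real 15-minute polling

def run_when_grid_is_clean(intensity_feed: Iterator[float],
                           job: Callable[[], None]) -> None:
    """Poll a carbon-intensity feed (gCO2e/kWh) and start the job only when clean."""
    for intensity in intensity_feed:
        if intensity <= CARBON_THRESHOLD_G_PER_KWH:
            job()  # grid is clean enough; run the deferred workload now
            return
        time.sleep(POLL_INTERVAL_SECONDS)  # defer and check the next reading

# Demo: intensity falls from 420 to 180 gCO2e/kWh; the job runs on the third reading.
readings = iter([420.0, 310.0, 180.0])
run_when_grid_is_clean(readings, lambda: print("starting deferrable training batch"))
```

The same idea can also be applied spatially, routing deferrable workloads to regions where the grid is currently cleaner rather than only shifting them in time.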

International reporting standards and enforceable transparency norms are needed to hold multinational tech firms accountable, especially as AI hardware and data infrastructure are globally distributed.

In conclusion, holding big tech accountable for AI’s environmental impact requires moving beyond carbon accounting and data transparency to address the full lifecycle—including water, e-waste, and mineral extraction—through standardized disclosure, regulatory integration, and global cooperation. Leveraging AI for environmental governance and creating enforceable market incentives are critical steps toward sustainable AI development.
