# Artificial Intelligence Act's Strict Requirements Inadvertently Encompass Mobile Phones and Internet-Connected Gadgets
The European Union's Artificial Intelligence (AI) Act, designed to regulate high-risk uses of AI, has raised concerns that its broad scope could over-regulate low-risk products that merely embed AI features, such as smartphones and smart thermostats. Addressing these concerns requires amendments that clarify the Act's scope, refine its risk assessment methodology, and build flexibility into its obligations.
## Key Strategies for Reform
### Clarify and Narrow Definitions
To prevent catch-all regulation, the Act's definitions of "AI system" and "high-risk" should be tightened. Everyday consumer devices that pose minimal risk to health, safety, or rights could be explicitly excluded from high-risk obligations, provided they do not process sensitive data or interact in critical domains. Clear, objective technical criteria should also be developed for determining risk levels, reducing reliance on subjective judgment or overbroad categories.
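To illustrate what such objective criteria could look like in practice, the sketch below encodes a hypothetical rule-based classifier. The attribute names and decision rules are illustrative assumptions made for this article, not criteria drawn from the Act itself.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical attributes a regulator might assess; all fields are illustrative."""
    performs_safety_function: bool     # controls a physical safety mechanism
    operates_in_critical_domain: bool  # e.g. hiring, credit scoring, law enforcement
    processes_sensitive_data: bool     # e.g. biometric or health data
    everyday_consumer_device: bool     # e.g. smartphone feature, smart thermostat

def classify_risk(profile: AISystemProfile) -> str:
    """Apply explicit, objective rules rather than subjective judgment."""
    if profile.performs_safety_function or profile.operates_in_critical_domain:
        return "high-risk"
    if profile.processes_sensitive_data:
        return "limited-risk"
    if profile.everyday_consumer_device:
        # The explicit carve-out proposed above: benign everyday devices
        # fall outside high-risk obligations.
        return "minimal-risk"
    return "limited-risk"

# A smart thermostat: no safety function, no critical domain, no sensitive data.
print(classify_risk(AISystemProfile(False, False, False, True)))  # -> minimal-risk
```

Making the criteria explicit in this way is the point of the proposal: two assessors applying the same rules to the same product should reach the same classification.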
### Enhance Proportionality and Graduated Obligations
A "presumption of compliance" could be introduced for well-understood, low-risk AI applications, allowing providers to self-certify compliance with minimal requirements. Dynamic regulatory sandboxes could also be established, allowing providers of innovative but low-risk AI to test products in controlled environments with reduced regulatory burdens.
### Leverage Industry Standards and Codes of Practice
Voluntary codes of practice, as highlighted in the General-Purpose AI Code of Practice, could guide compliance for low-risk products, reducing formal regulatory overhead. Sector-specific standards should also be encouraged to reflect the actual risk profile of products like smart home devices.
### Strengthen Governance and Expertise
National authorities and the European AI Office should be equipped with technical expertise to assess products accurately, avoiding over-classification due to lack of understanding. Regular dialogue with industry, civil society, and technical experts should also be institutionalized to update risk categories and thresholds based on real-world experience and technological evolution.
## Concrete Amendment Proposals
The table below summarizes the proposed amendments, each aimed at preventing overreach into low-risk sectors, keeping pace with technological change, reducing unnecessary burdens, and encouraging innovation through a more flexible compliance approach.
| Current Approach | Proposed Amendment | Rationale |
|------------------|--------------------|-----------|
| Broad “AI system” definition | Exempt clearly benign, everyday devices | Prevents overreach into low-risk sectors |
| Static risk categories | Dynamic, evidence-based risk assessment | Keeps pace with technological change |
| One-size-fits-all compliance | Graduated, sector-specific requirements | Reduces unnecessary burdens |
| Binding rules for all | Codes of practice for low-risk cases | Encourages innovation and adoption |
## Conclusion
By implementing these strategies, the EU can maintain high standards of safety and rights protection without stifling innovation in consumer technologies. As a first concrete step, policymakers should exempt AI systems that do not perform a safety function by deleting the relevant phrases from Article 6(1)(a) and (b) of the AI Act. Regular review and stakeholder engagement will then be essential to keep the regulatory landscape proportionate and effective.
## Key Takeaways

- Amendments to the European Union's Artificial Intelligence Act should tighten the definitions of "AI system" and "high-risk" to prevent over-regulation of low-risk AI products.
- To foster innovation in consumer technologies, policymakers should exempt AI systems that do not perform a safety function by deleting the relevant phrases from Article 6(1)(a) and (b) of the AI Act.
- Voluntary industry standards and codes of practice, such as the General-Purpose AI Code of Practice, could guide compliance for low-risk AI products, reducing formal regulatory overhead.
- National authorities and the European AI Office need to be equipped with technical expertise to assess products accurately, avoiding over-classification due to lack of understanding.
- A "presumption of compliance" could be introduced for well-understood, low-risk AI applications, allowing providers to self-certify compliance with minimal requirements, and dynamic regulatory sandboxes should be established for testing low-risk AI products in controlled environments.