Regulation of AI under GDPR: A Guide for Compliance from the Design Stage - Episode 2: The Design Stage Explored
In the realm of artificial intelligence (AI) development, adhering to privacy and data protection principles is of utmost importance. Both the European Union's Artificial Intelligence Act (AI Act) and the General Data Protection Regulation (GDPR) emphasize the need for privacy and data protection 'by design' and 'by default' in the design phase of AI systems.
Firstly, AI developers must exercise caution when selecting data sources for training AI models. Sources should be assessed for their relevance, adequacy, and appropriateness to the intended purposes. Inappropriate sources should be avoided, and selection criteria should be consistent with GDPR rules, for example by not scraping data from public social media profiles where no lawful basis covers that processing.
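To make this concrete, the sketch below shows a simple source-vetting gate applied before ingestion: each candidate source must declare a lawful basis and pass a relevance check against the stated purpose. The manifest fields and the conservative rule for scraped social-media data are illustrative assumptions, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    lawful_basis: str | None   # e.g. "consent", "contract", "legitimate_interest"
    scraped_social_media: bool
    relevant_to_purpose: bool

def admissible(source: DataSource) -> bool:
    """Reject sources that lack a documented lawful basis, or that were
    scraped from public social media profiles without consent."""
    if source.lawful_basis is None:
        return False
    if source.scraped_social_media and source.lawful_basis != "consent":
        return False  # conservative default; refer to legal review
    return source.relevant_to_purpose

candidates = [
    DataSource("crm_export", "contract", False, True),
    DataSource("profile_scrape", None, True, True),
]
print([s.name for s in candidates if admissible(s)])  # -> ['crm_export']
```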
Secondly, personal data should be minimized and anonymized wherever possible during the design phase. Privacy-enhancing technologies like pseudonymization and encryption are essential to protect personal data.
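As a minimal sketch of what this can look like during dataset preparation, the example below pseudonymizes a direct identifier with a keyed hash, generalizes a quasi-identifier, encrypts free text with Fernet from the `cryptography` package, and drops everything else; the record layout and key-management comments are assumptions for illustration.

```python
import hmac
import hashlib
from cryptography.fernet import Fernet

PSEUDONYM_KEY = b"example-key"          # assumption: a secret managed in a key vault
fernet = Fernet(Fernet.generate_key())  # assumption: key stored and rotated securely

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (GDPR Art. 4(5))."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def minimize_and_protect(record: dict) -> dict:
    """Keep only the fields needed for training; protect what remains."""
    return {
        "user_ref": pseudonymize(record["email"]),              # pseudonymized ID
        "age_band": record["age"] // 10 * 10,                   # generalized value
        "notes_enc": fernet.encrypt(record["notes"].encode()),  # encrypted free text
        # name, address, and phone are dropped entirely (data minimization)
    }

raw = {"email": "jane@example.com", "age": 34, "notes": "support ticket text",
       "name": "Jane Doe", "address": "1 High St", "phone": "+44 20 7946 0958"}
print(minimize_and_protect(raw))
```

Note that pseudonymized data remain personal data under the GDPR; only genuinely anonymized data fall outside its scope.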
Thirdly, the principles of 'Privacy by Design' and 'Privacy by Default' require that privacy protection measures be integrated proactively from the earliest stages of AI system design. This means embedding technical and organizational safeguards that ensure maximum data protection without extra effort from the user. These measures should remain in place throughout the AI lifecycle, avoiding the need for retrospective fixes.
Fourthly, it is also crucial to address how the AI model's outputs are handled and to design the system architecture itself to prevent unintended exposure of personal data and bias.
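One common safeguard at the output stage is a redaction filter that scrubs recognizable personal-data patterns before a response leaves the system. The sketch below uses illustrative regular expressions for emails and phone numbers; a production system would rely on a dedicated PII-detection component.

```python
import re

PATTERNS = {
    "EMAIL": re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane@example.com or +44 20 7946 0958."))
# -> "Contact [EMAIL REDACTED] or [PHONE REDACTED]."
```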
Lastly, businesses should implement appropriate technical and organizational measures both when determining the means of processing and during the processing itself, to ensure data minimization and safeguard data subjects' rights, as outlined in Article 25 GDPR.
In summary, the design phase requires a strategic data approach: lawful source selection, data minimization, strong privacy protections integrated early through privacy by design/default principles, and ongoing technical measures aligned with the GDPR's requirements and the AI Act's framework for trustworthy AI systems.
Moreover, minimization of the personal data processed is required, and AI models must be tested to prevent unintentional data memorization and to reduce the risk of accidentally disclosing personal data. Anonymizing personal data for AI training purposes is one way to take processing outside the GDPR's scope, but the standard for anonymization is high and subject to complex case law.
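Testing for memorization can be sketched as a simple prefix/suffix probe: sample snippets from the training corpus, prompt the model with only a prefix, and measure how often the verbatim suffix reappears in the output. The `generate` callable below stands in for whatever inference API the model exposes, and the threshold is a hypothetical internal policy value.

```python
from typing import Callable

def memorization_rate(snippets: list[str],
                      generate: Callable[[str], str],
                      prefix_len: int = 50) -> float:
    """Fraction of training snippets whose held-out suffix the model
    reproduces verbatim when prompted with only the prefix."""
    leaked = 0
    for text in snippets:
        prefix, suffix = text[:prefix_len], text[prefix_len:]
        if suffix and suffix in generate(prefix):
            leaked += 1
    return leaked / len(snippets)

# Hypothetical usage: trigger remediation (deduplication, retraining,
# output filtering) if the leak rate exceeds the agreed threshold.
# rate = memorization_rate(sampled_training_texts, model.generate)
# assert rate < 0.001, "model reproduces training data verbatim"
```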
Accuracy of personal data is key for both input and output data. The GDPR's transparency principle requires informing individuals about the accuracy limits of personal data generated by AI. As AI continues to evolve, it is essential to ensure that it is developed and used in a way that respects privacy and protects personal data.
Within the AI development industry, investments in technology and cloud-computing resources should be planned with compliance in mind. This includes taking a strategic approach to funding privacy-enhancing technologies and data minimization measures, such as pseudonymization and encryption, as mandated by the GDPR.
In addition, the reliability of AI systems depends heavily on the accuracy of input and output data, as emphasized by both the GDPR and the emerging AI Act. To achieve this, businesses should invest in data verification systems and uphold the transparency principle by informing individuals about the accuracy limits of AI-generated personal data.
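A data verification step can be as simple as rule-based validity checks on AI-generated personal data, combined with a standing accuracy notice attached to every released record. In the sketch below, the field names, rules, and notice wording are illustrative assumptions.

```python
import re
from datetime import date

RULES = {
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "birth_year": lambda v: v.isdigit() and 1900 <= int(v) <= date.today().year,
}

def verify_output(record: dict) -> dict:
    """Check generated fields against validity rules and attach a
    transparency notice about accuracy limits (GDPR Arts. 5(1)(d), 13-14)."""
    failures = [field for field, rule in RULES.items()
                if field in record and not rule(record[field])]
    return {
        "data": record,
        "verified": not failures,
        "failed_fields": failures,
        "notice": ("These values were generated by an AI system and may be "
                   "inaccurate; you may request rectification."),
    }

print(verify_output({"email": "not-an-email", "birth_year": "1985"}))
# -> verified: False, failed_fields: ['email']
```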
Finally, as AI technology increasingly permeates business and finance, it is crucial for industry leaders to stay informed about the latest Privacy by Design/Default principles, data protection regulations such as the GDPR, and developments under the Artificial Intelligence Act, particularly with regard to trustworthy AI system development.