
1. Introduction
The forthcoming European Union (EU) Artificial Intelligence (AI) Act sets a new regulatory framework, categorizing AI systems according to their risk levels: Unacceptable Risk, High-Risk, Limited Risk, and Minimal Risk. The most stringent compliance requirements are reserved for High-Risk AI Systems. These systems must adhere to multiple regulatory processes, including the High-risk AI System Requirements, EU Declaration of Conformity, CE Marking, and registration in the EU database.
It is important to note that all AI systems must be assessed before they are placed on the market for commercial purposes. AI enterprises should therefore prepare and conduct an AI Risk Assessment to determine whether their systems are classified as high-risk. An overview of what constitutes a High-Risk AI system is available here.
If your business is planning to develop or deploy an AI system that could be classified as high-risk, you should not skip this article. It is important to understand the obligations and responsibilities your company might bear when launching your AI product in the EU market: high-risk AI systems are strictly regulated, and non-compliance can result in substantial penalties.
2. High-Risk AI Compliance Requirements
Under Articles 8-25 of the EU AI Act, developers and providers of High-Risk AI Systems must implement a comprehensive compliance strategy encompassing:
- Risk Management System: A continuous, iterative process across the AI system's lifecycle, including risk identification, analysis, evaluation, and mitigation (Art. 9).
- Data Governance: Ensures that training, validation, and testing data comply with GDPR and uphold fairness and individual rights.
- Technical Documentation: Comprehensive documentation outlining the AI system's specifications and functionalities.
- Traceability: Mechanisms to track records and monitor AI operations.
- Human Supervision: Processes to ensure human oversight of AI systems.
- Accuracy, Robustness, and Security: Ensuring the AI system performs reliably and securely under varied conditions.
- Quality Management System: Standards ensuring consistent quality in AI system development and deployment.
- EU Declaration of Conformity and CE Marking: Formal declarations and marking indicating compliance with EU standards.
- Registration: Enrollment in the official EU database for monitoring and regulatory oversight.
Risk Management System (Art. 9)
The risk management system shall be understood as a continuous, iterative process planned and run throughout the entire lifecycle of a high-risk AI system. It includes the following steps: (1) the identification and analysis of risks associated with health, safety, or fundamental rights; (2) the estimation and evaluation of the risks that may emerge; (3) the evaluation of other risks possibly arising; and (4) the adoption of appropriate, targeted risk management measures.
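For illustration only, the sketch below models this iterative cycle as a simple risk register in Python; the example risks, scoring heuristic, and threshold are assumptions of ours, not part of the Act:

```python
# Illustrative sketch only -- not an official Art. 9 implementation.
# It models the four steps above as a risk register that is revisited
# at each lifecycle stage of a hypothetical high-risk AI system.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str          # e.g. "biased outcomes for a protected group"
    affects: str              # "health", "safety" or "fundamental rights"
    likelihood: float         # estimated probability, 0.0 - 1.0
    severity: float           # estimated impact, 0.0 - 1.0
    mitigations: list[str] = field(default_factory=list)

    def residual_score(self) -> float:
        # Naive scoring heuristic for illustration; a real evaluation
        # would follow the provider's documented methodology.
        reduction = 0.2 * len(self.mitigations)
        return max(self.likelihood * self.severity - reduction, 0.0)

def review_register(register: list[Risk], threshold: float = 0.3) -> list[Risk]:
    """Steps 2-3: (re-)estimate and evaluate risks, flagging those needing measures."""
    return [r for r in register if r.residual_score() >= threshold]

# Step 1: identification and analysis (normally done by a cross-functional team).
register = [
    Risk("Biased credit-scoring outcomes", "fundamental rights", 0.4, 0.8),
    Risk("Unsafe behaviour on out-of-distribution input", "safety", 0.3, 0.9),
]

# Step 4: adopt targeted measures, repeated at every lifecycle stage.
for risk in review_register(register):
    risk.mitigations.append("targeted mitigation to be defined and documented")
```

In practice, such a register would be revisited whenever new information emerges (for example from post-market monitoring), mirroring the continuous nature of the Art. 9 process.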
Data Governance
For high-risk AI systems, training, validation, and testing data shall be subject to appropriate data governance and management practices. These practices include personal data protection, so providers and developers must understand and comply with their roles under the GDPR while ensuring fairness and respect for fundamental individual rights.
EU Declaration of Conformity
According to Article 47 of the EU AI Act, after documenting all the technical requirements for a High-Risk AI System, providers must draw up an EU Declaration of Conformity and keep it at the disposal of the competent authorities. The declaration must include:
- Name and business address of the product manufacturer or, where applicable, of the authorized representative
- Identification that allows the product's traceability
- Notified body details, if applicable
- A statement that the manufacturer takes full responsibility for the product's compliance
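As a non-authoritative illustration (the field names and example values are our own assumptions, not an official EU template), the items above could be captured in a simple machine-readable record along these lines:

```python
# Illustrative sketch only -- not an official EU template.
from dataclasses import dataclass, asdict
import json

@dataclass
class DeclarationOfConformity:
    manufacturer_name: str
    manufacturer_address: str
    authorized_representative: str | None   # if one has been appointed
    product_identification: str             # enables the product's traceability
    notified_body: str | None               # if applicable
    responsibility_statement: str

declaration = DeclarationOfConformity(
    manufacturer_name="Example AI GmbH",
    manufacturer_address="Example Strasse 1, Berlin",
    authorized_representative=None,
    product_identification="example-scoring-model v2.1, serial 0001",
    notified_body=None,
    responsibility_statement=(
        "The manufacturer takes full responsibility for the product's "
        "compliance with the applicable requirements."
    ),
)

# Exported for record-keeping and for making it available to authorities.
print(json.dumps(asdict(declaration), indent=2))
```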
Addressing Non-High-Risk Use Cases in High-Risk Systems
If a High-Risk AI System is employed in non-high-risk scenarios, providers are still required to submit the Declaration of Conformity and register in the EU system but may bypass the CE Marking process.
Determining That an AI System Is Not High-Risk
The most challenging scenario is determining whether an AI system listed in Annex III is nevertheless not high-risk under Art. 6(3). Systems that do not pose a significant risk of harm to health, safety, or fundamental rights, and do not materially influence the outcome of decision-making, may be classified as non-high-risk when at least one of the following conditions applies:
- The system is intended to perform a narrow procedural task
- It is intended to improve the result of a previously completed human activity
- It is intended to detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence the previously completed human assessment without appropriate human review
- It is intended to perform a preparatory task to an assessment
If an AI system is determined not to be high-risk, the provider must still document the assessment, register the system in the EU database, and provide the necessary information upon request from national competent authorities.
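As a rough, non-authoritative sketch (the condition keys, example system, and the simplified outcome logic are our own assumptions, not legal advice), the assessment and its documentation could be recorded along these lines:

```python
# Illustrative sketch only -- records an Art. 6(3)-style exemption check
# so the documentation can be produced on request by authorities.
from datetime import date

EXEMPTION_CONDITIONS = {
    "narrow_procedural_task": "Performs a narrow procedural task",
    "improves_prior_human_activity": "Improves the result of a previously completed human activity",
    "pattern_detection_with_human_review": (
        "Detects decision-making patterns or deviations without replacing "
        "the human assessment absent appropriate human review"
    ),
    "preparatory_task": "Performs a preparatory task to an assessment",
}

def assess_system(name: str, conditions_met: dict[str, bool]) -> dict:
    """Document which (if any) exemption conditions apply to an Annex III use case."""
    applicable = [key for key, met in conditions_met.items() if met]
    return {
        "system": name,
        "date": date.today().isoformat(),
        "conditions_met": applicable,
        "classified_high_risk": not applicable,  # simplification: no condition met -> treat as high-risk
        "note": "Assessment to be documented, registered, and provided to authorities on request.",
    }

record = assess_system(
    "example-triage-assistant",
    {
        "narrow_procedural_task": False,
        "improves_prior_human_activity": True,
        "pattern_detection_with_human_review": False,
        "preparatory_task": False,
    },
)
```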
3. Compliance Roadmap for High-risk AI Systems
To comply with the EU AI Act effectively, enterprises must prepare a robust internal AI governance framework as well as legal roadmaps in advance. This preparation helps mitigate potential legal risks and reduces compliance costs.
1. Documentation and Planning
- Licensing Agreements: Review and draft necessary licensing agreements for AI technology use.
- Budget Estimation: Develop a detailed budget forecast covering development, deployment, and compliance costs.
- Market Research: Conduct thorough market research to understand industry standards, competitive landscape, and regulatory expectations.
- Human Resources and Data Training: Plan the recruitment or training of personnel skilled in AI, data handling, and compliance.
- Stakeholder Involvement: Identify and engage all relevant stakeholders, such as developers, providers, and external partners.
2. Use Case Identification
- Risk Classification: Analyze the intended use cases of the AI system to determine if they fall under the high-risk category as per the EU AI Act’s criteria (Annex III).
3. Internal Training
- Employee Education: Organize training sessions for all employees on the legal and ethical aspects of High-Risk AI Systems.
- Ethical Standards Awareness: Foster a culture of ethics and compliance throughout the organization.
4. Risk Assessment and Management
- Risk Assessment Preparation: Conduct a comprehensive AI Risk Assessment to identify potential risks associated with the AI systems.
- Risk Management System Establishment: Develop and implement a risk management system to continuously monitor and mitigate identified risks.
5. Data Governance and Fundamental Rights Impact
- Data Governance Framework: Create a framework that includes data integrity, security, and privacy protocols.
- Fundamental Rights Impact Assessment: Carry out an assessment to ensure that the AI system adheres to fundamental rights protections.
6. Regulatory Testing
- Testing within Regulatory Framework: Test the AI system within a simulated or controlled environment to ensure it meets all regulatory requirements before market release.
7. Legal Compliance and Ethics Integration
- Legal Compliance Checklist: Develop a checklist that includes all compliance points under the EU AI Act (a minimal sketch of such a checklist appears after this roadmap).
- Collaboration with Legal Teams: Work closely with legal counsel, the internal legal team, and an AI Ethics Committee to address ethical and legal issues such as human supervision and adherence to AI ethical principles.
8. External Communication
- Policy Documentation: Draft and publicize external documents such as AI Policies, a Responsible AI manifesto, Ethical AI Framework, and AI Governance Framework to inform stakeholders and the public.
9. Administrative Compliance
- EU Declaration of Conformity and EU Database Registration: Complete the administrative procedures required for registering the AI system in the relevant EU database to ensure transparency and regulatory compliance.
Follow-Up and Continuous Improvement
- Ongoing Monitoring and Updates: Continuously monitor the AI system's performance and compliance status, updating risk assessments and governance frameworks as necessary.
- Stakeholder Feedback: Regularly collect feedback from users, regulatory bodies, and other stakeholders to improve AI system performance and compliance.
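For teams that prefer to track these steps internally, the following minimal sketch (with hypothetical item names of our own choosing) shows one way the roadmap and the legal compliance checklist from step 7 could be kept as a machine-readable checklist:

```python
# Illustrative sketch only -- hypothetical item names, not an official checklist.
checklist = {
    "risk_management_system": False,        # Art. 9
    "data_governance_framework": False,
    "technical_documentation": False,
    "fundamental_rights_impact_assessment": False,
    "regulatory_testing": False,
    "eu_declaration_of_conformity": False,
    "ce_marking": False,
    "eu_database_registration": False,
}

def outstanding_items(items: dict[str, bool]) -> list[str]:
    """Return the compliance items that still need to be completed."""
    return [name for name, done in items.items() if not done]

checklist["risk_management_system"] = True  # e.g. completed after step 4 of the roadmap
print("Outstanding:", outstanding_items(checklist))
```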
The compliance timeline runs from 24 to 36 months after the AI Act's publication, with possible financial penalties of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
4. Why choose Humane-AI Asia?
Humane-AI Asia ensures a comprehensive, secure, and reliable compliance program for the EU AI Act for our esteemed partners, encompassing:
- Consulting, evaluating, and recommending risk management assessments for high-risk AI systems.
- Advising and assessing compliance related to data governance, ensuring businesses adhere to regulations concerning personal data protection.
- Providing support in consulting, evaluating, and proposing technical documentation related to AI systems.
- Reviewing report documents and record-keeping to meet the requirements of the EU AI Act.
- Conducting inspections, testing, and evaluations related to quality management systems.
- Ensuring that enterprise AI systems adhere to the three core principles: accuracy, reliability, and security.
- Advising on procedures for the Declaration of Conformity, CE Marking, and registration within the European Union system.
Our tailored services ensure that your AI systems not only meet regulatory requirements but also align with core principles of accuracy, reliability, and security. With Humane-AI Asia, your business can confidently navigate the legal landscape, ensuring comprehensive, reliable, and secure AI system deployment.
—
At Humane-AI Asia, we are committed to delivering exceptional value and strategic advantage to our partners, helping them stay ahead of regulatory challenges and ensuring their AI solutions are both innovative and compliant. As your trusted advisor, we enable seamless integration of high-risk AI systems in compliance with EU standards, positioning your enterprise for success in a regulated digital future.
For further consultation or to begin your compliance journey, please Schedule a call and we will reach out soon.
For more information, please visit:
Fanpage: Humane-AI Asia
LinkedIn: https://www.linkedin.com/company/humane-ai-asia/
Website: https://www.humane-ai.asia/
Email: info@humane-ai.asia