As artificial intelligence (AI) continues to advance, AI systems are increasingly capable of generating decisions within seconds. However, regardless of how fast or efficient these decisions may be, the fundamental principle remains: AI must always operate under human control.
- Legal Basis: Article 27 and Article 4(2) of the AI Law
Article 4(2) of Vietnam’s AI Law establishes a core principle: AI is designed to serve humans, not to replace human authority or responsibility. Accordingly, the principle requires maintaining human control and the ability to intervene in all decisions and actions carried out by AI systems. It also emphasizes the need for human oversight in both the development and operation of such systems.
This principle is given concrete form in Article 27 of the AI Law, which introduces ethical-responsibility and impact-assessment obligations for the application of AI, particularly in public administration and the provision of public services, namely:
- AI systems must not replace the legal authority or responsibility of human decision-makers;
- Decision-makers remain responsible for reviewing and using outputs generated by AI systems;
- For high-risk AI systems or those with significant impacts on human rights, social fairness, or public interests, operating entities must conduct impact assessments, including risk identification, control measures, and mechanisms to ensure effective human oversight and intervention.
2. The “Human in the Loop” Principle
To operationalize the “human in the loop” principle, the AI Law adopts a risk-based approach, imposing different obligations on stakeholders across the AI lifecycle depending on the level of risk involved.
For high-risk AI systems, these obligations are particularly stringent, namely:
- Providers are required to establish and maintain risk management frameworks, which must be continuously reviewed in light of significant changes or newly identified risks. They must also design AI systems in a way that ensures effective human oversight, including the possibility of timely human intervention.
- Deployers are responsible for ensuring data security and for maintaining the possibility of human intervention during the system’s operation.
- Users, in turn, must comply with operational procedures and technical guidelines, refrain from unlawfully altering system functionalities, and promptly report any incidents to the deployer.
For medium- and low-risk AI systems, the law adopts a more proportionate approach. While providers and deployers are still required to manage risks and ensure system safety, the obligations imposed are less stringent.
Overall, this framework reflects a clear regulatory logic: the higher the level of risk, the stronger the requirement for human control.
3. Responsibility of Decision-Makers
A key implication of this principle is that responsibility remains with humans, not with AI systems. As Article 4(2) and Article 14 of the AI Law make clear, an AI system shall not assume the responsibilities of humans. For instance, a doctor using AI-assisted diagnostic tools cannot transfer responsibility to the system; the final medical judgment still rests with the doctor.
Importantly, the AI Law does not prohibit the use of AI in decision-making processes. AI may assist human decision-makers by providing insights or recommendations, but it holds no independent decision-making authority. Responsibility, as well as final decision-making power, continues to lie with human individuals or organisations.
4. Implications for enterprises in Vietnam
Overall, to implement this principle effectively, businesses, particularly those developing and deploying AI systems, need to consider several key aspects carefully:
- Fully autonomous systems without human oversight are not legally viable in many contexts;
- Businesses must establish clear human oversight mechanisms, such as manual approval processes or emergency intervention protocols;
- Internal accountability for AI-assisted decisions must be clearly allocated;
- In conjunction with explainability requirements, companies must ensure that humans have sufficient information to understand and effectively control AI systems.
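The oversight mechanisms listed above (manual approval processes and clearly allocated accountability) can be made concrete with a small code sketch. The example below is purely illustrative and not drawn from the AI Law or any standard; the `HumanInTheLoopGate` class, record types, and field names are all hypothetical. It shows one simple pattern: an AI-generated recommendation takes effect only after a named human reviewer explicitly approves or rejects it, and every decision is logged so that internal accountability can be traced back to a person.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class AIRecommendation:
    """An output produced by an AI system, awaiting human review."""
    subject: str       # e.g. a case or patient identifier (hypothetical)
    suggestion: str    # the AI system's proposed action
    risk_level: str    # "low", "medium", or "high" (illustrative labels)


@dataclass
class ReviewRecord:
    """One human decision about one AI recommendation, kept for audit."""
    recommendation: AIRecommendation
    reviewer: str      # the accountable human decision-maker
    decision: Decision
    rationale: str     # why the reviewer approved or rejected the output


class HumanInTheLoopGate:
    """Routes every AI output through a human reviewer before it takes effect.

    The AI system never acts on its own: only a ReviewRecord signed by a
    named reviewer authorises the suggested action, and all records are
    retained in an audit log to support internal accountability.
    """

    def __init__(self) -> None:
        self.audit_log: list[ReviewRecord] = []

    def submit(self, rec: AIRecommendation, reviewer: str,
               approve: bool, rationale: str) -> ReviewRecord:
        decision = Decision.APPROVED if approve else Decision.REJECTED
        record = ReviewRecord(rec, reviewer, decision, rationale)
        self.audit_log.append(record)  # every decision is traceable
        return record
```

A deployer could extend this sketch with emergency-intervention hooks (e.g. a method that suspends all pending recommendations) or stricter routing for high-risk outputs; the essential design choice is that approval authority and the audit trail belong to identifiable humans, not to the system.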
More broadly, compliance with the “human in the loop” principle is not only a legal obligation but also a crucial factor in building trust, particularly in sensitive sectors such as finance, healthcare, and public administration.
- Connect with Humane-AI Asia to operationalize AI accountability and turn compliance into a foundation for responsible and sustainable innovation.
For more information about our services, visit: https://humane-ai.asia/en
Humane-AI Asia
Tran Vu Ha Minh | 0938801587
minh.tran@humane-ai.asia | info@humane-ai.asia