Vietnam’s AI Law, effective from 1 March 2026, adopts a risk-based approach to AI governance. Instead of imposing heavy obligations on all AI systems, the law differentiates requirements based on risk levels. In practice, most AI-powered tools currently used by businesses, such as customer-service chatbots, AI-assisted marketing, internal analytics, or workflow automation, are likely to fall into the medium- or low-risk categories, depending on their specific use cases and impact.
This means most businesses are not subject to the strict compliance regime applied to high-risk AI, but they are still expected to take proportionate and responsible steps to ensure lawful and trustworthy use.
🔑 Transparency is the core requirement.
For all AI systems, the law focuses on ensuring that users know when they are interacting with an AI system or consuming AI-generated content. Chatbots, virtual assistants, voice bots, and AI-generated text, images, audio, or video should be clearly disclosed or labeled wherever there is a risk of confusion about authenticity. Transparency is the foundation for maintaining user trust.
According to Article 11.5 of the Law on Artificial Intelligence 2025:
“Providers and deployers are responsible for maintaining transparent information as prescribed in Article 11 throughout the process of providing systems, products, or content to users.”
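In practice, the disclosure obligation can be as simple as labeling AI-generated output before it reaches the user. The sketch below is illustrative only: the label text, its placement, and the function names are assumptions, not wording prescribed by the law.

```python
# Minimal sketch of AI-content disclosure for a chatbot reply.
# The label text and prefix placement are illustrative assumptions,
# not requirements specified in the law.

AI_DISCLOSURE = "[AI-generated] "

def label_reply(reply_text: str) -> str:
    """Prefix a chatbot reply with an AI disclosure label so users
    know they are consuming AI-generated content."""
    if reply_text.startswith(AI_DISCLOSURE):
        return reply_text  # avoid double-labeling already-disclosed text
    return AI_DISCLOSURE + reply_text

print(label_reply("Your order has shipped."))
# → [AI-generated] Your order has shipped.
```

The same idea extends to images, audio, and video, where a visible watermark or metadata tag can play the role of the text prefix.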
🔑 Explainability—without exposing trade secrets.
Businesses must be able to explain their AI systems when requested by regulators, but only at a functional level. This includes the intended purpose of the system, how it generally works, the main types of input data, and basic risk-management measures. The law explicitly avoids requiring disclosure of source code, detailed algorithms, model parameters, or other proprietary information.
🔑 Basic readiness for AI incidents.
Even for low-risk AI, organizations should be able to identify and respond if an issue arises that affects legal rights or legitimate interests. In practice, this can be a simple internal process: a contact point for AI-related feedback, a quick assessment mechanism, and the ability to adjust or suspend the system if needed.
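The three elements above (a contact point, a quick assessment, and the ability to suspend) can be sketched as a very small internal workflow. Everything in this example, including the severity labels and the suspension rule, is an illustrative assumption rather than a procedure defined by the law.

```python
# Illustrative sketch of a lightweight AI incident-response flow:
# record the report, triage its severity, and suspend the system if
# needed. Severity labels ("low"/"high") are assumptions for
# illustration, not categories defined by the law.

incidents = []                      # log of AI-related feedback
system_active = {"chatbot": True}   # simple on/off switch per system

def report_incident(system: str, description: str, severity: str) -> None:
    """Record AI-related feedback and suspend the system on serious issues."""
    incidents.append({
        "system": system,
        "description": description,
        "severity": severity,
    })
    if severity == "high":
        system_active[system] = False  # ability to adjust or suspend if needed

report_incident("chatbot", "gave misleading legal advice", "high")
print(system_active["chatbot"])
# → False
```

Even a spreadsheet plus a shared inbox can implement the same flow; the point is that someone receives the report, assesses it, and can act on it.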
🔑 “Just enough” preparation—no compliance overload.
A clear inventory of AI systems in use, a simple internal risk classification, shared guidelines on AI transparency and labeling, and a lightweight incident-response process are generally sufficient. These measures align with the law’s intent: encouraging innovation while preserving accountability and trust.
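A "just enough" inventory can be a short structured list. The sketch below shows one possible shape; the field names, risk labels, and example systems are hypothetical and not drawn from the law itself.

```python
# Illustrative sketch of a lightweight AI-system inventory with a
# simple internal risk classification. Field names, risk labels, and
# example entries are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str           # intended purpose (functional-level explainability)
    risk_level: str        # internal label: "low" | "medium" | "high"
    labeled_output: bool   # is AI-generated output disclosed to users?
    incident_contact: str  # contact point for AI-related feedback

inventory = [
    AISystem("Support chatbot", "customer Q&A", "medium", True, "ai-ops@example.com"),
    AISystem("Marketing copy assistant", "draft ad text", "low", True, "ai-ops@example.com"),
]

# Quick self-check: flag any system whose output is not disclosed as AI-generated
unlabeled = [s.name for s in inventory if not s.labeled_output]
print(unlabeled)
# → []
```

Keeping this list current also makes it straightforward to answer a regulator's functional-level questions about purpose, inputs, and risk measures.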
🔑 The key message of Vietnam’s AI Law is clear:
Medium- and low-risk AI systems are governed with flexibility—but not without responsibility. Early, proportionate preparation allows businesses to use AI confidently, meet legal expectations, and build long-term trust with customers and partners.
With the legal groundwork now in place, this is the critical window for businesses, startups, and AI developers to prepare, before the implementing regulations are finalized.
On incident reporting, the law further provides: "The Government shall regulate reporting duties and assign responsibilities to relevant agencies, organizations, and individuals, commensurate with the incident's severity and the breadth of impact of the AI system."
📞 Ready to navigate Vietnam's AI Law with confidence?
With the compliance deadline approaching, early preparation will determine who leads and who gets left behind.
Connect with Humane-AI Asia to stay ahead of regulatory change. We help organizations interpret policy, build AI governance frameworks, and turn compliance into a strategic advantage.
For more information about our services, visit: https://humane-ai.asia/en
Humane-AI Asia
Tran Vu Ha Minh | 0938801587
minh.tran@humane-ai.asia | info@humane-ai.asia