As Vietnam’s Law on Artificial Intelligence introduces a formal obligation of “accountability,” many technology companies are asking a practical question: how much explanation is legally sufficient? Must developers disclose source code and detailed algorithms, or is a functional description enough?
The answer lies in the Law’s risk-based regulatory approach.
1. Scope and Accountability Principle
Article 1 and Article 2 confirm that the Law governs research, development, provision, deployment, and use of AI systems in Vietnam. This means accountability is not limited to developers; it extends to providers and deployers as well.
At the principle level, Article 4(3) establishes the general obligation to “ensure accountability for decisions and consequences generated by AI systems,” forming the foundational legal basis for explainability and transparency.
2. Risk-Based Explanation Requirements
Under Article 9, AI systems are classified into high-risk, medium-risk, and low-risk categories, with corresponding inspection and supervision mechanisms specified in Article 10.
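For engineering and governance teams, this tiering usually becomes a field carried through internal tooling. Below is a minimal sketch, in Python, of how an organization might record its own risk-tier determination for audit purposes; the enum values mirror the Law's three tiers, but the `RiskAssessment` structure and its fields are illustrative assumptions, not prescribed by the Law.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """The three tiers established by Article 9."""
    HIGH = "high-risk"
    MEDIUM = "medium-risk"
    LOW = "low-risk"


@dataclass
class RiskAssessment:
    """Illustrative internal record of a risk-tier determination (hypothetical schema)."""
    system_name: str
    tier: RiskTier
    justification: str   # why the organization assigned this tier
    assessed_on: date
    reviewer: str        # person or body accountable for the determination


# Usage: a deployer documents its own classification decision for later inspection.
assessment = RiskAssessment(
    system_name="loan-scoring-v2",
    tier=RiskTier.HIGH,
    justification="Automated scores materially affect access to credit.",
    assessed_on=date(2025, 1, 15),
    reviewer="AI Governance Committee",
)
```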
2.1. High-Risk AI Systems
High-risk systems are subject to the strictest accountability obligations.
Under Article 14(1)(e), providers of high-risk AI systems must explain to competent state authorities:
- The intended purpose of the system;
- The operating principles at a functional description level;
- The main categories of input data;
- Risk management and control measures;
- Other necessary information for inspection and supervision.
Similarly, deployers must provide explanations regarding operation, risk control, and incident handling under Article 14(2)(đ).
Crucially, Article 14(1)(e) and Article 14(4) clearly state that explanations must not require disclosure of source code, detailed algorithms, model parameters, trade secrets, or technological secrets. This provision establishes a legal boundary: accountability does not mean full technical exposure.
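In practice, a provider can satisfy Article 14(1)(e) with a structured dossier containing exactly these functional-level items and nothing from the protected categories. The sketch below, in Python, is illustrative only; the field names are our assumptions, not statutory terminology.

```python
from dataclasses import dataclass


@dataclass
class ExplanationDossier:
    """Functional-level explanation per Article 14(1)(e) (illustrative field names)."""
    intended_purpose: str             # what the system is for
    functional_description: str       # operating principles, described functionally
    input_data_categories: list[str]  # main categories of input data
    risk_controls: list[str]          # risk management and control measures
    supplementary_notes: str = ""     # other information needed for supervision

    # Deliberately absent, per Article 14(4): source code, detailed algorithms,
    # model parameters, trade secrets, and technological secrets.


dossier = ExplanationDossier(
    intended_purpose="Rank incoming job applications for recruiter review.",
    functional_description="Scores applications against role requirements; "
                           "a human recruiter makes every final decision.",
    input_data_categories=["CV text", "stated qualifications", "role description"],
    risk_controls=["quarterly bias audits", "human review of every rejection"],
)
```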
In addition, high-risk systems must undergo conformity assessment before deployment under Article 13, and providers must maintain technical documentation and operational logs pursuant to Article 14(1)(c) to facilitate inspection and post-market supervision.
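The logging duty under Article 14(1)(c) is easiest to meet if operational events are captured in a structured, append-only form from day one. A minimal sketch follows, using Python's standard logging module; the event schema and the JSON-lines format are our assumptions, as the Law does not prescribe one.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only operational log; structured JSON lines simplify later inspection.
logging.basicConfig(filename="ai_operations.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("ai_ops")


def log_decision(system: str, request_id: str, outcome: str, overridden: bool) -> None:
    """Record one system decision as a JSON line (illustrative schema)."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "request_id": request_id,
        "outcome": outcome,
        "human_override": overridden,  # evidence of human oversight
    }))


log_decision("loan-scoring-v2", "req-8841", "referred_to_officer", overridden=False)
```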
2.2. Medium-Risk AI Systems
For medium-risk systems, the obligation is lighter.
According to Article 15(1)(b), providers must provide explanations upon request during inspections or when signs of risk or incidents arise. The scope remains limited to functional descriptions, main input data types, and risk management measures—without disclosing proprietary technical details.
Deployers also bear explanatory duties under Article 15(1)(c).
2.3. Low-Risk AI Systems
Under Article 15(2), accountability obligations arise only when there are signs of legal violations or impacts on lawful rights and interests. This reflects an ex post inspection model and minimizes unnecessary compliance burdens.
3. Transparency and Incident Management
Beyond formal explanations to regulators, transparency obligations under Article 11 (e.g., informing users when interacting with AI, labeling AI-generated content) and incident reporting duties under Article 12 also function as forms of operational accountability.
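On the transparency side, the Article 11 duties translate into user-facing notices and machine-readable labels on generated output. The helper below is a hedged sketch of one way to attach such a label; the metadata keys are illustrative, and organizations should follow whatever labeling format implementing regulations later specify.

```python
def label_ai_content(content: str, model_name: str) -> dict:
    """Wrap AI-generated content with a disclosure label (illustrative format)."""
    return {
        "content": content,
        "ai_generated": True,  # Article 11: label AI-generated content
        "generator": model_name,
        "notice": "This content was generated by an AI system.",
    }


# Usage: the notice is shown to the user alongside the content itself.
labeled = label_ai_content("Draft reply to customer inquiry ...", "support-assistant-v1")
print(labeled["notice"])
```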
By embedding accountability within a risk-based structure, Vietnam’s AI Law clarifies that “sufficient explanation” means functional transparency, documented risk controls, and demonstrable human oversight, without requiring disclosure of source code or trade secrets.
Organizations involved in AI should prioritize early compliance, particularly in risk classification (Articles 9–10), conformity assessment for high-risk systems (Article 13), documentation and audit readiness (Article 14), and conditional explanation duties (Article 15). Proactive alignment will reduce regulatory exposure and strengthen institutional trust.
Connect with Humane-AI Asia to operationalize AI accountability and turn compliance into a foundation for responsible and sustainable innovation.
For more information, visit: https://humane-ai.asia/en
Humane-AI Asia
Tran Vu Ha Minh | 0938801587