Humane-AI Asia

What Does It Mean To Classify AI Systems By Risk: High - Medium - Low?

Index

    Main Contents:

    1. Vietnamese AI classification system
    2. Obligations of providers
    3. Obligations of deployers
    4. Purpose of the risk-based classification framework
    5. Regulatory responses to noncompliance
    6. Implications for enterprises in Vietnam

    1. Vietnamese AI classification system

    Vietnam has adopted a "classification-first" regulatory model for Artificial Intelligence, built on a risk-based classification system with three tiers: High, Medium, and Low. The framework aims for proportionate regulation: higher-risk applications undergo stricter up-front compliance assessments, while lower-risk systems may rely on more flexible self-monitoring. By scaling oversight to potential harm, the Vietnam AI Law seeks to balance innovation with the protection of national security, public safety, and fundamental rights.

    The framework sorts AI systems into three risk tiers, plus a category of prohibited systems, based on the following criteria:

    • High-Risk Systems: Applications that pose serious threats to life, health, national security, or fundamental rights. These systems face the strictest controls, including mandatory registration and compliance assessments.
    • Medium-Risk Systems: Systems that interact with humans or create content that could mislead or confuse users. The main goal here is transparency, which requires clear labeling of AI-generated outputs.
    • Low-Risk Systems: All other systems with minimal impact, which are encouraged to follow voluntary ethics codes and industry standards.
    • Prohibited Systems: Systems that engage in acts strictly prohibited by law, including falsifying or impersonating in order to manipulate people's perceptions and behavior; exploiting the weaknesses of vulnerable groups; or creating and disseminating false content that seriously harms people's legitimate rights and interests, national security, or social order and safety.
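
    To see how an organization might operationalize this taxonomy internally, here is a minimal Python sketch mapping each tier to the headline obligation described above. The enum and the obligation strings are our own illustrative names, not terms defined in the draft law.

        from enum import Enum

        class RiskTier(Enum):
            """Illustrative model of the risk tiers in Vietnam's draft AI framework."""
            PROHIBITED = "prohibited"  # banned outright by law
            HIGH = "high"              # serious threats to life, health, security, or rights
            MEDIUM = "medium"          # human-facing or content-generating systems
            LOW = "low"                # everything else: minimal impact

        # Headline obligation per tier, paraphrasing the bullets above.
        PRIMARY_OBLIGATION = {
            RiskTier.PROHIBITED: "must not be developed or deployed",
            RiskTier.HIGH: "mandatory registration and compliance assessment",
            RiskTier.MEDIUM: "transparency: clearly label AI-generated outputs",
            RiskTier.LOW: "voluntary ethics codes and industry standards",
        }

        def primary_obligation(tier: RiskTier) -> str:
            """Look up the headline obligation attached to a risk tier."""
            return PRIMARY_OBLIGATION[tier]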

    2. Obligations of providers

    In the Vietnamese regulatory framework, accountability for AI safety is spread across the supply chain using a structured "classification-inheritance" model that balances innovation and oversight. Providers shall:

    • Self-classify their AI systems based on potential impact;
    • Prepare formal documentation; and
    • Submit a mandatory notification to the Ministry of Science and Technology (MST) via the national electronic portal for any system classified as medium- or high-risk.

    This ensures government visibility from the earliest development stages.
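
    As a rough sketch only, continuing the RiskTier enum from the previous example and using hypothetical field names (the official portal schema has not been published), a provider's internal pre-filing record might look like this:

        from dataclasses import dataclass

        @dataclass
        class ProviderFiling:
            """Hypothetical record a provider assembles before notifying the MST."""
            system_name: str
            self_classified_tier: RiskTier  # outcome of the provider's self-classification
            documentation_ref: str          # pointer to the formal technical documentation

            def notification_required(self) -> bool:
                # Per the list above, only medium- and high-risk systems trigger
                # the mandatory notification via the national electronic portal.
                return self.self_classified_tier in (RiskTier.HIGH, RiskTier.MEDIUM)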

    3. Obligations of deployers

    Once an AI system is put into service, deployers shall:

    • Inherit the classification results from the providers; and
    • Assume full legal responsibility for ensuring the system's safety, performance standards, and operational integrity throughout its lifecycle.

    If a deployer changes the system's context of use, combines it with other systems, or modifies its functionality in ways that increase risk, for example by expanding user access from hundreds to millions of users or by applying the system in a sensitive domain, it must work with the original provider to initiate a formal reclassification and update the regulatory filings.
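
    A deployer could encode these triggers as a simple internal guard. The flags and the tenfold user-growth threshold below are illustrative readings of the examples in the text, not legal tests:

        def reclassification_needed(context_changed: bool,
                                    combined_with_other_systems: bool,
                                    prior_users: int,
                                    current_users: int,
                                    sensitive_domain: bool) -> bool:
            """Return True when a deployment change should trigger the formal
            reclassification process with the original provider."""
            # Arbitrary stand-in for the "hundreds to millions" example in the text.
            large_scale_expansion = current_users >= 10 * max(prior_users, 1)
            return (context_changed
                    or combined_with_other_systems
                    or large_scale_expansion
                    or sensitive_domain)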

    Low-risk systems benefit from a lighter regime focused on voluntary adherence to classification and information-disclosure requirements. This tiered accountability system keeps high- and medium-risk AI applications under ongoing, documented oversight, with well-defined responsibilities from development through deployment and operation.

    4. Purpose of the risk-based classification framework

    Risk classification is the main tool for effective state management, enabling a shift from reactive oversight to a data-driven, preventive model that anticipates and reduces potential harm before it materializes. In this regulatory setup, classification acts as a "strategic filter" that optimizes the use of limited state resources and expertise. It allows agencies to impose strict, regular inspections on high-risk systems that affect national security, critical infrastructure, or human safety, while adopting a more efficient monitoring approach for medium-risk systems, such as report-based monitoring, statistical sampling inspections, and accredited third-party assessments, to manage the risks of user manipulation, misleading content, and deceptive interactions. This avoids the administrative and financial burden of constant audits that could stifle legitimate innovation. At the same time, a "light-touch" stance toward low-risk innovations prevents unnecessary administrative hurdles, compliance costs, and delays that could deter startups and SMEs from developing beneficial AI applications. By tying inspection intensity, documentation requirements, and enforcement mechanisms to a system's declared risk profile, the law creates an enforceable accountability system with clear consequences, supporting the transition from passive observation to active regulatory governance.

    5. Regulatory responses to noncompliance

    If monitoring, whether through regular inspections, incident investigations, or third-party audits, reveals discrepancies, operational deviations from declared specifications, or false safety statements, the competent authority can legally intervene by requiring a formal reclassification with updated risk assessments. It may also demand additional technical documentation and test results, impose corrective action plans with compliance deadlines, or order the temporary suspension of system operations to protect public safety, maintain legal integrity, and prevent irreversible harm pending the outcome of the investigation.

    6. Implications for enterprises in Vietnam

    For businesses in Vietnam, this approach means moving away from reacting to AI issues after they arise and toward a proactive, lifecycle-based approach to compliance, in which legal requirements are built into system design, development, and deployment from the start. In practice, this requires tight integration of internal processes, such as risk assessment and documentation, as well as effective cooperation mechanisms between AI developers, implementation teams, and regulatory bodies, so that the AI systems developed and operated in Vietnam are safe, secure, protective of confidentiality, and consistent with the expectations of the Vietnamese regulatory framework.

    Ready to navigate Vietnam’s upcoming AI Law with confidence?

    With legislation on the horizon, early preparation will determine who leads - and who gets left behind.

    👉 Connect with Humane-AI Asia to stay ahead of regulatory change. We help organizations interpret policy, build AI governance frameworks, and turn compliance into a strategic advantage. 

    For more information about our services, visit: https://humane-ai.asia/en 

    Humane-AI Asia 

    Tran Vu Ha Minh | 0938801587  

    minh.tran@humane-ai.asia | info@humane-ai.asia