Humane-AI Asia

Common pitfalls in the self-classification of AI systems by enterprises

The new Artificial Intelligence (AI) Law in Vietnam requires businesses to self-classify AI systems by risk level (high, medium, or low) before putting them into operation. This classification is mandatory and determines every other legal duty: compliance procedures, notification obligations, and even audit readiness. Yet interpreting this requirement in practice remains a challenge for businesses. Here are the most common (and costly) mistakes we are seeing:

🔑 Confusing AI risk level with product functionality: does a "simple function" mean low risk?
Many businesses focus only on the technical functions or "appealing" applications of AI when classifying risk, for example treating a sales-support chatbot as "low-risk" because it does not directly affect human life. Under the law, classification must be based on the level of impact on users' rights, safety, and security, and on broader social effects, not merely technical features. This mistake can easily lead to misclassification: the risk of misleading or inaccurate information reaching users (a medium risk) is overlooked simply because the system appears technically simple.

🔑 Weak reasoning behind classifications
For AI systems classified as medium- or high-risk, the law requires a comprehensive classification dossier, including supporting evidence and documentation reflecting management decisions on risk levels. Many businesses currently record the classification in a single line of internal documents, with no evidence of the reasoning behind it. This gap exposes them to reclassification requests, temporary suspension, and additional measures as provided by law.

🔑 "Guessing" classifications without guidance
The law allows businesses to submit documentation to seek guidance from the competent authority (the Ministry of Science and Technology) when the risk level of an AI system cannot be clearly determined. A major mistake is "guessing" the classification to save time, which may result in incorrect notification obligations, administrative penalties, or a later requirement to reclassify.

🔑 Assuming classification is solely the AI provider's responsibility
Classification is not only the responsibility of the provider named in the documentation. Deployers must also reassess the classification if they customize, integrate, upgrade, or change the intended use of the AI system. Relying entirely on the initial classification without reassessing it during deployment is a likely regulatory blind spot.

🔑 Overlooking notification and other compliance obligations
A low-risk classification does not mean that no obligations apply. Businesses must still prepare documentation and may be required to disclose information to enhance transparency under the law.

📍 Conclusion
Building an accurate and defensible classification process isn't just about avoiding penalties; it's about earning trust and staying ahead as AI governance becomes the backbone of competitive advantage. To avoid these common mistakes, businesses should:
• Analyze actual risks based on legal criteria, not just functionality.
• Build robust classification documentation with clear reasoning and transparent data.
• Seek regulatory guidance when uncertain.
• Reassess classifications whenever AI systems change.
• Comply with all obligations attached to the classification (notification, transparency, risk management, etc.).

📌 Ready to navigate Vietnam's upcoming AI Law with confidence? With legislation on the horizon, early preparation will determine who leads and who gets left behind. Connect with Humane-AI Asia to stay ahead of regulatory change. We help organizations interpret policy, build AI governance frameworks, and turn compliance into a strategic advantage.
For more information about our services, visit: https://humane-ai.asia/en
Tran Vu Ha Minh | 0938801587
minh.tran@humane-ai.asia | info@humane-ai.asia