Humane-AI Asia

National AI Ethics Framework: How Should Businesses Apply It in Practice?


    The official adoption of Vietnam’s Law on Artificial Intelligence marks a significant shift: AI is no longer merely a technological matter, but a governance, ethical, and legal responsibility issue. In this context, Article 26 of the Law on Artificial Intelligence and Circular No. 05/2026/TT-BKHCN formally establish the National AI Ethics Framework, setting binding expectations that AI use must ensure safety and reliability, respect human rights and dignity, and prevent harm to life, health, honour, and mental wellbeing.

    From a business perspective, how should this ethical framework be translated into concrete and day-to-day practices?

    💡 The National AI Ethics Framework Is Not a “Declaration” — It Is a Governance Requirement

    The framework goes beyond high-level value statements and creates a clear expectation that businesses must:

    • Integrate AI ethics principles from the design, development, procurement, and deployment stages of AI systems.
    • Proactively prevent risks, rather than reacting only after incidents occur.
    • Remain accountable for the impacts of AI on individuals, society, and the environment.

    In practical terms, this means AI ethics should sit on the same governance map as data protection, information security, and ESG: overseen at Board or executive level, embedded into risk management and internal controls, and regularly reported on.

    💡 Principle 1: Safety and Reliability

    The National AI Ethics Framework requires that AI systems be safe, reliable, and designed to prevent harm to individuals and society.

    What this means for businesses:

    • Build “safety by design” into AI projects from the start, including scenario analysis for potential harms to life, health, reputation, and mental wellbeing.
    • Establish clear quality criteria for data, models, and outputs, and implement testing and validation before deployment, especially for higher-risk use cases.
    • Define human oversight and intervention points in critical processes so that humans can monitor, pause, or override AI when needed.
    • Put in place incident reporting and response procedures for AI-related issues, including a way to capture user complaints and operational incidents, and learn from them.

    💡 Principle 2: Human Rights, Fairness and Transparency

    The National AI Ethics Framework emphasizes fairness and non-discrimination. For businesses, key operational measures include:

    • Assessing bias risks in training data and operational datasets.
    • Establishing periodic review mechanisms to evaluate AI outputs across different user groups.
    • Recording and addressing complaints or feedback from individuals affected by AI-driven decisions or recommendations.
    • Demonstrating that reasonable measures have been taken to identify and mitigate bias risks; businesses are not required to eliminate all risks, but they must be able to evidence their mitigation efforts.
    • Recognising and reducing harms to vulnerable groups such as children, the elderly, and persons with disabilities, and being able to show, through documentation and testing logs, how bias risks for these groups have been assessed and addressed.
    • Informing users when they are interacting with an AI system or when AI plays a significant role in decisions that affect them.
    • Being able to explain, at a functional level, what the AI system is used for, how it generally works, and what its key risks and limitations are.
    • Maintaining documentation and records sufficient to support internal reviews, audits, or regulatory inspections, without disclosing trade secrets or core algorithms.

    💡 Principle 3: Social Benefit and Sustainable Development

    This expands AI ethics beyond individual rights into societal and environmental impacts. For businesses, this implies:

    • Evaluating whether an AI use case genuinely creates social or customer value, or whether it mainly optimises short-term metrics at the expense of trust or social cohesion.
    • Considering impacts on vulnerable groups such as children, the elderly, and persons with disabilities, and designing safeguards accordingly.
    • Taking into account environmental impacts, particularly for resource-intensive AI models, and favouring energy-efficient architectures or infrastructure where feasible.

    This requires cross-functional collaboration between technology, business, legal, risk, and sustainability teams, rather than treating AI solely as an IT matter.

    💡 Principle 4: Innovation and Social Responsibility

    The Framework does not aim to slow innovation; it encourages responsible experimentation with clear accountability. AI should be used to support human flourishing, not to avoid human responsibility. This approach includes:

    • Clearly defining who is accountable for AI-supported decisions.
    • Training employees to understand both the capabilities and limitations of AI, and to question AI outputs rather than relying on them blindly.
    • Putting in place a mechanism for consulting relevant parties (e.g. the competent authorities) where necessary.

    When implemented in this way, AI ethics becomes a competitive advantage: it reduces legal and operational risk, and builds trust with customers, partners, and regulators.

    💡 Suggested Implementation Roadmap for Businesses

    To effectively apply the National AI Ethics Framework, businesses may consider the following steps:

    • Develop or update an internal AI Ethics Policy aligned with the framework.
    • Embed AI ethics considerations into risk assessment, procurement, and deployment processes.
    • Issue templates and checklists addressing transparency, fairness, and AI safety.
    • Conduct AI ethics awareness training for relevant stakeholders, including leadership, technical teams, operations, and legal functions.

    Most importantly, AI ethics should be treated as a continuous process, not a one-time policy document.

    📌 Conclusion

    The National AI Ethics Framework is not intended to hinder innovation, but to establish a trustworthy foundation for responsible and sustainable AI adoption. Businesses that proactively translate ethical principles into practical governance mechanisms will not only meet legal expectations, but also strengthen credibility, competitiveness, and market trust.

    👉 Connect with Humane-AI Asia to stay ahead of regulatory change. We help organizations interpret policy, build AI governance frameworks, and turn compliance into a strategic advantage.

    For more information about our services, visit: https://humane-ai.asia/en

    Humane-AI Asia

    Tran Vu Ha Minh | 0938801587  

    minh.tran@humane-ai.asia | info@humane-ai.asia