Humane-AI Asia

AI Governance Landscape 2025_ Comparing AI Strategies and Regulatory Frameworks of the EU, US, China and ASEAN


    AI GOVERNANCE LANDSCAPE 2025: COMPARING AI STRATEGIES AND REGULATORY FRAMEWORKS OF THE EU, US, CHINA, AND ASEAN

    #Long version

    By 2025, more than 70 countries around the world have adopted AI initiatives or instruments reflecting different approaches and priorities. To better understand the current legal landscape of AI governance, this article compares the strategies and regulations of the European Union (EU), the United States (US), China, and ASEAN.

    1. EU: Harmonized regulation by the EU AI Act

    a) Strategy

    EU AI governance is guided by a risk-based, human-centric approach that emphasizes fundamental rights, safety, and ethics. Its strategy promotes a trustworthy AI ecosystem while pursuing digital sovereignty, aiming to reduce reliance on foreign technology and foster a robust European AI industry.

    b) Legal Framework

    The EU AI Act (2024) is the world’s first comprehensive AI law, categorizing systems by risk:

    • Unacceptable Risk: Bans applications like social scoring or subliminal manipulation.
    • High Risk: Systems in critical infrastructure, hiring, education, or law enforcement face stringent requirements including pre-market conformity assessments, robust risk management, human oversight, high transparency, cybersecurity, and data governance. Post-market monitoring is also mandated.
    • Limited Risk: The AI Act introduces specific transparency requirements for certain applications where there is a clear risk of manipulation (e.g., chatbots or deepfakes). Users must be made aware that they are interacting with a machine.
    • Minimal Risk: Remaining systems whose providers may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.

    General-purpose AI (GPAI) models, including large generative AI models, can be used for a wide variety of tasks. These models may need to comply with additional requirements where systemic risks are inherent to the model.

    2. US: Innovation-driven decentralization

    a) Strategy

    The US prioritizes AI innovation to maintain technological leadership, particularly in geopolitical competition with China. The Trump administration (2025) emphasizes economic nationalism, minimizing regulatory barriers, and empowering the private sector.

    b) Legal Framework

    The US notably does not have a unified, comprehensive AI law, relying instead on a decentralized, adaptive approach:

    • Sector-Specific Rules: Existing federal agencies (e.g., FDA for healthcare, FTC for commerce, DHS for critical infrastructure) apply and adapt their regulations to AI use within their specific domains, focusing on consumer protection, safety, and fairness.
    • Federal Initiatives: The Executive Order on Safe, Secure, and Trustworthy AI (2023) and the Blueprint for an AI Bill of Rights promote bias mitigation through non-binding principles. The NIST AI Risk Management Framework offers a voluntary guide for organizations to manage AI risks.
    • State-Level Efforts: States are enacting their own AI legislation. California’s SB 1047 (2024), which aimed to regulate high-risk AI, was vetoed over innovation concerns.

    Pre-deployment AI approvals are not required (except in sectors like healthcare), nor is AI content labeling or model registration mandated.

    3. China: State-centric control

    a) Strategy

    China views AI as critical to achieving global technological dominance under its "Made in China 2025" plan. Its strategy is characterized by a unique blend of top-down state control, massive innovation subsidies, and the widespread use of AI for social governance and political stability. The goal is to create a self-sufficient AI ecosystem.

    b) Legal Framework

    China employs a proactive, sector-specific regulatory approach rather than a single overarching AI law, often setting precedents for emerging AI uses:

    • AI Content Regulation: AI-generated content must be clearly labeled, and certain powerful generative models require public security assessments and algorithm registration.
    • Generative AI Rules (2023): Specifically address misinformation, intellectual property, and content adherence to "socialist core values."
    • Data Security Law and Personal Information Protection Law: Provide robust, overarching legal frameworks for data governance, significantly influencing AI data collection, storage, and processing.
    • AI Safety Governance Framework (V1.0): China’s AI governance promotes a people-centered, AI-for-good approach with risk-based safeguards and global cooperation. Its centralized control supports rapid development but raises concerns over surveillance and state dominance.

    4. ASEAN: Regional harmonization

    a) Strategy

    ASEAN aims to become a leading digital community by 2025. Its AI strategy emphasizes regional harmonization, readiness, and inclusive innovation while adhering to the "ASEAN Way" of non-interference and dialogue. The focus is on digital transformation and enhancing human capital.

    b) Legal Framework

    ASEAN relies on non-binding guidelines:

    • ASEAN AI Governance and Ethics Guide (2024): Outlines seven key principles (transparency, fairness, safety, accountability, etc.) to support innovation while addressing ethical concerns. This guide serves as a reference for member states.
    • Education and Reskilling Frameworks: Promote reskilling initiatives to address AI-driven labor shifts and prepare the workforce for future AI roles.
    • ASEAN Working Group on AI: ASEAN aims to bridge AI readiness gaps among its diverse members, from advanced (e.g., Singapore) to developing (e.g., Laos). While there’s no unified ASEAN AI Act, countries like Singapore have mature frameworks (e.g., Model AI Governance Framework), and others are drafting national laws.

    5. Conclusion: Dynamic and fragmented approaches

    In 2025, AI governance reflects regional priorities: the EU’s rights-based regulation, the US’s innovation-driven approach, China’s centralized control, and ASEAN’s flexible coordination. This fragmented landscape calls for strong international cooperation to tackle cross-border issues like ethics, safety, and data privacy. Harmonizing standards while respecting diverse values is essential for responsible global AI development.

    For further information, please contact us via email.

    (Call-to-Action Section)

    Author: Vo Thi Ngoc Huong - Consultant

    Date: June 2025.

     

     

    #Short version

    AI GOVERNANCE LANDSCAPE 2025: COMPARING AI STRATEGIES AND REGULATORY FRAMEWORKS OF THE EU, US, CHINA, AND ASEAN

    By 2025, global AI governance reflects divergent priorities as the European Union (EU), United States (US), China, and ASEAN shape their AI strategies and regulatory frameworks. This article compares their approaches based on strategic goals and legal structures.

    1. EU: Rights-Based Regulation

    The EU leads with a risk-based, human-centric approach that prioritizes safety, ethics, and fundamental rights. The EU AI Act (2024) is the world’s first comprehensive AI law. It classifies AI systems into four risk categories:

    • Unacceptable Risk: Bans on practices like social scoring.
    • High Risk: Systems in critical areas (e.g., hiring, law enforcement) must meet strict requirements, including risk management, human oversight, and post-market monitoring.
    • Limited Risk: Requires transparency for chatbots and deepfakes.
    • Minimal Risk: Remaining systems stay under existing laws; general-purpose AI models that pose systemic risks face additional requirements.

    High-risk AI must be publicly registered and AI-generated content labeled. The EU is also drafting workplace AI rules. The “Brussels Effect” aims to shape global norms, though critics warn of slowed innovation.

    2. US: Innovation-First Decentralization

    The US prioritizes innovation and private-sector leadership. It lacks a unified AI law, relying on a decentralized framework:

    • Sector-Specific Regulation: Agencies like the FDA and FTC adapt existing laws to AI use.
    • Federal Guidance: The 2023 Executive Order on Trustworthy AI and the AI Bill of Rights offer non-binding principles. The NIST AI Risk Management Framework is a voluntary guide focused on governance and risk control.
    • State-Level Efforts: States like California have proposed AI laws, but major bills face opposition over innovation concerns.

    No federal rules require AI content labeling or pre-deployment approval, except in limited sectors.

    3. China: Centralized Governance

    China views AI as essential for technological self-sufficiency and global leadership. Its governance combines rapid innovation with tight control:

    • Content Regulation: AI-generated content must be labeled; certain models need security reviews and algorithm registration.
    • Generative AI Rules (2023): Address misinformation and enforce alignment with state values.
    • Data Laws: The Data Security Law and Personal Information Protection Law provide comprehensive data governance.
    • AI Safety Governance Framework: Promotes a people-centered, risk-based approach and international cooperation, though concerns persist over surveillance and state dominance.

    4. ASEAN: Regional Harmonization

    ASEAN promotes inclusive innovation and regional coordination while respecting national sovereignty:

    • Strategy: Aims to bridge AI readiness gaps, from advanced members like Singapore to developing ones like Laos.
    • Governance: The ASEAN AI Governance and Ethics Guide (2024) outlines principles like transparency and safety. Countries develop national laws; Singapore leads with its Model AI Governance Framework.
    • Capacity Building: ASEAN supports workforce reskilling to manage AI-induced labor shifts.

    5. Conclusion

    By 2025, AI governance remains regionally fragmented: the EU emphasizes rights, the US promotes innovation, China exercises central control, and ASEAN focuses on harmonization. Addressing global challenges like data privacy and AI safety requires international cooperation. Harmonizing standards while respecting diverse systems is vital for ethical and sustainable AI development.

    For further information, please contact us via email.

    (Call-to-Action Section)

    Author: Vo Thi Ngoc Huong - Consultant

    Date: June 2025.

     

     

    #Icon version

    🌏 AI 2025: THE EU, US, CHINA, AND ASEAN RACE TO “REIN IN” AI: FALL BEHIND AND YOU LOSE THE GAME!

    It’s 2025, and AI is no longer “something out of science fiction.” With power that “looks harmless but is unimaginably dangerous,” major regions like the EU, the US, China, and ASEAN have all had to set their own “rules of the game” for AI. Each has its own style, like a friend group where everyone has a different vibe. Let’s take a look at how they are “reining in” AI! 🕵️‍♀️

    1. EU: As proper as your crush: hard to approach, but trustworthy!

    The EU has chosen the most “serious” approach on the planet: AI is divided into four risk levels, from “absolutely not” (banned) to “usable, but keep watch.” Dangerous AI such as citizen scoring or behavioral manipulation gets “voted off the island.” Applications in hiring, healthcare, and security must pass strict assessments under human oversight. 👩‍⚖️

    🤖 Even chatbots and deepfakes must declare, “Hi folks, I’m an AI.” The EU is building laws so that AI serves people rather than controls them. Still, some say that being too strict risks “falling behind the meta” in the innovation race.

    2. US: “I’m not forcing anyone, as long as you keep innovating”

    The US is like the free-style classmate: there is no overarching AI law, but each sector has its own rules. AI in healthcare is watched by the FDA; AI in commerce gets the FTC stepping in. 🏥💼

    The government issues soft guidance such as the “AI Bill of Rights” and the “NIST AI Risk Framework”: gentle suggestions along the lines of “great if you follow them, no penalty if you don’t.”

    Some states, like California, have tried passing “tough” AI laws, only to see them rejected over fears of hurting innovation. The US clearly wants AI to develop fast, compete with China, and trusts the dynamism of the market more than policy shackles.

    3. China: “Nurture it, but watch it closely”

    China sees AI as a “strategic weapon” for its Made in China 2025 ambitions. The rules here are tight, yet development is extremely fast. Any AI that generates content must label it clearly. Powerful models must pass security reviews and register their algorithms.

    Alongside this is an extremely strict system of data laws: protecting information, but also giving the state powerful levers of control. China says it is pursuing “AI for the people,” but the world still worries: “AI for the people, or for the government? Not entirely clear yet…” 👁️

    4. ASEAN: “Every house has its own story, but they’re trying to join hands”

    ASEAN is as lively as a multinational classroom. Singapore is super pro, with a top-tier AI governance framework. Countries like Laos and Cambodia are just getting acquainted with AI. So ASEAN issued a common set of guidelines, no coercion, just suggestions, with principles like transparency, fairness, and safety…

    💪 On the plus side, ASEAN is investing heavily in education and reskilling workers so that AI doesn’t “steal their jobs.” Keeping their own identities while learning from one another: that is how ASEAN is pushing itself forward in the digital world.

    🔮 Wrap-up: When the whole world both loves and fears AI

    Each region has its own “taste” in AI governance: the EU is meticulous, the US easygoing, China controlling, ASEAN flexible. What they share is this: nobody dares let AI drift. For AI to develop in the right direction, the world needs to join hands, not just sharing technology but also agreeing on ethics, privacy, and safety.

    AI isn’t bad, but without proper governance, “going big” can turn into “going bust”! 😅

    For further information, please contact us via email.

    (Call-to-Action Section)

    Author: Vo Thi Ngoc Huong - Consultant

    Date: June 2025.