Humane-AI Asia

Introduction to the AI Lifecycle - AI Actors_May.2025

Index
    Ever wonder how AI goes from a cool idea to something that actually runs your Spotify or screens job applications?

    INTRODUCTION TO THE AI LIFECYCLE – AI ACTORS

    #longversion

    🌀 Ever wonder how AI goes from a cool idea to something that actually runs your Spotify or screens job applications?

    Spoiler alert: it doesn’t happen overnight, and it’s not magic. 🧙‍♂️✨
     Just like your favorite app updates or that chaotic group project, AI systems go through a whole life cycle — from building and testing to being used in the wild and constantly tweaked.

    This process is called the AI Lifecycle — and trust us, it’s more than just coding and data. It’s about people, choices, risks, and keeping things ethical while making them smart. 🧠⚙️

    Let’s break it down. 👇

    🤖 What is the AI Lifecycle?

    AI doesn’t just exist. It goes through a whole life cycle, kinda like a TikTok trend:

    🔹 Starts with a real-world problem
     🔹 Moves through design, training, and development
     🔹 Gets deployed into real systems
     🔹 Then used, monitored, and updated as needed

    => The AI lifecycle is a series of stages that an AI system goes through — from its initial design to real-world deployment and monitoring. This process is not linear, but rather cyclical, involving continuous feedback and improvement across stages such as data collection, model development, deployment, and monitoring.

    💡 It’s not linear — it’s a loop with constant feedback.
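    For the code-curious: here's a toy Python sketch of that feedback loop. Every name and number in it is made up for illustration (real MLOps pipelines use dedicated tooling, not a dict and an if-statement), but it shows the key idea: monitoring loops back into retraining instead of the process ending at deployment.

```python
# Toy sketch of the AI lifecycle feedback loop -- illustrative only.

def evaluate(model, live_data):
    """Pretend to measure live performance; here it's just stored on the model."""
    return model["accuracy"]

def retrain(model, new_data):
    """Pretend to retrain: nudge accuracy back up and bump the version."""
    return {"version": model["version"] + 1,
            "accuracy": min(1.0, model["accuracy"] + 0.1)}

def lifecycle_step(model, live_data, threshold=0.8):
    """One monitoring pass: if performance drops, loop back to development."""
    if evaluate(model, live_data) < threshold:
        return retrain(model, live_data)  # feedback: monitoring -> retraining
    return model                          # still healthy: keep serving as-is

model = {"version": 1, "accuracy": 0.75}
model = lifecycle_step(model, live_data=[])
print(model)  # accuracy nudged up, version bumped
```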

    👥 Who’s in the AI Lifecycle?

    Real talk: AI doesn’t run itself. Here’s who’s behind the scenes...

    1. 🧑‍💻 AI Provider (The Builders)

    Who?
     Big tech companies, AI labs, research teams, startups.

    What they do:

    • Build the AI models
    • Collect and clean data
    • Train the algorithms
    • Provide tools & APIs for others to use

    Must ensure:
     ✔️ Accuracy
     ✔️ Transparency
     ✔️ No shady data bias or misuse

    📌 Example: OpenAI creates GPT → offers an API for others to integrate.
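    To make the provider/deployer split concrete, here's a minimal Python sketch. `ProviderAPI` and its `complete` method are invented stand-ins, not OpenAI's actual client; the point is just the division of labor: the provider hosts the model, the deployer wraps it with their own checks before it reaches users.

```python
# Invented names for illustration -- not a real vendor SDK.

class ProviderAPI:
    """Stand-in for a provider's hosted model endpoint (e.g. an LLM API)."""
    def complete(self, prompt: str) -> str:
        return f"[model answer to: {prompt}]"

class DeployerService:
    """The deployer integrates the provider's API into their own product."""
    def __init__(self, api: ProviderAPI):
        self.api = api

    def answer(self, question: str) -> str:
        if not question.strip():            # deployer-side input validation
            return "Please ask a question."
        return self.api.complete(question)  # delegate to the provider's model

svc = DeployerService(ProviderAPI())
print(svc.answer("What is the AI lifecycle?"))
```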

    2. 🏗️ AI Deployer (The Implementers)

    Who?
     Banks, hospitals, e-commerce platforms — anyone using AI in the real world.

    What they do:

    • Customize AI to their needs
    • Test & monitor performance
    • Make sure it follows the law & ethics

    🎯 They don’t build AI, but they’re responsible for how it’s used.

    📌 Example: A Vietnamese bank uses AI to screen loan applications & detect fraud.

    3. 🧑‍💼 AI User (The Frontline Crew)

    Who?
     Employees, customers, pros who interact directly with AI tools.

    What they do:

    • Use AI in their daily tasks
    • Understand AI’s limitations
    • Give feedback to improve the system

    📌 Example: A support rep uses a chatbot to answer FAQs quickly and save time.

    4. 😬 Impacted Party (The Affected Ones)

    Who?
     People affected by AI decisions — without even using AI.

    Examples:

    • Rejected from a job by AI
    • Watched by facial recognition
    • Denied a loan due to an AI score

    They deserve:
     ✔️ Transparency
     ✔️ Explanation
     ✔️ Right to challenge decisions

    📌 Example: An applicant gets rejected by AI with zero explanation. Not cool.
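    What could "explanation" look like in practice? Here's a toy Python screener (the fields and thresholds are invented for illustration) that returns a human-readable reason with every decision — the bare minimum impacted parties deserve from an automated system.

```python
# Toy screener that never returns a bare yes/no -- every decision
# carries a reason. Fields and thresholds are made up for illustration.

def screen(application: dict) -> tuple[bool, str]:
    if application.get("income", 0) < 30_000:
        return False, "Income below the 30,000 minimum for this product."
    if application.get("missed_payments", 0) > 2:
        return False, "More than two missed payments on record."
    return True, "Meets income and payment-history criteria."

approved, reason = screen({"income": 25_000, "missed_payments": 0})
print(approved, "-", reason)  # False - Income below the 30,000 minimum...
```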

    5. ⚖️ Regulators / Policymakers (The Rule-Makers)

    Who?
     Governments, international orgs, data watchdogs.

    What they do:

    • Create AI regulations
    • Assess risks
    • Enforce compliance
    • Protect public interest

    📌 Example: The EU created the AI Act to regulate high-risk AI in health, security, etc.

    🧠 TL;DR:

    AI has a full squad: Provider ➡️ Deployer ➡️ User ➡️ Impacted Party ➡️ Regulator

    Each plays a crucial role in making sure AI is helpful — not harmful.

    📣 Takeaways:

    ✅ Don’t just use AI — understand who’s behind it
     ✅ Know your role: Are you a user, deployer, or impacted party?
     ✅ Ask questions, give feedback, demand transparency

    Call to Action: Dive into the AI Lifecycle, flex your tech skills, and build AI that slays responsibly!

    🎬 Meme/GIF Suggestions:

    • "This is fine" dog in fire
       → Caption: “When you deploy AI but skip the monitoring step”
    • Spider-Man pointing at Spider-Man
       → Caption: “Provider vs. Deployer vs. Regulator when AI causes a problem”
    • SpongeBob meme: ‘Ight Imma Head Out’
       → Caption: “When AI makes a decision and offers no explanation”

    #GenZforAI #AILifecycleVibes #CodeTheFuture

    For further information, please contact us via email at

    (Call-to-Action Section)

    Author: Phan Van Dong - Consultant

    Date: May 2025.

    #shortversion

    🎯 What’s the AI Lifecycle, really?

    Let’s get one thing straight: AI ≠ magic. It’s not some wizard living in your phone. It’s a looping, living process — just like your skincare routine (but for code).

    Here’s the AI Lifecycle in simple steps:

    1. 🧠 Design — Smart humans (like AI developers) create a model by training it on data.
    2. 🚀 Deploy — The model gets plugged into apps, services, or systems (like your bank’s credit scoring tool).
    3. 👤 Use — You interact with AI every day: chatbots, face filters, Spotify suggestions, even Grammarly!
    4. 🔄 Monitor & Improve — Teams track how it performs and update it regularly (because no one wants a glitchy AI giving weird answers).

    👀 Who’s running this AI show?

    • 🧑‍💻 AI Providers — Build and train the models (e.g., OpenAI with ChatGPT).
    • 🏗️ Deployers — Integrate it into services (e.g., e-commerce, education, banking).
    • 🧑‍💼 Users (You!) — Use AI knowingly or unknowingly.
    • 😬 Impacted Folks — People affected by AI decisions (even if they never touch a chatbot).
    • ⚖️ Regulators — Set the rules and protect the public.

    💥 Why should Gen Z care?

    Because AI is shaping your world — from what you see online to how you get hired. You’re not just a user; you’re part of this AI ecosystem.

    ✅ Understand how it works
     ❓ Ask the hard questions
     🛡️ Push for fairness & transparency

    👉 TL;DR: Be curious. Be critical. Keep AI in check. The future’s yours to shape.


     

    #Original

    INTRODUCTION TO THE AI LIFECYCLE – AI ACTORS

    #longversion

    The AI lifecycle is an iterative process that starts with a real-world problem and ends with an AI solution designed to address it. Rather than being linear, it involves continuous feedback and iteration across stages such as data collection, model development, deployment, and monitoring.

    1. What is the AI lifecycle?

    The AI lifecycle is the series of stages that an AI system goes through, from its design to its actual use and monitoring. The main stages include:

    • Design and development of the AI model;
    • Deployment and integration into the actual system;
    • Use and interaction by users;
    • Monitoring, evaluation and adjustment to ensure safety and effectiveness.

    At each stage, different actors are involved and hold responsibility.

    2. Key actors in the AI lifecycle

    a. AI Provider

    AI providers are organizations or individuals responsible for developing artificial intelligence models. They can be large technology companies, academic research labs, or AI startups. Their roles include designing algorithms, collecting and processing training data, training models, and providing tools or platforms for deploying AI. Providers need to ensure that their models are not only accurate but also transparent and explainable, and that risks such as data bias or technology misuse are minimized. They also play an important role in publishing technical documentation and supporting deployers in using AI effectively.

    For example: OpenAI developed the GPT language model, then provided an API for other organizations to integrate into their products.

    b. AI Deployer

    AI deployers are organizations or businesses that use AI models developed by others and integrate them into real-world products, services, or operational processes. They can be banks, hospitals, e-commerce companies, or government agencies. Although they do not build the models themselves, they are responsible for how AI is used in practice. This includes customizing the models to suit specific contexts, testing them in real-world environments, monitoring performance, and ensuring that the AI system complies with legal and ethical regulations. Deployers act as a bridge between the technology and its practical applications.

    For example: a Vietnamese bank uses an AI model from a technology partner to automatically evaluate loan applications and detect fraud.

    c. AI User

    AI users are individuals or organizations that interact directly with an AI system during use. They can be consumers using chatbots, employees using data analytics tools, or professionals using AI to support decision making. The role of users is not only to use the system effectively, but also to understand the limitations of AI, recognize potential risks, and provide feedback to improve the system. In some cases, users can also adjust how they interact with AI to ensure consistent and reliable output.

    For example: A customer service representative uses an AI chatbot to quickly answer frequently asked questions from customers, saving time and improving efficiency.

    d. Impacted Party

    Impacted parties are those who do not use AI systems directly but are affected by decisions made by AI. These could be candidates rejected from a job by a screening system, citizens monitored by facial recognition systems, or customers denied loans by a credit-scoring system. Their perspective is crucial for understanding the social impacts of AI. They need to be protected and must have the right to be informed, to receive an explanation, and to challenge automated decisions that affect them. Listening and responding to these groups helps improve the fairness and accountability of AI systems.

    For example: a job applicant is automatically rejected by an AI system because their profile does not match the criteria, but is given no clear reason.

    e. Regulator/Policymaker

    Regulators are national or international organizations tasked with developing, enacting, and monitoring the implementation of AI-related regulations. They can be the European Commission (with its EU AI Act), national data protection authorities, or international standardization organizations. Their role is to ensure that AI systems are developed and used safely, transparently, and in accordance with ethical and legal values. They are also responsible for classifying risks, requiring impact assessments, and addressing violations if they occur. As AI develops rapidly, the role of regulators is becoming increasingly important to protect the public interest and maintain social trust.

    For example: the European Union enacted the AI Act to regulate the use of AI in high-risk areas such as health, justice, and security.

     


    #shortversion

    1. What is the AI lifecycle?

    The AI lifecycle is a cyclical process that guides how AI systems are built and maintained. It includes stages such as model design, deployment, user interaction, and ongoing monitoring. Each stage involves different stakeholders and allows for continuous improvement through feedback and iteration.

    2. Key actors in the AI lifecycle

    a. AI Provider

    These are individuals or organizations that develop AI models. They design algorithms, process data, train models, and provide platforms for deployment. They must ensure transparency and accuracy while minimizing risks.

    Example: OpenAI developed GPT and offers it via API for integration.

    b. AI Deployer

    Organizations that implement AI into real-world systems. They customize, test, and monitor AI to ensure it works effectively and complies with regulations.

    Example: A Vietnamese bank uses AI to assess loan applications and detect fraud.

    c. AI User

    People or teams who interact directly with AI systems. They use the tools, understand limitations, and provide feedback.

    Example: A customer service agent uses an AI chatbot to answer FAQs.

    d. Impacted Party

    Those affected by AI decisions without directly using the system. They deserve transparency and the right to challenge outcomes.

    Example: A job applicant is rejected by an AI system without explanation.

    e. Regulator/Policymaker

    Authorities that create and enforce AI-related laws to ensure ethical and safe use.

    Example: The EU AI Act regulates high-risk AI applications.

     
