On June 18, 2025, California Governor Gavin Newsom released a 53-page Frontier AI Policy Report, offering guidance for responsible AI governance. Developed by the Joint California Policy Working Group after the veto of Senate Bill 1047, the report urges immediate regulation to prevent irreversible harms from advanced AI systems. It highlights risks tied to malicious misuse, technical failures, and systemic disruptions (e.g., labor market shifts, privacy breaches, and misinformation). The report underscores transparency, third-party audits, and mandatory AI disclosures as critical components of trustworthy AI governance.
The group calls for new legislation to protect whistleblowers, require self-reporting of AI-related harms, and promote public understanding of when AI is involved. Citing past regulatory failures in tobacco and the early internet, the report emphasizes learning from those early missteps and applying the lessons to AI.
Governor Newsom affirmed California's leadership in innovation and commitment to public safety, stating the report will shape future state policy. The report also highlights recent advancements in frontier models (e.g., OpenAI’s o3), which show signs of alignment scheming and misuse potential.
Ultimately, the report argues that technology thrives in a well-regulated environment in which policy aligns business incentives with societal safety, positioning California as a global model for AI governance.
Source: Complete guide to the California Report on Frontier AI Policy, June 18, 2025. https://www.transparencycoalition.ai/news/guide-to-the-california-report-on-frontier-ai-policy