California’s AI Transparency Act and Chatbot Law likely to have ripple effect


The ‘Sacramento Effect’ could mean last week’s flurry of landmark AI legislation serves as a blueprint for other states

In sum – what to know:

California passes landmark AI legislation – SB 53, AB 853, SB 243, and SB 57 are Senate and Assembly bills that regulate developers of advanced, or “frontier,” AI models; require disclosures about AI-generated content; assess the impact of data centers on other ratepayers and electricity consumers; and more.

State legislation could influence federal laws – California is home to the largest AI companies, and the proximity of Silicon Valley to the state capital in Sacramento means there’s a powerful feedback loop between legislators, lobbyists, and tech influencers.

Assessment of data center costs – Legislation to protect California ratepayers from data center-driven electricity costs has been signed into law; an assessment of data centers’ impact on the grid and ratepayers is underway, along with further review of how to tame carbon emissions.

On October 13, Governor Newsom signed into law a cluster of new bills that will both directly and indirectly regulate how companies use and deploy AI in California. That matters because California’s tech laws usually become de facto national standards for other states. With the state home to the nation’s largest AI companies and the fifth-largest economy in the world, California’s new AI regulations are expected to have a ripple effect.

Let’s take a closer look at the “firsts”: the first state law to directly regulate developers of “frontier” foundation models; the first to mandate a study of data center electricity use; and the first to regulate companion chatbots to protect children and other users.

One of the most significant is AB 853, which updates the California AI Transparency Act by extending its compliance deadline, including the requirements for latent and manifest disclosures in AI-generated content, to August 2, 2026. AB 853 also creates new obligations for large online platforms, generative AI system-hosting platforms, and capture device manufacturers. Also significant is SB 243, a companion chatbot law that will require operators to be transparent with children and other users when they are interacting with AI rather than with a human.

Deeper Dive:

  • Transparency in Frontier Artificial Intelligence Act, Senate bill SB 53, and Assembly bill AB 853: SB 53 offers evidence-generating transparency measures that impose safety, transparency, and accountability requirements on major AI developers, making California the first state to regulate developers of advanced, or “frontier,” AI models. AB 853 updates the California AI Transparency Act to require that a person who creates, codes, or otherwise produces a generative AI system with more than 1 million monthly visitors or users, and that is publicly accessible within the geographic boundaries of the state, make available an AI detection tool at no cost that allows users to assess whether image, video, or audio content (or any combination thereof) was created or altered by that generative AI system. The focus on the “development” of frontier models differs from last year’s Colorado AI Act, which took a risk-based approach to regulating the development and downstream deployment of “high-risk artificial intelligence systems.”
  • Companion Chatbot law, SB 243: The first law requiring chatbots to tell users that their output is AI-generated, in case people don’t know. Prompted by recent news about children being misled or exposed to potentially harmful or inappropriate content, the law mandates disclosures to users that they are interacting with a companion chatbot, requires operators to implement safety protocols to prevent the dissemination of certain harmful content, and calls for annual reports to the Office of Suicide Prevention.
  • Ratepayer and Technological Innovation Protection Act, SB 57: Recently updated to include a study of data center electricity use, the law requires the California Public Utilities Commission (CPUC) to assess the impact of data center electricity use on ratepayers by January 1, 2027. Critics say the original bill’s focus on consumers was weakened with language around assessing “the extent to which electrical corporation costs associated with new loads from data centers result in cost shifts to other electrical corporation customers.” Depending on the results, there will be a “special tariff” to shield residential and small business ratepayers from transmission costs and to ensure that investments for new data centers are recovered from the data centers themselves. In addition to shielding consumers from the costs of infrastructure upgrades, the bill also calls for aligning data center investments with California’s climate goals. Worth noting: Ohio’s Public Utilities Commission this past summer approved a settlement requiring new data centers to pay for 85% of their requested energy.

With the government shutdown and continued debate in Congress over federal AI laws, some are touting the legislation as “commonsense guardrails,” while others criticize it for potentially stifling AI innovation in the state, or for not being comprehensive and clear enough. The next step for AI infrastructure, frontier-model, data center, and other AI companies and stakeholders is to assess where they fit in the big picture and to update their safety and governance policies, as well as their risk-management, cybersecurity, and third-party frameworks.
