California’s SB 53 and Emerging AI Regulation: Strategic Guidance for Founders and Investors
As published on Law360
California's S.B. 53, or the Transparency in Frontier Artificial Intelligence Act, was signed into law on Sept. 29, 2025, as the first comprehensive state-level artificial intelligence safety framework with mandated public disclosures in the U.S.
S.B. 53 applies primarily to developers training AI models with extreme compute (more than 10^26 integer or floating-point operations, or FLOPs), with additional obligations for developers that, together with their affiliates, had annual gross revenues exceeding $500 million in the preceding calendar year.
Most of the law's regulatory requirements are unlikely to apply directly to emerging companies. S.B. 53 is explicitly designed to limit the ability of large AI labs to avoid safety obligations in the pursuit of speed or scale.
However, S.B. 53 is still likely to materially affect emerging companies' business. S.B. 53 introduces regulatory, commercial and reputational dynamics that are likely to extend well beyond California, even in the wake of President Donald Trump's Dec. 11 executive order targeting excessive state AI laws.
Universal Requirements of the Bill
The bill focuses on foundation models,[1] frontier models,[2] and their developers.[3] Starting Jan. 1, 2027, however, the California Department of Technology will review these definitions annually to ensure they remain accurate.[4]
Updates to these definitions should be monitored, as compliance with the act requires large frontier developers to write, implement and "conspicuously" publish frontier AI frameworks describing how they incorporate national, international and "industry-consensus" best practices.[5]
The expected evolution of these key definitions and enforcement thresholds will reward actors who plan ahead. Voluntary alignment with S.B. 53 practices can signal institutional investment readiness and mitigate reputational risk.
California's AI regulatory leadership is likely to influence policy beyond state borders. Much like the General Data Protection Regulation's effect on global privacy standards, S.B. 53 may serve as a prototype for future federal or multistate regulatory frameworks.
AI Governance in Commercial Agreements, Financings and Exit Transactions
Core elements of S.B. 53-aligned governance will likely be included in future procurement checklists, representations and warranties, and diligence processes.
The definition of "large frontier developer" includes a developer's "affiliates," which the bill defines as "a person controlling, controlled by, or under common control with a specified person, directly or indirectly, through one or more intermediaries."[6]
An acquisition may therefore create a controlling relationship whose collective annual gross revenues exceed the $500 million threshold. As a result, even in the absence of a legal mandate, failure to implement basic AI governance protocols may disadvantage emerging companies in commercial agreements, financings and exit conversations.
These new regulatory requirements may accelerate compliance timelines. Founders should anticipate requests from investors and partners for documented compliance readiness. Early-stage companies may benefit from proactively integrating governance infrastructure into their product and organizational roadmaps.
Treating AI governance not merely as a compliance matter but as a component of strategic positioning can strengthen a company's footing in institutional capital raises.
Establishment of Industry Norms
While only large frontier developers face civil penalties of up to $1 million per violation of the new transparency requirements,[7] S.B. 53 also extends whistleblower protections to covered employees of all frontier developers, not just large ones, and prohibits any rule, regulation, policy or contract that prevents a covered employee from disclosing information about catastrophic risk.[8]
Frontier developers must also give all covered employees clear notice of their rights and responsibilities under this section, either by posting and displaying the notice in the workplace or by providing written notice at least once a year.[9] Accordingly, whistleblower protections are likely to become baseline expectations across the sector.
CalCompute and Strategic Positioning for Emerging Companies
Founders and executives of emerging companies should consider engaging with CalCompute, California's new state-backed computing initiative, which is designed to provide access to infrastructure, guidance and public-private research resources.[10]
Described as a "fully owned and hosted cloud platform," CalCompute is to be established within the University of California and is intended to enable "equitable innovation by expanding access to computational resources."
The bill also calls for a forthcoming report that will include a landscape analysis of California's current public, private and nonprofit cloud computing infrastructure, as well as an analysis of the cost to build and maintain CalCompute, with recommendations on potential funding sources, governance structure and parameters for its use.
This section of S.B. 53 could reduce asymmetries that currently favor dominant players.
Comparison to the European Union AI Act
The EU AI Act, adopted in 2024, is broader and more burdensome for emerging companies. Its obligations focus on providers of high-risk AI systems and general-purpose AI models with systemic risk. The EU Act's stricter compliance requirements apply to models trained on 10^25 FLOPs, versus California's 10^26.
The EU Act also requires regulatory submissions, while California requires only that frontier developers and large frontier developers publish transparency reports. An earlier California bill, S.B. 1047, which more closely resembled the EU Act, was vetoed by Gov. Gavin Newsom as too broad and potentially stifling to innovation.
Ensuring a National Policy Framework for Artificial Intelligence
The Trump administration's Dec. 11 executive order on AI regulations is designed to implement a minimally burdensome national standard, and not "50 discordant State ones."[11] On its face, the order may seem targeted toward S.B. 53.
However, the published executive order does not name S.B. 53, though it does mention Colorado's state AI law. The order also requires the establishment of an AI litigation task force within 30 days and an evaluation of state AI laws within 90 days.[12]
It is unclear whether the AI litigation task force will target S.B. 53, what the evaluation of state laws will conclude, or how S.B. 53 might be challenged in court.
Finally, the executive order does not suspend or invalidate any existing regulations, noting that states will retain authority over specific carveouts, including child safety protections, AI compute and data center infrastructure, state government procurement and use of AI, and "other topics as shall be determined."[13]
As such, businesses, especially those founded in California, will still benefit from voluntary alignment with S.B. 53 as a strategic hedge against regulatory volatility and as a signal of institutional readiness in an increasingly competitive and costly environment.
Conclusion
S.B. 53, on its face, may not affect emerging companies directly. Yet emerging companies that embed governance and transparency into their operations will differentiate themselves in highly competitive markets, align with evolving standards, and ultimately de-risk future partnerships, financings and exits.
[1] Cal. Bus. & Prof. Code § 22757.11(f) ("'Foundation model' means an artificial intelligence model that is trained on a broad data set; designed for generality of output; and adaptable to a wide range of distinctive tasks.").

[2] Cal. Bus. & Prof. Code § 22757.11(i) ("(1) 'Frontier model' means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. (2) The quantity of computing power described in paragraph (1) shall include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model.").

[3] Cal. Bus. & Prof. Code § 22757.11(h) ("'Frontier developer' means a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications of 'frontier model'"); Cal. Bus. & Prof. Code § 22757.11(j) ("'Large frontier developer' means a frontier developer that, together with its affiliates, collectively had annual gross revenues in excess of $500,000,000 in the preceding calendar year.").

[4] Cal. Bus. & Prof. Code § 22757.11(a).

[5] Cal. Senate Bill No. 53, Chapter 138, Legislative Counsel's Digest (Sept. 29, 2025).

[6] Cal. Bus. & Prof. Code § 22757.11(a).

[7] Cal. Bus. & Prof. Code § 22757.15(a).

[8] Cal. Lab. Code § 1107.1(a).

[9] Cal. Lab. Code § 1107.1(d)(1)-(2).

[10] Cal. Gov. Code § 11546.8(a).

[11] Exec. Order on Eliminating State-Law Obstruction of National Artificial Intelligence Policy, Dec. 11, 2025, https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.

[12] Id.

[13] Id.