California’s recent AI Regulation Bill (‘SB 1047’) has caused significant controversy across the AI community. The bill has been so contentious that it has become the center of a “fierce national debate”. The biggest players in the AI industry are taking sides on the bill like rival teams in a championship game, each rallying their supporters, knowing the outcome could shape the season ahead.
Supporters and opponents of the bill alike have been highly vocal, with prominent figures writing articles, giving interviews, and even holding online debates about it. There is even a split between the so-called godfather and godmother of AI: Geoffrey Hinton supports the bill, while Fei-Fei Li opposes it. Not to mention a division within the companies themselves: while OpenAI, Meta, and Anthropic have raised concerns about the bill, over 120 of their employees have come out in its favor.
The regulation that slipped quietly under the radar
What I find most interesting is that this is just one of many AI-related bills considered by, let alone passed through, the California state legislature. Thirty-eight AI bills have reached California Governor Gavin Newsom’s desk, and in 2024 alone Newsom enacted 17 of them. These bills cover a number of important issues, such as:
- transparency requirements for Generative AI providers (AB-2013);
- mandatory disclosure of AI-generated content through a tag in its metadata (SB-942);
- an expansion of privacy laws to restrict the use of private information in AI systems (AB-1008);
- limitations on healthcare automation (SB-1120); and
- mandatory state risk analyses carried out in collaboration with frontier AI companies (SB-896).
These are just a few of the many areas being legislated; others include education, combating deepfake pornography, and the entertainment industry. The impact of this legislation on AI developers should not be underestimated: it will affect the way they use individuals’ data to train their models, the disclosure requirements they must meet, and the types of AI applications they can develop.
Given the significant impact of these other regulations, why has all the focus been on SB 1047? Why the controversy?
The big guys are unhappy
The SB 1047 legislation applies to “covered models”, which are AI models that either:
- Cost over $100 million to develop and are trained using computing power “greater than 10^26 integer or floating-point operations” (FLOPs); or
- Are based on covered models and fine-tuned at a cost of over $10 million, using computing power greater than 3×10^25 integer or floating-point operations.
While the current frontier models don’t yet meet these thresholds, it is predicted that the next generation of models will. SB 1047 also applies to “developers” rather than to those who merely use an AI model.
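To make the two-tier threshold concrete, here is a minimal sketch in Python of how the covered-model test reads. This is my own simplified interpretation, not language from the bill: the function names are hypothetical, and both tiers are treated as a strict “greater than” test on cost and compute.

```python
# Minimal sketch (not legal advice) of SB 1047's two-tier "covered model"
# test as quoted above. Names and the strict-inequality reading are
# assumptions for illustration only.

def is_covered_base_model(dev_cost_usd: float, training_flops: float) -> bool:
    """Tier 1: over $100M to develop AND more than 10^26 operations."""
    return dev_cost_usd > 100e6 and training_flops > 1e26

def is_covered_fine_tune(ft_cost_usd: float, ft_flops: float) -> bool:
    """Tier 2: fine-tuned from a covered model at over $10M AND more
    than 3x10^25 operations."""
    return ft_cost_usd > 10e6 and ft_flops > 3e25

# A hypothetical frontier-scale run just past both Tier 1 thresholds:
print(is_covered_base_model(120e6, 2e26))   # True
# A large fine-tune whose cost falls under the $10M bar:
print(is_covered_fine_tune(8e6, 5e25))      # False
```

Note that both conditions must hold within each tier: an expensive run trained on less compute, or a massive run built cheaply, would fall outside the quoted definition.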
All this means that the bill targets the biggest players in the AI game, in other words: OpenAI, Google DeepMind, Microsoft, Amazon Web Services, NVIDIA, Meta, Apple, IBM, Baidu, Tencent, Anthropic, Huawei, and so on. Given the significant power and influence of these companies, any legislation that seeks to rein in their extensive powers is likely to see backlash from the companies and their many investors. It may be the companies themselves that are driving the discussion and controversy behind SB 1047.
Could it be the actual content of the regulation?
Or is it possible that the regulation itself falls short of achieving its goals? When Governor Gavin Newsom vetoed the bill, he reasoned that its focus on only the most “expensive and large-scale models” would give the public “a false sense of security”. Newsom also argued that it could “curtail” innovation, and that the bill does not “take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data”.
The concern about “a false sense of security” has been echoed by others. For example, Usama Fayyad (Executive Director of the Institute for Experiential AI) claims that “the real damaging stuff” can be attributed to smaller, more specialized models that the regulation does not cover. Other tech professionals, such as Chris Kelly (former Facebook chief privacy officer and general counsel), suggest that the bill is simultaneously overinclusive and underinclusive. His view echoes Newsom’s concern that stringent standards would apply even to basic functions, so long as a large system deploys them.
The concern that the bill would limit innovation in California has also been widely repeated. Fei-Fei Li suggested that if the bill were passed into law, it would “harm our budding AI ecosystem”. Ion Stoica reaffirmed this concern in the Carnegie Endowment debate on the bill. Yet I can’t help but agree with Dan Hendrycks’s rebuttal to Stoica: quoting Dario Amodei, he argued that AI companies said similar things about the EU AI Act and about data privacy laws, and those all passed without anything of the sort happening. Hendrycks claims that “the value of doing business in California is higher than any cost of this Bill”, and from what we have seen of the regulation of new technologies and scientific advances so far, I would agree.
While I can understand criticisms that the bill does not do enough to protect individuals from AI, most criticisms tend to center on the bill doing too much and curtailing innovation. Unfortunately, many of these arguments rest on fundamental misconceptions about the bill. For example, Fei-Fei Li’s article opposing the bill contains a number of misconceptions, regarding everything from the scope of developers covered by the bill to the scope of its kill-switch requirement. Perhaps it is the spread of these falsehoods that has encouraged such strong opposition to SB 1047.
Or is it the very idea of regulating AI?
It seems that if you fully understand what the bill really does, it is not that different from what the law of tort already requires of individuals. In the Carnegie Endowment debate, Ketan Ramakrishnan (a law professor) suggested that the bill is not creating radically new law, but rather building on one of our oldest legal institutions: tort law. Joshua Turner and Nicol Turner Lee have made a similar argument.
It should also be noted that many of these companies have already committed to carrying out various requirements of the bill, such as testing.
If this is the case, then is it the very idea of AI regulation that bothers the companies? Has the debate become so polarized that we can no longer look at an AI regulation bill without opposing it purely because it contains the word ‘regulation’? Whatever the cause of the controversy behind SB 1047, the debate reflects a fundamental divergence within the AI community over the role we expect AI to play in our future.
Written by Celene Sandiford, smartR AI