
Silicon Valley in turmoil over new AI law in California

A new bill in California, SB 1047, is causing concern in Silicon Valley.

The bill, introduced by State Senator Scott Wiener, would establish “reasonable security standards” for companies that develop large artificial intelligence (AI) models that exceed certain size and cost thresholds.

If the law goes into effect, the tech giants would have to take steps to prevent their AI systems from causing “critical harm,” ensure they can be shut down if necessary, and disclose their compliance efforts to a newly formed “Frontier Model Division” within the California Department of Technology. Failure to comply could result in civil penalties and possible lawsuits, putting even more at stake for Silicon Valley's AI innovators.

The bill is part of a growing wave of AI regulations being proposed and discussed around the world as lawmakers grapple with the rapid advancement of AI and its potential risks and benefits.

From the European Union's AI Act to the White House's AI Bill of Rights, governments are increasingly seeking to create frameworks for the responsible development and deployment of AI technologies. SB 1047 is California's attempt to address these issues at the state level, and its outcome could have significant implications for the future of AI regulation in the United States and beyond.

The bill has won the support of two AI pioneers, Geoffrey Hinton and Yoshua Bengio, who have been vocal about the potential existential threats posed by artificial intelligence. Hinton praised the bill, saying it “takes a very common-sense approach to balancing those concerns.”

However, the bill is also facing considerable opposition, which already gives an indication of the lines of attack that could follow at the national level if and when Congress pushes forward with tougher legislation.

Impact on AI companies and innovation

The potential impact of SB 1047 extends beyond the technology industry as AI is increasingly becoming a critical part of commerce in numerous sectors, observers say.

From retail and healthcare to finance and transportation, companies are using AI to optimize operations, personalize customer experiences and drive innovation, so any regulation affecting the development and deployment of AI technologies could have far-reaching implications for the economy.

If the law's requirements prove too burdensome or ambiguous, critics say, it could slow corporate adoption of artificial intelligence and put California-based companies at a competitive disadvantage against rivals in states or countries with less stringent regulations.

The bill could have a significant impact on AI companies, especially smaller startups and open source developers.

“While some safeguards make sense, regulations need to be precise so as not to hinder innovation in this developing field,” Sunil Rao, CEO and co-founder of AI company Tribble, told PYMNTS.

“Overregulation risks hurting small companies that push the boundaries of what is possible with AI. We need to be careful to find the right balance – basic safety standards make sense, but the compliance burden shouldn't be so high that only the tech giants can afford to build AI systems,” he added. “Otherwise, we risk going down the same path as nuclear power in the US, where strict regulation stalled innovation.”

The benefit is that companies now have more assurance that the AI tools they use are developed to higher safety standards, Bob Rogers, data scientist and CEO of Oii.ai, a California-based supply chain AI company, told PYMNTS.

“Unfortunately, I can list more cons than pros,” he added. “Companies will likely face higher prices for AI tools and services, as AI developers will pass on any additional costs associated with compliance to their customers. And then there is the potential impact of regulations on innovation if AI companies feel they need to be extra cautious. Business users themselves may find it a burden to have to develop strategies to ensure that the AI tools they use comply with regulations.”

The bill would impose new safety requirements and greater oversight for the development of large, advanced AI models.

“Companies will need to assess risks, implement safeguards and report compliance to a new government regulator or face potential penalties,” Stephen Kowski, field CTO of generative AI security firm SlashNext, told PYMNTS.

For companies that use AI, the bill brings considerable uncertainty.

“It remains to be clarified exactly what technical requirements companies will have to meet and how the government would assess compliance. This lack of clarity could make companies hesitant to adopt cutting-edge AI technologies,” Rao stressed.

“Some strong safety and transparency standards could help increase public trust in AI systems,” he added. “But these standards must be implemented in a way that enables access to transformative AI tools that can help businesses innovate and compete. Policymakers should actively engage the business community to understand their needs and concerns as they develop this legislation.”

The two main aspects of the law are what it requires companies to do and what happens if they don't comply. Narayana Pappu, CEO of Zendata, a San Francisco-based provider of data security and privacy compliance solutions, drew comparisons to the California Consumer Privacy Act (CCPA), noting that while the state has brought only two enforcement actions since the law's passage roughly four years ago, class action lawsuits over data breaches have grown exponentially, with more than 100 in 2023 alone.

Pappu also pointed out a possible shortcoming in the bill: “What is missing from the bill are transparency requirements for frontier models. These would have been a good intermediate step towards risk mitigation requirements.”

Tarun Thummala, CEO of AI company PressW, told PYMNTS that the bill could inadvertently stifle innovation and lead to less secure AI systems.

“Unfortunately, I believe that this bill, while intended to do good, will do more harm than good,” Thummala said. “In many ways, this bill will hamper growth and innovation in AI and may even lead to less secure AI being developed.”

The bill would put the onus on software developers to prevent misuse of their AI services, which Thummala said could be particularly damaging to smaller players in the industry.

“This bill would most likely create a world where only large companies with abundant resources could afford to develop these AI models,” he explained. “Open source development is critical not only to technical progress and evolution, but also to democratizing access to these powerful technologies.”

Thummala said he believes the bill's approach could backfire and ultimately lead to less secure AI systems. As an alternative, he suggested legislation that focuses on regulating specific AI applications rather than the development process itself.

“I believe that legislation that focuses on use rather than development will do a better job of protecting our society,” Thummala said. “Regulating the applications that are actually developed and setting clear boundaries on what is and is not an acceptable AI application will give developers a free hand to continue developing, and only those who use it with the intent to cause harm will be penalized.”

Navigating the legislative landscape

While federal legislation on AI is still pending, a patchwork of state-level bills similar to the California proposal is emerging.

“Given California's enormous influence in the technology sector, its approach could be a model for other states,” Rao noted. “While I agree with Senator Wiener that federal legislation would be ideal, realistically it may be up to the states to act first. However, a fragmented regulatory landscape would be a nightmare for AI companies. It is critical that states coordinate with each other and strive for as much consistency as possible.

“The technology industry must also proactively engage with policymakers at the state and federal levels to craft smart, innovation-friendly legislation,” he continued. “We have an opportunity now to get the policy framework right and cement the U.S. as a leader in AI, but over-regulation could put that at risk.”

As SB 1047 continues to move through the legislative process, the debate surrounding the bill underscores the ongoing tension between innovation and regulation in the rapidly evolving world of AI. The eyes of the tech world remain on California, awaiting the outcome of this AI regulation.