AI regulation faces a test in California

Benefits and costs of the new AI gold rush.

Move fast and break things … vying for supremacy (“America’s AI edge,” like another Manhattan Project) … but who counts as a developer, and what are their responsibilities under some type of regulatory framework?

Who does AI safety testing? Are there third-party evaluations? Certifications? Kill switches? Incident logging and reporting? AI “meltdowns” and lawsuits? Collateral damage from embedded AI medical devices? Fine-tuning and fines?

Regulation of the AI “frontier” faces a milestone in California with SB 1047, now a softened version of the bill as originally drafted.

The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, …

It’s a political tightrope.

• The Verge > “Will California flip the AI industry on its head?” by Kylie Robison, a senior AI reporter working with The Verge’s policy and tech teams, who previously worked at Fortune Magazine and Business Insider (Sep 11, 2024) – SB 1047 aims to regulate AI, and the AI industry is out to stop it.

SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far.

Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics.

Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation.

Related articles

• LA Times > “Overinflated AI bubble is beginning to leak” by Michael Hiltzik (Sep 9, 2024) – After a huge run-up on Wall Street, users now wonder whether the craze will fall flat.

Companies that plunged into the AI market for fear of missing out on useful new applications for their businesses have discovered that usefulness is elusive.

One persistent concern about AI is its potential for misuse for nefarious ends, such as making it easier to shut down an electric grid, melt down the financial system, or produce deepfakes to deceive consumers or voters. That’s the topic of Senate Bill 1047, a California measure awaiting the signature of Gov. Gavin Newsom (who hasn’t said whether he’ll approve it).

The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others who assert that its constraints will drive AI developers out of California.

That brings us to doubts not about AI’s risks, but about its real-world utility for business. Those doubts have been spreading through industry as more businesses try the technology and find that it has been oversold.

The same may be true of projected economic gains from AI more broadly. In a recent paper, MIT economist Daron Acemoglu forecast that AI would produce an increase of only about 0.5% in U.S. productivity and about 1% in gross domestic product over the next 10 years, mere fractions of standard economic projections.

Related posts

Lords of AI – Tech giants and an International Agency