This controversial California AI bill was amended to quell Silicon Valley fears. Here’s what changed

A bill designed to shield Californians from potential disasters caused by artificial intelligence cleared a key committee this week, leaving parts of the tech community in an uproar even as amendments aimed to smooth things over for Silicon Valley.

SB 1047, a first-of-its-kind bill from state Sen. Scott Wiener (D-San Francisco), is slated for a vote on the Assembly floor later this month. If it passes the Legislature, Gov. Gavin Newsom will face the decision of whether to sign or veto the landmark legislation.

Supporters say the bill would establish safeguards to stop sophisticated AI systems from triggering catastrophic events, such as unexpectedly shutting down the power grid. They worry that the pace of technological advancement is outstripping humans' ability to manage and regulate it.

Lawmakers want to encourage developers to use the technology responsibly, giving the state attorney general the power to impose penalties in cases of imminent threat or harm. The bill would also require developers to be able to shut down the AI models they directly control if things go wrong.

Opponents, including tech companies such as Facebook owner Meta Platforms and politicians such as U.S. Rep. Ro Khanna (D-Fremont), argue the bill could hinder innovation. Some critics say it concentrates on distant, catastrophic scenarios instead of pressing issues such as privacy and misinformation, although other bills have been introduced to tackle those issues directly.

SB 1047 is among roughly 50 bills involving artificial intelligence introduced in the state Legislature as concerns escalate over the technology's effects on jobs, misinformation and public safety. As politicians work to establish rules for the rapidly growing sector, some companies and professionals are suing AI firms in hopes that courts will define the boundaries.

Wiener represents San Francisco, home to pioneering AI startups including OpenAI and Anthropic, and has found himself at the center of the debate. On Thursday, he substantially amended his bill, with changes that some believe make the proposed law less stringent but improve its chances of passing the Assembly.

The amendments removed a perjury penalty from the bill and changed the legal standard for developers regarding the safety of their advanced AI models.

The amendments also scrapped a plan to create a new government entity, the Frontier Model Division, to review safety measures. Under the original proposal, developers would have submitted their safety plans to that division; under the revised version, they will submit them directly to the attorney general.

Christian Grose, a political science and public policy professor at the University of Southern California, said some of the modifications could improve the bill's chances of passage.

The bill has supporters in some tech circles, including the Center for AI Safety and Geoffrey Hinton, often called the "godfather" of AI. Others in the field, however, worry it could hurt California's thriving technology sector.

Khanna and seven other California House members, Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cárdenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Díaz Barragán (D-San Pedro) and Lou Correa (D-Santa Ana), wrote a letter to Newsom on Thursday urging him to veto the bill if it reaches his desk.

Wiener, who represents San Francisco, is caught between experts who say AI should be regulated because of its potential dangers and constituents whose livelihoods depend on AI research. Grose said this could be a critical juncture in Wiener's career as he navigates both the benefits and risks of the technology.

Some tech giants say they are open to regulation but disagree with Wiener’s approach.

In a recent meeting with the L.A. Times editorial board, Kevin McKinley, Meta's state policy manager, said the company shares many of the bill's objectives but is apprehensive about its potential effects on AI development in California, notably on open-source innovation.

Meta offers a collection of open-source AI models called Llama that developers can build on for their own projects. Llama 3, released in April, has been downloaded more than 20 million times, according to the company.

Meta declined to comment on the latest amendments. McKinley had previously described SB 1047 as difficult to review and amend.

A spokesperson for Newsom said his office does not typically comment on pending legislation.

The governor will evaluate the bill on its merits should it reach his desk, spokesperson Izzy Gardon said in an email.

Anthropic, the San Francisco-based AI company behind the assistant Claude, has signaled it could support the bill if it were amended. In a July 15 letter to Assemblymember Buffy Wicks (D-Oakland), Hank Dempsey, Anthropic's state and local policy lead, proposed changes, including refocusing the bill on holding companies accountable for causing catastrophes rather than regulating them before harm occurs.

Wiener said the amendments took Anthropic’s concerns into account.

Wiener said it is possible to foster both progress and safety; the two don't have to be conflicting objectives.

It remains unclear whether the changes will shift Anthropic's position. In a statement, the company said it would review the new language of the bill once it is released, which was expected Thursday.

Russell Wald, deputy director at Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), which advances AI research and policy, said he remains opposed to the legislation.

Wald said the recent changes appear to prioritize appearance over substance, crafted to defuse opposition from a handful of major AI companies while failing to address genuine concerns from academia and open-source communities.

Policymakers face the delicate task of balancing concerns about AI with fostering the growth of their state's technology industry.

After the committee meeting, Wicks stated, “We’re all working on creating a regulatory landscape that provides necessary safeguards without hindering the advancement of AI, including its potential for driving economic development.”


2024-08-17 02:01
