California lawmakers are trying to regulate AI before it’s too late. Here’s how

Artificial intelligence holds enormous potential to transform industries and improve lives in countless ways. But it also raises growing concerns about safety and ethics, particularly on issues such as bias and discrimination.


Jacob Hilton spent four years at OpenAI, the prominent Bay Area AI company, where he conducted research that helped evaluate and improve the accuracy of models such as ChatGPT. He is optimistic about AI's potential benefits to society but acknowledges the serious risks if the technology is not properly managed.

Hilton was one of 13 current and former employees of OpenAI and Google who signed an open letter this month calling for stronger whistleblower protections, arguing that overly broad confidentiality agreements keep workers from raising concerns.

The employees closest to the technology face the greatest risk of retaliation for voicing their concerns, said Hilton, a 33-year-old researcher at the Alignment Research Center, a Berkeley-based nonprofit.

California lawmakers are moving swiftly to address those concerns, drafting roughly 50 AI-related bills. Many of the proposals focus on placing safety guardrails around a technology that some legislators believe could harm society if left unchecked.

But trade groups representing major tech firms contend that the proposed legislation could stifle innovation, cost California its technological edge, and reshape how AI is developed in the state.

The breadth of the proposals reflects AI's wide-ranging effects on jobs, society and culture. The bills aim to address a variety of AI-related anxieties, including potential job losses, data protection and racial bias.

One proposal, backed by the Teamsters, would require human oversight of driverless heavy-duty trucks. Another bill, endorsed by the Service Employees International Union, would bar jobs at call centers providing public services, such as Medi-Cal, from being automated or replaced by AI. And Sen. Scott Wiener (D-San Francisco) has drafted a bill that would require companies building large AI models to conduct safety testing.

After facing criticism for acting too slowly to rein in social media companies, politicians at the federal and state levels are now introducing a wave of bills aimed at taking tougher action on AI.

With social media, policymakers waited too long to address the harms of a new technology, a pattern many lawmakers are determined not to repeat with AI.

AI tools are evolving at a remarkable pace. They are already reading bedtime stories to children, sorting drive-through orders at fast food restaurants and helping make music videos.

Even experts have been surprised by how quickly the technology is advancing. If lawmakers wait several years to act, the consequences may arrive before the rules do.

Wiener's bill, SB 1047, which is backed by the Center for AI Safety, would require companies developing large AI models to conduct safety testing and retain the ability to shut down models they directly control.

Supporters of the bill say it is needed to prevent misuse of AI, such as the development of biological weapons or attacks on the power grid. The legislation would also require AI companies to establish channels for employees to file anonymous complaints about safety concerns. If companies fail to comply, the state attorney general could take legal action.

“Powerful technology like AI comes with advantages and drawbacks, and it’s important to me that the positive impacts significantly surpass the potential hazards,” Wiener expressed.

Critics of the bill, which include TechNet, a trade group whose members include Meta, Google and OpenAI, urge lawmakers to proceed with caution. Meta and OpenAI did not respond to requests for comment, and Google declined to comment.

"Hurrying things too much can have drawbacks when it comes to this technology," said Dylan Hoffman, TechNet's executive director for California and the Southwest.

On Tuesday, the bill cleared the Assembly Privacy and Consumer Protection Committee. It next goes before the Assembly Judiciary and Appropriations committees; if it passes those, it will head to the Assembly floor.

Supporters of Wiener's bill say they are responding to public concern. In a survey of 800 potential California voters commissioned by the Center for AI Safety Action Fund, 86% said it was important for the state to establish AI safety regulations, and 77% supported subjecting AI systems to safety testing.

Hilton, the former OpenAI employee, said companies' safety and security practices currently rest on voluntary commitments, with no effective way to hold them accountable for those promises.

Another bill, AB 2930, targets "algorithmic discrimination" in the workplace. It aims to prevent automated systems from unfairly disadvantaging people based on race, gender or sexual orientation in decisions about hiring, pay and termination.

AI outputs are frequently biased, said Assemblymember Rebecca Bauer-Kahan, the bill's author.

The anti-discrimination bill failed last session amid heavy opposition from tech companies. Reintroduced this year, it initially drew support from Workday and Microsoft, but that backing has wavered over proposed amendments that would put more responsibility on developers of AI products to root out bias.

"Typically, industries don't ask for regulation, but given the skepticism toward AI in some communities, this effort aims to foster trust in AI systems," she said. "In my opinion, that's a positive development for the industry."

Labor and data privacy advocates, meanwhile, worry that the anti-discrimination bill does not go far enough, arguing that its language is too permissive.

Chandler Morse, head of public policy at Workday, said the company supported AB 2930 in its original form but is still evaluating its position on the latest amendments.

Microsoft declined to comment.

The dangers of AI also resonate with Hollywood unions, which fought for safeguards during last year's strikes. The Writers Guild and the Screen Actors Guild-AFTRA secured AI protections for their members, but the technology's risks extend beyond what labor agreements can cover, said Duncan Crabtree-Ireland, SAG-AFTRA's national executive director.

Policymakers need to act swiftly and establish rules for AI use before it becomes an unregulated Wild West, Crabtree-Ireland said.

SAG-AFTRA has helped craft three proposed federal laws and two California bills addressing deepfakes, the misleading images and videos that often feature celebrity impersonations. Under the legislation, contracts involving AI-generated versions of workers' likenesses would not become binding unless the workers were represented by their union or legal counsel.

Tech firms, however, warn against overregulation. Todd O'Boyle of the industry group Chamber of Progress said California AI businesses might relocate if regulatory scrutiny becomes too strict. Legislators, he argued, should not base policy on speculative harms while a technology with immense potential for growth and prosperity is still in its early stages.

Once regulations are in place, they are hard to roll back, cautioned Aaron Levie, CEO of Box, a Redwood City cloud computing company that is incorporating AI into its products.

The industry needs more advanced, more capable models first, Levie argued; once those models arrive, the associated risks can be evaluated gradually.

Crabtree-Ireland countered that tech companies try to delay regulation by making the problems seem more complicated than they are and by insisting on a single all-encompassing solution.

"I wholeheartedly dispute that idea," he said. "Not every aspect of artificial intelligence needs to be figured out right away."

2024-07-18 22:11