New AI bill introduces safeguards to enhance consumer protection

Senate Bill 1047 enforces new safety checks but is potentially costly for platform developers

After a long day of school, junior Nathan Mourrain opens up his computer and stares blankly at the daunting list of tasks that seem to stretch on indefinitely. The endless scroll of menial chores blurs together, each item adding to his mounting frustration. Seeking a way to lighten the load, Mourrain turns to AI, something that has recently become more integrated into his daily life.

“As a high schooler, I have seen AI used in emailing others, in applications for job positions, and anything that has to do with writing, thanks to (Large Language Models),” Mourrain said.

While students like Mourrain find AI useful for help in lightening their workload, there is fear about how AI companies are moving forward at lightning speed without considering the full implications of their progress. That’s the issue that California Senate Bill 1047, the country’s first comprehensive legislation on artificial intelligence, hopes to address.

But the bill, approved by the California State Assembly on Aug. 29, is already raising concerns among companies developing advanced AI technologies.

Known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Act,” SB 1047 targets “frontier models” — AI systems that require substantial computational power, costing $100 million or more to train. The bill mandates that developers complete thorough safety checks before releasing their technology and implement measures to minimize societal risks.

Companies that fail to adhere to these regulations will face significant fines: 10% of the cost to develop and train their AI model for the first violation and 30% for subsequent violations.

Research from the Walton Foundation and Impact Research shows that 63% of teachers use AI for curriculum development, and 42% of students rely on it for schoolwork. This trend is driven by AI technologies like LLMs, which understand and generate human-like text. Popular examples include ChatGPT and Bard.

And Computer Science teacher Christopher Bell said AI has become more accurate and efficient every year.

“You will see that now we have a lot less errors and a lot better results coming out,” Bell said. “So, AI (like ChatGPT) has definitely improved from the year and a half it has been out.”

However, regulation has failed to keep up with the rapid advancement of AI.

Because of this, Mourrain said he was glad to see SB 1047 introduced.

“It’s good to institute these sorts of protections before it actually reaches an uncontrollable level where we see the split of AI into tons of different domains — be it professional, recreational or other,” Mourrain said.

The bill has also garnered support from politicians.

Sunny Gandhi, Vice President of Political Affairs at Encode Justice, voiced his enthusiasm for the bill in a press release from Senator Scott Wiener.

“SB 1047 represents a critical step towards responsible development and proactive governance,” Gandhi said. “This bill is a forward-thinking approach that protects the public from potential AI-related harms.”

Junior David Wu, who is the president of the AI Club, said he isn’t as enthusiastic, and is instead concerned about SB 1047’s potential negative impact on society.

“Cost of training is not a good measure of impact on society — rather, models that are used in sensitive applications, for example, criminal justice systems, medicine, etc., may need to face additional scrutiny to ensure compliance with existing laws regarding those sensitive applications,” Wu said. 

Wu also said legislation like SB 1047 places unfair blame on the developers of AI instead of addressing misuse by users.

“Hammers can be used both to build things and destroy things, and destruction caused by a hammer is not blamed on the hammer company,” Wu said. “For AI companies, it’s impossible to ensure that a tool has only positive impacts — placing blame on developers will only limit the ability for AI tools to move society forward.”

But Mourrain said regulation is crucial for managing AI as many users lack understanding of its capabilities.

“For example, when you go on a rollercoaster, you expect to be safe,” Mourrain said. “Although there are regulations, it’s designed in a way where you’re not in danger, you won’t get hurt and random death is almost non-existent.”

SB 1047 now awaits Gov. Gavin Newsom’s signature. He has until Sept. 30 to sign the bill into law or veto it.
