California Governor Gavin Newsom made headlines this week by signing legislation requiring companies building frontier AI to disclose their safety practices, but another AI-related bill still sits on his desk awaiting his decision. That legislation, SB 7, would restrict California employers from using AI for employment decisions. Advocates for this bill want to protect workers’ rights, but instead, they are erecting a series of unnecessary barriers to the use of AI in the workplace that will ultimately hurt both workers and employers.
SB 7 would require all California employers to provide detailed notices to workers about their use of an “automated decision system” (ADS) to make employment decisions. Employers would have to provide written notice to all employees at least 30 days prior to introducing an ADS and fully explain its intended use, internal logic, types of output, performance metrics, and data collection methods, as well as detail who created the system and who will operate and interpret results.
In addition, after using an ADS to make an employment-related decision, employers would have to provide another written notice to affected workers. Employers would then have to allow workers to lodge an appeal and provide them with access to any input or output data from the ADS, as well as any supporting evidence that a human reviewer used to verify the output. Finally, the bill would strictly prohibit employers from using an ADS to make compensation decisions in virtually all cases or to predict or infer an employee’s beliefs, personality, emotional state, or other behavioral characteristics. The California Labor Commissioner would be primarily responsible for enforcement, but workers could also bring their own claims against employers for violations and seek civil penalties, including punitive damages.
There are many problems with this legislation. First, it is almost entirely unnecessary. Federal, state, and local labor laws, whether they be about worker protections, occupational safety, or civil rights, apply regardless of whether an employer uses AI. Employers still cannot discriminate against employees based on protected characteristics, like religion or age, and they cannot retaliate against them for engaging in legally protected activities, such as taking family leave or filing a harassment complaint.
But if employers use AI, this legislation would regulate how they make a variety of employment-related decisions, including not just compensation, promotions, and terminations, but also work schedules, work assignments, productivity requirements, and training offerings. That creates a bizarre situation in which employers can, for example, fire workers based on completely arbitrary criteria, provided they do not use AI at any point in that decision. Ironically, this discourages employers from adopting AI systems that can actually improve workplace safety, such as tools that monitor fatigue, prevent accidents, or identify hazardous conditions. If policymakers believe workers need additional protections, they should apply those safeguards universally, not just to employers using the latest technology.
Second, as with many proposed laws, this bill defines automated systems so broadly that almost any software-based tool involving data or analytics, including basic spreadsheets and simple programs with conditional logic, would fall within its scope. The bill defines an ADS as “any computational process…that issues simplified output…that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.” The bill goes on to say that employers cannot “rely primarily” on an ADS for certain employment-related decisions. While lawmakers may have thought they were preventing employers from using a Terminator-style robot to engage in mass firings, instead, they have just made it harder for hiring managers to filter out online job applicants who lack the necessary qualifications and for supervisors to expedite annual reviews with spreadsheets.
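To see how far that definition could reach, consider the hypothetical screening script below. The applicant data, field names, and experience cutoff are all illustrative assumptions, not anything drawn from the bill; the point is that a few lines of conditional logic arguably “issue simplified output” that “assists human discretionary decisionmaking,” which is all the definition appears to require.

```python
# Hypothetical example: a minimal applicant-screening filter.
# Nothing here is sophisticated "AI," yet under SB 7's broad ADS
# definition, this kind of simple conditional logic arguably issues
# "simplified output" used to "assist human discretionary
# decisionmaking" in an employment decision.

applicants = [
    {"name": "A", "years_experience": 1},
    {"name": "B", "years_experience": 5},
]

MIN_YEARS = 3  # illustrative cutoff chosen by a hiring manager

# The same kind of filter a spreadsheet formula would apply:
# keep only applicants who meet the minimum-experience threshold.
qualified = [a for a in applicants if a["years_experience"] >= MIN_YEARS]

shortlist = [a["name"] for a in qualified]
print(shortlist)  # the hiring manager reviews this shortlist
```

If this counts as an ADS, the routine filtering that nearly every employer already does in spreadsheets or applicant-tracking software would trigger the bill’s notice, disclosure, and appeal machinery.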
Third, the bill’s transparency requirements are a serious misstep. The rules are highly prescriptive, which makes compliance errors more common. Even trivial compliance mistakes, such as sending out notices 29 days before implementing an ADS instead of 30 days, would expose companies to serious financial penalties. The required disclosures could also reveal confidential information, such as trade secrets and the personal information of other employees. For example, consider a hypothetical ADS that recommends one worker receive a smaller bonus than more productive peers. That worker would have the right to obtain not only information about the proprietary ranking algorithm, but also all the data about their coworkers used to arrive at that assessment.
On top of these issues, SB 7 would create duplicative and sometimes contradictory requirements for many California businesses. The California Privacy Protection Agency has already created its own set of onerous rules around automated decision-making under the California Consumer Privacy Act (CCPA) that will go into effect in 2026. The scope of the CCPA is different because it applies to more than just employment-related decisions, but it does not apply to many smaller organizations. The CCPA also has additional requirements, including risk assessments and opt-out obligations, meaning California businesses will be tangled in red tape.
Discouraging employers from using technology ultimately hurts workers by slowing hiring, reducing opportunities for advancement, and limiting access to the kinds of tools that make workplaces safer, more efficient, and more flexible. AI can help employers spot bias, tailor training, streamline scheduling, and better match workers with the right roles. Burdening companies with redundant red tape only ensures that fewer workers will benefit from these advances. California is the nation’s tech hub, and it should be leading the way in responsible adoption of AI—not putting up barriers that will leave its workers and businesses behind.
Image credit: Gage Skidmore