A bill that seeks to enhance consumer protections against discrimination by artificial intelligence systems has passed through committee, but concerns persist regarding its potential impact on small businesses and innovation.
Senate Bill 205 establishes regulations governing the development and use of artificial intelligence in Colorado and focuses on combating “algorithmic discrimination.” Senate Majority Leader Robert Rodriguez, the bill’s sponsor, referenced bias within AI systems in housing, bank loans, and job applications. He said he has been collaborating with Connecticut Sen. James Maroney, who is running a similar bill in his state.
Rodriguez said the strike-below amendment made to the bill tweaked and clarified some definitions, while also postponing the bill’s effective date to October 2025. He said more changes are likely coming to the bill, but for the time being, it offers a “basic model framework” for the state.
The bill passed through the Senate Judiciary Committee on a 3-2 vote and will be read before the Senate Committee of the Whole.
The bill requires developers to exercise “reasonable care” to prevent discrimination when using “high-risk” artificial intelligence systems, which are defined as systems involved in making “substantial or consequential” decisions. It requires developers to complete risk assessments, implement risk management strategies, and report instances of algorithmic discrimination to the Attorney General within 90 days of discovery.
The bill also seeks to increase consumer transparency by requiring businesses that employ artificial intelligence to disclose the types of systems they use and notify consumers when a high-risk artificial intelligence system will be used to make “consequential” decisions.
Rodriguez noted that despite calls from tech giants like Mark Zuckerberg and Elon Musk for federal AI regulations, Congress has not acted, prompting his move to introduce the bill at the state level. He emphasized Colorado’s track record as a leader in enacting legislation on data privacy and transparency within the tech sector.
“At the base of this bill and policy is accountability, assessments, and disclosures that people need to know when they’re interacting with artificial intelligence,” he said. “We’re in a groundbreaking place on this policy, similar to where we were with data privacy, but every year we delay it, the more ingrained it becomes and the harder it is to unravel.”
“The bill in its current form will do more harm than good”
Eli Wood, the founder of software company Black Flag Design, expressed concern that the bill could inadvertently disadvantage small startups like his that depend heavily on open-source AI systems. These systems serve as publicly available blueprints, enabling developers to access and customize them to craft artificial intelligence solutions. Major corporations such as OpenAI, the creator of ChatGPT, often contribute to these open-source systems. He said the bill could penalize small businesses for algorithmic bias identified in their products, even if the bias originated in the underlying open-source system rather than in anything the small business built itself.
Because of this, Wood argued that generative AI models created by major corporations should be the bill’s target, not small startups.
“AI is the defining technology of our generation, and I believe it’s in the best interest of every Coloradan that we’re having this discussion today, but the bill in its current form will do more harm than good,” he said. “At first glance, it seems like it’s a sensible solution to control impacts of this technology before it negatively impacts society, but I believe it will severely curtail the ability of small organizations like ours and negatively impact democratizing the technology for societal good.”
Logan Cerkovnik, founder of Thumper AI Corporation, said the bill would effectively ban his company’s platform and constitute a “de facto” ban on leasing open-source AI models “while failing to stop algorithmic discrimination due to loopholes.”
Cerkovnik noted that Connecticut’s governor has threatened not to enact the state’s similar bill into law unless startup protections are incorporated. He advocated for scrapping the bill and introducing a revised version in the next legislative session after thorough discussions between the sponsor and artificial intelligence experts because “the future of AI in Colorado is too important to be banned by poorly drafted regulations.”
“Innovation should be encouraged and not stifled, and any legislative measure should strike a balance between consumers and fostering technological advancement,” he said. “The bill is implementing measures that may not be feasible or effective.”
“I have the right to know what models are deciding my future”
Several high school students interested in artificial intelligence argued the bill was necessary, even if it wasn’t perfect. Benjapon Frankel said artificial intelligence can be found in “most everything” and said he was concerned about the increasing prevalence of algorithmic discrimination.
Cherry Creek High School junior Shourya Hooda said the bill provides the state with a “strong base off which to build a robust, innovative AI regulation framework.” He emphasized the significance of the bill’s consumer notification aspect and argued that it ensures that the market is not oversaturated with “useless businesses.”
“This bill ensures that careful developers stay that way and makes reckless development impossible,” he said.
Beth Rudden, CEO of Bast AI, called the bill a “pragmatic and necessary measure” to maintain the integrity of artificial intelligence systems. She argued that the bill is not just about compliance within the industry, but also about holding developers accountable for unethical actions.
“By supporting this bill, we commit to a path that respects consumer rights, promotes transparency, and fosters trust in the technologies that are shaping our future,” she concluded.