Colorado AI Sunshine Act – Q&A with Charles Brennan

The AI Sunshine Act (SB25B-004), introduced during Colorado’s 2025 special legislative session, would simplify the Artificial Intelligence (AI) regulations passed in SB24-205 last year to focus on increased transparency and accountability when an automated decision system is used as a substantial factor in decisions in essential areas such as employment, health care, housing, and legal services. It ensures Coloradans know when such a decision is made and what factors went into it, and gives them the right to correct any inaccurate data that may have been used. The bill also fundamentally shifts responsibility for violations of Colorado’s consumer protection or civil rights laws onto both deployers and developers of AI technology, meaning Colorado businesses are not left solely at fault for unknowingly deploying unreliable or biased systems that violate the law.
This bill has generated much debate at the Capitol and around the state. It’s also been the subject of many misunderstandings and not a little disinformation. We sat down with our colleague Charles Brennan, CCLP’s Income and Housing Policy Director, to examine some of the claims made about the bill.
Question 1: Is the AI Sunshine Act too broad in its regulation of AI? Does it cover harmless tools like spell check that pose no risk of harming workers and consumers?
Charles Brennan: The bill does not regulate AI at all. It requires developers and deployers who use algorithmic decision systems in areas like employment, housing, health care, lending, and education to disclose the use of AI, but only if that system’s output is a substantial factor in the decision. Tools like databases, spreadsheets, spell check, calculators, and antivirus software are not covered. Nothing in the bill would affect the use of other AI-powered tools, such as notetakers (like Otter), budgeting apps (like Origin or Copilot), or task management apps (like Motion). The bill also requires disclosure to consumers when they are interacting with a generative AI system, but does not affect in any way how these systems function, how they are trained, or how they interact with consumers.
Q2: Is it even possible to regulate a technology that is so rapidly evolving?
CB: Again, disclosure is the goal of the bill, not regulation. The AI Sunshine Act would require developers of automated decision systems to make disclosures so that deployers know how to use the system properly and are aware of any known or foreseeable risks that could cause the system to violate Colorado’s civil rights and consumer protection laws. The bill also requires deployers of algorithmic decision systems to inform Colorado consumers when such a system is being used to make a decision related to the provision, cost, or terms of education, employment, financial or lending services, essential government services, health-care services, housing, insurance, or legal services. Consumers would also be informed of the types of data analyzed and up to 20 of the most important characteristics the system considered, and would have the right to correct any inaccurate information.
Q3: Will this bill stifle innovation and harm Colorado’s tech economy?
CB: Oversight and transparency can coexist with innovation; in fact, greater transparency can support technology innovation and adoption. Right now, with limited transparency, the public’s distrust of AI in decisions like hiring or lending is high. The bill’s provisions are meant to address some of the public’s concerns. There is strong public support for measures like informing people when a decision is made by AI and allowing them to correct any errors. Developer disclosures can also help deployers make better AI purchasing choices, which in turn helps create a stronger, more robust market for AI products. Without readily available and accurate data, markets cannot allocate resources efficiently, leading to potential inefficiencies and market failures in Colorado’s growing AI industry.
Q4: Will small businesses and startups be crushed by the compliance costs of this bill?
CB: Although the disclosure requirements on developers and deployers apply to all businesses regardless of size, such disclosures are only required when an algorithmic decision system is a substantial factor in decisions within a narrow set of circumstances. There is a broader requirement that any AI developer or deployer notify consumers when they are interacting with generative AI systems, like ChatGPT. These requirements are not intended to place unnecessary burdens on businesses of any size; rather, they attempt to give consumers the transparency they are looking for when interacting with AI or when it is used to make important decisions about them. The bill also makes AI developers jointly responsible with deployers when an algorithmic decision system leads to a violation of existing law, ensuring that Colorado businesses are not held solely responsible for the actions of powerful tech companies when unreliable or biased algorithms violate the law.
Q5: Do existing consumer protections and civil rights laws already protect workers and consumers, making this bill redundant?
CB: Unlike SB24-205, this bill does not create any new civil rights or consumer protections by prohibiting “algorithmic discrimination.” Instead, it requires disclosure of information so that deployers of algorithmic decision systems in Colorado can ensure their systems will not lead to violations of existing civil rights or consumer protections in our state. The bill is also clear that nothing it proposes would preempt or otherwise limit consumers’ ability to enforce their rights under existing laws.
Q6: Will the bill’s disclosure requirements force companies to reveal trade secrets or proprietary data?
CB: The bill clearly sets out what kind of information about an algorithmic decision system must be shared with AI deployers or with consumers, none of which would require the release of trade secrets. At most, AI deployers must provide consumers with the types and sources of the consumer’s personal data and up to 20 specific personal characteristics that were used to make a decision. AI developers and deployers are not required to disclose how that data was used, the data a system may have been trained on, or any other technical information about how the algorithm was built or developed.
Q7: Are proactive disclosure requirements really necessary?
CB: Proactive disclosures are commonly used across many sectors, from nutrition labels on food, to identifying paid promotional content, to home sale disclosures about asbestos and lead pipes.
The fact is, sometimes AI gets things wrong. AI can misinterpret input, carry unintentional biases into decision-making, or “hallucinate” information that sounds accurate but is invented in error. Without proactive disclosure, people may never know AI influenced a decision about them, and thus can’t challenge it. Creating this kind of transparency, so that consumers clearly know when AI is making a decision about them, is at the heart of the bill.
