Colorado Senate Bill 24-205, the Colorado AI Act, is the first comprehensive US state law regulating AI decision-making. It takes effect on June 30, 2026. If your organization uses AI systems that influence consequential decisions affecting Colorado residents, you have compliance obligations that are now months away from enforcement.
This is not a theoretical concern limited to large technology companies. The law applies to any organization that develops or deploys AI systems making consequential decisions, regardless of headquarters location. The statute includes limited accommodations for small businesses with fewer than 50 full-time equivalent employees, but the core duty of reasonable care applies broadly. For mid-market organizations that have adopted AI tools across operations without formal governance structures, the compliance gap is significant and the timeline is tight.
What the Colorado AI Act Requires
The Act introduces two primary roles with distinct obligations: developers who create AI systems and deployers who use them. Most mid-market organizations fall into the deployer category. If you are purchasing and using AI tools rather than building them, here is what the law requires of you.
Duty of reasonable care. Deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. This is not an aspirational goal. It is a legal standard that regulators will evaluate based on the steps you took or failed to take.
Impact assessments. Deployers must complete and document an algorithmic impact assessment before deploying a high-risk AI system, at least annually thereafter, and within 90 days of any intentional and substantial modification to the system. The assessment must include: the purpose of the AI system, its intended benefits, an analysis of potential risks of algorithmic discrimination, the categories of data processed, the outputs produced, and the transparency measures in place.
Consumer disclosure and appeal rights. Deployers must inform consumers when an AI system is making or substantially contributing to a consequential decision about them. If the system produces an adverse decision, the consumer must be notified with enough detail to understand what happened and how to contest it. Consumers have the right to appeal an adverse decision through human review and the right to correct personal data used in the decision-making process.
Governance measures. Deployers must implement a risk management policy and program that is proportionate to the size and complexity of the organization and the nature of the AI systems in use.
What Counts as a High-Risk AI System
The Act defines a "high-risk AI system" as any system that makes or is a substantial factor in making a consequential decision. A consequential decision is one that has a material legal or similarly significant effect on a consumer in the following areas:
- Education: Enrollment, academic discipline, financial aid, credentialing
- Employment: Hiring, termination, compensation, promotion, performance evaluation
- Financial services: Lending, credit decisions, financial product access
- Healthcare: Medical diagnosis, treatment recommendations, clinical decision support
- Housing: Rental screening, mortgage approval, property valuation
- Insurance: Underwriting, pricing, claims decisions
- Legal services: Access to legal services, legal matter disposition
- Essential government services: Access to public benefits, licensing, regulatory compliance
If your organization uses AI tools in any of these decision areas, those tools likely qualify as high-risk AI systems under the Act. This includes AI-powered applicant tracking systems, AI-assisted underwriting platforms, automated customer service systems that determine access to financial products, and AI tools that influence clinical or diagnostic decisions.
Who Is Affected
The Colorado AI Act has extraterritorial reach. It applies to any organization whose AI systems make consequential decisions affecting Colorado residents. You do not need to be based in Colorado. You do not need to have offices in Colorado. If your products, services, or internal operations use AI to make decisions that affect people who live in Colorado, the law applies to you.
For mid-market organizations with national customer bases, remote employees across multiple states, or financial and healthcare services that cross state lines, the Colorado AI Act is effectively a national regulation. Even if you have no employee or customer in Colorado today, your AI systems could affect a Colorado resident tomorrow, and building compliance now is prudent.
Practical Steps for Mid-Market Organizations
Compliance with the Colorado AI Act requires concrete action, not just awareness. Here is a practical sequence for mid-market organizations that have not yet started formal AI governance.
Step 1: Inventory your AI systems. Identify every AI tool in use across the organization, including tools adopted by individual teams without IT approval. For each tool, document what decisions it influences, what data it processes, and which consumer populations it affects. Shadow AI discovery is a critical component of this step. Research indicates that the majority of employees using AI tools at work have not disclosed this use to their employer.
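The inventory in Step 1 is easier to maintain as structured records than as a spreadsheet of free text. A minimal sketch in Python follows; the record fields and the example entry are illustrative, not prescribed by the Act.

```python
from dataclasses import dataclass, field

# Illustrative inventory record for one AI tool. Field names are
# hypothetical; the Act does not mandate a specific schema.
@dataclass
class AISystemRecord:
    name: str                      # vendor or product name
    owner: str                     # team accountable for the tool
    decisions_influenced: list[str] = field(default_factory=list)
    data_categories: list[str] = field(default_factory=list)
    consumer_populations: list[str] = field(default_factory=list)
    it_approved: bool = False      # False flags potential shadow AI

inventory = [
    AISystemRecord(
        name="Acme ATS",           # hypothetical applicant tracking system
        owner="HR",
        decisions_influenced=["employment"],
        data_categories=["resumes", "assessment scores"],
        consumer_populations=["job applicants"],
        it_approved=True,
    ),
]

# Shadow AI surfaces as entries that were never approved by IT.
shadow_ai = [r.name for r in inventory if not r.it_approved]
```

Keeping `it_approved` as an explicit field makes shadow AI discovery a query over the inventory rather than a separate exercise.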
Step 2: Classify risk levels. For each AI system, determine whether it makes or substantially contributes to a consequential decision as defined by the Act. Any system that influences decisions in the eight regulated areas listed above requires full compliance treatment.
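The classification in Step 2 can start as a simple screen against the eight regulated decision areas. The sketch below is deliberately simplified: a full determination also requires judging whether the system is a "substantial factor" in the decision, which is not reducible to a lookup.

```python
# The eight decision areas regulated by the Colorado AI Act.
REGULATED_AREAS = {
    "education", "employment", "financial services", "healthcare",
    "housing", "insurance", "legal services",
    "essential government services",
}

def is_high_risk(decisions_influenced: list[str]) -> bool:
    """Screen: does any influenced decision fall in a regulated area?

    A positive result means the system needs full compliance review;
    it does not by itself settle the 'substantial factor' question.
    """
    return any(d in REGULATED_AREAS for d in decisions_influenced)

print(is_high_risk(["employment"]))       # True
print(is_high_risk(["marketing copy"]))   # False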
Step 3: Conduct algorithmic impact assessments. For every high-risk system, document the purpose, benefits, risks, data categories, outputs, and safeguards. This assessment must be completed before the June 30, 2026 effective date, updated annually, and revised within 90 days after any intentional and substantial modification to the AI system. The assessment should specifically evaluate risks of algorithmic discrimination based on race, color, ethnicity, sex, religion, age, disability, or other protected characteristics.
Step 4: Implement consumer disclosure mechanisms. Design and deploy the notification systems required when AI influences a consequential decision. This includes both proactive disclosure that AI is involved in the decision-making process and reactive notification when an adverse decision occurs.
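The reactive notification in Step 4 must carry specific content: what the decision was, the role AI played, and the consumer's appeal and data-correction rights. A sketch of a notice template follows; the wording is illustrative, not statutory language, and any real notice should be reviewed by counsel.

```python
def adverse_decision_notice(consumer: str, decision: str,
                            reasons: list[str]) -> str:
    """Draft a consumer notice for an AI-influenced adverse decision.

    Illustrative only. The notice covers the elements the Act requires:
    the decision, the AI system's role, the principal reasons, and the
    rights to human-review appeal and to correct personal data.
    """
    lines = [
        f"Dear {consumer},",
        f"An AI system was a substantial factor in this decision: {decision}.",
        "Principal reasons for the decision:",
        *[f"  - {reason}" for reason in reasons],
        "You have the right to appeal and request review by a human.",
        "You also have the right to correct the personal data used.",
    ]
    return "\n".join(lines)

print(adverse_decision_notice(
    "A. Consumer",
    "rental application declined",
    ["insufficient income-to-rent ratio"],
))
```

Generating the notice from structured fields, rather than hand-writing each one, keeps the required elements from being dropped under time pressure.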
Step 5: Establish governance structures. Create or update your risk management policy to address AI-specific risks. Assign accountability for AI governance. Implement monitoring procedures. Document everything. The Act evaluates compliance based on the proportionality of your governance program to the nature and scale of your AI use.
How NIST AI RMF Provides a Legal Foundation
The Colorado AI Act does not leave organizations guessing about what framework to follow. The statute explicitly references the NIST AI Risk Management Framework by name. Section 6-1-1705 establishes an affirmative defense for deployers who meet two conditions: first, they must discover and cure violations through feedback mechanisms, adversarial testing, red teaming, or internal review; and second, they must comply with the most recent version of the NIST AI RMF or a comparable nationally or internationally recognized risk management framework. Both conditions must be satisfied. This means that adopting NIST AI RMF is legally significant, but framework compliance alone is not sufficient — you must also demonstrate proactive efforts to identify and remediate issues.
NIST AI RMF is the most widely recognized AI governance framework in the United States, and its four core functions (Govern, Map, Measure, and Manage) map directly to the Colorado AI Act's requirements. The framework is not theoretical. Its Govern function establishes the accountability structures and policies the Act requires. Map identifies and classifies AI systems and their associated risks. Measure provides the evaluation methodology for impact assessments. Manage addresses the ongoing monitoring and remediation obligations.
Using NIST AI RMF as the foundation for your Colorado AI Act compliance program creates a dual benefit: it satisfies the specific requirements of Colorado law while building a governance posture that is transferable to the EU AI Act, future state legislation, and industry-specific regulations.
Enforcement and Record Retention
The Colorado AI Act is enforced exclusively by the Colorado Attorney General. There is no private right of action, meaning individual consumers cannot sue under the Act directly. However, the AG's enforcement authority is backed by the state's consumer protection statutes.
Deployers must retain all documentation related to their compliance efforts — including impact assessments, governance policies, and disclosure records — for a minimum of three years. Maintaining thorough records is not optional. It is part of the compliance obligation and essential to establishing the affirmative defense described above.
The Window Is Closing
The Colorado AI Act's June 30, 2026 effective date is less than four months away. Organizations that have not started their compliance programs face a difficult choice: begin now with a structured approach, or scramble when enforcement arrives.
For mid-market organizations, the path forward is not to build a compliance program from scratch. It starts with understanding what AI you have, where it creates risk, and what governance structures are proportionate to your operations. A structured AI security assessment is the practical first step.
At Ayliea, our assessment methodology aligns to NIST AI RMF and evaluates organizations across 10 AI-specific control domains mapped to eight compliance frameworks. If your organization is subject to the Colorado AI Act, we can help you build the evidence base that demonstrates reasonable care before the deadline arrives.
