Colorado’s New AI Law: What Therapists Need to Know
In 2024, Colorado enacted one of the nation’s broadest state laws regulating artificial intelligence (AI). Senate Bill 24-205, Consumer Protections for Artificial Intelligence, sets the stage for how AI will be managed in sensitive fields, and therapists should pay close attention to its implications.
At its core, the law regulates “high-risk AI systems,” which it defines as tools that play a substantial role in making consequential decisions: decisions that significantly affect an individual’s access to services such as education, employment, housing, health care, insurance, financial lending, or legal services. For therapists, this definition should catch your attention. If your practice uses AI-enabled tools to screen clients, support hiring, or assist with insurance eligibility, those activities may fall under the law’s scope.
The law requires both developers and deployers of high-risk AI systems to take active steps to prevent harm. Developers must provide documentation about their systems to deployers, the state attorney general, and the public. Deployers (meaning therapists or practices that adopt these systems) must create an AI risk management program, notify clients when AI is used in high-risk contexts, and make information about these systems accessible.
These requirements take effect on February 1, 2026, giving therapists time to evaluate their current use of AI-based tools and prepare. The most immediate concern is transparency: clients must be informed if AI systems are being used in ways that affect their access to therapeutic services or the terms on which those services are provided. This could include something as routine as an AI tool that screens for insurance coverage or a scheduling system that determines client eligibility for certain programs.
What about AI note-taking software?
Colorado defines a high-risk AI system as one that is a “substantial factor” in making a consequential decision. Even if AI note-taking is framed as merely “assistive,” its outputs often influence therapist decision-making, documentation practices, and potentially billing. If those outputs shape what goes into treatment records or insurance claims, the AI system could arguably be a substantial factor in a consequential decision.
Looking ahead, Colorado’s framework is likely to influence broader AI regulation across the country. While only a handful of states enacted AI laws in 2024, many others introduced bills, and more are expected to pass in 2025. For therapists in private practice, group settings, or clinics, it is no longer safe to assume that AI oversight applies only to big tech. If you are using software that embeds AI, even for routine operations, compliance obligations may follow. The law also underscores that discrimination does not become lawful simply because it originates in an automated system.
The takeaway for therapists is twofold. First, evaluate where AI may already be present in your practice, whether through client-facing tools or back-end systems. Second, begin building policies that account for disclosure, risk assessment, and bias mitigation. Even for smaller practices, now is the time to prepare! Colorado’s model shows where the future of AI regulation is headed, and it is focused on protecting individuals in precisely the areas where therapists work every day.
This blog is intended for educational purposes only and does not constitute specific legal advice for any individual. Reading this material does not establish an attorney-client relationship between the reader and our firm. For personalized legal guidance, please consult a licensed attorney in your jurisdiction.