U.S. State and Federal AI Regulations Applicable to Financial Services

This post highlights enacted U.S. state and federal laws, regulations and regulatory guidance that apply to artificial intelligence (AI) used in financial services. Where proposed bills are notable, they are mentioned in the narrative but not included in the summary entries. The information is current as of 30 Nov 2025. Rhindon Cyber helps financial‑services firms navigate complex cybersecurity and regulatory requirements.

Overview

Because the U.S. has no comprehensive federal AI statute, regulators largely apply existing fair‑lending, consumer protection, data‑privacy and civil rights laws to AI. Federal agencies have issued guidance clarifying that AI does not provide a “safe harbor” and have adopted sector‑specific rules (e.g., automated valuation models). States have increasingly filled the vacuum with AI‑specific legislation focused on transparency, bias audits and consumer disclosures.

  • Federal guidance: The Consumer Financial Protection Bureau (CFPB) published guidance in Sept 2023 reminding lenders that the Equal Credit Opportunity Act requires specific reasons for credit denials, even when complex AI or algorithmic models are used. Lenders may not simply check boxes on sample forms; they must explain the actual factors used by the model[1] (a sketch of deriving such reason codes follows this list). A joint statement issued in April 2023 by the CFPB, Department of Justice (DOJ), Federal Trade Commission (FTC) and Equal Employment Opportunity Commission (EEOC) emphasized that AI and automated systems must comply with existing civil‑rights, consumer‑protection and fair‑competition laws; there is no AI exemption[2]. In 2024, federal banking regulators issued a final rule under the Dodd‑Frank Act requiring mortgage originators and secondary‑market issuers that use automated valuation models (AVMs) to establish quality‑control policies, guard against data manipulation and ensure compliance with nondiscrimination laws[3].
  • State initiatives: In the absence of a federal AI law, states have begun enacting statutes addressing AI transparency, bias and consumer protection, many of which impact financial services. Notable statutes include Colorado’s AI Act (SB 24‑205), Utah’s AI Policy Act (S.B. 149), California’s Generative AI Training Data Transparency Act (AB 2013), Illinois House Bill 3773 (amending the Human Rights Act), New York City Local Law 144 (bias audits for automated employment decision tools), and Texas’s Responsible Artificial Intelligence Governance Act (HB 149). Several states—including Massachusetts and Oregon—have issued Attorney‑General guidance confirming that existing consumer‑protection laws apply to AI.
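
To make the “actual factors” requirement concrete, the sketch below shows one way a lender might derive applicant‑specific adverse‑action reasons from a scoring model instead of generic checklist reasons. This is a minimal illustration, not a prescribed method: the model, feature names, coefficients and cutoff are all hypothetical.

```python
import math

# Hypothetical linear scoring model: feature -> (coefficient, portfolio mean).
FEATURES = {
    "debt_to_income_ratio":     (-4.0, 0.30),
    "months_since_delinquency": (0.05, 48.0),
    "credit_utilization":       (-2.5, 0.35),
    "income_verified":          (1.2, 0.90),
}
INTERCEPT = 2.0
CUTOFF = 0.5  # illustrative approval threshold on the logistic score

def score(applicant: dict) -> float:
    """Logistic score in (0, 1) from the hypothetical linear model."""
    z = INTERCEPT + sum(coef * applicant[name]
                        for name, (coef, _) in FEATURES.items())
    return 1.0 / (1.0 + math.exp(-z))

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Rank the features that pulled this applicant's score down relative to
    the portfolio mean: the actual factors the model used for this applicant."""
    contributions = {name: coef * (applicant[name] - mean)
                     for name, (coef, mean) in FEATURES.items()}
    negatives = sorted((v, name) for name, v in contributions.items() if v < 0)
    return [name for _, name in negatives[:top_n]]

applicant = {"debt_to_income_ratio": 0.55, "months_since_delinquency": 6.0,
             "credit_utilization": 0.90, "income_verified": 1.0}
if score(applicant) < CUTOFF:
    print("Adverse action. Principal reasons:", adverse_action_reasons(applicant))
```

The point of the ranking step is that the stated reasons come from the model’s own per‑applicant contributions, which is what the guidance asks for rather than boilerplate checkbox reasons.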

U.S. AI Laws and Regulations Relevant to Financial Services

The entries below summarize enacted laws, regulations and binding guidance that affect the use of AI in financial services. Each entry identifies the level of government and the enforcing agency, names the legislation or regulation, summarizes the AI‑related requirements and notes any penalties and the types of firms covered.

Federal – CFPB (Consumer Financial Protection Bureau)
Equal Credit Opportunity Act (ECOA) & Regulation B – CFPB guidance on AI credit decisions (Sept 19 2023)[1]
  • Requirements: The CFPB reminded lenders that ECOA requires specific and accurate reasons for credit denials and adverse‑action notices. Creditors using AI or “black‑box” models cannot simply check boxes on sample forms; they must explain the actual factors used by their algorithm[1]. The guidance underscores that AI does not provide a special exemption from ECOA[4].
  • Penalties/fines: ECOA allows the CFPB and DOJ to seek civil penalties and damages for discrimination; no new AI‑specific fine was created.
  • Firms covered: All creditors and lenders (banks, credit unions, mortgage companies, fintechs) using algorithmic models for underwriting or credit decisions.

Federal – Interagency (CFPB, FDIC, FHFA, FRB, NCUA, OCC)
Automated Valuation Models (AVMs) Final Rule (2024)[3]
  • Requirements: Under §1125 of the Dodd‑Frank Act, six regulators adopted a final rule requiring institutions that use AVMs for residential mortgage collateral to adopt policies, practices and control systems to: (i) ensure high‑quality valuations, (ii) protect against data manipulation, (iii) avoid conflicts of interest, (iv) conduct random sample testing and reviews, and (v) comply with nondiscrimination laws[3].
  • Penalties/fines: Non‑compliance can result in enforcement actions under each agency’s existing authority (e.g., civil money penalties under the Federal Deposit Insurance Act). The rule itself does not set a separate AI fine.
  • Firms covered: Mortgage originators, secondary‑market issuers and other financial institutions using AVMs to value residential real‑estate collateral.

Federal – FTC, DOJ, CFPB & EEOC
Joint Statement on Enforcement Against Discrimination and Bias in Automated Systems (Apr 25 2023)[2]
  • Requirements: The agencies pledged to vigorously enforce existing civil‑rights, fair‑lending, fair‑competition and employment laws against discriminatory uses of AI. They warned that AI tools must not be used to discriminate and that claims of “innovation” will not shield companies from liability[2].
  • Penalties/fines: Enforcement under existing statutes (e.g., Civil Rights Act, Fair Housing Act, FTC Act), including civil penalties, damages and injunctions.
  • Firms covered: Businesses across sectors, including lenders, landlords, employers and technology vendors using AI.

Federal – OCC, FRB, FDIC & other prudential regulators
Model Risk Management Guidance (OCC Bulletin 2011‑12 / Federal Reserve SR 11‑7)
  • Requirements: Although not AI‑specific, prudential regulators require banks to maintain robust model risk‑management frameworks for all models, including AI and machine‑learning models. The guidance covers governance, validation, documentation, testing and oversight[5].
  • Penalties/fines: Violations may result in enforcement actions, civil money penalties and supervisory corrective actions.
  • Firms covered: Banks, federal savings associations and bank holding companies using AI or other quantitative models.

State – California
AB 2013 (Generative AI: Training Data Transparency Act) – effective Jan 1 2026[7]
  • Requirements: Requires developers of generative AI systems made available to Californians to publish on their websites detailed documentation of the training data, including sources, data types, whether personal or copyrighted material was used and whether synthetic data was generated[7]. This transparency aims to address the “black‑box” problem but does not impose usage restrictions.
  • Penalties/fines: Enforcement through the California attorney general; the act itself does not specify fines, but violations may constitute unfair competition under the Unfair Competition Law.
  • Firms covered: AI developers (including financial‑services firms that develop or substantially modify generative models).

State – California (Civil Rights Council)
Automated‑Employment‑Decision Systems (ADS) Regulations (effective Oct 1 2025)
  • Requirements: Regulations issued under the Fair Employment and Housing Act prohibit employers from using ADS that discriminate on the basis of protected traits. Employers must maintain meaningful human oversight, test ADS for disparate impact, retain records for four years, provide alternative assessments when ADS could disadvantage people with disabilities and ensure vendors can be held liable when they control the tool[8].
  • Penalties/fines: Violations of the Fair Employment and Housing Act can result in administrative fines and civil liability; the regulations themselves do not set separate fines.
  • Firms covered: Employers (including financial institutions) using automated hiring, promotion or performance‑management tools.

State – Colorado (Attorney General)
SB 24‑205 – Colorado Artificial Intelligence and Consumer Protection Act – effective June 30 2026
  • Requirements: Defines “high‑risk AI systems” as those that make or influence “consequential decisions” in areas such as credit and lending. Developers must use reasonable care to protect consumers from algorithmic discrimination, provide detailed documentation on training data, limitations and intended uses, and issue public summaries[9]. Deployers must establish a risk‑management program and conduct annual impact assessments for each high‑risk AI system analyzing purpose, risks, data, performance metrics and monitoring[10]. Consumers must receive notice when an AI system makes or influences a consequential decision.
  • Penalties/fines: Violations constitute an unfair trade practice under the Colorado Consumer Protection Act (CPA); the attorney general has exclusive enforcement authority[11]. The statute does not set a specific per‑violation fine, but CPA penalties can include civil penalties of up to $20,000 per violation.
  • Firms covered: Developers and deployers of high‑risk AI systems, including financial institutions using AI for credit decisions and insurance underwriting.

State – Colorado
HB 24‑1468 (Artificial Intelligence Impact Task Force)
  • Requirements: Expands Colorado’s facial‑recognition task force into the Artificial Intelligence Impact Task Force, with experts in generative AI and representatives of communities affected by AI discrimination. It does not impose substantive requirements on financial firms but signals future regulation.
  • Penalties/fines: None; advisory task force only.
  • Firms covered: Not applicable (task force).

State – Illinois
HB 3773 (Limit Predictive Data Analytics) – enacted Aug 9 2024; effective Jan 1 2026
  • Requirements: Amends the Illinois Human Rights Act to restrict the use of predictive data analytics in employment and credit decisions. Employers may not use race or ZIP code as a proxy for race in employment decisions, and entities assessing creditworthiness or setting interest rates for more than 50 Illinois residents annually must ensure race or ZIP code is not used as a factor[12]. Violations constitute discrimination under the Human Rights Act.
  • Penalties/fines: Enforcement runs through the Illinois Department of Human Rights; there is no private right of action. Remedies may include cease‑and‑desist orders, damages and civil penalties under the existing Human Rights Act.
  • Firms covered: Employers and entities that make credit decisions (banks, credit unions, insurers and fintech lenders) for Illinois residents.

State – Massachusetts (Attorney General)
AG Advisory on Artificial Intelligence (Apr 16 2024)
  • Requirements: The Massachusetts AG clarified that existing consumer‑protection (Chapter 93A), civil‑rights and data‑security laws apply to AI systems. The advisory notes that the “novelty, complexity and claimed inscrutability” of AI does not remove it from the reach of Chapter 93A[13]. Developers and users must prevent deceptive marketing, discrimination and data‑security violations.
  • Penalties/fines: Violations of Chapter 93A can result in civil penalties of up to $5,000 per violation plus restitution; no new AI‑specific fines were created.
  • Firms covered: Companies and individuals developing, selling or using AI systems in Massachusetts, including financial‑services firms.

State – Oregon (Attorney General)
Oregon DOJ Guidance on AI (Dec 24 2024)
  • Requirements: The Oregon DOJ issued guidance explaining that the state’s Unlawful Trade Practices Act, Consumer Privacy Act and Equality Act apply to AI use. Businesses must ensure that AI systems protect consumer data, privacy and fairness[14]. The guidance encourages companies to consult existing laws when implementing AI but does not impose new requirements.
  • Penalties/fines: The underlying statutes provide enforcement mechanisms and civil penalties (e.g., up to $25,000 per violation under the Unlawful Trade Practices Act).
  • Firms covered: Businesses developing or using AI in Oregon, including banks, lenders and insurance companies.

State – Utah
S.B. 149 – AI Policy Act (effective May 1 2024)[15]
  • Requirements: Establishes an Office of AI Policy and requires any business using AI (e.g., a chatbot) to disclose when a consumer is interacting with AI. It clarifies that AI use does not excuse violations of consumer‑protection laws and authorizes the Division of Consumer Protection to issue rules.
  • Penalties/fines: The Division of Consumer Protection may impose an administrative fine of up to $2,500 per violation for failing to disclose AI use; courts may impose civil penalties of up to $5,000 per violation and can order disgorgement and injunctive relief[15].
  • Firms covered: Businesses (including financial institutions and fintechs) that interact with consumers using AI tools.

State – New York (Department of Financial Services, NY DFS)
Industry Letter on Cybersecurity Risks from AI (Oct 16 2024)[6]
  • Requirements: DFS advised regulated entities to evaluate cybersecurity risks introduced by AI, urging compliance with the existing 23 NYCRR Part 500 Cybersecurity Regulation. The letter highlights risks such as AI‑enabled social engineering and emphasizes risk assessment, governance, monitoring and third‑party management[6].
  • Penalties/fines: No new penalties; non‑compliance with Part 500 can result in administrative penalties under that regulation.
  • Firms covered: “Covered entities” under Part 500, including New York‑regulated banks, insurance companies and licensed lenders.

State – New York City (Department of Consumer and Worker Protection)
Local Law 144 of 2021 & final rules on Automated Employment Decision Tools (enforced since July 5 2023)
  • Requirements: Employers and employment agencies may not use automated employment decision tools (AEDTs) to screen or rank candidates unless: (1) an independent bias audit has been performed within the prior year, (2) the audit results are publicly available, and (3) candidates and employees receive at least 10 business days’ notice and the right to request an alternative evaluation[16]. Final rules specify how to calculate selection rates by race/ethnicity and sex (see the sketch following these entries), define “independent auditor” and clarify that multiple employers using the same AEDT may rely on a common audit[17].
  • Penalties/fines: The NYC Department of Consumer and Worker Protection can impose civil penalties of $500 per violation, increasing to $1,500 per incident for repeat offenses; each day an AEDT is used without a valid audit constitutes a separate offense[18].
  • Firms covered: Employers and employment agencies operating in NYC (including banks and fintechs with NYC employees) using AI‑based hiring or promotion tools.

State – Texas (Attorney General and Department of Information Resources)
HB 149 – Texas Responsible Artificial Intelligence Governance Act (TRAIGA) – effective Jan 1 2026
  • Requirements: Establishes an Artificial Intelligence Council and requires developers and deployers of AI systems used in Texas to follow transparency obligations, including disclosing AI interactions to consumers and complying with state and federal law. The act preempts local AI regulation, creates an AI regulatory sandbox program and allows the attorney general to enforce against discriminatory or deceptive AI use.
  • Penalties/fines: A violator who fails to cure is liable for a civil penalty of $10,000–$12,000 per curable violation, $80,000–$200,000 per uncurable violation and $2,000–$40,000 per day for continuing violations[19]. State agencies may revoke or suspend licenses and impose penalties of up to $100,000[20]. Civil penalties are enforced by the Texas attorney general[21].
  • Firms covered: “Developers” and “deployers” of AI systems used in Texas, including banks, lenders and fintechs offering AI‑based services; consumers interacting with AI must receive clear disclosures.
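
As a rough illustration of the bias‑audit arithmetic referenced in the Local Law 144 entry above: the final rules work in terms of selection rates per demographic category and impact ratios computed against the most‑selected category. The sketch below uses invented category labels and counts; it is not the DCWP’s audit template.

```python
# category -> (number selected, number assessed); invented sample data
outcomes = {
    "category_a": (48, 120),
    "category_b": (30, 100),
    "category_c": (9, 60),
}

# Selection rate: share of assessed candidates in a category who were selected.
selection_rates = {c: sel / total for c, (sel, total) in outcomes.items()}

# Impact ratio: a category's selection rate divided by the highest category's rate.
highest = max(selection_rates.values())
impact_ratios = {c: rate / highest for c, rate in selection_rates.items()}

for c in outcomes:
    print(f"{c}: selection rate {selection_rates[c]:.2f}, "
          f"impact ratio {impact_ratios[c]:.2f}")
```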

Notes on Proposed or Pending Legislation

  • California AB 1018, SB 813 & SB 833 (2025) – Proposed laws that would, respectively, create oversight for automated decision systems used in consequential decisions (including financial services), grant immunity to AI developers certified by a multi‑stakeholder organization and require human oversight for AI used in critical infrastructure, including finance. As of Nov 30 2025, these bills remain under consideration and are not included in the summary above.
  • Connecticut SB 2 (2025) – Would establish AI governance programs and an AI regulatory sandbox; pending in the House.
  • Hawaii SB 59 (2025) – Would prohibit discriminatory “algorithmic eligibility determinations” affecting access to credit and other life opportunities.
  • Illinois SB 2203 (Preventing Algorithmic Discrimination Act) – Would require annual impact assessments and consumer notice for automated decision tools used in consequential decisions, including financial services.

Implications for Financial‑Services Firms

Financial‑services firms operate across jurisdictions and must integrate AI‑specific laws with existing fair‑lending, privacy and consumer‑protection obligations. Key compliance themes include:

  1. Transparency and consumer disclosures – Many state laws (e.g., Utah SB 149, Colorado SB 24‑205) require firms to notify consumers when AI is used in a decision or interaction (see the first sketch after this list). California AB 2013 demands transparency on training data, and New York City Local Law 144 mandates public bias‑audit reports.
  2. Fair lending and antidiscrimination – Federal laws (ECOA, Fair Housing Act) already prohibit discrimination; new state laws (Colorado, Illinois HB 3773, Texas HB 149) explicitly apply these principles to AI. Firms must perform impact assessments and monitor AI systems for disparate outcomes.
  3. Risk management and governance – Regulators expect robust model risk‑management frameworks. The Colorado AI Act requires annual impact assessments (see the second sketch after this list); prudential regulators demand model validation and governance; Texas’s TRAIGA establishes an AI council and sandbox; and California’s employment regulations require human oversight of ADS.
  4. Cybersecurity and data protection – AI introduces cybersecurity risks; the NY DFS industry letter instructs covered entities to apply Part 500 cybersecurity controls to AI. Data‑privacy laws (CCPA, Colorado Privacy Act) and AG guidance in Massachusetts and Oregon remind firms that existing privacy obligations extend to AI.
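
On the first theme, the following is a minimal sketch of an up‑front AI‑interaction disclosure in the spirit of Utah SB 149. The disclosure wording, the session class and the `generate_reply` stand‑in are hypothetical, not statutory text or a known API.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

def generate_reply(prompt: str) -> str:
    """Stand-in for a real model call; purely illustrative."""
    return f"(model answer to {prompt!r})"

class DisclosedChatSession:
    """Guarantees the session opens with a clear AI disclosure."""

    def __init__(self) -> None:
        self._disclosed = False

    def respond(self, prompt: str) -> str:
        reply = generate_reply(prompt)
        if not self._disclosed:  # disclose once, before the first answer
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n\n{reply}"
        return reply

session = DisclosedChatSession()
print(session.respond("What is my card's APR?"))
```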
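
On the third theme, the sketch below structures the record an annual impact assessment might produce, following the Colorado AI Act’s categories (purpose, risks, data, performance metrics, monitoring). The field names and sample values are our own reading, not statutory language.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """One assessment record per high-risk AI system per year (illustrative)."""
    system_name: str
    assessed_on: date
    purpose: str                           # intended use and deployment context
    discrimination_risks: list[str]        # foreseeable algorithmic-discrimination risks
    data_summary: str                      # categories and sources of data processed
    performance_metrics: dict[str, float]  # e.g., error rates, outcome-parity measures
    monitoring_plan: str                   # post-deployment monitoring and safeguards

record = ImpactAssessment(
    system_name="consumer-credit-underwriting-v3",  # hypothetical system
    assessed_on=date(2026, 6, 30),
    purpose="Score unsecured personal-loan applications",
    discrimination_risks=["proxy effects from ZIP-code-correlated inputs"],
    data_summary="Bureau tradelines and application data; no protected-class fields",
    performance_metrics={"lowest_approval_impact_ratio": 0.91},
    monitoring_plan="Quarterly disparate-impact review; drift alerts on inputs",
)
print(record.system_name, record.assessed_on)
```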

Conclusion

U.S. financial institutions using AI must navigate a patchwork of state and federal requirements. While the federal government continues to rely on existing laws and issue guidance, states have begun enacting AI‑specific statutes that impose disclosure obligations, impact assessments, bias audits and penalties for non‑compliance. Firms should monitor developments, update their AI governance frameworks and ensure that AI‑driven decisions align with fair‑lending, privacy and consumer‑protection laws.


[1] [4] CFPB Issues Guidance on Credit Denials by Lenders Using Artificial Intelligence | Consumer Financial Protection Bureau

https://www.consumerfinance.gov/about-us/newsroom/cfpb-issues-guidance-on-credit-denials-by-lenders-using-artificial-intelligence

[2] CFPB and Federal Partners Confirm Automated Systems and Advanced Technology Not an Excuse for Lawbreaking Behavior | Consumer Financial Protection Bureau

https://www.consumerfinance.gov/about-us/newsroom/cfpb-federal-partners-confirm-automated-systems-advanced-technology-not-an-excuse-for-lawbreaking-behavior

[3] Agencies Issue Final Rule to Help Ensure Credibility and Integrity of Automated Valuation Models | FHFA

https://www.fhfa.gov/news/news-release/agencies-issue-final-rule-to-help-ensure-credibility-and-integrity-of-automated-valuation-models

[5] Model Risk Management, Comptroller’s Handbook

https://www.occ.treas.gov/publications-and-resources/publications/comptrollers-handbook/files/model-risk-management/pub-ch-model-risk.pdf

[6] Industry Letter – October 16, 2024: Cybersecurity Risks Arising from Artificial Intelligence and Strategies to Combat Related Risks | Department of Financial Services

https://www.dfs.ny.gov/industry-guidance/industry-letters/il20241016-cyber-risks-ai-and-strategies-combat-related-risks

[7] Bill Text – AB-2013 Generative artificial intelligence: training data transparency.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml

[8] AI in Hiring: Emerging Legal Developments and Compliance Guidance for 2026 | HR Defense

https://www.hrdefenseblog.com/2025/11/ai-in-hiring-emerging-legal-developments-and-compliance-guidance-for-2026/

[9] [10] [11] Colorado SB 24‑205, signed act text (2024a_205_signed.pdf) | Colorado General Assembly

https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf

[12] Illinois: Bill on predictive data analytics signed by Governor | News | DataGuidance

https://www.dataguidance.com/news/illinois-bill-predictive-data-analytics-signed

[13] Massachusetts Attorney General Advisory on Artificial Intelligence (Apr 16 2024) | Mass.gov

https://www.mass.gov/doc/ago-ai-advisory-41624/download

[14] DOJ Issues Guidance on AI for Oregon Businesses | Oregon Department of Justice

https://www.doj.state.or.us/media-home/news-media-releases/ag-rosenblum-issues-guidance-on-ai-for-oregon-businesses

[15] Utah S.B. 149 (AI Policy Act), enrolled bill text (SB0149.pdf) | Utah State Legislature

https://le.utah.gov/~2024/bills/sbillenr/SB0149.pdf

[16] [18] NYC 144 Law: Automated Employment Decisions Compliance Guide

https://mosey.com/blog/nyc-local-law-144-compliance

[17] DCWP Notice of Adoption for Use of Automated Employment Decisionmaking Tools | NYC Rules

https://rules.cityofnewyork.us/wp-content/uploads/2023/04/DCWP-NOA-for-Use-of-Automated-Employment-Decisionmaking-Tools-2.pdf

[19] [20] [21] Texas HB 149, bill analysis | Texas Legislature Online

https://capitol.texas.gov/tlodocs/89R/analysis/html/HB00149S.htm