Financial Services Committee Examines Artificial Intelligence in Financial Services

Today, the House Committee on Financial Services, led by Chairman French Hill (AR-02), held a hearing exploring how artificial intelligence (AI) is being used in the financial services and housing sectors. Members assessed how current laws and regulations apply and identified where existing frameworks may create uncertainty or stifle innovation.

Chairman Hill said, “AI has shown its transformative potential to reshape how financial institutions operate, from enhancing analysis, to managing risk, mitigating fraud, and, importantly, enhancing customer service. However, as with any innovation, risks give rise to new challenges. … To move forward, we must embrace and adapt to innovation. … Identifying gaps and obstacles in our regulatory frameworks will help Congress create an AI landscape where innovation can flourish without unnecessary barriers, while ensuring robust consumer protections, and risk-based, technology-neutral regulation.”

On How AI Innovation Helps Combat Fraud:

Subcommittee on Digital Assets, Financial Technology, and Artificial Intelligence Chair Bryan Steil (WI-01) said, “Earlier this year, Mr. Chairman Hill and I introduced the bipartisan Unleashing AI Innovation in Financial Services Act, which enables regulatory sandboxes at the federal financial agencies that are targeted in size and scope. These sandboxes, they have a handful of things. They have to be approved and overseen by federal regulators. They require compliance strategies and risk management … they must not impose systemic or national security risks. … Additionally, fraud and unsafe and unsound practices will remain prohibited under the sandboxes. … the sandboxes provide a more secure environment to experiment with AI, enabling innovation with built in guardrails and federal oversight.”

Subcommittee on Oversight and Investigations Chair Dan Meuser (PA-09) said, “… Many of us, the leadership of this Committee, as well as the Trump Administration, is committed to unleashing AI’s full potential, so the United States wins the AI race with investment and energy dominance to support the AI infrastructure. My home state of Pennsylvania is doing everything we can to draw in as much AI infrastructure as possible. However, it brings risks that we're talking about and exposing here, which are happening now, and we want to mitigate for the future. They definitely include fraud, scams, profiling, and seem to be growing more sophisticated. AI is proving to be an incredibly strong tool for detecting and shutting down these very threats but as well creating them.”

Subcommittee on Capital Markets Chair Ann Wagner (MO-02) said, “NASDAQ was an important player in the creation of electronic trading. And, while rapid growth over the last few years in generative artificial intelligence, or AI, has brought this technology center stage for the general public, NASDAQ has been quietly using AI for years and years to fight fraud and increase market efficiency, liquidity, and transparency.”

On How Unified AI Laws Promote Innovation:

Rep. William Timmons (SC-04) said, “If the United States is going to remain a global leader in innovation, we must adopt clear and harmonized rules that support technological progress while also protecting consumers and preserving market integrity. Industry leaders are increasingly concerned about the growing patchwork of state laws related to artificial intelligence. These state requirements often conflict with one another, whether in the form of impact assessments, documentation, standards, or definitions of high-risk systems for firms that operate across the country. These inconsistencies can create significant operational challenges, especially when artificial intelligence supports critical functions such as fraud detection and cyber defense. Without a unified federal framework that replaces duplicative and contradictory state rules, we risk higher compliance costs, slower innovation, and weaker protection for consumers.”

Rep. Marlin Stutzman (IN-03) said, “During the Biden Administration, AI innovation was viewed primarily as a threat to the American people. While President Trump has set the country back on track towards innovation and American AI dominance on the world stage, we cannot have Biden's allies in anti-innovation states like California and Massachusetts setting the trend on overregulating AI.”

Rep. Young Kim (CA-40) said, “Earlier this year, the state of California, where I'm from, they passed S.B. [53] that would unfairly regulate AI and impose heavy compliance burdens on companies. Now, states across the country are looking to this California model as a basis for developing their own artificial intelligence regulations. Therefore, there is an urgency for Congress to establish a federal framework for AI. That's why I support legislation like Chairman French Hill’s Unleashing AI Innovation in Financial Services Act that would create federal regulatory sandboxes.”

Witnesses Echoed the Work of the Committee:

Ms. Jeanette Manfra, Vice President and Global Head of Risk & Compliance, Google Cloud said, “Advances in AI have led to increased adoption in the financial services sector. A prominent use for this technology is to assist in key compliance and risk functions, including the detection of fraud, money laundering, and other financial crimes, as well as trade manipulation. As the use of these models grows, so do questions about managing risks associated with the models. In particular, regulators, financial institutions, and technology service providers have been looking at whether existing model risk management guidance (“MRM Guidance”)—which has traditionally been the regulatory regime applicable to managing risk in the financial services industry—continues to be relevant for AI models and, if so, how the guidance should be interpreted and applied to this new technology.”

Mr. Tal Cohen, President, Nasdaq said, “AI-specific regulation should be consistent and harmonized, meaning that it should avoid creating gaps, overlaps, or inconsistencies among different regulators, jurisdictions, or sectors, and that it should promote coordination and cooperation among the regulators, the industry, and the international community. Within the federal government, while we oppose the creation of a central regulator, we support leveraging NIST or another body to provide coordination across government to ensure consistent regulatory standards and development of the federal workforce knowledge of AI. We also believe that Congress should consider appropriate action to avoid the creation of a patchwork of differing state laws governing AI as this could stifle innovation, increase expense, reduce the availability of AI tools to various states, and harm the competitiveness of the United States globally.”

Mr. Nicholas Stevens, Vice President of Product, Artificial Intelligence, Zillow said, “For a product manager, the most important test of any product is whether the product solves a real problem for its users. In the housing market, those problems include a lack of affordability, limited supply, and outdated systems that frustrate consumers. We see opportunities for AI to help tackle all of those interconnected challenges. For example, AI can help reduce loan origination costs, which have risen by over 40 percent since 2019, by automating manual work like document checks, compliance reviews, and loan summary generation. It could help local governments clear permitting and zoning backlogs by reviewing project submissions and quickly identifying issues and solutions when projects are submitted. It can also help builders submit compliant plans to accelerate the permitting process. Together, these innovations can unlock housing supply by reducing administrative delays. The integration of AI into the homebuying system should go hand-in-hand with the modernization of that system, adapting rules so they continue to protect people while allowing for innovation and an improved consumer experience.”

Ms. Wendi Whitmore, Chief Security Intelligence Officer, Palo Alto Networks said, “At Palo Alto Networks, we see firsthand how AI-driven cybersecurity is essential to protecting privacy, strengthening national security, and safeguarding our digital way of life. The risky outcome for society would be to not meaningfully leverage AI for cyber defense. Every day, Palo Alto Networks detects up to 8.95 million new attacks. The process of continuous discovery and analysis allows threat detection to stay ahead of the adversary. This real-time awareness of the threat landscape allows our company to block up to 30.9 billion attacks each day. This would not be possible without AI. We are committed to disrupting the status quo of the cybersecurity industry to simultaneously: 1) deliver transformative cybersecurity outcomes, 2) drive much-needed cost rationalization for network defenders, and 3) eliminate inefficient, manual processes. This innovative spirit will be critical to combatting not just the threats of today, but also the emerging risks – like encryption-breaking quantum computing – of tomorrow.”