Skip to main content

Looking for Valuant? You are in the right place!

Valuant is now Abrigo, giving you a single source to Manage Risk and Drive Growth

Make yourself at home – we hope you enjoy your new web experience.

Looking for DiCOM? You are in the right place!

DiCOM Software is now part of Abrigo, giving you a single source to Manage Risk and Drive Growth. Make yourself at home – we hope you enjoy your new web experience.

Looking for TPG Software? You are in the right place!

TPG Software is now part of Abrigo. You can continue to count on the world-class Investment Accounting software and services you’ve come to expect, plus all that Abrigo has to offer.

Make yourself at home – we hope you enjoy being part of our community.

AI cybersecurity risks: What they are and how financial institutions can mitigate them

Edward Callis, CPA, CISSP, CCSP
October 17, 2025
Read Time: 0 min

AI poses 3 main cybersecurity risks to banks & credit unions

Understand the specific AI-related risks and take action now to mitigate cybersecurity threats at your bank or credit union. Key topics covered in this post:   
This blog was rewritten and updated to reflect recent threats and trends.

AI can enhance operations and add risks

As artificial intelligence becomes more embedded in financial institutions’ daily operations, cybersecurity is both urgent and complex. AI offers enhanced capabilities for threat detection and incident response, but it also introduces new opportunities for cybercriminals to exploit cybersecurity vulnerabilities at a greater scale and speed.

This Cybersecurity Awareness Month is a good reminder for banks and credit unions to understand specific cybersecurity risks associated with AI so they can manage and mitigate them. Many mitigation strategies require applying the same fundamentals of diligence, oversight, and employee training to AI tools that institutions apply to other technology in their environment. Even so, financial institutions need to evaluate how AI fits into existing cybersecurity frameworks, reassessing the testing and oversight needed to protect data, evaluate vendors, and stay exam ready.

Watch the on-demand webinar, "Navigating AI risks: Policies, vendors, and compliance."

Watch now

What are the cybersecurity risks of AI?

Three main cybersecurity risks are associated with AI, according to a 2024 letter by the New York State Department of Financial Services (DFS) to the banking industry:

  • AI-enabled social engineering: AI has amplified traditional social engineering attacks, such as phishing. It can generate realistic audio, video, and text deep fakes that are highly personalized and sophisticated, making fraudulent activity appear alarmingly legitimate.
  • Faster and more advanced cyberattacks: Cybercriminals can use AI to scan and analyze vast amounts of data quickly, identifying and exploiting security vulnerabilities more efficiently than ever before. They can conduct reconnaissance, deploy malware, and exfiltrate nonpublic information (NPI) at an unprecedented rate.
  • Data misuse or theft involving sensitive information: The large datasets (often including NPI from institutional or customer data) used in AI models themselves represent new cybersecurity exposure. Threat actors are incentivized to target entities with substantial amounts of information, increasing the risk of data breaches.

Even though each of these risks is an extension of existing risks, they can undermine customer trust and institutional resilience, so the DFS letter’s messaging to state-regulated entities is nevertheless relevant for banks and credit unions across the country. 

Risk mitigation with data management & planning

Financial institutions must treat AI not as an isolated innovation but as another layer of their cybersecurity environment that needs to be tested, documented, and governed.

Data encryption and management, risk assessment, response planning, and vendor due diligence are among the areas financial institutions should focus on when it comes to protecting against AI-related risks. Financial institutions will want to:

1. Revisit data management and encryption efforts

Central to protecting any kind of data financial institutions have in their systems is encryption of data both at rest and in transit. Encryption is vital to preventing unauthorized access and maintaining confidentiality, so financial institutions should verify encryption practices, irrespective of whether AI is in scope. Similarly, financial institutions should regularly assess security, privacy, and cyber resiliency as part of ongoing efforts to safeguard sensitive information, regardless of whether it is tied to AI models and tools. Technical controls and data governance are vital.

2. Update risk assessments, policies, and incident response plans

Institutions’ risk assessments should identify potential AI-related threats and implement appropriate controls. This includes regular reviews and updates to security policies and procedures. As AI tools are introduced, they need to be integrated into your institution’s cybersecurity and incident response plans. Review your current policies and determine whether they cover AI-related incidents, such as compromised models or data misuse.

If an AI model used for customer service were manipulated or “poisoned,” for example, your response plan should outline how to isolate it, communicate with affected parties, and analyze the event. Institutions should also consider how to maintain essential operations while that model is taken offline.

Board and senior management oversight are critical to these updates. Regular briefings on AI initiatives, ideally quarterly, help ensure that the use of AI aligns with the institution’s broader strategy and risk appetite. These discussions should include the results of any AI testing or model assessments, reinforcing accountability and transparency.

AI risk requires vendor due diligence & training

3. Strengthen due diligence and vendor oversight

For many institutions, AI arrives through third-party vendors. That means the focus should be on understanding how those tools are built and how they’re secured before implementation. The institution can offload implementation but not risk ownership.

When assessing AI vendors, request details about the specific models used and the data on which they were trained. If you’re using a large language model (LLM), ask whether training is ongoing and what safeguards exist for issues like prompt injection and hallucinations. Vendor contracts should reflect these considerations and include:

  • Disclosure of the model type and its training parameters
  • Limitations on using your institution’s data for further training
  • AI-specific security reporting
  • Prohibitions against sharing data with public or fourth-party models

You should also request results from AI-specific penetration testing, ideally following the OWASP Top 10 for LLMs, performed by qualified, independent assessors. That testing provides confidence that vendors are applying recognized security standards to their AI systems. Due diligence should be refreshed regularly, and of course, everything should be documented.

4. Expand training and ensure collaboration

Employees remain a key line of defense against cyber threats, and training should evolve alongside technology. Updating annual cybersecurity awareness programs to include examples of AI-driven phishing or deep-fake impersonations can make a real difference.

AI governance is also about collaboration. IT, compliance, audit, and risk management functions should work together to assess AI use cases and ensure controls are applied consistently. Internal audits can help verify that documentation, contracts, and policies are keeping pace with technology. Some institutions even run mock audits focused on AI governance to prepare for future regulatory exams.

5. Stay connected to peers and regulatory developments

The AI threat environment changes quickly. Staying connected to peers and industry organizations helps institutions remain informed and proactive. Groups such as FS-ISAC, RMA (now ProSight), and the ABA regularly share emerging threat information and best practices.

Monitoring regulatory updates is equally important. Even when guidance originates at the state level, such as from the DFS, it can offer useful direction for developing internal frameworks and ensuring readiness for broader oversight.

6. Balance innovation with security discipline

AI can enhance defenses when deployed carefully. Many vendors are now embedding AI into their existing cybersecurity tools, improving anomaly detection and response times. Institutions should evaluate these enhancements through the same lens as any other vendor solution: documentation, testing, and clear accountability.

Every new technology introduces risk. The institutions that will benefit most from AI are those that apply discipline and rigor to its adoption, verifying controls, maintaining strong governance, and documenting every step.

A structured approach for long-term success

AI is not a passing trend. It is another evolution in how financial institutions operate and protect themselves. By incorporating AI into existing cybersecurity frameworks through vendor management, board oversight, and ongoing education, institutions can stay secure while adapting to technological change and maintaining the trust of their customers, members, and regulators.

About the Author

Edward Callis, CPA, CISSP, CCSP

Vice President of IT Risk & Assurance
Edward Callis is a Vice President of IT Risk & Assurance at Abrigo. He leads a team of IT professionals who assess Abrigo’s vendor and partner ecosystem, and who provide comprehensive due diligence documentation so financial institutions can make an informed choice when selecting software platforms. Edward has more than

Full Bio

About Abrigo

Abrigo enables U.S. financial institutions to support their communities through technology that fights financial crime, grows loans and deposits, and optimizes risk. Abrigo's platform centralizes the institution's data, creates a digital user experience, ensures compliance, and delivers efficiency for scale and profitable growth.

Make Big Things Happen.