AI risk requires vendor due diligence & training
3. Strengthen due diligence and vendor oversight
For many institutions, AI arrives through third-party vendors. That means the focus should be on understanding how those tools are built and how they’re secured before implementation. The institution can offload implementation but not risk ownership.
When assessing AI vendors, request details about the specific models used and the data on which they were trained. If you’re using a large language model (LLM), ask whether training is ongoing and what safeguards exist for issues like prompt injection and hallucinations. Vendor contracts should reflect these considerations and include:
- Disclosure of the model type and its training parameters
- Limitations on using your institution’s data for further training
- AI-specific security reporting
- Prohibitions against sharing data with public or fourth-party models
You should also request results from AI-specific penetration testing, ideally following the OWASP Top 10 for LLMs, performed by qualified, independent assessors. That testing provides confidence that vendors are applying recognized security standards to their AI systems. Due diligence should be refreshed regularly, and of course, everything should be documented.
4. Expand training and ensure collaboration
Employees remain a key line of defense against cyber threats, and training should evolve alongside technology. Updating annual cybersecurity awareness programs to include examples of AI-driven phishing or deep-fake impersonations can make a real difference.
AI governance is also about collaboration. IT, compliance, audit, and risk management functions should work together to assess AI use cases and ensure controls are applied consistently. Internal audits can help verify that documentation, contracts, and policies are keeping pace with technology. Some institutions even run mock audits focused on AI governance to prepare for future regulatory exams.
5. Stay connected to peers and regulatory developments
The AI threat environment changes quickly. Staying connected to peers and industry organizations helps institutions remain informed and proactive. Groups such as FS-ISAC, RMA (now ProSight), and the ABA regularly share emerging threat information and best practices.
Monitoring regulatory updates is equally important. Even when guidance originates at the state level, such as from the DFS, it can offer useful direction for developing internal frameworks and ensuring readiness for broader oversight.
6. Balance innovation with security discipline
AI can enhance defenses when deployed carefully. Many vendors are now embedding AI into their existing cybersecurity tools, improving anomaly detection and response times. Institutions should evaluate these enhancements through the same lens as any other vendor solution: documentation, testing, and clear accountability.
Every new technology introduces risk. The institutions that will benefit most from AI are those that apply discipline and rigor to its adoption, verifying controls, maintaining strong governance, and documenting every step.
A structured approach for long-term success
AI is not a passing trend. It is another evolution in how financial institutions operate and protect themselves. By incorporating AI into existing cybersecurity frameworks through vendor management, board oversight, and ongoing education, institutions can stay secure while adapting to technological change and maintaining the trust of their customers, members, and regulators.