Terms to know related to artificial intelligence

AI glossary for bankers

The AI glossary for bankers helps staff at banks and credit unions understand artificial intelligence terms used in financial services. As institutions discuss AI across growth, risk management, operations, and service, clear and consistent language supports better evaluation of tools and strategies and more informed decisions. This glossary includes foundational AI terms, risk and governance concepts, and practical definitions tied to how banks and credit unions may encounter AI in day-to-day work. 

Adaptive models 

Definition: Adaptive models are AI systems that improve over time by learning from new data, user feedback, or changing conditions. Unlike static models, they can be updated or refined to perform better on specific tasks. 

Why it matters for bankers: Adaptability explains why modern AI tools are more useful than earlier, static chatbots: they can evolve based on real-world use. 

Banking example: In fraud detection, an adaptive model might improve its case summaries and alerts based on feedback from analysts reviewing flagged transactions. 

Adversarial AI 

Definition: Adversarial AI refers to techniques used to manipulate or deceive AI systems by introducing misleading or carefully crafted inputs that cause incorrect outputs. 

Why it matters for bankers: In financial services, adversarial activity can target models used in fraud detection, credit scoring, or authentication—potentially leading to missed risks or incorrect decisions. Managing this risk requires robust model testing, monitoring, and controls to detect unusual or manipulated inputs. 

Banking example: Fraudsters altering transaction patterns or input data to evade detection by a fraud model. 

Agentic AI 

Definition: Agentic AI refers to AI systems that use one or more AI agents to complete multi-step workflows with minimal human input. These systems can plan, sequence tasks, and coordinate actions across tools and data sources. An AI agent performs a task; agentic AI connects tasks into a workflow. 

Why it matters for bankers: Agentic AI enables automation of end-to-end processes—reducing handoffs, speeding up workflows, and improving consistency across complex tasks. 

Banking example: An agentic AI system that collects financial data, analyzes risk, drafts a credit narrative, and prepares supporting documentation end-to-end. 

AI agent  

Definition: An AI agent is an AI system that can take actions to complete a task, not just respond to a single prompt. It can gather information, make decisions, and use tools to achieve a specific goal. 

Why it matters for bankers: AI agents can automate individual tasks within workflows—reducing manual effort while keeping humans in control of final decisions. 

Banking example: An AI agent that pulls borrower data and drafts a first-pass credit memo. 

AI-based customer segmentation 

Definition: AI-based customer segmentation uses AI to group customers based on shared characteristics or behaviors—such as product usage, balances, transaction patterns, industry, or life stage. 

Why it matters for bankers: Compared to traditional methods used by banks and credit unions, AI can create more dynamic and granular segments, helping institutions improve targeting for deposit growth, cross-sell opportunities, and customer retention. 

Banking example: Identifying customers with rising deposit balances who may be candidates for treasury or lending products. 

AI bias 

Definition: AI bias occurs when an AI system produces unfair, inconsistent, or systematically skewed outcomes, often because of the data it was trained on or how it was designed. Bias can appear in predictions, recommendations, or generated content. 

Why it matters for bankers: AI bias is especially important in areas like lending, where models may unintentionally disadvantage certain groups if not properly tested and monitored. Managing bias requires strong governance, including data review, model validation, and ongoing oversight. 

Banking example: A credit model that consistently assigns lower scores to applicants from certain geographies due to historical data patterns rather than current risk. 
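One common fair-lending screen for this kind of outcome bias is the "four-fifths rule" on approval rates. The sketch below is purely illustrative; the groups, decisions, and 0.8 threshold are hypothetical examples, not a compliance standard for any specific institution:

```python
# Illustrative sketch: a simple "four-fifths rule" screen for approval-rate
# disparity between two applicant groups. Group labels and data are hypothetical.

def approval_rate(decisions):
    """Share of approved decisions in a list of True/False outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values below 0.8 are a common flag for further fair-lending review."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (True = approved)
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 -> flag for review
```

A screen like this only surfaces a disparity; determining whether it reflects legitimate risk factors or bias requires the deeper validation and governance described above.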

AI change management 

Definition: AI change management is the work of making new AI tools usable in real life with training, communication, updated procedures, and clear accountability.  

Why it matters for bankers: Like other technology modernization efforts, AI projects fail less from math problems and more from people-and-process gaps, especially when teams don’t trust outputs or don’t know when not to use them. Change management promotes adaptability, improved employee morale, optimized operations, and risk mitigation.  

Banking example: Form a cross-functional AI Governance Committee to oversee initiatives, set ethical standards, track inventory of AI models, provide and promote training, and maintain oversight. 

AI covenant monitoring

Definition: AI covenant monitoring uses AI to track whether borrowers are meeting agreed financial and reporting requirements. It can analyze financial statements, calculate covenant-related ratios, and flag potential breaches or risks early. 

Why it matters for bankers: For banks and credit unions, this helps reduce manual review, improve consistency, and enable more proactive risk management—while still requiring oversight and follow-up. 

Banking example: Identifying early warning signs when a borrower is approaching a debt service coverage or leverage covenant breach. 

AI credit scoring model 

Definition: An AI credit scoring model uses artificial intelligence to estimate the likelihood of repayment problems from borrower and performance data. 

Why it matters for bankers: For banks and credit unions, key considerations include whether the model is explainable, stable over time, free from unfair bias, and defensible during internal review or regulatory exams. 

Banking example: Using cash flow and transaction data to assess creditworthiness alongside or instead of traditional credit scores. 

AI-driven alert triage 

Definition: AI-driven alert triage uses AI to prioritize alerts so investigators can focus on the highest-risk cases first. Instead of reviewing alerts in sequence, the AI-powered system ranks them based on risk signals, patterns, or learned behavior. 

Why it matters for bankers: For community banks and credit unions, this can reduce backlogs and improve consistency in how alerts are handled. However, it requires clear explainability, ongoing monitoring, and controls to ensure that high-risk activity is not overlooked. 

Banking example: The Abrigo AML Assistant uses adaptive machine learning and real-time insights for smart alert triaging that allows investigators to review the most likely AML cases first. It accelerates investigations, delivering up to an 80% time savings.  
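The ranking idea behind alert triage can be illustrated with a minimal sketch. The risk weights and alert fields below are hypothetical and are not how any particular product scores alerts:

```python
# Sketch: rank open alerts by a composite risk score so investigators see the
# highest-risk cases first. Weights and alert fields are illustrative.

def triage(alerts):
    """Sort alerts by descending composite risk score."""
    def score(alert):
        return (0.5 * alert["amount_risk"]
                + 0.3 * alert["pattern_risk"]
                + 0.2 * alert["history_risk"])
    return sorted(alerts, key=score, reverse=True)

alerts = [
    {"id": "A-101", "amount_risk": 0.2, "pattern_risk": 0.1, "history_risk": 0.0},
    {"id": "A-102", "amount_risk": 0.9, "pattern_risk": 0.7, "history_risk": 0.4},
    {"id": "A-103", "amount_risk": 0.5, "pattern_risk": 0.6, "history_risk": 0.9},
]
queue = triage(alerts)
print([a["id"] for a in queue])  # highest-risk first
```

Production systems learn these weights from historical outcomes rather than fixing them by hand, which is why the explainability and monitoring controls mentioned above matter.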

AI evaluations (AI evals) 

Definition: AI evaluations (AI evals) are structured tests used to assess the quality, accuracy, and reliability of AI system outputs against expected results or “ground truth.” They help measure how well an AI system performs across different scenarios and use cases. 

Why this matters for bankers: For banks and credit unions, AI evals provide a systematic way to validate AI outputs, identify errors or inconsistencies, and ensure models perform as intended over time. They are especially important for generative AI, where outputs can vary and require ongoing quality checks. In practice, evals use curated datasets, defined success criteria, and repeatable testing to support model validation, monitoring, and audit readiness. 

Banking example: Testing an AI system that generates credit memos against a set of known cases to evaluate accuracy, completeness, and consistency of the output.  
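A minimal eval harness might look like the following sketch. The generator function, test cases, and pass criteria are all hypothetical stand-ins for an institution's real curated dataset and success criteria:

```python
# Minimal sketch of an AI eval harness: run curated cases through a generator
# and check required content appears, reporting a pass rate. Data is hypothetical.

def run_evals(generate, cases, threshold=0.9):
    """Run each case through `generate` and check required phrases appear."""
    passed = 0
    for case in cases:
        output = generate(case["input"])
        if all(phrase.lower() in output.lower() for phrase in case["must_contain"]):
            passed += 1
    score = passed / len(cases)
    return {"score": score, "passed": passed, "total": len(cases), "ok": score >= threshold}

# Hypothetical stand-in for a generative model
def fake_memo_generator(borrower):
    return f"Credit memo for {borrower}: DSCR reviewed, collateral noted."

cases = [
    {"input": "Acme LLC", "must_contain": ["Acme LLC", "DSCR"]},
    {"input": "Main St Bakery", "must_contain": ["Main St Bakery", "collateral"]},
]
print(run_evals(fake_memo_generator, cases))
```

Running the same cases after every model or prompt change turns quality checks into a repeatable, auditable process rather than spot reviews.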

AI governance 

Definition: AI governance is the framework of policies, controls, and oversight that defines how AI is used within an organization—including who can use it, what data it can access, how outputs are validated, and how risks are monitored. 

Why it matters for bankers: For community banks and credit unions, effective AI governance helps prevent “shadow AI,” ensures responsible use of customer data, and provides a clear line of sight for examiners from policy to practice. It typically involves cross-functional oversight from risk, compliance, IT, and business leaders. 

Banking example: Establishing approval processes, usage guidelines, and audit logs for AI tools used in lending or fraud monitoring. 

AI knowledge assistant 

Definition: An internally focused AI knowledge assistant answers staff questions using your institution’s approved content: policies, procedures, product guides, and internal documentation.  

Why it matters for bankers: Done well, it reduces time wasted searching, improves consistency, and lowers dependency on “tribal knowledge,” while still requiring permission controls and source grounding.  

Banking example: AskAbrigo is an AI knowledge assistant that delivers insights from an institution’s data and Abrigo solutions, saving customers up to 5 hours a week.  

AI lifecycle 

Definition: The AI lifecycle refers to the end-to-end process of developing, deploying, and managing AI models over time. It includes stages such as data collection, model development and training, validation, deployment, monitoring, and ongoing updates or retraining. 

Why it matters for bankers: For banks and credit unions, managing the full lifecycle is critical to ensure models remain accurate, compliant, and aligned with their intended use. It also supports model risk management, governance, and auditability. 

Banking example: Developing a fraud model, validating it before use, monitoring performance over time, and retraining it as fraud patterns evolve. 

AI-powered loan review assistant 

Definition: A loan review assistant powered by AI uses analytics and generative AI to help reviewers assess credit quality and document findings.  

Why it matters for bankers: In practice, it often means faster evaluation of credit quality, more standardized write-ups, and improved credit risk review coverage even as reviewer judgment remains central to the process. 

Banking example: Abrigo’s AI-powered Loan Review Assistant speeds up loan reviews by 30% with configurable risk assessment and comprehensive automated reviews and narrative generation.  

AI-powered sanctions screening 

Definition: AI-powered sanctions screening uses AI to match customers and transactions against sanctions and restricted-party lists, improving accuracy in identifying potential matches. 

Why it matters for bankers: For banks and credit unions, AI can reduce false positives—especially with name variations and incomplete data—while helping investigators focus on higher-risk alerts. However, screening processes must remain transparent, tunable, and fully auditable to meet regulatory expectations. 

Banking example: Abrigo Intelligent Scan uses expansive risk data, record-level matching, and entity extraction to reduce false positives, cut manual effort, and lower risk exposure for customers.  

AI readiness assessment 

Definition: An AI readiness assessment evaluates whether a bank or credit union is prepared to adopt AI responsibly. It looks at key areas such as data quality, governance, processes, technology, and staff capabilities. 

Why it matters for bankers: The goal is to identify practical starting points and prioritize use cases that align with the institution’s risk appetite and operational capacity. 

Banking example: Assessing whether data, controls, and staffing are in place before deploying AI in lending or fraud monitoring. Abrigo’s AI Policy & Governance Advisory team helps banks and credit unions establish robust oversight structures, define acceptable use standards, and implement best practices for responsible AI deployment. 

AI transparency 

Definition: AI transparency refers to the ability to understand how an AI system is designed, how it uses data, how it operates, and what risks it may introduce. 

Why it matters for bankers: For banks and credit unions, transparency supports effective risk management, model validation, and regulatory confidence by making AI systems more understandable and reviewable. It also aligns with industry frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, which emphasize transparency as a core principle. 

Banking example: Documenting a model’s purpose, data sources, assumptions, and limitations for internal review and examiner evaluation. 

Allowance narrative generator 

Definition: An allowance narrative generator uses AI to draft the narrative explaining changes in reserves across reporting periods—what changed, why it changed, and key risks to monitor. It can also generate first-draft disclosures aligned to underlying data, models, and policies. 

Why it matters for bankers: An allowance narrative generator can save hours of time typically gathering reports, analyzing data, and drafting management discussion and required disclosures. To be effective, the output must remain consistent with model results and incorporate management judgment and review. 

Banking example: The Abrigo Allowance Narrative Generator produces first-draft allowance narratives and disclosure text traceable to the institution’s data and policies to accelerate closing with tight governance.  

AML assistant 

Definition: An AML assistant uses AI to support alert triage and investigations by prioritizing higher-risk alerts and helping assemble case information. It can analyze data across systems and generate draft case narratives for investigator review. 

Why it matters for bankers: For community banks and credit unions, this can improve consistency, reduce manual work, and speed investigations—while still requiring human oversight and validation. 

Banking example: Abrigo AML Assistant triages alerts and automates case narrative creation by assembling investigative data automatically, analyzing it, and producing a draft summary for human review and revision. 

Anomaly detection 

Definition: Anomaly detection flags activity that deviates from normal behavior: unusual transaction patterns, sudden volume spikes, odd counterparties, new device behavior.  

Why it matters for bankers: In fraud and AML, anomaly detection is useful for surfacing “unknown unknowns,” but it needs strong triage to avoid alert overload. 

Banking example: Flagging a customer account with transaction behavior that differs significantly from its historical pattern. 
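As a toy illustration of the statistical idea, the sketch below flags amounts several standard deviations from an account's history; real systems use far richer signals than a single z-score, and the threshold here is illustrative:

```python
# Sketch: flag transaction amounts that deviate sharply from an account's
# historical pattern using a z-score. Threshold and data are illustrative.
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return the new amounts more than z_threshold std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_amounts if abs(x - mu) / sigma > z_threshold]

history = [120, 95, 110, 105, 130, 100, 115, 125]  # typical account activity
incoming = [118, 5000, 102]                        # one sudden spike
print(flag_anomalies(history, incoming))           # [5000]
```

The triage challenge mentioned above shows up immediately: lower the threshold and more "unknown unknowns" surface, but so do false positives.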

Artificial intelligence (AI) 

Definition: AI is a broad label for technology that performs tasks we usually associate with human judgment—recognizing patterns, making recommendations, or generating content.  

Why it matters for bankers: AI can help deepen relationships, increase customer or member satisfaction, minimize risk, and build long-term loyalty by allowing financial institutions to work faster, smarter, and with greater precision without giving up control. 

Banking example: In banking, “AI” can include rules-based decisioning, machine learning fraud scores, generative AI that drafts narratives, etc. 

Audit trail 

Definition: An audit trail in AI is a record of how an AI-generated output was created, including the inputs used, outputs generated, edits made, approvals given, and who performed each step. 

Why it matters for bankers: For banks and credit unions, audit trails make AI use transparent and reviewable, turning “the model said so” into a defensible process. They are critical for demonstrating control to examiners, supporting model validation, and enabling effective oversight of AI-driven decisions. 

Banking example: Tracking how a credit memo draft was generated, edited, and approved, including the data sources and user actions involved. 
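The record-keeping idea can be sketched as an append-only event log. The actor names, actions, and fields below are illustrative, not a prescribed schema:

```python
# Sketch of an append-only audit trail for an AI-assisted workflow:
# each step records who did what, when, and with which details.
import json
import datetime

class AuditTrail:
    def __init__(self):
        self._events = []

    def record(self, actor, action, detail):
        """Append one immutable event; nothing is ever edited or deleted."""
        self._events.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
        })

    def export(self):
        """Serialize the trail for internal review or examiner requests."""
        return json.dumps(self._events, indent=2)

trail = AuditTrail()
trail.record("model:memo-drafter", "generate_draft", {"sources": ["tax_return_2023.pdf"]})
trail.record("analyst.jsmith", "edit_draft", {"sections": ["risk factors"]})
trail.record("manager.alee", "approve", {"version": 2})
print(trail.export())
```

In practice, trails like this live in tamper-evident storage so the sequence of generation, edits, and approvals can be reconstructed long after the fact.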

Autonomous AI 

Definition: Autonomous AI refers to AI systems that can make decisions and take actions end-to-end without human involvement, except in cases where issues are escalated. 

Why it matters for bankers: In banking, fully autonomous AI is limited to lower-risk use cases due to regulatory and oversight requirements. Most implementations still include human review for higher-risk decisions. 

Banking example: Automatically approving or declining small-dollar loans within defined thresholds or resolving routine customer service requests without human intervention. 

Black box AI 

Definition: Black box AI refers to models whose internal decision-making processes are not easily understood or explained. It can be difficult to determine how inputs are transformed into outputs. 

Why it matters for bankers: For banks and credit unions, this lack of transparency creates challenges for validation, governance, and regulatory compliance—especially when decisions must be explained or justified. 

Banking example: A credit model that produces a risk score without clear insight into which factors drove the decision. 

Counterfactual analysis 

Definition: Counterfactual analysis is an explainability technique that shows how changing specific inputs would lead to a different outcome—in other words, it answers “what if” scenarios. It helps identify which factors had the greatest influence on a model’s prediction. 

Why it matters for bankers: In banking, it can support model validation, reviews, and regulatory exams by making AI-driven decisions easier to understand and justify. 

Banking example: Showing how a borrower’s credit decision would change if income, debt levels, or cash flow were different. 
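As a toy illustration, the sketch below searches for the smallest income change that would flip a decision under a simple, hypothetical debt-to-income rule; counterfactual tools apply the same "what if" search to far more complex models:

```python
# Sketch of a counterfactual check against a simple, transparent scoring rule:
# "what minimum income would flip this decision?" The rule is hypothetical.

def approve(income, debt, threshold=0.4):
    """Approve when the debt-to-income ratio is at or below the threshold."""
    return debt / income <= threshold

def income_counterfactual(income, debt, threshold=0.4, step=1000, limit=500_000):
    """Smallest income (searched upward in `step` increments) that flips
    the decision to approve, or None if no value up to `limit` does."""
    candidate = income
    while candidate <= limit:
        if approve(candidate, debt, threshold):
            return candidate
        candidate += step
    return None

print(approve(60_000, 30_000))                # False: DTI = 0.50
print(income_counterfactual(60_000, 30_000))  # 75000: DTI = 0.40
```

The output answers the borrower-facing question directly: "the application would have been approved at $75,000 of income," which is the kind of explanation adverse action reviews look for.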

Credit memo/credit narrative generation 

Definition: Credit memo/credit narrative generation uses generative AI to draft narrative sections of a credit memo (including borrower overview, risk factors, mitigants, and covenant summary) from data already in financial institution systems and documents.  

Why it matters for bankers: The time savings can be real, and the governance requirement is important: drafting is not approving.  

Banking example: Abrigo Lending Assistant generates credit narratives 25% faster while validating documents and extracting key data from unstructured files. 

Data lineage 

Definition: Data lineage refers to the history and movement of data as it flows through systems, including its origin, transformation, and use. 

Why it matters for bankers: For banks and credit unions, data lineage supports transparency, auditability, and regulatory compliance by providing a clear record of how data contributes to reports, models, and decisions. 

Banking example: Tracing how transaction data flows from core systems into a credit model or regulatory report. 

Data poisoning 

Definition: Data poisoning is an attack that intentionally corrupts training data to degrade an AI model’s performance or manipulate its outputs. 

Why it matters for bankers: In financial services, this can lead to inaccurate predictions, missed fraud signals, or compromised decision-making if not detected and controlled. 

Banking example: Introducing misleading transaction patterns into training data to weaken a fraud detection model. 

Deepfake 

Definition: A deepfake is AI-generated or manipulated images, audio, or video content that appears realistic but is not authentic. 

Why it matters for bankers: For banks and credit unions, deepfakes pose risks in areas such as fraud, identity verification, and social engineering. 

Banking example: A synthetic voice or video used to impersonate a customer or executive to authorize fraudulent transactions. 

Deep learning 

Definition: Deep learning is a subset of machine learning that uses neural networks with many layers to learn patterns from data. Facial recognition software, for example, uses deep learning to recognize faces. 

Why it matters for bankers: Deep learning can analyze the vast amounts of data held in separate systems across financial institutions to improve business outcomes and service. 

Banking example: Years of customer or member data can power tailored recommendations and offers to at-risk depositors or owners of growing businesses.  

Embeddings 

Definition: Text or word embeddings are numerical representations of words or phrases that capture their meaning and relationships. They allow AI systems to understand and compare text, enabling tasks like search, classification, and similarity matching. 

Why it matters for bankers: Embeddings provide essential context so that a chatbot, for example, can generate an appropriate response.  

Banking example: Matching customer queries to relevant policy documents in AskAbrigo or another knowledge assistant. 
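The similarity-matching idea can be sketched with cosine similarity over toy vectors. Real embeddings come from a trained model and have hundreds of dimensions; the 3-dimensional vectors and document names below are made up:

```python
# Sketch: cosine similarity over toy embedding vectors to match a query to the
# closest policy document. Real embeddings come from a model; these are made up.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction (similar meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings (real ones have hundreds of dimensions)
docs = {
    "wire transfer policy": [0.9, 0.1, 0.0],
    "overdraft procedures": [0.1, 0.8, 0.2],
    "BSA training guide":   [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # embedding of "how do I send a wire?"

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # wire transfer policy
```

This nearest-match step is the retrieval core of knowledge assistants: the question and every document are embedded once, then compared numerically instead of by keyword.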

Explainable AI (XAI) 

Definition: Explainable AI (XAI) refers to methods and practices that make AI outputs understandable, traceable, and defensible. It enables users to clearly describe how a model arrived at a prediction, recommendation, or decision. 

Why it matters for bankers: For banks and credit unions, explainability is critical for model validation, internal challenge, and regulatory confidence. It also supports fair lending reviews and, where applicable, adverse action explanations. 

Banking example: Identifying which factors most influenced a credit decision or risk score. 

Foundation models 

Definition: Foundation models are large AI models trained on broad datasets that can be adapted to perform many different tasks, such as generating text, analyzing documents, or answering questions. 

Why it matters for bankers: In banking, foundation models power many generative AI use cases but require controls around data use, accuracy, and governance. 

Banking example: Using a foundation model to summarize loan documents or draft customer communications. 

Generative AI (GenAI) 

Definition: Generative AI creates new content (text, summaries, narratives, and sometimes code) based on patterns in data it learned during training.  

Why it matters for bankers: In banking and credit union workflows, generative AI can be especially useful for drafting and summarizing content, with a human still accountable for accuracy. It can quickly review lengthy documents and save time developing written content. 

Banking example: Content for banks and credit unions that can be generated by generative AI includes credit memos or loan presentations, summaries of loan review investigations, fraud-investigation summaries, internal policies and procedures, and summaries of new regulations or laws.  

Guardrails 

Definition: Guardrails are controls and constraints put in place to ensure AI systems operate safely, consistently, and within defined policies. 

Why it matters for bankers: They help limit inappropriate outputs, enforce business rules, and reduce risks such as hallucinations or misuse. 

Banking example: Restricting an AI assistant to approved data sources and preventing it from generating unauthorized advice. AskAbrigo, for example, provides audit-ready answers and content with clear sourcing and traceability, as well as controls for enterprise-wide management and oversight.  
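A simple guardrail might combine a blocked-topic list with an approved-source check, as in this illustrative sketch; the source names, topics, and refusal wording are all made up:

```python
# Sketch of a simple guardrail: answer only from approved sources and refuse
# out-of-scope topics. Source names and the blocked list are hypothetical.

APPROVED_SOURCES = {"lending_policy.pdf", "deposit_procedures.pdf"}
BLOCKED_TOPICS = {"investment advice", "legal advice"}

def guarded_answer(question, retrieved):
    """Answer only when the topic is allowed and at least one source is approved."""
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic. Please contact the right team."
    sources = [doc for doc in retrieved if doc in APPROVED_SOURCES]
    if not sources:
        return "No approved source covers this question."
    return f"Answer grounded in: {', '.join(sorted(sources))}"

print(guarded_answer("What is our overdraft limit?", ["deposit_procedures.pdf"]))
```

Real guardrails layer many such checks (input filtering, source grounding, output validation), but each one is this same pattern: a deterministic rule wrapped around a probabilistic model.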

Hallucination 

Definition: A hallucination occurs when an AI system generates information that appears credible but is incorrect, unsupported, or fabricated. 

Why it matters for bankers: In financial services, hallucinations can lead to inaccurate documentation, flawed decisions, or inconsistent examiner-facing narratives. Managing this risk requires controls such as grounding AI in approved data sources, human review, citation of sources, and limiting use to appropriate use cases. 

Banking example: An AI-generated credit memo that includes incorrect financial details not supported by the underlying data. 

Human oversight in AI (human-in-the-loop, HITL) 

Definition: Human oversight in AI refers to the role of people in reviewing, approving, or monitoring AI-driven outputs and decisions. This is commonly referred to as human-in-the-loop (HITL). 

Why this matters for bankers: Regulators and model risk frameworks emphasize human review, oversight, and accountability, especially for higher-risk use cases. 

In practice, HITL can take different forms depending on how much autonomy is given to the AI: 

  • Human-guided AI: AI generates suggestions, but humans review and make all decisions. 
    Banking example: AI drafts a credit memo; an analyst edits and approves it.  
  • Human-supervised AI: AI takes limited, predefined actions, while humans monitor outputs and handle exceptions. 
    Banking example: AI prioritizes alerts; analysts review escalations or sample outputs.  
  • Human oversight of automated AI: AI operates within defined boundaries, with humans providing periodic oversight and intervening when needed. 
    Banking example: AI processes routine transactions; staff monitor dashboards and step in if thresholds are exceeded.  

Intelligent document processing (IDP) 

Definition: Intelligent document processing (IDP) uses AI to extract and structure data from documents such as PDFs, scanned files, financial statements, and tax returns, enabling use in downstream systems and workflows. 

Why this matters for bankers: For banks and credit unions, IDP reduces manual data entry, speeds up processing, and improves consistency. It still requires validation, spot checks, and exception handling to ensure accuracy. 

Banking example: Extracting borrower financials from tax returns or spreading financial statements for credit analysis.  
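As a toy stand-in for the extraction step, the sketch below pulls labeled dollar amounts out of semi-structured text with regular expressions. Production IDP relies on trained models that handle scans, layouts, and handwriting; the field names here are invented:

```python
# Toy sketch of the IDP idea: pull "Label: $1,234" amounts out of
# semi-structured text. Real IDP uses trained models; fields are made up.
import re

def extract_fields(text):
    """Extract labeled dollar amounts into a {label: integer value} dict."""
    pattern = re.compile(r"(?P<label>[A-Za-z ]+):\s*\$(?P<amount>[\d,]+)")
    return {
        m["label"].strip(): int(m["amount"].replace(",", ""))
        for m in pattern.finditer(text)
    }

statement = "Gross Revenue: $1,250,000  Net Income: $210,500  Total Debt: $480,000"
print(extract_fields(statement))
```

Whatever the extraction method, the validation and spot-check step noted above is what keeps a misread amount from flowing silently into a credit analysis.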

Large language model (LLM) 

Definition: An LLM is a generative AI model trained on large volumes of text to understand and generate language.  

Why this matters for bankers: For banks and credit unions, LLMs are often used as “drafting engines” or Q&A assistants, but they need controls so they don’t invent facts (“hallucinate”), leak sensitive data, or conflict with policy. 

Banking example: An LLM trained on knowledge about a bank or credit union’s specific products and targeted customer or member segments can generate automated emails and marketing materials for cross-selling offers.  

LIME (Local Interpretable Model-agnostic Explanations) 

Definition: LIME is an explainability technique that helps interpret individual AI predictions by identifying which inputs most influenced a specific outcome. It works by approximating how the model behaves for that single case. 

Why this matters for bankers: For banks and credit unions, LIME can help explain decisions from complex or “black box” models during validation, review, or regulatory exams. 

Banking example: Showing which factors most influenced a borrower’s risk score for a particular loan. 
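The perturb-and-observe idea behind LIME can be illustrated on a transparent stand-in model. Real LIME fits a local surrogate model around many perturbed samples; this sketch just bumps each input once, and the scoring function and feature names are hypothetical:

```python
# Sketch of the idea behind LIME: perturb one input at a time and measure how
# much the model's score moves locally. The scoring function is hypothetical.

def risk_score(features):
    """Stand-in 'black box': a weighted sum the explainer cannot see inside."""
    return 0.6 * features["dti"] + 0.3 * features["util"] + 0.1 * features["age"]

def local_importance(model, features, delta=0.01):
    """Approximate each feature's local influence via a small perturbation."""
    base = model(features)
    importance = {}
    for name in features:
        bumped = dict(features, **{name: features[name] + delta})
        importance[name] = (model(bumped) - base) / delta
    return importance

borrower = {"dti": 0.45, "util": 0.70, "age": 0.30}
print(local_importance(risk_score, borrower))  # dti dominates this prediction
```

For this linear stand-in the recovered influences match the hidden weights exactly; for a genuinely nonlinear model they only hold near the specific case being explained, which is the "local" in LIME.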

Machine learning (ML) 

Definition: Machine learning is a type of AI that learns patterns from historical data in order to predict or classify outcomes. 

Why it matters for bankers: The technology is especially useful when patterns change faster than policies can be rewritten. 

Banking example: Machine learning in banking and finance has multiple use cases, including fraud scoring, credit risk signals, and alert prioritization.  

Memory 

Definition: Memory is an AI capability that allows a system to retain relevant context across interactions or steps in a workflow, enabling more consistent responses and better task continuity. 

Why this matters for bankers: For banks and credit unions, memory can improve efficiency and user experience, but it must be carefully controlled to address data privacy, security, and governance requirements. 

Banking example: An AI assistant that remembers prior steps in a loan review process to avoid rework and maintain context across tasks. 

Model Context Protocol 

Definition: Model Context Protocol (MCP) is a standard that enables AI systems to connect with external data sources, tools, and applications in a structured and consistent way. 

Why this matters for bankers: MCP helps AI applications access the right context—such as data, permissions, and task state—without requiring custom integrations for each system. This is especially valuable in regulated environments where access, usage, and interactions must be controlled and auditable. 

Banking example: Connecting an AI assistant to internal systems (e.g., loan data, policies, or transaction records) with controlled access and traceability.  

Model drift 

Definition: Model drift occurs when an AI model’s performance degrades over time because the data it was trained on no longer reflects current conditions. Changes in customer behavior, economic environments, or data patterns can all cause drift. 

Why this matters for bankers: For banks and credit unions, model drift can lead to inaccurate predictions or missed risks if not monitored. Ongoing validation, performance tracking, and periodic retraining are essential to maintain reliability. 

Banking example: A fraud model trained on past transaction patterns becomes less effective as fraud tactics evolve. 
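Drift monitoring can be as simple as comparing recent accuracy against the baseline established at validation, as in this illustrative sketch; the window size and tolerance are hypothetical:

```python
# Sketch: detect drift by comparing a model's recent accuracy against its
# validation baseline. The window size and tolerance are illustrative.

def drift_alert(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Alert when recent accuracy falls more than `tolerance` below baseline.
    `recent_outcomes` is a list of True/False: did the model get it right?"""
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return {
        "recent_accuracy": recent_accuracy,
        "baseline": baseline_accuracy,
        "alert": recent_accuracy < baseline_accuracy - tolerance,
    }

# Model validated at 92% accuracy; recent performance has slipped
recent = [True] * 16 + [False] * 4  # 80% correct over the last 20 cases
print(drift_alert(0.92, recent))    # alert fires: recent accuracy well below baseline
```

An alert like this does not say why performance slipped; it triggers the investigation and possible retraining described above.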

Model risk management (MRM) 

Definition: Model risk management (MRM) is the framework for identifying, assessing, and controlling risks associated with models, including their assumptions, limitations, performance, and potential misuse. 

Why this matters for bankers: For banks and credit unions, MRM ensures models are validated, monitored, and used appropriately. As institutions adopt machine learning and generative AI, MRM becomes essential to balancing innovation with risk management and regulatory expectations. In practice, this includes independent validation, ongoing performance monitoring, and clear documentation. For example, Abrigo’s models undergo independent validation and regular review to support transparency and examiner readiness. 

Banking example: Validating a credit or fraud model, monitoring its performance over time, and documenting its use for internal review and exams. 

Model training 

Definition: Model training is the process of teaching an AI model to learn patterns from data so it can make predictions, classifications, or generate outputs. Instead of being explicitly programmed for every scenario, the model learns from examples. Common approaches include supervised, unsupervised, and reinforcement learning. 

Why this matters for bankers: For banks and credit unions, model training determines how well a model reflects real-world conditions and regulatory expectations. High-quality, relevant data and proper design help ensure models are accurate, reliable, and fit for their intended use. 

Banking example: Training a model to detect fraud patterns, score credit risk, or personalize customer offers based on historical data. 
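As a rough illustration of supervised training, the sketch below fits a tiny logistic-regression default scorer by gradient descent. All feature names, values, and labels are fabricated; a real credit model would involve far more data, features, and governance.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression scorer by per-example gradient descent.

    rows: list of feature vectors; labels: 1 = defaulted, 0 = repaid.
    """
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical training examples: [debt_to_income, late_payments] -> default?
X = [[0.2, 0], [0.8, 3], [0.3, 1], [0.9, 4], [0.25, 0], [0.7, 2]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logistic(X, y)

# Score a new, riskier-looking applicant (higher value = higher default risk)
score = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 3])) + b)
```

The key idea the entry describes is visible here: the model is never told a rule like "late payments are bad"; it infers the weights from labeled examples.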

Model validation 

Definition: Model validation is the independent review and testing of a model’s design, performance, and limitations to ensure it is reliable, appropriate for its intended use, and well understood. 

Why this matters for bankers: For banks and credit unions, validation confirms that the model works as expected, identifies where it may fail, and establishes boundaries for acceptable use—supporting regulatory compliance and risk management. 

Banking example: Independently testing a credit or fraud model before deployment and documenting its assumptions, performance, and limitations. 
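One simple component of validation is comparing a model's predictions on held-out cases against known outcomes and pre-agreed acceptance thresholds. The sketch below uses illustrative thresholds and fabricated data; real validation also covers design review, assumptions, and documentation, not just metrics.

```python
def validate(preds, labels, min_recall=0.7, max_fpr=0.1):
    """Holdout validation sketch: score predictions against known outcomes
    and check them against acceptance thresholds (values are illustrative)."""
    tp = sum(1 for p, yy in zip(preds, labels) if p == 1 and yy == 1)
    fn = sum(1 for p, yy in zip(preds, labels) if p == 0 and yy == 1)
    fp = sum(1 for p, yy in zip(preds, labels) if p == 1 and yy == 0)
    tn = sum(1 for p, yy in zip(preds, labels) if p == 0 and yy == 0)
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return {"recall": recall, "false_positive_rate": fpr,
            "passed": recall >= min_recall and fpr <= max_fpr}

# Held-out outcomes (1 = fraud) vs. the model's predictions on the same cases
holdout_labels = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
holdout_preds = [1, 1, 0, 0, 0, 1, 0, 0, 0, 0]
report = validate(holdout_preds, holdout_labels)
```

Writing the thresholds down before testing, and documenting the resulting report, is what turns a metric check into evidence an examiner can review.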

Next-best-action (NBA) 

Definition: Next-best-action (NBA) models use AI to recommend the most relevant action to take for a customer at a given moment—such as an offer, outreach, or follow-up. 

Why this matters for bankers: For banks and credit unions, NBA can support growth and customer engagement by identifying timely opportunities for cross-sell, retention, or service. However, in regulated environments, these recommendations must align with eligibility rules, fairness expectations, and proper documentation. 

Banking example: Recommending a rate review, product offer, or relationship manager follow-up based on a customer’s recent activity or profile. 
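At its core, a next-best-action engine scores candidate actions, filters them through eligibility rules, and surfaces the top survivor. The sketch below makes that shape concrete; every action name, score, and eligibility field is hypothetical.

```python
# Hypothetical action catalog: scores would come from a model in practice
ACTIONS = [
    {"name": "cd_rate_review", "min_balance": 10_000, "score": 0.7},
    {"name": "credit_card_offer", "min_balance": 0, "score": 0.5},
    {"name": "mortgage_refi_outreach", "min_balance": 0, "score": 0.9,
     "needs_mortgage": True},
]

def next_best_action(customer):
    """Return the highest-scoring action the customer is eligible for."""
    eligible = [
        a for a in ACTIONS
        if customer["balance"] >= a["min_balance"]
        and (not a.get("needs_mortgage") or customer["has_mortgage"])
    ]
    return max(eligible, key=lambda a: a["score"])["name"] if eligible else None
```

Note that eligibility rules run before the score is consulted; that ordering is how the fairness and suitability expectations mentioned above get enforced rather than merely hoped for.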

Performance monitoring 

Definition: Performance monitoring is the ongoing tracking of an AI model’s accuracy, stability, and behavior over time. 

Why it matters for bankers: For banks and credit unions, it helps detect issues such as model drift, errors, or unexpected outcomes, ensuring models continue to perform as intended. Effective monitoring includes tracking performance metrics, setting thresholds, and conducting periodic reviews. This includes practices such as model refreshes, prompt refinement, user feedback loops, and structured evaluations against known “ground truth”—all areas where Abrigo applies a rigorous, audit-ready approach to maintaining model performance over time. 

Banking example: Monitoring a fraud model’s detection rates and false positives as transaction patterns change. 
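A rolling false-positive-rate tracker with an alert threshold is one simple monitoring primitive. The window size and threshold below are illustrative, not recommendations.

```python
from collections import deque

class FprMonitor:
    """Track a fraud model's rolling false-positive rate and flag breaches."""

    def __init__(self, window=1000, threshold=0.05):
        self.outcomes = deque(maxlen=window)  # (alerted, was_fraud) pairs
        self.threshold = threshold

    def record(self, alerted, was_fraud):
        self.outcomes.append((alerted, was_fraud))

    def false_positive_rate(self):
        # Share of genuinely legitimate transactions the model alerted on
        legit = [alerted for alerted, fraud in self.outcomes if not fraud]
        return sum(legit) / len(legit) if legit else 0.0

    def breached(self):
        return self.false_positive_rate() > self.threshold

monitor = FprMonitor(window=1000, threshold=0.05)
monitor.record(alerted=True, was_fraud=True)   # true positive
monitor.record(alerted=True, was_fraud=False)  # false positive
```

A breach would typically trigger review rather than automatic retraining, so a human decides whether the model or the environment changed.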

Predictive analytics 

Definition: Predictive analytics uses data, statistical techniques, and AI models to forecast future outcomes or behaviors. 

Why it matters for bankers: In banking, it supports decision-making in areas such as credit risk, customer retention, and fraud prevention. 

Banking example: Predicting the likelihood of loan default or customer churn. 

Prompt 

Definition: A prompt is the input or instruction given to a generative AI system to guide its response or output. 

Why it matters for bankers: The quality and clarity of a prompt directly influence the accuracy and usefulness of the results. 

Banking example: Instructing AskAbrigo to “summarize this borrower’s financials and highlight key risks.” 

Prompt engineering 

Definition: Prompt engineering is the discipline of writing and structuring instructions to get more reliable AI outputs.  

Why it matters for bankers: In regulated workflows such as those at financial institutions, it often means standardizing prompts so outputs follow bank or credit union policy, required formats, and documentation expectations, rather than letting every user “wing it.” 

Banking example: A prompt can apply a financial institution’s loan policy to an application as part of the workflow and require supervisory review before an out-of-policy application moves forward, helping to limit lender-by-lender subjectivity. 
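In practice, standardizing often means a shared prompt template rather than free-form requests. The template, policy limits, and section names below are hypothetical, not any specific institution's policy or vendor's product.

```python
# Illustrative standardized template: every lender's request uses the same
# policy framing, limits, and required output sections.
POLICY_PROMPT = """You are assisting a commercial lender.
Summarize the borrower's financials in the sections below.
Flag any metric outside policy limits; out-of-policy items require reviewer sign-off.

Policy limits (illustrative): max debt-to-income {max_dti:.0%}, min DSCR {min_dscr}.

Borrower data:
{borrower_data}

Respond with sections: Summary, Key Risks, Policy Exceptions."""

def build_prompt(borrower_data, max_dti=0.43, min_dscr=1.25):
    # Centralizing the template keeps wording, limits, and format consistent
    # instead of letting each user improvise their own instructions.
    return POLICY_PROMPT.format(borrower_data=borrower_data,
                                max_dti=max_dti, min_dscr=min_dscr)

prompt = build_prompt("Revenue $2.1M; DSCR 1.1; DTI 48%")
```

Because the template lives in code, changes to limits or required sections go through review once rather than being re-typed by every user.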

Responsible AI 

Definition: Responsible AI refers to the principles and practices that ensure AI systems are used ethically, transparently, and in compliance with regulations. It includes considerations such as fairness, accountability, explainability, privacy, and governance. 

Why it matters for bankers: Regulators expect financial services companies to innovate responsibly to protect consumers and manage risks.  

Banking example: Implementing controls to ensure lending models are fair, explainable, and compliant with regulations. 

Retrieval-Augmented Generation (RAG) 

Definition: Retrieval-augmented generation (RAG) combines generative AI with information retrieval by pulling relevant data from approved sources before generating a response. 

Why it matters for bankers: RAG helps improve accuracy, reduce hallucinations, and ensure outputs are grounded in trusted information. 

Banking example: An AI assistant that answers staff questions using internal policies and procedures rather than general knowledge. 
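The retrieve-then-generate flow can be sketched in a few lines. Real systems use embedding-based search over a document store; the keyword-overlap retriever and the policy snippets below are simplified stand-ins.

```python
# Hypothetical approved sources: internal policy excerpts keyed by topic
POLICY_DOCS = {
    "wire_transfers": "Wire transfers over $10,000 require dual approval.",
    "password_resets": "Password resets require identity verification by phone.",
}

def retrieve(question, docs):
    """Pick the document sharing the most words with the question
    (a toy stand-in for embedding similarity search)."""
    words = set(question.lower().split())
    return max(docs.values(), key=lambda d: len(words & set(d.lower().split())))

def build_grounded_prompt(question):
    # Retrieval happens first; the generator is then constrained to the
    # retrieved excerpt, which is what grounds the answer.
    context = retrieve(question, POLICY_DOCS)
    return f"Answer using ONLY this policy excerpt:\n{context}\n\nQuestion: {question}"
```

The "ONLY" constraint in the final prompt is the grounding step: the model is told to answer from the retrieved policy text, not from its general training data.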

SHAP (SHapley Additive exPlanations) 

Definition: SHAP is an explainability method that shows how much each input feature contributed to a specific AI prediction. It assigns a value to each factor, indicating how it increased or decreased the outcome relative to a baseline. 

SHAP can also be aggregated across many predictions to show which factors are most influential overall. 

Why it matters for bankers: For banks and credit unions, SHAP supports model validation, review, and regulatory exams by making model decisions more transparent and defensible. 

Banking example: Explaining how factors like cash flow, debt levels, or transaction behavior influenced a borrower’s risk score or why an alert was prioritized.  
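For very small models, the idea behind SHAP (Shapley values) can be computed exactly by averaging each feature's marginal contribution over every possible ordering. The risk scorer, borrower values, and baseline below are hypothetical; production use would rely on an established library such as `shap` rather than this brute-force sketch, which is only feasible for a handful of features.

```python
from itertools import permutations
from math import factorial

def shapley_values(score, features, baseline):
    """Exact Shapley attribution: average each feature's marginal
    contribution to the score over all feature orderings."""
    names = list(features)
    phi = {n: 0.0 for n in names}
    for order in permutations(names):
        current = dict(baseline)
        prev = score(current)
        for n in order:
            current[n] = features[n]  # "switch on" this feature's real value
            now = score(current)
            phi[n] += now - prev
            prev = now
    k = factorial(len(names))
    return {n: total / k for n, total in phi.items()}

# Hypothetical linear risk scorer over three borrower features
def risk_score(f):
    return 0.5 * f["debt_to_income"] + 0.1 * f["late_payments"] - 0.2 * f["cash_reserves"]

borrower = {"debt_to_income": 0.8, "late_payments": 3, "cash_reserves": 1.0}
baseline = {"debt_to_income": 0.3, "late_payments": 0, "cash_reserves": 2.0}
phi = shapley_values(risk_score, borrower, baseline)
```

The attributions always sum to the difference between the borrower's score and the baseline score, which is the property that makes the explanation defensible: every point of the score is accounted for by some feature.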

Synthetic data 

Definition: Synthetic data is artificially generated data that mimics real-world data without directly using actual customer information. 

Why it matters for bankers: It is often used for testing, training, or development while reducing privacy and data security risks. 

Banking example: Creating realistic transaction datasets to train or test fraud models without exposing customer data.
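A synthetic dataset can be as simple as drawing records from chosen distributions. The merchant categories, amount distribution, and fraud rate below are fabricated for illustration; realistic synthetic data generation also has to preserve the statistical relationships the downstream model depends on.

```python
import random

def synthetic_transactions(n, seed=42):
    """Generate synthetic card transactions that mimic broad real-world
    patterns without using any actual customer data (all values fabricated)."""
    rng = random.Random(seed)  # fixed seed so test datasets are reproducible
    merchants = ["grocery", "fuel", "online_retail", "restaurant"]
    rows = []
    for i in range(n):
        rows.append({
            "txn_id": f"SYN-{i:06d}",
            "merchant_type": rng.choice(merchants),
            "amount": round(rng.lognormvariate(3.5, 0.8), 2),  # right-skewed, like real spend
            "is_fraud": rng.random() < 0.01,  # ~1% fraud label for model testing
        })
    return rows

sample = synthetic_transactions(500)
```

Because no record corresponds to a real customer, a dataset like this can move between development, testing, and vendor environments with far less privacy risk than production data.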