What is parallel testing: Why it matters in model risk governance

Terri Luttrell, CAMS-Audit, CFCS
February 26, 2026

Why parallel testing is important in risk governance

Technology change is a constant for financial institutions. Whether the shift involves financial crime monitoring, lending platforms, portfolio risk, or asset/liability management models, new systems promise efficiency and insight but also introduce risk. Parallel testing when implementing new software is one of the most practical ways institutions can manage that risk while maintaining day-to-day operations and meeting model risk management expectations.

What is parallel testing?

Parallel testing is the practice of running a legacy system and a new system simultaneously using the same data. The goal is not speed; the goal is confidence. Running both systems concurrently allows teams to verify data integrity, logic accuracy, and workflow performance without disrupting production. By comparing outputs side by side, institutions can validate that the new system performs as expected before entirely relying on it.

At its core, parallel testing answers a simple question: If the institution relies on the new system today, would outcomes change in a way that introduces risk? For example, alerts generated in a financial crime system, allowance calculations in a CECL process, or data outputs used for regulatory reporting should align closely between systems once configuration and tuning are complete. Differences are expected early in testing, but they should be explainable, documented, and resolved before going live. The new system must perform at least as well as the legacy system to support strong model risk management.
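
The side-by-side comparison described above can be sketched in a few lines of code. This is a minimal illustration, not a prescribed implementation: it assumes each system can export its outputs (alerts, scores, calculated values) keyed by a common identifier, and all field names and sample values are hypothetical.

```python
# Minimal sketch: compare outputs from a legacy and a new system run on the
# same input data. Keys might be alert IDs; values might be risk ratings.
# All names and data here are illustrative, not from any specific product.

def compare_outputs(legacy: dict, new: dict) -> dict:
    """Return items unique to each system plus shared items whose values differ."""
    legacy_keys, new_keys = set(legacy), set(new)
    return {
        "only_in_legacy": sorted(legacy_keys - new_keys),  # possible coverage loss
        "only_in_new": sorted(new_keys - legacy_keys),     # possible new coverage (or noise)
        "value_mismatches": sorted(
            k for k in legacy_keys & new_keys if legacy[k] != new[k]
        ),
    }

legacy_alerts = {"A1": "high", "A2": "low", "A3": "high"}
new_alerts    = {"A1": "high", "A3": "medium", "A4": "low"}

diff = compare_outputs(legacy_alerts, new_alerts)
print(diff)
# {'only_in_legacy': ['A2'], 'only_in_new': ['A4'], 'value_mismatches': ['A3']}
```

In practice this comparison would run over full production extracts on a scheduled basis during the overlap period, with each category of difference feeding the gap analysis described below.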

Gap analysis: Explaining system differences

A gap analysis is a natural extension of parallel testing and helps explain why differences appear between systems. By reviewing alerts, calculations, reports, or outputs side by side, teams can identify where the new system behaves differently from the legacy system and determine whether those differences reflect improved risk coverage, configuration issues, or data limitations. Not every gap will require remediation, but every gap must be reviewed during a system conversion. Clear documentation of why a difference exists, how it affects risk coverage, and whether it is acceptable is essential for model risk governance before the institution relies on the new system in production.
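
The documentation discipline described above can be made concrete with a simple gap log. This is a hedged sketch under assumed categories: the cause and disposition labels below are illustrative, and an institution would define its own taxonomy within its model risk governance framework.

```python
# Illustrative gap-analysis log for parallel testing. Each difference found
# between the legacy and new systems is recorded with a cause and a
# disposition. Category names are hypothetical examples, not a standard.

from dataclasses import dataclass

CAUSES = {"improved_coverage", "configuration", "data_limitation", "unknown"}
DISPOSITIONS = {"accepted", "remediate", "under_review"}

@dataclass
class GapRecord:
    item_id: str      # alert, calculation, or report identifier
    description: str  # what differs between the two systems' outputs
    cause: str        # why the difference exists
    disposition: str  # whether the gap is acceptable or needs remediation

    def __post_init__(self):
        if self.cause not in CAUSES:
            raise ValueError(f"unknown cause: {self.cause}")
        if self.disposition not in DISPOSITIONS:
            raise ValueError(f"unknown disposition: {self.disposition}")

def open_gaps(log):
    """Gaps not yet accepted -- these must be resolved before go-live."""
    return [g for g in log if g.disposition != "accepted"]

log = [
    GapRecord("A3", "risk score differs between systems", "configuration", "remediate"),
    GapRecord("A4", "new alert with no legacy equivalent", "improved_coverage", "accepted"),
]
print(len(open_gaps(log)))  # 1
```

Validating entries against fixed category sets forces every gap to receive an explicit cause and disposition, which mirrors the requirement that every difference be explainable and documented before go-live.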

Benefits of running systems in parallel  

From a regulatory perspective, parallel testing demonstrates sound model risk management in banking operations. The OCC handbook describes a model as “a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.”

Financial institutions are expected to understand how system changes affect risk coverage, decision-making, and compliance outcomes. Model comparison testing shows that leadership took reasonable steps to prevent gaps, missed activity, or unintended consequences.

This evidence of control is essential in areas subject to heightened scrutiny, such as anti-money laundering, fraud detection, fair lending, and regulatory reporting. Parallel testing regulatory models helps demonstrate that critical functions, such as suspicious activity monitoring, continue without interruption, that reports remain accurate, and that reliable data support risk ratings and model-driven decisions.


Regulatory expectations for parallel testing

Supervisory Guidance on Model Risk Management (SR Letter 11-7) describes the key aspects of effective model risk management and sets regulatory expectations. The guidance does not explicitly mandate parallel testing when models are implemented, but the expectation is clear. When institutions modify existing models or install new ones to reflect new data, techniques, or performance concerns, regulators expect meaningful evidence that the changes improve results.

The guidance highlights parallel testing, or parallel outcomes analysis, as an essential approach for identifying gaps in new models. If the new or adjusted model does not demonstrate stronger performance, the institution should recognize that further refinement may be necessary before replacing the original model. In practice, this reinforces the case for running models in parallel, supporting sound model governance and defensible decision-making. It demonstrates that financial institutions aren’t changing models for change’s sake.  

The guidance also addresses documentation requirements. Clear records of testing scope, issue resolution, and management approval not only help examiners understand what changed but also provide proof of continuity and control.

Beyond compliance: Testing’s operational benefits

While compliance is often the initial driver, parallel testing delivers operational value across all pillars of the institution. It gives teams time to learn new workflows, identify training needs, and fine-tune processes before the pressure of full adoption.

It also creates space for informed decision-making. Differences between systems can reveal data quality issues, process inconsistencies, or risk assumptions that might otherwise have gone unnoticed. Addressing those findings strengthens the overall program, not just the new technology.

Most importantly, parallel testing protects customers, members, and communities. Whether the institution is monitoring transactions, underwriting loans, or managing portfolio risk, accurate systems support fair, consistent, and timely outcomes.


Testing systems with a risk-based approach

There is no one-size-fits-all timeline or scope for parallel testing. Institutions should tailor their approach based on size, complexity, product mix, and risk profile. Higher-risk activities typically warrant broader testing and more extended overlap periods, while lower-risk changes may require a more targeted effort.

What matters is intentionality. A defined plan, clear ownership, independent review, and documented conclusions all signal that the institution approached the transition thoughtfully. Download a checklist for parallel testing of AML/CFT systems for more information.

Building confidence before going live

Technology should make complex work more manageable, not more uncertain. Parallel testing helps bridge that gap by allowing institutions to move forward with confidence rather than blind hope. When done well, parallel testing supports continuity, strengthens model risk management, and reinforces trust at every level of the organization.

In an environment where change is constant, taking the time to validate before switching entirely is not a delay; it is a best practice.

FAQs

What is parallel testing in model risk governance?

Parallel testing is the practice of running a legacy system and a new system at the same time using the same data to compare outputs and verify that the new system performs correctly before fully replacing the old one. It builds confidence in model performance and reduces operational risk.

Why does parallel testing matter for financial institutions?

Parallel testing matters because it helps institutions manage risk when implementing new models or software by ensuring data integrity, logic accuracy, and consistent decision outcomes without disrupting production operations. It supports strong governance and defensible results.

What is a gap analysis in the context of parallel testing?

A gap analysis in parallel testing reviews differences between legacy and new system outputs to determine whether variations reflect improved risk coverage, configuration issues, or data limitations, ensuring that all differences are explainable and documented before go-live.

How does parallel testing support model risk governance?

In model risk governance, parallel testing provides documented evidence that new models or systems perform at least as reliably as legacy ones, helping institutions demonstrate control, continuity, and compliance to internal stakeholders and regulators.

What are the benefits of running systems in parallel before going live?

Running systems in parallel reduces risk by identifying data quality issues, process inconsistencies, and training needs early. It strengthens overall model governance and enhances confidence in risk decisions, protecting customers and supporting operational continuity.
About the Author

Terri Luttrell, CAMS-Audit, CFCS

Compliance and Engagement Director
Terri Luttrell is a seasoned AML professional and former director and AML/OFAC officer with over 20 years in the banking industry, working both in medium and large community and commercial banks ranging from $2 billion to $330 billion in asset size.


About Abrigo

Abrigo enables U.S. financial institutions to support their communities through technology that fights financial crime, grows loans and deposits, and optimizes risk. Abrigo's platform centralizes the institution's data, creates a digital user experience, ensures compliance, and delivers efficiency for scale and profitable growth.

Make Big Things Happen.