Is your financial institution ready for AI?
For banks and credit unions, the idea of model validation is nothing new. As manual processes like loan decisioning have become increasingly automated, it has also become necessary to test and monitor those automated systems to ensure accuracy.
While regulators have traditionally moved (eventually) to formalize model validation requirements, today’s technology landscape has introduced new factors in the form of far more sophisticated machine learning (ML) and artificial intelligence (AI) capabilities. ML and AI are no longer future technologies; today’s banks and credit unions are already exploring what these systems can do.
While the technology does offer significant solutions and benefits, it can also come with serious risks. For many financial institutions, AI-enabled functionality is being deployed on a broader scale, not only for back-office applications like loan decisioning but also for areas like pricing strategies and even the teller station (both online and in the branch). Because its degree of use varies so much among financial institutions, effectively evaluating its risk can prove difficult.
What’s more, as this technology’s development and adoption have grown, so has regulatory scrutiny. Earlier this year, the Federal Deposit Insurance Corporation (FDIC) issued a request for information (RFI) on “Standard Setting and Voluntary Certification for Models and Third-Party Providers of Technology and Other Services” to consider whether the industry can create a set of standards for AI and ML, and whether the creators of these programs (whether financial institutions or third parties) can self-regulate through voluntary certification. In either case, those creators will need to demonstrate capabilities in two key areas of reporting:
- Transparency: How easily a reviewer can assess a model’s structure, equations, data and assumptions used
- Explainability: Both internal knowledge of how the applications work and the ability to explain how and why a model arrived at a particular decision or outcome (a simple illustration of such an explanation appears below)
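To make “explainability” a little more concrete, the sketch below shows one simple form a per-decision explanation can take when the scoring model is linear (for example, a logistic regression): each feature’s contribution to the score is just its coefficient multiplied by the applicant’s value for that feature. The model type, helper name and column handling are illustrative assumptions, not a description of any particular vendor’s system.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def explain_decision(model: LogisticRegression,
                     applicant: pd.Series,
                     feature_cols: list[str]) -> pd.DataFrame:
    """Per-decision explanation for a linear scoring model (illustrative).

    For a fitted logistic regression, each feature's contribution to the
    log-odds of approval is coefficient * feature value, so the "why"
    behind a single decision can be reported directly.
    """
    values = applicant[feature_cols].to_numpy(dtype=float)
    contributions = model.coef_[0] * values
    return (pd.DataFrame({"feature": feature_cols,
                          "contribution": contributions})
              .sort_values("contribution", key=np.abs, ascending=False))

# Usage (hypothetical): explain_decision(fitted_model, one_applicant_row, feature_cols)
```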
While interest is currently focused on self-regulation, findings from RFIs and other studies could lead to much more permanent regulations in the very near future. This means it is no longer enough for banks and credit unions simply to recognize the benefits of implementing these systems; they must also understand how the models actually work, as well as the possible risks associated with them.
These risks can be very damaging for a financial institution from regulatory, legal and reputational standpoints. Customers and members trust their banks and credit unions to make fair decisions about their services. However, if a model begins showing bias or improperly weighting certain criteria (for example, specific ZIP codes in loan decisioning), it can create a host of problems. Adopting a “trust but verify” approach to model validation and to the evaluation of AI- and ML-enabled vendor software packages is the best way to mitigate these risks.
Banks and credit unions have a lot to lose, especially in the public eye, should bias show up within an algorithm. As Apple learned from the claims of sexism related to credit decisioning for its Apple Card, simply blaming the AI will not work. The trust customers and members place in their financial institution can dissolve quickly if something goes wrong, and repairing those relationships can be difficult. Not only must financial institutions have a deep understanding of how their AI’s machine learning works, but this knowledge (and scrutiny) should also extend to the fintech partners providing these models.
Whether the technology is developed internally or by a third party, financial institutions leveraging it should employ ongoing bias testing (a sort of “AI quality control”) to monitor for biases that may develop over time. This should be coupled with regression testing to help ensure the system’s algorithm is not inadvertently learning bad habits. Continuous evaluation is key: it greatly increases the odds of spotting anomalies or problematic variances in specific areas (e.g., ZIP code or gender) and allows financial institutions to address them quickly.
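As a rough illustration of what that quality control might look like in practice, the Python sketch below compares approval rates across the groups defined by a single attribute and flags any group whose rate falls below four-fifths of the most favored group’s rate (the widely used “four-fifths rule”). The data source, column names and threshold are hypothetical placeholders, not a prescribed implementation.

```python
import pandas as pd

def disparate_impact_check(decisions: pd.DataFrame,
                           group_col: str,
                           approved_col: str = "approved",
                           threshold: float = 0.8) -> pd.DataFrame:
    """Compare approval rates across groups (illustrative sketch).

    `approved_col` is assumed to hold 1/0 (or True/False) decisions.
    Any group whose approval rate falls below `threshold` times the
    highest group's rate gets flagged for review.
    """
    rates = decisions.groupby(group_col)[approved_col].mean()
    report = pd.DataFrame({
        "approval_rate": rates,
        "ratio_to_best": rates / rates.max(),
    })
    report["flagged"] = report["ratio_to_best"] < threshold
    return report.sort_values("ratio_to_best")

# Hypothetical monthly run against recent decisions:
# recent = pd.read_csv("loan_decisions_last_30_days.csv")
# print(disparate_impact_check(recent, group_col="zip_code"))
# print(disparate_impact_check(recent, group_col="gender"))
```

Run periodically, a check like this gives examiners and internal auditors a simple, repeatable artifact showing that approval outcomes are being watched across sensitive attributes over time.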
For a model of how to approach this monitoring, financial institutions can look to how they already evaluate their data streams for Bank Secrecy Act/Anti-Money Laundering (BSA/AML) compliance, or how they meet card-issuing requirements under Unfair, Deceptive or Abusive Acts or Practices (UDAAP) rules.
Bias testing is a critical piece, but it is really the first of many steps in an effective model validation process. The model itself must also be scrutinized, reviewing its concept, performance and overall assumptions. This includes how its features and criteria are selected and distributed, how accurate it is, and how well it handles and adjusts to errors. The model’s data processing must also be evaluated, from the data’s quality and how it is managed and stored to how it integrates with other systems.
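As one way to picture those performance and data-quality checks, here is a minimal sketch, assuming a scikit-learn-style classifier and a pandas DataFrame of recently scored loans with known outcomes, that flags model inputs with excessive missing data and performance drift against a documented baseline. The thresholds, column names and baseline figure are assumptions for the example only.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def validate_model_snapshot(model, recent: pd.DataFrame,
                            feature_cols: list[str], label_col: str,
                            baseline_auc: float,
                            max_auc_drop: float = 0.05,
                            max_missing_rate: float = 0.02) -> dict:
    """One pass of an ongoing validation routine (illustrative).

    Checks data quality (missing values in model inputs) and performance
    (AUC degradation versus the baseline documented at model approval).
    """
    issues = []

    # Data quality: how much of each model input is missing?
    missing = recent[feature_cols].isna().mean()
    for col, rate in missing.items():
        if rate > max_missing_rate:
            issues.append(f"{col}: {rate:.1%} missing exceeds {max_missing_rate:.0%}")

    # Performance: has the model's predictive power drifted from the baseline?
    scored = recent.dropna(subset=feature_cols + [label_col])
    probs = model.predict_proba(scored[feature_cols])[:, 1]
    auc = roc_auc_score(scored[label_col], probs)
    if baseline_auc - auc > max_auc_drop:
        issues.append(f"AUC fell from {baseline_auc:.3f} to {auc:.3f}")

    return {"auc": auc, "missing_rates": missing.to_dict(), "issues": issues}
```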
In addition, the model’s reporting and governance capabilities should be reviewed to confirm that its error reporting functions and data maintenance controls work as intended.
Finally, the model’s overall security must be reviewed and validated, especially its risk management practices and the policies and controls that keep sensitive data and assets confidential.
Today’s technology is evolving rapidly, and while these innovations can offer a host of benefits for financial institutions, they also require greater levels of scrutiny and understanding. With accurate, effective validation of systems leveraging AI and ML capabilities, financial institutions can enhance their services, strengthen customer and member relationships and ensure they are ready both for today and for what comes next.