

Richard Pace

Dark Skies Ahead: The CFPB's Brewing Algorithmic Storm

Updated: May 6



In a year filled with many inflection points, perhaps one of the sharpest has been the CFPB's shift in regulatory tone on algorithmic-based consumer lending. In prior years, the Bureau struck a hopeful, if cautious, tone about the promise of artificial intelligence, machine learning, and alternative data to expand financial access to historically disadvantaged consumers. Under the Biden administration and new CFPB leadership, however, the Bureau's actions convey a growing unease with the potential risks and harms of these financial innovations.


In this blog post, I examine these shifting winds in the CFPB's sentiment towards algorithmic-based lending - particularly since the start of 2022. Then, based on recent CFPB actions, I describe two potential dangers brewing on the horizon for those lenders and technology companies developing and/or using these complex algorithms.


The seeds of the storm: 2022 begins with the CFPB's focus squarely on algorithms


This year's shifting winds arguably began in February when the CFPB announced a proposed rule-making initiative addressing the Bureau's growing concerns over algorithmic bias adversely impacting automated home appraisal values in minority neighborhoods, and the corresponding harmful effects of such bias on minority mortgage credit decisions and credit costs. Under the proposed rule, covered institutions would be required to:


"establish policies, practices, procedures, and control systems to ensure that their AVMs [automated valuation models] comply with applicable nondiscrimination laws" as "without proper safeguards, flawed versions of these models could digitally redline certain neighborhoods and further embed and perpetuate historical lending, wealth, and home value disparities."


Further concerns about algorithmic bias emerged a month later, in March, when the Bureau stunned the industry by announcing its intent to significantly broaden its anti-discrimination efforts - without prior notification, a formal rule-making process, or a proposed future implementation date - through the application of the Consumer Financial Protection Act's UDAAP provisions to non-lending products and processes. While the scope of this action is quite broad, algorithmic-based lending is squarely in its crosshairs. Per the announcement,

"...the CFPB will undertake to focus on the widespread and growing reliance on machine learning models throughout the financial industry and their potential for perpetuating biased outcomes."

And, as just one example, the Bureau states:

"...certain targeted advertising and marketing, based on machine learning models, can harm consumers and undermine competition. Consumer advocates, investigative journalists, and scholars have shown how data harvesting and consumer surveillance fuel complex algorithms that can target highly specific demographics of consumers to exploit perceived vulnerabilities and strengthen structural inequities. We will be closely examining companies’ reliance on automated decision-making models and any potential discriminatory outcomes."


The storm begins to form: Supervisory authorities are broadened to include fintechs and technology companies


The following month, in April, the Bureau announced that it would invoke its authority to expand its supervision activities to nonbank financial companies that pose significant risks to consumers - noting that: (1) many nonbanks "operate nationally and brand themselves as “fintechs,”" and (2) "Such risky conduct may involve, for example, potentially unfair, deceptive, or abusive acts or practices, or other acts or practices that potentially violate federal consumer financial law."


While not targeted specifically at algorithmic bias, the risks noted by the Bureau would certainly encompass the data-driven algorithms used to support various aspects of consumer financial activities - such as marketing/advertising, AVMs, payments, and credit underwriting/pricing. Further, this announcement of expanded supervisory authority also dovetails with: (1) the Bureau's March 2022 announcement of its intent to broaden its anti-discrimination efforts beyond core lending activities under its UDAAP authorities (see previous section), and (2) the Bureau's December 2021 call for tech workers to serve as whistleblowers "who have detailed knowledge of the algorithms and technologies used by companies and who know of potential discrimination or other misconduct within the CFPB’s authority."


Flashes of lightning: The regulatory tone turns more critical of AI-based credit algorithms


Finally, in May and June, the Bureau made two announcements directly impacting AI-based credit algorithms. First, the Bureau released a new Consumer Financial Protection Circular 2022-03 clarifying its position on Adverse Action Notices for credit decisions derived from complex algorithms. In particular, the Bureau states:


"Some creditors may make credit decisions based on certain complex algorithms, sometimes referred to as uninterpretable or “black-box” models, that make it difficult—if not impossible—to accurately identify the specific reasons for denying credit or taking other adverse actions"


and reminds lenders that:


"ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions."


A couple of weeks later, the Bureau announced that it had terminated arguably the most well-known No Action Letter ("NAL") issued under the prior administration - specifically, the NAL to Upstart Network ("Upstart") that covered Upstart's AI-based credit underwriting and pricing models relative to potential ECOA/Regulation B regulatory actions. While the termination was described as having been requested by Upstart so that it could modify its algorithms expeditiously in response to the changing economic environment, it's pretty clear from the Bureau's Order that current CFPB leadership was also supportive of ending the NAL - a position that dovetails with the recent decommissioning of the Bureau's Office of Innovation, which sponsored the NAL and regulatory sandbox "safe harbors". With this move, the Bureau's experiment in working cooperatively with industry participants to align algorithmic-based lending with existing consumer protection requirements appears to be at an end (this experiment is discussed further below).


The Forecast


While no one (outside of the CFPB) knows exactly what its future enforcement plans may be, a closer analysis of its recent announcements suggests some potentially strong headwinds for algorithmic-based lenders around the corner. Below, I share my thoughts on what these risks may be.

"Black Box" credit models that require post-hoc explainability tools to populate Adverse Action Notifications may not be considered compliant with ECOA and FCRA.

To appreciate the basis of this risk, context is important. Going back to the Bureau's July 2020 blog post "Providing adverse action notices when using AI/ML models," we see - at that time and under a prior Administration - the Bureau adopting a relatively optimistic and cooperative, yet cautious, tone regarding: (1) the potential for algorithmic-based credit models to improve financial access for currently unscorable consumers, and (2) how such models may comply with existing consumer protection regulations - particularly, ECOA's adverse action notifications. The Bureau's stated goal to the industry, simply put:

"By working together, we hope to facilitate the use of this promising technology to expand access to credit and benefit consumers."

One of the ways the Bureau sought to achieve this goal - given the regulatory uncertainty surrounding AI/ML-based credit models - was via tools made available through the CFPB's Office of Innovation. More specifically, through the creation of regulatory "safe harbors" that included No Action Letters and regulatory sandboxes, the Bureau intended to foster an environment where it could explore with lenders ways to align AI/ML-based credit decisions with existing regulatory notification requirements - particularly in the following areas:

  • "The accuracy of explainability methods, particularly as applied to deep learning and other complex ensemble models."


  • "How to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers, including how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason."


This announcement, the Bureau's NAL with Upstart Network, and its 2017 Request for Information ("RFI") Regarding Use of Alternative Data and Modeling Techniques in the Credit Process represented positive, proactive efforts to work cooperatively with industry participants to explore whether, and how, algorithmic-based lending could achieve the goal of increased financial access in a regulatory-compliant manner.

Now, however, through its release of Consumer Financial Protection Circular 2022-03, the Bureau appears to be backtracking on its July 2020 blog post - going so far as to include a new header to the post stating that it:

"conveys an incomplete description of the adverse action notice requirements of ECOA and Regulation B"

and to refer readers to the new Circular. Additionally, by also decommissioning the Office of Innovation's "safe harbors", the Bureau appears to be moving toward a less flexible and cooperative approach to aligning algorithmic-based lending models with ECOA compliance requirements. In the 2022-03 Circular, for example, the Bureau makes clear that:

"ECOA and Regulation B do not permit creditors to use complex algorithms when doing so means they cannot provide the specific and accurate reasons for adverse actions."

Now, some would say that this simply represents a change in regulatory tone, but otherwise there is really nothing new in this Circular. However, evaluated through the lens of algorithmic model risk management, I believe that the Bureau may be communicating something new and far more consequential to the industry. Consider the following points from the Bureau's Circular (emphasis added by me).

  • FCRA and Regulation B adverse action disclosure requirements are different.


  • FCRA requires disclosure of


"up to four key factors that adversely affected the consumer’s credit score...",

while Regulation B requires that the statement of reasons for adverse action taken

“must be specific and indicate the principal reason(s) for the adverse action."

  • The Official Staff Interpretation to Regulation B states:


"“[t]he specific reasons disclosed . . . must relate to and accurately describe the factors actually considered or scored by a creditor." and "Disclosing the key factors that adversely affected the consumer’s credit score does not satisfy the ECOA requirement to disclose specific reasons for denying or taking other adverse action on an application or extension of credit."

First, let's focus on the term "accurate". What the Bureau may be alluding to here is that many AI/ML credit models are "black boxes" - meaning that they possess such a degree of mathematical complexity that one cannot explain why a particular individual received the estimated credit score (or credit decision) that they did. The industry has responded to this lack of transparency by developing a set of "explainability tools" - that is, a separate set of analytical devices designed to deconstruct a given model prediction into its individual parts and produce input-level "importance measures" that can be used to "explain" the model prediction based on its data inputs and, therefore, comply with the FCRA and ECOA adverse action notification requirements.
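To make the mechanics concrete, here is a minimal sketch - assuming a scikit-learn gradient boosting model and the shap package - of how input-level importance measures are commonly converted into ranked adverse action reasons. The feature names, data, and top-four convention are purely illustrative, not any particular lender's practice.

```python
# Minimal sketch (illustrative only): turning post-hoc importance measures
# into ranked adverse action reasons. Feature names and data are made up.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["utilization", "inquiries_6m", "months_on_file", "dti"]
X = rng.normal(size=(1000, 4))
# y = 1 denotes default, so positive contributions push toward denial
y = (X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

# Per-input attributions for one (hypothetically denied) applicant
applicant = X[:1]
contrib = explainer.shap_values(applicant)[0]

# Disclose the inputs that pushed hardest toward denial (FCRA-style "up to
# four key factors"); this ranking is an approximation, not an exact account.
top4 = np.argsort(contrib)[::-1][:4]
print([feature_names[i] for i in top4])
```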

With full knowledge of these explainability tools, the Bureau then plants a bomb in Footnote 1 of the Circular,

"While some creditors may rely upon various post-hoc explanation methods, such explanations approximate models and creditors must still be able to validate the accuracy of those approximations, which may not be possible with less interpretable models."

This statement is critical. It implies that if the lender's model is inherently non-interpretable and, therefore, requires a post-hoc explainability tool to deduce the drivers of its predictions, then the lender must validate the accuracy of these analytically-derived explanations. However, these explanations - which are typically generated by explainability tools such as SHAP and LIME - are not exact; they are approximations.[1]

So how can a lender evaluate their accuracy if such an evaluation requires knowledge of the "true" exact explanations - which are unknowable due to the black box nature of the model? Said differently, if a lender had knowledge of the true drivers of the prediction to evaluate the accuracy of the explainability tool, it wouldn't need the explainability tool. Therefore, the Bureau's validation expectation to ensure explanation "accuracy" may, in fact, preclude the use of algorithms for which exact explanations are not possible - a potentially fatal blow to most "black box" AI/ML models.
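The circularity is easy to see in code. In the sketch below (my own illustration, not the Bureau's), the model is a plain logistic regression, so exact per-input contributions are known in closed form and a KernelSHAP approximation can be benchmarked against them. For a genuinely black-box model, the `exact` line cannot be computed - which is precisely the validation gap.

```python
# Sketch of the validation dilemma. For an interpretable (linear) model the
# exact per-input contributions are known, so the post-hoc approximation can
# be checked; for a true black box, the "exact" benchmark does not exist.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -0.5, 0.8]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
background = shap.sample(X, 100)

# Exact Shapley values for a linear log-odds model with independent inputs:
# coef_j * (x_j - mean of background x_j)
exact = model.coef_[0] * (X[0] - np.asarray(background).mean(axis=0))

# Post-hoc approximation of the same quantities via KernelSHAP
approx = shap.KernelExplainer(model.decision_function, background).shap_values(X[0])

print("max abs approximation error:", np.abs(exact - approx).max())
```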


While many AI/ML model practitioners may react to this conclusion by suggesting a migration to inherently interpretable model architectures in which exact explanations are feasible, I note that this - unfortunately - may not address the Bureau's broader potential concerns laid out in the Circular. I discuss this more fully below.

Even with exact explainability, AI/ML credit models relying on complex interactions and/or alternative data may not be considered compliant with ECOA and FCRA.

Let's start here by considering why the Bureau made a distinction between "reasons" and "factors" in its Circular.


Complex Interactions: ECOA's "Reasons" vs. FCRA's "Factors"


In its Circular, the Bureau appears to stress the term "specific reasons" for ECOA adverse action notifications and to differentiate this term from FCRA's "key factors that adversely affected the consumer's credit score." Additionally, in the Bureau's July 2020 blog post, the Bureau expresses the desire to explore


"How to convey the principal reasons in a manner that accurately reflects the factors used in the model and is understandable to consumers, including how to describe varied and alternative data sources, or their interrelationships, in an adverse action reason."


While these statements can have varied interpretations, one that is potentially problematic has to do with the difference between "reasons" and "factors" - as well as the phrase "or their interrelationships". This is because virtually all explanations of model predictions are grounded in the individual factors or data inputs that enter the model. For example, if a model has 20 individual data inputs, then the explanation for a given prediction would be based on the subset of these data inputs deemed "most important" to that decision. Under traditional logistic regression-based credit scoring models built on a relatively small set of data inputs that each affect a consumer's estimated credit score independently, there is an inherent logic to these input-based explanations.


However, today's much more complex AI-based credit algorithms use hundreds if not thousands of data inputs and leverage significant "nonlinear" interactions among those inputs to improve predictive accuracy. In these models, input-based explanations - even if completely accurate - may not be considered sufficient "reasons" for adverse actions under the Bureau's Circular. This is because the algorithm's creation of complex combinations of data inputs may be done to capture indirectly the effects of credit behaviors not directly represented by any individual input. That is, just as the Bureau is concerned that a complex combination of model inputs can potentially serve as a protected class proxy (and, therefore, inject disparate treatment into the model's predictions), so too can such complex combinations proxy for other non-demographic predictive drivers of consumer credit default behavior. Indeed, one of the primary benefits of algorithmic-based credit models is the ability of the algorithm to "learn" complex features that increase the model's predictive accuracy relative to simpler traditional models.


Unfortunately, complex AI models are currently subject to the same individual input-based explanation practice as traditional models - whether such explanations are exact (derived from inherently interpretable models) or approximations (derived from post-hoc explainability tools). What this means is that any "learned" feature based on a complex interaction of underlying data inputs is not explained or interpreted at the feature level; rather, the feature's explanatory power is typically decomposed into its individual data input components and then aggregated at the individual input level.[2] Accordingly, while input-based explanations may satisfy FCRA's credit scoring notification requirements (based on factors), such explanations may not be considered compliant by the Bureau with respect to ECOA's requirement for "specific reasons" for credit denial or other adverse action.[3]
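The decomposition described in Footnote [2] can be illustrated with SHAP interaction values, which (for tree ensembles) split each per-input attribution into main-effect and pairwise-interaction pieces. In the hypothetical sketch below, the signal is almost entirely an interaction between the first two inputs, yet the disclosed per-input values simply absorb it: each input's attribution equals the row sum of its interaction matrix, and the interaction never surfaces as a named "reason" of its own.

```python
# Sketch (illustrative): a learned interaction's credit is folded back into
# per-input attributions. Per-input SHAP values equal the row sums of the
# SHAP interaction matrix, so the x0-x1 interaction is split between x0 and
# x1 rather than disclosed as a feature-level "reason" of its own.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
y = X[:, 0] * X[:, 1] + 0.3 * X[:, 2]  # signal dominated by an interaction

model = GradientBoostingRegressor().fit(X, y)
explainer = shap.TreeExplainer(model)

phi = explainer.shap_values(X[:1])[0]                  # per-input attributions
phi_int = explainer.shap_interaction_values(X[:1])[0]  # inputs-by-inputs matrix

# The symmetric off-diagonal entries each carry half the interaction effect
print("x0-x1 interaction credit:", 2 * phi_int[0, 1])
print("row sums equal per-input values:", np.allclose(phi, phi_int.sum(axis=1)))
```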


Alternative Data: "Non-Causal" Specific Reasons May Not Be Considered Compliant with ECOA


In its Circular, the Bureau provides the following context related to ECOA's "specific reasons" (emphasis is mine):

  • "ECOA’s notice requirements “were designed to fulfill the twin goals of consumer protection and education.""


  • "The notice requirement “fulfills a broader need” as well by educating consumers about the reasons for the creditor’s action. As a result of being informed of the specific reasons for the adverse action, consumers can take steps to try to improve their credit status."


In this context, the term "specific reasons" means that the explanations are actionable - that is, the consumer can act upon the disclosed reason in an attempt to improve their credit status.


I note, however, that many of today's AI/ML credit models leverage alternative data to expand financial access to traditionally unscorable consumers. In some cases, such as data derived from the consumer's transaction accounts (i.e., checking / savings account), this alternative data has a logical and direct causal connection to the consumer's credit behavior - similar to the connection of traditional credit bureau data. However, other types of alternative data - even though predictive of credit performance - can have questionable linkages to causal credit behaviors as their predictive power tends to be driven by correlations; that is, these factors do not directly drive consumer credit behavior themselves, but instead exhibit explanatory power solely due to their correlation with such causal behaviors. As an example, suppose that one of a model's data inputs is whether the individual has a vehicle registration, and the lack of such registration is predictive of lower credit quality. Also suppose that this data input is one of the primary factors explaining an individual's adverse credit decision.


In this case, using this factor as a "specific reason" for the ECOA adverse action notice may be considered problematic under the Bureau's Circular as it is not a causal driver of the individual's lower predicted credit score; rather, this data input is serving as a proxy for the true underlying causal driver - perhaps lower income, lower assets, or employment instability.[4] Accordingly, identification of an alternative data input (such as the vehicle registration factor) as an ECOA denial reason may not meet the Bureau's definition of a "specific reason" since: (1) it is not the causal reason for denial, and (2) it may not be appropriately actionable from the Bureau's perspective since registering a vehicle in one's name will not directly improve the individual's credit status (unlike increasing income/assets or improving employment stability).
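A toy simulation (all names and parameters entirely hypothetical) makes the distinction tangible: income causally drives default, vehicle registration is merely correlated with income, yet a model that cannot see income loads heavily on the proxy - and the disclosed "reason" is actionable in name only.

```python
# Toy simulation of a correlated proxy standing in for a causal driver.
# All variable names and parameters are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
income = rng.normal(size=n)                                     # causal driver
has_registration = (income + rng.normal(scale=0.5, size=n) > 0).astype(float)
default = (-1.5 * income + rng.normal(size=n) > 0).astype(int)  # income-driven only

# The lender's model omits income (unavailable) and uses the proxy instead.
model = LogisticRegression().fit(has_registration.reshape(-1, 1), default)
print("coefficient on vehicle registration:", model.coef_[0][0])  # strongly negative

# "No vehicle registration" would surface as a denial reason, but registering
# a vehicle would not change the applicant's income or default risk.
```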


Grouping Related Reasons Into a Broader Adverse Action Category May Not Be Considered Compliant with ECOA


In the Circular, the Bureau appears to stress that ECOA adverse action notifications contain "accurate" and "specific" reasons that are "actually considered" (emphasis is mine):

  • "The Official Interpretations to Regulation B explain that “[t]he specific reasons disclosed . . . must relate to and accurately describe the factors actually considered or scored by a creditor.”"


  • "Moreover, while Appendix C of Regulation B includes sample forms intended for use in notifying an applicant that adverse action has been taken, “[i]f the reasons listed on the forms are not the factors actually used, a creditor will not satisfy the notice requirement by simply checking the closest identifiable factor listed.""


Consider the case where an AI/ML-based credit model contains a large number of data inputs (e.g., 1,000), where many such inputs relate to the same underlying credit behaviors (e.g., the 1,000 input factors ultimately reflect 20-50 individual credit default behaviors), and where there is a high level of correlation among the factors that relate to each individual credit default behavior. In this case, for the purposes of ECOA adverse action notification, a lender may group related factors together into broader "reason" categories to reduce the granularity of its adverse action codes; that is, rather than maintain 1,000 individual adverse action reasons, the lender "clusters" the 1,000 factors into 50 broader reason code categories and, therefore, reports to its applicants the same adverse action reason for all factors mapped to a given category - as sketched in the code below. This practice may also be adopted to protect the proprietary nature of the lender's algorithm by not disclosing the exact data attributes used to evaluate applicants - particularly if the lender considers some of these attributes to provide a unique competitive advantage.
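Here is a minimal sketch of that clustering practice, with hypothetical factor names, attribution values, and category mapping:

```python
# Minimal sketch (hypothetical names/mapping) of the clustering practice:
# per-input attributions are rolled up into broader reason-code categories,
# and only the category label is disclosed to the applicant.
from collections import defaultdict

# Per-input attributions for one denied applicant, e.g., produced by an
# explainability tool (values are illustrative)
contributions = {
    "num_inquiries_3m": 0.41,
    "num_inquiries_12m": 0.22,
    "revolving_utilization": 0.35,
    "card_utilization_peak_6m": 0.18,
}

# Lender-defined mapping from ~1,000 raw factors to ~50 reason categories
reason_map = {
    "num_inquiries_3m": "Too many recent credit inquiries",
    "num_inquiries_12m": "Too many recent credit inquiries",
    "revolving_utilization": "High utilization of revolving credit",
    "card_utilization_peak_6m": "High utilization of revolving credit",
}

by_category = defaultdict(float)
for factor, value in contributions.items():
    by_category[reason_map[factor]] += value

# The applicant sees the category label, not the specific input actually used.
for reason, score in sorted(by_category.items(), key=lambda kv: -kv[1]):
    print(f"{reason}: {score:.2f}")
```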


The potential issue with this practice is that the CFPB may not consider these broader categories to be "specific reasons" that were "actually considered" under ECOA as they lack the necessary detail of the underlying factor specifically used to deny the applicant credit.


* * *


ENDNOTES:


[1] See, for example, Aas et al., "Explaining individual predictions when features are dependent: More accurate approximations to Shapley values," Artificial Intelligence, Vol. 298, September 2021.


[2] While there are extensions of explainability tools that can, in some cases, separately quantify the relative importance of certain types of interaction effects, such methods are unable to interpret specifically what these interaction effects represent conceptually.


[3] Indeed, should an algorithm indirectly code or proxy a protected class attribute, the associated Adverse Action Notices may be considered inaccurate by the Bureau as they fail to describe the protected class attribute as a "specific reason" for denial. Instead, the learned protected class proxy is disaggregated into its individual data input components that are then aggregated into factor-based explanations.


[4] I ignore for this example whether such a factor may pose a disparate impact risk to the lender.


© Pace Analytics Consulting LLC, 2023.
