Predictive Models Can Lose the Plot. Here’s How to Keep Them on Track

Good insights into the mess that was Moody’s credit rating modeling around the time of the Global Financial Crisis. Introduces the concept of ‘algorithmic inertia’, which is just model risk with extra steps.

Reading notes

This article covers ==algorithmic inertia==, i.e., organizations failing to adapt their models to new realities. It is essentially a different look at model risk, using the failure of Moody’s credit rating models during the Global Financial Crisis as its main example.

Zillow

It starts with the example of Zillow (a real estate marketplace), which used its proprietary Zestimate model to predict property sale prices so that owners could sell directly to Zillow, which would then renovate and resell the property for a profit (a line of business called “Zillow Offers”). The model worked well until the highly volatile market of 2021, at which point Zillow lost an average of 25k USD on each property and had to write down ca. 900m USD.

M3 Subprime (Moody’s)

The author’s core research revolves around Moody’s models around the time of the Global Financial Crisis. Apparently, Moody’s went as far as establishing a process to revise model assumptions, but failed to follow through and actually change them.

The managing director of credit policy at Moody’s told a federal inquiry panel that he sat on a high-level structured credit committee that would have been expected to deal with issues like declining mortgage underwriting standards, but the ==topic was never raised==. “We talked about everything but … the elephant sitting on the table,” he said. {.callout}

Moody’s also tried to spend less on modeling, so when it introduced the M3 Subprime model, it extrapolated the loss curves from the M3 Prime model instead of developing new ones. This assumption proved too optimistic.
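
The article doesn’t show the mechanics, but the failure mode is easy to sketch. Below is a toy illustration (all numbers invented, not Moody’s actual curves or methodology) of why scaling a prime loss curve by a constant is optimistic: it preserves the prime curve’s shape, while a collapsing market bends the subprime curve upward nonlinearly.

```python
import numpy as np

# Hypothetical cumulative loss curves (% of pool balance lost after n
# months of seasoning). All numbers are invented for illustration.
months = np.array([12, 24, 36, 48, 60])
prime_losses = np.array([0.1, 0.3, 0.6, 0.9, 1.1])

# Naive approach: assume subprime follows the prime curve's *shape*,
# just scaled by a constant multiplier calibrated in a benign period.
SUBPRIME_MULTIPLIER = 3.0
subprime_est = SUBPRIME_MULTIPLIER * prime_losses

# What a crisis can look like instead: losses that diverge nonlinearly
# from the prime shape once underwriting standards collapse.
subprime_realized = np.array([0.5, 1.8, 4.5, 9.0, 14.0])

for m, est, real in zip(months, subprime_est, subprime_realized):
    print(f"month {m}: estimated {est:.1f}%, realized {real:.1f}%")
```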

Four sources of ==algorithmic inertia==

Buried Assumptions

The organization relies on old or stale assumptions for the algorithmic model despite recognizing significant changes in the environment.

Quote:

Failing to revisit fundamental assumptions undergirding inputs of the algorithmic model in light of changes in the environment contributes significantly to algorithmic inertia. {.callout}

The authors give the example of how Moody’s failed to recognize the deteriorating actual quality of its data as people found ways to artificially inflate their credit scores.

To me, this problem seems to overlap with the concept of fossilization in De Langhe & Fernbach, 2019.
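
One concrete practice this suggests (my sketch, not something from the article): monitor input distributions for drift, so that a buried assumption like “credit scores mean what they meant at development time” gets flagged automatically. A two-sample Kolmogorov-Smirnov test on illustrative data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical credit scores: the sample the model was built on vs. a
# recent batch where applicants have learned to inflate their scores.
dev_scores = rng.normal(650, 60, size=10_000)
recent_scores = rng.normal(685, 50, size=10_000)

# Two-sample Kolmogorov-Smirnov test: has the input distribution moved?
stat, p_value = ks_2samp(dev_scores, recent_scores)
if p_value < 0.01:
    print(f"Input drift detected (KS statistic = {stat:.3f}) -- "
          "the 'same' credit score no longer means the same thing.")
```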

Superficial Remodeling

The organization makes only minor modifications to the algorithmic model in response to substantive changes in the environment.

Quote:

This phenomenon occurs when organizations make only minor modifications to the […] model in response to substantive changes in the environment. {.callout}

They give an example of how Moody’s focused on profit to the detriment of its product’s accuracy.

Quote:

Moody’s response to these changes was to seek to capture more business in the rapidly growing market, so it fine-tuned the model to be “more efficient, more profitable, cheaper, and more versatile,” according to its chief credit officer — not to be more accurate. {.callout}

I see it relating to two key problems:

  • Organizations often struggle to change and adapt.
  • Risk and financial models (especially if used by banks) are exposed to a conflict of interest between short-term profitability and model accuracy.

Simulation of the Unknown Future

The organization overconfidently relies on the algorithmic model to predict the future environment.

Quote:

Moody’s constructed a simulation engine featuring 1,250 macroeconomic scenarios that enabled it to estimate […] future losses […]. However, the simulation engine was limited by its underlying structure and assumptions, so analysts did not consider the changes that were occurring, did not update scenarios, and failed to accurately represent the changing macroeconomic environment. {.callout}

In other words, the future keeps invalidating your assumptions, so you need to keep updating them.
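
A large scenario count also creates an illusion of coverage. Here’s a toy sketch (not Moody’s actual engine; all parameters invented) of why 1,250 scenarios drawn from a benign historical window can still contain zero crisis scenarios:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy scenario engine: 1,250 house-price-growth paths drawn from
# parameters fitted to a benign historical window (numbers invented).
N_SCENARIOS = 1_250
benign_mean, benign_std = 0.05, 0.02   # ~5% annual growth, low volatility

scenarios = rng.normal(benign_mean, benign_std, size=N_SCENARIOS)

# 1,250 scenarios sounds thorough, but a nationwide price decline of
# -15% sits ~10 standard deviations out and will simply never be drawn.
print(f"worst simulated scenario: {scenarios.min():+.2%}")
print(f"scenarios with falling prices: {(scenarios < 0).sum()} / {N_SCENARIOS}")
```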

Specialized Compartmentalization

Responsibilities for the algorithmic routine are divided among team members in distinct roles based on their specialized expertise.

Quote:

This situation arises when experts in different domains are involved in an algorithm’s design and use and there is no overarching single ownership or shared understanding of the model. {.callout}

Again, the problem of lacking ownership.

How to solve it

Expose data and assumptions

Quote:

Organizations should articulate and document the data used in their algorithmic models, including data sources and fundamental assumptions underlying their data selection decisions, which can have deleterious effects. {.callout}

==I’m not sure about this being a true solution.== The actual problem is lacking ownership of the model and its results. Without it, you’ll get the same thing that happened to agile: a lot of rituals and activities around rubber-stamping assumptions, and no actual impact.
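
To be fair, the documentation part is cheap to mechanize, and tying it to ownership would address my objection. A minimal sketch of an assumptions registry (the field names and the review-date idea are mine, not the article’s); note that each entry gets a named owner, not a committee:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """A documented model assumption with a named owner and review date."""
    description: str
    data_source: str
    owner: str               # a person, not a committee
    next_review: date

registry = [
    Assumption(
        description="Reported credit scores reflect true borrower quality",
        data_source="originator loan tapes",
        owner="jane.doe",
        next_review=date(2008, 1, 31),
    ),
]

def overdue(assumptions: list[Assumption], today: date) -> list[Assumption]:
    """Assumptions whose scheduled review has lapsed."""
    return [a for a in assumptions if a.next_review < today]
```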

Periodically redesign algorithmic routines

Quote:

Organizations should regularly redesign — and be willing to overhaul — their algorithmic model and reconsider how it fits into broader organizational routines. {.callout}

Yes and no. Overhauling models is immensely expensive, especially in banks, where it has implications for regulatory capital requirements.

The first question should be “Do we need a model at all?” and the second should be “What’s the simplest model we can get away with?” It’s way easier to change the assumptions or overhaul a model if it’s simple. It’s easier to explain it as well.
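
To illustrate what “simple” buys you: the textbook expected-loss decomposition (EL = PD × LGD × EAD) is about as small as a credit model gets, and every assumption is a single visible parameter.

```python
def expected_loss(exposure: float, pd: float, lgd: float) -> float:
    """Textbook credit risk: EL = PD x LGD x EAD.

    Every assumption is one explicit, challengeable parameter:
      pd       -- probability of default over the horizon
      lgd      -- loss given default (share of exposure lost)
      exposure -- exposure at default (EAD)
    """
    return exposure * pd * lgd

# Changing an assumption is a one-line diff, not a remodeling project.
print(expected_loss(exposure=1_000_000, pd=0.02, lgd=0.4))  # -> 8000.0
```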

Assume that the model will break

Quote:

To address algorithmic inertia associated with the simulation of an unknown future, it is important to assume that the model will break. Consider scenarios beyond the scope of the algorithmic model. {.callout}
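
One way to put “assume that the model will break” into practice (my sketch; the bounds and inputs are invented): wrap every prediction in a guard that flags when the model is extrapolating outside the envelope it was built on.

```python
def predict_with_guardrails(model, scenario, train_bounds):
    """Run the model, but flag inputs outside the training envelope."""
    out_of_scope = sorted(
        name for name, value in scenario.items()
        if not (train_bounds[name][0] <= value <= train_bounds[name][1])
    )
    prediction = model(scenario)
    status = f"EXTRAPOLATING on {out_of_scope}" if out_of_scope else "in scope"
    return prediction, status

# Invented envelope: house-price growth never fell below 0% in training.
train_bounds = {"hpa": (0.00, 0.12), "unemployment": (0.03, 0.08)}
crisis = {"hpa": -0.15, "unemployment": 0.10}

pred, status = predict_with_guardrails(lambda s: 0.02, crisis, train_bounds)
print(status)  # EXTRAPOLATING on ['hpa', 'unemployment']
```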

Build bridges between data scientists and domain experts

And finally, the herculean task of getting engineers to talk to the business, and vice versa.

Quote:

Organizations must create processes for data scientists (or quants - jev) and domain experts to work closely together to design their algorithmic routines. {.callout}

And how to do it in practice:

Quote:

One structural bridge-building practice […] is establishing a position such as a product manager. This should be held by one individual with both domain and data science experience who has direct responsibility for overseeing algorithmic routines. {.callout}

This (again) describes ownership. I don’t think coming up with another management position is the solution here. Ownership should be with someone who:

  • Is actually making business-relevant decisions, i.e., sits in a “business department”.
  • Is personally accountable for the model failing.

Verdict

Interesting insight into the mess of GFC-era Moody’s and its M3 credit rating models, but it could be improved by covering model risk more thoroughly.