To get the best model and the best results possible, you need to understand your data, be clear on what you're testing against, and constantly monitor and attend to your models.
With over 2.5 billion consumer accounts, Mastercard connects nearly every financial institution in the world and generates almost 75 billion transactions a year. As a result, the company has built over decades a data warehouse that holds “one of the best datasets about commerce really anywhere in the world,” says Ed McLaughlin, president of operations and technology at Mastercard.
And the company is putting that data to good use. The fastest growing part of Mastercard’s business today is the services it puts around commerce, says McLaughlin.
IDG’s Derek Hulitzky sat down with McLaughlin and Mark Kwapiszeski, president of shared components and security solutions at Mastercard, to discuss how the company turns anonymized and aggregated data into valuable business insights and their advice for getting the best results out of machine learning models.
Following are edited excerpts of their conversation. To hear directly from McLaughlin and Kwapiszeski and get additional insights, watch the full video embedded below.
Derek Hulitzky: Mastercard’s Decision Management Platform won our CIO 100 award in 2020. And it uses AI and data for fraud detection. Can you tell us more about the platform?
Mark Kwapiszeski: We use it for several purposes, primarily in our fraud products for creating things like fraud scores on transactions. But what’s really exciting about the platform is just the size and scale and scope of what it does. It’s built on about 900 commodity servers and it processes about 1.2 billion transactions per day at a rate of about 65,000 transactions per second, all of which it does in about 50 milliseconds per transaction.
It uses a lot of different AI technologies and techniques; it uses about 13 different algorithms, including things like neural networks, case-based reasoning, and machine learning. But it’s not just running one model at a time. We’ve actually built layers, where it can run multiple models at the same time, so that it can analyze all sorts of different variables within that transaction.
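The layered, multi-model scoring Kwapiszeski describes can be sketched in a few lines. Everything below is an illustrative assumption, not Mastercard's implementation: the model functions are trivial stand-ins for a neural network and case-based reasoning, and combining scores by taking the maximum is just one possible rule.

```python
# Hypothetical sketch of layered multi-model transaction scoring.
# The models and the max-score combination rule are illustrative
# assumptions, not Mastercard's actual DMP logic.

def neural_net_score(txn):
    # Stand-in for a trained neural network; flags unusually large amounts.
    return 0.9 if txn["amount"] > 5000 else 0.1

def case_based_score(txn):
    # Stand-in for case-based reasoning against known fraud patterns.
    return 0.8 if txn["merchant"] in {"known_bad_merchant"} else 0.2

LAYERS = [
    [neural_net_score],   # layer 1: fast initial screen
    [case_based_score],   # layer 2: deeper, pattern-based analysis
]

def score_transaction(txn):
    """Run every model in every layer and keep the highest risk score."""
    scores = [model(txn) for layer in LAYERS for model in layer]
    return max(scores)

risky = score_transaction({"amount": 9000, "merchant": "shop_a"})  # 0.9
safe = score_transaction({"amount": 20, "merchant": "shop_a"})     # 0.2
```

In a real system each layer would examine different variables of the transaction and the scores would feed a downstream decision engine; the point here is only the structure: several models, run together, on every transaction.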
Derek Hulitzky: You’ve described how your analytics models aren’t static, and that you continuously monitor them to understand what’s happening with a transaction and why it happened. Can you describe what you mean by that?
Mark Kwapiszeski: When you consider every transaction that we see, every interaction, it could be fraud or it could be a mom trying to buy medicine for her child. Every transaction matters. So we always have to know not only what happened, but the why behind it.
And while the models tend to get the headlines in conversations like this, to me it's all the stuff around the model that becomes really interesting: how do you know not only what happened but why it happened, and then how do you watch that over time for things like model drift?
One of the best ways to see if you have a model that is drifting is to put a challenger model in and watch it over a period of time. In fact, we've done that for upwards of a year, watching a model and comparing it to another one, so you really do get the best model and the best results possible.
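The champion/challenger comparison described above can be sketched as follows. This is a minimal illustration under assumed names and data: both models score the same labeled traffic, the challenger's output is observed rather than acted on, and accuracy against eventual fraud labels decides which model wins.

```python
# Hypothetical champion/challenger sketch. Model rules, transactions,
# and labels are illustrative assumptions, not real fraud data.

def compare_models(champion, challenger, transactions, labels):
    """Return each model's accuracy on the same labeled transactions."""
    champ_hits = sum(champion(t) == y for t, y in zip(transactions, labels))
    chall_hits = sum(challenger(t) == y for t, y in zip(transactions, labels))
    n = len(transactions)
    return champ_hits / n, chall_hits / n

champion = lambda t: t["amount"] > 1000    # current production rule
challenger = lambda t: t["amount"] > 500   # candidate replacement

txns = [{"amount": a} for a in (200, 700, 1500, 3000)]
labels = [False, True, True, True]         # True = later confirmed fraud

champ_acc, chall_acc = compare_models(champion, challenger, txns, labels)
# The challenger catches the 700-unit fraud the champion misses.
```

Run over months rather than four transactions, the same comparison reveals drift: if the challenger steadily outperforms the champion on fresh labels, the champion's view of the world has gone stale.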
Derek Hulitzky: So Mark, you talked about drift. Can you talk a little bit, Ed and Mark, about how you solve for that, how you react to it?
Ed McLaughlin: I think people often use the wrong metaphor when they talk about AI and modeling. They use more of a code metaphor, where you build it, you run it, and it stays fairly static until you end-of-life it sometime down the road. Whereas we find these models need to be constantly attended to and monitored.
Mark Kwapiszeski: Yeah, it manifests itself in two ways. We have an entire analytic environment dedicated to tracking what those outputs and results were. And then we look to marry that up with the actual end result of a transaction, because often we won't know whether an approved transaction actually turns out to be fraud until sometime later.
So our data scientists take that fraud information and the signals we're getting, compare it back to the analytic information on the fraud scores the DMP [Decision Management Platform] is putting out, and then they constantly look to tweak those two things to find the right balance.
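That feedback loop, recording scores at decision time and joining them later against confirmed fraud outcomes, can be sketched briefly. The field names, scores, and threshold below are all illustrative assumptions, not Mastercard's data or tuning.

```python
# Hypothetical sketch of the delayed-label feedback loop: scores logged at
# decision time are joined with fraud confirmations that arrive later.
# All values and the 0.8 threshold are illustrative assumptions.

scored_at_decision_time = {
    "txn1": 0.92,  # fraud score when the transaction was processed
    "txn2": 0.15,
    "txn3": 0.70,
}

# Outcomes learned later, e.g. from chargebacks or cardholder reports.
confirmed_later = {"txn1": True, "txn2": False, "txn3": False}

def evaluate(scores, outcomes, threshold=0.8):
    """Join delayed labels onto scores; return hits, false alarms, misses."""
    flagged = {t for t, s in scores.items() if s >= threshold}
    fraud = {t for t, y in outcomes.items() if y}
    return flagged & fraud, flagged - fraud, fraud - flagged

true_pos, false_pos, missed = evaluate(scored_at_decision_time, confirmed_later)
```

Tweaking the threshold (and retraining the models behind the scores) against these joined results is how the balance Kwapiszeski describes, catching fraud without declining good transactions, gets tuned over time.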
Ed McLaughlin: One final thing I would add: if you want to make sure you're not drifting, you have to be clear on your concepts. You probably remember, just as a consumer, as a cardholder, years ago there were a lot of declines, a lot of really blunt rules out there, because the emphasis was fighting fraud. Now what we're saying is … [make] sure as much good stuff gets through as it can, while you fight the fraud simultaneously.