Learn to Trust Your Data and Analytics: A Q&A With KPMG’s Bill Nowacki

With trust comes both the agility and speed that you need to operate at today's breakneck pace.


Financial executives continue to mistrust their data and analytics, which can result in slower decision-making. FEI Daily spoke with Bill Nowacki, Managing Director, Data and Analytics at KPMG, about the firm's recent survey on building trust in analytics.

FEI Daily: What surprised you about the survey results?

Bill Nowacki: The surprise is that while everybody seems to acknowledge that analytics are important in business, or maybe more generally in life, the low levels of trust seem to start at the top. When you look at the results, you see surprisingly few executives saying they have supreme confidence and trust in the analytics, and I think that was surprising to us. We're left to ponder why, because it does seem incongruent with the reality of their businesses. We suspect it's because of the black box nature of analytics. As the industry has moved to tackle more complex questions, and the complexity arises, by the way, out of the complexity of markets, practitioners are forced to use more and more sophisticated methods and to look at bigger and bigger data universes, and oftentimes the analytics lend themselves to black box approaches. You'll hear of approaches like neural networks, random forests, or deep learning, and while they solve the problems and answer the questions, they do so in a very black box way.

FEI Daily: Where is the mistrust coming from, and what's the impact of that level of mistrust?

Nowacki: I can divide it into institutional trust, the trust within the institution, and public trust, because both influence the people at the top of organizations. Let's start with public trust. You and I might be CEOs or CFOs of companies, but we're also citizens and consumers. Therefore, if we believe we're being snooped on, say we're on our mobile phone and suddenly AT&T says to us, "Hey, I noticed you were just on the Amazon website, here's another offer," we learn not to trust.
That bleeds into our perception of analytics as professionals. Increasingly, we're seeing executives having to make big decisions, and in order to be successful they have to make decisions about markets they've never been in before, locations they've never visited before, and clientele with whom they've never traded before. Because so much of this is flying blind as our world gets bigger and more knitted together, suddenly they have to trust this intelligent agent, this algorithm, to help them make the decision. Think about your success as a decision-maker being predicated on getting it right or getting it wrong, while having to trust a black box, in a new market, with people you've never seen before.

Let's take the example of a borrower: a Millennial who has never had a credit card, only a debit card. She's never owned a car; she uses public transportation or Uber. She has never had a mortgage, and suddenly she wants a secured line of credit. By the way, she's coming in through a portal, not through a branch, so we don't even see her. Suddenly, the bank's senior lending officers and chief risk officers need to make a decision about her, and their success as a bank depends on the ability to make this decision right, along with a hundred thousand others like it. If you think about it, how do you answer the question of whether this borrower is a good or a bad credit risk? The answer is that you have to find and use unusual data. What the industry is learning how to do, and by the industry I mean both the bank and firms like KPMG that serve the bank, is to find proxies for traditional credit artifacts and credit data. We're finding advanced methods that let us take very sparse data and weave it together into a reliable credit score. Now, this simply exacerbates the problem.
There's already this worry: "I'm making lending decisions about a person who doesn't have any of the traditional information, who has no real experience with credit, and I'm using all this extrapolation." I think that weighs on the worry, or contributes to the burden, of the executive trying to be successful in making these decisions.

FEI Daily: How do you reduce the opacity, or increase the transparency, of the black box?

Nowacki: One of the ways to develop trust is to add transparency. If you go to a bank today, apply for a car loan, and they turn you down, they have to tell you why they turned you down; that's the law. Many data scientists would like to answer the question "are you a good or a bad credit risk?" with a neural network. But a neural network does not explain why. It doesn't give the scientist insight into why you were turned down; it simply says you're good or you're bad. So in order to meet the regulatory requirement, lenders actually use decision trees, because with a decision tree we can see very clearly why you were turned down or why you were accepted. In other words, it provides explainability. As we look to the future, one of the things we might want to do is select approaches that run parallel paths, where we develop a black box approach, like a neural network, alongside a decision tree. Certainly, the choice of approach has a lot to do with trust.

Effectiveness is our second anchor of trusted analytics, after Quality. Do the analytics work as intended? One of the realities we're seeing in the market is that analytics have grown organically within companies. They're not centralized, they're all over the place, and so an awful lot of data scientists are working in a vacuum, cloistered.
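To make the explainability point concrete: every path through a decision tree is an explicit rule that can be reported back to a declined applicant, which a neural network's weights cannot. A minimal sketch of one such path, with invented criteria and thresholds:

```python
def credit_decision(income, debt_ratio):
    """Toy decision-tree credit check (rules and thresholds are
    invented for illustration only).

    Each branch is an explicit, reportable condition, so the reason
    for a denial can be stated verbatim -- the explainability that
    lending regulations require.
    """
    if income < 30_000:
        return ("declined", "income below 30,000")
    if debt_ratio > 0.40:
        return ("declined", "debt-to-income ratio above 40%")
    return ("approved", "met income and debt-ratio criteria")

# The second element is the human-readable reason for the outcome.
status, reason = credit_decision(income=25_000, debt_ratio=0.20)
```

A real tree would be learned from data rather than hand-written, but the property is the same: the prediction and its justification come out of the model together.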
An emerging best practice, and we certainly advocate it, is the use of scientific peer review: if I have a scientist developing a set of analytics in one part of the house, I have somebody from an unrelated part of the house, who is also a scientist, evaluate them during the design, during the build, and during the proof. Peer review and other methods, the way you test, the way you select your holdout samples, the way you back-validate, all of these help ensure that the result, whether it's a prediction, a score, or a recommendation, is actually as intended.

The third anchor we talk about is Integrity, and this is an interesting one. If you're going to use exogenous data, meaning data from outside your four walls, to enhance the accuracy of your models, and many people are doing that, you have to make sure that those data sources are being maintained, are consistent, and are licensed for use in the way that you're using them. We find a lot of consumer models depend on such data. The questions become: where are we getting those data from, how often are they updated, and are they integral? Can we get those data in the long run? What happens if they go away, given that our model is predicated, in part, on them? Can we find alternative sources of equal quality and content? That third anchor also asks: are we using the data as permitted? Have we obtained these data in an ethical way? Those are really important questions, because if a particular datum, or data source, goes away, suddenly the model doesn't work anymore.

Which brings us to the fourth anchor, Resilience. Models and algorithms are built to represent the dynamics of right now. Over time, models will begin to wobble and not perform.
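The Effectiveness checks mentioned above, holdout samples and back-validation, can be sketched in miniature. The data and model here are toys invented for illustration: the point is only that the model is scored against outcomes it never saw during development.

```python
import random

def holdout_split(rows, holdout_frac=0.2, seed=42):
    """Shuffle once, then reserve a fraction of rows that the model
    never sees during development: the holdout sample."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def back_validate(model, holdout):
    """Score the held-out rows and compare each prediction to the
    known outcome; the hit rate indicates whether the model works
    as intended on data it was not built on."""
    hits = sum(1 for features, outcome in holdout if model(features) == outcome)
    return hits / len(holdout)

# Toy data: the true rule is "outcome is 1 when x exceeds 0.5".
data = [(x / 100, int(x / 100 > 0.5)) for x in range(100)]
train, holdout = holdout_split(data)
# A model that happens to match the true rule scores perfectly.
hit_rate = back_validate(lambda x: int(x > 0.5), holdout)
```

In practice the model would be fit on `train` and a peer reviewer would check the split, the metric, and the result independently; the mechanics are the same.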
It's important to monitor all of your models, the recommendations they're making, and the actual outcomes, to confirm that they are still resilient and robust, or whether they need to be retrained, propped up, or re-engineered. Building a great foundation on these four anchors goes a long way toward buoying trust.

FEI Daily: What's the significance of these survey results for financial executives?

Nowacki: I would hope that an executive would see the incongruity between their core belief that analytics are important to their company and the mistrust of those same analytics. You'd like to think it will be a call to action to get closer, to understand what's causing the lack of trust, or what hasn't stimulated trust. No matter how sophisticated the analytics I've been party to, what's amazing is that they're always easy to understand. Take the most complex analytic in the world: I can explain it to a seven-year-old, and they get it. Therefore, an executive who endeavors to get a little closer, to be a part of the sausage-making, if you will, for just a moment in time, immediately becomes much more comfortable and much more trusting. Just asking the questions "What data are you using? Why do you use that?" is often illuminating. To me, the big "aha" is that these executives can self-heal by getting a lot closer to the flame.

There are some basic things financial executives can go after. There are certain routine tasks that are done again and again and again that can be relegated to rules. That's analytics. A senior executive can peel off a few of these easier wins, low risk, medium reward, and then trumpet them within the organization. These are things that can ease them into the deep end of the pool.
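The Resilience monitoring described above, comparing a model's recent hit rate on actual outcomes against the baseline it achieved at validation time, can be sketched as follows. The tolerance and the numbers are invented for illustration.

```python
def needs_retraining(baseline_hit_rate, recent_outcomes, tolerance=0.05):
    """Flag a model for retraining when its performance on recent,
    real outcomes drops below its validation baseline by more than
    `tolerance` (an illustrative threshold).

    recent_outcomes: 1 if the model's recommendation matched the
    actual outcome, 0 if it did not.
    """
    recent_hit_rate = sum(recent_outcomes) / len(recent_outcomes)
    return recent_hit_rate < baseline_hit_rate - tolerance

# A model validated at 90% that now matches outcomes only half the
# time has "wobbled" and gets flagged.
flag = needs_retraining(0.90, [1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
```

A production setup would track this continuously per model and per segment, but the core check, recent outcomes versus the original baseline, is this simple.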