Technology

The Encouraging, But Not Surprising, Spread of AI in Financial Audits


by FEI Weekly Podcast

How AI will change the audit process, with KPMG's Thomas Mackenzie.

We all know that AI will forever change the finance and accounting landscape, but how quickly is it happening and what are the actual benefits?
In this episode, we speak with Thomas Mackenzie, KPMG's US and Global CTO, about the speed of AI adoption and the critical areas where preparers need to catch up.
An edited transcript of the conversation is below.

Olivia Berkman: I understand that it's pretty obvious, but maybe you could just explain the impetus for this report. What's the reason for focusing on AI and financial reporting in the audit now?

Thomas Mackenzie: It's a great question, and we know that not a moment goes by today where there's not a conversation around generative AI and the disruptive force it's going to have within our personal lives and our professional lives. We have been on a technology journey as an audit profession and as a finance profession, and we felt it was important to take a pulse point around where the market is from an AI perspective. There's lots of noise, conversation, and direction in the marketplace, and we really wanted to understand where the market is and where it is going, both to inform us at KPMG and to inform the clients and the preparers that we serve and those that we don't serve. So it was necessary to just, as I say, take a pulse point.

Berkman: Makes perfect sense. And the report does go into detail regarding expectations for AI use in finance. What was the biggest surprise for you out of the results?

Mackenzie: I don't know if I'd use the word "surprised." I'd rather use the word "encouraged." And the reason I want to use the word "encouraged" is we have been on a technology journey, and the speed is unrelenting. I was encouraged that the respondents were, for the most part, on the journey, and there was a recognition that whilst generative AI takes a lot of the conversation at present, it is part of a bigger puzzle around technology. We have AI that is more transaction-based, we have generative AI, we have cloud, we have data. So I was encouraged, and this reflects our experience at KPMG, that the preparers we interviewed in the marketplace understood that this is an important component in the journey around technology.

At KPMG, we've been on the journey for a while around modernizing our audits. We have put in place technology to improve the way we collect data, analyze data, and leverage data. We have alliances with Microsoft and MindBridge to bring in RPA, or process automation, as well as transaction scoring, or the more generic artificial intelligence. And we are working on generative AI, the chat that we all read about. So we are making the investment, we're on the journey, and I'm super-encouraged that the marketplace is on the journey as well, and further down the path in adopting technology than it perhaps has been historically.

Berkman: That is good news. And so, given that the survey said industry believes their auditors are ahead in the use of AI, I know you said the preparers are on the journey, but what are some of the critical areas where preparers need to play catch-up that you saw?

Mackenzie: I think the finding you're referring to is that about 72% believe the external auditors are ahead. When you look at that more closely and try to unpack that particular finding, at least the way we are interpreting it, there's an avenue where preparers are looking for assurance providers, KPMG and those in the marketplace, to help them in terms of certifying and validating artificial intelligence, giving a little bit of comfort around that.

Then the other component is: how do we use that to transform the execution of actual audits? So while the finding indicates we're ahead, I don't necessarily know that preparers are behind. It's just that, given the journey, that second area around transforming the way we do audits has been a process. As I mentioned previously, we had to get data, we had to learn how to handle data. We started doing analysis around that data, and started to bring in rudimentary artificial intelligence.

So it's not a moment. It's been a journey with our clients. And so I'm not surprised that there's a perception we're ahead, because we have been on this journey for a while, and we have a fairly innovative audit approach at KPMG, where we've been working with our clients on this journey. So it's probably more a perception that we're ahead, as opposed to the preparers being behind. And then on the first component, around looking for guidance, assistance, and certification around AI, I think it's the fact that we've been on the journey: we're able to have a conversation, we're able to point to process and people around how to think about artificial intelligence. So we can lead the dialogue, lead the conversation with preparers, which may be why they believe we're ahead.

Berkman: That makes sense. So you mentioned learning to handle data. One of the issues that we hear discussed is the concept of AI governance. So it'd be helpful if you could just define what we mean when we say "AI governance," and maybe even some examples would be helpful.

Mackenzie: Absolutely. Governance is a foundational item for artificial intelligence. The technology is exciting, but without governance, it's just technology. So when we think about governance and how to approach this, there's really governance around the analysis we're doing today, the AI that we're using today that just looks at data and analyzes it, does anomaly detection, et cetera. The governance we're looking for there is transparency around what the rules are doing: understanding, to use the nomenclature, what's happening inside the black box so we can understand the outcomes. So the governance is really all around ensuring we've got the right rules, the right testing, the right data, the right certification. When we start thinking about generative AI, that governance just expands. The mandate expands, and it starts to address questions such as: are there potential biases that may be coming out of this? What are we doing to ensure that the outcomes are ethical and that the algorithms that are running are responsible?

So what governance is all about, then, is how does the generative AI generate the outcomes that we're looking for? How do those outcomes tie into what the organization or the company is trying to do? Is it aligned culturally? Is it done ethically? Is it done in a responsible manner? And then most importantly, in our profession, is it done safely? So everything needs to be accretive to quality. And just because we can get an outcome out of generative AI doesn't mean we want to use that generative AI. We've got to have a governance structure around that to ensure it's the right outcome, it's accretive to the capital market, or accretive to what we're trying to accomplish from an audit perspective around quality.

So, to give you two very simple examples: on that first one, around more typical predictive AI analysis, how do we take a population of transactions, run a series of rule-based AI against it, and look at the outcomes? The governance there would be all about testing the rules, ensuring the outcomes are validated, and ensuring that we've gone through a process of deploying it, testing it, measuring, changing, and then expanding the deployment.

On the generative AI side, where it's a little bit more opaque as to what's happening in the box, the example there would be: how do we ensure we keep the human in the loop? Just because generative AI is generating a response, let's ensure we have the right measures and people around it at all levels to look at that response: the right training, the right deployment, the right guidance, so that folks are not just accepting the outcome at face value; they're thinking about that outcome. Governance is all about that.

Berkman: Makes sense. And you mentioned ethics and safety. So what are some of the specific challenges or even dangers that you see if both auditors and industry do not have the right governance structure in place as AI adoption increases?

Mackenzie: I think the biggest risk we have is really two things. One is that we don't understand the biases in the outcomes. The other is that we over-index, or over-rely, on the outcomes, and it then almost becomes a spiral of ever-increasing over-reliance. So for us, it's around professional skepticism: always challenging, always looking for confirming guidance and evidence. And I think the risk is that if we lose that human in the middle, if we lose that inquiring mind, that professional skepticism, both on the profession side as well as on the preparer side in the finance department, we run the risk of, as I say, over-reliance on the outcomes, which could take us down a path that we don't want to go down simply due to faults in the respective generative AI algorithms and models.

Berkman: And you mentioned the human in the middle. So interestingly, the report pointed out that staffing and employment levels are not necessarily being influenced by AI. So if the efficiency of AI isn't in the labor costs, where do you find it in accounting and finance?

Mackenzie: I certainly can appreciate the anxiety around AI in terms of our profession. We're a knowledge profession, and all the publications and guidance are around how knowledge professions are going to be disrupted. So I can appreciate the anxiety, but I am actually encouraged by the technology. Because I think, for the people we are recruiting and for what our preparers are leveraging in their finance departments, the beauty, or the benefit, of AI is going to be the replacement of those routine, mundane tasks. It's around elevating the value that is provided.

So instead of working transactions through a system, as an example, where there's data capture and data review, that will happen, and should happen, through technology. So now the value of the controller or the auditor is around risks, around outcomes. The mundane task should go away, but that doesn't mean the role gets eliminated. It just means it creates the capacity to focus on the higher-value, higher-risk work. So I think once you get over the anxiety of the technology, you recognize the value it could bring to job satisfaction, to moving at speed, to velocity in terms of making decisions and addressing risks. So I do think there's some efficiency that will come from productivity gains, but I think it will be quickly swallowed up by the additional value that comes out of the outcomes and the speed of processing.

Berkman: So the last question that I have for you, Thomas, is which industries do you see adopting AI in their accounting functions maybe ahead of other industries, and why do you think that is?

Mackenzie: It's a great question. Everyone is going to get there; industries are just at different points on that journey. Not surprisingly, when you look at the results, consumer retail and healthcare are ahead in adoption. And the reason that's not surprising is we are all consumers. We use generative AI and AI in our lives, and those marketplaces, those industries, are serving us as consumers. So I think there's a rapid adoption there, because they engage directly with the consumer. Just as I want to use generative AI to make a booking for me at a restaurant because I don't want to phone somebody, I'm expecting that from my provider. So those industries, I think, are meeting the market where the market needs to be from an adoption perspective. Some of the other industries are taking a much more targeted approach based on their industry. If you think about financial services or the federal government, there's lots of data, but a lot of requirements around security and protection of that data.

So they are adopting generative AI, but in a much more targeted manner, focused on their operations and the products that they bring to the market. They're on the path; it's just more targeted to the services they provide, versus consumer and healthcare. And then I think the last grouping of industries are those more on the heavy industrial and energy side, where they are really using it to streamline and improve their operations. They are adopting it, but perhaps in a bit more of a measured process, given the cycle of change in those industries.

Berkman: Thomas, you said that you were encouraged by the results of the report. Is there anything else that stood out to you that you want to share before we let you go?

Mackenzie: I just want to underscore the word "encouraged." In my professional career, I've been working with technology for a while, and the speed at which adoption is occurring is really encouraging. The second thing is that the conversation is really encouraging; people are having the dialogue. And the third thing that's incredibly encouraging is that there's an acknowledgement that we need to do it in a responsible manner. So the thing that I really pull out of this is that generative AI is coming. We are going down this path, and we're going at maximum velocity. The market's going there, preparers are going there, we are going there. That's what encourages me the most. And I think the survey results collectively bring that out in the number of respondents that were, A, aware of this or, B, had already started to adopt it. So it's encouraging: the speed, and the guardrails we put in around that speedy adoption.