
The Future of Financial Reporting, Part 2


by FEI Daily Staff

One initiative that has been moving forward in the U.S. is the development by the SEC of a data mining system called the “Accounting Quality Model” (AQM), otherwise known in the industry as “Robocop.”

This is the final installment of a two-part story. The first installment can be found here.

Robocop Arrives

So far, XBRL has served as a beneficial tool in helping the SEC’s Division of Enforcement identify accounting fraud cases, and it has also proved useful to the Division of Corporation Finance as a means of improving the quality of financial disclosures presented by companies. The downside, however, is that the review process is rigorous, and both time- and labor-intensive.

That is where AQM comes in. The tool, currently in the prototype stage, works in conjunction with XBRL and is designed to trawl through 10-K filings and trigger automatic alerts when it detects suspicious or incorrect accounting practices by publicly traded companies. To date, the SEC’s Accounting Fraud Task Force, a group inside the SEC’s Division of Enforcement, has been one of the early adopters of AQM, although the tool has yet to be rolled out as an integral part of the Division of Corporation Finance’s review process, Lewis notes. Currently, it is being “beta tested” by select review teams, but wide-scale adoption has not yet happened, he says.

AQM works in conjunction with XBRL to access data and information straight out of company filings. “The idea behind AQM was simple,” says Lewis, who developed the prototype. “There was all this information now arriving at the SEC through XBRL submissions.” DERA’s Office of Structured Data is responsible for taking that data and building a large, aggregated database covering the individual filers. “Using this data, we tried to develop a type of report that the Division of Corporation Finance’s review teams could use to help them complete their individual 10-K and 10-Q reviews,” Lewis says.

AQM is designed to highlight suspect areas in company reports, such as discretionary accruals that can be easily manipulated by company management. It can, for example, flag a high number of off-balance-sheet transactions, changes in auditors or delays to earnings announcements. Once a specific area is highlighted, examiners can review the material the system pulls more closely and go back to the company to request further information or clarification. In this way, the software allows the review teams to more easily decide which firms to focus their attention on, based on the data the system pulls.

Lewis is careful to point out that “there was a conscious decision on my part to call the system ‘the accounting quality model,’ not ‘the accounting fraud model.’” He adds: “My initial idea for developing the model was to effectively find a way to go through and electronically screen these companies in a way that would assist the teams when they performed their review.”

However, while AQM was not initially designed to detect fraudulent activity by companies, it can do so by performing an initial read of a company’s financial statements and pinpointing potential areas of concern.

“It can let the team know which companies are at risk and what to focus on,” Lewis explains. “If, through a data analytic approach, you could identify certain types of accounting treatments that caused a firm to look like an outlier relative to its industry peers, those accounting treatments would naturally be things you would want to focus on in your review,” he adds. Lewis also points out that the system was not meant to replace the people on the review team; rather, it is one more tool the team can use as part of its review process to enhance efficiency.
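The article does not detail AQM’s internals, but the peer-relative outlier screening Lewis describes can be sketched in a few lines: standardize a metric such as discretionary accruals within each industry and flag firms that deviate sharply. The data, metric name and cutoff below are illustrative assumptions, not the SEC’s model.

```python
# Illustrative sketch of peer-relative outlier screening; the metric,
# figures and cutoff are invented for demonstration, not the SEC's model.
import pandas as pd

filings = pd.DataFrame({
    "firm": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "industry": ["retail"] * 5 + ["software"] * 5,
    "disc_accruals": [0.02, 0.03, 0.02, 0.01, 0.19,
                      0.01, 0.02, 0.02, 0.03, 0.02],
})

# Standardize each firm's accrual measure against its industry peers.
peers = filings.groupby("industry")["disc_accruals"]
filings["z"] = (filings["disc_accruals"] - peers.transform("mean")) \
    / peers.transform("std")

# Flag firms that look like outliers relative to their peer group.
FLAG_THRESHOLD = 1.5  # illustrative cutoff
print(filings[filings["z"].abs() > FLAG_THRESHOLD])
```

A flagged firm is not presumed fraudulent; as Lewis notes, the point is simply to tell the review team where to look first.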

Another potential benefit of AQM is that it may serve as a preventative measure for stopping certain firms from misreporting. “If the system can take away the easiest strategies to implement fraud by highlighting them and taking them off the table, then a company is just not going to use them anymore,” Lewis explains.

If fully adopted by the SEC, AQM will continue to become more developed and dynamic, says Lewis. It will keep filtering for strategies that have been used in the past to obscure information or report inaccurately, and new strategies will be incorporated into the model as they emerge. Currently, there is no official release date for the prototype, as it is still being refined to make it more intuitive for users, Lewis notes.

If Congress does, however, decide to pass the bill exempting smaller firms from submitting XBRL-tagged filings, it could greatly affect the SEC’s use of AQM going forward. “If it is going to work, it will work best on firms that aren’t as actively followed by everybody,” Lewis says. “There’s a lot of information about Apple out there, but it’s the smaller firms that don’t have analysts following them, and there isn’t a lot of market-driven information about them out there,” he says. “So that is where AQM could be a real benefit to investors, and I view the bill to exempt them as undercutting the data analytic efforts of the SEC,” Lewis adds.

It’s All in the Language

Another area that may soon have an impact on the world of financial reporting is the use of linguistics to evaluate company reports. Nerissa Brown, associate professor of accountancy at the University of Delaware, is the author of a new study entitled “What Are You Saying? Using Topics to Detect Financial Misreporting.” She and her team are working on a model designed to do just that.

The study builds a prediction model that starts from accounting variables examined in prior research and adds textual analytic measures that scan a company’s 10-K filings. By capturing areas such as the language, style characteristics, topics and tone of a financial report, the model creates a textual measure of what is being disclosed and then highlights instances of possible misreporting.
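As a rough sketch of that design, the example below feeds a handful of financial and textual features into a simple classifier. The feature names, figures and model choice are hypothetical stand-ins for illustration, not the team’s actual specification.

```python
# Hypothetical sketch of a misreporting prediction model that augments
# financial variables with textual measures, in the spirit Brown describes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per 10-K: [disc_accruals, leverage,        <- financial variables
#                    negative_tone, topic_deviation] <- textual measures
X = np.array([
    [0.02, 0.30, 0.10, 0.05],
    [0.15, 0.55, 0.35, 0.40],
    [0.01, 0.25, 0.08, 0.02],
    [0.12, 0.60, 0.30, 0.38],
])
y = np.array([0, 1, 0, 1])  # 1 = filing later tied to intentional misreporting

model = LogisticRegression().fit(X, y)

# Score a new filing: the output is an estimated misreporting risk.
new_filing = np.array([[0.10, 0.50, 0.28, 0.33]])
print(model.predict_proba(new_filing)[:, 1])
```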

The model is built around a topic-modeling algorithm called Latent Dirichlet Allocation (LDA), and the work was motivated by the SEC’s tool, the AQM, which uses mathematical indicators and basic word dictionaries to detect financial misstatements or potential areas of fraud. By contrast, the tool Brown’s team is developing also uses other signifiers, such as language, which may be an even better way to detect inconsistencies, Brown asserts. “These measures are a better predictor of financial restatements and of intentional misreporting than financials alone,” she says.

The system works by running some three billion words of filing text through the algorithm. “It lets us train the algorithm to pick up different types of topics being discussed,” says Brown. “We don’t tell it what topics to look for; the algorithm looks at the words discussed, gives weightings to those words and spits out the topics discussed,” she explains.
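The mechanics can be illustrated with a minimal scikit-learn sketch on toy snippets (real inputs would be full 10-K narratives). Consistent with Brown’s description, the only thing specified up front is how many topics to fit; the per-topic word weightings come out of training.

```python
# Minimal LDA topic-extraction sketch; toy documents stand in for 10-K text.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "revenue growth driven by strong product sales and new customers",
    "litigation risk and pending lawsuits may affect future results",
    "product sales increased while customer demand remained strong",
]

# Turn raw text into word counts; no topics are specified, only fitted.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each topic is a weighting over words; show the top words per topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")
```

Calling lda.transform(counts) then gives each document a distribution over the fitted topics, the raw material for the topic shares discussed below.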

Other predictive measures the model uses include the readability of the disclosure, the tone, the style, the length of the 10-K, and word choice, based on preset lists of deceptive and negative words, says Brown. “Litigious words and how you emphasize certain words within the 10-K are also factors,” she notes.
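Word-list measures of this kind are simple to compute. The sketch below counts hits against tiny placeholder lists; the study itself relies on established preset dictionaries rather than these few words.

```python
# Illustrative word-list features; the word sets are tiny placeholders,
# not the preset dictionaries the study uses.
NEGATIVE_WORDS = {"loss", "decline", "adverse", "impairment"}
LITIGIOUS_WORDS = {"litigation", "lawsuit", "plaintiff", "settlement"}

def word_list_features(text: str) -> dict:
    tokens = text.lower().split()
    n = len(tokens)
    return {
        "negative_share": sum(t in NEGATIVE_WORDS for t in tokens) / n,
        "litigious_share": sum(t in LITIGIOUS_WORDS for t in tokens) / n,
        "length": n,  # a crude stand-in for 10-K length
    }

print(word_list_features(
    "pending litigation and an impairment loss contributed to the decline"
))
```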

The LDA algorithm can also pick up when a report omits topics it should be discussing. The study attempts to do so by taking the percentage of a firm’s 10-K devoted to a given topic, as detected by the algorithm, and comparing it with the average share devoted to that topic in peer firms’ filings, explains Brown. The model then examines the predictive power of these topic deviations. “We find that both the topics discussed and how these deviate across peers can predict intentional misreporting, and that improves our predictability by two times compared to looking at just financials,” she says.
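In arithmetic terms, the deviation is simply the gap between a firm’s topic share and its peers’ average share, as in this small worked example with hypothetical numbers.

```python
# Worked example of the topic-deviation measure; all figures are hypothetical.
firm_topic_share = 0.02  # 2% of the firm's 10-K discusses, say, litigation
peer_topic_shares = [0.08, 0.10, 0.09, 0.11]  # same topic at peer firms

peer_average = sum(peer_topic_shares) / len(peer_topic_shares)  # 0.095
deviation = firm_topic_share - peer_average                     # -0.075

# A large negative deviation suggests the firm may be under-discussing a
# topic its peers treat as material, which is one input to the model.
print(f"peer average: {peer_average:.3f}, deviation: {deviation:+.3f}")
```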

So far, the study has shown its researchers that looking at these “softer measure” topics predicts financial misreporting 200 percent better than simply using financial terms, according to Brown. Still, the model has a ways to go before it is complete. “One question we are getting quite a bit of is what type of topics are the most predictive, in terms of intentional misreporting, so we are going back to revise the computer algorithms to get a sense of that.”

While the algorithm is still a work in progress, Brown and her team hope that at some point regulators and practitioners, such as analysts and investors, will find the predictive model useful and will incorporate it into their own accounting risk systems. “If you have a way of estimating a firm’s risk of engaging in misreporting, you can put it into your valuation model, so that it can help investors to price protect themselves from potential losses.”

Brown adds that one drawback of the SEC’s current automated programs, such as AQM, is that they can also end up wrongly flagging some firms as fraudsters. “A better prediction model not only picks up misreporting at a higher rate, but also does so with a lower error rate,” Brown notes. “We hope to highlight this in our research,” she says. The team is looking to have a revised version of their study completed by the end of October, at which time it will be posted publicly and they will start submitting it to conferences and accounting journals.

A Push for Plain Language

Still another area that continues to be an issue for the industry is whether standards for the use of plain language in reporting should be mandated. While progress is being made on this front in Europe, “it’s not on the radar or priority list of many companies in the U.S.,” says Lutz, who worked on the SEC’s Plain English Handbook from 1997 to 1998. “Corporations still see it as an extra obligation and not in their own financial interest,” he notes.

Lutz predicts that “American corporations will soon have to change that view, because we live in a global market, and if you are not producing transparent information, you won’t be able to compete. People won’t invest in things they don’t understand; they are gun-shy now.”

Lutz is not alone in his views. Several hedge funds and some major banks are already demanding better information from companies and pressuring them to get on board with plain language initiatives, Lutz notes. “The old ploy that companies have used, that, well, you don’t have a PhD in economics, so you can’t understand this, is nonsense,” he says. “By law, those arguments don’t fly anymore, and investors are now saying that if you can’t explain it to me in language I understand, then you don’t understand it either,” Lutz remarks. “As a result, some corporations are starting to move in the right direction.”

Lutz would also like to see an end to the production of overly lengthy financial reports. The prospectuses for complicated investment products, such as collateralized debt obligations (CDOs), can run some 15,000 to 750,000 pages. Many corporations are also producing overly verbose annual reports that few investors have the time to read. Lutz remembers one corporation that filed a 263-page 10-K back in 1996; by 2009 the same company’s 10-K was 1,376 pages long. “These companies need to provide better summaries or provide the information in a more palatable fashion,” Lutz states.

One advocacy organization already addressing this issue is the Data Transparency Coalition. The group is pushing for the adoption of XBRL not only in all financial disclosures but throughout the U.S. government as well, so that citizens can find out what is going on in the various government departments. Overall, change will come when individuals start to demand better communication, Lutz concludes.

This article first appeared in the Fall 2014 issue of Financial Executive magazine.

Leslie Kramer has worked as a journalist for over 10 years covering a wide range of corporate, investment and personal finance topics.