GDPR: Should Financial Institutions Be Comfortable?

by Farid Vij

Many financial organizations falsely believe that their extensive track record of implementing technologies and processes to remain compliant with various other regulations will enable them to meet GDPR requirements.


With the General Data Protection Regulation (GDPR) going into effect in May, organizations handling EU resident personal data will face steeper requirements for its management, protection, and privacy. GDPR replaces the 1995 Data Protection Directive, implementing similar yet farther-reaching and globally enforceable data requirements. These requirements include the data subject's right of access, the right to erasure (the right to be forgotten), the right to restriction of processing, and numerous other responsibilities such as the requirement of data protection “by design and by default.”

Considering that financial institutions, by nature, deal with inordinate volumes of personal data every day and have grown accustomed to facing massive fines, one might assume they are ahead of the curve. Yet, among the companies I’ve spoken with, this simply isn’t the case. Many financial organizations believe that their extensive track record of implementing technologies and processes to remain compliant with various other regulations will enable them to meet GDPR requirements.

To an extent, this is true, but it’s not the full story. While financial institutions may be relatively well prepared to manage structured data in compliance with GDPR’s privacy requirements, unstructured, employee-created data goes largely overlooked. Key repositories that are not traditionally well managed include file shares, SharePoint, and Office 365, and these high-risk areas could expose institutions to fines of up to 4 percent of global annual revenue if left unaddressed.

The Siloed Approach

Under the current model of managing unstructured data, after receiving a subject request for access, erasure, etc., an organization is faced with searching numerous repositories for an individual’s data. This means using different approaches, often manual, that produce inconsistent results. Even where these repositories are searchable, each has different search capabilities, making it difficult to defend a consistent approach to regulators. Even after finding this data, it becomes another task to figure out what other uses it has, whether it has duplicates, whether it might be on legal hold in a different location, and whether it can even safely be released or erased. Perhaps it is being retained to comply with industry document retention requirements.

How can an organization efficiently reconcile these many considerations? Using a manual approach, a single request – and most require a tight turnaround – becomes prohibitively costly on its own, forcing the company to either produce data in a manner it cannot defend or risk fines for failing to produce the data in a timely manner. Now imagine receiving waves of such requests on a daily to weekly basis, which recent research indicates will quickly become a reality for large organizations. If you do the math and estimate how many employee hours per month will need to be dedicated to this function, the associated costs add up quite quickly – with no guarantee that regulators will find a manual approach that relies on employee efforts defensible and compliant.

Centralizing Access

Implementing a system that provides centralized control of enterprise repositories can help account for the many silos that occur across an organization and the many functions for which data is used. Under such a unified architecture, an organization can search across unstructured repositories and apply global policies in a single place regardless of where the data sits. Actions taken on data are applied across all repositories and all duplicates, ensuring a uniform process that allows for a more defensible process at a lower cost and in less time.

Crucially, because the various policies put in place for regulatory, legal, and business purposes – retention, deletion, restriction of processing, access control management, preservation, etc. – are all executed centrally and with full transparency, the functions for which data is being processed remain automatically synchronized.

The result is a much more streamlined and defensible process for responding to subject data requests. When a file is deleted, it’s deleted everywhere. If an individual requests data that is on legal hold or being retained for regulatory purposes, it’s immediately evident.

The value of such capabilities transcends GDPR requirements, with clear applications in eDiscovery, analytics, records management, and end-user search. However, if they sound like important features, it’s because they’re not just features: They’re a function of unified information governance. When you put this in place, the rest simply follows.

Farid Vij is the Director of Information Governance at ZL Technologies.