Data: Asset or Liability? Regulatory Compliance Innovation

In this series of blog posts, we explore how next-generation intelligent regulatory reporting solutions are delivering insights far beyond their original purpose.

This second feature in the series looks at the importance of managing data as an asset, and at the hidden risks that must be mitigated to prevent it from becoming a liability.

The wave of complex regulatory compliance requirements, combined with advancements in ‘big data’, has created an explosion of data that financial services firms collect, store and report. In our previous blog, we looked at how to maximise the data asset. In this post, we look at how to ensure it does not become a liability.

Big data is slowly beginning to fulfil its immense promise, but new regulatory constraints and ever-shifting business needs have left some firms unsure of how to manage this data appropriately. While the default position in your firm may be to hoard all data for future use, it is increasingly important to assure the quality of that data and to ensure the techniques used to interrogate it are sound.

Data governance by design

There are certain principles of data governance that all firms must follow to fulfil their responsibilities as a data controller and a data processor. While the implementation of the General Data Protection Regulation (GDPR) was an additional regulatory burden for most firms in 2018, it has driven best practice in data protection. However, one key aspect of data governance that is often overlooked is the requirement to continuously monitor the quality of data. Data is increasingly gathered from alternative sources and aggregated with data from multiple origination points. Quality assurance processes must be ingrained in the firm’s culture and managed as a continuous process as the data moves from the front office to the back office. This is not a once-off task; it is a firm-wide responsibility to ensure data is accurate, cleansed, standardised and profiled correctly.
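To make this concrete, the sketch below shows what one such continuous quality check might look like in practice. The field names, reference lists and rules are purely illustrative and are not drawn from any particular reporting regime.

```python
# Illustrative only: field names, reference data and rules are hypothetical,
# not taken from any specific reporting regime.
import pandas as pd

REQUIRED_FIELDS = ["trade_id", "counterparty_lei", "notional", "currency", "trade_date"]
ISO_CURRENCIES = {"EUR", "USD", "GBP", "JPY", "CHF"}  # truncated list for the sketch


def profile_batch(df: pd.DataFrame) -> dict:
    """Run basic completeness, standardisation and profiling checks on one batch."""
    issues = {}

    # Completeness: every required field must be present and populated.
    missing_cols = [c for c in REQUIRED_FIELDS if c not in df.columns]
    if missing_cols:
        return {"missing_columns": missing_cols}  # cannot profile without the core fields

    null_counts = df[REQUIRED_FIELDS].isna().sum()
    issues["null_counts"] = null_counts[null_counts > 0].to_dict()

    # Standardisation: currency codes must come from the reference list.
    issues["non_standard_currency"] = df.loc[
        ~df["currency"].isin(ISO_CURRENCIES), "trade_id"].tolist()

    # Accuracy and profiling: notionals should be positive, trade IDs unique.
    issues["non_positive_notional"] = df.loc[df["notional"] <= 0, "trade_id"].tolist()
    issues["duplicate_trade_ids"] = df.loc[df["trade_id"].duplicated(), "trade_id"].tolist()

    return {k: v for k, v in issues.items() if v}


if __name__ == "__main__":
    batch = pd.DataFrame({
        "trade_id": ["T1", "T2", "T2"],
        "counterparty_lei": ["LEI1", None, "LEI3"],
        "notional": [1_000_000, -5_000, 250_000],
        "currency": ["EUR", "USD", "XXX"],
        "trade_date": ["2019-01-02", "2019-01-02", "2019-01-03"],
    })
    print(profile_batch(batch))
```

In practice, checks like these would run on every batch as the data moves downstream, with the findings fed back to the data owner rather than silently corrected.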

Another important factor for successful data governance is adopting a partnership approach to data quality assurance between the data owner and the data processor. A data processor with cross-market insights can apply outlier analysis and peer group analysis to identify data anomalies and quality issues that may be invisible to the data owner. A strong relationship between the two will foster feedback loops that drive continuous quality improvement.
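As an illustration of the kind of cross-market check a data processor might run, the sketch below flags reported values that sit unusually far from their peer group. The column names, sample figures and z-score threshold are assumptions made for the example, not a prescribed methodology.

```python
# Hypothetical peer-group outlier check: column names, sample values and
# thresholds are illustrative only.
import numpy as np
import pandas as pd


def flag_peer_outliers(df: pd.DataFrame, group_col: str, value_col: str,
                       z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag rows whose value lies more than z_threshold standard deviations
    from the mean of their peer group."""
    grouped = df.groupby(group_col)[value_col]
    # transform() broadcasts each group's statistic back onto its member rows.
    mean = grouped.transform("mean")
    std = grouped.transform("std").replace(0, np.nan)  # avoid divide-by-zero
    z = (df[value_col] - mean) / std
    return df.assign(z_score=z, outlier=z.abs() > z_threshold)


if __name__ == "__main__":
    reports = pd.DataFrame({
        "peer_group": ["equity_fund"] * 8 + ["bond_fund"] * 3,
        "leverage_ratio": [1.1, 1.2, 1.15, 1.25, 1.1, 1.2, 1.15, 6.0,
                           2.0, 2.1, 2.05],
    })
    # A lower threshold is used here purely because the demo sample is tiny.
    print(flag_peer_outliers(reports, "peer_group", "leverage_ratio", z_threshold=2.0))
```

Any rows flagged in this way become candidates for the feedback loop described above, rather than being rejected automatically.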

Data denial – proving the in-house hypothesis

The abundance of available data, combined with the proliferation of complex big data tools, is dramatically changing the way firms manage data. Misuse of large data sets and big data intelligence tools can produce misleading results that impair, rather than enhance, decision-making. This can be amplified by confirmation bias – an internal “yes-man” of sorts. It is human nature to filter results or interpret data in a way that confirms existing preconceptions and to ignore any contrary insights. Ignoring such insights and choosing the results that best align with corporate groupthink can create false feedback loops and ultimately prevent firms from unearthing the true insights hidden in the data.

If we fall victim to these traits, can we reasonably expect to identify the point at which a black swan event becomes a market event?

The Future

In the financial services industry at present, there is significant interest in the future use of machine learning and artificial intelligence for regulatory compliance. These technologies are appropriate in certain circumstances; natural language processing, for example, lends itself well to fuzzy logic and semantic algorithms. Misused, however, they can become a liability: pattern matching, for instance, can generate large volumes of false positives. A key capability of any machine learning system should be the ability to filter out analytic models that are inappropriate for a given data set until the right one (or at least the best fit) is qualified. This decision-making process can be based on any number of data attributes, such as the data’s probability distribution, and largely mitigates the risk of propagating the errors described above.
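One simple way to picture that filtering step is sketched below: a distributional attribute of the data is tested before a candidate model is admitted to the pipeline. The candidate models, the normality test and the cut-off are assumptions chosen for illustration, not a prescribed approach.

```python
# A minimal sketch of distribution-aware model screening; the candidate
# models, the Shapiro-Wilk test and the 0.05 cut-off are assumptions for
# illustration only.
import numpy as np
from scipy import stats
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression


def select_model(y: np.ndarray):
    """Pick a candidate model family based on a simple attribute of the
    target data: whether it looks approximately Gaussian."""
    _, p_value = stats.shapiro(y)
    if p_value > 0.05:
        # Data is consistent with normality: a linear model's assumptions hold.
        return LinearRegression()
    # Otherwise fall back to a model that makes no distributional assumption.
    return GradientBoostingRegressor()


if __name__ == "__main__":
    rng = np.random.default_rng(seed=1)
    gaussian_target = rng.normal(loc=0.0, scale=1.0, size=200)
    skewed_target = rng.lognormal(mean=0.0, sigma=1.0, size=200)
    print(type(select_model(gaussian_target)).__name__)  # likely LinearRegression
    print(type(select_model(skewed_target)).__name__)    # likely GradientBoostingRegressor
```

The same gating pattern can be applied to other data attributes, so that only models whose assumptions actually hold for the data set are allowed to run.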

Implementation of these technologies will ultimately only succeed if the other fundamental elements of data governance and data model design are executed carefully. Data is optimised as an asset when it is used to communicate insights clearly, at the point of highest impact, with a sound legal basis for doing so. Anything less is a missed opportunity at best, and a costly liability at worst. The work required to implement the right process may seem daunting, but the rewards are plentiful.
