Wild data is roaming about your organisation. The new Markets in Financial Instruments Directive (MiFID II) and Regulation (MiFIR) will demand you round it up, process it and release it as trade and transaction reports.
The rules will apply from January 2018 to European Union firms, their branches outside the EU and financial institutions operating in the EU. Nearly all instruments are subject to the new regulation and directive, whereas under MiFID I only equities and exchange-traded funds were affected. Over-the-counter (OTC) trade reporting has been captured within the organised trading facility (OTF) and multilateral trading facility (MTF) frameworks, whereas MiFID I only affected trading of instruments on regulated markets (RMs).
Trade reporting will make public information including volume, price and time of execution via an Approved Publication Arrangement (APA). Transaction reporting to the authorities is more substantial and must be conducted via an Approved Reporting Mechanism (ARM).
The increase in data fields for the purposes of transaction reporting under MiFID II – from 24 to 65 – multiplied by the increased number of instruments and range of trading involved gives an indication of the greater complexity of data that will need to be managed.
Pulling data together so that it can be normalised takes considerable effort. Analyst firm Aite Group, in its report “Reconciliation Trends in 2016: Regulation and Nervous Recs”, estimated that it takes nearly 65 days to develop and build a single new reconciliation.
Investment firms need to understand how and where to report data, then take the operational steps to make it happen. Collectively, these requirements add up to a massive project to aggregate and report data.
Get it together
Step one is identifying where this data is generated, whether it is already captured and, if not, how to capture it. Some of the new additional data – for example unique identifiers for traders, such as a national ID number – may never have been captured and stored before. The risk around storing data of that granularity is considerable and may well require a review of data security measures.
Transforming that data into the right format for reporting, via Extensible Markup Language (XML), presents further operational challenges: more data is required than before, and certain reports were not needed at all under MiFID I.
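As a sketch of that transformation step, an internal trade record held as key–value pairs can be mapped to tagged XML in a few lines of Python. The tag and field names below are illustrative placeholders, not the real ISO 20022 element names mandated under MiFID II:

```python
import xml.etree.ElementTree as ET

def trade_to_xml(trade: dict) -> str:
    """Map an internal trade record to a tagged XML report.

    Tag names here are illustrative placeholders, not the actual
    schema elements published by the regulators.
    """
    root = ET.Element("TxReport")
    for field, value in trade.items():
        # Each internal field becomes a tagged element in the report.
        ET.SubElement(root, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

trade = {"Price": "101.25", "Qty": "5000", "ExecTime": "2018-01-03T09:30:01Z"}
print(trade_to_xml(trade))
# e.g. <TxReport><Price>101.25</Price><Qty>5000</Qty>...</TxReport>
```

In a real build-out the mapping would be driven by the regulator's schema rather than by whatever keys happen to be in the internal record, but the shape of the work is the same: every field has to land in its correct tag.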
The second step will require firms to develop in-house capabilities that they have not had before: building the reporting platforms, the test harnesses and testing systems around them, and the data silos that feed them. This will require a full technology build-out, setting up a rock-solid system that can report to the regulator.
For trade reporting, APAs have a five-minute, near real-time window in which to take on the trade, pass it to the regulator, receive a response and pass that back to the system that determines whether the trade has been accepted or not. For high volumes of trading that could create real challenges around data latency. A system failure or a slowdown in processing will lead to challenges in fulfilling obligations. Any system built to handle reporting will also need to be able to scale to manage this workload.
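To make the timing constraint concrete, a reporting pipeline might track each trade's execution-to-acknowledgement latency against the five-minute window and flag breaches for escalation. This is a minimal sketch; the function name and threshold handling are assumptions, not part of any regulator's specification:

```python
from datetime import datetime, timedelta

# Near real-time limit for trade reports (the five-minute window).
REPORTING_WINDOW = timedelta(minutes=5)

def within_window(executed_at: datetime, acknowledged_at: datetime) -> bool:
    """Return True if the acknowledgement arrived inside the window."""
    return acknowledged_at - executed_at <= REPORTING_WINDOW

executed = datetime(2018, 1, 3, 9, 30, 0)
print(within_window(executed, datetime(2018, 1, 3, 9, 33, 59)))  # inside the window
print(within_window(executed, datetime(2018, 1, 3, 9, 36, 1)))   # breach: escalate
```

At high volumes the interesting engineering problem is keeping that check true for every trade in the queue, which is where the scaling requirement bites.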
Working the system
The key to handling data at scale is the capacity to tag and reuse it. A smart method of handling that process is XML, which copes with the multifaceted nature of the data by managing the many relationships that pieces of information have with one another. Every aspect of the trade is tagged, which allows it to be handled by different parties according to their needs.
From an internal perspective, if a firm is warehousing this information it can map fields to tags in the XML, run programs – typically Java-based applications – against the XML file, and then run a schema validation. The European Securities and Markets Authority, the Financial Conduct Authority and the Central Bank of Ireland will all provide an XML schema to validate the data.
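In production that validation step runs the regulator's published XSD against the report file, typically with a schema-aware library. As a simplified, standard-library stand-in for the same idea, the check below parses the file and confirms the mandatory tags are present; the tag names are illustrative, not the real schema elements:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for full XSD validation: in production the
# report would be validated against the XML schema published by the
# regulator (ESMA, the FCA or the Central Bank of Ireland).
REQUIRED_TAGS = {"Price", "Qty", "ExecTime"}  # placeholder field names

def validation_errors(xml_text: str) -> list:
    """Return a list of problems; an empty list means the file can be sent."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as exc:
        return [f"not well-formed: {exc}"]
    present = {child.tag for child in root}
    return [f"missing mandatory field: {tag}"
            for tag in sorted(REQUIRED_TAGS - present)]

good = ("<TxReport><Price>101.25</Price><Qty>5000</Qty>"
        "<ExecTime>2018-01-03T09:30:01Z</ExecTime></TxReport>")
bad = "<TxReport><Price>101.25</Price></TxReport>"
print(validation_errors(good))  # []
print(validation_errors(bad))   # two missing-field errors
```

The payoff of the schema-driven approach is that the expensive checks happen in-house, before anything touches the regulator's gateway.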
This creates a massive advantage for firms that employ the technology: if the XML file they have created passes validation with no errors, it can be sent to the regulator with near certainty that it will be accepted.
Using technology effectively in this process allows a firm to round its herd of data up and brand it, so that the business can be certain of what it has and does not have, removing a massive amount of cost and complexity from the validation process.