Case Study · Discovery Services
Technology Assisted Review
Predictive Coding in a Regulatory Investigation
Morae was called upon to assist our client, a large financial services institution, in complying with investigation requests from regulators in the US and Europe. Data from the matter spanned three jurisdictions.
The investigation focused on the custodians responsible for developing an index fund's strategy and its subsequent modifications for other jurisdictions (the product was developed for two European countries and later modified for the US). It also involved analyzing data from custodians responsible for marketing and selling the fund, as well as trading behavior linked to the index's fluctuations. In all, this required analyzing a wide range of custodians' data, amounting to more than 5,000 GB: an initial volume of over 10 million documents hosted on multiple client databases.
The bank had established an index fund managing a number of assets based on an internally developed trading strategy. This fund had then been marketed and sold as shares to clients. The regulatory investigation emerged from concerns that the methods of calculating value created by the fund’s developers were inconsistent with marketing material such as offering circulars. There were additional concerns pertaining to specific misconduct related to traders who had foreknowledge of the fund’s strategy.
Morae was asked by the client to develop key facts from the matter as well as to assist the client in meeting the regulator’s deadlines for production.
Our mandate was to utilize technology to efficiently identify relevant information while concurrently making all efforts to limit costs.
We determined that the best way to efficiently identify relevant information while reducing costs was to develop a defensible predictive model, leveraging technology-assisted review (TAR), to identify key documents for investigative and production purposes with minimal linear review.
Our subject matter experts reviewed sample and seed sets drawn from the data population to build the predictive model. The model combined targeted Boolean searches, relationship analyzers, and concept and lexicon searches, backed by a unified strategy: rapidly identify critical documents while defensibly excluding non-relevant ones. On top of these targeted searches, we layered additional analytics tools, such as near-duplicate detection, over the predictive model.
We then ran the model over the entire dataset and first analyzed the results in the 90–100 tier, the tier created by the predictive model with the highest probability of relevance. This yielded approximately 150,000 documents to use as the base review population.
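The workflow above can be sketched in miniature: train a classifier on a small expert-coded seed set, score the full population, and bucket each document into a probability tier so that only the top tier goes to linear review. This is an illustrative sketch only, using scikit-learn's TF-IDF and logistic regression; the document texts, labels, and tier cutoffs are hypothetical, not Morae's actual model or data.

```python
# Illustrative predictive-coding sketch (not Morae's actual workflow):
# a seed set coded by subject matter experts trains a simple text
# classifier, which then assigns each unreviewed document to a
# probability tier (0-90, in steps of 10).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: documents already coded by reviewers.
seed_docs = [
    "index fund valuation methodology memo",
    "trading desk strategy for index rebalance",
    "offering circular marketing language for the fund",
    "lunch schedule for the cafeteria",
    "holiday party invitation and RSVP list",
    "IT notice about password resets",
]
seed_labels = [1, 1, 1, 0, 0, 0]  # 1 = relevant, 0 = not relevant

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

def relevance_score(doc: str) -> float:
    """Model's estimated probability that a document is relevant."""
    return model.predict_proba(vectorizer.transform([doc]))[0, 1]

def tier(doc: str) -> int:
    """Bucket a document into a 10-point probability tier (0-90)."""
    return min(int(relevance_score(doc) * 10) * 10, 90)

# Score the (here, tiny) unreviewed population and keep the top tiers
# as the base review population.
population = [
    "memo on fund valuation methodology and index strategy",
    "cafeteria lunch menu for the week",
]
base_review = [d for d in population if tier(d) >= tier(population[1])]
```

In practice the seed set is iteratively refined and the tier cutoff is validated statistically, but the core mechanic is the same: human coding decisions propagate through a model so that reviewers only touch the highest-probability slice of the collection.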
By leveraging a predictive model, we reduced the number of documents requiring review by 95%, for total cost savings of up to $3 million.
The predictive model not only streamlined review but proved more effective than standard review: the relevancy rate achieved with the predictive model was three to four times higher than on similar projects where review relied solely on search terms.
Manage your e-discovery and document review obligations with the first and longest-running RelativityOne-certified partner in the world.