
An Urgent Need For Artificial Intelligence At Chief Investment Offices

By Gaurav Chakravorty, Co-Founder, Qplum

The only way the investment industry can deliver high quality returns at low recurring cost is by using artificial intelligence.

Adoption of artificial intelligence (AI) is happening faster in the institutional investment community than many asset managers want to believe. Proprietary trading firms and family offices have been the innovators and early adopters.

Hedge funds also fall into this early-adopter group, and lately we have seen among them a strong shift away from discretionary strategies and a resistance to “pedestrian quant” approaches. Even individual investors (typically “laggards” in the asset management industry) are embracing AI-based methods, as speakers observed at the Equities Leaders Summit held in Miami on 4-6 December 2017.

However, institutional investors such as pension funds, endowments and insurance companies are stubbornly sticking to old, data-less methods. In this article I will trace the trajectory of evidence-based, model-based investing and show why chief investment officers should reinvent their processes using AI.

The article follows a recent podcast with Nvidia, a leading maker of GPUs, which invited me to explain why deep learning promises to package the best algorithmic investing knowledge into a system that works and is universally accessible.

Evolution of data-driven trading
The earliest instances of data-driven systematic trading are value investing and trend-following. However, among these I am focusing on trend-following since it was a method where the outcome of the model was not filtered by human intuition. Trend-following was the original quant success in the Crash of 1987. Ten years later statistical arbitrage (StatArb) became the quant trade, and trend-following had a crash of its own in 1997. Another decade later, on 6 August 2007, StatArb crashed and high frequency trading (HFT) emerged as the top quant trade.

Now, the overwhelming consensus is that HFT isn’t profitable any more except for the biggest firms. The pattern suggests a roughly ten-year cycle in which the dominant quant trade changes as new technology becomes cheaply available: new technology is why trades crest and fall.

This brings us to the exponential growth of quant we see today.


No mandate for guesswork anymore: allocators are overwhelmingly choosing systematic/quant
A partner at an institutional allocator summarized the current view: “There is no mandate to invest in anything discretionary unless it is activist investing. We feel the need to invest in strategies that are substantiated by research, by a scientific method.”

Allocations to quant funds are set to exceed $1 trillion. That would be about a third of the net allocation to hedge funds.

“Pedestrian quant” strategies don’t work: allocators need to stop chasing the past
Neal Berger and I described in a recent Opalesque CIO roundtable why “pedestrian quant” strategies aren’t working any more. However, the machinery of relationship-based consultants and outsourced CIOs set up for institutional investors makes them very slow to react to this change. The dominant institutional investing approach is still to make an asset allocation pie chart and shove everything remotely data-driven into a small allocation to “alternatives”. A flat CIO structure would be an improvement, and help provide investors with what they really need: solutions not products.

In a panel I led with Michael Weinberg and Michael Dziegielewski, Weinberg tried to differentiate between quant and autonomous learning.

“Where we are now is far more revolutionary for asset management than simple AI-driven quant. What we have now in investment management is the equivalent of AlphaGo Zero, but for investing. Just like AlphaGo Zero generates its own new ideas, so do Autonomous Learning Investment Strategies (ALIS). We think that this disruption to the asset management industry will come from outside discretionary and even quantitative managers. These so-called ‘third wave, ALIS managers’ exploit the confluence of data, data science, machine learning and cheap computing: their brains are wired differently.

They are often hackers and computer gamers with a healthy disrespect for convention,” he said. Weinberg also noted that allocators need to spend considerable time understanding a manager’s approach and their competency in deep learning and AI. It is not just about chasing returns.

This brings us to the multidisciplinary nature of AI today.


AI is learning from many fields, not just finance
The progress we see in AI today is highly multidisciplinary. Every breakthrough in speech recognition technology can now be applied directly to portfolio management, and every step up in the computing power and big-data technology that support AI algorithms can be used there too. In a recent article, ITG’s Ben Polidore notes how cheaply available computing power is approaching supercomputer levels.

FIX Protocol Enables Pre-Trade Allocation

By Damian Bierman, Global Co-Chair, Asia Pacific Technical Committee, FIX Trading Community, and Head of Asia-Pacific, Portfolio Management & Trading Solutions, FactSet

Regulators should recognise the properties of the FIX protocol in their discussions about the merits of introducing pre-trade allocation rules.

There are already established standards and processes in place for buy-side firms to communicate share allocations to their brokers. Most fund managers should be able to send instructions in their native jurisdictions using the FIX protocol, which is well-equipped to channel and manage the information efficiently.

Moreover, in several markets, such as Taiwan, regulations stipulate that allocation information must be communicated on a pre-trade basis. Buy-side institutions need to provide accurate allocation details when sending their orders across so that brokers can check whether they have sufficient funding to make a purchase or enough shares to make a sale.

In some cases, institutions will have an omnibus agreement in place whereby their broker can use an agreed-upon allocation scheme to handle the requirement; typically an omnibus ID then suffices, and individual account and allocation details need not be sent.

In other instances, where it’s determined that there is a need to send individual account allocations to the broker, the responsibility usually falls to the trader to communicate this information. If it’s not transmitted electronically at the time of trade, that is, directly from the trading system via FIX, then it has to be sent by some other means, usually email or chat. While this is understandably less complex from a systems integration standpoint, it makes the job essentially a manual one, requiring more time, focus, and effort on the part of the trader.

In today’s world of rising trade volumes and increasing complexity, there’s a good case to be made for streamlining manual processes when possible.  Indeed, the Securities and Exchange Board of India is, it seems, discussing the merits of introducing regulations similar to those already in place in Taiwan, so it is apposite and timely to consider how FIX can help bring further efficiencies to workflows.

Pre-allocation information can be sent electronically via FIX today using the Tag 78/79/80 repeating group.  By specifying the underlying account and share allocation information through these tags in the new order message, buy-side traders can be freed from having to communicate the information manually, and brokers can run their pre-funding checks in a more systematic fashion.
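As a rough illustration of the mechanics, the sketch below assembles a NewOrderSingle (35=D) carrying the Tag 78/79/80 repeating group. It is a hypothetical Python helper written for this article, not production code: a real implementation would sit on a FIX engine such as QuickFIX and include the full session layer (sequence numbers, BodyLength, CheckSum).

```python
# Minimal sketch: a NewOrderSingle (35=D) carrying pre-trade allocations
# in the Tag 78/79/80 repeating group. Session-level fields are omitted.

SOH = "\x01"  # standard FIX field delimiter

def new_order_with_allocs(symbol, side, qty, allocations):
    """allocations: list of (account, shares) pairs; shares should sum to qty."""
    fields = [
        ("35", "D"),                    # MsgType = NewOrderSingle
        ("55", symbol),                 # Symbol
        ("54", side),                   # Side: 1 = Buy, 2 = Sell
        ("38", str(qty)),               # OrderQty
        ("40", "1"),                    # OrdType: 1 = Market, for simplicity
        ("78", str(len(allocations))),  # NoAllocs: entries in the repeating group
    ]
    for account, shares in allocations:
        fields.append(("79", account))      # AllocAccount
        fields.append(("80", str(shares)))  # AllocQty: pre-trade share allocation
    return SOH.join(f"{tag}={value}" for tag, value in fields) + SOH

# A 10,000-share buy split across three hypothetical underlying accounts,
# so the broker can run pre-funding checks per account before execution.
msg = new_order_with_allocs("2330.TW", "1", 10000,
                            [("FUND_A", 5000), ("FUND_B", 3000), ("FUND_C", 2000)])
print(msg.replace(SOH, "|"))
```

Because the account-level quantities travel inside the order itself, an amendment that adds quantity (a complication discussed below) can carry an updated repeating group rather than triggering a separate email or chat message.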

Complications
Nevertheless, as with most matters pertaining to electronic trading, it is wise not to oversimplify, because the devil is always in the details.  There are a number of factors which can add complexity to the workflow, and it is the role of the industry’s trading technologists to work through the various scenarios to make sure that the nuances are being accounted for.

For instance, at a given firm there could be internal requirements that stipulate that pre- and post-allocation information match perfectly.  In such cases, the trader needs to be able to control whether the account information is sent, and this can impact how the resulting child orders are traded.

Another complicating factor comes with handling order amendments.  Additional quantity, potentially with new or different account information, can be added to the upstream parent order at any time throughout the day, and this will have to be taken into account when sending updated allocation information across in new child slices.

Despite these (and other) complexities, having the ability to transmit pre-allocation information natively via FIX is already creating efficiencies in the order-execution lifecycle.  As new regulations are being considered by markets throughout the world, the practice of streamlining this workflow via FIX has the potential to gain additional traction in the near future.



Announcement : In memory of Tim Healy

April 11th, 2018.

It is with the deepest regret that we inform you that Tim Healy has passed away.

Many knew Tim as FIX Trading Community’s Global Marketing and Communications Director, where he had worked tirelessly for the past few years raising awareness of FIX’s efforts to engage and help the industry.

Before that, many will know him from his days working in the City of London. He spent his formative years building up a wealth of experience at Deutsche Börse, ITG, Citigroup and Coutts, to name just a few, working in investment banking, equity trading, front-office trading systems and FIX connectivity.

Tim had been battling a rare illness for some time, but was stoic about working through it. His strong will and positive outlook never left him; and he worked right until the end, even attending the EMEA Trading Conference just weeks before he died.

Tim’s sharp wit and calming presence made him hugely popular with his colleagues and peers alike, and our thoughts are with Tim’s family at this very sad time.

He is greatly missed.

 


Buyside profile : Huw Gronow : Newton IM

Brave new world?

Huw Gronow assesses the impact to date of MiFID II and how Newton Investment Management is responding.

What do you think the impact of MiFID II has been to date?

MiFID II has not been a big bang type of event, which on reflection is not surprising because we all knew what the requirements were and there was a lot of preparation. The key thing for us and other buyside firms was to improve and enhance our processes in order to meet the new, higher standards. The regulation on best execution, for example, requires firms to take “all sufficient steps”, versus the previous “all reasonable steps”, to achieve the best possible results for clients. For firms that have hitherto embedded a culture of best execution in their processes, this should not represent major change.

As for the actual impact on trading, it is still too early to say. The delayed implementation of the dark pool caps made the transition less of a cliff edge than it might otherwise have been, as brokers had likely already adjusted their routing logic in anticipation of the DVCs (double volume caps) arriving on time. In these early stages, though, some predictable outcomes have occurred: the dark MTFs have lost significant sub-large-in-scale (LIS) market share, and block trading seems to be on the increase as institutions adapt their behaviour.

Where do you think the flow will migrate to?

It will depend on what mechanisms and routing logic brokers will use and how they will adapt in this new environment. At the moment there has not been any significant migration to lit exchanges and MTFs, while non-displayed large in scale, systematic internalisers (SIs) and periodic auctions have attracted flow. It is still to be seen whether that will continue to be the case. There is no doubt though that the regulators will be closely looking at trading flows.

As for the research side of unbundling, an increasing number of asset managers, including Newton IM, decided to pay for research themselves rather than passing the cost on to clients.

What was the reason behind this?

Newton IM covers external research, rather than asking clients to pay. We have a strong global research capability tasked with producing stock ideas for exclusive use in our client portfolios; it represents a fundamental pillar of our wider investment process. However, the firm also draws on some external research and analysis to inform and support this process, and to incorporate relevant and healthy challenge to its perspectives.

In terms of unbundling itself, I think that as it becomes more widespread, research will progressively decouple from trading commissions. I expect we will see open competition for execution, given the freedom of choice, as well as evidence that the high standards of best execution under MiFID II are being applied.

There has been a shift from TCA (transaction cost analysis) to BXA (best execution analysis) – what have been the drivers and how do they differ? What tools and analytics do buyside firms need?

The discipline of TCA has evolved considerably over the past ten years and now we are seeing it being applied across more asset classes aside from equities. However, it is only one part of the process and you have to apply the principles of best execution across the entire pre- and post-trade lifecycle. This has meant that the dealing desk has been promoted from just being a clerical adjunct to an integrated part of the process. There is now continual feedback between the traders and portfolio managers so that they can add actionable insights to the investment process. This has meant that the relationship is more of a partnership which is why we moved our trading desks and investment desks closer together. It has improved communications between the two groups.

Also, under MiFID II, the priority is to create a process that consistently delivers the best outcomes for clients. The regulation creates an opportunity for buyside dealing desks to add value to the investment decision-making process through using execution data. It allows you to display and decompose the costs with the portfolio manager and then create a feedback loop throughout the process, evolving a culture of continuous learning and process improvement. Each buyside will choose and apply the technology and analytical solutions that best apply to their own clearly defined processes; but it is clear that as the volume of data increases exponentially, the capability to interrogate, interpret and produce actionable insights will fall on the execution function.

How successful do you think firms will be in applying TCA to other asset classes such as fixed income?

I think in time the principles of TCA will be extended because there will be more data available to calibrate for fixed income trades. It will take time but we are already beginning to see this happen in FX. One of the keys will be the metrics, and not just the benchmarks, used to measure TCA. There will not be a one-size-fits-all solution; approaches will have to take into account different trading styles as well as the volatility of trading.


How do you see the execution landscape changing in the wake of MiFID II and how is that changing the relationship between buy and sellside?

From the data we have seen, the fragmentation since MiFID I has not led to lower trading costs for institutional investors; one of the reasons is that, by definition, you are spreading your risk across a larger number of venues. I expect to see further fragmentation under MiFID II and the continued move from high touch to low touch as well as to large in scale venues.

This shift has also changed the relationship with the broker. A generation ago, it was based on people holding the information, while today it has shifted to a more technology-based relationship. Buyside traders have also become much more sophisticated and analytical. They have adopted a DIY approach and are doing much more in the way of self-directed trading. In equities, this has meant interacting with exchanges and different venues across markets as well as being more selective and trading in large in scale blocks. The key is to look at the investment strategy and to determine how urgent the trade is. If it is less urgent, for example, you can be more patient and trade in block venues. The introduction of conditional order types has also offered some flexibility in the process.

Has Newton IM met the challenges of MiFID II and market conditions?

Newton IM has devoted significant resources to implementing MiFID II on time and has put in place the necessary systems, controls and checks to help ensure compliance on an ongoing basis. Overall, though, we have ensured that the team contributes more to the investment process in terms of insights from trading experience as well as assessing the impact of positions and inventory aggregation on portfolio construction. We have adopted real-time analytics and tools which inform dealers on the technical decisions that they can take through the trading cycle.

What do you see as some of the biggest challenges going forward?

Volatility. The challenge is to be nimble and to be able to quickly adapt the trading style and execution and to ensure that all channels of communication are open with the portfolio manager. There is also the perpetual buyside challenge of searching for liquidity. This is why it is important to have a close relationship with the broker who can help you navigate the different venues in the market ecosystem and for dealers to understand that environment and the impact of their interactions within it.


Huw Gronow joined Newton Investment Management (IM) as Head of Dealing in October 2016. He was formerly Head of Equity Trading at Principal Global Investors for 12 years. Prior to that, Gronow worked for BlackRock International and TT International as an equity trader. He holds an MA in Natural Sciences from the University of Cambridge, as well as Chartered Member status of the CISI (Chartered MCSI).

©BestExecution 2018


Sellside profile : Thomas Biotteau and Mark Freeman : Kepler Cheuvreux

THE EBB AND THE FLOW.

Thomas Biotteau, Deputy Global Head of Electronic Execution Services, and Mark Freeman, Head of Execution Sales at Kepler Cheuvreux, provide a sellside assessment of the ongoing changes in the equity arena post MiFID II.

What have you seen in the first weeks after the MiFID II implementation date?

Biotteau: At first it was business as usual because the dark pool caps (DVCs) had not yet been implemented. However, we have seen an increase in periodic auctions on Cboe Europe as well as a general rise in large-in-scale (LIS) trading volumes. Since the DVC implementation on 12 March, the trend towards periodic auctions and LIS has been accelerating, as has the volume Kepler Cheuvreux trades on these new pools. The buyside are also cutting their broker lists due to unbundling, and in Europe I expect the core list to fall below 20.

Freeman: I think it’s testimony to the industry that, despite the increase in complexity, everything seemed to go smoothly and there were very few outages. A great deal of work was done in the second half of 2017 and that paid off during implementation. However, MiFID II will be a work in progress because you can’t bring in 7,000 pages of regulation and expect everything to be done in the first few weeks. There were a few niggles from a regulatory perspective, such as the delay in the dark pool caps. We have also not seen people take advantage of the new tick size regime.

In general, how do you see trading changing in terms of high and low touch?

Biotteau: There has been a transition from high touch to low touch, with more flow moving to electronic trading. We have seen an increase of around 40% in low touch and I think this will only grow in the months to come.

Freeman: If you look at the US, it is evenly split between low and high touch, and Europe is catching up. In the past there was less focus, and less client interest, on the quality of execution, but now the buyside are more accountable. This is being driven not only by MiFID II but by greater scrutiny of fund charges and the continual move from active to passive investing. There is a much more detailed process in place and clients are now looking at execution, timing and strategy in greater detail.

How are firms changing the way they use TCA?

Biotteau: The larger buyside firms are equipped with their own TCA and are running their own analysis comparing brokers and algos. On our side, we are an agency broker and have always had an ‘open bonnet’ as to where we trade. People are looking at a wider group of benchmarks to measure TCA against, but I think one of the biggest changes is that it is no longer used only for compliance purposes but also to improve performance.

Freeman: One of the big questions today is how the data can be used in the investment decision-making process. In the past, the quality of data available to conduct meaningful analysis was questionable, but today there is much greater transparency and granularity.

How do you think the SI regime will develop?

Freeman: In the US, there is essentially one regulated trading venue, but there are too many trading options in Europe. One is too few, but 100 is too many. An SI is basically a risk trade, and SIs are slightly more popular in the UK than the rest of Europe because the country has traditionally operated on a risk framework. One of the problems is the lack of information about SI market share and trading activity.

Biotteau: It is still very early days, and I agree that there is not enough information available. I am optimistic that we will be able to make a meaningful assessment after the end of the first quarter. However, it will also take time for liquidity to shift there.


How can firms differentiate themselves post MiFID II?

Freeman: Although there is a lot of focus on the technology and tools, that is only one part of it. It is also about hiring people from different backgrounds and disciplines to work in the company. This helps generate different ideas, which can set a firm apart. For example, we look for PhDs or people with maths and engineering degrees who may never have worked in financial services but have an interest in the sector and want to explore a new area.

In 2017, Kepler Cheuvreux signed on more than 300 clients across its range of execution, research and advisory services – what were the drivers behind the expansion?

Biotteau: We have always been an agency broker with great expertise in small and mid-caps, and in that sense we have not had to change our business practices under MiFID II. Nor has our focus on excellence of execution, liquidity access and performance changed. As the flow increasingly moves towards low touch or algos, we aim to beat the benchmarks with a strong emphasis on maximising the reward and minimising the risk. We adopt a passive approach because we believe that the more aggressive you are, the greater the market impact.

Freeman: In the same way as execution is evolving as a product, so too is research. We have expanded our footprint through strategic partnerships with Piper Jaffray in the US and Swedbank in Sweden to cover the Nordics. We have also opened offices in Oslo and Brussels due to our co-operation agreement with Belfius. There had been a concern that small to mid-cap research would disappear because of unbundling, but the retention of our research product has been 98%. This is because people still want quality information on these more difficult to trade, less liquid and tightly held stocks.

What do you see as the biggest challenges and opportunities going forward?

Biotteau: Navigating the changes brings both opportunities and challenges. The lines are blurring not only between the buy and sellside but also among regulated stock exchanges, trading venues, block trading platforms and SIs. For us, the most important thing is to stick to what we do best, which is being an agency broker and adding value in terms of execution and research.


Thomas Biotteau started at Kepler Cheuvreux in 1999 (formerly Bank Julius Baer Brokerage). He has a background in statistics and computer science, and has held different positions in the group in both execution services and IT. Since 2000 he has been responsible for the electronic product build, and has led the Electronic and Algo Quant teams. In October 2016 he became deputy Global Head of Execution Services, overseeing all execution channels, alongside Patricia Shin.

Mark Freeman joined Kepler Cheuvreux in September 2008 to head sales of the electronic trading product and is currently Head of Execution Sales, covering all the firm’s European accounts.  Freeman has over 20 years’ experience in the electronic trading space.  He initially joined Instinet in 1990, where he was head of UK and European sales and then spent five years at ITG Europe, covering European institutions and hedge funds and in addition, has worked for Portware and Nyfix.

©BestExecution 2018


Profile : Jon Foster : SmartKarma

FORGING A NEW RESEARCH PATH.

While MiFID II’s research unbundling rules may be an impetus, Jon Foster and his co-founders realised the research model needed fixing long before the new rules saw the light of day. He explains how they are trying to fix it.

What was the original idea behind Smartkarma?
Raghav Kapoor, Lee Mitchell and I started the business because we became convinced that the research model operating at the time was unsustainable and was not meeting the needs of the customer. It was ridiculously inefficient, as the process was so manual, and there were conflicts of interest within the big sellside institutions. We decided we could offer the same wide range of services with independent analysts, including high quality research, analyst calls, corporate access, financial models and bespoke projects.

Our platform would help achieve economies of scale as well as provide content from different areas of expertise. For the analysts it offered the opportunity to be part of a bigger eco-system and to share their thoughts with clients and each other.

Why did you choose Asia as your launchpad?
One reason was that we were all based in the region, but we also felt there was a gap in the independent research space, which is more established in the US and Europe. The region is developing but it is a particularly complex and fragmented marketplace with different cultural biases and business practices. This is especially true in Japan, a very conservative market where the power of the corporate brand is sacrosanct. We also wanted to enable analysts to make linkages, so that research from an analyst covering the semiconductor industry in Taiwan could also be relevant for someone looking at upstream tech companies in China and Japan. Our thinking was that if we could crack Asia, then we would be successful in other parts of the world.

It’s early days but what has the impact been of unbundling?
When we launched our platform (in 2016) we never set out to be a MiFID II solution. We thought that the whole investment research industry needed a rethink, and we were responding to structural problems that were leaving customer needs unmet. Our aim was to bring investment research into the future and create a new model that uses technologies such as artificial intelligence, data analytics and connective networks. We think that MiFID II, with its focus on transparency and creating a more efficient structure, serves as a good tailwind for us. We see it not as the end but as the start of a process, with people looking at how they can improve. The shakeout won’t happen for another two to three years.

What are the challenges of valuation? I have read that macro or maintenance research has been assessed as harder to value – at least, in a way that would be meaningful to clients. How do you see this developing?
You have to think carefully about valuing research products because what is meaningful to one person may not be valuable to another. Sellside firms provided global waterfront coverage, and the costs were packaged into trading commissions with their buyside clients. I think their pricing structure will evolve over time, but the question you have to ask is whether you want to buy research off the shelf or as part of a suite of services. We see it as a dynamic process, similar to the streaming services seen in the music industry. We charge a flat annual fee which gives people unlimited access to a wide range of information, including research reports, analysis of initial public offerings and news on different political events.

What do you think will happen to research budgets?
I believe research budgets will shrink because many asset managers have decided to pay for research out of their own P&Ls. This is making them focus much more on getting value for money than they did in the past. They will be looking more closely at research that can help them make better informed decisions and generate alpha. The evolution will also be on the other side and a more level playing field will be created. If you can offer great expertise, a quality product and services at a highly efficient price point, then you will win out.

What do you think will happen to sellside research?
One of the main problems with sellside research is that until very recently there was virtually no change to the product or the material that was covered. There was little innovation: reports were still being distributed over email as PDFs, which few were reading because so much of the material replicated other coverage. The process was also very manual, with banks relying on sales people for distribution, leaving asset managers to find the relevant information in the reports, which at times was like finding a needle in a haystack. We have seen big changes over the last few months, and technology is playing a greater role in speeding up the process.

There are several new initiatives such as Visible Alpha – how do you see the research landscape evolving?
There are different initiatives, but many are bank-led vehicles clearly being developed to reinvigorate the sellside model. Many of the aggregation platforms being launched are simply digitalising what has been done before. I believe we are at the maximum point of saturation and would be surprised to see any new entrants in that space. I expect there will be consolidation over time, because these platforms need large economies of scale to be successful.

What do you see as the future of the research analyst? Will technology take over their role?
Technology is at the heart of what we do but in many ways there is a slight danger of putting too much reliance on AI and machine learning. They are just tools and what is important is the way they are being used and whether they can help generate returns in the investment decision making process.

I do not think it spells the end of the research analyst. While there are teams producing completely quantitative reports, the human element, expertise and insight should not be underestimated. Investment teams are still run by humans and markets are still driven by human psychology.

You have opened offices in Europe, what is the next stage of development?
The next step for us is the geographical expansion of the product into the UK and Europe, which is why we opened offices in London and Frankfurt. We are aiming to recreate the successful model that we built in Asia and to provide support to the independent research houses in Europe. We are also focusing on the US, which spends vast amounts of money on research and is in need of unconflicted research. We are scaling up our support for independent analysts with products that allow them to provide their full suite of research services, which will ensure they can grow their businesses.


Jon Foster, co-founder of Smartkarma, has spent more than 15 years in Asian financial markets. Previously he was a director and head of the Global Special Situations Group at Religare Capital Markets from 2010 to 2014. The company acquired Aviate Global (Asia), where he was a founding partner and director from 2009-2010. Before that, Foster was a junior portfolio manager at Millennium Partners, and one half of the two-person team that founded Churchill Capital in Asia. He graduated from RMIT University, Melbourne with a Bachelor of Business, Economics and Finance.

©BestExecution 2018


10 Year review : MiFID : Lynn Strongin Dodds

A DECADE OF CHANGE.

Lynn Strongin Dodds looks back at the post-2008 world and how MiFID has altered the landscape.

Long before the financial crisis wreaked havoc on stock markets and investment banks, MiFID I was being hatched in the hallowed halls of the European Commission. It was officially launched in 2007 with the aim of shaking up the status quo in the equities landscape. However, as events the following year proved, much more work needed to be done to corral the worst of the excesses and improve transparency.

MiFID I also did not create the promised single market in investment products and services, because the 27 European countries had their own interpretations and moved at their own paces. Moreover, best execution was still seen as a tick-box exercise due to the wording of the directive, which only required sellside participants to take all ‘reasonable steps’ to achieve their objective. Equally riling, though common to all new rules, the process was an expensive exercise. A study published last year by Thomson Reuters showed that it not only cost more than expected but also took longer to bed down, and highlighted the need for a flexible business model.

Breaking new ground

Hindsight, of course, is always a wonderful thing and there is no doubt that MiFID I broke new ground in loosening the grip of the dominant stock exchanges. The introduction of MTFs such as Bats, Chi-X and Turquoise was initially dismissed but they soon proved their mettle and steadily chipped away at the market share of the large incumbents.

In fact, a report last year by the European Central Bank (ECB) showed that the share of equities trading on MTFs in Europe increased quickly from 0% of turnover in 2008 to 18% by early 2011. Although primary exchanges still accounted for the lion’s share of exchange-based equity trading in each country – around 60% in 2016 – MTFs comprised the remaining 40%. The irony of course is that they are now back within the stock exchange fold with Chi-X being acquired by the Chicago Board Options Exchange (Cboe) while Turquoise is part of the London Stock Exchange Group.

Dark pool trading also enjoyed a boost, climbing from less than 1% of the volume of equities trading recorded on European exchanges in 2008 to 8% in 2016, according to the ECB report. While some of the initial growth in market share coincided with the emergence of new venues, in later years, the increase was mainly driven by existing venues consolidating liquidity and increasing their market share.

The one new breed of venue that did not take off was the systematic internaliser (SI) regime. Brokers instead opted for the more flexible broker crossing network (BCN) platforms which allowed their clients to interact with their principal flow on an over-the-counter basis.

“If you wind the clock back that far, the concept of equity trading was tied to the national exchanges and one intention of MiFID I was to create competition and allow the creation of pan-European MTFs,” says Adam Toms, CEO Europe of OpenFin. “Electronic liquidity providers arrived and capitalised on the new liquidity landscape. The process has only become more sophisticated with the introduction of smart order routers and more complex execution algorithms, but it has also led to fragmentation and smaller order sizes.”


The dispersion of liquidity has been another major challenge. “The removal of the concentration rules under MiFID I was a good thing, in terms of competition, but one of the unintended consequences was the fragmentation of liquidity,” says Dario Crispini, founder and CEO at Kaizen Reporting and former manager of the transaction monitoring unit of the Financial Services Authority between 2001 and 2010. “As the market evolved, this led to more complex trading strategies and high frequency traders and algorithmic trading which caused distortions.”

One of the most notable was the infamous 2010 ‘flash crash’, immortalised in Michael Lewis’ book Flash Boys, when the US markets nosedived by around 1,000 points in record time amid hyperfast, computer-driven trading. Such outfits now account for about half of all US equity trading, according to Credit Suisse, and a substantial chunk – between 20% and 40% – of all trades in European, including UK, markets.

The evolving buy and sellside relationship

Not surprisingly, the advent of electronic trading and low touch tools changed the dynamics between the sell and buyside. “The buyside still values the sellside but it is from a different angle,” says Steve Grob, director of group strategy at Fidessa. “The relationship is being driven by technology, so they are still interested in the sellside kit – smart order routers, algos – but also in the wider execution consulting services they can offer. This means someone who is au fait with technology, is on top of compliance issues and can also communicate clearly and in a timely fashion in terms of the different trading options that are available as market conditions change.”


Jeremy Davies, co-CEO and co-founder at RSRCHXchange, also believes that the sellside will have a role to play in the more complicated trades. “Voice trading will always be used for the larger block trades or the complex multi-leg trades.” He adds, “However, the relationship has changed over the years and it has become more professional. I think both sides appreciate the synergies they can create together. For example, corporate entertainment has virtually disappeared since the global financial crisis and the focus is much more on the services provided and not who took me to the rugby.”


The changes are not only between the two camps but also within the confines of the buyside. “If you go back a decade, the buyside trader was almost an appendage to the portfolio manager,” says Rebecca Healey, Head of EMEA Market Structure for Liquidnet. “Today, they have a greater role in the investment process before a strategy is implemented. This is mainly due to technology and the evolution from the blunt tool of direct market access to tools that allow a more intelligent way of trading.”


Tom Doris, co-founder and CEO of OTAS Technologies, a Liquidnet company, echoes these sentiments. “Trading desks were seen as a necessary evil and more of a clerical function,” he adds. “Today, head traders are actively involved in the investment process and are seeking to generate alpha. They have adopted a much more scientific way of trading and are systematically looking to eliminate the sources of friction. One reason is that algos have become more commoditised and the easy stuff does not require human judgement. This leaves them time to make judgement calls and apply risk management to the remaining and more difficult trades.”

The technology boom

While MiFID did have an impact on buy and sellside connections, it was not the only factor. The credit crunch and ensuing regulations, such as Basel III and Dodd-Frank, have had in many ways a more significant impact on the interactions between the two. The same can be said of technology. The original directive may have led to an increase in the number of cutting-edge solutions, algos and service providers, but it was the steady stream of post-financial crisis legislation that triggered the fintech revolution.

In terms of MiFID itself, investment banks and exchanges were forced to bolster their infrastructure and enhance their data storage, connectivity and routing technology to comply. For example, the best execution requirement may not have been that stringent but financial institutions still had to store as well as retrieve data to attest to their commitment in trying to obtain the best outcomes.

MiFID also fostered strategic alliances such as BOAT, the trade reporting platform, which was created by major investment banks to help firms meet their pre-trade quoting and post-trade reporting obligations. “We have seen more partnerships between fintech and banks because financial services has become much more complicated,” says Healey. “The rules have been broken down by rapid advances in technology and institutions can’t function in the same way as they did in the past.”


“There has been a fundamental change over the past ten years whereby software and technology take up much greater space in terms of the value they provide to an organisation,” says Sylvain Thieullent, CEO at Horizon Software. “This is because there is a common agreement that people have to be smarter and much more efficient in the way they trade, invest and collect data. The banks are still struggling with legacy systems but there is more of a strategic focus and long-term approach. They are looking at solution partnerships that are scalable and will meet their requirements over the next five to ten years.”

The next chapter

MiFID II, of course. “When it went live, there was a clause in the directive that said a review would be held in two to three years, but it took much longer than that,” says Crispini. “We knew there were gaps, especially in surveillance and transaction reporting. In the intervening ten years the credit crunch happened and that delayed the rewrite, because the regulators thought there needed to be a much more comprehensive assessment.”

©BestExecution 2018


Buyside focus : Conflicting interests : Dan Barnes

SEC MAKER-TAKER PILOT REFLECTS LONGSTANDING CONCERN.

Concern about maker-taker impact on order routing leads to regulatory response. Dan Barnes reports.

Inducement is a loaded term amongst asset managers. Fiduciary duty to the end investor prohibits the acceptance of payment from third parties. Sellside firms have no such obligation and so they can be exposed to conflicts of interest.

The maker-taker model is an exemplar of such a conflict. With a flat fee model, an exchange, electronic communication network (ECN), or MTF charges firms that post or take liquidity the same fee.

Using the maker-taker model, placing a passive order on a book (i.e. a limit order) earns a rebate. Placing a market or limit order that instantly executes against an order sitting on the book costs a fee. Using the ‘inverted’ or taker-maker model, firms that take liquidity are paid a rebate. These models allow a broker to get paid by their client and by the exchange.
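To make the economics concrete, here is a minimal sketch of the per-share arithmetic. The fee and rebate levels are illustrative assumptions only, not any venue’s actual schedule (though the $0.0030 taker fee matches the Reg NMS access fee cap):

```python
# Illustrative routing economics under maker-taker vs inverted pricing.
# Convention: positive = the firm pays the venue; negative = it collects a rebate.
# All fee/rebate levels below are assumptions for illustration.

def venue_cost(shares, maker_rate, taker_rate, posted):
    """Cost of executing `shares` at one venue, depending on whether the
    order posted (made) or removed (took) liquidity."""
    return shares * (maker_rate if posted else taker_rate)

shares = 10_000

# Maker-taker venue: posting earns a $0.0025/share rebate; taking pays $0.0030/share.
print(venue_cost(shares, maker_rate=-0.0025, taker_rate=0.0030, posted=True))   # -25.0: rebate
print(venue_cost(shares, maker_rate=-0.0025, taker_rate=0.0030, posted=False))  # +30.0: fee

# Inverted (taker-maker) venue: the taker collects the rebate instead.
print(venue_cost(shares, maker_rate=0.0020, taker_rate=-0.0015, posted=False))  # -15.0: rebate
```

The $55 swing per 10,000 shares between posting and taking on the maker-taker venue is the routing incentive at issue: the broker collects it from the exchange on top of the client’s commission.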

Whether they would do that to the detriment of their client’s execution quality is an open question. However, it is not a new question; where ten years ago firms may not have realised that brokers were potentially routing for economics rather than best execution, the issue has been very public in the intervening period.


“I am concerned about maker-taker, but less concerned than I used to be,” says Clive Williams, global head of trading at T. Rowe Price. “We are asking for and seeing a lot more data than we were in 2010, for example. We are analysing that data, so we can ask why a broker posted to its internal dark pool 50% of the time first. Now, they might have a justifiable reason for it, but it’s certainly much more visible so it can be challenged.”

For the dealers, routing can be a complex process. Decisions can be based upon the liquidity of the stock in question, the size of markets on which it trades, as well as any rebate model. As a result, best execution must be carefully assessed.


“The venues that have the maker-taker model are the biggest venues, so they have the majority of the market, which can make an apples-to-apples comparison difficult,” says Philip Pearson, director for algorithmic trading at ITG. “In illiquid stocks you are kind of at the mercy of where volume has traded previously, and where current bids and offers are. Feeding new bids on more exchanges is generally not the best idea for information leakage, whereas in the liquid stocks generally there might be 10 exchanges on the bid. Generally it’s better to post on inverted venues because you could get ahead of the queue a little bit.”

New pilot

As electronic trading grew between 2008 and 2010, US and European markets competed heavily for equity order flow; adapting pricing structures was a commonplace competitive tool, and pricing was contentious.

For example, Bats, now owned by Cboe, had operated the maker-taker model for some US securities, with alternative pricing models for others, since 2006. The firm’s then-CEO, Joe Ratterman, was publicly critical of other venues’ pricing models. US equity market operator Direct Edge, now also part of Cboe, introduced the maker-taker fee structure on its EDGA platform in November 2009, reflecting the model operated by rivals.

However, the models were not always welcome. Bats abandoned plans to offer maker-taker in Europe following feedback from customers in October 2009. The issue has since become largely a US concern.


“I think the costs associated with trading are a bit higher in Europe, and then the market structure is different; the combination of the two wouldn’t lend themselves to someone who is trading simply for rebate collection,” says Duncan Higgins, head of EMEA Electronic Products at ITG. “So, I wouldn’t think the trading for rebate collection would have an economic return in European markets.”

In North American markets, concerns persisted. An October 2017 study by analyst house Greenwich Associates found that 65% of US equity traders believed maker-taker pricing “creates distortions and is bad for market structure”, while only 10% saw it as “a benefit to market structure”.

In April 2016, the US Securities and Exchange Commission’s (SEC) Equity Market Structure Advisory Committee (EMSAC) proposed a pilot program to modify exchange pricing, known as the Access Fee Pilot, to assess the impact of both maker-taker and inverted models upon order routing.

On 14 March 2018, the SEC announced it would be opening the pilot programme after a 60-day consultation period. Given the decade over which the concerns have been known, the length of time taken for the SEC to deliver the pilot may seem surprising; however, the regulator has been fully occupied and under-resourced.

“They have been trying to get a pilot programme going for a while,” says Brian Urey, head of Americas trading at Allianz Global Investors. “It is definitely still an issue, there is a vocal minority who feel that it has no material impact on the market and a majority who feel that structure does impact routing decisions and isn’t necessarily in the best interests of the buyside.”

The programme involves the creation of a new ‘Rule 610T’ of Regulation NMS to conduct a Transaction Fee Pilot in NMS stocks. It would subject stock exchange transaction fee pricing, including fee-and-rebate pricing models, to new temporary pricing restrictions across three test groups, and require the exchanges to prepare and publicly post data.


“The proposed pilot is designed to generate data that will provide the Commission, market participants, and the public with information to facilitate an informed, data-driven discussion about transaction fees and rebates and their impact on order routing behaviour, execution quality, and market quality in general,” said SEC Chairman Jay Clayton, in a statement.

A test group of NMS securities will prohibit rebates and linked pricing, while other test groups will impose caps of $0.0015 and $0.0005 for removing or providing displayed liquidity. The pilot would apply to all NMS stocks regardless of market capitalisation and will include all equities exchanges.

The pilot will require the exchanges to prepare and post public and downloadable data including monthly aggregated and anonymised order routing data, and an XML dataset of standardised information on their transaction fees and rebates. It is planned to last for up to two years, ending after one year unless the SEC extends the pilot.

Self-determination

Regulation is not always the preferred way to effect change in capital markets. Self-determination often has fewer unintended consequences. However, in this instance it seems the buyside believes that the regulators have not gone far enough.

In the Greenwich study, 62% of respondents argued that the proposed access fee pilot should be modified to include an outright ban on rebates, with just 13% stating it should be approved without modification.

Several commentators said that any pilot should either ban rebates altogether or include a “no-rebate” test bucket; EMSAC considered this but did not ultimately recommend it.

“You have to make some decisions based on what type of stock you own, what the book looks like,” says Pearson. “And I think that those are conversations that the buyside are now willing to have, whereas before they might have just said, rebates are bad.”

Williams adds, “If we want to avoid the regulatory hammer coming down on us, then I guess it’s up to people on the buyside to ask the right questions, and the sellside to be providing the right answers and transparency.”

©BestExecution 2018


Equities trading : Market structure & liquidity : James Baugh & Sai Dharmayogan

NAVIGATING THE NEW LIQUIDITY LANDSCAPE.


By James Baugh, Head of EMEA Market Structure and Sai Dharmayogan, Head of EMEA Execution Advisory Services, Citi.

Throughout 2017 and in the run up to the MiFID II go-live, there was significant industry debate around the potential rise of the Systematic Internaliser (SI) and the role that bank central risk desks would play in the new liquidity landscape.

Many anticipated an immediate shift in liquidity, particularly towards market maker SIs, as brokers looked both to outsource internal crossing opportunities that were no longer permissible and to adjust routing logic for smaller child orders in response to MiFID II’s new 4 and 8 percent Double Volume Caps (DVCs) on dark trading.

However, more in line with our own thinking, what’s transpired has been a more gradual change in the market microstructure and overall liquidity landscape, with continued growth in the use of conditional Large-In-Scale (LIS) block platforms and an uptick in the use of periodic auctions.

There are a few different factors that have contributed to this. Notwithstanding our own quantitative approach to adding new liquidity sources and market makers to the algo scheduler and smart order router, the greatest impact has arguably come from the two-month delay in the implementation of the DVCs. This delay provided some sense of continuity and allowed market participants to adapt gradually to the changing liquidity landscape. Whilst the share of overall business transacted on MTF dark pools was most impacted on the date of implementation itself, dropping from approximately eight percent to below five percent, this gradual change in market participants’ behaviour mitigated to some extent the actual impact of the DVCs. For example, several analyses by various market participants in 2017 showed that all but one of the FTSE 100 stocks would be capped out. In the event, when the caps went into effect on 12 March, fewer names were impacted, with 85 stocks in the FTSE 100, 324 in the STOXX 600 and only two names in the STOXX 50 included on the list of capped securities.

Market innovation has also played a significant role in mitigating the impact of MiFID II. Of particular interest over the last month or two has been the growth of periodic auctions – notably the Cboe Periodic Auction, which has no doubt taken advantage of being the first mover in this space. The use of these intraday micro-auctions has grown from around $50 million a day in December 2017 to over $900 million a day just one week after the introduction of the double volume caps. These auctions are unusual in that they are considered lit, meeting pre-trade transparency requirements, whilst in practice being used as a source of dark liquidity due to the indicative nature of their price transparency.

Initially envisaged as an alternative to midpoint price referencing dark pools impacted by the DVCs, periodic auctions are now also proving an efficient way to re-house broker crossing flow, with participants able to take advantage of broker preferencing logic when having both sides of the trade. Whilst there has been recent focus on the growth of the Cboe Periodic Auction and the amount of broker preferenced flow, we must keep in mind that at the time of writing the Cboe Periodic Auction accounted for less than 2% of the overall lit market by value traded.

In addition to periodic auctions, we have seen continued growth in the use of conditional block orders, with both Turquoise Plato Block Discovery™ and Cboe LIS seeing significant growth into 2018. These orders allow participants to access above large-in-scale block liquidity, which is not impacted by the caps, without incurring significant opportunity cost: business can continue to be worked across the wider market whilst passively posting on multiple conditional venues, without fear of over-trading.

Last but not least, SI market maker liquidity is slowly starting to gain wider acceptance. Though the extent of its growth and its impact on market fragmentation are yet to be determined, we are becoming better at understanding and quantifying the benefits of SIs to certain segments of the execution process. In addition, bank SIs and central risk functions are proving a valuable source of liquidity where principal flow can be packaged to meet liquidity demands, whilst minimising cost of execution and market impact.

Impact on execution algos and TCA
Though the high-level impact on market microstructure and liquidity is being mitigated, these market innovations have added complexity to the execution process. Any inefficiencies in incorporating these sources of liquidity into the overall execution process are likely to impact overall performance. Moreover, if the evaluation process is not adapted to incorporate these innovations, it is very possible to get a false sense of comfort about the execution performance being achieved.

Scheduled strategies such as VWAP, TWAP, Participate and IS are very popular amongst traditional investors looking to execute against a benchmark, and quantitative funds seeking passive execution whilst minimising impact. These strategies have been very successful in taking large orders and executing them in smaller slices at different venues whilst minimising market impact. The market microstructure under MiFID I evolved to a large extent as a consequence of these strategies and helped them achieve excellent outcomes. Under MiFID I, fragmentation was for the most part across venues, and a smart order router usually helped navigate both lit and dark pool liquidity with significant efficiency. With MiFID II, however, liquidity has been fragmented across multiple dimensions – venue, size and potentially counterparty. The latter two are of particular concern to execution algos trading to a schedule. If these strategies are not optimised to access different types of liquidity appropriately, execution costs are likely to increase. For example, a VWAP strategy that previously sourced 20% of its liquidity from dark venues and 80% from lit markets and now sources 10% of its liquidity from alternative venues and 90% from lit markets is likely to see its costs increase by around 6%. However, if the different sources of liquidity are incorporated intelligently, it is possible that execution costs could decrease. In the same example, if the strategy were now able to source 30% of its liquidity from blocks, periodic auctions and intelligent interaction with SIs, execution costs could potentially decrease by over 6.5%. The end result could, however, be significantly impaired if the interaction with the SIs fails to take into account potential information leakage from a predictable bilateral transaction.
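The blended-cost arithmetic behind those percentages can be reproduced under simple assumptions. In the sketch below, the per-venue cost levels (10 basis points for lit, 4 bps for dark and for well-accessed blocks, auctions and SIs) are hypothetical figures chosen only so the output roughly matches the percentages quoted above; they are not taken from the authors’ model:

```python
# Illustrative blended execution cost for the VWAP example in the text.
# Per-venue costs (in bps) are assumptions chosen to roughly reproduce
# the article's ~6% / ~6.5% figures, not calibrated numbers.

def blended_cost_bps(mix):
    """mix: dict venue -> (share_of_volume, cost_bps); shares sum to 1."""
    return sum(weight * cost for weight, cost in mix.values())

before = blended_cost_bps({"lit": (0.80, 10.0), "dark": (0.20, 4.0)})                 # 8.8 bps
worse  = blended_cost_bps({"lit": (0.90, 10.0), "alt": (0.10, 4.0)})                  # 9.4 bps
better = blended_cost_bps({"lit": (0.70, 10.0), "blocks_auctions_si": (0.30, 4.0)})   # 8.2 bps

print(f"cheap liquidity halves:     {(worse - before) / before:+.1%}")   # ~ +6.8%
print(f"alternative share hits 30%: {(better - before) / before:+.1%}")  # ~ -6.8%
```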

To best achieve the execution objective in the new liquidity landscape, however, one needs to adapt and be flexible with trading instructions, so that all liquidity sources can be incorporated optimally to achieve the best outcome for the entire parent order. Tight bands around participation, and overly specific routing and venue selection instructions, could be particularly harmful, especially in a gradually evolving liquidity landscape.

In addition to the impact on execution strategies, most TCA tools are likely to include only lit market volume when calculating benchmarks. Even within the lit market, they are unlikely to include periodic auctions. Omitting this liquidity could significantly understate opportunity cost. For example, the Cboe Periodic Auction has a 4.6% market share in the FTSE MidCap names that are impacted by the double volume caps. Though some of this could be broker self-matches, we see that a significant proportion of this liquidity is accessible. PWP (participation-weighted price) benchmarks in particular should be enhanced to account for this liquidity, as they are the benchmarks most commonly used to determine opportunity cost.

The growth of block liquidity and conditional venues should also be closely monitored to ensure potential gaps in accessible liquidity are spotted in a timely manner. One approach to evaluating this opportunity cost would be to calculate the relevant PWP price and duration that includes all electronically accessible conditional block liquidity. Any consistent deviation of price and duration could be an indication of incurred opportunity cost.
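As a rough illustration of that approach, the sketch below computes a PWP price and duration from a consolidated tape of (price, quantity) prints. The tape format, the 10% participation rate and the decision not to pro-rate the final print are all simplifying assumptions, not a prescribed TCA methodology.

```python
# Sketch of a PWP (participation-weighted price) benchmark that includes
# block and periodic-auction prints. The tape format, the 10% participation
# rate and the unpro-rated final print are simplifying assumptions.

def pwp(tape, arrival_idx, order_qty, participation=0.10):
    """Walk the tape forward from arrival until market volume reaches
    order_qty / participation; return (VWAP over that span, prints used).

    Including accessible conditional-block liquidity in `tape` is what
    makes the benchmark reflect the true opportunity set."""
    target = order_qty / participation
    vol = notional = 0.0
    for i, (price, qty) in enumerate(tape[arrival_idx:]):
        vol += qty
        notional += price * qty
        if vol >= target:
            return notional / vol, i + 1
    # Tape exhausted before the target volume printed.
    return (notional / vol if vol else None), len(tape) - arrival_idx

# Toy tape of (price, quantity) prints; index 3 is a large block print.
tape = [(100.0, 5_000), (100.1, 4_000), (100.2, 6_000),
        (100.1, 50_000), (100.3, 5_000)]
price, n_prints = pwp(tape, arrival_idx=0, order_qty=5_000)
print(f"PWP-10%: {price:.3f} over {n_prints} prints")
```

A consistent gap between this inclusive PWP and the one produced by a lit-only tape is exactly the kind of unmeasured opportunity cost the article warns about.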

As briefly discussed in this article, potential complexity in the market microstructure can be skilfully navigated by ensuring appropriate metrics are used to evaluate performance. It is important to ensure the TCA framework is flexible enough to adapt the metrics and benchmarks as needed to identify areas of improvement in the execution process. This in turn will allow you to appropriately calibrate the execution strategy to meet your objectives.

©BestExecution 2018


Data management : The impact of MiFID II : Heather McKenzie

THE DATA CONUNDRUM.

Heather McKenzie looks at the impact of the new MiFID II reporting regime.

In January, the MiFID II regulations came into force, ushering in new reporting requirements and tests that will increase the amount of information available and reduce the use of dark pools and OTC trading.

The transparency regime is composed of two core obligations: pre-trade transparency, which is designed to provide market participants with a near real-time broadcast of basic data around firm quotes; and post-trade transparency, which aims to provide the same around executed trades.

For trade reporting, MiFID II/MiFIR requires near real-time broadcasts of trade data to support price formation and the operation of best execution obligations. Trades are reported via trade reporting venues, from where they are disseminated to the market.


Challenges exist regarding the exchange of information, and the availability of data between and within counterparties, required for timely and accurate reporting, according to the Association for Financial Markets in Europe (AFME). Its report, MiFID II/MiFIR post-trade reporting requirements: understanding bank and investor obligations, states that these challenges can arise both before and after the trade is reported.

Before a trade is reported, the time required to manually input trades may exceed the reporting timeline, and missing data may lead to late or erroneous reporting. Contextual factors, including counterparty status confirmation (SI/QIF/third-country venue for transparency, etc), deferral period, product scope within ‘traded on a trading venue’, counterparty location and waiver flags, can also cause problems.

After the trade has been reported, a transfer of information is still required. AFME says agreeing on a trade identifier could help with end-of-day reconciliations. Further data may also need to be extracted from the Approved Publication Arrangement (APA) trade report to meet transaction reporting obligations (e.g. the OTC post-trade indicator). Where a client is FIX-enabled, the transfer of data may be automated; voice may be the fall-back mechanism for data transfer.
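That identifier-based reconciliation can be sketched very simply. In the illustration below the record layout and field names are invented (no real APA schema is implied); the point is that an agreed trade identifier lets each side flag breaks mechanically at end of day.

```python
# Sketch of the end-of-day reconciliation AFME describes: matching internal
# bookings to APA trade reports on an agreed trade identifier. Field names
# are illustrative, not a real APA schema.

def reconcile(internal, apa_reports, key="trade_id"):
    """Pair records by trade identifier and flag breaks on either side."""
    internal_by_id = {t[key]: t for t in internal}
    apa_by_id = {t[key]: t for t in apa_reports}
    matched = internal_by_id.keys() & apa_by_id.keys()
    return {
        "matched": sorted(matched),
        "unreported": sorted(internal_by_id.keys() - matched),  # booked, no APA print
        "unknown": sorted(apa_by_id.keys() - matched),          # APA print, no booking
    }

internal = [{"trade_id": "T1", "qty": 100}, {"trade_id": "T2", "qty": 250}]
apa = [{"trade_id": "T1", "qty": 100}, {"trade_id": "T3", "qty": 50}]
print(reconcile(internal, apa))
```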

Overcoming hurdles
“The challenges of trade reporting are numerous, and the wide net of financial instruments captured by MiFIR/MiFID II means that many data fields are proving problematic,” says Virginie O’Shea, research director, Aite Group. “For example, securities identifiers such as ISINs have been inconsistently applied to securities across the globe and they are a particularly poor choice of identifier for the OTC derivatives markets; hence the very basics of identifying a security are complicated. There are a plethora of duplicate identifiers popping up because the mandatory fields for reporting these instruments are not sufficient and it makes price transparency opaque – thus negating the point of reporting. Firms are embroiled in these headaches and that is just one field of many.”

Brian Charlick, regulatory SME consultant at CGI, believes many banks are struggling with MiFID II reporting requirements. “Firms are compliant, but a lot of remediation work is being done. To maintain quality and control they are using many more manual work-arounds than they would like.”

As with so many challenges, the core of the problem is data. Charlick says data quality and consistency is “still not good”, which means firms must work hard to ensure their data is up to scratch. Years of working with legacy systems that have been amended over time, or with data that was never as robust as it is now required to be, are taking their toll. “Regulators require much more information to be stored than previously. Doing that in a consistent way is the problem for most banks. They are trying to link all their client data together, but one problem is that names change over the years; often the links to accounts aren’t good. Getting information out of old Cobol systems and siloed databases is creating more problems than were first realised.”


Getting data out of a multitude of systems in time to meet reporting obligations is a significant challenge, says O’Shea, especially as firms can no longer “throw the data over the fence” at a counterparty for delegated reporting. “Both buyside firms and sellside firms are liable for the quality of the data and they cannot over-report, which has been a typical strategy for some time under MiFID I. The approach of burying the regulators under data so they cannot see the wood for the trees is not a sensible strategy in this environment.”

Making connections
Most firms’ main strategy has been to employ “armies” of compliance staff to deal with the clean-up. Smaller firms, particularly those on the buyside, “hope they will pass under the radar for as long as possible,” she adds.

A good “first step” in solving trade reporting challenges is to take a federated approach to providing standardised data, says Charlick. Some firms have put in place applications for basic customer information that hold a series of links to other systems that maintain related data, such as finance and risk systems. Legal entity identifiers can be maintained in all systems and used to link them together.

Data dictionaries, where all the information is in the form of metadata (data that provides information about other data), can link data across different fields. If a customer is described in one system as 1, 2 or 3 and in another system is defined as A, B or C, the information can be linked to produce a standard view of that customer.
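As a toy illustration of that idea, the sketch below keeps the cross-system mapping as metadata and uses a legal entity identifier as the shared key. The system names, local codes and LEI values are all invented for the example.

```python
# Sketch of the data-dictionary idea above: a metadata mapping that links
# each system's local customer codes to one shared key (here an LEI).
# System names, codes and LEI values are invented for the illustration.

DICTIONARY = {
    "finance_sys": {"1": "LEI-AAAA", "2": "LEI-BBBB", "3": "LEI-CCCC"},
    "risk_sys":    {"A": "LEI-AAAA", "B": "LEI-BBBB", "C": "LEI-CCCC"},
}

def standard_view(records):
    """Group records from different systems under the shared identifier."""
    view = {}
    for system, local_code, payload in records:
        lei = DICTIONARY[system][local_code]
        view.setdefault(lei, []).append((system, payload))
    return view

records = [
    ("finance_sys", "1", {"balance": 1_000_000}),
    ("risk_sys", "A", {"exposure": 250_000}),
]
print(standard_view(records))  # both rows land under LEI-AAAA
```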


Charlick’s colleague, Jerry Norton, head of strategy, global financial services at CGI, says the linking idea is a good way of working around trade reporting challenges. “Banks don’t want yet another data warehouse,” he says. “Data warehouses didn’t deliver what they promised to. By federating data, firms can build virtual data warehouses, which means it doesn’t matter where the data is because firms will still be able to get to it.”

It’s one thing getting to the data and another being able to exploit it to a firm’s advantage. David Pagliaro, EMEA head of State Street’s data analytics arm, Global Exchange, says it is still early days for firms on trade reporting. “However, I think we can look at how we can do more across all our data sets and customers to derive insights from trade reporting,” he says.


Such insights can help firms better benchmark their performance. For example, by examining block order executions over a period, portfolio managers and traders can look at how successful trades were and what drove any success. “There’s an opportunity with the data to do a lot of benchmarking on elements such as the cost of trade and success of one trade versus another. This is good information that will feed back into a firm’s best execution requirements,” says Pagliaro.


Cian Ó Braonáin, global lead of Sapient Global Markets’ regulatory reporting practice, says more value can be derived from data if firms follow basic data management principles. Owners should be assigned to different data elements on a wide scale across an organisation. Each owner will be responsible for the quality of the data, whether it meets certain criteria for regulatory reporting, and how it can be reused. “This cannot just happen; it has to be an organised effort across a firm because there may be too many weak links in the chain to guarantee the value of data.”

Data lakes, which are simple internal trade repositories, can ensure that a ‘golden copy’ of data is used by everyone within the firm. Doing this should reduce reconciliation problems and ensure the right data is reported. “A data lake provides a use case that goes beyond regulatory reporting. It also allows firms to drive better revenues and reduce costs. Every piece of data will have a specific purpose and be available to add value to clients,” he says.

Pagliaro adds that given that regulatory requirements have increased the amount of data that must be stored and reported, firms should use that information to become more targeted in terms of what they offer to clients. “The investment industry is being forced to become more like consumer markets, where information is captured in a more systematic fashion. By capturing these new layers of information, firms can better plan and make much more informed decisions for their clients.”

©BestExecution 2018

