

HOW MANY EXCHANGES CAN THE WORLD HANDLE?

Roger Aitken

Among the many trading conferences I have had the fortune to attend over the years, the question has always arisen of how many stock and derivatives exchanges there would be in three, five or even ten years' time. Depending on who one canvassed, the answer was always different but tended to be fewer – not more, though the degree was often debated. Some pundits suggested there would be five or six global players in the final analysis.

My thoughts returned to this issue with news of Aquis Exchange, a proposed pan-European stock exchange established in October 2012 that has applied to the UK’s Financial Conduct Authority for regulatory approval as a Multilateral Trading Facility (‘MTF’).

For sure, Alasdair Haynes, CEO of Aquis Exchange and once in the running to head the London Stock Exchange (‘LSE’), is an astute operator and seasoned City veteran. Clearly the opening of this exchange’s doors to the BT Radianz Cloud community provides it with an opportunity to gain rapid access to the widest possible range of market participants and traders. It is also touted as extending the benefits of its subscription pricing model to all professional investors.

However, have we not been here before? Aquis Exchange now joins over 100 trading venues that are already part of the BT Radianz Cloud community and one wonders how much longer that number can be maintained. To its credit Aquis is seeking to “revolutionize” the European trading landscape by introducing subscription pricing and innovative order types. Others have tried before and not always succeeded.

The EU’s Markets in Financial Instruments Directive (‘MiFID’) led to a plethora of trading venues across Europe in its aftermath and resulted in a significant contraction in execution tariffs for trading equities. This was welcomed by banks and broking houses, who felt they were frankly being ‘milked’ by incumbent exchanges in the shape of the LSE and Deutsche Boerse, amongst others.

At one point after ‘MiFID I’ came into play there were nineteen separate trading venues for UK equities, while Europe became home to 27 exchanges and 19 MTFs. Clearly it spurred competition, but such fragmentation was not sustainable long term. And so it proved. Mondo Visione, a firm monitoring the share price performances of quoted exchanges globally, today still analyses 25 such entities in its FTSE Mondo Visione Exchanges Index.

In the intervening years new players – MTFs and dark pools – have either closed down or been acquired by stronger operators. With the average trade execution cost for trading equities in Europe having plummeted from 2 basis points (bps) at MiFID’s outset to 0.2bps now, the commercial model for many was unsustainable. The upshot? Many new entrants operated at a loss, with just a few at breakeven and fewer still making a profit.

Exchange venue fragmentation subsequently gave way to industry reconsolidation. And even the big operators – the LSE included, via its LCH.Clearnet Group acquisition – have sought to provide their customers with value-added services on the post-trade side (clearing and settlement).

Equally, the current position – in which over 90% of equity trading in each individual European country takes place on just two exchanges – might not be viewed as so rosy either. Aquis’ aim, like that of rival exchanges/MTFs such as Boerse Berlin’s Equiduct with its innovative market model, to bring fresh competition into the marketplace and to lower the trading costs maintained by the existing duopoly, is admirable.

Price and choice are one thing, but fundamentally it will all come down to liquidity, speed of execution, added offerings across the trade lifecycle and the most efficient model – vertical or otherwise. Europe probably needs more than just two exchanges, but probably fewer than twenty.

Roger Aitken



WHAT DO WE DO ABOUT THE DARK?

It seems that the MiFID review has been with us forever, but the debates and discussions continue. In a recent twist, the London Stock Exchange Group, nine trade bodies and four of the world’s largest asset managers have joined forces to try to change some of the key provisions. At the heart are the strict rules on dark pool trading. They are proposing a price improvement rule, which requires trades to be completed at a bid or offer price better than that available in the wider public market, ensuring that dark pools offer a benefit over displayed markets. Such a rule has been adopted by Canada and Australia, and other markets are exploring their options. However, critics argue it drains liquidity out of the system, and both countries have seen volumes drop. Is this ultimately a good thing for markets, and is there a better way to curb some of the excesses of trading in the dark?

Lynn Strongin Dodds




TCA – WHAT’S IT FOR?


Darren Toulson, head of research, LiquidMetrix

We’re often asked: beyond a regulatory duty, what’s the purpose of TCA?

Done correctly, TCA can tell you many things about your current execution performance, including why your performance is good or bad and what you can do to improve it. Done poorly, TCA is something you run once a quarter, file away and forget about.

So how can you best use TCA, what might your TCA be telling you and what kind of questions should you be asking of it?

TCA performance measures: Implementation Shortfall and VWAP – Are they useful?

The summary sections of most TCA reports are usually presented as a set of high-level performance metrics showing how good or bad your order execution was. Results are typically presented with your flow broken down by broker, region, order size and algo; the purpose being to present your overall performance as well as highlighting specific areas where you may be under- or over-performing.

Which performance metrics are generally used?

There are two key families of TCA metrics that most people focus on. VWAP measures compare the price you achieved versus the market price over the period of your order. Implementation Shortfall (IS) examines the difference between the market price when you decided to trade (or started trading) versus the average price you actually achieved when completely filling your order.

Within these two broad definitions there are many different ways to calculate the measures. For instance, the benchmarks used may be based on VWAP for the whole Trading Day, Interval VWAP from first fill to last fill, VWAP including or excluding certain venues or trade types, VWAP with a limit price applied or a ‘PVWAP’ simulating what price you would achieve if you had participated at a fixed percentage of market volume until your entire order was filled.
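As a back-of-envelope illustration of the two families of measures, the arithmetic can be sketched as below. This is a hypothetical sketch: the fills, market trades and single decision price are illustrative assumptions, not any provider's actual methodology.

```python
# Hypothetical sketch of the two benchmark families discussed above.
# The fills, market trades and decision price are illustrative assumptions.

def vwap(trades):
    """Volume-weighted average price over (price, volume) pairs."""
    total_volume = sum(v for _, v in trades)
    return sum(p * v for p, v in trades) / total_volume

def vwap_slippage_bps(fills, market_trades, side="buy"):
    """Our average price vs interval VWAP, in bps (positive = underperformed)."""
    benchmark = vwap(market_trades)  # interval VWAP of the whole market
    achieved = vwap(fills)           # our own average execution price
    sign = 1 if side == "buy" else -1
    return sign * (achieved - benchmark) / benchmark * 10_000

def implementation_shortfall_bps(fills, decision_price, side="buy"):
    """Average fill price vs the price when the decision to trade was made."""
    achieved = vwap(fills)
    sign = 1 if side == "buy" else -1
    return sign * (achieved - decision_price) / decision_price * 10_000

# Example: a buy order filled in three slices against a market interval.
fills = [(100.02, 500), (100.05, 300), (100.10, 200)]
market = [(100.00, 5_000), (100.05, 4_000), (100.12, 6_000)]
print(round(vwap_slippage_bps(fills, market), 2))            # -1.63 (beat VWAP)
print(round(implementation_shortfall_bps(fills, 100.00), 2)) # 4.5
```

The variants listed above change only the benchmark leg – which trades enter the market VWAP (whole day, first fill to last fill, venue or trade-type filters) – not the basic arithmetic.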

The level of complexity and customisation that has been required over the years for the top-level IS and VWAP figures hints at the fact that, for TCA to be really useful to a wide range of different participants and trading styles, it is very hard to tell the whole story using one or two simple measures.

There’s nothing wrong with IS and VWAP. IS tells you how close your execution price was to the price in the market at the time you decided to trade; VWAP measures how well your broker/trader has managed to time the executions such that they capture market spread and trade at a constant participation rate. These are useful things to know.

But what it does suggest is that you should see any ‘top level’ TCA measures as exactly that: numbers to be explained by other statistics or breakdowns so you know why performance is good or bad.

Moving from simple TCA performance reporting to ‘Why?’

Let’s consider one of the more popular, top level TCA measures: Interval VWAP. Say your TCA report tells you that for a certain batch of orders, you underperformed Interval VWAP by 2.4 BPS with a standard deviation of 8.6 BPS.

The obvious question is why? To answer this question let’s consider what you have to get right in a trading strategy to match or outperform VWAP.

First, you need to trade consistently at the market rate throughout the order duration. This will reduce the standard deviation. But if you want to beat average VWAP over all of your orders, you need to capture the bid/offer spread as often as possible on individual fills; otherwise, on average, you may expect to miss VWAP by half the bid/offer spread. To capture spread you will need to rest on various lit and dark venues rather than aggressively trading in the market. But some venues you rest on may not always provide as much spread capture as desired, due to HFT participants picking you off at bad times or simple adverse selection.
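Spread capture is usually quoted per fill. As a minimal sketch (the quotes and fills are illustrative, not a standard library): for a buy, resting at the bid captures the whole spread, a mid-point fill captures half, and crossing the spread captures none.

```python
# Hypothetical sketch of per-fill spread capture, as discussed above.
# 100% = filled at our own touch, 50% = filled at mid, 0% = crossed the spread.

def spread_capture(fill_price, bid, offer, side="buy"):
    """Fraction of the bid/offer spread captured by one fill."""
    spread = offer - bid
    if side == "buy":
        return (offer - fill_price) / spread
    return (fill_price - bid) / spread

# A buy resting passively at the bid captures the full spread...
print(spread_capture(100.00, bid=100.00, offer=100.04))  # 1.0
# ...a mid-point dark-pool fill captures half of it...
print(spread_capture(100.02, bid=100.00, offer=100.04))  # ~0.5
# ...and lifting the offer captures none.
print(spread_capture(100.04, bid=100.00, offer=100.04))  # 0.0
```

Averaging this quantity over thousands of fills, per venue, is what produces the spread-capture comparisons discussed later in the article.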

Care must be taken when you’re unable to fill passively: when aggressing the lit market, it’s easy to go too deep into the order book and pay prices away from the short-term TWAM (Time Weighted Average Mid). Also, when accessing markets in different locations you need to be careful you don’t get front-run by HFT traders cancelling or trading orders in front of you.

In other words, VWAP performance is a combination of timing, spread capture, adverse selection, short-term impact/reversion, venue selection and HFT gaming. In the past, most TCA analysis glossed over this kind of low-level detail and presented just the top-level numbers. The reasons for this may have been a combination of a lack of clean high-frequency client and market data, relatively less powerful computing options and perhaps the fact that, in the past, markets and algos offered fewer choices. But in today’s markets, there really is no reason not to dig deeper and look for the ‘why’ behind the top-level numbers.

The TCA ‘Pyramid’

We view TCA as not just a set of top-level numbers, but rather a pyramid of different metrics with each level helping to explain the performance of the level above. (See Figure 1.)

Figure 1: The TCA pyramid.

At the top of the pyramid are IS and VWAP. As we go down the pyramid we increasingly look at more and more granular aspects of trading relating to fills, venues and even the effectiveness of order submission strategies to counteract HFT gaming at the millisecond level.

Most people looking at TCA reports, especially on the buyside, are unlikely to have direct responsibility or control over many of the micro-strategies being used to execute their orders.

However, it’s important to realise that overall execution performance will ultimately depend on how well others are implementing those choices on your behalf. Brokers, venues and algos that perform well at a micro level should, on average over a period of time, mean better top-level performance. Mistakes made at the low level through bad venue selection, bad algo selection or bad trading connections/SORs will ‘bleed’ into overall performance figures. Individual mistakes at the fill level are usually trivial, but repeated over many executions they can lead to death by a thousand cuts.

Another advantage of drilling deeper is to separate out luck from deserved good performance. The fact that Broker A’s VWAP algorithm is better than Broker B’s for a couple of hundred orders in one quarter may well be fairly random and the ranking may reverse next quarter. But if you can see that over thousands of fills, Broker A had better spread capture or that Broker B was accessing a ‘toxic’ Dark Pool (illustrated by sharp price impacts and bad TWAM) then the fact that Broker A is better is more likely due to skill than luck.

Drilling into this level of detail also allows buysides to get more comfortable that their broker is acting strictly in their interest. Seeing breakdowns of which venues a broker routes most orders to can illustrate any venue biases the broker might have when compared to the overall market. For instance, if a broker sends a relatively large proportion of flow to its own broker crossing network (BCN), this is fine as long as it’s clear that the performance of the flow sent to that venue is at least as good as flow routed elsewhere. If a broker posts a lot of flow passively to an MTF with a rebate scheme, then again, this is fine as long as you can see that the strategy is effective at both the fill and order levels.

Some practical examples

We will show two practical examples of the TCA pyramid. The first is an example of using micro metrics to explain macro performance (see Figure 2) and the second of using micro metrics to identify good/bad routing choices.

Figure 2: Breakdown of a set of day-long VWAP orders.

The view above shows a breakdown of a set of day-long VWAP orders executed via a broker. The basic top-level TCA metrics are that the orders missed VWAP by 2.62BPS and had an Implementation Shortfall versus market opening price of 35.95BPS. So performance is OK but not great.

The rest of the view breaks down these top-level numbers into factors that explain the performance:


The price chart to the left shows market prices (in the same direction as each of the orders) before, during and after the order. The key point is the large step in prices between the open and arrival time of orders: over 20BPS. So, most of the 35.95BPS of IS cost is caused by orders arriving ‘late’ to the trading desk. There was also a small cost (2.12BPS) for the delay between orders arriving at the trading desk and the first fill.

Spread Capture for fills was approximately 50%, so the 2.62 BPS ‘miss’ for VWAP wasn’t due to not capturing enough spread but more likely a result of bad timing. We can also see from the Fill Rate graph that the orders are somewhat rushed towards the end of the order (the fill rate curve is not perfectly uniform and rises at the end). Generally this will not help VWAP.

There is clear evidence of mean reversion occurring after the last fill – about 4.5 BPS within fifteen minutes after the last fill and a similar amount again by day close.

Price momentum the day before the orders are received is actually slightly negative, so the relatively poor performance is most likely not due to having to chase a rising/falling stock.

This only scratches the surface but does show how an overall VWAP or IS number is more useful when decomposed down to reasons why the number is good or bad.

Further down the pyramid, at fill level, the figure below gives an idea of the type of analysis that can be done at this level of detail.

Figures 3 & 4: Spread capture for resting orders in two dark pools.

The two histograms below (Figs. 3 & 4) show spread capture for resting orders in two dark pools. Pool B is a near perfect EBBO mid-point matching pool and almost all fills happen at exactly 50% spread capture. Pool A, on the other hand, shows strong adverse selection; many of our fills actually happen at EBBO touch rather than mid and overall we only get about 35% spread capture.

Figure 5, below, shows high-frequency price movements just before and after fills happening on three different pools. Pool C is excellent, with very little price movement on the main market before or after our trades, so we can be confident of trading at a neutral time. In contrast, our fills on Pools A and B appear to happen mainly when prices are moving, so we can see adverse impact followed by mean reversion.

Figure 5: Price movements around fills on three pools.

This probably means the other side of our trade is picking this time to trade to their advantage and our disadvantage: we are being gamed.

The point of this type of low-level analysis is to explain why top-level numbers are good or bad. A bad spread capture on a dark pool will lower average spread capture and mean we may underperform VWAP.

Mean reversion following passive fills means we will be trading away from TWAM and underperform VWAP. Price impact following aggressive fills means we are probably trading too fast and pushing prices and so will underperform IS. And so on.
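These reversion and impact effects can be measured directly from each fill plus a mid-price snapshot taken some interval afterwards. A hypothetical sketch (the fill price, later mid and interval are illustrative assumptions):

```python
# Hypothetical sketch of a post-fill reversion/impact measure in bps.
# In practice 'mid_after' comes from high-frequency market data sampled
# at a fixed interval (e.g. five minutes) after each fill.

def post_fill_move_bps(fill_price, mid_after, side="buy"):
    """Signed market move after a fill, in basis points.

    For a buy: negative means the mid fell back after we traded
    (mean reversion against us); positive means momentum in our favour.
    """
    sign = 1 if side == "buy" else -1
    return sign * (mid_after - fill_price) / fill_price * 10_000

# Passive buy filled at 100.00; five minutes later the mid has reverted down:
print(round(post_fill_move_bps(100.00, mid_after=99.96), 1))  # -4.0
```

Aggregated per venue, consistently negative values after passive fills are the signature of the adverse selection and gaming described above.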

Conclusion

TCA shouldn’t be something to do once a quarter and file away. Done right it can tell you not just how well you or your broker are performing but also why and how to improve.

With recent press and headlines about toxic dark pools, high frequency traders and flash crashes, there is understandable concern that these things are affecting the overall performance of buyside funds. TCA can be the means to prove or disprove to what extent any of this might be happening, what impact it has and even suggest where to look for improvements.

©Best Execution 2013



MULTI-ASSET TCA.


Yossi Brandes and Ian Domowitz, Managing Directors at ITG, ask: what can multi-asset TCA learn from the equity experience?


Lessons learned and things to forget

The movement of transaction cost analysis (TCA) into futures, FX, and fixed income instruments is largely powered by experience with equities. The experience is limited. Five years ago, only 28 percent of institutional desks employed TCA on a daily basis, although the number is now 68 percent and growing.(1) As late as 2010, the main use of equity TCA was for compliance purposes. One year later, survey results across 408 institutions globally suggest that two-thirds of institutions place the responsibility for trading cost performance and improvement squarely with the desk.(2)

TCA faces an obvious challenge – to proactively lower transaction costs by helping managers and traders construct portfolios and leverage implementation strategies using imperfect information. Equity TCA is no longer relegated to forensics. Nor is it about the predictability of cost for any particular order.

TCA is about insulating trading and portfolio performance from changes in market conditions, by providing information which leads to intelligent strategy choices and the ability to change them. This is the first lesson, and it applies to all tradable financial instruments.

A second lesson comes from survey results. As early as 2008, 40 percent of survey participants suggested that alpha is lost primarily through trading costs, with timing (a major component of costs) cited by 14 percent of respondents.(3) TCA is all about investment returns, not about achieving an average price. This lesson was taught as early as 1981.(4) Some examples might help make the point in the current environment.

We examine the relative return of approximately 160 emerging market funds in Figure 1. Data are arranged by quartile, with the relevant return along the vertical axis in the dark bars.

Figure 1: Relative returns and transaction costs of emerging market funds, by quartile.

The average return in the first quartile is 5.56 percent, while the average in the second quartile is 1.81 percent. The average cost for an order in our emerging market universe (the lighter bars) is 74 basis points (bps). The first quartile of funds loses 13 percent of its return to transaction costs, while the second quartile loses 41 percent. In a high-volatility environment, 26 percent of average return is lost to implementation for the best funds, while a whopping 79% of the relative return disappears in the second quartile.(5) A mere 11 bps of return separate the best fund in the second quartile from the worst fund in the first. A modest improvement in the implementation process moves a fund from the second to the first quartile.
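The 13 and 41 percent figures follow directly from the averages quoted; a back-of-envelope check (using only the numbers given in the text):

```python
# Back-of-envelope check of the figures quoted above: an average cost of
# 74 bps against average quartile returns of 5.56% and 1.81%.

avg_cost_bps = 74
q1_return_pct, q2_return_pct = 5.56, 1.81

def share_of_return_lost(cost_bps, return_pct):
    """Fraction of relative return consumed by transaction costs."""
    return (cost_bps / 100) / return_pct  # convert bps to percent, then divide

print(f"{share_of_return_lost(avg_cost_bps, q1_return_pct):.0%}")  # 13%
print(f"{share_of_return_lost(avg_cost_bps, q2_return_pct):.0%}")  # 41%
```

The same arithmetic with the 142 bps high-volatility cost from footnote 5 reproduces the roughly one-quarter and three-quarters figures for that scenario.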

Management

Other examples come directly from buy-side traders and portfolio managers. Robeco Investment Management reports a saving of 20 bps by combining TCA with portfolio management style analysis.(6) Principal Global Investors make a strong case that TCA should be part of pre-trade, post-trade, and trade monitoring, with numbers illustrating an 86 bps return to the exercise.(7)

Another way of phrasing this lesson from equity TCA is: it’s all about the money, and potential savings are large. That message translates directly into other asset classes. An examination of FX trading, linked to global equity transactions, suggests that poor execution can cost the average large institution roughly $40 million per year, a three-fold increase over implementation which uses TCA to identify liquidity supply in the market.(8)

There are other common lessons, and things to be unlearned as TCA moves to disparate financial instruments. We begin with market structure.

Market structure

The biggest single change in equity markets over the last fifteen years is the empowerment of the buy-side trading desk. Desks are expected to add value, as opposed to simply facilitating the implementation of investment choices. Traders are looking for control, and the motivation is efficient and inexpensive execution. One outcome is the demand for TCA services. Interest extends beyond forensics, to tools permitting a proactive approach to the cost issue.

Control requires accurate measurement. This lesson applies directly to other asset classes. Interest in futures TCA is coming from trading desks, which link futures and equities in their implementation strategies. As interest in control increases, so does the demand for TCA. Consolidation of trading desks is part of the story in fixed income, following a separation of trading and portfolio management. In FX, there is a movement away from custodial execution to proactive management of currency trading. Some interesting tales revolving around such activity are told by Eastspring Investments Singapore and AXA Investment Managers.(9)

Buy-side control cannot be separated from the movement to electronic market structure, which permits self-directed trading and presents headaches in the form of fragmentation across time, destinations, and geographies. The headaches also are a driver of TCA.

The electronic revolution in equities is well-established. Futures trading received a large electronic boost during the late ‘80s and early ‘90s. During that period, 28 new derivatives exchanges were built around the world; 26 were fully electronic. This movie is coming to a market near you, with a recent round of electronic fixed income venues, and a strong move in FX to electronic trading.(10)

The benefits and drawbacks of electronic market structure drive TCA and the nature of its tools. This lesson is becoming more important over time. A survey by Greenwich Associates suggests that 25 to 30 percent of institutional clients in the UK and Continental Europe already use some form of TCA for fixed income. Fixed income TCA is prevalent in only 17 percent of US institutions, in a country which (perhaps surprisingly) has historically lagged with respect to electronic markets. On the FX side, among the world’s institutions which manage greater than $20 billion in assets, 41 percent of those that employ TCA use it in their FX investment process.(11) This development is linked in our minds to a 65 percent utilisation rate, by volume, of electronic execution in the FX market.

Data

Sophisticated analytics are great, but comprehensive data are the life blood of TCA. The equities world has all but forgotten about data availability issues. Need market data on some small stock in a tiny market? No problem. Need buy-side transaction data for a granular analysis of strategies in Latin America? Just bypass the order management system and go directly to the network FIX connections or execution management system. Good data are critical, but all you need to do is pay up for them, in cash or development resources.

Except for some exchange-traded instruments, forget the second part of this lesson in the context of other asset classes, at least for the near future.

Publication of indicative prices in FX is routine, but access to tradable quotes with good time stamps is market-by-market, bank-by-bank. Our understanding from the field is that most FX TCA providers rely on a combination of indicative quotes and one, possibly two, providers of actionable information. We have spent considerable time and resources to compile a tick-based order book from twelve major banks and five ECNs, and can say that it was a heavy (and expensive) lift. A TCA provider must virtually replicate an exchange in terms of market data in order to perform the simplest of tasks.

The experience with time-stamped buy-side data has not been any easier. Despite the construction of custom extraction tools, the data are, well…a mess, requiring a great deal of work. Custodial data are easier, of course, but lack a variety of information, not the least of which is the time when the executions occur. TCA is about timing and tagging of information, and custodial data leave much to be desired.

Fixed income is marginally better, and to the extent that it is, there have been tangible advantages to the market. The establishment of the US TRACE system in 2002 brought transparency into the corporate bond market. In 2010, trade data on debt issued by federal government agencies were incorporated. TRACE appears to reduce investors’ costs significantly, supporting a transparent pricing regime.(12) In Europe, regulatory mandates currently limit transparency rules to instruments admitted to trading on regulated exchanges. They do not apply to OTC markets, where the majority of fixed income trading takes place. Similar comments apply to other regulatory jurisdictions around the world. The situation appears worse when one considers the large number of bonds which can be characterised as off-the-run. TCA for fixed income will rely in part, as dealers do, on prices generated from a matrix.

The advance of electronic trading in FX and fixed income, as well as the “futurisation of swaps” envisioned as part of the outcome of swaps execution facilities, will ameliorate the situation. Execution management systems are becoming multi-asset capable, which enables granular data with good time-stamps. We look forward to those developments.

Usage and tools

Compliance is the easiest market for TCA vendors. Equity TCA is learning a difficult lesson over time, as the focus shifts to trading and portfolio performance: post-trade TCA is a performance attribution exercise, in which a variety of factors require isolation to judge a particular implementation strategy and pinpoint the “own” impact of an order. An entire book could be written on this topic.(13) The analogy is portfolio performance attribution, where a goal is to isolate the manager’s contribution to returns.

This lesson must be quickly absorbed as one considers other asset classes. Compliance remains an issue, given changes in regulatory mandates for the OTC markets. The focus on performance is virtually immediate, however. That focus took years to develop in equities. In contrast, based on a survey by Greenwich Associates commissioned by ITG, 92 percent of respondents cite investment process improvement for FX as a driver of TCA adoption.

The drive to actionable measurement is manifested in part by a virtual obsession with child algorithmic trading orders and millisecond granularity of data. The movement also follows from a narrowing of the definition of ‘best execution’ as pursued by buy-side trading desks, in light of practical circumstances, market structure evolution, and uncertainty as to what best execution really means. At the level of the trading desk, pre-trade and post-trade TCA are now employed for analysis of order routing, venue-type selection, and strategy performance. The effective goal of such work is the verification of best price in the face of liquidity and momentum conditions.

Best execution reverts to best price at the level of the trader. Best execution may not immediately translate to best price for the investment process, represented more clearly through the parent order. The difference generates the next lesson.

TCA cannot afford to ignore upstream technology in the form of order management systems, execution management systems, and managed financial networks. For example, Greenwich Associates isolates a critique related to the trade/order distinction, consisting of a failure of systems integration relevant to data flows.(14) The issue is a lack of linkage between EMS and OMS systems. The OMS provides order-level data. The EMS and the managed financial network deliver placement and strategy information. Data from the network or EMS often constitute child orders relative to the parent residing in the OMS. How does one learn about the parent by studying only the children? They are only reflections of strategy.

As TCA moves away from equities, and as market structure evolves, this issue is binding. Obsession over fragmented child orders must be reconciled without glossing over the portfolio of trading strategies used to execute the parent. Self-directed trading in FX, futures, and fixed income is spawning a new generation of EMS and revisions to FIX protocols for the network. Novel extraction utilities are yet to be built out to their fullest possible extent. That responsibility rests with TCA providers.

Pre-trade analysis and post-trade reporting

The separation of pre-trade and post-trade analysis has a long history, with the former based on market data and the latter combining such data with buy-side transactions. This separation is incompatible with a lesson reflecting the drive towards performance.

Actionable information requires the transformation of TCA reporting into decision support tools. This may seem obvious, but examination of many reporting formats might convince you otherwise.

A different perspective on TCA takes shape if one abandons the dividing line between post-trade historical analysis and pre-trade strategy selection and monitoring. Moving from reports to decision support requires the delivery of information from post-trade performance data to pre-trade tools, based on the blotter’s orders and tuned to decision making. An order’s characteristics are enhanced by performance information given current market conditions. This permits broker evaluation, strategy analysis, the determination of optimal broker packages, and the like. From the perspective of post-trade TCA, we move from T+1 to real time. Viewed from the pre-trade side, market data are complemented by information based on execution history.

Is anybody listening?

In their 2011 study of TCA, Greenwich Associates present what they term, “a thoughtful and powerful critique of TCA.” They present this critique using quotations from survey respondents, of which the following stands out: “More non-traders are looking at [TCA] and this is a very bad thing.” There is a lesson here, but first, a question: bad for whom, and why are TCA results not being explained by those who trade to those who don’t? This ‘thoughtful’ critique disparages a class of real and concerned consumers of TCA information, including buy-side boards of directors.

If one takes the quote as a lesson, it is one of those to forget. It does provide a stepping stone to an important point for multi-asset TCA.

Develop a narrative, avoid a blizzard of numbers, and use language appropriate to the audience. Things are bad enough in equities, with hundreds of ‘named’ benchmarks and jargon to fill the gaps between pages of numerical information and graphs. We are entering worlds in which ‘duration’ joins price in fixed income benchmarking, ‘pips’ become delimiters, USD.TRY is a noun, and ‘rolls’ represent behaviour. Equity TCA has let a thousand flowers bloom with respect to presentation and terminology, without pruning them in such a way as to be widely understood. Lack of understanding and communication undermines any move into other asset classes.

TCA providers must communicate better, but they need help from their constituents. To this end, we applaud the efforts of FIX Protocol groups, led by Mike Caffi of State Street Global Advisors and Mike Napper of Credit Suisse. They recognise concerns that inconsistency in TCA terminology across providers presents practitioners with challenges. A consolidated glossary is being developed, and as of this writing, groups devoted to TCA for FX, for fixed income, and for futures/listed options are being created.

This last lesson may be the most important of all. ■


Footnotes:

1. Tabb Equity Report for 2012/2013, The Tabb Group.
2. "TCA: Taking the Next Step," Greenwich Associates, 2011, which also discusses the 2010 results.
3. "Imperfect Knowledge: International Perspective on Transaction Costs," Tabb Group, 2008.
4. J. Treynor, "What Does It Take to Win the Trading Game?" Financial Analysts Journal, 1981.
5. Average cost in this scenario is 142 bps.
6. "Saving 20 bps and a Lot of Time," GlobalTrading, Q3 2013.
7. "Integrating TCA," GlobalTrading, Q1 2013.
8. "How Big is 'Big?' Some Evidence from Aggregate Trading Costs in the FX Market," this volume.
9. See "Transforming FX Through TCA," GlobalTrading, Q3 2013.
10. Statistics and references are provided in "How Big is 'Big?' Some Evidence from Aggregate Trading Costs in the FX Market," this volume.
11. "Transaction Cost Analysis: Into FX and Beyond," Greenwich Associates, Q3 2012.
12. "Corporate Bond Market Transparency and Transaction Costs," by Amy Edwards, Lawrence Harris, and Michael Piwowar, Journal of Finance, June 2007; "Market Transparency, Liquidity Externalities, and Institutional Trading Costs in Corporate Bonds," by Hendrik Bessembinder, William Maxwell, and Kumar Venkataraman, Journal of Financial Economics, November 2006.
13. One does exist, although admittedly it does not cover very recent developments. See "Below the Waterline," ITG, 2009, available upon request.
14. "TCA: Taking the Next Step," Greenwich Associates, 2011.
© 2013 Investment Technology Group, Inc. All rights reserved. Not to be reproduced or retransmitted without permission. 91613-17027

These materials are for informational purposes only, and are not intended to be used for trading or investment purposes or as an offer to sell or the solicitation of an offer to buy any security or financial product. The information contained herein has been taken from trade and statistical services and other sources we deem reliable but we do not represent that such information is accurate or complete and it should not be relied upon as such. No guarantee or warranty is made as to the reasonableness of the assumptions or the accuracy of the models or market data used by ITG or the actual results that may be achieved. These materials do not provide any form of advice (investment, tax or legal). ITG Inc is not a registered investment adviser and does not provide investment advice or recommendations to buy or sell securities, to hire any investment adviser or to pursue any investment or trading strategy. The positions taken in this document reflect the judgment of the individual author(s) and are not necessarily those of ITG.

©Best Execution 2013


TCA : TRACKING THE CURRENT

TRACKING THE CURRENT.

Saoirse Kennedy, Analyst Consultant at GreySpark Partners explains how Transaction Cost Analysis is following the route of trading electronification in its advance across asset classes.

Post-financial crisis regulations and buyside client expectations are central forces in this shifting landscape, driving a change in emphasis from post-trade TCA to pre-trade and real-time TCA.

TCA is emerging as one of the next great talking points in the capital markets technology space, following in the footsteps of low-latency technologies, digital investment banking and swap execution facilities. This is driven largely by the buyside, and highlights the significance of cost control in a post-crisis financial industry that is just beginning to grasp the reality of its increasingly frugal existence. TCA should be seen as a renewed competitive differentiator, making investment in a robust cross-asset tool essential in today's climate. Next-generation TCA solutions are rapidly becoming critical in a marketplace where increased regulatory demands can negatively impact returns.

The current TCA landscape is no longer characterised by an equities market using post-trade TCA solutions, but is one that incorporates all aspects of the trading lifecycle, across multiple asset classes. This refocus is due, in part, to a general trend toward electronification, itself catalysed by the recent spate of regulations shown in Figure 1.

[Figure 1]

Regulators and clients both demand better cost transparency

Financial market regulations since the 2008 financial crisis, particularly those regarding Best Execution and increased transparency in trade decision-making, have given TCA offerings new impetus. Buyside firms are now pressed to justify risky strategies in asset classes beyond equities, with this pressure coming from both regulators and clients. These firms look to TCA to address this need; having multiple tools to measure Best Execution is considered prudent. TCA helps the buyside-sellside relationship, eroded during the crisis, to regain trust, particularly through the increasingly available pre-trade and real-time offerings, which grant greater cost transparency.

Asset class coverage is increasing as electronification progresses

Traditionally an equities-only phenomenon, TCA solutions have over the past three years begun to seep into the FX space. This trend is expected to continue into other asset classes as both the buyside and sellside demand increasingly granular analysis and decision-making techniques when making investment choices. This expansion of TCA across asset classes is following the path set out by trading electronification (see Figure 2). We observe a gradual movement into FX, fixed income and futures businesses, with derivatives bringing up the rear due to the complexity of analysing costs for these relatively complex products. As reporting requirements across asset classes increase, there will be a deluge of new data for use in cost analysis. Furthermore, FX mispricing scandals, such as State Street overcharging pension funds, have triggered increased demand for TCA in a market where regulators are yet to insist on centralised trade reporting.

[Figure 2]

The FX market is thus the latest target for TCA solutions. Buyside firms are acknowledging the costs of trading in FX markets and the benefits a TCA solution can bring. There are, however, some limitations. The first is a general lack of transparency in the D2C space, meaning that banks do not necessarily share the information required to make a proper assessment of transaction cost. Secondly, the large, global, 24/7 nature of the FX market makes TCA more complex than for equities. A comprehensive FX TCA offering should bring a lot more to the table than a rebranded equities product. It should give due consideration to the nuances of the FX market, handling the requirements for measuring the cost of both liquid and illiquid currencies, with access to data sources relevant to the currency being traded.

TCA techniques are encompassing the entire trading lifecycle

There is acknowledgement that TCA improves trading outcomes. However, there is also a general sentiment that TCA tools could have a greater impact on those outcomes. This is where real-time and pre-trade TCA come to the fore. The impact of electronification has aided TCA's push into the pre-trade, real-time space. As a post-trade function, TCA provides an analysis of the cost of executing a trade after the event, giving a measure of a trade's efficiency. Pre-trade TCA is actually a misnomer, however, as it provides a transaction cost estimation rather than the analysis of a past event. Pre-trade TCA offers an additional opportunity for risk assessment of pricing prior to trading, based on the assumption that the implicit costs of a trade can be estimated from those of past trades. This is not a foolproof approach, but it does provide a good starting point for improved investment decision-making. It permits a more comprehensive analysis prior to trading and an additional input to algorithmic trade decisions. Pre-trade TCA essentially aids smoother electronic execution by providing this additional input to the trading decision.

Understanding the pre-trade TCA process gives a grounding upon which to conceptualise real-time TCA. It also allows a view of how market risks are being mitigated, giving the buyside greater confidence in its trading strategies.

The pre-trade TCA process hinges on the following:

• Pre-trade data analysis – pre-trade analytics provide instrument-specific, market-related and trade-specific information that characterises and assesses the difficulty involved in a proposed trade. From this, the market impact and opportunity costs of a trade are assessed, with trading strategy adjustments made for onerous trades.

• Cost and risk estimation – consideration is given to the implicit costs of a trade, which are influenced by the trading strategy and can, to a certain degree, be controlled. Central to these estimations is an understanding that cost estimates include a risk parameter, and both vary depending on the implementation strategy. This estimation allows for an assessment of alternative strategies.

• Trading strategy optimisation – optimisation should find an appropriate trade-off between cost and risk, managing the conflicting outcomes of market impact and timing risk. This can, generally speaking, be achieved by minimising the cost subject to a specific level of risk, balancing the trade-off between cost and risk, or maximising the probability of price improvement.
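A toy sketch of these three steps might look as follows. The square-root impact law, the 0.3 coefficient, the candidate participation rates and the risk-aversion weighting are all illustrative assumptions, not figures from the article:

```python
import math

def pretrade_estimate(order_size, adv, daily_vol_bps, spread_bps, participation):
    """Toy pre-trade estimate (illustrative, not a production model).
    Returns (implicit cost, timing risk), both in basis points."""
    # Step 1: pre-trade data analysis - order 'difficulty' vs. daily volume
    difficulty = order_size / adv
    # Step 2: cost and risk estimation - impact rises with participation,
    # timing risk rises with the time needed to complete the order
    impact_bps = spread_bps / 2 + 0.3 * daily_vol_bps * math.sqrt(participation)
    days_to_trade = difficulty / participation
    risk_bps = daily_vol_bps * math.sqrt(days_to_trade)
    return impact_bps, risk_bps

def optimise(order_size, adv, daily_vol_bps, spread_bps, risk_aversion=1.0):
    """Step 3: strategy optimisation - choose the participation rate that
    minimises cost plus risk-aversion-weighted timing risk."""
    best = None
    for pct in (0.01, 0.05, 0.10, 0.20, 0.30):
        cost, risk = pretrade_estimate(order_size, adv, daily_vol_bps,
                                       spread_bps, pct)
        objective = cost + risk_aversion * risk
        if best is None or objective < best[1]:
            best = (pct, objective)
    return best[0]
```

A patient desk (low risk aversion) is steered towards a low participation rate, a nervous one towards finishing quickly, which is exactly the cost/risk trade-off described above.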

Real-time TCA is growing in prominence as the demand for real-time decision-making on electronic markets increases; this is particularly relevant where the impact of event-driven market volatility can be incorporated into the real-time cost analysis. Such events can all contribute to 'slippage', and real-time TCA mitigates this. It reduces the lost opportunities in the trade-off between time and price by neutralising the time element. The essence of the real-time element is that, theoretically speaking, returns should be improved based purely on a price decision, as the impact of other temporal factors affecting execution price is taken into account prior to the decision to trade. A continuous stream of analytical data should allow trade execution strategies to be updated to reflect changing market conditions without delay.

There is a cost consideration in the choice of TCA strategy – whether to choose post-trade, pre-trade or real-time analysis. Post-trade analysis, based on historical data, can be gathered at minimal cost. Pre-trade analysis has the further requirements of cost and risk estimation, pre-trade data analysis and trading strategy optimisation, incurring additional costs. Real-time analysis, on top of the costs of pre-trade analysis, requires a real-time data feed, increasing costs further. Though each level of analysis adds cost, this must be weighed against any reduction in transaction costs that may be gained.

Conclusions

The renewed emphasis on TCA promises to be an exciting development as businesses continue to focus on cost reduction and transparency in the wake of the credit crisis. And, perhaps like never before, the increasingly electronic marketplaces of today should provide the data needed to build real, bottom-line-affecting solutions in this space. ■


TCA : DEFINING THE GOAL

DEFINING THE GOAL.


In an ever more complex trading world, Mike Googe, product head of post trade analytics at Bloomberg explains why the quality of results in transaction cost analysis for multi-asset operations depends on the right questions being asked.

Improving trading and investment performance is a common goal for any desk, irrespective of the asset class being traded. Just as important is demonstrating compliance with regulations and best execution mandates across all asset classes. In equities, Transaction Cost Analysis (TCA) is established as a significant contributor towards this goal, but a clear demand has now emerged to extend TCA into FX, Fixed Income and Derivatives, beyond simple best execution reporting, to deliver actionable insight.

Bloomberg engages with thousands of clients across the globe daily, and an increasing theme of discussion is trading analytics and in particular multi-asset TCA. In response to this, we held an industry event in London earlier this year to debate the subject. Traders, compliance officers and investment managers engaged in multi-asset operations were surveyed throughout the event, starting with two key questions:

‘In which asset classes do you currently practice TCA?’

A significant 31% of respondents are already conducting some form of multi-asset TCA.

[Figure: survey responses]

‘What is the most important reason for conducting post-trade TCA?’

61% cited compliance and best execution as the most important reason, but broker/algo evaluation is a significant requirement along with trader evaluation and client reporting.

What was clear from the responses and the discussions on the day was:

• A significant amount of multi-asset TCA is already being conducted, but there is a clear lack of consensus regarding best practice.

• Best execution monitoring remains the primary goal, but is coupled with a growing demand for a more sophisticated approach that delivers decision support in a timely and integrated way.

• Demand is growing for a credible, transparent and more robust multi-asset TCA capability.

So what are the kinds of questions clients are trying to answer by implementing a multi-asset TCA strategy? What are the key drivers and the unique challenges in delivering TCA for other asset classes, and what can be learned from the equity experience as multi-asset TCA evolves?


Cost de-composition

Understanding the overall cost of implementing an investment idea is important, but the critical requirement is to de-compose the costs and look at attribution along the entire order lifecycle. Isolating and measuring the key contributors to performance allows good practice to be identified and reinforced, and corrective steps to be taken where weaknesses are found. For example, comparing a price snapshot at the time a currency exposure is generated, or when a portfolio manager decides to invest, with the price at the time the resultant order is created can give insight into the opportunity cost of time delay. Capturing that discrete moment is particularly tough for FX trading when coupled with the natural netting of currency exposures from the underlying trades. A practical solution might be to use the opening price on arrival day as the trigger point for exposures (in itself an arbitrary point, but by convention 5pm NY). Aggregating these results by account or portfolio manager can begin to build a picture of momentum bias and identify where natural delays are creating a negative impact. That, in turn, enables traders to anticipate strategy selection better and also to guide on aggression levels. Another common use case for our clients is to compare the snap price from order arrival to the first route, isolating the opportunity cost attributable to trader timing. The value added by electing when to trade can then be demonstrated.

De-composition can provide even deeper analysis to help in a more subtle way. For example, one can analyse potential 'leakage' cost by comparing the price at the time an RFQ goes out to the price when quotes are returned. Additionally, one could look at reversion based on the price at a defined interval – e.g. 1 or 5 minutes after trading – to build a profile of latent impact. Combining this with fill attributes, e.g. natural agency or principal execution, will build a profile of which bonds or currency pairs are more likely to exhibit negative impact, and which dealers manage the unwind of their resultant exposures better to protect the market for follow-on business. All of these results can deliver insight to optimise the trading and investment process.
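The two measurements described, RFQ 'leakage' and post-trade reversion, can be sketched in a few lines; the sign conventions and function names are our own illustration, not part of any vendor's product:

```python
def leakage_bps(mid_at_rfq, mid_at_quotes, side):
    """Price drift between sending an RFQ and quotes coming back, in bps,
    signed so a positive number means the market moved against the enquirer."""
    drift = (mid_at_quotes - mid_at_rfq) / mid_at_rfq * 1e4
    return drift if side == "buy" else -drift

def reversion_bps(exec_price, mid_at_interval, side):
    """Reversion at a defined interval (e.g. 1 or 5 minutes after trading):
    positive means the price fell back after a buy (or bounced after a sell),
    a sign of latent impact."""
    move = (exec_price - mid_at_interval) / exec_price * 1e4
    return move if side == "buy" else -move
```

Aggregating these per bond or currency pair, and per dealer, builds the impact profile the text describes.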

Insight for improved investment and trading performance

Armed with a de-composed view of costs, multiple forms of analysis can be aggregated to understand true opportunity and impact costs. By isolating and grouping specific factors, it is possible to profile trades and contrast positive and negative performance. To illustrate this, a typical approach is to compare costs according to order 'difficulty'. In equities, clients will often analyse orders in aggregate using a combination of factors – order size/ADV is common, for example, because trading 1% of a day's volume is typically easier than trading 100% or more. Volatility is measured to determine whether the market is consistent or skittish, and momentum is used to gauge whether general market direction is favourable or adverse to your intent. This is more challenging in OTC-centric markets like FX and Fixed Income due to incomplete volume data. However, an approach being adopted by the industry today is to group orders within ranges of size according to the underlying attributes of an order. For example, G10 majors could be grouped within defined levels of trading difficulty based on value: orders of less than US$3m are 'easy', US$3m–US$15m are 'medium' and over US$15m are 'difficult'. A different definition could be created for restricted currencies, or for different groups of bonds according to their credit ratings. Building aggregations of 'similar' orders allows a team to analyse performance in context.
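Using the thresholds quoted above for G10 majors, the bucketing might be sketched as follows (the pair list and error handling are illustrative assumptions):

```python
# Illustrative G10-major universe; other groups would get their own schedule.
G10_MAJORS = {"EURUSD", "USDJPY", "GBPUSD", "USDCHF",
              "AUDUSD", "USDCAD", "NZDUSD"}

def fx_difficulty(pair, notional_usd):
    """Difficulty bucket for a G10-major FX order, per the thresholds in
    the text: under US$3m easy, US$3m-US$15m medium, over US$15m difficult."""
    if pair not in G10_MAJORS:
        raise ValueError("define a separate schedule for this pair")
    if notional_usd < 3e6:
        return "easy"
    if notional_usd <= 15e6:
        return "medium"
    return "difficult"
```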

Let's say orders for a certain desk are grouped according to the time of day they are traded. A desk might decide to split the trading day into one-hour 'buckets', allowing similar orders to be compared according to the time executed, which can highlight the optimal time of day to trade. Combining both aspects – difficulty level and optimal timing – can provide additional clarity. For example, trading spot USD/KRW for 'easy' or 'medium' orders might not be significantly affected by the time of day, but for 'difficult' orders it may prove better to execute during live hours in Korea. This insight can quantify and prove something that may already be suspected by intuition, and will provide pre-trade decision support the next time a similar order arrives.

Pre-trade decision support

Our clients want to answer questions like: 'Who is the best dealer to use for this order in hand? Which is the best strategy, what time of day, and what order instructions should I give?' Experience tells most traders who their 'go-to' counterparties are for certain types of orders, and TCA can reinforce that judgement as illustrated above, but the next step in leveraging TCA is to incorporate this information into the decision-making process at the point of consumption. For example, the results of TCA can be used to help identify the most appropriate targets for an RFQ. By incorporating those results into the RFQ ticket, a list of target dealers can be delivered based on factors such as hit ratio (how often you trade relative to the number of RFQs sent), performance when trading (measuring slippage to the prevailing 'best' price), and performance when not trading (the aggregate slippage of all rejected quotes compared to the taken quote), to gain perspective on how aggressive dealers' quotes are even when they don't win the trade. Put in the context of the order profile, this provides a complete decision/action cycle that calibrates as subsequent trades are analysed.
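A minimal sketch of such a dealer ranking, assuming a simple quote log where each entry records the dealer, its slippage to the best quote in bps, and whether it won the trade (the record layout and tie-break rule are our own assumptions):

```python
from collections import defaultdict

def rank_dealers(rfq_log):
    """Rank RFQ counterparties by hit ratio, breaking ties on the tighter
    average slippage to the prevailing best quote.
    rfq_log: iterable of (dealer, slippage_bps, won) tuples."""
    stats = defaultdict(lambda: {"sent": 0, "won": 0, "slip": 0.0})
    for dealer, slippage_bps, won in rfq_log:
        s = stats[dealer]
        s["sent"] += 1
        s["won"] += won
        s["slip"] += slippage_bps
    scored = [(dealer, s["won"] / s["sent"], s["slip"] / s["sent"])
              for dealer, s in stats.items()]
    # best hit ratio first; ties broken by tighter average slippage
    return sorted(scored, key=lambda t: (-t[1], t[2]))
```

Feeding the top of this list back into the RFQ ticket closes the decision/action cycle described above.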

Regulation & compliance

Multi-asset operations increasingly need to conduct effective trade surveillance. To do this, compliance departments must monitor conformance to regulations while simultaneously detecting and investigating outliers as trades are executed. This is true in all asset classes now, and will become more important as MiFIR makes its passage through the EU. In addition, Dodd-Frank and the movement toward more organised trading facilities such as SEFs, with increased pre- and post-trade transparency, make the case for implementing TCA for non-equities even more compelling. For example, TCA can be used to test for possibly suspicious trades. There are a variety of possibilities: comparing executions to the day's high and low can indicate suspicious trades, or comparing the achieved average execution price to the close 'n' days later and isolating trades that exhibit positive performance greater than a certain threshold can indicate a possible insider trade. These can then be investigated to determine whether further action is required. Effective investigation requires market context and content that can be used to recreate market conditions at the time of trading, for example viewing any contemporaneous news items, or spikes in the quantity or readership of news related to the asset in question at or after the time of trade.
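The second screen mentioned, comparing the average execution price to the close 'n' days later, can be sketched as follows; the record layout and the 200 bps threshold are illustrative assumptions:

```python
def flag_suspicious(trades, threshold_bps=200):
    """Flag trades whose signed performance against the close 'n' days
    later exceeds a threshold - candidates for further investigation.
    Each trade is a dict with id, side, avg_px and close_n_days_later."""
    flagged = []
    for t in trades:
        perf = (t["close_n_days_later"] - t["avg_px"]) / t["avg_px"] * 1e4
        if t["side"] == "sell":
            perf = -perf  # a sell 'wins' when the price subsequently falls
        if perf > threshold_bps:
            flagged.append((t["id"], round(perf, 1)))
    return flagged
```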

Context and content

Using TCA to provide insights on possible trends can lead to a step change in behaviour and is
an important first step for every firm. In addition, the context in which a result is crystallised is critical to conducting proper trade analysis – that is, the ability to see relative as well as absolute performance. For example a trader or dealer working an order with full discretion should likely be viewed differently to one who has received instructions to ‘get me done NOW’. In addition being able to view performance against relative measures becomes increasingly important. The use of pre-trade cost estimates can provide a ‘budget’ of likely cost against which you can compare achieved results. In addition, using an aggregate of all observed performance within a universe allows relative performance to be measured in the context of peers. Both are elements which are beginning to emerge in a multi-asset context. Wider context means analysing results in light of macro events or news. Knowing that an economic release came out during the order which negatively affected the price would provide shading to the performance. Similarly, news of a credit rating review that emerged subsequent to trading and drove the bond positively in a trader’s favour might indicate the need for further investigation of potentially suspicious trades. Analysing costs in context provides a deeper and more usable result.

What is driving the demand for a more sophisticated multi-asset TCA?

Change is a constant we must all face; however, the broad drivers towards a more sophisticated approach to multi-asset TCA could be characterised as regulation, market evolution and competition. From a regulatory standpoint, I've mentioned the growing compliance burden from initiatives such as STR in the UK, but there is a lot of debate on MiFIR right now, and on pre/post-trade transparency in the USA, which makes future regulation and the resultant market structure uncertain. Finding a consensus around organised dealing for bonds, requirements for firm and continuous pricing, and so on, is proving to be a challenge. Most parties can agree, however, that whatever changes come, they will certainly yield a different market structure from any we've seen in the past. Other market factors, such as the reduction in liquidity in secondary markets, are also contributing to a desire to better understand costs and reduce market impact. Due to commission unbundling, trading is now seen as a profit centre in its own right, and so the ability to demonstrate performance becomes increasingly important. We saw this in equities through MiFID and Reg NMS, which created fragmentation challenges. However, markets evolve and adapt, and in this case the response was an 'execution services arms race', with the development of increasingly diverse and sophisticated electronic trading services – from the array of algos through DMA, smart order routing technology, dark pools and so on. With this choice in execution and fragmented liquidity, conducting effective TCA became far more challenging – almost impossible without some degree of automation. The days of checking against the close or day VWAP were gone, and people started to get serious about using TCA as a core part of their investment and trading process. Preparing for how these potential changes might affect other asset classes must now be considered. In my opinion, these drivers all point towards better demonstrating competitiveness. Returns are scrutinised, so it is becoming increasingly important for both buy- and sellside to demonstrate control of trading costs. For the buyside, the focus is still on 'stock picking' in the broad sense, but the marginal gains achieved through efficient trading cost management are now a clear differentiator when courting new clients and trying to retain funds. For the sellside, offering new and innovative execution services, being able to demonstrate execution quality is key.

So what are the challenges to delivering an effective multi-asset TCA?

• Pricing data:

For exchange-traded instruments such as equities and futures, getting pricing data is simple, with a continuous stream of tick data. However, for bonds and currencies, no single view of price and trading data exists, because liquidity is fragmented between electronic platforms and single dealers. Even so, data is more available and of higher quality the further up the liquidity profile you go, and the previously mentioned initiatives toward greater transparency should improve data quality in the future.

For example, trading G10 majors on spot, or US Treasury 10-year notes, doesn't represent a challenge, as the available data to price against delivers a good proxy for an effective market best bid/offer. However, pricing for a broken date in an obscure currency pair, or for a distressed corporate bond with an esoteric coupon or redemption profile, presents a much more significant challenge. This is the case in equities to an extent as well: getting pricing for IBM or Vodafone is simple, but less so for a micro-cap stock in an emerging market. Market participants commonly deduce that the quality of the data will drive the quality of the analysis.

However, it is worth remembering my earlier point about context of costs and that measuring the ‘easy’ in the same way as the ‘difficult’ doesn’t make sense. Whilst pricing data might constrain
a complete analysis, it is still possible to take a pragmatic approach and gain valuable analysis on a good portion of the value traded.

When we consider that transparency
initiatives in the market are likely to improve this situation, it appears there is a good basis for achieving meaningful analysis. For this to work, of course, the equivalent trade data needs to be of equal quality.

• Trade data:

With de-composition and context such a large part of achieving a quality analysis, having accurate and plentiful timestamps that match the events within the order lifecycle is key. This is a significant challenge when trading isn’t electronic, as is the case for a significant proportion of FX and Fixed Income flow.

While the increase in adoption of electronic trading will benefit TCA quality, the lack of adoption does not negate the ability to gather meaningful insight. Benchmark selection is of key importance for interval or implementation shortfall based measures, but benchmarking against points in time (such as open/close, reversions, time offsets etc.) can still be effective.

In addition, benchmarking to fixings can also be achieved. A common benchmark in equities, VWAP (Volume-Weighted Average Price), is of less use in OTC markets because of the incomplete volume data previously mentioned. To that end, another fair-value proxy, TWAP (Time-Weighted Average Price), has seen wider adoption. Implementation would typically be on an all-day basis to overcome the timestamp issue.
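As a sketch, an all-day TWAP benchmark and the slippage against it might be computed as follows, assuming equally spaced price snapshots (the function names are our own):

```python
def twap(prices):
    """All-day TWAP from equally spaced price snapshots - a fair-value
    proxy where volume data are too patchy for VWAP."""
    return sum(prices) / len(prices)

def slippage_to_twap_bps(exec_price, prices, side):
    """Signed slippage of an execution against the TWAP benchmark, in bps;
    positive means the trade did worse than the benchmark."""
    bench = twap(prices)
    slip = (exec_price - bench) / bench * 1e4
    return slip if side == "buy" else -slip
```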

Another key aspect of a complete trade data record is capturing other attributes such as instructions, limits/stops etc. Being able to capture the softer aspects of the order, including any amendments made throughout its life, is also critical.

Finally we have to consider the specific requirements of conducting TCA on items like swaps, or rolls in futures. Keeping these trades separate is essential to remove the potential for material skew when looking at the single legs. Being able to isolate them enables a more refined analysis to take place, for example by removing the volume attributed to futures rolls from the aggregate participation rate for a contract. From a technology point of view this is a challenge of the OMS/EMS data set and the integration process.
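Stripping roll legs out of a participation-rate calculation is straightforward once fills are tagged; the tagging scheme below is an illustrative assumption, not a feature of any particular OMS/EMS:

```python
def participation_rate(fills, market_volume):
    """Aggregate participation rate for a futures contract, excluding fills
    tagged as legs of a calendar roll so they don't skew the result.
    fills: iterable of (quantity, is_roll) pairs."""
    traded = sum(qty for qty, is_roll in fills if not is_roll)
    roll_volume = sum(qty for qty, is_roll in fills if is_roll)
    # our roll legs are removed from the market total as well
    return traded / (market_volume - roll_volume)
```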

• User experience:

The challenge for multi-asset TCA is to reconcile the need for a common environment and procedures in order to present a normalised view of costs across a firm, whilst at the same time accommodating the differences in methodologies between asset classes. For firms conducting multi-asset business there is little value to having separate TCA solutions for each asset class, as it requires separate skills to use, produces different reports, and most importantly cannot produce the holistic view of performance to provide a single report for clients.

Encompassing all trading activity in a normalised view is of utmost importance, as the ultimate benefit of a single platform for multi-asset TCA is the greater insight it allows to be captured. For example, by looking at aggregate performance across assets within groupings like 'easy' or 'difficult', by trading method (e.g. electronic versus RFQ), or when comparing traders, brokers or portfolio managers with cross-asset responsibilities, it is possible to compare performance across assets, which can promote the sharing of best practice. It is also a challenge to ensure that consumers of TCA can draw useful conclusions from the analysis. You can measure, de-compose and attribute, but once you have the results a user will invariably ask: 'so what?'

Conclusion

Multi-asset TCA is a reality and is already delivering answers and insight to those who participate. The market landscape looks set to allow greater sophistication, and thus the value that TCA can deliver should increase. The key is to determine a very specific TCA policy that focuses on the questions you are trying to answer, the entities you want to measure, and the correct benchmark or benchmarks required. It takes time to develop the history upon which to build context and trends, so a TCA policy should be dynamic, evolving with the firm or trading desk. Given the status of current thinking, this framework – both the types of analysis and the data used – can be used to answer some of the important questions posed above: 'when to trade?', 'who to trade with?' and so on. ■


TCA : RISING TO THE TASK

RISING TO THE TASK.


Louis Lovas, Director of Solutions at OneMarketData, LLC explains that the rise in uptake of transaction cost analysis has created a technology challenge, which has to be met.

Technology is driving a sweeping transformation in trading styles as the accelerating use of algorithms creates a more competitive environment. Market participants are witnessing a new normal defined by thinning margins, diminishing volumes and uncertain regulatory policy. Paradoxically, this translates into increased use of algorithms as firms look to squeeze alpha out of a diminishing pot. Regulators are also watchful as technological change blankets the industry. They wrestle with fears of systemic risk from technology fallout, manifested in HFT-induced crashes, IPO mishaps and rogue code debacles (e.g. Knight Capital). However, this does not signal an end to profitability and the discovery of alpha for the institutional firm, but rather a changing attitude. Adapting to this new normal has driven firms to think outside the box to penetrate the fog of market structure, seek asset class diversification and explore far-off geographies.

While high-speed algorithmic trading has been grabbing the headlines there has been another technology evolution occurring, one that leverages the same state-of-the-art, high-speed computer and software technologies. As a corollary to high frequency traders’ low-latency objectives, asset managers and institutional investors are focused on overall trade performance. This has pushed best execution beyond price to an overall understanding and management of trade performance and opportunity costs, thereby creating the incentive to invest in technology for Transaction Cost Analysis (TCA) simply because it can generate alpha by exposing, and ideally lowering, the cost at which you buy and sell.

Widespread liquidity across lit and dark pools has pushed firms to expand their hunt for alpha across brokers and borders. The disparity across markets is a natural barrier to efficient executions. Once liquidity is discovered, the goal of stealth execution algorithms is to protect alpha by taking prices within range and working to blend in with other participants’ activity. While every trade has a lasting effect, the goal is to minimise market-moving impact. Measuring the change in a benchmark (e.g. VWAP) before and after order completion can indicate an algo’s stealth effectiveness. If the market reverts to previous levels, it’s an indication of just how much an order may have influenced the market. This type of collateral market statistic combined with performance analytics offers a profile of trade executions that adds up to an overall view of execution quality.
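The before-and-after benchmark comparison described above can be sketched in a few lines of Python. The trade data is invented and the windows are simplified; this is an illustration of the idea, not any vendor's implementation:

```python
# Measure price reversion around an order's completion window as a rough
# gauge of market impact: compare VWAP before, during and after the order.

def vwap(trades):
    """Volume-weighted average price over a list of (price, size) trades."""
    notional = sum(p * q for p, q in trades)
    volume = sum(q for _, q in trades)
    return notional / volume

# Hypothetical tape: (price, size) tuples for three windows.
before = [(100.00, 500), (100.02, 300)]
during = [(100.05, 400), (100.10, 600)]   # our order is being worked
after  = [(100.03, 500), (100.01, 500)]

impact    = vwap(during) - vwap(before)   # price pushed up while working
reversion = vwap(during) - vwap(after)    # how much of the move gave back

# A large reversion relative to impact suggests the order itself moved
# the market rather than riding genuine directional flow.
print(f"impact {impact:.4f}, reversion {reversion:.4f}")
```

In practice the windows would be matched in volume or duration to the order, but the comparison of the three VWAPs is the core of the reversion check.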

TCA is intended to serve a number of quantifiable objectives. The first is to track and compare broker performance – looking at intra-day efficiencies by measuring and monitoring executions against benchmarks, including arrival price and market price. The second is to identify order exceptions that have become problems – to find the outliers, and highlight the impact of implicit costs or slippage, measured as implementation shortfall and opportunity costs such as crossing the spread.
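Both objectives reduce to the same basic measurement: slippage of each fill against a benchmark, aggregated by broker and screened for outliers. A minimal sketch, with invented fills and an invented exception threshold:

```python
# Compare broker executions against an arrival-price benchmark and flag
# exceptions. Fills and the 10 bps threshold are hypothetical.

def slippage_bps(side, arrival_px, fill_px):
    """Implementation shortfall of one fill vs arrival price, in basis
    points. Positive = a cost (bought above / sold below arrival)."""
    sign = 1 if side == "buy" else -1
    return sign * (fill_px - arrival_px) / arrival_px * 1e4

fills = [
    {"broker": "A", "side": "buy", "arrival": 50.00, "fill": 50.02},
    {"broker": "A", "side": "buy", "arrival": 50.00, "fill": 50.01},
    {"broker": "B", "side": "buy", "arrival": 50.00, "fill": 50.15},  # outlier
]

OUTLIER_BPS = 10.0
for f in fills:
    cost = slippage_bps(f["side"], f["arrival"], f["fill"])
    flag = " <-- exception" if cost > OUTLIER_BPS else ""
    print(f'{f["broker"]}: {cost:.1f} bps{flag}')
```

The same per-fill numbers, averaged per broker over a period, give the intra-day broker comparison; the flagged rows are the order exceptions.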

To achieve cross-broker and cross-asset visibility, institutional investors are demanding custom TCA solutions. The same technologies that enable algo-trading are being leveraged for their analytical and data processing abilities. These low-latency engines encompass the core set of capabilities necessary for measuring both real-time and historical trade performance. TCA relies on three fundamental components: data management, analytics and visualisation. These are needed for both traditional post-trade TCA and intra-day, real-time cost analysis.

A time-series tick database and Complex Event Processing (CEP) are the ideal technology mix for the organisation and understanding of data. The ability to consolidate, filter and enrich raw market data is a hallmark of CEP. Trading diversity creates challenges for price transparency and for measuring execution quality against benchmarks. Achieving this requires highly-customisable analytical tooling, technology that customers can easily pilot. And that forces vendors to pay close attention to tooling design to ensure their technology is easy-to-use, robust and scalable. Therefore it makes sense to leverage the same algo-trading technology to build customised systems for cost management.

The role of data management

Data management for TCA is about bringing together disparate data types. It starts with consuming market data – trades and quotes, which come in many shapes, sizes and encodings. However, tick data is usually derived from many sources. There are 13 major exchanges in the US alone and 20 across Europe and Asia. The determinants of price discovery, volume and trading patterns define a structure unique to each market, asset class and geography, influenced by participants and current regulation.

Measuring trade performance demands confidence in the accuracy and quality of pricing data. Tick data management has to deal with cancellations and corrections, consolidating order books across exchanges and applying corporate action price adjustments and symbol name changes. The creation of accurate and reliable price benchmarks for measuring trade performance is only possible with clean, consistent data. By the same token, capturing all order activity is the cornerstone for understanding trade performance.

Data management for TCA demands access to a broad view of market data. Whether for traditional end-of-day analysis or real-time monitoring, historical content along with real-time intra-day price action plays a vital role in establishing benchmarks. It starts with consuming market data, often in differing formats and protocols from liquidity suppliers.

Tick data has to be consolidated, coalesced and price-adjusted across providers for true price transparency. The creation of accurate and reliable order-book analytics for meaningful benchmarks is only possible with this scrubbing. This is especially true in markets that do not provide a national best bid-offer (NBBO), such as foreign exchange. An accurate arrival price is derived from the broader consolidated view of liquidity.

Capturing and time stamping individual orders and their corresponding fills play a vital role. The accuracy of measured execution quality against benchmark prices depends on the technology behind managing the data. This is especially true for real-time analysis where benchmark prices and determinants of notional prices include a historical context.

The analytical advantage

Analytics is central to TCA’s value. Increasing competition and thinning margins have heightened sensitivity to trade costs and brought into sharper focus the need for versatile tools for analytical TCA. Customisable analysis offers the flexibility to show order and fill performance against a variety of price benchmarks by venue, industry, algorithm or sector. The analysis can then provide insight into the best and worst execution performance.

Yet analysis of trade executions is necessarily complex and involves comparison of execution prices with collateral market statistics or benchmarks. These include market participation analysis and implementation shortfall analysis. The following chart (Figure 1. Execution performance by algorithm) shows the results of a historic volume-profile algorithm where an order is carved up into varying clip sizes (Period order quantity) to correspond with a historic volume pattern of past trade activity (Period market volume). This stealth algorithm determines participation rates from a previous timeframe (e.g. the previous day, week or month) to best mimic normal market activity.
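The carving step of such a volume-profile algorithm can be sketched simply: split the parent order into child quantities proportional to a historic intraday volume profile. The profile numbers below are invented for illustration:

```python
# Carve a parent order into child clip sizes proportional to a historic
# intraday volume profile, so participation mimics normal market activity.
# The half-hour profile below is hypothetical.

parent_qty = 100_000

# Share of the day's volume historically traded in each half-hour bucket.
profile = [0.12, 0.08, 0.07, 0.06, 0.07, 0.09, 0.11, 0.16, 0.24]
assert abs(sum(profile) - 1.0) < 1e-9

clips = [round(parent_qty * w) for w in profile]
# Put any rounding residue into the final clip so quantities sum exactly.
clips[-1] += parent_qty - sum(clips)

print(clips)  # one child order quantity per bucket
```

A production algorithm would additionally cap participation rates and randomise clip timing to avoid signalling, but the proportional carve is the core idea.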

The result shows execution quality and market impact. As an algorithm works an order it inevitably moves the market price in the direction traded. As this chart (Fig. 1) shows, the impact is felt in the rapid rise in fill price (Fill average) against the comparative benchmark (Period market VWAP) up to a mid-point in the day where resistance levels peak. Deeper analysis can also show individual fill performance against a variety of price benchmarks (Arrival, TWAP, etc.).

TCA-OneMktData_Fig1_1350x750

Execution performance analysis as depicted in this chart (Fig. 1) can represent end-of-day execution quality for completed orders or point-in-time (intra-day) quality as an order is worked through the day. Benchmark prices should maintain consistent (time) periodicity relative to fill activity. Real-time TCA provides traders with information on costs, quality and potential market impact as it happens, where analytics become actionable information at the point of trade. Determining these metrics on an intervalised basis offers the ability to adjust an algorithm’s behaviour in real time. Execution strategies and routing logic can be adjusted intelligently in response to outlier conditions, either aggressively or passively, in reaction to market conditions or broker behaviour.

The value of visualisation

The third component, visualisation, fashions the analytical metrics into a human-readable format. As Figure 1 shows, visual representations that plot execution fill rates and participation rates against price-focused benchmarks offer a perspective for easy interpretation.

The purpose of data visualisation is to simplify comprehension of data and promote understanding and insight. The terabyte volumes of market data and order activity can be easily consumed and processed by high-speed technology. However, the single easiest way for our brains to interpret large amounts of information, communicate trends and identify anomalies is to create visualisations that distil, filter and smooth the content.

A visual tier that sits over TCA analytics lets users view the results of complex algorithmic processing. Not only can they gain insight into what has happened (e.g. spotting outliers), it is also possible to forecast what might happen. Rich graphics – scatter plots, line graphs and heat maps – are more concise and offer the means to quickly derive business actions. Figure 2 depicts execution quality as a ratio of order dollar value by an order’s duration in the market. Outliers (in red) are indicated by their distance from the norm. The transition from spreadsheets to charts visually registers comparative values, trends and outliers because they are seen as a whole. The otherwise impossible task of poring over results in rows and columns is made manageable.

TCA-OneMktData_Fig2_1220x750

So having visual representations that plot order activity, performance metrics and participation analysis against benchmark prices will pinpoint outliers and can vastly improve an order’s final quality.

Trading in transition

Technology is reshaping the trading landscape as algorithms and low-latency analytics continue to dominate. But such is not the sole domain of high-speed traders. As the buyside becomes more discerning, demanding improved quality of execution, the sellside is forced to up the ante, offering expanded services to better understand and manage the trade lifecycle. Complex event processing (CEP) and tick data management are the consummate tools that can easily be recast and moulded to unearth trade performance, a goal that is central to the investment process as liquidity continues to be fragmented and fleeting. Now uncovering the performance of trading behaviour through customised, personalised transaction cost analysis is a critical component of any investor’s profitability. ■

 

©Best Execution 2013

 

 

TCA : LEARNING FROM THE FLASH-CRASH

LEARNING FROM THE FLASH-CRASH.


Scott Burrill, CFA, Partner and Managing Director, and Xiang Li, PhD, Director and Head of Quantitative Research at Rosenblatt Securities Inc, argue the case for the effectiveness of a volume based TCA framework.

(Photos L to R: S. Burrill & X. Li)

 

Living and driving in heavily populated Southern California has made us appreciate our smartphone’s map and traffic applications. Being able to quickly navigate to where we are headed, avoid congested routes and find better alternatives greatly enhances the journey to our destination. In that same vein, traditional TCA tools and outmoded modelling techniques have failed to provide the trader with the ideal route in their quest for best execution, or to navigate quickly through a changing market environment.

Many TCA tools have been developed since MiFID and MiFID II, but few of them take the live “market theme” (i.e., a selling market or a buying market) into consideration. The fragmentation of venues, the heavily “quant”-oriented HFT and algo players, and the attendant need to avoid information leakage all call for a comprehensive suite of tools which can readjust constantly to live market conditions, detect adverse selection, and help traders make informed decisions. Volume-based TCA sheds light on such a solution.

This framework was initially developed in the study of the proverbial US Flash Crash to detect toxic trade flows and monitor the health of the trading environment in real time. Instead of looking at the market in traditional clock time, the new approach separates volume into equal-sized bins and works on volume grids (we call it “volume time”). By differentiating the possible informed buying money flow from the selling money flow, an adaptive model can create a reliable gauge of the current market regime of probable informed trades. The Volume-Synchronised Probability of INformed Trading, aka VPIN, is one such pervasively studied measure and has proven to be a strong predictor of liquidity-induced volatility, which is one major factor in trading costs.
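The volume-bucketing idea behind VPIN can be illustrated in miniature: fill equal-volume buckets from the tape, classify each bucket's volume as buyer- or seller-initiated, and average the imbalance over a rolling window. The sketch below uses a naive tick rule for classification (published VPIN work uses bulk volume classification) and an invented toy tape, so it is strictly illustrative:

```python
# Stylised VPIN sketch: bucket the tape into equal-volume bins, classify
# volume as buyer- or seller-initiated with a simple tick rule, then
# average the buy/sell imbalance over a rolling window of buckets.

def vpin(trades, bucket_volume, window):
    """trades: list of (price, size). Returns rolling VPIN estimates."""
    buckets, buy, sell, filled, prev_px = [], 0.0, 0.0, 0.0, None
    for px, sz in trades:
        up = prev_px is None or px >= prev_px   # tick rule: uptick -> buy
        remaining = sz
        while remaining > 0:
            take = min(remaining, bucket_volume - filled)
            if up:
                buy += take
            else:
                sell += take
            filled += take
            remaining -= take
            if filled >= bucket_volume:
                buckets.append(abs(buy - sell) / bucket_volume)
                buy = sell = filled = 0.0
        prev_px = px
    return [sum(buckets[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(buckets))]

# Toy tape: sustained downticks fill the second bucket with one-sided
# selling, driving its toxicity reading to the maximum of 1.0.
tape = [(10.0, 50), (9.99, 50), (9.98, 50), (9.97, 50)]
print(vpin(tape, bucket_volume=100, window=1))
```

Note that a trade can straddle bucket boundaries, which is why each trade's size is drained into buckets in a loop rather than assigned whole.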

Information is king

As trading pros know, price and volume information is highly informative. Trading volume is considered one of the most widely used factors in price prediction. The variation and interaction between price and volume not only provide information regarding the rate of information flow into the marketplace, but also reveal whether investors’ interpretations of the information are consistent or differing. VPIN, as an advanced mathematical model, was initially used to measure order flow toxicity. The model received wide attention for its ability to anticipate the US “Flash Crash” of May 6, 2010, more than one hour in advance. After that, researchers continued to study its application to futures trading and modelling market microstructure. We developed our dynamic, adaptive pre-trade trading system based on VPIN, called “SNAP” (Second Nature Analytics for Pretrade), as a next-generation tool to help traders level the playing field in their quest to achieve best execution.

TCA-Rosenblatt_Fig1_1330x750

The benefits of adopting the Volume based TCA system are threefold.

First, it provides transparency on probable informed order flow versus uninformed order flow. With the volume-based TCA framework, you can not only tell whether there is potentially toxic order flow in the underlying trading activity, but also see clearly whether the probable informed orders are on the buy side or the sell side. From this information, the system can provide an optimised balance between sourcing liquidity and hiding your footprint in the market.

Second, volume-based analysis enables comparison of liquid equities with illiquid equities, since the volume bins are based on uniformly distributed liquidity instead of time. An illiquid equity may have interrupted volume with a flat price line in clock-time space, but it would have continuous volume and price variations in volume-time space.
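The resampling from clock time into volume time can be sketched directly. The tape below is invented, standing in for a thinly traded name whose volume arrives in bursts:

```python
# Resample trades from clock time into "volume time": equal-volume bins,
# each recording the price prevailing when the bin fills. A name that
# looks gappy in clock time becomes a continuous series in volume time.

def volume_bins(trades, bin_volume):
    """trades: list of (price, size); returns one closing price per bin."""
    closes, filled = [], 0.0
    for px, sz in trades:
        filled += sz
        while filled >= bin_volume:
            closes.append(px)          # price when this bin completes
            filled -= bin_volume
    return closes

# Hypothetical thinly traded stock: bursts of volume, long quiet gaps.
tape = [(20.0, 300), (20.1, 50), (20.2, 650), (19.9, 200)]
print(volume_bins(tape, bin_volume=250))
```

A burst that exceeds one bin simply closes several bins at the same price, which is exactly the uniform-liquidity property the text describes: every bin carries the same traded volume regardless of how long it took to accumulate.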

Third, working in a volume-time space captures the dimensionality of the market environment more clearly. As Robert Almgren succinctly stated in his 2005 paper, “The level of market activity is known to vary substantially and consistently between different periods of the trading day… this intraday variation affects both the volume profile and the variance of prices.” By looking at variations in the volume-time domain, the smile curve of volume becomes a uniform flat line. Many measurements, such as volatility and associated trading risk, exhibit better statistical characteristics with the uniformly distributed volume.

Trading performance is increasingly under scrutiny. The microstructure of the market is continuously changing. The fragmentation of the venues both in the lit and dark markets, whether exchanges, MTFs or systematic internalisers, as well as a diverse asset class of trades, and sub millisecond HFT algorithms in the market, all call for a new TCA framework which can be adapted to multiple asset classes within different market environments, readjust quickly and constantly in real time with inherent flexibility, and provide a comprehensive solution to optimal execution.

TCA-Rosenblatt_Fig2_1300x750

Conclusion

Volume based TCA fits this task and can become a pervasive solution in future. As an industry practitioner with a long innovative history, we have built the SNAP system to incorporate its benefits. From our experience, the dynamic weighting between historical values and current market information, and the fine tuning of the execution cost curve to fit trading assets and styles are crucial in the model’s successful implementation. We view it as the next generation of TCA supported by solid mathematical modelling and deep insights into market microstructure.

 

©Best Execution 2013

 

TCA : FX – HOW BIG IS “BIG?”

HOW BIG IS “BIG?”


Milan Borkovec and Ian Domowitz, both Managing Directors, and Christopher Escobar, a Director at ITG examine execution performance using evidence from aggregate trading costs in the FX market.

(Photos L to R: M.Borkovec, I.Domowitz, C.Escobar)

Buyer beware

On July 3 of this year, the courts pronounced caveat emptor with respect to execution performance in the FX market. US District Judge Denise Cote threw out a lawsuit which accused JPMorgan Chase & Co. of breaching a fiduciary duty to custodial clients by charging “hidden and excessive mark-ups” on currency trades. Judge Lewis Kaplan dismissed a lawsuit directed at officials of Bank of New York Mellon for ignoring “red flags” or knowing that trades were being processed at the worst or near-worst prices of the day.

In the case of JPMorgan, allegations were rejected that the custodial agreement obligated the bank to process trades at “the best available market rate” or by any other measure. Judge Kaplan said it was improper to hold Bank of New York Mellon executives and directors responsible for alleged currency trading practices leading to the lawsuit. The lawsuit did not create “reasonable doubt that the board’s inaction was a valid exercise of business judgment.”

We have no opinion on such rulings or the legal actions which engendered them. But what do buyers do when caveat emptor is the way of a market? They look, or at least they should. An examination of currency trading activity is more involved than Judge Cote suggests when she said there was “nothing secret about the mark-ups” charged, because they are disclosed in public databases and on trade confirmations. We are still searching for those public data, and simply note that trade confirmations are hardly evidence of execution quality.

Are we looking yet?

Credible examination of FX transaction costs is difficult, but not impossible. Early studies by Russell Investment Group and Record Currency Management used data from sub-advisors to measure the contribution of FX to overall portfolio transaction costs.(1) In 2004, Russell found that the distribution of transactions in major currencies was highly skewed towards the worst rates of the day, with an average cost of nine basis points (bps). Record found costs to be between 10 and 12 bps; they noted that “approximately one half of the audits conducted to date by Record revealed that clients received uncompetitive FX pricing on a routine basis.” A 2010 paper from Russell shows that most FX trades of investors who rely on the executions of managers or custodians are executed at prices inferior to the average rate of the day.(2)

In 2007, a survey of 17 transaction cost analysis (TCA) providers revealed that none offered foreign exchange TCA.(3) Today, there are at least nine.

Some ascribe this growth to regulatory pressures. The concept of regulation is tenuous in the global FX market, but mandates for best execution are on the horizon. Legal cases provide some impetus, but TCA providers are shy about appearing on the witness stand. Customer demand, focused on process improvement in pursuit of alpha preservation, is a driver. A survey of FX traders in 2007 suggests that 73 percent would look at reports if available, and 37 percent of respondents would want such reports daily.(4)

Demand for measurement grows with changes in market structure and their consequences. Earlier this year, ITG commissioned a special report from Greenwich Associates in order to assess this evolution, as seen by the global buyside. FX volume executed electronically increased 55 percent from 2011 to 2012. Electronically-traded volume was 65 percent of the total, an increase from 57 percent the year before. Multi-dealer platforms account for the majority of volume, dwarfing the use of single-dealer systems, messaging systems, and dealing on the telephone. Algorithmic trading already is used by 23 percent of respondents.(5)

The survey also touched on TCA for FX, with 44 percent of respondents reporting usage. Early equity TCA was focused on compliance; similarly, 50 percent of users in FX also report compliance requirements. Surprisingly, 92 percent of respondents cited investment process improvements as a driver of FX TCA; this type of impetus was slower to develop in equities. Another 33 percent report client requests for execution information as being important.

The relative magnitude of FX transaction costs

Before looking at a single order from any individual trader, one goal of TCA is to quantify transaction costs prevalent in the aggregate market, for various deal sizes, times of day, and across market conditions. We give a flavour of such analysis for five major currency pairs and six minors.(6) The time period is January 1, 2013 through March 31, 2013.

The key to the analysis is the construction of a consolidated limit order book for each currency pair, based on data from twelve banks and five electronic communications networks (ECNs).(7) Tradable quotes are identified, and all statistics are based upon them; we examine indicative quotes in the next section. We limit ourselves to a discussion of spot rates.(8)

Figure 1 illustrates depth of book for one major and one minor currency pair.

TCA-ITG2_Fig1_980x700

The advantage to using an empirical order book is the ability to construct size-adjusted spreads for any time of day. Intuitively, the spread should depend on the notional amount available at any given price. The order book quantifies this notion, based on the cost of climbing the book for any given deal size.
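The cost of climbing the book can be made concrete with a short sketch. The book levels and mid below are invented; the mechanics of sweeping successive price levels and comparing the average fill price to the mid are the point:

```python
# Size-adjusted spread by "climbing" a consolidated order book: the
# average price paid to sweep a given deal size, expressed vs the mid.
# Book levels are hypothetical.

def sweep_cost_bps(levels, deal_size, mid):
    """levels: list of (price, size) on one side of the book, best first.
    Returns the average fill price relative to mid, in basis points."""
    remaining, notional = deal_size, 0.0
    for px, sz in levels:
        take = min(remaining, sz)
        notional += take * px
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("book too shallow for deal size")
    avg_px = notional / deal_size
    return abs(avg_px - mid) / mid * 1e4

# Hypothetical GBP.USD offer side (price, size in millions).
offers = [(1.50010, 1.0), (1.50015, 2.0), (1.50025, 5.0)]
mid = 1.50000

for size in (1.0, 3.0, 8.0):
    print(f"{size}mm: {sweep_cost_bps(offers, size, mid):.3f} bps")
```

Because larger deals consume deeper, worse-priced levels, the cost curve rises with deal size, which is exactly the size-adjusted spread plotted against deal size in Figure 2.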

The top panels illustrate depth at the best quote and cumulative depth of book for GBP.USD, and the bottom panels for USD.TRY. The median number of price levels from which total depth is calculated is remarkably constant across the day, ranging between 17 and 20 for GBP.USD and 10 to 12 for USD.TRY.

The basic patterns are the same across all currency pairs. Median depth at the best quotes is 1mm, rising at most to roughly 1.5mm for EUR.USD. The 90th percentile exhibits a bit more fluctuation, but is still relatively constant for GBP.USD at 3mm; for the EUR.USD pair, the upper percentile range hovers around 5mm.

In comparison, cumulative liquidity across available prices rises to 300mm, on average, for EUR.USD, and between 200mm and 250mm for the Pound, depending on time of day. Median cumulative depth for USD.TRY is between 25 and 30 times that available at the best quotes. This minor pair is not completely representative, however. Book liquidity is sparse for all CZK pairs and for EUR.PLN. Median cumulative depth is below 5mm for these pairs. In contrast, USD.PLN exhibits liquidity on the order of 25mm to 30mm.

Based upon these book data, we construct a measure of cost, by currency pair, time and order size.(9) Figure 2 contains the results of the exercise for two major and two minor pairs, which are representative of the eleven currency pairs studied.(10)

TCA-ITG2_Fig2_980x700

Time-of-day effects are small during London trading hours; the 1000GMT and 1600GMT curves virtually lie on top of each other. Off-hours trading is substantially more costly, although such differences are minimised for Asia-Pacific pairs such as AUD.USD and USD.JPY.

Aggregate costs are far lower than previously reported estimates, such as those cited for Russell and Record Currency Management. In those cases, time is a factor, since results date back to 2003. Morgan Stanley reports more current numbers based on their model-based methodology, which are multiples of those shown here.(11) Our liquidity-based estimates for 50mm GBP, for example, are a tenth of the Morgan Stanley estimate of 4.35 bps. For USD.CAD, they report 4.40 bps on average, with minimum cost at 1.82 bps; in contrast, for the same 50mm deal size, the graph suggests about 0.5 bps. In the case of USD.ZAR, Morgan Stanley’s estimate is seven times what we see in the aggregate data.

As is often the case, the truth is probably somewhere in between. Our method takes advantage of real liquidity provision from seventeen sources. The assumption in deriving size-adjusted spreads is a bit heroic, however. We are assuming a trader’s ability to sweep the aggregate book, taking advantage of all liquidity for all deal sizes. While this is closer to reality in equity markets, fragmentation of data sources and the mixed nature of the dealer-ECN markets in FX suggest that our estimates constitute a lower bound, at least in view of current trading practice. Finally, we model size-adjusted spreads only, ignoring latent liquidity and slippage costs. In contrast, the Morgan Stanley numbers are single-sourced, and depend on models, for which waiting times for new orders and volatility are sufficient to determine cost.(12) Depth of book is not taken into account. A true reality check awaits serious examination of individual buyside data with good time-stamps.

Indicative quotes and their tradable counterparts

Indicative quotes are widely disseminated, and underlie most studies of the FX market.(13) A comparison of tradable quotes (TQ) to indications (IQ) is therefore of interest. Although there are some quantitative differences in quote levels across currencies, patterns are similar enough that the relevant points can be illustrated using one major and one minor pair. The first comparison appears in Figure 3.

TCA-ITG2_Fig3_980x350

Spreads calculated from indicative quotes are significantly greater than those computed from tradable quotes. The ratio of the two spreads ranges between five and ten, on average for all currencies. Although the indicative quotes are not updated as quickly as tradable quotes, they track each other fairly closely, albeit at different levels.(14) In other words, the intraday patterns are essentially the same. Although it is not obvious from the USD.JPY example, the difference between tradable and indicative quotes narrows during London trading hours for most currency pairs.

Indicative quotes do not vary by size. A natural question concerns the size of deal for which the indicative spread “correctly” prices a trade relative to what is actually available in the market. Figure 4 contains representative plots, taken from the GBP.USD and USD.ZAR pairs.

TCA-ITG2_Fig4_980x350

For the major currency pairs, size-adjusted spreads cross the indicative quote at deal sizes between 80mm and 120mm. Indicative spreads overstate cost for all sizes below that range, illustrated here by the GBP.USD pair. For the minors, where deal sizes tend to be smaller, the crossing point is much lower, at about 50mm in the example above and for pairs such as EUR.PLN (roughly 30mm).

Linking FX costs to institutional equity demand

One of the grander aspirations of TCA is to provide the “all-in” cost of a transaction. Appropriate linking of orders is an ongoing issue even in straight equity TCA, depending on the workflow of any individual buyside institution. Making the connection between global equity trades and their corresponding FX costs eventually will require information from the buyside, which enables the connection to be made.

We can provide an idea as to what might be expected. For the first quarter of 2013, we select the ten most active equity trading firms from our TCA Peer database. All equity orders requiring a foreign exchange transaction are identified.(15) At the end of each trading day, the size of the FX transaction is calculated based on the aggregated executed sizes of all equity transactions in a country.

We contrast three polar outcomes. The first is immediate execution of the aggregated FX volume of all equity trades at the time the last equity order is completed. The second and third are FX executions at prices which deliver the best and worst outcome of the day, excluding the period 21:30-22:30 GMT, during which prices are not representative.(16) The results of this exercise are contained in Figure 5.

TCA-ITG2_Fig5_1300x750

FX order sizes tend to be relatively small, on average, but have some sizable outliers for certain days. As expected, the magnitude of the ITG Peer client order sizes varies greatly across currency pairs. Euro and Pound lead the pack with the largest order sizes of 550mm and 600mm, respectively. The average order sizes at 100mm and 170mm are also substantial for both pairs.

Further relating the results to costs in Figure 2, orders in Canadian currency have order sizes up to 200mm (with an average around 60mm) and for the Polish Zloty, only about 50mm (9.5mm). The last is not representative of order sizes in all the minors, however. Equity-linked FX order size for the South African Rand is in the range of 30mm on average with a maximum around 90mm.

How much does it cost the average firm to implement the FX leg of an equity transaction? The answer from this sample is an annualised $13.8 million.

How much could it have cost, if executions were consistently at poor prices? The annualised figure per firm would be $40.8 million. Enough said.

Buyside data and the way forward

The purpose of this article is to outline available evidence with respect to aggregate FX transaction costs. There are both caveats and opportunities associated with the exercise.

Our estimates are derived from liquidity information based on seventeen data sources, all providing tradable quotes, and permitting the construction of an order book. It is no surprise that indicative quotes are generally useless in judging levels of cost, for which we provide evidence. In effect, however, we present a lower bound on transaction costs, with results being a fraction of those presented in other published sources. The reason for this is an assumption that a trader can sweep the book in a market fragmented not only by time and space, but also by the proliferation of dealers and ECNs. The difference between our aggregate estimates and realised cost represents the opportunity to save money. Hence the rationale for FX TCA, and some motivation for changes in market structure and sellside applications, which would permit such liquidity aggregation.

We note that a reality check awaits a serious look at a cross-section of buyside firms’ FX dealings, using data with good time-stamps. A preliminary examination of buyside trading in our own files suggests that process improvement can lower costs. Trading in EUR.USD, for example, appears to cost roughly three times what we would have predicted based on the order book. For the AUD.USD pair, the factor is four. For the ten firms for which we match equity transactions with their FX counterparts, the cost is $3.5 million, per firm per quarter, based on our lower bound estimates of size-adjusted spreads. Poor execution multiplies this figure three-fold. These are serious numbers, which call for a serious attempt at measurement and analysis.

There is much more to do, and many more questions than answers at this stage. Forwards and swaps constitute part of our individual buyside analyses, and similar aggregate information would be useful, especially for minor currencies. The impact of the common practice of netting currencies is certainly a topic, especially since any residuals from that process are executed by a single dealer, as opposed to being exposed to the type of liquidity described here. The effects of volatility are not yet well understood. In preliminary work, we find that volatility, per se, may not be as strong an effect as commonly believed. Volatility surprises, deviations from expectations, constitute a powerful driver and can be quantified, not only for forensic analysis, but also as a pre-trade tool. Explicit links between pre-trade and post-trade analysis have been shown to reduce costs in equity markets. We believe the same to be true in FX.

How big is “Big”? Regardless of disparities among alternative estimates, FX trading costs, if not measured and managed correctly, can be a meaningful drag on investment performance. Solutions now exist that can be leveraged to achieve better performance. Investors and traders are beginning to expect counterparty accountability in terms of execution. Focus and measurement are the first necessary steps.

Footnotes:
1. Robert Collie, “It’s Time for More Choice in FX,” Russell Investment Group Viewpoint, December 2004; Record Currency Management, “Paying Heed Pays Off,” Record Research Summary #5, July 2003; Record Currency Management, “Report to Frank Russell on Currency Transaction Costs,” February 2005.
2. See https://investment.russell.com/public/pdfs/Consulting/ Asset Class Strategy/0110 RR FX Fees.pdf.
3. Michael DuCharme, “First Steps in Foreign Exchange Transaction Cost Analysis,” Journal of Performance Measurement, Spring 2007.
4. Tabb Group, “Just What is Best Execution in FX?” Tabb Group Perspective, July 2008.
5. There also is nascent dark pool activity; see “Foreign Exchange Trading Creeps into Dark Pools,” Wall Street Journal, October 11, 2012.
6. The majors are EUR.USD, GBP.USD, AUD.USD, USD.CAD, and USD.JPY. The minors are USD.PLN, EUR.PLN, USD.CZK, EUR.CZK, USD.TRY, and USD.ZAR.
7. This approach was discussed at length in work by Morgan Stanley, but not acted upon, due to lack of available data. Instead, aggregate cost models were built based on options pricing formulae and an assumption of Poisson arrivals of orders. See, “A Guide To FX Transaction Cost Analysis, Parts I and II,” Morgan Stanley White Paper Series, October, 2009 and February, 2010.
8. Space constraints preclude discussion of forwards and swaps.
9. The measures to follow are based on five-minute intervals and adjusted for daylight saving time regimes. Costs are computed for seven deal sizes: 0.1mm, 2.5mm, 7.5mm, 15mm, 35mm, 75mm, and 200mm; remaining data points are simply interpolated. Median values of the size-adjusted spreads are illustrated in the graphs.
10. Although extrapolation produces reasonable results for large deal sizes for the CZK pairs and EUR.PLN, the lack of substantial liquidity on those books precludes reliable estimates past the 5mm deal mark.
11. See, for example, “How Much Does It Cost to Trade 50M?” Morgan Stanley Fixed Income and Trading white paper, June 2013.
12. Earlier work by Morgan Stanley cites EBS as the single data source. The 2013 document from which we make the comparisons simply gives the source as their own Quantitative Solutions and Innovations group. Their characteristic waiting time can be approximated by the notional amount of the order, divided by the average size of arriving orders, times the average rate of order flow in the market during the transaction period. This follows from a statistical distribution assumption for order arrivals.
13. An early example is by Ian Domowitz and Tim Bollerslev, “Trading Patterns and Prices in the Interbank Foreign Exchange Market,” Journal of Finance 48, 1993.
14. IQ quotes tend to lag the tradable quotes consistently by a few seconds for major currency pairs.
15. For reasons idiosyncratic to our own database, the analysis is restricted to USD pairs, e.g., AUD.USD and USD.TRY.
16. Even excluding this period, the worst prices of the day still are temporally close to this interval.
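The waiting-time approximation described in footnote 12 can be sketched in a few lines. This is a hypothetical illustration, not code from the cited Morgan Stanley work: it reads the footnote as T ≈ notional / (average order size × average arrival rate), the combination that yields a quantity in units of time under the Poisson order-arrival assumption. All numeric inputs below are invented.

```python
def characteristic_waiting_time(notional, avg_order_size, arrival_rate):
    """Approximate time to work an order of `notional` in a market where
    orders of average size `avg_order_size` arrive at a Poisson rate of
    `arrival_rate` orders per second.

    notional / avg_order_size  -> number of arriving orders needed
    ... / arrival_rate         -> expected time for that many arrivals
    """
    return notional / (avg_order_size * arrival_rate)

# Invented example: a 50mm order, 2mm average arriving order size,
# five orders arriving per second.
print(characteristic_waiting_time(50e6, 2e6, 5.0))  # -> 5.0 (seconds)
```

Doubling the notional, or halving either the average order size or the arrival rate, doubles the characteristic waiting time, which is why the approximation is sensitive to the order-flow assumptions behind it.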

 

© 2013 Investment Technology Group, Inc. All rights reserved. Not to be reproduced or retransmitted without permission. 91613-17020
These materials are for informational purposes only, and are not intended to be used for trading or investment purposes or as an offer to sell or the solicitation of an offer to buy any security or financial product. The information contained herein has been taken from trade and statistical services and other sources we deem reliable but we do not represent that such information is accurate or complete and it should not be relied upon as such. No guarantee or warranty is made as to the reasonableness of the assumptions or the accuracy of the models or market data used by ITG or the actual results that may be achieved. These materials do not provide any form of advice (investment, tax or legal). ITG Inc is not a registered investment adviser and does not provide investment advice or recommendations to buy or sell securities, to hire any investment adviser or to pursue any investment or trading strategy. The positions taken in this document reflect the judgment of the individual author(s) and are not necessarily those of ITG.

 

©Best Execution 2013

 

Viewpoint : Mo M’Rabti : ETFs

Mo M'Rabti, Euroclear

ETFs HAVE OUTGROWN THEIR HOMES.

The growth of the European ETF market looks assured, but it is not without significant challenges, as Mo M’Rabti, director of product management at Euroclear, explains.

Clearly, investors are starting to buy into the European recovery story, and are putting their money into European-focused exchange traded funds (ETFs). Funds concentrating on developing markets are also seeing solid interest levels. That said, the operational challenges and inefficiencies associated with ETF trade settlement remain largely unaddressed and are likely to impair future growth.

It has become increasingly evident that cross-border transfers of listed ETF shares are among the most pressing challenges that the European ETF industry faces today. Currently, ETFs are “fungible” between European stock exchanges, but there is no single settlement location where ETF trades on different stock exchanges in Europe are processed. ETFs are treated like equities, which means they follow the same local post-trade processes as equities and settle in each stock exchange’s national central securities depository (CSD).

The domestic post-trade structure was the right solution when ETFs were first launched in Europe in the early 2000s, as most of the trades were done within a given national market. For example, ETFs traded on the London Stock Exchange are settled via Euroclear UK & Ireland, the UK’s electronic settlement system, while ETFs traded on Deutsche Börse must be settled via Clearstream Banking Frankfurt.

The European ETF landscape has since evolved to cater for the interests of non-domestic investors, where the same ETFs are now traded on multiple national exchanges. In effect, European ETFs have rapidly become international securities. But the post-trade infrastructure for cross-border ETF trading hasn’t evolved to accommodate their international nature and growth in volumes. ETF settlement remains fragmented, which has a negative impact on ETF trading liquidity in Europe and is constraining further growth.

For example, the same ETF product could have multiple securities reference identifiers, depending upon where it is listed, e.g. a “DE” ISIN code for Germany or an “IE” for Ireland. The codes may be different, but the underlying ETF is one and the same. A source of confusion? Absolutely.

Moreover, under the existing regime, broker/dealers buying ETFs on one exchange and selling the same ETFs on another face the cumbersome process of ETF realignment from one CSD to another, converting or re-registering their ETFs from one European market to another. In addition, because market rules and practices are not harmonised across Europe, dealers are often faced with managing different corporate action record dates, not to mention FX exposures on market claims.

Brokers use multi-location ETF inventories to mitigate settlement failures, which can result in large financial penalties. Rather than constantly converting or re-registering ETFs, brokers maintain buffer ETF inventories in multiple CSDs, which is extremely costly.

The international ETF – the industry responds

Clearly, it is time to create a more efficient post-trade structure to increase capacity for the ETF market to grow further. We need a post-trade infrastructure that is geared for cross-border ETF trading. The market believes the solution is an ETF structured as an international security.

There is already an equity-linked product – depositary receipts – which is traded on several trading venues and settles very efficiently in the ICSDs. The established working relationships between infrastructure providers – stock exchanges, MTFs, CCPs and ICSDs – are easy to extend to the ETF industry.

BlackRock and Euroclear, together with stock exchanges and CCPs, have been working together for some time to come up with a way to address the above issues impacting the liquidity of ETFs in Europe. The solution is to issue ETFs under an international structure for settlement at an ICSD – like Euroclear Bank. Effectively, the scheme provides ETF investors, regardless of their geographic location, with a single settlement location to process cross-border trades in these products. BlackRock will issue an iShares ETF using the new structure with Euroclear Bank in 2013. It is envisaged that other ETF issuers will also benefit from ICSD settlement of their ETFs.

Under this new scheme, the benefit for broker-dealers which previously had to manage multiple CSD relationships is obvious. The management of ETF inventories will be much more efficient and straightforward, and we will see a significant reduction in settlement fails.

ETF transaction costs borne by brokers will plummet, which we expect will translate into compressed trading spreads for investors. This has important ramifications for the European ETF landscape when compared with the US. For example, BlackRock’s flagship MSCI Emerging Markets ETF, the EEM, trades with a spread of around two basis points (bp) in the US, whereas the same product is bought and sold in Europe at a spread of approximately 20bp.
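The gap between a 2bp and a 20bp spread is easy to put in dollar terms. A minimal back-of-the-envelope sketch: the only figures taken from the text are the two quoted spreads, while the USD 10 million notional and the round-trip framing (paying the full quoted spread across a buy and a subsequent sell) are illustrative assumptions.

```python
def round_trip_spread_cost(notional, spread_bp):
    """Cost of buying and later selling `notional`, paying half the
    quoted spread on each side -- i.e. the full spread over the
    round trip. 1 basis point = 1/10,000 of notional."""
    return notional * spread_bp / 10_000

notional = 10_000_000  # invented USD 10mm position
us_cost = round_trip_spread_cost(notional, 2)    # ~2bp US spread
eu_cost = round_trip_spread_cost(notional, 20)   # ~20bp European spread
print(us_cost, eu_cost)  # -> 2000.0 20000.0
```

On these assumptions the same round trip costs USD 2,000 in the US listing and USD 20,000 in the European one, which is the order-of-magnitude difference the settlement reform aims to compress.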

This market-driven solution heralds a new dawn in ETF issuance, trading and post-trade processing. It is very likely that ETF issuers across Europe will continue to cross-list their ETF securities, and liquidity in these instruments will certainly improve as the cumbersome operational issues are addressed. Leading ETF issuers now expect the European ETF market to grow to more than USD1 trillion over the coming three to five years.

©Best Execution 2013

 
