
TCA Across Asset Classes: Mike Googe


TCA FOR FIXED INCOME – REALLY?


Adoption of TCA for fixed income is showing signs of acceleration; however, approaches and opinions on best practice differ greatly from firm to firm. Mike Googe, Global Head of Post-Trade Transaction Cost Analysis at Bloomberg, considers the key challenges and factors in implementing an efficient and effective fixed income TCA policy.

There are people who question whether it is possible to gain any kind of value or insight by conducting TCA for fixed income (FI). While there are limitations to FI TCA, it can and indeed does deliver value, and more and more firms are starting to realise that. At the annual Bloomberg TCA conference in 2013, our interactive survey showed that 38% of respondents would like to implement FI TCA and another 30% were looking at a full multi-asset implementation. In 2015 Greenwich Associates published a paper which supported this observation. Their survey revealed that use of FI TCA grew from 19% of firms in 2012 to 38% in 2014.

Interestingly, the reasons put forward to explain why fixed income TCA can’t be relied upon – quality of pricing data, lack of a complete volume picture, market trading practices, the liquidity crisis – are issues which also affect the quality of analysis in equities markets where TCA is a firmly-established practice. Yes, equity markets have published market data, but given the imbalance of HFT to real money flows, fragmentation of liquidity across lit and dark venues and the disparity in quality of pricing data between the most liquid large caps and small and micro caps, accurately measuring your trading performance is still very challenging.

The challenges of conducting TCA for fixed income can be overcome. What is imperative is that firms clearly define a TCA policy that meets their objectives, taking into account the limitations of the analysis and using it in conjunction with good judgement.

Why conduct FI TCA?

The primary goal cited by market participants is generally to gain insight in order to improve their trading and investment processes, and be able to demonstrate control and resultant benefits in terms of performance. According to the Greenwich Associates survey, 64% of respondents said ‘post-trade review to analyse trade effectiveness’ was their main use of the analysis.

Repurposing TCA for compliance surveillance is also gaining traction. Having an independent analysis of performance allows firms to implement a defendable policy for the identification of potential market abuse, conflicts of interest and suspicious transaction breaches.

Last but not least, the quality of analysis, and thus the value of conducting FI TCA, is set to improve significantly with initiatives like MiFID II in Europe, which will lead to increased pre and post-trade transparency on fixed income markets.

Key challenges

By considering the following questions you can begin to address the issues surrounding pricing data quality, cost decomposition, benchmark selection and aggregating results.

What do you want to measure and why?

This might include which trade types or flows to include, which part of the order lifecycle to consider, and whether you are doing this to detect outliers, trends or compliance breaches.

How do you want to measure and what effects are you trying to observe?

This includes benchmark selection: which pricing data to measure against and which contextual data to incorporate, such as momentum or volatility, in order to isolate insights such as which dealers perform best under which conditions, or how best to reduce impact.

When do you want to measure and how do you want to use the results?

Select a frequency of analysis that gives you the best insight, and the best output for the various types of internal and external consumers of the analysis, e.g. summary vs. detailed reports, charts and visual representations, exceptions or alerts, or even a feed to contribute to pre-trade decision support tools.

Pricing data

In the absence of a continuous stream of published tick data it is impossible to achieve a complete picture of pricing and volume for a given bond. At some point in the future regulation may mandate publication, or even create the conditions for a central order book, but there are many challenges to overcome before anything like that can be considered.

Currently pricing data consists of contributed indicative pricing, firm pricing, quotes, evaluated pricing and, to an extent, runs and axes. The higher the liquidity of the issue, the more reliable the pricing data. Of course purely indicative prices can be skewed, but in liquid names where multiple contributors are in a competitive environment the onus is on them to deliver accurate prices, and many participants view these as a good proxy for a continuous tick data set. Contributions can also come from exchanges and reporting agencies such as TRACE (the Trade Reporting and Compliance Engine). As markets become more electronic, firm prices will further improve the quality of pricing data and time-stamps, as will initiatives to enhance pre-trade transparency.

As liquidity deteriorates, so does the quality of contributed data. If it falls below a given threshold it is time to consider using evaluated prices, which are based on a variety of direct and indirect observations and are often provided with a ‘confidence score’, allowing consumers to determine to what extent they want to rely on them. In this case, it is fair to say that any TCA will not be useful from a trading insight point of view; however, it can be used for trading surveillance for the purpose of compliance testing. This introduces the idea of contingent benchmarking, i.e. selecting the benchmark to use based on the characteristics of the bond or indeed the order.
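
As an illustration, contingent benchmark selection can be sketched in a few lines of Python. The thresholds, inputs and benchmark labels below are purely illustrative assumptions, not any vendor's actual methodology:

```python
# Hypothetical sketch of contingent benchmarking: pick a benchmark family
# based on how observable the bond's pricing is. Thresholds are assumptions.

def select_benchmark(contributor_count: int, confidence_score: float) -> str:
    """Choose a benchmark based on the liquidity characteristics of the bond."""
    if contributor_count >= 5:
        # Competitive contributed pricing: usable as a tick-data proxy.
        return "arrival_price"
    if confidence_score >= 0.7:
        # Reliable evaluated price: still usable for compliance surveillance.
        return "evaluated_price"
    # Too illiquid for a meaningful measurement.
    return "no_benchmark"
```

A policy table like this makes the firm's TCA rules explicit and auditable, which is itself part of a defendable compliance process.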

Cost decomposition

One of the critical insights TCA strives to deliver is an understanding of where costs occur during the lifecycle of a trade. Being able to understand the opportunity cost between the various stages can help identify areas of best or poor practice and assist in changes of procedure as well as strategy selection or timing. Being able to answer what happened to the price between portfolio manager decision, order creation, RFQ going out, quotes back, execution time etc. helps provide these insights.

With a highly automated electronic workflow providing accurate timestamps, equities can be reliably measured. But with a high proportion of fixed income flow conducted over voice, the quality of decomposition is seriously eroded. In this case the systems and workflow of the firm must be considered when looking at what insight can be gained from any decomposition. Where a high proportion of the workflow is manual, benchmark selection is critical. For example, measuring the observed price when requesting quotes and comparing it to the price when a quote is accepted can provide insight on price sensitivity. When aggregated and compared, this can indicate areas where more or less caution in timing, or better dealer selection, is required.

Benchmarks

The next factor to consider, once you have the underlying pricing data in place, is what ‘measuring sticks’ you want to use. Benchmarks use different approaches to detect specific effects and broadly fall into two categories: absolute and relative.

Of course making comparisons for benchmarking requires matching conventions. Prices should be measured against prices, be they clean or dirty, and yields against yields, but in all cases normalised. Conducting TCA based on spread is fraught with difficulty because of the changing nature of the underlying benchmark, curve or interpolation method from firm to firm.

  • Absolute benchmarks

Absolute benchmarks are calculated by comparing raw pricing data with transactional data. The most prevalent is implementation shortfall, or arrival price, which simply compares the price at a given action point in the order lifecycle with the achieved execution price. This price is taken as the observed mid-price. You can then compare the observed price when the PM made the decision, or when the order was created, or when the trader sent the RFQ or executed against the execution price, and calculate the slippage.

This can be used to calculate the opportunity cost in a decomposition model: what was the price when the order landed on the trader's pad compared to when they sent the RFQ out? Did the price move favourably or adversely? When you aggregate the results over a period of time, trends can be detected (and even more so when put in the context of the order in hand) to determine if the effect is peculiar to one type of bond over another, for example.
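
A minimal sketch of this decomposition, computing signed slippage against the observed mid at each lifecycle point. The sign convention (positive = favourable) and the sample prices are illustrative assumptions:

```python
def slippage_bps(benchmark_mid: float, exec_price: float, side: str) -> float:
    """Signed slippage in basis points; positive means the fill beat the benchmark."""
    sign = 1.0 if side == "sell" else -1.0
    return sign * (exec_price - benchmark_mid) / benchmark_mid * 1e4

# Decomposition: slippage of one buy fill at 100.00 against the mid observed
# at each lifecycle timestamp (hypothetical prices).
lifecycle_mids = {"pm_decision": 100.20, "order_created": 100.15, "rfq_sent": 100.10}
costs = {stage: slippage_bps(mid, 100.00, "buy") for stage, mid in lifecycle_mids.items()}
```

Aggregating `costs` across many orders is what turns individual slippage numbers into the trend analysis described above.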

  • Near/far touch benchmarks

A variation of this is the near/far touch benchmark, which operates in a similar way but, depending on the side of the order, uses the bid or ask price instead of the mid-point. For example a buy order benchmarked to the far touch will use the offer price in its calculation. The benefit of this approach is that it allows the cost of crossing the spread to be factored into the performance. Whilst it only looks at the arrival time, it shows how much of the spread was captured. A natural evolution of this will be to take the average spread from arrival to the final execution time, but this is a more complex operation.
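
A hedged sketch of the far-touch variant, measuring how much of the spread was captured at arrival. Expressing the capture in bps of the mid is an assumption of this sketch, not a stated convention:

```python
def far_touch_capture_bps(bid: float, ask: float, exec_price: float, side: str) -> float:
    """Spread captured versus the far touch at arrival, in bps of the mid."""
    mid = (bid + ask) / 2.0
    far_touch = ask if side == "buy" else bid       # far touch for the order's side
    sign = 1.0 if side == "buy" else -1.0
    return sign * (far_touch - exec_price) / mid * 1e4
```

A buy filled exactly at the mid on a 99.95/100.05 market captures half of the 10 bps quoted spread; a fill at the far touch captures nothing.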

  • Time Weighted Average Price

Volume Weighted Average Price benchmarks are challenging to deliver due to the absence of a complete volume picture. Different platforms or dealers could provide something, but to avoid the pitfalls of self-fulfilling prophecies this is realistic only for very liquid bonds. Future post-trade transparency initiatives might allow this, but for now you can consider the alternative of Time Weighted Average Price (TWAP). This requires aggregating prices into time intervals where each interval has an equal weighting. A TWAP can then be constructed over a full day’s trading or a given interval, for example the order interval or the interval from the order arrival to the end of trading day. Again this is most effective at the higher end of the liquidity spectrum.
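
Construction of a TWAP then reduces to equal-weighting the average price observed in each time bucket. A minimal sketch, where the bucket boundaries and the skipping of empty buckets are assumptions:

```python
from statistics import mean

def twap(prices_by_bucket: list[list[float]]) -> float:
    """Equal-weight each time bucket's average price; skip buckets with no prints."""
    bucket_means = [mean(prices) for prices in prices_by_bucket if prices]
    return mean(bucket_means)
```

Because every bucket carries the same weight regardless of how many prices it contains, a burst of quotes in one interval cannot dominate the benchmark, which is the point of TWAP versus VWAP.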

  • Point in time benchmarks

Point in time benchmarks allow measurements against discrete times not related to the lifecycle of an order, for example a prior or subsequent day's close. These benchmarks can be used in tandem to create a momentum profile centred less on the trading process and more on the timing of trades. When aggregated by portfolio manager or fund, for example, this can reveal momentum bias and whether the trading approach can afford to be more passive or aggressive.

At a lower level of resolution, time offset or reversion benchmarks will take snap prices at a much shorter interval, usually measured in minutes or even seconds, around a given action (e.g. arrival time or execution time). These work in a similar way to point in time but look to detect short term momentum or impact, which can again be aggregated by order characteristics to see which orders or bonds etc. are more sensitive than others.

  • Relative benchmarks

Relative benchmarks help to calculate costs within some sort of context. There are two main approaches: using peer performance to see how your trading compares to others who have conducted similar business, and using pre-trade cost estimates which calculate a likely cost of execution given the characteristics of the order in hand.

Creating a meaningful set of peer benchmarks is a balancing act between measuring in such detail that you risk leaking information about contributors’ trading activity and aggregating results to such a high level that the benchmark becomes meaningless. The balance is to use 2-3 levels of aggregation to get a more relevant result, e.g. what is the peer arrival price when trading French sovereign debt in 7-10 year maturity, or corporate debt of high order difficulty during adverse momentum?

In addition, it is imperative that the sample of peer results has thresholds for minimum acceptable observations, for example specifying a minimum number of relevant orders from a minimum number of contributing firms (excluding yourself to avoid material skew). If these criteria are not met then no result should be generated to avoid misleading observations. A peer equivalent of any of the absolute benchmarks will allow relative comparison of performance, but typically they tend not to include point in time benchmarks.
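
Such threshold gating might be sketched as follows; the field names and default thresholds are illustrative assumptions, not a published standard:

```python
def peer_benchmark(observations, self_firm, min_orders=20, min_firms=5):
    """Average peer cost in bps, or None if the sample is too thin to publish."""
    peers = [o for o in observations if o["firm"] != self_firm]  # exclude yourself
    if len(peers) < min_orders or len({o["firm"] for o in peers}) < min_firms:
        return None  # suppress the result rather than mislead
    return sum(o["cost_bps"] for o in peers) / len(peers)
```

Returning `None` rather than a thinly supported number is the coding analogue of the rule above: no result is better than a misleading one.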

Pre-trade impact estimates in fixed income are beginning to emerge. Through approaches such as cluster analysis, you can leverage scarce transaction data observations while considering a large number of relevant factors that can influence the liquidity of a particular security. In other words, big data offers promising methods to estimate impact cost and time to execute for a given volume, even for bonds with limited historical trading activity.

This is a sophisticated way of doing benchmarking which allows using market impact models similar to those well proven for equities and provides a natural way to extend coverage for TCA in fixed income markets. Again, to enhance transparency, results can be caveated with an uncertainty or confidence score.

  • RFQ benchmark tests

Finally, Request For Quote (RFQ) trading allows for another set of benchmark tests to be conducted. These have tended to fall into the category of best execution. A firm might go out to several dealers requesting quotes and will typically select the best quote to trade before comparing it to the next best quote. This measure is very simple to capture and is known as the cover, which gives the price improvement achieved.

There is now a growing interest in capturing all returned quotes, including rejected quotes and abandons, and comparing them to the traded price to calculate the opportunity cost when you didn’t trade with a certain dealer. Further metrics such as hit ratio (how often you trade with a given dealer within a given context) or average time to respond to a request can provide further context to the analysis.
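
The cover and hit ratio metrics are simple to compute once all quotes are captured. This sketch assumes quote costs are already expressed in basis points:

```python
def cover_bps(traded_quote_bps: float, next_best_bps: float) -> float:
    """Cover: improvement of the winning quote over the next best (costs in bps)."""
    return next_best_bps - traded_quote_bps

def hit_ratio(quotes_requested: int, trades_won: int) -> float:
    """How often a given dealer wins your RFQs within a given context."""
    return trades_won / quotes_requested if quotes_requested else 0.0
```

Tracked per dealer and per context (bond type, size band, time of day), these two numbers already give a useful picture of who quotes competitively and who merely quotes.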

Aggregating results

Typical aggregations group results based on prevailing market conditions (e.g. volatility, momentum etc.), order characteristics (e.g. maturity, security type, country, sector, rating etc.), entity (e.g. dealer, trader, PM or account etc.) or time period (e.g. day, week etc.).

One of the most important goals of TCA is to group according to the difficulty of completing an order. In equities the standard proxy is order size/average daily volume, but the absence of a full volume picture in bonds makes a similar approach unreliable. Compounding the challenge is the fact that as bonds mature their liquidity profile typically deteriorates.

An alternative methodology is to attribute a notional value to a level of difficulty. This doesn’t, however, reflect the wide variety of liquidity profiles a firm trades. Trading up to $5 million of the current 10-year US Treasury is different from trading up to $5 million of an off-the-run corporate bond, for example. Further elements need to be considered, e.g. the security type.

The confidence score provided with evaluated prices can also be used to map difficulty. Bonds tend to have high confidence scores when many reliable direct observations are made, indicating higher liquidity, and vice versa. A more simplistic approach could be to look at the size of the order relative to the issued size or available float, but this is constrained by the difficulty of assessing how much of an issue has been locked up and can no longer be considered accessible liquidity.

Another important step to produce a more accurate analysis is to ensure that things like flow type or semantics are captured. For example repos should be identified so they can be grouped separately to look at the overall performance, to avoid looking at individual legs, which can introduce contingent skew under some other aggregations (e.g. side).

Capturing semantic elements is a lot more difficult and relies on the capabilities of the feed system, but can provide meaningful insight. For example, an order instruction which seeks to get executed ASAP would typically attract a less favourable cost profile vs. one where the instruction is full discretion. If this data can be captured with syntactic consistency it can be a very insightful aggregation.

Turning analysis into action

Typically, TCA takes the form of reports looking to answer one or several questions concerning best/worst trades, dealer performance etc. However these require someone to review the analysis and pull whatever insight they can from them. With time pressure and firms seeking to increase the tempo of their decision/action cycle, TCA increasingly focuses on determining outliers on the basis of thresholds and tolerance, pushing results to the most appropriate user and feeding them into decision support tools.

Integrating trends or aggregate results for decision support is an area of increasing interest in all asset classes. Having an arbitrary ‘show me my best 5 and worst 5 trades’ on a given day doesn’t take into account the quality threshold which all the benchmarking discussed previously delivers. Increasingly, firms are looking to set acceptable tolerances on performance for a given benchmark for a given set of order characteristics. This means you don’t have to rely on a single quality line for all your flow.

Within fixed income the area of dealer selection is a focus. Using some of the previously described benchmarks it is possible to say for a given order profile which dealers are the most likely to respond to your RFQ, which are likely to give you the most competitive prices etc. The result of this can be a probability of execution score or list of ranked dealers to go to. If presented within the EMS or even as a set of options when creating the ticket or RFQ, the trader can make an informed decision with insight directly at their fingertips.

Again, using groupings, a user can say ‘if I have an easy US Treasury order which slips by more than 5 bps against arrival then tell me’. The result is that users only get alerted to those items that they have specifically flagged, thereby reducing the incidence of false positives.
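
A grouping-aware alert rule of this kind might look like the following sketch, where the field names are assumptions and slippage is expressed as a positive cost in bps:

```python
def alert_outliers(orders, security_type="UST", difficulty="easy", max_slip_bps=5.0):
    """Return only the orders breaching the tolerance set for this flow segment."""
    return [o for o in orders
            if o["security_type"] == security_type
            and o["difficulty"] == difficulty
            and o["arrival_slip_bps"] > max_slip_bps]  # slipped by more than the tolerance
```

Each flow segment gets its own tolerance, so a hard corporate order is not judged against the same line as an easy Treasury order.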

This is of particular importance for the growing community of compliance users who are moving away from random sampling to a more defendable surveillance policy which tests every trade and presents only those which require investigation.

Really? Yes!

Fixed income TCA is still emerging but is evolving and being adopted at an increasing rate. The crux is not to view TCA as a ‘one size fits all’ activity but to consider it in light of the unique attributes of your firm and the variety of flow types traded. With this in mind you can establish a TCA policy which gives you the insights that deliver real added value to your business.

©BestExecution 2015


TCA Across Asset Classes: Jim Cochrane


ADJUSTING FOR SIZE.

Liquidity and risk effects in foreign exchange trading


By Jim Cochrane, Director, ITG TCA® for FX; Ian Domowitz, Managing Director, Head of ITG Analytics; Milan Borkovec, Managing Director, Head of Financial Engineering; and Sandor Ferencz, Vice President, ITG Analytics.

As the next advance in FX TCA reporting, our clients in the investment community have requested size-adjusted spread (SAS) benchmarks that account for risk and liquidity on a pre-trade and a post-trade basis. However, one of the more frustrating aspects of over-the-counter trading is the lack of transparency around these spreads. An accurate size-adjusted spread based on aggregated electronic foreign exchange quotes would replace the old method of supplying expected spreads: manually-filled matrices for each trading region with spreads for given currency pairs and sizes. Buyside traders depended on this information both to hold banks accountable for their agreed spreads and to manage their own expectations for costs. Now that buyside firms are more responsible for currency risk, they need a system that will digitally re-create those matrices and give them a benchmark that shows they add value to the investment process.

While every benchmark is a useful tool for the analyst, a benchmark that measures the impact of trading skill and discretion in FX has been sorely missed over the past several years. Most traders feel that they add value to the funds for which they trade. A true accounting for that currency risk management will not only provide them with a measure of skill, but will also provide valuable information to every constituent group at a buyside firm regarding portfolio performance, implementation shortfall and process improvement.

As a provider of foreign exchange transaction cost analysis (FX TCA), ITG created FX cost curves from our tradable quote database that provide insights into expected costs for any size trade at any time of day for liquid, deliverable currency pairs. Our curves were split into four distinct categories by quote source: all sources, the best bank quotes, the average bank quotes and ECN-only quotes. Each cost curve had a different expectation for SAS, as would be expected in FX trading. While each has its uses for pre-trade analysis and trading strategy, which is the “best” curve? Which curve produces the most accurate cost estimate?

[Figure 1]

Since ITG also has access to executed transactions in our database, we tested our cost curves for accuracy. The result of this examination was a modified cost curve that can be used to adjust for the size of currency transactions during times of differing liquidity and risk profiles. This cost curve can then be used to create a SAS benchmark rate that will account for trader skill as well as provide a more precise pre-trade measurement of expected trading costs. The result can then be a source for decision-making and research at firms that seek to outperform their peers.

First we will review FX market structure and our methodology for managing both opacity and aggregation. Then we will detail the first results of our study in size-adjusted spreads and the creation of our cost curves. Lastly, we will review our test of those curves against actual transactions to see which curve is the best fit for SAS.

The lack of a central exchange in the FX market has challenged analysts pursuing accurate cost measurement and useful benchmarks. In a market that has literally dozens of bid/offer spreads at any single point in time (see Figure 1), it is difficult to discover correct pricing even under normal market conditions. Transparency in foreign exchange has become a house of mirrors where the real rate is difficult to find, let alone define, among the dozens of false images. One way to instill clarity among the chaos is to capture and organize as many of these quotes as possible into a limit order book.

The development of a consolidated limit-order book using aggregated FX price streams was once reserved for the interbank market. That changed with the advent of electronic trading, but the rates became separated and sub-divided as the market became fragmented. Aggregators with different feeds would produce different results for any study. One way to increase transparency is to build a comprehensive data set and create cost curves with that data and test them against actual executions.

At ITG, we aggregate pricing data for 38 liquid, deliverable currency pairs from 12 global FX banks and five major electronic communication networks (ECNs). This data is then culled for duplicate and stale quotes. We then review the cumulative depth of each currency pair to ensure that enough trade data is available for in-depth analysis. The resulting empirical order book permits the construction of size-adjusted spreads for any time of day. Intuitively, the spread should depend on the notional amount of liquidity available at any given price. The order book quantifies this notion, based on the cost of climbing, or sweeping, the book for any given deal size.

We apply the following model to create daily FX size-adjusted cost curves:

SizeAdjCost(t,i) = SizeAdjCost(b,i) × (ImpvolSurprise(t-1,i))^b    (1)

where:

  • t is the day index,
  • i specifies the currency pair (38 pairs currently),
  • SizeAdjCost(t,i) is the size-adjusted cost for date t and currency pair i, for a specific size depending on the currency pair,
  • SizeAdjCost(b,i) is the median size-adjusted cost during the benchmark period,
  • ImpvolSurprise(t-1,i) = Impvol(t-1,i) / Impvol(b,i),
  • Impvol(t-1,i) is the implied volatility of currency pair i as of 8pm Eastern Time on date t-1,
  • Impvol(b,i) is the median implied volatility during the benchmark period.
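
In code, the daily scaling of equation (1) is a one-liner. This sketch assumes an exponent b of 1 and uses illustrative inputs; the article does not specify how b is calibrated:

```python
def size_adj_cost(median_cost_bps: float, impvol_prev: float,
                  impvol_bench: float, b: float = 1.0) -> float:
    """Scale the benchmark-period median cost by the implied-volatility surprise."""
    impvol_surprise = impvol_prev / impvol_bench   # vol vs. the benchmark period
    return median_cost_bps * impvol_surprise ** b
```

With yesterday's implied vol 20% above its benchmark-period median, a 4 bps median cost scales to 4.8 bps, so the curve widens on risk-on days without refitting.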

The model produces results that estimate costs by time, size and provider.

One of our initial observations was that our SAS did not match earlier research (Borkovec, Cochrane, Domowitz and Escobar [2014]). These observations were based solely on sweeping the book of liquidity using all sources, and produced spreads that were lower than those in other studies. Our own experience led us to believe that the SAS reported in earlier research was too generous. True spreads, while wider than the bid/offer spread in the market, were not as wide as estimated in other papers and articles. That same insight also led us to believe that sweeping the book was not a popular method for achieving low costs, and that our results were too extreme due to the microstructure of the aggregated market. Even after culling our data, our size-adjusted spreads were probably not achievable.

The above insights prompted us to alter our methodology in order to account for venue performance and last-look liquidity. It is very unlikely that sweeping the book would be successful if the banks have the ability to take one last look before accepting the trade, especially if the top of book has already been triggered. And in cases where an investor was successful in sweeping the book for a large trade, the last banks holding the now toxic position would either stop quoting to the successful investor or widen their quotes to that investor in order to account for the increased risk in her trading style. In response to our observations and understanding of the FX market, we created a variety of cost curves that would match a trader’s execution style.

While keeping the swept book concept in our “ALL” size-adjusted spreads, we also derived three other cost curves: Avg Bank, ECN and Best Bank. As you can observe in Figure 2, “Cost by Provider,” in sizes above 50m USDCAD at the end of the NY trading day, Best Bank outperforms Avg Bank, which outperforms the ECN curve. The Best Bank cost curve is based on the best dealer quotes for each size at this point in time. The Avg Bank curve is calculated from the average cost of all dealer banks for each size at this point in time. Lastly, the ECN line is the average of the consolidated ECN tradable quotes. We discovered that our wider costs matched earlier research more closely than the ALL or Best Bank curves.

[Figure 2]

It is interesting to note that the ECN cost curve significantly outperforms in relatively smaller sizes, even beating the finest quotes of the Best Bank curve. Clearly this is an indication that in sizes up to 25 million US dollars, the banks would prefer clients to use electronic trading platforms for this currency pair. Similarly, the inflection point between the Avg Bank curve and ECN curve at about 60 million US dollars reveals a desire to have clients pick up the phone and deal directly. Now we have four curves that could represent normal operating procedures for dissimilar trading desks: one that relies solely on ECNs; one that calls up a bank at random (Avg Bank); one that knows the best broker-dealer for each currency pair (Best Bank), especially in large size transactions; and one that sweeps the book (ALL). While each provides a cost expectation that fits a unique scenario, we still do not know which curve is closest to the actual experiences of FX traders in the market. We will discuss which of the four curves is the best in the next section of the article.

The table in Figure 3 below represents the first step of our inquiry into which cost curve is the best to use to create a SAS benchmark. It contains the results of comparing our test SAS benchmark rates against actual executions. The “Quoted Mid bps” column is the difference between the execution rate of a transaction and the mid rate prevailing in the market at the time of the transaction (sign-adjusted if selling the base currency), expressed as a percentage of the same mid rate in basis points. That difference is then compared to our four SAS benchmark rates. The results in the “SAS” columns are the expected SAS minus the “Quoted Mid bps”. A result of zero indicates that the expected SAS benchmark matched the actual transaction rate. A positive result indicates that the actual trade was a “gain” against this benchmark. A negative result indicates that the execution was outside, or worse than, the expected spread. We did not differentiate for size: all trades are between 5 and 100 million base currency units. In total, 82,705 trades were analyzed for this study.

[Figure 3]

The observations do not create a clear picture of which SAS curve should prevail. In other words, there is no clear winner using this analysis. The cells outlined in white are the closest to zero when comparing the total cost of the transaction against the predicted cost using the SAS benchmark rate. The cost curve that produces differentials closest to zero changes by currency pair. Further tests made in this fashion revealed that the SAS_Avg cost curve came closer to the actual trades more often than the others, but not often enough to say it is the best choice. The next test was to create a simple regression between the total cost of the actual executions and the cost curves, and produce scatter plots. The results of this phase of the test were more conclusive and striking.

The analysis in Figure 4 compares the actual spread (Total Cost) on the y-axis to the expected SAS benchmark spread on the x-axis for each of the cost curves. As seen from the R2 values, the best-fit line corresponded to the All Providers setting and not the Avg Bank setting. That is not to say that the ALL cost curve, which sweeps the book, is closer to zero across all trades; it is simply the cost curve with the best fit (R2=0.8497).

[Figure 4]

This insight provided the focus for the last phase: modeling the ALL cost curve against actual transactions and deal size. As seen in Equation 1, the model attempts to explain the relationship between actual transaction cost, using the top-of-book quoted mid rate as a benchmark, and ITG’s size-adjusted spread, taking deal size into consideration.

Equation 1:
Total Cost = β0 + β·SAS_[P] + γ·Deal Size

where:

  • Total Cost is the difference between the mid rate of the best bid and offer in the market and the execution rate of the transaction, expressed in basis points,
  • β0 is the intercept of the model,
  • SAS_[P] is the expected cost of the SAS benchmark for provider setting P, expressed in basis points, with fitted coefficient β, and
  • Deal Size is the size of the trade, with fitted coefficient γ.

The multiple regression analysis results are shown below in Figure 5.

[Figure 5]

As seen in Figure 5, the model was able to explain 83.7% of the variation in the actual Total Costs against the all-venue size-adjusted spread (SAS_All). The model shows a positive relationship between total costs and the size-adjusted spread benchmark. Moreover, for every basis point increase in the size-adjusted spread, total costs, on average, will increase by 1.0075 basis points. It is important to note that the p-value stochastically approaches zero as the sample size increases. Significance was tested by checking whether the lower and upper confidence bounds crossed zero: if they contain 0, the independent variable is not considered a significant predictor of total cost. Based upon the results found in Figure 5, the following model was created:

Total Cost Prediction =
-0.344 + 1.0075(SAS_All) − 2.50E-09(Deal Size)

After re-running the actual costs against the modified size-adjusted spread benchmark, the results fell within ±0.5bps of the actual top of book quoted mid benchmark values 90% of the time, and within ±0.25bps of the actual top of book quoted mid benchmark values 69% of the time. To state this differently, of the 82,705 initial observations, 74,117 fell within ±0.5bps of the actual total cost and 56,682 fell within ±0.25bps. Based on the findings of this high-level study, the next logical step would be to investigate estimated total costs given different scenarios and real time market inputs, such as deal size and market volatility, as well as individual currencies.
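
The fitted model and the hit-rate check described above can be sketched in a few lines. This is an illustrative reconstruction using only the published coefficients; the trade observations below are hypothetical, not the study's data.

```python
# Apply the fitted regression from Figure 5 and measure how often the
# prediction lands within a tolerance band of the actual total cost.
# Coefficients are from the article; the sample trades are hypothetical.

def predict_total_cost(sas_all_bps, deal_size):
    """Predicted total cost in bps using the fitted regression."""
    return -0.344 + 1.0075 * sas_all_bps - 2.50e-09 * deal_size

def hit_rate(actual_bps, predicted_bps, tolerance_bps):
    """Share of observations whose prediction lies within +/- tolerance."""
    hits = sum(1 for a, p in zip(actual_bps, predicted_bps)
               if abs(a - p) <= tolerance_bps)
    return hits / len(actual_bps)

# Hypothetical observations: (actual cost in bps, SAS_All in bps, deal size)
trades = [(1.10, 1.40, 5_000_000), (0.60, 0.95, 1_000_000),
          (2.40, 2.70, 25_000_000), (0.30, 0.70, 500_000)]

predicted = [predict_total_cost(sas, size) for _, sas, size in trades]
actual = [a for a, _, _ in trades]
print(f"within 0.5bps: {hit_rate(actual, predicted, 0.5):.0%}")
```

Run over the full sample, the same loop would reproduce the ±0.5bps and ±0.25bps coverage figures quoted above.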

In an OTC market such as foreign exchange, an overload of prices can lead not only to a lack of transparency, but also to debate over which price is best or which mid rate is correct. By comparing actual trades against our estimates of costs, it is clear that, among the multitude of prices in an aggregated limit order book, an accurate size-adjusted spread prediction can be achieved. Additionally, contrary to earlier research results, lower costs in foreign exchange transactions should be expected. Attention to these details, and matching the trading strategy to execute at times of lower volatility and tighter spreads, will certainly improve performance and increase competitiveness.

[divider_line]©BestExecution 2015

[divider_to_top]

TCA Across Asset Classes : Vlad Rashkovich

TCA2-V.Rashkovich-960x375

ORDER PROFILING.

Learn from the past to improve your returns

TCA2-V.Rashkovich-960x375

Vlad Rashkovich, Global Product Manager for Trade Analytics at Bloomberg, explains how buyside traders can reap sizeable benefits by using a top-down approach in the analysis of their order flow.

You have your EMS connected to brokers, news and orders are flowing through. You monitor the slippage of each order against various benchmarks in real time. You might handicap your performance with a cost model. You’re paying for all that, of course. It’s the cost of doing business. Right?

Right. This morning you got a number of difficult orders. How does your setup help you to win them? You look at the market, at the news, you size your order versus the Average Daily Volume. A good pre-trade tool with a solid cost model and a volume estimate can definitely help. You couple it with an IOI flow and you’re off to a good start.

Wait a moment. What about all these orders you executed over the years? Don’t they have some information about portfolio managers you work with, perhaps their patterns and biases? What about brokers and algos you have used, and which worked better for which type of order? Shouldn’t this history tell you, at the very least, about your own trading style and potential blind spots?

Another aspect is venue analysis, which has been very hot recently. With the proliferation of execution outlets, high-frequency players and dark liquidity, it’s natural to be concerned and want to control where your orders are being sent. As a result, the buyside demands increasing transparency from brokers. An urge to make some changes in algo routing logic is understandable. It is also relatively easy to implement.

Portfolio Manager (PM) profiling

A trade usually starts with a portfolio manager, and the largest improvement could come from analysing PMs and trading strategy alignment to the short-term momentum implicitly built into the orders. If the trader is wrong on the expected short-term alpha, he can try to speed up and find the best algo and venues to help him, only to realise that the right way to handle this order was to slow down. Venue analysis, broker selection, dark pools, IOIs – nothing will help if the trader’s overall strategy is not aligned with the portfolio manager’s.

A case in point: you work with a growth portfolio manager. Her “buys” might be momentum driven and require aggressive executions. Her “sells”, however, might be driven by the dips, where the momentum seems to be vanishing. If over time these dips are statistically significant in pointing to reversion, then the sell orders might benefit from a slow execution.

Real life and behaviour is more nuanced of course, thus the same portfolio manager might respond differently to various market conditions across countries, sectors, volatility and many other factors. As a result, you’re facing a multi-dimensional analysis where advanced machine learning techniques could decipher the signal from the noise.

Learning from similar orders in the past to quantify a potential built-in momentum is the first step. The next one would be to use a trade cost model curve to build a utility function for finding an optimal execution strategy which will balance the momentum and the impact. Ensuring that the trade cost model scales well for various order sizes, aggressive execution strategies and the markets where you trade is essential; otherwise the suggested strategy might be far from optimal. A robust volume model is also important to assess the duration of each potential execution strategy and thus the exposure to the expected momentum.
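
The utility-function idea above can be sketched as follows. The square-root impact model and linear momentum drift are illustrative stand-ins of our own choosing, not Bloomberg's actual cost or volume models.

```python
# Balance expected momentum exposure against market impact to find an
# optimal target participation rate. All model forms are assumptions.

def expected_cost_bps(rate, order_pct_adv, momentum_bps_per_day,
                      impact_coef=10.0):
    """Total expected cost (bps) of executing at a given participation rate.

    rate: target participation rate (fraction of market volume)
    order_pct_adv: order size as a fraction of average daily volume
    momentum_bps_per_day: expected adverse drift while the order is open
    """
    duration_days = order_pct_adv / rate           # volume-model proxy
    impact = impact_coef * (rate ** 0.5)           # square-root impact proxy
    momentum = momentum_bps_per_day * duration_days / 2  # average exposure
    return impact + momentum

def optimal_rate(order_pct_adv, momentum_bps_per_day):
    """Grid-search the participation rate that minimises expected cost."""
    rates = [r / 100 for r in range(1, 51)]        # 1% .. 50% participation
    return min(rates, key=lambda r: expected_cost_bps(
        r, order_pct_adv, momentum_bps_per_day))

# Stronger adverse momentum pushes the optimal strategy faster:
slow = optimal_rate(0.10, momentum_bps_per_day=5)
fast = optimal_rate(0.10, momentum_bps_per_day=50)
```

The design point is simply that impact rises with speed while momentum exposure falls with it, so the utility function has an interior optimum that shifts with the expected drift.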

The optimal strategy will be defined by the target participation rate and might be characterised by potential improvement (profit) compared to historical results, probability of profit and other statistical characteristics, to ensure there are no skews or abnormalities which could result in a large loss or other undesirable outcomes.

It is worth emphasising that the improvement and the probability figures should be assessed in the context of the entire order flow, and not on any single order. It is a numbers “game”, and the more you play it, the higher your chance of winning.

To focus the effort, the highest return on investment comes not simply from identifying the behaviour patterns of portfolio managers, but from focusing on cases where the trader and the portfolio manager are not aligned. If a portfolio manager has behaviour patterns that the trader knows and adjusts the execution strategy to accordingly, you might be able to reinforce the trader’s appropriate behaviour and potentially highlight short-term biases to the portfolio manager.

However, if the trader is not fully aligned with the portfolio manager, our research, based on over 200 portfolio managers, based in various countries and continents, shows that there is a 5-25 bps range for potential improvement (profit) on 10-20% of the order flow from most PMs.

Bloomberg-Fig.1

Let us take all orders for a hypothetical portfolio manager over the last few years. If you look at the distribution of daily stock momentum (side adjusted) from order arrival time until 5 days later, you can see that it ranges widely between −3% and +3% per day (Fig.1).

Executions for this portfolio manager can’t be always passive or aggressive, due to the wide range of built-in momentum.

If you look at duration and participation histograms of the executions for this PM (Figs. 2 & 3), you will notice that most orders have been executed over a day and some over 2 days. The duration spike is around 6 hours, which suggests day VWAP orders. As a result the participation rate ranges widely, with most orders executed passively.

Bloomberg-Fig.2

Historical execution slippage versus arrival was 26 bps ± 104 bps (standard deviation), while interval VWAP versus arrival was also 26 bps ± 105 bps. This suggests that the firm is most likely “VWAPing” its orders. When you apply PM profiling you can identify the orders where history contains a statistically significant signal.
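
The slippage measure used here can be written down explicitly: side-adjusted shortfall versus the arrival price, in basis points, with the sign convention that positive means cost. The numbers below are hypothetical illustrations.

```python
# Side-adjusted implementation-shortfall slippage versus arrival price.

def slippage_bps(side, arrival_price, avg_exec_price):
    """Slippage vs arrival in bps; positive = underperformance (cost)."""
    sign = 1 if side.lower() == "buy" else -1
    return sign * (avg_exec_price - arrival_price) / arrival_price * 10_000

# A buy filled above arrival is a cost; a sell filled above arrival is a gain.
buy_slip = slippage_bps("buy", 100.00, 100.26)    # ~ +26 bps cost
sell_slip = slippage_bps("sell", 100.00, 100.26)  # ~ -26 bps (improvement)
```

Averaging this quantity over the PM's order flow, and its standard deviation, gives the 26 bps ± 104 bps figures quoted above.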

Bloomberg-Fig.3

The signal strength can be measured as potential improvement in bps juxtaposed against probability of positive results (Fig. 4). The higher an order scores in both dimensions, the stronger the signal.

If you use only those orders to learn from history, then the results from your PM profiling reduce the slippage to 12 bps and the standard deviation is down to 95 bps (Fig. 5).

This result is achieved for about 20% of the trading value ($1.1bn out of $5.6bn) and the overall improvement is about 10% ($1.5m out of $15.8m) (Fig.6).

When you look at the profit opportunity cumulatively you want to see it consistently growing over time (Fig.7). It’s easy to see that the Executed (dark blue) strategy closely matches Interval VWAP (light blue), which confirms the day VWAP assumption about the execution style of this trading desk. When you apply the PM Profiler (dark green) you can see that over time it has consistently outperformed the actual executed strategy.

Bloomberg-Fig.4

You can further enhance the profiling strategy, by limiting multi-day executions not to exceed 3 days. This is illustrated by the Adjusted profiler strategy (light green line). It is easy to see that this adjusted strategy keeps most alpha since the light green line is very close to the dark green one. This adjusted strategy also ensures that the profiler is close to existing trading practices and the risk profile of the asset manager. Limiting the maximum number of execution days also reduces standard deviation from 104 bps to 95 bps, as you can see in Fig.5.

If you apply the profiling across all PMs at the firm, thus multiplying the potential improvement by the trading volume of the firm (no high-frequency firms are in this sample), it could translate into an opportunity to increase returns for clients by tens of millions of dollars, euros and pounds annually.

Bloomberg-Fig.5

Bloomberg-Fig.6

Based on the research published by Bloomberg over the years, 83% of trader success comes from reading the momentum correctly. Therefore PM profiling should be at the top of the list for decision support systems for the modern trader.

Portfolio managers can also be interested in understanding their short-term momentum patterns, so this tool can and should serve both the trader and the portfolio manager.

Bloomberg-Fig.7

Broker-Algo profile

Once you have identified the optimal execution speed, you can analyse historical executions to see which Broker-Algo combinations have been the most successful for an order type you are about to trade.

To help you profile Broker-Algos, some trading platforms provide peer universes containing millions of routes contributed by hundreds of buyside firms in a non-attributable way over the last few years. When a firm decides to contribute its history, it benefits in return from the entire community experience.

The same concept of multi-dimensional analysis and supervised machine learning could be used here. Profiling of algos can be slightly different compared to profiling portfolio managers, though. For instance, most algos are sector and side agnostic – two important factors for PM profiling. Spread and target participation percentage, however, are important factors for algo selection.

The country might play a role in algo selection, since brokers adjust their algos across regions and countries. Thus an algo from a broker can carry the same name across markets but behave differently and yield different results.

As for measuring Broker-Algo results, optimal algo selection could be based not only on historical performance against a chosen benchmark or a set of benchmarks, but could also take into account consistency of historical performance and the number of observations. In some cases the outcome could be a group of Broker-Algo combinations that have been yielding similar results.
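
One illustrative way to combine performance, consistency and observation count into a single ranking, as suggested above, is a t-statistic-style score. This scheme is our own assumption for the sketch, not Bloomberg's methodology; broker names and figures are hypothetical.

```python
# Rank Broker-Algo combinations by mean outperformance (bps) scaled by
# consistency (standard error), penalising thin samples.
import statistics

def algo_score(perf_bps):
    """Score = mean outperformance divided by its standard error."""
    n = len(perf_bps)
    if n < 2:
        return float("-inf")                  # too few observations to trust
    mean = statistics.mean(perf_bps)
    stdev = statistics.stdev(perf_bps) or 1e-9
    return mean / (stdev / n ** 0.5)          # t-stat style score

history = {
    ("BrokerA", "VWAP"): [1.0, 1.2, 0.8, 1.1],    # steady, modest gains
    ("BrokerB", "POV"):  [5.0, -4.0, 6.0, -3.5],  # bigger but erratic
    ("BrokerC", "IS"):   [2.0],                    # too few samples
}

ranked = sorted(history, key=lambda k: algo_score(history[k]), reverse=True)
```

Note how the consistent performer outranks the erratic one despite similar averages, capturing the "consistency of historical performance" criterion; combinations with near-identical scores would form the group of equivalent choices the text mentions.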

An important nuance to consider is that the same algo can behave very differently depending on the parameters selected. A passive algo that reaches and invokes an ‘I would’ criterion can become very aggressive, drastically changing its behaviour. Mixing up these aggressive executions with passive ones will create huge noise and inconsistency in algo analysis.

Thus, you have to take parameter selection into account to understand the algo’s behaviour and to reduce the noise in historical observations. This is key to successful Broker-Algo profiling, which can, according to industry advertised results, lead to 2-3 bps of outperformance.

Venue analysis

After you have identified an optimal execution speed and the best Broker-Algo combinations for a particular order, you can afford to drill down into these algos to see what they are doing under the hood.

You should quantify the efficiency of anti-gaming mechanisms, protection against adverse price selection and front-running. Ensuring there is no conflict of interest between the best execution obligation and the collection of maker/taker rebates is part of this analysis.

If you choose to customise Broker-Algos, you will be able to compare your results versus standard algos to measure the improvement. Custom algos might limit the ability to use peer universes and will narrow the analysis to the client specific routes only.

Once you have an optimal speed and the best algo, you should expect a potential improvement left in the venue analysis to lie in the execution tactics, and not the execution strategy. You should expect a smaller profit from venue analysis compared to PM and algo profiling.

Learn from the past

Based on the above-described research and assumptions, multi-dimensional order profiling which aligns the portfolio manager, the trader and the market can generate the highest returns. This should be the top decision support tool for the modern trader.

Broker-Algo profiling can ensure the best match between your order characteristics and selected strategy. It has the second largest potential impact. Venue analysis could be a beneficial tactical tool to complement the two previous ones.

The three aforementioned decision support tools can work separately, but using them together will bring the most powerful results in maximising trading profit.

Using big data and advanced decision support tools gives the trader a vantage point and better control in executing orders. Portfolio profitability will naturally improve, so why not learn from the past to make better pre-trade decisions?

As William Faulkner famously wrote: “The past is never dead. It’s not even past.”

[divider_line]©BestExecution 2015

[divider_to_top]

TCA Across Asset Classes : Sabine Toulson

TCA2-SabineToulson-960x375

THE CHANGING FACE OF TCA.

 TCA2-SabineToulson-960x375

Sabine Toulson, Managing Director, LiquidMetrix examines the impact of regulation on trading at both the macro and micro level, and predicts the trajectory of TCA under the new European regulations.

After many years of deliberation, discussion and informed guesswork, judgement day will soon be upon us. In Europe, we now have pretty firm outlines for the scope of the updated MiFID II/MiFIR and MAR/MAD II regulations with hard implementation dates in the next 18 months. The impact of these new regulations and directives is likely to be significant.

On a purely practical level there will be an array of new reporting requirements and procedures associated with trading that will need to be implemented. Both buy- and sellside firms will need to update their systems and processes in time for the July 3rd 2016 (MAR / MAD II) and January 3rd 2017 (MiFIR/MiFID II) go-live dates.

However, the medium term technical challenges brought about by the new regulations are unlikely to stop there, as the implementation is quite likely to alter the market microstructure itself. As an example, and foretaste, the imminent imposition of limits on ‘Dark’ trading (the 4% / 8% caps) and the demise of Broker Crossing Networks (BCNs) are most likely the driving factors behind a number of recent innovations in European equity markets. Examples include: London Stock Exchange’s new intraday auction, BATS Europe’s new continuous intraday periodic auctions, Turquoise’s Block Discovery service and the proposed new Dark Pool, Plato. These changes are already throwing up new opportunities and obligations for investment firms to consider.

One could argue that the most significant practical impact on trading firms – especially sellsides – from the MiFID I regulations adopted in 2007 was not only the immediate new direct reporting/process requirements that went live in 2007, but also the fact that the new regulations heralded lit market fragmentation, starting a technological arms race of Smart Order Routers, HFT market making, Dark Pools, BCNs and so on. Keeping up with these rapid changes in trading microstructure brought about by the new regulations required significant investment by firms in the years following MiFID I implementation. The real sting was in the tail.

It will be interesting to see what impact MiFID II will have on the market microstructure of, for example, fixed income trading. It may well be that the immediate requirements imposed by the new regulations are the start, not the end, of larger changes to the technologies and processes used to trade these markets.

Let’s consider the areas of the new regulations that are likely to impact TCA, best execution and reporting.

Extension to more asset classes

One of the biggest changes in MiFID II is that a number of the more onerous requirements related to Best Execution monitoring and pre- and post-trade reporting, that were previously limited to equity instruments, are now extended to other asset classes; most significantly perhaps fixed income and listed derivatives.

This will be challenging. One of the more persistent complaints following MiFID I implementation was that the lack of an official ‘consolidated tape’ (such as NBBO in the US) made monitoring best execution difficult for firms. In fact, the situation was not really so bad. All the ingredients required to make your own ‘EBBO’ tape were available (market data feeds from exchanges / reporting venues) so the challenge was more of a technical nature to gather this data and stitch it together (or asking someone else, whether a market data supplier or TCA firm, to do it for you).
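
The "stitching together" step is conceptually simple: take the best bid and best offer across all venue feeds at each point in time. A minimal sketch, with hypothetical venue names and quotes (a production tape must also align timestamps and handle stale or crossed quotes):

```python
# Consolidate per-venue top-of-book quotes into a European best bid/offer.

def ebbo(venue_quotes):
    """Best bid (highest) and best offer (lowest) across venues."""
    best_bid = max(q["bid"] for q in venue_quotes.values())
    best_ask = min(q["ask"] for q in venue_quotes.values())
    return best_bid, best_ask

quotes = {
    "LSE":  {"bid": 99.98, "ask": 100.02},
    "BATS": {"bid": 99.99, "ask": 100.03},
    "TQ":   {"bid": 99.97, "ask": 100.01},
}
bid, ask = ebbo(quotes)   # (99.99, 100.01)
```

As the text says, the real work is an engineering problem of gathering and time-aligning the feeds, not the consolidation logic itself.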

However, if we now consider best execution for fixed income trading, the problems of a lack of an EBBO tape in equities look pretty small by comparison. For many fixed income instruments, instead of deep public, lit order books with firm prices, we have less formalised, quote-driven trading. Also there are many (by equity standards) illiquid instruments that trade alongside relatively liquid ‘similar’ instruments.

So, when we try to assess a fair benchmark price to measure fixed income best execution:

  • Do we use all quotes visible in the market, or only private, firm quotes? Can we add volumes from multiple quotes, or should we assume only one quote from one venue/provider is ‘valid’? And how do we assess the time validity of quotes if they are not explicitly stated?
  • If there is a trade today in an instrument that last traded three weeks ago and for which we only have very wide indicative quotes for today, do we use the firm traded price of three weeks ago or do we use the more recent private or non-firm quotes? Alternatively, do we use a model that takes the firm price of three weeks ago, adjusted by the implied price change due to market wide yield curve movements in the meantime?
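
The model mentioned in the second bullet can be illustrated with a first-order duration approximation: roll the stale firm price forward by the market-wide yield move. This is one possible sketch of such a model, not a prescribed methodology; the bond figures are hypothetical.

```python
# Adjust a stale firm bond price for the yield move since it last traded,
# using the first-order sensitivity dP/P ~ -D_mod * dy.

def adjusted_benchmark_price(last_firm_price, modified_duration,
                             yield_move_bps):
    """Approximate fair price today from a stale firm price."""
    dy = yield_move_bps / 10_000
    return last_firm_price * (1 - modified_duration * dy)

# Bond last traded at 98.50 with modified duration 7; yields rose 20bps since:
benchmark = adjusted_benchmark_price(98.50, 7.0, 20)   # ~ 97.12
```

A fuller model would use the instrument's own key-rate durations against the relevant points of the yield curve, but the principle is the same.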

Of course TCA firms like ourselves are currently busy sourcing data and coming up with solutions to the questions above. But it’s most likely the case that methodologies for best execution are going to evolve significantly, not least because as mentioned above, the regulations themselves may well alter the style of trading and the types of market data that will be freely available for these asset classes post-2017.

Venue and broker reporting and best execution (RTS 6 and RTS 7)

There are two major new pre- and post-trade market quality / best execution reporting requirements:

  • For venues (and possibly some other liquidity providers) there will be a requirement (RTS 6) to publish on a quarterly basis an instrument level, daily breakdown of trading volumes and market quality (spreads, market depths, IOC fill rates, etc.) of all activity occurring on their venue.
  • For investment firms that execute client orders to publish annual summaries of the top five venues, or trading destinations (i.e. brokers) split by each class of financial instrument (RTS 7), including information on the quality of execution obtained. Details to be provided will include execution costs/incentives and also passive/aggressive or liquidity adding/removing ratios of orders sent to venues. Investment firms will now also be required to update their order execution policies to “explain clearly, in sufficient detail and in a way that can be easily understood by clients, how orders will be executed by the investment firm” (MiFID II Article 27), and be able to demonstrate that they have executed orders in accordance with their policy.

Firms will be expected to produce the above information on a regular basis (quarterly for RTS 6, yearly for RTS 7) in a fixed format, and the information must be made freely and publicly accessible.

There is an immediate and obvious impact to venues and brokers who will need to put processes to create these reports into place. An interesting related question is how should the rest of the market then be using this information as part of their own best execution due diligence? In principle, the RTS 6 reports provide a lot of detailed daily instrument level information on execution quality (albeit on a delayed basis). So what weight, if any, should firms be taking from this RTS 6 data when selecting which trading venues they should use? The RTS 7 reports give some useful detail on venue preferences and best execution policies of brokers, but how might buysides use this data in their broker selection decisions? From a TCA perspective is this data actually useful and if so how should it be used?

More generally, the requirement to clearly explain best execution policies to clients and to follow “all sufficient steps” (MiFID II Article 27) to achieve best possible results for clients is likely to lead to investment firms moving beyond ‘tick box TCA’ and towards providing a coherent, well thought out and quantifiable approach to all aspects of execution quality (see Figure 1).

TCA2_Liquidmetrix-Fig.1

Starting from parent level TCA analysis pre MiFID I, TCA has expanded to incorporate all aspects of execution quality over multiple venues and asset classes.

What does this mean for TCA?

If we wind the clock back to before MiFID I, TCA was a relatively simple exercise based on parent order analysis (usually limited to equity markets) with some standard top-level benchmarks such as Implementation Shortfall (IS) and VWAP giving general guidance on relative implicit costs of trading.

Post MiFID I, LiquidMetrix has often made the case that the additional complexity of trading due to lit venue fragmentation, algos and the emergence of HFTs and dark pools meant that to get a proper sense of execution quality, TCA really required drilling down to the micro level: our ‘Pyramid’ view of TCA.

Many of the high profile perceived ‘problems’ related to equity trading – such as HFT gaming and toxic dark pools – in recent years were largely invisible to standard top-level TCA analysis. To properly address concerns about such activities one needed to go beyond the basics. Some of the changes in MiFID II (for instance the new emphasis on venue/fill level reporting) are playing catch-up with the way equity markets currently work.

Possibly the most significant changes in MiFID II are the extensions to other asset classes. This presents an immediate set of challenges in acquiring market data sources to properly benchmark and assess best execution. As the trading microstructures of some of these new markets are very different to cash equities it will also mean thinking more deeply about how to define best execution for these asset classes.

As always, the temptation is to try and come up with a single set of measures common across asset classes. But as we saw with equity trading in Europe post MiFID I, if the underlying market structure and dynamics change then any effective best execution monitoring must be able to adapt to reflect these changes. So trying to simply copy ‘equity TCA’ methods to other asset classes may lead to inappropriate or worse, misleading, analysis of execution quality.

So be prepared for TCA methods to initially ‘fragment’ for a period of time as the most appropriate methods of measuring execution quality in different markets evolve.

Of course, in the longer term, it is possible that the trading ecosystems of different asset classes might start to converge (we see some evidence of this in equity/FX markets) and a single set of TCA methods may work for all asset classes again. But in the short term, we think care needs to be taken not to try to force square pegs into round holes.

In summary, over recent years TCA has evolved rapidly to stay relevant to today’s markets. In the coming years that speed of evolution is only likely to increase.

[divider_line]©BestExecution 2015

[divider_to_top]

TCA Across Asset Classes : Anna Pajor

TCA2_AnnaPajor_960x375

TRANSACTION COST ANALYSIS IN A MIFID II WORLD.

TCA2_AnnaPajor_960x375

Anna Pajor – Managing Consultant and Head of GreySpark’s Capital Markets Intelligence practice.

Imminent transparency

The EU’s Markets in Financial Instruments Regulation (MiFIR) and the second iteration of the Markets in Financial Instruments Directive (MiFID II) are changing the region’s capital markets landscape by altering the approaches buyside firms and investment banks take in making trading processes more transparent to their respective groups of clients. This article comments on the main challenges and changes within the range of transaction cost analysis (TCA) services expected by and offered by asset managers, institutional investors and banks, as well as the best execution reporting mandates each type of company will need to comply with in the near future.

The overarching principles of transparency inherent in the EU’s new rules are affecting all aspects of the trade lifecycle. Specifically:

  • pre-trade and post-trade through the publication of quotes and transaction reporting;
  • all asset classes, through rules for equities, equity-like products, bonds, OTC and listed derivatives and structured products;
  • all types of trading organisations – regulated markets, multilateral trading facilities, newly-created organised trading facilities and systematic internalisers;
  • the creation of granular information related to the decision-maker behind an investment decision or a detailed breakdown of all costs related to a trade; and
  • the annual reporting of execution quality at the leading trading venues.

The main game-changers for the EU’s revised trade transparency rules are the creation of new transaction reporting mechanisms and data aggregators. In particular, MiFID II is set to create a consolidated trade tape for Europe’s equities markets that will provide investors with an even view on equities prices and which will set European benchmarks for best-bid-and-offer statistics.

The three main consequences of greater transparency in European capital markets trading will be: the application of equities-like best execution models for other asset classes; changes to client trade reporting and internal sellside trade reporting mandates; and requirements to conform to external price benchmarks. MiFID I established a set of principles for best execution for equities in Europe, which will be expanded to non-equities instruments in MiFID II. However, many non-equities instruments have different structural liquidity dynamics, and they are traded in different ways. Therefore, an expansion of MiFID I’s best execution mandate requires market participants to engage in a rethink of their approach to trade execution on an asset class-by-asset class basis.

Additionally, existing buyside and sellside metrics used to evaluate broker performance on trades or to evaluate the performance of specific traders will be challenged under MiFID II to provide a greater level of insight on the quality of trade execution. The additional benchmark – a consolidated tape – for equities will challenge the veracity of previously-used, broker-specific or client-specific benchmarks.

Navigating to a more transparent world

Based on an assessment of common industry practices across the leading banks, hedge funds, asset managers and technology vendors, GreySpark Partners has observed a variety of MiFID II trade execution issues that are common to them all. For example, when offering their clients TCA and best execution reporting, the companies must evaluate if existing practices are sufficient in the context of the new regulatory framework. In particular, the evaluation needs to ask:

  • Are the right instruments covered under MiFID II’s best execution reporting and TCA standards?
  • Is the time and trading context captured properly?
  • Are all direct and indirect costs captured?
  • Are the costs properly categorised and attributed?
  • What is the accuracy of the timestamp, and are the time-stamping clocks synchronised?
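
The last question implies a concrete check: flag events whose timestamps diverge from a reference clock (e.g. UTC) by more than a permitted tolerance. The tolerance below is an assumed figure for illustration, not the regulatory text.

```python
# Flag event timestamps that diverge from a reference clock beyond a
# tolerance, as a basic clock-synchronisation sanity check.
from datetime import datetime, timezone

def clock_divergence_ok(event_ts, reference_ts, tolerance_us=100):
    """True if the event timestamp is within tolerance (microseconds)."""
    delta_us = abs((event_ts - reference_ts).total_seconds()) * 1_000_000
    return delta_us <= tolerance_us

ref = datetime(2017, 1, 3, 9, 0, 0, 0, tzinfo=timezone.utc)
ok  = clock_divergence_ok(datetime(2017, 1, 3, 9, 0, 0, 50,
                                   tzinfo=timezone.utc), ref)   # within 100µs
bad = clock_divergence_ok(datetime(2017, 1, 3, 9, 0, 0, 500,
                                   tzinfo=timezone.utc), ref)   # 500µs drift
```

In practice the required granularity and divergence limits depend on the trading activity, so the tolerance would be a parameter of the compliance check rather than a constant.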

The availability of new European trade execution benchmarks for equities through the creation of a consolidated tape for each benchmark may challenge the nature of some existing buyside and sellside execution policy methodologies. Specifically, the MiFID II requirements will challenge underlying models for providing and proving best execution reporting and for calculating costs.

Banks should also consider whether the frequency at which they distribute TCA information to their clients is sufficient. At the least, the EU regulations specify an annual reporting schedule. But in the new data-driven, rather than relationship-driven, world, annual communication of transaction analysis data is considered significantly insufficient.

The end of ‘not-the-worst’ execution?

Post-MiFID I, the most common claim regarding the directive’s best execution rules was that the client did not receive the worst-possible execution level – trade execution was done at a price or cost that was not the worst in the given context of when the trade was made. One of the problems with obtaining the best, rather than not-the-worst, execution is the lack of reliable, comprehensive pricing benchmarks.

A European consolidated tape would fix that problem. However, it will not be sufficient to provide best execution. This reality emanates from the fact that the quality of the execution can be evaluated fully only after the trade is done, in the context of the market’s movements in reaction to the order, and in the context of all events that occurred during the execution period.

Execution strategies are typically built and selected based on historical information on market behaviour for an instrument, and on the historical outcomes associated with trading it at any given point in time. Therefore, by default, the pre-trade assessment is not universally applicable to all future market scenarios and, as a result, is insufficient to guarantee ultimately the best outcome.

A European consolidated tape would, however, change the benchmark used to evaluate not-the-worst execution, and it will raise the bar for pre-trade TCA or execution advisory to take into account a Europe-wide context for the underlying analytics.

The best-possible tool for use in supporting – but not yet resolving – MiFID II’s goal of creating a best execution environment is real-time TCA software. These so-called in-trade analytics can advise their users whether the original, planned execution strategy is on course to be the best one possible.

A brave new world in the making

However, the development landscape for TCA tools supporting real-time analytics and proof of best execution is uneven. A GreySpark review of the leading TCA providers in 2014 showed that post-trade TCA for equities trading is fully commoditised, with in-trade and pre-trade analytics delivered by 10 out of the 11 equity specialists surveyed.

FX is the second-most sophisticated asset class in terms of real-time TCA maturity, with eight of the 11 equities TCA specialists surveyed by GreySpark also offering FX post-trade TCA. Of those eight FX TCA providers, three were seen by GreySpark as offering pre-trade execution advisory tools.

Fixed income is the most difficult asset class for TCA tools to penetrate due to a lower level of electronic trading in, primarily, corporate bonds, and a lower level of overall transparency in the asset class as a whole when compared to equities or FX. Therefore, only two out of 11 vendors surveyed by GreySpark were seen offering post-trade TCA for fixed income products. Meanwhile, pre-trade TCA for fixed income products remains an unexplored area.

There are some emerging technology vendor solutions focusing on FX or on fixed income TCA only. But the leading banks in each market are now also offering or building their own bonds, currencies or interest rate derivatives TCA and execution advisory tools in-house. It is important to emphasise, however, that those tools will only be effective for their client base of users in the context of the liquidity that is accessible through the bank.

As the technology supporting TCA matures and expands across asset classes, incorporating real-time analysis, the EU's regulatory principle of best execution as enshrined in MiFID II will move closer to becoming a reality. However, as the end of 2015 approaches, independent buyside and sellside technology tools and pricing benchmarks capable of supporting truly robust best execution still seem like a holy grail.

©BestExecution 2015


Guest blog : Michael Horan : What have the Romans ever done for us?

RomansDoneForUs_560x350

WHAT HAVE THE ROMANS EVER DONE FOR US?


By Michael Horan.*

Most of us are familiar with the famous scene in Monty Python's Life of Brian in which John Cleese asks this rhetorical question in a rip-roaring speech to his band of oppressed compatriots, expecting a resounding, venomous backlash of anti-Roman invective. As it turns out, and much to Mr Cleese's surprise, one by one they slowly speak of all the great and good things the Romans achieved over time… roads, education, law and order, irrigation, medicine… and the list goes on. He ends up agreeing with them. [Ed. And if you haven’t seen it, here’s a link YouTube:WhatHaveTheRomansEverDoneForUs]

Well, you could ask a room of traders the same type of question about the benefits of regulatory change in European equities market structure. The responses would probably be colourful at first.

Not many people like change, and the changes since 2007 have certainly ruffled many a feather in the Square Mile. Firstly, competition amongst exchanges didn’t really feed through to competition in the executing broker space. Only the larger players could afford the technology spend on smart order routing and post-trade infrastructure. This left the smaller firms having to surrender their flow to their larger rivals, as they could not compete. Even firms at the bigger end of the scale had to alter their business models. Some of the investment banks quickly moved into agency trading as they simply could not commit desk capital solely to client orders anymore. Ultimately this left clients with less choice of whom to trade with and how to trade in size.

Today we are still very much suffering from a lack of transparency – mainly due to the absence of a consolidated tape – and there is a growing nervousness around toxicity in dark pools of varying designs, whether they be market operator or broker run.

A regulatory revolution

MiFID I kicked off a revolution in the depth and pace of regulatory change across capital markets trading. Following various stages of fear, denial and confusion the industry is now experiencing the most impressive period of innovation, collaboration and forward thinking that we have ever witnessed in capital markets trading.

The buy and sellside are having real and honest conversations about how to work together. This level of collaboration is best highlighted by efforts such as Plato, where some big names are sitting round the table with one goal: to create healthy block liquidity at a reduced cost, two aims that have long sat uneasily together. These “all to all” platforms will no doubt see further proliferation as we collectively drive towards making markets more efficient, open, and easier to understand.

On the trading periphery we see regulation influencing a level of creativity like never before. Efforts such as central counterparty (CCP) intermediation are clear indicators that we are jointly moving in the right direction as an industry. We also have the order management system (OMS) vendor community developing amazing compliance tools which extend far beyond their historical remit of simple connectivity and order management. And lastly, who would have thought that some of the world’s largest brokers would take a stroll down our very own ‘Silicon Roundabout’ on Old Street and talk to the kids about Fintech.

It seems apparent that regulatory change is making us all think inventively and not defensively anymore. Good things are coming out of this. It is making us work together for the benefit of the industry and ultimately the end investor.

So the next time someone asks you… “What has Capital Markets Regulation ever done for us?” you can tell them.

Michael-Horan_500x615
*Michael Horan is Head of Trading at Pershing, a BNY Mellon company.

©BestExecution 2015


 

Towards Unbundling

Lee Bray, Head of Trading APAC, J.P. Morgan Asset Management, looks at the ongoing trends in unbundling and its impact on the buy-side and sell-side.
Lee-Bray-edm
There is a definite advantage to the underlying client, as there is a lot more certainty around commission spend under the new European regulatory regime. This is not to say that there was anything wrong with how commissions were being managed before; the mechanism simply changes when you disaggregate research and execution completely. However, there is a discussion over the unintended consequences of the new rules and how they will affect the overall client experience. The problem is that the impact is yet to be determined, because even at this late stage in the legislative process it is not exactly clear how it will all settle down. Will the costs be absorbed by the client or by the fund management industry?
Smaller buy-side
On top of the uncertainty that we all face regarding what final shape the rules will take, it is becoming increasingly obvious that some of the smaller asset management firms will be less able to absorb the extra bottom-line costs that this new legislation might impose. It may be difficult for them to remain significant clients to the brokerage community, and therefore you might see them lose out.
After much back and forth, and some optimism that there would be a softening of the stance of the regulators, we are reaching the conclusion that the rules are likely going ahead in their current form. This obviously raises some questions about whether that would be good or bad for the industry, from a European and global perspective. Many global asset management firms over the last two or three years have been working towards a position where they will be well-positioned to meet that new legislation on day 1 if it goes in as intended.
From my perspective, the key question is how we work with the brokerage community to arrive at a price for the research that buy-side firms are consuming. It could be argued that there is a slight reticence among brokers to put a value on items while there is no obligation to do so; this conversation needs driving forwards. As a result of this uncertainty, compounded by the legislative uncertainty, the sell-side will hold off on providing us with menu pricing, as they will want absolute clarity before taking this step. Will they be prepared? Of course, but I think they will leave the option open until the very last moment.
I think that these conversations around the proper pricing of research and execution will be a big focus for the sell-side in the next year or so, and there will need to be a lot of dialogue between the buy -side and the sell-side.
Fixed income
That said, the equity world seems more prepared, given the focus that has been on commissions recently. One difficulty is that the equity world has explicit commissions, whereas in fixed income there is the added complexity of not having the same commission structure to break down.
It would require a step change in the structure of the market to implement solutions similar to those we see in equities.
To some extent the process has already begun. There has been a lot of effort to implement TCA and general cost transparency in fixed income, with larger firms now using systems to give quote comparisons. Many of these things happened in the equity world seven or eight years ago, and you can see that evolution happening at the very early stages in fixed income.
Asia and global regulation
In APAC we have a ‘wait and see’ attitude towards how European regulators are approaching this new environment. Clearly in this region we have several regulators that don’t explicitly recognise something as simple as a CSA payment, so if a firm wanted to implement a global model it may be an even bigger leap to operate the system which MiFID II is proposing.
There is a lot of money that is global and it may be difficult to reconcile the Asian regulation with the European situation, and then wider global regulation. For example, in Taiwan and Japan there will be ongoing difficulties in breaking down commissions as a firm may wish to. The precise mapping of funds and trading within and between the vast range of global firms that have to be involved is a very difficult topic to deal with.
We are heading towards the separation of execution and research, and the environment will evolve. We will reach a situation where both the regulators and the asset management firms are comfortable, but there are still question marks over a number of areas and we just need to get that clarity. Global firms are well prepared, and we are operating unbundled where we can, with CSA programmes and the like.
 

The 12% Rule For Asia’s Closing Auctions

By Gary Stone, Chief Strategy Officer, Bloomberg Trading Solutions; Tom Kingsley, Head of APAC, Bloomberg Tradebook; and Gabriel Kan, Senior Quantitative Analyst (APAC), Bloomberg Tradebook
Liquidity and spreads have always been a challenge when investing in Asia’s equity markets. Some of this challenge is being alleviated as Asia’s closing auctions grow into a significant liquidity event. Having averaged 3-5% of average daily volume (ADV) in 2009, the closing auctions across developed, developing and emerging Asian markets accounted, on average, for almost 13% of daily volume during Q1 2015. Given the importance of the closing auction, the Hong Kong Stock Exchange is planning to re-introduce one in 2016. Traders, whether they are seeking liquidity to accumulate or distribute an investment or to replicate an over-the-day execution, simply cannot ignore the closing auction anymore. The new challenge, however, is how to trade it. At Institutional Investor’s June 2015 Asia Trader Forum in Hong Kong, more than 60 head traders were surveyed; they said that extreme price movements were their biggest concern in the auction process.
We would like to introduce the 12% “Rule of Thumb.” Prices are impacted by auction participation, and the question from a quantitative perspective is: can we determine the maximum amount that can be allocated to the closing auction while minimising potential market impact? The answer involves first estimating closing auction volume and then determining an optimal cap on the participation rate.
Figure 1
Estimation of the closing auction volume
Whether expressed as shares or as a ratio of the day’s total volume, closing auction volume is volatile (Figure 1). To estimate closing auction volume, a predictive model must not only provide a reasonable volume prediction, but, more important, it must minimise and, if possible, avoid overestimation. Overestimation will drive adverse impact. Ideally, if the model misestimates the volume, it should err in a way that minimises tail risk, i.e., does not cause the trader to introduce significant market impact and drive the closing price.
Our quantitative research suggests that estimating the closing auction volume requires an estimation of “today’s volume” combined with other factors such as yesterday’s closing auction volume, the number of trades on the consolidated tape and special events such as month-ends, derivatives expiries, Japan special quote days and index rebalancing days. A model based on these and other factors appears to stabilise around midday, implying that traders can start to allocate shares into the closing auction based on midday information.
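The article does not disclose the model's exact form, but the idea of a factor-based volume estimate that errs toward underestimation can be sketched as follows. This is an illustration only: the features, weights and the quantile-shift device are our own assumptions, not Bloomberg's methodology.

```python
import numpy as np

def fit_auction_volume_model(X, y, quantile=0.25):
    """Fit a linear model of closing-auction volume on a feature matrix X
    (e.g. an intraday estimate of today's volume, yesterday's auction volume,
    trade counts, event dummies), then shift the fit down to the given
    residual quantile so the model errs toward under- rather than
    over-estimation, since overestimation drives adverse impact."""
    Xb = np.column_stack([np.ones(len(X)), X])     # add intercept column
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # ordinary least squares
    resid = y - Xb @ beta
    shift = np.quantile(resid, quantile)           # negative for low quantiles
    return beta, shift

def predict_auction_volume(beta, shift, x_new):
    """Conservative (downward-shifted) auction-volume prediction."""
    x_new = np.concatenate([[1.0], np.asarray(x_new, dtype=float)])
    return max(0.0, float(x_new @ beta + shift))   # volume cannot be negative
```

With `quantile=0.25` the model underestimates roughly three trades out of four, which is the tail-risk behaviour the paragraph above asks for.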
Q3_15_p64 Figures 2 & 3
Formulating a “Rule of Thumb”
To estimate an optimal participation rate, we used actual ASX execution data over the past year. We believe that the ASX is a good proxy for the rest of Asia’s closing auctions. Australia’s markets used to resemble Europe’s; however, we see closing behaviour elsewhere maturing toward that of the ASX as the close has become a benchmark and the closing auction a significant liquidity event relative to ADV. In Hong Kong, Singapore, Japan, South Korea, Taiwan, Malaysia, New Zealand and Thailand we found that traders’ participation in the closing auctions was skewed heavily to the left, i.e. toward low participation rates.
To measure market impact at different participation rates, we benchmark the closing price against the last traded price of the continuous trading period. This is analogous to the arrival price benchmark in common transaction cost analysis. Figure 3 shows the average shortfall, measured in bid-ask spreads, for closing auction orders. The shortfall is about zero when the average participation rate is less than 7%. However, it starts increasing when the participation rate rises above 7%, and the average shortfall crosses the zero mark near 11% participation. The 25th-percentile lower band rises above zero when the participation rate reaches 13%.
The 12% Rule of Thumb
Taking the middle point establishes our “Rule of Thumb.” Our statistical analysis, using the Australian Stock Exchange as a proxy, confirms that institutional traders are currently not leveraging closing auctions to their full capacity. The data suggests that traders should cap their closing auction order size at 12% of the predicted closing auction volume to avoid significant market impact, and we believe this rule of thumb generally applies to the other markets.
In practice, the “Rule of Thumb” works as follows: Take the lesser of 12% of the predicted closing auction volume and 12% of your order and allocate that share amount to the closing auction.
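The allocation step above can be written down directly; the function and parameter names here are our own.

```python
def closing_auction_allocation(order_qty, predicted_auction_volume, cap=0.12):
    """Shares to send to the closing auction under the 12% rule of thumb:
    the lesser of 12% of the predicted closing-auction volume and
    12% of the order."""
    return min(cap * predicted_auction_volume, cap * order_qty)
```

For instance, with a 500,000-share order and a predicted auction volume of 2,000,000 shares, the rule allocates min(240,000, 60,000) = 60,000 shares to the close.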

Fixed Income Liquidity: The Sell-Side Perspective

With Liana Seah, UBS’s APAC Head of e-Fixed Income Sales
Q3_15_Liana Seah
Two thirds of 116 institutional investors across Asia surveyed by UBS on secondary liquidity in corporate bond markets were dissatisfied with available levels of secondary market liquidity. The survey, conducted in January this year, showed that a decline in liquidity had prompted a shift in asset allocation away from high-yield bonds towards investment grade and sovereigns, where credit quality is superior and issue sizes tend to be larger. We also saw a shift in client strategies towards a more long-term, less trading-orientated approach.
Role of the sell-side
The role of the sell-side is twofold: first, as a principal which will manage the risk of the inventory (RFQ-driven model); and second, as a facilitator to identify interest among its client universe (order-driven model). Clients still benefit from the bank’s holistic service of content and execution. With the deterioration in secondary market liquidity, the RFQ-driven model is no longer the default execution model, especially for smaller issue sizes. We can expect the current constraints on the sell-side’s ability to provide liquidity to remain.
The more pertinent question is: Does the buy-side have the capacity and/or the will to become liquidity providers when liquidity is scarce? Our survey showed that clients want an easy-to-use and transparent client-order facilitation system, and that they are also amenable to us accessing different liquidity pools across time zones and regions. A clear and transparent mark-up schedule shows clients both that their orders are not being treated as free options, and that the reach of our electronic platform is in their best interests. This shift in buy-side mentality towards liquidity contribution will be a crucial point in the success of the agency model.
New venues
There is a race to provide all-to-all execution venues, looking to build up liquidity provision from buy-side firms. While an all-to-all venue is the holy grail, fragmentation in the fixed income market can be expected to prevail for the foreseeable future. We believe that no single platform will be dominant, and we expect a more balanced split between the RFQ model and the order-driven model to emerge.
In the RFQ model, different electronic platforms dominate particular markets, while in domestic markets there are electronic platforms for local government bonds. Similarly, under the agency model, the buy-side now requires access to all relevant venues without incurring operational or other technology-related overheads. There will also be a requirement for sell-side firms to route client orders to the best venue, both by voice and electronically; this concept, known as smart order routing, is widely used in exchange-traded markets.
The market needs many sources of liquidity, such as auction venues and crossing venues. These innovations are necessary to ensure there is enough liquidity during times of distress and dislocation.
UBS clients are served by UBS Bond Port (previously UBS PIN-FI), a Matched Principal Trading venue where clients can access various sources of liquidity: from other UBS clients, from third-party venues, as well as from UBS itself.

Changing Analysis Of FX

With Brigitte Le Bris, Head of Emerging Markets Debt and Currencies, Natixis Asset Management
Q3_15_Brigitte Le Bris
I am from the fund management side of our business, and when we give the trading desk orders we are fundamentally asking for best execution. The trading desk is in charge of improving the quality of execution, and they have been working hard on that by exploring new methods of transaction cost analysis (TCA). Our head of trading is now in the process of implementing FX TCA tools. While this is not strictly a requirement yet, it is definitely becoming something that more clients are asking for, and it won’t be long before it is the norm. There is a definite regulatory element to this drive as well; while it is not explicitly on the radar of the regulators at the moment, we believe that it will not be far away, and we would like to get ahead of that curve by ensuring that we have proper systems and analysis in place.
At the moment approximately 80-85% of our trades go via electronic platforms, and the rest are still traded by voice. I think this percentage is about the maximum we will achieve through electronic trading, simply because there will always be certain orders and pairs that require more attention and direct contact with brokers.
The difficulty with TCA is defining which benchmark to use, but as soon as we have that, it will be very interesting for us to crunch the data. As a fund manager, I can see major consequences of this shift towards more analytics for our trading desk. It will help us to see which bank is providing us with the best price, and it will also help us to know how good the trading desk is: how quickly they execute our trades, how well they implement them, and then how good a broker or bank is, so that we can better delegate our flows.
From the perspective of the trading desk it will help them to check how good the price they receive is from the various counterparties and where there are areas they can improve upon to access better prices and liquidity. There are therefore two distinct areas – one quantitative, and one qualitative.
The fundamental idea would really be to implement exactly what has already been implemented on equities. The systems and processes there have changed as a result of increased electronic trading and platforms use. FX markets need to evolve in the same way that equities trading has.
