The Buy-Side’s Role In The MiFID II Reform Approval Process

Gianluca Minieri, Global Head of Trading at Pioneer Investments, examines reform in Europe and what the buy-side needs.
The role of policy-makers and regulators in financial markets should, in theory, be to oppose speculation and short-term strategies and to favour long-term investment in the economy. Their objective is to create a financial system that ultimately serves the needs of those supplying or consuming capital, the real economy and society as a whole.
It was with this principle in mind that MiFID I was introduced in 2007. The overarching objective of the original text was to lay the foundation for a comprehensive, single regulatory framework capable of opening up European financial markets while ensuring significant levels of protection for end investors. The central belief behind MiFID I was that the European economy was getting insufficient funding from the financial markets, and that one of the main factors was the high transaction costs charged by trading venues. According to this theory, such high transaction costs were impeding the development of a secondary market, which, in turn, was detrimental to market liquidity. On these considerations, it is fair to say that the primary target of MiFID I was to bring down the cost of transactions for investors and, second, to facilitate the creation of a large secondary market that could eventually enhance liquidity. The tool agreed to address these issues was the promotion of competition amongst trading venues.
Despite some success in increasing competition, MiFID I could not keep pace with the rapid developments in financial markets, trading systems and execution technologies, and soon started to generate a number of major side-effects.
Liquidity was certainly the main one. MiFID I failed to enhance the level of liquidity in lit markets and instead had a devastating effect on it. It increased the fragmentation of liquidity across trading venues, making it more difficult for the buy-side to understand where liquidity is. Where before there was a single provider of liquidity, we now see a proliferation of alternative venues, both lit and dark. Moreover, the fragmentation of execution venues led to smaller transactions being traded with the aim of reducing market impact. Across all lit and dark venues, average trade sizes have fallen dramatically, roughly by two-thirds since its introduction. And because liquidity pools are now smaller, the size at which the market is impacted has fallen as well, making it more difficult to execute large orders.
The concern to reduce market impact stimulated the use of trading algorithms specialising in slice-and-dice strategies. But because structured algos trade in a predictable fashion, it became easy for some High Frequency Trading (HFT) firms to create predatory strategies that watch for these footprints. Hence, instead of minimising market impact, MiFID I led to a growth of HFT, which constantly monitors orders, seeking short-term advantage by scalping them.

“The introduction of a single consolidated tape in Europe is of paramount importance to deliver best execution to our clients and enhance the level of transparency in the price formation…”

The review of MiFID I was therefore an opportunity: an opportunity to learn from the mistakes and counter-effects of MiFID I, and to restore trust in Europe’s financial markets by encouraging a fair price formation process and efficient capital allocation. The normal approach to a legislative review provides for an open debate with all stakeholders, who in turn provide policy-makers with comments, feedback, analysis, evidence and argument through lobbying. This was not the case in the current reform approval process, which soon became highly politicised and in which many member countries and MEPs often took a philosophical approach to certain topics: trading too much is always bad, while transparency is always good; competition is always good, while dark trading is always bad.


As a buy-side community, we have tried to be as proactive as possible throughout this process. We have made many trips to Brussels to meet legislators and policy-makers. We have presented analysis, studies and data. During these meetings, also attended by many other buy-side firms, we realised how little we, as a buy-side, were conflicted on most issues. Our shared concern was to ensure that we do not lose the capacity to invest money efficiently for our clients. With this in mind, the key message that we brought to European regulators was that we all wanted to keep the interests of long-term investors at the centre of the discussion. Three topics in particular were highlighted as our main concerns.
First concern: fast-tracking the introduction of a European consolidated tape (CT)
We think that the fragmentation introduced by MiFID I will not be resolved by the MiFID II proposal. On the contrary, we think that MiFID II will introduce further fragmentation. The introduction of a single consolidated tape in Europe is of paramount importance to deliver best execution to our clients and to enhance the level of transparency in the price formation process, and we are frustrated to see that, after so many talks, this topic has again fallen into the background. Best execution today is difficult to monitor because of the lack of good-quality post-trade data. The text currently proposed is too vague and weak to succeed. In our view, any solution is acceptable as long as it is mandated, reasonably priced and potential conflicts of interest are properly addressed. Conflicts of interest will arise wherever a CT provider is also a market maker or operates trading venues, because CT providers will have privileged access to trading data. Hence, it is of critical importance that they are unquestionably perceived to be impartial; otherwise no market operators will allow them access to such information.
Second concern: Maintaining the use of the “reference price waiver”
Waivers are a legitimate way for large investors like us to manage their trades without undue disadvantage. Before imposing any arbitrary volume limits, regulators should find a mechanism to monitor markets, gather data and better assess the dynamics of the price formation process. In our view, a single consolidated tape would provide such a mechanism, because it would allow the volume of trading that flows through the waiver to be assessed.
Instead of imposing more restrictions, regulators should therefore make it easier for all types of trading data to get included onto consolidated tapes and enable them to become part of the price formation process.

“When we trade in the dark, we feel our investors are better protected. We have also explained that we rely on Matched Principal Trading to provide anonymity to our clients’ orders and to ensure that we do not trade with an unwanted or unregulated counterparty.”

Until we have a genuine consolidated tape and investors are adequately protected in terms of good-quality market information, we will need the protection offered by the reference price waiver (RPW). Long-term value investors are concerned that their clients’ orders will be unduly exposed to those market players whose principal activity is to exploit short-term movements in markets, resulting in a permanent transfer of value from long-term investors to short-term liquidity providers. That is why I also believe the RPW should be allowed for trading on OTFs.
Third concern: Ensuring the new OTF category is meaningful for investors by allowing the use of Matched Principal
Several analyses show that trading on the primary exchanges can be more expensive than trading in dark pools, mainly because in the dark you are not paying the spread. Moreover, market impact can be significantly lower than in lit markets, enabling trades to be carried out in large sizes. Large asset managers represent the interests of a wide variety of clients, hence they want to build their positions in blocks and with the appropriate level of confidentiality to protect investors. If a large trade is spotted entering the market, our orders might be subject to abuse by HFT firms. We definitely do not want to deal with firms that are making it fundamentally more difficult for institutional long-term investors to find liquidity at fair prices. When we trade in the dark, we feel our investors are better protected. We have also explained that we rely on Matched Principal Trading to provide anonymity to our clients’ orders and to ensure that we do not trade with an unwanted or unregulated counterparty. Without Matched Principal, the OTF category would be of no use to us or to our investors for any asset class. We have also highlighted that OTFs should be available for all asset classes, including equities. We understand that excluding equities from OTFs is a move designed to push more equity trading back onto stock exchanges, but our concern is that this might suppress rather than encourage liquidity. Prohibiting an operator from interacting with existing client orders in its own OTF will worsen execution quality, especially for less liquid instruments. In our view, deployment of own capital should be allowed where the client has consented to it, and any conflict of interest should be addressed at the regulatory level.
With the trilogue under way, negotiations are now in their final stage. However, considering how long the same process took for other directives, there may still be a long way to go before the legislation becomes law. With this in mind, we hope that policy-makers will give buy-side firms more chances to have their voices heard. It is imperative that this time policy-makers take the time to understand how financial markets will be affected by their decisions. In our view, the difficulty remains that the legislators’ approach is too focused on seeking a one-size-fits-all answer to a wide range of issues with different characteristics. My concern is that, once again, such an approach might lead to unintended consequences for financial markets at the expense of end investors and long-term savers, exactly those whom the policy-makers intended to protect in the first place.


Disclaimer
This article was prepared by Gianluca Minieri in his personal capacity. The opinions expressed in this article are the author’s own and do not necessarily reflect the views of Pioneer Investments or others in the Pioneer Investments organisation.

Dark pools update: Lynn Strongin Dodds

A ROSE BY ANY OTHER NAME…?

Despite the trading community’s best efforts, the moniker “dark pools” has stuck. This is probably because “trading without revealing pre-trade price information” is not very catchy. One wonders, though, whether regulators would be so intent on restricting the activity if the name were different.

The answer is probably yes, and the arguments would have been the same. Proponents on both sides of the Atlantic argue that clients save money if their brokers do not have to pay exchange fees, while institutional investors go off-piste in order to avoid the market moving against them, or having to trade in small amounts with computer algorithms.

The public venues, on the other hand, feel hard done by because they have to comply with more stringent rules than their unlit counterparts. They also believe that the decreasing size of trades executed in dark pools, which can sometimes be only a couple of hundred shares, does not warrant some of the trading exemptions that dark pools receive.

In many ways it seems like a tempest in a teacup, because dark pools are still a fraction of the market, but they are making their presence known. According to a new study by TABB Group, these venues accounted for only 11% of volume, but that is up from 2% three years ago, while dark pools at alternative trading venues operated by companies such as BATS Chi-X Europe, Goldman Sachs Sigma X, UBS MTF, Turquoise and Instinet have enjoyed a rise in market share to 5% from 3.2% in the last 12 months alone.

This explains why the regulators are moving in. The European Commission wants dark pools to offer price improvement, which would only allow trading in dark pools at prices better than those found on public markets. This is in addition to imposing caps on the amount of dark trading that can be done on a single venue and at an EU-wide level. The volume-cap concept was first introduced in the Council’s draft of MiFID II, while the European Parliament favours a price improvement rule.

The lines are being drawn in the sand, with the exchanges backing the Commission’s latest proposals, while the sell-side worries that the proposals may limit investor choice, one of the hallmarks of today’s high-tech and fragmented market structure. Perhaps the best course would be to follow the advice of Rebecca Healey, author of the TABB report, who said, “Regulation would achieve more by ‘cleaning up’ dark trading, clarifying the rules within an appropriate framework to maintain choice for the benefit of the underlying investor, rather than obliterating dark pools in their entirety.”

Lynn Strongin Dodds

‘Alphatizing’ The Entire Firm

Mat Gulley, Global Head of Trading, Franklin Templeton Investments, is changing how his firm embraces alpha, and is making the PM, research desk, and trader a more integrated unit.
I strongly believe in the alpha proposition offered by the truly engaged buy-side trading desk. With an increasingly complex market structure, less intermediation in the execution process, new technologies, and the never-ending search for opportunities, now is the time for us to rethink and relearn buy-side trading and how it might best fit into the alpha generation cycle of research, portfolio management and implementation. The question, in my mind, is: “How do we leverage trading within the investment process in order to maximise returns?”
Along with revisiting the role of the trader, we must also continue to consider the importance of technology and how it drives structural change within the implementation process. Technology is no longer simply a source of operational efficiency; it is now an integral and valuable part of the entire investment process. Whether utilising internal applications to manage one’s proprietary data or using trade analytic systems to help manage execution strategies and create tactical investment opportunities, technology is vital to almost every aspect of the execution process. Furthermore, technology facilitates collaboration across the investment structure and helps frame our ideas on computer-based trading, algorithms, data analysis and performance. So, this is how I see the trading world: I view trading from a proactive tactical alpha standpoint with well-defined processes, procedures and risk management.
The ‘new’ structure
At Franklin Templeton, we have an experienced global network of over 50 traders on 13 desks in 10 countries who trade equities, fixed income, derivatives and currencies. We have set up these operations to integrate with our research groups, embedding ourselves as closely as possible in the decision-making process. The key themes for our desks are to integrate, collaborate and participate in the investment process, because I believe there remains unrecognised alpha that buy-side trading desks can add to the greater investment process. We think about our investment process as a triangle with three areas of potential alpha creation, each with a distinct time frame: long-term alpha, medium-term alpha and short-term alpha.


We believe that the long-term alpha sits with the research analyst and, broadly stated, might be in the one-to-five-year range. This stage is where the fundamental research and catalysts for the investment are established. Then you have the medium-term alpha, which is roughly one month to one year, and that is where the asset allocator or portfolio manager (PM) gets involved. This person decides on the best combination of ideas to put into the portfolio. Then we come to the short-term horizon, which runs from order placement up to one month. This is the implementation process. This is “trading”. The trader should optimise the implementation of the ideas and collaborate with the PM and analyst to make sure all information is understood and represented. The trader will also present new opportunities based upon market dynamics, volatility and liquidity, all the while understanding the PM’s strategic positioning and collaborating with all the various participants in the investment process. Asking this requires that our traders possess a unique set of skills: enhanced fundamental market knowledge, new technological tools and well-developed communication abilities.
So when we think about trading’s input into the alpha generation cycle, the trader is part of the “short-term alpha”, or the tactical portfolio construction and implementation process. In my experience, this has historically not been the case. In fact, the traditional investment process has not recognised implementation within the alpha generation cycle; generally it was considered a cost-mitigation centre. As we evolve, the trader must participate at a much more dynamic and interactive level, and the PM and analyst must recognise that this value proposition exists, or has the potential to. As always, the PM sets the strategic portion of the portfolio according to the investment objectives and the PM’s views on the market or a stock’s potential returns, while the trader focuses on implementing those ideas using opportunity and impact as their guides, but also presents the case for more tactical positioning around a core portfolio. Traders need to understand the investment process holistically and have some accountability and alignment with performance.


The new “trader” should embrace participating in research and PM meetings to the extent that time allows. The new “trader” should want to be involved and understand strategy in order to help explore possible opportunities. If you have traders who have the skills to be anywhere along the investment continuum and they choose to be in trading, why not maximise their skill set on behalf of the clients and fund returns? I see this as leveraging people and doing so in an unconstrained manner.
With this model, you have an integrated and unconstrained trading desk that can assist in tactical decisions based on market movements, liquidity, portfolio manager needs and analyst directions.


Automating the desk

Damian Bierman

In today’s high-speed electronic markets, the role of the trader is only becoming more challenging. After all, it is the trader who is ultimately responsible for the overall trade lifecycle: analysing market conditions, selecting the best strategy for a particular order, monitoring the speed and quality of executions as they come in, and making appropriate adjustments as conditions change throughout the day. Unfortunately, strategy selection and optimisation still remain a largely manual process with traditional execution management systems.
The more orders a trader has to handle, the more difficult it becomes to effectively monitor the factors impacting each one. Factor in the sheer variety of algorithmic strategies made available by the sell-side, and the high degree of configurability that allows them to be tailored to so many different market conditions, and it is easy to see how a trader can become overwhelmed. What is really required today is a new type of “thinking” EMS that brings measurable efficiencies and workflow automation to the trading desk, using artificial intelligence to give traders and portfolio managers the highest level of real-time color and TCA on their orders.
Following conversations with our clients, it’s become apparent that significant value can be delivered by intelligently automating this process.
Over time, it seems clear that the buy-side is going to start moving out of their traditional comfort zone in terms of the way they manage their algorithmic executions, because it’s becoming increasingly apparent that real money is being left on the table every month through non-optimal strategy selection. Adopting tools and technologies to automate and optimise this process is a trend which has already started and which is poised to grow steadily in the years to come.
As a result, we have spent the past few years working to develop a solution that uses quantitative predictive analytics to optimise algorithm selection and monitoring. At its core is a 45-factor model that diagnoses a trade at inception and generates an execution profile. As the day progresses, the model is constantly re-evaluated against the initial hypothesis, and adjustments are made automatically to maximise alpha capture while minimising impact costs, adverse selection, information leakage and the impact of high-frequency trading.
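By way of illustration only, the selection step of such a system can be sketched in a few lines of Python. The factor names, scoring rules and candidate strategies below are invented for the example; they are not the 45-factor model described above.

```python
# Hypothetical sketch of feature-based strategy selection; the factors,
# weights and strategies are illustrative, not the actual 45-factor model.
from dataclasses import dataclass

@dataclass
class OrderState:
    spread_bps: float    # current quoted spread in basis points
    pct_adv: float       # order size as a fraction of average daily volume
    realised_vol: float  # intraday realised volatility (annualised)
    fill_rate: float     # fills so far versus schedule (1.0 = on plan)

# Each candidate strategy scores the current order state; higher is better.
SCORERS = {
    "VWAP": lambda s: 1.0 - abs(1.0 - s.fill_rate) - 0.5 * s.pct_adv,
    "LiquiditySeeking": lambda s: s.pct_adv + 0.03 * s.spread_bps,
    "ImplementationShortfall": lambda s: s.realised_vol - 0.02 * s.spread_bps,
}

def select_strategy(state):
    """Diagnose the order and pick the best-scoring strategy; re-running this
    as conditions change mimics the continuous re-evaluation described above."""
    return max(SCORERS, key=lambda name: SCORERS[name](state))

print(select_strategy(OrderState(spread_bps=12, pct_adv=0.15,
                                 realised_vol=0.25, fill_rate=0.8)))
```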
A key facet of the technology is its transparency: we allow the user to see the decision-making process and the factors driving the decision for any trade at any given point in time. In this sense, it acts as a decision-support tool for the trader, giving them information and color on the key factors impacting the execution of their trades in real time, and providing the confidence and support they need to ensure their selections are appropriate.

Closing Equity Auctions In Asia: Rapidly Developing Into A Major Liquidity Event

Tom Kingsley, Head of Execution APAC, and Gary Stone, Chief Strategy Officer, Bloomberg Tradebook, look at the link between increasing institutional ETF use and closing auction volume.
ETFs are a growing global phenomenon. The number of exchange-traded funds and their associated assets under management has exploded since the 2008 financial crisis. This includes ETFs that are Asia-equities focused. According to the Bloomberg Professional® service “fund search” (FSRC <GO>) analytic (Figure 1), 100 to 120 new Asia-equity-focused ETFs have launched each year since 2010. Total assets under management are approaching $28tn (Figure 2). ETF growth doesn’t look likely to slow — what started mainly as a retail product to provide passive exposure to sectors and geographies is now becoming a tactical portfolio management tool for institutional investors to gain long-term exposure to a sector or an asset class through an exchange-traded product. According to a recent Greenwich Associates survey, 18% of institutional funds now include ETFs in their portfolios, compared with 14% in 2012; in addition, half of the institutional investors surveyed said that they intended to increase their ETF allocations by 2014. Over the same period in which ETFs have grown, closing auction volumes have also risen as a percentage of ADV (Figure 3), making them a significant liquidity event. This raises the question: are ETFs changing Asian market structures?
Human beings tend to assume that two observed events that seemingly happen together must be associated in some manner. Any two observed events could be attributed to correlation, causation or coincidence. In the world of finance, a correlation is a statistical measure of how two securities move in relation to each other. Causation is a trigger—one event triggers the other—while a coincidence means that events happen at the same time but not enough observations are available to establish statistical significance or the presence of a clear trigger. Although the data at this point isn’t clear, some anecdotal evidence supports the view that the explosive ETF growth is contributing to the growing significance of the closing auction. Some suggest that this may be because, according to authorised participants, institutional block orders for ETFs are executed at the close. The authorised participant (AP) will tap the primary market to satisfy the block: the AP buys the constituent stocks in the auction at the official closing price and then submits the basket to the fund administrator to receive ETF units at that day’s closing net asset value.
Bloomberg Tradebook’s quantitative research group found, not surprisingly, that although closing auction liquidity growth is affecting the majority of stocks, its impact is uneven. Because closing auction information is not readily apparent, traders need analytics to bring insights to the surface to help them make statistically guided and better decisions.
Deeper closing auction liquidity has several implications for popular trading strategies. For example, before the closing auction became a significant part of ADV, a VWAP strategy could ignore it because the auction had minimal impact on the average. Now, algorithms have to interact with it, or at least give the trader the option of including the closing auction in the strategy’s volume profile.
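A minimal sketch of that choice, assuming an invented volume profile, shows how a VWAP schedule changes once the closing auction bucket is included:

```python
# Sketch: a VWAP schedule that can ignore or include the closing auction.
# The volume profile is invented for illustration.

def vwap_schedule(order_qty, profile, include_close):
    """Split order_qty across buckets in proportion to the volume profile;
    the last bucket is the closing auction. Rounding residue is ignored."""
    buckets = profile if include_close else profile[:-1]
    total = sum(buckets)
    return [round(order_qty * v / total) for v in buckets]

# Five continuous-session buckets plus a closing auction carrying ~20% of ADV.
profile = [0.20, 0.15, 0.15, 0.15, 0.15, 0.20]

print(vwap_schedule(10_000, profile, include_close=False))  # auction ignored
print(vwap_schedule(10_000, profile, include_close=True))   # auction included
```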
Additionally, traders and portfolio managers may want to participate in the close because the liquidity event could offer an opportunity to pick up size. Asia’s closing auctions, however, are not straightforward and are difficult to trade. Auction buildups are volatile and unpredictable. Prices and volumes change throughout an auction, so predicting the uncrossing price becomes difficult. Many of the region’s auctions do not support market orders or pegged limit orders. Setting and resetting limits is manually intensive, so simply placing an order in a crossed market increases the risk of an order not being filled. Special closing auction algorithms are therefore needed to manage orders efficiently. They must be able to predict price and volume from the buildup information and adapt to changing dynamics. Order placement is critical — parent orders have to be split into several “child” orders to establish queue position, hide footprints and adapt effectively to changing auction conditions.

Asia’s market structure is changing — closing auction volumes are becoming more significant. At best, the growth in the region’s ETFs appears to be coincident with the growth in closing auction volumes. Nevertheless, the changing auction dynamics demand new technology. New analytics, such as Tradebook’s STAZ <GO>, are needed to provide traders with closing auction ADV to determine whether execution strategies need to interact with the closing volume. And, because auctions are difficult to trade, special algorithms are needed to execute efficiently.
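The prediction problem has a well-defined core: the uncrossing price is, in most markets, the price that maximises matched volume between buy and sell limit orders. A minimal sketch with invented orders, ignoring exchange-specific tie-breaking and imbalance rules:

```python
# Estimate the uncrossing price as the price maximising executable volume.
# Order data is invented; real auctions add tie-breaking and imbalance rules.

def uncross(buys, sells):
    """buys/sells: lists of (limit_price, qty). Returns (price, matched_qty)."""
    best_px, best_matched = None, 0
    for px in sorted({p for p, _ in buys + sells}):
        demand = sum(q for p, q in buys if p >= px)   # buyers willing at px
        supply = sum(q for p, q in sells if p <= px)  # sellers willing at px
        matched = min(demand, supply)
        if matched > best_matched:
            best_px, best_matched = px, matched
    return best_px, best_matched

buys = [(101.0, 500), (100.5, 800), (100.0, 1200)]
sells = [(99.5, 600), (100.0, 700), (100.5, 900)]
print(uncross(buys, sells))  # (100.0, 1300)
```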

[Figures 2 and 3: Equity ETF assets under management; closing auction volumes as a percentage of ADV]

This communication is directed only to market professionals who are eligible to be customers of the relevant Bloomberg Tradebook entity. Communicated, as applicable, by Bloomberg Tradebook LLC; Bloomberg Tradebook Europe Limited, authorised and regulated by the U.K. Financial Services Authority; Bloomberg Tradebook (Bermuda) Ltd.; Bloomberg Tradebook Services LLC. Please visit http://www.bloombergtradebook.com/pdfs/disclaimer.pdf for more information and a list of Tradebook affiliates involved with Bloomberg Tradebook products in applicable jurisdictions. Nothing in this document constitutes an offer or a solicitation of an offer to buy or sell a security or financial instrument or investment advice or recommendation of a security or financial instrument. Bloomberg Tradebook believes the information herein was obtained from reliable sources but does not guarantee its accuracy.

The Buy-side: Transforming FX Through TCA

Richard Coulstock, Head of Dealing, Eastspring Investments Singapore.

Developing TCAs beyond equities
We began our equity TCA project about seven years ago, so we’ve been working at it for a while. It has been in place for two or three years now without much further adjustment, so it’s almost running on a care-and-maintenance basis. We have subsequently been asked by clients, senior management, audit and compliance: “OK, you’ve done this on the equity side, why don’t you try to replicate it in the non-equity asset classes?” and “How do you monitor custodial FX and your FX processes in general?” One of the outside influences on our looking at foreign exchange was the court cases in the United States a couple of years ago into custodial FX transactions. So there have been both internal and external forces pushing us to look into the non-equity side of things. I believe it is far harder to do FX TCA than equity TCA.
If you think about equity markets, everything’s pretty transparent. You have the exchanges, which provide price and volume information, and everything is time-stamped; if you do your equity trades through an order management system, you can do a like-for-like comparison at any point in time and get an idea of performance against whichever benchmark you like. In the foreign exchange world, we don’t have the benefit of equivalent transparency and it is far harder to make efficient comparative measurements. Generally speaking, foreign exchange has always been seen by long-only asset managers as a bit of a nuisance, but something that has to be done. We have been trying to work through that and to change our thinking about FX, to give it the respect it deserves as an asset class. I think that FX trading is generally inefficient because there has been no measurement, and there have been issues with custodial FX, so it is imperative that we work towards getting this right. I think there are huge savings to be made for every fund.
There were no external products available to buy that would do the measurements for us, so we began by asking our custodians and counterparties from the FX side to do some self-analysis. In the beginning we didn’t know quite what we wanted and the counterparties didn’t know how to do the analysis that we were after. But after a while we started receiving monthly and quarterly reports from each counterparty about FX trading — this meant that we had a process in place to measure and monitor FX that we could explain to regulators and to clients. There is a psychological benefit to this, as soon as you tell someone that they’re being measured and that those results are being analysed, even if the results don’t tell you an awful lot, people do work a little bit harder and put a bit more care and effort into what they’re doing.
You also have to think about what benchmark you use in the FX world, and the fact that the benchmark might change depending on the individual fund you’re trading for. Our spot FX trades fall into three broad categories:

  • Currency trades in unrestricted currencies where you can trade with any counterparty you like.
  • Trades in restricted currencies, where you go through a custodian but get interaction at the time of trade: a phone call, a Bloomberg message, etc.
  • The third category is the most problematic: custodial trades, where the asset manager only finds out what has been done on a post-trade basis, so there is very little transparency. You may just be given a rate and the trade date.

So that third category has been the main issue, but we will be looking at each of the three in our TCA project.
Data management
There are data cleansing issues, in particular with custodial trades. If we only receive advice of those trades on a post-trade basis and they are keyed into our order management system, we have to be aware that the timestamp in the OMS will not relate directly to the execution time and date. For example, if the custodian executed a trade on the first of the month and we enter it onto our system on the second at 2pm, we have to remember that the FX trade wasn’t actually made at 2pm on the second; it was traded at some point on the first. So data cleansing requires a lot of work, and that will take time and a coordinated effort between us and our counterparties.
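A minimal sketch of that cleansing rule, with assumed field names rather than any firm’s actual schema:

```python
# If a custodial trade was booked after the fact, do not trust the OMS entry
# timestamp as the execution time; benchmark against the whole trade date.
# Field names here are assumptions for illustration.
from datetime import datetime

def benchmark_window(trade):
    booked = trade["oms_entry_time"]  # when we keyed it into the OMS
    traded = trade["trade_date"]      # the only execution info we receive
    if trade["custodial"] or booked.date() != traded.date():
        return ("full_day", traded.date())   # timestamp unreliable
    return ("timestamped", booked)           # safe to use the OMS time

trade = {"custodial": True,
         "trade_date": datetime(2013, 9, 1),
         "oms_entry_time": datetime(2013, 9, 2, 14, 0)}
print(benchmark_window(trade))  # ('full_day', datetime.date(2013, 9, 1))
```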
Developing benchmarks 
Benchmarks are more complex in the FX world, and I think that the benchmarks we use will vary depending on the fund for which we are trading. For a custodial trade where all you know is the day you traded, all you might be able to check is that the rate is within the high-low range of the day, or you might compare against the midpoint of that high and low and then look for trends over a period of time. If you are constantly, or 90% of the time, on the wrong side of that midpoint, or heading towards the worst trade of the day, that gives you an indication that perhaps the execution is not as good as it should be. But if we go back to the first category above, a non-restricted currency we can trade whenever we like (and for which we will have clean time-stamped data), then we might start looking at TWAP- or VWAP-type measurements as well as the range of the day.
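As an illustration of the range-of-day check (the fills below are invented):

```python
# Score each execution by where it fell in the day's low-high range:
# 0.0 is the best price for the side, 1.0 the worst. A persistent average
# above 0.5 suggests executions are consistently on the wrong side of the mid.

def range_position(rate, low, high, side):
    pos = (rate - low) / (high - low)
    return pos if side == "buy" else 1.0 - pos

# Hypothetical custodial fills: (rate, day_low, day_high, side)
fills = [(1.3320, 1.3280, 1.3350, "buy"),
         (1.3345, 1.3280, 1.3350, "buy"),
         (1.3340, 1.3290, 1.3360, "buy")]

scores = [range_position(*f) for f in fills]
print(round(sum(scores) / len(scores), 2))  # ~0.74: persistently near the worst side
```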
We are not sure yet which is going to be the clearest and the most appropriate benchmark to use. But over the coming months, we will discuss this with both our provider and each of our counterparties. Benchmarks must be realistic, attainable and measurable and they must be fair to everybody concerned. I don’t want to come up with a measurement process that none of our counterparties are happy with, so we will engage with them to see what they think is realistic but also relevant in terms of improving FX executions for our clients.
Using the reports
I want to get to a position where I can share with each FX counterparty a monthly report that shows performance on a rolling 12-month basis, split by counterparty and by currency pair. I want an idea of trends over time, of who is continually strong or weak in certain currencies, so that we can then divert our trades according to those results. We already do this on the equity side and it works well; hopefully we can transfer it to the FX world too. It will take time to get to that stage because we need to build up a period of clean data.
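Once clean data exists, assembling such a report is straightforward; a sketch using pandas, with assumed column names and invented numbers:

```python
# Rolling 12-month average slippage, split by counterparty and currency pair.
# Column names and figures are invented for illustration.
import pandas as pd

trades = pd.DataFrame({
    "date": pd.to_datetime(["2013-01-15", "2013-03-10", "2013-06-05", "2013-11-20"]),
    "counterparty": ["BankA", "BankB", "BankA", "BankB"],
    "pair": ["USDJPY", "USDJPY", "EURUSD", "EURUSD"],
    "slippage_bps": [1.2, 3.5, -0.4, 2.1],  # versus the chosen benchmark
})

cutoff = trades["date"].max() - pd.DateOffset(months=12)
report = (trades[trades["date"] > cutoff]
          .pivot_table(index="counterparty", columns="pair",
                       values="slippage_bps", aggfunc="mean"))
print(report)
```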


Dealing with high volume

Jay Kagawa, Head of Radianz Services, BT, Japan
Since Arrowhead — the next-generation trading system introduced by the Tokyo Stock Exchange (TSE) — launched in January 2010, electronic trading volume has steadily increased in the region, particularly since last November. Trading volumes on proprietary trading systems (PTSs) also show a significant uptrend.
Regardless of this market trend, the key factors contributing to trading volumes are low execution fees, high liquidity, low clearing fees and taxes. Other costs include connectivity, colocation facilities, hardware, market data feeds and trading software. All of those factors need to reach a reasonable level for market participants to decide whether to come to Tokyo to trade at all.
Following the merger of the TSE and the Osaka Stock Exchange (OSE), Japan Exchange Group (JPX) completed system integration on the cash equity side this July, and derivatives are to be consolidated onto J-Gate next year. It is true that we see some operational synergies from this merger. However, one of my concerns is that although both cash equities and derivatives can be traded at the new JPX colocation site, the OSE still operates its own colocation service, where quite a few market participants remain because it connects them to other trading venues, including overseas venues, in the fastest possible way.
Also interesting is that the OSE colocation site has fewer limitations on connecting to other trading venues, such as PTSs and overseas markets, because it can bypass the Arrownet network provided by Tokyo System Service (TSS). JPX will surely tackle this matter in the coming years, but I believe it is going to be a very tough decision to make.

Change In Japan

With Susumu Usui, Head of Electronic Trading Services Japan, Nomura, and Akio Hori, Deputy Global Head of Execution Services IT, Nomura.
In our view, the exchanges in Japan are very stable and we don’t foresee any notable issues. From a business point of view, we are confident in the TSE’s technology. Two macro-level events will happen in the near future: first, the up-tick rule change; and second, the decrease in tick sizes the exchange will start implementing early next year. These two events will increase the number of messages drastically and will put the TSE’s technology infrastructure to the test.
With the TSE-OSE merger, OSE names have started trading on the TSE system. The volume in these names increased twofold for a couple of weeks after the merger, but we didn’t see any system issues.
Over the mid-term, one of the things the exchange made sure of when it introduced Arrowhead back in 2010 was that the platform was very scalable. They built in-house benchmarks to ensure that when volume reaches certain capacity thresholds, they can easily introduce new servers to expand capacity. They have done this successfully several times over the last four years. With this model and the implementation of Arrowhead, we think the exchange will continue to provide a scalable platform for market participants, where capacity can be increased on demand.
One of the other things the exchange introduced, in 2010, was a co-location service. The new JPX co-lo site was developed and went live in May. It has much better facilities, and many services are provided on the fly. In our view this is a really good innovation.
Looking at the future implementation of Arrowhead: first, a major upgrade is expected in 2015. Due to industry demand and regulatory pressure, the exchange will likely implement a range of new risk controls, such as cancel-on-disconnect and kill switches, and some of these may be rolled out before the major upgrade. That is obviously something member brokers like ourselves and the exchange will continue to discuss.
Second, latency will continue to be a key theme in our industry. Almost every six months we hear a new story about somebody delivering better performance and better latency. The exchange will definitely need to continue innovating and improving this metric in order to maintain its status as one of the world’s top exchanges. There was always a long outstanding question about OSE versus TSE, but this is now resolved as the two have been consolidated.
The OSE was a relatively small market, with a different investment cycle and performance. With the consolidation, we can now operate on a single platform that provides very consistent latency and performance, which is definitely a benefit for us.
The OSE also used to close later than the TSE, but we still needed to trade our baskets, which introduced additional risk for us. Now these names close at the same time, which simplifies our processes.
We think the short-sell regulation changes in November, the introduction of smaller tick sizes, and the Arrowhead upgrade in 2015 are the real drivers of change. We will see a big volume increase, more trading volume, more cancellations and more market data.
In terms of client trading style, once smaller tick sizes are implemented, clients may no longer enjoy decent spread savings by waiting on the near side; it may be better to use an implementation shortfall or liquidity-driven strategy, as the average waiting time on the exchange will shorten.
There is definitely a change in the way vendors are coming into play. Over the last few months we’ve seen many of these new players quite actively reaching out to us and our clients. We think this is an ongoing evolution: there will be more opportunities for vendors to come into play, particularly in the low-latency space.

Saving 20bps And A Lot Of Time

By combining historical and current data with PM trading style to optimise executions, Mark Kuzminskas, Director of Equity Trading at Robeco Investment Management, has been able to make significant savings.
I wanted to devise an optimisation platform whereby we would be able to understand how algorithms performed under various circumstances. I was attempting to build a grid based on the performance of these broker algorithms so that when an order came in, we could look at the grid and determine which algorithm would be best suited for that particular order.
Moving on a couple of years, I was introduced to our new transaction cost analysis (TCA) vendor, providing daily, quarterly and annual review information, and during discussions with them I found out that they had actually already devised an optimisation tool.
When I started out in this industry in 1990, I had to learn the language of the business; we were trading in a fractional environment. But finding liquidity requires a lot of work: I wouldn’t simply put my order into one algorithm and leave it at that; I would use different algorithms at different times, and that was the genesis of the optimisation platform.
The optimisation platform we developed and deployed routes out, in a directed DMA fashion, to the designated liquidity centre. It determines whether the slice it has cut from the parent order at a given point in time will go into a lit or a dark market, whether it is a limit or a market order, and how to react to the volatility within the bid-ask spread. This data is combined with the portfolio manager’s (PM’s) stored trading investment profile and with real-time analytics. That is the key to this: marrying those two elements.
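As a rough illustration of the kind of per-slice decision being described (the thresholds and inputs below are invented, not Robeco’s actual logic):

```python
# Hypothetical per-slice routing decision: venue (lit/dark) and order type,
# driven by the PM's urgency profile and real-time market analytics.

def route_slice(urgency, spread_bps, dark_fill_prob):
    """urgency: 0..1 from the PM's stored trading profile.
    spread_bps: current quoted spread; wide spreads favour passive limits.
    dark_fill_prob: estimated chance of a midpoint fill in the dark."""
    venue = "dark" if dark_fill_prob > 0.4 and urgency < 0.7 else "lit"
    if urgency > 0.8:
        order_type = "market"   # pay the spread to get done now
    elif spread_bps > 10:
        order_type = "limit"    # too expensive to cross a wide spread
    else:
        order_type = "limit" if venue == "dark" else "market"
    return venue, order_type

print(route_slice(urgency=0.9, spread_bps=4, dark_fill_prob=0.3))  # ('lit', 'market')
```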
When I think about trading, the more we cut out the middle, the intermediary and the process, the better the results. When you think about best execution as the best price given the liquidity characteristics of the stock, market conditions, relevant news in the marketplace and, most importantly, the PM’s objectives, the optimiser provides us with the PM’s profile and objectives, because each PM has an investment DNA. We couple that with the market characteristics and conditions out in the industry. Given the volatility in the marketplace these days, a human trader can only trade five to ten orders at a time effectively.
For example, it is 11:00am and I’ve been working an order for an hour and a half in the optimiser. The optimiser can see the volume in the stock over the last few days and can compare it to how it is today. It can look at the volatility of the bid-ask spread, the size of the bids and the offers, the overall performance of the stock relative to its peers, to the market at large and it can do this on a real time basis. A human trader can look at various factors to give an idea of the position, but cannot do it in the same way as a machine.
The optimiser marries the PM’s profile with the real-time analytics in a way that feeds the order to the marketplace accordingly. We use broker DMA pipes to reach the liquidity centres, but the routing decisions are made by us using the optimiser. As you can see, we have taken out many of the intermediaries throughout the process.
Over the years of using TCA I’ve found that the more we remove the intermediary layers, the better the performance. My advice to PMs is that they can have really great performance if they implement trades themselves, because they really know how urgent an order is when they say it’s urgent. And when you consider volatility in the market on any given day with regard to executing a particular order, the PMs know how much sensitivity around price versus volume they truly have.
The advantage of the optimiser is that, although there is a fair amount of probability involved in superimposing the PM’s viewpoint onto the real-time analytics, it is a pretty accurate guess and is fully automated. By methodically balancing positive and negative trading environments, we have already improved our trading by 20 basis points. We have developed an execution blueprint for various types of order profiles and we have been able to engage traders in such a way that they are brought into the process.
Typically, between one and five orders account for the lion’s share of the costs. Instead of focusing on the tails, which can move performance significantly in a positive or negative direction, we have focused on the orders that sit within one to two standard deviations under the curve. It is by trading those orders in a much more methodical manner that we have been able to reduce implementation costs by up to 20 basis points.
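The split being described can be illustrated in a few lines of Python (the cost figures are invented):

```python
# Separate the body of the cost distribution (handled methodically by the
# optimiser) from the tail orders the trader focuses on. Figures invented.
import statistics

costs_bps = [3, 5, 7, 4, 6, 90, 8, 5, 120, 6]  # implementation cost per order
mu = statistics.mean(costs_bps)
sigma = statistics.stdev(costs_bps)

body = [c for c in costs_bps if abs(c - mu) <= 2 * sigma]  # optimiser's job
tails = [c for c in costs_bps if abs(c - mu) > 2 * sigma]  # trader's focus
print(f"mean={mu:.1f}bps body={body} tails={tails}")
```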
What this does is free up the trader to focus on the trades in those tails. While it may be only a handful of trades, they end up accounting for the largest costs because the tails are so big. The optimisation tool puts us in the position of the PM as a trader, so that the PM doesn’t actually have to do the trading. There is a fair amount of customisation associated with this; it’s not just an off-the-shelf package. You have to develop the system, because the optimiser learns more each day about the trades and continually improves the process. The more data you supply it with to develop a baseline benchmark, the better it lets you develop a trading plan where you’re having trouble. The quality of data is important, particularly behind the scenes, in terms of ensuring you have the proper data fields coming from the order management system into the main system as part of the overall baseline of analysis.
Data management issues
The processing is dealt with by S J Levinson, but we look at the real-time tick data, which has to be fed into the process. There are many hours of number-crunching and processing each night in order to reach the starting point the next day with another day’s worth of implementation and execution knowledge added on; we are building layer by layer every day. Once we had developed a baseline, we were able to set goals for the various strategies and order types and put the plan into action.
There are two factors to consider. One is that, from a practical perspective, we don’t have to put order flow into the optimiser on a given day; it’s at our discretion. That is part of why a trader’s role is so important, particularly on days like the Monday of the Boston bombings. But because we have a baseline execution benchmark, we can also now assess the job a trader does when he or she pulls an order out and trades it differently from our benchmark goals.
To illustrate: based on all the research available and the industry commentary that day, I didn’t put those orders in the optimiser; I traded them myself. Through our analysis it became apparent that the rate at which I was trading was not the rate at which the optimiser would have traded, so we can compare my trades relative to what the optimiser would have done.
It was then easy to determine whether I had added value or not. Those particular trades did add value, but there will be times when we can see that I might not have. For example, say I have an order to buy in a down tape and I think this is the time to go all in, so I remove the trade from the optimiser and just go for it myself, and it turns out that the market is in a downward spiral for the rest of the afternoon; it’s obvious that I executed too quickly. The optimiser would have flagged this up.
I know there are desks that use TCA as part of the compensation discussion for their traders. I’ve never been a believer in TCA as a means of benchmarking traders for compensation. But now that we have a plan based on benchmarking our historical performance and how we would expect a trade to develop, there are guidelines that help you analyse how a trader is performing when we go away from the optimiser.
As it is a relatively new platform for commercial use, we are able to request regular customisation around the optimisation we are looking at, so we have been able to put things like kill switches to good effect.
If we see that certain things are going awry in the marketplace, we can hit one button and shut everything off at once. We have a warning system on the platform that alerts us to unusual activity, and a messaging system that is activated when something doesn’t appear to be executing according to plan; for example, if the optimiser stops executing, we need to know about it, and if it’s executing above plan, we want to see why that might be the case. So there is some potential for further development while the product hasn’t yet been vetted by many clients to the fullest extent.
We are the risk control, so we are building certain benchmark measures into the platform to alert us where necessary. Some measures are automatic in terms of notional limits — order size limits, for example. Even when we put a market order into the optimisation platform, there is a calculated limit to provide an execution band, so we don’t have a ‘runaway train’. There are risk control parameters embedded within the system, but we have also added additional parameters so that the trader can take more responsibility, and we are able to customise these parameters according to our risk tolerance measures.
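A minimal sketch of such an execution band (the band width and reference price source are assumptions):

```python
# Cap a 'market' order with a calculated limit so it cannot run away:
# reference price plus or minus a band in basis points. Width is an assumption.

def banded_limit(side, reference_px, band_bps=50):
    band = reference_px * band_bps / 10_000
    return reference_px + band if side == "buy" else reference_px - band

# A buy market order with the stock around $25.00 is capped at $25.125.
print(banded_limit("buy", 25.00))
```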
Looking back, looking forward
I wouldn’t do anything differently I don’t think; I’m very comfortable with the solution that we have. Because we adopted the system early, we are very fortunate to be working with a provider who is enthusiastic about adding functionality that assists us and which would be beneficial for future users. So it’s a good working relationship from that perspective. In terms of further development, each portfolio manager (PM) has their own style. If the optimiser knows which PM it is dealing with and what their historic profile is, the optimiser will change the implementation profile for each PM.
So you could take the same stock for two different PMs with two different profiles. One kind of PM is ‘legging into it’ and might recognise that the time horizon could be extended a bit further, whereas another might need to tighten up the horizon, trade more quickly and look for liquidity in a slightly different way.
This is what makes our business unique. As traders we understand why we organise our desks to trade for specific PMs — so you get to know their tendencies — but it is actually difficult to adhere to that consistently because of the human element. We are emotional; we may lose sight of the PM’s DNA and get caught up in the moment and the market dynamics at any given point. What we learn, however, is to let our winners run but cut the losers more quickly.


HFT misjudged? Lynn Strongin Dodds

Lone Ranger

After being a target of abuse for so many years, it must have been heartening for the high frequency trading community to read something that extolled their virtues. A recent working paper by three US academics – Jonathan Brogaard, Terrence Hendershott and Ryan Riordan – found that these traders improved price setting and helped reduce “noise”, or short-term volatility, in markets; the authors also warned regulators that introducing measures to curb HFT activity “could result in less efficient markets”.

This isn’t the first of its kind as there have been a slew of academic probes into the practice which accounts for half the volume of US equity trading and 35% Europe. A widely cited UK-commissioned “Foresight” report, for example, published a year ago, also found high-frequency traders had improved liquidity and cut transaction costs but called for a tighter grip around the sudden swings in markets. Many blamed HFT rapid fire technology for the so-called “flash crash” of May 6 2010, when US equity indices plummeted in a matter of minutes only to rally sharply.

The big question is whether regulators will heed the academics’ recommendations. After all, the ECB published the report, even if it did stress that the views were those of the authors and not necessarily its own. It was, though, put on the website in the run-up to the European Parliament’s debate on new HFT rules for the MiFID review. There has been bad blood between the ECB and the EU Parliament over how much accountability the central bank should have. Surely, though, politics would not get in the way of the regulator making a decision?

Lynn Strongin Dodds
