
The Road from Tokyo to Osaka

Mizuho Securities’ Spyridon Mentzas discusses the status of the Japanese exchange merger and offers thoughts on how well the two systems will merge and the benefits investors can expect.
Compatibility
The merger of the Tokyo Stock Exchange (TSE) and Osaka Securities Exchange (OSE) is not yet finalized, but it appears they will merge at the beginning of 2013, with the details yet to be specified. The first impression is that they have nearly identical trading rules with some minor differences, such as the OSE trading until 3:10, while the TSE closes at 3:00. When the TSE decided to shorten the lunch break in November, the OSE did the same. When one of the exchanges (usually the TSE) changes its rules, the other moves in tandem: for example, changing the tick sizes. If the merger does go ahead, it is likely that they will use the TSE’s cash system, arrowhead, and the OSE’s J-GATE for derivatives. They will keep the existing systems running side by side, arrowhead for cash and J-GATE for derivatives, which will reduce costs because they will not have to maintain two systems for the same products.
Further Industry Consolidation
The ECNs in the US enjoyed technological superiority over the classic exchanges, where NYSE’s latency was significantly slower than Arca’s. This would have been reason enough for the TSE to consider buying a PTS, but with arrowhead’s current latency of less than 2 milliseconds (and another upgrade in the next few months targeting less than a millisecond), simply buying a PTS would not give them a noticeable advantage because the TSE and OSE are on par with the PTSs. The reason why PTSs are increasing their market share is that, unlike in the UK and US, where MiFID and Reg NMS require trading at the venue with the best price, in Japan the PTSs draw volume through decimal pricing and smaller tick sizes than the incumbents.
For example, Mizuho Financial Group might trade on the TSE at 105 yen bid, 106 yen offer. That one yen spread is close to 100 basis points, or almost one percent, whereas a PTS can quote in 0.1 yen increments. This is a major incentive for investors to buy and sell on the PTSs, with their smaller increments, to reduce market impact and trading costs. From the beginning, the regulators have not been overly concerned about the PTSs deciding to trade in decimal places and use 0.1 yen ticks. It was always up to the PTSs to decide, and the TSE could do the same. If anything, I think the new exchange would rather reduce its tick sizes than merge again.
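To make the arithmetic concrete, here is a minimal sketch of the spread calculation, using the 105/106 yen quote above; the 0.1 yen increment for the PTS side is illustrative.

    def spread_bps(bid, ask):
        # Quoted spread as a fraction of the mid price, expressed in basis points.
        mid = (bid + ask) / 2.0
        return (ask - bid) / mid * 10_000

    # One-yen tick on the incumbent exchange versus a 0.1 yen tick on a PTS.
    print(round(spread_bps(105.0, 106.0), 1))  # ~94.8 bps, i.e. almost one percent
    print(round(spread_bps(105.0, 105.1), 1))  # ~9.5 bps at the finer increment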
However, not all participants would be happy to see new tick sizes, for example, some of the proprietary houses or small firms that trade with retail, as altering their downstream systems to handle decimal places would be costly.
This will also create a fragmentation of liquidity across tick sizes. The bids and offers on the TSE are often thick, with something like 50 billion shares sitting on the bid side, so with 0.1 yen ticks the average order size might move to 3 million or 1 million shares. Traders who want to buy a large lot will have to scroll up and down to find out how far they have to go up to absorb the available liquidity. For the traditional long-only traders, I think this might mean an increased scattering of liquidity. There is sufficient liquidity in the market at present, even for stocks trading at a low price; there are market makers trying to make 1% during the day. If smaller tick sizes are introduced, that liquidity will likely be scattered or disappear.

Regulatory Hurdles

The regulators are not likely to be overly concerned about this merger. As a listed company, OSE has shareholder requirements that the TSE does not. There is also a great level of confidence because both operators are Japanese. With a foreign operator, the concern for the regulators would be that they may pull out of the market if they do not realize the expected profits.
The exchanges in Japan also participate in self-regulatory corporation activities, where they monitor trading, market participants and stock listings. A PTS, on the other hand, is not self-regulatory, so while they perform market surveillance, they are not required to perform evaluations of listed companies; they simply track which stocks are listed on the TSE and OSE. The exception to this is BATS, which added this function once it was approved as an exchange. For the TSE and OSE, a merger will create efficiency and cut the overheads of these self-regulatory activities, as they will not have to duplicate these efforts.
Improved Trading Conditions
As a broker servicing domestic, international and retail clients, we have to consider all of our clients’ needs. If there was only one exchange, we would have to connect to that exchange; if there were 50, then we would have to connect to all 50. From a broker’s point of view, if the exchanges do consolidate into one and that one exchange covers equities, futures, derivatives and commodities, then it will reduce our investment in connectivity and trading systems, not to mention membership fees. Also, when you consider high frequency trading and colocation, currently most brokers colocate at the TSE for cash equities and at the OSE for futures, but a merger will hopefully mean we only have to take one rack for both equities and futures. For the OSE, the data center for futures is in Tokyo and cash is in Osaka, so the best case scenario is to have one primary data center for both products and one for backup, creating efficiency and cutting costs.
One other point to note is fragmentation. There is not much fragmentation in Japan compared to the US or Europe, where the incumbent exchanges now handle less than a majority of the volume. In Japan, adding together the TSE’s and OSE’s volume would account for more than 95% of total volume and marginally less fragmentation, which might in turn result in new foreign investment firms and brokers entering the market and thereby increasing liquidity.

Views from India: An Algo Roundtable

Brian Ross of FIX Flyer talks to the buy- and sell-side, presenting the latest lessons on high frequency trading and algorithms from the Indian market.
India’s capital markets are experiencing increased interest from local and global firms and new rules are set to attract high frequency trading (HFT).
The capital markets regulator, the Securities and Exchange Board of India (SEBI), the exchanges, brokers and many investors are in favor of abolishing the Securities Transaction Tax (STT). Eliminating STT would have a positive impact on market turnover and help high frequency traders to be more profitable, while at the same time narrower spreads should drive up trading volumes.
STT has been levied on all trades, domestic or foreign, in both the equities and derivatives markets since 2004. At the time, the purpose was to generate tax revenue and to protect market integrity by slowing down the pace of technological advancement by a few well-funded players. Revenue generated by STT amounted to around USD 1.5bn in 2011.
It is widely expected that STT will be eliminated this spring, bringing new opportunities for HFT in one of the world’s biggest and fastest growing capital markets.
To better understand the situation, we asked five panelists who are leading the charge in HFT in India to share their insights with us.
You never forget your first algo. When you first got involved in algorithmic trading, what problem were you trying to solve? What was your decision process, and what technologies did you use?

Sanjay Rawal, Open Futures:
We started off using algos for trading purposes and the first one we built was for a specific type of arbitrage that was getting difficult to run using manual input. We used third party software for the exchange connectivity and wrote our algo in C#.
Vishal Rana, IIFL Capital:
My first experience with HFT was trying to create a straight-arb model on a real-time basis. Although it was a simple model, the most difficult thing was cleaning the data. We got the data dumps and it took a lot of effort to clean them. Most of the coding was done in C++.
Rohit Dhundele, Edelweiss:
At the outset of the project, the easiest yet most important task was gathering the business intelligence to be subsequently converted into algorithms. Some of the more intricate decisions were the selection of order, execution and risk management systems to ensure a stable backbone for the platform. Other equally important criteria were a flexible programming environment and a friendly interface for users. To achieve these objectives, we had to decide whether to build or buy this technology.
At Edelweiss, we realized relatively quickly that there is a sweet spot between the two extremes of in-house vs. outsourced solutions. We have since been following this model – combining the best of both worlds, which has helped us deliver customized solutions within acceptable turnaround times, whilst still protecting our IP.
Sanjay Awasthi, Eastspring Investments (Singapore) Limited:
In the Indian markets, propelled as they are by rapid information dissemination systems, anonymity becomes a key factor in determining efficient trading. It was this need for anonymity that propelled us towards algorithmic trading. Continued use and familiarity lead to further benefits by way of better execution control. Algorithmic trading has thus become an important part of our execution arsenal.
Chetan Pandya, Kotak Securities:
The first algo I worked on and put into production was calendar rolls for derivatives. Our trading desk had huge positions to roll from the current month to the next, and manual execution was leading to slippages and, at times, erroneous executions. Using the NSE’s two-legged order, we created a simple algorithm which would roll the position at the desired spread.
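A rough sketch of that roll logic is given below; the quote and order functions are hypothetical stand-ins for the exchange gateway, and the clip size and spread convention are assumptions for illustration.

    import time

    def roll_calendar_position(qty, target_spread, quote_spread, send_spread_order, clip=50):
        # quote_spread(): current quoted calendar spread (next-month price minus current-month price).
        # send_spread_order(): stand-in for submitting the exchange's two-legged spread order.
        remaining = qty
        while remaining > 0:
            if quote_spread() <= target_spread:      # roll only at or inside the desired spread
                lot = min(clip, remaining)
                send_spread_order(sell_leg="FUT_CURRENT_MONTH", buy_leg="FUT_NEXT_MONTH",
                                  qty=lot, limit_spread=target_spread)
                remaining -= lot
            else:
                time.sleep(0.5)                      # wait for the spread to come back to the target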
My first observation regarding algorithmic trading was to appreciate the difference between an individual trading manually and a machine trading automatically. There are so many things that come naturally to a human being but need to be told to the machine. Sometimes I wonder whether an algorithm can ever fully replace a human being. There are nuances of the market, and events that lead to erratic market behaviour, that cannot be fully programmed for.
Also, I had to ensure that there was no room for error when trading on an algo platform, primarily because of the sheer number of orders it can process in a single second and the inability to spot something going awry with the naked eye at that speed. Hence, I also had to think about the risk management capabilities of the algorithmic platform, while ensuring that risk management does not lead to inefficient execution due to added latency.
In terms of technology, we were limited to applications that conformed to our market regulations. Once we had the base framework and architecture ready, we integrated it rapidly with our existing applications for order routing and downstream workflows.
There has been a great deal of press about HFT, in some cases, suggesting that it is unfair or even illegal. After the ‘flash crash’ in the USA and similar ones in Europe and Asia, how concerned are Indian exchange operators about HFT? What are your thoughts on HFT? How will SEBI move to address HFT?

SR: I think the exchange operators themselves are not against HFT and that they would like to see continued participation from HFT players. SEBI will certainly look towards regulation of HFT. I believe that as algos get more sophisticated, the marketplace will look unbalanced and SEBI will therefore be forced to look more closely at HFT. I believe that the opposition will come more from exchange participants than from investors, who will benefit from HFT.
RD: As has been pointed out, there has been much negative press recently concerning the use of HFT techniques. A considerable number of studies have been conducted around the flash crash and, to the best of my knowledge, none of them has confirmed a ‘one-to-one’ correlation between HFT and the crash. The concerns might well have been blown out of all proportion; however, they are not completely unsubstantiated.
HFT firms trade at lightning fast speed, create high throughputs and earn wafer thin margins on every trade. It is akin to flying a jet a couple of inches above the ground; there is little scope for error. Consequently, an error in the HFT domain can have an extensive impact. I hope this is not being misconstrued; low frequency traders are also capable of creating mayhem.
Such events directly impact the quality of liquidity available and have consequences reaching wide and deep into the trading ecosystem. Given the risks involved, it is only prudent for the regulators and exchanges to intervene through proper risk management systems and processes. Banishing HFT and similar products altogether is not the solution, as there are proven benefits to these trading styles; moreover, there is space for everyone in the capital markets. SEBI has already expressed similar concerns and will act in tandem with the exchanges.
SA: As a predominantly long term fund house, we at Eastspring Investments (Singapore) Limited constantly strive to reduce the impact costs of trade execution for our clients. To this end, any measure that enhances liquidity and reduces spreads in the market is welcome. I believe HFT mandates changes in the market microstructure, which increases depth in the market and helps long-only funds, like us, to reduce impact cost. Without getting into the debate of the good and bad of HFT, I think any new trading strategy and its proliferation will alter market patterns. This requires us all to adapt our trading strategies in order to best satisfy the interests of our clients. It is important that there is a level playing field for all types of investors and that there are adequate systems and regulatory safeguards in place to protect the integrity of the market.
More specifically, in the Indian context, the market microstructure in both the cash and the derivative markets is ideally suited for HFT. The only major impediment is that the securities lending and borrowing market in its present form has not really taken off.
CP: SEBI is rightly worried about the rapid proliferation in the usage of algorithms. We expect the regulators to come out with a more comprehensive risk management framework. In my view, HFT will form a significant part of trading in India in the future. How and when this will happen, and what will trigger it, is anybody’s guess. While technologically we will be ready, or possibly already are, there will need to be changes in regulation to support HFT in India. As you are aware, foreign institutional investors today have to give and take delivery on each trade that they do on the exchanges, and intra-day netting off is not permissible, which clearly is not supportive of the HFT philosophy.
What are the challenges of using algos on multiple exchanges in India? Is this kind of arbitrage or Smart Order Routing technology going to take off?

SR: If we get far larger volumes on the BSE, which they are working very hard towards, we will see explosive growth in Smart Order Routing (SOR). But the exchanges will have to allow not only SOR, but also the use of market data feeds, so that more sophisticated algos can be run for valuing futures as well as options. I believe that it will lead to higher volumes on both exchanges, though rather more for the larger player than the smaller player.
VR: The biggest challenge we are facing is that respective exchanges have not opened their APIs to each other, so different algos need to be run for the exchanges and the same algo does not work across the exchanges. Eventually they will have to open their APIs and SOR will pick up.
RD: Prior to defining the challenges with respect to SOR in India, perhaps it is better to put this technology into perspective vis-à-vis developed markets. The trading landscape in these markets is significantly different from India due to the presence of dark pools and ECNs over and above the exchange platforms. The total number of such liquidity pools, which at times runs into double digits, makes efficient trading across venues a colossal task.
SOR consists of advanced quantitative techniques backed by sophisticated technologies for optimizing volumes, price impact, speed and trading costs across all possible venues. In India, the landscape is simpler: there are two major exchanges and only lit liquidity, which makes the work of decision support systems relatively straightforward. While business objectives are simpler in India, the initial implementation will face challenges from a technology standpoint. Reconciliation of varied native formats for market broadcast and of permissible order types across exchanges will be at the fore. Inter-exchange arbitrage will also have to deal with dissimilar latency across venues due to their geographical separation. Notwithstanding the constraints, SOR does move the market microstructure towards efficiency and will therefore gain wider acceptance from the market community over the next few years as brokers, exchanges and vendors move up the learning curve.
SA: Algorithms were essentially sell-side tools and, thanks to new technology, they are now available to the buy-side as well. So to use algorithms successfully, the buy-side has to acquire and adapt sell-side skills.
CP: This is related to the challenges of market microstructure, margining requirements, clearing and settlement, and transaction charges in India. While SOR is in place, my belief is that it has yet to take off in a big way because of the above-mentioned limitations. But I am sure that this will pick up as we go ahead, as people in the market are definitely looking for alpha. If they see returns in these strategies, then the challenges will also get addressed.
What are the tax and fee implications for HFT in India? For exchanges and brokers?

VR: Transaction costs are an obstacle for HFT; hence micro high frequency trading, or very high frequency trading, is still not taking place. The threshold levels to trade are still very high.
SA: Algorithms are just tools, the use of which is a function of various factors including the desired outcome and other benchmarks. At Eastspring Investments (Singapore) Limited, we use implementation shortfall as a benchmark to measure execution performance. So the choice of a volume-based or a price-based algorithm depends on the size of the order and the trading pattern of the stock and other benchmarks that may be applicable.
The choice of the algorithm is also highly situation specific and in the future we should see the evolution of intelligent parameter building, based on market microstructure studies and other research. This will result in algorithms being framed to apply in an automated fashion to a variety of situations.
In Asia, and in India specifically, there needs to be further research on the market microstructure before such parameter defining and algorithm building begins. Several broking bodies do invest resources in this type of research. Market microstructure research combined with proper historical transaction cost analysis will pave the way for customized solutions. That is where I see the industry moving to in the near future.
CP: There are multiple taxes and fees levied by the regulator and the government on exchange transactions, and they vary between the equity and derivatives segments. The exchanges levy transaction fees, which are a major source of revenue for them, while the government levies the Securities Transaction Tax, stamp duty on exchange transactions and service tax on the commission charged by brokers. These taxes make certain types of strategies unviable. HFT will also face this hurdle of high taxes and charges.
Is low-latency a requirement for many algorithms?
SR: Without question. Most strategies, including trend following, will not work without low latency. The incumbent players will not leave enough on the table for others if you are unable to compete on latency.
RD: Yes, low latency is a prerequisite for many, but not all, algorithms. As a general principle, each order should reach the venue as quickly as possible; there is no downside to being first to reach the order book. But there are costs, exorbitant at times, to maintaining a low latency infrastructure, and the cost versus benefit will not work out for all trading styles.
Low latency is a crucial ingredient only for trades where many market participants are chasing a similar metric, for example arbitrage. In short, investing in a low-latency system should be driven by the trader’s positioning in the market and not because the term is in vogue on public forums.
CP: Low latency is a prime requirement in derivatives, where at times it becomes an all-or-none game for certain types of execution. The market for cash execution algorithms is slowly but surely moving to low latency. With a gradual increase in alpha-seeking algorithms, low latency will be the need of the hour.

Latency Measurement: Impact of the FIX IPL Standard

TS-Associates’ Henry Young, Co-chair of the FIX IPL Working Group, discusses the anticipated impact of the new FIX Inter Party Latency (FIX IPL) standard.
The FIX Inter Party Latency (FIX IPL) standard, version 1.0, will hit the streets shortly after this issue has gone to press. Now that all the hard work of designing, formulating and testing the standard has been completed, thoughts turn naturally to issues of adoption and impact on the market for latency monitoring solutions. But let’s first revisit the motivation for FIX IPL.
The 1.0 release of FIX IPL is designed to achieve two things:
• The standardisation of where latency is measured, and
• Interoperability between latency monitoring solutions.
The first point enables latency statistics published by different firms to be compared more meaningfully – ‘apples to apples’ style. The second point enables the latency monitoring solutions operated by different firms to be interconnected or ‘peered’, so that inter party latency can be measured without requiring each firm to operate a latency monitoring solution supplied by the same vendor.
The next point to consider is the likely adoption of such an interoperability standard. What’s in it for financial market participants, and for latency monitoring solution vendors? As participation in the FIX IPL Working Group has demonstrated, both constituencies see advantages in such a standard existing and being widely adopted. It will make inter party latency monitoring easier, reduce costs for participants and lower the barriers to entry for new solution vendors, thereby creating a larger, broader and faster growing market for latency monitoring solutions. This will be to everybody’s advantage. Those who follow the proprietary route and fail to adopt the standard will be left out in the cold. We expect support for FIX IPL to become a standard tick-box item in latency monitoring RFPs.
FIX IPL Architecture
The FIX IPL architecture has been designed to support latency monitoring for price and order flows, both FIX and non-FIX, in a wide variety of situations. This schematic shows a simple case that demonstrates the advantages of the FIX IPL interoperability standard. It shows an order flow between two trading parties A and B. These could be either a buy-side client and a broker, or a broker and an exchange. The flow is monitored in each party’s domain using a FIX IPL Source, which observes each message in the flow, extracts some content from each message for unique identification purposes and associates a time stamp with each message observation. A FIX IPL Source transmits a series of observation messages, each of which contains the unique identification content from the observed message and an observation time stamp.
The FIX IPL messages generated by each FIX IPL Source are then brought together by a FIX IPL Sink, which could be operated by either of parties A or B, or by an entirely different party C, as shown in the schematic. The FIX IPL Sink then correlates the observation messages from each of the FIX IPL Sources, matching up the message observations at A and B, and calculating latency by subtracting the observation time stamps. The resulting per-message or per-order latency metrics can then be aggregated into time interval statistics describing the latency properties of the order flow.
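A minimal sketch of the Sink’s correlation step follows, assuming each observation has been reduced to a (unique identifier, time stamp) pair; the actual FIX IPL message layout is defined by the standard itself, not here.

    def correlate(observations_a, observations_b):
        # Match message observations from Source A and Source B on their unique
        # identification content and compute per-message latency (B minus A).
        seen_at_a = dict(observations_a)                 # unique_id -> time stamp at A
        return [t_b - t_a
                for unique_id, t_b in observations_b
                if (t_a := seen_at_a.get(unique_id)) is not None]

    def interval_stats(latencies):
        # Aggregate per-message latencies into simple time-interval statistics.
        ordered = sorted(latencies)
        if not ordered:
            return {"count": 0}
        n = len(ordered)
        return {"count": n, "min": ordered[0], "median": ordered[n // 2], "max": ordered[-1]}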
There is, however, a more subtle aspect to what the FIX IPL standard will achieve. Let’s consider for a moment the components of a latency monitoring solution (see FIX IPL Architecture). In order to monitor latency, the minimum requirement is for two FIX IPL Sources to generate FIX IPL messages, and for one FIX IPL Sink to correlate the FIX IPL messages and thereby calculate latency. Standardisation of the communications between these components means not only that the two FIX IPL Sources may be of differing origins, but the FIX IPL Sink may be of a third origin. The FIX IPL standard therefore achieves not only horizontal decoupling between FIX IPL Sources, but also vertical decoupling between FIX IPL Sources and FIX IPL Sinks.
The implication of this vertical decoupling is that we can now decompose latency monitoring solutions into two layers – the generation of FIX IPL messages and the correlation/analysis of FIX IPL messages. This opens up the market to vendors who may wish to embed FIX IPL Sources in their trading infrastructure components with no concern for correlation/analysis. Meanwhile, other vendors may wish to focus exclusively on correlation/analysis of FIX IPL messages without needing to be concerned with their generation. The disciplines required for each layer are quite different. The former requires the use of specialist hardware techniques for precision time stamping and clock synchronisation. The latter is a purely software-based task with no dependency on specialist hardware.
In conclusion, the FIX IPL standard will herald an opening up and broadening of the market for latency monitoring tools. We expect to see an increasing number of solution vendors offering more specialised products that will be interoperable. New entrants will probably focus on one of the two layers facilitated by the vertical decoupling between FIX IPL Sources and Sinks. FIX IPL message generation will become an embedded feature of trading system components and specialist instrumentation solutions. FIX IPL message correlation/analysis will be open to a wider range of monitoring solution vendors.
Some industry participants will probably feel that reaching the version 1.0 release point is the end of the story. Actually, it is only the beginning. Released onto the open market as a non-proprietary standard that is free for all to use, the FIX IPL standard’s future adoption, use and evolution are impossible to anticipate with any certainty. But I think one thing is certain… it’s going to be a lot of fun watching how things play out over the coming years.
FIX Inter Party Latency (FIX IPL) Standard
The FIX IPL Working Group is part of the non-profit, global FPL organisation and has over 190 members encompassing exchanges, brokers, funds, service providers and latency monitoring solution providers. The working group has brought together leading industry experts to standardise two specific aspects of latency measurement.
Firstly, the working group has developed a taxonomy of standardised measurement points. This is vitally important, so that when different organisations publish latency statistics that comply with the standard, financial market participants will know the data can be compared on a like-for-like basis. The FIX IPL taxonomy is shown in the schematic.
Secondly, the working group has developed a latency data interoperability protocol. This has been designed to enable interoperability between latency monitoring solutions, thus freeing the industry from unacceptable bilateral constraints on purchasing decisions. A widely adopted latency measurement interoperability protocol will, as with FIX, break down the barriers to the uptake of standards-compliant latency monitoring solutions.

The Wisdom of Crowds: Open Source for Middleware

Feargal O’Sullivan and Jamie Hill of NYSE Technologies discuss OpenMAMA, the open source Middleware Agnostic Messaging API they hope will expedite innovation in services, reduce vendor lock-in and minimize implementation time and cost.

Solving a Problem
Choosing a market data vendor because of their API alone is not sound practice. The issue of how to come up with a standard way of accessing market data that allows clients to select a vendor for any range of reasons – other than the API that the vendor happens to offer – has been a struggle for a long time. Something that should be low on any decision-making tree has unfortunately tended to be much more important. There are a number of consolidated market data vendors, including some obvious names like Thomson Reuters or Bloomberg, and there is also a range of direct feed or ticker plant vendors, where, instead of going to a consolidator, feeds are accessed directly from an individual exchange.
In selecting a vendor, users must write all their code to suit that vendor’s particular way of accessing the data. Changing to a different vendor requires opening up the source code and altering everything to match how the other vendor wants the market data to be accessed. With a consolidated feed for broad international access and a direct feed for low latency algo trading in US equities, for example, many users have to write to two to four different APIs. This has been a significant problem for the industry, and with OpenMAMA we are trying to drive the industry towards a standard.
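Conceptually, the fix is a thin abstraction layer that applications code against, with vendor-specific adapters behind it. The sketch below is purely illustrative and is not the OpenMAMA API itself; the class and method names are invented.

    from abc import ABC, abstractmethod

    class MarketDataSource(ABC):
        # One abstract interface for the application, whichever vendor sits behind it.
        @abstractmethod
        def subscribe(self, symbol, on_quote):
            """Deliver quote updates for `symbol` to the `on_quote` callback."""

    class ConsolidatedFeedAdapter(MarketDataSource):
        def subscribe(self, symbol, on_quote):
            ...  # the consolidated vendor's session handling, symbology and field mapping live here

    class DirectFeedAdapter(MarketDataSource):
        def subscribe(self, symbol, on_quote):
            ...  # the direct exchange feed's specifics live here

    def run_strategy(source: MarketDataSource):
        # Application code does not change when the vendor behind `source` is swapped.
        source.subscribe("XYZ", on_quote=lambda quote: print(quote))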
User Base
This API is an eight-year-old standard that was initially developed by NYSE Technologies as the Middleware Agnostic Messaging API (MAMA), and it is quite heavily deployed in the financial services industry; close to 200 clients already use this API in their custom applications, so today it has an established installed base. We have opened that up and made it a standard by taking the source code for the APIs these firms are using today and providing it to The Linux Foundation, which will physically host the code as a neutral body.
During this process we worked with multiple parties that would not ordinarily use our API. Since the launch of OpenMAMA on 31 October 2011, one of the key factors in this being taken seriously as an open initiative was getting the right level of adoption. Before we launched, we approached a number of customers, other vendors and competitors, and from these we established our launch partners: J.P. Morgan, Bank of America Merrill Lynch, Exegy, Fixnetix and EMC. These launch partners, along with NYSE Technologies, formed a steering committee to drive the direction and the future of OpenMAMA.
From that point forward, each of the organizations on that committee has had a stake in OpenMAMA. The API is open source under the LGPL 2.1 licence, so it is now owned by the open source community. With participation from Interactive Data, Dealing Object Technologies and TS-Associates as well, we now have a group ten strong, and it is a global mix comprising different industry segments. Whereas before the API was driven largely by NYSE Technologies and our commercial use cases, now it is being driven forward as an industry standard. The more people we have adopting and participating, the higher the likelihood of achieving that.
Adoption
OpenMAMA is very consistent with the development models used by subscribers of market data applications, so porting to this API is straightforward. The strong response from competitors who wish to join this initiative is proof that they understand its value. The measure of success is that firms no longer feel locked in to any particular vendor by the technical hassle of changing, and, more importantly, that innovation is sparked.
No one vendor is going to be the solution to every particular need. By opening up this API we bypass the wasted cycles of designing and writing new APIs that only get used by one firm and so we are encouraging the industry to innovate beyond basic plumbing and to start to create new services that offer a community environment.
Enabling Innovation
OpenMAMA enables firms that are creating applications to turn those applications into objects and make them service-oriented applications. Much like what FIX did for transactions, we hope to standardize application development for the front office. This has the potential to offer significant cost reductions to developers building new event-driven applications for analytics, algorithms, historical back testing or intelligent risk checks.
Typically most vendors have to build a more complex system because they have their own proprietary API as well as those used by other service providers. Standardizing this API allows firms to focus on offering the best analytics without concerns about compatibility with market data vendors and order management systems, linking the application fluidly with existing OMS, desktop applications, Excel plugins, etc. This drastically simplifies the adoption of small but useful applications. Everyone involved has identified that OpenMAMA simplifies creating event-driven applications; there is a single API and no vendor lock-in.
At the moment, we have a preliminary road map brought together by the committee members. Some of the projects we are collaborating on include an open-standard market data model, standardizing monitoring interfaces and standard interfaces for entitlements. We have also had a number of prominent players in the Advanced Message Queuing Protocol (AMQP) space offer to build and contribute back bridge support for that too. We are talking about building guaranteed messaging query APIs with all open source payloads.
All of this is still in the preliminary stages as the steering committee has now met twice, while discussions go on in between. Underneath, the steering committee comprises various working groups, which contribute technical resources. The beauty of this is that we have collaboration between vendors and end users, both working together to develop future-proof software.
Balancing Neutrality and Throughput
There are three levels of low latency trading: ordinary trading is measured in milliseconds, low latency trading in hundreds of microseconds and ultra low latency trading in tens of microseconds end to end. At the ultra low latency level, firms must customize the normalization in their own applications by learning each individual feed, processing it in their application space, programming FPGAs and generally becoming experts in everything related to that market. As a result, it is difficult to build a generic interface for the ultra low latency traders, because their competitive edge comes from being different.
For the low latency space, however, OpenMAMA is ideal, because latency applies not only to how quickly market data is processed but also to how quickly strategies are moved into production. Programming an FPGA can slow implementation, and a fast final product may be eclipsed by another participant who started trading the strategy six months earlier.
Next Steps
The next important area we are working on is the Data Model. Every exchange has its own data model for how it sends and receives market data. We hope to use the same committee to standardize the data model by harmonizing fields among the various market data vendors, and we have a very good head start, with over 200 feed handlers represented. The API already has an installed base, interest and momentum, so we are very confident it will be adopted. The Data Model will take longer for vendors to standardize, but it is not difficult. After those two projects, our remaining goal will be to address symbology.
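At its simplest, harmonizing fields is a mapping exercise; the field names below are invented for illustration and do not represent any vendor’s actual dictionary or the eventual OpenMAMA data model.

    # Map each vendor's native field names onto one common data model.
    COMMON_MODEL = {
        "vendor_a": {"LastTradePx": "last_px", "BestBid": "bid_px", "BestAsk": "ask_px"},
        "vendor_b": {"TRADE_PRICE": "last_px", "BID1": "bid_px", "ASK1": "ask_px"},
    }

    def normalize(vendor, raw_update):
        mapping = COMMON_MODEL[vendor]
        return {mapping[field]: value for field, value in raw_update.items() if field in mapping}

    print(normalize("vendor_b", {"TRADE_PRICE": 101.5, "BID1": 101.4, "ASK1": 101.6}))
    # {'last_px': 101.5, 'bid_px': 101.4, 'ask_px': 101.6}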
When we converted MAMA into open source in October 2011, we opened up the C portion of the MAMA API, and since then we have been working to open source all the other components within MAMA. We have a roadmap of contributions into the open source repository, and by the end of April 2012 the entire API will be made available, including the MAMDA API and a series of wrappers supporting other programming languages.
Working groups were formed according to the roadmap and each group will go through a few iterations before they publish any findings. Having vendors and end users involved together means that discussions are productive and we expect new features to be added to the API through the end of 2012. We are also hosting the Linux End Users Conference at the end of April to discuss the future of OpenMAMA and Linux. Currently, we are working on the structure and talking to the key players in the financial community. As more people adopt OpenMAMA and consider joining the steering committee or even contribute through the open source community, it is our hope that this project will become more than the sum of its parts.

The Global Trading Perspective: An AllianceBernstein Case Study

Richard Nelson, Head of EMEA Trading for AllianceBernstein, shares his perspectives on navigating volatility, prospects for developing exchanges, new regulation and the balance between transparency and best execution.
FIXGlobal: How much does volatility affect the way that you trade and what are you using to measure volatility on the desk?

Richard Nelson, AllianceBernstein:
We use an implementation shortfall benchmark, so the longer we take to execute an order, the wider the range of possible execution outcomes. Volatility, in particular intraday volatility, increases that potential range, so you could see very good or very poor execution outcomes as a result. In reaction to that, we take a more conservative execution strategy or stretch the order out over a longer time period. And, for instance, if we get a hit on a block crossing network, we will not go in with as large a quantity as we would in a less volatile market. In that way we try to dampen down the potential effects that volatility might have on the execution outcome.
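For reference, a minimal sketch of the implementation shortfall measure itself (the sign convention is assumed: positive numbers mean the execution was worse than the decision price):

    def implementation_shortfall_bps(decision_px, avg_exec_px, side):
        # side is +1 for a buy, -1 for a sell; positive output means underperformance.
        return side * (avg_exec_px - decision_px) / decision_px * 10_000

    # A buy decided at 100.0 and filled at an average of 100.30 costs 30 bps;
    # the longer the order is worked in a volatile market, the wider this number can range.
    print(round(implementation_shortfall_bps(100.0, 100.30, side=+1), 1))  # 30.0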
FG: How is AllianceBernstein using technology to improve performance and cut costs on the trading desk?

RN: It plays quite an important part and has done so for quite a while. We are pretty lucky in that we have a team of quant trading analysts. Most of them are in New York, but we have one here on the desk in London, and they help us to analyze the changing market environment and recommend the best ways we can adapt to it. Our usage of electronic trading has increased in the last year, and we benefit from the quant trading analysts looking at the results we are achieving with our customized algorithms. We are more confident about getting consistently good execution outcomes because they are monitoring the process and making the necessary changes to ensure the results are what we expect. This, in turn, increases the productivity of the traders I have on the desk. They can place suitable orders into these algorithms and let them run, which allows us to focus on trying to get better outcomes on our larger, more liquidity-demanding orders.
On top of that, as market liquidity has dropped significantly, we are trying to make sure we reach as much potential liquidity as possible, and ideally we want to do that under our own name rather than go to a broker who then goes to another venue. We believe that going directly into a pool of liquidity is better done under your own name rather than via a broker because we can then access the ‘meaty’ bits of the pool rather than the ‘froth’. We are looking into ways of doing that but one of the problems is that, potentially, you get a lot of executions from a number of different venues, which results in multiple tickets for settlement. Our goal is to access all these potential liquidity pools, yet also control our ticketing costs, which are a drag on performance for clients.
FG: Was it an intentional change to increase electronic trading or was it a byproduct?

RN: It was a little of both. Our quant trader has been with us for two years, and when he first arrived he had to sort out the data issues that exist in Europe and clean things up. Once the data integrity was sorted out, we looked at different ways of employing quantitative analysis. Having somebody here who is constantly monitoring the execution outcomes means we can proceed down this path with real confidence. As a London desk, we were a little behind in our adoption of electronic trading, but now we are in the middle of the pack in terms of usage. It makes sense from a business and productivity perspective that the many orders which do not need human oversight are best done in algorithms.
FG: With new regulation in Europe and forthcoming changes in the US, what is the biggest change you will have to make in your trading: e.g. broker crossing, HFT, access to dark liquidity? Will this change be made internally or by your brokers?

RN: We are in a very interesting stage here in Europe, as MiFID II is frequently talked about and discussion papers circulated. The review has changed from the first things that were mooted and I think it will change again by the time it actually becomes law. There is still quite a way to go to put flesh on the bones.
We have moved in a positive direction, however, in some respects. The new Organized Trading Facility (OTF) regime is going to be something the brokers have to integrate with their Multilateral Trading Facilities (MTFs) and Systematic Internalizers (SIs), and that is something they must deal with and work out how it all fits together.
One area that concerns me is what may happen to waivers for block crossing networks. As a larger institution, these play a very important part in how we execute our orders, so we would not want to see that change in a way that disadvantages how we can participate in these venues. The other area that concerns me is the intention to shorten trade reporting times, which is great on an agency basis, but may not necessarily be so good where we have used a broker for a principal trade, where the broker provides a principal price for us to get some volume done in a particular position. At this stage it is unclear how the regulations might affect the need to report that straight away. Agency trades are fine because they are third party to third party trades, but in a principal trade the broker has risk on their books and needs time to reduce or hedge that risk. It would be detrimental if those types of trades were required to be reported instantaneously, as agency trades probably will be.
FG: Price discovery helps all participants, but large institutional investors benefit from greater anonymity when trading. What is the appropriate balance between these two concerns? Does it involve shorter reporting periods or increased reporting of unlit trades?

RN: An institutional investor is, by and large, an organization that represents pension funds and mutual funds; basically, it is the retail public bundled up into larger shapes. We have similar interests to the man in the street in terms of his investments. Pre-trade transparency and price discovery are popular discussion points, but we rarely discuss the fact that post-trade transparency is as important as pre-trade transparency. There is a great deal of information to be discerned from a post-trade tape, because we can get an accurate picture of volume for specific names each day. It is good that regulators are tightening up the reporting regime for post-trade, and it is particularly helpful for agency trades, but there is still work to be done on the risk trades. The proposal for a consolidated tape would be beneficial as well, because at the moment trades are reported on many platforms and it is difficult to get a full picture in real time as to what is going on in the marketplace.
As regards pre-trade transparency and price discovery, even institutional investors who trade in dark pools and block crossing networks still use the lit pre-trade markets as a reference point for where we trade in the dark or in block. The amount of business going through these dark pools and crossing networks is not nearly large enough to detract from the price discovery occurring in the lit markets. Any crossing that a broker does in Europe is reported as an over the counter trade, but that is not entirely accurate. Even crossing networks use the pre-trade markets as reference points, so it is something of a misnomer.
FG: As more markets modernize their platforms and increase speed and capacity, will trading volumes spread more evenly between traditional market centers (London, New York, Frankfurt) and emerging markets (India, Korea, Brazil, Russia, etc.)?

RN: It certainly helps when markets modernize their platforms, and a growing GDP obviously attracts more funds, but the trading and clearing processes are still critical to a market’s success; for instance, many markets require investor ID numbers, which is often perceived as unnecessary bureaucracy. In the Middle East, you have Euroclear and Clearstream, and you need to let the broker know who your clearing agent is at the outset so that you go through the appropriate means of execution and settlement. In Russia, they are making some major changes this year, but until now they have had no Delivery Versus Payment (DVP) settlement system, so you have had to use free-of-payment settlement to trade local shares. Some people are less comfortable with this form of settlement.
The fundamental processes of how these markets go about trading make them difficult, and this will not be cured by increasing speed or capacity. If the bureaucratic aspects are not addressed, it does not matter how big they get; their instruments will simply be listed on more established exchanges for trading.

Understand the Power of Data Convergence

By Max Colas

Max Colas of CameronTec looks at smarter approaches to information overload and explains how improved management of data convergence can result in greater business insight and edge.

Every day Twitter delivers 300 million messages, 4% of which are actual news. Every 20 minutes, 3 million messages are published on Facebook and 10 million comments are added. Such mind-blowing numbers would be anecdotal if they did not highlight a trend – perhaps even a threat – that is also relevant in the trading world: information overload.

As usage of FIX grows globally and firms increasingly rely on their trading platform to contribute to their business edge, the risk for FIX users is that they focus on the wrong snippets of information, or miss the truly relevant trends. Addressing those challenges becomes a differentiator for FIX technology providers.

Previous generations of monitoring systems focused on displaying information, for instance by adding value in the shaping of data or the user-friendliness of the interface, such as displaying logs with FIX tag/value expansion or showing “conversation views” that gathered related messages together. The mostly static log formats even allowed vendors to claim some degree of compatibility across FIX engines. Although useful, such systems are inherently flawed for two reasons:

1. They assume that FIX operators should approach information linearly, and

2. They expect all information that is relevant to a business to be contained in the logs.

Neither assumption proves true in today’s environment.

When algorithmic trading is involved, it is not unusual for FIX logs to grow by 10,000 lines per second for each session. When data flows converge from a number of FIX nodes across a pan-European topology, the dataset size can increase by multiple orders of magnitude. We are way past the display of logs on a screen. Gone is the linear approach to FIX data; gone is the time of perusing pages of logs one after another, of X-term windows scrolling slowly on a screen.

In fact, the only approach that remains at this point is to expect monitoring systems to deliver on two channels: “I tell you in advance what I am interested in and you notify me when it occurs” and “I tell you what I am interested in and you bring me the relevant results”. These approaches are not new: in the outside world, they are called Google Alerts and Google Queries. Technologies developed to implement this paradigm in the financial industry, such as California-based Splunk, have been in use for a few years. They all tend to gravitate around the convergence of data into one central repository to broaden the breadth of searches. This, too, is an industry trend that is highly relevant to the FIX world, with a peculiar edge that is worth analyzing.

FIX data alone is bare and lacks the information needed to build a business edge, and systems that solely seek to draw insight from it miss out on the extra dimensions that make a real difference. A broker who enters performance-related service level agreements with time-sensitive customers should proactively monitor adherence to the contracted latency terms. This reasonable business need underpins four requirements (a simple monitoring loop along these lines is sketched after the list):

1. Being able to monitor FIX message latency (i.e. generating performance data),

2. The ability to analyse the data on a continuous and rolling basis,

3. To be able to compare results against contracted thresholds (which can differ from one client to another and therefore requires client data), and

4. Dissemination of the results (as confirmation of adherence or alerts of breaches) to account managers and perhaps the customer itself, thereby requiring contact details data normally held in corporate directories or configuration datasets.
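Taken together, those four requirements amount to a small monitoring loop. The sketch below is illustrative only; the thresholds, client names and notification hook are invented.

    # Hypothetical per-client SLA check combining the four requirements above.
    CONTRACTED_LATENCY_MS = {"client_a": 5.0, "client_b": 20.0}        # client data (requirement 3)
    ACCOUNT_MANAGER = {"client_a": "am-tokyo@example.com", "client_b": "am-london@example.com"}

    def check_sla(client, rolling_latencies_ms, notify):
        # rolling_latencies_ms: measured FIX message latencies (requirements 1 and 2)
        threshold = CONTRACTED_LATENCY_MS[client]
        average = sum(rolling_latencies_ms) / len(rolling_latencies_ms)
        if average > threshold:                                        # requirement 3
            notify(ACCOUNT_MANAGER[client],                            # requirement 4
                   f"{client}: rolling average {average:.1f} ms breaches contracted {threshold} ms")
        return average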

In other words, business edge is no longer drawn from FIX data alone: a body of derived and peripheral, yet necessary, data gravitates around the order messages and contributes decisively to the shaping of the business. Separate solutions, each looking after one aspect of the process such as monitoring the latency figures, also prove suboptimal unless they truly integrate with all the other data sources to nourish a central pool of data that provides the substance for business alerts and queries. The need for integration – which acknowledges that ‘niche’ expertise might be used if the business calls for it – places the onus on FIX infrastructure providers to open themselves up through the use of industry standards, open and extensible languages, and rich and actionable APIs. Functional modularity and technical openness simply reflect the necessary diversity in the technology landscape that CTOs are called upon to shape.

Concentrating the actionable data is in itself hardly sufficient without analytical capabilities – a view that leading vendors have embraced as they now combine broad data sets with tools to build intelligence as part of their product suites. Similar capabilities are required at the lower level of trading backbones and infrastructures, so that the most specific business interests can be covered. To address these interests, advanced technologies such as complex event processing and data mining analytics are entirely relevant, and leading vendors make use of them. Such tools make it possible, for instance, to alert traders to perceived connectivity sluggishness whilst the FIX node reconfigures its FIX sessions on the fly to transit through alternate networks when, say, the average execution time for market orders from the algo desk exceeds 10 milliseconds over a rolling time window of 1 minute – all this automatically, with complete control and, of course, the ability to simulate and test those scenarios as part of the daily automated QA process.
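That rolling-window rule reduces to a few lines. The sketch below uses the 10 millisecond and one-minute figures from the example; the event feed and the alert and reroute hooks are assumed.

    from collections import deque
    import time

    WINDOW_SECONDS = 60
    THRESHOLD_MS = 10.0
    window = deque()        # (arrival time, execution time in ms) for algo-desk market orders

    def on_execution(exec_time_ms, alert, now=None):
        now = time.time() if now is None else now
        window.append((now, exec_time_ms))
        while window and window[0][0] < now - WINDOW_SECONDS:   # drop events outside the window
            window.popleft()
        average = sum(ms for _, ms in window) / len(window)
        if average > THRESHOLD_MS:
            alert(f"Average execution time {average:.1f} ms over the last minute "
                  f"exceeds {THRESHOLD_MS} ms; consider rerouting the FIX sessions")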

This final example sheds another light on the business requirement: understanding the gist of FIX data, and of all the data that gravitates around it, hardly matters until the information is brought to the attention of its consumer. Avoiding information overload means sifting through a multitude of updates and notifications to separate the wheat from the chaff, and then serving up the result however the consumer wants to absorb it. Twitter exists for virtually every platform precisely because different people call for different media. Escalation paths are also important when information is deemed actionable. The ability of notification systems to integrate with existing technologies and platforms, and the flexibility to tailor the message with adequate management of escalation, is precisely what differentiates technology-centric from business-centric solutions.


Open for Discussion

FPL Co-Chair Updates Feb 2012

By Annie Walsh, Jim Kaye, Zoltan Feledy, Stuart Baden Powell

As a member-led organization, FIX Protocol Ltd (FPL) empowers its members. Hear from some of the women and men who run the committees and working groups about the diverse, global initiatives within FPL. 
 
FPL Global Steering Committee Co-Chair, Jim Kaye
The GSC conducted its annual strategy planning session late last year and has been working to implement the outputs from that session. Specifically, the GSC will be focusing FPL on regulatory and market structure initiatives particularly in fixed income and OTC derivatives, and also investing in the protocol suite itself, with FIX 5.0 Service Pack 3, the Interparty Latency work and high performance protocol all delivering in 2012.
FPL EMEA Regional Co-Chair, Stuart Baden Powell
We continue to add value to our EMEA trading community, and this year sees the development of the Investment Management and continental European-based working groups. FPL EMEA is also at the forefront of ensuring the benefits of standards are clearly understood in the formulation of regulation, presenting significant advantages to all market participants.
FPL Asia Pacific Regional Co-Chair, Zoltan Feledy
The various subcommittees and working groups in the Asia Pacific region have been hard at work as we began implementing our recently created road map. We have rekindled the buy-side group with fresh leadership on the heels of the many successes of its global counterparts. Our education and marketing committee has been busy organizing the many conferences slated for this year while coordinating efforts to work with regulators in the region. One of our main goals as a regional committee is to start working more closely with Japan so we can all benefit from the vibrant sharing of ideas that is keeping the Asia Pacific region at the top of its game. 
FPL Membership Services Committee, Annie Walsh
The FPL Membership Services Committee plays an important role in providing members with a forum to collaboratively discuss and agree initiatives designed to grow the membership base and further enhance the unique industry benefits offered to buy and sell side firms, vendors, regulators, trading venues and industry associations. In 2012, implementation of several new initiatives will focus specifically on enabling firms to better leverage their strategic and operational association with the FIX Protocol organization.

Profile : Seth Merrin : Liquidnet

LEADER OF THE PACK.


Seth Merrin, founder and CEO of Liquidnet, explains how the company maintains its cutting edge.

Q. What impact is the current environment having on the trading environment?

A. The uncertainty over the global macroeconomic picture, combined with the eurozone debt crisis, is making markets swing up and down by 3% to 5% in a day, making life difficult for institutional investors. The retail investor has also been frightened away. If you look at 2008, Henry Paulson, who was then Treasury Secretary, introduced the Emergency Economic Stabilization Act, which led to the $700bn Troubled Asset Relief Programme. I think Merkel and Sarkozy should introduce a similar type of programme to stabilise the situation.

Philippe Buhannic : TradingScreen

Philippe Buhannic, CEO, TradingScreen

CONNECTING THE DOTS.


Philippe Buhannic, founder and chief executive officer of TradingScreen, explains how the firm has been plugging, and continues to plug, the execution gaps.

Q. What is the history of the company and its products?

A. I had the idea for the company while I was working at Credit Suisse First Boston. My team and I developed a suite of products: PrimeTrade (an internet-based order-routing and execution system), PrimeClear (a trade-clearing system) and PrimeRisk (a risk management system). It dominated the market, but I realised that clients did not just want a single-dealer system that traded one asset class. I also realised that most of the products were geared towards the sellside and that the buyside was being left behind. We launched the company in New York in 1999, then opened an office in London, and one in Japan the year after, to offer cross-border trading in the US, Asia and Europe. Our first product was an execution management system that was multi-broker and multi-asset, aimed at the alternative asset manager.

Today we are one of the largest EMS providers, with 180 people in 12 offices, 1,700 buyside clients and 6,000 daily users. We are unique in that most systems today cover a single asset class. Breaking it down, about 45% of our clients are hedge funds, with the rest being mutual funds, wealth managers, private banks and broker-dealers.

Q. I see you just won Financial News’ award for best buyside trading solutions – how does the firm keep its competitive edge?

A. We have created one of the most complete trader workstations and now address every point in the trader’s workflow. For both the buy- and sellside, our products help them connect to each other, improve market access, reduce connectivity costs, increase trading efficiency and fully automate their workflow. We also carefully and continually analyse our clients’ workflow to make sure we are meeting their needs. If we stood still, we would not be here.

Q. Can you discuss your products – Prime, EMS and Plus?

A. It has taken us three years, but we now address the entire workflow. TradeEMS is for the traditional asset manager, while TradePlus is dedicated to the broker-dealer community. Our most recent product is TradePrime, which is targeted at the alternative manager. We call it a “hedge fund in the box” in that it combines EMS and OMS functions along with risk and connectivity tools. This helps resolve one of the most difficult challenges the alternative asset manager faces – the complexity of integrating the trading-focused EMS, the position- and performance-focused OMS, and the administrator information into a seamless, intuitive workflow. Another advantage is the time it takes to implement the system: clients can download the software, enter some parameters and connect within an hour. In the past, getting an EMS operational could take up to 18 months.

Q. What has been the impact of regulation such as EMIR and MiFID on the industry?

A. There will be multiple effects, and it is difficult to assess the full impact while the rules are still being negotiated. There are many uncertainties at the moment, but a few things will definitely happen. For example, more venues will be introduced for different asset classes, and this will create a certain amount of complexity. The buyside may also have more difficulty trading large orders because of the fragmentation of liquidity. There is some good news, though: the new rules will create more competition, not just in execution but also in clearing, and that should reduce costs.

Q. Can you provide more detail about the international side of the business, such as your recent deal with Chi-X Australia?

A. Overall, we are taking our experience in the US, Europe and Japan to other parts of Asia, as new execution venues are being launched. We have made huge inroads into India and Thailand, as well as Hong Kong, which has a large asset management community. There is a greater understanding now of the importance of technology and of the need to apply best practices. Australia is different, though: it has a fairly large international trading community, but the country is undergoing a technological revolution. It was not too long ago that people there did not know what low latency was.

Our liquidity and multi-venue environment services provide a combined picture of liquidity for local equities jointly listed on the Australian Securities Exchange (ASX) and the new venue Chi-X Australia, which was launched on 31 October 2011. They give the buyside a single view that includes tabular and graphical stock prices across both marketplaces, time-and-sales and tick-by-tick data, drag-and-drop order management, a trading velocity index, and connectivity to any other exchange around the globe where those securities trade.
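
As a rough illustration of what such a consolidated view involves underneath, the sketch below aggregates top-of-book quotes from two venues into a single best bid and offer. The venue names, quote fields and prices are hypothetical stand-ins for illustration only, not TradingScreen’s actual data model or API.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    venue: str      # e.g. "ASX" or "Chi-X Australia"
    bid: float      # best bid price on the venue
    bid_size: int   # shares available at the best bid
    ask: float      # best offer price on the venue
    ask_size: int   # shares available at the best offer

def consolidated_bbo(quotes: list[Quote]) -> dict:
    """Combine per-venue top-of-book quotes into a single best bid/offer."""
    best_bid = max(quotes, key=lambda q: q.bid)
    best_ask = min(quotes, key=lambda q: q.ask)
    return {
        "bid": best_bid.bid, "bid_venue": best_bid.venue,
        "ask": best_ask.ask, "ask_venue": best_ask.venue,
        "spread": round(best_ask.ask - best_bid.bid, 4),
    }

# Hypothetical top-of-book snapshot for a stock listed on both venues
snapshot = [
    Quote("ASX", 34.50, 12_000, 34.52, 9_000),
    Quote("Chi-X Australia", 34.51, 4_000, 34.53, 6_000),
]
print(consolidated_bbo(snapshot))
# -> bid 34.51 on Chi-X Australia, offer 34.52 on ASX, spread 0.01
```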

Q. What is the driver behind the collaboration with Bank of America Merrill Lynch, Citi, and Nomura on a transaction cost analysis consultation paper?

A. It is in response to our buyside clients’ concerns over transaction cost analysis and the conflicts of interest that exist in the brokerage community when a broker offers a TCA service. It is hard for the buyside to separate the strong from the weak performers because there are no clear, industry-accepted standards for measuring transaction costs. Open TCA aims to bring buy- and sellside firms together to set clear standards for measurement, so that there is a common process and common tools for analysing those costs fairly, without bias. We issued a consultation paper that looks at how we can establish a common benchmark methodology.
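
One candidate for such a common benchmark is implementation shortfall, the gap between the arrival (decision) price and the prices actually achieved. The sketch below shows that calculation in its simplest textbook form; the fill data is invented and the formula is a generic one, not anything specified by the Open TCA consultation paper.

```python
def implementation_shortfall_bps(arrival_price: float,
                                 fills: list[tuple[float, int]],
                                 side: str = "buy") -> float:
    """Implementation shortfall in basis points versus the arrival price.

    fills is a list of (price, quantity) pairs; a positive result means the
    execution cost money relative to the price at the time of the decision.
    """
    total_qty = sum(qty for _, qty in fills)
    avg_price = sum(price * qty for price, qty in fills) / total_qty
    sign = 1 if side == "buy" else -1
    return sign * (avg_price - arrival_price) / arrival_price * 10_000

# Hypothetical buy order: arrival price 100.00, filled in three slices
slices = [(100.02, 5_000), (100.05, 3_000), (100.10, 2_000)]
print(round(implementation_shortfall_bps(100.00, slices), 2))  # 4.5 bps of slippage
```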

Q. What are the future challenges and opportunities?

A. There is a great deal happening at the moment, but one of the biggest changes we will see is the redrawing of the investment banking business model due to regulation and market conditions. Clients also increasingly want simplicity and the ability to execute all asset classes – whether futures, fixed income, commodities, ETFs, structured products or equities – across different regions through one pipeline. That is one of the biggest challenges, but for us it is also one of our greatest opportunities: to meet these demands with new services and products.

[Biography]
Philippe Buhannic is founder and chief executive officer of TradingScreen. He was previously a managing director at Credit Suisse First Boston in New York, where he worked in fixed income and created and implemented CSFB’s e-commerce products. Prior to joining CSFB, from 1993 to 1995, Buhannic was chairman and CEO of Fimat Futures USA, a subsidiary of France’s Société Générale, and a member of the board of the Fimat Group. From 1987 to 1993, he was the deputy chief financial officer of Credit Commercial de France, where he oversaw the global marketing of short-term FX and interest rate products. Buhannic holds an MBA from New York University’s Stern School of Business and a Master’s degree in Finance and Taxation from the Institut d’Etudes Politiques de Paris. He is a long-time board member of the Futures Industry Association and a guest professor of business and finance.
 
©BEST EXECUTION

Nordic Trading: Buy-side Takes Control

Simo Puhakka, Head of Trading for Pohjola Asset Management, shares his experience trading in the Nordic markets, giving his opinions on interacting with HFT, using TCA and knowing whether you can trust your broker.
Nordic HFT
The prospects for High Frequency Trading (HFT) are really up to regulators. It will be a free market, but as we all know, regulatory changes affect the whole trading landscape. For example, we can see what is happening in France and the debate going on in Sweden, both of which are quite hostile towards HFT, so those countries can expect some changes.
Personally, I think that HFT is a good thing for the market, as long as you have the proper tools to deal with it. A number of small firms have been suffering from HFT since MiFID I because they lack the technology and tools to measure and deal with it. We have not suffered in our dealings with HFT; in many cases, I would say the opposite. HFT firms seem to add liquidity, and when you have the proper tools, you can take advantage of it. Speaking of tools, we started building our own Smart Order Router (SOR) a year and a half ago. The goal was to create an unconflicted way to interact with the aggregated liquidity. In the process we went quite deep into the data and turned our processes upside down, with the result that we now have full control of how we interact with the market.
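
Puhakka does not spell out Pohjola’s routing logic, but the core of any unconflicted SOR is deciding, from the firm’s own view of aggregated liquidity, how to slice an order across venues. The sketch below shows one simple rule, splitting in proportion to displayed size; the venue names and the proportional rule are illustrative assumptions, not Pohjola’s implementation.

```python
def split_order(total_qty: int, displayed: dict[str, int]) -> dict[str, int]:
    """Split a parent order across venues in proportion to displayed size.

    displayed maps venue name -> shares visible at the touch; any residual
    left over from integer rounding goes to the deepest venue.
    """
    total_displayed = sum(displayed.values())
    if total_displayed == 0:
        return {}
    children = {venue: total_qty * size // total_displayed
                for venue, size in displayed.items()}
    deepest = max(displayed, key=displayed.get)
    children[deepest] += total_qty - sum(children.values())
    return children

# Hypothetical displayed liquidity across a primary exchange and two MTFs
print(split_order(20_000, {"Primary": 30_000, "MTF-1": 12_000, "MTF-2": 5_000}))
# -> {'Primary': 12767, 'MTF-1': 5106, 'MTF-2': 2127}
```
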
On the other hand, I welcome technological innovation from the sell-side; for example, brokers now disclose, on an annual basis, the venues where they execute trades. The surveillance responsibilities that brokers have are beneficial. Many of the small local brokers and buy-sides, however, are now finding it challenging to upgrade their technology.
Trusting your Broker
Our approach was to take control of our order flow and use our brokers only for sponsored access. We chose full control because, in some cases, I do not fully trust brokers to deliver what I am asking for. These questions first arose a few years ago, and we realized we needed to create a transparent, fully controlled, non-conflicted path to the market. How you interact with different venues – even lit venues, where you have more transparency – will affect your choice of strategy. In most cases, you are better off without brokers making decisions for you. The root of the problem is this: when you send an order to a broker, what happens before it goes to the venue? What control do we have over the broker’s infrastructure, including its proprietary flow, internalization, market making and crossing, not to mention the routing logic?

When we dug into the data, we were quite surprised to see that, although a broker was connected to all the dark liquidity, many of our fills were coming from that particular broker’s dark pool, suggesting there are preferences in the routing logic. Brokers want to internalize flow, which is not a problem if you are aware of the potentially higher opportunity costs. When it comes to dark liquidity, the problem is even bigger, since our trades were often routed to the broker’s own dark pool or to pools it has arrangements with.
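
The kind of analysis described here is, at its simplest, a breakdown of executed quantity by venue. The sketch below runs that breakdown on invented fill records; the venue names and quantities are hypothetical, not Pohjola’s data.

```python
from collections import Counter

def venue_share(fills: list[tuple[str, int]]) -> dict[str, float]:
    """Share of executed quantity by venue, from (venue, quantity) fill records."""
    by_venue = Counter()
    for venue, qty in fills:
        by_venue[venue] += qty
    total = sum(by_venue.values())
    return {venue: round(qty / total, 3) for venue, qty in by_venue.items()}

# Hypothetical dark fills routed by a single broker's algo
fills = [("BrokerX dark pool", 40_000), ("Independent MTF dark book", 8_000),
         ("BrokerX dark pool", 30_000), ("Other broker crossing network", 2_000)]
print(venue_share(fills))
# -> {'BrokerX dark pool': 0.875, 'Independent MTF dark book': 0.1,
#     'Other broker crossing network': 0.025}
```
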
To interact with multiple liquidity pools, we decided it was important to define our own routing logic, separate from our brokers’. It all comes back to whether we trust our brokers to deliver what we requested, and in our case, we found it hard to believe.
In theory, broker internalization can add value to our trades, but when we looked at our TCA, there was a consistent pattern: brokers with their own order books and market making operations perform worse than agency-only brokers, which tend to have fewer conflicting activities. The reason is not technology or limited resources, which bulge-bracket firms have in abundance; we think we are suffering because they are internalizing and making markets. Since we began using our own smart order router, we have seen better numbers than the best agency brokers. Judging by our numbers alone, this decision has made quite a difference.
TCA Best Practices
There are two ways of looking at TCA. The traditional use of TCA is very important to us. We analyze all of our trades, and because we only have one executing broker (our own technology), we no longer compare brokers. We do, however, pay a lot of attention to venue quality. We get more dark fills today because we have full control of the apparatus. In the past, if we had used a broker dark algo, I would have been concerned that a lot of the liquidity was coming from that broker’s dark pool. Now we are connecting to most of the European dark pools through our own logic, and I have seen an increase in dark fills.
More specifically, we use TCA to look at the fills coming from the dark pools and how much noise they make on the lit market before and after we get the fill. Admittedly, it takes hundreds of fills from a particular venue before you can actually see the numbers stabilizing, but through this process we try to avoid toxic venues that create volatility in the lit market around the time we get the fill. To do this, we measure the standard deviation of the lit-market midpoint just before and just after we get the fill. Our benchmark is implementation shortfall, and since we are not comparing brokers, we use TCA to see whether the numbers are consistent. Nonetheless, hiccups in volatility will increase slippage in our TCA numbers.
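
A minimal sketch of that measurement might look like the following, assuming simple timestamped midpoint samples and an arbitrary five-second window rather than Pohjola’s actual parameters. Averaged per venue over hundreds of fills, the post-fill figure is what eventually makes the venue-by-venue comparison stable, as Puhakka notes.

```python
from statistics import stdev

def fill_noise(midpoints: list[tuple[float, float]],
               fill_time: float,
               window: float = 5.0) -> tuple[float, float]:
    """Standard deviation of lit-market midpoints just before and just after a fill.

    midpoints is a list of (timestamp_in_seconds, midpoint_price) samples;
    window is the look-back/look-forward length in seconds.
    """
    before = [p for t, p in midpoints if fill_time - window <= t < fill_time]
    after = [p for t, p in midpoints if fill_time < t <= fill_time + window]
    def sd(xs):
        return stdev(xs) if len(xs) > 1 else 0.0
    return sd(before), sd(after)

# Hypothetical midpoint samples around a dark fill received at t = 10.0 seconds
samples = [(float(t), 25.00 + 0.001 * (t % 3)) for t in range(21)]
pre_sd, post_sd = fill_noise(samples, fill_time=10.0)
print(round(pre_sd, 5), round(post_sd, 5))
```
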

When it comes to pre-trade TCA, the interesting aspect is adding that kind of logic to our smart order router. We incorporated real-time TCA into the smart order router itself, so we do not use it on the trading desks. Our traders are not using that knowledge directly, but we utilize it in our SOR, which makes a lot of trading decisions for us. This is obviously different from a desk that uses 10 to 20 brokers and broker algos. We focus on technology and on building for all possible scenarios so that we can utilize the market data in real time.
New Priorities
Volatility and risk have pushed us to bring more of our trading operations in-house. We still use other brokers, with about 80% of flow going through our SOR and 20% going through our brokers’ trading methods. In our view, discussing the effects of different market environments and strategies is more important than discussing the brokers themselves. We used to ask that question, but we have discussed our numbers with our PMs and they are convinced by the way we interact with the markets. In the kind of market where close-to-close and intraday volatility are high, the challenge is to demonstrate to the PMs what effect this will have on trading strategies. For example, when intraday volatility picks up, you need to be more aggressive. If you are using reactive strategies and trading over the day, you are more likely to be impacted by volatility than with implementation shortfall strategies. Most of the time our desk has full discretion to trade, but some orders come with parameters, and in those cases we focus on maintaining the feedback loop with the PMs.