POSITIONING FOR THE FUTURE: Gaining competitive advantage – a buy-side perspective
By Richard Coulstock
The only certainty is change. Never has this been truer than in recent times. But change, Prudential Asset Management’s Richard Coulstock believes, brings with it the opportunity for the buyside desk to gain competitive advantage, especially in Asia.
How do we position ourselves for the future and gain that competitive edge? The starting point for a buy-side dealing desk is to ensure that its role is recognized as an integral part of the overall investment process and not a standalone operation. People, workflows, systems and benchmarks should all be built on that crucial foundation.
When we look at people and workflows, we should understand that the days of having dealers acting merely as order clerks are long behind us. Dealers can begin to help themselves by raising their profile internally through engagement with their fund management teams. Contribute at meetings, listen to investment ideas, share flow information, and understand why trades are being placed, not just how to execute them.
We live in an age where the balance of power has swung away from sell-side traders to those on the buy-side. Historically, access to information and sophisticated trading systems with direct exchange access was the preserve of the sell-side. That is no longer the case. With direct market access, broker algorithms, crossing networks and sophisticated Order Management Systems (OMS), the role of a buy-side dealer carries more responsibility and opportunity than ever before.
These are responsibilities and opportunities we should eagerly embrace. The buy-side has been empowered; it is now up to us to seize the opportunity and prove our worth to our businesses by accepting responsibility for execution quality and by embedding our role more firmly in the overall investment process. There are a number of ways in which I believe dealing desks can demonstrate the growing importance of this critical role.
Dealing with the dark pools
I shall start by discussing the increased attention on buy-side dealers, largely driven by regulatory issues; a good recent example is the proposed U.S. regulation on the use of ‘dark pools’. Publicity such as this increases the focus of both clients and internal compliance departments on the role of a dealing desk.
It is our responsibility to know the pros and cons of dark pool utilization. Accessing dark pools can facilitate the very principle of ‘best execution’ demanded of us by regulators. It is important to remember that the principle of crossing stock off-exchange, with subsequent reporting to the exchange, is not new.
While the name can cause alarm, ‘dark pools’, in many ways, simply automate an existing process.
Another important principle to remember is that you will, or should, only trade on a dark pool if the terms of the trade facilitate ‘best execution’. Contrary to some reports, these are not uncontrollable electronic monsters that force you to trade on terms that are unfavourable to your underlying investors. Instead, as we seek to attain ‘best execution’, we should ensure that our desks are equipped to access all relevant liquidity pools, light or dark. To do otherwise is verging on negligence. Our responsibility is to be aware of suitable liquidity pools, current and proposed legislation in Asia and elsewhere, and to educate our own companies to gain approval for their use where appropriate.
Aside from existing dark pools, we need to be aware of other developments in the Asian trading landscape. The Chi-X/SGX announcement earlier in 2009 is a live situation we have a duty to monitor. We should look to the US and Europe for guidance on how changes in market structure will impact our roles. For example, according to Chi-X, on July 14 this year it reached a 22.65% market share in FTSE 100 stocks and executed 92% of trades at, or better than, the primary market spread. Most remarkably, this has all happened within two years of MiFID. Could Chi-X have as dramatic an impact in the Asian arena?
While we need more details before a reasoned judgment can be made, it is safe to say this is a development we cannot afford to ignore. We have a duty to raise awareness of such developments within our own organizations and to ensure that, when the time comes, we are in a position to trade on any alternative exchanges where we believe we may find meaningful liquidity.
Despite operating in a multi-market and multi-regulatory environment that makes it harder for new entrants to gain a pan-Asian foothold, we have seen advances in the last two years, for example, with Liquidnet and Blocsec. Now, with the Chi-X/SGX announcement, we are surely likely to see more changes in the coming months and years.
Competition among established exchanges can be a good development, as it may bring lower overall execution costs. However, we should be wary of over-fragmentation. The proliferation of alternative execution venues in the US has been extreme, but the multi-market environment here should be a natural barrier to a similar situation arising in Asia.
Putting the right technology in place
When we consider these new alternative trading venues, we also have to ensure that our systems are robust enough to cope. It still surprises me when I hear of asset managers that are not FIX-enabled, despite the undeniable benefits of Straight-Through Processing (STP). Having a FIX-enabled order management system that allows you to access DMA and broker algorithms is a basic requirement of any professional dealing desk. An execution management system (EMS) would be even better, and there are a number of those to choose from, each with its own strengths and weaknesses and pricing structures.
An EMS helps dealers to monitor and manage trades in a number of ways. By incorporating DMA and algorithms, live market data, and pre and post-trade costs on the same blotter, an EMS should help your team execute more effectively. EMS development will continue and the end result may well be an EMS that, upon receiving a trade, will look at pre-trade data, decide on execution strategy/broker/algo, and then automatically send the trade with appropriate parameters. This may even change dynamically as the trade progresses and the market moves. This scenario is unlikely in the near term – and would risk putting many of us out of work – but it is development we need to monitor.
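To make that scenario concrete, here is a minimal Python sketch of the kind of rule-based strategy selection such an EMS might one day perform automatically. The thresholds, strategy names and order fields are invented for illustration; real pre-trade logic would be far richer.

```python
# Hypothetical sketch: an EMS picking an execution strategy from simple
# pre-trade inputs. All thresholds and names are invented, not a real
# product's logic.
def choose_strategy(order_qty, avg_daily_volume, spread_bps, urgency):
    """Return (strategy, params) from simple pre-trade heuristics."""
    participation = order_qty / avg_daily_volume
    if participation < 0.01 and spread_bps < 5:
        return "DMA", {}                          # small and liquid: go direct
    if urgency == "high":
        return "IS", {"max_participation": 0.25}  # implementation shortfall
    if participation > 0.10:
        return "VWAP", {"duration_hours": 6}      # large: work over the day
    return "POV", {"target_participation": 0.10}  # default: percent-of-volume

strategy, params = choose_strategy(order_qty=500_000,
                                   avg_daily_volume=20_000_000,
                                   spread_bps=8.0,
                                   urgency="low")
print(strategy, params)  # -> POV {'target_participation': 0.1}
```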
Measuring the value of execution
Another area where buy-side dealers can prove their worth is in the field of transaction cost analysis (TCA). We must benchmark ourselves in such a way as to demonstrate that execution is a crucial element of the overall investment process. Dealers must be aware that execution performance has an impact on underlying fund performance, and they should be prepared to be measured and to accept responsibility for their results. A robust TCA process will become an area of focus for regulators and for clients, both existing and potential. We already use TCA tools to measure our own dealing team, as well as our relationships with our counterparties. This is an important point: TCA is a measurement of broker relationships, not of broker performance. Any buy-side dealing team has to accept responsibility for execution quality and not blame a broker when things go wrong. If a buy-side dealer with adequate systems, trading a live market, does not accept execution responsibility, then his or her suitability for the role has to be questioned.
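As a concrete illustration, the sketch below computes one common TCA measure, implementation shortfall against the arrival price, in basis points. The fills and prices are invented; production TCA adds further benchmarks (VWAP, close), venue and broker breakdowns, and market-impact adjustments.

```python
# Minimal TCA sketch: signed slippage vs. the arrival price, in basis
# points. Positive means the execution cost the fund money.
def shortfall_bps(side, arrival_price, fills):
    qty = sum(q for q, _ in fills)
    avg_px = sum(q * p for q, p in fills) / qty
    sign = 1 if side == "BUY" else -1
    return sign * (avg_px - arrival_price) / arrival_price * 10_000

fills = [(3_000, 100.05), (7_000, 100.12)]            # (quantity, price)
print(round(shortfall_bps("BUY", 100.00, fills), 1))  # -> 9.9 bps paid
```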
Sharing the benefits and responsibility
As dealing desks prove the value that they can add to investment performance, a natural follow-on is increased dealing discretion. Commission Sharing Agreements (CSAs) can and should be used, where permitted by regulations, to allow dealing desks the freedom to execute where they believe they can obtain the best results. The principle of dealing with a broker because they offer good research has to be dropped. An extreme example is in Australia where one asset manager has effectively outsourced all its dealing to a single broker. This broker uses their pre and post-trade analytics to decide upon execution strategy and uses CSAs to pay research brokers as instructed by the asset manager.
So, what does the future hold for Asia? I believe we will see more money flow into the region and that this will be followed by people, changes to market structures and increased pressure on buy-side dealing desks, in terms of performance results, as regulators and clients look more closely at how those desks manage their executions.
Our role is to be prepared for the future. We must monitor regional and international developments, inform and educate the appropriate people in our own organizations – fund managers, compliance departments and IT – and readily accept increased responsibility for our execution performance. In short, we must prove our worth.
Is FIX Really Slow?
By Kevin Houston
Working on a Porsche analogy: To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow! Kevin Houston explains.
All early Porsches have their roots in the design of the Volkswagen Beetle, designed by Ferry Porsche’s father Ferdinand. The VW Beetle is only capable of about 80 mph, and even that comes at the cost of a terrified driver. Early Porsches share many design elements with the Volkswagen; that does not, however, mean that Porsches are slow. Many of us will have been on track days where we have driven Porsches around a race track, hitting speeds well over 80 mph, without inducing any great feelings of fear.
The early FIX engines, designed in an era of simply routing care orders between the buy-side and sales traders, are also slow; if used for modern high-speed trading they would leave traders nervously guessing whether each message would be one of the lucky ones that went through quickly or, more probably, one of the unlucky ones that took several seconds to arrive at its destination. Again, some of us can testify that these early engines do not represent the state of the art. Equally, however, high-velocity trading houses today are using FIX to trade in around 250 microseconds, and a small number of FIX engine vendors are currently capable of consistently beating 10 microseconds for message processing, delivering throughput of around 100,000 messages per second.
To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow. The remainder of this article examines the second myth in more detail.
First, a bit of history
FIX started as a pilot between Salomon Brothers and Fidelity Investments to automate the placement of orders, the reporting of executions and the distribution of advertisements and IOIs. Indeed, many early FIX implementations did not even place the order electronically, but only reported the executions after order placement. At that time there was only one FIX engine in the marketplace; its price was extremely high and its performance, by today’s standards, extremely low. FIX adoption was driven by error reduction and the like. The arrival of the early commercial FIX engines did the community a great service by creating cheaper alternatives, but the performance bar was not very high. Often when people now refer to FIX as being slow, they are using early FIX engine performance as the yardstick. Since then, and particularly over the last 5 to 10 years, increasing emphasis has been placed on the performance of FIX, driven by a number of trends, and the FIX engine vendor community has responded. Some have accepted their current performance levels as adequate for order routing but not DMA, and focused on that market; others, often new entrants, have engineered FIX engines from the ground up to focus on future-proofing performance.
The drivers behind this need for speed are worth noting:
• Increased market data
• Increased order volume
• Exchange adoption of FIX
• Large percentage of trades going electronic
• Rise and rise of algorithmic trading
These drivers lead to two separate performance needs: high throughput and low latency. Whilst these are related needs, there are optimisations that favour one or the other. For example, FIX communicates over TCP/IP. A FIX message typically uses a few hundred bytes, but an IP packet has space for around 1,400 bytes of information. A setting on the IP communication layer allows you to select whether packets should be held back to wait for additional information that can be sent in the same packet.
This obviously improves throughput, as the receiving application has to process fewer packets, but it has the potential drawback of increasing latency, for at least the first message, while it waits for subsequent messages. So, whether you allow this hold-back or not depends on your performance needs, or profile. This mechanism was introduced by John Nagle and is named after him; see http://en.wikipedia.org/wiki/Nagle's_algorithm for more details.
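At the socket level this hold-back is controlled by a single option; a minimal Python sketch of the two profiles (host and port are placeholders):

```python
# Nagle's hold-back is toggled per socket. TCP_NODELAY=1 pushes each
# small FIX message to the wire at once (latency profile); leaving it
# unset lets the stack coalesce messages into fewer packets
# (throughput profile). Host and port below are placeholders.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # latency profile
sock.connect(("fix-gateway.example.com", 9876))
```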
OK, so FIX can be fast, but a lot of the implementations out there are dated and can therefore be slow; so, are there other things FPL is working on that will help with performance?

Where are we today, and how long does it take?
Let’s take a look at some of the time costs in sending a FIX message. Obviously this is a very rough estimate and there are a lot of variables, but some general timings are worth covering. First, there is constructing the message, which typically takes something like 10 microseconds; saving a copy before sending, 50 microseconds; sending it to the wire, typically 10 microseconds; transmission time, a function of distance but easily worked out as distance divided by two-thirds of the speed of light; switching time, a function of the number of routers, switches and their ilk; passing via the operating system into user space, say 10 microseconds; and finally parsing in user space, 10 microseconds. There are a number of ways you can improve these, such as saving the copy of the message asynchronously, but that still leaves a lot of time spent building the FIX message and its tag=value syntax.
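Expressed as a back-of-envelope calculation, using the rough figures above and assuming propagation in fibre at two-thirds of the speed of light (all component values are illustrative, not measurements):

```python
# Back-of-envelope latency budget from the figures above; illustrative only.
SPEED_OF_LIGHT_KM_S = 299_792   # in vacuum
FIBRE_FACTOR = 2 / 3            # signal speed in fibre vs. vacuum

def propagation_us(distance_km):
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR) * 1_000_000

budget_us = {
    "build message":        10,
    "persist copy":         50,
    "push to wire":         10,
    "propagation":          propagation_us(50),   # e.g. 50 km to the venue
    "kernel to user space": 10,
    "parse":                10,
}
print({k: round(v, 1) for k, v in budget_us.items()})
print("total (us):", round(sum(budget_us.values()), 1))  # ~340 us for 50 km
```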
What is the FPL Global Technical Committee (GTC) doing to make things even better?
Whilst FIX can be as fast as, or faster than, most of today’s exchange APIs, there are a number of areas the FPL GTC is working on, and it has approved working groups to look at these issues.
1) Putting FIX on a par with native exchange APIs – Stateless FIX
A lot of exchanges want to implement FIX in parallel to an existing interface. This means that they have a matching engine that stores its state and an interface that does not. If they architect their FIX interface to talk directly to the matching engine then the main difference between the FIX interface and the native interface is that the FIX interface has a costly extra persistence operation. This means that at best the FIX interface is going to be something like 50 microseconds slower than the native exchange interface.
FPL recognised this problem and introduced a partial solution for non-trading messages in FIX 5.0. The GTC has now prioritised extending this solution to include trading messages, making more extensive use of the Application Sequencing messages introduced in that version.
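The persistence cost at stake is the roughly 50 microseconds noted earlier, and where statelessness is not an option, the asynchronous saving mentioned in the timing discussion is one way to pull it off the critical path. A minimal sketch of that write-behind idea, deliberately ignoring the recovery semantics a real engine must define:

```python
# Write-behind persistence sketch: the sending path only enqueues the
# message; a background thread absorbs the disk cost. A real FIX engine
# must also define what "sent but not yet persisted" means for recovery;
# this sketch ignores that entirely.
import queue, threading

log_q = queue.Queue()

def writer(path="fix-outbound.log"):
    with open(path, "ab") as f:
        while True:
            f.write(log_q.get() + b"\n")
            f.flush()

threading.Thread(target=writer, daemon=True).start()

def send(transmit, raw_msg: bytes):
    log_q.put(raw_msg)   # microseconds: enqueue only
    transmit(raw_msg)    # straight to the wire
```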
2) Examining how much information we move
FIX has historically been designed to meet the needs of many distinct constituents. One repercussion of this is that some fields are marked as mandatory when in fact they are only required by a certain subset of the FIX community. FPL has decided to revisit what constitutes a mandatory field within the specification. For example, do there need to be seven mandatory elements in the Order Cancel Request message, when all an exchange may need is the OrderID to be able to cancel the order?
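To illustrate the point only (the field sets below are illustrative renderings, not the normative FIX specification), compare a cancel carrying a fuller set of commonly used fields with the minimal message a venue keyed purely on OrderID might accept:

```python
# Illustrative tag=value renderings, not the normative FIX field list.
# SOH delimiters are shown as "|" for readability.
full_cancel = "|".join([
    "35=F",                  # MsgType = OrderCancelRequest
    "41=ORD-1001",           # OrigClOrdID
    "11=ORD-1002",           # ClOrdID
    "55=0005.HK",            # Symbol
    "54=1",                  # Side
    "38=500000",             # OrderQty
    "60=20091201-01:23:45",  # TransactTime
])

minimal_cancel = "|".join([
    "35=F",
    "37=EXCH-778899",        # OrderID: perhaps all the venue really needs
])

print(len(full_cancel), "bytes vs", len(minimal_cancel), "bytes")
```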
3) Optimised transports for FIX – Binary FIX
Currently a major cost for a FIX interface is the time it takes to serialise a FIX message into a tag=value format. This tag=value string is simply a rendering of a piece of memory in the sender’s computer, used to recreate that memory in the receiver’s computer. FPL introduced the repository some time ago, and many firms already use it to describe that piece of memory to their computer programs by generating object models directly from it. Why not take this one stage further and also convert these object models (or pieces of memory) into information exchanged on networks, using some of the lessons learnt from the FIX Adapted for STreaming (FAST) initiative? Then examine this structure and optimise it?
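A rough sketch of the trade-off, with an invented three-field record shown both as tag=value text and as a fixed-layout binary structure (FAST-style encodings layer field operators and presence maps on top of this basic idea):

```python
# Same three fields as tag=value text and as a fixed binary layout.
# Field choice and layout are invented for illustration.
import struct

price, qty, side = 100.05, 500_000, 1

tag_value = f"44={price}|38={qty}|54={side}"    # SOH shown as "|"
binary = struct.pack("<dIB", price, qty, side)  # 8 + 4 + 1 = 13 fixed bytes

print(len(tag_value.encode()), "bytes as text, recovered by string parsing")
print(len(binary), "bytes as binary, recovered by a single memory copy:")
print(struct.unpack("<dIB", binary))
```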
4) FIX Interparty Latency – FIPL
Whilst this article focuses on the latency and performance of FIX engines, what we as an industry care about is the end-to-end latency across the whole trading infrastructure. One of the first things you must do in any engineering endeavour that aims to improve the performance of a system is measure that performance and identify where the largest problems are and where the easiest gains can be made. Many organisations have detailed network-level information on the arrival and departure of network packets; many components of the trading system write log files that detail when they received a message and when they forwarded it on, and sometimes even information to trace its path through their element of the overall journey.
However, when you try to assemble an end-to-end picture of the journey of a trade across an organisation, and through the many systems that entails, you find that this information is often stored in different formats and compiled on a different basis from system to system. The FIX Interparty Latency Working Group is aiming to develop a standard that will allow the easy assembly of this information on a consistent basis across multiple organisations, so that we can understand the latency introduced across the whole of the trading life cycle.
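A toy illustration of the normalisation problem: three systems logging the same order’s hops in different, invented formats, reduced to one consistent timeline in UTC microseconds:

```python
# Three invented log formats for the same order's hops, normalised to
# epoch microseconds so inter-hop latency can be read off directly.
from datetime import datetime, timezone

def to_us(dt):
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1_000_000)

hops = [
    ("oms_out",  to_us(datetime.strptime("2009-12-01 01:23:45.123456",
                                         "%Y-%m-%d %H:%M:%S.%f"))),
    ("gw_in",    to_us(datetime.strptime("20091201-01:23:45.125",
                                         "%Y%m%d-%H:%M:%S.%f"))),
    ("exch_ack", 1259630625131000),   # already epoch microseconds
]

prev = None
for event, ts in sorted(hops, key=lambda h: h[1]):
    print(event, "" if prev is None else f"+{ts - prev} us")
    prev = ts
```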
So why is FIX perceived to be slow? Because some implementations of FIX are slow, and those implementations are slow because the industry didn’t need them to be fast! Now that people are demanding faster FIX interfaces, the community is providing them. Similarly, the Volkswagen was slow because the people’s car was targeted at simply moving people around, as an economy car, not at moving them quickly.
Road-going Porsches are faster because they are optimised for a different problem: going quickly on public roads. Now we are entering the era when we are racing on a circuit, and unless your car is designed for that, you are not going to make it to the starting grid.
FIX is the same, the engines designed a decade ago to do one job cannot be expected to lead the field when asked to perform an almost entirely different one.
DMA and the Buy-side
Getting to the bottom of naked sponsorship and high-frequency trading with Fidessa’s David Polen.
FIX: What does the buy-side want from Direct Market Access (DMA)?
David Polen: There are two distinct market segments that use DMA – the human trader and the black box. I like to call this “Human DMA” and “High-frequency Trading (HFT) DMA”.
With Human DMA, the extreme is a buy-side whose traders manually execute trades and look at market data over the Internet; with HFT DMA, the extreme is a black box co-located at the exchange. One market segment is sub-millisecond and the other measures in tens, sometimes hundreds, of milliseconds.
The human trader manually clicking around on a front-end is more interested in the full range of services a broker can provide than he is in latency. Although speed is always important, he’s keen on being able to access all his applications via one front-end versus having to go to different windows. He’s looking for his broker to be a one-stop-shop, providing all the necessary services, such as algorithms and options and basket trading, in one easy and convenient bundle. He wants clean and compliant clearing and settlement.
The high-frequency trader is different. He has his own algorithms and smart order routers (SORs). He wants to get to the market as quickly as possible and needs credit and also memberships to the various execution venues.
FIX: What is the controversy around naked sponsorship for high-frequency traders?
DP: With naked sponsorship, the HFT is trading directly on the exchange, and the broker is only seeing the orders and trades afterwards. To help with this flow, exchanges have built in risk checks, so the broker can rely on the exchanges for pre-trade risk management.
To get a view across the exchanges, the broker consolidates the post-trade information through drop copies of the orders and trades. Although the broker has a reasonably good view of the risk at all times, it can take as long as a minute to turn off a buy-side that has exceeded its pre-set risk parameters. This is often exaggerated into a doomsday scenario where a buy-side trades up to $2 billion of stock in those 60 seconds, but that ignores the exchange’s own controls, which would not be set to $2 billion. It is far more likely for a buy-side to stay barely within its risk limits at each exchange, but exceed the overall allotted risk by multiples. Brokers need to have measures in place to prevent that.
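A stylised sketch of that aggregation gap, with invented limits and notionals: each venue sees the client inside its own limit while the broker’s intended overall limit is already breached:

```python
# Invented numbers: per-venue limits hold, the broker's overall limit
# does not -- which is why brokers must aggregate drop-copy flow.
venue_limit   = 10_000_000     # gross notional enforced at each exchange
overall_limit = 15_000_000     # what the broker actually intended

drop_copies = {                # gross notional observed per venue
    "NYSE":   9_500_000,
    "NASDAQ": 8_000_000,
    "BATS":   6_000_000,
}

assert all(v <= venue_limit for v in drop_copies.values())  # each venue: fine
total = sum(drop_copies.values())
if total > overall_limit:
    print(f"overall breach: {total:,} > {overall_limit:,} -- cut off access")
```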
FIX: What are the key concerns with latency?
DP: The best way to lower latency is to get rid of as many message hops as possible. Co-locating at an exchange is an obvious step, as it eliminates network hops. Although co-location is important, it does come with infrastructure costs that not all high-frequency traders are willing to bear; for example, they may need to co-locate at each exchange.
Some buy-sides or brokers may co-locate at only one exchange and use that venue’s network to access others. Co-location also depends on the buy-side’s trading strategy. High-frequency traders need to understand where they want to trade. They can’t think of the market as a montage when they’re trying to achieve the lowest execution latency. There’s no time to sew together the fragmented marketplace if you’re also trying to be incredibly reactive to each and every exchange.
It’s also important to focus on latency within each exchange. Shaving another 100 microseconds off your DMA solution may not matter much if you are hitting an exchange port that is using old hardware, or if you are overloading a port at the exchange and not load-balancing to another port. You also have to be aware of the protocol you are using: some exchanges have created legacy FIX sessions that are wrappers around internal technology and can be quite slow converters. They are now creating “next generation” APIs that are native FIX and much faster, but these sessions may only offer a subset of the available messages, so you have to consider routers that send the eligible subset down the fast FIX pipe.
FIX: Outside latency, what are the main drivers for DMA?
DP: Buy-sides like to have a single FIX session for all of their services: global, high-touch, baskets, options, DMA and algorithms. Having multiple connections for multiple services is inconvenient. It costs more money and is cumbersome for the buy-side trader. Also, for the sell-side, it is harder to put a single risk control around the client’s business. Risk management is all about consolidation. Of course, the lower the latency a buy-side wants, the more willing they will be to support multiple FIX sessions. So a broker should support multiple asset classes and multiple lines of business globally, and present a single, simple interface to the buy-side for risk, order entry, allocation and (where possible) clearing.
FIX: How do you put all of this into a low latency FIX connection?
DP: The architecture is reasonably straightforward. You build the fastest FIX router/gateway you can and put a framework inline that checks risk, locates and compliance. This means that your FIX gateway also has to be hooked up to real-time global market data. Asynchronously, you copy the data to an order management system (OMS) for advanced post-trade features. Then you start building features off the back of it, including algorithms and smart order routing (SOR).
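A minimal sketch of such an inline framework, with invented check names and limits (a real gateway would also consult the live market data mentioned above for price-reasonability checks):

```python
# Hypothetical inline pre-trade pipeline: restricted list, short-sale
# locates, notional limit. Names and limits are invented.
RESTRICTED   = {"XYZ"}           # compliance: restricted symbols
LOCATES      = {"ABC": 50_000}   # shares located for short selling
MAX_NOTIONAL = 5_000_000

def pre_trade_checks(order):
    if order["symbol"] in RESTRICTED:
        return "reject: restricted symbol"
    if order["side"] == "SELL_SHORT" and \
       order["qty"] > LOCATES.get(order["symbol"], 0):
        return "reject: insufficient locate"
    if order["qty"] * order["price"] > MAX_NOTIONAL:
        return "reject: notional limit"
    return "pass"

order = {"symbol": "ABC", "side": "SELL_SHORT", "qty": 60_000, "price": 20.0}
print(pre_trade_checks(order))   # -> reject: insufficient locate
```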
FIX: So it is all about low latency?
DP: Absolutely not. That’s just one category of buy-sides. As previously mentioned, there are a huge number of buy-sides that have taken on the trading function themselves. They have hired sell-side traders, or decided they can trade themselves, and are using DMA tools, like our EMS application, to watch the Level II data and hit the market. Again, the issue is centralization.
Typically, these tools have their own risk checks that the broker can control. But what if the broker’s clients use an array of these tools? Regardless of the tool the buy-side is using to enter these orders, DMA allows the broker to centralize all the order flow and run it through risk and compliance checks before the orders go to market. Fidessa’s DMA capability is also integrated with our OMS, so these trades go to the back-office and show up on regulatory reports and on more than 100 compliance reports.
FIX: What should a broker entering this game focus on?
DP: The key point is to understand what market segment you are after and what your differentiators are. If you are a regional market maker with strong research, then you have loyal buy-side firms who will look to use your services. They will keep trading through you to gain your research. If you can offer them DMA, that’s just one more service they don’t have to go to a Tier-1 broker for.
But to get into the high-frequency trading space, you need a stronger differentiator – perhaps a ton of stock to lend so the buy-sides can sell short. Obviously, prime brokers are already in this space. Either way, pricing is critical, and the maker/taker model in the US means that brokers have to get to top-tier on volume traded on the exchanges as quickly as possible.
The goal for new brokers is to be top-tier in rebates and costs and to have an internal dark pool as part of the SOR for internalization efficiencies. But there are always functional differentiators. There may be clients who are willing to send flow for specialized functionality – perhaps the calculation of commission on each notice of execution (NOE), or the ability to allocate their own blocks. You have to understand what your client wants and tailor your offering accordingly.
Upgrading the Exchanges – With Arrowhead, Tokyo Stock Exchange takes aim at the traders of the future
By Satoshi Hayakawa
The wait is over. Arrowhead, Tokyo Stock Exchange’s next-generation trading platform, launches in the New Year and promises high-speed performance, accessibility and reliability for local and international investors. Satoshi Hayakawa, from the Tokyo Stock Exchange’s IT department, highlights some of Arrowhead’s key features.
January 4, 2010 will usher in a renewal of the Tokyo Stock Exchange’s stock trading system. This next-generation system, called “Arrowhead”, not only provides the high-speed performance at a global level that is essential for order execution and data transmission, but also constitutes a world-class stock exchange trading system that delivers both fairness and reliability, the fundamental attributes demanded of any market. Arrowhead will transform the Tokyo market into an even more appealing trading environment.
Why Arrowhead?
Tokyo’s long-awaited Arrowhead is due to go “live” on January 4, the first Monday of 2010. The concept is simple: to enhance the speed and reliability of the Tokyo Stock Exchange. The next-generation trading environment uses state-of-the-art technology to increase capacity, expand the provision of market information and streamline the trading process. In short, the exchange is aiming to strengthen both the hardware and the software it provides.
Speed
Perhaps the most striking improvement will be to the speed with which orders are received and confirmed. Arrowhead will take the TSE from the one to two seconds it currently achieves to within 10 milliseconds, and test results show that Arrowhead is, in fact, posting even faster times. This is an essential feature if Arrowhead is to accommodate the algorithmic and high-frequency trading that is increasingly common among the global financial community.
High reliability
While speed may be the headline feature of Arrowhead, the reliability of the system is just as important. Gone is the notion that reliability is the trade-off for speed, or vice versa. Arrowhead guarantees the completion of the transaction process via a sophisticated system that holds orders and confirmations on three different servers to prevent loss of data due to hardware malfunction. It is a system that has been proven to cope well, even with orders that constantly change.
Part of the reliability pledge of Arrowhead is its commitment to enhancing the availability of the system. This involves simplifying the user interface while providing the highest level of technological support behind the scenes. A secondary back-up site has also been newly constructed in case of a catastrophic incident at the primary exchange location.
Enhanced market data
Another aspect of availability is the set of enhancements the exchange has put into its market data feed. With Arrowhead, the exchange claims to have significantly reduced the interval between data creation and transmission, as well as increasing the amount and transparency of data and transactions. In practice, the number of quoted price levels transmitted via the standard FLEX service will rise from the current five to eight. A further enhancement, via Arrowhead, is the new FLEX Full, a service that will transmit all quote data for every issue. This data will be available to all investors, not only market players at the TSE, including institutional and individual investors in Japan and overseas.
Revisions to Trading Rules
Other changes include upgrades to the trading rules. These include:
- Partial reduction of tick sizes. To correct overall disproportions and enhance user-friendliness, tick sizes for certain price ranges will be reduced to allow more finely tuned orders.
- Introduction of the sequential trade quote. To avoid sudden price changes over a short period of time, this function displays successive confirmed quotes and can suspend trading for a short period (e.g. one minute) if market players attempt to place buy or sell orders that diverge greatly from the preceding confirmed prices – for example, at more than double the quote renewal margin above or below the price.
Once Arrowhead is in operation, the TSE says it will continue to maintain all systems necessary for fair pricing to occur. The current system, which allows investors to securely place orders and supports price discovery functions that value assets through “itayose” special bids, limit value margins and abnormal order checks, will continue without change.
Higher speed, lower latency and the start of co-location services
TSE’s Arrowhead preparation moved into high gear in June 2009 with the launch of the ‘Arrownet’ network. Arrownet brought together all systems, traders and information vendors to achieve an internal latency across this core network of two milliseconds or lower. This represented a ten-fold decrease in latency and created a ring-shaped network with 99.99% availability and the future potential for expansion into international connections.
A few months later, the TSE rolled out its co-location services, allowing the installation of servers and other devices within the primary site that houses Arrowhead. With co-location, latency is reduced to a few hundred microseconds.
The fruits of this extensive testing will be on offer to all from January 2010, with the TSE billing Arrowhead as a “comprehensive, high-reliability, high-speed trading platform designed to provide a more appealing trade environment.”
Creating an exchange for the future
The development of Arrowhead has been seen by many across the industry as both a technological achievement and an example of partnership between the exchange and market players, including traders and technology vendors. The TSE is confident that Arrowhead will send a message to the world about the reliability, latency and quality of its exchange.
FIXatdl: An Emerging Standard
With the official release of FIXatdl taking place in Spring 2010, Zoltan Feledy gives us a head start on the standard.
A few years back, when algorithmic trading became a standard tool in the trader’s toolbox, the explosion of algorithms from various providers presented a number of challenges for the trading community as well as for FPL. These algorithms brought with them a number of new parameters: orders now had to contain not only what to do but also how to do it. Since the existing implementations of FIX did not have tags to indicate strategies, start times, or whether to include auctions, the result was a proliferation of user-defined tags that are unfortunately still in existence today.
In addition to standardizing parameter specification, FPL also wished to standardize the way algorithms are delivered to the end-user, to further improve the process and reduce time to market. Previously, providers had to work with multiple vendors to present these new order types to their clients, and it often took months before a trader got to see a screen to enter orders in.
FPL responded to this trend by creating the Algorithmic Trading Working Group, which solved these issues sequentially. The proliferation of custom tags was addressed by the Algorithmic Trading Extensions, which provide support for expressing an unlimited number of parameters in a repeating group structure and are now an integral part of the FIX 5.0 specification. The second problem, delivering algorithms to end-users, was solved by FIXatdl. The FIXatdl project chose to use the richness of XML, which is well understood and widely supported. This allows providers to release their specifications in a computer-readable format, as opposed to a long document, giving end users instant access to the latest versions of their algorithms. This is exactly what the working group did.
There are four parts to the standard. The core is used to specify the parameters for the algorithm. This is accompanied by three other parts that express the visual layout, the validation rules, and the ways in which parameters should interact with each other. The earliest versions of the specification have now been around for a while and, following various enhancements, FPL will launch FIXatdl Version 1.1 in spring 2010.
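For a flavour of the approach, here is a deliberately simplified, FIXatdl-style strategy description and the few lines needed to render an order ticket from it. This is a sketch only: the element names are invented for illustration, and the real FIXatdl 1.1 schema uses its own namespaces and much richer layout, validation and flow constructs.

```python
# Simplified, FIXatdl-flavoured XML (illustrative element names only;
# not the normative FIXatdl 1.1 schema). The point: a machine-readable
# algo definition an OMS/EMS can turn into an order ticket without
# custom coding for each provider.
import xml.etree.ElementTree as ET

ALGO_XML = """
<Strategy name="VWAP">
  <Parameter name="StartTime" type="UTCTimestamp" required="true"/>
  <Parameter name="EndTime"   type="UTCTimestamp" required="true"/>
  <Parameter name="MaxPctVol" type="Percentage"   required="false"/>
</Strategy>
"""

strategy = ET.fromstring(ALGO_XML)
print("Render order ticket for:", strategy.get("name"))
for p in strategy.findall("Parameter"):
    print(f"  field {p.get('name')} ({p.get('type')}), "
          f"required={p.get('required')}")
```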
Adoption was slow at the beginning, as the space was developing rapidly and the work was substantial. The “chicken or egg” problem between providers and vendors was a difficult one to overcome: why would vendors develop support for this new standard if providers are not delivering their products in this format?
Conversely, why would providers deliver in this format if vendors will not support the standard? With the pace of development now stabilising and the effort gaining substantial traction on both sides of the fence, all the stars have lined up for this new technology to take off.
I recall first hearing about this standard in New York a few years ago, when it was just a concept. I signed up to help the effort on the spot and helped draft the first specification, until I moved to Asia two years ago. I was delighted, at a recent demonstration, to witness that this standard is gaining significant traction. Shortly thereafter, I saw the video of the panel discussion from the FPL Americas Electronic Trading Conference, and it became clear that this standard will inevitably be the way forward.
Please enjoy the feature article on FIXatdl in this issue of FIXGlobal. The effort is a great realization of FPL’s slogan: so many industry participants collaborating to solve an industry need for the benefit of all, in one impressive effort. With the official release of FIXatdl taking place in just a matter of months, FIXatdl is now ready for prime time. Look for it in a trading system near you.
Korea – A FIX Conversation Initiated
By Josephine Kim
FIX is still a relatively new concept in South Korea. Though it is on everyone’s mind and its potential is clear, how do we move forward? FIXGlobal initiated this conversation with almost 200 industry leaders at their recent Face2Face event in Seoul. Credit Suisse’s Josephine Kim encapsulates the conversation.
Korea is one of the ten largest economies in the world with an increasing number of institutional and foreign investors, making FIX Protocol an ideal choice. But why is the use of FIX in Korea still in its infancy?
This was the key conversation point for representatives of Korean financial players and the exchange at the recent FIXGlobal Face2Face forum held in Seoul on November 10, 2009. Ryan Pierce, Technical Director of FPL, believes there are advantages to being a relative newcomer to the FIX arena. “If a market has not already adopted FIX, then they are working with a clean slate, making it easier to implement the latest version.” In other words, it gives them the ability to jump past the earlier FIX technologies to a more sophisticated version.
The buy-side speakers at the event were vociferous in their support for FIX. Fidelity’s Kan Wong and Samsung Investment’s Young-Sup Lee spoke to Nomura’s Rob Liable, and all were enthusiastic about the positive impact of FIX on their business. “Earlier, all our orders were over the telephone or email. Sometimes there were errors while placing orders or we had delays in order-processing that hurt our trading performance. We adopted FIX in 2006 and the results have been very clear: our team is more efficient and we are better able to respond to market conditions,” said Lee. “In 2006, when we adopted FIX, we had three traders. Today, though we handle a lot more trades, we still have only three traders. That’s a clear sign of how FIX has driven efficiency in our team,” he added.
With the buy-side supporting FIX, attention turned to the exchange. Hong Kim and Chang Hee Lee from the KRX and Daegeun Jun from Koscom were all eager to be involved in the ongoing FIX discussions, as they understand the growing importance of the protocol. They demonstrated a clear understanding of FIX and its many benefits, recognizing that many institutional investors are active users. So far, the Exchange has not adopted FIX due to the reluctance of some of its members to make changes.
The Korean market is dominated by retail investors who prefer to trade offshore stocks on their own. Some argue there is also no burning need to adopt the FIX Protocol, as trading desks use the Exchange’s proprietary system to execute their trades. Though it resembles the global protocol, the Korean protocol – KFIX – has some fundamental differences and lacks many of the message types available in FIX. KFIX originated in the Korean environment for trading between local institutions. It has served well until now, but FIX would be the way forward. The Korean Exchange representatives believe their work with other international exchanges, and increasing demand from the international buy-side to track liquidity, may be driving factors that encourage the Exchange to adopt FIX and become part of the global standard.
The conversation has been initiated in Korea and the Exchange’s desire to be involved is also clear. Now what we need to do is to continue the discussion on adopting FIX while taking into consideration the uniqueness of the Korean market.
MiFID I – Lessons Learnt and Looking Ahead
By Andrew Bowley, Chris Pickles
MiFID I has undoubtedly made its impact on the industry. FIXGlobal collates opinion from Nomura’s Andrew Bowley and BT Global Services’ Chris Pickles on the success of MiFID and its next manifestation.
Having digested the massive changes MiFID brought to the EU two years ago, what has the financial community learnt from the content of MiFID I and the process whereby it was developed and implemented?
Andrew Bowley (Nomura):
First and foremost we must conclude that MiFID has worked. We now have genuine competition and higher transparency across Europe.
Costs are down. MTFs (Multilateral Trading Facilities) have brought in cheaper trading rates and simpler cost structures, and most exchanges have followed with substantial fee cuts of their own. Indeed, this pattern is also clearly demonstrated by exception: the one country where MiFID has not been properly introduced is Spain, and it is the one country where fees have effectively increased. This teaches us that complete implementation is the key, and the European Commission needs to look hard at such exceptions.
We have also seen clearing rates reduced, though fragmentation itself has caused clearing charges to increase as a proportion of trading fees, as clearers typically charge per execution. Interoperability should help address that, assuming a positive outcome of the current regulatory review.
In terms of lessons learnt from the process, we must remember that we have experienced a dramatic change in a short period of time, and should allow more time for the market to adjust before drawing final conclusions or looking to further wholesale change. We are certainly still in a period of transition – new MTFs are still launching – and the commercial models of all of these mean that we are far from the final equilibrium. To have so many loss-making MTFs means that we cannot be considered to be operating in a stable, sustainable environment.
Chris Pickles (BT Global Services):
MiFID is a principles-based directive: it doesn’t aim to give detail, but to establish the principles that should be incorporated into national legislation and followed by investment firms (both buy-side and sell-side). Some market participants may have felt that this approach allowed more flexibility, while others wanted to see specific rules for every possible occasion. The European Commission has perhaps taken the best approach by allowing investment firms and regulators to establish for themselves the best ways of complying with the MiFID principles, and has perhaps “turned the tables” on the professionals. If the European Commission had tried to tell the professionals how to do their job, the industry would have been up in arms. Instead, MiFID says what has to be achieved – best execution. Leaving the details of how to achieve this to the industry means that the industry has to work out how to achieve that result. This takes time, effort and discussion. FIX Protocol Ltd. helped to drive that discussion by jointly creating the “MiFID Joint Working Group” in 2004. And the discussion is still continuing. A key thing that the industry has learned – and continues to learn – is to ask “why”. Huge assumptions existed before MiFID that are now being questioned or proven wrong. On-exchange trading doesn’t always produce the best price. Liquidity does not necessarily stick 100% to existing execution venues. Transparency is not achieved just by looking at on-exchange prices. And the customer is not necessarily receiving “best execution” under today’s execution policies.
A European Commission spokesperson says a review of MiFID is likely in 2010, so what can the financial community feed into this review to make MiFID 2 a better version of its predecessor?
Chris Pickles (BT Global Services):
The European Commission has always intended the implementation of MiFID to be a multi-stage process – using the Lamfalussy Process – and a review has always been planned to see how effectively MiFID has achieved its goals and what tuning measures are still needed. Key points that the industry can raise during this review process are:
- Requiring the use of industry standards by regulators for reporting by the industry to those regulators. Though regulators monitor the industry, they are also part of the industry’s infrastructure and can help the cost-efficiency and compliance of the industry by using standards that the industry itself uses. This would include the use of standards like the FIX Protocol for trade reporting and transaction reporting.
- Requiring the use of industry standards by investment firms to meet their MiFID transparency obligations. For example, using the FIX Protocol would allow investment firms to publish their own data in a format that is easy for all to integrate into their own systems. This could help to address the issues around the need for consolidated data across the EU. Execution venues also need to understand that continuing to use their own proprietary protocols is adding unnecessarily to the costs of the industry, whether for trading or for market data distribution.
Andrew Bowley (Nomura):
It is crucial that the financial community contributes to this review, and the European Commission is very keen for that too. On a recent AFME (Association for Financial Markets in Europe) / LIBA (London Investment Banking Association) trip to Brussels, those of us present were encouraged to return with written proposals where detail or refinement is needed – on the consolidated tape, for instance.
The “consolidated tape” debate is one that is not focussed enough today, and is a great example of where we can provide clear input. Nomura intends to be at the heart of this discussion.
It is also vital that the community makes data available to the policy groups. There are still voices suggesting that costs have risen, which is not the case. Data is essential to demonstrating the effects that MiFID has created.
Equally, there is debate around the size of broker dark pools, a debate that would be greatly enhanced by clarity on real trading activity. We should be debating points of policy, not points of fact.
Summing up the year – FPL Americas Conference
By Sara Brady
The FPL Americas Electronic Trading Conference, for those in electronic trading, is always a year-end highlight and this year was no exception. Sara Brady, Program Manager, FPL Americas Conference, Jordan & Jordan thanks all the sponsors, exhibitors and speakers who made this year’s conference a huge success.
The 6th Annual FPL Americas Electronic Trading Conference took place at the New York Marriott Marquis in Times Square on November 4th and 5th, 2009. John Goeller, Co-Chair of the FPL Americas Regional Committee, aptly set the tone for the event in his opening remarks: “We’ve lived through a number of challenging times… and we still have quite a bit of change in front of us.” After a difficult year marked by economic turmoil, the remarkable turnout at the event was proof that the industry is back on its feet and ready to move forward with the changes to the electronic trading space set forth in 2009.
Market Structure and Liquidity
Two topics clearly stood out as key issues coloring many of the discussions at the conference: regulatory impact on the industry, and market structure as influenced by liquidity and high-frequency trading. An overview of industry trends demonstrated that the current challenges facing the marketplace are dominated by these two elements. Market players are still trying to digest the events of 2008 and early 2009, adjusting to the new landscape and assessing the changing pockets of liquidity amidst constrained resources and regulatory scrutiny. The consistent prescription for dealing with this confluence of events is to take things slowly and understand any proposed changes holistically before acting on them and encountering unintended consequences.
The need for a prudent approach towards change and reform was expressed by many panelists, including Owain Self of UBS. According to Self, “Everyone talks about reform. I think ‘reform’ may be the wrong word. Reform would imply that everything is now bad, but I think that we’re looking at a marketplace which has worked extremely efficiently over this period.”
What the industry needs is not an overhaul but perhaps more of a fine-tuning. Liquidity is one area that needs carefully considered fine-tuning: any impulsive regulatory change to a pool of liquidity could negatively impact the industry. The problem is not necessarily with how liquidity is accessed, but with the lack of liquidity that produced the downward price movements that marked a nightmarish 2008. Regulations against dark liquidity and the threshold for display sizes are important issues requiring serious discussion.
Rather than moving forward with regulatory measures that may sound politically correct, there needs to be a better understanding of why this liquidity is trading dark. While there is encouraging dialogue occurring between industry players and regulatory bodies, two things are for sure. We can be certain that the evolution of new liquidity venues is evidence that the old market was not working and that participants are actively seeking new venues. We can also be assured that the market as a messaging mechanism will continue to be as compelling a force as it has been over the last two decades.
Risk
One of the messages the market seems to be sending is that sponsored access, particularly naked access, is an undesirable practice. Presenting the broker-dealer perspective on the issue, Rishi Nangalia of Goldman Sachs noted that while many agree naked sponsored access is not a desirable practice, it still occurs within the industry. A panel on systemic risk and sponsored access identified four types of the latter: naked access, exchange-sponsored access, sponsored access with broker-managed risk systems (also referred to as SDMA or enhanced DMA) and broker-to-broker sponsored access.
According to the U.S. Securities and Exchange Commission (SEC), the commission’s agenda includes a look specifically into the practice of naked access. David Shillman of the SEC weighed in on the commission’s concern by noting, “The concern is, are there appropriate controls being imposed by the broker or anyone else with respect to the customer’s activity, both to protect against financial risk to the sponsored broker and regulatory risk, compliance with various rules?” Panelists agreed that the “appropriate” controls will necessarily adapt existing rules to catch up with the progress made by technology.
On October 23, NASDAQ filed what it believes to be the final amendment to the sponsored access proposal it submitted last year. The proposal addresses the unacceptable risks of naked access and the question of obligations with respect to DMA and sponsored access. The common element of both approaches is that both systems have to meet the same standards of financial and regulatory controls. Jeffrey Davis of NASDAQ commented on his suggested approach: “There are rules on the books now; we think that they leave the firms free to make a risk assessment. The new rules are designed to impose minimum standards to substitute for these risk assessments. This is a very good start for addressing the systemic risk identified.”
These steps may be headed in the right direction, but are they moving fast enough? Shillman added that as sponsored access has grown in usage, there are increasing concerns and a growing sense of urgency to ensure a commission-level rule for the future, hopefully by early next year. This commission proposal would address two key issues: whether controls should be pre-trade (as opposed to post-trade), and an answer to the very important question, “Who controls the controls?”