
Analysing The Cost Of FX Trades

With Daniel Chambers, Head of Trading, Sequoia Capital Fund Management
Sequoia Capital Fund Management (SCFM) is an alternative investment management company specialising in quantitative investment strategies; it returned 13% in 2015, net of fees. Since going live in June 2011, SCFM has delivered an average monthly net return of 0.71%.
The decision to execute entirely through electronic trading was a natural one, in keeping with SCFM's ethos of using technology to increase productivity. SCFM has always traded electronically, with varying degrees of sophistication. Two of the main drivers for this decision were the reduced risk of errors during execution and increased efficiency. SCFM executes all orders in the market electronically during the liquid European trading hours. Although FX is a 24-hour market, liquidity is not distributed equally throughout the day. A look at the daily profile of spreads in the vast majority of pairs highlights how much more one can expect to pay to execute overnight relative to the middle of the European trading day. This is an important consideration when volume constraints are measured against an effective trading day that is shorter than 24 hours.
Execution Process
Trades are executed by sending orders via FIX to an aggregator, with the information for each clip also received via FIX. The trades are sent to the prime broker and matched immediately. As well as matching with the PB, the trades are matched against the initial ticket to ensure execution took place as expected. All of these processes are automated. Once the trades have been matched, it is possible to view information related to the trading session, such as execution costs and share of flow per liquidity provider. This systematic, straight-through process is essential when executing multiple times per day in a large number of crosses. The FIX protocol makes the whole process easier, enabling us to execute in this fashion and to retrieve all relevant information for analysis conveniently. Orders are placed in the market with the intention of creating as little market impact as possible. A conscious decision was made to execute only with banks, and specifically the large institutions, as opposed to ECNs and other third-party vendors. This provides a level of anonymity when executing. There are undoubtedly other participants with whom we could trade on an ECN, but a direct relationship with the LPs supports communication and assists in reducing market impact. In our opinion, executing with enough liquidity providers directly gives the client the best of both worlds: greater anonymity and diverse flow, with extremely tight spreads and great depth.
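As an illustration of the mechanics involved, the sketch below composes a FIX NewOrderSingle by hand, including the BodyLength (9) and CheckSum (10) fields the protocol requires. This is a minimal, illustrative example, not SCFM's actual stack: the CompIDs and order values are invented, and session-level fields such as MsgSeqNum (34) and SendingTime (52) are omitted for brevity.

```python
# Minimal sketch of composing a FIX 4.4 NewOrderSingle. Illustrative only:
# real sessions also carry MsgSeqNum (34), SendingTime (52), etc.
SOH = "\x01"  # FIX field delimiter

def fix_message(pairs):
    """Assemble a FIX message: prepend BodyLength (9), append CheckSum (10)."""
    body = SOH.join(f"{tag}={value}" for tag, value in pairs) + SOH
    partial = f"8=FIX.4.4{SOH}9={len(body)}{SOH}" + body
    checksum = sum(partial.encode()) % 256  # sum of bytes, modulo 256
    return f"{partial}10={checksum:03d}{SOH}"

order = fix_message([
    (35, "D"),           # MsgType: NewOrderSingle
    (49, "BUYSIDE"),     # SenderCompID (invented)
    (56, "AGGREGATOR"),  # TargetCompID (invented)
    (11, "ORD-0001"),    # ClOrdID, later matched against the initial ticket
    (55, "EUR/USD"),     # Symbol
    (54, "1"),           # Side: buy
    (38, "5000000"),     # OrderQty
    (40, "2"),           # OrdType: limit
    (44, "1.0875"),      # Price
])
print(order.replace(SOH, "|"))  # print with visible delimiters
```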
Improving the execution workflow is an ongoing process, to ensure that the most secure and efficient methods are being implemented. The back office/operations systems were prototyped by people with relevant business knowledge, which enabled us to create something effective in a short period of time.
Cost Analysis
A significant amount of work has been carried out to quantify, analyse and understand the various components of our execution costs and how they might be reduced. Costs are measured against various benchmarks, including arrival and risk-transfer price. Over the last few years there appears to have been less liquidity available in the G10 space. The Scandinavian pairs have been particularly difficult, at times behaving like emerging market currencies. Building our capability to drill down into each cross and its various cost components has revealed the most useful information. Deep examination highlights information not available through superficial analysis. Although a lot of what has been revealed is in line with what one might expect, such as EURUSD and USDJPY being cheap to trade, there are other very interesting observations to be made, such as whether trading EURNOK and EURSEK is more cost-efficient than trading NOKSEK directly. Much of the knowledge gleaned leads to enhancements in execution algorithms. We approach it as a four-part process:

  • Collect – What data can be collected and stored?
  • Calculate – What information can be derived from the data?
  • Input – How can that then be used in any part of your execution strategy?
  • Monitor – How can the changes made be monitored and quantified?

This segregation of the different components involved, each with their own elements, allows clear and precise planning for how to implement change.
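As a toy illustration of the "Calculate" step above, the sketch below derives per-pair slippage against an arrival-price benchmark from stored fills. The data layout and values are invented for the example and do not represent SCFM's actual schema.

```python
# Toy "Calculate" step: signed slippage vs arrival price, grouped by pair.
from statistics import mean

# (pair, side, fill_price, arrival_price) - invented sample fills
fills = [
    ("EURNOK", "BUY", 9.4405, 9.4398),
    ("EURSEK", "SELL", 9.2760, 9.2771),
]

def cost_bps(side, fill, arrival):
    """Execution cost in basis points; positive means we paid slippage."""
    sign = 1 if side == "BUY" else -1
    return sign * (fill - arrival) / arrival * 1e4

by_pair = {}
for pair, side, fill, arrival in fills:
    by_pair.setdefault(pair, []).append(cost_bps(side, fill, arrival))

for pair, costs in by_pair.items():
    print(f"{pair}: avg cost {mean(costs):.2f} bps over {len(costs)} fill(s)")
```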
Trade Cost Analysis is a concept that has been around for a while, and we have seen a shift from post-trade cost analysis to pre-trade cost calculation or estimation. The ability to determine the optimal time to execute an order is a progression from simply measuring incurred costs. These estimates typically use factors such as volatility and current volume relative to averages. The nature of financial markets means that on any individual execution you are not guaranteed to be right in your estimation, but over time, and with enough observations, the estimates should be correct within a margin of error. Being able to store and compare estimated versus incurred costs is vital to improving confidence in the estimates.
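A common way to form such a pre-trade estimate is a half-spread-plus-impact heuristic; the square-root participation model below is one widely used form, shown purely as an illustration rather than as the author's model. The coefficient k and all inputs are invented.

```python
# Illustrative pre-trade cost estimate: half spread plus square-root impact.
import math

def pretrade_cost_bps(half_spread_bps, daily_vol_bps, qty, adv, k=1.0):
    """Estimated cost in bps; k is a calibration constant (invented here)."""
    return half_spread_bps + k * daily_vol_bps * math.sqrt(qty / adv)

est = pretrade_cost_bps(half_spread_bps=0.4, daily_vol_bps=60.0,
                        qty=50e6, adv=5e9)
print(f"estimated cost: {est:.2f} bps")
# Store (estimated, incurred) pairs per execution; over many observations
# the error distribution shows whether the estimates deserve confidence.
```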
Market Analysis
With China's circuit breakers triggered twice in the first four days of trading this year and crude close to $30 per barrel, the theme this year appears to be uncertainty. In a period where the Federal Reserve has recently raised rates for the first time in seven years, more and more emphasis is placed on what comes out of the central bank's meetings regarding the pace of subsequent actions. Other central banks proved in the last year that they are willing to shock the market with unexpected decisions if seen as being in their best interest, and we would expect this to remain the case. The increased volatility impacts all participants executing in the markets, and their execution methodology needs to be able to adapt to the environment.

Determining Required Market Surveillance Levels

By Stefan Hendrickx, Founder and Executive Director at Ancoa
Whilst the financial services industry might be breathing a collective sigh of relief at the delay of MiFID II, it would be prudent to remember that not all related regulations have been delayed. The Market Abuse Regulation (MAR), aimed at increasing the transparency, safety and resilience of European financial markets, is due to be implemented in less than four months, on 3rd July 2016.
OTC derivative and high-frequency traders will, for the first time, need to prove they have surveillance capabilities in place, whilst market operators and investment firms will most likely have to reprogramme their internal systems in order to comply with suspicious transaction reporting requirements. Smaller prop shops and hedge funds may be surprised to find that they too fall under the forthcoming MAR.
How do you determine the required surveillance level?
With specific regard to the detection of market abuse, ESMA mandates that a surveillance system should cover the full range of a firm's trading activities. Firms will be required to consider whether an automated system for market surveillance is necessary and, if so, its level of automation. ESMA has laid out a set of criteria which firms are advised to take into account when considering levels of market surveillance, including:

  1. the number of transactions and orders that need to be monitored;
  2. the type of financial instruments traded;
  3. the frequency and volume of order and transactions; and,
  4. the size, complexity and/or nature of their business.

Firms should take note that ESMA has deemed, for the large majority of cases, an automated surveillance system to be the only method capable of analysing every transaction and order, individually and comparatively, and of producing alerts for further analysis.
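As a flavour of what such automated analysis looks like, the sketch below scans an order-event stream and raises alerts on accounts with anomalously high cancel ratios, a crude proxy for layering or spoofing behaviour. The fields, threshold and detection logic are illustrative only; production surveillance systems apply far richer pattern libraries.

```python
# Toy automated surveillance check: flag accounts whose cancel ratio is
# anomalous and emit alerts for human analysis. Threshold is illustrative.
from collections import Counter

order_events = [  # (account, event) - invented stream
    ("ACC1", "NEW"), ("ACC1", "CANCEL"), ("ACC1", "NEW"), ("ACC1", "CANCEL"),
    ("ACC2", "NEW"), ("ACC2", "FILL"),
]

placed, cancelled = Counter(), Counter()
for account, event in order_events:
    if event == "NEW":
        placed[account] += 1
    elif event == "CANCEL":
        cancelled[account] += 1

CANCEL_RATIO_THRESHOLD = 0.9  # invented calibration
for account in placed:
    ratio = cancelled[account] / placed[account]
    if placed[account] >= 2 and ratio >= CANCEL_RATIO_THRESHOLD:
        print(f"ALERT: {account} cancel ratio {ratio:.0%} - review required")
```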
Regardless of what type of surveillance system is eventually decided upon, firms will have to be prepared to justify to regulators how generated alerts are managed by their chosen system and why such a level of automation is appropriate for their business.
What’s new: reporting both manipulation and intent
A change from the existing STR (Suspicious Transaction Reports) regime, the new STOR requirement mandates that suspicious 'orders' be reported to regulators, as well as the 'transactions' that are required today – even if the orders do not proceed to execution. Furthermore, regulators will shortly be reviewing the cancellation or modification of orders, meaning analysis of suspicious orders and transactions which did not result in the submission of a STOR form would also need to be retained and made accessible by the firm.
Time is ticking
The forthcoming additional market abuse requirements, including the reporting of suspicious orders and the preservation of analysis of suspicious orders and transactions, mean that brokers, OTC derivative traders, high-frequency traders, prop shops and hedge funds alike will have to review their surveillance systems in time for the imminent implementation of MAR. They will also have to justify to the regulator the decisions behind their selected systems. Automated surveillance systems have been recommended by regulators as the only method capable, for the large majority of firms, of appropriately analysing every order and transaction. The Market Abuse Regulation may not have hit as many headlines as MiFID II, but unlike MiFID II it is fast approaching, with no delays expected. The time to act is now.

Blocks And The Broker Review

By Kent Rossiter, Head of Asia Pacific Trading, Allianz Global Investors
There are risks around everything we do, and every decision our traders take impacts the final results of our executions. We need to understand the signals and act to the best of our abilities. Risk is central to what traders do, day in and day out: we need to look at the benefits of risk instead of fearing it. Too often a trader's natural instinct is to hedge themselves by dragging trades out through the whole day, even when they have strong conviction that they would be better off taking another course of action. This is the type of behaviour we monitor on the desk and look to change. We have to be open to new technology; some new vendors are very innovative, while others appear to have little differentiation from established products and don't appear to add any real value.
Broker review
For equity TCA we constantly review our brokers on a global basis. We feel it’s important to view our results as best we can against our peers, and our TCA vendor arguably has the largest peer data-set with which to work. Monitoring our executions is an ever-improving process but we are very pleased with our long-term TCA results and peer rankings.
We work with brokers every day and we've known a lot of our counterparts for decades. Over this long time we've come to know who's likely to be the most helpful or knowledgeable about different trading situations and markets. Naturally we want them to be there to help us in the future, so we need to pay the people who can benefit our clients' trading activities. Those brokers most successful in matching up IOIs are likely to be the brokers getting our highest votes as well. However, the voting is not restricted to that one important service: it also rewards good trading calls, execution skills, watching our orders carefully, responsiveness on IB chats, and so on. So it is not just focused on blocks, but on the end result of the whole execution process.
Globally, AllianzGI has three main regional voting groups: the European-based investment teams, the investment teams based in America, and the investment teams based in Asia. Each regional team votes for the brokers they feel provide the best service. If, for example, the US team votes a certain amount of money to go to one broker and we don't end up trading with them in Asia, then that broker can be paid by the US and European trading desks. Or, if we are getting exceptional service from a particular broker, we may trade a lot with them here in Asia, in which case the US and Europe may trade with them far less. Another situation is where the PM votes for a broker because they like the US consumer research analyst, but from a trading standpoint we find that broker to be excellent in Korea, Taiwan and Japan tech names, so we instead trade a lot with them out here in Asia to pay them. And there are situations where some brokers are just poor at execution or technology everywhere, and we don't feel comfortable paying them through trading activities. In this case we have Commission Sharing Agreements (CSAs) in place and can pay them from that pool.
As a consequence it is very important that we sit down with the brokers and be transparent as to where they’re getting the votes from. Every half year, our main brokers come in and we all sit down and go over with them who from our regions are voting for them. When we sit down with the broker, they know which fund manager or tech team appreciates their service even though we might be paying them through a completely different avenue. In that way, the people who are actually providing the service on the research side get rewarded and at the same time, the brokers can clearly see that we trade with them a lot in a given area, and because of that, the broker can assume that they have a good franchise servicing that need.
Generally, as an outcome of the vote, we have 75 or 80 different brokers or research outfits tagged. The research outfits don't have execution capabilities, so we end up trading through one of our top brokers with whom we have the aforementioned CSA programme. Every half year we work out how much from the CSA pool we need to pay these external research providers and organise the payments so they still get rewarded.
Blocks
Actual market liquidity in a stock we want to buy or sell doesn’t necessarily exist when we need to be doing that buying or selling. When orders are of block size it often doesn’t make sense to test the waters getting small executions because we might not make much progress before we find ourselves pushing the stock price. By putting out some feelers in the right places we often get a look at contra-liquidity before an order goes into the market. Patience doesn’t always pan out however. Finding and trading blocks is therefore often a timing decision, with a bit of good luck thrown in. It is not worth immediately going to the market just because we happen to get the order at 10:08am. If the liquidity is not there, we might want to wait and be a bit more patient to try to line up blocks by talking with our contacts or using the technology which is out there.
We’re able to do the searching ourselves but the fact is we still find most of the flow through our large panel of brokers. Without the information from these brokers, we would have far fewer matches and fewer trading opportunities.
Many crossing platforms are very innovative and they’re moving in the right direction. Often we’ll see five IOIs in a name from brokers so we decide whether we want to negotiate with the broker or try our luck in the anonymous electronic matching systems.
One of the most satisfying results, which only happens rarely, is matching up flow by letting PMs know of outsized block flow we may see on the Street, even when we don't have live orders. When our PMs hear about such opportunities they can be quite responsive and generate an order on the back of it. That's a win-win for both sides; we each get to execute size that would never have been there in the market if it weren't for that initial call.
And sometimes a broker comes to us. They know that we’re a big holder, either because we’ve traded the stock with them before in size, or they can see us as a significant shareholder on the Bloomberg HDS page or another reporting system. The broker will come and tell us that they have a chunky size in case we’re interested. Even though I might not have an order on my desk, it provides the perfect opportunity to go to the PM and see if they want to take advantage of the unexpected liquidity.
The great thing about doing this is that there was only one side of the trade to start with. The other side didn't exist, and yet because we work with the broker and PM together, we are able to create something. Accomplishing this requires communication with our PMs, and skill and trust between the brokers and the buy-side – something that won't likely be automated anytime soon.

How Automation Can Future Proof Trading Systems And Meet Regulatory Requirements

By Jim Northey, Principal Consultant and Industry Standards Liaison, Itiviti
All market participants are under enormous pressure to maintain profitability in the face of stagnant trading volumes, sustained record low interest rates, increased regulatory pressure, and stifling competition. No one is immune to these challenges. Venues, buy-sides, sell-sides, service and product providers are all being squeezed. The adoption of advanced automation of heretofore largely manual processes can both future proof trading systems and meet regulatory requirements.
Surprisingly, manual processes remain very prevalent within the financial markets today, particularly for fairly standard tasks such as regression testing and client onboarding, both of which are labour-intensive and divert critical resources away from adding value elsewhere in the organisation. They also expose firms to the risk of human error, which can be costly and lead to trading errors and lost business.
Moving towards automation
Testing is an area that is ripe for automation. The supply chain in a trading environment, even at small and medium-sized enterprises (SMEs), consists of multiple computer applications, systems and networks, often spread across several geographical locations. The effort required to construct test cases that fully exercise the system when one component changes is prohibitively expensive in terms of labour and time. Further compounding the issue are changes by counterparties to their own systems, which can have an impact on internal systems. When one piece of the trading infrastructure is upgraded, the whole chain needs to be tested to ensure there are no unintended impacts on upstream or downstream programs. Every time a trading or order management application is updated, the whole end-to-end flow requires full regression testing. These tests are standard and repetitive. Automating the process allows firms to reduce the resources devoted to testing while simultaneously expanding the testing capability, and it helps prevent oversights where important components or integrations are left untested, or insufficiently tested, when updates are rushed to market. Today many sell-sides use some tools to automate this process, but in large part they remain consumed by time-intensive manual testing – either because their testing tools lack full automation capabilities or because the tools simply don't provide the full range of functionality.
The move to offshore manual regression testing regimes that started nearly twenty years ago has not resulted in improved testing efficacy; the offshoring of manual testing has simply lowered the price of inadequate testing. Automated testing practices can not only cost less than outsourced manual testing; if techniques such as model-based testing and service virtualisation are employed, the effectiveness of testing can increase dramatically. The cost per test case can be orders of magnitude lower, which means a far higher volume of testing can be done.
Another popular technique is the Agile practice of Behaviour Driven Development (BDD), using software testing tools based upon Cucumber. There is no argument that BDD works well when building a new application in which requirements are fluid and the solution to a business problem is still being discovered, but the result is a very narrow thread of testing that does not provide sufficient breadth and depth to fully exercise a distributed system. BDD and testing tools such as Cucumber and the Gherkin language are important, but they are insufficient to fully exercise and test complex trading applications.
We have the benefit that our distributed trading applications are usually connected by industry-standard messaging protocols (FIX and ISO 20022) and de facto standards from exchanges (ITCH and OUCH) and vendors (think of the major market data vendors and EMS APIs). Each of these protocols defines a specific model of behaviour. This predefined system behaviour enables the use of model-based testing. Model-based testing uses a model of the system under test to automatically permute possible behaviours of the system, unconstrained by human assumptions about "how the system should behave". The permutations generated by model-based testing have a much higher likelihood of uncovering the unexpected errors and outages that we have come to accept as part of doing business.
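The sketch below shows the idea on a deliberately simplified order-lifecycle model: valid transitions are declared once, and permutations of events become test cases with model-derived expectations, including illegal orderings a human test writer would rarely enumerate. The model is an invented toy, not the full FIX order state machine.

```python
# Model-based test generation over a toy order-lifecycle model.
from itertools import permutations

TRANSITIONS = {  # (state, event) -> next state; simplified, illustrative
    ("NEW", "ack"): "ACKED",
    ("NEW", "reject"): "REJECTED",
    ("ACKED", "partial"): "PARTIAL",
    ("ACKED", "fill"): "FILLED",
    ("ACKED", "cancel"): "CANCELLED",
    ("PARTIAL", "partial"): "PARTIAL",
    ("PARTIAL", "fill"): "FILLED",
    ("PARTIAL", "cancel"): "CANCELLED",
}

def expected_outcome(events):
    """Walk the model; illegal sequences must be rejected by the system."""
    state = "NEW"
    for event in events:
        next_state = TRANSITIONS.get((state, event))
        if next_state is None:
            return "REJECT_SEQUENCE"
        state = next_state
    return state

# Every permutation becomes a test case with a model-derived expectation.
for case in permutations(["ack", "partial", "fill", "cancel"], 3):
    print(case, "->", expected_outcome(case))
```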
Time and cost
It is an error to assume that test script development and execution account for the majority of cost in a complex distributed test environment. Much of the time is consumed by configuring the test environment. Even when firms have adopted best practices in continuous integration, continuous deployment and DevOps, considerable effort is still required to configure systems for the various testing scenarios. This is where service virtualisation – the ability to emulate the behaviour of systems based upon well-defined protocols – can provide significant benefit. The learning curve and a lack of available expertise have limited the use of both model-based testing and service virtualisation. A focus on employing them in very specific domains such as FIX can provide a powerful, readily accessible automated testing environment with an immediate return.
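A service-virtualisation stub at the FIX layer can be as simple as the sketch below: a stand-in counterparty that answers a NewOrderSingle with a canned, fully filled ExecutionReport, so downstream systems can be exercised without a live venue. Tag handling is deliberately minimal and illustrative.

```python
# Toy virtualised FIX counterparty: canned responses keyed on MsgType (35).
def virtual_counterparty(message):
    """message: dict of FIX tag -> value; returns a canned response dict."""
    if message.get(35) == "D":            # NewOrderSingle
        return {
            35: "8",                      # ExecutionReport
            11: message[11],              # echo ClOrdID
            150: "F",                     # ExecType: Trade (FIX 4.4)
            39: "2",                      # OrdStatus: Filled
            31: message.get(44, "0"),     # LastPx: fill at the limit price
            32: message.get(38, "0"),     # LastQty: fill the full size
        }
    return {35: "3"}                      # Reject anything else

report = virtual_counterparty({35: "D", 11: "ORD-1", 38: "100", 44: "10.5"})
assert report[39] == "2"                  # downstream logic sees a fill
```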
Client onboarding and certification is another business area weighed down and compromised by manual processes. Before a new client can start trading, a raft of integration work and certification testing must first take place. Connections must be established and then checked to ensure all systems are communicating effectively. Ideally the client sends a number of different types of test orders to ensure orders will be processed correctly from end to end. Once this has been completed, business can commence.
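Automating that certification step can amount to replaying a scripted suite of test orders and checking each final execution report against an expectation, as in the hypothetical sketch below. The send_order callable, test cases and tag values are all invented stand-ins for whatever harness a vendor actually provides.

```python
# Hypothetical automated certification suite. Each case sends a scripted
# order and compares the final ExecType (tag 150) with the expectation.
TEST_CASES = [
    # (name, order fields, expected final ExecType)
    ("limit order fills",    {40: "2", 44: "100.0"},         "F"),
    ("market order fills",   {40: "1"},                      "F"),
    ("IOC away from market", {40: "2", 59: "3", 44: "0.01"}, "4"),  # Canceled
]

def certify(send_order):
    """send_order: callable taking order fields, returning final tag 150."""
    failures = []
    for name, fields, expected in TEST_CASES:
        result = send_order(fields)
        if result != expected:
            failures.append((name, expected, result))
    return failures

# failures = certify(vendor_harness.send_order)  # vendor hook (hypothetical)
```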
Approximately 20 percent of customer onboarding is currently automated. And it is not uncommon for this process to take a couple of months from start to finish, and in extreme cases it has taken up to six months. Because the process is so labour intensive, some sell-side firms maintain substantial backlogs of new and existing clients still waiting for connections to trade. This is costing the business.
Client onboarding is a major pain point for many sell-side firms, sapping large teams of internal resources and needlessly blocking potential revenue. And it is not only a problem that exists for new clients. Existing clients also go through the onboarding process each time they start to trade new markets or when the sell-side makes a major system upgrade. Slow customer onboarding time and time again leads to customer frustration. Automating the process removes this client dissatisfaction and importantly closes the revenue gap.
Automating the onboarding process also makes it much easier for a sell-side firm to upgrade its FIX connectivity layer. Let's say a broker has 1,000 clients. During an infrastructure migration, all of these existing clients need to be onboarded again, and the broker must ensure the ability to route orders is not impacted. The prospect of doing this manually can discourage many firms from performing a valuable upgrade. But if the onboarding process is automated, improving (and future-proofing) the firm's own FIX infrastructure becomes a much simpler and less daunting task.
While savvy trading firms will always evaluate technology investment carefully, automating key manual processes that immediately reduce operational costs and shorten time to revenue, while improving customer retention and satisfaction over the long term, should be an easy decision. These are key weapons that future-proof connectivity and system infrastructure while positioning firms for greater growth and leadership.
With a focus on improving and simplifying existing processes and systems, automation is a first step in reducing overhead, simplifying the process of upgrading, and taking advantage of new technology as it emerges.
To kill two birds with one stone, some may ask how automation will help meet regulatory requirements such as Reg SCI and the emerging requirements from ESMA in Europe. Building a reporting system on top of existing manual testing processes to meet regulatory requirements adds cost, time and resources. Automated systems, by contrast, produce output that can be used to meet reporting requirements with little effort. Take this a step further with a "fit for purpose" reporting system designed around regulatory requirements, and you end up with significantly improved testing, at lower cost, that also meets reporting obligations.

Setting The Standard

By Kuhan Tharmananthar, Etrading Software
You would think that, after centuries of standards – from the Chinese measurement of distance, the 'li', to the metric system itself – we would happily develop and embrace them across the global economy. After all, a standard makes everything easier for everyone... but in a free market where competitive advantage is sought and leveraged by every lawful measure, 'easier for everyone' is not necessarily what everyone wants. The British Standards Institution defines a standard as 'an agreed way of doing something'. The challenges become obvious when there is money or advantage to be gained by not agreeing.
There are a few examples where standardisation was required early in the development of a market. Physical electronics is quite a good one: interfaces and connections are often turned into a standard before being manufactured, although some firms are able to stick to proprietary approaches by providing a useful solution at the right time in a market's development. This creates a market presence that generates a standard almost by default. The RED code for the credit default swap market is a good example of this kind of standard in finance. The code came into existence through necessity, as a proprietary data point developed by a group of key participants, and was then formed into a separate venture to serve the whole market. That venture is now valued at more than $5bn. Whilst standards of this type facilitate the growth of a market, they can also generate different challenges for participants, especially in an environment undergoing constant change from regulations and economic conditions.
This is where an open standard can deliver additional benefits to the whole market.  A transparent funding model alongside a standard that is governed by the industry can ensure that cost reductions created by technological innovation are applied across the market.  In addition, it provides a forum for relevant regulations to be addressed, where possible, with a mutualized approach.  It is increasingly clear that it is only through this open model that cost reduction can be maximized.  Industry governance also provides a more stable platform for the standard – giving the whole industry transparency on any upcoming changes and, indeed, influence on the direction it might be taking.  This foundation enables the investment and development of value-added services.
Putting aside the open or proprietary nature of a standard, it is interesting to note the circumstances that cause an open standard to be developed in the financial industry: one, a problem that is common across participants; two, part of the problem involves the capture of value away from the participants who generate the underlying data; three, environmental factors, such as regulations, highlight the impact of the problem; and four, technological development reaches a point where it is largely commoditised and low-cost. Neptune is an example of an industry utility, built on standards, that meets these four criteria, which is one of the reasons the network continues to grow in an increasingly crowded market.
A good example of an opportunity for an open standard is market data. Market data is expensive for all participants – sell-side, buy-side and CSDs amongst them. In addition, despite there being a few composite prices in the corporate bond market, their derivation is unclear to many market participants. Upcoming regulations in Europe and the US around transparency and best execution are only going to increase the importance and value of this market data. Finally, using open-standard FIX best practices and utilising the gateways and technologies already familiar to market participants, it is possible to implement a low-cost network that allows providers to retain ownership and control, and receivers to reduce their cost base. In addition, a composite that is defined and governed by the industry can be delivered alongside the individual data.
There are other opportunities that fit this model for an implementation of an open standard.  Of course, at the moment, much of the financial market is occupied with meeting regulatory requirements as quickly and efficiently as possible.  However, sometimes a group with a clear and shared vision can meet those requirements and address long-standing issues at the same time without incurring huge costs.
In the final article of these three pieces, we will look at the key ingredients for a standard's success in capital markets, and at where current standards are being brought together and combined to help the industry in an increasingly challenging environment.

Service And Support: A Minor Consideration Until You Need It

By Clayton Meadows, Head of Operations, REDI Global Technologies
As trading platforms have evolved over the years, their features have grown in number and sophistication. With every technology advancement, regulatory change and market structure shift, new capabilities have emerged for traders to master. At the same time as this explosion in functionality, true integration with the rest of the trading stack has become a reality as well, allowing for a seamless flow of order information straight through the entire investment process.
While these new tools and technologies have resulted in clear efficiency benefits, they often come at a price: service. Many software providers treat support as an afterthought, resulting in high employee turnover and under-skilled staffers. This leaves users with myriad frustrations: increased onboarding times, inexperienced support desks and the inability to successfully tackle unique client issues, which not only impacts productivity, but adds business risk as well.
Many vendor reviews, however, focus almost exclusively on capabilities, technology and cost. Service and support is all too often an afterthought: that is, until it’s needed, when it becomes an extremely important component of the overall client/vendor relationship. In order to save some of the potential headaches that we’ve seen, we offer the following questions to consider when vetting a provider:
Initial onboard and setup

  • What is the average length of time that a vendor's onboard takes? While the onboarding process requires the client to be an active participant with regards to document completion and the like, straightforward onboards should be completable in a week or two, and often in as little as a few days.
  • How large is the client implementation team, and what is the average amount of experience possessed by the team, both in the industry and with the particular vendor? Account setup can be extremely complicated depending on the quality of the vendor's internal tools platform, which often receives far fewer development resources than the client-facing system. The team must also be proficient with the entitlements and clearing integrations required. Having a knowledgeable group driving user setup not only makes the process run more smoothly, but also enables you to be back up and trading quickly should an unforeseen issue arise once live.
  • Similarly, how many FIX engineers does the vendor have, and what is the team’s average experience and tenure? Are the engineers globally dispersed, allowing for 24×6 coverage, or are they all in a single location? In addition, does the vendor maintain its own FIX network, or leverage a third-party? While third-party networks can provide efficiencies, they also create an additional layer of dependencies (read: extra time) when onboarding or troubleshooting.

Ongoing service and support

  • How is training handled? Is it solely via documentation, or is on-site support provided as well? And if so, at what cost? User guide documentation is obviously important, but live training is valued by many users as well.
  • Where is the vendor’s support staff geographically located, and what is their availability? Are they knowledgeable on not only the system, but overall market structure and the way the various products trade as well? Understanding how an instrument trades is imperative if an issue arises, particularly in derivatives markets where risk is exacerbated by the notional values of trades that can be in the tens of millions of dollars.
  • Should you desire it, is the vendor available to shadow your trading and provide proactive suggestions to improve trader workflows? A quality service and support organisation should be able to provide value at all times, not just during an issue.

Bespoke Services

  • How flexible is the system? Can the GUI be customised to meet your workflow needs or emulate a look/feel that you’re accustomed to? If it can, is it provided as part of the relationship, or is there an extra consulting fee involved?
  • Are development services available to provide bespoke solutions like custom reports or API-/DDE-based integrations? If so, what is the cost? Having access to custom development resources can help improve efficiencies and also enable the vendor's platform to match your needs, rather than vice versa.

At the end of the day, good service won’t salvage a bad product, but a poorly run, inexperienced service organisation can definitely outweigh the benefits that even a quality platform provides. Hopefully with a little bit of extra due diligence, you’ll never have to find out for yourself.

Deepening Risk Management Practice: Technological Glitch Planning

With Leo Li, Equity Operational Risk Manager, Vanguard
Many asset managers currently use one Execution Management System (EMS) to manage connections with all brokers. Usually this situation starts with the trading desk onboarding an EMS to manage all connections, and traders eventually become intimately familiar with its layout and functionality. While there are benefits to having a single system, such as improved operational efficiency and lower maintenance costs, it leaves the asset manager with a single point of risk should the EMS connection become unavailable. Asset managers have historically considered the EMS part of their internal systems, operated and controlled internally. However, as we have seen over and over in the past few years, there is tremendous risk for trading desks if an EMS provider suffers a failure due to connectivity issues, natural disasters or regulatory sanctions.
Let’s think more like risk managers.
Asset managers should consider implementing multiple EMSs in order to mitigate the risk of losing the functionality of a single-point system. While the time and resources needed to implement new systems can appear overwhelming, and at times redundant, the asset management industry as a whole could benefit from having multiple systems. The key to success is getting all traders familiar with multiple systems. Although each EMS provides similar functions, the subtle operational differences among EMSs can cause major errors, especially when it comes to selecting the right algorithm and trading strategies.
We have seen scenarios where a person or team is replaced by a piece of technology or software. At the same time, we also realise that some human redundancy has been lost in terms of coverage and backups. When an employee is out, there should be coverage for their role and responsibilities, especially for trading-related functions. When connectivity software is down, do all front office operations have a backup?
We should think like risk managers as often as we think like asset managers: consider what could go wrong, and try to address those areas before any glitch occurs. The hardest part is anticipating potential points of failure in a relatively low-volume, stable environment. From a different perspective, if we take the one FIX connection to the EMS out of the trading equation, what would happen to the front office? Is business continuity impacted? How long can the business function without the single EMS? With that picture in mind, what do we wish for in order to make the situation better (e.g. direct connections, multiple EMSs)? Then start the solution from there.
The solution will likely vary from firm to firm as each has its own internal procedures and processes related to the EMS and process of execution. Each firm, for example, should consider the full life cycle of an order; in terms of how the order is generated and how it is executed. At any stage if there’s a single point of failure, especially when it comes to order routing via FIX message, there should be multiple points of access to the top counterparties.
This is similar to a stream of traffic travelling over a single bridge. But if the bridge is temporarily unavailable, are there detours? Some questions to consider:

  • Does the firm have alternative means to route orders to their final destination via FIX messages?
  • Are the front office traders familiar with the alternative EMS's functionality?
  • Can connections also be made directly to brokers, with the Order Management System as a gateway?
  • Can trading be done over the phone? If that is an option, is the compliance group comfortable with pre-trade compliance not running?
  • Would the counterparties be able to take all the trades over the phone?

These are all questions that can be resolved by encouraging firms to have redundancies within their front office operations.
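To make the "detour" idea concrete, the sketch below routes an order through a prioritised list of points of access, failing over from a primary EMS to a backup and then to a direct broker connection. The connection objects are invented stand-ins; real failover must also handle in-flight orders and state reconciliation.

```python
# Toy failover router: try each point of access in priority order.
class ConnectionDown(Exception):
    """Raised when a FIX session / EMS connection is unavailable."""

def send_via_primary_ems(order):
    raise ConnectionDown("primary EMS session down")  # simulated outage

def send_via_backup_ems(order):
    print(f"routed {order['qty']} {order['symbol']} via backup EMS")

def send_direct_to_broker(order):
    print(f"routed {order['qty']} {order['symbol']} direct to broker")

ROUTES = [
    ("primary_ems", send_via_primary_ems),
    ("backup_ems", send_via_backup_ems),
    ("direct_broker", send_direct_to_broker),
]

def route_order(order, routes=ROUTES):
    for name, send in routes:
        try:
            send(order)
            return name                    # first route that accepts wins
        except ConnectionDown:
            continue                       # take the next detour
    raise RuntimeError("no electronic route left - escalate to the phone")

print(route_order({"symbol": "XYZ", "qty": 100}))  # -> backup_ems
```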
Potential solution and approach
The cost and complexity of adopting additional EMSs can vary greatly depending on the firm's size and trading volume, and the infrastructure may take a long time to establish, especially when an asset manager wants to be connected to a large number of counterparties. At the same time, it is crucial to ensure that traders become familiar with the alternate EMSs' functions.
We should also encourage EMS providers to develop a seamless transition process, which could lower the cost of transition for most firms. Asset managers should also be cognisant that having two EMSs does not provide redundancy if one relies on the other for connectivity.
FIX messaging has evolved many times in the past to accommodate new trading strategies and algorithms, and firms have rapidly adapted to each new standard. We believe it is time for the FIX connection infrastructure to evolve as well.

Industry Collaboration In The Year Ahead

By Tim Healy, Global Marketing And Communications Manager, FIX Trading Community
'As January goes, so goes the year' is an old Wall Street saying. I'm certainly not going to make any stock market predictions for 2016, but I can say with some conviction that the year is likely to be a busy one, given how January has started.
2015 was an extremely active year for the FIX Trading Community. Nine new Working Groups were formed through the year, some of which will be mentioned in other articles in this issue. Whilst a number of these did concentrate on MiFID II, the growing global focus on cybersecurity and blockchain was not lost on the members, as both topics were addressed with new Working Groups. The subject matter of each is wildly different and extremely far-reaching. FIX has been lucky enough to call upon experts on each subject to act as Co-Chairs, and tangible work is being done, with people showing great interest in how FIX fits into these respective areas.
The Cybersecurity Working Group reviewed, updated and augmented a white paper with a set of security threat scenarios. The intent is to illustrate the proposals in the white paper with examples of strategies an adversary might employ to disrupt, imitate or modify legitimate traffic between electronic counterparties.
The Working Group has begun work on a number of other initiatives to further support its objectives and will look to engage with membership actively over the course of 2016. In addition to this work, a Regulatory Subgroup has been formed to monitor and provide insight regarding emerging global regulations dealing with cybersecurity.
In 2015, the Global Post Trade Working Group continued the strong momentum built up in 2014 as it looked to add further asset classes to the list of Recommended Practices using FIX in the post-trade space. Equity Swaps, FX and Futures are all currently being addressed, and there is also an initiative in the group to cross-reference with the Digital Currency/Blockchain Working Group to look further at this technological revolution and the work both Working Groups are doing.
At FIX, we are lucky enough to have strong input from a very active global Buy Side Committee. 2015 saw the initiative to make the IPO order entry and allocation process more efficient and less error-prone gain greater momentum. The two vendors supporting this electronification became members of the FIX Trading Community and fully engaged with the work the Buy-side had been doing. A set of Best Practices was published in 2015 and the focus will now be to engage with the Sell-side to complete the process fully.
In Europe, there has been much talk and focus on MiFID II, its timings and its demands for all market participants, vendors included, as the industry looks ahead to early 2017. Much of the work that FIX has done over the years promoting the use of standards is key to what the regulators would like to see from the industry. During 2015, further work on Execution Venue Analysis was conducted by the EMEA Business Practices Subcommittee. Under MiFID II, each broker will need to demonstrate where and why they send orders to a specific execution venue. Providing the venue reference on each execution in real time will allow clients to monitor precisely where their orders were executed. An updated version of the Execution Venue Reporting Best Practices was published in October 2015, focused on adding further clarity around Last Capacity and Liquidity flag definitions, as well as mandating that all executing venues, including broker crossing and alternative trading systems, supply valid Market Identifier Codes (MICs) on their executions. The group will also be liaising with the buy-side in the US, the original authors of this document, to ensure continuity and collaboration going forward.
To finish, it would be appropriate to mention the work done by the various members of the Global Technical Committee. A new working group, originally called FIX Service Profile and since renamed the FIX Orchestra Working Group, was formed to generate machine-readable rules for FIX specifications, improving operational efficiency and the value of the FIX Protocol by reducing the time and effort it takes to onboard, certify and deploy new FIX connections with counterparties. Additionally, in November 2015 we announced the publication on GitHub of two specifications by the High Performance Working Group. Although highly technical in nature, they address a real business need: a high-performance interface for order routing and market data to support the modern trading venue.
The work done on this encapsulates the spirit of the FIX Trading Community: addressing changes in the marketplace and making the results freely and publicly accessible, allowing feedback from the market and developers. We strive to be inclusive and will continue to do so, keeping the Community and the market fully aware of the work we do in 2016.

Driving Execution Venue Analysis To A Global Audience

With Brian Lees, Chair, Execution Venue Sub-Group, Global Buy-Side Committee, AVP, Trading Technology Manager, Capital Group, and Irina Sonich-Bright, Director, AES Business Development, Credit Suisse, Co-Chair of the FIX European Business Practices Subcommittee and Co-Chair of the FIX MiFID II Transparency Working Group
Brian: Many of the execution venue sub-group's most recent updates have been driven by ongoing changes to European regulations around brokers' execution capacities. Some aspects need to be clarified because they are not addressed in the current documentation, including the issue of properly capturing time stamps. This is becoming more important due to MiFID II requirements covering transaction reporting and time stamps.
We also need standardisation of trade time stamps in order to conduct proper TCA, because we are beginning to match executions against market tick data for analysis. If time stamps are inaccurate, it is hard to find our footprint in the market. The problem goes beyond how granular a time stamp is; we may also have the wrong time stamp altogether. People in the group have found that time stamps don't actually represent the 'actual' time that the order transacted on the venue itself. Instead, brokers are stamping the order as it passes through their gateway before it is returned to us. We require clarification that it is always the actual time stamp from the venue that comes back. Tightening up this process will make it easier to analyse and use the data as intended.
Irina: I agree that there is still an ongoing dialogue regarding what kind of timestamp we need to give to our clients – should it be the timestamp when the actual execution took place, or when the broker sent the execution to the client? In theory you need to know both – the time of the execution and the time the broker sends the message to you – and without mutual agreement as to what should go into the execution time tag, you end up with different interpretations. The next level is the granularity of the execution timestamp. MiFID II is trying to address both of these issues, but until it is finalised we rely on collaboration forums like the FIX Trading Community to address the immediate demand. It is worth mentioning that, just as with the liquidity provision tag (851) in Europe, not all exchanges provide us with millisecond granularity right now. However, where we are able to access this data, we can deliver it to clients.
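A simple buy-side sanity check follows directly from this: parse tag 60 (TransactTime) on each fill and flag timestamps that carry no sub-second precision, which can hint at a gateway stamp rather than a venue time. The sketch below is illustrative, with invented sample values.

```python
# Flag execution timestamps (FIX tag 60) lacking sub-second granularity.
from datetime import datetime

def parse_tag60(value):
    """Parse a FIX UTCTimestamp, with or without fractional seconds."""
    fmt = "%Y%m%d-%H:%M:%S.%f" if "." in value else "%Y%m%d-%H:%M:%S"
    return datetime.strptime(value, fmt)

fills = ["20160311-09:30:01.123", "20160311-09:30:02"]  # invented samples
for raw in fills:
    parse_tag60(raw)  # raises ValueError on malformed stamps
    if "." not in raw:
        print(f"{raw}: second-level granularity only - "
              "possibly a gateway stamp; query the broker for the venue time")
```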
We still have clients who do not receive all the necessary information, and if they do receive the information, it may not be consistent when presented by different brokers. There may also be clients who request similar information but in a different shape or form. Also, between the buy-side and the sell-side we have vendors making changes to the data on broker or client requests that might be inconsistent with someone else’s ask.
The execution venue paper (in its original form) therefore began a very necessary push towards standardising our approach to these issues of transparency. This is an area that will continue to develop as more specific technical requirements for MiFID II implementation are released.
Global and sell-side involvement
Brian: Another of our ongoing goals is to increase the global reach of the working group (which has been primarily US-based), to try to bring together conversations in Europe and the US. We can bring Asia into the conversation, but it has been difficult getting some of the necessary information, like tag 851, from the exchanges. As a result, we are less concerned with Asia; the market structure doesn't drive as much of a need there as it does in the US and Europe.
The key to continuing the development of this working group is to get more people involved. I am hoping to reboot the whole project, to expand it globally to bring in a much broader representation of the buy-side and sell-side and to get it all moving again. With all the changes to the regulatory process, we need clarification put out there on how to manage transparency and the associated data – the requirement is only going to grow.
A further point to make is that the group was originally a buy-side only working group but we are hearing from brokers that their clients are asking for increased granularity in the data. As a result, we would like to expand the group to include more buy- and sell-side representation, so that we can understand the issues on both sides in order to get our documentation reflective of what we face in these markets. We have to decide on the precise scope of the working group and what should be considered a special case, and we need to work together with the sell-side to figure that out.
In addition, we have undertaken a mapping of tag 851 values for US exchanges, from the raw exchange hub values to the standard tag 851 values of 1, 2, 3 and 4. Something similar might be needed for the European exchanges, but we don't know whether the European sell-side is having the same difficulties mapping these tags as the US sell-side did.
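Conceptually, such a mapping is a small normalisation table, as in the sketch below. The raw venue codes are invented; the target values are the standard tag 851 (LastLiquidityInd) meanings: 1 = added liquidity, 2 = removed liquidity, 3 = liquidity routed out, 4 = auction.

```python
# Normalise raw venue liquidity flags into FIX tag 851 (LastLiquidityInd).
RAW_TO_TAG_851 = {  # raw codes invented for illustration
    "A": 1,  # passive fill: added liquidity
    "R": 2,  # aggressive fill: removed liquidity
    "X": 3,  # liquidity routed out to another venue
    "C": 4,  # auction fill
}

def normalise_liquidity_flag(raw_code):
    try:
        return RAW_TO_TAG_851[raw_code]
    except KeyError:
        # Surface unmapped codes rather than guessing - this is exactly
        # the per-venue work the group undertook for the US exchanges.
        raise ValueError(f"unmapped venue liquidity code: {raw_code!r}")

assert normalise_liquidity_flag("A") == 1
```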
Irina: We are also receiving feedback from buy-side firms who are not necessarily participating in driving the change. For example, there are still buy-side firms who are unable to support MIC codes and we need their input to ensure we take their technology into account.
This is why we like a proactive buy-side acting as the driving force for the working group. We are trying to be more proactive too, but the message needs to go out that this is now industry-standard and firms need to be involved.
We already have a good foundation upon which to build a more detailed framework, and if we reach a broader spectrum of people then we can include more diverse requirements in the documentation.
The global impact of a change adds yet another layer of complexity to streamlined implementation. We have different regulations and initiatives, and sometimes contrasting interpretations of scenarios, in different countries. While developing a global standard, people need to tell us when something doesn't work for them. Even minor changes to the specifications (especially if they override the meaning of a previous value) are very dramatic, because they cause a ripple effect. The sooner more people interact with the group that actually designed these specifications, the better it is for everyone.
The FIX protocol used to be perceived as an IT speciality, but changes in the protocol and our ongoing work are increasingly affecting the trading and business parts of a firm.
There is considerable dialogue and coordination between the different buy-sides and sell-sides to understand whether what was asked for is actually possible.
Brian: We have traders who are now aware of tags 29, 30 and 851. The groundwork has been laid but the key is getting more people involved in order to move this forward and to make sure that the industry’s needs are represented.

Finding Blocks – A Case Study – Industry Collaboration in Action

With Robert Barnes, CEO, Turquoise, Jonathan Finney, Head of Systematic Trading EMEAA, Fidelity International, and James Hilton, Co-Head AES Sales EMEA, Credit Suisse
Robert: In an order book environment that naturally leads to small trade sizes, investors that wish to outperform benchmarks are calling for innovation in electronic block trading.
To answer this call and still trade in the presence of anticipated MiFID II double volume caps, one needs a respected and working trading mechanism that can match orders above 100% of the Large In Scale (LIS) threshold determined per stock by ESMA.
Turquoise has a working LIS innovation: Turquoise Block Discovery, which matches undisclosed Block Indications that execute in Turquoise Uncross. The Turquoise design reflects active collaboration with the buy-side and sell-side.1,2
Jonathan: With impending MiFID II legislation, Fidelity as a whole has commended much of the innovation that has happened in the markets and been actively engaged on several projects, many from inception. However, as a large institution we also very much err on the side of caution and, when Turquoise approached us about their proposed new venue, we met this with cautious optimism. Our concerns were varied but were mainly focussed on due diligence, risk and control analysis and general policing of participant behaviour. We could definitely see potential in their proposals but we felt there was also a lot of granular detail which needed to be worked through for us to be comfortable as a firm from a best execution perspective.
Therefore, when Turquoise Block Discovery first launched, we were not one of the first initiators, but we definitely had an interest in its success and were actively engaged with Turquoise, buy-side peers and sell-side counterparts in the meantime, discussing what we needed in order to be comfortable engaging. By understanding how the reputational scoring worked, gaining increasing comfort in the transparency of the policing mechanisms, and attaining a greater understanding of the types, size and quality of trades occurring, we felt over time that we were in an increasingly strong position to justify adding Turquoise Block Discovery to our selection of venues via brokers, especially using the conditional order type. I am pleased to say that, though success has been relatively scarce as adoption is still in its early growth phase, the quality of liquidity has scored well across a variety of external metrics.
James: There have undoubtedly been challenges in implementing this solution. Even before you look at the technology development, you have to spend some time understanding and getting comfortable with the model – it's relatively unusual. And clearly, we had to garner feedback from clients to see whether there would actually be demand for it.
Once we’d established that it would be a viable model, and clients wanted to use it, we invested in the technology development. Luckily for the buy-side, the heavy lifting definitely falls on the sell-side. Supporting the full conditional nature of the service has required some effort, and there are still some key brokers working on that. Because of the model, Credit Suisse offers access to the service on an opt-in basis. So, whilst you have some brokers not offering full functionality, and buy-side having to positively opt-in, invariably it takes time for the momentum to gather. The key to the service is that there’s virtually no opportunity cost. We have been meeting a lot of clients to talk about the service, and explain why this is something we feel is worth supporting and the majority are signing up.
Key metrics
Robert: The metrics that matter include low reversion, large average fill size and high firm-up rates.
Low reversion
The design of Turquoise Uncross innovates with a random timing mechanism which prevents either buyer or seller from determining the instant of execution. This has been the subject of multiple studies by LiquidMetrix, the independent analytics firm that specialises in venue performance metrics and execution quality analysis. LiquidMetrix repeated its analysis in October 2015 and, for the third year in a row, spanning before and after the 2014 launch, again concluded that trades occurring on Turquoise Uncross had a far lower correlation with sharp market movements on primary venues than trades occurring on other continuously matched MTF dark pools.3
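As a rough illustration of what such a reversion study measures (and emphatically not LiquidMetrix's methodology), the sketch below computes a signed post-trade markout: the mid-price move shortly after each fill, signed by trade direction. Values near zero indicate fills that are not systematically followed by sharp moves; all inputs are invented.

```python
# Toy post-trade markout: signed mid move after each fill, in bps.
def markout_bps(side, fill_px, mid_after):
    """Positive = price kept moving with the trade; near zero = benign."""
    sign = 1 if side == "BUY" else -1
    return sign * (mid_after - fill_px) / fill_px * 1e4

trades = [("BUY", 100.00, 100.01), ("SELL", 50.00, 49.995)]  # invented
avg = sum(markout_bps(*t) for t in trades) / len(trades)
print(f"average markout: {avg:.2f} bps")
```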
Large average fill size
Turquoise prioritises orders by size and features innovations that deliver a broker-neutral venue that is reversing the electronic trend of shrinking trade size. Turquoise Block Discovery now averages more than €250,000 per trade – more than twenty-five times the average €10,000 for electronic trades matched by continuous dark order books.
High firm-up rates
We have more than a year's worth of empirical measurements evidencing consistently high firm-up rates. These result from a strict timing mechanism and robust automated reputational scoring, which measures the difference between the original block indication and the subsequent firm order.
