Mike Abbruzzese of Dominick and Dominick explores the business case for outsourcing as a small firm amongst much larger competitors.
Smaller investment managers often outsource trading technology to achieve competitive parity with larger market participants. Yet, timely upgrades to the FIX trading architecture require a flexible, segmented approach. Coordinating specialist offerings from multiple vendors can facilitate faster upgrades and client response times.
FIX connectivity is the daily point of contact with our clients. As such, it needs to be flexible and multifunctional. At Dominick and Dominick, we outsource our FIX connectivity separately from our OMS/EMS providers. Outsourcing gave us the flexibility of no longer being married to our OMS. Previously, changing an OMS was a major project because the provider housed your FIX connections and end-of-day files, as well as the connections to your brokers.
Concentrating all our FIX connections through our OMS provider also gave us too much exposure to one external provider. We had everything with one firm, so when there was a problem everything went down. Splitting these elements out into FIX connections and our EMS reduced our operational risk. By putting a firm in the middle to manage our FIX connections, we run one pipe into the OMS, which then goes to each executing broker. Adding new FIX connections through the OMS provider was a lengthy process and often they did not know what networks our clients were on. After outsourcing this element, new FIX connections are turned around in two days. Moreover, if we need to change our OMS or EMS or the technology base changes, we can adapt. Without a dedicated FIX team, this would otherwise be difficult.
Outsourcing has yielded cost savings with clients that have multiple connectivity options, as our specialist provider gave us a better option than the client’s primary provider of FIX connectivity. As a smaller shop we want to give clients, whether institutional, high net worth or family office, the seamless capabilities to compete and look the same as our peers.
Naturally, outsourcing FIX connectivity enables us to get clients online quickly, but if a client tags a trade incorrectly, which then leads to problems on the OMS, our partner helps us deal with that. For example, we had a problem with our EMS where we had one institutional group and another for the rest of the firm’s business. At one point we had to revert to an old OMS and were running two OMSs simultaneously. Our partner put in another pipe and we never had to bother the client.
It is not good to be constantly on your clients’ radar. Our clients expect that once we are set, all necessary changes should be done internally. Outsourcing gave us the ability to do that. We switched overnight and the client did not know because it looked the same to them.
Long term relationship
Just as we build rapport with clients, we appreciate long-term relationships with our vendors, where each understands the other’s business better. Our FIX provider recognises that, for a small firm, adding a connection means taking a risk on the cost. The decision process for adding new connections is based on client costs and potential flow. Exploring connectivity options that are cheaper, but look and work the same, helps us make that decision.
Staying competitive in our segment means outsourcing FIX, reviewing our EMS and paying attention to client changes. By the same measure, when providers promise us functionality they later cannot deliver, the added workload is a competitive disadvantage. We would rather a provider say “No, we can’t do that” than “Yes, we can”, only for us to find the solution does not fit the problem.
Despite best efforts and improving technology, problems do occur. We want partners with the capabilities to handle issues as and when they arise. Every product runs well in ideal conditions, but the test is how it reacts in the event of an error.
In the meantime, we assess our vulnerabilities and spread our exposure over multiple partners.
Technology and consulting
We frequently review our needs for what we can do better ourselves and what should be outsourced. To keep up with technology advances, we expect our partners to stay current even as we internally review developments in functionality and features from other providers. Up-to-date partners are essential when clients we have not heard from in six months suddenly request a change. If our partner is behind and cannot meet a client’s demands, we suffer for it.
There is a long list of inexpensive, commoditised trading services, and finding what differentiates these products is a lengthy process. When evaluating new solution providers related to our EMS, OMS connections, brokers or FIX functionality, we look to our partner, amongst others, for recommendations.
Typically, we look at our needs and consult with vendors to itemise the areas we have to address, and then we look at costs. Together with the COO, we review each scenario and make decisions. Sometimes it comes down to cost and whether we can build it in-house, but many times it comes down to functionality. If it makes economic sense for the business, and especially if it allows us to trade like a larger firm, then we outsource it.
Playing With The Big Boys
The Data Horizon
With Daryl Bowden, Partner and Head of Equities, Sunrise Brokers
We are increasingly driving towards the idea that there is no such thing as coincidence in how brokers deal with their clients – so much of what we do is data driven, and with the right systems the effort can be stripped out and replaced with as much automation and evidence as possible.
The sales trader function is being pushed up the value chain, and buy-side dealing desks are becoming internal specialist sales traders. There is an inevitability to the movement of systems and people to the buy-side and they need to prepare for the challenges that they will soon face. They can learn from the mistakes that the sell-side made when implementing their systems, and ensure proper collaboration and control systems are designed into the technology, rather than tagged on afterwards.
With the ongoing shift to HTML5 and the cloud, technology should be more bottom-up, with the view that users of the system are ultimately those who decide what the system is and what characteristics it has.
Developers also need to have more information about their users’ use patterns – we believe that all forms of content are ultimately alerts; users need to be able to interact with, and share, the information that they find important regardless of source and form. And by structuring this with tags it gives the client the ability to select sources of brokers and stocks: it allows them to cut the noise from their networks.
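The tag-based filtering idea described above can be sketched in a few lines. This is a toy illustration only; the field names, broker names and matching rule are invented, not a description of any actual product.

```python
# Toy sketch: every piece of content is an "alert" carrying tags
# (source broker, stock, form), and a user subscription is a set of
# acceptable values per tag dimension. All names are illustrative.

def matches(alert_tags, subscription):
    """An alert passes only if, for every dimension the user has
    constrained, the alert's tag value is one the user accepts."""
    for dimension, wanted in subscription.items():
        if alert_tags.get(dimension) not in wanted:
            return False
    return True

alerts = [
    {"broker": "BrokerA", "stock": "VOD.L", "form": "research"},
    {"broker": "BrokerB", "stock": "HSBA.L", "form": "chat"},
]
# Only follow BrokerA, and only for these two stocks.
subscription = {"broker": {"BrokerA"}, "stock": {"VOD.L", "HSBA.L"}}

filtered = [a for a in alerts if matches(a, subscription)]
print([a["stock"] for a in filtered])  # -> ['VOD.L']
```

Filtering this way is what lets a user "cut the noise": anything outside the selected brokers and stocks simply never reaches them.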
Another important theme is contextualisation, in that relevant alerts have to be rolled in with known market information that the user may not have linked together. Our work focuses on the creation of knowledge from data, by a process of organisation, enhancement and analytics. Data services alone will not be enough in the future. Computer systems will soon be capable of answering questions that haven’t even occurred to the user. This supports decisions and reaffirms the network of information, which is then further enhanced through the sharing of this information with peers and colleagues.
Systems and technology are changing; there is a great network of technology firms coming together, but the challenge for the buyer and developer is making sure that those pieces of software work together, maximise functionality and are agnostic to which networks you want to plug into the hub. Our mantra is “Delivering your Content in Context on Demand”, as we know this is different for each and every user.
Change and disruption to extant technology structures in financial services is a key dynamic that requires further examination. The question must be asked: can smaller firms compete and disrupt the standing incumbents? Ultimately I believe yes, but it is a long road.
Standardising Execution Reports
Linda Giordano and Jeff Alexander, Principals, TABB Group further examine their efforts to standardise execution venue reporting, and look at how it can help both the buy-side and sell-side.
Since we last contributed to the GlobalTrading journal, we came to realise that a single broker was not the right place for us to continue with our programme to bring transparency to execution routing and venue analysis. As a result, we moved into the TABB Group organisation, helping create a TABB METRICS product called Clarity.
We believe that we are now better able to analyse data from brokers of all sizes. The July 2014 move to TABB has made us more effective, and the work more interesting and more valuable to clients.
Currently, we’re taking all the routing data from brokers, whether orders are filled or not, driving a big push for standardisation, including areas around FIX tag 851 and other related initiatives, such as the standardisation of time stamps. As the data comes in, we process it and issue reports for clients. However, we’re going to be moving to an industrial-strength platform soon that could potentially lead to daily processing. This could be a little overkill, but it’s a move in the right direction.
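FIX tag 851 (LastLiquidityInd) is the field on an execution report that says whether a fill added or removed liquidity. A minimal sketch of normalising it across brokers might look like the following; the parsing helper and sample message are illustrative only, and real FIX messages use the SOH character rather than `|` as a delimiter.

```python
# Standard values for FIX tag 851 (LastLiquidityInd) per the FIX spec.
LAST_LIQUIDITY_IND = {
    "1": "added",       # passive: the order added liquidity
    "2": "removed",     # aggressive: the order removed liquidity
    "3": "routed_out",  # order was routed to another venue
    "4": "auction",     # executed in an auction
}

def parse_fix(message, delimiter="|"):
    """Split a FIX message into a tag -> value dict."""
    fields = {}
    for pair in message.strip(delimiter).split(delimiter):
        tag, _, value = pair.partition("=")
        fields[tag] = value
    return fields

def liquidity_flag(message):
    """Return a normalised liquidity label for one execution report."""
    fields = parse_fix(message)
    return LAST_LIQUIDITY_IND.get(fields.get("851"), "unknown")

report = "8=FIX.4.4|35=8|851=2|31=101.25|32=500|"
print(liquidity_flag(report))  # -> removed
```

The point of standardisation is exactly this mapping step: once every broker populates tag 851 the same way, the label no longer depends on whose report you are reading.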
Industry feedback
Our data has shown clients what’s going on across multiple brokers. There’s always a delicate balance between cost and execution. For example, is a broker being too cost sensitive, too aggressive, or are the broker and investor aligned to provide truly best execution?
To accomplish this, we look at strategy in terms of what is the client’s intention and what order types are being used. We want to see broker and client alignment and how investment strategies are implemented, which is not easy because strategies are very different when a broker is providing or taking liquidity.
The first step is to get a very high-level review of where the broker is routing to and where they’re getting filled. We take a look at a sample and see that the broker is trying to route through agency pools. We have charts and tools that split the orders out into lit and dark, and we’ve classified them into more granular categories, including a broker’s own dark pool; agency pools; consortium pools; blotter scrapers; and lit, inverted and electronic market maker liquidity pools.
We have learned that when you begin to analyse the data, you discern definite trends. If a broker has a cost sensitive model, they will begin trading in the free or inverted markets first, which means they are taking from broker venues, consortium-type venues and electronic market maker type venues. These are free places to execute, so you can start seeing that this type of broker has a very cost sensitive philosophy in routing.
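The analysis described above can be sketched simply: bucket each fill by venue category, then measure what share of the earliest executions went to free or inverted venues. The venue codes and category map below are invented for illustration; a real implementation would use the granular categories listed earlier and a much larger sample.

```python
# Hypothetical venue-to-category map; codes are invented.
VENUE_CATEGORY = {
    "BROKER_POOL_A": "broker dark pool",
    "CONSORTIUM_X": "consortium pool",
    "EMM_Y": "electronic market maker",
    "EXCH_INVERTED": "inverted lit",
    "EXCH_LIT": "lit",
}

# Categories that are free (or pay rebates) for the executing broker.
FREE_OR_INVERTED = {"broker dark pool", "consortium pool",
                    "electronic market maker", "inverted lit"}

def early_free_share(fills, first_n=3):
    """Share of the first `first_n` fills done on free/inverted venues.

    `fills` is a time-ordered list of venue codes for one parent order.
    A persistently high share across orders suggests a cost-sensitive
    routing philosophy.
    """
    early = [VENUE_CATEGORY.get(v, "other") for v in fills[:first_n]]
    hits = sum(1 for c in early if c in FREE_OR_INVERTED)
    return hits / max(len(early), 1)

fills = ["BROKER_POOL_A", "CONSORTIUM_X", "EXCH_LIT", "EXCH_LIT"]
print(early_free_share(fills))  # 2 of the first 3 fills are free/inverted
```

Run over thousands of parent orders, a metric like this is how the "cost sensitive philosophy" trend becomes visible in the data rather than anecdotal.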
For the most part, these pools are not offering protection from information leakage. Nor are they offering size or anonymity. This is because they are prepaid. It’s for the broker’s economic benefit and that can be seen through our analysis.
The broker has to be as cost effective as possible but still ensure that they are protecting their client and that is where the line has become muddled: the buy-side wants to make sure that brokers are not doing things to their detriment.
Drive for transparency
It is important to say here, it is not always that the sell-side has to pay more for execution and the buy-side just gets better prints. Sometimes the buy-side gets a little too aggressive in cost cutting and the broker is forced into more cost-effective routing strategies. If the buy-side are constantly pushing their broker on costs – don’t forget you’ve got research to be paid for, too – the end result is that routing starts to look unbalanced. In some cases, if the buy-side wants more aggressive, costly routing, they have to look very hard at commission rates they’re paying and consider increasing them.
Standards
The intent is for the buy-side to have a more meaningful discussion with their brokers and work together. One problem is that everybody has their own definitions. Items like commission rates are defined differently on each broker report. There are also less well-known areas, such as time stamps being calculated differently on broker reports, meaning that even the items people think are clear vary from broker to broker.
Examining the difference between a router-based time stamp and an actual time stamp, we found that the router-based stamp is applied when the broker receives a fill from the market, so it reflects the time they got the fill rather than the time it filled on the exchange.
The difference can be significant, so if a buy-side firm gets a report from one broker based on their router-based time stamps, and another broker gives actual market time stamps, you are obviously not looking at the same thing.
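The skew between the two stamps is easy to quantify once both are available for the same fill. A minimal sketch, with assumed field formats, is below; real reports carry dates and time zones that would also need normalising.

```python
from datetime import datetime

def stamp_skew_ms(exchange_ts, router_ts, fmt="%H:%M:%S.%f"):
    """Milliseconds between the exchange fill time and the broker's
    router-based stamp for the same execution."""
    t_exch = datetime.strptime(exchange_ts, fmt)
    t_rout = datetime.strptime(router_ts, fmt)
    return (t_rout - t_exch).total_seconds() * 1000.0

# The fill printed on the exchange at 09:30:00.120, but the broker's
# router stamped it when the fill report arrived, 4 ms later.
print(stamp_skew_ms("09:30:00.120", "09:30:00.124"))
```

A few milliseconds sounds trivial, but in TCA it can move a fill to the wrong side of a quote change, which is why mixing router-based and exchange-based stamps makes reports incomparable.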
The result of standardisation is that brokers can focus on key metrics and improving execution, eliminating the need to have a discussion around definitions. Buy-side firms know how the numbers are calculated because Clarity is completely transparent in terms of how everything is calculated.
Long term
The conversations around standardising FIX tag 851 and time stamps are first steps that allow people to standardise many of these other conversations and to drive cross-comparability of data. Moving forward, we have to encourage the sell-side to open up, and the buy-side to build the understanding needed to implement solutions for managing all of this data.
Regulators have been looking at areas like these but it is unclear if they want to come up with a standard. If they were to drive a standard report, each would have the same matrix and each client would have a very simple and easy-to-digest report regarding their routing practices.
The standardised reports that the regulator has put into a trial so far are not particularly complicated but also not very illustrative. This could be an area where the regulators have seen industry solutions to the problem but there does not necessarily need to be a regulatory-enforced level of reporting. However, some areas obviously do have a much greater need for regulatory involvement, especially if commercial solutions are unable to meet the needs of the industry.
Trading needs to be driven by execution quality, not by payment for order flow, and that is something some firms are not grasping. There are firms that exist solely because of payment-for-flow arrangements, but they’re not good enough from a quality perspective. This is an issue that shouldn’t be too difficult to put right.
The tradition in the United States has been that the market has come up with its own solutions. As we see it, this is not always the best way because market solutions at different times are biased. Whoever has the loudest voice and whoever has the most money is going to sway things.
The problem is that the buy-side has not been in the driver’s seat, and that has been to their detriment.
Standardising Execution Data
With John Cosenza, Co-head Electronic Trading, The Cowen Group
One of our bigger value-adds to customers is the fact that we are venue-neutral. We do not operate a dark pool. We have no electronic market making. We have no prop trading within the organisation. As a result it is imperative to our business model that we maintain proficiency in analysing liquidity venues and understanding which venues we should route to in various situations: how, when, and where. Data and analytics on liquidity venues serve as both a driver and a feedback loop for this core competency.
The venue and routing component of the execution equation is extremely important given the complexity associated with fragmentation and lack of transparency in the marketplace. With resident expertise in optimising routing and analysing liquidity venue data, our hope, and our belief, is that this focus will translate into better execution quality for our customers.
The push for standardisation
We don’t have a dark pool, so it’s very important for us to have a consistent framework where we can look at all the places to which we route orders, normalise the data and then draw meaningful conclusions about our routing practices to these third-party liquidity venues. The firms that are venue-neutral and approaching it this way are driving for more transparency and standardisation.
For the camp that provides electronic products but also manages its own dark pool, the data could be a little more daunting, in that they have electronic products which route to liquidity venues, and their own venue is one of them.
For this type of provider, if the data comes out to show that there is an extremely disproportionate amount of volume going to their internal pool, a detailed explanation is in order.
In such a scenario, it could be reasoned that there are tangible benefits to internalisation, rendering the data as a positive outcome. And if there are scenarios where it can’t be reasoned that there’s a tangible execution benefit of internalisation to the end customer, then it probably requires some adjustment to routing. Either way, this increased transparency is only going to be good for the marketplace.
In order to be meaningful, firms will of course need to accumulate a significant sample size, but the initial steps that TABB is putting in place are important. If you can normalise data through an independent source not involved in the execution process, meaningful comparative conclusions can be drawn on a level playing field. Similar to traditional TCA, an important part of the evolution of venue and routing data starts when there are no more questions about the source of the data, which has historically been a concern when brokers calculated and individually provided their own data.
Hopefully, down the road, if there is a consistent standard and streamlined process everyone in the ecosystem will benefit. While some of the larger institutions have dedicated resources and teams to do this themselves, the overwhelming majority do not. For the customers that don’t have the team or resources for big data, this is a great way for them to receive data in a comprehensible way and make good decisions for their firm with their counterparties.
There still has to be a tie-back into how this improves execution quality. Optimising routing to liquidity venues is an important piece of the overall puzzle. With an agreed standard between the customers, brokers and venues, data on routing and venue analysis can assist the overall optimisation of execution performance, translating into the minimisation of risk-adjusted execution costs.
Fundamentally, how do we navigate a marketplace that is complex, extremely fragmented and lacks transparency in some critical areas? Transparency is moving in the right direction but is not ideal. It is our job to use data and context to combat these gaps in transparency with the end goal of getting better execution quality for the end customer. It’s not going to happen overnight but I think we’re going down the right path.
Let There Be Dark
By Huw Gronow, Head of Equity Trading, Europe and Asia, Principal Global Investors.
Sometimes an industry adopts mysterious-sounding labels to add to the intrigue of its complexity. It may well be that market historians will regard the use of the term “dark pools” as an error of PR.
“Dark pools” burst into the public consciousness not with the adoption of Regulation ATS in the US, nor with MiFID in Europe, but with the publication of Michael Lewis’ “Flash Boys” in April this year.
While “non-displayed liquidity traded in an exchange-like venue under a pre-trade transparency waiver” may not sound as sexy (few would argue against that), the negative connotations surrounding the venues have reverberated through to regulators and lawmakers alike, we believe to the potential detriment of the end investor.
For institutions such as Principal Global Investors[1], with USD326bn under management[2], being able to transact large positions with minimal market impact is critical to managing the risk transfer associated with portfolio implementation.
In Europe, non-displayed liquidity can be most simplistically categorised into either “small dark” or “large dark”, that is, small executions either in public dark books (MTFs) or private broker crossing networks (BCNs). Then there are “large in scale” blocks transacted in institutional pools where the trade sizes are many multiples of the average trade sizes in the lit venues (and indeed the dark MTFs and BCNs, for the most part).
The choice of “dark” or “lit” environments, or a combination of the two at any time, and indeed the way you interact with a dark environment, is chiefly dependent on your catalyst for trading.
Trade urgency dictates how much you use the dark, not only in relation to your desire for certainty of execution, but the way you interact in the dark is also a function of your urgency to trade. In short, it is not as straightforward as either “lit” or “dark”. Many, as we do, interact with both for differing catalysts and at different stages in the trade cycle, and this nuance does not seem to be well understood when regulation is being considered.
Fundamentally, the main benefit of the non-displayed environment to institutions like us is that we’re able to place large orders out for execution, at the time of our choosing, that would otherwise send a clear signal to the market of our intent.
Indeed, even if those large orders are atomized to reflect the microscopic size of the trades in the modern ecosystem, predictability is the enemy. Smart usage of dark pools protects us to some extent (not totally in our view) from that.
MiFID II, however, looks like it could change the landscape radically.
The policy makers seem keen to increase transparency in the market, with concerns over impact to the price discovery process if too much trading volume is in the “dark”. We argue that the ability to transact without publicly showing our hand provides real and quantifiable benefits to institutional portfolios, which are ultimately often held by retail investors, the persons whom MiFID was designed to protect. There is hope, however. The Large in Scale Waiver under MiFID remains unaffected by the dark pool “caps” that are proposed to limit the percentage of smaller trades that take place under the Reference Price Waiver and it is already apparent that the buy-side are looking to take control.
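The mechanics of the proposed caps can be sketched in a few lines. The thresholds used here (4% of total trading in a stock on any single venue, 8% market-wide) reflect the figures widely discussed at the time for dark trading under the reference price waiver; they are illustrative, as are the venue names and volumes, and large in scale trades sit outside the calculation entirely.

```python
# Assumed cap thresholds for dark trading under the reference price
# waiver (illustrative; large in scale volume is exempt).
VENUE_CAP = 0.04    # single-venue share of total trading in the stock
MARKET_CAP = 0.08   # market-wide dark share of total trading

def cap_breaches(dark_volume_by_venue, total_volume):
    """Return (venues over the single-venue cap, market-wide breach?)
    for one stock over the measurement period."""
    breached = [v for v, vol in dark_volume_by_venue.items()
                if vol / total_volume > VENUE_CAP]
    market_share = sum(dark_volume_by_venue.values()) / total_volume
    return breached, market_share > MARKET_CAP

dark = {"MTF_A": 500_000, "MTF_B": 300_000}  # waiver volume per venue
total = 10_000_000                           # total volume in the stock
print(cap_breaches(dark, total))  # -> (['MTF_A'], False)
```

In this invented example MTF_A is over the single-venue cap at 5%, while the market-wide dark share sits exactly at 8% and so does not breach; the point is that the caps bite venue by venue and stock by stock, which is why smaller dark trades are affected while blocks under the Large in Scale Waiver are not.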
The name “dark pools” implies something bad, and it has been said already that it may be high time to call them something else. The buy-side driven Turquoise Block Discovery Service is a great example of the right response by the buy-side, sell-side and a forward-thinking exchange-like venue: not only reacting to likely changes in regulation, but designing a system geared towards the benefit of the end investor – the public investing for their retirement. And it does not sound nearly as sinister. Maybe that’s the point?
[1] …Financial Group, including both its internal and external boutiques globally, notably (amongst other entities): Principal Global Investors LLC, and Principal Real Estate Investors, LLC.
[2] Correct as of September 30, 2014.
Evolving Emerging Markets
Andy Maynard of CLSA outlines the development of Asian markets and the dynamics that will drive further innovation.
Exchange-traded volumes on emerging Asian markets have increased dramatically over the last ten years, but their full potential will not be realized without change in domestic regulations and global buy-side allocations.
Europe, the US and Japan have been the biggest markets for some time and as a result, they clear trades independently. Within developed Asia, Taiwan, to a certain degree, trades independently, but now it is included in investors’ China exposure. ASEAN went through a euphoric run based on ballooning GDP growth, with similar stories in individual markets such as Indonesia, Thailand and the Philippines.
If you take the Philippines, for example, ten years ago that market traded USD5 million a day, and now it trades half a billion. It has become a bona fide market compared to what it was, as have Malaysia and Thailand, but they have been in a second tier. Within global portfolios, exposure to emerging Asia has either been a relatively small part of the overall allocation, or contained within a country-specific fund.
As the growth story evolved, so did the equity markets, but the economic backdrop has also changed. The exchanges as well as the regulators are still very independent in each jurisdiction. More cohesiveness amongst the regulators, especially as regards technology and new exchange technologies would help brokers.
Using Japan as a case study, the exchanges have changed themselves to compete in modern trading, including merging exchanges to narrow spreads. They have done much to ensure the main board is as competitive as it can be and to limit the proliferation of exchanges, venues and pools. Extending the view to Korea and Taiwan, those markets are relatively efficient and evolved.
Taken together, developed Asia (Singapore, Hong Kong, Australia, New Zealand, Japan, Korea, Taiwan) has critical mass, as the exchanges evolved and the regulators produced a playing field for all participants.
In the rest of Asia, including India, the myriad nuances of exchange rules and regulations mean the majority of a broker’s work is on market microstructure. Our client offerings in places like Indonesia, Thailand, and the Philippines are all materially different, so the best execution suite differs market-by-market. Because these markets have gone from famine to feast relatively quickly, volumes have increased and market participants, whether international, retail, institutional or mutual, are more diverse.
The progression in the Philippines is amazing considering how quickly it went from the smallest Asian market in the 1990s to one where most buy-sides now have a few Manila-listed stocks in their portfolios. A growing domestic community trades actively and has begun to look outside the Philippines.
By default, the exchanges tend to play catch-up. Not long ago the Philippine Stock Exchange closed for technical glitches on a relatively frequent basis. Now, with international interest in the market, failures due to technical glitches have an unacceptable effect on investor confidence, so the exchange will have to upgrade its whole system at some point.
Asian markets will evolve at their own pace, but the overall direction is very clear. The lines between emerging, frontier and developed markets in Asia are blurring. They will retain idiosyncrasies, but interactions in and out of the market will slowly homogenize. Malaysia’s market structure is well developed in so far as brokers know what the regulator does and does not allow.
To this day, many governments are the principal owners of the national exchanges, which is viewed as a crown jewel. This complicates the necessary exchange modernization that investors are looking for. Innovation and increased functionality are requirements to be internationally competitive. India is a huge market, but it lags behind the evolution of similar markets because market activity and structure are impeded by the regulator.
These domestic developments often come back to whether there is a need for it. In those markets where the government owns or operates the exchange, the regulator is frequently prohibited or discouraged from introducing competition.
The burden is on the domestic community to request change. If the exchanges do not think they need to innovate or the regulator is not producing a legal framework for change, usually the market participants have not been asking for it. When these markets evolve and their international presence grows there is a tipping point where portfolios start to include local names. Then, almost by default, the exchange will upgrade to continue to attract flow.
In the last few years, we received numerous requests from a range of big buy-sides for frontier market access, from Mongolia to Pakistan to Sri Lanka. Investors want to know who is on the ground, who offers comparable services and structures across markets, and who does the heavy lifting in these smaller markets. As a result, we now have a daily order pack for the smaller markets in the region and a dedicated team covering frontier markets.
The overarching trend is buy-side firms offering dedicated global frontier funds. The money associated with them does not compare to the larger emerging funds, but most asset managers have some level of frontier activity. Investors see the upside versus the risk of investing in markets, so the conversation is shifting to formerly obscure places such as Laos and Mongolia.
The outlier in all of this is China, which despite its size remains an anomaly. The Chinese market is massive in its own right, such that any mention of innovation in the market, either from the exchange or the regulator, makes everyone jump. The market is still cheap relative to its global peers, and Stock Connect is already attracting investor interest. Moreover, the influence coming out of China through Hong Kong could eventually outweigh the northbound flow.
Twenty years ago a global portfolio meant investing in an Aussie bank and a handful of Asian brokers. Today, it means stocks in Taiwan, car manufacturers in Korea, and a range of regional firms in your investable universe. At some point it will include names in Malaysia, Thailand, Indonesia, the Philippines and India. That matrix of pressure on markets, exchanges and regulators is going to grow continuously as Asia takes a bigger share of the GDP pie. Regulators and exchanges will have to evolve.
Outsourcing: Unlocking Value
With George Rosenberger, Managing Director, Head of Trading Services, Connex
Today, firms continue to use outsource providers to create value for their businesses. It is not that every firm is still in a mass cutting phase, but a lot of big firms have cut pretty deep and no longer have the resources to quickly complete new projects. We’re starting to see programs and initiatives in our space that used to get approved very quickly now taking longer to get off the ground. Firms are starting to implement a formal review process where procurement groups have become a business committee that analyses whether or not a project is viable and a strong enough bet for firms to invest resources toward development. That tells us that there are finite resources available at sell-side shops to get things done. They’ve had to cut so many people over the last few years to maintain profitability and market share, and in many instances it’s obvious that they have now cut too deeply to properly address their evolving business needs.
When the market is doing well, everybody is bullish and new expenses are not heavily scrutinised. When business slows down, everybody retools and focuses on expense reduction. However, I think this last downturn has been so dramatic that it has caused many firms to formally implement procedures that they didn’t have before. These new procedures relate to new development items, how they get rationalized, approved, onboarded, and ultimately how resources are allocated to them. And, for better or for worse, depending upon your perspective, the trend is here to stay.
The nature of our industry is that our clients continuously and increasingly demand new tools and services. If we need to slow down development of new functionality for existing clients in order to comply with changing regulations or to deliver tools to new clients, existing client relationships may begin to deteriorate. Clients will demand that firms get more done with less, and the way to do that is to outsource. Whether it’s onshore or offshore outsourcing, firms continue to see the value in outsourcing to a knowledgeable provider who can create value for their firm. Often outsource providers offer access to dedicated resources with expertise in a particular market or segment. That expertise, coupled with their scale, often allows firms that leverage outsource partners to get on par with their bigger competitors.
In recent years, there has been an increase in the needs assessments conducted within big firms. The challenge for firms is that all new projects now go through ROI analysis and need sign-off at the committee level to get done. Part of the analysis is to decide whether the work is a core competency for the firm; firms then bid internal resources against external ones. Recently, we were competing against one of our prospect’s internal development teams. The firm was soliciting bids to analyse whether we could get their project done cheaper, faster and with better results. We have an edge when competing against internal resources: development is what we do professionally, so we understand the resources, the investment and the effort required to initiate and finalise a project. Competing against internal stakeholders and development teams is opening doors to prospective clients who would otherwise not necessarily consider outsourcing as an option.
The client relationship
Obviously, we want to try and bid for as much work as we can. If firms are now opening up those channels of communication and initiating a dialogue with new outsourcing partners, that is great for us and great for the outsourcing industry as a whole. Certainly every client is different. The size of the firm matters because the availability and quality of its resources vary, and we, along with other providers, need to adjust our services to meet the specific requirements of clients of all shapes and sizes. Mid-tier and smaller brokers have always had the mind-set of outsourcing their initiatives because they typically do not have their own technology. Unlike the bulge bracket firms, the second and third-tier brokers do not have the resources, the personnel or the cash to build these things directly; as a result, they white-label technology from other solutions providers. And the bigger banks and broker-dealers that originally thought they could deliver projects better on their own are now seeing the value in outsource providers as their internal resources are scaled back or reassigned to other initiatives.
Earning new business depends on the type of relationship we have with clients. Is it a pure software development relationship, or do we also have a hosting and support relationship with them? We began to win development work from clients whose relationships with us started with hosting and supporting their software. We also have relationships with clients where we initially built software that they hosted, and later in the engagement we ended up hosting it for them as well.
We often like to start a relationship with a new client through a limited engagement whereby clients can “try before they buy”. A client can see how we, as their outsource provider, deliver and what our service level is like over time. Do we live up to our reputation? What type of product do we deliver? How does our support model differ from other providers’? Is our billing cycle timely? Once firms are comfortable with all of those things, they often leverage us more. We regularly find that we initiate a relationship providing one particular type of business, product or service, and it quickly grows into other opportunities that we did not initially discuss as part of the engagement. Later in the relationship, there is ultimately more trust between us and our clients, and we are increasingly finding that building that trust leads to greater opportunities.
New uses for the data
Our core business is FIX connectivity and risk management. That is still the foundation and why clients come to us. But once a client is onboarded, we start to do more customized development work for them. We can build custom MIS reports and business intelligence studies to help clients grow their business.
Today, we are showing clients powerful information that they have never seen before, which they can use in a variety of ways. With our consultative approach, we work with clients to deliver reports that show which of their clients are trading, how they’re trading, and what they are trading. Once clients have the data, the number of data points they can leverage to their advantage is effectively limitless. We can create custom MIS reports that help clients look for new revenue opportunities: which of their clients are using what execution tools, which clients were sold nine different strategies and only use two, and so on. We deliver valuable data that our clients can use with their own clients to create greater revenue opportunities. By delivering that insight, relationships continue to grow organically, and as clients continue to work with us they notice how much value we add.
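As a sketch of the kind of MIS report described above, the snippet below cross-references the strategies each client was sold against the strategies that actually appear in their execution flow. It is a hypothetical illustration only; the function, field names and sample inputs are invented, not an actual product report.

```python
from collections import defaultdict

def strategy_usage_report(executions, strategies_sold):
    """Cross-reference execution flow against the strategies each client
    was sold.  `executions` is an iterable of (client, strategy) pairs
    drawn from execution records; `strategies_sold` maps each client to
    the set of strategies they were onboarded with."""
    used = defaultdict(set)
    for client, strategy in executions:
        used[client].add(strategy)
    report = {}
    for client, sold in strategies_sold.items():
        report[client] = {
            "sold": len(sold),
            "used": len(used[client] & sold),
            "unused": sorted(sold - used[client]),
        }
    return report

# A client onboarded with four strategies who only ever trades two:
fills = [("AcmeFund", "VWAP"), ("AcmeFund", "VWAP"), ("AcmeFund", "POV")]
sold = {"AcmeFund": {"VWAP", "POV", "TWAP", "IS"}}
print(strategy_usage_report(fills, sold))
```

A report like this immediately surfaces the “sold nine, use two” conversations mentioned above.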
The right type of provider
With outsourcing, clients should closely consider the type of provider they are looking to use for a particular initiative. There are niche providers and general providers. With some providers, geographical distance and multiple time zones make coordination challenging, and critical details are often lost in translation between the groups. Selecting the wrong provider can lead to a disappointing engagement, and switching costs are high: there are commercial costs to moving away from the current provider as well as timing risk. Organisational trust is critical because, in our space, FIX is the lifeblood of trading firms. To get a large firm to trust us with its FIX connectivity, we have to convince them that we will treat them with the utmost care and the same attention to detail that we give our own business. This is why clients should seek referrals from a provider’s existing clients. Firms should look at the longevity of the provider’s business, how long it has been in the space, and its track record across client engagements. Doing this research is an important step in selecting a provider.
What is next stage for outsourcing?
There’s always going to be a need for vendor outsourcing. Broker-dealers trade; some of them provide equity while others provide research. None of them are in business to manage technology and development: technology is a means to an end to deliver on a business goal. In that respect, there is always going to be a need for more technology, and what we offer is specialised technology for a specific part of your critical business functions. FIX and risk management are often underserved by internal stakeholders. We understand them through and through, and we provide state-of-the-art outsourced FIX connectivity and FIX-related services to our clients. Our team are experts in their respective fields and understand the nuances required to manage and build out newer and better products and services for our clients year after year.
FX Algos: Changing Technology And Trading
With John Radle, Global Head of Trading, Campbell & Company
Currently we trade more than 90% of our FX flow electronically, through a combination of bank-supplied algorithms and point-and-click trading via an aggregator that we assembled leveraging our bank relationships, in conjunction with Portware, our EMS provider. We put together the liquidity pool to give us the ability to trade point and click when that is the best method of execution for the order. Alternatively, we can use a variety of bank-supplied algorithms to target specific execution benchmarks, including time-weighted average price and arrival price. Most recently we have been trying to take the bank algo approach a step further by partnering with Pragma, which is working with us to build on its extensive experience with equity algos to develop custom FX execution algorithms. We assisted them in establishing a liquidity pool that their algos sit on top of, accessing various bank and ECN streams.
Our trading is strictly execution based. Our quantitative group on the research side creates the systematic alpha models, which then produce orders that are delivered to the trading desk. The orders route into our OMS and then the traders look at each one and select an appropriate execution algorithm to match the desired benchmark.
We’ve also built some proprietary TCA tools. We have a platform that allows us to watch orders in real time, compare the benchmark price to our average price and effectively monitor slippage as it accrues. As we have gone increasingly electronic, with multiple execution algorithms trading simultaneously at different banks and at Pragma, the traders have a view of how those algorithms are performing versus the stated benchmark they are targeting. That way, if we start to deviate too much from a specific benchmark, the trader can intervene or investigate why that trade is not working as planned and adjust accordingly.
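The core calculation behind a real-time slippage monitor of this kind can be sketched in a few lines. This is a simplified illustration, not the firm’s actual platform; the prices, notionals, order size and the choice of a TWAP benchmark are all assumptions for the example.

```python
def twap(prices):
    """Time-weighted average price over equally spaced mid observations."""
    return sum(prices) / len(prices)

def slippage_bps(avg_fill, benchmark, side):
    """Signed slippage in basis points; positive means worse than the
    benchmark (paid more on a buy, received less on a sell)."""
    sign = 1 if side == "buy" else -1
    return sign * (avg_fill - benchmark) / benchmark * 1e4

# A buy order part-way through its window: mids observed so far, fills so far.
mids = [1.1000, 1.1002, 1.1004, 1.1002]
fills = [(1.1003, 2_000_000), (1.1001, 3_000_000)]  # (price, notional)
filled = sum(q for _, q in fills)
avg_fill = sum(p * q for p, q in fills) / filled
pct_done = filled / 10_000_000  # assumed parent-order size
print(f"{pct_done:.0%} done, "
      f"slippage {slippage_bps(avg_fill, twap(mids), 'buy'):+.2f} bps")
```

Re-running this on every fill and mid update gives exactly the live “how far from benchmark” view described above, rather than an end-of-day report.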
Extending FX into TCA
We see this as part of our real-time TCA: we can see what percentage of the order is complete and how much is left, the start and end time of the order, the benchmark price as it is being calculated and our average price at that moment. We can also see the difference in dollar terms, so it is a very clear view. What we found was that, as we became increasingly heavy users of algorithmic trading, we could not rely on an end-of-day, after-the-fact transaction cost analysis report.
Moving that process to real time has assisted us in our goal to provide best execution for our investors. This allows us to effectively monitor trades as they are happening, as liquidity is evolving in the market. It tells us whether spreads are tightening or widening and then it’s very apparent to us as the trade goes on, what the slippage is, and it gives us a great view into what’s actually happening at the order level. If we have 10 or 20 algos trading at one time we can monitor that whole group in real time and it gives the traders another tool to ascertain whether the algo is performing as we expected. The whole real-time TCA concept is something that the brokers really don’t offer. We had to build the systems ourselves.
The liquidity pool that we created allows us to do point and click trading on that pool. It allows us the opportunity for better execution for our investors because we’re able to stream a variety of competing banks into that pool and we’re also able to include some ECNs in there too. We feel that this provides better price discovery by having the banks and ECNs compete.
The future of RFQ voice trading
There will always be a need for voice trading in FX; it just depends on what trading style you use. Obviously, there is more need for that type of execution when you are trying to move larger blocks quickly: if you want a big position put on immediately, and then a few hours later you want to take the entire thing off, that type of trading would suit. We have become more granular in our trading over the years. Instead of doing very large, potentially market-impacting trades, we now break our orders out to trade over different timeframes, executing the smaller pieces on much tighter spreads in much smaller amounts, which allows us to match our benchmarks more closely.
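The slicing step described above can be sketched minimally: split a parent order evenly into child orders released at fixed intervals over the trading window. The function and its parameters are invented for illustration, not the firm’s actual scheduler.

```python
def slice_order(total_qty, start, end, interval):
    """Split `total_qty` into equal child orders released every `interval`
    seconds across the window [start, end).  Any remainder is spread one
    unit at a time over the earliest slices."""
    n = max(1, (end - start) // interval)
    base, rem = divmod(total_qty, n)
    return [(start + i * interval, base + (1 if i < rem else 0))
            for i in range(n)]

# A 10m parent order worked over one hour in 15-minute child slices:
children = slice_order(10_000_000, 0, 3600, 900)
print(children)
```

Each child can then be worked on tighter spreads, which is what makes the benchmark easier to match over the window.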
Generating the benchmarks
Our research team has done a lot of work around our execution benchmarks; when they deliver an order to us it has a pre-determined benchmark, a start time and an end time. Then we’re just measuring that benchmark over that timeframe. We did have benchmarking challenges when we were trading larger sizes more quickly because trading windows were less defined. But as we moved our trading style to smaller sizes over defined windows, we were able to target our benchmarks within those specific trading windows. That also helps us to understand if we’re trading at optimal times, because we’re able to compare our benchmark slippage across the trading day and see if in certain periods we’re seeing higher slippage or lower slippage. This has helped us fine tune our FX trading.
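Comparing benchmark slippage across the trading day, as described above, amounts to bucketing observed slippage by time and averaging. A simplified illustration with invented numbers:

```python
from collections import defaultdict
from statistics import mean

def slippage_by_hour(observations):
    """Average observed slippage (in bps) per hour of day, so that
    consistently cheap or expensive trading windows stand out."""
    buckets = defaultdict(list)
    for hour, bps in observations:
        buckets[hour].append(bps)
    return {hour: mean(vals) for hour, vals in sorted(buckets.items())}

# (hour of day, slippage in bps) per completed order -- sample data only.
obs = [(8, 0.4), (8, 0.6), (14, 1.8), (14, 2.2)]
print(slippage_by_hour(obs))
```

A table like this is what supports decisions such as shifting a currency pair’s trading window to a cheaper time of day.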
The impact on trading, and the traders
We’ve seen a meaningful improvement in our slippage and our executions by having the traders really hands-on, in terms of monitoring slippage and making recommendations to our research group, such as modifying the times that we’re trading some of the currency pairs.
We leverage a lot of that knowledge to try to get better in other asset classes. We initially saw good improvement in slippage in our futures trading and applied the same methodology to FX. We have definitely seen the trading job evolve. When everything was done over the phone, it was a much more manual process; now that we have a proprietary platform that monitors slippage in real time, traders can understand at a much more granular level how an algorithm is trading. Traders also have greater leeway in the parameters they can tweak when setting up an order, informed by their past experience of similar trades. By looking at their impact they can make a decision based on what is going on in the marketplace. If the trader determines that the algorithm should be a little more passive or a little more aggressive, they can make appropriate adjustments, and if they need to stretch or change the window, they can within certain parameters, because at the end of the day everything is about getting the best execution for our investors.
Our real focus is on experienced traders that understand the market and have old school knowledge but also a deep understanding of electronic trading, slippage management, how the algorithms work, and how the parameters within the algorithms work. We are effectively pairing an experienced trader with those electronic trading tools, optimising those tools and monitoring the results in real time to get the best possible execution.
Always advancing
Trading overall is going electronic; all traders have to evolve and adjust. If you’re an old school trader whose only skillset is the ability to pick up a phone to get a trade done in a block, you’re unlikely to be successful long-term in tomorrow’s world. Clearly there’s a need for people who have that ability, but even on a desk where they do a lot of block trading, you also need people that are able to select the right algorithm, set the parameters and be more sophisticated in the way they handle an order.
With the evolution of these trading tools, it’s difficult to see a world where one size fits all or one solution is the best for all trading. There could be times during a holiday period where maybe it isn’t appropriate to use an algorithm. We give our traders flexibility on the execution front to look at the marketplace, to look at the tools they have, at the order itself and make the best decision as to how to handle that order and get the best execution.
This is why our proprietary platform is so important, because you don’t ever want to get into a “set it and forget it” mentality. You want to do your best to select the right execution strategy, appropriately set the parameters, and then monitor it so that if it’s not trading in an optimal fashion you can adjust and take action as needed.
Once people start to trade electronically and they see the efficiencies that it can bring to a trading desk, the next step is to push deeper and utilise even more tools that are available. People are exploring the algorithmic tools that are out there. That is leading to more algorithmic usage, which is what the surveys have shown and what we’ve heard from our brokerage counterparts.
We’ve always pushed for innovation on the electronic trading front and feel it is a necessary tool in order to provide best execution for the end investor. With the markets and trading tools constantly evolving you’ve always got to be looking at what’s new and where else you can advance so that you’re always at that cutting edge of what’s available to help deliver the best results for your clients.
FIX-6 Standards for Market Data
By Yuval Cohen, Technical Lead, Etrading Software; Co-chair Application Layer – Market Data Subgroup of the High Performance Working Group (HPWG), FIX Trading Community.
Over the past few years there has been a proliferation of high performance market data protocols not only in the traditional equities and listed derivatives markets but also in other markets such as foreign exchange, fixed income and energy. With each new protocol comes the need for software vendors and investment banks to develop and maintain new feed handlers, and with that, incur the increased cost.
Rising costs, in an industry that is currently looking to reduce the financial impact of its technology, have raised alarm bells and resulted in initiatives that should soon establish the opposite trend: converging existing market data protocols onto a few key standards.
Over the past couple of years, the FIX HPWG has been working to develop a set of open standards aimed at reducing the overall costs of development, implementation, maintenance and support of high performance trading and market data protocols. An additional initiative to support high performance market data protocols was started a few months ago and is now being integrated into the working group.
The aim is for these new standards to be easily adopted by execution venues, banks, vendors and other financial firms that communicate via trading or market data protocols.
They cover multiple asset classes: equities, derivatives, foreign exchange, fixed income and other tradable instruments. The new standards have been built with high throughput and low latency as core requirements and design principles.
The following standards are being developed by various subgroups of the FIX HPWG to provide a solution for high performance, low latency, high throughput market data protocols:
- A complete set of market data workflows and messages, containing:
  - Reference data workflows
  - Order-book management workflows
  - Recovery and initialisation of order-books and reference data
  - Additional workflows common to market data feeds
- A set of high performance encodings recommended for non-bandwidth-constrained networks:
  - Simple Binary Encoding (SBE)
  - Encoding FIX using ASN.1
  - Encoding FIX using Google Protocol Buffers (GPB)
- For bandwidth-constrained networks, FIX:
  - Developed the FAST (FIX Adapted for Streaming) encoding a few years ago
  - Intends to develop CBE (Compact Binary Encoding) in the future
- A new session layer, FIXP, which provides:
  - Reliable, highly efficient exchange of messages
  - Symmetric (peer-to-peer) communication of messages (mainly used over TCP/IP)
  - Asymmetric multicast (one-to-many) communication of messages (mainly used over UDP)
  - Independence from encodings and the transport layer
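The design idea behind encodings such as SBE is that fields sit at fixed offsets with native widths, so a feed handler can read them with no parsing at all. The sketch below illustrates that principle in Python; the message layout is invented for illustration and is not the actual SBE wire format.

```python
import struct

# Invented fixed-layout book update: every field at a fixed offset and
# native width, little-endian, so a reader decodes without any parsing.
#   uint32 security id | uint8 side | int64 price * 1e8 | int64 quantity
LAYOUT = struct.Struct("<IBqq")
PRICE_SCALE = 10 ** 8  # fixed-point scaling avoids floats on the wire

def encode(security_id, side, price, qty):
    return LAYOUT.pack(security_id, side, round(price * PRICE_SCALE), qty)

def decode(buf):
    sec, side, px, qty = LAYOUT.unpack(buf)
    return sec, side, px / PRICE_SCALE, qty

msg = encode(100123, 1, 1.10025, 5_000_000)
print(len(msg), decode(msg))
```

Because every message of this type has the same 21-byte layout, decoding is a handful of fixed-offset reads; that is the property that makes this family of encodings attractive for low-latency feeds.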
Some of these open standards are ready to use, whilst the full set of these standards is expected to be technically ready and published in 2015.
There are already execution venues that have started to adopt some of these standards and others that have shown great interest.
In the coming years, there is a good opportunity for the industry to significantly reduce the costs of market-data communication by converging on these fit-for-purpose open standards.
MiFID II, Transparency and European Corporate Bond Markets
By Jim Rucker, Global Head of Operations Services, MarketAxess.
With the release of ESMA’s Consultation and Discussion Papers on MiFID II the industry is much closer to understanding what these far-reaching rules might look like once finalised, and the changes they are likely to entail for market participants.
A clear goal for European regulators is to increase pre- and post-trade transparency in fixed income markets. We have always supported post-trade transparency, which we believe has had a positive impact on the size and liquidity of the US corporate bond markets.
We see potential risks to liquidity, however, in introducing mandated pre-trade transparency for less liquid products such as corporate bonds.
What is liquidity?
A critical piece of the MiFID II regulation will depend on how liquidity is defined and measured. In the US we have conducted regular research, using data from FINRA’s TRACE consolidated tape, to analyse and quantify liquidity and trading costs in the US markets over time.
More recently, we have conducted a similar analysis to understand the key differences between the liquidity characteristics of the US and European markets.
We looked initially at three key measures:
- Overall trading volumes
- Turnover rates (defined as the total amount traded, divided by the volume outstanding for the bonds that traded)
- And overall size of the markets
Trading volumes
We observed some interesting differences. According to Trax estimates1, total trading volume for Euro-denominated debt in 2013 was only $1 trillion, just a third of the $3 trillion traded last year in US dollar-denominated debt according to TRACE data (Fig. 1).
Turnover rates
We also found that bond turnover for US dollar debt was approximately 60%, compared with just 41% for Euros. This means that US$-denominated bonds were trading about 1½ times as frequently as Euro-denominated bonds (Fig. 2).
As a point of comparison, prior to the financial crisis, turnover in the US corporate bond market was running at over 130%, meaning that each bond, on average, was both bought and sold more than 1.3 times per year (Fig. 3).
Corporate debt outstanding
A similar picture can be seen for the total debt outstanding in each region. In Europe, high-grade debt outstanding has increased just 20% in the last five years, from $2 trillion to $2.4 trillion. Yet, during the same period, US high-grade debt outstanding has almost doubled, from $2.7 trillion to over $4.5 trillion, according to FINRA (Fig. 4).
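The turnover definition given earlier can be sanity-checked against these figures. Note the article’s quoted rates use the outstanding volume of only the bonds that actually traded, so these full-universe ratios come out slightly higher than 60% and 41%; the calculation is illustrative.

```python
def turnover(traded, outstanding):
    """Turnover rate: total volume traded divided by volume outstanding."""
    return traded / outstanding

# Figures quoted in the article, in $ trillions.
us = turnover(3.0, 4.5)   # TRACE volume vs US high-grade outstanding
eu = turnover(1.0, 2.4)   # Trax volume vs European high-grade outstanding
print(f"US {us:.0%}, Europe {eu:.0%}")
```

Either way the ordering is the same: US dollar debt turns over meaningfully faster than Euro-denominated debt.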