
Viewpoint : The next generation : FIX Trading Community

FIX NEXTGEN: THE FUTURE OF FIX?

FIX NextGen is an initiative designed to continue building the community by bringing together members of our industry who may not have thought that participation in FIX was relevant to them. The NextGen initiative has been endorsed by the EMEA co-chairs, who, along with a small steering group, are working to hold a series of events throughout 2020 to reach as many of our FIX members as possible.

FIX, the Financial Information eXchange protocol, is often compared to the telephone: it just works and you never need to think about how it works. It underpins trading in capital markets globally. Yet the protocol was not the brainchild of a single person or firm; a small number of individuals and firms grew the network and built the protocol into what we know today. Many of the active participants who formed the original community have since risen through the ranks and are gradually moving into semi-retirement. The FIX Trading Community has achieved amazing things over the past 20 years. Its success is largely due to the volunteers who represent the whole breadth of the financial services industry, and who have always found ways to work together to find the best solution for the industry. We need to ensure this successful collaborative leadership continues.

“As someone who has been involved in FIX since early in its inception it runs through my DNA – I’ve seen it grow from a very good idea to the protocol becoming an essential part of the fabric of global capital markets,” commented Maria Netley, FIX EMEA Secretariat and founder of FIX NextGen EMEA. “The NextGen initiative is helping to secure the future of the community and identify the future guardians of the protocol.”

Rebecca Healey, Co-chair of the EMEA Regional Committee, FIX Trading Community and Head of EMEA Market Structure & Strategy, Liquidnet, added: “The development of the FIX Protocol within the FIX Trading Community is a unique example of how an industry can come together over more than two decades to create a pragmatic, cost-effective solution for the entire industry. Its continued success will depend on finding the next generation of guardians and volunteers and we expect that the NextGen initiative will help to identify those individuals.”

We asked some of the NextGen Steering committee about what their driving force was to get involved with FIX.

Stuart Nicholls, Execution Services FIX Integration Product Manager, FIX NextGen says: “At the beginning of 2019, senior members of the FIX Trading Community led an initiative to recruit a new, next-generation of committee members. The ask was simple: after us, who will continue our great work in promoting, protecting and driving change within the protocol? We all agree that the FIX Trading Community is incredibly important to our industry, so it is vital that FIX stays current and relevant, and in order to be both of these things, the innovation of new products and the approach we take is key.”

Alex Ings, Global Execution Services Product Management and Market Structure, Credit Suisse noted: “I have been working with FIX for just over a year now and so I am relatively new to the community. Accessibility and joining FIX working groups can be daunting if you haven’t previously been involved and are facing people with many years of experience. I got involved with FIX NextGen to try and help and get more people engaged, especially the younger members of the community and those interested in the technical aspects of FIX”.

According to the NextGen members, what is FIX trying to achieve and what are the ambitions for FIX NextGen in 2020?

“FIX is something that most graduates, new entrants or industry changers start with no knowledge of. Getting thrown in at the deep end and learning on the job are the norm,” says Sakeena Lalljee, Sales & Business Development Manager at Aquis Exchange. “The group is planning to provide a new layer of support for those individuals, and for their companies, as well as opportunities for more experienced technologists and business users to get involved with FIX initiatives, workshops and events.”

In particular, NextGen is also working to create more opportunities for non-client-facing individuals. The FIX NextGen launch event, held at “The Folly” in the City of London in October 2019, was the first step towards this, attended by close to 60 members of the FIX community. It gave people the opportunity to hear from some of the earliest members of the FIX Trading Community about how far the industry and the usage of FIX have come, and about the importance of current and upcoming industry members in continuing to uphold and shape the FIX protocol.

FIX NextGen will be running workshops covering new topics every other month, including TCA/Best Execution, ESG (Environmental, Social and Governance) investing, diversity and a mentoring pilot scheme. These workshops, the first of which will be held in January 2020, will give newer members the opportunity to come together, actively engage and benefit from these educational sessions. One of the key drivers of FIX is to ensure members always come away from any FIX event or working group feeling they have learnt something new or have genuinely added to the debate. We hope NextGen members will walk away from every meeting with that same feeling.

To get involved with FIX NextGen, contact fix@fixtrading.org and be sure to subscribe to the FIX NextGen LinkedIn group: https://www.linkedin.com/groups/8811221/

©BestExecution 2020


Viewpoint : Managing reference data : Tim Fox

MISSING A TRICK WITH YOUR DATA?

New regulatory reporting requirements such as Securities Financing Transaction Regulation (SFTR) are prompting financial institutions to reassess how they gather and manage reference data. Meeting immediate regulatory demands is essential, says Tim Fox of the SmartStream Reference Data Utility (RDU), but wouldn’t it be smarter to establish an enterprise-wide approach to handling reference data?

Where reference data is concerned, recent rule-making has given – and is still giving – plenty of sleepless nights to those working in financial institutions. The dust may just about have settled on MiFID II, but now firms must get to grips with looming SFTR.

SFTR reporting obligations will kick in next year – April 2020 for banks and investment firms, July 2020 for CCPs and CSDs, and then October 2020 for insurers, pension firms, UCITS and AIFs. Non-financial companies will have to follow suit in January 2021.

Preparing for SFTR compliance is proving a somewhat frustrating experience for financial institutions. The final version of regulatory guidance, due from ESMA in Q4 2019, has not yet – at the time of writing – been published. A lack of clarity still hangs over certain areas. For example, it remains unclear how parties to an SFT should determine the quality of the security or collateral involved. Should they seek a potentially costly assessment from a ratings agency? Or make use of some other mechanism?

Another pain point is the need to report whether an equity belongs to a main index or not. Financial authorities have provided guidance as to which indices constitute the main ones, but that is simply the tip of the iceberg: it is quite another matter to identify all the constituents involved and then integrate this information into an SFTR regulatory reporting workflow.
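To make the constituent step concrete, the minimal sketch below flags whether a security sits in a main index before the record enters the reporting workflow. The index lists, ISINs and field names are illustrative assumptions only, not part of the RDU service or any ESMA schema.

```python
# Minimal sketch: flagging main-index membership for an SFTR report field.
# Assumes the firm already sources constituent lists from index providers;
# the index names, ISINs and field names below are illustrative only.

MAIN_INDEX_CONSTITUENTS = {
    "FTSE 100": {"GB00BH4HKS39", "GB0007980591"},
    "DAX": {"DE0007164600"},
}

def main_index_flag(isin: str) -> bool:
    """Return True if the ISIN appears in any configured main index."""
    return any(isin in constituents for constituents in MAIN_INDEX_CONSTITUENTS.values())

# Enrich a (simplified) SFT record before it enters the reporting workflow.
sft = {"isin": "DE0007164600", "collateral_quality": "INVG"}
sft["main_index_membership"] = main_index_flag(sft["isin"])
print(sft)
```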

For the buyside – although October 2020 may appear some way off – it is not too soon to start gathering the data needed for SFTR reporting. Unlike the sellside, the buyside has had relatively little regulation to contend with: coming to the battlefield later in the day, it has had less experience in handling the complexities regulation can pose. Data budgets on the buyside also tend to be smaller than those on the sellside, so the challenge for firms is to meet compliance obligations as cost-effectively as possible.

How does the SmartStream RDU fit into this picture?

The SmartStream RDU was set up to create a comprehensive repository of cross-asset class instrument data. It operates as a provider of not only European, but global instrument reference data. Initially set up by a group of banks and SmartStream, the RDU is very much an industry-led approach.

Although the original intent of the RDU was the creation of a repository of cross-asset instrument data – and it had set up an exchange-traded derivatives data service and commenced an equities service – the utility pivoted into the regulatory space at the behest of its customers.

A result of this move was the setting up of the RDU’s MiFID II service. The service eliminates the complexity of acquiring, normalising and integrating the complete set of reference data sources required for MiFID II, integrating all MiFID II reference data into a single schema for financial institutions to incorporate into their EDM, regulatory reporting and trading platforms.

More recently, the RDU has collaborated with a number of industry participants to develop an SFTR service. Available via API, it provides a handy solution for firms looking to source the required security data as quickly, accurately and cost-efficiently as possible. For buyside firms, which often have smaller data budgets at their disposal than the sellside, such a solution may be of particular interest.

Having executed its MiFID II plans, and currently rounding out its SFTR offering, the utility is once more focussing on the pursuit of its original course. To this end, the RDU is launching its equities service, and is engaged in the development of an instrument reference data solution for fixed income. In addition, the utility’s ETD service is now mature. It is also quite possible that the RDU will, in the future, deliver further services geared towards other regulations, although none are presently in the pipeline.

Superior reference data – the link to future competitiveness

The author believes it is important for financial institutions to look beyond merely satisfying immediate regulatory reference data requirements and to take, instead, a far broader view. Far-sighted institutions, the author feels, recognise that there is an enterprise-wide need for high-quality instrument reference data – and they also realise that getting this aspect right can play a vital role in maintaining future competitiveness and profitability.

Yet all too often, firms do not give data the attention it deserves. For example, when preparing to meet compliance deadlines, firms frequently downplay the importance of putting their data houses in order in a timely manner. Effort tends to be focussed on the technology involved in reporting rather than on the data that needs to be reported, which generally comes as an afterthought.

A typical – and expensive – trap into which firms fall is to put in place the data gathering and managing processes necessary to meet one set of regulatory requirements, and then repeat the same exercise for subsequent pieces of regulation. Unfortunately, taking this type of piecemeal approach, rather than pursuing a coherent, enterprise-wide data management strategy, risks increasing the total cost of ownership.

In contrast, firms that understand their data in greater depth can respond to upcoming regulation by extending existing infrastructure, tackling any new elements as required, rather than building an expensive new silo. This approach allows them to avoid duplicating data costs, or hiring extra personnel to get data into a compliance-ready state. Buyside firms – given their budgetary restrictions – would be especially well-advised to keep this point in mind when devising any future data management strategy.

Of course, investing in superior data and taking an enterprise-wide approach is not just essential for regulatory purposes: having high quality data and sound information management practices in place is vital if firms are to improve overall data quality. Consider a company putting together funds, with exposure to any given sector or issuer. How does it ensure that concentration limits are correct? Without the appropriate data this task becomes very difficult.

The availability of superior data also has the potential to improve firms’ future competitiveness and profitability. For instance, it will allow them to take advantage of developing technologies such as AI. Indeed, if a financial institution is to leverage its AI strategy to the fullest extent, a bedrock of clean data is absolutely vital.

In conclusion

There is no doubt that a regulatory deadline provides a spur for activity. Where data is concerned, however, satisfying immediate regulatory demands should be a starting rather than an end point. Creating an enterprise-wide data management strategy, which makes high quality reference information available across the entirety of the business, is an important tool in firms’ armoury – whether they are looking to cope with future regulation more cost-effectively or to ensure they remain profitable in years to come. The RDU, through its renewed emphasis on delivering a comprehensive, cross-asset set of reference data, is focussing on providing the superior quality information that will enable financial institutions to achieve these goals.

©BestExecution 2020


Execution analysis : Fixed income execution : Umberto Menconi & Carlo Contino

FIXED INCOME – A BEST EXECUTION LEGOLAND.

The building blocks needed to achieve best execution depend not only upon qualitative and quantitative analysis of trading behaviour, but on the resources of sellside counterparties. Banca IMI’s Umberto Menconi and Carlo Contino* discuss the importance of partnership in assuring execution quality.

What level of feedback should buyside firms expect from their counterparties on execution quality?

Technological innovation, digital transformation and new regulatory obligations following the global financial crisis are dramatically reshaping the financial markets landscape, affecting every asset-class. Relationship dynamics, the level of engagement and the sellside and buyside roles are shifting from the traditional bilateral workflow to a new paradigm. The buyside is not only taking more ownership of its own trades but also rethinking its role in market transformation. It has also increased its appetite for sellside collaboration and investment in trading and risk management tools.

Umberto Menconi is head of Digital Markets Structures, at Market Hub, Banca IMI, part of the Intesa Sanpaolo Group.

Changing dynamics have forced the sellside to evolve and adopt flexible models to satisfy the buyside’s liquidity demands. Sellside firms have to invest huge resources in technology innovation in order to generate greater efficiencies, and to acquire and share information and analytics in a more complex and fragmented market environment.

The biggest challenges remain efficient multi-market connectivity and real market data across the trade lifecycle: order creation, order placement, trade execution, TCA analysis, clearing, settlement and custody. The execution quality of a trading strategy, employed by buyside execution desks to capture alpha or to limit portfolio risk, strongly depends today on communication networks, collaboration tools, and continuous feedback. This is not only the case with buyside execution desks, brokers and trading venues, but also among other financial ecosystem participants, such as custodian banks, clearing houses and central depositories. This is because best execution is more than just the transaction price. It also needs to embrace the entire trade workflow with a much broader horizon.

How does the execution protocol impact that?

Leverage limits, stricter capital requirements and balance sheet restrictions on proprietary trading are causing many sellside firms to rebalance towards agency broker models. They are helping clients to execute trades on trading venues rather than directly assuming portfolio risk. Banca IMI has pursued this market evolution from the beginning, with the launch of the Market Hub platform back in 2008. The transformation from a bilateral, voice-execution model towards a new communication and trading model – based on a network of trading platforms, electronic chats, systems connectivity and sophisticated EMS/OMS tools – necessitated a change in trading culture. This included not only hiring new people, but also adopting different communication channels, language formats and trading protocols, and a higher level of collaboration.

Carlo Contino is an analyst in Digital Markets Structures, at Market Hub, Banca IMI, part of the Intesa Sanpaolo Group.

To better execute orders, there is an increasing array of venues to use, ranging from conventional sources of liquidity (MTFs, RMs, OTFs and SIs) to unconventional ones (dark pools), as well as different trading protocols (all-to-all, click-to-trade, CLOB, auctions, direct connectivity, RFMs, portfolio trading, anonymous RFQ-to-all). However, despite this growth, the traditional electronic RFQ (request for quote) model remains central.

We are still a long way from real market structure change, and the integration of the new market ecosystem remains a ‘work in progress’. However, there are several drivers for a stronger fixed income trading ecosystem, including standardised and integrated market connectivity solutions and communication protocols to reduce complexity. Others include technological infrastructures shared among all participants, and different levels of trading relationship as players’ roles are recast and the different protocols are put in place. Moreover, there has been broader implementation of machine learning and artificial intelligence solutions to handle the data ‘tsunami’ created by the introduction of MiFID II. Last but not least is the increasing demand for readable smart data and value-adding analytics.

When supporting high- and low-touch executions, what should dealers be cognizant of in supporting their clients’ performance?

In fixed income it is now possible for the buyside to fully automate some of its execution, thanks to greater data availability, axes and price streaming activity. Buyside execution desks see this as both an opportunity and a game changer. A combined high-touch and low-touch execution strategy increases automation in the execution of small tickets, leaving traders more time to concentrate on larger tickets on a bespoke, bilateral basis and on the design of new investment ideas.

To perform high-touch and low-touch execution, the buyside needs to modify its best execution rules accordingly, based on both quantitative and qualitative approaches. The rules have to reflect the firm’s perception of what is low-touch or high-touch, ensure trading decisions are addressed to the right counterparties, and make the best use of low-touch efficiency for small-size execution and of high-touch handling for risk trades. Firms also have to measure execution performance through new metrics and tools, even though fixed income TCA is still in its early stages and remains controversial due to a lack of reliable, transparent data compared with some other asset classes.

To support buyside execution performance, the sellside should constantly monitor their execution quality and offer customised and efficient trading solutions for high-touch and low-touch business, market connectivity, readable market data and comprehensive reporting services.

What sort of feedback and interaction can better support an understanding of dealer/client objectives?

On one hand, the sellside has been spending plenty of time focusing on data consumption and, more importantly, wondering how they can transform that data to ensure the buyside can use it and that it adds more value for both sides. On the other hand, the buyside has focused more resources on workflow automation and collaboration enhancement with their sellside partners. What is clear today is the need for collaboration with the goal of establishing a relationship based on value.

In this new and complex environment, a pivotal role can be played by fintech vendors when bridging the gap between buyside and sellside, and better supporting interaction. The key moving forward for both buy- and sellside firms is to have a more open and connected networking infrastructure. The end objective is to create a best-of-breed solution that brings the buyside and sellside together.

What are the challenges in benchmarking performance?

Since market data can address the adverse effects of a more fragmented trading environment, demand for real market data has increased. However, acquiring real market data from the various sources today is expensive, and there is also a lack of standards. The result is a significant hidden cost to consume and manage data, which represents one of the main challenges in fixed income performance benchmarking: it reduces the potential to use that data to measure trading performance, improve execution and apply more complex benchmarks. The equity market can be used as a reference, but fixed income execution needs more caution due to its unique characteristics.

How does a bank then optimise performance based on feedback from multiple clients?

The style in which the buyside and sellside interact has changed significantly in the last few months. Investment banks can no longer offer everything to everyone. The buzzwords of today are network compatibility, partnership, interoperability and execution model flexibility. Banca IMI has therefore set up a client service desk, fully dedicated to improving the client experience – across asset classes and the full operational process – and sitting in close proximity to the sales and trading desks.

This desk strives to serve clients better and to increase the specialisation of our services. Customer experience offers a key opportunity to build investor loyalty and to create a competitive advantage for both the buyside and sellside. To achieve an edge in the current financial market, the sellside needs to rely increasingly on innovative technology in order to deliver returns for the buyside, reduce market complexity and cut the cost of consuming real market data. Firms also need to be able to integrate their trading systems and data tools into the client workflow.

By moving from trading-centric towards a customer-centric, unified value proposition, fintech companies are bridging the value chains of various industries to create an ‘ecosystem’ that should reduce customers’ costs, be more user friendly and provide them with new experiences. In this collaborative environment, a data-centric business model and client service feedback are gaining ground as the real new assets and value-adds for the sellside/buyside relationship.

*Umberto Menconi is head of Digital Markets Structures, and Carlo Contino is an analyst in Digital Markets Structures, at Market Hub, Banca IMI, part of the Intesa Sanpaolo Group.

©BestExecution 2020


Execution analysis : TCA : Citi

THE EVOLUTION OF TRANSACTION COST ANALYSIS.

By Dr Mainak Sarkar, Head of European Execution Advisory, and James Baugh, Head of European Equities Market Structure, Citi.

The post-trade workflow for most buyside institutions typically consists of the following steps: record all trade data and enrich it with the relevant market data in order to compute a set of benchmarks, the most popular being implementation shortfall (IS), volume-weighted average price (VWAP) and participation-weighted price (PWP). The performance of orders measured against these benchmarks is then analysed by stock- and order-level characteristics (such as the bid-offer spread, order size as a percentage of daily volume, daily volatility of the stock and so on); this is the attribution phase. Finally, the performance data is usually analysed by broker to measure any systematic differences in execution. All of this is mandated under current regulations.
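As a rough illustration of the measurement step, the sketch below computes arrival-price (implementation shortfall) and interval-VWAP slippage for a single buy order; a PWP benchmark would follow the same pattern over a participation-weighted interval. The numbers, data layout and sign convention (positive = cost for a buy) are simplified assumptions, not any broker’s production methodology.

```python
# Illustrative slippage calculation for one buy order (simplified assumptions).
fills = [(100.10, 2_000), (100.14, 3_000)]           # (price, shares) executed
market_prints = [(100.05, 5_000), (100.12, 20_000)]  # market trades over the order's interval
arrival_price = 100.05                               # mid price at order arrival

executed_value = sum(p * q for p, q in fills)
executed_qty = sum(q for _, q in fills)
avg_exec_price = executed_value / executed_qty

# Interval VWAP of the market over the life of the order
vwap = sum(p * q for p, q in market_prints) / sum(q for _, q in market_prints)

# Slippage in basis points (positive = cost, for a buy order)
is_bps = (avg_exec_price - arrival_price) / arrival_price * 1e4
vwap_bps = (avg_exec_price - vwap) / vwap * 1e4
print(f"IS: {is_bps:.1f} bps, VWAP slippage: {vwap_bps:.1f} bps")
```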

Limitations

We start by noting the challenges associated with each step of the post trade process:

  • Data quality issues and latency in data capture during the measurement phase. We observe considerable diversity in data infrastructure and investment amongst buy-side firms that connect to Citi as a broker. Most institutions do not maintain their own market data, as doing so involves significant setup and maintenance costs, and instead buy it from third-party providers; this can add a layer of complexity and challenge. We encounter frequent complaints from clients about the quality of the data received from vendors, which can lead to inconsistent estimation of performance.
  • Selecting the right benchmarks is critical, and this can vary by type of firm and trading objective. For instance, a quant fund with short-term alpha and a passive investor have different trading objectives and horizons and therefore use different performance benchmarks. A narrow focus on a particular benchmark can lead the algorithm to pursue that goal at the exclusion of other important ones, hampering best execution as broadly defined under MiFID II. For instance, slippage against arrival may be reduced by trading intensively (small orders only), but this can lead to high levels of impact and price reversion, which is not desirable. Conversely, too many objectives can cloud the findings on performance, making any changes to trading difficult. As a broker we have observed significant variation in how clients approach this; there is no one-size-fits-all solution, and many clients are grappling with the problem on an ongoing basis.
  • Attribution can often be challenging. For instance, if the stock moved adversely during trading, decomposing that move into market impact caused by the algorithm’s trading and natural alpha in the stock is usually difficult; approaches using index- and sector-level beta adjustment exist, but they are not entirely satisfactory. Similarly, small sample sizes can lead to the analysis being restricted (or dominated by notional weighting) to large-capitalisation, more liquid names with significant order flow, even though performance across brokers in less liquid names may be more relevant, if harder to measure.
  • Evaluation of brokers may be similarly biased by the skewed distribution of order flow in terms of difficulty, particularly when orders are manually traded; this may be driven by long-standing perceptions of particular strategies and brokers. The algorithmic wheel setup seeks to ameliorate these problems by randomising the flow of orders across brokers. Citi’s experience in this regard has been varied, as we are well represented on multiple wheels across a host of clients. Wheels are frequently time-consuming to set up and calibrate: the right data has to be collected across brokers and adjusted for order differences to achieve a fair comparison, and this process can be quite challenging.
  • Statistics / machine learning: working with large datasets (larger clients can trade thousands of orders in a day) usually requires key decisions to be taken, mostly in an automated fashion, about how to deal with outliers (identify, remove, winsorise, etc.), how to price the unfilled portion of any orders (opportunity cost), and whether to penalise under-participation by brokers under adverse conditions (scaling wrongly). All of this requires sophisticated analysis on a large enough sample of orders by a specialised team to draw meaningful conclusions (a minimal winsorisation sketch follows this list). Citi has observed a significant uptick in the hiring of such professionals on the buy side in recent times, and a trend towards using more advanced statistical analysis and machine learning techniques to analyse performance – an industry trend that has been strengthened by regulatory requirements.
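The minimal sketch below illustrates the winsorisation step mentioned above: extreme slippage observations are clipped at chosen percentiles rather than dropped, so the sample size is preserved while the tails stop dominating the average. The sample values and the 5%/95% limits are hypothetical.

```python
import numpy as np

# Hypothetical per-order slippage sample in basis points (illustrative numbers).
slippage_bps = np.array([3.0, -1.5, 4.2, 250.0, 2.8, -0.7, 5.1, -180.0, 1.9])

# Winsorise at the 5th/95th percentiles: clip extremes rather than drop them,
# so the sample size is preserved while tail observations stop dominating the mean.
lo, hi = np.percentile(slippage_bps, [5, 95])
winsorised = np.clip(slippage_bps, lo, hi)

print(f"raw mean: {slippage_bps.mean():.1f} bps, winsorised mean: {winsorised.mean():.1f} bps")
```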

Citi recognised the complexity of the post-trade analytics process described above and created an Execution Advisory team, whose role is to engage and consult with clients to help improve their benchmark performance. There has been significant engagement across a range of clients who have benefitted from this more consultative approach.

Challenges for the broker

A significant development in the post-MiFID world has been the increasing automation of the trader workflow, which has seen growth in the use of algorithmic wheels. The wheel engine randomly allocates a set of ‘easy to trade’ orders across a list of brokers. ‘Easy to trade’ can be identified either via a pre-trade model (an estimated impact cost function) or by a set of rules such as percentage of daily volume and/or notional limits (for instance, all orders of less than $1 million are routed to the wheel). At set frequencies – usually a month or a quarter – performance across brokers is analysed and the wheel is reweighted, so that a broker performing better receives more orders from the client over the next period. Note that this can lead to an imbalanced sample in the next period and therefore statistically difficult comparisons, as some brokers have large samples and others significantly smaller ones. Having too many brokers on the wheel can lead to orders being spread thinly and sample sizes becoming too small, so the optimal number of brokers is client- and flow-specific. Finally, for fair comparison, a post-trade normalisation of order performance across brokers is desirable: if Broker A predominantly received orders on news or event days, when volumes and volatility are usually high, this needs to be factored into the analysis when evaluating brokers.
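A toy reweighting scheme is sketched below to make the mechanics concrete: brokers with lower average slippage receive a larger share of next period’s randomised flow. The broker names, the inverse-cost weighting and the absence of any sample-size adjustment are assumptions for illustration; as noted above, a production wheel would also need to normalise for order difficulty.

```python
# Toy reweighting of an algo wheel (illustrative; broker names and the
# inverse-cost scheme below are assumptions, not any firm's actual logic).
avg_slippage_bps = {"BrokerA": 4.0, "BrokerB": 6.5, "BrokerC": 5.0}  # lower = better

# Convert performance into weights: invert cost so the cheaper broker gets more flow,
# then normalise so the weights sum to one for the next period's random allocation.
scores = {b: 1.0 / max(cost, 0.1) for b, cost in avg_slippage_bps.items()}
total = sum(scores.values())
weights = {b: s / total for b, s in scores.items()}

print(weights)  # e.g. BrokerA receives the largest share of wheel orders next period
```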

Brokers are continuously upgrading their algorithms based on internal analysis as well as feedback from clients, but such redesigns are often time-consuming and expensive in terms of resources. Fundamental changes to the trading platform therefore occur only infrequently, and most innovations are incremental in nature. Short-term changes to strategies are thus unlikely to drive large fluctuations in execution outcomes for clients, given that brokers usually take significant care to roll out only changes that are well tested and expected to improve outcomes on average. The volatility observed in rankings can often be greater than these infrequent algorithm changes would justify, which leads one to suspect – from a broker’s perspective – that buy-side evaluation mechanisms could be improved by filtering out noise in the data, leading to more stable comparisons.

For medium-sized and smaller firms, infrastructure is often restricted to consuming TCA provided by the brokers themselves. Even assuming complete impartiality, significant differences in definitions, data capture and storage issues, and measurement errors mean that performance measurement can vary widely across brokers, making comparison problematic for the buy-side. The subjective element of measurement is significant when comparing TCA across brokers.

In our experience, even for larger firms that are able to capture their own data, the numbers invariably do not match the brokers’ numbers, and a significant amount of time and effort is subsequently required to reach some level of agreement and consensus. This is essential to ensure that the data is sufficiently clean before any conclusions are drawn from it to drive execution outcomes.

Whilst we have seen increased interest in independent providers of TCA, which can apply a more uniform and consistent set of measures (benchmarks) to the data, they are not immune to these same challenges. Moreover, they lack an understanding of the nuances of broker-specific execution strategies. Applying a one-size-fits-all approach to data across brokers loses the broker-specific differences that are key to achieving best execution, and without the right context as to why an algorithm behaved in a particular way, the filtering of orders is based on rules that may not produce informative outcomes.

Innovations at Citi

Citi is developing the use of advanced machine learning techniques to drive a more proactive approach to the TCA process. A number of these techniques, including the use of peer group analysis and clustering, are discussed in this article.

I. Peer analysis

A new feature selection approach using Random Forest models has been implemented to highlight how different variables contribute to the overall performance of flow from a particular underlying client. This tool also helps to identify areas of underperformance arising from potential trader biases under different market conditions, when the price may be moving in or out of favour – such as systematic modifications, limit price selections and cancellations.
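A hedged sketch of the idea, not Citi’s implementation: a Random Forest regression is fitted to order-level features against slippage, and its feature importances indicate which variables contribute most to performance. The features, the synthetic data and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical order-level features: % of ADV, spread (bps), volatility (%), limit-price flag.
X = np.column_stack([
    rng.uniform(0, 10, n),     # order size as % of ADV
    rng.uniform(2, 30, n),     # quoted spread in bps
    rng.uniform(0.5, 4, n),    # daily volatility in %
    rng.integers(0, 2, n),     # 1 if a limit price constrained the order
])
# Synthetic slippage: driven mostly by size and spread, plus noise.
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["pct_adv", "spread_bps", "volatility", "limit_flag"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")  # higher importance = larger contribution to explained slippage
```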

Actionable TCA of this nature is key in delivering client specific recommendations, which in turn is leading to better overall performance.

Machine learning based Peer Analysis has also been developed at Citi. Clusters are compared across different clients with similar objectives thereby identifying any systematic differences in order flow. Using clusters identified for each client, we can make recommendations on customisations and on strategies to optimise client performance. Exhibit 1 illustrates this approach.

II. Stock categorisation

Citi is also using machine learning methods to identify different clusters of orders. Various stock-level characteristics are used, the most standard being bid-offer spread, volatility and queue sizes; other, more microstructure-level features are used to endogenously assign each stock in the universe to a particular category. For instance, one category may be high-volatility, small-queue-size stocks. There are multiple applications of this approach at both the pre-trade and post-trade phases.
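The sketch below shows one plausible way to perform such categorisation, assuming k-means over standardised stock-level features; the feature set, the number of clusters and the synthetic data are illustrative and not Citi’s actual model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical per-stock features: spread (bps), daily volatility (%), average queue size (shares).
features = np.column_stack([
    rng.uniform(1, 40, 300),
    rng.uniform(0.5, 5, 300),
    rng.uniform(1_000, 50_000, 300),
])

# Standardise so no single feature dominates the distance metric, then cluster.
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Each label is a stock "category" (e.g. high-volatility / small-queue) that can
# drive pre-trade strategy selection or post-trade peer comparison.
print(np.bincount(labels))
```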

This model can be used pre-trade to determine the right strategy – for instance, where a client is using VWAP algorithms for low-volatility but wide-spread stocks. The EAS team actively monitors performance goals and can use these tools to come up with custom recommendations for different clients; in this case, one recommendation may be to change the passive layering of the book. In essence, security- and order-level clustering methods allow us to draw conclusions at the client level, which ultimately leads to better execution performance.

III. Forward looking

Citi is enhancing its current historic data and analytics based approach by incorporating a forward-looking simulation methodology, through the provision of ‘scorecards’. Citi believes this hybrid method leads to better insight into performance measurement. Each scorecard highlights alternative strategies for liquidity provisioning. This innovative approach marries the client’s objective with market conditions and available internal liquidity to come up with theoretical improvements in execution versus the client’s benchmark.

The scorecard can be used by high- or low-touch trading desks and/or central risk to adjust for commissions and make client-specific recommendations, depending on the optimal strategy for executing the client’s flow.

Recommendations can be tailor made for clients with different flows. Exhibit 2 illustrates a sample scorecard approach, which shows cost savings / slippage versus commission rates for liquidity sourcing.

Optimising TCA performance has been a key focus of research and development at Citi. Extensive study and back-testing has revealed that price and size alone should not be the only considerations when interacting with different liquidity sources. Citi has employed machine-learning algorithms to monitor intra-trade market activity and real-time order performance to drive its algorithms’ behaviour. Real-time venue ranking also plays a crucial part in sourcing not only the best price and size but also in taking into account the quality of the liquidity. With regard to interacting with Systematic Internalisers, Citi’s comprehensive real-time venue analytics measure characteristics such as short-term toxicity, uniqueness of liquidity, reversion, good and bad fills, fill rate, and quote and fill toxicity. For example, we not only look at child-level fill performance to measure the quality of our fills; prior to the child trade we establish the toxicity of the quote from each of the Systematic Internalisers.

This information is fed to the Smart Order Router to determine its routing decisions. Furthermore, the routing decision for taking and posting liquidity is also regulated for different strategy types based on the urgency level – a very aggressive strategy may still interact with toxic liquidity sources to satisfy liquidity demand. These enhancements help us to tailor our execution strategies and improve performance.
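As a purely illustrative sketch of how such analytics might be condensed for a router, the snippet below combines a few of the named metrics into a single venue score and ranks venues by it. The metric values, weights and linear scoring form are assumptions, not Citi’s smart order router logic.

```python
# Hypothetical composite venue score feeding a smart-order-router ranking.
# Metric names, weights and the linear form are assumptions for illustration only.
venues = {
    "VenueA": {"fill_rate": 0.85, "reversion_bps": 0.8, "toxicity": 0.2},
    "VenueB": {"fill_rate": 0.65, "reversion_bps": 0.3, "toxicity": 0.1},
    "VenueC": {"fill_rate": 0.90, "reversion_bps": 2.0, "toxicity": 0.6},
}

def score(v):
    # Reward fill rate; penalise post-fill reversion and quote/fill toxicity.
    return v["fill_rate"] - 0.2 * v["reversion_bps"] - 0.5 * v["toxicity"]

ranking = sorted(venues, key=lambda name: score(venues[name]), reverse=True)
print(ranking)  # an aggressive strategy might still route to lower-ranked venues for liquidity
```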

IV. Case study

TCA driven enhancements to algorithms

Citi measures performance across a wide range of clients and uses the findings to design better products that help clients achieve optimal outcomes – an example is given below. Citi is in the process of rolling out significant enhancements to Dagger, its flagship liquidity-seeking algorithm (LSA) for equities. Earlier TCA reports gave a starting point for where the algorithm could be calibrated more effectively. This was combined with further research by the quantitative team to build a new algorithm that performs better across a wide spectrum of market conditions and stocks. Performance was evaluated using a standard A/B race on Citi principal order flow from February 2019 to May 2019. Trades numbered over ten thousand, with a value of over $1bn. Results showed significant performance improvements. Some of the key benefits were:

  • Spread capture improved by 54%, with similar average fill rates;
  • Spread crossing reduced by 42%;
  • Similar average order duration and fill rates to the classic Dagger.

Key performance drivers were:

  • A move away from heuristic-based, reactive logic to a continuous optimal-trajectory model based on machine learning analytics;
  • Improved strategy sensitivity to possible adverse selection scenarios;
  • Use of real-time analytics, in addition to historical data, to rank venues;
  • A machine learning algorithm for dynamically choosing the passive allocation policy based on real-time performance.

Performance was analysed on different aspects/slices of the data in terms of order as percentage of ADV, volatility, spread, market move and participation rates. The most significant improvement was observed in the category where the prices ‘come in’. Exhibit 3 shows the distribution of the slippage for this category; note the significant improvement in performance. We also note that the performance remains consistent when the prices are moving away. Furthermore, the improved Dagger is more capable of sourcing liquidity passively and at mid. This is an example of where machine-learning components in the algorithm have helped to provide a better performance to the end client.

 

Conclusion

In this note we have outlined some of the shortcomings of the post-trade analytics process (of which TCA is an integral part) that, in our experience, clients encounter frequently. Some can be fixed via more and better investment in infrastructure (better data quality); others are more intractable (normalisation of orders by broker, for instance) and require continuing research and analysis by our buy-side colleagues. The Execution Advisory team here at Citi is a resource to assist with this process every step of the way.

A number of innovations undertaken in this space have been outlined above. Our application of advanced machine learning techniques to TCA and using real-time analytics to minimise the implicit costs of trading is proving invaluable in providing the performance metrics our clients need to improve performance, meet best execution requirements and measure and rank broker performance.

Finally, we gave an illustration of product design based on inputs obtained from TCA – an algorithm rebuilt using advanced machine learning techniques and calibrated with TCA findings – which led to significant improvement for the client and better execution outcomes.

For further information, please contact: emea.eas@citi.com

©BestExecution 2020


Execution analysis : Multi-asset TCA : Kevin O’Connor & Michael Sparkes

MULTI-ASSET TCA: FASTER, BROADER, DEEPER.

By Kevin O’Connor, Head of Workflow Technology and Analytics & Michael Sparkes, European Analytics Business Development at Virtu

The scope and application of transaction cost analysis (TCA) and its various close relations has evolved dramatically over recent years, driven and enabled by changes in technology, regulation, market structure and client demand. Launched initially as an equity-focused compliance-driven process, the breadth and depth of analysis has expanded to cover almost every major asset class, incorporating not just implicit and explicit costs but also factors such as liquidity profiling, algo analytics and venue analysis.

Kevin O’Connor, Head of Workflow Technology and Analytics, Virtu

The expanded use of TCA across asset classes has introduced a variety of challenges due to different market structures, data availability, cost, and the relevance of differing metrics – particularly for less liquid instruments. In addition to asset class expansion, the intended audience has also evolved. Its use is no longer limited to traders or compliance departments: TCA typically comprises a series of elements and actionable outputs relevant throughout the investment process, targeting portfolio managers, risk managers and CIOs.

A major driver for many of these developments has been a raft of new regulatory requirements, not the least of which were introduced by MiFID II in Europe. The regulation requiring a demonstrable process for monitoring best execution across asset classes, using data from actual results as an input to future decisions, has galvanised the buy-side community to reassess their approach. While it is not mandatory under MiFID II to use an external TCA service, it is required that a systematic method of capturing and reviewing trade data be in place. Approximately 95% of firms reported using TCA across asset classes at the start of 2019, up from 75% in 2017, according to Greenwich Associates*.

Michael Sparkes, European Analytics Business Development, Virtu

In addition to these regulatory best execution requirements, the global trend towards commission unbundling has led to a greater focus on the quality of order execution to ensure trading and routing decisions are driven by objective and quantifiable results, separated from any need to pay for research or other bundled services which used to be funded by commissions.

Many firms had only just begun to address the new European regulations when we reviewed the state of preparedness of the buy-side in our 2015 best execution article, MiFID II and Best Execution Across Asset Classes. Our recent update of this survey captures how the industry has responded and indicates the likely path forward. While the median or typical firm has raised their game considerably, in terms of TCA data utilisation, leading firms have, if anything, pulled further ahead of the pack.

Top firms’ in-house data science teams are working directly with external TCA vendors as they consume normalised benchmarked data and load it into their proprietary databases for further analysis. The TCA vendor provides guidance not only as these teams seek to adjust or redirect trading allocations according to which brokers and counter-parties are consistently outperforming but also as they undertake more forensic testing and experimentation with different strategies to minimise costs and preserve alpha in line with their investment objectives.

Equity analysis – more granular and holistic

In 2015 it was typical for firms to monitor allocation-level costs in equities based on data from an order management system (OMS), while only a minority of firms also analysed fill-level data. Fast forward to today and we see more firms monitoring algos and venues in detail alongside the increased level of control and choices available. It is now typical for analysis at both levels of granularity, with firms adding fill-level analysis – which often incorporates data drawn from an additional source such as an execution management system (EMS).

Additionally, changes in European market structure have accelerated the evolution of TCA. Double volume caps on dark trading, the growth of systematic internalisers, the increasing role of electronic liquidity providers and the introduction of periodic auctions are all examples of changes which buy-side firms need to monitor and consider when deciding on optimal trading strategies.

The proliferation of EMSs has also led to the increased use of real-time analytics to monitor trades in-flight and make real-time adjustments in response to pre-trade TCA alerts and changing market conditions. EMS functionality continues to expand, providing greater execution strategy options, increased order control and more opportunity to analyse the post-event outcomes. Many EMS providers are now applying the same optionality used in equities to other asset classes.

The trend for more frequent post-trade analysis remains strong. While many firms are beginning to leverage pre-trade and real-time TCA to support intra-day optimisations, most review their post-trade results daily for significant outliers in equity trading. This process usually starts with the trading desk and is then checked by the compliance function. There is still an important place for monthly and quarterly processes that look at larger volumes of trades to determine any recurring trends or biases.

The definition of outliers has evolved: firms frequently use multiple filters – for instance a basis-point threshold coupled with a minimum traded value – to highlight true deviations in performance. Similarly, multi-level analysis may incorporate information relating to portfolio manager instructions to assess trading desk performance, and trader instructions to accurately evaluate broker performance.
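A minimal sketch of such a multi-filter outlier definition is shown below: an order is flagged only if its slippage exceeds a basis-point threshold and its notional exceeds a minimum value. The thresholds and order records are hypothetical.

```python
# Illustrative outlier filter combining a basis-point threshold with a minimum notional,
# so small orders with noisy percentage slippage do not flood the exception report.
# Thresholds and records are hypothetical.
orders = [
    {"id": 1, "slippage_bps": 45.0, "notional": 2_000_000},
    {"id": 2, "slippage_bps": 60.0, "notional": 50_000},    # large in bps but tiny notional
    {"id": 3, "slippage_bps": 8.0,  "notional": 5_000_000},
]

BPS_THRESHOLD = 30.0
MIN_NOTIONAL = 250_000

outliers = [o for o in orders
            if abs(o["slippage_bps"]) > BPS_THRESHOLD and o["notional"] >= MIN_NOTIONAL]
print([o["id"] for o in outliers])  # only order 1 is flagged for review
```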

Another hot topic for leading firms is alpha profiling. Greenwich Associates reports that more than a third of those surveyed now conduct analysis of this type, linking the execution strategy and its outcomes back to the investment decision and portfolio construction process.

Previously used exclusively by leading-edge firms, algo wheel technology is becoming mainstream among buy-side clients. Virtu’s Algo Wheel, originally developed as a best execution order routing solution for equities, has now expanded into FX and futures. Automated routing and algo wheels are increasingly being used to remove much of the process noise and the (often unintended) biases introduced by humans. The data, once normalised, can provide an objective and fair comparison of counterparties and strategies.

FX TCA – transparency

Foreign exchange market structure has made tremendous advances in the last five years, supporting the evolution of FX analysis. Many buy-side firms are bringing FX trading back in-house, and in some cases this has led to increased algo usage and request for quote (RFQ) platform adoption. WM Fix-related trades have also come under intense scrutiny after allegations that market participants manipulated these time-specific price benchmarks. Lastly, improvements in technology now make capturing accurate timestamps achievable, providing traders with better data for analysis. The combination of better data and new tools has significantly enhanced the quality and scope of analysis available in the fragmented global FX market.

The implementation of FX TCA by buy-side desks conducting their own trades, rather than outsourcing to the banks, has been widespread. Using data from an EMS and/or FX trading platform allows for meaningful data examination and guides the calibration of strategies accordingly. This includes the detailed review of algo behaviour and performance, as well as strategy options such as netting. Benchmarking for FX trading has also progressed to reflect the increased data reliability. The array of metrics now includes simple mid-price and bid/ask metrics as well as more advanced calculations such as size-adjusted spread and cost impact models.

The reliance on pre-trade tools, such as Virtu’s FX ACE model, has become more widespread in FX with currency pairs exhibiting patterns related to available liquidity at different times of the day. This information can be used in decision-support prior to trading and in post-trade review to help maximise liquidity, minimise spreads and to assist in comparing different strategies.

Additionally, leading firms are now leveraging peer-based comparisons, allowing them to experiment beyond standard benchmarking. The use of peer data must be handled with care to ensure true apples-to-apples comparison, with standardisation and curation to ensure the data is clean and relevant. Meaningful peer-based comparisons also require sufficient breadth and depth of data to have statistical significance. But such data can be invaluable in helping identify areas which need further scrutiny in what is a very fragmented market.

Analysis of FX trading has been used by leading-edge firms to look at the total cost of trades in other instruments. In other words, if switching from one asset class to another involves a currency exchange (for instance, from bonds to equities, or from European to US equities), it is now possible to factor in the cost of implementing the currency transaction alongside the cost of the underlying trades themselves. Often the cost of delaying a currency purchase can outweigh the gains made on the related trades, compared with trading the currency instantly. It is highly likely that this kind of multi-asset analysis will become mainstream in the years to come.
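The back-of-envelope sketch below illustrates the point: the total cost of a cross-asset switch is compared with and without a delay on the FX leg. All figures are hypothetical and chosen only to show that the FX drift can dominate the equity-leg costs.

```python
# Back-of-envelope sketch: total cost of switching from European to US equities,
# including the FX leg. All numbers are hypothetical.
notional_eur = 10_000_000

equity_cost_bps = 6.0          # combined cost of the sell and buy equity legs
fx_spread_bps = 1.0            # cost of trading EUR/USD immediately
fx_delay_drift_bps = 4.5       # adverse EUR/USD move while the FX trade was delayed

cost_trade_fx_now = (equity_cost_bps + fx_spread_bps) / 1e4 * notional_eur
cost_trade_fx_late = (equity_cost_bps + fx_spread_bps + fx_delay_drift_bps) / 1e4 * notional_eur

print(f"FX traded instantly: {cost_trade_fx_now:,.0f} EUR")
print(f"FX delayed:          {cost_trade_fx_late:,.0f} EUR")
```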

Fixed income TCA – time to play catch-up

Compared to equities or even to FX, fixed income is a late entrant to the world of analytics. Driven by many of the same factors as other asset classes, FI TCA is finally starting to catch up – mainly due to the regulatory requirements from MiFID II and the availability of reference data. The increased use of electronic platforms in fixed income assists firms in capturing data and assessing the outcomes against a variety of benchmarks and metrics. In some cases, the analysis is primarily for regulatory and compliance purposes, but as in other asset classes, leading firms are putting a lot of effort and using sophisticated data to enhance their performance.

However, not all executions have moved onto such systems with a considerable volume of trading still being conducted by voice, especially in less liquid instruments. The fragmented nature of the market, as it relates to trading venues and the number and complexity of instruments, has been a challenge which has slowed the transition.

The fixed income market incorporates a variety of asset types – ranging from the more liquid government and corporate bonds to the less frequently traded instruments such as Munis, mortgage-backed, CDS, IRS and other related categories. Each class of instrument has its own unique set of trading characteristics. These require the appropriate application of relevant market data and suitable metrics designed to provide actionable analysis.

As with all trading analysis, a key ingredient is good market data against which the trade is compared. While reliable sources are available for liquid instruments, it is considerably harder to find meaningful reference data for the more esoteric instruments. In many cases it requires the aggregation and cleaning of data from multiple sources in order to produce a realistic and fair comparison. As with all market data, the cost to obtain data has increased, in some cases very steeply. There is renewed talk from ESMA of a mandatory consolidated tape being introduced for analysing bond trading, although this is unlikely to occur quickly. In Europe there has been an ongoing discussion of a consolidated tape for equities since MiFID in 2007 and the regulators are still in the process of finding a way to implement it.

The multiplicity of venues and trading methods means that fixed income data must be sourced wherever it is available – indicative and firm quotes, historical prices, evaluated prices, dealer-to-client platforms, multi-dealer platforms and so on. One hope has been the Approved Publication Arrangement (APA) platforms, although so far these have proved less than ideal in terms of consistency and accuracy. To incorporate and standardise a range of data sources into a coherent and consistent repository requires significant scale and is not something any individual buy-side firm can easily do. Hence the need for an external analytics provider who can step in as the aggregator, cleanser and curator of vast amounts of data from these various inputs.

Interestingly this aggregation process is also an important ingredient as part of the workflow that an EMS provides. Just as with equities and FX, it is likely that greater data aggregation and decision support will become a vital ingredient in the EMS with the subsequent execution data feeding back into the post trade analysis. We expect to see leading firms implement deeper integration of pre and post-trade analytics for fixed income as trading technologies develop further.

Derivatives TCA – a mixed bag

Derivatives TCA is hard to summarise in a simple table. The most liquid instruments, such as listed index futures, have characteristics similar to global equities. OTC instruments are much harder to measure, and the availability of data and level of sophisticated analysis is akin to what is found in illiquid fixed income. As median firms and leading-edge firms expand their analysis and monitoring into derivatives, they will require the same tools available for other asset classes – including cost models and peer group analysis. Although data is available, the analysis and monitoring performed for derivatives by most firms lag other asset classes with the same liquidity characteristics.

Conclusion – how far we have come

The world of analytics continues to move forward rapidly, driven by regulatory priorities and spurred forward by leading firms and their quest to add alpha. Over time, the continued development and refinement of new technologies should deliver even more precise data, better transparency, oversight and control of trading events in multiple venues and their impact on all asset classes.

Given the increased complexity of market structure and the ever-changing patterns of trading, the key to improving execution remains a moving target – one that will continue to evolve, in some ways radically, for the foreseeable future. Firms will need to remain nimble and apply lessons learned from post-trade analysis to ensure they adapt and thrive in the changing landscape.

*Sources: “MiFID II Fuels Investor Demand for TCA” and “The State of Transaction Cost Analysis – 2019”, Greenwich Associates; available at https://www.greenwich.com/node/109726 and https://www.greenwich.com/market-structure-technology/state-of-transaction-cost-analysis-2019

©BestExecution 2020


News

POUND REBOUNDS BUT IT MAY BE SHORT LIVED.

Sterling soared, recording one of its biggest one-day gains in the past two decades, on the news that the Tory party had won a larger-than-expected majority. However, analysts advise caution, as a long and bumpy Brexit road likely lies ahead.

Sterling climbed 2.7% to $1.351, its highest level against the US dollar since May 2018, after the December 12 results, which put it on course for one of its biggest ever one-day gains. The UK currency reached its highest level against the euro since December 2016, hitting €1.207. Investors and analysts said a Conservative victory removed some of the uncertainty clouding the UK’s path towards leaving the EU, allowing Boris Johnson to get his Brexit deal through parliament.

A research report from Morningstar said the pound and UK shares had been under pressure this year as two Brexit deadlines came and went, Theresa May’s premiership crumbled in the summer, and Labour’s radical plans to overhaul the UK economy gained traction with some voters.


Silvia Dall’Angelo, Senior Economist, Hermes Investment Management, said: “The so-called most important general election in a generation has delivered a very clear mandate to the Conservative party to ‘get Brexit done’. While that sounds simple, the implementation is far from straightforward. Once the EU withdrawal agreement is approved – which is now almost certain to happen by the end of January – the uncertainty about the end point of the journey will linger and the path to it will likely be bumpy and lengthy. Crucially, the UK will have to rebuild its network of trade arrangements, which is unlikely to be frictionless.”

She added, “The removal of the uncertainty about the divorce aspect of the Brexit process is good news for the short-term economic outlook. Growth has slowed down significantly over the last few years, largely reflecting the drag from Brexit-related uncertainty on business investment. As the fog of uncertainty partially lifts, growth and business investment will probably experience a modest pick-up in the first half of next year.”

“However,” she noted, “as a new sequence of deadlines concerning the negotiation of a trade deal with the EU looms, uncertainty will probably re-emerge over the course of next year, which will probably be accompanied by renewed volatility in the pound and UK assets. Indeed, according to the deal under discussion the transition period will end at the end of 2020 and any extension will need to be agreed by July, which will potentially create new cliff-edge effects. Moreover, there is no clarity about the trading arrangements that will eventually emerge between the UK and the rest of the world, which clouds the UK’s medium- to long-term outlook.”

©BestExecution 2020

[divider_to_top]

News

INFORALGO PARTNERS WITH TRUMID.

Inforalgo, the Capital Markets data automation specialist, has struck a new partnership with financial technology company Trumid. The alliance will see Inforalgo provide on-demand integration to Trumid Market Center, the firm’s electronic bond trading platform, easing straight-through processing (STP). The partnership, which was formed initially in response to a specific client request, will benefit a broad spectrum of customers engaged in US-dollar corporate bond trading. Trumid’s trade volume grew 230% year-over-year in November, with an average daily trading volume of $480m. Trumid’s client network has also continued to expand this year, now totalling more than 450 buy- and sellside institutions.

Once built, the integration will be added to Inforalgo’s already extensive library of adapters. This in turn will broaden the trading potential of financial institutions, by providing easy connectivity into more trading venues to enable painless STP.
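
To give a flavour of what a venue adapter in such a library does, the hypothetical sketch below (in Python) translates a single venue-style trade report into one canonical record that a downstream STP pipeline could consume. The field names and the CanonicalTrade schema are invented for illustration and do not reflect either firm's actual API.

# Illustrative venue adapter; all field names and the schema are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class CanonicalTrade:
    instrument_id: str
    side: str        # "BUY" or "SELL"
    quantity: float  # face value
    price: float
    venue: str

def adapt_venue_trade(raw: dict, venue: str) -> CanonicalTrade:
    """Map a venue-specific trade report (as a dict) onto the canonical schema."""
    return CanonicalTrade(
        instrument_id=raw["cusip"],
        side=raw["direction"].upper(),
        quantity=float(raw["face"]),
        price=float(raw["price"]),
        venue=venue,
    )

print(adapt_venue_trade(
    {"cusip": "912828XG0", "direction": "buy", "face": "1000000", "price": "99.875"},
    venue="TrumidMarketCenter",
))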

Fuller connectivity, meanwhile, could enable richer management information, as well as regulatory transparency and automated reporting in compliance with prominent financial markets regimes globally.

For Trumid, the partnership will streamline the trading process for corporate bond clients which, increasingly, are transacting electronically.

“Traditionally, the majority of bond trading activity has been very manual. However, electronic trading, and transaction management in general, are rapidly growing. In turn, connectivity becomes much more relevant – presenting the opportunity to streamline processes and improve efficiency,” commented Josh Hershman, Chief Operating Officer at Trumid.

Offering a wide spectrum of integration options to existing clients and prospects will further promote growth in Trumid’s business, by prioritising STP through smart, automated data flow and removing the need to key in data manually between systems.

“Connectivity is significant in our market. The more connections a vendor has, the easier it is to use that platform,” Hershman said.

“Trumid is already connected to multiple OMS and EMS providers, and Inforalgo gives us another important connection used by our clients. This is a win-win-win – for us, for Inforalgo, and for our clients.”

“By working together to provide fast and efficient STP, Inforalgo and Trumid are able to ensure that Trumid’s clients have maximum options, removing the barrier that STP integration can create when attracting new clients,” added Phil Flood, Chief Commercial Officer at Inforalgo in the UK.

“Meanwhile, existing clients need to increase volumes, or new features might become available that mutual clients can take advantage of, and a direct connection can transform the possibilities here too.”

©BestExecution 2020

[divider_to_top]

News

FIRMS STILL STRUGGLING WITH MIFID COMPLIANCE.

European regulators are failing to provide feedback to most market participants on the quality of transaction reports submitted under MiFID II, almost two years after the rules were implemented, according to Cappitech’s second annual MiFID II and Best Execution Survey. The survey canvassed 83 respondents across brokerages, banks, asset managers and other financial and non-financial institutions. The findings showed that a majority of firms, 68.4%, had received no feedback whatsoever from regulators on transaction reporting, widely regarded as one of the more burdensome requirements under MiFID II. The remaining 31.6% said they did receive some form of feedback on transaction reports, specifically on errors such as incorrect prices in currencies, mismatches between CFI codes, and other anomalies in identifier fields for clients.

While none of those who received feedback from regulators on the transaction reporting have been fined for the errors, Cappitech warned leniency is unlikely to continue indefinitely and firms should “not take too much comfort” from the relaxed approach from authorities so far.

Mark Kelly, a member of the Cappitech advisory board and author of the report, suggested the reporting regime’s requirements have not yet been fully grasped. He said, “The survey results point to firms still not being fully comfortable with MiFID II reporting requirements. At the beginning of 2019, firms had told us that this year would be one of setting KPIs (key performance indicators) and reviewing data quality, but this process is clearly happening more slowly than anticipated.”

He added, “The use of external analysis and tools to spot problems may alleviate some of these challenges on the basis of ‘don’t mark your own homework’, which will be important as regulators start to impose sanctions on firms who are not managing their data appropriately.”

Another worrying trend is that 61.4% of firms admitted they are not monitoring best execution, despite it being a requirement under MiFID II. Most firms said they have not implemented systematic monitoring processes, over 70% are not using RTS 28 reports to make trading decisions, and 56.6% said they are not reviewing RTS 28 data.

Ronen Kertis, CEO and founder of Cappitech, commented, “Firms are not fully complying with monitoring best execution requirements. They also face far-reaching challenges such as reconciliation, which was particularly highlighted, together with not having the internal expertise to keep up with the fast-changing regulations.”

©BestExecution 2020

[divider_to_top]

News

UK LEGAL TASKFORCE PROVIDES LEGAL CERTAINTY ON CRYPTO-ASSETS.

Blockchain and smart contracts were given a major boost towards becoming a standard method for securely storing and transferring crypto-assets when the expert panel charged with giving the technology legal certainty declared that crypto-assets should in principle be treated as property.

The much-anticipated legal statement by the UK Jurisdiction Taskforce of the LawTech Delivery Panel is expected to reduce the legal doubt that has slowed the adoption of distributed ledger technology by international traders and could make Britain a global centre for digital property disputes.

The Legal Statement was compiled following an extensive industry consultation process. A diverse range of views, from industry participants, technologists and the legal profession, was sought and incorporated. Aside from concluding that crypto-assets, including cryptocurrencies, can be treated as property, it also deemed that smart contracts, if they demonstrate the key defining legal elements of a contract, can be regarded as contracts themselves. In addition, it clarified that a private cryptographic key may be used to meet a “signature” requirement on a contract.
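
For readers less familiar with the mechanics, the short example below illustrates the underlying concept: a private cryptographic key produces a signature over a document that anyone holding the corresponding public key can verify. It uses the open-source Python 'cryptography' package and a generic Ed25519 key pair; it is a conceptual illustration, not a depiction of any particular blockchain's signing scheme.

# Generic digital-signature illustration; requires 'pip install cryptography'.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

agreement = b"Party A transfers asset X to Party B on settlement date T"
signature = private_key.sign(agreement)        # only the private-key holder can produce this

try:
    public_key.verify(signature, agreement)    # anyone with the public key can check it
    print("signature valid: the document is bound to the key holder")
except InvalidSignature:
    print("signature invalid")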

This means that, in any dispute involving either a crypto-asset or a smart contract, existing law on property and contracts will apply. It also means that smart contracts, where they meet the eligibility criteria, can be legally enforced in an English court. In other jurisdictions, such as Italy, the legal treatment of crypto-assets has largely remained unclear.

According to Richard Hay, Head of UK Fintech at Linklaters, who worked with the group behind the Legal Statement, “The Legal Statement is an important milestone providing much-needed clarity on the legal standing of crypto-assets and smart contracts. It demonstrates the power and flexibility of the common law in adapting to innovation, and in providing the foundations for a truly digital economy.

“For blockchain and DLT, the legal statement answers fundamental questions that many see as having held back development and investment in the space. Uncertainty holds back the creation of new financial structures and products, so the recognition of the legal standing of these developments in English law will be welcomed by investors.”

Jack Thornborough, Group Head of Compliance and Money Laundering Reporting Officer (MLRO) at 2030 Group added, “It is great to see the statement seeking to address the legal uncertainty by recognising digital assets as tradable property and smart contracts as enforceable agreements under English Law, which is something we have supported since we conducted a primary issuance of tokenised equity earlier this year. We are delighted to see smart contracts supported by the legal framework in the UK.

“The lack of legal certainty has been a barrier for adoption, investment and development, and this statement provides the foundation to address the uncertainty around the classification and characterisation of digital assets. By providing investors with increased confidence of their rights and entitlement, the statement provides a dependable foundation for mainstream buy-in from institutional investors.”

©BestExecution 2020

[divider_to_top]

News

LSE CONTEMPLATES SHORTER TRADING DAY.

The London Stock Exchange (LSE) launched a consultation exploring a possible reduction in market hours in order to encourage staff diversity, concentrate liquidity in a shorter time frame, and positively impact the mental well-being of staff.

Shorter trading hours would require changes to regulatory reporting and to commitments for trading on systematic internalisers and OTC. They would also mean reducing the important overlap with US and Asian trading hours. However, the LSE also said all main European trading venues would have to agree to shorter hours in order to maximise the potential benefits.

The consultation, which ends on January 31st, sought feedback on five options: four covering shorter hours, and a fifth on making no changes to the trading day that currently starts at 08:00 local time (07:00 GMT in summer) and ends at 16:30.

“There is a general sentiment that a co-ordinated approach by European exchanges would be required in order for a change to be effective,” the LSE said. “Regulatory approval would be needed for any change to go ahead,” it added.

Europe has the longest stock trading hours in the world, at between eight and ten hours, compared with six to eight hours on Wall Street and in Singapore. Traders say shorter hours would cut the overlap with US trading hours and have far-reaching benefits, but could drive some business to exchanges outside Europe.

The Investment Association, which represents asset managers, said it was very pleased that the LSE had listened to traders’ calls. “We need to call time on the long hours culture, which is detrimental to diversity and mental health, and inefficient for the markets,” said Galina Dimitrova, the IA’s director for investment and capital markets.

“A shorter trading day would not only improve market structure but would also go a long way towards building a more diverse trading floor and fostering better mental health,” said April Day, head of equities at the Association for Financial Markets in Europe (AFME). “Equities trading risks lagging behind a wider financial services industry push for more diversity and inclusion unless the long trading day is tackled by an industry-wide approach.”

Both industry groups also argue that Europe’s longer trading hours, compared with other global markets, no longer deliver material benefits to savers, investors or firms. One reason is that liquidity would be improved by a more concentrated trading window. Market data seems to support this: on the LSE, around a third of daily trading volume takes place in the final hour of the session, when index-tracking funds are forced to rebalance their holdings. Conversely, there is comparatively little trading volume in the first hour of the day, which in turn affects bid-offer spreads.

©BestExecution 2020

[divider_to_top]
