
Market opinion : Brian Reid : Systemic risk

LESS RISK, NOT MORE.

Brian Reid, Chief Economist of the Investment Company Institute (ICI), looks at the age of asset management.

As banks learn to live under tighter post-crisis constraints, central bankers around the world are worrying about financial risks that could move from banks to capital markets and perhaps trigger the next great crisis. After the experience of 2007–2008, regulators rightly should be on guard for sources of weakness in the financial system.

Unfortunately, in their vigour, many regulators are seeing ‘systemic risk’ – threats to the stability of the financial system – when the issue at hand is investment risk. Investment risk is a necessary part of a well-functioning economy, attracting investors willing to take known risks in hopes of gaining a reward. Systemic risk occurs when the financial system itself breaks down and is unable to perform its normal functions of matching savings to investment opportunities or facilitating economic activity.

One of the more thoughtful discussions of this issue was ‘The Age of Asset Management,’ an April speech by Andrew Haldane, now chief economist at the Bank of England. Haldane speculates that rapid growth and structural changes in asset management could open new and “more potent” “transmission channels” for systemic risk.

But Haldane, like others, fails to make the crucial distinction between investment and systemic risks. In banking, the two are intertwined, because banks have a limited capacity to absorb investment losses or even fluctuations in asset prices. With their high leverage, banks that suffer even relatively small losses can find their existence and that of their counterparties under threat. So it’s not surprising that bank regulators fear that bank ‘de-risking’ will move investment risk out of banks, only to increase systemic risk in other parts of the financial system.

Moving investment risk to other financial market participants, however, may actually reduce systemic risk. Asset managers, in particular, act as agents, hired to manage and oversee investors’ assets through separate accounts or collective investment schemes. Rather than centralise risks – as banks do – asset managers leave risk-taking to the end investors. Those investors absorb investment losses without creating a cascade of failures that can occur when banks and other leveraged financial firms experience losses.

Haldane implicitly acknowledges this point by noting that “history is not littered with examples of failing funds wreaking havoc in financial markets.” But – like other banking regulators – Haldane falls back on banking models and regulatory approaches both when identifying emerging risks and when proposing solutions.

For example, Haldane highlights the potential for investor ‘herding’ and ‘run’ behaviour that he – along with the US Treasury Office of Financial Research and others – argues can lead to procyclical swings in asset prices.

The examples Haldane cites, however, are idiosyncratic and far from systemic. And arguments about ‘herding’ and ‘runs’ in capital markets fail both historically and conceptually. Anyone who wants to demonstrate systemic financial risk in the alleged ‘flightiness’ of investors in regulated funds has a heavy weight of data and history to overcome.

Just as important, ‘run behaviour’ is inherently a banking concept. Banks cannot accommodate large outflows of deposits, because they hold illiquid, hard-to-value portfolios of loans and securities. Instead, banks rely on opacity and protection of depositors from losses – by government intervention, if necessary – to maintain a stable deposit base.

What Haldane and other bank regulators have not demonstrated is how fluctuating asset prices can lead to systemic financial risk in non-leveraged financial institutions. The expansion and then collapse of the tech bubble in the 1990s and 2000 caused large swings in stock prices. But that episode did not take down financial institutions or freeze the financial system, because the losses were largely borne by investors who used little leverage, not by banks.

Unlike banks, capital markets rest on transparency. In well-functioning markets, risks are clearly assigned, well understood, and knowingly accepted by all parties. As information enters the market, asset prices adjust and investors experience gains and losses. Investors accept losses because the risks were disclosed and because they believe that, on balance, gains will win out.

When either the assignment or acknowledgement of risk breaks down, risks are sure to increase – and in some cases become systemic.

For example, in the tri-party repo market, custodial banks for decades unwound trades intraday, leaving themselves and the lenders exposed to the risks of a borrower default. The risks of that practice were compounded by the methods banks and brokers used to manage and track the collateral behind these loans. This unclear assignment and acknowledgement of risks led to one of the most significant market failures during the crisis. ICI and its members have been supportive of regulatory changes to this market to ensure that it operates effectively even during future periods of market stress.

Financial products also arose that failed to assign risks clearly and provide transparency. Collateralised debt obligations (CDOs) and certain asset-backed securities had complicated capital structures that led to disastrous product failures during the financial crisis. The structuring of payoffs and underwriting standards were too complicated and poorly controlled, preventing a clear assignment and acknowledgement of risk.

For capital markets regulators, the right approach is to clarify what the risks are and who bears them, so that willing investors can knowingly judge and accept risks. But in the banking world, where investment and systemic risk are intertwined, bank regulators instead resort to microprudential tools to intricately manage bank balance sheet activities and macroprudential tools to manage capital flows and banks’ overall levels of risk taking. Haldane and other central bankers would bring these tools to capital markets. Haldane, for example, speculates about central banks taking “an explicit role in managing the risk-taking cycle and activity in the wider economy.”

That would be a historic and colossal mistake.

The risk of relying on such tools in securities markets is that microprudential rules could lead more asset managers to act in a similar manner – in other words, they could increase ‘herding’. Macroprudential rules would direct capital flows by tilting the investment landscape in favour of one set of assets over another.

Would this make the financial system more secure? The historical record is filled with examples where policymakers inflated bubbles, rather than deflating them. Replacing the collective decisions of millions of investors with the judgment of a handful of regulators in allocating capital will most certainly lead to greater fluctuations in securities prices and larger distortions, not to mention decisions driven by politics over economics. That’s not the way forward to a sounder, more secure financial system.

© BestExecution 2014

Viewpoint : Post-trade : Market harmonisation : Robert Almanas

Bob Almanas, SIX Securities Services

A BLUEPRINT FOR EUROPE.

Robert C. Almanas, Head of International Strategy and Alliances at SIX Securities Services, outlines his thoughts on bringing down barriers and building an optimal trading environment in Europe.

Following several years of volatility after the global financial crisis (GFC), Europe’s markets are slowly climbing back from the brink, with most venues seeing a consistent uptick in trading volumes. While this sign of recovery will reassure Europe’s politicians, the continent’s market infrastructures are still far from optimal, and we could face the prospect of a real-economy recession in numerous markets. As competition heats up from rival trading blocs such as North America and Asia, I believe that Europe must reform and streamline its financial markets before it can claim to have a best-in-breed trading environment.

As it stands, cross-border trading on the continent continues to prove expensive and inefficient, as Europe’s markets lack cohesion and common infrastructures. Despite having been identified more than a decade ago, the infamous Giovannini Barriers have yet to be truly dismantled. These are a series of fifteen factors, identified by a group of financial market experts, that hinder the integration of EU settlement and clearing infrastructure, and so the efficiency of European markets in general. To achieve its goal of becoming a truly global economic power, the EU must create an open and transparent market, which provides access to non-EU countries and organisations based on clearly defined rules. I also feel that protectionism and any foot-dragging on compliance must be sanctioned by European authorities.

Given its key role in helping the European financial markets overcome the GFC, the input of the post-trade sector will be pivotal in the drive for change and efficiency. Already, many recent regulatory changes have targeted the clearing and settlement landscape, seeking to reinforce these risk mitigation systems, and harmonise Europe’s existing infrastructures. Much more can, and must be done. Below, I’ve listed some of my thoughts on how we as an industry can contribute to creating a better post-trade environment in Europe.

Pool collateral (virtually)

For Europe to be considered a single, unified market, the location of physical assets must become irrelevant, as long as they are within the continent’s borders. This will become even more important as demand for high quality collateral grows, driven by enhanced concentration and systemic risk mitigation approaches, and such collateral becomes harder and harder to find. For this reason, Europe’s post-trade infrastructure providers (such as CSDs and global custodians) need to virtualise their collateral pools, ensuring that whatever high quality collateral already exists is mobile and available. This approach can also help to minimise collateral overhang and concentration risks, while addressing many of the inefficiencies that are inherent when securities are transferred across systems.

Avoid dilution

Because of the essential role that it plays in lubricating our financial markets, collateral is a crucial element in the risk management toolkit available to the post-trade sector. For these reasons, its quality must not be diluted. We believe that all collateral must be held to four core values – it must be simple, liquid, easy-to-value and high quality. If these values are diluted by collateral management service providers such as CCPs, CSDs and global custodians, we could run the risk of another financial crisis.

Rationalise Europe’s clearing layer

There is no avoiding it: Europe has too many CCPs. With more CCPs than EU member states, we now have a situation where risk mitigation is not optimised and institutional investors are forced to connect to multiple trading partners, increasing their cost of doing business. The introduction of interoperability to more European markets would help with this issue, while a wave of mergers and acquisitions could play an important role in reducing the number of active CCPs. We are already seeing this process commence, as bigger, cross-border CCPs that operate in multiple markets acquire smaller rivals. EMCF and EuroCCP recently merged to become EuroCCP NV, while SIX x-clear finalised its acquisition of Oslo Clearing in May.

Mandate interoperability

At this point in time, only two countries in Europe, the UK and Switzerland, have embraced interoperability for their clearing layer, despite the obvious benefits this approach can provide to market participants. When an institution trades across multiple venues that have interoperability agreements in place, it can then choose to consolidate its flows through its preferred CCP, gaining operational efficiencies and the possibility of volume discounts as a result. The increased competition fostered between CCPs by this approach benefits the entire market as it encourages them to improve the quality of the services they provide, drives innovation and can lower clearing fees. Given these benefits, it would be valuable to the European markets if EU regulators were to urge national regulators to open up their closed siloes.

Accept the cost of compliance

Post-GFC, the European capital markets have had to contend with wave after wave of regulation, as authorities seek to avoid the mistakes of the past. Banks are now subject to far more stringent capital requirements, the middle and back offices have additional obligations, and compliance and IT departments at institutions of all sizes are working overtime. Europe’s post-trade sector is no stranger to these regulatory demands, as changes such as T2S and CSDR have forced post-trade service providers to invest in their systems and update their procedures. These reforms are expensive and time-consuming for the markets; however, they are essential to ensuring that Europe’s financial markets are robust enough to survive any future crises. Although it may prove tempting, post-trade service providers must not react to increased compliance costs by automatically passing them on to their own clients. Instead, these increased expenses should be seen as the cost of protecting and enhancing one of the world’s biggest financial markets.


Viewpoint : Best execution : Caroline Brotchie

ART NOT SCIENCE.

Caroline Brotchie, Sales Manager at Algomi, explains why best execution will always be an art and not a science.

For several years now, voices from all corners of the financial industry have opined on how to tackle requirements for best execution. While all broadly agree on the fundamental tenets of investor protection and rigorous codes of conduct for market participants, the reality of putting these into practice, especially in the knottier illiquid fixed income universe, has long caused dissent among the ranks.

At the core of the problem lie two issues. First, the parties involved in the practice and supervision of the process come from differing standpoints, and each is itself trying to bridge an information gap. Second, what exactly that ‘process’ or ‘information gap’ represents is different for each and every trade. Clearly, therefore, we are left in a bit of a bind: the winning combination of somewhat abstract concepts trying to unify different perspectives and standardise practice across a wide range of situations.

First, it may be useful to identify the parties involved in the debate.

• The end investor

Be they institutional or individual, the end investor is both the focus and beneficiary of all best execution practices, and simultaneously the least able to assess them. Being perhaps a step removed from the workings of the financial markets means that the day-to-day decisions made with regard to their portfolios are not always entirely clear. As a result, investors are reliant on regulators to supervise and monitor on their behalf.

• The regulator

MiFID’s directive on best execution aims to protect end investors where they may not be best placed to do so themselves. This has come in the form of overarching directives, and indeed can only be so: given the gamut of market situations and individual trade requirements, more detailed rules are unworkable in practice. A corollary of this is the maintenance of the reputation of the industry as a whole to the outside world.

• The buyside compliance team

Internal compliance departments have the unenviable task of managing the internal implementation of regulations, as well as the external perception of the firm’s conduct in this regard. This is clearly an iterative process, and one which is significantly shaped by discussion with industry bodies and market peers. It can lead to a ‘safety in numbers’ approach: industry practice norms such as the ‘three quote rule’ evolve and become so widespread that they can be treated as a regulatory requirement rather than simply an interpretation of one.

• The execution desk

Dealers are required to bridge the gap between the requirements of compliance and the realities of trading in an illiquid market. An often-used example of this is the three quote rule mentioned above. For particularly sensitive trades, seeking out three competing quotes would actively impede best execution, because broader market awareness of a large or illiquid position would cause spreads to move adversely. What would indeed be ‘best’ in such situations is for dealers to use their expertise and discretion. But how is this auditable by the other guests at the best execution party?

The solution is less than forthcoming. Marrying practice and supervision in a world of limited information is an art rather than a science. But that does not mean the status quo cannot be improved. Indeed, the most obvious way to close an information gap is simply to provide more information. That is one crucial piece for sure but, in a vacuum, it may not be the end of the story. It forgets that ultimately investors select money managers on the basis of a belief in the soundness of their judgment. That is the art. That judgment runs through from the analysts and fund managers to the execution dealers. The latter must be enabled to exercise that judgment based on the market they see in front of them. That is their expertise and value.

So the key to improving best execution practices and supervision is in improving information provision and, crucially, using that to assess dealing desks’ quality of judgment. More and better information is the bedrock for good decisions.

Best execution is as much a process as it is an outcome. It begins with the execution dealer being better informed about the market picture by the sellside. As we said before, the information gap varies on a trade-by-trade basis – what is therefore needed for each trade will similarly vary. But broadly, he or she needs to understand exactly how involved a bank is in a specific sector, issuer or bond in order to evaluate the information being sent, and make an informed judgment call. What flows or enquiry is each bank seeing? Is market sentiment shifting? Who has the most relevant distribution channel for this position?

Clearly not every piece of this intricate jigsaw can be communicated up the chain, but even a distilled version must be preferable to existing standards: the inadequate three quote rule, or price on screen covers.

Solutions such as our Honeycomb network and the recently announced JP Morgan Agency division are examples of recent innovation that seek to address this. More specifically, best execution is evidenced by showing that the channel utilised by the execution desk (i.e. which global bank and sales team was selected as counterpart, which e-platform, or which trading style) was the most appropriate for the trade in question. This additional colour allows compliance teams, the regulator if needed, and ultimately the end investor to assess the validity of the judgment calls being made.

Perhaps unsurprisingly, meaningful improvements in market-wide best execution processes begin with the sellside and permeate all the way through to the end investor. Inefficient information flow is a distinguishing feature of the illiquid end of the fixed income space, and where it can be improved, it should be. However, that does not and should not detract from the essential requirement for discretion and judgment that also act as a guardian of value for end investors. Both elements, appropriately balanced, form the optimal way to manage best execution.


Post-trade : Viewpoint : T+2 : Denis Orrock

GBST, Denis Orrock

THE FINAL COUNTDOWN.

Denis Orrock, Chief Executive, GBST Capital Markets.

In an earlier article for Best Execution, we looked at some of the likely impacts of the upcoming T+2 settlement regime on asset managers and investment firms, and we reviewed the various operational burdens that banks and institutional brokers will face.

At the time of writing, the October 6th deadline is imminent, so it seems a good time to revisit the subject. Just how ready is the industry? What are the key considerations for buyside and sellside firms with T+2 fast approaching? And how can technology vendors and service providers assist with a smooth transition?

How ready are firms for T+2?

From all accounts it would seem that the majority of sellside firms are now geared-up for T+2, having invested in the necessary technology and updated their processes to handle the upcoming changes. However, many of their clients on the buyside seem less engaged, showing a marked lack of interest, investment or action.

This situation is not unique. Earlier this year, GBST published white papers on the introduction of T+2 to the Australian equities market [1] and the implications of a potential move to T+2 in the US [2]. In both research studies, we found little appetite from smaller buyside firms to make the operational and technical changes required. In fact, even larger firms struggle to keep up with the operational challenges associated with T+2.

Here in Europe, anecdotal evidence suggests that the situation is likely to be similar. This means that many buyside firms – including some of the biggest names – will be pushing their brokers to continue to offer T+3 settlement terms, regardless of the knock-on effects.

These effects cannot be overstated. Institutional brokers who agree to continue providing T+3 terms will be on the hook for the additional capital requirements of that extra day, which could have serious risk and treasury implications. In some cases this could even render the business unprofitable, because the brokers will in effect be providing additional funding to their clients.
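To see why continuing on T+3 amounts to broker-provided funding, consider a back-of-the-envelope sketch; the notional, overnight rate and day-count convention below are illustrative assumptions, not figures from any broker:

```python
# Hypothetical back-of-the-envelope: what it costs a broker to finance one
# extra settlement day for a client who stays on T+3 after T+2 goes live.
# All inputs are illustrative assumptions, not market data.

def extra_funding_cost(daily_notional, overnight_rate, trading_days=252):
    """Annual cost of financing one additional day of settlement exposure.

    daily_notional: average unsettled value carried for the extra day
    overnight_rate: broker's annualised overnight funding rate (0.01 = 1%)
    """
    one_day_cost = daily_notional * overnight_rate / 360  # money-market day count
    return one_day_cost * trading_days

# e.g. EUR 100m average daily notional funded at an assumed 1% overnight rate
annual_cost = extra_funding_cost(100_000_000, 0.01)
print(f"EUR {annual_cost:,.0f} per year")  # -> EUR 700,000 per year
```

Even at these modest assumed rates, the extra day costs hundreds of thousands of euros a year, a cost that someone has to bear.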

What can the sellside do?

Faced with this predicament, sellside firms have a number of choices:

• Refuse to offer T+3 settlement terms

• Agree, but pass on the additional capital costs to their clients

• Agree, but offer rewards – such as lower commission thresholds or rebates – to clients who are efficient in their post-trade processes

In the short term at least, it is unlikely that brokers will refuse point blank to offer T+3 settlement, particularly to their larger customers, otherwise they could end up losing too much business to their competitors.

The second option, passing the charges on, is tricky. Traditionally, sellside firms have not charged capital costs on to their clients, so this could be a difficult pill for the buyside to swallow, particularly as any costs passed on to investment firms will have an impact on their returns, on fund performance and ultimately on the end investor.

This leads us to the third option, rewarding efficient post-trade processing, which is probably the most palatable to the buyside.

In order to make this happen however, brokers will need to clearly articulate to their clients what they consider to be best practices around T+2, providing all the necessary guidance as to when allocations will confirm and when affirmations are needed.

A clear channel of communication between both sides is needed.

Latest AFME recommendations

In its recently published briefing note [3], the Association for Financial Markets in Europe (AFME) made various recommendations around what market participants should be doing in the final run-up to T+2. One key proposal was that firms should aim to complete trade allocation, confirmation and affirmation, and complete input and matching of settlement instructions, on T+0.

With same-day affirmation a key component of efficient settlement, this could be one of the areas where institutional brokers may wish to offer rewards to clients who achieve it. And the smart buysides, who have the vision to see how this will have a positive impact on their returns, will invest in ways to make it happen.

Reference data

Another area of focus, also covered in the AFME recommendations, will be around client reference data and ensuring that standard settlement instructions (SSIs) are 100% correct. So brokers will need to provide clear specifications regarding data formats for SSIs and other source data.

Additionally, there are industry initiatives, such as Omgeo and DTCC’s global repository for SSIs [4], that can help facilitate this by ensuring all settlement instructions, Omgeo codes, SWIFT codes, etc., are in place and accurate across the bank.

Out-of-scope transactions

One issue that may arise post-October 6th is that T+2 is not mandatory for all transactions and asset classes, with OTCs for example falling outside the scope of T+2. In its recommendation document, AFME has proposed that, as best practice, firms should aim to settle all trades, including out-of-scope OTC transactions, on a standard T+2 settlement cycle. However, it is not always possible, or even desirable, to settle all out-of-scope transactions on T+2.

How technology can help

Firms that try to adopt a “one size fits all” approach will struggle unless their middle and back office systems are actually designed with the capability to run multiple markets, currencies, assets, etc. on the one platform. And firms that run multiple systems to handle different markets and different settlement cycles will not benefit from any economies of scale.

So in order to continuously adapt to the post T+2 environment, market participants on both the buyside and the sellside need technology frameworks that allow process changes to be rapidly committed into workflow, particularly when dealing with multiple settlement structures and cycles.

Rules-based systems are particularly effective in this regard. With a rules-based system, firms can write rules to match events, adapt processes to markets very easily, and rapidly adapt the workflows to meet the new rules.
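As a minimal illustration of the rules-based idea (the rule and trade structures below are hypothetical, not GBST’s actual platform), settlement-cycle assignment can be expressed as an ordered list of predicate/offset rules that is easy to amend when a market changes its cycle:

```python
# Sketch of a rules-based settlement-cycle engine, with hypothetical
# rules and market codes used purely for illustration.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Trade:
    market: str
    asset_class: str
    trade_date: date

# Each rule is a (predicate, settlement offset in business days) pair;
# the first matching rule wins, so specific rules go before general ones.
RULES = [
    (lambda t: t.asset_class == "OTC", 3),        # out-of-scope OTC stays T+3
    (lambda t: t.market in {"XLON", "XPAR"}, 2),  # in-scope EU markets on T+2
    (lambda t: True, 3),                          # default fallback
]

def settlement_date(trade: Trade) -> date:
    offset = next(days for pred, days in RULES if pred(trade))
    d = trade.trade_date
    while offset:                  # naive business-day roll (ignores holidays)
        d += timedelta(days=1)
        if d.weekday() < 5:
            offset -= 1
    return d

# Monday 6 October 2014, the T+2 go-live date, settles Wednesday on T+2
print(settlement_date(Trade("XLON", "EQUITY", date(2014, 10, 6))))  # -> 2014-10-08
```

Moving a market from T+3 to T+2 then means editing one rule rather than reworking hardcoded logic scattered across systems.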

At GBST, we have clients who use our rules-based technology for post-trade processing in multiple markets around the world, all of which have their own unique settlement requirements. And we believe that it is this implementation of innovative, smart, cost-effective technology, enabling rapid operational process change, that is the key enabler and differentiator for effective T+2 participation, not just in Europe, but on a global basis.

The final countdown to T+2 is under way. Will you be ready?

1. https://gbst.com/wp-content/uploads/2014/02/GBST-SAA-Tplus2-in-Australia-Whitepaper.pdf
2. https://gbst.com/t2-settlement-whitepaper
3. https://www.afme.eu/WorkArea/DownloadAsset.aspx?id=11493
4. https://www.omgeo.com/page/omgeo_dtcc_partner_global_hub_ssis
 

Viewpoint : Regulation : Silvano Stagni

HEADS IN THE SAND.

Silvano Stagni, group head of marketing & research at Hatstand, points out that MiFID II and MiFIR are only 24–30 months away, and explains why you should be thinking about them now.

ESMA is currently working on the first draft of technical standards for MiFID II and MiFIR. According to its published MiFID II timeline*, the relevant consultation papers will be ready towards the end of 2014, and the consultation period should last until March 2015.

It will be a longer consultation than the one that ended on 1st August this year. The questions considered then were included in two documents – a consultation paper and a discussion paper – covering both the Directive (MiFID II) and the Regulation (MiFIR). Respondents had to read and absorb 844 pages in a relatively short time in order to formulate responses that would be eligible for consideration by the supervisory agency.

In fact, not many responses were received, and many people commented on the difficulty of meeting the deadline.

It is interesting to look at who the respondents were, especially in the United Kingdom, the country with the largest financial industry in Europe.

ESMA identified 11 categories of respondents. The table below shows responses from the EU as a whole.

[Fig.1: Responses by respondent category, EU as a whole]

The bottom four categories represent financial institutions; the top seven represent other players: trade associations, exchanges, etc. Interestingly enough, the largest number of responses from a single category came from ‘Others’, mostly professional and trade associations, industry standards bodies and the like. Overall, the bottom four categories combined represent about half of the responses (49%).

Looking at the responses from the UK on its own paints a slightly different picture:

[Fig.2: Responses by respondent category, UK only]

The bottom four categories represent slightly more than 40% of the responses (43% to be precise). Professional and trade associations, industry standards bodies and other institutions engaged in representing, advising or selling services to the financial industry provided more answers than all the respondents from financial institutions combined. In other words, it looks as if a lot of those who work in the financial industry cannot be bothered, are too busy, or are simply ‘too cool to comment’.

The MiFID Review will probably become effective two bonus cycles away, so if the industry chooses not to engage it does so at its own risk.

Judging from past experience, ESMA listens to constructive criticism during consultations: if enough respondents raise substantiated and well-presented objections, they are usually taken into account. But this is not the only reason to start thinking about MiFID II and MiFIR at such an early stage.

Extending best execution to non-equity instruments involves more than just writing a bunch of reports. Just by looking at the high level of detail discussed in the text of the Directive and the Regulation approved by the European Parliament last April, there are a lot of strategic decisions that will need to be taken, decisions that may affect the business model of institutions. For instance:

• Dark pool trading

Negotiated trades will be very limited and they will have to be operated in an environment run by an exchange. There could be other ways to allow, ‘de facto’, some form of crossing clients’ orders. Whatever happens, though, this model will have to change; how and when cannot be decided quickly, and involves more than implementing systems.

• Obligation to trade on exchange

This is more than a standardisation of instruments and changes to the reference data. Existing operators of trading platforms will have to decide whether to operate their own exchange – in other words whether to register as an OTF. Operators of messaging-based trading platforms will have to provide transaction data in a way that is not possible from a pure messaging-based system. Will they buy or build a new trading system or will they ‘bolt on’ a module on the existing system that will capture the information required?

The overall hassle of trading will increase. Marginal activities may no longer be profitable. Looking at what is happening across the pond with Dodd-Frank, it appears that a number of institutions have stopped trading some financial instruments but other institutions have picked up the slack. What will you decide to do?

Last but not least: more instruments traded on exchange and more trading venues mean more market data. Are you ready for explosive growth in volume and data types? Market data are expensive, and preparing for the new trading environment will mean more than cleaning a database. There are contracts to consider, and potential renegotiation may take longer than the window between acquiring a reasonably stable set of technical standards and the kick-off date.

These examples are by no means an exhaustive list of the reasons why the UK financial industry neglects the process of defining MiFID II at its own risk. Another chance to engage is approaching towards the end of the year. Don’t miss it. Any early work in assessing the impact of MiFID II and MiFIR will pay off later.

*The timeline can be found at: www.esma.europa.eu/page/Markets-Financial-Instruments-Directive-MiFID-II

 

© BestExecution 2014


Viewpoint : Impact of regulation : Mark Pumfrey

CONFLICTS OF INTEREST MODELS UNDER PRESSURE.


Mark Pumfrey, CEO for EMEA at Liquidnet, examines the issue.

Following the global financial crisis, greater regulatory scrutiny worldwide is curtailing once-ubiquitous financial products and practices and fundamentally changing market structure as we know it. Recent high-profile probes by the New York Attorney General, the SEC and FINRA into the practices of broker-owned dark pools have reignited debate about the issues inherent in conflicts-of-interest models. The overarching aim for regulators is to create a more level playing field in the best interests of all market participants. Not an easy problem to solve.

In an industry where all participants are forced to co-exist, conflicts of interest inevitably arise. While we are far from seeing an environment where the interests of every market participant are fully aligned with those of its clients, increased regulatory scrutiny and global directives are forcing the industry to evolve and to place a far greater emphasis on transparency. As a result, we are seeing an institutional shift towards trading venues in which the trading model is transparent and conflicts of interest can be minimised.

This point is important because it is the conflicts of interest inherent in some business models that have been called into question recently – in particular, the extent to which broker-owned dark pools are being transparent about the activity of high-frequency trading firms in those pools, and on what basis client flow is internalised.

Trust in venues and quality of execution rely on a buyside firm’s ability to know clearly what liquidity it is accessing, where its trades are executed and how its data is used. The trading community has taken for granted that the information brokers access belongs to their clients and should be used only at those clients’ discretion.

It is now more important than ever for institutional clients to understand the clear differences between the trading venues they use and the performance they achieve. Today, only a handful of entities which label themselves dark pools can genuinely claim to add value above and beyond what is found on lit markets in terms of execution size and price improvement. Block crossing networks such as Liquidnet complement lit markets by offering size discovery alongside the exchange’s price discovery. This can give market participants access to deeper liquidity and the choice of where to execute their trades and with whom, creating a market that is more aligned to the interests of its investors.

For the long-term investor, with obligations relating to the performance of its funds and therefore to the underlying investors, three items stand out as being of particular importance: sourcing liquidity, reducing market impact and minimising information leakage. Transparency and execution quality should not be a concern only to the trader; they should be on the radar of the entire firm.

A trader’s fiduciary obligation when navigating this complex trading landscape is to achieve best execution. However, traders face daily challenges in reducing the conflicts that stem from the simultaneous pursuit of best execution and the need to manage commission spending. The FCA is calling for greater transparency on payments by asset managers for brokers’ execution and research services. The regulator recently conducted a review and found that firms are still using dealing commission to pay for these services, and that too few firms properly assess the value added and the cost of research paid for using client dealing commission.

Since the launch of the FCA review, we have seen an increasing trend among our asset management members and the broader community seeking to understand the cost of research and execution and the value they deliver. The FCA recently calculated that the UK asset management industry may be leaving as much as £264m in annual client returns on the table for every basis point that could be saved due to the lack of monitoring of how efficiently brokers are managing their trades*. With the FCA suggesting the savings could be as high as 16 basis points, that translates into £4.2bn. This trend is potentially a game-changer for the industry, which could bring to an end the conflicts of interest on the sellside and remove some of the over-capacity in the system – both on the research and execution sides. What we are seeing is a return to a simpler model in which asset managers take greater control over research and execution, paying only for true value, thus protecting their clients from the potential conflicts of interest inherent in the current model.
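The arithmetic behind those two figures is easy to verify. A minimal check, using only the numbers quoted above:

```python
# FCA figures quoted above: up to £264m of annual client returns per
# basis point saved, and suggested savings of up to 16 basis points.
saving_per_bp = 264_000_000      # pounds per basis point, per year
suggested_bps = 16               # basis points of potential savings

total = saving_per_bp * suggested_bps
print(f"£{total / 1e9:.1f}bn")   # → £4.2bn
```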

The world is now moving to greater levels of transparency, and there seems to have been a seismic change in the view of the issues and conflicts that exist, causing the industry to ask new questions. This is driving the investigations into dark pools we are seeing today. Regulators are right to look at venues in detail, and these dark venues will need to be more transparent than they have been in the past. At the same time, traders requiring anonymity to protect trade order information also need assurances and transparency from trading venues around what happens to their order information: where their trade is routed and executed, on what basis, and with which (type of) counterparty.

Abel Noser, a global TCA provider, analyses brokers’ performance in an annual study, mining over $7.5trn of equity trades to determine which brokers provide the best execution for institutional members. The common denominator among the brokers that topped this year’s list was that they traded only on behalf of their clients. These results are further proof that the aligned-interest model provides increased value to the buyside.

*FCA Thematic review: Best Execution and payment for order flow, July 2014 
 

© BestExecution 2014


Viewpoint : Fixed income trading technology : Paul Reynolds

EVOLUTION OR REVOLUTION?

By Paul Reynolds, CEO of BondCube.

The evolution of fixed income trading technology is not often described as steady. There was a burst of activity fifteen years ago and now there is a further frenzy of activity to try to unlock the present liquidity conundrum.

Fixed income trading technology is trapped in a triple clamp of huge regulatory change, sub-optimal market structure and pinched sell-side budgets. It is not all bad news though, as electronic trading volumes have in fact risen, just not as much as you might expect given the 50% overall growth in the size of the market, and the growth is mainly in small orders.

Hidden in these statistics lie the underlying problems of the evolution of fixed income trading technology, or you might say, the lack of consistent evolution of fixed income trading technology, compared to other asset classes like equities.

Fifteen years ago companies like Bloomberg, MarketAxess and Tradeweb introduced ground-breaking technology that provided the channel for investors to ask multiple banks for prices to trade, known as request for quote (RFQ). These companies now dominate corporate bond trading in the US and the EU.

After fifteen years RFQ still remains the only effective electronic method of trading between investors and banks. Before the crisis it was possible to use RFQ to execute all sizes of orders. Since then order sizes have consistently declined, although the number of them has grown dramatically.
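The RFQ mechanics described here can be reduced to a few lines. A minimal sketch, in which the dealer names and quoting functions are entirely invented for illustration:

```python
# A stripped-down sketch of the request-for-quote (RFQ) workflow: ask
# several dealers for a price on an order and take the best one.
def rfq(side, size, dealers):
    """Collect quotes from each dealer and return the best (name, price)."""
    quotes = {name: quote_fn(side, size) for name, quote_fn in dealers.items()}
    # A buyer wants the lowest offer; a seller wants the highest bid.
    best = min(quotes, key=quotes.get) if side == "buy" else max(quotes, key=quotes.get)
    return best, quotes[best]

# Hypothetical dealers quoting a corporate bond regardless of size.
dealers = {
    "Bank A": lambda side, size: 100.10 if side == "buy" else 99.85,
    "Bank B": lambda side, size: 100.05 if side == "buy" else 99.90,
    "Bank C": lambda side, size: 100.12 if side == "buy" else 99.80,
}

print(rfq("buy", 1_000_000, dealers))   # → ('Bank B', 100.05)
print(rfq("sell", 1_000_000, dealers))  # → ('Bank B', 99.9)
```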

So what about the large orders? Investors have an ever-increasing backlog of unexecuted, large orders known as ‘failed to trade’ (FTT). If fixed income is to evolve it must address how to improve the efficiency and cost of executing large orders.

The first talon of the triple clamp is the new regulations requiring banks to apply more capital to conduct fixed income activity with investor clients. The result is that their capacity to provide buying and selling prices to investors has fallen by some 75% since the 2008 global financial crisis.

RFQ relies very heavily on banks providing accurate pre-trade prices followed by execution at those prices. Any investor will tell you this is simply not the case these days.

The second talon is sub-optimal market structure. Existing platforms have tried new variations on the RFQ model or asked investors to submit entire orders to banks to find matches. The evidence suggests none of them has worked so far.

Finally, banks have recently been through a cycle of single dealer platform initiatives that did not deliver because they further fragmented liquidity. Scarce IT budgets have led them to consider third party solutions to return their trading and sales businesses to profitability.

So, at a time when investors have more exposure to bonds than ever before, at historically low interest rates, they are relying on constrained bank liquidity within an RFQ model that struggles to execute large and illiquid trades. This concentration of liquidity risk amongst investors has recently caught the attention of the Fed and the SEC.

This situation is unique to fixed income. In other markets such as equities, FX and futures, everyone trades with everyone else through exchanges and volumes are huge, especially when there is volatility. In fixed income, volatility means dramatic price changes and very low volumes, as recently witnessed in the high yield market.

This is because banks are solely responsible for setting prices and deciding which volumes trade. If they do not want to buy in a falling market then very little investor selling will take place and prices will be marked down. Even if a buyer wanted to take advantage of lower prices, they would struggle to find the available offers, as there is no effective way to discover them.

With banks unable to ‘grease the wheels’ the market structure needs to evolve in a number of ways to connect buyers with sellers outside the RFQ channel. The first way is to emulate the other markets and permit ‘all to all’ connectivity.

So what would this ‘all to all’ infrastructure look like and would it in fact be like equities, FX & futures? The main difference between these markets and fixed income is that in fixed income there are hundreds of thousands of different bonds that trade very infrequently. In the other markets there are just a few that trade very intensively.

All to all would certainly be a significant evolutionary step, but can technology innovate sufficiently to overcome the different make-up of fixed income compared to the other markets, to make it work?

If you have a large trade to execute in fixed income you need to be very careful who you tell. The only person you really want to tell is a person who wants to do the other side of your trade, so if you are a seller, you only want to find a buyer and vice versa, not another seller or even someone who has nothing to do. They may use the knowledge that you are a seller to their own advantage and your disadvantage.

So connecting everyone together using ‘all to all’ is one thing, but minimising ‘information leakage’ is another. Posting enquiries as ‘indications of interest’ (IOIs) with just a side and a minimum size, but no price, significantly reduces market risk. Adding ‘dark matching’ is, however, the key to ensuring minimal market risk.

Dark matching is where a seller can select a group of other users to ask who is a buyer, but the only person who will actually know, is a genuine buyer, because they have entered an enquiry asking for a seller. The two matched users can then negotiate and trade privately, posting the traded volume afterwards.

Matching a genuine buyer with a genuine seller also means the transaction costs should be much better. This is because the buyer and seller are likely to negotiate a price between the bid and ask price quoted in the market. Neither has to hit the bid, nor lift the offer.
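The dark matching workflow described above can be sketched in a few lines. Everything here (the class, the first-come pairing rule, settling at the mid) is a hypothetical illustration of the idea, not any venue’s actual matching logic:

```python
from dataclasses import dataclass

@dataclass
class Enquiry:
    user: str
    side: str       # "buy" or "sell" -- no price is disclosed pre-match
    min_size: int

def dark_match(enquiries):
    """Pair buy enquiries with sell enquiries, first come first served.

    Only the two matched counterparties learn of each other; unmatched
    enquiries reveal nothing to anyone else.
    """
    buys = [e for e in enquiries if e.side == "buy"]
    sells = [e for e in enquiries if e.side == "sell"]
    matches = []
    while buys and sells:
        matches.append((buys.pop(0), sells.pop(0)))
    return matches

def negotiate(bid, ask):
    # The matched pair can settle between bid and ask, e.g. at the mid,
    # so neither side has to hit the bid or lift the offer.
    return (bid + ask) / 2

book = [Enquiry("A", "buy", 5_000_000),
        Enquiry("B", "sell", 2_000_000),
        Enquiry("C", "sell", 3_000_000)]

print([(b.user, s.user) for b, s in dark_match(book)])  # → [('A', 'B')]
print(negotiate(99.50, 100.50))                          # → 100.0
```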

Fortunately for fixed income, there is abundant effort and activity in introducing new ideas to solve the failed-to-trade problem; it remains to be seen whether the answer will be a giant leap or a steady evolution.

© BestExecution 2014


Viewpoint : Performance metrics : Darren Toulson


WHO IS MY PEER?

Darren Toulson, Head of Research, LiquidMetrix.

Following any presentation to a buyside of the top line performance metrics in a TCA / Execution Quality Report, the natural question is “So, is this performance good or bad?”

An absolute implementation shortfall (IS) number of 8.6 BPS may sound good – most people talk about IS numbers in double digits – but is it really good? Or is it simply that the types of orders you are doing are relatively ‘easy’? Similarly, if broker A has a shortfall of 4.6 BPS and broker B has a shortfall of 12.6 BPS, is this simply because you’ve trusted broker B with your tougher orders?
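For readers unfamiliar with the metric, the arrival-price arithmetic behind an IS number is straightforward. A small sketch with hypothetical prices, chosen so the result reproduces the 8.6 BPS figure above:

```python
def shortfall_bps(side, arrival_price, avg_exec_price):
    """Implementation shortfall in basis points against the arrival price.

    side is +1 for a buy, -1 for a sell; a positive result is a cost.
    """
    return side * (avg_exec_price - arrival_price) / arrival_price * 10_000

# Hypothetical buy order: arrival mid 50.00, average fill 50.043.
print(round(shortfall_bps(+1, 50.00, 50.043), 1))   # → 8.6
```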

One approach to the difficulty of interpreting performance metrics is to compare each of your order’s outcomes to some kind of pre-trade estimate of IS and risk, based on characteristics of your order such as the instrument traded, percentage ADV etc. This may work fairly well when looking at the relative performance of brokers for your own flow, where you can take relative order difficulty into account. However, in terms of your overall, market-wide TCA performance, how can you be sure that the pre-trade estimates you’re using are at the right absolute levels? Many pre-trade estimates themselves come from broker models. How realistic are these for all brokers’ trading styles, and how can you be certain that they’re not over- or under-estimating costs relative to the market as a whole, thus giving you a false picture of your real performance?

To get an idea of how well your orders are performing versus the market, you need to compare your own performance to orders done by other buysides.

The principal difficulty with any kind of peer comparison lies in the fact that you will be comparing your orders with orders from many different buysides, each with different investment styles, in different types of instruments, given to different brokers with different algorithms. So if you compare your orders to some kind of ‘market average’, how meaningful is it? Who exactly are your peers?

For the results to be meaningful and believable it’s necessary to be open on how exactly peer comparisons are constructed so as to be sure that we’re comparing like with like.

Order similarity metrics

The starting point in any type of peer analysis is that your orders should be measured against other ‘similar’ orders. But how do we measure similarity?

Consider an order to buy 6% ADV of a European mid-cap stock, starting at around 11am and finishing no later than end of day. Apart from the basics of the order, pre- or post-trade we can also determine many other details such as: the average on-book spread of the stock being traded, the price movement from open of the trading day to start of the order, the price movement in this stock the previous day, the annual daily volatility of the stock, the average amount of resting lit liquidity on top of the order book for this stock and the number of venues it trades on, etc. It’s easy to come up with a list with many different potential ‘features’ that can be extracted and used to characterise an order.

Each of these features may or may not be helpful in determining how ‘easy’ it might be to execute an order at a price close to the arrival price. Some features make intuitive sense. Trying to execute 100% ADV in a highly volatile stock with a wide spread is likely to be much more expensive and risky than executing 0.1% ADV in a highly liquid stock with tight spreads and little volatility. But how relatively important might yesterday’s trading volume in the stock, or market beta be in predicting trade costs? Which are the best features to use?

Assume we’ve come up with 100 different potential features that might help characterise an order. If we’re designing a similarity metric that uses these features to find similar orders to compare ourselves to, we need to do one or more of the following:

• Identify which amongst the 100 features are best at predicting outcomes such as Implementation Shortfall (Feature Selection).

• Combine some of our selected features to reduce any data redundancy or duplication of features that tell us basically the same thing (Dimension Reduction).

• Come up with a weighting of how important each remaining feature is when looking for similar orders (Supervised Statistical Learning).

The good news is that there are decades of academic research on how to do most of the above. Methods such as stepwise regression, principal components analysis, discriminant analysis and statistical learning (KNN, Support Vector Machines, Neural Nets) all lend themselves well to this type of analysis.
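As a loose sketch of how the first two of those steps might look on synthetic data (the features, coefficients and cut-offs here are all invented for illustration, not LiquidMetrix’s actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: 500 orders, 10 candidate features (pct ADV, spread,
# volatility, ...), and an observed implementation shortfall in bps.
# Only features 0 and 1 actually drive the outcome here.
X = rng.normal(size=(500, 10))
is_bps = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

# Feature Selection: rank features by |correlation| with the outcome.
corr = np.array([abs(np.corrcoef(X[:, j], is_bps)[0, 1]) for j in range(10)])
selected = np.argsort(corr)[::-1][:4]            # keep the 4 most predictive

# Dimension Reduction: PCA on the selected features via an SVD.
Xs = X[:, selected]
Xs = (Xs - Xs.mean(axis=0)) / Xs.std(axis=0)     # standardise first
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
components = Xs @ Vt[:2].T                       # project onto top 2 components

print(sorted(selected[:2].tolist()))             # → [0, 1]
```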

 

[Figure 1: Distribution of client implementation shortfall outcomes compared with market-wide outcomes for similar orders]

 

The upshot of all this is that for each buyside order we wish to analyse, we produce a similarity measure that can be used to find, from a market-wide order-outcome database, a set of the, say, 100 most-similar orders done by other buysides. They do not have to be necessarily orders done on the same stock, just on the same type of stock. Assuming we’ve done our analysis well, the TCA outcomes of these orders should represent a good target to compare our order with. If we do this for each of our orders, we discover how well we’ve really done versus ‘The Market’.
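The retrieval step just described can be sketched as a weighted nearest-neighbour search. The database, feature weights and order values below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical market-wide database: 10,000 peer orders, each described
# by 5 standardised features, plus the IS outcome realised for each.
peer_features = rng.normal(size=(10_000, 5))
peer_is = 4.0 + 2.0 * peer_features[:, 0] + rng.normal(scale=3.0, size=10_000)

# Feature weights, as if produced by a supervised learning step.
weights = np.array([2.0, 1.0, 1.0, 0.5, 0.5])

def most_similar(order, k=100):
    """Indices of the k peer orders nearest in weighted feature space."""
    dist = np.sqrt((weights * (peer_features - order) ** 2).sum(axis=1))
    return np.argsort(dist)[:k]

my_order = np.array([0.2, -0.1, 0.0, 0.5, 0.3])   # our order's features
idx = most_similar(my_order)
benchmark = peer_is[idx].mean()                    # peer IS for similar orders
print(f"peer benchmark for this order: {benchmark:.1f} bps")
```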

An example of peer analysis done well

What might this kind of analysis look like in practice? Figure 1 shows one way of presenting peer results. A set of client orders has been matched, using a similarity metric as described above, against a market-wide order database. We’re looking in this example at implementation shortfall (IS); one can look at other TCA metrics such as VWAP or post order reversions in exactly the same way. Using the matched orders we’re able to see the distribution of our buyside client IS outcomes with market-wide IS outcomes (green).

This tells us how well both the IS and risk (standard deviation of outcome) of our client orders compare to the market average for similar orders. Based on the number of orders analysed and a measure similar to a ‘t test’ we can then also translate the differences in performance into a significance scale, from 0 to 100, to qualify how much better or worse than average a client’s performance is.
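A minimal sketch of such a significance scale, using a large-sample normal approximation in place of the t distribution; the mapping and the numbers are illustrative, not LiquidMetrix’s actual methodology:

```python
import math

def significance_score(client_is, peer_mean, peer_std, n):
    """Map a client's average IS vs. the peer average onto a 0-100 scale.

    50 = in line with peers; near 0 = significantly better (lower cost);
    near 100 = significantly worse. Based on a one-sample t statistic.
    """
    t = (client_is - peer_mean) / (peer_std / math.sqrt(n))
    # Normal CDF as a large-sample approximation to the t distribution.
    p = 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return round(100 * p)

# Hypothetical: client averages 6.2 bps over 400 orders; peer average
# for similar orders is 8.0 bps with a 12 bps standard deviation.
print(significance_score(6.2, 8.0, 12.0, 400))   # → 0 (much better than peers)
```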

Conclusion

A fundamental question buysides want answered by any kind of top-level TCA analysis is how the costs associated with their orders compare to their peers’. The challenge with any kind of peer analysis is being certain that you really are being compared fairly to other participants. The solution is to ensure that any peer comparison compares orders only with like orders, rather than with orders from similar companies, preferably drawing on a large database of orders from many different buysides.

 

© BestExecution 2014


 

News review : Buyside survey

BRIDGING THE BUYSIDE GAP.

Asset managers are struggling to keep pace with their more demanding client base and need to broaden their skillset, according to a global survey of 300 senior executives at asset management firms conducted by State Street.

The report exposed a gap between the needs of their end clients and asset managers’ expertise particularly in areas such as portfolio construction, the ability to assess and monitor risks across portfolios and knowledge of increasingly complex compliance and regulatory requirements.

One reason behind these skills shortages is the increasing popularity of multi-asset solutions, which encompass a diverse range of asset classes from real estate and commodities to hedge funds and infrastructure. They have gained traction since the financial crisis, helped by the low interest rate environment.

The study found that two-thirds (67%) of asset managers surveyed identified this type of investment as most likely to drive growth over the next three years. However, 74% of respondents agreed that few asset managers have the capabilities required to thrive in a multi-asset investment world.

Another factor relates to asset managers casting their investment nets wider, into new markets and geographies, to generate returns.

This has required them to not only navigate the regulatory demands of different jurisdictions but also identify the right product and distribution channels for each new market and deliver the requisite levels of transparency to both regulators and investors.

“To address these skills shortages, asset managers plan to augment their expertise through recruitment, while also developing their existing employees’ capabilities,” said Joerg Ambrosius (above), head of asset manager solutions for Europe, Middle East and Africa, State Street.

“Over the next three years, more than half the asset managers in our survey (54%) plan to bring in new talent, while two-thirds expect to invest in training for their current teams.”

© BestExecution 2014


News review : Post-trade

LME CLEAR MAKES ITS DEBUT.


The London Metal Exchange, the world’s oldest and largest market for industrial metals, has launched its own clearinghouse. LME Clear will initially clear all trades on the exchange and those matched on the OTC matching service, but the long-term plan is to add the renminbi as collateral by the end of the year.

The central counterparty, which has been in the works for three years, is a key plank in Hong Kong Exchanges and Clearing’s (HKEx) £1.4bn takeover of the group 18 months ago. The aim is to exploit the opening of China’s capital account and develop infrastructure capable of trading and settling offshore renminbi. Asia accounts for about a quarter of trading on the LME’s electronic system.

The exchange has now migrated away from third-party provider LCH.Clearnet, with all of its members’ positions moved to LMEmercury, the risk and clearing system behind the LME Clear business.

On the technological front, LME Clear will bring real-time, multi-asset clearing to the exchange (where clearing takes place within a few hours of a trade), by employing the core technology of long-time partner Cinnober Financial Technology, which described the move as a “paradigm shift.” According to Cinnober’s CEO Veronica Augustsson, “In the future the financial industry will look back on this period and talk about before and after real-time clearing.”

Trevor Spanner, chief executive of LME Clear, said that the project “would be a significant contributor to the bottom line”. This could appease investors who felt HKEx’s price tag was too high – a hefty multiple of 180 times the LME’s final year of earnings as an independent company, according to analyst estimates at the time.

 

© BestExecution 2014

