
Market opinion : Regulation : Jannah Patchay


Jannah Patchay

WHERE THERE’S A WILL…

Jannah Patchay of Agora Global Consultants considers whether the current raft of regulations will be fit for purpose, and whether some might even create loopholes for less scrupulous market participants.

Financial institutions of all shapes and sizes – investment banks, brokers, exchanges and fund managers alike – are awash in a sea of regulation, with tidal waves of new rules coming in from regulators in all global regions. The drivers behind this regulatory deluge are well known. Some, such as much of the Dodd-Frank Act’s Title VII rules, EMIR, and the impending second instalment of MiFID, as well as the many new Asian regulatory reporting regimes, seek to fulfil the 2009 G20 commitments to OTC market reform in the matters of transparency, market efficiency, risk management and integrity. Others, such as the updated Capital Requirements Directive (CRD IV) in Europe, and the Federal Reserve’s recently finalised prudential standards for US and foreign banks within its jurisdiction, require assurances of the prudential health and stability of financial institutions, and their ability to recover (or at least to fail gracefully) in the event of large losses or unfavourable market events.

In the background to all this looms the spectre of the financial crisis of 2007-2008 and the ensuing credit crunch. But will new regulation actually prevent another financial crisis? Does it add sufficiently useful weapons to the arsenal of regulators to enable them to identify and eliminate sources of systemic risk at an early enough stage to prevent a future crisis? Or does it in fact create an environment in which new and previously unimaginable crises might occur, with potentially devastating consequences for the increasingly intertwined global financial markets?

Take for example central clearing, the move to which should ostensibly reduce systemic risk by removing counterparty risk from the pool of clearing-eligible OTC and exchange-traded transactions, thus avoiding another potentially catastrophic default by the likes of a Bear Stearns or Lehman Brothers. However, central clearing itself creates a new set of risks. Firstly, risk is concentrated in a few commercial entities, namely the clearinghouses, instead of being dispersed across a network of counterparties. These clearinghouses, although well-capitalised and in possession of default funds to which their members must contribute a fair share, are driven by commercial imperatives, i.e. the need to gain a competitive advantage over their rivals. In a marketplace in which all providers are offering a very similar service, it is reasonable to extrapolate that, at some point, one or more may choose to lower their charges in order to attract business. In this example, “lowering charges” may equate to lowering default funding or collateral requirements, leaning towards taking on precisely the type of risk that they were established to avoid. Secondly, clearinghouses require collateral, most often either cash or high-quality securities. Where will all this collateral come from? Will financial institutions be forced to invent new and potentially hazardous means of converting their less-desirable assets into “safe” collateral? The potential collateral shortfall is estimated by some to be as much as 2.5 trillion US dollars.

The Volcker Rule, embedded in Title VI of the Dodd-Frank Act, originally had a very simple objective: to prevent risky speculation by systemically important banks, thus reducing risk for the global economy. It was off to a rocky start, as no fewer than five US regulators had to come to an agreement on the rule’s final wording, a process that took more than two years. The final version permits limited proprietary trading activity for the purposes of market making and hedging (albeit requiring a plethora of supporting data to prove the intent of these trades), and restricts ownership in private investment vehicles, known as “covered funds”.

Market making is both a lucrative business line for many banks, and a vital source of liquidity, particularly in less liquid securities, for the markets as a whole. Banks often go long or short on securities in anticipation of client demand, as part of their market making activities. In the case of some instruments, this invariably leads to an appreciation or depreciation in the value of the asset while the bank holds it, due to market movements. The Volcker Rule’s stringent data reporting requirements are designed to look for precisely this sort of change in the value of an instrument between the time it is acquired and the time it is passed on to a client – i.e. detection of any financial gain for the bank itself, other than the spread added to the client price. Some banks, due either to the onerous reporting requirements or to the difficulty in proving the validity of their business, could decide to exit the market making business, thus ultimately reducing liquidity in some instruments, to the potential detriment of the market as a whole.

It has also been noted by some pundits that the Volcker Rule’s restrictions on bank ownership of covered funds (funds whose activities would normally be classed as equivalent to those of an investment bank, were they to be publicly owned or owned by less qualified investors, e.g. hedge funds) may lead to greater creativity on the part of banks when it comes to taking a stake in a potentially profitable fund. For example, the restrictions apply only to equity holdings, not to debt issued by the fund. A fund could structure its debt issuances in such a way that the cashflows mirror profits and losses, thus mimicking an ownership stake to all intents and purposes, and banks, whilst staying within the bounds of the Volcker proscription on fund ownership, could legitimately acquire these debts.

These are but a few examples of the ways in which new regulation might not entirely meet its objectives, and might instead lead to some rather unexpected and ultimately undesirable consequences. However, the situation may not be as gloomy as it initially appears. Centralisation of risk in clearinghouses and the collateral shortfall are increasingly under discussion at an industry level, with the risks being recognised and analysed more widely. The Volcker Rule has still to be refined and expanded upon by all five regulatory agencies involved, of which some, such as the CFTC and SEC, may choose to issue detailed guidance more specific to, and in the interests of, preserving stability within their own markets. There is still time for regulators to address these concerns before another crisis occurs, if not in the current rules then in future iterations.

©BestExecution 2014
 

Post-trade : Trade reporting : PJ DiGiammarino

PJ DiGiammarino

The OTC reporting marathon: is there a finish line?


EU and US taxpayers have been scratching their heads in recent weeks as regulators made it painfully clear that years and billions have been squandered with little to show for it.

The politicians that gathered in Pittsburgh were quite explicit – they want OTC transparency. Did they expect that, nearly five years later, we would still be pumping billions into creating mountains of information that is so untrustworthy, incomplete and flawed that it is of little value? Doubt it. Did they think Google would come up with a ‘magic box’ to sort this out? Perhaps (God bless them). Did they want to create a lot of IT jobs for the boys – well, not sure they caught that one.

The point is that everyone now expects that regulators are armed with the information needed to mitigate future systemic risks and protect against market abuse. The reality is light years away from that belief.

Questions over data quality

Just as the EU has begun the process of mandatory reporting of derivatives data under EMIR, the US has begun its review of the ongoing swaps reporting process under Dodd-Frank. The results are not particularly pretty. In a 10 February conference call, the 11th Meeting of the Technology Advisory Committee made clear that, while initial progress in swaps data repository (SDR) reporting has been made, the Commission has “a long way to go” on many reported asset classes.

On the other side of the pond, ESMA was getting ready to make it worse. Given that their chosen reporting strategy is exponentially more difficult, few are hopeful for a better outcome. Industry professionals have been scrambling to produce reports with incomplete and shifting requirements for months – although perhaps fewer months than required. They were expecting the inevitable last-minute problems with servers falling over and bugs, but they were not ready for the game to change at 18:00 the day before.

Just hours before the 12 February trade reporting deadline, ESMA released its final round of Q&As for firms. The guidance changed specifications for how a trade is uniquely identified, what code to use to classify products and how multi-leg transactions should be reported. Sound like a few tweaks? We think not. In addition, confusion still reigns in some cases over ‘who’ should report. Jurisdictional questions were highlighted following a communication by a leading Dutch clearing bank notifying its clients that (under certain caveats) they would be exempt from their reporting obligations because of a Dutch legal concept known as ‘lastgeving’, which was an interpretation upheld by their national regulators, De Nederlandsche Bank and Autoriteit Financiële Markten.

These are serious issues that need proper debate. And it isn’t just the consistency of reported trade data that is in question. It is also the regulators’ ability to process that data.

Can regulators process the data?

In a recently published statement of dissent, CFTC Commissioner Scott D. O’Malia bemoaned the lack of resources being allocated to the Commission’s IT budget. In essence, he suggested that the increased trade reporting requirements brought about by Dodd-Frank may not actually improve market surveillance or reduce risk, if the CFTC did not have the IT resources to decipher the data.

In particular, O’Malia noted that the current 2015 budget figure would be insufficient to support plans to improve the quality of swaps data presently being collected, and to “utilize this data for critical risk analysis and surveillance”. He also suggested that the Commission needed a database that could analyse exposures across different classes of derivatives – including futures, swaps and options. Finally, he noted a requirement to extend the CFTC’s data collection infrastructure to include order, or quote, messages, which would imply a more than tenfold increase in the volume of data it currently collects.

Without investments in those kinds of IT projects, O’Malia said the Commission would struggle to meet its goals of ensuring market integrity and protecting the market (and general public) from “fraud, manipulation, abusive practices, and systemic risk”.

And it isn’t just Commissioner O’Malia who is complaining about regulators’ ability to aggregate and make sense of trade data. The Financial Stability Board’s consultation paper on aggregating OTC derivatives data, which closed for comment at the end of February, aims to propose solutions to the challenges currently faced by regulators in aggregating trade data for more meaningful analysis.

Those challenges are likely to be even more evident in Europe. If the CFTC is complaining that it doesn’t have the funds to make sense of its swaps data, you might wonder how well ESMA is deciphering the data it started collecting last month, with ESMA’s IT spend just a fraction of the CFTC’s. In 2014 it budgeted €1.4m for Information and Communication technology spend, with a further €4.75m set aside for its ‘Collection of information: IT project’. In comparison, the CFTC’s proposed IT budget for 2015, which Commissioner O’Malia clearly sees as being insufficient, allocates $50m to IT services and a further $21m to fund full-time IT staff.

Add to that the fact that ESMA is likely facing even greater issues with aggregating its swaps data – given the double-sided reporting regime in Europe and the ratio of six trade repositories (TRs) to the three swaps data repositories (SDRs) in the US – and it’s difficult to believe the data currently being reported under EMIR is being put to much use. That said, perhaps a like-for-like comparison between ESMA and the CFTC is unfair, as National Competent Authorities will also be helping to police EMIR trade data. But exactly how the collaboration between the various NCAs will work, and what level of IT spend each is investing in retrieving, cleansing and analysing trade data, is really not known.

Where is the finish line?

Whether it means identifying specific trading patterns for further investigation or more gradual build-ups of exposure that could prompt systemic risk concerns, it would seem there are some serious question marks around whether regulatory reporting will help to make the markets safer. We know that the regulatory reporting race cannot be stopped, so deciding on the best path forward and defining the end-goals will be crucial to the success of this initiative. Clearly, it would be easy to fine market participants for incorrectly reporting. But surely that is not a worthwhile aim in itself.

Rather, the market should be asking itself some serious questions. Will the politicians re-engage and audit what has happened? Will real regulatory leadership emerge? Can the laws be modified and the standards worked out now that the race is on? The answers are unknown but, if we don’t find them one way or another, this will be an awfully long run.

PJ is the founder and CEO of independent analysts JWG-IT Group.

©BestExecution 2014


Viewpoint : Post-trade : Clearing & settlement : Denis Orrock

GBST, Denis Orrock

New frontiers, new opportunities.

The new market frontier for clearing and settlement operations?

Investment banks operating in emerging and frontier markets across Eastern and Central Europe, Asia, the Americas and Africa must create hubs for their clearing and settlement processes to make market entry cost effective, writes Denis Orrock, CEO, GBST Capital Markets.

The search for better returns, the emergence of global indices and opportunities in developing regions are increasingly leading large investment banks to look to the emerging economies. Perhaps the greatest challenge for these banks is how to cost-effectively service such markets, given the operational infrastructure and market connectivity that is frequently lacking in these regions.

Core jurisdictions such as the United Kingdom, Germany, France, Hong Kong, Japan and the United States are very well served for clearing and settlement infrastructures – usually developed by highly skilled in-house teams. Our experience over the last five years in Asia has shown the need to widen the investment net beyond Japan, Hong Kong and Singapore; and in Europe beyond the major financial territories into other, often faster-growing, markets to the east.

When investment banks extend into non-core markets, the processing and market connectivity practices are more often than not very different. The banks invariably need to purchase a local vendor system (due to a lack of resource availability, time and market knowledge) to be in a position to handle all the local rules (including specifics such as books of records rules). Yet many of these markets suffer from a lack of system vendors that are capable of providing a scalable solution, or one that can accommodate clearing and settlement rules in more than a single country.

The dilemma for banks, whilst they are only processing a low number of trades in these markets, is how to cater for all of the non-core markets in their region without buying a different system for each and incurring huge expense. The only viable solution is to implement one central hub that has multiple market interfaces for clearing and settlement processing.

Europe and T2S

In Europe we have a complex scenario: 27 countries, many with their own unique requirements. In terms of Eastern Europe, it is the likes of the Czech Republic, Russia, Poland and Turkey that are seeing a significant increase in trading volumes. Outside of these countries some of the investment banks are also offering coverage of the smaller Eastern European countries as a ‘loss leader’ in order to win client portfolios that include the major markets mentioned above. In such situations, it is imperative to reduce operational costs and maximise efficiencies.

Further complicating market expansion is T2S. TARGET2-Securities is a new European settlement engine that aims to harmonise much of the highly fragmented securities infrastructure across European markets. The project is scheduled to go live starting in 2015. The fundamental objective of the T2S project is to reduce the costs of European cross-border securities settlement, within the euro area and across participating non-euro countries, as well as to increase competition and choice amongst providers of post-trading services.

In our experience, many of the big investment houses are already hard-pressed handling the implications of T2S; therefore in delivery and operational terms they are too stretched to accommodate the nuances and problems associated with entering emerging markets. Currently many banks outsource the execution and settlement of trades in these non-core markets through a local market participant, such as a local broker or custodian. This reduces the margin for the bank, feeds revenue to potential competitors, lengthens processing timeframes and increases risk. One hub that can cater for all these frontier markets and provide scalability as transaction volumes increase is far more palatable.

Major obstacles involved in setting up a hub

There are three key requirements when attempting to set up a clearing and settlement hub in emerging and frontier markets:

• Understanding the market requirements – these requirements are extremely likely to change; therefore adaptability is key to any clearing and settlement hub.

• Messaging knowledge – understanding the messaging requirements and building these out. This involves not just knowledge of how to send the messages out but also being capable of understanding the replies as they are received.

• Interfacing with other settlement engines and in-house systems. Investment banks may have several settlement engines feeding into one market hub.

Benefits of finding the right hub strategy

The benefits of such an endeavour are clear:

• Delivery of a scalable solution, providing future-proofing for volume growth and new market additions through a single solution.

• A significant reduction in the investment bank’s time to enter a new market.

• Economies of scale: with multiple clients, vendors providing hubs have several entities financing the build of the system, as opposed to banks bearing all the development and maintenance costs of an in-house build themselves.

The key is the ability to cope as trade volumes increase, whilst keeping operational costs to a minimum while volumes in these markets remain relatively low.

Conclusion

Investment banks have little choice but to adapt to a new type of operating model in order to effectively enter frontier markets for two key reasons.

Firstly, as the flow of capital becomes more widely distributed (outside of the traditional global markets) and governments in emerging economies increasingly become aware of the importance of having a sovereign capital market infrastructure that can help corporates to reduce their reliance on the traditional sources, the move toward clearing and settlement hubs will accelerate.

Secondly, chasing index performance is having a substantial effect on asset allocations, helping to determine where the fund flows go. As countries are reweighted, this will inevitably affect the direction of capital and drive investment to the frontiers.

 

© Best Execution 2014
 
 
 
 

News review : FX & Derivatives

Best execution failings.


Issuing a Market Watch before the conclusion of a thematic review is unusual. The Financial Conduct Authority’s decision to highlight best execution issues in its February 2014 Market Watch 45 therefore deserves serious attention, says Anthony Rawlins, senior manager, Financial Services Compliance Consultancy, Moore Stephens LLP

February 2014 Market Watch 45 suggests the FCA has uncovered consistent failings in a substantial number of firms, and that others not included in the review could well be making similar mistakes.

Rather worryingly, some firms may be unaware that they are in fact covered by the ‘best execution’ obligation – the requirement to take all reasonable steps to obtain the best possible results for clients (retail and professional) on a consistent basis when executing orders on their behalf. In particular, the FCA notes that brokers in certain markets, such as regulated contracts-for-difference providers, spread betting firms and those offering rolling spot forex CFDs, may be failing to recognise the full extent of the best execution obligation which applies to them.

As the FCA spells out, investment firms are required to take into account the characteristics of their clients and a range of factors, including price, costs, speed, and likelihood of execution and settlement. Firms that execute orders and decisions to deal should establish an order execution policy and monitor the effectiveness of their order execution arrangements, correcting any deficiencies as necessary. Firms that receive and transmit orders for execution should have a policy on how they select executing brokers.

Best interests

So what are firms doing wrong? Some may not be taking proper account of the sourcing of pricing data. Firms need to make sure they look to appropriate firms in the market when comparing their pricing on a deal, and that they do not perform the comparison in such a way that the results are skewed in favour of the deal. In this context the concept of ‘better than best’, which pre-dates the FCA, still has relevance to firms today. It enshrines the concept that whatever the firm does must be towards the best possible outcome and in the best interests of its clients.

Some firms may also be failing in their best execution obligation in relation to any positive price movements that occur between the submission of an order and its execution. These must be passed on to the client. They cannot be retained by the firm, even if the deal price is within an approved range. Making the range and keeping the change is not acceptable.

It is important to note that specific client instructions only satisfy best execution in relation to the part or aspect of the order that those instructions cover. A firm should not induce a client to give instructions that would circumvent its best execution duty.

Achieving improvement

If, as seems likely, many firms are failing to comply with at least some aspects of the best execution requirements, there is almost certainly widespread scope for improvement. All firms, whether dealing on the wholesale market or actually trading themselves, need to ensure that they have appropriate best execution arrangements in place. When reviewing this do they actively question those arrangements and the extent to which they are meeting best execution requirements? What are the trigger events that would cause the firm to look at its order execution policies and procedures? Those trigger events will inevitably be affected by the nature of the transactions and the types of trades being undertaken.

The quality of management information on trading activity is important here. For example, the timing of trades and volumes are key factors: if a large number of trades are made at a particular point in time, this could be a trigger event to raise a query on why this is happening. The explanation may be perfectly reasonable, but it needs to be established. If an individual is deliberately holding back on client executions, the firm needs to be aware of this and take any necessary action.

Many trading firms will have real-time monitoring of their trading activity, using a variety of software applications. For them, a systems audit may be needed to check that the software and systems are still working as they should. In particular, firms should conduct an interrogation of the system ‘through the system’, not just ‘around the box’. The risk of error is typically higher with bespoke software as opposed to off-the-shelf packages.

FCA focus

When reviewing firms, the FCA will be looking to assess overall compliance with the overarching best execution requirements. This includes, for example, trying to find the best possible price on a financial instrument. The FCA will therefore be looking at factors such as the products being used and types of trade being undertaken, the comparisons the firm is using in the marketplace, and the level of transparency around price comparison data. The FCA will also look at the levels of fees charged for clearing and settlement, market depth and liquidity, the potential speed of execution, and price momentum before and during execution. The regulator is also likely to assess how proactive the firm is being in terms of continually reviewing its own actions.

A recurring theme around all these actions is the management of risk. Management information and systems audits are both important aspects of managing risk in relation to best execution failings. But the FCA will also be looking at how firms address conduct risk. For example, do the firm’s training programmes contain ethical dimensions? Has the firm recruited the right people for their roles? Do the firm’s systems and processes ensure that employees are not placed in any situations where there is a conflict between what is best for their own wealth and the client’s wealth? How does the firm identify honest mistakes, which can occur even when employees have the best intentions? How does the firm respond in such situations? Does it provide additional training to individuals as necessary?

In summary, achieving best execution compliance is a complex challenge. Most firms that assess their own performance are likely to find scope for improvement. Given the FCA’s focus on this area, taking action to assess and improve standards is highly recommended.

©BestExecution 2014


Viewpoint : Post-trade : Trade reporting : Nic Boatwright

Early days.

Prior to February 12th, the EMIR Reporting Start Date for derivative contracts, many observers were pessimistic; one side predicted wholesale non-compliance, whilst the other predicted an apocalyptic meltdown of trade reporting platforms due to high volumes and data quality issues. Now that a month has passed without catastrophe and the dust is starting to settle, Nic Boatwright, managing director of REGIS-TR, offers us a cautiously optimistic view for the journey ahead.


The successful Trade Repository authorisation in November 2013 was the trigger for a number of customers, although we already had a very strong pipeline, evidenced by high levels of sign-up to our test environment. Levels of awareness differed across client size and segment; previous postponements (and expectations of a further one) meant that even some of the more prepared institutions were not ready to commit until this point. That gave them 90 days from our registration until Reporting Start Date (RSD). The number of customers using our test environment multiplied six-fold following the November announcement.

Our test environment, which had been in place since 2012, was not the only groundwork we had done. REGIS-TR was also the first trade repository to have contracts and published fee schedules in place. During the course of 2013, executive representatives from REGIS-TR attended 80+ conferences, roadshows and trade events across a number of different sectors and locations to try to keep this on people’s list of priorities but, despite our best efforts, there was still significant compression in account opening applications.

Those well-prepared, high volume customers completed their on-boarding in good time but there were a large number of (mainly) corporate customers coming to us very late in the day, which presented challenges. More than 80% of our entire client base only submitted their account opening documentation within the five weeks prior to the reporting start date. The document pack included things like articles of association, signature specimens and user preference profile information, all of which had to be diligently validated and set up before they could start reporting in the production environment.

There were different reasons why customers came late. Some of the smaller corporates, who are not well represented in industry bodies and don’t have the luxury of large legal and compliance teams, simply weren’t aware that EMIR applied to them. For some of the larger buyside clients, there was an expectation that the implementation of the regulation would be delayed or that their banks would simply take care of this on their behalf. Also, our sales performance was strong and the subsequent demand was at the top of our expectations. In hindsight, one may ask why we didn’t set a January deadline to allow us clear time to process applications after a certain cut-off point; however, we took the approach that we wanted to keep our doors open to try to help as many customers as possible to be EMIR compliant from February 12th. This was achieved for the overwhelming majority, although I won’t pretend that for some it wasn’t taken to the wire.

When RSD arrived, we had every reason to be confident regarding our system performance. Indeed, everything held up robustly, as we had planned, and customers now report on average more than 6,500,000 trades and more than 1,500,000 positions on a daily basis. A mirror-image test environment had been in place since 2012 and we also spent a lot of time consulting with a number of our biggest customers to get a firm idea of capacity and volume benchmarks. As was to be expected, we experienced a minor processing slowdown at peak usage on RSD, but this did not alter performance materially. In any case, we undertake real-time system performance monitoring and the platform is scalable for volume intraday.

Whilst we are proud of this achievement, we are also acutely aware that this was only the start of the journey, with collateral and valuation reporting and potentially additional reporting obligations for the likes of REPO to come. Now the focus is rightly shifting from account opening to data quality and, specifically, in helping our clients fine-tune their reporting so as to achieve the highest level of automation.

Work is now moving full steam ahead on any teething problems our customers are experiencing with the new regime – their questions tend to focus on rejection issues, pairings and reconciliation. In terms of rejections, we are finding that customers with the largest number of rejections also manage to report significant numbers without rejection. We are working with them to identify the glitches and, hopefully, some tweaking should see the majority of rejections cease. Similarly with pairing, we have seen some issues as a result of the legal entity identifier regime (LEI – whereby each counterparty to a trade must be identified by an LEI) not yet being fully fleshed out – however, this is an interim issue that we are working to resolve by utilising other available data to pair. Longer term, the LEI initiative is very positive since it helps to centrally identify entities and track trades, and as of today, roughly 90% of our customers are using LEIs.

The reconciliation differences will, though, be harder to resolve. In terms of data quality, customers must first have the data right in-house. Understandably, a lot of them did not have time to focus on perfect mapping from their own, potentially multiple, trade capture and matching platforms at the level of detail required across 85+ fields prior to RSD. They took the pragmatic approach that, first and foremost, actually reporting was the main concern.

But a more fundamental issue has been the lack of clarity in some areas like the methodology for UTI (Unique Trade Identifier) or UPI (Unique Product Identifier) allocation, meaning that there is not currently an industry consensus on the right approach. For UTIs, counterparties will need to agree bilaterally how they will be generated, and who will generate and who will consume. For UPIs, both parties may in fact be reporting “correctly”, but because – in the absence of a common standard protocol – they use a different identifier, a reconciliation break occurs.

Whilst Inter-TR reconciliation has started working in earnest over the last weeks and we have now reconciled large numbers of transactions on an automated basis with our TR peers, exact matching for all fields – i.e. perfectly matched across all fields on both sides – remains a real challenge and we still have far to go. The visibility of the Inter-TR reconciliation to the regulator should ensure that discussions surrounding a standardised approach continue and that a sensible approach is adopted to prioritise which of those fields are fundamental and to allow matching tolerance where appropriate. It will get better, but it is going to take time and require determination and co-operation. Ongoing dialogue with ESMA and the wider market will, I’m sure, result in “normalisation” of market practice.
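As a purely illustrative sketch of the pairing and matching mechanics described here, the short Python fragment below pairs the two counterparties’ reports on their shared UTI and then compares a handful of critical fields, allowing a tolerance on numeric ones. The field names, tolerances and report layout are assumptions for illustration, not the EMIR schema or REGIS-TR’s actual matching logic; the point it makes is simply that identical economics reported with different product identifiers will surface as a break.

# Illustrative sketch only: field names, tolerances and report layout are assumptions.
FIELD_TOLERANCES = {"notional": 0.01}   # allow tiny rounding differences on numerics
CRITICAL_FIELDS = ["product_id", "notional", "currency", "maturity_date"]

def pair_reports(side_a, side_b):
    """Pair the two counterparties' reports on their shared UTI."""
    b_by_uti = {r["uti"]: r for r in side_b}
    pairs, unpaired_a = [], []
    for report in side_a:
        match = b_by_uti.pop(report["uti"], None)
        if match is not None:
            pairs.append((report, match))
        else:
            unpaired_a.append(report)
    return pairs, unpaired_a, list(b_by_uti.values())

def reconcile(pair):
    """Return the critical fields that do not match, with tolerance on numeric ones."""
    a, b = pair
    breaks = []
    for field in CRITICAL_FIELDS:
        va, vb = a.get(field), b.get(field)
        tol = FIELD_TOLERANCES.get(field)
        if tol is not None and va is not None and vb is not None:
            if abs(float(va) - float(vb)) > tol:
                breaks.append(field)
        elif va != vb:
            breaks.append(field)
    return breaks

if __name__ == "__main__":
    # Same trade, same economics, but the two sides classify the product
    # differently - the kind of break a missing UPI standard produces.
    side_a = [{"uti": "UTI-1", "product_id": "IRS-EUR-5Y", "notional": 10000000.0,
               "currency": "EUR", "maturity_date": "2019-02-12"}]
    side_b = [{"uti": "UTI-1", "product_id": "IR-SWAP-EUR-5Y", "notional": 10000000.0,
               "currency": "EUR", "maturity_date": "2019-02-12"}]
    pairs, only_a, only_b = pair_reports(side_a, side_b)
    for p in pairs:
        print(p[0]["uti"], "breaks on:", reconcile(p) or "none")

Run as-is, the example reports a break on product_id only: both sides agree on notional, currency and maturity but classify the product differently, which is exactly the kind of mismatch a common UPI standard would remove.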

There is a prevailing view that the implementation of EMIR should have been further delayed to iron out such issues in advance. I’m not sure I really subscribe to this. Firstly, many of our customers tell us that they are jumping straight from implementing one piece of legislation to the next, based on the urgency of the implementation dates. On this basis, a postponement to RSD would likely not have resulted in significantly more time being spent on preparing for it; rather the time would have been spent on the next hard regulatory deadline. Secondly, with something as complex and involving as many different market participants as this, there was always going to be an element of having to “learn by doing”, although a firm endorsement of a universal standard for LEIs, UTIs and UPIs would undoubtedly have helped. I think it’s also important to take a step back and look again at the wider context: ESMA was a new regulatory body, tasked with the implementation of a new, unpopular piece of regulation that applied in large part to a sector hitherto without reporting obligations and very reluctant to embrace it, with only the infant Dodd-Frank Act, itself beset with challenges, by way of a reference point.

Looking to the immediate future, the milestones for back-loading (three months and three years) and collateral and valuation reporting (six months) should be easier to achieve now that the groundwork has been done. As the number of LEIs issued suggests, not all counterparties have started reporting yet and we continue to see a steady volume of account opening requests coming through. Furthermore, some clients have indicated that they took a pragmatic view in terms of putting a reporting model in place for RSD and, when things are operating on a business-as-usual basis, they intend to revisit their model with a view to their longer-term strategic objectives.

Considering all of this, we are now at a stage where hundreds of millions of trades have already been reported by tens of thousands of market participants. Inter-TR reconciliation has started and regulators have already started to review data. Certainly, it has been difficult and a lot of focussed effort remains to make this as pain-free, mainstream and unobtrusive to the business of trading as possible, but getting to where we are has been no small achievement for everyone concerned. A rocky start has, in my view, been better than no start.

©BestExecution 2014
 
 
 
 

Viewpoint : Market surveillance : Lorne Chambers


The need to be alert.

Lorne Chambers, Global Head of Sales and Account Management, SMARTS Integrity, NASDAQ OMX, talks to Roger Aitken about the market surveillance issues that face brokers, exchanges and regulators.

The market surveillance tools that your organisation provides were originally developed by SMARTS Group, an Australian company acquired by NASDAQ OMX in 2010. Did the financial crisis mark a turning point in the need for your solutions?

In a way the financial crisis did mark a turning point since we’d plateaued a little bit as investment dollars really dried up. We were adding customers at a pretty high rate up until 2007, initially driven by markets becoming increasingly electronic, then, more recently, by surges in the volume of trading messages leading to system replacements such as at the London Stock Exchange in 2006. Additionally, as emerging markets started to mature, they wanted to demonstrate that they had international best practice on surveillance.

From 2008 to 2011 growth was flat. We brought in some big names such as BATS, BM&FBovespa and ICE (IntercontinentalExchange Inc.) but at a slower rate than prior years for the exchange and regulator product (SMARTS Integrity). However, it was a different story for the broker side, and the financial crisis was in many ways a turning point for our SMARTS Broker product. This is what we provide to trading participants, primarily the sellside but also the buyside. The crisis put a great deal of focus on regulatory requirements for trading participants, which explained the uptick seen in brokers coming to us looking for compliance solutions and concerned about reputational risk.

What are the key regulatory drivers behind the requirement for increased market surveillance in Europe and North America? And, are the different regions in step with each other?

In terms of regulations in Europe, the Markets in Financial Instruments Directive (MiFID) and the Market Abuse Directive (MAD) came first. However, MiFID and MAD did not spell out exactly what was expected and were fundamentally principles-based. This contrasts with the European Securities and Markets Authority’s (ESMA) ‘Guidelines for systems and controls in an automated trading environment for trading platforms, investment firms and competent authorities’ (21st December 2011). ESMA’s publication provided more detailed guidance than was contained in MiFID and MAD.

However, it should be noted that our products – for exchanges, regulators and brokers – were already compliant well before those guidelines came out because we had led the way in developing these solutions. And, we certainly saw heightened interest from brokers asking for solutions once it was clearly spelled out what they were meant to be doing under MiFID and MAD.

In the US, it has been more rules-based. FINRA rule 5270 (1st June 2013), for example, spells out in greater detail what you can and cannot do. Despite this distinction all of the alerts and reports that might be run on our solutions in Europe would still run in the US marketplace.

What are the main challenges that the regulatory authorities face in policing market abuse?

The first is getting adequate budget to put appropriate systems and staffing in place, but assuming you pass that hurdle, it’s finding the data points outside the trading data that help solidify your case. We call this going beyond the alert: identifying the smoking gun that helps to answer the question of intention, or whether there was a link between an insider and the trader. We have partnered with Catelus and SMARSH to help customers identify electronic communication networks of relevance, identify the communications of interest, and link these back to a case in the trading data. We think that this will help cut down investigation times for market abuse cases by removing the need to review thousands or millions of emails/communications.

Market abuse has also been in the news. What are the specific issues that your solutions address and across which asset classes?

We help customers identify potential cases of all the forms of market abuse, across asset classes – including equities, derivatives, fixed income, commodity and energy trading. To that end we run over sixty different concepts covering all the different forms of market abuse and market manipulation including insider dealing, front running, spoofing and all items mentioned in the ESMA guidelines. The reports and alerts our systems generate seek to pick up anything that is not within the spirit of the rules.

You already provide solutions to 50 trading venues and regulators, and over 80 brokers across 60 countries. Who else needs to consider implementing market surveillance measures and why? And, why haven’t they done so already?

We are looking at the energy regulators, energy markets, trading participants, the European regulators and the Swap Execution Facilities (SEFs) in the US. Everyone would like to have a better system but it comes down to budgets and inertia to change.

We are seeing a lot of activity in the energy markets. Just having ACER (the Agency for the Co-operation of Energy Regulators) undertaking cross-market surveillance does not relieve all the market participants and all the local regulators of their monitoring responsibilities. A number of regulators are coming to us and we are also seeing an uptick among the trading participants on the energy markets.

Given some of the wording contained in the drafts of MiFID II, there is the potential that European regulators will need to consolidate order book data from fragmented markets and either handle the surveillance of order book data or outsource it to a third party. What we do in Canada with The Investment Industry Regulatory Organisation of Canada (IIROC) – a national self-regulatory organisation (SRO) monitoring trading rules across Canadian trading venues – is provide a solution that takes thirteen separate order books and merges them into a single view. We feel this is a robust model of how to consolidate multiple order books for surveillance that we can explain to regulators in other jurisdictions.

What are your future plans?

Certainly we want to expand into other asset classes. In the US, the Consolidated Audit Trail (CAT) rule presents opportunities. Essentially the CAT project is trying to take every trade in the US – accurately time stamped and client account identified – and put it into one massive repository, with 20-40 billion records a day that will need processing. NASDAQ OMX can utilise that data with its own technology.

Having secured the first SMARTS Broker client in China we are trying to expand there as well. It means on-boarding more local brokers to use the product and securing our first SMARTS Integrity win in China.

©BestExecution 2014


Viewpoint : Regulation : Jurisdictional differences : Silvano Stagni


The regulation dilemma?

Silvano Stagni, group head of marketing & research at IT consultancy Hatstand, points out a number of instances where regulations in different jurisdictions have shown a marked difference of emphasis, and suggests that in a global marketplace there may be unintended consequences.

The capital markets business is global. Technology enables even small players based in one country to trade in markets based elsewhere. The current changes in the regulatory environment do not always take into account the global nature of the business they are trying to regulate even when they use a common starting point to define the rules.

Most of the post-trade rules on derivative reporting and centralised clearing derive from a paper entitled Principles for Financial Market Infrastructures, jointly published by the Bank for International Settlements and IOSCO. However, whilst regulators such as the MAS in Singapore or the Hong Kong Monetary Authority clearly state what should happen when a counterparty based in their jurisdiction trades across borders, EMIR and Dodd-Frank do not mention this.

The regulations that deal with trading do not have a common starting point. For instance, the MiFID review (EU) and Title VII of Dodd-Frank and the Volcker rules (US) do not exactly focus on similar aspects.

The MiFID review is at the beginning of its legislative process and it is likely to become effective in 2016. In the US, Dodd-Frank is already effective as far as derivative trading is concerned and programmes to comply with the Volcker rules must be completed by July 2015.

There is a lot of emphasis on transparency in the EU. The obligation to have most trades executed in an organised venue (Regulated Exchange, MTF, SI and OTF) implies a de-facto ban on crossing networks since OTFs will not be allowed to execute cash equity trades. Dark pool trading will have to find a different way to operate in the MiFID area once MiFID II/MiFIR become effective, beyond some exemptions for very large trades. Proprietary trading, on the other hand, will be relegated to systematic internalisers, but not banned.

Meanwhile, across the pond, Volcker practically banned proprietary trading, with some heavily regulated exemptions for market making and hedging, but there is no real or de-facto ban on dark pool trading. There is no emphasis on forcing equity to be traded on exchanges because the rules on SEFs only apply to derivatives.

This difference may not lead to regulatory arbitrage in the most common understanding of the concept, i.e. looking for the jurisdiction with the least strict rules about something such as regulatory reporting, but it is interesting to see what consequences this difference in emphasis will bring. The Volcker rules prevent US financial institutions from doing outside the US what they cannot do domestically, but there is no similar provision for European institutions in MiFID II/MiFIR. Only time will tell how an industry that has become more dependent on technology than on physical location will react to regulations with a different focus in different jurisdictions.

N.B. This article was written before the vote of the European Parliament on level 1 of the MiFID Review (MiFID II/MiFIR); therefore any reference to implementation timescales is a best guess and may be incorrect at the time of publication.
 
© Best Execution 2014
 
 
 

Equities trading focus : Transparency : John Kelly


Increasing transparency in trading.

John Kelly, chief operating officer at Liquidnet.

The growth of electronic trading has provided many benefits to equity investors by creating new market efficiencies and reducing trading costs. But it has also led to a complex web of venues, liquidity types, and trading strategies that have contributed to a loss of confidence among investors globally. It is in the interest of institutional trading firms and venues to work with regulators to restore confidence and clarity to the global market structure. Indeed, the industry should take the initiative without waiting for regulatory action. One place to start is transparency and control over liquidity interactions and use of customer information.

Since the beginning of regulated markets, institutional investors have needed mechanisms to trade their large blocks away from the “open outcry” of the exchange floor. These institutions – which manage billions of dollars of pension and mutual fund assets on behalf of millions of investors and often trade in block sizes exceeding $1 million – still need to be matched away from the retail market so as not to cause price volatility for the rest of the market and trigger adverse price movements.

With the digitization and liberalization of the financial markets, institutions now have a multitude of trading venues in which to trade. However, each of these venues caters to different, and sometimes contrasting, trading needs. And, while some platforms serve their original intent of providing institutions with a venue to trade large blocks, most have become destinations for algorithms that slice blocks into pieces which are indistinguishable from the trades found in the displayed world of the exchanges. The highly competitive, complex, and interconnected market structure results in block orders being transformed into small orders that move quickly from one venue to another before eventually executing. This results in information leakage, adverse price movement, and an erosion of fund performance. When large institutional investors have big investment ideas, they need to mitigate these risks in order to protect the performance of the fund.

Market fragmentation has also presented challenges to regulators in defining the types of transparency that are – or are not – beneficial to the overall market structure. It is difficult for regulators to implement “level playing field” rules common to all participants and venues while giving consideration to the large number, variety, and co-dependency of trading venues. In such an environment, regulation could easily become a web of micro-managing rules that impede efficiency rather than stand as broad and powerful principles that enhance transparency and improve market structure.

For institutional investors – who have the greatest need for market transparency – the complexity of today’s market structure itself has resulted in challenges in defining and standardizing transparency across different business models. Regulators want greater transparency over the marketplace, which enables a robust and well-functioning market. However, in their quest to increase transparency in European markets, regulators and policy makers have, with their revisions to the Markets in Financial Instruments Directive (MiFID II), potentially created more opacity by proposing volume caps on dark trading. Dark trading reduces slippage costs and is an essential tool in a trader’s arsenal to achieve best execution.

By forcing trading away from these venues, the regulators believe this will direct flow onto the lit markets. No doubt some will, but flow could also move to OTC activity, which would decrease rather than increase transparency. The technical details of how this will be implemented are currently being worked out through the trialogue process and ESMA’s technical work but, without a consolidated tape, it is difficult to understand how this will work in practice in the best interests of all market participants.

On the flipside, we have seen some examples of regulators establishing sound principles that strike the right balance between individual participant protection and fostering broader market transparency and efficiency. One example is FINRA’s new reporting standards for venue trading volumes in the US, which include an appropriate delay of two to four weeks depending on the underlying liquidity available for a particular stock. Regulators in Canada and Australia have been successful in their efforts to improve the quality of dark pools by implementing rules that govern trade size and establish price improvement requirements for dark trading.

Every trading venue is different. The operators of each trading venue are not only best placed to understand individual customer needs and how they interact with liquidity and the various trading technology solutions, but these venues also have a responsibility to their clients to communicate this. One way forward is for the venues to develop tools that give customers ultimate control over how markets are accessed and how their data is managed and utilized.

We recently launched “Transparency Controls”, a simple web-based interface that provides clients with the ability to set their preferences for interacting with different types of liquidity and the different purposes for which we may use their data to create opportunities to trade. The options are detailed and the system sets preferences in real time, providing added control and transparency to our platform.

Through the new Transparency Controls, Liquidnet discloses to Members all sources of liquidity with which they can interact and all services and products that use their trading data. Members are required to opt-in to interact with sources of liquidity outside of our core offering and to opt-in to any service or product that accesses predetermined types of data. In utilizing trading data, Liquidnet will never reveal trading parties’ names or attributes. In addition, Members will benefit from existing or new value-added products or services if they have opted-in to allowing that product or service to access the firm’s relevant data.

We believe this new level of transparency and control goes beyond existing industry standards and addresses the concerns of the institutional investor community as well as regulators.

Still, the industry needs to do more. There needs to be an industry-wide focus on protecting and respecting client information. Trading venues need to be responsible guardians of what is, essentially, proprietary information. They should have clear and transparent principles on how client orders interact with venues and liquidity, and on how client information is used. Importantly, they should give their clients complete control over these parameters through simple, user-friendly tools. Institutions have every reason to expect this, and trading venues have every reason to provide it as a matter of good business practice rather than simply as the result of regulatory pressure.

Both regulators and market participants crave a new level of trust and transparency in the markets. Trading venues themselves must take a more active role in creating it.

© BestExecution 2014


Equities trading focus : Algorithms : Darren Toulson

LiquidMetrix, Darren Toulson

What fuels your algorithms?

Modern execution algorithms are complicated, but Darren Toulson, head of research at LiquidMetrix, offers a few guiding principles for designing execution algorithms that can fully realise their intended outcome.

Venue fragmentation, Dark Pools, Broker Crossing Networks, HFT liquidity providers (and predators) and the many other changes to the execution landscape in recent years all present a mix of opportunity and danger that the smartest execution algorithms should be able to turn to their advantage, or at least navigate their perils.

The job of coming up with smart new ways to seek liquidity, capture spread or generally outperform other execution strategies is most often given to ‘quants’. As the name suggests, quants approach this task by first analysing historical and real-time market data; based upon observations, heuristics and perhaps back-testing, they will then devise a suite of algos with different execution objectives.

Implicit in this modelling is usually a set of market data and statistics that the algorithms have access to at ‘run time’ so the execution models can make optimal decisions.

But this is where things can get messy from a practical perspective.

Consider a simple algorithm that is attempting to replicate or beat a market-wide VWAP benchmark over the trading day: what kind of input data might such an algo require?

• Based on a start of day estimate of today’s intraday ‘market’ trading volume profile, the execution algorithm should try to closely match this profile, executing trades at a fixed percentage of the current day’s trading activity and thus closely match the day’s VWAP. We may or may not wish to include start and end of day auction trading.

• As the trading day progresses, we may find that the actual market volume curve significantly diverges from our start of day estimate. We may need to tweak our target trading curve during the day, preferably based on some kind of conditional dynamic model that predicts what the rest of the day is likely to be based on the day’s trading so far.

• The simplest way of executing would be to ‘aggress’ the lit markets each time we need to trade in order to closely track the target volume profile. However, this would mean ‘paying the spread’ for every trade and will ultimately result in underperforming VWAP by about half the spread. Additionally, even worse performance can result if we aggress more than one level down the order book and there is a price impact followed by short-term mean reversion in the market.

• Therefore most modern VWAP algorithms will try to trade as much as possible in mid-point matching dark pools or use passive limit orders on lit markets. This should improve overall average performance versus VWAP (by capturing some or all of the spread) but as the timing of execution of passive orders is outside our control, it makes the task of exactly replicating the intraday volume profile more complex and may lead to more deviation (risk) in our VWAP performance.
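To make the scheduling idea above concrete, here is a minimal sketch of how a target execution curve might be derived from a start-of-day volume profile and used to size the next child order. The bucket granularity, profile figures and function names are illustrative assumptions, not a description of any particular production algorithm.

# Illustrative sketch only: derive cumulative execution targets from a
# start-of-day volume profile and size the next child order against them.

def vwap_schedule(order_qty, volume_profile):
    """Cumulative target quantity per time bucket.

    volume_profile: expected fraction of the day's volume in each bucket
    (from a start-of-day estimate; should sum to roughly 1.0).
    """
    targets, cum = [], 0.0
    for frac in volume_profile:
        cum += frac
        targets.append(order_qty * cum)
    return targets

def next_child_qty(targets, bucket_index, executed_so_far):
    """Quantity to trade now so that fills keep tracking the target curve."""
    return max(0.0, targets[bucket_index] - executed_so_far)

# Example: 10,000 shares over four buckets with a U-shaped volume profile.
profile = [0.35, 0.15, 0.15, 0.35]
targets = vwap_schedule(10_000, profile)
print(targets)                           # [3500.0, 5000.0, 6500.0, 10000.0]
print(next_child_qty(targets, 1, 3200))  # 1800.0 still to do in bucket 2

In practice the targets would be recomputed whenever the dynamic volume model revises the profile during the day.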

To summarise the above, the inputs our execution algorithm would ideally have access to include the following (a sketch of how they might be bundled at run time follows the list):

• Start of day estimates of the intraday volume curves (including estimates of auction volumes)

• Dynamic models for updating these curves based on actual trading.

• Estimates of intraday bid/offer spreads and short-term impact/reversion models so the algorithm can model the likely cost of going aggressive.

• Estimates of execution probabilities for passive orders on lit markets so the algorithm can model the likely risk of non-execution.

• Intraday models of price volatility.
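Purely as an illustration of how these inputs might hang together at run time, here is a minimal container sketch; the field names and types are assumptions made for this example, not a prescribed interface.

from dataclasses import dataclass, field

@dataclass
class RunTimeInputs:
    """Illustrative bundle of the run-time statistics listed above."""
    volume_profile: list[float]          # start-of-day intraday volume curve (fraction per bucket)
    auction_volume_est: float            # estimated open/close auction volume
    spread_bps: float                    # expected bid/offer spread cost of going aggressive
    impact_reversion_bps: float          # expected short-term impact/reversion cost
    passive_fill_prob: dict[str, float] = field(default_factory=dict)  # per-venue execution probability
    intraday_volatility_bps: float = 0.0  # price volatility estimate (risk of waiting)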

To be truly optimal, these data sets should be both instrument- and venue-specific: we want to know not only the probability of being executed passively on any lit venue or dark pool, but also, for a given instrument, which venues we are most likely to be executed on, so that we can set preferences on which venues to favour.

With this type of detailed run-time data, a cleverly designed VWAP algorithm should be able to make sensible decisions throughout the day on the speed at which it should trade, what venues to post passively to in order to capture spread, when to switch from passive to aggressive orders and the right ‘size’ to send to lit markets to minimise impact/reversion.
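As a rough illustration of the passive-versus-aggressive decision described above, the sketch below compares a crude expected cost of crossing the spread with a crude expected cost of waiting passively. The cost model and the schedule penalty are deliberate simplifications introduced for this example only.

# Deliberately simplified passive-vs-aggressive decision; the cost model is an
# assumption made for illustration, not any vendor's actual logic.

def choose_order_style(half_spread_bps, impact_bps,
                       passive_fill_prob, volatility_bps, behind_schedule):
    """Return 'aggressive' or 'passive' for the next child order."""
    # Crossing the spread now: pay roughly half the spread plus expected impact.
    aggressive_cost = half_spread_bps + impact_bps

    # Posting passively: with probability (1 - p) we are not filled and bear
    # timing risk proportional to volatility; waiting is penalised further if
    # we are already behind the target volume curve.
    passive_cost = (1.0 - passive_fill_prob) * volatility_bps
    if behind_schedule:
        passive_cost += half_spread_bps

    return "aggressive" if aggressive_cost < passive_cost else "passive"

print(choose_order_style(2.5, 1.0, 0.8, 12.0, behind_schedule=False))  # passive
print(choose_order_style(2.5, 1.0, 0.4, 12.0, behind_schedule=True))   # aggressive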

The practical challenges faced by algo operators are twofold:

1. Calculating or sourcing instrument/venue level statistics. The challenge here is maintaining detailed and accurate tick-level databases and having processes to calculate the statistics on a daily basis (a minimal sketch of such a daily job follows this list).

2. Feeding these statistics into their run time execution logic (and integrating this data with other real time data that should be used to drive decisions). How easy this is will depend upon the degree of control provided by the execution software.
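As a sketch of the kind of daily statistics job referred to in point 1 above, the snippet below derives per-instrument, per-venue volume and average spread figures from a day's trade records. The input format (a list of dictionaries with hypothetical field names) stands in for whatever tick database a firm actually maintains.

from collections import defaultdict

def daily_venue_stats(trades):
    """trades: iterable of dicts with 'isin', 'venue', 'qty' and 'spread_bps' keys."""
    volume = defaultdict(float)
    spread_sum = defaultdict(float)
    count = defaultdict(int)
    for t in trades:
        key = (t["isin"], t["venue"])
        volume[key] += t["qty"]
        spread_sum[key] += t["spread_bps"]
        count[key] += 1
    return {key: {"volume": volume[key],
                  "avg_spread_bps": spread_sum[key] / count[key]}
            for key in volume}

# Example with two hypothetical venues for one instrument.
trades = [
    {"isin": "GB00XXXXXXXX", "venue": "VENUE_A", "qty": 500, "spread_bps": 3.0},
    {"isin": "GB00XXXXXXXX", "venue": "VENUE_B", "qty": 300, "spread_bps": 4.5},
    {"isin": "GB00XXXXXXXX", "venue": "VENUE_A", "qty": 200, "spread_bps": 2.0},
]
print(daily_venue_stats(trades))  # VENUE_A: 700 @ 2.5 bps, VENUE_B: 300 @ 4.5 bps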

The prize, for firms that get this right, will be execution algorithms that fully realise the intention and cleverness of the people who design them. Otherwise, no matter how clever the algo design or impressive the real time execution architecture, Garbage In, Garbage Out…

Be24_Liquidmetrix_Chart

Estimating intraday volume profiles is not as straightforward as one might think. The chart above shows various intraday volume prediction models, each giving different estimates. The black dots show actual volumes traded, illustrating the natural intraday variability and difficulty of robust estimation.
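One of the simpler profile models alluded to in the chart is a plain average of the last few days' normalised intraday volumes; the sketch below shows that approach with made-up numbers. Real models are typically conditional and considerably more involved.

def estimate_volume_profile(daily_bucket_volumes):
    """daily_bucket_volumes: one list of per-bucket volumes for each historical day."""
    n_days = len(daily_bucket_volumes)
    n_buckets = len(daily_bucket_volumes[0])
    profile = [0.0] * n_buckets
    for day in daily_bucket_volumes:
        total = sum(day)
        for i, v in enumerate(day):
            profile[i] += (v / total) / n_days
    return profile  # fractions summing to roughly 1.0

# Three days of hypothetical volumes over four intraday buckets.
history = [[120, 60, 50, 170], [100, 70, 60, 170], [140, 50, 40, 170]]
print([round(f, 3) for f in estimate_volume_profile(history)])  # [0.3, 0.15, 0.125, 0.425]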

 

© BestExecution 2014

[divider_to_top]

Equities trading focus : TCA : Tim Healy

Be24_FIX_Tim.Healy

Trade Cost Analysis – A few steps back to take it a long way forward.

Tim Healy, Global Marketing and Communications Manager, FIX Trading Community.

Trade Cost Analysis (TCA) can mean very different things to different people. Historically, TCA has been a very subjective study. Different methodologies in the market have meant that TCA has become somewhat shrouded in an air of mystery.

The definition of TCA on one well-known internet encyclopaedia site is that it is a method of determining the effectiveness of financial investment portfolio transactions. Simple, clear and to the point. Theoretically yes, but the reality is somewhat different.

In this modern era of regulation and a push for transparency across the board, there have been growing voices for TCA to have a universally recognised set of benchmarks and data points that everyone understands and that everyone can use. This isn’t just for the buyside of course. The sellside and vendors have a vested interest in making sure that the data that they are providing, and being provided with, is standard across all aspects of the market. Only then can the true value of the trading services of the sellside be measured against each other. No small task.

The FIX Trading Community has worked on a large number of initiatives over the years and always with the ideal of addressing the needs of our members. The working groups that spring up as a result of these initiatives are run by volunteers from the industry who give up time from their day-to-day work to contribute to the working groups in any way they can.

The FIX TCA Working Group was formed two years ago. Co-chaired by representatives from the buyside and sellside, it has identified that there are real needs in the market to create a glossary of industry-acceptable TCA terms and to define a standardised set of best practices. The reasoning behind this is to remove that air of mystery and uncertainty that can pervade the world of TCA. Transparency, as in many aspects of the market these days, is key for the buyside and for their clients.

The initial scope of this working group has been to develop industry-wide terminology and methodology for equities, from pre-trade to post-trade analytics. Within this initial scope, there are some fundamental questions that the group has posed and then attempted to answer.

1) What is the cost of trading?

2) What do we want to measure?

3) How do we measure the cost of trading?

From the perspective of the buyside and the sellside, these questions have a number of different answers. For the buyside, there are generally three traditional views of TCA: portfolio manager (PM), trader and management. Whilst the PM and trader will have somewhat similar requirements, the trader’s need for implicit cost measurement, which includes the execution cost, is key to his role within the organisation and to the measurement of brokers. Aside from this, TCA professionals are starting to make their mark in buyside firms, where individuals are employed specifically to provide TCA across the firm.
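To illustrate what an implicit cost measurement can look like in its simplest form, the sketch below expresses slippage of the achieved price against two commonly used benchmarks, arrival price and interval VWAP, in basis points. The benchmark choices and the numbers are generic industry examples, not the Working Group's standardised definitions.

def slippage_bps(avg_exec_price, benchmark_price, side):
    """Positive result = a cost (worse than the benchmark); side is 'buy' or 'sell'."""
    sign = 1 if side == "buy" else -1
    return sign * (avg_exec_price - benchmark_price) / benchmark_price * 10_000

# Example: buying at an average price of 100.06 when the arrival price was
# 100.00 and the interval VWAP was 100.02.
print(round(slippage_bps(100.06, 100.00, "buy"), 1))  # 6.0 bps versus arrival price
print(round(slippage_bps(100.06, 100.02, "buy"), 1))  # 4.0 bps versus interval VWAP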

From the perspective of the sellside, brokers will typically provide services to the buyside that combine the needs of the PM, trader and management. This preserves, strengthens and deepens the long-term relationship between client and broker and is absolutely fundamental to a broker’s TCA offering. Additionally, brokers will analyse TCA data in great detail to continuously improve execution quality through the fine-tuning of existing and new automated strategies.

For the TCA providers, those three questions could elicit some diverse replies. TCA providers are for-profit enterprises and have an incentive to see TCA as an evolving topic. By promoting their offerings as unique or superior to their peers’, they will attempt to gain market share. Both standardised and custom solutions present drawbacks and advantages. The challenge for both entails trying to represent vastly different buyside investment and trading strategies, either in a single methodology or in a customer-driven bespoke solution, meaning that the true cost of trading may be masked.

In an attempt to help these three groups tackle these three questions, the TCA Working Group identified that the number one issue was the inconsistent and often confusing use of terminology and methodology. That is not to say that the group is looking to tell the TCA providers that what they are currently doing is wrong or misleading. The group is trying to set down a list of general definitions and benchmark definitions for the industry to leverage.

General definitions are terms that are commonly found within TCA and represent the contribution of all the member firms of the Working Group. Benchmark definitions are derived from the order lifecycle. With this, the group is providing a neutral naming convention which acknowledges different perspectives from the buy and sellside on the same point in the order lifecycle.

The Working Group encourages the financial community to review the document* and to consider adopting the terminology and other best practice guidelines within their own organisations.

There has been strong interest from FIX Trading Community members to address the issues within TCA. By taking a few steps back and providing a standardised terminology and methodology, there will be an opportunity to move forward with a more consistent evaluation of the investment process. Just recently there has been a ‘call to arms’ to the members to expand the Working Group and broaden the asset class scope to include FX, futures and options, and fixed income. FIX is now the standard across all asset classes and, by leveraging our neutrality, we will attempt to promote the idea of creating a permanent, neutral body providing global validation and ongoing maintenance of TCA standards.

*The document that contains this information is publicly available to download on the FIX Trading Community website at www.fixtradingcommunity.org

© BestExecution 2014
 
[divider_to_top]
