
Buyside challenges : Trading : ARQA

TODAY'S TECHNOLOGIES FOR TODAY'S TRADING AGENDA

ARQA Technologies is a leading independent financial software provider in Russia and the CIS. Founded in 2000 in the geographical centre of Russia – Novosibirsk – it offers automated front-to-back solutions to over 300 sell- and buyside clients worldwide. The company offers its solutions for deployment at clients' premises or as managed services based on three data centres. The solutions comprise:

Front-office solutions

• EMS/OMS QUIK for desk operations and client services.

• The essential components of the front-office software are the QUIK matching engine (the platform's core), SOR and the ALGO suite.

• RISQ solutions – a family of software applications for comprehensive risk management.

Middle-and-back-office solutions

• midQORT – monitoring and control of positions/risks across markets/clients.

• backQORT – real-time operational accounting and reporting.

 

RISQ solutions – different approaches to risk management in pre-trade mode

What is it?

RISQ solutions are a family of software applications specially designed as a stand-alone complex for effective pre-trade risk control of transactions.

Two distinct cases for applying RISQ solutions are those when:

1. Transactions sent to trading venues come from an OMS or EMS, and

2. Trading takes place through direct sponsored access often involving HFT and algorithms.

In the first case, transactions are checked as they pass through the RISQ server.

In the second case, a special piece of software named RISQ filter (proprietary know-how) screens orders based on pre-set 'fat finger' restrictions (with additional safeguards including flood control) and continuously updated computations from the RISQ server.

In both cases the sophistication of pre-trade checks depends on the selected models and risk parameters. A number of risk control approaches (risk management models), and various combinations of them, are applied to achieve a particular business goal. These approaches extend from simple checks ('fat finger', flood control) to more complex ones such as cross-asset portfolio margining, which involves computing correlations between instruments.
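To make the simple end of that spectrum concrete, the sketch below shows what a 'fat finger' and flood-control check might look like. It is purely illustrative: the class name, limits and one-second flood window are assumptions made for the example, not ARQA's RISQ filter logic.

```python
import time
from collections import deque

class PreTradeFilter:
    """Illustrative 'fat finger' and flood-control checks (not ARQA's implementation)."""

    def __init__(self, max_qty, max_notional, max_orders_per_second):
        self.max_qty = max_qty                 # largest order size allowed
        self.max_notional = max_notional       # largest order value allowed
        self.max_rate = max_orders_per_second  # flood-control threshold
        self.recent = deque()                  # timestamps of recently accepted orders

    def check(self, qty, price):
        now = time.monotonic()
        # Flood control: discard timestamps older than one second, then count the rest.
        while self.recent and now - self.recent[0] > 1.0:
            self.recent.popleft()
        if len(self.recent) >= self.max_rate:
            return False, "flood control: too many orders per second"
        # 'Fat finger' checks on order size and notional value.
        if qty > self.max_qty:
            return False, "order quantity exceeds limit"
        if qty * price > self.max_notional:
            return False, "order notional exceeds limit"
        self.recent.append(now)
        return True, "accepted"

# Example: a 1,000,000-share order priced at 50.0 is rejected on size.
f = PreTradeFilter(max_qty=100_000, max_notional=1_000_000, max_orders_per_second=100)
print(f.check(1_000_000, 50.0))   # (False, 'order quantity exceeds limit')
```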

Distinctive features of RISQ solutions

• accurate accounting for working orders in position evaluation;

• scenario-based approach;

• capability to ‘screen’ the very first HFT order;

• easy integration with other software;

• additional latency of a pre-trade check (per round trip) – 3 to 50 microseconds in case 2 (depending on infrastructure) and 0.9 to 5 milliseconds in case 1 (depending on risk control approach).

How it works

ARQA 1 - RISQ

Software used

1. RISQ Server – the computation core,

2. RISQ filter – a low latency ‘screen and halt’ device. Solutions built around RISQ filter include FIXPreTrade (universal), FIX2LSE, FIX2MICEX, FIX2Plaza2, and QUIK KillSwitch.

3. RISQ terminals – GUI for risk managers and end-clients.

 

QUIK KillSwitch – risk solution for sponsored access to the LSE

What is it?

The latest solution incorporating RISQ filter is QUIK KillSwitch. It was designed for online risk control of low-latency sponsored access trading at the London Stock Exchange (LSE) spot market. It is built on the existing technology of the LSE (halting trading by enabling a killswitch when a drop copy connection is down). The point of this solution (as with other RISQ filter-based applications) is to leverage RISQ capacity for lower risk and more efficient trading. QUIK KillSwitch has been certified by the LSE for joint use with the exchange risk control system.

How it works

ARQA 2 - QUIK KillSwitch

1. The client's platform is connected to the LSE trading system.

2. 'Fat finger' pre-trade checks are made by the exchange according to broker settings.

3. Complex risk checks are carried out by the RISQ server.

4. A drop copy comes to the RISQ server.

5. The RISQ server updates QUIK KillSwitch on the client's position.

6. When the RISQ server detects a risk, it signals QUIK KillSwitch to halt the client's transactions by breaking the client's drop copy connection (see the sketch after this list).

7. While risk parameters are satisfactory, the separate drop copy connection is not used.

8. When the drop copy connection is broken, the client's orders are rejected by the exchange.
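Steps 6–8 boil down to one rule: break the drop copy connection on a breach and the exchange stops accepting the client's orders. The sketch below illustrates only that rule; the session object and method names are hypothetical stand-ins, not the LSE or ARQA interfaces.

```python
class DropCopySession:
    """Stand-in for the broker's drop copy connection (illustrative only)."""
    def __init__(self):
        self._connected = True
    def is_connected(self):
        return self._connected
    def disconnect(self):
        self._connected = False
        print("drop copy connection broken -> exchange rejects client orders")

class KillSwitch:
    """Minimal sketch of steps 6-8 above; not the LSE or ARQA interface."""
    def __init__(self, session: DropCopySession):
        self.session = session

    def on_risk_update(self, risk_breached: bool):
        if risk_breached and self.session.is_connected():
            # Step 6/8: breaking the connection causes the exchange to
            # reject the client's subsequent orders.
            self.session.disconnect()
        # Step 7: while risk parameters are satisfactory, nothing is done.

# Example: the risk server reports a breach.
ks = KillSwitch(DropCopySession())
ks.on_risk_update(risk_breached=True)
```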

Software used

• QUIK KillSwitch

• RISQ server

 

Aggregation & internalization

What is it?

This is a universal multi-instrument solution for equity, FX, and derivatives trading. Its infrastructure is streamlined to aggregate and internalise liquidity.

The solution interconnects a number of trading venues and dark pools, bringing together quotes and liquidity. Orders are processed and routed to liquidity pools. Instrument quotes for clients are generated internally, hedging broker positions according to selected computation schemes. Clients’ orders are registered, matched and executed. Currently the solution is mostly applied to the FX market.
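As a rough illustration of the aggregation step, the sketch below merges per-venue price levels into a single consolidated book. The venue names, data layout and FX example are assumptions made for the illustration, not the QUIK SOR implementation.

```python
from collections import defaultdict

def aggregate_books(venue_books):
    """Merge per-venue price levels into one consolidated book (illustrative only)."""
    bids, asks = defaultdict(float), defaultdict(float)
    for book in venue_books.values():
        for price, size in book["bids"]:
            bids[price] += size
        for price, size in book["asks"]:
            asks[price] += size
    return {
        "bids": sorted(bids.items(), key=lambda x: -x[0]),  # best (highest) bid first
        "asks": sorted(asks.items(), key=lambda x: x[0]),   # best (lowest) ask first
    }

# Example: two venues quoting the same currency pair.
books = {
    "venue_a": {"bids": [(1.2501, 1_000_000)], "asks": [(1.2503, 500_000)]},
    "venue_b": {"bids": [(1.2502, 750_000)],   "asks": [(1.2503, 250_000)]},
}
top = aggregate_books(books)
print(top["bids"][0], top["asks"][0])   # (1.2502, 750000.0) (1.2503, 750000.0)
```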

How it works

 

ARQA 3 - Aggregation

Software used

1. QUIK server – global connectivity, combined positions and overall risk management,

2. QUIK SOR – liquidity aggregation.

3. Automated Order Processing – internal pricing engine, quotations for clients,

4. QUIK Matching Engine – internal matching.

 

QUIK OMS

What is it?

Fully automated order processing (including registration, execution, monitoring, reporting and export of data to external systems).

An external platform or OMS (buyside or sellside) connects to QUIK OMS. QUIK OMS automatically processes order flow with the help of Sales and Trader. Execution is done by QUIK EMS (cross-instrument, cross-venue, OTC or through ALGO suite). Risk management is provided by RISQ solutions (built into QUIK server). P&L is calculated online.
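On the "P&L is calculated online" point, the sketch below shows one minimal way to keep a running position and P&L from a stream of fills. It is a generic illustration under simplified assumptions (one instrument, no fees), not how QUIK server computes P&L.

```python
def online_pnl(fills, mark_price):
    """Running position and P&L from a list of fills (side, qty, price); illustrative only."""
    position = 0.0
    cash = 0.0
    for side, qty, price in fills:
        signed = qty if side == "buy" else -qty
        position += signed
        cash -= signed * price          # cash spent on buys, received on sells
    pnl = cash + position * mark_price  # realised plus unrealised at the current mark
    return position, pnl

# Example: buy 100 @ 10.00, sell 40 @ 10.50, marked at 10.20
print(online_pnl([("buy", 100, 10.00), ("sell", 40, 10.50)], mark_price=10.20))
# -> (60.0, 32.0): 40 * 0.50 realised plus 60 * 0.20 unrealised
```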

How it works

ARQA 4 - QUIK OMS

Software used

1. QUIK OMS Manager – order processing automation,

2. RISQ server,

3. ALGO suite – algorithmic order processing,

4. OTC module – internal OTC trading core.

www.arqatech.com

sales@arqatech.com

 

© BestExecution 2014

 

Global Post Trade Working Group (GPTWG) Review 2014

With Laura Craft, Director, Equities & Fixed Income Product Strategy at Traiana and Co-Chair of Global Post Trade Working Group, FIX Trading Community.
After updating and reissuing the guidelines for Equities in 2013, did you see a significant increase in buy-side engagement?
The buy-side that has really driven FIX for post trade has tended to be some of the large US-based asset managers. The initiative was kicked off a number of years ago by the group. Their early input into the drafting of the guidelines was no doubt advantageous, and their continued support for the initiative has helped encourage more firms to adopt FIX allocation and confirmation messaging. As we draw to the close of the final quarter of 2014, we are seeing some European asset managers now adopting FIX. The continued adoption of FIX will be a focus for the GPTWG as we move forward into 2015.
 
Has there been much push back from the sell-side on this initiative at all?
The sell-side has been largely supportive of the initiative, and on the whole has implemented FIX in their post-trade process relatively seamlessly or in some cases have utilised middle-ware vendors to assist in the transition to a new messaging protocol. Since FIX is so prevalent in the front office, the protocol is seen as a natural extension to the middle office for allocation and confirmation processing. The ability to leverage the existing FIX infrastructure has proved to be cost effective and another persuasive argument for the sell-side to be involved.
What have been the challenges from region to region for Post-Trade FIX workflow?
The main challenges that the working group has noticed have been the education process across the regions and the staggered regional rollout of FIX on the buy-side. Certainly, Europe and Asia Pac tend to have very similar processes that can be leveraged, yet North America has traditionally had a different workflow, which has made it more of a challenge for some firms to implement FIX – notwithstanding the typically high volumes within the North American region.
At the beginning of 2014, the plan was to broaden the asset class for FIX in Post-Trade. Has there been success with this initiative? If so, in which areas?
Once the FIX Post Trade guidelines were signed off for Equities, there was a conscious effort from the GPTWG to ensure that additional asset classes were included. Various sub-groups were created, including Equity Swaps, FX and Fixed Income. These have been extremely successful and guidelines are in the process of being finalized and signed off across all of these instruments. It is important that the momentum these sub-groups have built is followed through on.
The Fixed Income market has long been OTC. If there is a move towards more electronic trading, will that have a natural impact on the Post-Trade space?
Traditionally the Fixed Income (FI) post trade space has been extremely manual. However, as some buy-side firms have now adopted FIX post trade across both equities and FI, there is a shift towards greater automation in the post trade space. Firms are now investing in middle office systems and need to improve processes to cope with increased volumes.
2015 is just around the corner. Where do you see the main focus for the Post-Trade Working Group being?
Once all of the sub-groups have signed off their respective guidelines for the additional asset classes, one of the key roles of the GPTWG is to continue to drive adoption. The success of the group is measured only by usage, and more buy-side firms need to be encouraged to review the efficiency statistics of those firms that are now fully functioning with FIX for allocation and confirmation processing, and to feel confident that they can emulate that improved efficiency and reduced cost of processing. There has been great participation from all market participants in the GPTWG – buy-side, sell-side and vendors have all been very actively involved. As mentioned earlier, pushing for greater adoption will be an important task as we move forward, with the likelihood of shorter settlement periods becoming normal as the year progresses.

Buyside challenges : Technology infrastructure : Data strategy


DERIVING VALUE FROM THE DATA.


Anna Pajor, Lead Consultant, Capital Market Intelligence at GreySpark Partners.

Capital markets data is produced by, analysed by and consumed by many different types of industry professionals. This data is so universal that everyone from data entry clerks to senior executives interacts with it on a daily basis. As such, it is important for all types of buyside firms to utilise strategic data management practices, which can help them better understand their client base as well as opportunities in the market to serve that client base. However, implementing the technology needed to support such a data strategy is a complex task. The complexity is twofold – several technologies are introduced at the same time, which then creates a need to drive cultural alignment within the firm toward the solutions.

GreySpark Partners' experience advising capital market participants of all sizes means that we frequently witness situations within capital markets firms wherein the word 'data' is discussed only in the context of problems that the company is experiencing. Low-quality data warehoused by a firm often requires recurring data cleansing projects, and reports-centric thinking frequently results in an overflow of useless, spreadsheet-based reports while business-critical issues are overlooked. In those situations, the IT department is typically blamed for mismanaging the firm's data.

The long-standing pain related to data management prevents many capital markets firms from seeing their data as an asset. For numerous firms, the value of data is hidden, especially if they rely only on pre-set reports or spreadsheets to convey the data the firm accumulates as actionable information (see Figure 1). Firms that are aware of best practices for data management, and of the technology solutions that can aid in implementing a data strategy, are heading for a new stage in their development as a company – one in which data management is straightforward and knowledge about the business environment the firm operates in is built up from the data accumulated over time.

The data wave is coming

Even if a capital markets firm is not seeking a competitive edge through enhanced utilisation of the data at its disposal, concerns about data management will inevitably emerge in the future. Clients are now demanding more sophisticated analysis and proofs of the performance of their investments, and firms must be able to automate these reports to avoid drowning their clients and themselves under spreadsheet reports. The rise of risk-on/risk-off behaviour means that clients are price sensitive and less loyal in their willingness to set and forget where they park their investments for the long-term. Instead, they require fact-based proof of above-average performance by their investment managers.

Additionally, capital markets regulations and the resulting raft of new compliance rules are incentivising investment houses to streamline their data management practices to fulfil reporting requirements.

Also, the increase in the volume and diversity of data that firms must now process is another reason why it is important for organisations of all sizes to build and implement a cohesive data strategy. The general electronification of capital markets activity is a driving force in the industry, which means that human relationships are increasingly substituted by electronic interactions. At the same time, the volume of data created by new levels of automated trading activity with brokers, clients and markets is growing. Generally speaking, these new volumes of data are a function of the current information age and, in the case of financial services, the need for their efficient management is exacerbated by demand for data from regulators and other types of market entities. This data deluge will not abate, and it is critically important that all organisations have a strategic response in place now to respond to these pressures.

Navigating the storm

The concept of treating data as an asset should be the central pillar of a successful data strategy for any type of capital markets organisation. Treating data this way means that all of the firm’s other strategic objectives become corollaries of this idea. The principal deduction from the idea of treating data as an asset is that the data must then be used to drive business benefit, whether it is in the form of increasing revenues, reducing costs or increasing market share.

In exploring the implications of managing data as an asset, five strategic objectives emerge that, in concert, support the end goal of using data to deliver real business benefits. These strategic objectives are shown in Figure 2.

No wetware

Avoiding automation and relying on human intervention – the wetware solutions – in standardised processes are the biggest flaws in any firm’s data management practices. Using wetware means that process management becomes management-by-exception; the processes become more convoluted as exception handling, reconciliation and data cleanup or de-duping bolt-ons are added. This is a vicious circle as more human intervention is added in an attempt to maintain high quality of the data.

Wetware solutions should be avoided at all costs, including using low-cost offshore centres that may appear cheaper than a software solution. Wetware data management is a false economy because the true costs of doing so, including opportunity costs, are never correctly estimated. The long-term result of fixing data problems by using an army of cheap human resources is a reduction in the value of the data that will ultimately affect the firm’s bottom line in a negative manner.

Swim or sink

For many organisations, justifying investment in new technology is not a straightforward task because the main benefits from the effort that the task demands are intangible or are expressed as opportunities rather than as immediately profitable business benefits. Instead, the benefits of the new technology are observed over several years and, in cases of visualisation tools and ‘Big Data’ – as with any disruptive technology – the benefits cannot be fully appreciated while the technology is still maturing.

The justifications for the investment are shown in Figure 3 and are only a starting point for a broader discussion with potential technology implementation project sponsors. An in-depth investigation into other, organisation-specific opportunities that the implementation project could yield is needed, not only to create a stronger justification for the spending, but also to understand what business benefits and opportunities would be enabled by the investment.

The new land of data-led superiority

A successful data strategy relies on a set of different technologies. All the elements illustrated in Figure 4 are necessary to support an effective capital markets data strategy, although specific technology choices must be made according to a firm’s unique requirements. The full value of this model is realised only when the pervasive application of the relevant technology services is made available at every tier in the architectural model. A mature data organisation is supported by a complete architectural vision of data-related services that transcends business siloes and unlocks maximum utility from the data throughout the organisation.

GreySpark Partners produced a series of reports to guide financial institutions through a transformation from a state wherein data is treated as a regulatory burden or technology overhead to a state wherein data management is considered central to generating bottom line results. Those reports are available at: research.greyspark.com. Additional contributors: Bradley Wood, Bruce Craven, Jonathan Parsons.

www.greyspark.com

 

© BestExecution 2014

Buyside challenges : Technology infrastructure : The cloud


De-fluffing the cloud paradigm


Stuart Turnham, Director Enterprise Field Development, EMEA at Equinix.

With Basel III high on the agenda of the European banking sector right now, along with all its associated technology implications, it is perhaps not surprising that the wider community of institutional investors is looking beyond traditional technology models to ensure compliance. And while the increased reserve requirements of Basel III will drive higher costs of capital across the board, it is the new regulatory standards involving the holding of structured products, requiring more rigorous valuation processes and portfolio analysis, that will be particularly onerous for the buyside.

Although investment firms are adapting to these market conditions, many of their existing IT models – in particular those relying on batch jobs and Excel spreadsheets – are proving insufficient for the task post-Basel III, as firms will require more compute power and data storage than their legacy equipment can provision. So how can this be achieved on existing, or in some cases reduced, IT budgets?

Enter the cloud

From a technology standpoint, many capital markets firms are now starting to realise that the cloud provides the additional compute and storage capacity required for their ever-evolving IT workloads. And by introducing an efficient cost model for delivering computing utilities as services, the cloud is enabling users to completely change their technology architecture strategies to take advantage of this paradigm.

In fact, according to a recent Aite Group survey*, over 52% of capital markets firms already have cloud initiatives in place. Of those that do not, a further 80% expect to implement cloud-based technologies over the next 24 months.

However, the research also indicates that cloud adoption is slower on the buyside than it is on the sellside.

Why is this? With cloud initiatives gradually gaining momentum across the wider capital markets industry, with better technological offerings readily available, and with the obvious compute cost savings that can be applied to processes such as portfolio analysis, why do today’s buyside organisations continue to under-utilise cloud computing when compared to their other capital markets counterparts?

Barriers to adoption

Barriers to cloud adoption on the buyside seem to fall around three areas: compliance; security; and access to public and private cloud services.

Compliance: Investment firms have always needed to stay on the right side of regulators and avoid anything that could be deemed a compliance risk, so the issue of regulation and compliance is not new; it has always been high on the buyside’s agenda. And from a compliance perspective, many firms are nervous about the issue of data storage – particularly customer data – in the cloud, especially where jurisdictional concerns or sensitive market data come into play. But how valid are these concerns?

The reality is that most cloud offerings do provide users with control over what is and is not acceptable regarding data storage, such as which data needs to be kept onshore for example. So although users might not know the exact location of where the information is stored, or where specific servers are running applications and models, from a regulatory standpoint compliance issues around data can be addressed.

In fact, on a wider level, regulatory bodies and cloud service providers are working more and more closely to address issues surrounding cloud usage in financial services, particularly in the US and Europe.

The US Securities and Exchange Commission (SEC) and the Financial Industry Regulatory Authority (FINRA) are leading the way amongst regulators using cloud computing, encouraged by the US Federal Risk and Authorization Management Program (FedRAMP), which provides standards for – and ultimately authorises specific cloud offerings as appropriate for use by – federal agencies.

Another US regulator, the Commodity Futures Trading Commission (CFTC), recently issued cyber security guidelines that, while not addressing cloud computing directly, set forth standards and put cloud squarely within the acceptable category.

In Europe, the EU has also worked to help cloud adoption, recently releasing proposed changes to the existing Data Protection Directive that would considerably simplify the process of moving data across borders.

Such initiatives, viewed by cloud providers as positive developments and a signal that regulators are open to the technology, should help to address many of the buyside's concerns regarding compliance.

Security: With many investment firms using proprietary financial models to generate alpha and hence profits, it is perhaps not surprising that they focus on the security and secrecy of their intellectual property (IP) and how to keep it to themselves. As a result, many are wary of holding their models and associated applications and data in the cloud.

Their fears however stem from perceived, rather than actual, risks. And perceptions change. Over the last twelve years or so for example, there has been a major shift in perception around the relative security of in-house versus third-party data centres, with many firms now realising that with better physical security and disaster recovery facilities, third-party data centres are generally more secure than their in-house equivalents.

Will the next ten years see a similar shift in perception towards the security of cloud computing? Almost undoubtedly so. The major cloud providers already operate in data centres in unknown locations, leading to greater physical security. And the flexibility offered around multiple levels of security options, ranging from complex passwords to enterprise-level encryption, means that customers are not simply reliant on the broader security features of the cloud; they can also add additional layers of security over which they have complete control.

With the right procedures in place, firms will no doubt come to realise over time that their concerns around cloud security are unfounded.

Access: One of the major operational barriers preventing financial organisations from adopting a cloud strategy remains how to securely access and consume these services. More often than not there has only been one way for financial services firms to access cloud services: by connecting over the public Internet. Current models rely heavily on the Internet as the sole means of connecting to the cloud for compute, storage and software services, introducing risks around data security, performance and customer experience. There are of course tools and technologies that mitigate the risks of transiting via untrusted third-party networks but – as we were reminded by the Heartbleed vulnerability – these are not infallible. They also carry a non-trivial performance and efficiency cost.

The only way to be 100% confident that your data remains safe while in transit is to connect directly to your chosen cloud providers. However, there is no “one size fits all” cloud solution. Savvy customers therefore deploy a combination of public and private cloud (hosted or on-premises), and legacy infrastructure (“hybrid-cloud”). They also leverage multiple public cloud service providers (“multi-cloud”), choosing the right tool for the job on a case-by-case basis. Even the cloud providers themselves are increasingly building others’ cloud services into their architectures (“inter-cloud”), in order to cost-effectively deliver a reliable service globally.

Clearly, a simple and direct method of connecting to cloud services, both public and private, is required if operational risk and the associated costs are to be kept at bay, allowing the true business benefits of cloud technology to flow through to buyside firms. This is why an increasing number of financial services firms are accessing the cloud through multi-tenant data centres, which offer a range of flexible interconnection offerings to allow these hybrid-, multi- and inter-cloud architectures to be built seamlessly and easily.


Realising the benefits of cloud

With the cost of replacing legacy infrastructure often being prohibitive, hedge funds in particular may benefit from adopting cloud on an as-needed basis, despite their concerns around compliance, security and access. Adopting cloud can lower the cost of trading by reducing infrastructure spend and reducing costs outside of the bid/ask spread, as well as lowering overall portfolio compute costs by up to 50%, according to industry estimates.

As investors and regulators decide which IT workloads are suitable for migration to the cloud, and as they begin to see beyond hosted email servers and execution management systems to unlock the true scale the cloud can provide, it is important that they understand how to cost-effectively and securely connect to their chosen cloud services.

To meet the challenges surrounding cloud adoption within the industry, Equinix is working hard to enable our customers to directly access multiple cloud service providers inside our global data centres, providing faster, safer and smarter interconnection options to cloud service providers, bypassing the public internet and leveraging cross-connect and Ethernet technology to improve performance and security, while reducing costs. Essentially we assist customers in the design of their cloud connectivity architectures, taking the risk of Internet connectivity out of the equation, so our customers can experience all the benefits offered by cloud technology within a secure and safe data centre environment, near to where their business offices and end users are. In fact, over 800 financial services firms are currently located inside Equinix data centres, directly interconnecting to 450+ cloud service providers, including leading players such as Amazon and Microsoft.
We believe cloud computing's software-, platform- and infrastructure-as-a-service (SaaS, PaaS and IaaS) solutions can be used in a secure fashion if they are implemented correctly, as part of a balanced IT infrastructure model and with securely controlled connectivity and access. By utilising these services on the cloud, buyside firms can gain direct and secure access to the originating sources of market data, as well as SaaS risk management applications and IaaS compute power and storage technologies, all of which serve to increase performance and reduce costs, whether that is applied to overnight batch window Value at Risk (VaR) calculations, real-time pre-trade risk analytics, application development, or other processes.
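As a toy illustration of the kind of batch workload mentioned above, the sketch below computes a one-day historical VaR from a vector of scenario P&Ls. The data is synthetic and the function is a generic textbook calculation under assumed inputs, not a description of any particular vendor's risk engine.

```python
import numpy as np

def historical_var(pnl_scenarios, confidence=0.99):
    """One-day historical VaR: the loss not exceeded with the given confidence.
    Illustrative of the kind of batch calculation the article mentions."""
    losses = -np.asarray(pnl_scenarios)            # positive numbers are losses
    return float(np.percentile(losses, confidence * 100))

# Example: portfolio P&L re-valued under 500 synthetic daily scenarios (USD).
rng = np.random.default_rng(0)
scenarios = rng.normal(loc=0.0, scale=1_000_000, size=500)
print(f"99% 1-day VaR: ${historical_var(scenarios):,.0f}")
```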

In conclusion, cloud computing is a hugely promising paradigm for delivering computing utilities as services for the financial services industry. Just as personal computers and servers shook up the world of mainframes and mini-computers, or as smartphones and tablets revolutionised the mobile commerce industry, cloud computing is bringing similar far-reaching changes to the financial services industry. And with the right data centre strategy in place, investment firms can break through any remaining barriers to adoption.

www.equinix.com

*www.aitegroup.com/report/capital-markets-cloud-not-such-gray-area

©BestExecution 2014


Buyside challenges : Technology infrastructure : SaaS


INVESTMENT MANAGEMENT: “IT’S THE TECHNOLOGY, STUPID”


Neil Smyth, Marketing and Technology Director at StatPro.

 

New assets come with new challenges

Assets under management are at an all-time high. In 2013 they stood at $68.7 trillion, which represents a 13% increase over the pre-crash peak in 2007. The future also appears to be rosy for the asset management industry, with global AUM predicted to grow to over $100 trillion by 2020, but how do active managers stay competitive and relevant in a world where investors will have more choice than ever before? We're already seeing the traditional institutional investor looking outside the normal channels of vanilla asset management by ploughing funds into alternative assets, with hedge funds being the vehicle of choice. A recent survey found that pensions are the fastest growing investor segment and are the largest contributor to the growth of the hedge fund industry. Personal wealth is also on the increase. High net worth client assets are predicted to increase from $52.4 trillion in 2012 to $76.9 trillion in 2020. As the global economy continues to recover and emerging markets grow, the mass affluent client market is expected to increase from $59.5 trillion to over $100 trillion in 2020.

Where will these assets end up? What kind of investments and asset types will be dominant, and how can asset managers compete to ensure they are managing this money over someone else? Traditional active management is under threat from new asset classes and new lower cost investment vehicles. The huge growth in ETFs allows any investor to gain access to markets and market segments without the need for active management. It's easier than ever to diversify your investments using passive means. The growth in passive investing is not just restricted to 'Joe Public' either. A 2013 survey by Ignites of 1,001 investment professionals, many of whom make their living promoting active management products, found that two thirds have invested a sizeable amount of their own money in passive products, with only one in five saying they avoided passive products altogether. Having easy access to these new channels, and in some cases only paying four basis points for holding them, presents a real challenge to margins and profitability in the active management market.

It's obvious that there is real pressure on margins and profitability within the capital markets industry in general. Regulations keep on coming, and many directly affect the asset management industry and drive up costs. A recent KPMG paper states that the asset management industry is investing heavily in compliance, on average spending more than seven percent of total operating costs on compliance technology, headcount or strategy. Based on extrapolations from their data, KPMG believes that compliance is costing the industry more than $3 billion. This new wave of regulation affects all aspects of the asset management business, from the advice firms provide and the way they trade securities, to the way they report on performance and how they market themselves to investors. Keeping up with new regulations and ensuring effective implementation is a real challenge, and the impact of non-compliance has never been more serious.

Technology infrastructure – need for change

Is the existing technology infrastructure and application landscape within the average asset manager up to the task of ensuring cost-effective management of increasing data volumes, reporting and regulatory pressure? I think not. With two thirds of asset management firms relying on technology from 2006 or before, what chance do they have of competing for new capital while managing the cost and margin pressures we've already discussed? Many systems exist as standalone solutions: they perform certain tasks and produce data, but they don't consider a more complete workflow and were not designed to integrate with multiple systems on a single data set. This traditional infrastructure creates silos of data that add no value, because the data is immobile, difficult to share and cannot be acted upon by the right business users in a timely fashion.

Along with budget pressure, a growing desire to outsource non-core activities and this dislocation of data, the priorities regarding software development and infrastructure deployment are undergoing a major transformation within our industry. The IT strategy needed to meet today's challenges must be based on an open architecture framework; elastic, scalable and on-demand hardware; collaborative workflows; and new service delivery models such as Software as a Service (SaaS) and cloud. These new service delivery models mean new external partnerships. This switches the priority of internal IT teams from managing hardware and software development projects to managed services and external partner and service level management. CEB TowerGroup estimates that the majority of applications could be delivered via alternative methods as early as 2016. The short-term implication is that investment in IT capabilities now assumes that traditional hosted and on-premise solutions are the last resort.

There is no doubt that existing hardware and software platforms are going to be replaced. The speed of that replacement is variable, but the trend towards outsourcing is continuing and makes sense as the new delivery models fit with the growing desire to streamline IT infrastructure and operations. This desire is being fuelled by the focus on core business activities rather than support operations such as IT. This new ability to consume technology as a service means the internal IT ‘power stations’ will no longer be required at such a scale as they are today. Think of this migration to managed services and outsourced infrastructure like the move to the power grid, instead of owning and operating a power station attached to your factory.

These traditional local IT 'power stations' are not up to the task when it comes to the growth in data volumes, the sheer amount of analytical calculations needed for today's multiple asset class portfolios, and the compliance reports needed to satisfy regulators on a daily basis. Legacy applications and the architecture behind them were simply not designed to handle the scale and complexity found all across the asset management world today. Even if a system went live in 2010, it was probably architected in 2006 and developed in 2007/8 before going through an 18-month installation project. It was designed before the very first iPhone, so what chance does it have of coping with today's requirements?! These on-premise applications were designed to run on single servers or, at best, a small cluster of servers. The issue here is that eventually the system will plateau: it simply won't handle any more data and it cannot produce the output any faster. Adding more data simply extends the processing window. Adding more processing power or memory doesn't make any difference once you reach this point. The problem lies in the underlying architecture and design of the software. How many times do you experience delays waiting for a system to recalculate results? Or, if there is a data issue, waiting for an overnight process to catch up after the correction has been made?

Many IT departments have invested heavily in 'virtualisation' – the ability to create multiple virtual computers all living on one large pool of hardware. This enables flexibility in managing servers and also helps with portability and disaster recovery. It also reduces costs, because you get full utilisation from your hardware, allowing you to get more from less. The problem is that this infrastructure enhancement doesn't address the issue of the software application itself. If the software isn't designed to scale then it is still going to plateau on a virtual server the same as it does on a physical one.

The solution is scalable, multi-tenant software. Software needs to be able to scale out over many servers (even hundreds). Think about Google. Do you think the whole thing runs on one or two monster servers somewhere, or does it scale out across tens of thousands of smaller machines that each perform smaller tasks? Scaling out gives an application the ability to scale with business requirements. The cloud is where this scalability works best. Infrastructure as a Service (IaaS) providers like Amazon are able to power up thousands of virtual servers in a matter of seconds to meet the demands of an application during heavy loads or time-critical calculations. They are simply powered off when not needed, and the application owner is charged on an hourly basis for the servers they use. To put some real dollar values on this, Amazon charges $1.68 an hour for a server with 32 processors, 60 gigabytes of memory and 320 gigabytes of super-fast storage. These economies of scale are impossible to match with local on-premise IT infrastructure. Multi-tenancy is also a key architectural element in the next generation of applications, especially from external providers (see more on multi-tenancy below). Cloud-based SaaS applications are constantly being upgraded behind the scenes, which abstracts business users from the pain of software upgrades.
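The scale-out pattern and the cost arithmetic above can be made concrete with a small sketch: split a large revaluation job into chunks and farm them out to workers, the same pattern whether the workers are local processes or cloud servers. The chunk size, worker count and positions are invented for the example; only the $1.68-per-hour figure comes from the text.

```python
from concurrent.futures import ProcessPoolExecutor

def revalue_chunk(positions):
    """Stand-in for a heavy analytics task run on one worker (illustrative)."""
    return sum(qty * price for qty, price in positions)

def scale_out(positions, workers=8, chunk_size=10_000):
    """Scale out by splitting the workload across processes; spreading the
    chunks across cloud servers follows the same shape."""
    chunks = [positions[i:i + chunk_size] for i in range(0, len(positions), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(revalue_chunk, chunks))

if __name__ == "__main__":
    book = [(100, 10.0)] * 100_000
    print(scale_out(book))            # 100,000,000.0

    # Cost check on the article's figure: 100 such servers at $1.68/hour
    # for a 30-minute batch window costs 100 * 1.68 * 0.5 = $84.
    print(100 * 1.68 * 0.5)
```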

Changing vendor landscape – the move to SaaS

The changes in technology and the deployment of services are also having a tremendous impact on the technology vendors that supply the asset management industry. Many have come from the world of the large software deployment project and the support and maintenance that goes with it. This world only works for so long before the cost of innovation becomes too high. Costs rise, and service levels and innovation drop, because too much time is spent servicing outdated deployments of 'land based' software that may have been developed many years ago. Software deployed on-premise with a client instantly leaves the control of the vendor: the vendor isn't in control of the environment or of any customisations made locally. Supporting this structure when you have hundreds of clients quickly becomes very difficult and very expensive.

This results in poor support, poor service levels and cost increases, which eventually get passed on to the end client. The level of innovation drops because the technology vendor has to maintain old versions of software. A high percentage of developer resources is wasted on fixing issues in old versions that are still live and in production with clients, instead of focussing on new versions and improvements. Having the ability to focus on a single version of a technology solution, no matter how many end clients there are, is a huge productivity bonus for the technology vendor. These productivity and innovation gains mean that development companies in all industries are switching to multi-tenant architectures with the cloud as the delivery mechanism. Certainly new software start-ups are able to benefit from day one using this new approach. This will eventually drive vendors who fail to adapt out of business altogether.

While the business advantages of a streamlined IT strategy with outsourced partners are becoming clearer for asset managers, there are key questions of control and data security that must be addressed. It seems every month there is a new headline story of a 'cloud hack' or data security breach somewhere, and this damages the reputation of cloud vendors and SaaS developers. What is the reality, and what can be done to ensure control is maintained and data is secure? Classifying data and the business importance of various services helps to create a framework for assessing the readiness of internal applications for alternative methods of delivery. Not all applications are mission critical and not all data is ultra-sensitive, so it's important to map the requirements to the reality, rather than having a single policy that prevents any of the business benefits from being realised. Comparing new cloud-based SaaS applications with existing on-premise solutions will quickly lead anyone to realise that pure play SaaS applications are more secure. Security is built in from the ground up, not added on at the end of the development process, which is the case for many on-premise applications.

Choosing a vendor: Pure play cloud vs fake ‘cloudwash’ – 5 key questions

But what is a pure play SaaS application and how can you compare one service versus another and why should you care? Well, the simple reason you should care is that fake ‘cloudwash’ applications do not deliver all the benefits you may be expecting. You may be sleepwalking into another IT project that is as flexible as an oil tanker and as cost effective as an English Premier League football team. Here are five questions you should be asking the application vendor and the answers you should look for to help you spot the difference.

 

1. How many versions of the system are in production?
  The answer should be one.

Why? True SaaS applications are single version, no matter how many clients are using the system. This is called multi-tenancy. Think of it like a five star hotel: lots of private rooms with security and even safes in the room, but they're all in the same building. They share the pool and the bar, and if the hotel upgrades its facilities then everyone benefits at the same time. All the rooms are secure, but the hotel can get access, with permission, to clean and service your room. If your solution has multiple versions then you have 'the software grid of death' – lots of live installations with different versions everywhere. This is bad because it means the software vendor has to maintain all their clients while trying to write new versions of the software. They must divert resources to support legacy implementations instead of focusing on innovation.
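The "one building, many private rooms" idea can be sketched in a few lines: one shared application and database, with every row tagged by tenant and every query scoped to it. The table, column and client names below are invented for the illustration and are not StatPro's schema.

```python
# Minimal multi-tenancy sketch: one shared database, with every query scoped
# by tenant so each client only ever sees its own rows.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE portfolios (tenant_id TEXT, name TEXT, nav REAL)")
db.executemany("INSERT INTO portfolios VALUES (?, ?, ?)", [
    ("client_a", "Global Equity", 120.5),
    ("client_b", "EM Debt", 88.1),
])

def portfolios_for(tenant_id):
    # The tenant_id filter is the 'room key': tenants share one version of
    # the software and one pool of hardware, but only see their own data.
    return db.execute(
        "SELECT name, nav FROM portfolios WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(portfolios_for("client_a"))   # [('Global Equity', 120.5)]
```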

 

2. Does the service run on shared hardware or your own dedicated server?
The answer should be shared hardware.

 

Why? Sounds counter-intuitive, you want your system on your own hardware right? Wrong. A true cloud-based SaaS application is scalable. It has access to hardware that is elastic. It can grow as more demand is placed on it. You simply ‘spin up’ new servers and keep on scaling. True SaaS applications, like StatPro Revolution, have been written and built especially to utilise hardware in this way. Applications pretending to be real SaaS still sit on side-by-side servers that cannot scale.

 

3. What is the update schedule? How many updates get released each year?
It should be frequent – at least five or six updates a year.

 

Why? With SaaS you're paying for a service. The beauty of a real SaaS application is that it's much easier for the software vendor to issue new updates and improvements, because they only have to do this on one environment. SaaS vendors don't have to manage IT upgrade projects, so they do what they do best: make great applications. You may think that too many new versions could be a problem. How can I keep up? How can I test all these changes? Testing everything is an old legacy process that you may be used to with traditional software. You don't need to test everything every time there is a new release, because releases come in smaller, bite-size pieces. When you get a huge dump of new code every 12 months you need to test many things. With SaaS and the cloud, you don't: you simply log in and start using new functionality.

 

4. Is the platform secure? Has it passed external security audits?
Well, obviously the answer should be yes and yes.

 

Security in pure SaaS applications is paramount because of the multi-tenant features and the fact that they are designed to be internet facing. It's like building a house and then thinking about adding a security alarm afterwards, versus building a castle and designing it to be secure by digging a moat and adding a drawbridge from the very start. Passing well known industry audits is key to demonstrating a vendor's competency and commitment to information security. Look for things like ISO27001:2013 and SSAE16 certifications. Also ask about penetration tests – not quite as painful as they sound, but they effectively involve a specialist security company ethically hacking your service to report on possible vulnerabilities. These are an essential part of maintaining a secure service.

5. Does the system play nicely with others?
Yes, pure cloud SaaS applications think about integration with other systems.

 

Why? Asset managers use many applications and they all generate data in some form or another. It's very common to share data between applications and to integrate them together. Data is the essential ingredient in many systems and it can be a very time-consuming and expensive thing to manage. Cloudwash systems that are not true SaaS multi-tenant applications often have multiple versions with various release cycles (see question 1). They don't always have easy ways to integrate, leaving you to build your own solution to get the system to talk to anything other than itself. If you have to build your own integration solutions, you may get left behind when a new version is released that breaks your integration and leaves you with a big problem. Pure SaaS-based applications often include a web API. This is an interface that allows you to connect with all your local applications and other cloud-based platforms. This interface is supported by the software vendor and you're never left alone to maintain it yourself. As new versions are released you simply get access to more data that you can integrate with other systems.
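As an illustration of that integration pattern, the sketch below pulls analytics for one portfolio over a web API so the result can be loaded into local systems. The base URL, path, token and field names are placeholders invented for the example; they are not StatPro's actual API.

```python
import requests

# Hypothetical endpoint and token: the URL, path and parameters below are
# placeholders for illustration only, not a real vendor API.
BASE_URL = "https://api.example-saas.com/v1"
TOKEN = "YOUR_API_TOKEN"

def fetch_portfolio_analytics(portfolio_id):
    """Pull analytics for one portfolio so it can be fed into local systems."""
    response = requests.get(
        f"{BASE_URL}/portfolios/{portfolio_id}/analytics",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Downstream, the returned JSON can be loaded into a local database or passed
# to another cloud platform - the integration pattern described above.
```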

 

Conclusion

It's clear that the investment management industry needs to replace legacy IT systems in order to stay competitive. This isn't something that will take place overnight, but we have seen how on-premise and traditional hosted solutions can't begin to scale or meet the business requirements of today's asset manager. Regulation is here to stay and investors are looking for returns from a wider array of asset classes. All this creates data volume and pressures on reporting and transparency. A new generation of IT and software is required to meet these demands and to bring greater levels of agility and collaboration to the industry. As a portfolio analytics provider, StatPro began planning its journey to the cloud in 2008 and we're confident this was the right decision as we continue to bring pure SaaS-based applications to the market.

 

www.statpro.com

 

©BestExecution 2014


 

Buyside challenges : Post-trade : Reconciliations


ALL ROADS LEAD TO A SINGLE SOLUTION


Tim Martin, SVP Product Management, SmartStream.

The quest for improved internal controls, as well as a growing body of regulation, has driven a rise in the volume of reconciliations and, in particular, an increase in the number of inter-system reconciliations. To support these, financial institutions have put in place a series of tactical solutions. While tactical fixes have clearly served an important end, I would argue that in the future, organisations should consider returning to a single, consolidated platform that is capable of meeting all their reconciliation requirements.

Five to ten years ago, the financial industry witnessed a drive amongst organisations to consolidate core reconciliations on to a single platform. Technology firms responded with appropriate solutions. Recently, the move towards consolidation has been interrupted, as regulation and more stringent internal controls have driven institutions to create a plethora of inter-system reconciliations in addition to existing core reconciliations. Some financial institutions have brought these new reconciliations on to their main platform, while others have created a series of tactical solutions, leaving them with a core platform as well as one or more tactical solutions – often in the form of Excel spreadsheets.

So why have financial institutions simply not brought these tactical reconciliations on to their main platform? The answer lies in the fact that requirements have moved on: existing practices and methodologies do not fit current demands, often leaving organisations with a considerable backlog of reconciliations to migrate.

Piling added pressure on financial institutions are business users, who want more control over the configuration and operation of their reconciliations. This is largely a reaction to the lead times users face when waiting for their reconciliations to be configured elsewhere (typically by IT departments), as well as a response to the difficulty in sourcing data. Ironically, rather than building and running their own reconciliations, end users would in fact prefer less involvement. What they actually require is a service that provides them with reconciliation results and allows them to focus solely on managing exceptions. However, until this can be achieved they will continue to require more control.

In addition to the previously mentioned pressures, banks are also experiencing a greater need for ad hoc reconciliations. These may arise as a result of a system upgrade, where it is necessary to confirm a migration has occurred correctly, or, alternatively, be as simple as providing one-off comparisons of a pair of data sets. For these reconciliations there is often no need to persist all the data; what is important is having a snapshot of the results.

Given these diverse requirements does consolidation on to a single reconciliation solution still remain a desirable goal? At SmartStream, we believe this still to be the case. Importantly, a single platform for all reconciliation processing allows the reduction of infrastructure costs, along with those of solution maintenance and monitoring. As a common approach can be taken towards all tasks, no specialist training is needed for different users e.g. IT staff, administrative personnel or business users, thereby also reducing overheads. A single solution makes it possible to standardise all reconciliations, create a common set of procedures and make more efficient use of resources e.g. by putting in place a core team that can easily switch between different reconciliations.

So how do organisations best consolidate the existing core reconciliations deployed in their strategic solution with the reconciliations contained in the tactical fixes devised to satisfy new requirements? And what are the essential features the resulting system should possess?

At SmartStream, we believe that a reconciliation system must be capable of processing large data sets and have powerful matching capabilities. In addition, it should be flexible enough to allow a wider set of users to create and manage reconciliations. It is essential that it enables faster and simpler on-boarding.

We have responded to industry demands by enhancing the loading and matching capabilities of Transaction Lifecycle Management (TLM) Reconciliations Premium, which, in turn, has enabled us to deliver the TLM SmartRecs configuration component. The new TLM SmartRecs module allows reconciliations to be built and run by business users, with the results being presented directly to users.

Not only can on-boarding times be significantly improved, but a far wider group of users is able to build and manage reconciliations than has traditionally been possible. Additionally, the changes made have enabled us to support end-to-end in-memory reconciliations where there is no need to persist the results.
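To illustrate the kind of in-memory processing described, the sketch below matches two data sets on a key, compares one value field and returns only the exceptions, with nothing persisted. The field names and tolerance are assumptions for the example, not TLM Reconciliations Premium behaviour.

```python
def reconcile(internal, external, key="trade_id", field="amount", tolerance=0.01):
    """In-memory two-way reconciliation sketch: match on a key, compare a value
    field and return only the exceptions (nothing is persisted)."""
    ext = {row[key]: row for row in external}
    breaks = []
    for row in internal:
        other = ext.pop(row[key], None)
        if other is None:
            breaks.append(("missing in external", row))
        elif abs(row[field] - other[field]) > tolerance:
            breaks.append(("value break", row, other))
    breaks.extend(("missing in internal", row) for row in ext.values())
    return breaks

side_a = [{"trade_id": 1, "amount": 100.0}, {"trade_id": 2, "amount": 50.0}]
side_b = [{"trade_id": 1, "amount": 100.0}, {"trade_id": 2, "amount": 49.5}]
print(reconcile(side_a, side_b))   # one value break on trade 2
```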

Our technology allows organisations to go one step further than simply improving on-boarding times and opening up reconciliation configuration and management to a wider audience – although these are highly important goals. It provides the potential to combine individual reconciliations to form more business-focussed solutions that bring together common components.

An example of an area in which this approach could be taken is that of ETD and OTC reconciliations. Here, organisations usually reconcile open contracts, buys/sells and cash flows independently. Drawing these together into a Total Equity Proof would be far more beneficial to business users, who could then drill down from a currency to see any cash or contract breaks that roll up into a top-level currency difference.
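A minimal sketch of that roll-up is shown below: individual cash and contract breaks are summed to a single difference per currency, which is the figure a user would then drill back down from. The break records and field names are invented for the illustration.

```python
from collections import defaultdict

def currency_rollup(breaks):
    """Roll individual cash/contract breaks up to one difference per currency,
    illustrating the 'Total Equity Proof' drill-down described above."""
    totals = defaultdict(float)
    for b in breaks:
        totals[b["currency"]] += b["difference"]
    return dict(totals)

breaks = [
    {"currency": "USD", "type": "cash",     "difference": -150.0},
    {"currency": "USD", "type": "contract", "difference":   25.0},
    {"currency": "EUR", "type": "cash",     "difference":   10.0},
]
print(currency_rollup(breaks))   # {'USD': -125.0, 'EUR': 10.0}
```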

Once an organisation had brought these types of individual reconciliations on to TLM Reconciliations Premium, it would be a simple next step to combine them into a single proof view. By employing this type of approach it should be possible not only to consolidate reconciliations – while providing flexibility at deployment and at run time – but also to achieve further, longer-term cost-efficiency and operational benefits.

www.smartstream-stp.com

 ©BestExecution 2014


Buyside challenges : Trading : Performance metrics

WHO IS MY PEER?


Darren Toulson, Head of Research, LiquidMetrix.

Following any presentation to a buyside firm of the top-line performance metrics in a TCA / Execution Quality Report, the natural question is: "So, is this performance good or bad?"

An absolute implementation shortfall (IS) number of 8.6 BPS may sound good – most people talk about IS numbers in double digits – but is it really good? Or is it simply that the types of orders you are doing are relatively 'easy'? Similarly, if broker A has a shortfall of 4.6 BPS and broker B has a shortfall of 12.6 BPS, is this simply because you've trusted broker B with your tougher orders?
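For readers who want the 8.6 BPS figure made concrete, the sketch below computes implementation shortfall in basis points against the arrival price using the standard textbook definition; the prices are invented simply to reproduce the number in the text.

```python
def implementation_shortfall_bps(side, arrival_price, avg_exec_price):
    """Implementation shortfall in basis points versus the arrival price;
    positive values are a cost. A standard textbook definition, shown here
    only to make the numbers in the text concrete."""
    sign = 1 if side == "buy" else -1
    return sign * (avg_exec_price - arrival_price) / arrival_price * 10_000

# A buy order arriving at 100.00 and filling at an average of 100.086
print(round(implementation_shortfall_bps("buy", 100.00, 100.086), 1))   # 8.6 bps
```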

One approach to the difficulty of interpreting performance metrics is to compare each of your order’s outcomes to some kind of pre-trade estimate of IS and risk, based on characteristics of your order such as the instrument traded, percentage ADV etc. This may work fairly well when looking at the relative performance of brokers for your own flow, where you can take relative order difficulty into account. However, in terms of your overall, market-wide TCA performance, how can you be sure that the pre-trade estimates you’re using are at the right absolute levels? Many pre-trade estimates themselves come from broker models. How realistic are these for all brokers’ trading styles and how can you be certain that they’re not over or under estimating costs relative to the market as a whole and thus giving you a false picture of your real performance?

To get an idea of how well your orders are performing versus the market, you need to compare your own performance to orders done by other buysides.

The principal difficulty with any kind of peer comparison lies in the fact that you will be comparing your orders with orders from many different buysides; each with different investment styles in different types of instruments, given to different brokers with different algorithms. So if you compare your orders to some kind of ‘market average’ how meaningful is it? Who exactly are your peers?

For the results to be meaningful and believable, it's necessary to be open about how exactly peer comparisons are constructed, so as to be sure that we're comparing like with like.

Order similarity metrics

The starting point in any type of peer analysis is that your orders should be measured against other ‘similar’ orders. But how do we measure similarity?

Consider an order to buy 6% ADV of a European mid-cap stock, starting at around 11am and finishing no later than end of day. Apart from the basics of the order, pre- or post-trade we can also determine many other details such as: the average on-book spread of the stock being traded, the price movement from open of the trading day to start of the order, the price movement in this stock the previous day, the annual daily volatility of the stock, the average amount of resting lit liquidity on top of the order book for this stock and the number of venues it trades on, etc. It’s easy to come up with a list with many different potential ‘features’ that can be extracted and used to characterise an order.

Each of these features may or may not be helpful in determining how ‘easy’ it might be to execute an order at a price close to the arrival price. Some features make intuitive sense. Trying to execute 100% ADV in a highly volatile stock with a wide spread is likely to be much more expensive and risky than executing 0.1% ADV in a highly liquid stock with tight spreads and little volatility. But how relatively important might yesterday’s trading volume in the stock, or market beta be in predicting trade costs? Which are the best features to use?

Assume we’ve come up with 100 different potential features that might help characterise an order. If we’re designing a similarity metric that uses these features to find similar orders to compare ourselves to, we need to do one or more of the following:

•  Identify which amongst the 100 features are best at predicting outcomes such as Implementation Shortfall (Feature Selection).

•  Combine some of our selected features to reduce any data redundancy or duplication of features which are telling us basically the same thing (Dimension Reduction).

•  Come up with a weighting of how important each remaining feature is when looking for similar orders (Supervised Statistical Learning).

The good news is that there are decades of academic research on how to do most of the above. Methods such as stepwise regression, principal components analysis, discriminant analysis and statistical learning (KNN, Support Vector Machines, Neural Nets) all lend themselves well to this type of analysis.
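To make the idea concrete, here is a minimal Python sketch (using scikit-learn) of two of the steps above: a univariate F-test to rank candidate features against realised IS (feature selection), followed by principal components analysis (dimension reduction). The feature names and data are invented for the example and are not LiquidMetrix’s actual models or inputs.

```python
# Illustrative only: rank hypothetical order features against realised IS,
# then compress the standardised features with PCA.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(0)
n = 5000
features = {
    "pct_adv":       rng.lognormal(0.0, 1.0, n),   # order size as % of ADV
    "spread_bps":    rng.lognormal(1.0, 0.5, n),   # average on-book spread
    "volatility":    rng.lognormal(-1.0, 0.5, n),  # annualised volatility
    "prev_day_move": rng.normal(0.0, 0.02, n),     # previous day's price move
}
X = np.column_stack(list(features.values()))
# Synthetic IS outcomes (bps), loosely driven by size, spread and volatility
is_bps = (2.0 + 1.5 * features["pct_adv"] + 0.8 * features["spread_bps"]
          + 3.0 * features["volatility"] + rng.normal(0.0, 5.0, n))

# Feature selection: univariate F-test of each candidate feature against IS
f_stat, p_val = f_regression(X, is_bps)
for name, f, p in zip(features, f_stat, p_val):
    print(f"{name:15s} F = {f:8.1f}   p = {p:.2g}")

# Dimension reduction: keep enough components to explain ~95% of the variance
X_reduced = PCA(n_components=0.95).fit_transform(StandardScaler().fit_transform(X))
print("components retained:", X_reduced.shape[1])
```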

The upshot of all this is that for each buyside order we wish to analyse, we produce a similarity measure that can be used to find, from a market-wide order-outcome database, a set of the, say, 100 most-similar orders done by other buysides. They do not necessarily have to be orders done on the same stock, just on the same type of stock. Assuming we’ve done our analysis well, the TCA outcomes of these orders should represent a good target to compare our order with. If we do this for each of our orders, we discover how well we’ve really done versus ‘The Market’.
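In the same illustrative spirit, the matching step itself could be as simple as a nearest-neighbour search in the standardised (or PCA-reduced) feature space: for each client order, find the 100 most-similar market-wide orders and compare IS outcomes. Again, the data, dimensions and thresholds below are hypothetical, not a description of any vendor’s production system.

```python
# Illustrative only: for each client order, find the 100 most-similar
# market-wide orders in feature space and compare IS outcomes.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n_client, n_market, n_features = 500, 50_000, 5

# Stand-ins for standardised/PCA-reduced feature vectors and realised IS (bps)
client_X  = rng.normal(size=(n_client, n_features))
market_X  = rng.normal(size=(n_market, n_features))
client_is = rng.normal(8.0, 10.0, n_client)
market_is = rng.normal(10.0, 12.0, n_market)

nn = NearestNeighbors(n_neighbors=100).fit(market_X)
_, idx = nn.kneighbors(client_X)          # idx[i] = the 100 nearest peer orders

peer_is = market_is[idx].mean(axis=1)     # average peer IS per client order
print(f"client mean IS:       {client_is.mean():6.2f} bps")
print(f"matched-peer mean IS: {peer_is.mean():6.2f} bps")
```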

An example of peer analysis done well

What might this kind of analysis look like in practice? Figure 1 shows one way of presenting peer results. A set of client orders has been matched, using a similarity metric as described above, against a market-wide order database. In this example we’re looking at implementation shortfall (IS); other TCA metrics such as VWAP or post-order reversion can be examined in exactly the same way. Using the matched orders we can compare the distribution of our buyside client’s IS outcomes with the distribution of market-wide IS outcomes (shown in green in the figure).

This tells us how well both the IS and risk (standard deviation of outcome) of our client orders compare to the market average for similar orders. Based on the number of orders analysed and a measure similar to a ‘t test’ we can then also translate the differences in performance into a significance scale, from 0 to 100, to qualify how much better or worse than average a client’s performance is.
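The article does not spell out how the 0 to 100 scale is built, but one plausible construction – shown below purely as an assumption for illustration – is to run a one-sample t-test on the paired differences between each client order’s IS and its matched-peer average, and map the resulting tail probability onto a percentage.

```python
# Illustrative only: translate client-vs-peer IS differences into a
# 0-100 significance score via a one-sample t-test on paired differences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-ins for per-order IS and the matched-peer average IS (bps);
# in practice these would come from the matching step described above.
client_is = rng.normal(8.0, 10.0, 500)
peer_is   = rng.normal(10.0, 4.0, 500)

diff = client_is - peer_is                     # negative = client beat its peers
res = stats.ttest_1samp(diff, 0.0)             # H0: no difference vs peers
left_tail = stats.t.cdf(res.statistic, df=len(diff) - 1)
score = 100.0 * (1.0 - left_tail)              # ~100 = significantly better, ~0 = worse
print(f"t = {res.statistic:.2f}, significance score = {score:.1f} / 100")
```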

Conclusion

A fundamental question buysides want answered by any kind of top-level TCA analysis is how the costs associated with their orders compare to those of their peers. The danger with any kind of peer analysis lies in whether you can be certain that you really are being compared fairly to other participants. The solution is to ensure that any peer comparison only compares orders with like orders, rather than orders from similar companies, preferably drawing on a large database of orders from many different buysides.

www.liquidmetrix.com

 

© BestExecution 2014


 

Buyside challenges : An overview


Buyside Technology challenges


Anita Karppi, managing director of buyside consultancy K&K Global Consulting, gives a summary of her firm’s findings on the latest industry developments.

Trading technology

Although finding the right, safe trading systems and platforms has been a long-running theme in buyside circles, there have recently been some promising resolutions.

For example, equity traders have been concerned about predatory HFT (high frequency trading), gaming and information leakage for some time. With the recent publication of a ‘certain book’, as well as brokers facing legal challenges over the advertised levels of HFT in their dark pools, there is now greater transparency into how their non-lit pools function. In parallel, this creates further business opportunities for independent venues and crossing networks. There is a common understanding among the buyside that there will be an increasing amount of block trading in the future and that they will need the right, optimised tools to ease high-touch trading.

Within the fixed income market space, there are now over 40 new trading technology projects in development to unlock liquidity in the OTC market, and this figure keeps increasing. The buyside are now less interested in the details than in what their peers are going to purchase. There is a consensus that there will be a handful of leading platform suppliers in the future and that technology will change the way bonds are traded beyond just the request for quote (RFQ) and central limit order book protocols. Pre-trade, dark trading capabilities are required within the platforms to protect investors and give priority to electronic block trading.

Fixed income traders are also challenged by the lack of pre-trade transparency of available liquidity. This is significantly amplified by the fact that there are too many bond types, and a lack of standardisation limits the frequency and types of bond issuance. At K&K Global Consulting (K&KGC) we are encouraging the industry to convince regulators to mandate the standardisation of bond issuance and narrow the stream of trading, which would be in the best interest of the retail investor. While the buyside want to continue trading bonds with the major banks, they are less keen on adopting single-broker platforms unless there is some form of aggregation capability to unify the liquidity from multiple brokers into a single interface.

In FX, a growing number of crossing networks is triggering buyside concerns that FX trading is evolving towards the same type of fragmentation challenges as in equities. The buyside are reviewing their choice of order management systems (OMS) and are concerned about potential hidden charges within the bid/offer spread charged by the vendor to the brokers. Against this backdrop, compliance teams have become increasingly supportive in the purchasing process for new technologies. Their key interest is in unifying trading across all asset classes on a single trading platform in order to improve regulatory monitoring and reporting capabilities. Head traders, however, are concerned that such cross-asset platforms will lack the cutting-edge features required to deliver the best possible results for their clients in each asset class.

Transaction Cost Analysis (TCA)

We have seen increasing adoption of TCA within equities, but the key drivers and the use of TCA still vary significantly between UK and continental European buyside traders. For example, the internal investment and trading teams in UK-based firms make far greater use of the functionality of the TCA systems they employ. However, it also appears that some of the available TCA systems for equities don’t actually fulfil the minimum required specifications that we are outlining in our upcoming ‘Buyside Perspectives’ report. One of the key drivers mentioned for TCA across equities, fixed income and foreign exchange is “client requests”, and we believe buyside firms would benefit from being more pro-active in using TCA reports to demonstrate to their clients that they have a process in place to monitor and review their execution arrangements. There are still uncertainties around what the standard minimum TCA system requirements for fixed income and FX should look like, but K&KGC is researching and working with the buyside on a definition which will be published in a separate TCA report.

Across all three asset classes there are varying levels of challenge in accessing the necessary market data. However, we found that well-resourced buyside trading desks have already found ways to resolve some of these data challenges, even within fixed income, with their own in-house TCA systems and databases. It is our aim to present such solutions to the wider community and enable vendors to develop systems that can be afforded by less well-resourced buyside trading desks.

K&KGC is a buyside consultancy which organises private, exclusive, invitation-only roundtable debates and unbiased, independent peer research for senior and head traders within long-only asset management firms and tier 1 hedge funds. The Alpha Trader Forum (ATF) and Asia Buyside Forum (ABF) debates cover equities, fixed income and foreign exchange across Europe and Asia (London, Paris, Frankfurt, Stockholm, Copenhagen, Singapore/Hong Kong). The Buyside Perspectives research reports cover the most important issues and challenges for the trading desk. K&KGC has also built an exclusive buyside-only web portal – buysideintel.com.

© BestExecution 2014

 

Buyside challenges : Trading : Best execution


Open to interpretation

While MiFID II may not have furthered the definition of best execution, it remains a topic of discussion. Best Execution’s Lynn Strongin Dodds asks three leading buyside traders for their views.

Neil Joseph, Co-chair of EMEA FIX Trading Community and Senior Trader, JP Morgan Asset Management

What does ‘best execution’ mean and are expectations being met?

Best execution is the requirement to take all reasonable steps to obtain the best possible result for an investment firm’s client. From our perspective, the requirement to deliver and monitor our attainment of best execution is a fundamental part of the fiduciary duties owed to our clients. By staying at the forefront of technology and abreast of market structure and regulatory changes, we maintain a suite of tools and skills to ensure that our activities when placing a client trade in the marketplace obtain the best possible result. In addition to this we invest significant budget and time into trade analytics to monitor execution.

Has best execution improved since MiFID?

The concept of obtaining the best possible result for our clients is a simple one, which is enshrined in FCA Principle 6: Customers’ Interests. The implementation of MiFID in 2007 did not really change the concept of “doing the right thing for your clients”, but it did introduce a framework that required firms to set out written policies on delivering best execution, and to put thought into designing systems and controls for monitoring and evidencing execution standards. Additionally, MiFID was the catalyst for the proliferation of new trading venues. MiFID may have partly prompted the market advances and developments of the last six years, including the introduction and improvement of pre- and post-trade analytics, as well as encouraging different ways of accessing liquidity under a regulated framework via multilateral trading facilities. It is the proliferation of trading tools, along with advances in the analytical tools for monitoring, that has driven the real gains for clients when firms discharge their execution duties in the marketplace.

What challenges are there to achieving best execution today?

The challenges are almost as varied or as limited as a firm chooses to define them. However, as we evolve targets and implement ever more ambitious monitoring frameworks, our challenges increase in number and complexity. For example, when brokers provide us with improved and varied methods of trading, ranging from new algorithmic strategies to numerous ways of accessing risk capital, a greater challenge is presented to firms to find better ways of monitoring and evaluating these new offerings, as well as building tools within our systems and controls to make better use of them. When market developments manifest themselves as quicker access to liquidity for client trades and more consistent decision-making in defining an appropriate strategy for implementing trades, the real winner is the underlying client.

Rob McGrath, Global Head of Trading, Schroders Investment Management

What does ‘best execution’ mean and are expectations being met?

For us it is about the processes and pre- and post-trade procedures that give us confidence that we can source liquidity efficiently. We have all the tools, and strong order management and execution management systems, and can adjust to current market conditions. We are also looking at how we can improve our execution. However, to me one of the real issues is the lack of a consolidated tape, and there was a hope that MiFID II would have addressed this problem.

Has best execution improved since MiFID?

There is an argument that regulation has helped improve best execution, but I also think there were other market forces at play. For example, volatility has come down and spreads have narrowed. You can try and figure out who is responsible. For example, some high frequency trading strategies have helped narrow the spreads but it is not just because of them. It is important to take into account all of the factors. Things can always be better and I attend many meetings as part of the buyside on this subject. We believe that it is important to be proactive and make suggestions on how things can improve to the regulators.

What challenges are there to achieving best execution today?

Fragmentation is certainly a problem especially for buyside firms who may not have as sophisticated tools and systems as the larger firms. Liquidity is also a big issue. One reason is that many of the high frequency traders that were there are now gone and to a certain extent the liquidity from the banks has also dried up. The other main problem as I mentioned before is the lack of a consolidated tape.

Michele Patron, Senior Quant Trader at AllianceBernstein

What does ‘best execution’ mean and are expectations being met?

For a buyside firm, ‘best execution’ is a natural and obvious obligation: given that the price achieved in the market feeds directly into the performance of the managed funds, there is full alignment of interest between fund manager and end client. Starting from this concept, it is clear that execution is an integral part of the alpha production process, and being able to achieve the ‘best’ outcome in the execution space benefits end clients in the same measure as a good investment decision does. At AllianceBernstein, we have streamlined the evaluation of our execution process in a highly systematic way, to achieve the best possible outcome for our clients.

Has best execution improved since MiFID?

Over the course of the years, we have seen the trading process move from a more “lumpy” activity to a continuous “stream”, for a variety of reasons: the increasing use of algos, which fragment parent orders into smaller pieces in an attempt to minimise footprint; the decreased ability of investment banks to offer block axes; and, clearly, the multiplication of trading venues introduced by MiFID. So, in a nutshell, I think that the trading process has changed, and buyside firms need to stay abreast of changing market microstructure.

What challenges are there to achieving best execution today?

It comes down to liquidity: the multiplication of venues, in terms of number and type of offerings, has made liquidity sourcing more challenging for the buyside. In addition, the trading world nowadays is extremely dynamic, hence continuous monitoring is needed to ensure an optimal outcome.

© BestExecution 2014

 

Buyside challenges : Trading : Operations


The third way

Jonas Hansbo, CEO of Tbricks, discusses how, in difficult conditions, buyside firms can control their trading operations and remain competitive.

What in your opinion are the key challenges facing the buyside today?

We are now starting to see global markets recover from the protracted weakness endured since the credit crunch of 2008, but market practitioners have returned to the fray to find a much tougher landscape than before the crisis. Tougher market conditions, allied to a constantly changing array of regulations, are adding complexity to already complex markets. In this environment it is imperative that you control your costs, otherwise you cannot remain competitive.

How does this impact the buyside’s trading systems?

Given this set of market circumstances, what’s emerging is a requirement for agile, cost-efficient and compliant trading platforms that are flexible enough in design to allow a rapid response to emerging opportunities while avoiding over-reliance on a single supplier.

In the current environment of squeezed margins and cost pressures, what should buyside firms do with their legacy systems? What happens if they cannot afford new systems?

The ability to act upon change, faster than the competition, is the very nature of trading. In today’s fast-moving markets, a closed-source, rigid trading technology can actually prevent a business from capturing new trading opportunities. However, implementing new functionality in such platforms is a major undertaking, and not only can upgrading take months but often advanced traders prefer to add new functionality at their own pace, rather than leaving such strategic decisions to an independent software vendor (ISV). These are some of the reasons why historically many banks have opted to build their own trading technology rather than buying from an ISV.

At Tbricks we enable a different approach, based upon the assumption that active market participants – such as traders and market makers – will always discover new trading opportunities before any software company. As a result, in addition to a complete trading platform, we also include innovative tools to help market professionals unleash their creativity. All business logic is delivered as separate apps along with complete source code, so our clients are completely in charge of their trading, while enjoying the benefits of a class-leading core platform. Clients use Tbricks’ trading apps out-of-the-box, customise them, or build proprietary apps using our apps as a blueprint. The result is a modern trading system with extremely high levels of performance, flexibility and agility.

If they have no choice but to implement new systems to stay competitive, what is the better route to follow – buy or build or is there a third approach?

There is a third – buy and build! With traditional trading technologies, change is slow, painful and often subject to the ISV’s release cycle. Implementing new functionality is messy, and upgrading to the next version can literally take years. No-one feels comfortable with placing their future in the hands of their ISV’s product management team, so this is why many banks choose to build their own trading system rather than buy. And that has always been the question: to buy or to build?

We believe our clients are always the best and fastest at predicting market trends, so the company decided to make the ‘buy vs build’ debate redundant by offering something different; a system that separated the core system from the business logic. That way, we could focus on delivering a state-of-the-art system, with optimised performance and powerful trading apps, to those who wanted an out-of-the-box system.

By delivering all apps complete with source code, we can also greatly improve time-to-market for those who wish to customise the system. Tbricks is a platform for the tech-savvy trader who needs to build complex algorithms and trade at high speeds.

What makes your offering truly innovative, and different from other vendors?

First off, our trading platform is ready to meet all the requirements I’ve mentioned out of the box. All functionality is available and ready to use on Day 1. This allows clients to take advantage of market opportunities straight away. More importantly, though, because all functionality is delivered as apps, customers are able to make the platform their own immediately.

In the same way iPhone users can customise their phones with their personal choice of apps, our clients can start building their own unique sets of functionality by adding and modifying apps to suit their needs. By freeing developers to build their own apps in the Tbricks environment, the platform is just the starting point to unlocking new capabilities, to better respond to market situations as they emerge and help differentiate from competitors’ offerings.

Who do you see as your competitors?

We have competitors on either side: the ISVs for off-the-shelf functionality, and the platform solutions or in-house systems for custom-built solutions. There is no true competitor that covers the full spectrum, from the market access layer to the end-user layer.

What major impacts will be wrought on the buyside from the current raft of regulation over the next five years, and how will your firm help buyside clients meet these challenges and leverage the opportunities?

Financial organisations will have to satisfy a wide range of internal and external stakeholders in the area of regulatory compliance, with initiatives such as the ESMA guidelines, MiFID II and MAD II placing new requirements on trading technology, including high levels of control, transparency and traceability.

Tbricks provides key features to help ensure compliance, including auditing, role-based access control, static and dynamic pre-trade risk limits, parameter contexts, hierarchical throughput limits, centralised authentication and real-time risk management.

In close co-operation with our clients’ compliance teams, the regulatory bodies and exchanges, our team of expert developers stay abreast of ongoing regulatory change, building new functionality to meet emerging requirements as they come to market. As a result, users never need to worry that their trading platform isn’t compliant with new rules and regulations across all the markets they trade in.
www.tbricks.com

© BestExecution 2014

 
