
POSITIONING FOR THE FUTURE: Gaining competitive advantage. A buy-side perspective.

By Richard Coulstock
The only certainty is change. Never has this been truer than in recent times. But change, Prudential Asset Management’s Richard Coulstock believes, brings with it the opportunity for the buy-side desk to gain competitive advantage, especially in Asia.
How do we position ourselves for the future and gain that competitive edge? The starting point for a buy-side dealing desk is to ensure that your role is recognized as an integral part of the overall investment process and not a stand-alone operation. People, workflows, systems and benchmarks should all be built on that crucial foundation.
When we look at people and workflows, we should understand that the days of having dealers acting merely as order clerks are long behind us. Dealers can begin to help themselves by raising their profile internally through engagement with their fund management teams. Contribute at meetings, listen to investment ideas, share flow information, and understand why trades are being placed, not just concentrate on how to execute.
We live in an age where the balance of power has swung away from sell-side traders to those on the buy-side. Historically, access to information and sophisticated trading systems with direct exchange access was the preserve of the sell-side. That is no longer the case. With direct market access, broker algorithms, crossing networks and sophisticated Order Management Systems (OMS), the role of a buy-side dealer carries more responsibility and opportunity than ever before.
These are responsibilities and opportunities we should eagerly embrace. The buy-side has been empowered, it is now up to us to seize the opportunity and prove our worth to our businesses by accepting responsibility for execution quality and by more firmly embedding our role into the overall investment process. There are a number of ways in which I believe dealing desks can demonstrate the growing importance of this critical role.
Dealing with the dark pools
I shall start by discussing the increased attention on buy-side dealers, largely led by regulatory issues – a good recent example being the proposed U.S. regulation regarding the use of ‘dark pools’. Publicity such as this increases focus from both clients and internal compliance departments on the role of a dealing desk.

It is our responsibility to be in a position where we know the pros and cons of dark pool utilization. Accessing dark pools can facilitate the very principle of ‘best execution’ demanded of us by regulators. It is important to remember that the principle of crossing stock off-exchange, with subsequent reporting to the exchange, is not new.
While the name can cause alarm, ‘dark pools’, in many ways, simply automate an existing process.
Another important principle to remember is that you will, or should, only trade on a dark pool if the terms of the trade facilitate ‘best execution’. Contrary to some reports, these are not uncontrollable electronic monsters that force you to trade on terms that are unfavourable to your underlying investors. Instead, as we seek to attain ‘best execution’, we should ensure that our desks are equipped to access all relevant liquidity pools, light or dark. To do otherwise is verging on negligence. Our responsibility is to be aware of suitable liquidity pools, current and proposed legislation in Asia and elsewhere, and to educate our own companies to gain approval for their use where appropriate.
Aside from existing dark pools, we need to be aware of other developments in the Asian trading landscape. The Chi-X/SGX announcement earlier in 2009 is a live situation we have a duty to monitor. We should look to the US and Europe for guidance on how changes in market structure will impact our roles. For example, according to Chi-X, on July 14 this year, it reached 22.65% of market share in FTSE 100 stocks and executed 92% of trades at, or better than, the primary market spread. Most remarkably, this has all happened within two years of MiFID. Could Chi-X have as dramatic an impact in the Asian arena?
While we need more details before a reasoned judgment can be made, it is safe to say this is a development we cannot afford to ignore. We have a duty to raise awareness of such developments within our own organizations and to ensure that, when the time comes, we are in a position to trade on any alternative exchanges where we believe we may find meaningful liquidity.
Despite operating in a multi-market and multi-regulatory environment that makes it harder for new entrants to gain a pan-Asian foothold, we have seen advances in the last two years, for example, with Liquidnet and Blocsec. Now, with the Chi-X/SGX announcement, we are surely likely to see more changes in the coming months and years.
Competition among established exchanges can be a good development as it may see lower overall execution costs. However, we should be wary of over-fragmentation. The proliferation of alternative execution venues in the US has been extreme, but the multi-market environment here should be a natural barrier to a similar situation arising in Asia.
Putting the right technology in place
When we consider these new alternative trading venues, we also have to ensure that our systems are robust enough to cope. It still surprises me when I hear of asset managers that are not FIX-enabled, despite the undeniable benefits of Straight-Through Processing (STP). Having a FIX-enabled order management system that allows you to access DMA and broker algorithms is a basic requirement of any professional dealing desk. An execution management system (EMS) would be even better, and there are a number of those to choose from, each with its own strengths and weaknesses and pricing structures.
An EMS helps dealers to monitor and manage trades in a number of ways. By incorporating DMA and algorithms, live market data, and pre and post-trade costs on the same blotter, an EMS should help your team execute more effectively. EMS development will continue and the end result may well be an EMS that, upon receiving a trade, will look at pre-trade data, decide on execution strategy/broker/algo, and then automatically send the trade with appropriate parameters. This may even change dynamically as the trade progresses and the market moves. This scenario is unlikely in the near term – and would risk putting many of us out of work – but it is development we need to monitor.
Measuring the value of execution
Another area where buy-side dealers can prove their worth is in the field of transaction cost analysis (TCA). We must benchmark ourselves in such a way as to demonstrate that execution is a crucial element of the overall investment process. Dealers must be aware that execution performance has an impact on underlying fund performance, and they should be prepared to be measured and accept responsibility for their results. Having a robust TCA process will become an area of focus for regulators and clients, both existing and potential. We already use TCA tools to measure our own dealing team, as well as our relationships with our counterparties. This is an important point. TCA is a measurement of broker relationships, not of broker performance. Any buy-side dealing team has to accept responsibility for execution quality and not blame a broker when things go wrong. If a buy-side dealer with adequate systems, trading in a live market, does not accept execution responsibility, then his or her suitability for the role has to be questioned.
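To make the benchmarking point concrete, the sketch below shows one of the simplest TCA measures, slippage against the arrival price, expressed in basis points. It is an illustration only; the field names and the choice of benchmark are assumptions rather than any particular desk's or vendor's methodology.

```python
# Illustrative only: a minimal arrival-price slippage calculation of the kind
# a TCA process might produce. Field names are hypothetical.

def arrival_slippage_bps(side: str, arrival_price: float,
                         executed_qty: float, executed_value: float) -> float:
    """Slippage of the average execution price vs. the arrival price, in bps.

    Positive numbers mean the execution cost money relative to the price
    prevailing when the order reached the desk.
    """
    avg_price = executed_value / executed_qty
    sign = 1.0 if side.upper() == "BUY" else -1.0
    return sign * (avg_price - arrival_price) / arrival_price * 10_000


if __name__ == "__main__":
    # A buy filled at an average of 10.05 against a 10.00 arrival price costs
    # 50 bps; the same outcome on a sell would be a 50 bps gain.
    print(arrival_slippage_bps("BUY", 10.00, 100_000, 1_005_000.0))   # 50.0
    print(arrival_slippage_bps("SELL", 10.00, 100_000, 1_005_000.0))  # -50.0
```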

Sharing the benefits and responsibility
As dealing desks prove the value that they can add to investment performance, a natural follow-on is increased dealing discretion. Commission Sharing Agreements (CSAs) can and should be used, where permitted by regulations, to allow dealing desks the freedom to execute where they believe they can obtain the best results. The principle of dealing with a broker simply because they offer good research has to be dropped. An extreme example is in Australia, where one asset manager has effectively outsourced all its dealing to a single broker. This broker uses its pre- and post-trade analytics to decide upon execution strategy and uses CSAs to pay research brokers as instructed by the asset manager.

So, what does the future hold for Asia? I believe we will see more money flow into the region and that this will be followed by people, changes to market structures and increased pressure on buy-side dealing desks, in terms of performance results, as regulators and clients look more closely at how those desks manage their executions.
Our role is to be prepared for the future. We must monitor regional and international developments, inform and educate appropriate people in our own organizations – fund managers, compliance departments and IT – and readily accept increased responsibility for our execution performance. In short, we must prove our worth.

Is FIX Really Slow?

By Kevin Houston
Working on a Porsche analogy: To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow! Kevin Houston explains.
All early Porsches have their roots in the Volkswagen Beetle, designed by Ferry Porsche’s father Ferdinand. The VW Beetle is only capable of about 80 mph, and even that would be at the cost of a terrified driver. Early Porsches share a lot of their design elements with the Volkswagen; however, that does not mean that Porsches are slow. Many of us will have been on track days where we have driven Porsches around a race track, hitting speeds of well over 80 mph, without inducing any great feelings of fear.
The early FIX engines, designed in an era of simply routing care orders between the buy-side and sales traders, are also slow; used for modern high-speed trading, they would leave traders nervously guessing whether each message would be one of the lucky ones that went through quickly or, more probably, one of the unlucky ones that took several seconds to arrive at its destination. Again, some of us can testify that these early engines do not represent the state of the art; equally, however, high-velocity trading houses today are using FIX to trade in around 250 microseconds, and a small number of FIX engine vendors are currently capable of consistently beating 10 microseconds for message processing, delivering throughput of around 100,000 messages per second.
To say that all Porsches are slow is, clearly, about as ridiculous as saying that FIX is slow. The remainder of this article examines the second myth in more detail.
First, a bit of history
FIX started as a pilot between Salomon Brothers and Fidelity Investments to automate the placement of orders, the reporting of executions and the distribution of advertisements and IOIs. Indeed, many early FIX implementations did not even place the order electronically, but only reported the executions after order placement. At that time there was only one FIX engine on the market; its price was extremely high and its performance, by today’s standards, extremely low. FIX adoption was driven by error reduction and the like. The arrival of the early commercial FIX engines did the community a great service by creating cheaper alternatives, but the performance bar was not very high. Often when people now refer to FIX as being slow, they are using the yardstick of early FIX engine performance as the measure. Since then, and particularly over the last 5 to 10 years, increasing emphasis has been placed on the performance of FIX, driven by a number of trends, and the FIX engine vendor community has responded. Some have accepted their current performance levels as adequate for order routing but not DMA and focused on that market; others, often new entrants, have engineered FIX engines from the ground up to focus on future-proofing performance.

The drivers behind this need for speed are worth noting:
• Increased market data
• Increased order volume
• Exchange adoption of FIX
• Large percentage of trades going electronic
• Rise and rise of algorithmic trading

These drivers lead to two separate performance needs: high throughput and low latency. Whilst these are related needs, there are optimisations that can be made to favour one or the other. For example, FIX communicates over TCP/IP; typically a FIX message uses a few hundred bytes, but an IP packet has space for around 1,400 bytes of information. A setting on the IP communication layer allows you to select whether packets should be held back to wait for additional information that can be sent in the same packet.
This obviously improves throughput, as the receiving application has to process fewer packets, but it has the potential drawback of increasing latency for at least the first message, which has to wait for subsequent messages. So, whether you allow this hold-back or not depends on your performance needs, or profile. This mechanism was first described by John Nagle and is named after him; see http://en.wikipedia.org/wiki/Nagle’s_algorithm for more details. OK, so FIX can be fast, but a lot of the implementations out there are dated and therefore can be slow; so, are there other things FPL is working on that will help with performance?
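In socket terms, the hold-back choice described above usually comes down to a single option. The sketch below is purely illustrative and assumes a plain TCP connection; the host and port are placeholders rather than any real venue.

```python
# A minimal sketch of the throughput-versus-latency choice: leaving Nagle's
# algorithm on lets the kernel coalesce small FIX messages into fewer IP
# packets (throughput); TCP_NODELAY turns it off so each message is sent
# immediately (latency).
import socket


def connect_fix_session(host: str, port: int, low_latency: bool) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if low_latency:
        # Disable Nagle's algorithm: do not hold small writes back.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    sock.connect((host, port))
    return sock
```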
Where are we today, how long does it take?
Let’s take a look at some of the time costs in sending a FIX message. Obviously this is a very rough estimate and there are a lot of variables, but here are some general timings worth covering:

• constructing the message, typically something like 10 microseconds;
• saving a copy before sending, 50 microseconds;
• sending it to the wire, typically 10 microseconds;
• transmission time, a function of distance but easily worked out as distance divided by two-thirds of the speed of light;
• switching time, a function of the number of routers, switches and their ilk;
• receipt via the operating system into user space, say 10 microseconds;
• and finally parsing in user space, 10 microseconds.

There are a number of ways you can improve these, such as saving the copy of the message asynchronously, but that still leaves a lot of time spent building the FIX message and its tag=value syntax.
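Putting those rough figures together gives a feel for where the time goes. In the sketch below, the per-step costs are the estimates above, while the 40 km distance and the 10-microsecond switching allowance are assumptions made purely for illustration.

```python
# A rough worked example of the latency budget described above.
SPEED_OF_LIGHT_KM_S = 300_000.0      # kilometres per second, in vacuum
PROPAGATION_FRACTION = 2.0 / 3.0     # signal speed in fibre, roughly 2/3 of c


def transmission_us(distance_km: float) -> float:
    return distance_km / (SPEED_OF_LIGHT_KM_S * PROPAGATION_FRACTION) * 1e6


budget_us = {
    "build message":        10,
    "persist copy":         50,
    "send to wire":         10,
    "transmission (40 km)": transmission_us(40),  # ~200 microseconds
    "switching (assumed)":  10,
    "kernel to user space": 10,
    "parse in user space":  10,
}
print(round(sum(budget_us.values())))  # roughly 300 microseconds end to end
```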

What is the FPL Global Technical Committee (GTC) doing to make things even better?
Whilst FIX can be as fast as, or faster than, most of today’s exchange APIs, there are a number of areas that the FPL GTC is working on, and it has approved Working Groups to look at these issues.

1) Putting FIX on a par with Native Exchange APIs – Stateless FIX
A lot of exchanges want to implement FIX in parallel to an existing interface. This means that they have a matching engine that stores its state and an interface that does not. If they architect their FIX interface to talk directly to the matching engine then the main difference between the FIX interface and the native interface is that the FIX interface has a costly extra persistence operation. This means that at best the FIX interface is going to be something like 50 microseconds slower than the native exchange interface.

FPL recognised this problem and introduced a partial solution for non-trading messages in FIX 5.0. The GTC has now prioritised extending this solution to include trading messages, making more extensive use of the Application Sequencing messages introduced in that version.
2) Examining how much information we move
FIX has historically been designed to meet the needs of many distinct constituents. One of the repercussions of this has been that some fields are marked as mandatory when in fact they are only required for a certain subset of the FIX community. FPL has decided to revisit what constitutes a mandatory field within the specification. For example, do there need to be 7 mandatory elements in the Order Cancel Request message, when all an exchange may need is the Order ID to be able to cancel the order?

3) Optimised transports for FIX – Binary FIX
Currently a major cost for a FIX interface is the time it takes to serialise a FIX message into a Tag=Value format. This Tag=Value string is simply a version of a piece of memory in the sender’s computer that is used to create a copy of that memory in the receiver’s computer. FPL introduced the repository some time ago and many firms already use this to describe the piece of memory to their computer programs by generating object models directly from it. Why not take this one stage further and also convert these object models (or pieces of memory) into information exchanged on networks, using some of the lessons learnt from the FIX Adapted for STreaming (FAST) initiative? Then examine this structure and optimise it?
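The serialisation cost in question can be seen in miniature below. The fields, the binary layout and the byte counts are illustrative assumptions only; the point is simply how much more work the textual Tag=Value form, with its BodyLength and CheckSum, requires than copying a fixed binary structure.

```python
# Illustrative comparison of Tag=Value serialisation with a fixed binary layout.
import struct

SOH = "\x01"


def to_tag_value(fields: list[tuple[int, str]]) -> bytes:
    # Build the body, prepend BeginString(8) and BodyLength(9), append CheckSum(10).
    body = "".join(f"{tag}={value}{SOH}" for tag, value in fields)
    head = f"8=FIX.4.4{SOH}9={len(body)}{SOH}"
    partial = (head + body).encode()
    checksum = sum(partial) % 256
    return partial + f"10={checksum:03d}{SOH}".encode()


def to_binary(cl_ord_id: int, qty: int, price: float, side: int) -> bytes:
    # A hypothetical fixed layout: far less to build and to parse.
    return struct.pack("<QIdB", cl_ord_id, qty, price, side)


order = [(35, "D"), (11, "1001"), (55, "ABC"), (54, "1"), (38, "5000"), (44, "10.25")]
print(len(to_tag_value(order)))              # dozens of bytes of text to format and parse
print(len(to_binary(1001, 5000, 10.25, 1)))  # 21 bytes, copied almost directly
```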

4) FIX Interparty Latency – FIPL
Whilst we are focusing on the latency and performance of FIX engines in this article, what we as an industry are concerned about is the end-to-end latency across the whole trading infrastructure. One of the first things you need to do in any engineering endeavour that aims to improve the performance of a system is measure the performance of that system and identify where the largest performance problems are and where the easiest gains can be made. Many organisations have detailed network-level information on the arrival and departure of network packets, many components of the trading system write log files that detail when they receive a message and when they forwarded it on, and sometimes even information to trace its path through their element of its overall journey.

However, when you try to assemble an end-to-end picture of the journey of a trade across an organisation, and through the many systems that entails, you find that this information is often stored in different formats and compiled on a different basis from system to system. The FIX Interparty Latency Working Group is aiming to develop a standard that will allow the easy assembly of this information on a consistent basis across multiple organisations, so that we can understand the latency introduced across the whole of the trading life cycle.
So why is FIX perceived to be slow? Because some implementations of FIX are slow, and those implementations are slow because the industry didn’t need them to be fast! Now that people are demanding faster FIX interfaces, the community is providing them. Similarly, the Volkswagen was slow because the people’s car was targeted at simply moving people around, as an economy car, not moving them quickly.
Road-going Porsches are faster because they are optimised for a different problem: going quickly on public roads. Now we are entering the era when we are racing on a circuit, and unless your car is designed for that you are not going to make it to the starting grid.
FIX is the same, the engines designed a decade ago to do one job cannot be expected to lead the field when asked to perform an almost entirely different one.

DMA and the Buy-side

Getting to the bottom of naked sponsorship and high-frequency trades.

FIX: What does the buy-side want from Direct Market Access (DMA)?

David Polen: There are two distinct market segments that use DMA – the human trader and the black box. I like to call this “Human DMA” and “High-frequency Trading (HFT) DMA”.

With Human DMA, the extreme is a buy-side that has traders manually executing trades and looking at market data over the Internet; with HFT DMA, the extreme is a black box co-located at the exchange. One market segment is sub-millisecond and the other is more than tens of milliseconds – sometimes hundreds of milliseconds.

The human trader manually clicking around on a front-end is more interested in the full range of services a broker can provide than he is in latency. Although speed is always important, he’s keen on being able to access all his applications via one front-end versus having to go to different windows. He’s looking for his broker to be a one-stop-shop, providing all the necessary services, such as algorithms and options and basket trading, in one easy and convenient bundle. He wants clean and compliant clearing and settlement.

The high-frequency trader is different. He has his own algorithms and smart order routers (SORs). He wants to get to the market as quickly as possible and needs credit and also memberships to the various execution venues.

FIX: What is the controversy around naked sponsorship for high-frequency traders?

DP: With naked sponsorship, the HFT is trading directly on the exchange, and the broker is only seeing the orders and trades afterwards. To help with this flow, exchanges have built in risk checks, so the broker can rely on the exchanges for pre-trade risk management.

To get a view across the exchanges, the broker consolidates the post-trade information through drops of the orders and trades. Although the broker has a reasonably good view of the risk at all times, it can take as long as a minute to turn off a buy-side that has exceeded their pre-set risk parameters. This is often exaggerated into a doomsday scenario where a buy-side trades up to $2 billion of stock in those 60 seconds, but that ignores the exchange’s own controls, which would not be set to $2 billion. It is a lot more likely for a buy-side to barely stay within its risk limits at each exchange, but exceed the overall allotted risk by multiples. Brokers need to have measures in place to prevent that.
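As a rough illustration of that last point, the sketch below checks per-venue exposures against both a venue limit and an aggregate limit. It is not a description of any broker's actual controls; the venues, limits and figures are invented for the example.

```python
# Per-venue exposures can each sit inside the venue limit while their sum
# breaches the overall limit the broker has extended to the client.
def check_client_risk(exposure_by_venue: dict[str, float],
                      per_venue_limit: float,
                      overall_limit: float) -> list[str]:
    alerts = []
    for venue, exposure in exposure_by_venue.items():
        if exposure > per_venue_limit:
            alerts.append(f"{venue}: venue limit breached ({exposure:,.0f})")
    total = sum(exposure_by_venue.values())
    if total > overall_limit:
        alerts.append(f"aggregate exposure {total:,.0f} exceeds overall limit")
    return alerts


# Each venue is within its 100m limit, but the client is over in aggregate.
print(check_client_risk({"Venue A": 90e6, "Venue B": 95e6, "Venue C": 85e6},
                        per_venue_limit=100e6, overall_limit=200e6))
```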

FIX: What are the key concerns with latency?

DP: The best way to lower latency is to get rid of as many message hops as possible. Co-locating at an exchange is obvious, as it eliminates network hops. Although co-location is important, it does come with infrastructure costs that not all high-frequency traders are willing to bear; for example, they may need to co-locate at each exchange.

Some buy-sides or brokers may co-locate at only one exchange and use that venue’s network to access others. Co-location also depends on the buy-side’s trading strategy. High-frequency traders need to understand where they want to trade. They can’t think of the market as a montage when they’re trying to achieve the lowest execution latency. There’s no time to sew together the fragmented marketplace if you’re also trying to be incredibly reactive to each and every exchange.

It’s also important to focus on latency within each exchange. Shaving another 100 microseconds off your DMA solution may not matter much if you are hitting an exchange port that is using old hardware or if you are overloading a port at the exchange and not load-balancing to another port. You also have to be aware of the protocol you are using: some exchanges have created legacy FIX sessions that are wrappers around internal technology and can be quite slow converters.

They are now creating “next generation” APIs that are native FIX and much faster, but these sessions may only offer a subset of available messages, so you have to consider routers that send the eligible subset down the fast FIX pipe.

FIX: Outside latency, what are the main drivers for DMA?

DP: Buy-sides like to have a single FIX session for all of their services: global, high-touch, baskets, options, DMA and algorithms. Having multiple connections for multiple services is inconvenient. It costs more money and is cumbersome for the buy-side trader. Also, for the sell-side, it is harder to have a single risk control around the client’s business. Risk management is all about consolidation. Of course, the lower the latency a buy-side wants, the more they will support multiple FIX sessions. So a broker should support multi-asset class, multiple lines of business globally. They should present a single, simple interface to the buy-side for risk, order entry, allocation and (where possible) clearing.

FIX: How do you put all of this into a low latency FIX connection?

DP: The architecture is reasonably straightforward. You build the fastest FIX router/gateway and you put a framework inline that checks for risk, locates and compliance. This means that your FIX gateway also has to be hooked up to real-time global market data. Asynchronously, you copy the data to an order management system (OMS) for advanced post-trade features. Then you start building features off the back of it, including algorithms and smart order routing (SOR).
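The sketch below is one schematic reading of that flow, not a description of Fidessa's implementation: inline pre-trade checks sit on the latency-critical path, while the copy to the OMS for post-trade features happens asynchronously. The checks and field names are placeholders.

```python
# Schematic only: fast path = pre-trade checks + route to market;
# the OMS copy is taken off the critical path via a background queue.
import queue
import threading

oms_queue: "queue.Queue[dict]" = queue.Queue()


def pretrade_checks(order: dict) -> bool:
    # Stand-in for risk, locate and compliance checks.
    return order["qty"] * order["price"] <= order["credit_limit"]


def send_to_market(order: dict) -> None:
    print("routed to exchange:", order["symbol"])


def oms_writer() -> None:
    while True:
        order = oms_queue.get()          # asynchronous post-trade copy
        print("copied to OMS:", order["symbol"])
        oms_queue.task_done()


threading.Thread(target=oms_writer, daemon=True).start()


def handle_inbound(order: dict) -> None:
    if pretrade_checks(order):           # inline, on the latency-critical path
        send_to_market(order)
        oms_queue.put(order)             # off the critical path


handle_inbound({"symbol": "ABC", "qty": 1000, "price": 10.0, "credit_limit": 1e6})
oms_queue.join()
```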

FIX: So it is all about low latency?

DP: Absolutely not. That’s just one category of buy-sides. As previously mentioned, there are a huge number of buy-sides that have taken on the trading function themselves. They have hired sell-side traders, or decided they can trade themselves, and are using DMA tools, like our EMS application, to watch the Level II data and hit the market. Again, the issue is centralization.

Typically, these tools have their own risk checks that the broker can control. But what if the broker’s clients use an array of these tools? Regardless of the tool the buy-side is using to enter these orders, DMA allows them to centralize all the order flow and run it over their risk and compliance checks before the orders go to market. Fidessa’s DMA capability is also integrated with our OMS, so these trades go to the back-office and show up on regulatory reports and on more than 100 compliance reports.

FIX: What should a broker entering this game focus on?

DP: The key point is to understand what market segment you are after and what your differentiators are. If you are a regional market maker with strong research, then you have loyal buy-side firms who will look to use your services. They will keep trading through you to gain your research. If you can offer them DMA, that’s just one more service they don’t have to go to a Tier-1 broker for.

But to get into the high-frequency trading space, you need a stronger differentiator – perhaps a ton of stock to lend so the buy-sides can sell short. Obviously, prime brokers are already in this space. Either way, pricing is critical, and the maker/taker model in the US means that brokers have to get to top-tier on volume traded on the exchanges as quickly as possible.

The goal for new brokers is to be top-tier in rebates and costs and have an internal dark pool as part of the SOR for internalization efficiencies. But there are always functional differentiators. There may be clients who are willing to send flow for specialized functionality – perhaps the calculation of commission on each notice of execution (NOE). Or maybe they want the ability to allocate their own blocks. You have to understand what your client wants and tailor your offering accordingly.

Upgrading the Exchanges – With Arrowhead, Tokyo Stock Exchange takes aim at the traders of the future

By Satoshi Hayakawa
The wait is over. Arrowhead, Tokyo Stock Exchange’s next-generation trading platform, launches in the New Year and promises high-speed performance, accessibility and reliability for local and international investors. Satoshi Hayakawa, from the Tokyo Stock Exchange’s IT department, highlights some of Arrowhead’s key features.
January 4, 2010 will usher in the Tokyo Stock Exchange’s renewed stock trading system. This next-generation system, called “Arrowhead,” not only provides high-speed performance at a global level, essential for order execution and data transmission, but also constitutes a world-class stock exchange trading system that brings both fairness and reliability, the fundamental attributes demanded of any market. Arrowhead will transform the Tokyo market into an even more appealing trading environment.
Why Arrowhead?
Tokyo’s long-awaited Arrowhead is due to go “live” on January 4, the first Monday of 2010. The concept is simple: to enhance the speed and reliability of the Tokyo Stock Exchange. The next-generation trading environment uses state-of-the-art technology to increase capacity, expand the provision of market information and streamline the trading process. In short, the exchange is aiming to strengthen both the hardware and software it provides.

Speed
Perhaps the most striking improvement will be to the speed with which orders are received and confirmed. Arrowhead will move the TSE from the one to two seconds it currently achieves to within 10 milliseconds. Test results show that Arrowhead is, in fact, posting even faster intervals. It’s an essential feature if Arrowhead is to accommodate the algorithmic and high-frequency trades that are increasingly common among the global financial community.

High reliability
While speed may be the headline feature of Arrowhead, the reliability of the system is just as important. Gone is the concept that reliability is the trade-off for speed, or vice versa. Arrowhead guarantees the completion of the transaction process via a sophisticated system that holds orders and confirmations on three different servers to prevent deletion of data due to hardware malfunction. It’s a system that has been proven to cope well, even with orders that constantly change.

A part of the reliability pledge of Arrowhead is its commitment to enhancing the availability of the system. This involves simplifying the user interface by using the highest level of technological support behind the scenes. A secondary back-up site has also been newly constructed, in the event of a catastrophic incident at the primary exchange location.
Enhanced market data
Another aspect of availability is the set of enhancements the exchange has put into its market data feed. With Arrowhead, the exchange claims to have significantly reduced the interval between data creation and transmission, as well as increasing the amount and transparency of data and transactions. In practice, this will see the number of nominal prices transmitted rising to eight from the current five, using the FLEX standard service. A further enhancement, via Arrowhead, is the new FLEX Full, a system that will transmit all nominal data for every issue. This data will be available not only to market players at the TSE but to all investors, including institutional and individual investors in Japan and overseas.

Revisions to Trading Rules
Other changes include upgrades to the trading rules. These include:

  1. Partial reduction of tick size. To correct overall disproportions and enhance user-friendliness, the tick size for a portion of the price range will be reduced to allow more finely specified orders.
  2. Introduction of a sequential trade quote. To avoid sudden price changes over a short period of time, this function displays continuous confirmed bids and can suspend trading for a short period (e.g. one minute) if market players attempt to place buy or sell bids that greatly diverge from the preceding confirmed prices – for example, by doubling the bid renewal margin either above or below the price.

Once Arrowhead is in operation, the TSE says it will continue to maintain all systems necessary for fair pricing to occur. The current system, which allows investors to securely place orders and supports price-finding functions that value assets through “itayose” special bids, limit value margins and abnormal order checks, will continue without change.
Higher speed, lower latency and the start of the co-location service
TSE’s Arrowhead preparation moved into high gear in June 2009, with the launch of the ‘Arrownet’ network. Arrownet brought together all systems, traders and information vendors to achieve an internal latency across this core network of two milliseconds or lower. This represented a ten-fold decrease in latency, and created a ring-shaped network with 99.99% availability and future potential for expansion into international connections.

A few months later, the TSE rolled out its co-location service, which allows for the installation of servers and other devices within the primary site that houses Arrowhead. With this co-location, latency is reduced to a few hundred microseconds.
The full benefits of this extensively tested platform will be on offer to all from January 2010, with the TSE billing Arrowhead as a “comprehensive, high-reliability, high-speed trading platform designed to provide a more appealing trade environment.”
Creating an exchange for the future
The development of Arrowhead has been seen by many across the industry as both a technological achievement and an example of partnership between the exchange and market players, including traders and technology vendors. The TSE is confident that Arrowhead will send a message to the world about the reliability, latency and quality of its exchange.

FIXatdl: An Emerging Standard

With the official release of FIXatdl taking place in Spring 2010, Zoltan Feledy gives us a head start on the standard.
A few years back, when algorithmic trading became a standard tool in the trader’s toolbox, the explosion of algorithms from various providers presented a number of challenges for the trading community as well as for FPL. These algorithms brought with them a number of new parameters. Orders now had to contain not only what to do but also how to do it. Because the existing implementations of FIX did not have tags to indicate strategies, starting times, or whether to include auctions, the result was a proliferation of user-defined tags that are unfortunately still in existence today.
In addition to standardizing parameter specification, FPL also wished to standardize the way algorithms are delivered to the end-user, to further improve the process and to reduce time to market. Previously, providers had to work with multiple vendors to present these new order types to their clients. This often took months before a trader got to see a screen in which to enter orders.
FPL responded to the needs of this trend by creating the Algorithmic Trading Working Group. The group solved the above issues sequentially. The issue of the proliferation of custom tags was solved by the Algorithmic Trading Extensions, which provided support to express an unlimited number of parameters in a repeating group structure that is now an integral part of the FIX 5.0 specification. The second problem, of delivering algorithms to end-users, was solved by FIXatdl. The FIXatdl project chose to use the richness of XML, which is well understood and supported, to solve these problems. This would allow providers to release their specifications in computer-readable format, as opposed to a long document, allowing end-users instant access to the latest versions of their algorithms. This is exactly what the working group did.
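To give a feel for the repeating-group approach, the sketch below builds the name/type/value entries that FIX 5.0 carries in its strategy parameters group (tags 957–960). The parameter names, type codes and values are illustrative assumptions, not any provider's actual algorithm definition.

```python
# Illustrative only: algorithm settings travel as name/type/value entries in a
# repeating group rather than as one ad hoc user-defined tag per setting.
SOH = "\x01"


def strategy_parameters(params: dict[str, tuple[int, str]]) -> str:
    fields = [f"957={len(params)}"]       # NoStrategyParameters
    for name, (param_type, value) in params.items():
        fields += [f"958={name}",         # StrategyParameterName
                   f"959={param_type}",   # StrategyParameterType (codes shown are examples)
                   f"960={value}"]        # StrategyParameterValue
    return SOH.join(fields) + SOH


# Hypothetical parameters for a VWAP-style strategy.
print(strategy_parameters({
    "StartTime":         (19, "20100104-09:00:00"),
    "ParticipationRate": (11, "0.15"),
}))
```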
There are 4 parts to the standard. The core is used to specify the parameters for the algorithm. This is accompanied by 3 other parts to express the visual layout, validation rules, and the ways in which parameters should interact with each other. The earliest versions of the specification have now been around for a while and following various enhancements, in spring 2010, FPL will launch FIXatdl Version 1.1.
Adoption was slow at the beginning, as the space was developing rapidly and the work was substantial. The “chicken or egg” problem between providers and vendors was a difficult one to overcome. Why would vendors develop support for this new standard if providers are not delivering their products in this format?
Conversely, why would providers deliver in this format if vendors will not support the standard? With the pace of change in the space slowing and the effort gaining substantial traction on both sides of the fence, all the stars have lined up for this new technology to take off.
I recall first hearing about this standard in New York a few years ago, when it was just a concept. I signed up to help the effort on the spot and helped draft the first specification until I moved to Asia two years ago. I was delighted, at a recent demonstration, to witness that this standard is gaining significant traction. Shortly thereafter, I saw the video of the panel discussion from the FPL Americas Electronic Trading Conference, and it has become clear that this standard will inevitably be the way forward.
Please enjoy the feature article on FIXatdl in this issue of FIXGlobal. The effort is a great realization of FPL’s slogan: so many industry participants collaborating to solve an industry need for the benefit of all, in one impressive effort. With the official release of FIXatdl to take place in just a matter of months, FIXatdl is now ready for prime time. Look for it in a trading system near you.

Korea – A FIX Conversation Initiated

By Josephine Kim
FIX is still a relatively new concept in South Korea. Though it is on everyone’s mind and its potential is clear, how do we move forward? FIXGlobal initiated this conversation with almost 200 industry leaders at their recent Face2Face event in Seoul. Credit Suisse’s Josephine Kim encapsulates the conversation.

Korea is one of the ten largest economies in the world with an increasing number of institutional and foreign investors, making FIX Protocol an ideal choice. But why is the use of FIX in Korea still in its infancy?

This was the key conversation point with representatives from Korean financial players and the exchange at the recent FIXGlobal Face2Face forum held in Seoul on November 10, 2009. Ryan Pierce, Technical Director of FPL, believes there are advantages to being a relative newcomer in the FIX arena. “If a market has not already adopted FIX, then they are working with a clean slate, making it easier to implement the latest version.” In other words, it gives them the ability to jump past the earlier FIX technologies to a more sophisticated version.

The buy-side speakers at the event were vociferous in their support for FIX. Fidelity’s Kan Wong and Samsung Investment’s Young-Sup Lee spoke to Nomura’s Rob Liable, and all were enthusiastic about the positive impact of FIX on their business. “Earlier, all our orders were over the telephone or email. Sometimes there were errors while placing orders or we had delays in order-processing that hurt our trading performance. We adopted FIX in 2006 and the results have been very clear: our team is more efficient and we are better able to respond to market conditions,” said Lee. “In 2006, when we adopted FIX, we had three traders. Today, though we handle a lot more trades, we still have only three traders. That’s a clear sign of how FIX has driven efficiency in our team,” he added.

With the buy-side supporting FIX, the attention turned to the exchange. Hong Kim and Chang Hee Lee from the KRX and Daegeun Jun from Koscom were all eager to be involved in the ongoing FIX discussions, as they understand the growing importance of the protocol. They demonstrated a clear understanding of FIX and its many benefits, recognizing that many institutional investors are active users. So far, the Exchange has not adopted FIX due to the reluctance of some of its members to make changes.

The Korean market is dominated by retail investors who prefer to trade offshore stocks on their own. Some argue there is also no burning need to adopt FIX Protocol as the trading desks use the Exchange’s proprietary system to execute their trades. Though it resembles the global protocol, the Korean protocol – KFIX – has some fundamental differences from it and lacks the various message types available with FIX. KFIX originated in the Korean environment for trading between local institutions. KFIX has served well until now, but FIX would be the way to go forward. The Korean Exchange representatives believe their work with other international exchanges and increasing demand from the international buy-side to track liquidity may be a driving factor that will encourage the Exchange to adopt FIX and be part of the global standard.

The conversation has been initiated in Korea and the Exchange’s desire to be involved is also clear. Now what we need to do is to continue the discussion on adopting FIX while taking into consideration the uniqueness of the Korean market.

MiFID I Lessons Learnt and Looking Ahead

By Andrew Bowley, Chris Pickles
MiFID I has undoubtedly made its impact on the industry. FIXGlobal collates opinion from Nomura’s Andrew Bowley and BT Global Services’ Chris Pickles on the success of MiFID and its next manifestation.

Having digested the massive changes MiFID brought to the EU two years ago, what has the financial community learnt from the content of MiFID I and the process whereby it was developed and implemented?
Andrew Bowley (Nomura):
First and foremost we must conclude that MiFID has worked. We now have genuine competition and higher transparency across Europe.

Costs are down. MTFs (Multilateral Trading Facilities) have brought in cheaper trading rates and simpler cost structures, and most exchanges have followed with substantial fee cuts of their own. Indeed, this pattern is also clearly demonstrated by exception. The one country where MiFID has not been properly introduced is Spain, and this is the one country where fees have effectively been increased. This teaches us that complete implementation is the key, and the European Commission needs to look hard at such exceptions.
We have also seen clearing rates reduced, though the fragmentation itself has caused clearing charges to increase as a proportion of trading fees, as the clearers typically charge per execution. Interoperability should help address that, assuming a positive outcome of the current regulatory review.
In terms of lessons learnt from the process, we must consider that we have experienced a dramatic change in a short period of time, and should allow more time for the market to adjust before fully concluding or looking to further wholesale change. We are certainly still in a period of transition – new MTFs are still launching, and the commercial models of all of these mean that we are far from the final equilibrium. To have so many loss-making MTFs means that we cannot be considered to be operating in a stable, sustainable environment.
Chris Pickles (BT Global Services):
MiFID is a principles-based directive: it doesn’t aim to give detail, but to establish the principles that should be incorporated in national legislation and that should be followed by investment firms (both buy-side and sell-side). Some market participants may have felt that this approach allowed more flexibility, while others wanted to see specific rules for every possible occasion. The European Commission has perhaps taken the best approach by allowing investment firms and regulators to establish for themselves the best ways of complying with the MiFID principles, and has perhaps “turned the tables” on the professionals. If the European Commission had tried to tell the professionals how to do their job, the industry would have been up in arms. Instead, MiFID says what has to be achieved – best execution. Leaving the details of how to achieve this to the industry means that the industry has to work out how to achieve that result. This takes time, effort and discussion. FIX Protocol Ltd. helped to drive that discussion by jointly creating the “MiFID Joint Working Group” in 2004. And the discussion is still continuing. A key thing that the industry has learned – and continues to learn – is to ask “why”. Huge assumptions existed before MiFID that are now being questioned or proven to be wrong. On-exchange trading doesn’t always produce the best price. Liquidity does not necessarily stick 100% to existing execution venues. Transparency is not achieved just by looking at on-exchange prices. And the customer is not necessarily receiving “best execution” under today’s execution policies.

A European Commission spokesperson says a review of MiFID is likely in 2010, so what can the financial community feed into this review to make MiFID 2 a better version of its predecessor?
Chris Pickles (BT Global Services):
The European Commission has always intended the implementation of MiFID to be a multi-stage process – using the Lamfalussy Process – and a review has always been planned to see how effectively MiFID has achieved its goals and what tuning measures are still needed. Key points that the industry can raise during this review process are:

  • Requiring the use of industry standards by regulators for reporting by the industry to those regulators. Though regulators monitor the industry, they are also part of the industry’s infrastructure and can help the cost-efficiency and compliance of the industry by using standards that the industry itself uses. This would include the use of standards like the FIX Protocol for trade reporting and transaction reporting.
  • Requiring the use of industry standards by investment firms to meet their MiFID transparency obligations. For example, using the FIX Protocol would allow investment firms to publish their own data in a format that is easy for all to integrate into their own systems. This could help to address the issues around the need for consolidated data across the EU. Execution venues also need to understand that continuing to use their own proprietary protocols is adding unnecessarily to the costs of the industry, whether for trading or for market data distribution.

Andrew Bowley (Nomura):
It is crucial that the financial community contributes to this review, and the European Commission is very keen for that too. On a recent AFME (Association for Financial Markets in Europe)/LIBA (London Investment Banking Association) trip to Brussels, those of us present were encouraged to return with written proposals where detail or refinement is needed – on the consolidated tape, for instance.

The “consolidated tape” debate is one that is not focussed enough today, and is a great example of where we can provide clear input. Nomura intends to be at the heart of this discussion.
It is also vital that the community makes data available to the policy groups. There are still voices suggesting that costs have risen, which is not the case. Data is essential to demonstrating the effects that MiFID has created.
Equally, there is debate around the size of broker dark pools, a debate that would be greatly enhanced by clarity on the real trading activity. We should be debating points of policy, not points of fact.

Summing up the year – FPL Americas Conference

By Sara Brady
The FPL Americas Electronic Trading Conference, for those in electronic trading, is always a year-end highlight, and this year was no exception. Sara Brady, Program Manager, FPL Americas Conference, Jordan & Jordan, thanks all the sponsors, exhibitors and speakers who made this year’s conference a huge success.

The 6th Annual FPL Americas Electronic Trading Conference took place at the New York Marriott Marquis in Times Square on November 4th and 5th, 2009. John Goeller, Co-Chair of the FPL Americas Regional Committee, aptly set the tone for the event in his opening remarks: “We’ve lived through a number of challenging times… and we still have quite a bit of change in front of us.” After a difficult year marked by economic turmoil, the remarkable turnout at the event was proof that the industry is back on its feet and ready to move forward with the changes to the electronic trading space set forth in 2009.
Market Structure and Liquidity
Two topics clearly stood out as key issues that colored many of the discussions at the conference – regulatory impact on the industry, and market structure as influenced by liquidity and high-frequency trading. An overview of industry trends demonstrated that the current challenges facing the marketplace are dominated by these two elements. Market players are still trying to digest the events of 2008 and early 2009, adjusting to the new landscape and assessing the changing pockets of liquidity amidst constrained resources and regulatory scrutiny. The consistent prescription for dealing with this confluence of events is to take things slowly and understand any proposed changes holistically before acting on them and encountering unintended consequences.

The need for a prudent approach towards change and reform was expressed by many panelists, including Owain Self of UBS. According to Self, “Everyone talks about reform. I think ‘reform’ may be the wrong word. Reform would imply that everything is now bad, but I think that we’re looking at a marketplace which has worked extremely efficiently over this period.”
What the industry needs is not an overhaul but perhaps more of a fine-tuning. Liquidity is one such area that needs carefully considered fine-tuning. Any impulsive regulatory change to a pool of liquidity could negatively impact the industry. The problem is not necessarily with how liquidity is accessed, but with the lack of liquidity that results in the downward price movements that marked a nightmarish 2008. Regulations against dark liquidity and the threshold for display sizes are important issues requiring serious discussion.
Rather than moving forward with regulatory measures that may sound politically correct, there needs to be a better understanding of why this liquidity is trading dark. While there is encouraging dialogue occurring between industry players and regulatory bodies, two things are for sure. We can be certain that the evolution of new liquidity venues is evidence that the old market was not working and that participants are actively seeking new venues. We can also be assured that the market as a messaging mechanism will continue to be as compelling a force as it has been over the last two decades.
Risk
One of the messages that the market seems to be sending is that sponsored access, particularly naked access, is an undesirable practice. Presenting the broker-dealer perspective on the issue, Rishi Nangalia of Goldman Sachs noted that while many agree that naked sponsored access is not a desirable practice, it still occurs within the industry. A panel on systemic risk and sponsored access identified four types of the latter: naked access, exchange sponsored access, sponsored access via broker-managed risk systems (also referred to as SDMA or enhanced DMA) and broker-to-broker sponsored access.

According to the U.S. Securities and Exchange Commission (SEC), the commission’s agenda includes a look specifically into the practice of naked access. David Shillman of the SEC weighed in on the commission’s concern over naked access by noting, “The concern is, are there appropriate controls being imposed by the broker or anyone else with respect to the customer’s activity, both to protect against financial risk to the sponsored broker and regulatory risk, compliance with various rules?” Panelists agreed that the “appropriate” controls will necessarily adapt existing rules to catch up with the progress made by technology.
On October 23, NASDAQ filed what it believes to be the final amendment to the sponsored access proposal it submitted last year. The proposal addresses the unacceptable risks of naked access and the questions of obligations with respect to DMA and sponsored access. The common element of both of these approaches is that both systems have to meet the same standards of providing financial and regulatory controls. Jeffrey Davis of NASDAQ commented on his suggested approach: “There are rules on the books now; we think that they leave the firms free to make a risk assessment. The new rules are designed to impose minimum standards to substitute for these risk assessments. This is a very good start for addressing the systemic risk identified.”
These steps may be headed in the right direction, but are they moving fast enough? Shillman added that, since sponsored access has grown in usage, there are increasing concerns and a growing sense of urgency to ensure a commission-level rule for the future, hopefully by early next year. This commission proposal would address two key issues – whether controls should be pre-trade (as opposed to post-trade), and the very important question, “Who controls the controls?”

Counterparty Credit Risk Management

By David Kelly
Counterparty credit risk theory and practice have been evolving over the past decade, but the recent market crisis has brought them into heightened focus. Quantifi’s David Kelly explains how the current best practice is the result of a long evolutionary process.
Over the past decade, commercial banks addressed the problem of counterparty credit risk from a traditional financing perspective, while investment banks approached it from a derivatives perspective. As the industry consolidated in the ’90s, culminating with the repeal of Glass-Steagall in 1999, there was substantial cross-pollination of ideas and best practices. Consolidation, and the necessity to free up capital as credit risk became increasingly concentrated within the largest financial institutions, drove a series of innovations. These innovations involved methodologies, management responsibilities and technology.
The most significant evolution was the transition from a buy-and-hold mentality to a more market-based, active risk management model. Simultaneously, substantial responsibility was transferred from credit officers to traders. As various extensions to the reserve and market models have been implemented, a general consensus has emerged that essentially combines portfolio theory and reserves with active management. This combination has placed tremendous emphasis on technology infrastructure.
Banks today tend to be distributed along the evolutionary timeline by size, where global banks have converged to the consensus model while most regional banks are closer to the beginning stages. This paper traces the evolution of counterparty credit risk based on actual experiences within banks that have had considerable influence.
Reserve model
Reserve models are essentially insurance policies against losses due to counterparty defaults. For each transaction, the trading desk pays a premium into a pool from which credit losses are reimbursed. The premium amount is based on the creditworthiness of the counterparty and the overall level of portfolio diversification. Premiums comprise two components – the expected loss, or credit value adjustment (CVA), and the potential unexpected loss within a chosen confidence level, also referred to as economic capital. Traditional banks like pre-merger Chase and Citibank, and their eventual investment banking partners J.P. Morgan and Salomon Smith Barney, all used reserve models, but the underlying methodologies were very different.

The traditional banks converted exposures to loan equivalents and then priced the incremental credit risk as if it were a loan. In practice, traders simply added the number of basis points prescribed by a table for that counterparty’s risk rating, the transaction type and tenor. In contrast, the more derivatives-oriented investment banks calculated reserves by simulating potential future positive exposures of the actual positions. The simulation models persevered because they more precisely valued each unique position and directly incorporated credit risk mitigants, such as collateral and netting agreements.
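As a rough illustration of the expected-loss leg of that premium, the sketch below weights a simulated expected exposure profile by each period's default probability and discounts it. The exposure profile, default probabilities, discount factors and loss-given-default are invented inputs; real implementations are considerably more involved.

```python
# Simplified CVA: loss-given-default times the sum, over each period, of
# expected exposure, marginal default probability and discount factor.
def cva(expected_exposure, default_prob, discount_factor, lgd=0.6):
    return lgd * sum(ee * pd * df for ee, pd, df
                     in zip(expected_exposure, default_prob, discount_factor))


# Three annual periods of a simulated exposure profile, a flat 1% marginal
# default probability per period and roughly 4% discounting.
print(cva(expected_exposure=[1_000_000, 1_200_000, 900_000],
          default_prob=[0.01, 0.01, 0.01],
          discount_factor=[0.96, 0.92, 0.89]))  # roughly 17,000
```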

By 2000, the simulation-based CVA and economic capital reserve model was state of the art. Institutions had expanded portfolio coverage in order to maximize netting and diversification benefits. However, trading desks were complaining that credit charges were too high, while reserves seemed insufficient to cover mounting credit losses instigated by the Enron and WorldCom failures. The down credit cycle, following the wave of consolidation and increased concentration of risk, forced the large banks to think about new ways to manage credit risk. While banks had used credit default swaps (CDS) as a blunt instrument to reduce large exposures, there had been limited effort in actively hedging counterparty credit risk. The need to either free capital or increase capacity spawned two significant and mostly independent solutions. The first solution, driven by the front office, involved pricing and hedging counterparty credit risk like other market risks. This had the effect of replacing economic capital reserves with significantly lower VaR. The second solution, basically in response to the first, introduced active management into the simulation model. Active management, or hedging, reduced potential future exposure levels and the corresponding economic capital reserves. The next two sections review these solutions in more detail.

Front-office market model
An innovation that emerged in the mid-to-late ’90s was the idea of incorporating the credit variable in pricing models in order to hedge counterparty credit risk like other market risks at the position level. There were two ways to implement this ‘market model’. The first involved valuing the counterparty’s unilateral option to default. The second used the bilateral right of setoff, which simplified the model to risky discounting, due to the offsetting option to ‘put’ the counterparty’s debt, struck at face value, against the exposure. Using the unilateral or bilateral model at the position level was appealing since it collapsed credit risk management into the more mature and better understood market risk practice.
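A stylised way to see the 'risky discounting' simplification is to compare a position's value discounted at the risk-free rate with the same cash flows discounted at the risk-free rate plus the counterparty's credit spread; the difference is the implicit charge for credit risk. The cash flows, rates and spread below are invented for illustration.

```python
import math


def pv(cashflows, times, rate):
    # Continuously compounded discounting of a set of dated cash flows.
    return sum(cf * math.exp(-rate * t) for cf, t in zip(cashflows, times))


cashflows = [50_000, 50_000, 1_050_000]   # an illustrative swap-like leg
times = [1.0, 2.0, 3.0]                   # years
risk_free = 0.03
spread = 0.02                             # counterparty credit spread

risk_free_value = pv(cashflows, times, risk_free)
risky_value = pv(cashflows, times, risk_free + spread)
print(risk_free_value - risky_value)      # implicit charge for counterparty risk
```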

A few institutions, including Citigroup, considered transitioning as much of their credit portfolio as possible into the market model, using bilateral setoff wherever possible and the unilateral option model for everything else. The idea seemed reasonable, since over 90% of corporate derivatives were vanilla interest rate swaps and cross-currency swaps, and implementing risky discounting or an additional option model for each product type was certainly plausible. Trades that were actively managed in this way were simply tagged and diverted from the reserve model. Aside from the obvious issues, e.g. credit hedge liquidity, the central argument against this methodology was that it either neglected credit risk mitigants like collateral and netting or improperly aggregated net exposures. What ultimately doomed the market model as a scalable solution was that the marginal price under the unilateral model was consistently higher than under the simulation model.

A further drawback was the questionable viability of having each trading desk manage credit risk, or of persuading desks to transfer it to a central CVA desk. Having each desk manage credit risk meant that traders needed credit expertise in addition to knowledge of the markets they traded, and systems had to be substantially upgraded. A central CVA desk proved a more effective solution, but caused political turf wars over pricing and P&L. Institutions that tried either configuration basically concluded that credit risk belonged in a risk management unit with the ability to execute hedges but without a P&L mandate. In short, the substantial set of issues with the market model caused firms to revisit the reserve model.

Merger of the reserve and market models
Attempts to move credit risk out of the reserve model and into a market model inspired important innovations in the simulation framework with regard to active management. Banks had been executing macro, or overlay, CDS hedges with notional amounts set to potential exposure levels. These CDS hedges were effective in reducing capital requirements, but ineffective as hedges in that the notional amount was based on a statistical estimate of the exposure rather than a risk-neutral replication, and that exposure (and hence the notional) varied over time.

The next logical step was to address active management from the input end of the simulation. This involved perturbing the market rates used in the simulation and then calculating the portfolio's sensitivities, which could then be converted to hedge notionals. There were several issues with this approach. Simulating the entire portfolio could take hours, and re-running it for each perturbed input restricted rebalancing frequency to weekly or longer. In addition, residual correlation risk remained, which had critical consequences over the past two years.
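
The bump-and-revalue idea can be sketched as follows; the portfolio_cva function is a hypothetical stand-in for the full portfolio simulation (in practice the hours-long run described above), and the toy functional form and hedge sensitivity are assumptions.

def portfolio_cva(market_inputs):
    # Placeholder for a lengthy Monte Carlo run over the whole portfolio.
    rate, spread = market_inputs["rate"], market_inputs["spread"]
    return 1_000_000 * (1 + 5 * spread - 2 * rate)   # toy functional form

def bump_and_revalue(inputs, key, bump=0.0001):
    # Shift one market input, re-run the valuation and difference the results.
    bumped = dict(inputs, **{key: inputs[key] + bump})
    return (portfolio_cva(bumped) - portfolio_cva(inputs)) / bump   # sensitivity per unit

inputs = {"rate": 0.03, "spread": 0.02}
spread_sensitivity = bump_and_revalue(inputs, "spread")
cds_pv_change_per_notional = 4.5     # illustrative CDS price sensitivity per unit of spread
hedge_notional = spread_sensitivity / cds_pv_change_per_notional
print(f"Indicative CDS hedge notional: {hedge_notional:,.0f}")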

Correlation in portfolio simulation remains an open problem. Simulators typically use the real or historical measures of volatility as opposed to risk-neutral or implied volatilities in projecting forward prices. One reason is that risk-neutral vols may not be available for some market inputs, e.g., credit spreads. The bigger reason is that historical vols already embed correlation. Correlation is not directly observable in the market and the dimensionality of pairwise correlations causes substantial if not unmanageable complexity. The end result is that correlation has been managed through portfolio diversification instead of replication. Given the role of correlation in terms of ‘wrong-way risk’ over the recent cycle, it is on the short list of priorities for the next evolution.
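
To give a sense of the scale involved, the number of pairwise correlations grows quadratically with the number of risk factors; the short calculation below, using arbitrary factor counts, makes the point.

def n_pairwise_correlations(n_factors):
    # Each unordered pair of risk factors requires its own correlation estimate.
    return n_factors * (n_factors - 1) // 2

for n in (50, 500, 5_000):
    print(n, n_pairwise_correlations(n))   # 1,225 / 124,750 / 12,497,500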

Current priorities
Over the past two years, firms that had a comprehensive, integrated approach to credit risk management survived and emerged stronger, while those with a fragmented approach struggled or failed. This lesson, and the evolutionary process that delivered it, has resulted in a general convergence toward the portfolio simulation model with an active management component. Several global banks are at the cutting edge of current best practice, whereas most mid-tier and regional banks are still balancing the need to comply with accounting requirements, which require CVA, with more ambitious plans.

Banks that have robust simulation models are pushing the evolution in four main areas. First, with the recent monoline failures, there is a recognized need to incorporate wrong-way risk, the case where the counterparty's probability of default increases with its exposure (see the sketch below). Second, in the wake of AIG's bailout, recognizing collateral risks in terms of valuation and delivery is clearly important. Third, capturing as much of the portfolio as possible, including exotics, increases the effectiveness of centralized credit risk management and allows more accurate pricing of the incremental exposure of new transactions. Finally, robust technology infrastructure is imperative to reliably capture the wide array of market and position data and then perform the simulation in a reasonable timeframe. Automation and standardized data formats like FIX speed implementation, reduce errors and ultimately enhance the integrity of the results.
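
The sketch below illustrates the wrong-way effect in a deliberately simple way: the counterparty's unconditional default probability is held at 2% in both cases, but in the wrong-way case the default trigger shares a factor with the exposure, so defaults tend to occur when exposure is large and expected loss rises. All parameters are illustrative assumptions, not any bank's production model.

import numpy as np

rng = np.random.default_rng(2)
n, rho = 500_000, 0.7
threshold = -2.054                                    # approx. inverse normal CDF of a 2% default probability

market = rng.normal(size=n)
exposure = np.maximum(market, 0.0) * 1_000_000        # exposure rises with the market factor

idiosyncratic = rng.normal(size=n)
latent_independent = rng.normal(size=n)                                  # no link to exposure
latent_wrong_way = -rho * market + np.sqrt(1 - rho**2) * idiosyncratic   # default more likely when market is high

el_independent = (exposure * (latent_independent < threshold)).mean()
el_wrong_way = (exposure * (latent_wrong_way < threshold)).mean()
print(f"independent EL: {el_independent:,.0f}   wrong-way EL: {el_wrong_way:,.0f}")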

Counterparty credit risk remains a very complex problem and institutions have had to approach it in stages. Huge improvements have been made and current best practice is the result of a long and iterative evolutionary process. There is still much work to do and it will be exciting to see what new innovations lie ahead.

Australia’s Markets Stronger For GFC

By Greg Yanco
The Australian Securities and Investments Commission (ASIC), as the nation’s capital markets regulator, is looking at how it needs to integrate the experience of the Global Financial Crisis (GFC) into its future policy settings, writes ASIC’s Greg Yanco.
So far, Australia has fared better in the financial crisis than most other countries, in terms of both its economy and financial markets. Our banks have not required the degree of Government support provided overseas and our stock market has been an impressive source of equity capital raising for listed companies.
Australia’s current approach to financial regulation and its underlying philosophy emanates from two major inquiries into deregulation. The Wallis and Campbell inquiries emphasised competition, efficiency, neutrality, integrity and fairness, as well as financial stability and prudence as key policy principles for financial regulation.
The aim was to lower the cost of capital and increase the availability of funding sources, and provide the basis for increasing Australia’s sustainable economic growth rate. The means of achieving this aim was allowing markets to work and participants to decide on investments in their own best interests after suitable disclosure. Regulatory intervention was focused on correcting market failure.
This was embodied in the 'twin peaks' model of regulation: the Australian Prudential Regulation Authority (APRA), which regulates deposit-taking institutions that the Government believes are sufficiently systemically important to warrant prudential regulation of their business models, and ASIC, which regulates securities and investments, guided by the principles flowing from the Wallis and Campbell inquiries and reflected in the Corporations Act.
The Corporations Act sets out rules around disclosure and preventing market abuse, but does not intervene in business models or risk.
Doing its job
ASIC is acting decisively in exercising its powers to regulate market conduct and disclosure as part of its work to rebuild and maintain confidence in the integrity of our markets.

One of the ways we are doing this is through our work on short selling, a practice that gained increased prominence internationally last year.
Around September 2008, during a period of heightened turbulence in world financial markets, ASIC took steps to require disclosure of gross short sales. This was in the context of concern that false rumours were being spread to drive prices down to the benefit of short sellers. Shortly afterwards, following international developments, among them temporary bans on the practice in a number of countries including Australia, the UK and the US, we foreshadowed how disclosure rules would be an ongoing requirement once the ban was lifted.
The disclosure regime introduced by ASIC placed a positive obligation on brokers to ask their clients whether a sale was short. One means of recording this information was the FIX protocol and, with the help of the FPL Asia Pacific Regional Committee and major brokers, we agreed messaging standards and protocols. The FIX Protocol tag 54 values were amended for the Australian market to identify a long sale, a covered short sale and a covered short sale exempt. Gross short sales are reported to the ASX and disclosed to the market, providing additional transparency on the amount of short selling in Australian securities.
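
As a purely illustrative sketch, the snippet below builds a simplified order message body in which tag 54 (Side) carries the short sale flag. It uses the standard FIX Side values (2 = Sell, 5 = Sell short, 6 = Sell short exempt) as stand-ins; the exact value assignments agreed for the Australian market are not reproduced here, and session-level fields such as BeginString, BodyLength and CheckSum are omitted.

SOH = "|"   # human-readable stand-in for the FIX SOH (0x01) field delimiter

def new_order_single(symbol, qty, side):
    # Simplified NewOrderSingle message body.
    fields = [
        ("35", "D"),        # MsgType = NewOrderSingle
        ("55", symbol),     # Symbol
        ("38", str(qty)),   # OrderQty
        ("54", side),       # Side: "2" long sale, "5" covered short, "6" short exempt (illustrative)
        ("40", "1"),        # OrdType = Market
    ]
    return SOH.join(f"{tag}={value}" for tag, value in fields)

print(new_order_single("BHP", 1000, side="5"))   # flagged as a covered short sale
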
Regulations lead the way
The Australian Government has now finalised regulations that will govern short selling on a permanent basis.

The regulations effectively replicate the existing gross reporting requirements for short sales under the current regime introduced by ASIC. In addition, the legislation will require short sellers to report their net short positions to ASIC within three business days of those positions being taken. ASIC will introduce, by way of a class order, a threshold so that the net reporting obligation does not apply to small short positions. The short seller continues to report their short position each day until the position falls below the threshold, and ASIC will publish the total short positions for each product on the day after receiving this information. The net reporting requirement is expected to take effect from 1 April 2010.
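
As a rough sketch of how the daily obligation might be applied operationally, the snippet below filters a day's net short positions against a threshold; the threshold figure is a placeholder, not the level ASIC will set by class order.

THRESHOLD = 100_000   # placeholder reporting threshold; the actual level will be set by ASIC

def reportable_positions(net_short_positions, threshold=THRESHOLD):
    # net_short_positions: {product: net short position as at end of day}
    return {p: q for p, q in net_short_positions.items() if q >= threshold}

print(reportable_positions({"XYZ": 250_000, "ABC": 40_000}))   # only XYZ must be reported
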
Disclosure is fundamental to these new regulations and ASIC is undertaking work with stakeholders to ensure the practical aspects of the new regime strike the right balance between regulation and efficiency.
One way we are seeking to strike this balance is to use the FIX protocol for reporting under the new regime. One of the reasons we chose FIX was the ease with which we were able to implement changes using it during the short selling ban.
We also wanted to minimise the reporting burden of market participants by using a messaging protocol that was already widely accepted. This means that while there will be additional reporting required, at least it can be done through existing and established market infrastructure and be communicated in an internationally recognised format.
The task of implementing the net reporting requirement by 1 April 2010 is nonetheless quite large and involves a number of facets ranging from changes to policies and procedures on trading room floors through to information communications technology considerations and modifications.
To assist us in adapting the FIX protocol to meet the new requirements, ASIC has retained FIX expert John Cameron. John will assist us with vendor relationships, design issues and the implementation of a streamlined client interface.