FIX targets ‘machine-usable’ execution tags for AI agents

Buy side traders say they can spend months deciphering broker-specific FIX tags before they can even think of using AI tools for TCA, research and trading.

FIX is now pushing for machine-usable execution metadata. Its AI working group is building on the trio of buy-side-mandated, algo-specific flags, tags 29, 30 and 851. The framework would extend tagging to algo certification, urgency, and whether, and in what capacity, execution decisions were machine-led.

For Hanane Dupouy-Moualil, a director in equity algorithmic trading at BSG and a FIX Trading working group contributor, this is what drives the industry’s need for “machine-usable” tags rather than yet another best-execution narrative. Industry professionals say the use of agents in trading remains cautious and mostly exploratory, with firms still working through governance, definitions and safe ways to plug agentic AI into workflows.

Asset managers trying to operationalise AI — and in particular large language models — in trading are running into an old problem: execution metadata arrives with non-mandated, non-standardised, broker-specific tags. During a FIX working-group discussion on AI, a buy-side firm said it took six months to exhaustively map broker FIX tags into something an AI agent could use reliably.

“Everybody knows the idiosyncrasies,” says Rebecca Healey, managing partner of Redlap Consultancy and a market structure specialist involved in the work. “If I get FIX tag X from Broker A, it means this; if I get it from Broker B, it means that. If you’re building a model and you give it to an agent, those differentials mean the model falls apart straight from the start.”
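The dialect problem Healey describes can be sketched in code. This is a hypothetical illustration only: the broker names, tag choices and value mappings below are invented, and real translation tables would come from each broker's FIX rules of engagement. The point is that normalisation must happen before any model or agent sees the data.

```python
# Hypothetical sketch: normalising broker-specific FIX dialects into one
# canonical schema. Broker names and value encodings are invented for
# illustration; real dialect tables come from each broker's FIX spec.

# Per-broker translation tables: tag -> (raw value -> canonical value)
DIALECTS = {
    "BROKER_A": {"851": {"1": "added", "2": "removed", "3": "routed_out"}},
    "BROKER_B": {"851": {"A": "added", "R": "removed", "X": "routed_out"}},
}

def normalise(broker: str, exec_report: dict) -> dict:
    """Map one broker's execution report into canonical fields."""
    table = DIALECTS[broker]
    out = dict(exec_report)  # keep untranslated tags as-is
    for tag, mapping in table.items():
        if tag in exec_report:
            out[tag] = mapping[exec_report[tag]]
    return out

# The same economic event, encoded differently by two brokers,
# normalises to the same canonical value:
a = normalise("BROKER_A", {"851": "1", "30": "XLON"})
b = normalise("BROKER_B", {"851": "A", "30": "XLON"})
assert a["851"] == b["851"] == "added"
```

Building and maintaining these tables by hand is what the buy-side firm said took six months; homogeneous tags would make the translation layer unnecessary.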

Dupouy-Moualil thinks the first building block is not a grand multi-agent architecture but a single, very clear use case.

“Before you talk about multi-agent systems, you need a very precise use case for one agent,” she explains. “For example, an agent that reads a client order and the FIX tags coming out of the OMS, understands what each tag means, and maps that to a set of internal execution strategies. Once that mapping is clear, that agent can propose a strategy and start an execution workflow.”

In this framework, an agent can interpret a set of conditions from a trader’s order management system (OMS) and convert them into a broker-specific set of smart order router (SOR) and algo instructions. Another agent could monitor market regimes and suggest tweaks to the OMS, analysing both incoming market data and the execution reports coming back from the execution management system. In future, it would be helped by new fields for urgency and algo certification.

Industry professionals told Global Trading that higher autonomy increases complexity and risk.

The second challenge is predictability. Large language models are probabilistic; trading systems are expected to behave deterministically.

“Where you are putting money at risk, you need deterministic outputs,” Dupouy-Moualil says. “We know LLMs are not deterministic, so you have to constrain them: ask them to return a JSON or XML object with a very specific schema. You’ll never get 100% certainty — hallucinations exist — but at least you move closer to something you can trust in production.”
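The constraint pattern Dupouy-Moualil describes can be sketched simply: ask the model for a JSON object against a fixed schema, and let nothing reach the execution workflow until that object validates. The field names and allowed strategy values below are illustrative assumptions, not any firm's actual schema.

```python
# Minimal sketch of constraining a probabilistic model: the LLM must
# return JSON matching a fixed schema; anything else is rejected before
# it can trigger an execution workflow. Fields and values are invented.
import json

ALLOWED_STRATEGIES = {"VWAP", "TWAP", "POV", "IS"}

def parse_agent_reply(raw: str) -> dict:
    """Reject anything that is not a well-formed, in-schema proposal."""
    obj = json.loads(raw)  # raises on malformed JSON
    if set(obj) != {"strategy", "urgency", "reason"}:
        raise ValueError(f"unexpected fields: {sorted(obj)}")
    if obj["strategy"] not in ALLOWED_STRATEGIES:
        raise ValueError(f"unknown strategy: {obj['strategy']}")
    if not (isinstance(obj["urgency"], int) and 1 <= obj["urgency"] <= 5):
        raise ValueError("urgency must be an int in 1..5")
    return obj

# A conforming reply passes; a hallucinated strategy is stopped here,
# not downstream in production:
ok = parse_agent_reply('{"strategy": "VWAP", "urgency": 3, "reason": "low ADV"}')
assert ok["strategy"] == "VWAP"
try:
    parse_agent_reply('{"strategy": "MOONSHOT", "urgency": 3, "reason": "?"}')
except ValueError:
    pass  # rejected, as intended
```

Validation does not make the model deterministic, but it bounds what a non-deterministic model can do, which is the point of the pattern.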

The need for control extends to the underlying data. FIX tags themselves, she argues, should be homogeneous so that clients can route the same order to different brokers without rewriting the meaning of the flags each time.

“FIX tags need to be homogeneous, because a client could end up sending the same order onto different brokers if there is a problem,” she says. “Brokers can map those FIX tags to their own internal strategies, but the tags and their meanings need to be consistent and well-explained so that everyone can process them in the same way.”

Current algo flagging revolves around tags 29, 30 and 851. Tag 29 (LastCapacity) indicates the capacity in which the last fill was executed: agent, principal or a cross. Tag 30 (LastMkt) indicates the market on which the last fill was executed. Tag 851 (LastLiquidityInd) specifies whether the last fill removed or added liquidity, or whether this is not known.
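Decoding the three tags on a fill is straightforward, which is part of why they were adoptable. The sketch below uses the standard FIX enumerations for tags 29 and 851; tag 30 carries a market identifier code and is passed through as-is. The `decode_fill` helper and its dict shape are illustrative.

```python
# Sketch of decoding the three established algo-flagging tags on a fill,
# using the standard FIX enumerations for tags 29 and 851.
LAST_CAPACITY = {"1": "agent", "2": "cross as agent",
                 "3": "cross as principal", "4": "principal"}
LAST_LIQUIDITY = {"1": "added", "2": "removed", "3": "routed out"}

def decode_fill(fill: dict) -> dict:
    """Translate raw tag values on one fill into readable flags."""
    return {
        "capacity": LAST_CAPACITY.get(fill.get("29"), "unknown"),
        "market": fill.get("30", "unknown"),
        "liquidity": LAST_LIQUIDITY.get(fill.get("851"), "unknown"),
    }

fill = {"29": "1", "30": "XPAR", "851": "2"}
assert decode_fill(fill) == {"capacity": "agent", "market": "XPAR",
                             "liquidity": "removed"}
```

Because the enumerations are fixed by the standard, every consumer decodes a fill the same way; this is the property the working group wants to extend to certification, urgency and machine-involvement fields.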

A head of dealing at an investment manager and co-chair of FIX’s EMEA investment management group recalls that these fields were only widely adopted when asset managers made them non-optional.

“The original three-tag solution — 29, 30 and 851 — only worked because the buy side drew a line: no tags, no flow,” he says. “Uptake is now very high, and it’s been a real success for the community.”

For him, the AI ramp-up is as much about governance as technology.

“There’s an old maxim: you can’t manage what you don’t measure,” he says.

Transaction cost analysis remains the primary tool for linking execution decisions to portfolio outcomes, but only if the inputs fed to the models are understood and calibrated for specific outcomes. “As AI moves into idea generation, portfolio construction and execution, we need to record the provenance of decisions, otherwise we fall into statistical inference traps.”

FIX has begun publishing recommended practices for European consolidated tapes and execution workflows.

Within the AI working group specifically, work is under way to repurpose existing tags, add new identifiers such as an algo certification ID, and build a future layer for AI-generated audit tags, agent intent and agent roles along the workflow. The idea is not to encode every model choice, but to give both humans and machines a consistent trail of “what we thought we were doing” and “what actually happened” at each step.

Dupouy-Moualil sees two speeds in the market. “Inside banks and asset managers, people are already using agents to automate pieces of their daily work, mostly to pull data from different sources and support decisions,” she says. “The standardisation of FIX tags is on a different, slower track, because it’s riskier and it affects a lot more stakeholders.”


©Markets Media Europe 2025
