MAS’s AI proposals “overly broad”, industry says

The Monetary Authority of Singapore (MAS) is proposing to keep a tight rein on how the financial institutions it regulates use AI, prompting industry complaints that its approach is overly prescriptive.

MAS takes a broad definition of AI in its ‘Consultation Paper on Guidelines on Artificial Intelligence Risk Management’ – “overly broad”, in the view of the Asia Securities Industry & Financial Markets Association (ASIFMA), which argues that applying full governance requirements to low-impact internal tools would be disproportionate. It suggests instead that MAS adopt the Organisation for Economic Co-operation and Development’s (OECD) definition of an AI system and avoid using phrases such as “use case”, “model” and “system” interchangeably.

The Investment Company Institute (ICI) also voiced concerns, calling MAS’s proposed regulatory guidelines “detailed and relatively prescriptive”. Existing technology-agnostic rules and regulations, it said, are sufficiently adaptable to cover AI and emerging technology. If the regulator’s proposals are adopted, the association warned, this should be done in a “proportionate and flexible manner”.

For its part, the FIX Trading Community recommended creating a globally recognised definition and taxonomy of what AI means in trading.

As it stands, MAS would regulate everything from basic AI models that turn assumptions and input data into outputs such as estimates, decisions and recommendations, to AI models and systems that learn or infer from inputs and produce outputs that could influence physical or virtual environments.

The paper’s proposals primarily focus on risk management, with the regulator concerned about governance gaps, insufficient human oversight, and the lack of transparency that can accompany poor or biased outcomes resulting from inadequate data governance. These shortcomings could lead to financial losses, disruption of critical operations and harm to customers, it suggested – while generative AI hallucinations could make decisions harder to explain, and AI agents could execute dangerous actions. Also of note is the risk of overreliance on third-party providers.

ASIFMA was broadly supportive of the proposals, stating, “ASIFMA appreciates MAS’ efforts to translate high-level principles into practical supervisory expectations that balance innovation with strong governance.”

Jim Kaye, executive director of the FIX Trading Community, commented, “This was an immensely useful exercise in teasing out both overarching themes and technical challenges. It’s allowed us to propose some practical measures that will be very helpful to firms as we, as an industry, meet the risks and opportunities that AI presents.”

FIX highlighted the need to consider the downstream implications of large language models within algorithms and calculators, to put cross-functional risk management committees in place within organisations, and to strengthen the standards that MAS has proposed. This would include taking into account the risk dimensions of impact, complexity, reliance, interconnectedness/contagion potential, and change sensitivity/nonlinearity.

What counts as a significant or material change should also be revised, they added, “so that retraining, data refreshes, fine-tuning, prompt/policy-layer changes, and infrastructure/latency changes are each evaluated under a clear, risk-based materiality test, that may require re-approval and re-testing.”

FIX went on to advocate for “internal and regulator-facing disclosure patterns analogous to food labelling: what data was used, training approach, evaluation results, mitigations, limitations, intended use, and user-data protection.”

Further to this point, AI inventories should be created and maintained at a granularity beyond that seen in typical algo inventories, they stated.

Following the industry feedback window, which closed on 31 January, MAS will develop guidelines for how financial institutions should use AI.

©Markets Media Europe 2025