ai-governance mas-mindforge insurance-technology risk-management actuarial-science

Your GLM Is AI Now: How the MAS MindForge Definition Changes What Insurers Must Govern

· 5 min read

Under the MAS MindForge framework, a generalised linear model calibrated to your claims experience is classified as AI. So is a clustering algorithm used in experience analysis. Most insurers and actuarial teams are unaware of this, which means their AI inventory — and their governance programme — starts with a gap on day one. The definition matters more than most practitioners realise, and getting it wrong creates compliance exposure before any governance work has even begun.

The Definition

The MindForge Consortium — 24 financial institutions coordinated by the Monetary Authority of Singapore — published a 173-page AI governance handbook that opens with a definitional question most organisations skip past too quickly: what actually counts as AI?

The framework's answer is precise. AI includes models or systems that learn and/or infer from inputs to generate outputs such as predictions, recommendations, content, or decisions. The critical qualifier sits at the other end: if outputs are based solely on predefined rules or programming logic, the system is not AI.

Two words carry the weight here: learn and infer. A system that fits parameters to data is learning. A system that applies a trained model to new inputs is inferring. Both are in scope. The output type — whether it is a price, a flag, a recommendation, or a generated document — is secondary. What matters is whether the system derived its behaviour from data rather than from explicit human-authored rules.
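To make those two verbs concrete, here is a minimal sketch (toy data, scikit-learn chosen purely for brevity) of where learning and inference happen in even a very simple model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy claims data: amounts in $000s and historical referral outcomes.
X = np.array([[1.2], [15.0], [0.8], [22.0], [9.5], [30.0]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)                    # "learn": parameters estimated from data

new_claim = np.array([[18.0]])
print(model.predict(new_claim))    # "infer": trained model applied to a new input
```

The fit call is the learning step; the predict call is the inference step. Either one brings a system into scope.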

What Counts as AI

Under this definition, the following are AI systems for governance purposes:

  • Logistic regression and GLMs trained on historical data. A motor pricing GLM fitted to claims experience is learning from data. It is AI — the category that surprises most actuarial teams.
  • Machine learning models of any architecture — gradient boosting, random forests, neural networks, support vector machines.
  • Deep learning and large language models, including any GenAI tooling embedded in internal workflows.
  • Computer vision and OCR systems that use trained models to classify or extract information from documents.
  • AI agents — systems that plan and act across multiple steps using model outputs to drive decisions.
  • Any model that was calibrated, trained, or fitted using observed data, regardless of how long it has been in production or how simple its architecture appears.

The last point is where most inventories go wrong. Age and simplicity do not exempt a model. A logistic regression from 2017 is as much AI under this framework as a transformer model deployed last month.

What Does Not Count

The exclusions are equally important for scoping purposes:

  • Rule-based software where outputs follow logic written explicitly by a developer or analyst. If a human authored every decision branch, it is not AI.
  • Excel macros and formula-driven models, including most traditional actuarial projection models where assumptions are fixed inputs rather than model outputs.
  • Keyword chatbots that route based on string matching without any trained component.
  • Predefined data processing pipelines — ETL, validation rules, threshold alerts — where no learning occurs.
  • Traditional robotic process automation (RPA) that automates click-sequences without an ML component. The moment an RPA tool incorporates a trained classifier or NLP model, that component crosses into AI scope.

The dividing line is always the same question: did a human write every rule that produces the output, or did a system derive those rules from data?
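For contrast, a sketch of a system on the not-AI side of that line: a keyword router in which a human authored every branch (routes and queue names are illustrative).

```python
# Every branch below was written by a person; nothing is estimated from data.
# Under the MindForge definition, this is not AI.
ROUTES = {"claim": "claims_team", "policy": "policy_admin", "invoice": "finance"}

def route_message(message: str) -> str:
    text = message.lower()
    for keyword, queue in ROUTES.items():
        if keyword in text:
            return queue
    return "general_inbox"  # the fallback is also a predefined rule

print(route_message("I need to update my policy address"))  # -> policy_admin
```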

The Grey Zones

This is where the practical difficulty sits, and where the MindForge definition does its most useful work.

Pricing GLM calibrated from experience data. This is AI. The model coefficients were not chosen by a human — they were estimated from observed claims, exposures, and risk factors. The actuary designed the structure and selected variables, but the parameters came from data. That is the learning step that brings it into scope.
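As a sketch of that learning step, this is roughly what the fit looks like in code. The data is synthetic, statsmodels is one library choice among several, and the variables are placeholders rather than a recommended rating structure:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic motor portfolio (illustrative variables only).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "driver_age": rng.integers(18, 75, n),
    "vehicle_group": rng.integers(1, 6, n),
    "exposure": rng.uniform(0.2, 1.0, n),
})
lam = 0.1 * df["exposure"] * np.exp(0.02 * (40 - df["driver_age"])
                                    + 0.1 * df["vehicle_group"])
df["claims"] = rng.poisson(lam)

# The actuary designs this structure; the coefficients come from the data.
X = sm.add_constant(df[["driver_age", "vehicle_group"]])
glm = sm.GLM(df["claims"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"]))
fit = glm.fit()        # the estimation step that brings the model into scope
print(fit.params)      # fitted frequency relativities (log scale)
```

Nothing about the model's simplicity changes the classification: the fit call is the learning step.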

Experience analysis using clustering algorithms. If you are using k-means, hierarchical clustering, or DBSCAN to identify homogeneous risk groups in your portfolio, you are using AI. The cluster assignments are inferred from data, not defined by rule.
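A minimal sketch of that inference, clustering hypothetical portfolio features with scikit-learn (the features and cluster count are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical experience-study features: age and observed claim frequency.
rng = np.random.default_rng(1)
features = np.column_stack([
    rng.uniform(18, 75, 300),
    rng.uniform(0.0, 0.3, 300),
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

# The group boundaries were derived from the data, not written as rules.
print(np.bincount(labels))  # size of each inferred risk group
```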

Prophet or IFRS 17 projection model with fixed assumptions. Not AI. If an actuary sets the discount curve, lapse rates, and mortality assumptions manually — as point estimates or deterministic tables — and the model applies those inputs mechanically, no learning is occurring. The model is a calculator, not a learner.
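In code terms such a model is a calculator: fixed inputs applied mechanically. A minimal sketch, with placeholder assumption values:

```python
# All assumptions are human-set point estimates; the model only applies them.
# Under the MindForge definition, no learning occurs, so this is not AI.
DISCOUNT_RATE = 0.03       # set by the actuary (illustrative value)
LAPSE_RATE = 0.05          # fixed deterministic input
ANNUAL_CASHFLOW = 1_000.0

def present_value(years: int) -> float:
    pv, in_force = 0.0, 1.0
    for t in range(1, years + 1):
        in_force *= (1 - LAPSE_RATE)
        pv += ANNUAL_CASHFLOW * in_force / (1 + DISCOUNT_RATE) ** t
    return pv

print(round(present_value(10), 2))
```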

IFRS 17 model where assumptions are set by a machine learning output. This is a hybrid, and the framework handles it cleanly: the ML component that sets assumptions is AI and is in scope for governance. The projection engine that consumes those assumptions is not AI and sits outside scope. You govern the part that learns, not the part that calculates.
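A sketch of where that boundary falls in code, using a hypothetical gradient-boosted lapse model feeding a deterministic engine (all names, data, and values are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

# --- In scope: the component that learns --------------------------------
X = rng.uniform(0, 1, (200, 3))                         # policy features (toy)
y = 0.03 + 0.05 * X[:, 0] + rng.normal(0, 0.005, 200)   # observed lapse rates

assumption_model = GradientBoostingRegressor().fit(X, y)      # AI: governed
lapse_assumption = float(assumption_model.predict(X[:1])[0])

# --- Out of scope: the engine that calculates ---------------------------
def project_pv(lapse: float) -> float:
    # Deterministic engine: consumes the assumption mechanically (not AI).
    return sum(1_000.0 * (1 - lapse) ** t / 1.03 ** t for t in range(1, 11))

print(project_pv(lapse_assumption))
```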

Vendor platform with an embedded AI feature. Many actuarial platforms, claims systems, and underwriting tools now embed AI features — anomaly detection, pricing suggestions, document extraction. The non-AI core product does not exempt those features. Each AI component is independently in scope, and governance obligations do not disappear because the model was built by a third party.

Automated experience factor updates. If the update mechanism applies a formula with fixed weights, it is not AI. If it re-fits a model, it is. The distinction turns on whether the system is recalibrating parameters or re-running a calculation with new inputs.
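The same distinction in miniature, with an illustrative credibility weight on the rule side and a scikit-learn Poisson model standing in for the re-fitted side:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Not AI: the formula and its weight are human-authored and never change.
def update_factor(observed: float, prior: float, credibility: float = 0.3) -> float:
    return credibility * observed + (1 - credibility) * prior

# AI: the "update" re-estimates parameters from the new experience data.
def refit_factor_model(X: np.ndarray, y: np.ndarray) -> PoissonRegressor:
    return PoissonRegressor().fit(X, y)   # recalibration is a learning step

print(update_factor(observed=0.12, prior=0.10))   # same weights every run
```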

Grey zones require genuine engagement with how each model actually works, not surface-level classification based on what a model is called or how it was first described to stakeholders.

Why This Matters

Governance programmes cannot be scoped until the inventory is complete. The MAS framework is explicit: organisations are expected to identify all AI systems, assess the risk those systems carry, and apply controls proportionate to that risk. A system that was not identified cannot be assessed. A system that was not assessed cannot be controlled.

Many insurers running governance workstreams have incomplete inventories — not from carelessness, but from applying a narrower definition than the framework requires. Pricing models, experience studies, and assumption-setting tools are often missing from the first draft of an AI register.

The definition also has downstream implications for model risk management. Once a GLM is classified as AI, it falls under AI governance obligations on top of existing model risk requirements — affecting validation thresholds, monitoring cadence, and documentation standards.

The Practical Next Step

Start with the definition, not the inventory. Agree internally on what counts, then walk the MindForge criterion through your model landscape: does this system learn or infer from data? Apply it to pricing models, reserving tools, experience analysis, and vendor platforms with embedded analytics.
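One way to make that walk-through concrete is a register that records the answer to the scoping question for every system. A minimal sketch, with hypothetical fields and example entries:

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    name: str
    learns_or_infers_from_data: bool   # the single MindForge scoping question
    notes: str = ""

inventory = [
    RegisterEntry("Motor pricing GLM", True, "coefficients fitted to claims"),
    RegisterEntry("IFRS 17 projection engine", False, "fixed assumptions"),
    RegisterEntry("Vendor OCR feature", True, "trained extraction model"),
]

for entry in inventory:
    scope = "AI - in scope" if entry.learns_or_infers_from_data else "not AI"
    print(f"{entry.name}: {scope}")
```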

Where you find hybrid systems, document the boundary explicitly. Governance applies to the learned component, not the calculation engine that consumes its outputs.

The 173-page handbook covers much more — risk tiering, human oversight, monitoring obligations, and vendor management. But none of that work is anchored without first knowing what you are governing.

If your organisation is working through AI governance scoping, I’d welcome the conversation. Get in touch.