Create machine learning models that run inside FrameworX using Script Classes with ML.NET. The AI writes the C# code, connects it to live tags, creates output tags for predictions, and configures model persistence — all within the FrameworX scripting engine.

Guiding Principle: Always Build Complete

Always generate the full production-ready implementation. Every ML Script Class includes model persistence (SaveModel), startup reload (LoadModel), and the ServerStartup wiring — no stripped-down versions.
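As a rough illustration of what "complete" means here, a minimal persistence skeleton is sketched below. It assumes the Microsoft.ML package is referenced; the class name, model path, and method bodies are placeholders, and the sub-skills contain the authoritative versions.

```csharp
// Sketch only: the save/load/startup pattern every ML Script Class should carry.
// The path is a placeholder; a real solution resolves its own solution folder.
using System.IO;
using Microsoft.ML;

public class MLModelBase
{
    static readonly MLContext Ctx = new MLContext();
    static ITransformer Model;
    const string ModelPath = @"C:\FrameworX\Solution\mlmodel.zip"; // placeholder

    // Called after training so predictions survive a server restart.
    public static void SaveModel(IDataView trainingData)
    {
        Ctx.Model.Save(Model, trainingData.Schema, ModelPath);
    }

    // Called from the ServerStartup wiring: reload the last trained model.
    public static void LoadModel()
    {
        if (File.Exists(ModelPath))
            Model = Ctx.Model.Load(ModelPath, out _);
    }
}
```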

One model per interaction. Always create exactly one Script Class ML model per session, targeting one sensor or one prediction goal. Do NOT create multiple ML classes unprompted — even if the solution has many tags. If the user wants additional models, they will ask in follow-up.


What This Skill Does

Build ML.NET models as FrameworX Script Classes. The AI generates the full C# ML pipeline (data classes, training, prediction, tag integration) based on the user's requirements. Models run server-side, read from input tags, and write predictions to output tags.

Input Tags -> Script Class (ML.NET) -> Output Tags
   |              |                         |
 Live data    Train / Predict           Predictions, scores,
 from UNS     Model persisted to        anomaly flags, forecasts
              solution folder            Alarms / Dashboard

When to Use This Skill

Use when:

  • The user wants to add machine learning to a FrameworX solution
  • The user mentions "ML", "prediction", "anomaly detection", "forecasting", "regression", "classification", or "ML.NET"
  • The user mentions industrial goals like "predictive maintenance", "quality control", "fault prediction", "demand forecasting", or "energy prediction"
  • The user wants to run ML models on-premise inside FrameworX scripts
  • The user asks "how do I use ML.NET in FrameworX" or "how do I create an ML script"

Do NOT use when:

  • The user wants cloud-based AI or LLM integration (not ML.NET)
  • The user only needs simple threshold alarms (no ML needed — use Alarms module)
  • The user wants to use Python for ML (ML.NET is C# only; Python ML libraries are available for FrameworX, but not covered here)

Prerequisites

  • Solution open with input tags already created (the tags the model will read from)
  • Tags should have live or simulated data flowing (Value Simulator or real device)
  • If starting from scratch, apply the New Solution skill first to create tags and a data source

MCP Tools and Tables

Category | Items
---------|------
Tools    | get_table_schema, write_objects, get_objects, list_elements, search_docs
Tables   | UnsTags, ScriptsClasses, ScriptsExpressions, ScriptsTasks
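For instance, creating an output tag typically goes through write_objects on the UnsTags table. The payload below is purely illustrative — the field names are assumptions, so always call get_table_schema first to confirm the real column names and types.

```json
{
  "table": "UnsTags",
  "objects": [
    { "Name": "WellPad_A/ML/AnomalyScore", "Type": "Double"  },
    { "Name": "WellPad_A/ML/IsAnomaly",    "Type": "Digital" }
  ]
}
```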


Step 0: Identify the ML Task

HARD STOP — Do not create any tags, classes, tasks, or expressions until Step 0 is complete.
The ML task type and input tags must be confirmed before writing any objects. Proceeding without this information produces incorrect pipelines that are costly to fix.

Before writing any code, the AI must always ask the user the following questions — no exceptions, regardless of how much context is available. Do not silently choose for the user.

Mandatory questions — always ask all three

1. Which ML algorithm do you want to use?

  • Anomaly Detection — SSA Spike — detects sudden outliers, spikes, or abnormal readings
  • Anomaly Detection — SSA ChangePoint — detects gradual drift or regime shifts
  • Time-Series Forecasting — SSA — predicts future values from historical patterns
  • Regression — FastTree — predicts a continuous value from multiple inputs
  • Binary Classification — FastTree — predicts yes/no outcomes from multiple inputs

Not sure which to pick? Describe what you want to achieve and I'll recommend the best fit.
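For orientation only, an SSA spike detector in ML.NET is built roughly as sketched below. Window sizes and the confidence value are illustrative, the class and column names are examples, and exact parameter types vary across Microsoft.ML versions — the sub-skill holds the authoritative pipeline.

```csharp
// Sketch of an SSA spike-detection pipeline (Microsoft.ML.TimeSeries).
using Microsoft.ML;

public class SensorSample { public float Value; }
public class SpikeResult  { public double[] Prediction; } // [alert, score, p-value]

public static class SpikeDemo
{
    public static void Build(System.Collections.Generic.IEnumerable<SensorSample> history)
    {
        var ctx = new MLContext();
        var data = ctx.Data.LoadFromEnumerable(history);

        var pipeline = ctx.Transforms.DetectSpikeBySsa(
            outputColumnName: nameof(SpikeResult.Prediction),
            inputColumnName: nameof(SensorSample.Value),
            confidence: 95,            // illustrative values throughout
            pvalueHistoryLength: 30,
            trainingWindowSize: 90,
            seasonalityWindowSize: 10);

        var model = pipeline.Fit(data);
        // model.Transform(...) then enumerate SpikeResult to read alerts/scores.
    }
}
```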

After Q1 is answered, adapt Q2 and Q3 based on the chosen algorithm:

Anomaly Detection — SSA Spike or ChangePoint:

2. Which single tag member should be monitored for anomalies?
(e.g., OilGas_Co/WestTexas_Field/WellPad_A/Well_A01.TubingPressure — full path + member name)

3. The output will be AnomalyScore, IsAnomaly, and LastPrediction tags under <AssetPath>/ML/. Confirm the asset path prefix, or specify a different output folder.

Time-Series Forecasting — SSA:

2. Which single tag member should be forecast?
(e.g., OilGas_Co/.../Tank_01.Level — full path + member name)

3. How many steps ahead should the forecast horizon be? What does the value represent?

Regression — FastTree:

2. Which 2–5 feature tags are the inputs, and which tag is the label (the value to predict)?

3. What does the predicted value represent (unit/context)?

Binary Classification — FastTree:

2. Which 2–5 feature tags are the inputs, and which tag is the boolean label?

3. What does the yes/no outcome represent?

Do not proceed past Step 0 until all three questions are answered.

Goal-to-algorithm mapping (use as suggestions only — always confirm with user)

User Goal | Suggested Algorithm
----------|--------------------
Predictive maintenance — single sensor | Anomaly Detection (Spike)
Predictive maintenance — multiple sensors | Binary Classification
Detect sensor failures / outliers | Anomaly Detection (Spike)
Detect gradual drift or process shift | Anomaly Detection (ChangePoint)
Predict future values | Time-Series Forecasting (SSA)
Energy / consumption modeling | Regression
Quality control pass/fail | Binary Classification
Fault prediction yes/no | Binary Classification
Production / demand forecasting | Time-Series Forecasting (SSA)
Process output from multiple inputs | Regression

Required information before proceeding

Information | Why
------------|----
Input tag path(s) | The model reads from these tags
ML algorithm | Determines the ML.NET pipeline to generate
Output semantics | What the predictions mean (anomaly score, forecast value, etc.)


After Step 0: Load the Specific Sub-Skill

Once the algorithm is chosen, load and follow the corresponding sub-skill for complete implementation steps (output tags, Script Class code, trigger, verification, pitfalls):

Algorithm | Sub-Skill to Load
----------|------------------
Anomaly Detection — SSA Spike or ChangePoint | search_docs(fetch_url='https://docs.tatsoft.com/display/FX/Skill+ML.NET+Anomaly+Detection')
Time-Series Forecasting — SSA | search_docs(fetch_url='https://docs.tatsoft.com/display/FX/Skill+ML.NET+Forecasting')
Regression — FastTree | search_docs(fetch_url='https://docs.tatsoft.com/display/FX/Skill+ML.NET+Regression')
Binary Classification — FastTree | search_docs(fetch_url='https://docs.tatsoft.com/display/FX/Skill+ML.NET+Classification')

Each sub-skill is fully self-contained with output tags, complete C# class examples, MCP write commands, trigger configuration, and pitfalls specific to that ML task.


Decision Guide

Scenario | ML Task | Trigger | Notes
---------|---------|---------|------
Single sensor, detect outliers/spikes | Anomaly Detection (Spike) | Expression OnChange | Fast, one tag in / flags out
Single sensor, detect gradual drift | Anomaly Detection (ChangePoint) | Expression OnChange | For "drift" or "regime change"
Single sensor, predict future values | Forecasting (SSA) | Expression OnChange or Periodic | Outputs forecast + confidence bounds
Multiple sensors → one continuous value | Regression | Task Periodic | Energy prediction, process modeling
Multiple sensors → yes/no | Binary Classification | Task Periodic | Fault prediction, quality pass/fail
User says "predictive maintenance" + single sensor | Anomaly Detection | Expression OnChange | Most common PdM entry point
User says "predictive maintenance" + multiple sensors | Binary Classification | Task Periodic | Predicts failure from combined inputs
User says "quality control" | Binary Classification | Task Periodic | Pass/fail prediction
User says "forecast" or "predict demand" | Forecasting (SSA) | Expression OnChange or Periodic | Time-series based
User says "you decide" + single sensor | Anomaly Detection | Expression OnChange | Safest default for monitoring
User says "you decide" + multiple sensors | Regression | Task Periodic | Most general multi-input approach
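As a hypothetical illustration of the "Expression OnChange" trigger, the wiring lives in the ScriptsExpressions table. The field names below are assumptions, not the real schema — confirm them with get_table_schema before calling write_objects.

```json
{
  "table": "ScriptsExpressions",
  "objects": [
    {
      "Name": "RunAnomalyCheck",
      "Expression": "AnomalyModel.OnNewSample()",
      "Trigger": "WellPad_A/Well_A01.TubingPressure",
      "Execution": "OnChange"
    }
  ]
}
```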