---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags, data collection, ML.NET anomaly detection model, prediction tags, alarms, dashboard"
version: "1.1"
min_product_version: "10.1.3"
author: "Tatsoft"
---
## Overview

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. The AnomalyML Library class collects live sensor data, auto-trains an SR-CNN anomaly detection model, and exposes results through methods. Expressions wire these methods to output tags, which feed alarms and a dashboard.
**Pipeline Architecture**

```text
Sensor Tags → Expression calls Check() → AnomalyML class → Expressions read Get methods → Output Tags → Alarms → Dashboard
      ↑                                                                                       ↓
Devices/Simulator                                                                  Operator sees anomalies
```
## When to Use

Use this skill when:

- You need deterministic, on-premise anomaly detection on live sensor data
- You want ML predictions wired to alarms and dashboards without cloud dependencies

Do NOT use this skill when:

- Simple threshold alarms are enough (use skill-alarm-pipeline)

## Prerequisites

- Complete skill-getting-started first
- Optional: a pre-trained model .zip file (see ML.NET Example docs or Variation A below)

| Category | Items |
|---|---|
| Tools | `get_table_schema`, `write_objects`, `list_elements`, `get_objects`, `get_state`, `browse_runtime_properties` |
| Tables | `UnsTags`, `ScriptsClasses`, `ScriptsExpressions`, `AlarmsGroups`, `AlarmsItems`, `DisplaysList` |
## Step 1: Create ML Output Tags

Before importing the ML class, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS under a `/ML/` subfolder.
First, inspect the tag table schema:

```text
get_table_schema('UnsTags')
```
Then create the tags with a `write_objects` call:

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "{Area}/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" },
    { "Name": "{Area}/ML/IsAnomaly", "Type": "Boolean", "Description": "True when ML model detects anomaly" },
    { "Name": "{Area}/ML/Baseline", "Type": "Double", "Description": "Expected normal value from ML model" },
    { "Name": "{Area}/ML/ModelReady", "Type": "Boolean", "Description": "True after ML model finishes training (~100 data points)" }
  ]
}
```
Replace `{Area}` with the actual sensor area path (e.g., `Plant/Reactor1`). The tag field name is `Type`, not `DataType`.
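For illustration only, with the sample area above, the first tag in the payload resolves as follows (`Plant/Reactor1` is just the example path from this step; substitute the real area):

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "Plant/Reactor1/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" }
  ]
}
```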
Key decisions:

- `/ML/` subfolder for clean separation from raw sensor data
- `Score` is continuous 0.0–1.0 (for trending); `IsAnomaly` is boolean (for alarms)
- `Baseline` shows what the model expects, so operators can see the deviation from normal
- `ModelReady` lets the UI show warmup status; the model needs ~100 data points before it starts scoring

## Step 2: Import the AnomalyML Library Class

FrameworX ships with a pre-built AnomalyML class in the Script Library. Always import it; do NOT write ML code from scratch.
Instruct the user to import the pre-built AnomalyML class from the Script Library into the solution's Scripts module.
The Library class uses `ClassContent: "MCPTool"` and `NamespaceDeclarations: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries`. ML.NET is pre-installed with FrameworX; no NuGet packages or external downloads are needed.
After import, the only customization needed is the SENSITIVITY parameters at the top of the class (`_windowSize`, `_threshold`, `_trainingBufferSize`). Defaults work well for most industrial processes at 1–5 second scan rates.
The class API:
| Method | Returns | Purpose |
|---|---|---|
| `Check(double)` | `double` | Feed sensor value; returns anomaly score. Call first each cycle. |
| `GetIsAnomaly()` | `bool` | True if the last `Check()` detected an anomaly |
| `GetScore()` | `double` | Anomaly severity (0.0 to 1.0) |
| `GetBaseline()` | `double` | Expected normal value |
| `IsModelReady()` | `bool` | True after training completes (~100 data points) |
| `GetAnomalies()` | `string` | MCP Tool: recent anomaly history for AI retrieval |
**Critical:** The class has NO tag references; it is pure .NET. All tag wiring is done via Expressions in Step 3. Do not add `@Tag.` references inside the class code.
## Step 3: Wire Expressions to Output Tags

Expressions connect the AnomalyML class methods to output tags. Each expression calls a method and writes the return value to a tag. All four expressions must use the same `Trigger` tag (the sensor being monitored).
Inspect the expression schema first:

```text
get_table_schema('ScriptsExpressions')
```
Then create the four ML expressions:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    { "ObjectName": "Tag.{Area}/ML/Score", "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/IsAnomaly", "Expression": "Script.Class.AnomalyML.GetIsAnomaly()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/Baseline", "Expression": "Script.Class.AnomalyML.GetBaseline()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/ModelReady", "Expression": "Script.Class.AnomalyML.IsModelReady()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" }
  ]
}
```
**Execution order matters:** The Score expression calls `Check()`, which runs the model. The other three expressions read results from that same prediction cycle via getter methods. All four must share the same `Trigger` so they execute together. Replace `{Area}` and `Temperature` with the actual tag paths.
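Concretely, using the sample area from Step 1 (`Plant/Reactor1` and a `Temperature` sensor are illustrative names only, not fixed values), the Score expression resolves to:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "ObjectName": "Tag.Plant/Reactor1/ML/Score",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "Trigger": "Tag.Plant/Reactor1/Temperature",
      "Label": "ML"
    }
  ]
}
```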
## Step 4: Configure the Anomaly Alarm

Create an alarm that triggers when the ML model detects an anomaly.
```text
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
```
Create the alarm group and item in one multi-table `write_objects` call:

```json
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [ { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" } ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [ { "Name": "AnomalyDetected", "Group": "MLAlarms", "TagName": "{Area}/ML/IsAnomaly", "Type": "Digital", "Description": "ML model detected anomaly" } ]
    }
  ]
}
```
## Step 5: Build the Dashboard

Build a display that shows sensor data alongside ML predictions.
```text
list_elements('Dashboard,TrendChart,TextBlock')
get_table_schema('DisplaysList')
```
Create the dashboard with `write_objects`:

```json
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart", "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.{Area}/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.{Area}/ML/Score", "Color": "#FFE74C3C" }
          ]
        },
        { "Type": "TextBlock", "Column": 0, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Score", "Label": "Anomaly Score" },
        { "Type": "TextBlock", "Column": 1, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Baseline", "Label": "Expected Baseline" },
        { "Type": "TextBlock", "Column": 0, "Row": 2, "LinkedValue": "Tag.{Area}/ML/IsAnomaly", "Label": "Anomaly Detected" },
        { "Type": "TextBlock", "Column": 1, "Row": 2, "LinkedValue": "Tag.{Area}/ML/ModelReady", "Label": "Model Ready" }
      ]
    }
  ]
}
```
## Validation

1. `get_objects('ScriptsClasses', names=['AnomalyML'])`: confirm the class exists and `ClassContent` is `MCPTool`
2. `get_state()`: check for compilation errors (`NamespaceDeclarations` must include the ML.NET namespaces)
3. `get_objects('ScriptsExpressions')`: confirm all four ML expressions are configured
4. While the model is still training, `Score` will be 0 and `ModelReady` will be false
5. After ~100 data points, `ModelReady` becomes true and `Score` starts reflecting real predictions
6. `browse_runtime_properties('Tag.{Area}/ML')`: verify the ML output tags are updating

## Common Mistakes

| Mistake | Why It Happens | How to Avoid |
|---|---|---|
| ML.NET assembly not found | `NamespaceDeclarations` missing or wrong | Import from Library (auto-includes declarations). Or manually set `NamespaceDeclarations` to `Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries` |
| No output for first few minutes | Model needs ~100 data points to train | This is expected. Check the `ModelReady` tag for warmup status |
| Class state lost on restart | ML model lives in process memory | A full runtime restart retrains from scratch; the model accumulates new data automatically. This is by design. |
| Understanding Score vs IsAnomaly | `Score` is continuous (0.0–1.0), `IsAnomaly` is binary | Use `Score` for trending and `IsAnomaly` for alarms |
| High CPU on fast data | OnChange execution runs the model on every value change | For high-frequency data (>1 update/sec), use Periodic execution instead of OnChange to reduce CPU load |
| Adding @Tag references inside the class | Library class is pure .NET; tags don't exist in Library context | Never modify the class to write tags directly. All tag wiring must be done via Expressions (Step 3). The class exposes results through methods only. |
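For the high-CPU case in the table above, here is a sketch of the Score expression switched to periodic execution. The `"Periodic"` value for `Execution` and the `Period` field (HH:MM:SS format) are assumptions about the `ScriptsExpressions` schema; verify them with `get_table_schema('ScriptsExpressions')` before writing. If one expression moves to Periodic, move all four together so they keep executing in the same cycle:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "ObjectName": "Tag.{Area}/ML/Score",
      "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)",
      "Execution": "Periodic",
      "Period": "00:00:05",
      "Label": "ML"
    }
  ]
}
```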
## Variations

**Variation A: Pre-trained Model from File**
- Place the trained model `.zip` file in the solution directory
- Load it with `mlContext.Model.Load(modelPath)` at startup instead of auto-training

**Variation B: Multiple Independent Models**
- Duplicate the class as `AnomalyML_Temperature`, `AnomalyML_Pressure`, etc., one instance per sensor

**Variation C: Historian-Fed Training**
- Use `Dataset.Query` to fetch the past 24 hours of sensor data for initial training

## Related Skills

- `skill-getting-started`: Create the base solution with tags and simulator
- `skill-alarm-pipeline`: Configure alarms (used in Step 4)
- `skill-historian-configuration`: Log data for ML training and analysis
- `skill-cloud-ai-integration`: Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)