---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags, data collection, ML.NET anomaly detection model, prediction tags, alarms, dashboard"
version: "1.1"
min_product_version: "10.1.3"
author: "Tatsoft"
---
Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. The AnomalyML Library class collects live sensor data, auto-trains an SR-CNN anomaly detection model, and exposes results through methods. Expressions wire these methods to output tags for alarms and visualization.

Pipeline architecture:

```text
Sensor Tags → Expression calls Check() → AnomalyML class → Expressions read Get methods → Output Tags → Alarms → Dashboard
      ↑                                                                                                    ↓
Devices/Simulator                                                                          Operator sees anomalies
```

Use this skill when:
Do NOT use this skill when:

- skill-alarm-pipeline...
- .zip file (see ML.NET ...

Complete skill-getting-started first.

| Category | Items |
|---|---|
| Tools | ... |
| Tables | ... |
Before importing the ML class, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS under a /ML/ subfolder.
get_table_schema('UnsTags')
write_objects call:

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "{Area}/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" },
    { "Name": "{Area}/ML/IsAnomaly", "Type": "Boolean", "Description": "True when ML model detects anomaly" },
    { "Name": "{Area}/ML/Baseline", "Type": "Double", "Description": "Expected normal value from ML model" },
    { "Name": "{Area}/ML/ModelReady", "Type": "Boolean", "Description": "True after ML model finishes training (~100 data points)" }
  ]
}
```
Replace {Area} with the actual sensor area path (e.g., Plant/Reactor1). The tag field name is Type, not DataType.
Key decisions:
- /ML/ subfolder for clean separation from raw sensor data
- Score is continuous 0.0–1.0 (for trending); IsAnomaly is boolean (for alarms)
- Baseline shows what the model expects — operators can see the deviation from normal
- ModelReady lets the UI show warmup status — the model needs ~100 data points before it starts scoring

FrameworX ships with a pre-built AnomalyML class in the Script Library. Always import it — do NOT write ML code from scratch.
get_table_schema('ScriptsClasses')
Instruct the user to import the AnomalyML class from the Script Library in the Designer.
Alternatively, create the class via MCP with the anomaly detection logic:
```json
{
  "table_type": "ScriptsClasses",
  "data": [
    {
      "Name": "AnomalyML",
      "ClassContent": "Methods",
      "Code": "// ML.NET Anomaly Detection\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.Transforms.TimeSeries;\n\nprivate static MLContext mlContext = new MLContext(seed: 0);\nprivate static ITransformer model;\nprivate static List<SensorData> trainingBuffer = new List<SensorData>();\nprivate static bool modelTrained = false;\nprivate const int TrainingWindowSize = 100;\nprivate const int SeasonalityWindowSize = 10;\n\npublic class SensorData\n{\n    public float Value { get; set; }\n}\n\npublic class AnomalyPrediction\n{\n    [VectorType(3)]\n    public double[] Prediction { get; set; }\n}\n\npublic static void Check(double sensorValue)\n{\n    trainingBuffer.Add(new SensorData { Value = (float)sensorValue });\n\n    if (!modelTrained && trainingBuffer.Count >= TrainingWindowSize)\n    {\n        TrainModel();\n    }\n\n    if (modelTrained)\n    {\n        RunPrediction(sensorValue);\n    }\n}\n\nprivate static void TrainModel()\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);\n    var pipeline = mlContext.Transforms.DetectSpikeBySsa(\n        outputColumnName: nameof(AnomalyPrediction.Prediction),\n        inputColumnName: nameof(SensorData.Value),\n        confidence: 95.0,\n        pvalueHistoryLength: SeasonalityWindowSize,\n        trainingWindowSize: TrainingWindowSize,\n        seasonalityWindowSize: SeasonalityWindowSize);\n\n    model = pipeline.Fit(dataView);\n    modelTrained = true;\n}\n\nprivate static void RunPrediction(double sensorValue)\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(\n        new[] { new SensorData { Value = (float)sensorValue } });\n    var predictions = model.Transform(dataView);\n    var results = mlContext.Data.CreateEnumerable<AnomalyPrediction>(\n        predictions, reuseRowObject: false).First();\n\n    double isAnomaly = results.Prediction[0];\n    double score = results.Prediction[1];\n    double pValue = results.Prediction[2];\n\n    @Tag.Plant/Reactor1/ML/IsAnomaly.Value = isAnomaly > 0;\n    @Tag.Plant/Reactor1/ML/AnomalyScore.Value = Math.Abs(score);\n    @Tag.Plant/Reactor1/ML/Confidence.Value = 1.0 - pValue;\n    @Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;\n}"
    }
  ]
}
```
Important: The code above is a reference implementation. The actual AnomalyML library class may differ. Always check for the library version first via the Designer UI.
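The essential train-then-score flow of the reference class can be sketched language-agnostically. This simplified Python model is illustrative only (not FrameworX or ML.NET code; a rolling z-score stands in for the SSA transform, and the class and constant names are hypothetical), but it reproduces the same warmup behavior: buffer values until the training window fills, then score each new value against the learned baseline.

```python
from collections import deque
import statistics

TRAINING_WINDOW = 100   # mirrors TrainingWindowSize in the reference class
Z_THRESHOLD = 3.0       # stand-in for the 95% confidence setting

class AnomalyDetector:
    def __init__(self):
        self.buffer = deque(maxlen=TRAINING_WINDOW)
        self.model_ready = False
        self.baseline = 0.0
        self.score = 0.0
        self.is_anomaly = False

    def check(self, value: float) -> float:
        """Feed one sensor value; returns the anomaly score (0 while warming up)."""
        if not self.model_ready:
            self.buffer.append(value)
            if len(self.buffer) >= TRAINING_WINDOW:
                self.model_ready = True   # "training" = enough history collected
            return 0.0
        mean = statistics.fmean(self.buffer)
        stdev = statistics.stdev(self.buffer) or 1e-9
        z = abs(value - mean) / stdev
        self.baseline = mean
        self.score = min(z / Z_THRESHOLD, 1.0)   # clamp to 0.0-1.0 like the Score tag
        self.is_anomaly = z > Z_THRESHOLD
        self.buffer.append(value)                # keep adapting to new data
        return self.score

det = AnomalyDetector()
for i in range(100):
    det.check(50.0 + (i % 5) * 0.1)   # 100 normal points -> model becomes ready
print(det.model_ready)                # True
print(det.check(50.2) < 0.5)          # normal value -> low score: True
print(det.check(500.0))               # extreme spike -> score clamps to 1.0
```

This also illustrates the warmup pitfall noted later: until ~100 points arrive, the score stays at 0 and the "model ready" flag stays false.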
The Library class uses ClassContent: "MCPTool" and NamespaceDeclarations: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed with FrameworX — no NuGet packages or external downloads are needed.
After import, the only customization needed is the SENSITIVITY parameters at the top of the class (_windowSize, _threshold, _trainingBufferSize). Defaults work well for most industrial processes at 1–5 second scan rates.
The class API:
| Method | Returns | Purpose |
|---|---|---|
| `Check(double)` | double | Feed sensor value, returns anomaly score. Call first each cycle. |
| `GetIsAnomaly()` | bool | True if last `Check()` detected anomaly |
| `GetScore()` | double | Anomaly severity (0.0 to 1.0) |
| `GetBaseline()` | double | Expected normal value |
| `IsModelReady()` | bool | True after training completes (~100 data points) |
| `GetAnomalies()` | string | MCP Tool: recent anomaly history for AI retrieval |
Critical: The class has NO tag references — it is pure .NET. All tag wiring is done via Expressions in Step 3. Do not add @Tag. references inside the class code.
Expressions connect the AnomalyML class methods to output tags. Each expression calls a method and writes the return value to a tag. All four expressions must use the same TriggerTag (the sensor being monitored).
get_table_schema('ScriptsExpressions')
write_objects — ML Expressions:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    { "ObjectName": "Tag.{Area}/ML/Score", "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/IsAnomaly", "Expression": "Script.Class.AnomalyML.GetIsAnomaly()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/Baseline", "Expression": "Script.Class.AnomalyML.GetBaseline()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/ModelReady", "Expression": "Script.Class.AnomalyML.IsModelReady()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" }
  ]
}
```
Execution order matters: The Score expression calls Check() which runs the model. The other three expressions read results from that same prediction cycle via getter methods. All four must share the same Trigger so they execute together. Replace {Area} and Temperature with the actual tag paths.
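The four expressions amount to the following per-trigger sequence. This Python sketch is purely illustrative (the FrameworX runtime evaluates the expressions itself; `MlStub` and the dict are hypothetical stand-ins for the class and tags), and it shows why Check() must run first: the getters only echo the prediction it cached.

```python
class MlStub:
    """Stand-in for AnomalyML: check() computes, getters read the cached result."""
    def __init__(self):
        self._score, self._is_anomaly, self._baseline, self._ready = 0.0, False, 0.0, True

    def check(self, value):
        self._baseline = 50.0                        # pretend the model expects 50
        self._score = min(abs(value - 50.0) / 25.0, 1.0)
        self._is_anomaly = self._score >= 1.0
        return self._score

    def get_is_anomaly(self): return self._is_anomaly
    def get_baseline(self): return self._baseline
    def is_model_ready(self): return self._ready

def on_sensor_change(ml, tags, value):
    tags["ML/Score"] = ml.check(value)           # 1) run the model for this cycle
    tags["ML/IsAnomaly"] = ml.get_is_anomaly()   # 2-4) read the same cycle's result
    tags["ML/Baseline"] = ml.get_baseline()
    tags["ML/ModelReady"] = ml.is_model_ready()

tags = {}
on_sensor_change(MlStub(), tags, 120.0)          # far from baseline -> anomaly
print(tags["ML/IsAnomaly"])  # True
```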
Create an alarm that triggers when the ML model detects an anomaly.
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
write_objects multi-table call:

```json
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "{Area}/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly"
        }
      ]
    }
  ]
}
```
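A Digital alarm activates while its linked boolean tag is true, with each rising edge counting as a new occurrence. A minimal Python sketch of that semantics (illustrative only, not the FrameworX alarm engine; the class name is hypothetical):

```python
class DigitalAlarm:
    """Active while the linked boolean tag is true; counts rising-edge activations."""
    def __init__(self, name):
        self.name = name
        self.active = False
        self.activations = 0

    def update(self, tag_value: bool):
        if tag_value and not self.active:
            self.activations += 1          # rising edge -> new alarm occurrence
        self.active = tag_value

alarm = DigitalAlarm("AnomalyDetected")
for v in [False, True, True, False, True]:  # IsAnomaly over five scans
    alarm.update(v)
print(alarm.activations)  # 2 distinct anomaly events
```

Because IsAnomaly may flip back to false on the next normal sample, operators rely on the alarm history, not the live tag, to review past events.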
Build a display that shows sensor data alongside ML predictions.
list_elements('Dashboard,TrendChart,TextBlock')
get_table_schema('DisplaysList')
write_objects — ML Dashboard:

```json
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.{Area}/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.{Area}/ML/Score", "Color": "#FFE74C3C" }
          ]
        },
        { "Type": "TextBlock", "Column": 0, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Score", "Label": "Anomaly Score" },
        { "Type": "TextBlock", "Column": 1, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Baseline", "Label": "Expected Baseline" },
        { "Type": "TextBlock", "Column": 0, "Row": 2, "LinkedValue": "Tag.{Area}/ML/IsAnomaly", "Label": "Anomaly Detected" },
        { "Type": "TextBlock", "Column": 1, "Row": 2, "LinkedValue": "Tag.{Area}/ML/ModelReady", "Label": "Model Ready" }
      ]
    }
  ]
}
```
Verification checklist:

- get_objects('ScriptsClasses', names=['AnomalyML']) — confirm class exists and ClassContent is MCPTool
- get_state() — check for compilation errors (NamespaceDeclarations must include the ML.NET namespaces)
- get_objects('ScriptsExpressions') — confirm all four ML expressions are configured
- While the model is still training, Score will be 0 and ModelReady will be false
- After ~100 data points, ModelReady becomes true and Score starts reflecting real predictions
- browse_runtime_properties('Tag.{Area}/ML') — verify ML output tags are updating

Common mistakes:

| Mistake | Why It Happens | How to Avoid |
|---|---|---|
| ML.NET assembly not found | NamespaceDeclarations missing or wrong | Import from Library (auto-includes declarations), or manually set NamespaceDeclarations to `Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries` |
| No output for first few minutes | Model needs ~100 data points to train | This is expected. Check the ModelReady tag. |
| Class state lost on restart | ML model lives in process memory | A full runtime restart retrains from scratch — the model accumulates new data automatically. This is by design. |
| Confusing Score vs IsAnomaly | Score is continuous (0.0–1.0), IsAnomaly is binary | Use Score for trending and IsAnomaly for alarms. |
| High CPU on fast data | OnChange execution runs model every value change | For high-frequency data (>1 update/sec), use Periodic execution instead of OnChange to reduce CPU load. |
| Adding @Tag references inside the class | Library class is pure .NET — tags don't exist in Library context | Never modify the class to write tags directly. All tag wiring must be done via Expressions (Step 3). The class exposes results through methods only. |
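For the high-CPU pitfall, Periodic execution effectively caps how often the model runs. The same idea can be sketched as a minimum-interval gate in Python (a hypothetical helper for illustration, not a FrameworX API):

```python
import time

class RateLimiter:
    """Allow at most one model run per min_interval seconds; drop extra samples."""
    def __init__(self, min_interval):
        self.min_interval = min_interval
        self._last_run = float("-inf")

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last_run >= self.min_interval:
            self._last_run = now
            return True
        return False

gate = RateLimiter(min_interval=1.0)
# Simulated timestamps: 10 samples arriving every 0.25 s -> only 3 pass the 1 s gate
passed = [gate.allow(now=t * 0.25) for t in range(10)]
print(passed.count(True))  # 3
```

Dropping intermediate samples trades detection latency for CPU, which is usually acceptable for slow-moving process variables.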
Variation A: Pre-trained Model from File

- Place the model .zip file in the solution directory
- Load it with mlContext.Model.Load(modelPath) at startup

Variation B: Multiple Independent Models

- Duplicate the class per sensor: AnomalyML_Temperature, AnomalyML_Pressure, etc.

Variation C: Historian-Fed Training

- Use Dataset.Query to fetch the past 24 hours of sensor data

Related skills:

- skill-getting-started — Create the base solution with tags and simulator
- skill-alarm-pipeline — Configure alarms (used in Step 4)
- skill-historian-configuration — Log data for ML training and analysis
- skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)
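Variation B exists because the class holds its model in static state, so one class can only track one signal. The per-sensor duplication can be modeled as one detector instance per tag, as in this hedged Python sketch (hypothetical names; the real variation duplicates the Script class per sensor):

```python
from collections import defaultdict, deque

class Detector:
    """Per-sensor state: each tag gets its own training buffer and baseline."""
    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)

    def check(self, value):
        self.buffer.append(value)
        ready = len(self.buffer) == self.buffer.maxlen
        baseline = sum(self.buffer) / len(self.buffer)
        return ready, baseline

# One independent model per sensor tag, created on first use
detectors = defaultdict(Detector)

detectors["Plant/Reactor1/Temperature"].check(72.5)
detectors["Plant/Reactor1/Pressure"].check(4.2)
print(len(detectors))  # 2 -> Temperature and Pressure train independently
```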