...
---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags → data collection → ML.NET anomaly detection model → prediction tags → alarms → dashboard"
version: "1.01"
min_product_version: "10.1.3"
author: "Tatsoft"
---
Build a complete edge ML pipeline that runs ML.NET anomaly detection on-premise: sensor tags feed the AnomalyML Library class, predictions are captured via Expressions into output tags, wired to alarms, and displayed on a dashboard.
Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. The AnomalyML Library class collects live sensor data, auto-trains an SR-CNN anomaly detection model, and exposes results through methods. Expressions wire these methods to output tags for alarms and visualization.
```text
Pipeline Architecture

Sensor Tags → Expression calls Check() → AnomalyML class → Expressions read Get methods → Output Tags → Alarms → Dashboard
      ↑                                                                                                  ↓
Devices/Simulator                                                                       Operator sees anomalies
```
Use this skill when:

Do NOT use this skill when:

Prerequisites (see skill-alarm-pipeline for alarm setup and run skill-getting-started first; a model .zip file is only needed for Variation A, see the ML.NET Example docs):

| Category | Items |
|---|---|
| Tools | |
| Tables | |
Before importing the ML class, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS under a /ML/ subfolder.
```text
get_table_schema('UnsTags')
```

write_objects call:

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "{Area}/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" },
    { "Name": "{Area}/ML/IsAnomaly", "Type": "Boolean", "Description": "True when ML model detects anomaly" },
    { "Name": "{Area}/ML/Baseline", "Type": "Double", "Description": "Expected normal value from ML model" },
    { "Name": "{Area}/ML/ModelReady", "Type": "Boolean", "Description": "True after ML model finishes training (~100 data points)" }
  ]
}
```

Replace `{Area}` with the actual sensor area path (e.g., `Plant/Reactor1`). The tag field name is `Type`, not `DataType`.
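Since every payload in this skill carries the same `{Area}` placeholder, it can be expanded mechanically before calling write_objects. The helper below is a hypothetical convenience sketch (not a FrameworX or MCP API), shown only to make the substitution rule concrete:

```python
import json

def expand_area(payload: dict, area: str) -> dict:
    """Replace every {Area} placeholder in a write_objects payload.

    Hypothetical illustration helper: round-trips through JSON so the
    substitution reaches nested fields, and leaves the input untouched.
    """
    return json.loads(json.dumps(payload).replace("{Area}", area))

payload = {
    "table_type": "UnsTags",
    "data": [
        {"Name": "{Area}/ML/Score", "Type": "Double",
         "Description": "ML anomaly severity (0.0=normal, 1.0=severe)"}
    ],
}
print(expand_area(payload, "Plant/Reactor1")["data"][0]["Name"])
# Plant/Reactor1/ML/Score
```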
Key decisions:

- `/ML/` subfolder for clean separation from raw sensor data
- `Score` is continuous 0.0–1.0 (for trending); `IsAnomaly` is boolean (for alarms)
- `Baseline` shows what the model expects, so operators can see the deviation from normal
- `ModelReady` lets the UI show warmup status; the model needs ~100 data points before it starts scoring

FrameworX ships with a pre-built AnomalyML class in the Script Library. Always import it — do NOT write ML code from scratch.
Instruct the user to:
The Library class uses ClassContent: "MCPTool" and NamespaceDeclarations: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed with FrameworX — no NuGet packages or external downloads are needed.
After import, the only customization needed is the SENSITIVITY parameters at the top of the class (_windowSize, _threshold, _trainingBufferSize). Defaults work well for most industrial processes at 1–5 second scan rates.
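To build intuition for what those sensitivity parameters control, here is a deliberately simplified stand-in. It is NOT the library implementation (the real class uses ML.NET's SR-CNN); it is a rolling z-score toy with hypothetical names, showing how a window size and threshold trade off sensitivity:

```python
from collections import deque
from statistics import mean, stdev

class RollingAnomalySketch:
    """Toy stand-in for intuition only: a window size and a threshold,
    analogous in spirit to _windowSize and _threshold, applied to a
    rolling z-score instead of the real SR-CNN model."""

    def __init__(self, window_size=30, threshold=3.0):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def check(self, value: float) -> float:
        """Return a 0.0-1.0 severity score; 0.0 while the window warms up."""
        score = 0.0
        if len(self.window) >= 10:                 # crude warmup gate
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0:
                z = abs(value - mu) / sigma
                score = min(z / self.threshold, 1.0)
        self.window.append(value)
        return score

det = RollingAnomalySketch(window_size=30, threshold=3.0)
for i in range(30):                                # mildly noisy warmup signal
    det.check(10 + 0.5 * ((i % 3) - 1))            # 9.5, 10.0, 10.5, ...
print(det.check(10.0))   # 0.0  (normal value)
print(det.check(50.0))   # 1.0  (spike saturates the score)
```

A larger window makes the baseline steadier but slower to adapt; a larger threshold suppresses small excursions. The same qualitative trade-off applies to the real SENSITIVITY parameters.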
The class API:
| Method | Returns | Purpose |
|---|---|---|
| `Check(double)` | double | Feed a sensor value; returns the anomaly score. Call first each cycle. |
| `GetIsAnomaly()` | bool | True if the last `Check()` detected an anomaly |
| `GetScore()` | double | Anomaly severity (0.0 to 1.0) |
| `GetBaseline()` | double | Expected normal value |
| `IsModelReady()` | bool | True after training completes (~100 data points) |
| `GetAnomalies()` | string | MCP Tool: recent anomaly history for AI retrieval |
Critical: The class has NO tag references — it is pure .NET. All tag wiring is done via Expressions in Step 3. Do not add @Tag. references inside the class code.
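The call protocol matters: `Check()` runs the model once, and the getters report that same cycle's result. The Python mock below mirrors only the documented method surface; its internals are placeholders (a moving-average baseline, not the SR-CNN model) and exist purely to illustrate the contract:

```python
class AnomalyMLMock:
    """Mock with the documented AnomalyML surface:
    Check / GetScore / GetIsAnomaly / GetBaseline / IsModelReady.
    Internal scoring is a placeholder, not the library's SR-CNN."""
    TRAINING_POINTS = 100

    def __init__(self):
        self._history = []
        self._score = 0.0

    def Check(self, value: float) -> float:
        self._history.append(value)
        if not self.IsModelReady():
            self._score = 0.0                       # warmup: no scoring yet
        else:
            baseline = self.GetBaseline()
            # placeholder scoring: relative deviation from the baseline
            self._score = min(abs(value - baseline) / max(abs(baseline), 1e-9), 1.0)
        return self._score

    def GetScore(self) -> float:
        return self._score

    def GetIsAnomaly(self) -> bool:
        return self._score > 0.5

    def GetBaseline(self) -> float:
        recent = self._history[-self.TRAINING_POINTS:]
        return sum(recent) / len(recent)

    def IsModelReady(self) -> bool:
        return len(self._history) >= self.TRAINING_POINTS

ml = AnomalyMLMock()
for _ in range(100):
    ml.Check(20.0)                                  # warmup with steady values
print(ml.IsModelReady())                            # True
print(ml.Check(20.0), ml.GetIsAnomaly())            # 0.0 False
print(ml.Check(60.0), ml.GetIsAnomaly())            # 1.0 True
```

Note that the getters return nothing meaningful unless `Check()` ran first in the same cycle, which is exactly why the Expressions in Step 3 share one trigger.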
Expressions connect the AnomalyML class methods to output tags. Each expression calls a method and writes the return value to a tag. All four expressions must use the same TriggerTag (the sensor being monitored).
```text
get_table_schema('ScriptsExpressions')
```
write_objects — ML Expressions:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "ObjectName": "Tag.{Area}/ML/Score",
      "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/IsAnomaly",
      "Expression": "Script.Class.AnomalyML.GetIsAnomaly()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/Baseline",
      "Expression": "Script.Class.AnomalyML.GetBaseline()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/ModelReady",
      "Expression": "Script.Class.AnomalyML.IsModelReady()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    }
  ]
}
```

To monitor multiple tags, add one expression per tag (see Variation B for running an independent model per sensor).
Execution order matters: The Score expression calls Check() which runs the model. The other three expressions read results from that same prediction cycle via getter methods. All four must share the same Trigger so they execute together. Replace {Area} and Temperature with the actual tag paths.
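The shared-trigger behavior can be pictured as one callback that runs all four expressions in configured order. The sketch below is an illustration of the wiring only (the real evaluation is done by the FrameworX runtime; `StubML` is a trivial stand-in, not the library class):

```python
class StubML:
    """Trivial stand-in for the AnomalyML class (illustration only)."""
    def __init__(self):
        self.score = 0.0
    def Check(self, v):                  # runs the "model" once per cycle
        self.score = 1.0 if v > 100 else 0.0
        return self.score
    def GetIsAnomaly(self): return self.score > 0.5
    def GetBaseline(self):  return 100.0  # fixed placeholder baseline
    def IsModelReady(self): return True

def on_sensor_change(ml, tags, sensor_value):
    """Emulates the runtime firing all four expressions that share one
    Trigger: Check() first writes Score, then the getters read the
    same cycle's result into the other output tags."""
    tags["ML/Score"] = ml.Check(sensor_value)
    tags["ML/IsAnomaly"] = ml.GetIsAnomaly()
    tags["ML/Baseline"] = ml.GetBaseline()
    tags["ML/ModelReady"] = ml.IsModelReady()

tags = {}
ml = StubML()
on_sensor_change(ml, tags, 42.0)
print(tags["ML/IsAnomaly"])   # False: normal value
on_sensor_change(ml, tags, 250.0)
print(tags["ML/IsAnomaly"])   # True: same-cycle result of Check(250.0)
```

If the getters fired on a different trigger than `Check()`, the output tags could mix results from two prediction cycles, which is exactly what the shared-trigger rule prevents.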
Create an alarm that triggers when the ML model detects an anomaly.
```text
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
```

write_objects multi-table call:

```json
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "{Area}/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly"
        }
      ]
    }
  ]
}
```
Build a display that shows sensor data alongside ML predictions.
```text
list_elements('Dashboard,TrendChart,TextBlock')
get_table_schema('DisplaysList')
```

write_objects — ML Dashboard:

```json
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0,
          "Row": 0,
          "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.{Area}/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.{Area}/ML/Score", "Color": "#FFE74C3C" }
          ]
        },
        { "Type": "TextBlock", "Column": 0, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Score", "Label": "Anomaly Score" },
        { "Type": "TextBlock", "Column": 1, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Baseline", "Label": "Expected Baseline" },
        { "Type": "TextBlock", "Column": 0, "Row": 2, "LinkedValue": "Tag.{Area}/ML/IsAnomaly", "Label": "Anomaly Detected" },
        { "Type": "TextBlock", "Column": 1, "Row": 2, "LinkedValue": "Tag.{Area}/ML/ModelReady", "Label": "Model Ready" }
      ]
    }
  ]
}
```
Validation checklist:

- `get_objects('ScriptsClasses', names=['AnomalyML'])` — confirm the class exists and ClassContent is MCPTool
- `get_designer_state()` — check for compilation errors (NamespaceDeclarations must include the ML.NET namespaces)
- `get_objects('ScriptsExpressions')` — confirm all four ML expressions are configured
- During warmup, Score will be 0 and ModelReady will be false
- After ~100 data points, ModelReady becomes true and Score starts reflecting real predictions
- `browse_namespace('Tag.Plant/Reactor1/ML')` — verify the ML output tags exist
- `browse_runtime_properties` on the `Tag.{Area}/ML` outputs — check that Score and Baseline values are updating

| Mistake | Why It Happens | How to Avoid |
|---|---|---|
| ML.NET assembly not found | NamespaceDeclarations missing or wrong | Import from Library (auto-includes declarations) |
| No output for first few minutes | Model needs ~100 data points to train | This is expected: the training buffer fills first, then predictions start. Check ModelReady |
| Class state lost on restart | The ML model lives in process memory (static class state) | A full runtime restart retrains from scratch; the model accumulates new data automatically. This is by design |
| Wrong alarm threshold on Score | Score is continuous (0.0–1.0); IsAnomaly is binary | Alarm on IsAnomaly only; use Score for trending |
| High CPU on fast data | OnChange execution runs the model on every value change | For high-frequency data (>1 update/sec), use Periodic execution instead of OnChange to reduce CPU load |
| Adding @Tag references inside the class | The Library class is pure .NET; tags don't exist in the Library context | Never modify the class to write tags directly. All tag wiring must be done via Expressions (Step 3); the class exposes results through methods only |
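For the high-CPU case, the effect of switching from OnChange to Periodic execution can be sketched as simple time-based throttling. This is a conceptual illustration only (the actual scheduling is a property of the expression's Execution setting, not user code):

```python
class Throttle:
    """Run the model at most once per `period_s` seconds: mirrors the
    effect of a Periodic expression instead of OnChange (sketch only)."""

    def __init__(self, period_s: float):
        self.period = period_s
        self.last = float("-inf")      # so the first sample always runs

    def should_run(self, now: float) -> bool:
        if now - self.last >= self.period:
            self.last = now
            return True
        return False

t = Throttle(period_s=1.0)
# Timestamps of incoming sensor changes (seconds):
ran = [t.should_run(ts) for ts in [0.0, 0.2, 0.5, 1.0, 1.4, 2.3]]
print(ran)  # [True, False, False, True, False, True]
```

With a 1 s period, six rapid value changes trigger only three model runs; the anomaly detector still sees a regular sample stream, which SR-CNN-style time-series models generally prefer anyway.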
Variation A: Pre-trained Model from File

- Place the model .zip file in the solution directory
- Load it with `mlContext.Model.Load(modelPath)` at startup
- See skill-mlnet-model-builder or the ML.NET Model Builder documentation

Variation B: Multiple Independent Models

- Duplicate the class per sensor: AnomalyML_Temperature, AnomalyML_Pressure, etc.
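Why duplicate the class? Because its training state is static, a single instance would blend data from all sensors into one baseline. The sketch below (illustrative Python, not the FrameworX mechanism, which duplicates the Library class itself) shows a registry that keeps one model per sensor so their training buffers stay independent:

```python
class SensorModel:
    """Minimal per-sensor stand-in: each sensor keeps its own buffer,
    so one sensor's range can't pollute another's baseline (sketch)."""
    def __init__(self):
        self.buffer = []

    def check(self, value: float) -> float:
        self.buffer.append(value)
        if len(self.buffer) < 5:
            return 0.0                              # per-sensor warmup
        baseline = sum(self.buffer[-5:]) / 5        # toy moving average
        return min(abs(value - baseline) / max(abs(baseline), 1e-9), 1.0)

models: dict[str, SensorModel] = {}

def check_sensor(name: str, value: float) -> float:
    """Route each sensor to its own model instance."""
    model = models.setdefault(name, SensorModel())
    return model.check(value)

for v in [10, 10, 10, 10, 10]:
    check_sensor("Temperature", v)                  # ~10 range
for v in [500, 500, 500, 500, 500]:
    check_sensor("Pressure", v)                     # ~500 range
# Independent baselines: Pressure readings don't skew Temperature.
print(check_sensor("Temperature", 10.0))            # 0.0, still normal
```

The per-sensor-class approach in FrameworX achieves the same isolation: each cloned class carries its own static buffer and model.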
- skill-getting-started — Create the base solution with tags and simulator
- skill-alarm-pipeline — Configure alarms (used in Step 4)
- skill-historian-configuration — Log data for ML training and analysis
- skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)
...