
excerpt
Code Block
languageyaml
titleSkill Frontmatter
---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags → data collection → ML.NET anomaly detection model → prediction tags → alarms → dashboard"
version: "1.01"
min_product_version: "10.1.3"
author: "Tatsoft"
---

Build a complete edge ML pipeline that runs ML.NET anomaly detection on-premise: sensor tags feed the AnomalyML Library class, predictions are captured via Expressions into output tags, wired to alarms, and displayed on a dashboard.

What This Skill Does

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET.

The AnomalyML Library class collects live sensor data, auto-trains an SR-CNN anomaly detection model, and exposes results through methods. Expressions wire these methods to output tags for alarms and visualization.

Code Block
languagetext
titlePipeline Architecture
Sensor Tags → Expression calls Check() → AnomalyML class → Expressions read Get methods → Output Tags → Alarms → Dashboard
     ↑                                                                                                  ↓
  Devices/Simulator                                                                          Operator sees anomalies

When to Use This Skill

Use this skill when:

  • The user wants to add machine learning or anomaly detection to a solution
  • The user mentions "predictive maintenance", "ML", "anomaly", or "edge AI"
  • The user wants to run ML models on-premise (not cloud-based)
  • Building a ProveIT-style demo with intelligent monitoring

Do NOT use this skill when:

  • The user wants cloud AI / LLM integration (use MCP for Runtime skill instead)
  • The user needs only simple threshold alarms (use skill-alarm-pipeline)
  • The user wants to load a pre-trained model from a .zip file (see ML.NET Example docs or Variation A below)

Prerequisites

  • Solution with sensor tags already created (at least 1-2 analog tags with changing values)
  • Value Simulator or real device feeding data to those tags
  • If starting from scratch, apply skill-getting-started first

MCP Tools and Tables Involved

Category

Items

Tools

get_table_schema, write_objects, get_objects, list_elements, search_docs

Tables

UnsTags, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList

Implementation Steps

Step 1: Create ML Output Tags

Before importing the ML class, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS under a /ML/ subfolder.

Code Block
languagetext
get_table_schema('UnsTags')
Code Block
languagejson
titlewrite_objects call
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "{Area}/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" },
    { "Name": "{Area}/ML/IsAnomaly", "Type": "Boolean", "Description": "True when ML model detects anomaly" },
    { "Name": "{Area}/ML/Baseline", "Type": "Double", "Description": "Expected normal value from ML model" },
    { "Name": "{Area}/ML/ModelReady", "Type": "Boolean", "Description": "True after ML model finishes training (~100 data points)" }
  ]
}

Replace {Area} with the actual sensor area path (e.g., Plant/Reactor1). The tag field name is Type, not DataType.

Key decisions:

  • Place ML outputs under a /ML/ subfolder for clean separation from raw sensor data
  • Score is continuous 0.0–1.0 (for trending), IsAnomaly is boolean (for alarms)
  • Baseline shows what the model expects — operators can see the deviation from normal
  • ModelReady lets the UI show warmup status — the model needs ~100 data points before it starts scoring

Step 2: Import the AnomalyML Script Class from Library

FrameworX ships with a pre-built AnomalyML class in the Script Library. Always import it — do NOT write ML code from scratch.

Instruct the user to:

  1. Navigate to Scripts → Classes
  2. Click New → Import from Library
  3. Select AnomalyML

The Library class uses ClassContent: "MCPTool" and NamespaceDeclarations: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed with FrameworX — no NuGet packages or external downloads are needed.

After import, the only customization needed is the SENSITIVITY parameters at the top of the class (_windowSize, _threshold, _trainingBufferSize). Defaults work well for most industrial processes at 1–5 second scan rates.
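The warmup implied by _trainingBufferSize is simple to estimate up front. A minimal sketch (the function name is illustrative; the arithmetic is generic):

```python
def warmup_seconds(training_buffer_size: int, scan_rate_s: float) -> float:
    """Seconds of data collection before the model can start scoring."""
    return training_buffer_size * scan_rate_s

# With the default ~100-point training buffer:
print(warmup_seconds(100, 1.0))  # 1 s scan rate: 100 seconds
print(warmup_seconds(100, 5.0))  # 5 s scan rate: 500 seconds (~8 minutes)
```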

The class API:

Method | Returns | Purpose
Check(double) | double | Feed sensor value, returns anomaly score. Call first each cycle.
GetIsAnomaly() | bool | True if last Check() detected anomaly
GetScore() | double | Anomaly severity (0.0 to 1.0)
GetBaseline() | double | Expected normal value
IsModelReady() | bool | True after training completes (~100 data points)
GetAnomalies() | string | MCP Tool: recent anomaly history for AI retrieval

Critical: The class has NO tag references — it is pure .NET. All tag wiring is done via Expressions in Step 3. Do not add @Tag. references inside the class code.
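The method contract above can be sketched in Python. This is illustrative only: a rolling z-score stands in for the real SR-CNN model, the names and defaults are assumptions, and the actual class is C# with no tag references.

```python
from collections import deque
import statistics

class AnomalyMLSketch:
    """Illustrates the Check()-then-getters contract of the AnomalyML class.
    A rolling z-score stands in for SR-CNN; no tag references inside."""
    def __init__(self, window_size=20, threshold=3.0, training_buffer_size=100):
        self._window = deque(maxlen=window_size)
        self._threshold = threshold
        self._training_buffer_size = training_buffer_size
        self._seen = 0
        self._score = 0.0
        self._is_anomaly = False
        self._baseline = 0.0

    def check(self, value: float) -> float:
        """Feed one sensor value; call first each cycle. Returns the score."""
        self._seen += 1
        if len(self._window) >= 2:
            self._baseline = statistics.mean(self._window)
            stdev = statistics.pstdev(self._window) or 1e-9
            z = abs(value - self._baseline) / stdev
            # Map the z-score into a bounded 0..1 severity, like Score
            self._score = min(z / self._threshold, 1.0) if self.is_model_ready() else 0.0
            self._is_anomaly = self.is_model_ready() and z > self._threshold
        self._window.append(value)
        return self._score

    # Getters read the results of the last check() call, like the class API
    def get_is_anomaly(self): return self._is_anomaly
    def get_score(self): return self._score
    def get_baseline(self): return self._baseline
    def is_model_ready(self): return self._seen >= self._training_buffer_size
```

Feeding steady values fills the buffer; once is_model_ready() is true, a spike drives the score toward 1.0 and flips the anomaly flag, mirroring how the four Expressions consume the class.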

Step 3: Create Script Expressions to Wire Tags

Expressions connect the AnomalyML class methods to output tags. Each expression calls a method and writes the return value to a tag. All four expressions must use the same TriggerTag (the sensor being monitored).

Code Block
languagetext
get_table_schema('ScriptsExpressions')


Code Block
languagejson
titlewrite_objects — ML Expressions
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "ObjectName": "Tag.{Area}/ML/Score",
      "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/IsAnomaly",
      "Expression": "Script.Class.AnomalyML.GetIsAnomaly()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/Baseline",
      "Expression": "Script.Class.AnomalyML.GetBaseline()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    },
    {
      "ObjectName": "Tag.{Area}/ML/ModelReady",
      "Expression": "Script.Class.AnomalyML.IsModelReady()",
      "Execution": "OnChange",
      "Trigger": "Tag.{Area}/Temperature",
      "Label": "ML"
    }
  ]
}

To monitor multiple tags, add one set of expressions per sensor; each sensor needs its own class instance (see Variation B).

Execution order matters: The Score expression calls Check() which runs the model. The other three expressions read results from that same prediction cycle via getter methods. All four must share the same Trigger so they execute together. Replace {Area} and Temperature with the actual tag paths.
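The shared-trigger pattern can be sketched as a tiny event loop: when the trigger tag changes, each expression is evaluated in order and its result written to its ObjectName tag. Everything here is a stand-in (the model stub, tag names, and thresholds are assumptions), meant only to show why Check() must run first.

```python
# Minimal stand-in for the AnomalyML class: check() updates state, getters read it.
class _Model:
    def __init__(self):
        self.score, self.anomaly, self.baseline, self.ready, self.n = 0.0, False, 0.0, False, 0
    def check(self, v):
        self.n += 1
        self.ready = self.n >= 3              # pretend training needs 3 points
        self.baseline = 10.0                  # pretend learned baseline
        self.score = min(abs(v - self.baseline) / 10.0, 1.0) if self.ready else 0.0
        self.anomaly = self.score > 0.5
        return self.score

model = _Model()
tags = {}

# One (ObjectName, callable) pair per expression; order matters: check() first.
expressions = [
    ("Tag.{Area}/ML/Score",      lambda v: model.check(v)),
    ("Tag.{Area}/ML/IsAnomaly",  lambda v: model.anomaly),
    ("Tag.{Area}/ML/Baseline",   lambda v: model.baseline),
    ("Tag.{Area}/ML/ModelReady", lambda v: model.ready),
]

def on_trigger(sensor_value):
    """Fires when the shared Trigger tag changes: run all four in order."""
    for object_name, expr in expressions:
        tags[object_name] = expr(sensor_value)

for v in [10.0, 10.0, 10.0, 25.0]:   # fourth sample deviates from the baseline
    on_trigger(v)
```

If the getter expressions ran before the Score expression, they would read results from the previous cycle, which is exactly what the shared Trigger avoids.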

Step 4: Add Alarm on Anomaly Detection

Create an alarm that triggers when the ML model detects an anomaly.

Code Block
languagetext
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
Code Block
languagejson
titlewrite_objects multi-table call
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "{Area}/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly"
        }
      ]
    }
  ]
}

Step 5: Create ML Dashboard

Build a display that shows sensor data alongside ML predictions.

Code Block
languagetext
list_elements('Dashboard,TrendChart,TextBlock')
get_table_schema('DisplaysList')
Code Block
languagejson
titlewrite_objects — ML Dashboard
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0,
          "Row": 0,
          "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.{Area}/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.{Area}/ML/Score", "Color": "#FFE74C3C" }
          ]
        },
        { "Type": "TextBlock", "Column": 0, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Score", "Label": "Anomaly Score" },
        { "Type": "TextBlock", "Column": 1, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Baseline", "Label": "Expected Baseline" },
        { "Type": "TextBlock", "Column": 0, "Row": 2, "LinkedValue": "Tag.{Area}/ML/IsAnomaly", "Label": "Anomaly Detected" },
        { "Type": "TextBlock", "Column": 1, "Row": 2, "LinkedValue": "Tag.{Area}/ML/ModelReady", "Label": "Model Ready" }
      ]
    }
  ]
}

Verification

  1. get_objects('ScriptsClasses', names=['AnomalyML']) — confirm the class exists and ClassContent is MCPTool
  2. get_designer_state() — check for compilation errors (NamespaceDeclarations must include the ML.NET namespaces)
  3. get_objects('ScriptsExpressions') — confirm all four ML expressions are configured
  4. Start runtime — initially Score will be 0 and ModelReady will be false while the model trains on initial data
  5. After ~100 data points (varies by scan rate), ModelReady becomes true and Score starts reflecting real predictions
  6. browse_runtime_properties('Tag.{Area}/ML') — verify the ML output tags exist and their values are updating

Common Pitfalls

Mistake

Why It Happens

How to Avoid

ML.NET assembly not found

NamespaceDeclarations missing or wrong

Import from Library (auto-includes declarations). Or manually set: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed — no NuGet or external downloads needed.

No output for first few minutes

Model needs ~100 data points to train

This is expected. The training buffer fills first, then predictions start. Check the ModelReady tag — it becomes true when training completes. At a 5-second scan rate, expect ~8 minutes of warmup.

Class state lost on restart

mlContext and model are static, in-process only

A full runtime restart retrains from scratch — the model accumulates new data automatically. This is by design.

Confusing Score with IsAnomaly

Score is continuous (0.0–1.0), IsAnomaly is binary

Use IsAnomaly (boolean) for alarms. Use Score for trending — it shows severity. Baseline shows what the model expected.

High CPU on fast data

OnChange execution runs the model on every value change

For high-frequency data (>1 update/sec), use Periodic execution instead of OnChange to reduce CPU load.

Adding @Tag references inside the class

Library class is pure .NET — tags don't exist in Library context

Never modify the class to write tags directly. All tag wiring must be done via Expressions (Step 3). The class exposes results through methods only.

Variations

Variation A: Pre-trained Model from File

  • Export model from Visual Studio ML.NET Model Builder
  • Place .zip file in the solution directory
  • Modify the script to call mlContext.Model.Load(modelPath) at startup
  • See skill-mlnet-model-builder or the ML.NET Model Builder documentation

Variation B: Multiple Independent Models

  • Each AnomalyML class instance tracks ONE sensor stream
  • For multiple sensors, create separate classes: AnomalyML_Temperature, AnomalyML_Pressure, etc.
  • Each gets its own training buffer, model, set of four expressions, and four output tags
  • Better isolation but higher memory usage
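The per-sensor isolation can be sketched as a registry of independent detector instances (names and the toy detector are illustrative, standing in for the separate AnomalyML_* classes):

```python
from collections import deque
import statistics

class Detector:
    """Each instance keeps its own training buffer and model state."""
    def __init__(self, buffer_size=100):
        self.buffer = deque(maxlen=buffer_size)

    def check(self, value):
        # Toy stand-in for the real model: deviation from the buffer mean
        baseline = statistics.mean(self.buffer) if self.buffer else value
        self.buffer.append(value)
        return abs(value - baseline)

# One independent detector per sensor stream
detectors = {
    "Plant/Reactor1/Temperature": Detector(),
    "Plant/Reactor1/Pressure": Detector(),
}

# Feeding one stream never touches the other's buffer:
detectors["Plant/Reactor1/Temperature"].check(80.0)
detectors["Plant/Reactor1/Pressure"].check(2.5)
```

This is the isolation/memory trade-off noted above: each entry holds its own buffer, so memory grows linearly with the number of monitored sensors.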

...

  • skill-getting-started — Create the base solution with tags and simulator
  • skill-alarm-pipeline — Configure alarms (used in Step 4)
  • skill-historian-configuration — Log data for ML training and analysis
  • skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)