---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags, data collection, ML.NET anomaly detection model, prediction tags, alarms, dashboard"
version: "1.1"
min_product_version: "10.1.3"
author: "Tatsoft"
---



Build a complete edge ML pipeline that runs ML.NET anomaly detection on-premise: sensor tags feed the AnomalyML Library class, predictions are captured via Expressions into output tags, wired to alarms, and displayed on a dashboard.


What This Skill Does

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. The AnomalyML Library class collects live sensor data, auto-trains an SR-CNN anomaly detection model, and exposes results through methods. Expressions wire these methods to output tags for alarms and visualization.

Pipeline architecture:

```text
Sensor Tags → Expression calls Check() → AnomalyML class → Expressions read Get methods → Output Tags → Alarms → Dashboard
      ↑                                                                                       ↓
Devices/Simulator                                                           Operator sees anomalies
```

When to Use This Skill

Use this skill when:

  • The user wants to add machine learning or anomaly detection to a solution
  • The user mentions "predictive maintenance", "ML", "anomaly", or "edge AI"
  • The user wants to run ML models on-premise (not cloud-based)
  • Building a ProveIT-style demo with intelligent monitoring

Do NOT use this skill when:

  • The user wants cloud AI / LLM integration (use MCP for Runtime skill instead)
  • The user needs only simple threshold alarms (use skill-alarm-pipeline)
  • The user wants to load a pre-trained model from a .zip file (see ML.NET Example docs or Variation A below)

Prerequisites

  • Solution with sensor tags already created (at least 1-2 analog tags with changing values)
  • Value Simulator or real device feeding data to those tags
  • If starting from scratch, apply skill-getting-started first

MCP Tools and Tables Involved

| Category | Items |
| --- | --- |
| Tools | get_table_schema, write_objects, get_objects, list_elements, search_docs |
| Tables | UnsTags, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList |

Implementation Steps

Step 1: Create ML Output Tags

Before importing the ML class, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS under a /ML/ subfolder.

```text
get_table_schema('UnsTags')
```

write_objects call:

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "{Area}/ML/Score", "Type": "Double", "Description": "ML anomaly severity (0.0=normal, 1.0=severe)" },
    { "Name": "{Area}/ML/IsAnomaly", "Type": "Boolean", "Description": "True when ML model detects anomaly" },
    { "Name": "{Area}/ML/Baseline", "Type": "Double", "Description": "Expected normal value from ML model" },
    { "Name": "{Area}/ML/ModelReady", "Type": "Boolean", "Description": "True after ML model finishes training (~100 data points)" }
  ]
}
```

Replace {Area} with the actual sensor area path (e.g., Plant/Reactor1). The tag field name is Type, not DataType.

Key decisions:

  • Place ML outputs under a /ML/ subfolder for clean separation from raw sensor data
  • Score is continuous 0.0–1.0 (for trending), IsAnomaly is boolean (for alarms)
  • Baseline shows what the model expects — operators can see the deviation from normal
  • ModelReady lets the UI show warmup status — the model needs ~100 data points before it starts scoring
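To make the relationship between these outputs concrete, here is a minimal Python sketch of the mapping (illustrative only: the threshold, warmup count, and function names are assumptions, not the actual ML.NET class, and Baseline comes straight from the model):

```python
# Minimal sketch of how three of the four ML output tags are derived from the
# model's raw score (illustrative only; threshold and warmup count are
# assumptions, and Baseline is read directly from the model's GetBaseline()).

WARMUP_POINTS = 100      # assumed points needed before the model scores
ANOMALY_THRESHOLD = 0.5  # assumed cutoff mapping Score -> IsAnomaly

def derive_outputs(score: float, points_seen: int) -> dict:
    """Map a raw model score and sample count to tag values."""
    model_ready = points_seen >= WARMUP_POINTS
    return {
        "Score": score if model_ready else 0.0,  # continuous 0.0-1.0, for trending
        "IsAnomaly": model_ready and score >= ANOMALY_THRESHOLD,  # boolean, for alarms
        "ModelReady": model_ready,  # lets the UI show warmup status
    }

print(derive_outputs(0.8, points_seen=42))   # still warming up
print(derive_outputs(0.8, points_seen=150))  # trained: anomaly flagged
```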

Step 2: Import the AnomalyML Script Class from Library

FrameworX ships with a pre-built AnomalyML class in the Script Library. Always import it — do NOT write ML code from scratch.

Instruct the user to:

  1. Navigate to Scripts → Classes
  2. Click New → Import from Library
  3. Select AnomalyML

The Library class uses ClassContent: "MCPTool" and NamespaceDeclarations: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed with FrameworX — no NuGet packages or external downloads are needed.

After import, the only customization needed is the SENSITIVITY parameters at the top of the class (_windowSize, _threshold, _trainingBufferSize). Defaults work well for most industrial processes at 1–5 second scan rates.

The class API:

| Method | Returns | Purpose |
| --- | --- | --- |
| Check(double) | double | Feed sensor value, returns anomaly score. Call first each cycle. |
| GetIsAnomaly() | bool | True if last Check() detected anomaly |
| GetScore() | double | Anomaly severity (0.0 to 1.0) |
| GetBaseline() | double | Expected normal value |
| IsModelReady() | bool | True after training completes (~100 data points) |
| GetAnomalies() | string | MCP Tool: recent anomaly history for AI retrieval |

Critical: The class has NO tag references — it is pure .NET. All tag wiring is done via Expressions in Step 3. Do not add @Tag. references inside the class code.
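For orientation, the shape of the class can be sketched in Python (the shipped class is C#/ML.NET with SR-CNN scoring; the moving-average "model" below is a crude stand-in, and every internal name here is an assumption):

```python
# Illustrative sketch of the AnomalyML class shape. Key point: check() runs
# the model once and caches results, and the get_* methods only read that
# cache, which is why Check() must be called first each cycle.
from collections import deque

class AnomalyMLSketch:
    def __init__(self, window_size=30, threshold=0.5, training_buffer_size=100):
        self._window_size = window_size
        self._threshold = threshold
        self._buffer = deque(maxlen=training_buffer_size)
        self._score = 0.0
        self._baseline = 0.0

    def check(self, value: float) -> float:
        """Feed one sensor value; return the anomaly score for this cycle."""
        self._buffer.append(value)
        if not self.is_model_ready():
            return 0.0
        window = list(self._buffer)[-self._window_size:]
        self._baseline = sum(window) / len(window)   # stand-in for the expected value
        spread = (max(window) - min(window)) or 1.0
        self._score = min(abs(value - self._baseline) / spread, 1.0)  # stand-in scoring
        return self._score

    def get_score(self) -> float:
        return self._score

    def get_is_anomaly(self) -> bool:
        return self._score >= self._threshold

    def get_baseline(self) -> float:
        return self._baseline

    def is_model_ready(self) -> bool:
        return len(self._buffer) >= self._buffer.maxlen

ml = AnomalyMLSketch(training_buffer_size=5)
for v in [10.0] * 5:
    ml.check(v)          # warmup: fills the training buffer
ml.check(50.0)           # spike after warmup -> high score, flagged
```

The sensitivity parameters (_windowSize, _threshold, _trainingBufferSize) play the analogous roles in the shipped class.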

Step 3: Create Script Expressions to Wire Tags

Expressions connect the AnomalyML class methods to output tags. Each expression calls a method and writes the return value to a tag. All four expressions must use the same TriggerTag (the sensor being monitored).

```text
get_table_schema('ScriptsExpressions')
```

write_objects — ML Expressions:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    { "ObjectName": "Tag.{Area}/ML/Score", "Expression": "Script.Class.AnomalyML.Check(Tag.{Area}/Temperature)", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/IsAnomaly", "Expression": "Script.Class.AnomalyML.GetIsAnomaly()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/Baseline", "Expression": "Script.Class.AnomalyML.GetBaseline()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" },
    { "ObjectName": "Tag.{Area}/ML/ModelReady", "Expression": "Script.Class.AnomalyML.IsModelReady()", "Execution": "OnChange", "Trigger": "Tag.{Area}/Temperature", "Label": "ML" }
  ]
}
```

Execution order matters: The Score expression calls Check() which runs the model. The other three expressions read results from that same prediction cycle via getter methods. All four must share the same Trigger so they execute together. Replace {Area} and Temperature with the actual tag paths.
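One trigger cycle can be pictured like this (illustrative Python with hypothetical names; the point is that Check() is the only call that executes the model, and the getters read its cached result, which is why all four expressions share one Trigger):

```python
# Illustrative sketch of one trigger cycle: the Score expression runs check()
# (the only call that executes the model); the remaining expressions read
# cached getters from the same prediction.

class Model:
    def __init__(self):
        self._score = 0.0

    def check(self, value):          # runs the model and caches the result
        self._score = 1.0 if value > 100 else 0.0
        return self._score

    def get_is_anomaly(self):        # getters only read the cache
        return self._score >= 0.5

tags = {}
model = Model()

def on_sensor_change(value):
    # Expression order within the cycle: check() first, getters after.
    tags["ML/Score"] = model.check(value)
    tags["ML/IsAnomaly"] = model.get_is_anomaly()

on_sensor_change(42)     # normal value
on_sensor_change(250)    # anomalous value
```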

Step 4: Add Alarm on Anomaly Detection

Create an alarm that triggers when the ML model detects an anomaly.

```text
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
```

write_objects multi-table call:

```json
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [ { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" } ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [ { "Name": "AnomalyDetected", "Group": "MLAlarms", "TagName": "{Area}/ML/IsAnomaly", "Type": "Digital", "Description": "ML model detected anomaly" } ]
    }
  ]
}
```

Step 5: Create ML Dashboard

Build a display that shows sensor data alongside ML predictions.

```text
list_elements('Dashboard,TrendChart,TextBlock')
get_table_schema('DisplaysList')
```

write_objects — ML Dashboard:

```json
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart", "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.{Area}/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.{Area}/ML/Score", "Color": "#FFE74C3C" }
          ]
        },
        { "Type": "TextBlock", "Column": 0, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Score", "Label": "Anomaly Score" },
        { "Type": "TextBlock", "Column": 1, "Row": 1, "LinkedValue": "Tag.{Area}/ML/Baseline", "Label": "Expected Baseline" },
        { "Type": "TextBlock", "Column": 0, "Row": 2, "LinkedValue": "Tag.{Area}/ML/IsAnomaly", "Label": "Anomaly Detected" },
        { "Type": "TextBlock", "Column": 1, "Row": 2, "LinkedValue": "Tag.{Area}/ML/ModelReady", "Label": "Model Ready" }
      ]
    }
  ]
}
```

Verification

  1. get_objects('ScriptsClasses', names=['AnomalyML']) — confirm class exists and ClassContent is MCPTool
  2. get_state() — check for compilation errors (NamespaceDeclarations must include ML.NET namespaces)
  3. get_objects('ScriptsExpressions') — confirm all four ML expressions are configured
  4. Start runtime → initially Score will be 0 and ModelReady will be false
  5. After ~100 data points (varies by scan rate), ModelReady becomes true and Score starts reflecting real predictions
  6. browse_runtime_properties('Tag.{Area}/ML') — verify ML output tags are updating

Common Pitfalls

| Mistake | Why It Happens | How to Avoid |
| --- | --- | --- |
| ML.NET assembly not found | NamespaceDeclarations missing or wrong | Import from Library (auto-includes declarations). Or manually set: Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries. ML.NET is pre-installed — no NuGet or external downloads needed. |
| No output for first few minutes | Model needs ~100 data points to train | This is expected. Check the ModelReady tag — it becomes true when training completes. At a 5-second scan rate, expect ~8 minutes of warmup. |
| Class state lost on restart | ML model lives in process memory | A full runtime restart retrains from scratch — the model accumulates new data automatically. This is by design. |
| Confusing Score with IsAnomaly | Score is continuous (0.0–1.0), IsAnomaly is binary | Use IsAnomaly (boolean) for alarms. Use Score for trending — it shows severity. Baseline shows what the model expected. |
| High CPU on fast data | OnChange execution runs the model on every value change | For high-frequency data (>1 update/sec), use Periodic execution instead of OnChange to reduce CPU load. |
| Adding @Tag references inside the class | Library class is pure .NET — tags don't exist in Library context | Never modify the class to write tags directly. All tag wiring must be done via Expressions (Step 3). The class exposes results through methods only. |
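The warmup estimate generalizes to any scan rate (simple arithmetic; the ~100-point figure comes from the class defaults):

```python
# Estimate warmup time until ModelReady from the scan rate. The ~100-point
# training requirement is the assumed default from the AnomalyML class.

def warmup_minutes(scan_rate_seconds: float, training_points: int = 100) -> float:
    """Minutes until the model has seen enough points to start scoring."""
    return training_points * scan_rate_seconds / 60.0

print(warmup_minutes(5))   # 5 s scan rate -> ~8.3 minutes
print(warmup_minutes(1))   # 1 s scan rate -> ~1.7 minutes
```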

Variations

Variation A: Pre-trained Model from File

  • Export model from Visual Studio ML.NET Model Builder
  • Place .zip file in the solution directory
  • Modify the script to call mlContext.Model.Load(modelPath) at startup
  • See the ML.NET Example documentation

Variation B: Multiple Independent Models

  • Each AnomalyML class instance tracks ONE sensor stream
  • For multiple sensors, create separate class instances: AnomalyML_Temperature, AnomalyML_Pressure, etc.
  • Each gets its own set of 4 expressions and 4 output tags
  • Better isolation but higher memory usage
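The isolation in Variation B can be sketched as one model object per stream (illustrative Python; in FrameworX these are separately imported Library class instances such as AnomalyML_Temperature and AnomalyML_Pressure):

```python
# Sketch of Variation B: one independent model object per sensor stream, so
# training state is never shared between streams (illustrative only).

class AnomalyModel:
    """Stand-in for one AnomalyML instance tracking a single stream."""
    def __init__(self):
        self.samples_seen = 0

    def check(self, value: float) -> None:
        self.samples_seen += 1  # each instance trains only on its own stream

# One instance per sensor.
models = {name: AnomalyModel() for name in ("Temperature", "Pressure")}

models["Temperature"].check(71.2)
models["Temperature"].check(71.4)
models["Pressure"].check(3.1)
```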

Variation C: Historian-Fed Training

  • Use Dataset.Query to fetch past 24 hours of sensor data
  • Train model on historical data at startup
  • Provides better initial model but requires Historian to be configured first
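Variation C amounts to pre-filling the training buffer before live data arrives (illustrative Python; fetch_history is a hypothetical stand-in for a Dataset.Query retrieval over the past 24 hours and returns canned values here):

```python
# Sketch of Variation C: seed the training buffer from the Historian at
# startup instead of waiting for live data. fetch_history is hypothetical.
from collections import deque

def fetch_history(tag: str, hours: int) -> list:
    # Hypothetical: a real implementation would query the Historian.
    return [20.0, 20.5, 21.0, 20.8]

def seed_training_buffer(tag: str, buffer_size: int = 100) -> deque:
    buffer = deque(maxlen=buffer_size)
    for value in fetch_history(tag, hours=24):
        buffer.append(value)  # pre-train on history before live data arrives
    return buffer

buf = seed_training_buffer("Plant/Reactor1/Temperature")
```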

Related Skills

  • skill-getting-started — Create the base solution with tags and simulator
  • skill-alarm-pipeline — Configure alarms (used in Step 4)
  • skill-historian-configuration — Log data for ML training and analysis
  • skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)