New for 10.1.5 (draft preview). This demo ships with FrameworX 10.1.5 (planned April 2026); the feature is not available in 10.1.4 or earlier. Content under review.
This demo grounds Local AI in a real plant model: an ISA-88 mini-pharmaceutical site whose operators ask the AI plain-language questions and get answers anchored in live tag values, alarm state, and batch context.
Download the demo files:
Hero screenshot: Canvas + chat display showing Mixer M-101 with the chat panel open and the AI's grounded reply visible.
This demo showcases, among other features, the following:
- A single SelectedAsset client tag driving four content displays: Canvas with chat, Dashboard, Alarms, Trends.

A good starting point for solutions that need a typed, hierarchical model is to import the ontology from a graph file rather than author it row by row in Designer.
For this demo, we picked ISA-88 as the source of structure. ISA-88 is an industry standard for batch-control plant models, and it gives us a clean class hierarchy out of the box: Enterprise, Site, Area, ProcessCell, Unit, EquipmentModule, plus the procedural classes Operation and Batch. We hand-crafted a small RDF/OWL file (Local AI Ontology Demo Source.rj, 21 KB) that defines those eight classes, the containment predicates between them (hasSite, hasArea, hasProcessCell, and so on), and a fictional pharma site populated with one batch in progress.
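For orientation, here is a fragment in the W3C RDF/JSON shape the .rj file uses. The hasSite predicate and the PharmaCo and NewBostonPlant individuals come from the demo; the example.com IRIs are illustrative assumptions, not the shipped file:

```json
{
  "http://example.com/s88#PharmaCo": {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#type": [
      { "value": "http://example.com/s88#Enterprise", "type": "uri" }
    ],
    "http://example.com/s88#hasSite": [
      { "value": "http://example.com/s88#NewBostonPlant", "type": "uri" }
    ]
  },
  "http://example.com/s88#NewBostonPlant": {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#type": [
      { "value": "http://example.com/s88#Site", "type": "uri" }
    ]
  }
}
```

In RDF/JSON, each top-level key is a subject IRI, each nested key a predicate, and each array entry a typed object; the importer walks exactly this structure to build classes and individuals.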
The importer reads the ontology and materializes the FrameworX UNS in a single pass:
- DisplayText, Labels, SourceIri, and BaseUserType fields, plus a free-form Attributes column for any annotation we did not promote to a first-class field.
- Attribute tags following the /Attr convention: PharmaCo/Attr, PharmaCo/NewBostonPlant/Attr, and so on down to Mixer_M101/Heater_H101/Attr.
- executesOn and belongsToBatch Operation links.

Click Relationship Graph on the Asset Tree toolbar at any time. A Markdown file with embedded Mermaid diagrams opens, showing the eight UDTs as a class diagram with their Reference members, the thirteen tags as Asset Tree nodes with /Attr leaves, and the cross-links between them. Hover any element to see its SourceIri, the OWL IRI it came from.
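As a sketch of what that generated class diagram conveys, the containment and procedural links can be drawn like this. S88_Unit and S88_EquipmentModule are the demo's UDT names; the other S88_-prefixed names and the hasUnit and hasEquipmentModule edge labels are assumptions extrapolated from the same naming pattern:

```mermaid
classDiagram
    S88_Enterprise --> S88_Site : hasSite
    S88_Site --> S88_Area : hasArea
    S88_Area --> S88_ProcessCell : hasProcessCell
    S88_ProcessCell --> S88_Unit : hasUnit
    S88_Unit --> S88_EquipmentModule : hasEquipmentModule
    S88_Operation --> S88_Unit : executesOn
    S88_Operation --> S88_Batch : belongsToBatch
```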
The ontology source file stays pure ISA-88, so it is portable. Take any reference ontology you already trust (your corporate asset model, a vendor template, an open standard such as ISA-95 or BFO) and import it the same way. The demo's import workflow does not depend on ISA-88 specifically.
Pure ISA-88 defines structural classes: Enterprise, Site, Unit. It does not define the SCADA members operators read at runtime: temperature, pressure, speed, setpoint, running state, output. Those belong to the deployment, not to the standard.
We kept the two layers cleanly separated. The ontology source file stays pure, free of SCADA details, so it round-trips back to RDF without leaking deployment-specific members. The operational members are added in Designer after the import, on top of the materialized UDTs:
- On S88_Unit, we added Capacity_L, Temperature_C (range 0–150 °C), Pressure_bar (0–10), Speed_rpm (0–600), Setpoint (default 60), and a Running Boolean.
- On S88_EquipmentModule, we added Running, Output (0–100 %), Setpoint (default 50), and Current_A (0–30 A).

Because the operational members live on the UDTs and not on the instances, every S88_Unit instance automatically inherits Temperature, Pressure, Speed, Setpoint, and Running. We did not author thirteen sets of fields for thirteen tags. We authored two UDTs.
The pattern generalizes. Bring your reference ontology for the asset structure; layer your operational members on top in Designer. The ontology does what ontologies are good at (typed structure); the runtime members do what runtime members are good at (live values).
One asset tree on the left, four content displays on the right, all driven by the same SelectedAsset client tag. Pick an asset, every display retargets to it. Switch displays, the selection persists. We chose this pattern so the operator builds a single mental model of "what asset am I looking at right now" and the page only changes the lens.
The headline display. We laid out a process diagram of Cell A on the left side of the canvas, and docked an AI chat panel on the right. The diagram shows Mixer M-101 with its Heater Jacket and Stirrer modules, Reactor R-101 next to it, live values bound to the operational members of each unit. The chat panel takes operator questions in plain language and replies anchored in the same tags the operator can see.
Type "Why is High-Shear Mixer M-101's temperature climbing?" into the chat. The reply names the Heater output, the Stirrer cycling pattern, the operation in progress, and the anomaly score, in two short paragraphs.
A responsive grid of metrics for the selected asset. We built one dashboard layout that works at every level of the hierarchy because the underlying UDTs share consistent member names. When the selection is at the Site or Area level, the dashboard summarizes child-equipment status. When the selection lands on a Unit or Module, it shows that one unit's live numbers: Temperature, Pressure, Speed, Setpoint, Running state.
The standard alarms grid plus an annotation column. When an anomaly fires, a server-side script calls the AI for a two-sentence operator-facing explanation, and the reply lands as an annotation on the active alarm row. Operators read the explanation in their own language, no runbook lookup. We covered the wiring under Server-Side AI for Annotations below.
Time-series chart for the selected asset, with the same AI-generated annotations pinned to the trend points where the anomaly was detected. Hover an annotation pin, see the AI's reply inline. We used this surface to make a temperature excursion explainable to a colleague three hours after the fact, without going back to the alarm history.
A chat panel that returns generic LLM answers is not a plant tool. The reason this demo's chat is useful is that every reply is grounded: the model receives, on every turn, the live values of the selected asset's operational members, the active alarm state, the Operation and Batch the asset is currently part of, and the AnomalyScore for the asset.
We wired the grounding once, at the script that builds the chat prompt. The chat panel passes the operator's question and the current SelectedAsset; the script reads the asset's live values from the UNS, walks one level up to find the parent ProcessCell and its peers, looks up the active Operation and Batch, and assembles a system message that gives the model everything it needs to reply specifically. The model is OpenAI-compatible and runs locally by default.
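A minimal sketch of that prompt-assembly step in Python. The read_asset stub and its field names stand in for the real UNS read, and the demo values are illustrative; none of this is the FrameworX scripting API:

```python
# Sketch of the grounding step that builds the chat system message.
# `read_asset` is a stand-in for a live UNS read; in the demo, the
# script reads the selected asset's operational members directly.

def read_asset(path):
    """Stand-in for a UNS read; returns the asset's live members."""
    demo_uns = {
        "PharmaCo/NewBostonPlant/CellA/Mixer_M101": {
            "Temperature_C": 78.4, "Pressure_bar": 1.9, "Speed_rpm": 420,
            "Setpoint": 60, "Running": True, "AnomalyScore": 0.91,
        },
    }
    return demo_uns[path]

def build_system_message(selected_asset, operation, batch, active_alarms):
    """Assemble the grounded system message passed to the local model."""
    values = read_asset(selected_asset)
    lines = [
        f"You are a plant assistant. Selected asset: {selected_asset}.",
        "Live values: " + ", ".join(f"{k}={v}" for k, v in values.items()),
        f"Active operation: {operation}; batch: {batch}.",
        "Active alarms: " + (", ".join(active_alarms) or "none"),
        "Answer using only the data above; quote the numbers you rely on.",
    ]
    return "\n".join(lines)
```

The assembled string goes into the system role of the chat request; the operator's question goes into the user role unchanged.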
Four scripted prompts on the demo's Canvas display walk through what grounding looks like in practice:
The reply names the Heater Jacket output, the Stirrer cycling pattern, the operation currently running on the Mixer (a wet-granulation mix step in Batch B-001), and the AnomalyScore reading. Two short paragraphs. No call to a generic knowledge base.
The reply quotes the AnomalyScore value, the model's confidence band, and the input features that contributed most to the score. The chat is reading the same anomaly-detection model that powers the alarm annotations.
The reply walks one level up the asset hierarchy and lists the peer units (Reactor R-101 and its Stirrer module). The AI reads the typed asset tree directly; it knows what a ProcessCell is because the ontology said so.
The reply names Batch B-001, the two operations linked to it (Op_Charge_001 and Op_Mix_001), the unit each operation executes on, and the batch state. Pulled from the Batch and Operation tags the ontology materialized.
The chat panel is one consumption pattern for Local AI. A server-side AI call from a Script Task is the other. We wired one into this demo: when the AnomalyScore on a Unit crosses its threshold, a script asks the AI for a two-sentence explanation and writes the reply as an annotation on both the active alarm and the trend point at that timestamp.
Two consequences:
The annotation pattern works for any tag with a numeric or Boolean signal: temperature excursion, motor stall, batch-state hang. Wire one Script Task per signal; the script body is roughly twenty lines of C# or Python.
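A sketch of that Script Task body in Python, under stated assumptions: ask_ai stands in for the OpenAI-compatible call, the annotations list stands in for the alarm and trend annotation writes, and the threshold value is illustrative:

```python
# Sketch of the server-side annotation Script Task. `ask_ai`,
# `annotations`, and THRESHOLD are illustrative stand-ins; swap in the
# real model call and annotation APIs of your solution.
THRESHOLD = 0.8

def explain(ask_ai, unit_path, score, values):
    """Ask the model for a two-sentence operator-facing explanation."""
    prompt = (
        f"In two sentences for an operator, explain why {unit_path} "
        f"scored anomaly {score:.2f} given live values {values}."
    )
    return ask_ai(prompt)

def on_anomaly(ask_ai, unit_path, score, values, annotations):
    """Fire only when the score crosses the threshold."""
    if score < THRESHOLD:
        return None
    text = explain(ask_ai, unit_path, score, values)
    # Stands in for the two writes: active alarm row + trend point.
    annotations.append((unit_path, text))
    return text
```

Called with the real model client and annotation APIs, the same body covers a temperature excursion, a motor stall, or a batch-state hang; only the signal tag changes.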
The AnomalyScore itself is produced by an ML.NET model trained on historical data, following the same pattern we used in the BottlingLine ML Demo. The chat and the annotation script both read the score from the same tag: one model, two consumption surfaces.
The Display chat panel and the server-side script call share one model configuration: SolutionSettings.ModelSettings.
Open the solution in Designer. The full UNS is already populated, the displays already wired, the AnomalyScore script already in place. Two steps to a live demo:
1. ModelSettings defaults expect an OpenAI-compatible endpoint at http://localhost:11434/v1/chat/completions with model qwen2.5:7b-instruct. The fastest way to set this up is the install script that ships with FrameworX: run AISetup\Install-LocalAI.ps1. The script checks for an existing install, pulls the runtime and the model only if needed, and verifies the endpoint with a smoke test. See Local AI for the full install reference.
2. To customize the ontology for a different plant or equipment hierarchy, replace Local AI Ontology Demo Source.rj with your own RDF/JSON file and re-import. The displays, the AnomalyScore script, and the chat prompts work against any UDT that exposes the same operational members.
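If you want to verify the endpoint by hand rather than rely on the install script's smoke test, a request in this shape works against any OpenAI-compatible server. Only the endpoint URL and model name are the demo defaults; the helper names are ours:

```python
# Hand-rolled smoke test for the default local endpoint. Building the
# payload is separated from sending it so the request shape is easy to
# inspect (and test) without a running server.
import json
from urllib import request

ENDPOINT = "http://localhost:11434/v1/chat/completions"

def build_payload(question, model="qwen2.5:7b-instruct"):
    """Standard OpenAI-style chat-completion request body."""
    return {"model": model,
            "messages": [{"role": "user", "content": question}]}

def smoke_test(question="Reply with the single word ok."):
    """POST the question and return the model's reply text."""
    body = json.dumps(build_payload(question)).encode()
    req = request.Request(ENDPOINT, data=body,
                         headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=60) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

Any non-error reply confirms the runtime is up, the model is pulled, and the OpenAI-compatible route is live, which is all the demo needs.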
Key facts about the demo solution:
- Ontology source: Local AI Ontology Demo Source.rj (21 KB, W3C RDF/JSON).
- Import policy: Local AI Ontology Demo Policy.json (one override, namedIndividualThreshold: 0, to keep typed individuals as tags rather than enumerations).
- Attributes follow the /Attr convention; one client-domain SelectedAsset tag drives all four displays.
- Model endpoint: SolutionSettings.ModelSettings. Default is a local Ollama instance running Qwen 2.5 7B-Instruct; any OpenAI-compatible endpoint works.