| Warning |
|---|
DRAFT v10.1.5. Pre-release draft for content review. Do not link from public material. The final page replaces this draft once 10.1.5 ships. |
| Info |
|---|
New in 10.1.5. MongoDB is a new Dataset provider for Find, Aggregate, Count, Insert, Update, and Delete operations on MongoDB document databases, integrated across all four FrameworX data connection surfaces: Dataset Provider, Device Protocol, UNS TagProvider, and Historian StorageLocation. The Dataset Provider also accepts a SQL subset dialect that the connector translates to MongoDB filter, sort, and projection automatically. |
| Table of Contents | ||||||
|---|---|---|---|---|---|---|
|
MongoDB is the first document database that FrameworX integrates across all four data connection surfaces. One MongoDB instance can serve any combination of these surfaces from the same solution.
Surface | Configure in | What it does |
|---|---|---|
Dataset Provider | Datasets / DBs / Provider = MongoDB | Query collections with JSON directives, aggregation pipelines, or the built-in SQL subset dialect. CRUD on collections through Dataset Tables. |
Device Protocol | Devices / Channels / Protocol = MongoDB | Map individual document fields as Channel, Node, and Point tags. Write updates the latest document in a collection. |
UNS TagProvider (TagDiscovery) | UNS / TagProviders / Protocol = MongoDB | Browse collections as a tag tree. Register document fields as UNS tags. Updates write back with $set. |
Historian StorageLocation | Historian / Storage Locations / Protocol = MongoDB | Store historian tag samples in MongoDB Time Series Collections. Append-style writes. |
A MongoDB UnsTagProvider row and a Historian StorageLocation row can both point at the same MongoDB instance.
The MongoDB .NET drivers ship unconditionally with FrameworX in FS/ThirdPartyBin/. There is no separate install and no optional pack. Approximate footprint is 4 MB per framework (net48 and net10).
Use MongoDB with the Datasets module to read and write document data from a FrameworX solution. The provider exposes four query shapes through the standard Dataset Query object and binds full document collections through Dataset Table for CRUD work.
Steps to connect:
Install a MongoDB server (6.0 or newer, 7.0 LTS recommended), create a database and user, then move to the FrameworX configuration below.
Follow the official MongoDB installation guide for your platform at mongodb.com/docs/manual/installation. Summary by target:
Windows: the installer registers mongod as a Windows service on port 27017.
Linux: install the mongodb-org package (apt install mongodb-org on Debian or Ubuntu with the MongoDB repository configured). Start the service with systemctl start mongod.
macOS: brew tap mongodb/brew && brew install mongodb-community, then brew services start mongodb-community.
Docker: docker run -d --name fx-mongo -p 27017:27017 mongo:7.

Install the mongosh command-line shell alongside the server. Most platform installers include it by default.
Run the command below to confirm the server responds:
| Code Block | ||
|---|---|---|
| ||
mongosh "mongodb://localhost:27017" --quiet --eval "db.adminCommand('ping').ok" |
A successful ping returns 1.
Run these commands in mongosh to create a target database, a starter collection, and a user for FrameworX to authenticate as:
| Code Block | ||
|---|---|---|
| ||
use plant_data
db.readings.insertOne({ plant: "Plant01", value: 0, ts: new Date() })
db.createUser({
user: "fxuser",
pwd: "<choose-a-strong-password>",
roles: [ { role: "readWrite", db: "plant_data" } ]
}) |
The user is stored against the plant_data database above. Use plant_data as the Auth Source value in the FrameworX connection string in the next section, or use admin when the user lives in the admin database.
The MongoDB connector targets netstandard2.0 and runs unchanged on both .NET Framework 4.8 (Designer and Windows Runtime) and .NET 10 (cross-platform Runtime on Windows, Linux, and containers). The vendored MongoDB driver ships in both flavors with the FrameworX installation. The mongod server is OS-native and has no .NET dependency.
Follow the steps below to connect FrameworX to a MongoDB server.
Configure the connection with the structured fields below, or supply a full mongodb:// or mongodb+srv:// URI, which overrides the structured fields.

Field | Required | Description | Example |
|---|---|---|---|
Server | Yes | Host name or IP of the MongoDB server. | localhost |
Port | No | TCP port. Default is 27017. | 27017 |
User | No | Database user. Leave empty for unauthenticated access. | appuser |
Password | No | User password. Stored encrypted in the solution file. | ******** |
Database | Yes | Target database name. | plant_data |
Auth Source | No | SCRAM authentication database. Default is admin. | admin |
TLS | No | Set to true for TLS/SSL connections. | true |
Connection URI | No | Full MongoDB URI. Overrides the structured fields. Use for replica sets and mongodb+srv:// connections. | mongodb+srv://user:pwd@cluster0.example.net/?retryWrites=true |
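As a quick illustration of how the structured fields relate to a standard connection string, the sketch below composes a mongodb:// URI from them. This is Python for illustration only; build_mongo_uri is a hypothetical helper, not part of the FrameworX API.

```python
from urllib.parse import quote_plus

def build_mongo_uri(server, port=27017, user=None, password=None,
                    database="", auth_source=None, tls=False):
    """Compose a mongodb:// URI from the structured connection fields.
    A full Connection URI, when supplied, would override all of this."""
    cred = ""
    if user:
        # Credentials must be percent-encoded per the MongoDB URI format.
        cred = quote_plus(user)
        if password:
            cred += ":" + quote_plus(password)
        cred += "@"
    uri = f"mongodb://{cred}{server}:{port}/{database}"
    params = []
    if auth_source:
        params.append(f"authSource={auth_source}")
    if tls:
        params.append("tls=true")
    if params:
        uri += "?" + "&".join(params)
    return uri

print(build_mongo_uri("localhost", user="fxuser", password="s3cret!",
                      database="plant_data", auth_source="plant_data", tls=True))
```

Note how Auth Source becomes the authSource query option, which is why a user created in plant_data needs authSource=plant_data rather than the admin default.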
MongoDB queries accept four input shapes. The provider routes the Command Text of a Dataset Query to the matching driver operation:
Input shape | Operation | Description |
|---|---|---|
Plain collection name, no braces or brackets | Find | Returns all documents in the collection, with columns auto-detected from the first document. |
JSON object starting with { | Find or Count | Treated as a JSON directive. Reads the collection, filter, sort, projection, and limit keys, or count for a Count operation. |
JSON array starting with [ | Aggregate | Treated as an aggregation pipeline. Requires a default collection set on the Dataset Query. |
SQL statement starting with SELECT | SQL subset (new in 10.1.5) | Parsed by the built-in SQL-to-MongoDB translator. See the next subsection. |
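The routing rule in the table can be sketched as a small dispatcher. This is illustrative Python; classify_command_text is a hypothetical name, not the connector's internal API.

```python
def classify_command_text(command_text: str) -> str:
    """Map a Dataset Query Command Text to one of the four input shapes."""
    text = command_text.strip()
    if not text:
        raise ValueError("empty Command Text")
    if text.startswith("{"):
        return "directive"   # JSON object: Find or Count directive
    if text.startswith("["):
        return "pipeline"    # JSON array: aggregation pipeline
    if text.upper().startswith("SELECT"):
        return "sql"         # SQL subset, new in 10.1.5
    return "find_all"        # plain collection name: return all documents

print(classify_command_text('{ "collection": "readings" }'))  # directive
```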
New in Phase B of the 10.1.5 MongoDB connector, users can write standard SELECT statements in the Dataset Query SqlStatement. The connector translates the statement to a MongoDB filter, sort, and projection automatically and executes it through the same driver that handles JSON directives and aggregation pipelines.
The translator accepts this subset of standard SQL:
Clauses: SELECT cols | *, FROM collection, WHERE expr, ORDER BY col [ASC|DESC] (, col [ASC|DESC] ...), LIMIT n.
Comparison operators: =, <>, !=, <, <=, >, >=.
Logical operators: AND, OR, NOT.
Predicates: IS [NOT] NULL, [NOT] BETWEEN ... AND ..., [NOT] IN (...), [NOT] LIKE.
Identifiers: plain (plant), double-quoted ("plant"), bracketed ([plant]), back-ticked (`plant`).
Dotted paths such as site.line.station resolve to nested BSON fields.
Signed numeric literals such as -42 or -3.14.
x IS NULL and x = NULL both match documents where the field is missing or has a null value. This follows the MongoDB idiom for missing-or-null match.
LIKE translates to an anchored regular expression. The wildcard % becomes .* and _ becomes .. Case-folding collation is not applied.
@Tag.Name references are expanded before the statement reaches the SQL-to-MongoDB bridge, so tag values appear as literals in the parsed statement.

All three shapes below coexist at the same Provider = MongoDB.Driver. Pick the shape that fits the call site.
JSON directive.
| Code Block | ||
|---|---|---|
| ||
{ "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 } |
Aggregation pipeline.
| Code Block | ||
|---|---|---|
| ||
[
{ "$match": { "plant": "Plant01" } },
{ "$sort": { "ts": -1 } },
{ "$limit": 10 }
] |
SQL subset.
| Code Block | ||
|---|---|---|
| ||
SELECT plant, value, ts
FROM readings
WHERE plant = 'Plant01'
ORDER BY ts DESC
LIMIT 10 |
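The LIKE rule from the grammar above can be illustrated with a small translation function. This is a sketch; the real translator's internals are not documented here.

```python
import re

def like_to_regex(pattern: str) -> str:
    """Translate a SQL LIKE pattern to an anchored regular expression:
    % becomes .* , _ becomes . , every other character is escaped literally.
    No case folding is applied, matching the behavior described above."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return "^" + "".join(out) + "$"

print(like_to_regex("Plant_0%"))  # ^Plant.0.*$
```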
The translator raises MongoSqlTranslationException for any construct outside the supported grammar. Out-of-scope constructs include:
GROUP BY and aggregate functions (COUNT, SUM, AVG, MIN, MAX).
JOIN in any form.
DISTINCT.
OFFSET.
Subqueries in SELECT or WHERE.
INSERT, UPDATE, and DELETE statements (use Dataset Tables, or write an aggregation pipeline).

These constructs are parked for later releases. No ETA is promised. Use an aggregation pipeline or a Dataset Table when the SQL subset does not cover the need.
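A minimal sketch of how a WHERE clause from the supported subset could map to a MongoDB filter document. This handles only flat AND-joined comparisons, far less than the translator's full grammar, and the helper name is hypothetical.

```python
# None marks direct equality; other operators map to MongoDB comparison operators.
_OPS = {"=": None, ">": "$gt", ">=": "$gte", "<": "$lt", "<=": "$lte",
        "<>": "$ne", "!=": "$ne"}

def translate_simple_where(where: str) -> dict:
    """Translate a flat `col op literal [AND ...]` WHERE clause into a
    MongoDB filter document. Sketch only: no OR/NOT, no predicates,
    string literals in single quotes, numbers as int or float."""
    filt = {}
    for term in where.split(" AND "):
        col, op, lit = term.split(None, 2)
        if lit.startswith("'") and lit.endswith("'"):
            value = lit[1:-1]                        # string literal
        else:
            value = float(lit) if "." in lit else int(lit)
        filt[col] = value if _OPS[op] is None else {_OPS[op]: value}
    return filt

print(translate_simple_where("plant = 'Plant01' AND value > 10"))
```

The SELECT example earlier would produce the filter {"plant": "Plant01"}, plus a separate sort specification and limit, which is exactly the find arguments the JSON directive shape expresses directly.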
This example shows how to read the latest production readings from a MongoDB collection, run an aggregation, and write a new record. The example requires objects in other modules of the platform.
In Datasets / Queries, create the queries below. For full details on query configuration, see Datasets Queries Reference.
| Code Block | ||
|---|---|---|
| ||
{ "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 } |
| Code Block | ||
|---|---|---|
| ||
[
{ "$match": { "plant": "Plant01" } },
{ "$group": { "_id": { "$dateTrunc": { "date": "$ts", "unit": "hour" } }, "avg": { "$avg": "$value" } } },
{ "$sort": { "_id": 1 } }
] |
| Code Block | ||
|---|---|---|
| ||
{ "count": "readings", "filter": { "quality": "good" } } |
In Datasets / Tables, create one table bound to the readings collection for insert and update work. A Dataset Table configured on a MongoDB collection uses the document _id as the primary key. Updates apply $set on the tracked columns and preserve any fields outside the tracked column list, matching the behavior of SQL Dataset Tables backed by a CommandBuilder.
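The update behavior described above, $set on tracked columns keyed by the document _id, can be sketched as follows. This is an illustration, not the connector's code; build_update is a hypothetical helper.

```python
def build_update(tracked_columns, row):
    """Build the filter/update pair for a Dataset Table save: match on _id,
    $set only the tracked columns, and leave all other document fields
    untouched (mirroring a SQL CommandBuilder-style update)."""
    filter_doc = {"_id": row["_id"]}
    update_doc = {"$set": {c: row[c] for c in tracked_columns if c != "_id"}}
    return filter_doc, update_doc

f, u = build_update(["plant", "value"],
                    {"_id": 7, "plant": "Plant01", "value": 42})
print(u)  # {'$set': {'plant': 'Plant01', 'value': 42}}
```

Because only tracked columns appear under $set, a document field like a nested metadata object that the table does not track survives every save.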
In Unified Namespace / Tags, create the tags used in the example: QueryResult, LatestPlantCode, and LatestValue (the scripts below reference these names), plus one trigger tag per task.
In Scripts / Tasks, create the tasks below, each triggered by the matching tag:
Find task
| Code Block | ||
|---|---|---|
| ||
@Tag.QueryResult = @Dataset.Query.QueryFindReadings.SelectCommand();
@Info.Trace("Find OK: " + @Tag.QueryResult); |
Aggregate task
| Code Block | ||
|---|---|---|
| ||
@Tag.QueryResult = @Dataset.Query.QueryAggregateHourly.SelectCommand();
@Info.Trace("Aggregate OK: " + @Tag.QueryResult); |
Insert task (via Dataset Table)
| Code Block | ||
|---|---|---|
| ||
@Dataset.Table.TableReadings.AddRow();
@Dataset.Table.TableReadings.Row["plant"] = @Tag.LatestPlantCode;
@Dataset.Table.TableReadings.Row["value"] = @Tag.LatestValue;
@Dataset.Table.TableReadings.Row["ts"] = DateTime.UtcNow;
int i = @Dataset.Table.TableReadings.Save();
@Info.Trace("Insert OK: " + i); |
After you finish the configuration and create the scripts, run the solution and trigger each task. Values arrive in the MongoDB readings collection and the Find and Aggregate results populate the QueryResult tag.
Updates run UpdateOne with $set on tracked columns. Fields outside the tracked column list are preserved on update.
Multi-row saves run BulkWrite with the same tracking rules.
Replica sets and cloud deployments connect through mongodb+srv:// URIs.

Use the Device Protocol surface when you want each MongoDB document field addressed as an individual Tag with a Node-level connection. This is the classic SCADA Channel / Node / Point model.
Use UNS TagProvider instead (next section) when you want dynamic browse and tree-style registration.
Channel. Devices / Channels / New. Protocol = MongoDB. Interface is automatically Custom (no CommAPI).
Node. One Node per MongoDB server plus database. PrimaryStation is a delimited string with the shape:
| Code Block | ||
|---|---|---|
| ||
ConnectionUri;Database;AuthSource;TlsEnabled |
Example value:
| Code Block | ||
|---|---|---|
| ||
mongodb://localhost:27017;plant_a;admin;false |
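A sketch of splitting that delimited PrimaryStation value into its four parts. It assumes the connection URI itself contains no semicolons; parse_primary_station is a hypothetical helper name.

```python
def parse_primary_station(station: str) -> dict:
    """Split the delimited PrimaryStation string into its four parts:
    ConnectionUri;Database;AuthSource;TlsEnabled."""
    parts = station.split(";")
    if len(parts) != 4:
        raise ValueError("expected ConnectionUri;Database;AuthSource;TlsEnabled")
    uri, database, auth_source, tls = parts
    return {"uri": uri, "database": database,
            "auth_source": auth_source,
            "tls": tls.strip().lower() == "true"}

print(parse_primary_station("mongodb://localhost:27017;plant_a;admin;false"))
```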
Point. Address is a single-dot collection.field string. Example: sensors.temperature.
On each scan cycle, the driver runs one find per configured address, sorts by _id descending, limits to 1, and projects the requested field. The value is returned as a string on every successful read. Quality is 192 (Good) on hit, 0 (Bad) when the collection is empty or the field is absent.
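The read semantics above can be simulated against an in-memory list of documents. This is an illustration of the described behavior, not driver code; the function name is hypothetical.

```python
def read_point(docs, field):
    """Simulate a scan read: newest document by _id (sort {_id: -1}, limit 1),
    requested field projected and returned as a string, quality 192 (Good)
    on hit, 0 (Bad) when the collection is empty or the field is absent."""
    if not docs:
        return None, 0
    latest = max(docs, key=lambda d: d["_id"])
    if field not in latest:
        return None, 0
    return str(latest[field]), 192   # Device-path values are strings

docs = [{"_id": 1, "temperature": 20.5}, {"_id": 2, "temperature": 21.0}]
print(read_point(docs, "temperature"))  # ('21.0', 192)
```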
Values round-trip as strings in the Device path. Numeric or boolean tags show their value as text. Use the UNS TagProvider surface (next section) when your integration needs the value's native type preserved.
WriteTagValue finds the most recent document in the collection and runs $set on that document's _id. If the collection is empty, a fresh document is inserted with the field plus a ts UTC timestamp.
This is config-style or settings-style write behavior. Every write mutates the latest document. It does NOT append a new time-series sample. Use a Historian StorageLocation (later section) for append-style time-series writes.
Only single-dot addresses are accepted in 10.1.5. collection.field works. collection.field.subfield is rejected. Nested BSON path support is planned for a later release.
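The write behavior described above can be illustrated against an in-memory collection. This is a sketch of the documented semantics, not the connector's implementation.

```python
from datetime import datetime, timezone

def write_tag_value(docs, field, value, now=None):
    """Simulate WriteTagValue: $set the field on the newest document, or
    insert a fresh document with the field plus a ts UTC timestamp when
    the collection is empty."""
    if docs:
        latest = max(docs, key=lambda d: d["_id"])  # newest document by _id
        latest[field] = value                       # UpdateOne {$set: {field: value}}
    else:
        docs.append({"_id": 1, field: value,
                     "ts": now or datetime.now(timezone.utc)})
    return docs
```

Note how repeated writes mutate the same latest document rather than appending new ones, which is exactly why append-style time-series storage belongs to the Historian surface.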
Use the UNS TagProvider surface when you want to browse a MongoDB instance from the Designer UNS tree, discover field names dynamically, and register fields as UNS tags.
UNS / TagProviders / New. Protocol = MongoDB. Station uses the same delimited format as the Device Protocol:
| Code Block | ||
|---|---|---|
| ||
ConnectionUri;Database;AuthSource;TlsEnabled |
The browse view has two levels:
Collections: the collections in the configured database.
Fields: the provider samples the newest document with sort({_id:-1}).limit(1) and lists that document's top-level field names. The _id field is hidden from the browse view.

The browse is schema-less. If different documents in a collection carry different fields, the browse reflects only the sampled document. Register the fields you need and define them explicitly on the UNS Tag when the schema varies.
Designer refresh does not mutate state. Browse is side-effect-free and safe to call on every tree expansion.
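The field-level browse rule can be sketched over an in-memory collection: sample the newest document, list its top-level fields, hide _id. Illustration only; browse_fields is a hypothetical name.

```python
def browse_fields(docs):
    """List browseable field names for a collection: sample the newest
    document (sort {_id: -1}, limit 1), list its top-level field names,
    and hide _id. Returns an empty list for an empty collection."""
    if not docs:
        return []
    sample = max(docs, key=lambda d: d["_id"])
    return sorted(k for k in sample if k != "_id")

docs = [{"_id": 1, "plant": "A"}, {"_id": 2, "plant": "B", "value": 3}]
print(browse_fields(docs))  # ['plant', 'value']
```

The sketch also makes the schema-less caveat concrete: only fields of the sampled (newest) document appear, so older documents' extra fields never show in the tree.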
Select a field under a collection and use Add to UNS. The registered address is collection.field. The attribute type is string in 10.1.5 (all scalars). Typed registration is planned for a later release.
Identical to the Device Protocol section:
Writes apply $set on the latest document's _id, or insert a minimal document when the collection is empty.
Addresses are single-dot collection.field only. The registration call rejects multi-dot addresses with the message Multi-segment paths not supported in v0.9, use 'collection.field' with a single dot.
Use this surface when you want FrameworX Historian to write tag samples into MongoDB as an append-style time series. This is the append behavior most users expect from a historian.
A MongoDB UNS TagProvider row must already exist (previous section). The Historian StorageLocation references it by name.
Historian / Storage Locations / New. The Protocol combo includes MongoDB when the TagProvider row has IsHistorian="true" (the default for 10.1.5). The DataRepository field selects which MongoDB UnsTagProvider to use.
Each FrameworX tag-collection maps to a MongoDB Time Series Collection (MongoDB 6.0+). The canonical document schema is:
| Code Block | ||
|---|---|---|
| ||
{
"t": "<UTC DateTime>",
"m": { "tag": "<address>" },
"v": "<value>",
"q": "<quality ushort>"
} |
The collection is created on first write if absent. Granularity is hard-coded to seconds in 10.1.5. This value is immutable on the MongoDB side after collection creation. Station-configurable granularity is planned for a later release.
WriteHistorianDataEx groups the incoming batch by collection name and calls InsertMany with IsOrdered=false. Every sample becomes a new document. Partial-batch failures are surfaced on a per-document basis. A bad document does not fail the whole batch.
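The batch shaping described above can be sketched as grouping samples by collection and forming the canonical t/m/v/q documents. The tuple layout of `samples` is a hypothetical stand-in; the real WriteHistorianDataEx signature is not reproduced here.

```python
from collections import defaultdict

def to_documents(samples):
    """Group a historian batch by collection name and shape each sample
    into the canonical time-series document: t (time), m.tag (metadata),
    v (value), q (quality). One InsertMany per resulting key."""
    batches = defaultdict(list)
    for collection, tag, t, v, q in samples:
        batches[collection].append({
            "t": t,              # time field of the Time Series Collection
            "m": {"tag": tag},   # metadata field
            "v": v,              # sample value
            "q": q,              # quality as ushort
        })
    return dict(batches)
```

Each key would feed one unordered InsertMany (IsOrdered=false), so a single bad document is reported individually without failing the rest of the batch.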
GetSamples does a range query ($gte and $lte on the t field) with ascending sort. The interval and getSamplesMode parameters are ignored in 10.1.5. Raw samples only.
Aggregation (Average, Min, Max, Sum with non-zero interval) is planned for a later release using MongoDB $bucket and $bucketAuto stages.
If a user configures the same collection as BOTH a UNS TagProvider browse target AND a Historian StorageLocation target, the Historian write path succeeds silently into a plain (non-Time-Series) collection but reads behave incorrectly. A runtime guard is planned for a later release. Until it lands, use distinct collection names for the two roles.
The table below collects per-surface limits that apply to the 10.1.5 release.
Area | Limit | Workaround |
|---|---|---|
Dataset SQL subset | No GROUP BY or aggregate functions, no JOIN, DISTINCT, OFFSET, subqueries, or INSERT/UPDATE/DELETE. | Use an aggregation pipeline or a Dataset Table for the operation. |
Dataset SQL subset - collation | No collation is applied. Comparisons follow BSON default collation and identifiers are case-sensitive. | Normalize case on ingest, or use the aggregation pipeline with an explicit collation. |
Device and UNS address format | Single-dot collection.field addresses only. | Flatten nested documents on ingest, or wait for a later release. |
Device and UNS values | Round-trip as strings. | Use typed tag bindings on the UNS side for numerical dashboards. |
Historian granularity | Hard-coded to seconds. | Choose the most precise granularity at collection creation. Recreate the collection to change. |
Historian aggregation | Raw samples only. Average, Min, Max, Sum with an interval are not supported. | Request raw samples and aggregate in FX Trend or Script. |
Historian connection pool | One | Scope time ranges tightly on dashboards. |
UNS and Historian on the same collection | No runtime guard against mixing. | Use distinct collection names for the two roles. |