Warning

DRAFT v10.1.5. Pre-release draft for content review. Do not link from public material. The final page replaces this draft once 10.1.5 ships.


Page properties
hiddentrue
idconnector-metadata


Property

Value

Sector

Historian & Databases

Group

SQL Databases

Connector

MongoDB Database

Name

MongoDB



Info

New in 10.1.5. MongoDB is integrated across all four FrameworX data connection surfaces: Dataset Provider, Device Protocol, UNS TagProvider, and Historian StorageLocation. The Dataset Provider also accepts a SQL subset dialect that the connector translates to MongoDB filter, sort, and projection automatically.

Connector for MongoDB document databases.

  • Name: MongoDB
  • Version: 1.0.0.0
  • Protocol: Proprietary (MongoDB Wire Protocol)
  • Interface: TCP/IP
  • Runtime: .NET Framework 4.8 (Windows) and .NET 10 (cross-platform)
  • Configuration:
    • Datasets / DBs
    • Devices / Channels
    • UNS / TagProviders
    • Historian / Storage Locations
  • Minimum server version: MongoDB 6.0 (recommended 7.0 LTS)



Overview

MongoDB is the first document database that FrameworX integrates across all four data connection surfaces. One MongoDB instance can serve any combination of these surfaces from the same solution.

Surface

Configure in

What it does

Dataset Provider

Datasets / DBs / Provider = MongoDB.Driver

Query collections with JSON directives, aggregation pipelines, or the built-in SQL subset dialect. CRUD on collections through Dataset Tables.

Device Protocol

Devices / Channels / Protocol = MongoDB

Map individual document fields as Channel, Node, and Point tags. Write updates the latest document in a collection.

UNS TagProvider (TagDiscovery)

UNS / TagProviders / Protocol = MongoDB

Browse collections as a tag tree. Register document fields as UNS tags. Updates write back with $set.

Historian StorageLocation

Historian / Storage Locations / Protocol = MongoDB

Store historian tag samples in MongoDB Time Series Collections. Append-style writes.

A MongoDB UnsTagProvider row and a Historian StorageLocation row can both point at the same MongoDB instance.

Supported MongoDB versions

  • Floor: MongoDB 6.0 (Time Series Collections require 6.0+).
  • Recommended target: MongoDB 7.0 (current LTS).
  • Tested against 6.0, 7.0, 8.0.

Prerequisites

MongoDB .NET drivers ship unconditionally with FrameworX in FS/ThirdPartyBin/. No separate install or optional pack is required. Approximate footprint is 4 MB per framework (net48 and net10).


Dataset Provider

Use MongoDB with the Datasets module to read and write document data from a FrameworX solution. The provider exposes four query shapes through the standard Dataset Query object and binds full document collections through Dataset Table for CRUD work.

Steps to connect:

  1. Install a MongoDB server and create a database.
  2. Create a Database Connection with the MongoDB provider.
  3. Configure the connection string and test the connection.
  4. Create Dataset Queries for Find, Aggregate, Count, or SQL subset.
  5. Create Dataset Tables to bind collections for insert, update, and delete.
  6. Call the queries and tables from scripts, displays, or reports.

Installation

Install a MongoDB server (6.0 or newer, 7.0 LTS recommended), create a database and user, then move to the FrameworX configuration below.

Install the MongoDB server

Follow the official MongoDB installation guide for your platform at mongodb.com/docs/manual/installation. Summary by target:

  • Windows: download the MongoDB Community Server MSI and run the installer. The default install registers mongod as a Windows service on port 27017.
  • Linux: install the distro package per the vendor instructions (for example, apt install mongodb-org on Debian or Ubuntu with the MongoDB repository configured). Start the service with systemctl start mongod.
  • macOS: brew tap mongodb/brew && brew install mongodb-community, then brew services start mongodb-community.
  • Docker: docker run -d --name fx-mongo -p 27017:27017 mongo:7.

Install the mongosh command-line shell alongside the server. Most platform installers include it by default.

Verify the server is reachable

Run the command below to confirm the server responds:

Code Block
languagetext
mongosh "mongodb://localhost:27017" --quiet --eval "db.adminCommand('ping').ok"

A successful ping returns 1.

Create the database and user

Run these commands in mongosh to create a target database, a starter collection, and a user for FrameworX to authenticate as:

Code Block
languagetext
use plant_data

db.readings.insertOne({ plant: "Plant01", value: 0, ts: new Date() })

db.createUser({
  user: "fxuser",
  pwd: "<choose-a-strong-password>",
  roles: [ { role: "readWrite", db: "plant_data" } ]
})

The user above is stored in the plant_data database. Use plant_data as the Auth Source value in the FrameworX connection string in the next section, or use admin when the user lives in the admin database.
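To confirm the new user authenticates before moving on, run a quick check along these lines from a terminal (it assumes the fxuser account and plant_data database created above; replace the password placeholder):

```text
mongosh "mongodb://fxuser:<choose-a-strong-password>@localhost:27017/plant_data?authSource=plant_data" --quiet --eval "db.readings.countDocuments()"
```

A successful login prints the document count of the readings collection (1 after the insertOne above).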

Platform compatibility

The MongoDB connector targets netstandard2.0 and runs unchanged on both .NET Framework 4.8 (Designer and Windows Runtime) and .NET 10 (cross-platform Runtime on Windows, Linux, and containers). The vendored MongoDB driver ships in both flavors with the FrameworX installation. The mongod server is OS-native and has no .NET dependency.

Configuration

Follow the steps below to connect FrameworX to a MongoDB server.

  1. Access Datasets / DBs.
  2. Click the plus icon to create a new Database Connection.
  3. In the Name field, enter a name for the connection, for example MongoDB1.
  4. Choose MongoDB Data Provider as the Provider.
  5. Click OK.
  6. In the data grid, click the Connection String column of the newly created row.
  7. Configure the connection. The Designer dialog shows structured fields for Server, Port, User, Password, Database, Auth Source, and TLS. For replica sets and Atlas clusters, use the Advanced Connection URI field with a mongodb:// or mongodb+srv:// URI, which overrides the structured fields.
  8. Click Test to verify the connection with the MongoDB server.
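For reference, a replica-set URI for the Advanced Connection URI field could look like the hedged example below (host names, replica set name, and credentials are placeholders):

```text
mongodb://fxuser:<password>@mongo1.example.net:27017,mongo2.example.net:27017/plant_data?replicaSet=rs0&authSource=plant_data&tls=true
```

For MongoDB Atlas clusters, use the mongodb+srv:// form shown in the Connection URI row of the table below instead.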

Connection string options

Field

Required

Description

Example

Server

Yes

Host name or IP of the MongoDB server.

localhost

Port

No

TCP port. Default is 27017.

27017

User

No

Database user. Leave empty for unauthenticated access.

appuser

Password

No

User password. Stored encrypted in the solution file.

********

Database

Yes

Target database name.

plant_data

Auth Source

No

SCRAM authentication database. Default is admin.

admin

TLS

No

Set to true for TLS/SSL connections.

true

Connection URI

No

Full MongoDB URI. Overrides the structured fields. Use for replica sets and mongodb+srv:// Atlas URIs.

mongodb+srv://user:pwd@cluster0.example.net/?retryWrites=true

Query shapes

MongoDB queries accept four input shapes. The provider routes the Command Text of a Dataset Query to the correct driver operation based on the input shape:

Input shape

Operation

Description

Plain collection name, no braces or brackets

Find

Returns all documents in the collection, with columns auto-detected from the first document.

JSON object starting with {

Find or Count

Reads collection, filter, sort, and limit fields. If the object has a count key, runs Count and returns a single-row table.

JSON array starting with [

Aggregate

Treated as an aggregation pipeline. Requires a default collection set on the Dataset Query.

SQL statement starting with SELECT

SQL subset (new in 10.1.5)

Parsed by the built-in SQL-to-MongoDB translator. See the next subsection.

SQL subset dialect (new in 10.1.5)

New in Phase B of the 10.1.5 MongoDB connector, the Dataset Query SqlStatement accepts standard SELECT statements. The connector automatically translates each statement to a MongoDB filter, sort, and projection and executes it through the same driver path that handles JSON directives and aggregation pipelines.

Supported grammar (v1)

The translator accepts this subset of standard SQL:

  • Clauses: SELECT cols | *, FROM collection, WHERE expr, ORDER BY col [ASC|DESC] (, col [ASC|DESC]...), LIMIT n.
  • Comparison operators: =, <>, !=, <, <=, >, >=.
  • Boolean operators: AND, OR, NOT.
  • Predicates: IS [NOT] NULL, [NOT] BETWEEN ... AND ..., [NOT] IN (...), [NOT] LIKE.
  • Identifier forms: bare (plant), double-quoted ("plant"), bracketed ([plant]), back-ticked (`plant`).
  • Dotted BSON paths: identifiers like site.line.station resolve to nested BSON fields.
  • Numeric literals: integer and decimal, including negative values such as -42 or -3.14.
  • Case sensitivity: identifiers are case-sensitive by design. BSON field names are case-sensitive in MongoDB and the translator preserves that behavior.

Semantic notes

  • x IS NULL and x = NULL both match documents where the field is missing or has a null value. This follows the MongoDB idiom for missing-or-null match.
  • LIKE translates to an anchored regular expression. The wildcard % becomes .* and _ becomes . (any single character). Case-folding collation is not applied.
  • No collation. Comparisons follow BSON default collation.
  • No parameter plumbing at the translator layer. The Dataset host pre-interpolates @Tag.Name references before the statement reaches the SQL-to-MongoDB bridge, so tag values appear as literals in the parsed statement.
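The LIKE mapping above can be pictured with a short sketch. This is not the connector's source code, only a hedged model of the documented behavior: % maps to .*, _ maps to ., other characters are escaped as literals, and the expression is anchored at both ends with no case folding.

```javascript
// Hedged model of the documented LIKE-to-regex mapping (not connector source).
function likeToRegex(pattern) {
  let out = "";
  for (const ch of pattern) {
    if (ch === "%") out += ".*";        // % matches any run of characters
    else if (ch === "_") out += ".";    // _ matches exactly one character
    else out += ch.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); // escape regex metachars
  }
  return new RegExp("^" + out + "$");   // anchored at both ends, case-sensitive
}

// Example: WHERE plant LIKE 'Plant%'
console.log(likeToRegex("Plant%").test("Plant01")); // true
```

Note the escaping step: a literal dot in the pattern stays a literal dot, so `100.5` does not match `100x5`.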

Three query shapes side by side

All three shapes below coexist at the same Provider = MongoDB.Driver. Pick the shape that fits the call site.

JSON directive.

Code Block
languagetext
{ "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 }

Aggregation pipeline.

Code Block
languagetext
[
  { "$match": { "plant": "Plant01" } },
  { "$sort": { "ts": -1 } },
  { "$limit": 10 }
]

SQL subset.

Code Block
languagesql
SELECT plant, value, ts
FROM readings
WHERE plant = 'Plant01'
ORDER BY ts DESC
LIMIT 10
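The SQL statement above translates to roughly the following operation (shown here in mongosh form for illustration; the connector issues the equivalent call through the .NET driver):

```text
db.readings.find(
  { "plant": "Plant01" },
  { "plant": 1, "value": 1, "ts": 1 }
).sort({ "ts": -1 }).limit(10)
```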

Out of scope in v1

The translator raises MongoSqlTranslationException for any construct outside the supported grammar. Out-of-scope constructs include:

  • GROUP BY and aggregate functions (COUNT, SUM, AVG, MIN, MAX).
  • JOIN in any form.
  • Subqueries.
  • DISTINCT.
  • OFFSET.
  • Arithmetic expressions inside SELECT or WHERE.
  • INSERT, UPDATE, DELETE statements (use Dataset Tables, or write an aggregation pipeline).

These constructs are parked for later releases. No ETA is promised. Use an aggregation pipeline or a Dataset Table when the SQL subset does not cover the need.


Configuration Example

This example shows how to read the latest production readings from a MongoDB collection, run an aggregation, and write a new record. The example requires objects in other modules of the platform.

In Datasets / Queries, create the queries below. For full details on query configuration, see Datasets Queries Reference.

  • QueryFindReadings
    • DB: the MongoDB connection created above.
    • Command Text:
      Code Block
      languagetext
      { "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 }
  • QueryAggregateHourly
    • DB: the MongoDB connection.
    • Default collection: readings.
    • Command Text (aggregation pipeline):
      Code Block
      languagetext
      [
        { "$match": { "plant": "Plant01" } },
        { "$group": { "_id": { "$dateTrunc": { "date": "$ts", "unit": "hour" } }, "avg": { "$avg": "$value" } } },
        { "$sort": { "_id": 1 } }
      ]
  • QueryCount
    • Command Text:
      Code Block
      languagetext
      { "count": "readings", "filter": { "quality": "good" } }

In Datasets / Tables, create one table bound to the readings collection for insert and update work. A Dataset Table configured on a MongoDB collection uses the document _id as the primary key. Updates apply $set on the tracked columns and preserve any fields outside the tracked column list, matching the behavior of SQL Dataset Tables backed by a CommandBuilder.
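The preserve-untracked-fields rule can be pictured with a small in-memory sketch (a hedged model of $set semantics, not the connector's code): an update merges the tracked columns into the stored document instead of replacing it.

```javascript
// Hedged in-memory model of the $set update behavior described above:
// tracked columns change, every other field on the document survives.
function applySet(doc, trackedChanges) {
  return { ...doc, ...trackedChanges }; // $set semantics: merge, never replace
}

const stored = { _id: 1, plant: "Plant01", value: 7, operatorNote: "checked" };
const updated = applySet(stored, { value: 9 });

console.log(updated.value);        // 9
console.log(updated.operatorNote); // "checked" - untracked field preserved
```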

In Unified Namespace / Tags, create the tags used in the example:

  • QueryResult: receives the output of the Find and Aggregate queries.
  • TriggerFind, TriggerAggregate, TriggerInsert: script task triggers.
  • LatestPlantCode, LatestValue: values written back to MongoDB.

In Scripts / Tasks, create the tasks below, each triggered by the matching tag:

  • Find task

    Code Block
    languagec#
    @Tag.QueryResult = @Dataset.Query.QueryFindReadings.SelectCommand();
    @Info.Trace("Find OK: " + @Tag.QueryResult);
  • Aggregate task

    Code Block
    languagec#
    @Tag.QueryResult = @Dataset.Query.QueryAggregateHourly.SelectCommand();
    @Info.Trace("Aggregate OK: " + @Tag.QueryResult);
  • Insert task (via Dataset Table)

    Code Block
    languagec#
    @Dataset.Table.TableReadings.AddRow();
    @Dataset.Table.TableReadings.Row["plant"] = @Tag.LatestPlantCode;
    @Dataset.Table.TableReadings.Row["value"] = @Tag.LatestValue;
    @Dataset.Table.TableReadings.Row["ts"] = DateTime.UtcNow;
    int i = @Dataset.Table.TableReadings.Save();
    @Info.Trace("Insert OK: " + i);

After you finish the configuration and create the scripts, run the solution and trigger each task. Values arrive in the MongoDB readings collection and the Find and Aggregate results populate the QueryResult tag.

Notes on writes

  • The Save call on a Dataset Table dispatches to MongoDB UpdateOne with $set on tracked columns. Fields outside the tracked column list are preserved on update.
  • Batched changes use BulkWrite with the same tracking rules.
  • Authentication supports SCRAM-SHA-256, TLS, replica sets, and mongodb+srv:// URIs.

Device Protocol

When to use

Use the Device Protocol surface when you want each MongoDB document field addressed as an individual Tag with a Node-level connection. This is the classic SCADA Channel / Node / Point model.

Use UNS TagProvider instead (next section) when you want dynamic browse and tree-style registration.

Configure

Channel. Devices / Channels / New. Protocol = MongoDB. Interface is automatically Custom (no CommAPI).

Node. One Node per MongoDB server plus database. PrimaryStation is a delimited string with the shape:

Code Block
languagetext
ConnectionUri;Database;AuthSource;TlsEnabled

Example value:

Code Block
languagetext
mongodb://localhost:27017;plant_a;admin;false

Point. Address is a single-dot collection.field string. Example: sensors.temperature.

Read semantics

On each scan cycle, the driver runs one find per configured address, sorts by _id descending, limits to 1, and projects the requested field. The value is returned as a string on every successful read. Quality is 192 (Good) on hit, 0 (Bad) when the collection is empty or the field is absent.

Values round-trip as strings in the Device path. Numeric or boolean tags show their value as text. Use the UNS TagProvider surface (next section) when your integration needs the value's native type preserved.
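The scan rule above can be modeled in a few lines (a hedged sketch; numeric ids stand in for MongoDB ObjectId ordering, and the collection is modeled as an array):

```javascript
// Hedged model of the read rule: newest document (highest _id), one field
// projected, value returned as a string, quality 192 on hit and 0 on miss.
function readLatest(collection, field) {
  if (collection.length === 0) return { value: null, quality: 0 };
  const latest = [...collection].sort((a, b) => b._id - a._id)[0];
  if (!(field in latest)) return { value: null, quality: 0 };
  return { value: String(latest[field]), quality: 192 }; // string round-trip
}

const sensors = [
  { _id: 1, temperature: 20.5 },
  { _id: 2, temperature: 21.75 },
];
console.log(readLatest(sensors, "temperature")); // { value: "21.75", quality: 192 }
```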

Write semantics - UPDATE, not append

WriteTagValue finds the most recent document in the collection and runs $set on that document's _id. If the collection is empty, a fresh document is inserted with the field plus a ts UTC timestamp.

This is config-style or settings-style write behavior. Every write mutates the latest document. It does NOT append a new time-series sample. Use a Historian StorageLocation (later section) for append-style time-series writes.
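The update-not-append rule can be modeled the same way (hedged sketch; numeric ids stand in for ObjectId ordering):

```javascript
// Hedged model of WriteTagValue: $set on the latest document, or insert a
// fresh document with a ts timestamp when the collection is empty.
function writeTagValue(collection, field, value) {
  if (collection.length === 0) {
    collection.push({ _id: 1, [field]: value, ts: new Date() }); // insert path
    return;
  }
  const latest = [...collection].sort((a, b) => b._id - a._id)[0];
  latest[field] = value; // $set path: document count is unchanged
}

const coll = [{ _id: 1, temperature: "20.5" }];
writeTagValue(coll, "temperature", "22.0");
console.log(coll.length);         // 1 - updated in place, not appended
console.log(coll[0].temperature); // "22.0"
```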

Address limits

Only single-dot addresses are accepted in 10.1.5. collection.field works. collection.field.subfield is rejected. Nested BSON path support is planned for a later release.


UNS TagProvider

When to use

Use the UNS TagProvider surface when you want to browse a MongoDB instance from the Designer UNS tree, discover field names dynamically, and register fields as UNS tags.

Configure

UNS / TagProviders / New. Protocol = MongoDB. Station uses the same delimited format as the Device Protocol:

Code Block
languagetext
ConnectionUri;Database;AuthSource;TlsEnabled

Browse semantics

The browse view has two levels:

  • Level 0 (tree root for this TagProvider). Lists all collections in the configured database.
  • Level 1 (inside a collection). Samples the most recent document with sort({_id:-1}).limit(1) and lists that document's top-level field names. The _id field is hidden from the browse view.

The browse is schema-less. If different documents in a collection carry different fields, the browse reflects only the sample document. Register the fields you need and define them explicitly on the UNS Tag when the schema varies.

Designer refresh does not mutate state. Browse is side-effect-free and safe to call on every tree expansion.
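The level-1 sampling rule can be modeled in a few lines (hedged sketch; numeric ids stand in for ObjectId ordering):

```javascript
// Hedged model of level-1 browse: sample the newest document and list its
// top-level field names, hiding _id, as described above.
function browseFields(collection) {
  if (collection.length === 0) return [];
  const sample = [...collection].sort((a, b) => b._id - a._id)[0];
  return Object.keys(sample).filter((k) => k !== "_id");
}

const readings = [{ _id: 2, plant: "Plant01", value: 3, ts: "2025-01-01" }];
console.log(browseFields(readings)); // [ 'plant', 'value', 'ts' ]
```

Because only one document is sampled, a field present on older documents but absent from the newest one does not appear in the browse, matching the schema-less caveat above.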

Register a tag

Select a field under a collection and use Add to UNS. The registered address is collection.field. The attribute type is string in 10.1.5 (all scalars). Typed registration is planned for a later release.

Read and write semantics

Identical to the Device Protocol section:

  • Read pulls the latest document field value and tags quality 192 on hit, 0 on miss.
  • Write runs $set on the latest document _id, or inserts a minimal document when the collection is empty.

Address limits

Single-dot collection.field only. The registration call rejects multi-dot addresses with the message Multi-segment paths not supported in v0.9, use 'collection.field' with a single dot.


Historian StorageLocation

When to use

Use this surface when you want FrameworX Historian to write tag samples into MongoDB as an append-style time series. This is the append behavior most users expect from a historian.

Prerequisites

A MongoDB UNS TagProvider row must already exist (previous section). The Historian StorageLocation references it by name.

Configure

Historian / Storage Locations / New. The Protocol combo includes MongoDB when the TagProvider row has IsHistorian="true" (the default for 10.1.5). The DataRepository field selects which MongoDB UnsTagProvider to use.

Storage model - Time Series Collections

Each FrameworX tag-collection maps to a MongoDB Time Series Collection (MongoDB 6.0+). The canonical document schema is:

Code Block
languagetext
{
  "t": "<UTC DateTime>",
  "m": { "tag": "<address>" },
  "v": "<value>",
  "q": "<quality ushort>"
}

The collection is created on first write if absent. Granularity is hard-coded to seconds in 10.1.5. This value is immutable on the MongoDB side after collection creation. Station-configurable granularity is planned for a later release.
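The collection the connector creates on first write is equivalent to running a mongosh command along these lines (the collection name is illustrative; t and m map to the time and meta fields of the schema above):

```text
db.createCollection("tag_samples", {
  timeseries: { timeField: "t", metaField: "m", granularity: "seconds" }
})
```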

Write semantics - BATCHED APPEND

WriteHistorianDataEx groups the incoming batch by collection name and calls InsertMany with IsOrdered=false. Every sample becomes a new document. Partial-batch failures are surfaced on a per-document basis. A bad document does not fail the whole batch.
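The batching rule can be sketched as follows (hedged model, not the connector's code): incoming samples are partitioned by collection name so that one InsertMany call is issued per collection.

```javascript
// Hedged model of the batching rule: group samples by collection name;
// each map entry then feeds one InsertMany (IsOrdered=false) call.
function groupByCollection(samples) {
  const groups = new Map();
  for (const s of samples) {
    if (!groups.has(s.collection)) groups.set(s.collection, []);
    groups.get(s.collection).push(s.doc);
  }
  return groups;
}

const batch = [
  { collection: "boiler", doc: { t: "2025-01-01T00:00:00Z", v: 1 } },
  { collection: "pump",   doc: { t: "2025-01-01T00:00:00Z", v: 2 } },
  { collection: "boiler", doc: { t: "2025-01-01T00:00:01Z", v: 3 } },
];
console.log(groupByCollection(batch).get("boiler").length); // 2
```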

Read semantics - raw samples only in 10.1.5

GetSamples does a range query ($gte and $lte on the t field) with ascending sort. The interval and getSamplesMode parameters are ignored in 10.1.5. Raw samples only.
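In mongosh terms, a GetSamples call for a one-day window corresponds to roughly the following (collection name illustrative):

```text
db.tag_samples.find({
  t: { $gte: ISODate("2025-01-01T00:00:00Z"), $lte: ISODate("2025-01-02T00:00:00Z") }
}).sort({ t: 1 })
```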

Aggregation (Average, Min, Max, Sum with non-zero interval) is planned for a later release using MongoDB $bucket and $bucketAuto stages.

Known edge case - same collection as both TagProvider and StorageLocation

If a user configures the same collection as BOTH a UNS TagProvider browse target AND a Historian StorageLocation target, the Historian write path succeeds silently into a plain (non-Time-Series) collection but reads behave incorrectly. A runtime guard is planned for a later release. Until it lands, use distinct collection names for the two roles.


Known limitations and deferrals

The table below collects per-surface limits that apply to the 10.1.5 release.

Area

Limit

Workaround

Dataset SQL subset

GROUP BY, aggregate functions, JOIN, subqueries, DISTINCT, OFFSET, arithmetic inside SELECT or WHERE, and INSERT / UPDATE / DELETE statements are not supported. Out-of-scope constructs raise MongoSqlTranslationException.

Use an aggregation pipeline or a Dataset Table for the operation.

Dataset SQL subset - collation

No collation is applied. Comparisons follow BSON default collation and identifiers are case-sensitive.

Normalize case on ingest, or use the aggregation pipeline with an explicit $regexMatch for case-insensitive matching.

Device and UNS address format

Single-dot collection.field only in 10.1.5.

Flatten nested documents on ingest, or wait for a later release.

Device and UNS values

Round-trip as strings.

Use typed tag bindings on the UNS side for numerical dashboards.

Historian granularity

Hard-coded to seconds.

Choose the most precise granularity at collection creation. Recreate the collection to change.

Historian aggregation

Raw samples only. Average, Min, Max, Sum with an interval are not supported.

Request raw samples and aggregate in FX Trend or Script.

Historian connection pool

One ConnectionInfo per call in GetSamples (no cache).

Scope time ranges tightly on dashboards.

UNS and Historian on the same collection

No runtime guard against mixing.

Use distinct collection names for the two roles.

