DRAFT v10.1.5. Pre-release draft for content review. Do not link from public material. The final page replaces this draft once 10.1.5 ships.
New in 10.1.5. MongoDB is a new Dataset provider for Find, Aggregate, Count, Insert, Update, and Delete operations on document databases.
MongoDB document databases
Use MongoDB with the Datasets module to read and write document data from a FrameworX solution. The provider exposes three query shapes through the standard Dataset Query object and binds full document collections through Dataset Table for CRUD work.
Steps to connect:
Install a MongoDB server (6.0 or newer, 7.0 LTS recommended), create a database and user, then move to the FrameworX configuration below.
Follow the official MongoDB installation guide for your platform at mongodb.com/docs/manual/installation. Summary by target:
- Windows: run the MSI installer, which registers mongod as a Windows service on port 27017.
- Linux: apt install mongodb-org on Debian or Ubuntu with the MongoDB repository configured. Start the service with systemctl start mongod.
- macOS: brew tap mongodb/brew && brew install mongodb-community, then brew services start mongodb-community.
- Docker: docker run -d --name fx-mongo -p 27017:27017 mongo:7.

Install the mongosh command-line shell alongside the server. Most platform installers include it by default.
Run the command below to confirm the server responds:
mongosh "mongodb://localhost:27017" --quiet --eval "db.adminCommand('ping').ok"
A successful ping returns 1.
Run these commands in mongosh to create a target database, a starter collection, and a user for FrameworX to authenticate as:
use plant_data
db.readings.insertOne({ plant: "Plant01", value: 0, ts: new Date() })
db.createUser({
user: "fxuser",
pwd: "<choose-a-strong-password>",
roles: [ { role: "readWrite", db: "plant_data" } ]
})
The user is stored against the plant_data database above. Use plant_data as the Auth Source value in the FrameworX connection string in the next section, or use admin when the user lives in the admin database.
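For reference, the setup above maps to a connection URI like the one below (fxuser, plant_data, and the default port come from the commands above; substitute your own password). If the password contains special characters, percent-encode it in the URI.

```
mongodb://fxuser:<choose-a-strong-password>@localhost:27017/plant_data?authSource=plant_data
```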
The MongoDB connector targets netstandard2.0 and runs unchanged on both .NET Framework 4.8 (Designer and Windows Runtime) and .NET 10 (cross-platform Runtime on Windows, Linux, and containers). The vendored MongoDB driver ships in both flavors with the FrameworX installation. The mongod server is OS-native and has no .NET dependency.
Follow the steps below to connect FrameworX to a MongoDB server.
Provide either the structured fields below or a full mongodb:// or mongodb+srv:// URI, which overrides the structured fields.
| Field | Required | Description | Example |
|---|---|---|---|
| Server | Yes | Host name or IP address of the MongoDB server. | localhost |
| Port | No | TCP port. Default is 27017. | 27017 |
| User | No | Database user. Leave empty for unauthenticated access. | appuser |
| Password | No | User password. Stored encrypted in the solution file. | ******** |
| Database | Yes | Target database name. | plant_data |
| Auth Source | No | Database that holds the user's credentials for SCRAM authentication. Default is admin. | admin |
| TLS | No | Set to true for TLS/SSL connections. | true |
| Connection URI | No | Full MongoDB URI. Overrides the structured fields. Use for replica sets and mongodb+srv:// connections. | mongodb+srv://user:pwd@cluster0.example.net/?retryWrites=true |
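As an illustration of the Connection URI field, a self-hosted three-member replica set might use a URI like the following (the host names rs-a, rs-b, rs-c and the set name rs0 are placeholders for your environment):

```
mongodb://fxuser:<password>@rs-a:27017,rs-b:27017,rs-c:27017/plant_data?replicaSet=rs0&authSource=plant_data
```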
MongoDB queries use BSON filter documents and aggregation pipelines rather than SQL statements. The provider routes the Command Text of a Dataset Query to the correct driver operation based on the input shape:
| Input shape | Operation | Description |
|---|---|---|
| Plain collection name, no braces or brackets | Find | Returns all documents in the collection, with columns auto-detected from the first document. |
| JSON object starting with { | Find or Count | Reads the collection, filter, sort, and limit keys; a count key routes the query to Count. |
| JSON array starting with [ | Aggregate | Treated as an aggregation pipeline. Requires a default collection set on the Dataset Query. |
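The routing rules above can be sketched in plain JavaScript. This is an illustrative model, not the provider's actual implementation, and the function name routeCommandText is hypothetical:

```javascript
// Hypothetical model of the shape-based routing described above.
function routeCommandText(text) {
  const t = text.trim();
  if (t.startsWith("[")) return "Aggregate"; // JSON array -> aggregation pipeline
  if (t.startsWith("{")) {
    // JSON object -> Find by default, Count when a "count" key is present
    const doc = JSON.parse(t);
    return "count" in doc ? "Count" : "Find";
  }
  return "Find"; // plain collection name -> return all documents
}

console.log(routeCommandText("readings"));                // Find
console.log(routeCommandText('{ "count": "readings" }')); // Count
console.log(routeCommandText('[ { "$match": {} } ]'));    // Aggregate
```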
This example shows how to read the latest production readings from a MongoDB collection, run an aggregation, and write a new record. The example uses objects from other modules of the platform, which the following steps create.
In Datasets / Queries, create the queries below. For full details on query configuration, see Datasets Queries Reference.
QueryFindReadings (Find):
{ "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 }
QueryAggregateHourly (Aggregate; set readings as the default collection):
[
{ "$match": { "plant": "Plant01" } },
{ "$group": { "_id": { "$dateTrunc": { "date": "$ts", "unit": "hour" } }, "avg": { "$avg": "$value" } } },
{ "$sort": { "_id": 1 } }
]
Count query:
{ "count": "readings", "filter": { "quality": "good" } }
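To make the aggregation pipeline above concrete, the following standalone JavaScript reproduces its logic on an in-memory sample: truncate each ts to the hour (like $dateTrunc), average value per hour (like $avg), and sort by hour. The real query runs server-side in MongoDB; this sketch only illustrates the math.

```javascript
// In-memory model of the hourly-average pipeline shown above.
function hourlyAverages(docs) {
  const buckets = new Map();
  for (const d of docs) {
    const hour = new Date(d.ts);
    hour.setUTCMinutes(0, 0, 0); // like $dateTrunc with unit: "hour"
    const key = hour.toISOString();
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(d.value);
  }
  return [...buckets.keys()].sort().map((k) => ({
    _id: k, // the truncated hour
    avg: buckets.get(k).reduce((a, b) => a + b, 0) / buckets.get(k).length,
  }));
}

const sample = [
  { ts: "2025-01-01T10:05:00Z", value: 10 },
  { ts: "2025-01-01T10:35:00Z", value: 20 },
  { ts: "2025-01-01T11:10:00Z", value: 30 },
];
console.log(hourlyAverages(sample));
// [ { _id: '2025-01-01T10:00:00.000Z', avg: 15 },
//   { _id: '2025-01-01T11:00:00.000Z', avg: 30 } ]
```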
In Datasets / Tables, create one table, TableReadings, bound to the readings collection for insert and update work. A Dataset Table configured on a MongoDB collection uses the document _id as the primary key. Updates apply $set on the tracked columns and preserve any fields outside the tracked column list, matching the behavior of SQL Dataset Tables backed by a CommandBuilder.
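The preserve-untracked-fields behavior can be modeled in a few lines of standalone JavaScript (applySet is a hypothetical helper, not a provider API): $set overwrites only the listed fields and leaves everything else in the stored document alone.

```javascript
// Model of an UpdateOne { $set: ... } applied to a stored document.
function applySet(storedDoc, setFields) {
  return { ...storedDoc, ...setFields }; // listed fields replaced, rest preserved
}

const stored = { _id: 1, plant: "Plant01", value: 7, operator: "jdoe" };
// The Dataset Table tracks only "value"; "operator" is outside the tracked list.
const updated = applySet(stored, { value: 9 });
console.log(updated); // { _id: 1, plant: 'Plant01', value: 9, operator: 'jdoe' }
```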
In Unified Namespace / Tags, create the tags used in the example: QueryResult (receives the Find and Aggregate results), LatestPlantCode, and LatestValue (source values for the insert task).
In Scripts / Tasks, create the tasks below, each triggered by the matching tag:
Find task
@Tag.QueryResult = @Dataset.Query.QueryFindReadings.SelectCommand();
@Info.Trace("Find OK: " + @Tag.QueryResult);
Aggregate task
@Tag.QueryResult = @Dataset.Query.QueryAggregateHourly.SelectCommand();
@Info.Trace("Aggregate OK: " + @Tag.QueryResult);
Insert task (via Dataset Table)
@Dataset.Table.TableReadings.AddRow();
@Dataset.Table.TableReadings.Row["plant"] = @Tag.LatestPlantCode;
@Dataset.Table.TableReadings.Row["value"] = @Tag.LatestValue;
@Dataset.Table.TableReadings.Row["ts"] = DateTime.UtcNow;
int i = @Dataset.Table.TableReadings.Save();
@Info.Trace("Insert OK: " + i);
After you finish the configuration and create the scripts, run the solution and trigger each task. Values arrive in the MongoDB readings collection and the Find and Aggregate results populate the QueryResult tag.
Notes:
- Updates issue UpdateOne with $set on the tracked columns. Fields outside the tracked column list are preserved on update.
- Saving multiple modified rows uses BulkWrite with the same tracking rules.
- The Connection URI field accepts mongodb+srv:// URIs for replica-set and cloud deployments.