
DRAFT v10.1.5. Pre-release draft for content review. Do not link from public material. The final page replaces this draft once 10.1.5 ships.

New in 10.1.5. MongoDB is a new Dataset provider for Find, Aggregate, Count, Insert, Update, and Delete operations on document databases.

Provider summary:

  • Name: MongoDB
  • Version: 1.0.0.0
  • Protocol: Proprietary (MongoDB Wire Protocol)
  • Interface: TCP/IP
  • Runtime: .NET Framework 4.8 (Windows) and .NET 10 (cross-platform)
  • Configuration:
    • Datasets / DBs
  • Minimum server version: MongoDB 6.0 (recommended 7.0 LTS)


Overview

Use MongoDB with the Datasets module to read and write document data from a FrameworX solution. The provider exposes three query shapes through the standard Dataset Query object and binds full document collections through Dataset Table for CRUD work.

Steps to connect:

  1. Create a Database Connection with the MongoDB provider.
  2. Configure the connection string and test the connection.
  3. Create Dataset Queries for Find, Aggregate, or Count.
  4. Create Dataset Tables to bind collections for insert, update, and delete.
  5. Call the queries and tables from scripts, displays, or reports.

Configuration

Follow the steps below to connect FrameworX to a MongoDB server.

  1. Access Datasets / DBs.
  2. Click the plus icon to create a new Database Connection.
  3. In the Name field, enter a name for the connection, for example MongoDB1.
  4. Choose MongoDB Data Provider as the Provider.
  5. Click OK.
  6. In the data grid, click the Connection String column of the newly created row.
  7. Configure the connection. The Designer dialog shows structured fields for Server, Port, User, Password, Database, Auth Source, and TLS. For replica sets and Atlas clusters, use the Advanced Connection URI field with a mongodb:// or mongodb+srv:// URI, which overrides the structured fields.
  8. Click Test to verify the connection with the MongoDB server.

Connection string options

  • Server (required): Host name or IP of the MongoDB server. Example: localhost
  • Port (optional): TCP port. Default is 27017. Example: 27017
  • User (optional): Database user. Leave empty for unauthenticated access. Example: appuser
  • Password (optional): User password. Stored encrypted in the solution file. Example: ********
  • Database (required): Target database name. Example: plant_data
  • Auth Source (optional): SCRAM authentication database. Default is admin. Example: admin
  • TLS (optional): Set to true for TLS/SSL connections. Example: true
  • Connection URI (optional): Full MongoDB URI. Overrides the structured fields. Use for replica sets and mongodb+srv:// Atlas URIs. Example: mongodb+srv://user:pwd@cluster0.example.net/?retryWrites=true
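For reference, the structured fields map onto a standard MongoDB connection URI. The sketch below is illustrative only — the exact field-to-URI mapping inside the provider is an assumption — but it shows how Server, Port, User, Password, Database, Auth Source, and TLS would combine into a mongodb:// URI:

```python
from urllib.parse import quote_plus

def build_mongo_uri(server, database, port=27017, user=None, password=None,
                    auth_source="admin", tls=False):
    """Illustrative sketch: combine structured connection fields into a
    mongodb:// URI. The provider's actual mapping may differ."""
    creds = ""
    if user:
        # Credentials must be percent-encoded per the MongoDB URI format.
        creds = quote_plus(user)
        if password:
            creds += ":" + quote_plus(password)
        creds += "@"
    options = [f"authSource={auth_source}"]
    if tls:
        options.append("tls=true")
    return f"mongodb://{creds}{server}:{port}/{database}?{'&'.join(options)}"

print(build_mongo_uri("localhost", "plant_data", user="appuser", password="s3cret!"))
# → mongodb://appuser:s3cret%21@localhost:27017/plant_data?authSource=admin
```

When the Advanced Connection URI field is filled in, none of this assembly applies: the URI is passed through as-is.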

Query shapes

MongoDB queries use BSON filter documents and aggregation pipelines rather than SQL statements. The provider routes the Command Text of a Dataset Query to the correct driver operation based on the input shape:

  • Plain collection name, no braces or brackets: runs Find. Returns all documents in the collection, with columns auto-detected from the first document.
  • JSON object starting with {: runs Find or Count. Reads the collection, filter, sort, and limit fields. If the object has a count key, the provider runs Count and returns a single-row table.
  • JSON array starting with [: runs Aggregate. Treated as an aggregation pipeline. Requires a default collection set on the Dataset Query.
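The routing rule can be sketched in a few lines. This is not the provider's code; it is a hypothetical illustration of how the shape of the Command Text selects the operation:

```python
import json

def classify_command(command_text):
    """Hypothetical sketch of the documented routing rule:
    plain name -> Find, JSON object -> Find/Count, JSON array -> Aggregate."""
    text = command_text.strip()
    if text.startswith("["):
        return "Aggregate"   # aggregation pipeline (needs a default collection)
    if text.startswith("{"):
        doc = json.loads(text)
        return "Count" if "count" in doc else "Find"
    return "Find"            # bare collection name: find all documents

print(classify_command("readings"))                                  # → Find
print(classify_command('{ "count": "readings" }'))                   # → Count
print(classify_command('[ { "$match": { "plant": "Plant01" } } ]'))  # → Aggregate
```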


Configuration Example

This example shows how to read the latest production readings from a MongoDB collection, run an aggregation, and write a new record. The example requires objects in other modules of the platform.

In Datasets / Queries, create the queries below. For full details on query configuration, see Datasets Queries Reference.

  • QueryFindReadings
    • DB: the MongoDB connection created above.
    • Command Text:
      { "collection": "readings", "filter": { "plant": "Plant01" }, "sort": { "ts": -1 }, "limit": 10 }
  • QueryAggregateHourly
    • DB: the MongoDB connection.
    • Default collection: readings.
    • Command Text (aggregation pipeline):
      [
        { "$match": { "plant": "Plant01" } },
        { "$group": { "_id": { "$dateTrunc": { "date": "$ts", "unit": "hour" } }, "avg": { "$avg": "$value" } } },
        { "$sort": { "_id": 1 } }
      ]
  • QueryCount
    • Command Text:
      { "count": "readings", "filter": { "quality": "good" } }
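To make the QueryAggregateHourly pipeline concrete, the pure-Python sketch below reproduces its semantics on invented sample data: $match filters by plant, $group with $dateTrunc buckets timestamps to the hour and averages value, and $sort orders the buckets. The sample documents are hypothetical, not real collection contents:

```python
from collections import defaultdict
from datetime import datetime

# Invented sample documents standing in for the "readings" collection.
readings = [
    {"plant": "Plant01", "ts": datetime(2025, 1, 1, 10, 15), "value": 10.0},
    {"plant": "Plant01", "ts": datetime(2025, 1, 1, 10, 45), "value": 20.0},
    {"plant": "Plant01", "ts": datetime(2025, 1, 1, 11, 5),  "value": 30.0},
    {"plant": "Plant02", "ts": datetime(2025, 1, 1, 10, 30), "value": 99.0},
]

# $match: { "plant": "Plant01" }
matched = [d for d in readings if d["plant"] == "Plant01"]

# $group with $dateTrunc(unit="hour"): bucket by hour, average "value".
buckets = defaultdict(list)
for d in matched:
    buckets[d["ts"].replace(minute=0, second=0, microsecond=0)].append(d["value"])

# $sort: { "_id": 1 }
result = [{"_id": hour, "avg": sum(vals) / len(vals)}
          for hour, vals in sorted(buckets.items())]
print(result)
# → two rows: the 10:00 bucket averages 15.0, the 11:00 bucket 30.0
```

The real query runs server-side in MongoDB; this sketch only shows what shape of rows to expect in the QueryResult tag.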

In Datasets / Tables, create one table bound to the readings collection for insert and update work. A Dataset Table configured on a MongoDB collection uses the document _id as the primary key. Updates apply $set on the tracked columns and preserve any fields outside the tracked column list, matching the behavior of SQL Dataset Tables backed by a CommandBuilder.
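The update behavior described above (apply $set only to tracked columns, keep everything else in the document) is equivalent to a shallow merge. A hypothetical illustration:

```python
def apply_set(document, set_fields):
    """Sketch of MongoDB's $set semantics: overwrite or add only the
    listed top-level fields, preserving every other field."""
    updated = dict(document)
    updated.update(set_fields)
    return updated

stored = {"_id": 1, "plant": "Plant01", "value": 10.0, "operator": "jsmith"}
# Only "value" is a tracked column in this hypothetical Dataset Table.
print(apply_set(stored, {"value": 12.5}))
# → {'_id': 1, 'plant': 'Plant01', 'value': 12.5, 'operator': 'jsmith'}
```

Note that the untracked "operator" field survives the update, which is the behavior the section above describes.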

In Unified Namespace / Tags, create the tags used in the example:

  • QueryResult: receives the output of the Find and Aggregate queries.
  • TriggerFind, TriggerAggregate, TriggerInsert: script task triggers.
  • LatestPlantCode, LatestValue: values written back to MongoDB.

In Scripts / Tasks, create the tasks below, each triggered by the matching tag:

  • Find task

    @Tag.QueryResult = @Dataset.Query.QueryFindReadings.SelectCommand();
    @Info.Trace("Find OK: " + @Tag.QueryResult);
  • Aggregate task

    @Tag.QueryResult = @Dataset.Query.QueryAggregateHourly.SelectCommand();
    @Info.Trace("Aggregate OK: " + @Tag.QueryResult);
  • Insert task (via Dataset Table)

    @Dataset.Table.TableReadings.AddRow();
    @Dataset.Table.TableReadings.Row["plant"] = @Tag.LatestPlantCode;
    @Dataset.Table.TableReadings.Row["value"] = @Tag.LatestValue;
    @Dataset.Table.TableReadings.Row["ts"] = DateTime.UtcNow;
    int i = @Dataset.Table.TableReadings.Save();
    @Info.Trace("Insert OK: " + i);

After you finish the configuration and create the scripts, run the solution and trigger each task. Values arrive in the MongoDB readings collection and the Find and Aggregate results populate the QueryResult tag.

Notes on writes

  • The Save call on a Dataset Table dispatches to MongoDB UpdateOne with $set on tracked columns. Fields outside the tracked column list are preserved on update.
  • Batched changes use BulkWrite with the same tracking rules.
  • Authentication supports SCRAM-SHA-256, TLS, replica sets, and mongodb+srv:// URIs.
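As an illustration of the batching rule, the sketch below builds a list of UpdateOne-shaped operations keyed by _id — the shape a bulk write submits in a single round trip. The pending-change records are hypothetical, not the provider's internal data structure:

```python
# Hypothetical batch of edited rows from a Dataset Table, keyed by _id.
pending_changes = [
    {"_id": 1, "tracked": {"value": 12.5}},
    {"_id": 2, "tracked": {"value": 7.0, "plant": "Plant02"}},
]

def to_bulk_operations(changes):
    """Build UpdateOne-shaped operations ($set on tracked columns only),
    mirroring what one BulkWrite batch would submit."""
    return [
        {"filter": {"_id": c["_id"]}, "update": {"$set": c["tracked"]}}
        for c in changes
    ]

ops = to_bulk_operations(pending_changes)
print(ops[0])
# → {'filter': {'_id': 1}, 'update': {'$set': {'value': 12.5}}}
```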
