Read, query, and write MongoDB documents from a Dataset.

This example shows how to read from, aggregate over, and write to a MongoDB collection using the Dataset Provider.

Prerequisites:

  • A running mongod 6.0+ on localhost:27017.
  • A database named plant_data with a readings collection. The MongoDB Database Connector reference page covers the installation walkthrough and seed data.


Summary

This example walks through reading documents, running an aggregation pipeline, and inserting new records against a MongoDB collection. The work is done entirely from FrameworX, using the Dataset Provider surface of the MongoDB connector.

For other surfaces of the same connector (Device Protocol, UNS TagProvider, Historian StorageLocation), see the MongoDB Database Connector reference page.


Technical Information

The example creates one Dataset DB connection, three Queries, one Table, three Script Tasks, and a small set of UNS Tags. Each piece is configured in the Designer.

Configure the Dataset DB

In Datasets / DBs, create a connection:

  • Name: plant_data
  • Provider: MongoDB.Driver
  • Server: localhost
  • Port: 27017
  • Database: plant_data

Configure the Queries

In Datasets / Queries, create three queries against the plant_data connection.

QueryFindReadings. Returns the latest 10 readings for Plant01.

{
  "collection": "readings",
  "filter": { "plant": "Plant01" },
  "sort": { "ts": -1 },
  "limit": 10
}
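Outside FrameworX, the effect of this query definition can be sketched in plain Python over in-memory documents. This is only an illustration of the filter/sort/limit semantics; the sample documents are invented.

```python
# Sketch of the Find query semantics over in-memory documents.
# Sample documents are invented for illustration.
readings = [
    {"plant": "Plant01", "ts": 3, "value": 10.0},
    {"plant": "Plant02", "ts": 2, "value": 20.0},
    {"plant": "Plant01", "ts": 1, "value": 30.0},
]

# filter: {"plant": "Plant01"}
matched = [d for d in readings if d["plant"] == "Plant01"]
# sort: {"ts": -1}  (descending timestamp, newest first)
matched.sort(key=lambda d: d["ts"], reverse=True)
# limit: 10
result = matched[:10]

print([d["ts"] for d in result])  # prints [3, 1]
```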

QueryAggregateHourly. Hourly average of values for Plant01, expressed as an aggregation pipeline.

[
  { "$match": { "plant": "Plant01" } },
  { "$group": { "_id": { "$dateTrunc": { "date": "$ts", "unit": "hour" } }, "avg": { "$avg": "$value" } } },
  { "$sort": { "_id": 1 } }
]
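The pipeline stages above can be mimicked step by step in Python: $match filters, $group with $dateTrunc buckets timestamps by hour and averages the values, and $sort orders the buckets. A minimal sketch, with invented sample data:

```python
from collections import defaultdict
from datetime import datetime

# Sketch of the aggregation pipeline semantics; sample data invented.
readings = [
    {"plant": "Plant01", "ts": datetime(2024, 1, 1, 10, 15), "value": 2.0},
    {"plant": "Plant01", "ts": datetime(2024, 1, 1, 10, 45), "value": 4.0},
    {"plant": "Plant01", "ts": datetime(2024, 1, 1, 11, 5), "value": 6.0},
    {"plant": "Plant02", "ts": datetime(2024, 1, 1, 10, 30), "value": 99.0},
]

# $match: keep Plant01 only
matched = [d for d in readings if d["plant"] == "Plant01"]

# $group with $dateTrunc(unit="hour"): bucket by hour, average the values
buckets = defaultdict(list)
for d in matched:
    hour = d["ts"].replace(minute=0, second=0, microsecond=0)
    buckets[hour].append(d["value"])

# $sort: ascending on the truncated hour (_id)
result = [{"_id": h, "avg": sum(v) / len(v)} for h, v in sorted(buckets.items())]
```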

QueryCount. Counts documents whose quality field is good.

{
  "count": "readings",
  "filter": { "quality": "good" }
}
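The count form simply returns the number of documents matching the filter, which can be sketched over in-memory documents (invented sample data):

```python
# Sketch of the count query semantics; sample documents invented.
readings = [
    {"value": 1.0, "quality": "good"},
    {"value": 2.0, "quality": "bad"},
    {"value": 3.0, "quality": "good"},
]

# count documents where quality == "good"
count = sum(1 for d in readings if d.get("quality") == "good")
print(count)  # prints 2
```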

Configure the Dataset Table

In Datasets / Tables, create one table named TableReadings bound to the readings collection. A Dataset Table on a MongoDB collection uses the document _id as the primary key. Updates apply $set on the tracked columns and preserve any fields outside the tracked column list, matching the behavior of SQL Dataset Tables backed by a CommandBuilder.
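The $set behavior described above can be pictured as a dictionary merge: only the tracked columns are overwritten, and fields outside the tracked column list survive the update. A minimal sketch, with illustrative field names:

```python
# Sketch of $set-style update semantics on a tracked-column list.
# Field names and values are illustrative.
stored = {"_id": 1, "plant": "Plant01", "value": 10.0, "operator_note": "keep me"}
tracked_columns = {"plant", "value"}

def apply_set(doc, changes):
    """Apply a $set-style update: overwrite the given fields, keep the rest."""
    updated = dict(doc)
    updated.update(changes)
    return updated

changes = {"value": 12.5}              # only tracked columns are written
assert set(changes) <= tracked_columns
updated = apply_set(stored, changes)
print(updated["operator_note"])        # untracked field preserved: "keep me"
```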

Configure the UNS Tags

In Unified Namespace / Tags, create the tags used by the tasks:

  • QueryResult. Receives the output of the Find and Aggregate queries.
  • TriggerFind, TriggerAggregate, TriggerInsert. Script task triggers.
  • LatestPlantCode, LatestValue. Values written back to MongoDB by the Insert task.

Configure the Script Tasks

In Scripts / Tasks, create three tasks, each triggered by the matching trigger tag.

Find task.

@Tag.QueryResult = @Dataset.Query.QueryFindReadings.SelectCommand();
@Info.Trace("Find OK: " + @Tag.QueryResult);

Aggregate task.

@Tag.QueryResult = @Dataset.Query.QueryAggregateHourly.SelectCommand();
@Info.Trace("Aggregate OK: " + @Tag.QueryResult);

Insert task (via the Dataset Table).

@Dataset.Table.TableReadings.AddRow();
@Dataset.Table.TableReadings.Row["plant"] = @Tag.LatestPlantCode;
@Dataset.Table.TableReadings.Row["value"] = @Tag.LatestValue;
@Dataset.Table.TableReadings.Row["ts"] = DateTime.UtcNow;
int i = @Dataset.Table.TableReadings.Save();
@Info.Trace("Insert OK: " + i);

Run the example

After completing the configuration and creating the scripts, run the solution and trigger each task. The Insert task writes new documents to the MongoDB readings collection, the Find and Aggregate tasks populate the QueryResult tag, and trace output appears in the runtime log.

SQL subset alternative

The same queries can also be authored as SQL. The Dataset host translates SQL into Mongo Find or Aggregate operations. For the Find query above, a SQL form is:

SELECT * FROM readings WHERE plant = 'Plant01' ORDER BY ts DESC LIMIT 10
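One way to picture the translation is as a correspondence between SQL clauses and the JSON Find parameters shown earlier. The sketch below hard-codes that correspondence for this one query; it is not a real SQL parser:

```python
# Hand-written mapping from the example SQL clauses to the JSON Find form.
sql = "SELECT * FROM readings WHERE plant = 'Plant01' ORDER BY ts DESC LIMIT 10"

translated = {
    "collection": "readings",        # FROM readings
    "filter": {"plant": "Plant01"},  # WHERE plant = 'Plant01'
    "sort": {"ts": -1},              # ORDER BY ts DESC
    "limit": 10,                     # LIMIT 10
}

# Matches the QueryFindReadings JSON definition above
expected = {
    "collection": "readings",
    "filter": {"plant": "Plant01"},
    "sort": {"ts": -1},
    "limit": 10,
}
assert translated == expected
```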

For the supported SQL subset and what falls back to the JSON form, see the SQL subset dialect section of the connector reference page.

