Overview
The Dataset Engine orchestrates the internal mechanics of database operations through a multi-layered architecture. This reference covers the deep technical aspects of how the engine processes requests internally.
The engine orchestrates:
- Synchronous and asynchronous database operation coordination
- Connection pooling, request queuing, and thread management
- Client/server domain tag mapping
- Query execution through TServer services
- Result set propagation mechanisms
- Multi-database coordination
Understanding the engine internals helps when:
- Optimizing database performance
- Managing concurrent operations
- Implementing client-specific data
- Troubleshooting connection issues
Architecture
Process Separation
The Dataset Module runs as TRunModule.exe (Dataset):
- Reads all dataset configurations
- Does NOT directly connect to databases
- Consumes TServer database services
- Manages tag mapping and updates
TServer provides:
- Actual database connections
- SQL execution services
- Connection pooling
- Transaction management
Connection Management
Single Thread per DB
- Each DB configuration creates ONE connection
- Single execution thread per database
- Sequential command execution
Parallel Execution
For concurrent operations to same database:
DB1 → Production Database (Thread 1)
DB2 → Production Database (Thread 2)
DB3 → Production Database (Thread 3)
Create multiple DB connections to same database for parallelism.
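As a hedged sketch of this fan-out (the table names are hypothetical, and each table is assumed to be bound at design time to DB1, DB2, and DB3 respectively), three statements can be dispatched in parallel using the property-trigger style shown later in this reference:

```csharp
// Hypothetical tables, each configured against a different DB
// connection (DB1, DB2, DB3) that all point to the same physical
// database. Assigning SelectCommand triggers asynchronous execution,
// so the three requests run on three independent threads instead of
// queuing on a single connection's thread.
@Dataset.Table.Orders.SelectCommand = "SELECT * FROM Orders";
@Dataset.Table.Stock.SelectCommand  = "SELECT * FROM Stock";
@Dataset.Table.Audit.SelectCommand  = "SELECT * FROM AuditLog";
```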
Execution Methods
Asynchronous Execution (Default)
Trigger: Property changes (Select, Insert, Update, Delete)
Flow:
- Property triggered (screen/script)
- Request propagated to server
- Dataset Module receives request
- TServer executes database operation
- Results returned to Dataset Module
- Tags mapped and updated
- Execution continues in parallel
Advantages:
- Non-blocking operation
- Better performance
- Prevents UI freezing
- Allows parallel operations
Use Cases:
- Display queries
- Background updates
- Report generation
- Real-time monitoring
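A minimal sketch of this pattern from a display (table name and SQL are placeholders):

```csharp
// Assigning the statement sends the request and returns immediately;
// the screen stays responsive while TServer runs the query, and the
// mapped tags update when the result set comes back.
@Dataset.Table.MyTable.SelectCommand = "SELECT * FROM Data WHERE Active = 1";
// Execution continues here in parallel with the database operation.
```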
Synchronous Execution
Trigger: Method calls (SelectCommand, ExecuteCommand)
Flow:
- Method called
- Execution PAUSES
- Dataset Module calls TServer
- Database operation completes
- Results returned
- Tags mapped
- Execution RESUMES
Advantages:
- Guaranteed completion
- Sequential logic
- Immediate results
- Transaction support
Risks:
- Can freeze UI if called from screen
- Blocks thread execution
- Performance bottlenecks
Use Cases:
- Script tasks
- Sequential operations
- Transaction requirements
- Data validation
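The synchronous flow can be sketched inside a script task (query and tag names are hypothetical):

```csharp
// Runs in a script task, never from a display: execution pauses at
// the call and resumes only after TServer returns the result set.
DataTable rows = @Dataset.Query.Query1.SelectCommand();
if (@Dataset.Query.Query1.Error == "")
{
    // Sequential logic can rely on the data being present here.
    @Tag.RowCount = rows.Rows.Count;
}
```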
Tag Domain Mapping
Execution Context
Tag mapping occurs in the original call domain:
Client-Initiated Calls:
- From displays or client scripts
- Mapping uses client domain
- Results visible to specific client
- Isolated from other clients
Server-Initiated Calls:
- From server scripts or tasks
- Mapping uses server domain
- Results visible project-wide
- Shared across all clients
Domain Selection Strategy
| Requirement | Domain | Example |
|---|---|---|
| User-specific data | Client | Personal preferences |
| Shared data | Server | Production values |
| Session data | Client | Login information |
| Global state | Server | System status |
Connection Pooling
- Reuse existing connections
- Minimize connection overhead
- Configure pool size appropriately
Query Optimization
```sql
-- Use parameters to prevent recompilation
SELECT * FROM Table WHERE ID = @id

-- Limit result sets
SELECT TOP 100 * FROM LargeTable

-- Use appropriate indexes
CREATE INDEX idx_timestamp ON Data(Timestamp)
```
Asynchronous Patterns
```csharp
// Good: asynchronous from a screen (non-blocking)
@Dataset.Table.MyTable.SelectCommand = "SELECT * FROM Data";

// Bad: synchronous from a screen (blocks the UI)
@Dataset.Table.MyTable.SelectCommandWithStatus();
```
Batch Operations
- Group multiple operations
- Use transactions for consistency
- Minimize round trips
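One way to sketch batching, assuming the target database accepts multiple statements per command (object and table names are hypothetical):

```csharp
// One round trip, wrapped in a transaction so the inserts commit
// or roll back together.
@Dataset.Query.Query1.SQLStatement =
    "BEGIN TRANSACTION;" +
    " INSERT INTO Batch (Item, Qty) VALUES ('A', 10);" +
    " INSERT INTO Batch (Item, Qty) VALUES ('B', 20);" +
    " COMMIT;";
@Dataset.Query.Query1.ExecuteCommand();
```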
Threading Model
Dataset Module Thread
- Main coordination thread
- Manages request queue
- Handles tag mapping
TServer Database Threads
- One per DB connection
- Sequential execution per thread
- Connection isolation
Client Request Threads
- Asynchronous request handling
- Non-blocking UI operations
- Parallel client support
Error Handling
Connection Failures
- Automatic retry logic
- Connection pooling recovery
- Error event propagation
Query Errors
- Exception capture in TServer
- Error property population
- Tag mapping skipped on error
Timeout Management
- Configurable command timeout
- Connection timeout settings
- Automatic cleanup
Best Practices Checklist
- Prefer asynchronous: Use properties over methods
- Avoid UI blocking: No synchronous calls from screens
- Use appropriate domains: Client for user data, Server for shared data
- Pool connections: Reuse database connections
- Optimize queries: Index, parameterize, limit
- Handle errors: Check status properties
- Monitor performance: Track execution times
Troubleshooting
Slow queries:
- Check execution plan
- Add appropriate indexes
- Reduce result set size
- Use asynchronous execution
Connection issues:
- Verify TServer running
- Check connection string
- Review firewall rules
- Monitor connection pool
Tag mapping problems:
- Verify domain selection
- Check tag existence
- Review mapping configuration
- Confirm execution context
Thread blocking:
- Avoid synchronous in UI
- Use Script Tasks
- Implement async patterns
- Monitor thread pool
Deeper engine knowledge is also valuable when:
- Debugging complex concurrency issues
- Optimizing for extreme performance scenarios
- Building custom extensions
- Troubleshooting edge cases
Overview
The Dataset Module connects your solution to databases through TServer services, managing data flow, security, and performance. This reference covers essential practices for production deployments, including concurrency management, security hardening, and performance optimization.
Architecture & Execution Model
How the Dataset Module Works
The Dataset Module operates as an intermediary layer between your solution and databases:
- TServer Integration: All database operations route through TServer services
- Execution Modes: Supports both synchronous (blocking) and asynchronous (non-blocking) operations
- Result Processing: Returns .NET DataTable objects for flexible data manipulation
- Distribution: Propagates data to displays, reports, scripts, and other modules
Synchronous vs Asynchronous Operations
Synchronous Execution:
- Blocks execution until database responds
- Returns DataTable directly to calling method
- Stores result in Local Contents property
- Best for: Small queries, immediate data needs
Asynchronous Execution:
- Non-blocking operation
- Updates Async Contents property when complete
- Enables row-by-row navigation
- Best for: Large datasets, UI responsiveness
Concurrency & Server Domain Management
Understanding Server Domain Attributes
All Dataset Module properties exist in the server domain, creating a shared resource environment:
Example Risk Scenario:
1. Client A sets: SQLStatement = "SELECT * FROM Orders WHERE Status='Open'"
2. Client B sets: SQLStatement = "SELECT * FROM Orders WHERE Status='Closed'"
3. Execute command runs with Client B's statement (last write wins)
Preventing Concurrency Conflicts
Strategy 1: Dedicated Query Objects
- Create separate query objects for different operations
- Assign unique queries to specific tasks or displays
- Avoid sharing query objects between concurrent processes
Strategy 2: Synchronization Patterns
- Use semaphores or locks in scripts
- Implement request queuing for shared resources
- Design state machines for complex operations
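Strategy 2 can be sketched with a .NET lock around the shared query object (the lock object and query names are hypothetical):

```csharp
// Shared lock object, declared somewhere both scripts can see it.
static readonly object queryLock = new object();

lock (queryLock)
{
    // Setting the statement and executing happen atomically with
    // respect to any other script taking the same lock, so the
    // "last write wins" race above cannot interleave here.
    @Dataset.Query.Query1.SQLStatement =
        "SELECT * FROM Orders WHERE Status='Open'";
    @Dataset.Query.Query1.SelectCommand();
}
```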
Strategy 3: Client-Side Processing
- Execute queries in scripts with local variables
- Process DataTables before assignment to tags
- Minimize server domain property modifications
Advanced Data Management Strategies
DataTable Usage Concurrency Patterns
The Dataset Module provides three primary patterns for DataTable management and concurrent access:
Pattern 1: Direct Script Processing
```csharp
DataTable result = @Dataset.Query.Query1.SelectCommand();
// Process data locally without server domain impact
foreach (DataRow row in result.Rows) {
    // Local processing
}
```
Pattern 2: Tag Distribution
```csharp
// Assign to a DataTable tag for module sharing
@Tag.MyDataTable = @Dataset.Query.Query1.SelectCommand();
// Now available to displays, reports, etc.
```
Pattern 3: Mapped Navigation
```csharp
// Configure mapping, then navigate rows
@Dataset.Query.Query1.Select();
@Dataset.Query.Query1.Next(); // Moves to next row
```
Memory & Traffic Optimization
Control Data Volume:
- Always use WHERE clauses to filter results
- Implement pagination for large datasets
- Use SELECT specific columns instead of SELECT *
- Consider indexed views for complex queries
Resource Planning:
- Small query (< 1,000 rows): minimal impact
- Medium query (1,000-10,000 rows): monitor memory usage
- Large query (> 10,000 rows): implement pagination
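Pagination can be sketched as follows (SQL Server OFFSET/FETCH syntax; the page size and tag names are hypothetical, and the concatenated values are locally computed integers, not user input):

```csharp
// Fetch one page at a time instead of the whole table.
int pageSize = 1000;
int offset = @Tag.PageIndex * pageSize;
@Dataset.Query.Query1.SQLStatement =
    "SELECT * FROM LargeTable ORDER BY ID" +
    " OFFSET " + offset + " ROWS" +
    " FETCH NEXT " + pageSize + " ROWS ONLY";
DataTable page = @Dataset.Query.Query1.SelectCommand();
```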
Security Implementation
SQL Injection Prevention
Never Do This:
```csharp
string query = "SELECT * FROM Users WHERE Name = '" + userInput + "'";
```
Always Do This:
```sql
execute GetUserData @userName={Tag.UserInput}, @userId={Tag.UserId}
```
The platform's parameterization:
- Treats all parameters as values, not code
- Prevents malicious SQL execution
- Maintains data type integrity
- Supports all major database platforms
Network Security Architecture
Gateway Configuration for Restricted Databases:
- Identify Restriction: Database accessible only from specific servers
- Install Gateway: Deploy platform with TWebServer on authorized machine
- Configure ServerIP: Point Dataset connections to gateway machine
- Result: Secure database access through controlled gateway
Benefits:
- Maintains network security policies
- Centralizes database connections
- Enables audit logging
- Supports DMZ architectures
Database-Specific Configuration
| Database | Syntax Example | Special Considerations |
|---|---|---|
| SQL Server | `SELECT TOP 10 * FROM Table` | Use TOP for limiting |
| SQLite | `SELECT * FROM Table LIMIT 10` | Use LIMIT clause |
| MySQL | ``SELECT * FROM `Table` LIMIT 10`` | Backticks for names |
| PostgreSQL | `SELECT * FROM "Table" LIMIT 10` | Case-sensitive names |
| Oracle | `SELECT * FROM Table WHERE ROWNUM <= 10` | ROWNUM for limiting |
Time Zone Management
Default Behavior:
- Platform stores all DateTime values as UTC
- Historian and Alarm data always in UTC
- Automatic conversion for display
Configuring External Databases:
DateTimeMode Settings:
- UTC: No conversion needed
- LocalTime: Platform converts automatically
- Custom: Handle in SQL statements
Local Time Queries:
```sql
-- Adjust for local time zone (example: EST, UTC-5)
WHERE Timestamp >= DATEADD(hour, -5, @Tag.StartTimeUTC)
```
Query Optimization Checklist
- Indexes: Ensure indexes on filtered and joined columns
- Statistics: Update database statistics regularly
- Query Plans: Review execution plans for bottlenecks
- Connection Pooling: Enable for frequent operations
- Batch Operations: Group multiple operations when possible
Key Metrics to Track:
- Query execution time
- Memory consumption
- Network latency
- Connection pool usage
- Cache hit rates
Diagnostic Properties:
```csharp
@Dataset.Query.Query1.Error          // Last error message
@Dataset.Query.Query1.ExecutionTime  // Query duration
@Dataset.Query.Query1.RowCount       // Result size
```
Error Handling & Recovery
Error Detection
The module provides multiple error detection methods:
Method 1: Property Monitoring
```csharp
@Dataset.Query.Query1.SelectCommand();
if (@Dataset.Query.Query1.Error != "") {
    // Handle error
}
```
Method 2: Status Methods
```csharp
string status;
DataTable result = @Dataset.Query.Query1.SelectCommandWithStatusAsync(out status);
if (status != "OK") {
    // Handle error
}
```
Common Error Scenarios
| Error Type | Typical Cause | Resolution |
|---|---|---|
| Connection Timeout | Network issues, server load | Increase timeout, check connectivity |
| Syntax Error | Database-specific SQL | Verify syntax for target database |
| Permission Denied | Insufficient privileges | Check database user permissions |
| Deadlock | Concurrent transactions | Implement retry logic |
| Out of Memory | Large result set | Add pagination, increase resources |
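The deadlock case calls for retry logic; a hedged sketch (the retry count, back-off, and error matching are simplified assumptions):

```csharp
// Retry the command a few times when the database reports a deadlock.
int attempts = 0;
while (true)
{
    @Dataset.Query.Query1.ExecuteCommand();
    string err = @Dataset.Query.Query1.Error;
    if (err == "" || attempts >= 3 || !err.ToLower().Contains("deadlock"))
        break;                                     // success, or give up
    attempts++;
    System.Threading.Thread.Sleep(250 * attempts); // simple back-off
}
```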
Backup & Recovery
SQLite Backup Strategies
Option 1: Command Line Backup
```bash
sqlite3 source.db ".backup backup.db"
```
- Simple and reliable
- Requires database file access
- Best for scheduled maintenance
Option 2: Online Backup API
- Backup while database is active
- Support for incremental backups
- Progress monitoring capability
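Option 2 can be sketched with the System.Data.SQLite wrapper over SQLite's online backup API (the file paths are placeholders, and this assumes the System.Data.SQLite provider is available):

```csharp
using System.Data.SQLite;

// Copies a live database page-by-page without taking it offline.
using (var source = new SQLiteConnection("Data Source=source.db"))
using (var dest = new SQLiteConnection("Data Source=backup.db"))
{
    source.Open();
    dest.Open();
    // -1 = copy all pages in one pass; no progress callback, no retry wait.
    source.BackupDatabase(dest, "main", "main", -1, null, 0);
}
```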
Option 3: File System Copy
- Only when database is offline
- Fastest for large databases
- Requires downtime
Backup Best Practices
- Schedule: Automate backups during low-activity periods
- Verify: Test restore procedures monthly
- Rotate: Maintain multiple backup generations
- Secure: Store backups in separate physical location
- Document: Maintain restore procedure documentation
Store & Forward Limitations
The Store & Forward feature has specific requirements:
- Applies Only To: Historian and Alarm databases
- Required Schema: Must include control columns
- Not Available For: Generic application databases
- Alternative: Implement custom buffering for generic databases
For Store & Forward configuration, see Historian Archiving Process documentation.
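The custom-buffering alternative for generic databases can be sketched as a replay queue (the queue, method name, and retry trigger are hypothetical):

```csharp
using System.Collections.Generic;

// Simple store-and-forward substitute: failed commands are kept in
// order and replayed on the next cycle instead of being lost.
static readonly Queue<string> pending = new Queue<string>();

void ExecuteBuffered(string sql)
{
    pending.Enqueue(sql);
    while (pending.Count > 0)
    {
        @Dataset.Query.Query1.SQLStatement = pending.Peek();
        @Dataset.Query.Query1.ExecuteCommand();
        if (@Dataset.Query.Query1.Error != "")
            break;          // still failing; retry on a later cycle
        pending.Dequeue();  // committed, drop from the buffer
    }
}
```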
Best Practices Summary
Production Deployment Considerations
Design Principles
- Isolation: Use dedicated query objects for different operations
- Filtering: Always limit result sets with WHERE clauses
- Security: Use parameterized queries exclusively
- Monitoring: Track performance metrics and errors
- Planning: Design for concurrent access from the start
Production Checklist
Before deploying to production:

- [ ] Parameterized all dynamic queries
- [ ] Implemented error handling for all operations
- [ ] Tested concurrent access scenarios
- [ ] Configured appropriate timeouts
- [ ] Established backup procedures
- [ ] Documented recovery processes
- [ ] Verified timezone handling
- [ ] Optimized query performance