Workflows
AdaTrack Flow is a powerful, low-code automation engine that allows you to create event-driven workflows for your IoT fleet. Using a visual, node-based interface, you can chain together triggers, logic gates, data transformations, and external actions to automate your operations.
Key Capabilities
Visual Node Editor: Build automation logic using a drag-and-drop canvas (powered by React Flow).
Schema Inference: Automatically detect and propagate data schemas between nodes to simplify variable mapping.
Event-Driven Triggers: Start workflows based on telemetry arrival, geofence breaches, or fixed schedules.
Integrated Intelligence: Use "Stats Query" nodes to incorporate historical data patterns into your real-time logic.
JavaScript Transformations: Use the built-in JavaScript engine (Goja) to manipulate data and create custom payloads.
Multi-Channel Actions: Beyond simple alerts, integrate with external APIs, trigger webhooks, or send Slack/Email notifications.
Detailed Execution Logs: Monitor every workflow run with success/failure tracking and error debugging.
Core Concepts
1. Schema Inference (Variable Mapping)
A key differentiator in AdaTrack Flow is Schema Inference. When you connect a trigger (like a Device Profile), the engine automatically "sniffs" the available telemetry fields.
Variable Picker: When configuring subsequent nodes (like a Webhook or Email), you can use a visual picker to select variables (e.g., payload.temperature, device.name) without manual typing.
Data Consistency: The engine ensures that the data structure is consistent across the entire flow.
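Conceptually, a picked variable is just a dotted path resolved against the trigger's event object. The sketch below illustrates that resolution; the event shape shown is an assumption for demonstration, not AdaTrack's exact wire format.

```javascript
// Resolve a dotted variable path (as produced by the Variable Picker)
// against a trigger event. Missing keys resolve to undefined rather
// than throwing, so partial payloads don't break the flow.
function resolvePath(event, path) {
  return path.split(".").reduce(
    (obj, key) => (obj == null ? undefined : obj[key]),
    event
  );
}

// Illustrative event shape (assumed for this example):
const event = {
  device: { name: "truck-42" },
  payload: { temperature: 6.2, speed: 54 },
};

resolvePath(event, "payload.temperature"); // 6.2
resolvePath(event, "device.name");         // "truck-42"
```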
2. Triggers (The "When")
Triggers are the starting point of every workflow.
Telemetry Received: Fires whenever a specific device (or profile) sends a valid packet.
Geofence Transition: Fires when a device enters or leaves a defined zone.
Schedule (Cron): Fires at fixed intervals (e.g., "Every Monday at 9:00 AM").
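For the Schedule trigger, "Every Monday at 9:00 AM" corresponds to the standard five-field cron expression `0 9 * * 1` (minute, hour, day-of-month, month, day-of-week). The minimal matcher below is a sketch of how such an expression maps to a point in time; it handles only literal values and `*`, not ranges or steps.

```javascript
// Minimal cron matcher sketch: supports "*" and single numeric values.
// Real cron implementations also support ranges (1-5), lists (1,3,5),
// and steps (*/15), which are omitted here.
function matchesCron(expr, date) {
  const [min, hour, dom, mon, dow] = expr.split(" ");
  const fields = [
    [min, date.getMinutes()],
    [hour, date.getHours()],
    [dom, date.getDate()],
    [mon, date.getMonth() + 1], // cron months are 1-12
    [dow, date.getDay()],       // 0 = Sunday, 1 = Monday
  ];
  return fields.every(([f, v]) => f === "*" || Number(f) === v);
}

// Monday, 2024-01-08 at 09:00 local time:
matchesCron("0 9 * * 1", new Date(2024, 0, 8, 9, 0)); // true
```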
3. Nodes (The "What")
Nodes are the building blocks that perform tasks:
Logic (IF/ELSE): Branch the workflow based on data values (e.g., if (temperature > 50)).
Stats Query: Fetch historical trends (e.g., "Average speed over the last hour") to use in your logic.
JS Transform: Write a small script to clean or reformat data before sending it to another node.
Notification Channels: Dedicated nodes for Slack, Email, and Telegram with rich formatting.
Reporting: Trigger the generation of a Custom Report as part of a workflow.
HTTP Request: Call an external REST API with dynamic data.
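A JS Transform node typically receives the upstream payload and returns a reshaped object. The script below is a sketch of that pattern; the `transform(input)` contract and the field names are assumptions for illustration, so check the node's configuration panel for the exact interface.

```javascript
// Sketch of a JS Transform node script: clean and reformat raw
// telemetry before passing it to a notification or HTTP node.
function transform(input) {
  const tempC = input.payload.temp_raw / 10; // assume raw decidegrees
  return {
    deviceName: input.device.name,
    temperatureC: Math.round(tempC * 10) / 10, // round to 1 decimal
    alert: tempC > 5,                          // cold-chain threshold
  };
}

transform({
  device: { name: "truck-42" },
  payload: { temp_raw: 62 },
});
// → { deviceName: "truck-42", temperatureC: 6.2, alert: true }
```

Keeping transforms small and pure (no side effects, just input to output) makes them easy to verify with the Test Run feature.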
4. Edges (The Flow)
Edges are the connections between nodes. They define the path data takes through your workflow.
Example Workflow: Cold Chain Monitoring
Imagine you are monitoring a fleet of refrigerated trucks:
Trigger: Telemetry Received (Profile: Fridge-Tracker).
Stats Query: Calculate the average temperature of the last 3 hours.
Condition: If avg_temp > 5 degrees Celsius.
Branch TRUE:
Action 1: Send a high-priority Slack alert to the logistics team.
Action 2: Trigger a Generate Report node to create a detailed audit log of the incident.
Action 3: Call an external Webhook to log the incident in your ERP system.
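The branching above can be sketched as plain logic. The field name `avgTemp` and the action labels are illustrative, not the engine's actual identifiers:

```javascript
// Cold-chain branch sketch: when the 3-hour average temperature
// exceeds 5 °C, the TRUE branch fans out to all three actions.
function evaluateColdChain(stats) {
  if (stats.avgTemp > 5) {
    return ["slack_alert", "generate_report", "erp_webhook"];
  }
  return []; // FALSE branch: nothing fires
}

evaluateColdChain({ avgTemp: 6.4 }); // all three actions fire
evaluateColdChain({ avgTemp: 3.1 }); // no actions fire
```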
Building a Workflow
Navigate to the Workflows page and click New Workflow.
Add a Trigger: Select your starting event.
Add Nodes: Drag nodes from the library onto the canvas. Use the search to find specific nodes like "Stats Query" or "Slack".
Configure Nodes: Click a node to open its settings. Use the Variable Picker to map fields from previous nodes.
Connect: Drag lines (edges) between nodes to define the logic flow.
Test: Use the Test Run feature with sample data to verify your logic before enabling it.
Enable: Toggle the "Active" switch to start processing live data.
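The steps above produce a workflow that is, conceptually, a graph of nodes plus edges mirroring the canvas. The structure below is a hypothetical sketch of such a definition (field names are illustrative, not AdaTrack's storage format); the invariant worth noting is that every edge must reference existing node IDs.

```javascript
// Hypothetical saved-workflow shape: a node list and an edge list.
const workflow = {
  name: "Cold Chain Monitoring",
  active: false, // flip to true only after a successful Test Run
  nodes: [
    { id: "t1", type: "trigger.telemetry", config: { profile: "Fridge-Tracker" } },
    { id: "s1", type: "stats.query", config: { metric: "temperature", window: "3h", agg: "avg" } },
    { id: "c1", type: "logic.if", config: { expression: "avg_temp > 5" } },
    { id: "a1", type: "notify.slack", config: { channel: "#logistics" } },
  ],
  edges: [
    { from: "t1", to: "s1" },
    { from: "s1", to: "c1" },
    { from: "c1", to: "a1", handle: "true" }, // TRUE branch of the IF node
  ],
};

// Sanity check: no edge may dangle.
const nodeIds = new Set(workflow.nodes.map((n) => n.id));
const valid = workflow.edges.every((e) => nodeIds.has(e.from) && nodeIds.has(e.to));
```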
Quotas & Limits
Workflows are resource-intensive. Depending on your subscription tier, limits may apply to:
The number of active workflows.
The maximum number of nodes per workflow.
Monthly execution limits (tracked in your Usage Quotas).
Best Practices
Modular Logic: Keep your workflows focused and simple. If a workflow becomes too complex, consider splitting it into smaller, modular flows.
Error Handling: Use "Error" output handles on critical nodes to handle failures gracefully (e.g., if an external API is down).
Monitor Execution Logs: Regularly check the History tab to ensure your workflows are running as expected and to identify any recurring errors.
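The "Error" output handle pattern can be sketched as follows: a failing node routes its failure to a dedicated branch instead of aborting the whole run. `executeNode` and its return shape are stand-ins for illustration, not the engine's internal API.

```javascript
// Sketch of graceful failure routing: run a node body (e.g. an HTTP
// Request) and select the "success" or "error" output handle based
// on whether it threw.
function executeNode(runAction, input) {
  try {
    return { handle: "success", output: runAction(input) };
  } catch (err) {
    return { handle: "error", output: { message: String(err.message || err) } };
  }
}

executeNode(() => { throw new Error("API down"); }, {});
// → { handle: "error", output: { message: "API down" } }
```

Wiring the "error" handle to a notification node turns silent failures into actionable alerts.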