Flow is a Unified Analytics Framework: a hub that lets you consume data from multiple sources, blend it, normalize it, perform calculations on it, and store the results within time and model context. We call this the data transformation process, and Flow is the platform for managing that transformation pipeline.
Flow is the one place, a single source of truth, where data is collected from multiple sources, blended, normalized, calculated, and stored within time and model context.
As data streams into Flow and is transformed by the pipeline, it immediately becomes available for presentation in reports and dashboards, and for publishing to other systems that need consolidated, calculated data.
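To make the transformation steps concrete, here is a minimal, purely illustrative sketch in Python. Flow itself is configured through its own tooling rather than hand-written code; the types and function names below are hypothetical and simply mirror the consume, blend, normalize, calculate, and store-in-context steps described above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Measure:
    source: str          # which Data Source the raw value came from
    tag: str             # raw tag or column name in that source
    value: float
    timestamp: datetime


def blend(batches: list[list[Measure]]) -> list[Measure]:
    """Merge raw values from several sources into one time-ordered stream."""
    merged = [m for batch in batches for m in batch]
    return sorted(merged, key=lambda m: m.timestamp)


def normalize(measures: list[Measure], unit_factor: float) -> list[Measure]:
    """Convert values to a common engineering unit."""
    return [Measure(m.source, m.tag, m.value * unit_factor, m.timestamp)
            for m in measures]


def calculate_total(measures: list[Measure]) -> dict:
    """A simple aggregation standing in for a Flow calculation, stored with
    time context (the period) and model context (which metric it is)."""
    return {
        "metric": "production_total",                          # model context
        "period_end": datetime.now(timezone.utc).isoformat(),  # time context
        "value": sum(m.value for m in measures),
    }
```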
When required, Flow Systems can be distributed across multiple servers, virtual or physical. Once the Flow Bootstrap is installed on a server, other components can be deployed to that server through it. In a sense, the Flow Bootstrap is the communication bus between the components that make up a Flow System. This distributed architecture allows large Flow Systems to scale.
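As a rough illustration of how a distributed Flow System might be laid out, the sketch below maps servers to the components deployed to them. The server names and component labels are invented for this example and are not Flow configuration syntax.

```python
# Hypothetical layout: each key is a server running the Flow Bootstrap, and the
# Bootstrap on that server receives and runs the components listed against it.
DISTRIBUTED_LAYOUT = {
    "flow-core-01": ["core calculation and reporting services"],
    "flow-data-01": ["Data Source: plant historian", "Message Service"],
    "flow-data-02": ["Data Source: ERP database", "Data Consumer: BI warehouse"],
}

for server, components in DISTRIBUTED_LAYOUT.items():
    # In a running system the Bootstrap on each server would host these
    # components and relay their traffic; here we only print the placement.
    print(f"{server}: {', '.join(components)}")
```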
Running the Flow Bootstrap on an edge device extends the Flow System beyond its network boundary. As long as an outbound connection can be made from the edge device to the Flow System, Data Sources, Data Consumers, and Message Services can be deployed as extensions to that system. For example, a Flow System running in private cloud infrastructure would use this edge architecture to access on-premises Data Sources, Data Consumers, and Message Services.
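The defining constraint of the edge pattern is that only an outbound connection from the edge device is needed. The sketch below illustrates that idea with Python's standard library; the URL, endpoint path, and payload shape are assumptions made for the example, not Flow's actual API.

```python
import json
import time
import urllib.request

# Assumed endpoint on the central Flow System; not a real Flow API.
CENTRAL_FLOW_URL = "https://flow.example.com/ingest"


def push_reading(tag: str, value: float) -> None:
    """Send one reading collected on the edge device. The connection is opened
    from the edge outward, so no inbound firewall rule is required on site."""
    body = json.dumps({"tag": tag, "value": value, "ts": time.time()}).encode()
    req = urllib.request.Request(
        CENTRAL_FLOW_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # the central system only answers; it never dials in


# Example: forward a value read from a local, on-premises Data Source.
push_reading("line1.throughput", 42.0)
```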
Flow Systems can be tiered together to transfer information from lower-level Flow Systems to higher-level Flow Systems. A typical use case is a Flow System at each production site feeding a single HQ Flow System. Selected KPIs from Sites A, B, and C automatically propagate up to HQ as soon as they become available (typically within a few minutes), presenting HQ-level analytics in near real-time!
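The sketch below illustrates only the selection-and-roll-up idea behind tiering: each site publishes a chosen subset of its KPIs upward, and the HQ tier sees just those values per site. The KPI names, values, and structures are invented for the example and do not reflect Flow's replication mechanism.

```python
# Illustrative data only: which KPIs exist and which ones propagate upward are
# choices made per deployment.
SITE_KPIS = {
    "Site A": {"oee": 0.87, "yield": 0.95},
    "Site B": {"oee": 0.79, "yield": 0.91},
    "Site C": {"oee": 0.83, "yield": 0.93},
}
PROPAGATE = {"oee"}  # only the selected KPIs move up to the HQ tier


def hq_view(site_kpis: dict, propagate: set) -> dict:
    """What the HQ Flow System receives: per-site values for the propagated
    KPIs only; everything else stays within the site-level system."""
    return {site: {k: v for k, v in kpis.items() if k in propagate}
            for site, kpis in site_kpis.items()}


print(hq_view(SITE_KPIS, PROPAGATE))
# {'Site A': {'oee': 0.87}, 'Site B': {'oee': 0.79}, 'Site C': {'oee': 0.83}}
```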
Flow is a modular system that can be deployed in any of the architectures described below, or in a combination of them.