Flow simplifies the management, execution, and distribution of information

Flow Software's Unified Analytics Framework is designed to standardize how raw data is transformed into actionable information and distributed across an organization.

SEE A DEMO

Analytics have failed to scale, and we know why... they are still built on Industry 3.0 architectures!

Analytics in manufacturing fails to scale due to fragmented data silos, lack of context, and brittle integrations. So we built a system designed to handle the complexity and volume of manufacturing data.

Our approach to the Unified Analytics Framework centralizes integration, contextualizes data, and ensures robust, scalable solutions. By addressing these pain points, Flow Software transforms raw data into actionable insights, driving operational excellence and continuous improvement. We provide the comprehensive information management needed to overcome the limitations of traditional analytics systems.
What is a UAF?

Toss the Industry 3.0 Playbook, Build a Scalable Data Foundation

Data projects that adhere to an Industry 3.0 playbook result in fragile, unscalable data integrations burdened with custom scripting and excessive coding. Time and again, we find that these projects are constructed on top of platforms like BI tools, reporting solutions, historian products, or even worse, in Excel.

This widespread dispersion of effort leads to a lack of data governance, making any form of template-driven scalability unattainable.

Flow stands out as a specialized Information Management solution, built for the manufacturing sector on the Unified Analytics Framework architecture. It is designed to gather, consolidate, and standardize data from operational and enterprise data silos (regardless of platform or technology), and to orchestrate a sophisticated data transformation process.

"Flow provides a robust, centralized infrastructure that's easy to govern and ready to scale."

SEE A DEMO

Here's how it works

Flow's solution consists of pre-configured, purpose-built tools that let you quickly scale your analytics and build a Unified Analytics Framework by following a five-step process:

  • Create an information model
  • Connect to your data sources
  • Transform and contextualize the raw data
  • Publish to other applications
  • Share information visually

Create an information model

Siloed data and numerous naming conventions create significant challenges in manufacturing, leading to inconsistencies and inefficiencies. Without a unified approach, different functional namespaces and disparate data sources make it nearly impossible to gain a holistic, accurate view of operations.

Decoupled

In most cases, we have a number of underlying data sources (e.g. Historians, SQL Databases, etc.). We access this data using tagnames or queries, but we can provide a more meaningful and standardized name for a "piece of information". Let's call this "piece of information" a Measure.
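
To make the idea concrete, here is a minimal sketch of that decoupling in plain Python. The classes, names and tag references below are our own illustration, not Flow's API: a Measure simply pairs a standardized, human-friendly name with whatever retrieval detail the underlying source needs.

    # Illustrative only -- these classes are not Flow's API, just the decoupling idea.
    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str          # the standardized "piece of information" people talk about
        source: str        # which underlying system holds the raw data
        reference: str     # tag name, SQL query, topic, etc. -- hidden from consumers

    measures = [
        Measure("Line 1 Filler Volume", "historian", "FL001-123-FQ001.PV"),
        Measure("Daily Energy Cost", "sql", "SELECT SUM(cost) FROM energy_costs"),
    ]

    # Consumers ask for information by name; the tag or query stays behind the scenes.
    by_name = {m.name: m for m in measures}
    print(by_name["Line 1 Filler Volume"].source)   # -> historian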

Abstracted

The Operator, Team Leader or Manager accessing the Flow Model doesn't need to know which tag or SQL query was used to create that Measure, that "piece of information" that they use to make key decisions. In fact, they don't want to know, nor do they care! They just want their information!

Templatized

The Flow Model can be standardized across multiple sites or production facilities. The source of a Measure will differ across sites, but the name will be consistent. On Site A, the measure represents tag "FL001-123-FQ001.PV" (see why the managers don't care!) and on Site B, the measure represents a manually input value. But both measures are named "Line 1 Filler Volume", and that is what everyone will know it as, everywhere they go. Flow Templates allow for this model standardization.
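
A rough sketch of that standardization, again in plain Python rather than Flow's actual template format (the second measure name and the per-site bindings are invented for illustration):

    # Illustrative sketch of model templating -- structure and names are assumptions.
    template = ["Line 1 Filler Volume", "Line 1 Good Units"]   # measures every site must expose

    site_bindings = {
        "Site A": {"Line 1 Filler Volume": ("historian", "FL001-123-FQ001.PV"),
                   "Line 1 Good Units":    ("historian", "FL001-123-CT001.PV")},
        "Site B": {"Line 1 Filler Volume": ("manual form", "daily-filler-entry"),
                   "Line 1 Good Units":    ("sql", "SELECT units FROM daily_counts")},
    }

    # Everyone, everywhere, asks for the same measure name; only the binding differs.
    for site, bindings in site_bindings.items():
        source, ref = bindings["Line 1 Filler Volume"]
        print(f"{site}: 'Line 1 Filler Volume' comes from {source} ({ref})")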

Structured but flexible

The Flow Model is hierarchical and generic by design. We can build our model using ISA95, ISA88, PackML, custom asset, twin thing, entity meta-model, or any combination of these. (We're not sure twin thing is really a thing, but you get the idea). The Flow Model represents physical assets, performance indicators, and logical entities. You can structure this model by area, department, or both. The point is - it is flexible. And, despite its hierarchical nature, the Flow Model allows for object "linking" across the structure.
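
Sketching the idea with an ISA-95-flavoured hierarchy (the structure, paths and linked KPI below are our own example, not Flow's schema):

    # Illustrative hierarchy sketch -- levels loosely follow ISA-95; not Flow's schema.
    model = {
        "Enterprise": {
            "Site A": {"Packaging Area": {"Line 1": ["Line 1 Filler Volume", "Line 1 Availability"]}},
            "Site B": {"Packaging Area": {"Line 1": ["Line 1 Filler Volume", "Line 1 Availability"]}},
        },
    }

    # "Linking" across the structure: a KPI node can reference measures elsewhere in the tree.
    links = {
        "Fleet Filler Volume": [
            "Enterprise/Site A/Packaging Area/Line 1/Line 1 Filler Volume",
            "Enterprise/Site B/Packaging Area/Line 1/Line 1 Filler Volume",
        ],
    }
    print(links)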

Unified but secure

In many ways, the Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces, whether they are Historian namespaces, SQL namespaces or even MQTT namespaces - Flow brings them all together into one persisted model. Together with a configurable security construct, this Unified Information Model presents the foundation for building value-added IT apps.

"The Information Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces."

SEE A DEMO

Connect to your data sources

As we build out a Flow Model, we start filling it with information. We do this automatically, using data from existing sources, or manually, through Flow Forms.

Data Sources

Flow connects to and ingests data from multiple sources, meaning we can leverage the investments you have already made (see the sketch after this list):

  • Industrial Historians - Canary Historian, AVEVA PI Historian (formerly OSIsoft PI), Ignition Historian, GE Historian, AVEVA Historian (formerly Wonderware Historian), other OPC HDA-based historians, etc.
  • IoT and Cloud Platforms - REST APIs, Metering Solutions, Weather Platforms, Power Distribution APIs, etc.
  • SQL Databases - Microsoft SQL, MySQL, Oracle, PostgreSQL, etc.
  • NoSQL Databases - InfluxDB, etc.
  • Realtime Systems - MQTT, OPC UA, Telegraf, etc.
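
As a rough picture of what "leveraging existing investments" looks like, each source only needs a connection definition plus a source-specific reference per measure. The connection types, names and references below are generic examples, not Flow configuration:

    # Generic example of declaring connections to existing sources -- not Flow's config format.
    connections = {
        "plant-historian": {"type": "opc-hda", "host": "hist01.plant.local"},
        "mes-database":    {"type": "sql",     "dsn": "Server=mes01;Database=mes"},
        "energy-meters":   {"type": "rest",    "base_url": "https://meters.example.com/api"},
        "line-plc-data":   {"type": "mqtt",    "broker": "mqtt://broker.plant.local:1883"},
    }

    # A measure then points at one of these connections plus a source-specific reference.
    measure_sources = {
        "Line 1 Filler Volume": ("plant-historian", "FL001-123-FQ001.PV"),
        "Line 1 Batch Count":   ("mes-database", "SELECT COUNT(*) FROM batches WHERE line = 1"),
    }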

Scalability Matters

Data contained in the connected data sources is never replicated. Rather, it is referenced when required to perform aggregations and calculations, and Flow stores only the results of this retrieval process, in the context of time and model. By storing only the resulting information, Flow guarantees fast, efficient access via charts and dashboards whenever it is needed. More importantly, this efficient information storage allows Flow Systems to scale enormously, without losing the ability to drill into the underlying data source when necessary!
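
The storage principle can be sketched like this; query_historian below is a hypothetical stand-in for whatever the underlying source exposes, and only the shape of the stored result matters:

    # Sketch of "reference, don't replicate": query raw data on demand, persist only the result.
    from datetime import datetime, timedelta

    def query_historian(tag: str, start: datetime, end: datetime) -> list[float]:
        """Stand-in for a call into the source system; raw samples are never copied."""
        return [101.2, 99.8, 100.4]   # pretend these are raw samples for the period

    def hourly_total(tag: str, hour_start: datetime) -> dict:
        raw = query_historian(tag, hour_start, hour_start + timedelta(hours=1))
        # Only this small, contextualized result is stored -- time slice + model path + value.
        return {"measure": "Line 1 Filler Volume",
                "period": ("hour", hour_start.isoformat()),
                "value": sum(raw)}

    print(hourly_total("FL001-123-FQ001.PV", datetime(2024, 1, 1, 8)))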

Data Entry

There will always be data that cannot be captured automatically, whether it's data read from an instrument indicator, or external data coming from email or paper-based systems. Flow handles manually captured data elegantly through the use of Flow Forms. Flow Forms are easily configured and served via a web browser to data capturers in a familiar and intuitive spreadsheet-like interface. No more spreadsheet spaghetti! The best part is that as soon as someone captures data in a Flow Form, any calculations or transforms in the downstream pipeline that depend on that entry are automatically processed and available for additional analytics.
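
A toy sketch of that "capture once, recalculate downstream" behaviour (the measure names, values and dependency wiring here are our own illustration):

    # Toy dependency example: a manually captured value feeds a calculated measure downstream.
    captured = {}                      # values entered via a form
    calculated = {}                    # results of dependent calculations

    def capture(measure: str, value: float) -> None:
        captured[measure] = value
        recalculate()                  # downstream transforms run as soon as data lands

    def recalculate() -> None:
        if "Line 1 Filler Volume" in captured and "Line 1 Target Volume" in captured:
            calculated["Line 1 Volume Attainment %"] = (
                100 * captured["Line 1 Filler Volume"] / captured["Line 1 Target Volume"])

    capture("Line 1 Target Volume", 12000)
    capture("Line 1 Filler Volume", 11640)
    print(calculated)   # {'Line 1 Volume Attainment %': 97.0}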

"Flow helps me leverage the investments we've already made in our data infrastructure."

SEE A DEMO

Transform and contextualize the raw data

For us, the transformation pipeline is the most exciting part. This is where Flow really shines.

Context

Out of the box, and at its foundation, Flow enforces two critical pieces of context with which measure information is enriched: time and model. Every data point streaming into Flow, whether used for event framing or calculated into a measure's value, is contextualized by time and model to become part of the information that ultimately serves our decision-making processes.

Time is the base that runs through all Flow Systems, a thread against which all information is stored. However, to present and publish this information as analytics-ready, Flow normalizes time into slices or periods:

Calendar-based periods include minutes, hours, shifts, days, weeks, months, quarters and years. All these periods are required to make meaningful comparisons to derive insight from your information. For example, how is the current shift running? How does our process this year compare to the same time last year? This information is at your fingertips.

Event-framed periods are derived from triggers in the underlying data. Flow monitors for start and stop triggers to generate periods against which you can attribute additional context dynamically. For example, Flow will monitor the necessary tags, or combination of tags, to record when a machine stops and starts up again. Additional information, like the reason for the stop, will be attributed to that event period, providing invaluable insight over time as to how often, how long, and why the machine stops.
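
Event framing can be pictured in a few lines: open a period on the stop trigger, close it on the start trigger, then attribute context such as the stop reason to the period. The signal and reasons below are invented for illustration:

    # Illustrative event framing: derive stop periods from a run/stop signal, then attach context.
    samples = [  # (timestamp, machine_running) -- invented data
        ("08:00", True), ("08:15", False), ("08:40", True), ("09:05", False), ("09:20", True),
    ]

    events, open_start = [], None
    for ts, running in samples:
        if not running and open_start is None:
            open_start = ts                      # stop trigger: open an event period
        elif running and open_start is not None:
            events.append({"start": open_start, "end": ts, "reason": None})
            open_start = None                    # start trigger: close the period

    # Additional context (e.g. the stop reason) is attributed to the period afterwards.
    events[0]["reason"] = "Label roll change"
    events[1]["reason"] = "Filler jam"
    print(events)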

Calculation Services

As data streams into Flow, it is cleaned, contextualized, and transformed by a set of calculation services that include:

  • Primary aggregations and filters
  • Cumulative and secondary aggregations
  • Moving window calculations
  • Expression-based calculations
  • Evaluations against limits or targets
  • Secondary aggregations on event periods

User-defined functions are used to encapsulate complex algorithms and standardize and lock down calculations throughout the Flow Model.
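
For example, an expression-based calculation combined with an evaluation against a target might look like this in spirit (the function and numbers below are our own illustration, not Flow's user-defined function syntax):

    # Illustrative expression-based calculation plus a limit evaluation -- not Flow syntax.
    def oee_availability(run_minutes: float, planned_minutes: float) -> float:
        """Example of logic you might lock down as a user-defined function."""
        return 100 * run_minutes / planned_minutes if planned_minutes else 0.0

    value = oee_availability(run_minutes=412, planned_minutes=480)
    status = "within target" if value >= 85 else "below target"   # evaluation against a limit
    print(f"Line 1 Availability: {value:.1f}% ({status})")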

Power of Multiple

The Flow transformation pipeline applies these contextualization and calculation processes to multiple data streams simultaneously, removing the silos between them as they blend in near real-time. The pipeline lets us build calculated measures that take inputs from more than one data source, or trigger event periods from one data source while attributing their context from other data sources, whether those sources are time-series or transactional in nature. The possibilities are limitless!
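
One way to picture blending is two silos meeting on shared time-and-model context, for example a historian total joined to a transactional order record for the same line and shift (all names and values below are invented):

    # Toy blend: a time-series aggregate and a transactional record meet on shared context.
    historian_totals = {("Line 1", "Shift A"): 11640.0}          # litres filled, from a historian
    erp_orders = {("Line 1", "Shift A"): {"order": "WO-1042", "planned_litres": 12000}}

    key = ("Line 1", "Shift A")
    blended = {
        "order": erp_orders[key]["order"],
        "actual_litres": historian_totals[key],
        "attainment_%": 100 * historian_totals[key] / erp_orders[key]["planned_litres"],
    }
    print(blended)   # one record, two silos, shared time-and-model context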

"Unifying data silos by blending data from multiple platforms in near real-time."

SEE A DEMO

Publish to other applications

Flow is anything but a "black box". It contains your information and is open for you to easily access it via industry-standard protocols. Flow is your bridge from OT and IoT data streams to analytics-ready information.

API

Flow exposes an industry-standard REST API for model discovery and information access that can be used to build third-party apps or to integrate with existing applications.
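
The exact routes belong to your Flow deployment's API documentation; the consumption pattern, though, is ordinary REST from any HTTP client. The host, paths and parameters below are placeholders, not Flow's documented endpoints:

    # Generic REST consumption pattern -- the host and paths below are placeholders,
    # not Flow's documented endpoints; consult the Flow API documentation for the real routes.
    import requests

    BASE = "https://flow.example.com/api"                  # placeholder server
    session = requests.Session()
    session.headers["Authorization"] = "Bearer <token>"    # placeholder credential

    # Discover the model, then read a measure's values for a period (illustrative paths).
    model = session.get(f"{BASE}/model", timeout=10).json()
    values = session.get(f"{BASE}/measures/line-1-filler-volume/values",
                         params={"period": "day", "from": "2024-01-01", "to": "2024-01-31"},
                         timeout=10).json()
    print(len(values))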

Publish

Flow provides integration components to automatically publish information out to your other systems via industry-standard protocols in near real-time. How about pushing maintenance information like running hours or stroke counts up to your Asset Management system? Or actual production figures up to your ERP system? What about sending information to your Machine Learning platform in the cloud? Or even just back to your SCADA for operator visibility of KPIs calculated from multiple data sources? Flow currently integrates with the systems below (a small publishing sketch follows the list):

  • Industrial Historians - Canary Historian
  • SQL Databases - Microsoft SQL, MySQL, Oracle, PostgreSQL, etc.
  • Realtime Systems - MQTT (including SparkplugB)
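
For the MQTT path, here is what publishing a calculated KPI to a broker looks like from any client, sketched with the paho-mqtt helper. The broker, topic and JSON payload shape are our own convention, not what Flow emits, and a Sparkplug B payload would instead be encoded per that specification:

    # Minimal MQTT publish of a calculated KPI using the paho-mqtt helper -- the broker,
    # topic, and payload shape are our own convention, not Flow's output format.
    import json
    import paho.mqtt.publish as publish

    payload = json.dumps({"measure": "Line 1 Availability",
                          "period": "2024-01-01 Shift A",
                          "value": 85.8,
                          "units": "%"})

    publish.single(topic="site-a/line-1/kpi/availability",
                   payload=payload,
                   hostname="broker.plant.local",   # placeholder broker
                   port=1883, qos=1, retain=True)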

Flow Tiering

Flow Systems can publish information to other Flow Systems! Why would this be useful? Imagine you are a multi-site organization, possibly spanning the globe, and each site's Flow System publishes its information up to your HQ Flow System. The HQ Flow System would provide invaluable fleet-wide information for site comparisons, benchmarking, and logistics planning. How about cost or efficiency comparisons between types of equipment? The possibilities are limitless.

"Flow is your bridge from OT data systems to enterprise applications."

SEE A DEMO

Share information visually

Ultimately, Flow provides value in the form of decision-support, insight and action by presenting the "single source of truth" in a way that is seen and understood.

Dashboarding

Flow reports, charts and dashboards are easily configured and served via a web browser to operators, team leaders and managers. Chart configuration employs built-in visualization best practices, maximizing the transfer of information to the human visual cortex:

  • Big screens in production areas or hand-over rooms
  • Interactive team meetings, in-person or remote
  • Individual consumption via laptops or devices

Reports and charts enable comment entry to add human context to our information.

Messaging

Sometimes it is more convenient for the information to find us rather than for us to find the information. Flow automatically compiles and distributes information and PDF exports as and when required. Distribution is secure and handled via mechanisms such as:

  • Email
  • Slack
  • Microsoft Teams
  • Telegram
  • SMS

How does this all fit together?