"Flow provides a robust, centralized infrastructure that's easy to govern and ready to scale."
Flow's solution consists of pre-configured, ready-made tools that let you quickly scale your analytics and build a Unified Analytics Framework following a five-step process.
Siloed data and numerous naming conventions create significant challenges in manufacturing, leading to inconsistencies and inefficiencies. Without a unified approach, different functional namespaces and disparate data sources make it nearly impossible to gain a holistic, accurate view of operations.
In most cases, we have a number of underlying data sources (e.g. Historians, SQL Databases). We access this data using tagnames or queries, but we can provide a more meaningful and standardized name for a "piece of information". Let's call this "piece of information" a Measure.
The Operator, Team Leader or Manager accessing the Flow Model doesn't need to know which tag or SQL query was used to create that Measure, that "piece of information" that they use to make key decisions. In fact, they don't want to know, nor do they care! They just want their information!
The Flow Model can be standardized across multiple sites or production facilities. The source of a Measure will differ across sites, but the name will be consistent. On Site A, the measure represents tag "FL001-123-FQ001.PV" (see why the managers don't care!) and on Site B, the measure represents a manually input value. But both measures are named "Line 1 Filler Volume", and that is what everyone will know it as, everywhere they go. Flow Templates allow for this model standardization.
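To make the idea concrete, here is a rough sketch in Python, with hypothetical names rather than Flow's actual configuration, of how a single standardized measure name can be bound to a different underlying source on each site:

```python
# A minimal sketch (hypothetical names) of one measure name mapping to different
# underlying sources per site while staying consistent for the people who use it.
from dataclasses import dataclass

@dataclass
class MeasureBinding:
    site: str
    source_type: str   # e.g. "historian_tag", "sql_query", "manual_form"
    source_ref: str    # the tag name, query, or form field behind the measure

# The standardized measure name everyone knows, bound to a different source on each site.
line1_filler_volume = {
    "Site A": MeasureBinding("Site A", "historian_tag", "FL001-123-FQ001.PV"),
    "Site B": MeasureBinding("Site B", "manual_form", "Filler Volume entry"),
}

def resolve(measure_bindings, site):
    """Operators ask for 'Line 1 Filler Volume'; only the template knows the source."""
    return measure_bindings[site].source_ref

print(resolve(line1_filler_volume, "Site A"))  # -> FL001-123-FQ001.PV
```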
The Flow Model is hierarchical and generic by design. We can build our model using ISA95, ISA88, PackML, custom asset, twin thing, entity meta-model, or any combination of these. (We're not sure twin thing is really a thing, but you get the idea). The Flow Model represents physical assets, performance indicators, and logical entities. You can structure this model by area, department, or both. The point is - it is flexible. And, despite its hierarchical nature, the Flow Model allows for object "linking" across the structure.
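As an illustration only, and not Flow's internal data structure, a minimal sketch of a hierarchy whose nodes can also be linked across the structure might look like this:

```python
# A minimal sketch (hypothetical structure) of a hierarchical model whose nodes can
# also be "linked" across branches, independent of any one standard (ISA95, ISA88, ...).
class ModelNode:
    def __init__(self, name):
        self.name = name
        self.children = []   # hierarchical containment: site -> area -> line -> measure
        self.links = []      # cross-structure references, e.g. a KPI linked to a line

    def add_child(self, child):
        self.children.append(child)
        return child

site = ModelNode("Site A")
packaging = site.add_child(ModelNode("Packaging"))
line1 = packaging.add_child(ModelNode("Line 1"))
oee_kpi = site.add_child(ModelNode("Plant OEE"))
oee_kpi.links.append(line1)  # the KPI references Line 1 without moving it in the tree
```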
In many ways, the Flow Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces, whether they are Historian namespaces, SQL namespaces or even MQTT namespaces - Flow brings them all together into one persisted model. Together with a configurable security construct, this Unified Information Model presents the foundation for building value-added IT apps.
"The Information Model is the "uber" Unified Namespace, consolidating multiple underlying namespaces."
As we build out a Flow Model, we start filling it with information. We do this automatically, using data from existing sources, or manually, through Flow Forms.
Flow connects to and ingests data from multiple sources, meaning we can leverage the investments you have already made:
Data contained in the connected data sources is never replicated. Rather, it is referenced when required to perform aggregations and calculations. Flow stores only the results of this retrieval process, in the context of time and model. By storing only the resultant information, Flow guarantees fast and efficient access via charts and dashboards as and when needed. More importantly, this efficient information storage allows Flow Systems to scale enormously, without losing the ability to drill into the underlying data source when necessary!
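Conceptually, the reference-then-store pattern looks something like the following sketch; the function names and values are illustrative assumptions, not Flow's API:

```python
# A minimal sketch of reference-then-store: raw data stays in the source system, and
# only the aggregated result is persisted together with its time and model context.
from datetime import datetime, timedelta

def read_from_historian(tag, start, end):
    """Placeholder for an on-demand query against the source; nothing is copied wholesale."""
    return [41.8, 42.1, 42.4]  # raw samples for the requested window (dummy values)

def store_result(measure, period_start, period_end, value, results):
    """Persist only the contextualized result: measure (model) + period (time) + value."""
    results[(measure, period_start, period_end)] = value

results = {}
start = datetime(2024, 5, 1, 6, 0)
end = start + timedelta(hours=1)
samples = read_from_historian("FL001-123-FQ001.PV", start, end)
store_result("Line 1 Filler Volume", start, end, sum(samples) / len(samples), results)
```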
There will always be data that cannot be captured automatically, whether it's data read from an instrument indicator, or external data coming from email or paper-based systems. Flow handles manually captured data elegantly through the use of Flow Forms. Flow Forms are easily configured and served via a web browser to data capturers in a familiar and intuitive spreadsheet-like interface. No more spreadsheet spaghetti! The best part is that as soon as someone captures data in a Flow Form, any calculations or transforms in the downstream pipeline that depend on that entry are automatically processed and available for additional analytics.
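The idea of downstream processing triggered by a form entry can be sketched roughly like this; all names here are hypothetical and exist purely to illustrate the dependency-driven behaviour:

```python
# A minimal sketch (hypothetical names) of form capture triggering downstream work:
# measures that depend on the captured value are re-evaluated as soon as it arrives.
captured = {}                      # manually captured values, keyed by measure name
dependents = {                     # which calculated measures depend on which input
    "Line 1 Manual Waste": ["Line 1 Net Production", "Line 1 Yield %"],
}

def recalculate(measure):
    print(f"recalculating {measure} ...")  # placeholder for the real calculation

def capture_form_value(measure, value):
    captured[measure] = value
    for downstream in dependents.get(measure, []):
        recalculate(downstream)    # the downstream pipeline processes automatically

capture_form_value("Line 1 Manual Waste", 37.5)
```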
"Flow helps me leverage the investments we've already made in our data infrastructure."
For us, the transformation pipeline is the most exciting part. This is where Flow really shines.
Out of the box, and at its foundation, Flow enforces two critical pieces of context against which measure information is enriched: time and model. Every data point streaming into Flow, whether used for event framing or calculated into a measure's value, is contextualized by time and model to become part of the information that will ultimately serve our decision-making processes.
Time is the base that runs through all Flow Systems, a thread against which all information is stored. However, to present and publish this information as analytics-ready, Flow normalizes time into slices or periods:
Calendar-based periods include minutes, hours, shifts, days, weeks, months, quarters and years. These periods are what make meaningful comparisons possible and let you derive insight from your information. For example, how is the current shift running? How does our process this year compare to the same time last year? This information is at your fingertips.
Event-framed periods are derived from triggers in the underlying data. Flow monitors for start and stop triggers to generate periods against which you can attribute additional context dynamically. For example, Flow will monitor the necessary tags, or combination of tags, to record when a machine stops and starts up again. Additional information, like the reason for the stop, will be attributed to that event period, providing invaluable insight over time as to how often, how long, and why the machine stops.
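To illustrate the event-framing idea, here is a rough sketch with dummy data, not Flow's actual trigger engine, of how stop/start transitions in a running-status signal become event periods that can later carry a reason:

```python
# A minimal sketch of event framing: watch a running-status signal for stop/start
# transitions and record each stoppage as a period that can carry additional context.
status = [  # (timestamp, machine_running) samples from the underlying tag (dummy data)
    ("06:00", True), ("06:20", False), ("06:35", True), ("07:10", False), ("07:25", True),
]

events, stop_started = [], None
for timestamp, running in status:
    if not running and stop_started is None:
        stop_started = timestamp                      # stop trigger fires
    elif running and stop_started is not None:
        events.append({"start": stop_started, "end": timestamp, "reason": None})
        stop_started = None                           # start trigger closes the period

events[0]["reason"] = "Label roll change"             # context attributed to the event later
print(events)
```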
As data streams into Flow, it is cleaned, contextualized, and transformed by a set of calculation services that include:
User-defined functions are used to encapsulate complex algorithms and standardize and lock down calculations throughout the Flow Model.
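As a simple illustration of the idea, using a hypothetical formula rather than anything shipped with Flow, a user-defined function might encapsulate an OEE calculation once so that every chart and site computes it identically:

```python
# A minimal sketch of a user-defined function: the calculation is written and locked
# down in one place, then reused throughout the model instead of re-typed per chart.
def oee(availability: float, performance: float, quality: float) -> float:
    """Standard OEE calculation, validated and standardized in a single definition."""
    for factor in (availability, performance, quality):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("OEE factors must be expressed as fractions between 0 and 1")
    return availability * performance * quality

print(round(oee(0.92, 0.88, 0.99), 3))  # -> 0.802
```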
The Flow transformation pipeline applies these contextualization and calculation processes to multiple data streams simultaneously, removing the silos between them as they blend in near real-time. The pipeline allows us to build calculated measures that take inputs from more than one data source, or to frame event periods using one data source while attributing context to them from other data sources, whether these sources are time-series or transactional in nature. The possibilities are limitless!
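A rough sketch of the blending idea, with dummy data and hypothetical field names, shows an event period framed from time-series data being enriched with context from a transactional source:

```python
# A minimal sketch of blending streams: an event period framed from a time-series
# trigger is enriched with context looked up in a transactional source.
stoppage = {"start": "06:20", "end": "06:35", "line": "Line 1"}   # framed from historian tags

work_orders = [  # rows from a transactional system such as an MES or ERP (dummy data)
    {"line": "Line 1", "start": "05:30", "product": "500ml Sparkling"},
    {"line": "Line 2", "start": "05:45", "product": "1L Still"},
]

# Attribute the active product to the stoppage by matching on line.
active = next(order for order in work_orders if order["line"] == stoppage["line"])
stoppage["product"] = active["product"]
print(stoppage)
```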
"Unifying data silos by blending data from multiple platforms in near real-time."
Flow is anything but a "black box". It contains your information and is open for you to easily access it via industry-standard protocols. Flow is your bridge from OT and IoT data streams to analytics-ready information.
Flow exposes an industry-standard REST API for model discovery and information access that can be used to build third-party apps or to integrate with existing applications.
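As a sketch only, consuming such an API from a script might look like the following; the server address, endpoint paths, and payload shapes here are illustrative assumptions, not Flow's documented API:

```python
# A minimal sketch of calling a model-discovery REST API from Python.
# The URL and paths below are hypothetical placeholders, not real Flow endpoints.
import json
import urllib.request

BASE_URL = "https://flow.example.local/api"   # hypothetical server address

def get_json(path):
    with urllib.request.urlopen(f"{BASE_URL}{path}") as response:
        return json.load(response)

model = get_json("/model")                                 # discover the model tree (assumed path)
values = get_json("/measures/line-1-filler-volume/data")   # read a measure's results (assumed path)
```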
Flow provides integration components to automatically publish information out to your other systems via industry-standard protocols in near real-time. How about pushing maintenance information like running hours or stroke counts up to your Asset Management system? Or actual production figures up to your ERP system? What about sending information to your Machine Learning platform in the cloud? Or even just back to your SCADA for operator visibility of KPIs calculated from multiple data sources? Flow currently integrates with:
Flow Systems can publish information to other Flow Systems! Why would this be useful? Imagine you are a multi-site organization, possibly spanning the globe, with each site's Flow System publishing its information up to your HQ Flow System. The HQ Flow System would provide invaluable fleet-wide information for site comparisons, benchmarking, and logistics planning. How about cost or efficiency comparisons between types of equipment? The possibilities are limitless.
"Flow is your bridge from OT data systems to enterprise applications."
Ultimately, Flow provides value in the form of decision support, insight, and action by presenting the "single source of truth" in a way that is seen and understood.
Flow reports, charts and dashboards are easily configured and served via a web browser to operators, team leaders and managers. Chart configuration employs built-in visualization best practices, thus maximizing the transfer of information to the human visual cortex:
Reports and charts enable comment entry to add human context to our information.
Sometimes it is more convenient for the information to find us rather than for us to find the information. Flow automatically compiles and distributes information and PDF exports as and when required. Distribution is secure and handled via mechanisms such as: