Introduction
Another lab demo about Kafka. This time we will discuss the integration between Kafka and Ignition, covering Event Streams, the REST API, and bridges (HTTP, Confluent, Debezium). The goal is to benchmark a few integration options and find out which one fits best for your application. It's all about trade-offs with your own environment, which is why I always share the repo so you can reproduce the demo and draw your own conclusions.
Why Kafka for Manufacturing?
While MQTT is standard for time-series data (temperature, pressure), Kafka is better suited for Event Streams and massive data volumes. It is the gold standard in the IT world (used by Uber, LinkedIn) and is increasingly used in factories to allow different sites to "talk" to each other and feed big data analytics.
Limitations of the Built-in Event Stream Module
Three integration bridges stand out:
- Kafka REST Proxy: Uses a heavy IT infrastructure where Ignition acts as an HTTP client. It’s powerful but high-maintenance.
- Debezium (CDC): Uses a SQL database as a buffer. Changes in the database trigger events in Kafka. The downside is maintaining a "duplicate" database buffer.
- Custom Python Bridge: A lightweight external bridge written in Python 3 (using FastAPI). Since Ignition scripting runs on Jython (Python 2.7), this external bridge allows the use of modern Python libraries to handle Kafka communication efficiently (see the sketch after this list).
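
To make the custom bridge option concrete, here is a minimal sketch of what such a Python 3 bridge could look like, assuming FastAPI and the `kafka-python` client. The `/publish` endpoint, the `TagEvent` model, and the `ignition.events` topic are illustrative names, not part of the original demo.

```python
# bridge.py -- minimal sketch of a Python 3 Kafka bridge for Ignition.
# Hypothetical names: the /publish endpoint, the TagEvent model, and the
# "ignition.events" topic are illustrative only.
import json

from fastapi import FastAPI
from kafka import KafkaProducer  # pip install fastapi uvicorn kafka-python
from pydantic import BaseModel

app = FastAPI()

# Single-broker setup for the lab demo; a production cluster would list
# several bootstrap servers here.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

class TagEvent(BaseModel):
    tag_path: str
    value: float
    timestamp: str

@app.post("/publish")
def publish(event: TagEvent):
    # Ignition (Jython 2.7) POSTs JSON here; the modern Python side
    # handles the actual Kafka client work.
    producer.send("ignition.events", event.dict())
    producer.flush()
    return {"status": "queued"}
```

On the Ignition side, a gateway script would simply POST the tag change to this endpoint over HTTP (for example with `system.net.httpPost` or `system.net.httpClient`), keeping the Jython side free of any Kafka client code.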
Three Integration Bridges Compared
Now let's look at the physical and digital environment required to run these tools.
Kepware (Connectivity): It is strictly Windows-only. This is a critical consideration for your "edge device". If you choose Kepware, you are locked into using hardware that can support a Windows operating system, which can impact your hardware costs and maintenance.
HighByte (DataOps): It is built for **modern IT environments, running natively on Docker and Linux**. This allows you to deploy it on a wide variety of lightweight edge devices or cloud environments.
The Unified Namespace (UNS) Requirement
A true UNS isn't just data points; it requires Events, Topology, and Data to be consistent. A custom script ensures that when a machine state changes or a naming convention is updated, all parts of the stream stay synchronized.
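
As a rough illustration of what such a script might do, the sketch below publishes the event, topology, and data views of a state change together, so a rename or state update never leaves one stream stale. The `uns.` topic layout, payload fields, and function name are assumptions for this example only.

```python
# uns_sync.py -- illustrative sketch of keeping the UNS consistent when a
# machine state changes. Topic layout and payload fields are assumptions.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_state_change(site, area, line, machine, new_state):
    """Publish the event, topology, and data views together so renames or
    state changes never leave the streams out of sync."""
    base = "uns.{}.{}.{}.{}".format(site, area, line, machine)
    payload = {"state": new_state, "ts": time.time()}

    producer.send(base + ".events", payload)           # event stream
    producer.send(base + ".topology", {"path": base})  # topology / naming
    producer.send(base + ".data", payload)             # current data
    producer.flush()

publish_state_change("plant1", "packaging", "line3", "filler", "RUNNING")
```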
Production vs. Lab Demo
Manufacturing standards "hate change" because stability ensures quality. Resistance to change can actually be a good thing if it prevents unnecessary modifications that could ruin product quality (e.g., in medical devices like pacemakers). You should focus on sustaining the machine through small incremental improvements rather than radical "revolutions" or running it to total failure. In a real-world setup, you would move from a single broker to a Kafka Cluster for resilience and likely switch from JSON to Protobuf as the data schema to handle higher volumes with better performance.
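
To show where the lab setup and a production setup diverge on the producer side, here is a hedged sketch of cluster-oriented settings using `kafka-python`. The broker hostnames and tuning values are examples, not values from the demo; a real deployment would also swap the JSON serializer for a Protobuf one tied to a schema registry.

```python
# producer_prod.py -- illustrative production-leaning producer config,
# contrasted with the single-broker lab setup. Hostnames and settings
# are examples only.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    # Three brokers instead of one: the client survives a broker failure.
    bootstrap_servers=["kafka-1:9092", "kafka-2:9092", "kafka-3:9092"],
    acks="all",      # wait for in-sync replicas before confirming a write
    retries=5,
    linger_ms=20,    # small batching window for higher throughput
    # In production this JSON serializer would typically be replaced by a
    # Protobuf serializer to handle higher volumes more efficiently.
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("ignition.events", {"machine": "filler", "state": "RUNNING"})
producer.flush()
```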