Zerynth Cloud has hundreds of active IoT devices connected to its platform, as well as users who perform a variety of actions, ranging from viewing real-time data to debugging their devices’ activity.

As the number of connected devices grows, the quantity of data flowing inside and outside the platform increases rapidly. It is crucial for us to have a data streaming pipeline that receives, handles, filters, and transforms all data with the best performance possible.

This blog post gives you an overview of what data streaming and events are, how they are handled on our platform, and how they can be useful for monitoring your devices.

How Zerynth Cloud manages data

Every organization is powered by data. The effectiveness of business choices and overall productivity depend heavily on how that data is collected, managed, and used. This is why the way we treat data is nearly as important as the data itself.

In IoT and IIoT, data is collected at the edge by real-time control systems and sensors, then sent over the network to the cloud, where it can be stored or used in various ways.

Depending on the device, the network, and other requirements, data can be sent in real time, aggregated in batches, or even kept at the edge and sent at a later time. In all of these situations, it is important that the data’s timestamp is correct and preserved between the edge and the cloud: a wrong timestamp can give an incorrect picture of what is happening at any level.
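
To make the idea concrete, here is a minimal sketch, assuming a simple JSON payload of our own invention (not Zerynth’s actual data format), of how a device can attach its own timestamp at acquisition time so the measurement time is preserved end to end:

    import json
    import time

    def build_data_point(sensor_name, value):
        """Attach the device-side timestamp (UTC seconds) at acquisition time."""
        return {
            "sensor": sensor_name,
            "value": value,
            "timestamp": time.time(),  # recorded at the edge, preserved up to the cloud
        }

    # Whether points are sent immediately, batched, or stored and sent later,
    # the embedded timestamp keeps the original acquisition time.
    batch = [build_data_point("temperature", 21.5)]
    payload = json.dumps(batch)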

In addition to the data collected at the edge by boards and sensors, there is another type of data that is generated both at the edge and in the cloud: events.

What is an event? 

In general, the term means “a significant change in state”, but in this context, an event is a piece of information that is produced in reaction to something that has happened or has been triggered.

For example, every time a device is connected to Zerynth Cloud, a variety of events are produced:

  • A device authentication event, to check the identity of the device;
  • A cloud connection event, etc.

Each of these events carries a payload with additional information about what happened.
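
Purely as an illustration (the field names below are assumptions, not the actual Zerynth event schema), an event can be pictured as a small structured record:

    # Hypothetical shape of a device connection event, for illustration only.
    event = {
        "type": "device.connected",           # what happened
        "device_id": "dev-1234",              # which device triggered it
        "timestamp": "2022-05-10T09:30:00Z",  # when it happened
        "payload": {                          # extra details about the event
            "firmware_version": "1.2.0",
            "ip_address": "203.0.113.42",
        },
    }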

Events provide insight into your devices’ activity, allowing you to understand how they are working and, in case of malfunction, to run diagnostics.

Zerynth Cloud’s pipeline

The Zerynth Cloud platform includes a fast and reliable data streaming pipeline that provides high throughput with low latency, supporting thousands of messages per second. The pipeline is also designed for fault tolerance and scalability within our distributed architecture, which means it offers a very high level of availability and adapts its performance to the amount of incoming data.

The same pipeline is also used internally to track the traffic generated by devices and users, and to record the errors that occur on the platform.

Even integrations with third-party services are powered by this same pipeline.

Figure 1. Zerynth Cloud platform’s pipeline

The pipeline also provides data isolation and replication: isolation guarantees that each workspace’s data is kept separate from other workspaces’ data, while replication guarantees that, in case of a disaster, one or more copies of your data are always available. When the affected services are restored, the damaged copies of your data are rebuilt and synchronized with the remaining replicas.

Both data and events can be forwarded to third-party external services via a webhook or one of the other supported integrations. One notable feature of our data streaming pipeline is a mechanism that, in case of error, retries submitting data and events to the external service a certain number of times. This guarantees that your data is not lost due to transient network errors, and that sending is retried as soon as possible.
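
As an illustration of the general technique only (this is not Zerynth’s actual implementation; the URL and payload are placeholders), a retry with exponential backoff for a webhook delivery could look like this Python sketch:

    import time
    import requests  # third-party HTTP client

    def deliver_with_retries(url, payload, max_attempts=5, base_delay=1.0):
        """POST `payload` to `url`, retrying on failure with exponential backoff."""
        for attempt in range(1, max_attempts + 1):
            try:
                response = requests.post(url, json=payload, timeout=10)
                response.raise_for_status()  # treat HTTP 4xx/5xx as errors
                return True                  # delivered successfully
            except requests.RequestException:
                if attempt == max_attempts:
                    return False             # give up after the last attempt
                time.sleep(base_delay * 2 ** (attempt - 1))  # wait 1s, 2s, 4s, ...

    # Hypothetical usage: forward a data point to an external webhook.
    ok = deliver_with_retries("https://example.com/webhook", {"temperature": 21.5})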

You can keep track of errors that occurred in your active integrations by looking at the Integrations section in the Zerynth Cloud web application.

Figure 2. Visualization of errors that occurred on the Zerynth Cloud

In Figure 2, you can see some details about data that failed to send: the last time the service retried sending the data, the current status, how many attempts are left, and the detailed payload of the error that occurred.

Furthermore, the service forwards your data using an adaptive sending frequency that estimates your device’s message rate, striking a good tradeoff between latency and batch size.
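
The exact algorithm is internal to the platform, but as a rough, hypothetical sketch of the idea, an adaptive batcher could keep a smoothed estimate of the incoming message rate and shorten its flush interval as the rate grows:

    import time

    class AdaptiveBatcher:
        """Illustrative only: flush often when messages arrive fast, rarely when they are sparse."""

        def __init__(self, min_interval=0.5, max_interval=10.0):
            self.min_interval = min_interval   # seconds between flushes at high rates
            self.max_interval = max_interval   # seconds between flushes at low rates
            self.buffer = []
            self.last_flush = time.monotonic()
            self.rate = 0.0                    # smoothed messages per second

        def add(self, message):
            self.buffer.append(message)
            elapsed = time.monotonic() - self.last_flush
            observed = len(self.buffer) / max(elapsed, 1e-6)
            self.rate = 0.8 * self.rate + 0.2 * observed  # exponential smoothing
            # Higher rates -> shorter interval, i.e. larger batches without growing latency.
            interval = max(self.min_interval, self.max_interval / (1.0 + self.rate))
            if elapsed >= interval:
                self.flush()

        def flush(self):
            if self.buffer:
                print(f"sending batch of {len(self.buffer)} messages")  # placeholder for the real send
                self.buffer.clear()
            self.last_flush = time.monotonic()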

The events generated by your devices and by the cloud are available for consultation for a short time. This lets you view and debug your devices’ activity without any additional payment or services. After this period they are deleted and become unavailable. The data streaming pipeline itself does not persist data, but when the Zerynth Storage service is enabled, all data is stored in a database optimized for time-series data. This gives you persistent storage of all your devices’ data and events, so you can explore them historically and perform other operations (e.g. download data for a specified time range).
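
Purely as an illustrative example (the endpoint, parameters, and token below are hypothetical and not the actual Zerynth Storage API), downloading a device’s data for a specified time range from a time-series store could look like this:

    from datetime import datetime, timedelta, timezone
    import requests  # third-party HTTP client

    # Hypothetical endpoint and token: replace with the API documented by your provider.
    BASE_URL = "https://api.example.com/timeseries"
    API_TOKEN = "YOUR_TOKEN"

    def fetch_range(device_id, start, end):
        """Download all data points for `device_id` between `start` and `end` (illustrative only)."""
        params = {"device": device_id, "from": start.isoformat(), "to": end.isoformat()}
        headers = {"Authorization": f"Bearer {API_TOKEN}"}
        response = requests.get(BASE_URL, params=params, headers=headers, timeout=30)
        response.raise_for_status()
        return response.json()

    # Example: the last 24 hours of data for one device.
    now = datetime.now(timezone.utc)
    points = fetch_range("my-device-id", now - timedelta(hours=24), now)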

If you are interested in this topic and would like to learn more about the potential of Zerynth Cloud in real-world contexts, we invite you to consult our technical material and our case-study pages.

About the Author: Matteo Arre

Matteo is a Full Stack Software Engineer and works in the Cloud Team at Zerynth. He holds a Bachelor of Science degree in Computer Science and he is a tech enthusiast.
