Data Streams is a powerful tool that enables customers to make use of Webtrekk raw data instantly. On the surface, Data Streams can be perceived as another data distribution method: we deliver raw data not in hourly or daily batches, but as a continuous stream of events.

With Data Streams, you can gain insights from real end-user interactions in real time by turning a constant flow of events into real-time actions. In this context, events are pages and actions, each carrying attributes that describe them in more detail and turn them into the single source of truth for insights and actions.

An event can be described as the lowest level of granularity of any end-user interaction with our customers' properties (website, app, etc.).


Technically, events can be defined at any granularity, but the lowest granularity is best suited to describing the customer's business.
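To make the notion of an event more concrete, the sketch below parses a single hypothetical page event. The field names and values are illustrative assumptions, not the actual Webtrekk event schema:

```python
import json

# A hypothetical raw event payload as it might arrive on the stream.
# Field names ("type", "name", "attributes", ...) are illustrative
# assumptions, not the real Webtrekk schema.
raw_event = json.dumps({
    "type": "page",
    "name": "en.products.detail",
    "timestamp": "2021-04-01T12:00:00Z",
    "attributes": {
        "browser": "Chrome",
        "country": "DE",
    },
})

# Deserialize the event and inspect its identifying attributes.
event = json.loads(raw_event)
print(event["type"], event["name"])            # page en.products.detail
print(len(event["attributes"]), "attributes")  # 2 attributes
```

Each such event is the smallest unit of interaction; downstream consumers can aggregate or filter these records however the business requires.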


Our base technology is Apache Kafka, an industry standard for modern, scalable, and reliable messaging systems. It provides access to request-level processed Webtrekk data within seconds of collection.


This documentation is intended to assist you with the introduction, implementation, and basic troubleshooting of Data Streams.


Webtrekk’s implementation of Apache Kafka is standard in most aspects. Where applicable, we will therefore link to the official Apache Kafka documentation and implementation guidelines.
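Because the setup follows standard Kafka conventions, a consumer configuration could look like the following sketch. The broker address, group id, and security settings here are placeholders and assumptions for illustration, not actual Webtrekk endpoints or requirements:

```properties
# Placeholder broker endpoint - the actual address is provided with your account
bootstrap.servers=kafka.example.com:9093

# Consumer group id of your choice; Kafka tracks offsets per group
group.id=my-datastreams-consumer

# Start from the earliest retained offset on first connect
auto.offset.reset=earliest

# Illustrative assumption: streamed data is usually protected by TLS
security.protocol=SSL
```

Any standard Kafka client library that accepts these settings should be able to connect and consume the event stream.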