Bistro Streams [1] is similar to Kafka Streams in how it is meant to be used: it is implemented as a library that can be embedded in an application (including IoT analytics) or run at the edge, for example, on gateways or devices. However, Bistro Streams is based on different principles and has the following major distinguishing features:

o [Column-oriented logical model] Bistro Streams describes its data processing logic mainly in terms of column operations rather than set operations [2]. This is a unique feature of the system: no joins, no group-by, no map-reduce (a simplified sketch illustrating the approach is at the end of this post).

o [Column-oriented physical model] Data within the system is represented in in-memory columns. This is not new in itself, since it is widely used in column stores, but it is new for stream processing. For long histories (which are needed for complex analysis) and complex analytic workflows it can provide higher performance, and it also matters for running on edge devices with limited resources.

o [Separate injection, processing, retention] Bistro Streams separates (1) the logic that triggers appending, evaluating, and deleting data from (2) the logic of the evaluation itself (the data processing). In particular, the frequency and conditions for starting evaluations are specified via a separate API. The same applies to the retention policy: deletion time is specified separately rather than being determined by the windows used during processing (say, for a moving average).

I am the author of Bistro Streams [1] and the underlying Bistro Engine [2]. I will be glad to answer questions, and any feedback is welcome, in particular on possible application areas for this system.

[1] Bistro Streams: https://github.com/asavinov/bistro/tree/master/server

[2] Bistro Engine: https://github.com/asavinov/bistro/blob/master/core
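To make the column-oriented idea more concrete, here is a minimal sketch in plain Java. The class and method names (Column, define, evaluate) are hypothetical and simplified, not the actual Bistro API: a derived column is defined as a function of other columns, raw data is appended to base columns, and evaluation is triggered as a separate step.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BiFunction;

    class ColumnSketch {

        // An in-memory column: a growable array of values plus an optional
        // definition that computes this column from other columns.
        static class Column {
            final String name;
            final List<Double> values = new ArrayList<>();
            Column[] inputs;                          // columns this one depends on
            BiFunction<Double, Double, Double> calc;  // how to compute one value

            Column(String name) { this.name = name; }

            // Define this column as a calculation over two input columns.
            void define(BiFunction<Double, Double, Double> calc, Column a, Column b) {
                this.calc = calc;
                this.inputs = new Column[] { a, b };
            }

            // Evaluate only the rows appended since the last evaluation.
            void evaluate() {
                for (int i = values.size(); i < inputs[0].values.size(); i++) {
                    values.add(calc.apply(inputs[0].values.get(i), inputs[1].values.get(i)));
                }
            }
        }

        public static void main(String[] args) {
            Column quantity = new Column("quantity");
            Column price = new Column("price");
            Column amount = new Column("amount");

            // Column-oriented logic: no map/reduce over records, just a column
            // defined in terms of other columns.
            amount.define((q, p) -> q * p, quantity, price);

            // Injection: append incoming data to the base columns.
            quantity.values.add(2.0); price.values.add(10.0);
            quantity.values.add(3.0); price.values.add(20.0);

            // Processing: triggered separately (e.g., by a timer or a server
            // action), not by the act of appending data.
            amount.evaluate();

            System.out.println(amount.values);  // [20.0, 60.0]
        }
    }

The point of the sketch is only that a derived column stores its own data and knows how to update the new rows, so how often evaluation runs and how long data is retained can be controlled independently of the column definitions themselves.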