I'm surprised no one has mentioned Esper yet: <a href="http://www.espertech.com/esper/" rel="nofollow">http://www.espertech.com/esper/</a><p>Esper does exactly this: you run streams of events through it, and it continuously evaluates SQL-like queries against them. When a query matches, you can:<p>- run code<p>- derive new streams<p>- store the results<p>Esper has been doing this kind of thing for 9 years now.
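For anyone unfamiliar, a minimal EPL sketch of the idea (the event type and field names here are made up for illustration, not from Esper's docs):

```sql
-- Continuously maintain the average price per symbol over the
-- last 30 seconds of StockTick events; Esper re-evaluates this
-- as each new event arrives rather than on demand.
select symbol, avg(price)
from StockTick.win:time(30 sec)
group by symbol
```

You register a statement like this once, attach a listener, and your callback fires whenever the windowed result changes.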
I downloaded the OS X .pkg installer and didn't see anything in /Applications or /opt after running it and telling it to install to my root drive. Glancing at the docs on your site I saw pipeline-init, so I ran find on / to locate the binaries and found they were installed to:<p>/usr/lib/pipelinedb/usr/lib/pipelinedb/bin/pipeline-init<p>Is this intentional?<p>EDIT:<p>After poking around in the .pkg file, it looks like the packed Payload contains '/usr/bin/pipelinedb/usr/lib/pipelinedb', which is probably the problem. I also see broken symlinks for pipeline-init etc. in /usr/bin pointing to /usr/lib/pipelinedb, so I'm guessing the repeated path above is a mistake.<p>Also, I see a postinstall script creating a symlink from pipeline to psql. This seems like a bad idea, since psql is already the universal name for the PostgreSQL CLI binary; maybe 'pipesql' would be better?
How does PipelineDB differ from or build on the ideas from Aurora/Borealis/StreamBase? At least at a high level, something like LiveView[1] seems to provide functionality similar to PipelineDB's concept of a Continuous View.<p>I was under the impression that the academic projects had proposed StreamSQL as a general language, though since StreamBase's acquisition it now seems to have been branded as TIBCO StreamSQL[2]. Have you guys been part of any efforts to make sure that there is an open language standard?<p>[1] <a href="http://streambase.typepad.com/streambase_stream_process/2013/05/liveview-14-new-continuous-queries.html" rel="nofollow">http://streambase.typepad.com/streambase_stream_process/2013...</a><p>[2] <a href="http://www.streambase.com/developers/docs/latest/streamsql/" rel="nofollow">http://www.streambase.com/developers/docs/latest/streamsql/</a>
This looks very cool, though I'm not sure I totally understand how it can replace batch ETL processes. PipelineDB eliminates batch ETL by incrementally inserting data into continuous views, but the documentation says it's not meant to be an ad-hoc data warehouse, since the raw data is discarded. So does that leave me still using batch processes to load my data warehouse? Or is PipelineDB my data warehouse, as long as I only want the aggregated stream results? Just trying to figure out what this would look like and where it fits in a data warehouse environment.
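To make my question concrete, here's the kind of nightly batch rollup I'd hope a continuous view could replace (stream and column names are hypothetical, loosely following the style of PipelineDB's docs):

```sql
-- Instead of a nightly "INSERT INTO daily_uniques SELECT ..." batch job,
-- maintain the rollup incrementally as events arrive on the stream.
CREATE CONTINUOUS VIEW daily_uniques AS
  SELECT date_trunc('day', arrival_timestamp) AS day,
         COUNT(DISTINCT user_id::integer) AS uniques
  FROM event_stream
  GROUP BY day;
```

If that's the intended usage, my question is just what happens to all the queries I haven't thought to define up front, given the raw events are gone.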
As someone who's made a lot of use of `tail` and similar, this is appealing.<p>But I don't have a lot of use cases in personal projects, and am unlikely to find a good use-case at work in the near future. What's the 'adoption path' for something like this?<p>I think a really robust sample data set with example queries (think the neo4j imdb examples) would be a great way to show how powerful and easy something like this can be.
How similar is this to something like <a href="http://riemann.io" rel="nofollow">http://riemann.io</a> for processing events from a stream?
Very cool that it is open sourced - seems like there would be a lot to learn from the code. Link: <a href="https://github.com/pipelinedb/pipelinedb" rel="nofollow">https://github.com/pipelinedb/pipelinedb</a>
This is awesome; thanks for making it open source!<p>Would it be possible to set triggers or something similar on the continuous views? Let's say I want to take action immediately when a value calculated over a sliding window goes above a limit.<p>It's a bit late here, but I'll definitely play with PipelineDB tomorrow.
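In case it helps clarify what I mean: absent real triggers, I imagine the workaround is polling the view from the outside, something like this sketch (stream, columns, and threshold are all made up):

```sql
-- Maintain a one-minute sliding average of a metric.
CREATE CONTINUOUS VIEW recent_load AS
  SELECT avg(value::float8) AS avg_value
  FROM metrics_stream
  WHERE arrival_timestamp > clock_timestamp() - interval '1 minute';

-- An external alerting process would then poll for the condition:
SELECT avg_value FROM recent_load WHERE avg_value > 100;
```

A trigger-like hook that fires when the condition becomes true would avoid the polling loop entirely, which is what I'm asking about.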
This claim that ETL won't be needed in the future sounds dubious. I work on a large application that is all about ETL. If we wanted to use this new method instead, I'm not sure how it would deal with the following:<p>- State in the data. For many of our sources, processing depends on internal state that must be maintained over time. For example, some process has started and we will later see that it ended; we must keep its state so we can correctly process (match up) the ending event. I'm not clear how this would work with continuous views. I'd say this is actually the main reason ETL processing is non-trivial.<p>- Processing failure. Say something goes wrong and data processing fails (or there's planned downtime). How do we know where to restart, so we neither process data twice nor miss data? Does the continuous stream keep track of this metadata? And how does it handle the state information above? If you process data in batches, there is an obvious restart point. Again, I think the extra complexity the "continuous" approach calls unnecessary exists precisely because you want to be able to checkpoint the state of processing.
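To illustrate the second point, this is the kind of bookkeeping a batch pipeline gives you almost for free (a hypothetical sketch, not from any particular tool):

```sql
-- One row per source: where the last successfully committed batch ended.
CREATE TABLE etl_checkpoint (
    source_name  text PRIMARY KEY,
    last_loaded  timestamptz NOT NULL,  -- high-water mark of loaded data
    batch_id     bigint NOT NULL        -- last batch committed atomically
);

-- On restart after a failure, resume exactly where we left off:
SELECT last_loaded, batch_id
FROM etl_checkpoint
WHERE source_name = 'orders';
```

With a purely continuous pipeline, something has to play the role of that table, and it's not obvious to me from the docs what does.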
It seems PipelineDB doesn't have a clustered version; all the data must be sent to one server, as with PostgreSQL. Given that stream processing is mostly useful at big-data scale (if the data isn't that big and fits in memory, complex aggregation queries usually take under a second on a columnar database), is it possible to use PipelineDB for millions of events per second?
Do Continuous Views work with table-table joins, or must there always be at least one stream present? The documentation[1] doesn't specify.<p>If table-table joins are supported, this could be an interesting alternative to RethinkDB's changefeeds, since RethinkDB doesn't support joins on the change stream.<p>[1] <a href="http://docs.pipelinedb.com/joins.html" rel="nofollow">http://docs.pipelinedb.com/joins.html</a>
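For context, the documented case is a stream joined against a table, roughly like this sketch (stream, table, and column names are mine, not from the docs):

```sql
-- Stream-table join: enrich click events with a users lookup table,
-- aggregating continuously as events arrive.
CREATE CONTINUOUS VIEW clicks_by_region AS
  SELECT u.region, COUNT(*) AS clicks
  FROM clicks_stream c
  JOIN users u ON c.user_id::integer = u.id
  GROUP BY u.region;
```

My question is whether the FROM clause can instead reference two ordinary tables, with the view updating as either table changes.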
Cool. Very cool.<p>My first thought (aside from "Cool") was that the current time would be the tricky thing that can't be incorporated into a continuous view. But even that seems to be handled! <a href="http://docs.pipelinedb.com/sliding-windows.html" rel="nofollow">http://docs.pipelinedb.com/sliding-windows.html</a><p>Looks pretty impressive. :-)
We needed to implement continuous queries in our application code (it's hard to do right in PostgreSQL, so ours is quite limited): <a href="https://github.com/buremba/rakam/wiki/Postgresql-Backend#continuous-query-tables" rel="nofollow">https://github.com/buremba/rakam/wiki/Postgresql-Backend#con...</a> Since stream processing and real-time analytics are quite hot topics nowadays, I think real-time databases will get much more attention in the near future.
Well said! Good timing, too: I'm beginning to sketch out how to tackle a large file-set processing job that has to stitch together data from corresponding files. At the magnitude I'm imagining, I can't just read all the files into memory and do the matching and number crunching against them. I like the concepts and terminology in this article. Definitely worth keeping in the back pocket going forward, if not diving into outright. Thanks so much.
It looks like PipelineDB is implemented as a fork of PostgreSQL. I'd be interested to understand what about PipelineDB's architecture prevents it from being integrated into upstream PostgreSQL.
Can PipelineDB be used to run projections for an EventStore?<p>I'm experimenting with the EventStore pattern for a side project, and I have struggled to implement projections. Could PipelineDB be a way to deliver that?
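To show what I mean by a projection, here's the sort of thing I've been hand-rolling, expressed as a continuous view (all names and the event shape are hypothetical):

```sql
-- Fold an append-only stream of account events into a current-state
-- projection: the running balance per account.
CREATE CONTINUOUS VIEW account_balance AS
  SELECT account_id::integer,
         SUM(CASE WHEN event_type::text = 'deposit'
                  THEN amount::numeric
                  ELSE -amount::numeric END) AS balance
  FROM account_events
  GROUP BY account_id;
```

If something like this works, it would replace the replay-and-fold code I've been struggling to get right in the application layer.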
In the example for sf_proximity_count, you state the view covers a 5 minute sliding window, but the WHERE clause does not reference clock_timestamp(). Is 5 minutes an implicit default?
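Based on the sliding-windows docs, I'd have expected the window to be spelled out explicitly, something like this (the stream and column names here are my guesses, not from your example):

```sql
-- Explicit 5-minute sliding window via a clock_timestamp() predicate.
CREATE CONTINUOUS VIEW sf_proximity_count AS
  SELECT COUNT(DISTINCT user_id::integer) AS nearby_users
  FROM geo_stream
  WHERE arrival_timestamp > clock_timestamp() - interval '5 minutes';
```

So I'm wondering whether omitting the predicate implies a default window, or whether the example is just abbreviated.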