As a team member who helped build this service, I'd like to offer some of my personal understanding. I'm no longer with Amazon, and all my views are based on public information on the website.<p>Like all AWS offerings, Kinesis is a platform. It looks like Kafka + Storm, with a fully integrated ecosystem alongside other AWS services. From the very beginning, reliability, real-time processing, and transparent elasticity were built in. That's all I can say.
This is essentially a hosted Kafka (<a href="http://kafka.apache.org/" rel="nofollow">http://kafka.apache.org/</a>). Given the complexity of operating a distributed persistent queue, this could be a compelling alternative for AWS-centric environments. (We run a large Kafka cluster on AWS, and it is one of our highest-maintenance services.)
What's going on with Amazon recently? We're seeing a torrent of new technologies and platform offerings. Are we finally catching a glimpse of Bezos's grand scheme?
The 50KB limit on records (which applies to the base64-encoded data) will be a gotcha you'll have to deal with, similar to the item size limit in DynamoDB. Now you'll have to split your messages so they fit inside Kinesis records and then reassemble them on the other end... Not fun :-)
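A minimal sketch of the chunking this forces on you (the framing format and helper names here are made up, not from any Kinesis SDK):

    # Hypothetical framing for oversized messages; not part of any SDK.
    MAX_RECORD_B64 = 50 * 1024                # the limit counts the base64-encoded record
    MAX_RECORD_RAW = MAX_RECORD_B64 * 3 // 4  # ~37.5KB of raw bytes fit in one record
    CHUNK = MAX_RECORD_RAW - 64               # headroom for a small framing header

    def split_message(msg_id, payload):
        """Yield framed chunks, each small enough for one Kinesis record."""
        total = -(-len(payload) // CHUNK)  # ceiling division
        for seq in range(total):
            header = ("%s|%d|%d|" % (msg_id, seq, total)).encode("ascii")
            yield header + payload[seq * CHUNK:(seq + 1) * CHUNK]

    def reassemble(blobs):
        """Rebuild the original payload from framed chunks, tolerating reordering."""
        parts, total = {}, 0
        for blob in blobs:
            msg_id, seq, total, rest = blob.split(b"|", 3)
            parts[int(seq)] = rest
        assert len(parts) == int(total), "missing chunks"
        return b"".join(parts[i] for i in sorted(parts))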
Having to base64-encode the data is also a bit awkward. They should pass the PutRecord parameters as HTTP headers (which they already use for other properties) and let users send raw data in the body.
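For reference, the documented JSON request shape and why the encoding stings (field names are from the public PutRecord docs; the stream, key, and payload here are made up):

    import base64, json

    raw = b"\x00" * 30000  # 30KB of binary payload
    body = json.dumps({
        "StreamName": "my-stream",       # placeholder names
        "PartitionKey": "user-42",
        "Data": base64.b64encode(raw).decode("ascii"),
    })
    # base64 inflates the payload by ~33% (30KB -> 40KB here), and the
    # encoded form is what counts against the record size limit; raw bytes
    # in the HTTP body with metadata in headers would avoid the tax.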
It's interesting to see these messaging platforms and their new use cases starting to hit the mainstream à la Kinesis, Storm, Kafka.<p>Some interesting things about these kinds of messaging platforms.<p>Many exchanges/algo/low-latency/HFT firms have large clusters of these kinds of systems for trading. The open source stuff out there is somewhat different from the typical systems, which revolve around a central engine/sequencer (matching engine).<p>There's a large body of knowledge in the financial industry on building low-latency versions of these message processors. Here are some interesting possibilities. On an E5-2670 with Solarflare 7122 cards running OpenOnload, it's possible to pump a decent 2M 100-byte messages/sec at a packetization of around 200K pps.<p>Average latency through a carefully crafted system using efficient data structures and in-memory-only stores can be about 15 microseconds per message, with the 99.9th percentile at around 20 micros. That is a message hitting a host, getting sent to an engine, then back to the host and back out.<p>Using regular interrupt-based processing and e1000s probably yields around 500K msgs/sec, with average latency through the system at around 100 micros and 99.9th percentiles in the 30-40 millisecond range.<p>It's also useful to see Solarflare's tuning guidelines on building uber-efficient memcache boxes that can handle something like 7-8M memcache requests/sec.
I'm really excited about this - data streaming has been a crucial missing piece for building large-scale apps on AWS.<p>If the performance and pricing are right it's going to relieve a lot of headaches in terms of infrastructure management.
<i>it is possible that the MD5 hash of your partition keys isn't evenly distributed</i><p>How? I mean, apart from Poisson stats / shot noise, obviously (and that's noise, so you can't predict it anyway).<p>Thinking some more, I guess this (splitting and merging partitions in a non-generic way) is there to handle a consumer being slow for some reason. Perhaps that partition is backing up because the consumer crashed.<p>But then why not say that, instead of postulating that people are going to have uneven hashes?<p>[edit:] maybe they allow duplicate keys?
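To make the skew point concrete: MD5 output is effectively uniform over the 128-bit space, so "unevenly distributed" load really comes from uneven key frequencies, i.e. hot partition keys. A toy demonstration (the shard count and keys are made up):

    import hashlib
    from collections import Counter

    SHARDS = 4  # each shard owns a contiguous slice of the 2^128 hash-key space

    def shard_for(partition_key):
        h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
        return h * SHARDS >> 128  # index of the slice this key hashes into

    # If 90% of traffic carries one hot key, one shard eats 90% of the load,
    # no matter how uniform MD5 itself is.
    keys = ["hot-user"] * 90 + ["user-%d" % i for i in range(10)]
    print(Counter(shard_for(k) for k in keys))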
Seems like a useful reworking of SQS, but all the hard work is being done in the client: "client library automatically handle complex issues like adapting to changes in stream volume, load-balancing streaming data, coordinating distributed services, and processing data with fault-tolerance."<p>Unfortunately, there's no explanation of the mechanics of coordination and fault tolerance, so the hard part appears to be vaporware.
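For what it's worth, one plausible shape for that coordination is the lease-and-checkpoint scheme other distributed consumers use. This is a toy guess at the mechanics, not a description of Amazon's client library:

    import time

    LEASE_TTL = 30  # seconds a worker may hold a shard without heartbeating

    class LeaseTable:
        """In-memory stand-in for a shared store all workers would see."""
        def __init__(self, shard_ids):
            self.leases = {s: {"owner": None, "expires": 0, "checkpoint": None}
                           for s in shard_ids}

        def try_claim(self, shard, worker):
            """Claim an unowned or expired shard; returns (ok, checkpoint)."""
            now, lease = time.time(), self.leases[shard]
            if lease["owner"] is None or lease["expires"] < now:
                lease["owner"], lease["expires"] = worker, now + LEASE_TTL
                return True, lease["checkpoint"]  # resume after last commit
            return False, None

        def checkpoint(self, shard, worker, sequence_number):
            """Record progress and heartbeat the lease in one step."""
            lease = self.leases[shard]
            assert lease["owner"] == worker, "lease lost; stop processing"
            lease["checkpoint"] = sequence_number
            lease["expires"] = time.time() + LEASE_TTL

A crashed worker simply stops heartbeating; its lease expires and a peer re-claims the shard from the last checkpoint, reprocessing anything after it (which is why consumers in schemes like this need to be idempotent).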
The Kinesis consumer API is roughly equivalent to Kafka's Simple Consumer API: you have to track the consumed sequence numbers yourself, and there's no higher-level consumer API that does it for you.
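Concretely, tailing a shard looks something like this (a sketch using the boto3 client names for the raw GetShardIterator/GetRecords API; the stream name and callbacks are placeholders):

    import time
    import boto3

    kinesis = boto3.client("kinesis")

    def consume(stream, shard_id, last_seq, process, save_seq):
        """Tail one shard, resuming after a sequence number you stored yourself."""
        if last_seq:
            kwargs = {"ShardIteratorType": "AFTER_SEQUENCE_NUMBER",
                      "StartingSequenceNumber": last_seq}
        else:
            kwargs = {"ShardIteratorType": "TRIM_HORIZON"}  # oldest available record
        it = kinesis.get_shard_iterator(
            StreamName=stream, ShardId=shard_id, **kwargs)["ShardIterator"]
        while it:
            resp = kinesis.get_records(ShardIterator=it, Limit=100)
            for record in resp["Records"]:
                process(record["Data"])
                save_seq(record["SequenceNumber"])  # persist durably, or you replay on restart
            it = resp.get("NextShardIterator")
            time.sleep(0.2)  # stay under the per-shard read rate limit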