The Redis criticism thread

152 points by HeyChinaski over 11 years ago

10 comments

rdtsc over 11 years ago
To summarize quickly for those who didn't read the article: this focuses mostly on the new "clustering" aspect, not on the "classic" single Redis server.

I think it is important to be honest with users and make it clear how and what happens behind the scenes, and how data could be lost. Salvatore has done most of this; he may just need to make it a bit more explicit, as there still seems to be some confusion around it.

All this is in light of two things. 1) With the popularity of, and the amount of talk and churn around, distributed systems these days, people expect a point on the map in the CAP triangle. Saying "we kind of do this, and we provide some C, a little A, and a dash of HA" was probably OK five years ago; now it needs a bit more definition. 2) With other database systems having misled users about what they can provide (you know which one I am talking about), resulting in lost data, there is a bit of apprehension and a higher bar that needs to be met for a db product to be accepted.

One good thing that came out recently is NoSQL database writers/vendors pushing for more rigorous tests -- tests that run for weeks and months: consistency tests and network-partition tests such as those run by Aphyr. It is a very good idea that those things are talked about and defined better.
StavrosK over 11 years ago
This is the first time I've ever heard any criticism of Redis. As far as I know, everyone loves it; it's a great tool for many jobs, and it's amazingly written and solid to boot. I think many of the critics are trying to apply it in ways it wasn't meant to be used.

As far as I know, its main purpose is non-critical data that needs to be accessed as quickly as possible in various different ways, and that's where Redis shines.
pashields over 11 years ago
Just to be clear, the criticism is of Redis Cluster, which has been redesigned because the first design was so lax it was effectively unusable. The real issue is that, as of right now, I have no idea what the exact semantics and failure conditions of Redis Cluster are. I'm not clear that anyone really knows. Salvatore *thinks* he knows, but we can't be sure that he does.

With all this talk of practicality, what really makes distributed systems practical is when someone can do a formal analysis of them and conclude what possible states can occur in the system. This is not work that database users should do; it is work that database implementors should do. The failure to use a known consensus system is a failure to deliver a database I can understand.

I find this all a bit disappointing, since I've been a huge fan of Redis since the early days. It's an amazing tool that I still have in production, but I get the feeling that its utility will never expand to suit some of my larger needs. Bummer.
dorfsmay over 11 years ago
> Redis is probably the only top-used database system developed mostly by a single individual currently

Isn't SQLite pretty much just Richard Hipp?
HeyChinaski over 11 years ago
Salvatore is admirably proactive at garnering criticism for his project. He&#x27;s also incredibly gracious when accepting it.
bsg75 over 11 years ago
There has been a lot of constructive discussion in the Google Groups thread, and a lot of negativity on the same topic on Twitter. Ignoring for the moment the difference in discussion platforms, I am uncertain why the sudden uptick in Redis negativity.

Is it from users trying (and failing) to replace RDBMSs or distributed platforms feature-for-feature with a single-threaded, memory-limited store like Redis? Or could it be FUD from people interested in seeing their new platform-du-jour get more attention?
elwell over 11 years ago
I think people often criticize new-ish web technologies because they don't want to spend time learning them (which can be okay). They hope to sway others away so that a critical mass of users never compels them to adopt the now-necessary tech (not that Redis is 'necessary').
strlen over 11 years ago
Quick things:

1) I expected that thread to look like a Cultural Revolution "struggle session". Thankfully it wasn't.

2) As I am sure many others have already said, durability has very little to do with CAP. CAP is about the A and I in ACID; D is an orthogonal concern.

3) Durability doesn't necessarily mean losing high performance. Most databases let the user choose how much data they're willing to lose and for what latency decrease -- the standard approach (used even in main-memory databases like RAMCloud[1] and recent versions of VoltDB[2]) is to keep a separate write-ahead log (WAL) and let the end user choose how frequently to fsync() it to disk, as well as how frequently to flush a snapshot of the in-memory data structures to disk.

There are many papers (e.g., http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.174.6205) that discuss the challenges of building WALs, but fundamentally users who want the strongest possible single-machine durability can choose to fsync() on every write (and usually use battery-backed RAID controllers, separate log devices such as SSDs with supercapacitors, or even NVRAM if the writes are larger than what fits in the RAID controller's write-back cache). Others can choose to live with the possibility of losing some writes, but use replication[3] to protect against power-supply failures and crashes -- the idea being that machines in a datacenter are connected to a UPS, replicas don't all live on the same rack (to protect against usually rack-local UPS failures), and there is cross-datacenter replication (usually asynchronous, with the possibility of some conflicts -- a notable exception being Google's Spanner/F1) to protect, among many other things, against someone leaning on the big red button labeled "Emergency Power Off" (which is exactly what you think it is).

Flushes of the main data also hurt bandwidth on spinning disks and old or cheap SSDs, but there's a solution: use a good but commodity MLC SSD with synchronous toggle NAND flash and a good controller/firmware (SandForce SF-2000 or later series; Intel's, Samsung's, or Indilinx's recent controllers) -- these work on the same principle as DDR memory (latching on both edges of a signal) to provide sufficient bandwidth to handle both random reads (the traffic you're serving) and sequential writes (the flush).

4) I know several tech companies and/or engineering departments therein who absolutely love and swear by Redis. There are very good reasons for it: the code is extremely clean and simple[4], and it handles a use case that neither conventional databases nor pure KV stores or caches handle well.

That use case is, roughly, data structures on an outsourced heap for maintaining a materialized view (such as a user's newsfeed, adjacency lists of graphs stored efficiently using compressed bitmaps, counts, etc.) on top of a database. So my advice to antirez is to focus the effort on making this use case simpler rather than building Redis out into a database: build primitives that let developers piggyback durability and replication on a database or a message queue. In fact, I've known multiple startups that have (in an ad-hoc way) implemented pretty much exactly that.

This is still a tough problem, but one which (I think) would yield a lot more value to Redis users. Just thinking out loud, one approach could be a way to associate each write to Redis with an external transaction id (an HBase LSN, a MySQL GTID, or perhaps an offset in a message queue like Kafka). When Redis flushes its data structures to disk, it stores the last flushed transaction id to persistent storage.

I would also implement fencing within Redis: in "fenced" mode Redis won't accept any requests on the normal port, but can accept writes through a bulk batch-update interface that users can program against. This could be made more fine-grained by having both a read fence and a write fence, etc.

This makes it easier for users to tackle replication and durability themselves.

For recovery/durability, users can configure Redis such that after a crash it is automatically fenced and "calls back" into the users' own code with the last flushed id -- by invoking a plugin, making a REST or RPC call to a specified endpoint, or simply fork()ing and executing a user-configured script that uses the bulk API.

For replication, users could use a high-performance durable message queue (something I'd imagine some users already do) -- a (write-fenced) standby Redis node can then become a "leader" (unfence itself) once it's caught up to the latest "transaction id" (the last consumed offset in the message queue, as maintained by the message queue itself -- in Kafka's case this is stored in ZooKeeper). More advanced users can tie this in with database replication, either by tailing the database's WAL (with a way to transform WAL edits into requests to Redis) or by using a plugin storage engine for the database.

Fundamentally, where I see Redis used successfully is in use cases where (prior to Redis) users would have written custom C/C++ code. This cycles back to the "outsourced heap data structures" idea -- Redis lets you use a high-level language to do fast data manipulation without worrying about the performance of the code (especially in a language like Ruby or Python) or about garbage collection on large heaps (a problem for even the most advanced concurrent GCs, like Java's).

There have been previous attempts to build these outsourced heaps as end-to-end distributed systems that handle persistence, replication, scale-out, and transactions. These are generally called "in-memory data grids" -- some simply provide custom implementations of common data structures; others act almost completely transparently and require no modifications to the code (e.g., some by using JVMTI). Terracotta is a well-known one with a fairly good reputation (friends who contract for financial institutions and live in the hell^H^H^H^H world of app servers and WAR files swear by it), but JINI and JavaSpaces were some of the first (they came too early, way before the market was ready) and are rightly still covered by most distributed-systems textbooks. However, their successful use usually requires InfiniBand or 10GbE (or Myrinet back in the dotcom days) -- reliable low-latency message delivery is needed because (with no API to speak of) there's no easy way for users to recover from network failures or handle non-atomic operations.

To sum it up, I'd suggest examining and focusing on the use cases where Redis is already *loved* by its users, not trying to build a magical end-to-end system (as that won't preserve the former), and making it easy (to an extent Redis already does this) for users to build custom distributed systems with Redis as a well-behaved component (again, they're already doing this).

[1] https://ramcloud.stanford.edu/wiki/display/ramcloud/Recovery

[2] http://voltdb.com/intro-to-voltdb-command-logging/

[3] Whether it's synchronous or not is about the atomicity guarantees, not durability -- the failure mode of acknowledging a write and then "forgetting" it can happen in these systems even if they fsync every write.

[4] It reminds me of NetBSD source code: I can open up a method and it's very obvious what it does and how.
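The WAL-plus-configurable-fsync tradeoff described above can be sketched in a few lines of Python. This is a toy illustration of the general technique, not Redis's (or any database's) actual implementation; the class names, record format, and `sync_every` knob are invented for the example:

```python
import os

class WriteAheadLog:
    """Toy WAL: append records, fsync according to a configurable policy.

    sync_every=1  -> fsync on every write (strongest durability, slowest)
    sync_every=N  -> fsync every N writes (bounded window of lost writes)
    """
    def __init__(self, path, sync_every=1):
        self.f = open(path, "ab")
        self.sync_every = sync_every
        self.pending = 0

    def append(self, record: bytes):
        # Length-prefix each record so a torn final write is detectable.
        self.f.write(len(record).to_bytes(4, "big") + record)
        self.pending += 1
        if self.pending >= self.sync_every:
            self.f.flush()
            os.fsync(self.f.fileno())  # record is on stable storage here
            self.pending = 0

    def close(self):
        self.f.flush()
        os.fsync(self.f.fileno())
        self.f.close()

def replay(path):
    """Recovery: re-read complete records; silently drop a truncated tail."""
    records = []
    with open(path, "rb") as f:
        while True:
            hdr = f.read(4)
            if len(hdr) < 4:
                break
            n = int.from_bytes(hdr, "big")
            body = f.read(n)
            if len(body) < n:
                break  # torn write at crash time
            records.append(body)
    return records
```

Per-write fsync costs a storage round trip on every operation; batching amortizes that cost at the price of losing up to `sync_every` acknowledged writes on a power failure, which is exactly the durability-versus-latency knob described in the comment above.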
outside1234 over 11 years ago
I had no idea that there was ONLY antirez working on Redis. That is some crazy inspirational stuff.