Ask HN: Fast, In-Memory, Distributed data analysis and machine learning?

5 points by henrythe9th almost 12 years ago
We're looking to implement a new data pipeline architecture at work. The primary goal is speed (the data is small enough to fit entirely in memory, sharded across multiple machines if needed). The primary bottleneck is feature extraction, transformation, and iteration, which is both CPU- and read/write-intensive. Model building is not too slow, so there is no need to distribute training/testing as of yet.

I've heard good things about Spark/Shark and Storm. Does anyone have any experience or recommendations? Maybe we don't even need a super sophisticated system and a Riak/Redis K-V store cluster would do?

Thanks in advance
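[Editor's note: for the simpler K-V option the poster floats, here is a minimal sketch of what the Redis path might look like from Java using the Jedis client. The key names and feature fields are invented for illustration, and real sharding of keys across multiple Redis nodes is elided.]

import java.util.Map;

import redis.clients.jedis.Jedis;

public class RedisFeatureStoreSketch {
    public static void main(String[] args) {
        // Connect to a single Redis node; a real deployment would spread keys
        // across several nodes (e.g. by hashing the record id).
        try (Jedis redis = new Jedis("localhost", 6379)) {
            // Write extracted features as a hash keyed by a (hypothetical) record id.
            redis.hset("features:42", "clicks", "17");
            redis.hset("features:42", "dwell_ms", "5300");

            // Read them back for the next transformation/iteration pass.
            Map<String, String> features = redis.hgetAll("features:42");
            System.out.println(features);
        }
    }
}

[This keeps everything in memory, but as the first comment below notes, failure handling, resource allocation, and retries are left entirely to you.]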

4 comments

karterk almost 12 years ago
Hard to offer suggestions without knowing the rough size of the data - depending on how much money you're willing to cough up, even 1 TB is in "can fit in memory" territory.

Having said that, Spark is really great for running iterative algorithms and will definitely fit what you have described. I suggest staying away from building it on your own using Riak/Redis (at least until you have ruled out Spark), as you will run into lots of operational issues like handling failures, resource allocation, retries, etc.
Comment #6000982 not loaded
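[Editor's note: as a rough illustration of the cached, iterative style karterk recommends, a minimal sketch using Spark's Java API. The input path and the extractFeatures logic are hypothetical placeholders, not anything from the thread.]

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class FeaturePipelineSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("feature-pipeline");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Load raw records and keep them in memory across passes.
        // "hdfs:///data/events" is a placeholder path.
        JavaRDD<String> raw = sc.textFile("hdfs:///data/events").cache();

        // Stand-in for the CPU-heavy feature extraction / transformation step.
        JavaRDD<double[]> features = raw.map(FeaturePipelineSketch::extractFeatures).cache();

        // Iterative passes reuse the cached RDD instead of re-reading the data.
        for (int i = 0; i < 10; i++) {
            long nonZero = features.filter(f -> f[0] != 0.0).count();
            System.out.println("iteration " + i + ": " + nonZero);
        }
        sc.stop();
    }

    private static double[] extractFeatures(String line) {
        String[] cols = line.split(",");
        double[] f = new double[cols.length];
        for (int i = 0; i < cols.length; i++) {
            f[i] = cols[i].hashCode() % 100; // toy featurization
        }
        return f;
    }
}

[The cache() calls are what keep the working set in memory across iterations, which is the main advantage over re-reading the data on every pass.]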
agibsonccc almost 12 years ago
I can vouch for Storm, if only for the fact that it's pretty easy to set up (especially compared to Hadoop). Being able to leverage ZooKeeper gives you some extra coordination capabilities as well. With that said, just watch how you build your bolts/spouts. There are lots of ways you can send data into the system, but in general Storm's documentation has been superb to work with.

I built a mini library for myself to auto-construct topologies from a set of named dependencies to handle bolt/spout wiring. Aside from that, the builder interface is really nice if your data pipeline doesn't change.

There's good support for testing with a local cluster as well.
Comment #6000989 not loaded
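[Editor's note: a minimal sketch of the spout/bolt wiring agibsonccc describes, using the pre-Apache backtype.storm API that Storm shipped under at the time. RawRecordSpout, FeatureBolt, and the field names are invented for illustration; the LocalCluster mirrors the local testing support mentioned in the comment.]

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class PipelineTopologySketch {

    // Hypothetical spout that feeds raw records into the topology.
    public static class RawRecordSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values("raw-record")); // stand-in for a real source
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("record"));
        }
    }

    // Hypothetical bolt doing the CPU-heavy feature extraction.
    public static class FeatureBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String record = input.getStringByField("record");
            collector.emit(new Values(record.length())); // toy "feature"
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("feature"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("records", new RawRecordSpout(), 1);
        builder.setBolt("features", new FeatureBolt(), 4).shuffleGrouping("records");

        // Local cluster for testing, as mentioned above.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("feature-pipeline", new Config(), builder.createTopology());
    }
}

[shuffleGrouping spreads records evenly across the four FeatureBolt instances, which is where the CPU-heavy extraction work gets parallelized.]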
x0x0 almost 12 years ago
you should check out http://0xdata.com/ ; it's built from the ground up on a custom dkv (distributed key-value store) to do in-memory ML. Reasons to check it out:

1 - it's open source: https://github.com/0xdata/h2o

2 - ingest data from hdfs, s3, csv

3 - I've built systems like what you're discussing twice; the ML algorithms are often easier to write than expected, while data management (moving data, sending updates, etc.), which initially seems easier, is much harder. 0xdata handles this for you.

4 - under active development

5 - it cleanly runs on your dev box with 1 or many nodes for development; deploying is as simple as uploading a jar to a cluster and putting a single file on each node naming the peers in the cluster

5a - see the scripts that walk you through doing this

disclosure: I work on it as of very recently =P
nihar almost 12 years ago
Have you looked at Oracle Coherence? It's pretty lightweight and has clustering features as well.
Comment #6000985 not loaded
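[Editor's note: for comparison, a minimal sketch of the Coherence option from Java. CacheFactory and NamedCache are Coherence's standard entry points; the cache name and stored values are placeholders, and cluster configuration is left to the usual Coherence config files.]

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceSketch {
    public static void main(String[] args) {
        // Joins (or starts) a Coherence cluster and obtains a distributed cache.
        NamedCache features = CacheFactory.getCache("features");

        // Store and retrieve a toy feature vector keyed by record id.
        features.put("record:42", new double[] {17.0, 5300.0});
        double[] f = (double[]) features.get("record:42");
        System.out.println(f[0] + ", " + f[1]);

        CacheFactory.shutdown();
    }
}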