The Cost of Scalability in Graph Processing

107 points by ms705 over 10 years ago

8 comments

scott_s over 10 years ago
The GraphChi paper from OSDI 2012 made a similar observation: http://select.cs.cmu.edu/publications/paperdir/osdi2012-kyrola-blelloch-guestrin.pdf

From the abstract: "In this work, we present GraphChi, a disk-based system for computing efficiently on graphs with billions of edges. By using a well-known method to break large graphs into small parts, and a novel parallel sliding windows method, GraphChi is able to execute several advanced data mining, graph mining, and machine learning algorithms on very large graphs, using just a single consumer-level computer. ... By repeating experiments reported for existing distributed systems, we show that, with only a fraction of the resources, GraphChi can solve the same problems in very reasonable time."

Section 7.2 compares GraphChi to distributed graph frameworks.
zackmorris over 10 years ago
I'm concerned that articles like this paint multiprocessing in a bad light. Yes, there are issues like Amdahl's law (http://en.wikipedia.org/wiki/Amdahl's_law), but many real-world tasks are "embarrassingly parallel" and it takes little effort to break them into segments that can be processed concurrently.

After skimming the article, I'm thinking the real bottleneck here is latency, since it mentions Hilbert curves. Currently networks are orders of magnitude slower than memory, but that won't always be the case. A big game changer is going to be content-addressable memory, because we won't have to worry about network topology as much. It will work more like BitTorrent and locally cache frequently used data as needed.

Going forward, I have to admit that I'm not hugely fond of big data schemes as they're currently conceived. There is way too much emphasis on using strange new databases and commodity hardware. I want just the opposite approach: low-level access to data with a language like Go or Rust, and new hardware with hundreds or thousands of cores on the same chip so we can get revolutionary performance (like with Bitcoin ASICs). Then if we want to double performance, we simply double the number of cores rather than hand-optimizing code, and that is going to be huge for productivity.
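For reference, Amdahl's law bounds the speedup of a task in which only a fraction p of the work can be parallelized across N workers:

    S(N) = 1 / ((1 - p) + p / N)

Even with p = 0.95 the speedup can never exceed 20x, however many cores you add, which is the usual argument against naive multiprocessing; the "embarrassingly parallel" case is simply p close to 1, where the bound effectively disappears.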
bane over 10 years ago
Lots of people make the mistake of thinking there are only two vectors you can go to improve performance, high or wide.

High - throw hardware at the problem, on a single machine

Wide - add more machines

There's a third direction you can go, I call it "going deep". Today's programs run on software stacks so high and so abstract that we're just now getting around to redeveloping (again, for like the 3rd or 4th time) software that performs about as well as software we had around in the *1990s and early 2000s*.

Going deep means stripping away this nonsense and getting down closer to the metal, using smart algorithms, planning and working through a problem, and seeing if you can size the solution to run on one machine as-is. Modern CPUs, memory and disk (especially SSDs) are unbelievably fast compared to what we had at the turn of the millennium, yet we treat them like they're spare capacity to soak up even lazier abstractions. We keep thinking that completing the task means successfully scaling out a complex network of compute nodes, but completing the task *actually* means processing the data and getting meaningful results in a reasonable amount of time.

This isn't really hard to do (but it can be tedious), and it doesn't mean writing system-level C or ASM code. Just see what you can do on a single medium-specc'd consumer machine *first*, then scale up or out if you *really* need to. It turns out a great many problems really don't need scalable compute clusters. And in fact, for the time you'd spend setting that up and building the coordinating code (which introduces yet more layers that soak up performance), you'd probably be better off just spending the same time making it work on a single machine.

Bonus: if your problem gets too big for a single machine (it happens), there might be trivial parallelism in the problem you can exploit, and now going wide means you'll probably outperform your original design anyway, and the coordination code is likely to be much simpler and less performance-degrading. Or you can go high and toss more machines at it and get more gains with zero planning or effort outside of copying your code and the data to the new machine and plugging it in.

Oh yeah, many of us, especially experienced people or those with lots of school time, are taught to overgeneralize our approaches. It turns out many big compute problems are just big one-off problems and don't need a generalized approach. Survey your data, plan around it, and then write your solution as a specialized approach just for the problem you have. It'll likely run much faster this way.

Some anecdotes:

- I wrote an NLP tool that, on a single spare desktop with no exotic hardware, was 30x faster than a distributed system of 6 high-end compute nodes that was doing a comparable task. That group eventually used my solution with a go-high approach and runs it on a big multi-core system with the fastest memory and SSDs they could procure, and it's about 5 times faster than my original code. My code was in Perl; the distributed system it competed against was C++. The difference was the algorithm I was using, and not overgeneralizing the problem. Because my code could complete their task in 12 hours instead of 2 weeks, it meant they could iterate every day. A 14:1 iteration opportunity made a huge difference in their workflow and within weeks they were further ahead than they had been after 2 years of sustained work. Later they ported my code to C++ and realized even further gains. They've never had to even think about distributed systems. As hardware gets faster, they simply copy the code and data over, realize the gains, and it performs faster than they can analyze the results.

Every vendor that's come in after that has been forced to demonstrate that their distributed solution is faster than the one they already have running in house. Nobody's been able to demonstrate a faster system to date. It has saved them literally tens of millions of dollars in hardware, facility and staffing costs over the last half-decade.

- Another group had a large graph they needed to conduct a specific kind of analysis on. They had a massive distributed system that handled the graph; it was about 4 petabytes in size. The analysis they wanted to do was an O(N^2) analysis: each node needed to be compared potentially against each other node. So they naively set up some code to do the task and had all kinds of exotic data stores and specialized indexes they were using against the code. Huge amounts of data were flying around their network trying to run this task, but it was slower than expected.

An analysis of the problem showed that if you segmented the data in some fairly simple ways, you could skip all the drama and do each slice of the task without much fuss on a single desktop. O(n^2) isn't terrible if your data is small. O(k+n^2) isn't much worse if you can find parallelism in your task and spread it out easily.

I had a 4-year-old Dell consumer-level desktop to use, so I wrote the code and ran the task. Using not much more than Perl and SQLite I was able to compute a large-ish slice of a few GB in a couple hours. Some analysis of my code showed I could actually perform the analysis on insert in the DB and that the data was small enough to fit into memory, so I set SQLite to :memory: and finished it in 30 minutes or so. That problem solved, the rest was pretty embarrassingly parallel, and in short order we had a dozen of these spare desktops occupied running the same code on different data slices and finishing the task two orders of magnitude faster than their previous approach. Some more coordinating code and the system was fully automated. A single budget machine was theoretically now capable of doing the entire task in 2 months of sustained compute time. A dozen budget machines finished it all in a week and a half. Their original estimate on their old distributed approach was 6-8 months with a warehouse full of machines, most of which would have been computing things that resulted in a bunch of nothing.

To my knowledge they still use a version of the original Perl code with SQLite running in memory without complaint. They could speed things up more with a better in-memory system and a quick code port, but why bother? It's completing the task faster than they can feed it data, as the data set is only growing a few GB a day. Easily enough for a single machine to handle.
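A minimal sketch of that pattern - an in-memory SQLite database with the per-pair computation done at insert time - assuming Perl with DBI/DBD::SQLite. The table layout, node list, and compare() function are made up for illustration; the comment doesn't show the real schema or analysis.

    use strict;
    use warnings;
    use DBI;

    # Toy stand-ins for the real node list and pairwise comparison.
    my @nodes = map { +{ id => $_, value => rand() } } 1 .. 1_000;
    sub compare { abs($_[0]{value} - $_[1]{value}) }

    # In-memory SQLite: no disk I/O, one transaction for the whole slice.
    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("CREATE TABLE pair_scores (a INTEGER, b INTEGER, score REAL)");

    my $ins = $dbh->prepare("INSERT INTO pair_scores (a, b, score) VALUES (?, ?, ?)");
    for my $u (@nodes) {
        for my $v (@nodes) {
            next if $u->{id} >= $v->{id};    # each unordered pair once
            # "Analysis on insert": the score is computed as the row goes in,
            # so there is no second pass over the data afterwards.
            $ins->execute($u->{id}, $v->{id}, compare($u, $v));
        }
    }
    $dbh->commit;

    # Pull out only what the downstream step actually needs.
    my $top = $dbh->selectall_arrayref(
        "SELECT a, b, score FROM pair_scores ORDER BY score DESC LIMIT 100");
    printf "%d vs %d: %.4f\n", @$_ for @$top;

Each data slice gets its own process and its own :memory: database, which is what makes the wider run embarrassingly parallel.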
- Another group was struggling with handling a large semantic graph and performing a specific kind of query on the graph while walking it. It was ~100 million entities, but they needed interactive-speed query returns. They had built some kind of distributed Titan cluster (obviously a premature optimization).

Solution: convert the graph to an adjacency matrix and stuff it in a PostgreSQL table, build some indexes, and rework the problem as a clever dynamically generated SQL query (again, Perl), and now they were seeing 0.01-second returns, fast enough for interactivity. Bonus: the dataset at 100M rows was tiny, only about 5 GB; with a maximum table size of 32 TB and disk space cheap, they were set for the conceivable future. Now administration was easy, performance could be trivially improved with an SSD and some RAM, and they could trivially scale to a point where dealing with Titan was far into their future.

Plus, there's a chance for PostgreSQL to start supporting proper scalability soon, putting that day even further off.
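Roughly the shape of that solution, as a sketch: an edge table in PostgreSQL plus a small Perl helper that generates a fixed-depth walk as a chain of self-joins. The database name, credentials, table and column names, and the walk_sql() helper are all hypothetical - the group's actual schema and query aren't shown in the comment.

    use strict;
    use warnings;
    use DBI;

    # Connection details are placeholders.
    my $dbh = DBI->connect("dbi:Pg:dbname=graph", "user", "secret",
                           { RaiseError => 1 });

    # One row per edge; ~100M entities is only a few GB on disk.
    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS edges (
            src  BIGINT NOT NULL,
            dst  BIGINT NOT NULL,
            kind TEXT   NOT NULL
        )
    });
    $dbh->do("CREATE INDEX IF NOT EXISTS edges_src_kind_idx ON edges (src, kind)");

    # Dynamically generate a depth-N walk as a chain of self-joins.
    sub walk_sql {
        my ($depth) = @_;
        my $sql = "SELECT e1.src AS start_node, e$depth.dst AS reached FROM edges e1";
        for my $i (2 .. $depth) {
            my $p = $i - 1;
            $sql .= " JOIN edges e$i ON e$i.src = e$p.dst";
        }
        $sql .= " WHERE e1.src = ? AND "
              . join(" AND ", map { "e$_.kind = ?" } 1 .. $depth);
        return $sql;
    }

    my $depth = 3;
    my $sth   = $dbh->prepare(walk_sql($depth));
    $sth->execute(42, ("related_to") x $depth);   # start node 42, one edge type
    while (my ($start, $reached) = $sth->fetchrow_array) {
        print "$start -> $reached\n";
    }

With the (src, kind) index, each join step is an index lookup, which is what makes sub-10ms interactive returns plausible at this data size.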
- Finally, an e-commerce company I worked with was building a dashboard reporting system that ran every night and took all of their sales data and generated various kinds of reports: by SKU, by certain number of days in the past, etc. It was taking 10 hours to run on a 4-machine cluster.

A dive into the code showed that they were storing the data in a deeply nested data structure for computation, and building and destroying that structure as the computation progressed was taking all the time. Furthermore, some metrics on the reports showed that the most expensive-to-compute reports were simply not being used, or were being viewed only once a quarter or once a year around the fiscal year. And of the cheap-to-compute reports, where millions of reports were being pre-computed, only a small percentage were actually being viewed.

The data structure was built on dictionaries pointing to other dictionaries and so on. A quick swap to arrays pointing to arrays (and some dictionary<->index conversion functions so we didn't blow up the internal logic) transformed the entire thing. Instead of 10 hours, it ran in about 30 minutes, on a single machine. Where memory had been running out and crashing the system, memory now never went above 20% utilization. It turns out allocating and deallocating RAM actually takes time, and switching to a smaller, simpler data structure makes things faster.
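A toy illustration of that swap in Perl, with made-up report dimensions (day x SKU x metric); the point is only that nested hashes pay for hashing and per-key allocation on every update, while integer-indexed arrays plus a one-time name-to-index map do not.

    use strict;
    use warnings;

    my @metrics = qw(units revenue returns);

    # One-time dictionary<->index conversion, so callers can keep using names.
    my %metric_idx = map { ( $metrics[$_] => $_ ) } 0 .. $#metrics;

    # Nested-hash version (the slow original, shown for contrast):
    #   $report{$day}{$sku}{$metric} += $amount;
    # Every new ($day, $sku) pair autovivifies another hash.

    # Array-of-arrays version: small integer indices, denser storage,
    # far less allocation and hashing per update.
    my @report;
    sub add_sale {
        my ($day, $sku, $metric, $amount) = @_;
        $report[$day][$sku][ $metric_idx{$metric} ] += $amount;
    }

    add_sale(17, 4242, 'units',   3);
    add_sale(17, 4242, 'revenue', 59.97);
    printf "day 17, SKU 4242 revenue: %.2f\n",
           $report[17][4242][ $metric_idx{revenue} ];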
We changed some of the cheap-to-compute reports from being pre-computed to being computed on demand, which further removed stuff that needed to run at night. And then the infrequent reports were put on a quarterly and yearly schedule so they only ran right before they were needed instead of every night. This improved performance even further, and as far as I know, 10 years later, even with huge increases in data volume, they never had to touch the code or change the ancient hardware it was running on.

It seems ridiculous sometimes, seeing these problems in retrospect, that racks in a data center, or entire data centers, were ever seriously considered as the way to make them solvable. A single machine's worth of hardware we have today is almost embarrassingly powerful. Here's a machine that for $1k can break 11 *TFLOPS* [1]. That's insane.

It also turns out that most of our problems are not compute speed - throwing more CPUs at a problem doesn't really improve things - but disk and memory are a problem. Why anybody would think shuttling data over a network to other nodes, where we then exacerbate every I/O problem, would improve things is beyond me. Getting data across a network and into a CPU that's sitting idle 99% of the time is not going to improve your performance.

Analyze your problem, walk through it, figure out where the bottlenecks are and fix those. It's likely you won't have to scale to many machines for most problems.

I'm almost thinking of coming up with a statement - Bane's rule: you don't understand a distributed computing problem until you can get it to fit on a single machine first.

1 - http://www.freezepage.com/1420850340WGSMHXRBLE
taeric over 10 years ago
This raises so many questions that I can't even really comprehend them all, and that worries me. Please, if there are well-formed responses or dialogues that form based on this, somebody make sure to link them back. Very, very interesting read.
xtacy over 10 years ago
While I understand the sentiment behind this post, I think it misses one crucial point: it costs time, effort, and very smart people to build the "Bugatti"-like system they describe, instead of the current systems (which are more like "Toyotas", to name one).

I haven't seen the paper yet, so I can't be sure, but I think the numbers might ignore many factors. First, you need some kind of abstract, exchangeable storage format (e.g., protobufs) to work with the data in many languages. Second, there's the file system and all its intricacies. Third, it's unlikely that any compute environment will be dedicated to only one application (there's scheduling, resource management, and all that, which means there are hidden costs to doing network IO due to contention, protocol quirks, etc.). And finally, any realistic application is more than just "solving" the problem in the fastest way possible: requirements change all the time, new features will be added, the code needs to be readable, understandable, maintainable, etc.

It's possible to do all the above AND be super efficient, but it requires a tremendous level of understanding of a system at all levels, which can be quite challenging, and frankly, with business requirements, it's probably not worth the time. If there's a framework that gives you abstraction but compiles to the fastest possible specific implementation AND makes a programmer productive, I would love to read up more!
erdevs over 10 years ago
Perhaps another takeaway from this article is that it'd be nice if more research papers benchmarked with vastly larger data sets.

Many researchers would love to do just that, of course. However, as many researchers will lament, it's not always easy to get 10+-figure-node and 11+-figure-edge data sets appropriate to the space being explored.

The best research benchmarks I see do compare to a single-node (often multithreaded or multiprocess) implementation. And they also show benchmark results on datasets of increasing sizes.

I agree with the authors that those sorts of papers aren't common enough, though. And we should strive to do better. Moreover, I agree that in practice in industry, many people over-optimize for horizontal scalability early, and/or do not realize potential savings and benefits by doing vertical optimizations after gaining initial scale.
chimtim over 10 years ago
If the dataset and/or computation fits on your laptop, why would you use a cluster framework?

If you want to use multiple cores, why not use the pthread library instead of Spark, GraphX, etc.? The authors never show a pthread comparison.

The article shows just one algorithm on toy datasets. For certain algorithms, such as stochastic gradient descent, multiple cores can process data in parallel and go through the data very quickly. Again, if everything fits in memory, doing gradient descent on the entire dataset will be much faster and give a better-quality result. This fact is pretty well known to end users, i.e., folks who actually try to solve big-data problems.

However, most papers use small datasets like Twitter or MNIST because their convergence behavior is well understood (rather than to demonstrate scaling).
wrexsoule over 10 years ago
> In many cases, you'd be better off running the same computation on your laptop.

Stopped reading there. If you're better off running the same computation on your laptop, you're just not dealing with real big data, and so you don't need any distributed systems. Simple as that.