Performance Improvements Using Judy Arrays

183 points by drewolson · about 12 years ago

17 comments

jemfinch · about 12 years ago
The article's credibility is significantly reduced by referring to Judy trie lookups as O(log n) operations. They're actually O(log w) operations: the number of node hops is bounded by the array's word size, not by the number of elements in the Judy array. A million elements in a Judy32 array will cause at most 7 node hops, not the 20 you'd expect from a binary tree.
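A rough sketch of the arithmetic behind that distinction (here b, the bits of key consumed per trie level, is an illustrative parameter, not Judy's actual adaptive node layout):

    \[
      \text{trie depth} \;\le\; \left\lceil \frac{w}{b} \right\rceil
      \qquad \text{vs.} \qquad
      \text{balanced BST depth} \;\approx\; \log_2 n
    \]
    % With w = 32 and n = 10^6: log2(10^6) ~ 20 comparisons for a
    % balanced binary tree, while the trie's depth is capped by the
    % key width w regardless of how large n grows.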
lobster_johnson · about 12 years ago
While this is very interesting, the real problem here is not the Linguist code itself, but the fact that it's crammed into the web app.

A web app serving a site as complex as Github's should really contain only the parts required to serve web pages. Something like language classification is clearly better designed as a standalone service.

Aside from working around GC overhead, composing a big app from many smaller apps has many advantages. You force yourself to encapsulate complexity behind strictly focused, network-friendly APIs (we use REST ourselves), which changes the way you have to think about programs. Hiding the implementation behind an API also allows you to swap out the implementation without the client ever knowing. Since the client and server are separated, you get fine-grained control over how many resources you are willing to spend on which components. And so on.

Having done monolithic Rails apps for many years, it's a model I am not going back to. Today, our web apps contain only UI logic. No database, no models, no data logic, just UI all the way. Everything else is handled by specialized components.
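As a minimal sketch of the kind of split being described (Sinatra, the route name, and the stub classifier are illustrative choices, not what Github actually runs):

    # classify_service.rb -- a hypothetical standalone classification
    # service; the web app only ever speaks HTTP/JSON to it.
    require 'sinatra'
    require 'json'

    # Stub classifier: replace with the real tokenizer + model. The
    # client never sees this implementation, so it can be swapped out.
    def classify(source)
      source.include?('def ') ? 'Ruby' : 'Unknown'
    end

    post '/classify' do
      content_type :json
      source = request.body.read
      { language: classify(source) }.to_json
    end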
jekub · about 12 years ago
There are some interesting things in this post, but the main problem is not Judy versus something else, or memory, or complexity. The main problem I see here is using the wrong tool.

Replace the token matcher with a simple classifier (a maxent will work very well here) using character n-gram features fed through a hash kernel, and you get a very accurate, fast, low-memory system.

I built one two years ago that accurately classified a bit over 100 different languages with only 64k features in the end (so requiring only 8×64 KB of memory), and this was without using file extensions, as they weren't available in our case.

Before any hard optimization, first check the method used, then the algorithm; anything else should come after.
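A rough sketch of the feature extraction being described (the 64k table size comes from the comment; the CRC32 hash and trigram length are illustrative):

    require 'zlib'

    NUM_FEATURES = 64 * 1024  # 64k hashed feature slots, as in the comment

    # Hash character n-grams into a fixed-size feature space (the "hash
    # kernel" trick: collisions are tolerated, memory stays bounded).
    def ngram_features(text, n: 3)
      counts = Hash.new(0)
      (0..text.length - n).each do |i|
        gram = text[i, n]
        counts[Zlib.crc32(gram) % NUM_FEATURES] += 1
      end
      counts
    end

    # A maxent model would then score each language as a dot product of
    # these counts against a per-language weight vector of NUM_FEATURES
    # floats -- hence the fixed 8 bytes per feature per language.
    features = ngram_features("def hello; puts 'hi'; end")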
prezjordan · about 12 years ago
Some previous discussion on Judy arrays [0]. Apparently they are patented!

[0]: https://news.ycombinator.com/item?id=5043667
habosa · about 12 years ago
Wow, this is an incredibly interesting and accessible article on performance tuning. I have two questions:

1) Is there an easy tutorial somewhere on calling out to native code from Ruby?

2) Could the team at Github possibly give a little more detail on how they produced all of those pretty benchmarking graphs? How did you get all of the info on memory usage and CPU activity? (I assume it wasn't just time {command} > file.txt.)
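On question 1, one well-trodden route is the ffi gem (a minimal sketch; note this is an alternative to the hand-written C extension approach the post describes):

    require 'ffi'  # gem install ffi

    # Bind a function straight out of the C standard library.
    module LibC
      extend FFI::Library
      ffi_lib FFI::Library::LIBC
      attach_function :strlen, [:string], :size_t
    end

    puts LibC.strlen("hello")  # => 5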
exabrial · about 12 years ago
I would have liked to see the Judy array implementation in pure Ruby, so we could compare apples to apples. I'm not trying to troll... but basically they solved their problem by:

* Avoiding Ruby language features

* Rewriting it in a different language

This is why I lean towards static languages like Go, Scala, and Java.
jrochkind1 · about 12 years ago
This is great. Someone should change the title so it's descriptive of what's going on, although I have no idea what that title should be, heh.

This post is about Github moving some code from Ruby to C for performance, about MRI's GC, and about the Judy trie-like data structure and how it compares with hash implementations in both performance and memory consumption.
pjmlp · about 12 years ago
I wonder if something like Crystal could have helped.

https://github.com/manastech/crystal
thezilch · about 12 years ago
For starters, the blog post was informative and should get some creative juices flowing for applications in others' stacks -- it piqued my interest in hacking on core utilities. However...

The graphs never show a 30K-token mark, which is what the heap-allocation counter showed for their language classifier.

It's not clear to me that much RSS was saved. Maybe 20 MB? So the next question is how many classifiers are running where such a "hit" actually matters. Again, there is no 30K mark on the runtime graphs, but let's assume the trend to the graphs' left is linear. It looks like we're saving half a second and removing most of the jitter, but it's not made clear how removing the GC churn affects processing outside the classifier. I just can't imagine how a couple of these classifiers couldn't keep up with the rate at which files are pushed to Github -- the classification improvements are neither low-hanging fruit nor part of the big-O cost of Github's stack. The process seems perfectly able to run asynchronously at push time, and un-deferred, if not yet run, on page view. Unless they're seeing more than 1 million pageviews/sec (per classifier; just run more? one 20 MB service per httpd server?). I can't really imagine hitting more than 10x their token count (300K tokens), which is still only ~50 MB; the RSS graph, again, has a weird scale that suggests they might grow their tokens by 67x.
jerrysievert · about 12 years ago
Judy arrays are also faster in node.js: http://legitimatesounding.com/blog/Faster_sometimes_Associative_Arrays_with_Node_js.html

The issue is with the patents: has anyone been contacted yet about the usage?
nthitz · about 12 years ago
Sometimes Mods change the title to be the original article title. And sometimes they change it from the original article title to something else!
jongraehl · about 12 years ago
1. This could be done offline, or cached.

2. I'm unconvinced that they can't do better with the right hash table (which would be thousands of lines less code). But since they're using a library, I'll give them a pass. Still, the absence of even a benchmark against a quick hash-table version makes all the claims that "Judy arrays are better for our purpose" suspect.
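The kind of baseline being asked for is cheap to produce; a minimal sketch using Ruby's built-in Hash (the key count and key shape are made up for illustration):

    require 'benchmark'

    tokens = Array.new(500_000) { |i| "token_#{i}" }
    table  = tokens.each_with_object({}) { |t, h| h[t] = h.size }

    Benchmark.bm(12) do |bm|
      bm.report('hash lookup') do
        tokens.each { |t| table[t] }
      end
    end

    # Running the same keys through the Judy-backed version on the same
    # machine would ground the "better for our purpose" claim.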
ww520 · about 12 years ago
Isn't the Judy array patented? What's the legal status of using it?
kristianp · about 12 years ago
So each language is stored with a prefix in the Judy array. This means that to identify the language of a token, you have to loop over all languages, prefixing the token with each one and looking it up, keeping a count of matches per language. Does that sound correct?

I wouldn't have used the prefix approach; instead I'd store each token once in the Judy array, and use the stored data to indicate which languages match the token.
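A sketch of the two layouts, using a plain Hash as a stand-in for the Judy array (the "\0" separator and the toy data are illustrative, not what Linguist actually stores):

    # Layout 1 (as described above): one key per (language, token) pair,
    # so classifying a token means one probe per candidate language.
    prefixed = {
      "ruby\0def"   => 1,
      "python\0def" => 1,
    }
    langs   = %w[ruby python]
    matches = langs.select { |l| prefixed.key?("ruby\0def".sub("ruby", l)) }

    # Layout 2 (the suggestion): store each token once, with the set of
    # matching languages as the value -- a single probe per token.
    by_token = { "def" => %w[ruby python] }
    matches2 = by_token.fetch("def", [])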
alexdowad · about 12 years ago
This is awesome!!! I remember looking for an object-allocation profiling tool for Ruby 1.9+ some time ago and not finding one (http://stackoverflow.com/questions/12204012/allocation-tracer-which-works-with-ruby-1-9). When will Github's --debug-objects option make it into MRI???
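For what it's worth, allocation tracing did later land in MRI: Ruby 2.1 shipped it in the objspace extension. A minimal sketch:

    require 'objspace'

    ObjectSpace.trace_object_allocations_start
    obj = "allocated here"
    ObjectSpace.trace_object_allocations_stop

    puts ObjectSpace.allocation_sourcefile(obj)  # file where obj was allocated
    puts ObjectSpace.allocation_sourceline(obj)  # line where obj was allocated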
ikeepforgetting · about 12 years ago
Somebody finally found a use for them!
revelation · about 12 years ago
Something here doesn't sit right with me. Please don't launch into a lecture on algorithmic complexity and cache lines if you are running Ruby on Rails.