The article's credibility is significantly reduced by referring to Judy Trie lookups as O(log n) operations. They're actually O(log w) operations: the number of operations is bounded by the array's <i>word size</i>, not the size of the Judy array. A million elements in a Judy32 array will cause at most 7 node hops, not the 20 you'd expect from a binary tree.
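To put numbers on that claim (a back-of-envelope sketch; the per-level fan-out of a real Judy trie is an implementation detail not reproduced here):

```ruby
# A balanced binary tree over n keys needs ~log2(n) node visits, which
# grows with n; a trie's depth is bounded by the key width, not by n.
n = 1_000_000
binary_tree_hops = Math.log2(n).ceil  # grows as n grows
key_width_bits   = 32                 # the Judy32 bound is fixed by this, not by n
puts binary_tree_hops                 # => 20
```

Doubling n adds another hop to the binary tree but changes nothing about the trie's bound.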
While this is very interesting, the real problem here is not the Linguist stuff, but the fact that it's crammed into the web app.<p>A web app serving a site as complex as Github's should really contain only the parts required to serve web pages. A service like language classification is clearly better designed as a standalone service.<p>Aside from working around GC overhead, composing a big app from many smaller apps has many advantages. You force yourself to encapsulate complexity behind strictly focused, network-friendly APIs (we use REST ourselves), which changes the way you have to think about programs. Hiding the implementation behind an API also allows you to swap out the implementation without the client ever knowing. Since the client and server are separated, you have fine-grained control over how many resources you are willing to spend on which components. And so on.<p>Having done monolithic Rails apps for many years, it's a model I am never going back to. Today, our web apps contain only UI logic. No database, no models, no data logic, just UI all the way. Everything else is handled by specialized components.
There are some interesting things in this post, but the main problem is not Judy versus something else, nor memory or complexity. The main problem I see here is using the wrong tool.<p>Replace the token matcher with a simple classifier (a maxent will work very well here) using character n-gram features through a hash kernel and you get a very accurate, fast, and low-memory system.<p>I built one two years ago that accurately classified a bit over 100 different languages with only 64k features in the end (so requiring only 8*64KB of memory). And this was without using file extensions, as they weren't available in our case.<p>Before any hard optimization, first check the method used, then the algorithm; anything else should come after.
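As a rough illustration of the hash-kernel idea (the n-gram size, bucket count, and hash function here are my own assumptions, with 2**16 buckets chosen to mirror the ~64k-feature figure above):

```ruby
require "zlib"

# Map a text to a sparse feature vector by hashing character n-grams
# into a fixed number of buckets (the "hash kernel"). A maxent model
# would then be trained on these bucket counts.
def hashed_ngram_features(text, n: 3, buckets: 2**16)
  vec = Hash.new(0.0)
  (0..text.length - n).each do |i|
    vec[Zlib.crc32(text[i, n]) % buckets] += 1.0
  end
  vec
end
```

Collisions between n-grams are simply tolerated; with a fixed bucket count the model stays small no matter how many distinct n-grams the training data contains.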
Some previous discussion on Judy Arrays [0]. Apparently they are patented!<p>[0]: <a href="https://news.ycombinator.com/item?id=5043667" rel="nofollow">https://news.ycombinator.com/item?id=5043667</a>
Wow this is an incredibly interesting and accessible article on performance tuning. I have two questions:<p>1) Is there an easy tutorial somewhere on calling out to native code from Ruby?<p>2) Could the team at Github possibly give a little more detail on what you did to get all of those pretty benchmarking graphs? How did you get all of the info on memory usage and CPU activity? (I assume it wasn't just time {command} > file.txt)
I would have liked to have seen the Judy array implementation in pure Ruby, so we could compare apples to apples. I'm not trying to troll... but basically they solved their problem by:<p>* Avoiding Ruby language features<p>* Rewriting it in a different language<p>This is why I lean towards static languages like Go, Scala, and Java.
This is great. Someone should change the title so it's descriptive of what's going on, although I have no idea what that title should be, heh.<p>This post is about Github moving some code from Ruby to C for performance, MRI's GC, and the Judy trie-like data structure and how it compares with hash implementations in both performance and memory consumption.
I wonder if something like Crystal could have helped.<p><a href="https://github.com/manastech/crystal" rel="nofollow">https://github.com/manastech/crystal</a>
For starters, the blog post was informative and should get some creative juices flowing for applications in others' stacks -- it piqued my interest in hacking core utilities. However...<p>The graphs never show a 30K-token mark, which is what the heap-allocation counter showed for their [programming language] classifier.<p>It's not clear to me that much RSS was saved. Maybe 20MB? So, the next question is how many classifiers are running, where such a "hit" actually matters. Again, there is no 30K mark on the runtime graphs, but let's assume the generation to the graphs' left is linear. It looks like we're saving a half-second and removing most of the jitter, but it's not made clear how the removal of GC churn has any effect on processing outside of the classifier. I just can't imagine how a couple of these classifiers can't keep up with the push rate of files to Github -- the classification improvements are neither low-hanging fruit nor part of the big-O of Github's stack. The process seems very able to run asynchronously at push time and un-deferred, if unrun, on page view. Unless they're seeing more than 1 million pageviews/sec (per classifier; just run more? one 20MB service per httpd server?), I can't really imagine hitting more than 10x their tokens (300K tokens), which is still only ~50MB; the RSS graph, again, has a weird scale to suggest they might grow their tokens by 67x.
Judy arrays are also faster in node.js: <a href="http://legitimatesounding.com/blog/Faster_sometimes_Associative_Arrays_with_Node_js.html" rel="nofollow">http://legitimatesounding.com/blog/Faster_sometimes_Associat...</a><p>The issue is the patents -- has anyone been contacted yet about their usage?
1. this could be done offline, or cached<p>2. unconvinced that they can't do better w/ the right hash table (which would be thousands of lines less code). but since they're using a library, i'll give them a pass. still, not having even a benchmark vs. a quick hash table version makes all the claims that "judy arrays are better for our purpose" suspect.
So each language's tokens are stored with a language prefix in the Judy array. This means that to identify the language of a token, you have to loop over all languages, prefixing the token with each one and looking it up, keeping a count of matches per language. Does that sound correct?<p>I wouldn't have used the prefix approach; instead I'd store each token once in the Judy array and use the stored value to indicate which languages match the token.
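A minimal sketch of that alternative, using a plain Hash in place of the Judy array (the language names and bit assignments are hypothetical):

```ruby
# One entry per token; the value is a bitmask of the languages whose
# samples contain that token.
LANG_BITS = { "Ruby" => 1 << 0, "Python" => 1 << 1, "C" => 1 << 2 }.freeze

index = Hash.new(0)

add_token = ->(token, lang) { index[token] |= LANG_BITS.fetch(lang) }
langs_for = ->(token) { LANG_BITS.select { |_, bit| index[token] & bit != 0 }.keys }

add_token.call("def", "Ruby")
add_token.call("def", "Python")
add_token.call("struct", "C")
```

That's one lookup per token instead of one per (language, token) pair; the per-language match counts are then incremented for each bit set in the returned mask.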
This is awesome!!! I remember looking for an object allocation profiling tool for Ruby 1.9+ some time ago, and not finding one (<a href="http://stackoverflow.com/questions/12204012/allocation-tracer-which-works-with-ruby-1-9" rel="nofollow">http://stackoverflow.com/questions/12204012/allocation-trace...</a>). When will Github's --debug-objects option make it into MRI???
Something here doesn't sit right with me. Please don't launch into a lecture on algorithmic complexity and cache lines if you are running on Ruby on Rails.