I am using Julia to casually participate in a bioinformatics contest (I have no bioinformatics background, but I am pretty darned good at coding and biochemistry). The technique I'm using involves blasting a subset of the genome with neural nets, and I'll be damned if Julia isn't fast. I haven't benchmarked it against Python, but the program runs a hybrid swarm optimization/gradient descent technique to find 20:5:1 neural nets; it can test and optimize a swarm of 100 neural nets over 50 iterations in about 2 seconds on my rather slow laptop (2.3 GHz Core i3). A friend of mine is doing the same contest (using Python), and his eyes popped out when I told him how efficient Julia was.
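The original Julia code isn't shown, so here is a minimal sketch of the hybrid idea in Python/NumPy: a particle swarm explores the 111-dimensional weight space of a 20:5:1 net, and on each iteration every particle is also nudged downhill by a gradient step (a numerical gradient here, standing in for whatever gradient the OP actually computes). All hyperparameters and helper names are illustrative, not taken from the OP's program.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 20, 5, 1                          # the 20:5:1 architecture
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT     # 111 weights total

def forward(w, X):
    """Unpack the flat weight vector and run the 20:5:1 net."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def loss(w, X, y):
    return np.mean((forward(w, X).ravel() - y) ** 2)

def num_grad(w, X, y, eps=1e-4):
    """Central-difference gradient (a placeholder for real backprop)."""
    g = np.zeros_like(w)
    for j in range(len(w)):
        d = np.zeros_like(w); d[j] = eps
        g[j] = (loss(w + d, X, y) - loss(w - d, X, y)) / (2 * eps)
    return g

def hybrid_pso(X, y, n_particles=100, n_iter=50, lr=0.05):
    pos = rng.normal(0, 0.5, (n_particles, N_W))       # each particle = one net
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([loss(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        # Standard PSO velocity update ...
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.4 * r1 * (pbest - pos) + 1.4 * r2 * (gbest - pos)
        pos += vel
        # ... plus the hybrid twist: a gradient-descent nudge per particle.
        pos -= lr * np.array([num_grad(p, X, y) for p in pos])
        f = np.array([loss(p, X, y) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

With analytic gradients instead of finite differences, each iteration is dominated by a handful of small matrix multiplies per particle, which is why a 100-particle, 50-iteration run can finish in seconds.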
I'm pretty excited about Julia and about integrating it into existing infrastructure [0]. The build process, though: even the "release" seems to insist on running git to pull down code/data from the network. If Julia devs/release managers are reading, what are the odds of getting standalone distributions?

[0] https://news.ycombinator.com/item?id=7173137
Interesting. OP (if you're around): I noticed in the confusion matrix that everything was classified into the middle classes (5, 6, 7). That makes sense, because the 3s, 4s, and 8s are rare and a "true 8" is still most likely to get its highest probability on the 7 class, since there are far more 7s in the data. Did you check how well-calibrated the predicted probabilities were, or consider sampling from them rather than always classifying to the highest one, to see where that led?
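Sampling from the predicted distribution instead of taking the argmax is cheap to try. A sketch in Python/NumPy, with made-up probabilities for classes 3 through 8 (the real values would come from the OP's classifier):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example class probabilities; each row sums to 1.
classes = np.array([3, 4, 5, 6, 7, 8])
probs = np.array([
    [0.02, 0.08, 0.40, 0.35, 0.12, 0.03],
    [0.01, 0.04, 0.15, 0.45, 0.30, 0.05],
    [0.00, 0.02, 0.10, 0.30, 0.40, 0.18],  # a likely "true 8"
])

# Argmax always lands in the dominant middle classes ...
argmax_pred = classes[probs.argmax(axis=1)]           # -> [5, 6, 7]

# ... while sampling occasionally emits the rare classes in proportion
# to their predicted probability (e.g. an 8 about 18% of the time
# for the last example).
sampled_pred = np.array([rng.choice(classes, p=p) for p in probs])
```

Sampling will hurt raw accuracy, but it makes the marginal distribution of predictions match the predicted probabilities, so the rare 3s, 4s, and 8s at least show up in the output.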