I think the top-level takeaway here is not that Julia is a great language (although it is), or that everyone should use it for all the things (although that's not the worst idea), but that its design has hit on <i>something</i> that represents a major step forward in our ability to achieve code reuse. It is actually the case in Julia that you can take generic algorithms written by one person and custom types written by other people and just use them together, efficiently and effectively. This substantially raises the table stakes for code reuse in programming languages. Language designers should not copy all of Julia's features, but they should at the very least understand why this works so well, and be able to achieve this level of code reuse in future designs.
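A minimal sketch of the kind of composition being described: a generic algorithm written against no particular type, applied to a user-defined type it has never seen. The `Meters` type and `mysum_of_squares` function are hypothetical names for illustration (and the toy type doesn't model units faithfully).

```julia
# A "generic algorithm" written with no knowledge of any particular type:
# it only assumes its elements support * and +.
mysum_of_squares(xs) = sum(x * x for x in xs)

# A custom type written independently (toy example; real unit libraries
# like Unitful.jl track dimensions properly).
struct Meters
    value::Float64
end
Base.:*(a::Meters, b::Meters) = Meters(a.value * b.value)
Base.:+(a::Meters, b::Meters) = Meters(a.value + b.value)

# The generic code and the custom type compose with no glue code:
mysum_of_squares([Meters(1.0), Meters(2.0)])  # Meters(5.0)
```

Because dispatch resolves `*` and `+` on the concrete argument types, the generic function specializes to efficient code for `Meters` without either author knowing about the other.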
I really like Julia a lot and actually used it in a work project a few years back.<p>However, there's the debugger issue. There are several debugger alternatives, and it's tough to figure out which one is canonical (or whether any of them is). The one that seems to be used most at this point is Debugger.jl. However, it's exceedingly slow if you're debugging sizeable operations (matrix multiplies, for example) - I'm talking about hitting 'n' to step to the next line and then waiting several minutes for it to get there. There's also Rebugger.jl, MagneticReadHead.jl (IIRC) and Infiltrator.jl, among others. I finally found that Infiltrator.jl was a lot faster for the machine learning program I was trying to debug, but it's rather limited in features (the only way to set breakpoints, it seems, is by editing your source, for example).<p>And this isn't the only case where there are multiple packages for achieving some task and you're not quite sure which one is the most usable. I think what the Julia community needs is maybe some kind of rating system for packages, so you can see which ones people actually find usable.
There are parts of Julia I really like, but it has some problems.<p>* Multiple dispatch is an odd design pattern that seems to overcomplicate things. I know there are people who love it and claim it's better, but after working with it for some time I just want a struct with methods. It's much easier to reason about.<p>* The packaging is cumbersome. I understand the reasoning behind it, but in practice it's just more of a pain than denoting functions as public one way or another.<p>* The tooling is poor. I work with Go for my day job and its toolchain is an absolute pleasure. The Julia toolchain isn't in the same arena.<p>* The JIT is slowwww to boot. I was amazed the first time I ran a Julia program by how slow it was to start. You could practically compile it faster.<p>* Editor support has never been great.<p>* As others have mentioned, type checks don't go deep enough.<p>I think it has some neat ideas, and in certain scientific arenas it will be useful, but IMO they need to focus a bit more on making it a better general purpose language.
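For readers unfamiliar with the pattern being criticized above, here is roughly what it looks like: instead of a struct owning its methods (as in Go or Python classes), data and behavior are separate, and a single generic function holds one method per type. Names here (`area`, `Circle`, `Square`) are hypothetical examples, not from the comment.

```julia
# Data is defined separately from behavior.
struct Circle
    r::Float64
end
struct Square
    s::Float64
end

# One generic function, with a method per argument type; the runtime
# picks the method from the concrete types of the arguments.
area(c::Circle) = π * c.r^2
area(sq::Square) = sq.s^2

area(Square(3.0))  # 9.0
```

Whether this separation is clearer than `circle.area()` is exactly the disagreement in this thread: there is no single place to look for "everything `Circle` can do."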
I've been using Julia along with Python and PyTorch - not yet for machine learning, until Flux is more mature, but for NLP scripts - and I have to say that I'm starting to like it. Multiple dispatch, linear algebra and NumPy-style arrays built in, a dynamic language but with optional types, user-defined types, etc.
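A small sketch of the "NumPy built in" and "optional types" points: array and linear algebra operations need only the standard library, and type annotations are optional on otherwise dynamic code. The `normalize_rows` function is a hypothetical example.

```julia
using LinearAlgebra  # standard library, no package install needed

# NumPy-style array operations are part of the language itself:
A = [1.0 2.0; 3.0 4.0]   # 2x2 matrix literal
v = [1.0, 1.0]
A * v                    # matrix-vector product: [3.0, 7.0]
dot(v, v)                # 2.0

# Optional type annotations on an otherwise dynamic function:
normalize_rows(M::Matrix{Float64}) = M ./ sum(M, dims=2)
```

The annotation on `normalize_rows` restricts dispatch to `Matrix{Float64}`; leaving it off would make the same function fully generic.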
I hope Julia will become more popular in bioinformatics. Personally, I have high hopes for BioJulia[1][2][3] and the amazing AI framework FluxML[4][5] + Turing.jl[6][7]. Apart from the speed, they offer some interesting concepts too - I recommend checking them out.<p>[1] <a href="https://biojulia.net/" rel="nofollow">https://biojulia.net/</a><p>[2] <a href="https://github.com/BioJulia" rel="nofollow">https://github.com/BioJulia</a><p>[3] <a href="https://github.com/BioJulia" rel="nofollow">https://github.com/BioJulia</a><p>[4] <a href="https://fluxml.ai/" rel="nofollow">https://fluxml.ai/</a><p>[5] <a href="https://github.com/FluxML/" rel="nofollow">https://github.com/FluxML/</a><p>[6] <a href="https://turing.ml/dev/" rel="nofollow">https://turing.ml/dev/</a><p>[7] <a href="https://github.com/TuringLang" rel="nofollow">https://github.com/TuringLang</a>
Julia is great. It’s significantly simpler than Python while also being much more expressive. It’s too bad the typing is only for dispatch, but hopefully someone will write a typechecker someday. I’ve found it surprisingly refreshing to not have to think about classes and just add functions on objects wherever I want. Some languages solve this with monkey patching (which is bad), others like Scala with an extension class (reasonable, but you still don’t get access to private properties), but the Julia approach is cleaner.<p>I wouldn’t use Julia for a non-scientific computing app as I don’t think it’s suitable, but for anything data science related, it’s great! And with the Python interop, I don’t really think there’s any reason <i>not</i> to use Julia for your next data science project. I suspect that over the next 5 years Python will no longer be used for these applications at all.
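The "add functions on objects wherever I want" point can be made concrete: you can define new methods for types you don't own (here Base's `Rational`) without monkey patching the type itself. `describe` is a hypothetical function name for illustration.

```julia
# A generic fallback, plus a specialized method for a type we don't own
# (Base's Rational). The type's own definition is never touched.
describe(x) = "some value: $x"
describe(x::Rational) = "ratio $(numerator(x))/$(denominator(x))"

describe(3//4)   # "ratio 3/4"
describe(1.5)    # "some value: 1.5"
```

Unlike monkey patching, this adds a method to <i>our</i> function rather than mutating `Rational`; and unlike Scala-style extension classes, the new method participates in ordinary dispatch.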
Julia is a language I really wanted to like, and to a certain extent I do. However, after spending some time working with it and hanging out on the Discourse channels (they certainly have a very friendly and open community, which I think is a big plus), I've come to the tentative conclusion that its application domains are going to be more limited than I had hoped.<p>This article hits on some of the issues that the community tends to see as advantages but that I think will prove limiting in the long run.<p>> Missing features like:<p>> Weak conventions about namespace pollution<p>> Never got around to making it easy to use local modules, outside of packages<p>> A type system that can’t be used to check correctness<p>These are some of my biggest gripes about Julia, especially the last two. To these I would add:<p>* Lack of support for formally defined interfaces.<p>* Lack of support for implementation inheritance.<p>Together with Julia's many strengths, I think these design choices and the community philosophy lead to a language that is very good for small-scale and experimental work but will have major issues scaling to very complex systems development projects, and will be ill-suited to mission-critical applications.<p>In short, I think Julia may be a great language for prototyping an object detection algorithm, but I wouldn't want to use it to develop the control system for a self-driving car.<p>Unfortunately this means that Julia probably isn't really going to solve the "two-language problem", because in most cases you're still going to need to rewrite your prototypes in a different language, just as you would previously in going from, for example, a Matlab prototype to a C++ system in production.
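The "lack of formally defined interfaces" complaint refers to the fact that Julia's interfaces are informal conventions. For example, making a type iterable means implementing `Base.iterate` (and, for `collect`, `Base.length`), but nothing in the language declares or checks this contract; you find out at call time whether you got it right. A minimal sketch with a hypothetical `Countdown` type:

```julia
# An informal-interface example: iteration. There is no `interface` or
# `trait` declaration to implement - just a convention about which
# methods to define.
struct Countdown
    from::Int
end

# The iteration protocol: return (item, next_state), or nothing when done.
Base.iterate(c::Countdown, state=c.from) =
    state < 1 ? nothing : (state, state - 1)

# Needed by collect() and friends; forgetting it is a runtime error,
# not a compile-time one.
Base.length(c::Countdown) = c.from

collect(Countdown(3))   # [3, 2, 1]
```

If you omit `Base.length`, the code still loads fine and only fails when `collect` is called - which is exactly the kind of late failure the comment is worried about for large systems.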
Julia is by far my favorite language to write code in. It is extremely expressive while also being easy to read. Most design decisions are spot on. For example, the language has syntactic sugar, but not too much. Everything in the base library makes sense and seems to be there for a purpose.<p>Other niceties are the meta-programming capabilities, which allow for things like inspecting the LLVM code, and printing a variable name `x` plus its value by only typing `@show x`. Then there is the fact that function definitions can actually look like how you would write a math function! (That is, `f(x) = 2x` is a valid function, as is `f(x) = 2π`.)<p>However, there is one thing I do not like at all: the loading time of packages. When starting Julia and running `@time using DataFrames`, it takes about 38 seconds when recompiling some stale cache. Even if all caches are good, the load times for some common packages still add up to 1.1 + 4.5 + 1.1 seconds according to `@time using Test; @time using CSV; @time using Dates`. Therefore, nowadays I prefer to use R. For most of my use cases R outperforms Julia by a factor of 10.
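The two niceties mentioned above - math-style definitions with implicit multiplication, and `@show` - look like this in practice (a sketch; `f` and `g` are arbitrary example names):

```julia
# Math-style one-line function definitions; a numeric literal next to a
# variable means multiplication:
f(x) = 2x          # equivalent to f(x) = 2 * x
g(x) = 2π * x      # π is available by default

# @show prints the expression and its value, then returns the value:
x = f(3)
@show x            # prints: x = 6
```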
I was never able to get Julia to do what I want. If I were a data scientist who developed and maintained large libraries, then it would probably be great, but I'm not. I just want to quickly visualize and modify data, or maybe see how one model compares to another. It's much more difficult to do simple things like that in Julia than in Octave/Matlab.
Interesting post and excellent discussion. I have the following question for all the Julia and/or Python experts here. Which strategy for developing a cloud-native {science and engineering}-focused platform would be better, in your opinion, and why:<p>A) Develop an MVP and then the production platform in Python, spending the saved time and effort (due to simplicity, [as of now] better tooling, and a much wider pool of experts) on developing more and/or better features;<p>B) develop an MVP in Python, then rewrite it for production in Julia for much better native performance, GPU integration, and the potential use of macros for an embedded DSL;<p>C) take more time initially to master Julia, and develop both the MVP and the corresponding production platform in Julia from scratch?<p>EDIT: Forgot to mention that HPC workloads would represent a significant, albeit non-unique, aspect of the platform in question.
Except they have no story for building composable libraries in a distributed setting. The story for distributed execution in Julia today is "just use MPI", which is a terrible answer. Anyone who has ever used libraries backed by MPI, in any language, knows that they are inherently not composable. You can't take an object partitioned across multiple nodes one way by one library and pass it to a second library that expects it to be partitioned a different way. As far as I can tell, the Julia language has nothing to say about this, and that makes it a non-starter today for anyone trying to build composable libraries for distributed-memory machines.