Interestingly, I found myself going the other way. Let me first say that R is a hilariously weird-feeling and janky language. The Julia features mentioned (structs are good for organising; compilation and better data structures mean you need to worry less about accidentally writing code that is 10x or 100x slower than it ought to be, which tends to matter a lot for interactive use) are definitely useful, and magically getting e.g. arbitrary-precision arithmetic is pretty cool.

I think the example in the post shows an annoying way for Julia's generic functions to cause trouble: the function looks like it takes any matrix, but secretly it only wants a 2x2 matrix (there's a small sketch of this at the end of this comment). If such a function gets called with the wrong value deep in some other computation, and especially if it silently doesn't complain, you can end up with some pretty annoying bugs. This kind of bug can happen in R too (functions may dispatch on the type of their first arg, and many are written to be somewhat generic by inspecting types at runtime); I think it's a little less likely only because the data structures are more limited. A related example that trips me up in R is min vs pmin.

The biggest issue I had in practice is that, for either language, I wanted to input some data, fiddle with it, draw some graphs, maybe fit some models, and suchlike. R seems to have better libraries for the latter, but maybe I just didn't find the right Julia libraries.

- I feel like I had more difficulty reading CSVs with Julia. But then, when I was using Julia, I wanted to read a bunch of ns-precision timestamps, which the language didn't really like, and with R I didn't happen to need this. I found neither language had amazing datetime support (partly things like precision; partly things like wanting to group by week/day/whatever; partly things like wanting sensible graphs to appear for a time axis).

- R has a bigger standard library of functions that are useful to me, e.g. approx or nlm or cut. I think it's a reasonable philosophy for Julia to want a small stdlib, but it is less fun trying to find the right libraries all the time. Presumably if I knew the canonical libraries I would have been happier.

- R seems to have better libraries for stats.

- I found manipulating dataframes in Julia less ergonomic than dplyr, but maybe I just wasn't using the Julia equivalent well. In particular, instead of e.g. mutate(x = cumsum(y * filter)), I would have to write something like transform(df, [:y, :filter] => ((y, f) -> cumsum(y .* f)) => :x); there's a sketch of this at the end of this comment too. I didn't like it, even though it's clearly more explicit about scoping, which I find desirable in a less interactive language.

- I much preferred ggplot2 to the options in Julia. It seems the standard thing is Plots.jl, but I never had a great time with that. Gadfly seemed to have a better interface, but it had similar issues to manipulating data frames, and I found myself hitting many annoying bugs with it. Ggplot is fairly slow, however.

- Pluto crashed a lot on me, which wasn't super fun. I felt like Julia was buggier in general, though I also get an annoying bug with R where it starts printing new prompts every second or so, and sometimes just crashes after that. Pluto also doesn't work with Julia's parallelism features (but maybe it does now?).

- The thing that most frustrated me with Pluto/Gadfly was that I would want to take a bunch of data, draw it nice and big, and have a good look at it.
Ggplot (probably because of bad hidpi support) does this well by throwing up the plot with a tiny font size on a nice 4k window and, with appropriate options, not doing a ton of X draw calls for partial results (downside: it is still quite slow with a lot of points). Gadfly in Pluto wants to generate an SVG with a massive font size and thick borders on chonky scatter-plot shapes, and crams it into a tiny rectangle in Pluto. Maybe this is more aesthetic or something, but generally I plot things because I want to look at the data, and this is not an easy way to look at it. The option to hide the thick borders in Gadfly is hilariously obscure. I never bothered learning how to not generate the SVG in the notebook; I would just suffer terrible performance while I zoomed in to get a higher-resolution screenshot (before deleting the SVG in the dev console) or generate a PNG file.

That said, there are still things I don't know how to do with either plotting system, like reversing a datetime scale, or having a scale where the output coordinate goes as -pseudolog(1-y) to see the tail of an ECDF, or having a scale where the labels come from one source but positions come from some weight, e.g. time on the x axis weighted by cpu-hours so that an equal x distance between points corresponds to equal cpu-hours rather than equal wall-time. Maybe I will learn how to do it someday with ggplot.
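To make the overly-generic-signature complaint above concrete, here's a toy Julia sketch (det2 is a made-up function, not the one from the post). The signature admits any matrix, but the body quietly assumes 2x2, so a bigger matrix gives a wrong answer instead of an error:

    # Made-up illustration: the signature admits any AbstractMatrix,
    # but the body silently assumes the input is 2x2.
    det2(m::AbstractMatrix) = m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1]

    det2([1 2; 3 4])   # -2, as intended
    det2(rand(3, 3))   # no error; the third row and column are silently ignored

    # A defensive variant that fails loudly instead:
    function det2_checked(m::AbstractMatrix)
        size(m) == (2, 2) || throw(DimensionMismatch("expected a 2x2 matrix, got $(size(m))"))
        return m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1]
    end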
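And here is roughly what the dataframe comparison looks like written out, assuming DataFrames.jl's transform! and making up a tiny table with the y/filter/x columns from my example:

    using DataFrames

    df = DataFrame(y = [1.0, 2.0, 3.0], filter = [1, 0, 1])

    # dplyr would be roughly: df %>% mutate(x = cumsum(y * filter))
    # DataFrames.jl wants the source columns, the function, and the
    # target column all spelled out:
    transform!(df, [:y, :filter] => ((y, f) -> cumsum(y .* f)) => :x)

It does force you to be explicit about where y and filter come from, which is the scoping point above, but it's a lot more to type interactively.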