If I understand the helper-code part correctly, libraries are banned? That makes the challenge fairly pointless, since it doesn’t even pretend to reflect “real world” usage. All it can show is that the Julia language ships this functionality by default... not whether the language’s design actually leads to something cleaner, more readable, and sufficiently fast compared with what would “normally” be done in other languages.<p>It doesn’t really matter that Python lacks array syntax as powerful as Julia’s if I can pull in NumPy (a standard, battle-tested library) and produce equivalent (and good) code.<p>But if I pull in NumPy and <i>still</i> can’t produce code as nice as Julia’s (because of whatever limitations Python has as a language), then there’s an argument to be made for switching to Julia.<p>Judging purely by what a language provides in its initial installation is just an arbitrary restriction that guarantees the question one answer.<p>The article claims that it proves the point ... "This can be unfair since Julia has lots of functionality for array algorithms built in, but also illustrates the point" ... but assuming good intentions, I'm not sure what that point is. At best, you might be comparing APL-style array languages to one another, but nothing broader than that.
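For what it's worth, the NumPy version of the kind of expression the challenge targets is short. A minimal sketch (the function user_defined and the shapes here are illustrative, not from the article):

```python
import numpy as np

def user_defined(v):
    # Illustrative elementwise function; any ufunc-compatible function works.
    return v * v + 1.0

x = np.arange(6.0).reshape(2, 3)   # matrix (2x3)
y = np.array([10.0, 20.0, 30.0])   # vector, broadcast across each row
z = 100.0                          # scalar, broadcast everywhere

# NumPy's broadcasting extends y and z to x's shape automatically,
# but each binary op allocates a temporary array: this is not the
# single-pass fused traversal that Julia's dot-broadcasting performs.
result = user_defined(x) + y + z
```

Whether that counts as "as nice as Julia" is exactly the comparison being asked for here: the syntax is comparable, but the evaluation strategy is not.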
As someone who has used both languages extensively over the last 6 years (Julia for grad school research and Python for my work as a data scientist), I'm always surprised at how quickly people rush to defend Python. "Yes, you can do that in Python too using such-and-such library or with this hack..." Don't get me wrong — the numerous Python libraries are impressive, but at the end of the day, for me personally the experience of using Julia is just far more pleasant than using Python. It seems as though I'm always fighting <i>against</i> Python to accomplish something, whereas with Julia it feels like the language just gets out of the way and lets me do what I want.
There is something that puts me off about all these languages that try to show off their own strengths by pointing at other languages' weaknesses. "X in Y" is another such pattern: who cares what 'X' is written in? The questions are whether it performs well, whether it serves a need, and whether it is maintainable and secure if it is network-facing or otherwise deployed in a situation that requires the program to be locked down properly.<p>Setting up an artificial challenge with rules that favor 'your' language up front is a great way to get the result that you want, but not a great way to learn. Each language has its strengths and its weaknesses, but none of them matter all that much from the point of view of an end user until they have a direct impact on the quality of what is produced. And programmers have shown time and again that they can write crappy code in just about any language (and good code too...), with language features having a much smaller impact on the outcome than the quality of the people writing the code.
> Use Python + Numba, Python + C, Swift, Cython, Go, Lua-Jit, Rust, D - surprise us!<p>I'm familiar with the Python scientific ecosystem and am currently learning Julia and Rust. As I see it, these languages likely do not compete well with Julia on <i>all</i> three categories --- "developer effort, performance and extensibility" --- for the problem you picked, because none of them was designed for native array arithmetic with broadcasting in the first place (except perhaps Numba).<p>The current dominance of Python in scientific computing is more a fortunate coincidence: most people were fed up with C++ and legacy Fortran back then, and efforts to improve the tools of the trade tended to converge on Python. Fast forward 18 years after Python was born, and Julia came along, <i>designed</i> to do numerical tasks fast and elegantly and to avoid some pitfalls of "scripting languages". Does the success of Julia at these tasks signify the incompetence of Python? I don't think so; Python served well what it was designed for. And if you tell me Julia is the right side of history, you'll have to wait for that to play out. (Don't get me wrong, I like the Julia language and use it for a side project. I just don't see how bashing other languages to prove that Julia is the one language to rule them all would be fruitful.)<p>On the other hand, I'd say you forgot to count Fortran 2008! For broadcasting with multithreading, Fortran + OpenMP works pretty well.
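To make the "single traversal" point concrete: the usual Python escape hatch for fusion is to write the loop explicitly and hand it to a JIT such as Numba. A pure-Python sketch of that fused kernel (the names are mine; with Numba one would add an @njit decorator, omitted here to keep the sketch dependency-free):

```python
def fused_broadcast(x, y, z, f):
    """Single-pass f(x) + y + z, with x a matrix (list of rows),
    y a vector broadcast across rows, and z a scalar."""
    rows, cols = len(x), len(x[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # One traversal: no temporaries for f(x) or the partial sums.
            out[i][j] = f(x[i][j]) + y[j] + z
    return out

result = fused_broadcast([[0.0, 1.0], [2.0, 3.0]],
                         [10.0, 20.0], 100.0,
                         lambda v: v * v)
```

The point of the challenge is that in Julia the broadcast machinery writes this loop for you; elsewhere you either write it by hand like this or lean on a compiler to do the fusion.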
I think the point is to demonstrate the ability to implement high-level abstractions, in a very composable way, without paying a performance cost.
Am I the only one who sees a big PR push for Julia? Up until a few weeks ago I'd never heard of it; now I see it all over, frequently touted as the "one language to rule them all."
So the actual challenge is to provide a means, in your language of choice, to write an arbitrary expression like:<p><pre><code> user_defined(x) + y + z
</code></pre>
Which may mix matrices (<i>x</i>), vectors (<i>y</i>), scalars (<i>z</i>), and <i>user_defined</i> or built-in (+) functions applied elementwise; has automatic extension of dimensions; and must operate in a single pass, performing only one traversal.<p>This problem statement is rather tailored to Julia’s way of doing things—I don’t believe any other language can really “succeed” as a result, and not because Julia is any <i>better</i> (nor worse).<p>In Haskell I might simply rely on laziness and rewrite rules for fusion, or reach for a package like repa¹ or folds², depending on the actual problem at hand. There wouldn’t need to <i>be</i> a library providing this “broadcast” functionality, because it can be expressed already with other composable tools.<p>If you work with the language and type system, there are lots of tools available for giving the compiler enough information about the structure of your code to guarantee good performance, abstracted behind a reasonable API. You might end up writing something more explicit, like:<p><pre><code> zipWith3 (+) (userDefined <$> x) (extend y) (extend z)
-- With MonadComprehensions + ParallelListComp
[ userDefined a + b + c
| a <- x | b <- extend y | c <- extend z
]
</code></pre>
But I’d be perfectly happy with that, since it uses abstractions like functors and monads that I already understand. I probably <i>wouldn’t</i> reach for the direct equivalent of this Julia solution, which would be to write some Template Haskell macro that rewrites an expression to insert dimensional conversions (“extend” in the pseudocode above) and fusion of operations—that approach feels too brittle and difficult to extend, and not because of any deficiency in the language.<p>¹ <a href="http://hackage.haskell.org/package/repa" rel="nofollow">http://hackage.haskell.org/package/repa</a><p>² <a href="https://hackage.haskell.org/package/folds" rel="nofollow">https://hackage.haskell.org/package/folds</a>
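For readers who don't speak Haskell, the parallel comprehension above has a rough eager Python analogue, with itertools.repeat standing in for the scalar "extend" (the names are illustrative):

```python
from itertools import repeat

def user_defined(a):
    # Stand-in elementwise function.
    return a * a

x = [[1.0, 2.0], [3.0, 4.0]]  # matrix, as a list of rows
y = [10.0, 20.0]              # vector, reused per row (the "extend")
z = 0.5                       # scalar, extended via repeat

# zip plays the role of zipWith3: each row is traversed exactly once.
result = [[user_defined(a) + b + c
           for a, b, c in zip(row, y, repeat(z))]
          for row in x]
```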
This seems very finely tuned to the Julia language (i.e. relying heavily on metaprogramming). It's not clear to me that strong support for metaprogramming is actually necessary for everyday use cases.<p>I'd be curious to see what kind of implementation someone could come up with in CUDA using templates, though. It could easily blow the Julia performance out of the water.