I tend to use the word magic for those situations when you cannot easily examine the source code and follow the control flow. For instance, you are in some part of the code, you have a User object, and you see the code calls the .babble() method on the object. You go to the User definition, and there's no babble method. You follow up the superclasses, and there's no babble method. Where did the babble method come from? Where is it going? Who created it? Where can I find the documentation for it (if any)? Even if I do find something that seems to implement it, can I be sure that some *other* code didn't fiddle with the babble method in *another* way?

Languages permit varying levels of this sort of magic, ranging from the Ruby extreme of dynamically rewriting classes and inheritance hierarchies on the fly, to the pretty-much-no-magic of C. (C *qua* C, anyhow. The preprocessor can get fun.)

I think this is one of the worst sorts of magic, and I've generally tried to arrange my code so that even when I'm using a dash of magic, the pathway back to where everything came from is still fairly clear. (Usually this can be done just by accepting that you may need a few more keystrokes to leave a breadcrumb trail back to the implementation. Sometimes shrinking a bit of functionality down to zero bytes in the source code is a step too far.)

I'll also observe that, having used a number of both high-magic and low-magic languages, I am generally underwhelmed by the benefits of the magic, and I find the costs go up more quickly than most people realize as source code size grows. I generally prefer low-magic languages. With just a bit of thought, you can still get a huge amount of power without the costs. (For instance, believe it or not, Haskell is a nearly-zero-magic language by this definition. For all the crazy stuff it seems to be able to do, you can always see what's going on just by following the symbol definitions back, with the exception of a finite bit of syntactic sugar implemented by the compiler, the most important of which is generally covered at the tutorial level.)
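Scala's implicit classes give a small, concrete taste of this. The sketch below is purely illustrative (the User class and babble method are made-up names): the method exists nowhere on User or its superclasses, yet the call compiles because of an import in scope.

    // A plain class with no babble method anywhere in its hierarchy.
    class User(val name: String)

    object Extensions {
      // An implicit class quietly bolts babble onto every User in scope.
      implicit class BabblingUser(u: User) {
        def babble(): String = s"${u.name} says blah"
      }
    }

    object Demo extends App {
      import Extensions._        // remove this import and u.babble() no longer compiles
      val u = new User("Ada")
      println(u.babble())        // works, but grepping the User class finds nothing
    }

Whoever reads that call site has to know to look for an implicit somewhere in scope, which is exactly the "where did this come from?" cost described above.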
In practice, at work, I haven't found people using magic to describe code that they don't understand, but rather code that uses obscure or somewhat hidden features to try to do things in a smart way.

For instance, if a method relied on being modified at run time by its caller to ensure the sanity of its input, I would call it magical.
In your specific case, the argument is much narrower: Scala's for comprehension is, in itself, a very magical construct that you have to read about in the documentation, while map is a function you can actually find in the code and follow. I'd argue that Scala's for is its most magical feature, even more than implicits. So anyone who finds for easier than map doesn't really understand what for does, just an approximation carried over from another language.

That said, there is such a thing as not using language features that your team isn't familiar with: there is a cognitive cost in, say, understanding higher-kinded types, adding cats as a project dependency, or, god forbid, shapeless. There is such a thing as excessive complexity, or using features that make anyone who didn't write the code suffer to understand it.

So yes, there are definitely good reasons to call something magic and to wish for a lower cognitive load. The map function isn't it, though. I'd just avoid making it dotless, because that's not really any clearer, especially when it's a higher-order function, as in your example.
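To make the "for is the magical one" point concrete, here's a rough sketch of the rewriting the Scala compiler performs; the lists are just placeholder data:

    // A for comprehension over two lists with a guard...
    val pairs = for {
      x <- List(1, 2, 3)
      if x != 2
      y <- List(10, 20)
    } yield (x, y)

    // ...is desugared by the compiler into (roughly) this chain of calls:
    val desugared = List(1, 2, 3)
      .withFilter(x => x != 2)
      .flatMap(x => List(10, 20).map(y => (x, y)))

    // Both produce List((1,10), (1,20), (3,10), (3,20)).

The for syntax works on anything that happens to define map, flatMap, and withFilter, which is exactly the kind of convention you only learn from the documentation, not by following symbols in the code.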
There is no magic other than the unfamiliar. The problem is not magic or the unfamiliar; it is complexity.

A technique can be understood, and still be a pain in the ass to use; then it is bad. This is of course subjective, by comparison with other ways of solving the same problem.

Or, a technique can be useful, but too difficult to learn to justify using it on something simple; then it is inappropriate, and should be replaced by simpler, presumably less concise, code. This is of course subjective, dependent on the code's audience (readers, reviewers, maintainers). This is the root of the article's map vs for-loop example: different levels of familiarity with the technique of mapping functions over something.
Working towards a rule to distinguish between *magic* and powerful would be interesting, and I for one would like to see some of HN take some swings here.

On first pass, I'm having trouble imagining a specific function in a standard library that is sufficiently magic to warrant avoiding. Learning is just part of programming. Something that initially seems like magic can quickly become one of your favorite hammers and make you more productive (I just found reduce-kv the other day; it's awesome). If it's in the standard library, then at least being able to use it seems wise. Note I'm not saying one should or could reasonably be expected to know everything about the standard library, but rather that being asked to know something in it, going forward from a point, is likely reasonable.

Magic tends to live in side-effect-heavy libraries and frameworks. Rails is full of magic that can make you very productive for specific tasks, but learning all the magic takes a lot of time, and the incantations are not necessarily generalizable, so when you switch over to Django or Node you need to go learn new magic, or sometimes the magic you're looking for doesn't exist anymore and you need to learn what was actually happening (this happened to me switching from Rails to Django; I forget what in particular, but it was somewhat jarring to realize how much I had been taking for granted). Learning something like map will translate even if the literal syntax doesn't; knowing and relying on x-many separate opt-args for defining an ActiveRecord relationship to save explicitly typing a dozen lines might be magic not worth it, especially if the whole team has to know it.

Edit: Just wanted to add that although I took a shot at a rough distinction, I want to clarify that I am not suggesting I have the correct answer; I only hope to elicit the thoughts of better thinkers and posters.
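For context, reduce-kv is Clojure's fold over a map's key/value pairs; a rough Scala analogue, sketched only to show the shape of the tool (the data is invented), is folding over a Map:

    // Fold over a Map, visiting each (key, value) pair once and threading an accumulator.
    val wordCounts = Map("magic" -> 3, "map" -> 7, "for" -> 2)

    val total = wordCounts.foldLeft(0) { case (acc, (word, count)) =>
      acc + count                             // 3 + 7 + 2 = 12
    }

    val frequentWords = wordCounts.foldLeft(List.empty[String]) { case (acc, (word, count)) =>
      if (count >= 3) word :: acc else acc    // keeps "magic" and "map"
    }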
A few years ago, I was working on a testing automation tool. The tests were written in Cucumber and run on IronRuby, which hooked into a C# application and would press buttons and otherwise move around the application, testing that everything did what it was supposed to. My job was to write the layer that sat between the Cucumber tests and the C# application and called the proper events when a Cucumber step was called.

The initial approach was to start the C# and IronRuby processes and have the IronRuby process watch the C# process for events, but this quickly broke down; the C# process ran in an STAThread, and in order to allow a second thread to interact with it, that would have to become an MTAThread, and with 500K lines of existing code there was no telling what changing that would do. So instead we inserted a callback in main. As soon as the CLR had loaded, if a command line parameter had been given, we started the IronRuby interpreter from the C# application. After hooking into some events we would drop out of the IronRuby interpreter, and the C# application would start as normal, only going into IronRuby when an event handler was called.

There was just one problem. The IronRuby interpreter was starting in unmanaged code, and the CLR was seeing it as a no-op. So when it got to the if statement that checked the command line parameter, it optimized the whole if statement out of existence.

My pair and I spent hours reading docs, trying to bend the optimizer to allow the if statement without hurting application performance. Finally, my pair inserted the following line into the if statement, after the call to the IronRuby interpreter:

    Console.WriteLine("magic!");

It worked. The optimizer recognized the print statement as C# code and left the if statement alone.

That's always what I think of when I think of magic in code.
Many programming languages have features about which I would say, "You should either use them never, or all the time, but nowhere in between."

map is one of those features. Closures are another.

The reason is simple. The cost of learning the feature gets amortized over all of the times you use it. It really isn't worth learning the feature just to see it used a single time. But if you've paid the price to learn it, then there is no reason not to use it wherever it makes sense.

Unfortunately there are some very large languages with many reasonable working sets of features, but sharp corners when you combine the wrong ones. C++ is notorious for this, but it is hardly the only one. In that case it is important to agree on the subset of features that you will use, and not introduce others willy-nilly.

I would therefore appreciate the right to use map. But if my co-workers do not know it, I would think hard about how hard to press for adding it to what is needed to understand my code.
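To be concrete about the two features named above, here is a minimal Scala sketch (names and values invented for illustration): map applies a function to every element, and that function is a closure because it captures a local variable from the enclosing scope.

    val threshold = 21                            // local variable captured by the closure
    val ages = List(16, 33, 21, 40)

    // The function literal a => a >= threshold closes over `threshold`.
    val isAdult = ages.map(a => a >= threshold)   // List(false, true, true, true)

Neither line reads clearly until you've learned both ideas once, but after that one-time cost every further use is essentially free, which is the amortization argument.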
I can't understand why you would use Scala and not use a simple higher-order function such as map. It communicates the intent clearly and is easy to understand.
My personal cutoff for magic: if it's determined at compile time, I'm happy.

If someone has proven to the compiler that a piece of code has a certain non-obvious behaviour, I'm usually intrigued.

If my interpreter starts parsing Roman numerals as constants because a gem depends on a gem which depends on the gem `roman` (I'm looking at you, Ruby), I'm not amused.

Scala's Slick library is a fantastic example of advanced language features being used for good.
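As a rough illustration of the compile-time point, here is a sketch from memory of Slick 3's lifted-embedding style, so treat the details as approximate; the table definition and the "users" schema are invented. The idea is that column references and comparisons are checked by the compiler rather than discovered at run time.

    import slick.jdbc.H2Profile.api._

    // A table definition: each column has a Scala type the compiler knows about.
    class Users(tag: Tag) extends Table[(Int, String, Int)](tag, "users") {
      def id   = column[Int]("id", O.PrimaryKey)
      def name = column[String]("name")
      def age  = column[Int]("age")
      def *    = (id, name, age)
    }

    val users = TableQuery[Users]

    // Looks like ordinary collection code, but builds a SQL query;
    // misspelling a column or comparing age to a String fails to compile.
    val adultNames = users.filter(_.age >= 18).map(_.name)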
There is nothing magical about computing (unless you use Source Mage [1]). But I have to agree with the author: people tend to use the word magic if they cannot *immediately* grasp the meaning of a piece of code. This code might use a language construct we are not familiar with, or implement a complex O(log N) algorithm, or use a magic number [2]. In most cases, however, it is worth spending time to understand such code.

[1]: https://en.wikipedia.org/wiki/Source_Mage

[2]: https://en.wikipedia.org/wiki/Fast_inverse_square_root#Overview_of_the_code
> *Don't lazily conflate the unfamiliar with magic. It's short-sighted and annoying.*

Why? Because magic *exists*, so save the word for when you see actual magic; don't waste the word on a metaphor?
Is it just me, or does anyone else think that a single map application is a really bad example? I've seen lots of errors introduced by malformed for loops (although modern syntax like for (x : allThese) { stuff } helps). But there's also a semantic difference: map means "do this to all of these", while for means "do them one at a time, in sequence".
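A small Scala sketch of that contrast (illustrative values only): the hand-rolled loop exposes the index bookkeeping where the classic errors live, while map has nothing of the sort to get wrong.

    val xs = Vector(3, 1, 4, 1, 5)

    // Indexed loop: the bounds and the mutable builder are both places to slip up
    // (e.g. writing `0 to xs.length` instead of `0 until xs.length` blows up at runtime).
    val doubledLoop = scala.collection.mutable.ArrayBuffer.empty[Int]
    for (i <- 0 until xs.length) {
      doubledLoop += xs(i) * 2
    }

    // map: no index, no bounds, no mutable state.
    val doubledMap = xs.map(_ * 2)   // Vector(6, 2, 8, 2, 10)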
Magic is OK provided that it always delivers what it claims to offer (for any reasonable use case).

Most of the underlying details of the TCP, HTTP and SSL protocol stack are magic to me, but it works pretty damn well - I'm really glad that I don't have to understand it fully in order to use it.
I sometimes use the term magic to highlight code that is outside the current context/scope.

That ICA, FFT, or MLP module is magic...

The module may be complex, but it is not magic. However, it is something that I probably don't _need_ to understand to solve the problem I am currently working on.

Now that I think about it, normally only very reliable code gets called magic - the stuff that always goes wrong has some other choice descriptors.
I foolishly picked C++ for an assignment I did at university once, and I summed up the experience as "non-deterministic programming" to a friend. To me, it seemed as if I was ending up with different results when running the same code. Then I suddenly got everything working, and that was "magic". In retrospect, it was much too hard a language for me at the time.
What if you are not there to explain your "magic" (which is just another word for clever here)? I get useful abstractions, but if your code has reached a point where you have to explicitly explain to fellow programmers what it does, then you should probably revisit it and make it better.
If your code is clever but not readable, go back and make it readable.
Sounds like you guys just need an in-house style guide. It does get weird when I can tell one person wrote this half of a program and someone else wrote the other half just from the spacing or where they put their brackets.
Readability is always more important. The "magic" is just a sign of bad code or code that is too heavily abstracted. Why have everybody learn this so-called magic when you can be more explicit? This is all a matter of religious programming views, but I personally am pretty tired of things being way too abstracted. It makes getting shit done take too long.