“Mostly functional” programming does not work

51 points, by lisptime, almost 11 years ago

20 comments

btmorex, almost 11 years ago
To play devil's advocate for a minute:

Nearly all useful, reusable software today is written in a mostly or entirely imperative language. This is despite the fact that functional programming has been around for at least 20-30 years. So my basic question is: if functional programming is so much better, why isn't more software written in a functional language? Or, put another way, why are there so many blog posts promoting functional programming when it clearly hasn't produced results?
kristopolous, almost 11 years ago
The author needs to be less aggressive in his accusations. This current trend which supposedly "doesn't work" actually runs the vast majority of the modern web, consistently and reliably.

Placing onerous restrictions on what someone is permitted to do in order to satisfy some formal abstract programming model - that is the thing that really doesn't work too well.

This makes arbitrary programming arbitrarily difficult: many simple concepts are completely prohibited. Fanciful, convoluted ways that don't violate the formalism have to be fabricated... because we can't violate our arbitrary formalism! No! Not in the name of instructing a computer to do something. /me adjusts his English headmaster cap.
matthewmacleod, almost 11 years ago
Evidently "mostly functional" programming _does_ work, as does imperative programming, because we've got loads of successful software written using these paradigms.

I'm absolutely all behind the idea of looking at what the future of programming languages is going to be, how we're going to cope with diverse, highly parallel systems, and how we can reduce errors, bugs and failures. But it will be a while until we figure that out.
tinco, almost 11 years ago
It's a shame he's put an introduction to monads smack down in the middle of this otherwise nice argument. If there was ever any objection to pure functional programming, it would be that monads are complex and hard to learn and reason about. Putting an explanation of them with all sorts of mathematical terms is not helping the cause.

Not only are monads not the only solution to doing I/O in a pure programming language (stream-based I/O or functional reactive programming being others), they don't need such a mathematical explanation.

An I/O monad is a simple class modelling a box. There are two functions for it: one puts something that's not in a box into a box. The other allows you to give it a function that is applied to the value in the box, without taking it out of the box (this is called a 'bind' operation).

Note that there's no function to take the value out of the box.

The trick is that the only thing that knows how to get the value out of the box is the VM itself. So its task is, at the end of your function, to execute whatever function created your monad, and then execute all the functions you gave it via 'bind' sequentially over the returned value (which might include unwrapping more I/O monads).

Since every bind function relies on the previous value of the monad, it can naturally only be executed sequentially.

Note that this explanation does not describe monads either in full or very accurately; it's just one of the patterns that monads are used in, and hopefully gives you an idea of how monads are used to enforce encapsulation and sequential execution of effectful functions.
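A minimal Haskell sketch of the "box" picture above. The names Box, put and bind are made up for illustration; this is not the real IO type, whose internals the runtime keeps to itself:

```haskell
-- Toy "box" with the two operations described above (hypothetical names).
newtype Box a = Box a

-- Put something that's not in a box into a box (Haskell's `return`/`pure`).
put :: a -> Box a
put = Box

-- Apply a function to the contents without ever handing them back unwrapped
-- (Haskell's `>>=`). Only `bind` itself gets to open the box.
bind :: Box a -> (a -> Box b) -> Box b
bind (Box x) f = f x

-- Each step needs the previous step's value, so the chain is inherently ordered.
example :: Box Int
example = put 1 `bind` \x ->
          put (x + 1) `bind` \y ->
          put (x * y)
```

Roughly speaking, in the real IO monad the constructor is hidden from user code, so the only way to "run" such a chain is to hand it to the runtime, which is what enforces the sequencing described above.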
danieldk, almost 11 years ago
His argument is that concepts such as laziness lead to unexpected results in imperative languages. However, such problems also occur in pure functional languages: http://stackoverflow.com/questions/5892653/whats-so-bad-about-lazy-i-o
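A small Haskell sketch of the kind of surprise the linked question describes; the file name "input.txt" is just a placeholder:

```haskell
import System.IO

-- hGetContents returns the file's contents lazily. withFile closes the handle
-- as soon as the inner action returns, so by the time putStrLn forces the
-- string, the handle is already closed and the output is truncated
-- (typically empty), even though the code "looks" correct.
main :: IO ()
main = do
  contents <- withFile "input.txt" ReadMode hGetContents  -- "input.txt" is a placeholder
  putStrLn contents
```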
vertex-four, almost 11 years ago
The author never explicitly specifies what "does not work" actually means. Does it mean the code doesn't run? Is riddled with bugs? Certain types of reasoning aren't possible? Certain compiler optimisations aren't possible? The design of such languages is more inconsistent than they'd like?
louthy, almost 11 years ago
"Computation is not just about functions. If computation was just about functions then quicksort and bubblesort would be the same, because they're computing the same function. A computing device is something that goes through a sequence of states. What an assignment statement is doing is telling you 'here is a new state'. Functions alone don't solve the problems of programming because programs (on the whole) are non-deterministic; on the other hand, imperative programs are using assignment statements to compute functions, and that's silly."

That exact phrase was said to Erik Meijer by Leslie Lamport when Erik interviewed Leslie. [1]

It seems clear (if you take Leslie's word as gospel) that there is room for 'mostly functional'. That is, accept that your program goes through a series of states, but use pure functions to calculate those states.

However, as an imperative programmer who's found Haskell over the past few years (thanks, in large part, to Erik Meijer), I happen to think that Haskell has it just about right.

[1] http://channel9.msdn.com/Shows/Going+Deep/E2E-Erik-Meijer-and-Leslie-Lamport-Mathematical-Reasoning-and-Distributed-Systems
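One way to read "use pure functions to calculate those states", sketched in Haskell with made-up names (AppState, step) purely for illustration:

```haskell
-- The program is still a sequence of states, but each transition is a pure
-- function from the old state to the new one; nothing is mutated in place.
data AppState = AppState { counter :: Int, events :: [String] }
  deriving Show

step :: String -> AppState -> AppState
step msg (AppState n evs) = AppState (n + 1) (msg : evs)

main :: IO ()
main = print (step "second" (step "first" (AppState 0 [])))
```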
mathenk2, almost 11 years ago
I must say that in all of my years, almost every piece of source I've ever seen has been mostly [insert paradigm here]. OOP is "mostly" more times than not, and the same goes for FP. I would argue that "mostly" does work, and thankfully so, because nearly every service that you use throughout your day is a "mostly".
sqrt17, almost 11 years ago
"Mostly functional", in my eyes, works about as well as contract-based specification of interfaces works. The property of being free of (surprising/undesirable) side effects is part of a contract that can, but should not, be violated by an implementation. To me, Meijer's argument sounds like a strawman, since the alternatives to encapsulating side effects in the terms of an (informal, software) contract are all not very appealing.

Declarative sublanguages such as LINQ work precisely because of this "contract" of being mostly functional, even if the work behind the scenes (database accesses etc.) certainly changes the state of the world.
Jweb_Guru, almost 11 years ago
This would be a better article if it didn't implicitly assume mutable state was always bad. For example, locally mutable state (that doesn't escape) is highly unlikely to bite either you or the compiler.
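A standard Haskell illustration of that point, using the ST monad (a sketch added here, not code from the article):

```haskell
import Control.Monad.ST
import Data.STRef

-- The sum is computed by mutating an STRef in place, yet sumST is a pure
-- function: runST guarantees the mutable reference can never escape.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])  -- 55
```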
rtpg, almost 11 years ago
OK, this points out a couple of things that "do not work" in these sorts of situations:

- lazy evaluation with side effects
- replacement optimisations based on referential transparency when there are side effects
- general side-effectiness

This is basically saying that functional programming doesn't work when it isn't functional... but we can still have functional "chunks", and monads _are_ an effect system.

I don't really understand what he's trying to prove; of course purely functional semantics fall apart when side effects are introduced.
otikik, almost 11 years ago
The article should start by defining what he means by "work". I see mostly functional programming working every day. I even see non-functional programming (_gasp!_) working every day.
creyer, almost 11 years ago
"It is impossible to make imperative programming languages safer by only partially removing implicit side effects." So when can we call a program safe? My personal definition is that a program is safe when it always gives the right output for the input it was designed for. Of course, putting water in your car won't make it run, and in the same way, providing the wrong input to a program may end up producing the wrong output. Getting back to the point, I do believe in the middle way.
radicalbyte, almost 11 years ago
Erik's example of the using statement is a straw man: you get exactly the same problems if you do this:

using(var file = File.Open(path)) { _someField = file; }

...and then use _someField in another method.

It's not so much a problem with understanding closures as one of the limitations of IDisposable.
acqq, almost 11 years ago
I don't understand where he sees the problem in the first C# example. I believe it actually works; the interleaving is OK to happen, IMHO. I guess in Haskell it would also happen, as it's also lazy. What am I missing?
auggierose, almost 11 years ago
I use Scala for "mostly functional" programming, and it works very well. Sorry, Erik, you are just not where it's at anymore.
sillysaurus3, almost 11 years ago
Lisp is mostly functional. Arc isn't purely functional, and it works quite well.
lispm, almost 11 years ago
Another monad introduction.
cm2187, almost 11 years ago
All I get is a 403 Forbidden error on this link
mbenjaminsmith, almost 11 years ago
I think the real failure of FP has been demonstrating its advantages in practical terms. While deferring impurity is admittedly interesting (if you care about the theoretical underpinnings of programming at all), what do I gain as a programmer?

I'll try to illustrate my point:

In programming we often have to do something more than once. In terms of mental complexity, I'd rank our options like so, from easy to hard:

Loop -> Recursion -> Fixed Point Iteration

We can do the same thing with all of them. Recursion in some cases can lead to a stack overflow, and any fixed point iteration has the disadvantage of, well, being f-ing hard to understand.

A beginner can get a `while` or a `for` loop in seconds. They're very, very intuitive. Once you figure out that solving problems in software often requires doing things repeatedly, loops open up a lot of power.

Recursion is more difficult once you throw in conditionals and return values. It's harder to reason about. Loops are simple. So why bother with recursion? I learned recursion because it exists and I love learning. Could I have written just as much software without it? Yes. Would it have been just as functionally correct, maintainable, and easy to read? Yes.

I know very little about fixed point iteration. Why? Well, because I have loops and, if I'm feeling jaunty, recursion.

Having said that, I recently came across this article about the Y combinator:

http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization/

I recommend that article for everyone. But the most important part is the section "Exploiting the Y combinator".

> This formulation still has exponential complexity, but we can change it to linear time just by changing the fixed-point combinator. The memoizing Y combinator, Ymem, keeps a cache of computed results, and returns the pre-computed result if available:

The author is talking about his algorithm for calculating Fibonacci numbers. While the naive recursive approach is limited due to its computational complexity, by caching intermediate results we can short-circuit the calculations made.

> The end result is that the 100th Fibonacci number is computed instantly, whereas the naive version (or the version using the ordinary Y combinator) would take well beyond the estimated lifetime of the universe.

I don't know enough about what's being done here to say that this optimization couldn't have been made with a loop (I believe Turing equivalence proves that it could have been), but at least I get to see some of the magic that fixed point iteration gives us.

Haskell, as a functional language, offers a lot (purity! type safety!) but the community hasn't really shown _how_ these things are valuable. Type safety is great, but is Haskell materially better than Objective-C (my primary language) in that regard? Pure functions are trivial to test, but in a language with a reasonable amount of type safety, how many individual functions are getting tested anyway? Of course I can write pure functions in an imperative language, and often do. And I can do so without painfully abstract concepts like the dreaded monad if I want to do something as pedestrian as generate a random number.

The reality is that writing software for most people is about the platform they're targeting, available libraries, ease of use and techniques for dealing with long-term complexity. I have yet to find a reason to use Haskell even though I would love to -- more accurately, I have yet to find a situation where I could justify using Haskell.
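For readers who want the gist of the linked article's trick without the JavaScript, here is a rough Haskell sketch of the same idea (open recursion tied together either naively or through a shared cache); it is an illustration, not the article's code:

```haskell
import Data.Function (fix)

-- Fibonacci with the recursion "factored out": fibOpen never calls itself,
-- it calls whatever `self` it is handed.
fibOpen :: (Int -> Integer) -> Int -> Integer
fibOpen _    0 = 0
fibOpen _    1 = 1
fibOpen self n = self (n - 1) + self (n - 2)

-- Tying the knot with `fix` gives the naive, exponential-time version.
fibNaive :: Int -> Integer
fibNaive = fix fibOpen

-- Tying the knot through a lazily built list memoises every intermediate
-- result, analogous to the article's memoizing Y combinator.
fibMemo :: Int -> Integer
fibMemo = (cache !!)
  where cache = map (fibOpen fibMemo) [0 ..]

main :: IO ()
main = print (fibMemo 100)  -- returns immediately; fibNaive 100 would not
```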