The future of programming languages in a massively concurrent world

31 points by NickSmith over 17 years ago
If, as it appears, whatever-it-is-that-replaces-Moore's-law states that the number of processor cores will double every 18 months from now on, then in 6 years' time 32-core machines will be commonplace, and the hacker's weapon of choice a gleaming new 64-core MacBook Pro.

It seems to me that in such a world, any language that by default addresses 3% or less of the processing capacity will quickly lose popularity, and those that embrace concurrency at a fundamental level (not as another library) will become more and more relevant. IMO, Joe Armstrong talks a lot of sense on this issue in this (previously submitted) video: http://channel9.msdn.com/ShowPost.aspx?PostID=351659

My motivation for posting this is simply that I would love for Arc to succeed in its objectives. But to be a '100 year language' I imagine it would have to first thrive in the next 10 years; and to do that it must be seen as a great language for tomorrow's world, not today's. From what I've seen so far of Arc I get nothing but good vibes, and it would be a shame for it to be sidelined in the multi-core rush just around the corner.

My apologies if this has been discussed before. I am new here, couldn't figure out how to search past articles, and Google just returns this home page.
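
A minimal sketch of what "concurrency at a fundamental level" looks like: many cheap, isolated workers communicating only by messages. Go (itself a CSP descendant) is used here purely for illustration; Erlang would express the same shape with spawn and message sends, and all names below are made up for the example.

    package main

    import "fmt"

    // Each worker is an isolated, lightweight process: it owns its own
    // state and communicates only through messages (channels here,
    // mailboxes in Erlang).
    func worker(jobs <-chan int, results chan<- int) {
        for n := range jobs {
            results <- n * n // stand-in for real work
        }
    }

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        for i := 0; i < 32; i++ { // one worker per imagined core
            go worker(jobs, results)
        }
        go func() {
            for n := 1; n <= 100; n++ {
                jobs <- n
            }
            close(jobs)
        }()
        for i := 0; i < 100; i++ {
            fmt.Println(<-results)
        }
    }

The point of the shape is that adding cores means adding workers, with no change to the program's logic.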

10 comments

axod over 17 years ago
I really hope that if multiple cores increase, they are done in a way which hides them from the programmer: for example, a 32-core CPU that appears as a very, very fast single core. That's where the concurrency/'threading' issues should live, not in everyone's code.

Threads are usually the problem, IMHO, not the solution.

I don't agree with the suggestion that JavaScript will need threads either. JavaScript works extremely well in a single thread; there isn't really much need for threads. Having multiple cores doesn't change that, it just means you might need some abstraction layer like I described above, one that utilizes all cores whilst appearing as a single core.
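
One hypothetical reading of that abstraction layer, sketched in Go: a map that presents a plain sequential interface to the caller while fanning the work out across every core internally. The function name and chunking scheme are illustrative, not from any particular library.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // parallelMap looks like an ordinary map call to the caller; the
    // fan-out across cores is an internal detail, which is the kind
    // of "appears as a single core" layer the comment describes.
    func parallelMap(in []int, f func(int) int) []int {
        out := make([]int, len(in))
        var wg sync.WaitGroup
        chunk := (len(in) + runtime.NumCPU() - 1) / runtime.NumCPU()
        for start := 0; start < len(in); start += chunk {
            end := start + chunk
            if end > len(in) {
                end = len(in)
            }
            wg.Add(1)
            go func(lo, hi int) {
                defer wg.Done()
                for i := lo; i < hi; i++ {
                    out[i] = f(in[i])
                }
            }(start, end)
        }
        wg.Wait()
        return out
    }

    func main() {
        fmt.Println(parallelMap([]int{1, 2, 3, 4}, func(n int) int { return n * n }))
    }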
tlrobinson over 17 years ago
A common misconception is that Moore's "law" states that processor speed will double every 18 months or so. That's incorrect. In fact, Moore said that the NUMBER of transistors on a processor would double roughly every 18 months.

It turns out the two are correlated, since smaller transistors mean both faster transistors and more transistors per unit of area.

My point is that more processor cores require more transistors, and thus the end of Moore's Law also means the end of more processor cores (ignoring things like architectural advancements, or more but less powerful cores).

That said, I do agree that we will see an increasing number of cores, at least for a while.

First of all, it's important to have an OS that efficiently manages the cores and the applications that run on them. This automatically benefits everyone who runs multiple applications on a multi-core machine, since each application gets a larger slice of time and fewer context switches.

Multiple cores could also eliminate dedicated components like GPUs, which would bring down the cost of low-end machines.

As far as programming languages go, I hear Erlang is good for concurrency, though I've never used it.
davidw over 17 years ago
I think it's a race:

In one corner are languages like Erlang that have been designed for concurrency.

In another corner are languages with massive user bases that don't do concurrency very well (Java, for example) and will have to undergo modifications to work better.

In another corner, perhaps, are languages that are just now being created. They're the outsiders, but they have more agility in their design because they don't have huge user bases.
Hexayurt over 17 years ago
Occam.

Specifically, http://transterpreter.org

Yes, the language it runs (Occam) is 20 years old. But the language was designed for programs running on dozens to thousands of nodes, and in the Transterpreter implementation there's the possibility of doing this on heterogeneous hardware, where the fast nodes do things like splitting and merging the data set, and the smaller "grunt compute" nodes do the actual work (see the sketch after this comment).

Parallel programming is hard, but that's inherent hardness. You can't get around things like memory bandwidth and latency at the programming language level, no matter how much you try. You can only get away from those things by dealing with the fact that you have thousands of machines, or tens of thousands.

It's only going to get worse from here on in, as "faster" comes to mean more processors, not higher clock rates. You'll see this: 2 core! 3 core! 4 core! 8 core! And pretty soon (within 10 years) we'll see 64- and 128-core desktop machines, maybe even a revival of unusual architectures like wafer-scale integration with 3D optical interconnects (i.e. upward-pointing tiny lasers and photocells fabricated on the chip) to handle getting data on and off the processors.

We've seen unambiguously that *GIGANTIC* data sets have their own value. Google's optimization of their algorithms clearly uses enormous amounts of observed user behavior. Translation efforts draw on terabyte source canons. Image-integration algorithms like that thing Microsoft were demonstrating recently... gigantic data sets have power because statistics draw relationships out of the real world, rather than having programmers guess about what the relationships are.

I strongly suspect that 20 years from now there are going to be three kinds of application programming:

1> Interface programming

2> Desktop programming (in the sense of programming things which operate on *your personal objects* - these things are like *pens and paper* and you have your own)

3> Infrastructure programming - supercomputer cluster programming (Amazon and Google are *supercomputer applications companies*) - which will provide yer basic services.

One of the concepts I'm pitching to the military right now is using the massive data sets they have from satellite sources to provide "precision agriculture" support for the developing world. Precision agriculture in America is tractors with GPS units that vary their fertilizer and pesticide distribution on a meter-by-meter basis (robotic valves consult the dataset as you drive around the land).

In a developing-world context, your farmers get the GPS coordinates for their land tied to their cell phone numbers, either by an aid worker or by their own cell phone company.

Then the USG runs code over their sat data and comes up with farming recommendations for that plot of land. If the plots are small enough (and they often are), the entire plot is a single precision-agriculture cell.

But if you think about the size of the datasets - we're talking about doing this for maybe 20-30% of the planet's landmass - the software to interpret the images is non-trivial and only going to get more complex as modeling of crops and farming practices improves...

Real applications - change-the-world applications - need parallel supercomputer programming. Occam was *right* in the same way that Lisp is *right*, but for a different class of problems. That's because Occam is CSP (communicating sequential processes), and those are a Good Thing. There may need to be refinements to handle the fact that we have much faster nodes, but much slower networks, than Occam was originally designed for - but that may also turn out to be a non-issue.

I'm also working on similar stuff around expert systems for primary health care - medical expert systems are already pretty well understood - so the notion is to develop an integrated set of medical practices (these 24 drugs which don't require refrigeration, don't produce overdoses easily, and cost less than $10 per course) with an expert system that can be accessed both by patients themselves, to figure out whether their symptoms are problematic, and by slightly trained health care workers, who would use the system to figure out what to prescribe from their standard pharmacopoeia.

It's not much, but for the poorest two or three billion this could be the only health care service they ever see. None of the problems are particularly intractable, but you'd better bet there's a VAST - and I mean VAST - distributed call-center application at the core of this.

Of course, the Right Way to do this is FOLDING@HOME or SETI - we've already proven that public-interest supercomputing on a heterogeneous distributed network works.

Now we just need to turn it to something directly lifesaving, rather than indirectly important for broader reasons.

Remember that the richest 50% of the human race already have cell phones, and rumor has it (i.e. I read it on the internet) that phone numbers and internet users in Africa have doubled every year for the past seven years. 10 years from now the network is going to be ubiquitous, even among many of the very, very poorest.

We get a do-over here in our relationship with the developing world. We can't fix farm subsidies, but we can ensure that when they plug into the network for the first time, there is something useful there.
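
A rough sketch of that split/compute/merge shape, written in Go rather than Occam (illustrative only; Occam would express the same topology with PAR and channels): a coordinator splits the data set, "grunt" workers do the actual work, and the coordinator merges the partial results.

    package main

    import "fmt"

    // grunt is the "grunt compute" node: it works on its slice of the
    // data set and reports a partial result back on a channel.
    func grunt(part []int, out chan<- int) {
        sum := 0
        for _, v := range part {
            sum += v
        }
        out <- sum
    }

    func main() {
        data := []int{1, 2, 3, 4, 5, 6, 7, 8}
        out := make(chan int)
        parts := 4
        size := len(data) / parts
        for i := 0; i < parts; i++ {
            go grunt(data[i*size:(i+1)*size], out) // split
        }
        total := 0
        for i := 0; i < parts; i++ {
            total += <-out // merge
        }
        fmt.Println(total) // 36
    }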
ralph over 17 years ago
I'm surprised the decent solution to this isn't more widely known. People have mentioned Occam and Stackless Python; both interesting. But their ancestor is Hoare's CSP, and other descendants have included Squeak (not the Smalltalk relation), Newsqueak, Plan 9's Alef, Inferno's Limbo, and now libthread.

Channels with co-operating threads are easy to reason about. See Russ Cox's overview page http://swtch.com/~rsc/thread/ for more.
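
For a taste of why channel-based designs are easy to reason about, here is the classic concurrent prime sieve from that Newsqueak/Limbo tradition, written here in Go, a later member of the same CSP family:

    package main

    import "fmt"

    // generate feeds the naturals 2, 3, 4, ... into ch.
    func generate(ch chan<- int) {
        for i := 2; ; i++ {
            ch <- i
        }
    }

    // filter copies values from in to out, dropping multiples of prime.
    func filter(in <-chan int, out chan<- int, prime int) {
        for {
            n := <-in
            if n%prime != 0 {
                out <- n
            }
        }
    }

    // Each discovered prime adds one more filtering stage to the
    // pipeline: a chain of small sequential processes wired together
    // by channels, each trivial to reason about in isolation.
    func main() {
        ch := make(chan int)
        go generate(ch)
        for i := 0; i < 10; i++ {
            prime := <-ch
            fmt.Println(prime)
            ch1 := make(chan int)
            go filter(ch, ch1, prime)
            ch = ch1
        }
    }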
staunch over 17 years ago
The server side is already prepared for this. There's nothing much to do. All the big web languages run as multiple processes (or threads) and so do all the big databases. I think we'll see a lot more server consolidation, which we're already seeing with the 4-core and 8-core machines of today.
dhouston over 17 years ago
Check out Cilk: http://en.wikipedia.org/wiki/Cilk . I'm not terribly familiar with it, but it extends C with a few keywords/abstractions for working with concurrency. It's also been spun out of its original project at MIT into a startup (http://cilk.com/).
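
Cilk's signature pattern is fork/join recursion via its spawn and sync keywords; a rough rendering of the same pattern in Go (not Cilk syntax, just the shape) looks like this:

    package main

    import "fmt"

    // fib forks one recursive call onto another goroutine, computes the
    // other inline, then joins -- the pattern Cilk's spawn/sync
    // keywords express directly in C.
    func fib(n int) int {
        if n < 2 {
            return n
        }
        left := make(chan int)
        go func() { left <- fib(n - 1) }() // "spawn"
        right := fib(n - 2)
        return <-left + right // "sync"
    }

    func main() {
        fmt.Println(fib(20)) // 6765
    }

In real Cilk the runtime's work-stealing scheduler decides which spawns actually run in parallel, so the code stays close to its serial form.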
jhrobert over 17 years ago
My bet is that existing object-oriented programming languages will become more "functional".

I.e.: everything is an object... or a value!

http://virteal.com/ObjectVersusValue
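
A small sketch of why values play well with concurrency (Go used for illustration; the linked page's terminology may differ): a value is copied when handed to another thread, so the two sides can never race on shared state.

    package main

    import "fmt"

    type point struct{ x, y int }

    // Values are copied on send, so two goroutines never share one;
    // that immutability-in-practice is the "functional" property.
    func main() {
        ch := make(chan point)
        p := point{1, 2}
        go func(v point) { // v is a private copy, not a shared object
            v.x = 99
            ch <- v
        }(p)
        q := <-ch
        fmt.Println(p, q) // {1 2} {99 2}: the original is untouched
    }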
gritzko over 17 years ago
There is some chicken-and-egg effect: multicore processors sell well only if the applications and languages to exploit them exist, and vice versa. As far as I can see, OpenMP has some potential, as do Erlang and others.
waleedka over 17 years ago
The way things are headed, we'll soon be running most of our applications in the browser. Someone needs to come up with a multi-threaded JavaScript. Yes, I know, it's not going to be pretty.