My fundamental hope for humanity is that someday, rather than having to confront social challenges around the distribution of resources (who gets more water? who gets more land? who gets more oil? do we need to raise energy costs to capture the external costs of polluting the air?), we will just have enough and more of everything.<p>The Central Arizona Project and the Navajo Generating Station are my favorite examples of this. The Navajo Generating Station is a giant coal-fired power plant, which provides energy used primarily to pump water up into Arizona. The station substantially degrades air quality in the surrounding areas, and the water that is "stolen" from the Colorado is a part of the West's growing water concerns. Right now, there are two sides to the issue: those who think it is worth the cost to bring water to Arizona, and those who believe the environmental issues are too big. We could treat that as a social problem, using politics and ethics to try to make the decision... or we could just get on with making energy cheap and clean, and suddenly a huge part of the cost disappears. If there were no fly ash issue, no NOx issue, no CO2 issue, then we'd be back to just worrying about the water flow. And if we have cheap energy, desalination suddenly looks a lot more palatable, which would relieve a lot of the water pressure in the US. Los Angeles could switch to 100% desalinated water and the Colorado River would have a surplus again.<p>Cheap (and preferably clean) energy, solving climate change, and cheap access to space (this one is more dubious) would all broadly boost the quality of life for humanity.
DeepMind looks like a hilariously wrong project to criticize because it is a true moonshot, something very different from the majority of other SV projects.
If hiring hundreds of PhDs to create a general-purpose learning agent, while publishing all the intermediate results in freely available papers, isn't a moonshot with a socially beneficial outcome, then I don't know what is.
Also note that DeepMind went even further than that: there is a DeepMind health division aiming to use this technology to help doctors and patients directly.<p>If I were the author, I'd choose some social media unicorn or an ad network as an example of an inherent misallocation of human talent.
I think the author is strawmanning a bit here. The central premise they're attacking is that tech should be called upon to solve social problems, and they assume this is what the pull quote at the top is implying. They then spend the body of the essay arguing that social problems are intractable to a top-down approach, that technological rationality is subject to the same problems as government bureaucracy, etc.<p>However, one could more charitably interpret the quote as saying there are lots of TECHNOLOGIES that would produce social good, which are underfunded - alternative energy, carbon capture, distributed social networking, etc. Y Combinator seems to agree, given its recent focus on nuclear.
Is anybody seriously proposing heavy government spending on any specific problem right now, other than a wall against Mexico?<p>Some engineering problems have yielded to heavy spending. The Polaris submarine program. The Apollo program. RCA's color TV effort. AIDS treatment. Extreme ultraviolet photolithography. The H-bomb. Sometimes this works. With a narrow goal, a general idea of how to get there, and heavy funding, impressive results are possible.
I'd question how much of a necessity it is to refute the need for a "Manhattan Project" for various things. But having heard the same call myself, I'll add my two cents to what I think is a rather good essay.<p>There's a perception among those calls, I gather, that the Manhattan Project was pulled together as this massive undertaking at the stroke of a pen. The reality, however, was that it started off as overlooked, modest, and aimless at times. Over time, as the goals and necessities became clearer, it grew and matured into the massive, focused effort we all know. But it was the direction and need that developed that resulted in the Project, not the other way around. And I think that's a very important distinction.<p>I think great projects tend to owe a good amount of their growth to organic factors. The reason we haven't seen a "Manhattan Project" to solve a societal issue is that the conditions are not yet right for one, and perhaps more importantly, the "project" needed to tackle one <i>might not necessarily be recognizable to us</i> as such.
>> First, the Manhattan Project itself was sui generis. There is very little reason to believe that this model does, in fact, generalize widely outside of the basic and applied sciences.<p>No. It was a basic question of practical physics. It was an effort to translate theoretical knowledge already proven in laboratories into a working product at scale. The innovations were definitely costly, but essentially rather incremental and straightforward. That was much of the reason it needed to be done in a hurry: eventually someone would do it.<p>Fusion power is in the same place. The theory works in the lab, we need to translate that into a working model at practical scales. Eventually someone will crack the problem. A Manhattan-style push would accelerate that date, hopefully to within our lifetimes.
><i>One representative example of this argument, applied more generally, can be found in this lament about why Silicon Valley isn't solving the "big" problems. Assuming that this complaint is sincere, it nonetheless misses the point. What are "real" problems?</i><p>Well, I could name 10 real problems off the top of my head, and most people would agree that they are important too. So I don't really understand the question. It seems to imply that people are unique snowflakes and because of that it's difficult to determine what's important, but that's not really the case.<p>><i>1. The clamoring for tech companies to solve social problems ignores the empirical record and empirically observed characteristics of top-down engineering efforts</i><p>Like this internet thing, on which the very article is posted?<p>><i>Unfortunately, Manhattan Project-like ventures are not translatable to social and political problems.</i><p>Maybe, but then again there are tons of serious problems that are not "social and political" in nature, and are not "social video sharing for cat lovers" either.<p>And, yes, people might enjoy and derive some value from the latter too. But they'd derive a lot more value from, e.g., not dying from certain diseases due to a better focus on such problems, or from sustainable energy.<p>What I think the people the author mentions ask for is not stopping Google and co. from doing their stuff, nor conscripting SV engineers into basic research, but more and better funded and coordinated national (and international) efforts on stuff that matters on a large scale for our societies and our survival. Kind of like how the internet was developed, how we went to the moon, and how nuclear energy came about, along with applications of such work to current problems (from the environment to health, education, democracy, privacy, etc.).
I have a pet theory that the level of concern towards social issues for an individual, as a function of wealth and well-being, is convex[1].<p>That is, it is highest at the extremes: for the poorest and the wealthiest. The former are concerned because of the tacit idea that solving social problems would directly solve theirs. The latter are concerned because, as people who don't have direct problems themselves, solving other people's would alleviate a repressed guilt feeling, satisfy their ego, or help with the minor annoyance of having to listen to people's whining.<p>Then I'm guessing most people working in the tech industry are right in the middle of this curve, and consequently give the least possible F.<p>1. <a href="https://en.wikipedia.org/wiki/Convex_function" rel="nofollow">https://en.wikipedia.org/wiki/Convex_function</a>
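The U-shaped relationship the pet theory above describes can be sketched numerically. The quadratic shape and the 0-to-1 wealth scale below are purely illustrative assumptions, not data; any convex curve would make the same point:

```python
# Illustrative sketch of the pet theory: concern toward social issues as a
# convex (U-shaped) function of wealth, highest at both extremes.
# The quadratic form and the [0, 1] wealth scale are arbitrary assumptions.

def concern(wealth: float) -> float:
    """Hypothetical concern level for a wealth value in [0, 1]."""
    return (wealth - 0.5) ** 2  # minimized in the middle of the curve

poorest = concern(0.0)   # high concern at the bottom
middle  = concern(0.5)   # the hypothesized tech-industry midpoint
richest = concern(1.0)   # high concern at the top

# Convexity check on three sample points: the midpoint value lies
# at or below the chord between the two endpoints.
assert middle <= (poorest + richest) / 2
```

On this toy curve the "middle of the distribution" gives the minimum concern, which is exactly the comment's claim about the tech industry.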
When someone bemoans the fact that a significant chunk of our generation's intellectual and monetary capital is invested in trivial matters, it isn't necessarily that they wish Google and Microsoft would save the world instead of making phone operating systems and ads.<p>If they have any sense, what they want is for us as a species to put our money, our labor, our hearts, and our lives into matters of importance to the human race.
Let's reframe the question:<p>Why does society today not attempt to maximize happiness among all citizens?<p>Resources are expended to make those with more resources marginally happier, rather than to make someone with fewer resources comparatively much happier.<p>Market economics says that this is the most "efficient" distribution of resources, but the assumption there is that the happiness of the rich is more important than the happiness of the poor.<p>Is this what we want our societal values to be?
And if so, how much more important?
And if not, is a domestic happy person more valuable than a foreign happy person?<p>The ethics and morals that come out of defining these answers are quite strikingly different.
I honestly couldn't finish this. Who exactly is comparing today's startups to the Manhattan Project? Yes, many projects are aiming for more ad clicks, but so what? In the '30s, movies and radio were doing the same thing, and so what? Why are they being compared to the Manhattan Project, or to Apollo, or to the search for the Higgs boson, or world peace, or whatever?
Is there anything to this beyond the obvious strawman clickbait?
><i>If the problem of producing the atomic bomb occupied the attention and resources of the scientific establishment during World War II, it is sadly unsurprising that this very same establishment immediately moved on to the problem of adapting to the political, military, and intelligence consequences of such a disruptive innovation. We live today with the legacy of those consequences, and cannot imagine a world without them.</i><p>Nuclear weapons may have been a positive development. The doctrine of mutually-assured destruction has a lot to do with why World War III hasn't happened yet.
"Change the world," they say. As if changing the world implies that change will be better for everyone. Some would love to change the world and bring slavery back. How about we focus on changing ourselves to adapt to the world we have? I'm glad this essay touches upon this since nobody else seems to be talking about it.
Go and chess AI are SERIOUS and REAL problems. They are something that goes back thousands of years and will be remembered for thousands more.<p>AIDS and other contemporary diseases will soon be forgotten after their eradication.