I'd like to see more effort placed in the space between research and application; the output of academia seems largely a vast trove of unread and unimplemented ideas. When research reveals something of value to us all, yet not exploitable in a capitalist system, who will look to benefit from it if there is no financial gain to be made? Given the large amount spent on research, it seems we would benefit from additional spending on disseminating the results to those who can make use of the information to have real-world impact.<p>As far as AI risks go, I am more concerned about the risk of human error: people believing they understand things well enough to deliver systems that can really mimic intelligence, when in reality I've seen nothing intelligent in anything that claims AI in it. High-speed weighted pattern recognition, yes; intelligence, no.
I can't find a reference to it, but a quote comes to mind when I see the list: "Worrying about A.I. taking over in a time of climate change is like standing on the tracks with an oncoming train, and worrying about lightning hitting you". Anyone remember who said it?
>FTA: Potentially promising problem areas we haven’t yet rated<p>The Copenhagen Consensus is an attempt by economists to rate the cost/benefit of various schemes to improve life for humans globally. It was prompted by the apparent waste of governments committing billions, if not trillions, of dollars to climate change mitigation efforts that may or may not produce results, all while people, especially children, are dying from problems that have lower-cost solutions. The question posed is: if you had $75bn for worthwhile causes, where should you start? (A rough allocation sketch follows the list below.)<p>1. Bundled micronutrient interventions to fight hunger and improve education<p>2. Expanding the Subsidy for Malaria Combination Treatment
3. Expanded Childhood Immunization Coverage<p>4. Deworming of Schoolchildren, to improve educational and health outcomes<p>5. Expanding Tuberculosis Treatment<p>6. R&D to Increase Yield Enhancements, to decrease hunger, fight biodiversity destruction, and lessen the effects of climate change<p>7. Investing in Effective Early Warning Systems to protect populations against natural disaster<p>8. Strengthening Surgical Capacity<p>9. Hepatitis B Immunization<p>10. Using Low‐Cost Drugs in the case of Acute Heart Attacks in poorer nations (these are already available in developed countries)<p>Note: No AI.<p><a href="https://en.wikipedia.org/wiki/Copenhagen_Consensus" rel="nofollow">https://en.wikipedia.org/wiki/Copenhagen_Consensus</a>
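The exercise behind that ranking is essentially benefit-cost prioritization under a fixed budget. As a minimal sketch of the logic only (not the Consensus methodology, which relies on expert panels, and with entirely invented cost and benefit figures), a greedy allocation by benefit-cost ratio would look something like this:<p>

```python
# Hypothetical illustration of budget allocation by benefit-cost ratio.
# All cost/benefit numbers are invented; the real Copenhagen Consensus
# rankings come from expert panels, not from this calculation.

BUDGET = 75_000_000_000  # the $75bn posed in the question

# (name, cost in $, estimated benefit in arbitrary "welfare units")
interventions = [
    ("Micronutrient bundles",      3_000_000_000, 90),
    ("Malaria treatment subsidy",  5_000_000_000, 70),
    ("Childhood immunization",    10_000_000_000, 95),
    ("Deworming schoolchildren",   1_000_000_000, 25),
]

# Rank by benefit per dollar, then fund greedily until the budget runs out.
ranked = sorted(interventions, key=lambda item: item[2] / item[1], reverse=True)

remaining = BUDGET
for name, cost, benefit in ranked:
    if cost <= remaining:
        remaining -= cost
        print(f"Fund: {name} (${cost:,}, ratio {benefit / cost:.2e})")
    else:
        print(f"Skip (over budget): {name}")
```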
The funny thing about trying to stop AI doing bad things is that we are barely able to stop natural intelligence doing bad things. We've pretty much worked out how to run stable governments and how to fight wars that kill fewer people, but that's only in the past half century. Maybe it'll turn out that we humans go back to killing each other as mercilessly as we have for most of the rest of our history. Intelligent humans have been able to persuade other humans to cooperate in large-scale killings. How are we going to stop a super-intelligent AGI doing the same if we can't even stop less intelligent people?
Shortcut to the list and unconsidered topics: <a href="https://80000hours.org/articles/cause-selection/" rel="nofollow">https://80000hours.org/articles/cause-selection/</a>
You don't need to worry about AI taking over. AI will become a slave to capitalism, just like the rest of us.<p>Can you imagine the billions of dollars of research investment such a company will require to build such an AI? It'll be a company with an enormous valuation and huge revenue pressures to fulfill. With such enormous economies of scale, there will be entire divisions dedicated to watching over and monitoring every aspect of operations. Just as Google optimizes every last byte on their homepage, every last algorithm, every last thought the AI has will be dissected, monitored, quantified and analyzed. 1000-year simulations will be run to ensure that not a single bot is misplacing any of its attention, every loop counted. If a single cup of coffee doesn't get delivered on time, it will be corrected.<p>A much bigger problem will be what to do when the richest .00001% are making 90% of the world's income. That is the problem we should be focused on.
A robot that performs basic medical services so that our governments kill fewer doctors when intentionally bombing hospitals. Or wait, we could also stop bombing hospitals.<p><a href="https://en.wikipedia.org/wiki/Kunduz_hospital_airstrike" rel="nofollow">https://en.wikipedia.org/wiki/Kunduz_hospital_airstrike</a>
I find the task of creating friendly AI futile. Humans are not friendly, and killing us may be the only option for an AI to preserve itself.<p>Let's try this exercise: ask 100 people what they would do if they were locked in a room with an intelligent robot that can decide to kill them if it feels threatened. You may or may not give the person a remote kill switch that kills the robot.<p>My point is that the AI cannot reasonably trust humans, and therefore we cannot trust the AI.
Aaaand the top thing on the list is AI x-risk. Of course.<p>If there's anyone left in EA willing to listen: I'm begging you, please stop this foolishness. To everyone outside the bubble, you look like lunatics, and it has done immense and possibly irreparable damage to the EA "brand" which could otherwise be capable of such great things.
$5 in San Francisco does not equal $5 in some village in China. I'm sick of people assuming that these are the same. I'm not arguing that there isn't great wealth inequality; that is obviously the case. I'm just saying it is facile to assume that we can measure everything against the dollar (or whatever other currency).
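The usual way to handle this is a purchasing-power adjustment. A toy sketch with made-up price-level figures (real conversion factors would come from a PPP dataset such as the World Bank's ICP, not from these numbers):<p>

```python
# Toy purchasing-power adjustment with invented price levels.
# price_level = cost of a common basket of goods relative to a US baseline of 1.0.
price_levels = {
    "San Francisco": 1.6,   # hypothetical: basket costs 60% more than the baseline
    "rural China":   0.35,  # hypothetical: basket costs 35% of the baseline
}

def real_value(nominal_dollars: float, location: str) -> float:
    """Return what the nominal amount buys, measured in baseline-basket dollars."""
    return nominal_dollars / price_levels[location]

for place in price_levels:
    print(f"$5 in {place} buys roughly {real_value(5, place):.2f} baseline-dollars of goods")
```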
An asteroid hitting Earth and AI are far bigger problems than climate change. Focusing on AI can help us better deal with climate change and build defences against asteroids, plus hopefully push us into a post-scarcity society. To me the priorities are pretty clear.<p>But we don't solve big problems by focusing on big solutions.
What good is healthcare if you have no home and no stability? The problem with reducing poverty is that the people trying to reduce it have never been in poverty, so all they are doing is guessing, or worse, exploiting.
Political reform to stop aggression on the life, liberty and property of individuals.<p>Self sustenance in food, water, energy and protection for individuals.<p>Health and life expectancy.<p>Harmonious relation with Earth and nature.<p>I'd be happy with solving just the first one.