There are a lot of research labs and institutes around, in universities and outside, with funding from NSF, NIH, foundations, wealthy individuals, etc. So, if Altman wants to set up a research institute, okay -- that alone is not very novel.<p>It is obvious from history that good research is super tough to do. My view has been: We look at the research and mostly all we see is junk think. Then we see that, actually, research is quite competitive, so that if people really could do some much better stuff then we would be hearing about it. So, net, for a view from as high up as orbit, just fund the research, keep up the competitiveness, don't watch the details, and just lean back and notice when we get some really good things. E.g., we found the Higgs boson. We detected gravitational waves from colliding neutron stars and black holes. We set up a radio telescope with an aperture essentially the size of the whole Earth and got a direct image of a black hole. We've done big things with DNA and made progress curing cancer and other diseases. We discovered dark energy. So, we DO get results, slower than we would like, but the good results are really good.<p>How to improve that <i>research world</i>? Not so clear.<p>This sets up Altman as the head of a research institute, so he will have to borrow heavily from the best of how research is done now. That promises to be not much like YC or even much like the computer science departments, or any existing departments, at Stanford, Berkeley, CMU, or MIT. E.g., now if a prof wants to get NSF funding for an attack on AGI, he will get laughs.<p>But how to attack cancer? Not directly! Instead, work with and understand DNA and lots of details about cell biology, immunity, etc. Then, once we have some understanding of how cells and immunity work, maybe we can start to understand how some cancers work. But it is not a direct attack. The DNA work goes back before 1950 or so. The Human Genome Project started in about
1990. Lesson: We can't attack these hugely challenging projects directly; instead, we have to build foundations.<p>Then for artificial general intelligence (AGI), what foundations?<p>Okay, Altman can go to lots of heads of the best research institutes and get a crash course in Research Institute Management 101, take some notes, and follow those.<p>Uh, the usual way to evaluate the researchers is with their publications in peer-reviewed journals of original research. Likely Altman will have to go along with most of that.<p>How promising is such a research institute for the goal of AGI?<p>Well, how promising were the massive sequencing of DNA, the many astounding new telescopes, the LIGO gravitational wave detector(s), the Large Hadron Collider (LHC), engineering viruses to attack cancer, settling the question of P versus NP, ...?<p>Actually, for the physics, we had some compelling math and science that said what to do. What math/science do we have to say what to do for AGI?<p>One level deeper, although maybe we should not go there and, instead, just stay with the view from orbit and trust in competitiveness, what are the prospects for AGI or any significant progress in that direction?<p>For a tiny question, how will we recognize AGI or tell it from dog, cat, dolphin, orca, or ape intelligence? Hmm.<p>For a few $billion a year, one can set up a serious research institute. For, say, $20 billion a year, one could do more.<p>If Altman can find that money, then it will be interesting to see what he gets.<p>I would warn: (A) At present, the pop culture seems to want to accept nearly any new software as <i>artificial intelligence</i> (AI). A research institute should avoid that nonsense. (B) From what I've seen in AI, for AGI I'd say first throw away everything done for <i>AI</i> so far. In particular, discard all current work on <i>machine learning</i> (ML) and <i>neural</i> anything.<p>Why? Broadly, ML and neural nets show no promise of having anything at all significant to do with AGI. For ML, sure, some really simple fitting going back 100 years, even back to Gauss, could be useful, but that is now ancient stuff (for what I mean by that fitting, see the little sketch at the end). The more recent stuff, for AGI, f'get about it. For neural nets, maybe they could have something to do with some of the low-level parts of the eye of an insect -- really low-level stuff, not part of <i>intelligence</i> at all. Otherwise the <i>neural</i> stuff is essentially more <i>curve fitting</i>, and there's no chance of AGI making significant use of that. Sorry, guys, it ain't curve fitting. And it wasn't <i>rules</i>, either.<p>Finally, mostly in science we try to proceed mathematically, and the best successes, especially in physics, have come this way. Now for AGI, what will be the role of math, that is, of theorems and proofs, and what the heck will the theorems be about, especially with what assumptions and generally with what sorts of conclusions?<p>My guess: In a few years the consensus will be (1) AI is essentially 99% hype, 0.9% water, and the rest, maybe, if only by accident, some value. (2) The work of the institute on AGI will be seen as just a waste of time, money, and effort. (3) Otherwise the work of the institute will be seen as not much different from existing work at Stanford, Berkeley, CMU, MIT, etc. (4) Nearly all the funding will dry up; the institute will get a new and less ambitious charter, shrink, join a university, and largely f'get about AGI.
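<p>Footnote on the remark above about simple fitting going back to Gauss: what I have in mind is ordinary least squares. A minimal sketch in Python with NumPy -- the noisy quadratic data, the model, and the library choice are just my own illustration, not anything from Altman's institute or from current AI work:

    # Ordinary least squares, essentially Gauss's method: fit a quadratic
    # to noisy data by solving the linear least-squares problem.
    # Illustrative only; the data and model are made up for this sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 50)
    y = 2.0 + 3.0 * x - 5.0 * x**2 + rng.normal(scale=0.1, size=x.size)  # noisy quadratic

    A = np.vander(x, N=3, increasing=True)         # design matrix with columns 1, x, x^2
    coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
    print("fitted coefficients:", coef)            # roughly [2, 3, -5]

That is the sort of <i>curve fitting</i> I mean -- useful, old, and, for AGI, not the point.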