> Q1: Can we imitate "thinking" from only observing behavior?<p>If you believe that the human mind is the result of Darwinian evolution, then you implicitly believe that thinking can arise from interacting with the environment. That does not exactly answer the question, but if anything, it goes beyond it.<p>On the other hand:<p>"This could also be addressed via other high-data environments:<p>1) Observing trader orders on markets and seeing if we can recover the trader's deductive reasoning and beliefs about the future. See if we can observe rational thought (if not rational behavior).<p>2) Recovering intent and emotions and desire from social network activity."<p>Before you could even begin to get anything from these streams of symbols, you would have to have some way to give them semantics - to understand what they signify to the actors. I suspect that these would be among the last things you would achieve, not a means to the end.
Question 2: This is better left unanswered. We've been building superhuman AIs for at least 300 years; they exploit reality to their advantage and have started out-competing humans for resources (including by changing laws in their favour). These things we've been building are called corporations. They're certainly smarter than any individual, richer than any human can become, and are now working hard to figure out how their humans can be replaced by more reliable components.
Q1) By the mere copying of behaviors, no. It is clear that at least the a priori intuitions of space, time, and causality are necessary for humans to obtain any knowledge. It is within that a priori framework that concepts, and then decisions, are derived.<p>Q2) This would require knowing what is sufficient & necessary for AGI.<p>Q3) Minsky, von Neumann, and, most sophisticatedly, Norbert Wiener answered this question. Read page 33 of Wiener's Cybernetics. The real insight (which I will explain in brevity, at the expense of the insight's resolution) is that microscopic phenomena involve an exorbitant quantity of fickle components ("equal particles") whose positions, accelerations, and velocities are all impossible to record, because of the sheer quantity inherent to this sphere of study. More so, there are at least second-order coupling effects between all these fickle components due to their constant interaction. To resolve this, we have developed terms of statistical understanding. Wiener cites "cloud, temperature, turbulence, etc." as concepts which are specifically defined in a statistical context.<p>Whereas in the macroscopic sphere there are far fewer particles, much further apart from each other, with near-certain initial masses, positions, and velocities.<p>The result? In microscopic systems, because their properties are defined only statistically, the distribution which predicts the future (or past) of a given system isn't unique. That is, the future state (predicted by the distribution provided by some initial system) could also have been predicted by distributions describing many other initial systems, unless by "some miracle" there is a "tightly defined statistical range".<p>So is microscopic physics condemned to this paradox?
Not necessarily: there may be some way to construct scientific objects from systems phenomena which don't require a statistical conception, because they can be precisely defined, temporally & spatially, from a microscopic point of view. However, this would require a revolution in theory, and likely in mathematics.
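The tension described above — exactly reversible microscopic laws producing irreversible-looking macroscopic behavior — can be demonstrated concretely with the Kac ring, a standard toy model. A minimal Python sketch (the particle count, marker density, and step count are arbitrary choices for illustration):

```python
import random

# Kac ring: N sites on a ring, a "ball" of value +1 or -1 at each site.
# A fixed subset of edges are markers. Each step, every ball moves one
# site clockwise and flips sign when it crosses a marked edge. The map
# is deterministic and exactly time-reversible, yet the fraction of +1
# balls relaxes toward 1/2, mimicking entropy growth.

def kac_ring_step(balls, markers):
    n = len(balls)
    new = [0] * n
    for i in range(n):
        flip = -1 if markers[i] else 1
        new[(i + 1) % n] = balls[i] * flip   # ball at i moves to i+1
    return new

def kac_ring_reverse(balls, markers):
    # Exact inverse: move counterclockwise, flip on the same markers.
    n = len(balls)
    new = [0] * n
    for j in range(n):
        i = (j - 1) % n
        flip = -1 if markers[i] else 1
        new[i] = balls[j] * flip
    return new

random.seed(0)
N = 1000
markers = [random.random() < 0.1 for _ in range(N)]  # ~10% marked edges
balls = [1] * N                                      # far from equilibrium

history = []
for t in range(50):
    history.append(sum(1 for b in balls if b == 1) / N)
    balls = kac_ring_step(balls, markers)

# The +1 fraction decays from 1.0 toward 0.5 — yet running the reverse
# map the same number of steps recovers the initial state exactly.
```

The point of the model is exactly Loschmidt's paradox in miniature: the "arrow of time" comes from the improbable initial condition plus the statistical description, not from the dynamics, which are symmetric.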
Lame. The only true question is whether this universe will die out and all information in it be lost, or whether it can be maintained indefinitely / transfer information to the next universe iteration.
Richard Feynman has a good explanation for 3.<p><a href="http://www.informationphilosopher.com/solutions/scientists/feynman/past_and_future.html" rel="nofollow">http://www.informationphilosopher.com/solutions/scientists/f...</a>
Sometimes I wonder if we are the critters related to the second question. Of course the next question is how far away are we from meeting the simulation’s spec.
> If you don't know how to grasp an object, you can't bring food to your mouth.<p>Counterpoint: hummingbirds.<p>It might sound cheap, but discussions of the "basic necessities" of anything so often just overlook obvious counterexamples.
Ask HN: what are the questions that keep you up at night?<p>For me: How do viruses really work? How do antivirals work? I’ve gone into the basics of DNA -> RNA -> Ribosome -> protein -> cell organelles -> Cells. We know how to sequence DNA and it’s pretty cheap right now. How do we simulate compiling DNA -> proteins and how they interact with each other? How do you find weak spots in a virus and figure out what proteins will inactivate them?<p>2) What is the best course of action for people who lost their jobs? Taking <20k from 401k/IRA should be free of taxes and penalties. Everyone has lost >30% of value since Feb already. Sure, most people don’t have 401ks, but those who do could weather this storm a bit better. Should we stop payments for all loans for 2 months? No mortgage, auto loans, student loans, etc. Stop those foreclosures from happening. No rent.<p>3) Will the US have the most deaths? Most likely: a large population + most states haven’t locked down. This means unless >80% of the population gets infected and recovers to build herd immunity, or we have a vaccine, we’re in deep trouble. Should we infect everyone >10 and <30, in clusters, so we build up herd immunity in weeks and get most of the young population back to work? Like, if you’re gonna be sitting at home for two weeks, may as well get sick and build up immunity while we’re at it.
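The "simulate compiling DNA -> proteins" step from the comment above can at least be sketched at the lookup level. A toy Python example of transcription and translation — the codon table here is a small excerpt of the real 64-entry table, and predicting how the resulting proteins fold and interact is vastly harder than this dictionary lookup:

```python
# Central dogma, toy version: DNA coding strand -> mRNA -> amino acids.
# CODON_TABLE is a partial excerpt of the standard genetic code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "GGC": "Gly",
    "AAA": "Lys", "GAU": "Asp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna):
    """DNA coding strand -> mRNA (T becomes U)."""
    return dna.upper().replace("T", "U")

def translate(mrna):
    """Read codons from the first AUG start codon until a stop codon."""
    start = mrna.find("AUG")
    if start == -1:
        return []           # no start codon, nothing translated
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate(transcribe("ATGTTTGGCAAATAA")))  # ['Met', 'Phe', 'Gly', 'Lys']
```

This is where the "cheap" part ends: going from the amino-acid sequence to structure and interactions is the protein-folding problem the commenter is really asking about.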
Do we already possess the hardware required to run full AI?<p>Deep fakes can be done on older hardware; we just didn't have the code. Does the same thing apply to AI?
I can imagine many people would not be comfortable answering that question, particularly depending on the company and their culture. I can easily see this type of question leading to dramatically different conclusions ranging from “this person is a deep thinker on varied topics” all the way to “this person wouldn’t be a good cultural fit”. Don’t think too deeply about the wrong topics.
#2 is explored by John Searle's <i>Chinese Room</i> argument, which is unfortunately completely misunderstood by most technologists working in AI, to their own peril.<p>This article is an excellent introduction:<p><a href="http://www.dreamsongs.com/Searle.html" rel="nofollow">http://www.dreamsongs.com/Searle.html</a><p>A relevant excerpt:<p><i>Searle goes on to contend that intentional behavior must be in response to real contingencies, not formal simulations of reality. If so, it would seem to follow that the capacity for intentional behavior is the result of what might be called phylogenic contingencies, and so would only be found in products of real evolution. In short, what Searle is saying is that, just like other biological activities such as digestion and photosynthesis, thought is intrinsically dependent on the biochemistry of its origin. Just as a formal simulation of digestion is not really digesting, a formal simulation of thought is not really thinking. And to tie this to the gender thought experiment, a formal simulation of gender is not gender, because it lacks biological structural and functional characteristics.</i><p>The "Simulation, Duplication and Evolution" section of the SEP article is also useful:<p><a href="https://plato.stanford.edu/entries/chinese-room/#SimuDuplEvol" rel="nofollow">https://plato.stanford.edu/entries/chinese-room/#SimuDuplEvo...</a>
Q1: Can we imitate "thinking" from only observing behavior?<p>No: you are measuring actions, not the process that instigated them.<p>One example: chess. You see the move, but the thinking behind that move may elude you. Even more so with a bad move: the player's thinking was that it was a good move, when it was not.<p>[EDIT format/spelling]
> Many AI researchers, myself included, believe that competitive survival of "living organisms" is the only true way to implement general intelligence.<p>My gut translates that idea as: progress is made through War, not Markets.<p>Which seems dubious, to say the least.
In this setup, the answer to Q1 (intent): no, intent can’t be inferred.<p>If intent could be inferred, the meaningful and material portion of human privacy would cease to exist.
Tldr:<p>>Q1: Can we imitate "thinking" from only observing behavior?<p>>Q2: What is the computationally cheapest "organic building block" of an Artificial Life simulation that could lead to human-level AGI?<p>>Q3: Loschmidt's Paradox: How does T-asymmetry in macroscopic phenomena arise from T-symmetric (microscopic) physical laws?
I know it’s just the moment we’re living in right now, but as I read those questions I just shrug; they couldn’t feel less relevant.<p>The ones on my mind are “how long will large populations be locked down?”, “how many people are going to die?”, and “how long until this all happens again?”