> [Video @ 21:50] I don't know about you, I put my car in reverse and if it gets too close to something it beeps because it's aware of its surroundings, right? Does that make my car self-aware?

I wouldn't call sensing the external environment "self-awareness", even for humans - it's more about the ability to inwardly inspect our own thoughts and maintain an internal model of ourselves. If you take some entity with a train of thought and then give it that ability, I'd probably say you've given it self-awareness.

> [Video @ 25:18] Computers are constrained by something called the Church-Turing thesis, which says anything you can do on a computer of today or a computer of the future could be done on Alan Turing's original 1938 Turing machine

> Now today's computers can do things millions billions of times as fast.

A Turing machine is an abstract model of computation with unbounded time and memory. Maybe I'm nitpicking, but I feel it's being talked about here as if it were some real physical machine.

> [Video @ 26:35] You have an algorithm on your shampoo, right? "Wet, apply shampoo, lather, rinse, repeat". Unfortunately if a computer was looking at this what would happen? You would wash your hair forever wouldn't you? Because it doesn't say "rinse once", it says "rinse, repeat".

> [Article] The computer will not do anything that departs from its programming. That’s a human specialty.

I think there's a conflation here between the instructions given to some agent and its low-level underlying programming (floating point math, or chemical interactions for us).

Modern AI would likely be capable of using context to understand the intended meaning of the video's examples, or of disobeying a given instruction outright.

> [Video @ 28:50] The first one he did is something called the Turing halting problem.

You can't correctly answer a question like "What won't you answer this question with?", and the halting problem is effectively that. It's less a limitation specific to computation and more a demonstration that some tasks are expressive enough to embed this kind of self-referential paradox, and so can't be solved in all instances.
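To make the self-reference concrete, here's a minimal sketch in Python (my own framing, not from the talk) of the standard diagonal argument; the names `halts` and `paradox` are hypothetical, and `halts` is assumed to exist only so it can be contradicted:

```python
def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) would eventually halt.
    Assumed to exist only for the sake of contradiction."""
    ...

def paradox(f):
    """Does the opposite of whatever the oracle predicts about f run on itself."""
    if halts(f, f):
        while True:       # oracle says "halts", so loop forever
            pass
    else:
        return            # oracle says "loops forever", so halt immediately

# paradox(paradox) halts if and only if halts(paradox, paradox) says it doesn't,
# so no correct halts() can exist. It's the same self-reference as
# "What won't you answer this question with?".
```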
> [Video @ 30:58] Imagine trying to explain your experience to a man who has been blind since birth. [...] but duplicating the experience that you're having, the simple experience of seeing green, is not possible to describe to the blind man to the point where he can experience it also

You could probably build up the concept and associations of green in his head, but the visualization part is going to be limited by the neural pathways in and around the visual cortex never having formed properly without signals from the eyes. A sufficiently advanced future neurosurgeon could make someone experience green without their ever actually having seen it, I'd bet.

> [Video @ 31:24] Now if we can't explain it to a blind man then how are we ever going to write a computer program to have qualia? And the answer is we won't.

Consider a text-based agent that can reason and introspect. How would it describe the tokens of text it receives? I reckon much the way we describe qualia - as seemingly irreducible inputs that are hard to explain in terms of anything else.

> [Video @ 31:48] Understanding is something that computers will never do. This was established a long time ago by [Chinese room example], but does the person inside the room understand Chinese? No, he is exercising an algorithm.

I think one problem with the thought experiment is that people imagine the person in the room's procedure to be relatively tractable - something like replacing English characters with a couple of sets of intermediate characters, and then finally with Chinese ones.

While you can translate Chinese with nothing but look-ups and writing symbols (given unbounded time and memory), to do so at the level of a human Chinese speaker would currently (until machine translation improves) involve using symbols to simulate arithmetic, to simulate quantum field theory, to simulate chemical interactions, to simulate a Chinese speaker's head.

I personally believe that at that point the answer to whether the system as a whole understands Chinese has to be "yes", but at the very least it's not a clear "no".

> [Video @ 35:58] Rather I like the proposal made by Selmer Bringsjord called the Lovelace Test for strong AI. That is as follows: "Strong (or General) AI will be demonstrated when a machine's performance is beyond the explanation of its creator"

This was originally proposed in 2001, and I feel it has since been accomplished by deep learning: leaps in performance that take theory a while to catch up with and understand, agents that cheat at games in unintended ways not previously thought possible, and unexpected generalization to novel tasks.

I think this definition of strong AI is generally too lenient. Though erring in the opposite direction, given this is a Christian conference with beliefs in God's omniscience, it doesn't seem like they'd consider even humans to meet the bar - our performance presumably wouldn't be beyond our creator's explanation.

> [Video @ 37:12] All computer programs have done what they were designed to do.

That would make my job a lot easier!

> [Video @ 37:30] Can AI create music? No it can't create music, do you know what a typical scenario of creating music is? Say you want to have a computer program AI generate baroque music, what do you do? You feed it a bunch of musical scores which were written by Bach. What's it going to generate? It's going to generate a musical score which sounds like Bach. It's not going to generate Wagner's music or Schoenberg's music or any of the more modern music, it's only going to generate things that sound like Bach, it just does the interpolation. So again it's this idea of interpolation, that we have. So no a computer cannot create music.

On sufficiently high-dimensional data like music, novel examples are essentially always going to be extrapolation rather than interpolation (there's a rough sketch of this at the end of this comment). If it's accepted that a model can learn from pieces of music and produce new pieces, with the ability to vary how similar they are to the existing distribution, then I don't see why it couldn't do the same with musical styles.

> [Video @ 41:16] [...] totally splits the brain. Now if this is true, shouldn't we end up with a split personality after it was over if the mind was the same as the brain?

Under materialism there should be no direct communication between the halves of a split brain, and that's what's observed. It doesn't imply anything about whether both halves have the capability to develop personality traits, or that they'd noticeably diverge even given roughly the same experiences.
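As for the interpolation point above, here's a rough sketch (my own, not from the video or article): if we treat "interpolation" as a new sample landing inside the convex hull of previously seen samples, we can check how often that actually happens as the dimensionality of the data grows. The Gaussian data and the 500/100 sample sizes are arbitrary choices of mine.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """True if x is a convex combination of the rows of `points`, checked as a
    linear feasibility problem: w >= 0, sum(w) = 1, points.T @ w = x."""
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.append(x, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.success

rng = np.random.default_rng(0)
for dim in (2, 10, 50):
    seen = rng.standard_normal((500, dim))  # previously seen "pieces"
    new = rng.standard_normal((100, dim))   # new samples from the same distribution
    inside = sum(in_convex_hull(x, seen) for x in new)
    print(f"dim={dim:3d}: {inside}/100 new samples are interpolations")
```

In low dimensions most new samples land inside the hull, but the fraction should drop off sharply as the dimension grows, so "it only interpolates" stops being much of a restriction for something as high-dimensional as a musical score.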