> "When your work, and what you do to avoid it, become the same thing, that’s when the breakthroughs come."<p>Perhaps I need to get a job here at HN to attain my own personal singularity.
I would love to spend several hours talking with Roger Penrose. I too think doodling with your mind on things unrelated to what you are supposed to be doing can have great results, but most people fear such thinking as not useful, or simply can't let their minds go that far.
>Penrose ultimately showed that singularities are inevitable, with the implication that black holes are common in the universe.<p>I'm not familiar with the proof. Did he show that in the THEORY of general relativity, singularities have to exist given our observations of the universe? Or is there something more to it?<p>Would it be possible or plausible that singularities actually do not exist, but rather that general relativity is simply not a correct description of space/time/matter at small scales? I am thinking of classical theory, where objects were treated as point masses/charges and infinities appeared in the solutions for point sources.
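To make that classical analogy concrete (my own illustration, not part of the original comment): the Newtonian potential of an idealized point mass already diverges at the source,

\[
  \Phi(r) = -\frac{GM}{r}, \qquad \Phi(r) \to -\infty \quad \text{as } r \to 0^{+},
\]

and that infinity is usually read as the point-mass idealization breaking down rather than as real physics, which is the same question the comment raises about the singularities of general relativity.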
I am someone who spends an enormous amount of time mindlessly skimming the internet, glued to my phone. It makes me worry that this habit will make it difficult for me to do any novel work, for lack of deep thinking.
>‘When I would talk to someone about an idea, I found myself not understanding a word they were saying.’<p>Ha! It goes both ways! Penrose gave a colloquium at my institution when I was a graduate student (physics department), and I've often reflected on how it was the most impossible-to-understand talk I've <i>ever</i> attended.<p>He had multiple overhead projectors going to different screens (and this was in the early 2000s, when wet-erase transparencies were already less common), and he kept mixing up the slide order or which projector he wanted them on. Then the geometry was so far beyond my capabilities that getting to the science was impossible.
Penrose is probably correct about the limits of AI. We're living in many simulations now, and sometimes we cannot distinguish between reality and simulations. But one thing that stands out is suffering. It's an important concept in Buddhism: Duḥkha. Suffering may be a key to consciousness. Machines can have minds, but they don't have bodies, so they'd never understand reality on their own. The danger lies more with humans: they may increasingly connect their own suffering to machines, becoming tools and slaves for the machines.
> ‘(...) They keep pushing it to later!’ His big concern about AI isn’t Judgment Day, but rather ‘that people will believe machines actually understand things’. He gives examples of symmetrical chess configurations in which humans consistently outperform computers by abstracting to a higher level.<p>This sounds a lot like the usual moving of goalposts, whereby "anything computers can do isn't AI, so AI doesn't work".<p>When AI couldn't do anything, chess was supposed to be a demonstration of human intelligence. Now that AI can play chess and other board games, suddenly it needs to solve symmetrical configurations and think "abstractly" (which is left fairly loosely defined).