The first time I felt a mixture of awe and fear about AI was after I watched the first Terminator movie, as a teenager in the early 90s. Something just clicked for me, and I saw the eventual emergence of AI as the successor race on this planet as inevitable.<p>Fast forward to 2012, when I saw this discussed in the news: <a href="https://www.wired.com/2012/06/google-x-neural-network/" rel="nofollow">https://www.wired.com/2012/06/google-x-neural-network/</a> - that was when I decided to quit my job and enroll in a PhD program to do AI research.<p>Fast forward to 2017, when I was working at one of the first generative AI startups. I remember the moment when I played with image style transfer models and there was a realistic painting showing (among other things) a nicely detailed clock hanging on a wall. The painting was recreated as a cartoon, and the clock was rendered as a silly cartoonish clock hanging on the same spot on the wall. It seems trivial now, but at that moment I realized these models are capable of something profound (i.e. they <i>understood</i> the visual concept of a clock).<p>A couple of years later, GPT-2 came out and I read the story it generated about English-speaking unicorns. That was when I realized human-level intelligence is probably much simpler to achieve than we thought, and when my research interests switched to NLP. About a year later (early 2020), I was discussing chatbot progress with someone here on HN. We made a $100 bet where I claimed that an AI model would be able to pass the Turing test - properly conducted - by the end of 2022. Properly conducted means the model would successfully pretend to be an adult, educated, native English speaker during a 2-hour chat session. Many people back then expressed how ridiculously implausible that seemed. Today I admit I lost the bet, and I would pay up if contacted by the guy. However, note that ChatGPT was released in November 2022, and had OpenAI optimized it for passing the Turing test instead of adding all the ethical safeguards, it would have fooled a lot of people. I feel that GPT-4 would have widely succeeded, had passing the TT been its training objective.<p>Despite all that, despite my knowledge and experience as an ML researcher, I did feel the things you mentioned - the mixture of awe and fear for what's coming - after I watched the OpenAI live stream. Especially after I asked GPT-4 the same questions I had asked GPT-3.5 just a couple of weeks ago - the questions I ask job candidates applying to an ML researcher position to test both the depth and breadth of their ML knowledge. GPT-4 produced satisfactory answers where GPT-3.5 didn't. That same day I asked it to write code for me - not some toy example but the actual code I needed for work (complicated rounding schemes when quantizing an ML model to a specific number format) - and it did eventually produce correct working code (a rough sketch of the kind of rounding logic I mean is at the end of this comment). I spent roughly an hour on prompt engineering, progressive clarifications/corrections of what I was trying to do, and verifying the results, but it would have taken me at least 3 hours to write code like that, maybe more.<p>For me, it's not so much about the current abilities of GPT-4, which are obviously a big deal. It's about what's coming next, in the near future (1-3 years). We (ML researchers) have not reached a plateau with LLMs yet - they will get better. We have just started looking into video generation, which has huge potential for building very good world models. Video/audio and language modeling abilities will reinforce and amplify each other. 
Most importantly, models trained to predict the next video frame will be able to act in the physical world. When GPT-5 or GPT-6 is released inside a robot and can perform non-trivial physical actions - that's when things get really interesting. This will happen faster than people expect (again), and I am willing to bet we will see intelligent humanoid robots, capable of performing most forms of human physical labor, by the end of 2025. The only reason I would lose this bet is if humans decide they don't want it to happen (e.g. like the ethical safeguards built into GPT-4).
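<p>As an aside, since "complicated rounding schemes when quantizing a model" may sound abstract: here is a minimal, hypothetical sketch of the kind of rounding logic I mean - signed fixed-point quantization with round-half-to-even and saturation. The function name, bit widths, and sample values are made up for illustration; the code I actually needed at work handled a more specific number format and was more involved.

    import numpy as np

    def quantize_fixed_point(x, total_bits=8, frac_bits=4):
        # Hypothetical example: snap floats onto a signed fixed-point grid.
        # Scale by 2**frac_bits, round half-to-even, and saturate to the
        # representable range of a signed total_bits-bit integer.
        scale = 2.0 ** frac_bits
        qmin = -(2 ** (total_bits - 1))
        qmax = 2 ** (total_bits - 1) - 1
        q = np.clip(np.round(x * scale), qmin, qmax)  # np.round is half-to-even
        return q / scale  # dequantized values on the fixed-point grid

    weights = np.array([0.0312, -1.75, 0.4999, 3.2])
    print(quantize_fixed_point(weights))

Round-half-to-even matters here because round-half-up introduces a systematic bias when many values land exactly on a grid midpoint, which is exactly the kind of subtlety that makes this code fiddly to get right.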