A tangent perhaps, but I've felt this with AI, albeit with nuanced differences. In a nutshell, there's a weird tension between being somewhat dismissive of it after failed personal expectations and a desire to stay as objective about it as possible, a "fear of missing out" if you will. So I consume news about AI and engage with the "community" online, but I constantly struggle to separate signal from noise, "truth" from bullshit. It's not so much that bad news gets me down as that sensationalist hype makes me irrationally upset.<p>The whole OpenAI Strawberry thing is a great example. There was vague hype and speculation for months before it was actually released, and in the end we got a model that is objectively impressive on benchmarks and objectively better at certain tasks, but that otherwise, to my mind, fell well short of the expectations set by the more "enthusiastic" commentators.<p>Now, normally, if someone on the internet is prone to hype and sensationalism, you learn to ignore them and preserve your mental health. The problem is that sometimes OpenAI employees (and employees of other frontier labs) can't resist making sensationalist claims themselves, e.g. that we're a couple of years away from AGI.<p>On the one hand, it's smart and reasonable to bet on expert opinion; on the other, the experts are making very bold claims that I struggle to see coming to fruition. The idea that you could simply scale an LLM until we achieved AGI has always felt a little suspect to me.<p>Curious how others have felt about this.