Short answer: because chat is a shortcut to anthropomorphization in folks' brains. It's much easier to assign personality and intent to a chat than to a floating head, which needs to be 99.999% lifelike in order to not feel completely fake.
Chat is perhaps the cheapest implementation you could ever build. It's a linear interaction, easy to test, and arguably the easiest to encode/decode (with a fixed set of inputs too). As an added bonus, it has a familiar, well-understood interface.
I'm curious about AlphaGo, Watson, and the AI that "conquered" chess. Are those outliers, or part of a larger story? It feels to me like a contemporary history of AI would mention those milestones.
Uhm… The boom in AI with LLMs wouldn't have happened without roughly a decade of major focus on images (both generative models and DNNs that blew traditional image processing out of the water) and on planning/optimization-type problems (AlphaGo, chess, etc.). Seems incorrect to claim that chat starts every cycle.