Oh I'm glad that I'm not the only one who has gotten lost in the sauce by asking LLMs to recursively synthesize data into some grand insight. We want to see results even when none are apparent, and what you end up getting is bizarre theories overfit to the data with zero causal grounding. LLMs are fundamentally pattern-matching systems, and they will find "connections" between any two domains if prompted. It just reeks of confirmation bias: researchers looking for connections between art and science will find them.

The simpler explanation makes more sense: knowledge graphs naturally exhibit certain structural properties, and those properties recur across domains because of basic mathematical constraints, common organizational principles, and human cognitive patterns reflected in the data. Sure, LLMs trained on human knowledge can identify these patterns, generate plausible narratives, and draw appealing connections, but that doesn't necessarily indicate novel scientific insight, predictive power, or practical utility. (The first sketch below makes this concrete.)

If you find yourself going down a rabbit hole like this (and trust me, we've all been there), my advice is to ask "is there a simpler explanation that I'm missing?" Then start from square one: specific testable hypotheses, rigorous controls, clear success metrics, practical demonstrations, and independent validation (the second sketch below shows one cheap control). And maybe add a "complexity budget": if your explanation requires three layers of recursive AI analysis to make sense, you're probably way too deep in the sauce.
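To make the structural-properties point concrete, here's a minimal sketch (Python with networkx; the graph sizes and parameters are arbitrary assumptions, not anything from a real dataset). Two graphs grown independently by the same generative process come out structurally alike purely because of the math, with no causal link between their "domains":

    import networkx as nx

    # Stand-ins for two unrelated knowledge graphs, grown by the same
    # preferential-attachment process with different random seeds.
    art = nx.barabasi_albert_graph(n=2000, m=3, seed=1)
    science = nx.barabasi_albert_graph(n=2000, m=3, seed=99)

    for name, g in [("art", art), ("science", science)]:
        # Degree sequence, largest first, plus a couple of summary stats.
        degrees = sorted((d for _, d in g.degree()), reverse=True)
        print(name,
              "| max degree:", degrees[0],
              "| avg clustering: %.4f" % nx.average_clustering(g),
              "| density: %.5f" % nx.density(g))

Both graphs print near-identical statistics: a handful of heavy hubs, low density, similar clustering. A pattern matcher pointed at these two graphs would happily "discover" the resemblance, yet it's nothing but shared generative math.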
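And for the "rigorous controls" step, one cheap control is a permutation test against a shuffled null model: if randomly re-paired concepts score about as "connected" as the pairs your pipeline surfaced, the insight is consistent with chance. This is a sketch only; connection_score is a hypothetical stand-in for whatever metric you actually compute:

    import random

    def permutation_p_value(pairs, connection_score, n_shuffles=1000, seed=0):
        """Fraction of label-shuffled pairings scoring at least as high as
        the observed pairing. Large values mean the 'connection' is
        indistinguishable from chance."""
        rng = random.Random(seed)
        observed = sum(connection_score(a, b) for a, b in pairs)
        left = [a for a, _ in pairs]
        right = [b for _, b in pairs]
        hits = 0
        for _ in range(n_shuffles):
            rng.shuffle(right)  # break any real pairing, keep the marginals
            score = sum(connection_score(a, b) for a, b in zip(left, right))
            if score >= observed:
                hits += 1
        return (hits + 1) / (n_shuffles + 1)

If that p-value isn't small, the three layers of recursive analysis were most likely finding noise.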