Cyc was an interesting project - you might consider it the ultimate scaling experiment in expert systems. There seemed to be two ideas being explored: could you give an expert system "common sense" by laboriously hand-entering the rules for things we, and babies, learn from everyday experience, and could you make it generally intelligent by scaling it up until the ruleset was comprehensive enough?<p>Ultimately it failed, although opinions may differ. The company is still around, but from what people who've worked there have said, it seems the original goal is all but abandoned (although Lenat might have disagreed, and seemed eternally optimistic, at least in public). They appear to survive on private contracts for custom systems premised on the power of Cyc being brought to bear, when in reality these projects could be accomplished in simpler ways.<p>I can't help but see something of a parallel between Cyc, an expert system scaling experiment, and today's LLMs, a language model scaling experiment. It seems that, at heart, LLMs are also rule-based expert systems of sorts, but with the massive convenience of learning the rules from data rather than needing to have them hand-entered. Both carry/carried the same promises: "scale it up and it'll achieve AGI", and "add more rules/data and it'll gain common sense" and stop being brittle (i.e. stop having dumb failure modes rooted in missing knowledge/experience).<p>While the underlying world model and reasoning power of LLMs might be compared to an expert system like Cyc, they of course also have the critical ability to input and output language as an interface to this underlying capability (as well as perhaps to fool us a bit with their ability to regurgitate human-derived surface forms of language).
I wonder what Cyc would feel like, in terms of intelligence and reasoning power, if one somehow added an equally powerful natural language interface to it?<p>As LLMs continue to evolve, they are not just being scaled up; new functionality, such as short-term memory, is also being added, so perhaps they are going beyond expert systems in that regard - although there is/was also more to Cyc than just the massive knowledge base: a multitude of inference engines as well. Still, I can't help but wonder whether the progress of LLMs won't also peter out, unless there are some fairly fundamental changes/additions to their pre-trained transformer basis. Are we simply replicating Cyc's scaling experiment, just with a fancier natural language interface?