This is a pretty good article.<p>I was one of the first hires on the Cyc project when it started at MCC, and I was initially responsible for the decision to abandon the Interlisp-D implementation and replace it with one I wrote on Symbolics machines.<p>Yes, back then one person could write the code base, which has long since grown and been ported off those machines. The KB is what matters anyway. I built it so different people could work on the KB simultaneously, which was unusual in those days, even though networked computing was ubiquitous at PARC (where Doug had been working, and I had too).<p>Neurosymbolic approaches are pretty important, and there's good work going on in that area. I was back in that field myself until I got dragged away to work on the climate. But I'm not sure that manually curated KBs will make much of a difference beyond bootstrapping.
I was born in the late USSR, and my father is a software engineer. We had several books that were not available to the general public (they were intended for the libraries of scientific institutions). One of those books was, as I understand now, an abridged translation of papers from some Western AI conference.<p>It contained a description of EURISKO (with claims that it not only won some game but also invented a new NAND-gate structure in silicon that industry now uses) and of other expert systems.<p>One of the expert systems mentioned (without technical details) was said to be twice as good at diagnosing cancer as the best human diagnostician at some university hospital.<p>And after that... silence.<p>I have always wondered why that expert system was not deployed in every US hospital, for example, if it was so good.<p>Now we have LLMs, but they are LANGUAGE models, not WORLD models. They predict the distribution of possible next words. The same goes for images: pixels, not world concepts.<p>Such systems look good for generating marketing texts, but by definition they cannot be used as diagnosticians.<p>Why did all of these (partial) world-model approaches die, except perhaps Cyc? Why do we have good text generators and image generators but no diagnosticians 40 years later? What happened?
I would love to see a Cyc 2.0 modeled in the age of LLMs. I think it could be very powerful, especially to help deal with hallucinations. I would love to see a causality engine built with LLMs and Cyc. I wrote some notes on it before ChatGPT came out: <a href="https://blog.jtoy.net/understanding-cyc-the-ai-database/" rel="nofollow">https://blog.jtoy.net/understanding-cyc-the-ai-database/</a>
I worked on Cyc as a visiting student for a couple of summers; I built some visualization tools to help people navigate the complex graph. But I was never quite sold on the project; some tangential learnings here: <a href="https://hyperphor.com/ammdi/alpha-ontologist" rel="nofollow">https://hyperphor.com/ammdi/alpha-ontologist</a>
Has Cyc been forgotten? Maybe it's unknown to tech startup hucksters who haven't studied AI in any real way, but it's a well-known project among both academics and informed industry folks.
From 2015 to 2019 I was working at a bot company (Myra Labs) where we were directly inspired by Cyc to create knowledge graphs and integrate them into LSTMs.<p>The frames, slots, and values were learned via an RNN for specific applications.<p>We even created a library for it called keyframe, modeled on how keyframes in animation work: the programmer specifies the bot's action states, and the model figures out the dialog in a structured way.<p>It would be interesting to resurrect that in the age of LLMs!
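To make that concrete, here is a minimal, hypothetical sketch of the keyframe idea as described above; the `Keyframe` class, `advance` function, and `slot_filler` callback are invented for illustration and are not the actual Myra Labs library.

```python
# Hypothetical sketch of the "keyframe" idea: the developer enumerates the
# bot's action states (the keyframes) and the slots each one needs; a learned
# model handles the dialog that fills them. Not the actual Myra Labs library.
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class Keyframe:
    name: str                                        # bot action state
    slots: Dict[str, Optional[str]] = field(default_factory=dict)

    def is_complete(self) -> bool:
        return all(v is not None for v in self.slots.values())

def advance(frame: Keyframe, utterance: str,
            slot_filler: Callable[[str, Dict[str, Optional[str]]], Dict[str, str]]) -> bool:
    """slot_filler stands in for the RNN/LSTM that maps text to slot values."""
    for slot, value in slot_filler(utterance, frame.slots).items():
        if slot in frame.slots:
            frame.slots[slot] = value
    return frame.is_complete()

# Usage: the developer only declares the keyframes; the dialog itself is learned.
order = Keyframe("collect_order", {"item": None, "quantity": None})
```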
The Cyc project proposed the idea of software "assistants": formally represented knowledge based on a shared ontology, plus reasoning systems that can draw on that knowledge, handle tasks, and anticipate the need to perform them. [1]<p>The lead author on [1] is Kathy Panton, who has no publications after that and zero internet presence as far as I can tell.<p>[1] Common Sense Reasoning – From Cyc to Intelligent Assistant
<a href="https://iral.cs.umbc.edu/Pubs/FromCycToIntelligentAssistant-IJHCS-LNAI3864.pdf" rel="nofollow">https://iral.cs.umbc.edu/Pubs/FromCycToIntelligentAssistant-...</a>
Back in the mid 1990s Cyc was giving away their Symbolics machines and I waffled on spending the $1500 in shipping to get them to me in Denver. In retrospect I should have, of course!
Related: Stephen Wolfram's note from when Doug Lenat passed away [0]<p>[0] <a href="https://writings.stephenwolfram.com/2023/09/remembering-doug-lenat-1950-2023-and-his-quest-to-capture-the-world-with-logic/" rel="nofollow">https://writings.stephenwolfram.com/2023/09/remembering-doug...</a>
One thing the article doesn't really speak to: the future of Cyc now that Doug Lenat has passed away. Obviously a company can continue on after the passing of a founder, but it always felt like Cyc was "Doug's baby" to a large extent. I wonder whether those who remain at Cycorp will stay as committed without him leading the charge.<p>Does anybody have any insight into where things stand at Cycorp and any expected fallout from the world losing Doug?
Cyc seemed to be the best application for proper AI, in my opinion: all the ML and LLM tricks are statistically really good, but you need to run their output through Cyc to check it for common sense.<p>I am really pleased they continue to work on this. It is a lot of work, and it needs to be done and checked manually, but once that's done the base knowledge shouldn't change much, and it will be a great common-sense check for generated content.
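A purely hypothetical sketch of that kind of common-sense check, assuming some query interface to a Cyc-like reasoner (the `ask_kb` callable below is a placeholder, not Cycorp's actual API):

```python
# Generate with an LLM, then ask a Cyc-like reasoner whether each claim is
# consistent with the KB before accepting it. ask_kb() is a placeholder.
def vet(claims, ask_kb):
    """ask_kb(claim) -> 'proved' | 'contradicted' | 'unknown'."""
    accepted, flagged = [], []
    for claim in claims:
        verdict = ask_kb(claim)
        (flagged if verdict == "contradicted" else accepted).append((claim, verdict))
    return accepted, flagged

# Claims the reasoner can neither prove nor refute are kept but marked
# 'unknown', so a human can review them instead of silently trusting the LLM.
```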
I first heard about Cyc's creator Douglas Lenat a few months back when I watched an old talk by Richard Feynman.<p><a href="https://youtu.be/ipRvjS7q1DI?si=fEU1zd6u79Oe4SgH&t=675" rel="nofollow">https://youtu.be/ipRvjS7q1DI?si=fEU1zd6u79Oe4SgH&t=675</a>
Are there any efforts to combine a knowledge base like Cyc with LLMs and the like?
Something like RAG could benefit, I suppose.<p>Have a vector for a concept match a KB entry, and so on. IDK :)
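A rough sketch of that idea, assuming placeholder `embed` and `llm` functions standing in for whatever embedding model and LLM are actually in use (the KB assertions and names are illustrative):

```python
# Embed KB assertions, retrieve the nearest ones for a query, and hand them to
# the LLM as grounding context. embed() is a stand-in for a real embedding model.
import numpy as np

KB = [
    "Every mammal is an animal.",
    "Water boils at 100 degrees Celsius at sea level.",
]

def embed(text):
    # placeholder: deterministic-ish fake vectors; swap in a real embedding model
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

KB_VECTORS = np.stack([embed(a) for a in KB])

def retrieve(query, k=2):
    q = embed(query)
    scores = KB_VECTORS @ q / (np.linalg.norm(KB_VECTORS, axis=1) * np.linalg.norm(q))
    return [KB[i] for i in np.argsort(-scores)[:k]]

def answer(query, llm):
    facts = "\n".join(retrieve(query))
    return llm(f"Known facts:\n{facts}\n\nQuestion: {query}")
```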
Cyc is one of those bad ideas that won't die and keeps getting rediscovered on HN. Lenat wasted decades of his life on it. Knowledge graphs like Cyc are labor-intensive to build and difficult to maintain. They are brittle in the face of change, and useless if they cannot represent the underlying changes in reality.
Cyc was an interesting project - you might consider it the ultimate scaling experiment in expert systems. There seemed to be two ideas being explored: could you give an expert system "common sense" by laboriously hand-entering the rules for things we, and babies, learn by everyday experience, and could you make it generally intelligent by scaling it up and making the rule set comprehensive enough?<p>Ultimately it failed, although people's opinions may differ. The company is still around, but from what people who've worked there have said, it seems the original goal is all but abandoned (although Lenat might have disagreed, and seemed eternally optimistic, at least in public). It seems they survive on private contracts for custom systems premised on the power of Cyc being brought to bear, when in reality these projects could be accomplished in simpler ways.<p>I can't help but see something of a parallel between Cyc - an expert-system scaling experiment - and today's LLMs - a language-model scaling experiment. It seems that at heart LLMs are also rule-based expert systems of sorts, but with the massive convenience of learning the rules from data rather than needing to have them hand-entered. Both have/had the same promises: "scale it up and it'll achieve AGI", and "add more rules/data and it'll have common sense" and stop being brittle (having dumb failure modes based on missing knowledge/experience).<p>While the underlying world model and reasoning power of LLMs might be compared to an expert system like Cyc, they do of course also have the critical ability to input and output language as a way to interface with this underlying capability (as well as, perhaps, to fool us a bit by regurgitating human-derived surface forms of language). I wonder what Cyc would feel like in terms of intelligence and reasoning power if one somehow added an equally powerful natural language interface to it.<p>As LLMs continue to evolve, they are not just being scaled up; new functionality such as short-term memory is being added, so perhaps they are going beyond expert systems in that regard, although there is/was also more to Cyc than just the massive knowledge base - a multitude of inference engines as well. Still, I can't help but wonder whether the progress of LLMs won't also peter out, unless there are some fairly fundamental changes or additions to their pre-trained transformer basis. Are we just replicating the scaling experiment of Cyc, this time with a fancy natural language interface?
I interviewed with them in 2018. They're still kicking as far as I know. They asked me recursive and functional programming questions.<p>I wonder if they've adopted ML yet.
Cyc was the last remaining GOFAI champion back in the day when everyone in AI was going the 'Nouvelle AI' route.<p>Eventually the approach would be rediscovered (but not recuperated) by the database field, desperate for 'new' research topics.<p>We might see a revival now that transformers can front- and back-end the hard edges of knowledge-based tech, but it remains to be seen whether scaled monolithic systems like Cyc are the right way to pair.
I have a vague memory in the 90s of a website that was trying to collect crowdsourced somewhat-structured facts about everything that would be used to build GOFAI.<p>Was trying to find it the other day and AI searches suggested Cyc; I feel like that's not it, but maybe it was? (It definitely wasn't Everything2.)
I wonder what the closest thing to Cyc is in the open-source realm right now. I know that we have some pretty large knowledge bases, like Wikidata, but what about expert-system shells or inference engines?
Interesting article, thanks.<p>> <i>Perhaps their time will come again.</i><p>That seems fairly certain, once the hype around LLMs has calmed down. I hope that Cyc's data will still be available then, ideally open source.<p>> <a href="https://muse.jhu.edu/pub/87/article/853382/pdf" rel="nofollow">https://muse.jhu.edu/pub/87/article/853382/pdf</a><p>Unfortunately paywalled; does anyone have a downloadable copy?
The first thing to do is to put LLMs to work generating large knowledge bases of commonsense knowledge in symbolic, machine-readable formats that Cyc-like projects can consume.
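A hedged sketch of that bootstrapping step, assuming an arbitrary LLM completion function; the prompt, predicate names, and `harvest` helper are illustrative rather than real CycL tooling:

```python
# Prompt an LLM for candidate commonsense assertions in a CycL-like
# s-expression form, then keep only the lines that parse. complete() stands in
# for any chat/completions call; the predicates shown are illustrative.
import re

PROMPT = (
    "List five commonsense facts about '{topic}' as s-expressions using only "
    "the predicates (isa x y), (genls x y), (capableOf x y). One per line."
)

ASSERTION = re.compile(r"^\((isa|genls|capableOf) \S+ \S+\)$")

def harvest(topic, complete):
    """complete(prompt) -> str; returns the subset of lines that look well-formed."""
    lines = complete(PROMPT.format(topic=topic)).splitlines()
    return [ln.strip() for ln in lines if ASSERTION.match(ln.strip())]

# Candidate assertions would still need human review and consistency checking
# before being merged into a curated KB.
```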