Since other comments here do a great job answering your question, I'll also provide an opinion on the follow-up question, which is how <i>should</i> the terms ML and AI be used if we want to make conscious choices in how we portray our work.<p>In my opinion they should be defined as a partially overlapping Venn diagram, where the parentheses below are ML and the braces are AI:<p>( ML { ML & AI ) AI }<p>ML / Machine Learning we can define as the field of using learning techniques to train machines. It's worth pointing out that learning is <i>not</i> a necessary technique to reach even Star Trek levels of technology or beyond. Learning is a shortcut. Any task that can be learned through ML could in theory be manually specified, either by providing a complete set of step-by-step instructions to perform the procedure procedurally (where applicable) or by using more mathematically expressive paradigms like Functional Programming or even Constraint Programming <a href="https://en.wikipedia.org/wiki/Constraint_programming" rel="nofollow">https://en.wikipedia.org/wiki/Constraint_programming</a>. I just want to highlight Imperative (Procedural) Programming, Functional Programming, and Constraint (Declarative) Programming as three other paradigms that can literally do anything that ML can do, though it usually takes 10x-100x longer to apply a computer to a task using the old paradigms alone.<p>AI / Artificial Intelligence: This is pretty subjective, but I think we could frame it as "anything that a talented & well-educated, but curmudgeonly & unimaginative, computer programmer living in the year 1980 would have said a computer will never be able to do". Like "a computer will never write poetry!", "a computer will never be able to design a beautiful painting!", etc.<p>The reason I think this framing is interesting is that it highlights the existence of the parts of the Venn diagram that are "ML but not AI" as well as "AI but not ML". Here are some interesting examples:<p>"ML but not AI": One example would be using ML techniques to create a valuable product/experience which is a static object (e.g. a Word doc or a video) rather than any piece of intelligent software. AI might be involved along the way, but only as a "compiler" step, and the end result is something that no longer contains AI in it. As an example, take this hypothetical startup idea: someone uses ML to cheaply generate amazing reference books about any topic. The startup would use ML-based document information retrieval algorithms to near-instantly generate a helpful reference book, with each passage carrying a mandatory URL citation, which the startup would feed to a web scraper to fact-check every passage in every book. And they'd print the books. You could imagine how this startup might fall into the "ML but not AI" camp, because ML is a critical part of their daily business, but they are not trying to make anything "alive" or in any way intelligent - they just sell books and happen to use learning. Additionally, I think we should even consider the process of evolution (both in nature and in genetic algorithms) to be an example of ML: it created the entire plant and animal kingdoms, and I think it's undeniably an application of learning. Evolution uses variation and natural selection to perform trial and error, and the results, compounded over billions of years, have given us a biosphere so amazingly rich and complex that its complexity was/is the primary argument for Intelligent Design.
People said there <i>must</i> be a Creator because there's no way a world this incredibly complex and detailed could have come from an anarchic process without any grand captain at the helm. The honest fact is that biological evolution in nature is fricking <i>amazing</i>, and in the interest of giving credit where credit is due, I think we should count Evolution as a learning algorithm - and I'm sure we would call it a learning technique if humans had been the ones to invent it.<p>"AI but not ML": This is a really important one to highlight! There is a very common misconception that learning must be used to solve key problems that once required human-like intelligence, with a famous example being the 1997 defeat of chess champion Garry Kasparov by IBM's Deep Blue. Deep Blue used no learning; the match was won through other approaches entirely, a mix of manually coded clever algorithms and the brute-force application of a very large computer. Beating the world champion at chess (which happened in 1997) meets the standard of being something that a gifted but cantankerous programmer in 1980 would have thought computers would never be able to do, until proven wrong.<p>I think we in the field need to put some deep focus on Douglas Lenat's visionary Cyc project, and others like it, which seek to formalize human scientific and cultural knowledge for use in Automated Reasoning systems. As we've seen with GPT-3's massive tendency to lie and hallucinate, learning techniques are very hard to understand and to make safe. I think we should be investing much more in Automated Reasoning techniques in every market vertical where we can, because they use the magic of fast computers and powerful math to achieve the goals of AI, but in a way that is fully hand-crafted, interpretable, and has a sensible data architecture (allowing for something like the Dewey Decimal System as a way to find the right part of the database when you're looking up a specific fact, its confidence, and the citations providing evidence).<p>cyc.com<p><a href="https://en.wikipedia.org/wiki/Cyc" rel="nofollow">https://en.wikipedia.org/wiki/Cyc</a>
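<p>Just to make that "interpretable data architecture" point concrete, here's a minimal sketch (in Python) of the kind of hand-curated, citation-carrying fact store I have in mind. Everything in it - the Fact/KB names, the Dewey-style keys, the sample entries - is a hypothetical illustration of the idea, not Cyc's actual data model or API:

    # Hypothetical sketch of a hand-curated fact store (not Cyc's real schema).
    from dataclasses import dataclass, field

    @dataclass
    class Fact:
        key: str            # hierarchical, Dewey-Decimal-style locator
        statement: str      # human-readable assertion
        confidence: float   # 0.0-1.0, assigned by a human curator
        citations: list = field(default_factory=list)

    class KB:
        def __init__(self):
            self.facts = {}

        def add(self, fact):
            self.facts[fact.key] = fact

        def lookup(self, prefix):
            # Browse every fact filed under a given section of the hierarchy.
            return [f for k, f in sorted(self.facts.items()) if k.startswith(prefix)]

    kb = KB()
    kb.add(Fact("794.1.2", "Deep Blue defeated Garry Kasparov in a match in 1997",
                0.99, ["https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)"]))
    kb.add(Fact("794.1.3", "Deep Blue used search and hand-tuned evaluation, not learning",
                0.90, ["https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)"]))

    for fact in kb.lookup("794.1"):   # everything on the "chess programs" shelf
        print(fact.key, fact.confidence, fact.statement, fact.citations)

The point is just that every answer such a system gives can be traced back to a specific shelf, a curator-assigned confidence, and a citation - the opposite of a black box.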