The old classic [0]. Sufficiently advanced linear algebra is indistinguishable from magic.

[0] https://www.reddit.com/r/LocalLLaMA/comments/1bgh9h4/the_truth_about_llms
I think knowing what part of the knowledge base to delete, to get to an adequately small reasoning model, is the hard part.

Doesn't "reasoning" arise from the knowledge? How much of a brain can you cut away before you affect the reasoning? And how do you know what you've cut away, and what aspects you missed or forgot about?

We can probably train / fine-tune with synthetic data, and we'll get reasonably close, but the "reasoning" will always hit rough patches, because our training didn't include *that* kind of reasoning... and if we had to give it examples of every single kind of reasoning, then it can't move past the already-established kinds of reasoning, so it's still pattern matching.
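To make the "how much can you cut away" question concrete, here's a toy sketch (assuming PyTorch; the tiny network and the 30% magnitude-pruning amount are arbitrary choices for illustration, not a recipe for shrinking a real LLM):

    # Toy illustration of "cutting away" part of a network and watching behaviour drift.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
    x = torch.randn(8, 16)

    before = model(x)
    # Zero out the smallest 30% of weights in each linear layer (arbitrary amount).
    prune.l1_unstructured(model[0], name="weight", amount=0.3)
    prune.l1_unstructured(model[2], name="weight", amount=0.3)
    after = model(x)

    # The outputs shift even for inputs you never thought to test; you only find
    # out what was lost by probing, which is exactly the hard part.
    print((before - after).abs().mean())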
Hi HN, I'd love to get your thoughts on this one! Is anyone using an LLM, hidden inside an app, just as a reasoning 'brick' to progress workflows, decide on the best math to apply, etc.?
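Roughly what I have in mind is something like this minimal sketch (assuming the OpenAI Python client; the model name and the route_order() routing step are made-up placeholders):

    # LLM as a hidden decision "brick" inside a workflow: it only picks the next step.
    from openai import OpenAI

    client = OpenAI()
    ALLOWED_STEPS = {"refund", "escalate", "close"}

    def route_order(ticket_text: str) -> str:
        """Ask the model to choose one workflow step; fall back to a safe default."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of model
            messages=[
                {"role": "system",
                 "content": "Reply with exactly one word: refund, escalate, or close."},
                {"role": "user", "content": ticket_text},
            ],
            temperature=0,
        )
        choice = resp.choices[0].message.content.strip().lower()
        return choice if choice in ALLOWED_STEPS else "escalate"

    print(route_order("Customer was charged twice for the same order."))

The app around it never shows the model to the user; it just consumes the one-word decision.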
Eh, this is just reinventing decision trees from first principles. There's a reason why we can't have a universal decision tree: the universal concepts would need to be described to the model somehow for it to take action, that somehow is language, and our current SOTA for getting a model to understand language is to feed it a gazillion combinations of sentences and their valid continuations.

But there is indeed a challenging aspect in making these models plan for solving unexpected, novel problems not in the training set. Possibly a model that produces axioms and relations, plus a constraint solver that evaluates whether the solution is coherent, non-conflicting, and complete.
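Something like this minimal sketch of the "model emits axioms, solver checks coherence" idea (assuming the z3-solver Python bindings; the axioms here are made-up stand-ins for what the model would emit):

    # Check whether a set of model-produced axioms is internally consistent.
    from z3 import Bools, And, Implies, Not, Solver, sat

    door_open, alarm_armed, alarm_rings = Bools("door_open alarm_armed alarm_rings")

    axioms = [
        Implies(And(door_open, alarm_armed), alarm_rings),  # model-produced rule
        alarm_armed,                                        # model-produced fact
        door_open,
        Not(alarm_rings),                                   # conflicting claim
    ]

    s = Solver()
    s.add(*axioms)
    print("coherent" if s.check() == sat else "conflicting")  # -> conflicting

The model stays responsible for inventing the symbols and relations; the solver only guarantees that whatever plan comes out doesn't contradict itself.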