TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


© 2025 TechEcho. All rights reserved.

Llama That Thinks

1 point by torrmal, 4 months ago

1 comment

torrmal, 4 months ago
Hey HN,

So, I was casually trying to make LLaMA achieve consciousness (as one does on a Tuesday) when I stumbled upon something hilarious. It turns out you can make these language models "reason" with about as much code as it takes to write a "Hello World" program. No, really!

https://colab.research.google.com/drive/1jfsG0_XP8a5mME76F5a6xeP9uu-tvDZ3#scrollTo=fNg798sHpuqk

Here's the philosophical crisis I'm having now: when you ask an LLM to code something vs. asking it to reason about something, are we basically watching the same neural spaghetti being twirled around?

The real question: if one can make an AI model "think" with 5 lines of code, does this mean a) an LLM should be able to write its own reasoning code, b) we've been overthinking AI, c) the simulation is running low on RAM, or d) all of the above?

Would love to hear your thoughts, preferably in the form of recursive functions or philosophical paradoxes.

P.S. No LLaMAs were harmed in the making of this experiment, though several did ask for a raise.
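For readers curious what a "reasoning in five lines" wrapper might look like: the notebook isn't reproduced here, but the idea it describes (chain-of-thought prompting) can be sketched as below. `generate` is a hypothetical stand-in for a real model call (llama.cpp, an API client, etc.), not the author's actual code; only the `reason` wrapper illustrates the trick.

```python
def generate(prompt: str) -> str:
    # Stub standing in for a real LLaMA completion call.
    # A real version would send `prompt` to the model and return its text.
    return "Step 1: restate the question.\nStep 2: work it out.\nAnswer: 42"

def reason(question: str) -> str:
    # The whole "5 lines" trick: prepend a think-step-by-step instruction,
    # let the model generate intermediate steps, then keep only the text
    # after the final "Answer:" marker.
    prompt = f"Question: {question}\nLet's think step by step.\n"
    completion = generate(prompt)
    return completion.rsplit("Answer:", 1)[-1].strip()

print(reason("What is 6 * 7?"))  # the stub above always answers "42"
```

The point being made in the post is that the wrapper itself is trivial; all of the apparent "reasoning" comes from the prompt nudging the model to emit intermediate steps before its answer.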