
Re-implementing LangChain in 100 lines of code

252 points by ColinEberhardt about 2 years ago

24 comments

fbrncci about 2 years ago

I work with LangChain on a daily basis now, and so often I find myself asking: do I really need a whole LLM framework for this? At this point, the assistant I am writing would likely be more stable rewritten in pure Python. The deeper and more complex the application becomes, the more of a risk LangChain seems to pose to keeping it maintainable. But even at less complex levels, if I want to do this:

1. Have a huge dataset of documents.

2. Ask questions and have an LLM chat conversation based on these documents.

3. Be able to implement tools like math, wiki or Google search on top of the retrieval.

4. Implement memory management for longer conversations.

...it's still a lot more straightforward to maintain it in Python (see the sketch below). The only place where it becomes interesting is having agents execute async, which is not that easy to replicate, but at the moment agents are not that helpful. Not trying to diss LangChain too much here, because it's an awesome framework, but for now I can't see it as much more than a helpful tool for understanding LLMs and LLM programming.
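For concreteness, here is a minimal sketch of what points 1, 2 and 4 can look like in plain Python with no framework. It assumes the openai>=1.0 client and numpy; the model names and the "answer using only this context" prompt are placeholders, not taken from the comment:

    # Plain-Python retrieval QA: embed documents once, retrieve by cosine
    # similarity, stuff the top matches into the prompt, and keep the message
    # list around as "memory".
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    documents = ["...your documents go here..."]    # point 1: the dataset
    doc_vectors = embed(documents)

    def answer(question, history=None, k=3):
        q = embed([question])[0]
        # cosine similarity of the question against every stored document
        sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
        context = "\n\n".join(documents[i] for i in np.argsort(sims)[-k:])   # point 2
        messages = (history or []) + [{                                      # point 4: history is the memory
            "role": "user",
            "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
        }]
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return reply.choices[0].message.content

Tools (point 3) are just functions the model is told about; see the ReAct-style sketch further down the thread.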
rcme about 2 years ago

LangChain has been discussed so frequently that I thought it must be this amazing piece of software. I was recently reading about vector databases and how they can be used to provide context to LLMs. I came across a LangChain class called RetrievalQA, which takes in a vector database and a question and produces an answer based on documents stored in the vector DB. My curiosity was piqued! How did it work? Well... it works like this:

    prompt_template = """Use the following pieces of context to answer the question at the end.
    If you don't know the answer, just say that you don't know, don't try to make up an answer.

    {context}

    Question: {question}
    Helpful Answer:"""

My sense of wonder was instantly deflated. "Helpful Answer:". Seriously? I think LLMs are cool, but this made me realize people are just throwing darts in the dark here.
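Stripped of the class hierarchy, the whole chain amounts to roughly this. A sketch, not LangChain's actual code; it reuses the prompt_template above and assumes the openai>=1.0 client with a placeholder model name:

    # Fill the template with the retrieved documents and call the model; that is
    # essentially all a retrieval-QA "chain" does.
    from openai import OpenAI

    client = OpenAI()

    def retrieval_qa(question, retrieved_docs):
        context = "\n\n".join(retrieved_docs)   # documents fetched from the vector DB
        prompt = prompt_template.format(context=context, question=question)
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content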
loveparade about 2 years ago

Am I the only one who is not convinced by the value proposition of LangChain? 99% of it is interface definitions and implementations for external tools, most of which are super straightforward. I can write integrations for what my app needs in less than an hour myself, so why bring in a heavily opinionated external framework? It kind of feels like the npm "left-pad" to me. Everyone just uses it because it seems popular, not because they need it.
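As a rough illustration of how small such an integration can be, here is a hypothetical tool interface plus one tool built on the public Wikipedia search API. All names here are made up for the sketch, they are not LangChain's:

    # A "tool" is just a name, a description to show the model, and a function.
    from dataclasses import dataclass
    from typing import Callable

    import requests

    @dataclass
    class Tool:
        name: str
        description: str
        run: Callable[[str], str]

    def search_wikipedia(query: str) -> str:
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={"action": "query", "list": "search", "srsearch": query, "format": "json"},
            timeout=10,
        )
        hits = resp.json()["query"]["search"]
        # the snippet field contains light HTML highlighting; fine for a sketch
        return hits[0]["snippet"] if hits else "No results."

    TOOLS = [Tool("wikipedia", "Look up a topic on Wikipedia.", search_wikipedia)]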
cube2222 about 2 years ago

Yeah, the basics of LangChain are fairly simple, and reimplementing a loop like that in Go, including tool usage, was very straightforward when I was writing Cuttlefish[0] (a toy desktop chat app for ChatGPT that can use things like your local terminal or Google).

The magic in LangChain, though, is the ecosystem: they have integrations with tons of indexes, they have many tool implementations, etc. That is the real value of LangChain. The core ReAct loop is quite trivial (as this article demonstrates).

[0]: https://github.com/cube2222/cuttlefish
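For reference, the core ReAct loop the parent describes fits in a couple of dozen lines. This is a sketch, not the article's or LangChain's code; the prompting convention and model name are placeholders, and it assumes the openai>=1.0 client:

    # The core loop: let the model think, parse out an Action, run the tool,
    # feed the result back as an Observation, and repeat until it answers.
    import re
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "Answer the question by reasoning step by step.\n"
        "You may use an action in the form: Action: <tool>: <input>\n"
        "Available tools: calculate.\n"
        "When you know the answer, reply with: Answer: <answer>"
    )

    def run_tool(name, arg):
        if name == "calculate":
            # demo only; never eval untrusted model output like this in production
            return str(eval(arg, {"__builtins__": {}}))
        return f"Unknown tool: {name}"

    def react(question, max_turns=5):
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": question}]
        for _ in range(max_turns):
            reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
            text = reply.choices[0].message.content
            messages.append({"role": "assistant", "content": text})
            if "Answer:" in text:
                return text.split("Answer:", 1)[1].strip()
            match = re.search(r"Action: (\w+): (.*)", text)
            if match:
                observation = run_tool(match.group(1), match.group(2).strip())
                messages.append({"role": "user", "content": f"Observation: {observation}"})
        return "No answer within the turn budget."

Other tools (local terminal, Google, etc.) slot into run_tool the same way.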
adityapurwa about 2 years ago

I got the chance to try LangChain as part of a hiring process; I already had my eye on it for a personal project anyway.

The moment I tried it and went through the docs, the entire abstraction felt weird to me. I know a bit here and there about LLMs, but LangChain made me feel like I was learning something entirely new.

How agents and tools work, and how to write one, wasn't straightforward from the docs, and the idea of having an AI attach itself to an eval, or write its own error/hallucination-prone API requests based on some docs, doesn't give me a lot of confidence.

The hiring assignment specifically said to use LangChain though, so I did. But just as a glorified abstraction to call GPT and parse the NL output as JSON.

I did the actual API calls, post-processing, etc. manually, which gives me granular control. It's also cheaper in terms of token usage. You could say I ended up writing my own agent/tool that doesn't exactly match LangChain's specifications, but it works.

I guess LangChain has its use cases. But it feels pretty weird to use for me.
lxe about 2 years ago

I've been working with LangChain and LlamaIndex and noticed that they're pretty hefty abstractions on top of pretty simple concepts. I eventually ended up dropping both and simply writing the underlying code without a framework on top.
okhat about 2 years ago

There's always DSP for those who need a lightweight but powerful programming model, not a library of predefined prompts and integrations.

It's a very different experience from the hand-holding of LangChain, but it packs reusable magic into generic constructs like annotate, compile, etc. that work with arbitrary programs.

https://github.com/stanfordnlp/dsp/
ukuina about 2 years ago

I cannot praise Deepset Haystack enough for how simple it makes things compared to LangChain. Between the Preprocessor, the Reader/Retriever, and the PromptNode, the APIs, docs, and tutorials are quite easy to adapt to your use case.

Not affiliated, just a happy defector from LangChain.
saulpw about 2 years ago

I was also underwhelmed by LangChain, and started implementing my own "AIPL" (Array-Inspired Pipeline Language), which turns these "chains" into straightforward, linear scripts. It's very early days, but it already feels like the right direction for experimenting with this stuff. (I'm looking for collaborators if anyone is interested!)

https://github.com/saulpw/aipl
KevinBenSmith about 2 years ago

As someone who has created several LLM-based applications running in production, my personal experience with LangChain has been that it is too high an abstraction for steps that are, in the end, actually fairly simple.

And as soon as you want to slightly modify something to better accommodate your use case, you are trapped in layers and layers of Python boilerplate code and unnecessary abstractions.

Maybe our LLM applications haven't been complex enough to warrant the use of LangChain, but if that's the case, then I wonder how many such complex applications actually exist today.

Anyway, I came away feeling quite let down by the hype.

For my own personal workflow, a more "hackable" architecture would be much more valuable; totally fine if that means it's less "general". As a comparison, I remember the early days of Hugging Face Transformers, where they did not try to create a 100% high-level general abstraction on top of every conceivable neural network architecture. Instead, each model architecture was somewhat separate from the others, making it much easier to "hack".
zyang about 2 years ago

I'm glad I wasn't the only one who felt LangChain had a ton of redundant abstractions engineered to gain clout for VC money. Here is an example:

AnalyzeDocumentChain[1] just wraps RecursiveCharacterTextSplitter[2]. It serves no real purpose except padding the API docs.

[1] https://js.langchain.com/docs/modules/chains/other_chains/analyze_document
[2] https://js.langchain.com/docs/modules/chains/other_chains/summarization
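For what it's worth, a recursive character splitter of this kind boils down to very little code. A rough sketch, not LangChain's implementation: try the coarsest separator first, recurse on pieces that are still too long, then greedily merge pieces back up to the chunk size:

    # Split on "\n\n", then "\n", then " ", then single characters as a last resort.
    def split_text(text, chunk_size=1000, separators=("\n\n", "\n", " ", "")):
        sep, *rest = separators
        pieces = text.split(sep) if sep else list(text)
        chunks, current = [], ""
        for piece in pieces:
            if len(piece) > chunk_size and rest:
                # piece is still too big: split it with the next, finer separator
                sub_pieces = split_text(piece, chunk_size, tuple(rest))
            else:
                sub_pieces = [piece]
            for p in sub_pieces:
                if len(current) + len(sep) + len(p) <= chunk_size:
                    current = (current + sep + p) if current else p
                else:
                    if current:
                        chunks.append(current)
                    current = p
        if current:
            chunks.append(current)
        return chunks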
convexfunction about 2 years ago

If you know little about prompt engineering and want to throw together a demo of something that kind of works extremely quickly, or experiment with an LLM agent exactly as it's defined in some paper, LangChain is pretty useful.

If you want to develop a real LLM application, you're probably better off skipping the library completely, or at least fully understanding each abstraction to make sure it does everything you want before you decide to incorporate it.
d4rkp4ttern about 2 years ago

I'll repeat what I said in another thread the other day:

To put together a basic question/answer demo that didn't quite fit the LangChain templates, I had to hunt through a bunch of doc pages and cobble together snippets from multiple notebooks. Sure, the final result was under 30 lines of code, BUT: it uses fns/classes like `load_qa_with_sources_chain` and `ConversationalRetrievalChain`, and to know what these do under the hood, I tried stepping through with the debugger, and it was a nightmare of call after call up and down the object hierarchy. They have a verbose mode so you can see what prompts are being generated, but there is more to it than just the prompts. I had to spend several hours piecing together a simple flat recipe based on this object-hierarchy hunting.

It very much feels like what happened with PyTorch Lightning: sure, you can accomplish things with "just a few lines of code", but now everything is in one giant function, and you have to understand all the settings. If you ever want to do something different, good luck digging into their code. I've been there, for example trying to implement a version of k-fold cross-validation: again, an object-hierarchy mess.
justanotheratom about 2 years ago

Given that the company has a $200 million valuation, that's $2 million per line of code! Just kidding.

Still, I would like to understand the $200 million valuation of langchain.ai.
nestorD about 2 years ago

For me, LangChain is glue code between a lot of commonly used LLM building blocks and prompts.

It is great for getting a prototype 80% of the way there fast, in order to validate an idea or run something short-lived.

I suspect that if you want to go further (simpler code, better control over message length, reliability, etc.), you will be better served by implementing the functionality you need yourself.
shri_krishna about 2 years ago

For the calculator tool I suggest instead just generating JavaScript as the output with temperature set to 0 (system prompt set to something along the lines of: "Generate native JavaScript code only. Don't provide any explanations. Don't import any extraneous libraries") and then eval'ing that JavaScript code in a VM. Deno is a good candidate for this, as it has good security defaults, with access to the filesystem and network turned off unless granted. You can use something like deno-vm [1] to execute it separately from your running process, too. Setting GPT-4 as the model works even better. I have seen it perform better than Wolfram Alpha in many cases, so I wonder why OpenAI chose to integrate with Wolfram Alpha for this. GPT-4 was able to solve some really complex math problems I threw at it.

[1]: https://www.npmjs.com/package/deno-vm
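A rough Python sketch of that setup, shelling out to the deno CLI rather than using deno-vm; the prompt and GPT-4 come from the comment above, while the sandboxing flags and helper names are assumptions to verify:

    # Ask the model for plain JavaScript at temperature 0, then run it with Deno.
    # `deno run` with no permission flags grants no filesystem or network access;
    # --no-prompt makes it fail instead of asking interactively for permissions.
    import subprocess
    import tempfile

    from openai import OpenAI

    client = OpenAI()

    SYSTEM = ("Generate native JavaScript code only. Don't provide any explanations. "
              "Don't import any extraneous libraries. Print the final result with console.log.")

    def calculate(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",
            temperature=0,
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": question}],
        )
        js = resp.choices[0].message.content  # in practice, also strip any ``` fences
        with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
            f.write(js)
            path = f.name
        out = subprocess.run(["deno", "run", "--no-prompt", path],
                             capture_output=True, text=True, timeout=10)
        return out.stdout.strip()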
havercosine about 2 years ago

Personal experience: I was using LangChain and its output parsers for getting structured data. It had a very high error rate (probably the prompt was becoming too long and confusing). But it is just a prompt plus some parsing logic. I replaced it with directly asking OpenAI's GPT for JSON that matches some Rust struct / Python dataclass. The errors went down, and one extra dependency went out of the project. I tried to use its self-hosted embeddings, but the implementation (strangely) seemed tied to something called Runhouse.

Not to belittle the library, but most of it is very thin wrapper classes that reek of premature abstraction, coupled with hit-and-miss docs. At this point, given the hype, it is primarily optimized for cooking up demos quickly. But I'm not sure the valuation or production use is justified.
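A sketch of the replacement described above: skip the output parser and just ask for JSON that matches a dataclass. The dataclass, prompt wording and model name here are made-up examples, not from the comment, and it assumes the openai>=1.0 client and that the model returns bare JSON (temperature 0 and a strict prompt help):

    # Derive a rough field list from the dataclass, ask for matching JSON, parse it.
    import json
    from dataclasses import dataclass, fields
    from openai import OpenAI

    client = OpenAI()

    @dataclass
    class Invoice:
        vendor: str
        total: float
        currency: str

    def extract(text: str) -> Invoice:
        schema = ", ".join(f"{f.name}: {f.type.__name__}" for f in fields(Invoice))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,
            messages=[{"role": "user", "content":
                       f"Return only a JSON object with fields ({schema}) "
                       f"extracted from this text:\n{text}"}],
        )
        return Invoice(**json.loads(resp.choices[0].message.content))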
sia_m about 2 years ago

There are a few ways to use LangChain. Firstly, the docs are a mess. What I personally did was follow the notebook from the OpenAI cookbook on embedding a code base, and one on embedding the docs, then query over that with GPT-4.

After a while of doing that, I realised, like many others, that it's too high an abstraction. In the end I think you're better off just looking at their source code, seeing how they've implemented the stuff in normal Python, and then adapting it to your own needs.
lynx23 about 2 years ago

Hmm, this is great food for thought! I am working on a Haskell-based REPL for GPT, called GPTi[1], which might benefit from this approach.

[1] https://github.com/mlang/gpti
rahimnathwani about 2 years ago

Related comments about ReAct: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=https%3A%2F%2Farxiv.org%2Fabs%2F2210.03629&sort=byPopularity&type=comment
fchief about 2 years ago

We ported the core of LangChain to Ruby, and while it is way more than 100 lines, I would give similar feedback to the author's. Here is the repo if anyone is interested: https://github.com/BoxcarsAI/boxcars
zbyforgotpass about 2 years ago

The problem with all these new fields is that the first code to get popular is from people who are good at marketing, not those who are good at programming.
valyagolev about 2 years ago

We're still in the stage of LLM adoption where we can have "eye-opening" simple discoveries weekly. LangChain has momentum because of this, as a library of simple ideas. This period will end, and if they don't figure out the next step, they're gone.
sia_m about 2 years ago

The same goes for gpt-index.