科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


Resources: HackerNews API · Hacker News (original) · Next.js

© 2025 科技回声 (Tech Echo). All rights reserved.

Show HN: Ergonomically call LLM in bulk from CLI

7 points · by thatjoeoverthr · 4 months ago
Hi!

I've found myself repeatedly writing little scripts to do bulk calls to LLMs for various tasks. For example, run some analysis on a large list of records.

There are a few "gotchas" to doing this. For example, some service providers have rate limits, and some models will not reliably return JSON (if you're asking for it).

So, I've written a command for this.

What I've tried to do here is let the user break up prompts and configuration as they see fit.

For example, you can have a prompt file which includes the API key, rate limit, settings, etc. all together, or break these up into multiple files, or keep some parts local, or override parameters.

This solves the problem of sharing settings between activities, and keeping prompts in simple, committable files of narrow scope.

I hope this can be of use to someone. Thanks for reading.
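The two gotchas the author names (provider rate limits and models that don't reliably return JSON) can be sketched in a few lines. This is not the tool's actual implementation; `call_llm` is a hypothetical stand-in for a real provider call, and the pacing/retry logic is a minimal illustration of the idea:

```python
import json
import time

def call_llm(prompt):
    # Hypothetical stand-in for a real provider call; the post does not
    # show the tool's internals. Returns a raw string response.
    return json.dumps({"summary": prompt[:10]})

def bulk_call(prompts, rate_limit_per_s=2.0, max_retries=3):
    """Call the model once per prompt, pacing requests to respect a
    rate limit and retrying responses that fail to parse as JSON."""
    min_interval = 1.0 / rate_limit_per_s
    results = []
    last_call = 0.0
    for prompt in prompts:
        for _attempt in range(max_retries):
            # Simple pacing: sleep until the minimum interval has elapsed
            # since the previous request.
            wait = min_interval - (time.monotonic() - last_call)
            if wait > 0:
                time.sleep(wait)
            last_call = time.monotonic()
            raw = call_llm(prompt)
            try:
                # Models may return invalid JSON; parse defensively.
                results.append(json.loads(raw))
                break
            except json.JSONDecodeError:
                continue
        else:
            results.append(None)  # give up on this record after max_retries
    return results
```

A real version would also handle provider error codes (e.g. HTTP 429) with backoff, but the structure, one paced, retried call per record with validated output, is the same.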

No comments yet.
