
Show HN: Tiny Code Improver

19 points · by mr_kotan · almost 2 years ago
Hey, fellow hackers! I'm excited to share my latest project, TinyCodeImprover, which has become an indispensable tool in my coding workflow.

What is TinyCodeImprover?

TinyCodeImprover leverages the power of GPT-4 to analyze and enhance your project files. By simply loading your code into the GPT-4 context, you can ask questions about your code, identify bugs, and even request GPT-4 to write code snippets across multiple files simultaneously.

The Story Behind its Creation

As a programmer, I frequently turn to GPT-4 for assistance with topics outside my expertise. However, I found the process of copying and pasting code snippets into the chat cumbersome and time-consuming. That's when I had an idea: a tool that seamlessly integrates GPT-4 into my coding environment.

A month ago, during a flight from Bangkok to Dubai, I developed the first prototype of TinyCodeImprover. It allowed me to feed project files directly to GPT-4 and request code improvements based on my specifications! It even wrote a Readme for itself – quite mind-blowing!

Refining the Process

To maximize the effectiveness of TinyCodeImprover, I discovered the importance of employing a critical approach. I created special commands, ".critic" and ".resolver," to initiate self-reflection, enabling GPT-4 to identify its own mistakes in approximately 30% of cases.

Since its inception, I've integrated TinyCodeImprover into four different projects, transforming error-fixing into an enjoyable experience, even when dealing with CSS challenges. It has proven useful not only for code but also for any type of text.
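The core idea described above — concatenating project files into the model's context so you can ask questions across all of them — can be sketched roughly like this. This is an illustrative sketch, not the project's actual code; the function name, file patterns, and prompt layout are all assumptions:

```python
from pathlib import Path

def build_context(project_dir: str, patterns=("*.py", "*.md")) -> str:
    """Concatenate matching project files into one prompt block,
    labelling each file so the model can refer to it by name."""
    root = Path(project_dir)
    parts = []
    for pattern in patterns:
        for path in sorted(root.rglob(pattern)):
            parts.append(f"--- {path.relative_to(root)} ---\n{path.read_text()}")
    return "\n\n".join(parts)

# The assembled block would then be sent as part of a chat prompt, e.g.:
# messages = [
#     {"role": "system", "content": "You are a careful code reviewer."},
#     {"role": "user", "content": build_context(".") + "\n\nFind bugs."},
# ]
```

Labelling each file with a `--- name ---` header lets the model cite specific files when proposing cross-file edits, which is what makes multi-file requests workable.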

5 comments

karsuren · almost 2 years ago
You should probably add an example for using the special '.critic' and '.resolver' commands that you are projecting as the key 'spice' of your project. There is no 'how to use' or 'how it works' for these commands in your readme.md. One would have to walk through the entire code to hopefully get an idea. I went through the main script — I have an idea of the overall flow, but I still don't know how these special commands work. Other people might run into the same issue, so additional documentation would help.
kesor · almost 2 years ago
My version of loading multiple files into the context lets the ChatGPT UI load the files on its own, when it decides that it needs/wants to read a file's content. From some experimentation, it seems each file it loads is treated as a separate interaction. Thus the token limit is much less of a problem, allowing it to load larger pieces of code — either the whole thing, or piece by piece. https://github.com/kesor/chatgpt-code-plugin
karsuren · almost 2 years ago
What is the performance of GPT-4 vs GPT-3.5 on the 'reasoning', 'reflection', 'criticism', and 'resolver' tasks in your project? I see that you have commented out GPT-3.5 and replaced it with GPT-4 in the config yaml. Was GPT-3.5's performance too bad? I don't think many people have GPT-4 API access. If this requires at least GPT-4 to be effective, it might take a while before anyone else in the community can take it up.
karsuren · almost 2 years ago
Claude has 100k context for around $2 per million tokens.

With GPT-4's 4-8k token limit, nothing but very small projects in their early phase can benefit from this. Also, GPT-4 would be far too cost-prohibitive.
devdiary · almost 2 years ago
How do you deal with the token limitation? What is the maximum size of codebase it can work on?
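The token-limit question raised in the comments can at least be estimated up front. Below is a minimal sketch of a pre-flight check; the ~4-characters-per-token ratio is a common rule of thumb for English text and code, not an exact tokenizer, and the function names and headroom fraction are illustrative assumptions:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    return max(1, len(text) // 4)

def fits_in_context(files: dict[str, str], limit: int = 8000) -> bool:
    """Check whether a set of files (name -> content) plausibly fits a
    model's context window, reserving ~25% of the window for the reply."""
    total = sum(approx_tokens(body) for body in files.values())
    return total <= limit * 3 // 4
```

For an exact count against a specific OpenAI model, a real tokenizer such as the `tiktoken` library would replace the heuristic.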