
Refact Code LLM: 1.6B LLM for code that reaches 32% HumanEval

181 points by kateklink over 1 year ago

18 comments

vikp over 1 year ago

This post is misleading, in a way that is hard to do accidentally.

- They compare the performance of this model to the worst 7B Code Llama model. The base Code Llama 7B Python model scores 38.4% on HumanEval, versus the non-Python model, which only scores 33%.
- They compare their instruct-tuned model to non-instruct-tuned models. Instruction tuning can add 20% or more to HumanEval performance. For example, WizardLM 7B scores 55% on HumanEval [1], and I've trained a 7B model that scores 62% [2].
- For another example of instruction tuning, the StableCode instruct-tuned model benchmarks at 26%, not the 20% they cite for the base model [3].
- StarCoder, when prompted properly, scores 40% on HumanEval [4].
- They do not report their base model performance (as far as I can tell).

This is interesting work, and a good contribution, but it's important to compare similar models.

[1] https://github.com/nlpxucan/WizardLM
[2] https://huggingface.co/vikp/llama_coder
[3] https://stability.ai/blog/stablecode-llm-generative-ai-coding
[4] https://github.com/huggingface/blog/blob/main/starcoder.md
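(For readers comparing these percentages: HumanEval scores are usually reported as pass@1 estimates. A minimal sketch of the unbiased pass@k estimator from the original HumanEval paper, assuming n generations per task of which c pass the tests:)

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes the unit tests."""
    if n - c < k:
        return 1.0  # every possible k-subset contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 200 samples on one task, 64 correct -> pass@1 estimate of 0.32
print(round(pass_at_k(200, 64, 1), 3))
```

The per-task estimates are then averaged over the 164 HumanEval problems to give the headline number.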
Havoc over 1 year ago

That's an impressive result.

The OpenRAIL license seems to reference some sort of limitations on safety and unethical use, but I can't see where in the repo it's spelled out precisely what the authors have in mind.
brucethemoose2 over 1 year ago

One misleading thing is the notion that you need a 1-2B model to run on commodity hardware.

This is not really true. Llama 7B runs with Vulkan/llama.cpp on ~8GB smartphones and ~12GB laptops. That will only get easier over time, as lower-RAM hardware drops out of the market and Vulkan implementations become more widespread.

For users trying to run LLMs on machines with 8GB or less, the AI Horde approach of distributed models seems much more practical anyway.
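(Rough back-of-envelope arithmetic behind the "7B on an 8GB phone" claim, with the runtime overhead figure being a loose assumption rather than a measured number:)

```python
def approx_model_ram_gb(n_params_billion: float, bits_per_weight: float,
                        overhead_gb: float = 1.0) -> float:
    """Rough RAM estimate: quantized weights plus a fixed allowance for
    KV cache and runtime buffers (the overhead value is an assumption)."""
    weight_gb = n_params_billion * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

# 7B at 4-bit quantization: ~3.5 GB of weights, ~4.5 GB total,
# which comfortably fits an 8 GB device
print(approx_model_ram_gb(7, 4))
```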
mholubowski over 1 year ago

Hey, I have a genuine question:

What is the point of a new model that isn't better than the best possible model (example: OpenAI GPT-4)?

What's the point in having a smaller model? Who cares?

---

This is a real, genuine question that I don't have a clear answer to. Excuse my ignorance, plz enlighten your boi.
smcleod over 1 year ago

Just trying out the official container image for self-hosting alongside the VSCode extension. I've got to say I'm really impressed with the scaffolding, especially for an early-stage project.

The web interface for the LLM server is especially nice and clean compared to many of the others I've tried, and it "just works". Very interested to see how this evolves.
holoduke over 1 year ago

What's the difference between 1% and 99% on HumanEval? What does it really tell you?
ldjkfkdsjnv over 1 year ago

I don't trust any benchmarks for any LLM that's not coming from FB, Google, OpenAI, Anthropic, or Microsoft. These models are so dynamic that simple benchmark numbers never tell the whole story of a model's quality. Take, for instance, a recent post by Anyscale claiming their fine-tune of Llama 2 was competitive with OpenAI's model. In reality their fine-tuned model is basically worthless; it was competitive on a single metric, for a very narrow commoditized task. It's a great way to get clicks by posting these metrics, though.
howon92 over 1 year ago

Congrats on your achievement! I'm curious about your end goal. Do you aim to beat GitHub Copilot's performance and convince devs to use Refact for code completion instead of GitHub Copilot? I want to understand the motivation behind these different code-completion models that are not solely for academic research.
umutisik over 1 year ago

The title is misleading. This model is not "SOTA for the size"; there are smaller models that do 10-18% better in absolute score. The text says it's SOTA "among similar models", where they probably compare with other models with permissive licensing.
glutamate over 1 year ago

License text: https://drive.google.com/file/d/16NqKiAkzyZ55NClubCIFup8pT2jnyVIo/view [PDF]

See the last page for restrictions.
acheong08 over 1 year ago

Say I want to fine-tune a Golang-specific model. How much $ and effort would I have to put in? Would using this as a base help in any way compared to starting from Llama?
palmer_fox over 1 year ago

All these LLMs are pretty general, if I understand correctly. Are there any efforts to create specialized models (other than for coding)? Or, what would be even better, "extract" certain areas from existing LLMs as a way to specialize them? The goal would be to drastically reduce model size so they can run on less powerful devices.

E.g. a model specializing in chemistry doesn't need to include data on world history or be able to write poetry.
Manjuuu over 1 year ago

Another model that we'll soon forget ever existed.
igammarays over 1 year ago
For the sake of not giving Microsoft and a few other tech giants immense power over the world, I really do hope the cost and efficiency of LLMs improve dramatically, until we can get GPT-4-equivalent models trained on a few graphics cards and running offline on an iPhone. Really rooting for these kinds of projects until someone makes the breakthrough.
kateklink over 1 year ago

We've finished training a new code model, Refact LLM, which took us about a month. The main use-case is blazing-fast code completion with fill-in-the-middle; additionally, the model can reply to chat prompts.

It has much better performance than all code models of similar size, and almost reaches the same HumanEval score as StarCoder while being 10x smaller.

Thanks to the small size, it can work on most modern GPUs, requiring just 3GB of RAM.

You can try self-hosting it in Refact (https://github.com/smallcloudai/refact/) and get a fast local Copilot alternative with decent suggestions.

Weights and model card: https://huggingface.co/smallcloudai/Refact-1_6B-fim.

We would love to hear your feedback!
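(For anyone unfamiliar with fill-in-the-middle: the editor context around the cursor is wrapped in special tokens and the model generates the missing span. A minimal sketch of the StarCoder-style prompt layout, assuming the `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` token names from the model card; verify the exact tokens there before relying on them:)

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt: the model is expected to
    generate the code that belongs between prefix and suffix, emitted
    after the <fim_middle> token."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Code before the cursor, then code after it
prompt = build_fim_prompt("def hello():\n    ", "\n\nhello()")
print(prompt)
```

The assembled string would then be tokenized and passed to the model; generation stops at an end-of-text token and the completion is spliced back in at the cursor.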
zcesur over 1 year ago

Tangentially related: Refact recently shared 4 bounties worth $9,000 to help improve their tech!

https://algora.io/org/smallcloudai/bounties

Disclaimer: I'm a cofounder of Algora, the platform enabling these bounties.
iFire over 1 year ago

LICENSE: bigscience-openrail-m

https://huggingface.co/smallcloudai/Refact-1_6B-fim/blob/main/README.md
notsahil over 1 year ago

Model stats:
- Architecture: LLaMA-like model with multi-query attention
- Objectives: Fill-in-the-Middle, Chat
- Context: 4096 tokens
- Pretraining tokens: 1.2T
- Finetuning tokens: 40B
- Precision: bfloat16
- GPUs: 64 NVIDIA A5000
- Training time: 28 days
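(A quick sanity check on these stats: at bfloat16, weights alone for 1.6B parameters come to roughly 3.2 GB, consistent with the ~3GB figure quoted upthread. Rough arithmetic, ignoring KV cache and runtime buffers:)

```python
n_params = 1.6e9          # parameters (from the model card)
bytes_per_param = 2       # bfloat16 = 2 bytes per weight
weights_gb = n_params * bytes_per_param / 1e9

# Weights-only footprint in GB; actual usage is somewhat higher
# once activations and KV cache are included.
print(weights_gb)  # 3.2
```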