llama.cpp now supports StarCoder model series

6 points by wsxiaoys over 1 year ago

1 comment

wsxiaoys over 1 year ago
For the 1B version of the model, it operates at approximately 100 tokens per second when decoding with Metal on an Apple M2 Max.

llama_print_timings: load time = 114.00 ms
llama_print_timings: sample time = 0.00 ms / 1 runs (0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 107.79 ms / 22 tokens (4.90 ms per token, 204.11 tokens per second)
llama_print_timings: eval time = 1315.10 ms / 127 runs (10.36 ms per token, 96.57 tokens per second)
llama_print_timings: total time = 1427.08 ms

(Disclaimer: I submitted the PR)
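For readers who want to try this themselves, a minimal sketch of the kind of invocation that produces timings like the ones above. The model filename and prompt here are hypothetical placeholders; the -m, -p, -n, and -ngl flags are standard options of llama.cpp's main example binary, where -ngl offloads layers to the GPU and enables Metal on Apple Silicon:

    # Run a GGUF conversion of StarCoder-1B with GPU (Metal) offload.
    # models/starcoder-1b-q4_0.gguf is an assumed local path, not an official artifact.
    ./main -m models/starcoder-1b-q4_0.gguf -p "def fibonacci(n):" -n 128 -ngl 1

On completion, main prints the llama_print_timings summary quoted in the comment; the "eval time" line is the per-token decoding speed being cited (~100 tokens/second here).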