
MiniGPT-4 Inference on CPU

102 points | by maknee | almost 2 years ago

4 comments

heyitsguay · almost 2 years ago
I know it's not the main point of this, but... so many multimodal models now that take frozen vision encoders and language decoders and weld them together with a projection layer! I wanna grab the EVA02-CLIP-E image encoder and the Llama-2 33B model and do the same, I bet that'd be fun :D
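
For readers unfamiliar with the "weld frozen encoders together with a projection layer" pattern mentioned above, here is a minimal sketch of the idea in PyTorch. The module names, dimensions, and toy encoder are illustrative stand-ins, not taken from MiniGPT-4's actual code:

import torch
import torch.nn as nn

class ProjectionBridge(nn.Module):
    """Glue a frozen vision encoder to a frozen language model with a single
    trainable linear projection. The vision encoder here is a toy stand-in
    (e.g. for a CLIP image encoder); dimensions are illustrative."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Toy stand-in for a pretrained image encoder.
        self.vision_encoder = nn.Linear(3 * 224 * 224, vision_dim)
        # Freeze the encoder: only the projection layer would be trained.
        for p in self.vision_encoder.parameters():
            p.requires_grad = False
        # The one trainable piece: maps vision features into the LLM's
        # token-embedding space so they can be prepended to text embeddings.
        self.projection = nn.Linear(vision_dim, llm_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (batch, 3, 224, 224) -> flatten for the toy encoder.
        feats = self.vision_encoder(images.flatten(1))
        return self.projection(feats)

if __name__ == "__main__":
    bridge = ProjectionBridge()
    fake_images = torch.randn(2, 3, 224, 224)
    visual_tokens = bridge(fake_images)
    print(visual_tokens.shape)  # torch.Size([2, 4096])

In practice the projected features act as extra "visual tokens" fed into the frozen language decoder, which is why only the small projection layer needs training.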
pizzafeelsright · almost 2 years ago
I am not an ML expert. I want to know how to add my own documents without sending them off to a 3rd party.
quickthrower2 · almost 2 years ago
Not heard of minigpt4. Why that name? Is it claiming to be specifically a gpt4 competitor?
Der_Einzige · almost 2 years ago
Any data on inference speed? I’ve found that the non-quantized model was much faster on GPU than the quantized versions due to lower GPU utilization.
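
For anyone wanting to gather the speed data the comment above asks about, a minimal sketch of a tokens-per-second benchmark is below. The `generate_fn` argument and the dummy generator are hypothetical placeholders; swap in the actual model call (quantized or not) being compared:

import time
from typing import Callable

def tokens_per_second(generate_fn: Callable[[str, int], int],
                      prompt: str,
                      max_new_tokens: int = 128,
                      warmup_runs: int = 1,
                      timed_runs: int = 3) -> float:
    """Average throughput of generate_fn(prompt, max_new_tokens), which is
    expected to return the number of tokens it produced."""
    for _ in range(warmup_runs):
        generate_fn(prompt, max_new_tokens)
    total_tokens, total_time = 0, 0.0
    for _ in range(timed_runs):
        start = time.perf_counter()
        total_tokens += generate_fn(prompt, max_new_tokens)
        total_time += time.perf_counter() - start
    return total_tokens / total_time

if __name__ == "__main__":
    # Dummy stand-in so the sketch runs on its own.
    def dummy_generate(prompt: str, max_new_tokens: int) -> int:
        time.sleep(0.05)
        return max_new_tokens

    print(f"{tokens_per_second(dummy_generate, 'Describe this image.'):.1f} tok/s")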