
Official DeepSeek R1 Now on Ollama

234 points | by excerionsforte | 4 months ago

14 comments

throwaway323929 | 4 months ago

> DeepSeek V3 seems to acknowledge political sensitivities. Asked "What is Tiananmen Square famous for?" it responds: "Sorry, that's beyond my current scope."

From the article: https://www.science.org/content/article/chinese-firm-s-faster-cheaper-ai-language-model-makes-splash

I understand and relate to having to make changes to manage political realities; at the same time, I'm not sure how comfortable I am using an LLM that lies to me about something like this. Is there a plan to open source the list of changes that have been introduced into this model for political reasons?

It's one thing to make a model politically correct; it's quite another thing to bury a massacre. This is an extremely dangerous road to go down, and it's not going to end there.
huydotnet | 4 months ago

Looking at the R1 paper, if the benchmarks are correct, even the 1.5B and 7B models outperform Claude 3.5 Sonnet, and you can run these models on an 8-16 GB MacBook. That's insane...
ipsum2 | 4 months ago

Title is wrong: only the models distilled from Llama and Qwen are on Ollama, not the actual official MoE R1 model built on DeepSeek-V3.
sergiotapia | 4 months ago

I have an RTX 4090 and 192 GB of RAM. What size Deepseek R1 model can I run locally with this hardware? Thank you!
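A rough way to answer sizing questions like this one (my own back-of-the-envelope rule, not from the thread): a quantized model needs roughly its parameter count times bytes per weight, plus some overhead for the KV cache and runtime buffers. A minimal sketch, assuming ~20% overhead:

```python
# Rough memory estimate for a quantized LLM: parameter count x bytes per
# weight, inflated by ~20% for KV cache and runtime buffers (an assumption).
def estimate_gb(params_billions, bits_per_weight, overhead=1.2):
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 32B model at 4-bit quantization:
print(round(estimate_gb(32, 4), 1))  # ~19.2 GB, fits in a 4090's 24 GB VRAM
# A 70B model at 4-bit:
print(round(estimate_gb(70, 4), 1))  # ~42 GB, spills into system RAM
```

Anything that spills out of VRAM can still run on CPU RAM via ollama, just much more slowly.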
sandos | 4 months ago

Well, this is fun: try "how would I reverse a list in python" on the 1.5b model. It never stops thinking for me, just spewing stuff! It doesn't even seem to be repeating... fascinating!

Asking it to be terse produced this beauty:

https://gist.github.com/sandos/c6dad7d66e8a85ab943b5aeb05f0c29b
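For reference, the answer the model was circling is essentially a one-liner; a minimal sketch of the standard options:

```python
# Three idiomatic ways to reverse a list in Python.
nums = [1, 2, 3, 4]

reversed_copy = nums[::-1]        # slicing returns a new reversed list
print(reversed_copy)              # [4, 3, 2, 1]

nums.reverse()                    # in-place reversal; returns None
print(nums)                       # [4, 3, 2, 1]

lazy = list(reversed([1, 2, 3]))  # reversed() yields a lazy iterator
print(lazy)                       # [3, 2, 1]
```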
ik_9381 | 4 months ago

What size model would you recommend for an M1 Max (64 GB unified memory), with no great need for memory left over for CPU usage? (I would be running NeoVim, using ollama + R1 as the backend for a code companion.)
csomar | 4 months ago

It is amazing how I can run this on a 3-year-old laptop.

Here is the prompt/response:

    ollama run deepseek-r1:32b
    >>> write a rust function that add two numbers

<think>
Okay, I need to write a Rust function that adds two numbers. Let me think about how to approach this.

First, I know that in Rust, functions are defined using the `fn` keyword. So I'll start with `fn add_numbers(`.

Next, I need to decide what types of numbers this function will handle. The user didn't specify, so I'll make it generic. That way, it can work with any numeric type like integers or floats. To do that, I'll use generics and traits. I remember that the `Add` trait is used for addition, and I need to import it from the standard library.

Wait, how do I import traits in Rust? Oh right, using `use std::ops::Add;` at the top of the file or inside the function scope if necessary.

So the function parameters will be two values, let's call them `a` and `b`. Both should implement the `Add` trait. The syntax for that is `<T: Add<Output = T>>`, which means both are of type T and can add to produce another T.

Putting it together, the function signature becomes:

```rust
fn add_numbers<T: Add<Output = T>>(a: T, b: T) -> T {
```

Inside the function body, I just return `a + b;`. That should handle the addition.

Wait, but in Rust, sometimes you need to enable certain features. For example, does this require any compiler flags? No, because generics and traits like Add are part of the standard library and don't need additional crates unless we're doing something more complex.

Testing it out, if I call `add_numbers(2, 3)`, it should return 5. Similarly, for floats like `2.5 + 3.5`, it would give 6.0. What about different types? Oh right, since the function requires both to be of the same type T, trying to add an integer and a float directly wouldn't work without converting them first.

That makes sense because in Rust, you can't mix different numeric types without explicit conversion due to type safety. So if someone wants to add 2 and 3.5, they'd have to cast one of them to the other's type before calling this function.

I think that covers the basic requirements. The function is simple but flexible thanks to generics.
</think>

Here's a Rust function that adds two numbers:

```rust
use std::ops::Add;

fn add_numbers<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

// Example usage:
// let result = add_numbers(2, 3);
// println!("{}", result); // Output: 5
```

This function uses generics to work with any type that implements the `Add` trait. You can pass integers or floating-point numbers.

The example shows how to use this function with two integers, but you can also use it with floats:

```rust
let float_result = add_numbers(2.5, 3.5);
println!("{}", float_result); // Output: 6.0
```

Note that both parameters must be of the same type. If you need to add different numeric types (like an integer and a float), you'll first need to convert them to the same type.

====

In comparison, here is O1's response:

Here's a simple Rust function that adds two numbers:

```rust
fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

This function takes two 32-bit signed integers (`i32`) as parameters and returns their sum.

**Example usage:**

```rust
fn main() {
    let num1 = 10;
    let num2 = 15;
    let result = add(num1, num2);
    println!("The sum of {} and {} is {}", num1, num2, result);
}

fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

**Output:**

```
The sum of 10 and 15 is 25
```

If you want the function to work with other numeric types, you can use generics and trait bounds. For example, to make the function work with any type that implements the `Add` trait:

```rust
use std::ops::Add;

fn add<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}
```

**Example with floating-point numbers:**

```rust
fn main() {
    let num1 = 10.5;
    let num2 = 15.3;
    let result = add(num1, num2);
    println!("The sum of {} and {} is {}", num1, num2, result);
}

use std::ops::Add;

fn add<T: Add<Output = T>>(a: T, b: T) -> T {
    a + b
}
```

**Output:**

```
The sum of 10.5 and 15.3 is 25.8
```

This generic `add` function can now handle any type (like `i32`, `f64`, etc.) as long as it implements the `Add` trait.

**Note:** Remember to include the necessary `use` statement (`use std::ops::Add;`) when using trait bounds in your function.

Feel free to ask if you have any questions or need further assistance!
stemlord | 4 months ago

This documentation needs work. The entire project description appears to be:

> Get up and running with large language models.

Okay... I have so many questions up-front before I want to install this thing. Am I stuck with a client interface or what? System requirements? Tell me what this *is*.
bravura | 4 months ago

Question: if I want to run inference with the largest DeepSeek R1 models, what are my different paid API options?

And if I want to *fine-tune* / RL the largest DeepSeek R1 models, how can I do that?
swyx | 4 months ago

I feel like announcements like this should be folded into the main story. The work was done by the model labs; Ollama onboards the open-weights models soon after (and applause is due for how prompt they are). But we don't really need two R1 stories on the front page.
jordiburgos | 4 months ago

Which size is good for an Nvidia 4070?
buyucu | 4 months ago

Ollama is so close to greatness, but their refusal to support Vulkan is hurting them badly.
cratermoon | 4 months ago
Dupe
jeeybee | 4 months ago

Cool. To put on a bit of a tinfoil hat: how do we know the model is not tuned in ways that we in the West would consider censorship or misinformation?