
Concurrency in Rust

282 points · by SirNoobsAlot · about 9 years ago

10 comments

gregwebs · about 9 years ago
Send + Sync are great. The downside of concurrency in Rust is:

1) There isn't transparent integration with IO in the runtime as in Go or Haskell. Rust probably won't ever do this because, although such a model scales well in general, it does create overhead and a runtime.

2) OS threads are difficult to work with compared to a nice M:N threading abstraction (which, again, is the default in Go or Haskell). OS threads lead to lowest-common-denominator APIs (there is no way to kill a thread in Rust) and some difficulty in reasoning about performance implications. I am attempting to solve this aspect by using the mioco library, although due to point #1, IO is going to be a little awkward.
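A minimal sketch of what that 1:1 model looks like in practice (illustrative only, not from the article or the comment above): std::thread::spawn hands a Send + 'static closure to a plain OS thread, and anything shared across the boundary needs a thread-safe handle such as Arc.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send + Sync, so a clone of the handle can cross
    // the boundary created by thread::spawn.
    let shared = Arc::new(vec![1, 2, 3]);
    let worker = Arc::clone(&shared);

    // thread::spawn creates a plain 1:1 OS thread; the closure must be Send + 'static.
    let handle = thread::spawn(move || worker.iter().sum::<i32>());

    // There is no way to kill the thread from outside; all we can do is join it.
    let sum: i32 = handle.join().unwrap();
    println!("sum = {}", sum);

    // std::rc::Rc, by contrast, is neither Send nor Sync, so swapping it in
    // for Arc above would be rejected at compile time.
}
```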
Manishearth · about 9 years ago
For a more in-depth explanation of how Send and Sync work theoretically, see http://manishearth.github.io/blog/2015/05/30/how-rust-achieves-thread-safety/ or http://huonw.github.io/blog/2015/02/some-notes-on-send-and-sync/
nindalf · about 9 years ago
I think Steve Klabnik could clarify this, but the book at that link is in the process of being rewritten, so it might be worth waiting until that's done. I personally found it slightly difficult to follow compared to other options, like the soon-to-be-published Programming Rust.
jonreem · about 9 years ago
Another thing to know about Rust concurrency is that it supports safe "scoped" threads, i.e. threads which can hold plain references to their parent thread's stack.

This makes it very easy to write, for instance, a concurrent in-place quicksort (this example uses the scoped-pool crate, which provides a thread pool supporting scoped threads):

```rust
extern crate scoped_pool; // scoped threads
extern crate itertools;   // generic in-place partition
extern crate rand;        // for choosing a random pivot

use rand::Rng;
use scoped_pool::{Pool, Scope};

pub fn quicksort<T: Send + Sync + Ord>(pool: &Pool, data: &mut [T]) {
    pool.scoped(move |scoped| do_quicksort(scoped, data))
}

fn do_quicksort<'a, T: Send + Sync + Ord>(scope: &Scope<'a>, data: &'a mut [T]) {
    scope.recurse(move |scope| {
        if data.len() > 1 {
            // Choose a random pivot and swap it to the end.
            let mut rng = rand::thread_rng();
            let len = data.len();
            let pivot_index = rng.gen_range(0, len);
            data.swap(pivot_index, len - 1);

            let split = {
                // Retrieve the pivot.
                let mut iter = data.into_iter();
                let pivot = iter.next_back().unwrap();

                // In-place partition of the remaining elements.
                itertools::partition(iter, |val| &*val <= &pivot)
            };

            // Swap the pivot back in at the split point by moving
            // the element currently there to the end of the slice.
            data.swap(split, len - 1);

            // Sort both halves (in-place!).
            let (left, right) = data.split_at_mut(split);
            do_quicksort(scope, left);
            do_quicksort(scope, &mut right[1..]);
        }
    })
}
```

In this example, quicksort will block until the array is fully sorted, then return.
pmarreck · about 9 years ago
Reading all this makes me happy about pursuing Elixir (which is, of course, a language addressing largely different use cases).
djhworld · about 9 years ago
I'm having a tough time trying to understand this snippet:

```rust
for i in 0..3 {
    thread::spawn(move || {
        data[i] += 1;
    });
}
```

What is the 'move' thing here before the ||?
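For reference: `move` changes how the closure captures its environment. Instead of borrowing the variables it uses, the closure takes ownership of them, which thread::spawn requires because the closure must be 'static (it may outlive the current stack frame). A minimal sketch of the difference:

```rust
use std::thread;

fn main() {
    let message = String::from("hello from another thread");

    // Without `move`, the closure would only borrow `message`, and the compiler
    // would reject the spawn: the borrow might not live as long as the thread.
    // With `move`, ownership of `message` is transferred into the closure,
    // so the closure is 'static and can safely run on the new thread.
    let handle = thread::spawn(move || {
        println!("{}", message);
    });

    handle.join().unwrap();
    // `message` is no longer usable here; it was moved into the closure.
}
```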
z1mm32m4n · about 9 years ago
Does Rust have a way to work with SIMD concurrency as opposed to just fork/join concurrency? Something along the lines of how OpenMP or Cilk let you do a parallel for-all?
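As an aside not taken from the thread: explicit SIMD intrinsics were still unstable in Rust at the time, but the "parallel for-all" style the comment describes maps onto data-parallel libraries such as the rayon crate. A minimal sketch, assuming rayon's parallel iterators:

```rust
extern crate rayon; // data-parallel iterators over a work-stealing thread pool

use rayon::prelude::*;

fn main() {
    let mut data: Vec<i32> = (0..1_000_000).collect();

    // Rayon splits the slice across its thread pool and applies the closure
    // to each element, much like an OpenMP/Cilk parallel for loop.
    data.par_iter_mut().for_each(|x| *x *= 2);

    assert_eq!(data[10], 20);
}
```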
armitron · about 9 years ago
This looks terribly overcomplicated/overengineered to me, to the point where I doubt many are going to adopt or switch to this style, especially when used to more convenient approaches [even the standard C++ approach, faulty as it may be].

Also note how much boilerplate one has to write, and how the code snippets bypass error handling (do it differently in "real" code, but don't show us how). Bleh.
RasmusWL · about 9 years ago
Can someone enlighten me as to why the first snippet has a data race? Won't the resulting array become [2, 3, 4]?

```rust
let mut data = vec![1, 2, 3];

for i in 0..3 {
    thread::spawn(move || {
        data[i] += 1;
    });
}
```
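For reference: the snippet is racy because three threads would mutate the same vector with no synchronization and nothing waits for them to finish, so there is no guarantee the writes happen or are observed. In practice the compiler rejects it outright, since `data` cannot be moved into more than one closure. A corrected variant in the style the book uses, wrapping the vector in Arc<Mutex<...>> and joining the threads, does produce [2, 3, 4]:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));
    let mut handles = Vec::new();

    for i in 0..3 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            // The lock serializes access, so the increments cannot race.
            let mut data = data.lock().unwrap();
            data[i] += 1;
        }));
    }

    // Wait for every worker before reading the result.
    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*data.lock().unwrap(), vec![2, 3, 4]);
}
```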
askyourmother · about 9 years ago
What about the assumption (fatally flawed decision?) that malloc never fails when Rust asks for memory? That sounds like something that could affect concurrency.