
Coroutines for Go

330 points · by trulyrandom · almost 2 years ago

22 comments

alphazard · almost 2 years ago
It looks like a lot of people are missing the point here. Yes a coroutine library would be a worse/more cumbersome way to do concurrency than the go keyword.

The use case motivating all the complexity is function iterators, where `range` can be used on functions of type `func() (T, bool)`. That has been discussed in the Go community for a long time, and the semantics would be intuitive/obvious to most Go programmers.

This post addresses the next thing: assuming function iterators are added to the language, how do I write one of these iterators that I can use in a for loop?

It starts by noticing that it is often very easy to write push iterators, and builds up to a push-to-pull adapter. It also includes a general purpose mechanism for coroutines, which the adapter is built on.

If all of this goes in, I think it will be bad practice to use coroutines for things other than iteration, just like it's bad practice to use channels/goroutines in places where a mutex would do.
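As an illustration of that push-to-pull idea, here is a minimal sketch in today's Go, using a goroutine and a channel in place of the article's optimized coroutine runtime (the names countTo and pull are illustrative, not the article's API):

    package main

    import "fmt"

    // countTo is a push iterator: it calls yield for each value and stops
    // early if yield returns false.
    func countTo(n int, yield func(int) bool) {
        for i := 1; i <= n; i++ {
            if !yield(i) {
                return
            }
        }
    }

    // pull adapts a push iterator into a pull iterator of type func() (int, bool).
    // The channel-based version has the right semantics; the article's point is
    // that a coroutine runtime can provide the same thing much more cheaply.
    func pull(push func(yield func(int) bool)) (next func() (int, bool), stop func()) {
        ch := make(chan int)
        done := make(chan struct{})
        go func() {
            defer close(ch)
            push(func(v int) bool {
                select {
                case ch <- v:
                    return true
                case <-done:
                    return false
                }
            })
        }()
        next = func() (int, bool) {
            v, ok := <-ch
            return v, ok
        }
        stop = func() { close(done) }
        return next, stop
    }

    func main() {
        next, stop := pull(func(yield func(int) bool) { countTo(5, yield) })
        defer stop()
        for v, ok := next(); ok; v, ok = next() {
            fmt.Println(v)
        }
    }

Calling stop lets the consumer abandon the iteration without leaking the producing goroutine.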
Zach_the_Lizard · almost 2 years ago
I have written Go professionally for many years now and don't want to see it become something like the Python Twisted / Tornado / whatever frameworks.

The go keyword nicely prevents the annoying function coloring problem, which causes quite a bit of pain.

Sometimes in high performance contexts I'd like to be able to do something like e.g. per CPU core data sharding, but this proposal doesn't scratch those kinds of itches.
MathMonkeyMan · almost 2 years ago
Multitasking systems gave us processes.

But those were too much.

So we got threads, which are processes that share an address space, file table, and some other things. The scheduler can switch from one to the other more easily than between processes, and data can be shared between threads without needing serialization.

But those were too much.

So we got user space threads, which are logical threads of execution that are driven by a runtime entirely in user space. The runtime adds scheduling hooks into all I/O functions in the standard library, or even uses a system API like Unix signals to preempt logical threads. No system-level context switching is needed. User space threads can be tiny.

But those were too much.

So we got coroutines, which allow a programmer to define logical "threads" of execution that cooperatively interact with each other. There is no assumption about the presence of a scheduler. The programmer either writes their own event loop or invokes one from a library in a "real" logical thread.

I wonder what comes next. As far as [communicating sequential processes][1] are concerned, maybe cooperative coroutines are as low as you can go.

[1]: https://www.cs.cmu.edu/~crary/819-f09/Hoare78.pdf
djha-skin · almost 2 years ago
I thought that *the entire point* of green threads was so that I didn't *have* to use something like Python's `yield` keyword to get nice, cooperative-style scheduling.

I thought Go's `insert resumes at call points and other specific places` design decision was a very nice compromise.

This is allowing access to more and more of the metal. At what point are we just recreating Zig here? What's next? An *optional* garbage collector?
chrsig · almost 2 years ago
Coroutines are one thing that I'd probably prefer language support for rather than a library.

    x := co func() {
        var z int
        for {
            z++
            yield z
        }
    }

    y := x()

    for y := range x {
        ...
    }

or something to that effect. It's cool that it can be done at all in pure Go, and I can see the appeal of having a standard library package for it with an optimized runtime instead of complecting the language specification. After all, if it's possible to do in pure Go, then other implementations can be quickly bootstrapped.

My $0.02, as someone that uses Go at $work daily: I'd be happy to have either, but I'd prefer it baked into the language. Go's concurrency primitives have always been a strength, just lean into it.
silisili · almost 2 years ago
Not sure I'm a fan. Looking through the examples, I feel like this makes the language much harder to read and follow, but maybe that's just my own brain and biases.

Further, it doesn't seem to me to allow you to do anything you can't currently do with blocking channels and/or state.
xpressvideoz · almost 2 years ago
Reading the comments makes me feel bittersweet.

- Many people consider coroutines and green threads to be more or less the same thing, when they both have their pros and cons.

- The fact that the omission of iterators is even acceptable in the Go community saddens me. They seem to deliberately refuse any feature that might make the language even slightly more complex, in the name of simplicity. But hey, at least they retracted their opinion on generics.

I'm again reminded that Go is not my language.
pjmlp · almost 2 years ago
I guess it is great that they are finally paying attention to programming languages like CLU.

On the other side, given my experience with .NET and C++ co-routines, and Active Objects (in Symbian C++ and Active Oberon), I'm not sure if this is really something to add to Go.

Even the .NET team has acknowledged at this year's BUILD that, if they could go back in time, having the runtime handle them Go-style would probably have been a better decision, given how many developers keep having issues understanding async/await.
vaastav · almost 2 years ago
Not sure if this really is required. Most cases in Go are served well by goroutines, and for yield/resume semantics, two blocking channels are enough. This seems to add complexity for the sake of it, and I'm not sure it actually adds any new power to Go that didn't already exist.
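For reference, a minimal sketch of that two-blocking-channel yield/resume pattern (illustrative only; the function names are made up):

    package main

    import "fmt"

    // generator runs f in its own goroutine; f "yields" by sending on out and
    // then blocking until the consumer "resumes" it by sending on in.
    func generator(f func(yield func(int))) (resume func() (int, bool)) {
        in := make(chan struct{}) // resume signals
        out := make(chan int)     // yielded values
        go func() {
            <-in // wait for the first resume
            f(func(v int) {
                out <- v // yield: hand a value to the consumer...
                <-in     // ...and block until resumed again
            })
            close(out) // generator finished
        }()
        return func() (int, bool) {
            in <- struct{}{}
            v, ok := <-out
            return v, ok
        }
    }

    func main() {
        next := generator(func(yield func(int)) {
            for i := 0; i < 3; i++ {
                yield(i * i)
            }
        })
        for v, ok := next(); ok; v, ok = next() {
            fmt.Println(v) // prints 0, 1, 4
        }
    }

Each next() is a resume and each yield is a handoff back; the cost per value is two channel operations, which is roughly the per-value overhead the article is trying to reduce.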
pmarreck · almost 2 years ago
As a point of comparison, here's my demo from a recent presentation of firing up 1 million (1,000,000) Elixir (BEAM VM) threads, sending them all a "Hello!" message, and then each thread waits a random amount of time between 0 and 2 seconds to send a message back of "Process <their number> received message <the message>!"

At the same time, I am running the Erlang observer beside it to watch what happens to the CPU and memory consumption and how quickly it recovers/cleans up the garbage.

The biggest bottleneck here is the terminal's ability to keep up, but the observer seems to reflect what's happening accurately.

https://www.youtube.com/watch?v=yxyYKnashR0

The code I used: https://gist.github.com/pmarreck/4cc8f2f55a561ebce2012085a3a631f0

These features have been built into Erlang (and thus Elixir) since the 1980's. I'm sure many of you have heard of the Actor model and/or Erlang's "legendary" implementation of it, but I don't know how many have actually seen it in action with monitoring kit running.

I think it would be great for Go if it offered language-level support like this, but given the extremely resource-efficient implementation (both in spawning and runtime consumption) of threads on the BEAM VM, coupled with the ease of concurrency which comes directly from only permitting immutable values, I don't think it will ever be matched.
RcouF1uZ4gsC · almost 2 years ago
I don't think coroutines would fit in with Go. There is a huge emphasis on simplicity. Coroutines add a massive amount of complexity. In addition, goroutines provide the best parts of coroutines - cheap, easy to use, non-blocking operations - without a lot of the pain points such as "coloring" of functions and issues with using things like mutexes.

Just the question of whether one should use a goroutine or a coroutine adds complexity.
jerf · almost 2 years ago
I'm not 100% sure this is the case, but I believe the context of this goes something like this. As Go has added generics, there are proposals to add generic data structures like a Set. Generics solve almost every problem with that, but there is one conspicuous issue that remains for a data structure: you can iterate over a slice or a map with the "range" keyword, and that yields special iteration behavior, but there is no practical way to do that with a general data structure, if you consider constructing an intermediate map or slice to be an insufficient solution. Go is generally performance-sensitive enough that it is.

The natural solution to this is some sort of iterator, as in Python or other languages. (Contra frequent accusations to the contrary, the Go community is aware of other languages' efforts.)

So this has opened the can of worms of trying to create an iteration standard for Go.

Go has something that has almost all the semantics we want right now. You can also "range" over a channel. This consumes one value at a time from the channel and provides it to the iteration, exactly as you'd expect, and the iteration terminates when the channel is closed. It just has one problem, which is that it involves a full goroutine and a synchronized channel send operation for each loop of the iteration. As I said in another comment, if what is being iterated on is something huge like a full web page fetch, this is actually fine, but no concurrency primitive can keep up with the efficiency of incrementing an integer, a single instruction which may literally take an amortized fraction of a cycle on a modern processor. With generics you can even relatively easily implement filter, map, etc. on this iterator... but adding a goroutine and synchronized commit for each such element of a pipeline is just crazy.

I believe the underlying question in this post is: can we use standard Go mechanisms to implement the coroutines without creating a new language construct, then use the compiler under the hood to convert it to an efficient execution? Basically, can this problem be solved with compiler optimizations rather than a new language construct? From this point of view, the payload of this article is really only that very last paragraph; the entire rest of the article is just orientation. If so, then Go can have coroutine efficiency with the standard language constructs that already exist. Perhaps some code that is already using this goroutine pattern might speed up too, "for free".

As for the concerns people have about this complexifying Go: the entire point of this operation is to suck the entire problem into the compiler with zero changes to the spec. Not complexifying Go with a formal iteration standard is the entire point of this operation. If one wishes to complain, the correct complaint is the exact opposite one, that Go is not "simply" "just" implementing iterators as a first-class construct just like all the other languages.

Also, in the interests of not posting a full new post, note that in general I shy away from the term "coroutine", because a coroutine is what this article describes, exactly, and nothing less. To those posting "isn't a goroutine already a coroutine?", the answer is no, and in fact almost nothing called a coroutine by programmers nowadays actually is. The term got simplified down to where it just means thread or generator as Python uses the term, depending on the programming community you're looking at, but in that context we don't need to use the term "coroutine" that way, because we already *have* the words "thread" and "generator". This is what "real" coroutines are, and while I won't grammatically proscribe to you what you can and cannot say, I will reiterate that I *personally* tend to avoid the term, because the conflation between the sloppy programmer use and the more precise academic/compiler use is just confusing in almost all cases.
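A minimal sketch of the channel-backed iteration pattern described above, for a hypothetical generic Set (illustrative only, not a proposed standard):

    package main

    import "fmt"

    // Set is a toy generic set; All exposes its elements over a channel so the
    // caller can use range. Each element costs a synchronized channel send,
    // which is the inefficiency the comment points at.
    type Set[T comparable] struct{ m map[T]struct{} }

    func NewSet[T comparable](vals ...T) *Set[T] {
        s := &Set[T]{m: make(map[T]struct{})}
        for _, v := range vals {
            s.m[v] = struct{}{}
        }
        return s
    }

    func (s *Set[T]) All() <-chan T {
        ch := make(chan T)
        go func() {
            defer close(ch) // closing the channel ends the caller's range loop
            for v := range s.m {
                ch <- v // one goroutine handoff per element
            }
        }()
        return ch
    }

    func main() {
        s := NewSet(1, 2, 3)
        for v := range s.All() {
            fmt.Println(v)
        }
    }

It also leaks the producing goroutine if the consumer breaks out of the loop early, which is another reason a cheaper, cancellable primitive is attractive.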
HumblyTossed · almost 2 years ago
What? I'm a Go newb, but isn't this what goroutines and channels get you?
VWWHFSfQ · almost 2 years ago
Aside:

Lua is an absolute work of art. Everything about the tiny language, how it works, and even all the little peculiarities, just makes sense.
FZambia · almost 2 years ago
Wondering whether coroutines may be a step towards async, event-based style APIs without allocating read buffers for the entire connection, i.e. a solution to the problems discussed in https://github.com/golang/go/issues/15735. Goroutines provide a great way to have non-blocking IO with synchronous code, but when it comes to effective memory management with many connections, the Go community tends to invent raw epoll implementations: https://www.freecodecamp.org/news/million-websockets-and-go-cc58418460bb/. So my question here: can coroutines somehow bring new possibilities in terms of working with network connections?
xwowsersx · almost 2 years ago
Somewhat on topic, given that OP brought up coroutines in Python: what resources have folks used to understand Python's asyncio story in depth? I'm just now finally understanding how to use it, but it was through a combination of the official documentation and the books "Using Asyncio in Python" and "Expert Python Programming", none of which were particularly good. Normally I'd rely just on the official docs, but the docs have created much confusion, it seems, because there's a lot in them that is useful more for library/framework developers than for users. So I'm just wondering if anyone has great resources for really gaining a strong understanding of Python's asyncio, or how else you might have gone about gaining proficiency to the point where you felt comfortable using asyncio in real projects.
up2isomorphism · almost 2 years ago
The most valuable quality of a programming language committee is resisting the temptation to add any new feature, unless not having it is driving existing users away.
samsquire · almost 2 years ago
This is a thoroughly interesting topic. Thanks for the article.

I haven't thought much about iterators' link to coroutines.

As a hobby, I am working to write about a dream programming language. I happen to be really interested in parallelism, asynchrony, coroutines, multithreading and concurrency.

I want:

* to seamlessly switch between remote-thread coroutines and local-thread coroutines.

* concurrency, parallelism and async to be easy to think about, reason about, read and program.

* programs to be easy to parallelise and to be async and concurrent.

Go iterators seem to be local to a thread, but what if you want to distribute work across threads?

I've been thinking of scheduling recently.

Imagine you're a search engine company and you want to index links between URLs. How would you solve this with coroutines?

    task download-url
        for url in urls:
            download(url)

    task extract-links
        parsed = parse(document)
        return parsed

    task fetch-links
        for link in document.query("a")
            return link

    task save-data
        db.save(url, link)

How would you do control flow and scheduling and parallelism and async efficiently with this code?

* `db.save()` and `download()` are IO intensive, whereas `document.query("a")` and `parse` are CPU intensive.

* I want to handle plurality, i.e. multiple items such as multiple URLs and multiple links, trivially.

* I want to keep IO and CPU in flight at all times.

I think I want this schedule:

https://user-images.githubusercontent.com/1983701/254083968-b46485c8-fe5f-43ea-b840-d0d63dab4a51.PNG

I have a toy 1:M:N (1 scheduler thread : M kernel threads : N lightweight threads) lightweight scheduler in C, Rust and Java:

https://github.com/samsquire/preemptible-thread

This lets me switch between tasks and preempt them from user space without assistance at descheduling time.

I have a simplistic async/await state machine thread pool in Java. My scheduling algorithm is very simple.

I want things like backpressure, circuit breakers, rate limiting, load shedding, rate adjustment, queuing.
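For comparison, one conventional Go shape for the crawl example above is a channel pipeline with a worker pool per stage, sized for IO-bound versus CPU-bound work. A rough sketch, with stand-in functions and illustrative pool sizes:

    package main

    import (
        "fmt"
        "strings"
        "sync"
    )

    // Stand-ins for the real operations in the crawl example.
    func download(url string) string { return `<a href="` + url + `/next">` }
    func extractLinks(doc string) []string {
        i, j := strings.Index(doc, `"`)+1, strings.LastIndex(doc, `"`)
        return []string{doc[i:j]}
    }
    func save(url, link string) { fmt.Println(url, "->", link) }

    type page struct{ url, body string }
    type edge struct{ url, link string }

    func main() {
        urls := make(chan string)
        pages := make(chan page)
        edges := make(chan edge)

        // Feed the pipeline.
        go func() {
            for _, u := range []string{"https://a.example", "https://b.example"} {
                urls <- u
            }
            close(urls)
        }()

        // Stage 1 (IO-bound): many downloaders.
        var dl sync.WaitGroup
        for i := 0; i < 8; i++ {
            dl.Add(1)
            go func() {
                defer dl.Done()
                for u := range urls {
                    pages <- page{u, download(u)}
                }
            }()
        }
        go func() { dl.Wait(); close(pages) }()

        // Stage 2 (CPU-bound): roughly one parser per core.
        var ps sync.WaitGroup
        for i := 0; i < 4; i++ {
            ps.Add(1)
            go func() {
                defer ps.Done()
                for p := range pages {
                    for _, l := range extractLinks(p.body) {
                        edges <- edge{p.url, l}
                    }
                }
            }()
        }
        go func() { ps.Wait(); close(edges) }()

        // Stage 3 (IO-bound): persist results, drained here.
        for e := range edges {
            save(e.url, e.link)
        }
    }

Backpressure falls out of the unbuffered channels; buffering, rate limiting, and cancellation (for example via context) would layer on top.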
kragen · almost 2 years ago
i've been thinking about a closely related feature in a different context: adding block arguments, as in smalltalk or ruby or especially lobster, to a language more like c, with static types and stack allocation

i think this would be favorable for (among other things) clu-like iterators and imgui libraries, where you often want to do something like

    submenu("&Edit") {
        command("&Cut") { clip_cut(getSelection()); }
        ...
    }

this is especially useful in a context where you're heap-allocating sparingly or not at all, because the subroutine taking the block argument can stack-allocate some resource, pass it to the block, and deallocate it once the block returns; python context managers and win32 paint messages are two cases where people commonly do this sort of thing, but things like save-excursion, with-output-file, transactional memory, and gsave/grestore also provide motivation

the conventional way to do this is to package up the block into a closure, then use a full-fledged function invocation to invoke it, using a calling convention that supports closures. but i suspect a more relaxed and efficient approach is to use an asymmetric coroutine calling convention, in which the callee yields back control to its caller at the entry point to the block, and the block then resumes the callee when it finishes. so instead of merely dividing registers into callee-saved and call-clobbered, as subroutine calling conventions do, we would divide them into callee-saved upon return but upon yield containing callee values the block must have restored upon resumption; caller coroutine context registers, which are callee-saved upon return and also on yield; and call-clobbered. you also need in many cases a way for the block to safely force an early exit from the callee

this allows the caller's local variables to be in registers its blocks can use without further ado, or at least indexed off of such a register, while allowing the yield and resume operations to be, in many cases, just a single machine instruction. and it does not require heap allocation

as an example of taking this to the point of absurdity, here's an untested subroutine for iterating over a nul-terminated string passed in r0 with a block passed in r1, using a hypothetical coroutine convention which passes at least r4 through from its caller to its blocks

    itersz: push {r6, r7, r8, lr}
            mov  r7, r0
            mov  r6, r1
    1:      ldrb r0, [r7], #1
            cbz  r0, 1f
            blx  r1
            b    1b
    1:      pop  {r6, r7, r8, pc}

and here is another untested subroutine which uses it to calculate a string hash

    hashsz: push {r4, r5, r9, lr}
            movs r4, #53
            adr  r1, 1f
            blx  itersz
            mov  r0, r4
            pop  {r4, r5, r9, pc}
    1:      eor  r4, r0, r4, ror #27
            bx   lr

even in this case where both the iteration and the visitor block are utterly trivial, the runtime overhead per item (compared to putting them in the same subroutine) is evidently extremely modest; my estimate is 7 cycles per byte rather than 4 cycles per byte on in-order hardware with simple branch prediction, so, on the order of 1 ns on the hardware russ used as his reference. for anything more complex the overhead should be insignificant

it's less general than the mechanism russ proposes here (it doesn't solve the celebrated samefringe problem), but it's also an order of magnitude more efficient, because the yield and resume operations are less work than a subroutine call, though still more work than, say, decrementing a register and jumping if nonzero
pierrebai · almost 2 years ago
The examples given prompt me to say: if all you have is a Rube Goldberg hammer, everything looks like an Escheresque nail.

Sieving primes by turning functions into coroutines, parsing text by yielding characters, all with unnatural functions and state management... that's an improvement over what?
ketchupdebugger · almost 2 years ago
I'm not sure why the author is advocating for single-threaded patterns in a multithreaded environment. Not sure why he's trying to limit himself like this. The magic of goroutines is that you can use all of your cores easily, not just one. Python and Lua have no choice.
metadat · almost 2 years ago
Reasoning about and following the control flow of the proposed code hurts me inside. If Go adds function coloring (via, e.g., Python's async and/or yield concepts), I'm out, because I don't want to use this, much less encounter it in the form of a bug in some library.

Java and C++ are largely inferior for my typical purposes, but at the end of the day they work fine, are stable in terms of direction, and don't tend to repeatedly bloat the language over pedantry. If you want top-notch performance, there's already C, C++, and Rust.

I am not a fan of the function coloring shit in Python and Javascript.

I don't want the kitchen sink!