Waterloo's CS 343 (concurrency) course uses coroutines as a stepping stone towards understanding concurrent programming.<p>The notes from the course are fantastic; coroutines start here: <a href="https://www.student.cs.uwaterloo.ca/~cs343/documents/notes.pdf#page=33" rel="nofollow">https://www.student.cs.uwaterloo.ca/~cs343/documents/notes.p...</a>
Do C++ coroutines give you a sane call stack when chained? I’ve only worked with proprietary implementations that do not, and it is an utter catastrophe for maintenance once they have infested a large code base.
Dumb question, but is this something that a dev would expect the compiler to do automatically, depending on what is deemed most efficient, or is it something that a developer can write directly into the code to make it work this way?<p>I want to be sure I understand what is going on here.<p>Can someone offer an example of where this ability would be particularly useful?
Modula-2, one of the early languages with coroutines, had a pretty simple implementation. With NEWCOROUTINE a new coroutine was created (including the heap memory that would function as a workspace for that coroutine), TRANSFER transferred control from one coroutine to another, and IOTRANSFER did the same but for interrupts. With these one could design a scheduler and off you went!<p>I built a coroutine system for a Pascal environment by implementing NEWCOROUTINE and TRANSFER. Both turned out to be pretty simple in assembly language. The workspace contained an area for the CPU registers and the stack. So TRANSFER involved saving the registers of one coroutine in its workspace and restoring the registers of the second coroutine from its workspace.
Does anybody know if there has been some research and experimentation with inlining? It is probably the most important part of an optimizing compiler, and coroutines would seem to make that very difficult or impossible.