All the new asyncio stuff in Django is awesome; they are doing a phenomenal job retrofitting it onto an inherently sync-designed framework.<p>One thing to note: it's been possible to build these sorts of things with Django by using Gevent and Gunicorn for over a decade, and it works well.<p>In many ways I wish Gevent had been adopted by Python rather than asyncio.
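For context, a minimal sketch of the Gevent + Gunicorn setup being described (the project name and worker counts are illustrative placeholders, not a recommended production config):

```shell
# Serve a plain sync Django project with gevent workers: each worker
# multiplexes many concurrent connections over greenlets instead of
# dedicating an OS thread per request.
# (Assumes: pip install gunicorn gevent)
gunicorn myproject.wsgi:application \
  --worker-class gevent \
  --workers 2 \
  --worker-connections 1000
```

The point is that ordinary blocking Django view code gets cooperative concurrency "for free" via gevent's monkey-patching, without rewriting views as async.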
Just a heads up that Django currently does not clean up open PostgreSQL connections when run in ASGI mode, leading to "too many connections" errors: <a href="https://code.djangoproject.com/ticket/33497" rel="nofollow noreferrer">https://code.djangoproject.com/ticket/33497</a>
This implementation is probably a little different, but we were using long polling in the late 90s and early 2000s. The problem was that you were committing one thread (or, worse, one process) to each client. That obviously doesn't scale unless threads are extremely light on RAM and either the OS or the runtime supports a large number of them. I remember that a way out was using continuations. Jetty was a Java application server that supported them (random link [1]): one thread -> many connections. I haven't investigated how Django is implementing this now, but CPUs and RAM are still CPUs and RAM.<p>[1] <a href="https://stackoverflow.com/questions/10587660/how-does-jetty-handle-multiple-requests" rel="nofollow noreferrer">https://stackoverflow.com/questions/10587660/how-does-jetty-...</a>
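The "one thread -> many connections" model described above can be sketched with stdlib asyncio (this is illustrative, not Django's actual internals): a suspended coroutine is a small heap object, not a full OS thread stack, so one thread can park thousands of idle connections.

```python
import asyncio

async def handle_connection(conn_id):
    # Awaiting yields control to the event loop instead of blocking a
    # thread, so many of these can be "waiting" concurrently.
    await asyncio.sleep(0)
    return conn_id

async def main():
    # 10,000 concurrent "connections" serviced by a single thread.
    return await asyncio.gather(*(handle_connection(i) for i in range(10_000)))

results = asyncio.run(main())
```

With thread-per-connection, the same 10,000 clients would need 10,000 stacks; here they cost a coroutine object each, which is why the continuation/event-loop approach scales.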
“The idea is that the client "subscribes" to an HTTP endpoint, and the server can then issue data to the client as long as the connection is open.”<p>To those who have been around longer than me: isn't this just long polling, which predates WebSocket?
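Not quite: with long polling the server answers one request and closes it, forcing the client to reconnect per message, while Server-Sent Events keeps a single response open and writes framed events onto it. A small sketch of the SSE wire format (the helper name is made up for illustration):

```python
def sse_format(data, event=None):
    # Each SSE frame is optional "event:" and "data:" lines,
    # terminated by a blank line.
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"

frame = sse_format("hello", event="greeting")
# The server keeps writing frames like this on the still-open
# text/event-stream response; the browser's EventSource reassembles
# them and auto-reconnects if the connection drops.
```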
<a href="https://github.com/sysid/sse-starlette">https://github.com/sysid/sse-starlette</a> makes token streaming so much easier in Python
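A hedged sketch of the pattern: you write an async generator of tokens and, in a Starlette/FastAPI endpoint, wrap it in sse-starlette's `EventSourceResponse` (the token list here is a stand-in for a real producer such as an LLM):

```python
import asyncio

async def token_source():
    # Placeholder token producer; in practice this would be a model
    # or upstream stream yielding tokens as they arrive.
    for token in ["Hello", ",", " ", "world"]:
        yield token
        await asyncio.sleep(0)

# In an actual Starlette/FastAPI view you would return roughly:
#     from sse_starlette.sse import EventSourceResponse
#     return EventSourceResponse({"data": t} async for t in token_source())
# (assumes sse-starlette is installed)

# Demonstrate the generator itself:
async def collect():
    return [t async for t in token_source()]

tokens = asyncio.run(collect())
```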