I like Solid Queue and the direction things are heading, but it's hard to overlook the performance. A system that does tens to hundreds of thousands of jobs/s on Sidekiq + Redis will now be bottlenecked by transactional performance with Solid Queue on PostgreSQL: https://github.com/sidekiq/sidekiq/wiki/Active-Job#performance

My choice of design pattern here: use PostgreSQL for orchestration and decision making, and Sidekiq + Redis as the message bus. You just can't beat the time it takes for a job to get picked up once it has landed on a queue.
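A minimal sketch of that split, with hypothetical Order/FulfillmentWorker names (not from any real codebase): the decision is committed in PostgreSQL first, and Redis only ferries the ID afterwards, so the worker re-reads authoritative state from the database.

    class Order < ApplicationRecord
      # Enqueue only after the state change is durable in PG; the worker
      # re-reads the row from the database, so a duplicate Redis delivery
      # is harmless.
      after_commit(on: :update, if: :saved_change_to_status?) do
        FulfillmentWorker.perform_async(id) if status == "ready"
      end
    end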
Mastodon runs on Rails and currently relies on Redis and Sidekiq. I've heard Redis/Sidekiq add some complication and workload to setting up and maintaining a Mastodon instance, especially for those less familiar with the stack.

I'd love some opinions from those with any insight. Would it benefit the Mastodon project to switch across to Solid Queue generally, or as a default? Or is Mastodon one of those use cases where the current Redis/Sidekiq setup really is more suitable?

Please explain your reasoning :)
For years I've followed a philosophy of avoiding infra complexity until you really need it.

Having to manage just a database is far easier, infra-wise, than a multi-service system for small Rails operations.

When scaling up there will always be genuine needs for complexity, but that doesn't mean you can't get really far without it.
I like the idea of a DB-backed background processor, but I still feel like Good Job is a better option. It has much more parity with Sidekiq in terms of features, UI, etc. than Solid Queue does.
If part of the goal was reducing operational overhead, why not implement something functionally like GCP Cloud Tasks [0]?

Since this is part of Rails, all you would need to do is implement regular HTTP endpoints, with no need for workers/listeners. Submit a "job" to the queue (which itself is just a POST) along with the message details: the endpoint and some data to POST to that endpoint.

The queue "server" processes the list of jobs, hits the specified endpoint, and deletes the job when it gets a 200 response. Otherwise, it just keeps retrying.

[0] https://cloud.google.com/tasks/docs
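A rough sketch of such a queue "server" loop in Ruby, assuming a hypothetical tasks table with endpoint and payload columns (nothing here is the Cloud Tasks API itself):

    require "net/http"
    require "pg"

    conn = PG.connect(dbname: "app")

    loop do
      conn.exec("SELECT id, endpoint, payload FROM tasks LIMIT 10").each do |task|
        response = Net::HTTP.post(URI(task["endpoint"]), task["payload"],
                                  "Content-Type" => "application/json")
        # Delete only on 200; anything else stays queued and retries next pass.
        if response.is_a?(Net::HTTPOK)
          conn.exec_params("DELETE FROM tasks WHERE id = $1", [task["id"]])
        end
      rescue StandardError
        # Network failure: leave the row in place so it gets retried.
      end
      sleep 1
    end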
I like this approach, but it seems a missed opportunity not to use the pgmq library: https://github.com/pgmq/pgmq

Here's a neat project built on top of pgmq and Supabase Deno edge functions, though a similar thing could be done in other stacks: https://www.pgflow.dev

Played with it a bit and it's very promising.
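For reference, a rough sketch of pgmq's SQL API driven from Ruby with the pg gem (assumes the pgmq extension is installed; the queue name and payload are made up):

    require "pg"
    require "json"

    conn = PG.connect(dbname: "app")

    conn.exec("SELECT pgmq.create('emails')")                  # create the queue
    conn.exec_params("SELECT * FROM pgmq.send('emails', $1::jsonb)",
                     [{ to: "user@example.com" }.to_json])     # enqueue a message

    # Read one message, hiding it from other consumers for 30 seconds.
    msg = conn.exec("SELECT * FROM pgmq.read('emails', 30, 1)").first
    if msg
      payload = JSON.parse(msg["message"])
      # ... process payload ...
      conn.exec_params("SELECT pgmq.archive('emails', $1)", [msg["msg_id"]])
    end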
I have been a big fan of delayed_job for a while. For a time I went with Sidekiq + Redis but found the juice not worth the squeeze. The biggest issue was complex logic that got run before the current SQL transaction finalized: weird timing bugs, and wacky solutions with after_commit hooks and random delays. Not an issue if the database is the sole source of state.
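An illustrative sketch of that failure mode and the usual workaround (model and job names are hypothetical):

    User.transaction do
      user = User.create!(email: "a@example.com")
      # Bug: with a Redis-backed queue, the worker can pick this job up
      # before COMMIT and fail to find the user, since Redis sits outside
      # the database transaction.
      WelcomeMailerJob.perform_async(user.id)
    end

    # Workaround: only enqueue once the transaction has committed.
    class User < ApplicationRecord
      after_commit(on: :create) { WelcomeMailerJob.perform_async(id) }
    end

With a DB-backed queue like delayed_job or Solid Queue, the job row is written inside the same transaction, so it only becomes visible when the commit does.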
We didn't know how PostgreSQL's FOR UPDATE SKIP LOCKED worked, so we looked into it and wrote a blog post on it: https://www.bigbinary.com/blog/solid-queue
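The core trick, sketched with ActiveRecord (model and column names are illustrative): rows locked by other workers are skipped rather than waited on, so many pollers can share one table without blocking each other.

    def claim_next_job
      Job.transaction do
        job = Job.where(claimed_at: nil)
                 .order(:created_at)
                 .lock("FOR UPDATE SKIP LOCKED")  # skip rows other workers hold
                 .first
        job&.update!(claimed_at: Time.current)
        job
      end
    end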
This is cool, but I will just continue to use Sidekiq. I know the API well, it's crazy fast and scalable, and it's easy to set up. A Redis dependency is dead simple these days too.