Interesting article, but surprised that the author never mentions heaps... <a href="https://en.wikipedia.org/wiki/Heap_(data_structure)" rel="nofollow">https://en.wikipedia.org/wiki/Heap_(data_structure)</a>
An ordered queue is a priority queue, which can be implemented using a binary heap. A binary heap can be backed by a plain array, which is more cache-friendly than a linked list. The worst case is O(log N) for insert and delete, and only O(1) for peek. That beats the O(N) insertion sort into the linked list.
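To make the complexity claims concrete, here's a minimal array-backed binary min-heap sketch (my own illustration, not code from the article): insert and pop sift through at most log N levels, while peek just reads index 0.

```python
class BinaryHeap:
    """Minimal binary min-heap backed by a plain Python list."""

    def __init__(self):
        self.a = []

    def peek(self):
        # O(1): the minimum always lives at the root (index 0).
        return self.a[0]

    def insert(self, x):
        # O(log N): append at the end, then sift up toward the root.
        a = self.a
        a.append(x)
        i = len(a) - 1
        while i > 0 and a[(i - 1) // 2] > a[i]:
            a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
            i = (i - 1) // 2

    def pop(self):
        # O(log N): move the last element to the root, then sift down.
        a = self.a
        top = a[0]
        last = a.pop()
        if a:
            a[0] = last
            i = 0
            while True:
                l, r = 2 * i + 1, 2 * i + 2
                smallest = i
                if l < len(a) and a[l] < a[smallest]:
                    smallest = l
                if r < len(a) and a[r] < a[smallest]:
                    smallest = r
                if smallest == i:
                    break
                a[i], a[smallest] = a[smallest], a[i]
                i = smallest
        return top
```

Note the cache-friendliness argument: the whole structure is one contiguous list, with no pointer chasing between nodes.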
Worth mentioning: an ordered queue is also called a priority queue or heap queue. There's a built-in implementation in Python's standard library, in the "heapq" module.
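For reference, a minimal `heapq` usage sketch (the priorities and task names here are made up for illustration):

```python
import heapq

# heapq maintains a plain list as a binary min-heap in place.
pq = []
heapq.heappush(pq, (2, "render"))   # (priority, task) tuples
heapq.heappush(pq, (1, "fetch"))
heapq.heappush(pq, (3, "cleanup"))

assert pq[0] == (1, "fetch")        # O(1) peek at the smallest item
first = heapq.heappop(pq)           # O(log N) pop, in priority order
```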
One thing that'd be interesting to see benchmarked is the performance of a linked-list implementation vs a heap backed by an array allocated with sufficient capacity for expected usage, so it doesn't need to be resized often (or ever).<p>One downside of linked lists is that each node in the list can reside in an arbitrary part of memory. In the worst case, each traversal of a link is a cache miss, so you sit around for 100 or 200 cycles waiting for a main-memory read. The same applies if the values used for comparisons are also stored by reference and scattered throughout memory.<p>With heaps, another interesting thing to try is switching from a binary heap to e.g. a 4-ary heap (i.e. 4 children per parent). I've been fiddling with heaps recently for a priority queue and see a decent speedup switching to a 4-ary heap -- provided the "heapify down" logic that runs whenever you pop an element is structured to let the CPU do as many pairwise compares as possible in parallel, rather than as a big chain of dependent computations. During "heapify" operations you're jumping around inside the heap a fair bit, which may also cause cache misses, but each group of children is arranged linearly in memory, so each cache line read carries a better signal-to-noise ratio.
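To sketch the d-ary layout (my own illustration, not the commenter's actual code): the d children of node i occupy a contiguous run of indices, which is what keeps them on the same cache line, and the min-over-children step is a reduction whose compares are independent of each other (in a compiled implementation, that independence is what lets the CPU overlap them).

```python
def children(i, d=4):
    # The d children of node i occupy a contiguous index range.
    first = d * i + 1
    return range(first, first + d)

def parent(i, d=4):
    return (i - 1) // d

def sift_down(a, i, d=4):
    """'Heapify down': repeatedly swap node i with its smallest child.

    Finding the minimum of the d children has no loop-carried dependence
    between compares, unlike the chain of swaps down the tree.
    """
    n = len(a)
    while True:
        smallest = i
        for c in children(i, d):
            if c < n and a[c] < a[smallest]:
                smallest = c
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest
```

With d = 4 the tree is shallower (log base 4 instead of 2), trading more compares per level for fewer levels and fewer scattered memory reads.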
This is a horrible implementation.
To start with, this queue isn't concurrent. It has a lock attached to it (and makes it easy to leak or stall).<p>Second, it has `O(N)` performance on inserts.<p>That's completely unexpected for something that is supposed to be a queue.
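For comparison, Python's standard library already ships a heap-backed, thread-safe priority queue. A quick sketch (the items here are made up):

```python
import queue
import threading

# queue.PriorityQueue wraps a heapq list with a lock and condition
# variables: O(log N) put/get, and safe to share across threads.
pq = queue.PriorityQueue()

def producer():
    for prio, item in [(2, "b"), (1, "a"), (3, "c")]:
        pq.put((prio, item))

t = threading.Thread(target=producer)
t.start()
t.join()

first = pq.get()  # the lowest priority value comes out first
```

The lock lives inside the data structure, so callers can't forget to take it or hold it across a stall.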
This reads like a system design interview, and a good one. The author clearly understands the pattern and the technologies they're working with.<p>I would perhaps attempt to gather more non-functional requirements. How many items are on the queue? How quickly are they added? How many threads are consuming from the queue? What's the size of a message?<p>Also, how much would all this cost to run in terms of memory? Is there a way of persisting the in-memory structure in case the server goes down?