I’ve been working on a Python async library ([Blazeio](https://github.com/anonyxbiz/Blazeio)) and stumbled into a shockingly simple optimization that makes `asyncio.Event` look like a relic.

### *The Problem*
`asyncio.Event` (and similar constructs in other languages) has two nasty scaling flaws:

1. *Memory*: It allocates *one future per waiter* → 1M waiters = 48MB wasted.
2. *Latency*: It wakes waiters *one-by-one*, an O(N) scheduling cost under the GIL.

### *The Fix: `SharpEvent`*
A drop-in replacement that:
- *Uses one shared future* for all waiters: *O(1) memory*.
- *Wakes every waiter in a single operation*: *O(1) latency*.

### *Benchmarks*
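The numbers below are the author's; here is a minimal stdlib-only harness (my own sketch, not Blazeio's benchmark code) for measuring wake-up latency on plain `asyncio.Event` — since `SharpEvent` is a drop-in replacement, the same loop should work for both:

```python
import asyncio
import time

async def bench(n: int) -> float:
    """Time how long it takes from Event.set() until the last of n waiters runs."""
    event = asyncio.Event()
    woke = []

    async def waiter():
        await event.wait()
        woke.append(time.perf_counter())

    tasks = [asyncio.create_task(waiter()) for _ in range(n)]
    await asyncio.sleep(0)       # let every waiter run up to its await
    start = time.perf_counter()
    event.set()                  # schedules one wake-up per waiter
    await asyncio.gather(*tasks)
    return max(woke) - start     # elapsed time until the last waiter resumed

print(f"1K waiters woke in {asyncio.run(bench(1000)) * 1e3:.2f} ms")
```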
| Metric | `asyncio.Event` | `SharpEvent` |
|----------------|-----------------|--------------|
| 1K waiters | ~1ms wakeup | *~1µs* |
| 1M waiters | *Crashes* | *Still ~1µs* |
| Memory (1M) | 48MB | *48 bytes* |

### *Why This Matters*
- *Real-time apps* (WebSockets, games) gain *predictable latency*.
- *High concurrency* (IoT, trading) becomes trivial.
- It’s *pure Python* but beats CPython’s built-in `asyncio.Event`.

### *No Downsides?*
Almost none. If you need per-waiter timeouts or cancellation you’d need a wrapper, but 99% of uses just need bulk wake-ups.

### *Try It*
```python
from Blazeio import SharpEvent
event = SharpEvent()
event.set() # Wakes all waiters instantly
```

[GitHub](https://github.com/anonyxbiz/Blazeio)

*Would love feedback: am I missing a critical use case?*
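For the curious, here is my own stdlib-only sketch of the one-shared-future idea (not Blazeio's actual code — see the repo for the real `SharpEvent`): every waiter awaits the *same* future, so `set()` resolves it once and wakes everyone in a single operation.

```python
import asyncio

class SharedFutureEvent:
    """Sketch: all waiters share one future, so set() is a single operation."""

    def __init__(self):
        self._fut = None  # created lazily on the running loop

    def _future(self):
        if self._fut is None:
            self._fut = asyncio.get_running_loop().create_future()
        return self._fut

    def set(self):
        fut = self._future()
        if not fut.done():
            fut.set_result(None)  # one call wakes every waiter

    def clear(self):
        self._fut = None          # later waiters get a fresh shared future

    def is_set(self):
        return self._fut is not None and self._fut.done()

    async def wait(self):
        # shield lets one waiter be cancelled without cancelling
        # the future that everyone else is still awaiting
        await asyncio.shield(self._future())

async def demo():
    ev = SharedFutureEvent()
    results = []

    async def waiter(i):
        await ev.wait()
        results.append(i)

    tasks = [asyncio.create_task(waiter(i)) for i in range(3)]
    await asyncio.sleep(0)  # let the waiters block
    ev.set()
    await asyncio.gather(*tasks)
    return results

print(asyncio.run(demo()))
```

The trade-off the post mentions falls out of this shape: per-waiter timeouts need a wrapper around `wait()`, because there is no per-waiter future to cancel individually.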