That's a nice explanation. I wonder whether autoregressive and diffusion language models could be combined so that the model only denoises the most recent end of a sequence of text, like a paragraph, while the rest stays fixed and can therefore be key-value cached.
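That hybrid could be sketched as a loop: keep a frozen prefix (whose KV cache never needs recomputing), sample each new block from noise, and run a few denoising steps conditioned on the cached prefix before committing the block. Below is a toy, hypothetical sketch of that control flow; `denoise_step` is a stand-in for a real denoiser, and the list `cache` merely stands in for cached key/value states.

```python
def denoise_step(block, step, total_steps, prefix_cache):
    # Toy "denoiser": progressively replaces noise tokens ("?") with
    # concrete tokens. A real model would attend over prefix_cache.
    k = len(block) * (step + 1) // total_steps
    return ["tok%d" % i for i in range(k)] + block[k:]

def generate_blockwise(prompt, n_blocks=3, block_len=4, steps=4):
    prefix = list(prompt)   # frozen context: never denoised again
    cache = list(prefix)    # stand-in for the prefix's KV cache
    for _ in range(n_blocks):
        block = ["?"] * block_len          # start the block from pure noise
        for t in range(steps):
            block = denoise_step(block, t, steps, cache)
        prefix += block                    # commit the block; it is now fixed
        cache += block                     # extend the cache once per block
    return prefix
```

Only the newest block is ever revisited, so the cost of re-encoding the history is paid once per block rather than once per denoising step.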
I'm curious: in image generation, flow matching is said to outperform diffusion, so why do these language models still start from diffusion instead of jumping straight to flow matching?
A big discussion on this happened here as well: <a href="https://news.ycombinator.com/item?id=44057820">https://news.ycombinator.com/item?id=44057820</a><p>There is quite a bit of evidence that diffusion models work better at reasoning because they don't suffer from early token bias.<p><a href="https://github.com/HKUNLP/diffusion-vs-ar">https://github.com/HKUNLP/diffusion-vs-ar</a>
<a href="https://arxiv.org/html/2410.14157v3" rel="nofollow">https://arxiv.org/html/2410.14157v3</a>
Great overview. I wonder if we'll start to see more text diffusion models from other players, or maybe even a mixture of diffusion and autoregressive models alternating roles behind a single UI, depending on the context and request.