Would be curious to know how this stacks up against Coconut [1] which also uses latent space for reasoning.<p>[1] <a href="https://arxiv.org/abs/2412.06769" rel="nofollow">https://arxiv.org/abs/2412.06769</a>
I feel like this is the obvious next step for chain-of-thought reasoning. I'm excited to see work on models that try to transform the intermediate thinking-space tokens back down to language, letting us still see what's happening inside the "mind" of the LLM, if that process can even be mapped to language anymore. I also wonder what the implications of this research are for chain-of-thought reasoning with reinforcement learning, since from my understanding many of the reward mechanisms set up during reinforcement learning are built around the structure of the thought process.
Very importantly, here they provide a way of decoding the encoded thought tokens, so you're not really losing explanatory power or debuggability. As much as OpenAI wants to present hidden chain of thought as some sort of long-term advantage or safety feature, it's horrible when you want to understand how a model came to some insane conclusion.
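For intuition, such a decoder could be as simple as conditioning a small autoregressive model on a thinking token's hidden state. Here is a toy sketch of that idea; the architecture and all names are my own invention, not the paper's:

    # Toy sketch, not Heima's actual decoder: project one thinking token's
    # hidden state into an initial decoder state and expand it back into text.
    import torch
    import torch.nn as nn

    class ThoughtDecoder(nn.Module):
        def __init__(self, hidden_dim=768, vocab_size=32000, emb_dim=256):
            super().__init__()
            self.proj = nn.Linear(hidden_dim, emb_dim)      # latent thought -> decoder state
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.GRU(emb_dim, emb_dim, batch_first=True)
            self.head = nn.Linear(emb_dim, vocab_size)

        @torch.no_grad()
        def decode(self, thought_vec, bos_id=1, eos_id=2, max_len=64):
            state = self.proj(thought_vec).view(1, 1, -1)   # (layers=1, batch=1, emb_dim)
            tok = torch.tensor([[bos_id]])
            out = []
            for _ in range(max_len):
                y, state = self.rnn(self.embed(tok), state)
                tok = self.head(y[:, -1]).argmax(-1, keepdim=True)
                if tok.item() == eos_id:
                    break
                out.append(tok.item())
            return out                                      # ids of the reconstructed explanation

    decoder = ThoughtDecoder()
    print(decoder.decode(torch.randn(768)))                 # untrained, so gibberish ids

The point is just that the latent thought never has to be discarded: anything the model computed there can, in principle, be read back out by a decoder trained for the purpose.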
isn't this dangerous?
isn't the efficiency gained at the expense of safety and interpretability?<p><a href="https://arxiv.org/abs/2412.14093" rel="nofollow">https://arxiv.org/abs/2412.14093</a> (Alignment faking in large language models)<p><a href="https://joecarlsmith.com/2024/12/18/takes-on-alignment-faking-in-large-language-models" rel="nofollow">https://joecarlsmith.com/2024/12/18/takes-on-alignment-fakin...</a><p>PS I'm definitely not an expert
Reasoning in latent space is probably not needed in the end. Unless constrained by human preference/SFT data, RL should spontaneously create new additions to language to support new reasoning methods and new concepts invented by the system.
Is “multimodal reasoning” as big a deal as it sounds? Does this technique mean LLMs can generate chains of thought that map to other modalities, such as sound and images?
I don't think autoregressive models have a fundamental difference in reasoning capability between latent space and token space. Latent space enables abstract reasoning and pattern recognition, while token space acts both as the discrete interface for communication and as an interaction medium to extend, refine, and synthesize higher-order reasoning over latent space.<p>Intuitively speaking, most people think of writing as a communication tool. But it's actually also a thinking tool that helps create deeper connections over discrete thoughts, which can only occupy a fixed slice of our attention at any given time. Attentional capacity is the primary limitation, for humans and LLMs alike, so the token space serves as extended working memory. Besides, even the Coconut paper got mediocre results. I don't think this is the way.
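To be concrete about what "reasoning in latent space" changes mechanically: roughly, a Coconut-style loop just skips the discretization step of ordinary decoding and feeds the last hidden state back in as the next input embedding. A sketch of one step in each regime, assuming a Hugging Face-style causal LM interface and ignoring details like hidden size vs. embedding size:

    # Rough sketch of one generation step in each regime; `model` is assumed to
    # be a Hugging Face-style causal LM. Not any paper's actual implementation.
    import torch

    @torch.no_grad()
    def token_space_step(model, input_ids):
        # Ordinary autoregression: hidden state -> discrete token -> re-embedded next step.
        next_id = model(input_ids=input_ids).logits[:, -1].argmax(-1, keepdim=True)
        return torch.cat([input_ids, next_id], dim=-1)

    @torch.no_grad()
    def latent_space_step(model, inputs_embeds):
        # Latent (Coconut-style) reasoning: feed the last hidden state straight
        # back in as the next position's input embedding, skipping discretization.
        out = model(inputs_embeds=inputs_embeds, output_hidden_states=True)
        last_hidden = out.hidden_states[-1][:, -1:, :]
        return torch.cat([inputs_embeds, last_hidden], dim=1)

The latent loop keeps more information per step, but the token loop gives you the externalized, re-readable working memory I'm describing above.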
I’ve been thinking a bit about this lately - reasoning in latent space - especially because it looks like that’s what R1-Zero does: the researchers mention that its <think> sections switch back and forth between Chinese and English, but the <say> sections are coherent.<p>The paper raises a few more questions than it answers, though.<p>Do they hard-code a certain set of CoT token types upfront to train on? While the results are good, they are not ‘great’ - other methods seem to provide better outcomes, based on their own charts.<p>The interpretability does not seem ‘strong’ to me either - they train decoders on latent-space encodings by sort of guessing what must be going on based on text prompts.<p>That said, this is a fairly sweet ‘hack’ in my mind - training hidden layers to do the reasoning. I guess I’m skeptical that it’s the way forward, though. It feels like until your CoT token can specify that it needs more thinking time, you’re stuck without extensibility / deep thinking when needed.<p>Overall, very cool. Probably not “the future”. More research in latent-space reasoning would be very welcome.
Keeping the thinking interpretable makes it easier to impose conditions on it, both at runtime and as part of reinforcement. It opens the door to manually injecting relevant thoughts triggered by supervision ("I must remember to say nothing that could offend the party."), by search results, or by access to APIs like calculators.<p>Those advantages are easily worth some efficiency.<p>I'm skeptical of the safety/security arguments some have made. Models RL-trained while seeing their own CoT may (and in fact almost certainly will) develop hidden context embedded in their word choices that carries information we're not aware of; the fact that the CoT appears to be English (or some other human language) doesn't mean that we necessarily really understand it.<p>Consider how a game of Hanabi between long-time partners might look to an outsider.
I'm new to this topic. Can someone help me understand this sentence?<p>"Meanwhile, through the next-token prediction constraint, the explicit textual symbols of the hidden representations for Heima Encoder are aligned to the text of the corresponding special tokens {<CoT>(k)} in vocabulary, while the hidden representations contained in hidden states of thinking tokens remain distinct and variable depending on the inputs"<p>I understand that they have fine-tuned the MLLM to produce, in response to each query and image input, the CoT "thinking tokens" in addition to the answer.<p>How does that establish an association between the thinking tokens and the original plain-English CoT statements?<p>The second clause seems to say that the thinking tokens encode information that is "distinct and variable depending on the inputs." Is my interpretation correct?
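To make my reading concrete, here is roughly the objective I picture (my own pseudo-PyTorch sketch, not the authors' code): the label sequence keeps only the K special thinking tokens plus the answer text, so ordinary next-token prediction pins the model's outputs at those positions to the fixed <CoT>(k) symbols, while the hidden states computed there still depend on the image and question.

    # My own back-of-the-envelope sketch of the constraint as I understand it,
    # not the authors' code. `model` is a causal LM whose vocabulary has been
    # extended with K special thinking tokens.
    import torch
    import torch.nn.functional as F

    def next_token_loss_with_thinking_tokens(model, input_ids, answer_ids, cot_token_ids):
        cot = torch.tensor([cot_token_ids])                   # (1, K) fixed special-token ids
        labels = torch.cat([cot, answer_ids], dim=-1)         # thinking tokens, then the answer
        full = torch.cat([input_ids, labels], dim=-1)
        logits = model(input_ids=full).logits
        # position t predicts token t+1; score only the label span
        pred = logits[:, input_ids.size(1) - 1 : -1, :]
        return F.cross_entropy(pred.reshape(-1, pred.size(-1)), labels.reshape(-1))

If I'm reading the quoted sentence correctly, the text targets at the thinking positions are constant, so the input-dependent content lives entirely in those hidden states, which would be what the separately trained decoders read out. But I'd welcome a correction from someone who has gone through the paper more carefully.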
I would be interested in seeing how a combination of latent space and traditional GRPO CoT could perform vs. just one of either.<p>My intuition is still that latent space would be better at emulating larger models with fewer params, with CoT helping refine the output after the latent-space pass.<p>Combined, it would kinda be like being able to think about a problem: throw down a draft, then refine it.
Could someone ELI5? It sounds like they generate a compressed token which represents a whole "thought" rather than elaborating the entire "thought" in actual language. Is that right?