Byte Latent Transformer: Patches Scale Better Than Tokens

378 points, by zxexz, 5 months ago

19 comments

dang, 5 months ago
The paper: https://scontent-sjc3-1.xx.fbcdn.net/v/t39.2365-6/470135129_1314438233309836_4712217603129928862_n.pdf?_nc_cat=111&ccb=1-7&_nc_sid=3c67a6&_nc_ohc=WqSN1qsot3oQ7kNvgFWGG4j&_nc_zt=14&_nc_ht=scontent-sjc3-1.xx&_nc_gid=A2yO-vwOF4w2PIUX2gHIbXD&oh=00_AYBAR_B1_9ewVRJM5VYbJbdfm4Uk5INZY0t67hlpNccpAA&oe=676400C8

PaulHoule, 5 months ago
The summer that BERT came out I was working at a startup that was using character-based CNN models for classification. We were thinking a lot about alternate representations; other members of the team were keen on word vectors but I wasn't, particularly because it seemed the documents we were working on frequently had out-of-dictionary words, because those words were important, and because discarding them would lead to failure.

(We were working on "foundation models" too, so it's not just being out-of-dictionary in the final model that's a problem but being out-of-dictionary in the foundation model, which is more expensive to train.)

We were doing OK with character-based models for classification, but people believed that storing the "dictionary" inside the neural net was not a good use of the neural net, so there was a lot of enthusiasm for tokens.

Meanwhile I felt so sure that schemes like Word2Vec were doomed that I had left an earlier project using RNNs where the goal was text understanding with a foundation model made by training an RNN to write fake abstracts for case reports from PubMed.

When byte-pair encoding was introduced I remember telling people in a meeting that it was the first tokenization scheme we'd looked at that I could endorse.

I have to admit, though, that I wish we could work at the character level.
Comment #42419473 not loaded
Comment #42417964 not loaded

modeless, 5 months ago
I really hope this works out. Death to tokenizers!

Interesting that it's a hierarchical structure but only two levels of hierarchy. Stacking more levels seems like an obvious direction for further research.

Note: I posted this comment on another related story [1] and the author replied:

"Author here :), I do think it's a good direction to look into! That said, aside from it being a bit too much to do at once, you'd also have to be careful about how you distributed your FLOP budget across the hierarchy. With two levels, you can make one level (bytes/local encoder) FLOP efficient and the other (patches/global encoder) FLOP intensive. You'd also need to find a way to group patches into larger units. But ya, there are many directions to go from here!"

[1] https://news.ycombinator.com/item?id=42413430
Comment #42422235 not loaded

flimflamm, 5 months ago
To create a patch, a small model is used to predict the likelihood of the next character in the input string. Input string: 'Lazy dog jumped over a fence.' Use the model to predict the likelihood of each character. For example:

    100% sure the next character is 'a'. Or maybe it's 10% sure it's 'a', 10% sure it's 'b', and so on.

Then we chunk character estimates together. How many characters? Enough characters so that the total uncertainty (entropy) in each chunk is about the same. And there you have your 'patch' (or 'token').
Comment #42416018 not loaded
Comment #42418140 not loaded

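A minimal sketch of the entropy-budget grouping described in the comment above, assuming a hypothetical next_byte_probs(prefix) callable that returns a next-byte probability distribution; the budget value is illustrative, not taken from the paper:

    import math

    def entropy_bits(probs):
        """Shannon entropy (in bits) of a next-byte distribution."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def patch_bytes(data, next_byte_probs, budget=16.0):
        """Group bytes so each patch carries roughly the same total uncertainty."""
        patches, current, accumulated = [], [], 0.0
        for i, b in enumerate(data):
            # Uncertainty the small model had about byte i before seeing it.
            accumulated += entropy_bits(next_byte_probs(data[:i]))
            current.append(b)
            if accumulated >= budget:  # budget used up: close the patch
                patches.append(bytes(current))
                current, accumulated = [], 0.0
        if current:
            patches.append(bytes(current))
        return patches

With a uniform dummy model every byte contributes 8 bits, so a 16-bit budget yields 2-byte patches; a model that is confident about the next byte stretches patches over longer, predictable spans, which is where the efficiency gain comes from.
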
dang, 5 months ago
Recent and related:

Sharing new research, models, and datasets from Meta FAIR - https://news.ycombinator.com/item?id=42412360 - Dec 2024 (61 comments)

vishpr, 5 months ago
So the only thing teaching the model (the loss) is probability prediction in single-byte space. And that is enough? Looks very promising, if I am not misunderstanding.

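For reference, that objective is ordinary next-token cross-entropy with a fixed 256-way output space; a minimal PyTorch-style sketch, with illustrative shapes and names:

    import torch.nn.functional as F

    def next_byte_loss(logits, target_bytes):
        """Cross-entropy over the 256 possible byte values.

        logits:       (batch, seq_len, 256) model outputs
        target_bytes: (batch, seq_len) ints in [0, 255]; the input bytes shifted by one
        """
        return F.cross_entropy(logits.reshape(-1, 256), target_bytes.reshape(-1))
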
nodja, 5 months ago
From my understanding this not only removes tokenization but also sampling, correct?

Sampling can be a pain point of LLMs, but it also enables interesting usages, like forcing grammar so the model always outputs valid JSON, tuning temperature to get a more varied distribution, XTC sampling, etc.

What would be the equivalent of these in a BLT?

I can only think of providing the decoder an extra input of allowed/prohibited bytes and running the decode over and over until it outputs something valid; maybe there's a simpler and more obvious approach.
Comment #42419427 not loaded

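One possible shape of the "allowed bytes" idea, sketched under the assumption that the decoder still ends each step with a 256-way distribution: mask the logits of disallowed bytes before sampling, as in grammar-constrained decoding for token models. The allowed set here is a hypothetical input (it would come from something like a byte-level JSON grammar tracker):

    import numpy as np

    def sample_constrained_byte(logits, allowed, temperature=1.0, rng=None):
        """Sample one byte from 256-way logits, restricted to an allowed set."""
        rng = rng or np.random.default_rng()
        allowed = np.asarray(sorted(allowed))
        masked = np.full(256, -np.inf)
        masked[allowed] = logits[allowed] / temperature  # logits: np.ndarray of shape (256,)
        probs = np.exp(masked - masked.max())            # softmax over allowed bytes only
        probs /= probs.sum()
        return int(rng.choice(256, p=probs))

Temperature, XTC-style truncation, and similar sampler tweaks would apply to the same 256-way distribution, just over bytes instead of a large token vocabulary.
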
dr_dshiv, 5 months ago
Does this mean AI can pre-train on binaries?
Comment #42417906 not loaded

iandanforth, 5 months ago
I find it interesting how far linguistic and experience-based approaches have fallen out of fashion. Humans don't read character by character; even if we *can*, it's not a standard operating mode. We have word stems and understand modifications by endings. Tokenization doesn't replicate this experience (seriously, look at the tokens that appear in LLM vocabularies), nor does character or byte encoding. Humans have multiple ways to parse words. You can grok a full sentence, read a phrase, read word by word, or sound out a new word character by character. Very few papers explicitly claim that a method is good because it replicates the way a human would perform a task, or perceive the world.

I suspect as LLM reliance increases we'll want to align the models to our experience more closely. I further suspect this will make the errors that models make more comprehensible.

DerSaidin, 5 months ago
> Unlike tokenization, BLT has no fixed vocabulary for patches.

IIUC this means: the vocabulary of patches is not known prior to training.

I guess once training has established a vocabulary of patches, that same fixed vocabulary is used for inference (if this is not true I don't see how it could work).

Right?

RandyOrion, 5 months ago
An interesting read on alternative tokenization methods.

Questions:

1. What's the goal of entropy-based byte token grouping as tokenization? Is this tokenization method best suited for that goal?

2. What about simply using a byte-level sequence-to-sequence autoencoder with downsampling for tokenization?

boulos, 5 months ago
This is neat work, but I also love the (presumably intentional?) backronym of BLT.

dewijones92, 5 months ago
notebooklm: https://notebooklm.google.com/notebook/77fe83ee-35b3-4a9a-a3d9-e2f1395a3a0f/audio
Comment #42416653 not loaded
Comment #42418277 not loaded

amelius, 5 months ago
Why can't the tokenization be implicit, so we only feed bytes (or characters) to the model?
Comment #42417428 not loaded
Comment #42417423 not loaded

macrolime, 5 months ago
I wonder whether Llama 4 will use this.

qouteall, 5 months ago
Related quote from Karpathy:

Tokenization is at the heart of much weirdness of LLMs. Do not brush it off.

• Why can't LLM spell words? Tokenization.
• Why can't LLM do super simple string processing tasks like reversing a string? Tokenization.
• Why is LLM worse at non-English languages (e.g. Japanese)? Tokenization.
• Why is LLM bad at simple arithmetic? Tokenization.
• Why did GPT-2 have more than necessary trouble coding in Python? Tokenization.
• Why did my LLM abruptly halt when it sees the string "<|endoftext|>"? Tokenization.
• What is this weird warning I get about a "trailing whitespace"? Tokenization.
• Why does the LLM break if I ask it about "SolidGoldMagikarp"? Tokenization.
• Why should I prefer to use YAML over JSON with LLMs? Tokenization.
• Why is LLM not actually end-to-end language modeling? Tokenization.
• What is the real root of suffering? Tokenization.
Comment #42415874 not loaded
Comment #42416206 not loaded
Comment #42416580 not loaded
Comment #42417567 not loaded
Comment #42416629 not loaded
Comment #42416497 not loaded

paraschopra, 5 months ago
My notes:

It's a 3-component model.

- Encoder: takes byte groupings and outputs a hidden state/encoding called patches
- Transformer: takes these patch encodings in autoregressive fashion
- Decoder: takes the encodings processed by the transformer and outputs bytes

Loss is byte-to-byte cross-entropy (next-byte prediction).

How they group bytes:

- Use entropy thresholds: if a sequence of bytes has entropy lower than a threshold, group them
- This is a learned model (from data)

Why this helps over current byte-pair tokenization in LLMs:

- Encoder/decoder essentially act as a "learnable" tokenization scheme
- Better efficiency tradeoffs (for highly predictable byte sequences, the encoder can "offload" computational effort from the main transformer)
- History teaches us that end-to-end learned systems beat human-designed mechanisms
Comment #42416125 not loaded
Comment #42416410 not loaded
Comment #42416631 not loaded

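A rough sketch of that three-part shape, assuming the pieces are ordinary transformer blocks; the module sizes, mean pooling, and linear "local decoder" are placeholders (the paper uses cross-attention pooling and causal masking, which are omitted here):

    import torch
    import torch.nn as nn

    class ByteLatentSketch(nn.Module):
        """Bytes -> local encoder -> pooled patch latents -> global
        transformer -> per-byte next-byte logits (single sequence)."""

        def __init__(self, d_local=256, d_global=512):
            super().__init__()
            self.byte_embed = nn.Embedding(256, d_local)
            local_layer = nn.TransformerEncoderLayer(d_local, nhead=4, batch_first=True)
            self.local_encoder = nn.TransformerEncoder(local_layer, num_layers=2)
            self.to_global = nn.Linear(d_local, d_global)
            global_layer = nn.TransformerEncoderLayer(d_global, nhead=8, batch_first=True)
            self.global_transformer = nn.TransformerEncoder(global_layer, num_layers=4)
            self.local_decoder = nn.Linear(d_global, 256)  # stand-in for the byte decoder

        def forward(self, byte_ids, patch_ids):
            # byte_ids:  (T,) ints in [0, 255]
            # patch_ids: (T,) patch index per byte, e.g. from an entropy patcher
            h = self.local_encoder(self.byte_embed(byte_ids)[None])[0]    # (T, d_local)
            n_patches = int(patch_ids.max()) + 1
            # Mean-pool byte states into one latent per patch.
            sums = torch.zeros(n_patches, h.size(-1)).index_add(0, patch_ids, h)
            counts = torch.bincount(patch_ids, minlength=n_patches).clamp(min=1)
            patch_latents = self.to_global(sums / counts[:, None])        # (P, d_global)
            g = self.global_transformer(patch_latents[None])[0]           # (P, d_global)
            # Broadcast each patch state back to its bytes and predict the next byte.
            return self.local_decoder(g[patch_ids])                       # (T, 256)

Training would then apply the next-byte cross-entropy sketched earlier to these logits against the byte sequence shifted by one.
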
bloomingkales, 5 months ago
I thought we’re supposed to be plateauing!?
Comment #42416010 not loaded
Comment #42419745 not loaded

fabmilo, 5 months ago
I am going to read this paper and the other latent-sentence one later today. I have always advocated that this kind of approach, together with latent sentence search, should get us to the next level of AI. Amazing work from Meta.
Comment #42416057 not loaded