
Ask HN: Compressing text using AI by sending only prediction rank of next word

2 points | by sktguha | almost 5 years ago
Is there any effort to compress text (and maybe other media) by predicting the next word and sending only the rank of that word/token in the prediction list, with decoding done by the same model on the client side? i.e.:

SERVER TEXT: This is an example of a long text example, custom word flerfom inserted to confuse, that may appear on somewhere

COMPRESSED TEXT TRANSMITTED: This [choice no 3] [choice no 4] [choice no 1] [choice no 6] [choice no 1] [choice no 3] [choice no 1], custom word flerfom inserted [choice no 4] confuse [choice no 5] [choice no 4] [choice no 6] [choice no 5] on somewhere

(Note: of course [choice no 3] would be shortened to [3] to save bytes, and in some cases we could do even better by sending the first letter of the word.)

Of course this means the client-side neural network has to be static, or updated only in a predictable fashion, so the server knows for sure that the client network's predictions will follow the given choice order. I tried an example with https://demo.allennlp.org/next-token-lm, but the predictions are not that good. Maybe GPT-3 could do better, but it's too heavy for use on a normal PC / mobile device.
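The scheme above can be sketched in a few lines. This is a minimal illustration, not the poster's implementation: instead of a neural next-token model it assumes a trivial shared unigram "model" (words ranked by frequency in a corpus both sides hold), which is static and therefore keeps encoder and decoder in sync, exactly the requirement the post notes. Out-of-vocabulary words like "flerfom" are sent literally.

```python
from collections import Counter

# Hypothetical shared "model": both sides hold the same corpus and rank
# candidate words by frequency. A real scheme would use a neural
# next-token predictor conditioned on context.
SHARED_CORPUS = "this is an example of a long text example that may appear somewhere".split()

def ranked_vocab():
    # Most frequent word first; ties broken alphabetically so the
    # ranking is deterministic on both ends.
    counts = Counter(SHARED_CORPUS)
    return sorted(counts, key=lambda w: (-counts[w], w))

def compress(words):
    vocab = ranked_vocab()
    out = []
    for w in words:
        if w in vocab:
            out.append(vocab.index(w))  # send the prediction rank (an int)
        else:
            out.append(w)               # out-of-vocabulary: send the word itself
    return out

def decompress(symbols):
    vocab = ranked_vocab()
    return [vocab[s] if isinstance(s, int) else s for s in symbols]

msg = "this is an example with flerfom inserted".split()
coded = compress(msg)
assert decompress(coded) == msg  # lossless round trip
```

Because the ranking is derived deterministically from shared state, decompression is exact; the compression win would come from entropy-coding the ranks, since a good predictor makes small ranks (like the post's [1], [3]) far more common than large ones.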

1 comment

zimpenfish | almost 5 years ago
Fabrice Bellard has done some work on this - https://bellard.org/textsynth/sms.html