Generative Modeling with Sparse Transformers

70 points | by stablemap | about 6 years ago

4 comments

yorwba · about 6 years ago
Using two attention layers with √N inputs to cover a context of size N = √N × √N is somewhat intuitively understandable for image data, since the decomposition corresponds to rows and columns.

But it's quite surprising that this also works for text data, especially given that the fixed pattern performs better than the strided one, despite there not being anything analogous to image boundaries in the data.

It'd also be interesting to see what happens for other decompositions, such as 3 layers of ∛N or a logarithmic stack of dilated convolutions.
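A minimal sketch (not the paper's released code) of the two factorized patterns this comment contrasts, for a 1-D autoregressive sequence of length N with stride l ≈ √N; using a single "summary" position per block in the fixed pattern is a simplifying assumption made here for illustration:

```python
# Sketch of the index sets position i may attend to under the two patterns.
import math

def strided_pattern(i, n):
    """One head attends to the previous l positions (the 'row'); the other
    attends to every l-th earlier position (the 'column')."""
    l = math.isqrt(n)
    row = set(range(max(0, i - l + 1), i + 1))
    col = {j for j in range(0, i + 1) if (i - j) % l == 0}
    return row, col

def fixed_pattern(i, n):
    """One head attends within the current block of length l; the other
    attends to a fixed 'summary' position at the end of each earlier block
    (a simplification: the paper allows several such columns per block)."""
    l = math.isqrt(n)
    block_start = (i // l) * l
    within = set(range(block_start, i + 1))
    summaries = {j for j in range(0, i + 1) if j % l == l - 1}
    return within, summaries

if __name__ == "__main__":
    # Each head covers O(sqrt(N)) positions, so two heads span the full context.
    row, col = strided_pattern(i=37, n=64)
    print(sorted(row), sorted(col))
```

Stacking the two heads lets information reach any earlier position in two hops, which is what makes the √N × √N decomposition cover the full context.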
joe_the_user · about 6 years ago
So "Transformers" are part of the attention-based systems, which are an approach for modeling input-output relationships that is an alternative to Recurrent Neural Networks. These are instead based on Convolutional Neural Networks.

The innovation here is that the transformer is compressed, allowing the system to deal with longer sequences.
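As a rough back-of-the-envelope illustration (the sequence lengths are assumed for the example, not taken from the paper) of why sparsifying attention helps with longer sequences: dense self-attention evaluates about N² query-key pairs per layer, while two √N-sized heads evaluate roughly 2·N·√N.

```python
# Illustration only: query-key pairs evaluated per layer for dense
# self-attention versus a two-head factorized pattern with stride sqrt(N).
import math

for n in (1_024, 4_096, 16_384):
    dense = n * n                    # every position attends to every position
    sparse = 2 * n * math.isqrt(n)   # two heads, each ~sqrt(N) keys per query
    print(f"N={n:>6}: dense={dense:>12,}  factorized~{sparse:>12,}")
```

At N = 16,384 the factorized pattern touches roughly 64× fewer pairs, which is what makes much longer contexts feasible.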
skdotdan · about 6 years ago
That's really impressive!

However, I'm a bit disappointed with the code release. I was expecting the full source code and setup.
tezka · about 6 years ago
What is the NLL for 32x32 ImageNet? That's a common benchmark, and it's strange that it's missing from this paper. Also, will you release CIFAR-10 samples? Curious what they look like at 2.80.
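For context on the question, a small helper (an illustration, not from the paper's code) showing how bits-per-dim figures like the 2.80 this comment presumably refers to are typically computed from a model's negative log-likelihood: convert nats to bits with a factor of ln 2 and divide by the number of pixel dimensions (32 × 32 × 3 = 3072 for CIFAR-10 or downsampled 32x32 ImageNet).

```python
# Illustration: convert a total negative log-likelihood (in nats per image)
# into bits per dimension, the unit used for these density-modeling benchmarks.
import math

def bits_per_dim(nll_nats_per_image, height=32, width=32, channels=3):
    num_dims = height * width * channels
    return nll_nats_per_image / (num_dims * math.log(2))

# Sanity check: an NLL of 2.80 * ln(2) * 3072 nats per image is 2.80 bits/dim.
print(round(bits_per_dim(2.80 * math.log(2) * 3072), 2))  # 2.8
```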