Ring attention with blockwise transformers for near-infinite context

47 points by muggermuch over 1 year ago

6 comments
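(For context on the title: the paper computes attention over a long sequence one key/value block at a time, and Ring Attention additionally passes those blocks around a ring of devices so no single host ever holds the whole sequence. Below is a minimal single-device sketch of the blockwise part, assuming standard scaled dot-product attention; the function name, shapes, and NumPy usage are illustrative and not the paper's code, and causal masking plus the cross-device communication are omitted.)

    import numpy as np

    def blockwise_attention(q, k, v, block_size):
        """Attention computed one key/value block at a time with a running
        (online) softmax, so the full attention matrix is never materialized."""
        seq_len, d = q.shape
        scale = 1.0 / np.sqrt(d)
        out = np.zeros(v.shape)
        running_max = np.full(seq_len, -np.inf)   # per-query running max of scores
        running_sum = np.zeros(seq_len)           # per-query running softmax denominator

        for start in range(0, seq_len, block_size):
            k_blk = k[start:start + block_size]
            v_blk = v[start:start + block_size]
            scores = (q @ k_blk.T) * scale                      # (seq_len, block)
            new_max = np.maximum(running_max, scores.max(axis=1))
            # Rescale previously accumulated results to the new max before adding this block.
            correction = np.exp(running_max - new_max)
            p = np.exp(scores - new_max[:, None])
            out = out * correction[:, None] + p @ v_blk
            running_sum = running_sum * correction + p.sum(axis=1)
            running_max = new_max

        return out / running_sum[:, None]

    # Quick check against ordinary full attention on random data.
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((256, 64)) for _ in range(3))
    scores = (q @ k.T) / np.sqrt(64)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    full = (weights / weights.sum(axis=1, keepdims=True)) @ v
    assert np.allclose(blockwise_attention(q, k, v, block_size=32), full)

In the paper's distributed setting, the same per-block update runs on every device while key/value blocks are rotated host-to-host around the ring, so each query block eventually attends to the whole sequence without any device holding the full attention matrix.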

chpatrick over 1 year ago
Get ready for some countries putting your entire surveillance logs in the LLM and asking it if you've been naughty or not, automatically, every day.
optimalsolver over 1 year ago
Rather than all this effort to work around the flaws of the transformer model, maybe researchers should be looking for a better architecture altogether.

The absolutely insane amount of compute that transformers consume could probably be better used for neuroevolutionary search.
esafak over 1 year ago
It's cool to see the founder of a major company still write papers.
ctoth over 1 year ago
I was reading the paper, and then it hit me ... Did they use ChatGPT to generate their abstract?

Almost certainly not, but I wonder how well it does at that?

Come up with a (fake) way to improve the transformer architecture, then write the title and abstract.

Title: Novel Cyclical Attention Mechanism Enhances Transformer Architectures

Abstract: The transformer architecture has emerged as a powerful model for handling sequential data across various domains. Despite its success, the fixed nature of its attention mechanism often restricts its ability to adapt to the dynamic nature of real-world data sequences. In this paper, we propose a novel Cyclical Attention Mechanism (CAM) that augments the standard transformer architecture. Unlike conventional attention mechanisms which allocate attention statically based on previous layers, the CAM operates in a cyclical fashion, allowing for a dynamic, recurrent redistribution of attention over the sequence at each layer of the transformer. This cyclical process is facilitated through a novel temporal feedback loop that integrates information from both previous and subsequent layers, allowing for a more nuanced understanding of long-term dependencies within the data. Moreover, the proposed mechanism introduces an adaptive temporal gating system that intelligently modulates the flow of information through the cycles, ensuring optimal retention and refinement of relevant information throughout the network. We demonstrate through extensive experiments on various benchmark datasets that the Cyclical Attention Mechanism significantly improves the model's ability to handle long-range dependencies, leading to substantial improvements in performance across multiple tasks including language modeling, translation, and sequence labeling. Our findings pave the way for a new line of research into dynamic attention mechanisms within transformer architectures, showcasing the potential for enhanced performance and adaptability in handling complex sequential data[1].

I know, I find it tiresome too when people share their ChatGPT responses, but this really struck me. We are very, very close to those being indistinguishable.

* I'd hate to be trying to sort out valid from invalid papers these days.

* How close are AIs to doing AI research?

* If an AI can predict something similar to your paper, is it more or less likely to be valid/true/reproducible?

[1]: https://chat.openai.com/share/ba769733-e98d-48d3-809a-7611f3dd905e
chaz6 over 1 year ago
I am disappointed to see a paper with the phrase, in the title no less, "Near-Infinite". Something is either infinite or not; there can be no "near".
thefourthchime over 1 year ago
WOWOWOWOWOWOWOWOOWOWOWOWOWOWOWOW