
Why Are Sinusoidal Functions Used for Position Encoding?

5 points by mfn about 2 years ago

1 comment

mfn about 2 years ago

Sinusoidal positional embeddings have always seemed a bit mysterious - even more so since papers don't tend to delve much into the intuition behind them. For example, from Vaswani et al., 2017:

> That is, each dimension of the positional encoding corresponds to a sinusoid. The wavelengths form a geometric progression from 2π to 10000 · 2π. We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos).

Inspired largely by the RoFormer paper (https://arxiv.org/abs/2104.09864), I thought I'd write a post that dives a bit into how intuitive considerations around linearity and relative positions can lead to the idea of using sinusoidal functions to encode positions.

Would appreciate any thoughts or feedback!
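To make the quoted property concrete, here is a minimal NumPy sketch (not taken from the linked post; the function names are illustrative). It builds the Vaswani et al. encoding and checks that a fixed offset k acts on PE(pos) as a position-independent linear map - a block-diagonal rotation, one 2×2 block per sin/cos pair:

```python
import numpy as np

def sinusoidal_pe(pos, d_model=64, base=10000.0):
    """Sinusoidal positional encoding from Vaswani et al., 2017.

    Even dimensions use sin, odd dimensions use cos; the wavelengths
    form a geometric progression from 2*pi up to roughly base * 2*pi.
    """
    i = np.arange(d_model // 2)            # index of each sin/cos pair
    freqs = base ** (-2.0 * i / d_model)   # angular frequency per pair
    pe = np.empty(d_model)
    pe[0::2] = np.sin(pos * freqs)
    pe[1::2] = np.cos(pos * freqs)
    return pe

def shift_matrix(k, d_model=64, base=10000.0):
    """Matrix M such that PE(pos + k) == M @ PE(pos) for every pos.

    M depends only on the offset k, not on pos: each (sin, cos) pair is
    rotated by the angle k * freq, via the angle-addition identities.
    """
    i = np.arange(d_model // 2)
    freqs = base ** (-2.0 * i / d_model)
    M = np.zeros((d_model, d_model))
    for j, theta in enumerate(k * freqs):
        # 2x2 rotation block acting on the pair at dimensions (2j, 2j+1)
        M[2*j,   2*j]   =  np.cos(theta)
        M[2*j,   2*j+1] =  np.sin(theta)
        M[2*j+1, 2*j]   = -np.sin(theta)
        M[2*j+1, 2*j+1] =  np.cos(theta)
    return M

pos, k = 7, 5
assert np.allclose(shift_matrix(k) @ sinusoidal_pe(pos),
                   sinusoidal_pe(pos + k))
```

The assertion passing for any choice of pos and k is exactly the "PE(pos+k) is a linear function of PE(pos)" claim from the paper, and the rotation-per-pair view is the same structure RoFormer builds on.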