
The Fractured Entangled Representation Hypothesis

59 points | posted by akarshkumar0101, 5 days ago

7 comments

scarmig, 5 days ago
Did you investigate other search processes besides SGD? I'm thinking of those often termed "biologically plausible" (e.g. forward-forward, FA). Are their internal representations closer to the fractured or unified representations?
ipunchghosts, 5 days ago
I am glad they evaluated this hypothesis using weight decay, which is primarily thought to induce a structured representation. My first thought was that the entire paper would be useless if they didn't do this experiment.

I find it rather interesting that the structured representations go from sparse to full to sparse as a function of layer depth. I have noticed that applying the weight decay penalty as an exponential function of layer depth gives improved results over using a global weight decay.
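The depth-dependent penalty described above can be sketched as a manual SGD update in which layer `d` receives weight decay `base_wd * gamma**d`. This is a minimal illustration of the commenter's idea, not the paper's method; `base_wd`, `gamma`, and the layer shapes are assumed values chosen for the example.

```python
import numpy as np

def sgd_step(weights, grads, lr=0.1, base_wd=1e-2, gamma=0.5):
    """One SGD step where layer d gets weight_decay = base_wd * gamma**d.

    Hypothetical sketch: the exponential-in-depth schedule is the idea
    from the comment; the specific constants are illustrative.
    """
    new_weights = []
    for depth, (w, g) in enumerate(zip(weights, grads)):
        wd = base_wd * gamma**depth          # penalty shrinks with depth
        new_weights.append(w - lr * (g + wd * w))
    return new_weights

# Three layers of ones, zero gradients: only the decay term acts,
# so shallower layers shrink more than deeper ones.
weights = [np.ones((4, 4)) for _ in range(3)]
grads = [np.zeros_like(w) for w in weights]
out = sgd_step(weights, grads)
```

With zero gradients, layer `d` is scaled by `1 - lr * base_wd * gamma**d`, so the first layer ends at 0.999 and the last at 0.99975, making the depth-dependent pull toward zero easy to inspect.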
timewizard, 5 days ago
> Much of the excitement in modern AI is driven by the observation that scaling up existing systems leads to better performance.

Scaling up almost always leads to better performance. If you're only getting linear gains, though, then there is absolutely nothing to be excited about. You are in a dead end.
goldemerald, 5 days ago
This is an interesting line of research, but it is missing a key aspect: there are (almost) no references to the linear representation hypothesis. Much recent work on neural network interpretability has shown that individual neurons are polysemantic, and therefore practically useless for explainability. My hypothesis is that fitting linear probes (or a sparse autoencoder) would reveal linearly semantic attributes.

It is unfortunate, because they briefly mention Neel Nanda's Othello experiments, but not the wide array of experiments like the NeurIPS oral "Linear Representation Hypothesis in Language Models" or even Golden Gate Claude.
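The linear-probe check proposed above can be sketched on synthetic data: fit a ridge-regularized linear map from a layer's activations to a candidate attribute and measure how much of the attribute is linearly decodable. Everything here is an illustrative assumption (fake activations, a planted linear attribute, the ridge strength `lam`), not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 32))                 # stand-in hidden activations
direction = rng.normal(size=32)                   # planted "attribute" direction
labels = acts @ direction + 0.1 * rng.normal(size=200)

# Ridge-regularized least-squares probe: w = (A^T A + lam I)^-1 A^T y
lam = 1e-3
w = np.linalg.solve(acts.T @ acts + lam * np.eye(32), acts.T @ labels)

pred = acts @ w
r2 = 1 - np.sum((labels - pred) ** 2) / np.sum((labels - labels.mean()) ** 2)
# high r2 -> the attribute is (near-)linearly represented in this layer,
# even if no single neuron encodes it on its own
```

On real activations the probe would be fit on held-out data; the point of the sketch is only that a distributed, polysemantic code can still yield a near-perfect linear readout.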
cwmoore, 5 days ago
Isn&#x27;t this simply mirroronic gravitation?
light_hue_1, 4 days ago
"I looked at the representations of a network and I don't like them."

Great! There's no mathematical definition of what a fractured representation is. It's whatever art preferences you have.

Our personal preferences aren't a good predictor of which network will work well. We wasted decades with classical AI and graphical models encoding our aesthetics into models, just to find out that the results are totally worthless.

Can we stop, please? I get it. I too like beautiful things. But we can't hold on to things that don't work. Entire fields like linguistics are dying because they refuse to abandon this nonsense.
akarshkumar0101, 5 days ago
Tweet: https://x.com/kenneth0stanley/status/1924650124829196370
Arxiv: https://arxiv.org/abs/2505.11581