
A visual explanation for regularization of linear models

163 points | by parrt | about 5 years ago

3 comments

LolWolf · about 5 years ago
> The simple reason is that that illustration shows how we regularize models conceptually, with hard constraints, not how we actually implement regularization, with soft constraints!

Note that these are equivalent. In particular, assuming the two problems are (for some loss "l" and regularizer "r")

    minimize  l(θ) + λ·r(θ)

and

    minimize  l(θ)  subject to  r(θ) ≤ M,

for every λ there exists an M, and for every M there exists a λ, such that the resulting problems are equivalent, in the sense that a solution to one is a solution to the other. (This holds under some fairly general regularity conditions, but those are satisfied in all the examples given.) I agree that this is rarely stated in introductory texts, but the intuitive picture is the same.
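To make that equivalence concrete, here is a minimal numerical sketch (my own illustration, not code from the thread or the article), assuming NumPy and SciPy are available. It solves the penalized ridge problem for a fixed λ, sets M to the squared norm of that solution, and checks that the hard-constrained problem recovers the same coefficients:

    # Penalized vs. constrained ridge: same solution for matched (λ, M).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

    lam = 5.0
    # Soft constraint: minimize ||Xθ - y||² + λ||θ||² (closed form).
    theta_pen = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
    M = theta_pen @ theta_pen  # the matching hard-constraint budget

    # Hard constraint: minimize ||Xθ - y||² subject to ||θ||² ≤ M.
    loss = lambda t: np.sum((X @ t - y) ** 2)
    res = minimize(loss, x0=np.zeros(3), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda t: M - t @ t}])

    print(theta_pen)  # the two solutions agree to solver tolerance
    print(res.x)

Sweeping λ in the penalized form traces out the same solution path as sweeping M in the constrained form.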
parrt · about 5 years ago
The world certainly doesn't need yet another article on the mechanics of regularized linear models. What's lacking is a simple and intuitive explanation of what exactly is going on during regularization. The goal of this article is to explain visually how regularization behaves, dispelling some myths and answering important questions along the way.
NischalM · about 5 years ago
I wrote a small post to explain the bias-variance trade-off in OLS visually; leaving it here in case it helps anyone: https://towardsdatascience.com/bias-and-variance-in-linear-models-e772546e0c30
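For readers who want to see that trade-off numerically rather than follow the link, here is a small Monte Carlo sketch of my own (hypothetical, not taken from the linked post): ridge shrinkage adds bias but reduces variance relative to plain OLS.

    # Bias-variance trade-off: estimate bias² and variance of the ridge
    # estimator over many resampled training sets (λ = 0 is plain OLS).
    import numpy as np

    rng = np.random.default_rng(1)
    theta_true = np.array([1.0, -2.0, 0.5])

    def bias_variance(lam, trials=2000, n=30):
        estimates = []
        for _ in range(trials):
            X = rng.normal(size=(n, 3))
            y = X @ theta_true + rng.normal(size=n)
            estimates.append(np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y))
        estimates = np.array(estimates)
        bias2 = np.sum((estimates.mean(axis=0) - theta_true) ** 2)
        variance = np.sum(estimates.var(axis=0))
        return bias2, variance

    for lam in (0.0, 5.0, 50.0):
        print(lam, bias_variance(lam))  # bias² rises, variance falls with λ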