How linear regression works intuitively and how it leads to gradient descent

292 points | posted by lucasfcosta | 3 days ago

15 comments

tibbar · about 2 hours ago
Some important context missing from this post (IMO) is that the data set presented is probably not a very good fit for linear regression, or really most classical models: you can see that there's way more variance at one end of the dataset. So even if we find the best model for the data that looks great in our gradient-descent-like visualization, it might not have that much predictive power. One common trick to deal with data sets like this is to map the data to another space where the distribution is more even and then build a model in *that* space. Then you can make predictions for the original data set by taking the inverse mapping on the outputs of the model.
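A minimal sketch of that transform-then-invert trick, with made-up numbers (the log transform is just one common choice of mapping when the spread grows with the target):

```python
import numpy as np

# Hypothetical data: price spread grows with house size (heteroscedastic).
rng = np.random.default_rng(0)
size = rng.uniform(50, 400, 200)                       # square meters
price = 2000 * size * np.exp(rng.normal(0, 0.3, 200))  # multiplicative noise

# Map prices to log-space, where the spread is roughly even, and fit a line there.
slope, intercept = np.polyfit(size, np.log(price), deg=1)

# Predict in log-space, then apply the inverse mapping (exp) to get prices back.
new_sizes = np.array([100.0, 250.0])
predicted_price = np.exp(intercept + slope * new_sizes)
print(predicted_price)
```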
c7b · about 12 hours ago
One interesting property of least squares regression is that the predictions are the conditional expectation (mean) of the target variable given the right-hand-side variables. So in the OP example, we're predicting the average price of houses of a given size.

The notion of predicting the mean can be extended to other properties of the conditional distribution of the target variable, such as the median or other quantiles [0]. This comes with interesting implications, such as the well-known property of the median being more robust to outliers than the mean. In fact, the absolute loss function mentioned in the article can be shown to give a conditional median prediction (using the mid-point in case of non-uniqueness). So in the OP example, if the data set is known to contain outliers like properties that have extremely high or low value due to idiosyncratic reasons (e.g. former celebrity homes or contaminated land), then the absolute loss could be a wiser choice than least squares (of course, there are other ways to deal with this as well).

Worth mentioning here, I think, because the OP seems to be holding a particular grudge against the absolute loss function. It's not perfect, but it has its virtues and some advantages over least squares. It's a trade-off, like so many things.

[0] https://en.wikipedia.org/wiki/Quantile_regression
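A quick numerical illustration of that mean-versus-median point, with toy numbers not taken from the article: the constant prediction that minimizes squared error lands on the mean and gets dragged by a single outlier, while the one that minimizes absolute error lands on the median and barely moves.

```python
import numpy as np

# Toy prices with one extreme outlier (say, a former celebrity home).
prices = np.array([200_000, 210_000, 220_000, 230_000, 5_000_000], dtype=float)

# Evaluate both losses for every candidate constant prediction on a fine grid.
candidates = np.linspace(prices.min(), prices.max(), 200_001)
squared_loss = [np.mean((prices - c) ** 2) for c in candidates]
absolute_loss = [np.mean(np.abs(prices - c)) for c in candidates]

print(candidates[np.argmin(squared_loss)], prices.mean())       # ~1,172,000: pulled toward the outlier
print(candidates[np.argmin(absolute_loss)], np.median(prices))  # ~220,000: robust to it
```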
easygenes · about 11 hours ago
This is very light and approachable but stops short of building the statistical intuition you want here. They fixate on the smoothness of squared errors without connecting that to the Gaussian noise model and establishing how that relates to the predictive power against natural sorts of data.
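For the connection being alluded to: least squares is the maximum-likelihood fit when the noise is modeled as Gaussian with constant variance. A small sketch with made-up data, checking that minimizing the Gaussian negative log-likelihood recovers the ordinary least-squares coefficients:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, 100)  # y = ax + b with Gaussian noise

# Ordinary least squares.
a_ls, b_ls = np.polyfit(x, y, deg=1)

# Under y ~ Normal(ax + b, sigma^2), the negative log-likelihood is, up to
# constants, sum((y - ax - b)^2) / (2 sigma^2), so minimizing it over (a, b)
# is exactly least squares.
def neg_log_likelihood(params):
    a, b = params
    residuals = y - (a * x + b)
    return 0.5 * np.sum(residuals ** 2)  # sigma held fixed; constants dropped

a_ml, b_ml = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
print(a_ls, b_ls)   # the two fits agree up to numerical tolerance
print(a_ml, b_ml)
```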
stared · about 9 hours ago
I really recommend this explorable explanation: https://setosa.io/ev/ordinary-least-squares-regression/

And for actual gradient descent code, here is an older example of mine in PyTorch: https://github.com/stared/thinking-in-tensors-writing-in-pytorch/blob/master/3%20Linear%20regression.ipynb
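In that spirit, a minimal PyTorch sketch of gradient descent on a linear model (just an illustration of the idea, not the code from the linked notebook):

```python
import torch

# Toy data: y = 3x + 2 plus noise.
x = torch.linspace(0, 10, 100).unsqueeze(1)
y = 3 * x + 2 + torch.randn_like(x)

a = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([a, b], lr=0.01)

for step in range(2000):
    optimizer.zero_grad()
    loss = torch.mean((a * x + b - y) ** 2)  # mean squared error
    loss.backward()                          # autograd computes dloss/da and dloss/db
    optimizer.step()                         # one gradient descent update

print(a.item(), b.item())  # should approach 3 and 2
```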
jampekka · about 10 hours ago
The main practical reason why square error is minimized in ordinary linear regression is that it has an analytical solution, which makes it a bit of a weird example for gradient descent.

There are plenty of error formulations that give a smooth loss function, and many even a convex one, but most don't have analytical solutions, so they are solved via numerical optimization like GD.

The main message is IMHO correct though: square error (and its implicit Gaussian noise assumption) is all too often used just out of convenience and tradition.
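For reference, the analytical solution being referred to is the normal equations: setting the gradient of the squared loss to zero gives the coefficients in closed form, no iteration needed. A small sketch with made-up housing data:

```python
import numpy as np

rng = np.random.default_rng(2)
size = rng.uniform(50, 400, 100)
price = 2000 * size + 50_000 + rng.normal(0, 30_000, 100)

# Design matrix with an intercept column.
X = np.column_stack([size, np.ones_like(size)])

# Closed-form least-squares fit: solve (X^T X) beta = X^T y.
beta = np.linalg.solve(X.T @ X, X.T @ price)
print(beta)  # roughly [2000, 50000]
```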
rogue7 · about 3 hours ago
I built a small static web app [0] (with Svelte and TensorFlow.js) that shows gradient descent. It has two kinds of problems: wave (the default) and linear. In the first case, the algorithm learns y = cos(ax + b); in the second, y = ax + b. The training data is generated from these functions with some noise.

I spent some time making it work with interpolation so that the transitions are smooth.

Then I expanded to another version, including a small neural network (nn) [1].

And finally, for the two functions that have a 2D parameter space, I included a viz of the loss [2]. You can click on the 2D space and get a new initial point for the descent, and see the trajectory.

Never really finished it, though I wrote a blog post about it [3].

[0] https://gradfront.pages.dev/

[1] https://f36dfeb7.gradfront.pages.dev/

[2] https://deploy-preview-1--gradient-descent.netlify.app/

[3] https://blog.horaceg.xyz/posts/need-for-speed/
brrrrrm · about 13 hours ago
> When using least squares, a zero derivative always marks a minimum. But that's not true in general ... To tell the difference between a minimum and a maximum, you'd need to look at the second derivative.

It's interesting to continue the analysis into higher dimensions, which have interesting stationary points that require looking at the matrix properties of a specific type of second-order derivative (the Hessian): https://en.wikipedia.org/wiki/Saddle_point

In general it's super powerful to convert data problems like linear regression into geometric considerations.
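A tiny illustration of that test in two dimensions: classify a stationary point by the signs of the eigenvalues of the Hessian, here for f(x, y) = x^2 - y^2, whose origin is a saddle.

```python
import numpy as np

# f(x, y) = x**2 - y**2 has a stationary point at the origin (the gradient vanishes there).
# Its Hessian is constant:
hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(hessian)
if np.all(eigenvalues > 0):
    print("local minimum")
elif np.all(eigenvalues < 0):
    print("local maximum")
else:
    print("saddle point")  # mixed signs, as for x**2 - y**2
```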
dalmo3 · about 5 hours ago
I don't have anything useful to say, but, how the hell is that a "12 min read"?

I always find those counters to greatly overestimate reading speed, but for a technical article like this it's outright insulting, to be honest.
setgree · about 2 hours ago
Nice, thanks for sharing! I shared this with my HS calculus teacher :) (My model is that his students should be motivated to get machine learning engineering jobs, so they should be motivated to learn calculus, but who knows.)
throwaway7783 · about 2 hours ago
In the same vein, Karpathy's video series "Neural Networks from zero to hero" [0] touches upon a lot of this and the intuitions as well. One of the best introductory series (even if you ignore the neural net part of it); it brushes on gradients, differentiation, and what they mean intuitively.

[0] https://youtu.be/VMj-3S1tku0?si=jq1cCSn5si17KK1o
quercusa · about 4 hours ago
This (housing prices) example seems really familiar. Was it used in Andrew Ng's original Coursera ML class?
wodenokoto · about 9 hours ago
Speaking of linear regression, can any of you recommend an online course or book that deep dives into fitting linear models?
jwilber · about 3 hours ago
See another interactive article explaining linear regression and gradient descent: https://mlu-explain.github.io/linear-regression/
reify · about 12 hours ago
All that's wrong with the modern world:

https://www.ibm.com/think/topics/linear-regression

A proven way to scientifically and reliably predict the future

Business and organizational leaders can make better decisions by using linear regression techniques. Organizations collect masses of data, and linear regression helps them use that data to better manage reality, instead of relying on experience and intuition. You can take large amounts of raw data and transform it into actionable information.

You can also use linear regression to provide better insights by uncovering patterns and relationships that your business colleagues might have previously seen and thought they already understood.

For example, performing an analysis of sales and purchase data can help you uncover specific purchasing patterns on particular days or at certain times. Insights gathered from regression analysis can help business leaders anticipate times when their company's products will be in high demand.
jascha_eng · about 9 hours ago
The amount of em dashes in this makes it look very AI-written. That doesn't make it a bad piece, but it does make me check every sentence more carefully for errors.