Logistic Regression from Bayes’ Theorem

184 points by Homunculiheaded almost 6 years ago

7 comments

imbusy111 almost 6 years ago
If you fit a linear model for the coffee making problem, one of the features is temperature, and the coefficient for temperature in the linear model is positive, does that mean that if you keep increasing the temperature without limit, the probability of making a good cup of coffee also increases without limit?

In reality the temperature needs to fall within a certain range.
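(A small illustrative sketch, not from the article; the coefficients and temperatures below are invented. With only a linear temperature term the sigmoid output is monotone and creeps toward 1 as temperature grows, while an added quadratic feature centred on an assumed ideal temperature makes the predicted probability peak inside a range, which is closer to how coffee actually behaves.)

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    temps = np.array([150.0, 190.0, 230.0, 300.0])

    # Linear-in-temperature model (hypothetical coefficients): monotone in temp,
    # so the predicted probability keeps rising toward 1 as temperature increases.
    beta0, beta_temp = -15.0, 0.08
    print(sigmoid(beta0 + beta_temp * temps))

    # Adding a quadratic feature around an assumed ideal of 190F lets the
    # probability peak near 190 and fall off on both sides.
    gamma0, gamma_quad = 2.0, -0.01
    print(sigmoid(gamma0 + gamma_quad * (temps - 190.0) ** 2))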
debbiedowner almost 6 years ago
It would be nice to hear about the optimization method, with convergence guarantees etc. Introducing the model is nice, but you need to show the quality and ease of fit. You could maybe do this first, since you rely on the idea of learning the parameters somehow to motivate the model.

You can relate it to NNs for free, since it is a linear layer with sigmoid activation.

You can stress that it is linear in the sense that your decision boundary is linear.

I don't like how capitalized letters are not random variables but are observations.

You can give some examples of what conditional PDFs P(H=1 | D) look like and what you can model. In your case, if the ideal temp for coffee is 190F and being off by 10 or more makes the coffee bad, then you hope that (temp - 190)^2 is a feature input.

Congrats on the book deal!
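(A rough sketch of the fitting step this comment asks about; the toy data, the assumed ideal temperature of 190F, and the learning rate are all invented for illustration. The mean negative log-likelihood of logistic regression is convex, so plain gradient descent converges to the global optimum.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: brewing temperature and a noisy binary "good coffee" label.
    temp = rng.uniform(150.0, 230.0, size=200)
    good = (np.abs(temp - 190.0) + rng.normal(0.0, 5.0, size=200) < 10.0).astype(float)

    # Single feature: squared distance from the assumed ideal temp, rescaled to [0, 1].
    x = ((temp - 190.0) / 40.0) ** 2
    X = np.column_stack([np.ones_like(x), x])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Gradient descent on the (convex) mean negative log-likelihood.
    beta = np.zeros(2)
    lr = 1.0
    for _ in range(10_000):
        p = sigmoid(X @ beta)
        beta -= lr * X.T @ (p - good) / len(good)

    print(beta)  # the coefficient on the squared-distance term should come out negative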
jules almost 6 years ago
Very nice! How about this, for more than 2 classes:

Let p_k be the probability of being in class k. We assume log p_k = f_k(x) + C(x), where x is the feature vector and C(x) is a normalisation to make the probabilities sum to 1.

Equivalently, p_k is proportional to exp(f_k(x)), so p_k = exp(f_k(x)) / sum_j exp(f_j(x)).

Because of the normalisation we may assume without loss of generality that f_0(x) = 0. Then if we have 2 classes and f_1(x) is linear, we get logistic regression.
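(A quick numeric check of that reduction; the feature vector and the linear f_1 below are arbitrary. With f_0(x) = 0 fixed, the two-class softmax probability of class 1 is exactly the logistic sigmoid of f_1(x).)

    import numpy as np

    def softmax(scores):
        e = np.exp(scores - np.max(scores))  # shift by the max for numerical stability
        return e / e.sum()

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([1.0, 2.5])            # arbitrary feature vector
    w, b = np.array([0.7, -1.2]), 0.3   # arbitrary linear f_1

    f1 = w @ x + b
    p = softmax(np.array([0.0, f1]))    # f_0(x) = 0 by the convention above

    print(p[1], sigmoid(f1))            # identical up to floating point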
doomrobo almost 6 years ago
This was a really neat exposition! I have a few questions:

1. Is D a binary random variable? If so, what exactly does it mean to say beta*D + beta_0 is an approximation for the log odds? Doesn't this formula only take on 2 possible values?

2. Could you provide intuition for why a linear function of D would be a good approximation for the log odds mentioned?
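(A standard observation, not from the thread, that bears on question 1: if D is a single binary feature, the linear form isn't really an approximation at all, because the two free parameters can match the two log-odds values exactly: log[P(H=1|D=0)/P(H=0|D=0)] = beta_0 and log[P(H=1|D=1)/P(H=0|D=1)] = beta_0 + beta. The linearity assumption only starts to bite once D is continuous or has several components.)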
blackbear_ almost 6 years ago
NB: this post uses D for the input x and H for the output y. This confused me quite a bit since usually in ML we use D for the data (pairs of x and y) and H for the model (in most cases the parameters, the betas in this example).
PopularBoard almost 6 years ago
I'm a little confused, how technical is this approach? I can't understand the meaning of P(D), for example. Does it make sense in strict mathematics?
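(For what it's worth, a standard identity rather than anything specific to the post: P(D) is the marginal probability of the observed features, and by the law of total probability P(D) = P(D|H=1)P(H=1) + P(D|H=0)P(H=0), i.e. exactly the normaliser that makes P(H=1|D) and P(H=0|D) sum to 1.)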
s_Hogg almost 6 years ago
I realise this is pedantry, but it's definitely "Bayes' theorem", not "Baye's theorem", dammit.

Sorry about that.