If you fit a linear model for the coffee-making problem and one of the parameters is temperature, and the coefficient for temperature in the linear model is positive, does that mean that if you keep increasing the temperature without limit, the probability of making a good cup of coffee also increases without limit?<p>In reality the temperature just needs to fall within a certain range.
It would be nice to hear about the optimization method, convergence guarantees, etc. Introducing the model is nice, but you need to show the quality and ease of fit. You could perhaps do this earlier, since you rely on the idea of learning the parameters somehow to motivate the model.<p>You can relate it to NNs for free, since it is a linear layer with a sigmoid activation.<p>You can stress that it is linear in the sense that your decision boundary is linear.<p>I don't like how capitalized letters are not random variables but observations.<p>You could give some examples of what conditional PDFs P(H=1 | D) look like and what you can model. In your case, if the ideal temp for coffee is 190F and the coffee is bad at +/- 10 or more, then you hope that (temp - 190)^2 is a feature input.<p>Congrats on the book deal!
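To make that feature idea concrete, here's a minimal sketch (the coefficients and the helper name are hypothetical, chosen only so the probability peaks at the ideal temperature):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_good_coffee(temp_f, ideal=190.0, beta=-0.05, beta_0=3.0):
    # Quadratic feature: squared distance from the ideal temperature.
    # With beta < 0, the fitted probability peaks at temp_f == ideal
    # and falls off symmetrically on either side.
    return sigmoid(beta * (temp_f - ideal) ** 2 + beta_0)
```

So even though the model is linear in its features, a feature like (temp - 190)^2 lets it express "good only near 190F" rather than "hotter is always better."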
Very nice! How about this, for more than 2 classes:<p>Let p_k be the probability of being in class k. We assume log p_k = f_k(x) + C(x) where x is the feature vector and C(x) is normalisation to make the probabilities sum to 1.<p>Equivalently, p_k is proportional to exp(f_k(x)), so p_k = exp(f_k(x)) / sum_j exp(f_j(x)).<p>Because of the normalisation we may assume without loss of generality that f_0(x) = 0. Then if we have 2 classes and f_1(x) is linear, we get logistic regression.
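A quick numerical check of that reduction: with two classes and f_0(x) = 0 fixed by the normalisation, the softmax over [0, f_1(x)] gives exactly sigmoid(f_1(x)) for class 1 (a sketch; f1 stands in for any linear score beta.x + beta_0):

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalise
    # so the probabilities sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

f1 = 1.7  # an arbitrary linear score for class 1
p = softmax([0.0, f1])  # p[1] == sigmoid(f1)
```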
This was a really neat exposition! I have a few questions:<p>1. Is D a binary random variable? If so, what exactly does it mean to say beta*D + beta_0 is an approximation for log odds? Doesn't this formula only take on 2 possible values?<p>2. Could you provide intuition for why a linear function of D would be a good approximation for the log odds mentioned?
NB: this post uses D for the input x and H for the output y. This confused me quite a bit, since in ML we usually use D for the data (the pairs of x and y) and H for the hypothesis or model (in most cases the parameters, the betas in this example).
I'm a little confused: how technical is this approach meant to be? I can't understand the meaning of P(D), for example. Does it make sense in rigorous mathematics?