This may be as good a place as any to ask -- I've got a decent math background, and am teaching myself ML while waiting for work to come in.

I'm working on understanding CNNs, and I can't seem to find the answer (read: I don't know what terms to search for) to how the convolutional weights themselves are trained.

For instance, a blur kernel might be

[[ 0 0.125 0 ], [ 0.125 0.5 0.125 ], [ 0 0.125 0 ]]

But in practice, I assume you would want these weights themselves to be learned, no?

However, in a CNN the same convolution is applied across the entire input to that layer; you just move around where you take your "inputs".

How does the training work, then? Do you just do backprop on each weight of the kernel from one output, with a really small learning rate, then repeat after shifting over to the next output position?

Sorry if this seems like a poorly thought out question -- I'm definitely not phrasing it perfectly.
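To make the question concrete, here is a minimal numpy sketch (the function names and structure are my own, not from any library) of the alternative I suspect is actually used: because the kernel is shared across positions, the gradient of the loss with respect to each kernel weight is the *sum* of the contributions from every output position, applied as one update -- rather than a sequence of tiny per-position updates.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' convolution of a 2D input with a 2D kernel.

    (Technically cross-correlation, which is what most CNN
    libraries compute and call convolution.)
    """
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def kernel_grad(x, k, dout):
    """Gradient of the loss w.r.t. the shared kernel.

    dout[i, j] is dLoss/dOut[i, j]. Since the same kernel produced
    every output, the per-position gradients are accumulated into a
    single dk of the kernel's shape -- one update for all positions.
    """
    kh, kw = k.shape
    dk = np.zeros_like(k)
    for i in range(dout.shape[0]):
        for j in range(dout.shape[1]):
            dk += dout[i, j] * x[i:i + kh, j:j + kw]
    return dk

# Usage sketch: 5x5 input, 3x3 kernel, loss = sum of all outputs,
# so dLoss/dOut is all ones.
x = np.arange(25.0).reshape(5, 5)
k = np.full((3, 3), 0.1)
dk = kernel_grad(x, k, np.ones((3, 3)))   # one 3x3 gradient, not 9 separate updates
```

Is that accumulate-then-step picture the right mental model, or is there more to it?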