Just a shout-out to Professor Polo (the leader of the Polo Club of Data Science, who wrote this tutorial on CNNs). Three years ago, as a CRUD code monkey, I found out on Hacker News that Georgia Tech was launching an online master's in data science, entered the program, and took Dr. Polo's class on Data Visualization and Analysis (that semester I literally spent more time on the class than on my day job)... Now, coming full circle three years later, I see Dr. Polo's CNN tutorial on Hacker News again while I'm working on CNNs/RNNs for my last class/capstone project lol. The circle of life, or the circle of HN, I suppose!

Anyone here in the OMSA/OMSCS program?
I know very little about CNNs, but I noticed the ReLU activation step is max(0, x), where x is the sum of the pixel intensities from each channel. In this example it appears that x > 0 for all x, so the activation step isn't really doing much?

EDIT: I'm wrong. x < 0 for some of the pixels, specifically in the more red-ish channels.
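For anyone else new to this: the ReLU step really is just max(0, x) applied elementwise, so it only has an effect on negative values. A minimal sketch (the feature-map values here are hypothetical, just to illustrate the clipping):

```python
def relu(x):
    # ReLU: max(0, x) -- negative activations are clipped to zero,
    # positive ones pass through unchanged
    return max(0.0, x)

# hypothetical post-convolution values; some are negative
# (e.g., from the more red-ish channels mentioned above)
feature_map = [0.8, -0.52, 1.3, -0.1]
activated = [relu(v) for v in feature_map]
print(activated)  # the negative entries become 0.0
```

So on an all-positive feature map ReLU is indeed a no-op; it only matters once some of the summed values dip below zero.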
I wrote a neural network a while back and made some interesting projects with it. Two things I wanted to do: move the matrix multiplications to the GPU and implement convolution layers. If I ever get free time again, maybe I'll do it.

Thanks for sharing this, great content. It made me think about old projects I have lying around.
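For the convolution-layer part, the core operation is small enough to prototype before worrying about the GPU. A rough sketch of a single-channel "valid" convolution (technically cross-correlation, which is what CNN libraries actually compute); the image and kernel values are made up for illustration:

```python
def conv2d(image, kernel):
    # "valid" 2D cross-correlation: slide the kernel over the image,
    # summing elementwise products at each position (no padding)
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# hypothetical 3x3 image and 2x2 kernel
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
k = [[1, 0],
     [0, -1]]
print(conv2d(img, k))  # 2x2 output: [[-4, -4], [-4, -4]]
```

Once this works, the GPU version is "just" the same loop expressed as a batched matrix multiply, which is where the earlier GPU matmul work would pay off.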
The interactive inspection of each layer is beautifully implemented. I hope that one day we'll be able to make even more sense of the effect of each individual weight, i.e., know more than -0.52 for a given pixel.