>> The course will be available for free online from early 2023.

Anyone aware of an in-depth, intro-level, text-based explanation of Stable Diffusion that covers the whole pipeline, including training on an extremely limited dataset?

Here's an example, but open to suggestions too:

https://huggingface.co/blog/stable_diffusion
Sorry folks for not contributing to this thread earlier - didn't realise this popped up on HN while I was sleeping! I'm Jeremy, and I'll be running this course. Lemme know if you've got any questions about it, or anything related to the topic of DL, stable diffusion, etc.
This is exactly the kind of course I've wanted to take for some time now. Even before Stable Diffusion, media synthesis applications like StyleGAN were what I wanted to learn, but most machine learning courses focus on more traditional data science topics.

Of course you can start with a more traditional course and then learn something like Stable Diffusion afterwards, but as a newbie it's quite hard to figure out where to even start. A full-fledged course that takes you exactly where you want to go is a lot easier, and I think it helps learners stay motivated because they have a clear goal in mind. If I want to learn how to create cool images, I want to spend as little time as possible predicting housing prices in the Bay Area.
> in depth course that started right from the foundations—implementing and GPU-optimising matrix multiplications and initialisations—and covered from scratch implementations of all the key applications of the fastai library.

I haven't taken the course, but that sounds like a horrible place to start a course on understanding deep learning. GPU matrix operations are literally an implementation detail.

I think the proper way to teach deep learning "from scratch" would be:

1. Show a simple example of regression using a high-level library.

2. Implement the same regression by writing a simple neural network from scratch: walk through multiplying weights, adding biases, applying the activation function, calculating the loss, and backpropagation (see the sketch after this list).

3. Use the NN on a more complicated problem with more parameters and a larger training set, so the learner sees they've hit a wall in performance and the implementation now needs to be optimised.

4. At *this* point, say: okay, our implementation of looping and multiplying can be done much faster with matrix multiplication on a GPU, and even faster parallelised across GPUs on a network. If you're interested in that, here is an *optional* fork in the course that gets into the specifics. Anything after this point will assume that the implementation of NN calls uses these techniques under the hood.

5. Move on to classification, Q-learning, GANs, transformers.

95% of learners should skip step 4 and only revisit it if they become interested for a specific reason. To *start* with it is crazy. It's like starting a course about flying by explaining how certain composites allowed us to transition from propellers to jets, and then diving into how those composites are made.
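To make step 2 concrete, here's a rough sketch of what such a from-scratch regression net might look like. The toy dataset, layer sizes, and learning rate are all made up for illustration, and numpy matmuls stand in for the explicit loops for brevity (step 4's point about vectorisation applies even more strongly to raw Python loops):

    # Sketch of step 2: a one-hidden-layer regression network trained
    # with hand-written backpropagation. Toy data and hyperparameters
    # are illustrative assumptions, not from any particular course.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data: y = 3x + noise
    X = rng.uniform(-1, 1, size=(100, 1))
    y = 3 * X + 0.1 * rng.normal(size=(100, 1))

    # Initialise weights and biases: input -> hidden(16) -> output
    W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros((1, 16))
    W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros((1, 1))
    lr = 0.1

    for step in range(500):
        # Forward pass: multiply weights, add biases, apply activation
        z1 = X @ W1 + b1
        a1 = np.maximum(z1, 0)          # ReLU activation
        pred = a1 @ W2 + b2

        # Mean squared error loss
        loss = np.mean((pred - y) ** 2)

        # Backward pass: chain rule, layer by layer
        grad_pred = 2 * (pred - y) / len(X)
        grad_W2 = a1.T @ grad_pred
        grad_b2 = grad_pred.sum(axis=0, keepdims=True)
        grad_a1 = grad_pred @ W2.T
        grad_z1 = grad_a1 * (z1 > 0)    # ReLU derivative
        grad_W1 = X.T @ grad_z1
        grad_b1 = grad_z1.sum(axis=0, keepdims=True)

        # Plain gradient-descent update
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print(f"final loss: {loss:.4f}")

Once a learner has written something like this by hand, the motivation for step 4 (pushing exactly these matmuls onto a GPU) is obvious rather than mystifying.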
This course looks interesting. My only concern is that I don't have real experience with NLP. Can anybody recommend resources to get up to speed on this prerequisite? My NLP knowledge is very basic.