Dear all,<p>Today on HN I saw a post mentioning the [Gen AI Handbook](https://news.ycombinator.com/item?id=40604093), in which a comment mentioned the [MIT Deep Learning course](http://introtodeeplearning.com/). It seems that the course covers some LLM / foundation-model topics, but not all of them.<p>So I googled a bit and found that Stanford ([NLP with DL](https://web.stanford.edu/class/cs224n/)) and Princeton ([Understanding LLMs](https://www.cs.princeton.edu/courses/archive/fall22/cos597G/)) both offer great courses on these topics.<p>I was wondering, for people who have more experience with DL and/or LLMs: if I want to focus more on LLMs / foundation models (somewhat motivated by [this YC podcast episode](https://www.youtube.com/watch?v=fmI_OciHV_8)), <i>which of the above materials and/or other resources would you recommend, given that I already have [the Transformer book](https://transformersbook.com/)?</i><p>Some context: I am a math graduate student with a 13-year-old MacBook Pro; I recently purchased a Beelink SER5 mini PC and had some fun with small llamafiles...<p>Many thanks!