> Low-rank adaptation (LoRA) ... has some advantages over previous methods:
>
> - It is faster and uses less memory, which means it can run on consumer hardware.
>
> - The output is much smaller (megabytes, not gigabytes).
>
> - You can combine multiple fine-tuned models together at runtime.

This is great news for my dream of building an interactive messenger, fine-tuned on my personality and the information I want to convey, that can deliver a message on my behalf.

Now just add text-to-speech and a talking head, as discussed in that other submission about cloning yourself with AI... https://news.ycombinator.com/item?id=35280418
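
For what it's worth, a minimal sketch of what "combine multiple fine-tuned models at runtime" could look like, assuming Hugging Face's PEFT library; the base model and adapter repo names here are placeholders, not real checkpoints:

    # Sketch: stacking two LoRA adapters on one frozen base model with PEFT.
    # "some-base-llm" and the lora-* names are hypothetical placeholders.
    from transformers import AutoModelForCausalLM
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained("some-base-llm")

    # Each adapter is only a few MB of low-rank weight deltas.
    model = PeftModel.from_pretrained(base, "lora-my-personality", adapter_name="personality")
    model.load_adapter("lora-my-message", adapter_name="message")

    # Merge the two adapters into one weighted combination and activate it.
    model.add_weighted_adapter(
        adapters=["personality", "message"],
        weights=[0.5, 0.5],
        adapter_name="messenger",
        combination_type="linear",
    )
    model.set_adapter("messenger")

The nice part is that the big base model is downloaded once, and the tiny adapters can be swapped or re-weighted per message without retraining it.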