I'm sharing a blog post on our approach to pruning the Llama2 model by leveraging low-rank structures: https://mobiusml.github.io/low-rank-llama2/

In a nutshell, we've managed to reduce the model's parameter count by up to 50%, double the training speed, and increase inference speed by 1.25x.

For those interested in the technical details or looking to replicate our results, the code is openly available for community use and contributions.
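For anyone wondering what "leveraging low-rank structures" looks like in practice, here's a rough sketch of the general idea (not the exact code from the repo, and the helper name and rank choice are just for illustration): factor a weight matrix with a truncated SVD and keep only the top singular components, so one large linear layer becomes two much smaller ones.

```python
# Minimal sketch (not the authors' exact method): replace a dense linear layer's
# weight W (d_out x d_in) with a rank-r factorization U @ V, cutting the
# parameter count from d_out*d_in to r*(d_out + d_in).
import torch
import torch.nn as nn


def low_rank_factorize(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate `linear` with two smaller linear layers via truncated SVD."""
    W = linear.weight.data                         # shape: (d_out, d_in)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]                   # (d_out, rank), singular values folded in
    V_r = Vh[:rank, :]                             # (rank, d_in)

    down = nn.Linear(W.shape[1], rank, bias=False)
    up = nn.Linear(rank, W.shape[0], bias=linear.bias is not None)
    down.weight.data.copy_(V_r)
    up.weight.data.copy_(U_r)
    if linear.bias is not None:
        up.bias.data.copy_(linear.bias.data)
    return nn.Sequential(down, up)                 # up(down(x)) ≈ linear(x)


# Example: a 4096x4096 projection (~16.8M params) at rank 1024 becomes ~8.4M params.
layer = nn.Linear(4096, 4096, bias=False)
approx = low_rank_factorize(layer, rank=1024)
x = torch.randn(2, 4096)
print((layer(x) - approx(x)).abs().max())  # error is large for a random matrix;
                                           # trained LLM weights are much closer to low-rank.
```

The blog post covers which layers can tolerate this and how the factorized model is fine-tuned afterwards; the snippet above only shows the basic factorization step.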
Cool! But the GitHub repo isn't visible for me yet.

Also, can y'all dumb it down for a simple end user like me? Is this actually distilling the model down to a smaller parameter count, or is it just reducing VRAM/compute during training and inference with a LoRA? Or something else?