According to the "Get the code" link [1], it looks like these models need pretty huge GPUs to even interact with the pre-trained models. Is that abnormal? I was under the impression that training the model is generally what takes the beefy GPU, and then using that model requires more consumer-adjacent hardware. A P100 GPU is $3000 [2].<p>[1] <a href="https://parl.ai/projects/blender/" rel="nofollow">https://parl.ai/projects/blender/</a><p>[2] <a href="https://www.amazon.com/dp/B06WV7HFWV/" rel="nofollow">https://www.amazon.com/dp/B06WV7HFWV/</a>
Boiling it all down, when prompted, these models just regurgitate a sentence similar to what appears in the training data for roughly the same input, using some glorified curve fitting. That does not necessarily imply the model understands the meaning of what it is spitting out, so the uninitiated will be really impressed with this kind of toy.<p>The researchers here appear to have placed particular emphasis on cleaning up the model's output, but I think it's lipstick on a pig. The area begging for more research is parsing out the meaning of anything beyond the simplest sentences.