>"In the in-context learning (ICL) paradigm, given a set of examples, the model has to learn the mapping from inputs to outputs. Prior research has demonstrated that LLMs implicitly compress this mapping into a latent activation, called the <i>task vector</i>"<p>Related:<p>In-Context Learning Creates Task Vectors:<p><a href="https://arxiv.org/abs/2310.15916" rel="nofollow">https://arxiv.org/abs/2310.15916</a><p>Function Vectors in Large Language Models:<p><a href="https://functions.baulab.info/" rel="nofollow">https://functions.baulab.info/</a><p><a href="https://x.com/graceluo_/status/1852043048043319360" rel="nofollow">https://x.com/graceluo_/status/1852043048043319360</a>