I believe the paper being cited is "The Internal State of an LLM Knows When It's Lying", published last year: <a href="https://arxiv.org/abs/2304.13734" rel="nofollow">https://arxiv.org/abs/2304.13734</a>
If I understand correctly, they project the LLM's internal activations onto meaningful linear directions derived from contrasting examples. I guess this is similar to how we began deriving a lot more value from embeddings once we started using the embedding vectors themselves for various downstream tasks.
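To make that concrete, here is a minimal sketch of the contrast-direction idea as I described it. Note the hedge: as I recall, the paper itself trains a small classifier on the hidden activations, so this difference-of-means direction is a simpler stand-in for the same intuition, not their exact method. `true_acts` and `false_acts` are placeholder arrays of hidden states you would extract from the model yourself (e.g. with `output_hidden_states=True`):

```python
import numpy as np

def truth_direction(true_acts: np.ndarray, false_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction separating activations of true vs. false statements."""
    d = true_acts.mean(axis=0) - false_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def project(acts: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """One scalar per example: how far each activation lies along the direction."""
    return acts @ direction

# Toy usage with random stand-in activations (hidden size 4096 is arbitrary).
rng = np.random.default_rng(0)
true_acts = rng.normal(loc=1.0, size=(100, 4096))
false_acts = rng.normal(loc=-1.0, size=(100, 4096))
d = truth_direction(true_acts, false_acts)
scores = project(rng.normal(loc=1.0, size=(5, 4096)), d)
print(scores)  # higher score = looks more "true" along this direction
```

The appeal of the linear-direction view is that, once you have the direction, scoring a new statement is just a dot product against its activation vector.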
This is a stupid argument. I wish the author understood an ounce of how LLMs work.
Of course they know more than what they say. That's because LLMs are nothing but probabilistic structures: they mix and match and produce probabilistic outputs, so they are always making a choice between multiple options.<p>I wish there were a global mandatory course before these substacky authors write for fame.
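To be concrete about "always making a choice between multiple options": here is a minimal sketch of inspecting the full next-token distribution instead of just the token that gets emitted. GPT-2 via HuggingFace is used purely because it's small; nothing here is tied to the article or the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # logits for the next token
probs = torch.softmax(logits, dim=-1)

# The argmax/sampled token is what the model "says"; the rest of the
# distribution is the part it "knows" but never surfaces in the output.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(idx.item()):>10s}  {p.item():.3f}")
```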
This looks cool, but I'm confused about how this is surfaced in your product; llama-8 is not present in your model list.<p>I thought maybe you offer hallucination detection, but I don't see that either. RAG evals also aren't visible.