OK, this seems bunk, basically because they never really provide evidence of "better".

> ... traditional gold-standard approaches use human evaluators that score the quality of generated responses, which can be costly. However, since chat AIs are by definition deployed in social environments with humans, one can leverage statistics of users interaction as a meaningful and aligned measure of chat AI engagingness and quality. To assess the 'quality' of a chat AI, we consider two main proxy functions: the industry standard user retention and the main objective function, user engagement.

Maybe retention and engagement _are_ sufficiently well correlated with human evaluations, but you should probably do both and show that they're strongly correlated before you decide to just drop the human evaluators in favor of your cheap proxy measurements (roughly the check sketched at the end of this comment).

And in this field, where there are known issues with chat LLMs, it's probably important to check things like:

- Does the model only seem "engaging" because the user has to refine their prompt several times before getting a satisfying response?
- Do responses include a lot of hallucinations, which might be engaging but not true?
- Do successive responses show decreased consistency or coherence between messages, in a way that accidentally elicits continued engagement?

Overall, it seems sloppy to believe that it's not a waste of humans' time to talk to your chatbots, and not a waste of readers' time to look at this paper about your chatbots, but that it's too expensive for you to actually measure the quality of your chatbots' responses.
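
For the correlation check above, here's a minimal sketch of what that validation could look like, assuming you have a gold-standard human evaluation score for each chatbot variant alongside the two proxy metrics. The numbers and metric definitions are made up for illustration, not taken from the paper:

```python
# Sketch: check whether the cheap proxies actually track human evaluation
# before dropping the human evaluators. All data below is hypothetical.
from scipy.stats import spearmanr

# One entry per chatbot variant being compared.
human_eval = [3.1, 3.8, 2.4, 4.2, 3.5]       # e.g. mean human quality rating (1-5)
retention  = [0.42, 0.55, 0.31, 0.61, 0.48]  # e.g. day-7 user retention rate
engagement = [12.0, 18.5, 9.1, 21.3, 15.2]   # e.g. mean messages per user per day

for name, proxy in [("retention", retention), ("engagement", engagement)]:
    rho, p = spearmanr(human_eval, proxy)
    print(f"human eval vs {name}: Spearman rho={rho:.2f}, p={p:.3f}")
    # Only if the correlation is strong (across enough variants for the
    # p-value to mean anything) is it defensible to lean on the proxy alone.
```

Even a quick check like this would tell readers whether the proxies actually track the thing they're standing in for, instead of just asserting it.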