Ok, this seems bunk basically because they never really provide evidence of "better".<p>> ... traditional gold-standard approaches use human evaluators that score the quality of generated responses, which can be costly. However, since chat AIs are by definition deployed in social environments with humans, one can leverage statistics of user interaction as a meaningful and aligned measure of chat AI engagingness and quality. To assess the ’quality’ of a chat AI, we consider two main proxy functions: the industry standard user retention and the main objective function, user engagement.<p>Maybe retention and engagement _are_ sufficiently well correlated with human evaluations, but you should probably do both and show that they're strongly correlated before you decide to just drop the human evaluators in favor of your cheap proxy measurements.<p>And in this field, where there are some known issues with chat LLMs, perhaps it's important to check stuff like:<p>- Does the model seem "engaging" just b/c the user has to refine their prompt several times before they get a satisfying response?<p>- Do responses include a lot of hallucinations which might be engaging but not true?<p>- Do successive responses show decreased consistency or coherence between messages, in a way that might accidentally elicit continued engagement?<p>Overall, it seems sloppy to believe that it's not a waste of humans' time to talk to your chatbots, and it's not a waste of time for readers to look at this paper about your chatbots, but it's too expensive for you to actually measure the quality of responses from your chatbots.
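If they did want to sanity-check the proxies, it wouldn't take much: have human raters score a sample of conversations per chat AI and test whether engagement/retention actually track those scores. A minimal sketch of that check (all numbers below are made up, purely to illustrate):<p><pre><code>  # Sketch: do engagement/retention proxies track human quality judgments?
  # All numbers are placeholders, not real measurements.
  from scipy.stats import spearmanr

  # One entry per chat AI variant under comparison.
  human_quality   = [3.1, 3.8, 2.9, 4.2, 3.5]       # mean human rating (1-5)
  user_engagement = [210, 305, 190, 340, 260]       # e.g. mean user chars per turn
  user_retention  = [0.22, 0.31, 0.20, 0.35, 0.27]  # fraction still active after 30 days

  for name, proxy in [("engagement", user_engagement), ("retention", user_retention)]:
      rho, p = spearmanr(human_quality, proxy)
      print(f"{name}: Spearman rho={rho:.2f} (p={p:.3f})")
</code></pre>
If the correlation is strong, great, the cheap proxy is justified; if not, that's exactly the finding you'd want to know before dropping the human evaluators.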
<i>"Responses are selected randomly from a group of base chat AIs. ... The response generated by a specific chat AI is conditional on all previous responses generated by the previously selected chat AIs."</i><p>That's all? That works? Useful.<p>Could that be extended? It doesn't seem inherent in this that all the chat AIs have to be LLMs. Some might be special-purpose systems. Solvers or knowledge bases, such as Wolfram Alpha or a database front end, could play too. Systems at the Alexa/Siri level that can do simple tasks. Domain-specific systems with natural language in and out have been around for decades.
I really would like them to compare to GPT-4 instead of claiming victory when matching 3.5. To me, GPT-4 is the first one that's usable for a lot of professional work. 3.5 is fun and gets some stuff right, but it's more like a demo.
Is it weird to refer to GPT-3.5 as "state of the art" when GPT-4 is right there? Actually, the paper uses "davinci", "GPT-3.5" (sometimes written without the hyphen), and "ChatGPT" interchangeably.
The paper refers to ChatGPT as a 175B parameter LLM. This is almost certainly incorrect; the original largest version of GPT-3 was 175B but analysis of the speed and cost of the current model as well as public statements by OpenAI indicate it’s as much as 5-10x smaller.
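A rough back-of-envelope, using the public API list prices from around that time (treat these figures as assumptions, not measurements):<p><pre><code>  # Back-of-envelope: OpenAI list prices circa early 2023 (assumed figures).
  davinci_price = 0.020  # $ per 1K tokens, text-davinci-003 (the 175B-class model)
  turbo_price   = 0.002  # $ per 1K tokens, gpt-3.5-turbo (the ChatGPT API model)
  print(f"price ratio: {davinci_price / turbo_price:.0f}x")  # prints 10x
  # A 10x cheaper, noticeably faster endpoint is hard to reconcile with the
  # same 175B parameters sitting behind both.
</code></pre>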
mergekit is the tool you need to do this:<p><pre><code>  https://github.com/cg123/mergekit
</code></pre>
You can slice off layers and blend models with different strategies.
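For the layer-slicing case specifically, what mergekit automates is conceptually just this (a hand-rolled illustration with placeholder model names, not mergekit's actual API; both checkpoints need the same architecture and hidden size):<p><pre><code>  # Conceptual sketch of a "franken" layer splice between two Llama-style
  # checkpoints. Model names are placeholders; mergekit does this, plus the
  # config/tokenizer bookkeeping, from a declarative config file.
  import torch
  from transformers import AutoModelForCausalLM

  model_a = AutoModelForCausalLM.from_pretrained("org/model-a", torch_dtype=torch.float16)
  model_b = AutoModelForCausalLM.from_pretrained("org/model-b", torch_dtype=torch.float16)

  # First 24 decoder layers from A, the rest from B.
  spliced = list(model_a.model.layers[:24]) + list(model_b.model.layers[24:])
  model_a.model.layers = torch.nn.ModuleList(spliced)
  model_a.config.num_hidden_layers = len(spliced)

  model_a.save_pretrained("./franken-merge")
</code></pre>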
Now that I think about it, doesn't this "technique" triple the amount of compute and memory per generated token, since each model also needs to compute and store KV values for the tokens it didn't generate and thus has never seen?<p>Edit: On second thought, depending on how it's actually implemented, the other models' tokens are probably run through in one parallel forward pass, so it shouldn't be all that much slower.
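Right: per the selection rule quoted upthread, a model only has to catch up on the turns it hasn't seen, and that catch-up is a single batched prefill rather than token-by-token decoding. Rough bookkeeping sketch (hypothetical interfaces, just to show where the extra cost lands):<p><pre><code>  # Per-model KV-cache bookkeeping in a blended ensemble (hypothetical interface;
  # real code would use e.g. transformers' past_key_values).
  class ModelState:
      def __init__(self, model):
          self.model = model
          self.cache = None   # this model's own KV cache
          self.seen = 0       # how many conversation tokens are already cached

  def generate_turn(states, conversation_tokens, chosen_idx):
      state = states[chosen_idx]
      # Catch up on everything this model hasn't seen (other models' replies plus
      # the new user message) in ONE batched prefill pass...
      unseen = conversation_tokens[state.seen:]
      state.cache = state.model.prefill(unseen, state.cache)  # hypothetical call
      state.seen = len(conversation_tokens)
      # ...then decode its own reply token by token as usual.
      return state.model.decode(state.cache)                  # hypothetical call
</code></pre>
The memory point still stands, though: each base model keeps its own cache, so KV memory scales with how many models you keep warm.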
Foundational models are designed to be universally applicable, covering a wide range of use cases. While it's relatively easy to tailor smaller models to specific scenarios through overfitting, when a model is overly specialized, it loses its broad applicability and ceases to be a foundational model.