Random, off-topic observation: it seems like lots of open-source LLM prompts aren't proofread. For example [0]: "ounch", "he spend", a general lack of punctuation, and so on. But the big model providers don't do this [1], so I'm curious how much impact proper grammar in prompts has (and whether that impact is positive or negative).<p>[0] <a href="https://github.com/allenai/lumos/blob/main/data/incontext.py#L91">https://github.com/allenai/lumos/blob/main/data/incontext.py...</a><p>[1] <a href="https://twitter.com/AmandaAskell/status/1765207842993434880" rel="nofollow">https://twitter.com/AmandaAskell/status/1765207842993434880</a>
The paper [1] seems promising. Is there a fine-tuned model available, or will we have to fine-tune Llama-7b or Mistral-7b ourselves?<p>[1] <a href="https://arxiv.org/pdf/2311.05657.pdf" rel="nofollow">https://arxiv.org/pdf/2311.05657.pdf</a>