When models are released like this, it would be great to accompany them with a PR adding support to ggml/llama.cpp, or to use a format that's already supported. Imo, if I'm choosing between a 3B and a 7B, I'm running it on an edge device or locally, and I don't want HF/PyTorch. The models would be easier to evaluate, and would rank higher among my options, if I could easily get them into llama.cpp.
A helpful paper with the full recipe Cerebras uses to train LLMs, including:
- Extensively deduplicated dataset (SlimPajama)
- Hyperparameter search using muP
- Variable sequence length training + ALiBi (see the sketch after this list)
- Aggressive LR decay
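For anyone unfamiliar with ALiBi: it drops learned positional embeddings and instead adds a fixed, per-head linear penalty to the attention logits, which is part of why it pairs well with variable sequence length training (the penalty depends only on distance, so the model can extrapolate past its training length). A minimal PyTorch sketch of the bias, not the paper's code, assuming a power-of-two head count and a causal mask applied elsewhere:

```python
import torch

def alibi_slopes(n_heads: int) -> torch.Tensor:
    # Geometric sequence of slopes from the ALiBi paper; this simple
    # closed form (m_i = 2^(-8i/n)) assumes n_heads is a power of two.
    start = 2.0 ** (-8.0 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head bias added to attention logits before softmax: each head
    # penalizes attention to distant keys linearly, scaled by its slope.
    # Returned shape: (n_heads, seq_len, seq_len).
    slopes = alibi_slopes(n_heads)
    pos = torch.arange(seq_len)
    # Query-to-key distance; future positions clamp to 0 and are
    # expected to be removed by the causal mask anyway.
    distance = (pos[:, None] - pos[None, :]).clamp(min=0)
    return -slopes[:, None, None] * distance[None, :, :]
```

Usage is just `scores = q @ k.transpose(-2, -1) / math.sqrt(d) + alibi_bias(n_heads, seq_len)` before the softmax.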