So any LLM works with an input/prompt, parameters (like the so-called temperature) and randomness, aka a seed, for its "magic" decision making based on a model. Think of the seed like the map seed in Minecraft: it can reproducibly create the same world from just one number and the same game version. Or the seed number in Stable Diffusion: same prompt, same seed, same model = always the same result.

But not for GPT-3, ChatGPT and GPT-4. I wonder why.

I'm quite mad that this random seed is not exposed so it can be preserved, or accepted as an input. It would speed up scientific exploration and allow different/better reproducibility tests.
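To make the point concrete, here's a toy Python sketch (not any real API; the function and the example logits are made up for illustration) showing that once you fix the seed, temperature sampling becomes fully deterministic:

    import numpy as np

    def sample_next_token(logits, temperature=1.0, seed=None):
        # Toy illustration: temperature scaling + seeded sampling.
        # Same logits, same temperature, same seed -> same token,
        # the same kind of reproducibility a Minecraft or
        # Stable Diffusion seed gives you.
        rng = np.random.default_rng(seed)           # seed pins down the "randomness"
        scaled = np.asarray(logits) / temperature   # temperature reshapes the distribution
        probs = np.exp(scaled - scaled.max())       # softmax (shifted for numerical stability)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.5, 0.1]
    print(sample_next_token(logits, temperature=0.7, seed=42))  # same inputs ->
    print(sample_next_token(logits, temperature=0.7, seed=42))  # identical output

If the hosted GPT models simply returned (or accepted) that seed value, you could replay a generation exactly, which is exactly what's missing right now.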