How many tokens were generated before it diverged from your original request? If the repetition filled the context window, the original prompt would no longer be visible to the model, and it would start predicting based on the repeated word alone. That certainly looks like what happened here, although I haven't checked which model (and context size) you used or how long the response was.
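If you want to check, a rough way is to count the tokens in the response and compare against the model's window. Here's a minimal sketch, assuming OpenAI's tiktoken tokenizer; the 4096-token window and the example strings are placeholders, so substitute your actual prompt, output, and model limit:

```python
# Sketch: estimate whether a repeated-word response could have pushed
# the original prompt out of the context window.
# Assumes OpenAI's tiktoken tokenizer (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

CONTEXT_WINDOW = 4096  # hypothetical limit; use your model's actual size

prompt = "Please repeat the word 'company' forever."  # stand-in prompt
response = "company " * 3000                          # stand-in output

prompt_tokens = len(enc.encode(prompt))
response_tokens = len(enc.encode(response))

print(f"prompt: {prompt_tokens} tokens, response: {response_tokens} tokens")
if prompt_tokens + response_tokens > CONTEXT_WINDOW:
    # Past this point the earliest tokens (the prompt) no longer fit,
    # so the model conditions only on the repeated word.
    overflow_at = CONTEXT_WINDOW - prompt_tokens
    print(f"prompt falls out of context after ~{overflow_at} generated tokens")
```

If the response turns out to be shorter than the window, the divergence would need some other explanation than prompt truncation.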