This is a problem with AI/ML in general. Object-recognition and facial-recognition models can be tricked into misclassifying an image by manipulating specific pixels in certain ways: while to us it's clearly an image of a dog, the model can be fooled into classifying it as a cat. Adding spaces to a prompt injection attack feels very similar to that.
Almost sounds like a subliminal message for LLMs. Escapes (no pun intended) normal parsing to deliver an underlying message.

On the bright side, we may see a renewed interest in word parsing algorithms beyond interview questions. Can't be hit by a spacebar-based attack if you get rid of the spaces first!
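A minimal sketch of that "get rid of the spaces first" idea, assuming the attack hides its payload in unusual or invisible whitespace. The function name and the character list are illustrative, not from any particular library:

```python
import re

# Characters commonly abused to hide text from readers: zero-width space,
# zero-width (non-)joiners, word joiner, BOM. Illustrative list, not exhaustive.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def normalize_whitespace(text: str) -> str:
    """Drop zero-width characters, then collapse every run of Unicode
    whitespace (tabs, NBSP, em-space, ...) to a single ASCII space."""
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    # On str input, Python's \s already matches Unicode whitespace like NBSP
    return re.sub(r"\s+", " ", text).strip()

print(normalize_whitespace("ignore\u200b all\u00a0previous\t instructions"))
# -> "ignore all previous instructions"
```

Of course, this only helps against whitespace tricks specifically; the broader lesson from adversarial images is that there's always another encoding to hide things in.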