I constantly have to rein in GPT-3 and 4 because they so often use expressions like “it is important” or “it is essential”.<p>I can’t think of a reason for the model to have been trained on texts that contain many occurrences of these expressions.<p>This is why I am wondering if it is a side effect of the attention mechanism built into the transformer architecture. As the prompt and the output are repeatedly processed to figure out what really matters, maybe these expressions end up encoded as a latent representation of the relative weights of the different concepts at play in the conversation context.<p>What do you think?
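For context on the mechanism being speculated about, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside a transformer. The shapes, variable names, and toy data are illustrative assumptions only, not GPT-3/4’s actual implementation.

    # Minimal sketch of scaled dot-product attention (illustrative only).
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        """softmax(Q K^T / sqrt(d_k)) V : each token's output is a weighted mix of all values."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)     # pairwise relevance between tokens
        weights = softmax(scores, axis=-1)  # each row sums to 1
        return weights @ V, weights

    # Toy example: 4 tokens with 8-dimensional embeddings (hypothetical data).
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((4, 8))
    K = rng.standard_normal((4, 8))
    V = rng.standard_normal((4, 8))
    out, w = attention(Q, K, V)
    print(w.round(2))  # attention weights: how much each token attends to the others

The attention weights are a per-token mixing distribution over the context, which is roughly the “weighting of concepts” the comment alludes to; whether stock phrases like “it is important” arise from this mechanism rather than from training data or RLHF is an open question here.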
Maybe we can extrapolate from Gödel: a complete ruleset cannot be consistent, and a consistent ruleset cannot be complete.<p>Somewhere in there is the human ingenuity to adopt certain pathological patterns [1] that defeat (current) AI.<p>But it's not always obvious, and the <i>composer sapiens</i> must defy their own understanding of the rules to create a purposeful deviation (sometimes with help from other AI) [2].<p>The practice of sharing GPT prompts carries the implication that results are deterministic. That helps QA, but humans <i>love to play.</i><p>[1] Adversarial Policies Defeat Superhuman Go AIs: <a href="https://arxiv.org/abs/2211.00241" rel="nofollow">https://arxiv.org/abs/2211.00241</a><p>[2] Kellin Pelrine