EXAMPLES:

- "I love how I have to watch 3 minutes of adverts before I watch the video."
- "This [DISGUSTING-LOOKING FOOD] looks delicious."

QUESTIONS:

- Wouldn't the AI have to have human-level understanding of the subject and the context around it?
- Couldn't this lead to AIs creating products/suggestions that are the opposite of what people want?
- If this is a real problem, is there a technical name for it?
I'm tossing some random sentences at GPT-3.

"I love pizza." - Not sarcastic.
"I love my boss." - Possibly not sarcastic, depending on context.
"This meal you have served me looks absolutely fucking delicious." - Sarcastic.
"This will be fun." - Could be sarcastic, but also genuine. (The AI knows how to give non-answers, lol.)

As a human, can you tell? I think it's the same for both: humans and AI alike struggle without context. GPT-3 is trained on more words than you and I have ever read. It's quite global, but leans American.

The guidelines encourage us to control the AI's tone. You don't tell it to "respond to this email"; you tell it to "respond politely to this email." Otherwise, angry emails get angry responses.
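To make the tone-control point concrete, here's a minimal sketch of the idea: prepend the desired tone to the instruction before the email text. The `build_prompt` helper and its exact wording are illustrative, not from any official guideline.

```python
# Minimal sketch of tone control via the instruction line.
# build_prompt and its wording are illustrative assumptions.

def build_prompt(email_body: str, tone: str = "") -> str:
    """Wrap an email in a completion-style instruction for a model like GPT-3."""
    if tone:
        instruction = f"Respond {tone} to this email:"
    else:
        instruction = "Respond to this email:"
    return f"{instruction}\n\n{email_body}\n\nResponse:"

angry_email = "Your product broke after one day. I want my money back NOW."

# Without a tone, the model tends to mirror the sender's anger;
# adding "politely" steers the completion.
print(build_prompt(angry_email))
print(build_prompt(angry_email, tone="politely"))
```

Same email, two very different completions, all from one word in the instruction.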
idk, it sounds like a good research problem.

Maybe you could get partway there with a tone and object-property analysis that detects contrasting tones next to each other, e.g. in the latter example, referencing a disgusting food and saying it's delicious. Especially if the tone is exaggerated: "[person] seems like a real-life Romeo" (probably sarcastic) vs. "[person] seems like he'd be nice" (probably genuine); "I'm sure everyone will love [controversial proposal]" vs. "[controversial proposal] has its flaws, but I still think it's better than the alternatives".

Of course, considering how many real people fail to detect sarcasm or say absurd things unironically, it will never be 100% accurate. But generally, something is sarcastic when it's ridiculous and obviously false: so obvious that you don't need much logic or understanding to see it, and you can infer it from the phrasing alone.
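The contrasting-tones idea above can be sketched as a toy heuristic: flag a sentence as possibly sarcastic when a negatively-loaded word co-occurs with a strongly positive evaluation. The word lists here are tiny illustrative stand-ins for a real sentiment lexicon, not a serious model.

```python
# Toy contrast heuristic: positive and negative tones in the same sentence.
# POSITIVE and NEGATIVE are tiny stand-ins for a real sentiment lexicon.

POSITIVE = {"delicious", "love", "wonderful", "fun", "great"}
NEGATIVE = {"disgusting", "burnt", "moldy", "adverts", "broke"}

def maybe_sarcastic(sentence: str) -> bool:
    """Return True when positive and negative tones co-occur in one sentence."""
    words = {w.strip(".,!?\"'").lower() for w in sentence.split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(maybe_sarcastic("This disgusting food looks delicious."))  # True
print(maybe_sarcastic("This meal looks delicious."))             # False
```

Real sarcasm detection needs far more context than a word-overlap check (as this whole thread shows), but it captures the "contrasting tones next to each other" signal in a few lines.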