This is cool, even if it seems kind of frivolous. Word embeddings work for emoji just like they do for actual words, and it's neat to see an idea for how to commercialize that directly.<p>I wish they had explained the details, such as which two-dimensional non-linear projection they're using for their map.<p>I also don't see a full explanation of how they're getting representations of sequences of emoji. They explain how their RNN handles sequences of <i>input</i> words, but the result of that is a vector that they're comparing against their emoji-embedding space. Does the emoji-embedding space contain embeddings of specific sequences as well?
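The lookup step being described (an RNN sentence vector compared against a table of emoji embeddings) can be sketched roughly like this. The emoji names, vectors, and dimensionality below are made up for illustration; this is just the nearest-neighbor-by-cosine-similarity idea, not Dango's actual model:

```python
import numpy as np

# Hypothetical emoji-embedding table: each emoji gets a vector in the
# same semantic space the RNN projects sentences into.
emoji_vecs = {
    "soccer": np.array([0.9, 0.1, 0.0]),
    "pizza":  np.array([0.1, 0.9, 0.2]),
    "fire":   np.array([0.2, 0.3, 0.9]),
}

def nearest_emoji(sentence_vec):
    """Return the emoji whose embedding has the highest cosine
    similarity with the RNN's sentence vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(emoji_vecs, key=lambda name: cos(emoji_vecs[name], sentence_vec))

print(nearest_emoji(np.array([1.0, 0.2, 0.1])))  # -> "soccer"
```

Note that this only answers single-emoji suggestions; the open question in the comment (whether multi-emoji <i>sequences</i> also have entries in that table) would change what the keys of such a table look like.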
It's also a little... racist.
If you feed it emoji, it spits out other emoji.
(I was testing if it could spit out text from emoji input)
But what happens if you change the skin tone of the emoji?<p>White arm:
<a href="http://i.imgur.com/KTNky0O.png" rel="nofollow">http://i.imgur.com/KTNky0O.png</a>
Obvious connection to sports, plus sunglasses (like saying "cool" in this context).<p>Black arm:
<a href="http://i.imgur.com/uXtSRfc.png" rel="nofollow">http://i.imgur.com/uXtSRfc.png</a>
A policeman searching for something, and a location marker (search location?).
Reading this was somewhat frustrating because my browser (Chrome on Linux) doesn't render emoji. Is there a standard emoji font I could install?
Extremely neat, but I really don't understand the point of the app (Dango) that all this engineering serves. If I'm using an emoji, it's either <i>instead</i> of words, or to clarify words that could be taken multiple ways (e.g., sarcasm).<p>Who are these people who type a sentence (with a single meaning, clear-cut enough for Dango to detect), and then want to add a redundant pictorial representation of the same words they just typed?
This is really cool. But half the fun for me is picking the emoji at the end of the message. And they "add" to the mood of my message; they don't "amplify" it. Hence this wouldn't work for me most of the time o_0 ;(
This is pretty cool. Emoji seem trivial, but they're becoming more and more important in communication (whether that is good or bad is a separate discussion), and this is a pretty impressive bit of ML.
Why not train the RNN to directly predict emoji, instead of projecting everything into semantic space and picking the closest emoji? It seems like that would help with the problem of emoji that have multiple meanings in different contexts. With this model, each emoji can occupy only a single point in semantic space.
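To make the contrast concrete, a direct classifier scores every emoji for each input vector, so the same emoji can win in several unrelated regions of input space instead of sitting at one fixed embedding point. This is a minimal sketch of that alternative; the class list, weights, and shapes are invented for illustration and are not Dango's actual architecture:

```python
import numpy as np

# Illustrative emoji vocabulary; in a real system this would be the
# full set of predictable emoji.
EMOJI = ["soccer", "pizza", "fire"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(EMOJI), 4))  # one weight row per emoji class
b = np.zeros(len(EMOJI))

def predict_emoji(sentence_vec):
    """Score every emoji directly with a softmax head over the RNN's
    sentence vector, rather than a nearest-neighbor embedding lookup."""
    logits = W @ sentence_vec + b
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return EMOJI[int(np.argmax(probs))], probs
```

The trade-off, of course, is that a pure classifier can't generalize to emoji it never saw at training time, whereas a shared semantic space can place new emoji by embedding them.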
As a lover of emoji and deep learning, this is awesome. Are you planning to support Unicode 9.0 sometime soon (I know it isn't even technically out)?