Our machine-learning algorithms for image processing and for extracting textual descriptions from images have evolved a lot lately, as have mechanisms for generating images from text. So why are the most popular image search engines like Google Images, Bing, DuckDuckGo, etc. still THAT bad at indexing the internet's images? What's the logic in that?

For example: "a duck riding a motorcycle in the rain" doesn't return even a single result with a duck riding a motorcycle. But if you search for "a duck riding a motorcycle" you do get some correct results.

There are also a lot of results with watermarks like "iStock" etc. that could easily be filtered out, or at least separated from the main search.

Sometimes it seems we have evolved decades in some areas, while other, much more basic things are left behind...
Yandex is the go-to image search at this point. It has several ducks on motorbikes (although not in the rain). You've raised technical AI questions, but I think it's as simple as "Yandex doesn't cater to Pinterest and stock-image spam," and therefore works.
Handling images and other visual entities is hard, whereas going from text (and extracting some understanding of that text) to an image is much easier than the reverse.
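One way to make the "reverse" direction concrete: joint text-image embedding models (CLIP-style) let you score arbitrary images against a free-text query, which is roughly what a semantic image search would need, and also where compositional queries like "...in the rain" tend to fall apart. Below is a minimal sketch assuming the Hugging Face `transformers` library and the public "openai/clip-vit-base-patch32" checkpoint; it's an illustration of the general technique, not how Google or Yandex actually index images, and the file names in the usage comment are hypothetical.

```python
# Sketch: rank candidate images against a text query using CLIP embeddings.
# Assumes `pip install transformers pillow torch`; not any search engine's real pipeline.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_images(query: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Return image paths sorted by similarity to the text query (highest first)."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: one similarity score per image for the single query
    scores = outputs.logits_per_image.squeeze(-1).tolist()
    return sorted(zip(image_paths, scores), key=lambda x: x[1], reverse=True)

# Hypothetical usage:
# rank_images("a duck riding a motorcycle in the rain",
#             ["duck_motorbike.jpg", "duck_pond.jpg", "motorbike.jpg"])
```

In practice a search engine would precompute and store the image embeddings rather than encode every candidate per query, but the scoring idea is the same, and the weak spot is that these embeddings often treat "duck riding a motorcycle in the rain" as a bag of concepts rather than a precise composition.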