I'm not sure anyone has focused on very small, close-range objects like this one. At the moment, the MSL team has so many well-trained eyeballs that they don't need much automation of relatively high-level tasks like this. After the initial 90 days, the team will shrink, and automation will become more important.

Detecting objects in medium- to long-range Mars rover imagery is a problem that has received a lot of attention, including systems that run *on board the rover*.

The people who have pushed this the farthest are here:

http://aegis.jpl.nasa.gov/

A recognition algorithm alone is not enough. You also need to integrate prioritization, planning (is it OK to pivot the camera to point in that direction? do we have enough room to store the data? ...), and acquisition to get a system that actually gathers quality data without sacrificing other mission objectives. The described algorithm has been run on Opportunity. (A toy sketch of that loop is at the end of this comment.)

The selling point for on-board automation in this context is light time. By the time you send the images back, analyze them, and uplink a new command sequence to take the data, you would have driven past the interesting rock. So you have to do some things on board. Limited bandwidth also plays a role. (Back-of-the-envelope numbers below.)
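For illustration, here's a minimal sketch of what such an integrated recognition/prioritization/planning/acquisition cycle might look like. To be clear, this is not AEGIS code: every name, field, and threshold below is made up, and the real constraint checking is far more involved.

    # Toy sketch of an onboard targeting cycle -- hypothetical names throughout.
    from dataclasses import dataclass

    @dataclass
    class Target:
        azimuth_deg: float   # direction relative to the rover
        size_cm: float       # apparent size, a crude science-priority proxy
        data_mb: float       # estimated storage cost of a follow-up image

    @dataclass
    class RoverState:
        free_storage_mb: float
        max_slew_deg: float  # pointing limit for the mast camera

    def autonomous_targeting_cycle(candidates, state):
        """Pick at most one follow-up target, respecting constraints."""
        # 1. Recognition happened upstream: `candidates` came from a detector.
        # 2. Prioritization: rank by a scientist-supplied criterion.
        ranked = sorted(candidates, key=lambda t: t.size_cm, reverse=True)
        for target in ranked:
            # 3. Planning: can we safely point there, and store the data?
            if abs(target.azimuth_deg) > state.max_slew_deg:
                continue
            if target.data_mb > state.free_storage_mb:
                continue
            # 4. Acquisition: commit the resources and take the measurement.
            state.free_storage_mb -= target.data_mb
            return target
        return None  # nothing worth the resources this cycle

    candidates = [Target(10.0, 35.0, 12.0), Target(95.0, 60.0, 12.0)]
    state = RoverState(free_storage_mb=20.0, max_slew_deg=90.0)
    print(autonomous_targeting_cycle(candidates, state))  # the 35 cm rock wins

The point of the sketch: the highest-priority candidate here loses because pointing at it would violate a constraint, which is exactly the kind of tradeoff a bare recognition algorithm can't make.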
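And the light-time arithmetic. The Earth-Mars distances and the rover speed below are rough public figures, and real turnaround also includes ground analysis and relay-pass scheduling, which adds hours; even so, the raw light time alone makes the point:

    # Back-of-the-envelope light-time numbers; distances and speed are rough.
    AU_KM = 149_597_871          # one astronomical unit in km
    C_KM_S = 299_792.458         # speed of light in km/s

    for label, dist_au in [("closest approach", 0.38), ("near conjunction", 2.67)]:
        one_way_min = dist_au * AU_KM / C_KM_S / 60
        round_trip_min = 2 * one_way_min
        # A rover driving at a generous 4 cm/s while we wait:
        meters_driven = 0.04 * round_trip_min * 60
        print(f"{label}: round trip {round_trip_min:.0f} min, "
              f"rover has moved ~{meters_driven:.0f} m")

That prints roughly a 6-minute round trip (~15 m driven) at closest approach and a 44-minute round trip (~107 m driven) near conjunction: either way, an interesting rock a few meters off the path is long gone before a new command sequence could arrive.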