What's cool here, besides how quickly it reacts, is that it has no universal frame of reference for its location, such as GPS.

My limited understanding of the topic is this:

For localizing, they use a particle filter, which is a method for estimating latent variables (in this case, the aircraft's position) from a series of noisy observations.

The filter maintains a model of how the aircraft moves. On each "tick", you predict where the aircraft will be a moment from now (say, based on how fast you know the motor spins and which way the rudders are tilted). Then you compare that prediction against what your sensors actually report (in this case a laser rangefinder) and update your estimate. Over time, the estimate improves.

The more traditional formulation of this is the Kalman filter, which is everywhere in classical control systems. The Kalman filter assumes the belief about the state is a single Gaussian, though, so it struggles when several distinct locations are equally plausible (say, two corners of the map that look alike to the rangefinder); a particle filter represents the belief as a cloud of weighted samples and handles that multimodal case naturally. There's a rough sketch of the predict/update loop below.

edit:

Another way to look at it: this is how robots deal with the "real world", where sensors are noisy and slightly off, actuators are unreliable and can't produce smooth, constant output, and so on. Instead of trying to model every one of these factors exactly, the filter automagically accounts for them on the fly by comparing how the robot actually behaves against how you expected it to behave.

edit2:

Corrections abound! Read replies below!
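For anyone curious what that predict/update loop looks like in code, here's a deliberately tiny 1-D sketch: a robot localizing along a hallway with a rangefinder. All the constants, the hallway setup, and the helper names are made up for illustration; the real system estimates a full 3-D pose against a prior map.

    import numpy as np

    # Toy setup (hypothetical numbers): the robot sits somewhere along a
    # hallway and its rangefinder measures distance to the far wall.
    HALL_LENGTH = 50.0   # meters
    N_PARTICLES = 1000
    MOTION_NOISE = 0.5   # std dev of actuator/motion uncertainty (m)
    SENSOR_NOISE = 0.3   # std dev of rangefinder noise (m)

    rng = np.random.default_rng(0)

    # No idea where we start: spread particles over the whole map.
    particles = rng.uniform(0.0, HALL_LENGTH, N_PARTICLES)

    def predict(particles, velocity, dt):
        # Move every particle by the commanded motion, plus noise to
        # model unreliable actuators.
        return particles + velocity * dt + rng.normal(0.0, MOTION_NOISE, particles.size)

    def update(particles, measured_range):
        # Weight each particle by how well its *expected* sensor reading
        # matches the actual measurement, then resample.
        expected_range = HALL_LENGTH - particles
        error = expected_range - measured_range
        weights = np.exp(-0.5 * (error / SENSOR_NOISE) ** 2) + 1e-12
        weights /= weights.sum()
        # Particles that explain the measurement well get duplicated;
        # poor ones die out.
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        return particles[idx]

    # One "tick": predict from the motion command, correct with the sensor.
    particles = predict(particles, velocity=2.0, dt=0.1)
    particles = update(particles, measured_range=35.2)
    print(f"estimated position: {particles.mean():.2f} m")

The nice property is that nothing here assumes the belief is a single blob: if two spots in the map would produce similar rangefinder readings, the particle cloud simply splits between them until new measurements disambiguate.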
Original story from the MIT News Office, with video: http://web.mit.edu/newsoffice/2012/autonomous-robotic-plane-flies-indoors-0810.html
The big "what stopping this from being mainstream" part of this is the need for a prior map. It's absolutely analogous to Google's autonomous vehicles needing to be manually driven through the area in which they're planning to later operate autonomously.<p>This is great work, but it's not onboard SLAM, only onboard localisation. All up, it's great to see more of the autonomous ground vehicle work becoming small and lightweight enough to go on aerial vehicles. Traditionally the low payload capacity has been a showstopper for UAVs, and laser range finders are often heavy, hence why so many UAVs have used vision-only localisation techniques.
Some relevant videos from Rockwell Collins on autonomous recovery from catastrophic flight damage:

Continuous AI flight after blowing off one wing:

http://www.youtube.com/watch?v=xN9f9ycWkOY

Continuous AI flight after catastrophic wing loss (showing the manual vs. AI difference):

http://www.youtube.com/watch?v=dGiPNV1TR5k
I am always so happy to see a navigation demonstration that doesn't rely on external positioning such as GPS. Having tried both approaches (dead reckoning/onboard sensing versus external positioning), I've come to feel that many GPS-based projects are essentially trivial GPS demos.
What happens when it hits a dead end? A helicopter could at least stop and fly back out along exactly the track it came in on.

Presumably one reason for the prebuilt map is so the plane never enters a dead end in the first place; some parking garages have them.