What's cool here, other than how quickly it seems to react, is that it has no universal conception of location like GPS.

My limited understanding of the topic is this:

For localization, they use a particle filter, which is basically a method for estimating latent variables (in this case, position) from a sequence of noisy observations.

Using the data, it builds a model of how the aircraft moves. On each "tick", you predict where the aircraft will be a moment from now (say, based on how fast you know the motor spins and which way the rudders are tilted). Then you compare that prediction against what the sensors actually report (in this case a laser rangefinder) and update your estimate. With each cycle the estimate gets better.

The more traditional formulation of this is the Kalman filter, which is everywhere in classical control systems. I think the difference is that a plain Kalman filter assumes linear motion and Gaussian noise, while a particle filter drops those assumptions at the cost of simulating lots of particles.

edit:

Another way to look at it: this is how robots deal with the "real world", where sensors are noisy and slightly off, actuators are unreliable and can't produce smooth, constant output, and so on. Instead of trying to guess all these factors up front, it automagically accounts for them on the fly by comparing how the robot actually behaves with how you expected it to behave.

edit2:

Corrections abound! Read replies below!
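
edit3:

If you want to see the predict / weight / resample loop concretely, here's a toy 1-D particle filter in Python. This is just a sketch I made up to illustrate the idea, not code from the article; the landmark position, velocity, and noise levels are all invented:

    import numpy as np

    rng = np.random.default_rng(0)

    N = 1000                               # number of particles
    particles = rng.uniform(0, 10, N)      # initial guesses of the robot's 1-D position
    weights = np.ones(N) / N

    landmark = 10.0                        # position of a wall the rangefinder sees (made up)
    true_pos = 2.0
    velocity = 0.5                         # commanded speed per tick (made up)

    def predict(particles, velocity, motion_noise=0.1):
        """Move every particle by the commanded motion plus some noise."""
        return particles + velocity + rng.normal(0, motion_noise, len(particles))

    def update(particles, weights, measurement, sensor_noise=0.2):
        """Reweight particles by how well they explain the range measurement."""
        expected = landmark - particles                    # range each particle would see
        likelihood = np.exp(-0.5 * ((measurement - expected) / sensor_noise) ** 2)
        weights = weights * likelihood + 1e-300            # avoid all-zero weights
        return weights / weights.sum()

    def resample(particles, weights):
        """Draw a fresh particle set, favoring the high-weight particles."""
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.ones(len(particles)) / len(particles)

    for t in range(20):
        true_pos += velocity
        measurement = (landmark - true_pos) + rng.normal(0, 0.2)   # noisy rangefinder reading

        particles = predict(particles, velocity)                   # "where should it be now?"
        weights = update(particles, weights, measurement)          # "which guesses match the sensor?"
        particles, weights = resample(particles, weights)

        estimate = np.average(particles, weights=weights)
        print(f"t={t:2d}  true={true_pos:5.2f}  estimate={estimate:5.2f}")

The estimate converges toward the true position even though both the motion and the sensor are noisy, which is the whole point: you never trust the motion model or the sensor alone, you keep the guesses that both agree on.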