Monte Carlo localization

The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space.[4]
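
For illustration, a minimal sketch of such an initialization for a planar robot follows; the map bounds, particle count and (x, y, heading) pose representation are illustrative assumptions rather than part of the algorithm:

```python
import math
import random

# Hypothetical map bounds (metres) and particle count; illustrative values only.
X_MIN, X_MAX = 0.0, 10.0
Y_MIN, Y_MAX = 0.0, 10.0
NUM_PARTICLES = 1000

def init_particles():
    """Spread particles uniformly over the configuration space (x, y, heading)."""
    return [
        (random.uniform(X_MIN, X_MAX),
         random.uniform(Y_MIN, Y_MAX),
         random.uniform(-math.pi, math.pi))
        for _ in range(NUM_PARTICLES)
    ]
```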

Whenever the robot moves, it shifts the particles to predict its new state after the movement.[4]

Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization.

Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next.
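
A minimal sketch of this prediction step, assuming planar (x, y, heading) poses and a simple odometry-style command (distance travelled and change of heading) corrupted by Gaussian noise; the noise standard deviations are illustrative values:

```python
import math
import random

def motion_update(particles, delta_dist, delta_theta,
                  dist_noise=0.05, theta_noise=0.02):
    """Shift every particle by the commanded motion plus random noise, producing
    many slightly different guesses of where the robot will be next."""
    moved = []
    for x, y, theta in particles:
        # Each particle receives its own noisy version of the command.
        d = delta_dist + random.gauss(0.0, dist_noise)
        t = theta + delta_theta + random.gauss(0.0, theta_noise)
        moved.append((x + d * math.cos(t), y + d * math.sin(t), t))
    return moved
```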

Typically, on startup, the robot has no information about its current pose, so the particles are uniformly distributed over the configuration space.[4]

At the end of the three iterations, most of the particles have converged on the actual position of the robot, as desired.

However, in the real world, no actuator is perfect: they may overshoot or undershoot the desired amount of motion.

When a robot tries to drive in a straight line, it inevitably curves to one side or the other due to minute differences in wheel radius.

Inevitably, the particles diverge during the motion update as a consequence.

This is expected since a robot becomes less sure of its position if it moves blindly without sensing the environment.
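
This growing uncertainty can be illustrated with a minimal 1-D sketch (the starting cluster, step length and noise level are arbitrary assumptions): applying several motion updates without any sensor update steadily widens the spread of the particle set.

```python
import random
import statistics

# Illustrative 1-D example: particles start tightly clustered around x = 5.0.
particles = [5.0 + random.gauss(0.0, 0.1) for _ in range(1000)]

for step in range(3):
    # Blind motion update: move 1.0 unit with noisy actuation and no sensing.
    particles = [x + 1.0 + random.gauss(0.0, 0.2) for x in particles]
    print(f"after step {step + 1}: spread = {statistics.stdev(particles):.3f}")
# The printed spread grows with each step: uncertainty accumulates without sensing.
```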

When the robot senses its environment, it updates its particles to more accurately reflect where it is.

This is expected since a robot becomes increasingly sure of its position as it senses its environment.
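
A minimal sketch of this weighting step, assuming a scalar range-style sensor with a Gaussian noise model; expected_measurement stands in for whatever map-based prediction the robot uses, and sigma is an illustrative noise level:

```python
import math

def sensor_update(particles, measurement, expected_measurement, sigma=0.5):
    """Weight each particle by how likely the actual sensor reading would be
    if the robot were really at that particle's pose."""
    weights = []
    for pose in particles:
        predicted = expected_measurement(pose)   # what the sensor should read there
        error = measurement - predicted
        weights.append(math.exp(-0.5 * (error / sigma) ** 2))
    total = sum(weights) or 1.0                  # guard against all-zero weights
    return [w / total for w in weights]          # normalise so the weights sum to 1
```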

The particle filter central to MCL can approximate multiple different kinds of probability distributions, since it is a non-parametric representation.

Compared with the grid-based approach, Monte Carlo localization is more accurate because the state represented in samples is not discretized.

One strategy for choosing the number of particles is to continuously generate additional particles until the next pair of command and sensor reading arrives.

This way, the greatest possible number of particles is obtained while not impeding the function of the rest of the robot.[4]

As such, the implementation is adaptive to available computational resources: the faster the processor, the more particles can be generated and therefore the more accurate the algorithm is.
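
A rough sketch of this anytime behaviour, assuming two hypothetical helpers: next_input_ready(), which reports whether the next command/measurement pair has arrived, and sample_new_particle(), which draws one more particle. The hard cap on the particle count is an added safety assumption, not part of the algorithm.

```python
def grow_particle_set(particles, next_input_ready, sample_new_particle,
                      max_particles=100_000):
    """Keep generating particles until the next (command, measurement) pair
    arrives; a faster processor therefore yields a larger particle set."""
    while not next_input_ready() and len(particles) < max_particles:
        particles.append(sample_new_particle())
    return particles
```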

Compared to grid-based Markov localization, Monte Carlo localization has reduced memory usage, since memory usage depends only on the number of particles and does not scale with the size of the map,[2] and it can integrate measurements at a much higher frequency.[4]

The algorithm can be improved using KLD sampling, as described below, which adapts the number of particles to use based on how sure the robot is of its position.[2]

A drawback of the naive implementation of Monte Carlo localization occurs in a scenario where a robot sits at one spot and repeatedly senses the environment without moving: the particles quickly converge to a single state, and once particles elsewhere have died out during resampling, the filter cannot recover if that state turns out to be wrong (for example, after the robot is kidnapped to a new position).

One way to mitigate this issue is to randomly add extra particles on every iteration.[4]

This is equivalent to assuming that, at any point in time, the robot has some small probability of being kidnapped to a random position in the map, thus causing a fraction of random states in the motion model.[4]
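
A minimal sketch of this mitigation, assuming a hypothetical random_pose() helper that draws a pose uniformly over the map; the 1% replacement fraction is an illustrative choice:

```python
import random

def inject_random_particles(particles, random_pose, fraction=0.01):
    """Replace a small fraction of the particles with uniformly random poses,
    modelling a small probability that the robot has been 'kidnapped'."""
    n_random = max(1, int(fraction * len(particles)))
    for i in random.sample(range(len(particles)), n_random):
        particles[i] = random_pose()
    return particles
```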

The original Monte Carlo localization algorithm is fairly simple.

Several variants of the algorithm have been proposed, which address its shortcomings or adapt it to be more effective in certain situations.

Monte Carlo localization may be improved by sampling the particles in an adaptive manner based on an error estimate using the Kullback–Leibler divergence (KLD).

Initially, a large number of particles is required, due to the need to cover the entire map with a uniformly random distribution of particles.

However, when the particles have converged around the same location, maintaining such a large sample size is computationally wasteful.

KLD–sampling is a variant of Monte Carlo localization where, at each iteration, a sample size is calculated such that, with probability 1 − δ, the error between the true posterior and the sample-based approximation is less than ε.[6]

The main idea is to create a grid (a histogram) overlaid on the state space.[4]

In practice, KLD–sampling consistently outperforms and converges faster than classic MCL.
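
A rough sketch of the two ingredients just described: counting the occupied bins of the histogram, and the sample-size bound used in KLD–sampling (based on the Wilson–Hilferty approximation of the chi-squared quantile). The bin size, the error bound ε = 0.05 and the confidence parameter δ = 0.01 are illustrative choices, and a full implementation recomputes the bound on the fly while drawing new particles:

```python
import math

def occupied_bins(particles, bin_size=0.5):
    """Overlay a grid (a histogram) on the state space and count its non-empty
    bins; each particle is a tuple of coordinates."""
    bins = {tuple(int(c // bin_size) for c in pose) for pose in particles}
    return len(bins)

def kld_sample_size(k, epsilon=0.05, z_quantile=2.326):
    """Number of particles needed so that, with probability 1 - delta, the KL
    divergence between the sampled and true distributions stays below epsilon;
    k is the number of occupied bins and z_quantile is the standard normal
    upper quantile for 1 - delta (2.326 corresponds to delta = 0.01)."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z_quantile) ** 3))

# A converged belief occupies few bins and needs few particles; a spread-out
# belief occupies many bins and needs far more.
print(kld_sample_size(10), kld_sample_size(1000))
```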

The algorithm initializes with a uniform distribution of particles. The robot considers itself equally likely to be at any point in space along the corridor, even though it is physically at the first door.
Sensor update: the robot detects a door. It assigns a weight to each of the particles. The particles which are likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of the three doors.
Motion update: the robot moves some distance to the right. All particles also move right, and some noise is applied. The robot is physically between the second and third doors.
Sensor update: the robot detects no door. It assigns a weight to each of the particles. The particles likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. It now believes it is at one of two locations.
Motion update: the robot moves some distance to the left. All particles also move left, and some noise is applied. The robot is physically at the second door.
Sensor update: the robot detects a door. It assigns a weight to each of the particles. The particles likely to give this sensor reading receive a higher weight.
Resampling: the robot generates a set of new particles, with most of them generated around the previous particles with more weight. The robot has successfully localized itself.
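
The corridor walkthrough above can be condensed into a short end-to-end sketch in which each mcl_step folds one motion update, sensor update and resampling pass together; the corridor length, door positions, detector tolerance, particle count, noise level and the simplistic 0.9/0.1 measurement model are all illustrative assumptions:

```python
import random
import statistics

DOORS = [2.0, 4.0, 8.0]     # assumed door positions along a 10 m corridor
N = 500                     # illustrative particle count

def sees_door(x, tol=0.5):
    """True if position x is close enough to a door to trigger the detector."""
    return any(abs(x - d) < tol for d in DOORS)

def likelihood(x, door_detected):
    """Toy measurement model: particles consistent with the reading get a high
    weight, inconsistent ones a small but non-zero weight."""
    return 0.9 if sees_door(x) == door_detected else 0.1

def mcl_step(particles, move, door_detected):
    # Motion update: shift every particle by the commanded move plus noise.
    particles = [x + move + random.gauss(0.0, 0.1) for x in particles]
    # Sensor update: weight each particle by the measurement likelihood.
    weights = [likelihood(x, door_detected) for x in particles]
    # Resampling: draw a new set, favouring the highly weighted particles.
    return random.choices(particles, weights=weights, k=len(particles))

# Initialization: uniform over the corridor (the robot is actually at x = 2.0).
particles = [random.uniform(0.0, 10.0) for _ in range(N)]

particles = mcl_step(particles, move=0.0, door_detected=True)    # at the first door
particles = mcl_step(particles, move=3.0, door_detected=False)   # now between the second and third doors
particles = mcl_step(particles, move=-1.0, door_detected=True)   # now at the second door

print(statistics.median(particles))   # most particles should be near x = 4.0
```

With these illustrative numbers, the surviving particles cluster around the second door, so the printed median comes out near x = 4.0, mirroring the successful localization at the end of the walkthrough.
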
Belief after moving several steps for a 2D robot using a typical motion model without sensing.