Solution Sheet 5 - Autonomous Systems Lab - ETH Zürich


Dr. Francis Colas
Institute of Robotics and Intelligent Systems
Autonomous Systems Lab
ETH Zürich
CLA E 26
Tannenstraße 3
8092 Zürich, Switzerland
fcolas@mavt.ethz.ch
www.asl.ethz.ch

Information Processing in Robotics
Solution Sheet 5
Topic: Sampling Methods

Exercise 1: Particle Filters and Kalman Filters

(a) The main assumptions of a Kalman filter are:
• first-order Markov assumption,
• linear transition model,
• linear observation model,
• Gaussian distributions.
For the particle filter we keep the Markov assumption, but there is no restriction on the distributions or on the transition and observation models. However, for particle filters the inference is only approximate, whereas it is exact for the Kalman filter.

(b) Let $M$ be the dimensionality of the state space. A Kalman filter represents the state distribution with just its mean and covariance matrix, as the distribution is Gaussian. The size of the representation is therefore $O(M^2)$. The particle filter, on the other hand, approximates the state with a set of $N$ weighted particles, each representing a sample from this distribution. The size is therefore $O(N \times M)$. In most cases, the number $N$ of particles is going to be much greater than the dimension of the state space. One notable exception is FastSLAM, where the state space consists of the pose of the robot together with the map of the environment (a huge state space) while the number of particles is typically around 100.

(c) The update complexity of the Kalman filter is dominated by $M \times M$ matrix inversions and multiplications: $O(M^3)$.
For a particle filter, there are three steps (sketched in the code below):
• sampling from the proposal $p(x_t \mid u_t, x_{t-1}^{[m]})$ for each particle: for a linear Gaussian model this is $O(M^2)$ per particle,
• computing the sample weight $p(z_t \mid x_t^{[m]})$ for each particle: for a linear Gaussian model this is $O(M^2)$ per particle,
• resampling: $O(N \log N)$.
For a complete iteration, the complexity of a particle filter with linear Gaussian hypotheses is therefore $O(N \times M^2 + N \log N)$. In general, it depends on the complexity of both the transition and observation models.
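To make the three steps concrete, here is a minimal sketch of one particle-filter iteration for a hypothetical linear Gaussian model; the function name and the matrices A, B, C, Qchol, R are illustrative assumptions, not part of the exercise. Particles are stored as the columns of an M x N matrix X.

function X = pf_iteration(X, u, z, A, B, C, Qchol, R)
    [M, N] = size(X);
    % 1) Sample from the proposal x_t = A*x_{t-1} + B*u + noise,
    %    with noise covariance Qchol*Qchol': O(M^2) per particle.
    X = A*X + B*u + Qchol*randn(M, N);
    % 2) Weight by the observation likelihood N(z; C*x, R),
    %    up to a constant factor: O(M^2) per particle.
    innov = z - C*X;                          % innovation, one column per particle
    W = exp(-0.5 * sum(innov .* (R\innov), 1));
    W = W / sum(W);                           % normalize the weights
    % 3) Resample with replacement according to W: O(N log N).
    idx = randsample(N, N, true, W);
    X = X(:, idx);
end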


Figure 1: Initial distribution of the particles for global (left) and local (right) localization.

Exercise 2: Particle Filters for Localization

(a) Figure 1 shows the initial distributions for global (left) and local (right) localization. In global localization, no prior knowledge about the robot position is available, therefore the particles are distributed uniformly over the free space. In local localization, we already have a first estimate of the robot position, therefore the particles are distributed around that position, with a higher density at the center (e.g. a Gaussian distribution). A sketch of both initializations follows.
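As an illustration of the two initializations, here is a short sketch; it assumes the map is given as an occupancy grid `map` (free cells equal to 0) with a resolution `res` in meters per cell, and all names and numeric values are illustrative.

N = 1000;                          % number of particles
% Global localization: sample uniformly over the free space.
[iy, ix] = find(map == 0);         % row/column indices of the free cells
k = randi(numel(ix), N, 1);        % draw N free cells at random
P_global = [ix(k), iy(k)] * res;   % particle positions in meters
% Local localization: sample around the initial estimate x0.
x0 = [5, 3]; sigma = 0.5;          % illustrative estimate and spread
P_local = x0 + sigma * randn(N, 2);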


Figure 2: Final distribution of the particles after global (left) and local (right) localization.

(b) Figure 2 shows the final distribution of the particles. For global localization, there are two particle clusters, because the environment is symmetric and the robot cannot disambiguate its position. For local localization, the start position is better known and can be tracked to the goal location: no ambiguity occurs.

(c) The major problem for Monte-Carlo localization is a symmetric environment like this one. A possible solution is to automatically steer the robot into areas where distinguishable features can be found. In this particular case, we could try to drive the robot into one of the open hallways until it finds, say, a door that is not present in the other half of the hallway. This is called active localization, because the robot performs an action in order to be able to localize. Another possibility is to use other sensors that can distinguish the environment better (e.g. a camera).

Exercise 3: Playing with a Particle Filter

(a) $p(X_0)$, $p(Y_0)$, and $p(X_0, Y_0)$ are all Gaussian distributions and the covariance matrix is diagonal, which means that the two coordinates are independent. As a consequence, we can sample each coordinate separately.
In MATLAB, we can use P0 = randn(M, 2) to sample M such particles (one per row).
To plot them, we can use scatter(P0(:,1), P0(:,2)).

(b) The proposal is
$$\mathcal{N}\!\left(\begin{pmatrix} X_t^{[m]} \\ Y_t^{[m]} \end{pmatrix} \,\middle|\, \begin{pmatrix} 2 + X_{t-1}^{[m]} \\ Y_{t-1}^{[m]} \end{pmatrix}, \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}\right).$$
Therefore we do (note that the standard deviation is sqrt(0.5), since the variance is 0.5):
for m = 1:M
    P1(m,:) = P0(m,:) + [2 0] + sqrt(0.5)*randn(1, 2);
end

(c) The weight is the value of the bivariate Gaussian. As the covariance is diagonal, it reduces to a product of univariate Gaussians: W = normpdf(P1(:,1), 0, 1) .* normpdf(P1(:,2), 0, 1).
For resampling, we can simply use idx = randsample(M, M, true, W) and keep P1(idx,:).
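For reference, the three parts can be combined into a single vectorized script, equivalent to the loop above; the value of M is illustrative.

M = 1000;                                   % number of particles
P0 = randn(M, 2);                           % (a) sample from the standard bivariate Gaussian
scatter(P0(:,1), P0(:,2));                  % plot the initial particles
P1 = P0 + [2 0] + sqrt(0.5)*randn(M, 2);    % (b) sample from the proposal
W = normpdf(P1(:,1), 0, 1) .* normpdf(P1(:,2), 0, 1);   % (c) weights
idx = randsample(M, M, true, W);            % (c) resample with replacement
P1 = P1(idx, :);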
