
Robotics - Localization & Bayesian Filtering - AIRLab - Politecnico di ...


Robotics - Localization & Bayesian Filtering
Simone Ceriani
ceriani@elet.polimi.it
Dipartimento di Elettronica e Informazione
Politecnico di Milano
10 May 2012


Outline
1 Introduction
2 Taxonomy
3 Probability Recall
4 Bayes Rule
5 Bayesian Filtering
6 Markov Localization




Mobile Robot Localization - Introduction
The problem
- Determining the pose of the robot
- Relative to a given map of the environment
- a.k.a. position estimation
Notes
- It is an instance of the general localization problem
- e.g., localizing objects in the workspace of a manipulator


Localization - The problem
Localization Input
- Known map in a reference system W
- Perception of the environment
- Motion of the robot
Localization Goal
- Determine the robot position w.r.t. the map, i.e., the relative transformation T_WR
- Problem of coordinate transformation: T_WR allows expressing object positions (the map) in a local frame (w.r.t. the robot)
[Figure: p_i^(W): the map; p_1^(R): robot perception; T_WR: localization]


Localization - Issues
Direct pose sensing
- Usually impossible
- Noise corruption
Pose estimation
- Inferred from data
- Usually a single sensor measurement is insufficient
- The robot needs to integrate information over time
- e.g., a map with two identical corridors
Map
- Various representations are possible, according to the problem
- Key concept: localization needs a precise map


Localization - Graphical model
Graphical model of mobile robot localization
- x_t: robot pose
- m: the map
- z_t: measurements
- u_t: inputs (e.g., speed of wheels, ...)
- Values of shaded nodes are known


Localization - Maps
- Hand-made metric map
- Occupancy grid map
- Graph-like topological map
- Image mosaic of ceiling
Images from Probabilistic Robotics - S. Thrun, W. Burgard, D. Fox




Taxonomy - Local vs Global - 1
Local vs Global localization
- Depends on the information available initially and at run-time
- Three types of localization problems, with increasing degree of difficulty
1. Pose Tracking
- Assumes the initial position is known
- Localization achieved by accommodating the noise in robot motion
- Pose uncertainty often approximated by a unimodal distribution
- It is a local problem: uncertainty is confined near the robot's true pose
2. Global Localization
- Initial pose unknown
- The robot knows that it does not know where it is
- Approaches cannot assume a bound on the (initial) pose error
- It is not a local problem: the estimate could be very far from the true pose
- More complicated than pose tracking


Taxonomy - Local vs Global - 2
3. Kidnapped robot problem
- Variant of the global localization problem
- The robot can get kidnapped and teleported to some other location
- The robot might believe it knows where it is while it does not
- Even more difficult
- Robots are not really kidnapped in practice
- Practical importance: recovering from failures in localization
In these lessons
- Markov Localization: general framework
- Pose Tracking: Extended Kalman Filtering
- Global Localization: Particle Filtering / Monte Carlo approaches


Taxonomy - Static vs Dynamic Environment
Static Environment
- Only the robot pose changes during operation
- The environment (the map) is static
Dynamic Environment
- The environment changes over time
- Environment changes affect sensor measurements
- Changes can be temporary or permanent, e.g., doors, furniture, walking people, daylight
- Localization is more difficult than in a static environment


Taxonomy - Passive vs Active Approaches
Passive approach
- The localization module observes the robot operating
- Robot motion is not aimed at facilitating localization
Active approach
- The localization module controls the robot so as to
  - minimize the localization error
  - avoid hazardous movements of a poorly localized robot
- e.g., coastal navigation, symmetric corridors
- Trade-off: localization performance vs ability to perform operations


Taxonomy - Single vs Multirobot
Single Robot Localization
- Most commonly studied
- All data is collected on the robot, no communication issues
Multirobot approach
- Arises in teams of robots
- Could be treated as n single-robot localization problems
- If robots are able to detect each other, there is an opportunity to do better




Uncertainty in Robotics & Probabilistic Robotics
Robotics systems
- Situated in the physical world
- Perceive information through sensors
- Manipulate through physical forces
- Have to accommodate the uncertainties that exist in the physical world
Factors that contribute to a robot's uncertainty
- Real environments are inherently unpredictable
- Sensors are limited in range and resolution, and subject to noise
- Actuation involves motors; uncertainty arises from control noise, wear and tear
- Mathematical models are approximations of real phenomena
Level of uncertainty
- Depends on the application domain
- In well-known environments, like assembly lines, it can be bounded
- In the open world it plays a key role
- Managing uncertainty is a key step towards robust real-world robot systems


Discrete Random Variables
- X: a random variable
  - e.g., consider a die-rolling experiment; X is the variable representing the outcome
- Pr(X = x) represents the probability that X has value x
- x: a specific value that X might assume (on a discrete set)
  - e.g., X assumes one of {1, 2, 3, 4, 5, 6}
  - e.g., Pr(X = 1) = Pr(X = 2) = ... = Pr(X = 6) = 1/6
Properties - 1
- ∑_x Pr(X = x) = 1: discrete probabilities sum to 1
- Pr(X = x) ≥ 0: probability is non-negative
  - e.g., Pr(X = 0) = Pr(X = 7) = 0, impossible event
- Pr(X = x) ≤ 1: probability is bounded by 1
  - e.g., Pr(X = 1 or 2 or 3 or 4 or 5 or 6) = 1, sure event
- Common abbreviation: Pr(x) instead of Pr(X = x)
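The die example and the three properties can be checked numerically; a minimal sketch (the `pmf` dictionary is just one way to represent the distribution):

```python
from fractions import Fraction

# Fair die: each of the six outcomes has probability 1/6
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# Property 1: discrete probabilities sum to 1
assert sum(pmf.values()) == 1

# Properties 2 and 3: every probability is non-negative and bounded by 1
assert all(0 <= p <= 1 for p in pmf.values())

# Impossible events: outcomes outside {1, ..., 6} have probability 0
assert pmf.get(0, Fraction(0)) == 0 and pmf.get(7, Fraction(0)) == 0
```

Using `Fraction` keeps the arithmetic exact, so the sum-to-one check holds with equality rather than up to floating-point error.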


Discrete Random Variables - Properties
Properties - 2
- Consider two events A and B
  - e.g., A is "die outcome is 2 or 3", B is "die outcome is even"
- Pr(A ∨ B) = Pr(A) + Pr(B) − Pr(A ∧ B)
  - e.g., A ∨ B is "die outcome is 2 or 3 or 4 or 6", Pr(A ∨ B) = 2/3 = 1/3 + 1/2 − 1/6
- If A ∧ B = ∅, then Pr(A ∨ B) = Pr(A) + Pr(B)
- Pr(¬A) = 1 − Pr(A)
  - Pr(A ∨ ¬A) = Pr(A) + Pr(¬A) − Pr(A ∧ ¬A) = 1
Remarks
- Relative frequencies (i.e., outcomes of experiments) are an alternative (not rigorous) way of introducing the concept of probability.


Continuous Random Variables
- Allow addressing continuous spaces
- Possess a Probability Density Function (p.d.f.) f_X(x), x ∈ R
- Pr(X = x) = 0, even though the event is not impossible
- The integral of the p.d.f. is the Cumulative Distribution Function (c.d.f.)
  - F_X(x) = Pr(X ≤ x) = ∫_{−∞}^{x} f_X(t) dt
  - lim_{x→∞} F_X(x) = Pr(X ≤ ∞) = 1
  - Pr(x ∈ (a, b)) = ∫_a^b f_X(t) dt = F_X(b) − F_X(a)


Uniformly Distributed Continuous Random Variable
- Uniformly distributed in [a, b]
- p.d.f.: f(x) = 1/(b−a) for x ∈ [a, b], 0 elsewhere
- c.d.f.: F_X(x) = 0 for x < a; (x−a)/(b−a) for x ∈ [a, b]; 1 for x > b
[Figures: p.d.f. and c.d.f. plots]


Normal Distributed Random Variable
- a.k.a. Gaussian Random Variable, N(µ, σ²)
- mean: µ, variance: σ²
- p.d.f.: f(x) = (2πσ²)^(−1/2) exp{−(1/2) (x−µ)²/σ²}
- c.d.f.: no explicit formula exists; tabulated as G(η), the c.d.f. of N(0, 1)
  - F(x) = G((x−µ)/σ)
  - (x−µ)/σ: number of σ away from the mean
[Figures: p.d.f. and c.d.f. plots]
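In practice the tabulated G(η) is evaluated through the error function; a small sketch of the standardization F(x) = G((x−µ)/σ) (the function name is illustrative):

```python
import math

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """c.d.f. of N(mu, sigma^2), evaluated via the error function."""
    # Standardize: (x - mu)/sigma is the number of sigmas away from the mean
    eta = (x - mu) / sigma
    # G(eta), the c.d.f. of the standard normal N(0, 1)
    return 0.5 * (1.0 + math.erf(eta / math.sqrt(2.0)))

# The c.d.f. at the mean is 0.5, and F(x) -> 1 as x grows
assert abs(gaussian_cdf(5.0, mu=5.0, sigma=2.0) - 0.5) < 1e-12
assert gaussian_cdf(10.0) > 0.999999
```

The last two checks mirror the slide: F(µ) = G(0) = 0.5, and lim F(x) = 1.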


Multivariate Distributions
Multivariate
- x is a vector, N(µ, Σ), with
  - µ: n × 1, mean vector
  - Σ: n × n, covariance matrix
- f(x) = ((2π)ⁿ det(Σ))^(−1/2) exp{−(1/2) (x−µ)ᵀ Σ⁻¹ (x−µ)}
- µ = [µ_1, µ_2, ..., µ_n]ᵀ, Σ = [Σ_ij], i, j = 1, ..., n
- Diagonal elements are the variances of the single components
- Off-diagonal elements are the covariances between components
- Σ is symmetric and positive definite (non-singular)
- If there is a linear relation among components, Σ is positive semi-definite (singular)
[Figures: p.d.f. and c.d.f. for n = 2]


Mean, Variance, Moments
Discrete case
- Expected value: E[X] = ∑_x x p(x) = x̄
- n-th moment: E[Xⁿ] = ∑_x xⁿ p(x)
- Variance: VAR(X) = E[(X − x̄)²] = ∑_x (x − x̄)² p(x) = σ² = E[X²] − E[X]²
- a.k.a. second central moment
Continuous case
- Expected value: E[X] = ∫ x f(x) dx = x̄
- n-th moment: E[Xⁿ] = ∫ xⁿ f(x) dx
- Variance: VAR(X) = E[(X − x̄)²] = ∫ (x − x̄)² f(x) dx = σ² = E[X²] − E[X]²
- a.k.a. second central moment
Properties
- Consider Y = aX + b, with X a random variable and a, b scalar quantities
- E[Y] = aE[X] + b
- VAR(Y) = a² VAR(X)
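The discrete formulas and the linear-transform properties can be verified on the fair-die distribution; a sketch with exact arithmetic (E[X] = 7/2 and VAR(X) = 35/12 for a fair die):

```python
from fractions import Fraction

# Fair die distribution
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

E   = sum(x * p for x, p in pmf.items())       # expected value, x-bar
E2  = sum(x**2 * p for x, p in pmf.items())    # second moment E[X^2]
VAR = E2 - E**2                                # VAR(X) = E[X^2] - E[X]^2

assert E == Fraction(7, 2)
assert VAR == Fraction(35, 12)

# Y = aX + b: E[Y] = aE[X] + b and VAR(Y) = a^2 VAR(X)
a, b = 2, 5
EY  = sum((a * x + b) * p for x, p in pmf.items())
EY2 = sum((a * x + b)**2 * p for x, p in pmf.items())
assert EY == a * E + b
assert EY2 - EY**2 == a**2 * VAR
```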


Joint Probability, Independence, Conditioning
Joint Probability
- Consider two random variables X and Y, or a random vector Z = [X, Y]ᵀ
- Pr(X = x ∧ Y = y) = Pr(x, y) = Pr(z), with z = [x, y]ᵀ
Independence
- X and Y are independent if and only if Pr(x, y) = Pr(x) Pr(y)
- or, with p.d.f.s, f_{XY}(x, y) = f_X(x) f_Y(y)
Conditioning
- Pr(x|y) is the probability of x given y
- Pr(x, y) = Pr(x|y) Pr(y)
- Pr(x|y) = Pr(x) if X and Y are independent
- If X and Y are independent, Y tells us nothing about the value of X: there is no advantage in knowing the value of Y if we are interested in X


Conditional Independence
- X and Y are conditionally independent given Z if Pr(x, y|z) = Pr(x|z) Pr(y|z)
- This is equivalent to Pr(x, y|z) = Pr(x, y, z)/Pr(z) = Pr(x|y, z) Pr(y, z)/Pr(z) = Pr(x|y, z) Pr(y|z)
- Thus, X and Y are conditionally independent given Z if Pr(x|z) = Pr(x|y, z)
- i.e., knowledge of y does not add any information about x if z is known


Total probability, marginals
- Total probability: ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} f_{X_1...X_n}(x_1, ..., x_n) dx_1 ... dx_n = 1
- Marginal distribution:
  - ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = f_Y(y)
  - ∫_{−∞}^{∞} f_{X,Y}(x, y) dy = f_X(x)
- Marginal distribution with conditioning:
  - ∫_{−∞}^{∞} f_{Y|X}(y|x) f_X(x) dx = f_Y(y)
  - ∫_{−∞}^{∞} f_{X|Y}(x|y) f_Y(y) dy = f_X(x)
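In the discrete case the integrals become sums over a joint table; a sketch with an invented toy joint distribution (the numbers are illustrative, not from the slides):

```python
from fractions import Fraction

# A toy joint distribution Pr(x, y) over X in {0, 1}, Y in {0, 1}
joint = {(0, 0): Fraction(1, 8), (0, 1): Fraction(3, 8),
         (1, 0): Fraction(3, 8), (1, 1): Fraction(1, 8)}

# Total probability: the joint sums to 1
assert sum(joint.values()) == 1

# Marginals: sum the joint over the other variable
f_x = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
f_y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in (0, 1)}
assert f_x[0] == f_x[1] == Fraction(1, 2)
assert f_y[0] == f_y[1] == Fraction(1, 2)
```

Note that here both marginals are uniform even though X and Y are not independent (Pr(0, 0) = 1/8 ≠ f_x[0] · f_y[0] = 1/4).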




Bayes Formula
From conditioning
- Pr(x, y) = Pr(x|y) Pr(y)
- Pr(x, y) = Pr(y|x) Pr(x)
Bayes formula
- Pr(x|y) = Pr(y|x) Pr(x) / Pr(y) = Pr(y|x) Pr(x) / ∫_{−∞}^{∞} f_{Y|X}(y|x′) f_X(x′) dx′
- Pr(x) is the prior, the belief about x
- y is the data, e.g., a sensor measurement
- Pr(y|x) is the likelihood, i.e., how probable it is to measure y in state x
- Pr(x|y) is the posterior, i.e., the belief about state x given the measurement y
- Bayes formula allows inferring a quantity x from data y through inverse probability, i.e., through the probability of data y assuming that the state is x


Bayes Example - 1
Problem
- A robot "observes" a door


Bayes Example - 2
Problem
- A robot "observes" a door
- The door can be open or closed
- The sensor measures a distance, classified as far or near
- The probability that the door is open is 0.4
- The probability that the sensor measures far when the door is open is 0.8
- The probability that the sensor measures far when the door is closed is 0.1
- What is the probability that the door is open if the sensor measurement is near?
- What is the probability that the door is open if the sensor measurement is far?
- What is the probability that the door is closed if the sensor measurement is near?
- What is the probability that the door is closed if the sensor measurement is far?


Bayes Example - 3
Variable definition
- X: door state, {open, close}
- Y: sensor measure, {far, near}
p.d.f.
- Pr(X=open) = 0.4, Pr(X=close) = 0.6
- Pr(Y=far|X=open) = 0.8, Pr(Y=near|X=open) = 0.2
- Pr(Y=far|X=close) = 0.1, Pr(Y=near|X=close) = 0.9
Solution
- Pr(X=open|Y=near) = Pr(near|open) Pr(open) / (Pr(near|open) Pr(open) + Pr(near|close) Pr(close)) = 0.2·0.4 / (0.2·0.4 + 0.9·0.6) = 0.13
- Pr(X=open|Y=far) = Pr(far|open) Pr(open) / (Pr(far|open) Pr(open) + Pr(far|close) Pr(close)) = 0.8·0.4 / (0.8·0.4 + 0.1·0.6) = 0.84
- Pr(X=close|Y=near) = Pr(near|close) Pr(close) / Pr(near) = 0.9·0.6 / 0.62 = 0.87
- Pr(X=close|Y=far) = Pr(far|close) Pr(close) / Pr(far) = 0.1·0.6 / 0.38 = 0.16
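The four posteriors follow directly from Bayes formula with the total probability in the denominator; a sketch that reproduces them (the lookup-table representation is illustrative):

```python
# Prior and likelihoods from the slide
prior = {'open': 0.4, 'close': 0.6}
lik = {('far', 'open'): 0.8, ('near', 'open'): 0.2,
       ('far', 'close'): 0.1, ('near', 'close'): 0.9}

def door_posterior(x, y):
    """Pr(X = x | Y = y) via Bayes formula, normalized by Pr(Y = y)."""
    evidence = sum(lik[(y, s)] * prior[s] for s in prior)   # Pr(Y = y)
    return lik[(y, x)] * prior[x] / evidence

assert round(door_posterior('open', 'near'), 2) == 0.13
assert round(door_posterior('open', 'far'), 2) == 0.84
assert round(door_posterior('close', 'near'), 2) == 0.87
assert round(door_posterior('close', 'far'), 2) == 0.16
```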


Bayes Recursive Updating - 1
New measurements
- Suppose that we get a first measurement: Y_1 = far
- Thus, Pr(X=open|Y_1=far) = 0.84 and Pr(X=close|Y_1=far) = 0.16
- A second measurement Y_2 arrives: it is far
- Pr(X=open|Y_1=far, Y_2=far)?
- More generally, how do we estimate Pr(X=open|Y_1=y_1, Y_2=y_2, ..., Y_n=y_n)?
Extend the Bayes rule
- Pr(X|Y_1, Y_2) = Pr(Y_2|X, Y_1) Pr(X|Y_1) / Pr(Y_2|Y_1) = Pr(Y_2|X, Y_1) Pr(X|Y_1) / ∫_{−∞}^{∞} f_{Y_2|Y_1,X}(Y_2|Y_1, x′) f_{X|Y_1}(x′|Y_1) dx′
- Pr(X|Y_1, ..., Y_n) = Pr(Y_n|X, Y_1, ..., Y_{n−1}) Pr(X|Y_1, ..., Y_{n−1}) / Pr(Y_n|Y_1, ..., Y_{n−1}) = ...


Bayes Recursive Updating - 2
Markov Assumption
- Markov assumption: Y_2 is independent of Y_1 if we know X
- Then Pr(Y_2|X, Y_1) = Pr(Y_2|X) (see the Conditional Independence formulas)
Bayes rule
- Pr(X|Y_1, Y_2) = Pr(Y_2|X, Y_1) Pr(X|Y_1) / Pr(Y_2|Y_1) = Pr(Y_2|X) Pr(X|Y_1) / Pr(Y_2|Y_1)
- Pr(X|Y_1, ..., Y_n) = Pr(Y_n|X, Y_1, ..., Y_{n−1}) Pr(X|Y_1, ..., Y_{n−1}) / Pr(Y_n|Y_1, ..., Y_{n−1}) = Pr(Y_n|X) Pr(X|Y_1, ..., Y_{n−1}) / Pr(Y_n|Y_1, ..., Y_{n−1})
with
- Pr(Y_2|Y_1) = ∫_{−∞}^{∞} f_{Y_2|Y_1,X}(Y_2|Y_1, x′) f_{X|Y_1}(x′|Y_1) dx′ = ∫_{−∞}^{∞} f_{Y_2|X}(Y_2|x′) f_{X|Y_1}(x′|Y_1) dx′
- similar with n measurements
- note: use ∑ in the discrete case


Bayes Recursive Updating - 3
The example
- Pr(X=open|Y_1=far) = 0.84 and Pr(X=close|Y_1=far) = 0.16; Y_2 = far
- Pr(Y_2|Y_1) = Pr(Y_2|Y_1, open) Pr(open|Y_1) + Pr(Y_2|Y_1, close) Pr(close|Y_1)
  = Pr(Y_2|open) Pr(open|Y_1) + Pr(Y_2|close) Pr(close|Y_1)
  = Pr(far|open) Pr(open|far) + Pr(far|close) Pr(close|far)
- Pr(far|far) = 0.8 · 0.84 + 0.1 · 0.16 = 0.688
- Pr(X=open|Y_1=far, Y_2=far) = Pr(far|open) Pr(open|far) / Pr(far|far) = 0.8 · 0.84 / 0.688 = 0.977
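Under the Markov assumption the recursion is just "the old posterior becomes the new prior"; a sketch reusing the slide's numbers (function name illustrative):

```python
# Likelihoods p(y|x) from the door example
lik = {('far', 'open'): 0.8, ('near', 'open'): 0.2,
       ('far', 'close'): 0.1, ('near', 'close'): 0.9}

def bayes_update(belief, y):
    """One recursive Bayes step: the previous posterior acts as the prior."""
    unnorm = {x: lik[(y, x)] * belief[x] for x in belief}
    eta = 1.0 / sum(unnorm.values())     # 1 / Pr(Y_n | Y_1, ..., Y_{n-1})
    return {x: eta * p for x, p in unnorm.items()}

belief = {'open': 0.4, 'close': 0.6}     # prior before any measurement
belief = bayes_update(belief, 'far')     # after Y_1 = far
assert round(belief['open'], 2) == 0.84
belief = bayes_update(belief, 'far')     # after Y_2 = far
assert round(belief['open'], 3) == 0.977
```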




Robot environment interaction - 1
The environment
- a.k.a. World
- Generally it is a dynamic system:
  - the robot can act on it
  - it changes as time passes by
A robot
- Can act on the environment, i.e., change the environment state
- Can sense the environment through sensors
- Has an internal belief on the state
State
- Collection of all aspects of the robot and the environment
- Generally changes over time; some parts could be static
- We will refer to it as x_t


Robot environment interaction - 2
Control Actions
- Change the state of the world (robot and/or environment)
- u_{t_1:t_n} = u_{t_1}, u_{t_2}, ..., u_{t_n}
Measurements
- Information about the environment (distances, images, ...)
- z_{t_1:t_n} = z_{t_1}, z_{t_2}, ..., z_{t_n}
Complete State and Markov Chain
- x_t will be called complete if it is the best predictor of the future
- All past states, measurements and inputs carry no additional information to predict the future more accurately
- No variables prior to x_t may influence the stochastic evolution of future states
- This is a Markov Chain


Evolution of state - 1
Evolution of state
- x_t is stochastically generated by x_{t−1}
- The p.d.f. of x_t is conditioned on past states, inputs and measurements: p(x_t|x_{0:t−1}, z_{1:t−1}, u_{1:t})
- Under the Markov Chain hypothesis (or Complete State), thanks to conditional independence: p(x_t|x_{0:t−1}, z_{1:t−1}, u_{1:t}) = p(x_t|x_{t−1}, u_t)
- p(x_t|x_{t−1}, u_t) is the state transition probability
- State evolution is stochastic, not deterministic (i.e., it is a p.d.f.)
Measurement process
- p(z_t|x_{0:t}, z_{1:t−1}, u_{1:t}) = p(z_t|x_t) under Complete State
- The state x_t is sufficient to predict the (potentially noisy) measurement z_t
- p(z_t|x_t) is the measurement probability
- Specifies the probabilistic law according to which measurements are generated


Evolution of state - 2
Evolution of state and measurements
- Describes the dynamical stochastic system of the robot and its environment
- a.k.a. Hidden Markov Model (HMM) or Dynamic Bayes Network (DBN)


Belief Distributions
Belief
- Reflects the robot's internal knowledge about the state
- Usually the state cannot be measured directly
- The state needs to be inferred from data
Posterior Belief distribution
- Conditional probability
- It is a posterior probability, conditioned on the available data
- bel(x_t) = p(x_t|z_{1:t}, u_{1:t})
Prior Belief distribution
- Conditional probability
- It is a prior probability, conditioned on the available data before incorporating the measurement z_t
- bel⁻(x_t) = p(x_t|z_{1:t−1}, u_{1:t})
- Often referred to as a prediction


Bayes Filter Algorithm
The Algorithm
- Calculates the belief bel(x_t)
- Recursive: uses bel(x_{t−1}) as input
- Uses the most recent measurement z_t and input u_t
Step 1
- Calculate the prior belief bel⁻(x_t)
- Integral (sum, in the discrete case) of the product of
  - the prior belief on x_{t−1}
  - the probability of the state evolution
- bel⁻(x_t) = ∫ p(x_t|u_t, x_{t−1}) bel(x_{t−1}) dx_{t−1}
- It is a prediction
Step 2
- Calculate the posterior belief bel(x_t)
- Product of
  - the prior belief bel⁻(x_t)
  - the measurement probability
- bel(x_t) = η p(z_t|x_t) bel⁻(x_t)
- η: normalization factor, such that ∑_{x_t} bel(x_t) = 1
- It is a measurement update
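For a discrete state space the two steps translate into a few lines; a minimal sketch (the function names and the `trans`/`meas` lookup-table representation are illustrative):

```python
def predict(bel, u, trans):
    """Step 1: bel_bar(x_t) = sum over x_{t-1} of p(x_t|u_t, x_{t-1}) bel(x_{t-1})."""
    return {xt: sum(trans[(xt, u, xp)] * bel[xp] for xp in bel) for xt in bel}

def update(bel_bar, z, meas):
    """Step 2: bel(x_t) = eta * p(z_t|x_t) * bel_bar(x_t)."""
    unnorm = {x: meas[(z, x)] * bel_bar[x] for x in bel_bar}
    eta = 1.0 / sum(unnorm.values())   # normalization so the belief sums to 1
    return {x: eta * p for x, p in unnorm.items()}
```

One filter iteration is `update(predict(bel, u, trans), z, meas)`: prediction folds in the input u_t, the measurement update folds in z_t.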


Bayes Filter Algorithm Example - 1
The Robot And The Door - v2


Bayes Filter Algorithm Example - 2
The door
- Can be open or closed
- The initial state is unknown
The robot
- Can act (stochastically) on the door:
  - push: try to open the door
  - nop: no operation
- Senses (noisily) the door state:
  - near: read by the sensor when the door is closed
  - far: read by the sensor when the door is open
Initial Belief
- bel(x_0 = open) = 0.5
- bel(x_0 = close) = 0.5
Measurement Probability
- p(z_t = far|x_t = open) = 0.6
- p(z_t = near|x_t = open) = 0.4
- p(z_t = far|x_t = close) = 0.2
- p(z_t = near|x_t = close) = 0.8


Bayes Filter Algorithm Example - 3
State Transition Probability
- p(x_t = open|u_t = push, x_{t−1} = open) = 1
- p(x_t = close|u_t = push, x_{t−1} = open) = 0
- p(x_t = open|u_t = nop, x_{t−1} = open) = 1
- p(x_t = close|u_t = nop, x_{t−1} = open) = 0
- p(x_t = open|u_t = nop, x_{t−1} = close) = 0
- p(x_t = close|u_t = nop, x_{t−1} = close) = 1
- p(x_t = open|u_t = push, x_{t−1} = close) = 0.8
- p(x_t = close|u_t = push, x_{t−1} = close) = 0.2
[Figure: state-transition diagrams for the push and nop actions]


Bayes Filter Algorithm Example - 4
Time t = 1, act
- u_1 = nop, no operation performed
- bel⁻(x_1) = ∑_{x_0} p(x_1|u_1, x_0) bel(x_0), prediction
- bel⁻(x_1 = open) = p(x_1 = open|u_1 = nop, x_0 = open) bel(x_0 = open) + p(x_1 = open|u_1 = nop, x_0 = close) bel(x_0 = close) = 1 · 0.5 + 0 · 0.5 = 0.5
- bel⁻(x_1 = close) = p(x_1 = close|u_1 = nop, x_0 = open) bel(x_0 = open) + p(x_1 = close|u_1 = nop, x_0 = close) bel(x_0 = close) = 0 · 0.5 + 1 · 0.5 = 0.5


Bayes Filter Algorithm Example - 5
Time t = 1, sense
- z_1 = far, sense an open door
- bel(x_1) = η p(z_1 = far|x_1) bel⁻(x_1), measurement update
- bel(x_1 = open) = η p(z_1 = far|x_1 = open) bel⁻(x_1 = open) = η 0.6 · 0.5 = η 0.3
- bel(x_1 = close) = η p(z_1 = far|x_1 = close) bel⁻(x_1 = close) = η 0.2 · 0.5 = η 0.1
- bel(x_1 = open) + bel(x_1 = close) = 1, so η 0.3 + η 0.1 = 1 and η = (0.3 + 0.1)⁻¹ = 2.5
- bel(x_1 = open) = 0.75, bel(x_1 = close) = 0.25


Bayes Filter Algorithm Example - 6
Time t = 2, act
- u_2 = push, perform the push action
- bel⁻(x_2) = ∑_{x_1} p(x_2|u_2, x_1) bel(x_1), prediction
- bel⁻(x_2 = open) = p(x_2 = open|u_2 = push, x_1 = open) bel(x_1 = open) + p(x_2 = open|u_2 = push, x_1 = close) bel(x_1 = close) = 1 · 0.75 + 0.8 · 0.25 = 0.95
- bel⁻(x_2 = close) = p(x_2 = close|u_2 = push, x_1 = open) bel(x_1 = open) + p(x_2 = close|u_2 = push, x_1 = close) bel(x_1 = close) = 0 · 0.75 + 0.2 · 0.25 = 0.05


Bayes Filter Algorithm Example - 7
Time t = 2, sense
- z_2 = far, sense an open door
- bel(x_2) = η p(z_2 = far|x_2) bel⁻(x_2), measurement update
- bel(x_2 = open) = η p(z_2 = far|x_2 = open) bel⁻(x_2 = open) = η 0.6 · 0.95 = η 0.57
- bel(x_2 = close) = η p(z_2 = far|x_2 = close) bel⁻(x_2 = close) = η 0.2 · 0.05 = η 0.01
- bel(x_2 = open) + bel(x_2 = close) = 1, so η = (0.57 + 0.01)⁻¹ = 1.724
- bel(x_2 = open) = 0.983, bel(x_2 = close) = 0.017
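The whole two-step run above can be reproduced end-to-end; a sketch using the transition and measurement tables of Examples 2 and 3 (the table representation is illustrative):

```python
states = ('open', 'close')

# p(x_t | u_t, x_{t-1}) from Example 3, keyed as (x_t, u_t, x_{t-1})
trans = {('open', 'push', 'open'): 1.0, ('close', 'push', 'open'): 0.0,
         ('open', 'nop',  'open'): 1.0, ('close', 'nop',  'open'): 0.0,
         ('open', 'nop',  'close'): 0.0, ('close', 'nop',  'close'): 1.0,
         ('open', 'push', 'close'): 0.8, ('close', 'push', 'close'): 0.2}

# p(z_t | x_t) from Example 2
meas = {('far', 'open'): 0.6, ('near', 'open'): 0.4,
        ('far', 'close'): 0.2, ('near', 'close'): 0.8}

bel = {'open': 0.5, 'close': 0.5}                    # unknown initial state

for u, z in (('nop', 'far'), ('push', 'far')):       # t = 1, 2
    bel_bar = {xt: sum(trans[(xt, u, xp)] * bel[xp] for xp in states)
               for xt in states}                     # prediction
    unnorm = {x: meas[(z, x)] * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    bel = {x: eta * p for x, p in unnorm.items()}    # measurement update

assert round(bel['open'], 3) == 0.983
assert round(bel['close'], 3) == 0.017
```

After t = 1 the loop produces bel(open) = 0.75 as in Example 5; the final assertions match Example 7.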




Markov Localization
Markov Localization Algorithm
- The state x_t is the robot pose
- Requires also a map m as input
- The map m plays a key role in the measurement model p(z_t|x_t, m)
  - measures the relative pose w.r.t. the map
- It can be integrated in the motion model (the prediction step) p(x_t|u_t, x_{t−1}, m)
  - avoids predicting impossible movements, like through a wall


Markov Localization - Initial belief
Pose Tracking - case 1
- The initial pose is known: x̄_0
- The initial belief is bel(x_0) = 1 if x_0 = x̄_0, 0 otherwise
Pose Tracking - case 2
- The initial pose is known with some uncertainty, e.g., Gaussian: x_0 ~ N(x̄_0, Σ_0)
- The initial belief is bel(x_0) = ((2π)ⁿ det(Σ_0))^(−1/2) exp{−(1/2) (x_0 − x̄_0)ᵀ Σ_0⁻¹ (x_0 − x̄_0)}
Global Localization
- The initial pose is unknown: uniform distribution over all the map
- The initial belief is bel(x_0) = 1/|X|, where |X| is the volume of the pose space


Markov Localization - Illustration - 1
Environment setup
- A one-dimensional hallway
- Three indistinguishable doors (remember that we know the map)
- The robot senses (noisily) a door presence
- The robot knows its direction and the relative motion performed in a time step
- The state x_t is the robot position x


Markov Localization - Illustration - 2
Initial Belief
- The initial position is unknown: bel(x_0) is uniformly distributed


Markov Localization - Illustration - 3
Sense
- The robot senses a door presence
- bel(x_1) is higher at door locations


Markov Localization - Illustration - 4
Motion model - prediction step
- The robot knows its movement (u, the control variable)
- bel⁻(x_2) is shifted as a result of the motion
- bel⁻(x_2) is flattened as a result of the uncertainty on the motion


Markov Localization - Illustration - 5
Sense
- The robot senses a door presence
- bel(x_2) is focused on the correct pose


Markov Localization - Illustration - 6
Motion model - prediction step
- bel⁻(x_3) is focused on the correct pose
- bel⁻(x_3) is flattened
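The hallway illustration can be mimicked on a coarse 1-D grid; a hedged sketch (the cell count, door positions, motion step and noise values are all invented for illustration, not taken from the slides):

```python
N = 20                    # discretized hallway cells (illustrative)
doors = {3, 8, 14}        # known map: door cells (illustrative)

def normalize(b):
    s = sum(b)
    return [p / s for p in b]

def sense_door(bel):
    """Measurement update: a door reading is more likely at door cells."""
    return normalize([p * (0.8 if i in doors else 0.1) for i, p in enumerate(bel)])

def move_right(bel):
    """Prediction: nominally shift 5 cells right, with under/overshoot noise."""
    new = [0.0] * N
    for i, p in enumerate(bel):
        for step, q in ((4, 0.1), (5, 0.8), (6, 0.1)):   # motion noise
            new[min(i + step, N - 1)] += p * q           # clamp at the wall
    return new

bel = [1.0 / N] * N       # global localization: uniform initial belief
bel = sense_door(bel)     # peaks appear at the three doors
bel = move_right(bel)     # belief shifts right and flattens
bel = sense_door(bel)     # a second, consistent door reading focuses the belief
```

With these invented numbers the doors at cells 3 and 8 are 5 cells apart, so only the hypothesis that started at the first door stays consistent with both readings and the belief concentrates near cell 8, as in the slides' illustration.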


References
- "Probabilistic Robotics", S. Thrun, W. Burgard, D. Fox (Intelligent Robotics and Autonomous Agents series), The MIT Press (2005). Chapters 2, 3, 7.
