Preliminaries

For a robot to successfully navigate and interact with its environment, it must know where it is. The process of determining its location is known as localization, which is essential for tasks ranging from moving across a room to autonomously driving through a city. Without accurate localization, a robot cannot plan paths, avoid obstacles, map its environment, or reach goals reliably.

Storing Localization Data

In robotics, we store a robot's localization data as a "pose". A pose describes the robot's location and orientation in its environment. In a 2D environment, a pose is usually written as (x, y, \(\theta\)), where x is the robot's position along the x-axis, y its position along the y-axis, and \(\theta\) its orientation (heading).
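As a minimal sketch, a 2D pose maps naturally to a small data structure. The `Pose2D` name and its fields are illustrative, not a standard API:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose2D:
    """A robot's 2D pose: position (x, y) plus heading theta in radians."""
    x: float
    y: float
    theta: float

# A robot 1 m right and 2 m up from the origin, facing along +y (90 degrees).
pose = Pose2D(x=1.0, y=2.0, theta=math.pi / 2)
```

Storing \(\theta\) in radians keeps the pose compatible with trigonometric functions used later in motion models.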

Odometry

Odometry refers to the set of sensors used to estimate the robot's current position, i.e., to localize it. Common examples include IMUs (inertial measurement units, which combine gyroscopes and accelerometers to track a robot's orientation and acceleration), dead wheels (unpowered wheels that stay in contact with the ground purely to measure rotation), and wheel encoders (devices that track the rotation of a drive wheel). Odometry may also include vision-based sensors such as cameras (which capture images), LiDAR (which measures distances from the robot to its surroundings), and sonar (similar to LiDAR but using sound, making it well suited to underwater use).

Markov Assumption and Markov Chains

Markov Chains are a type of stochastic process, systems that evolve randomly over time. Since these systems are probabilistic, we can use probability theory to analyze them. One example of a stochastic process is the movement of gas molecules in a chamber.

[Figure: Gas in a chamber as a stochastic process]

Many stochastic processes are very complex, or even impossible, to analyze. However, Markov Chains are a special subset that are particularly simple to work with. This is because Markov Chains follow a property known as the Markov Assumption: the future state of a system only depends on its present state. Think about it like this: If you know what the weather is today, you don’t need the last month of weather to guess what it will be tomorrow. Today’s weather already captures everything relevant for one-step prediction. That’s the Markov assumption in action.

[Figure: Weather visualization for a Markov Chain]

Mathematically, the Markov property has two analogous forms: one for continuous state spaces and one for discrete state spaces.

\(\text{Continuous: } \mathbb{P}(X_t \in A | X_{t-1}, X_{t-2},...) = \mathbb{P}(X_t \in A | X_{t-1})\)
\(\text{Discrete: } \mathbb{P}(X_t = j | X_{t-1} = i, X_{t-2},...) = \mathbb{P}(X_t = j | X_{t-1} = i)\)

If you are unfamiliar, \(P(X_i,X_j)\) is equivalent to \(P(X_i \cap X_j)\). The Markov Chain is an especially important element of Bayes Filtering, which we will see later.
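The weather example above can be sketched as a tiny discrete Markov chain. The transition probabilities below are invented for illustration; the key point is that each step samples the next state from the current state alone:

```python
import random

# P(next state | current state) -- only today's weather matters (Markov assumption).
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Sample tomorrow's weather given only today's state."""
    states = list(transitions[state])
    weights = [transitions[state][s] for s in states]
    return random.choices(states, weights=weights, k=1)[0]

random.seed(0)
chain = ["sunny"]
for _ in range(5):
    chain.append(step(chain[-1]))  # each step depends only on the previous entry
print(chain)
```

Notice that `step` never looks at the chain's history, only its last element: that is the Markov assumption expressed in code.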

Motion Model

Motion models estimate the robot's next pose by combining its previous pose with the motion measured by odometry sensors over a time step. The general equation for a motion model is:

\(X_{t} = f(X_{t-1}, U_{t-1})\)

where:

  • \(f\) is a function that maps the previous pose and control input to the robot's next pose.

  • \(X\) is the position (or pose) of the robot.

  • \(U\) is the control input (velocity, force, acceleration, etc.)

  • \(t\) is time (which may be continuous or discrete). \(t\) refers to the present while \(t-1\) refers to the past.

You may also encounter the motion model written as \(X_{t+1} = f(X_{t}, U_{t})\). Both are valid: the \(X_{t+1}\) form is forecasting (predicting the future state), while the \(X_t\) form is filtering (estimating the present state).

Motion model example

One example of a motion model is capturing the movement of a cannonball.

[Figure: The parabolic trajectory of a cannonball]

In this example, we can build either a one-dimensional motion model focused on a single axis, or a two-dimensional motion model over the (x, y) plane. We will focus on a discrete, one-dimensional analysis along the x-axis.

Assuming a vacuum, we model the cannonball according to the motion model \(X_t = f(X_{t-1}, U_{t-1})\). We begin by drawing inspiration from the kinematic equations:

\(x_t = x_{t-1} + v_{t-1} \Delta t + \frac{1}{2} a \Delta t^2\)
\(v_t = v_{t-1} + a \Delta t\)
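These two update equations can be sketched as a single discrete step function. This is a minimal illustration: the 10 m/s launch speed and 1 s time step are made-up values, and along the x-axis in a vacuum the acceleration is zero:

```python
def motion_model(x_prev, v_prev, a, dt):
    """One discrete step of the 1D kinematic motion model X_t = f(X_{t-1}, U_{t-1})."""
    x = x_prev + v_prev * dt + 0.5 * a * dt**2   # position update
    v = v_prev + a * dt                          # velocity update
    return x, v

# Horizontal axis in a vacuum: no acceleration, so velocity stays constant.
x, v = 0.0, 10.0  # start at the origin with a 10 m/s horizontal speed (illustrative)
for _ in range(3):
    x, v = motion_model(x, v, a=0.0, dt=1.0)
print(x, v)  # after 3 s: x = 30.0, v = 10.0
```

A vertical-axis model would use the same function with `a = -9.81`, which is how the full two-dimensional trajectory is assembled from one-dimensional pieces.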

Common limitations and errors in localization

Perfect localization is extremely difficult to achieve in practice. Small inaccuracies from sensors such as IMUs, wheel encoders, and other odometry sources accumulate over time, leading to significant errors in the estimated position. Because of this, most localization systems adopt a probabilistic approach, where the goal is not to determine the exact position, but rather the most likely position.

One of the most common issues is drift, the gradual accumulation of error as the robot moves. For example, even if a wheel encoder is off by a tiny fraction of a rotation, after many movements this error compounds, causing the robot’s estimated location to diverge from its true position. Similarly, IMUs suffer from bias and noise that build up over time, further contributing to drift.
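A toy simulation can make drift concrete. The step length, bias, and noise magnitudes below are invented for illustration; the point is that a tiny per-step encoder bias compounds into a large position error:

```python
import random

random.seed(42)

true_x = 0.0   # where the robot actually is
est_x = 0.0    # where the encoder-based estimate thinks it is
step = 0.10    # the robot really moves 10 cm per step
bias = 0.001   # the encoder systematically over-reads by 1 mm per step (illustrative)

for _ in range(1000):
    true_x += step
    est_x += step + bias + random.gauss(0, 0.002)  # bias plus zero-mean noise

print(abs(est_x - true_x))  # roughly 1 m of drift after 1000 steps
```

The zero-mean noise partially averages out, but the systematic bias never does, which is why uncorrected dead reckoning diverges over time.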

While drift is a limitation of many basic localization methods, more advanced techniques such as sensor fusion and particle filters provide ways to reduce or correct these errors, allowing robots to maintain accurate position estimates over long periods.