
LiDAR

What is LiDAR?

LiDAR (Light Detection and Ranging) sensors operate by rotating and emitting laser beams, often referred to as LiDAR rays, to scan and map their surrounding environment. This rotational movement allows the sensor to cover a wide Field of View (FoV), capturing detailed spatial information essential for applications like autonomous robotics, navigation, and environmental mapping. LiDAR operates on two basic principles:

  1. Rays of light are emitted from the LiDAR sensor.
  2. The rays reflect off objects and return to the LiDAR; the distance is then computed with the Time of Flight equation.

Time Of Flight Equation:

\(d = \frac{ct}{2}\)
  • \(d\) is the calculated distance.
  • \(c\) is the speed of light.
  • \(t\) is the time the LiDAR ray took to return.
  • The factor of \(2\) accounts for the round trip: the ray travels to the object and back.
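The Time of Flight equation can be sketched in a few lines of Python. This is a minimal illustration, not a real driver; the function name and the example return time are made up for demonstration:

```python
# Minimal sketch of the Time of Flight equation: d = c * t / 2.
# `t` is the measured round-trip time of the laser pulse, in seconds.

C = 299_792_458  # speed of light in m/s

def tof_distance(t: float) -> float:
    """Distance to the object given the round-trip time t (seconds)."""
    return C * t / 2

# A pulse that returns after ~66.7 nanoseconds hit an object ~10 m away.
print(tof_distance(6.671e-8))  # ≈ 10 m
```

Note how small the times involved are: at the speed of light, even a 10-meter round trip takes only tens of nanoseconds, which is why LiDAR hardware needs very precise timing.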
Basic LiDAR principles

While the distance is important, we must also consider the separation angle, as the angle between our LiDAR rays plays a role in determining how accurate our readings will be.

Which of the following expresses the correct Time of Flight equation for LiDAR? (try not to scroll up and look at the answer!)
Answer explanation
\(d = \frac{ct}{2}\)

The intuition for this equation comes from the fact that LiDARs work by bouncing a light ray off a wall so that the robot can detect it. The total distance the light travels is given by \(d' = ct\). Since the light goes to the wall and back, the actual distance to the wall is half of this value: \(d = \frac{d'}{2}\). Substituting \(d' = ct\) gives us \(d = \frac{ct}{2}\), and thus answer A.

Angle Separation:

\( \theta = \frac{\text{Field of View (FOV)}}{\text{Number of scans per rotation}} \)

Angle Separation is the angle between fired LiDAR-rays. It is a critical parameter that influences the sensor's ability to accurately detect and distinguish between objects within its field of view (FoV).

An analogy we can use to describe the angle separation of a LiDAR is the blurriness of an image. The image below displays what our readings would look like with a large angle of separation (blurry image) versus a small angle of separation (clear image).

Low vs. high angle separation: a higher angle of separation leads to decreased resolution
What is the angle separation for a 2D LiDAR with a \(270°\) FOV and \(1080\) scans per rotation?
Answer explanation
\( \theta = \frac{\text{Field of View (FOV)}}{\text{Number of scans per rotation}} \)

Considering that the FOV is \(270°\) and \(1080\) scans are done per rotation, we get the following equation: \( \theta = \frac {270^\circ}{1080} = 0.25^\circ \text{ per scan} \). This implies a high angular resolution, as the LiDAR scans with fine angular distinctions, enhancing detail and accuracy.
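The calculation above is simple enough to capture in a one-line helper. A minimal sketch (the function name is illustrative):

```python
# Sketch: angular separation between consecutive LiDAR rays,
# theta = FOV / (number of scans per rotation).

def angle_separation(fov_deg: float, scans_per_rotation: int) -> float:
    """Angle in degrees between adjacent rays for a given field of view."""
    return fov_deg / scans_per_rotation

# The worked example: 270 degree FOV, 1080 scans per rotation.
print(angle_separation(270, 1080))  # 0.25 degrees per scan
```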

Understanding the Scan Angle (\(\theta'\)):

The Scan Angle \(\theta'\) represents the specific angle at which a particular laser pulse is emitted relative to a reference direction (usually the LiDAR's forward-facing direction).

Warning: The Scan Angle \(\theta'\) is different from the Angle Separation \(\theta\) previously discussed.

Calculating Endpoint Coordinates: Each laser pulse travels outward from the LiDAR sensor, reflects off an object, and returns to the sensor. The distance \(d\) measured is used to calculate the position of the object in polar coordinates \((r, \theta)\) and then converted to Cartesian coordinates \((x, y)\) for mapping purposes. The conversion from polar to Cartesian coordinates is given by:

Cartesian Coordinate Conversion:

\(( x, y) = (d \times \cos \theta, d \times \sin \theta )\)

Where:

  • \(d\) is the measured distance between the object and the LiDAR (the polar radius \(r\)).
  • \(\theta\) is the angle at which the LiDAR ray was emitted.
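The conversion above can be sketched directly with Python's standard `math` module. This is an illustrative helper, with angles taken in degrees for readability:

```python
import math

# Sketch: convert a single LiDAR return from polar (d, theta)
# to Cartesian (x, y) in the sensor's local frame.

def polar_to_cartesian(d: float, theta_deg: float) -> tuple[float, float]:
    """(x, y) = (d * cos(theta), d * sin(theta)), theta given in degrees."""
    theta = math.radians(theta_deg)
    return d * math.cos(theta), d * math.sin(theta)

# A return straight ahead of the sensor at 2 m lands on the x-axis.
x, y = polar_to_cartesian(2.0, 0.0)  # (2.0, 0.0)
```

In practice a full scan is just this conversion applied to every (distance, angle) pair the sensor reports.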

Summary: We convert from polar coordinates \((r, \theta)\) to Cartesian coordinates \((x, y)\) because LiDAR sensors natively report their measurements in polar coordinates, while mapping and localization algorithms typically work with Cartesian positions. Converting each return gives us the \((x, y)\) location of the point where the beam was reflected.

Why do we convert LiDAR data from polar coordinates \((r, \theta)\) to Cartesian coordinates \((x, y)\)?

Advanced Topic: Transforming to World Coordinates:

Once we obtain Cartesian coordinates \((x, y)\) relative to the LiDAR sensor, we must further transform them into the world frame if the sensor is mounted on a moving robot. This transformation considers the robot's global position \((x_0, y_0)\) and its orientation \(\phi\) (yaw angle).

World Coordinate Conversion:

\( x_{\text{world}} = x_0 + d \times \cos(\theta + \phi) \)
\( y_{\text{world}} = y_0 + d \times \sin(\theta + \phi) \)

Where:

  • \((x_0, y_0)\) is the robot's position in the global coordinate frame.
  • \(\phi\) is the robot's yaw (orientation).
  • \(d\) is the measured distance (radius).
  • \(\theta\) is the scan angle from the LiDAR sensor.

By applying this transformation, we correctly place detected objects in the global reference frame, which is crucial for accurate mapping and localization.
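The two world-coordinate equations above translate directly into code. A minimal sketch, assuming the robot pose \((x_0, y_0, \phi)\) and the return \((d, \theta)\) are given, with all angles in radians (the function name is illustrative):

```python
import math

# Sketch: place a LiDAR return (d, theta) into the world frame,
# given the robot's global pose (x0, y0) and yaw phi. Angles in radians.

def to_world(x0: float, y0: float, phi: float,
             d: float, theta: float) -> tuple[float, float]:
    """x_world = x0 + d*cos(theta + phi), y_world = y0 + d*sin(theta + phi)."""
    xw = x0 + d * math.cos(theta + phi)
    yw = y0 + d * math.sin(theta + phi)
    return xw, yw

# Robot at (1, 2) facing +y (phi = 90 degrees); a return straight ahead
# (theta = 0) at 3 m lands 3 m along +y from the robot: (1, 5).
xw, yw = to_world(1.0, 2.0, math.pi / 2, 3.0, 0.0)
```

Note that adding \(\phi\) to \(\theta\) is what rotates the measurement out of the sensor frame; adding \((x_0, y_0)\) then translates it to the robot's global position.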

Linear Algebra Representation of Positions:

To transform the object's position to the world frame we can apply a translation utilizing the robot's position \((x_0,y_0)\) and a rotation (robot's heading angle \(\phi\)):

\(V_{\text{world}} = T \cdot V_{\text{local}} + P_{\text{robot}}\)

Where:

\(T\) is the rotation matrix:

\(T = \begin{bmatrix} \cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix} \)

\(P\) is the global position of the robot:

\(P_{\text{robot}} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} \)

Combining them, we obtain the final world coordinates of the detected point:

\( \begin{bmatrix} x_{\text{world}} \\ y_{\text{world}} \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + \begin{bmatrix} \cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix} \begin{bmatrix} d \cos(\theta) \\ d \sin(\theta) \end{bmatrix} \)

Note: The reason we can apply \(T \cdot V_{\text{local}}\) without altering distances from the robot is that \(T\) is a rotation matrix: it is orthogonal, so it preserves lengths and angles. Its determinant confirms that it introduces no scaling or reflection:

\(\det(T) = \cos(\phi)\cos(\phi) - \left(-\sin(\phi)\right)\sin(\phi) = \cos^2(\phi) + \sin^2(\phi) = 1\)

Thus the transformation preserves distances: rotating a scan point about the robot changes its direction but not how far away it is.
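As a quick sanity check on this claim, here is a minimal Python sketch (names are illustrative) that applies \(T \cdot V_{\text{local}} + P_{\text{robot}}\) by writing the \(2 \times 2\) product out by hand, then confirms that the rotation leaves the point's distance from the robot unchanged:

```python
import math

# Sketch: V_world = T . V_local + P_robot, with the 2x2 rotation
# written out explicitly. Angles in radians.

def rotate_translate(v_local: tuple[float, float],
                     p_robot: tuple[float, float],
                     phi: float) -> tuple[float, float]:
    x, y = v_local
    x0, y0 = p_robot
    # T . V_local: rotate the local point by the robot's yaw phi.
    xr = math.cos(phi) * x - math.sin(phi) * y
    yr = math.sin(phi) * x + math.cos(phi) * y
    # + P_robot: translate into the world frame.
    return x0 + xr, y0 + yr

# Rotation preserves length: |T . v| == |v|, since det(T) = 1 and T is orthogonal.
v = (3.0, 4.0)                       # |v| = 5
xw, yw = rotate_translate(v, (0.0, 0.0), math.pi / 4)
print(math.hypot(xw, yw))            # still ≈ 5.0
```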

If a LiDAR detects an object at a distance \(d = 5\) meters and angle \(θ = 30°\), what are the Cartesian coordinates \((x, y) \) of this object in the LiDAR's local frame?
Answer explanation
\(( x, y) = (d \times \cos \theta, d \times \sin \theta )\)

Cartesian coordinates are calculated using the above equation. Solving \((5 \times \cos(30°), 5 \times \sin(30°))\), we arrive at the answer \((4.33, 2.5)\), which is B.

If you want to learn how LiDAR is used to produce an OGM map, move onto the next module!