LiDAR
LiDAR (Light Detection and Ranging) sensors operate by rotating and emitting laser beams, often referred to as LiDAR-rays, to scan and map their surrounding environment. This rotational movement allows the sensor to cover a wide Field of View (FoV), capturing detailed spatial information essential for applications like autonomous robotics, navigation, and environmental mapping. LiDAR operates on these basic principles:
- Rays of light are emitted from the LiDAR sensor.
- The rays reflect off objects and return to the LiDAR; the distance travelled is measured with the Time of Flight equation.
Time of Flight Equation: \( d = \frac{ct}{2} \)
Where:
- \(d\) is the distance calculated.
- \(c\) is the speed of light constant.
- \(t\) is the time the LiDAR ray took to return.
- \(2\) accounts for the round trip of the laser: the ray travels to the object and back.
While the distance is important, we must also consider something called the separation angle, as the angle between our LiDAR rays plays a role in determining how accurate our readings will be.
The intuition for this equation comes from the fact that LiDARs work by bouncing a light ray off a wall so that the robot can detect it. The total distance the light travels is given by \(d' = ct\). Since the light goes to the wall and back, the actual distance to the wall is half of this value: \(d = \frac{d'}{2}\). Substituting \(d' = ct\) gives us \(d = \frac{ct}{2}\).
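To make the calculation concrete, here is a minimal Python sketch of the time-of-flight computation (the function name and the example return time are illustrative assumptions):

```python
C = 299_792_458  # speed of light in m/s

def tof_distance(t: float) -> float:
    """Distance to the reflecting object given the round-trip time t (seconds)."""
    return C * t / 2  # divide by 2: the pulse travels out and back

# Example: a pulse returning after ~66.7 nanoseconds corresponds to an object ~10 m away.
print(tof_distance(66.7e-9))  # ≈ 10.0
```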
Angle Separation:
Angle Separation is the angle between fired LiDAR-rays. It is a critical parameter that influences the sensor's ability to accurately detect and distinguish between objects within its field of view (FoV).
An analogy we can use to describe the angle separation of a LiDAR is the blurriness of an image. The image below displays what our readings would look like if the angle of separation were large (blurry image) versus small (clear image).
Considering that the FoV is \(270^\circ\) and \(1080\) scans are done per rotation, we get the following equation: \( \theta = \frac{270^\circ}{1080} = 0.25^\circ \text{ per scan} \). This implies a high angular resolution, as the LiDAR scans with fine angular distinctions, enhancing detail and accuracy.
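This calculation is simple enough to express directly in code; here is a small Python sketch (the function and parameter names are illustrative):

```python
def angle_separation(fov_deg: float, num_scans: int) -> float:
    """Angle separation (degrees) between consecutive LiDAR rays."""
    return fov_deg / num_scans

print(angle_separation(270.0, 1080))  # 0.25 degrees per scan
```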
Understanding the Scan Angle (\(\theta'\)):
The Scan Angle \(\theta'\) represents the specific angle at which a particular laser pulse is emitted, relative to a reference direction (usually the LiDAR's forward-facing direction).
Warning: The Scan Angle \(\theta'\) is different from the Angle Separation \(\theta\) previously discussed.
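As a rough sketch of how a scan angle is obtained in practice, each beam's angle can be computed from the start of the sweep plus its index times the angle separation. The constants below are assumptions for illustration (a \(270^\circ\) FoV centred on the robot's forward direction):

```python
ANGLE_MIN_DEG = -135.0   # assumed start of the sweep
ANGLE_INC_DEG = 0.25     # angle separation computed earlier

def scan_angle(i: int) -> float:
    """Scan angle (degrees) at which beam index i is emitted."""
    return ANGLE_MIN_DEG + i * ANGLE_INC_DEG

print(scan_angle(0))     # -135.0  (first beam)
print(scan_angle(540))   # 0.0     (beam pointing straight ahead)
print(scan_angle(1079))  # 134.75  (last beam)
```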
Calculating Endpoint Coordinates: Each laser pulse travels outward from the LiDAR sensor, reflects off an object, and returns to the sensor. The distance \(r\) measured is used to calculate the position of the object in polar coordinates \((r, \theta)\) and then converted to Cartesian coordinates \((x, y)\) for mapping purposes. The conversion from polar to Cartesian coordinates is given by:
Cartesian Coordinate Conversion: \( x = r\cos(\theta), \quad y = r\sin(\theta) \)
Where:
- \(r\) is the measured distance between the object and the LiDAR (radius).
- \(\theta\) is the angle at which the LiDAR-ray was emitted.
Summary: The reason we are converting from Polar Coordinates \((r, \theta)\) to Cartesian Coordinates \((x,y)\) is because LiDAR sensors typically operate in polar coordinates while maps are typically created in Cartesian coordinates. Therefore, to label the positions of objects where beams are reflected, we must convert the LiDAR rays' coordinates from Polar to Cartesian coordinates.
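A minimal Python sketch of this conversion (the function name is an illustrative assumption):

```python
import math

def polar_to_cartesian(r: float, theta_deg: float) -> tuple[float, float]:
    """Convert one LiDAR reading from polar (r, theta) to Cartesian (x, y)."""
    theta = math.radians(theta_deg)
    return r * math.cos(theta), r * math.sin(theta)

print(polar_to_cartesian(5.0, 30.0))  # ≈ (4.33, 2.5)
```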
Transforming Coordinates through Frames:
We have seen how to obtain the Cartesian coordinates of the measurements from the LiDAR. However, these coordinates are from the perspective of our LiDAR and do not tell us where we should expect these points to be in the fixed perspective of the map. To take into account the inconsistency between different perspectives, we must use frames and transformations between them.
A frame is a perspective that has its own origin, orientation, and coordinate axes. In our context, this refers to things like the robot frame and the global frame (which we use when creating a map). If our robot frame is not identical to our global frame, the same environment can look completely different from each perspective.
For example, in the image above, the robot perceives an object forward and to the right. However, when creating a map in the global perspective, we wouldn't say that there is an object forward and to the right in the global frame. We would take into account the difference in perspective and make an adjusted statement accordingly. In this scenario, the correct description for the global frame would be that there is an object forward and to the left of the origin.
The process of adjusting coordinates or angles for different frames is called a frame transformation and can be computed algebraically or using matrix transformations. In a 2D space, a frame transformation would require the robot's global position \((x_0, y_0)\) and its orientation \(\phi\) (yaw angle).
Frame Transformation from Robot to World Frame with LiDAR data:
Algebraic Representation of Positions:
\( x_{\text{global}} = x_0 + d\cos(\phi + \theta) \)
\( y_{\text{global}} = y_0 + d\sin(\phi + \theta) \)
Where:
- \((x_0, y_0)\) is the robot's position in the global coordinate frame.
- \(\phi\) is the robot's yaw (orientation).
- \(d\) is the measured distance from the LiDAR sensor.
- \(\theta\) is the scan angle from the LiDAR sensor.
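Applying these definitions, here is a small Python sketch of the algebraic transformation (the function name and the example pose are assumptions for illustration):

```python
import math

def lidar_to_world(x0: float, y0: float, phi: float, d: float, theta: float) -> tuple[float, float]:
    """Transform one LiDAR reading (d, theta) into global coordinates.

    x0, y0: robot position in the global frame; phi: robot yaw (radians);
    theta: scan angle in the robot frame (radians).
    """
    x_global = x0 + d * math.cos(phi + theta)
    y_global = y0 + d * math.sin(phi + theta)
    return x_global, y_global

# Example: robot at (2, 1) facing 90°, beam of length 3 m fired straight ahead (theta = 0)
print(lidar_to_world(2.0, 1.0, math.radians(90), 3.0, 0.0))  # ≈ (2.0, 4.0)
```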
Matrix Representation of Positions:
Where:
\(T\) is the rotation matrix:
\( T = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \)
\(V_{\text{local}}\) is the position of the object \((x_i, y_i)\) in the robot frame:
\( V_{\text{local}} = \begin{bmatrix} x_i \\ y_i \end{bmatrix} = \begin{bmatrix} d\cos\theta \\ d\sin\theta \end{bmatrix} \)
\(P\) is the global position of the robot:
\( P = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} \)
Combining them, we obtain the final world coordinates of the detected point:
\( V_{\text{global}} = T \cdot V_{\text{local}} + P \)
Note: The reason we can apply \(T \cdot V_{\text{local}}\) without altering the distances from the robot is that \(T\) is a rotation matrix (an orthogonal matrix with determinant \(1\)), and rotations preserve the lengths of the vectors they transform.
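Here is a minimal NumPy sketch of the matrix form (names are illustrative); it reproduces the same result as the algebraic version above:

```python
import numpy as np

def lidar_to_world_matrix(x0, y0, phi, d, theta):
    """Compute V_global = T @ V_local + P for one LiDAR reading (angles in radians)."""
    T = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])                 # rotation by the robot's yaw
    V_local = np.array([d * np.cos(theta), d * np.sin(theta)])  # reading in the robot frame
    P = np.array([x0, y0])                                      # robot position in the global frame
    return T @ V_local + P

print(lidar_to_world_matrix(2.0, 1.0, np.radians(90), 3.0, 0.0))  # ≈ [2. 4.], matching the algebraic form
```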
By applying these transformations, we correctly place detected objects in the global reference frame, which is crucial for accurate mapping and localization. These equations assume that the LiDAR and robot share an identical frame; however, if your robot's LiDAR is offset from its origin or rotated at an angle, you may need to chain two frame transformations, from the LiDAR frame to the robot frame and then to the global frame, as in the sketch below.
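A compact way to chain the two transformations is with homogeneous transforms; the sketch below assumes a hypothetical LiDAR pose \((l_x, l_y, \alpha)\) in the robot frame:

```python
import numpy as np

def pose_matrix(x, y, yaw):
    """3x3 homogeneous transform for a 2D pose (rotation + translation)."""
    return np.array([[np.cos(yaw), -np.sin(yaw), x],
                     [np.sin(yaw),  np.cos(yaw), y],
                     [0.0,          0.0,         1.0]])

def offset_lidar_to_world(x0, y0, phi, lx, ly, alpha, d, theta):
    """LiDAR frame -> robot frame -> global frame for one reading (angles in radians)."""
    point_lidar = np.array([d * np.cos(theta), d * np.sin(theta), 1.0])
    return (pose_matrix(x0, y0, phi) @ pose_matrix(lx, ly, alpha) @ point_lidar)[:2]

# Example: LiDAR mounted 0.2 m ahead of the robot's origin, robot at the global origin
print(offset_lidar_to_world(0.0, 0.0, 0.0, 0.2, 0.0, 0.0, 1.0, 0.0))  # ≈ [1.2, 0.0]
```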
As a worked example of the polar-to-Cartesian conversion: a reading with \(r = 5\) and \(\theta = 30^\circ\) gives \((5 \times \cos(30^\circ),\ 5 \times \sin(30^\circ)) \approx (4.33, 2.5)\).
If you want to learn how LiDAR is used to produce an OGM map, move on to the next module!