Mapping
One of the first challenges engineers face when developing a robot is, "How can the robot understand and remember its surroundings?" In robotics, mapping is the key to giving these machines both vision and memory. Mapping is the process of transforming raw sensor data—from cameras, LiDAR, sonar, or other sensors—into a coherent representation of the environment. This representation, or "map," is fundamental for enabling a robot to navigate its surroundings, avoid obstacles, and perform tasks autonomously.

In this tutorial, we will discuss one type of mapping: Occupancy Grid Mapping (OGM). OGM divides the world into a grid in which each cell represents a small patch of real-world space, and assigns each cell one of three labels: unknown, free, or occupied.

At first, every cell in this grid is unknown except the robot's own location. As sensor data arrives, the robot labels each observed cell either occupied or free, and as it continues to explore, it gradually builds up an image of the world around it.
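As a concrete sketch of this labeling loop, the snippet below builds a tiny grid that starts entirely unknown and fuses in one simulated range reading: every cell the beam passes through is marked free, and the cell where the beam stops is marked occupied. The grid size, cell values, and the `mark_ray` helper are illustrative choices for this sketch, not part of the tutorial itself.

```python
import math

# Illustrative cell labels (many real systems use different encodings)
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def mark_ray(grid, x0, y0, angle, dist, cell_size=1.0):
    """Label cells along one range reading: free up to the hit, occupied at it."""
    steps = int(dist / cell_size)
    for i in range(steps):
        cx = int(x0 + i * math.cos(angle))
        cy = int(y0 + i * math.sin(angle))
        grid[cy][cx] = FREE          # the beam passed through -> empty space
    hx = int(x0 + dist * math.cos(angle))
    hy = int(y0 + dist * math.sin(angle))
    grid[hy][hx] = OCCUPIED          # the beam stopped here -> obstacle

# An 8x8 world: everything unknown except where the robot stands
grid = [[UNKNOWN] * 8 for _ in range(8)]
grid[0][0] = FREE                    # the robot's own cell must be free
mark_ray(grid, 0, 0, 0.0, 5.0)       # one reading straight along the x-axis
```

After this single reading, the first five cells of the top row are free, the sixth is occupied, and everything else remains unknown; each additional reading fills in more of the picture.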

In this section, we’ll explore three key ideas:
- How robots use Light Detection and Ranging (LiDAR) sensors to measure distances to walls, furniture, and obstacles by bouncing pulses of light off the surroundings.
- How Occupancy Grid Mapping (OGM) transforms this raw LiDAR data into a "memory map," mimicking the way humans mentally track objects and spaces.
- The power of Probabilistic OGM, a smarter approach that handles uncertainty (such as sensor noise or moving objects) to keep robots adaptable in messy, real-world environments.

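To preview the third idea: instead of a hard free/occupied label, Probabilistic OGM keeps a probability of occupancy per cell, commonly stored as log-odds so that repeated measurements can simply be added together. The sketch below shows one common form of this update; the specific sensor-model values (`L_FREE`, `L_OCC`) are illustrative assumptions, not numbers from this tutorial.

```python
import math

L_FREE = -0.4   # log-odds evidence when a beam passes through a cell (assumed value)
L_OCC = 0.85    # log-odds evidence when a beam ends in a cell (assumed value)

def update(logodds, hit):
    """Fuse one measurement into a cell's log-odds occupancy estimate."""
    return logodds + (L_OCC if hit else L_FREE)

def probability(logodds):
    """Convert log-odds back into an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                     # log-odds 0 means probability 0.5: unknown
for _ in range(3):             # three successive beams end in this cell
    cell = update(cell, hit=True)
belief = probability(cell)     # confidence the cell is occupied grows past 0.9
```

Because evidence accumulates gradually, one noisy reading (or a person briefly walking through a cell) only nudges the estimate, rather than permanently flipping the label the way the hard free/occupied scheme would.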
To begin, click on the LiDAR module, or choose another module to explore!