
Robot Proving Grounds Simulation Environment

The simulation environment is set up using PyBullet, a Python module for physics simulation, robotics, and machine learning. The environment includes a ground plane, walls, and various obstacles that can be generated dynamically or placed according to a configuration file.
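
A minimal sketch of bringing up such an environment directly through the PyBullet API (the obstacle models and positions below are illustrative placeholders, not the project's actual configuration):

    import pybullet as p
    import pybullet_data

    # Connect to the physics server (p.DIRECT for headless, p.GUI for a window).
    client = p.connect(p.GUI)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())

    # Ground plane plus a couple of box obstacles at illustrative positions.
    plane_id = p.loadURDF("plane.urdf")
    obstacle_ids = [
        p.loadURDF("cube.urdf", basePosition=[2.0, 1.0, 0.5]),
        p.loadURDF("cube.urdf", basePosition=[4.0, -1.5, 0.5], globalScaling=0.5),
    ]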

Environment Configuration

The environment is configurable through a YAML file in which parameters such as obstacle density, wall configuration, and ground texture are defined. Simulation parameters such as gravity (m/s²), the time step frequency (Hz), and whether to run with a graphical user interface (GUI) are also set here.
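
The exact schema depends on the project's YAML file; the file name and keys below are assumed names used only to illustrate how such a configuration might be loaded:

    import yaml

    # Hypothetical file name and keys; the real schema is defined by the project.
    with open("environment.yaml") as f:
        cfg = yaml.safe_load(f)

    gravity = cfg.get("gravity", -9.81)                   # m/s^2
    time_step = 1.0 / cfg.get("time_step_frequency", 240)  # Hz -> seconds per step
    use_gui = cfg.get("use_gui", True)
    obstacle_density = cfg.get("obstacle_density", 0.1)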

Features

  • Dynamic Obstacle Placement: Obstacles of varying sizes can be placed randomly within the environment or at positions given by a custom environment configuration; whether placement is random or deterministic is specified in the configuration.
  • Physics Parameters: Parameters such as gravity (default -9.81 m/s²), the time step (default 1/240 s), and other physics-related settings can be adjusted to simulate different physical conditions; see the sketch after this list.
  • Camera and Lighting: Adjustable camera views and lighting conditions are configurable to simulate different times of day and visibility conditions, affecting visual sensor performance.
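
As a rough sketch, the physics parameters and debug camera could be applied as follows (the camera distance and angles are illustrative; lighting options are omitted here):

    import pybullet as p

    p.connect(p.GUI)

    p.setGravity(0, 0, -9.81)   # default gravity along -z, in m/s^2
    p.setTimeStep(1.0 / 240.0)  # default fixed time step of 1/240 s

    # Reposition the debug-GUI camera; the distance and angles are illustrative.
    p.resetDebugVisualizerCamera(cameraDistance=4.0,
                                 cameraYaw=45.0,
                                 cameraPitch=-30.0,
                                 cameraTargetPosition=[0, 0, 0])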

Agent (Car)

The agent in this simulation is a car that can navigate through the environment. The car is equipped with a camera and a LIDAR and can be controlled either programmatically or manually through debug sliders in the GUI.
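
A minimal sketch of the manual-control path using GUI debug sliders (slider names and ranges are assumptions for illustration):

    import pybullet as p

    p.connect(p.GUI)  # sliders require the GUI backend

    # Sliders appear in the GUI's parameter panel; names and ranges are illustrative.
    velocity_slider = p.addUserDebugParameter("target velocity", -10.0, 10.0, 0.0)
    steering_slider = p.addUserDebugParameter("steering angle", -0.5, 0.5, 0.0)

    # Read the current slider values once per control step.
    target_velocity = p.readUserDebugParameter(velocity_slider)
    steering_angle = p.readUserDebugParameter(steering_slider)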

Car in Simulation

Car Configuration

  • URDF Model: The car's physical model is defined using a URDF (Unified Robot Description Format) file. This file specifies the geometry (shapes, sizes), mass, friction coefficients, and joint configurations (type, limits, etc.).
  • Motion Control: The car can be controlled by setting velocities at the wheels and steering angles at the steering joints using the joint motor control interface provided by PyBullet (see the sketch after this list).
  • Sensors: The car is equipped with sensors to perceive its environment, crucial for implementing autonomous navigation algorithms.
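
A sketch of loading a car model and commanding it through PyBullet's joint motor interface. The racecar.urdf bundled with pybullet_data stands in here for the project's own URDF, joints are looked up by name rather than hard-coded indices, and using position control for the steering joints is an assumption:

    import pybullet as p
    import pybullet_data

    p.connect(p.DIRECT)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    car = p.loadURDF("racecar/racecar.urdf")  # stand-in for the project's URDF

    # Collect joint indices by name; the name filters below are assumptions.
    wheel_joints, steer_joints = [], []
    for j in range(p.getNumJoints(car)):
        name = p.getJointInfo(car, j)[1].decode()
        if "wheel" in name:
            wheel_joints.append(j)
        elif "steering" in name:
            steer_joints.append(j)

    # Velocity control on the wheels, position control on the steering joints.
    for j in wheel_joints:
        p.setJointMotorControl2(car, j, p.VELOCITY_CONTROL,
                                targetVelocity=5.0, force=10.0)
    for j in steer_joints:
        p.setJointMotorControl2(car, j, p.POSITION_CONTROL, targetPosition=0.2)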

Sensors

The agent uses two primary sensors:

Camera

  • Function: Captures RGB and depth images of the environment.
  • Configuration: Field of view (default 90 degrees), aspect ratio (default 4:3), resolution (e.g., 640x480 pixels), and other camera parameters can be adjusted in the configuration file (see the sketch after this list).
  • Usage: Used for visual perception tasks such as object detection, navigation, and mapping. The camera model simulates realistic optical properties including lens distortion.
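
A sketch of capturing one RGB and depth frame with PyBullet's synthetic camera, using the default values listed above; the eye and target positions are placeholders that would normally track the car's pose:

    import pybullet as p

    p.connect(p.DIRECT)
    width, height = 640, 480

    view = p.computeViewMatrix(cameraEyePosition=[0, 0, 0.3],
                               cameraTargetPosition=[1, 0, 0.3],
                               cameraUpVector=[0, 0, 1])
    proj = p.computeProjectionMatrixFOV(fov=90.0, aspect=4.0 / 3.0,
                                        nearVal=0.01, farVal=100.0)

    # Returns width, height, RGB pixels, depth buffer, and segmentation mask.
    _, _, rgb, depth, _ = p.getCameraImage(width, height,
                                           viewMatrix=view,
                                           projectionMatrix=proj)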

LIDAR

  • Function: Provides 360-degree scans of the environment to detect obstacles and measure distances.
  • Configuration: The number of rays (resolution), scan range (e.g., 120 meters), and scanning angles can be adjusted (see the sketch after this list).
  • Usage: Critical for spatial awareness and obstacle avoidance; it provides distance readings in meters to objects around the vehicle.
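
A sketch of a 360-degree scan built on PyBullet's batched ray casting; the ray count and range follow the example values above, while the sensor origin is a placeholder:

    import math
    import pybullet as p

    p.connect(p.DIRECT)

    num_rays = 360            # angular resolution: one ray per degree
    scan_range = 120.0        # meters, as in the example above
    origin = [0.0, 0.0, 0.3]  # placeholder sensor position on the car

    ray_from = [origin] * num_rays
    ray_to = [[origin[0] + scan_range * math.cos(2 * math.pi * i / num_rays),
               origin[1] + scan_range * math.sin(2 * math.pi * i / num_rays),
               origin[2]] for i in range(num_rays)]

    results = p.rayTestBatch(ray_from, ray_to)
    # hitFraction is in [0, 1]; multiplying by the range gives distance in meters
    # (rays that hit nothing report the maximum range).
    distances = [hit[2] * scan_range for hit in results]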

State Representation

The state of the robot is represented as a tuple (x, y, theta), where:

  • x and y are the coordinates on the 2D map,
  • theta is the orientation of the car in radians relative to the positive x-axis.

This state is used in the simulation to compute movements and detect collisions. State updates are based on differential drive kinematics, taking the commanded velocities and steering angles into account.
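
A minimal sketch of one such update, assuming a simple planar kinematic model in which the turn rate omega is derived elsewhere from the commanded wheel velocities and steering angle (all names and defaults here are illustrative, not the project's variables):

    import math

    def update_state(x, y, theta, v, omega, dt=1.0 / 240.0):
        """Advance the (x, y, theta) state by one simulation step.

        v: forward speed in m/s; omega: turn rate in rad/s, obtained from the
        commanded wheel velocities and steering angle (project-specific).
        """
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        return x, y, theta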

Simulation Control

The simulation can be controlled using Python scripts that interact with the PyBullet API. Functions are provided to start, stop, and reset the simulation. Debug tools such as sliders and visual indicators (e.g., trajectory lines) make it possible to control the car manually and visualize sensor rays, providing an intuitive interface for testing and debugging.
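
The project's own start/stop/reset helpers are not reproduced here; the sketch below shows the equivalent raw PyBullet calls, including a trajectory debug line, under the same default time step:

    import time
    import pybullet as p
    import pybullet_data

    p.connect(p.GUI)
    p.setAdditionalSearchPath(pybullet_data.getDataPath())

    p.resetSimulation()                       # start from a clean world
    p.setGravity(0, 0, -9.81)
    p.loadURDF("plane.urdf")
    car = p.loadURDF("racecar/racecar.urdf")  # stand-in for the project's car

    prev_pos = None
    for _ in range(240):                      # one simulated second at 1/240 s steps
        p.stepSimulation()
        pos, _ = p.getBasePositionAndOrientation(car)
        if prev_pos is not None:
            # Visual indicator: draw the car's trajectory as short green segments.
            p.addUserDebugLine(prev_pos, pos, lineColorRGB=[0, 1, 0], lineWidth=2)
        prev_pos = pos
        time.sleep(1.0 / 240.0)               # pace roughly to real time in the GUI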