Hello GRID
This tutorial provides a detailed, step-by-step guide to configuring and launching your custom session on the GRID platform. In this session, you will:
- Set up a custom scenario using the neighbourhood scene.
- Configure an SUV vehicle with custom camera settings.
- Add a LiDAR sensor with tailored parameters.
- Integrate an AI model for image segmentation.
Configuring a Custom Session
Follow these steps to launch your custom scenario:
- Configure the Robot: For this tutorial, we will use an SUV positioned at the origin. You can adjust the vehicle type and starting position according to your needs.
- Name Your Session and Select the Scene: Provide a descriptive name for your session and choose the neighbourhood scene from the list of available scenarios.
- Configure Sensors:
  - RGB Camera: Set up an RGB camera with a 512x512 resolution to ensure high-quality images for AI processing.
  - LiDAR Sensor: Add a LiDAR sensor with custom parameters. For more details on sensor configuration options, refer to the Sensors tab.
- Select AI Models: For this demonstration, we will integrate the Grounded SAM model to perform image segmentation tasks.
When you are ready, click the Launch button to start the simulation.
Get Started with the GRID Session
Once your simulation is running, you can interact with GRID using the code interface along with the Copilot, Telemetry, and Storage tabs.
All of the code featured in this section is available in a ready-to-run notebook here.
Initial Setup and Control
- Initialize the Environment: Begin by running the boilerplate code to initialize the AirGen API and load the necessary weights for the Grounded SAM model.
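The exact boilerplate ships with the session notebook; the sketch below shows a typical AirSim-style connection sequence, assuming `airgen.CarClient`, `confirmConnection()`, and `enableApiControl()` behave as described in the CarClient documentation (the Grounded SAM weight loading is left to the notebook's own cell):

```python
import airgen

# Connect to the running simulation. This is a sketch: the session
# notebook's boilerplate cell handles the connection details and also
# loads the Grounded SAM weights.
client = airgen.CarClient()
client.confirmConnection()      # verify the simulator is reachable
client.enableApiControl(True)   # allow programmatic control of the car
```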
- Control the Vehicle: Use the setCarTargetSpeed() API (CarClient documentation) to move the car forward, and then capture images using the getImages() API. The following code block demonstrates how to position the car and capture an image. We are using the "front_center" camera in this example because the road is directly ahead; for a list of default cameras and their positions, refer to the Cameras tab.
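A minimal sketch of this step. The `getImages()` call shape shown here (a camera name plus a list of image types, returning image/metadata pairs) is an assumption; check the CarClient documentation for the authoritative signature.

```python
import time

# Drive forward for a few seconds at a gentle target speed, then stop.
client.setCarTargetSpeed(2.0)   # target speed in m/s
time.sleep(5)
client.setCarTargetSpeed(0.0)

# Capture an RGB frame from the front_center camera. The signature
# below is an assumption; see the CarClient documentation.
images = client.getImages("front_center", [airgen.ImageType.Scene])
rgb_image = images[0][0]        # assumed: (image array, metadata) pairs
```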
Image Segmentation with Grounded SAM
- Segment the Road: Apply the Grounded SAM model to segment the road in the image you just captured, and visualize the output using the rerun module. We will run the code below to segment the road:
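A sketch of the segmentation step. The `gsam` handle and its `segment()` method are hypothetical stand-ins for the model object created by the notebook boilerplate; the logging calls use the standard `rerun` SDK.

```python
import rerun as rr

# `gsam` stands in for the Grounded SAM handle created by the notebook
# boilerplate; segment() and its prompt argument are assumptions, and
# the mask is assumed to be a uint8 class-id image.
road_mask = gsam.segment(rgb_image, prompt="road")

# Log the frame and its mask to rerun. In a GRID session the viewer is
# embedded; running locally you may need rr.init(..., spawn=True).
rr.init("grid-road-segmentation")
rr.log("front/image", rr.Image(rgb_image))
rr.log("front/road_mask", rr.SegmentationImage(road_mask))
```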
- Alternate Camera View: Try capturing and segmenting an image from the "back_center" camera for a different perspective. The following code block captures an image from the back camera:
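The same assumed `getImages()` signature as before, pointed at the rear camera:

```python
# Capture an RGB frame from the back_center camera (same assumed
# getImages() signature as in the front-camera example).
images = client.getImages("back_center", [airgen.ImageType.Scene])
back_image = images[0][0]
```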
Let us now segment the image captured from the back camera:
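Continuing with the hypothetical `gsam` handle from the earlier sketch:

```python
# Segment the rear view and log it alongside the front-camera results.
back_mask = gsam.segment(back_image, prompt="road")
rr.log("back/image", rr.Image(back_image))
rr.log("back/road_mask", rr.SegmentationImage(back_mask))
```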
Navigating the Scene
- Move to a Specific Position: Advance the car to the position x = -120 metres using the following code block:
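One way to sketch this step is to poll the vehicle pose while driving, assuming the AirSim-style `simGetVehiclePose()` accessor is available on the AirGen client:

```python
import time

# Drive forward and poll the pose until the car passes x = -120 m.
# simGetVehiclePose() is the AirSim-style accessor; confirm the name
# against the CarClient documentation.
client.setCarTargetSpeed(5.0)
while client.simGetVehiclePose().position.x_val > -120:
    time.sleep(0.1)
client.setCarTargetSpeed(0.0)   # stop once the target is reached
```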
- Perform a Left Turn: To execute a left turn, set the steering to 1.0 for a sharp turn, then move the car slowly to complete the maneuver.
- Return to Straight Movement: Reset the steering to 0.0 to resume straight-line motion.
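Continuing the sketch above:

```python
# Zero the steering to straighten out while keeping the same throttle.
controls.steering = 0.0
client.setCarControls(controls)
```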
Working with LiDAR Data
- Generate a Point Cloud: Use the getLidarData() API (LiDAR documentation) to capture LiDAR data. The code snippet below shows how to transform the raw data into a list of points with the appropriate coordinate transformations.
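A sketch following the AirSim convention, where the point cloud arrives as a flat list of floats and coordinates are in NED (z pointing down):

```python
import numpy as np

# Fetch a LiDAR sweep; in the AirSim-style API, point_cloud is a flat
# list [x0, y0, z0, x1, y1, z1, ...].
lidar_data = client.getLidarData()
points = np.array(lidar_data.point_cloud, dtype=np.float32).reshape(-1, 3)

# Flip z so the cloud plots upright (NED has z pointing down). Adjust
# if your session uses a different convention.
points[:, 2] = -points[:, 2]
```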
- Visualize the Point Cloud: With Matplotlib, scatter the LiDAR point cloud data and save the plot as an image. You can locate the session's directory via the GRID_USER_SESSION_BLOB_DIR variable; once the plot is saved there, it appears in the Storage tab.
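A sketch of the plotting step, assuming GRID_USER_SESSION_BLOB_DIR is exposed as an environment variable (adjust if your session provides it as a Python variable instead):

```python
import os
import matplotlib
matplotlib.use("Agg")  # render without a display inside the session
import matplotlib.pyplot as plt

# Top-down scatter of the point cloud, colored by height.
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(points[:, 0], points[:, 1], s=1, c=points[:, 2], cmap="viridis")
ax.set_xlabel("x (m)")
ax.set_ylabel("y (m)")
ax.set_title("LiDAR point cloud (top-down)")

# Save into the session blob directory so the file shows up in the
# Storage tab. Reading it as an environment variable is an assumption.
out_dir = os.environ.get("GRID_USER_SESSION_BLOB_DIR", ".")
fig.savefig(os.path.join(out_dir, "lidar_pointcloud.png"), dpi=150)
```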