Hello GRID Tutorial

This section provides step-by-step instructions for configuring and launching a custom session on the GRID platform. We will create a custom scenario using the neighbourhood scene, use the SUV vehicle with custom camera settings, add a LiDAR sensor, and integrate AI models.

Configuring a Custom Session

We begin by briefly walking through the launch process for our custom scenario.

  1. Set the name and select the neighbourhood scene:

    You can modify the run name to identify the run easily. After naming it, select the neighbourhood scene from the available scenes.

    ../_images/scene.png
  2. Configure the robot:

    For this tutorial, we will use the SUV car type and spawn it at the origin. Feel free to modify these settings as you please.

    ../_images/select_car.png
  3. Set up cameras:

    Let's set the RGB camera resolution to 512x512 for a larger image; this will be helpful when using the AI models.

    ../_images/camera_settings.png
  4. Adjust sensors:

    Let us add a LiDAR sensor to our car with some custom parameters as shown. Details for sensor configuration can be found in the Sensors tab.

    ../_images/sensors_settings.png
  5. Choose AI models:

    We will add one model, CLIPSeg, for demonstration.

    ../_images/models_pick.png

Go ahead and click the Launch button to start the simulation.

Using the Developer Mode code notebook

Now we can use the code interface in GRID along with the Terminal, Telemetry, and Storage tabs to execute some actions. For details on these tabs, you can refer to the Developer Mode documentation.

Tip

All the code used in this section can be found in a ready-to-run notebook here.

  • First, let us begin by running the boilerplate code to initialize the AirGen API and get the weights for the CLIPSeg model.

    ../_images/start.png
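
    The notebook's screenshot above is the authoritative version. As a rough sketch of what such boilerplate might look like, assuming an AirSim-style airgen CarClient and using the Hugging Face CLIPSeg checkpoint as a stand-in for GRID's built-in model wrapper:

      import airgen
      from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

      # Connect to the simulated car (assumes an AirSim-style client in airgen)
      client = airgen.CarClient()
      client.confirmConnection()
      client.enableApiControl(True)

      # Fetch CLIPSeg weights; the Hugging Face checkpoint is an illustrative
      # stand-in for however the GRID notebook loads the model
      processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
      clipseg = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
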
  • Now that we can control the car, let us start by using the setCarTargetSpeed() API (CarClient) to drive the car some distance ahead and the getImages() API (VehicleClient) to capture an image. We will also use the rerun module to visualize it. Below is the code block that does it.

    ../_images/image_1.png

    Note

    We are using the "front_center" camera in this case as the road is positioned directly ahead. The default cameras available in AirGen, along with their names, are described in Cameras.
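
    For reference, a minimal sketch of this step, assuming AirGen follows AirSim conventions for setCarTargetSpeed() and that getImages() returns the RGB array together with camera metadata:

      import time
      import airgen
      import rerun as rr

      # Initialize rerun (the GRID notebook may already do this for you)
      rr.init("hello_grid")

      # Drive forward for a few seconds, then stop (speed in m/s is assumed)
      client.setCarTargetSpeed(5.0)
      time.sleep(4)
      client.setCarTargetSpeed(0.0)

      # Capture an RGB frame from the front-facing camera; the exact return
      # structure of getImages() is an assumption here
      rgb, camera_info = client.getImages("front_center", [airgen.ImageType.Scene])[0]

      # Log the image so it shows up in the Telemetry tab
      rr.log("car/front_center/rgb", rr.Image(rgb))
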

  • Next, let us use the CLIPSeg model to segment the road in the image we just captured, and use the rerun module again to visualize the result.

    ../_images/segment_1.png
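
    As an illustrative stand-in for the notebook's CLIPSeg call (GRID's own wrapper may differ), here is how the road could be segmented with the Hugging Face CLIPSeg model loaded earlier and the mask logged to rerun:

      import numpy as np
      import torch
      from PIL import Image as PILImage
      import rerun as rr

      # Run CLIPSeg with a text prompt to get a heatmap for the "road" class
      pil_img = PILImage.fromarray(rgb)
      inputs = processor(text=["road"], images=[pil_img], return_tensors="pt")
      with torch.no_grad():
          logits = clipseg(**inputs).logits

      # The output is a low-resolution mask; resize it back to the image size
      mask = torch.sigmoid(logits).squeeze().numpy()
      mask = np.array(PILImage.fromarray(mask).resize(pil_img.size))

      # Visualize the segmentation alongside the original image
      rr.log("car/front_center/road_mask", rr.Image(mask))
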
  • Let us now try the same, this time using the "back_center" camera.

    ../_images/segment_2.png

    This is what the original image looked like:

    ../_images/thiswasthephoto.png
  • Now, let us move ahead to the position x = -120 metres. Below is the code block to do this.

    ../_images/movetointeresting.png
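
    The code block in the screenshot is the actual version used; a rough sketch of the same idea, assuming getCarState() follows the AirSim convention of exposing the vehicle's kinematics:

      import time

      # Drive until the car's world x-position reaches roughly -120 m.
      # The comparison direction assumes x decreases as the car drives ahead
      # in this scene; adjust it for your spawn orientation.
      target_x = -120.0
      client.setCarTargetSpeed(5.0)
      while client.getCarState().kinematics_estimated.position.x_val > target_x:
          time.sleep(0.1)
      client.setCarTargetSpeed(0.0)
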
  • How about taking a left turn here? To do this, we can set the steering to 1.0 for a hard left turn and drive the car slowly for a short while.

    ../_images/turn.png
  • Setting the steering back to 0.0 makes the car move normally in a straight line. Neat!

    ../_images/slightly_ahead.png
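
    One way to sketch the two steps above, assuming an AirSim-style CarControls/setCarControls() interface (the notebook may set the steering through a different call):

      import time
      import airgen

      # Hard left: full steering while creeping forward for a short while
      controls = airgen.CarControls()
      controls.throttle = 0.3
      controls.steering = 1.0
      client.setCarControls(controls)
      time.sleep(3)

      # Straighten out: steering back to 0.0 and the car drives straight again
      controls.steering = 0.0
      client.setCarControls(controls)
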
  • Lastly, let us use the LiDAR sensor we configured for this scene to generate a point cloud. We can use the getLidarData() API for this task (LiDAR). Below is a basic method that makes this call and returns a list of points after applying the appropriate coordinate transforms.

    ../_images/lidar_1.png
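
    A minimal version of such a method, assuming getLidarData() returns a flat list of x, y, z values in the sensor frame as it does in AirSim (any sensor-to-world transform would also go in here):

      import numpy as np

      def get_lidar_points(client) -> np.ndarray:
          """Fetch one LiDAR sweep and return it as an (N, 3) array of points."""
          lidar_data = client.getLidarData()
          points = np.array(lidar_data.point_cloud, dtype=np.float32)
          if points.size < 3:
              return np.zeros((0, 3), dtype=np.float32)
          return points.reshape(-1, 3)

      points = get_lidar_points(client)
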
  • Now, using matplotlib, we can scatter this point cloud data on a plot and save it as an image. The current session's directory can be found using GRID_USER_SESSION_BLOB_DIR, as shown. Finally, we can load this image with OpenCV and use the rerun module to visualize it live in the Telemetry tab.

    ../_images/visualize_pcl.png

    Here is the code we used:

    ../_images/lidar_2.png
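
    A sketch of the same flow, assuming GRID_USER_SESSION_BLOB_DIR is exposed as an environment variable and the point cloud from the previous step is available in points:

      import os
      import cv2
      import matplotlib.pyplot as plt
      import rerun as rr

      # Scatter the point cloud top-down (x vs. y), coloured by height, and
      # save the plot into the current session's directory
      session_dir = os.environ["GRID_USER_SESSION_BLOB_DIR"]
      plot_path = os.path.join(session_dir, "lidar_scatter.png")

      plt.figure(figsize=(6, 6))
      plt.scatter(points[:, 0], points[:, 1], s=1, c=points[:, 2], cmap="viridis")
      plt.xlabel("x (m)")
      plt.ylabel("y (m)")
      plt.title("LiDAR point cloud (top-down)")
      plt.savefig(plot_path)
      plt.close()

      # Load the saved plot with OpenCV and stream it to the Telemetry tab
      plot_img = cv2.cvtColor(cv2.imread(plot_path), cv2.COLOR_BGR2RGB)
      rr.log("car/lidar/scatter", rr.Image(plot_img))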