Platform FAQs

I do not see the simulation streaming after I start a session

In certain browsers, the stream does not start automatically; we are investigating this. In the meantime, try clicking the simulation streaming pane (top right) to start the stream manually.

I am unable to drive the wheeled robots with the keyboard

This typically happens when you have already run some code, such as the first cell in the notebook. Initializing the simulation client enables API control for the robot, which disables keyboard control. To regain keyboard control, run airgen_car_0.client.enableApiControl(False) in a notebook cell, as in the sketch below.
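
A minimal sketch of toggling control from a notebook cell, assuming the airgen_car_0 object created by the notebook's setup code (enableApiControl(True) is the standard counterpart that hands control back to the API):

```python
# Hand control back to the keyboard by disabling API control.
airgen_car_0.client.enableApiControl(False)

# To command the robot from code again later, re-enable API control
# (note that this disables keyboard control once more).
airgen_car_0.client.enableApiControl(True)
```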

I tried to use the terminal (LLM) to do X and it does not work

Currently, the LLM has access to only a limited set of simulation/robot APIs, so it cannot run every type of instruction. We are working on bringing more functionality into the scope of the LLM.

Unable to start a session - it says I already have an ongoing session

The Platform does not currently allow multiple simultaneous sessions. To start a new session while one is ongoing, navigate to Session History → Active Sessions and terminate the ongoing session.

The progress bar in the code blocks is stuck at 0%

This is a known issue that we are working on fixing. The code is still running in the background, so you can ignore the progress bar; once the code finishes running, the rotating circle icon will disappear.

My drone/car crashed and I cannot control/move it anymore

Press Backspace to reset the robot to its default starting position. If you prefer to reset from code, see the sketch below.
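
A minimal sketch, assuming the airgen_car_0 object from the notebook and that its client follows the standard AirSim-style interface, where reset() returns the robot to its default starting pose (substitute the corresponding object name for a drone):

```python
# Assumption: the client mirrors the AirSim-style API, where reset()
# returns the robot to its default starting position.
airgen_car_0.client.reset()
```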

How do I move the camera manually within the simulation?

Click on the simulator, then press M to enter manual camera mode. Pressing H while the simulator is in focus also shows a list of keyboard shortcuts and their functionality.

Why does my Terminal show "…" and not allow me to type?

You cannot access the terminal (LLM interaction window) when there is code running in the notebook.

Using the controller to move the robot is not working

This feature has only been tested on Chrome; we are in the process of enabling it for other browsers.

How do I run video-native AI models on the platform?

Unlike image data, which is passed to models as numpy arrays, video is passed as a file path string. Because each model has its own preprocessing pipeline, write your video data to disk and pass the resulting path to the model; the model will read the video and preprocess it accordingly. We are working on enabling video data to be passed to models directly as numpy arrays in the future. A sketch of this workflow follows.
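
A minimal sketch of the write-then-pass workflow, using OpenCV to write the frames; run_video_model is a hypothetical stand-in for whichever video model you call on the platform, whose actual interface may differ:

```python
import cv2
import numpy as np

def run_video_model(video_path: str):
    """Hypothetical placeholder for a platform video model call."""
    print(f"Model would read and preprocess: {video_path}")

# Example frames; in practice these might come from the robot's cameras.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(30)]

# Write the frames to disk as an .mp4 file.
writer = cv2.VideoWriter(
    "capture.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),  # codec
    30.0,                             # frames per second
    (640, 480),                       # (width, height)
)
for frame in frames:
    writer.write(frame)
writer.release()

# Pass the path (a string), not the numpy arrays, to the model.
run_video_model("capture.mp4")
```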