from grid.model.perception.detection.rt_detr import RT_DETR
import rerun as rr

# AirGenCar is provided by the GRID AirGen environment.
car = AirGenCar()

# Capture an image from the AirGen simulator and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = RT_DETR(use_local=True)
result = model.run(input=img.copy(), confidence_threshold=0.5)
rr.log("result", rr.Image(result))

The RT_DETR class wraps the RT-DETR model, a real-time detection transformer that detects objects in images and videos.

class RT_DETR()

use_local: boolean (default: True)

If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex.

def run()

input: Union[np.ndarray, str] (required)

The image array, or the path to a video file, on which to run object detection.

confidence_threshold: float (required)

Confidence threshold for filtering object detection results.
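To illustrate what confidence_threshold does, here is a minimal, self-contained sketch of score-based filtering with NumPy. The detection values below are made up for illustration and are not actual RT_DETR output:

```python
import numpy as np

# Hypothetical detections: parallel arrays of scores and labels.
confidences = np.array([0.92, 0.41, 0.77, 0.18])
labels = np.array(["car", "person", "truck", "bicycle"])

threshold = 0.5
keep = confidences >= threshold  # boolean mask of detections to retain

filtered = labels[keep]  # only detections at or above the threshold survive
```

Raising the threshold trades recall for precision: fewer, higher-confidence boxes remain in the annotated output.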

Returns: np.ndarray

The annotated image, with bounding boxes and class labels drawn on it.


This code is licensed under the Apache 2.0 License.
