from grid.model.perception.vo.vo_dpvo import DPVO
# Import path for AirGenCar is assumed from the GRID AirGen examples.
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()

# We capture an image from the AirGen simulator
# and run model inference on it.

img = car.getImage("front_center", "rgb").data

model = DPVO(use_local=True)
result = model.run(image=img)

The DPVO class is a wrapper for the Deep Patch Visual Odometry (DPVO) model, which estimates camera poses from a stream of RGB images using a deep learning approach.

class DPVO()

use_local
boolean
default: True

If True, the inference call is run on the local VM; otherwise it is offloaded onto GRID-Cortex. Defaults to True.

calib
np.ndarray
default: np.array([320, 320, 320, 240])

The camera calibration vector of shape (4,). Defaults to np.array([320, 320, 320, 240]).
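As a minimal sketch of the constructor options above, the snippet below instantiates DPVO with a custom calibration. The intrinsic values and their interpretation as fx, fy, cx, cy are illustrative assumptions, not values taken from this page.

import numpy as np
from grid.model.perception.vo.vo_dpvo import DPVO

# Hypothetical intrinsics for a 1280x720 camera, assumed to be ordered
# as fx, fy, cx, cy; replace with your own sensor's calibration.
calib = np.array([640, 640, 640, 360])

# use_local=True keeps inference on the local VM instead of GRID-Cortex.
model = DPVO(use_local=True, calib=calib)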

def run()

Uses DPVO to predict the camera pose for the given RGB image relative to the previous one. If this is the first image, initializes the pose estimation routine.

image
np.ndarray
required

The input RGB image of shape (M, N, 3).

Returns
np.ndarray

The predicted pose as a 1x6 array containing the X, Y, Z position and the roll, pitch, yaw orientation.
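To make the return value concrete, here is a hedged sketch that feeds a short sequence of simulator frames through run() and stacks the resulting 1x6 poses. The use of AirGenCar as the image source mirrors the example above; the import path and the loop length are assumptions.

import numpy as np
from grid.model.perception.vo.vo_dpvo import DPVO
# Import path for AirGenCar is assumed from the GRID AirGen examples.
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()
model = DPVO(use_local=True)

poses = []
for _ in range(10):  # number of frames is arbitrary for illustration
    img = car.getImage("front_center", "rgb").data  # (M, N, 3) RGB image
    pose = model.run(image=img)                     # 1x6: X, Y, Z, roll, pitch, yaw
    poses.append(np.asarray(pose).reshape(-1))

trajectory = np.stack(poses)  # (num_frames, 6), each pose relative to the previous frame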

This code is licensed under the MIT License.
