The MFT class implements a point tracking model that processes video frames and tracks points based on provided queries.

class MFT()

queries (torch.Tensor)
    Tensor containing the point queries for tracking.

save_results (boolean, default: False)
    Whether to save the results as a video.

use_local (boolean, default: True)
    If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex.
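A minimal sketch of constructing the tracker with the non-default options above; here save_results=True writes the tracking output as a video and use_local=False offloads inference to GRID-Cortex. The two-point queries tensor is illustrative only.

import torch

from grid.model.perception.tracking.mft import MFT

# Illustrative query points (same layout as the usage example below).
queries = torch.tensor([
        [0., 600., 350.],
        [0., 600., 250.]])

# Offload inference to GRID-Cortex and save the tracking results as a video.
model = MFT(queries=queries, save_results=True, use_local=False)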

def run()

frame (np.ndarray, required)
    The input video frame to process.

Returns: Tuple[Optional[torch.Tensor], Optional[torch.Tensor]]
    Predicted coordinates and occlusions.
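Because run() returns Optional tensors, a caller typically checks for None before using the outputs. A minimal sketch, assuming an MFT instance named model and an iterable of frames called video_frames as in the usage example below:

for frame in video_frames:
    coords, occlusions = model.run(frame)
    if coords is None or occlusions is None:
        # No prediction available for this frame; skip it.
        continue
    # coords holds the predicted point coordinates, occlusions the occlusion estimates.
    print(coords.shape, occlusions.shape)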

import torch

from grid.model.perception.tracking.mft import MFT

# AirGenCar is assumed to be available from the AirGen/GRID environment.
car = AirGenCar()

# We capture a sequence of frames from the AirGen simulator
# and run model inference on them (the frame count here is arbitrary).
video_frames = [car.getImage("front_center", "rgb").data for _ in range(40)]

# Point queries for the tracker.
queries = torch.tensor([
        [0., 600., 350.],
        [0., 600., 250.],
        [10., 600., 500.],
        [20., 750., 600.],
        [30., 900., 200.]])

model = MFT(queries=queries, save_results=False, use_local=True)
for frame in video_frames:
    coords, occlusions = model.run(frame)

This code is licensed under the MIT License.
