from grid.model.perception.optical_flow.gmflow import UniMatch

# We run model inference on a sample video fetched from a URL.

video_input = "https://huggingface.co/datasets/pranay-ar/test/resolve/main/all_ego.mp4"

model = UniMatch(use_local=False)
result = model.run(video_input, mode='video')

The UniMatch class implements a wrapper for the UniMatch model, which estimates optical flow in videos using a multi-scale transformer-based approach.

class UniMatch()

use_local (boolean, default: False)
If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False.
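For example, a minimal sketch of toggling inference placement via this flag (only the constructor signature documented above is assumed):

from grid.model.perception.optical_flow.gmflow import UniMatch

# Offload the inference call to GRID-Cortex (the default)
remote_model = UniMatch(use_local=False)

# Run the inference call on the local VM instead
local_model = UniMatch(use_local=True)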

def run()

video_input (str, required)
The link to the video or the path to the video/image directory.

mode (str, default: "video")
The mode of input, either 'video' or 'image'.

Returns (list)
The optical flow maps for the input video or images.
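As a sketch of how the two input modes and the returned list might be consumed (the "frames/" directory is a hypothetical placeholder, and the per-frame (H, W, 2) flow shape is an assumption, not documented above):

import numpy as np
from grid.model.perception.optical_flow.gmflow import UniMatch

model = UniMatch(use_local=False)

# Video mode: pass a URL or a path to a video file
video_flows = model.run("https://huggingface.co/datasets/pranay-ar/test/resolve/main/all_ego.mp4", mode='video')

# Image mode: pass the path to a directory of frames ("frames/" is a placeholder)
image_flows = model.run("frames/", mode='image')

# The result is a list of optical flow maps, one per frame pair.
# Assuming each map is an (H, W, 2) array of per-pixel (dx, dy) displacements:
for flow in video_flows:
    magnitude = np.linalg.norm(flow, axis=-1)  # per-pixel motion magnitude
    print(f"mean motion: {magnitude.mean():.2f} px")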


This code is licensed under the Apache 2.0 License.
