```python
from grid.model.perception.optical_flow.gmflow import UniMatch

car = AirGenCar()

# We will be capturing an image from the AirGen simulator
# and run model inference on it.
video_input = "https://huggingface.co/datasets/pranay-ar/test/resolve/main/all_ego.mp4"

model = UniMatch(use_local=False)
result = model.run(video_input, mode='video')
```
The `UniMatch` class implements a wrapper for the UniMatch model, which estimates optical flow in videos using a multi-scale, transformer-based approach.
Returns: the optical flow maps for the input video or images.
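Optical flow maps are often visualized by encoding flow direction as hue and flow magnitude as brightness. The sketch below assumes the flow map is a NumPy array of shape `(H, W, 2)` holding per-pixel `(dx, dy)` displacements; the exact output format of `UniMatch.run` is not documented here, so `flow_to_rgb` is a hypothetical helper, not part of the GRID API.

```python
import numpy as np

def flow_to_rgb(flow):
    """Visualize a dense optical flow field (H, W, 2) as an RGB image.

    Hypothetical helper: assumes per-pixel (dx, dy) displacement maps;
    direction is mapped to hue, magnitude to value (HSV -> RGB).
    """
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(dx**2 + dy**2)
    angle = np.arctan2(dy, dx)  # direction in radians, [-pi, pi]

    h = (angle + np.pi) / (2 * np.pi)          # hue in [0, 1]
    s = np.ones_like(h)                        # full saturation
    v = magnitude / (magnitude.max() + 1e-8)   # normalize magnitude to [0, 1]

    # Standard HSV -> RGB conversion, vectorized over the image.
    i = np.floor(h * 6).astype(int) % 6
    f = h * 6 - np.floor(h * 6)
    p, q, t = v * (1 - s), v * (1 - f * s), v * (1 - (1 - f) * s)
    lut = np.stack([
        np.stack([v, t, p], -1), np.stack([q, v, p], -1),
        np.stack([p, v, t], -1), np.stack([p, q, v], -1),
        np.stack([t, p, v], -1), np.stack([v, p, t], -1),
    ])  # shape (6, H, W, 3), one entry per hue sextant
    rows = np.arange(h.shape[0])[:, None]
    cols = np.arange(h.shape[1])[None, :]
    rgb = lut[i, rows, cols]  # pick the sextant for each pixel -> (H, W, 3)
    return (rgb * 255).round().astype(np.uint8)

# Example: a synthetic 4x4 flow field of unit vectors pointing right.
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 1.0
rgb = flow_to_rgb(flow)
print(rgb.shape)  # (4, 4, 3)
```

Uniform rightward motion maps to a single hue at full brightness, so the rendered image is a solid color; real flow maps produce the familiar rainbow-wheel visualizations.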
This code is licensed under the Apache 2.0 License.