The GSAM2 class provides a wrapper for the GSAM2 model, which combines Grounding DINO for text-prompted object detection with SAM2 for high-precision segmentation in RGB images.
```python
from grid.model.perception.segmentation.gsam2 import GSAM2
# NOTE: import path for AirGenCar is assumed; adjust to your GRID installation.
from grid.robot.wheeled.airgen_car import AirGenCar

car = AirGenCar()

# Capture an image from the AirGen simulator and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = GSAM2(use_local=False)
result = model.run(rgbimage=img, prompt=<prompt>)
print(result.shape)
```
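The exact return type of `run` is not documented here; assuming it yields a stack of per-instance binary masks of shape `(N, H, W)` (typical for SAM-style models), a minimal post-processing sketch with NumPy might look like this. The `result` array below is synthetic, standing in for real model output:

```python
import numpy as np

# Synthetic stand-in for GSAM2 output: assumes a (N, H, W) array of
# per-instance binary masks, as is typical for SAM-style models.
result = np.zeros((2, 4, 4), dtype=bool)
result[0, 1:3, 1:3] = True   # instance 0 covers a 2x2 patch
result[1, 0, :] = True       # instance 1 covers the top row

# Pixel area of each detected instance.
areas = result.reshape(result.shape[0], -1).sum(axis=1)

# Union of all instance masks, and the fraction of the frame segmented.
combined = result.any(axis=0)
coverage = combined.mean()
print(areas.tolist(), float(coverage))
```

This kind of reduction (per-instance areas, union coverage) is a common first step before overlaying masks on the input frame or filtering small detections.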
This code is licensed under the Apache 2.0 and BSD-3-Clause licenses.