from grid.model.perception.segmentation.clipseg import CLIPSeg
from grid.robot.wheeled.airgen_car import AirGenCar  # import path assumed; adjust to your GRID install

car = AirGenCar()

# Capture an image from the AirGen simulator and run model inference on it.
img = car.getImage("front_center", "rgb").data

model = CLIPSeg(use_local=False)
result = model.run(rgbimage=img, prompt="<prompt>")  # replace with your text prompt
print(result.shape)

The CLIPSeg class implements a wrapper for the CLIPSeg model, which segments images based on a given text prompt.

class CLIPSeg()

Parameters:

use_local (boolean, default: False)
If True, the inference call is run on the local VM; otherwise it is offloaded to GRID-Cortex.
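
For example, to run inference on the local VM rather than offloading it to GRID-Cortex (a minimal sketch using the constructor documented above):

model = CLIPSeg(use_local=True)  # inference runs locally instead of on GRID-Cortex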

def run()

Parameters:

rgbimage (np.ndarray, required)
The input RGB image of shape (M, N, 3).

prompt (str, required)
The text prompt to use for segmentation.

Returns:

np.ndarray
The predicted segmentation mask of shape (M, N).
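
The returned mask can be post-processed with standard NumPy operations. A minimal sketch, assuming the mask holds per-pixel scores (the 0.5 threshold is illustrative and not part of the API):

import numpy as np

# result has shape (M, N), matching the input image's spatial dimensions
binary_mask = result > 0.5            # illustrative threshold
coverage = float(binary_mask.mean())  # fraction of pixels matched to the prompt
print(f"Prompt covers {coverage:.1%} of the image")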


This code is licensed under the MIT License.
