from grid.model.perception.segmentation.sapiens_segmentation import SapiensSegmentation
car = AirGenCar()  # AirGenCar is assumed to be available in the GRID session

# Capture an image from the AirGen simulator
# and run model inference on it.

img = car.getImage("front_center", "rgb").data

model = SapiensSegmentation(use_local=False)
result = model.run(rgbimage=img)
print(result.shape)

The SapiensSegmentation class provides a wrapper for the Sapiens body-part segmentation model.

This model is specifically trained for images with humans as the primary subject.
class SapiensSegmentation()

Parameters:

use_local (boolean, default: False)
If True, the inference call is run on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False.
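For example, a minimal sketch of toggling between the two execution modes (assuming the model weights are available locally when use_local=True):

from grid.model.perception.segmentation.sapiens_segmentation import SapiensSegmentation

# Default: offload the inference call to GRID-Cortex
remote_model = SapiensSegmentation(use_local=False)

# Run inference on the local VM instead; this assumes the model
# weights are available in your local GRID environment
local_model = SapiensSegmentation(use_local=True)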

def run()

Parameters:

rgbimage (np.ndarray, required)
The input RGB image of shape (M, N, 3).

Returns:

np.ndarray
The predicted segmentation mask of shape (M, N).
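Continuing the example above, a small sketch of inspecting the returned mask with NumPy (the exact body-part label set depends on the Sapiens checkpoint and is not documented here):

import numpy as np

result = model.run(rgbimage=img)  # segmentation mask of shape (M, N)

# Count how many pixels were assigned to each predicted class index
labels, counts = np.unique(result, return_counts=True)
for label, count in zip(labels, counts):
    print(f"class {label}: {count} pixels")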

This code is licensed under the CC BY-NC 4.0 License.
