from grid.model.perception.matching.glightglue import LightGlue

car = AirGenCar()

# Capture an image from the AirGen simulator
# and run model inference on it.

img = car.getImage("front_center", "rgb").data
img1 = img.copy()

model = LightGlue(use_local=False)
matches1, points0_1, points1_1 = model.run(img, img1)

The LightGlue model computes point matches between two images using SuperPoint features and the LightGlue matcher.

class LightGlue()

use_local
boolean
default: False

If True, the inference call runs on the local VM; otherwise it is offloaded to GRID-Cortex. Defaults to False.

def run()
image0
np.ndarray
required

Input RGB image 1 of shape (M, N, 3).

image1
np.ndarray
required

Input RGB image 2 of shape (M, N, 3).
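Both inputs must be RGB arrays of shape (M, N, 3). A minimal sketch of a pre-call shape check, assuming NumPy arrays as inputs (the helper name `is_valid_rgb` is illustrative, not part of the GRID API):

```python
import numpy as np

def is_valid_rgb(img) -> bool:
    """Return True if `img` looks like an (M, N, 3) RGB image array."""
    return (
        isinstance(img, np.ndarray)
        and img.ndim == 3
        and img.shape[2] == 3
    )

img = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder image
assert is_valid_rgb(img)
assert not is_valid_rgb(np.zeros((480, 640)))   # grayscale: missing channel axis
```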

Returns
List

List of point correspondences (matches) between the images.

points0
np.ndarray

Feature points detected in image 1, of shape (K, 2).

points1
np.ndarray

Feature points detected in image 2, of shape (K, 2).
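Given the matching (K, 2) shapes above, `points0` and `points1` can be consumed as row-aligned correspondence pairs. A hedged sketch using synthetic arrays as stand-ins for real `model.run()` output (the displacement computation is illustrative, not part of the GRID API):

```python
import numpy as np

# Synthetic stand-ins for the returned keypoints: 5 matched pairs,
# where image 2 is image 1 shifted by (+10, +5) pixels.
points0 = np.array([[12, 34], [56, 78], [90, 12], [34, 56], [78, 90]], dtype=float)
points1 = points0 + np.array([10.0, 5.0])

# Per-match displacement vectors and their mean (a crude motion estimate).
disp = points1 - points0        # shape (K, 2)
mean_disp = disp.mean(axis=0)   # average pixel shift between the two images
print(mean_disp)
```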


This code is licensed under the Apache 2.0 License.
