CoTracker
The CoTracker
class implements a wrapper for the CoTracker model, which tracks points
in video frames in an online manner (without having to look at the entire video).
Processes a single video frame and updates tracking information.
This method appends the given frame to the window of frames and processes the frames at intervals defined by the CoTracker model’s step size. If the current frame count is a multiple of the step size, it updates the predicted tracks and visibility.
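The buffering behavior described above can be sketched in plain Python. This is a minimal illustration, not the wrapper's actual implementation: `OnlineTrackerSketch` and `predict_fn` are hypothetical names, with `predict_fn` standing in for the real CoTracker forward pass.

```python
# Minimal sketch of step-based frame buffering (hypothetical names;
# predict_fn stands in for the real CoTracker model call).

class OnlineTrackerSketch:
    def __init__(self, step, predict_fn):
        self.step = step              # model step size (assumed attribute)
        self.predict_fn = predict_fn  # stand-in for the model forward pass
        self.window = []              # buffered frames
        self.frame_count = 0
        self.tracks = None
        self.visibility = None

    def process_frame(self, frame):
        """Buffer the frame; run prediction only at step-size intervals."""
        self.window.append(frame)
        self.frame_count += 1
        if self.frame_count % self.step == 0:
            self.tracks, self.visibility = self.predict_fn(self.window)
            return self.tracks, self.visibility
        # Not at a step boundary yet: nothing new to report.
        return None, None
```

With `step=2`, calls on frames 1 and 3 return `(None, None)`, while the call on frame 2 runs the prediction over the buffered window.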
Returns a tuple containing the predicted tracks and visibility tensors. If the current frame count is not a multiple of the step size, both values will be None.
Finalizes processing of any remaining frames and returns the final predicted tracks and visibility.
Any frames still buffered in the window that have not yet been processed are processed during this call. The results are logged and optionally saved.
Returns the final predicted tracks and visibility.
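The flush-on-finalize behavior can be sketched as follows. This is an illustrative sketch, not the wrapper's actual code: `finalize` here is a free function with hypothetical parameters, and `predict_fn` again stands in for the real model call.

```python
# Hedged sketch of the finalize step: if the frame count never reached
# a step boundary, leftover buffered frames get one last prediction.
# predict_fn is a hypothetical stand-in for the real CoTracker call.

def finalize(window, frame_count, step, predict_fn, tracks, visibility):
    """Flush frames that never hit a step boundary, then return the
    final predicted tracks and visibility."""
    if window and frame_count % step != 0:
        tracks, visibility = predict_fn(window)
    return tracks, visibility
```

If the frame count is already a multiple of the step size, the last in-step prediction is simply returned unchanged.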
This code is licensed under the CC-BY-NC 4.0 License.