Introducing SAM 2: The next generation of Meta Segment Anything Model for videos and images


Image: https://learnopencv.com/sam-2/

“Today, we’re announcing the Meta Segment Anything Model 2 (SAM 2), the next generation of the Meta Segment Anything Model, now supporting object segmentation in videos and images. We’re releasing SAM 2 under an Apache 2.0 license, so anyone can use it to build their own experiences. We’re also sharing SA-V, the dataset we used to build SAM 2 under a CC BY 4.0 license and releasing a web-based demo experience where everyone can try a version of our model in action.

Object segmentation—identifying the pixels in an image that correspond to an object of interest—is a fundamental task in the field of computer vision. The Meta Segment Anything Model (SAM) released last year introduced a foundation model for this task on images.

Our latest model, SAM 2, is the first unified model for real-time, promptable object segmentation in images and videos, enabling a step-change in the video segmentation experience and seamless use across image and video applications. SAM 2 exceeds previous capabilities in image segmentation accuracy and achieves better video segmentation performance than existing work, while requiring three times less interaction time. SAM 2 can also segment any object in any video or image (commonly described as zero-shot generalization), which means that it can be applied to previously unseen visual content without custom adaptation…”
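For readers who want to see what "promptable" segmentation looks like in practice, here is a minimal sketch of single-image, point-prompted inference. It assumes the Python interface published in Meta's segment-anything-2 repository (build_sam2 and SAM2ImagePredictor); the checkpoint path, config name, and image file below are placeholders, not values from the announcement.

```python
# Sketch: point-prompted segmentation of one image with SAM 2.
# Assumes the API from https://github.com/facebookresearch/segment-anything-2;
# checkpoint/config paths are placeholders.
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "checkpoints/sam2_hiera_large.pt"  # placeholder path
model_cfg = "sam2_hiera_l.yaml"                 # placeholder config name

predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# Load an image and compute its embedding once.
image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# Prompt with a single positive click (label 1) on the object of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return several candidate masks with scores
)
best_mask = masks[scores.argmax()]  # keep the highest-scoring candidate
```

Because the model is promptable, the same predictor accepts further clicks or boxes to refine the mask, with no retraining on the target object.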

Source: https://ai.meta.com/blog/segment-anything-2/

Paper: https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/
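The step-change the announcement describes is on the video side: you prompt an object on one frame, and SAM 2's streaming memory tracks it through the rest of the clip. The sketch below shows that flow under the same assumption about the segment-anything-2 package (build_sam2_video_predictor, init_state, add_new_points, propagate_in_video); all paths are placeholders.

```python
# Sketch: prompt one video frame, then propagate the mask through the clip.
# Assumes the video API from the segment-anything-2 repository; paths are
# placeholders.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "checkpoints/sam2_hiera_large.pt"  # placeholder path
model_cfg = "sam2_hiera_l.yaml"                 # placeholder config name

predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode():
    # init_state reads the video as a directory of per-frame JPEGs.
    state = predictor.init_state(video_path="video_frames/")

    # Prompt the object once, on frame 0, with a single positive click.
    predictor.add_new_points(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[500, 375]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate the prompt: the model segments the object in every frame.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # binarize per-object masks
```

This one-click-then-propagate loop is where the reported reduction in interaction time comes from: corrections are extra clicks on individual frames rather than frame-by-frame annotation.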

July 30, 2024