Meta’s AI-driven SAM model can recognize and separate any object in images or videos
Recently, OpenAI, Microsoft, and Google have been making waves in the AI field, and Meta, as a fellow technology giant, has not been idle. Today, Meta AI open-sourced the Segment Anything Model (SAM), trained on 11 million images and 1.1 billion segmentation masks. The model can recognize and isolate virtually any object in images or videos.
Segment Anything, as the name suggests, aims to segment any subject. Treating SAM merely as a cutout tool would understate its capabilities: judging from the demonstrations, the model is powerful and can be combined with other applications, such as real-time object recognition on robots followed by analysis of the identified content.
The new model looks formidable, although Meta has not fully explained why it is pursuing this line of research. In the accompanying paper, Meta AI says the goal is to advance foundation models for computer vision.
Our guess is that Meta AI is also exploring other aspects of image understanding, with SAM responsible for separating objects from images or video frames while other models identify what those objects are, which fits Meta AI's push for foundation models in computer vision.
Given the current pace of AI development, this kind of object-separation technology will likely see wide use in the future.
Meta AI has launched a demo website for users to test: https://segment-anything.com/
Meta AI currently offers three pretrained checkpoints of roughly 2.4 GB (ViT-H), 1.2 GB (ViT-L), and 358 MB (ViT-B), which developers can choose among when deploying locally.
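As a rough sketch of what local deployment could look like, here is a minimal Python example assuming the open-sourced `segment-anything` package and a locally downloaded checkpoint. The checkpoint filenames and the `sam_model_registry`/`SamPredictor` calls follow the official repository's README at the time of writing, but verify them against the repo before relying on this:

```python
# A minimal sketch of local SAM inference, assuming the open-sourced
# `segment-anything` Python package and a downloaded checkpoint.
# Checkpoint filenames follow the official release; verify before use.
CHECKPOINTS = {
    "vit_h": "sam_vit_h_4b8939.pth",  # ~2.4 GB, highest quality
    "vit_l": "sam_vit_l_0b3195.pth",  # ~1.2 GB, middle ground
    "vit_b": "sam_vit_b_01ec64.pth",  # ~358 MB, smallest and fastest
}

def segment_at_point(image, x, y, variant="vit_b"):
    """Return (masks, scores) for the object at pixel (x, y).

    `image` is an H x W x 3 uint8 RGB numpy array.
    """
    # Imports are deferred so the sketch loads even where the
    # heavyweight dependencies are not installed.
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry[variant](checkpoint=CHECKPOINTS[variant])
    predictor = SamPredictor(sam)
    predictor.set_image(image)  # computes the image embedding once
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),  # one click on the target object
        point_labels=np.array([1]),       # 1 marks a foreground click
    )
    return masks, scores
```

Because `set_image` computes the image embedding once up front, repeated point or box prompts against the same image are cheap afterward, which is part of what makes the interactive demo feel responsive.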