Apple makes it easier for developers to create 3D models on iPhone and iPad

At WWDC 2023, Apple brought its existing ‘Object Capture’ API to iOS and iPadOS, letting apps build a three-dimensional model of an object in a matter of minutes by photographing it from different angles with the LiDAR sensors built into iPhones and iPads.
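
For developers, the heart of the new flow is RealityKit's ObjectCaptureSession and its companion SwiftUI view, introduced alongside this announcement. Below is a minimal sketch of starting a capture; the view name and images directory are illustrative, and directory creation and error handling are omitted:

```swift
import RealityKit
import SwiftUI

struct CaptureView: View {
    // Drives the guided, LiDAR-assisted capture (iOS 17+).
    @State private var session = ObjectCaptureSession()

    var body: some View {
        // RealityKit's built-in capture UI: live camera feed plus guidance overlays.
        ObjectCaptureView(session: session)
            .onAppear {
                // Photos are written here as the user circles the object.
                // (Creating the directory beforehand is omitted for brevity.)
                let imagesDirectory = URL.documentsDirectory.appending(path: "Images/")
                session.start(imagesDirectory: imagesDirectory)
            }
    }
}
```

From there, calls to startDetecting() and startCapturing() move the session through object detection and the actual photo passes.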

Numerous companies have introduced similar technologies before, whether using machine learning to convert 2D images into 3D models or constructing 3D model data from multiple photographs.

Apple’s earlier ‘Object Capture’ modeling tool, which reconstructs 3D models from multiple flat images shot from different angles, ran only on macOS. The new APIs for iPhone and iPad let users capture and build models directly on a handheld device.
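
Reconstruction itself goes through the same PhotogrammetrySession API the macOS tool was built on, now runnable on device. A sketch under that assumption, with illustrative paths (on iOS, the reduced and medium detail levels are the ones intended for handheld hardware):

```swift
import RealityKit

/// Builds a USDZ model from a folder of captured images.
func reconstructModel(imagesDirectory: URL, outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesDirectory)

    // Request a single .usdz file at medium detail.
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    // The session reports progress and results as an async sequence.
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Model written to \(outputURL.path)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break // Progress updates, per-sample messages, etc.
        }
    }
}
```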

However, because the feature relies on LiDAR for precise depth information, it is currently limited to iPhone Pro models and iPad Pro models equipped with the sensor.
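
In practice, apps should gate the feature at runtime rather than hard-code a device list; ObjectCaptureSession exposes a support flag for this:

```swift
import RealityKit

// False on devices without the required hardware (e.g. no LiDAR),
// so the capture UI can be hidden up front.
if ObjectCaptureSession.isSupported {
    // Present the guided capture flow.
} else {
    // Fall back, e.g. let users import photos taken on another device.
}
```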

The finished 3D models are exported in the universal USDZ file format and can be edited in any professional 3D software that supports it.
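
Because the output is plain USDZ, it also drops straight back into a RealityKit scene. A short sketch, with an illustrative file name:

```swift
import RealityKit

// Load the generated model as an entity for placement in an AR scene.
let modelURL = URL.documentsDirectory.appending(path: "model.usdz")
let model = try ModelEntity.loadModel(contentsOf: modelURL)
```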

Offering these tools is clearly a bid to help developers create 3D models more quickly, or to build services that let users generate their own 3D content, in turn enriching the app experience on the Vision Pro headset.

Of course, the existing ARKit and RealityKit frameworks, the newly announced Reality Composer Pro, and the deep collaboration with Unity are likewise aimed at laying broader groundwork for Vision Pro applications.