The long-rumored Apple Glass could leverage data from other units to accurately map an environment, with head-mounted displays capturing data from a scene and sharing it for a better augmented reality experience.
Augmented reality depends on an accurate measurement of the geometry of an environment, with imaging devices performing the task as their cameras move through an area. For a single headset this can be a relatively slow process, and while performing the same task with multiple devices can be beneficial, it also creates its own issues.
For starters, if multiple AR devices monitor an environment independently, the maps they each create may not agree. Two people viewing an area based on independently collected data may see virtual objects positioned slightly differently due to variations between the maps.
In a patent granted to Apple on Tuesday by the US Patent and Trademark Office, titled “Multi-User Simultaneous Locating and Mapping (SLAM),” Apple suggests that multiple devices could share map data to create one map for all users.
Accuracy can be improved in several ways. For example, multiple devices may perform the initial map generation of an area faster than a single device, or one device may capture areas that a second cannot see or missed entirely. Having duplicate data points for the same mapped area is also useful, as they can be used to correct errors in a generated map.
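The error-correction idea above can be sketched in a few lines: if two devices each produce a noisy estimate of the same map point, averaging the duplicate observations yields a more accurate merged point. This is only an illustrative simplification, not Apple's method; the names and values are assumptions.

```python
import statistics

# Two devices independently observe the same map point (x, y, z) in metres.
# Each estimate carries its own measurement error.
obs_device_a = (1.0, 2.0, 0.5)   # device A's estimate
obs_device_b = (1.5, 1.5, 1.0)   # device B's estimate of the same point

# Merge the duplicate observations by averaging each coordinate.
merged = tuple(statistics.mean(pair) for pair in zip(obs_device_a, obs_device_b))
print(merged)  # → (1.25, 1.75, 0.75)
```

Real SLAM systems weight such merges by each observation's estimated uncertainty rather than averaging naively, but the principle is the same: redundant data points let the system cancel out individual devices' errors.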
By creating a more accurate map, this can allow for better placement of virtual markers in a world, so that digital objects and scenes can be seen in the same real location from multiple headsets or devices.
In Apple’s description, imaging sensors on the devices create keyframes from images of the environment, expressed in device-dependent coordinate systems. Keyframes may include image data, additional sensor data, and a representation of the device’s pose, among other elements. Each device then generates a map of the relative locations.
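A keyframe as the patent describes it can be pictured as a simple record bundling the image, the device's pose, and any extra sensor data. The class and field names below are assumptions for illustration only, not Apple's API.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One keyframe in a device-local coordinate system (illustrative sketch)."""
    image: bytes                                # captured image data
    pose: tuple                                 # device pose (x, y, z, heading) in the device's own frame
    device_id: str                              # which headset produced this keyframe
    extras: dict = field(default_factory=dict)  # additional sensor data, e.g. depth or IMU readings

# A headset records a keyframe at its local origin, facing forward.
kf = Keyframe(image=b"", pose=(0.0, 0.0, 0.0, 0.0), device_id="headset-A")
```

Because each device's poses are expressed in its own coordinate system, keyframes from different devices cannot be combined directly; the frames must first be aligned, which is what the exchange step below enables.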
The keyframes are then exchanged between devices, paired with the receiving device’s own keyframes, and used in calculations to generate more map points. The coordinates of anchor points are also shared between devices for positioning objects.
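Sharing anchor coordinates across devices requires re-expressing a point from one device's coordinate frame in another's. A minimal 2D sketch of that step, assuming the devices have already estimated the rigid transform (rotation plus translation) between their frames, might look like this; the function name and numbers are hypothetical.

```python
import math

def to_other_frame(point, rotation_rad, translation):
    """Re-express a 2D point in another frame via a rigid transform
    (rotate by rotation_rad, then shift by translation)."""
    x, y = point
    c, s = math.cos(rotation_rad), math.sin(rotation_rad)
    return (c * x - s * y + translation[0],
            s * x + c * y + translation[1])

# An anchor at (1, 0) in device A's map; device B's frame is rotated 90°
# and offset by (2, 3) relative to A's.
anchor_in_b = to_other_frame((1.0, 0.0), math.pi / 2, (2.0, 3.0))
# anchor_in_b ≈ (2.0, 4.0)
```

A production system would estimate the full 3D transform (and refine it continuously) from matched keyframes, but the effect is the same: once frames are aligned, both headsets can place a virtual object at the same physical spot.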
The patent adds that the system could operate in a decentralized fashion, with devices communicating directly rather than relying on an intermediary server to share data. In cases where location data is shared between users of an application, such as games, sharing typically relies on a central server to distribute the data among players, but Apple proposes a more ad hoc approach with direct device-to-device communication for an AR mapping session.
Different viewpoints could allow one device to share data about an object that another cannot see.
The patent lists its inventors as Abdelhamid Dine, Kuen-Han Lin and Oleg Naroditsky. It was filed on May 2, 2019.
Apple files numerous patent applications each week, but while patent filings are an indication of areas of interest to Apple’s research and development efforts, they do not guarantee that the concepts will appear in a future product or service.
While the idea of SLAM can be applied on mobile devices, such as iPhones running ARKit apps, it is more likely that users will experience such a concept in Apple Glass, Apple’s smart glasses with augmented reality capabilities.
Patent filings from 2016 indicate that Apple wanted to take advantage of AR positioning in apps, with an iPhone-based version used for “augmented reality maps” that would overlay a digital map and data over a live camera view. A related 2017 filing suggested that an AR device could be used to identify nearby objects for a user, which could again provide more information on one screen and retrieve more data from another system.