Abstract: A system comprising: a user device comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: place a virtual object at a physical location in a 3D scene displayed by the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type:
Grant
Filed:
October 5, 2021
Date of Patent:
February 21, 2023
Assignee:
Meta View, Inc.
Inventors:
Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
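The core rendering step the abstract above describes can be sketched in a few lines: given the device pose, the virtual object's world-space anchor is re-expressed in the device frame so the displays can project it at a fixed physical location. This is an illustrative sketch with invented names, not the patent's implementation.

```python
# Hypothetical sketch: express a virtual object's world-space anchor in the
# device's local frame, given the device pose (rotation R, translation t).
import numpy as np

def world_to_device(anchor_world, device_rotation, device_position):
    """A world point p maps to R^T (p - t) in device coordinates."""
    return device_rotation.T @ (np.asarray(anchor_world) - np.asarray(device_position))

R = np.eye(3)                       # device axes aligned with world axes
t = np.array([0.0, 0.0, -2.0])      # device two metres behind the origin
anchor = np.array([0.0, 0.0, 0.0])  # virtual object pinned at the world origin
local = world_to_device(anchor, R, t)  # anchor lies 2 m ahead of the device
```

As the device moves, only `R` and `t` change; the anchor stays pinned to the same physical location, which is what keeps the projected content in a predetermined place relative to the environment.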
Abstract: An augmented reality collaboration system comprises a first system configured to display virtual content, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a user device comprising: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for the user device: determine a pose of the user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate an image of virtual content based on the pose of the user device with respect to the structure, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the structure.
Type:
Application
Filed:
September 16, 2020
Publication date:
March 17, 2022
Applicant:
Meta View, Inc.
Inventors:
Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
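Pose determination against a structure of emitters in a known pattern can be illustrated with least-squares trilateration: ranges to three or more emitters at known positions pin down the device position. All names and the range-based formulation are illustrative assumptions, not the patent's method.

```python
# Hedged sketch: recover a device position from sensed ranges to emitters
# arranged in a predetermined (known) pattern, via linearised trilateration.
import numpy as np

def trilaterate(emitters: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate a 3D position from emitter positions (N x 3) and ranges (N,)."""
    p0, r0 = emitters[0], ranges[0]
    # Subtracting the first emitter's equation linearises |x - p_i|^2 = r_i^2.
    A = 2.0 * (emitters[1:] - p0)
    b = (np.sum(emitters[1:] ** 2, axis=1) - ranges[1:] ** 2) - (p0 @ p0 - r0 ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Four emitters in a tetrahedral pattern; ranges simulated from a true pose.
emitters = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
device = np.array([0.3, 0.2, 0.5])
ranges = np.linalg.norm(emitters - device, axis=1)
estimate = trilaterate(emitters, ranges)
```

With four non-coplanar emitters the linear system is fully determined; extra emitters simply over-determine it and the least-squares solve averages out sensor noise.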
Abstract: A system comprising: a user device comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: place a virtual object at a physical location in a 3D scene displayed by the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type:
Application
Filed:
October 5, 2021
Publication date:
March 17, 2022
Applicant:
Meta View, Inc.
Inventors:
Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
Abstract: A sensing and display apparatus, comprising: a first phenomenon interface configured to operatively interface with a first augmediated-reality space, and a second phenomenon interface configured to operatively interface with a second augmediated-reality space, is implemented as an extramissive spatial imaging digital eye glass.
Abstract: In general, one aspect disclosed features a system comprising: a first user device configured to display virtual content, the first user device comprising one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: generate a first image depicting virtual content in a virtual location corresponding to a physical location in a physical environment of the first user device, display the first image in the one or more displays of the first user device, enable a user of the first user device to create media and associate that media with the virtual content in the first image in the form of an annotation, and store the annotation and the virtual content, and make them available for access by a plurality of additional user devices.
Type:
Application
Filed:
November 9, 2021
Publication date:
March 17, 2022
Applicant:
Meta View, Inc.
Inventors:
Alexander Tyurin, Gerald V. Wright, Jr.
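The annotation flow in the entry above (create media, attach it to virtual content, store it, share it with other devices) can be sketched with a minimal in-memory store. The class and field names are invented for illustration, not taken from the patent.

```python
# Hypothetical sketch of annotating virtual content and sharing annotations.
from dataclasses import dataclass, field

@dataclass
class VirtualContent:
    content_id: str
    annotations: list = field(default_factory=list)

class AnnotationStore:
    def __init__(self):
        self._contents = {}

    def annotate(self, content_id: str, media: bytes, author: str) -> None:
        """Attach user-created media to a piece of virtual content."""
        content = self._contents.setdefault(content_id, VirtualContent(content_id))
        content.annotations.append({"media": media, "author": author})

    def fetch(self, content_id: str) -> list:
        """Any additional user device can read the stored annotations."""
        content = self._contents.get(content_id)
        return content.annotations if content else []

store = AnnotationStore()
store.annotate("table-model", b"voice-note", author="device-1")
shared = store.fetch("table-model")
```

In a real multi-device system the store would live behind a shared service rather than in process memory; the sketch only shows the attach-then-fetch contract.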
Abstract: A system comprising: a user device comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: place a virtual object at a physical location in a 3D scene displayed by the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
Type:
Grant
Filed:
September 16, 2020
Date of Patent:
November 16, 2021
Assignee:
Meta View, Inc.
Inventors:
Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
Abstract: Aspects of the disclosed apparatuses, methods, and systems provide arrangement of the visual components of an augmented or virtual display system with optimized telecentricity, focal depth, and a wide field of view (FOV). The visual components may include a light source and a corresponding optical element.
Abstract: The methods, systems, techniques, and components described herein may facilitate user interactions with virtual objects in a three-dimensional virtual environment using user input into a graphical interface of a control device that is coupled to a display that may display the three-dimensional virtual environment. The control device may be configured to display a 3D representation of a virtual object having a non-virtual reality representation of the virtual object. The graphical interface of the control device may receive selection information that corresponds to a user selection of the 3D representation of the virtual object. Transformation parameters that provide a basis for rendering a three-dimensional representation of a virtual object in the three-dimensional virtual environment may be obtained to define a transformation of the 3D representation of the virtual object.
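The "transformation parameters" in the abstract above can be illustrated with a standard homogeneous transform: translation, a rotation, and a uniform scale packed into a 4x4 matrix and applied to the virtual object's vertices. This parameterisation is an illustrative assumption, not the patent's definition.

```python
# Hedged sketch: build a 4x4 transform from illustrative parameters
# (translation, rotation about Z, uniform scale) and apply it to vertices.
import numpy as np

def make_transform(translation, angle_z, scale):
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = scale * np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = translation
    return T

def apply(T, vertices):
    """Apply a 4x4 transform to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ T.T)[:, :3]

verts = np.array([[1.0, 0.0, 0.0]])
T = make_transform([0, 0, 5], np.pi / 2, 2.0)
moved = apply(T, verts)  # rotate 90 degrees, scale x2, lift to z = 5
```

A selection on the control device's graphical interface would pick the object; the obtained parameters then drive a matrix like `T` when the 3D representation is rendered in the virtual environment.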
Abstract: Aspects of the disclosed apparatuses, methods, and systems provide manipulation of a three-dimensional (3D) virtual world space based on input translated from the real world. Elements in the virtual world may have an associated charge and field. An element in the virtual world becomes interactive with an element translated from the real world when the translated real-world element interacts with a field associated with the virtual element according to the charges. Forces may be applied to the virtual element using a real-world physics model to determine a response by the virtual element to the applied force.
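The charge-and-field mechanism above can be sketched with one concrete physics model: interaction begins only when the translated element enters the virtual element's field, and the force then follows an inverse-square law. Both the function name and the choice of Coulomb-style force are illustrative assumptions.

```python
# Hypothetical sketch: a real-world element interacts with a virtual
# element only inside its field; the force follows an inverse-square law.
def field_force(q_virtual, q_real, distance, field_radius, k=1.0):
    """Force on the real element; zero outside the virtual element's field."""
    if distance >= field_radius or distance <= 0:
        return 0.0  # not interacting: outside the field (or degenerate)
    return k * q_virtual * q_real / distance ** 2

inside = field_force(2.0, 3.0, distance=0.5, field_radius=1.0)   # interacting
outside = field_force(2.0, 3.0, distance=2.0, field_radius=1.0)  # no interaction
```

Any real-world physics model (springs, drag, gravity-like wells) could replace the inverse-square term; the gating on `field_radius` is what makes the element "become interactive" only when the field is entered.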
Abstract: A system configured for tracking a human hand in an augmented reality environment may comprise a distancing device, one or more physical processors, and/or other components. The distancing device may be configured to generate output signals conveying position information. The position information may include positions of surfaces of real-world objects, including surfaces of a human hand. Feature positions of one or more hand features of the hand may be determined through iterative operations of determining estimated feature positions of individual hand features from estimated feature position of other ones of the hand features.
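The iterative estimation described above, where each hand feature's position is refined from the estimated positions of other features, can be loosely illustrated with distance-constraint relaxation over a chain of joints. The relaxation scheme and all names are assumptions for illustration, not the patent's algorithm.

```python
# Hedged sketch: iteratively refine feature positions so neighbouring hand
# features respect known inter-feature distances (constraint relaxation).
import numpy as np

def refine(positions, pairs, lengths, iterations=50):
    pos = np.array(positions, dtype=float)
    for _ in range(iterations):
        for (i, j), length in zip(pairs, lengths):
            delta = pos[j] - pos[i]
            dist = np.linalg.norm(delta)
            if dist == 0:
                continue
            # Move both ends half-way toward the target separation.
            correction = (dist - length) * delta / dist / 2.0
            pos[i] += correction
            pos[j] -= correction
    return pos

# Wrist -> knuckle -> fingertip with known 1.0-unit segments, from noisy
# initial estimates derived from surface position information.
start = [[0.0, 0.0], [1.3, 0.0], [2.4, 0.0]]
refined = refine(start, pairs=[(0, 1), (1, 2)], lengths=[1.0, 1.0])
```

Each sweep satisfies one pairwise constraint exactly while slightly disturbing its neighbours; repeating the sweeps converges the whole chain, which mirrors estimating each feature from the estimates of the others.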
Abstract: Described herein is a head-mounted device including an audio system configured to provide a surround sound audio effect. The device may include one or more of a forehead assembly, a rear-head assembly, a set of speakers, a set of straps connecting the forehead assembly to the rear-head assembly, and/or other components. The forehead assembly may include one or more speakers. Individual ones of the straps may include one or more speakers.
Abstract: A system and method to simulate physical objects occluding virtual objects within an interactive space may obtain output signals conveying three-dimensional positions of surfaces of a user object. The three-dimensional positions may be conveyed in a point cloud. Point cloud images may be obtained from the output signals. An individual point cloud image may depict a two-dimensional representation of the point cloud. A current point cloud image may be aggregated with one or more previous aggregate point cloud images to generate a current aggregate point cloud image. A mesh may be generated based on the current aggregate point cloud image. The mesh may include a set of vertices. The mesh may be generated by assigning individual vertices to individual points in the current aggregate point cloud image.
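The aggregation step above, blending the current point cloud image with previous aggregates and then assigning mesh vertices to aggregated points, can be sketched with a running average over 2D depth images. The blending weight and the one-vertex-per-point assignment are illustrative simplifications.

```python
# Hypothetical sketch: aggregate point-cloud images over time, then assign
# mesh vertices to the valid points of the current aggregate.
import numpy as np

def aggregate(current, previous_aggregate, alpha=0.5):
    """Blend the current 2D point-cloud image into the running aggregate."""
    if previous_aggregate is None:
        return current.copy()
    return alpha * current + (1.0 - alpha) * previous_aggregate

def mesh_vertices(aggregate_image):
    """Assign one mesh vertex per valid aggregated point (depth > 0)."""
    ys, xs = np.nonzero(aggregate_image > 0)
    depths = aggregate_image[ys, xs]
    return np.column_stack([xs, ys, depths]).astype(float)

# Two noisy 2x2 depth frames of a user object; zeros are empty pixels.
frame_a = np.array([[0.0, 1.0], [0.0, 2.0]])
frame_b = np.array([[0.0, 3.0], [0.0, 2.0]])
agg = aggregate(frame_b, aggregate(frame_a, None))
verts = mesh_vertices(agg)
```

Temporal aggregation smooths per-frame depth noise before meshing, which is why the mesh is built from the aggregate rather than from any single point cloud image.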
Abstract: The methods, systems, techniques, and components described herein allow interactions with virtual objects in a virtual environment, such as a Virtual Reality (VR) environment or Augmented Reality (AR) environment, to be modeled accurately. More particularly, the methods, systems, techniques, and components described herein allow interactive virtual frames to be created for virtual objects in a virtual environment. The virtual frames may be built using line primitives that form frame boundaries based on shape boundaries of virtual objects enclosed by the virtual frame. An area of interactivity defined by the virtual frame may allow users to interact with the virtual object in the virtual environment.
Type:
Grant
Filed:
January 22, 2019
Date of Patent:
June 30, 2020
Assignee:
Meta View, Inc.
Inventors:
Zachary R. Kinstner, Rebecca B. Frank, Yishai Gribetz
Abstract: Aspects of the disclosed apparatuses, methods, and systems provide tethering of 3-D virtual elements in digital content, extracting the tethered 3-D virtual elements, and manipulating the extracted 3-D virtual elements in a virtual 3-D space.
Type:
Grant
Filed:
February 15, 2017
Date of Patent:
May 26, 2020
Assignee:
Meta View, Inc.
Inventors:
Meron Gribetz, Soren Harner, Sean Scott, Rebecca B. Frank, Duncan McRoberts
Abstract: Aspects of the disclosed apparatuses, methods, and systems provide three-dimensional gradient and dynamic light fields for display in 3D technologies, in particular 3D augmented reality (AR) devices, by coupling visual accommodation and visual convergence to the same plane at any depth of an object of interest in real time.
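The accommodation-convergence coupling above can be illustrated numerically: the fixation depth implied by the two eyes' vergence angle is computed from the interpupillary distance, and the focal (accommodation) plane is driven to that same depth. The geometry and all values are illustrative, not the patent's optics.

```python
# Hedged sketch: derive the convergence distance from the vergence angle
# and couple the accommodation plane to that same depth.
import math

def vergence_distance(ipd_m: float, vergence_angle_rad: float) -> float:
    """Distance at which the two gaze rays cross, from IPD and vergence angle."""
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def focal_plane_for(fixation_depth_m: float) -> float:
    # Couple accommodation to convergence: focus the display at the
    # fixated depth so both cues agree on the same plane.
    return fixation_depth_m

depth = vergence_distance(0.064, math.radians(3.0))  # roughly 1.2 m fixation
plane = focal_plane_for(depth)
```

Keeping both cues on one plane avoids the vergence-accommodation conflict of fixed-focus stereoscopic displays, which is the motivation the abstract gives for gradient and dynamic light fields.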
Abstract: Systems and methods to generate an environmental record for an interactive space are presented herein. An environmental record may represent a set of local environments and may define archival location compositions for the local environments. An archival location composition for a local environment may define aspects of the local environment associated with one or more objects and/or surfaces previously determined to be present in the local environment. A headset worn by a user in the local environment may generate a current location composition based on output signals from sensors included in the headset. The archival and current location compositions may be compared to determine updates for the environmental record.
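The comparison step above, diffing an archival location composition against the currently sensed one to find updates for the environmental record, can be sketched as a dictionary diff keyed by object or surface. The composition structure is an invented simplification.

```python
# Hypothetical sketch: compare an archival location composition against a
# freshly sensed one and report the updates for the environmental record.
def composition_updates(archival: dict, current: dict) -> dict:
    """Return objects/surfaces that are new, missing, or changed."""
    return {
        "added": sorted(set(current) - set(archival)),
        "removed": sorted(set(archival) - set(current)),
        "changed": sorted(
            k for k in set(archival) & set(current) if archival[k] != current[k]
        ),
    }

archival = {"table": (0, 0), "lamp": (1, 2)}   # previously recorded objects
current = {"table": (0, 1), "chair": (3, 3)}   # sensed by the headset now
updates = composition_updates(archival, current)
```

Only the `updates` structure would need to be written back, so the record for a set of local environments can stay current without re-archiving unchanged objects.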
Abstract: Systems and methods to provide an interactive environment are presented herein. The system may include one or more of a device configured to be installed on a user's head, an image-forming component, one or more physical computer processors, and/or other components. The image-forming component may be configured to generate light rays to form images. The light rays may be reflected off an optical element towards a user's eye. The light rays may be transmitted back through a portion of the image-forming component before reaching the eye.