Abstract: The methods, systems, techniques, and components described herein allow interactions with virtual elements in a virtual environment, such as a Virtual Reality (VR) environment or Augmented Reality (AR) environment, to be modeled accurately. More particularly, the methods, systems, techniques, and components described herein allow a first virtual element to move within the virtual environment based on an anchor relationship between the first virtual element and a second virtual element. The anchor relationship may define an equilibrium position for the first virtual element. The equilibrium position may define a return position for the first virtual element with respect to the second virtual element. Responsive to the first virtual element being displaced from the equilibrium position, the first virtual element may move towards the equilibrium position.
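As a rough illustration of the anchor behavior described in the abstract above (a sketch under assumed spring-damper dynamics, not code from the patent), a displaced element can be stepped back toward the equilibrium position defined by its anchor relationship on each frame:

```python
# Illustrative sketch (not from the patent): a first virtual element anchored
# to a second one returns to its equilibrium position via a critically damped
# spring step. Stiffness/damping values are assumptions for the example.

def step_toward_equilibrium(pos, vel, equilibrium,
                            stiffness=10.0, damping=6.3, dt=0.016):
    """Advance one frame; the element accelerates toward the equilibrium
    position defined by its anchor relationship."""
    ax = stiffness * (equilibrium[0] - pos[0]) - damping * vel[0]
    ay = stiffness * (equilibrium[1] - pos[1]) - damping * vel[1]
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)   # semi-implicit Euler step
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# Displace the element, then let it settle back to the anchor point.
pos, vel = (1.0, 0.0), (0.0, 0.0)
for _ in range(600):
    pos, vel = step_toward_equilibrium(pos, vel, (0.0, 0.0))
```

Here the anchor relationship is reduced to a single spring-damper; the patent's actual motion model may differ.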
Abstract: Various implementations provide for a three-dimensional trackpad in which sensors and a three-dimensional physical region may be used to interact with a three-dimensional virtual environment. The methods, systems, techniques, and components described herein may facilitate interactions with virtual objects in a three-dimensional virtual environment in response to sensor input into a control device having one or more sensors implemented thereon. The control device may be coupled to a display that may be configured to display the three-dimensional virtual environment. In various implementations, the sensor(s) capture physical movement of a user interaction element (a hand, a stylus, a physical object, etc.) within a specified three-dimensional physical region. The physical movement may be translated into a virtual interaction with the three-dimensional virtual environment. A virtual action in the three-dimensional virtual environment may be identified and displayed.
Abstract: A system configured for tracking a human hand in an augmented reality environment may comprise a distancing device, one or more physical processors, and/or other components. The distancing device may be configured to generate output signals conveying position information. The position information may include positions of surfaces of real-world objects, including surfaces of a human hand. Feature positions of one or more hand features of the hand may be determined through iterative operations that determine estimated feature positions of individual hand features from the estimated feature positions of other hand features.
Abstract: A sensing and display apparatus, comprising: a first phenomenon interface configured to operatively interface with a first augmediated-reality space, and a second phenomenon interface configured to operatively interface with a second augmediated-reality space, is implemented as an extramissive spatial imaging digital eye glass.
Abstract: Systems and methods for simulating user interaction with virtual objects in an augmented reality environment are provided. Three-dimensional point cloud information from a three-dimensional volumetric imaging sensor may be obtained. An object position of a virtual object may be determined. Individual potential force vectors for potential forces exerted on the virtual object may be determined. An individual potential force vector may be defined by one or more of a magnitude, a direction, and/or other information. An aggregate scalar magnitude of the individual potential force vectors may be determined. An aggregate potential force vector may be determined by aggregating the magnitudes and directions of the individual potential force vectors. It may be determined whether the potential forces exerted on the virtual object are conflicting.
Type:
Grant
Filed:
February 14, 2017
Date of Patent:
March 27, 2018
Assignee:
Meta Company
Inventors:
Zachary R. Kinstner, Raymond Chun Hing Lo, Rebecca B. Frank
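The abstract above describes determining an aggregate scalar magnitude and an aggregate potential force vector, and deciding whether the potential forces conflict. A minimal sketch of one plausible reading (the 2-D representation and the conflict threshold are assumptions, not from the patent):

```python
# Illustrative sketch (not from the patent): aggregate the potential forces
# exerted on a virtual object, then flag a conflict when individually large
# forces mostly cancel out. The 0.5 conflict threshold is an assumption.
import math

def aggregate_forces(vectors, conflict_ratio=0.5):
    """Each potential force is (magnitude, direction_in_radians), in 2-D."""
    scalar_sum = sum(m for m, _ in vectors)          # aggregate scalar magnitude
    fx = sum(m * math.cos(d) for m, d in vectors)    # aggregate potential
    fy = sum(m * math.sin(d) for m, d in vectors)    # force vector components
    aggregate = math.hypot(fx, fy)
    # Forces conflict when the vector sum is much smaller than the scalar
    # sum, i.e. the individual potential forces largely oppose one another.
    conflicting = scalar_sum > 0 and (aggregate / scalar_sum) < conflict_ratio
    return (fx, fy), conflicting

# Two equal pinch forces pushing from opposite sides conflict:
_, conflict = aggregate_forces([(1.0, 0.0), (1.0, math.pi)])
```

Comparing the magnitude of the vector sum against the scalar sum of magnitudes is one simple way to make "conflicting forces" concrete.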
Abstract: A system configured for providing views of virtual content in an augmented reality environment may comprise one or more of a first display, a second display, an optical element, one or more processors, and/or other components. The first display and second display may be separated by a separation distance. The first display and second display may be arranged such that rays of light emitted from pixels of the first display may travel through the second display, then reflect off the optical element and into a user's eyes. A three-dimensional light field perceived within a user's field-of-view may be generated. Distances of the first display and/or second display to the optical element may impact a perceived range of the three-dimensional light field. The separation distance may impact a perceived depth of the three-dimensional light field and/or a resolution of virtual content perceived with the three-dimensional light field.
Type:
Grant
Filed:
June 29, 2016
Date of Patent:
September 26, 2017
Assignee:
Meta Company
Inventors:
Meron Gribetz, Raymond Chun Hing Lo, Ashish Ahuja, Zhangyi Zhong, Steven Merlin Worden
Abstract: Aspects of the disclosed apparatuses, methods and systems provide for sharing virtual elements between users of different 3-D virtual spaces. In another general aspect, virtual elements may be sent, shared, or exchanged between different client devices, whether the communication sharing the virtual element occurs synchronously or asynchronously.
Type:
Application
Filed:
February 17, 2017
Publication date:
August 17, 2017
Applicant:
Meta Company
Inventors:
Soren Harner, Sean Olivier Nelson Scott
Abstract: Aspects of the disclosed apparatuses, methods and systems provide tethering 3-D virtual elements in digital content, extracting the tethered 3-D virtual elements, and manipulating the extracted 3-D virtual elements in a virtual 3-D space.
Type:
Application
Filed:
February 15, 2017
Publication date:
August 17, 2017
Applicant:
Meta Company
Inventors:
Meron Gribetz, Soren Harner, Sean Scott, Rebecca B. Frank, Duncan McRoberts
Abstract: Aspects of the disclosed apparatuses, methods and systems provide three dimensional gradient and dynamic light fields for display in 3D technologies, in particular 3D augmented reality (AR) devices, by coupling visual accommodation and visual convergence to the same plane at any depth of an object of interest in real time.
Abstract: Aspects of the disclosed apparatuses, methods and systems provide manipulation of a virtual world three dimensional (3D) space based on input translated from the real world. Elements in the virtual world may have an associated charge and field. An element in the virtual world becomes interactive with an element translated from the real world when the translated real world element interacts with a field associated with the virtual element according to the charges. Forces may be applied to the virtual element using a real world physics model to determine a response by the virtual element to the applied force.
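One plausible reading of the charge-and-field interaction described above, sketched with an assumed Coulomb-like force law and a fixed field radius (both are assumptions for illustration, not taken from the patent):

```python
# Illustrative sketch (not from the patent): a translated real-world element
# (e.g. a fingertip) interacts with a virtual element only once it enters the
# field around that element; a Coulomb-like force then acts per the charges.
import math

def field_force(virtual_pos, virtual_charge, real_pos, real_charge,
                field_radius=0.5, k=1.0):
    """Force on the virtual element from a translated real-world element."""
    dx = virtual_pos[0] - real_pos[0]
    dy = virtual_pos[1] - real_pos[1]
    r = math.hypot(dx, dy)
    if r == 0 or r > field_radius:
        return (0.0, 0.0)          # outside the field: no interaction
    # Like charges repel (force pushes the virtual element away from the
    # real-world element); unlike charges attract.
    f = k * virtual_charge * real_charge / (r * r)
    return (f * dx / r, f * dy / r)

# A fingertip just inside the field of a like-charged virtual element:
fx, fy = field_force((0.1, 0.0), 1.0, (0.0, 0.0), 1.0)  # fx > 0: pushed away
```

The returned force could then be fed into an ordinary physics integrator to produce the virtual element's response, as the abstract suggests.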
Abstract: Aspects of the disclosed apparatuses, methods and systems provide elimination of distortion induced by an optical system that reflects light from an image source. An inverse mapping of the distortion is created for the optical system. The display system applies the inverse mapping to an image prior to display to introduce a distortion to the displayed image that is the inverse of the distortion introduced by the optical system. As a result, the distortion in the displayed image is canceled by the distortion of the optical element providing the user with an image that is substantially distortion free.
Type:
Application
Filed:
January 6, 2017
Publication date:
July 6, 2017
Applicant:
Meta Company
Inventors:
Raymond Chun Hing Lo, Joshua Hernandez, Valmiki Rampersad, Agis Mesolongitis, Ali Shahdi
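The inverse-mapping idea in the abstract above can be sketched with an assumed one-parameter radial distortion model (a real optical system's distortion would be measured or modeled, not assumed): pre-warping each pixel with the numerical inverse of the distortion lets the optic's own distortion cancel it.

```python
# Illustrative sketch (not from the patent): apply the inverse of a simple
# radial distortion model to coordinates before display, so the optical
# system's distortion yields a substantially distortion-free image.

def radial_distort(x, y, k1=0.2):
    """Assumed one-parameter radial distortion of the optical system."""
    s = 1.0 + k1 * (x * x + y * y)
    return x * s, y * s

def inverse_map(x, y, k1=0.2, iterations=10):
    """Invert the distortion by fixed-point iteration: find the source
    coordinate that the optic will distort onto (x, y)."""
    u, v = x, y
    for _ in range(iterations):
        s = 1.0 + k1 * (u * u + v * v)
        u, v = x / s, y / s
    return u, v

# Pre-distorting, then passing through the optic, recovers the original point.
u, v = inverse_map(0.5, 0.3)
x2, y2 = radial_distort(u, v)   # x2, y2 land back at (0.5, 0.3)
```

In practice the inverse mapping would be precomputed once for the optical system and applied to every frame before display.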
Abstract: Methods, systems, components, and techniques provide a retinal light scanning engine to write light corresponding to an image on the retina of a viewer. As described herein, a light source of the retinal light scanning engine forms a single point of light on the retina at any single, discrete moment in time. In one example, to form a complete image, the retinal light scanning engine uses a pattern to scan or write on the retina to provide light to millions of such points over one time segment corresponding to the image. The retinal light scanning engine changes the intensity and color of the points drawn by the pattern by simultaneously controlling the power of different light sources and movement of an optical scanner to display the desired content on the retina according to the pattern. In addition, the pattern may be optimized for writing an image on the retina. Moreover, multiple patterns may be used to additionally increase or improve the field-of-view of the display.
Abstract: In one general aspect, an optical system for a head mounted display system is provided. The optical system includes an image source and an optical component. The optical component includes a reflective surface configured to receive an image from the image source, the optical component having a specified curvature that reflects the image and presents it to a user of the head mounted display.
Inventors:
Meron Gribetz, Raymond Chun Hing Lo, Stefano Baldassi, Martin Hasek, Ashish Ahuja, Zhangyi Zhong, Eric Bokides, Esther Lekeu, Steven Merlin Worden