Abstract: A controller outputs an image of a virtual space to a head-mounted display in correspondence with the posture of a first user wearing the display, outputs an image of the virtual space to a touch panel display used by a second user, performs a first action in the virtual space in correspondence with a touch operation performed by the second user on the touch panel display, outputs an image of the virtual space reflecting the first action to the head-mounted display, and performs a second action in the virtual space reflecting the first action in correspondence with an operation performed by the first user on an operation unit.
Abstract: The systems described in this disclosure can be used in construction settings to facilitate the tasks being performed. The locations of projectors and augmented reality headsets can be calculated and used to determine what images to display to a worker, based on a map of work to be performed, such as a construction plan. Workers can use spatially-aware tools to make different locations plumb, level, or equidistant from other locations. Power to tools can be disabled if they are near protected objects.
Type:
Grant
Filed:
November 13, 2019
Date of Patent:
November 9, 2021
Assignee:
Milwaukee Electric Tool Corporation
Inventors:
Samuel A. Gould, Kellen Carey, Michael John Caelwaerts, Gareth J. Mueckl, Christopher S. Hoppe, Benjamin T. Jones
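The tool-lockout behaviour in the entry above (disabling power near protected objects) can be illustrated with a simple proximity check. This is a hypothetical sketch; the object representation, keep-out radii, and function names are assumptions, not the patent's implementation:

```python
from dataclasses import dataclass
from math import dist

@dataclass
class ProtectedObject:
    position: tuple          # (x, y, z) in metres, in the job-site frame
    keep_out_radius: float   # metres

def tool_power_allowed(tool_position, protected_objects):
    """Allow power only if the tool is outside every keep-out zone."""
    return all(dist(tool_position, p.position) > p.keep_out_radius
               for p in protected_objects)

pipes = [ProtectedObject((2.0, 1.0, 0.0), 0.5)]
print(tool_power_allowed((5.0, 5.0, 0.0), pipes))  # True: well clear
print(tool_power_allowed((2.1, 1.0, 0.0), pipes))  # False: inside the zone
```

A real system would take the tool position from the same spatial-tracking infrastructure (projectors, headsets) described in the abstract.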
Abstract: A control apparatus that controls a robot having a movable unit includes a display control unit that changes the display form of a virtual wall shown on a display unit transmitting visible light, based on the distance in real space between the virtual wall and the movable unit and on the velocity of the movable unit.
Abstract: Automatically generating augmented reality (AR) content by constructing a three-dimensional (3D) model of a scene that includes an object, using images recorded during a remotely-guided AR session from a camera position defined relative to first 3D axes, the model including camera positions defined relative to second 3D axes; registering the first axes with the second axes by matching a trajectory derived from the image camera positions to a trajectory derived from the model's camera positions to determine a session-to-model transform; translating, using the transform, positions of points of interest (POIs) indicated on the object during the session to corresponding POI positions on the object within the model, where the session POI positions are defined relative to the first axes and the model POI positions are defined relative to the second axes; and generating a content package including the model, the model POI positions, and POI annotations provided during the session.
Type:
Grant
Filed:
November 13, 2019
Date of Patent:
October 12, 2021
Assignee:
International Business Machines Corporation
Inventors:
Oded Dubovsky, Adi Raz Goldfarb, Yochay Tzur
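The trajectory-matching step in the entry above amounts to estimating a rigid session-to-model transform from two corresponding camera-position tracks. A standard way to do this is the Kabsch algorithm; the sketch below is an assumption about the approach, not the patent's actual method, and the variable names are illustrative:

```python
import numpy as np

def register_trajectories(session_pts, model_pts):
    """Estimate the rigid transform (R, t) that maps session-frame camera
    positions onto model-frame positions (Kabsch algorithm)."""
    A, B = np.asarray(session_pts, float), np.asarray(model_pts, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: a session trajectory rotated 90 degrees about z, then shifted.
session = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [2, 1, 1]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
true_t = np.array([5.0, 2.0, 1.0])
model = session @ true_R.T + true_t

R, t = register_trajectories(session, model)
poi_session = np.array([0.5, 0.5, 0.0])   # a POI marked during the session
poi_model = R @ poi_session + t           # the same POI in model coordinates
```

Once (R, t) is known, every session POI can be carried into the model frame exactly as the abstract describes.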
Abstract: Embodiments of the present disclosure relate to continuous and/or binocular time warping methods that account for head movement of the user without having to re-render a displayed image. Continuous time warping transforms an image from a first perspective to a second perspective of the viewer without re-rendering the image from the second perspective. Binocular time warp refers to a late-frame time warp performed separately for the left and right display units of a display device that includes one display unit per eye. Warped images are sent to the left and right display units, where photons are generated and emitted toward the respective eyes of the viewer, thereby displaying an image on both display units at the same time.
Type:
Grant
Filed:
May 18, 2020
Date of Patent:
September 21, 2021
Assignee:
Magic Leap, Inc.
Inventors:
Ivan Li Chuen Yeoh, Lionel Ernest Edwin, Samuel A. Miller
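For a rotation-only head movement, a time warp of the kind described above can be expressed as a homography built from the camera intrinsics and the late-sampled rotation delta. This is a minimal sketch under that assumption (the intrinsics and function names are illustrative, not from the patent):

```python
import numpy as np

def timewarp_homography(K, R_delta):
    """Homography that re-projects an already-rendered frame to a slightly
    newer head orientation without re-rendering (rotation-only warp)."""
    return K @ R_delta @ np.linalg.inv(K)

def warp_pixel(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

K = np.array([[500.0,   0.0, 320.0],     # assumed pinhole intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# For a binocular display, each eye's image would get its own warp,
# sampled at that eye's display time. With no head movement the warp
# reduces to the identity:
H_left = timewarp_homography(K, np.eye(3))
x2, y2 = warp_pixel(H_left, 100.0, 80.0)   # ~ (100.0, 80.0)
```

Translational head movement additionally needs per-pixel depth, which is why rotation-only warps are the common late-stage correction.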
Abstract: A three-dimensional object data generation apparatus includes an obtaining unit that obtains three-dimensional object data representing a three-dimensional object with plural voxels, for each of which a physical property value is set, a setting unit that sets a three-dimensional threshold matrix in which thresholds are arranged in a three-dimensional space in accordance with a predetermined basic shape, and a calculation unit that calculates whether to form each of the plural voxels on a basis of the physical property value of the voxel and the three-dimensional threshold matrix.
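The per-voxel formation decision described above is a 3D analogue of ordered dithering: each voxel's physical property value is compared against the entry of a tiled 3D threshold matrix. A minimal sketch, assuming the property value is normalized to a 0..1 material density (the names and tile layout are illustrative):

```python
import numpy as np

def decide_voxels(density, threshold_tile):
    """A voxel is formed when its material density (0..1) exceeds the
    corresponding entry of the threshold matrix, tiled across the volume."""
    tz, ty, tx = threshold_tile.shape
    z, y, x = np.indices(density.shape)
    thresholds = threshold_tile[z % tz, y % ty, x % tx]
    return density > thresholds

# A 2x2x2 threshold tile with evenly spaced levels in (0, 1):
tile = (np.arange(8).reshape(2, 2, 2) + 0.5) / 8.0
density = np.full((2, 2, 2), 0.5)      # uniform 50% material
formed = decide_voxels(density, tile)
print(int(formed.sum()))               # 4: half of the 8 voxels are formed
```

Tiling the threshold matrix "in accordance with a predetermined basic shape" lets a uniform density print as a regular partial-fill structure.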
Abstract: Embodiments of the present disclosure relate to a method, apparatus, device and computer readable storage medium for reconstructing a three-dimensional scene. The method for reconstructing a three-dimensional scene includes acquiring a point cloud data frame set for a three-dimensional scene, point cloud data frames in the point cloud data frame set respectively having a pose parameter. The method further comprises determining a subset corresponding to a part of the three-dimensional scene from the point cloud data frame set. The method further comprises adjusting a pose parameter of a point cloud data frame in the subset to obtain an adjusted subset, the adjusted subset including at least two point cloud data frames having matching overlapping parts. The method further comprises updating the point cloud data frame set using the adjusted subset. In this way, distributed processing on a large amount of point cloud data may be realized.
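The subset-adjustment step above can be illustrated with a deliberately simplified, translation-only pose correction: shift a frame so its overlap region lines up with the reference overlap. A full method would optimize rotations as well; the frame structure and names here are assumptions:

```python
import numpy as np

def world_points(frame):
    """Apply a frame's pose (R, t) to its local point cloud."""
    R, t = frame["pose"]
    return frame["points"] @ R.T + t

def adjust_frame_translation(frame, reference_overlap, overlap_idx):
    """Shift a frame's pose so the centroid of its overlap region matches
    the centroid of the reference overlap (translation-only alignment)."""
    delta = reference_overlap.mean(axis=0) - world_points(frame)[overlap_idx].mean(axis=0)
    R, t = frame["pose"]
    frame["pose"] = (R, t + delta)
    return frame

# A frame whose pose has drifted by (0.3, -0.2, 0) relative to the reference:
ref_overlap = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0]])
frame = {"points": ref_overlap.copy(),
         "pose": (np.eye(3), np.array([0.3, -0.2, 0.0]))}
frame = adjust_frame_translation(frame, ref_overlap, np.arange(3))
```

Because each subset covers only part of the scene, such adjustments can run independently per subset, which is what enables the distributed processing the abstract mentions.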
Abstract: A method and system for generating a three-dimensional (3D) virtual scene are disclosed. The method includes: identifying a two-dimensional (2D) object in a 2D picture and the position of the 2D object in the 2D picture; obtaining the 3D model of the 3D object corresponding to the 2D object; calculating the corresponding position, in the horizontal plane of the 3D scene, of the 3D object according to the position of the 2D object in the picture; and simulating the falling of the model of the 3D object onto the 3D scene from a predetermined height above the 3D scene, wherein the position of the landing point of the model in the horizontal plane is the corresponding position of the 3D object in the horizontal plane of the 3D scene.
Type:
Grant
Filed:
November 30, 2012
Date of Patent:
July 20, 2021
Assignee:
International Business Machines Corporation
Inventors:
Hao Chen, Guo Qiang Hu, Qi Cheng Li, Li Jun Mei, Jian Wang, Yi Min Wang, Zi Yu Zhu
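The position-mapping step in the entry above can be sketched as a linear mapping from pixel coordinates to the scene's ground plane, with the drop reduced to its end state on flat ground. The projection model and names are assumptions for illustration, not the patent's actual mapping:

```python
def picture_to_plane(px, py, img_w, img_h, scene_w, scene_d):
    """Map a 2D object's pixel position to a ground-plane (x, z) position:
    image x spans the scene width, image y spans the scene depth."""
    return (px / img_w) * scene_w, (py / img_h) * scene_d

def land_position(x, z, ground_height=0.0):
    """On flat ground, the model dropped from above comes to rest at
    ground_height while keeping the (x, z) computed from the picture."""
    return (x, ground_height, z)

x, z = picture_to_plane(320, 240, 640, 480, scene_w=10.0, scene_d=8.0)
print((x, z))               # (5.0, 4.0): picture centre lands at scene centre
print(land_position(x, z))  # (5.0, 0.0, 4.0)
```

Simulating the fall with a physics engine (rather than snapping to the ground) is what lets objects settle naturally against uneven scene geometry.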
Abstract: The disclosure relates to a system for providing guidance for positioning a body. The system may include a video display, one or more digital cameras configured to generate a depth video stream and a visual video stream, and a computing device including a memory and a processor. The processor may control the one or more digital cameras to generate the depth video stream including a depth image of the body and the visual video stream including a color image of the body. The processor identifies at least a part of the body within the images using a first trained learning machine to segment the images and isolate the body. The processor may crop both the visual image and the depth image based on the identified body. The processor may estimate a position of a plurality of joints of the body by applying a second trained learning machine to the identified and isolated part of the body. The processor may generate a current pose estimate by connecting estimated positions of the plurality of joints.
Type:
Grant
Filed:
September 13, 2019
Date of Patent:
July 20, 2021
Assignee:
MirrorAR LLC
Inventors:
Hemant Virkar, Leah Kaplan, Stephen Furlani, Jacob Borgman
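The two-stage pipeline in the entry above (segment and isolate, crop, then estimate joints and connect them) can be sketched with stand-in models. The callables, bone list, and data shapes below are assumptions, not the patent's trained networks:

```python
def estimate_pose(color_img, depth_img, segmenter, joint_model):
    """(1) 'segmenter' isolates the body and returns a bounding box used to
    crop both images; (2) 'joint_model' estimates joint positions on the
    crops, which are then connected into a pose skeleton."""
    x0, y0, x1, y1 = segmenter(color_img, depth_img)
    color_crop = [row[x0:x1] for row in color_img[y0:y1]]
    depth_crop = [row[x0:x1] for row in depth_img[y0:y1]]
    joints = joint_model(color_crop, depth_crop)     # {name: (x, y)}
    bones = [("hip", "knee"), ("knee", "ankle")]     # toy skeleton
    skeleton = [(joints[a], joints[b]) for a, b in bones
                if a in joints and b in joints]
    return joints, skeleton

color = [[1] * 4 for _ in range(4)]                  # toy 4x4 frames
depth = [[2] * 4 for _ in range(4)]
fake_segmenter = lambda c, d: (0, 0, 2, 2)
fake_joints = lambda c, d: {"hip": (1, 0), "knee": (1, 1), "ankle": (1, 2)}
joints, skeleton = estimate_pose(color, depth, fake_segmenter, fake_joints)
print(len(skeleton))   # 2 bones connect the three estimated joints
```

Cropping to the segmented region before joint estimation keeps the second model's input tight around the body, which is the point of running the stages in this order.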
Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for animating a pull-to-refresh gesture. The program and method provide for receiving a pull gesture in a messaging application; selecting, in response to receiving the pull gesture, a set of users corresponding to contacts in the messaging application; and displaying a set of images for each user in the set of users, in association with refreshing screen content.
Type:
Grant
Filed:
December 30, 2019
Date of Patent:
July 13, 2021
Assignee:
Snap Inc.
Inventors:
Jeremy Voss, Christie Marie Heikkinen, Daniel Rakhamimov, Laurent Desserrey, Susan Marie Territo, Edward Koai, Joseph Timothy Fortier
Abstract: Aspects of the present disclosure relate to a method for producing an output image representing a scene. The method comprises rendering a plurality of component images. Each component image corresponds to an associated depth within the scene. The method comprises determining one or more elements of a view pose to which an output image is to correspond, and deriving an output image part from each of the plurality of component images based on the determined one or more elements. The method then comprises overlaying each of the output image parts, to produce the output image.
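One way to read the entry above is back-to-front compositing of depth layers, where each layer's output-image part is shifted by a parallax that depends on the view-pose element (here, horizontal head offset) and the layer's depth. This 1D sketch is an assumption about the mechanism, not the patent's method:

```python
def compose_output(component_layers, head_offset_x):
    """Composite depth layers far-to-near; each layer is shifted by a
    parallax proportional to head offset and inverse to layer depth."""
    width = len(component_layers[0]["pixels"])
    out = [None] * width
    for layer in sorted(component_layers, key=lambda l: -l["depth"]):
        shift = round(head_offset_x / layer["depth"])
        for i, px in enumerate(layer["pixels"]):
            j = i + shift
            if 0 <= j < width and px is not None:
                out[j] = px          # nearer layers overwrite farther ones
    return out

layers = [
    {"depth": 4.0, "pixels": ["B"] * 8},                              # background
    {"depth": 1.0, "pixels": [None, None, "F", "F", None, None, None, None]},
]
result = compose_output(layers, head_offset_x=4.0)
print(result)   # near layer moved 4 pixels, far layer only 1
```

The near "F" layer moves four times as far as the background for the same head motion, which is exactly the depth cue this layered representation is meant to preserve cheaply.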
Abstract: A virtuality-reality overlapping method is provided. A point cloud map of a real scene is constructed, and the outline border vertexes of a plurality of objects are located using 3D object detection. From these outline border vertexes, the point cloud coordinates of the final candidate vertexes are determined by screening a plurality of projected key frames. The point cloud map is then projected onto the real scene to overlap a virtual content with the real scene.
Abstract: Techniques are disclosed relating to compression of data stored at different cache levels. In some embodiments, programmable shader circuitry is configured to execute program instructions of compute kernels that write pixel data. In some embodiments, a first cache is configured to store pixel write data from the programmable shader circuitry and first compression circuitry is configured to compress a first block of pixel write data in response to full accumulation of the first block in the first cache circuitry. In some embodiments, second cache circuitry is configured to store pixel write data from the programmable shader circuitry at a higher level in a storage hierarchy than the first cache circuitry and second compression circuitry is configured to compress a second block of pixel write data in response to full accumulation of the second block in the second cache circuitry.
Type:
Grant
Filed:
November 4, 2019
Date of Patent:
July 13, 2021
Assignee:
Apple Inc.
Inventors:
Anthony P. DeLaurier, Karl D. Mann, Tyson J. Bergland, Winnie W. Yeung
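The compress-on-full-accumulation behaviour in the entry above can be sketched as a cache level that buffers pixel writes per block and hands a block to its compressor only once every slot has been written. This is a software analogy of the described hardware, with assumed names and zlib standing in for the block compressor:

```python
import zlib

class CacheLevel:
    """Accumulates pixel writes into fixed-size blocks; a block is
    compressed only in response to full accumulation."""
    def __init__(self, block_size, compress):
        self.block_size = block_size
        self.compress = compress
        self.blocks = {}        # block_id -> {offset: value}, in flight
        self.compressed = {}    # block_id -> compressed bytes

    def write(self, block_id, offset, value):
        block = self.blocks.setdefault(block_id, {})
        block[offset] = value
        if len(block) == self.block_size:          # fully accumulated
            data = [block[i] for i in range(self.block_size)]
            self.compressed[block_id] = self.compress(data)
            del self.blocks[block_id]

l1 = CacheLevel(block_size=4, compress=lambda d: zlib.compress(bytes(d)))
for i in range(4):
    l1.write(block_id=0, offset=i, value=7)   # uniform pixels compress well
print(0 in l1.compressed)    # True: compressed only once the block filled
```

In the patent each cache level has its own compressor, so the same block may be compressed independently at the first-level and higher-level caches as it moves up the storage hierarchy.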
Abstract: Systems and methods for providing digital augmented reality content in a distribution network using physical distribution items as triggers for the augmented reality content. An interface allows a sender of a physical item to provide augmented reality content to a distribution network, and the distribution network can provide the augmented reality content to a recipient of the physical distribution item.
Abstract: An approach for simulating items in an environment, such as a room, is disclosed. A package file can store information including an image of the environment and metadata including an identifier that uniquely identifies a selected image. The package file can be used to regenerate a simulation of the item arranged over the image of the environment. Later changes can be made to the simulation of the item by accessing the metadata.
Abstract: A method and an apparatus produce and reproduce Augmented Reality (AR) contents in a mobile terminal. In the method, contents are produced, and an image including an object corresponding to the contents is recognized. Recognition information for the object is obtained based on the recognition result, and AR contents including the contents and the recognition information are generated. AR contents for an input image may therefore be easily produced and reproduced, and used as independent multimedia contents rather than as an auxiliary to other contents.
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type:
Grant
Filed:
October 16, 2019
Date of Patent:
May 4, 2021
Assignee:
Google LLC
Inventors:
Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
Abstract: Provided are an apparatus and method of three-dimensional reverse modeling of a building structure by using photographic images, in which three-dimensional modeling data of the building structure may be quickly and conveniently reverse-modeled from a plurality of photographic images. Because the modeling data are derived from photographs of the structure captured with a camera, reverse modeling can be performed quickly, easily, and at low cost, thereby remarkably increasing the productivity of building information model (BIM) manufacture.