Patents Examined by Diane M Wills
  • Patent number: 11372475
    Abstract: An information processing apparatus according to the present technology includes a control section. The control section includes a determination section and a model data generation section. The determination section determines whether or not a predetermined modeling condition is satisfied on the basis of sensor data acquired along with a movement of a moving object. In a case where the determination section determines that the predetermined modeling condition is satisfied, the model data generation section generates model data relating to a floor using the sensor data.
    Type: Grant
    Filed: February 25, 2019
    Date of Patent: June 28, 2022
    Assignee: Sony Corporation
    Inventor: Shunichi Homma
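    Illustrative sketch (not taken from the patent): if the modeling condition is assumed to be something like "enough sensor points accumulated while the moving object is roughly level", the floor model could be a least-squares plane fit to the accumulated point cloud. The function names and thresholds below are hypothetical.

        import numpy as np

        def modeling_condition_met(points, tilt_deg, min_points=5000, max_tilt_deg=5.0):
            # Hypothetical condition: enough accumulated points and a nearly level moving object.
            return len(points) >= min_points and abs(tilt_deg) <= max_tilt_deg

        def fit_floor_plane(points):
            # Least-squares fit of z = a*x + b*y + c to the accumulated points (N x 3).
            A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
            coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
            return coeffs  # (a, b, c) describing the floor model

        points = np.random.rand(6000, 3) * [5.0, 5.0, 0.02]  # mock sensor data near z = 0
        if modeling_condition_met(points, tilt_deg=1.2):
            floor = fit_floor_plane(points)
            print("floor plane: z = %.3fx + %.3fy + %.3f" % tuple(floor))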
  • Patent number: 11373382
    Abstract: A method is described for implementing augmented reality by transferring annotations from a model image to a video flow or to a sequence of destination images. The method entails the creation of a model image for each point of interest (or for several points of interest), together with the annotation of said model image with point annotations on the object's points of interest that are intended to be augmented and made user-interactable. The method automatically determines whether the video flow or the sequence of destination images contains the object contained in the model image and, if so, automatically transfers the annotations from the model image to the video flow or to the sequence of destination images, maintaining the location of the points relative to the object.
    Type: Grant
    Filed: April 1, 2020
    Date of Patent: June 28, 2022
    Assignee: PIKKART S.R.L.
    Inventor: Davide Baltieri
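    Illustrative sketch (an assumption, not the patented method): one common way to realize this kind of annotation transfer is sparse feature matching plus a RANSAC homography. The OpenCV-based code below detects the model object in a destination frame and re-projects the annotated points; all parameters are arbitrary.

        import cv2
        import numpy as np

        def transfer_annotations(model_img, dest_img, model_points):
            # Match ORB features between the model image and the destination frame.
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(model_img, None)
            k2, d2 = orb.detectAndCompute(dest_img, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            if len(matches) < 10:
                return None  # the model object was not found in this frame
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if H is None:
                return None
            # Re-project the annotation points so they stay attached to the object.
            pts = np.float32(model_points).reshape(-1, 1, 2)
            return cv2.perspectiveTransform(pts, H).reshape(-1, 2)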
  • Patent number: 11367230
    Abstract: Systems and methods for displaying a virtual reticle in an augmented or virtual reality environment by a wearable device are described. The environment can include real or virtual objects that may be interacted with by the user through a variety of poses, such as head pose, eye pose or gaze, or body pose. The user may select objects by pointing the virtual reticle toward a target object by changing pose or gaze. The wearable device can recognize that the orientation of a user's head or eyes is outside of a range of acceptable or comfortable head or eye poses and accelerate the movement of the reticle away from a default position and toward a position in the direction of the user's head or eye movement, which can reduce the amount of movement the user needs to align the reticle with the target.
    Type: Grant
    Filed: October 5, 2020
    Date of Patent: June 21, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Paul Armistead Hoover, Sam Baker, Jennifer M. R. Devine
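    Illustrative sketch (hypothetical values, not from the patent): the acceleration can be thought of as a gain on reticle movement that rises once the head pose leaves a comfortable range, so less head motion is needed to reach the target.

        COMFORT_YAW_DEG = 30.0   # assumed limit of a comfortable head yaw
        MAX_GAIN = 3.0           # assumed maximum acceleration factor

        def reticle_gain(head_yaw_deg):
            # Inside the comfortable range the reticle tracks head motion 1:1;
            # beyond it the gain ramps up linearly, capped at MAX_GAIN.
            excess = max(0.0, abs(head_yaw_deg) - COMFORT_YAW_DEG)
            return min(MAX_GAIN, 1.0 + excess / 15.0)

        def update_reticle(reticle_yaw_deg, head_yaw_delta_deg, head_yaw_deg):
            return reticle_yaw_deg + head_yaw_delta_deg * reticle_gain(head_yaw_deg)

        print(update_reticle(0.0, head_yaw_delta_deg=1.0, head_yaw_deg=45.0))  # 2.0 deg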
  • Patent number: 11367250
    Abstract: The method for virtual interaction with a three-dimensional indoor room includes: generating a virtual room model, generating a virtual room visual representation, providing the room data to a display device, receiving a virtual object selection, rendering an updated virtual room visual representation based on the virtual object, and providing the updated virtual room visual representation to the display device. The method can optionally include updating the virtual room. A system for virtual interaction with a three-dimensional indoor room includes a backend platform and a front-end application.
    Type: Grant
    Filed: March 18, 2020
    Date of Patent: June 21, 2022
    Assignee: GEOMAGICAL LABS, INC.
    Inventors: Brian Totty, Yacine Alami, Michael Otrada, Qing Guo, Hai Shang, Aaron Shea, Philip Guindi, Paul Gauthier, Jianfeng Yin, Kevin Wong, Angus Dorbie
  • Patent number: 11361493
    Abstract: A semantic texture map system generates a semantic texture map based on a 3D model that comprises a plurality of vertices with coordinates that indicate the positions of the plurality of vertices, a UV map, and a semantic segmentation image that comprises a set of semantic labels.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 14, 2022
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
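    Illustrative sketch (a simplified per-vertex version; a full system would presumably rasterize whole triangles): each vertex is projected into the semantic segmentation image to look up its label, and the label is written at the vertex's UV location in the texture map. All shapes and conventions are assumptions.

        import numpy as np

        def semantic_texture(vertices, uvs, camera, seg_image, tex_size=512):
            # vertices: (N, 3) positions; uvs: (N, 2) in [0, 1];
            # camera: 3x4 projection matrix; seg_image: (H, W) integer semantic labels.
            tex = np.zeros((tex_size, tex_size), dtype=seg_image.dtype)
            h, w = seg_image.shape
            homog = np.c_[vertices, np.ones(len(vertices))]
            proj = homog @ camera.T                    # project vertices into the image
            px = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
            px[:, 0] = np.clip(px[:, 0], 0, w - 1)
            px[:, 1] = np.clip(px[:, 1], 0, h - 1)
            labels = seg_image[px[:, 1], px[:, 0]]     # per-vertex semantic label
            tu = np.clip((uvs[:, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
            tv = np.clip((uvs[:, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)
            tex[tv, tu] = labels                       # splat labels at their UV locations
            return tex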
  • Patent number: 11348308
    Abstract: Systems and methods that facilitate efficient and effective shadow image generation are presented. In one embodiment, a hard shadow generation system comprises a compute shader, pixel shader and graphics shader. The compute shader is configured to retrieve pixel depth information and generate projection matrix information, wherein the generating includes performing dynamic re-projection from eye-space to light space utilizing the pixel depth information. The pixel shader is configured to create light space visibility information. The graphics shader is configured to perform frustum trace operations to produce hard shadow information, wherein the frustum trace operations utilize the light space visibility information. The light space visibility information can be considered irregular z information stored in an irregular z-buffer.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 31, 2022
    Assignee: NVIDIA Corporation
    Inventor: Jon Story
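    Illustrative sketch (a CPU/NumPy stand-in for what the abstract describes running in a compute shader): the dynamic re-projection step takes a pixel plus its depth-buffer value, reconstructs the world-space position, and re-projects it into light space. Matrix conventions and the [0, 1] depth range are assumptions.

        import numpy as np

        def reproject_to_light_space(px, py, depth, inv_view_proj, light_view_proj,
                                     width, height):
            # Reconstruct the pixel's position in normalized device coordinates
            # (assuming depth in [0, 1]), lift it to world space, then project
            # it with the light's view-projection matrix.
            ndc = np.array([2.0 * (px + 0.5) / width - 1.0,
                            1.0 - 2.0 * (py + 0.5) / height,
                            depth, 1.0])
            world = inv_view_proj @ ndc
            world /= world[3]
            light = light_view_proj @ world
            return light[:3] / light[3]   # position in the light's clip space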
  • Patent number: 11328489
    Abstract: An augmented reality user interface is disclosed that provides a dual representation of a physical location. Two views are generated for viewing the augmented reality objects: a first view that includes the video data of the location with the augmented reality objects superimposed over it in their augmented reality locations, and a second view that uses data derived from the physical location to generate a map with the augmented reality objects from the first view visible as objects on the map in the same augmented reality locations. The location, the motion data, the video data, and the augmented reality objects are combined into an augmented reality video such that when the computing device is in a first position the first view is visible, and when the computing device is in a second position the second view is visible, and the augmented reality video is displayed on a display.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Inspirium Laboratories LLC
    Inventor: Iegor Antypov
  • Patent number: 11328385
    Abstract: Techniques and systems are provided for configuring neural networks to perform warping of an object represented in an image to create a caricature of the object. For instance, in response to obtaining an image of an object, a warped image generator generates a warping field using the image as input. The warping field is generated using a model trained on pairings of training images and known warped images using supervised learning techniques and one or more losses. Based on the warping field, the warped image generator determines a set of displacements associated with pixels of the input image. These displacements indicate pixel displacement directions for the pixels of the input image and are applied to the input image to generate a warped image of the object.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: May 10, 2022
    Assignee: Adobe Inc.
    Inventors: Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
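    Illustrative sketch (the trained warping-field generator itself is omitted): given a dense per-pixel displacement field, the final step of applying it to the input image can be done by resampling. The sampling convention below (sampling at pixel + displacement) is an assumption.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def apply_warping_field(image, field):
            # image: (H, W) grayscale; field: (H, W, 2) per-pixel (dy, dx) displacements,
            # e.g. produced by a trained warping-field generator.
            h, w = image.shape
            yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            coords = np.stack([yy + field[..., 0], xx + field[..., 1]])
            return map_coordinates(image, coords, order=1, mode="nearest")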
  • Patent number: 11328495
    Abstract: This disclosure provides methods for generating a vehicle wrap design. The method includes: obtaining customer information corresponding to an entity; generating, using a computing device, a vehicle wrap design for covering a vehicle based on the obtained customer information; generating, using the computing device, a three-dimensional rendering of the vehicle, wherein the vehicle wrap design is applied to the three-dimensional rendering of the vehicle; and causing a client device to display the three-dimensional rendering with the applied vehicle wrap.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: May 10, 2022
    Assignee: WRAPMATE LLC
    Inventors: Christopher Loar, Jacob Atler Lozow
  • Patent number: 11321913
    Abstract: A three-dimensional (3D) modeling method of clothing to arrange and display parts constituting clothing on a 3D space includes loading pattern data and body data, wherein the pattern data comprises information about one or more parts constituting the clothing, and the body data comprises a 3D shape of a body on which the clothing is to be put; displaying the 3D shape of the body based on the body data; and displaying the one or more parts on the 3D shape of the body based on the pattern data.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: May 3, 2022
    Assignee: Z-EMOTION CO., LTD.
    Inventors: Dong Soo Han, Dong Wook Yi
  • Patent number: 11315287
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
  • Patent number: 11308639
    Abstract: A method and apparatus for annotating point cloud data. An apparatus may be configured to cause display of the point cloud data, label points in the point cloud data with a plurality of annotation points, the plurality of annotation points corresponding to points on a human body, move, in response to a user input, one or more of the annotation points to define a human pose and create annotated point cloud data, and output the annotated point cloud data.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 19, 2022
    Assignee: Volvo Car Corporation
    Inventors: Saudin Botonjic, Sihao Ding, Andreas Wallin
  • Patent number: 11308573
    Abstract: A device includes a processor and memory. The memory has stored thereon a plurality of executable instructions. The executable instructions, when executed by the processor, cause the processor to: receive an access request affecting an operation of the device; facilitate encryption and/or authentication across an interface coupled to the device, wherein the interface is configured to secure the access request; and execute the access request.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: April 19, 2022
    Assignee: ARM Norway AS
    Inventors: Jorn Nystad, Edvard Sorgard, Borgar Ljosland, Mario Blazevic
  • Patent number: 11308681
    Abstract: In one embodiment, a method includes, generating rays for casting into an artificial reality scene that includes one or more surfaces to determine whether the one or more surfaces are visible from a viewpoint. An origin and a trajectory of each ray are based on the viewpoint. The method includes applying a geometric transformation to the rays to modify their respective trajectory into the artificial reality scene. The geometric transformation is based on one or more distortion characteristics of a display system. The method includes determining, based on the modified trajectories of the rays, points of intersection of rays with the one or more surfaces in the artificial reality scene. The method includes providing, for display by the display system, color values generated based on the determined points of intersection.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 19, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston, Jeremy Freeman
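    Illustrative sketch (not the assignee's renderer): per-pixel rays are generated from the viewpoint, their directions are bent by a simple radial-distortion model standing in for the display system's distortion characteristics, and each modified ray is intersected with a planar surface.

        import numpy as np

        def distort_direction(d, k1=0.1):
            # Toy radial distortion of a normalized ray direction (k1 is an assumed coefficient).
            r2 = d[0] ** 2 + d[1] ** 2
            out = np.array([d[0] * (1.0 + k1 * r2), d[1] * (1.0 + k1 * r2), d[2]])
            return out / np.linalg.norm(out)

        def intersect_plane(origin, direction, plane_point, plane_normal):
            # Intersection point of a ray with a plane, or None if the ray misses it.
            denom = direction @ plane_normal
            if abs(denom) < 1e-8:
                return None
            t = ((plane_point - origin) @ plane_normal) / denom
            return origin + t * direction if t > 0 else None

        origin = np.zeros(3)
        d = np.array([0.1, 0.05, -1.0])
        ray = distort_direction(d / np.linalg.norm(d))
        print(intersect_plane(origin, ray, plane_point=np.array([0.0, 0.0, -5.0]),
                              plane_normal=np.array([0.0, 0.0, 1.0])))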
  • Patent number: 11302046
    Abstract: Systems and methods for low power virtual reality (VR) presence monitoring and notification via a VR headset worn by a user entail a number of aspects. In an embodiment, a person is detected entering a physical location occupied by the user of the VR headset during a VR session. This detection may occur via one or more sensors on the VR headset. In response to detecting that a person has entered the location, a representation of the person is generated and displayed to the user via the VR headset as part of the VR session. In this way, the headset user may be made aware of people in their physical environment without necessarily leaving the VR session.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: April 12, 2022
    Assignee: Motorola Mobility LLC
    Inventors: Scott DeBates, Douglas Lautner
  • Patent number: 11276184
    Abstract: A method for determining the amplitude of a movement performed by a member of an articulated body comprises: obtaining a segment representative of the positioning of the member in a given reference frame at the end of said movement; generating a three-dimensional model of the member, positioned in said reference frame by means of the obtained segment; obtaining a cloud of three-dimensional points representing the member in said reference frame at the end of said movement, based on depth information provided by a sensor, said depth information defining a three-dimensional scene comprising at least a part of the articulated body including said member; repositioning the model of the member so as to minimize a predetermined error criterion between the obtained cloud of points and said model; and determining the amplitude of the movement based on the new positioning of the model of the member.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: March 15, 2022
    Assignee: FONDATION B-COM
    Inventors: Albert Murienne, Laurent Launay
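    Illustrative sketch (the patent's error criterion and model are not reproduced here): the repositioning step can be read as a small optimization that rotates a line-segment model of the limb until its mean distance to the observed point cloud is minimal. Segment length, noise level and the single rotation angle are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def segment_points(angle, length=0.4, n=20):
            # Limb segment modelled as points along a line rotated in the x-z plane.
            t = np.linspace(0.0, length, n)
            return np.stack([t * np.cos(angle), np.zeros(n), t * np.sin(angle)], axis=1)

        def fit_error(params, cloud):
            # Mean distance from each model point to its nearest observed point.
            model = segment_points(params[0])
            d = np.linalg.norm(model[:, None, :] - cloud[None, :, :], axis=2)
            return d.min(axis=1).mean()

        cloud = segment_points(np.radians(35.0)) + np.random.normal(0.0, 0.005, (20, 3))
        result = minimize(fit_error, x0=[np.radians(10.0)], args=(cloud,), method="Nelder-Mead")
        print("estimated elevation (deg):", np.degrees(result.x[0]))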
  • Patent number: 11276236
    Abstract: An extended reality (XR) system includes an extended reality application executing on a processor within the XR system. The XR system receives, via a client device, a selection of a first extended reality (XR) object located within an XR environment. The XR system receives, via the client device, a request to move the selected first XR object within the XR environment. The XR system calculates a distance between a first feature of the first XR object and a first plane associated with a second XR object within the XR environment. The XR system determines that the distance is within a particular distance. In response to determining that the distance is within the particular distance, the XR system positions the first feature within the XR environment such that the first feature is coplanar with the first plane.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: March 15, 2022
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Jesse Chor, Glen Wong, Stanislav Yazhenskikh, Jim Zhu
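    Illustrative sketch (the snapping threshold is a made-up value): the check amounts to measuring the distance from the moved object's feature to the other object's plane and, when it is within the threshold, projecting the feature onto that plane so the two become coplanar.

        import numpy as np

        SNAP_DISTANCE = 0.05  # assumed snapping threshold, in scene units

        def maybe_snap_to_plane(feature, plane_point, plane_normal):
            # Signed distance from the feature to the plane of the second XR object.
            n = plane_normal / np.linalg.norm(plane_normal)
            dist = (feature - plane_point) @ n
            if abs(dist) <= SNAP_DISTANCE:
                return feature - dist * n   # project the feature so it becomes coplanar
            return feature                   # too far away: leave the object unchanged

        print(maybe_snap_to_plane(np.array([0.0, 0.0, 1.03]),
                                  np.array([0.0, 0.0, 1.00]),
                                  np.array([0.0, 0.0, 1.0])))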
  • Patent number: 11276201
    Abstract: Determining the position and orientation (or “pose”) of an augmented reality device includes capturing an image of a scene having a number of features and extracting descriptors of features of the scene represented in the image. The descriptors are matched to landmarks in a 3D model of the scene to generate sets of matches between the descriptors and the landmarks. Estimated poses are determined from at least some of the sets of matches between the descriptors and the landmarks. Estimated poses having deviations from an observed location measurement that are greater than a threshold value may be eliminated. Features used in the determination of estimated poses may also be weighted by the inverse of the distance between the feature and the device, so that closer features are accorded more weight.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 15, 2022
    Assignee: Snap Inc.
    Inventors: Maria Jose Garcia Sopo, Qi Pan, Edward James Rosten
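    Illustrative sketch (thresholds and data layout are assumptions): the two refinements mentioned in the abstract, dropping pose hypotheses that stray too far from an observed location measurement and weighting features by the inverse of their distance to the device, might look like this.

        import numpy as np

        def filter_poses(poses, observed_position, max_deviation=10.0):
            # Keep only estimated poses whose translation lies near the observed
            # location measurement (e.g. GPS); max_deviation is an assumed threshold.
            return [p for p in poses
                    if np.linalg.norm(p["position"] - observed_position) <= max_deviation]

        def feature_weights(feature_positions, device_position):
            # Closer features get more weight: w = 1 / distance.
            d = np.linalg.norm(feature_positions - device_position, axis=1)
            return 1.0 / np.maximum(d, 1e-6)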
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
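    Illustrative sketch (layer sizes and code dimensions are invented): the architecture in the abstract maps onto two encoders whose outputs are concatenated and fed to a decoder that produces vertex displacements for deforming the reference mesh. A PyTorch-style skeleton:

        import torch
        import torch.nn as nn

        class FaceModel(nn.Module):
            # Assumed shapes: 3N neutral-minus-reference offsets, B blendweights, 3N displacements.
            def __init__(self, n_vertices=5000, n_blendweights=50, id_dim=64, expr_dim=32):
                super().__init__()
                self.identity_encoder = nn.Sequential(
                    nn.Linear(3 * n_vertices, 256), nn.ReLU(), nn.Linear(256, id_dim))
                self.expression_encoder = nn.Sequential(
                    nn.Linear(n_blendweights, 128), nn.ReLU(), nn.Linear(128, expr_dim))
                self.decoder = nn.Sequential(
                    nn.Linear(id_dim + expr_dim, 512), nn.ReLU(),
                    nn.Linear(512, 3 * n_vertices))

            def forward(self, neutral_minus_reference, blendweights):
                z_id = self.identity_encoder(neutral_minus_reference)
                z_expr = self.expression_encoder(blendweights)
                code = torch.cat([z_id, z_expr], dim=-1)
                return self.decoder(code)   # vertex displacements for the reference mesh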
  • Patent number: 11270450
    Abstract: Provided is a method for clustering the data point groups of one or more measurement targets located in the same region from among the acquired data point groups.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 8, 2022
    Assignees: TADANO LTD., THE SCHOOL CORPORATION KANSAI UNIVERSITY
    Inventors: Takayuki Kosaka, Iwao Ishikawa, Satoshi Kubota, Shigenori Tanaka, Kenji Nakamura, Yuhei Yamamoto, Masaya Nakahara
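    Illustrative sketch (DBSCAN stands in for whatever clustering the patent actually uses): a density-based clustering groups the acquired data points so that points belonging to measurement targets in the same region end up in the same cluster. The data and parameters are invented.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # Mock acquired data points (e.g. from a scanning sensor), in metres.
        points = np.vstack([np.random.normal([0.0, 0.0, 0.0], 0.2, (100, 3)),
                            np.random.normal([5.0, 5.0, 0.0], 0.2, (80, 3))])
        labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
        for target_id in sorted(set(labels) - {-1}):
            group = points[labels == target_id]
            print("target", target_id, "has", len(group), "points")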