Patents Examined by Diane M Wills
  • Patent number: 11361493
    Abstract: A semantic texture map system to generate a semantic texture map based on a 3D model that comprises a plurality of vertices that include coordinates indicating the positions of the plurality of vertices, a UV map, and a semantic segmentation image that comprises a set of semantic labels.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: June 14, 2022
    Assignee: Snap Inc.
    Inventors: Piers Cowburn, David Li, Isac Andreas Müller Sandvik, Qi Pan
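    Illustrative sketch of the general idea only (not Snap's implementation): per-vertex semantic labels are looked up from the segmentation image and written into UV space to form a semantic texture. The camera matrix, array shapes, and nearest-pixel splatting are assumptions.
      import numpy as np

      def bake_semantic_texture(vertices, uvs, camera, seg_image, tex_size=256):
          """Write per-vertex semantic labels into a UV-space texture.

          vertices: (N, 3) positions, uvs: (N, 2) in [0, 1],
          camera: (3, 4) projection matrix, seg_image: (H, W) label map.
          """
          h, w = seg_image.shape
          tex = np.zeros((tex_size, tex_size), dtype=seg_image.dtype)
          # Project each vertex into the segmentation image to read its label.
          homog = np.hstack([vertices, np.ones((len(vertices), 1))])
          proj = homog @ camera.T
          px = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
          px[:, 0] = np.clip(px[:, 0], 0, w - 1)
          px[:, 1] = np.clip(px[:, 1], 0, h - 1)
          labels = seg_image[px[:, 1], px[:, 0]]
          # Splat each label at the vertex's UV coordinate in the texture.
          tu = np.clip((uvs[:, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
          tv = np.clip((uvs[:, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)
          tex[tv, tu] = labels
          return tex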
  • Patent number: 11348308
    Abstract: Systems and methods that facilitate efficient and effective shadow image generation are presented. In one embodiment, a hard shadow generation system comprises a compute shader, pixel shader and graphics shader. The compute shader is configured to retrieve pixel depth information and generate projection matrix information, wherein the generating includes performing dynamic re-projection from eye-space to light space utilizing the pixel depth information. The pixel shader is configured to create light space visibility information. The graphics shader is configured to perform frustum trace operations to produce hard shadow information, wherein the frustum trace operations utilize the light space visibility information. The light space visibility information can be considered irregular z information stored in an irregular z-buffer.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: May 31, 2022
    Assignee: NVIDIA Corporation
    Inventor: Jon Story
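    A CPU-side sketch of just the re-projection step described above (eye-space depth to light space); the matrix conventions and NDC ranges are assumptions, and the frustum-trace and irregular z-buffer stages are not shown.
      import numpy as np

      def reproject_eye_depth_to_light_space(depth, inv_view_proj, light_view_proj):
          """Re-project an eye-space depth buffer into the light's clip space.

          depth: (H, W) depths in [0, 1]; both matrices are 4x4.
          Returns (H, W, 3) light-space NDC positions, one per pixel.
          """
          h, w = depth.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # Pixel centres and depths -> eye NDC coordinates in [-1, 1].
          ndc = np.stack([(xs + 0.5) / w * 2 - 1,
                          1 - (ys + 0.5) / h * 2,
                          depth * 2 - 1,
                          np.ones_like(depth)], axis=-1)
          # Unproject to world space, then project into light space.
          world = ndc @ inv_view_proj.T
          world /= world[..., 3:4]
          light_clip = world @ light_view_proj.T
          return light_clip[..., :3] / light_clip[..., 3:4]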
  • Patent number: 11328489
    Abstract: There is disclosed an augmented reality user interface providing a dual representation of a physical location. Two views are generated for viewing the augmented reality objects: a first view includes the video data with the augmented reality objects superimposed over it in augmented reality locations, and a second view uses data derived from the physical location to generate a map on which the augmented reality objects from the first view are visible in the same augmented reality locations. The location, the motion data, the video data, and the augmented reality objects are combined into an augmented reality video such that when the computing device is in a first position the first view is visible and when the computing device is in a second position the second view is visible, and the augmented reality video is displayed on a display.
    Type: Grant
    Filed: June 30, 2020
    Date of Patent: May 10, 2022
    Assignee: Inspirium Laboratories LLC
    Inventor: Iegor Antypov
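    A toy sketch of the view-switching behaviour; using device pitch as the "first position"/"second position" signal, and the 45-degree threshold, are assumptions.
      def select_view(pitch_deg, camera_frame, map_image, threshold_deg=45.0):
          """Roughly upright device -> first-person AR view; tilted flat -> map view."""
          if abs(pitch_deg) < threshold_deg:
              return ("first_person", camera_frame)   # video with AR objects superimposed
          return ("map", map_image)                   # top-down map with the same objects

      view_name, _ = select_view(70.0, camera_frame="frame", map_image="map")
      print(view_name)   # map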
  • Patent number: 11328385
    Abstract: Techniques and systems are provided for configuring neural networks to perform warping of an object represented in an image to create a caricature of the object. For instance, in response to obtaining an image of an object, a warped image generator generates a warping field using the image as input. The warping field is generated using a model trained with pairings of training images and known warped images using supervised learning techniques and one or more losses. The warped image generator determines, based on the warping field, a set of displacements associated with pixels of the input image. These displacements indicate pixel displacement directions for the pixels of the input image. These displacements are applied to the digital image to generate a warped image of the object.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: May 10, 2022
    Assignee: Adobe Inc.
    Inventors: Julia Gong, Yannick Hold-Geoffroy, Jingwan Lu
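    A minimal sketch of the final step, applying a per-pixel displacement field to an image; the learned warping-field network itself is not shown, and nearest-neighbour backward sampling is an assumption.
      import numpy as np

      def apply_warping_field(image, flow):
          """Warp an image with a per-pixel displacement field.

          image: (H, W, C), flow: (H, W, 2) displacements in pixels (dx, dy);
          each output pixel samples its displaced source location.
          """
          h, w = image.shape[:2]
          ys, xs = np.mgrid[0:h, 0:w]
          src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
          src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
          return image[src_y, src_x]

      # A constant field standing in for a learned caricature warp.
      img = np.random.rand(64, 64, 3)
      field = np.zeros((64, 64, 2))
      field[..., 0] = -3.0                      # pulls content 3 pixels to the right
      warped = apply_warping_field(img, field)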
  • Patent number: 11328495
    Abstract: This disclosure provides methods for generating a vehicle wrap design. The method includes: obtaining customer information corresponding to an entity; generating, using a computing device, a vehicle wrap design for covering a vehicle based on the obtained customer information; generating, using the computing device, a three-dimensional rendering of the vehicle, wherein the vehicle wrap design is applied to the three-dimensional rendering of the vehicle; and causing a client device to display the three-dimensional rendering with the applied vehicle wrap design.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: May 10, 2022
    Assignee: WRAPMATE LLC
    Inventors: Christopher Loar, Jacob Atler Lozow
  • Patent number: 11321913
    Abstract: A three-dimensional (3D) modeling method of clothing to arrange and display parts constituting clothing on a 3D space includes loading pattern data and body data, wherein the pattern data comprises information about one or more parts constituting the clothing, and the body data comprises a 3D shape of a body on which the clothing is to be put; displaying the 3D shape of the body based on the body data; and displaying the one or more parts on the 3D shape of the body based on the pattern data.
    Type: Grant
    Filed: November 28, 2019
    Date of Patent: May 3, 2022
    Assignee: Z-EMOTION CO., LTD.
    Inventors: Dong Soo Han, Dong Wook Yi
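    A toy sketch of the data flow only (the landmark names, offsets, and attachment scheme are invented placeholders, not the patented method): pattern data lists the parts, body data supplies a 3D shape, and each part is positioned on that shape.
      from dataclasses import dataclass

      @dataclass
      class Part:                 # one garment piece from the pattern data
          name: str
          anchor: str             # named landmark on the body it attaches to
          offset: tuple           # local offset from that landmark, in metres

      # Stand-in "body data": a 3D body reduced to named landmark positions.
      body_landmarks = {"chest": (0.0, 1.3, 0.05), "waist": (0.0, 1.0, 0.04)}

      # Stand-in "pattern data": the parts constituting the clothing.
      pattern = [Part("front_panel", "chest", (0.0, 0.0, 0.02)),
                 Part("waistband", "waist", (0.0, 0.0, 0.02))]

      def place_parts(pattern, body_landmarks):
          """Return a world-space position for each part, placed on the body."""
          placed = {}
          for part in pattern:
              bx, by, bz = body_landmarks[part.anchor]
              ox, oy, oz = part.offset
              placed[part.name] = (bx + ox, by + oy, bz + oz)
          return placed

      print(place_parts(pattern, body_landmarks))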
  • Patent number: 11315287
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating body pose information for a person in a physical environment. In various implementations, a device includes an environmental sensor, a non-transitory memory and one or more processors coupled with the environmental sensor and the non-transitory memory. In some implementations, a method includes obtaining, via the environmental sensor, spatial data corresponding to a physical environment. The physical environment includes a person and a fixed spatial point. The method includes identifying a portion of the spatial data that corresponds to a body portion of the person. In some implementations, the method includes determining a position of the body portion relative to the fixed spatial point based on the portion of the spatial data. In some implementations, the method includes generating pose information for the person based on the position of the body portion in relation to the fixed spatial point.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: April 26, 2022
    Assignee: APPLE INC.
    Inventors: Stefan Auer, Sebastian Bernhard Knorr
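    A minimal sketch of the "position relative to a fixed spatial point" step; the sensor output format and landmark names are assumptions.
      import numpy as np

      def body_part_positions_relative_to(fixed_point, detections):
          """Express detected body-portion positions relative to a fixed spatial point.

          fixed_point: (3,) world position (e.g. a door corner),
          detections: {name: (3,) world position} derived from sensor spatial data.
          """
          fixed = np.asarray(fixed_point, dtype=float)
          return {name: np.asarray(p, dtype=float) - fixed
                  for name, p in detections.items()}

      pose = body_part_positions_relative_to(
          (1.0, 0.0, 2.0),
          {"head": (1.2, 1.7, 2.5), "left_hand": (0.9, 1.1, 2.4)})
      print(pose["head"])   # [0.2 1.7 0.5]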
  • Patent number: 11308639
    Abstract: A method and apparatus for annotating point cloud data. An apparatus may be configured to cause display of the point cloud data, label points in the point cloud data with a plurality of annotation points, the plurality of annotation points corresponding to points on a human body, move, in response to a user input, one or more of the annotation points to define a human pose and create annotated point cloud data, and output the annotated point cloud data.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: April 19, 2022
    Assignee: Volvo Car Corporation
    Inventors: Saudin Botonjic, Sihao Ding, Andreas Wallin
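    A toy sketch of the annotation workflow; the joint names, the input format, and the output structure are assumptions.
      import numpy as np

      # Annotation points corresponding to points on a human body.
      annotation = {"head": np.array([0.0, 1.7, 0.0]),
                    "left_hand": np.array([-0.4, 1.1, 0.1]),
                    "right_hand": np.array([0.4, 1.1, 0.1])}

      def move_annotation_point(annotation, name, delta):
          """Move one annotation point in response to user input to define the pose."""
          annotation[name] = annotation[name] + np.asarray(delta, dtype=float)
          return annotation

      def export_annotated_cloud(points, annotation):
          """Bundle the raw point cloud with the posed annotation for output."""
          return {"points": points,
                  "annotation": {k: v.tolist() for k, v in annotation.items()}}

      cloud = np.random.rand(1000, 3)                  # stand-in for lidar points
      move_annotation_point(annotation, "left_hand", (0.0, 0.2, 0.0))
      annotated = export_annotated_cloud(cloud, annotation)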
  • Patent number: 11308573
    Abstract: A device includes a processor and memory. The memory has stored thereon a plurality of executable instructions. The executable instructions, when executed by the processor, cause the processor to: receive an access request affecting an operation of the device; facilitate encryption and/or authentication across an interface coupled to the device, wherein the interface is configured to secure the access request; and execute the access request.
    Type: Grant
    Filed: March 24, 2016
    Date of Patent: April 19, 2022
    Assignee: ARM Norway AS
    Inventors: Jorn Nystad, Edvard Sorgard, Borgar Ljosland, Mario Blazevic
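    A rough sketch of authenticating an access request across an interface using an HMAC with a pre-shared key; the key, message format, and operation name are assumptions, and the patent's encryption path is not shown.
      import hashlib, hmac, json

      SECRET = b"shared-device-key"   # assumed pre-shared key securing the interface

      def sign_request(request: dict) -> dict:
          """Attach an HMAC tag so the receiving side can authenticate the request."""
          payload = json.dumps(request, sort_keys=True).encode()
          tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
          return {"payload": payload.decode(), "tag": tag}

      def execute_if_authentic(message: dict) -> bool:
          """Verify the tag, then execute the access request."""
          expected = hmac.new(SECRET, message["payload"].encode(),
                              hashlib.sha256).hexdigest()
          if not hmac.compare_digest(expected, message["tag"]):
              return False                       # reject unauthenticated requests
          request = json.loads(message["payload"])
          print("executing", request["op"])      # placeholder for the real operation
          return True

      execute_if_authentic(sign_request({"op": "set_register", "addr": 16, "value": 1}))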
  • Patent number: 11308681
    Abstract: In one embodiment, a method includes generating rays for casting into an artificial reality scene that includes one or more surfaces to determine whether the one or more surfaces are visible from a viewpoint. An origin and a trajectory of each ray are based on the viewpoint. The method includes applying a geometric transformation to the rays to modify their respective trajectories into the artificial reality scene. The geometric transformation is based on one or more distortion characteristics of a display system. The method includes determining, based on the modified trajectories of the rays, points of intersection of the rays with the one or more surfaces in the artificial reality scene. The method includes providing, for display by the display system, color values generated based on the determined points of intersection.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: April 19, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Larry Seiler, Alexander Nankervis, John Adrian Arthur Johnston, Jeremy Freeman
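    A small sketch of the pipeline the abstract outlines: ray generation, a geometric transformation standing in for display distortion, and intersection with a planar surface. The pinhole ray model and the barrel-distortion term are assumptions.
      import numpy as np

      def generate_rays(width, height, viewpoint, focal=1.0):
          """One ray per pixel: origin at the viewpoint, trajectory through the pixel."""
          ys, xs = np.mgrid[0:height, 0:width]
          dirs = np.stack([(xs + 0.5) / width - 0.5,
                           0.5 - (ys + 0.5) / height,
                           -focal * np.ones_like(xs, dtype=float)], axis=-1)
          dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
          origins = np.broadcast_to(np.asarray(viewpoint, float), dirs.shape)
          return origins.reshape(-1, 3), dirs.reshape(-1, 3)

      def apply_barrel_distortion(dirs, k1=0.1):
          """Geometric transformation that modifies ray trajectories for the display."""
          r2 = dirs[:, 0] ** 2 + dirs[:, 1] ** 2
          out = dirs.copy()
          out[:, :2] *= (1.0 + k1 * r2)[:, None]
          return out / np.linalg.norm(out, axis=1, keepdims=True)

      def intersect_plane(origins, dirs, plane_point, plane_normal):
          """Points where rays hit a planar surface (NaN where they miss)."""
          n = np.asarray(plane_normal, float)
          denom = dirs @ n
          t = ((np.asarray(plane_point, float) - origins) @ n) / np.where(denom == 0, np.nan, denom)
          t = np.where(t > 0, t, np.nan)
          return origins + t[:, None] * dirs

      o, d = generate_rays(4, 4, viewpoint=(0.0, 0.0, 0.0))
      hits = intersect_plane(o, apply_barrel_distortion(d), (0, 0, -2), (0, 0, 1))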
  • Patent number: 11302046
    Abstract: Systems and methods for low power virtual reality (VR) presence monitoring and notification via a VR headset worn by a user entail a number of aspects. In an embodiment, a person is detected entering a physical location occupied by the user of the VR headset during a VR session. This detection may occur via one or more sensors on the VR headset. In response to detecting that a person has entered the location, a representation of the person is generated and displayed to the user via the VR headset as part of the VR session. In this way, the headset user may be made aware of people in their physical environment without necessarily leaving the VR session.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: April 12, 2022
    Assignee: Motorola Mobility LLC
    Inventors: Scott DeBates, Douglas Lautner
  • Patent number: 11276184
    Abstract: A method for determining the amplitude of a movement performed by a member of an articulated body comprises: obtaining a segment representative of the positioning of the member in a given reference frame at the end of said movement; generating a three-dimensional model of the member, positioned in said reference frame by means of the obtained segment; obtaining a cloud of three-dimensional points representing the member in said reference frame at the end of said movement, based on depth information provided by a sensor, said depth information defining a three-dimensional scene comprising at least a part of the articulated body including said member; repositioning the model of the member so as to minimize a predetermined error criterion between the obtained cloud of points and said model; and determining the amplitude of the movement, based on the new positioning of the model of the member.
    Type: Grant
    Filed: June 26, 2020
    Date of Patent: March 15, 2022
    Assignee: FONDATION B-COM
    Inventors: Albert Murienne, Laurent Launay
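    A simplified sketch of the last two steps: the repositioning is reduced here to fitting the limb's principal axis to the point cloud (an assumption, not the claimed error minimisation), and the amplitude is the angle from the initial segment direction.
      import numpy as np

      def fit_limb_direction(points):
          """Principal axis of the limb's point cloud, used as its repositioned direction."""
          centred = points - points.mean(axis=0)
          _, _, vt = np.linalg.svd(centred, full_matrices=False)
          return vt[0] / np.linalg.norm(vt[0])

      def movement_amplitude_deg(initial_direction, points):
          """Angle between the member's initial direction and its fitted final direction."""
          d0 = np.asarray(initial_direction, float)
          d0 /= np.linalg.norm(d0)
          d1 = fit_limb_direction(points)
          cosang = np.clip(abs(d0 @ d1), -1.0, 1.0)   # the fitted axis is sign-ambiguous
          return np.degrees(np.arccos(cosang))

      # Toy depth data: points scattered along an arm raised about 30 degrees from vertical.
      t = np.linspace(0, 0.6, 200)[:, None]
      arm_dir = np.array([np.sin(np.radians(30)), np.cos(np.radians(30)), 0.0])
      cloud = t * arm_dir + 0.005 * np.random.randn(200, 3)
      print(movement_amplitude_deg((0.0, 1.0, 0.0), cloud))   # approximately 30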
  • Patent number: 11276236
    Abstract: An extended reality (XR) system includes an extended reality application executing on a processor within the XR system. The XR system receives, via a client device, a selection of a first extended reality (XR) object located within an XR environment. The XR system receives, via the client device, a request to move the selected first XR object within the XR environment. The XR system calculates a distance between a first feature of the first XR object and a first plane associated with a second XR object within the XR environment. The XR system determines that the distance is within a particular distance. In response to determining that the distance is within the particular distance, the XR system positions the first feature within the XR environment such that the first feature is coplanar with the first plane.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: March 15, 2022
    Assignee: SPLUNK INC.
    Inventors: Devin Bhushan, Jesse Chor, Glen Wong, Stanislav Yazhenskikh, Jim Zhu
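    A minimal sketch of the snapping behaviour described above; the 5 cm snap distance and the point-to-plane projection are assumptions.
      import numpy as np

      def snap_feature_to_plane(feature_point, plane_point, plane_normal, snap_distance=0.05):
          """If the feature is within snap_distance of the plane, project it onto the
          plane so it becomes coplanar; otherwise leave it where the user placed it."""
          p = np.asarray(feature_point, float)
          n = np.asarray(plane_normal, float)
          n = n / np.linalg.norm(n)
          signed_dist = (p - np.asarray(plane_point, float)) @ n
          if abs(signed_dist) <= snap_distance:
              return p - signed_dist * n      # now coplanar with the target plane
          return p

      # A face of the moved object hovering 2 cm above a table plane snaps onto it.
      print(snap_feature_to_plane((0.3, 1.02, 0.4), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))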
  • Patent number: 11276201
    Abstract: Determining the position and orientation (or “pose”) of an augmented reality device includes capturing an image of a scene having a number of features and extracting descriptors of features of the scene represented in the image. The descriptors are matched to landmarks in a 3D model of the scene to generate sets of matches between the descriptors and the landmarks. Estimated poses are determined from at least some of the sets of matches between the descriptors and the landmarks. Estimated poses having deviations from an observed location measurement that are greater than a threshold value may be eliminated. Features used in the determination of estimated poses may also be weighted by the inverse of the distance between the feature and the device, so that closer features are accorded more weight.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: March 15, 2022
    Assignee: Snap Inc.
    Inventors: Maria Jose Garcia Sopo, Qi Pan, Edward James Rosten
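    A simplified sketch of the filtering and weighting described above; the candidate format, the 10 m deviation threshold, and the sum-of-inverse-distance score are assumptions.
      import numpy as np

      def select_pose(candidates, observed_position, max_deviation=10.0):
          """Drop candidate poses that deviate too far from an observed location
          measurement, then score the rest with inverse-distance feature weights.

          Each candidate: {'position': (3,) estimate,
                           'feature_depths': distances of its matched features}.
          """
          observed = np.asarray(observed_position, float)
          best, best_score = None, -np.inf
          for cand in candidates:
              deviation = np.linalg.norm(np.asarray(cand["position"], float) - observed)
              if deviation > max_deviation:
                  continue                            # eliminate implausible poses
              weights = 1.0 / np.maximum(np.asarray(cand["feature_depths"], float), 1e-6)
              score = weights.sum()                   # closer features count for more
              if score > best_score:
                  best, best_score = cand, score
          return best

      pose = select_pose(
          [{"position": (0, 0, 0), "feature_depths": [2.0, 3.0]},
           {"position": (50, 0, 0), "feature_depths": [1.0]}],   # too far from the fix
          observed_position=(1, 0, 0))
      print(pose["position"])   # (0, 0, 0)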
  • Patent number: 11276231
    Abstract: Techniques are disclosed for training and applying nonlinear face models. In embodiments, a nonlinear face model includes an identity encoder, an expression encoder, and a decoder. The identity encoder takes as input a representation of a facial identity, such as a neutral face mesh minus a reference mesh, and outputs a code associated with the facial identity. The expression encoder takes as input a representation of a target expression, such as a set of blendweight values, and outputs a code associated with the target expression. The codes associated with the facial identity and the facial expression can be concatenated and input into the decoder, which outputs a representation of a face having the facial identity and expression. The representation of the face can include vertex displacements for deforming the reference mesh.
    Type: Grant
    Filed: March 4, 2020
    Date of Patent: March 15, 2022
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Prashanth Chandran, Dominik Thabo Beeler, Derek Edward Bradley
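    A bare-bones PyTorch sketch of the encoder/decoder layout described above; the layer sizes, vertex count, and blendweight count are arbitrary placeholders, and torch is assumed to be available.
      import torch
      import torch.nn as nn

      class NonlinearFaceModel(nn.Module):
          """Identity encoder + expression encoder + decoder; the two codes are
          concatenated before decoding into per-vertex displacements."""
          def __init__(self, n_verts=5000, n_blendweights=50, id_dim=64, expr_dim=32):
              super().__init__()
              self.n_verts = n_verts
              # Identity input: neutral face mesh minus reference mesh, flattened.
              self.identity_encoder = nn.Sequential(
                  nn.Linear(n_verts * 3, 256), nn.ReLU(), nn.Linear(256, id_dim))
              # Expression input: a set of blendweight values.
              self.expression_encoder = nn.Sequential(
                  nn.Linear(n_blendweights, 128), nn.ReLU(), nn.Linear(128, expr_dim))
              # Decoder output: displacements for deforming the reference mesh.
              self.decoder = nn.Sequential(
                  nn.Linear(id_dim + expr_dim, 256), nn.ReLU(), nn.Linear(256, n_verts * 3))

          def forward(self, neutral_minus_ref, blendweights):
              code = torch.cat([self.identity_encoder(neutral_minus_ref),
                                self.expression_encoder(blendweights)], dim=-1)
              return self.decoder(code).view(-1, self.n_verts, 3)

      model = NonlinearFaceModel()
      displacements = model(torch.randn(2, 5000 * 3), torch.randn(2, 50))   # (2, 5000, 3)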
  • Patent number: 11270450
    Abstract: Provided is a method for clustering, from among acquired data point groups, the data point groups of one or more measurement targets located in the same region.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: March 8, 2022
    Assignees: TADANO LTD., THE SCHOOL CORPORATION KANSAI UNIVERSITY
    Inventors: Takayuki Kosaka, Iwao Ishikawa, Satoshi Kubota, Shigenori Tanaka, Kenji Nakamura, Yuhei Yamamoto, Masaya Nakahara
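    A toy sketch of grouping acquired point groups by region; the centroid-distance criterion and the radius are assumptions, not the claimed method.
      import numpy as np

      def cluster_point_groups(groups, region_radius=5.0):
          """Greedy clustering: a group joins an existing cluster if its centroid
          lies within region_radius of that cluster's first centroid.

          groups: list of (N_i, 3) arrays. Returns one cluster label per group.
          """
          centroids = [np.asarray(g, float).mean(axis=0) for g in groups]
          labels, cluster_centroids = [], []
          for c in centroids:
              for idx, cc in enumerate(cluster_centroids):
                  if np.linalg.norm(c - cc) <= region_radius:
                      labels.append(idx)
                      break
              else:
                  cluster_centroids.append(c)
                  labels.append(len(cluster_centroids) - 1)
          return labels

      a = np.random.rand(10, 3)              # near the origin
      b = np.random.rand(10, 3) + 0.5        # same region as a
      c = np.random.rand(10, 3) + 20.0       # a different region
      print(cluster_point_groups([a, b, c]))   # [0, 0, 1]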
  • Patent number: 11253338
    Abstract: Methods, systems, and apparatuses are described for determining deformation of gingiva. A three-dimensional (3D) digital model of an archform comprising a representation of gingiva and a tooth may be received. The 3D model may comprise a 3D mesh describing a surface of the gingiva, the tooth, and a contour between the gingiva and the tooth. A tooth-movement displacement may be applied to the vertices corresponding to the tooth in the 3D mesh and the vertices corresponding to the contour. A vertex-specific displacement may be determined for each of the vertices corresponding to the gingiva by solving a harmonic equation. The 3D digital model may be updated.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: February 22, 2022
    Assignee: ARKIMOS Ltd
    Inventor: Islam Khasanovich Raslambekov
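    A small sketch of solving a harmonic equation for the gingiva vertices: a Jacobi iteration on the mesh graph with the tooth and contour vertices held fixed. This standard discretisation is an assumption, not necessarily the solver used in the patent.
      import numpy as np

      def harmonic_displacements(n_verts, edges, fixed, n_iters=500):
          """Jacobi iteration for a discrete Laplace (harmonic) equation on a mesh
          graph: fixed vertices keep their prescribed displacement, every other
          vertex converges to the mean displacement of its neighbours.

          edges: list of (i, j) mesh edges, fixed: {vertex index: (3,) displacement}.
          """
          disp = np.zeros((n_verts, 3))
          for i, d in fixed.items():
              disp[i] = d
          neighbours = [[] for _ in range(n_verts)]
          for i, j in edges:
              neighbours[i].append(j)
              neighbours[j].append(i)
          free = [v for v in range(n_verts) if v not in fixed]
          for _ in range(n_iters):
              new = disp.copy()
              for v in free:
                  if neighbours[v]:
                      new[v] = disp[neighbours[v]].mean(axis=0)
              disp = new
          return disp

      # Chain of 5 vertices: vertex 0 moves with the tooth, vertex 4 stays put,
      # and the gingiva vertices in between are interpolated harmonically.
      edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
      print(harmonic_displacements(5, edges, {0: (1.0, 0.0, 0.0), 4: (0.0, 0.0, 0.0)}))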
  • Patent number: 11250616
    Abstract: Methods, systems and apparatuses may provide for technology that generates a point cloud model of an object depicted in image content associated with a plurality of physical cameras, generates a color projection of the point cloud model based on viewpoint information associated with a virtual camera, and excludes background color information from the color projection based on one or more segmentation masks associated with the object. The technology may also exclude peripheral information from the color projection based on at least one of the segmentation mask(s) and the viewpoint information.
    Type: Grant
    Filed: March 27, 2020
    Date of Patent: February 15, 2022
    Assignee: Intel Corporation
    Inventors: Adam Kaplan, Yuval Hovers, Ilan Beer, Ben Raziel
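    A minimal sketch of colouring projected points while excluding background via a segmentation mask; the pinhole projection, nearest-pixel lookup, and mask convention are assumptions.
      import numpy as np

      def colorize_points(points, camera_matrix, image, mask):
          """Project points into a physical camera, take colours from its image, and
          drop points whose projection lands on the background of the mask.

          points: (N, 3), camera_matrix: (3, 4), image: (H, W, 3),
          mask: (H, W) bool, True where the foreground object is.
          """
          h, w = mask.shape
          homog = np.hstack([points, np.ones((len(points), 1))])
          proj = homog @ camera_matrix.T
          px = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
          inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
          keep = inside.copy()
          keep[inside] &= mask[px[inside, 1], px[inside, 0]]   # exclude background colours
          colours = np.zeros((len(points), 3), dtype=image.dtype)
          colours[keep] = image[px[keep, 1], px[keep, 0]]
          return points[keep], colours[keep]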
  • Patent number: 11219829
    Abstract: An example image processing apparatus disposes a virtual camera and a terrain object in a virtual space, and generates grass objects at a predetermined region located with reference to a land horizon that is a boundary between the terrain object and a background as viewed from the virtual camera. A player character is displayed at a position closer to the virtual camera, and the grass objects are generated in the predetermined region located with reference to the land horizon. Therefore, the terrain can be represented to look real, and the player character can be more easily seen.
    Type: Grant
    Filed: November 13, 2019
    Date of Patent: January 11, 2022
    Assignee: Nintendo Co., Ltd.
    Inventors: Makoto Yonezu, Koji Takahashi, Jun Takamura
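    A toy screen-space sketch of the placement rule; reading the land horizon from a terrain visibility mask and spawning grass in a band around it is one possible reading of the idea, not the patented technique.
      import numpy as np

      def grass_positions_near_horizon(terrain_mask, band_px=12, step=4):
          """Pick screen-space spawn points for grass objects in a band just below
          the land horizon (the boundary between terrain and background pixels).

          terrain_mask: (H, W) bool image from the virtual camera, True on terrain.
          Returns a list of (row, col) spawn points.
          """
          h, w = terrain_mask.shape
          spawns = []
          for col in range(0, w, step):
              rows = np.flatnonzero(terrain_mask[:, col])
              if rows.size == 0:
                  continue                      # background-only column: no horizon here
              horizon = rows.min()              # topmost terrain pixel in this column
              for row in range(horizon, min(horizon + band_px, h), step):
                  spawns.append((row, col))
          return spawns

      mask = np.zeros((64, 64), dtype=bool)
      mask[40:, :] = True                        # terrain fills the lower part of the view
      print(len(grass_positions_near_horizon(mask)))   # 48 spawn points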
  • Patent number: 11222449
    Abstract: A method is used in processing graphics in computing environments. A user interface layer receives a request from a user to rasterize an interactive image rendered in a user interface. A rasterizing module rasterizes the interactive image at the user interface layer associated with the user interface. The rasterizing module transmits the rasterized image to a reporting service for reporting out the rasterized image.
    Type: Grant
    Filed: October 24, 2018
    Date of Patent: January 11, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Rakesh Ram Mohan Maddala, Timothy Ramamurthy, Rameshkrishnan Subramanian