Patents Examined by Xilin Guo
  • Patent number: 11727611
    Abstract: A relational terrain for social/virtual worlds is provided. A user-owned property (villa) may be composed of one or more terrain masses (tiles). Relational links may be established between a villa and multiple other villas. Relational maps display a villa and the villas with which it has relational links. Villas exist in relational space, which reflects the relations between villas and allows a villa to be in more than one location at the same time, maximizing interaction and property value. A portal can be created to support specialized functionality/interaction, allowing data to be passed and/or changed when a user moves from one villa to a destination villa.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: August 15, 2023
    Inventor: Jim Schwaiger
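
The relational-space idea above is essentially a graph: a property can be linked to many others, so it appears on several relational maps at once, and portals carry data as users move between properties. A minimal sketch follows; the Villa/RelationalSpace/traverse_portal names and the payload handling are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Villa:
    name: str
    tiles: int = 1                          # terrain masses composing the property
    links: set = field(default_factory=set)

class RelationalSpace:
    def __init__(self):
        self.villas = {}

    def add(self, villa):
        self.villas[villa.name] = villa

    def link(self, a, b):
        # Relational links are symmetric: each villa appears on the other's map.
        self.villas[a].links.add(b)
        self.villas[b].links.add(a)

    def relational_map(self, name):
        # A villa's relational map is itself plus every villa it is linked to.
        return {name, *self.villas[name].links}

def traverse_portal(user_state, destination, payload):
    # Illustrative portal: data may be passed and/or changed on transit.
    return dict(user_state, location=destination, **payload)

space = RelationalSpace()
for n in ("alpha", "beta", "gamma"):
    space.add(Villa(n))
space.link("alpha", "beta")
space.link("alpha", "gamma")
print(space.relational_map("alpha"))                      # alpha appears alongside beta and gamma
print(traverse_portal({"location": "alpha"}, "beta", {"coins": 5}))
```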
  • Patent number: 11721055
    Abstract: A character animation motion control method and device are disclosed. A character animation playing method includes extracting first actions based on a state of a character, extracting second actions based on the state, selecting an action from among the first actions and the second actions, and updating the state based on the selected action.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: August 8, 2023
    Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Junyong Noh, Kyungmin Cho, Chaelin Kim
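
The playing method above amounts to querying two action extractors with the character state, selecting from the combined candidates, and updating the state. A toy sketch, with hypothetical extractors and a placeholder selection policy:

```python
import random

def extract_first_actions(state):
    # Illustrative extractor keyed on one aspect of the state.
    return ["walk", "run"] if state["on_ground"] else []

def extract_second_actions(state):
    # Second extractor contributes additional candidates.
    return ["jump"] if state["stamina"] > 0 else ["idle"]

def select_action(candidates, state):
    # Placeholder policy; the abstract leaves the selection criterion open.
    return random.choice(candidates)

def step(state):
    candidates = extract_first_actions(state) + extract_second_actions(state)
    action = select_action(candidates, state)
    return dict(state, last_action=action,
                stamina=state["stamina"] - (1 if action == "jump" else 0))

state = {"on_ground": True, "stamina": 2, "last_action": None}
for _ in range(3):
    state = step(state)
    print(state["last_action"], state)
```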
  • Patent number: 11721057
    Abstract: A terminal device for playing a game includes a display screen for displaying animation of the game, and processing circuitry. The processing circuitry detects a frame rate inadequacy of animation frames that are generated according to animation features respectively associated with animation files. Then, the processing circuitry obtains preconfigured values respectively associated with the animation files. A preconfigured value associated with an animation file is indicative of performance influence for turning off an animation feature associated with the animation file. Further, the processing circuitry turns off one or more animation features according to the preconfigured values associated with the animation files until an adequate frame rate is achieved.
    Type: Grant
    Filed: July 7, 2022
    Date of Patent: August 8, 2023
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Wei Xie
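
The fallback described above can be read as a greedy loop: while the frame rate is inadequate, turn off the animation feature whose preconfigured value marks it as most beneficial to disable. A hedged sketch, with invented feature names, values, and frame-rate gains:

```python
def degrade_animations(features, measured_fps, target_fps):
    """features: {animation_file: (preconfigured_value, estimated_fps_gain)}"""
    disabled = []
    # One plausible ordering: highest performance influence first.
    for name, (value, fps_gain) in sorted(
            features.items(), key=lambda kv: kv[1][0], reverse=True):
        if measured_fps >= target_fps:
            break
        disabled.append(name)                 # turn off this feature's animation
        measured_fps += fps_gain              # illustrative estimate of regained frames
    return disabled, measured_fps

features = {
    "cape_cloth.anim": (0.9, 6),
    "crowd_idle.anim": (0.7, 4),
    "hero_face.anim": (0.1, 2),
}
off, fps = degrade_animations(features, measured_fps=22, target_fps=30)
print(off, fps)   # e.g. ['cape_cloth.anim', 'crowd_idle.anim'] 32
```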
  • Patent number: 11714890
    Abstract: Systems and methods for knowledge-based authentication are disclosed. The systems and methods can include an authentication system. The authentication system can generate authentication questions using object data received from an augmented reality system associated with a user. The authentication system can authenticate the user using the authentication questions. The augmented reality system may acquire image data, detect and validate objects in the image data, and provide object data for the objects to the authentication system. The augmented reality system may provide an indication to the user when an object is detected and may receive, in response, a user-acknowledgement of detection.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: August 1, 2023
    Assignee: Capital One Services, LLC
    Inventors: Joshua Edwards, Lukiih Cuan, Eric Loucks
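
A minimal sketch of the question-generation and verification flow, assuming hypothetical object records (label, location, user acknowledgement) supplied by the AR client:

```python
import random

def generate_questions(object_records, n=2):
    # Only user-acknowledged detections are used as question material.
    acknowledged = [o for o in object_records if o["acknowledged"]]
    chosen = random.sample(acknowledged, k=min(n, len(acknowledged)))
    return [{"prompt": f"Where did you last see your {o['label']}?",
             "answer": o["location"]} for o in chosen]

def authenticate(questions, responses):
    return all(r.strip().lower() == q["answer"].lower()
               for q, r in zip(questions, responses))

records = [
    {"label": "guitar", "location": "living room", "acknowledged": True},
    {"label": "bicycle", "location": "garage", "acknowledged": True},
]
qs = generate_questions(records)
print(authenticate(qs, [q["answer"] for q in qs]))   # True for correct answers
```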
  • Patent number: 11710284
    Abstract: A system comprising a user device that includes: sensors configured to sense data related to a physical environment of the user device; one or more displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the user device, determine a pose of the user device with respect to a physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: July 25, 2023
    Assignee: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
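
The rendering step above reduces to expressing the anchored content in the device's coordinate frame from the two poses. A sketch under the assumption that poses are 4x4 homogeneous transforms in a shared world frame; the matrices are illustrative:

```python
import numpy as np

def pose_matrix(rotation, translation):
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m

def content_in_device_frame(world_T_device, world_T_anchor, anchor_offset):
    # Express the anchored content in the device frame so the display can
    # project it at the predetermined location relative to the anchor.
    device_T_world = np.linalg.inv(world_T_device)
    world_T_content = world_T_anchor @ pose_matrix(np.eye(3), anchor_offset)
    return device_T_world @ world_T_content

world_T_device = pose_matrix(np.eye(3), [0.0, 1.5, 0.0])    # headset 1.5 m up
world_T_anchor = pose_matrix(np.eye(3), [1.0, 1.0, -2.0])   # placed virtual object
print(content_in_device_frame(world_T_device, world_T_anchor, [0, 0.2, 0])[:3, 3])
```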
  • Patent number: 11704768
    Abstract: Methods and systems are provided for using temporal supersampling to increase a displayed resolution associated with a peripheral region of a foveated rendering view. A method for enabling reconstitution of higher resolution pixels from a low resolution sampling region for fragment data is provided. The method includes an operation for receiving a fragment from a rasterizer of a GPU and for applying temporal supersampling to the fragment with the low resolution sampling region over a plurality of prior frames to obtain a plurality of color values. The method further includes an operation for reconstituting a plurality of high resolution pixels in a buffer based on the plurality of color values obtained via the temporal supersampling. Moreover, the method includes an operation for sending the plurality of high resolution pixels for display.
    Type: Grant
    Filed: August 10, 2021
    Date of Patent: July 18, 2023
    Assignee: Sony Interactive Entertainment Inc.
    Inventors: Andrew Young, Chris Ho, Jeffrey Roger Stafford
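
One way to picture the reconstitution step: each prior frame samples the low-resolution region with a different sub-pixel jitter, and the accumulated colors are scattered into a 2x2 block of high-resolution pixels. The jitter pattern and buffer layout below are assumptions, not Sony's implementation:

```python
import numpy as np

def reconstitute(low_res_frames, jitters):
    """low_res_frames: list of (H, W) arrays; jitters: matching (dy, dx) offsets."""
    h, w = low_res_frames[0].shape
    high = np.zeros((h * 2, w * 2), dtype=low_res_frames[0].dtype)
    for frame, (dy, dx) in zip(low_res_frames, jitters):
        # Each jittered low-res sample lands in one quadrant of its 2x2 block.
        high[dy::2, dx::2] = frame
    return high

jitters = [(0, 0), (0, 1), (1, 0), (1, 1)]                    # four prior frames
frames = [np.full((2, 2), fill_value=i, dtype=float) for i in range(4)]
print(reconstitute(frames, jitters))                          # 4x4 high-res buffer
```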
  • Patent number: 11694382
    Abstract: A method of generating or modifying poses in an animation of a character is disclosed. Variable numbers and types of supplied inputs are combined into a single input. The variable numbers and types of supplied inputs correspond to one or more effector constraints for one or more joints of the character. The single input is transformed into a pose embedding. The pose embedding includes a machine-learned representation of the single input. The pose embedding is expanded into a pose representation output. The pose representation output includes local rotation data and global position data for the one or more joints of the character.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: July 4, 2023
    Assignee: Unity IPR ApS
    Inventors: Florent Benjamin Bocquelet, Dominic Laflamme, Boris Oreshkin
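
A hedged sketch of the encode-pool-decode shape described above; the layer sizes, mean pooling, and quaternion output are assumptions, not Unity's trained model:

```python
import torch
import torch.nn as nn

class PoseCompleter(nn.Module):
    def __init__(self, effector_dim=7, embed_dim=64, num_joints=24):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(effector_dim, embed_dim), nn.ReLU(),
                                    nn.Linear(embed_dim, embed_dim))
        self.decode_rot = nn.Linear(embed_dim, num_joints * 4)   # local rotations (quaternions)
        self.decode_pos = nn.Linear(embed_dim, num_joints * 3)   # global positions

    def forward(self, effectors):
        # effectors: (num_effectors, effector_dim). Pooling combines a variable
        # number of constraints into a single pose embedding.
        embedding = self.encode(effectors).mean(dim=0)
        return (self.decode_rot(embedding).view(-1, 4),
                self.decode_pos(embedding).view(-1, 3))

model = PoseCompleter()
constraints = torch.randn(3, 7)              # e.g. three supplied effector constraints
rotations, positions = model(constraints)
print(rotations.shape, positions.shape)      # torch.Size([24, 4]) torch.Size([24, 3])
```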
  • Patent number: 11688147
    Abstract: A system comprising a user device that includes: sensors configured to sense data related to a physical environment of the user device; one or more displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the user device, determine a pose of the user device with respect to a physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: June 27, 2023
    Assignee: Campfire 3D, Inc.
    Inventors: Avi Bar-Zeev, Alexander Tyurin, Gerald V. Wright, Jr.
  • Patent number: 11679506
    Abstract: One embodiment of the present invention sets forth a technique for generating simulated training data for a physical process. The technique includes receiving, as input to at least one machine learning model, a first simulated image of a first object, wherein the at least one machine learning model includes mappings between simulated images generated from models of physical objects and real-world images of the physical objects. The technique also includes performing, by the at least one machine learning model, one or more operations on the first simulated image to generate a first augmented image of the first object. The technique further includes transmitting the first augmented image to a training pipeline for an additional machine learning model that controls a behavior of the physical process.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: June 20, 2023
    Assignee: AUTODESK, INC.
    Inventors: Hui Li, Evan Patrick Atherton, Erin Bradner, Nicholas Cote, Heather Kerrick
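
The data flow above, sketched with a stand-in for the learned sim-to-real mapping; the augment() body below is a placeholder, not Autodesk's model:

```python
import numpy as np

def augment(simulated_image, rng):
    # Placeholder for the learned image-to-image mapping: sensor-like noise and
    # a brightness shift stand in for the simulated-to-real translation.
    noisy = simulated_image + rng.normal(0, 0.02, simulated_image.shape)
    return np.clip(noisy * rng.uniform(0.8, 1.2), 0.0, 1.0)

def build_training_set(simulated_images, seed=0):
    # Augmented images are what gets handed to the downstream training pipeline.
    rng = np.random.default_rng(seed)
    return [augment(img, rng) for img in simulated_images]

renders = [np.full((64, 64, 3), 0.5) for _ in range(4)]   # simulated renders of an object
augmented = build_training_set(renders)
print(len(augmented), augmented[0].shape)
```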
  • Patent number: 11681913
    Abstract: A method of updating a neural network model by a terminal device includes training a local model using a local data set collected by the terminal device to generate a trained local model; receiving, from a server, an independent identically distributed (i.i.d.) global data set, the i.i.d. global data set being a data set sampled for each class in a plurality of predefined classes; implementing the trained local model by inputting the i.i.d. global data set and transmitting final inference results of the implemented trained local model to the server; and receiving, from the server, a global model updated based on the final inference results.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: June 20, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Songyi Han
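
The terminal-side protocol above, sketched with placeholder training, inference, and transport; note that only inference results on the i.i.d. global set leave the device, not raw local data:

```python
def update_cycle(local_model, local_data, server):
    local_model = train(local_model, local_data)            # 1. train on local data
    global_set = server.fetch_iid_global_set()              # 2. i.i.d. sample per class
    results = [infer(local_model, x) for x in global_set]   # 3. run trained local model
    server.upload_inference_results(results)                # 4. share inference results only
    return server.fetch_updated_global_model()              # 5. receive updated global model

def train(model, data):      # placeholder trainer
    return model

def infer(model, sample):    # placeholder inference
    return 0

class DummyServer:
    # Stand-in for the server side named in the abstract.
    def fetch_iid_global_set(self):
        return [("class_a", 0.1), ("class_b", 0.9)]          # one sample per predefined class
    def upload_inference_results(self, results):
        self.results = results
    def fetch_updated_global_model(self):
        return "global-model-v2"

print(update_cycle("local-model-v1", local_data=[], server=DummyServer()))
```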
  • Patent number: 11676343
    Abstract: The following relates generally to light detection and ranging (LIDAR) and artificial intelligence (AI). In some embodiments, a system: receives LIDAR data generated from a LIDAR camera; measures a plurality of dimensions of a home based upon processor analysis of the LIDAR data; builds a 3D model of the home based upon the measured plurality of dimensions; and displays a representation of the 3D model by visually navigating through the 3D model.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: June 13, 2023
    Assignee: STATE FARM MUTUAL AUTOMOBILE INSURANCE COMPANY
    Inventors: Nicholas Carmelo Marotta, Laura Kennedy, JD Johnson Willingham
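
A simplified sketch, assuming the LIDAR data is available as an Nx3 point cloud: dimensions are measured as axis-aligned extents and turned into a box-shaped stand-in for the 3D home model:

```python
import numpy as np

def measure_dimensions(points):
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    return {"length": length, "width": width, "height": height, "origin": mins}

def build_box_model(dims):
    # Eight corners of an axis-aligned box: a trivial stand-in for the 3D model.
    o = dims["origin"]
    d = np.array([dims["length"], dims["width"], dims["height"]])
    corners = [o + d * np.array([x, y, z]) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    return np.stack(corners)

cloud = np.random.default_rng(0).uniform([0, 0, 0], [8.0, 6.0, 2.5], size=(1000, 3))
dims = measure_dimensions(cloud)
print(dims)
print(build_box_model(dims).shape)   # (8, 3)
```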
  • Patent number: 11663785
    Abstract: A method for creating an augmented reality scene, the method comprising, by a computing device with a processor and a memory: receiving first video image data and second video image data; calculating an error value for a current pose between the two images by comparing the pixel colors in the first video image data and the second video image data; warping pixel coordinates into the second video image data through the use of a map of depth hypotheses for each pixel; varying the pose between the first video image data and the second video image data to find a warp that corresponds to a minimum error value; and calculating, using the estimated poses, a new depth measurement for each pixel that is visible in both the first video image data and the second video image data.
    Type: Grant
    Filed: April 26, 2021
    Date of Patent: May 30, 2023
    Assignee: HOLOBUILDER, INC.
    Inventors: Simon Heinen, Lars Tholen, Mostafa Akbari-Hochberg, Gloria Indra Dhewani Abidin
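
A one-dimensional toy version of the photometric alignment above: pixel coordinates are warped into the second image using per-pixel depth hypotheses and a candidate pose, and the pose with the smallest color error is kept. Real implementations work on 2-D images with full SE(3) poses:

```python
import numpy as np

def warp(coords, depths, translation):
    # Toy warp: parallax shrinks with depth (pure horizontal camera motion).
    return coords + translation / depths

def photometric_error(img1, img2, depths, translation):
    warped = np.round(warp(np.arange(img1.size), depths, translation)).astype(int)
    valid = (warped >= 0) & (warped < img2.size)
    return np.mean((img1[valid] - img2[warped[valid]]) ** 2)

img1 = np.array([0.0, 0.2, 0.9, 0.4, 0.1])
depths = np.full(img1.size, 2.0)             # depth hypotheses for each pixel
img2 = np.roll(img1, 1)                      # second view, shifted by one pixel
candidates = np.linspace(0.0, 4.0, 9)        # candidate poses (translations)
errors = [photometric_error(img1, img2, depths, t) for t in candidates]
print(candidates[int(np.argmin(errors))])    # ~2.0, i.e. a 1-pixel shift at depth 2
```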
  • Patent number: 11645798
    Abstract: Systems and methods are disclosed for generating a source image sequence using an image sensor of a computing device, the source image sequence comprising a plurality of source images depicting a head and face; identifying driving image sequence data to modify face image feature data in the source image sequence; generating, using an image transformation neural network, a modified source image sequence comprising a plurality of modified source images depicting modified versions of the head and face; and storing the modified source image sequence on the computing device.
    Type: Grant
    Filed: June 1, 2021
    Date of Patent: May 9, 2023
    Assignee: Snap Inc.
    Inventors: Sergey Demyanov, Aleksei Podkin, Aliaksandr Siarohin, Aleksei Stoliar, Sergey Tulyakov
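
A minimal pipeline sketch with placeholder components; the per-frame transform below stands in for the image transformation neural network and is not Snap's trained model:

```python
import numpy as np

def transform_frame(frame, driving_params):
    # Placeholder for the image transformation neural network: a brightness
    # shift stands in for the driving-data-controlled modification.
    return np.clip(frame + driving_params["delta"], 0.0, 1.0)

def modify_sequence(source_frames, driving_sequence):
    return [transform_frame(f, d) for f, d in zip(source_frames, driving_sequence)]

source = [np.full((8, 8), 0.4) for _ in range(3)]      # captured source image sequence
driving = [{"delta": d} for d in (0.0, 0.1, 0.2)]      # driving image sequence data
modified = modify_sequence(source, driving)
print(len(modified), round(float(modified[2].mean()), 2))   # 3 0.6
```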
  • Patent number: 11645497
    Abstract: Systems and methods relate to a network model to apply an effect to an image such as an augmented reality effect (e.g. makeup, hair, nail, etc.). The network model uses a conditional cycle-consistent generative image-to-image translation model to translate images from a first domain space where the effect is not applied to a second continuous domain space where the effect is applied. In order to render arbitrary effects (e.g. lipsticks) not seen at training time, the effect's space is represented as a continuous domain (e.g. a conditional variable vector) learned by encoding simple swatch images of the effect, such as are available as product swatches, as well as a null effect. The model is trained end-to-end in an unsupervised fashion. To condition a generator of the model, convolutional conditional batch normalization (CCBN) is used to apply the vector encoding the reference swatch images that represent the makeup properties.
    Type: Grant
    Filed: November 14, 2019
    Date of Patent: May 9, 2023
    Assignee: L'Oreal
    Inventors: Eric Elmoznino, He Ma, Irina Kezele, Edmund Phung, Alex Levinshtein, Parham Aarabi
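
A hedged sketch of conditional batch normalization driven by a swatch-derived condition vector; the abstract names a convolutional CCBN, so the linear conditioning and sizes below are a simplification, not L'Oreal's network:

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(cond_dim, num_features)   # predicted scale
        self.beta = nn.Linear(cond_dim, num_features)    # predicted shift

    def forward(self, x, cond):
        # cond: (N, cond_dim) vector encoding the reference swatch (or null effect).
        out = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1 + g) * out + b

layer = ConditionalBatchNorm2d(num_features=32, cond_dim=8)
features = torch.randn(4, 32, 16, 16)        # generator feature maps
swatch_code = torch.randn(4, 8)              # encoded lipstick swatch
print(layer(features, swatch_code).shape)    # torch.Size([4, 32, 16, 16])
```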
  • Patent number: 11640687
    Abstract: Mesh-tracking based dynamic 4D modeling for machine learning deformation training includes: using a volumetric capture system for high-quality 4D scanning, using mesh-tracking to establish temporal correspondences across a 4D scanned human face and full-body mesh sequence, using mesh registration to establish spatial correspondences between a 4D scanned human face and full-body mesh and a 3D CG physical simulator, and training surface deformation as a delta from the physical simulator using machine learning. The deformation for natural animation can be predicted and synthesized using the standard MoCAP animation workflow. Machine learning based deformation synthesis and animation using the standard MoCAP animation workflow includes using single-view or multi-view 2D videos of MoCAP actors as input, solving 3D model parameters (3D solving) for animation (deformation not included), and, given the 3D model parameters solved by 3D solving, predicting 4D surface deformation from ML training.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: May 2, 2023
    Assignee: Sony Group Corporation
    Inventors: Kenji Tashiro, Qing Zhang
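
A minimal sketch of the delta idea: a small network maps solved 3D model parameters to per-vertex offsets that are added to the physics simulator's mesh. Architecture and sizes are assumptions:

```python
import torch
import torch.nn as nn

class DeformationDelta(nn.Module):
    def __init__(self, num_vertices, param_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(param_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_vertices * 3))

    def forward(self, solved_params):
        # solved_params: (N, param_dim) 3D model parameters from MoCap solving.
        n = solved_params.shape[0]
        return self.net(solved_params).view(n, -1, 3)    # per-vertex deltas

num_vertices = 500
model = DeformationDelta(num_vertices)
params = torch.randn(2, 16)                      # solved 3D model parameters
simulated = torch.randn(2, num_vertices, 3)      # physical simulator mesh vertices
deformed = simulated + model(params)             # simulator output plus learned delta
print(deformed.shape)                            # torch.Size([2, 500, 3])
```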
  • Patent number: 11640698
    Abstract: Systems, methods, and computer program products for generating, rendering and/or displaying a computer-generated virtual environment as augmented reality and/or virtual reality. The physical boundaries containing the active area where the virtual environments are rendered and displayed are established. Based on the constraints and characteristics of the physical boundaries, virtual environments are mapped using assets from real, historical and/or fictitious locations. The assets can be dynamically re-sized and distanced to fit constraints of the physical space. Based on historical levels of interactivity with the selected environments, the virtual assets can be sorted and tagged as points of interest or filler assets, then mapped to the virtual environment using GAN technology and other machine learning techniques to re-create unique versions of the selected environments.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: May 2, 2023
    Assignee: International Business Machines Corporation
    Inventors: Caroline Li, Jacob Greenleaf, Nimra Tariq, Zachary A. Silverstein, Clement Decrop
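
An illustrative sketch of the re-sizing step only: assets from a selected location are uniformly scaled so the mapped environment fits the measured play area; the interest tagging and GAN-based re-creation are omitted, and the data layout is an assumption:

```python
def fit_assets_to_area(assets, source_extent, play_area_extent):
    """assets: {name: (x, y, footprint)}; extents: (width, depth) in meters."""
    scale = min(play_area_extent[0] / source_extent[0],
                play_area_extent[1] / source_extent[1])
    # Positions and footprints shrink together so relative layout is preserved.
    return {name: (x * scale, y * scale, footprint * scale)
            for name, (x, y, footprint) in assets.items()}, scale

landmarks = {"fountain": (10.0, 25.0, 4.0), "kiosk": (30.0, 5.0, 2.0)}
scaled, scale = fit_assets_to_area(landmarks, source_extent=(40.0, 30.0),
                                   play_area_extent=(6.0, 4.0))
print(scale, scaled)   # everything shrunk to fit a 6 m x 4 m active area
```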
  • Patent number: 11640692
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate a three-dimensional (3D) model based on depth data and a segmentation mask. For example, a process may include obtaining depth data including depth values for pixels of a first image, obtaining a segmentation mask associated with a second image, the segmentation mask identifying a portion of the second image associated with an object, and generating a 3D model based on the depth data and the segmentation mask.
    Type: Grant
    Filed: January 27, 2021
    Date of Patent: May 2, 2023
    Assignee: Apple Inc.
    Inventors: Praveen Gowda Ippadi Veerabhadre Gowda, Quinton L. Petty
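
A hedged sketch of one plausible reading: depth pixels selected by the segmentation mask are back-projected through pinhole intrinsics to form the object's point cloud, from which a 3D model could be built. The intrinsics below are illustrative:

```python
import numpy as np

def masked_point_cloud(depth, mask, fx, fy, cx, cy):
    v, u = np.nonzero(mask)                 # pixel rows/cols inside the object mask
    z = depth[v, u]
    x = (u - cx) * z / fx                   # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) points for meshing

depth = np.full((480, 640), 1.2, dtype=np.float32)
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:340] = True               # region the mask attributes to the object
points = masked_point_cloud(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(points.shape)                         # (3200, 3)
```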
  • Patent number: 11620785
    Abstract: Disclosed is a method of localizing a user operating a plurality of sensing components, preferably in an augmented or mixed reality environment. The method comprises transmitting pose data from a fixed control and processing module and receiving the pose data at a first sensing component; the pose data is then transformed into a first component-relative pose in a coordinate frame based on the control and processing module. A display unit in communication with the first sensing component is updated with the transformed first component-relative pose to render virtual content with improved environmental awareness.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: April 4, 2023
    Assignee: Magic Leap, Inc.
    Inventor: Paul M. Greco
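
A minimal sketch of the coordinate-frame transform, assuming poses are 4x4 homogeneous matrices and the control-and-processing module's pose defines the reference frame:

```python
import numpy as np

def make_pose(translation):
    m = np.eye(4)
    m[:3, 3] = translation
    return m

def to_module_frame(module_T_world, world_T_component):
    # Re-express the sensing component's pose relative to the control module.
    return module_T_world @ world_T_component

world_T_module = make_pose([2.0, 0.0, 0.0])          # fixed control and processing module
world_T_component = make_pose([3.0, 1.0, 0.0])       # first sensing component
module_T_world = np.linalg.inv(world_T_module)
relative = to_module_frame(module_T_world, world_T_component)
print(relative[:3, 3])                               # [1. 1. 0.] relative to the module
```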
  • Patent number: 11615302
    Abstract: In one embodiment, a computer-implemented method includes acquiring sequential user behavior data including one-dimensional data. The user behavior data is associated with a user. The method includes abstracting features from the sequential user behavior data to cover short-term and long-term timeframes. The method includes determining one or more properties of the user based on the features.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: March 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Xiangyuan Zhao, Hong-hoe Kim, Peng Zhou, Yingnan Zhu, Hyun Chul Lee
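
An illustrative sketch: the one-dimensional behavior sequence is summarized over a short window and a long window, and a toy rule derives a property from the two feature sets. The window lengths and the rule are assumptions:

```python
import numpy as np

def abstract_features(events, short_window=7, long_window=30):
    short = events[-short_window:]
    long = events[-long_window:]
    return {"short_mean": float(np.mean(short)),
            "short_max": float(np.max(short)),
            "long_mean": float(np.mean(long)),
            "long_trend": float(np.mean(short) - np.mean(long))}

def infer_property(features):
    # Placeholder property rule based on the short-vs-long-term trend.
    return "increasingly_active" if features["long_trend"] > 0 else "steady"

daily_usage_minutes = np.concatenate([np.full(23, 20.0), np.full(7, 45.0)])
feats = abstract_features(daily_usage_minutes)
print(feats, infer_property(feats))
```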
  • Patent number: 11605207
    Abstract: There is provided an information processing device, an information processing method, and a program for enabling display of AR content that has been generated for a predetermined environment and is applied to the real environment. The information processing device according to one aspect of the present technology generates a template environment map showing the environment of a three-dimensional space that is to be a template and in which a predetermined object exists, and generates template content that is a template to be used in generating display content for displaying an object superimposed on the environment of a real space, the template content including information about the object disposed at a position in the three-dimensional space, the position having a predetermined positional relationship with the predetermined object. The present technology can be applied to a transmissive HMD, for example.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: March 14, 2023
    Assignee: Sony Group Corporation
    Inventors: Gaku Narita, Takashi Seno, Tomoya Ishikawa, Yohsuke Kaji
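
A hedged sketch of applying template content to a real environment: the template stores an object's offset from a predetermined anchor object (e.g. a table), and the same offset is applied to the anchor detected in the real environment map. The data layout is an assumption:

```python
import numpy as np

def place_from_template(template_env, template_content, real_env):
    anchor = template_content["anchor"]                          # predetermined object, e.g. "table"
    offset = template_content["position"] - template_env[anchor]
    return real_env[anchor] + offset                             # position to render in the real space

template_env = {"table": np.array([0.0, 0.75, 0.0])}             # template environment map
template_content = {"anchor": "table", "position": np.array([0.3, 0.85, 0.0])}
real_env = {"table": np.array([2.0, 0.70, -1.0])}                # detected in the real environment
print(place_from_template(template_env, template_content, real_env))   # [ 2.3  0.8 -1. ]
```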