Patents Examined by Hilina K Demeter
  • Patent number: 11963741
    Abstract: The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
    Type: Grant
    Filed: January 11, 2023
    Date of Patent: April 23, 2024
    Assignee: Shanghai United Imaging Intelligence Co., Ltd.
    Inventors: Ziyan Wu, Srikrishna Karanam, Changjiang Cai, Georgios Georgakis
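    A minimal, hypothetical sketch of the training idea in the abstract above, assuming a small PyTorch regressor from joint locations to body-model parameters; the layer sizes, parameter counts, and the simple MSE constraint are illustrative rather than the patented implementation. Randomly masking joints stands in for the artificial manipulation of training data that handles partially observed joint locations.
```python
import torch
import torch.nn as nn

NUM_JOINTS, PARAM_DIM = 24, 82          # e.g. 72 pose + 10 shape parameters (illustrative)

class JointToModelRegressor(nn.Module):
    """Maps flattened joint locations to body-model parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, PARAM_DIM),
        )

    def forward(self, joints):              # joints: (B, NUM_JOINTS, 3)
        return self.net(joints.flatten(1))

def mask_joints(joints, drop_prob=0.3):
    """Artificially hide some joints so the model learns from partial observations."""
    keep = (torch.rand(joints.shape[:2]) > drop_prob).float().unsqueeze(-1)
    return joints * keep

model = JointToModelRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
joints = torch.randn(16, NUM_JOINTS, 3)      # stand-in for paired training joint locations
target_params = torch.randn(16, PARAM_DIM)   # stand-in for paired human model parameters
pred = model(mask_joints(joints))
loss = nn.functional.mse_loss(pred, target_params)   # one of several possible constraints
loss.backward()
opt.step()
```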
  • Patent number: 11960641
    Abstract: The present disclosure relates to determining when the head position of a user viewing user interfaces in a computer-generated reality environment is not in a comfortable and/or ergonomic position, and to repositioning the displayed user interface so that the user repositions his or her head to view the user interface at a more comfortable and/or ergonomic head position.
    Type: Grant
    Filed: June 21, 2022
    Date of Patent: April 16, 2024
    Assignee: Apple Inc.
    Inventor: Aaron M. Burns
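    A rough sketch of the repositioning logic described above, assuming head pitch (in degrees) and a dwell time are available from the device's tracking; the comfort band, dwell threshold, placement distance, and the assumption that -z is "forward" are invented for illustration.
```python
import math

COMFORT_PITCH_RANGE = (-20.0, 10.0)    # degrees; illustrative ergonomic band

def needs_reposition(head_pitch_deg: float, dwell_s: float, min_dwell_s: float = 5.0) -> bool:
    """True if the head has been held outside the comfort band long enough."""
    lo, hi = COMFORT_PITCH_RANGE
    return (head_pitch_deg < lo or head_pitch_deg > hi) and dwell_s >= min_dwell_s

def reposition_ui(head_position, target_pitch_deg: float = 0.0, distance: float = 1.5):
    """Place the UI at a comfortable pitch in front of the user's head (-z forward)."""
    x, y, z = head_position
    pitch = math.radians(target_pitch_deg)
    return (x, y + distance * math.sin(pitch), z - distance * math.cos(pitch))

if needs_reposition(head_pitch_deg=-35.0, dwell_s=6.0):
    print(reposition_ui(head_position=(0.0, 1.6, 0.0)))
```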
  • Patent number: 11954779
    Abstract: An animation generation method for tracking a facial expression and a neural network training method thereof are provided. The animation generation method for tracking a facial expression includes: driving a first role model according to an expression parameter set to obtain a virtual expression image corresponding to the expression parameter set; applying a plurality of real facial images respectively to the virtual expression image corresponding to the facial expression to generate a plurality of real expression images; training a tracking neural network according to the expression parameter set and the real expression images; inputting a target facial image to the trained tracking neural network to obtain a predicted expression parameter set; and using the predicted expression parameter set to control a second role model.
    Type: Grant
    Filed: March 8, 2022
    Date of Patent: April 9, 2024
    Assignee: DIGITAL DOMAIN ENTERPRISES GROUP LIMITED
    Inventors: Chin-Yu Chien, Yu-Hsien Li, Yi-Chi Cheng
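    An illustrative sketch of the tracking-network training step described above, assuming a tiny PyTorch CNN as the tracking neural network; the image size, parameter dimension, and L1 loss are assumptions, and random tensors stand in for the real expression images and expression parameter sets.
```python
import torch
import torch.nn as nn

PARAM_DIM = 52                       # illustrative number of expression parameters

class ExpressionTracker(nn.Module):
    """CNN that regresses an expression parameter set from a face image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, PARAM_DIM)

    def forward(self, images):                     # images: (B, 3, H, W)
        return self.head(self.features(images).flatten(1))

tracker = ExpressionTracker()
opt = torch.optim.Adam(tracker.parameters(), lr=1e-4)

# Stand-ins for real expression images synthesized with the first role model
# and the expression parameter sets that drove it.
real_expression_images = torch.rand(8, 3, 128, 128)
expression_params = torch.rand(8, PARAM_DIM)

loss = nn.functional.l1_loss(tracker(real_expression_images), expression_params)
loss.backward()
opt.step()

# Inference: the predicted parameter set can then drive a second role model.
predicted_params = tracker(torch.rand(1, 3, 128, 128))
```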
  • Patent number: 11954790
    Abstract: The present invention discloses a Web-side real-time hybrid rendering method combined with ray tracing, a device, and computer equipment. The method includes: acquiring three-dimensional scene data and textures transformed from the three-dimensional scene data; for the parts whose rendering results converge slowly and are low-frequency, employing rasterization rendering according to the three-dimensional scene data; for the parts whose rendering results converge quickly and are high-frequency, employing ray tracing rendering according to the textures; and mixing the rendering results of the current frame and historical frames according to the rasterization rendering result and/or the ray tracing rendering result. In this way, the problem of low rendering realism on the Web side is solved, and high-quality global illumination effects can be achieved on the Web side at relatively low cost, enhancing the realism of Web-side rendering.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: April 9, 2024
    Assignee: HANGZHOU QUNHE INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Jiaxiang Zheng, Qing Ye, Rui Tang
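    A simplified sketch of the final mixing step described above, assuming NumPy images, a mask that marks the ray-traced regions, and an exponential blend between the current and historical frames; the blend factor and image sizes are illustrative.
```python
import numpy as np

def combine_frame(raster_rgb, raytrace_rgb, raytrace_mask):
    """Compose the frame from rasterized and ray-traced regions (mask selects ray tracing)."""
    return np.where(raytrace_mask[..., None], raytrace_rgb, raster_rgb)

def temporal_blend(current_rgb, history_rgb, alpha=0.1):
    """Exponential blend of the current frame with the accumulated history frame."""
    return alpha * current_rgb + (1.0 - alpha) * history_rgb

h, w = 4, 4
raster = np.random.rand(h, w, 3)      # stand-in for the rasterization rendering result
traced = np.random.rand(h, w, 3)      # stand-in for the ray tracing rendering result
mask = np.random.rand(h, w) > 0.5     # regions rendered with ray tracing
history = np.zeros((h, w, 3))
history = temporal_blend(combine_frame(raster, traced, mask), history)
```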
  • Patent number: 11941743
    Abstract: A system and method for generating a set of samples stratified across two-dimensional elementary intervals of a two-dimensional space is disclosed within the application. A computer-implemented technique for generating the set of samples includes selecting an elementary interval associated with a stratification of the two-dimensional space, initializing at least one data structure that indicates valid regions within the elementary interval based on other samples previously placed within the two-dimensional space, and generating a sample in a valid region of the elementary interval utilizing the at least one data structure to identify the valid region prior to generating the sample. In some embodiments, the data structures comprise a pair of binary trees. The process can be repeated for each elementary interval of a selected stratification to generate the set of stratified two-dimensional samples.
    Type: Grant
    Filed: July 20, 2022
    Date of Patent: March 26, 2024
    Assignee: NVIDIA Corporation
    Inventor: Matthew Milton Pharr
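    A much-simplified sketch of placing a sample in a valid elementary interval, assuming a plain set of occupied grid cells in place of the pair of binary trees mentioned in the abstract; the grid stratification and rejection of occupied cells are illustrative only.
```python
import random

def place_sample(existing, n_x, n_y):
    """Place one sample in a randomly chosen elementary interval (n_x x n_y grid cell)
    that no existing sample already occupies.  The patented approach tracks valid regions
    with a pair of binary trees; a plain set of occupied cells stands in for them here."""
    occupied = {(int(x * n_x), int(y * n_y)) for x, y in existing}
    free = [(i, j) for i in range(n_x) for j in range(n_y) if (i, j) not in occupied]
    if not free:
        raise ValueError("no valid elementary interval left in this stratification")
    i, j = random.choice(free)
    return ((i + random.random()) / n_x, (j + random.random()) / n_y)

samples = [(0.1, 0.7), (0.6, 0.2)]
samples.append(place_sample(samples, n_x=4, n_y=4))
```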
  • Patent number: 11922552
    Abstract: There is provided a data processing device including: a data acquisition unit configured to acquire animation data in which clothing moves according to a motion of a wearer's body wearing the clothing; and a data update unit configured to update the animation data based on three types of elements having ratios in accordance with a specified type of emotion. There is provided a data processing method that is executed by a computer, the data processing method including: acquiring animation data in which clothing moves according to a motion of a wearer's body wearing the clothing; and updating the animation data based on three types of elements having ratios in accordance with a specified characteristic of a motion.
    Type: Grant
    Filed: March 13, 2023
    Date of Patent: March 5, 2024
    Assignee: SoftBank Corp.
    Inventors: Yuko Ishiwaka, Kazuto Suda, Sho Kakazu
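    A speculative sketch of the update step described above; the abstract does not name the three element types or their ratios, so the labels, emotion table, and per-frame offsets below are invented for illustration.
```python
# The abstract does not name the three element types, so they are labeled A/B/C here.
EMOTION_RATIOS = {              # illustrative ratios per specified emotion
    "joy":     (0.6, 0.3, 0.1),
    "sadness": (0.2, 0.3, 0.5),
}

def update_animation(frames, element_offsets, emotion):
    """Blend three per-frame element offsets into the clothing animation data
    using ratios selected by the specified emotion."""
    wa, wb, wc = EMOTION_RATIOS[emotion]
    a, b, c = element_offsets
    return [f + wa * da + wb * db + wc * dc
            for f, da, db, dc in zip(frames, a, b, c)]

frames = [0.0, 0.1, 0.2]                        # stand-in for cloth animation data
offsets = ([0.01] * 3, [0.02] * 3, [0.03] * 3)  # stand-ins for the three element types
updated = update_animation(frames, offsets, "joy")
```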
  • Patent number: 11921880
    Abstract: Aspects of the subject disclosure may include, for example, a method for training a deep learning model that includes encoding a content item; generating a blended image by combining a background image and the encoded content; decoding the blended image to generate decoded content corresponding to the content item; and defining or specifying a loss function related to the deep learning model. The method also includes determining values of training parameters for the deep learning model to minimize the loss function, thereby obtaining a trained deep learning model. The method also includes an information concealing procedure using the trained deep learning model to conceal user content by encoding the user content and blending the encoded user content with a user-selected image; the information concealing procedure is substantially independent of the user-selected image. Other embodiments are disclosed.
    Type: Grant
    Filed: August 11, 2022
    Date of Patent: March 5, 2024
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Wei Wang, Mikhail Istomin
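    A toy sketch of the encode/blend/decode pipeline and loss described above, assuming single-layer PyTorch networks and random tensors in place of real content items and background images; the residual scaling and the two MSE terms are assumptions, not the claimed loss function.
```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encodes a content item into a low-amplitude residual the size of the image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3 + 1, 3, 3, padding=1)

    def forward(self, image, content):
        return torch.tanh(self.net(torch.cat([image, content], dim=1))) * 0.1

class Decoder(nn.Module):
    """Recovers the content item from the blended image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, blended):
        return self.net(blended)

encoder, decoder = Encoder(), Decoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

background = torch.rand(4, 3, 64, 64)        # stand-in for user-selected background images
content = torch.rand(4, 1, 64, 64)           # stand-in for content items to conceal

blended = background + encoder(background, content)    # blending step
decoded = decoder(blended)

# Loss: blended image stays close to the background; decoded content stays close to the input.
loss = nn.functional.mse_loss(blended, background) + nn.functional.mse_loss(decoded, content)
loss.backward()
opt.step()
```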
  • Patent number: 11836839
    Abstract: The present disclosure provides a method for generating an animation figure, a device and a storage medium. The method includes: acquiring an image including at least one target object; acquiring position information about key points of the target object in the image; determining target angle information about lines connecting the key points in accordance with the position information about the key points; and adjusting a predetermined animation figure in accordance with the target angle information. A pose of a target animation figure acquired through adjustment is identical to a pose of the target object in the image, and the target animation figure corresponds to the target object.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: December 5, 2023
    Assignee: BOE Technology Group Co., Ltd.
    Inventors: Rui Zheng, Fengshuo Hu
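    A small sketch of the angle computation described above, assuming 2D key points and a hypothetical bone table; applying the angles to the predetermined animation figure is only indicated by a print statement.
```python
import math

def line_angle(p1, p2):
    """Angle (degrees) of the line connecting two key points, measured from the x-axis."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def target_angles(keypoints, bones):
    """Target angle information per named bone, from detected key-point positions."""
    return {name: line_angle(keypoints[a], keypoints[b]) for name, (a, b) in bones.items()}

# Hypothetical key points and bone definitions; a real detector would supply the positions.
keypoints = {"shoulder": (0.0, 0.0), "elbow": (1.0, 0.5), "wrist": (2.0, 0.2)}
bones = {"upper_arm": ("shoulder", "elbow"), "forearm": ("elbow", "wrist")}

for bone, angle in target_angles(keypoints, bones).items():
    # A real implementation would rotate the corresponding joint of the predetermined figure.
    print(f"set {bone} of the animation figure to {angle:.1f} degrees")
```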
  • Patent number: 11836943
    Abstract: The present application provides a method and apparatus of creating a face model, and an electronic device. The method includes: obtaining at least one key point feature of a current face image by performing key point detection on the current face image; obtaining a target bone parameter set matching the current face image according to the at least one key point feature; and creating a virtual three-dimensional face model corresponding to the current face image according to the target bone parameter set and a standard three-dimensional face model.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: December 5, 2023
    Assignee: Beijing Sensetime Technology Development Co., Ltd.
    Inventors: Shengwei Xu, Quan Wang, Jingtan Piao, Chen Qian
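    An illustrative sketch of the final model-creation step, assuming a linear mapping from key-point features to bone parameters and a linear deformation of a standard face model; the matrices and dimensions are random stand-ins, not the trained mapping.
```python
import numpy as np

NUM_KEYPOINT_FEATURES, NUM_BONE_PARAMS, NUM_VERTICES = 212, 48, 500   # illustrative sizes

# Stand-ins for a learned keypoint-to-bone-parameter mapping and a standard face model.
regression_matrix = np.random.randn(NUM_BONE_PARAMS, NUM_KEYPOINT_FEATURES) * 0.01
standard_vertices = np.zeros((NUM_VERTICES, 3))
bone_basis = np.random.randn(NUM_BONE_PARAMS, NUM_VERTICES, 3) * 0.01

def create_face_model(keypoint_features):
    """Match key-point features to target bone parameters, then deform the standard model."""
    bone_params = regression_matrix @ keypoint_features
    return standard_vertices + np.tensordot(bone_params, bone_basis, axes=1)

custom_vertices = create_face_model(np.random.randn(NUM_KEYPOINT_FEATURES))
```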
  • Patent number: 11838518
    Abstract: Improved video compression and video streaming systems and methods are disclosed for environments where camera motion is common, such as cameras incorporated into head-mounted displays. This is accomplished by combining a 3D representation of the shape of the user's environment (walls, floor, ceiling, furniture, etc.), image data, and data representative of changes in the location and orientation (pose) of the camera between successive image frames, thereby reducing data bandwidth needed to send streaming video in the presence of camera motion.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: December 5, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Forrest Power Trepte
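    A compact sketch of the pose-compensated prediction this scheme relies on, assuming a per-pixel depth map as the 3D representation of the environment: the previous frame is forward-warped using the camera pose change, so only the pose delta and a (mostly small) residual need to be streamed. Intrinsics, resolution, and motion values are illustrative.
```python
import numpy as np

def reproject(prev_rgb, prev_depth, K, R, t):
    """Forward-warp the previous frame into the current camera using the scene depth
    and the camera pose change (R, t); unfilled holes are left at zero."""
    h, w = prev_depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T       # 3 x N
    pts = np.linalg.inv(K) @ (pix * prev_depth.reshape(1, -1))              # back-project
    proj = K @ (R @ pts + t.reshape(3, 1))                                  # into new view
    uv = np.round(proj[:2] / proj[2]).astype(int)
    pred = np.zeros_like(prev_rgb)
    ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    pred[uv[1, ok], uv[0, ok]] = prev_rgb.reshape(-1, prev_rgb.shape[-1])[ok]
    return pred

h, w = 32, 32
K = np.array([[30.0, 0, w / 2], [0, 30.0, h / 2], [0, 0, 1.0]])
prev_rgb = np.random.rand(h, w, 3)
prev_depth = np.full((h, w), 2.0)
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])      # small camera motion between frames
current_rgb = np.random.rand(h, w, 3)             # stand-in for the new camera frame
residual = current_rgb - reproject(prev_rgb, prev_depth, K, R, t)
# Only the pose delta and the residual would need to be streamed.
```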
  • Patent number: 11829526
    Abstract: In an embodiment, a processing system provides an augmented reality object for display by a head-mounted device (HMD) worn by a user. The processing system provides an augmented reality graphic for display by the HMD on a plane and overlaid on the augmented reality object. The processing system determines a gaze direction of the user using sensor data captured by a sensor of the HMD. Responsive to determining that the gaze direction intersects with the augmented reality graphic on the plane and remains intersecting for at least a period of time, the processing system determines a position of intersection between the gaze direction and the augmented reality graphic on the plane. The processing system provides a modified version of the augmented reality object for display by the HMD according to the position of intersection during the period of time.
    Type: Grant
    Filed: November 12, 2021
    Date of Patent: November 28, 2023
    Assignee: SENTIAR, INC.
    Inventors: Walter Blume, Michael K. Southworth, Jennifer N. Avari Silva, Jonathan R. Silva
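    A minimal sketch of the gaze-dwell interaction described above, assuming a ray-plane intersection, a hypothetical bounds test for the AR graphic, and an illustrative one-second dwell threshold.
```python
import numpy as np

def gaze_plane_intersection(origin, direction, plane_point, plane_normal):
    """Point where the gaze ray meets the plane carrying the AR graphic, or None."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-6:
        return None
    s = np.dot(plane_point - origin, plane_normal) / denom
    return origin + s * direction if s > 0 else None

DWELL_S = 1.0
dwell = 0.0
for dt, gaze_dir in [(0.3, [0, 0, -1.0]), (0.4, [0, 0, -1.0]), (0.4, [0, 0, -1.0])]:
    hit = gaze_plane_intersection(np.zeros(3), np.array(gaze_dir),
                                  np.array([0, 0, -2.0]), np.array([0, 0, 1.0]))
    # Hypothetical bounds test for the graphic on its plane.
    inside = hit is not None and abs(hit[0]) < 0.5 and abs(hit[1]) < 0.5
    dwell = dwell + dt if inside else 0.0
    if dwell >= DWELL_S:
        print("modify the AR object according to intersection", hit)
```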
  • Patent number: 11823315
    Abstract: This application belongs to the field of computer technologies, and provides an animation making method and apparatus, a computing device and a storage medium, to improve execution efficiency of animation making. In response to a pose selection instruction for a non-reference skeleton pose, a target plug-in node is invoked, the target plug-in node obtaining a non-reference skeleton shape model corresponding to the non-reference skeleton pose from a non-reference skeleton shape model set based on the pose selection instruction; a target skeleton pose for an animated character is determined based on a parameter input instruction for a parameter of the target skeleton pose; and a target skeleton shape model of the target skeleton pose is generated based on the obtained non-reference skeleton shape model of the non-reference skeleton pose.
    Type: Grant
    Filed: February 25, 2022
    Date of Patent: November 21, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LTD
    Inventors: Jie Liu, Jingxiang Li, Hua Zhang
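    A speculative sketch of generating a target skeleton shape model from a selected non-reference shape model, assuming per-vertex corrective offsets blended by a pose parameter; the pose names, vertex counts, and weights are invented for illustration.
```python
import numpy as np

# Stand-ins for the non-reference skeleton shape model set (per-vertex offsets per pose).
shape_model_set = {
    "elbow_bent_90": np.random.randn(100, 3) * 0.01,
    "elbow_bent_140": np.random.randn(100, 3) * 0.01,
}
reference_shape = np.zeros((100, 3))     # stand-in for the reference skeleton shape model

def generate_target_shape(selected_pose, pose_weight):
    """Blend the selected non-reference shape model onto the reference shape
    according to the input pose parameter."""
    return reference_shape + pose_weight * shape_model_set[selected_pose]

target_shape = generate_target_shape("elbow_bent_90", pose_weight=0.75)
```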
  • Patent number: 11810263
    Abstract: A system for manufacturing a customized product includes at least one processor programmed and/or configured to: display an image of a first product having first dimensions on a user interface of a computing device of a user; receive an augmented reality or virtual reality (AR/VR) request; in response to receiving the AR/VR request, capture image data from an image capturing device of the computing device and display the image data on the computing device; overlay the image of the first product over a portion of the image data captured by the image capturing device; and resize the overlaying image of the first product based on user input from a computing device of the user, such that second dimensions are associated with the first product.
    Type: Grant
    Filed: January 7, 2022
    Date of Patent: November 7, 2023
    Assignee: Baru, Inc.
    Inventor: Augustine K. Go
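    A trivial sketch of the resize step, assuming uniform scaling of the first dimensions by the factor implied by the user's resize input; units and values are illustrative.
```python
def resized_dimensions(first_dimensions, scale_factor):
    """Second dimensions associated with the product after the user resizes the overlay."""
    w, h, d = first_dimensions
    return (w * scale_factor, h * scale_factor, d * scale_factor)

# e.g. the user scales the overlaid product image up to 1.2x its displayed size
print(resized_dimensions((60.0, 40.0, 45.0), scale_factor=1.2))   # centimetres, illustrative
```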
  • Patent number: 11810234
    Abstract: In embodiments of a method and apparatus for processing avatar usage data, a user obtains the avatar usage data, so as to use a plurality of avatars. If the user selects a target avatar from the plurality of avatars, the target avatar is loaded in the target round, and the permission data of the target avatar associated with the avatar usage data is updated. Through embodiments of the method and apparatus, the user does not need to spend a lot of time collecting different avatars when the user wants to use different avatars, thereby reducing the complexity of user operations, simplifying operation steps, and improving the efficiency of human-computer interaction.
    Type: Grant
    Filed: October 28, 2021
    Date of Patent: November 7, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Yongrong Jiao
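    A small, hypothetical sketch of the permission-data update when a target avatar is loaded; the dictionary layout and the uses-left counter are invented for illustration.
```python
# Hypothetical permission bookkeeping for avatar usage data.
usage_data = {"user_1": {"avatars": ["knight", "mage", "ranger"], "uses_left": 3}}

def load_target_avatar(user_id, target_avatar):
    """Load the selected avatar for the round and update the associated permission data."""
    record = usage_data[user_id]
    if target_avatar not in record["avatars"] or record["uses_left"] <= 0:
        raise PermissionError("avatar not covered by the user's usage data")
    record["uses_left"] -= 1          # update permission data tied to the usage data
    return f"loaded {target_avatar} for this round"

print(load_target_avatar("user_1", "mage"))
```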
  • Patent number: 11803995
    Abstract: This application discloses a target tracking method and apparatus, a terminal device, and a storage medium. The target tracking method includes: determining a path node nearest to the virtual character as a travelling node according to a starting position of the virtual character; moving toward the travelling node until arriving at the travelling node; determining the current next path node to be reached by the virtual character according to the path node where the virtual character is currently located, the path node currently nearest to the target being tracked, and a pre-stored path routing matrix, where the path routing matrix is a matrix storing an optimal next path node from a first path node to a second path node; and moving to the next path node to approach the target being tracked. Embodiments of this application can improve the target tracking efficiency of the virtual character in the VR scene.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: October 31, 2023
    Assignee: SHENZHEN INSTITUTE OF INFORMATION TECHNOLOGY
    Inventor: Shouxiang Xu
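    A concrete sketch of a pre-stored path routing matrix as described above, assuming a Floyd-Warshall successor (next-hop) table computed offline; node names and edge weights are illustrative. At runtime each step is a single matrix lookup rather than a path search.
```python
INF = float("inf")

def build_routing_matrix(nodes, edges):
    """Floyd-Warshall with successor tracking: next_hop[a][b] is the optimal
    next path node when travelling from node a toward node b."""
    dist = {a: {b: (0 if a == b else INF) for b in nodes} for a in nodes}
    next_hop = {a: {b: None for b in nodes} for a in nodes}
    for a, b, w in edges:                     # undirected path graph
        dist[a][b] = dist[b][a] = w
        next_hop[a][b], next_hop[b][a] = b, a
    for k in nodes:
        for a in nodes:
            for b in nodes:
                if dist[a][k] + dist[k][b] < dist[a][b]:
                    dist[a][b] = dist[a][k] + dist[k][b]
                    next_hop[a][b] = next_hop[a][k]
    return next_hop

nodes = ["n0", "n1", "n2", "n3"]
edges = [("n0", "n1", 1.0), ("n1", "n2", 1.0), ("n2", "n3", 1.0), ("n0", "n3", 5.0)]
routing = build_routing_matrix(nodes, edges)

current = "n0"        # path node nearest to the virtual character (the travelling node)
target_node = "n3"    # path node currently nearest to the target being tracked
while current != target_node:
    current = routing[current][target_node]   # one lookup per step, no runtime search
    print("move toward", current)
```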
  • Patent number: 11789527
    Abstract: A wearable or mobile device includes a camera to capture an image of a scene with a face and a display for displaying an image overlaid on the face. Execution of programming by a processor configures the device to perform functions, including functions to capture, via a camera of an eyewear device, an image of a scene including a face, identify the face in the image of the scene, track positional information of the face with respect to the eyewear device, generate an overlay image responsive to the positional information, and present the overlay image on an image display.
    Type: Grant
    Filed: November 17, 2020
    Date of Patent: October 17, 2023
    Assignee: Snap Inc.
    Inventors: Kostiantyn Bilous, Stanislav Minakov
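    A minimal sketch of fitting the overlay to the tracked face, assuming a face bounding box per frame and a fixed overlay image size; the numbers are placeholders for tracker output.
```python
def overlay_transform(face_box, overlay_size):
    """Scale and translation that place an overlay image onto the tracked face box."""
    x0, y0, x1, y1 = face_box
    sx = (x1 - x0) / overlay_size[0]
    sy = (y1 - y0) / overlay_size[1]
    return sx, sy, x0, y0

# Hypothetical face boxes tracked across frames relative to the eyewear display.
for face_box in [(100, 80, 220, 220), (104, 82, 224, 222)]:
    sx, sy, tx, ty = overlay_transform(face_box, overlay_size=(128, 128))
    print(f"draw overlay scaled ({sx:.2f}, {sy:.2f}) at ({tx}, {ty})")
```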
  • Patent number: 11782513
    Abstract: The technology disclosed relates to user interfaces for controlling augmented reality (AR) or virtual reality (VR) environments. Real and virtual objects can be seamlessly integrated to form an augmented reality by tracking motion of one or more real objects within view of a wearable sensor system. Switching the AR/VR presentation on or off so that the user can interact with the surrounding real world, for example to drink some soda, can be handled with a convenient mode-switching gesture associated with switching between operational modes in a VR/AR-enabled device.
    Type: Grant
    Filed: June 11, 2021
    Date of Patent: October 10, 2023
    Assignee: Ultrahaptics IP Two Limited
    Inventor: David Samuel Holz
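    A tiny sketch of the mode-switching behaviour, assuming the wearable sensor system emits a recognized gesture label; the label name and the two presentation states are illustrative.
```python
class ARPresentation:
    """Toggles the AR/VR presentation when the mode-switching gesture is recognized."""
    def __init__(self):
        self.presenting = True

    def on_gesture(self, gesture_name):
        if gesture_name == "mode_switch":    # hypothetical label from the gesture tracker
            self.presenting = not self.presenting
        return "AR/VR presentation on" if self.presenting else "pass-through (real world)"

hmd = ARPresentation()
for gesture in ["pinch", "mode_switch", "mode_switch"]:
    print(gesture, "->", hmd.on_gesture(gesture))
```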
  • Patent number: 11763528
    Abstract: A portal, which is an object for an avatar to move between virtual reality spaces, can be installed with an information processing device. The information processing device includes circuitry configured to receive an installation instruction for a portal, which is an object for an avatar to move from a first VR space to a second VR space, in the first VR space, the installation instruction specifying the first VR space and the second VR space; and perform, in response to receipt of the installation instruction, installation processing for installing the portal in the first VR space in one or more devices, including a device other than the device that accepted the input of the installation instruction.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: September 19, 2023
    Assignee: CLUSTER, INC.
    Inventors: Daiki Handa, Hiroyuki Tomine
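    A simplified sketch of the fan-out step described above: once the installation instruction is received, the portal is installed in the first VR space on every participating device, not only the one that accepted the instruction. The device and portal structures are invented for illustration.
```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    portals: list = field(default_factory=list)

def install_portal(devices, first_space, second_space):
    """Install the portal in the first VR space on every device, including devices
    other than the one that accepted the installation instruction."""
    portal = {"in_space": first_space, "destination": second_space}
    for device in devices:
        device.portals.append(portal)

devices = [Device("accepting_device"), Device("other_device")]
install_portal(devices, first_space="plaza", second_space="gallery")
print(devices[1].portals)      # the other device also has the portal installed
```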
  • Patent number: 11763495
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently modifying a generative adversarial neural network using few-shot adaptation to generate digital images corresponding to a target domain while maintaining diversity of a source domain and realism of the target domain. In particular, the disclosed systems utilize a generative adversarial neural network with parameters learned from a large source domain. The disclosed systems preserve relative similarities and differences between digital images in the source domain using a cross-domain distance consistency loss. In addition, the disclosed systems utilize an anchor-based strategy to encourage different levels or measures of realism over digital images generated from latent vectors in different regions of a latent space.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: September 19, 2023
    Assignee: Adobe Inc.
    Inventors: Utkarsh Ojha, Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman, Alexei A. Efros
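    An approximate sketch of a cross-domain distance consistency loss in PyTorch, comparing pairwise feature similarities produced by the source and adapted generators for the same batch of latent vectors; the feature dimensions and KL formulation are assumptions rather than the exact published loss.
```python
import torch
import torch.nn.functional as F

def distance_consistency_loss(source_feats, adapted_feats):
    """Encourage the adapted generator to preserve the relative similarities and
    differences that the source generator produces for the same latent vectors."""
    def pairwise_softmax(feats):                    # (B, D) -> per-row similarity distribution
        sims = F.cosine_similarity(feats.unsqueeze(1), feats.unsqueeze(0), dim=-1)
        mask = ~torch.eye(feats.size(0), dtype=torch.bool)     # drop self-similarity
        return F.softmax(sims[mask].view(feats.size(0), -1), dim=-1)

    p_source = pairwise_softmax(source_feats)
    p_adapted = pairwise_softmax(adapted_feats)
    return F.kl_div(p_adapted.log(), p_source, reduction="batchmean")

# Stand-ins for intermediate generator features for one batch of shared latent vectors.
source_feats = torch.randn(8, 256)     # from the frozen source-domain generator
adapted_feats = torch.randn(8, 256)    # from the generator being adapted to the target domain
loss = distance_consistency_loss(source_feats, adapted_feats)
```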
  • Patent number: 11721303
    Abstract: Aspects of the present invention relate to providing see-through computer display optics with improved content presentation. The see-through computer display includes an ambient light sensor adapted to measure environmental scene light in an area that forms the background for digital content presented in the see-through computer display, and a processor adapted to invert a color channel parameter of the digital content based on data from the ambient light sensor.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: August 8, 2023
    Assignee: Mentor Acquisition One, LLC
    Inventor: John D. Haddick
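    A minimal sketch of the inversion rule described above, assuming a lux reading from the ambient light sensor, a fixed brightness threshold, and inversion of the red channel; all three choices are illustrative.
```python
def adjust_content_color(content_rgb, ambient_lux, threshold_lux=400.0):
    """Invert a color channel of the digital content when the ambient light sensor
    reports a bright background scene (threshold and channel choice are illustrative)."""
    if ambient_lux < threshold_lux:
        return content_rgb
    r, g, b = content_rgb
    return (255 - r, g, b)        # invert the red channel for contrast against the scene

print(adjust_content_color((200, 120, 40), ambient_lux=850.0))
```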