Patents Examined by Peter Hoang
  • Patent number: 11961179
    Abstract: One embodiment provides for a graphics processing unit comprising a processing cluster to perform multi-rate shading via coarse pixel shading and output shaded coarse pixels for processing by a post-shader pixel processing pipeline.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: April 16, 2024
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Subhajit Dasgupta, Srivallaba Mysore, Michael J. Norris, Vasanth Ranganathan, Joydeep Ray
  • Patent number: 11954781
    Abstract: Embodiments of the present disclosure provide a video processing method, a video processing apparatus, an electronic device and a computer-readable storage medium. The video processing method includes: displaying an initial image which includes a first-style image; in response to a first trigger event, displaying an image switching animation which presents the dynamic process of switching from the initial image to a target image which includes a second-style image; and in response to completion of the image switching animation, displaying the target image. A switching image in the image switching animation includes a first image area, a second image area and a third image area; the first image area covers the entire image area of the image switching animation through position movement in a time-sharing manner, and changes shape as it moves.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: April 9, 2024
    Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
    Inventor: Shuyun Yang
  • Patent number: 11954794
    Abstract: Systems and methods for retrieval of augmented parameters for an artificial intelligence (AI)-based character are provided. An example method includes receiving, from a user via a user interface, at least one keyword describing the AI-based character; retrieving, from at least one data source and based on the at least one keyword, the augmented parameters describing the AI-based character; and generating, based on the augmented parameters, an AI-based character model corresponding to the AI-based character. The at least one data source includes a database configured to store records associated with the AI-based character, an online search service, and a set of clusters associated with a type of a feature of the AI-based character and at least one hidden prompt corresponding to the type of the feature. The type of the feature includes one of the following: a voice, a dialog style, an emotional state, an age, and a temperament.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: April 9, 2024
    Assignee: Theai, Inc.
    Inventors: Ilya Gelfenbeyn, Mikhail Ermolenko, Kylan Gibbs
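    Illustrative note: the following minimal Python sketch mirrors the kind of keyword-driven retrieval flow the abstract above describes, pulling character parameters from a stored record, a per-feature cluster, and an associated hidden prompt; every source, field name, and value here is an invented placeholder, not the patented system.

        # Hypothetical sketch of keyword-driven retrieval of augmented character parameters.
        DATABASE = {"pirate": {"age": 45, "temperament": "gruff"}}
        CLUSTERS = {"voice": {"pirate": "low and raspy"},
                    "dialog_style": {"pirate": "archaic seafaring slang"}}
        HIDDEN_PROMPTS = {"voice": "Speak as if shouting over wind and waves."}

        def retrieve_augmented_parameters(keyword):
            """Gather parameters for an AI-based character from several sources."""
            params = dict(DATABASE.get(keyword, {}))           # stored records
            for feature_type, cluster in CLUSTERS.items():     # clusters keyed by feature type
                if keyword in cluster:
                    params[feature_type] = cluster[keyword]
                    prompt = HIDDEN_PROMPTS.get(feature_type)  # hidden prompt for that feature type
                    if prompt:
                        params.setdefault("hidden_prompts", []).append(prompt)
            return params

        print(retrieve_augmented_parameters("pirate"))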
  • Patent number: 11948223
    Abstract: Methods and systems are described. A system includes a redundant shader pipe array that performs rendering calculations on data provided thereto and a shader pipe array that includes a plurality of shader pipes, each of which performs rendering calculations on data provided thereto. The system also includes a circuit that identifies a defective shader pipe of the plurality of shader pipes in the shader pipe array. In response to identifying the defective shader pipe, the circuit generates a signal. The system also includes a redundant shader switch. The redundant shader switch receives the generated signal, and, in response to receiving the generated signal, transfers the data for the defective shader pipe to the redundant shader pipe array.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael J. Mantor, Jeffrey T. Brady, Angel E. Socarras
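    Illustrative note: the patent above describes hardware, but the routing decision can be modeled in a few lines; in this hedged Python sketch, work destined for a pipe flagged as defective is redirected to the redundant pipe array instead. All names and the round-robin assignment are invented for illustration.

        def dispatch(work_items, defective_pipe, num_pipes):
            """Assign work items round-robin, rerouting the defective pipe's share."""
            assignments = []
            for i, item in enumerate(work_items):
                pipe = i % num_pipes
                if pipe == defective_pipe:                    # signal identifying the defective pipe
                    assignments.append(("redundant_pipe", item))
                else:
                    assignments.append((f"pipe_{pipe}", item))
            return assignments

        print(dispatch(["tri_a", "tri_b", "tri_c", "tri_d"], defective_pipe=1, num_pipes=4))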
  • Patent number: 11948235
    Abstract: Disclosed is a system for encoding and/or rendering animations without temporal or spatial restrictions. The system may encode an animation as a point cloud with first data points having a first time value and different positional and non-positional values, and second data points having a second time value and different positional and non-positional values. Rendering the animation may include generating and presenting a first image for the first time value of the animation based on the positional and non-positional values of the first data points, and generating and presenting a second image for the second time value of the animation by changing a visualization at a first position in the first image based on the positional values of a data point from the second data points corresponding to the first position and on that data point's non-positional values differing from the visualization.
    Type: Grant
    Filed: October 2, 2023
    Date of Patent: April 2, 2024
    Assignee: Illuscio, Inc.
    Inventors: William Peake, III, Joseph Bogacz
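    Illustrative note: a minimal sketch of the time-tagged encoding idea in the abstract above, in which a later frame is produced by changing only the positions whose data points carry the new time value; the field names and the dictionary standing in for an image are assumptions, not the encoder or renderer itself.

        def render_frame(previous_image, points, time_value):
            """Apply all data points tagged with `time_value` onto the prior image."""
            image = dict(previous_image)                     # start from the prior visualization
            for point in points:
                if point["t"] == time_value:                 # only points for this time value
                    image[(point["x"], point["y"])] = point["color"]
            return image

        points = [
            {"t": 0, "x": 0, "y": 0, "color": "red"},
            {"t": 0, "x": 1, "y": 0, "color": "blue"},
            {"t": 1, "x": 1, "y": 0, "color": "green"},      # only this position changes at t=1
        ]
        frame0 = render_frame({}, points, 0)
        frame1 = render_frame(frame0, points, 1)
        print(frame0)
        print(frame1)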
  • Patent number: 11941759
    Abstract: A computer-implemented method that allows users to evaluate the densities of images and search for abnormalities in three-dimensional space. The voxel buildup uses a series of two-dimensional images and evaluates every pixel against a predetermined threshold value specified by the user at runtime. A singular optimized voxel-generated mesh is spawned to represent the combined locations of every pixel.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: March 26, 2024
    Assignee: Intuitive Research and Technology Corporation
    Inventors: Chanler Crowe, Michael Jones, Kyle Russell, Michael Yohe
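    Illustrative note: the thresholded voxel build-up described above can be sketched in a few lines of NumPy; every pixel in a stack of 2D slices is compared against a user-supplied threshold and the surviving voxel coordinates are collected. The array shapes and threshold value are assumptions, and the mesh-generation step is omitted; this is not the patented implementation.

        import numpy as np

        def build_voxels(slices, threshold):
            """Return (slice, row, col) coordinates of pixels that meet the threshold.

            slices: array of shape (num_slices, height, width) holding stacked 2D images.
            threshold: scalar chosen by the user at runtime.
            """
            volume = np.asarray(slices, dtype=float)
            mask = volume >= threshold            # evaluate every pixel against the threshold
            return np.argwhere(mask)              # coordinates of all retained voxels

        # Toy usage: three 4x4 slices with a bright region in the middle slice.
        stack = np.zeros((3, 4, 4))
        stack[1, 1:3, 1:3] = 200.0
        print(build_voxels(stack, threshold=128.0))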
  • Patent number: 11935192
    Abstract: Technologies for 3D virtual environment placement of 3D models based on 2D images are disclosed. At least an outline of a 3D virtual environment may be generated. A first 2D image of one or more 2D images may be identified. A first product may be identified from the first 2D image. At least one 3D model of one or more 3D models based, at least, on the first product may be determined. A first location for placement of the first product in the 3D virtual environment may be identified. The at least one 3D model may be added within the 3D virtual environment based, at least, on the first location. The 3D virtual environment may be rendered into a visually interpretable form. A second product may be identified from the first 2D image, forming a first grouping of products. A starting element for the first grouping of products may be determined.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: March 19, 2024
    Assignee: Marxent Labs LLC
    Inventors: Bret Besecker, Barry Besecker, Jeffrey L. Cowgill, Jr., Jonathan Jekeli
  • Patent number: 11927753
    Abstract: Systems and methods disclosed provide a virtual reality experience, including: a set of motorized pads for coupling to the feet of a user; a means of communication between the set of motorized pads and a computing environment operating a virtual reality headset; such that the set of motorized pads is configured to provide pressure on the user's feet as an avatar within a virtual environment traverses the environment. Systems and methods disclosed further provide a multiuser virtual reality experience. Systems and methods disclosed further provide a multiuser experience including generation of a common environment viewable in the virtual reality headsets, and recording of a tour including audio data as the first user describes the common environment.
    Type: Grant
    Filed: July 28, 2023
    Date of Patent: March 12, 2024
    Inventor: Mark D. Wieczorek
  • Patent number: 11928767
    Abstract: Embodiments of the present disclosure provide a method for audio-driven character lip sync, a model for audio-driven character lip sync, and a training method therefor. A target dynamic image is obtained by acquiring a character image of a target character and speech for generating the target dynamic image, processing the character image and the speech into trainable image-audio data, and mixing the image-audio data with auxiliary data for training. When a large amount of sample data needs to be obtained for training in different scenarios, a video of another character speaking is used as an auxiliary video for processing, so as to obtain the auxiliary data. The auxiliary data, which replaces non-general sample data, and other data are input into a model in a preset ratio for training. The auxiliary data may improve the training of the model's synthetic lip sync action, so that no parts unrelated to the synthetic lip sync action are involved during training.
    Type: Grant
    Filed: June 21, 2023
    Date of Patent: March 12, 2024
    Assignee: NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.
    Inventors: Huapeng Sima, Zheng Liao
  • Patent number: 11921976
    Abstract: A display method, a display device, electronic equipment and a storage medium are provided. The display method comprises: acquiring menu data, target display position data of the menu data on a display screen, and source data, wherein the source data includes N channels of first display data, and the first display data has a preset size; conducting format conversion on the menu data to obtain M channels of second display data with a preset size; fusing the first display data with the second display data according to the target display position to obtain third display data; and displaying the third display data on the display screen.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: March 5, 2024
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yanfu Li, Lihua Geng, Qingguo Yang
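    Illustrative note: the fusion step above can be pictured as compositing the format-converted menu data into the source display data at the target display position; the following NumPy sketch uses single-channel arrays and simple replacement, both assumptions made purely for illustration.

        import numpy as np

        def fuse_display_data(source, menu, target_pos):
            """Overlay menu pixels onto source display data at (row, col) target_pos."""
            fused = source.copy()
            row, col = target_pos
            height, width = menu.shape
            fused[row:row + height, col:col + width] = menu   # third display data
            return fused

        source = np.zeros((8, 8), dtype=np.uint8)             # first display data
        menu = np.full((3, 3), 255, dtype=np.uint8)           # converted second display data
        print(fuse_display_data(source, menu, (2, 2)))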
  • Patent number: 11918306
    Abstract: The technology described in this document can be embodied in a method of displaying images of portions of a human body on a display device. The method includes receiving a representation of a plurality of images that includes images of at least two different modalities, and location information corresponding to at least a subset of the plurality of images. A first image of a first modality is displayed on the display device in accordance with the corresponding location information. A second image of a second modality is overlaid on the first image in accordance with corresponding location information. At least a third image is overlaid on the first image in accordance with corresponding location information, the third image being of the second modality, and the second and third images being displayed concurrently for at least a period of time.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: March 5, 2024
    Assignee: INTUITIVE SURGICAL OPERATIONS, INC.
    Inventor: Mahdi Azizian
  • Patent number: 11915487
    Abstract: Systems and methods to improve machine learning by explicitly over-fitting environmental data obtained by an imaging system, such as a monocular camera, are disclosed. The system includes training self-supervised depth and pose networks on monocular visual data collected from a certain area over multiple passes. Pose and depth networks may be trained by extracting data from multiple images of a single environment or trajectory, allowing the system to overfit the image data.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: February 27, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Adrien David Gaidon
  • Patent number: 11900672
    Abstract: Devices, systems and processes for an integrated internal and external camera system that enhances the passenger experience in vehicles are described. One example method for enhancing the passenger experience includes capturing a first set of images of an area around the vehicle using an external camera system, capturing a second set of images of one or more passengers inside the vehicle using an internal camera system, recognizing at least one gesture made by the one or more passengers based on the second set of images, identifying an object or a location external to the vehicle based on the first set of images and the at least one gesture, and displaying information related to the object or the location to the one or more passengers.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: February 13, 2024
    Assignee: ALPINE ELECTRONICS OF SILICON VALLEY, INC.
    Inventors: Rocky Chau-Hsiung Lin, Thomas Yamasaki, Koichiro Kanda, Diego Rodriguez Risco, Alexander Joseph Ryan, Samah Najeeb, Samir El Aouar
  • Patent number: 11900520
    Abstract: In an exemplary process for specifying an entrance or exit effect in a computer-generated reality environment, in response to a user entering or exiting the computer-generated reality environment, a transition effect is provided.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: February 13, 2024
    Assignee: Apple Inc.
    Inventors: Clément Pierre Nicolas Boissière, Samuel Lee Iglesias, James McIlree
  • Patent number: 11899208
    Abstract: Systems and methods disclosed provide a virtual reality experience, including: a set of motorized pads for coupling to the feet of a user; a means of communication between the set of motorized pads and a computing environment operating a virtual reality headset; such that the set of motorized pads is configured to provide pressure on the user's feet as an avatar within a virtual environment traverses the environment. Systems and methods disclosed further provide a multiuser virtual reality experience. Systems and methods disclosed further provide a multiuser experience including generation of a common environment viewable in the virtual reality headsets, and recording of a tour including audio data as the first user describes the common environment.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 13, 2024
    Inventor: Mark D. Wieczorek
  • Patent number: 11885971
    Abstract: An information processing device including a display unit, a detector, and a first control unit, and a method of using the same. The display unit may be a head-mounted display. The display unit is capable of providing the user with a field of view of a real space and a virtual object. The detector detects an azimuth of the display unit around at least one axis, and display of the virtual object is controlled based on the detected azimuth.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: January 30, 2024
    Assignee: SONY CORPORATION
    Inventors: Hirotaka Ishikawa, Takeshi Iwatsu
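    Illustrative note: a hedged sketch of the kind of azimuth test the abstract above implies, deciding whether to draw a virtual object from the detected azimuth of the display around a vertical axis; the field-of-view width and degree convention are assumptions, not values from the patent.

        def object_visible(display_azimuth_deg, object_azimuth_deg, fov_deg=90.0):
            """Return True if the object's azimuth falls inside the display's field of view."""
            diff = (object_azimuth_deg - display_azimuth_deg + 180.0) % 360.0 - 180.0
            return abs(diff) <= fov_deg / 2.0

        print(object_visible(display_azimuth_deg=10.0, object_azimuth_deg=40.0))   # True
        print(object_visible(display_azimuth_deg=10.0, object_azimuth_deg=170.0))  # False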
  • Patent number: 11875424
    Abstract: A point cloud data processing method and device, a computer device and a storage medium are provided. The method includes: acquiring point cloud data, and constructing a corresponding neighboring point set for each of the data points in the point cloud data; calculating Hausdorff distances between the neighboring point set and a pre-constructed kernel point cloud to obtain a distance matrix; calculating a convolution of the neighboring point set with the distance matrix and a network weight matrix in a Hausdorff convolution layer in an encoder, to obtain high-dimensional point cloud features, the encoder and a decoder being two parts of a deep learning network; and reducing the feature dimension of the high-dimensional point cloud features through the decoder, so that a classifier performs semantic classification on the point cloud data according to the object point cloud features obtained by the dimension reduction.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: January 16, 2024
    Assignee: Shenzhen University
    Inventors: Hui Huang, Pengdi Huang
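    Illustrative note: to make the distance-matrix step above concrete, this NumPy sketch computes, for each point of a pre-constructed kernel point cloud, the distance to its nearest point in a local neighborhood (one direction of the Hausdorff distance), then applies a placeholder weight vector in place of the learned network weights. Shapes, the toy data, and the weighting are assumptions, not the patented network.

        import numpy as np

        def hausdorff_distances(neighbors, kernel):
            """For each kernel point, the distance to its nearest neighborhood point.

            neighbors: (k, 3) points around one data point; kernel: (m, 3) kernel points.
            """
            diff = kernel[:, None, :] - neighbors[None, :, :]    # (m, k, 3)
            pairwise = np.linalg.norm(diff, axis=-1)             # (m, k) pairwise distances
            return pairwise.min(axis=1)                          # (m,) nearest-point distances

        rng = np.random.default_rng(0)
        neigh = rng.normal(size=(8, 3))                          # toy neighborhood
        kern = rng.normal(size=(4, 3))                           # toy kernel point cloud
        weights = rng.normal(size=(4,))                          # placeholder for learned weights
        print(float(hausdorff_distances(neigh, kern) @ weights)) # stand-in for the convolution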
  • Patent number: 11860981
    Abstract: A computing system captures markerless motion data of a user via a camera of the computing system. The computing system retargets the first plurality of points and the second plurality of points to a three-dimensional model of an avatar associated with the user, wherein the avatar is associated with an identity non-fungible token that uniquely represents the user across Web2 environments and Web3 environments, and wherein retargeting the first plurality of points and the second plurality of points animates the three-dimensional model of the avatar. The computing system renders a video local to the computing system, wherein the video comprises the markerless motion data of the user retargeted to the three-dimensional model of the avatar causing hands, face, and body of the avatar to be animated in real-time. The computing system causes a non-fungible token to be generated, the non-fungible token uniquely identifying ownership of the video.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: January 2, 2024
    Assignee: Metatope LLC
    Inventors: Jordan Yallen, Walker Holmes, Joseph Poulose
  • Patent number: 11847758
    Abstract: Provided are a material presentation method and apparatus, a terminal and a storage medium. The material presentation method includes steps described below. A to-be-presented splash presentation material is received. A splash presentation image of the to-be-presented splash presentation material is magnified, and the magnified splash presentation image is presented. In response to the presented magnified splash presentation image satisfying a minifying condition, the presented magnified splash presentation image is minified.
    Type: Grant
    Filed: July 6, 2022
    Date of Patent: December 19, 2023
    Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
    Inventor: Shuai Liu
  • Patent number: 11847729
    Abstract: Implementations described herein relate to methods, systems, and computer-readable media for remote production collaboration tools. The remote production collaboration tools can include one or more client devices, a server providing a single connection point, and an animation server configured to produce an animated production. The one or more client devices may provide motion capture data, audio data, control data, and/or associated timestamps. The animation server is configured to process the motion capture data, audio data, control data, and/or the associated timestamps and create the animated production. The animated production may be transmitted as a video stream.
    Type: Grant
    Filed: October 19, 2021
    Date of Patent: December 19, 2023
    Assignee: Evil Eye Pictures LLC
    Inventors: Alastair Macleod, Andrew Michael Angulo, Matthew Keith McDonald, Patrick Thomas Osborne, Arnold Joseph Riebli, III, James Paul Ritts, Daniel P Rosen, Justin Schubert, Yovel Schwartz, Brian William Smith