Patents Examined by Peter Hoang
  • Patent number: 11993382
    Abstract: A display in a commercial passenger vehicle can have an option to be locked or unlocked to enhance security. An example system for securing an in-vehicle entertainment (IVE) display in a commercial passenger vehicle comprises a mobile device and a computer. The mobile device comprises a first processor configured to secure a display located in the commercial passenger vehicle, where the first processor is configured to: obtain a payload; generate a first digitally signed payload as a first output of a first mathematical computation performed on the payload with a secret key; and send a first message comprising the first digitally signed payload and a lock command to instruct the display to lock. The computer is communicably coupled with the display and comprises a second processor configured to send a first instruction to cause the display to lock in response to a reception of the lock command.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: May 28, 2024
    Assignee: PANASONIC AVIONICS CORPORATION
    Inventor: Gurmukh Khabrani
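    The signed lock-command flow in the abstract above can be sketched as follows. This is an illustrative approximation, not the patented implementation: the "first mathematical computation" is assumed here to be an HMAC over the payload with a shared secret key, and all names (`SECRET_KEY`, `make_lock_message`, `handle_message`) are hypothetical.

    ```python
    import hashlib
    import hmac

    # Assumption: both the mobile device and the display-side computer
    # hold the same provisioned secret key.
    SECRET_KEY = b"shared-secret"

    def sign_payload(payload: bytes, key: bytes) -> str:
        """The 'first mathematical computation': an HMAC over the payload."""
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

    def make_lock_message(payload: bytes) -> dict:
        """Mobile-device side: a digitally signed payload plus a lock command."""
        return {
            "payload": payload.decode(),
            "signature": sign_payload(payload, SECRET_KEY),
            "command": "LOCK",
        }

    def handle_message(message: dict) -> str:
        """Display-side computer: verify the signature, then lock the display."""
        expected = sign_payload(message["payload"].encode(), SECRET_KEY)
        if not hmac.compare_digest(expected, message["signature"]):
            return "REJECTED"
        return "DISPLAY_LOCKED" if message["command"] == "LOCK" else "IGNORED"

    msg = make_lock_message(b"seat-14C-display")
    result = handle_message(msg)
    ```

    A tampered message (wrong signature) would be rejected before the display locks, which is the security property the abstract describes.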
  • Patent number: 11989813
    Abstract: A talking head digital identity immutable dual authentication method for use over a distributed network, comprising: downloading talking head and talking head show files from nodes of a distributed network; downloading hash values of a talking head and talking head show from a blockchain; sending a hash of a publisher's identification from the blockchain to a name lookup service; retrieving and confirming the identity of the publisher from the name look up service; recalculating hash values of the talking head and talking head show; comparing the recalculated hash values of the talking head and talking head show with the hash values of the talking head and talking head show retrieved from the blockchain; starting playback of the talking head show if the hash values received from the blockchain match the recalculated hash values of the talking head and the talking head show retrieved from the nodes of the distributed network.
    Type: Grant
    Filed: February 11, 2023
    Date of Patent: May 21, 2024
    Assignee: AvaWorks Incorporated
    Inventors: Roberta Jean Smith, Nicolas Antczak
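    The dual-authentication step above (recalculating content hashes and comparing them with the on-chain values before playback) can be sketched as below. This is a minimal illustration under the assumption that SHA-256 is the hash in use; the function names are hypothetical.

    ```python
    import hashlib

    def sha256_hex(data: bytes) -> str:
        """Hash of a downloaded file, recalculated locally."""
        return hashlib.sha256(data).hexdigest()

    def authorize_playback(head_file: bytes, show_file: bytes,
                           chain_head_hash: str, chain_show_hash: str) -> bool:
        """Start playback only if both recalculated hashes match the
        hash values retrieved from the blockchain."""
        return (sha256_hex(head_file) == chain_head_hash and
                sha256_hex(show_file) == chain_show_hash)

    head, show = b"talking-head-model", b"talking-head-show"

    # Honest nodes: locally recalculated hashes match the on-chain record.
    ok = authorize_playback(head, show, sha256_hex(head), sha256_hex(show))

    # A tampered show file fails the comparison, so playback never starts.
    bad = authorize_playback(head, b"tampered-show",
                             sha256_hex(head), sha256_hex(show))
    ```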
  • Patent number: 11961179
    Abstract: One embodiment provides for a graphics processing unit comprising a processing cluster to perform multi-rate shading via coarse pixel shading and output shaded coarse pixels for processing by a post-shader pixel processing pipeline.
    Type: Grant
    Filed: April 24, 2023
    Date of Patent: April 16, 2024
    Assignee: Intel Corporation
    Inventors: Prasoonkumar Surti, Abhishek R. Appu, Subhajit Dasgupta, Srivallaba Mysore, Michael J. Norris, Vasanth Ranganathan, Joydeep Ray
  • Patent number: 11954794
    Abstract: Systems and methods for retrieval of augmented parameters for an artificial intelligence (AI)-based character are provided. An example method includes receiving, from a user via a user interface, at least one keyword describing the AI-based character; retrieving, from at least one data source and based on the at least one keyword, the augmented parameters describing the AI-based character; and generating, based on the augmented parameters, an AI-based character model corresponding to the AI-based character. The at least one data source includes a database configured to store records associated with the AI-based character, an online search service, and a set of clusters associated with a type of a feature of the AI-based character and at least one hidden prompt corresponding to the type of the feature. The type of the feature includes one of the following: a voice, a dialog style, an emotional state, an age, and temperament.
    Type: Grant
    Filed: April 28, 2023
    Date of Patent: April 9, 2024
    Assignee: Theai, Inc.
    Inventors: Ilya Gelfenbeyn, Mikhail Ermolenko, Kylan Gibbs
  • Patent number: 11954781
    Abstract: Embodiments of the present disclosure provide a video processing method, a video processing apparatus, an electronic device and a computer-readable storage medium. The video processing method includes: displaying an initial image which includes a first-style image; in response to a first trigger event, displaying an image switching animation which is used for presenting a dynamic process of the switching from the initial image to a target image which includes a second-style image; and in response to completion of the displaying of the image switching animation, displaying the target image. A switching image in the image switching animation includes a first image area, a second image area and a third image area, and the first image area covers the entire image area of the image switching animation by means of position movement and in a time-sharing manner, and has a change in shape during a position movement process.
    Type: Grant
    Filed: June 9, 2023
    Date of Patent: April 9, 2024
    Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
    Inventor: Shuyun Yang
  • Patent number: 11948235
    Abstract: Disclosed is a system for encoding and/or rendering animations without temporal or spatial restrictions. The system may encode an animation as a point cloud with first data points having a first time value and different positional and non-positional values, and second data points having a second time value and different positional and non-positional values. Rendering the animation may include generating and presenting a first image for the first time value of the animation based on the positional and non-positional values of the first data points, and generating and presenting a second image for the second time value of the animation by changing a visualization at a first position in the first image based on the positional values of a data point from the second data points corresponding to the first position and the data point non-positional values differing from the visualization.
    Type: Grant
    Filed: October 2, 2023
    Date of Patent: April 2, 2024
    Assignee: Illuscio, Inc.
    Inventors: William Peake, III, Joseph Bogacz
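    The time-indexed point-cloud encoding described above can be approximated with a toy data layout. Everything here is an assumption for illustration: points are `(time_value, position, color)` triples, and a later frame only changes a position's visualization when the new non-positional value actually differs from what is already shown.

    ```python
    # Hypothetical layout: (time_value, (x, y), color) triples.
    points = [
        (0, (0, 0), "red"), (0, (1, 0), "blue"),   # first time value
        (1, (0, 0), "green"),                      # second time value
    ]

    def render_frame(points, t, prev_frame=None):
        """Build the image for time t by changing visualizations in the
        previous frame only where a data point's values differ."""
        frame = dict(prev_frame) if prev_frame else {}
        for time_value, pos, color in points:
            if time_value == t and frame.get(pos) != color:
                frame[pos] = color
        return frame

    frame0 = render_frame(points, 0)          # first image
    frame1 = render_frame(points, 1, frame0)  # second image: one change
    ```

    Note how `frame1` keeps the unchanged `(1, 0)` point from `frame0` and only updates `(0, 0)`, mirroring the abstract's incremental-change rendering.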
  • Patent number: 11948223
    Abstract: Methods and systems are described. A system includes a redundant shader pipe array that performs rendering calculations on data provided thereto and a shader pipe array that includes a plurality of shader pipes, each of which performs rendering calculations on data provided thereto. The system also includes a circuit that identifies a defective shader pipe of the plurality of shader pipes in the shader pipe array. In response to identifying the defective shader pipe, the circuit generates a signal. The system also includes a redundant shader switch. The redundant shader switch receives the generated signal, and, in response to receiving the generated signal, transfers the data for the defective shader pipe to the redundant shader pipe array.
    Type: Grant
    Filed: July 11, 2022
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael J. Mantor, Jeffrey T. Brady, Angel E. Socarras
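    The routing decision in the abstract above (a signal transfers the defective pipe's data to the redundant array) reduces to a simple dispatch, sketched here in software purely for illustration of the hardware behavior; the names are hypothetical.

    ```python
    def route(data_per_pipe, defective_pipe):
        """Assign each pipe's data; on the defect signal, transfer the
        defective pipe's data to the redundant shader pipe array."""
        regular, redundant = {}, []
        for pipe, data in data_per_pipe.items():
            if pipe == defective_pipe:
                redundant.append(data)   # redundant shader switch engages
            else:
                regular[pipe] = data
        return regular, redundant

    regular, redundant = route({0: "a", 1: "b", 2: "c"}, defective_pipe=1)
    ```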
  • Patent number: 11941759
    Abstract: A computer-implemented method that allows users to evaluate the densities of images and search for abnormalities in three-dimensional space. The voxel buildup uses a series of two-dimensional images and evaluates every pixel against a user-defined threshold value at runtime. A singular optimized voxel-generated mesh is spawned to represent the combined locations of every pixel.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: March 26, 2024
    Assignee: Intuitive Research and Technology Corporation
    Inventors: Chanler Crowe, Michael Jones, Kyle Russell, Michael Yohe
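    The voxel build-up above can be sketched as stacking the 2D image series into a volume and keeping every pixel that passes the runtime threshold; the surviving coordinates would feed the single combined mesh. A minimal sketch, with assumed array shapes:

    ```python
    import numpy as np

    def voxelize(slices, threshold):
        """Stack 2D slices into a volume and return the (z, y, x)
        coordinates of every pixel at or above the threshold."""
        volume = np.stack(slices)        # shape: (num_slices, H, W)
        return np.argwhere(volume >= threshold)

    # Two hypothetical 2x2 density slices.
    slices = [np.array([[0.1, 0.9], [0.2, 0.3]]),
              np.array([[0.8, 0.1], [0.1, 0.95]])]

    voxels = voxelize(slices, threshold=0.8)  # three voxels survive
    ```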
  • Patent number: 11935192
    Abstract: Technologies for 3D virtual environment placement of 3D models based on 2D images are disclosed. At least an outline of a 3D virtual environment may be generated. A 2D image of one or more 2D images may be identified. A first product from the first 2D image may be identified. At least one 3D model of one or more 3D models based, at least, on the first product may be determined. A first location for placement of the first product in the 3D virtual environment may be identified. The at least one 3D model may be added within the 3D virtual environment based, at least, on the first location. The 3D virtual environment may be rendered into a visually interpretable form. A second product may be identified from the first 2D image, forming a first grouping of products. A starting element for the first grouping of products may be determined.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: March 19, 2024
    Assignee: Marxent Labs LLC
    Inventors: Bret Besecker, Barry Besecker, Jeffrey L. Cowgill, Jr., Jonathan Jekeli
  • Patent number: 11927753
    Abstract: Systems and methods disclosed provide a virtual reality experience, including: a set of motorized pads for coupling to feet of a user; a means of communication between the set of motorized pads and a computing environment operating a virtual reality headset; such that the set of motorized pads are configured to provide pressure on a user's feet as an avatar within a virtual environment traverses the environment. Systems and methods disclosed further provide a multiuser virtual reality experience. Systems and methods disclosed further provide a multiuser experience including generation of a common environment viewable in the virtual reality headsets; and recording a tour including audio data as the first user describes the common environment.
    Type: Grant
    Filed: July 28, 2023
    Date of Patent: March 12, 2024
    Inventor: Mark D. Wieczorek
  • Patent number: 11928767
    Abstract: Embodiments of the present disclosure provide a method for audio-driven character lip sync, a model for audio-driven character lip sync, and a training method therefor. A target dynamic image is obtained by acquiring a character image of a target character and speech for generating a target dynamic image, processing the character image and the speech as image-audio data that may be trained, respectively, and mixing the image-audio data with auxiliary data for training. When a large amount of sample data needs to be obtained for training in different scenarios, a video when another character speaks is used as an auxiliary video for processing, so as to obtain the auxiliary data. The auxiliary data, which replaces non-general sample data, and other data are input into a model in a preset ratio for training. The auxiliary data may improve a process of training a synthetic lip sync action of the model, so that there are no parts unrelated to the synthetic lip sync action during the training process.
    Type: Grant
    Filed: June 21, 2023
    Date of Patent: March 12, 2024
    Assignee: NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.
    Inventors: Huapeng Sima, Zheng Liao
  • Patent number: 11921976
    Abstract: A display method, a displaying device, electronic equipment and a storage medium. The display method comprises: acquiring menu data, target display position data of the menu data on a display screen, and source data, wherein the source data includes N channels of first display data, and the first display data has a preset size; conducting format conversion on the menu data to obtain M channels of second display data with a preset size; fusing the first display data with the second display data according to the target display position to obtain third display data; and displaying the third display data on the display screen.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: March 5, 2024
    Assignee: BOE TECHNOLOGY GROUP CO., LTD.
    Inventors: Yanfu Li, Lihua Geng, Qingguo Yang
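    The fusion step above (compositing the format-converted menu data onto the source frame at the target display position) can be sketched with arrays. The shapes and names here are assumptions for illustration only:

    ```python
    import numpy as np

    def fuse(first, second, target_pos):
        """Fuse second display data (menu) into first display data
        at the target display position, yielding third display data."""
        y, x = target_pos
        third = first.copy()
        h, w = second.shape
        third[y:y + h, x:x + w] = second
        return third

    frame = np.zeros((4, 4), dtype=int)   # first display data (source)
    menu = np.ones((2, 2), dtype=int)     # second display data (converted menu)
    fused = fuse(frame, menu, target_pos=(1, 1))
    ```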
  • Patent number: 11918306
    Abstract: The technology described in this document can be embodied in a method of displaying images of portions of a human body on a display device. The method includes receiving a representation of a plurality of images that includes images of at least two different modalities, and location information corresponding to at least a subset of the plurality of images. A first image of a first modality is displayed on the display device in accordance with the corresponding location information. A second image of a second modality is overlaid on the first image in accordance with corresponding location information. At least a third image is overlaid on the first image in accordance with corresponding location information, the third image being of the second modality, and the second and third images being displayed concurrently for at least a period of time.
    Type: Grant
    Filed: February 14, 2018
    Date of Patent: March 5, 2024
    Assignee: INTUITIVE SURGICAL OPERATIONS, INC.
    Inventor: Mahdi Azizian
  • Patent number: 11915487
    Abstract: Systems and methods to improve machine learning by explicitly over-fitting environmental data obtained by an imaging system, such as a monocular camera are disclosed. The system includes training self-supervised depth and pose networks in monocular visual data collected from a certain area over multiple passes. Pose and depth networks may be trained by extracting data from multiple images of a single environment or trajectory, allowing the system to overfit the image data.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: February 27, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Rares A. Ambrus, Vitor Guizilini, Sudeep Pillai, Adrien David Gaidon
  • Patent number: 11900672
    Abstract: Devices, systems and processes for an integrated internal and external camera system that enhances the passenger experience in vehicles are described. One example method for enhancing the passenger experience includes capturing a first set of images of an area around the vehicle using an external camera system, capturing a second set of images of one or more passengers inside the vehicle using an internal camera system, recognizing at least one gesture made by the one or more passengers based on the second set of images, identifying an object or a location external to the vehicle based on the first set of images and the at least one gesture, and displaying information related to the object or the location to the one or more passengers.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: February 13, 2024
    Assignee: ALPINE ELECTRONICS OF SILICON VALLEY, INC.
    Inventors: Rocky Chau-Hsiung Lin, Thomas Yamasaki, Koichiro Kanda, Diego Rodriguez Risco, Alexander Joseph Ryan, Samah Najeeb, Samir El Aouar
  • Patent number: 11900520
    Abstract: In an exemplary process for specifying an entrance or exit effect in a computer-generated reality environment, in response to a user entering or exiting the computer-generated reality environment, a transition effect is provided.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: February 13, 2024
    Assignee: Apple Inc.
    Inventors: Clément Pierre Nicolas Boissière, Samuel Lee Iglesias, James McIlree
  • Patent number: 11899208
    Abstract: Systems and methods disclosed provide a virtual reality experience, including: a set of motorized pads for coupling to feet of a user; a means of communication between the set of motorized pads and a computing environment operating a virtual reality headset; such that the set of motorized pads are configured to provide pressure on a user's feet as an avatar within a virtual environment traverses the environment. Systems and methods disclosed further provide a multiuser virtual reality experience. Systems and methods disclosed further provide a multiuser experience including generation of a common environment viewable in the virtual reality headsets; and recording a tour including audio data as the first user describes the common environment.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: February 13, 2024
    Inventor: Mark D. Wieczorek
  • Patent number: 11885971
    Abstract: An information processing device including a display unit, a detector, and a first control unit, and a method of using same. The display unit may be a head-mounted display. The display unit is capable of providing the user with a field of view of a real space and a virtual object. The detector detects an azimuth of the display unit around at least one axis, and display of the virtual object is controlled based on the detected azimuth.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: January 30, 2024
    Assignee: SONY CORPORATION
    Inventors: Hirotaka Ishikawa, Takeshi Iwatsu
  • Patent number: 11875424
    Abstract: A point cloud data processing method and device, a computer device and a storage medium are provided. The method includes: acquiring point cloud data, and constructing a corresponding neighboring point set for each of data points in the point cloud data; calculating Hausdorff distances between the neighboring point set and a pre-constructed kernel point cloud to obtain a distance matrix; calculating a convolution of the neighboring point set with the distance matrix and a network weight matrix in a Hausdorff convolution layer in an encoder, to obtain high-dimensional point cloud features, the encoder and a decoder being two parts in a deep learning network; and reducing feature dimension of the high-dimensional point cloud features through the decoder, so that a classifier performs semantic classification on the point cloud data according to object point cloud features obtained by the dimension reduction.
    Type: Grant
    Filed: May 3, 2022
    Date of Patent: January 16, 2024
    Assignee: Shenzhen University
    Inventors: Hui Huang, Pengdi Huang
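    The distance step above can be sketched numerically: for each data point's neighboring set, compute the matrix of pairwise Euclidean distances to the pre-constructed kernel point cloud, from which a (directed) Hausdorff distance follows as the maximum of the per-point minimum distances. This is a rough illustration, not the authors' Hausdorff convolution layer:

    ```python
    import numpy as np

    def pairwise_dist(a, b):
        """Distance matrix: Euclidean distance from every point in a
        to every point in b."""
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    def directed_hausdorff(neighbors, kernel):
        """Directed Hausdorff distance from the neighboring point set
        to the kernel point cloud."""
        d = pairwise_dist(neighbors, kernel)
        return d.min(axis=1).max()   # max over nearest-kernel distances

    neighbors = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    kernel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    h = directed_hausdorff(neighbors, kernel)
    ```

    In the patented method the distance matrix itself (not just the scalar) is convolved with a network weight matrix inside the encoder; the scalar here just checks the geometry.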
  • Patent number: 11860981
    Abstract: A computing system captures markerless motion data of a user via a camera of the computing system. The computing system retargets the first plurality of points and the second plurality of points to a three-dimensional model of an avatar associated with the user, wherein the avatar is associated with an identity non-fungible token that uniquely represents the user across Web2 environments and Web3 environments, and wherein retargeting the first plurality of points and the second plurality of points animates the three-dimensional model of the avatar. The computing system renders a video local to the computing system, wherein the video comprises the markerless motion data of the user retargeted to the three-dimensional model of the avatar causing hands, face, and body of the avatar to be animated in real-time. The computing system causes a non-fungible token to be generated, the non-fungible token uniquely identifying ownership of the video.
    Type: Grant
    Filed: March 30, 2023
    Date of Patent: January 2, 2024
    Assignee: Metatope LLC
    Inventors: Jordan Yallen, Walker Holmes, Joseph Poulose