Patents by Inventor Kevin Sheu

Kevin Sheu has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230351775
    Abstract: A system trains a model to infer an intent of an entity. The system includes one or more sensors that obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training the model to infer an intent of the common entity based on data outside of the bounding regions. (A brief illustrative sketch of the cross-frame association step follows this entry.)
    Type: Application
    Filed: June 23, 2023
    Publication date: November 2, 2023
    Inventors: Kevin Sheu, Jie Mao
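    Illustrative note (not part of the patent record): a minimal Python sketch of the cross-frame association step described in the abstract above, assuming axis-aligned bounding boxes and a simple intersection-over-union (IoU) threshold for deciding that two bounding regions enclose the same entity. The box format, the threshold, and the greedy linking are illustrative assumptions rather than the claimed method.

        # Link bounding regions across consecutive frames when their overlap
        # (IoU) is high; each resulting chain stands in for one "common entity".
        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            union = ((a[2] - a[0]) * (a[3] - a[1])
                     + (b[2] - b[0]) * (b[3] - b[1]) - inter)
            return inter / union if union > 0 else 0.0

        def associate_across_frames(frames, threshold=0.5):
            """frames: list of per-frame box lists. Returns tracks, where each
            track is a list of (frame_index, box) linked across frames."""
            tracks = []
            for f_idx, boxes in enumerate(frames):
                for box in boxes:
                    best, best_iou = None, threshold
                    for track in tracks:
                        last_f, last_box = track[-1]
                        if last_f != f_idx - 1:
                            continue  # only link boxes in consecutive frames
                        overlap = iou(last_box, box)
                        if overlap > best_iou:
                            best, best_iou = track, overlap
                    if best is not None:
                        best.append((f_idx, box))
                    else:
                        tracks.append([(f_idx, box)])
            return tracks

    Tracks produced this way could then serve as training examples whose surrounding (outside-the-box) context is fed to an intent model, which is the role the abstract assigns to the fourth step.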
  • Publication number: 20230350979
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata. (A brief frame-synchronization sketch follows this entry.)
    Type: Application
    Filed: June 30, 2023
    Publication date: November 2, 2023
    Inventors: Kevin Sheu, Jie Mao, Deling Li
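    Illustrative note (not part of the patent record): a minimal Python sketch of the frame-synchronization and metadata-association steps described in the abstract above, assuming each sensor record is a (timestamp_in_seconds, metadata_dict) pair and that nearest-timestamp matching within a small window is an acceptable stand-in for the synchronization step. These assumptions are for illustration only.

        # Pair each first-sensor record with the closest-in-time second-sensor
        # record, then emit fused records that link the two metadata payloads.
        def synchronize_frames(first_stream, second_stream, max_offset=0.05):
            """Return synchronized frames as ((t1, meta1), (t2, meta2)) pairs."""
            synchronized = []
            for t1, meta1 in first_stream:
                t2, meta2 = min(second_stream, key=lambda rec: abs(rec[0] - t1))
                if abs(t2 - t1) <= max_offset:
                    synchronized.append(((t1, meta1), (t2, meta2)))
            return synchronized

        def fuse_metadata(synchronized_frames):
            """Per synchronized frame, associate the first and second metadata."""
            return [{"timestamp": (t1 + t2) / 2.0,
                     "first_metadata": meta1,
                     "second_metadata": meta2}
                    for (t1, meta1), (t2, meta2) in synchronized_frames]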
  • Patent number: 11709460
    Abstract: A digital display and a digital time device are provided, which include a housing, a drive assembly, display assemblies and a mainboard assembly. The housing is provided with seven windows arranged to form the shape of the figure “8”. The mainboard assembly is fixedly connected to the housing and is located behind the windows. Seven curved grooves are provided inside the drive assembly. The display assemblies include display sheets and pull rods. One end of each pull rod is fixedly connected to a corresponding display sheet. The display sheets are slidably connected to the mainboard assembly. There are seven display assemblies. Each pull rod is slidably connected to one of the curved grooves in one-to-one correspondence, and each display sheet is in one-to-one correspondence with one of the windows. The present invention achieves many advantages, such as low power consumption, strong waterproofing and moisture resistance, and high stability. (A brief seven-segment mapping sketch follows this entry.)
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: July 25, 2023
    Inventor: Kevin Sheu
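    Illustrative note (not part of the patent record): the seven windows described above are the classic seven-segment arrangement, which is why they form a figure “8” when all display sheets are shown. The short Python sketch below uses the conventional a-g segment labels (an assumption, not taken from the patent) to show which of the seven windows would be exposed for each digit.

        # Conventional seven-segment mapping: which of the seven windows
        # (segments a-g) expose their display sheets for each digit.
        SEGMENTS = {
            0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
            5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
        }

        def exposed_windows(digit):
            """Return the set of segment labels whose sheets are visible."""
            return set(SEGMENTS[digit])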
  • Patent number: 11693927
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: July 4, 2023
    Assignee: Pony AI Inc.
    Inventors: Kevin Sheu, Jie Mao, Deling Li
  • Patent number: 11688179
    Abstract: A system trains a model to infer an intent of an entity. The system includes one or more sensors that obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training the model to infer an intent of the common entity based on data outside of the bounding regions.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: June 27, 2023
    Assignee: Pony AI Inc.
    Inventors: Kevin Sheu, Jie Mao
  • Publication number: 20220172495
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data, such as that captured by a LiDAR, as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected onto a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generating and using the 2D image data with the projected 3D data points, as well as the 2D sparse instance segmentation masks, for training obviates the need to generate and use actual instance segmentation data, thereby providing an improved technique for training an instance segmentation model. (A brief point-projection sketch follows this entry.)
    Type: Application
    Filed: February 15, 2022
    Publication date: June 2, 2022
    Inventors: Kevin Sheu, Jie Mao
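    Illustrative note (not part of the patent record): a minimal Python/NumPy sketch of projecting LiDAR points onto a camera image and scattering them into a sparse per-pixel instance mask, the kind of sparse label the abstract says can propagate loss during training. The pinhole-camera intrinsic matrix K, the camera-frame point coordinates, and the per-point instance ids are illustrative assumptions.

        import numpy as np

        def project_points(points_cam, K):
            """points_cam: (N, 3) points already in the camera frame (z forward).
            K: (3, 3) intrinsic matrix. Returns (M, 2) pixel coordinates and the
            boolean mask of points kept (those in front of the camera)."""
            keep = points_cam[:, 2] > 0
            uvw = (K @ points_cam[keep].T).T          # homogeneous pixel coords
            return uvw[:, :2] / uvw[:, 2:3], keep     # normalize by depth

        def sparse_instance_mask(pixels, instance_ids, height, width):
            """Sparse mask: 0 = unlabeled pixel, otherwise the instance id of
            the LiDAR point that projected onto that pixel."""
            mask = np.zeros((height, width), dtype=np.int32)
            for (u, v), inst in zip(pixels, instance_ids):
                u, v = int(round(u)), int(round(v))
                if 0 <= v < height and 0 <= u < width:
                    mask[v, u] = inst
            return mask

        # Usage sketch: pixels, keep = project_points(points_cam, K)
        #               mask = sparse_instance_mask(pixels, instance_ids[keep], H, W)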
  • Publication number: 20220067408
    Abstract: A system trains a model to infer an intent of an entity. The system includes one or more sensors that obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training the model to infer an intent of the common entity based on data outside of the bounding regions.
    Type: Application
    Filed: September 3, 2020
    Publication date: March 3, 2022
    Inventors: Kevin Sheu, Jie Mao
  • Patent number: 11250240
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data, such as that captured by a LiDAR, as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected onto a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generating and using the 2D image data with the projected 3D data points, as well as the 2D sparse instance segmentation masks, for training obviates the need to generate and use actual instance segmentation data, thereby providing an improved technique for training an instance segmentation model.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: February 15, 2022
    Assignee: Pony AI Inc.
    Inventors: Kevin Sheu, Jie Mao
  • Publication number: 20220027684
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata.
    Type: Application
    Filed: July 24, 2020
    Publication date: January 27, 2022
    Inventors: Kevin Sheu, Jie Mao, Deling Li
  • Publication number: 20220027675
    Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data, such as that captured by a LiDAR, as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected onto a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generating and using the 2D image data with the projected 3D data points, as well as the 2D sparse instance segmentation masks, for training obviates the need to generate and use actual instance segmentation data, thereby providing an improved technique for training an instance segmentation model.
    Type: Application
    Filed: July 27, 2020
    Publication date: January 27, 2022
    Inventors: Kevin Sheu, Jie Mao
  • Publication number: 20210325830
    Abstract: A digital display and a digital time device are provided, which include a housing, a drive assembly, display assemblies and a mainboard assembly. The housing is provided with seven windows arranged to form the shape of the figure “8”. The mainboard assembly is fixedly connected to the housing and is located behind the windows. Seven curved grooves are provided inside the drive assembly. The display assemblies include display sheets and pull rods. One end of each pull rod is fixedly connected to a corresponding display sheet. The display sheets are slidably connected to the mainboard assembly. There are seven display assemblies. Each pull rod is slidably connected to one of the curved grooves in one-to-one correspondence, and each display sheet is in one-to-one correspondence with one of the windows. The present invention achieves many advantages, such as low power consumption, strong waterproofing and moisture resistance, and high stability.
    Type: Application
    Filed: December 6, 2019
    Publication date: October 21, 2021
    Inventor: Kevin Sheu