Patents by Inventor Kevin Sheu
Kevin Sheu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230351775
Abstract: A system trains a model to infer an intent of an entity. The model includes one or more sensors to obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training a model to infer an intent of the common entity based on data outside of the bounding regions.
Type: Application
Filed: June 23, 2023
Publication date: November 2, 2023
Inventors: Kevin Sheu, Jie Mao
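The four steps in the abstract above can be sketched roughly as follows. The abstract does not say how the common entity is associated across frames, so the IoU-based greedy matching, the zeroing-out of the bounding region, and all function names here are illustrative assumptions, not the patented method.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(frames_boxes, thresh=0.3):
    """Steps two and three: greedily link, in each successive frame, the box
    with highest IoU to the previous frame's box, yielding one track for
    the common entity."""
    track = [frames_boxes[0][0]]
    for boxes in frames_boxes[1:]:
        best = max(boxes, key=lambda b: iou(track[-1], b))
        if iou(track[-1], best) >= thresh:
            track.append(best)
    return track

def context_only(frame, box):
    """Step four: blank the bounding region so a downstream intent model is
    trained only on data outside it (the surrounding context)."""
    out = frame.copy()
    x1, y1, x2, y2 = box
    out[y1:y2, x1:x2] = 0
    return out
```

In this sketch the entity's appearance is hidden from the model, so any intent signal must come from context such as road layout or nearby agents, which is the point of training on data outside the bounding regions.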
-
Publication number: 20230350979
Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata.
Type: Application
Filed: June 30, 2023
Publication date: November 2, 2023
Inventors: Kevin Sheu, Jie Mao, Deling Li
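The two stages this abstract describes, frame synchronization followed by per-frame metadata association, can be sketched as below. The nearest-timestamp pairing, the nearest-position matching, and the `max_skew`/`max_dist` thresholds are assumptions for illustration; the filing does not specify the association algorithm.

```python
import bisect

def dist(p, q):
    """Euclidean distance between two 2D positions."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def synchronize(stream_a, stream_b, max_skew=0.05):
    """Pair each frame of stream_a with the nearest-in-time frame of
    stream_b. Streams are lists of (timestamp, payload) sorted by time."""
    times_b = [t for t, _ in stream_b]
    pairs = []
    for t, payload_a in stream_a:
        i = bisect.bisect_left(times_b, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
        j = min(candidates, key=lambda j: abs(times_b[j] - t))
        if abs(times_b[j] - t) <= max_skew:
            pairs.append((payload_a, stream_b[j][1]))
    return pairs

def associate_metadata(meta_a, meta_b, max_dist=2.0):
    """For one synchronized frame, link each first-sensor label to the
    closest second-sensor label; meta_* map label -> (x, y) position."""
    fused = {}
    for label_a, pos_a in meta_a.items():
        label_b = min(meta_b, key=lambda k: dist(pos_a, meta_b[k]))
        if dist(pos_a, meta_b[label_b]) <= max_dist:
            fused[label_a] = label_b
    return fused
```

The output of `associate_metadata` is a mapping between the first and second metadata, i.e. a minimal form of the "fused sensor data that identifies associations" the abstract refers to.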
-
Patent number: 11709460
Abstract: A digital display and a digital time device are provided, which include a housing, a drive assembly, display assemblies and a mainboard assembly. The housing is provided with seven windows arranged to form the shape of the character “”. The mainboard assembly is fixedly connected to the housing and is located behind the windows. Seven curved grooves are provided inside the drive assembly. The display assemblies include display sheets and pull rods. One end of each pull rod is fixedly connected to a display sheet. The display sheets are slidingly connected to the mainboard assembly. There are seven display assemblies. Each pull rod is slidingly connected to the curved grooves in one-to-one correspondence, and each display sheet is in one-to-one correspondence with the windows. The present invention achieves many advantages, such as low power consumption, good waterproofing, strong moisture resistance and high stability.
Type: Grant
Filed: December 6, 2019
Date of Patent: July 25, 2023
Inventor: Kevin Sheu
-
Patent number: 11693927
Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata.
Type: Grant
Filed: July 24, 2020
Date of Patent: July 4, 2023
Assignee: Pony AI Inc.
Inventors: Kevin Sheu, Jie Mao, Deling Li
-
Patent number: 11688179
Abstract: A system trains a model to infer an intent of an entity. The model includes one or more sensors to obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training a model to infer an intent of the common entity based on data outside of the bounding regions.
Type: Grant
Filed: September 3, 2020
Date of Patent: June 27, 2023
Assignee: Pony AI Inc.
Inventors: Kevin Sheu, Jie Mao
-
Publication number: 20220172495
Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data such as that captured by a LiDAR as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected on a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generation and use of the 2D image data with the projected 3D data points as well as the 2D sparse instance segmentation masks for training the instance segmentation model obviates the need to generate and use actual instance segmentation data for training, thereby providing an improved technique for training an instance segmentation model.
Type: Application
Filed: February 15, 2022
Publication date: June 2, 2022
Inventors: Kevin Sheu, Jie Mao
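The projection of LiDAR points onto a camera image and the resulting sparse mask can be sketched as below. This assumes a simple pinhole model with points already transformed into the camera frame; the intrinsic matrix, function names, and single-channel mask are illustrative assumptions, not details from the filing.

```python
import numpy as np

def project_points(points_cam, K):
    """Project 3D points (N x 3, already in the camera frame) to integer
    pixel coordinates using a pinhole intrinsic matrix K (3 x 3)."""
    pts = points_cam[points_cam[:, 2] > 0]        # keep points in front of the camera
    uvw = (K @ pts.T).T                           # homogeneous image coordinates
    return (uvw[:, :2] / uvw[:, 2:3]).astype(int) # perspective divide -> (u, v)

def sparse_mask(pixels, height, width):
    """Build a sparse mask over the image: 1 where a LiDAR point lands,
    0 elsewhere. During training, loss would only be propagated at the
    nonzero pixels."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for u, v in pixels:
        if 0 <= v < height and 0 <= u < width:
            mask[v, u] = 1
    return mask
```

Because the mask is nonzero only at projected LiDAR returns, it supervises the segmentation model at a sparse set of pixels rather than requiring dense, hand-labeled instance masks, which is the labeling savings the abstract claims.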
-
Publication number: 20220067408
Abstract: A system trains a model to infer an intent of an entity. The model includes one or more sensors to obtain frames of data, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to perform steps. A first step includes determining, in each frame of the frames, one or more bounding regions, each of the bounding regions enclosing an entity. A second step includes identifying a common entity, the common entity being present in bounding regions corresponding to a plurality of the frames. A third step includes associating the common entity across the frames. A fourth step includes training a model to infer an intent of the common entity based on data outside of the bounding regions.
Type: Application
Filed: September 3, 2020
Publication date: March 3, 2022
Inventors: Kevin Sheu, Jie Mao
-
Patent number: 11250240
Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data such as that captured by a LiDAR as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected on a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generation and use of the 2D image data with the projected 3D data points as well as the 2D sparse instance segmentation masks for training the instance segmentation model obviates the need to generate and use actual instance segmentation data for training, thereby providing an improved technique for training an instance segmentation model.
Type: Grant
Filed: July 27, 2020
Date of Patent: February 15, 2022
Assignee: Pony AI Inc.
Inventors: Kevin Sheu, Jie Mao
-
Publication number: 20220027684
Abstract: Described herein are systems, methods, and non-transitory computer readable media for generating fused sensor data through metadata association. First sensor data captured by a first vehicle sensor and second sensor data captured by a second vehicle sensor are associated with first metadata and second metadata, respectively, to obtain labeled first sensor data and labeled second sensor data. A frame synchronization is performed between the first sensor data and the second sensor data to obtain a set of synchronized frames, where each synchronized frame includes a portion of the first sensor data and a corresponding portion of the second sensor data. For each frame in the set of synchronized frames, a metadata association algorithm is executed on the labeled first sensor data and the labeled second sensor data to generate fused sensor data that identifies associations between the first metadata and the second metadata.
Type: Application
Filed: July 24, 2020
Publication date: January 27, 2022
Inventors: Kevin Sheu, Jie Mao, Deling Li
-
Publication number: 20220027675
Abstract: Described herein are systems, methods, and non-transitory computer readable media for using 3D point cloud data such as that captured by a LiDAR as ground truth data for training an instance segmentation deep learning model. 3D point cloud data captured by a LiDAR can be projected on a 2D image captured by a camera and provided as input to a 2D instance segmentation model. 2D sparse instance segmentation masks may be generated from the 2D image with the projected 3D data points. These 2D sparse masks can be used to propagate loss during training of the model. Generation and use of the 2D image data with the projected 3D data points as well as the 2D sparse instance segmentation masks for training the instance segmentation model obviates the need to generate and use actual instance segmentation data for training, thereby providing an improved technique for training an instance segmentation model.
Type: Application
Filed: July 27, 2020
Publication date: January 27, 2022
Inventors: Kevin Sheu, Jie Mao
-
Publication number: 20210325830
Abstract: A digital display and a digital time device are provided, which include a housing, a drive assembly, display assemblies and a mainboard assembly. The housing is provided with seven windows arranged to form the shape of the character “”. The mainboard assembly is fixedly connected to the housing and is located behind the windows. Seven curved grooves are provided inside the drive assembly. The display assemblies include display sheets and pull rods. One end of each pull rod is fixedly connected to a display sheet. The display sheets are slidingly connected to the mainboard assembly. There are seven display assemblies. Each pull rod is slidingly connected to the curved grooves in one-to-one correspondence, and each display sheet is in one-to-one correspondence with the windows. The present invention achieves many advantages, such as low power consumption, good waterproofing, strong moisture resistance and high stability.
Type: Application
Filed: December 6, 2019
Publication date: October 21, 2021
Inventor: Kevin Sheu