Patents by Inventor Xinshuo Weng

Xinshuo Weng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240290105
Abstract: A method for sequential point cloud forecasting is described. The method includes training a vector-quantized conditional variational autoencoder (VQ-CVAE) framework to map an output to a closest vector in a discrete latent space to obtain a future latent space. The method also includes outputting, by a trained VQ-CVAE, a categorical distribution of a probability of V vectors in a discrete latent space in response to an input of a previously sampled latent space and past point cloud sequences. The method further includes sampling an inferred future latent space from the categorical distribution of the probability of the V vectors in the discrete latent space. The method also includes predicting a future point cloud sequence according to the inferred future latent space and the past point cloud sequences. The method further includes denoising, by a denoising diffusion probabilistic model (DDPM), the predicted future point cloud sequence according to added noise.
    Type: Application
    Filed: October 10, 2023
    Publication date: August 29, 2024
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA, CARNEGIE MELLON UNIVERSITY, THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
    Inventors: Junyu NAN, Xinshuo WENG, Jean MERCAT, Blake Warren WULFE, Rowan Thomas MCALLISTER, Adrien David GAIDON, Nicholas Andrew RHINEHART, Kris Makoto KITANI
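    The core quantization step the abstract describes, snapping a latent vector to the closest entry in a discrete codebook of V vectors, can be sketched as follows. This is a minimal illustration with a hypothetical 2-D codebook, not the patented implementation:

    ```python
    import numpy as np

    def quantize(z, codebook):
        """Map each latent vector in z to its nearest codebook vector (L2 distance)."""
        # distances: (N, V) between N latent vectors and V codebook entries
        d = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
        idx = d.argmin(axis=1)      # index of the closest codebook vector per latent
        return codebook[idx], idx

    codebook = np.array([[0.0, 0.0], [1.0, 1.0]])   # V = 2 hypothetical vectors
    z = np.array([[0.1, -0.1], [0.9, 1.2]])         # encoder outputs to quantize
    zq, idx = quantize(z, codebook)
    ```

    In the full framework, a categorical distribution over the V indices would then be sampled to infer the future latent space.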
  • Publication number: 20240182082
    Abstract: In various examples, policy planning using behavior models for autonomous and semi-autonomous systems and applications is described herein. Systems and methods are disclosed that determine a policy for navigating a vehicle, such as a semi-autonomous vehicle or an autonomous vehicle (or other machine), where the policy allows for multistage reasoning that leverages future reactive behaviors of one or more other objects. For instance, a first behavior model (e.g., a trajectory tree) may be generated that represents candidate trajectories for the vehicle and one or more second behavior models (e.g., one or more scenario trees) may be generated that respectively represent future behaviors of the other object(s). The first behavior model and the second behavior model(s) may then be processed, such as in a closed-loop simulation based on a realistic data-driven traffic model, to determine the policy for navigating the vehicle.
    Type: Application
    Filed: July 19, 2023
    Publication date: June 6, 2024
    Inventors: Yuxiao Chen, Peter Karkus, Boris Ivanovic, Xinshuo Weng, Marco Pavone
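    The multistage reasoning the abstract describes, scoring each branch of the ego trajectory tree against the probability-weighted future reactions of other agents, can be sketched with hypothetical branch names and costs (a toy expected-cost policy selection, not the patented method):

    ```python
    def expected_cost(branch_costs, reaction_probs):
        """Cost of one ego branch, averaged over predicted reactions of another agent."""
        return sum(p * c for p, c in zip(reaction_probs, branch_costs))

    def best_branch(trajectory_tree, reaction_probs):
        # trajectory_tree: {branch_name: [cost under reaction 0, cost under reaction 1, ...]}
        return min(trajectory_tree,
                   key=lambda b: expected_cost(trajectory_tree[b], reaction_probs))

    # Hypothetical trajectory tree: two ego candidates, two predicted reactions
    tree = {"keep_lane": [1.0, 4.0], "merge": [2.0, 1.0]}
    probs = [0.5, 0.5]              # scenario-tree probabilities of each reaction
    policy = best_branch(tree, probs)
    ```

    A closed-loop simulation with a data-driven traffic model would supply the reaction probabilities and costs in practice.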
  • Patent number: 11995761
    Abstract: A method of generating virtual sensor data of a virtual single-photon avalanche diode (SPAD) lidar sensor includes generating a two-dimensional (2D) lidar array having a plurality of cells. The method further includes interpolating image data from a virtual camera with the 2D lidar array to define auxiliary image data, generating a virtual ambient image based on a red-channel (R-channel) data of the auxiliary image data, identifying a plurality of virtual echoes of the virtual SPAD lidar sensor based on the R-channel data and a defined photon threshold, defining a virtual point cloud indicative of virtual photon measurements of the virtual SPAD lidar sensor based on the plurality of virtual echoes, and outputting data indicative of the virtual ambient image, the virtual photon measurements, the virtual point cloud, or a combination thereof, as the virtual sensor data of the virtual SPAD lidar sensor.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: May 28, 2024
    Assignees: DENSO CORPORATION, Carnegie Mellon University
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Xinshuo Weng, Shawn Hunt, Yunze Man
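    The echo-identification step the abstract describes, thresholding R-channel data against a defined photon threshold and emitting a virtual point cloud, can be sketched as follows (a minimal sketch on a hypothetical 2x2 lidar array, not the patented pipeline):

    ```python
    import numpy as np

    def virtual_echoes(r_channel, depth, photon_threshold):
        """Cells whose R-channel value exceeds the photon threshold count as echoes;
        each echo cell yields one (row, col, depth) virtual point."""
        mask = r_channel > photon_threshold
        rows, cols = np.nonzero(mask)
        return np.stack([rows, cols, depth[mask]], axis=1)

    r = np.array([[0.2, 0.9], [0.8, 0.1]])      # R-channel of the auxiliary image
    d = np.array([[5.0, 3.0], [2.0, 7.0]])      # per-cell depth from the virtual camera
    pts = virtual_echoes(r, d, photon_threshold=0.5)
    ```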
  • Publication number: 20240028673
Abstract: In various examples, robust trajectory predictions against adversarial attacks in autonomous machines and applications are described herein. Systems and methods are disclosed that perform adversarial training for trajectory predictions determined using a neural network(s). In order to improve the training, the systems and methods may devise a deterministic attack that creates a deterministic gradient path within a probabilistic model to generate adversarial samples for training. Additionally, the systems and methods may introduce a hybrid objective that interleaves the adversarial training and learning from clean data to anchor the output from the neural network(s) on stable, clean data distribution. Furthermore, the systems and methods may use a domain-specific data augmentation technique that generates diverse, realistic, and dynamically-feasible samples for additional training of the neural network(s).
    Type: Application
    Filed: March 8, 2023
    Publication date: January 25, 2024
    Inventors: Chaowei Xiao, Yulong Cao, Danfei Xu, Animashree Anandkumar, Marco Pavone, Xinshuo Weng
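    The hybrid objective the abstract describes, interleaving a loss on an adversarially perturbed input with a loss on the clean input, can be sketched on a scalar model (a toy one-step gradient attack with hypothetical loss and weighting, not the patented training scheme):

    ```python
    import numpy as np

    def loss(w, x, y):
        """Squared error of a scalar linear model, standing in for the predictor loss."""
        return float((w * x - y) ** 2)

    def grad_x(w, x, y):
        return 2 * w * (w * x - y)       # d loss / d x

    def hybrid_loss(w, x, y, eps=0.1, alpha=0.5):
        """Mix clean and adversarial terms to anchor training on the clean distribution."""
        x_adv = x + eps * np.sign(grad_x(w, x, y))   # one-step deterministic attack
        return alpha * loss(w, x, y) + (1 - alpha) * loss(w, x_adv, y)
    ```

    With w = 1, x = 1, y = 0 the clean loss is 1.0, the perturbed input is 1.1, and the hybrid loss averages the two terms.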
  • Publication number: 20230394823
    Abstract: Apparatuses, systems, and techniques to perform trajectory predictions within one or more images. In at least one embodiment, a processor comprises one or more circuits to cause one or more neural networks to perform trajectory predictions of two or more objects detected within a plurality of frames without tracking the two or more objects based, at least in part, on processing a sequence of data of the two or more objects as a whole.
    Type: Application
    Filed: March 8, 2023
    Publication date: December 7, 2023
    Inventors: Xinshuo Weng, Boris Ivanovic, Marco Pavone
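    The tracking-free idea the abstract describes, processing all detections across frames as a whole rather than associating them into per-object tracks, can be sketched with a permutation-invariant aggregation (a deliberately simplified stand-in with hypothetical detections, not the patented network):

    ```python
    import numpy as np

    # Unordered detections as (frame, x, y) tuples; note there are no track IDs.
    dets = np.array([[0, 0.0, 0.0], [0, 5.0, 5.0],
                     [1, 1.0, 0.0], [1, 6.0, 5.0]])

    def mean_shift(dets):
        """Aggregate each frame's detections jointly (order-invariant mean) and
        estimate the scene-level displacement between frames without matching
        individual detections across frames."""
        m0 = dets[dets[:, 0] == 0, 1:].mean(axis=0)
        m1 = dets[dets[:, 0] == 1, 1:].mean(axis=0)
        return m1 - m0

    shift = mean_shift(dets)
    ```

    A learned model would replace the mean with a richer set encoder, but the point stands: no detection is ever assigned an identity.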
  • Publication number: 20230114731
    Abstract: A method of generating virtual sensor data of a virtual single-photon avalanche diode (SPAD) lidar sensor includes generating a two-dimensional (2D) lidar array having a plurality of cells. The method further includes interpolating image data from a virtual camera with the 2D lidar array to define auxiliary image data, generating a virtual ambient image based on a red-channel (R-channel) data of the auxiliary image data, identifying a plurality of virtual echoes of the virtual SPAD lidar sensor based on the R-channel data and a defined photon threshold, defining a virtual point cloud indicative of virtual photon measurements of the virtual SPAD lidar sensor based on the plurality of virtual echoes, and outputting data indicative of the virtual ambient image, the virtual photon measurements, the virtual point cloud, or a combination thereof, as the virtual sensor data of the virtual SPAD lidar sensor.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Kris KITANI, Matthew O'TOOLE, Xinshuo WENG, Shawn HUNT
  • Publication number: 20230112664
    Abstract: A method includes generating a plurality of lidar inputs based on lidar data, where each lidar input from among the plurality of lidar inputs comprises an image-based portion and a geometric-based portion, and where each lidar input from among the plurality of lidar inputs defines a position coordinate of one or more objects. The method includes performing, for each lidar input from among the plurality of lidar inputs, a convolutional neural network (CNN) routine based on the image-based portion to generate one or more image-based outputs and assigning the plurality of lidar inputs to a plurality of echo groups based on the geometric-based portion. The method includes concatenating the one or more image-based outputs and the plurality of echo groups to generate a plurality of fused outputs and identifying the one or more objects based on the plurality of fused outputs.
    Type: Application
    Filed: March 30, 2022
    Publication date: April 13, 2023
    Applicant: DENSO CORPORATION
    Inventors: Prasanna SIVAKUMAR, Kris KITANI, Matthew Patrick O'TOOLE, Xinshuo WENG, Shawn HUNT
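    The fusion step the abstract describes, concatenating image-based CNN outputs with echo-group assignments derived from the geometric portion, can be sketched as follows (a minimal sketch with hypothetical feature vectors and a one-hot group encoding, not the patented method):

    ```python
    import numpy as np

    def group_and_fuse(image_feats, echo_index, n_groups):
        """Assign each lidar input to an echo group and concatenate a one-hot
        group encoding onto its image-based feature vector."""
        onehot = np.eye(n_groups)[echo_index]           # (N, n_groups) group encoding
        return np.concatenate([image_feats, onehot], axis=1)

    feats = np.array([[0.5, 0.2], [0.9, 0.1]])   # per-input image-based CNN outputs
    groups = np.array([0, 1])                    # first vs. second echo group
    fused = group_and_fuse(feats, groups, n_groups=2)
    ```

    The fused outputs would then feed an object-identification head.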
  • Publication number: 20220268938
    Abstract: In one embodiment, a method includes receiving sensor data. The sensor data is based on information from a first set of echo points and a second set of echo points. At least one echo point from the first set of echo points and one echo point from the second set of echo points originate from a single beam. The method includes generating a first set of feature maps based on the first set of echo points and a second set of feature maps based on the second set of echo points. The method includes predicting a bounding box for the object based on the first set of feature maps and the second set of feature maps.
    Type: Application
    Filed: February 24, 2021
    Publication date: August 25, 2022
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Yunze Man, Xinshuo Weng
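    The multi-echo fusion the abstract describes, combining feature maps from first and second echo points of the same beams before predicting a bounding box, can be sketched as follows (a toy element-wise fusion on hypothetical 4x4 feature maps, not the patented detector):

    ```python
    import numpy as np

    def fuse_and_box(fmap1, fmap2):
        """Fuse first- and second-echo feature maps, then return the bounding box
        (row_min, col_min, row_max, col_max) around cells with positive response."""
        fused = np.maximum(fmap1, fmap2)        # element-wise fusion of echo features
        rows, cols = np.nonzero(fused > 0)
        return rows.min(), cols.min(), rows.max(), cols.max()

    f1 = np.zeros((4, 4)); f1[1, 1] = 1.0      # response from the first echo set
    f2 = np.zeros((4, 4)); f2[2, 2] = 1.0      # response from the second echo set
    box = fuse_and_box(f1, f2)
    ```

    Neither echo set alone covers the full object here; the fused maps do.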
  • Publication number: 20220270327
    Abstract: Systems, methods, and other embodiments described herein relate to generating bounding box proposals. In one embodiment, a method includes generating blended 2-dimensional (2D) data based on 2D data and 3-dimensional (3D) data, and generating blended 3D data based on the 2D data and the 3D data. The method includes generating 2D features based on the 2D data and the blended 2D data, generating 3D features based on the 3D data and the blended 3D data, and generating the bounding box proposals based on the 2D features and the 3D features.
    Type: Application
    Filed: February 24, 2021
    Publication date: August 25, 2022
    Inventors: Prasanna Sivakumar, Kris Kitani, Matthew O'Toole, Yunze Man, Xinshuo Weng
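    The blending the abstract describes, deriving blended 2D and blended 3D data from both modalities and feeding each branch its original plus blended features, can be sketched with hypothetical per-point feature vectors (a toy weighted mix, not the patented proposal network):

    ```python
    import numpy as np

    def blended_features(f2d, f3d, w=0.5):
        """Blend per-point 2D and 3D features in both directions, then concatenate
        each modality with its blend, mirroring the two proposal branches."""
        b2d = w * f2d + (1 - w) * f3d     # blended 2D data (2D mixed with 3D)
        b3d = w * f3d + (1 - w) * f2d     # blended 3D data (3D mixed with 2D)
        return (np.concatenate([f2d, b2d], axis=1),
                np.concatenate([f3d, b3d], axis=1))

    f2d = np.array([[1.0, 0.0]])          # hypothetical image-branch feature
    f3d = np.array([[0.0, 1.0]])          # hypothetical point-branch feature
    feats2d, feats3d = blended_features(f2d, f3d)
    ```

    The 2D and 3D branches would then generate bounding box proposals from their respective concatenated features.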