Patents by Inventor Arjun BHARGAVA

Arjun BHARGAVA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11975725
    Abstract: A computer-implemented method for determining optimal values of operational parameters for a model predictive controller that controls a vehicle can receive, from a data store or a graphical user interface, ranges for one or more operational parameters. The computer-implemented method can determine optimal values for the operational parameters by simulating vehicle operation across the ranges of the one or more operational parameters, solving a vehicle control problem, and determining an output of the vehicle control problem based on a result of the simulated vehicle operation. A vehicle can include a processing component configured to adjust a control input for an actuator of the vehicle according to a control algorithm and based on the optimal parameter values determined by the computer-implemented method.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: May 7, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Michael Thompson, Carrie Bobier-Tiu, Manuel Ahumada, Arjun Bhargava, Avinash Balachandran
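As an illustrative sketch of the parameter-sweep idea in this abstract (not the patented implementation): the cost function `simulate_vehicle`, the parameter names `gain` and `horizon`, and the grid values below are all hypothetical stand-ins for a real vehicle-dynamics simulation and control problem.

```python
import itertools

def simulate_vehicle(params):
    # Hypothetical plant: a quadratic cost surface standing in for a
    # full simulation of the vehicle control problem.
    gain, horizon = params["gain"], params["horizon"]
    return (gain - 0.8) ** 2 + 0.1 * (horizon - 10) ** 2

def sweep_parameters(ranges):
    """Grid-search every combination of the given parameter ranges and
    return the combination with the lowest simulated cost."""
    names = list(ranges)
    best = None
    for combo in itertools.product(*(ranges[n] for n in names)):
        params = dict(zip(names, combo))
        cost = simulate_vehicle(params)
        if best is None or cost < best[1]:
            best = (params, cost)
    return best

ranges = {"gain": [0.4, 0.8, 1.2], "horizon": [5, 10, 20]}
best_params, best_cost = sweep_parameters(ranges)
```

A real controller would then use `best_params` to adjust actuator control inputs, as the abstract describes.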
  • Patent number: 11922640
    Abstract: A method for 3D object tracking is described. The method includes inferring first 2D semantic keypoints of a 3D object within a sparsely annotated video stream. The method also includes matching the first 2D semantic keypoints of a current frame with second 2D semantic keypoints in a next frame of the sparsely annotated video stream using embedded descriptors within the current frame and the next frame. The method further includes warping the first 2D semantic keypoints to the second 2D semantic keypoints to form warped 2D semantic keypoints in the next frame. The method also includes labeling a 3D bounding box in the next frame according to the warped 2D semantic keypoints in the next frame.
    Type: Grant
    Filed: March 8, 2021
    Date of Patent: March 5, 2024
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee
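A minimal sketch of the matching-and-warping step described above, assuming keypoints carry embedded descriptors; the greedy nearest-descriptor matcher and the toy two-keypoint frames are illustrative, not the patented method.

```python
def match_keypoints(curr, nxt):
    """Greedily match keypoints across frames by nearest embedded
    descriptor. Each keypoint is a ((x, y), descriptor) pair."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    matches, used = [], set()
    for i, (_, desc) in enumerate(curr):
        cands = [j for j in range(len(nxt)) if j not in used]
        if not cands:
            break
        j = min(cands, key=lambda j: d2(desc, nxt[j][1]))
        used.add(j)
        matches.append((i, j))
    return matches

def warp_keypoints(curr, nxt, matches):
    # "Warp" each current keypoint onto its matched position in the
    # next frame; a real system would then label the 3D bounding box
    # in the next frame from these warped keypoints.
    return [nxt[j][0] for _, j in matches]

curr = [((0, 0), (1.0, 0.0)), ((5, 5), (0.0, 1.0))]
nxt = [((6, 5), (0.0, 1.0)), ((1, 0), (1.0, 0.0))]
warped = warp_keypoints(curr, nxt, match_keypoints(curr, nxt))
```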
  • Patent number: 11854280
    Abstract: A method for 3D object detection is described. The method includes detecting semantic keypoints from monocular images of a video stream capturing a 3D object. The method also includes inferring a 3D bounding box of the 3D object corresponding to the detected semantic vehicle keypoints. The method further includes scoring the inferred 3D bounding box of the 3D object. The method also includes detecting the 3D object according to a final 3D bounding box generated based on the scoring of the inferred 3D bounding box.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: December 26, 2023
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun Bhargava, Haofeng Chen, Adrien David Gaidon, Rares A. Ambrus, Sudeep Pillai
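The scoring-and-selection step above can be sketched as follows, reduced from 3D boxes to 2D for brevity; the keypoint-coverage score is a simple stand-in for the learned scoring the abstract implies.

```python
def score_box(box, keypoints):
    """Score a candidate box by the fraction of detected keypoints it
    contains (a stand-in for a learned box scorer)."""
    x0, y0, x1, y1 = box
    inside = sum(x0 <= x <= x1 and y0 <= y <= y1 for x, y in keypoints)
    return inside / len(keypoints)

def select_final_box(candidates, keypoints):
    # The highest-scoring inferred box becomes the final detection.
    return max(candidates, key=lambda b: score_box(b, keypoints))

kps = [(1, 1), (2, 2), (3, 3)]
final = select_final_box([(0, 0, 4, 4), (0, 0, 1.5, 1.5)], kps)
```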
  • Publication number: 20230351886
    Abstract: A method for vehicle prediction, planning, and control is described. The method includes separately encoding traffic state information at an intersection into corresponding traffic state latent spaces. The method also includes aggregating the corresponding traffic state latent spaces to form a generalized traffic geometry latent space. The method further includes interpreting the generalized traffic geometry latent space to form a traffic flow map including current and future vehicle trajectories. The method also includes decoding the generalized traffic geometry latent space to predict a vehicle behavior according to the traffic flow map based on the current and future vehicle trajectories.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG, Kun-Hsin CHEN
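A toy sketch of the encode-then-aggregate step: the learned per-input encoders are replaced by a simple scaling so the aggregation into a shared latent stays visible; all names and values are hypothetical.

```python
def encode(state, weight):
    # Toy per-input encoder: a learned network is replaced by a
    # scaling of the traffic-state vector.
    return [weight * x for x in state]

def aggregate(latents):
    """Element-wise mean of the per-state latents, standing in for
    aggregation into a generalized traffic-geometry latent space."""
    return [sum(col) / len(latents) for col in zip(*latents)]

lane_state = encode([1.0, 2.0], weight=0.5)    # separately encoded
signal_state = encode([3.0, 0.0], weight=0.5)  # separately encoded
fused = aggregate([lane_state, signal_state])
```

A decoder would then read `fused` to predict trajectories, per the abstract.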
  • Publication number: 20230350050
    Abstract: The disclosure generally relates to methods for gathering radar measurements, wherein the radar measurements include one or more angular uncertainties; generating a two-dimensional radar uncertainty cloud, wherein the radar uncertainty cloud includes one or more shaded regions that each represent an angular uncertainty; capturing image data, wherein the image data includes one or more targets within a region of interest; and fusing the two-dimensional radar uncertainty cloud with the image data to overlay the one or more regions of uncertainty over a target.
    Type: Application
    Filed: April 27, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Charles Christopher Ochoa, Arjun Bhargava, Chao Fang, Kun-Hsin Chen, Kuan-Hui Lee
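A minimal sketch of the shaded uncertainty wedge and the overlay step, assuming the sensor sits at the bottom-centre of a square grid; the rasterization and threshold are illustrative choices, not the patented fusion.

```python
import math

def uncertainty_cloud(size, bearing_deg, sigma_deg):
    """Rasterize a 2D angular-uncertainty wedge: cells whose bearing
    from the sensor (bottom-centre) lies within +/- sigma of the
    measured bearing are shaded 1, everything else 0."""
    grid = [[0] * size for _ in range(size)]
    ox = size // 2
    for r in range(size):
        for c in range(size):
            ang = math.degrees(math.atan2(c - ox, size - 1 - r))
            if abs(ang - bearing_deg) <= sigma_deg:
                grid[r][c] = 1
    return grid

def overlay(image, cloud):
    # Fuse by tagging each image pixel with whether it falls under
    # the shaded uncertainty region.
    return [[(px, bool(sh)) for px, sh in zip(irow, crow)]
            for irow, crow in zip(image, cloud)]

g = uncertainty_cloud(9, bearing_deg=0, sigma_deg=10)
ov = overlay([[5] * 9 for _ in range(9)], g)
```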
  • Publication number: 20230351739
    Abstract: Systems, methods, and other embodiments described herein relate to a multi-task model that integrates recurrent models to improve handling of multi-sweep inputs. In one embodiment, a method includes acquiring sensor data from multiple modalities. The method includes separately encoding respective segments of the sensor data according to an associated one of the different modalities to form encoded features using separate encoders of a network. The method includes accumulating, in a detector, sparse features associated with sparse sensor inputs of the multiple modalities to densify the sparse features into dense features. The method includes providing observations according to the encoded features and the sparse features using the network.
    Type: Application
    Filed: April 29, 2022
    Publication date: November 2, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kuan-Hui Lee, Charles Christopher Ochoa, Arjun Bhargava, Chao Fang, Kun-Hsin Chen
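The accumulate-to-densify step above can be sketched as follows; the per-modality encoders are omitted, and keying features by cell index with mean pooling is an assumption made for brevity, not the patented detector.

```python
class SparseAccumulator:
    """Accumulate sparse per-sweep features keyed by cell index so
    that repeated sweeps densify the representation over time."""
    def __init__(self):
        self.cells = {}

    def add_sweep(self, sweep):
        # sweep maps a cell index to a scalar feature for that sweep.
        for idx, feat in sweep.items():
            self.cells.setdefault(idx, []).append(feat)

    def densify(self):
        # Average every observation seen for a cell into one dense
        # feature per occupied cell.
        return {i: sum(v) / len(v) for i, v in self.cells.items()}

acc = SparseAccumulator()
acc.add_sweep({0: 1.0})            # first sparse sweep
acc.add_sweep({0: 3.0, 1: 2.0})    # second sweep fills more cells
dense = acc.densify()
```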
  • Publication number: 20230351766
    Abstract: A method for controlling an ego vehicle in an environment includes determining, via a flow model of a parked vehicle recognition system, a flow between a first representation of the environment and a second representation of the environment. The method also includes determining, via a velocity model of the parked vehicle recognition system, a velocity of a vehicle in the environment based on the flow. The method further includes determining, via a parked vehicle classification model of the parked vehicle recognition system, that the vehicle is parked based on the velocity of the vehicle and one or more features associated with the vehicle and/or the environment. The method still further includes planning a trajectory of the ego vehicle based on determining the vehicle is parked.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG
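A toy end-to-end sketch of the flow → velocity → parked-classification pipeline; the learned flow, velocity, and classification models are replaced by per-object displacement, speed, and a threshold plus one contextual feature, all hypothetical.

```python
import math

def flow_between(rep1, rep2):
    """Per-object displacement between two environment snapshots
    (each maps an object id to an (x, y) position)."""
    return {k: (rep2[k][0] - rep1[k][0], rep2[k][1] - rep1[k][1])
            for k in rep1 if k in rep2}

def speeds(flow, dt):
    # Velocity magnitude from flow over the time step.
    return {k: math.hypot(dx, dy) / dt for k, (dx, dy) in flow.items()}

def is_parked(speed, near_curb, thresh=0.2):
    # Velocity plus one contextual feature decide the parked label.
    return speed < thresh and near_curb

rep1 = {"car_a": (0.0, 0.0), "car_b": (10.0, 0.0)}
rep2 = {"car_a": (3.0, 4.0), "car_b": (10.0, 0.0)}
v = speeds(flow_between(rep1, rep2), dt=1.0)
```

A planner would then route around objects labelled parked, per the abstract.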
  • Publication number: 20230351767
    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of the vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Arjun BHARGAVA, Chao FANG, Charles Christopher OCHOA, Kun-Hsin CHEN, Kuan-Hui LEE, Vitor GUIZILINI
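The fusion step above can be sketched minimally as follows, under the simplifying assumption that sparse depths simply overwrite the monocular estimate where available; a real system would blend the two, and the sparse depth network itself is omitted.

```python
def fuse_depth(mono, sparse):
    """Fuse a dense monocular depth map with sparse (more trusted)
    depth estimates by overwriting where sparse values exist."""
    fused = [row[:] for row in mono]   # copy; keep the input intact
    for (r, c), d in sparse.items():
        fused[r][c] = d
    return fused

mono = [[4.0, 4.0], [4.0, 4.0]]   # dense but approximate estimate
sparse = {(0, 1): 3.5}            # sparse but trusted measurement
dense_lidar = fuse_depth(mono, sparse)
```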
  • Publication number: 20230351774
    Abstract: A method for controlling an ego vehicle in an environment includes associating, by a velocity model, one or more objects within the environment with a respective velocity instance label. The method also includes selectively focusing, by a recurrent network of the taillight recognition system, on a selected region of the sequence of images according to a spatial attention model for a vehicle taillight recognition task. The method further includes concatenating the selected region with the respective velocity instance label of each object of the one or more objects within the environment to generate a concatenated region label. The method still further includes planning a trajectory of the ego vehicle based on inferring, at a classifier of the taillight recognition system, an intent of each object of the one or more objects according to a respective taillight state of each object, as determined based on the concatenated region label.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Applicants: TOYOTA RESEARCH INSTITUTE, INC., TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Kuan-Hui LEE, Charles Christopher OCHOA, Arjun BHARGAVA, Chao FANG
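A minimal sketch of the attend-then-concatenate step: spatial attention is reduced to a hard crop, and flattening plus appending the velocity label stands in for the feature concatenation fed to the classifier; all names are hypothetical.

```python
def crop_attention(image, box):
    # Spatial attention reduced to a hard crop of the attended region
    # (row0, col0, row1, col1).
    r0, c0, r1, c1 = box
    return [row[c0:c1] for row in image[r0:r1]]

def concat_region_label(region, velocity_label):
    """Flatten the attended region and append the velocity instance
    label, mirroring the concatenated region label in the abstract."""
    return [px for row in region for px in row] + [velocity_label]

image = [[0, 1, 0], [0, 9, 8], [0, 7, 6]]
region = crop_attention(image, (1, 1, 3, 3))
features = concat_region_label(region, velocity_label=0.0)
```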
  • Patent number: 11721065
    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting, to a neural network, the object to generate a 2D NOCS image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to the SDF shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked, 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: August 8, 2023
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee, Kun-Hsin Chen
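The keypoint-lifting step in this family of abstracts can be sketched with a standard pinhole back-projection; the per-keypoint depths are assumed known here for brevity, whereas the patent recovers geometry via the rendered shape. The intrinsics values are hypothetical.

```python
def lift_keypoints(kps_2d, depths, fx, fy, cx, cy):
    """Back-project 2D keypoints to 3D with a pinhole camera model:
    X = (u - cx) * z / fx, Y = (v - cy) * z / fy, Z = z."""
    return [((u - cx) * z / fx, (v - cy) * z / fy, z)
            for (u, v), z in zip(kps_2d, depths)]

# One keypoint 100 px right of the principal point, at 2 m depth.
pts = lift_keypoints([(150.0, 50.0)], [2.0],
                     fx=100.0, fy=100.0, cx=50.0, cy=50.0)
```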
  • Publication number: 20230047160
    Abstract: Systems, methods, computer-readable media, techniques, and methodologies are disclosed for performing end-to-end, learning-based keypoint detection and association. A scene graph of a signalized intersection is constructed from an input image of the intersection. The scene graph includes detected keypoints and linkages identified between the keypoints. The scene graph can be used along with a vehicle's localization information to identify which keypoint that represents a traffic signal is associated with the vehicle's current travel lane. An appropriate vehicle action may then be determined based on a transition state of the traffic signal keypoint and trajectory information for the vehicle. A control signal indicative of this vehicle action may then be output to cause an autonomous vehicle, for example, to implement the appropriate vehicle action.
    Type: Application
    Filed: October 29, 2022
    Publication date: February 16, 2023
    Inventors: Kun-Hsin CHEN, Peiyan GONG, Sudeep PILLAI, Arjun BHARGAVA, Shunsho KAKU, Hai JIN, Kuan-Hui LEE
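The scene-graph lookup described above can be sketched as follows; representing keypoints as dicts with a `kind` field and walking one linkage hop from the lane keypoint are illustrative simplifications of the learned detection-and-association pipeline.

```python
def build_scene_graph(keypoints, linkages):
    """Undirected graph over detected keypoints; linkages are index
    pairs, as in the abstract's keypoint-association step."""
    graph = {i: [] for i in range(len(keypoints))}
    for a, b in linkages:
        graph[a].append(b)
        graph[b].append(a)
    return graph

def signal_for_lane(keypoints, graph, lane_idx):
    # Return the traffic-signal keypoint linked to the vehicle's
    # current travel-lane keypoint, if any.
    for j in graph[lane_idx]:
        if keypoints[j]["kind"] == "signal":
            return j
    return None

kps = [{"kind": "lane"}, {"kind": "lane"}, {"kind": "signal"}]
g = build_scene_graph(kps, [(0, 2)])
```

The signal keypoint's transition state would then drive the vehicle-action decision.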
  • Publication number: 20230031289
    Abstract: A method for 2D semantic keypoint detection and tracking is described. The method includes learning embedded descriptors of salient object keypoints detected in previous images according to a descriptor embedding space model. The method also includes predicting, using a shared image encoder backbone, salient object keypoints within a current image of a video stream. The method further includes inferring an object represented by the predicted, salient object keypoints within the current image of the video stream. The method also includes tracking the inferred object by matching embedded descriptors of the predicted, salient object keypoints representing the inferred object within the previous images of the video stream based on the descriptor embedding space model.
    Type: Application
    Filed: July 30, 2021
    Publication date: February 2, 2023
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Haofeng CHEN, Arjun BHARGAVA, Rares Andrei AMBRUS, Sudeep PILLAI
  • Publication number: 20220414981
    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting, to a neural network, the object to generate a 2D NOCS image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to the SDF shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked, 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
    Type: Application
    Filed: August 25, 2022
    Publication date: December 29, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun BHARGAVA, Sudeep PILLAI, Kuan-Hui LEE, Kun-Hsin CHEN
  • Patent number: 11514685
    Abstract: Systems, methods, computer-readable media, techniques, and methodologies are disclosed for performing end-to-end, learning-based keypoint detection and association. A scene graph of a signalized intersection is constructed from an input image of the intersection. The scene graph includes detected keypoints and linkages identified between the keypoints. The scene graph can be used along with a vehicle's localization information to identify which keypoint that represents a traffic signal is associated with the vehicle's current travel lane. An appropriate vehicle action may then be determined based on a transition state of the traffic signal keypoint and trajectory information for the vehicle. A control signal indicative of this vehicle action may then be output to cause an autonomous vehicle, for example, to implement the appropriate vehicle action.
    Type: Grant
    Filed: February 17, 2021
    Date of Patent: November 29, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Kun-Hsin Chen, Peiyan Gong, Sudeep Pillai, Arjun Bhargava, Shunsho Kaku, Hai Jin, Kuan-Hui Lee
  • Patent number: 11501525
    Abstract: Systems and methods for panoptic image segmentation are disclosed herein. One embodiment performs semantic segmentation and object detection on an input image, wherein the object detection generates a plurality of bounding boxes associated with an object in the input image; selects a query bounding box from among the plurality of bounding boxes; maps at least one of the bounding boxes in the plurality of bounding boxes other than the query bounding box to the query bounding box based on similarity between the at least one of the bounding boxes and the query bounding box to generate a mask assignment for the object, the mask assignment defining a contour of the object; compares the mask assignment with results of the semantic segmentation to produce a refined mask assignment for the object; and outputs a panoptic segmentation of the input image that includes the refined mask assignment for the object.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: November 15, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Rui Hou, Jie Li, Vitor Guizilini, Adrien David Gaidon, Dennis Park, Arjun Bhargava
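The box-mapping step above can be sketched with a plain IoU similarity; using IoU and a 0.5 threshold is an assumed stand-in for the learned similarity in the patent, and the mask-refinement against semantic segmentation is omitted.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes
    (x0, y0, x1, y1)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def assign_to_query(boxes, query, thresh=0.5):
    # Boxes similar enough to the query box contribute to the same
    # object's mask assignment.
    return [b for b in boxes if b != query and iou(b, query) >= thresh]

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
assigned = assign_to_query(boxes, query=(0, 0, 10, 10))
```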
  • Publication number: 20220343096
    Abstract: A method for 3D object detection is described. The method includes detecting semantic keypoints from monocular images of a video stream capturing a 3D object. The method also includes inferring a 3D bounding box of the 3D object corresponding to the detected semantic vehicle keypoints. The method further includes scoring the inferred 3D bounding box of the 3D object. The method also includes detecting the 3D object according to a final 3D bounding box generated based on the scoring of the inferred 3D bounding box.
    Type: Application
    Filed: April 27, 2021
    Publication date: October 27, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun BHARGAVA, Haofeng CHEN, Adrien David GAIDON, Rares A. AMBRUS, Sudeep PILLAI
  • Publication number: 20220335258
    Abstract: Datasets for autonomous driving systems and multi-modal scenes may be automatically labeled using previously trained models as priors, mitigating the limitations of conventional manual data labeling. Properly versioned models, including model weights and knowledge of the dataset on which the model was previously trained, may be used to run an inference operation on unlabeled data, thus automatically labeling the dataset. The newly labeled dataset may then be used to train new models, including on sparse datasets, in a semi-supervised or weakly-supervised fashion.
    Type: Application
    Filed: April 16, 2021
    Publication date: October 20, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Allan RAVENTOS, Arjun BHARGAVA, Kun-Hsin CHEN, Sudeep PILLAI, Adrien David GAIDON
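The inference-as-labeling loop above reduces to a very small sketch; the threshold "prior" below is a hypothetical stand-in for a trained, versioned perception model (weights plus known training dataset).

```python
def auto_label(model, unlabeled):
    """Run a previously trained, versioned model as a prior over
    unlabeled samples to produce pseudo-labels for the next
    training round."""
    return [(x, model(x)) for x in unlabeled]

# Hypothetical prior: a threshold rule in place of a real model.
prior = lambda score: "vehicle" if score > 0.5 else "background"
pseudo = auto_label(prior, [0.9, 0.2, 0.7])
```

The resulting `pseudo` pairs would then feed semi- or weakly-supervised training of the next model version.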
  • Patent number: 11475628
    Abstract: A method for 3D object modeling includes linking 2D semantic keypoints of an object within a video stream into a 2D structured object geometry. The method includes inputting, to a neural network, the object to generate a 2D NOCS image and a shape vector, the shape vector being mapped to a continuously traversable coordinate shape space. The method includes applying a differentiable shape renderer to the SDF shape and the 2D NOCS image to render a shape of the object corresponding to a 3D object model in the continuously traversable coordinate shape space. The method includes lifting the linked, 2D semantic keypoints of the 2D structured object geometry to a 3D structured object geometry. The method includes geometrically and projectively aligning the 3D object model, the 3D structured object geometry, and the rendered shape to form a rendered object. The method includes generating 3D bounding boxes from the rendered object.
    Type: Grant
    Filed: January 12, 2021
    Date of Patent: October 18, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun Bhargava, Sudeep Pillai, Kuan-Hui Lee, Kun-Hsin Chen
  • Publication number: 20220284598
    Abstract: A method for 3D object tracking is described. The method includes inferring first 2D semantic keypoints of a 3D object within a sparsely annotated video stream. The method also includes matching the first 2D semantic keypoints of a current frame with second 2D semantic keypoints in a next frame of the sparsely annotated video stream using embedded descriptors within the current frame and the next frame. The method further includes warping the first 2D semantic keypoints to the second 2D semantic keypoints to form warped 2D semantic keypoints in the next frame. The method also includes labeling a 3D bounding box in the next frame according to the warped 2D semantic keypoints in the next frame.
    Type: Application
    Filed: March 8, 2021
    Publication date: September 8, 2022
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Arjun BHARGAVA, Sudeep PILLAI, Kuan-Hui LEE
  • Publication number: 20220284222
    Abstract: In one embodiment, a vehicle light classification system captures a sequence of images of a scene that includes a front/rear view of a vehicle with front/rear-side lights, determines semantic keypoints, in the images and associated with the front/rear-side lights, based on inputting the images into a first neural network, obtains multiple difference images that are each a difference between successive images from among the sequence of images, the successive images being aligned based on their respective semantic keypoints, and determines a classification of the front/rear-side lights based at least in part on the difference images by inputting the difference images into a second neural network.
    Type: Application
    Filed: March 4, 2021
    Publication date: September 8, 2022
    Inventors: Jia-En Pan, Kuan-Hui Lee, Chao Fang, Kun-Hsin Chen, Arjun Bhargava, Sudeep Pillai
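The difference-image idea above can be sketched as follows, assuming frames are already keypoint-aligned; the learned second network is replaced by a simple threshold on the largest temporal change, and the 1D "images" are illustrative.

```python
def difference_images(frames):
    """Pixel-wise absolute differences between successive, already
    keypoint-aligned frames; a blinking light shows up as a large
    temporal difference."""
    return [[abs(a - b) for a, b in zip(f1, f2)]
            for f1, f2 in zip(frames, frames[1:])]

def classify_light(diffs, thresh=0.5):
    # Threshold stand-in for the second neural network: any large
    # temporal change is read as a blinking light.
    peak = max(d for img in diffs for d in img)
    return "blinking" if peak > thresh else "steady"

frames = [[0.0, 0.1], [1.0, 0.1], [0.0, 0.1]]
diffs = difference_images(frames)
```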