Patents by Inventor Fnu Ratnesh Kumar
Fnu Ratnesh Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230351795
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Application
Filed: July 5, 2023
Publication date: November 2, 2023
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
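The abstract mentions feeding box coordinates and area ratios to the model alongside the masked image. A minimal sketch of how such a feature vector might be assembled, assuming axis-aligned pixel boxes; the function and feature names are illustrative, not from the patent:

```python
def association_features(person_box, object_box, img_w, img_h):
    """Build a feature vector for one person/object pair: normalized box
    coordinates plus area ratios, as hinted at in the abstract.
    Boxes are (x1, y1, x2, y2) in pixels; names here are illustrative."""
    def area(b):
        return max(0.0, b[2] - b[0]) * max(0.0, b[3] - b[1])

    # Normalize coordinates to [0, 1] so the features are resolution-independent.
    norm = [c / s for c, s in zip(person_box + object_box,
                                  [img_w, img_h] * 4)]

    # Intersection of the two regions (empty intersection -> zero area).
    ix1, iy1 = max(person_box[0], object_box[0]), max(person_box[1], object_box[1])
    ix2, iy2 = min(person_box[2], object_box[2]), min(person_box[3], object_box[3])
    inter = area((ix1, iy1, ix2, iy2))

    img_area = float(img_w * img_h)
    ratios = [area(person_box) / img_area,      # person region vs. image
              area(object_box) / img_area,      # object region vs. image
              inter / max(area(person_box), 1e-6)]  # overlap vs. person region
    return norm + ratios
```

A downstream classifier would consume this vector together with the masked image to score the association.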
-
Patent number: 11741736
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Grant
Filed: December 20, 2021
Date of Patent: August 29, 2023
Assignee: NVIDIA Corporation
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
-
Publication number: 20230266768
Abstract: Systems may include at least one processor configured to: determine a predicted value of an unwrap factor using a machine learning model, wherein the machine learning model is trained to provide a predicted value of an unwrap factor for dealiasing a measurement of range rate of a target object as an output; dealias a measurement value of range rate from a radar of an autonomous vehicle (AV) based on the predicted value of the unwrap factor to provide a true value of range rate; and control an operation of the AV in a real-time environment based on the true value of range rate. Methods, computer program products, and autonomous vehicles are also disclosed.
Type: Application
Filed: February 24, 2022
Publication date: August 24, 2023
Inventors: Minhan Li, Fnu Ratnesh Kumar, Xiufeng Song
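The dealiasing step itself is simple once the unwrap factor is known: a pulse-Doppler radar reports range rate only within an unambiguous interval, and faster targets wrap around. A minimal sketch, assuming the unwrap factor is an integer number of interval widths (the model prediction described in the abstract); the function name and interval convention are assumptions:

```python
def dealias_range_rate(measured, unwrap_factor, v_max):
    """Recover the true range rate from an aliased radar measurement.

    The radar can only report range rate inside the unambiguous interval
    [-v_max, +v_max); faster targets alias back into it. Given an integer
    unwrap factor k (predicted by a model, per the abstract), the true
    value is the measurement shifted by k full interval widths.
    """
    interval = 2.0 * v_max          # width of the unambiguous interval
    return measured + unwrap_factor * interval
```

For example, a target closing at 12 m/s seen by a radar with v_max = 10 m/s aliases to −8 m/s; an unwrap factor of 1 restores the true value.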
-
Publication number: 20230150543
Abstract: Disclosed herein are systems, methods, and computer program products for operating a robotic system. For example, the method includes: obtaining a first cuboid generated based on an image, a second cuboid generated based on a lidar dataset and/or a third cuboid generated by a heuristic algorithm using the lidar dataset; using a machine learning model to generate a heading for an object in proximity to the robotic system based on the first cuboid, second cuboid and/or third cuboid; generating a bounding box geometry and a bounding box location based on the second cuboid or third cuboid; and generating a fourth cuboid using the bounding box geometry, the bounding box location, and the heading generated using the machine learning model.
Type: Application
Filed: November 16, 2021
Publication date: May 18, 2023
Inventors: Wulue Zhao, FNU Ratnesh Kumar, Kevin L. Wyffels
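The final step combines three ingredients: a bounding box geometry (dimensions), a bounding box location (center), and the model-predicted heading. A minimal sketch of assembling such a cuboid, assuming a yaw-only heading and returning the four ground-plane corners; the function signature and representation are illustrative, not from the patent:

```python
import math

def make_cuboid(center, dims, heading):
    """Assemble a cuboid from a bounding-box geometry (length, width, height),
    a location (x, y, z center), and a heading (yaw in radians), and return
    its four ground-plane corners in world coordinates."""
    cx, cy, _ = center
    length, width, _ = dims
    c, s = math.cos(heading), math.sin(heading)
    corners = []
    for dx, dy in ((length / 2,  width / 2), (length / 2, -width / 2),
                   (-length / 2, -width / 2), (-length / 2,  width / 2)):
        # Rotate each local corner offset by the heading, then translate.
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```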
-
Publication number: 20230123184
Abstract: This document discloses system, method, and computer program product embodiments for detecting an object. For example, the method includes generating a plurality of cuboids by performing the following operations: defining a plurality of first cuboids each encompassing lidar data points that are plotted on a respective 3D graph of a plurality of 3D graphs; accumulating the lidar data points encompassed by the plurality of first cuboids; computing an extent using the accumulated lidar data points; and defining a second cuboid that has dimensions specified by the extent. The first cuboids and/or the second cuboid may be used to detect the object.
Type: Application
Filed: December 15, 2022
Publication date: April 20, 2023
Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
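The accumulate-then-measure step can be sketched compactly: pool the points enclosed by the per-frame ("first") cuboids, then take per-axis bounds to get the extent that sizes the "second" cuboid. A minimal sketch assuming object-frame points as (N, 3) arrays; the function name is illustrative:

```python
import numpy as np

def extent_cuboid(point_sets):
    """Accumulate lidar points enclosed by several per-frame cuboids and
    compute a single extent: the axis-aligned min/max bounds whose size
    would define the second cuboid. point_sets is a list of (N_i, 3) arrays."""
    points = np.vstack(point_sets)   # accumulate across frames
    lo = points.min(axis=0)          # per-axis minimum
    hi = points.max(axis=0)          # per-axis maximum
    return lo, hi, hi - lo           # bounds, plus the extent (dimensions)
```

Pooling across frames helps because any single sweep may see only part of the object; the union of sweeps approaches its full size.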
-
Patent number: 11557129
Abstract: Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
Type: Grant
Filed: April 27, 2021
Date of Patent: January 17, 2023
Assignee: ARGO AI, LLC
Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
-
Publication number: 20220392234
Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
Type: Application
Filed: August 18, 2022
Publication date: December 8, 2022
Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
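One common way to select the "most meaningful" embeddings in a batch, in the spirit of the sampling idea the abstract describes, is batch-hard mining for the triplet loss: for each anchor, keep only the hardest positive (farthest same-identity embedding) and hardest negative (closest different-identity embedding). A minimal NumPy sketch; this is a standard reading of hard mining, not necessarily the patent's exact "batch sample" procedure:

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Triplet loss with batch-hard mining over an (N, D) embedding batch."""
    # Pairwise Euclidean distances between all embeddings in the batch.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

    same = labels[:, None] == labels[None, :]
    eye = np.eye(len(labels), dtype=bool)

    # Hardest positive: farthest embedding with the same identity.
    d_pos = np.where(same & ~eye, dist, -np.inf).max(axis=1)
    # Hardest negative: closest embedding with a different identity.
    d_neg = np.where(~same, dist, np.inf).min(axis=1)

    # Hinge on the margin; well-separated identities contribute zero loss.
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Because only the hardest pairs per anchor drive the gradient, easy triplets that contribute nothing are effectively filtered out, which is the efficiency argument the abstract makes.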
-
Publication number: 20220379911
Abstract: Methods of determining relevance of objects that a vehicle detected are disclosed. A system will receive a data log of a run of the vehicle. The data log includes perception data captured by vehicle sensors during the run. The system will identify an interaction time, along with a look-ahead lane based on a lane in which the vehicle traveled during the run. The system will define a region of interest (ROI) that includes a lane segment within the look-ahead lane. The system will identify, from the perception data, objects that the vehicle detected within the ROI during the run. For each object, the system will determine a detectability value by measuring an amount of the object that the vehicle detected. The system will create a subset with only objects having at least a threshold detectability value, and it will classify any such object as a priority relevant object.
Type: Application
Filed: May 26, 2021
Publication date: December 1, 2022
Inventors: G. Peter K. Carr, FNU Ratnesh Kumar
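The thresholding step reduces to measuring how much of each object was actually observed and keeping those above a cutoff. A minimal sketch, assuming detectability is the observed fraction of the object's full extent (e.g. visible surface or point count); all names and the 0.5 default are illustrative:

```python
def detectability(observed_extent, full_extent):
    """Fraction of the object the vehicle actually detected, both extents
    in the same units (e.g. visible points vs. expected total points)."""
    return observed_extent / full_extent if full_extent else 0.0

def priority_relevant(objects, threshold=0.5):
    """Subset of objects whose detectability meets the threshold; these
    are the ones classified as priority relevant. `objects` maps an
    object id to (observed_extent, full_extent)."""
    return [oid for oid, (obs, full) in objects.items()
            if detectability(obs, full) >= threshold]
```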
-
Publication number: 20220382284
Abstract: Methods of determining relevance of objects that a vehicle's perception system detects are disclosed. A system on or in communication with the vehicle will identify a time horizon, and a look-ahead lane based on a lane in which the vehicle is currently traveling. The system defines a region of interest (ROI) that includes one or more lane segments within the look-ahead lane. The system identifies a first subset that includes objects located within the ROI, but not objects located outside the ROI. The system identifies a second subset that includes objects located within the ROI that may interact with the vehicle during the time horizon, but excludes actors that may not interact with the vehicle during the time horizon. The system classifies any object that is in the first subset, the second subset or both subsets as a priority relevant object.
Type: Application
Filed: May 26, 2021
Publication date: December 1, 2022
Inventors: G. Peter K. Carr, FNU Ratnesh Kumar
-
Publication number: 20220343101
Abstract: Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
Type: Application
Filed: April 27, 2021
Publication date: October 27, 2022
Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
-
Patent number: 11455807
Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
Type: Grant
Filed: September 20, 2019
Date of Patent: September 27, 2022
Assignee: NVIDIA Corporation
Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
-
Publication number: 20220114800
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Application
Filed: December 20, 2021
Publication date: April 14, 2022
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
-
Patent number: 11205086
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Grant
Filed: November 8, 2019
Date of Patent: December 21, 2021
Assignee: NVIDIA Corporation
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
-
Publication number: 20210089921
Abstract: Transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. A pre-trained model can be obtained that is relevant for that inferencing task. Additional training data, as may correspond to at least one additional class of data, can be used to further train this model. This model can then be pruned and retrained in order to obtain a smaller model that retains high accuracy for the intended inferencing task.
Type: Application
Filed: September 23, 2020
Publication date: March 25, 2021
Inventors: Farzin Aghdasi, Varun Praveen, FNU Ratnesh Kumar, Partha Sriram
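The prune-then-retrain step can be illustrated with the simplest pruning criterion, magnitude pruning: zero out the smallest-magnitude fraction of a layer's weights, then fine-tune the smaller model to recover accuracy. A minimal NumPy sketch; the criterion and function name are a common stand-in, not necessarily the patent's specific pruning method:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of a weight
    matrix. In the workflow the abstract describes, this would be
    followed by retraining (fine-tuning) to restore accuracy."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0     # drop weights at/below it
    return pruned
```

In practice this is applied per layer (or per channel, for structured pruning), and the prune/retrain cycle may repeat until the accuracy-size trade-off is acceptable.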
-
Publication number: 20200151489
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
Type: Application
Filed: November 8, 2019
Publication date: May 14, 2020
Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
-
Publication number: 20200097742
Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
Type: Application
Filed: September 20, 2019
Publication date: March 26, 2020
Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill