Patents by Inventor Ratnesh Kumar

Ratnesh Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230138346
    Abstract: A computing device comprises a memory to store a first untrusted file and a second untrusted file; and a processor to scan a file system operation executing on the computing device; create an association between the first untrusted file and the second untrusted file based on the scanning; execute the first untrusted file together with the associated second untrusted file in a micro virtual machine (VM); and identify a malicious behavior of the executed first untrusted file interacting with the associated second untrusted file in the micro VM.
    Type: Application
    Filed: April 28, 2020
    Publication date: May 4, 2023
    Inventors: Ratnesh Kumar Lockton, Vivek Srivastava
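
A minimal sketch, assuming a simplified model of the approach in publication 20230138346 above: scanned file system operations link untrusted files, and an untrusted file is then isolated together with its associated files for execution in a micro VM. All names here (FileAssociationTracker, observe, files_to_isolate) are hypothetical and not taken from the patent.

```python
# Illustrative sketch only; names and logic are hypothetical, not the patented implementation.
from collections import defaultdict

class FileAssociationTracker:
    def __init__(self, untrusted_files):
        self.untrusted = set(untrusted_files)
        self.associations = defaultdict(set)

    def observe(self, source_file, target_file):
        """Record a scanned file system operation (e.g. source opened or wrote target)."""
        if source_file in self.untrusted and target_file in self.untrusted:
            self.associations[source_file].add(target_file)
            self.associations[target_file].add(source_file)

    def files_to_isolate(self, untrusted_file):
        """Return the file plus its associated untrusted files, to be executed together
        in a micro VM so interactions between them can be observed for malicious behavior."""
        return {untrusted_file} | self.associations[untrusted_file]

tracker = FileAssociationTracker({"invoice.docm", "payload.dll"})
tracker.observe("invoice.docm", "payload.dll")   # a scanned operation links the two files
print(tracker.files_to_isolate("invoice.docm"))  # both would be run in the micro VM
```
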
  • Publication number: 20230123184
    Abstract: This document discloses system, method, and computer program product embodiments for detecting an object. For example, the method includes generating a plurality of cuboids by performing the following operations: defining a plurality of first cuboids each encompassing lidar data points that are plotted on a respective 3D graph of a plurality of 3D graphs; accumulating the lidar data points encompassed by the plurality of first cuboids; computing an extent using the accumulated lidar data points; and defining a second cuboid that has dimensions specified by the extent. The first cuboids and/or the second cuboid may be used to detect the object.
    Type: Application
    Filed: December 15, 2022
    Publication date: April 20, 2023
    Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
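
The abstract above walks through a concrete sequence (per-graph first cuboids, point accumulation, extent computation, second cuboid). A minimal NumPy sketch of that sequence follows; the point data and helper name are invented for illustration, and this is not the patented method.

```python
# Hedged sketch: accumulate lidar points from several first cuboids and derive a
# second cuboid from their overall extent. Not the patented implementation.
import numpy as np

def second_cuboid_from_first(first_cuboid_points):
    """first_cuboid_points: list of (N_i, 3) arrays, the lidar points encompassed by
    each first cuboid (one per 3D graph). Returns (center, dimensions) of a second
    cuboid whose extent covers the accumulated points."""
    accumulated = np.vstack(first_cuboid_points)      # accumulate the encompassed points
    mins, maxs = accumulated.min(axis=0), accumulated.max(axis=0)
    extent = maxs - mins                              # length, width, height
    center = (maxs + mins) / 2.0
    return center, extent

pts_frame_a = np.random.rand(50, 3) * [4.0, 1.8, 1.5]                # toy points, one graph
pts_frame_b = np.random.rand(60, 3) * [4.2, 1.9, 1.6] + [0.1, 0, 0]  # toy points, another graph
center, dims = second_cuboid_from_first([pts_frame_a, pts_frame_b])
print("second cuboid center:", center, "dimensions:", dims)
```
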
  • Publication number: 20230016568
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 19, 2023
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
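
Of the approaches listed in the abstract above, the ROI-based occupancy detection lends itself to a short sketch: a parking spot's ROI is marked occupied when a detected vehicle box overlaps it sufficiently. The IoU threshold and helper names are assumptions, not details from the patent.

```python
# Hedged sketch of ROI-based occupancy detection; thresholds and names are assumed.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def spot_occupancy(parking_spot_rois, vehicle_detections, threshold=0.3):
    """Mark each parking-spot ROI occupied if any detected vehicle box overlaps it enough."""
    return {
        spot_id: any(iou(roi, det) >= threshold for det in vehicle_detections)
        for spot_id, roi in parking_spot_rois.items()
    }

rois = {"A1": (0, 0, 100, 200), "A2": (110, 0, 210, 200)}
detections = [(10, 20, 95, 190)]          # one vehicle box reported by a camera
print(spot_occupancy(rois, detections))   # {'A1': True, 'A2': False}
```
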
  • Patent number: 11557129
    Abstract: Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: January 17, 2023
    Assignee: ARGO AI, LLC
    Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
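
The patent above defines an amodal cuboid from several loose-fit cuboids of the same object. As a hedged illustration only, one simple aggregation rule is to take the per-axis maximum of the loose-fit dimensions; the actual definition used in the patent may differ.

```python
# Illustrative aggregation of loose-fit cuboids into a single amodal extent.
# The max-per-axis rule is an assumption for the sketch, not the patented definition.
import numpy as np

def amodal_extent(loose_fit_dims):
    """loose_fit_dims: (K, 3) array of length/width/height for K loose-fit cuboids of
    the same object across frames. Returns one amodal extent covering all of them."""
    dims = np.asarray(loose_fit_dims, dtype=float)
    return dims.max(axis=0)   # the object is at least as large as its largest observation

print(amodal_extent([[3.9, 1.7, 1.4], [4.4, 1.8, 1.5], [4.1, 1.8, 1.5]]))  # [4.4 1.8 1.5]
```
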
  • Publication number: 20220392234
    Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 8, 2022
    Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
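
The abstract above mentions training with a triplet loss and a batch-sampling strategy for picking informative embeddings. The sketch below shows a generic triplet loss over a batch of identity-labeled embeddings with a distance-weighted negative draw; the specific mining rule and all names are assumptions rather than the patented procedure.

```python
# Generic triplet-loss sketch over identity-labeled embeddings; the negative-sampling
# rule here is an assumption, not the "batch sample" technique from the patent.
import numpy as np

def triplet_loss(embeddings, labels, margin=0.2):
    """embeddings: (N, D) L2-normalized vectors; labels: (N,) integer identities."""
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    losses = []
    for i, lab in enumerate(labels):
        pos = (labels == lab) & (np.arange(len(labels)) != i)
        neg = labels != lab
        if not pos.any() or not neg.any():
            continue
        hardest_pos = d[i][pos].max()             # furthest same-identity embedding
        neg_d = d[i][neg]
        weights = np.exp(-neg_d)                  # favor closer (harder) negatives
        sampled_neg = np.random.choice(neg_d, p=weights / weights.sum())
        losses.append(max(0.0, margin + hardest_pos - sampled_neg))
    return float(np.mean(losses)) if losses else 0.0

emb = np.random.randn(8, 16)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])       # two samples per vehicle identity
print(triplet_loss(emb, labels))
```
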
  • Publication number: 20220379911
    Abstract: Methods of determining relevance of objects that a vehicle detected are disclosed. A system will receive a data log of a run of the vehicle. The data log includes perception data captured by vehicle sensors during the run. The system will identify an interaction time, along with a look-ahead lane based on a lane in which the vehicle traveled during the run. The system will define a region of interest (ROI) that includes a lane segment within the look-ahead lane. The system will identify, from the perception data, objects that the vehicle detected within the ROI during the run. For each object, the system will determine a detectability value by measuring an amount of the object that the vehicle detected. The system will create a subset with only objects having at least a threshold detectability value, and it will classify any such object as a priority relevant object.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 1, 2022
    Inventors: G. Peter K. Carr, FNU Ratnesh Kumar
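
As a rough illustration of the thresholding step described above, the sketch below keeps only objects inside the region of interest whose measured detectability clears a threshold. The detectability metric (observed points over expected points) and all field names are assumptions made for the example.

```python
# Hedged sketch: filter detected objects by ROI membership and a detectability threshold.
def priority_relevant_objects(detections, roi_contains, threshold=0.5):
    """detections: list of dicts with 'id', 'position', 'observed_points', 'expected_points'.
    roi_contains: callable mapping a position to True if it lies in the ROI."""
    relevant = []
    for det in detections:
        if not roi_contains(det["position"]):
            continue
        detectability = det["observed_points"] / max(det["expected_points"], 1)
        if detectability >= threshold:
            relevant.append(det["id"])            # classify as a priority relevant object
    return relevant

roi = lambda p: 0.0 <= p[0] <= 50.0               # toy stand-in for a look-ahead lane segment
dets = [
    {"id": "ped_1", "position": (12.0, 1.0), "observed_points": 80, "expected_points": 100},
    {"id": "car_7", "position": (30.0, -2.0), "observed_points": 10, "expected_points": 200},
]
print(priority_relevant_objects(dets, roi))       # ['ped_1']
```
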
  • Publication number: 20220382284
    Abstract: Methods of determining relevance of objects that a vehicle's perception system detects are disclosed. A system on or in communication with the vehicle will identify a time horizon, and a look-ahead lane based on a lane in which the vehicle is currently traveling. The system defines a region of interest (ROI) that includes one or more lane segments within the look-ahead lane. The system identifies a first subset that includes objects located within the ROI, but not objects not located within the ROI. The system identifies a second subset that includes objects located within the ROI that may interact with the vehicle during the time horizon, but excludes actors that may not interact with the vehicle during the time horizon. The system classifies any object that is in the first subset, the second subset or both subsets as a priority relevant object.
    Type: Application
    Filed: May 26, 2021
    Publication date: December 1, 2022
    Inventors: G. Peter K. Carr, FNU Ratnesh Kumar
  • Publication number: 20220343101
    Abstract: Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, loose-fit cuboids overlaid on 3D graphs so as to each encompass LiDAR data points associated with a given object; defining, by the computing device, an amodal cuboid based on the loose-fit cuboids; using, by the computing device, the amodal cuboid to train a machine learning algorithm to detect objects of a given class using sensor data generated by sensors of the autonomous vehicle or another vehicle; and causing, by the computing device, operations of the autonomous vehicle to be controlled using the machine learning algorithm.
    Type: Application
    Filed: April 27, 2021
    Publication date: October 27, 2022
    Inventors: Ming-Fang Chang, FNU Ratnesh Kumar, De Wang, James Hays
  • Patent number: 11455807
    Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: September 27, 2022
    Assignee: NVIDIA Corporation
    Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
  • Patent number: 11443555
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: September 13, 2022
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20220114800
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
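
To make the input-preparation idea above concrete, the sketch below masks an image down to a candidate person/object pair and builds a small feature vector of normalized coordinates and an area ratio; the downstream confidence model is omitted, and every feature choice and name here is an assumption.

```python
# Illustration only: masked image plus coordinate/area-ratio features for a person-object pair.
import numpy as np

def masked_pair_features(image, person_box, object_box):
    """Boxes are (x1, y1, x2, y2) in pixels. Returns an image masked to the two regions
    and a feature vector of normalized box coordinates plus an area ratio."""
    h, w = image.shape[:2]
    masked = np.zeros_like(image)
    for x1, y1, x2, y2 in (person_box, object_box):
        masked[y1:y2, x1:x2] = image[y1:y2, x1:x2]        # keep only the relevant regions
    area = lambda b: max(0, b[2] - b[0]) * max(0, b[3] - b[1])
    features = np.array(
        [c / w if i % 2 == 0 else c / h for i, c in enumerate(person_box)]
        + [c / w if i % 2 == 0 else c / h for i, c in enumerate(object_box)]
        + [area(object_box) / max(area(person_box), 1)]   # object-to-person area ratio
    )
    return masked, features

img = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
masked, feats = masked_pair_features(img, (100, 50, 220, 400), (180, 300, 260, 380))
print(feats.shape)   # (9,); fed with the masked image to an association-confidence model
```
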
  • Patent number: 11205086
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 21, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 11182598
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Patent number: 11030195
    Abstract: Methods and apparatuses are described for a system for identifying and mitigating high-risk database queries through ranked variance analysis. A server identifies database queries executed against databases in a production computing environment within a predetermined time period, each database query associated with execution plans and each execution plan having corresponding plan data elements. For each database query: the server generates execution variance data for the execution plans for a database query based upon the corresponding plan data elements, comprising: determining an execution time variance between the execution plans; and determining a buffer gets variance between the execution plans. The server ranks the database queries according to (i) the execution time variance, and (ii) the buffer gets variance.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: June 8, 2021
    Assignee: FMR LLC
    Inventors: Ratnesh Kumar Singh, Ambica Rajagopal, Akhilesh Raghavendrachar Srinivasachar Kaddi, Harikrishnan Choondani Velayudhan, Stephanie Trethaway
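
The abstract above spells out the core computation: per query, compare statistics across its execution plans and rank by the resulting variances. The sketch below follows that outline with invented field names and a simple lexicographic ranking; it is not the assignee's implementation.

```python
# Hedged sketch: rank queries by plan-to-plan variance in elapsed time and buffer gets.
from statistics import pvariance

def rank_high_risk_queries(query_plans):
    """query_plans: dict of query_id -> list of plan dicts with 'elapsed_ms' and
    'buffer_gets'. Returns query ids ordered riskiest first."""
    scores = {}
    for qid, plans in query_plans.items():
        if len(plans) < 2:
            continue                                        # variance needs at least two plans
        time_var = pvariance(p["elapsed_ms"] for p in plans)
        gets_var = pvariance(p["buffer_gets"] for p in plans)
        scores[qid] = (time_var, gets_var)                  # (i) time variance, (ii) buffer gets variance
    return sorted(scores, key=lambda q: scores[q], reverse=True)

plans = {
    "Q1": [{"elapsed_ms": 20, "buffer_gets": 900}, {"elapsed_ms": 2400, "buffer_gets": 88000}],
    "Q2": [{"elapsed_ms": 15, "buffer_gets": 500}, {"elapsed_ms": 18, "buffer_gets": 520}],
}
print(rank_high_risk_queries(plans))   # ['Q1', 'Q2']: Q1 has far larger plan-to-plan variance
```
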
  • Patent number: 11022610
    Abstract: An integrated dual-modality microfluidic sensor chip and methods for using the same. In one form, the sensor comprises a patterned periodic array of nanoposts coated with a noble metal and graphene oxide (GO) to detect target biomarker molecules in a limited sample volume. The device generates both electrochemical and surface plasmon resonance (SPR) signals from a single sensing area of the metal-GO nanoposts. The metal-GO nanoposts are functionalized with specific receptor molecules, serving as a spatially well-defined nanostructured working electrode for electrochemical sensing, as well as a nanostructured plasmonic crystal for SPR-based sensing via the excitation of surface plasmon polaritons.
    Type: Grant
    Filed: January 21, 2019
    Date of Patent: June 1, 2021
    Assignee: Iowa State University Research Foundation, Inc.
    Inventors: Liang Dong, Azahar Ali, Shawana Tabassum, Qiugu Wang, Ratnesh Kumar
  • Publication number: 20210089921
    Abstract: Transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. A pre-trained model can be obtained that is relevant for that inferencing task. Additional training data, as may correspond to at least one additional class of data, can be used to further train this model. This model can then be pruned and retrained in order to obtain a smaller model that retains high accuracy for the intended inferencing task.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Farzin Aghdasi, Varun Praveen, FNU Ratnesh Kumar, Partha Sriram
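
The publication above describes a fine-tune, prune, and retrain workflow. The PyTorch sketch below (torchvision 0.13+ API) shows that general pattern with a stand-in model, placeholder data loader, and an assumed pruning amount; it is only an illustration of the workflow, not the patented implementation.

```python
# Generic fine-tune -> prune -> retrain pattern; model, loader, and pruning amount are stand-ins.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")       # pre-trained starting point
model.fc = nn.Linear(model.fc.in_features, 11)         # new head sized for the task, incl. the added class (count assumed)

def train(model, loader, epochs=1):
    """Placeholder fine-tune / retrain loop over (image, label) batches."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# 1) fine-tune on data that includes the additional class:  train(model, new_class_loader)
# 2) prune: zero out 30% of the smallest conv weights (structured pruning would shrink the model further)
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")                 # make the pruning permanent
# 3) retrain the pruned model to recover accuracy:          train(model, new_class_loader)
```
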
  • Publication number: 20200302161
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Application
    Filed: June 9, 2020
    Publication date: September 24, 2020
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Patent number: 10725373
    Abstract: A technique, and its applications, for high resolution, rapid, and simple nanopatterning. The general method has been demonstrated in several forms and applications. One is patterning nanophotonic structures at an optical fiber tip for refractive index sensing. Another is patterning nanoresonator structures on a sensor substrate for plasmonic effect related detection of VOCs. In the latter example, a graphene oxide coated plasmonic crystal serves as a gas sensor capable of identifying different gas species using an array of such structures. By coating the surface of multiple identical plasmonic crystals with different thicknesses of Graphene-Oxide (GO) layer, the effective refractive index of the GO layer on each plasmonic crystal is differently modulated when exposed to a specific gas. Identification of various gas species is accomplished using a pattern recognition algorithm.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: July 28, 2020
    Assignee: Iowa State University Research Foundation, Inc.
    Inventors: Ratnesh Kumar, Shawana Tabassum, Liang Dong
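
The last sentence of the abstract above mentions gas identification by pattern recognition over an array of differently coated plasmonic crystals. As a toy, hedged illustration of that step only, the sketch below matches a measured response pattern to the nearest reference pattern; the gas names and numbers are invented.

```python
# Toy nearest-pattern classifier for a sensor array response; reference values are invented.
import numpy as np

reference_patterns = {                   # per-sensor response shifts (arbitrary units)
    "CO2":     np.array([0.8, 0.3, 0.1]),
    "ethanol": np.array([0.2, 0.9, 0.4]),
    "methane": np.array([0.1, 0.2, 0.7]),
}

def identify_gas(measured_response):
    measured = np.asarray(measured_response, dtype=float)
    return min(reference_patterns,
               key=lambda gas: np.linalg.norm(reference_patterns[gas] - measured))

print(identify_gas([0.15, 0.25, 0.65]))  # -> 'methane'
```
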
  • Patent number: 10680702
    Abstract: A tracking system comprising: a radio frequency (RF) signal circulator; an iridium modem coupled to first port of the circulator; an antenna coupled to second port of the circulator, wherein the circulator passes a signal transmitted by the modem to the antenna when the switch is switched to a first mode; a low noise amplifier (LNA) coupled to third port of the circulator, wherein the circulator passes RF signals received from the antenna to the LNA; a diplexer coupled to an output of the LNA; a GNSS receiver coupled to a first output of the diplexer through a GNSS filter; an iridium filter coupled to a second output of the diplexer; wherein the switch couples the iridium filter to the modem when the iridium modem is in a receiving mode, and wherein the switch couples the modem to the first port when the modem is in a transmitting mode.
    Type: Grant
    Filed: October 24, 2017
    Date of Patent: June 9, 2020
    Assignee: Honeywell International Inc.
    Inventors: Narayan Singh Rana, Ratnesh Kumar Gaur, Kancharla HariNarayana
  • Publication number: 20200151489
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 14, 2020
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan