Patents by Inventor Parthasarathy Sriram

Parthasarathy Sriram has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941887
Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area and leverages both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: March 26, 2024
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
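The ROI-based occupancy detection mentioned in the abstract can be illustrated with a small sketch. This is not the patented implementation; the overlap threshold and the decision rule are illustrative assumptions, deciding that a parking spot's region of interest (ROI) is occupied when a detected vehicle's bounding box overlaps it sufficiently.

```python
# Illustrative sketch of ROI-based occupancy detection (assumed decision
# rule, not the patented method): a spot is "occupied" if any detection's
# bounding box overlaps the spot's ROI above a threshold.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def spot_occupancy(spot_rois, detections, threshold=0.3):
    """Map each parking-spot ROI to True if any detection overlaps enough."""
    return {
        spot_id: any(iou(roi, det) >= threshold for det in detections)
        for spot_id, roi in spot_rois.items()
    }
```

A per-spot decision like this is what lets the system report occupancy from camera metadata alone, without streaming the video itself.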
  • Publication number: 20240071064
    Abstract: In various examples, techniques for optimizing object detection models are described herein. Systems and methods are disclosed that process sensor data using a backbone of a machine learning model(s) in order to generate feature maps at different resolutions. The systems and methods then use the machine learning model(s) to generate a vector based at least in part on one or more of the feature maps. For example, if the backbone generates four feature maps, then the machine learning model(s) may generate the vector using two feature maps from the four feature maps. The systems and methods then process the vector using a transformer of the machine learning model(s) in order to generate data representing a class label(s) for an object(s) depicted by an image represented by the sensor data and/or a location(s) of the object(s) within the image.
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Inventors: Dahjung CHUNG, Farzin AGHDASI, Parthasarathy SRIRAM, Bingxin HOU
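The map-selection step the abstract describes can be sketched as follows. The shapes and the choice of which feature maps to keep are illustrative assumptions; a real model would do this with framework tensors before the transformer stage.

```python
# Hypothetical sketch: a backbone yields feature maps at several
# resolutions; only a selected subset is flattened and concatenated into
# the vector fed to the transformer.

def flatten_feature_maps(feature_maps, keep_indices):
    """feature_maps: list of (channels x height x width) nested lists.
    Returns a flat vector built from the selected maps only."""
    vector = []
    for i in keep_indices:
        for channel in feature_maps[i]:
            for row in channel:
                vector.extend(row)
    return vector
```

Using only some of the backbone's maps (e.g., two of four) keeps the transformer's input sequence short, which is the optimization the abstract points at.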
  • Publication number: 20230351795
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object-to-person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object-to-person associations.
    Type: Application
    Filed: July 5, 2023
    Publication date: November 2, 2023
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
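The kind of inputs this abstract describes can be sketched in a few lines. All details here (feature names, mask representation) are assumptions for illustration, not the claimed implementation: a mask that keeps only the person and object regions of an image, plus area ratios between those regions.

```python
# Illustrative sketch (assumed details): build mask and area-ratio inputs
# for a candidate object-to-person association.

def box_area(box):
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def association_features(person_box, object_box, image_w, image_h):
    """Return area-ratio features for a candidate object-to-person pair."""
    image_area = image_w * image_h
    return {
        "person_to_image": box_area(person_box) / image_area,
        "object_to_image": box_area(object_box) / image_area,
        "object_to_person": box_area(object_box) / box_area(person_box),
    }

def apply_mask(pixels, boxes):
    """Zero out every pixel outside the given boxes (grayscale grid)."""
    masked = [[0] * len(row) for row in pixels]
    for x1, y1, x2, y2 in boxes:
        for y in range(y1, y2):
            for x in range(x1, x2):
                masked[y][x] = pixels[y][x]
    return masked
```

Masking irrelevant pixels and supplying the ratios as explicit features are two complementary ways of focusing the model on the regions that matter for the association decision.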
  • Publication number: 20230342666
    Abstract: Devices, systems, and techniques for experiment-based training of machine learning models (MLMs) using early stopping. The techniques include starting training tracks (TTs) that train candidate MLMs using the same training data and respective sets of training settings, performing a first evaluation of a first candidate MLM prior to completion of a corresponding first TT, and responsive to the first evaluation, placing the first TT on an inactive status, inactive status indicating that further training of the first candidate MLM is to be ceased. The techniques further include continuing at least a second TT using the training data, and responsive to conclusion of the TTs, selecting, as one or more final MLMs, the first candidate MLM or a second candidate MLM.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 26, 2023
    Inventors: Steve Masson, Farzin Aghdasi, Parthasarathy Sriram, Arvind Sai Kumar, Varun Praveen
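The early-stopping scheme above can be sketched minimally. The function and parameter names are invented for illustration: several training tracks run with different settings, tracks whose candidate model evaluates poorly partway through are placed on inactive status, and the best surviving candidate is selected at the end.

```python
# Minimal sketch (assumed names) of experiment-based training with early
# stopping: all tracks share the same training data; a mid-run evaluation
# deactivates weak tracks, and the best remaining candidate is returned.

def run_training_tracks(tracks, evaluate, train_step, total_steps,
                        check_at, keep_top):
    """tracks: dict of track_id -> mutable model state."""
    active = dict(tracks)
    for step in range(total_steps):
        for tid in active:
            train_step(active[tid])
        if step == check_at:
            # Early evaluation: keep only the best `keep_top` tracks;
            # the rest become inactive and are never trained further.
            ranked = sorted(active, key=lambda t: evaluate(active[t]),
                            reverse=True)
            active = {tid: active[tid] for tid in ranked[:keep_top]}
    best = max(active, key=lambda t: evaluate(active[t]))
    return best, active[best]
```

Ceasing training on weak candidates early is what makes a multi-track hyperparameter experiment affordable: compute is reallocated to the tracks still worth finishing.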
  • Publication number: 20230342600
    Abstract: Devices, systems, and techniques for provisioning of cloud-based machine learning training, optimization, and deployment services. The techniques include providing, to a remote client device, a list of available machine learning models (MLMs), receiving from the remote client device an indication of selected MLM(s) from the provided list, identifying training settings for selected MLM(s), identifying a training data for the selected MLM(s), configuring, using the identified training settings, execution of one or more processes to train the selected MLM(s) using the identified training data, and providing to the remote client device a representation of completed training of at least one MLM.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 26, 2023
    Inventors: Steve Masson, Farzin Aghdasi, Parthasarathy Sriram, Arvind Sai Kumar, Varun Praveen
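The provisioning flow above can be sketched as a small service. All class and method names here are invented for illustration; a real cloud service would launch training processes and stream progress back to the client.

```python
# Hedged sketch (invented names) of the provisioning flow: list available
# models, accept a selection, resolve settings and data, report completion.

class TrainingService:
    def __init__(self, catalog, default_settings, datasets):
        self.catalog = catalog              # model name -> description
        self.default_settings = default_settings
        self.datasets = datasets            # model name -> dataset id
        self.completed = {}

    def list_models(self):
        """Provide the list of available MLMs to a remote client."""
        return sorted(self.catalog)

    def start_training(self, selected, overrides=None):
        """Configure and 'run' training for each selected model."""
        for name in selected:
            settings = dict(self.default_settings)
            settings.update(overrides or {})
            # A real service would launch training processes here.
            self.completed[name] = {"settings": settings,
                                    "dataset": self.datasets[name]}
        return {name: "trained" for name in selected}
```

The point of the indirection is that the client only ever exchanges model names, settings, and status; the training itself is configured and executed server-side.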
  • Patent number: 11741736
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object-to-person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object-to-person associations.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: August 29, 2023
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 11683453
    Abstract: In various examples, cloud computing systems may store frames of video streams and metadata generated from the frames in separate data stores, with each type of data being indexed using shared timestamps. Thus, the frames of a video stream may be stored and/or processed and corresponding metadata of the frames may be stored and/or generated across any number of devices of the cloud computing system (e.g., edge and/or core devices) while being linked by the timestamps. A client device may provide a request or query to dynamically annotate the video stream using a particular subset of the metadata. In processing the request or query, the timestamps may be used to retrieve video data representing frames of the video stream and metadata extracted from those frames across the data stores. The retrieved metadata and video data may be used to annotate the frames for display on the client device.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: June 20, 2023
    Assignee: NVIDIA Corporation
    Inventors: Milind Naphade, Parthasarathy Sriram, Farzin Aghdasi, Shuo Wang
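The timestamp join described in this abstract can be sketched simply. The storage layout below is an assumption for illustration: frames and per-frame metadata live in separate stores keyed by a shared timestamp, so an annotation query can join them without either store knowing about the other.

```python
# Illustrative sketch (assumed storage layout): join frames with requested
# metadata fields by their shared timestamps to answer an annotation query.

def annotate_stream(frame_store, metadata_store, start_ts, end_ts, fields):
    """Return annotated frames for the [start_ts, end_ts] window."""
    annotated = []
    for ts in sorted(frame_store):
        if start_ts <= ts <= end_ts:
            meta = metadata_store.get(ts, {})
            annotated.append({
                "timestamp": ts,
                "frame": frame_store[ts],
                "annotations": {f: meta[f] for f in fields if f in meta},
            })
    return annotated
```

Because the only coupling is the timestamp, frames can be stored on edge devices and metadata generated on core devices, and the join still works at query time.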
  • Publication number: 20230153612
    Abstract: When visiting a child node in a graph corresponding to a deep learning model to analyze the child node for pruning in the deep learning model, data identifying pruning information corresponding to one or more parent nodes may be determined and used to access the pruning information. For example, a list of parent nodes of the parent node may be used to access the pruning information for the visit to the child node. The graph may be explored using recursion to iteratively visit nodes to determine portions of pruning information for pruning a node where a portion of the pruning information determined for prior visits to the nodes may be reused. A layer of the deep learning model including multiple dependent convolutions may be pruned by treating each convolution as a separate node and/or layer.
    Type: Application
    Filed: November 17, 2022
    Publication date: May 18, 2023
    Inventors: Yu Wang, Farzin Aghdasi, Parthasarathy Sriram
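The recursive traversal with reuse that this abstract describes can be sketched minimally. The data structures are assumptions for illustration: visiting a child node reuses pruning information already computed for its parents instead of recomputing it on every visit.

```python
# Minimal sketch (assumed data structures): recursively collect pruning
# info for a node, memoizing per-node results so info computed for a
# parent on an earlier visit is reused, not recomputed.

def gather_pruning_info(node, parents, compute_info, cache):
    """parents: node -> list of parent nodes; cache: node -> info list."""
    if node in cache:
        return cache[node]
    inherited = []
    for parent in parents.get(node, []):
        inherited.extend(gather_pruning_info(parent, parents,
                                             compute_info, cache))
    info = inherited + [compute_info(node)]
    cache[node] = info
    return info
```

In a model graph with shared ancestors (e.g., a layer feeding several branches), the cache is what keeps the traversal linear in the number of nodes rather than exponential in the number of paths.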
  • Publication number: 20230078218
    Abstract: Apparatuses, systems, and techniques for training an object detection model using transfer learning.
    Type: Application
    Filed: September 16, 2021
    Publication date: March 16, 2023
    Inventors: Yu Wang, Farzin Aghdasi, Parthasarathy Sriram, Subhashree Radhakrishnan
  • Publication number: 20230016568
Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area and leverages both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 19, 2023
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20220392234
Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
    Type: Application
    Filed: August 18, 2022
    Publication date: December 8, 2022
    Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
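The triplet loss with batch sampling described above can be sketched on plain Python lists. This is a hedged illustration using the common batch-hard variant, not necessarily the patent's "batch sample" technique; a real implementation would operate on learned embeddings in a deep learning framework.

```python
# Hedged sketch: triplet loss with batch-hard sampling over a small batch
# of embeddings. For each anchor, the hardest (farthest) positive and
# hardest (closest) negative in the batch drive the loss.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Average max(0, d_hardest_pos - d_hardest_neg + margin) over anchors."""
    losses = []
    for i, anchor in enumerate(embeddings):
        pos = [dist(anchor, e) for j, e in enumerate(embeddings)
               if j != i and labels[j] == labels[i]]
        neg = [dist(anchor, e) for j, e in enumerate(embeddings)
               if labels[j] != labels[i]]
        if pos and neg:
            losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses)
```

Mining only the hardest pairs per batch is what makes the sampling "meaningful for updating parameters": easy triplets contribute zero loss and thus no gradient.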
  • Patent number: 11455807
Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: September 27, 2022
    Assignee: NVIDIA Corporation
    Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
  • Patent number: 11443555
Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area and leverages both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: September 13, 2022
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20220237336
    Abstract: Systems and methods disclosed relate to generating training data. In one embodiment, the disclosure relates to systems and methods for generating training data to train a neural network to detect and classify objects. A simulator obtains 3D models of objects, and simulates 3D environments comprising the objects using physics-based simulations. The simulations may include applying real-world physical conditions, such as gravity, friction, and the like on the objects. The system may generate images of the simulations, and use the images to train a neural network to detect and classify the objects from images.
    Type: Application
    Filed: January 22, 2021
    Publication date: July 28, 2022
    Inventors: Zeyu Zhao, Shangru Li, Parthasarathy Sriram, Farzin Aghdasi
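The data-generation pipeline above can be sketched with a toy "simulation". The names and the physics are invented for illustration: each object is placed at a random position and settled onto the ground plane by gravity, and the resulting class label and bounding box are recorded as a training annotation.

```python
# Illustrative sketch (invented names, toy physics): generate labeled
# samples by placing objects under a simple gravity constraint and
# recording the labels a detector would be trained on.

import random

def simulate_scene(object_models, num_objects, seed=0):
    """object_models: list of (class_name, (width, height)) templates."""
    rng = random.Random(seed)
    scene = []
    for _ in range(num_objects):
        name, (w, h) = rng.choice(object_models)
        x = rng.uniform(0.0, 100.0 - w)
        # Gravity: the object rests on the ground, so its base is at y=0.
        scene.append({"class": name, "bbox": (x, 0.0, x + w, h)})
    return scene
```

Because the simulator knows exactly where it placed each object, every rendered image comes with free, perfectly accurate detection and classification labels.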
  • Publication number: 20220165304
Abstract: An Intelligent Video Analytics system may be implemented using a distributed computing architecture with edge and remote devices, where the edge devices analyze the video stream and transmit detection data corresponding to time segments to the remote device. The detection data may identify an object (e.g., vehicle, pedestrian, etc.) in the video stream. The remote device analyzes the detection data received from one or more edge devices and generates extraction triggers that are transmitted to the one or more edge devices. When an edge device receives an extraction trigger, the edge device extracts a clip from the video stream and stores the clip to persistent storage. The remote device may then retrieve the clip. The edge devices may perform simple identification operations while the remote device implements complex algorithms to detect events, benefitting from a larger context than is available to the individual edge devices.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Inventors: Milind Ramesh Naphade, Parthasarathy Sriram, Shuo Wang
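The remote-side trigger logic described above can be sketched as follows. The message shapes and the event rule (same object persisting across consecutive time segments) are assumptions for illustration, not the disclosed algorithm.

```python
# Hedged sketch (assumed message shapes): the remote device aggregates
# lightweight detections from edge devices and emits extraction triggers
# naming the clips the edges should save to persistent storage.

def remote_analyze(detections, min_consecutive=3):
    """detections: list of (segment_ts, object_id). Trigger extraction
    when an object appears in `min_consecutive` consecutive segments."""
    runs = {}       # object_id -> (run_start_ts, last_ts, run_length)
    triggers = []
    for ts, obj in sorted(detections):
        start, last, length = runs.get(obj, (ts, ts - 1, 0))
        if ts == last + 1:
            length += 1
        else:
            start, length = ts, 1
        runs[obj] = (start, ts, length)
        if length == min_consecutive:
            triggers.append({"object": obj, "clip": (start, ts)})
    return triggers
```

The split mirrors the abstract: the edges only report cheap per-segment detections, while the cross-segment reasoning (and hence the decision of what video is worth keeping) happens remotely with the larger context.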
  • Publication number: 20220114800
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object-to-person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object-to-person associations.
    Type: Application
    Filed: December 20, 2021
    Publication date: April 14, 2022
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Publication number: 20220053171
    Abstract: In various examples, cloud computing systems may store frames of video streams and metadata generated from the frames in separate data stores, with each type of data being indexed using shared timestamps. Thus, the frames of a video stream may be stored and/or processed and corresponding metadata of the frames may be stored and/or generated across any number of devices of the cloud computing system (e.g., edge and/or core devices) while being linked by the timestamps. A client device may provide a request or query to dynamically annotate the video stream using a particular subset of the metadata. In processing the request or query, the timestamps may be used to retrieve video data representing frames of the video stream and metadata extracted from those frames across the data stores. The retrieved metadata and video data may be used to annotate the frames for display on the client device.
    Type: Application
    Filed: August 12, 2020
    Publication date: February 17, 2022
    Inventors: Milind Naphade, Parthasarathy Sriram, Farzin Aghdasi, Shuo Wang
  • Publication number: 20220044114
Abstract: Apparatuses, systems, and techniques to use low-precision quantization to train a neural network. In at least one embodiment, one or more weights of a trained model are represented by low-bit integer numbers instead of using full floating-point precision. Changing precision of the one or more weights is performed by first quantizing all weights and activations of a neural network, except for layers that require finer granularity in representation than 8-bit quantization can provide, to generate a first trained model. Subsequently, precision of the one or more weights of the first trained model is changed again to generate a second trained model. For the second trained model, the precision of one or more weights of at least one additional layer is changed, in addition to the layers whose precision was already changed while training the neural network to generate the first trained model.
    Type: Application
    Filed: June 9, 2021
    Publication date: February 10, 2022
    Inventors: Parthasarathy Sriram, Varun Praveen, Farzin Aghdasi
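The first quantization pass described above can be sketched simply. The details below (symmetric per-layer scaling, the skip-list mechanism) are assumptions for illustration: weights are mapped to 8-bit integers with a per-layer scale, except for layers flagged as needing finer granularity, which keep full precision in this pass.

```python
# Illustrative sketch (assumed details): symmetric 8-bit quantization of
# per-layer weights, skipping layers that need finer-grained precision.

def quantize_layer(weights, num_bits=8):
    """Symmetric linear quantization of a list of float weights."""
    max_abs = max(abs(w) for w in weights) or 1.0
    qmax = 2 ** (num_bits - 1) - 1          # 127 for 8 bits
    q = [round(w * qmax / max_abs) for w in weights]
    return q, max_abs / qmax                # ints plus dequantization scale

def quantize_model(model, skip_layers):
    """model: layer name -> weights. Layers in skip_layers stay float."""
    out = {}
    for name, weights in model.items():
        if name in skip_layers:
            out[name] = ("float32", weights)
        else:
            q, scale = quantize_layer(weights)
            out[name] = ("int8", q, scale)
    return out
```

A second pass, as the abstract describes, would then revisit the skipped layers and quantize at least one additional layer on top of the first model's result.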
  • Patent number: 11205086
Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object-to-person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object-to-person associations.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 21, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Patent number: 11182598
Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area and leverages both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew