Patents by Inventor Milind Naphade

Milind Naphade has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240233387
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Application
    Filed: March 21, 2024
    Publication date: July 11, 2024
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
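The ROI-based occupancy detection mentioned in the abstract above can be sketched as a simple reference-frame comparison. Everything here (the differencing approach, the threshold, and all names) is an illustrative assumption, not the patented method:

```python
import numpy as np

def roi_occupied(frame, empty_ref, roi, threshold=25.0):
    """Flag a parking spot as occupied when its ROI differs enough
    from a reference frame of the empty spot."""
    x0, y0, x1, y1 = roi
    diff = np.abs(frame[y0:y1, x0:x1].astype(float) -
                  empty_ref[y0:y1, x0:x1].astype(float))
    return bool(diff.mean() > threshold)

# Synthetic 100x100 grayscale frames
empty = np.zeros((100, 100), dtype=np.uint8)
occupied = empty.copy()
occupied[20:40, 20:40] = 200            # a bright "car" inside the spot

spot = (15, 15, 45, 45)                 # ROI as (x0, y0, x1, y1)
```

A production system would likely replace the raw pixel difference with a detector's output per region, but the thresholded per-ROI decision has the same shape.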
  • Publication number: 20240169652
    Abstract: In various embodiments, a scene reconstruction model generates three-dimensional (3D) representations of scenes. The scene reconstruction model computes a first 3D feature grid based on a set of red, blue, green, and depth (RGBD) images associated with a first scene. The scene reconstruction model maps the first 3D feature grid to a first 3D representation of the first scene. The scene reconstruction model computes a first reconstruction loss based on the first 3D representation and the set of RGBD images. The scene reconstruction model modifies at least one of the first 3D feature grid, a first pre-trained geometry decoder, or a first pre-trained texture decoder based on the first reconstruction loss to generate a second 3D representation of the first scene.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 23, 2024
    Inventors: Yang Fu, Sifei Liu, Jan Kautz, Xueting Li, Shalini De Mello, Amey Kulkarni, Milind Naphade
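The per-scene optimization loop this abstract describes (decode a 3D feature grid, compute a reconstruction loss against the observations, update the grid) can be sketched in miniature. The linear decoder and hand-derived gradient below are toy stand-ins for the pre-trained geometry and texture decoders:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.normal(size=(4, 4, 4))        # learnable "3D feature grid"
target = np.ones((4, 4, 4))              # stand-in for RGBD observations

def decode(g):
    """Placeholder decoder mapping grid features to a rendering."""
    return 2.0 * g

def recon_loss(g):
    return float(np.mean((decode(g) - target) ** 2))

lr = 0.05
for _ in range(200):                      # per-scene optimization
    grad = 4.0 * (decode(grid) - target)  # elementwise d/dg of (2g - t)^2
    grid -= lr * grad
```

After the loop, the grid reproduces the observations under the decoder, which is the convergence criterion the real system's reconstruction loss drives toward.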
  • Publication number: 20240161383
    Abstract: In various embodiments, a scene reconstruction model generates three-dimensional (3D) representations of scenes. The scene reconstruction model maps a first red, blue, green, and depth (RGBD) image associated with both a first scene and a first viewpoint to a first surface representation of at least a first portion of the first scene. The scene reconstruction model maps a second RGBD image associated with both the first scene and a second viewpoint to a second surface representation of at least a second portion of the first scene. The scene reconstruction model aggregates at least the first surface representation and the second surface representation in a 3D space to generate a first fused surface representation of the first scene. The scene reconstruction model maps the first fused surface representation of the first scene to a 3D representation of the first scene.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 16, 2024
    Inventors: Yang Fu, Sifei Liu, Jan Kautz, Xueting Li, Shalini De Mello, Amey Kulkarni, Milind Naphade
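The aggregation of per-view surface representations into one fused representation, as described above, can be illustrated with pinhole back-projection into a shared world frame. The intrinsics, poses, and naive point-set concatenation are illustrative assumptions:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, pose):
    """Lift a depth map to world-space 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    rot, trans = pose
    return pts @ rot.T + trans           # camera frame -> world frame

# Two toy 2x2 depth maps of one flat surface seen from two viewpoints
d1 = np.full((2, 2), 1.0)
d2 = np.full((2, 2), 2.0)
at_origin = (np.eye(3), np.zeros(3))
one_back = (np.eye(3), np.array([0.0, 0.0, -1.0]))  # 1 unit behind

p1 = backproject(d1, 1.0, 1.0, 0.5, 0.5, at_origin)
p2 = backproject(d2, 1.0, 1.0, 0.5, 0.5, one_back)
fused = np.concatenate([p1, p2])         # naive aggregation in 3D space
```

Because both viewpoints observe the same plane, all fused points land at the same depth in world space; the actual model would fuse learned surface features rather than raw points.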
  • Publication number: 20240161404
    Abstract: In various embodiments, a training application trains a machine learning model to generate three-dimensional (3D) representations of two-dimensional images. The training application maps a depth image and a viewpoint to signed distance function (SDF) values associated with 3D query points. The training application maps a red, blue, and green (RGB) image to radiance values associated with the 3D query points. The training application computes a red, blue, green, and depth (RGBD) reconstruction loss based on at least the SDF values and the radiance values. The training application modifies at least one of a pre-trained geometry encoder, a pre-trained geometry decoder, an untrained texture encoder, or an untrained texture decoder based on the RGBD reconstruction loss to generate a trained machine learning model that generates 3D representations of RGBD images.
    Type: Application
    Filed: October 30, 2023
    Publication date: May 16, 2024
    Inventors: Yang Fu, Sifei Liu, Jan Kautz, Xueting Li, Shalini De Mello, Amey Kulkarni, Milind Naphade
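The combined RGBD reconstruction loss described above (an SDF term for geometry plus a radiance term for texture) can be sketched with an analytic sphere SDF standing in for the learned geometry branch; all values are synthetic:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

rng = np.random.default_rng(1)
query = rng.uniform(-1, 1, size=(1000, 3))           # 3D query points
sdf = sphere_sdf(query, np.zeros(3), 0.5)            # "ground-truth" SDF

pred_sdf = sdf + rng.normal(scale=0.01, size=sdf.shape)  # a model's guess
true_rad = rng.uniform(size=(1000, 3))                   # radiance targets
pred_rad = true_rad + rng.normal(scale=0.01, size=(1000, 3))

# RGBD-style reconstruction loss: geometry (SDF) term + texture term
loss = float(np.mean((pred_sdf - sdf) ** 2) +
             np.mean((pred_rad - true_rad) ** 2))
```

In the patented system this scalar would be backpropagated through the encoder/decoder stack; here it only demonstrates how the two terms combine.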
  • Patent number: 11941887
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Grant
    Filed: September 13, 2022
    Date of Patent: March 26, 2024
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Patent number: 11683453
    Abstract: In various examples, cloud computing systems may store frames of video streams and metadata generated from the frames in separate data stores, with each type of data being indexed using shared timestamps. Thus, the frames of a video stream may be stored and/or processed and corresponding metadata of the frames may be stored and/or generated across any number of devices of the cloud computing system (e.g., edge and/or core devices) while being linked by the timestamps. A client device may provide a request or query to dynamically annotate the video stream using a particular subset of the metadata. In processing the request or query, the timestamps may be used to retrieve video data representing frames of the video stream and metadata extracted from those frames across the data stores. The retrieved metadata and video data may be used to annotate the frames for display on the client device.
    Type: Grant
    Filed: August 12, 2020
    Date of Patent: June 20, 2023
    Assignee: NVIDIA Corporation
    Inventors: Milind Naphade, Parthasarathy Sriram, Farzin Aghdasi, Shuo Wang
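The core idea of the patent above (frames and metadata kept in separate stores, joined only by shared timestamps at query time) can be sketched with plain dictionaries. The store layout and the `annotate` query are illustrative assumptions:

```python
# Frames and per-frame metadata live in separate stores (e.g. on
# different edge/core devices), linked only by shared timestamps.
frame_store = {100: "frame@100", 133: "frame@133", 166: "frame@166"}
meta_store = {100: {"cars": 2}, 133: {"cars": 3}, 166: {"cars": 3}}

def annotate(ts_range, key):
    """Join the two stores over a timestamp range so frames can be
    annotated with the requested subset of metadata."""
    lo, hi = ts_range
    return [
        (ts, frame_store[ts], meta_store[ts].get(key))
        for ts in sorted(frame_store)
        if lo <= ts <= hi and ts in meta_store
    ]

result = annotate((100, 140), "cars")
```

The separation lets each store be placed, scaled, and processed independently; the timestamp is the only coupling between them.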
  • Publication number: 20230036879
    Abstract: In various examples, a set of object trajectories may be determined based at least in part on sensor data representative of a field of view of a sensor. The set of object trajectories may be applied to a long short-term memory (LSTM) network to train the LSTM network. An expected object trajectory for an object in the field of view of the sensor may be computed by the LSTM network based at least in part on an observed object trajectory. By comparing the observed object trajectory to the expected object trajectory, a determination may be made that the observed object trajectory is indicative of an anomaly.
    Type: Application
    Filed: October 12, 2022
    Publication date: February 2, 2023
    Inventors: Milind Naphade, Shuo Wang
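The compare-observed-to-expected decision in the abstract above can be sketched as follows. Note the LSTM predictor is stubbed out with constant-velocity extrapolation, and the deviation threshold is an arbitrary illustrative choice:

```python
import numpy as np

def expected_trajectory(history, steps):
    """Stub predictor: constant-velocity extrapolation (the patent
    computes this expectation with a trained LSTM network)."""
    v = history[-1] - history[-2]
    return np.array([history[-1] + (i + 1) * v for i in range(steps)])

def is_anomalous(observed, predicted, threshold=1.0):
    """Flag a trajectory whose mean deviation from the expectation
    exceeds the threshold."""
    err = np.mean(np.linalg.norm(observed - predicted, axis=-1))
    return bool(err > threshold)

history = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pred = expected_trajectory(history, 3)
normal = np.array([[3.1, 0.0], [4.0, 0.1], [5.0, 0.0]])
swerve = np.array([[3.0, 2.0], [3.0, 4.0], [2.0, 6.0]])
```

A trajectory that continues along the expected line passes; one that swerves far from it is flagged as anomalous.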
  • Publication number: 20230016568
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 19, 2023
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20220391639
    Abstract: Apparatuses, systems, and techniques to train neural networks to perform classification. In at least one embodiment, one or more neural networks are trained to perform classification based, at least in part, on grouping one or more sets of neural network training data according to behaviors of one or more objects within one or more images represented by the training data.
    Type: Application
    Filed: June 2, 2021
    Publication date: December 8, 2022
    Inventors: Prakash Gurumurthy, Milind Naphade, Yan Breek, Shuo Wang
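The grouping step the abstract above describes (partitioning training samples by the behavior of the depicted object before training) reduces to a keyed partition; the sample schema and behavior labels are invented for illustration:

```python
from collections import defaultdict

samples = [
    {"id": 1, "behavior": "parked"},
    {"id": 2, "behavior": "moving"},
    {"id": 3, "behavior": "parked"},
    {"id": 4, "behavior": "loitering"},
]

def group_by_behavior(data):
    """Partition training samples by observed object behavior; each
    group could then drive its own training pass."""
    groups = defaultdict(list)
    for s in data:
        groups[s["behavior"]].append(s["id"])
    return dict(groups)

groups = group_by_behavior(samples)
```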
  • Patent number: 11501572
    Abstract: In various examples, a set of object trajectories may be determined based at least in part on sensor data representative of a field of view of a sensor. The set of object trajectories may be applied to a long short-term memory (LSTM) network to train the LSTM network. An expected object trajectory for an object in the field of view of the sensor may be computed by the LSTM network based at least in part on an observed object trajectory. By comparing the observed object trajectory to the expected object trajectory, a determination may be made that the observed object trajectory is indicative of an anomaly.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: November 15, 2022
    Assignee: NVIDIA Corporation
    Inventors: Milind Naphade, Shuo Wang
  • Patent number: 11443555
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: September 13, 2022
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20220053171
    Abstract: In various examples, cloud computing systems may store frames of video streams and metadata generated from the frames in separate data stores, with each type of data being indexed using shared timestamps. Thus, the frames of a video stream may be stored and/or processed and corresponding metadata of the frames may be stored and/or generated across any number of devices of the cloud computing system (e.g., edge and/or core devices) while being linked by the timestamps. A client device may provide a request or query to dynamically annotate the video stream using a particular subset of the metadata. In processing the request or query, the timestamps may be used to retrieve video data representing frames of the video stream and metadata extracted from those frames across the data stores. The retrieved metadata and video data may be used to annotate the frames for display on the client device.
    Type: Application
    Filed: August 12, 2020
    Publication date: February 17, 2022
    Inventors: Milind Naphade, Parthasarathy Sriram, Farzin Aghdasi, Shuo Wang
  • Patent number: 11182598
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20210348938
    Abstract: Calibration of various sensors may be difficult without specialized software to process intrinsic and extrinsic information about the sensors. Certain types of input files, such as image files, may also lack information, like depth, that is needed to effectively translate regions of interest between images taken from different perspectives. Landmarks can be used to establish points for associating regions of interest between images taken from different perspectives, and can be provided as an overlay to verify sensor calibration.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 11, 2021
    Inventors: Evan McLaughlin, Farzin Aghdasi, Milind Naphade, Arihant Jain, Sujit Biswas, Parthasarathy Sriram
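Using landmarks to carry a region of interest from one view to another, as the abstract above describes, can be sketched with a least-squares affine fit over matched landmark points. The pure-translation example and all names are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform from matched landmarks."""
    a = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return coeffs                          # 3x2 coefficient matrix

def apply_affine(coeffs, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coeffs

# Landmarks matched across views (view B = view A shifted by (10, 5))
view_a = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
view_b = view_a + np.array([10.0, 5.0])

t = fit_affine(view_a, view_b)
roi_a = np.array([[20.0, 20.0], [40.0, 40.0]])   # ROI corners in view A
roi_b = apply_affine(t, roi_a)                   # overlay location in view B
```

Overlaying `roi_b` on view B and checking that it lands on the same physical region is the visual calibration check the abstract alludes to.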
  • Patent number: 11170319
    Abstract: In one embodiment, a computing device scans a plurality of available data sources associated with a profiled identity for an individual, and categorizes instances of the data sources according to recognized terms within the data sources. Once determining whether the profiled identity contributed positively to each categorized instance, categorized instances that have a positive contribution by the profiled identity may be clustered into clusters. The computing device may then rank the clusters based on size of the clusters and frequency of recognized terms within the clusters, and can then infer an expertise of the profiled identity based on one or more best-ranked clusters. The inferred expertise of the profiled identity may then be stored.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: November 9, 2021
    Assignee: Cisco Technology, Inc.
    Inventors: Sujit Biswas, Milind Naphade, Manjula Shivanna, Gyana Ranjan Dash, Srinivas Ruddaraju, Carlos M. Pignataro
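The cluster-and-rank step of the invention above can be sketched with term counting; the ranking key (cluster size, then term frequency) follows the abstract, but the term vocabulary and documents are invented:

```python
from collections import Counter, defaultdict

KNOWN_TERMS = {"networking", "security", "storage"}   # recognized terms

docs = [                                  # positively-contributed items
    "networking routing networking",
    "security firewall",
    "networking switch",
    "storage raid storage storage",
]

# Cluster each item under its dominant recognized term
clusters = defaultdict(list)
for doc in docs:
    counts = Counter(w for w in doc.split() if w in KNOWN_TERMS)
    if counts:
        clusters[counts.most_common(1)[0][0]].append(doc)

def score(item):
    """Rank clusters by size, then by total term frequency."""
    term, members = item
    return (len(members), sum(m.split().count(term) for m in members))

expertise = max(clusters.items(), key=score)[0]   # inferred expertise
```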
  • Publication number: 20200410322
    Abstract: Systems and methods are described herein that use at least one neural network to infer the content of individual frames in a sequence of images, and to further infer changes to that content over time, in order to determine whether one or more anomalous events are present in the sequence of images.
    Type: Application
    Filed: June 26, 2019
    Publication date: December 31, 2020
    Inventors: Milind Naphade, Tingting Huang, Shuo Wang, Xiaodong Yang, Ming-Yu Liu
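The two-stage idea above (per-frame content inference, then temporal change analysis) can be sketched with the neural networks stubbed out: per-frame content is reduced to an object count, and an abrupt count change marks a candidate anomalous event. The counts and jump threshold are invented:

```python
frame_counts = [3, 3, 4, 3, 9, 9, 3]     # stub per-frame inference output

def anomalous_frames(counts, jump=3):
    """Flag frames whose content changes abruptly from the previous one."""
    return [
        i for i in range(1, len(counts))
        if abs(counts[i] - counts[i - 1]) >= jump
    ]

events = anomalous_frames(frame_counts)
```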
  • Publication number: 20200302161
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Application
    Filed: June 9, 2020
    Publication date: September 24, 2020
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20200175392
    Abstract: Apparatuses, methods, and computer program products are provided for inferring one or more paths of one or more objects, where each of the one or more objects corresponds to a different machine learning model. The inferred paths of the one or more objects are further used to infer a path of a target object, which is accomplished using a machine learning model different from the per-object models.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Inventors: Shuai Tang, Milind Naphade, Murali M. Gopalakrishna
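The two-tier structure in the abstract above (one model per surrounding object, plus a different model that fuses their inferred paths to predict the target's path) can be sketched as follows; the linear per-object models and mean-motion fusion are toy stand-ins for learned models:

```python
import numpy as np

def per_object_model(track):
    """Per-object path model (stub): linear extrapolation."""
    return track[-1] + (track[-1] - track[-2])

def fusion_model(neighbor_tracks, target_track):
    """A different model that infers the target's next position from
    the paths inferred for the other objects (mean neighbor motion)."""
    steps = [per_object_model(t) - t[-1] for t in neighbor_tracks]
    return target_track[-1] + np.mean(steps, axis=0)

neighbors = [
    np.array([[0.0, 0.0], [1.0, 1.0]]),
    np.array([[0.0, 2.0], [1.0, 3.0]]),
]
target = np.array([[5.0, 5.0]])
predicted = fusion_model(neighbors, target)
```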
  • Publication number: 20190294889
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection, which leverages image data from image sensors, such as cameras, to determine whether particular parking spots are occupied. These approaches may also include multi-sensor object tracking that uses multiple sensors distributed across an area, leveraging both image data and spatial information about the area to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, rather than presenting video of an area captured by cameras, 3D renderings may be generated and played back from metadata extracted from sensors around the area.
    Type: Application
    Filed: March 26, 2019
    Publication date: September 26, 2019
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Publication number: 20190294869
    Abstract: In various examples, a set of object trajectories may be determined based at least in part on sensor data representative of a field of view of a sensor. The set of object trajectories may be applied to a long short-term memory (LSTM) network to train the LSTM network. An expected object trajectory for an object in the field of view of the sensor may be computed by the LSTM network based at least in part on an observed object trajectory. By comparing the observed object trajectory to the expected object trajectory, a determination may be made that the observed object trajectory is indicative of an anomaly.
    Type: Application
    Filed: March 25, 2019
    Publication date: September 26, 2019
    Inventors: Milind Naphade, Shuo Wang