Patents by Inventor Farzin Aghdasi

Farzin Aghdasi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11205086
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: December 21, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
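The masking-and-features idea in the abstract above can be illustrated with a short sketch. The function name, the (x0, y0, x1, y1) box format, and the feature layout below are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def build_masked_input(image, person_box, object_box):
    """Zero pixels outside the person and object regions so a downstream
    model focuses on the candidate association. Boxes are assumed to be
    (x0, y0, x1, y1) pixel coordinates."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for x0, y0, x1, y1 in (person_box, object_box):
        mask[y0:y1, x0:x1] = True
    masked = np.where(mask[..., None], image, 0)

    # Auxiliary features: the box coordinates plus area ratios between
    # the object region, the person region, and the full frame.
    h, w = image.shape[:2]

    def area(b):
        return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

    features = np.array(
        [*person_box, *object_box,
         area(object_box) / max(area(person_box), 1),
         area(person_box) / (h * w),
         area(object_box) / (h * w)],
        dtype=np.float32)
    return masked, features

# Example: a 100x100 frame with a person region and an object region.
frame = np.full((100, 100, 3), 255, dtype=np.uint8)
masked, feats = build_masked_input(frame, (10, 10, 50, 90), (40, 60, 60, 80))
```

The masked image and the feature vector would then be fed to the model together, so the network sees both where the regions are and how large they are relative to each other.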
  • Patent number: 11182598
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: November 23, 2021
    Assignee: NVIDIA Corporation
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
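The ROI-based occupancy detection mentioned above can be sketched as a per-spot pixel-difference test against an empty-lot reference image. The function name, thresholds, and ROI format below are illustrative assumptions, not the patented method:

```python
import numpy as np

def spot_occupied(frame, roi, empty_reference, threshold=30, min_fraction=0.3):
    """A parking spot counts as occupied when enough of the pixels inside
    its region of interest differ from an empty-lot reference image."""
    x0, y0, x1, y1 = roi
    diff = np.abs(frame[y0:y1, x0:x1].astype(float) -
                  empty_reference[y0:y1, x0:x1])
    return (diff > threshold).mean() >= min_fraction

empty = np.zeros((40, 40))
frame = empty.copy()
frame[5:18, 5:18] = 180.0  # a vehicle inside the spot's ROI
occupied = spot_occupied(frame, (0, 0, 20, 20), empty)
```

A real deployment would combine a test like this with the multi-sensor tracking the abstract describes, rather than rely on a single static reference.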
  • Publication number: 20210348938
    Abstract: Calibration of various sensors may be difficult without specialized software to process intrinsic and extrinsic information about the sensors. Certain types of input files, such as image files, may also lack certain information, like depth information, to effectively translate regions of interest between images taken from different perspectives. Landmarks can be used to establish points for associating regions of interest between images taken from different perspectives and provided as an overlay to verify sensor calibration.
    Type: Application
    Filed: May 4, 2021
    Publication date: November 11, 2021
    Inventors: Evan McLaughlin, Farzin Aghdasi, Milind Naphade, Arihant Jain, Sujit Biswas, Parthasarathy Sriram
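Using landmarks to associate regions of interest between two camera views can be sketched as fitting a transform from landmark correspondences. The affine model and function names below are simplifying assumptions; actual calibration would involve full camera intrinsics and extrinsics:

```python
import numpy as np

def affine_from_landmarks(src_pts, dst_pts):
    """Fit a 2D affine map from 3+ landmark correspondences between two
    views (least squares). Returns a 3x2 matrix M so that [x, y, 1] @ M
    gives the point in the other view."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def transfer_points(M, points):
    """Map ROI corner points into the other view using the fitted map."""
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Landmarks seen in both views; here the second view is shifted by (5, -2).
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -2), (15, -2), (5, 8), (15, 8)]
M = affine_from_landmarks(src, dst)
corner = transfer_points(M, [(3, 4)])[0]
```

Overlaying the transferred points on the second view, as the abstract describes, then gives a quick visual check of whether the calibration still holds.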
  • Patent number: 11165845
    Abstract: Processing video for low-bandwidth transmission may be complex. At a content source, an embodiment of the methods disclosed herein may include assigning a content identifier as a function of content in a packet of a packet stream, on a packet-by-packet basis. The method may further comprise forwarding the content identifier with the packet to enable a downstream network node or device to effect prioritization of the packet within the packet stream. The downstream network node or device may make drop decisions that are guided by the content identifier. Packets or video frames that contain useful information may be prioritized and have a higher probability of being delivered.
    Type: Grant
    Filed: October 8, 2019
    Date of Patent: November 2, 2021
    Assignee: Pelco, Inc.
    Inventors: Bryan K. Neff, Farzin Aghdasi
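Content-guided packet dropping can be sketched as a priority sort at a congested downstream node. The content-identifier values and data shapes below are hypothetical, chosen only to show the keep-the-important-packets idea:

```python
from dataclasses import dataclass

# Hypothetical content-identifier values, ordered by importance: keyframes
# carry the most useful information, so they are dropped last.
PRIORITY = {"keyframe": 0, "motion": 1, "background": 2}

@dataclass
class Packet:
    seq: int
    content_id: str  # assigned per packet at the content source

def drop_for_congestion(stream, budget):
    """A downstream node keeps at most `budget` packets, dropping the
    least important content first, guided by the content identifier."""
    kept = sorted(stream, key=lambda p: PRIORITY[p.content_id])[:budget]
    return sorted(kept, key=lambda p: p.seq)  # restore stream order

stream = [Packet(1, "keyframe"), Packet(2, "background"),
          Packet(3, "motion"), Packet(4, "keyframe")]
survivors = drop_for_congestion(stream, budget=3)
```

Under congestion the low-value background packet is dropped first, so the frames most useful to a viewer keep their higher probability of delivery.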
  • Publication number: 20210334629
    Abstract: A multi-stage multimedia inferencing pipeline may be set up and executed using configuration data including information used to set up each stage by deploying the specified or desired models and/or other pipeline components into a repository (e.g., a shared folder in a repository). The configuration data may also include information a central inference server library uses to manage and set parameters for these components with respect to a variety of inference frameworks that may be incorporated into the pipeline. The configuration data can define a pipeline that encompasses stages for video decoding, video transform, cascade inferencing on different frameworks, metadata filtering and exchange between models and display. The entire pipeline can be efficiently hardware-accelerated using parallel processing circuits (e.g., one or more GPUs, CPUs, DPUs, or TPUs). Embodiments of the present disclosure can integrate an entire video/audio analytics pipeline into an embedded platform in real time.
    Type: Application
    Filed: December 9, 2020
    Publication date: October 28, 2021
    Inventors: Wind Yuan, Kaustubh Purandare, Bhushan Rupde, Shaunak Gupte, Farzin Aghdasi
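A configuration describing such a multi-stage pipeline might look like the sketch below. The field names and schema are invented for illustration and are not the actual configuration format of the disclosed system:

```python
# Hypothetical configuration data for a cascaded inferencing pipeline:
# each stage names the component to deploy and, for inference stages,
# the framework and model repository location the server library uses.
pipeline_config = {
    "stages": [
        {"name": "decode", "type": "video_decoder", "codec": "h264"},
        {"name": "detect", "type": "inference",
         "framework": "tensorrt", "model_repo": "models/detector"},
        {"name": "classify", "type": "inference",
         "framework": "onnx", "model_repo": "models/classifier",
         "input": "detect.objects"},  # cascade: classify detector outputs
    ],
    "accelerator": "gpu:0",  # the whole pipeline is hardware-accelerated
}
```

The key idea from the abstract is that one declarative description drives deployment of every stage, even when consecutive stages run on different inference frameworks.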
  • Publication number: 20210089921
    Abstract: Transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. A pre-trained model can be obtained that is relevant for that inferencing task. Additional training data, as may correspond to at least one additional class of data, can be used to further train this model. This model can then be pruned and retrained in order to obtain a smaller model that retains high accuracy for the intended inferencing task.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Farzin Aghdasi, Varun Praveen, FNU Ratnesh Kumar, Partha Sriram
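The pruning step of that workflow can be sketched as magnitude pruning of a weight matrix. Real pipelines typically prune whole channels and then retrain to recover accuracy; the helper below is an illustrative assumption, not the disclosed method:

```python
import numpy as np

def magnitude_prune(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping `keep_ratio`
    of them. Retraining after this step restores most of the accuracy
    lost to the removed weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * keep_ratio)
    threshold = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))           # stand-in for a trained layer
w_pruned = magnitude_prune(w, keep_ratio=0.25)
sparsity = (w_pruned == 0).mean()     # fraction of weights removed
```

In the transfer-learning flow the abstract describes, this step sits between fine-tuning the pre-trained model on the additional class data and the final retraining pass.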
  • Publication number: 20200302161
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Application
    Filed: June 9, 2020
    Publication date: September 24, 2020
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Patent number: 10769913
    Abstract: Systems and methods are described herein that provide a three-tier intelligent video surveillance management system. An example of a system described herein includes a gateway configured to obtain video content and metadata relating to the video content from a plurality of network devices, a metadata processing module communicatively coupled to the gateway and configured to filter the metadata according to one or more criteria to obtain a filtered set of metadata, a video processing module communicatively coupled to the gateway and the metadata processing module and configured to isolate video portions of the video content associated with respective first portions of the filtered set of metadata, and a cloud services interface communicatively coupled to the gateway, the metadata processing module and the video processing module and configured to provide at least some of the filtered set of metadata or the isolated video portions to a cloud computing service.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: September 8, 2020
    Assignee: PELCO, INC.
    Inventors: Lei Wang, Hongwei Zhu, Farzin Aghdasi, Greg Millar
  • Publication number: 20200265085
    Abstract: Embodiments of the disclosure provide for systems and methods for creating metadata associated with a video data. The metadata can include data about objects viewed within a video scene and/or events that occur within the video scene. Some embodiments allow users to search for specific objects and/or events by searching the recorded metadata. In some embodiments, metadata is created by receiving a video frame and developing a background model for the video frame. Foreground object(s) can then be identified in the video frame using the background model. Once these objects are identified they can be classified and/or an event associated with the foreground object may be detected. The event and the classification of the foreground object can then be recorded as metadata.
    Type: Application
    Filed: February 25, 2020
    Publication date: August 20, 2020
    Inventors: Greg Millar, Farzin Aghdasi, Lei Wang
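The flow from background model to recorded metadata can be sketched as follows. The running-average background model, thresholds, and metadata fields are illustrative choices, not the patented design (object classification is left out of the sketch):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly absorb scene changes."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Pixels that differ enough from the background are foreground."""
    return np.abs(frame.astype(float) - background) > threshold

def frame_metadata(mask, min_area=50):
    """Record a metadata event when a large-enough foreground region is
    present; real systems would also classify the object."""
    area = int(mask.sum())
    return {"event": "object_present" if area >= min_area else "none",
            "foreground_area": area}

bg = np.zeros((64, 64))
frame = bg.copy()
frame[10:30, 10:30] = 200.0  # a bright foreground object enters
meta = frame_metadata(foreground_mask(bg, frame))
```

Searching the recorded metadata, as the abstract describes, then amounts to querying these small event records instead of scanning the raw video.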
  • Patent number: 10750132
    Abstract: An automated security surveillance system determines a location of a possible disturbance and adjusts its cameras to record video footage of the disturbance. In one embodiment, a disturbance can be determined by recording audio of the nearby area. A system, coupled to a camera, may include an arrangement of four audio sensors recording audio of the nearby area to produce independent outputs. The system further may include a processing module configured to determine an angle and distance of an audio source relative to a location of the arrangement of the four audio sensors. The system can then adjust the camera by rotation along an azimuth or elevation angle and adjusting the zoom level to record video of the audio source. Through use of the system, a surveillance system can present an image of a source of possible disturbance to an operator more rapidly and precisely than through manual techniques.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: August 18, 2020
    Assignee: PELCO, INC.
    Inventors: Chien-Min Huang, Wei Su, Farzin Aghdasi, James G. Millar, Greg M. Millar
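The angle estimate from a sensor pair can be sketched with the standard far-field time-difference-of-arrival relation. The function below handles a single pair; a four-sensor arrangement like the one described would repeat it for orthogonal pairs to recover both azimuth and elevation, and would add distance estimation. Names and values are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def azimuth_from_tdoa(tau, spacing):
    """Far-field azimuth (degrees) from the time-difference-of-arrival
    `tau` (seconds) between two sensors separated by `spacing` metres:
    theta = asin(c * tau / d)."""
    x = SPEED_OF_SOUND * tau / spacing
    x = max(-1.0, min(1.0, x))  # clamp numerical noise into asin's domain
    return math.degrees(math.asin(x))

# A source broadside to the pair arrives at both sensors simultaneously,
# so tau = 0 gives 0 degrees. A ~1 ms delay over a 0.5 m baseline
# corresponds to a source at 45 degrees:
tau_45 = 0.5 * math.sin(math.radians(45)) / SPEED_OF_SOUND
angle = azimuth_from_tdoa(tau_45, spacing=0.5)
```

The resulting angle would then drive the camera's pan/tilt, with the estimated distance setting the zoom level.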
  • Patent number: 10679671
    Abstract: A method of summarizing events in a video recording includes evaluating at least one video recording to identify events that violate at least one rule. The method further includes excerpting a fragment of the at least one video recording. The fragment contains a depiction of the event. The method also includes causing the fragment to be included in a summary video recording. The rules may relate to a threshold amount of motion in a physical space being recorded in the at least one received video recording, or a threshold duration of motion in a physical space being recorded in the at least one received video recording.
    Type: Grant
    Filed: June 9, 2014
    Date of Patent: June 9, 2020
    Assignee: Pelco, Inc.
    Inventors: Farzin Aghdasi, Kirsten A. Medhurst, Greg M. Millar, Stephen J. Mitchell
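The threshold-duration rule can be sketched as excerpting runs of motion that persist long enough. Representing motion as a per-frame flag is a simplifying assumption; names below are illustrative:

```python
def summary_fragments(motion_per_frame, min_run=3):
    """Return (start, end) frame ranges where motion persists for at
    least `min_run` consecutive frames -- the fragments to excerpt
    into the summary recording."""
    fragments, start = [], None
    # Append a sentinel False so a run ending at the last frame flushes.
    for i, moving in enumerate(list(motion_per_frame) + [False]):
        if moving and start is None:
            start = i
        elif not moving and start is not None:
            if i - start >= min_run:
                fragments.append((start, i))
            start = None
    return fragments

motion = [0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
frags = summary_fragments(motion)  # the one-frame blip at index 5 is dropped
```

Each returned range identifies a fragment to cut from the source recording and concatenate into the summary video.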
  • Publication number: 20200151489
    Abstract: In various examples, sensor data—such as masked sensor data—may be used as input to a machine learning model to determine a confidence for object to person associations. The masked sensor data may focus the machine learning model on particular regions of the image that correspond to persons, objects, or some combination thereof. In some embodiments, coordinates corresponding to persons, objects, or combinations thereof, in addition to area ratios between various regions of the image corresponding to the persons, objects, or combinations thereof, may be used to further aid the machine learning model in focusing on important regions of the image for determining the object to person associations.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 14, 2020
    Inventors: Parthasarathy Sriram, Fnu Ratnesh Kumar, Anil Ubale, Farzin Aghdasi, Yan Zhai, Subhashree Radhakrishnan
  • Publication number: 20200097742
    Abstract: In various examples, a neural network may be trained for use in vehicle re-identification tasks—e.g., matching appearances and classifications of vehicles across frames—in a camera network. The neural network may be trained to learn an embedding space such that embeddings corresponding to vehicles of the same identity are projected closer to one another within the embedding space, as compared to vehicles representing different identities. To accurately and efficiently learn the embedding space, the neural network may be trained using a contrastive loss function or a triplet loss function. In addition, to further improve accuracy and efficiency, a sampling technique—referred to herein as batch sample—may be used to identify embeddings, during training, that are most meaningful for updating parameters of the neural network.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 26, 2020
    Inventors: Fnu Ratnesh Kumar, Farzin Aghdasi, Parthasarathy Sriram, Edwin Weill
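The triplet loss mentioned in the abstract has a standard form, sketched below on toy embeddings; the batch-sample mining strategy itself is not reproduced here. Variable names and the margin value are illustrative:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull same-identity embeddings together
    and push different identities apart by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # same identity
    d_neg = np.sum((anchor - negative) ** 2)  # different identity
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor: one view of a vehicle
p = np.array([1.0, 0.1])   # positive: same vehicle, different camera
n = np.array([0.0, 1.0])   # negative: a different vehicle

easy = triplet_loss(a, p, n)  # constraint already satisfied -> zero loss
hard = triplet_loss(a, n, p)  # violated triplet -> large loss
```

A mining scheme such as the batch-sample technique selects, within each batch, the triplets whose loss contributes most to the parameter update, rather than averaging over many zero-loss triplets like `easy` above.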
  • Publication number: 20200036767
    Abstract: Processing video for low-bandwidth transmission may be complex. At a content source, an embodiment of the methods disclosed herein may include assigning a content identifier as a function of content in a packet of a packet stream, on a packet-by-packet basis. The method may further comprise forwarding the content identifier with the packet to enable a downstream network node or device to effect prioritization of the packet within the packet stream. The downstream network node or device may make drop decisions that are guided by the content identifier. Packets or video frames that contain useful information may be prioritized and have a higher probability of being delivered.
    Type: Application
    Filed: October 8, 2019
    Publication date: January 30, 2020
    Inventors: Bryan K. Neff, Farzin Aghdasi
  • Patent number: 10511649
    Abstract: Processing video for low-bandwidth transmission may be complex. At a content source, an embodiment of the methods disclosed herein may include assigning a content identifier as a function of content in a packet of a packet stream, on a packet-by-packet basis. The method may further comprise forwarding the content identifier with the packet to enable a downstream network node or device to effect prioritization of the packet within the packet stream. The downstream network node or device may make drop decisions that are guided by the content identifier. Packets or video frames that contain useful information may be prioritized and have a higher probability of being delivered.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: December 17, 2019
    Assignee: Pelco, Inc.
    Inventors: Bryan K. Neff, Farzin Aghdasi
  • Patent number: 10491936
    Abstract: A security system images a large amount of data through routine use which is difficult to transfer or share. In one embodiment, through the use of a cloud-based video service and an application program interface, the methods and systems disclosed herein comprise accepting a communication that identifies parameters associated with a video on a video server accessible via a network. The methods and systems further cause the video server to transfer the video via the network to a cloud-based video service location in response to the communication, and transmit a notification to a receiving party (or cause the cloud-based video service location to transmit the notification) in concert with the transfer of the video, which provides availability information of the video at the cloud-based service location. The methods and systems facilitate video sharing amongst parties.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: November 26, 2019
    Assignee: PELCO, INC.
    Inventors: Farzin Aghdasi, Kirsten A. Medhurst, Greg M. Millar, Stephen J. Mitchell
  • Publication number: 20190294889
    Abstract: The present disclosure provides various approaches for smart area monitoring suitable for parking garages or other areas. These approaches may include ROI-based occupancy detection to determine whether particular parking spots are occupied by leveraging image data from image sensors, such as cameras. These approaches may also include multi-sensor object tracking using multiple sensors that are distributed across an area that leverage both image data and spatial information regarding the area, to provide precise object tracking across the sensors. Further approaches relate to various architectures and configurations for smart area monitoring systems, as well as visualization and processing techniques. For example, as opposed to presenting video of an area captured by cameras, 3D renderings may be generated and played from metadata extracted from sensors around the area.
    Type: Application
    Filed: March 26, 2019
    Publication date: September 26, 2019
    Inventors: Parthasarathy Sriram, Ratnesh Kumar, Farzin Aghdasi, Arman Toorians, Milind Naphade, Sujit Biswas, Vinay Kolar, Bhanu Pisupati, Aaron Bartholomew
  • Patent number: 10134145
    Abstract: According to at least one example embodiment, a method and corresponding apparatus of pruning video data comprise detecting motion areas within video frames of the video data based on short-term and long-term variations associated with content of the video data. Motion events, associated with the content of the video data, are then identified based on the motion areas detected, corresponding filtered motion areas, and variation patterns associated with the video data. Based on the motion events identified, a storage pattern for storing video frames of the video data is determined. The video frames are stored according to the determined storage pattern.
    Type: Grant
    Filed: December 24, 2013
    Date of Patent: November 20, 2018
    Assignee: Pelco, Inc.
    Inventors: Lei Wang, Farzin Aghdasi, Greg M. Millar, Stephen J. Mitchell
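The event-driven storage pattern can be sketched as keeping every frame during a motion event and decimating elsewhere. The specific decimation scheme below is an illustrative assumption, not the patented determination of the storage pattern:

```python
def frames_to_store(motion_event_at, n_frames, decimate=10):
    """Store every frame that falls inside a motion event; elsewhere
    keep only every `decimate`-th frame to prune the video data."""
    kept = []
    for i in range(n_frames):
        if motion_event_at(i) or i % decimate == 0:
            kept.append(i)
    return kept

# One motion event spanning frames 30-39 in a 60-frame clip:
kept = frames_to_store(lambda i: 30 <= i < 40, n_frames=60)
```

The pruning pays off because surveillance footage is mostly static: full temporal resolution is preserved exactly where the identified motion events occurred.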
  • Patent number: 10051246
    Abstract: A video surveillance system includes: an input configured to receive indications of images each comprising a plurality of pixels; a memory; and a processing unit communicatively coupled to the input and the memory and configured to: analyze the indications of the images; compare the present image with a short-term background image stored in the memory; compare the present image with a long-term background image stored in the memory; provide an indication in response to an object in the present image being disposed in a first location in the present image, in a second location in, or absent from, the short-term background image, and in a third location in, or absent from, the long-term background image, where the first location is different from both the second location and the third location.
    Type: Grant
    Filed: September 10, 2015
    Date of Patent: August 14, 2018
    Assignee: Pelco, Inc.
    Inventors: Wei Su, Lei Wang, Farzin Aghdasi, Shu Yang
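The two-background comparison can be sketched per pixel: a region that disagrees with both the short-term and long-term models indicates a newly appeared or moved object, while one that matches the long-term model does not. Function names and thresholds below are illustrative, not the claimed location-based comparison:

```python
import numpy as np

def moved_object_mask(present, short_bg, long_bg, threshold=25):
    """Flag pixels where the present image disagrees with BOTH the
    short-term and the long-term background models."""
    return ((np.abs(present - short_bg) > threshold) &
            (np.abs(present - long_bg) > threshold))

short_bg = np.zeros((4, 4))
long_bg = np.zeros((4, 4))

# A newly appeared object differs from both models and is flagged:
appeared = np.zeros((4, 4)); appeared[1, 1] = 200.0
mask_new = moved_object_mask(appeared, short_bg, long_bg)

# The same object already present in the long-term background is not:
long_bg_with_object = long_bg.copy(); long_bg_with_object[1, 1] = 200.0
mask_old = moved_object_mask(appeared, short_bg, long_bg_with_object)
```

Maintaining the two models at different update rates is what lets the system distinguish a just-placed object from scenery that has simply been there a long time.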
  • Patent number: 10009579
    Abstract: A sensor system according to an embodiment of the invention may process depth data and visible light data for a more accurate detection. Depth data assists where visible light images are susceptible to false positives. Visible light images (or video) may similarly enhance conclusions drawn from depth data alone. Detections may be object-based or defined with the context of a target object. Depending on the target object, the types of detections may vary to include motion and behavior. Applications of the described sensor system include motion guided interfaces where users may interact with one or more systems through gestures. The sensor system described may also be applied to counting systems, surveillance systems, polling systems, retail store analytics, or the like.
    Type: Grant
    Filed: November 21, 2012
    Date of Patent: June 26, 2018
    Assignee: PELCO, INC.
    Inventors: Lei Wang, Farzin Aghdasi
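The depth-plus-visible-light idea can be sketched as requiring both cues to agree before declaring foreground, which rejects shadow-style false positives: a shadow changes appearance but not depth. Thresholds and names below are illustrative assumptions:

```python
import numpy as np

def fused_foreground(rgb_diff, depth_diff, rgb_thresh=25.0, depth_thresh=0.1):
    """Pixels count as true foreground only when both the visible-light
    difference and the depth difference exceed their thresholds."""
    return (rgb_diff > rgb_thresh) & (depth_diff > depth_thresh)

# Per-pixel differences from the respective background models:
rgb_diff = np.array([[60.0, 60.0],   # a shadow and a person both change RGB
                     [0.0,  0.0]])
depth_diff = np.array([[0.0, 0.5],   # only the person changes depth
                       [0.0, 0.0]])
mask = fused_foreground(rgb_diff, depth_diff)  # only the person survives
```

The same fused mask could then feed the counting, gesture, or analytics applications the abstract lists, with fewer false detections than either modality alone.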