Patents by Inventor Mahesh Saptharishi

Mahesh Saptharishi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11326956
    Abstract: One example temperature sensing device includes an electronic processor configured to receive a thermal image of a person captured by a thermal camera. The electronic processor is configured to determine a first temperature and a first location of a first hotspot on the person. The electronic processor is configured to determine a second location of a second hotspot on the person based on the second location being approximately symmetrical with respect to the first location about an axis, and the second hotspot having a second temperature that is approximately equal to the first temperature. The electronic processor is configured to determine a distance between the first location of the first hotspot and the second location of the second hotspot. In response to determining that the distance is within a predetermined range of distances, the electronic processor is configured to generate and output an estimated temperature of the person.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: May 10, 2022
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Mahesh Saptharishi, Pietro Russo, Peter L. Venetianer
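The symmetry, temperature, and distance checks in the abstract above can be sketched roughly as follows. The function name, tolerances, and the pixel distance range are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch of the claimed hotspot-symmetry check. All thresholds
# (5 px alignment tolerance, 0.5 deg temperature tolerance, 20-80 px pair
# spacing) are assumptions for demonstration, not taken from the patent.

def estimate_temperature(hotspots, axis_x, temp_tol=0.5, dist_range=(20, 80)):
    """hotspots: list of (x, y, temp) tuples found in a thermal image.
    axis_x: x-coordinate of the person's axis of symmetry (e.g. face midline).
    Returns an estimated temperature, or None if no valid pair is found."""
    for i, (x1, y1, t1) in enumerate(hotspots):
        for x2, y2, t2 in hotspots[i + 1:]:
            # The second hotspot must mirror the first about the axis
            # (x-offsets cancel, y-coordinates roughly match)...
            symmetric = (abs((x1 - axis_x) + (x2 - axis_x)) < 5
                         and abs(y1 - y2) < 5)
            # ...and have approximately the same temperature.
            if symmetric and abs(t1 - t2) <= temp_tol:
                dist = ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                # The pair spacing must fall within a predetermined range,
                # e.g. a plausible spacing of facial features in pixels.
                if dist_range[0] <= dist <= dist_range[1]:
                    return (t1 + t2) / 2
    return None
```

For example, two hotspots at (90, 100) and (150, 100) mirrored about x = 120 with temperatures 36.9 and 37.1 would yield an estimate of 37.0, while a pair with a 2-degree temperature mismatch would be rejected.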
  • Patent number: 11321592
    Abstract: Methods, systems, and techniques for tagless tracking of an object-of-interest are disclosed. Image and non-image data are generated across a plurality of camera-specific regions, and the object-of-interest is tracked over a global map formed as a composite of these regions.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: May 3, 2022
    Assignee: AVIGILON CORPORATION
    Inventors: Yanyan Hu, Pietro Russo, Mahesh Saptharishi
  • Patent number: 11303877
    Abstract: Methods, systems, and techniques for enhancing use of two-dimensional (2D) video analytics by using depth data. Two-dimensional image data representing an image comprising a first object is obtained, as well as depth data of a portion of the image that includes the first object. The depth data indicates a depth of the first object. An initial 2D classification of the portion of the image is generated using the 2D image data without using the depth data. The initial 2D classification is stored as an approved 2D classification when the initial 2D classification is determined to be consistent with the depth data. Additionally or alternatively, a confidence level of the initial 2D classification may be adjusted depending on whether the initial 2D classification is determined to be consistent with the depth data, or the depth data may be used with the 2D image data for classification.
    Type: Grant
    Filed: August 13, 2019
    Date of Patent: April 12, 2022
    Assignee: AVIGILON CORPORATION
    Inventors: Dharanish Kedarisetti, Pietro Russo, Peter L. Venetianer, Mahesh Saptharishi
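The depth-consistency gating described in that abstract can be sketched as below. The class height priors, the pinhole-camera back-projection, and the confidence penalty are all illustrative assumptions; the patent does not prescribe a specific consistency test:

```python
# Sketch of gating a 2D classification with depth data. Class names, height
# priors, and the 0.5 confidence penalty are assumptions for illustration.

# Rough real-world height priors per class, in metres (assumed values).
HEIGHT_PRIORS = {"person": (1.4, 2.1), "vehicle": (1.2, 3.5)}

def check_with_depth(label, bbox_height_px, depth_m, focal_px=1000.0):
    """Return True if the 2D label is consistent with the measured depth.
    bbox_height_px: height of the detection box in pixels.
    depth_m: depth of the object reported by the depth sensor, in metres."""
    # Back-project the pixel height to an approximate physical height
    # using a pinhole model: height = depth * pixels / focal_length.
    est_height_m = depth_m * bbox_height_px / focal_px
    lo, hi = HEIGHT_PRIORS[label]
    return lo <= est_height_m <= hi

def classify_with_depth(label, confidence, bbox_height_px, depth_m):
    """Approve the 2D classification, or down-weight its confidence when the
    depth data disagrees, mirroring the abstract's alternative branches."""
    if check_with_depth(label, bbox_height_px, depth_m):
        return label, confidence          # approved as-is
    return label, confidence * 0.5        # depth disagrees: reduce confidence
```

A 340-pixel-tall "person" detection at 5 m back-projects to about 1.7 m and is approved; a 100-pixel-tall detection at the same depth implies a 0.5 m person and gets its confidence halved.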
  • Patent number: 11295179
    Abstract: Methods, systems, and techniques for monitoring an object-of-interest within a region involve receiving data from at least two sources monitoring the region and correlating that data to determine that an object-of-interest depicted or represented in data from one of the sources is the same object-of-interest depicted or represented in data from the other source. Metadata identifying that the object-of-interest from the two sources is the same object-of-interest is then stored for later use in, for example, object tracking.
    Type: Grant
    Filed: August 27, 2020
    Date of Patent: April 5, 2022
    Assignee: AVIGILON CORPORATION
    Inventors: Moussa Doumbouya, Yanyan Hu, Kevin Piette, Pietro Russo, Mahesh Saptharishi, Bo Yang Yu
  • Publication number: 20220042851
    Abstract: One example temperature sensing device includes an electronic processor configured to receive a thermal image of a person captured by a thermal camera. The electronic processor is configured to determine a first temperature and a first location of a first hotspot on the person. The electronic processor is configured to determine a second location of a second hotspot on the person based on the second location being approximately symmetrical with respect to the first location about an axis, and the second hotspot having a second temperature that is approximately equal to the first temperature. The electronic processor is configured to determine a distance between the first location of the first hotspot and the second location of the second hotspot. In response to determining that the distance is within a predetermined range of distances, the electronic processor is configured to generate and output an estimated temperature of the person.
    Type: Application
    Filed: August 6, 2020
    Publication date: February 10, 2022
    Inventors: Mahesh Saptharishi, Pietro Russo, Peter L. Venetianer
  • Publication number: 20220027618
    Abstract: A camera system comprises an image capturing device, object detection module, object tracking module, and match classifier. The object detection module receives image data and detects objects appearing in one or more of the images. The object tracking module temporally associates instances of detected objects, each of which has a signature representing features of the detected object. The match classifier matches object instances by analyzing data derived from the signatures. The match classifier determines whether the signatures match.
    Type: Application
    Filed: October 7, 2021
    Publication date: January 27, 2022
    Inventors: Mahesh Saptharishi, Dimitri A. Lisin
  • Patent number: 11176366
    Abstract: A camera system comprises an image capturing device, object detection module, object tracking module, and match classifier. The object detection module receives image data and detects objects appearing in one or more of the images. The object tracking module temporally associates instances of detected objects, each of which has a signature representing features of the detected object. The match classifier matches object instances by analyzing data derived from the signatures. The match classifier determines whether the signatures match.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: November 16, 2021
    Assignee: AVIGILON ANALYTICS CORPORATION
    Inventors: Mahesh Saptharishi, Dimitri A. Lisin
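The signature matching performed by the match classifier in this abstract can be sketched as follows. The patent describes a trained classifier operating on data derived from the signatures; the fixed cosine-similarity threshold below is a simplified stand-in, not the claimed method:

```python
import math

# Sketch of matching object instances by their appearance signatures so a
# tracker can associate them across frames. The cosine threshold is an
# illustrative stand-in for the patent's trained match classifier.

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def signatures_match(sig_a, sig_b, threshold=0.8):
    """Decide whether two detected-object signatures belong to the same
    object instance."""
    return cosine(sig_a, sig_b) >= threshold
```

Identical signatures score 1.0 and match; orthogonal signatures score 0.0 and do not.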
  • Patent number: 11113587
    Abstract: There is provided an appearance search system comprising one or more cameras configured to capture video of a scene, the video having images of objects. The system comprises one or more processors and memory comprising computer program code stored on the memory and configured when executed by the one or more processors to cause the one or more processors to perform a method. The method comprises identifying one or more of the objects within the images of the objects. The method further comprises implementing a learning machine configured to generate signatures of the identified objects and generate a signature of an object of interest. The system further comprises a network configured to send the images of the objects from the one or more cameras to the one or more processors.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: September 7, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Richard Butt, Alexander Chau, Moussa Doumbouya, Levi Glozman, Lu He, Aleksey Lipchin, Shaun P. Marlatt, Sreemanananth Sadanand, Mitul Saha, Mahesh Saptharishi, Yanyan Hu
  • Patent number: 11100350
    Abstract: Methods, systems, and techniques for classifying and/or detecting objects using visible and invisible light images. A visible light image and an invisible light image are received at a convolutional neural network (CNN). The visible light image depicts a region-of-interest imaged using visible light. The invisible light image depicts at least a portion of the region-of-interest imaged using invisible light, and at least one of the images depicts an object-of-interest within the portion of the region-of-interest shared between the images. The CNN then classifies and/or detects the object-of-interest using the images. The CNN may be trained to perform this classification and/or detection using pairs of visible and invisible light training images.
    Type: Grant
    Filed: February 19, 2019
    Date of Patent: August 24, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Kevin Piette, Pietro Russo, Mahesh Saptharishi, Bo Yang Yu
  • Patent number: 11051001
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: June 29, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Publication number: 20210183091
    Abstract: A method, system, and computer program product for emulating the depth data of a three-dimensional (3D) camera device using a radar device are disclosed. The method includes concurrently operating the radar device and the 3D camera device to generate training radar data and training depth data, respectively. Each of the radar device and the 3D camera device has a respective field of view, and the field of view of the radar device overlaps that of the 3D camera device. The method also includes inputting the training radar and depth data to a neural network and employing that data to train the network. Once trained, the neural network is configured to receive real radar data as input and to output substitute depth data.
    Type: Application
    Filed: December 12, 2019
    Publication date: June 17, 2021
    Applicant: MOTOROLA SOLUTIONS, INC.
    Inventors: Yanyan Hu, Kevin Piette, Pietro Russo, Mahesh Saptharishi
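The training setup in that abstract, paired radar and depth samples from overlapping fields of view training a model that later maps radar alone to substitute depth, can be sketched as below. A one-parameter linear model fitted by gradient descent stands in for the neural network, and all data is synthetic:

```python
# Sketch of training a radar-to-depth emulator on concurrently captured,
# aligned radar/depth pairs. The linear model and synthetic data below are
# illustrative stand-ins for the patent's neural network and sensor data.

def train_depth_emulator(radar, depth, lr=0.01, epochs=2000):
    """Fit depth ~ w * radar + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(radar)
    for _ in range(epochs):
        grad_w = sum(2 * (w * r + b - d) * r for r, d in zip(radar, depth)) / n
        grad_b = sum(2 * (w * r + b - d) for r, d in zip(radar, depth)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return lambda r: w * r + b   # substitute-depth predictor

# Concurrent capture yields aligned training pairs (synthetic here,
# roughly depth = 2 * radar):
radar_readings = [1.0, 2.0, 3.0, 4.0]
depth_readings = [2.1, 4.0, 6.1, 8.0]

emulate_depth = train_depth_emulator(radar_readings, depth_readings)
```

After training, `emulate_depth` produces depth estimates from radar input alone, which is the "substitute depth data" role the abstract describes.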
  • Patent number: 11025891
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: June 1, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Patent number: 10997469
    Abstract: Methods, systems, and techniques for facilitating improved training of a supervised machine learning process, such as a decision tree. First and second object detections of an object depicted in a video are respectively generated using first and second object detectors, with the second object detector requiring more computational resources than the first object detector to detect the object. Whether a similarity and a difference between the first and second object detections respectively satisfy a similarity threshold and a difference threshold is determined. When the similarity threshold is satisfied, the first object detection is stored as a positive example for the machine learning training. When the difference threshold is satisfied, the first object detection is stored as a negative example for the machine learning training.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: May 4, 2021
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Aravind Anantha, Mahesh Saptharishi, Yanyan Hu
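The threshold logic in that abstract, a cheap detector checked against an expensive one to mine training examples, can be sketched as follows. The use of intersection-over-union as the comparison measure and the specific thresholds are illustrative assumptions; the patent does not fix them:

```python
# Sketch of mining supervised training examples by comparing detections from
# a cheap detector against a more expensive one. IoU and both thresholds are
# assumed for illustration.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def label_detection(cheap_box, expensive_box, sim_thresh=0.7, diff_thresh=0.3):
    """Compare the cheap detector's box against the expensive detector's box.
    Agreement above sim_thresh yields a positive training example; overlap
    below diff_thresh yields a negative one; anything in between is skipped."""
    overlap = iou(cheap_box, expensive_box)
    if overlap >= sim_thresh:
        return "positive"
    if overlap <= diff_thresh:
        return "negative"
    return None   # ambiguous: not used for training
```

Identical boxes become positive examples, disjoint boxes become negative examples, and partially overlapping boxes fall into the ambiguous band and are discarded.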
  • Publication number: 20210089833
    Abstract: Methods, systems, and techniques for facilitating improved training of a supervised machine learning process, such as a decision tree. First and second object detections of an object depicted in a video are respectively generated using first and second object detectors, with the second object detector requiring more computational resources than the first object detector to detect the object. Whether a similarity and a difference between the first and second object detections respectively satisfy a similarity threshold and a difference threshold is determined. When the similarity threshold is satisfied, the first object detection is stored as a positive example for the machine learning training. When the difference threshold is satisfied, the first object detection is stored as a negative example for the machine learning training.
    Type: Application
    Filed: September 24, 2019
    Publication date: March 25, 2021
    Applicant: MOTOROLA SOLUTIONS, INC.
    Inventors: Aravind Anantha, Mahesh Saptharishi, Yanyan Hu
  • Publication number: 20210051312
    Abstract: Methods, systems, and techniques for enhancing use of two-dimensional (2D) video analytics by using depth data. Two-dimensional image data representing an image comprising a first object is obtained, as well as depth data of a portion of the image that includes the first object. The depth data indicates a depth of the first object. An initial 2D classification of the portion of the image is generated using the 2D image data without using the depth data. The initial 2D classification is stored as an approved 2D classification when the initial 2D classification is determined to be consistent with the depth data. Additionally or alternatively, a confidence level of the initial 2D classification may be adjusted depending on whether the initial 2D classification is determined to be consistent with the depth data, or the depth data may be used with the 2D image data for classification.
    Type: Application
    Filed: August 13, 2019
    Publication date: February 18, 2021
    Applicant: Avigilon Corporation
    Inventors: Dharanish Kedarisetti, Pietro Russo, Peter L. Venetianer, Mahesh Saptharishi
  • Patent number: 10891509
    Abstract: Methods and systems for facilitating identification of an object-of-interest are described. A face similarity score and a body similarity score of a query image relative to a gallery image are determined. A fused similarity score of the query image relative to the gallery image is determined by applying a relationship between the face similarity score, the body similarity score, and the fused similarity score. The fused similarity score is indicative of whether or not the object-of-interest and the potential object-of-interest are the same object-of-interest. For example, a machine learning process is used to fuse the face similarity score and the body similarity score into the fused similarity score. The process is repeated for multiple gallery images, which may then be ranked according to their respective fused similarity scores.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: January 12, 2021
    Assignee: Avigilon Corporation
    Inventors: Moussa Doumbouya, Lu He, Yanyan Hu, Mahesh Saptharishi, Hao Zhang, Nicholas John Alcock, Roger David Donaldson, Seyedmostafa Azizabadifarahani, Ken Jessen
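The score fusion and gallery ranking in that abstract can be sketched as below. The patent learns the fusion via a machine learning process; the fixed logistic weights here are an illustrative stand-in, not learned values:

```python
import math

# Sketch of fusing face and body similarity scores into a single score used
# to rank gallery images. The weights and bias are assumed constants standing
# in for the patent's learned fusion.

def fused_similarity(face_score, body_score, w_face=2.0, w_body=1.0, bias=-1.5):
    """Combine per-modality similarity scores (each in [0, 1]) into one
    fused score in (0, 1) via a weighted logistic function."""
    z = w_face * face_score + w_body * body_score + bias
    return 1.0 / (1.0 + math.exp(-z))

def rank_gallery(query_scores):
    """query_scores: list of (gallery_id, face_score, body_score) tuples.
    Returns gallery ids sorted by descending fused similarity."""
    scored = [(gid, fused_similarity(f, b)) for gid, f, b in query_scores]
    return [gid for gid, _ in sorted(scored, key=lambda t: -t[1])]
```

A gallery image scoring high on both face and body similarity ranks ahead of one scoring low on both, matching the final ranking step of the abstract.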
  • Publication number: 20200394477
    Abstract: Methods, systems, and techniques for monitoring an object-of-interest within a region involve receiving data from at least two sources monitoring the region and correlating that data to determine that an object-of-interest depicted or represented in data from one of the sources is the same object-of-interest depicted or represented in data from the other source. Metadata identifying that the object-of-interest from the two sources is the same object-of-interest is then stored for later use in, for example, object tracking.
    Type: Application
    Filed: August 27, 2020
    Publication date: December 17, 2020
    Applicant: Avigilon Corporation
    Inventors: Moussa Doumbouya, Yanyan Hu, Kevin Piette, Pietro Russo, Mahesh Saptharishi, Bo Yang Yu
  • Publication number: 20200380708
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 3, 2020
    Applicant: Avigilon Corporation
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Publication number: 20200382765
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 3, 2020
    Applicant: Avigilon Corporation
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Patent number: 10848716
    Abstract: Disclosed are systems and methods for reducing the video communication bandwidth requirements of a video surveillance network camera system. The system includes network communication paths between network video cameras, which produce video streams of the scenes they observe, and content-aware computer networking devices, which use video analytics to analyze the visual content of the video streams and provide managed video representing, at specified quality levels, samples of the scenes observed. Distribution of the managed video consumes substantially less network bandwidth than delivering a video stream at the specified quality level through the network communication paths without video analytics would consume.
    Type: Grant
    Filed: April 3, 2017
    Date of Patent: November 24, 2020
    Assignee: AVIGILON ANALYTICS CORPORATION
    Inventor: Mahesh Saptharishi