Patents Assigned to Avigilon Corporation
  • Patent number: 11086594
    Abstract: A computer-implemented method controls aspects of a surveillance system using gestures and/or voice commands, and comprises: receiving one or both of an operator's skeleton input data and voice input data from a gesture detection camera and a microphone; using a processor, matching one or both of the received skeleton input data with a gesture stored in a database and the received voice input data with a text string stored in the database; matching one or both of the gesture and text string to a corresponding video management program command stored in the database; and transmitting the one or more video management program commands to a video management program of the surveillance system.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: August 10, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Uriel Lapidot, Elliot Rushton, Matthew Adam
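    A minimal Python sketch of the matching flow described in the abstract of 11086594 above, assuming a dictionary-backed stand-in for the database and hypothetical names (GESTURE_TO_COMMAND, match_inputs, transmit); it is an illustration, not Avigilon's implementation.

      # Hypothetical lookup tables standing in for the gesture/text database.
      GESTURE_TO_COMMAND = {"swipe_left": "previous_camera", "swipe_right": "next_camera"}
      TEXT_TO_COMMAND = {"show camera one": "select_camera_1", "start recording": "record_start"}

      def match_inputs(skeleton_gesture=None, voice_text=None):
          """Map a recognised skeleton gesture and/or voice text string to VMS commands."""
          commands = []
          if skeleton_gesture in GESTURE_TO_COMMAND:
              commands.append(GESTURE_TO_COMMAND[skeleton_gesture])
          if voice_text in TEXT_TO_COMMAND:
              commands.append(TEXT_TO_COMMAND[voice_text])
          return commands

      def transmit(commands, video_management_program):
          """Forward the matched commands to the video management program."""
          for command in commands:
              video_management_program.execute(command)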
  • Patent number: 11087610
    Abstract: A computer-implemented method comprises detecting, with a presence detector such as a radar device, a presence of a person at a location such as an ATM vestibule. In response thereto, a timer is initiated. After initiating the timer, a portal sensor is used to detect a change in a status of a portal permitting access to the location. In response thereto, the timer is adjusted (for example the timer is reset). Thus, multiple consecutive normal usages of the vestibule do not unnecessarily trigger an alarm.
    Type: Grant
    Filed: November 23, 2018
    Date of Patent: August 10, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Peter Anderholm, Yanyan Hu, Kevin Piette, Pietro Russo, Bo Yang Yu
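    A short Python sketch of the timer behaviour described in 11087610: the countdown starts when presence is detected and is reset whenever the portal sensor reports a status change, so back-to-back normal uses of the vestibule do not raise an alarm. The timeout value and class/method names are hypothetical.

      import time

      ALARM_TIMEOUT_S = 300  # hypothetical dwell limit before an alarm is raised

      class VestibuleMonitor:
          def __init__(self):
              self.deadline = None

          def on_presence_detected(self):
              # Radar (or other presence detector) reports a person at the location.
              if self.deadline is None:
                  self.deadline = time.monotonic() + ALARM_TIMEOUT_S

          def on_portal_status_change(self):
              # Door opened or closed: treat as normal usage and restart the countdown.
              if self.deadline is not None:
                  self.deadline = time.monotonic() + ALARM_TIMEOUT_S

          def alarm_due(self):
              return self.deadline is not None and time.monotonic() >= self.deadline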
  • Patent number: 11055837
    Abstract: A system is provided, including: a radar sensor configured to transmit and receive a radar signal from a person; a depth camera configured to receive a depth image of the person; and one or more processors in communication with memory having stored thereon computer program code configured, when executed by the one or more processors, to cause the one or more processors to perform a method comprising: detecting the person; determining depth information relating to the person using the depth image; determining a correlation between the depth information of the person and the radar signal received from the person; and, in response to the correlation not being within a range of expected values, generating an alert. The depth information may be a volume or surface area of the person.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: July 6, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Kevin Piette, Pietro Russo, Bo Yang Yu
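    A rough Python sketch of the consistency check in 11055837, assuming the depth information is reduced to a surface-area proxy and the expected range is a fixed ratio band; the actual correlation measure and thresholds are not specified here.

      import numpy as np

      EXPECTED_RANGE = (0.6, 1.4)  # hypothetical bounds on the radar/depth ratio

      def person_consistent(radar_return, depth_image, person_mask):
          """Compare a depth-derived size estimate with the radar return; the caller
          generates an alert when the two are not within the expected range."""
          depths = depth_image[person_mask].astype(float)
          surface_area = float(np.sum(depths ** 2))  # pixel count weighted by depth^2
          ratio = radar_return / max(surface_area, 1e-6)
          return EXPECTED_RANGE[0] <= ratio <= EXPECTED_RANGE[1], ratio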
  • Patent number: 11051001
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: June 29, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
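    A compact Python sketch of the active-stereo path described in 11051001, using OpenCV's semi-global block matcher as a stand-in for the stereoscopic combination; it assumes rectified frames and treats the disparity map as the 3D output.

      import cv2

      def frames_to_2d_and_3d(left_pattern, right_pattern, left_plain):
          """3D (disparity) image from the pattern-illuminated pair; 2D image from
          the second, plainly illuminated set of images."""
          matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
          disparity = matcher.compute(
              cv2.cvtColor(left_pattern, cv2.COLOR_BGR2GRAY),
              cv2.cvtColor(right_pattern, cv2.COLOR_BGR2GRAY),
          )  # fixed-point disparity map used here as the 3D image
          return left_plain, disparity  # (2D image, 3D image)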
  • Patent number: 11048930
    Abstract: Alias capture to support searching for an object-of-interest is disclosed. A method includes capturing, using a camera with a defined field of view, video image frames that include a moving object-of-interest. The method also includes tracking the object-of-interest over a period of time starting when the object-of-interest enters the field of view and ending when the object-of-interest exits the field of view. The method also includes detecting, at a point in time between the start and end of the period of time of the tracking, a threshold-exceeding change in an appearance of the object-of-interest. The method also includes creating, before the end of the period of time of the tracking, a new object profile for the object-of-interest in response to the detecting of the threshold-exceeding change.
    Type: Grant
    Filed: October 4, 2019
    Date of Patent: June 29, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Moussa Doumbouya, Yanyan Hu, Kevin Piette, Pietro Russo, Peter L. Venetianer, Bo Yang Yu
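    A minimal Python sketch of the alias-capture idea in 11048930: follow an object's appearance features during a track and open a new object profile whenever the frame-to-frame change exceeds a threshold. The feature representation, distance metric, and threshold are all hypothetical.

      import numpy as np

      APPEARANCE_CHANGE_THRESHOLD = 0.5  # hypothetical distance threshold

      def track_with_alias_capture(feature_vectors_over_time, create_profile):
          profiles = []
          reference = None
          for features in feature_vectors_over_time:
              features = np.asarray(features, dtype=float)
              if reference is None:
                  reference = features
                  profiles.append(create_profile(features))  # profile at track start
                  continue
              if np.linalg.norm(features - reference) > APPEARANCE_CHANGE_THRESHOLD:
                  profiles.append(create_profile(features))  # new profile mid-track
                  reference = features
          return profiles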
  • Patent number: 11025891
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: June 1, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Barry Gravante, Pietro Russo, Mahesh Saptharishi
  • Patent number: 11023707
    Abstract: A cropped bounding box selection operation is performed on a video captured by a video capture and playback system, to select one or more cropped bounding boxes from the video for processing by a face detection operation. The cropped bounding box selection operation identifies objects from the video images and assigns a ranking to each identified object based on certain priority criteria; one or more cropped bounding boxes corresponding to the objects with the highest ranking(s) are then processed by the face detection operation to detect a face in each object.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 1, 2021
    Assignee: AVIGILON CORPORATION
    Inventors: Keshav Thirumalai Seshadri, Peter L. Venetianer
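    A small Python sketch of the selection step in 11023707, with hypothetical priority criteria (person class first, then larger area) and a hypothetical detection record layout.

      def select_crops_for_face_detection(detections, max_crops=3):
          """Rank detected objects and keep only the top-ranked crops."""
          def priority(det):
              return (det["class"] == "person", det["width"] * det["height"])
          ranked = sorted(detections, key=priority, reverse=True)
          return [det["crop"] for det in ranked[:max_crops]]

      def detect_faces(crops, face_detector):
          """Run the (comparatively expensive) face detector only on those crops."""
          return [face_detector(crop) for crop in crops]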
  • Publication number: 20210110145
    Abstract: A method of detecting unusual motion is provided, including: determining features occurring during a fixed time period; grouping the features into first and second subsets of the fixed time period; grouping the features in each of the first and second subsets into at least one pattern interval; and determining when an unusual event has occurred using at least one of the pattern intervals.
    Type: Application
    Filed: November 24, 2020
    Publication date: April 15, 2021
    Applicant: Avigilon Corporation
    Inventors: Nicholas ALCOCK, Aleksey LIPCHIN, Brenna RANDLETT, Xiao XIAO, Tulio de Souza ALCANTARA
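    A simplified Python sketch of the grouping described in 20210110145: features from a fixed period are split into subsets (for example weekday versus weekend), each subset is grouped into pattern intervals (for example hour of day), and an event is flagged when it falls outside the band learned for its interval. The grouping functions and the mean/deviation band are assumptions.

      from collections import defaultdict
      import statistics

      def build_pattern_intervals(feature_events, subset_of, interval_of, k=3.0):
          buckets = defaultdict(list)
          for event in feature_events:
              buckets[(subset_of(event), interval_of(event))].append(event["motion"])
          intervals = {}
          for key, values in buckets.items():
              mean = statistics.fmean(values)
              spread = statistics.pstdev(values) if len(values) > 1 else 0.0
              intervals[key] = (mean - k * spread, mean + k * spread)
          return intervals

      def is_unusual(event, intervals, subset_of, interval_of):
          low, high = intervals.get((subset_of(event), interval_of(event)), (0.0, float("inf")))
          return not (low <= event["motion"] <= high)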
  • Patent number: 10979622
    Abstract: Methods, systems, and techniques for performing object detection using a convolutional neural network (CNN) involve obtaining an image and then processing the image using the CNN to generate a first feature pyramid and, from the first pyramid, a second feature pyramid. The second pyramid includes an enhanced feature map, which is generated by combining an upsampled feature map and a feature map of the first feature pyramid whose resolution corresponds to or is lower than that of the enhanced feature map. The upsampled feature map is generated by upsampling a feature map of the second feature pyramid that is at a shallower position in the CNN than the enhanced feature map. The enhanced feature map is split into channel feature maps of different resolutions, with each of the channel feature maps corresponding to channels of the enhanced feature map. Object detection is performed on the channel feature maps.
    Type: Grant
    Filed: July 30, 2019
    Date of Patent: April 13, 2021
    Assignee: AVIGILON CORPORATION
    Inventor: Yin Wang
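    A NumPy sketch of the feature-map enhancement in 10979622: a second-pyramid map is upsampled and combined with a first-pyramid map of matching resolution, and the result is split along the channel axis for detection. The 2x nearest-neighbour upsampling, element-wise addition, and equal channel split are simplifications, not the patented architecture.

      import numpy as np

      def upsample2x(feature_map):
          """Nearest-neighbour 2x upsampling of a (channels, H, W) feature map."""
          return feature_map.repeat(2, axis=1).repeat(2, axis=2)

      def enhanced_feature_map(second_pyramid_map, first_pyramid_map):
          """Combine an upsampled second-pyramid map with a first-pyramid map."""
          return upsample2x(second_pyramid_map) + first_pyramid_map

      def split_into_channel_maps(enhanced, groups):
          """Split the enhanced map into per-group channel feature maps on which
          object detection is then performed."""
          return np.array_split(enhanced, groups, axis=0)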
  • Publication number: 20210051312
    Abstract: Methods, systems, and techniques for enhancing use of two-dimensional (2D) video analytics by using depth data. Two-dimensional image data representing an image comprising a first object is obtained, as well as depth data of a portion of the image that includes the first object. The depth data indicates a depth of the first object. An initial 2D classification of the portion of the image is generated using the 2D image data without using the depth data. The initial 2D classification is stored as an approved 2D classification when the initial 2D classification is determined to be consistent with the depth data. Additionally or alternatively, a confidence level of the initial 2D classification may be adjusted depending on whether the initial 2D classification is determined to be consistent with the depth data, or the depth data may be used with the 2D image data for classification.
    Type: Application
    Filed: August 13, 2019
    Publication date: February 18, 2021
    Applicant: Avigilon Corporation
    Inventors: Dharanish KEDARISETTI, Pietro RUSSO, Peter L. VENETIANER, Mahesh SAPTHARISHI
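    A minimal Python sketch of one way to apply the depth-consistency idea in 20210051312: estimate the object's physical height from its pixel height and depth with a pinhole model, then approve the 2D classification or lower its confidence. The class height ranges and penalty factor are hypothetical.

      EXPECTED_HEIGHT_M = {"person": (1.0, 2.2), "vehicle": (1.2, 4.0)}  # hypothetical

      def reconcile_with_depth(initial_class, confidence, bbox_height_px, depth_m, focal_px):
          height_m = bbox_height_px * depth_m / focal_px  # pinhole size estimate
          low, high = EXPECTED_HEIGHT_M.get(initial_class, (0.0, float("inf")))
          if low <= height_m <= high:
              return initial_class, confidence        # consistent: approve as-is
          return initial_class, confidence * 0.5      # inconsistent: reduce confidence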
  • Publication number: 20210050035
    Abstract: A method of exporting video clips is provided, comprising: displaying one or more video streams from at least one security camera; selecting a video clip from a video stream of the one or more video streams, the video clip associated with a time; storing information associated with the video clip in a list of video clips, the information associated with the video clip comprising a default name, the time and a default duration of the video clip and a camera from the at least one security camera that is associated with the video stream from the one or more video streams; displaying the list of video clips; on selection of one or more video clips from the list of video clips, allowing editing of the time and the default duration of the selected one or more video clips; and exporting the selected one or more video clips to a file.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 18, 2021
    Applicant: AVIGILON CORPORATION
    Inventors: Tulio de Souza Alcantara, David Flanagan, Zachary Lang, Brady James Schnell, Brenna Randlett
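    A small Python sketch of the clip-list data structure implied by 20210050035; the default name, default duration, and exporter interface are placeholders, not the product's API.

      from dataclasses import dataclass
      from datetime import datetime, timedelta

      @dataclass
      class ClipEntry:
          camera: str
          time: datetime
          name: str = "Unnamed clip"                  # default name
          duration: timedelta = timedelta(minutes=1)  # default duration

      clip_list = []

      def add_clip(camera, time):
          entry = ClipEntry(camera=camera, time=time)
          clip_list.append(entry)
          return entry

      def export_selected(selected, path, exporter):
          """Write the selected clips to a single file via the VMS export backend."""
          for clip in selected:
              exporter.write(path, clip.camera, clip.time, clip.duration)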
  • Publication number: 20210034671
    Abstract: Methods, systems, and techniques for enhancing a VMS are disclosed. One of the disclosed methods includes populating a user interface page with one or more images, each showing a single person matched to a known identity, and each taken contemporaneously with one or more respective access control event occurrences identifiable to the single person. User selection input is receivable to mark at least one of the images as a reference image for an appearance search to find additional images of the single person captured by video cameras within a surveillance system.
    Type: Application
    Filed: July 30, 2019
    Publication date: February 4, 2021
    Applicant: Avigilon Corporation
    Inventors: Christian Lemay, Steven Lewis, Elaine A. Ling Quek, Iain McVey, William Christopher Weston
  • Publication number: 20210019374
    Abstract: Multiple natural language training text strings are obtained. For example, text portions may be randomly selected and converted into natural language text based on one or more randomly selected rules. A formatted training text string is generated for each natural language training text string, for example using a context-free grammar parser. The formatted training text strings are inputted to a machine learning model. For each formatted training text string, using the machine learning model, a natural language text string is generated. The natural language text string is associated with one of the natural language training text strings. One or more parameters of the machine learning model are adjusted based on one or more differences between at least one of the natural language text strings and its associated natural language training text string.
    Type: Application
    Filed: July 17, 2019
    Publication date: January 21, 2021
    Applicant: Avigilon Corporation
    Inventors: Roger David DONALDSON, Cathy JIAO
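    A toy Python sketch of the training-data side of 20210019374: a rule and slot value are chosen at random to produce a formatted training string paired with a natural-language training string; each pair would then drive one training step of the sequence model. The templates and slot values are invented for illustration.

      import random

      TEMPLATES = [
          ("FIND person COLOR {c}", ["show me people wearing {c}", "find anyone in {c}"]),
          ("FIND vehicle COLOR {c}", ["look for a {c} car", "find {c} vehicles"]),
      ]
      COLORS = ["red", "blue", "green"]

      def make_training_pair():
          formatted, phrasings = random.choice(TEMPLATES)
          colour = random.choice(COLORS)
          return formatted.format(c=colour), random.choice(phrasings).format(c=colour)

      pairs = [make_training_pair() for _ in range(1000)]
      # The model maps each formatted string to natural language text, and its
      # parameters are adjusted on the difference from the paired training string.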
  • Patent number: 10891509
    Abstract: There are described methods and systems for facilitating identification of an object-of-interest. A face similarity score and a body similarity score of a query image relative to a gallery image are determined. A fused similarity score of the query image relative to the gallery image is determined by applying a relationship between the face similarity score, the body similarity score, and the fused similarity score. The fused similarity score is indicative of whether or not the object-of-interest and the potential object-of-interest are the same object-of-interest. For example, a machine learning process is used to fuse the face similarity score and the body similarity score into the fused similarity score. The process is repeated for multiple gallery images. The gallery images may then be ranked according to their respective fused similarity scores.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: January 12, 2021
    Assignee: Avigilon Corporation
    Inventors: Moussa Doumbouya, Lu He, Yanyan Hu, Mahesh Saptharishi, Hao Zhang, Nicholas John Alcock, Roger David Donaldson, Seyedmostafa Azizabadifarahani, Ken Jessen
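    A minimal Python sketch of the fusion step in 10891509, using a fixed logistic combination where the patent describes learned fusion; the weights are hypothetical.

      import math

      W_FACE, W_BODY, BIAS = 2.0, 1.0, -1.5  # hypothetical fusion parameters

      def fused_similarity(face_score, body_score):
          """Fuse the face and body similarity scores into a single score in [0, 1]."""
          return 1.0 / (1.0 + math.exp(-(W_FACE * face_score + W_BODY * body_score + BIAS)))

      def rank_gallery(candidates):
          """candidates: iterable of (gallery_id, face_score, body_score) tuples."""
          scored = [(gid, fused_similarity(f, b)) for gid, f, b in candidates]
          return sorted(scored, key=lambda item: item[1], reverse=True)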
  • Patent number: 10878227
    Abstract: A method of detecting unusual motion is provided, including: determining features occurring during a fixed time period; grouping the features into first and second subsets of the fixed time period; grouping the features in each of the first and second subsets into at least one pattern interval; and determining when an unusual event has occurred using at least one of the pattern intervals.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: December 29, 2020
    Assignee: Avigilon Corporation
    Inventors: Nicholas Alcock, Aleksey Lipchin, Brenna Randlett, Xiao Xiao, Tulio de Souza Alcantara
  • Publication number: 20200393741
    Abstract: A camera includes a housing, a light source positioned within the housing, and a light-refracting apparatus. The light-refracting apparatus comprises a collimator shaped to collimate light emitted by the light source, and a lens comprising an at least partially concave light-emitting surface positioned to receive light collimated by the collimator and shaped to disperse the collimated light.
    Type: Application
    Filed: June 13, 2019
    Publication date: December 17, 2020
    Applicant: Avigilon Corporation
    Inventor: Amar NANDA
  • Publication number: 20200394477
    Abstract: Methods, systems, and techniques for monitoring an object-of-interest within a region involve receiving at least data from two sources monitoring a region and correlating that data to determine that an object-of-interest depicted or represented in data from one of the sources is the same object-of-interest depicted or represented in data from the other source. Metadata identifying that the object-of-interest from the two sources is the same object-of-interest is then stored for later use in, for example, object tracking.
    Type: Application
    Filed: August 27, 2020
    Publication date: December 17, 2020
    Applicant: Avigilon Corporation
    Inventors: Moussa DOUMBOUYA, Yanyan HU, Kevin PIETTE, Pietro RUSSO, Mahesh SAPTHARISHI, Bo Yang YU
  • Publication number: 20200380708
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 3, 2020
    Applicant: Avigilon Corporation
    Inventors: Barry GRAVANTE, Pietro RUSSO, Mahesh SAPTHARISHI
  • Publication number: 20200382765
    Abstract: Methods, systems, and techniques for generating two-dimensional (2D) and three-dimensional (3D) images and image streams. The images and image streams may be generated using active stereo cameras projecting at least one illumination pattern, or by using a structured light camera and a pair of different illumination patterns of which at least one is a structured light illumination pattern. When using an active stereo camera, a 3D image may be generated by performing a stereoscopic combination of a first set of images (depicting a first illumination pattern) and a 2D image may be generated using a second set of images (optionally depicting a second illumination pattern). When using a structured light camera, a 3D image may be generated based on a first image that depicts a structured light illumination pattern, and a 2D image may be generated from the first image and a second image that depicts a different illumination pattern.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 3, 2020
    Applicant: Avigilon Corporation
    Inventors: Barry GRAVANTE, Pietro RUSSO, Mahesh SAPTHARISHI
  • Patent number: 10846554
    Abstract: Methods, systems, and techniques for performing a hash-based appearance search. A processor is used to obtain a hash vector that represents a search subject that is depicted in an image. The hash vector includes one or more hashes as a respective one or more components of the hash vector. The processor determines which one or more of the hashes satisfy a threshold criterion and which one or more of the components of the hash vector qualify as a scoring component. The one or more components that qualify correspond to a respective one or more hashes that satisfy the threshold criterion and that are represented in a scoring database that is generated based on different examples of a search target. The processor determines a score representing a similarity of the search subject to the different examples of the search target.
    Type: Grant
    Filed: July 17, 2018
    Date of Patent: November 24, 2020
    Assignee: Avigilon Corporation
    Inventors: Nicholas John Alcock, Seyedmostafa Azizabadifarahani, Alexander Chau, Roger David Donaldson
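    A bare-bones Python sketch of the scoring step in 10846554: a component of the hash vector contributes to the score only if its hash satisfies the threshold criterion and appears in the scoring database built from examples of the search target. The database layout and threshold test are stand-ins.

      def appearance_score(hash_vector, passes_threshold, scoring_db):
          score = 0.0
          for component, hash_value in enumerate(hash_vector):
              if passes_threshold(hash_value) and (component, hash_value) in scoring_db:
                  score += scoring_db[(component, hash_value)]  # scoring component
          return score

      def rank_subjects(subjects, passes_threshold, scoring_db):
          """Rank candidate search subjects by similarity to the search target."""
          return sorted(subjects,
                        key=lambda hv: appearance_score(hv, passes_threshold, scoring_db),
                        reverse=True)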