Patents by Inventor Zeeshan Rasheed
Zeeshan Rasheed has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230050679
Abstract: A feature extractor and novel training objective are provided for content-based image retrieval. For example, a computer-implemented method includes applying a query image and a search image to a neural network of a feature extraction network of a computing device, the query image indicating an object to be searched for in the search image. The feature extraction network includes the neural network, a spatial feature neural network receiving a first output of the neural network pertaining to the search image, and an embedding network receiving a second output of the neural network pertaining to the query image. The method includes generating spatial search features from the spatial feature neural network, generating a query feature from the embedding network, applying the query feature to an artificial neural network (ANN) index, and determining an optimal matching result of an object in the search image based on an operation using the ANN index.
Type: Application
Filed: July 29, 2022
Publication date: February 16, 2023
Applicant: Novateur Research Solutions
Inventors: Zeeshan RASHEED, Jonathan Jacob AMAZON, Khurram HASSAN-SHAFIQUE
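As a rough illustration of the retrieval step this abstract describes, the sketch below matches a query feature vector against an index of search-image features. A brute-force cosine-similarity search stands in for the patent's ANN index, and all names, dimensions, and data here are illustrative assumptions, not the patented design.

```python
import numpy as np

def best_match(query_feature, index_features):
    """Return (best_index, similarity) of the indexed feature closest to the query."""
    q = query_feature / np.linalg.norm(query_feature)
    idx = index_features / np.linalg.norm(index_features, axis=1, keepdims=True)
    sims = idx @ q                    # cosine similarity to every indexed entry
    best = int(np.argmax(sims))
    return best, float(sims[best])

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 128))               # 100 indexed search features
query = index[42] + 0.01 * rng.normal(size=128)   # near-duplicate of entry 42
print(best_match(query, index)[0])
```

In a real system the brute-force scan would be replaced by an approximate index so that lookups stay fast as the feature database grows.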
-
Publication number: 20210118136
Abstract: Techniques performed by a data processing system for operating a personalized oncology system herein include accessing a first histopathological image of a histopathological slide of a sample taken from a first patient; analyzing the first histopathological image using a first machine learning model configured to extract first features from the first histopathological image; searching a histological database that includes a plurality of second histopathological images and corresponding clinical data for a plurality of second patients to generate search results; analyzing the plurality of third histopathological images and the corresponding clinical data associated with the plurality of third histopathological images using statistical analysis techniques to generate associated statistics and metrics associated with mortality, morbidity, time-to-event, or a combination thereof for the plurality of third patients associated with the third histopathological images; and presenting an interactive visual representation.
Type: Application
Filed: October 22, 2020
Publication date: April 22, 2021
Applicant: Novateur Research Solutions LLC
Inventors: Khurram HASSAN-SHAFIQUE, Zeeshan RASHEED, Jonathan Jacob AMAZON, Rashid CHOTANI
-
Patent number: 10708548
Abstract: Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
Type: Grant
Filed: September 24, 2018
Date of Patent: July 7, 2020
Assignee: AVIGILON FORTRESS CORPORATION
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
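One responsive action this abstract describes is detecting when a tracked object crosses a tripwire drawn on the map. A minimal sketch, assuming the tripwire and the object's successive positions are expressed in the same (longitude, latitude) map space, is a standard 2-D segment-intersection test; the coordinates and names here are illustrative only.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses_tripwire(prev_pos, curr_pos, wire_start, wire_end):
    """True if the object's step from prev_pos to curr_pos crosses the tripwire."""
    return (_orient(prev_pos, curr_pos, wire_start) != _orient(prev_pos, curr_pos, wire_end)
            and _orient(wire_start, wire_end, prev_pos) != _orient(wire_start, wire_end, curr_pos))

# A tripwire drawn across a road in (longitude, latitude) map space:
wire = ((-77.20, 38.90), (-77.20, 38.91))
print(crosses_tripwire((-77.21, 38.905), (-77.19, 38.905), *wire))  # crossing the wire
```

Because the rule lives in geo-registered coordinates rather than pixel coordinates, the same tripwire can be evaluated against tracks from any camera that covers that stretch of map.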
-
Publication number: 20200193831
Abstract: Techniques for alerting a human operator of a predicted side collision of a vehicle with a detected pedestrian or bicyclist herein including obtaining sensor data indicative of a trajectory of the vehicle, determining a predicted trajectory of the vehicle based on the sensor data, estimating a likelihood of collision between the detected pedestrian or bicyclist and the first side of the vehicle based on at least the first position of the pedestrian or bicyclist determined by the fusion module and the predicted trajectory of the vehicle, determining that a warning should be presented based on at least the estimated likelihood of collision, and presenting an alert to a human operator of the vehicle in response to the determination that the warning should be presented.
Type: Application
Filed: February 6, 2020
Publication date: June 18, 2020
Applicant: Novateur Research Solutions LLC
Inventors: Khurram Hassan-Shafique, Zeeshan Rasheed
-
Patent number: 10687022
Abstract: Embodiments relate to systems, devices, and computer-implemented methods for performing automated visual surveillance by obtaining video camera coordinates determined using video data, video camera metadata, and/or digital elevation models, obtaining a surveillance rule associated with rule coordinates, identifying a video camera that is associated with video camera coordinates that include at least part of the rule coordinates, and transmitting the surveillance rule to a computing device associated with the video camera. The rule coordinates can be automatically determined based on received coordinates of an object. Additionally, the surveillance rule can be generated based on instructions from a user in a natural language syntax.
Type: Grant
Filed: December 4, 2015
Date of Patent: June 16, 2020
Assignee: AVIGILON FORTRESS CORPORATION
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
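The camera-matching step in this abstract can be sketched as follows: each camera advertises a geographic coverage region (simplified here to a longitude/latitude bounding box), and a rule is routed to every camera whose coverage contains at least one of the rule's coordinates. The bounding-box simplification and all names are assumptions of this sketch.

```python
def cameras_for_rule(rule_points, cameras):
    """Names of cameras whose coverage box contains any rule coordinate."""
    matched = []
    for name, (min_lon, min_lat, max_lon, max_lat) in cameras.items():
        if any(min_lon <= lon <= max_lon and min_lat <= lat <= max_lat
               for lon, lat in rule_points):
            matched.append(name)
    return matched

cameras = {
    "gate-cam": (-77.30, 38.80, -77.20, 38.90),
    "lot-cam": (-77.10, 38.80, -77.00, 38.90),
}
tripwire = [(-77.25, 38.85), (-77.24, 38.85)]   # rule drawn near the gate
print(cameras_for_rule(tripwire, cameras))
```

A production system would use the cameras' actual projected fields of view (from metadata and elevation models, per the abstract) rather than axis-aligned boxes.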
-
Patent number: 10599949
Abstract: A method for determining a likelihood that a first object captured in a first image and a second object captured in a second image are the same object includes capturing the first image from a first viewpoint and a second image from a second viewpoint, wherein the first object is in the first image, and the second object is in the second image. The method also includes determining a first likelihood that a first visual feature on the first object and a second visual feature on the second object are the same visual feature, and determining a second likelihood that a dimension of the first object and a corresponding dimension of the second object are the same. The method then includes determining a final likelihood that the first object and the second object are the same object based at least partially upon the first likelihood and the second likelihood.
Type: Grant
Filed: July 3, 2018
Date of Patent: March 24, 2020
Assignee: Avigilon Fortress Corporation
Inventors: Gang Qian, Zeeshan Rasheed
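The fusion step this abstract describes can be sketched as follows: a likelihood that two measured dimensions agree is modeled with a Gaussian on their difference, and it is combined with the visual-feature likelihood by a simple product, i.e. under an independence assumption. The Gaussian model, the sigma value, and the product rule are illustrative assumptions of this sketch, not the patented combination rule.

```python
import math

def dimension_likelihood(d1_m, d2_m, sigma_m=0.1):
    """Likelihood that two measured dimensions are the same, up to noise sigma_m."""
    return math.exp(-((d1_m - d2_m) ** 2) / (2 * sigma_m ** 2))

def final_likelihood(visual_lh, dim_lh):
    """Combine the two cue likelihoods, assuming the cues are independent."""
    return visual_lh * dim_lh

# Same person (~1.75 m tall) seen from two viewpoints, with a strong visual match:
same = final_likelihood(0.9, dimension_likelihood(1.75, 1.76))
# A much taller object that happens to get the same visual score:
diff = final_likelihood(0.9, dimension_likelihood(1.75, 2.10))
print(same > diff)
```

The value of the second cue is visible in the example: the dimension mismatch suppresses the final likelihood even when the visual features alone look like a match.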
-
Publication number: 20190246073
Abstract: A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior.
Type: Application
Filed: April 16, 2019
Publication date: August 8, 2019
Inventors: Niels HAERING, Zeeshan RASHEED, Li YU, Andrew J. CHOSAK
-
Publication number: 20190180624
Abstract: Systems and methods for alerting a human operator of a predicted side collision of a transit bus with a pedestrian or bicyclist, including receiving range scanner data from a range scanner located at a first side of the bus covering a first area on the first side of the bus, receiving image data from a sensor located at the first side of the bus and covering an area along the first side of the bus that overlaps the first area, detecting and tracking the pedestrian or bicyclist based on the range scanner data and the image data, estimating a likelihood of collision between the pedestrian or bicyclist and the first side of the bus based on the tracking of the pedestrian or bicyclist, and presenting an alert to the human operator in response to a determination that a warning should be presented based on the estimated likelihood of collision.
Type: Application
Filed: February 5, 2019
Publication date: June 13, 2019
Applicant: Novateur Research Solutions LLC
Inventors: Khurram Hassan-Shafique, Zeeshan Rasheed
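The warning decision in this abstract can be illustrated with a deliberately simplified 1-D model: the tracked pedestrian's lateral gap to the bus side is extrapolated at the current closing speed, and a warning is raised when that gap is predicted to close within a time horizon. The geometry, horizon, and names are assumptions of this sketch; the patent's track comes from fusing range-scanner and image data.

```python
def warn_side_collision(gap_m, closing_speed_mps, horizon_s=2.0):
    """True if the lateral gap is predicted to close within the time horizon."""
    if closing_speed_mps <= 0:        # pedestrian moving away or parallel to the bus
        return False
    return gap_m / closing_speed_mps <= horizon_s

print(warn_side_collision(gap_m=1.5, closing_speed_mps=1.0))  # 1.5 s to impact
print(warn_side_collision(gap_m=5.0, closing_speed_mps=1.0))  # 5 s away, no warning
```

A full implementation would replace the single scalar gap with the fused 2-D track and the bus's predicted swept path, but the threshold-on-predicted-time structure is the same.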
-
Patent number: 10291884
Abstract: A system for detecting behavior of a target may include: a target detection engine, adapted to detect at least one target from one or more objects from a video surveillance system recording a scene; a path builder, adapted to create at least one mature path model from analysis of the behavior of a plurality of targets in the scene, wherein the at least one mature path model includes a model of expected target behavior with respect to the at least one path model; and a target behavior analyzer, adapted to analyze and identify target behavior with respect to the at least one mature path model. The system may further include an alert generator, adapted to generate an alert based on the identified behavior.
Type: Grant
Filed: August 8, 2014
Date of Patent: May 14, 2019
Assignee: AVIGILON FORTRESS CORPORATION
Inventors: Niels Haering, Zeeshan Rasheed, Li Yu, Andrew J. Chosak
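A hedged sketch of the target-behavior analysis described above: here a "mature path model" is reduced to a polyline centreline plus a learned width, and a target is flagged when it strays farther from the centreline than that width. This representation and the threshold are illustrative assumptions, not the patented path model.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def off_path(target_pos, path, width):
    """True if the target is farther than `width` from every path segment."""
    return all(point_segment_dist(target_pos, a, b) > width
               for a, b in zip(path, path[1:]))

path = [(0, 0), (10, 0), (10, 10)]        # learned walkway centreline
print(off_path((5, 1), path, width=2))    # near the path: expected behavior
print(off_path((5, 8), path, width=2))    # far from the path: flag for the alert generator
```

In the patented system the path models are built automatically from the observed behavior of many targets, so the centreline and width would be learned rather than hand-specified.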
-
Publication number: 20190037179
Abstract: Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
Type: Application
Filed: September 24, 2018
Publication date: January 31, 2019
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
-
Publication number: 20180314913
Abstract: A method for determining a likelihood that a first object captured in a first image and a second object captured in a second image are the same object includes capturing the first image from a first viewpoint and a second image from a second viewpoint, wherein the first object is in the first image, and the second object is in the second image. The method also includes determining a first likelihood that a first visual feature on the first object and a second visual feature on the second object are the same visual feature, and determining a second likelihood that a dimension of the first object and a corresponding dimension of the second object are the same. The method then includes determining a final likelihood that the first object and the second object are the same object based at least partially upon the first likelihood and the second likelihood.
Type: Application
Filed: July 3, 2018
Publication date: November 1, 2018
Inventors: Gang Qian, Zeeshan Rasheed
-
Patent number: 10110856
Abstract: Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
Type: Grant
Filed: December 4, 2015
Date of Patent: October 23, 2018
Assignee: AVIGILON FORTRESS CORPORATION
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
-
Patent number: 10043104
Abstract: A method for determining a likelihood that a first object captured in a first image and a second object captured in a second image are the same object includes capturing the first image from a first viewpoint and a second image from a second viewpoint, wherein the first object is in the first image, and the second object is in the second image. The method also includes determining a first likelihood that a first visual feature on the first object and a second visual feature on the second object are the same visual feature, and determining a second likelihood that a dimension of the first object and a corresponding dimension of the second object are the same. The method then includes determining a final likelihood that the first object and the second object are the same object based at least partially upon the first likelihood and the second likelihood.
Type: Grant
Filed: December 8, 2015
Date of Patent: August 7, 2018
Assignee: AVIGILON FORTRESS CORPORATION
Inventors: Gang Qian, Zeeshan Rasheed
-
Publication number: 20180105107
Abstract: A collision warning system for vehicles includes: a detection, tracking, and localization (DTL) laser module receiving laser data from a first laser range scanner, and generating laser data output, wherein the first laser range scanner covers a first laser area; a detection, tracking, and localization (DTL) thermal module receiving thermal data from a first thermal video sensor, and generating thermal data output, wherein the first thermal video sensor covers a first thermal area; a fusion module receiving the laser data output and the thermal data output, fusing the laser data output and the thermal data output, and generating a situational awareness map; and a collision prediction module receiving the situational awareness map, predicting a collision between a detected object and a vehicle, and warning an operator regarding the predicted collision.
Type: Application
Filed: March 28, 2017
Publication date: April 19, 2018
Inventors: Khurram Hassan-Shafique, Zeeshan Rasheed
-
Publication number: 20170068859
Abstract: A method for determining a likelihood that a first object captured in a first image and a second object captured in a second image are the same object includes capturing the first image from a first viewpoint and a second image from a second viewpoint, wherein the first object is in the first image, and the second object is in the second image. The method also includes determining a first likelihood that a first visual feature on the first object and a second visual feature on the second object are the same visual feature, and determining a second likelihood that a dimension of the first object and a corresponding dimension of the second object are the same. The method then includes determining a final likelihood that the first object and the second object are the same object based at least partially upon the first likelihood and the second likelihood.
Type: Application
Filed: December 8, 2015
Publication date: March 9, 2017
Inventors: Gang Qian, Zeeshan Rasheed
-
Publication number: 20160165193
Abstract: Systems, methods and computer-readable media for creating and using video analysis rules that are based on map data are disclosed. A sensor(s), such as a video camera, can track and monitor a geographic location, such as a road, pipeline, or other location or installation. A video analytics engine can receive video streams from the sensor, and identify a location of the imaged view in a geo-registered map space, such as a latitude-longitude defined map space. A user can operate a graphical user interface to draw, enter, select, and/or otherwise input on a map a set of rules for detection of events in the monitored scene, such as tripwires and areas of interest. When tripwires, areas of interest, and/or other features are approached or crossed, the engine can perform responsive actions, such as generating an alert and sending it to a user.
Type: Application
Filed: December 4, 2015
Publication date: June 9, 2016
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
-
Publication number: 20160165187
Abstract: Embodiments relate to systems, devices, and computer-implemented methods for performing automated visual surveillance by obtaining video camera coordinates determined using video data, video camera metadata, and/or digital elevation models, obtaining a surveillance rule associated with rule coordinates, identifying a video camera that is associated with video camera coordinates that include at least part of the rule coordinates, and transmitting the surveillance rule to a computing device associated with the video camera. The rule coordinates can be automatically determined based on received coordinates of an object. Additionally, the surveillance rule can be generated based on instructions from a user in a natural language syntax.
Type: Application
Filed: December 4, 2015
Publication date: June 9, 2016
Inventors: Zeeshan Rasheed, Dana Eubanks, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
-
Publication number: 20160165191
Abstract: A method for predicting when an object will arrive at a boundary includes receiving visual media captured by a camera. An object in the visual media is identified. One or more parameters related to the object are detected based on analysis of the visual media. It is predicted when the object will arrive at a boundary using the one or more parameters. An alert is transmitted to a user indicating when the object is predicted to arrive at the boundary.Type: Application
Filed: December 4, 2015
Publication date: June 9, 2016
Inventors: Zeeshan Rasheed, Weihong Yin, Zhong Zhang, Kyle Glowacki, Allison Beach
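The prediction step above can be sketched with the simplest choice of parameters: a tracked position and velocity. The sketch below represents the boundary as a line given by a point and an outward normal and extrapolates linearly; this representation, and all names, are illustrative assumptions rather than the patented method.

```python
def seconds_to_boundary(pos, vel, boundary_point, boundary_normal):
    """Predicted seconds until pos, moving at vel, reaches the boundary line.
    Returns None if the object is not approaching the boundary."""
    nx, ny = boundary_normal
    dist = (boundary_point[0] - pos[0]) * nx + (boundary_point[1] - pos[1]) * ny
    speed = vel[0] * nx + vel[1] * ny   # component of velocity toward the boundary
    if speed <= 0:
        return None
    return dist / speed

# An object 12 m from a fence line, approaching it at 3 m/s:
eta = seconds_to_boundary(pos=(0, 0), vel=(3, 0),
                          boundary_point=(12, 0), boundary_normal=(1, 0))
print(f"object predicted to reach boundary in {eta:.1f} s")
```

The resulting arrival time is what the alert transmitted to the user would report.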
-
Patent number: 9240051
Abstract: A video camera may overlook a monitored area from any feasible position. An object flow estimation module monitors the moving direction of the objects in the monitored area. It may separate the consistently moving objects from the other objects. An object count estimation module may compute the object density (e.g., crowd density). An object density classification module may classify the density into customizable categories.
Type: Grant
Filed: November 21, 2006
Date of Patent: January 19, 2016
Assignee: Avigilon Fortress Corporation
Inventors: Haiying Liu, Peter L. Venetianer, Niels Haering, Omar Javed, Alan J. Lipton, Andrew Martone, Zeeshan Rasheed, Weihong Yin, Li Yu, Zhong Zhang
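The last two modules named in this abstract can be sketched in a few lines: an object count over a monitored area is converted to a density, which is then binned into customizable categories. The thresholds and labels below are assumptions for illustration.

```python
def classify_density(object_count, area_m2,
                     bins=((0.5, "sparse"), (2.0, "moderate"))):
    """Map objects-per-square-metre onto ordered, labelled bins."""
    density = object_count / area_m2
    for upper_bound, label in bins:
        if density < upper_bound:
            return label
    return "crowded"                  # density above the last bound

print(classify_density(10, 100))      # 0.1 objects/m^2
print(classify_density(300, 100))     # 3.0 objects/m^2
```

Because the bins are a parameter, an operator can tune the categories to the venue being monitored, matching the abstract's "customizable categories".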
-
Patent number: 9020261
Abstract: A method for segmenting video data into foreground and background portions utilizes statistical modeling of the pixels. A statistical model of the background is built for each pixel, and each pixel in an incoming video frame is compared with the background statistical model for that pixel. Pixels are determined to be foreground or background based on the comparisons. The method for segmenting video data may be further incorporated into a method for implementing an intelligent video surveillance system. The method for segmenting video data may be implemented in hardware.
Type: Grant
Filed: May 3, 2013
Date of Patent: April 28, 2015
Assignee: Avigilon Fortress Corporation
Inventors: Alan J. Lipton, Niels Haering, Zeeshan Rasheed, Omar Javed, Zhong Zhang, Weihong Yin, Peter L. Venetianer, Gary W. Myers
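A hedged sketch of the per-pixel statistical comparison this abstract describes: each pixel keeps a running mean and variance of its background intensity, and a pixel is labelled foreground when its value sits too many standard deviations from that mean. The single-Gaussian model, learning rate, and threshold are assumptions of this sketch, not the patented method.

```python
import numpy as np

class BackgroundModel:
    def __init__(self, first_frame, alpha=0.05, k=3.0):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 25.0)   # initial variance guess
        self.alpha, self.k = alpha, k

    def segment(self, frame):
        """Return a boolean foreground mask, then update the background model."""
        frame = frame.astype(np.float64)
        fg = np.abs(frame - self.mean) > self.k * np.sqrt(self.var)
        bg = ~fg                                      # adapt only background pixels
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * ((frame - self.mean) ** 2 - self.var)[bg]
        return fg

model = BackgroundModel(np.full((4, 4), 100.0))
frame = np.full((4, 4), 100.0)
frame[1, 2] = 200.0                                   # a bright "object" pixel
print(model.segment(frame).sum())                     # count of foreground pixels
```

Updating only the background pixels keeps a stationary object from being absorbed into the model too quickly; the per-pixel, fixed-arithmetic structure is also what makes this style of segmentation amenable to the hardware implementation the abstract mentions.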