Patents by Inventor Harpreet Sawhney

Harpreet Sawhney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7929017
    Abstract: A unified approach, a fusion technique, a space-time constraint, a methodology, and a system architecture are provided. The unified approach fuses the outputs of monocular and stereo video trackers, RFID and localization systems, and biometric identification systems. The fusion technique is based on transforming the sensory information from heterogeneous sources into a common coordinate system, with rigorous uncertainty analysis to account for various sensor noises and ambiguities. The space-time constraint is used to fuse the different sensors using location and velocity information. Advantages include the ability to continuously track multiple humans, with their identities, over a large area. The methodology is general, so that other sensors can be incorporated into the system. The system architecture is provided for the underlying real-time processing of the sensors.
    Type: Grant
    Filed: July 28, 2005
    Date of Patent: April 19, 2011
    Assignee: SRI International
    Inventors: Manoj Aggarwal, Harpreet Sawhney, Keith Hanna, Rakesh Kumar, Tao Zhao, David R. Patterson, David Kalokitis
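
The uncertainty-weighted fusion of heterogeneous sensor outputs described in the abstract above can be sketched in one dimension as inverse-variance weighting (a minimal illustration; the function name and numbers are hypothetical, and the patented system fuses full multi-sensor state in a common coordinate system):

```python
def fuse(x1, var1, x2, var2):
    """Fuse two 1-D position estimates by inverse-variance weighting:
    the measurement with the smaller variance (less noise) gets more weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # fused position
    var = 1.0 / (w1 + w2)                 # fused (smaller) variance
    return x, var

# Example: a stereo video tracker (accurate) vs. an RFID localizer (coarse)
x, var = fuse(10.0, 0.1, 12.0, 4.0)   # fused x is pulled toward 10.0
```

The fused variance is always smaller than either input variance, which is why combining even a coarse RFID fix with a video track improves the estimate.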
  • Patent number: 7929728
    Abstract: A method and apparatus for tracking a movable object using a plurality of images, each of which is separated by an interval of time is disclosed. The plurality of images includes first and second images. The method and apparatus include elements for aligning the first and second images as a function of (i) at least one feature of a first movable object captured in the first image, and (ii) at least one feature of a second movable object captured in the second image; and after aligning the first and second images, comparing at least one portion of the first image with at least one portion of the second image.
    Type: Grant
    Filed: December 5, 2005
    Date of Patent: April 19, 2011
    Assignee: SRI International
    Inventors: Yanlin Guo, Harpreet Sawhney, Rakesh Kumar, Ying Shan, Steve Hsu
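
Aligning two images as a function of features of a moving object, as the abstract above describes, can be sketched with the simplest possible motion model, a pure translation estimated from matched feature points (illustrative only; the patent's alignment is more general):

```python
def estimate_translation(feats1, feats2):
    """Align two frames by the average displacement between matched
    feature points (pure-translation model; a deliberately minimal sketch)."""
    n = len(feats1)
    dx = sum(b[0] - a[0] for a, b in zip(feats1, feats2)) / n
    dy = sum(b[1] - a[1] for a, b in zip(feats1, feats2)) / n
    return dx, dy

# Hypothetical matched corner features of the same object in two frames
f1 = [(10, 10), (20, 10), (15, 18)]
f2 = [(13, 11), (23, 11), (18, 19)]
dx, dy = estimate_translation(f1, f2)   # the object moved by (3.0, 1.0)
```

Once the frames are aligned by (dx, dy), corresponding image portions can be compared directly, as in the claimed method.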
  • Patent number: 7728833
    Abstract: A method and apparatus for automatically generating a three-dimensional computer model from a “point cloud” of a scene produced by a laser radar (LIDAR) system. Given a point cloud of an indoor or outdoor scene, the method extracts certain structures from the imaged scene, i.e., ceiling, floor, furniture, rooftops, ground, and the like, and models these structures with planes and/or prismatic structures to achieve a three-dimensional computer model of the scene. The method may then add photographic and/or synthetic texturing to the model to achieve a realistic model.
    Type: Grant
    Filed: August 18, 2005
    Date of Patent: June 1, 2010
    Assignee: Sarnoff Corporation
    Inventors: Vivek Verma, Rakesh Kumar, Stephen Charles Hsu, Harpreet Sawhney
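
Extracting planar structures (floors, ceilings, rooftops) from a LIDAR point cloud, as in the abstract above, is commonly done with RANSAC plane fitting; the sketch below is a generic illustration of that idea, not the patent's specific method, and all data is synthetic:

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three points, returned as (unit normal, d) with n.p + d = 0."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0.0:
        return None                      # degenerate (collinear) sample
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """RANSAC: repeatedly fit a plane to 3 random points and keep the
    fit that explains the most inliers (points within tol of the plane)."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        fit = plane_from_points(*rng.sample(points, 3))
        if fit is None:
            continue
        n, d = fit
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = fit, inliers
    return best, best_inliers

# Synthetic point cloud: a flat "floor" at z = 0 plus clutter above it
rng = random.Random(1)
floor = [(rng.uniform(0, 10), rng.uniform(0, 10), 0.0) for _ in range(80)]
clutter = [(rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(1, 3))
           for _ in range(20)]
plane, inliers = ransac_plane(floor + clutter)
# the dominant plane recovered is the floor, with normal near (0, 0, +-1)
```

Removing the inliers and repeating recovers the next-largest plane, which is how a scene can be decomposed into the planar and prismatic primitives the abstract mentions.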
  • Publication number: 20100073482
    Abstract: A scalable architecture for providing real-time multi-camera distributed video processing and visualization. An exemplary system comprises at least one video capture and storage system for capturing and storing a plurality of input videos, at least one vision-based alarm system for detecting and reporting alarm situations or events, and at least one video rendering system (e.g., a video flashlight system) for displaying an alarm situation in a context that speeds up comprehension and response. One advantage of the present architecture is that these systems are all scalable, such that additional sensors (e.g., cameras, motion sensors, infrared sensors, chemical sensors, biological sensors, temperature sensors, and the like) can be added in large numbers without overwhelming the ability of security forces to comprehend the alarm situation.
    Type: Application
    Filed: November 24, 2009
    Publication date: March 25, 2010
    Applicant: L-3 Communications Corporation
    Inventors: Supun Samarasekera, Rakesh Kumar, Keith Hanna, Harpreet Sawhney, Aydin Arpa, Manoj Aggarwal, Vincent Paragano
  • Patent number: 7650030
    Abstract: A method and apparatus for unsupervised learning of measures for matching objects between images from at least two non-overlapping cameras is disclosed. The method includes collecting at least two pairs of feature maps, where the at least two pairs of feature maps are derived from features of objects captured in the images. The method further includes computing, as a function of the at least two pairs of feature maps, at least one first match measure and at least one second match measure, wherein the first match measure is of a same class and the second match measure is of a different class.
    Type: Grant
    Filed: December 5, 2005
    Date of Patent: January 19, 2010
    Assignee: Sarnoff Corporation
    Inventors: Ying Shan, Rakesh Kumar, Harpreet Sawhney
  • Patent number: 7639840
    Abstract: A method and apparatus for video surveillance is disclosed. In one embodiment, a sequence of scene imagery representing a field of view is received. One or more moving objects are identified within the sequence of scene imagery and then classified in accordance with one or more extracted spatio-temporal features. This classification may then be applied to determine whether the moving object and/or its behavior fits one or more known events or behaviors that are causes for alarm.
    Type: Grant
    Filed: July 28, 2005
    Date of Patent: December 29, 2009
    Assignee: Sarnoff Corporation
    Inventors: Keith Hanna, Manoj Aggarwal, Harpreet Sawhney, Rakesh Kumar
  • Patent number: 7633520
    Abstract: A scalable architecture for providing real-time multi-camera distributed video processing and visualization. An exemplary system comprises at least one video capture and storage system for capturing and storing a plurality of input videos, at least one vision-based alarm system for detecting and reporting alarm situations or events, and at least one video rendering system (e.g., a video flashlight system) for displaying an alarm situation in a context that speeds up comprehension and response. One advantage of the present architecture is that these systems are all scalable, such that additional sensors (e.g., cameras, motion sensors, infrared sensors, chemical sensors, biological sensors, temperature sensors, and the like) can be added in large numbers without overwhelming the ability of security forces to comprehend the alarm situation.
    Type: Grant
    Filed: June 21, 2004
    Date of Patent: December 15, 2009
    Assignee: L-3 Communications Corporation
    Inventors: Supun Samarasekera, Rakesh Kumar, Keith Hanna, Harpreet Sawhney, Aydin Arpa, Manoj Aggarwal, Vincent Paragano
  • Patent number: 7623676
    Abstract: A method and/or system for tracking objects, such as humans, over a wide area (that is, over an area that is delineated by a large spatial domain and/or a long-duration temporal domain) is provided. Such tracking is facilitated by processing, in real-time, near real-time or otherwise contemporaneous with receiving, images captured by each of a plurality or network of slightly overlapping stereo sensors, such as stereo cameras. The method includes and the apparatus is adapted for obtaining a plurality of local-track segments, wherein the plurality of local-track segments correspond to an object captured in images taken by a respective plurality of stereo sensors; and combining the local-track segments to form a global track.
    Type: Grant
    Filed: December 21, 2005
    Date of Patent: November 24, 2009
    Assignee: Sarnoff Corporation
    Inventors: Tao Zhao, Manoj Aggarwal, Rakesh Kumar, Harpreet Sawhney
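
Combining per-camera local-track segments into a global track, as claimed in the abstract above, amounts to linking segments that agree in space and time; the sketch below uses a greedy gating rule with hypothetical gap and distance thresholds (the patented method is more sophisticated):

```python
def link_segments(segments, max_gap=2.0, max_dist=1.5):
    """Greedily chain local track segments into global tracks: a segment
    joins a track when it starts shortly after the track ends and close
    to the track's last observed position. Each observation is (t, x, y)."""
    segments = sorted(segments, key=lambda s: s[0][0])    # sort by start time
    tracks = []
    for seg in segments:
        t0, x0, y0 = seg[0]
        for track in tracks:
            t1, x1, y1 = track[-1]
            if (0 <= t0 - t1 <= max_gap
                    and ((x0 - x1)**2 + (y0 - y1)**2) ** 0.5 <= max_dist):
                track.extend(seg)       # continue an existing global track
                break
        else:
            tracks.append(list(seg))    # start a new global track
    return tracks

# Segments from two overlapping stereo cameras: A and B are the same person
segA = [(0, 0.0, 0.0), (1, 1.0, 0.0)]
segB = [(2, 1.8, 0.1), (3, 2.9, 0.1)]
segC = [(0, 9.0, 9.0), (1, 9.0, 8.0)]   # a different person elsewhere
tracks = link_segments([segA, segB, segC])
# two global tracks result: A chained with B, and C on its own
```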
  • Publication number: 20090153661
    Abstract: A computer implemented method for deriving an attribute entity network (AEN) from video data is disclosed, comprising the steps of: extracting at least two entities from the video data; tracking the trajectories of the at least two entities to form at least two tracks; deriving at least one association between the at least two entities by detecting at least one event involving the at least two entities, said detecting of at least one event being based on detecting at least one spatio-temporal motion correlation between the at least two entities; and constructing the AEN by creating a graph wherein the at least two entities form at least two nodes and the at least one association forms a link between the at least two nodes.
    Type: Application
    Filed: November 14, 2008
    Publication date: June 18, 2009
    Inventors: Hui Cheng, Jiangjian Xiao, Harpreet Sawhney
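
The graph-construction step in the abstract above can be sketched as follows: entities become nodes, and each detected event between two entities adds or reinforces a weighted link (entity names and events below are invented for illustration):

```python
from collections import defaultdict

def build_aen(events):
    """Build an attribute entity network: entities become nodes and each
    detected event between two entities adds (or reinforces) a link.
    Links are stored as an adjacency map of integer weights."""
    graph = defaultdict(lambda: defaultdict(int))
    for a, b, _event in events:
        graph[a][b] += 1
        graph[b][a] += 1
    return graph

# Hypothetical events detected from spatio-temporal motion correlation
events = [("person1", "car3", "enters"),
          ("person1", "car3", "exits"),
          ("person2", "person1", "meets")]
aen = build_aen(events)
# person1 is linked to car3 with weight 2 and to person2 with weight 1
```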
  • Patent number: 7519197
    Abstract: A system and method for identifying objects, particularly vehicles, between two non-overlapping cameras. More specifically, the method and system determines whether a vehicle depicted in an image captured by a first camera is the same vehicle or a different vehicle than a vehicle depicted in an image captured by a second camera. This intra-camera analysis determines whether the vehicle viewed by the first camera is the same as the vehicle viewed by the second camera, without directly matching the two vehicle images, thus eliminating the problems and inaccuracies caused by disparate environmental conditions acting on the two cameras, such as dramatic appearance and aspect changes.
    Type: Grant
    Filed: March 30, 2006
    Date of Patent: April 14, 2009
    Assignee: Sarnoff Corporation
    Inventors: Ying Shan, Harpreet Sawhney, Rakesh Kumar
  • Publication number: 20080291279
    Abstract: In an immersive surveillance system, videos and other data from a large number of cameras and other sensors are managed and displayed by a video processing system overlaying the data within a rendered 2D or 3D model of a scene. The system has a viewpoint selector configured to allow a user to selectively identify a viewpoint from which to view the site. A video control system receives data identifying the viewpoint and, based on the viewpoint, automatically selects a subset of the plurality of cameras that is generating video relevant to the view from the viewpoint, and causes video from the subset of cameras to be transmitted to the video processing system. As the viewpoint changes, the cameras communicating with the video processor are changed, handing off to cameras generating video relevant to the new position. Playback in the immersive environment is provided by synchronization of time-stamped recordings of video.
    Type: Application
    Filed: June 1, 2005
    Publication date: November 27, 2008
    Applicant: L-3 Communications Corporation
    Inventors: Supun Samarasekera, Keith Hanna, Harpreet Sawhney, Rakesh Kumar, Aydin Arpa, Vincent Paragano, Thomas Germano, Manoj Aggarwal
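
The viewpoint-driven camera selection and handoff in the abstract above can be sketched as a relevance filter over camera positions (a deliberately minimal stand-in: real relevance would consider view frustums and occlusion, and all names and coordinates here are hypothetical):

```python
def select_cameras(viewpoint, cameras, radius=10.0):
    """Handoff sketch: pick the subset of cameras within `radius` of the
    current viewpoint, so only video relevant to the rendered view is
    streamed to the video processor. Cameras map name -> (x, y)."""
    vx, vy = viewpoint
    return [name for name, (cx, cy) in cameras.items()
            if ((cx - vx)**2 + (cy - vy)**2) ** 0.5 <= radius]

cameras = {"gate": (0.0, 0.0), "lobby": (5.0, 5.0), "roof": (40.0, 40.0)}
nearby = select_cameras((2.0, 2.0), cameras)
# only "gate" and "lobby" stream video; "roof" is handed off until the
# viewpoint moves toward it
```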
  • Publication number: 20080167814
    Abstract: A system and method for efficiently locating in 3D an object of interest in a target scene using video information captured by a plurality of cameras. The system and method provide for multi-camera visual odometry wherein pose estimates are generated for each camera by all of the cameras in the multi-camera configuration. Furthermore, the system and method can locate and identify salient landmarks in the target scene using any of the cameras in the multi-camera configuration and compare the identified landmark against a database of previously identified landmarks. In addition, the system and method provide for the integration of video-based pose estimations with position measurement data captured by one or more secondary measurement sensors, such as, for example, Inertial Measurement Units (IMUs) and Global Positioning System (GPS) units.
    Type: Application
    Filed: December 3, 2007
    Publication date: July 10, 2008
    Inventors: Supun Samarasekera, Rakesh Kumar, Taragay Oskiper, Zhiwei Zhu, Oleg Naroditsky, Harpreet Sawhney
  • Patent number: 7385626
    Abstract: A method and system for detecting moving objects and controlling a surveillance system includes a processing module adapted to receive image information from at least one imaging sensor. The system performs motion detection analysis upon captured images and controls the camera in a specific manner upon detection of a moving object. The image processing uses the camera's physical orientation relative to a surveillance area to facilitate mapping images captured by the camera to a reference map of the surveillance area. Using the camera orientation, a moving object's position (e.g., latitude, longitude and altitude) within a scene can be derived.
    Type: Grant
    Filed: August 12, 2003
    Date of Patent: June 10, 2008
    Assignee: Sarnoff Corporation
    Inventors: Manoj Aggarwal, Harpreet Sawhney, Supun Samarasekera, Rakesh Kumar, Peter Burt, Jayan Eledath, Keith J. Hanna
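
Deriving an object's scene position from the camera's orientation, as the abstract above describes, reduces in the simplest case to back-projecting a pixel ray onto the ground plane. The sketch below assumes a nadir-pointing (straight-down) pinhole camera with no rotation; all parameter values are hypothetical:

```python
def pixel_to_ground(u, v, cam_pos, f, cx, cy):
    """Back-project pixel (u, v) from a downward-looking pinhole camera
    at cam_pos = (X, Y, Z) onto the flat ground plane z = 0, returning
    the object's (x, y) in scene coordinates. f is the focal length in
    pixels; (cx, cy) is the principal point."""
    X0, Y0, Z0 = cam_pos
    dx, dy, dz = (u - cx) / f, (v - cy) / f, -1.0   # ray direction
    t = Z0                       # scale at which the ray reaches z = 0
    return X0 + t * dx, Y0 + t * dy

# Camera 100 m above (50, 50); a detection 64 px right of the principal point
x, y = pixel_to_ground(564, 500, (50.0, 50.0, 100.0), f=800.0, cx=500.0, cy=500.0)
# the detected object sits 8 m east of the camera's ground footprint
```

A real deployment would compose this with the camera's pan/tilt rotation and geodetic registration to produce the latitude/longitude/altitude the abstract mentions.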
  • Publication number: 20080101652
    Abstract: A method and/or system for tracking objects, such as humans, over a wide area (that is, over an area that is delineated by a large spatial domain and/or a long-duration temporal domain) is provided. Such tracking is facilitated by processing, in real-time, near real-time or otherwise contemporaneous with receiving, images captured by each of a plurality or network of slightly overlapping stereo sensors, such as stereo cameras. The method includes and the apparatus is adapted for obtaining a plurality of local-track segments, wherein the plurality of local-track segments correspond to an object captured in images taken by a respective plurality of stereo sensors; and combining the local-track segments to form a global track.
    Type: Application
    Filed: December 21, 2005
    Publication date: May 1, 2008
    Inventors: Tao Zhao, Manoj Aggarwal, Rakesh Kumar, Harpreet Sawhney
  • Publication number: 20080089579
    Abstract: The present invention provides a computer-implemented process for detecting multi-view, multi-pose objects. The process comprises training a classifier for each intra-class exemplar, training a strong classifier, and combining the individual exemplar-based classifiers with a single objective function. This function is optimized using two nested AdaBoost loops. The first, outer loop selects discriminative candidate exemplars. The second, inner loop selects the discriminative candidate features on the selected exemplars to compute all weak classifiers for a specific position, such as a view/pose. All the computed weak classifiers are then automatically combined into a final (strong) classifier, which serves as the detector for the object.
    Type: Application
    Filed: June 13, 2007
    Publication date: April 17, 2008
    Inventors: Feng Han, Ying Shan, Harpreet Sawhney, Rakesh Kumar
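
The inner AdaBoost loop named in the abstract above can be sketched in one dimension with threshold stumps as the weak classifiers (a textbook AdaBoost sketch, not the patent's nested two-loop formulation; the data and stumps are invented):

```python
import math

def adaboost(samples, labels, stumps, rounds=5):
    """AdaBoost: each round picks the weak classifier with the lowest
    weighted error, reweights the samples to emphasise its mistakes, and
    adds the weak classifier (scaled by alpha) to the strong classifier."""
    w = [1.0 / len(samples)] * len(samples)
    strong = []                                   # (alpha, weak) pairs
    for _ in range(rounds):
        errs = [sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
                for h in stumps]
        err = min(errs)
        h = stumps[errs.index(err)]
        if err == 0:                              # already perfect
            strong.append((1.0, h))
            break
        alpha = 0.5 * math.log((1 - err) / err)
        strong.append((alpha, h))
        w = [wi * math.exp(-alpha if h(x) == y else alpha)
             for wi, x, y in zip(w, samples, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in strong) > 0 else -1

# Three weak stumps, none perfect alone, that boost to a perfect classifier
samples, labels = [1, 2, 3], [1, -1, 1]
stumps = [lambda x: 1 if x < 1.5 else -1,
          lambda x: 1 if x > 2.5 else -1,
          lambda x: 1]
clf = adaboost(samples, labels, stumps)
# clf labels all three training points correctly even though no single
# stump does
```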
  • Publication number: 20080025568
    Abstract: The present invention provides an improved system and method for object detection with a histogram of oriented gradients (HOG) based support vector machine (SVM). Specifically, the system provides a computational framework to stably detect still (non-moving) objects over a wide range of viewpoints. The framework provides a sensor input of images that are received by a "focus of attention" mechanism to identify the regions in the image that potentially contain the target objects. These regions are further processed to generate hypothesized objects, specifically selected regions containing the target object hypotheses along with their positions. Thereafter, these selected regions are verified by an extended HOG-based SVM classifier to generate the detected objects.
    Type: Application
    Filed: July 19, 2007
    Publication date: January 31, 2008
    Inventors: Feng Han, Ying Shan, Ryan Cekander, Harpreet Sawhney, Rakesh Kumar
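
The HOG feature named in the abstract above bins image gradients by orientation, with each pixel voting in proportion to its gradient magnitude; below is a minimal single-cell sketch (real HOG descriptors add cell grids, block normalization, and interpolation):

```python
import math

def hog_cell(patch):
    """Histogram of oriented gradients for one cell of a grayscale patch:
    central-difference gradients are binned by unsigned orientation
    (8 bins over 0-180 degrees), each vote weighted by gradient magnitude."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * 8
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / 22.5), 7)] += mag
    return hist

# A patch with a vertical intensity edge, i.e. a purely horizontal gradient
patch = [[0, 0, 10, 10]] * 4
hist = hog_cell(patch)
# all of the gradient energy falls in bin 0 (orientation ~0 degrees)
```

Concatenating such histograms over a grid of cells yields the feature vector fed to the SVM verifier.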
  • Publication number: 20070261061
    Abstract: A method and system are provided that enable the processing of security event data. In a first version, instructions for processing security event data are encoded in separate software modules. The software is organized into discrete modules and executed by an information technology system. The software, as executed, identifies the computational engines of the information technology system available for processing the security event data and assigns modules to specific computational engines. A plurality of events stored in a buffer are processed sequentially through two or more modules. The results of each processing of an event by a module are recorded in an extended event structure and made accessible to a successive module. The location of the buffer storing an event is available for overwriting after the event has been fully processed.
    Type: Application
    Filed: November 26, 2005
    Publication date: November 8, 2007
    Inventors: Stuart Staniford, Tanuj Mohan, Harpreet Sawhney, Prashant Bhagdikar
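
The module chain in the abstract above, where each module records its result in an extended event structure that successive modules can read, can be sketched as follows (the module names and scoring logic are invented for illustration):

```python
def run_pipeline(events, modules):
    """Process each buffered event sequentially through a chain of
    (name, module) pairs; every module writes its result into the event's
    extended structure so later modules can read it."""
    processed = []
    for event in events:
        extended = dict(event)            # the extended event structure
        for name, module in modules:
            extended[name] = module(extended)
        processed.append(extended)        # the buffer slot is now reusable
    return processed

# Two toy modules: one scores severity, the next flags based on that score
modules = [("severity", lambda e: 10 if e["type"] == "intrusion" else 1),
           ("alert",    lambda e: e["severity"] >= 10)]
out = run_pipeline([{"type": "intrusion"}, {"type": "heartbeat"}], modules)
# the intrusion event is flagged (alert True); the heartbeat is not
```

In the patented design the modules would additionally be assigned to specific computational engines for parallelism; this sketch runs them in one thread.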
  • Publication number: 20070247525
    Abstract: A method and apparatus for providing a “Video Flashlight” system for managing large numbers of videos by overlaying them within a 2D or 3D model of a scene.
    Type: Application
    Filed: June 1, 2005
    Publication date: October 25, 2007
    Applicant: L-3 Communications Corporation
    Inventors: Supun Samarasekera, Vincent Paragano, Harpreet Sawhney, Manoj Aggarwal, Keith Hanna, Rakesh Kumar, Aydin Arpa, Philip Miller
  • Patent number: 7259778
    Abstract: Method and apparatus for dynamically placing sensors in a 3D model is provided. Specifically, in one embodiment, the method selects a 3D model and a sensor for placement into the 3D model. The method renders the sensor and the 3D model in accordance with sensor parameters associated with the sensor and parameters desired by a user. In addition, the method determines whether an occlusion to the sensor is present.
    Type: Grant
    Filed: February 13, 2004
    Date of Patent: August 21, 2007
    Assignee: L-3 Communications Corporation
    Inventors: Aydin Arpa, Keith J. Hanna, Supun Samarasekera, Rakesh Kumar, Harpreet Sawhney
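
The occlusion check named in the abstract above can be sketched as a line-of-sight test between a sensor and a target against scene geometry; the version below uses 2-D axis-aligned boxes and dense segment sampling purely for illustration (a real renderer would ray-cast against the 3D model):

```python
def occluded(sensor, target, walls):
    """Return True if the straight line of sight from sensor to target is
    blocked by any wall, each modelled as an axis-aligned 2-D box
    (x_min, y_min, x_max, y_max). A much-simplified visibility test that
    samples the segment densely and tests each sample against the boxes."""
    (x1, y1), (x2, y2) = sensor, target
    for xmin, ymin, xmax, ymax in walls:
        for i in range(101):
            t = i / 100.0
            x, y = x1 + t * (x2 - x1), y1 + t * (y2 - y1)
            if xmin <= x <= xmax and ymin <= y <= ymax:
                return True
    return False

walls = [(4.0, 0.0, 5.0, 10.0)]                    # a wall across x in [4, 5]
blocked = occluded((0.0, 5.0), (9.0, 5.0), walls)  # wall blocks this ray
clear = occluded((0.0, 5.0), (2.0, 5.0), walls)    # target is in front of it
```

Running such a test over a grid of targets yields the sensor's effective coverage, which is what dynamic sensor placement in the 3D model optimizes.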
  • Publication number: 20070086621
    Abstract: Method for tracking an object recorded within a selected frame of a sequence of frames of video data, using a plurality of layers, where at least one object layer of the plurality of layers represents the object includes initializing layer ownership probabilities for pixels of the selected frame using a non-parametric motion model, estimating a set of motion parameters of the plurality of layers for the selected frame using a parametric maximization algorithm and tracking the object. The non-parametric motion model is optical flow and includes warping the mixing probabilities, the appearances of the plurality of layers, and the observed pixel data from the pixels of the preceding frame to the pixels of the selected frame to initialize the layer ownership probabilities for the pixels of the selected frame.
    Type: Application
    Filed: October 13, 2005
    Publication date: April 19, 2007
    Inventors: Manoj Aggarwal, Harpreet Sawhney, Rakesh Kumar, Supun Samarasekera