Patents by Inventor Wesley Kenneth Cobb

Wesley Kenneth Cobb has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110052000
    Abstract: Techniques are disclosed for determining anomalous trajectories of objects tracked over a sequence of video frames. In one embodiment, a symbol trajectory may be derived from observing an object moving through a scene. The symbol trajectory represents semantic concepts extracted from the trajectory of the object. Whether the symbol trajectory is anomalous may be determined, based on previously observed symbol trajectories. A user may be alerted upon determining that the symbol trajectory is anomalous.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Gang Xu
  • Publication number: 20110052002
    Abstract: Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications.
    Type: Application
    Filed: September 1, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Tao Yang
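The abstract above describes using a motion flow field to filter spurious foreground objects before tracking. The patent discloses no code; the following is only a minimal illustrative sketch of the general idea — validating candidate boxes by mean flow magnitude — with every name and parameter invented for illustration.

```python
import numpy as np

def filter_spurious(boxes, flow_mag, min_mean_flow=0.5):
    """Keep only candidate foreground boxes (row, col, height, width)
    whose mean motion-flow magnitude is high enough; low-motion boxes
    are treated as spurious background-model artifacts."""
    kept = []
    for (r, c, h, w) in boxes:
        # average flow magnitude over the pixels inside the box
        if flow_mag[r:r + h, c:c + w].mean() >= min_mean_flow:
            kept.append((r, c, h, w))
    return kept
```

For example, given a flow-magnitude map that is nonzero only in the top-left quadrant, a box there survives while a static box elsewhere is filtered out.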
  • Publication number: 20110052068
    Abstract: Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering object type clusters, classifying objects, and identifying anomalous object types.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, David Friedlander, Rajkiran Kumar Gottumukkal, Ming-Jung Seow, Gang Xu
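The SOM-ART abstract above hinges on unsupervised cluster discovery: a new input either resonates with an existing cluster or founds a new one. The patented SOM-ART network is not disclosed as code; the sketch below shows only the ART-style half of that idea (the self-organizing-map layer is omitted), with the vigilance test, similarity measure, and all names being illustrative assumptions.

```python
import numpy as np

class SimpleART:
    """Minimal ART-style clusterer: assign a vector to the best-matching
    existing cluster if it passes a vigilance test, else create a new
    cluster — no training phase and no predefined object types."""

    def __init__(self, vigilance=0.8, learning_rate=0.5):
        self.vigilance = vigilance
        self.learning_rate = learning_rate
        self.prototypes = []  # one prototype vector per discovered cluster

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        best, best_sim = None, -1.0
        for i, p in enumerate(self.prototypes):
            # cosine similarity as the match score (an arbitrary choice here)
            sim = x @ p / (np.linalg.norm(x) * np.linalg.norm(p) + 1e-9)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= self.vigilance:
            # resonance: nudge the winning prototype toward the input
            p = self.prototypes[best]
            self.prototypes[best] = p + self.learning_rate * (x - p)
            return best
        # no cluster matches well enough: discover a new one, unsupervised
        self.prototypes.append(x.copy())
        return len(self.prototypes) - 1
```

Feeding in micro-feature vectors near two distinct directions yields two clusters without any labeled training data, which is the property the abstract emphasizes.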
  • Publication number: 20110050897
    Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Publication number: 20110052003
    Abstract: Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications.
    Type: Application
    Filed: September 1, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Tao Yang
  • Publication number: 20110050896
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20110051992
    Abstract: Techniques are described for analyzing a stream of video frames to identify temporal anomalies. A video surveillance system may be configured to identify when agents depicted in the video stream engage in anomalous behavior, relative to the time-of-day (TOD) or day-of-week (DOW) at which the behavior occurs. A machine-learning engine may establish the normalcy of a scene by observing the scene over a specified period of time. Once the observations of the scene have matured, the actions of agents in the scene may be evaluated and classified as normal or abnormal temporal behavior, relative to the past observations.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow
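The time-of-day/day-of-week abstract above turns on comparing current behavior against accumulated temporal observations. As a rough illustration only — the patent does not disclose this representation, and the histogram model, threshold, and names below are all invented — one could sketch the idea as per-hour event counts:

```python
import numpy as np

class TemporalNormalcyModel:
    """Toy time-of-day model: count how often each event type is seen in
    each hour-of-day bin, then flag events whose observed frequency at
    that hour falls below a threshold."""

    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.counts = {}  # event type -> 24-bin hour-of-day histogram

    def observe(self, event, hour):
        hist = self.counts.setdefault(event, np.zeros(24))
        hist[hour] += 1

    def is_anomalous(self, event, hour):
        hist = self.counts.get(event)
        if hist is None or hist.sum() == 0:
            return True  # a never-observed event type is anomalous
        # fraction of this event's past occurrences seen at this hour
        return hist[hour] / hist.sum() < self.threshold
```

After the model "matures" on many 9 a.m. observations of an event, the same event at 3 a.m. scores as abnormal temporal behavior — the kind of TOD-relative judgment the abstract describes.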
  • Publication number: 20110052067
    Abstract: Techniques are disclosed for discovering object type clusters using pixel-level micro-features extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to classify objects depicted in the image data based on the pixel-level micro-features. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters and classifying objects.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Inventors: Wesley Kenneth Cobb, David Friedlander, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Publication number: 20110043536
    Abstract: Techniques are disclosed for visually conveying a sequence storing an ordered string of symbols generated from kinematic data derived from analyzing an input stream of video frames depicting one or more foreground objects. The sequence may represent information learned by a video surveillance system. A request may be received to view the sequence or a segment partitioned from the sequence. A visual representation of the segment may be generated and superimposed over a background image associated with the scene. A user interface may be configured to display the visual representation of the sequence or segment and to allow a user to view and/or modify properties associated with the sequence or segment.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
  • Publication number: 20110043625
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
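The scene-preset abstract above relies on a quadtree decomposition of the background image into window portions. The phase-spectrum projection step is not reproduced here; the sketch below illustrates only the decomposition itself, under the invented assumption that windows are split until their pixel variance is low or a depth limit is reached.

```python
import numpy as np

def quadtree_windows(image, max_depth=3, var_threshold=100.0):
    """Recursively split an image into quadrants until each window is
    either uniform enough (low variance) or the depth limit is hit.
    Returns (row, col, height, width) tuples — an illustrative stand-in
    for the window portions used in scene-preset matching."""
    windows = []

    def split(r, c, h, w, depth):
        block = image[r:r + h, c:c + w]
        if depth >= max_depth or block.var() <= var_threshold or min(h, w) < 2:
            windows.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2
        # recurse into the four quadrants
        split(r, c, h2, w2, depth + 1)
        split(r, c + w2, h2, w - w2, depth + 1)
        split(r + h2, c, h - h2, w2, depth + 1)
        split(r + h2, c + w2, h - h2, w - w2, depth + 1)

    split(0, 0, image.shape[0], image.shape[1], 0)
    return windows
```

A uniform image stays as one window, while a busy image decomposes into many — which is what makes the per-window comparison robust to localized content or lighting changes.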
  • Publication number: 20110043626
    Abstract: A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes the entropies for the nodes in the n-gram trie and determines sliding-window length and vote-count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, David Samuel Friedlander, Kishor Adinath Saitwal
  • Publication number: 20110044492
    Abstract: A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes the entropies for the nodes in the n-gram trie and determines sliding-window length and vote-count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Kishor Adinath Saitwal, Gang Xu
  • Publication number: 20110044537
    Abstract: Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Tao Yang
  • Publication number: 20110044498
    Abstract: Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Ming-Jung Seow, Gang Xu
  • Publication number: 20110044533
    Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20110044536
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon William Risinger, Jeff Graham
  • Publication number: 20110044499
    Abstract: A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an n-gram trie for those label sequences. The sequence layer computes the entropies for the nodes in the n-gram trie and determines sliding-window length and vote-count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene as well as issue alerts for inter-sequence and intra-sequence anomalies.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, David Samuel Friedlander, Kishor Adinath Saitwal
  • Publication number: 20110043689
    Abstract: Techniques are disclosed for detecting a field-of-view change for a video feed. These techniques differentiate between a new or changed scene and a temporary variation in the scene to accurately detect field-of-view changes for the video feed. A field-of-view change is detected when the position of a camera providing the video feed changes, the video feed is switched to a different camera, the video feed is disconnected, or the camera providing the video feed is obscured. A field-of-view change is not falsely detected when the scene changes due to a sudden variation in illumination, obstruction of a portion of the camera providing the video feed, blurred images due to an out-of-focus camera, or a transition between bright and dark light when the video feed transitions between color and near-infrared capture modes.
    Type: Application
    Filed: August 18, 2009
    Publication date: February 24, 2011
    Inventors: Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Tao Yang, Lon William Risinger
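The field-of-view abstract above distinguishes genuine scene changes from transient variations. The disclosed techniques are far more discriminating; as a bare illustration of the detection half only — using an invented histogram-overlap measure and threshold — a naive detector might look like:

```python
import numpy as np

def fov_changed(prev_frame, frame, overlap_threshold=0.5):
    """Toy field-of-view change test: compare grey-level histograms of
    consecutive frames; a very low overlap suggests the camera moved or
    the feed switched. (Illustrative only — the disclosed techniques
    also rule out false positives such as illumination shifts.)"""
    h1, _ = np.histogram(prev_frame, bins=32, range=(0, 256), density=True)
    h2, _ = np.histogram(frame, bins=32, range=(0, 256), density=True)
    bin_width = 256 / 32
    # histogram intersection: 1.0 means identical distributions
    overlap = np.minimum(h1, h2).sum() * bin_width
    return overlap < overlap_threshold
```

An unchanged frame yields full overlap (no change reported), while a frame with a completely different grey-level distribution falls below the threshold.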
  • Publication number: 20100260376
    Abstract: Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined to be unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event observed by that ART network.
    Type: Application
    Filed: April 14, 2009
    Publication date: October 14, 2010
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow
  • Publication number: 20100208986
    Abstract: Techniques are disclosed for a computer vision engine to update both a background model and thresholds used to classify pixels as depicting scene foreground or background in response to detecting that a sudden illumination change has occurred in a sequence of video frames. The threshold values may be used to specify how much a given pixel may differ from the corresponding value in the background model before being classified as depicting foreground. When a sudden illumination change is detected, the values for pixels affected by the sudden illumination change may be used to update the value in the background image to reflect the value for that pixel following the sudden illumination change, as well as update the threshold for classifying that pixel as depicting foreground/background in subsequent frames of video.
    Type: Application
    Filed: February 18, 2009
    Publication date: August 19, 2010
    Inventors: Wesley Kenneth Cobb, Kishor Adinath Saitwal, Bobby Ernest Blythe, Tao Yang
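The final abstract above couples a per-pixel background model with per-pixel classification thresholds that are both refreshed after a sudden illumination change. The patented update rules are not given; the sketch below is a minimal invented version in which affected pixels snap to the new value and widen their threshold, while the rest blend slowly.

```python
import numpy as np

def update_background(background, thresholds, frame, illum_mask, alpha=0.1):
    """Per-pixel update after a sudden-illumination-change detector has
    produced `illum_mask`: affected pixels adopt the post-change value
    and keep a threshold wide enough for the new lighting; unaffected
    pixels blend slowly toward the current frame. (Illustrative rules,
    not the patented scheme.)"""
    background = background.copy()
    thresholds = thresholds.copy()
    delta = np.abs(frame - background)
    # pixels affected by the illumination change: snap to the new value
    background[illum_mask] = frame[illum_mask]
    thresholds[illum_mask] = np.maximum(thresholds[illum_mask],
                                        0.5 * delta[illum_mask])
    # everything else: standard slow exponential update
    background[~illum_mask] = ((1 - alpha) * background[~illum_mask]
                               + alpha * frame[~illum_mask])
    return background, thresholds

def classify_foreground(background, thresholds, frame):
    """A pixel depicts foreground when it differs from the background
    model by more than its per-pixel threshold."""
    return np.abs(frame - background) > thresholds
```

After the update, the frame that triggered the illumination change no longer classifies as foreground, which is the behavior the abstract targets: lighting shifts should not flood the tracker with false foreground pixels.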