Patents by Inventor Wesley Kenneth Cobb

Wesley Kenneth Cobb has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150347856
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel.
    Type: Application
    Filed: August 11, 2015
    Publication date: December 3, 2015
    Inventor: Wesley Kenneth COBB
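
The abstract above describes segmenting infrared image data into foreground patches of surface oil. Below is a minimal sketch of that segmentation step, using a standard OpenCV background subtractor as a stand-in for the patented computer vision engine; the function name, thresholds, and choice of subtractor are illustrative assumptions, not details from the patent.

```python
# Minimal sketch: detect candidate surface-oil patches (blobs) in an infrared
# frame using a learned background model. Parameters are illustrative.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def detect_oil_patches(ir_frame: np.ndarray, min_area: int = 50):
    """Return bounding boxes of foreground blobs in a grayscale IR frame."""
    fg_mask = bg_model.apply(ir_frame)        # pixels that differ from the sea background
    fg_mask = cv2.medianBlur(fg_mask, 5)      # suppress speckle noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```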
  • Patent number: 9111148
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. In one embodiment, e.g., a machine learning engine may include statistical engines for generating topological feature maps based on observations and a detection module for detecting feature anomalies. The statistical engines may include adaptive resonance theory (ART) networks which cluster observed position-feature characteristics. The statistical engines may further reinforce, decay, merge, and remove clusters. The detection module may calculate a rareness value relative to recurring observations and data in the ART networks. Further, the sensitivity of detection may be adjusted according to the relative importance of recently observed anomalies.
    Type: Grant
    Filed: June 27, 2013
    Date of Patent: August 18, 2015
    Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.
    Inventors: Ming-Jung Seow, Wesley Kenneth Cobb
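
The abstract above describes ART networks that cluster observed position-feature characteristics and a detection module that scores rareness against those clusters. Below is a simplified nearest-centroid stand-in for that idea, assuming a single vigilance-style distance test and a frequency-based rareness value; it is a sketch, not the patented ART implementation.

```python
# Simplified stand-in for ART-style clustering with a rareness score derived
# from how often each cluster has recurred. Not the patented implementation.
import numpy as np

class SimpleClusterModel:
    def __init__(self, vigilance: float = 0.5):
        self.vigilance = vigilance     # max distance to join an existing cluster
        self.centroids = []            # cluster centroids
        self.counts = []               # observations absorbed by each cluster

    def observe(self, x: np.ndarray) -> float:
        """Update the model with x and return a rareness value in [0, 1]."""
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.vigilance:
                # reinforce: move the winning centroid toward the observation
                self.counts[i] += 1
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return 1.0 - self.counts[i] / sum(self.counts)
        # no cluster resonates: start a new one; a brand-new pattern is maximally rare
        self.centroids.append(x.astype(float).copy())
        self.counts.append(1)
        return 1.0
```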
  • Patent number: 9111353
    Abstract: Techniques are disclosed for removing false-positive foreground pixels resulting from environmental illumination effects. The techniques include receiving a foreground image and a background model, and determining an approximated reflectance component of the foreground image based on the foreground image itself and a background model image which is used as a proxy for an illuminance component of the foreground image. Pixels of the foreground image having approximated reflectance values less than a threshold value may be classified as false-positive foreground pixels and removed from the foreground image. Further, the threshold value used may be adjusted based on various factors to account for, e.g., different illumination conditions indoors and outdoors.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: August 18, 2015
    Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.
    Inventors: Ming-Jung Seow, Tao Yang, Wesley Kenneth Cobb
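
The abstract above treats the background-model image as a proxy for the illuminance component and thresholds an approximated reflectance to discard false-positive foreground pixels. Below is a minimal sketch of that test, assuming a log-ratio reflectance approximation; the threshold value is illustrative.

```python
# Sketch: approximate per-pixel reflectance as the log ratio of the foreground
# image to the background-model image (illuminance proxy), then clear mask
# pixels whose reflectance falls below a threshold.
import numpy as np

def remove_illumination_artifacts(fg: np.ndarray, bg: np.ndarray,
                                  fg_mask: np.ndarray, threshold: float = 0.15):
    """Return a copy of fg_mask with likely illumination-only pixels cleared."""
    eps = 1e-6
    reflectance = np.abs(np.log(fg.astype(float) + eps) - np.log(bg.astype(float) + eps))
    cleaned = fg_mask.copy()
    cleaned[(fg_mask > 0) & (reflectance < threshold)] = 0   # false-positive foreground
    return cleaned
```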
  • Patent number: 9104918
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel.
    Type: Grant
    Filed: August 20, 2013
    Date of Patent: August 11, 2015
    Assignee: BEHAVIORAL RECOGNITION SYSTEMS, INC.
    Inventor: Wesley Kenneth Cobb
  • Publication number: 20150110388
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: December 29, 2014
    Publication date: April 23, 2015
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
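
The abstract above builds a vector representation from a primitive event symbol stream and a phase space symbol stream. One plausible reading, sketched below, is to concatenate normalized symbol histograms from the two streams; the symbol alphabets and the histogram encoding are assumptions for illustration only.

```python
# Sketch: fuse two per-object symbol streams into one feature vector of
# normalized symbol frequencies. Alphabets are hypothetical.
from collections import Counter

PRIMITIVE_SYMBOLS = ["appear", "move", "stop", "turn", "disappear"]
PHASE_SPACE_SYMBOLS = ["slow", "fast", "accelerating", "decelerating"]

def to_vector(primitive_stream, phase_stream):
    """Concatenate normalized symbol histograms from the two streams."""
    def histogram(stream, alphabet):
        counts = Counter(stream)
        total = max(len(stream), 1)
        return [counts[s] / total for s in alphabet]
    return histogram(primitive_stream, PRIMITIVE_SYMBOLS) + \
           histogram(phase_stream, PHASE_SPACE_SYMBOLS)

v = to_vector(["appear", "move", "move", "stop"], ["slow", "accelerating", "slow"])
```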
  • Publication number: 20150078656
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Application
    Filed: July 22, 2014
    Publication date: March 19, 2015
    Inventors: Wesley Kenneth COBB, Bobby Ernest BLYTHE, Rajkiran Kumar GOTTUMUKKAL, Ming-Jung SEOW
  • Publication number: 20150047040
    Abstract: Embodiments presented herein describe a method for processing streams of data of one or more networked computer systems. According to one embodiment of the present disclosure, an ordered stream of normalized vectors corresponding to information security data obtained from one or more sensors monitoring a computer network is received. A neuro-linguistic model of the information security data is generated by clustering the ordered stream of vectors and assigning a letter to each cluster, outputting an ordered sequence of letters based on a mapping of the ordered stream of normalized vectors to the clusters, building a dictionary of words from the ordered output of letters, outputting an ordered stream of words based on the ordered output of letters, and generating a plurality of phrases based on the ordered output of words.
    Type: Application
    Filed: August 11, 2014
    Publication date: February 12, 2015
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Curtis Edward COLE, JR., Cody Shay FALCON, Benjamin A. KONOSKY, Charles Richard MORGAN, Aaron POFFENBERGER, Thong Toan NGUYEN
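
The abstract above describes a neuro-linguistic pipeline: cluster normalized vectors, assign a letter per cluster, build a dictionary of words from the letter stream, and generate phrases from the word stream. Below is a compact sketch of that letters-to-words-to-phrases progression; the n-gram lengths, frequency cutoffs, and greedy tokenizer are illustrative assumptions.

```python
# Sketch: cluster ids -> letters -> dictionary of frequent letter n-grams
# ("words") -> ordered word stream -> frequent word n-grams ("phrases").
from collections import Counter

def letters_from_clusters(cluster_ids):
    """Assign one letter per cluster id, producing an ordered letter sequence."""
    return [chr(ord("A") + cid % 26) for cid in cluster_ids]

def frequent_ngrams(sequence, n, min_count):
    """Return the n-grams of `sequence` occurring at least `min_count` times."""
    grams = Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    return [g for g, c in grams.items() if c >= min_count]

def tokenize(letters, dictionary, n=2):
    """Greedy pass that emits dictionary words found in the ordered letter stream."""
    words, i = [], 0
    while i + n <= len(letters):
        gram = tuple(letters[i:i + n])
        if gram in dictionary:
            words.append(gram)
            i += n
        else:
            i += 1
    return words

cluster_ids = [0, 1, 0, 1, 2, 0, 1, 0, 1, 2]                  # output of the clustering stage
letters = letters_from_clusters(cluster_ids)                   # ordered sequence of letters
dictionary = set(frequent_ngrams(letters, n=2, min_count=2))   # dictionary of letter "words"
word_stream = tokenize(letters, dictionary)                    # ordered stream of words
phrases = frequent_ngrams(word_stream, n=2, min_count=1)       # combinations of words
```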
  • Publication number: 20150046155
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency by which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: August 11, 2014
    Publication date: February 12, 2015
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
  • Publication number: 20150003671
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include evaluating sequence pairs representing segments of object trajectories. Assuming the objects interact, each of the sequences of the sequence pair may be mapped to a sequence cluster of an adaptive resonance theory (ART) network. A rareness value for the pair of sequence clusters may be determined based on learned joint probabilities of sequence cluster pairs. A statistical anomaly model, which may be specific to an interaction type or general to a plurality of interaction types, is used to determine an anomaly temperature, and alerts are issued based at least on the anomaly temperature. In addition, the ART network and the statistical anomaly model are updated based on the current interaction.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Kishor Adinath SAITWAL, Dennis G. URECH, Wesley Kenneth COBB
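
The abstract above scores interacting trajectory segments by mapping each to a sequence cluster and looking up learned joint probabilities of cluster pairs. Below is a minimal sketch of that bookkeeping, where the anomaly temperature is simply equated with the rareness value; the patented statistical anomaly model is more elaborate than this.

```python
# Sketch: joint statistics over pairs of trajectory sequence clusters, scored
# before the model is updated with the current interaction.
from collections import Counter

class InteractionModel:
    def __init__(self):
        self.pair_counts = Counter()
        self.total = 0

    def observe(self, cluster_a: int, cluster_b: int) -> float:
        """Score the current interaction, then update the model with it."""
        key = (min(cluster_a, cluster_b), max(cluster_a, cluster_b))
        joint_p = self.pair_counts[key] / self.total if self.total else 0.0
        rareness = 1.0 - joint_p           # rare pairs have low learned joint probability
        self.pair_counts[key] += 1         # update with the current interaction
        self.total += 1
        return rareness                    # anomaly temperature, equated with rareness here
```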
  • Patent number: 8923609
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: December 30, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 8797405
    Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: August 5, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Patent number: 8786702
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: July 22, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20140132786
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may provide image stabilization of a video stream obtained from a camera. An image stabilization module in the behavioral recognition system obtains a reference image from the video stream. The image stabilization module identifies alignment regions within the reference image based on the regions of the image that are dense with features. Upon determining that the tracked features of a current image are out of alignment with the reference image, the image stabilization module uses the most feature-dense alignment region to estimate an affine transformation matrix to apply to the entire current image, warping it into proper alignment.
    Type: Application
    Filed: November 11, 2013
    Publication date: May 15, 2014
    Applicant: Behavioral Recognition Systems, Inc.
    Inventors: Kishor Adinath SAITWAL, Wesley Kenneth COBB, Tao YANG
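
The abstract above estimates an affine transformation from a feature-dense alignment region and warps the current image back into registration with the reference. A short sketch using standard OpenCV feature tracking and affine estimation follows; selecting the single most feature-dense region is simplified away here.

```python
# Sketch: track reference features into the current frame, estimate a partial
# affine transform, and warp the current frame back onto the reference.
import cv2
import numpy as np

def stabilize(reference_gray: np.ndarray, current_gray: np.ndarray) -> np.ndarray:
    ref_pts = cv2.goodFeaturesToTrack(reference_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(reference_gray, current_gray,
                                                  ref_pts, None)
    good = status.ravel() == 1
    matrix, _ = cv2.estimateAffinePartial2D(cur_pts[good], ref_pts[good])
    h, w = current_gray.shape
    return cv2.warpAffine(current_gray, matrix, (w, h))   # warp back onto the reference
```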
  • Patent number: 8705861
    Abstract: Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to a video capturing device providing the stream of the video frames and other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.
    Type: Grant
    Filed: June 12, 2012
    Date of Patent: April 22, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
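
The abstract above orders background regions by z-depth using occlusions between regions. Assuming occlusion observations are available as ordered pairs, the ordering itself reduces to a topological sort, sketched below with illustrative region ids.

```python
# Sketch: order regions nearest-to-farthest from pairwise occlusion evidence.
from graphlib import TopologicalSorter

def depth_order(occludes):
    """occludes: iterable of (front_region, back_region) pairs."""
    graph = {}
    for front, back in occludes:
        graph.setdefault(back, set()).add(front)   # back region sits behind front region
        graph.setdefault(front, set())
    return list(TopologicalSorter(graph).static_order())

# Example: region 2 occludes region 0, region 0 occludes region 1
print(depth_order([(2, 0), (0, 1)]))   # nearest first: [2, 0, 1]
```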
  • Publication number: 20140050355
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may be configured to detect and evaluate the presence of sea-surface oil on the water surrounding an offshore oil platform. The computer vision engine may be configured to segment image data into detected patches or blobs of surface oil (foreground) present in the field of view of an infrared camera (or cameras). A machine learning engine may evaluate the detected patches of surface oil to learn to distinguish between sea-surface oil incident to the operation of an offshore platform and the appearance of surface oil that should be investigated by platform personnel.
    Type: Application
    Filed: August 20, 2013
    Publication date: February 20, 2014
    Applicant: Behavioral Recognition Systems, Inc.
    Inventor: Wesley Kenneth COBB
  • Patent number: 8625884
    Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: January 7, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
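
The abstract above overlays learned event clusters on a background model of the scene. Below is a small sketch of that rendering step, assuming each cluster is summarized by a 2D mean and standard deviation and drawn as a labeled ellipse with OpenCV; the cluster data format is hypothetical.

```python
# Sketch: draw each event cluster as a one-standard-deviation ellipse with a
# label on top of the scene's background image.
import cv2
import numpy as np

def draw_event_map(background: np.ndarray, clusters):
    """clusters: list of dicts with 'mean' (x, y), 'std' (sx, sy), 'label'."""
    canvas = background.copy()
    for c in clusters:
        center = tuple(int(v) for v in c["mean"])
        axes = tuple(max(1, int(v)) for v in c["std"])
        cv2.ellipse(canvas, center, axes, 0, 0, 360, (0, 0, 255), 2)
        cv2.putText(canvas, c["label"], center, cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, (0, 0, 255), 1)
    return canvas
```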
  • Publication number: 20140003710
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. In one embodiment, e.g., a machine learning engine may include statistical engines for generating topological feature maps based on observations and a detection module for detecting feature anomalies. The statistical engines may include adaptive resonance theory (ART) networks which cluster observed position-feature characteristics. The statistical engines may further reinforce, decay, merge, and remove clusters. The detection module may calculate a rareness value relative to recurring observations and data in the ART networks. Further, the sensitivity of detection may be adjusted according to the relative importance of recently observed anomalies.
    Type: Application
    Filed: June 27, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB
  • Publication number: 20140003720
    Abstract: Techniques are disclosed for removing false-positive foreground pixels resulting from environmental illumination effects. The techniques include receiving a foreground image and a background model, and determining an approximated reflectance component of the foreground image based on the foreground image itself and a background model image which is used as a proxy for an illuminance component of the foreground image. Pixels of the foreground image having approximated reflectance values less than a threshold value may be classified as false-positive foreground pixels and removed from the foreground image. Further, the threshold value used may be adjusted based on various factors to account for, e.g., different illumination conditions indoors and outdoors.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Tao YANG, Wesley Kenneth COBB
  • Publication number: 20140003713
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. Bounding boxes are determined for a set of foreground patches identified in a video frame. For each bounding box, the techniques include determining textures for first areas, each including a foreground pixel and surrounding pixels, and determining textures for second areas including pixels of the background model image corresponding to the pixels of the foreground areas. Further, for each foreground pixel in the bounding box area, a correlation score is determined based on the texture of the corresponding first area and second area. Pixels whose correlation scores exceed a threshold are removed from the foreground patch. The size of the bounding box may also be reduced to fit the modified foreground patch.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Tao YANG, Wesley Kenneth COBB
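
The abstract above compares the texture around each foreground pixel with the corresponding background-model texture and removes pixels whose correlation exceeds a threshold. Below is a minimal sketch using normalized cross-correlation over a small window; the window size and threshold are illustrative.

```python
# Sketch: per-pixel texture comparison between the current frame and the
# background model; pixels whose windows correlate strongly are dropped.
import numpy as np

def texture_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two same-sized image windows."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0

def refine_patch(frame, background, mask, half=2, threshold=0.9):
    """Clear mask pixels whose local texture matches the background model."""
    cleaned = mask.copy()
    h, w = frame.shape[:2]
    for y, x in zip(*np.nonzero(mask)):
        if y < half or x < half or y >= h - half or x >= w - half:
            continue                                  # skip the image border
        fg_win = frame[y - half:y + half + 1, x - half:x + half + 1]
        bg_win = background[y - half:y + half + 1, x - half:x + half + 1]
        if texture_correlation(fg_win, bg_win) > threshold:
            cleaned[y, x] = 0                         # texture matches background: drop pixel
    return cleaned
```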
  • Publication number: 20140002647
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include receiving data for an object within the scene and determining whether the object has remained substantially stationary within the scene for at least a threshold period. If the object is determined to have remained stationary for at least the threshold period, a rareness score is calculated for the object to indicate a likelihood of the object being stationary to the observed degree at the observed location. The rareness score may use a learning model to take into account previous stationary and/or non-stationary behavior of objects within the scene. In general, the learning model may be updated based on observed stationary and/or non-stationary behaviors of the objects. If the rareness score meets reporting conditions, the stationary object event may be reported.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 2, 2014
    Inventors: Gang XU, Wesley Kenneth COBB
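
The last abstract above scores how rare it is for an object to remain stationary at a given location, updating a learning model as the scene is observed. Below is a minimal sketch, assuming a coarse spatial grid and a count-based rareness formula; the dwell threshold, grid size, and formula are illustrative assumptions.

```python
# Sketch: per-grid-cell counts of stationary objects, used to score how rare a
# new stationary observation is at that location before the model is updated.
from collections import defaultdict

class StationaryObjectModel:
    def __init__(self, dwell_threshold=300, cell=32):
        self.dwell_threshold = dwell_threshold       # frames an object must stay put
        self.cell = cell                             # grid cell size in pixels
        self.stationary_counts = defaultdict(int)    # stationary events seen per cell
        self.total = 0

    def report(self, x, y, frames_stationary):
        """Return a rareness score, or None if the object has not dwelled long enough."""
        if frames_stationary < self.dwell_threshold:
            return None
        key = (x // self.cell, y // self.cell)
        rareness = 1.0 - self.stationary_counts[key] / (self.total + 1)
        self.stationary_counts[key] += 1             # update the learning model
        self.total += 1
        return rareness
```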