Patents by Inventor Wesley Kenneth Cobb

Wesley Kenneth Cobb has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190377951
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: August 20, 2019
    Publication date: December 12, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
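    Illustrative sketch: a minimal Python example of combining a primitive event symbol stream and a phase space symbol stream for one tracked object into a single vector representation, as the abstract above describes. The symbol alphabets, histogram encoding, and function name are assumptions for illustration, not the patented method.
    ```python
    from collections import Counter

    # Illustrative symbol alphabets (assumed for this sketch, not taken from the patent).
    PRIMITIVE_EVENTS = ["appear", "move", "stop", "turn", "disappear"]
    PHASE_SPACE_SYMBOLS = ["slow", "fast", "accel", "decel"]

    def vector_representation(primitive_stream, phase_stream):
        """Combine the two symbol streams for one object into a normalized
        histogram vector over both alphabets."""
        counts = Counter(primitive_stream) + Counter(phase_stream)
        alphabet = PRIMITIVE_EVENTS + PHASE_SPACE_SYMBOLS
        total = sum(counts[s] for s in alphabet) or 1
        return [counts[s] / total for s in alphabet]

    # An object appears, moves quickly, then stops.
    print(vector_representation(["appear", "move", "move", "stop"],
                                ["slow", "fast", "fast", "decel"]))
    ```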
  • Patent number: 10489679
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Grant
    Filed: July 22, 2014
    Date of Patent: November 26, 2019
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20190311204
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 10, 2019
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
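    Illustrative sketch: a toy per-pixel background model with a dynamic threshold and an absorption window, covering only the pixel-based half of the combined approach in the abstract above. The class name, learning rate, threshold factor, and window length are assumptions, not the patented implementation; the context-based model is omitted.
    ```python
    import numpy as np

    class PixelBackgroundModel:
        """Toy per-pixel background model: foreground is any pixel far from
        the running mean (dynamic threshold), and long-lived foreground is
        absorbed back into the background via an absorption window."""

        def __init__(self, shape, alpha=0.02, k=2.5, absorption_window=50):
            self.mean = np.zeros(shape, dtype=np.float32)
            self.var = np.ones(shape, dtype=np.float32)
            self.fg_age = np.zeros(shape, dtype=np.int32)  # frames spent as foreground
            self.alpha = alpha                  # learning rate
            self.k = k                          # threshold in standard deviations
            self.absorption_window = absorption_window

        def segment(self, frame):
            frame = frame.astype(np.float32)
            diff = np.abs(frame - self.mean)
            foreground = diff > self.k * np.sqrt(self.var)

            # Track how long each pixel has stayed foreground and absorb
            # pixels that exceed the absorption window.
            self.fg_age = np.where(foreground, self.fg_age + 1, 0)
            absorb = self.fg_age >= self.absorption_window
            update = (~foreground) | absorb

            # Update statistics only where the pixel is background (or being
            # absorbed), so true foreground does not pollute the model.
            self.mean = np.where(update, (1 - self.alpha) * self.mean + self.alpha * frame, self.mean)
            self.var = np.where(update, (1 - self.alpha) * self.var + self.alpha * (frame - self.mean) ** 2, self.var)
            self.fg_age[absorb] = 0
            return foreground & ~absorb

    model = PixelBackgroundModel((240, 320))
    # mask = model.segment(gray_frame)   # boolean foreground mask per frame
    ```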
  • Patent number: 10423892
    Abstract: Techniques are disclosed for analyzing and learning behavior in an acquired stream of video frames. In one embodiment, a trajectory analyzer clusters trajectories of objects depicted in video frames and builds a trajectory model including the trajectory clusters, a prior probability of assigning a trajectory to each cluster, and an intra-cluster probability distribution indicating the probability that a trajectory mapping to each cluster is at least various distances away from the cluster. Given a new trajectory, a score indicating how unusual the trajectory is may be computed based on the product of the probability of the trajectory mapping to a particular cluster and the intra-cluster probability of the trajectory being a computed distance from the cluster. The distance used to match the trajectory to the cluster and determine intra-cluster probability is computed using a parallel Needleman-Wunsch algorithm, with cells in antidiagonals of a matrix and connected sub-matrices being computed in parallel.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: September 24, 2019
    Assignee: Omni AI, Inc.
    Inventors: Gang Xu, Ming-Jung Seow, Tao Yang, Wesley Kenneth Cobb
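    Illustrative sketch: a Needleman-Wunsch score matrix filled anti-diagonal by anti-diagonal, showing why the cells of one anti-diagonal can be computed in parallel as the abstract describes. The scoring values and example sequences are assumptions, and this serial version only marks where the parallel work would go; it is not the patented implementation.
    ```python
    import numpy as np

    def needleman_wunsch(a, b, match=1.0, mismatch=-1.0, gap=-1.0):
        """Alignment score computed anti-diagonal by anti-diagonal: every cell
        on one anti-diagonal depends only on the two previous anti-diagonals,
        so the inner loop below is the part a parallel version can run
        concurrently."""
        n, m = len(a), len(b)
        dp = np.zeros((n + 1, m + 1))
        dp[:, 0] = gap * np.arange(n + 1)
        dp[0, :] = gap * np.arange(m + 1)

        for d in range(2, n + m + 1):             # anti-diagonal index i + j == d
            lo, hi = max(1, d - m), min(n, d - 1)
            for i in range(lo, hi + 1):           # independent cells: parallelizable
                j = d - i
                s = match if a[i - 1] == b[j - 1] else mismatch
                dp[i, j] = max(dp[i - 1, j - 1] + s,
                               dp[i - 1, j] + gap,
                               dp[i, j - 1] + gap)
        return dp[n, m]

    # Similarity between two trajectory symbol sequences (illustrative alphabet).
    print(needleman_wunsch("NNEES", "NNES"))
    ```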
  • Patent number: 10423835
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 24, 2019
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Publication number: 20190289215
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may provide image stabilization of a video stream obtained from a camera. An image stabilization module in the behavioral recognition system obtains a reference image from the video stream. The image stabilization module identifies alignment regions within the reference image based on the regions of the image that are dense with features. Upon determining that the tracked features of a current image are out of alignment with the reference image, the image stabilization module uses the most feature-dense alignment region to estimate an affine transformation matrix to apply to the entire current image to warp the image into proper alignment.
    Type: Application
    Filed: February 19, 2019
    Publication date: September 19, 2019
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Wesley Kenneth COBB, Tao YANG
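    Illustrative sketch: a least-squares estimate of the 2x3 affine matrix from feature points matched between the current image and the reference image. Feature detection, tracking, and the selection of the most feature-dense alignment region are omitted; the function name and point values are assumptions, not the patented method.
    ```python
    import numpy as np

    def estimate_affine(src_pts, dst_pts):
        """Least-squares 2x3 affine transform mapping src_pts -> dst_pts,
        where the points are matched features from one alignment region."""
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        n = len(src)
        # Design matrix for the parameters [a, b, tx, c, d, ty].
        A = np.zeros((2 * n, 6))
        A[0::2, 0:2] = src
        A[0::2, 2] = 1.0
        A[1::2, 3:5] = src
        A[1::2, 5] = 1.0
        b = dst.reshape(-1)
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params.reshape(2, 3)

    # Tracked features drifted by a small translation; the recovered matrix
    # would then be used to warp the whole current frame back into alignment.
    ref = np.array([[10, 10], [200, 40], [60, 180], [150, 160]])
    cur = ref + np.array([3.0, -2.0])
    print(estimate_affine(cur, ref))
    ```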
  • Patent number: 10409910
    Abstract: Techniques are disclosed for generating a syntax for a neuro-linguistic model of input data obtained from one or more sources. A stream of words of a dictionary built from a sequence of symbols is received. The symbols are generated from an ordered stream of normalized vectors generated from input data. Statistics for combinations of words co-occurring in the stream are evaluated. The statistics include a frequency with which the combinations of words co-occur. A model of combinations of words based on the evaluated statistics is updated. The model identifies statistically relevant words. A connected graph is generated. Each node in the connected graph represents one of the words in the stream. Edges connecting the nodes represent a probabilistic relationship between words in the stream. Phrases are identified based on the connected graph.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: September 10, 2019
    Assignee: Omni AI, Inc.
    Inventors: Ming-Jung Seow, Gang Xu, Tao Yang, Wesley Kenneth Cobb
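    Illustrative sketch: building a small connected graph of word co-occurrence probabilities from a word stream, keeping edges above a probability threshold so that phrases can be read off as chains of strong edges. The threshold, data, and function name are assumptions, not the patented method.
    ```python
    from collections import Counter

    def build_word_graph(word_stream, min_prob=0.3):
        """Count adjacent word co-occurrences and keep edges whose conditional
        probability P(next | word) meets a threshold; the surviving edges form
        the connected graph from which phrases are identified."""
        bigrams = Counter(zip(word_stream, word_stream[1:]))
        unigrams = Counter(word_stream)
        graph = {}
        for (w1, w2), n in bigrams.items():
            prob = n / unigrams[w1]
            if prob >= min_prob:
                graph.setdefault(w1, {})[w2] = prob
        return graph

    stream = ["car", "enters", "lot", "car", "enters", "lot", "person", "exits", "lot"]
    print(build_word_graph(stream))
    ```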
  • Patent number: 10410058
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include evaluating sequence pairs representing segments of object trajectories. Assuming the objects interact, each of the sequences of the sequence pair may be mapped to a sequence cluster of an adaptive resonance theory (ART) network. A rareness value for the pair of sequence clusters may be determined based on learned joint probabilities of sequence cluster pairs. A statistical anomaly model, which may be specific to an interaction type or general to a plurality of interaction types, is used to determine an anomaly temperature, and alerts are issued based at least on the anomaly temperature. In addition, the ART network and the statistical anomaly model are updated based on the current interaction.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: September 10, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Dennis G. Urech, Wesley Kenneth Cobb
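    Illustrative sketch: a toy stand-in for the learned joint probabilities of sequence-cluster pairs, with rareness mapped directly to an anomaly temperature and the model updated after every interaction. The ART network, temperature calibration, class name, and alert threshold are omitted or assumed; this is not the patented implementation.
    ```python
    from collections import Counter

    class InteractionAnomalyModel:
        """Rareness of a pair of sequence clusters derived from learned joint
        statistics, mapped to an 'anomaly temperature', and updated with the
        current interaction."""

        def __init__(self, alert_threshold=0.9):
            self.pair_counts = Counter()
            self.total = 0
            self.alert_threshold = alert_threshold

        def observe(self, cluster_a, cluster_b):
            pair = (cluster_a, cluster_b)
            prob = self.pair_counts[pair] / self.total if self.total else 0.0
            rareness = 1.0 - prob
            temperature = rareness          # a real model would calibrate this
            alert = temperature >= self.alert_threshold
            # Update the learned joint statistics with the current interaction.
            self.pair_counts[pair] += 1
            self.total += 1
            return temperature, alert

    model = InteractionAnomalyModel()
    print(model.observe(3, 7))   # first-ever pairing: maximally rare
    print(model.observe(3, 7))   # seen before: cooler temperature
    ```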
  • Patent number: 10409909
    Abstract: Techniques are disclosed for building a dictionary of words from combinations of symbols generated based on input data. A neuro-linguistic behavior recognition system includes a neuro-linguistic module that generates a linguistic model that describes data input from a source (e.g., video data, SCADA data, etc.). To generate words for the linguistic model, a lexical analyzer component in the neuro-linguistic module receives a stream of symbols, each symbol generated based on an ordered stream of normalized vectors generated from input data. The lexical analyzer component determines words from combinations of the symbols based on a hierarchical learning model having one or more levels. Each level indicates a length of the words to be identified at that level. Statistics are evaluated for the words identified at each level. The lexical analyzer component identifies one or more of the words having statistical significance.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: September 10, 2019
    Assignee: Omni AI, Inc.
    Inventors: Gang Xu, Ming-Jung Seow, Tao Yang, Wesley Kenneth Cobb
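    Illustrative sketch: a level-by-level word builder in which level k collects symbol combinations of length k and keeps only those occurring often enough to be statistically significant. The number of levels, the count threshold, and the input stream are assumptions, not the patented method.
    ```python
    from collections import Counter

    def learn_words(symbol_stream, max_level=3, min_count=2):
        """Identify 'words' (symbol n-grams) level by level; level k looks for
        words of length k, and only combinations seen at least min_count times
        are kept as statistically significant."""
        words = {}
        for level in range(1, max_level + 1):
            ngrams = Counter(
                tuple(symbol_stream[i:i + level])
                for i in range(len(symbol_stream) - level + 1)
            )
            words[level] = {w: n for w, n in ngrams.items() if n >= min_count}
        return words

    stream = list("ABABCABABC")
    print(learn_words(stream))
    ```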
  • Publication number: 20190268570
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include receiving data for an object within the scene and determining whether the object has remained substantially stationary within the scene for at least a threshold period. If the object is determined to have remained stationary for at least the threshold period, a rareness score is calculated for the object to indicate a likelihood of the object being stationary to an observed degree at an observed location. The rareness score may use a learning model to take into account previous stationary and/or non-stationary behavior of objects within the scene. In general, the learning model may be updated based on observed stationary and/or non-stationary behaviors of the objects. If the rareness score meets reporting conditions, the stationary object event may be reported.
    Type: Application
    Filed: February 25, 2019
    Publication date: August 29, 2019
    Applicant: Omni AI, Inc.
    Inventors: Gang XU, Wesley Kenneth COBB
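    Illustrative sketch: a toy rareness model for stationary objects that counts how often objects have been observed stationary in each coarse grid cell, reports an event only when the behaviour is rare, and updates its statistics afterwards. The grid-cell representation, thresholds, and class name are assumptions, not the patented method.
    ```python
    from collections import defaultdict

    class StationaryObjectModel:
        """Report a stationary-object event when an object has stayed put past
        a threshold period and such behaviour is rare at that location."""

        def __init__(self, threshold_frames=100, report_threshold=0.8):
            self.stationary_counts = defaultdict(int)
            self.total_events = 0
            self.threshold_frames = threshold_frames
            self.report_threshold = report_threshold

        def evaluate(self, cell, frames_stationary):
            if frames_stationary < self.threshold_frames:
                return None                      # not stationary long enough
            prob = (self.stationary_counts[cell] / self.total_events
                    if self.total_events else 0.0)
            rareness = 1.0 - prob
            # Update the learning model with the observed stationary behaviour.
            self.stationary_counts[cell] += 1
            self.total_events += 1
            return rareness if rareness >= self.report_threshold else None

    model = StationaryObjectModel()
    print(model.evaluate((4, 7), frames_stationary=250))   # rare -> reported
    ```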
  • Publication number: 20190258867
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 22, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Bobby Ernest BLYTHE, Rajkiran Kumar GOTTUMUKKAL, Kishor Adinath SAITWAL, Gang XU, Tao YANG
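    Illustrative sketch: a quadtree decomposition of a background image into windows plus the phase spectrum of each window via a 2-D FFT. The projection onto stored scene-preset matrices and the matching decision are omitted, and the split criterion, thresholds, and function names are assumptions, not the patented method.
    ```python
    import numpy as np

    def quadtree_windows(img, min_size=16, var_threshold=100.0):
        """Recursively split the background image into windows; homogeneous
        regions stay large while detailed regions are subdivided."""
        windows = []
        def split(y, x, h, w):
            block = img[y:y + h, x:x + w]
            if h <= min_size or w <= min_size or block.var() < var_threshold:
                windows.append((y, x, h, w))
                return
            h2, w2 = h // 2, w // 2
            split(y, x, h2, w2);          split(y, x + w2, h2, w - w2)
            split(y + h2, x, h - h2, w2); split(y + h2, x + w2, h - h2, w - w2)
        split(0, 0, *img.shape)
        return windows

    def phase_spectrum(img, window):
        """Phase spectrum of one quadtree window; phase is less sensitive to
        the lighting changes mentioned in the abstract than magnitude."""
        y, x, h, w = window
        return np.angle(np.fft.fft2(img[y:y + h, x:x + w]))

    background = np.random.default_rng(0).integers(0, 255, (128, 128)).astype(float)
    wins = quadtree_windows(background)
    spectra = [phase_spectrum(background, w) for w in wins]
    print(len(wins), spectra[0].shape)
    ```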
  • Patent number: 10373340
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: August 6, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Patent number: 10373062
    Abstract: Techniques are disclosed for generating a sequence of symbols based on input data for a neuro-linguistic model. The model may be used by a behavior recognition system to analyze the input data. A mapper component of a neuro-linguistic module in the behavior recognition system receives one or more normalized vectors generated from the input data. The mapper component generates one or more clusters based on a statistical distribution of the normalized vectors. The mapper component evaluates statistics and identifies statistically relevant clusters. The mapper component assigns a distinct symbol to each of the identified clusters.
    Type: Grant
    Filed: December 12, 2014
    Date of Patent: August 6, 2019
    Assignee: Omni AI, Inc.
    Inventors: Ming-Jung Seow, Gang Xu, Tao Yang, Wesley Kenneth Cobb
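    Illustrative sketch: a simple online clustering of normalized input vectors in which each statistically relevant cluster is assigned a distinct symbol. The distance threshold, count threshold, nearest-centroid rule, and class name are assumptions, not the patented mapper component.
    ```python
    import numpy as np

    class Mapper:
        """Cluster normalized vectors online and assign a distinct symbol to
        each cluster whose sample count makes it statistically relevant."""

        def __init__(self, distance_threshold=0.25, min_count=5):
            self.centroids = []
            self.counts = []
            self.distance_threshold = distance_threshold
            self.min_count = min_count

        def observe(self, vec):
            vec = np.asarray(vec, dtype=float)
            if self.centroids:
                dists = [np.linalg.norm(vec - c) for c in self.centroids]
                best = int(np.argmin(dists))
                if dists[best] < self.distance_threshold:
                    # Move the matched centroid toward the new sample.
                    self.counts[best] += 1
                    self.centroids[best] += (vec - self.centroids[best]) / self.counts[best]
                    return best
            self.centroids.append(vec.copy())
            self.counts.append(1)
            return len(self.centroids) - 1

        def symbols(self):
            """Letters for clusters with enough samples to be relevant."""
            return {i: chr(ord('A') + i)
                    for i, n in enumerate(self.counts) if n >= self.min_count}

    m = Mapper()
    for v in [[1, 0], [0.9, 0.1], [0, 1]] * 5:
        m.observe(v)
    print(m.symbols())
    ```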
  • Publication number: 20190230108
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated from an ordered sequence of dictionary words observed in the ordered sequence of symbols, based on a frequency with which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: December 11, 2018
    Publication date: July 25, 2019
    Applicant: Omni AI, Inc.
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
  • Publication number: 20190188998
    Abstract: Alert directives and focused alert directives allow a user to provide feedback to a behavioral recognition system to always or never publish an alert for certain events. Such an approach bypasses the normal publication methods of the behavioral recognition system yet does not obstruct the system's learning procedures.
    Type: Application
    Filed: August 31, 2018
    Publication date: June 20, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Gang XU, Kishor Adinath SAITWAL, Anthony AKINS, Kerry JOSEPH, Dennis G. URECH
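    Illustrative sketch: applying always/never alert directives after the learning update, so publication is overridden without obstructing learning. The function signature, directive keys, score threshold, and DummyLearner are assumptions, not the patented behavior.
    ```python
    def publish_alert(event, anomaly_score, directives, learner, threshold=0.8):
        """Decide whether to publish an alert for an event. The learner is
        updated first, so always/never directives change publication without
        affecting the system's learning procedures."""
        learner.update(event)                      # learning is never bypassed
        directive = directives.get(event["percept"])
        if directive == "never":
            return False
        if directive == "always":
            return True
        return anomaly_score >= threshold          # normal publication path

    class DummyLearner:
        def update(self, event):
            pass

    directives = {"loitering_near_door": "always", "parked_car": "never"}
    event = {"percept": "parked_car"}
    print(publish_alert(event, anomaly_score=0.95, directives=directives,
                        learner=DummyLearner()))   # suppressed despite high score
    ```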
  • Publication number: 20190180135
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independently of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and to classify objects without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: July 12, 2018
    Publication date: June 13, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
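    Illustrative sketch: computing a pixel-level micro-feature vector for one foreground blob using only raw image statistics, with no training data or object definitions. The particular features, function name, and example patch are assumptions, not the patented extractor; the micro-classifier that clusters the vectors is omitted.
    ```python
    import numpy as np

    def micro_feature_vector(patch, mask):
        """Unsupervised micro-features for one foreground blob.
        patch: grayscale pixels of the blob's bounding box; mask: foreground mask."""
        ys, xs = np.nonzero(mask)
        h = np.ptp(ys) + 1
        w = np.ptp(xs) + 1
        pixels = patch[mask]
        return np.array([
            w / h,                          # aspect ratio
            mask.sum() / mask.size,         # fill ratio of the bounding box
            pixels.mean() / 255.0,          # normalized brightness
            pixels.std() / 255.0,           # texture proxy
        ])

    # The resulting vectors would then be grouped by a micro-classifier into
    # object-type clusters (cars, people, ...) without any labels.
    patch = np.full((20, 10), 128.0)
    mask = np.ones((20, 10), dtype=bool)
    print(micro_feature_vector(patch, mask))
    ```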
  • Patent number: 10303955
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 28, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Publication number: 20190122048
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: December 19, 2018
    Publication date: April 25, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Publication number: 20190124101
    Abstract: Embodiments presented herein describe a method for processing streams of data of one or more networked computer systems. According to one embodiment of the present disclosure, an ordered stream of normalized vectors corresponding to information security data obtained from one or more sensors monitoring a computer network is received. A neuro-linguistic model of the information security data is generated by clustering the ordered stream of vectors and assigning a letter to each cluster, outputting an ordered sequence of letters based on a mapping of the ordered stream of normalized vectors to the clusters, building a dictionary of words from the ordered output of letters, outputting an ordered stream of words based on the ordered output of letters, and generating a plurality of phrases based on the ordered output of words.
    Type: Application
    Filed: May 13, 2018
    Publication date: April 25, 2019
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Curtis Edward COLE, Cody Shay FALCON, Benjamin A. KONOSKY, Charles Richard MORGAN, Aaron POFFENBERGER, Thong Toan NGUYEN
  • Patent number: 10257466
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include receiving data for an object within the scene and determining whether the object has remained substantially stationary within the scene for at least a threshold period. If the object is determined to have remained stationary for at least the threshold period, a rareness score is calculated for the object to indicate a likelihood of the object being stationary to an observed degree at an observed location. The rareness score may use a learning model to take into account previous stationary and/or non-stationary behavior of objects within the scene. In general, the learning model may be updated based on observed stationary and/or non-stationary behaviors of the objects. If the rareness score meets reporting conditions, the stationary object event may be reported.
    Type: Grant
    Filed: June 29, 2017
    Date of Patent: April 9, 2019
    Assignee: Omni AI, Inc.
    Inventors: Gang Xu, Wesley Kenneth Cobb