Patents by Inventor Ming-Jung Seow

Ming-Jung Seow has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150110388
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames. (A brief illustrative sketch of this symbol-stream-to-vector step follows this listing.)
    Type: Application
    Filed: December 29, 2014
    Publication date: April 23, 2015
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Publication number: 20150078656
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Application
    Filed: July 22, 2014
    Publication date: March 19, 2015
    Inventors: Wesley Kenneth COBB, Bobby Ernest BLYTHE, Rajkiran Kumar GOTTUMUKKAL, Ming-Jung SEOW
  • Publication number: 20150046155
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of symbols, based on the frequency at which those combinations appear in the ordered sequence of symbols. A plurality of phrases is then generated from an ordered sequence of dictionary words observed in the symbol sequence, based on the frequency with which combinations of those words appear relative to one another. (A brief illustrative sketch of the dictionary-building step follows this listing.)
    Type: Application
    Filed: August 11, 2014
    Publication date: February 12, 2015
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
  • Publication number: 20150047040
    Abstract: Embodiments presented herein describe a method for processing streams of data of one or more networked computer systems. According to one embodiment of the present disclosure, an ordered stream of normalized vectors corresponding to information security data obtained from one or more sensors monitoring a computer network is received. A neuro-linguistic model of the information security data is generated by clustering the ordered stream of vectors and assigning a letter to each cluster, outputting an ordered sequence of letters based on a mapping of the ordered stream of normalized vectors to the clusters, building a dictionary of words from the ordered output of letters, outputting an ordered stream of words based on the ordered output of letters, and generating a plurality of phrases based on the ordered output of words. (A brief illustrative sketch of the vector-to-letter clustering step follows this listing.)
    Type: Application
    Filed: August 11, 2014
    Publication date: February 12, 2015
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Curtis Edward COLE, JR., Cody Shay FALCON, Benjamin A. KONOSKY, Charles Richard MORGAN, Aaron POFFENBERGER, Thong Toan NGUYEN
  • Patent number: 8923609
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: December 30, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 8797405
    Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: August 5, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Patent number: 8786702
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: July 22, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20140015984
    Abstract: Techniques are disclosed for detecting an out-of-focus camera in a video analytics system. In one embodiment, a preprocessor component performs a pyramid image decomposition on a video frame captured by a camera. The preprocessor further determines sharp edge areas, candidate blurry edge areas, and actual blurry edge areas in each level of the pyramid image decomposition. Based on the sharp edge areas, the candidate blurry edge areas, and the actual blurry edge areas, the preprocessor determines a sharpness value and a blurriness value which indicate the overall sharpness and blurriness of the video frame, respectively. Based on the sharpness value and the blurriness value, the preprocessor further determines whether the video frame is out of focus and whether to send the video frame to components of a computer vision engine and/or a machine learning engine. (A brief illustrative sketch of this sharpness/blurriness scoring follows this listing.)
    Type: Application
    Filed: June 28, 2013
    Publication date: January 16, 2014
    Inventors: Ming-Jung SEOW, Dennis G. URECH
  • Patent number: 8625884
    Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: January 7, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Publication number: 20140003710
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. In one embodiment, a machine learning engine may include statistical engines for generating topological feature maps based on observations and a detection module for detecting feature anomalies. The statistical engines may include adaptive resonance theory (ART) networks which cluster observed position-feature characteristics. The statistical engines may further reinforce, decay, merge, and remove clusters. The detection module may calculate a rareness value relative to recurring observations and data in the ART networks. Further, the sensitivity of detection may be adjusted according to the relative importance of recently observed anomalies. (A brief illustrative sketch of this rareness computation follows this listing.)
    Type: Application
    Filed: June 27, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB
  • Publication number: 20140003720
    Abstract: Techniques are disclosed for removing false-positive foreground pixels resulting from environmental illumination effects. The techniques include receiving a foreground image and a background model, and determining an approximated reflectance component of the foreground image based on the foreground image itself and a background model image which is used as a proxy for an illuminance component of the foreground image. Pixels of the foreground image having approximated reflectance values less than a threshold value may be classified as false-positive foreground pixels and removed from the foreground image. Further, the threshold value used may be adjusted based on various factors to account for, e.g., different illumination conditions indoors and outdoors. (A brief illustrative sketch of this reflectance test follows this listing.)
    Type: Application
    Filed: June 28, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Tao YANG, Wesley Kenneth COBB
  • Publication number: 20140003713
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. Bounding boxes are determined for a set of foreground patches identified in a video frame. For each bounding box, the techniques include determining textures for first areas, each including a foreground pixel and surrounding pixels, and determining textures for second areas including pixels of the background model image corresponding to the pixels of the foreground areas. Further, for each foreground pixel in the bounding box area, a correlation score is determined based on the texture of the corresponding first area and second area. Pixels whose correlation scores exceed a threshold are removed from the foreground patch. The size of the bounding box may also be reduced to fit the modified foreground patch. (A brief illustrative sketch of this texture-correlation test follows this listing.)
    Type: Application
    Filed: June 28, 2013
    Publication date: January 2, 2014
    Inventors: Ming-Jung SEOW, Tao YANG, Wesley Kenneth COBB
  • Patent number: 8620028
    Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track an object's motion frame-to-frame. Classes of the objects are determined and semantic representations of the objects are generated. The semantic representations are used to determine objects' behaviors and to learn about behaviors occurring in an environment depicted by the acquired video streams. In this way, the system learns, rapidly and in real time, normal and abnormal behaviors for any environment by analyzing movements or activities (or the absence of such) in that environment, and it identifies and predicts abnormal and suspicious behavior based on what has been learned.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: December 31, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
  • Patent number: 8548198
    Abstract: Techniques are disclosed for identifying anomaly object types during classification of foreground objects extracted from image data. A self-organizing map and adaptive resonance theory (SOM-ART) network is used to discover object type clusters and classify objects depicted in the image data based on pixel-level micro-features that are extracted from the image data. Importantly, the discovery of the object type clusters is unsupervised, i.e., performed independently of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and to perform object classification without being constrained by specific object definitions. The SOM-ART network is adaptive and able to learn while discovering the object type clusters, classifying objects, and identifying anomaly object types.
    Type: Grant
    Filed: September 18, 2012
    Date of Patent: October 1, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, David Friedlander, Rajkiran Kumar Gottumukkal, Ming-Jung Seow, Gang Xu
  • Publication number: 20130242093
    Abstract: Alert directives and focused alert directives allow a user to provide feedback to a behavioral recognition system to always or never publish an alert for certain events. Such an approach bypasses the normal publication methods of the behavioral recognition system yet does not obstruct the system's learning procedures. (A brief illustrative sketch of such alert directives follows this listing.)
    Type: Application
    Filed: March 15, 2013
    Publication date: September 19, 2013
    Applicant: BEHAVIORAL RECOGNITION SYSTEMS, INC.
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Gang XU, Kishor Adinath SAITWAL, Anthony AKINS, Kerry JOSEPH, Dennis G. URECH
  • Patent number: 8494222
    Abstract: Techniques are disclosed for a video surveillance system to learn to recognize complex behaviors by analyzing pixel data using alternating layers of clustering and sequencing. A combination of a self organizing map (SOM) and an adaptive resonance theory (ART) network may be used to identify a variety of different anomalous inputs at each cluster layer. Because progressively higher layers of the cortex model component represent progressively higher levels of abstraction, anomalies occurring in those higher layers correspond to observations of behavioral anomalies involving progressively more complex patterns of behavior.
    Type: Grant
    Filed: May 15, 2012
    Date of Patent: July 23, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, David Friedlander, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Patent number: 8416296
    Abstract: Techniques are disclosed for detecting the occurrence of unusual events in a sequence of video frames. Importantly, what is determined as unusual need not be defined in advance, but can be determined over time by observing a stream of primitive events and a stream of context events. A mapper component may be configured to parse the event streams and supply input data sets to multiple adaptive resonance theory (ART) networks. Each individual ART network may generate clusters from the set of input data supplied to that ART network. Each cluster represents an observed statistical distribution of a particular thing or event being observed by that ART network.
    Type: Grant
    Filed: April 14, 2009
    Date of Patent: April 9, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow
  • Patent number: 8411935
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: July 9, 2008
    Date of Patent: April 2, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 8374393
    Abstract: Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications. (A brief illustrative sketch of this flow-based validation follows this listing.)
    Type: Grant
    Filed: July 10, 2012
    Date of Patent: February 12, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Tao Yang
  • Patent number: 8358834
    Abstract: Techniques are disclosed for learning and modeling a background for a complex and/or dynamic scene over a period of observations without supervision. A background/foreground component of a computer vision engine may be configured to model a scene using an array of ART networks. The ART networks learn the regularity and periodicity of the scene by observing the scene over a period of time. Thus, the ART networks allow the computer vision engine to model complex and dynamic scene backgrounds in video. (A brief illustrative sketch of such an ART array follows this listing.)
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: January 22, 2013
    Assignee: Behavioral Recognition Systems
    Inventors: Wesley Kenneth Cobb, Ming-Jung Seow, Tao Yang
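
The abstract shared by publication 20150110388 and patents 8923609 and 8411935 describes building a vector representation from a primitive event symbol stream and a phase space symbol stream. The patents do not spell out the encoding; the Python sketch below uses simple normalized symbol histograms over hypothetical alphabets, purely to illustrate how two symbol streams might be folded into one fixed-length vector.

    # Illustrative sketch for 20150110388 / 8923609 / 8411935: a fixed-length vector from
    # two symbol streams. The histogram encoding and the alphabets are assumptions.
    from collections import Counter

    PRIMITIVE_ALPHABET = ["appear", "move", "stop", "turn", "disappear"]      # hypothetical symbols
    PHASE_SPACE_ALPHABET = ["slow", "fast", "accelerating", "decelerating"]   # hypothetical symbols

    def symbol_histogram(stream, alphabet):
        """Normalized frequency of each alphabet symbol in the stream."""
        counts = Counter(s for s in stream if s in alphabet)
        total = sum(counts.values()) or 1
        return [counts[a] / total for a in alphabet]

    def vector_representation(primitive_stream, phase_space_stream):
        """Concatenate the two normalized histograms into one feature vector."""
        return (symbol_histogram(primitive_stream, PRIMITIVE_ALPHABET)
                + symbol_histogram(phase_space_stream, PHASE_SPACE_ALPHABET))

    if __name__ == "__main__":
        prim = ["appear", "move", "move", "stop", "move"]
        phase = ["slow", "accelerating", "fast", "fast"]
        print(vector_representation(prim, phase))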
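
Publication 20150046155 describes building a dictionary of "words" from combinations of symbols that appear frequently in an ordered symbol sequence. The sketch below counts symbol n-grams and keeps the frequent ones; the n-gram scan, the maximum word length, and the frequency cutoff are assumptions for illustration, not the claimed method.

    # Illustrative sketch for publication 20150046155: frequent symbol combinations as "words".
    from collections import Counter

    def build_dictionary(symbols, max_word_len=3, min_count=2):
        """Return symbol n-grams (as tuples) that occur at least min_count times."""
        counts = Counter()
        for n in range(1, max_word_len + 1):
            for i in range(len(symbols) - n + 1):
                counts[tuple(symbols[i:i + n])] += 1
        return {word: c for word, c in counts.items() if c >= min_count}

    if __name__ == "__main__":
        seq = list("abcabcabxabc")   # toy symbol stream
        for word, count in sorted(build_dictionary(seq).items(), key=lambda kv: -kv[1]):
            print("".join(word), count)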
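
Publication 20150047040 describes clustering an ordered stream of normalized vectors and assigning a letter to each cluster. The sketch below uses a simple nearest-centroid rule with a fixed distance threshold; both are assumptions, since the abstract does not commit to a particular clustering scheme.

    # Illustrative sketch for publication 20150047040: vectors -> cluster letters.
    import string
    import numpy as np

    def vectors_to_letters(vectors, radius=0.35):
        """Online clustering: reuse a cluster (letter) if a vector is close enough,
        otherwise open a new cluster with the next unused letter (at most 26 here)."""
        centroids, counts, letters = [], [], []
        out = []
        for v in vectors:
            v = np.asarray(v, dtype=float)
            if centroids:
                dists = [np.linalg.norm(v - c) for c in centroids]
                k = int(np.argmin(dists))
                if dists[k] <= radius:
                    counts[k] += 1
                    centroids[k] += (v - centroids[k]) / counts[k]   # running-mean update
                    out.append(letters[k])
                    continue
            letters.append(string.ascii_uppercase[len(centroids)])
            centroids.append(v.copy())
            counts.append(1)
            out.append(letters[-1])
        return "".join(out)

    if __name__ == "__main__":
        stream = [[0.1, 0.9], [0.12, 0.88], [0.85, 0.1], [0.11, 0.9], [0.86, 0.12]]
        print(vectors_to_letters(stream))   # "AABAB"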
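
Publication 20140015984 describes deriving overall sharpness and blurriness values for a frame from a pyramid image decomposition. The sketch below builds a crude 2x2-mean pyramid and pools gradient-based edge counts across levels; the pyramid construction, the thresholds, and the scoring rule are illustrative assumptions rather than the patented formulation.

    # Illustrative sketch for publication 20140015984: sharpness/blurriness from a pyramid.
    import numpy as np

    def pyramid(gray, levels=3):
        """Simple image pyramid: each level averages 2x2 blocks of the previous one."""
        out = [gray.astype(float)]
        for _ in range(levels - 1):
            g = out[-1]
            h, w = (g.shape[0] // 2) * 2, (g.shape[1] // 2) * 2
            g = g[:h, :w]
            out.append((g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2]) / 4.0)
        return out

    def focus_scores(gray, sharp_thresh=30.0, blur_thresh=10.0):
        """Return (sharpness, blurriness): fractions of edge pixels that look sharp
        versus merely blurry, pooled over all pyramid levels."""
        sharp = blurry = edges = 0
        for level in pyramid(gray):
            gy, gx = np.gradient(level)
            mag = np.hypot(gx, gy)
            edges += np.count_nonzero(mag > blur_thresh)
            sharp += np.count_nonzero(mag > sharp_thresh)
            blurry += np.count_nonzero((mag > blur_thresh) & (mag <= sharp_thresh))
        edges = max(edges, 1)
        return sharp / edges, blurry / edges

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        frame = np.kron(rng.integers(0, 255, (24, 32)), np.ones((8, 8)))  # blocky, sharp edges
        print(focus_scores(frame))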
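
Publication 20140003710 describes statistical engines that cluster observations, reinforce and decay clusters, and report a rareness value for new observations. The sketch below is an ART-flavored approximation; the vigilance test, decay rate, and rareness formula are assumptions, not the ART networks of the application.

    # Illustrative sketch for publication 20140003710: clustering with reinforcement,
    # decay, and a rareness value per observation.
    import numpy as np

    class ArtLikeStatistics:
        def __init__(self, vigilance=0.3, decay=0.999):
            self.vigilance = vigilance      # max distance to join an existing cluster
            self.decay = decay              # per-observation weight decay
            self.centroids, self.weights = [], []

        def observe(self, x):
            """Update clusters with observation x and return its rareness in [0, 1]."""
            x = np.asarray(x, dtype=float)
            self.weights = [w * self.decay for w in self.weights]   # decay all clusters
            if self.centroids:
                dists = [np.linalg.norm(x - c) for c in self.centroids]
                k = int(np.argmin(dists))
                if dists[k] <= self.vigilance:
                    # Reinforce the matching cluster and nudge its centroid toward x.
                    self.weights[k] += 1.0
                    self.centroids[k] += 0.1 * (x - self.centroids[k])
                    return 1.0 - self.weights[k] / sum(self.weights)
            # No match: a brand-new cluster, maximally rare.
            self.centroids.append(x.copy())
            self.weights.append(1.0)
            return 1.0

    if __name__ == "__main__":
        stats = ArtLikeStatistics()
        for obs in [[0.5, 0.5]] * 20 + [[0.9, 0.1]]:
            r = stats.observe(obs)
        print(round(r, 3))   # the last, unusual observation is rated rare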
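
Publication 20140003720 describes approximating a reflectance component of the foreground image using the background model as a proxy for illuminance, then discarding pixels whose approximated reflectance falls below a threshold. The sketch below uses a log-ratio magnitude as a stand-in for that division and a fixed threshold; both are assumptions.

    # Illustrative sketch for publication 20140003720: dropping illumination-induced
    # foreground pixels via an approximated reflectance component.
    import numpy as np

    def remove_illumination_false_positives(fg_img, bg_img, fg_mask, threshold=0.15):
        """Return a copy of fg_mask with likely illumination-induced pixels cleared."""
        fg = fg_img.astype(float)
        bg = bg_img.astype(float)
        # Approximated reflectance: how far the pixel departs from a pure illumination
        # change of the background model (log-ratio magnitude).
        reflectance = np.abs(np.log1p(fg) - np.log1p(bg))
        return fg_mask & (reflectance >= threshold)

    if __name__ == "__main__":
        bg = np.full((4, 4), 100.0)
        fg = bg * 0.9                 # a uniform shadow over the scene
        fg[1, 1] = 30.0               # one genuinely different pixel
        mask = np.ones((4, 4), dtype=bool)
        print(remove_illumination_false_positives(fg, bg, mask).astype(int))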
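
Publication 20140003713 describes comparing the texture around each foreground pixel with the corresponding area of the background model and removing pixels whose correlation score exceeds a threshold. The patch size, the correlation measure, and the threshold in the sketch below are assumptions for illustration.

    # Illustrative sketch for publication 20140003713: texture-correlation cleanup of a
    # foreground patch against the background model.
    import numpy as np

    def patch_correlation(a, b):
        """Normalized correlation of two equally sized patches (1.0 = identical texture).
        Flat patches are treated as matching."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 1.0

    def clean_foreground(frame, background, fg_mask, half=1, threshold=0.9):
        """Remove foreground pixels whose surrounding texture matches the background."""
        cleaned = fg_mask.copy()
        h, w = frame.shape
        ys, xs = np.nonzero(fg_mask)
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            c = patch_correlation(frame[y0:y1, x0:x1].astype(float),
                                  background[y0:y1, x0:x1].astype(float))
            if c > threshold:
                cleaned[y, x] = False
        return cleaned

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        bg = rng.normal(100, 10, (8, 8))
        frame = bg.copy()
        frame[2:5, 2:5] += rng.normal(0, 40, (3, 3))   # a genuinely changed region
        mask = np.ones((8, 8), dtype=bool)             # over-segmented foreground patch
        print(clean_foreground(frame, bg, mask).astype(int))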
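
Publication 20130242093 describes alert directives that force certain events to always, or never, produce a published alert while leaving the learning procedures untouched. The event signatures and the directive store in the sketch below are hypothetical; they only illustrate how such a bypass of the normal publication path could look.

    # Illustrative sketch for publication 20130242093: always/never alert directives.
    ALWAYS, NEVER = "always", "never"

    class AlertPublisher:
        def __init__(self):
            self.directives = {}           # event signature -> ALWAYS or NEVER

        def set_directive(self, signature, directive):
            self.directives[signature] = directive

        def publish(self, event, is_anomalous):
            """Decide whether to publish an alert for an event.
            Directives bypass the normal anomaly test; learning happens elsewhere."""
            directive = self.directives.get(event["signature"])
            if directive == ALWAYS:
                return True
            if directive == NEVER:
                return False
            return is_anomalous            # normal publication path

    if __name__ == "__main__":
        pub = AlertPublisher()
        pub.set_directive("loitering-near-gate", ALWAYS)
        pub.set_directive("truck-at-loading-dock", NEVER)
        print(pub.publish({"signature": "loitering-near-gate"}, is_anomalous=False))   # True
        print(pub.publish({"signature": "truck-at-loading-dock"}, is_anomalous=True))  # False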
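
Patent 8374393 describes validating foreground objects against a motion flow field and filtering spurious ones before tracking. The mean-flow-magnitude test and its threshold in the sketch below are assumptions; the patent does not reduce the validation to this single rule.

    # Illustrative sketch for patent 8374393: filtering spurious foreground blobs with a flow field.
    import numpy as np

    def validate_foreground(blobs, flow, min_mean_flow=0.5):
        """Keep only blobs (boolean masks) whose average flow magnitude is large enough."""
        magnitude = np.hypot(flow[..., 0], flow[..., 1])
        return [mask for mask in blobs if magnitude[mask].mean() >= min_mean_flow]

    if __name__ == "__main__":
        flow = np.zeros((6, 6, 2))
        flow[1:3, 1:3] = [1.5, 0.0]                  # real motion in the top-left corner
        moving = np.zeros((6, 6), dtype=bool)
        moving[1:3, 1:3] = True
        spurious = np.zeros((6, 6), dtype=bool)
        spurious[4:6, 4:6] = True                    # no motion here
        kept = validate_foreground([moving, spurious], flow)
        print(len(kept))                             # 1: the spurious blob is filtered out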
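
Patent 8358834 describes modeling a scene background with an array of ART networks that learn the scene's regularity over time. The sketch below uses one tiny ART-like unit per pixel block; the block size, the vigilance radius, and the foreground rule are illustrative assumptions, not the patented formulation.

    # Illustrative sketch for patent 8358834: an array of small ART-like units as a background model.
    import numpy as np

    class BlockART:
        """A tiny ART-like unit: a list of cluster means with a vigilance radius."""
        def __init__(self, vigilance=12.0):
            self.vigilance, self.means, self.counts = vigilance, [], []

        def update(self, value):
            """Learn the value; return True if it matched the learned background."""
            if self.means:
                k = int(np.argmin([abs(value - m) for m in self.means]))
                if abs(value - self.means[k]) <= self.vigilance:
                    self.counts[k] += 1
                    self.means[k] += (value - self.means[k]) / self.counts[k]
                    return True
            self.means.append(float(value))
            self.counts.append(1)
            return False

    def segment(frame, art_grid, block=4):
        """Return a boolean foreground map at block resolution and update the ART array."""
        h, w = frame.shape[0] // block, frame.shape[1] // block
        fg = np.zeros((h, w), dtype=bool)
        for by in range(h):
            for bx in range(w):
                mean_val = frame[by*block:(by+1)*block, bx*block:(bx+1)*block].mean()
                fg[by, bx] = not art_grid[by][bx].update(mean_val)
        return fg

    if __name__ == "__main__":
        grid = [[BlockART() for _ in range(8)] for _ in range(8)]
        background = np.full((32, 32), 90.0)
        for _ in range(5):                   # observe the empty scene a few times
            segment(background, grid)
        scene = background.copy()
        scene[8:16, 8:16] = 200.0            # an object enters
        print(segment(scene, grid).astype(int))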