Patents by Inventor Bobby Ernest Blythe

Bobby Ernest Blythe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10796164
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Grant
    Filed: February 22, 2019
    Date of Patent: October 6, 2020
    Assignee: Intellective AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
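
    A minimal Python sketch of the matching idea described in the abstract above: the background image is decomposed into windows by a quadtree split, each window yields a phase spectrum via a 2-D FFT, and the resulting signature is compared against stored scene presets. The split criterion, fixed window size, similarity measure, and thresholds are illustrative assumptions, not the patented implementation.

    import numpy as np

    def quadtree_windows(image, min_size=16, var_threshold=100.0):
        """Recursively split the image into windows, splitting further while the
        window's intensity variance exceeds var_threshold (assumed criterion)."""
        h, w = image.shape
        if h <= min_size or w <= min_size or image.var() <= var_threshold:
            return [image]
        half_h, half_w = h // 2, w // 2
        windows = []
        for rows in (slice(0, half_h), slice(half_h, h)):
            for cols in (slice(0, half_w), slice(half_w, w)):
                windows.extend(quadtree_windows(image[rows, cols], min_size, var_threshold))
        return windows

    def phase_spectrum(window, size=(16, 16)):
        """Phase of the 2-D FFT of the window, cropped/zero-padded to a fixed size."""
        fixed = np.zeros(size)
        h, w = min(size[0], window.shape[0]), min(size[1], window.shape[1])
        fixed[:h, :w] = window[:h, :w]
        return np.angle(np.fft.fft2(fixed))

    def scene_signature(image):
        """One phase spectrum per quadtree window of the background image."""
        return [phase_spectrum(w) for w in quadtree_windows(image)]

    def similarity(sig_a, sig_b):
        """Mean cosine of per-pixel phase differences over the windows both signatures share."""
        n = min(len(sig_a), len(sig_b))
        return float(np.mean([np.cos(a - b).mean() for a, b in zip(sig_a[:n], sig_b[:n])]))

    def match_or_create(presets, image, threshold=0.8):
        """Update the best-matching scene preset, or add a new preset when nothing matches."""
        sig = scene_signature(image)
        scores = [similarity(sig, p["signature"]) for p in presets]
        if scores and max(scores) >= threshold:
            best = presets[int(np.argmax(scores))]
            best["signature"] = sig        # refresh the matched preset
            return best
        preset = {"signature": sig}        # no match: create a new scene preset
        presets.append(preset)
        return preset
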
  • Patent number: 10489679
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Grant
    Filed: July 22, 2014
    Date of Patent: November 26, 2019
    Assignee: Avigilon Patent Holding 1 Corporation
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
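
    A hedged sketch of the user-guidance behaviour mentioned at the end of the abstract above: a percept record carries a user-supplied label and an alert policy, and the policy overrides the system's learned decision. The field names and policy values are assumptions for illustration, not the patented data model.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        percept_id: int
        label: str = ""             # user-supplied label shown in the interface
        alert_policy: str = "auto"  # "always", "never", or "auto" (let the system decide)

    def should_alert(percept: Percept, anomaly_score: float, auto_threshold: float = 0.9) -> bool:
        """Apply the user's override first, then fall back to the learned anomaly score."""
        if percept.alert_policy == "always":
            return True
        if percept.alert_policy == "never":
            return False
        return anomaly_score >= auto_threshold
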
  • Publication number: 20190258867
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 22, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
  • Patent number: 10248869
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 2, 2019
    Assignee: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
  • Publication number: 20180247133
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Application
    Filed: September 29, 2017
    Publication date: August 30, 2018
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
  • Patent number: 9805271
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: October 31, 2017
    Assignee: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
  • Publication number: 20150078656
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Application
    Filed: July 22, 2014
    Publication date: March 19, 2015
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Patent number: 8797405
    Abstract: Techniques are disclosed for visually conveying classifications derived from pixel-level micro-features extracted from image data. The image data may include an input stream of video frames depicting one or more foreground objects. The classifications represent information learned by a video surveillance system. A request may be received to view a classification. A visual representation of the classification may be generated. A user interface may be configured to display the visual representation of the classification and to allow a user to view and/or modify properties associated with the classification.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: August 5, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu
  • Patent number: 8786702
    Abstract: Techniques are disclosed for visually conveying a percept. The percept may represent information learned by a video surveillance system. A request may be received to view a percept for a specified scene. The percept may have been derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. A visual representation of the percept may be generated. A user interface may be configured to display the visual representation of the percept and to allow a user to view and/or modify metadata attributes associated with the percept. For example, the user may label a percept and set events matching the percept to always (or never) result in an alert being generated for users of the video surveillance system.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: July 22, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
  • Patent number: 8705861
    Abstract: Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to the video capturing device providing the stream of video frames and to other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.
    Type: Grant
    Filed: June 12, 2012
    Date of Patent: April 22, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
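
    An illustrative sketch of the annotated scene-map idea in the abstract above: background regions are merged when statistically similar and then ordered by z-depth from pairwise occlusion evidence. The region statistics, merge rule, and ordering heuristic are simplifying assumptions rather than the patented method.

    from dataclasses import dataclass, field

    @dataclass
    class Region:
        region_id: int
        mean_intensity: float
        occludes: set = field(default_factory=set)  # ids of regions this one was seen to occlude

    def merge_similar(regions, tol=5.0):
        """Greedily merge regions whose mean intensities differ by less than tol (assumed statistic)."""
        merged = []
        for region in sorted(regions, key=lambda r: r.mean_intensity):
            if merged and abs(merged[-1].mean_intensity - region.mean_intensity) < tol:
                merged[-1].occludes |= region.occludes
            else:
                merged.append(region)
        return merged

    def z_order(regions):
        """Regions that occlude more of the others are treated as nearer the camera."""
        return sorted(regions, key=lambda r: len(r.occludes), reverse=True)

    # Example: the region seen occluding the other two sorts nearest (depth 0).
    scene = [Region(0, 40.0, {1, 2}), Region(1, 120.0), Region(2, 200.0)]
    annotated_map = [(r.region_id, depth) for depth, r in enumerate(z_order(merge_similar(scene)))]
    print(annotated_map)
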
  • Patent number: 8625884
    Abstract: Techniques are disclosed for visually conveying an event map. The event map may represent information learned by a surveillance system. A request may be received to view the event map for a specified scene. The event map may be generated, including a background model of the specified scene and at least one cluster providing a statistical distribution of an event in the specified scene. Each statistical distribution may be derived from data streams generated from a sequence of video frames depicting the specified scene captured by a video camera. Each event may be observed to occur at a location in the specified scene corresponding to a location of the respective cluster in the event map. The event map may be configured to allow a user to view and/or modify properties associated with each cluster. For example, the user may label a cluster and set events matching the cluster to always (or never) generate an alert.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: January 7, 2014
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Ming-Jung Seow
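
    A hedged sketch of the clustering behind the event map described above: events observed at (x, y) locations are folded into per-location clusters whose running means summarize where each event tends to occur, and each cluster carries user-editable label and alert metadata. The distance rule and radius are illustrative assumptions, not the patented statistics.

    import math

    class EventCluster:
        def __init__(self, x, y):
            self.n, self.mean_x, self.mean_y = 1, x, y
            self.label, self.alert_policy = "", "auto"   # user-editable metadata

        def distance(self, x, y):
            return math.hypot(x - self.mean_x, y - self.mean_y)

        def update(self, x, y):
            """Fold a new observation into the running mean location."""
            self.n += 1
            self.mean_x += (x - self.mean_x) / self.n
            self.mean_y += (y - self.mean_y) / self.n

    def observe(clusters, x, y, radius=25.0):
        """Assign the event to the nearest cluster within radius, else start a new cluster."""
        nearest = min(clusters, key=lambda c: c.distance(x, y), default=None)
        if nearest is not None and nearest.distance(x, y) <= radius:
            nearest.update(x, y)
            return nearest
        cluster = EventCluster(x, y)
        clusters.append(cluster)
        return cluster
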
  • Patent number: 8620028
    Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame-to-frame. Classes of the objects are determined, and semantic representations of the objects are generated. The semantic representations are used to determine the objects' behaviors and to learn about behaviors occurring in the environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, the normal and abnormal behaviors for any environment by analyzing movements or activities (or the absence thereof) in the environment, and it identifies and predicts abnormal and suspicious behavior based on what has been learned.
    Type: Grant
    Filed: March 6, 2012
    Date of Patent: December 31, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
  • Patent number: 8493409
    Abstract: Techniques are disclosed for visually conveying a sequence storing an ordered string of symbols generated from kinematic data derived from analyzing an input stream of video frames depicting one or more foreground objects. The sequence may represent information learned by a video surveillance system. A request may be received to view the sequence or a segment partitioned from the sequence. A visual representation of the segment may be generated and superimposed over a background image associated with the scene. A user interface may be configured to display the visual representation of the sequence or segment and to allow a user to view and/or modify properties associated with the sequence or segment.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: July 23, 2013
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
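
    A small sketch of the symbol generation the abstract above presupposes: each frame's velocity is quantized into one of a few discrete symbols, producing the ordered string that is later segmented and visualized. The symbol alphabet and thresholds are illustrative assumptions, not the patented encoding.

    import math

    def kinematic_symbol(velocity):
        """Map a (vx, vy) velocity to a coarse symbol: stopped, or a quadrant of heading."""
        vx, vy = velocity
        if math.hypot(vx, vy) < 0.5:
            return "S"                                         # stopped / near-stationary
        heading = math.degrees(math.atan2(vy, vx)) % 360
        return "ENWX"[int(((heading + 45) % 360) // 90)]       # E, N, W, or X (southward)

    track = [(1.0, 0.0), (1.0, 0.1), (0.0, 1.2), (0.0, 0.0)]
    print("".join(kinematic_symbol(v) for v in track))         # prints "EENS"
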
  • Patent number: 8300924
    Abstract: A tracker component for a computer vision engine of a machine-learning based behavior-recognition system is disclosed. The behavior-recognition system may be configured to learn, identify, and recognize patterns of behavior by observing a video stream (i.e., a sequence of individual video frames). The tracker component may be configured to track objects depicted in the sequence of video frames and to generate, search, match, and update computational models of such objects.
    Type: Grant
    Filed: September 11, 2008
    Date of Patent: October 30, 2012
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Tao Yang, Bobby Ernest Blythe
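
    A minimal sketch in the spirit of the tracker described above: each tracked object keeps a simple computational model (here just its last centroid), new detections are matched to the nearest existing track, and unmatched detections start new tracks. The nearest-centroid matching rule and distance bound are assumptions for illustration, not the patented tracker.

    import math
    from itertools import count

    _track_ids = count()

    def update_tracks(tracks, detections, max_distance=50.0):
        """tracks: {track_id: (x, y)}; detections: list of (x, y) centroids for the new frame."""
        updated, unmatched = {}, list(detections)
        for track_id, (tx, ty) in tracks.items():
            if not unmatched:
                break
            nearest = min(unmatched, key=lambda d: math.hypot(d[0] - tx, d[1] - ty))
            if math.hypot(nearest[0] - tx, nearest[1] - ty) <= max_distance:
                updated[track_id] = nearest        # match: update the object's model
                unmatched.remove(nearest)
        for detection in unmatched:                # no match: a new object enters the scene
            updated[next(_track_ids)] = detection
        return updated

    tracks = update_tracks({}, [(10, 10), (200, 50)])
    tracks = update_tracks(tracks, [(14, 12), (205, 48)])
    print(tracks)   # the two objects keep their original track ids across frames
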
  • Patent number: 8295591
    Abstract: A sequence layer in a machine-learning engine configured to learn from the observations of a computer vision engine is disclosed. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines sliding window length and vote count parameters. Once these parameters are determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: October 23, 2012
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Kishor Adinath Saitwal, Gang Xu
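
    An illustrative, voting-experts-flavoured approximation of the sequence layer described above: label sequences (for example, ART cluster labels) are folded into n-gram counts, the entropy of each n-gram's continuations is computed, and candidate segment boundaries are placed where that entropy is high. The counting scheme, window length, and threshold are assumptions, not the patented sequence layer.

    import math
    from collections import Counter, defaultdict

    def ngram_counts(sequence, max_n=3):
        """Counts of every n-gram up to max_n, plus counts of each n-gram's continuations."""
        counts, continuations = Counter(), defaultdict(Counter)
        for n in range(1, max_n + 1):
            for i in range(len(sequence) - n + 1):
                gram = tuple(sequence[i:i + n])
                counts[gram] += 1
                if i + n < len(sequence):
                    continuations[gram][sequence[i + n]] += 1
        return counts, continuations

    def continuation_entropy(gram, continuations):
        """Shannon entropy of what follows a given n-gram (high = unpredictable = likely boundary)."""
        follow = continuations.get(gram)
        if not follow:
            return 0.0
        total = sum(follow.values())
        return -sum(c / total * math.log2(c / total) for c in follow.values())

    def segment(sequence, window=2, threshold=0.5, max_n=3):
        """Place a segment boundary after positions where the trailing window's entropy is high."""
        _, continuations = ngram_counts(sequence, max_n)
        segments, start = [], 0
        for i in range(window, len(sequence)):
            gram = tuple(sequence[i - window:i])
            if continuation_entropy(gram, continuations) >= threshold:
                segments.append(sequence[start:i])
                start = i
        segments.append(sequence[start:])
        return segments

    labels = list("ababab" "xyz" "ababab")   # two repeated motifs joined by a rarer one
    print(segment(labels))                   # boundaries tend to fall where the next label is least predictable
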
  • Publication number: 20120257831
    Abstract: Embodiments of the present invention provide a method and a system for mapping a scene depicted in an acquired stream of video frames that may be used by a machine-learning behavior-recognition system. A background image of the scene is segmented into a plurality of regions representing various objects of the background image. Statistically similar regions may be merged and associated. The regions are analyzed to determine their z-depth order in relation to the video capturing device providing the stream of video frames and to other regions, using occlusions between the regions and data about foreground objects in the scene. An annotated map describing the identified regions and their properties is created and updated.
    Type: Application
    Filed: June 12, 2012
    Publication date: October 11, 2012
    Applicant: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 8285046
    Abstract: Techniques are disclosed for a computer vision engine to update both a background model and the thresholds used to classify pixels as depicting scene foreground or background in response to detecting that a sudden illumination change has occurred in a sequence of video frames. The threshold values may be used to specify how much a given pixel may differ from the corresponding values in the background model before being classified as depicting foreground. When a sudden illumination change is detected, the values for pixels affected by the sudden illumination change may be used to update the values in the background image to reflect the value for each such pixel following the sudden illumination change, as well as to update the threshold for classifying that pixel as depicting foreground or background in subsequent frames of video.
    Type: Grant
    Filed: February 18, 2009
    Date of Patent: October 9, 2012
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Kishor Adinath Saitwal, Bobby Ernest Blythe, Tao Yang
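
    A hedged NumPy sketch of the idea in the abstract above: per-pixel background values and per-pixel thresholds decide foreground versus background, and when a sudden illumination change is detected (here, crudely, when most of the frame flips to foreground at once) both the background model and the thresholds are refreshed. The detection rule and constants are illustrative assumptions, not the patented technique.

    import numpy as np

    class BackgroundModel:
        def __init__(self, first_frame, base_threshold=20.0):
            self.background = first_frame.astype(float)
            self.threshold = np.full(first_frame.shape, base_threshold)

        def classify(self, frame):
            """True where the pixel differs from the background by more than its threshold."""
            return np.abs(frame - self.background) > self.threshold

        def update(self, frame, sic_fraction=0.7, learning_rate=0.05):
            foreground = self.classify(frame)
            if foreground.mean() > sic_fraction:
                # Assumed sudden-illumination-change handling: adopt the new appearance for
                # affected pixels and loosen their thresholds so they stop reading as foreground.
                self.background[foreground] = frame[foreground]
                self.threshold[foreground] *= 1.5
            else:
                # Normal operation: slowly blend background pixels toward the new frame.
                self.background[~foreground] += learning_rate * (frame[~foreground] - self.background[~foreground])
            return foreground

    frame0 = np.full((4, 4), 100.0)
    model = BackgroundModel(frame0)
    print(model.update(frame0 + 60).mean())   # lights switch on: the whole frame reads as changed
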
  • Patent number: 8280153
    Abstract: Techniques are disclosed for visually conveying a trajectory map. The trajectory map provides users with a visualization of data observed by a machine-learning engine of a behavior recognition system. Further, the visualization may provide an interface used to guide system behavior. For example, the interface may be used to specify that the behavior recognition system should alert (or not alert) when a particular trajectory is observed to occur.
    Type: Grant
    Filed: August 18, 2009
    Date of Patent: October 2, 2012
    Assignee: Behavioral Recognition Systems, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Ming-Jung Seow, Gang Xu
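
    A minimal sketch of the alert-guidance idea mentioned above: stored trajectories carry an alert flag set through the interface, and an observed trajectory is compared against them by a simple average point-to-point distance. The matching metric, threshold, and default behaviour are assumptions for illustration.

    import math

    def trajectory_distance(a, b):
        """Average distance between corresponding points of two trajectories (truncated to the shorter)."""
        n = min(len(a), len(b))
        return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a[:n], b[:n])) / n

    def should_alert(known_trajectories, observed, max_distance=30.0):
        """known_trajectories: list of (points, alert_flag) pairs set via the user interface."""
        for points, alert_flag in known_trajectories:
            if trajectory_distance(points, observed) <= max_distance:
                return alert_flag          # user guidance: alert (or suppress) on this path
        return True                        # unknown trajectories default to alerting

    known = [([(0, 0), (10, 0), (20, 0)], False)]           # a path the user marked "never alert"
    print(should_alert(known, [(1, 2), (11, 1), (19, 3)]))  # False: matches the suppressed path
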
  • Publication number: 20120163670
    Abstract: Embodiments of the present invention provide a method and a system for analyzing and learning behavior based on an acquired stream of video frames. Objects depicted in the stream are determined based on an analysis of the video frames. Each object may have a corresponding search model used to track the object's motion frame-to-frame. Classes of the objects are determined, and semantic representations of the objects are generated. The semantic representations are used to determine the objects' behaviors and to learn about behaviors occurring in the environment depicted by the acquired video streams. In this way, the system rapidly learns, in real time, the normal and abnormal behaviors for any environment by analyzing movements or activities (or the absence thereof) in the environment, and it identifies and predicts abnormal and suspicious behavior based on what has been learned.
    Type: Application
    Filed: March 6, 2012
    Publication date: June 28, 2012
    Applicant: Behavioral Recognition Systems, Inc.
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis Gene Urech, Bobby Ernest Blythe, David Samuel Friedlander, Rajkiran Kumar Gottumukkal, Lon William Risinger, Kishor Adinath Saitwal, Ming-Jung Seow, David Marvin Solum, Gang Xu, Tao Yang
  • Patent number: RE45901
    Abstract: A server-side recycle bin system for retaining computer files and information is disclosed. The system comprises a local computer system and a server that includes a server-side recycle bin. One or more persistent storage devices, providing the files and directories to be protected, are present either as part of the local computer system or as part of the server. The local computer system and the server may be connected via a wide area computer network, a local area network, the Internet, or any other method or combination of methods. A file manager application running on the local computer system interacts with a file serving application on the server such that a retained file is generated in the server-side recycle bin.
    Type: Grant
    Filed: May 21, 2009
    Date of Patent: February 23, 2016
    Assignee: Synacor, Inc.
    Inventor: Bobby Ernest Blythe
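
    A minimal sketch of the retention behaviour described in the abstract above: instead of unlinking a file, the serving side moves it into a recycle-bin directory so it can later be restored. The paths and naming scheme are illustrative assumptions, not the patented system.

    import shutil
    import time
    from pathlib import Path

    RECYCLE_BIN = Path("/srv/share/.recycle_bin")   # assumed server-side location

    def retain_on_delete(path: Path) -> Path:
        """Move the file into the recycle bin under a timestamped name and return that name."""
        RECYCLE_BIN.mkdir(parents=True, exist_ok=True)
        retained = RECYCLE_BIN / f"{int(time.time())}_{path.name}"
        shutil.move(str(path), str(retained))
        return retained

    def restore(retained: Path, destination: Path) -> None:
        """Move a retained file back to its original (or a new) location."""
        shutil.move(str(retained), str(destination))
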