Patents by Inventor Lon W. Risinger

Lon W. Risinger has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10303955
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 28, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
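The abstract above describes combining pixel-level modeling with an absorption window that folds long-lived foreground back into the background. The following is a minimal, hypothetical sketch of just that absorption idea using a single per-pixel running average; the class name and the parameters (`threshold`, `alpha`, `absorb_after`) are illustrative assumptions, not the patented pixel-plus-context method.

```python
# Illustrative sketch only (not the patented method): a per-pixel running-average
# background model with a simple "absorption window" -- pixels that stay
# foreground long enough are folded back into the background.
import numpy as np

class SimpleBackgroundModel:
    def __init__(self, shape, threshold=30.0, alpha=0.02, absorb_after=50):
        self.mean = np.zeros(shape, dtype=np.float32)   # per-pixel background estimate
        self.threshold = threshold                      # foreground/background cutoff
        self.alpha = alpha                              # learning rate for background updates
        self.absorb_after = absorb_after                # frames before a foreground pixel is absorbed
        self.fg_age = np.zeros(shape, dtype=np.int32)   # consecutive frames a pixel was foreground
        self.initialized = False

    def apply(self, frame):
        frame = frame.astype(np.float32)
        if not self.initialized:
            self.mean[...] = frame
            self.initialized = True
        diff = np.abs(frame - self.mean)
        foreground = diff > self.threshold
        # Track how long each pixel has been foreground; absorb stale foreground.
        self.fg_age = np.where(foreground, self.fg_age + 1, 0)
        absorb = self.fg_age >= self.absorb_after
        update = (~foreground) | absorb
        self.mean = np.where(update, (1 - self.alpha) * self.mean + self.alpha * frame, self.mean)
        self.fg_age[absorb] = 0
        return foreground & ~absorb   # foreground mask after absorption

# Example usage on grayscale frames:
model = SimpleBackgroundModel((480, 640))
mask = model.apply(np.random.randint(0, 256, (480, 640), dtype=np.uint8))
```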
  • Publication number: 20190122048
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: December 19, 2018
    Publication date: April 25, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
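For orientation only, the sketch below shows one generic way a primitive-event symbol stream and a phase-space symbol stream could be reduced to a fixed-length vector (normalized symbol histograms concatenated together). The alphabets and function names are invented for the example; the publication's actual vector representation is not reproduced here.

```python
# Illustrative sketch only: build a bag-of-symbols vector from two symbol
# streams. The alphabets below are made up for the example.
from collections import Counter

PRIMITIVE_ALPHABET = ["appear", "move", "stop", "turn", "disappear"]
PHASE_SPACE_ALPHABET = ["slow", "fast", "near", "far"]

def symbol_histogram(stream, alphabet):
    counts = Counter(stream)
    total = max(len(stream), 1)
    return [counts[s] / total for s in alphabet]   # normalized symbol frequencies

def combined_vector(primitive_stream, phase_stream):
    # Concatenate the two normalized histograms into one vector representation.
    return (symbol_histogram(primitive_stream, PRIMITIVE_ALPHABET)
            + symbol_histogram(phase_stream, PHASE_SPACE_ALPHABET))

vec = combined_vector(["appear", "move", "move", "stop"], ["slow", "slow", "fast"])
```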
  • Patent number: 10198636
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: February 5, 2019
    Assignee: Avigilon Patent Holding 1 Corporation
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 10187415
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency by which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Grant
    Filed: March 26, 2017
    Date of Patent: January 22, 2019
    Assignee: Omni AI, Inc.
    Inventors: Ming-Jung Seow, Wesley Kenneth Cobb, Gang Xu, Tao Yang, Aaron Poffenberger, Lon W. Risinger, Kishor Adinath Saitwal, Michael S. Yantosca, David M. Solum, Alex David Hemsath, Dennis G. Urech, Duy Trong Nguyen, Charles Richard Morgan
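As a rough illustration of the symbols-to-words-to-phrases pipeline described in the abstract above, the toy sketch below treats frequent symbol n-grams as dictionary "words" and frequent adjacent word pairs as "phrases". The n-gram length, frequency thresholds, and sample stream are arbitrary choices for the example, not values from the disclosure.

```python
# Toy sketch of the symbols -> words -> phrases idea: frequent runs of symbols
# become dictionary "words", and frequent adjacent word pairs become "phrases".
from collections import Counter

def frequent_ngrams(sequence, n, min_count):
    grams = [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]
    counts = Counter(grams)
    return {g for g, c in counts.items() if c >= min_count}

symbols = list("abcdefabcdefabcdef")                      # stand-in symbol stream
words = frequent_ngrams(symbols, n=3, min_count=2)        # e.g. ('a','b','c'), ('d','e','f')

# Re-express the symbol stream as words, then look for frequent word pairs.
word_stream = []
i = 0
while i < len(symbols) - 2:
    gram = tuple(symbols[i:i + 3])
    if gram in words:
        word_stream.append(gram)
        i += 3
    else:
        i += 1
phrases = frequent_ngrams(word_stream, n=2, min_count=2)  # frequent word bigrams
```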
  • Publication number: 20190012761
    Abstract: Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allows other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying what pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel.
    Type: Application
    Filed: March 23, 2018
    Publication date: January 10, 2019
    Applicant: OMNI AI, INC.
    Inventors: Lon W. RISINGER, Kishor Adinath SAITWAL, Wesley Kenneth COBB
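To make the "same object across successive frames" problem concrete, here is a generic greedy IoU matcher between tracked boxes and new detections. It is a common baseline shown only for context; it is not the detected object tracker described in this publication.

```python
# Illustrative only: greedy IoU-based matching between detections in
# successive frames, a common baseline for deciding which foreground blobs
# in frame t+1 correspond to tracked objects from frame t.

def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, min_iou=0.3):
    # Greedily pair each existing track with the best-overlapping detection.
    matches, used = {}, set()
    for track_id, box in tracks.items():
        best, best_j = min_iou, None
        for j, det in enumerate(detections):
            if j in used:
                continue
            score = iou(box, det)
            if score >= best:
                best, best_j = score, j
        if best_j is not None:
            matches[track_id] = best_j
            used.add(best_j)
    return matches  # unmatched detections can start new tracks

tracks = {1: (10, 10, 50, 50), 2: (100, 100, 140, 160)}
detections = [(12, 11, 52, 49), (160, 100, 200, 160)]
print(match_detections(tracks, detections))   # {1: 0}
```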
  • Patent number: 10102642
    Abstract: Techniques are disclosed for generating a low-dimensional representation of an image. An image driver receives an image captured by a camera. The image includes features based on pixel values in the image, and each feature describes the image in one or more image regions. The image driver generates, for each of the plurality of features, a feature vector that includes values for that feature corresponding to at least one of the image regions. Each value indicates a degree that the feature is present in the image region. The image driver generates a sample vector from each of the feature vectors. The sample vector includes each of the values included in the generated feature vectors.
    Type: Grant
    Filed: November 25, 2015
    Date of Patent: October 16, 2018
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb, Ming-Jung Seow, Gang Xu
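The abstract's per-region feature values concatenated into a single sample vector can be pictured with the small sketch below, which uses mean brightness and gradient energy over a 2x2 grid as stand-in features; the specific features, grid, and normalization are assumptions for illustration only.

```python
# Illustrative sketch: per-region feature values (mean brightness and a crude
# gradient-energy measure) concatenated into one sample vector.
import numpy as np

def region_features(image, grid=(2, 2)):
    h, w = image.shape
    rows, cols = grid
    brightness, gradient = [], []
    for r in range(rows):
        for c in range(cols):
            patch = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols].astype(np.float32)
            brightness.append(patch.mean() / 255.0)          # degree of brightness in the region
            gy, gx = np.gradient(patch)
            gradient.append(float(np.hypot(gx, gy).mean()))  # degree of edge activity in the region
    return brightness, gradient

def sample_vector(image):
    # Concatenate every per-region feature vector into a single sample vector.
    brightness, gradient = region_features(image)
    return np.array(brightness + gradient, dtype=np.float32)

vec = sample_vector(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```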
  • Patent number: 10049293
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: August 14, 2018
    Assignee: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
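As a loose illustration of unsupervised micro-features feeding a clustering step, the sketch below derives three toy per-object features and assigns them to groups with a simple online nearest-centroid rule; the feature set, distance threshold, and update rule are invented for the example and are not the patented extractor or micro-classifier.

```python
# Illustrative sketch of the overall flow: unsupervised per-object
# "micro-features" (area, aspect ratio, mean intensity) fed to a simple
# online clusterer that groups objects by feature similarity.
import math

def micro_features(blob):
    # blob: dict with bounding box and mean intensity of a detected object.
    w = blob["x2"] - blob["x1"]
    h = blob["y2"] - blob["y1"]
    return [w * h / 10000.0, w / float(h), blob["intensity"] / 255.0]

class OnlineClusterer:
    """Assigns each micro-feature vector to the nearest cluster, creating a
    new cluster when nothing is close enough (no training labels needed)."""
    def __init__(self, max_dist=0.5):
        self.centroids = []
        self.max_dist = max_dist

    def assign(self, vec):
        best_i, best_d = None, self.max_dist
        for i, c in enumerate(self.centroids):
            d = math.dist(vec, c)
            if d < best_d:
                best_i, best_d = i, d
        if best_i is None:
            self.centroids.append(list(vec))
            return len(self.centroids) - 1
        # Nudge the winning centroid toward the new sample (self-training).
        self.centroids[best_i] = [0.9 * c + 0.1 * v for c, v in zip(self.centroids[best_i], vec)]
        return best_i

clusterer = OnlineClusterer()
cluster_id = clusterer.assign(micro_features({"x1": 0, "y1": 0, "x2": 40, "y2": 90, "intensity": 120}))
```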
  • Patent number: 10043100
    Abstract: Techniques are disclosed for generating logical sensors for an image driver. The image driver monitors values corresponding to at least a first feature in one or more regions of a first image in a stream of images received by a first sensor. The image driver identifies at least a first correlation between at least a first and second value of the monitored values. The image driver generates a logical sensor based on the identified correlations. The logical sensor samples one or more features corresponding to the identified correlation from a second image in the stream of images.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: August 7, 2018
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
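The logical-sensor idea described above (monitor feature values, detect a correlation between them, then sample the correlated features together from later images) can be sketched generically as below; the correlation threshold, helper names, and brightness features are illustrative assumptions only.

```python
# Toy sketch: watch two per-region feature series, and if they turn out to be
# strongly correlated, create a "logical sensor" that samples both together.
import numpy as np

def make_logical_sensor(history_a, history_b, sample_a, sample_b, min_corr=0.8):
    corr = np.corrcoef(history_a, history_b)[0, 1]
    if abs(corr) < min_corr:
        return None                       # no strong correlation, no sensor generated
    def logical_sensor(image):
        # Sample the correlated pair of features from a new image.
        return sample_a(image), sample_b(image)
    return logical_sensor

# Example features: mean brightness of the top and bottom halves of a frame.
top_mean = lambda img: float(img[: img.shape[0] // 2].mean())
bottom_mean = lambda img: float(img[img.shape[0] // 2 :].mean())

history_top = [10.0, 12.0, 15.0, 20.0, 22.0]
history_bottom = [11.0, 13.0, 14.0, 21.0, 23.0]
sensor = make_logical_sensor(history_top, history_bottom, top_mean, bottom_mean)
if sensor is not None:
    reading = sensor(np.random.randint(0, 256, (64, 64)))
```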
  • Publication number: 20180204068
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: March 14, 2018
    Publication date: July 19, 2018
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Patent number: 9965382
    Abstract: Techniques are disclosed for dynamic memory allocation in a behavioral recognition system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory are allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
    Type: Grant
    Filed: April 4, 2016
    Date of Patent: May 8, 2018
    Assignee: Omni AI, Inc.
    Inventors: Lon W. Risinger, Kishor Adinath Saitwal
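For intuition about the chunk-based pool described above, here is a generic host-side sketch of the bookkeeping: pre-allocated variable-sized chunks, smallest-fit allocation on request, and release back to the pool. The patent targets device (GPU) memory for real-time streams; a `bytearray` merely stands in for a device buffer here.

```python
# Generic illustration of a chunk-based memory pool (bookkeeping only):
# pre-allocate variable-sized chunks, hand out the smallest free chunk that
# fits a request, and return it to the pool on release.
class MemoryPool:
    def __init__(self, chunk_sizes):
        # Each chunk is (size, buffer); a bytearray stands in for device memory.
        self.free_chunks = sorted(((size, bytearray(size)) for size in chunk_sizes),
                                  key=lambda c: c[0])
        self.in_use = []

    def allocate(self, nbytes):
        for i, (size, buf) in enumerate(self.free_chunks):
            if size >= nbytes:                   # smallest free chunk that fits
                chunk = self.free_chunks.pop(i)
                self.in_use.append(chunk)
                return chunk[1]
        return None                              # no chunk available for this request

    def release(self, buf):
        for i, (size, b) in enumerate(self.in_use):
            if b is buf:
                self.free_chunks.append(self.in_use.pop(i))
                self.free_chunks.sort(key=lambda c: c[0])
                return

pool = MemoryPool([1024, 4096, 65536])
buf = pool.allocate(3000)        # returns the 4096-byte chunk
pool.release(buf)
```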
  • Patent number: 9946934
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: April 21, 2017
    Date of Patent: April 17, 2018
    Assignee: Avigilon Patent Holding 1 Corporation
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Publication number: 20180082130
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: April 28, 2017
    Publication date: March 22, 2018
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Publication number: 20180082442
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and a context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: April 28, 2017
    Publication date: March 22, 2018
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Publication number: 20180046613
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency by which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: March 26, 2017
    Publication date: February 15, 2018
    Applicant: Omni AI, Inc.
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
  • Publication number: 20180032834
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: March 16, 2017
    Publication date: February 1, 2018
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
  • Publication number: 20170287104
    Abstract: Techniques are disclosed for dynamic memory allocation in a behavioral recognition system. According to one embodiment of the disclosure, input data is received from each of a plurality of data streams. A composite of the input data is generated from each of the data streams in a host memory. The composite of the input data is transferred to a device memory. The composite of the input data is processed in parallel via the host memory on the CPU and the device memory on the GPU.
    Type: Application
    Filed: April 4, 2016
    Publication date: October 5, 2017
    Inventors: Lon W. RISINGER, Kishor Adinath SAITWAL
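The batching idea in this publication, composing frames from many streams into one contiguous host buffer so a single transfer and one parallel pass can cover all of them, is sketched below in host-only form; the device-side step is deliberately a stub and no real GPU API is shown.

```python
# Rough sketch of the batching idea only: frames from several streams are
# packed into one contiguous host-memory "composite" so a single
# host-to-device copy and one parallel kernel launch could cover all streams.
import numpy as np

def build_composite(frames):
    # frames: list of equally shaped HxW arrays, one per camera stream.
    return np.ascontiguousarray(np.stack(frames, axis=0))   # shape: (num_streams, H, W)

def process_composite(composite):
    # Stand-in for the device-side work: in a real system the composite would
    # be copied to GPU memory and each stream processed by parallel threads.
    return composite.mean(axis=(1, 2))                      # one value per stream

streams = [np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(4)]
composite = build_composite(streams)
per_stream_result = process_composite(composite)
```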
  • Publication number: 20170286284
    Abstract: Techniques are disclosed for dynamic memory allocation in a behavioral recognition system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory are allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
    Type: Application
    Filed: April 4, 2016
    Publication date: October 5, 2017
    Inventors: Lon W. RISINGER, Kishor Adinath SAITWAL
  • Publication number: 20170286800
    Abstract: Techniques are disclosed for generating logical sensors for an image driver. The image driver monitors values corresponding to at least a first feature in one or more regions of a first image in a stream of images received by a first sensor. The image driver identifies at least a first correlation between at least a first and second value of the monitored values. The image driver generates a logical sensor based on the identified correlations. The logical sensor samples one or more features corresponding to the identified correlation from a second image in the stream of images.
    Type: Application
    Filed: April 5, 2016
    Publication date: October 5, 2017
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Publication number: 20170228598
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: April 21, 2017
    Publication date: August 10, 2017
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Patent number: 9665774
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: October 28, 2016
    Date of Patent: May 30, 2017
    Assignee: Avigilon Patent Holding 1 Corporation
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal