Patents by Inventor Lon W. Risinger

Lon W. Risinger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230005238
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: September 13, 2022
    Publication date: January 5, 2023
    Applicant: Intellective Ai, Inc.
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
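The extract-then-cluster pipeline this abstract describes can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation; `micro_feature_vector`, `MicroClassifier`, and every parameter choice here are hypothetical:

```python
# Unsupervised pixel-level feature extraction plus an online
# micro-classifier that groups feature vectors into object-type
# clusters -- no training data or predefined object classes.

def micro_feature_vector(patch):
    """Summarize an image patch (2-D list of gray values) as a small
    feature vector: mean intensity, contrast (range), and a rough
    edge-density estimate."""
    pixels = [p for row in patch for p in row]
    mean = sum(pixels) / len(pixels)
    contrast = max(pixels) - min(pixels)
    # Edge density: fraction of horizontal neighbors differing strongly.
    edges = total = 0
    for row in patch:
        for a, b in zip(row, row[1:]):
            total += 1
            edges += abs(a - b) > 32
    return [mean, contrast, edges / max(total, 1)]

class MicroClassifier:
    """Online clustering of micro-feature vectors; self-trains as
    vectors arrive, creating a new cluster when none is close enough."""
    def __init__(self, radius=40.0):
        self.radius, self.centers, self.counts = radius, [], []

    def assign(self, vec):
        # Find the nearest existing cluster center.
        best, best_d = None, None
        for i, c in enumerate(self.centers):
            d = sum((a - b) ** 2 for a, b in zip(vec, c)) ** 0.5
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= self.radius:
            # Absorb the vector: running-mean update of the center.
            n = self.counts[best] + 1
            self.centers[best] = [(c * (n - 1) + v) / n
                                  for c, v in zip(self.centers[best], vec)]
            self.counts[best] = n
            return best
        # Too far from every cluster: start a new object-type cluster.
        self.centers.append(list(vec))
        self.counts.append(1)
        return len(self.centers) - 1
```

With this sketch, a flat dark patch and a high-contrast textured patch land in different clusters, while repeated observations of similar patches reinforce an existing cluster.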
  • Patent number: 11468660
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Grant
    Filed: July 17, 2020
    Date of Patent: October 11, 2022
    Assignee: Intellective Ai, Inc.
    Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
  • Publication number: 20220006825
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency with which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: July 6, 2021
    Publication date: January 6, 2022
    Applicant: Intellective Ai, Inc.
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
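The symbol → word → phrase progression in this abstract can be sketched with simple frequency counting. This is a hypothetical illustration of the general idea, not the patented method; the function names and thresholds are invented:

```python
# Frequent symbol combinations become dictionary "words"; frequent
# word combinations become "phrases".
from collections import Counter

def build_dictionary(symbols, n=2, min_count=2):
    """Words = symbol n-grams recurring at least min_count times."""
    grams = Counter(tuple(symbols[i:i + n])
                    for i in range(len(symbols) - n + 1))
    return {g for g, c in grams.items() if c >= min_count}

def tokenize(symbols, dictionary, n=2):
    """Greedily rewrite the symbol stream as a sequence of words,
    falling back to single symbols when no dictionary word matches."""
    words, i = [], 0
    while i < len(symbols):
        gram = tuple(symbols[i:i + n])
        if gram in dictionary:
            words.append(gram)
            i += n
        else:
            words.append((symbols[i],))
            i += 1
    return words

def build_phrases(words, min_count=2):
    """Phrases = adjacent word pairs that co-occur frequently."""
    pairs = Counter(zip(words, words[1:]))
    return {p for p, c in pairs.items() if c >= min_count}
```

For a stream like `a b a b c d c d …`, the bigrams `(a, b)` and `(c, d)` recur often enough to become words, and their recurring adjacencies become phrases.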
  • Publication number: 20210042556
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: July 17, 2020
    Publication date: February 11, 2021
    Applicant: Intellective Ai, Inc.
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
  • Patent number: 10916039
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 9, 2021
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
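The per-pixel side of the background model described above, with an absorption window and a background/foreground threshold, can be sketched as follows. This is a minimal toy version under assumed semantics, not the patented combined pixel/context approach; `absorb_after` and `threshold` are invented parameter names:

```python
# Per-pixel running background model: pixels that stay "foreground"
# for a full absorption window are absorbed into the background.

class BackgroundModel:
    def __init__(self, width, height, threshold=30, absorb_after=3):
        self.bg = [[0] * width for _ in range(height)]
        self.fg_age = [[0] * width for _ in range(height)]
        self.threshold = threshold          # background/foreground cutoff
        self.absorb_after = absorb_after    # absorption window (frames)

    def segment(self, frame):
        """Return a foreground mask and update the model in place."""
        mask = []
        for y, row in enumerate(frame):
            mrow = []
            for x, v in enumerate(row):
                if abs(v - self.bg[y][x]) > self.threshold:
                    self.fg_age[y][x] += 1
                    if self.fg_age[y][x] >= self.absorb_after:
                        # Stable change is absorbed into the background.
                        self.bg[y][x] = v
                        self.fg_age[y][x] = 0
                        mrow.append(0)
                    else:
                        mrow.append(1)
                else:
                    # Slowly adapt the background toward the observation.
                    self.bg[y][x] += (v - self.bg[y][x]) // 8
                    self.fg_age[y][x] = 0
                    mrow.append(0)
            mask.append(mrow)
        return mask
```

A pixel that jumps in value is flagged as foreground for a few frames, then absorbed, which is how a parked car eventually stops registering as foreground.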
  • Patent number: 10872243
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: December 22, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Patent number: 10853961
    Abstract: Techniques are disclosed for generating a low-dimensional representation of an image. An image driver receives an image captured by a camera. The image includes features based on pixel values in the image, and each feature describes the image in one or more image regions. The image driver generates, for each of the plurality of features, a feature vector that includes values for that feature corresponding to at least one of the image regions. Each value indicates a degree that the feature is present in the image region. The image driver generates a sample vector from each of the feature vectors. The sample vector includes each of the values included in the generated feature vectors.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: December 1, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb, Ming-Jung Seow, Gang Xu
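The sample-vector construction this abstract describes (per-feature values over image regions, concatenated) can be sketched as below. The helper names and the mean-response reduction are illustrative assumptions, not the patented image driver:

```python
# Per-feature region vectors concatenated into one "sample vector":
# each value indicates the degree a feature is present in a region.

def region_means(image, regions):
    """Degree a feature is present per region = mean feature response.
    Regions are (y0, y1, x0, x1) half-open row/column ranges."""
    out = []
    for (y0, y1, x0, x1) in regions:
        vals = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        out.append(sum(vals) / len(vals))
    return out

def sample_vector(image, features, regions):
    """Apply each per-pixel feature map to the image, reduce it per
    region, and concatenate the per-feature vectors."""
    vec = []
    for feature in features:
        fmap = [[feature(v) for v in row] for row in image]
        vec.extend(region_means(fmap, regions))
    return vec
```

The resulting vector has length `len(features) * len(regions)`, a fixed low dimension regardless of image resolution.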
  • Patent number: 10755131
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: August 25, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
  • Patent number: 10726294
    Abstract: Techniques are disclosed for generating logical sensors for an image driver. The image driver monitors values corresponding to at least a first feature in one or more regions of a first image in a stream of images received by a first sensor. The image driver identifies at least a first correlation between at least a first and second value of the monitored values. The image driver generates a logical sensor based on the identified correlations. The logical sensor samples one or more features corresponding to the identified correlation from a second image in the stream of images.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: July 28, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
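The logical-sensor idea, creating a sensor for each strongly correlated pair of monitored feature streams, can be sketched as follows. This is a hypothetical reading of the abstract; the Pearson-correlation test and the `min_corr` threshold are assumptions:

```python
# A "logical sensor" is created when two monitored feature streams are
# strongly correlated; it then samples just those features from later
# frames.

def correlation(xs, ys):
    """Pearson correlation of two equal-length value streams."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def make_logical_sensors(history, min_corr=0.9):
    """history maps feature name -> list of observed values. Returns
    one logical sensor (a sampling function) per correlated pair."""
    names = sorted(history)
    sensors = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(correlation(history[a], history[b])) >= min_corr:
                # The sensor samples only the correlated feature pair.
                sensors.append(lambda frame, a=a, b=b: {a: frame[a],
                                                       b: frame[b]})
    return sensors
```

Given a history where brightness and motion track each other but noise does not, one sensor is generated, and applying it to a new frame yields only the correlated pair.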
  • Patent number: 10706284
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: July 7, 2020
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
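One way to picture how a behavior model can separate normal from abnormal activity from a primitive-event symbol stream is the transition-frequency sketch below. It is an illustrative stand-in, not the patented engine or its vector representation; symbol names and scoring are invented:

```python
# Learn transition counts from observed primitive-event streams, then
# score a new sequence by how rare its transitions were: unseen
# transitions read as fully abnormal.
from collections import Counter

class BehaviorModel:
    def __init__(self):
        self.transitions = Counter()
        self.total = 0

    def learn(self, symbols):
        """Accumulate transition counts from an observed symbol stream
        (e.g. primitive events: 'appear', 'move', 'stop', ...)."""
        for pair in zip(symbols, symbols[1:]):
            self.transitions[pair] += 1
            self.total += 1

    def anomaly_score(self, symbols):
        """Mean rarity of the sequence's transitions in [0, 1]."""
        pairs = list(zip(symbols, symbols[1:]))
        if not pairs or not self.total:
            return 0.0
        return sum(1.0 - self.transitions[p] / self.total
                   for p in pairs) / len(pairs)
```

After repeatedly observing `appear -> move -> stop -> disappear`, a sequence containing a never-seen transition such as `stop -> appear` scores strictly higher than a familiar one.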
  • Patent number: 10679315
    Abstract: Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allow other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying what pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel.
    Type: Grant
    Filed: March 23, 2018
    Date of Patent: June 9, 2020
    Assignee: Intellective Ai, Inc.
    Inventors: Lon W. Risinger, Kishor Adinath Saitwal, Wesley Kenneth Cobb
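The core association problem the detected object tracker solves, deciding which detections in successive frames depict the same foreground object, can be sketched with a simple overlap-based matcher. This is NOT the patented tracker (and ignores its CPU/GPU acceleration aspect); the IoU matching and `min_iou` threshold are assumptions:

```python
# Greedy frame-to-frame association: each new detection keeps the id
# of the tracked box it overlaps most, or starts a new track.

def iou(a, b):
    """Intersection-over-union of boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

class Tracker:
    def __init__(self, min_iou=0.3):
        self.min_iou, self.next_id, self.tracks = min_iou, 0, {}

    def update(self, boxes):
        """Assign each detection a stable track id; unmatched boxes
        start new tracks. Returns {track_id: box}."""
        assigned = {}
        for box in boxes:
            best_id, best = None, self.min_iou
            for tid, prev in self.tracks.items():
                if tid in assigned:
                    continue
                score = iou(box, prev)
                if score >= best:
                    best_id, best = tid, score
            if best_id is None:
                best_id = self.next_id
                self.next_id += 1
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
```

A box that shifts slightly between frames keeps its id, while a detection with no sufficient overlap is treated as a newly appearing object.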
  • Publication number: 20200167963
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: June 28, 2019
    Publication date: May 28, 2020
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Patent number: 10628296
    Abstract: Techniques are disclosed for dynamic memory allocation in a machine learning anomaly detection system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory is allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real-time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 21, 2020
    Assignee: OMNI AI, INC.
    Inventors: Lon W. Risinger, Kishor Adinath Saitwal
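The pooled-allocation scheme this abstract describes, pre-allocating variable-sized chunks and reusing them instead of hitting device memory per request, can be sketched as below. Bytearrays stand in for device memory here; the class and method names are hypothetical:

```python
# Variable-sized chunk memory pool: allocate hands out the smallest
# free chunk that fits, release returns it for reuse.

class MemoryPool:
    def __init__(self, chunk_sizes):
        # Pre-allocate variable-sized chunks up front (simulated with
        # bytearrays standing in for device memory).
        self.free = sorted((bytearray(s) for s in chunk_sizes), key=len)
        self.in_use = []

    def allocate(self, size):
        """Hand out the smallest free chunk that fits, or None when no
        suitable chunk is available in the pool."""
        for i, chunk in enumerate(self.free):
            if len(chunk) >= size:
                chunk = self.free.pop(i)
                self.in_use.append(chunk)
                return chunk
        return None

    def release(self, chunk):
        """Return a chunk to the pool for reuse, avoiding a
        device-level free/alloc cycle."""
        self.in_use.remove(chunk)
        self.free.append(chunk)
        self.free.sort(key=len)
```

Because chunks cycle between the free list and the in-use list, steady-state request traffic from real-time input streams incurs no further device allocations.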
  • Publication number: 20190377951
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: August 20, 2019
    Publication date: December 12, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Publication number: 20190311204
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 10, 2019
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Patent number: 10423835
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 24, 2019
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Patent number: 10373340
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: August 6, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Publication number: 20190230108
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency with which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: December 11, 2018
    Publication date: July 25, 2019
    Applicant: Omni AI, Inc.
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
  • Publication number: 20190180135
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: July 12, 2018
    Publication date: June 13, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
  • Patent number: 10303955
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 28, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb