Patents by Inventor Lon W. Risinger
Lon W. Risinger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230005238
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Application
Filed: September 13, 2022
Publication date: January 5, 2023
Applicant: Intellective Ai, Inc.
Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
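The micro-feature pipeline above (unsupervised per-patch feature vectors fed to a self-training micro-classifier) can be illustrated with a minimal sketch. The features chosen (patch mean and contrast), the distance radius, and the online running-mean update are illustrative assumptions, not taken from the patent:

```python
import math

def micro_feature_vector(patch):
    """Tiny unsupervised micro-feature vector for a pixel patch:
    mean intensity and contrast (std dev). No training data involved."""
    n = len(patch)
    mean = sum(patch) / n
    var = sum((p - mean) ** 2 for p in patch) / n
    return [mean, math.sqrt(var)]

class MicroClassifier:
    """Online clusterer: groups micro-feature vectors into object-type
    clusters, opening a new cluster when no centroid is close enough."""
    def __init__(self, radius):
        self.radius = radius
        self.centroids = []   # running mean of each cluster
        self.counts = []

    def assign(self, vec):
        best, best_d = None, float("inf")
        for i, c in enumerate(self.centroids):
            d = math.dist(c, vec)
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > self.radius:
            self.centroids.append(list(vec))
            self.counts.append(1)
            return len(self.centroids) - 1
        # fold the vector into the running mean (self-training, no labels)
        self.counts[best] += 1
        k = self.counts[best]
        self.centroids[best] = [c + (v - c) / k
                                for c, v in zip(self.centroids[best], vec)]
        return best
```

Two bright patches land in one cluster while a dark patch opens another, with no training phase and no object definitions.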
-
Patent number: 11468660
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Grant
Filed: July 17, 2020
Date of Patent: October 11, 2022
Assignee: Intellective Ai, Inc.
Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
-
Publication number: 20220006825
Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency with which combinations of words in the ordered sequence of words appear relative to one another.
Type: Application
Filed: July 6, 2021
Publication date: January 6, 2022
Applicant: Intellective Ai, Inc.
Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
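The symbols-to-words-to-phrases progression this abstract describes can be sketched with frequency counting over n-grams. The n-gram length cap and the count thresholds below are illustrative assumptions, not values from the application:

```python
from collections import Counter

def build_dictionary(symbols, max_len=3, min_count=2):
    """Collect symbol n-grams ("words") that recur often enough in the
    ordered symbol sequence. Thresholds are illustrative."""
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(symbols) - n + 1):
            counts[tuple(symbols[i:i + n])] += 1
    return {w for w, c in counts.items() if c >= min_count}

def build_phrases(word_seq, min_count=2):
    """Keep adjacent word pairs ("phrases") that co-occur frequently
    relative to one another in the observed word sequence."""
    pairs = Counter(zip(word_seq, word_seq[1:]))
    return {p for p, c in pairs.items() if c >= min_count}
```

For a symbol stream like `abcabcabx`, the recurring run `abc` enters the dictionary while the one-off `bx` does not; phrases are then built the same way one level up, over the word sequence.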
-
Publication number: 20210042556
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Application
Filed: July 17, 2020
Publication date: February 11, 2021
Applicant: Intellective Ai, Inc.
Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
-
Patent number: 10916039
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Grant
Filed: June 28, 2019
Date of Patent: February 9, 2021
Assignee: Intellective Ai, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
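The frame-to-frame update this abstract mentions (absorbing persistent foreground into the background via an absorption window, with an adaptive threshold) can be sketched per pixel. The blend rate, threshold, and window length are illustrative constants, not taken from the patent:

```python
class PixelBackgroundModel:
    """Per-pixel background value with an absorption window: a pixel that
    stays foreground for `window` consecutive frames is absorbed into the
    background model. Constants are illustrative."""
    def __init__(self, initial, threshold=20.0, window=5):
        self.bg = float(initial)
        self.threshold = threshold
        self.window = window
        self.fg_streak = 0

    def update(self, value):
        diff = abs(value - self.bg)
        if diff <= self.threshold:
            # background: blend the new value in slowly
            self.bg += 0.1 * (value - self.bg)
            self.fg_streak = 0
            return "background"
        self.fg_streak += 1
        if self.fg_streak >= self.window:
            # absorption: persistent foreground becomes the new background
            self.bg = float(value)
            self.fg_streak = 0
            return "background"
        return "foreground"
```

A pixel that jumps in brightness is flagged as foreground for a few frames, then absorbed once it proves stable, so a parked car eventually joins the background rather than being flagged forever.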
-
Patent number: 10872243
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Grant
Filed: April 16, 2019
Date of Patent: December 22, 2020
Assignee: Intellective Ai, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
-
Patent number: 10853961
Abstract: Techniques are disclosed for generating a low-dimensional representation of an image. An image driver receives an image captured by a camera. The image includes features based on pixel values in the image, and each feature describes the image in one or more image regions. The image driver generates, for each of a plurality of features, a feature vector that includes values for that feature corresponding to at least one of the image regions. Each value indicates a degree that the feature is present in the image region. The image driver generates a sample vector from each of the feature vectors. The sample vector includes each of the values included in the generated feature vectors.
Type: Grant
Filed: September 13, 2018
Date of Patent: December 1, 2020
Assignee: Intellective Ai, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb, Ming-Jung Seow, Gang Xu
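The feature-vector-to-sample-vector construction described here is essentially per-region feature evaluation followed by concatenation. The specific features (region mean and peak) are hypothetical stand-ins for whatever the driver actually measures:

```python
def feature_vector(image_regions, feature):
    """One value per region: the degree the feature is present there."""
    return [feature(region) for region in image_regions]

def sample_vector(image_regions, features):
    """Concatenate every per-feature vector into one low-dimensional
    sample vector describing the whole image."""
    sample = []
    for f in features:
        sample.extend(feature_vector(image_regions, f))
    return sample
```

With two regions and two features, the sample vector simply holds all four values in a fixed order, giving downstream components a compact, fixed-layout description of the image.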
-
Patent number: 10755131
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Grant
Filed: July 12, 2018
Date of Patent: August 25, 2020
Assignee: Intellective Ai, Inc.
Inventors: Wesley Kenneth Cobb, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal, Ming-Jung Seow, Gang Xu, Lon W. Risinger, Jeff Graham
-
Patent number: 10726294
Abstract: Techniques are disclosed for generating logical sensors for an image driver. The image driver monitors values corresponding to at least a first feature in one or more regions of a first image in a stream of images received by a first sensor. The image driver identifies at least a first correlation between at least a first and second value of the monitored values. The image driver generates a logical sensor based on the identified correlations. The logical sensor samples one or more features corresponding to the identified correlation from a second image in the stream of images.
Type: Grant
Filed: July 5, 2018
Date of Patent: July 28, 2020
Assignee: Intellective Ai, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
-
Patent number: 10706284
Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
Type: Grant
Filed: August 20, 2019
Date of Patent: July 7, 2020
Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
-
Patent number: 10679315
Abstract: Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allows other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying what pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel.
Type: Grant
Filed: March 23, 2018
Date of Patent: June 9, 2020
Assignee: Intellective Ai, Inc.
Inventors: Lon W. Risinger, Kishor Adinath Saitwal, Wesley Kenneth Cobb
-
Publication number: 20200167963
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Application
Filed: June 28, 2019
Publication date: May 28, 2020
Applicant: Omni AI, Inc.
Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
-
Patent number: 10628296
Abstract: Techniques are disclosed for dynamic memory allocation in a machine learning anomaly detection system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory are allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real-time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
Type: Grant
Filed: January 26, 2018
Date of Patent: April 21, 2020
Assignee: OMNI AI, INC.
Inventors: Lon W. Risinger, Kishor Adinath Saitwal
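The pooling scheme this abstract describes — pre-allocate variable-sized chunks once, then serve real-time requests from the pool — can be sketched as follows. The best-fit selection policy and the chunk sizes are illustrative assumptions:

```python
class MemoryPool:
    """Pre-allocate variable-sized chunks up front; requests are then
    served from the pool (smallest free chunk that fits) instead of
    allocating fresh device memory per request. Sizes are illustrative."""
    def __init__(self, chunk_sizes):
        self.free = sorted(chunk_sizes)   # pre-allocated chunk sizes
        self.in_use = []

    def allocate(self, size):
        for i, chunk in enumerate(self.free):
            if chunk >= size:             # smallest chunk that fits
                self.in_use.append(self.free.pop(i))
                return self.in_use[-1]
        return None                       # no free chunk can hold this request

    def release(self, chunk):
        # return the chunk to the pool for reuse by later requests
        self.in_use.remove(chunk)
        self.free.append(chunk)
        self.free.sort()
```

A 100-unit request is served by the 256-unit chunk, an oversized request is refused rather than triggering a fresh allocation, and a released chunk becomes available again, which keeps allocation latency predictable for real-time stream processing.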
-
Publication number: 20190377951
Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
Type: Application
Filed: August 20, 2019
Publication date: December 12, 2019
Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
-
Publication number: 20190311204
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Application
Filed: April 16, 2019
Publication date: October 10, 2019
Applicant: Omni AI, Inc.
Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
-
Patent number: 10423835
Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
Type: Grant
Filed: December 19, 2018
Date of Patent: September 24, 2019
Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
-
Patent number: 10373340
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Grant
Filed: April 28, 2017
Date of Patent: August 6, 2019
Assignee: Omni AI, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
-
Publication number: 20190230108
Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency with which combinations of words in the ordered sequence of words appear relative to one another.
Type: Application
Filed: December 11, 2018
Publication date: July 25, 2019
Applicant: Omni AI, Inc.
Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
-
Publication number: 20190180135
Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
Type: Application
Filed: July 12, 2018
Publication date: June 13, 2019
Applicant: Omni AI, Inc.
Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
-
Patent number: 10303955
Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel based approach and a context based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel based approach and context based approach ensures that the video analytics system can effectively and efficiently respond to changes in a scene, without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame-to-frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
Type: Grant
Filed: April 28, 2017
Date of Patent: May 28, 2019
Assignee: Omni AI, Inc.
Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb