Patents by Inventor Kishor Adinath Saitwal

Kishor Adinath Saitwal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200167963
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel-based approach and a context-based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel-based approach and a context-based approach ensures that the video analytics system can respond effectively and efficiently to changes in a scene without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: June 28, 2019
    Publication date: May 28, 2020
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
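The pixel-based half of this approach, including the absorption window that folds persistent foreground back into the background, can be sketched as a toy single-pixel model. All names and constants here (`BackgroundModel`, `threshold`, `window`) are illustrative assumptions, not the patent's; the patent combines this with a context-based model and dynamic thresholds.

```python
class BackgroundModel:
    """Toy per-pixel background model with an absorption window.

    A pixel classified as foreground for `window` consecutive frames is
    absorbed into the background (a hypothetical simplification of the
    patent's absorption-window idea).
    """

    def __init__(self, initial, threshold=20, window=3):
        self.mean = float(initial)   # current background estimate
        self.threshold = threshold   # background/foreground decision bound
        self.window = window         # frames before absorption
        self.fg_count = 0            # consecutive foreground classifications

    def update(self, value):
        """Classify one observed pixel value and update the model."""
        if abs(value - self.mean) > self.threshold:
            self.fg_count += 1
            if self.fg_count >= self.window:
                # Persistent foreground: absorb it into the background.
                self.mean = float(value)
                self.fg_count = 0
                return "background"
            return "foreground"
        # Matching pixel: slowly adapt the background estimate.
        self.mean = 0.9 * self.mean + 0.1 * value
        self.fg_count = 0
        return "background"
```

With these toy settings, a stationary new object (e.g., a parked car) reads as foreground for two frames and is then absorbed, which is the behavior the absorption window is meant to provide.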
  • Patent number: 10628296
    Abstract: Techniques are disclosed for dynamic memory allocation in a machine learning anomaly detection system. According to one embodiment of the disclosure, one or more variable-sized chunks of memory are allocated from a device memory for a memory pool. An application allocates at least one of the chunks of memory from the memory pool for processing a plurality of input data streams in real-time. A request to allocate memory from the memory pool for input data is received. Upon determining that one of the chunks is available in the memory pool to store the input data, the chunk is allocated from the memory pool in response to the request.
    Type: Grant
    Filed: January 26, 2018
    Date of Patent: April 21, 2020
    Assignee: OMNI AI, INC.
    Inventors: Lon W. Risinger, Kishor Adinath Saitwal
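The allocation flow described in this abstract — variable-sized chunks pre-allocated into a pool, requests served from the pool, and a miss when no chunk fits — can be illustrated with a small sketch. Class and method names are hypothetical; a real implementation would manage device memory (e.g., GPU buffers), not Python bytearrays.

```python
class MemoryPool:
    """Toy memory pool of pre-allocated, variable-sized chunks."""

    def __init__(self, chunk_sizes):
        # Pre-allocate one buffer per requested chunk size.
        self.free = [bytearray(size) for size in chunk_sizes]
        self.in_use = []

    def allocate(self, size):
        """Return the smallest free chunk that can hold `size` bytes, or None."""
        candidates = [c for c in self.free if len(c) >= size]
        if not candidates:
            return None          # no chunk available for this request
        chunk = min(candidates, key=len)
        self.free.remove(chunk)
        self.in_use.append(chunk)
        return chunk

    def release(self, chunk):
        """Return a chunk to the pool for reuse by later requests."""
        self.in_use.remove(chunk)
        self.free.append(chunk)
```

Pre-allocating and recycling chunks avoids per-frame device allocations, which is the usual motivation for pooling in real-time stream processing.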
  • Publication number: 20190377951
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: August 20, 2019
    Publication date: December 12, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Publication number: 20190311204
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel-based approach and a context-based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel-based approach and a context-based approach ensures that the video analytics system can respond effectively and efficiently to changes in a scene without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 10, 2019
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Lon W. RISINGER, Wesley Kenneth COBB
  • Patent number: 10423835
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: September 24, 2019
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Publication number: 20190289215
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may provide image stabilization of a video stream obtained from a camera. An image stabilization module in the behavioral recognition system obtains a reference image from the video stream. The image stabilization module identifies alignment regions within the reference image based on the regions of the image that are dense with features. Upon determining that the tracked features of a current image are out of alignment with the reference image, the image stabilization module uses the most feature-dense alignment region to estimate an affine transformation matrix to apply to the entire current image, warping the image into proper alignment.
    Type: Application
    Filed: February 19, 2019
    Publication date: September 19, 2019
    Applicant: Omni AI, Inc.
    Inventors: Kishor Adinath SAITWAL, Wesley Kenneth COBB, Tao YANG
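The alignment step of this stabilization scheme can be sketched in a degenerate form: estimating a pure translation from matched feature points in the feature-dense region, then applying it to the whole frame. This is an assumed simplification — the patent estimates a full affine transformation matrix, which also captures rotation and scale.

```python
def estimate_translation(ref_pts, cur_pts):
    """Estimate a translation-only alignment (a degenerate affine transform)
    from matched (x, y) feature points in the reference and current images."""
    n = len(ref_pts)
    dx = sum(r[0] - c[0] for r, c in zip(ref_pts, cur_pts)) / n
    dy = sum(r[1] - c[1] for r, c in zip(ref_pts, cur_pts)) / n
    return dx, dy

def warp(points, dx, dy):
    """Apply the estimated shift to every point of the current image."""
    return [(x + dx, y + dy) for x, y in points]
```

After warping, the current frame's tracked features land back on their reference positions, which is the "proper alignment" the module checks for.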
  • Patent number: 10410058
    Abstract: Techniques are disclosed for analyzing a scene depicted in an input stream of video frames captured by a video camera. The techniques include evaluating sequence pairs representing segments of object trajectories. Assuming the objects interact, each of the sequences of the sequence pair may be mapped to a sequence cluster of an adaptive resonance theory (ART) network. A rareness value for the pair of sequence clusters may be determined based on learned joint probabilities of sequence cluster pairs. A statistical anomaly model, which may be specific to an interaction type or general to a plurality of interaction types, is used to determine an anomaly temperature, and alerts are issued based at least on the anomaly temperature. In addition, the ART network and the statistical anomaly model are updated based on the current interaction.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: September 10, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Dennis G. Urech, Wesley Kenneth Cobb
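The rareness computation at the center of this abstract can be sketched with simple counts standing in for the learned joint probabilities of sequence-cluster pairs. The ART network itself is elided here, and all names (`InteractionModel`, `anomaly_alert`) are illustrative, not the patent's; the real system maps trajectory segments to ART sequence clusters and uses a statistical anomaly model rather than a fixed threshold.

```python
from collections import Counter

class InteractionModel:
    """Toy joint-probability model over pairs of sequence clusters."""

    def __init__(self):
        self.pair_counts = Counter()
        self.total = 0

    def observe(self, cluster_a, cluster_b):
        """Update learned counts with the current interaction (online update)."""
        self.pair_counts[(cluster_a, cluster_b)] += 1
        self.total += 1

    def rareness(self, cluster_a, cluster_b):
        """Rareness = 1 - estimated joint probability of the cluster pair."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.pair_counts[(cluster_a, cluster_b)] / self.total

def anomaly_alert(rareness, threshold=0.9):
    """Stand-in for the statistical anomaly model: alert on high rareness."""
    return rareness >= threshold
```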
  • Publication number: 20190258867
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Application
    Filed: February 22, 2019
    Publication date: August 22, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Bobby Ernest BLYTHE, Rajkiran Kumar GOTTUMUKKAL, Kishor Adinath SAITWAL, Gang XU, Tao YANG
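The quadtree decomposition step can be sketched as a recursive split of the image into windows, stopping where a window is homogeneous. The split criterion used here (intensity range against a fixed tolerance) is an assumption for illustration; the patent's decomposition feeds the resulting windows into per-window phase-spectrum matching.

```python
def quadtree(image, x0, y0, size, min_size=2):
    """Recursively decompose a square grayscale image (list of lists of
    intensities) into windows; split while a window's intensity range
    exceeds a tolerance. Returns (x, y, size) tuples for the windows."""
    vals = [image[y][x]
            for y in range(y0, y0 + size)
            for x in range(x0, x0 + size)]
    if size <= min_size or max(vals) - min(vals) <= 10:
        return [(x0, y0, size)]          # homogeneous window: keep whole
    half = size // 2
    windows = []
    for dy in (0, half):                 # split into four quadrants
        for dx in (0, half):
            windows += quadtree(image, x0 + dx, y0 + dy, half, min_size)
    return windows
```

Because only the non-uniform regions get subdivided, a locally over- or under-saturated patch perturbs only its own windows, which is what makes the matching robust to partial lighting changes.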
  • Patent number: 10373340
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel-based approach and a context-based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel-based approach and a context-based approach ensures that the video analytics system can respond effectively and efficiently to changes in a scene without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: August 6, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Publication number: 20190230108
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency by which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Application
    Filed: December 11, 2018
    Publication date: July 25, 2019
    Applicant: Omni AI, Inc.
    Inventors: Ming-Jung SEOW, Wesley Kenneth COBB, Gang XU, Tao YANG, Aaron POFFENBERGER, Lon W. RISINGER, Kishor Adinath SAITWAL, Michael S. YANTOSCA, David M. SOLUM, Alex David HEMSATH, Dennis G. URECH, Duy Trong NGUYEN, Charles Richard MORGAN
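The symbols-to-words-to-phrases pipeline in this abstract can be sketched with frequency counting over fixed-length combinations. Fixed lengths and the `min_count` cutoff are simplifying assumptions; the described system learns words and phrases of varying length from normalized feature vectors.

```python
from collections import Counter

def build_dictionary(symbols, word_len=2, min_count=2):
    """Collect frequent fixed-length symbol combinations as 'words'."""
    grams = Counter(tuple(symbols[i:i + word_len])
                    for i in range(len(symbols) - word_len + 1))
    return {g for g, c in grams.items() if c >= min_count}

def build_phrases(word_stream, phrase_len=2, min_count=2):
    """Frequent ordered combinations of dictionary words become 'phrases'."""
    grams = Counter(tuple(word_stream[i:i + phrase_len])
                    for i in range(len(word_stream) - phrase_len + 1))
    return {g for g, c in grams.items() if c >= min_count}
```

The same counting machinery is applied at both levels: symbols feed the dictionary, and the stream of recognized words feeds the phrase model.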
  • Publication number: 20190188998
    Abstract: Alert directives and focused alert directives allow a user to provide feedback to a behavioral recognition system to always or never publish an alert for certain events. Such an approach bypasses the normal publication methods of the behavioral recognition system yet does not obstruct the system's learning procedures.
    Type: Application
    Filed: August 31, 2018
    Publication date: June 20, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Ming-Jung SEOW, Gang XU, Kishor Adinath SAITWAL, Anthony AKINS, Kerry JOSEPH, Dennis G. URECH
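The override behavior of alert directives — user rules that bypass the normal publication decision without touching learning — reduces to a small decision function. The rule names and signature here are illustrative assumptions, not the patent's API.

```python
def should_publish(event_type, base_decision, directives):
    """Apply user alert directives before the system's learned publication
    decision: an 'always'/'never' directive overrides `base_decision`,
    while unlisted event types fall through unchanged."""
    rule = directives.get(event_type)
    if rule == "always":
        return True
    if rule == "never":
        return False
    return base_decision
```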
  • Publication number: 20190180135
    Abstract: Techniques are disclosed for extracting micro-features at a pixel-level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independent of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and for object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier which groups objects into object type clusters based on the micro-feature vectors.
    Type: Application
    Filed: July 12, 2018
    Publication date: June 13, 2019
    Applicant: Omni AI, Inc.
    Inventors: Wesley Kenneth COBB, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL, Ming-Jung SEOW, Gang XU, Lon W. RISINGER, Jeff GRAHAM
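The key property in this abstract — classification without any training phase — can be sketched with online nearest-centroid clustering: each micro-feature vector joins the closest existing cluster or founds a new one. The two toy features and the fixed radius are assumptions for illustration; the patent extracts richer pixel-level characteristics.

```python
def micro_features(pixels):
    """Toy micro-feature vector for an object's pixels: mean intensity
    and intensity spread (illustrative stand-ins for the patent's
    pixel-level micro-features)."""
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return (mean, spread)

class MicroClassifier:
    """Online clustering into object-type clusters: no training data is
    needed; the classifier self-trains as vectors arrive."""

    def __init__(self, radius=10.0):
        self.centroids = []
        self.radius = radius

    def assign(self, vec):
        """Return the index of the matching cluster, creating one if needed."""
        for i, c in enumerate(self.centroids):
            dist = sum((a - b) ** 2 for a, b in zip(vec, c)) ** 0.5
            if dist <= self.radius:
                return i
        self.centroids.append(vec)
        return len(self.centroids) - 1
```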
  • Patent number: 10303955
    Abstract: Techniques are disclosed for creating a background model of a scene using both a pixel-based approach and a context-based approach. The combined approach provides an effective technique for segmenting scene foreground from background in frames of a video stream. Further, this approach can scale to process large numbers of camera feeds simultaneously, e.g., using parallel processing architectures, while still generating an accurate background model. Further, using both a pixel-based approach and a context-based approach ensures that the video analytics system can respond effectively and efficiently to changes in a scene without overly increasing computational complexity. In addition, techniques are disclosed for updating the background model, from frame to frame, by absorbing foreground pixels into the background model via an absorption window, and dynamically updating background/foreground thresholds.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: May 28, 2019
    Assignee: Omni AI, Inc.
    Inventors: Kishor Adinath Saitwal, Lon W. Risinger, Wesley Kenneth Cobb
  • Publication number: 20190122048
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Application
    Filed: December 19, 2018
    Publication date: April 25, 2019
    Inventors: John Eric EATON, Wesley Kenneth COBB, Dennis G. URECH, David S. FRIEDLANDER, Gang XU, Ming-Jung SEOW, Lon W. RISINGER, David M. SOLUM, Tao YANG, Rajkiran K. GOTTUMUKKAL, Kishor Adinath SAITWAL
  • Patent number: 10248869
    Abstract: Techniques are disclosed for matching a current background scene of an image received by a surveillance system with a gallery of scene presets that each represent a previously captured background scene. A quadtree decomposition analysis is used to improve the robustness of the matching operation when the scene lighting changes (including portions containing over-saturation/under-saturation) or a portion of the content changes. The current background scene is processed to generate a quadtree decomposition including a plurality of window portions. Each of the window portions is processed to generate a plurality of phase spectra. The phase spectra are then projected onto a corresponding plurality of scene preset image matrices of one or more scene presets. When a match between the current background scene and one of the scene presets is identified, the matched scene preset is updated. Otherwise, a new scene preset is created based on the current background scene.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 2, 2019
    Assignee: Omni AI, Inc.
    Inventors: Wesley Kenneth Cobb, Bobby Ernest Blythe, Rajkiran Kumar Gottumukkal, Kishor Adinath Saitwal, Gang Xu, Tao Yang
  • Patent number: 10237483
    Abstract: A behavioral recognition system may include both a computer vision engine and a machine learning engine configured to observe and learn patterns of behavior in video data. Certain embodiments may provide image stabilization of a video stream obtained from a camera. An image stabilization module in the behavioral recognition system obtains a reference image from the video stream. The image stabilization module identifies alignment regions within the reference image based on the regions of the image that are dense with features. Upon determining that the tracked features of a current image are out of alignment with the reference image, the image stabilization module uses the most feature-dense alignment region to estimate an affine transformation matrix to apply to the entire current image, warping the image into proper alignment.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: March 19, 2019
    Inventors: Kishor Adinath Saitwal, Wesley Kenneth Cobb, Tao Yang
  • Patent number: 10198636
    Abstract: A machine-learning engine is disclosed that is configured to recognize and learn behaviors, as well as to identify and distinguish between normal and abnormal behavior within a scene, by analyzing movements and/or activities (or absence of such) over time. The machine-learning engine may be configured to evaluate a sequence of primitive events and associated kinematic data generated for an object depicted in a sequence of video frames and a related vector representation. The vector representation is generated from a primitive event symbol stream and a phase space symbol stream, and the streams describe actions of the objects depicted in the sequence of video frames.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: February 5, 2019
    Assignee: AVIGILON PATENT HOLDING 1 CORPORATION
    Inventors: John Eric Eaton, Wesley Kenneth Cobb, Dennis G. Urech, David S. Friedlander, Gang Xu, Ming-Jung Seow, Lon W. Risinger, David M. Solum, Tao Yang, Rajkiran K. Gottumukkal, Kishor Adinath Saitwal
  • Publication number: 20190034737
    Abstract: A sequence layer in a machine-learning engine is disclosed that is configured to learn from the observations of a computer vision engine. In one embodiment, the machine-learning engine uses voting experts to segment adaptive resonance theory (ART) network label sequences for different objects observed in a scene. The sequence layer may be configured to observe the ART label sequences and incrementally build, update, trim, and reorganize an ngram trie for those label sequences. The sequence layer computes the entropies for the nodes in the ngram trie and determines a sliding window length and vote count parameters. Once determined, the sequence layer may segment newly observed sequences to estimate the primitive events observed in the scene, as well as issue alerts for inter-sequence and intra-sequence anomalies.
    Type: Application
    Filed: September 28, 2018
    Publication date: January 31, 2019
    Inventors: Wesley Kenneth COBB, David Samuel FRIEDLANDER, Kishor Adinath SAITWAL
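The entropy-based segmentation idea can be sketched with a minimal bigram "trie" over ART labels: high next-label entropy after a label suggests a segment boundary. This is a deliberately reduced sketch — the patent's sequence layer maintains a full ngram trie that it trims and reorganizes, and combines entropy with voting experts over a sliding window.

```python
import math
from collections import Counter

class NgramTrie:
    """Minimal bigram model over ART label sequences with per-node entropy."""

    def __init__(self):
        self.next = {}   # label -> Counter of labels that follow it

    def add(self, sequence):
        """Incrementally update counts from an observed label sequence."""
        for a, b in zip(sequence, sequence[1:]):
            self.next.setdefault(a, Counter())[b] += 1

    def entropy(self, label):
        """Shannon entropy of the next-label distribution after `label`."""
        counts = self.next.get(label)
        if not counts:
            return 0.0
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total)
                    for c in counts.values())

def boundaries(trie, sequence, threshold=0.9):
    """Place a segment boundary after each label whose continuation is
    uncertain (entropy above the threshold)."""
    return [i + 1 for i, lab in enumerate(sequence[:-1])
            if trie.entropy(lab) > threshold]
```

Segmenting at high-entropy nodes recovers the primitive events, and a newly observed continuation with near-zero learned probability would be the intra-sequence anomaly case.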
  • Patent number: 10187415
    Abstract: Embodiments presented herein describe techniques for generating a linguistic model of input data obtained from a data source (e.g., a video camera). According to one embodiment of the present disclosure, a sequence of symbols is generated based on an ordered stream of normalized vectors generated from the input data. A dictionary of words is generated from combinations of the ordered sequence of symbols based on a frequency at which combinations of symbols appear in the ordered sequence of symbols. A plurality of phrases is generated based on an ordered sequence of words from the dictionary observed in the ordered sequence of symbols, based on a frequency by which combinations of words in the ordered sequence of words appear relative to one another.
    Type: Grant
    Filed: March 26, 2017
    Date of Patent: January 22, 2019
    Assignee: Omni AI, Inc.
    Inventors: Ming-Jung Seow, Wesley Kenneth Cobb, Gang Xu, Tao Yang, Aaron Poffenberger, Lon W. Risinger, Kishor Adinath Saitwal, Michael S. Yantosca, David M. Solum, Alex David Hemsath, Dennis G. Urech, Duy Trong Nguyen, Charles Richard Morgan
  • Publication number: 20190012761
    Abstract: Techniques are disclosed which provide a detected object tracker for a video analytics system. As disclosed, the detected object tracker provides a robust foreground object tracking component for a video analytics system which allows other components of the video analytics system to more accurately evaluate the behavior of a given object (as well as to learn to identify different instances or occurrences of the same object) over time. More generally, techniques are disclosed for identifying what pixels of successive video frames depict the same foreground object. Logic implementing certain functions of the detected object tracker can be executed on either a conventional processor (e.g., a CPU) or a hardware acceleration processing device (e.g., a GPU), allowing multiple camera feeds to be evaluated in parallel.
    Type: Application
    Filed: March 23, 2018
    Publication date: January 10, 2019
    Applicant: OMNI AI, INC.
    Inventors: Lon W. RISINGER, Kishor Adinath SAITWAL, Wesley Kenneth COBB
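Frame-to-frame association of the same foreground object can be sketched with greedy overlap matching of bounding boxes. This is an assumed simplification: the patent's tracker reasons at the pixel level, and the function names here are illustrative.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_detections(tracks, detections, min_iou=0.3):
    """Greedy matching: each new detection inherits the id of the best-
    overlapping tracked box from the previous frame, if the overlap is
    large enough. Returns {detection index -> track id}."""
    assignments = {}
    for det_id, det in enumerate(detections):
        best = max(tracks, key=lambda tid: iou(tracks[tid], det), default=None)
        if best is not None and iou(tracks[best], det) >= min_iou:
            assignments[det_id] = best
    return assignments
```

Each detection is an independent comparison against the tracked boxes, so the per-detection work parallelizes naturally, in the spirit of the CPU/GPU flexibility the abstract mentions.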