Patents by Inventor Ivan Kovtun

Ivan Kovtun has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087365
    Abstract: A multisensor processing platform includes, in at least some embodiments, a face detector and embedding network for analyzing unstructured data to detect, identify and track any combination of objects (including people) or activities through computer vision algorithms and machine learning. In some embodiments, the unstructured data is compressed by identifying the appearance of an object across a series of frames of the data, aggregating those appearances and effectively summarizing those appearances of the object by a single representative image displayed to a user for each set of aggregated appearances to enable the user to assess the summarized data substantially at a glance. The data can be filtered into tracklets, groups and clusters, based on system confidence in the identification of the object or activity, to provide multiple levels of granularity.
    Type: Application
    Filed: January 19, 2021
    Publication date: March 14, 2024
    Applicant: PERCIPIENT.AI INC.
    Inventors: Timo PYLVAENAEINEN, Craig SENNABAUM, Mike HIGUERA, Ivan KOVTUN, Atul KANAUJIA, Alison HIGUERA, Jerome BERCLAZ, Rajendra SHAH, Balan AYYAR, Vasudev PARAMESWARAN
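The aggregation described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the patented implementation; the `Detection` record and the choice of the highest-confidence crop as the representative image are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int          # frame index where the object appeared
    track_id: int       # identity assigned by the tracker
    confidence: float   # detector/recognizer confidence
    crop: str           # stand-in for the cropped image of the object

def summarize_tracklets(detections):
    """Aggregate the appearances of each object across frames and pick
    one representative image per tracklet (here: the highest-confidence
    crop), so a user can assess the summarized data at a glance."""
    tracklets = {}
    for det in detections:
        tracklets.setdefault(det.track_id, []).append(det)
    return {
        tid: max(dets, key=lambda d: d.confidence)
        for tid, dets in tracklets.items()
    }

dets = [
    Detection(0, 1, 0.62, "crop_a"),
    Detection(1, 1, 0.91, "crop_b"),
    Detection(1, 2, 0.55, "crop_c"),
]
reps = summarize_tracklets(dets)
# one representative appearance per track id
```

In a real system the representative could instead be chosen by pose or sharpness; confidence is simply the easiest proxy to show here.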
  • Publication number: 20230334291
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Application
    Filed: April 24, 2023
    Publication date: October 19, 2023
    Applicant: PERCIPIENT.AI INC.
    Inventors: Vasudev Parameswaran, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
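The teacher-student merging idea in the abstract above can be sketched as follows: each teacher model pseudo-labels images for the subset of classes it knows, and the union of those labels becomes training data for a single student detector. A minimal sketch under those assumptions; the toy teachers and `(class, box)` label format are illustrative, not from the patent:

```python
def merge_teacher_labels(image_id, teachers):
    """Pseudo-label an image with every teacher; the merged label set
    covers the union of all teachers' classes, which is the label
    space the single student model would be trained on."""
    merged = []
    for teacher in teachers:
        merged.extend(teacher(image_id))
    return merged

# toy "teachers": each returns (class_name, bounding_box) pseudo-labels
teacher_vehicles = lambda img: [("car", (10, 10, 50, 40))]
teacher_people   = lambda img: [("person", (60, 5, 80, 70))]

labels = merge_teacher_labels("frame_0001",
                              [teacher_vehicles, teacher_people])
```

The iterative part of the claimed process would wrap this in a loop: train, auto-label more images, correct a few by hand, and retrain.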
  • Patent number: 11636312
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Grant
    Filed: October 4, 2022
    Date of Patent: April 25, 2023
    Assignee: PERCIPIENT.AI INC.
    Inventors: Vasudev Parameswaran, Atul Kanaujia, Simon Chen, Jerome Berclaz, Ivan Kovtun, Alison Higuera, Vidyadayini Talapady, Derek Young, Balan Ayyar, Rajendra Shah, Timo Pylvanainen
  • Publication number: 20230023164
    Abstract: A computer vision system configured for detection and recognition of objects in video and still imagery in a live or historical setting uses a teacher-student object detector training approach to yield a merged student model capable of detecting all of the classes of objects any of the teacher models is trained to detect. Further, training is simplified by providing an iterative training process wherein a relatively small number of images is labeled manually as initial training data, after which an iterated model cooperates with a machine-assisted labeling process and an active learning process where detector model accuracy improves with each iteration, yielding improved computational efficiency. Further, synthetic data is generated by which an object of interest can be placed in a variety of settings sufficient to permit training of models. A user interface guides the operator in the construction of a custom model capable of detecting a new object.
    Type: Application
    Filed: October 4, 2022
    Publication date: January 26, 2023
    Applicant: PERCIPIENT.AI INC.
    Inventors: Vasudev PARAMESWARAN, Atul KANAUJIA, Simon CHEN, Jerome BERCLAZ, Ivan KOVTUN, Alison HIGUERA, Vidyadayini TALAPADY, Derek YOUNG, Balan AYYAR, Rajendra SHAH, Timo PYLVANAINEN
  • Publication number: 20190073520
    Abstract: A system for identifying individuals within a digital file is described. The system accesses a digital file describing the movement of unidentified individuals, divides it into a set of segments, and detects faces of unidentified individuals by applying a detection algorithm to each segment. For each detected face, the system applies a recognition algorithm to extract feature vectors representative of the identity of the detected face, which are stored in computer memory. The system then queries the extracted feature vectors for target individuals by matching unidentified individuals to target individuals, determining a confidence level describing the likelihood that each match is correct, and generating a report to be presented to a user of the system.
    Type: Application
    Filed: August 31, 2018
    Publication date: March 7, 2019
    Inventors: Balan Rama Ayyar, Anantha Krishnan Bangalore, Jerome Francois Berclaz, Reechik Chatterjee, Nikhil Kumar Gupta, Ivan Kovtun, Vasudev Parameswaran, Timo Pekka Pylvaenaeinen, Rajendra Jayantilal Shah
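The query step described in the abstract above (matching extracted feature vectors against target individuals with a confidence score) is commonly implemented as a nearest-neighbor search in embedding space. A minimal sketch, assuming cosine similarity as the match score and a fixed acceptance threshold; neither detail is specified by the filing:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def query_targets(detected, targets, threshold=0.8):
    """Match each detected-face embedding against the target
    embeddings, reporting (face, best target, confidence) for every
    match whose confidence clears the threshold."""
    report = []
    for face_id, vec in detected.items():
        best = max(targets, key=lambda name: cosine(vec, targets[name]))
        score = cosine(vec, targets[best])
        if score >= threshold:
            report.append((face_id, best, round(score, 3)))
    return report

# toy 2-D embeddings; real systems use high-dimensional vectors
targets  = {"target_a": [1.0, 0.0], "target_b": [0.0, 1.0]}
detected = {"face_1": [0.9, 0.1], "face_2": [0.5, 0.5]}
matches = query_targets(detected, targets)
# face_1 matches target_a; face_2 is too ambiguous to clear the threshold
```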
  • Patent number: 9654700
    Abstract: A camera device includes monochromatic and color image sensors that capture an image as a clear image in monochrome and as a Bayer image. The camera device implements image processing algorithms to produce an enhanced, high-resolution HDR color image. The Bayer image is demosaiced to generate an initial color image, and a disparity map is generated to establish correspondence between pixels of the initial color image and clear image. A mapped color image is generated to map the initial color image onto the clear image. A denoised clear image is applied as a guide image of a guided filter that filters the mapped color image to generate a filtered color image. The filtered color image and the denoised clear image are then fused to produce an enhanced, high-resolution HDR color image, and the disparity map and the mapped color image are updated based on the enhanced, high-resolution HDR color image.
    Type: Grant
    Filed: September 16, 2014
    Date of Patent: May 16, 2017
    Assignee: Google Technology Holdings LLC
    Inventors: Ivan Kovtun, Volodymyr Kysenko, Yuriy Musatenko, Adrian M. Proca, Philip S. Stetson, Yevhen Ivannikov
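The fusion step in the abstract above combines the denoised clear (monochrome) image with the mapped color image. A very crude stand-in for that idea, shown per pixel: keep each color pixel's chroma ratios but replace its luma with the sharper clear-sensor luma. This is only an illustration of the fusion concept, not the guided-filter pipeline the patent actually claims:

```python
def fuse(clear_luma, color_rgb):
    """For each pixel, rescale the RGB triple so its luma matches the
    denoised clear-sensor luma, preserving the color's chroma ratios.
    Uses the Rec. 601 luma weights (0.299, 0.587, 0.114)."""
    out = []
    for y, (r, g, b) in zip(clear_luma, color_rgb):
        luma = 0.299 * r + 0.587 * g + 0.114 * b
        scale = y / luma if luma else 0.0
        out.append((r * scale, g * scale, b * scale))
    return out

# one gray pixel whose color-sensor luma (100) is dimmer than the
# clear-sensor luma (120): fusion brightens it while keeping its hue
fused = fuse([120.0], [(100.0, 100.0, 100.0)])
```

The claimed method additionally iterates, updating the disparity map and the mapped color image from the fused result; that feedback loop is omitted here.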
  • Publication number: 20160080626
    Abstract: A camera device includes monochromatic and color image sensors that capture an image as a clear image in monochrome and as a Bayer image. The camera device implements image processing algorithms to produce an enhanced, high-resolution HDR color image. The Bayer image is demosaiced to generate an initial color image, and a disparity map is generated to establish correspondence between pixels of the initial color image and clear image. A mapped color image is generated to map the initial color image onto the clear image. A denoised clear image is applied as a guide image of a guided filter that filters the mapped color image to generate a filtered color image. The filtered color image and the denoised clear image are then fused to produce an enhanced, high-resolution HDR color image, and the disparity map and the mapped color image are updated based on the enhanced, high-resolution HDR color image.
    Type: Application
    Filed: September 16, 2014
    Publication date: March 17, 2016
    Inventors: Ivan Kovtun, Volodymyr Kysenko, Yuriy Musatenko, Adrian M. Proca, Philip S. Stetson, Yevhen Ivannikov
  • Patent number: 8971591
    Abstract: A server determines a plurality of faceprints representing a plurality of users to be recognized at a client device. Each faceprint contains a number of reference images for a given user that are used to recognize facial images of the user detected in media captured at the client device. The faceprints delivered to the client device are determined based on the users likely to be detected in images captured at the client device. The reference images within a given faceprint delivered to the client device are selected by the server based on their recognition value in identifying the users likely to be detected in images captured at the client device.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: March 3, 2015
    Assignee: Google Technology Holdings LLC
    Inventors: Ivan Kovtun, Volodymyr Kyyko, Yuriy S. Musatenko, Michael Jason Mitura, Laurent Gil
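The server-side selection the abstract above describes can be sketched as a simple two-stage filter: restrict to the users likely to appear at the device, then keep only the reference images with the highest recognition value. A minimal sketch; the `value` score and top-k cutoff are illustrative assumptions:

```python
def faceprints_for_device(likely_users, references, k=2):
    """Build the payload delivered to one client device: for each user
    likely to be detected there, keep only the k reference images with
    the highest recognition value."""
    payload = {}
    for user in likely_users:
        refs = sorted(references.get(user, []),
                      key=lambda r: r["value"], reverse=True)
        payload[user] = refs[:k]
    return payload

references = {
    "alice": [{"img": "a1", "value": 0.4},
              {"img": "a2", "value": 0.9},
              {"img": "a3", "value": 0.7}],
    "bob":   [{"img": "b1", "value": 0.8}],
}
# this device is only expected to see "alice", so "bob" is never shipped
payload = faceprints_for_device(["alice"], references)
```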
  • Patent number: 8457368
    Abstract: A method for processing digital media is described. The method, in one example embodiment, includes identification of objects in a video stream by detecting, for each video frame, an object in the video frame and selectively associating the object with an object cluster. The method may further include comparing the object in the object cluster to a reference object and selectively associating object data of the reference object with all objects within the object cluster based on the comparing. The method may further include manually associating the object data of the reference object with all objects within the object cluster having no associated reference object and populating a reference database with the reference object for the object cluster.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: June 4, 2013
    Assignee: Viewdle Inc.
    Inventors: Ivan Kovtun, Oleksandr Zhukov, Yuriy Musatenko, Mykhailo Schlesinger
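The label-propagation step in the abstract above (match a cluster to a reference object, then associate the reference's data with every object in the cluster, routing unmatched clusters to manual labeling) can be sketched as follows. The exemplar-based matching and the dict-shaped records are assumptions for the sketch, not the claimed data model:

```python
def label_clusters(clusters, reference_db, match):
    """Compare each cluster (via one exemplar) against the reference
    objects; on a match, propagate the reference's label to every
    object in the cluster. Clusters with no matching reference are
    returned for manual association, after which the reference
    database can be populated with a new entry."""
    unmatched = []
    for cluster in clusters:
        ref = next((r for r in reference_db
                    if match(cluster["exemplar"], r)), None)
        if ref is not None:
            for obj in cluster["objects"]:
                obj["label"] = ref["label"]
        else:
            unmatched.append(cluster)
    return unmatched

clusters = [
    {"exemplar": "faceA", "objects": [{"id": 1}, {"id": 2}]},
    {"exemplar": "faceZ", "objects": [{"id": 3}]},
]
reference_db = [{"key": "faceA", "label": "Person 1"}]
pending = label_clusters(clusters, reference_db,
                         lambda ex, r: ex == r["key"])
```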
  • Patent number: 8315430
    Abstract: A method for processing digital media is described. The method, in one example embodiment, includes identification of objects in a video stream by detecting, for each video frame, an object in the video frame and selectively associating the object with an object cluster. The method may further include comparing the object in the object cluster to a reference object and selectively associating object data of the reference object with all objects within the object cluster based on the comparing. The method may further include manually associating the object data of the reference object with all objects within the object cluster having no associated reference object and populating a reference database with the reference object for the object cluster.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: November 20, 2012
    Assignee: Viewdle Inc.
    Inventors: Ivan Kovtun, Oleksandr Zhukov, Yuriy Musatenko, Mykhailo Schlesinger
  • Patent number: 8150169
    Abstract: Embodiments of computer implemented methods and systems for object clustering and identification are described. One example embodiment includes receiving an unclustered video object, determining a first distance between the unclustered video object and an arbitrary representative video object, the arbitrary representative video object being selected from representative video objects, estimating distances between the unclustered video object and the representative video objects based on the first distance and precalculated distances between the arbitrary representative video object and the representative video objects, and, based on the estimated distances, selectively associating the unclustered video object with a video cluster, thereby producing a clustered video object.
    Type: Grant
    Filed: September 16, 2008
    Date of Patent: April 3, 2012
    Assignee: Viewdle Inc.
    Inventors: Ivan Kovtun, Yuriy Musatenko, Mykhailo Schlesinger
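The distance estimation in the abstract above is a classic triangle-inequality trick: measuring one distance d(u, r0) and reusing precalculated distances d(r0, r) bounds d(u, r) for every representative r without computing it. A minimal sketch of that reasoning; the threshold-based association rule is an assumption added for illustration:

```python
def estimate_distances(d_u_r0, d_r0_to_reps):
    """Bound the distance from an unclustered object u to each
    representative r using one measured distance d(u, r0) and the
    precalculated d(r0, r). By the triangle inequality:
        |d(u, r0) - d(r0, r)| <= d(u, r) <= d(u, r0) + d(r0, r)."""
    return {
        rep: (abs(d_u_r0 - d), d_u_r0 + d)
        for rep, d in d_r0_to_reps.items()
    }

def candidates_within(bounds, threshold):
    """Keep only representatives whose upper bound already falls below
    the threshold -- their exact distances never need computing."""
    return [rep for rep, (lo, hi) in bounds.items() if hi <= threshold]

# d(u, r0) = 2.0; precalculated distances from r0 to two representatives
bounds = estimate_distances(2.0, {"r1": 0.5, "r2": 10.0})
near = candidates_within(bounds, threshold=3.0)
# r1 is provably within 2.5 of u; r2 is provably at least 8.0 away
```

The payoff is that clustering each new object costs one distance computation instead of one per representative.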
  • Patent number: 8064641
    Abstract: A method for processing digital media is described. In one example embodiment, the method may include detecting an unknown object in a video frame, receiving inputs representing probable identities of the unknown object in the video frame from various sources, and associating each input with the unknown object detected in the video frame. The received inputs may be processed and compared with reference data and, based on the comparison, probable identities of the object associated with the input are derived. The method may further include retrieving a likelihood of the input to match the unknown object from historical data and producing weights corresponding to the inputs, fusing the inputs and the relative weight associated with each input, and identifying the unknown object based on a comparison of the weighted distances from the unknown identity to a reference identity.
    Type: Grant
    Filed: December 3, 2007
    Date of Patent: November 22, 2011
    Assignee: Viewdle Inc.
    Inventors: Yegor Anchyshkin, Yuriy Musatenko, Kostyantyn Milshteyn, Volodymyr Kyyko, Ivan Kovtun, Vyacheslav Matsello, Mykhailo Schlesinger
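The fusion step in the abstract above (weight each input source by its historical reliability, then identify via the smallest weighted distance) can be sketched as follows. The two toy sources and their weights are illustrative assumptions, not values from the patent:

```python
def fuse_and_identify(input_distances, weights):
    """Fuse per-source distances to each candidate identity using the
    sources' reliability weights, then pick the identity with the
    smallest fused (weighted) distance."""
    fused = {}
    for source, dists in input_distances.items():
        w = weights[source]
        for identity, d in dists.items():
            fused[identity] = fused.get(identity, 0.0) + w * d
    return min(fused, key=fused.get), fused

# smaller distance = closer match; two sources disagree on "yuriy"
distances = {
    "face":  {"ivan": 0.2, "yuriy": 0.7},
    "voice": {"ivan": 0.5, "yuriy": 0.4},
}
# weights derived (per the claim) from each source's historical accuracy
weights = {"face": 0.8, "voice": 0.2}
who, fused = fuse_and_identify(distances, weights)
# the reliable face source dominates, so "ivan" wins
```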
  • Publication number: 20100067745
    Abstract: Embodiments of computer implemented methods and systems for object clustering and identification are described. One example embodiment includes receiving an unclustered video object, determining a first distance between the unclustered video object and an arbitrary representative video object, the arbitrary representative video object being selected from representative video objects, estimating distances between the unclustered video object and the representative video objects based on the first distance and precalculated distances between the arbitrary representative video object and the representative video objects, and, based on the estimated distances, selectively associating the unclustered video object with a video cluster, thereby producing a clustered video object.
    Type: Application
    Filed: September 16, 2008
    Publication date: March 18, 2010
    Inventors: Ivan Kovtun, Yuriy Musatenko, Mykhailo Schlesinger
  • Publication number: 20090141988
    Abstract: A method for processing digital media is described. The method, in one example embodiment, includes identification of objects in a video stream by detecting, for each video frame, an object in the video frame and selectively associating the object with an object cluster. The method may further include comparing the object in the object cluster to a reference object and selectively associating object data of the reference object with all objects within the object cluster based on the comparing. The method may further include manually associating the object data of the reference object with all objects within the object cluster having no associated reference object and populating a reference database with the reference object for the object cluster.
    Type: Application
    Filed: December 3, 2007
    Publication date: June 4, 2009
    Inventors: Ivan Kovtun, Oleksandr Zhukov, Yuriy Musatenko, Mykhailo Schlezinger
  • Publication number: 20090116695
    Abstract: A method for processing digital media is described. In one example embodiment, the method may include detecting an unknown object in a video frame, receiving inputs representing probable identities of the unknown object in the video frame from various sources, and associating each input with the unknown object detected in the video frame. The received inputs may be processed and compared with reference data and, based on the comparison, probable identities of the object associated with the input are derived. The method may further include retrieving a likelihood of the input to match the unknown object from historical data and producing weights corresponding to the inputs, fusing the inputs and the relative weight associated with each input, and identifying the unknown object based on a comparison of the weighted distances from the unknown identity to a reference identity.
    Type: Application
    Filed: December 3, 2007
    Publication date: May 7, 2009
    Inventors: Yegor Anchyshkin, Yuriy Musatenko, Kostyantyn Milshteyn, Volodymyr Kyyko, Ivan Kovtun, Vyacheslav Matsello, Mykhailo Schlezinger