Patents by Inventor Oleksiy Bolgarov

Oleksiy Bolgarov has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11729458
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: August 15, 2023
    Assignee: Roku, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
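The abstract above (shared by several related grants and applications below) describes an audio-video fingerprinting pipeline that uses per-frame audio features such as frequency-domain entropy and the maximum change in spectral coefficients. The sketch below is a minimal illustration of how features of that kind could be computed; it is not the patented method, and the frame size, hop size, and windowing choice are assumptions.

```python
# Illustrative sketch only: per-frame audio features of the kind the abstract
# mentions (frequency-domain entropy and maximum spectral change), computed
# with NumPy. Frame size, hop size, and the window are assumptions, not
# values taken from the patent.
import numpy as np

def frame_features(samples, frame_size=4096, hop=2048):
    """Yield (spectral_entropy, max_spectral_change) for each audio frame."""
    prev_mag = None
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size] * np.hanning(frame_size)
        mag = np.abs(np.fft.rfft(frame))
        # Frequency-domain entropy: treat the normalized magnitude spectrum
        # as a probability distribution.
        p = mag / (mag.sum() + 1e-12)
        entropy = -np.sum(p * np.log2(p + 1e-12))
        # Maximum change in the spectral coefficients relative to the
        # previous frame (0.0 for the first frame).
        max_change = 0.0 if prev_mag is None else float(np.max(np.abs(mag - prev_mag)))
        prev_mag = mag
        yield entropy, max_change

# Example: features for one second of a dummy 44.1 kHz signal.
audio = np.random.randn(44100).astype(np.float32)
for ent, chg in frame_features(audio):
    print(f"entropy={ent:.2f} bits, max spectral change={chg:.2f}")
```

In a fingerprinting system, features like these would typically be quantized and hashed into compact signatures before being sent to a search server; that step is omitted here.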
  • Patent number: 11564001
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: January 24, 2023
    Assignee: Roku, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Patent number: 11140439
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: October 5, 2021
    Assignee: Roku, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Patent number: 10986399
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: April 20, 2021
    Assignee: Gracenote, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20210012810
    Abstract: Example methods and apparatus to add a tagged snippet of multimedia content to a playlist are disclosed. An example apparatus comprises an automatic content recognition search service to search a fingerprint database to find a match between query fingerprints for a snippet of multimedia content captured from a multimedia program at a timestamp and reference fingerprints of matching reference multimedia content stored in the fingerprint database, a tag service to generate a tag representing the snippet of multimedia content, wherein the tag, the timestamp, meta information associated with the matching reference multimedia content, and a monitored variable for a number of viewers of the snippet of multimedia content are stored in a database storage as a tagged snippet of multimedia content, and to add the tagged snippet of multimedia content to a playlist for an identified multimedia program if the number of viewers of the tagged snippet exceeds a threshold.
    Type: Application
    Filed: June 24, 2020
    Publication date: January 14, 2021
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
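The abstract above (and the related filings below) describes adding a tagged snippet to a program playlist once its viewer count exceeds a threshold. The sketch below models only that threshold rule with plain data structures; the class names, fields, and the threshold value are assumptions for illustration, not the patented design.

```python
# Illustrative sketch only: the playlist-threshold logic described in the
# abstract. TaggedSnippet, Playlist, and the threshold value are assumptions.
from dataclasses import dataclass, field

@dataclass
class TaggedSnippet:
    program_id: str
    timestamp: float          # seconds into the identified program
    meta: dict                # metadata of the matched reference content
    viewers: int = 0          # monitored number of viewers of the snippet

@dataclass
class Playlist:
    program_id: str
    snippets: list = field(default_factory=list)

def maybe_add_to_playlist(snippet: TaggedSnippet, playlist: Playlist, threshold: int = 100):
    """Add the tagged snippet to the program's playlist once its viewer
    count exceeds the threshold, as the abstract describes."""
    if snippet.program_id == playlist.program_id and snippet.viewers > threshold:
        playlist.snippets.append(snippet)

# Example usage
pl = Playlist(program_id="show-42")
s = TaggedSnippet(program_id="show-42", timestamp=913.5,
                  meta={"title": "Episode 7"}, viewers=250)
maybe_add_to_playlist(s, pl)
print(len(pl.snippets))  # 1
```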
  • Patent number: 10714145
    Abstract: Example methods and apparatus to add a tagged snippet of multimedia content to a playlist are disclosed. An example apparatus comprises an automatic content recognition search service to search a fingerprint database to find a match between query fingerprints for a snippet of multimedia content captured from a multimedia program at a timestamp and reference fingerprints of matching reference multimedia content stored in the fingerprint database, a tag service to generate a tag representing the snippet of multimedia content, wherein the tag, the timestamp, meta information associated with the matching reference multimedia content, and a monitored variable for a number of viewers of the snippet of multimedia content are stored in a database storage as a tagged snippet of multimedia content, and to add the tagged snippet of multimedia content to a playlist for an identified multimedia program if the number of viewers of the tagged snippet exceeds a threshold.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: July 14, 2020
    Assignee: Gracenote, Inc.
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
  • Publication number: 20190387273
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 19, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190379927
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190379928
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190373312
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 5, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190373311
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 5, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190348078
    Abstract: Example methods and apparatus to add a tagged snippet of multimedia content to a playlist are disclosed. An example apparatus comprises an automatic content recognition search service to search a fingerprint database to find a match between query fingerprints for a snippet of multimedia content captured from a multimedia program at a timestamp and reference fingerprints of matching reference multimedia content stored in the fingerprint database, a tag service to generate a tag representing the snippet of multimedia content, wherein the tag, the timestamp, meta information associated with the matching reference multimedia content, and a monitored variable for a number of viewers of the snippet of multimedia content are stored in a database storage as a tagged snippet of multimedia content, and to add the tagged snippet of multimedia content to a playlist for an identified multimedia program if the number of viewers of the tagged snippet exceeds a threshold.
    Type: Application
    Filed: April 16, 2019
    Publication date: November 14, 2019
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
  • Patent number: 10297286
    Abstract: Example methods and apparatus to add a tagged snippet of multimedia content to a playlist are disclosed. An example apparatus comprises an automatic content recognition search service to search a fingerprint database to find a match between query fingerprints for a snippet of multimedia content captured from a multimedia program at a timestamp and reference fingerprints of matching reference multimedia content stored in the fingerprint database, a tag service to generate a tag representing the snippet of multimedia content, wherein the tag, the timestamp, meta information associated with the matching reference multimedia content, and a monitored variable for a number of viewers of the snippet of multimedia content are stored in a database storage as a tagged snippet of multimedia content, and to add the tagged snippet of multimedia content to a playlist for an identified multimedia program if the number of viewers of the tagged snippet exceeds a threshold.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: May 21, 2019
    Assignee: Gracenote, Inc.
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
  • Publication number: 20180254068
    Abstract: Example methods and apparatus to add a tagged snippet of multimedia content to a playlist are disclosed. An example apparatus comprises an automatic content recognition search service to search a fingerprint database to find a match between query fingerprints for a snippet of multimedia content captured from a multimedia program at a timestamp and reference fingerprints of matching reference multimedia content stored in the fingerprint database, a tag service to generate a tag representing the snippet of multimedia content, wherein the tag, the timestamp, meta information associated with the matching reference multimedia content, and a monitored variable for a number of viewers of the snippet of multimedia content are stored in a database storage as a tagged snippet of multimedia content, and to add the tagged snippet of multimedia content to a playlist for an identified multimedia program if the number of viewers of the tagged snippet exceeds a threshold.
    Type: Application
    Filed: May 7, 2018
    Publication date: September 6, 2018
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
  • Patent number: 9966112
    Abstract: In order to organize and reference multimedia content that is presented on a television or other media devices, a tagging system and method are utilized. An approach is described to tag multimedia content at specific times, record text, audio and/or video, and comment at specific times in the multimedia content. For tagging, commenting, and sharing particular moments of multimedia content, automatic content recognition (ACR) is used. ACR supports tagging and recording of snippets of the multimedia content at specific times. Snippets are displayed by using a thumbnail of pictures or small multimedia clips. The snippets can be commented on and shared with selected users or groups of users. An automatic highlight playlist of a multimedia content can be generated, and various filtering operations of the tags, comments and snippets can also be performed.
    Type: Grant
    Filed: April 17, 2014
    Date of Patent: May 8, 2018
    Assignee: Gracenote, Inc.
    Inventors: Sunil Suresh Kulkarni, Oleksiy Bolgarov
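The abstract above also mentions generating an automatic highlight playlist from tags and comments. The sketch below shows one simple way such a playlist could be filtered and ordered; the Tag structure and the "most-commented moments first" ranking rule are assumptions, not the patented approach.

```python
# Illustrative sketch only: building a highlight playlist by filtering tagged
# moments. The Tag fields and the ranking rule are assumptions.
from dataclasses import dataclass

@dataclass
class Tag:
    time_sec: float         # position in the program where the tag was placed
    comments: int           # number of comments attached to this moment
    snippet_thumbnail: str  # path or URL of the thumbnail shown for the snippet

def highlight_playlist(tags, top_n=5):
    """Return the top_n most-commented tags, presented in program order."""
    ranked = sorted(tags, key=lambda t: t.comments, reverse=True)[:top_n]
    return sorted(ranked, key=lambda t: t.time_sec)

tags = [Tag(120.0, 3, "t1.jpg"), Tag(930.5, 12, "t2.jpg"), Tag(2411.0, 7, "t3.jpg")]
for t in highlight_playlist(tags, top_n=2):
    print(t.time_sec, t.comments)
```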
  • Publication number: 20170201793
    Abstract: A content segmentation, categorization and identification method on consumer devices (clients) is described. Methods for content tracking are illustrated that are suitable for large scale deployment and applications such as broadcast monitoring, novel content publishing and interaction. Time-aligned (synchronous) applications such as multi-language selection, customized advertisements, second screen services and content monitoring applications can be economically deployed at large scales. The client performs fingerprinting, scene change detection, audio turn detection, and logo detection on incoming video and gathers database search results, logos and text to identify and segment video streams into content, promos, and commercials. A learning engine is configured to learn rules for optimal identification and segmentation at each client for each channel and program. Content sensed at the client site is tracked with reduced computation and applications are executed with timing precision.
    Type: Application
    Filed: October 19, 2016
    Publication date: July 13, 2017
    Applicant: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Sunil Suresh Kulkarni, Oleksiy Bolgarov, Prashant Ramanathan, Shashank Merchant, Mihailo M. Stojancic
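The abstract above lists scene change detection among the analyses the client performs before segmenting a stream into content, promos, and commercials. The sketch below is a basic histogram-difference scene-change detector of that general kind; the histogram size and threshold are assumptions and the patent's actual detection and learning rules are not reproduced here.

```python
# Illustrative sketch only: a simple scene-change detector using frame-to-frame
# luminance histogram differences. Bin count and threshold are assumptions.
import numpy as np

def scene_changes(frames, bins=32, threshold=0.4):
    """Return indices where the luminance histogram changes sharply.

    `frames` is an iterable of 2-D uint8 arrays (grayscale frames).
    """
    changes = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev_hist is not None:
            # L1 distance between normalized histograms, in [0, 2].
            if np.abs(hist - prev_hist).sum() > threshold:
                changes.append(i)
        prev_hist = hist
    return changes

# Example: a cut between a dark and a bright synthetic frame is detected.
dark = np.zeros((120, 160), dtype=np.uint8)
bright = np.full((120, 160), 220, dtype=np.uint8)
print(scene_changes([dark, dark, bright, bright]))  # [2]
```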
  • Publication number: 20160364389
    Abstract: Techniques for efficient database formation and search in applications embedded in a media device are provided. The search may be performed synchronously with presentation of media programming content on a nearby media presentation device. A mobile media device captures some temporal fragments of the presented audio/video content on its microphone and camera, and then generates query fingerprints for the captured fragment. A local reference database resides on the mobile media device and a master reference database resides on a remote server with a most recent chunk of reference fingerprints transferred dynamically to the local mobile media device. A chunk of the query fingerprints generated locally on the mobile media device are searched on the local reference database for continuous content search and identification. The method presented automatically switches between the local search on the mobile media device and a remote search on an external search server.
    Type: Application
    Filed: May 16, 2016
    Publication date: December 15, 2016
    Applicant: Gracenote, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
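The abstract above (and the related filings below) describes searching a local on-device reference database first and automatically switching to a remote search server. The sketch below illustrates that local-first, remote-fallback flow; the fingerprint representation (sets of hashes), the match score, and the remote_search stub are assumptions for illustration only.

```python
# Illustrative sketch only: local-first search with remote fallback, as the
# abstract describes. Fingerprints are modeled as sets of hashes; the score
# and threshold are assumptions.
def match_score(query_fp, reference_fp):
    """Fraction of query hashes present in the reference fingerprint."""
    return len(query_fp & reference_fp) / max(len(query_fp), 1)

def identify(query_fp, local_db, remote_search, min_score=0.6):
    """Try the on-device reference database first; fall back to the remote
    search server when no local reference matches well enough."""
    best_id, best_score = None, 0.0
    for content_id, reference_fp in local_db.items():
        score = match_score(query_fp, reference_fp)
        if score > best_score:
            best_id, best_score = content_id, score
    if best_score >= min_score:
        return best_id, "local"
    return remote_search(query_fp), "remote"

# Example usage with toy hash sets and a stub remote service.
local_db = {"episode-1": {1, 2, 3, 4, 5}, "episode-2": {9, 10, 11}}
remote = lambda fp: "episode-3"
print(identify({1, 2, 3, 9}, local_db, remote))   # ('episode-1', 'local')
print(identify({40, 41, 42}, local_db, remote))   # ('episode-3', 'remote')
```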
  • Patent number: 9510044
    Abstract: Content segmentation, categorization and identification methods are described. Content tracking approaches are illustrated that are suitable for large scale deployment. Time-aligned applications such as multi-language selection, customized advertisements, second screen services and content monitoring applications can be economically deployed at large scales. A client performs fingerprinting, scene change detection, audio turn detection, and logo detection on incoming video and gathers database search results, logos and text to identify and segment video streams into content, promos, and commercials. A learning engine is configured to learn rules for optimal identification and segmentation at each client for each channel and program. Content sensed at the client site is tracked with reduced computation and applications are executed with timing precision. A user interface for time-aligned publishing of content and subsequent usage and interaction on one or more displays is also described.
    Type: Grant
    Filed: December 15, 2011
    Date of Patent: November 29, 2016
    Assignee: Gracenote, Inc.
    Inventors: Jose Pio Pereira, Sunil Suresh Kulkarni, Oleksiy Bolgarov, Prashant Ramanathan, Shashank Merchant, Mihailo Stojancic
  • Patent number: 9367544
    Abstract: Techniques for efficient database formation and search in applications embedded in a media device are provided. The search may be performed synchronously with presentation of media programming content on a nearby media presentation device. A mobile media device captures some temporal fragments of the presented audio/video content on its microphone and camera, and then generates query fingerprints for the captured fragment. A local reference database resides on the mobile media device and a master reference database resides on a remote server with a most recent chunk of reference fingerprints transferred dynamically to the local mobile media device. A chunk of the query fingerprints generated locally on the mobile media device are searched on the local reference database for continuous content search and identification. The method presented automatically switches between the local search on the mobile media device and a remote search on an external search server.
    Type: Grant
    Filed: March 14, 2013
    Date of Patent: June 14, 2016
    Assignee: Gracenote, Inc.
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20130246457
    Abstract: Techniques for efficient database formation and search in applications embedded in a media device are provided. The search may be performed synchronously with presentation of media programming content on a nearby media presentation device. A mobile media device captures some temporal fragments of the presented audio/video content on its microphone and camera, and then generates query fingerprints for the captured fragment. A local reference database resides on the mobile media device and a master reference database resides on a remote server with a most recent chunk of reference fingerprints transferred dynamically to the local mobile media device. A chunk of the query fingerprints generated locally on the mobile media device are searched on the local reference database for continuous content search and identification. The method presented automatically switches between the local search on the mobile media device and a remote search on an external search server.
    Type: Application
    Filed: March 14, 2013
    Publication date: September 19, 2013
    Applicant: Zeitera, LLC
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov