Patents by Inventor Shashank Merchant

Shashank Merchant has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200162788
    Abstract: In one aspect, an example method includes (i) presenting, by a playback device, first media content from a first source; (ii) encountering, by the playback device, a trigger to switch from presenting the first media content from the first source to presenting second media content from a second source; (iii) determining, by the playback device, that the playback device is presenting the first media content from the first source in a muted state; and (iv) responsive to encountering the trigger, and based on the determining that the playback device is presenting the first media content from the first source in a muted state, presenting, by the playback device, the second media content from the second source in the muted state.
    Type: Application
    Filed: June 6, 2019
    Publication date: May 21, 2020
    Inventors: Markus K. Cremer, Shashank Merchant
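As an illustration of the idea in the abstract above (presenting the second source in the muted state when the first source was muted), here is a minimal Python sketch. The class and method names are hypothetical and not taken from the patent; this is not the claimed implementation.

```python
# Illustrative sketch only: a toy playback controller that keeps the muted state
# when a trigger switches presentation from a first source to a second source.
from dataclasses import dataclass

@dataclass
class Source:
    name: str

class PlaybackDevice:
    def __init__(self, source: Source, muted: bool = False):
        self.source = source
        self.muted = muted

    def on_switch_trigger(self, new_source: Source) -> None:
        # Determine whether the first source is being presented muted; if so,
        # present the second source in the muted state as well.
        present_muted = self.muted
        self.source = new_source
        self.muted = present_muted
        print(f"Now presenting {self.source.name} ({'muted' if self.muted else 'unmuted'})")

device = PlaybackDevice(Source("broadcast feed"), muted=True)
device.on_switch_trigger(Source("replacement content"))   # stays muted
```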
  • Publication number: 20200081914
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to improve media identification. An example apparatus includes a hash handler to generate a first set of reference matches by performing hash functions on a subset of media data associated with media to generate hashed media data based on a first bucket size, a candidate determiner to identify a second set of reference matches that include ones of the first set, the second set including ones having first quantities of hits that did not satisfy a threshold, determine second quantities of hits for ones of the second set by matching ones to the hash tables based on a second bucket size, and identify one or more candidate matches based on at least one of (1) ones of the first set or (2) ones of the second set, and a report generator to generate a report including a media identification.
    Type: Application
    Filed: July 31, 2019
    Publication date: March 12, 2020
    Inventors: Jeffrey Scott, Matthew James Wilkinson, Robert Coover, Shashank Merchant
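To make the two-pass matching described in the abstract above more concrete, here is a minimal Python sketch: a coarse bucket size produces a first set of reference matches, references whose hit counts miss a threshold are re-scored with a finer bucket size, and candidates are drawn from both sets. The data, bucket sizes, and threshold are hypothetical; this is a sketch of the general technique, not the patented apparatus.

```python
# Illustrative sketch only: two-pass hash-bucket matching for media identification.
from collections import Counter

def bucketize(values, bucket_size):
    """Quantize fingerprint values into hash buckets of the given size."""
    return [v // bucket_size for v in values]

def match(query, reference_db, bucket_size):
    """Count bucket collisions ("hits") between the query and each reference."""
    hits = Counter()
    q_buckets = set(bucketize(query, bucket_size))
    for ref_id, ref_values in reference_db.items():
        hits[ref_id] = len(q_buckets & set(bucketize(ref_values, bucket_size)))
    return hits

def identify(query, reference_db, coarse=8, fine=2, threshold=3):
    first_pass = match(query, reference_db, coarse)
    # References that cleared the threshold are candidates outright.
    candidates = {r for r, h in first_pass.items() if h >= threshold}
    # References below the threshold get a second look at the finer bucket size.
    borderline = {r: reference_db[r] for r, h in first_pass.items() if h < threshold}
    second_pass = match(query, borderline, fine)
    candidates |= {r for r, h in second_pass.items() if h >= threshold}
    return candidates

reference_db = {"song_a": [3, 19, 42, 57, 88], "song_b": [200, 300, 400, 57, 88]}
print(identify([4, 20, 43, 58, 89], reference_db))   # -> {'song_a'}
```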
  • Publication number: 20200050074
    Abstract: Example systems and methods to transform events and/or mood associated with playing media into lighting effects are disclosed herein. An example apparatus includes a content identifier to identify a first event occurring during presentation of media content at a first time. The example apparatus includes a content driven analyzer to determine a first lighting effect to be produced by a light-producing device based on the first event and instruct the light-producing device to produce the first lighting effect based on the first event during presentation of the media content. The content identifier is to identify a second media event occurring during presentation of the media content at a second time after the first time. The content driven analyzer is to instruct the light-producing device to one of maintain the first lighting effect based on the second event or produce a second lighting effect based on the second event during presentation of the media content.
    Type: Application
    Filed: October 21, 2019
    Publication date: February 13, 2020
    Inventors: Markus Kurt Cremer, Shashank Merchant, Aneesh Vartakavi
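The abstract above describes maintaining or changing a lighting effect depending on successive media events. Below is a minimal Python sketch of that decision; the event names, effect values, and LightDevice interface are hypothetical, not the patented apparatus.

```python
# Illustrative sketch only: map identified media events to lighting effects and
# keep the current effect when a later event maps to the same one.
EVENT_TO_EFFECT = {
    "explosion": ("red", 1.0),        # (color, intensity)
    "night_scene": ("blue", 0.3),
    "calm_dialogue": ("warm_white", 0.5),
}

class LightDevice:
    def __init__(self):
        self.effect = None

    def produce(self, effect):
        if effect == self.effect:
            print(f"maintaining {effect}")   # later event keeps the first effect
        else:
            self.effect = effect
            print(f"producing {effect}")     # later event triggers a new effect

def on_media_event(light, event_name):
    effect = EVENT_TO_EFFECT.get(event_name)
    if effect is not None:
        light.produce(effect)

light = LightDevice()
on_media_event(light, "night_scene")   # first event
on_media_event(light, "night_scene")   # second event: effect maintained
on_media_event(light, "explosion")     # second event: new effect produced
```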
  • Publication number: 20200029129
    Abstract: In one aspect, an example method includes (i) providing, by a playback device, replacement media content for display; (ii) determining, by the playback device, that while the playback device is displaying the replacement media content a remote control transmitted an instruction to a media device that provides media content to the playback device; (iii) determining, by the playback device, a playback-modification action corresponding to the instruction and the media device; and (iv) modifying, by the playback device, playback of the replacement media content in accordance with the playback-modification action.
    Type: Application
    Filed: November 6, 2018
    Publication date: January 23, 2020
    Inventors: Kurt R. Thielen, Shashank Merchant, Peter Dunker, Markus K. Cremer, Chungwon Seo, Seunghyeong Lee, Steven D. Scherf
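As a concrete reading of the abstract above, the playback device maps the observed (instruction, media device) pair to a playback-modification action. The sketch below uses a hypothetical lookup table and instruction names; it illustrates the general idea only.

```python
# Illustrative sketch only: look up a playback-modification action from the
# (instruction, media device) pair observed while replacement content is shown.
ACTION_TABLE = {
    ("pause", "set_top_box"): "pause_replacement",
    ("channel_up", "set_top_box"): "stop_replacement",
    ("volume_up", "set_top_box"): "no_op",
}

def modify_playback(instruction: str, media_device: str) -> str:
    action = ACTION_TABLE.get((instruction, media_device), "stop_replacement")
    print(f"remote sent '{instruction}' to {media_device} -> {action}")
    return action

modify_playback("pause", "set_top_box")        # pause the replacement content too
modify_playback("channel_up", "set_top_box")   # abandon replacement, show live feed
```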
  • Publication number: 20200029118
    Abstract: In one aspect, an example method includes (i) identifying, by a playback device, a media device based on a control message received from the media device by way of an audio and/or video interface, where the media device provides media content to the playback device; (ii) providing, by the playback device, replacement media content for display; (iii) determining, by the playback device, that while the playback device is displaying the replacement media content a remote control transmitted an instruction to the identified media device; (iv) determining, by the playback device, a playback-modification action corresponding to the instruction and the identified media device; and (v) modifying, by the playback device, playback of the replacement media content in accordance with the playback-modification action.
    Type: Application
    Filed: November 6, 2018
    Publication date: January 23, 2020
    Inventors: Kurt R. Thielen, Peter Dunker, Markus K. Cremer, Steven D. Scherf, Shashank Merchant
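This entry adds identification of the media device from a control message received over the audio/video interface. The sketch below shows one way such an identification step could look; the message format and vendor table are entirely hypothetical and do not correspond to the patent or to any real interface protocol.

```python
# Illustrative sketch only: identify the upstream media device from a control
# message, then reuse that identity when looking up playback-modification actions.
VENDOR_TABLE = {0x10: "set_top_box", 0x20: "game_console", 0x30: "blu_ray_player"}

def identify_media_device(control_message: bytes) -> str:
    vendor_id = control_message[0]   # assume the first byte carries a vendor code
    return VENDOR_TABLE.get(vendor_id, "unknown_device")

print(identify_media_device(bytes([0x10, 0x04, 0x7F])))   # -> set_top_box
```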
  • Publication number: 20200021789
    Abstract: In one aspect, an example method includes (i) providing, by a playback device, replacement media content for display; (ii) determining, by the playback device, that a remote control transmitted to the playback device an instruction configured to cause a modification to operation of the playback device while the playback device displays the replacement media content; (iii) determining, by the playback device based on the instruction, an overlay that the playback device is configured to provide for display in conjunction with the modification; (iv) determining, by the playback device, a region within a display of the playback device corresponding to the overlay; and (v) modifying, by the playback device, a transparency of the region such that the overlay is visible through the replacement media content when the playback device provides the overlay for display.
    Type: Application
    Filed: November 6, 2018
    Publication date: January 16, 2020
    Inventors: Kurt R. Thielen, Shashank Merchant, Peter Dunker, Markus K. Cremer, Chungwon Seo, Seunghyeong Lee, Steven D. Scherf
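The abstract above turns on making a region of the displayed replacement content transparent so a device overlay (for example, a volume indicator) remains visible. The sketch below models a frame as an RGBA array and clears the alpha channel of a region; the coordinates and helper are hypothetical, not the patented method.

```python
# Illustrative sketch only: make a display region transparent so an overlay
# drawn underneath shows through the replacement content.
import numpy as np

def make_region_transparent(frame_rgba: np.ndarray, region, alpha: float) -> np.ndarray:
    """Set the alpha channel of the (x, y, w, h) region of an RGBA frame."""
    x, y, w, h = region
    out = frame_rgba.copy()
    out[y:y + h, x:x + w, 3] = int(alpha * 255)
    return out

# 1080p RGBA replacement frame, fully opaque.
replacement = np.full((1080, 1920, 4), 255, dtype=np.uint8)
# Region where the device draws its overlay (hypothetical coordinates).
overlay_region = (80, 900, 400, 60)
blended = make_region_transparent(replacement, overlay_region, alpha=0.0)
print(blended[900, 80])   # alpha 0 here, so the overlay underneath is visible
```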
  • Publication number: 20200004781
    Abstract: A multi-dimensional database and indexes and operations on the multi-dimensional database are described which include video search applications or other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval and discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrence of similar images in the database. During a sequence query, correlation scores are calculated for a single frame, for a frame sequence, and for video clips, or for other objects or structures.
    Type: Application
    Filed: June 14, 2019
    Publication date: January 2, 2020
    Inventors: Jose Pio Pereira, Mihailo M. Stojancic, Shashank Merchant
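To illustrate the kind of structure the abstract above describes (traversal indexes leading to leaf nodes that hold signatures and distance measures), here is a toy Python sketch. The 64-bit signatures, the "top bits as traversal hash" choice, and the Hamming-distance ranking are assumptions for illustration, not the patented index.

```python
# Illustrative sketch only: a toy signature index where a short traversal hash
# selects a leaf bucket of full signatures, and results are ranked by Hamming distance.
from collections import defaultdict

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

class SignatureIndex:
    def __init__(self, traversal_bits: int = 8):
        self.traversal_bits = traversal_bits
        self.leaves = defaultdict(list)   # traversal hash -> [(signature, content_id)]

    def _traversal_hash(self, signature: int) -> int:
        return signature >> (64 - self.traversal_bits)   # top bits as traversal index

    def add(self, signature: int, content_id: str) -> None:
        self.leaves[self._traversal_hash(signature)].append((signature, content_id))

    def query(self, signature: int, max_distance: int = 8):
        leaf = self.leaves.get(self._traversal_hash(signature), [])
        matches = [(hamming(signature, s), cid) for s, cid in leaf]
        return sorted(m for m in matches if m[0] <= max_distance)

index = SignatureIndex()
index.add(0xAB12_34CD_5678_9EF0, "clip_1")
print(index.query(0xAB12_34CD_5678_9EF1))   # -> [(1, 'clip_1')]
```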
  • Publication number: 20200004779
    Abstract: A multi-dimensional database and indexes and operations on the multi-dimensional database are described which include video search applications or other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval and discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrence of similar images in the database. During a sequence query, correlation scores are calculated for a single frame, for a frame sequence, and for video clips, or for other objects or structures.
    Type: Application
    Filed: June 14, 2019
    Publication date: January 2, 2020
    Inventors: Jose Pio Pereira, Mihailo M. Stojancic, Shashank Merchant
  • Publication number: 20200004782
    Abstract: A multi-dimensional database and indexes and operations on the multi-dimensional database are described which include video search applications or other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval and discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrence of similar images in the database. During a sequence query, correlation scores are calculated for a single frame, for a frame sequence, and for video clips, or for other objects or structures.
    Type: Application
    Filed: June 14, 2019
    Publication date: January 2, 2020
    Inventors: Jose Pio Pereira, Mihailo M. Stojancic, Shashank Merchant
  • Publication number: 20200004780
    Abstract: A multi-dimensional database and indexes and operations on the multi-dimensional database are described which include video search applications or other similar sequence or structure searches. Traversal indexes utilize highly discriminative information about images and video sequences or about object shapes. Global and local signatures around keypoints are used for compact and robust retrieval and discriminative information content of images or video sequences of interest. For other objects or structures, relevant signatures of pattern or structure are used for traversal indexes. Traversal indexes are stored in leaf nodes along with distance measures and occurrence of similar images in the database. During a sequence query, correlation scores are calculated for a single frame, for a frame sequence, and for video clips, or for other objects or structures.
    Type: Application
    Filed: June 14, 2019
    Publication date: January 2, 2020
    Inventors: Jose Pio Pereira, Mihailo M. Stojancic, Shashank Merchant
  • Publication number: 20190384786
    Abstract: The overall architecture and details of a scalable video fingerprinting and identification system that is robust with respect to many classes of video distortions is described. In this system, a fingerprint for a piece of multimedia content is composed of a number of compact signatures, along with traversal hash signatures and associated metadata. Numerical descriptors are generated for features found in a multimedia clip, signatures are generated from these descriptors, and a reference signature database is constructed from these signatures. Query signatures are also generated for a query multimedia clip. These query signatures are searched against the reference database using a fast similarity search procedure, to produce a candidate list of matching signatures. This candidate list is further analyzed to find the most likely reference matches. Signature correlation is performed between the likely reference matches and the query clip to improve detection accuracy.
    Type: Application
    Filed: August 28, 2019
    Publication date: December 19, 2019
    Inventors: Prashant Ramanathan, Jose Pio Pereira, Shashank Merchant, Mihailo M. Stojancic
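The abstract above ends with a signature-correlation step applied to the candidate list produced by the similarity search. The sketch below shows what such a step could look like, with signatures modeled as 64-bit integers; the similarity measure, acceptance threshold, and all names are assumptions for illustration, not the patented procedure.

```python
# Illustrative sketch only: correlate query signatures against each candidate
# reference from the similarity search and keep the best-scoring match.
def bit_similarity(a: int, b: int) -> float:
    return 1.0 - bin(a ^ b).count("1") / 64.0

def correlate(query_sigs, reference_sigs):
    """Average per-frame signature similarity between aligned sequences."""
    n = min(len(query_sigs), len(reference_sigs))
    return sum(bit_similarity(q, r) for q, r in zip(query_sigs[:n], reference_sigs[:n])) / n

def best_match(query_sigs, candidates, accept=0.85):
    scored = [(correlate(query_sigs, sigs), name) for name, sigs in candidates.items()]
    score, name = max(scored)
    return (name, score) if score >= accept else (None, score)

query = [0xFFFF0000FFFF0000, 0x1234ABCD1234ABCD]
candidates = {
    "ref_clip_A": [0xFFFF0000FFFF0001, 0x1234ABCD1234ABCF],
    "ref_clip_B": [0x0F0F0F0F0F0F0F0F, 0xAAAAAAAABBBBBBBB],
}
print(best_match(query, candidates))   # -> ('ref_clip_A', 0.984375)
```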
  • Publication number: 20190387273
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 19, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
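The abstract above mentions global onset detection for aligning captured audio frames before fingerprinting. The sketch below is a simple energy-ratio onset detector of the general kind that could serve that purpose; the frame size, threshold, and toy signal are hypothetical, and this is not the patented detector.

```python
# Illustrative sketch only: flag frames whose energy jumps sharply over the
# previous frame, as a crude onset cue for aligning captured audio.
import numpy as np

def detect_onsets(samples: np.ndarray, frame_size: int = 512, ratio: float = 2.0):
    """Return indices of frames whose energy exceeds the previous frame's by `ratio`."""
    n_frames = len(samples) // frame_size
    frames = samples[: n_frames * frame_size].reshape(n_frames, frame_size)
    energy = (frames ** 2).sum(axis=1) + 1e-12
    return [i for i in range(1, n_frames) if energy[i] / energy[i - 1] > ratio]

# Toy signal: near-silence, then a loud burst halfway through.
rng = np.random.default_rng(0)
signal = np.concatenate([0.01 * rng.standard_normal(4096), 0.5 * rng.standard_normal(4096)])
print(detect_onsets(signal))   # onset reported at frame 8, where the burst begins
```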
  • Publication number: 20190379927
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190379931
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Daniel H. Eakins, Shashank Merchant, Prashant Ramanathan, Jose Pio Pereira
  • Publication number: 20190379928
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190379929
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Daniel H. Eakins, Shashank Merchant, Prashant Ramanathan, Jose Pio Pereira
  • Publication number: 20190379930
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 12, 2019
    Inventors: Mihailo M. Stojancic, Daniel H. Eakins, Shashank Merchant, Prashant Ramanathan, Jose Pio Pereira
  • Publication number: 20190373312
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 5, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Publication number: 20190373311
    Abstract: A mobile device responds in real time to media content presented on a media device, such as a television. The mobile device captures temporal fragments of audio-video content on its microphone, camera, or both and generates corresponding audio-video query fingerprints. The query fingerprints are transmitted to a search server located remotely or used with a search function on the mobile device for content search and identification. Audio features are extracted and audio signal global onset detection is used for input audio frame alignment. Additional audio feature signatures are generated from local audio frame onsets, audio frame frequency domain entropy, and maximum change in the spectral coefficients. Video frames are analyzed to find a television screen in the frames, and a detected active television quadrilateral is used to generate video fingerprints to be combined with audio fingerprints for more reliable content identification.
    Type: Application
    Filed: June 14, 2019
    Publication date: December 5, 2019
    Inventors: Mihailo M. Stojancic, Sunil Suresh Kulkarni, Shashank Merchant, Jose Pio Pereira, Oleksiy Bolgarov
  • Patent number: 10451952
    Abstract: Example systems and methods to transform events and/or mood associated with playing media into lighting effects are disclosed herein. An example apparatus includes a content identifier to identify a first event occurring during presentation of media content at a first time. The example apparatus includes a content driven analyzer to determine a first lighting effect to be produced by a light-producing device based on the first event and instruct the light-producing device to produce the first lighting effect based on the first event during presentation of the media content. The content identifier is to identify a second media event occurring during presentation of the media content at a second time after the first time. The content driven analyzer is to instruct the light-producing device to one of maintain the first lighting effect based on the second event or produce a second lighting effect based on the second event during presentation of the media content.
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: October 22, 2019
    Assignee: GRACENOTE, INC.
    Inventors: Markus Kurt Cremer, Shashank Merchant, Aneesh Vartakavi