Patents by Inventor Gautham Mysore

Gautham Mysore has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210304799
    Abstract: Certain embodiments involve transcript-based techniques for facilitating insertion of secondary video content into primary video content. For instance, a video editor presents a video editing interface having a primary video section displaying a primary video, a text-based navigation section having navigable portions of a primary video transcript, and a secondary video menu section displaying candidate secondary videos. In some embodiments, candidate secondary videos are obtained by using target terms detected in the transcript to query a remote data source for the candidate secondary videos. In embodiments involving video insertion, the video editor identifies a portion of the primary video corresponding to a portion of the transcript selected within the text-based navigation section. The video editor inserts a secondary video, which is selected from the candidate secondary videos based on an input received at the secondary video menu section, at the identified portion of the primary video.
    Type: Application
    Filed: June 11, 2021
    Publication date: September 30, 2021
    Inventors: Bernd Huber, Bryan Russell, Gautham Mysore, Hijung Valentina Shin, Oliver Wang
  • Patent number: 11049525
    Abstract: Certain embodiments involve transcript-based techniques for facilitating insertion of secondary video content into primary video content. For instance, a video editor presents a video editing interface having a primary video section displaying a primary video, a text-based navigation section having navigable portions of a primary video transcript, and a secondary video menu section displaying candidate secondary videos. In some embodiments, candidate secondary videos are obtained by using target terms detected in the transcript to query a remote data source for the candidate secondary videos. In embodiments involving video insertion, the video editor identifies a portion of the primary video corresponding to a portion of the transcript selected within the text-based navigation section. The video editor inserts a secondary video, which is selected from the candidate secondary videos based on an input received at the secondary video menu section, at the identified portion of the primary video.
    Type: Grant
    Filed: February 21, 2019
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventors: Bernd Huber, Bryan Russell, Gautham Mysore, Hijung Valentina Shin, Oliver Wang
  • Publication number: 20200273493
    Abstract: Certain embodiments involve transcript-based techniques for facilitating insertion of secondary video content into primary video content. For instance, a video editor presents a video editing interface having a primary video section displaying a primary video, a text-based navigation section having navigable portions of a primary video transcript, and a secondary video menu section displaying candidate secondary videos. In some embodiments, candidate secondary videos are obtained by using target terms detected in the transcript to query a remote data source for the candidate secondary videos. In embodiments involving video insertion, the video editor identifies a portion of the primary video corresponding to a portion of the transcript selected within the text-based navigation section. The video editor inserts a secondary video, which is selected from the candidate secondary videos based on an input received at the secondary video menu section, at the identified portion of the primary video.
    Type: Application
    Filed: February 21, 2019
    Publication date: August 27, 2020
    Inventors: Bernd Huber, Bryan Russell, Gautham Mysore, Hijung Valentina Shin, Oliver Wang
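
The three listings above describe the same transcript-driven insertion technique. As a minimal, hypothetical sketch of the core mapping step (not the patented implementation or any Adobe API), the code below uses word-level timestamps to tie a selected span of the transcript to a time range in the primary video, then splices a chosen secondary clip in at that range. The Word and Clip structures and the edit-list representation are illustrative assumptions.

```python
# Hypothetical sketch: map a selected transcript span to a time range in the
# primary video and splice a secondary clip in at that point.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Word:
    text: str
    start: float  # seconds in the primary video
    end: float

@dataclass
class Clip:
    source: str   # file path or URL of a video
    start: float
    end: float

def span_to_time_range(transcript: List[Word], first: int, last: int) -> Tuple[float, float]:
    """Return the primary-video time range covered by transcript words [first, last]."""
    return transcript[first].start, transcript[last].end

def insert_secondary(primary: Clip, transcript: List[Word],
                     first: int, last: int, secondary: Clip) -> List[Clip]:
    """Build an edit list: primary video up to the selected span, the secondary
    clip, then the remainder of the primary video."""
    t0, t1 = span_to_time_range(transcript, first, last)
    return [
        Clip(primary.source, primary.start, t0),  # primary before the selection
        secondary,                                # inserted candidate video
        Clip(primary.source, t1, primary.end),    # primary after the selection
    ]
```

In the claimed workflow the candidate clips come from querying a remote data source with target terms detected in the transcript; here the secondary clip is simply passed in.
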
  • Patent number: 10453475
    Abstract: In some aspects, errors are replaced within an audio file by receiving a first audio sequence and a second audio sequence. The first audio sequence includes an erroneous subsequence and the second audio sequence includes a corrected subsequence for inclusion in the first audio sequence to replace the erroneous subsequence. The location of the erroneous subsequence in the first audio sequence is determined by applying a suitable matching operation (e.g., dynamic time warping). One or more subsequences of the first audio sequence that are located proximate to the erroneous subsequence and that match corresponding subsequences of the second audio sequence located proximate to the corrected subsequence are identified. A corrected first audio sequence is generated by replacing the erroneous subsequence and a matching subsequence of the first audio sequence with the corrected subsequence and the matching corresponding subsequence of the second audio sequence.
    Type: Grant
    Filed: February 14, 2017
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Shrikant Venkataramani, Paris Smaragdis, Gautham Mysore
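
As a rough illustration of the alignment idea in the abstract above (not the patented method itself), the sketch below computes spectrogram frames for the original and re-recorded takes and aligns them with a textbook dynamic-time-warping pass; the warping path indicates which frames around the erroneous region correspond, which is what determines the splice points. The frame size and feature choice are assumptions for illustration.

```python
# Sketch: align two takes with dynamic time warping over spectrogram frames.
import numpy as np
from scipy.signal import stft

def frame_features(audio: np.ndarray, sr: int) -> np.ndarray:
    """Log-magnitude STFT frames, shape (n_frames, n_bins)."""
    _, _, Z = stft(audio, fs=sr, nperseg=1024, noverlap=512)
    return np.log1p(np.abs(Z)).T

def dtw_path(A: np.ndarray, B: np.ndarray):
    """Classic DTW over two frame sequences; returns the warping path as
    a list of (frame_in_A, frame_in_B) index pairs."""
    n, m = len(A), len(B)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(A[i - 1] - B[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) toward (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```
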
  • Patent number: 10446123
    Abstract: Embodiments of the present invention relate to automatically identifying structures of a music stream. A segment structure may be generated that visually indicates repeating segments of a music stream. To generate a segment structure, a feature that corresponds to a music attribute is extracted from a waveform corresponding to the music stream, such as an input signal. Utilizing a signal segmentation algorithm, such as a Variable Markov Oracle (VMO) algorithm, a symbolized signal, such as a VMO structure, is generated. From the symbolized signal, a matrix is generated. The matrix may be, for instance, a VMO self-similarity matrix (VMO-SSM). A segment structure is then generated from the matrix. The segment structure illustrates a segmentation of the music stream and indicates which segments are repetitive.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: October 15, 2019
    Assignee: Adobe Inc.
    Inventors: Cheng-i Wang, Gautham Mysore
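
The abstract above centers on building a self-similarity matrix from a symbolized signal. The sketch below substitutes a plain cosine-similarity matrix over audio feature frames for the Variable Markov Oracle construction, purely to illustrate how repeating segments show up as strong off-diagonal stripes; it is not the VMO-SSM of the patent, and the threshold is an assumption.

```python
# Sketch: a plain self-similarity matrix in place of the VMO-SSM.
import numpy as np

def self_similarity(features: np.ndarray) -> np.ndarray:
    """Cosine self-similarity matrix for features of shape (n_frames, n_dims)."""
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    return norm @ norm.T

def repeated_frames(ssm: np.ndarray, lag: int, threshold: float = 0.9) -> np.ndarray:
    """Boolean mask of frames that closely repeat `lag` frames later, i.e.
    frames lying on a strong off-diagonal stripe of the similarity matrix."""
    diag = np.diagonal(ssm, offset=lag)
    return diag > threshold
```
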
  • Publication number: 20180374459
    Abstract: Embodiments of the present invention relate to automatically identifying structures of a music stream. A segment structure may be generated that visually indicates repeating segments of a music stream. To generate a segment structure, a feature that corresponds to a music attribute is extracted from a waveform corresponding to the music stream, such as an input signal. Utilizing a signal segmentation algorithm, such as a Variable Markov Oracle (VMO) algorithm, a symbolized signal, such as a VMO structure, is generated. From the symbolized signal, a matrix is generated. The matrix may be, for instance, a VMO self-similarity matrix (VMO-SSM). A segment structure is then generated from the matrix. The segment structure illustrates a segmentation of the music stream and indicates which segments are repetitive.
    Type: Application
    Filed: August 7, 2018
    Publication date: December 27, 2018
    Inventors: Cheng-i Wang, Gautham Mysore
  • Patent number: 10079028
    Abstract: Embodiments of the present invention relate to enhancing sound through reverberation matching. In some implementations, a first sound recording recorded in a first environment is received. The first sound recording is decomposed into a first clean signal and a first reverb kernel. A second reverb kernel corresponding with a second sound recording recorded in a second environment is accessed, for example, based on a user indication to enhance the first sound recording to sound as though recorded in the second environment. An enhanced sound recording is generated based on the first clean signal and the second reverb kernel. The enhanced sound recording is a modification of the first sound recording to sound as though recorded in the second environment.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: September 18, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Ramin Anushiravani, Paris Smaragdis, Gautham Mysore
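
Once the decomposition described in the abstract above is available, the re-synthesis step reduces to a convolution: the estimated clean signal is convolved with the second environment's reverb kernel. The sketch below shows only that step and assumes the clean signal and kernel have already been estimated; the blind decomposition itself is the substantive part of the invention and is not reproduced here.

```python
# Sketch: render a dry signal as though recorded in another environment.
import numpy as np
from scipy.signal import fftconvolve

def apply_reverb(clean: np.ndarray, reverb_kernel: np.ndarray) -> np.ndarray:
    """Convolve the estimated clean (dry) signal with the target environment's
    reverb kernel and normalize the result to avoid clipping."""
    wet = fftconvolve(clean, reverb_kernel, mode="full")
    return wet / (np.max(np.abs(wet)) + 1e-9)
```
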
  • Patent number: 10074350
    Abstract: Embodiments of the present invention relate to automatically identifying structures of a music stream. A segment structure may be generated that visually indicates repeating segments of a music stream. To generate a segment structure, a feature that corresponds to a music attribute is extracted from a waveform corresponding to the music stream, such as an input signal. Utilizing a signal segmentation algorithm, such as a Variable Markov Oracle (VMO) algorithm, a symbolized signal, such as a VMO structure, is generated. From the symbolized signal, a matrix is generated. The matrix may be, for instance, a VMO self-similarity matrix (VMO-SSM). A segment structure is then generated from the matrix. The segment structure illustrates a segmentation of the music stream and indicates which segments are repetitive.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: September 11, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Cheng-i Wang, Gautham Mysore
  • Publication number: 20180233162
    Abstract: In some aspects, errors are replaced within an audio file by receiving a first audio sequence and a second audio sequence. The first audio sequence includes an erroneous subsequence and the second audio sequence includes a corrected subsequence for inclusion in the first audio sequence to replace the erroneous subsequence. The location of the erroneous subsequence in the first audio sequence is determined by applying a suitable matching operation (e.g., dynamic time warping). One or more subsequences of the first audio sequence that are located proximate to the erroneous subsequence and that match corresponding subsequences of the second audio sequence located proximate to the corrected subsequence are identified. A corrected first audio sequence is generated by replacing the erroneous subsequence and a matching subsequence of the first audio sequence with the corrected subsequence and the matching corresponding subsequence of the second audio sequence.
    Type: Application
    Filed: February 14, 2017
    Publication date: August 16, 2018
    Inventors: Shrikant Venkataramani, Paris Smaragdis, Gautham Mysore
  • Patent number: 9734844
    Abstract: Embodiments of the present invention relate to detecting irregularities in audio, such as music. An input signal corresponding to an audio stream is received. The input signal is transformed from a time domain into a frequency domain to generate a plurality of frames that each comprises frequency information for a portion of the input signal. An irregular event in a portion of the input signal corresponding to a set of frames in the plurality of frames is identified based on a comparison of frequency information of the set of frames to the frequency information of other sets of frames of the plurality of frames. This allows an indication of the irregular event to be provided, or for the input signal to be automatically synchronized to a multimedia event.
    Type: Grant
    Filed: November 23, 2015
    Date of Patent: August 15, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Minje Kim, Gautham Mysore, Peter Merrill, Paris Smaragdis
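
As a simplified stand-in for the frame-comparison idea in the abstract above, the sketch below transforms the signal into STFT frames and flags frames whose spectral energy is a statistical outlier relative to the rest of the recording. The median/MAD scoring and the frame parameters are illustrative choices, not the comparison claimed in the patent.

```python
# Sketch: flag spectrally unusual frames as candidate irregular events.
import numpy as np
from scipy.signal import stft

def irregular_frames(audio: np.ndarray, sr: int, z_thresh: float = 4.0) -> np.ndarray:
    """Indices of STFT frames whose spectral energy is an outlier relative to
    the rest of the signal (a simple stand-in for 'irregular events')."""
    _, _, Z = stft(audio, fs=sr, nperseg=2048, noverlap=1024)
    frame_energy = np.log1p(np.abs(Z)).sum(axis=0)          # one value per frame
    med = np.median(frame_energy)
    mad = np.median(np.abs(frame_energy - med)) + 1e-9      # robust spread
    z = np.abs(frame_energy - med) / mad
    return np.nonzero(z > z_thresh)[0]
```
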
  • Publication number: 20170162213
    Abstract: Embodiments of the present invention relate to enhancing sound through reverberation matching. In some implementations, a first sound recording recorded in a first environment is received. The first sound recording is decomposed into a first clean signal and a first reverb kernel. A second reverb kernel corresponding with a second sound recording recorded in a second environment is accessed, for example, based on a user indication to enhance the first sound recording to sound as though recorded in the second environment. An enhanced sound recording is generated based on the first clean signal and the second reverb kernel. The enhanced sound recording is a modification of the first sound recording to sound as though recorded in the second environment.
    Type: Application
    Filed: December 8, 2015
    Publication date: June 8, 2017
    Inventors: Ramin Anushiravani, Paris Smaragdis, Gautham Mysore
  • Publication number: 20170148424
    Abstract: Embodiments of the present invention relate to automatically identifying structures of a music stream. A segment structure may be generated that visually indicates repeating segments of a music stream. To generate a segment structure, a feature that corresponds to a music attribute is extracted from a waveform corresponding to the music stream, such as an input signal. Utilizing a signal segmentation algorithm, such as a Variable Markov Oracle (VMO) algorithm, a symbolized signal, such as a VMO structure, is generated. From the symbolized signal, a matrix is generated. The matrix may be, for instance, a VMO self-similarity matrix (VMO-SSM). A segment structure is then generated from the matrix. The segment structure illustrates a segmentation of the music stream and indicates which segments are repetitive.
    Type: Application
    Filed: November 23, 2015
    Publication date: May 25, 2017
    Inventors: Cheng-i Wang, Gautham Mysore
  • Publication number: 20170148468
    Abstract: Embodiments of the present invention relate to detecting irregularities in audio, such as music. An input signal corresponding to an audio stream is received. The input signal is transformed from a time domain into a frequency domain to generate a plurality of frames that each comprises frequency information for a portion of the input signal. An irregular event in a portion of the input signal corresponding to a set of frames in the plurality of frames is identified based on a comparison of frequency information of the set of frames to the frequency information of other sets of frames of the plurality of frames. This allows an indication of the irregular event to be provided, or for the input signal to be automatically synchronized to a multimedia event.
    Type: Application
    Filed: November 23, 2015
    Publication date: May 25, 2017
    Inventors: Minje Kim, Gautham Mysore, Peter Merrill, Paris Smaragdis