Patents by Inventor Kent Bennett Terry

Kent Bennett Terry has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10015612
    Abstract: Some methods may involve receiving a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels, receiving metadata associated with the block of audio data and receiving a first set of values corresponding to reference audio samples. A second set of values, corresponding to audio samples from the block of audio data, may be determined. The first and second set of values may be compared. Based on the comparison, it may be determined whether the block of audio data is synchronized with the metadata.
    Type: Grant
    Filed: May 23, 2017
    Date of Patent: July 3, 2018
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Kent Bennett Terry, Scott Gregory Norcross, Jeffrey Riedmiller
  • Publication number: 20170347215
    Abstract: Some methods may involve receiving a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels, receiving metadata associated with the block of audio data and receiving a first set of values corresponding to reference audio samples. A second set of values, corresponding to audio samples from the block of audio data, may be determined. The first and second set of values may be compared. Based on the comparison, it may be determined whether the block of audio data is synchronized with the metadata.
    Type: Application
    Filed: May 23, 2017
    Publication date: November 30, 2017
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Kent Bennett Terry, Scott Gregory Norcross, Jeffrey Riedmiller
  • Patent number: 8626504
    Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: January 7, 2014
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
  • Patent number: 8400566
    Abstract: Features are extracted from video and audio content that have a known temporal relationship with one another. The extracted features are used to generate video and audio signatures, which are assembled with an indication of the temporal relationship into a synchronization signature construct. The construct may be used to calculate synchronization errors between video and audio content received at a remote destination. Measures of confidence are generated at the remote destination to optimize processing and to provide an indication of reliability of the calculated synchronization error.
    Type: Grant
    Filed: August 17, 2009
    Date of Patent: March 19, 2013
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Kent Bennett Terry, Regunathan Radhakrishnan
  • Publication number: 20130064416
    Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
    Type: Application
    Filed: August 30, 2012
    Publication date: March 14, 2013
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
  • Patent number: 8259806
    Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
    Type: Grant
    Filed: November 29, 2007
    Date of Patent: September 4, 2012
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
  • Publication number: 20110261257
    Abstract: Features are extracted from video and audio content that have a known temporal relationship with one another. The extracted features are used to generate video and audio signatures, which are assembled with an indication of the temporal relationship into a synchronization signature construct. The construct may be used to calculate synchronization errors between video and audio content received at a remote destination. Measures of confidence are generated at the remote destination to optimize processing and to provide an indication of reliability of the calculated synchronization error.
    Type: Application
    Filed: August 17, 2009
    Publication date: October 27, 2011
    Applicant: Dolby Laboratories Licensing Corporation
    Inventors: Kent Bennett Terry, Regunathan Radhakrishnan
  • Publication number: 20090304082
    Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
    Type: Application
    Filed: November 29, 2007
    Publication date: December 10, 2009
    Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
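The general idea behind the first abstract (patent 10015612) — deriving comparison values from reference audio samples and from a block of PCM samples, then matching them to decide whether metadata is still aligned — can be sketched as follows. This is an illustrative assumption-laden sketch, not the patented method: the function names, the 8-bit quantization, and the truncated SHA-256 signature are all invented here for demonstration.

```python
import hashlib

def block_signature(samples, n_bytes=8):
    """Reduce a block of PCM samples (floats in [-1, 1]) to a short
    comparison value by quantizing to bytes and hashing.
    (Hypothetical scheme; the patent does not specify this.)"""
    quantized = bytes((int(s * 127) & 0xFF) for s in samples)
    return hashlib.sha256(quantized).digest()[:n_bytes]

def metadata_in_sync(reference_samples, block_samples):
    """Compare a value derived from reference samples against a value
    derived from the received block; equality suggests the block is
    still aligned with its metadata."""
    return block_signature(reference_samples) == block_signature(block_samples)
```

In this toy version, any drift that changes the sample values in the block produces a different signature, so the comparison fails and a synchronization error can be flagged.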
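The signature-generation abstracts (the 8259806/8626504 family) describe computing dissimilarity measures between features of corresponding pixel groups in video frames and applying a hash function to the intermediate values. A minimal sketch of that pattern, under stated assumptions — 2×2 pixel-group means as the features, absolute difference as the dissimilarity measure, and SHA-256 as the hash, none of which come from the patents themselves:

```python
import hashlib

def group_means(frame, group=2):
    """Average each non-overlapping group x group block of pixels in a
    2-D list of intensities. (Illustrative feature choice.)"""
    h, w = len(frame), len(frame[0])
    return [
        sum(frame[r + i][c + j] for i in range(group) for j in range(group))
        / (group * group)
        for r in range(0, h, group)
        for c in range(0, w, group)
    ]

def video_signature(frame_a, frame_b):
    """Hash dissimilarity measures between corresponding pixel groups
    of two frames into a compact signature string."""
    dissim = [abs(a - b) for a, b in zip(group_means(frame_a), group_means(frame_b))]
    intermediate = bytes(min(255, int(d)) for d in dissim)
    return hashlib.sha256(intermediate).hexdigest()
```

Because the signature is derived from coarse group-level dissimilarities rather than raw pixels, small modifications to the content leave many measures unchanged, which is the property the abstracts rely on for identifying copies despite modification; a cryptographic hash is only one possible final step.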