Patents by Inventor Regunathan Radhakrishnan
Regunathan Radhakrishnan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8488061
Abstract: A signature that can be used to identify video content in a series of video frames is generated by first calculating the average and variance of picture elements in a low-resolution composite image that represents a temporal and spatial composite of the video content in the series of frames. The signature is generated by applying a hash function to values derived from the average and variance composite representations. The video content of a signal can be represented by a set of signatures that are generated for multiple series of frames within the signal. A set of signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
Type: Grant
Filed: May 1, 2008
Date of Patent: July 16, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer
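The pipeline in this abstract (region statistics of a low-resolution composite, then a hash over the derived values) can be sketched as follows. This is a hypothetical illustration, not the patented method: the block partitioning, rounding, and choice of SHA-256 are illustrative assumptions.

```python
import hashlib
from statistics import mean, pvariance

def frame_signature(composite, num_blocks=4):
    """Illustrative sketch: derive a short signature from a low-resolution
    composite image, given here as a flat list of pixel intensities.
    Block count, rounding, and hash choice are assumptions for the example."""
    block = max(1, len(composite) // num_blocks)
    stats = []
    for i in range(0, len(composite), block):
        region = composite[i:i + block]
        # Average and variance of each region of the composite image.
        stats.append((round(mean(region), 2), round(pvariance(region), 2)))
    # Apply a hash function to the values derived from average and variance.
    digest = hashlib.sha256(repr(stats).encode()).hexdigest()
    return digest[:16]  # truncated hex digest serves as the signature

sig = frame_signature([12, 15, 11, 200, 198, 202, 90, 91])
```

Because the signature is computed from coarse statistics of a low-resolution composite rather than exact pixel values, moderate modifications to the content tend to leave it unchanged.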
-
Patent number: 8428301
Abstract: Content identification and quality monitoring are provided. The method involves obtaining a first fingerprint derived from a first media content, processing the first media content to generate a second media content, obtaining a second fingerprint derived from the second media content, and comparing the first fingerprint and the second fingerprint to determine one or more of: a similarity between the first fingerprint and the second fingerprint that indicates that the second media content is generated from the first media content or a difference between the first fingerprint and the second fingerprint to identify a quality degradation between the first media content and the second media content.
Type: Grant
Filed: August 21, 2009
Date of Patent: April 23, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Jeffrey Riedmiller, Claus Bauer, Wenyu Jiang
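The comparison step described here could be sketched with a bitwise distance between the two fingerprints, where a small distance signals that the second content was generated from the first and the distance itself acts as a rough degradation indicator. The Hamming distance and the threshold value are assumptions for illustration, not the claimed method.

```python
def hamming(fp_a, fp_b):
    """Bitwise Hamming distance between two equal-length byte fingerprints."""
    return sum(bin(a ^ b).count("1") for a, b in zip(fp_a, fp_b))

def compare_fingerprints(fp_a, fp_b, match_threshold=8):
    """Illustrative sketch: a small distance suggests the second content was
    generated from the first; the distance doubles as a crude proxy for
    quality degradation. The threshold is an arbitrary example value."""
    d = hamming(fp_a, fp_b)
    return {"match": d <= match_threshold, "degradation": d}
```

A transcode of the same content typically flips only a few fingerprint bits, while unrelated content flips about half of them, which is what makes a simple threshold workable.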
-
Patent number: 8406462
Abstract: Deriving a fingerprint of an image corresponding to media content involves selecting at least two different regions of the same image, determining a relationship between the two regions, and deriving a fingerprint of the image based on the relationship between the two regions of the image.
Type: Grant
Filed: August 17, 2009
Date of Patent: March 26, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer
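One common way to realize a region-relationship fingerprint is to emit one bit per pair of regions, set according to which region has the greater mean intensity. The mean-comparison relationship below is an assumed example; the patent covers relationships between regions generally.

```python
def region_fingerprint(image, region_pairs):
    """Illustrative sketch: `image` is a 2-D list of intensities and
    `region_pairs` is a list of ((r0, r1, c0, c1), (r0, r1, c0, c1)) pairs.
    Each pair yields one fingerprint bit, set when the first region's mean
    intensity exceeds the second's."""
    def region_mean(r0, r1, c0, c1):
        vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        return sum(vals) / len(vals)
    return [1 if region_mean(*a) > region_mean(*b) else 0
            for a, b in region_pairs]
```

Relative comparisons like this survive global brightness and contrast changes, since both regions shift together.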
-
Patent number: 8400566
Abstract: Features are extracted from video and audio content that have a known temporal relationship with one another. The extracted features are used to generate video and audio signatures, which are assembled with an indication of the temporal relationship into a synchronization signature construct. The construct may be used to calculate synchronization errors between video and audio content received at a remote destination. Measures of confidence are generated at the remote destination to optimize processing and to provide an indication of reliability of the calculated synchronization error.
Type: Grant
Filed: August 17, 2009
Date of Patent: March 19, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Kent Bennett Terry, Regunathan Radhakrishnan
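The synchronization signature construct pairs the two signatures with their known temporal relationship, and the error at the destination is the difference between the offset measured there and the offset recorded in the construct. The container and field names below are assumptions for illustration.

```python
from collections import namedtuple

# Hypothetical construct: video/audio signatures plus their known temporal
# relationship (audio offset relative to video, in milliseconds).
SyncSignature = namedtuple("SyncSignature", "video_sig audio_sig av_offset_ms")

def sync_error(reference, measured_offset_ms):
    """Illustrative sketch: the synchronization error at a remote destination
    is the offset measured from the received content minus the offset that
    was recorded in the signature construct at the source."""
    return measured_offset_ms - reference.av_offset_ms
```

For example, if the construct records a 40 ms audio lag but the received streams show a 65 ms lag, the transmission chain introduced 25 ms of drift.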
-
Publication number: 20130064416
Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
Type: Application
Filed: August 30, 2012
Publication date: March 14, 2013
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
-
Patent number: 8392179
Abstract: The invention relates to the coding of audio signals that may include both speech-like and non-speech-like signal components. It describes methods and apparatus for code excited linear prediction (CELP) audio encoding and decoding that employ linear predictive coding (LPC) synthesis filters controlled by LPC parameters, a plurality of codebooks each having codevectors, at least one codebook providing an excitation more appropriate for non-speech-like signals and at least one codebook providing an excitation more appropriate for speech-like signals, and a plurality of gain factors, each associated with a codebook. The encoding methods and apparatus select from the codebooks codevectors and/or associated gain factors by minimizing a measure of the difference between the audio signal and a reconstruction of the audio signal derived from the codebook excitations. The decoding methods and apparatus generate a reconstructed output signal from the LPC parameters, codevectors, and gain factors.
Type: Grant
Filed: March 12, 2009
Date of Patent: March 5, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Rongshan Yu, Regunathan Radhakrishnan, Robert Andersen, Grant Davidson
-
Patent number: 8351643
Abstract: Quantized energy values are accessed to initially represent a temporally related group of content elements in a media sequence. The values are accessed over a matrix of regions into which the initial representation is partitioned. The initial representation may be downsampled and/or cropped from the content. A basis vector set is estimated in a dimensional space from the values. The initial representation is transformed into a subsequent representation, which is in another dimensional space. The subsequent representation projects the initial representation, based on the basis vectors. The subsequent representation reliably corresponds to the media content portion over a change in a geometric orientation thereof. Repeated for other media content portions of the group, subsequent representations of the first and other portions are averaged or transformed over time. The averaged/transformed values reliably correspond to the content portion over speed changes.
Type: Grant
Filed: October 6, 2008
Date of Patent: January 8, 2013
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer
-
Patent number: 8316011
Abstract: A value is computed for a feature in an instance of query content and compared to a threshold value. Based on the comparison, first and second bits in a hash value, which is derived from the query content feature, are determined. Conditional probability values are computed for the likelihood that quantized values of the first and the second bits equal corresponding quantized bit values of a target or reference feature value. The conditional probabilities are compared and a relative strength determined for the first and second bits, which directly corresponds to the conditional probability. The bit with the lowest bit strength is selected as the weakbit. The value of the weakbit is toggled to generate a variation of the query hash value. The query may be extended using the query hash value variation.
Type: Grant
Filed: June 30, 2011
Date of Patent: November 20, 2012
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Junfeng He, Regunathan Radhakrishnan, Wenyu Jiang
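The weakbit idea can be sketched simply: hash bits come from thresholding feature values, and the bit whose feature sits closest to its threshold is the least reliable, so toggling it yields a useful query variant. Using the margin from the threshold as the bit-strength measure is an assumed simplification of the conditional-probability comparison described in the abstract.

```python
def weakbit_variant(feature_values, thresholds):
    """Illustrative sketch of weakbit query extension: each hash bit is 1
    when its feature exceeds its threshold. Bit strength is approximated
    here by the margin from the threshold (a stand-in for the conditional
    probabilities in the abstract). The weakest bit is toggled to produce
    a hash variation for extending the query."""
    bits = [1 if v > t else 0 for v, t in zip(feature_values, thresholds)]
    margins = [abs(v - t) for v, t in zip(feature_values, thresholds)]
    weak = margins.index(min(margins))   # least reliable bit position
    variant = bits[:]
    variant[weak] ^= 1                   # toggle the weakbit
    return bits, variant
```

Searching with both the original hash and the variant recovers matches that a single noisy bit would otherwise have excluded.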
-
Patent number: 8259806
Abstract: Signatures that can be used to identify video and audio content are generated from the content by generating measures of dissimilarity between features of corresponding groups of pixels in frames of video content and by generating low-resolution time-frequency representations of audio segments. The signatures are generated by applying a hash function to intermediate values derived from the measures of dissimilarity and to the low-resolution time-frequency representations. The generated signatures may be used in a variety of applications such as restoring synchronization between video and audio content streams and identifying copies of original video and audio content. The generated signatures can provide reliable identifications despite intentional and unintentional modifications to the content.
Type: Grant
Filed: November 29, 2007
Date of Patent: September 4, 2012
Assignee: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer, Kent Bennett Terry, Brian David Link, Hyung-Suk Kim, Eric Gsell
-
Publication number: 20120215329
Abstract: Techniques for re-associating dynamic metadata with media data are provided. A media processing system creates, with a first media processing stage, binding information comprising dynamic metadata and a time relationship between the dynamic metadata and media data. The binding information may be derived from the media data. While the first media processing stage delivers the media data to a second media processing stage in a first data path, the first media processing stage passes the binding information to the second media processing stage in a second data path. The media processing system re-associates, with the second media processing stage, the dynamic metadata and the media data using the binding information.
Type: Application
Filed: February 22, 2012
Publication date: August 23, 2012
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Wenyu Jiang, Regunathan Radhakrishnan, Claus Bauer
-
Publication number: 20120201386
Abstract: Metadata comprising a set of gain values for creating a dominance effect is automatically generated. Automatically generating the metadata includes receiving multiple audio streams and a dominance criterion for at least one of the audio streams. A set of gains is computed for one or more audio streams based on the dominance criterion for the at least one audio stream and metadata is generated with the set of gains.
Type: Application
Filed: October 5, 2010
Publication date: August 9, 2012
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Jeffrey C. Riedmiller, Regunathan Radhakrishnan, Hannes Muesch
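A dominance criterion might, for instance, require the dominant stream to lead the others by a fixed level margin; the gains then attenuate every non-dominant stream as needed. The margin-based criterion below is an assumed example, not the criterion claimed in the publication.

```python
def dominance_gains(levels_db, dominant_index, margin_db=9.0):
    """Illustrative sketch: compute per-stream gains (in dB) so the dominant
    stream leads every other stream by at least `margin_db`. Streams already
    below the target level are left untouched (gain 0 dB). The margin value
    is an arbitrary example, not the patented dominance criterion."""
    target = levels_db[dominant_index] - margin_db
    return [0.0 if i == dominant_index else min(0.0, target - lvl)
            for i, lvl in enumerate(levels_db)]
```

These gains, packaged as metadata alongside the streams, let a downstream mixer create the dominance effect without re-encoding the audio itself.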
-
Publication number: 20120054194
Abstract: Attributes are identified in media content. A classification value of the media content is computed based on the identified attributes. Thereafter, a fingerprint derived from the media content is stored or searched for based on the classification value of the media content.
Type: Application
Filed: May 5, 2010
Publication date: March 1, 2012
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Tianshi Gao, Regunathan Radhakrishnan, Wenyu Jiang, Claus Bauer
-
Publication number: 20120011128
Abstract: A value is computed for a feature in an instance of query content and compared to a threshold value. Based on the comparison, first and second bits in a hash value, which is derived from the query content feature, are determined. Conditional probability values are computed for the likelihood that quantized values of the first and the second bits equal corresponding quantized bit values of a target or reference feature value. The conditional probabilities are compared and a relative strength determined for the first and second bits, which directly corresponds to the conditional probability. The bit with the lowest bit strength is selected as the weakbit. The value of the weakbit is toggled to generate a variation of the query hash value. The query may be extended using the query hash value variation.
Type: Application
Filed: June 30, 2011
Publication date: January 12, 2012
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Junfeng He, Regunathan Radhakrishnan, Wenyu Jiang
-
Publication number: 20110299721
Abstract: Multiple candidate feature components of media content or projection matrices (or other hash functions, e.g., non-linear projections) are identified. Each of the candidate projection matrices (or other hash functions) includes an array of coefficients that relate to the candidate features. A subgroup of the candidate features or the projection matrices (or other hash functions) are selected based at least partially on an optimized combination of at least two characteristics of the candidate features or projection matrices (or other hash functions). Media fingerprints that uniquely identify the media content are derived from the selected optimized subgroup. Optimal projection matrices (or other hash functions) may be designed. Performance or sensitivity (e.g., search time) characteristics of the fingerprints are thus balanced with robustness characteristics thereof.
Type: Application
Filed: May 25, 2011
Publication date: December 8, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Junfeng He, Regunathan Radhakrishnan, Claus Bauer
-
Publication number: 20110268315
Abstract: Derivation of a fingerprint includes generating feature matrices based on one or more training images, generating projection matrices based on the feature matrices in a training process, and deriving a fingerprint for one or more images by, at least in part, projecting a feature matrix based on the one or more images onto the projection matrices generated in the training process.
Type: Application
Filed: January 7, 2010
Publication date: November 3, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Claus Bauer, Regunathan Radhakrishnan, Wenyu Jiang, Glenn N. Dickins
-
Publication number: 20110261257
Abstract: Features are extracted from video and audio content that have a known temporal relationship with one another. The extracted features are used to generate video and audio signatures, which are assembled with an indication of the temporal relationship into a synchronization signature construct. The construct may be used to calculate synchronization errors between video and audio content received at a remote destination. Measures of confidence are generated at the remote destination to optimize processing and to provide an indication of reliability of the calculated synchronization error.
Type: Application
Filed: August 17, 2009
Publication date: October 27, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Kent Bennett Terry, Regunathan Radhakrishnan
-
Publication number: 20110216937
Abstract: A portion of media content is accessed. Components from a first and each subsequent spatial regions of the media content are sampled. Each spatial region has an unsegmented area. Each subsequent spatial region includes those within its area as elements thereof or the spatial regions may partially overlap. The regions may overlap independent of a hierarchical relationship between the regions. A media fingerprint is derived from the components of each of the spatial regions, which reliably corresponds to the media content portion, e.g., over geometric attacks such as rotation.
Type: Application
Filed: November 17, 2009
Publication date: September 8, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer
-
Publication number: 20110188704
Abstract: Content identification and quality monitoring are provided. The method involves obtaining a first fingerprint derived from a first media content, processing the first media content to generate a second media content, obtaining a second fingerprint derived from the second media content, and comparing the first fingerprint and the second fingerprint to determine one or more of: a similarity between the first fingerprint and the second fingerprint that indicates that the second media content is generated from the first media content or a difference between the first fingerprint and the second fingerprint to identify a quality degradation between the first media content and the second media content.
Type: Application
Filed: August 21, 2009
Publication date: August 4, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Jeffrey Riedmiller, Claus Bauer, Wenyu Jiang
-
Publication number: 20110153050
Abstract: Robust media fingerprints are derived from a portion of audio content. A portion of content in an audio signal is categorized. The audio content is characterized based, at least in part, on one or more of its features. The features may include a component that relates to one of several sound categories, e.g., speech and/or noise, which may be mixed with the audio signal. Upon categorizing the audio content as free of the speech or noise related components, the audio signal component is processed. Upon categorizing the audio content as including the speech related component and/or the noise related components, the speech or noise related components are separated from the audio signal. The audio signal is processed independent of the speech related component and/or the noise related component. Processing the audio signal includes computing the audio fingerprint, which reliably corresponds to the audio signal.
Type: Application
Filed: August 26, 2009
Publication date: June 23, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Claus Bauer, Regunathan Radhakrishnan
-
Publication number: 20110142348
Abstract: Deriving a fingerprint of an image corresponding to media content involves selecting at least two different regions of the same image, determining a relationship between the two regions, and deriving a fingerprint of the image based on the relationship between the two regions of the image.
Type: Application
Filed: August 17, 2009
Publication date: June 16, 2011
Applicant: Dolby Laboratories Licensing Corporation
Inventors: Regunathan Radhakrishnan, Claus Bauer