Patents by Inventor Katherine H. Cornog

Katherine H. Cornog has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11350105
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: May 31, 2022
    Assignee: Euclid Discoveries, LLC
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
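
The bitrate-prediction filings in this family (this entry and the related entries below share the same abstract) describe fitting a model from objective quality metrics to mean opinion scores (MOS), then inverting it to choose an encoding bitrate. The following is a minimal illustrative sketch, not the patented method: it assumes a simple linear MOS model, made-up metric and bitrate numbers, and hypothetical names such as predict_mos.

    # Illustrative sketch only (assumed linear model, made-up data): predict MOS
    # from an objective metric, then invert the MOS-vs-bitrate relationship to
    # find the bitrate that reaches a target MOS.
    import numpy as np

    # Hypothetical training data: objective metric values with subjective MOS scores.
    metric_train = np.array([30.0, 34.0, 38.0, 42.0, 46.0])
    mos_train = np.array([2.1, 2.9, 3.6, 4.2, 4.6])

    # Fit a simple linear MOS model as a function of the metric.
    a, b = np.polyfit(metric_train, mos_train, 1)

    def predict_mos(metric_value):
        return a * metric_value + b

    # Candidate encodings of one video segment at several bitrates (kbps),
    # each with a measured metric value.
    bitrates = np.array([500.0, 1000.0, 2000.0, 4000.0])
    metrics = np.array([31.0, 36.0, 41.0, 45.0])
    mos_pred = predict_mos(metrics)

    # Target bitrate: interpolate bitrate as a function of predicted MOS to find
    # the bitrate whose predicted MOS reaches the goal.
    target_mos = 4.0
    target_bitrate = np.interp(target_mos, mos_pred, bitrates)
    print(f"predicted target bitrate: {target_bitrate:.0f} kbps")
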
  • Patent number: 11228766
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: January 18, 2022
    Assignee: Euclid Discoveries, LLC
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Patent number: 11159801
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Grant
    Filed: July 10, 2020
    Date of Patent: October 26, 2021
    Assignee: Euclid Discoveries, LLC
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Publication number: 20210203951
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Application
    Filed: January 8, 2021
    Publication date: July 1, 2021
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Publication number: 20210203950
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Application
    Filed: January 7, 2021
    Publication date: July 1, 2021
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Publication number: 20200413067
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Application
    Filed: July 10, 2020
    Publication date: December 31, 2020
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Patent number: 10757419
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: August 25, 2020
    Assignee: Euclid Discoveries, LLC
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Publication number: 20190289296
    Abstract: Videos may be characterized by objective metrics that quantify video quality. Embodiments are directed to target bitrate prediction methods in which one or more objective metrics may serve as inputs into a model that predicts a mean opinion score (MOS), a measure of perceptual quality, as a function of metric values. The model may be derived by generating training data through conducting subjective tests on a set of video encodings, obtaining MOS data from the subjective tests, and correlating the MOS data with metric measurements on the training data. The MOS predictions may be extended to predict the target (encoding) bitrate that achieves a desired MOS value. The target bitrate prediction methods may be applied to segments of a video. The methods may be made computationally faster by applying temporal subsampling.
    Type: Application
    Filed: May 23, 2019
    Publication date: September 19, 2019
    Inventors: Dane P. Kottke, Katherine H. Cornog, John J. Guo, Myo Tun, Jeyun Lee, Nigel Lee
  • Patent number: 8654181
    Abstract: A set of tools in a media composition system for stereoscopic video provides visualizations of the perceived depth field in video clips, including depth maps, depth histograms, time-based depth histogram ribbons and curves displayed in association with a media timeline, and multi-panel displays including views of clips temporally adjacent to a clip being edited. Temporal changes in perceived depth that may cause viewer discomfort are automatically detected, and when they exceed a predetermined threshold, the editor is alerted. Depth grading tools facilitate matching depths in an outgoing clip to those in an incoming clip. Depth grading can be performed automatically upon detection of excessively large or rapid perceived depth changes.
    Type: Grant
    Filed: March 28, 2011
    Date of Patent: February 18, 2014
    Assignee: Avid Technology, Inc.
    Inventors: Katherine H. Cornog, Shailendra Mathur, Stephen McNeill
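
This stereoscopic editing entry (and the related publication below) describes visualizing perceived depth and alerting the editor when depth changes too quickly between frames. A minimal sketch of that alert idea follows, under assumed inputs (per-frame disparity maps) and a made-up threshold; the function name depth_change_alerts is hypothetical.

    # Illustrative sketch (assumed per-frame disparity maps, made-up threshold):
    # flag frames whose median perceived depth jumps by more than a threshold.
    import numpy as np

    def depth_change_alerts(depth_maps, max_step=2.0):
        """depth_maps: list of 2-D disparity arrays, one per frame.
        Returns indices of frames whose median depth jumps by more than max_step."""
        medians = np.array([np.median(d) for d in depth_maps])
        steps = np.abs(np.diff(medians))
        return (np.nonzero(steps > max_step)[0] + 1).tolist()

    # Example: a synthetic clip whose depth jumps abruptly at frame 3.
    rng = np.random.default_rng(0)
    frames = [rng.normal(loc=m, scale=0.5, size=(90, 160))
              for m in (0.0, 0.2, 0.4, 5.0, 5.1)]
    print(depth_change_alerts(frames))  # -> [3]
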
  • Publication number: 20120249746
    Abstract: A set of tools in a media composition system for stereoscopic video provides visualizations of the perceived depth field in video clips, including depth maps, depth histograms, time-based depth histogram ribbons and curves displayed in association with a media timeline, and multi-panel displays including views of clips temporally adjacent to a clip being edited. Temporal changes in perceived depth that may cause viewer discomfort are automatically detected, and when they exceed a predetermined threshold, the editor is alerted. Depth grading tools facilitate matching depths in an outgoing clip to those in an incoming clip. Depth grading can be performed automatically upon detection of excessively large or rapid perceived depth changes.
    Type: Application
    Filed: March 28, 2011
    Publication date: October 4, 2012
    Inventors: Katherine H. Cornog, Shailendra Mathur, Stephen McNeill
  • Patent number: 8213673
    Abstract: The problem of watermarking a sequence of images from a motion picture can be divided into two parts. The first part is embedding watermarks in the sequence of images. The second part is detecting embedded watermarks in a target sequence of images where the target sequence may have resulted from one or more attacks on an original sequence of images in which the watermarks were embedded. Motion pictures are watermarked by embedding information in different ways in different images. In general, the information to be embedded is used to define a plurality of watermark images. Each watermark image is an apparent pattern of noise in both signal and frequency domains, and is different from the other watermark images. Preferably, each watermark image is temporally uncorrelated with the other watermark images. Each watermark image is used to modify a corresponding image from the motion picture.
    Type: Grant
    Filed: June 9, 2009
    Date of Patent: July 3, 2012
    Assignee: Avid Technology, Inc.
    Inventor: Katherine H. Cornog
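
The motion-picture watermarking entries here describe embedding a different, noise-like watermark image in each frame and detecting the embedded information later. A minimal sketch of that general embed/detect idea follows, assuming additive pseudo-random patterns and correlation-based detection; make_watermark, embed, and detect are hypothetical names, not the patented scheme.

    # Illustrative sketch (assumed additive noise patterns, correlation detector):
    # each frame gets its own pseudo-random watermark, uncorrelated across frames.
    import numpy as np

    def make_watermark(frame_index, shape, key=1234):
        rng = np.random.default_rng(key + frame_index)   # per-frame pattern
        return rng.standard_normal(shape)

    def embed(frame, frame_index, strength=1.0):
        return frame + strength * make_watermark(frame_index, frame.shape)

    def detect(frame, frame_index):
        w = make_watermark(frame_index, frame.shape)
        return float(np.sum(frame * w) / np.sqrt(np.sum(w * w)))  # correlation score

    rng = np.random.default_rng(0)
    original = rng.uniform(0, 255, size=(64, 64))
    marked = embed(original, frame_index=7)
    print(detect(marked, 7) > detect(original, 7))  # True: embedding raises the score
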
  • Publication number: 20100310114
    Abstract: The problem of watermarking a sequence of images from a motion picture can be divided into two parts. The first part is embedding watermarks in the sequence of images. The second part is detecting embedded watermarks in a target sequence of images where the target sequence may have resulted from one or more attacks on an original sequence of images in which the watermarks were embedded. Motion pictures are watermarked by embedding information in different ways in different images. In general, the information to be embedded is used to define a plurality of watermark images. Each watermark image is an apparent pattern of noise in both signal and frequency domains, and is different from the other watermark images. Preferably, each watermark image is temporally uncorrelated with the other watermark images. Each watermark image is used to modify a corresponding image from the motion picture.
    Type: Application
    Filed: June 9, 2009
    Publication date: December 9, 2010
    Inventor: Katherine H. Cornog
  • Patent number: 7729423
    Abstract: High quality intraframe-only compression of video can be achieved using rate distortion optimization and without resizing or bit depth modification. The compression process involves transforming portions of the image to generate frequency domain coefficients for each portion. A bit rate for each transformed portion using a plurality of scale factors is determined. Distortion for each portion is estimated according to the plurality of scale factors. A scale factor is selected for each portion to minimize the total distortion in the image to achieve a desired bit rate. A quantization matrix is selected according to the desired bit rate. The frequency domain coefficients for each portion are quantized using the selected plurality of quantizers as scaled by the selected scale factor for the portion. The quantized frequency domain coefficients are encoded using a variable length encoding to provide compressed data for each of the defined portions.
    Type: Grant
    Filed: June 26, 2008
    Date of Patent: June 1, 2010
    Assignee: Avid Technology, Inc.
    Inventors: Dane P. Kottke, Katherine H. Cornog
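
This intraframe-compression family (also listed further down as publication 20090003438 and patent 7403561) selects a scale factor per image portion to trade bit rate against distortion at a desired overall rate. One common way to realize that kind of per-portion selection is a Lagrangian rate-distortion choice; the sketch below assumes that formulation with made-up per-block rate/distortion tables and a fixed lambda (in practice lambda would be tuned to hit the desired bit rate), and is not the patented process.

    # Illustrative sketch (assumed Lagrangian formulation, made-up tables): pick the
    # scale factor per block that minimizes distortion + lambda * bits.
    def choose_scale_factors(blocks, lmbda):
        """blocks: list of dicts mapping scale_factor -> (bits, distortion)."""
        choices = []
        for table in blocks:
            best = min(table, key=lambda s: table[s][1] + lmbda * table[s][0])
            choices.append(best)
        return choices

    # Toy per-block tables: larger scale factors cost fewer bits but add distortion.
    blocks = [
        {1: (120, 4.0), 2: (80, 9.0), 4: (50, 20.0)},
        {1: (200, 2.0), 2: (130, 6.0), 4: (70, 15.0)},
    ]
    print(choose_scale_factors(blocks, lmbda=0.1))  # -> [1, 2]
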
  • Patent number: 7587062
    Abstract: A structured watermark may be embedded in data by applying an irregular mapping of variations defined by the structured watermark to frequency domain values representing the data. In particular, the frequency domain representation of the data comprises an ordered set of frequency domain values. The structured watermark is used to define an ordered set of variations to be applied to the frequency domain values. Each variation is a value defined by the structured watermark. An irregular mapping from positions in the ordered set of variations to positions in the ordered set of frequency domain values is defined. This irregular mapping is one-to-one and invertible. Application of the irregular mapping to the set of variations results in a set of values that may appear to be noise both in the frequency domain and in the signal domain of the data. The signal domain of the data may be n-dimensional, and may be a spatial, temporal or other domain from which data may be converted to the frequency domain.
    Type: Grant
    Filed: May 7, 2004
    Date of Patent: September 8, 2009
    Assignees: Avid Technology, Inc., University of New Hampshire
    Inventors: Katherine H. Cornog, Mitrajit Dutta
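
The abstract above centers on an irregular, invertible, one-to-one mapping from an ordered set of watermark variations to positions among frequency-domain values. A keyed permutation is one simple example of such a mapping; the sketch below assumes that choice and made-up data, and is not the patented construction.

    # Illustrative sketch (a keyed permutation stands in for the irregular mapping):
    # scatter watermark variations across frequency-domain positions, invertibly.
    import numpy as np

    def embed(freq_values, variations, key=42):
        perm = np.random.default_rng(key).permutation(len(freq_values))
        out = freq_values.copy()
        out[perm] += variations           # variation i lands at position perm[i]
        return out

    def recover_variations(marked, original, key=42):
        perm = np.random.default_rng(key).permutation(len(marked))
        return (marked - original)[perm]  # invert the one-to-one mapping

    freq = np.fft.rfft(np.random.default_rng(0).standard_normal(256))
    variations = np.linspace(0.1, 1.0, len(freq))
    marked = embed(freq, variations)
    print(np.allclose(recover_variations(marked, freq), variations))  # True
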
  • Patent number: 7545957
    Abstract: In calculating motion between two images, a single channel image may be generated for each image based on measurement of a desired characteristic of those images. Given a desired characteristic (such as edge strength or edge magnitude) in an image, a function measures the strength of the desired characteristic in a region around a pixel in an image. A range of values can represent the likelihood, or measure of confidence, of the occurrence of the desired characteristic in the region around the pixel. Thus, each pixel in the single channel image has a value from the range of values that is determined according to a function. This function operates on a neighborhood in the input image that corresponds to the pixel in the single channel image, and measures the likelihood of occurrence of, or strength of, the desired characteristic in that neighborhood.
    Type: Grant
    Filed: April 20, 2001
    Date of Patent: June 9, 2009
    Assignee: Avid Technology, Inc.
    Inventors: Katherine H. Cornog, Randy M. Fayan
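
Patent 7545957 above describes building a single-channel image whose pixel values measure confidence in a desired characteristic, such as edge strength, as a precursor to motion calculation. A minimal sketch of that idea follows, using plain gradient magnitude as an assumed, simple stand-in for the measurement function.

    # Illustrative sketch (gradient magnitude as an assumed edge-strength measure):
    # build a single-channel image whose values in [0, 1] express edge confidence.
    import numpy as np

    def edge_strength_channel(gray):
        gy, gx = np.gradient(gray.astype(float))          # finite-difference gradients
        mag = np.hypot(gx, gy)                            # edge magnitude per pixel
        return mag / mag.max() if mag.max() > 0 else mag  # normalize to [0, 1]

    img = np.zeros((32, 32))
    img[:, 16:] = 1.0                                     # vertical step edge
    strength = edge_strength_channel(img)
    print(strength[:, 15:17].max())                       # strongest response at the edge -> 1.0
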
  • Publication number: 20090003438
    Abstract: High quality intraframe-only compression of video can be achieved using rate distortion optimization and without resizing or bit depth modification. The compression process involves transforming portions of the image to generate frequency domain coefficients for each portion. A bit rate for each transformed portion using a plurality of scale factors is determined. Distortion for each portion is estimated according to the plurality of scale factors. A scale factor is selected for each portion to minimize the total distortion in the image to achieve a desired bit rate. A quantization matrix is selected according to the desired bit rate. The frequency domain coefficients for each portion are quantized using the selected plurality of quantizers as scaled by the selected scale factor for the portion. The quantized frequency domain coefficients are encoded using a variable length encoding to provide compressed data for each of the defined portions.
    Type: Application
    Filed: June 26, 2008
    Publication date: January 1, 2009
    Inventors: Dane P. Kottke, Katherine H. Cornog
  • Patent number: 7403561
    Abstract: High quality intraframe-only compression of video can be achieved using rate distortion optimization and without resizing or bit depth modification. The compression process involves transforming portions of the image to generate frequency domain coefficients for each portion. A bit rate for each transformed portion using a plurality of scale factors is determined. Distortion for each portion is estimated according to the plurality of scale factors. A scale factor is selected for each portion to minimize the total distortion in the image to achieve a desired bit rate. A quantization matrix is selected according to the desired bit rate. The frequency domain coefficients for each portion are quantized using the selected plurality of quantizers as scaled by the selected scale factor for the portion. The quantized frequency domain coefficients are encoded using a variable length encoding to provide compressed data for each of the defined portions.
    Type: Grant
    Filed: April 2, 2004
    Date of Patent: July 22, 2008
    Assignee: Avid Technology, Inc.
    Inventors: Dane P. Kottke, Katherine H. Cornog
  • Patent number: 7194676
    Abstract: A retiming function that defines a rampable retiming effect is used to generate new audio and video samples at appropriate output times. In particular, for each output time, a corresponding input time is determined from the output time by using the retiming function. The retiming function may be a speed curve, a position curve that maps output times to input times directly or a mapping defining correspondence times between points in the video data and points in the audio data. An output sample is computed for the output time based on at least the data in the neighborhood of the corresponding input time, using a resampling function for the type of media data. Synchronization is achieved by ensuring that the input times determined to correspond to output times for video samples correspond to the input times determined to correspond to the same output times for audio samples.
    Type: Grant
    Filed: March 1, 2002
    Date of Patent: March 20, 2007
    Assignee: Avid Technology, Inc.
    Inventors: Randy M. Fayan, Katherine H. Cornog
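
Patent 7194676 above keeps retimed audio and video synchronized by mapping every output time to an input time through one retiming function and resampling each medium at the mapped times. A minimal sketch under assumed inputs follows (a constant 2x speed-up and a sine-wave audio track); output_to_input_time is a hypothetical stand-in for the position curve.

    # Illustrative sketch (assumed constant-speed position curve): map each output
    # time to an input time, then resample audio there; video frames would use the
    # same mapping, which is what keeps picture and sound in sync.
    import numpy as np

    def output_to_input_time(t_out):
        return 2.0 * t_out                               # 2x speed-up

    sample_rate = 48000
    t_in = np.arange(0, 1, 1 / sample_rate)
    audio_in = np.sin(2 * np.pi * 440 * t_in)            # 1 s of a 440 Hz tone

    t_out = np.arange(0, 0.5, 1 / sample_rate)           # retimed output lasts 0.5 s
    audio_out = np.interp(output_to_input_time(t_out), t_in, audio_in)
    print(len(audio_out), audio_out[:3])
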
  • Patent number: 7103231
    Abstract: Two images are analyzed to compute a set of motion vectors that describes motion between the first and second images. A motion vector is computed for each pixel in an image at a time between the first and second images. This set of motion vectors may be defined at any time between the first and second images, such as the midpoint. The motion vectors may be computed using any of several techniques. An example technique is based on the constant brightness constraint, also referred to as optical flow. Each vector is specified at a pixel center in an image defined at the time between the first and second images. The vectors may point to points in the first and second images that are not on pixel centers. The motion vectors are used to warp the first and second images to a point in time of an output image between the first and second images using a factor that represents the time between the first and second image at which the output image occurs.
    Type: Grant
    Filed: November 3, 2003
    Date of Patent: September 5, 2006
    Assignee: Avid Technology, Inc.
    Inventors: Katherine H. Cornog, Garth A. Dickie, Peter J. Fasciano, Randy M. Fayan, Robert A. Gonsalves
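
Patent 7103231 above computes per-pixel motion vectors between two images and warps both toward an intermediate output time. The sketch below shows only the warping and blending step, with an assumed known motion field and nearest-neighbour sampling for brevity; interpolate_frame is a hypothetical name, not the patented procedure.

    # Illustrative sketch (assumed known motion field, nearest-neighbour sampling):
    # motion-compensated interpolation of a frame at time t between two images.
    import numpy as np

    def interpolate_frame(img1, img2, flow, t):
        """flow[y, x] = (dy, dx) motion from img1 to img2; 0 <= t <= 1."""
        h, w = img1.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Sample img1 "back" along the motion and img2 "forward", clipped to the border.
        y1 = np.clip(np.rint(ys - t * flow[..., 0]), 0, h - 1).astype(int)
        x1 = np.clip(np.rint(xs - t * flow[..., 1]), 0, w - 1).astype(int)
        y2 = np.clip(np.rint(ys + (1 - t) * flow[..., 0]), 0, h - 1).astype(int)
        x2 = np.clip(np.rint(xs + (1 - t) * flow[..., 1]), 0, w - 1).astype(int)
        return (1 - t) * img1[y1, x1] + t * img2[y2, x2]

    img1 = np.zeros((8, 8)); img1[2, 2] = 1.0            # bright dot at (2, 2)
    img2 = np.zeros((8, 8)); img2[2, 6] = 1.0            # same dot moved to (2, 6)
    flow = np.zeros((8, 8, 2)); flow[..., 1] = 4.0       # 4 pixels of rightward motion
    mid = interpolate_frame(img1, img2, flow, t=0.5)
    print(np.argwhere(mid > 0.4))                        # dot appears near (2, 4)
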
  • Patent number: 7043058
    Abstract: Visible artifacts in images created using image processing based on motion vector maps may be reduced by providing one or more mechanisms for correcting the vector map. In general, the set of motion vectors is changed by selecting one or more portions of the image. The vectors corresponding to the selected one or more portions are modified. Various image processing operations, such as motion compensated interpolation, may be performed using the changed set of motion vectors.
    Type: Grant
    Filed: April 20, 2001
    Date of Patent: May 9, 2006
    Assignee: Avid Technology, Inc.
    Inventors: Katherine H. Cornog, Randy M. Fayan, Garth Dickie
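
Patent 7043058 above lets an editor select portions of a motion vector map and modify the vectors there to suppress visible artifacts. A minimal sketch of one possible correction follows (replacing selected vectors with the median of the vectors outside the selection); this particular replacement rule is an assumption for illustration, not the patented method.

    # Illustrative sketch (assumed median-replacement rule): correct a motion vector
    # map by overwriting vectors inside a selected region with the median of the rest.
    import numpy as np

    def correct_vectors(flow, mask):
        """flow: (H, W, 2) motion vectors; mask: boolean (H, W) selection."""
        fixed = flow.copy()
        outside = ~mask
        for c in range(2):                          # dy and dx components
            fixed[..., c][mask] = np.median(flow[..., c][outside])
        return fixed

    flow = np.zeros((6, 6, 2)); flow[..., 1] = 1.0  # uniform rightward motion...
    flow[2:4, 2:4, 1] = 25.0                        # ...except a block of bad vectors
    mask = np.zeros((6, 6), dtype=bool); mask[2:4, 2:4] = True
    print(correct_vectors(flow, mask)[2, 2])        # -> [0. 1.]
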