Patents by Inventor Wilfried M. Osberger

Wilfried M. Osberger has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7586515
    Abstract: A real-time video quality measurement instrument may be configured for both double-ended and single-ended operation. For double-ended operation, reference and test video signals are stored in respective buffers and spatially/temporally aligned. Desired quality measurements are performed on the aligned frames of the test and reference video signals according to stored setup instructions. For single-ended operation, the reference video signal and a signature for the reference video signal are pre-stored together with the desired quality measurements for the frames of the reference video signal. The test video signal is then received, signatures are determined, and the test and reference video signals are aligned using the signatures. The desired quality measurements are then performed on the aligned frames of the test and reference video signals.
    Type: Grant
    Filed: May 23, 2005
    Date of Patent: September 8, 2009
    Assignee: Tektronix, Inc.
    Inventors: Gale L. Straney, Wilfried M. Osberger
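The single-ended operation above hinges on compact per-frame signatures used to align the test signal against the pre-stored reference. As a minimal sketch, assuming the signature is simply mean frame luminance (the patent abstract does not specify the signature function, and all names here are hypothetical), temporal alignment can be found by minimizing the squared difference between signature sequences:

```python
import numpy as np

def frame_signature(frame: np.ndarray) -> float:
    """Reduce a frame to a compact scalar signature (mean luminance here)."""
    return float(frame.mean())

def align_offset(ref_sigs: np.ndarray, test_sigs: np.ndarray) -> int:
    """Return the temporal offset at which the test signature sequence
    best matches the reference signature sequence (least squared error)."""
    n = len(test_sigs)
    best_offset, best_err = 0, np.inf
    for offset in range(len(ref_sigs) - n + 1):
        err = np.sum((ref_sigs[offset:offset + n] - test_sigs) ** 2)
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

# Synthetic example: the test sequence is a 3-frame-delayed slice of the reference.
ref = np.array([np.full((4, 4), v) for v in range(10)], dtype=float)
ref_sigs = np.array([frame_signature(f) for f in ref])
test_sigs = ref_sigs[3:7]
print(align_offset(ref_sigs, test_sigs))  # 3
```

Once the offset is known, the pre-stored per-frame measurements for the reference can be paired with the corresponding test frames without transmitting the full reference signal.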
  • Patent number: 7099518
    Abstract: A method for determining blurring in a test video sequence due to video processing includes detecting blocks within each frame of the test video sequence that have valid image edges. An edge point within each valid image edge block is selected, and a series of points defining an edge profile in the block, along a line normal to the valid image edge at each edge point, is defined from an enhanced edge map in which video processing blockiness artifacts have been removed. From the edge profile a blurring value is estimated for each frame or group of frames within the test video sequence. Additionally, a reference blurring value may be derived from a reference video sequence corresponding to the test video sequence; this reference blurring value may be generated at the source of the reference video sequence and transmitted with the test video sequence to a receiver, or may be generated at the receiver.
    Type: Grant
    Filed: July 18, 2002
    Date of Patent: August 29, 2006
    Assignee: Tektronix, Inc.
    Inventors: Bei Li, Wilfried M. Osberger
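The blur estimate above is derived from edge profiles taken along lines normal to valid image edges. A minimal sketch, assuming blur is proxied by the 10%-90% rise width of a monotonic 1-D edge profile (an illustrative stand-in, not the patent's actual metric):

```python
import numpy as np

def edge_width(profile: np.ndarray) -> int:
    """Width in samples over which a monotonic edge profile rises from
    10% to 90% of its total amplitude; wider transitions mean more blur."""
    lo = profile.min() + 0.1 * (profile.max() - profile.min())
    hi = profile.min() + 0.9 * (profile.max() - profile.min())
    first_above_lo = int(np.argmax(profile >= lo))  # first sample past 10%
    first_above_hi = int(np.argmax(profile >= hi))  # first sample past 90%
    return first_above_hi - first_above_lo

# A sharp step transitions in one sample; a blurred step spreads it out.
sharp = np.array([0, 0, 0, 255, 255, 255], dtype=float)
blurred = np.array([0, 32, 96, 160, 224, 255], dtype=float)
print(edge_width(sharp), edge_width(blurred))  # 0 4
```

Averaging such widths over all valid edge blocks in a frame yields a per-frame blurring value, and comparing against the same measure on a reference sequence isolates blur introduced by processing.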
  • Patent number: 6738099
    Abstract: A robust technique for estimating camera motion parameters calculates the motion vectors for a current frame vis-à-vis a previous frame using a multi-scale block matching technique. The means of the motion vectors for the current and previous frames are compared for a temporal discontinuity; if such a discontinuity is detected as a temporal repetition, such as a frozen field or 3:2 pulldown, processing of the current frame terminates and the camera motion parameter estimate for the previous frame is used for the current frame. Otherwise, motion vectors for spatially flat areas and text/graphic overlay areas are discarded, and an error-of-fit for the previous frame is tested to determine initial parameters for an iterative camera motion estimation process. If the error-of-fit is less than a predetermined threshold, the camera motion parameters for the previous frame are used as the initial parameters; otherwise a best least-squares fit is used as the initial parameters.
    Type: Grant
    Filed: February 16, 2001
    Date of Patent: May 18, 2004
    Assignee: Tektronix, Inc.
    Inventor: Wilfried M. Osberger
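The initialization step above falls back to a best least-squares fit over the retained motion vectors when the previous frame's error-of-fit is too large. A minimal sketch of such a fit, assuming a simplified pan-plus-zoom camera model (the patent's actual parameterization and robust iteration are not reproduced here; all names are hypothetical):

```python
import numpy as np

def fit_pan_zoom(positions: np.ndarray, vectors: np.ndarray) -> np.ndarray:
    """Least-squares fit of v = pan + zoom * p, given block centers
    `positions` (N, 2) and their motion vectors `vectors` (N, 2).
    Returns [pan_x, pan_y, zoom]."""
    n = len(positions)
    A = np.zeros((2 * n, 3))
    b = vectors.reshape(-1)            # interleaved x, y components
    A[0::2, 0] = 1.0                   # pan_x contributes to x components
    A[1::2, 1] = 1.0                   # pan_y contributes to y components
    A[0::2, 2] = positions[:, 0]       # zoom scales with x position
    A[1::2, 2] = positions[:, 1]       # zoom scales with y position
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

# Synthetic motion field: pan of (2, -1) combined with 10% zoom.
pos = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
vec = np.array([[2, -1], [3, -1], [2, 0], [3, 0]], dtype=float)
print(fit_pan_zoom(pos, vec))  # pan_x=2, pan_y=-1, zoom=0.1
```

In practice the fit would be recomputed iteratively after discarding vectors with large residuals, which is what makes the estimation robust to foreground object motion.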
  • Publication number: 20040013315
    Abstract: A method for determining blurring in a test video sequence due to video processing includes detecting blocks within each frame of the test video sequence that have valid image edges. An edge point within each valid image edge block is selected, and a series of points defining an edge profile in the block, along a line normal to the valid image edge at each edge point, is defined from an enhanced edge map in which video processing blockiness artifacts have been removed. From the edge profile a blurring value is estimated for each frame or group of frames within the test video sequence. Additionally, a reference blurring value may be derived from a reference video sequence corresponding to the test video sequence; this reference blurring value may be generated at the source of the reference video sequence and transmitted with the test video sequence to a receiver, or may be generated at the receiver.
    Type: Application
    Filed: July 18, 2002
    Publication date: January 22, 2004
    Inventors: Bei Li, Wilfried M. Osberger
  • Patent number: 6670963
    Abstract: An improved visual attention model uses a robust adaptive segmentation algorithm to divide a current frame of a video sequence into a plurality of regions based upon both color and luminance, with each region being processed in parallel by a plurality of spatial feature algorithms including color and skin to produce respective spatial importance maps. The current frame and a previous frame are also processed to produce motion vectors for each block of the current frame, the motion vectors being compensated for camera motion, and the compensated motion vectors being converted to produce a temporal importance map. The spatial and temporal importance maps are combined using weighting based upon eye movement studies.
    Type: Grant
    Filed: January 17, 2001
    Date of Patent: December 30, 2003
    Assignee: Tektronix, Inc.
    Inventor: Wilfried M. Osberger
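The final step of the visual attention model above combines the spatial and temporal importance maps by weighting. A minimal sketch, assuming a simple per-pixel weighted sum with illustrative weights (the eye-movement-derived weights are not given in the abstract, so the 0.6/0.4 split here is purely a placeholder):

```python
import numpy as np

def combine_importance(spatial: np.ndarray, temporal: np.ndarray,
                       w_spatial: float = 0.6,
                       w_temporal: float = 0.4) -> np.ndarray:
    """Weighted per-pixel combination of importance maps, renormalized to
    [0, 1] so the result remains a valid importance map."""
    combined = w_spatial * spatial + w_temporal * temporal
    rng = combined.max() - combined.min()
    return (combined - combined.min()) / rng if rng > 0 else combined

spatial = np.array([[1.0, 0.0], [0.0, 0.0]])   # e.g. a detected skin region
temporal = np.array([[0.0, 1.0], [0.0, 0.0]])  # e.g. a moving block
print(combine_importance(spatial, temporal))
```

With this weighting, a region that is salient both spatially and temporally scores highest, which is the behavior the eye-movement studies are used to calibrate.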
  • Publication number: 20020126891
    Abstract: An improved visual attention model uses a robust adaptive segmentation algorithm to divide a current frame of a video sequence into a plurality of regions based upon both color and luminance, with each region being processed in parallel by a plurality of spatial feature algorithms including color and skin to produce respective spatial importance maps. The current frame and a previous frame are also processed to produce motion vectors for each block of the current frame, the motion vectors being compensated for camera motion, and the compensated motion vectors being converted to produce a temporal importance map. The spatial and temporal importance maps are combined using weighting based upon eye movement studies.
    Type: Application
    Filed: January 17, 2001
    Publication date: September 12, 2002
    Inventor: Wilfried M. Osberger
  • Publication number: 20020113901
    Abstract: A robust technique for estimating camera motion parameters calculates the motion vectors for a current frame vis-à-vis a previous frame using a multi-scale block matching technique. The means of the motion vectors for the current and previous frames are compared for a temporal discontinuity; if such a discontinuity is detected as a temporal repetition, such as a frozen field or 3:2 pulldown, processing of the current frame terminates and the camera motion parameter estimate for the previous frame is used for the current frame. Otherwise, motion vectors for spatially flat areas and text/graphic overlay areas are discarded, and an error-of-fit for the previous frame is tested to determine initial parameters for an iterative camera motion estimation process. If the error-of-fit is less than a predetermined threshold, the camera motion parameters for the previous frame are used as the initial parameters; otherwise a best least-squares fit is used as the initial parameters.
    Type: Application
    Filed: February 16, 2001
    Publication date: August 22, 2002
    Inventor: Wilfried M. Osberger