Patents by Inventor Aggelos K. Katsaggelos

Aggelos K. Katsaggelos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8170094
    Abstract: A scalable video compression system (100) having an encoder (120), bit extractor (140), and decoder (160) for efficiently encoding and decoding a scalable embedded bitstream (130) at different video resolution, framerate, and video quality levels is provided. Bits can be extracted in order of refinement layer (136), followed by temporal level (132), followed by spatial layer (134), wherein each bit extracted provides an incremental improvement in video decoding quality. Bit extraction can be truncated at a position in the embedded bitstream corresponding to a maximum refinement layer, a maximum temporal level, and a maximum spatial layer. For a given refinement layer, bits are extracted from all spatial layers in a lower temporal level prior to extracting bits from spatial layers in a higher temporal level for prioritizing coding gain to increase video decoding quality, and prior to moving to a next refinement layer.
    Type: Grant
    Filed: May 23, 2007
    Date of Patent: May 1, 2012
    Assignee: Motorola Mobility, Inc.
    Inventors: Mark R. Trandel, Aggelos K. Katsaggelos, Sevket D. Babacan, Shih-Ta Hsiang, Faisal Ishtiaq
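    A minimal sketch of the extraction order described in this abstract, assuming a hypothetical bitstream laid out as units keyed by (refinement layer, temporal level, spatial layer); the container and truncation limits are illustrative, not the patented bitstream format.

    ```python
    def extract_bits(units, max_refinement, max_temporal, max_spatial):
        """units: dict mapping (refinement, temporal, spatial) -> bytes of coded data."""
        extracted = []
        # For a given refinement layer, all spatial layers of a lower temporal
        # level are taken before any spatial layer of a higher temporal level,
        # and extraction is truncated at the configured maxima.
        for r in range(max_refinement + 1):
            for t in range(max_temporal + 1):
                for s in range(max_spatial + 1):
                    unit = units.get((r, t, s))
                    if unit is not None:
                        extracted.append(unit)
        return b"".join(extracted)
    ```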
  • Publication number: 20110002391
    Abstract: Disclosed is an image encoder that divides a digital image into a set of “macroblocks.” If appropriate, a macroblock is “downsampled” to a lower resolution. The lower-resolution macroblock is then encoded by applying spatial (and possibly temporal) prediction. The “residual” of the macroblock is calculated as the difference between the predicted and actual contents of the macroblock. The low-resolution residual is then either transmitted to an image decoder or stored for later use. In some embodiments, the encoder calculates the rate-distortion costs of encoding the original-resolution macroblock and the lower-resolution macroblock and then only encodes the lower-resolution macroblock if its cost is lower. When a decoder receives a lower-resolution residual, it recovers the lower-resolution macroblock using standard prediction techniques. Then, the macroblock is “upsampled” to its original resolution by interpolating the values left out by the encoder.
    Type: Application
    Filed: June 7, 2010
    Publication date: January 6, 2011
    Applicants: MOTOROLA, INC., NORTHWESTERN UNIVERSITY
    Inventors: Serhan Uslubas, Aggelos K. Katsaggelos, Faisal Ishtiaq, Shih-Ta Hsiang, Ehsan Maani
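    A hedged sketch of the rate-distortion decision mentioned in this abstract, assuming a caller-supplied encode_fn and a Lagrangian cost J = D + lambda*R; the 2x downsampling and nearest-neighbour upsampling are illustrative stand-ins for the patented interpolation.

    ```python
    import numpy as np

    def choose_macroblock_mode(block, encode_fn, lam=0.85):
        """encode_fn(samples) -> (rate_in_bits, reconstruction); block is a 2-D array."""
        # Candidate 1: encode the macroblock at its original resolution.
        rate_full, rec_full = encode_fn(block)
        cost_full = np.sum((block - rec_full) ** 2) + lam * rate_full

        # Candidate 2: downsample, encode, then upsample before measuring distortion.
        low = block[::2, ::2]
        rate_low, rec_low = encode_fn(low)
        rec_up = np.kron(rec_low, np.ones((2, 2)))  # crude upsampling; assumes even block dimensions
        cost_low = np.sum((block - rec_up) ** 2) + lam * rate_low

        # Keep the lower-resolution version only when its rate-distortion cost is lower.
        return ("downsampled", rec_up) if cost_low < cost_full else ("full", rec_full)
    ```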
  • Publication number: 20110002554
    Abstract: Disclosed is an image encoder that divides a digital image into a set of “macroblocks.” Each macroblock is encoded by applying spatial (and possibly temporal) prediction. The “residual” of the macroblock is calculated as the difference between the predicted content of the macroblock and the actual content of the macroblock. The residual is then “decimated” by taking an orderly subset of its values. The decimated residual is then either transmitted to an image decoder or is stored for later use. To recreate the original image, the macroblocks are first recreated from their received residuals. When a decimated residual is received, the values of the residual left out during decimation are interpolated from the values actually received. Using the prediction techniques along with the residual, the original content of the macroblock is recovered. The macroblocks are then joined to form the original digital image.
    Type: Application
    Filed: June 7, 2010
    Publication date: January 6, 2011
    Applicant: MOTOROLA, INC.
    Inventors: Serhan Uslubas, Aggelos K. Katsaggelos, Faisal Ishtiaq, Shih-Ta Hsiang, Ehsan Maani
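    A small sketch of the decimation/interpolation pair described above, assuming the "orderly subset" is every other row and column; the separable linear interpolation stands in for whatever filter the decoder actually uses.

    ```python
    import numpy as np

    def decimate_residual(residual):
        """Keep an orderly subset of the residual samples (every other row and column)."""
        return residual[::2, ::2]

    def interpolate_residual(decimated, shape):
        """Rebuild a full-size residual by separable linear interpolation."""
        h, w = shape
        rows = np.arange(decimated.shape[0])
        cols = np.arange(decimated.shape[1])
        ys = np.linspace(0, decimated.shape[0] - 1, h)
        xs = np.linspace(0, decimated.shape[1] - 1, w)
        # Interpolate along rows first, then along columns.
        tmp = np.array([np.interp(xs, cols, row) for row in decimated])   # shape (dh, w)
        full = np.array([np.interp(ys, rows, col) for col in tmp.T]).T    # shape (h, w)
        return full
    ```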
  • Publication number: 20100091841
    Abstract: A device is provided for use with a frame generating portion that is arranged to receive picture data corresponding to a plurality of pictures and to generate encoded video data for transmission across a transmission channel having an available bandwidth. The frame generating portion can generate a frame for each of the plurality of pictures to create a plurality of frames. The encoded video data is based on the received picture data. The device includes a distortion estimating portion, an inclusion determining portion, and an extracting portion. The distortion estimating portion can estimate a distortion. The inclusion determining portion can establish an inclusion boundary based on the estimated distortion. The extracting portion can extract a frame from the plurality of frames based on the inclusion boundary.
    Type: Application
    Filed: October 7, 2009
    Publication date: April 15, 2010
    Applicants: MOTOROLA, INC., NORTHWESTERN UNIVERSITY
    Inventors: Faisal Ishtiaq, Shih-Ta Hsiang, Aggelos K. Katsaggelos, Ehsan Maani, Serhan Uslubas
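    A very small sketch of the flow this abstract outlines, with the distortion estimator and the rule that turns estimates into an inclusion boundary left as caller-supplied assumptions.

    ```python
    def select_frames(frames, estimate_distortion, derive_boundary):
        """Estimate per-frame distortion, derive an inclusion boundary from the
        estimates, and extract the frames that fall within that boundary."""
        distortions = [estimate_distortion(f) for f in frames]
        boundary = derive_boundary(distortions)   # e.g. a distortion threshold
        return [f for f, d in zip(frames, distortions) if d <= boundary]
    ```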
  • Publication number: 20080130757
    Abstract: A scalable video compression system (100) having an encoder (120), bit extractor (140), and decoder (160) for efficiently encoding and decoding a scalable embedded bitstream (130) at different video resolution, framerate, and video quality levels is provided. Bits can be extracted in order of refinement layer (136), followed by temporal level (132), followed by spatial layer (134), wherein each bit extracted provides an incremental improvement in video decoding quality. Bit extraction can be truncated at a position in the embedded bitstream corresponding to a maximum refinement layer, a maximum temporal level, and a maximum spatial layer. For a given refinement layer, bits are extracted from all spatial layers in a lower temporal level prior to extracting bits from spatial layers in a higher temporal level for prioritizing coding gain to increase video decoding quality, and prior to moving to a next refinement layer.
    Type: Application
    Filed: May 23, 2007
    Publication date: June 5, 2008
    Applicant: MOTOROLA, INC.
    Inventors: Mark R. Trandel, Aggelos K. Katsaggelos, Sevket D. Babacan, Shih-Ta Hsiang, Faisal Ishtiaq
  • Patent number: 6996172
    Abstract: A scalability type selection method and structure for hybrid SNR-temporal scalability that employs a decision mechanism capable of selecting between SNR and temporal scalability based upon desired criteria is disclosed. This method and structure utilize models of the desired criteria, such as the extent of motion between two encoded frames, the temporal distance between two encoded frames, the gain in visual quality achieved by using SNR scalability over temporal scalability, and the bandwidth available to the scalable layer, in deciding which form of scalability to use. Furthermore, the method and structure allow for control over the extent to which a certain type of scalability is used. By the selection of parameters used in the models, a certain type of scalability can be emphasized while another type of scalability can be given less preference. This invention allows not only the type of scalability to be selected but also the degree to which the scalability will be used.
    Type: Grant
    Filed: December 21, 2001
    Date of Patent: February 7, 2006
    Assignee: Motorola, Inc.
    Inventors: Faisal Ishtiaq, Aggelos K. Katsaggelos
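    A hedged sketch of the decision mechanism, assuming the four criteria arrive as pre-normalised scores and that the weights are the tuning parameters the abstract says can emphasise one scalability type over the other; the scoring rule itself is an illustration, not the patented model.

    ```python
    def choose_scalability(motion, temporal_distance, snr_gain, bandwidth,
                           w_motion=1.0, w_dist=1.0, w_gain=1.0, w_bw=1.0):
        """Return 'SNR' or 'temporal' from weighted scores of the four criteria."""
        # Large motion and a large visual-quality gain argue for SNR scalability;
        # short temporal distances and generous bandwidth argue for temporal scalability.
        snr_score = w_gain * snr_gain + w_motion * motion
        temporal_score = w_dist * (1.0 - temporal_distance) + w_bw * bandwidth
        return "SNR" if snr_score >= temporal_score else "temporal"
    ```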
  • Patent number: 6963378
    Abstract: At least one visual significance metric is determined (12) for at least some frames belonging to an original series of frames (11). Key frames are identified (13) as a function, at least in part, of the visual significance metric. Cumulative visual significance values are then determined (14) for at least some of the frames that intervene between each pair of key frames. These cumulative visual significance values are then used to identify (15) frames of additional content interest. Various frames are then selected (16) for use in a visual summary. In one embodiment, all of the key frames and frames of additional content interest are selected for inclusion in the visual summary.
    Type: Grant
    Filed: November 1, 2002
    Date of Patent: November 8, 2005
    Assignee: Motorola, Inc.
    Inventors: Zhu Li, Bhavan Gandhi, Aggelos K. Katsaggelos
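    A compact sketch of the selection logic in this abstract, assuming a caller-supplied significance metric and two thresholds (both are assumptions, not values fixed by the patent).

    ```python
    def build_summary(frames, significance, key_threshold, cum_threshold):
        """Pick key frames by visual significance, then add intervening frames whose
        cumulative significance since the last selected frame is large enough."""
        summary, running = [], 0.0
        for frame in frames:
            score = significance(frame)
            if score >= key_threshold:        # key frame: always include
                summary.append(frame)
                running = 0.0
            else:                             # intervening frame of additional content interest
                running += score
                if running >= cum_threshold:
                    summary.append(frame)
                    running = 0.0
        return summary
    ```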
  • Publication number: 20040085483
    Abstract: At least one visual significance metric is determined (12) for at least some frames belonging to an original series of frames (11). Key frames are identified (13) as a function, at least in part, of the visual significance metric. Cumulative visual significance values are then determined (14) for at least some of the frames that intervene between each pair of key frames. These cumulative visual significance values are then used to identify (15) frames of additional content interest. Various frames are then selected (16) for use in a visual summary. In one embodiment, all of the key frames and frames of additional content interest are selected for inclusion in the visual summary.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Applicant: Motorola, Inc.
    Inventors: Zhu Li, Bhavan Gandhi, Aggelos K. Katsaggelos
  • Publication number: 20030118096
    Abstract: A scalability type selection method and structure for hybrid SNR-temporal scalability that employs a decision mechanism capable of selecting between SNR and temporal scalability based upon desired criteria is disclosed. This method and structure utilize models of the desired criteria, such as the extent of motion between two encoded frames, the temporal distance between two encoded frames, the gain in visual quality achieved by using SNR scalability over temporal scalability, and the bandwidth available to the scalable layer, in deciding which form of scalability to use. Furthermore, the method and structure allow for control over the extent to which a certain type of scalability is used. By the selection of parameters used in the models, a certain type of scalability can be emphasized while another type of scalability can be given less preference. This invention allows not only the type of scalability to be selected but also the degree to which the scalability will be used.
    Type: Application
    Filed: December 21, 2001
    Publication date: June 26, 2003
    Inventors: Faisal Ishtiaq, Aggelos K. Katsaggelos
  • Patent number: 5764307
    Abstract: The present invention provides a method (200) and an apparatus (100) for spatially adaptive filtering for video encoding. The apparatus filters a video sequence prior to the encoding process. The apparatus comprises a noise variance determiner (102), a local variance determiner (104), a noise visibility function determiner (106), a Gaussian kernel determiner (108), and a convolver (110). The apparatus removes noise directly from a Displaced Frame Difference, DFD, signal. This novel approach removes noise and miscellaneous high frequency components from the DFD signal without introducing the filtering artifacts characteristic of current techniques. By reducing the miscellaneous high frequency components, the present invention is capable of reducing the amount of information that must be encoded by the video encoder without substantially degrading the decoded video sequence.
    Type: Grant
    Filed: July 24, 1995
    Date of Patent: June 9, 1998
    Assignees: Motorola, Inc., Northwestern University
    Inventors: Taner Ozcelik, James C. Brailean, Aggelos K. Katsaggelos, Ozan Erdogan, Cheung Auyeung
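    A sketch of the spatially adaptive idea, under stated assumptions: the noise visibility function below (noise variance over noise-plus-local variance) and the mapping from visibility to Gaussian width are illustrative, not the patented formulas.

    ```python
    import numpy as np

    def adaptive_filter_dfd(dfd, noise_var, window=5, base_sigma=1.5):
        """Smooth a DFD signal more strongly where noise is visible (flat areas)
        and less where local variance indicates real detail."""
        pad = window // 2
        padded = np.pad(dfd, pad, mode="reflect")
        out = np.empty_like(dfd, dtype=float)
        ax = np.arange(window) - pad
        for y in range(dfd.shape[0]):
            for x in range(dfd.shape[1]):
                patch = padded[y:y + window, x:x + window]
                visibility = noise_var / (patch.var() + noise_var)   # assumed visibility model
                sigma = max(base_sigma * visibility, 1e-3)           # wider kernel where noise is visible
                kernel = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
                out[y, x] = np.sum(patch * kernel) / np.sum(kernel)
        return out
    ```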
  • Patent number: 5764921
    Abstract: A method (100, 200), device (300) and microprocessor (400) are provided for selectively compressing video frames of a motion compensated prediction-based video codec based on a predetermined set of compression techniques. An energy estimate of the current displaced frame difference, DFD, is used to compute a ratio between the estimate and a historical mean of energy estimates. The ratio is iteratively compared to a predetermined set of thresholds which are associated with the predetermined set of compression techniques. The comparisons are used to choose a technique based on the thresholds, and a technique is selected to be used for encoding the current DFD.
    Type: Grant
    Filed: October 26, 1995
    Date of Patent: June 9, 1998
    Assignees: Motorola, Northwestern University
    Inventors: Mark R. Banham, James C. Brailean, Stephen N. Levine, Aggelos K. Katsaggelos, Guido M. Schuster
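    A short sketch of the selection rule in this abstract, assuming the thresholds are sorted in increasing order and paired one-to-one with the candidate compression techniques (both lists are placeholders, not the patented set).

    ```python
    def select_technique(dfd_energy, energy_history, thresholds, techniques):
        """Compare the current DFD energy to the historical mean of energy estimates
        and pick the first technique whose threshold the ratio does not exceed."""
        mean_energy = sum(energy_history) / len(energy_history)
        ratio = dfd_energy / mean_energy if mean_energy > 0 else float("inf")
        for threshold, technique in zip(thresholds, techniques):
            if ratio <= threshold:
                return technique
        return techniques[-1]   # ratio above every threshold: fall back to the last technique
    ```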
  • Patent number: 5717463
    Abstract: A method and system for estimating the motion within a video sequence provides very accurate estimates of both the displacement vector field and the boundaries of moving objects. The system comprises a preprocessor (102), a spatially adaptive pixel motion estimator (104), a motion boundary estimator (106), and a motion analyzer (108). The preprocessor (102) provides a first estimate of the displacement vector field, and the spatially adaptive pixel motion estimator (104) provides a first estimate of object boundaries. The motion boundary estimator (106) and the motion analyzer (108) improve the accuracy of the first estimates.
    Type: Grant
    Filed: July 24, 1995
    Date of Patent: February 10, 1998
    Assignee: Motorola, Inc.
    Inventors: James C. Brailean, Taner Ozcelik, Aggelos K. Katsaggelos
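    Only the data flow between the four components named in this abstract is sketched here; each stage is a caller-supplied function, since the estimators themselves are not reproduced from the patent.

    ```python
    def estimate_motion(prev_frame, curr_frame, preprocess, estimate_pixel_motion,
                        estimate_boundaries, analyze):
        """Chain the preprocessor, pixel motion estimator, boundary estimator,
        and motion analyzer named in the abstract."""
        initial_dvf = preprocess(prev_frame, curr_frame)                  # first DVF estimate
        dvf, boundaries = estimate_pixel_motion(prev_frame, curr_frame, initial_dvf)
        boundaries = estimate_boundaries(dvf, boundaries)                 # refine object boundaries
        return analyze(dvf, boundaries)                                   # jointly improve both estimates
    ```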
  • Patent number: 5646867
    Abstract: The present invention provides a method (600) and system (100) for predicting a differential vector field. The method and system enable the detection and encoding of areas where motion compensation from the past image frame to the current image frame fails. Based on the DFD signal, the present invention detects regions where the motion compensation has failed (102). The boundaries of these regions are encoded and sent to the decoder (104). The intensity values contained in these regions of the current intensity frame are also encoded and sent to the decoder. Based on the decoded region boundaries, the decoder decodes the intensity values and places them into the correct regions.
    Type: Grant
    Filed: July 24, 1995
    Date of Patent: July 8, 1997
    Assignee: Motorola Inc.
    Inventors: Taner Ozcelik, James C. Brailean, Aggelos K. Katsaggelos, Stephen N. Levine
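    A minimal sketch of the detection step, assuming a simple magnitude threshold on the DFD as the failure test; the actual detector in the patent is not reproduced here.

    ```python
    import numpy as np

    def detect_failure_regions(dfd, current_frame, threshold):
        """Mark pixels where motion compensation failed; the mask's boundaries and
        the current-frame intensities inside it are what would be coded and sent."""
        mask = np.abs(dfd) > threshold
        intensities = current_frame[mask]
        return mask, intensities
    ```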
  • Patent number: 5612745
    Abstract: The present invention provides a method and apparatus for detecting occluded areas in a video frame. A previous displacement vector field, DVF, is motion compensated (402) and used to provide an occlusion test parameter (404) which is compared to an optimal threshold (408) to detect occluded areas. The optimal threshold is calculated based on the previous DVF and a predetermined threshold (406).
    Type: Grant
    Filed: July 24, 1995
    Date of Patent: March 18, 1997
    Assignee: Motorola, Inc.
    Inventors: Taner Ozcelik, James C. Brailean, Aggelos K. Katsaggelos
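    A heavily hedged sketch of the structure only: the occlusion test parameter and the threshold adaptation below are assumptions standing in for the patented formulas.

    ```python
    import numpy as np

    def detect_occlusions(prev_dvf, compensate, base_threshold):
        """prev_dvf: (H, W, 2) displacement field; compensate: motion-compensation callable."""
        predicted = compensate(prev_dvf)                               # motion-compensated previous DVF
        test_param = np.linalg.norm(predicted - prev_dvf, axis=-1)     # assumed occlusion test parameter
        optimal_threshold = base_threshold * (1.0 + prev_dvf.std())    # assumed adaptation to the DVF
        return test_param > optimal_threshold                          # occlusion mask
    ```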
  • Patent number: 5574663
    Abstract: The present invention provides a method (300) and apparatus (100) for regenerating a dense motion vector field, which describes the motion between two temporally adjacent frames of a video sequence, utilizing a previous dense motion vector field. In this method, a spatial DVF and a temporal DVF are determined (302 and 304) and summed to provide a DVF prediction (306). This method and apparatus enables a dense motion vector field to be used in the encoding and decoding process of a video sequence. This is very important since a dense motion vector field provides a much higher quality prediction of the current frame as compared to the standard block matching motion estimation techniques. The problem to date with utilizing a dense motion vector field is that the information contained in a dense motion field is too large to transmit. The present invention eliminates the need to transmit any motion information.
    Type: Grant
    Filed: July 24, 1995
    Date of Patent: November 12, 1996
    Assignee: Motorola, Inc.
    Inventors: Taner Ozcelik, James C. Brailean, Aggelos K. Katsaggelos
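    A sketch of the prediction step, assuming the spatial and temporal predictors are supplied by the caller; only the summing of the two predictions, which lets the decoder regenerate the field without any transmitted motion information, comes from the abstract.

    ```python
    def predict_dvf(prev_dvf, spatial_predict, temporal_predict):
        """Predict the current dense motion vector field from the previous one."""
        spatial_dvf = spatial_predict(prev_dvf)     # e.g. smoothness-based spatial prediction
        temporal_dvf = temporal_predict(prev_dvf)   # e.g. motion-compensated previous field
        return spatial_dvf + temporal_dvf           # summed to form the DVF prediction
    ```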