Patents by Inventor Sitaram Bhagavathy
Sitaram Bhagavathy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20130163661
Abstract: Methods and apparatus are provided for encoding video signals using example-based data pruning for improved video compression efficiency. An apparatus for encoding a picture in a video sequence includes a patch library creator for creating a first patch library from an original version of the picture and a second patch library from a reconstructed version of the picture. Each patch library includes a plurality of high-resolution replacement patches for replacing one or more pruned blocks during recovery of a pruned version of the picture. The apparatus also includes a pruner for generating the pruned version of the picture from the first patch library, and a metadata generator for generating, from the second patch library, metadata used to recover the pruned version of the picture. The apparatus further includes an encoder for encoding the pruned version of the picture and the metadata.
Type: Application
Filed: September 9, 2011
Publication date: June 27, 2013
Applicant: THOMSON LICENSING
Inventors: Dong-Qing Zhang, Sitaram Bhagavathy
-
Publication number: 20130163679
Abstract: Methods and apparatus are provided for decoding video signals using example-based data pruning for improved video compression efficiency. An apparatus for recovering a pruned version of a picture in a video sequence includes a divider for dividing the pruned version of the picture into a plurality of non-overlapping blocks, a metadata decoder for decoding metadata for use in recovering the pruned version of the picture, and a patch library creator for creating a patch library from a reconstructed version of the picture. The patch library includes a plurality of high-resolution replacement patches for replacing the one or more pruned blocks during recovery of the pruned version of the picture.
Type: Application
Filed: September 9, 2011
Publication date: June 27, 2013
Inventors: Dong-Qing Zhang, Sitaram Bhagavathy
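The patch-library idea behind the two applications above can be sketched in a few lines. This is a toy illustration, not the patented method: the picture is a tiny grayscale grid, the function names are invented, and "recovery" is reduced to a nearest-neighbour lookup of a pruned block in a library of patches collected from a reconstructed picture.

```python
# Toy sketch of example-based recovery via a patch library (illustrative
# names and data; not the patented encoder/decoder).

def extract_patches(picture, size):
    """Collect every size x size patch of a 2-D grayscale picture."""
    h, w = len(picture), len(picture[0])
    lib = []
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            patch = tuple(picture[y + dy][x + dx]
                          for dy in range(size) for dx in range(size))
            lib.append(patch)
    return lib

def recover_block(pruned_block, library):
    """Replace a pruned (coarsely approximated) block with the closest
    library patch under squared-error distance."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, pruned_block))
    return min(library, key=dist)

picture = [[10, 12, 80, 82],
           [11, 13, 81, 83],
           [50, 52, 20, 22],
           [51, 53, 21, 23]]
library = extract_patches(picture, 2)
# A block pruned down to a coarse approximation of the top-left patch:
recovered = recover_block((10, 12, 11, 13), library)
```

In the actual scheme the library is built identically at encoder and decoder from the reconstructed picture, so only compact metadata, not the patches themselves, has to be transmitted.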
-
Patent number: 8467626
Abstract: An automatic parameter estimation method and apparatus estimates low-level filtering parameters from one or more user-controlled high-level filtering parameters. The high-level parameters are strength and quality: strength indicates how much noise reduction is performed, and quality indicates a tolerance that controls the balance between filtering uniformity and loss of detail. The low-level parameters that can be estimated include the spatial and/or temporal neighborhood size from which pixel candidates are selected, and the thresholds used to verify the "goodness" of the spatially or temporally predicted candidate pixels. More generally, a criterion for filtering digital image data is accessed, and a value is determined for a filtering parameter based on whether that value results in the criterion being satisfied for at least a portion of a digital image.
Type: Grant
Filed: June 25, 2007
Date of Patent: June 18, 2013
Assignee: Thomson Licensing
Inventors: Sitaram Bhagavathy, Joan Llach
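The "determine a parameter value by testing whether a criterion is satisfied" idea can be sketched as a search over candidate thresholds. The criterion used here (enough similar neighbours for enough pixels) and the 1-D signal are illustrative assumptions, not the patent's definitions:

```python
# Hedged sketch: derive a low-level matching threshold from high-level
# "strength" and "quality" controls by testing candidate values against a
# criterion. The criterion itself is an invented stand-in.

def similar_neighbours(signal, i, thresh):
    return sum(1 for j, v in enumerate(signal)
               if j != i and abs(v - signal[i]) <= thresh)

def estimate_threshold(signal, strength, quality):
    """Pick the smallest threshold for which at least a `quality` fraction
    of the pixels have at least `strength` similar neighbours."""
    for thresh in range(0, 256):
        ok = sum(1 for i in range(len(signal))
                 if similar_neighbours(signal, i, thresh) >= strength)
        if ok >= quality * len(signal):
            return thresh
    return 255

signal = [10, 11, 12, 100, 101, 102]
t = estimate_threshold(signal, strength=2, quality=1.0)
```

Raising strength or quality forces a larger threshold, which is the kind of strength/quality-to-low-level-parameter mapping the abstract describes.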
-
Publication number: 20130125155
Abstract: A Context-Aware Content-Presentation system includes viewer context feedback for determining the viewer context relative to a display device. A content receiving device controls at least one parameter of video content streamed to the display device from a streaming server in accordance with the viewer context. In this way, when the viewer context allows for lower-quality video content, the content receiving device can signal the streaming server to reduce the quality of the video content, thereby saving bandwidth.
Type: Application
Filed: February 16, 2011
Publication date: May 16, 2013
Applicant: THOMSON LICENSING
Inventors: Sitaram Bhagavathy, Cristina Gomila
-
Patent number: 8391547
Abstract: A method is disclosed for detecting and locating players in soccer video frames without errors caused by artifacts. A shape-analysis-based approach identifies the players and the ball from roughly extracted foregrounds obtained by color segmentation and connected component analysis: a Euclidean distance transform extracts a skeleton for every foreground blob, a shape analysis removes false alarms (non-players and non-ball), and skeleton pruning followed by a reverse Euclidean distance transform cuts off the artifacts primarily caused by playing field lines.
Type: Grant
Filed: November 6, 2007
Date of Patent: March 5, 2013
Assignee: Thomson Licensing
Inventors: Yu Huang, Joan Llach, Sitaram Bhagavathy
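The Euclidean distance transform at the heart of that pipeline assigns each foreground pixel its distance to the nearest background pixel; skeletons are then found along the ridge of large distances. A brute-force version on a tiny binary mask, purely to show what the transform computes:

```python
import math

# Brute-force Euclidean distance transform on a tiny binary mask. A real
# player-detection pipeline would run an efficient EDT per foreground blob,
# then extract and prune skeletons; this only illustrates the transform.

def distance_transform(mask):
    h, w = len(mask), len(mask[0])
    bg = [(y, x) for y in range(h) for x in range(w) if mask[y][x] == 0]
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                out[y][x] = min(math.hypot(y - by, x - bx) for by, bx in bg)
    return out

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
dt = distance_transform(mask)
```

The blob's centre gets the largest distance, which is why distance-ridge pixels form the skeleton, and why a reverse transform from a pruned skeleton reconstructs the blob without thin line artifacts.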
-
Publication number: 20130054645
Abstract: Systems and methods of identifying media content, such as video content, that employ fingerprint matching at the level of video frames. The disclosed systems and methods can extract one or more fingerprints from a plurality of video frames included in query video content and, for each of those video frames, perform frame-level matching of the extracted fingerprints against fingerprints extracted from video frames included in a plurality of reference video content. Using the results of such frame-level fingerprint matching, the systems and methods can identify the query content in relation to an overall sequence of video frames from at least one of the reference video content items, and/or in relation to respective video frames included in a sequence of video frames from the reference content.
Type: Application
Filed: August 23, 2011
Publication date: February 28, 2013
Inventors: Sitaram Bhagavathy, Jeffrey A. Bloom, Dekun Zou, Wen Chen
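Frame-level fingerprint matching typically reduces each frame to a compact binary fingerprint and compares query fingerprints to reference fingerprints by Hamming distance. The 8-bit fingerprints and the distance cutoff below are toy stand-ins for whatever descriptor the real system uses:

```python
# Sketch of frame-level fingerprint matching (toy 8-bit fingerprints).

def hamming(a, b):
    return bin(a ^ b).count("1")

def match_frames(query_fps, reference_fps, max_dist=1):
    """For each query fingerprint, return the index of the closest
    reference frame, or None if nothing is within max_dist bits."""
    matches = []
    for q in query_fps:
        best = min(range(len(reference_fps)),
                   key=lambda i: hamming(q, reference_fps[i]))
        matches.append(best if hamming(q, reference_fps[best]) <= max_dist
                       else None)
    return matches

reference = [0b10110010, 0b01101100, 0b11100001]
query = [0b10110011, 0b00000000]   # 1 bit off frame 0; no close match
result = match_frames(query, reference)
```

A run of consecutive frame-level matches with consistent offsets is what lets the system identify the query against an overall reference sequence rather than a single frame.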
-
Publication number: 20130028330
Abstract: Methods and apparatus are provided for reducing vector quantization error through patch shifting. A method generates, from an input video sequence, one or more high-resolution replacement patches for replacing one or more low-resolution patches during reconstruction of the input video sequence. The replacement patches are generated using data corresponding to a patch spatial shifting process, which reduces jittery artifacts caused by motion-induced vector quantization error. The data is used at least to derive a patch size for the replacement patches that is greater than the patch size of the low-resolution patches, so that the replacement patches are suitable for use in the patch spatial shifting process.
Type: Application
Filed: February 1, 2011
Publication date: January 31, 2013
Inventors: Dong-Qing Zhang, Joan Llach, Sitaram Bhagavathy
-
Patent number: 8355079
Abstract: A caption detection system wherein all detected caption boxes over time for one caption area are identical, thereby reducing temporal instability and inconsistency. This is achieved by grouping candidate pixels in the 3D spatiotemporal space and generating a 3D bounding box for each caption area. 2D bounding boxes are obtained by slicing the 3D bounding boxes; because all 2D bounding boxes corresponding to a caption area are sliced from the same 3D bounding box, they are identical over time.
Type: Grant
Filed: February 9, 2010
Date of Patent: January 15, 2013
Assignee: Thomson Licensing
Inventors: Dong-Qing Zhang, Sitaram Bhagavathy
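The 3D-box-then-slice idea is easy to make concrete. In this minimal sketch (invented representation: candidate caption pixels as `(frame, y, x)` points), one 3D bounding box is fit across all frames and then sliced, so every per-frame box is identical even though the raw detections jitter:

```python
# Toy version of the 3-D spatiotemporal bounding box idea.

def bounding_box_3d(points):
    """points: iterable of (frame, y, x) candidate caption pixels."""
    ts, ys, xs = zip(*points)
    return (min(ts), max(ts)), (min(ys), max(ys)), (min(xs), max(xs))

def slice_2d_boxes(box3d):
    """Slice the 3-D box into one identical 2-D box per frame."""
    (t0, t1), ybox, xbox = box3d
    return {t: (ybox, xbox) for t in range(t0, t1 + 1)}

# Candidate caption pixels jitter slightly from frame to frame:
points = [(0, 10, 5), (0, 12, 40), (1, 11, 6), (1, 12, 39), (2, 10, 5)]
boxes = slice_2d_boxes(bounding_box_3d(points))
```

Fitting per-frame boxes independently would give three slightly different rectangles; slicing one 3D box is what removes the temporal instability.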
-
Publication number: 20120307074
Abstract: Systems and methods of objective video quality measurement that employ a reduced-reference approach. Such systems and methods can extract information pertaining to one or more features of a target video whose perceptual quality is to be measured, extract information pertaining to one or more features of a reference video, and employ one or more prediction functions involving the target and reference features to provide a measurement of the perceptual quality of the target video.
Type: Application
Filed: June 2, 2011
Publication date: December 6, 2012
Inventors: Sitaram Bhagavathy, Jeffrey A. Bloom, Dekun Zou, Ran Ding, Beibei Wang, Tao Liu, Niranjan Narvekar
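A reduced-reference measure compares a handful of extracted features rather than the full reference signal. The features (mean and variance of a toy 1-D "frame" signal) and the linear prediction function with made-up weights below are assumptions for illustration only; the publication does not specify this model:

```python
# Hedged sketch of a reduced-reference quality measure with a hypothetical
# linear prediction function mapping feature differences to a [0, 1] score.

def extract_features(frames):
    mean = sum(frames) / len(frames)
    var = sum((f - mean) ** 2 for f in frames) / len(frames)
    return [mean, var]

def predict_quality(target, reference, weights=(1.0, -0.05, -0.01)):
    ft, fr = extract_features(target), extract_features(reference)
    diffs = [abs(a - b) for a, b in zip(ft, fr)]
    score = weights[0] + sum(w * d for w, d in zip(weights[1:], diffs))
    return max(0.0, min(1.0, score))   # clamp to a [0, 1] quality score

reference = [100, 102, 101, 99]
identical = predict_quality(reference, reference)       # no degradation
degraded = predict_quality([90, 120, 80, 110], reference)
```

The point of the reduced-reference design is that only the few feature values, not the reference video itself, need to travel to the measurement point.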
-
Publication number: 20120294530
Abstract: Methods and apparatus for video object segmentation are provided, suitable for use in a super-resolution system. The method comprises alignment of frames of a video sequence, pixel alignment to generate initial foreground masks using a similarity metric, consensus filtering to generate an intermediate foreground mask, and refinement of the mask using spatio-temporal information from the video sequence. In various embodiments, the similarity metric is computed using a sum-of-squared-differences approach, a correlation, or a modified normalized correlation metric. One embodiment uses soft thresholding of the similarity metric; another applies weighting factors to certain critical frames in the consensus filtering stage.
Type: Application
Filed: January 20, 2011
Publication date: November 22, 2012
Inventors: Malavika Bhaskaranand, Sitaram Bhagavathy
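The consensus-filtering step can be sketched as a weighted per-pixel vote across the initial foreground masks. The majority-vote rule and the optional per-frame weights below are a plausible reading of the abstract, not its exact formulation:

```python
# Sketch of consensus filtering: per-frame foreground masks vote per pixel,
# optionally with extra weight on key frames; a pixel is foreground when the
# weighted vote exceeds half the total weight (an assumed rule).

def consensus_mask(masks, weights=None):
    if weights is None:
        weights = [1.0] * len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    total = sum(weights)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vote = sum(wt for m, wt in zip(masks, weights) if m[y][x])
            out[y][x] = 1 if vote > total / 2 else 0
    return out

m1 = [[1, 0], [1, 1]]
m2 = [[1, 0], [0, 1]]
m3 = [[0, 1], [1, 1]]
mask = consensus_mask([m1, m2, m3])
```

Spurious foreground pixels that appear in only one mask are voted out, which is exactly what makes the intermediate mask more stable than any single-frame estimate.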
-
Publication number: 20120294369
Abstract: Methods and apparatus are provided for sampling-based super-resolution video encoding and decoding. The encoding method receives high-resolution pictures and generates therefrom low-resolution pictures and metadata, the metadata guiding post-decoding post-processing of the low-resolution pictures; it then encodes the low-resolution pictures and the metadata using at least one encoder. The corresponding decoding method receives a bitstream, decodes low-resolution pictures and metadata from it using a decoder, and then reconstructs high-resolution pictures respectively corresponding to the low-resolution pictures using the low-resolution pictures and the metadata.
Type: Application
Filed: January 20, 2011
Publication date: November 22, 2012
Inventors: Sitaram Bhagavathy, Joan Llach, Dong-Qing Zhang
-
Publication number: 20120288015
Abstract: Methods and apparatus for data pruning for video compression using example-based super resolution are provided. In the encoding method and apparatus, patches of video are extracted from input video, grouped using a clustering method, and representative patches are packed into patch frames. The original video is downsized and sent along with the patch frames. At the decoder, patches are extracted from the patch frames to create a patch library. The regular video frames are upsized, and the low-resolution patches are replaced by patches from the library, found by searching the library using the patches in the decoded regular frames as keywords. If there is no appropriate patch, no replacement is made. A post-processing procedure enhances the spatiotemporal smoothness of the recovered video.
Type: Application
Filed: January 21, 2011
Publication date: November 15, 2012
Applicant: THOMSON LICENSING
Inventors: Dong-Qing Zhang, Sitaram Bhagavathy, Joan Llach
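The decoder-side "keyword" lookup, including the fall-through when no library patch is close enough, can be sketched directly. The dictionary representation and the error cutoff are illustrative assumptions:

```python
# Toy sketch of decoder-side patch lookup: a decoded low-resolution patch
# is the "keyword"; it is replaced by a library patch only if one is close
# enough, otherwise the original patch is kept (returned as None here).

def nearest_patch(query, library, max_err=10):
    def err(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    best = min(library, key=err)
    return library[best] if err(best) <= max_err else None

# library: representative low-res patch -> high-res replacement patch
library = {(10, 12): (10, 11, 12, 13),
           (80, 82): (80, 81, 82, 83)}

hit = nearest_patch((11, 12), library)    # close to (10, 12): replace
miss = nearest_patch((40, 50), library)   # nothing close: keep original
```

Clustering at the encoder keeps the library small (one representative per cluster), which is what makes packing the patches into a few extra frames affordable.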
-
Publication number: 20120263437
Abstract: A method and associated apparatus for using a trajectory-based technique to detect a moving object in a video sequence that incorporates human interaction through a user interface. The method comprises the steps of identifying and evaluating sets of connected components in a video frame, filtering the list of connected components by comparing features of the connected components to predetermined criteria, identifying candidate trajectories across multiple frames, evaluating the candidate trajectories to determine a selected trajectory, eliminating incorrect trajectories through use of the interface, and processing images in the video sequence responsive to the evaluating and eliminating steps.
Type: Application
Filed: December 10, 2010
Publication date: October 18, 2012
Applicant: THOMSON LICENSING
Inventors: Jesus Barcons-Palau, Sitaram Bhagavathy, Joan Llach, Dong-Qing Zhang
-
Publication number: 20120224629
Abstract: A method of object-aware video coding is provided that comprises the steps of: receiving a video sequence having a plurality of frames; selecting at least two frames; determining the total area of at least one object of interest in each of the at least two frames; comparing the total area to a threshold area; classifying each of the at least two frames as being a low object weighted frame or a high object weighted frame, low object weighted frames being frames having a total area exceeding the threshold area and high object weighted frames being frames having a total area not exceeding the threshold area; and encoding each low object weighted frame according to one encoding mode and each high object weighted frame according to a different encoding mode.
Type: Application
Filed: December 8, 2010
Publication date: September 6, 2012
Inventors: Sitaram Bhagavathy, Joan Llach, Dong-Qing Zhang, Jesus Barcons-Palau
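The classification rule above maps directly to code. The labels follow the abstract's definitions; the mode names `mode_A`/`mode_B` are hypothetical, since the publication only says the two encoding modes differ:

```python
# Direct sketch of the frame-classification rule: frames whose total
# object-of-interest area exceeds the threshold are "low object weighted",
# the rest "high object weighted", and each class gets its own mode.

def classify_frames(object_areas, threshold):
    """object_areas: total object-of-interest area per frame."""
    return ["low_object_weighted" if a > threshold else "high_object_weighted"
            for a in object_areas]

def pick_mode(label):
    # Hypothetical mode names; the abstract does not name the modes.
    return {"low_object_weighted": "mode_A",
            "high_object_weighted": "mode_B"}[label]

labels = classify_frames([500, 80, 300], threshold=200)
modes = [pick_mode(label) for label in labels]
```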
-
Publication number: 20120206610
Abstract: Systems and methods of perceptual quality monitoring for video information, communications, and entertainment that can estimate the perceptual quality of video with high accuracy and produce quality scores that better correlate with the subjective quality scores of an end user. The systems and methods can generate, from an encoded input video bitstream, estimates of one or more quality parameters relating to the video, such as the coding bit rate, the video frame rate, and the packet loss rate, and provide these estimates to a predetermined video quality estimation model. Because the estimates are generated from the encoded input video bitstream as it is being received, the systems and methods are suitable for use as QoE monitoring tools.
Type: Application
Filed: February 11, 2011
Publication date: August 16, 2012
Inventors: Beibei Wang, Dekun Zou, Ran Ding, Tao Liu, Sitaram Bhagavathy, Niranjan Narvekar, Jeffrey A. Bloom, Glenn L. Cash
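The shape of such a parametric model, bitstream-derived parameter estimates in, a mean-opinion-style score out, can be sketched as follows. The model form and every coefficient here are invented placeholders, not the publication's model:

```python
import math

# Hedged sketch of a parametric quality model fed by bitstream-derived
# estimates of bit rate, frame rate, and packet loss. The formula and
# coefficients are illustrative assumptions only.

def estimate_mos(bit_rate_kbps, frame_rate_fps, packet_loss_pct):
    base = 1.0 + 3.5 * (1 - math.exp(-bit_rate_kbps / 1000.0))
    base *= min(1.0, frame_rate_fps / 25.0)       # penalise low frame rates
    base *= math.exp(-0.3 * packet_loss_pct)      # penalise packet loss
    return max(1.0, min(5.0, base))               # clamp to the 1-5 scale

good = estimate_mos(4000, 30, 0.0)
lossy = estimate_mos(4000, 30, 5.0)
```

Because the three inputs are estimated from the bitstream as it arrives, a model like this can run continuously as a QoE monitor without decoding to pixels or needing the reference video.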
-
Patent number: 8204334
Abstract: In an implementation, a pixel is selected from a target digital image. Multiple candidate pixels, from one or more digital images, are evaluated based on their values. For the selected pixel, a corresponding set of pixels is determined from the candidates based on those evaluations and on whether a predetermined threshold number of pixels has been included in the set. A substitute value for the selected pixel is then determined based on the values of the pixels in the set. The described implementations provide adaptive pixel-based spatio-temporal filtering of images or video to reduce film grain or noise. Implementations may achieve an "even" amount of noise reduction at each pixel while preserving as much picture detail as possible by, for example, averaging each pixel with a constant number, N, of temporally and/or spatially correlated pixels.
Type: Grant
Filed: June 29, 2006
Date of Patent: June 19, 2012
Assignee: Thomson Licensing
Inventors: Sitaram Bhagavathy, Joan Llach
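The "constant number of correlated pixels" idea can be sketched for a single pixel. Selecting the N candidates closest in value is an assumed selection rule standing in for the patent's evaluation step:

```python
# Sketch of averaging a pixel with exactly N correlated candidates so that
# every pixel receives a similar amount of smoothing. The value-closeness
# selection rule is an illustrative assumption.

def filter_pixel(value, candidates, n):
    chosen = sorted(candidates, key=lambda c: abs(c - value))[:n]
    pool = [value] + chosen
    return sum(pool) / len(pool)

# Candidates could come from a spatial and/or temporal neighbourhood;
# the outliers 150 and 40 (likely different content) are never chosen:
out = filter_pixel(100, candidates=[98, 103, 150, 101, 40], n=3)
```

Always averaging over N + 1 values is what makes the noise reduction "even" across the image, while excluding dissimilar candidates preserves edges and detail.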
-
Publication number: 20120121174
Abstract: A method is disclosed for analyzing video to detect far-view scenes in sports video and to determine when certain image processing algorithms should be applied. The method comprises analyzing and classifying the fields of view of images from a video signal, creating and classifying the fields of view of sets of sequential images, and selectively applying image processing algorithms to sets of sequential images representing a particular type of field of view.
Type: Application
Filed: July 19, 2010
Publication date: May 17, 2012
Inventors: Sitaram Bhagavathy, Dong-Qing Zhang
-
Publication number: 20120114184
Abstract: The present invention concerns a method and associated apparatus for using a trajectory-based technique to detect a moving object in a video sequence, such as the ball in a soccer game. In one embodiment, the method comprises the steps of identifying and evaluating sets of connected components in a video frame, filtering the list of connected components by comparing features of the connected components to predetermined criteria, identifying candidate trajectories across multiple frames, evaluating the candidate trajectories to determine a selected trajectory, and processing images in the video sequence based at least in part upon the selected trajectory.
Type: Application
Filed: July 20, 2010
Publication date: May 10, 2012
Applicant: THOMSON LICENSING
Inventors: Jesus Barcons-Palau, Sitaram Bhagavathy, Joan Llach, Dong-Qing Zhang
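The trajectory-evaluation step in this family of applications can be sketched as scoring every way of linking per-frame candidate detections and keeping the most physically plausible one. The smoothness score (sum of squared frame-to-frame accelerations) is an illustrative assumption, not the patent's criterion:

```python
from itertools import product

# Toy sketch of trajectory-based ball detection: candidate detections per
# frame are linked into candidate trajectories, and the trajectory with the
# smallest total squared acceleration wins (an assumed scoring rule).

def smoothness(traj):
    cost = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(traj, traj[1:], traj[2:]):
        cost += (x2 - 2 * x1 + x0) ** 2 + (y2 - 2 * y1 + y0) ** 2
    return cost

def best_trajectory(candidates_per_frame):
    """candidates_per_frame: list (one per frame) of candidate (x, y)."""
    return min(product(*candidates_per_frame), key=smoothness)

# Frame 1 contains a false alarm at (50, 9) next to the true ball position:
frames = [[(10, 20)], [(12, 21), (50, 9)], [(14, 22)]]
ball = best_trajectory(frames)
```

A false alarm that survives the connected-component filtering is still rejected here because any trajectory through it requires an implausible jump; the exhaustive `product` search is only viable for toy inputs.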
-
Publication number: 20110293247
Abstract: A method for propagating user-provided foreground-background constraint information from a first video frame to subsequent frames allows extraction of moving foreground objects with minimal user interaction. Video matting is performed wherein constraints derived from user input on a first frame are propagated to subsequent frames using the estimated alpha matte of each frame. The matte of a frame is processed to arrive at a rough foreground-background segmentation, which is then used for estimating the matte of the next frame. At each frame, the propagated constraints are used by an image matting method to estimate the corresponding matte, which is in turn used to propagate the constraints to the next frame, and so on.
Type: Application
Filed: January 5, 2010
Publication date: December 1, 2011
Inventors: Sitaram Bhagavathy, Joan Llach
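The propagation loop can be sketched on a 1-D "frame" of alpha values: each matte is thresholded into definite-foreground, definite-background, and unknown constraints, which seed the matte estimate of the next frame. The thresholds and the trivial stand-in matting step are illustrative assumptions:

```python
# Toy sketch of the constraint-propagation loop for video matting.

def matte_to_constraints(alpha, fg_thresh=0.9, bg_thresh=0.1):
    """Map each alpha value to 'F', 'B', or '?' (unknown)."""
    return ["F" if a >= fg_thresh else "B" if a <= bg_thresh else "?"
            for a in alpha]

def estimate_matte(frame, constraints):
    # Stand-in for a real image-matting method: constrained pixels are
    # clamped, unknown pixels keep the (normalised) frame value.
    return [1.0 if c == "F" else 0.0 if c == "B" else v
            for v, c in zip(frame, constraints)]

frames = [[0.95, 0.5, 0.05], [0.97, 0.6, 0.02]]
constraints = ["F", "?", "B"]            # user input on the first frame
for frame in frames:
    alpha = estimate_matte(frame, constraints)
    constraints = matte_to_constraints(alpha)   # propagate to next frame
```

The user only ever marks the first frame; every later frame's constraints come from the previous matte, which is the "minimal user interaction" the abstract promises.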
-
Publication number: 20110157229
Abstract: Various implementations are described. Several implementations relate to view synthesis with heuristic view blending for 3D Video (3DV) applications. According to one aspect, at least one reference picture, or a portion thereof, is warped from at least one reference view location to a virtual view location to produce at least one warped reference. A first candidate pixel and a second candidate pixel are identified in the at least one warped reference; they are candidates for a target pixel location in a virtual picture from the virtual view location. A value for a pixel at the target pixel location is determined based on the values of the first and second candidate pixels.
Type: Application
Filed: August 28, 2009
Publication date: June 30, 2011
Inventors: Zefeng Ni, Dong Tian, Sitaram Bhagavathy, Joan Llach
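The final step, choosing a target-pixel value from two warped-reference candidates, can be sketched with one common heuristic: weight each candidate by inverse depth, so the nearer candidate dominates. This particular weighting rule is an assumption; the publication covers heuristic blending generally:

```python
# Sketch of blending two candidate pixels for one target location in the
# virtual view. Each candidate is (value, depth); the inverse-depth
# weighting is an illustrative heuristic, not the claimed method.

def blend_candidates(c1, c2):
    (v1, d1), (v2, d2) = c1, c2
    w1, w2 = 1.0 / d1, 1.0 / d2      # nearer candidates get more weight
    return (w1 * v1 + w2 * v2) / (w1 + w2)

# Candidate from view A is nearer (depth 1) than the one from view B (depth 3):
value = blend_candidates((100.0, 1.0), (40.0, 3.0))
```

Other heuristics (pick-nearest, occlusion-aware selection, angular-distance weighting) slot into the same place; the common structure is warp, gather candidates per target pixel, then resolve them to one value.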