Patents by Inventor Sitaram Bhagavathy

Sitaram Bhagavathy has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110148858
    Abstract: Several implementations relate to view synthesis with heuristic view merging for 3D Video (3DV) applications. According to one aspect, a first candidate pixel from a first warped reference view and a second candidate pixel from a second warped reference view are assessed based on at least one of a backward synthesis process to assess a quality of the first and second candidate pixels, a hole distribution around the first and second candidate pixels, or an amount of energy above a specified frequency around the first and second candidate pixels. The assessing occurs as part of merging at least the first and second warped reference views into a single synthesized view. Based on the assessing, a result is determined for a given target pixel in the single synthesized view. The result may be determining a value for the given target pixel, or marking the given target pixel as a hole.
    Type: Application
    Filed: August 28, 2009
    Publication date: June 23, 2011
    Inventors: Zefeng Ni, Dong Tian, Sitaram Bhagavathy, Joan Llach
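A minimal, hypothetical sketch of the merging heuristic this abstract describes: for each target pixel, the candidates from two warped reference views are compared using the hole count and a crude high-frequency-energy proxy in a small window, and the better-scoring candidate is kept (or the pixel is left as a hole). All names and scoring weights below are illustrative assumptions, not the patent's method.

```python
import numpy as np

HOLE = -1  # marker for pixels with no warped value

def local_score(view, y, x, radius=2):
    """Lower score = better candidate: fewer holes and less high-frequency energy nearby."""
    win = view[max(0, y - radius):y + radius + 1, max(0, x - radius):x + radius + 1]
    holes = np.count_nonzero(win == HOLE)
    valid = win[win != HOLE]
    # crude stand-in for "energy above a specified frequency": local variance
    energy = float(np.var(valid)) if valid.size > 1 else 0.0
    return holes * 10.0 + energy

def merge_views(view_a, view_b):
    """Merge two warped reference views into a single synthesized view."""
    out = np.full_like(view_a, HOLE)
    h, w = view_a.shape
    for y in range(h):
        for x in range(w):
            a, b = view_a[y, x], view_b[y, x]
            if a == HOLE and b == HOLE:
                continue                      # target pixel remains a hole
            if b == HOLE:
                out[y, x] = a
            elif a == HOLE:
                out[y, x] = b
            else:
                # both views supply a candidate: keep the better-scoring one
                out[y, x] = a if local_score(view_a, y, x) <= local_score(view_b, y, x) else b
    return out
```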
  • Publication number: 20110026607
    Abstract: The visibility of an object in a digital picture is enhanced by comparing an input video of the digital picture with stored information representative of the nature and characteristics of the object to develop object localization information that identifies and locates the object.
    Type: Application
    Filed: April 7, 2009
    Publication date: February 3, 2011
    Inventors: Sitaram Bhagavathy, Joan Llach, Yu Huang
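A hypothetical sketch of the localization step in the abstract above: the input frame is compared against a stored template standing in for the "stored information representative of the nature and characteristics of the object", and the best-matching position is reported. The sum-of-squared-differences criterion and all names are illustrative assumptions.

```python
import numpy as np

def localize_object(frame, template):
    """Return (top, left) of the window in `frame` most similar to `template` (SSD)."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            window = frame[y:y + th, x:x + tw]
            ssd = np.sum((window.astype(float) - template.astype(float)) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```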
  • Publication number: 20110026606
    Abstract: The visibility of an object in a digital picture is enhanced by comparing an input video of the digital picture with stored information representative of the nature and characteristics of the object to develop object localization information that identifies and locates the object. The visibility of the object and the region in which the object is located is enhanced by image processing and the enhanced input video is encoded.
    Type: Application
    Filed: April 7, 2009
    Publication date: February 3, 2011
    Inventors: Sitaram Bhagavathy, Joan Llach, Yu Huang
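For the enhancement step this related application adds, a hypothetical sketch: once the object has been localized (e.g. with a routine like localize_object above), the region around it is enhanced before the frame is passed to the encoder. The contrast-stretch operation here is an illustrative stand-in for the patent's image processing.

```python
import numpy as np

def enhance_region(frame, top, left, height, width, gain=1.5):
    """Boost local contrast inside the object's bounding box; illustrative only."""
    out = frame.astype(float).copy()
    region = out[top:top + height, left:left + width]
    mean = region.mean()
    out[top:top + height, left:left + width] = np.clip(mean + gain * (region - mean), 0, 255)
    return out.astype(np.uint8)
```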
  • Publication number: 20100201871
    Abstract: A caption detection system wherein all detected caption boxes over time for one caption area are identical, thereby reducing temporal instability and inconsistency. This is achieved by grouping candidate pixels in the 3D spatiotemporal space and generating a 3D bounding box for one caption area. 2D bounding boxes are obtained by slicing the 3D bounding boxes, thereby reducing temporal instability as all 2D bounding boxes corresponding to a caption area are sliced from one 3D bounding box and are therefore identical over time.
    Type: Application
    Filed: February 9, 2010
    Publication date: August 12, 2010
    Inventors: Dong-Qing Zhang, Sitaram Bhagavathy
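A hypothetical sketch of the 3D-bounding-box idea described above: candidate caption pixels from a stack of frames are grouped in (t, y, x) space, one 3D box is fitted per group, and every frame in that group's temporal span receives the same 2D box sliced from it, so the caption box cannot jitter over time. The grouping via scipy's connected-component labeling is an illustrative choice, not the patent's method.

```python
import numpy as np
from scipy import ndimage

def caption_boxes(candidate_mask):
    """candidate_mask: boolean array of shape (frames, height, width)."""
    labels, n = ndimage.label(candidate_mask)           # connected groups in 3D
    boxes = {}
    for slc in ndimage.find_objects(labels):
        if slc is None:
            continue
        t, y, x = slc                                    # 3D bounding box of one group
        box_2d = (y.start, x.start, y.stop, x.stop)      # identical for every frame
        for frame in range(t.start, t.stop):
            boxes.setdefault(frame, []).append(box_2d)
    return boxes
```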
  • Publication number: 20100098307
    Abstract: A method is disclosed for detecting and locating players in soccer video frames without errors caused by artifacts. A shape analysis-based approach identifies the players and the ball from roughly extracted foregrounds obtained by color segmentation and connected component analysis, performs a Euclidean distance transform to extract skeletons for every foreground blob, performs a shape analysis to remove false alarms (non-players and non-ball), and then performs skeleton pruning and a reverse Euclidean distance transform to cut off the artifacts primarily caused by playing field lines.
    Type: Application
    Filed: November 6, 2007
    Publication date: April 22, 2010
    Inventors: Yu Huang, Joan Llach, Sitaram Bhagavathy
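A highly simplified, hypothetical sketch of the pipeline this abstract outlines: rough foreground extraction by color thresholding, connected-component analysis, a Euclidean distance transform per blob, and a thickness test that discards blobs too thin to be a player or the ball. Skeleton extraction, pruning, and the reverse distance transform are approximated here by the thickness check; the thresholds and field color are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_players(frame_rgb, field_green=(60, 140, 60), tol=60, min_thickness=2.0):
    """Return bounding boxes of foreground blobs thick enough to be players or the ball."""
    diff = np.linalg.norm(frame_rgb.astype(float) - np.array(field_green), axis=-1)
    foreground = diff > tol                              # rough color segmentation
    labels, n = ndimage.label(foreground)                # connected components
    boxes = []
    for i, slc in enumerate(ndimage.find_objects(labels), start=1):
        blob = labels[slc] == i
        edt = ndimage.distance_transform_edt(blob)       # distance to background
        if edt.max() < min_thickness:
            continue                                     # thin artifact, e.g. a field line
        y, x = slc
        boxes.append((y.start, x.start, y.stop, x.stop))
    return boxes
```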
  • Publication number: 20090324121
    Abstract: One particular automatic parameter estimation method and apparatus estimates low level filtering parameters from one or more user controlled high-level filtering parameters. The high level filtering parameters are strength and quality, where strength indicates how much noise reduction will be performed, and quality indicates a tolerance which controls the balance between filtering uniformity and loss of detail. The low level filtering parameters that can be estimated include the spatial neighborhood and/or temporal neighborhood size from which pixel candidates are selected, and thresholds used to verify the “goodness” of the spatially or temporally predicted candidate pixels. More generally, a criterion for filtering digital image data is accessed, and a value is determined for a parameter for use in filtering digital image data, the value being determined based on whether the value results in the criterion being satisfied for at least a portion of a digital image.
    Type: Application
    Filed: June 25, 2007
    Publication date: December 31, 2009
    Applicant: THOMSON LICENSING
    Inventors: Sitaram Bhagavathy, Joan Llach
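A hypothetical sketch of mapping the two user-level controls, strength and quality, to low-level filter parameters: the neighborhood size grows with strength, and the candidate-acceptance threshold is searched until a criterion (here, at least a given fraction of pixels accepting their neighbors) is satisfied on the image. The specific mapping rules and the gradient-based criterion are illustrative assumptions, not the patent's formulas.

```python
import numpy as np

def estimate_parameters(image, strength, quality, coverage=0.9):
    """strength, quality in [0, 1]; returns (window_radius, threshold)."""
    radius = 1 + int(round(3 * strength))                # bigger window = more smoothing
    # Search for the smallest threshold that lets most pixels accept their neighbors.
    grad = np.abs(np.diff(image.astype(float), axis=1))
    for threshold in np.linspace(1, 50, 50) * (1.0 - 0.5 * quality):
        accepted = np.mean(grad <= threshold)            # fraction of "good" candidate pairs
        if accepted >= coverage:
            return radius, float(threshold)
    return radius, float(grad.max())
```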
  • Publication number: 20090304270
    Abstract: One or more implementations access a digital image containing one or more bands. Adjacent bands of the one or more bands have a difference in color resulting in a contour between the adjacent bands. The one or more implementations apply an algorithm to at least a portion of the digital image for reducing visibility of a contour. The algorithm is based on a value representing the fraction of pixels in a region of the digital image having a particular color value.
    Type: Application
    Filed: January 18, 2008
    Publication date: December 10, 2009
    Inventors: Sitaram Bhagavathy, Joan Llach, Jiefu Zhai
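A hypothetical sketch of the fraction-based idea in the abstract above: within a window straddling two adjacent bands, the fraction of pixels at each band value is measured, and pixels are probabilistically reassigned in proportion to that fraction so the hard contour becomes a smooth, dithered transition. This is purely illustrative of how such a fraction could drive the reduction, not the patent's algorithm.

```python
import numpy as np

def debanding_pass(band_image, radius=4, rng=None):
    """Soften contours between two-valued band neighborhoods by fraction-weighted dithering."""
    rng = np.random.default_rng() if rng is None else rng
    out = band_image.copy()
    h, w = band_image.shape
    for y in range(h):
        for x in range(w):
            win = band_image[max(0, y - radius):y + radius + 1,
                             max(0, x - radius):x + radius + 1]
            values = np.unique(win)
            if len(values) != 2:
                continue                       # not a two-band neighborhood
            lo, hi = values
            frac_hi = np.mean(win == hi)       # fraction of pixels at the higher value
            out[y, x] = hi if rng.random() < frac_hi else lo
    return out
```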
  • Publication number: 20090278988
    Abstract: In an implementation, a pixel is selected from a target digital image. Multiple candidate pixels, from one or more digital images, are evaluated based on values of the multiple candidate pixels. For the selected pixel, a corresponding set of pixels is determined from the multiple candidate pixels based on the evaluations of the multiple candidate pixels and on whether a predetermined threshold number of pixels have been included in the corresponding set. Further for the selected pixel, a substitute value is determined based on the values of the pixels in the corresponding set of pixels. Various implementations described provide adaptive pixel-based spatio-temporal filtering of images or video to reduce film grain or noise. Implementations may achieve an “even” amount of noise reduction at each pixel while preserving as much picture detail as possible by, for example, averaging each pixel with a constant number, N, of temporally and/or spatially correlated pixels.
    Type: Application
    Filed: June 29, 2006
    Publication date: November 12, 2009
    Inventors: Sitaram Bhagavathy, Joan Llach
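A hypothetical sketch of the constant-N averaging idea in the abstract above: for each pixel, candidate pixels from a spatio-temporal neighborhood are ranked by similarity to the current pixel, at most N of them are kept subject to a similarity threshold, and the filtered value is their average, so every pixel receives a comparable amount of noise reduction. The neighborhood shape, threshold, and names are illustrative assumptions.

```python
import numpy as np

def filter_pixel(frames, t, y, x, n_keep=8, radius=1, threshold=20.0):
    """frames: array (time, height, width); returns the filtered value at (t, y, x)."""
    center = float(frames[t, y, x])
    candidates = []
    for dt in (-1, 0, 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if (dt, dy, dx) == (0, 0, 0):
                    continue
                tt, yy, xx = t + dt, y + dy, x + dx
                if 0 <= tt < frames.shape[0] and 0 <= yy < frames.shape[1] and 0 <= xx < frames.shape[2]:
                    candidates.append(float(frames[tt, yy, xx]))
    # keep the N candidates closest in value to the center pixel, within the threshold
    candidates = sorted(candidates, key=lambda v: abs(v - center))
    kept = [v for v in candidates[:n_keep] if abs(v - center) <= threshold]
    return (center + sum(kept)) / (1 + len(kept))
```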