Patents by Inventor Mrityunjay Kumar

Mrityunjay Kumar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8913835
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames is selected based on the determined video frame clusters.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: December 16, 2014
    Assignee: Kodak Alaris Inc.
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
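
To make the abstract above concrete, here is a minimal Python sketch of the key-frame idea; it is illustrative only, not the patented implementation. A plain L1 (Lasso) fit stands in for the group-sparsity solver, temporal clusters are cut at the weakest neighbour-to-neighbour coefficient links, and `frame_features` is an assumed (N, D) array of per-frame feature vectors.

```python
import numpy as np
from sklearn.linear_model import Lasso  # plain L1 stands in for the group-sparsity step


def select_key_frames(frame_features, n_keys=3, alpha=0.1):
    """Pick representative frames from an (N, D) array of per-frame features.

    Hypothetical simplification: each frame is coded as a sparse combination
    of the other frames, coefficient magnitudes between temporal neighbours
    form an affinity, the sequence is cut at its weakest links, and the frame
    closest to each cluster mean is returned as a key frame.
    """
    X = np.asarray(frame_features, dtype=float)
    n = len(X)
    weights = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        model.fit(X[others].T, X[i])            # columns are the other frames' feature vectors
        weights[i, others] = np.abs(model.coef_)

    # Affinity between temporal neighbours; cut the sequence at the weakest links.
    affinity = np.array([weights[i, i + 1] + weights[i + 1, i] for i in range(n - 1)])
    cuts = np.sort(np.argsort(affinity)[: n_keys - 1]) + 1
    clusters = np.split(np.arange(n), cuts)

    key_frames = []
    for cluster in clusters:
        centroid = X[cluster].mean(axis=0)
        nearest = np.argmin(np.linalg.norm(X[cluster] - centroid, axis=1))
        key_frames.append(int(cluster[nearest]))
    return key_frames
```
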
  • Publication number: 20140037269
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A summary is formed based on the determined video frame clusters.
    Type: Application
    Filed: August 3, 2012
    Publication date: February 6, 2014
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Publication number: 20140037215
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. A set of key video frames is selected based on the determined video frame clusters.
    Type: Application
    Filed: August 3, 2012
    Publication date: February 6, 2014
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Publication number: 20140037216
    Abstract: A method for identifying a set of key video frames from a video sequence comprising extracting feature vectors for each video frame and applying a group sparsity algorithm to represent the feature vector for a particular video frame as a group sparse combination of the feature vectors for the other video frames. Weighting coefficients associated with the group sparse combination are analyzed to determine video frame clusters of temporally-contiguous, similar video frames. The video sequence is segmented into scenes by identifying scene boundaries based on the determined video frame clusters.
    Type: Application
    Filed: August 3, 2012
    Publication date: February 6, 2014
    Inventors: Mrityunjay Kumar, Alexander C. Loui, Bruce Harold Pillman
  • Publication number: 20130235939
    Abstract: A method for representing a video sequence including a time sequence of input video frames, the input video frames including some common scene content that is common to all of the input video frames and some dynamic scene content that changes between at least some of the input video frames. Affine transforms are determined to align the common scene content in the input video frames. A common video frame including the common scene content is determined by forming a sparse combination of a first set of basis functions. A dynamic video frame is determined for each input video frame by forming a sparse combination of a second set of basis functions, wherein the dynamic video frames can be combined with the respective affine transforms and the common video frame to provide reconstructed video frames.
    Type: Application
    Filed: March 7, 2012
    Publication date: September 12, 2013
    Inventors: Mrityunjay Kumar, Abdolreza Abdolhosseini Moghadam, Alexander C. Loui, Jiebo Luo
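
A toy Python sketch of the common/dynamic split described in the entry above, under two simplifying assumptions that do not come from the patent: the frames are already affine-aligned, and a per-pixel median stands in for the sparse-basis estimate of the common scene content.

```python
import numpy as np


def decompose_video(frames):
    """Split a stack of aligned frames into a common frame and dynamic residuals.

    Toy stand-in for the patented sparse-basis decomposition: the median image
    approximates the common scene content and the residuals carry the dynamic
    content, so common + dynamic[i] reconstructs frames[i] exactly.
    Assumes `frames` is an (N, H, W) array that is already affine-aligned.
    """
    stack = np.asarray(frames, dtype=float)
    common = np.median(stack, axis=0)   # shared background estimate
    dynamic = stack - common            # per-frame changes
    return common, dynamic
```
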
  • Publication number: 20130235275
    Abstract: A method for determining a scene boundary location dividing a first scene and a second scene in an input video sequence. The scene boundary location is determined responsive to a merit function value, which is a function of the candidate scene boundary location. The merit function value for a particular candidate scene boundary location is determined by representing the dynamic scene content for the input video frames before and after candidate scene boundary using sparse combinations of a set of basis functions, wherein the sparse combinations of the basis functions are determined by finding a sparse vector of weighting coefficients for each of the basis functions. The weighting coefficients determined for each of the input video frames are combined to determine the merit function value. The candidate scene boundary providing the smallest merit function value is designated to be the scene boundary location.
    Type: Application
    Filed: March 7, 2012
    Publication date: September 12, 2013
    Inventors: Mrityunjay Kumar, Abdolreza Abdolhosseini Moghadam, Alexander C. Loui, Jiebo Luo
  • Publication number: 20130177242
    Abstract: A method of providing a super-resolution image is disclosed. The method uses a processor to perform the following steps of acquiring a captured low-resolution image of a scene and resizing the low-resolution image to provide a high-resolution image. The method further includes computing local edge parameters including local edge orientations and local edge centers of gravity from the high-resolution image, selecting edge pixels in the high-resolution image responsive to the local edge parameters, and modifying the high-resolution image in response to the selected edge pixels to provide a super-resolution image.
    Type: Application
    Filed: January 10, 2012
    Publication date: July 11, 2013
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Wei Hao
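
A loose Python sketch of the edge-guided upscaling idea in the abstract above, not the patented method: a bicubic-style zoom gives the initial high-resolution image, Sobel gradients supply local edge strength and orientation, and an unsharp-mask correction is applied only at the selected edge pixels. The threshold and boost parameters are illustrative.

```python
import numpy as np
from scipy import ndimage


def simple_super_resolution(low_res, scale=2, edge_thresh=0.1, boost=0.5):
    """Upscale an image and sharpen only around detected edges (sketch only)."""
    img = np.asarray(low_res, dtype=float)
    high = ndimage.zoom(img, scale, order=3)      # initial high-resolution estimate

    gx = ndimage.sobel(high, axis=1)
    gy = ndimage.sobel(high, axis=0)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)              # local edge orientation; kept to mirror
                                                  # the abstract, not used further here
    edges = magnitude > edge_thresh * magnitude.max()

    blurred = ndimage.gaussian_filter(high, sigma=1.0)
    detail = high - blurred
    high[edges] += boost * detail[edges]          # sharpen only the selected edge pixels
    return high
```
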
  • Patent number: 8478062
    Abstract: A method for producing a noise-reduced digital image captured using a digital imaging system having signal-dependent noise characteristics, comprising: capturing one or more noisy digital images of a scene, wherein said at least one noisy digital image has signal-dependent noise characteristics; defining a functional relationship to relate the noisy digital images to a noise-reduced digital image, wherein the functional relationship includes at least two sets of unknown parameters, and wherein at least one of the sets of unknown parameters relates to the signal-dependent noise characteristics; defining an energy function responsive to the functional relationship which includes at least a data fidelity term to enforce similarities between the noisy digital images and the noise-reduced digital image, and a spatial fidelity term to encourage sharp edges in the noise-reduced digital image; and using an optimization process to determine a noise-reduced image responsive to the energy function.
    Type: Grant
    Filed: October 28, 2009
    Date of Patent: July 2, 2013
    Assignee: Apple Inc.
    Inventors: Mrityunjay Kumar, Rodney L. Miller
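
The energy-minimisation structure in the abstract above can be pictured with a small gradient-descent sketch. The variance model `a + b * signal` and the quadratic smoothness penalty below are stand-ins chosen for brevity; they are not the patented data-fidelity and edge-preserving spatial-fidelity terms.

```python
import numpy as np


def denoise_signal_dependent(noisy, a=1.0, b=0.01, lam=0.2, steps=200, lr=0.1):
    """Gradient descent on a toy energy: signal-dependent data term + smoothness term."""
    y = np.asarray(noisy, dtype=float)
    x = y.copy()
    var = a + b * np.clip(y, 0, None)        # signal-dependent noise variance estimate
    for _ in range(steps):
        data_grad = (x - y) / var            # gradient of the data-fidelity term
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * (data_grad - lam * lap)    # smoothness term pulls toward the local mean
    return x
```
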
  • Patent number: 8467610
    Abstract: A method for determining a video summary from a video sequence including a time sequence of video frames, comprising: defining a global feature vector representing the entire video sequence; selecting a plurality of subsets of the video frames; extracting a frame feature vector for each video frame in the selected subsets of video frames; defining a set of basis functions, wherein each basis function is associated with the frame feature vectors for the video frames in a particular subset of video frames; using a data processor to automatically determine a sparse combination of the basis functions representing the global feature vector; determining a summary set of video frames responsive to the sparse combination of the basis functions; and forming the video summary responsive to the summary set of video frames.
    Type: Grant
    Filed: October 20, 2010
    Date of Patent: June 18, 2013
    Assignee: Eastman Kodak Company
    Inventors: Mrityunjay Kumar, Zheshen Wang, Jiebo Luo
  • Patent number: 8467611
    Abstract: A method for identifying a set of key frames from a video sequence including a time sequence of video frames, the method executed at least in part by a data processor, comprising: selecting a set of video frames from the video sequence; identifying a plurality of visually homogeneous regions from each of the selected video frames; defining a set of basis functions, wherein each basis function is associated with a different visually homogeneous region; determining a feature vector for each of the selected video frames; representing each of the determined feature vectors as a sparse combination of the basis functions; for each of the determined feature vectors, determining a sparse set of video frames that contain the visually homogeneous regions corresponding to the basis functions included in the corresponding sparse combination of the basis functions; and analyzing the sparse sets of video frames to identify the set of key frames.
    Type: Grant
    Filed: December 10, 2010
    Date of Patent: June 18, 2013
    Assignee: Eastman Kodak Company
    Inventors: Mrityunjay Kumar, Zheshen Wang, Jiebo Luo
  • Patent number: 8401292
    Abstract: A method for identifying high saliency regions in a digital image, comprising: segmenting the digital image into a plurality of segmented regions; determining a saliency value for each segmented region, merging neighboring segmented regions that share a common boundary in response to determining that one or more specified merging criteria are satisfied; and designating one or more of the segmented regions to be high saliency regions. The determination of the saliency value for a segmented region includes: determining a surround region including a set of image pixels surrounding the segmented region; analyzing the image pixels in the segmented region to determine one or more segmented region attributes; analyzing the image pixels in the surround region to determine one or more corresponding surround region attributes; determining a region saliency value responsive to differences between the one or more segmented region attributes and the corresponding surround region attributes.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: March 19, 2013
    Assignee: Eastman Kodak Company
    Inventors: Minwoo Park, Alexander C. Loui, Mrityunjay Kumar
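
A simplified Python reading of the center-surround computation in the abstract above: each labelled region's mean intensity is compared with the mean over a dilated ring of surrounding pixels. The region-merging step and the patent's multiple attribute types are omitted, and `labels` is assumed to be an integer label map produced by any segmentation routine.

```python
import numpy as np
from scipy import ndimage


def region_saliency(image, labels, surround_width=5):
    """Per-region center-surround saliency: bigger difference means more salient."""
    image = np.asarray(image, dtype=float)
    saliency = {}
    for region_id in np.unique(labels):
        mask = labels == region_id
        surround = ndimage.binary_dilation(mask, iterations=surround_width) & ~mask
        if not surround.any():
            saliency[region_id] = 0.0
            continue
        saliency[region_id] = abs(image[mask].mean() - image[surround].mean())
    return saliency
```
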
  • Patent number: 8345130
    Abstract: A method for reducing noise in an image captured using a digital image sensor having pixels being arranged in a rectangular minimal repeating unit, comprising: computing first weighted pixel differences by combining first pixel differences between the pixel value of a central pixel and pixel values for nearby pixels of the first channel in a plurality of directions with corresponding local edge-responsive weighting values; computing second weighted pixel differences by combining second pixel differences between pixel values for pixels of at least one different channel in the plurality of directions with corresponding local edge-responsive weighting values; and computing a noise-reduced pixel value for the central pixel by combining the first and second weighted pixel differences with the pixel value for the central pixel.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: January 1, 2013
    Assignee: Eastman Kodak Company
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Efrain O. Morales
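
A single-channel Python sketch of the edge-responsive weighting idea above (the patent operates on a colour-filter-array mosaic and also uses cross-channel differences): differences toward the four neighbours are weighted so that flat areas average strongly while large differences at edges contribute little.

```python
import numpy as np


def edge_responsive_denoise(image, strength=1.0):
    """Noise reduction from direction-weighted pixel differences (grayscale sketch)."""
    img = np.asarray(image, dtype=float)
    corrections = np.zeros_like(img)
    total_weight = np.zeros_like(img)
    # Borders wrap around via np.roll; acceptable for a sketch.
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        diff = np.roll(img, shift, axis=axis) - img   # difference toward one neighbour
        weight = 1.0 / (1.0 + np.abs(diff))           # edge-responsive weight
        corrections += weight * diff
        total_weight += weight
    return img + strength * corrections / total_weight
```
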
  • Patent number: 8330825
    Abstract: A method for sharpening an input digital image captured using a digital camera having a zoom lens, comprising: determining a parameterized representation of lens acuity of the zoom lens as a function of at least the lens focal length and lens F/# by fitting a parameterized function to lens acuity data for the zoom lens at a plurality of lens focal lengths and lens F/#s; and using a processor to sharpen the input digital image responsive to the particular lens focal length and lens F/# corresponding to the input digital image using the parameterized representation of the lens acuity.
    Type: Grant
    Filed: February 22, 2010
    Date of Patent: December 11, 2012
    Assignee: Eastman Kodak Company
    Inventors: Mrityunjay Kumar, Bruce H. Pillman
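
One way to picture the parameterised-acuity idea above: fit a low-order polynomial acuity(focal length, F/#) to measured data, then scale an unsharp mask by the acuity predicted at the capture settings. The polynomial form and the normalized acuity score are assumptions made for illustration, not the patent's parameterisation.

```python
import numpy as np
from scipy import ndimage


def fit_acuity_model(focal_lengths, f_numbers, acuity):
    """Least-squares fit of acuity(focal_length, F/#) to a low-order polynomial."""
    fl = np.asarray(focal_lengths, dtype=float)
    fn = np.asarray(f_numbers, dtype=float)
    ac = np.asarray(acuity, dtype=float)
    A = np.column_stack([np.ones_like(fl), fl, fn, fl * fn, fl**2, fn**2])
    coeffs, *_ = np.linalg.lstsq(A, ac, rcond=None)
    return coeffs


def sharpen_for_capture(image, focal_length, f_number, coeffs):
    """Unsharp mask scaled so lower predicted acuity gets stronger sharpening."""
    features = np.array([1.0, focal_length, f_number,
                         focal_length * f_number, focal_length**2, f_number**2])
    predicted_acuity = float(features @ coeffs)          # assumed to lie roughly in [0, 1]
    amount = np.clip(1.0 - predicted_acuity, 0.0, 1.0)
    img = np.asarray(image, dtype=float)
    blurred = ndimage.gaussian_filter(img, sigma=1.0)
    return img + amount * (img - blurred)
```
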
  • Publication number: 20120275701
    Abstract: A method for identifying high saliency regions in a digital image, comprising: segmenting the digital image into a plurality of segmented regions; determining a saliency value for each segmented region, merging neighboring segmented regions that share a common boundary in response to determining that one or more specified merging criteria are satisfied; and designating one or more of the segmented regions to be high saliency regions. The determination of the saliency value for a segmented region includes: determining a surround region including a set of image pixels surrounding the segmented region; analyzing the image pixels in the segmented region to determine one or more segmented region attributes; analyzing the image pixels in the surround region to determine one or more corresponding surround region attributes; determining a region saliency value responsive to differences between the one or more segmented region attributes and the corresponding surround region attributes.
    Type: Application
    Filed: April 26, 2011
    Publication date: November 1, 2012
    Inventors: Minwoo Park, Alexander C. Loui, Mrityunjay Kumar
  • Patent number: 8295631
    Abstract: A method for reducing noise in a color image captured using a digital image sensor having pixels being arranged in a rectangular minimal repeating unit. The method comprises, for a first color channel, determining noise reduced-pixel values using a first noise reducing process that includes computing weighted pixel differences by combining the pixel differences with corresponding local edge-responsive weighting values. The method further comprises a second noise reducing process that includes computing weighted chroma differences by combining chroma differences with corresponding local edge-responsive weighting values.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: October 23, 2012
    Assignee: Eastman Kodak Company
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Efrain O. Morales
  • Patent number: 8253832
    Abstract: A method is described for forming a full-color output image from a color filter array image comprising capturing an image using an image sensor including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a rectangular minimal repeating unit wherein for a first color response, the color pixels having the first color response alternate with panchromatic pixels in at least two directions, and for each of the other color responses there is at least one row, column or diagonal of the repeating pattern that only has color pixels of the given color response and panchromatic pixels. The method further comprising, computing an interpolated panchromatic image from the color filter array image; computing an interpolated color image from the color filter array image; and forming the full color output image from the interpolated panchromatic image and the interpolated color image.
    Type: Grant
    Filed: June 9, 2009
    Date of Patent: August 28, 2012
    Assignee: OmniVision Technologies, Inc.
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Bruce H. Pillman, James A. Hamilton
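
A rough Python sketch in the spirit of the abstract above, not the patented interpolation: the panchromatic plane and each colour plane are filled in independently by normalised Gaussian interpolation, and the colour planes then borrow the panchromatic high-frequency detail. `pan_mask` and `color_masks` are assumed boolean maps of which mosaic sites hold panchromatic and colour samples.

```python
import numpy as np
from scipy import ndimage


def interpolate_channel(mosaic, mask, sigma=1.5):
    """Fill missing samples of one channel by normalised Gaussian interpolation."""
    mosaic = np.asarray(mosaic, dtype=float)
    mask = mask.astype(float)
    num = ndimage.gaussian_filter(mosaic * mask, sigma)
    den = ndimage.gaussian_filter(mask, sigma)
    return num / np.maximum(den, 1e-6)


def demosaic_pan_cfa(mosaic, pan_mask, color_masks):
    """Interpolate panchromatic and colour planes, then restore luminance detail."""
    pan = interpolate_channel(mosaic, pan_mask)
    pan_low = ndimage.gaussian_filter(pan, 1.5)
    full_color = {}
    for name, mask in color_masks.items():
        color = interpolate_channel(mosaic, mask)
        full_color[name] = color + (pan - pan_low)   # add back panchromatic detail
    return pan, full_color
```
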
  • Patent number: 8237831
    Abstract: A method of forming a full-color output image from a color filter array image having a plurality of color pixels having at least two different color responses and panchromatic pixels, comprising capturing a color filter array image using an image sensor including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit having at least three rows and three columns, the color pixels being arranged along one of the diagonals of the minimal repeating unit, and all other pixels being panchromatic pixels; computing an interpolated panchromatic image from the color filter array image; computing an interpolated color image from the color filter array image; and forming the full color output image from the interpolated panchromatic image and the interpolated color image.
    Type: Grant
    Filed: May 28, 2009
    Date of Patent: August 7, 2012
    Assignee: OmniVision Technologies, Inc.
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Bruce H. Pillman, James A. Hamilton
  • Patent number: 8224082
    Abstract: A method for forming a final digital color image with reduced motion blur, including using a processor to provide images having panchromatic pixels and color pixels corresponding to at least two color photo responses; interpolating between the panchromatic pixels and color pixels to produce a panchromatic image and a full-resolution color image; producing a full-resolution synthetic panchromatic image from the full-resolution color image; developing color correction weights in response to the synthetic panchromatic image and the panchromatic image; and using the color correction weights to modify the full-resolution color image to provide a final color digital image.
    Type: Grant
    Filed: March 10, 2009
    Date of Patent: July 17, 2012
    Assignee: OmniVision Technologies, Inc.
    Inventors: Mrityunjay Kumar, James E. Adams, Jr., Bruce H. Pillman
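
A condensed Python sketch of the correction-weight flow in the abstract above: a synthetic panchromatic image is built from the colour planes, a per-pixel ratio against the (sharper) panchromatic image gives the colour correction weights, and the colour planes are scaled by those weights. The RGB-to-panchromatic weights below are a generic luminance approximation, not values from the patent.

```python
import numpy as np


def deblur_with_pan(color, pan, rgb_to_pan=(0.30, 0.59, 0.11)):
    """Scale colour planes by a per-pixel ratio of real to synthetic panchromatic."""
    color = np.asarray(color, dtype=float)           # shape (H, W, 3)
    pan = np.asarray(pan, dtype=float)               # shape (H, W)
    synthetic_pan = color @ np.asarray(rgb_to_pan)   # synthetic panchromatic image
    weights = pan / np.maximum(synthetic_pan, 1e-6)  # colour correction weights
    return color * weights[..., None]
```
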
  • Patent number: 8213745
    Abstract: A method for modifying an input digital image having input dimensions defined by a number of input rows and input columns to form an output digital image where the number of rows or columns is reduced by one, comprising: determining an image energy map from the input image; determining a seam path responsive to the image energy map; imposing constraints on the seam path; and removing pixels along the seam path to modify the input digital image.
    Type: Grant
    Filed: October 9, 2009
    Date of Patent: July 3, 2012
    Assignee: Eastman Kodak Company
    Inventors: Mrityunjay Kumar, David D. Conger, Jiebo Luo, Rodney L. Miller
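
The seam-removal step described above follows the familiar seam-carving recipe; a standard dynamic-programming sketch is below. The patent's additional constraints on the seam path are not modelled.

```python
import numpy as np
from scipy import ndimage


def remove_one_seam(image):
    """Remove one low-energy vertical seam from a 2-D image (classic seam carving)."""
    img = np.asarray(image, dtype=float)
    energy = np.abs(ndimage.sobel(img, axis=0)) + np.abs(ndimage.sobel(img, axis=1))

    h, w = energy.shape
    cost = energy.copy()
    for row in range(1, h):                  # cumulative minimum-cost map
        left = np.roll(cost[row - 1], 1)
        left[0] = np.inf
        right = np.roll(cost[row - 1], -1)
        right[-1] = np.inf
        cost[row] += np.minimum(np.minimum(left, cost[row - 1]), right)

    seam = np.empty(h, dtype=int)            # backtrack the cheapest seam
    seam[-1] = int(np.argmin(cost[-1]))
    for row in range(h - 2, -1, -1):
        prev = seam[row + 1]
        lo, hi = max(prev - 1, 0), min(prev + 2, w)
        seam[row] = lo + int(np.argmin(cost[row, lo:hi]))

    keep = np.ones_like(img, dtype=bool)
    keep[np.arange(h), seam] = False         # drop one pixel per row along the seam
    return img[keep].reshape(h, w - 1)
```
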
  • Patent number: 8203633
    Abstract: An image sensor for capturing a color image comprising a two dimensional array of light-sensitive pixels including panchromatic pixels and color pixels having at least two different color responses, the pixels being arranged in a repeating pattern having a square minimal repeating unit having at least three rows and three columns, the color pixels being arranged along one of the diagonals of the minimal repeating unit, and all other pixels being panchromatic pixels.
    Type: Grant
    Filed: May 27, 2009
    Date of Patent: June 19, 2012
    Assignee: OmniVision Technologies, Inc.
    Inventors: James E. Adams, Jr., Mrityunjay Kumar, Bruce H. Pillman, James A. Hamilton