Patents by Inventor Onur Guleryuz
Onur Guleryuz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230108253
Abstract: A processor identifies keypoints on a hand in a two-dimensional image that is captured by a camera. A three-dimensional pose of the hand is determined using locations of the keypoints to access lookup tables (LUTs) that represent potential poses of the hand as a function of the locations of the keypoints. In some embodiments, the keypoints include locations of tips of fingers and a thumb, joints that connect phalanxes of the fingers and the thumb, palm knuckles that represent a point of attachment of the fingers and the thumb to a palm, and a wrist location that indicates a point of attachment of the hand to a forearm. Some embodiments of the LUTs represent 2D coordinates of the fingers and the thumb in corresponding finger pose planes as a function of the locations of the tips of the fingers or thumb relative to the corresponding palm knuckles.
Type: Application
Filed: December 12, 2022
Publication date: April 6, 2023
Inventor: Onur GULERYUZ
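The LUT lookup the abstract describes can be sketched roughly as below. This is a toy illustration only, not the patented method: the table contents, the 8-bin quantization, and the `build_toy_lut` / `lookup_finger_pose` helpers are hypothetical stand-ins for the per-finger LUTs indexed by keypoint locations.

```python
import numpy as np

# Hypothetical, tiny lookup table: a quantized 2D fingertip offset (relative
# to the palm knuckle) indexes 2D joint coordinates in the finger's pose
# plane. Table contents are illustrative placeholders.
LUT_BINS = 8

def build_toy_lut():
    """Fill a toy LUT with a smooth placeholder pose function."""
    lut = np.zeros((LUT_BINS, LUT_BINS, 2))
    for i in range(LUT_BINS):
        for j in range(LUT_BINS):
            # Placeholder: "pose" derived from the bin coordinates.
            lut[i, j] = (i / (LUT_BINS - 1), j / (LUT_BINS - 1))
    return lut

def lookup_finger_pose(lut, tip_xy, knuckle_xy, scale=1.0):
    """Quantize the tip offset relative to the palm knuckle and index the LUT."""
    off = (np.asarray(tip_xy) - np.asarray(knuckle_xy)) / scale  # in [0, 1)^2
    idx = np.clip((off * LUT_BINS).astype(int), 0, LUT_BINS - 1)
    return lut[idx[0], idx[1]]

lut = build_toy_lut()
pose = lookup_finger_pose(lut, tip_xy=(0.9, 0.4), knuckle_xy=(0.1, 0.1))
```

The key property being illustrated is that the 3D pose estimate reduces to table indexing once the 2D keypoints are known, so no iterative model fitting is needed at query time.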
-
Patent number: 11544871
Abstract: A processor identifies keypoints on a hand in a two-dimensional image that is captured by a camera. A three-dimensional pose of the hand is determined using locations of the keypoints to access lookup tables (LUTs) that represent potential poses of the hand as a function of the locations of the keypoints. In some embodiments, the keypoints include locations of tips of fingers and a thumb, joints that connect phalanxes of the fingers and the thumb, palm knuckles that represent a point of attachment of the fingers and the thumb to a palm, and a wrist location that indicates a point of attachment of the hand to a forearm. Some embodiments of the LUTs represent 2D coordinates of the fingers and the thumb in corresponding finger pose planes as a function of the locations of the tips of the fingers or thumb relative to the corresponding palm knuckles.
Type: Grant
Filed: August 24, 2018
Date of Patent: January 3, 2023
Assignee: GOOGLE LLC
Inventor: Onur Guleryuz
-
Patent number: 11025957
Abstract: A method for performing a transform by using a Layered Givens Transform may include: deriving at least one rotation layer, a first permutation layer, and a second permutation layer based on a given transform matrix H and an error parameter; obtaining a Layered Givens Transform (LGT) coefficient based on the rotation layer, the first permutation layer, and the second permutation layer; and performing quantization and entropy encoding with respect to the LGT coefficient. The first permutation layer and the second permutation layer may each include a permutation matrix obtained by permuting the rows of an identity matrix, and the LGT coefficient may be obtained by sequentially applying the first permutation layer, the rotation layer, and the second permutation layer.
Type: Grant
Filed: February 1, 2018
Date of Patent: June 1, 2021
Assignee: LG Electronics Inc.
Inventors: Moonmo Koo, Onur Guleryuz
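The layered structure (permutation, rotation layer, permutation) can be sketched as below. The angles and permutations here are arbitrary illustrations; the patent derives them from the target transform matrix H and an error parameter, which is omitted.

```python
import numpy as np

# Minimal sketch of applying a Layered Givens Transform: a first permutation
# layer, one rotation layer of Givens rotations on disjoint index pairs,
# then a second permutation layer. Every step is orthogonal.
def givens_layer(x, pairs, angles):
    """Rotate disjoint coordinate pairs of x by the given angles."""
    y = x.copy()
    for (i, j), t in zip(pairs, angles):  # pairs must be disjoint
        c, s = np.cos(t), np.sin(t)
        y[i], y[j] = c * x[i] - s * x[j], s * x[i] + c * x[j]
    return y

def lgt_forward(x, p1, pairs, angles, p2):
    """Apply P1, the rotation layer, then P2, in sequence."""
    y = x[p1]                           # first permutation layer
    y = givens_layer(y, pairs, angles)  # rotation layer
    return y[p2]                        # second permutation layer

x = np.array([1.0, 2.0, 3.0, 4.0])
p1 = np.array([2, 0, 3, 1])
p2 = np.array([1, 3, 0, 2])
coeffs = lgt_forward(x, p1, pairs=[(0, 1), (2, 3)],
                     angles=[np.pi / 6, np.pi / 4], p2=p2)
```

Because permutations and Givens rotations are orthogonal, the composed transform preserves the signal's norm, which is the property that lets such layers approximate an orthogonal target transform H cheaply.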
-
Patent number: 10390025
Abstract: A method of encoding a video signal includes selecting a set of base filter kernels from a filter bank; determining a prediction filter parameter based on the set of base filter kernels; performing a filtering of a reference region for a target region based on the prediction filter parameter; and predicting the target region based on the filtered reference region, wherein the prediction filter parameter includes at least one of a modulation scalar and partition information.
Type: Grant
Filed: June 25, 2015
Date of Patent: August 20, 2019
Assignee: LG ELECTRONICS INC.
Inventors: Onur Guleryuz, Shunyao Li, Sehoon Yea
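One plausible reading of the filter-bank prediction is sketched below: selected base kernels are combined under modulation scalars into a single prediction filter that is applied to the reference region. The kernels, scalars, and helper names are illustrative assumptions, not the patent's actual parameterization.

```python
import numpy as np

# Sketch: form a prediction filter as a modulation-scalar-weighted sum of
# base kernels, then filter the reference region with it before prediction.
def combine_kernels(base_kernels, modulation):
    """Weighted sum of base kernels -> one prediction filter kernel."""
    return sum(m * k for m, k in zip(modulation, base_kernels))

def filter_reference(ref, kernel):
    """'Same'-size 2D correlation of the reference region with the kernel."""
    kh, kw = kernel.shape
    pad = np.pad(ref, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(ref, dtype=float)
    for r in range(ref.shape[0]):
        for c in range(ref.shape[1]):
            out[r, c] = np.sum(pad[r:r + kh, c:c + kw] * kernel)
    return out

identity = np.zeros((3, 3)); identity[1, 1] = 1.0   # pass-through kernel
smooth = np.full((3, 3), 1.0 / 9.0)                 # averaging kernel
kernel = combine_kernels([identity, smooth], modulation=[0.5, 0.5])
ref = np.arange(16, dtype=float).reshape(4, 4)
pred = filter_reference(ref, kernel)
```

The encoder would then signal which base kernels and modulation scalars were used, rather than the full filter, keeping the prediction filter parameter compact.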
-
Publication number: 20190180473
Abstract: A processor identifies keypoints on a hand in a two-dimensional image that is captured by a camera. A three-dimensional pose of the hand is determined using locations of the keypoints to access lookup tables (LUTs) that represent potential poses of the hand as a function of the locations of the keypoints. In some embodiments, the keypoints include locations of tips of fingers and a thumb, joints that connect phalanxes of the fingers and the thumb, palm knuckles that represent a point of attachment of the fingers and the thumb to a palm, and a wrist location that indicates a point of attachment of the hand to a forearm. Some embodiments of the LUTs represent 2D coordinates of the fingers and the thumb in corresponding finger pose planes as a function of the locations of the tips of the fingers or thumb relative to the corresponding palm knuckles.
Type: Application
Filed: August 24, 2018
Publication date: June 13, 2019
Inventor: Onur GULERYUZ
-
Publication number: 20180220157
Abstract: A method for performing a transform by using a Layered Givens Transform may include: deriving at least one rotation layer, a first permutation layer, and a second permutation layer based on a given transform matrix H and an error parameter; obtaining a Layered Givens Transform (LGT) coefficient based on the rotation layer, the first permutation layer, and the second permutation layer; and performing quantization and entropy encoding with respect to the LGT coefficient. The first permutation layer and the second permutation layer may each include a permutation matrix obtained by permuting the rows of an identity matrix, and the LGT coefficient may be obtained by sequentially applying the first permutation layer, the rotation layer, and the second permutation layer.
Type: Application
Filed: February 1, 2018
Publication date: August 2, 2018
Inventors: Moonmo KOO, Onur GULERYUZ
-
Publication number: 20170310974
Abstract: A method of encoding a video signal includes selecting a set of base filter kernels from a filter bank; determining a prediction filter parameter based on the set of base filter kernels; performing a filtering of a reference region for a target region based on the prediction filter parameter; and predicting the target region based on the filtered reference region, wherein the prediction filter parameter includes at least one of a modulation scalar and partition information.
Type: Application
Filed: June 25, 2015
Publication date: October 26, 2017
Inventors: Onur GULERYUZ, Shunyao LI, Sehoon YEA
-
Patent number: 8311334
Abstract: A method and apparatus are disclosed herein for performing pattern representation, search, and/or compression. In one embodiment, the method comprises extracting one or more target patterns from a portion of an image; forming a pattern matrix based on the one or more target patterns; approximating the pattern matrix using a complexity-regularized representation derived from the pattern matrix; and sending a query to search a library of images for vectors in the query to detect, using the complexity-regularized representation, any image in the library that contains image patches similar to the one or more target patterns.
Type: Grant
Filed: September 24, 2009
Date of Patent: November 13, 2012
Assignee: NTT DoCoMo, Inc.
Inventors: Onur Guleryuz, Jana Zujovic
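The pattern-matrix idea can be sketched as below, with a truncated SVD standing in for the complexity-regularized approximation. The rank choice and the residual-based matching score are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

# Sketch: stack vectorized target patches into a pattern matrix, keep a
# low-rank subspace of it, and score candidate patches by their residual
# outside that subspace (small residual -> likely a match).
def pattern_subspace(patches, rank):
    """patches: (n_patches, dim). Return an orthonormal basis (rank, dim)."""
    _, _, vt = np.linalg.svd(patches, full_matrices=False)
    return vt[:rank]

def match_score(basis, candidate):
    """Norm of the candidate's component outside the pattern subspace."""
    proj = basis.T @ (basis @ candidate)
    return np.linalg.norm(candidate - proj)

rng = np.random.default_rng(0)
direction = rng.normal(size=16)
patches = np.outer(rng.normal(size=8), direction)  # rank-1 pattern family
basis = pattern_subspace(patches, rank=1)
in_family = 2.0 * direction          # lies in the pattern subspace
off_family = rng.normal(size=16)     # generic patch, mostly outside it
```

Truncating the rank is what makes the representation compact enough to ship in a query against a large image library.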
-
Publication number: 20100142829
Abstract: A method and apparatus are disclosed herein for performing pattern representation, search, and/or compression. In one embodiment, the method comprises extracting one or more target patterns from a portion of an image; forming a pattern matrix based on the one or more target patterns; approximating the pattern matrix using a complexity-regularized representation derived from the pattern matrix; and sending a query to search a library of images for vectors in the query to detect, using the complexity-regularized representation, any image in the library that contains image patches similar to the one or more target patterns.
Type: Application
Filed: September 24, 2009
Publication date: June 10, 2010
Inventors: Onur Guleryuz, Jana Zujovic
-
Publication number: 20080101709
Abstract: A method and apparatus are disclosed herein for spatial sparsity induced temporal prediction. In one embodiment, the method comprises: performing motion compensation to generate a first motion compensated prediction using a first block from a previously coded frame; generating a second motion compensated prediction for a second block to be coded from the first motion compensated prediction using a plurality of predictions in the spatial domain, including generating each of the plurality of predictions by generating block transform coefficients for the first block using a transform, generating predicted transform coefficients of the second block to be coded using the block transform coefficients, and performing an inverse transform on the predicted transform coefficients to create the second motion compensated prediction in the pixel domain; subtracting the second motion compensated prediction from a block in a current frame to produce a residual frame; and coding the residual frame.
Type: Application
Filed: October 30, 2007
Publication date: May 1, 2008
Inventors: Onur Guleryuz, Gang Hua
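The transform-domain prediction step can be sketched as below, using a 1D DCT for brevity and a simple zero-out-small-coefficients rule as a stand-in for the coefficient prediction; both simplifications are assumptions of this sketch, not the patent's method.

```python
import numpy as np

# Sketch: transform the motion compensated reference block, predict the
# target block's coefficients from it (here: keep only the significant
# ones, a sparsity-driven stand-in), and inverse-transform back to pixels.
def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def predict_block(ref_block, threshold):
    """Transform, keep significant coefficients, inverse transform."""
    d = dct_matrix(ref_block.size)
    coef = d @ ref_block
    coef[np.abs(coef) < threshold] = 0.0  # spatial-sparsity assumption
    return d.T @ coef

ref = np.array([1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0])
pred = predict_block(ref, threshold=0.2)
residual = ref - pred  # the encoder would code this residual
```

The point of operating in the transform domain is that typical image blocks are sparse there, so predicting a few significant coefficients captures most of the block's energy.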
-
Patent number: 7265876
Abstract: The appearance of edges in an image is improved through precise placement of subpixels within pixel cells that are located on or near edges in an image. Image data is examined to identify a “target pixel” near the edge of an object that represents the object and is adjacent to a “background pixel” that represents only background. The target pixel may represent both the object and its background or it may represent the object only. A “second pixel”, adjacent to the target pixel and representing the object, is also identified. The second pixel may represent both the object and its background or it may represent the object only. The target pixel's location with respect to the second pixel is analyzed to determine the placement of a subpixel within the target pixel cell and the placement of a subpixel within the second pixel cell, such that the edge of the object is well-defined and the density of the object is preserved.
Type: Grant
Filed: May 9, 2002
Date of Patent: September 4, 2007
Assignee: Seiko Epson Corporation
Inventors: Jincheng Huang, Onur Guleryuz, Anoop Bhattacharjya, Joseph Shu
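A 1D toy version of the idea is sketched below: each pixel cell holds three subpixel slots, and a target pixel bordering background packs its ink toward its object-side neighbor, preserving total density. The three-slot cell and the packing rule are simplifying assumptions for illustration; the patent's placement analysis is more detailed.

```python
# Sketch of edge-sharpening subpixel placement in 1D. row holds per-pixel
# coverage in [0, 1]; each cell gets `slots` on/off subpixels whose count
# matches the coverage (density preservation), packed toward the object.
def place_subpixels(row, slots=3):
    cells = []
    for i, cov in enumerate(row):
        n_on = round(cov * slots)  # keep the cell's total density
        left_bg = i > 0 and row[i - 1] == 0.0
        right_bg = i + 1 < len(row) and row[i + 1] == 0.0
        cell = [0] * slots
        if left_bg and not right_bg:       # object continues to the right:
            for k in range(n_on):          # pack ink toward the right edge
                cell[slots - 1 - k] = 1
        else:                              # interior / right-edge case:
            for k in range(n_on):          # pack ink toward the left
                cell[k] = 1
        cells.append(cell)
    return cells

row = [0.0, 0.66, 1.0, 0.33, 0.0]  # background, edge, interior, edge, background
cells = place_subpixels(row)
```

Packing the ink against the object side of each boundary cell is what sharpens the edge, while keeping the on-subpixel count tied to coverage is what preserves the object's density.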
-
Publication number: 20070160303
Abstract: A method and apparatus are disclosed herein for geometrical image representation and/or compression. In one embodiment, the method comprises creating a representation for image data that includes determining a geometric flow for the image data and performing an image processing operation on data in the representation using the geometric flow.
Type: Application
Filed: December 20, 2006
Publication date: July 12, 2007
Inventors: Onur Guleryuz, Arthur Cunha
-
Publication number: 20060285590
Abstract: A method and apparatus for non-linear prediction filtering are disclosed. In one embodiment, the method comprises performing motion compensation to generate a motion compensated prediction using a block from a previously coded frame, performing non-linear filtering on the motion compensated prediction in the transform domain with a non-linear filter as part of a fractional interpolation process to generate a motion compensated non-linear prediction, subtracting the motion compensated non-linear prediction from a block in a current frame to produce a residual frame, and coding the residual frame.
Type: Application
Filed: June 20, 2006
Publication date: December 21, 2006
Inventor: Onur Guleryuz
-
Nonlinear, in-the-loop, denoising filter for quantization noise removal for hybrid video compression
Publication number: 20060153301
Abstract: A method and apparatus are disclosed herein for using an in-the-loop denoising filter for quantization noise removal for video compression. In one embodiment, the video encoder comprises: a transform coder to apply a transform to a residual frame representing a difference between a current frame and a first prediction, the transform coder outputting a coded differential frame as an output of the video encoder; a transform decoder to generate a reconstructed residual frame in response to the coded differential frame; a first adder to create a reconstructed frame by adding the reconstructed residual frame to the first prediction; a non-linear denoising filter to filter the reconstructed frame by deriving expectations and performing denoising operations based on the expectations; and a prediction module to generate predictions, including the first prediction, based on previously decoded frames.
Type: Application
Filed: January 12, 2006
Publication date: July 13, 2006
Inventor: Onur Guleryuz
-
Publication number: 20060023942
Abstract: A color distortion correction is obtained by calculating a color shift of a first color channel with respect to a second color channel along each pixel column of a scanned calibration image. The color shift defines a correction for the color distortions and the correction is then applied to subsequent scanned images. A system for correcting color distortions and an image processing chip configured to determine the color shift are also described.
Type: Application
Filed: August 2, 2004
Publication date: February 2, 2006
Inventor: Onur Guleryuz
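The per-column calibration can be sketched as below: for each pixel column, estimate the vertical shift of one channel relative to another by maximizing correlation over candidate integer shifts, then undo that shift in later scans. Subpixel refinement and the calibration target itself are omitted; the function names are this sketch's own.

```python
import numpy as np

# Sketch: per-column color registration between two channels of a scan.
def column_shift(ref_col, shifted_col, max_shift=4):
    """Integer shift (in pixels) that best aligns shifted_col to ref_col."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.dot(ref_col, np.roll(shifted_col, s))
        if score > best_score:
            best, best_score = s, score
    return best

def correct_channel(channel, shifts):
    """Apply the per-column corrections to a channel of a scanned image."""
    out = channel.copy()
    for c, s in enumerate(shifts):
        out[:, c] = np.roll(channel[:, c], s)
    return out

rng = np.random.default_rng(1)
green = rng.normal(size=(32, 4))            # reference channel
red = np.roll(green, -2, axis=0)            # red shifted up by 2 pixels
shifts = [column_shift(green[:, c], red[:, c]) for c in range(4)]
```

Computing the shift independently per column is what lets the correction handle distortions that vary across the scan width, not just a single global misregistration.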
-
Publication number: 20050105817
Abstract: An algorithm estimates or predicts a portion x1 of an original signal represented by the vector x = [x0 x1]T, of which x0 is the known portion and x1 the unknown portion. The estimate y = [x0 x̂1]T is obtained by first forming an initial estimate y0 = [x0 0]T, that is, an initial estimate of x1, the unknown part of the original signal x. A de-noising matrix D1 is computed by applying a transform matrix to y0 and hard-thresholding coefficients using an initial threshold T0. An operation is performed using D1 to form a second signal estimate y1. The threshold may then be successively decremented by ΔT to obtain a next threshold Tn, after which a next de-noising matrix Dn+1 is computed by applying the transform matrix to yn and hard-thresholding coefficients using Tn, and an operation is performed using Dn+1 to form the next signal estimate yn+1. This loop, in which the threshold is successively reduced to form the next signal estimate, is repeated until a final threshold Tf is reached.
Type: Application
Filed: July 6, 2004
Publication date: May 19, 2005
Inventor: Onur Guleryuz
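The iterated-denoising loop can be sketched as follows, with an orthonormal DCT standing in for the transform matrix and illustrative threshold values; both are assumptions of this sketch rather than the patent's specific choices.

```python
import numpy as np

# Sketch of the estimation loop: start from y0 = [x0 0], then repeatedly
# (1) transform, (2) hard-threshold coefficients at the current threshold T,
# (3) inverse-transform, (4) re-impose the known samples x0, and decrement
# T by dt until a final threshold tf is reached.
def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def estimate_missing(x, known, t0=2.0, tf=0.1, dt=0.1):
    """x: signal with unknown entries zeroed. known: boolean mask of x0."""
    d = dct_matrix(x.size)
    y, t = x.copy(), t0
    while t >= tf:
        coef = d @ y
        coef[np.abs(coef) < t] = 0.0  # hard-threshold at current T
        y = d.T @ coef                # de-noised estimate
        y[known] = x[known]           # keep the known portion x0 exact
        t -= dt                       # decrement the threshold by dt
    return y

true = np.cos(2 * np.pi * np.arange(16) / 16)
known = np.ones(16, dtype=bool); known[5:8] = False
observed = np.where(known, true, 0.0)
recovered = estimate_missing(observed, known)
```

Starting with a high threshold keeps only the strongest transform structure, and lowering it gradually lets finer detail enter the estimate without fitting the missing-sample artifacts.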
-
Publication number: 20030210409
Abstract: The appearance of edges in an image is improved through precise placement of subpixels within pixel cells that are located on or near edges in an image. Image data is examined to identify a “target pixel” near the edge of an object that represents the object and is adjacent to a “background pixel” that represents only background. The target pixel may represent both the object and its background or it may represent the object only. A “second pixel”, adjacent to the target pixel and representing the object, is also identified. The second pixel may represent both the object and its background or it may represent the object only. The target pixel's location with respect to the second pixel is analyzed to determine the placement of a subpixel within the target pixel cell and the placement of a subpixel within the second pixel cell, such that the edge of the object is well-defined and the density of the object is preserved.
Type: Application
Filed: May 9, 2002
Publication date: November 13, 2003
Inventors: Jincheng Huang, Onur Guleryuz, Anoop Bhattacharjya, Joseph Shu