Patents by Inventor Kuang-Man Huang

Kuang-Man Huang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240029354
    Abstract: Systems and techniques are provided for generating a texture for a three-dimensional (3D) facial model. For example, a process can include obtaining a first frame, the first frame including a first portion of a face. In some aspects, the process can include generating a 3D facial model based on the first frame and generating a first facial feature corresponding to the first portion of the face. In some examples, the process includes obtaining a second frame, the second frame including a second portion of the face. In some cases, the second portion of the face at least partially overlaps the first portion of the face, and the process includes generating a second facial feature corresponding to the second portion of the face. In some examples, the process includes combining the first facial feature with the second facial feature to generate an enhanced facial feature, wherein the combining is performed to enhance an appearance of select areas of the enhanced facial feature.
    Type: Application
    Filed: July 19, 2022
    Publication date: January 25, 2024
    Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ning BI, Ajit Deepak GUPTE
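The feature-combination step this abstract describes, blending features from two overlapping facial observations, can be sketched as a confidence-weighted blend. This is a simplified illustration, not the patented method; the function name and the use of per-pixel confidence masks are assumptions.

```python
import numpy as np

def blend_features(feat_a, feat_b, mask_a, mask_b):
    """Blend two texture/feature maps where their observations overlap.

    feat_a, feat_b: (H, W) feature maps from two frames.
    mask_a, mask_b: (H, W) confidence weights in [0, 1] marking which
    pixels each frame actually observed.
    """
    weight_sum = mask_a + mask_b
    # Avoid division by zero where neither frame observed the pixel.
    safe = np.where(weight_sum > 0, weight_sum, 1.0)
    blended = (feat_a * mask_a + feat_b * mask_b) / safe
    return np.where(weight_sum > 0, blended, 0.0)
```

Pixels seen by both frames get a weighted average; pixels seen by only one frame keep that frame's value.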
  • Publication number: 20240005607
    Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
    Type: Application
    Filed: July 17, 2023
    Publication date: January 4, 2024
    Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
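The plane-detection and segmentation step described in this family of abstracts (this abstract recurs in several related entries below) can be sketched in simplified form: fit a plane to the point cloud and keep the points that lie above it. A production system would use a robust fit such as RANSAC; the plain least-squares fit here is an illustrative assumption, as are the function name and threshold.

```python
import numpy as np

def segment_above_plane(points, eps=0.01):
    """Fit a plane z = a*x + b*y + c to a point cloud by least squares,
    then return a mask of points lying more than `eps` above the plane,
    separating the object from the supporting surface.
    points: (N, 3) array of x, y, z coordinates."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    plane_z = A @ coeffs
    return (points[:, 2] - plane_z) > eps
```

The returned mask corresponds to the segmented object; the 3D model and the refined mesh for the plane-contact region would be built from those points in later stages.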
  • Publication number: 20230410447
    Abstract: Systems and techniques are provided for generating a three-dimensional (3D) facial model. For example, a process can include obtaining at least one input image associated with a face. In some aspects, the process can include obtaining a pose for a 3D facial model associated with the face. In some examples, the process can include generating, by a machine learning model, the 3D facial model associated with the face. In some cases, one or more parameters associated with a shape component of the 3D facial model are conditioned on the pose. In some implementations, the 3D facial model is configured to vary in shape based on the pose for the 3D facial model associated with the face.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Inventors: Ke-Li CHENG, Anupama S, Kuang-Man HUANG, Chieh-Ming KUO, Avani RAO, Chiranjib CHOUDHURI, Michel Adib SARKIS, Ajit Deepak GUPTE, Ning BI
  • Publication number: 20230326136
    Abstract: Methods, systems, and apparatuses are provided to automatically reconstruct an image, such as a 3D image. For example, a computing device may obtain an image, and may apply a first trained machine learning process to the image to generate coefficient values characterizing the image in a plurality of dimensions. Further, the computing device may generate a mesh based on the coefficient values. The computing device may apply a second trained machine learning process to the coefficient values and the image to generate a displacement map. Based on the mesh and the displacement map, the computing device may generate output data characterizing an aligned mesh. The computing device may store the output data within a data repository. In some examples, the computing device provides the output data for display.
    Type: Application
    Filed: April 6, 2022
    Publication date: October 12, 2023
    Inventors: Wei-Lun CHANG, Michel Adib SARKIS, Chieh-Ming KUO, Kuang-Man HUANG
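The displacement-map step this abstract describes, refining a coarse mesh with per-vertex offsets, can be sketched as displacing each vertex along its normal. This is a minimal illustration under the assumption that the displacement map has already been sampled per vertex and that normals are unit length.

```python
import numpy as np

def displace_mesh(vertices, normals, displacement):
    """Offset each vertex along its (unit) normal by a per-vertex
    scalar sampled from a displacement map.
    vertices, normals: (N, 3); displacement: (N,)."""
    return vertices + normals * displacement[:, None]
```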
  • Patent number: 11756334
    Abstract: Systems and techniques are provided for facial expression recognition. In some examples, a system receives an image frame corresponding to a face of a person. The system also determines, based on a three-dimensional model of the face, landmark feature information associated with landmark features of the face. The system then inputs, to at least one layer of a neural network trained for facial expression recognition, the image frame and the landmark feature information. The system further determines, using the neural network, a facial expression associated with the face.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: September 12, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Peng Liu, Lei Wang, Kuang-Man Huang, Michel Adib Sarkis, Ning Bi
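The fusion step this abstract describes, feeding landmark feature information into a network alongside the image, can be sketched as late fusion: concatenate the two feature vectors and apply a classification layer. The layer shown is a single linear map for brevity; the feature sizes and names are illustrative assumptions, not the patented architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def expression_logits(image_feat, landmark_feat, w, b):
    """Late-fusion sketch: concatenate image-derived features with
    3D-landmark features, then apply one linear layer that produces a
    logit per expression class."""
    x = np.concatenate([image_feat, landmark_feat])
    return w @ x + b

# Hypothetical sizes: 8 image features, 6 landmark features, 3 classes.
w = rng.standard_normal((3, 14))
b = np.zeros(3)
logits = expression_logits(rng.standard_normal(8), rng.standard_normal(6), w, b)
```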
  • Patent number: 11748949
    Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: September 5, 2023
    Assignee: QUALCOMM Incorporated
    Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Gerhard Reitmayr, Ning Bi
  • Publication number: 20230093827
    Abstract: Disclosed are techniques for processing image data. In some aspects, a three-dimensional model can be determined corresponding to an object in an input image. Based on the three-dimensional model, an estimated focal length can be determined that corresponds to the input image. An estimated depth associated with the object in the input image can be calculated based on the estimated focal length and an input image focal length.
    Type: Application
    Filed: August 29, 2022
    Publication date: March 30, 2023
    Inventors: Kuang-Man HUANG, Michel Adib SARKIS
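The depth calculation this abstract hinges on follows the pinhole-camera relation: an object's depth is proportional to focal length times physical size over projected pixel size. A minimal sketch, with illustrative argument names:

```python
def estimate_depth(model_height, pixel_height, focal_px):
    """Pinhole-camera depth estimate: an object of physical height
    `model_height` (e.g. metres, taken from the fitted 3D model) that
    images at `pixel_height` pixels under a focal length of `focal_px`
    pixels sits at depth = focal_px * model_height / pixel_height."""
    return focal_px * model_height / pixel_height
```

With an estimated focal length from the 3D model fit and the input image's actual focal length, the same relation lets the estimated depth be rescaled accordingly.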
  • Publication number: 20230035282
    Abstract: Systems and techniques are provided for generating one or more models. For example, a process can include obtaining a plurality of input images corresponding to faces of one or more people during a training interval, along with an initial bounding value for a coefficient representing at least a portion of a facial expression. The process can include determining a value of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval. The process can include determining, from the determined values of the coefficient representing at least the portion of the facial expression for each of the plurality of input images during the training interval, an extremum value of the coefficient representing at least the portion of the facial expression during the training interval. The process can include generating an updated bounding value for the coefficient representing at least the portion of the facial expression based on the initial bounding value and the extremum value.
    Type: Application
    Filed: July 23, 2021
    Publication date: February 2, 2023
    Inventors: Kuang-Man HUANG, Min-Hui Lin, Ke-Li CHENG, Michel Adib SARKIS
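The bound-update logic this abstract describes reduces to a small rule: track the extremum of the coefficient over the training interval and widen the bound if the extremum exceeds it. A minimal sketch, with an assumed function name:

```python
def update_bound(initial_bound, coeff_values):
    """Widen a blendshape-style coefficient's upper bound if values
    observed during a training interval exceed the initial bound."""
    extremum = max(coeff_values)
    return max(initial_bound, extremum)
```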
  • Publication number: 20220343602
    Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
    Type: Application
    Filed: May 13, 2022
    Publication date: October 27, 2022
    Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
  • Publication number: 20220269879
    Abstract: Systems and techniques are provided for facial expression recognition. In some examples, a system receives an image frame corresponding to a face of a person. The system also determines, based on a three-dimensional model of the face, landmark feature information associated with landmark features of the face. The system then inputs, to at least one layer of a neural network trained for facial expression recognition, the image frame and the landmark feature information. The system further determines, using the neural network, a facial expression associated with the face.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 25, 2022
    Inventors: Peng LIU, Lei WANG, Kuang-Man HUANG, Michel Adib SARKIS, Ning BI
  • Patent number: 11361508
    Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: June 14, 2022
    Assignee: QUALCOMM Incorporated
    Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Gerhard Reitmayr, Ning Bi
  • Publication number: 20220058871
    Abstract: Techniques are provided for generating three-dimensional models of objects from one or more images or frames. For example, at least one frame of an object in a scene can be obtained. A portion of the object is positioned on a plane in the at least one frame. The plane can be detected in the at least one frame and, based on the detected plane, the object can be segmented from the plane in the at least one frame. A three-dimensional (3D) model of the object can be generated based on segmenting the object from the plane. A refined mesh can be generated for a portion of the 3D model corresponding to the portion of the object positioned on the plane.
    Type: Application
    Filed: August 20, 2020
    Publication date: February 24, 2022
    Inventors: Ke-Li CHENG, Kuang-Man HUANG, Michel Adib SARKIS, Gerhard REITMAYR, Ning BI
  • Patent number: 10956719
    Abstract: Methods, systems, and devices for image processing are described. The method may include identifying a face in a first image based on identifying one or more biometric features of the face, determining an angular direction of one or more pixels of the identified face, identifying an anchor point on the identified face, sorting each of one or more pixels of the identified face into one of a set of pixel bins based on a combination of the determined angular direction of the pixel and a distance between the pixel and the identified anchor point, and outputting an indication of authenticity associated with the face based on a number of pixels in each bin.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: March 23, 2021
    Assignee: QUALCOMM Incorporated
    Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi, Ning Bi
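The binning step this abstract describes, sorting pixels by angular direction and distance to an anchor point, can be sketched as a 2D histogram. Here the angular direction is taken to be the local intensity-gradient angle; that choice, the bin counts, and the function name are assumptions for illustration.

```python
import numpy as np

def bin_face_pixels(gray, anchor, n_angle=8, n_dist=4, max_dist=None):
    """Sort pixels into (angle, distance) bins: angle of the local
    intensity gradient at each pixel, and distance from the pixel to an
    anchor point (e.g. a facial landmark). The resulting histogram can
    be compared against live-face statistics to score authenticity."""
    gy, gx = np.gradient(gray.astype(float))
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)      # in [0, 2*pi)
    yy, xx = np.indices(gray.shape)
    dist = np.hypot(yy - anchor[0], xx - anchor[1])
    if max_dist is None:
        max_dist = dist.max() + 1e-9
    a_bin = np.minimum((angle / (2 * np.pi) * n_angle).astype(int), n_angle - 1)
    d_bin = np.minimum((dist / max_dist * n_dist).astype(int), n_dist - 1)
    hist = np.zeros((n_angle, n_dist), dtype=int)
    np.add.at(hist, (a_bin.ravel(), d_bin.ravel()), 1)
    return hist
```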
  • Publication number: 20200175260
    Abstract: Methods, systems, and devices for image processing are described. The method may include identifying a face in a first image based on identifying one or more biometric features of the face, determining an angular direction of one or more pixels of the identified face, identifying an anchor point on the identified face, sorting each of one or more pixels of the identified face into one of a set of pixel bins based on a combination of the determined angular direction of the pixel and a distance between the pixel and the identified anchor point, and outputting an indication of authenticity associated with the face based on a number of pixels in each bin.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 4, 2020
    Inventors: Ke-Li Cheng, Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi, Ning Bi
  • Patent number: 10552970
    Abstract: A depth based scanning system can be configured to determine whether pixel depth values of a depth map are within a depth range; determine a component of the depth map comprised of connected pixels each with a depth value within the depth range; replace the depth values of any pixels of the depth map that are not connected pixels; determine whether each pixel of the connected pixels of the component has at least a threshold number of neighboring pixels that have a depth value within the depth range; and for each pixel of the connected pixels of the component, if the pixel is determined to have at least the threshold number of neighboring pixels, replace its depth value with a filtered depth value that is based on the depth values of the neighboring pixels that have a depth value within the depth range.
    Type: Grant
    Filed: January 12, 2018
    Date of Patent: February 4, 2020
    Assignee: QUALCOMM Incorporated
    Inventors: Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi
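The pipeline this abstract walks through (range check, connected component, neighbor count, filtered replacement) can be sketched end to end. This is a simplified reading, assuming the largest in-range component is the one kept, out-of-component depths are zeroed, the filtered value is the mean of in-range 8-neighbors, and at least one pixel lies in range; none of those specifics are asserted by the abstract.

```python
import numpy as np
from collections import deque

def filter_depth(depth, lo, hi, min_neighbors=3):
    """Keep the largest connected component of pixels whose depth lies
    in [lo, hi], zero out the rest, then replace the depth of each kept
    pixel that has at least `min_neighbors` in-range 8-neighbors with
    the mean of those neighbors' depths."""
    h, w = depth.shape
    in_range = (depth >= lo) & (depth <= hi)
    labels = np.full(depth.shape, -1, dtype=int)
    sizes = []
    for start in zip(*np.nonzero(in_range)):
        if labels[start] != -1:
            continue
        lbl = len(sizes)
        labels[start] = lbl
        q, count = deque([start]), 0
        while q:                      # flood fill one component
            y, x = q.popleft()
            count += 1
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and in_range[ny, nx] and labels[ny, nx] == -1):
                        labels[ny, nx] = lbl
                        q.append((ny, nx))
        sizes.append(count)
    keep = labels == int(np.argmax(sizes))
    result = np.where(keep, depth, 0.0)
    # Neighbor-count smoothing pass over the kept component.
    for y, x in zip(*np.nonzero(keep)):
        neigh = [depth[y + dy, x + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w
                 and in_range[y + dy, x + dx]]
        if len(neigh) >= min_neighbors:
            result[y, x] = float(np.mean(neigh))
    return result
```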
  • Publication number: 20190220987
    Abstract: A depth based scanning system can be configured to determine whether pixel depth values of a depth map are within a depth range; determine a component of the depth map comprised of connected pixels each with a depth value within the depth range; replace the depth values of any pixels of the depth map that are not connected pixels; determine whether each pixel of the connected pixels of the component has at least a threshold number of neighboring pixels that have a depth value within the depth range; and for each pixel of the connected pixels of the component, if the pixel is determined to have at least the threshold number of neighboring pixels, replace its depth value with a filtered depth value that is based on the depth values of the neighboring pixels that have a depth value within the depth range.
    Type: Application
    Filed: January 12, 2018
    Publication date: July 18, 2019
    Inventors: Kuang-Man Huang, Michel Adib Sarkis, Yingyong Qi
  • Patent number: 9213890
    Abstract: A gesture recognition system using a skin-color based method combined with motion information to achieve real-time segmentation. A Kalman filter is used to track the centroid of the hand. The palm center, the palm bottom, and the largest distance from the palm center to the contour of the extracted hand mask are computed. The computed distance is then compared to a threshold to decide whether the current posture is "open" or "closed." In a preferred embodiment, the transition between the "open" and "closed" postures is used to decide whether the current gesture is in the "select" or "grab" state.
    Type: Grant
    Filed: September 17, 2010
    Date of Patent: December 15, 2015
    Assignee: Sony Corporation
    Inventors: Kuang-Man Huang, Ming-Chang Liu, Liangyin Yu
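The posture decision this abstract describes, comparing the largest palm-center-to-contour distance against a threshold, can be sketched directly. The function name and signature are illustrative assumptions.

```python
import numpy as np

def classify_posture(palm_center, contour_points, open_threshold):
    """Label a hand posture 'open' or 'closed' by comparing the largest
    palm-center-to-contour distance (roughly the longest extended
    finger) against a threshold."""
    d = np.hypot(*(np.asarray(contour_points, dtype=float) - palm_center).T)
    return "open" if d.max() > open_threshold else "closed"
```

Tracking the open-to-closed transitions of successive frames then yields the "select" versus "grab" state.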
  • Patent number: 9210468
    Abstract: A system and method for effectively implementing a stroboscopic visual effect with a television device includes a strobe engine that analyzes video data to create a sequence of stroboscopic images based upon motion information from the video data. The television utilizes a display manager to present the stroboscopic images and the video data on a display device during a strobe display mode. A processor device of the television typically controls the operations of the strobe engine and the display manager to implement the stroboscopic visual effect.
    Type: Grant
    Filed: March 22, 2011
    Date of Patent: December 8, 2015
    Assignee: Sony Corporation
    Inventors: Ming-Chang Liu, Kuang-Man Huang, Chuen-Chien Lee
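The motion-based compositing behind a stroboscopic effect like the one this abstract describes can be sketched as pasting moved pixels from each later frame onto the first. The per-pixel difference threshold is a simplifying assumption; a real strobe engine would use more robust motion information.

```python
import numpy as np

def strobe_composite(frames, motion_threshold=10):
    """Build one stroboscopic image from a frame sequence: start from
    the first frame and, for each later frame, paste in the pixels that
    differ from the first frame by more than a threshold, so the moving
    subject appears at several positions at once."""
    base = frames[0].astype(float)
    out = base.copy()
    for f in frames[1:]:
        moved = np.abs(f.astype(float) - base) > motion_threshold
        out[moved] = f.astype(float)[moved]
    return out
```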
  • Patent number: 9031326
    Abstract: A system for performing an image categorization procedure includes an image manager with a keypoint generator, a support region filter, an orientation filter, and a matching module. The keypoint generator computes initial descriptors for keypoints in a test image. The support region filter and the orientation filter perform respective filtering procedures upon the initial descriptors to produce filtered descriptors. The matching module compares the filtered descriptors to one or more database image sets for categorizing said test image. A processor of an electronic device typically controls the image manager to effectively perform the image categorization procedure.
    Type: Grant
    Filed: February 16, 2012
    Date of Patent: May 12, 2015
    Assignee: Sony Corporation
    Inventors: Liangyin Yu, Ming-Chang Liu, Kuang-Man Huang
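The matching step this abstract ends with, comparing filtered descriptors against database image sets, can be sketched as nearest-neighbor matching with a Lowe-style ratio filter. The ratio test stands in here for the patent's support-region and orientation filtering, which the abstract does not detail; names and parameters are illustrative.

```python
import numpy as np

def match_descriptors(test_desc, db_desc, ratio=0.8):
    """Match each test descriptor to its nearest database descriptor,
    keeping only matches whose nearest distance is clearly smaller than
    the second-nearest (ratio test). Returns (test_idx, db_idx) pairs."""
    matches = []
    for i, d in enumerate(np.asarray(test_desc, dtype=float)):
        dists = np.linalg.norm(np.asarray(db_desc, dtype=float) - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

Categorization would then score each database image set by its number of surviving matches.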
  • Patent number: 8620024
    Abstract: A gesture recognition system and method that inputs videos of a moving hand and outputs the recognized gesture states for the input sequence. In each image, the hand area is segmented from the background and used to estimate parameters of all five fingers. The system further classifies the hand image as one of the postures in the pre-defined database and applies a geometric classification algorithm to recognize the gesture. The system combines a skin color model with motion information to achieve real-time hand segmentation performance, and considers each dynamic gesture as a multi-dimensional volume and uses a geometric algorithm to classify each volume.
    Type: Grant
    Filed: September 17, 2010
    Date of Patent: December 31, 2013
    Assignee: Sony Corporation
    Inventors: Kuang-Man Huang, Ming-Chang Liu, Liangyin Yu
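The volume-based classification this abstract describes, treating each dynamic gesture as a multi-dimensional volume, can be sketched as comparing a time-by-feature matrix against per-gesture templates. The Frobenius-distance nearest-template rule and the fixed-length resampling assumption are simplifications, not the patent's geometric algorithm.

```python
import numpy as np

def classify_gesture(sequence, templates):
    """Treat a dynamic gesture as a volume (time x feature matrix) and
    assign it to the template with the smallest Frobenius distance,
    assuming all sequences were resampled to a common length."""
    seq = np.asarray(sequence, dtype=float)
    names = list(templates)
    dists = [np.linalg.norm(seq - np.asarray(templates[n], dtype=float))
             for n in names]
    return names[int(np.argmin(dists))]
```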