Patents by Inventor Qiang Eric Li

Qiang Eric Li is named as an inventor on the following patent filings. The listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11587279
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Grant
    Filed: February 28, 2022
    Date of Patent: February 21, 2023
    Assignee: Intel Corporation
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
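
    The abstract above outlines a pipeline: capture frames, detect a sequence of facial expressions, look up an alternative avatar expression mapped to that sequence, and animate the avatar with it. A minimal Python sketch of the lookup step follows; the expression labels, the detector stub, and the mapping table are illustrative assumptions, not details taken from the patent.

        # Hypothetical sketch of mapping a detected expression sequence to an
        # alternative avatar expression. All labels and mappings are assumed.
        from collections import deque

        # Alternative avatar expressions keyed by a short sequence of detected
        # facial expressions (e.g. two quick blinks trigger "laser eyes").
        SEQUENCE_TO_AVATAR_EXPRESSION = {
            ("blink", "blink"): "laser_eyes",
            ("smile", "open_mouth"): "exaggerated_grin",
            ("frown", "frown"): "storm_cloud",
        }

        def detect_expression(frame):
            """Stand-in for a real facial-expression classifier run on one image."""
            return frame.get("expression", "neutral")

        def map_sequence_to_avatar_expression(frames, window=2):
            """Slide a window over detected expressions and return the first mapped
            alternative avatar expression, or None if no sequence matches."""
            recent = deque(maxlen=window)
            for frame in frames:
                recent.append(detect_expression(frame))
                match = SEQUENCE_TO_AVATAR_EXPRESSION.get(tuple(recent))
                if match:
                    return match
            return None

        if __name__ == "__main__":
            captured = [{"expression": "neutral"}, {"expression": "blink"}, {"expression": "blink"}]
            print(map_sequence_to_avatar_expression(captured))  # -> "laser_eyes"
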
  • Publication number: 20220237845
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: February 28, 2022
    Publication date: July 28, 2022
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Patent number: 11383144
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
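
    The core of the method above is synchronizing sensor data with video: a key stage found in time-stamped sensor readings selects the matching key frame from the captured video stream. A minimal Python sketch of that synchronization step is below; the peak-acceleration heuristic and the data layout are assumptions for illustration only.

        # Sketch of sensor/video synchronization: find the key-stage timestamp in
        # the sensor data, then pick the video frame nearest to that moment.
        def find_key_stage_timestamp(sensor_samples):
            """Treat the sample with maximum acceleration magnitude as the key stage."""
            return max(sensor_samples, key=lambda s: s["accel_magnitude"])["timestamp"]

        def select_key_frame(video_frames, key_timestamp):
            """Pick the frame whose timestamp is closest to the key-stage timestamp."""
            return min(video_frames, key=lambda f: abs(f["timestamp"] - key_timestamp))

        if __name__ == "__main__":
            sensor = [
                {"timestamp": 0.00, "accel_magnitude": 1.1},
                {"timestamp": 0.25, "accel_magnitude": 9.8},  # impact / key stage
                {"timestamp": 0.50, "accel_magnitude": 2.3},
            ]
            frames = [{"timestamp": t / 30.0, "frame_id": i} for i, t in enumerate(range(30))]
            key_ts = find_key_stage_timestamp(sensor)
            key_frame = select_key_frame(frames, key_ts)
            print(key_frame)  # frame nearest t = 0.25 s; skeletal mapping would run on it next
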
  • Patent number: 11295502
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Grant
    Filed: August 7, 2020
    Date of Patent: April 5, 2022
    Assignee: Intel Corporation
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Publication number: 20210069571
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 11, 2021
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Publication number: 20210056746
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: August 7, 2020
    Publication date: February 25, 2021
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Patent number: 10828549
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Patent number: 10740944
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: August 11, 2020
    Assignee: Intel Corporation
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Publication number: 20200193864
    Abstract: Systems and techniques for sensor-derived swing hit and direction detection are described herein. A set of sensor values may be compressed into a first lower dimension. Features may be extracted from the compressed set of sensor values. The features may be clustered into a set of clusters. A swing action may be detected based on a distance between members of the set of clusters.
    Type: Application
    Filed: September 8, 2017
    Publication date: June 18, 2020
    Inventors: Yikai Fang, Xiaofeng Tong, Lidan Zhang, Qiang Eric Li, Wenlong Li
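
    The abstract above names four steps: compress sensor values into a lower dimension, extract features, cluster them, and detect a swing from the distance between clusters. The Python sketch below walks through those steps with illustrative stand-ins (PCA via SVD, window-mean features, a tiny 2-means, and an arbitrary distance threshold); none of these specifics come from the patent.

        # Assumed pipeline: compress -> features -> cluster -> distance-based detection.
        import numpy as np

        def compress(samples, dims=2):
            """Project mean-centred sensor samples onto their top principal components."""
            centred = samples - samples.mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            return centred @ vt[:dims].T

        def extract_features(compressed, window=5):
            """Use per-window means of the compressed signal as simple features."""
            n = len(compressed) // window
            return compressed[: n * window].reshape(n, window, -1).mean(axis=1)

        def two_means(features, iters=20):
            """Tiny 2-means clustering; returns the two centroids."""
            centroids = features[[0, -1]].copy()
            for _ in range(iters):
                labels = np.argmin(
                    np.linalg.norm(features[:, None] - centroids[None], axis=2), axis=1
                )
                for k in range(2):
                    if np.any(labels == k):
                        centroids[k] = features[labels == k].mean(axis=0)
            return centroids

        def detect_swing(samples, threshold=1.0):
            """Report a swing when the two feature clusters are sufficiently separated."""
            features = extract_features(compress(samples))
            c0, c1 = two_means(features)
            return np.linalg.norm(c0 - c1) > threshold

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            idle = rng.normal(0, 0.1, size=(50, 6))        # 6-axis IMU at rest
            swing = rng.normal(3.0, 0.5, size=(50, 6))     # burst of motion
            print(detect_swing(np.vstack([idle, swing])))  # -> True
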
  • Patent number: 10540800
    Abstract: Examples of systems and methods for non-facial animation in facial performance driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li, Johnny C. Yip
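
    The method above drives a whole-body avatar animation from facial motion data alone. A minimal Python sketch of that mapping follows; the motion measures, animation names, and thresholds are hypothetical, chosen only to illustrate the idea.

        # Hypothetical mapping from per-frame facial motion data to a body animation clip.
        def compute_facial_motion(frame):
            """Stand-in for a tracker returning facial motion data for one image."""
            return frame  # already {"mouth_open": ..., "head_pitch_delta": ...}

        def identify_body_animation(motion_sequence):
            """Pick a body animation clip name from aggregated facial motion data."""
            avg_mouth = sum(m["mouth_open"] for m in motion_sequence) / len(motion_sequence)
            total_nod = sum(abs(m["head_pitch_delta"]) for m in motion_sequence)
            if avg_mouth > 0.6:
                return "belly_laugh"
            if total_nod > 0.5:
                return "enthusiastic_nod"
            return "idle"

        if __name__ == "__main__":
            frames = [{"mouth_open": 0.8, "head_pitch_delta": 0.02} for _ in range(10)]
            motion = [compute_facial_motion(f) for f in frames]
            print(identify_body_animation(motion))  # -> "belly_laugh"; drives the avatar body rig
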
  • Publication number: 20190304155
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: October 26, 2018
    Publication date: October 3, 2019
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Publication number: 20180353836
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: December 30, 2016
    Publication date: December 13, 2018
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
  • Publication number: 20180300925
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: November 27, 2017
    Publication date: October 18, 2018
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Publication number: 20180204368
    Abstract: Examples of systems and methods for non-facial animation in facial performance driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
    Type: Application
    Filed: October 23, 2017
    Publication date: July 19, 2018
    Applicant: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li, Johnny C. Yip
  • Patent number: 9830728
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: November 28, 2017
    Assignee: Intel Corporation
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
  • Patent number: 9824502
    Abstract: Examples of systems and methods for three-dimensional model customization for avatar animation using a sketch image selection are generally described herein. A method for rendering a three-dimensional model may include presenting a plurality of sketch images to a user on a user interface, and receiving a selection of sketch images from the plurality of sketch images to compose a face. The method may include rendering the face as a three-dimensional model, the three-dimensional model for use as an avatar.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: November 21, 2017
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li
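
    The abstract above describes composing a face from user-selected sketch images and rendering it as a three-dimensional avatar model. The Python sketch below illustrates the selection-and-composition step; the feature categories, catalog, and render stub are assumptions, not the patent's actual parameterization.

        # Hypothetical catalog of sketch images offered per facial-feature category.
        SKETCH_CATALOG = {
            "face_shape": ["round", "oval", "square"],
            "eyes": ["narrow", "wide", "almond"],
            "nose": ["small", "straight", "broad"],
            "mouth": ["thin", "full", "smiling"],
        }

        def compose_face(selection):
            """Validate one sketch choice per category and return the composed face."""
            face = {}
            for category, options in SKETCH_CATALOG.items():
                choice = selection[category]
                if choice not in options:
                    raise ValueError(f"unknown sketch '{choice}' for {category}")
                face[category] = choice
            return face

        def render_three_d_model(face):
            """Stand-in for the renderer: turn the composed face into avatar parameters."""
            return {"avatar_mesh": "base_head", "morph_targets": face}

        if __name__ == "__main__":
            picks = {"face_shape": "oval", "eyes": "wide", "nose": "small", "mouth": "smiling"}
            print(render_three_d_model(compose_face(picks)))
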
  • Patent number: 9799133
    Abstract: Examples of systems and methods for non-facial animation in facial performance driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: October 24, 2017
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li, Johnny C. Yip
  • Publication number: 20170111616
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li
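
    The system above replaces live video with an animated avatar by transmitting only an avatar selection and per-frame avatar parameters derived from facial features. A minimal Python sketch of the sender side follows; the feature names, payload format, and send stub are illustrative assumptions.

        # Hypothetical sender loop: facial features become compact avatar parameters.
        import json

        def extract_facial_features(frame):
            """Stand-in for face detection + feature extraction on one captured image."""
            return {"mouth_open": frame.get("mouth_open", 0.0),
                    "left_eye_open": frame.get("left_eye_open", 1.0),
                    "head_yaw": frame.get("head_yaw", 0.0)}

        def features_to_avatar_parameters(features):
            """Convert raw facial features into the parameters the avatar rig consumes."""
            return {"jaw": round(features["mouth_open"], 2),
                    "blink": round(1.0 - features["left_eye_open"], 2),
                    "turn": round(features["head_yaw"], 2)}

        def send(payload):
            """Stand-in for the network layer; a real system would use a socket or RTP."""
            print("sending", json.dumps(payload))

        def run_sender(avatar_id, captured_frames):
            send({"avatar_selection": avatar_id})       # sent once at call setup
            for frame in captured_frames:
                params = features_to_avatar_parameters(extract_facial_features(frame))
                send({"avatar_parameters": params})     # tiny per-frame payload instead of video

        if __name__ == "__main__":
            run_sender("fox_avatar", [{"mouth_open": 0.4, "head_yaw": 5.0}])
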
  • Publication number: 20170111615
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li
  • Publication number: 20170054945
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
    Type: Application
    Filed: June 16, 2016
    Publication date: February 23, 2017
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Qiang Eric Li, Yimin Zhang, Wei Hu, John G. Tennant, Hui A. Li