Patents by Inventor Xiaolu SHEN

Xiaolu SHEN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119653
    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Application
    Filed: December 19, 2023
    Publication date: April 11, 2024
    Applicant: Tahoe Research, Ltd.
    Inventors: Minje PARK, Tae-Hoon KIM, Myung-Ho JU, Jihyeon YI, Xiaolu SHEN, Lidan ZHANG, Qiang LI
  • Patent number: 11887231
    Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Grant
    Filed: October 17, 2019
    Date of Patent: January 30, 2024
    Assignee: Tahoe Research, Ltd.
    Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
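The abstract above describes mapping features from two or more sequential frames (plus previously generated avatar frames) to a single blend-shape output. The following is a minimal, hypothetical sketch of that idea as a linear regression over concatenated inputs; the function names, the linear form, and all values are illustrative, not the patented method.

```python
# Hypothetical sketch: features from temporally sequential frames, plus the
# previously generated avatar output (feedback), are concatenated and mapped
# to one blend-shape weight vector.

def multi_frame_regress(frame_feats, prev_output, weights, bias):
    """Map per-frame feature vectors + previous avatar output to blend weights.

    frame_feats: list of per-frame feature vectors (lists of floats)
    prev_output: previously generated blend-shape weights (feedback input)
    weights:     rows of coefficients over the concatenated input
    bias:        per-output offset
    """
    x = [v for feat in frame_feats for v in feat] + list(prev_output)
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# Toy example: 2 frames x 2 features + 1 previous output -> 1 blend shape.
feats = [[0.2, 0.4], [0.3, 0.5]]   # e.g. mouth-openness measurements per frame
prev = [0.1]
W = [[0.5, 0.5, 0.5, 0.5, 0.0]]    # averages the four frame features
b = [0.0]
blend = multi_frame_regress(feats, prev, W, b)  # -> [0.7]
```

In practice the regressor would be a learned model (the abstract mentions a machine learning component); the linear map here only illustrates the multi-frame input/single-output shape.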
  • Patent number: 11841935
    Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
    Type: Grant
    Filed: September 19, 2022
    Date of Patent: December 12, 2023
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
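The gesture-matching flow above (register gestures, randomly select one as a challenge, compare the user's performance) can be sketched as follows. This is an illustrative stand-in only: the pose representation, tolerance, and function names are invented, and the avatar rendering step is omitted.

```python
import random

def poses_match(performed, reference, tol=0.1):
    """Compare two pose sequences (lists of (x, y) joint tuples) pointwise."""
    if len(performed) != len(reference):
        return False
    return all(abs(a - b) <= tol
               for p, r in zip(performed, reference)
               for a, b in zip(p, r))

def select_challenge(registered, rng=None):
    """Randomly pick one registered gesture name as the authentication challenge."""
    rng = rng or random.Random()
    return rng.choice(sorted(registered))

# Registration phase: each gesture is stored as a short sequence of poses.
registered = {
    "wave": [(0.0, 1.0), (0.5, 1.0)],
    "nod":  [(0.0, 0.0), (0.0, 0.3)],
}
challenge = select_challenge(registered, random.Random(7))
# The real system renders the challenge as an animated avatar with a face;
# here we only check a captured pose sequence against the stored reference.
authenticated = poses_match(registered[challenge], registered[challenge])
```

The random selection is the security-relevant step: because the challenge gesture is not known in advance, a replayed recording of a single gesture is less likely to pass.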
  • Publication number: 20230019957
    Abstract: Example gesture matching mechanisms are disclosed herein. An example machine readable storage device or disc includes instructions that, when executed, cause programmable circuitry to at least: prompt a user to perform gestures to register the user, randomly select at least one of the gestures for authentication of the user, prompt the user to perform the at least one selected gesture, translate the gesture into an animated avatar for display at a display device, the animated avatar including a face, analyze performance of the gesture by the user, and authenticate the user based on the performance of the gesture.
    Type: Application
    Filed: September 19, 2022
    Publication date: January 19, 2023
    Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
  • Patent number: 11449592
    Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: September 20, 2022
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
  • Patent number: 11383144
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
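The sensor/video synchronization described above hinges on timestamps: a key stage is found in the sensor data, and the video frame nearest that timestamp becomes the key frame. A minimal sketch of that selection logic, with invented field names and thresholds:

```python
# Sketch of timestamp-based key-frame selection: sensor samples and video
# frames each carry timestamps; the frame closest in time to the sensor
# sample marking the key stage is chosen.

def detect_key_stage(sensor_samples, threshold):
    """Return the timestamp of the first sensor sample crossing a threshold,
    standing in for 'a key stage identified by evaluation of the sensor data'."""
    for t, value in sensor_samples:
        if value >= threshold:
            return t
    return None

def select_key_frame(frame_timestamps, key_stage_time):
    """Return the index of the video frame nearest the key-stage timestamp."""
    return min(range(len(frame_timestamps)),
               key=lambda i: abs(frame_timestamps[i] - key_stage_time))

sensors = [(0.00, 0.1), (0.05, 0.2), (0.10, 2.4), (0.15, 0.3)]  # (time, accel)
frames = [0.00, 0.033, 0.066, 0.100, 0.133]                     # frame times
t_key = detect_key_stage(sensors, threshold=2.0)                # -> 0.10
key_idx = select_key_frame(frames, t_key)                       # -> 3
```

Downstream, the abstract's skeletal-map and instructional-data steps would operate on the selected frame; those stages need a pose-estimation model and are not sketched here.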
  • Publication number: 20210069571
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 11, 2021
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Publication number: 20210026941
    Abstract: An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for the user to perform the selected gesture performed by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between a gesture performed by the user in the captured image and the selected gesture to determine whether there is a match between the gesture performed by the user and the selected gesture.
    Type: Application
    Filed: October 8, 2020
    Publication date: January 28, 2021
    Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
  • Patent number: 10828549
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Patent number: 10803157
    Abstract: A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
    Type: Grant
    Filed: March 28, 2015
    Date of Patent: October 13, 2020
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaolu Shen, Lidan Zhang, Jose E. Lorenzo, Qiang Li, Steven Holmes, Xiaofeng Tong, Yangzhou Du, Mary Smiley, Alok Mishra
  • Patent number: 10776980
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: September 15, 2020
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
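The "animation augmentation engine" above analyzes facial data for an emotion state and drives supplemental animation from it. A hypothetical sketch of that pipeline shape, with invented thresholds, feature names, and effect names:

```python
# Illustrative only: classify a coarse emotion state from two facial
# measurements, then map the state to a supplemental animation that would
# be layered on top of the ordinary avatar animation.

def classify_emotion(mouth_curve, brow_raise):
    """Toy emotion classifier over hypothetical facial measurements."""
    if mouth_curve > 0.3:
        return "happy"
    if mouth_curve < -0.3:
        return "sad"
    if brow_raise > 0.5:
        return "surprised"
    return "neutral"

# State -> supplemental effect; None means no extra animation is driven.
SUPPLEMENTAL_ANIMATION = {
    "happy": "sparkle_overlay",
    "sad": "rain_cloud_overlay",
    "surprised": "exclamation_burst",
    "neutral": None,
}

def augment(mouth_curve, brow_raise):
    state = classify_emotion(mouth_curve, brow_raise)
    return state, SUPPLEMENTAL_ANIMATION[state]
```

The key structural point from the abstract is that the emotion result *supplements* rather than replaces the base facial animation, which is why the sketch returns an overlay effect instead of modifying the avatar pose itself.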
  • Publication number: 20200051306
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Application
    Filed: October 17, 2019
    Publication date: February 13, 2020
Applicant: Intel Corporation
    Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
  • Patent number: 10475225
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Grant
    Filed: December 18, 2015
    Date of Patent: November 12, 2019
    Assignee: Intel Corporation
    Inventors: Minje Park, Tae-Hoon Kim, Myung-Ho Ju, Jihyeon Yi, Xiaolu Shen, Lidan Zhang, Qiang Li
  • Publication number: 20190213774
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 11, 2019
    Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
  • Patent number: 10176619
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: January 8, 2019
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
  • Publication number: 20180353836
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: December 30, 2016
    Publication date: December 13, 2018
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
  • Patent number: 10002308
    Abstract: Provided are a positioning method and apparatus. The positioning method includes acquiring a plurality of positioning results including positions of key points of a facial area included in an input image, respectively using a plurality of predetermined positioning models, evaluating the plurality of positioning results using an evaluation model of the positions of the key points, and updating at least one of the plurality of predetermined positioning models and the evaluation model based on a positioning result that is selected, based on a result of the evaluating, from among the plurality of positioning results.
    Type: Grant
    Filed: June 3, 2016
    Date of Patent: June 19, 2018
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Xiaolu Shen, Biao Wang, Xuetao Feng, Jae Joon Han
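The positioning method above runs several positioning models, scores their key-point proposals with an evaluation model, and selects the best result (which can then drive a model update). A minimal sketch of that select-by-evaluation loop, using stand-in callables rather than real landmark models:

```python
# Sketch of multi-model key-point positioning: each model proposes facial
# key-point locations, an evaluation model scores each proposal, and the
# best-scoring result is selected.

def best_positioning(models, image, evaluate):
    """Run every positioning model on the image; return the top-scoring
    result and its model index (higher score = better)."""
    results = [model(image) for model in models]
    scores = [evaluate(r) for r in results]
    best = max(range(len(results)), key=lambda i: scores[i])
    return results[best], best

# Toy stand-ins: "models" return candidate key-point sets; the "evaluation
# model" here simply prefers points closer to a known target.
target = [(10, 10), (20, 10)]
model_a = lambda img: [(9, 11), (21, 9)]
model_b = lambda img: [(10, 10), (20, 11)]
score = lambda pts: -sum(abs(x - tx) + abs(y - ty)
                         for (x, y), (tx, ty) in zip(pts, target))
points, idx = best_positioning([model_a, model_b], None, score)  # -> model_b
```

A real evaluation model would score plausibility without ground truth; the abstract's final step, updating a positioning model or the evaluation model from the selected result, is omitted here.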
  • Publication number: 20180060550
    Abstract: A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
    Type: Application
    Filed: March 28, 2015
    Publication date: March 1, 2018
    Applicant: Intel Corporation
    Inventors: Wenlong LI, Xiaolu SHEN, Lidan ZHANG, Jose E. LORENZO, Qiang LI, Steven HOLMES, Xiaofeng TONG, Yangzhou DU, Mary SMILEY, Alok MISHRA
  • Publication number: 20170256086
Abstract: Avatar animation systems disclosed herein provide high quality, real-time avatar animation that is based on the varying countenance of a human face. In some example embodiments, the real-time provision of high quality avatar animation is enabled, at least in part, by a multi-frame regressor that is configured to map information descriptive of facial expressions depicted in two or more images to information descriptive of a single avatar blend shape. The two or more images may be temporally sequential images. This multi-frame regressor implements a machine learning component that generates the high quality avatar animation from information descriptive of a subject's face and/or information descriptive of avatar animation frames previously generated by the multi-frame regressor.
    Type: Application
    Filed: December 18, 2015
    Publication date: September 7, 2017
Applicant: Intel Corporation
    Inventors: Minje PARK, Tae-Hoon KIM, Myung-Ho JU, Jihyeon YI, Xiaolu SHEN, Lidan ZHANG, Qiang LI
  • Publication number: 20170206694
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: July 30, 2015
    Publication date: July 20, 2017
    Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI