Patents by Inventor Shaohui JIAO

Shaohui JIAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240037856
    Abstract: Provided are a walkthrough view generation method, an apparatus, a device, and a storage medium.
    Type: Application
    Filed: January 29, 2022
    Publication date: February 1, 2024
    Inventors: Shaohui JIAO, Xin LIU, Yue WANG, Yongjie ZHANG
  • Publication number: 20240031518
    Abstract: A method for replacing a background in a picture, along with a device, a storage medium, and a program product, is provided in the embodiments. The method relates to image processing technologies and includes: acquiring a basic background image of a preset scene, where the basic background image is an image obtained by shooting the preset scene in advance; acquiring a captured picture obtained by currently shooting the preset scene; determining a current background image of the preset scene according to the basic background image and the captured picture; and replacing the current background image in the captured picture with a preset background image to obtain a final captured picture.
    Type: Application
    Filed: April 12, 2022
    Publication date: January 25, 2024
    Inventors: Yingyi CHEN, Shaohui JIAO
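As an illustration of the background-replacement idea this abstract describes, here is a minimal sketch, assuming NumPy image arrays; the function name and the difference threshold are hypothetical, as the patent does not specify an implementation:

```python
import numpy as np

def replace_background(captured, basic_bg, new_bg, thresh=30):
    """Replace background pixels of `captured` with `new_bg`.

    Pixels close to the pre-shot basic background image are treated as
    background; the remaining pixels (the foreground subject) are kept.
    """
    # Per-pixel difference between the current shot and the basic background.
    diff = np.abs(captured.astype(np.int16) - basic_bg.astype(np.int16))
    is_foreground = diff.max(axis=-1) > thresh  # H x W boolean mask
    out = new_bg.copy()
    out[is_foreground] = captured[is_foreground]
    return out
```

A real system would add noise filtering and morphological cleanup of the mask, but the core step (estimating the current background by comparison with the pre-shot image, then substituting a preset background) is the same.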
  • Patent number: 11383144
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: November 9, 2020
    Date of Patent: July 12, 2022
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
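One step the abstract describes, selecting a key frame from the video stream using the timestamp of the sensor data that identified the key stage, can be sketched as follows (a simplified illustration; the function name is hypothetical):

```python
def select_key_frame(frame_timestamps, key_stage_ts):
    """Return the index of the video frame closest in time to the
    sensor timestamp that marked the key stage of the activity."""
    return min(range(len(frame_timestamps)),
               key=lambda i: abs(frame_timestamps[i] - key_stage_ts))
```

The selected frame is then the input to the later stages (key-point extraction and skeletal-map generation).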
  • Publication number: 20210069571
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 11, 2021
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Patent number: 10828549
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: November 10, 2020
    Assignee: Intel Corporation
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
  • Patent number: 10776980
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: January 7, 2019
    Date of Patent: September 15, 2020
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
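The core idea, driving supplemental animation from a detected emotion state, can be sketched as below. The emotion labels, animation names, and threshold are all hypothetical; the patent does not enumerate specific animations:

```python
def choose_supplemental_animation(emotion_scores, threshold=0.6):
    """Pick an extra avatar animation from the dominant detected emotion.

    `emotion_scores` maps emotion labels to confidence values in [0, 1].
    Returns None when no emotion is detected strongly enough.
    """
    # Hypothetical emotion-to-animation mapping for illustration only.
    extras = {"joy": "confetti_burst", "anger": "steam_ears",
              "sadness": "rain_cloud", "surprise": "pop_eyes"}
    emotion, score = max(emotion_scores.items(), key=lambda kv: kv[1])
    return extras.get(emotion) if score >= threshold else None
```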
  • Patent number: 10699463
    Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
    Type: Grant
    Filed: March 17, 2016
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Qiang Li, Wenlong Li
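The mass-spring model the abstract mentions can be illustrated with a single strand of point masses whose root is pinned to the tracked head; moving the root drives the simulated motion. This is a minimal explicit-Euler sketch with hypothetical constants, not the patented method itself:

```python
import numpy as np

def step_strand(pos, vel, root, rest_len=1.0, k=50.0, damping=0.9,
                gravity=np.array([0.0, -9.8, 0.0]), dt=0.016):
    """Advance a chain of point masses (one hair strand) by one time step.

    `root` is the strand's attachment point on the tracked head.
    """
    pos = pos.copy()
    vel = vel.copy()
    pos[0] = root  # pin the root to the head position from tracking
    for i in range(1, len(pos)):
        d = pos[i] - pos[i - 1]
        dist = np.linalg.norm(d)
        # Hooke spring force restoring the segment to its rest length,
        # plus gravity; damping keeps the integration stable.
        force = -k * (dist - rest_len) * d / (dist + 1e-9) + gravity
        vel[i] = damping * (vel[i] + dt * force)
        pos[i] = pos[i] + dt * vel[i]
    return pos, vel
```

Calling this once per tracked frame makes the strand lag behind and swing with head movement, which is the visual effect being simulated.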
  • Publication number: 20190213774
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 11, 2019
    Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
  • Patent number: 10324686
    Abstract: An electronic device and an operation method therefor are provided. The electronic device may include: a display panel; an optical element; and a control unit which senses a location of the optical element, generates a 3D image via the display panel and the optical element in a state in which the display panel and the optical element overlap each other, and generates a 2D image via the display panel in a state in which the optical element is detached or separated from the display panel.
    Type: Grant
    Filed: February 13, 2015
    Date of Patent: June 18, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Shaohui Jiao, Haitao Wang, Mingcai Zhou, Tao Hong, Weiming Li, Xiying Wang, Dong Kyung Nam
  • Publication number: 20190035133
    Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
    Type: Application
    Filed: March 17, 2016
    Publication date: January 31, 2019
    Applicant: Intel Corporation
    Inventors: Shaohui JIAO, Qiang LI, Wenlong LI
  • Patent number: 10176619
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: January 8, 2019
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
  • Publication number: 20180353836
    Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: December 30, 2016
    Publication date: December 13, 2018
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
  • Publication number: 20180342098
    Abstract: Methods and apparatus relating to a unified environmental mapping framework are described. In an embodiment, Environmental Mapping (EM) logic performs one or more operations to extract illumination information for an object from an environmental map in response to a determination that the object has a diffuse surface and/or specular surface. Memory, coupled to the EM logic, stores data corresponding to the environmental map. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 25, 2015
    Publication date: November 29, 2018
    Applicant: Intel Corporation
    Inventors: Yuanzhang Chang, Shaohui Jiao, Xiaofeng Tong, Qiang Li, Wenlong Li
  • Patent number: 10142616
    Abstract: An eyeglass-less 3D display device, and a device and method that compensate for a displayed margin of error are provided. The display device acquires an image of an integral image display (IID) image captured by a single camera, and compensates for a margin of error which arises due to a discrepancy between the designed position of a micro lens array located on one surface of a 2D panel and the actual position thereof, so as to provide a high-quality 3D image.
    Type: Grant
    Filed: April 16, 2015
    Date of Patent: November 27, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Weiming Li, Mingcai Zhou, Shaohui Jiao, Tao Hong, Xiying Wang, Haitao Wang, Dong Kyung Nam
  • Patent number: 10015479
    Abstract: A three-dimensional (3D) display method includes generating N first visual images, N being a natural number greater than 1; generating M second visual images from each of the N first visual images, M being a natural number greater than 1; acquiring N visual image groups corresponding to the N first visual images, respectively, such that, for each one of the N visual image groups, the visual image group includes the M second visual images generated from the first visual image, from among the N first visual images, to which the visual image group corresponds; generating M elemental image array (EIA) images based on the N visual image groups; and time-share displaying the M EIA images.
    Type: Grant
    Filed: September 1, 2014
    Date of Patent: July 3, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Mingcai Zhou, Shandong Wang, Shaohui Jiao, Tao Hong, Weiming Li, Haitao Wang, Ji Yeun Kim
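The N-group / M-EIA structure in this abstract can be sketched as follows. Here the M second views derived from each first view are modeled trivially as (view, sub-index) pairs; a real system would render them, and all names are illustrative:

```python
def build_eia_schedule(first_views, m):
    """From each of the N first views derive M second views, group them,
    and assemble M EIA images for time-share display.

    EIA image j gathers sub-view j from every one of the N groups;
    frame t of the display shows eia_images[t % m].
    """
    # N visual image groups, each holding the M second views of one first view.
    groups = [[(v, j) for j in range(m)] for v in first_views]
    # M elemental image array (EIA) images, one per sub-view index.
    eia_images = [[group[j] for group in groups] for j in range(m)]
    return eia_images
```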
  • Patent number: 9977981
    Abstract: Provided are methods and apparatuses for calibrating a three-dimensional (3D) image in a tiled display including a display panel and a plurality of lens arrays. The method includes capturing a plurality of structured light images displayed on the display panel, calibrating a geometric model of the tiled display based on the plurality of structured light images, generating a ray model based on the calibrated geometric model of the tiled display, and rendering an image based on the ray model.
    Type: Grant
    Filed: February 20, 2015
    Date of Patent: May 22, 2018
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Weiming Li, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Tao Hong, Haitao Wang, Ji Yeun Kim
  • Publication number: 20170309216
    Abstract: Disclosed are a content display method and an electronic device. The content display method includes reducing the display brightness of some of a plurality of pixels constituting content, based on pixel information of the plurality of pixels, and displaying the content based on the reduced display brightness.
    Type: Application
    Filed: September 10, 2015
    Publication date: October 26, 2017
    Inventors: Shaohui JIAO, Haining HU
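A minimal sketch of the idea, dimming only some pixels based on per-pixel information, is below; the luminance criterion and scale factor are assumptions, since the abstract does not state which pixel information is used:

```python
import numpy as np

def dim_bright_pixels(image, luma_thresh=200, factor=0.7):
    """Reduce display brightness of selected pixels.

    Pixels whose mean channel value exceeds `luma_thresh` are scaled
    by `factor`; all other pixels are displayed unchanged.
    """
    luma = image.mean(axis=-1)
    out = image.astype(np.float32)
    out[luma > luma_thresh] *= factor
    return out.clip(0, 255).astype(np.uint8)
```

On an OLED-style display this kind of selective dimming reduces power draw while leaving darker pixels untouched.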
  • Publication number: 20170206694
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: July 30, 2015
    Publication date: July 20, 2017
    Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
  • Patent number: 9691172
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may comprise an avatar animation engine to receive a plurality of fur shell texture data maps associated with a furry avatar, and drive an avatar model to animate the furry avatar, using the plurality of fur shell texture data maps. The plurality of fur shell texture data maps may be generated through sampling of fur strands across a plurality of horizontal planes. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: June 27, 2017
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaofeng Tong, Qiang Li, Wenlong Li
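The fur shell texture maps described here, generated by sampling fur strands across a plurality of horizontal planes, can be sketched with a toy representation (texel-to-strand-length mapping and set-valued opacity maps are illustrative simplifications):

```python
def build_fur_shells(strand_heights, num_shells):
    """Sample fur strands across horizontal planes into shell maps.

    `strand_heights` maps a (u, v) texel to its strand length in [0, 1].
    Shell k marks the texels whose strand reaches the k-th sampling
    plane, so outer shells are progressively sparser.
    """
    shells = []
    for k in range(num_shells):
        plane = (k + 1) / num_shells  # height of the k-th horizontal plane
        shells.append({uv for uv, h in strand_heights.items() if h >= plane})
    return shells
```

Rendering the shells from innermost to outermost with these opacity maps produces the layered fur effect the avatar animation engine drives.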
  • Publication number: 20170171537
    Abstract: A three-dimensional (3D) display method includes generating N first visual images, N being a natural number greater than 1; generating M second visual images from each of the N first visual images, M being a natural number greater than 1; acquiring N visual image groups corresponding to the N first visual images, respectively, such that, for each one of the N visual image groups, the visual image group includes the M second visual images generated from the first visual image, from among the N first visual images, to which the visual image group corresponds; generating M elemental image array (EIA) images based on the N visual image groups; and time-share displaying the M EIA images.
    Type: Application
    Filed: September 1, 2014
    Publication date: June 15, 2017
    Inventors: Mingcai ZHOU, Shandong WANG, Shaohui JIAO, Tao HONG, Weiming LI, Haitao WANG, Ji Yeun KIM