Patents by Inventor Shaohui JIAO
Shaohui JIAO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240037856
Abstract: Provided are a walkthrough view generation method, apparatus, device, and storage medium.
Type: Application
Filed: January 29, 2022
Publication date: February 1, 2024
Inventors: Shaohui JIAO, Xin LIU, Yue WANG, Yongjie ZHANG
-
Publication number: 20240031518
Abstract: A method for replacing a background in a picture, a device, a storage medium and a program product are provided in the embodiments, which relate to image processing technologies. The method includes: acquiring a basic background image of a preset scene, where the basic background image is an image obtained by shooting the preset scene in advance; acquiring a captured picture obtained by currently shooting the preset scene, and determining a current background image of the preset scene according to the basic background image and the captured picture; and replacing the current background image in the captured picture with a preset background image to obtain a final captured picture.
Type: Application
Filed: April 12, 2022
Publication date: January 25, 2024
Inventors: Yingyi CHEN, Shaohui Jiao
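The background-replacement steps in this abstract can be sketched roughly as follows. This is a minimal illustration, not the patented method: the per-pixel difference test and the `threshold` value are assumptions, and a production system would need a far more robust foreground/background decision.

```python
import numpy as np

def replace_background(basic_bg, captured, new_bg, threshold=30):
    """Swap the scene background in `captured` for `new_bg`.

    basic_bg : pre-shot image of the empty preset scene, shape (H, W, 3)
    captured : current shot of the same scene, shape (H, W, 3)
    new_bg   : preset replacement background, shape (H, W, 3)
    """
    # Pixels close to the pre-shot background are treated as background.
    diff = np.abs(captured.astype(np.int16) - basic_bg.astype(np.int16))
    is_background = diff.sum(axis=2) < threshold
    # Keep foreground pixels, substitute the preset background elsewhere.
    result = captured.copy()
    result[is_background] = new_bg[is_background]
    return result
```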
-
Patent number: 11383144
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Grant
Filed: November 9, 2020
Date of Patent: July 12, 2022
Assignee: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
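The timestamp-based key-frame selection described in this abstract can be sketched as below. The list-based frame representation and nearest-timestamp rule are illustrative assumptions, not the claimed implementation.

```python
def select_key_frame(frames, frame_times, key_stage_time):
    """Pick the video frame closest in time to a sensor-detected key stage.

    frames         : list of frame objects (any type)
    frame_times    : per-frame capture timestamps, same length as frames
    key_stage_time : timestamp at which sensor data identified the key stage
    """
    # The frame whose timestamp is nearest the sensor event is the key frame.
    best = min(range(len(frames)),
               key=lambda i: abs(frame_times[i] - key_stage_time))
    return frames[best]
```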
-
Publication number: 20210069571
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Application
Filed: November 9, 2020
Publication date: March 11, 2021
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
-
Patent number: 10828549
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Grant
Filed: December 30, 2016
Date of Patent: November 10, 2020
Assignee: Intel Corporation
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen, Lidan Zhang, Xiaofeng Tong, Fucen Zeng
-
Patent number: 10776980
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Grant
Filed: January 7, 2019
Date of Patent: September 15, 2020
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
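The emotion-augmentation flow in this abstract can be illustrated with a toy sketch. The emotion labels, overlay names, and list-based animation representation are all invented for illustration; the patent does not specify a mapping or an animation format.

```python
# Hypothetical emotion-to-overlay table; the real classifier and
# animation engine are not described in the abstract.
EMOTION_OVERLAYS = {
    "happy": "sparkle_particles",
    "sad": "rain_cloud",
    "angry": "steam_puffs",
}

def augment_animation(base_animation, emotion_state):
    """Supplement the avatar's base animation based on a detected emotion.

    base_animation : list of animation clip names already being played
    emotion_state  : label produced by analyzing the user's facial data
    """
    overlay = EMOTION_OVERLAYS.get(emotion_state)
    # Unknown or neutral emotions add nothing extra.
    return base_animation + ([overlay] if overlay else [])
```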
-
Patent number: 10699463
Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
Type: Grant
Filed: March 17, 2016
Date of Patent: June 30, 2020
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Qiang Li, Wenlong Li
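The head-driven mass-spring idea can be sketched as a single integration step. This is a generic explicit-Euler mass-spring update under assumed unit masses, not the patented simulation; `k`, `damping`, and the anchor layout are illustrative.

```python
def step_mass_spring(positions, velocities, anchors, k, damping, dt):
    """Advance a set of point masses tethered to moving anchors by one step.

    Each mass i is pulled toward anchors[i] (e.g. a point that moves with
    the tracked head), approximating a hair strand as a damped spring.
    Positions, velocities, and anchors are parallel lists of 1-D floats
    for simplicity; a real simulation would use 3-D vectors.
    """
    new_pos, new_vel = [], []
    for p, v, a in zip(positions, velocities, anchors):
        force = k * (a - p) - damping * v   # Hooke's law plus damping
        v = v + force * dt                  # unit mass assumed
        p = p + v * dt
        new_pos.append(p)
        new_vel.append(v)
    return new_pos, new_vel
```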
-
Publication number: 20190213774
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Application
Filed: January 7, 2019
Publication date: July 11, 2019
Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
-
Patent number: 10324686
Abstract: An electronic device and an operation method therefor are provided. The electronic device may include: a display panel; an optical element; and a control unit which senses a location of the optical element, generates a 3D image via the display panel and the optical element in a state in which the display panel and the optical element overlap each other, and generates a 2D image via the display panel in a state in which the optical element is detached or separated from the display panel.
Type: Grant
Filed: February 13, 2015
Date of Patent: June 18, 2019
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Shaohui Jiao, Haitao Wang, Mingcai Zhou, Tao Hong, Weiming Li, Xiying Wang, Dong Kyung Nam
-
Publication number: 20190035133
Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
Type: Application
Filed: March 17, 2016
Publication date: January 31, 2019
Applicant: Intel Corporation
Inventors: Shaohui JIAO, Qiang LI, Wenlong LI
-
Patent number: 10176619
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Grant
Filed: July 30, 2015
Date of Patent: January 8, 2019
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
-
Publication number: 20180353836
Abstract: System and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
Type: Application
Filed: December 30, 2016
Publication date: December 13, 2018
Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
-
Publication number: 20180342098
Abstract: Methods and apparatus relating to a unified environmental mapping framework are described. In an embodiment, Environmental Mapping (EM) logic performs one or more operations to extract illumination information for an object from an environmental map in response to a determination that the object has a diffuse surface and/or specular surface. Memory, coupled to the EM logic, stores data corresponding to the environmental map. Other embodiments are also disclosed and claimed.
Type: Application
Filed: December 25, 2015
Publication date: November 29, 2018
Applicant: Intel Corporation
Inventors: Yuanzhang Chang, Shaohui Jiao, Xiaofeng Tong, Qiang Li, Wenlong Li
-
Patent number: 10142616
Abstract: An eyeglass-less 3D display device, and a device and method that compensate for a displayed margin of error are provided. The display device acquires an image of an integral image display (IID) image captured by a single camera, and compensates for a margin of error which arises due to a discrepancy between the designed position of a micro lens array located on one surface of a 2D panel and the actual position thereof, so as to provide a high-quality 3D image.
Type: Grant
Filed: April 16, 2015
Date of Patent: November 27, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Weiming Li, Mingcai Zhou, Shaohui Jiao, Tao Hong, Xiying Wang, Haitao Wang, Dong Kyung Nam
-
Patent number: 10015479
Abstract: A three-dimensional (3D) display method includes generating N first visual images, N being a natural number greater than 1; generating M second visual images from each of the N first visual images, M being a natural number greater than 1; acquiring N visual image groups corresponding to the N first visual images, respectively, such that, for each one of the N visual image groups, the visual image group includes the M second visual images generated from the first visual image, from among the N first visual images, to which the visual image group corresponds; generating M elemental image array (EIA) images based on the N visual image groups; and time-share displaying the M EIA images.
Type: Grant
Filed: September 1, 2014
Date of Patent: July 3, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Mingcai Zhou, Shandong Wang, Shaohui Jiao, Tao Hong, Weiming Li, Haitao Wang, Ji Yeun Kim
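The N-groups-of-M structure in this abstract (N first images, each expanded into M second views, then regrouped into M time-shared frames) can be sketched as pure bookkeeping. The `derive_view` placeholder stands in for the actual EIA synthesis, which depends on lens-array geometry the abstract does not give.

```python
def build_eia_schedule(first_images, views_per_image):
    """Group N first images into N groups of M second views, then form
    M display frames (one per view index) for time-share display.

    first_images    : list of N image handles
    views_per_image : M, the number of second views per first image
    Returns M lists, each holding the j-th derived view of every group.
    """
    def derive_view(img, j):
        # Placeholder for real per-view EIA synthesis.
        return (img, j)

    groups = [[derive_view(img, j) for j in range(views_per_image)]
              for img in first_images]
    # Display frame j gathers the j-th second view from every group.
    return [[group[j] for group in groups]
            for j in range(views_per_image)]
```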
-
Patent number: 9977981
Abstract: Provided are methods and apparatuses for calibrating a three-dimensional (3D) image in a tiled display including a display panel and a plurality of lens arrays. The method includes capturing a plurality of structured light images displayed on the display panel, calibrating a geometric model of the tiled display based on the plurality of structured light images, generating a ray model based on the calibrated geometric model of the tiled display, and rendering an image based on the ray model.
Type: Grant
Filed: February 20, 2015
Date of Patent: May 22, 2018
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Weiming Li, Mingcai Zhou, Shandong Wang, Shaohui Jiao, Tao Hong, Haitao Wang, Ji Yeun Kim
-
Publication number: 20170309216
Abstract: Disclosed are a content display method and an electronic device. The content display method includes reducing display brightness of some of a plurality of pixels constituting content based on pixel information of the plurality of pixels. The content display method includes displaying the content based on the reduced display brightness.
Type: Application
Filed: September 10, 2015
Publication date: October 26, 2017
Inventors: Shaohui JIAO, Haining HU
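Selective brightness reduction "based on pixel information" can be sketched as below. The luminance cutoff and scaling factor are invented for illustration; the publication does not state which pixel information or reduction rule is used.

```python
import numpy as np

def dim_bright_pixels(image, luminance_cutoff=200, factor=0.7):
    """Reduce display brightness of some pixels based on pixel information.

    Pixels whose mean channel value exceeds `luminance_cutoff` are scaled
    by `factor`; both values are illustrative, not from the publication.
    """
    img = image.astype(np.float32)
    bright = img.mean(axis=2) > luminance_cutoff  # per-pixel selection
    img[bright] *= factor
    return img.astype(np.uint8)
```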
-
Publication number: 20170206694
Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
Type: Application
Filed: July 30, 2015
Publication date: July 20, 2017
Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
-
Patent number: 9691172
Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may comprise an avatar animation engine to receive a plurality of fur shell texture data maps associated with a furry avatar, and drive an avatar model to animate the furry avatar, using the plurality of fur shell texture data maps. The plurality of fur shell texture data maps may be generated through sampling of fur strands across a plurality of horizontal planes. Other embodiments may be described and/or claimed.
Type: Grant
Filed: September 24, 2014
Date of Patent: June 27, 2017
Assignee: Intel Corporation
Inventors: Shaohui Jiao, Xiaofeng Tong, Qiang Li, Wenlong Li
-
Publication number: 20170171537
Abstract: A three-dimensional (3D) display method includes generating N first visual images, N being a natural number greater than 1; generating M second visual images from each of the N first visual images, M being a natural number greater than 1; acquiring N visual image groups corresponding to the N first visual images, respectively, such that, for each one of the N visual image groups, the visual image group includes the M second visual images generated from the first visual image, from among the N first visual images, to which the visual image group corresponds; generating M elemental image array (EIA) images based on the N visual image groups; and time-share displaying the M EIA images.
Type: Application
Filed: September 1, 2014
Publication date: June 15, 2017
Inventors: Mingcai ZHOU, Shandong WANG, Shaohui JIAO, Tao HONG, Weiming LI, Haitao WANG, Ji Yeun KIM