Patents by Inventor Wenlong Li

Wenlong Li has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10723869
    Abstract: A polypropylene composition, a preparation method thereof, and a film or a sheet prepared from the polypropylene composition and use thereof. The composition includes 45 parts to 75 parts of a polypropylene, 10 parts to 35 parts of an elastomer, 5 parts to 20 parts of a polyethylene, 0.1 parts to 0.5 parts of an antioxidant, and 0.1 parts to 0.5 parts of a lubricant. A half peak width of a crystallization peak of the polypropylene is 5° C. to 10° C., and a peak temperature of the crystallization peak of the polypropylene is 105° C. to 115° C. The polypropylene composition has good tenacity, especially a −30° C. low-temperature impact performance. The film or the sheet prepared from the composition and applied to automotive interior parts can enable the external accessories not only to be less likely to generate sharp fragments when strongly impacted, but also to have a matte characteristic.
    Type: Grant
    Filed: September 14, 2016
    Date of Patent: July 28, 2020
    Assignees: Kingfa Sci. & Tech. Co., Ltd., Yanfeng Automotive Trim Systems Co., Ltd.
    Inventors: Bo Yang, Guangwei Zhang, Zhongfu Luo, Chao Ding, Xueyong Zhang, Lan Zhao, Jianfeng Hou, Wenlong Li, Yinghui Zhou, Nanbiao Ye, Peng Wang
  • Publication number: 20200236428
    Abstract: Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver.
    Type: Application
    Filed: November 21, 2019
    Publication date: July 23, 2020
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Jianguo Li, Peng Wang
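The abstract above derives per-viewer favorite-program lists from who is detected watching. A minimal sketch of that derivation step, assuming a hypothetical watch log of (viewer, program, minutes) tuples produced by the video-analysis stage:

```python
from collections import defaultdict

def derive_favorites(watch_log, top_n=3):
    """Aggregate per-viewer watch time and return each viewer's
    most-watched programs, longest first."""
    totals = defaultdict(lambda: defaultdict(float))
    for viewer, program, minutes in watch_log:
        totals[viewer][program] += minutes
    return {
        viewer: [p for p, _ in sorted(progs.items(),
                                      key=lambda kv: -kv[1])[:top_n]]
        for viewer, progs in totals.items()
    }

# Illustrative log: (viewer identified by video analysis, program, minutes)
log = [("alice", "news", 30), ("alice", "drama", 90),
       ("bob", "sports", 60), ("alice", "news", 15)]
favorites = derive_favorites(log)
```

The names and the simple minutes-watched metric are assumptions for illustration; the patent covers the broader idea of deriving favorites per detected viewer.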
  • Publication number: 20200215410
    Abstract: An embodiment of a semiconductor package apparatus may include technology to recognize an action in a video, determine a synchronization point in the video based on the recognized action, and align sensor-related information with the video based on the synchronization point. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: July 9, 2020
    Applicant: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Qiang Li, Nadia P. Banks, Doron T. Houminer
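The alignment step described above can be sketched very simply: once the same action is recognized in both the video and the sensor stream, the timestamp difference at that synchronization point is used to shift the sensor samples onto the video timeline. A minimal sketch with assumed (time, value) sensor samples:

```python
def align_sensor_to_video(sensor_samples, sensor_sync_t, video_sync_t):
    """Shift sensor timestamps so the recognized action (the sync point)
    occurs at the same instant in both streams."""
    offset = video_sync_t - sensor_sync_t
    return [(t + offset, value) for t, value in sensor_samples]

samples = [(0.0, 1.2), (0.5, 1.8), (1.0, 0.4)]   # (sensor time in s, reading)
aligned = align_sensor_to_video(samples, sensor_sync_t=0.5, video_sync_t=12.5)
```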
  • Patent number: 10699463
    Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
    Type: Grant
    Filed: March 17, 2016
    Date of Patent: June 30, 2020
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Qiang Li, Wenlong Li
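The abstract above combines skeleton simplification with a mass-spring model driven by tracked head motion. A minimal sketch of the mass-spring part, assuming a single hair strand reduced to a chain of point masses whose root follows the tracked head position (all constants are illustrative):

```python
def simulate_strand(anchor_path, n_links=3, rest_len=1.0,
                    k=30.0, damping=0.85, dt=0.02, steps_per_frame=5):
    """Mass-spring chain whose root follows the tracked head (anchor_path).
    Returns the final node positions after following the whole path."""
    ax, ay = anchor_path[0]
    # Nodes start hanging straight down from the first anchor position.
    pos = [(ax, ay - rest_len * (i + 1)) for i in range(n_links)]
    vel = [(0.0, 0.0)] * n_links
    for anchor in anchor_path:                   # one anchor per tracked frame
        for _ in range(steps_per_frame):
            new_pos, new_vel = [], []
            for i in range(n_links):
                x, y = pos[i]
                vx, vy = vel[i]
                px, py = anchor if i == 0 else pos[i - 1]
                dx, dy = x - px, y - py
                dist = max((dx * dx + dy * dy) ** 0.5, 1e-9)
                # Spring force pulls the node back to rest length; gravity on y.
                f = -k * (dist - rest_len)
                fx = f * dx / dist
                fy = f * dy / dist - 9.8
                vx = (vx + fx * dt) * damping
                vy = (vy + fy * dt) * damping
                new_pos.append((x + vx * dt, y + vy * dt))
                new_vel.append((vx, vy))
            pos, vel = new_pos, new_vel
    return pos

final = simulate_strand([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)])
```

As the anchor (the head) moves right, the spring forces drag the strand after it, which is the "simulated motion driven by head movement" the abstract describes.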
  • Publication number: 20200193864
    Abstract: Systems and techniques for sensor-derived swing hit and direction detection are described herein. A set of sensor values may be compressed into a first lower dimension. Features may be extracted from the compressed set of sensor values. The features may be clustered into a set of clusters. A swing action may be detected based on a distance between members of the set of clusters.
    Type: Application
    Filed: September 8, 2017
    Publication date: June 18, 2020
    Inventors: Yikai Fang, Xiaofeng Tong, Lidan Zhang, Qiang Eric Li, Wenlong Li
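The pipeline above (compress to a lower dimension, extract features, cluster, detect by cluster distance) can be sketched with deliberately simple stand-ins: acceleration magnitude as the 1-D compression, and a tiny two-cluster k-means. The threshold and data are illustrative, not from the patent:

```python
def detect_swing(samples, threshold=5.0, iters=10):
    """Compress 3-axis accelerometer samples to magnitudes (a crude 1-D
    projection), split them into two clusters with k-means, and report a
    swing when the clusters sit far apart (rest vs. swing burst)."""
    feats = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    c0, c1 = min(feats), max(feats)
    for _ in range(iters):
        a = [f for f in feats if abs(f - c0) <= abs(f - c1)]
        b = [f for f in feats if abs(f - c0) > abs(f - c1)]
        c0 = sum(a) / len(a) if a else c0
        c1 = sum(b) / len(b) if b else c1
    return abs(c1 - c0) > threshold

rest = [(0.1, 0.0, 1.0)] * 8                        # near-gravity readings
swing = rest + [(9.0, 7.0, 3.0), (11.0, 8.0, 2.0)]  # burst from a swing
```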
  • Patent number: 10540800
    Abstract: Examples of systems and methods for non-facial animation in facial performance driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: January 21, 2020
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Eric Li, Yangzhou Du, Wenlong Li, Johnny C. Yip
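The step of "identifying an avatar body animation based on the facial motion data" can be sketched as a lookup from per-frame facial motion values to body animation clips. All keys, thresholds, and clip names here are hypothetical:

```python
def identify_body_animation(facial_motion):
    """Map frame-level facial motion data to a body animation clip.
    Keys and thresholds are illustrative, not from the patent."""
    if facial_motion.get("mouth_open", 0.0) > 0.7:
        return "jump"
    if abs(facial_motion.get("head_yaw", 0.0)) > 0.5:
        return "turn"
    if facial_motion.get("eyebrow_raise", 0.0) > 0.6:
        return "shrug"
    return "idle"

frames = [{"mouth_open": 0.9}, {"head_yaw": -0.8}, {}]
clips = [identify_body_animation(f) for f in frames]
```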
  • Patent number: 10524005
    Abstract: Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: December 31, 2019
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Jianguo Li, Peng Wang
  • Publication number: 20190320144
    Abstract: Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters.
    Type: Application
    Filed: June 26, 2019
    Publication date: October 17, 2019
    Applicant: Intel Corporation
    Inventors: XIAOFENG TONG, WENLONG LI, YANGZHOU DU, WEI HU, YIMIN ZHANG
  • Publication number: 20190304155
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: October 26, 2018
    Publication date: October 3, 2019
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim
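The mapping described above, from a detected sequence of facial expressions to one alternative avatar expression, can be sketched as a suffix lookup over the expression stream. The mapping table entries are invented for illustration:

```python
# Hypothetical table: a detected sequence of basic expressions triggers
# one exaggerated "alternative" avatar expression.
SEQUENCE_MAP = {
    ("smile", "smile", "smile"): "beaming_hearts",
    ("surprise", "frown"): "comic_shock",
}

def alternative_expression(detected):
    """Scan the detected expression stream for the longest mapped suffix."""
    for n in range(len(detected), 0, -1):
        key = tuple(detected[-n:])
        if key in SEQUENCE_MAP:
            return SEQUENCE_MAP[key]
    return None

stream = ["neutral", "smile", "smile", "smile"]
```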
  • Patent number: 10419804
    Abstract: Methods, apparatuses and storage medium associated with cooperative provision of personalized user functions using shared device and personal device are disclosed herein. In various embodiments, a personal device (PD) method may include receiving, by a personal device of a user, a request to perform a user function to be cooperatively provided by the personal device and a shared device (SD) configured for use by multiple users; and cooperating with the shared device, by the personal device, to provide the requested user function personalized to the user of the personal device. In various embodiments, a SD method may include similar receiving and cooperating operations, performed by the SD. Other embodiments may be disclosed or claimed.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: September 17, 2019
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Honesty Young, Randolph Wang
  • Publication number: 20190213774
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: January 7, 2019
    Publication date: July 11, 2019
    Inventors: Shaohui JIAO, Xiaolu SHEN, Lidan ZHANG, Qiang LI, Wenlong LI
  • Publication number: 20190180493
    Abstract: Methods, systems, and storage media for generating and displaying animations of simulated biomechanical motions are disclosed. In embodiments, a computer device may obtain sensor data of a sensor affixed to a user's body or equipment used by the user, and may use inverse kinematics to determine desired positions and orientations of an avatar based on the sensor data. In embodiments, the computer device may adjust or alter the avatar based on the inverse kinematics, and generate an animation for display based on the adjusted avatar. Other embodiments may be disclosed and/or claimed.
    Type: Application
    Filed: September 20, 2016
    Publication date: June 13, 2019
    Inventors: Xiaofeng TONG, Yuanzhang CHANG, Qiang LI, Wenlong LI
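The inverse-kinematics step above, determining joint configurations from a sensed end-effector position, can be sketched with the textbook analytic solution for a planar two-link limb. This is a generic IK sketch, not the patent's specific method:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics for a planar two-link limb: given a
    target (x, y) for the end effector, return the joint angles
    (shoulder, elbow) in radians, or None if the target is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target unreachable with these link lengths
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

angles = two_link_ik(2.0, 0.0)   # arm fully stretched along the x axis
```

An avatar's limb pose would then be driven by feeding sensor-derived target positions through such a solver each frame.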
  • Publication number: 20190048176
    Abstract: A polypropylene composition, a preparation method thereof, and a film or a sheet prepared from the polypropylene composition and use thereof. The composition includes 45 parts to 75 parts of a polypropylene, 10 parts to 35 parts of an elastomer, 5 parts to 20 parts of a polyethylene, 0.1 parts to 0.5 parts of an antioxidant, and 0.1 parts to 0.5 parts of a lubricant. A half peak width of a crystallization peak of the polypropylene is 5° C. to 10° C., and a peak temperature of the crystallization peak of the polypropylene is 105° C. to 115° C. The polypropylene composition has good tenacity, especially a −30° C. low-temperature impact performance. The film or the sheet prepared from the composition and applied to automotive interior parts can enable the external accessories not only to be less likely to generate sharp fragments when strongly impacted, to guarantee the safety of people, but also to have a matte characteristic.
    Type: Application
    Filed: September 14, 2016
    Publication date: February 14, 2019
    Applicants: Kingfa Sci. & Tech. Co., Ltd., YanFeng Automotive Trim Systems Co., Ltd.
    Inventors: Bo Yang, Guangwei Zhang, Zhongfu Luo, Chao Ding, Xueyong Zhang, Lan Zhao, Jianfeng Hou, Wenlong Li, Yinghui Zhou, Nanbiao Ye, Peng Wang
  • Publication number: 20190035133
    Abstract: In response to movement of an underlying structure, motion of complex objects connected to that structure may be simulated relatively quickly and without requiring extensive processing capabilities. A skeleton extraction method is used to simplify the complex object. Tracking is used to track the motion of the underlying structure, such as the user's head in a case where motion of hair is being simulated. Thus, the simulated motion is driven in response to the extent and direction of head or facial movement. A mass-spring model may be used to accelerate the simulation in some embodiments.
    Type: Application
    Filed: March 17, 2016
    Publication date: January 31, 2019
    Applicant: Intel Corporation
    Inventors: Shaohui JIAO, Qiang LI, Wenlong LI
  • Patent number: 10176619
    Abstract: Apparatuses, methods and storage medium associated with emotion augmented animation of avatars are disclosed herein. In embodiments, an apparatus may comprise an animation augmentation engine to receive facial data of a user, analyze the facial data to determine an emotion state of the user, and drive additional animation that supplements animation of the avatar based at least in part on a result of the determination of the emotion state of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: July 30, 2015
    Date of Patent: January 8, 2019
    Assignee: Intel Corporation
    Inventors: Shaohui Jiao, Xiaolu Shen, Lidan Zhang, Qiang Li, Wenlong Li
  • Publication number: 20180353836
    Abstract: Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
    Type: Application
    Filed: December 30, 2016
    Publication date: December 13, 2018
    Inventors: Qiang Eric Li, Wenlong Li, Shaohui Jiao, Yikai Fang, Xiaolu Shen
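The key-frame selection step above, picking the video frame whose timestamp matches the sensor timestamp of the key stage, can be sketched as a nearest-timestamp search. Frame times are illustrative:

```python
def select_key_frame(frame_times, key_stage_t):
    """Pick the index of the video frame whose timestamp is closest to the
    sensor timestamp that identified the key stage of the activity."""
    return min(range(len(frame_times)),
               key=lambda i: abs(frame_times[i] - key_stage_t))

times = [0.00, 0.33, 0.66, 1.00]   # frame timestamps in seconds
idx = select_key_frame(times, key_stage_t=0.70)
```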
  • Publication number: 20180342098
    Abstract: Methods and apparatus relating to a unified environmental mapping framework are described. In an embodiment, Environmental Mapping (EM) logic performs one or more operations to extract illumination information for an object from an environmental map in response to a determination that the object has a diffuse surface and/or specular surface. Memory, coupled to the EM logic, stores data corresponding to the environmental map. Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: December 25, 2015
    Publication date: November 29, 2018
    Applicant: Intel Corporation
    Inventors: Yuanzhang Chang, Shaohui Jiao, Xiaofeng Tong, Qiang Li, Wenlong Li
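The diffuse/specular distinction above follows standard environment-mapping practice: a diffuse surface integrates illumination over all directions, while a specular surface samples the mirror-reflection direction. A deliberately tiny sketch with the environment map reduced to a list of per-direction radiance bins (an assumption for illustration only):

```python
def extract_illumination(env_map, surface, reflect_idx=0):
    """Pull illumination from a tiny, illustrative environment map:
    diffuse surfaces average over all direction bins; specular surfaces
    read only the bin along the mirror-reflection direction."""
    if surface == "diffuse":
        return sum(env_map) / len(env_map)
    if surface == "specular":
        return env_map[reflect_idx]
    raise ValueError("unknown surface type")

env = [0.2, 0.8, 0.5, 0.1]          # radiance per direction bin
diffuse_light = extract_illumination(env, "diffuse")
specular_light = extract_illumination(env, "specular", reflect_idx=1)
```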
  • Patent number: 10129571
    Abstract: Techniques for media quality control may include receiving media information and determining the quality of the media information. The media information may be presented when the quality of the media information meets a quality control threshold. A warning may be generated when the quality of the media information does not meet the quality control threshold. Other embodiments are described and claimed.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: November 13, 2018
    Assignee: INTEL CORPORATION
    Inventors: Yangzhou Du, Yurong Chen, Qiang Li, Wenlong Li
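The present-or-warn decision described above can be sketched in a few lines; the quality score and threshold values are placeholders for whatever metric the system measures:

```python
def handle_media(quality, threshold=0.6):
    """Present media only when its measured quality meets the threshold;
    otherwise return a warning instead of playing degraded content."""
    if quality >= threshold:
        return ("present", None)
    return ("warn", f"quality {quality:.2f} below threshold {threshold:.2f}")

action, note = handle_media(0.45)
```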
  • Patent number: 10108315
    Abstract: Various embodiments are generally directed to cooperation among networked devices to obtain and use a multiple-frame screenshot. In one embodiment, an apparatus comprises a processor circuit executing instructions that cause the processor circuit to receive a signal conveying a video stream from a source device; visually present video frames of the video stream on a display associated with the apparatus; maintain a rolling buffer comprising a plurality of video frames; recurringly update the plurality of video frames to represent a subset of video frames of the video stream most recently presented on the display; receive a signal indicative of a capture command; and preserve the subset of video frames as a multiple-frame screenshot in response to the capture command.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: October 23, 2018
    Assignee: INTEL CORPORATION
    Inventor: Wenlong Li
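The rolling buffer described above maps naturally onto a fixed-capacity deque: each presented frame is appended, the oldest falls out automatically, and a capture command snapshots whatever the buffer currently holds. A minimal sketch (capacity and frame labels are illustrative):

```python
from collections import deque

class RollingScreenshot:
    """Keep the N most recently presented frames; on a capture command,
    preserve them as a multiple-frame screenshot."""
    def __init__(self, capacity=4):
        self.buffer = deque(maxlen=capacity)

    def present(self, frame):
        self.buffer.append(frame)      # oldest frame drops out automatically

    def capture(self):
        return list(self.buffer)       # snapshot of the last N frames

shot = RollingScreenshot(capacity=3)
for f in ["f1", "f2", "f3", "f4", "f5"]:
    shot.present(f)
frames = shot.capture()
```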
  • Publication number: 20180300925
    Abstract: Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
    Type: Application
    Filed: November 27, 2017
    Publication date: October 18, 2018
    Inventors: Yikai Fang, Yangzhou Du, Qiang Eric Li, Xiaofeng Tong, Wenlong Li, Minje Park, Myung-Ho Ju, Jihyeon Kate Yi, Tae-Hoon Pete Kim