Patents by Inventor Yunzhen Wang

Yunzhen Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190130435
    Abstract: Receiving tracking information at an analysis network is disclosed. Initially, a first link to a first video ad serving template including first instructions and a second link to a video creative is received at the analysis network and from a creative provider. A second video ad serving template is then generated based on the first link to the first video ad serving template. The second video ad serving template includes the first link to the first video ad serving template and a third link to second instructions for generating second tracking information. A link to the second video ad serving template is forwarded from the analysis network to the creative provider. A request for the second instructions is then received at the analysis network and from the content provider. The second instructions are then forwarded from the analysis network to the content provider. Second tracking information generated by the second instructions is then received at the analysis network from the content provider.
    Type: Application
    Filed: December 26, 2018
    Publication date: May 2, 2019
    Inventors: Rajesh Bashetty, Yunzhen Wang, Thomas Pottjegort, Alfonso Corretti Gerbaudo
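    Illustrative sketch: the workflow above is essentially a wrapper pattern for video ad serving templates (VAST). The Python sketch below shows how an analysis network might return a second template that links back to the creative provider's first template and adds a link to its own tracking instructions; it is not the patented implementation, and every URL and element choice in it is an assumption.
      # A minimal sketch (not the patented implementation) of wrapping a first VAST
      # template with a second one that adds a link to tracking instructions.
      # All URLs and element choices below are illustrative assumptions.
      import xml.etree.ElementTree as ET

      def build_wrapper_vast(first_template_url: str, tracking_script_url: str) -> str:
          """Return a second video ad serving template that links back to the first
          template and adds a third link to the second (tracking) instructions."""
          vast = ET.Element("VAST", version="3.0")
          ad = ET.SubElement(vast, "Ad", id="analysis-wrapper")
          wrapper = ET.SubElement(ad, "Wrapper")
          # First link: back to the creative provider's first video ad serving template.
          ET.SubElement(wrapper, "VASTAdTagURI").text = first_template_url
          # Third link: instructions that will generate the second tracking information.
          creatives = ET.SubElement(wrapper, "Creatives")
          linear = ET.SubElement(ET.SubElement(creatives, "Creative"), "Linear")
          ET.SubElement(linear, "AdParameters").text = tracking_script_url
          return ET.tostring(vast, encoding="unicode")

      if __name__ == "__main__":
          print(build_wrapper_vast(
              "https://creative.example.com/first_vast.xml",   # hypothetical first template
              "https://analysis.example.com/tracking.js",      # hypothetical second instructions
          ))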
  • Patent number: 10019825
    Abstract: Apparatus, systems, media and/or methods may involve animating avatars. User facial motion data may be extracted that corresponds to one or more user facial gestures observed by an image capture device when a user emulates a source object. An avatar animation may be provided based on the user facial motion data. Also, script data may be provided to the user and/or the user facial motion data may be extracted when the user utilizes the script data. Moreover, audio may be captured and/or converted to a predetermined tone. Source facial motion data may be extracted and/or an avatar animation may be provided based on the source facial motion data. A degree of match may be determined between the user facial motion data of a plurality of users and the source facial motion data. The user may select an avatar as a user avatar and/or a source object avatar.
    Type: Grant
    Filed: June 5, 2013
    Date of Patent: July 10, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Thomas Sachson, Yunzhen Wang
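    Illustrative sketch: one part of the abstract above that a small example can clarify is the degree of match between each user's facial motion data and the source facial motion data. The Python below is a rough stand-in, not the patented method; the per-frame feature vectors and the distance-based score are assumptions.
      # A rough, illustrative scoring of how closely each user's facial motion data
      # matches the source facial motion data; the per-frame feature vectors and the
      # distance-based score are assumptions, not the patented method.
      import math

      def degree_of_match(user_frames: list[list[float]], source_frames: list[list[float]]) -> float:
          """Return a similarity score in [0, 1]; higher means the user's facial
          gestures track the source object's gestures more closely."""
          n = min(len(user_frames), len(source_frames))
          if n == 0:
              return 0.0
          total = 0.0
          for user, source in zip(user_frames[:n], source_frames[:n]):
              distance = math.dist(user, source)   # per-frame landmark distance
              total += 1.0 / (1.0 + distance)      # map distance to a bounded score
          return total / n

      def rank_users(users: dict[str, list[list[float]]], source: list[list[float]]) -> list[tuple[str, float]]:
          """Rank a plurality of users by how well they emulate the source object."""
          scores = {name: degree_of_match(frames, source) for name, frames in users.items()}
          return sorted(scores.items(), key=lambda item: item[1], reverse=True)

      if __name__ == "__main__":
          source = [[0.0, 0.1], [0.2, 0.3]]        # toy source facial motion data
          users = {"alice": [[0.0, 0.1], [0.2, 0.25]], "bob": [[0.5, 0.9], [0.9, 0.1]]}
          print(rank_users(users, source))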
  • Patent number: 9792714
    Abstract: Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
    Type: Grant
    Filed: March 20, 2013
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
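    Illustrative sketch: the abstract above incorporates avatar animation data into an audio file associated with the video signal. One plausible, purely illustrative approach is to append the animation data as a custom chunk of a WAV (RIFF) container, as in the Python below; the "anim" chunk id and the JSON payload layout are assumptions, not the format used in the patent.
      # A purely illustrative way to incorporate avatar animation data into an audio
      # file: append it as a custom chunk of a WAV (RIFF) container. The "anim" chunk
      # id and the JSON payload layout are assumptions, not the patented format.
      import json
      import struct
      import wave

      def write_silent_wav(path: str, seconds: float = 1.0, rate: int = 16000) -> None:
          """Create a small mono WAV file standing in for the captured audio."""
          with wave.open(path, "wb") as w:
              w.setnchannels(1)
              w.setsampwidth(2)
              w.setframerate(rate)
              w.writeframes(b"\x00\x00" * int(seconds * rate))

      def embed_animation(path: str, animation: dict) -> None:
          """Append the avatar animation data as a custom 'anim' RIFF chunk and patch
          the RIFF size so ordinary audio players still read the file."""
          payload = json.dumps(animation).encode("utf-8")
          chunk = b"anim" + struct.pack("<I", len(payload)) + payload
          if len(payload) % 2:
              chunk += b"\x00"                     # RIFF chunks are word-aligned
          with open(path, "r+b") as f:
              f.seek(0, 2)                         # jump to the end of the file
              f.write(chunk)
              new_riff_size = f.tell() - 8         # RIFF size excludes the 8-byte header
              f.seek(4)
              f.write(struct.pack("<I", new_riff_size))

      if __name__ == "__main__":
          write_silent_wav("message.wav")
          embed_animation("message.wav", {"frames": [{"t": 0.0, "blendshapes": {"smile": 0.7}}]})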
  • Publication number: 20160379017
    Abstract: An apparatus, system and other techniques for a smart card device, one or more host devices and a modular computing system comprising a smart card device and one or more host devices are described. For example, an apparatus or example smart card device may comprise one or more processor circuits, an interface coupled to the one or more processor circuits, the smart card device sized to be removably inserted into a host device and the interface configured to removably couple the smart card device to the host device, and logic, at least a portion of which is in hardware, the logic to configure the smart card device based on one or more characteristics of the host device. Other embodiments are described and claimed.
    Type: Application
    Filed: December 27, 2013
    Publication date: December 29, 2016
    Inventors: Randolph Y. Wang, Eugene Y. Tang, Zeyi Liu, Jiqiang Song, Paul J. Peng, Haiyang Zhu, Yunzhen Wang, Chengwei Bi, Fang Wang, Sun C. Chan, Yuanjian Chen, Dawei Wang, Bing Han
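    Illustrative sketch: the central idea above is configuring the smart card device from characteristics of whichever host it is inserted into. The Python below is a hypothetical illustration only; the characteristic names and configuration profiles are invented and are not drawn from the application.
      # Hypothetical sketch: the smart card device reads characteristics of the host it
      # is inserted into and selects a matching configuration profile. The field names
      # and profiles are invented for illustration, not taken from the application.
      from dataclasses import dataclass

      @dataclass
      class HostCharacteristics:
          form_factor: str          # e.g. "phone", "tablet", "laptop-dock"
          display_width_px: int
          battery_mwh: int

      @dataclass
      class CardProfile:
          cpu_frequency_mhz: int
          gpu_enabled: bool
          ui_scale: float

      def configure_for_host(host: HostCharacteristics) -> CardProfile:
          """Choose a smart card configuration from the host device's characteristics."""
          if host.form_factor == "laptop-dock":
              return CardProfile(cpu_frequency_mhz=1800, gpu_enabled=True, ui_scale=1.0)
          if host.battery_mwh < 10_000:
              # Small, battery-constrained host: throttle the card to save power.
              return CardProfile(cpu_frequency_mhz=800, gpu_enabled=False, ui_scale=2.0)
          return CardProfile(cpu_frequency_mhz=1200, gpu_enabled=True, ui_scale=1.5)

      if __name__ == "__main__":
          print(configure_for_host(HostCharacteristics("phone", 1080, 8_000)))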
  • Publication number: 20160309590
    Abstract: Apparatuses and a method are described. For example, an apparatus or example smart card device may comprise a casing to enclose at least a portion of a processing logic, a first plurality of contact pads disposed substantially in a row near an edge of a first side of the casing, and a second plurality of contact pads disposed in a centralized group, the second plurality of contact pads substantially separate from the first plurality of contact pads.
    Type: Application
    Filed: January 6, 2014
    Publication date: October 20, 2016
    Inventors: Huajian Ding, Ying Gao, Eugene Tang, Bing Han, Zeyi Liu, Jiqiang Song, Yunzhen Wang, Dawei Wang, Paul Peng
  • Publication number: 20160300259
    Abstract: Receiving tracking information at an analysis network is disclosed. Initially, a first link to a first video ad serving template including first instructions and a second link to a video creative is received at the analysis network and from a creative provider. A second video ad serving template is then generated based on the first link to the first video ad serving template. The second video ad serving template includes the first link to the first video ad serving template and a third link to second instructions for generating second tracking information. A link to the second video ad serving template is forwarded from the analysis network to the creative provider. A request for the second instructions is then received at the analysis network and from the content provider. The second instructions are then forwarded from the analysis network to the content provider. Second tracking information generated by the second instructions is then received at the analysis network from the content provider.
    Type: Application
    Filed: April 8, 2015
    Publication date: October 13, 2016
    Inventors: Rajesh Bashetty, Yunzhen Wang, Thomas Pottjegort, Alfonso Corretti Gerbaudo
  • Patent number: 9460541
    Abstract: Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: October 4, 2016
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
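    Illustrative sketch: the abstract above marks frames affected by a buffer overflow or a tracking failure so the animation step knows they lack facial motion data. The Python below illustrates that idea under simple assumptions (a fixed hypothetical buffer limit, empty frames standing in for tracking failures) and is not Intel's implementation.
      # A minimal sketch (not Intel's implementation) of marking frames that lack
      # facial motion data: frames past a hypothetical buffer limit model a buffer
      # overflow, and empty frames stand in for a tracking failure.
      from typing import Optional

      BUFFER_LIMIT = 4                              # hypothetical buffer capacity

      def mark_bad_frames(frames: list[Optional[list[float]]]) -> list[Optional[list[float]]]:
          """Return a modified set of facial motion data in which frames affected by a
          buffer overflow or a tracking failure are marked as lacking data (None)."""
          modified = []
          for i, frame in enumerate(frames):
              overflow = i >= BUFFER_LIMIT          # frames past the limit overflowed
              tracking_failed = frame is None or len(frame) == 0
              modified.append(None if (overflow or tracking_failed) else frame)
          return modified

      def animate(frames: list[Optional[list[float]]]) -> None:
          """Initiate a (stand-in) avatar animation from the modified set of data."""
          for i, frame in enumerate(mark_bad_frames(frames)):
              if frame is None:
                  print(f"frame {i}: hold previous avatar pose (no facial motion data)")
              else:
                  print(f"frame {i}: drive avatar with {frame}")

      if __name__ == "__main__":
          animate([[0.1, 0.2], [], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]])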
  • Publication number: 20160005206
    Abstract: Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.
    Type: Application
    Filed: March 29, 2013
    Publication date: January 7, 2016
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
  • Publication number: 20150379752
    Abstract: Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
    Type: Application
    Filed: March 20, 2013
    Publication date: December 31, 2015
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
  • Publication number: 20140361974
    Abstract: Apparatus, systems, media and/or methods may involve animating avatars. User facial motion data may be extracted that corresponds to one or more user facial gestures observed by an image capture device when a user emulates a source object. An avatar animation may be provided based on the user facial motion data. Also, script data may be provided to the user and/or the user facial motion data may be extracted when the user utilizes the script data. Moreover, audio may be captured and/or converted to a predetermined tone. Source facial motion data may be extracted and/or an avatar animation may be provided based on the source facial motion data. A degree of match may be determined between the user facial motion data of a plurality of users and the source facial motion data. The user may select an avatar as a user avatar and/or a source object avatar.
    Type: Application
    Filed: June 5, 2013
    Publication date: December 11, 2014
    Inventors: Wenlong Li, Thomas Sachson, Yunzhen Wang