Patents by Inventor Thomas Sachson

Thomas Sachson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200353362
    Abstract: Managing access to digital content at a server system in communication with mobile devices including: receiving a user account identifier and performance information from a mobile device; accessing a collectible database storing collectible records; comparing the received performance information with the performance information in the collectible records to identify a matching collectible record; retrieving a collectible identifier from the identified collectible record; accessing a user account database that stores user account records; identifying the user account record corresponding to the received user account identifier; adding the retrieved collectible identifier to the identified user account record; retrieving a music-related asset identifier from the identified collectible record; and sending a confirmation to the mobile device that indicates the collectible asset has been collected and indicates the retrieved music-related asset identifier.
    Type: Application
    Filed: November 15, 2019
    Publication date: November 12, 2020
    Inventors: Thomas Sachson, Bradley Spahr
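The abstract above describes a server-side flow: match performance information reported by a mobile device against collectible records, credit the matched collectible to the user's account, and return a confirmation carrying a music-related asset identifier. The sketch below is a minimal illustration of that kind of flow; the record layouts, the `handle_collection` function, and the in-memory stand-ins for the databases are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CollectibleRecord:
    collectible_id: str
    performance_info: str          # e.g. an identifier of a specific live performance
    music_asset_id: str            # music-related asset unlocked when the collectible is collected

@dataclass
class UserAccount:
    user_account_id: str
    collectibles: list = field(default_factory=list)

# Hypothetical in-memory stand-ins for the collectible and user-account databases.
COLLECTIBLE_DB = {
    "c-001": CollectibleRecord("c-001", "show-2019-11-15-sf", "track-42"),
}
USER_DB = {"user-abc": UserAccount("user-abc")}

def handle_collection(user_account_id: str, performance_info: str) -> dict:
    """Match reported performance info to a collectible and credit the user's account."""
    match = next(
        (rec for rec in COLLECTIBLE_DB.values()
         if rec.performance_info == performance_info),
        None,
    )
    if match is None:
        return {"collected": False}
    account = USER_DB[user_account_id]
    account.collectibles.append(match.collectible_id)      # add the collectible to the account record
    return {                                               # confirmation returned to the mobile device
        "collected": True,
        "collectible_id": match.collectible_id,
        "music_asset_id": match.music_asset_id,
    }

print(handle_collection("user-abc", "show-2019-11-15-sf"))
```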
  • Patent number: 10044849
    Abstract: Technologies for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user include capturing real-time video of a user of a local computing device. The computing device extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server. The server generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 7, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yimin Zhang
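This entry splits the work between a client that extracts compact facial parameters and a server that renders the avatar video. The sketch below illustrates that division of labor only; the parameter names, placeholder values, and function names are assumptions, and a real tracker/renderer would replace the stubs.

```python
from typing import List, Dict

def extract_facial_parameters(video_frames: List[bytes]) -> List[Dict[str, float]]:
    """Client side: reduce each captured frame to a small set of expression parameters.
    A real tracker would compute these from pixels; here they are placeholders."""
    return [{"mouth_open": 0.2, "smile": 0.8, "brow_raise": 0.1} for _ in video_frames]

def render_avatar_video(params_per_frame: List[Dict[str, float]]) -> List[str]:
    """Server side: drive an avatar model with the received parameters and produce
    one rendered frame per parameter set (represented here as strings)."""
    return [f"avatar_frame(smile={p['smile']:.2f}, mouth={p['mouth_open']:.2f})"
            for p in params_per_frame]

# The client uploads only the compact parameters; the server renders the avatar video
# and would forward it to the remote computing device.
captured = [b"frame0", b"frame1", b"frame2"]
uploaded_params = extract_facial_parameters(captured)
avatar_video = render_avatar_video(uploaded_params)
print(avatar_video[0])
```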
  • Patent number: 10019825
    Abstract: Apparatus, systems, media and/or methods may involve animating avatars. User facial motion data may be extracted that corresponds to one or more user facial gestures observed by an image capture device when a user emulates a source object. An avatar animation may be provided based on the user facial motion data. Also, script data may be provided to the user and/or the user facial motion data may be extracted when the user utilizes the script data. Moreover, audio may be captured and/or converted to a predetermined tone. Source facial motion data may be extracted and/or an avatar animation may be provided based on the source facial motion data. A degree of match may be determined between the user facial motion data of a plurality of users and the source facial motion data. The user may select an avatar as a user avatar and/or a source object avatar.
    Type: Grant
    Filed: June 5, 2013
    Date of Patent: July 10, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Thomas Sachson, Yunzhen Wang
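One element of the abstract above is scoring how closely a user's facial motion matches the source object being emulated. The sketch below shows one plausible way to compute such a degree of match; the per-frame Euclidean distance and the scoring formula are assumptions, not the patented method.

```python
import math
from typing import List

def frame_distance(user: List[float], source: List[float]) -> float:
    """Euclidean distance between one frame of user and source facial motion parameters."""
    return math.sqrt(sum((u - s) ** 2 for u, s in zip(user, source)))

def degree_of_match(user_motion: List[List[float]], source_motion: List[List[float]]) -> float:
    """Average per-frame similarity mapped into (0, 1]; 1.0 means an exact emulation."""
    distances = [frame_distance(u, s) for u, s in zip(user_motion, source_motion)]
    return 1.0 / (1.0 + sum(distances) / max(len(distances), 1))

source = [[0.8, 0.1], [0.7, 0.2]]          # e.g. [smile, mouth_open] per frame of the source clip
users = {"alice": [[0.75, 0.15], [0.7, 0.2]], "bob": [[0.1, 0.9], [0.0, 0.8]]}
scores = {name: degree_of_match(motion, source) for name, motion in users.items()}
print(max(scores, key=scores.get), scores)   # rank the plurality of users by match score
```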
  • Patent number: 9984487
    Abstract: Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: May 29, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
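The abstract above describes a capture/compute/transmit pipeline: per-image facial motion data is computed locally and sent to an animation device that drives the avatar. The sketch below is a minimal illustration of that pipeline; the JSON payload layout and the blendshape-style motion record are assumptions for illustration.

```python
import json
from typing import List, Dict

def compute_facial_motion(frame: bytes) -> Dict[str, float]:
    """Stand-in for the facial recognition module: one motion record per captured image."""
    return {"jaw": 0.3, "left_eye": 1.0, "right_eye": 1.0, "smile": 0.6}

def build_motion_payload(frames: List[bytes]) -> bytes:
    """Package per-frame facial motion data for the communication module to transmit."""
    motion = [compute_facial_motion(f) for f in frames]
    return json.dumps({"frame_count": len(motion), "facial_motion": motion}).encode()

def animate_avatar(payload: bytes) -> None:
    """Animation-device side: decode the motion data and drive the avatar frame by frame."""
    data = json.loads(payload)
    for i, motion in enumerate(data["facial_motion"]):
        print(f"frame {i}: set avatar parameters {motion}")

animate_avatar(build_motion_payload([b"img0", b"img1"]))
```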
  • Patent number: 9947125
    Abstract: Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: April 17, 2018
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
  • Patent number: 9792714
    Abstract: Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
    Type: Grant
    Filed: March 20, 2013
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
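A central idea in the abstract above is carrying avatar animation data inside the audio file that accompanies it, so a messaging application can send one attachment. The sketch below uses a simple JSON/base64 wrapper purely to illustrate the pairing; it is not the patented file format.

```python
import base64
import json

def pack_avatar_message(audio_bytes: bytes, animation_frames: list) -> bytes:
    """Bundle avatar animation data with the audio it accompanies into one container
    that can be handed to a messaging application as a single attachment."""
    container = {
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
        "animation": animation_frames,       # e.g. per-frame expression parameters
    }
    return json.dumps(container).encode()

def unpack_avatar_message(blob: bytes) -> tuple:
    """Remote client side: recover the audio stream and the animation data for playback."""
    container = json.loads(blob)
    return base64.b64decode(container["audio"]), container["animation"]

blob = pack_avatar_message(b"\x00\x01fake-audio", [{"smile": 0.9}, {"smile": 0.4}])
audio, frames = unpack_avatar_message(blob)
print(len(audio), len(frames))
```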
  • Patent number: 9761032
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include an avatar animation engine configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: July 25, 2014
    Date of Patent: September 12, 2017
    Assignee: Intel Corporation
    Inventors: Xiaofeng Tong, Qiang Li, Thomas Sachson, Wenlong Li
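This entry combines two parameter streams, facial motion and head pose, into one avatar update (blendshape-style facial animation plus a skeleton head rotation). The sketch below is a minimal illustration of that combination; the dataclass fields and the output dictionary are assumptions, not the patented animation engine.

```python
from dataclasses import dataclass

@dataclass
class FacialMotion:
    mouth_open: float
    smile: float
    blink: float

@dataclass
class HeadPose:
    pitch: float   # degrees
    yaw: float
    roll: float

def animate(face: FacialMotion, head: HeadPose) -> dict:
    """Drive a hypothetical avatar model: facial parameters feed the facial animation,
    head pose parameters feed the skeleton so the head rotation is replicated."""
    return {
        "blendshapes": {"mouth_open": face.mouth_open, "smile": face.smile, "blink": face.blink},
        "head_bone_rotation": (head.pitch, head.yaw, head.roll),
    }

print(animate(FacialMotion(0.1, 0.7, 0.0), HeadPose(pitch=5.0, yaw=-12.0, roll=2.0)))
```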
  • Publication number: 20160328874
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include an avatar animation engine configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: July 25, 2014
    Publication date: November 10, 2016
    Inventors: Xiaofeng Tong, Qiang Li, Thomas Sachson, Wenlong Li
  • Patent number: 9489760
    Abstract: A mechanism is described for facilitating dynamic simulation of avatars based on user performances according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in size of the user image, where the tracking of the changes may include locating one or more positions of the user image within each of the plurality of video frames, computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames, and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances.
    Type: Grant
    Filed: November 14, 2013
    Date of Patent: November 8, 2016
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
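The abstract above scales an avatar from changes in the size of the user's image across video frames. The sketch below illustrates the idea using a bounding-box area ratio; the box representation and the proportional scaling rule are assumptions for illustration.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x, y, width, height) of the user within a video frame

def box_area(box: Box) -> int:
    return box[2] * box[3]

def avatar_scales(boxes_per_frame: List[Box]) -> List[float]:
    """Scale the avatar relative to the first frame: a larger user image (user moved
    closer to the camera) yields a proportionally larger avatar, and vice versa."""
    baseline = box_area(boxes_per_frame[0])
    return [box_area(b) / baseline for b in boxes_per_frame]

# The user's bounding box grows over the frames, so the avatar is scaled up accordingly.
print(avatar_scales([(100, 80, 120, 160), (98, 78, 132, 176), (95, 75, 150, 200)]))
```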
  • Publication number: 20160292903
    Abstract: Examples of systems and methods for transmitting avatar sequencing data in an audio file are generally described herein. A method can include receiving, at a second device from a first device, an audio file comprising: facial motion data, the facial motion data derived from a series of facial images captured at the first device, an avatar sequencing data structure from the first device, the avatar sequencing data structure comprising an avatar identifier and a duration, and an audio stream. The method can include presenting an animation of an avatar, at the second device, using the facial motion data and the audio stream.
    Type: Application
    Filed: September 24, 2014
    Publication date: October 6, 2016
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
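The abstract above names three pieces carried together: facial motion data, an avatar sequencing data structure (avatar identifier plus duration), and an audio stream. The sketch below models those pieces as simple dataclasses and walks the sequence on the receiving side; the field names and the print-based playback are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Dict

@dataclass
class AvatarSequenceEntry:
    avatar_id: str        # which avatar to show
    duration_ms: int      # how long this avatar stays active

@dataclass
class AvatarMessage:
    facial_motion: List[Dict[str, float]]   # derived from facial images on the sending device
    sequence: List[AvatarSequenceEntry]     # avatar sequencing data structure
    audio_stream: bytes

def present(msg: AvatarMessage) -> None:
    """Receiving-device side: walk the sequence, switching avatars at each duration boundary
    while the facial motion data and audio stream drive the animation."""
    t = 0
    for entry in msg.sequence:
        print(f"t={t}ms: animate avatar '{entry.avatar_id}' for {entry.duration_ms}ms")
        t += entry.duration_ms

present(AvatarMessage(
    facial_motion=[{"smile": 0.5}],
    sequence=[AvatarSequenceEntry("fox", 1500), AvatarSequenceEntry("robot", 2500)],
    audio_stream=b"...",
))
```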
  • Publication number: 20160292901
    Abstract: Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
    Type: Application
    Filed: September 24, 2014
    Publication date: October 6, 2016
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
  • Patent number: 9460541
    Abstract: Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: October 4, 2016
    Assignee: Intel Corporation
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
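This entry marks frames affected by a condition (such as buffer overflow or tracking failure) as lacking facial motion data so the animation can handle them gracefully. The sketch below uses a None sentinel and a hold-last-pose fallback; both are assumptions chosen only to illustrate the idea.

```python
from typing import List, Optional, Dict

MotionFrame = Optional[Dict[str, float]]   # None marks a frame that lacks facial motion data

def mark_condition_frames(motion: List[MotionFrame], bad_frames: List[int]) -> List[MotionFrame]:
    """On a detected buffer-overflow or tracking-failure condition, modify the affected
    entries so downstream animation knows those frames carry no usable motion data."""
    marked = list(motion)
    for i in bad_frames:
        marked[i] = None
    return marked

def animate(motion: List[MotionFrame]) -> None:
    last_good: Dict[str, float] = {}
    for i, frame in enumerate(motion):
        if frame is None:
            print(f"frame {i}: no motion data, hold last pose {last_good}")
        else:
            last_good = frame
            print(f"frame {i}: apply {frame}")

animate(mark_condition_frames([{"smile": 0.2}, {"smile": 0.6}, {"smile": 0.9}], bad_frames=[1]))
```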
  • Publication number: 20160042548
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include a facial mesh tracker to receive a plurality of image frames, detect facial action movements of a face and head pose gestures of a head within the plurality of image frames, and output a plurality of facial motion parameters and head pose parameters that depict facial action movements and head pose gestures detected, all in real time, for animation and rendering of an avatar. The facial action movements and head pose gestures may be detected through inter-frame differences for a mouth and an eye, or the head, based on pixel sampling of the image frames. The facial action movements may include opening or closing of a mouth, and blinking of an eye. The head pose gestures may include head rotation such as pitch, yaw, and roll, head movement along the horizontal and vertical directions, and the head moving closer to or farther from the camera.
    Type: Application
    Filed: March 19, 2014
    Publication date: February 11, 2016
    Inventors: Yangzhou Du, Tae-Hoon Kim, Wenlong Li, Qiang Li, Xiaofeng Tong, Tao Wang, Minje Park, Olivier Duchenne, Yimin Zhang, Yeongjae Cheon, Bongjin Jun, Wooju Ryu, Thomas Sachson, Mary D. Smiley
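The abstract above detects facial action movements through inter-frame differences over sampled pixels of regions such as the mouth and an eye. The sketch below is a toy version of that detection step; the sampled-region representation and the threshold value are assumptions, not the patented tracker.

```python
from typing import Dict, List

Region = List[int]   # sampled pixel intensities for one region (mouth, eye, ...) in one frame

def region_diff(prev: Region, curr: Region) -> float:
    """Mean absolute inter-frame difference over the sampled pixels of a region."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / max(len(curr), 1)

def detect_actions(prev_frame: Dict[str, Region], curr_frame: Dict[str, Region],
                   threshold: float = 20.0) -> Dict[str, bool]:
    """Flag facial action movements (e.g. mouth opening/closing, eye blinking) whose
    sampled region changed more than a threshold between consecutive frames."""
    return {name: region_diff(prev_frame[name], curr_frame[name]) > threshold
            for name in curr_frame}

prev = {"mouth": [10, 12, 11, 13], "left_eye": [40, 42, 41, 43]}
curr = {"mouth": [60, 65, 58, 70], "left_eye": [41, 42, 40, 44]}   # mouth opened, eye unchanged
print(detect_actions(prev, curr))   # e.g. {'mouth': True, 'left_eye': False}
```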
  • Publication number: 20160027066
    Abstract: A mechanism is described for facilitating dynamic user-based customization of advertisement content at computing devices according to one embodiment. A method of embodiments, as described herein, includes receiving an advertiser content to be published on an avatar list, where the advertiser content is associated with an advertising entity, and verifying the advertiser content for publication, where verifying further includes assigning a ranking to the advertiser content. The ranking represents a position on the avatar list. The method may further include transmitting a publication notification identifying the ranking assigned to the advertiser content, and facilitating an auction for bidding to allow the advertising entity to obtain a higher ranking for the advertiser content than the assigned ranking, if the assigned ranking is rejected by the advertising entity.
    Type: Application
    Filed: December 27, 2013
    Publication date: January 28, 2016
    Applicant: Intel Corporation
    Inventors: Thomas Sachson, Michael Dale Witteman, Jose Elmer Saavedra Lorenzo, Mary D. Smiley, Yimin Zhang, Wenlong Li, Shibani Kapoor Shah, Philip Joseph Corriveau, Kanchan Jahagirdar, Yinni Guo, Cristina Chopra Nemeth, Newfel Harrat
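The abstract above assigns advertiser content a ranking (a position on an avatar list) and lets the advertiser bid for a better one if that ranking is rejected. The sketch below shows one simple way such ranking and re-ranking could work; the highest-bid-wins auction rule and the list-based data model are assumptions for illustration.

```python
from typing import Dict, List

def assign_ranking(avatar_list: List[str], content_id: str) -> int:
    """Verify and publish: append the advertiser content and return its assigned ranking
    (its 1-based position on the avatar list)."""
    avatar_list.append(content_id)
    return len(avatar_list)

def auction_for_slot(avatar_list: List[str], slot: int, bids: Dict[str, float]) -> str:
    """If the assigned ranking is rejected, auction the desired slot: the highest-bidding
    advertiser content is moved into that position."""
    winner = max(bids, key=bids.get)
    avatar_list.remove(winner)
    avatar_list.insert(slot - 1, winner)
    return winner

listing = ["ad-A", "ad-B"]
rank = assign_ranking(listing, "ad-C")        # ad-C is published at rank 3
print(rank, listing)
print(auction_for_slot(listing, slot=1, bids={"ad-C": 2.5, "ad-B": 1.0}), listing)
```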
  • Publication number: 20160005206
    Abstract: Systems and methods may provide for detecting a condition with respect to one or more frames of a video signal associated with a set of facial motion data and modifying, in response to the condition, the set of facial motion data to indicate that the one or more frames lack facial motion data. Additionally, an avatar animation may be initiated based on the modified set of facial motion data. In one example, the condition is one or more of a buffer overflow condition and a tracking failure condition.
    Type: Application
    Filed: March 29, 2013
    Publication date: January 7, 2016
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
  • Publication number: 20150379752
    Abstract: Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
    Type: Application
    Filed: March 20, 2013
    Publication date: December 31, 2015
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yunzhen Wang
  • Publication number: 20150325029
    Abstract: A mechanism is described for facilitating dynamic simulation of avatars based on user performances according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in size of the user image, where the tracking of the changes may include locating one or more positions of the user image within each of the plurality of video frames, computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames, and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances.
    Type: Application
    Filed: November 14, 2013
    Publication date: November 12, 2015
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson
  • Publication number: 20140361974
    Abstract: Apparatus, systems, media and/or methods may involve animating avatars. User facial motion data may be extracted that corresponds to one or more user facial gestures observed by an image capture device when a user emulates a source object. An avatar animation may be provided based on the user facial motion data. Also, script data may be provided to the user and/or the user facial motion data may be extracted when the user utilizes the script data. Moreover, audio may be captured and/or converted to a predetermined tone. Source facial motion data may be extracted and/or an avatar animation may be provided based on the source facial motion data. A degree of match may be determined between the user facial motion data of a plurality of users and the source facial motion data. The user may select an avatar as a user avatar and/or a source object avatar.
    Type: Application
    Filed: June 5, 2013
    Publication date: December 11, 2014
    Inventors: Wenlong Li, Thomas Sachson, Yunzhen Wang
  • Publication number: 20140267544
    Abstract: Technologies for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user include capturing real-time video of a user of a local computing device. The computing device extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server. The server generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Inventors: Wenlong Li, Xiaofeng Tong, Yangzhou Du, Thomas Sachson, Yimin Zhang
  • Publication number: 20130232053
    Abstract: This disclosure allows parties to virtualize prospective and existing financial instruments (including derivative and other analogous complex financial instruments) and other data sets into one or more computer software applications via a user authoring software toolkit, and to upload such one or more virtualized instruments to a cloud hosting environment (or analogous online storage ecosystem) for further sharing of such virtualized instruments with interested parties over a communications network, where other market participants may engage with the cloud hosting environment and search for, download, and review such virtualized instruments. Further, the disclosure allows downloading parties and authors to communicate directly and to do such "peer to peer" communications on an anonymous, partially anonymous, or non-anonymous basis, and to have such communications be secure and/or encrypted.
    Type: Application
    Filed: April 4, 2013
    Publication date: September 5, 2013
    Applicant: DerivaTrust Technologies, Inc.
    Inventors: Thomas SACHSON, Hieu TRAN
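The abstract above describes an authoring-to-hosting pipeline: instruments are virtualized, uploaded to a cloud catalog, and then searched and downloaded by other market participants who can contact the author, optionally anonymously. The sketch below is a minimal illustration of that pipeline only; the `VirtualizedInstrument` and `HostedCatalog` classes, the alias-based anonymity, and the keyword search are assumptions, not the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class VirtualizedInstrument:
    author_alias: str          # authors may stay anonymous, partially anonymous, or named
    title: str
    terms: dict                # the virtualized instrument's underlying data set

@dataclass
class HostedCatalog:
    instruments: List[VirtualizedInstrument] = field(default_factory=list)

    def upload(self, instrument: VirtualizedInstrument) -> str:
        """Accept an authored instrument into the hosting environment and return an identifier."""
        self.instruments.append(instrument)
        return str(uuid.uuid4())

    def search(self, keyword: str) -> List[VirtualizedInstrument]:
        """Let other market participants search the hosted instruments by keyword."""
        return [i for i in self.instruments if keyword.lower() in i.title.lower()]

catalog = HostedCatalog()
catalog.upload(VirtualizedInstrument("anon-7f3", "EUR/USD barrier option template",
                                     {"notional": 1_000_000, "barrier": 1.05}))
for hit in catalog.search("barrier"):
    print(hit.title, "by", hit.author_alias)   # a downloader could now message the author peer to peer
```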