Patents by Inventor Jose Elmer S. Lorenzo

Jose Elmer S. Lorenzo has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches for several of the listed inventions appear after the listing.

  • Publication number: 20170186103
    Abstract: Systems and methods may identify local gesture data in a wearable device including a wrist-worn form factor and identify remote gesture data in a wireless transmission received by the wearable device. Additionally, a loyalty tracker may be incremented based on a correlation between the local gesture data and the remote gesture data. In one example, an attachment of an interchangeable component to the wearable device may be detected, wherein the loyalty tracker is incremented in response to the attachment.
    Type: Application
    Filed: December 24, 2015
    Publication date: June 29, 2017
    Inventors: Jose Elmer S. Lorenzo, Mary D. Smiley, Steven T. Holmes, Lakshmanan Arunachalam, Gary Y. Kwan
  • Patent number: 9633463
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may include a gesture tracker and an animation engine. The gesture tracker may be configured to detect and track a user gesture that corresponds to a canned facial expression, the user gesture including a duration component specifying how long the canned facial expression is to be animated. Further, in response to detecting and tracking the user gesture, the gesture tracker may output one or more animation messages that describe the detected user gesture or identify the canned facial expression, along with the duration. The animation engine may be configured to receive the one or more animation messages and drive an avatar model, in accordance with those messages, to animate the avatar with the canned facial expression for the duration. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: September 24, 2014
    Date of Patent: April 25, 2017
    Assignee: Intel Corporation
    Inventors: Qiang Li, Xiaofeng Tong, Yangzhou Du, Wenlong Li, Caleb J. Ozer, Jose Elmer S. Lorenzo
  • Publication number: 20170046065
    Abstract: Apparatuses, methods and storage medium associated with the provision of an avatar keyboard to a communication/computing device are disclosed herein. In embodiments, an apparatus for communicating may comprise one or more processors to execute an application; and a keyboard module coupled with the one or more processors to provide a plurality of keyboards in a corresponding plurality of keyboard modes for inputting to the application, including an avatar keyboard, in an avatar keyboard mode. The avatar keyboard may include a plurality of avatar keys with corresponding avatars that can be dynamically customized or animated based at least in part on facial expressions or head poses of a user, prior to input to the application. Other embodiments may be disclosed and/or claimed.
    Type: Application
    Filed: April 7, 2015
    Publication date: February 16, 2017
    Inventors: Fucen Zeng, Wenlong Li, Jose Elmer S. Lorenzo
  • Publication number: 20160247309
    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may include a gesture tracker and an animation engine. The gesture tracker may be configured to detect and track a user gesture that corresponds to a canned facial expression, the user gesture including a duration component specifying how long the canned facial expression is to be animated. Further, in response to detecting and tracking the user gesture, the gesture tracker may output one or more animation messages that describe the detected user gesture or identify the canned facial expression, along with the duration. The animation engine may be configured to receive the one or more animation messages and drive an avatar model, in accordance with those messages, to animate the avatar with the canned facial expression for the duration. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: September 24, 2014
    Publication date: August 25, 2016
    Inventors: Qiang Li, Xiaofeng Tong, Yangzhou Du, Wenlong Li, Caleb J. Ozer, Jose Elmer S. Lorenzo
  • Publication number: 20150031342
    Abstract: A user communication device is configured to receive and process data captured by one or more sensors during playback of an incoming communication on the device and to identify user characteristics based on the captured data. The sensors may capture attributes of the user indicative of the user's reaction and/or mood in response to the incoming communication. The user characteristics include, but are not limited to, physical characteristics of the user, such as facial expressions and gestures, as well as voice input, including tone of voice. The user communication device is further configured to identify media, based on the user characteristics, for inclusion in a communication transmitted in response to the incoming communication, the identified media including subject matter corresponding to the user's mood in response to the playback of the incoming communication.
    Type: Application
    Filed: July 24, 2013
    Publication date: January 29, 2015
    Inventor: Jose Elmer S. Lorenzo
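
Publication 20170186103 above pairs local gesture data from a wrist-worn wearable with remote gesture data received wirelessly and increments a loyalty tracker when the two correlate, or when an interchangeable component is attached. The following is a minimal Python sketch of that flow under stated assumptions: the class names, the similarity measure, and the 0.8 threshold are illustrative, not details from the filing.

    # Hypothetical sketch of the gesture-correlation loyalty tracker in publication 20170186103.
    from dataclasses import dataclass

    @dataclass
    class LoyaltyTracker:
        count: int = 0

        def increment(self) -> None:
            self.count += 1

    def correlate(local, remote):
        # Naive normalized similarity between two equal-length gesture traces (assumed measure).
        if not local or len(local) != len(remote):
            return 0.0
        diff = sum(abs(a - b) for a, b in zip(local, remote)) / len(local)
        return max(0.0, 1.0 - diff)

    def process_wireless_frame(tracker, local_gesture, remote_gesture, threshold=0.8):
        # Credit the loyalty tracker when local and remote gesture data agree.
        if correlate(local_gesture, remote_gesture) >= threshold:
            tracker.increment()

    def on_component_attached(tracker):
        # Attaching an interchangeable component to the wearable also credits the tracker.
        tracker.increment()

    tracker = LoyaltyTracker()
    process_wireless_frame(tracker, [0.1, 0.4, 0.9], [0.1, 0.5, 0.8])
    on_component_attached(tracker)
    print(tracker.count)  # 2 with these sample traces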
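Patent 9633463 and publication 20160247309 above describe a gesture tracker that maps a tracked user gesture to a canned facial expression plus a duration, emits animation messages, and an animation engine that drives the avatar model accordingly. Below is a hedged Python sketch of that message flow; the gesture-to-expression table, class names, and message fields are assumptions for illustration only.

    # Hypothetical sketch of the gesture tracker / animation engine split in patent 9633463.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AnimationMessage:
        expression_id: str   # identifies the canned facial expression
        duration_s: float    # how long the expression should be animated

    class GestureTracker:
        # Illustrative mapping from detected user gestures to canned facial expressions.
        GESTURE_TO_EXPRESSION = {"thumbs_up": "big_smile", "head_shake": "frown"}

        def on_gesture(self, gesture: str, hold_time_s: float) -> Optional[AnimationMessage]:
            expression = self.GESTURE_TO_EXPRESSION.get(gesture)
            if expression is None:
                return None
            # The gesture's duration component controls how long the expression plays.
            return AnimationMessage(expression_id=expression, duration_s=hold_time_s)

    class AnimationEngine:
        def drive(self, avatar_model: dict, message: AnimationMessage) -> None:
            # Drive the avatar model with the canned expression for the given duration.
            avatar_model["expression"] = message.expression_id
            avatar_model["remaining_s"] = message.duration_s

    tracker, engine, avatar = GestureTracker(), AnimationEngine(), {}
    message = tracker.on_gesture("thumbs_up", hold_time_s=2.5)
    if message is not None:
        engine.drive(avatar, message)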
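Publication 20170046065 above describes a keyboard module offering multiple keyboard modes, including an avatar keyboard whose avatar keys are customized or animated from the user's facial expressions and head poses before input is sent to the application. A minimal Python sketch follows; the mode names, avatar fields, and customization hook are illustrative assumptions rather than the claimed implementation.

    # Hypothetical sketch of the multi-mode keyboard with an avatar mode in publication 20170046065.
    from dataclasses import dataclass
    from enum import Enum, auto

    class KeyboardMode(Enum):
        TEXT = auto()
        EMOJI = auto()
        AVATAR = auto()

    @dataclass
    class AvatarKey:
        avatar_id: str
        expression: str = "neutral"          # updated from the user's tracked facial expression
        head_pose: tuple = (0.0, 0.0, 0.0)   # (yaw, pitch, roll) placeholder

    class KeyboardModule:
        def __init__(self) -> None:
            self.mode = KeyboardMode.TEXT
            self.avatar_keys = [AvatarKey("fox"), AvatarKey("robot")]

        def set_mode(self, mode: KeyboardMode) -> None:
            self.mode = mode

        def customize_avatars(self, expression: str, head_pose: tuple) -> None:
            # In avatar mode, avatar keys are animated from the user's facial expression
            # and head pose before anything is input to the application.
            if self.mode is KeyboardMode.AVATAR:
                for key in self.avatar_keys:
                    key.expression = expression
                    key.head_pose = head_pose

    keyboard = KeyboardModule()
    keyboard.set_mode(KeyboardMode.AVATAR)
    keyboard.customize_avatars("smile", (5.0, -2.0, 0.0))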
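Publication 20150031342 above describes identifying a user's mood from sensor data captured during playback of an incoming communication and selecting reply media whose subject matter matches that mood. The sketch below assumes a toy mood classifier and a tag-indexed media library; both are placeholders, not anything specified in the application.

    # Hypothetical sketch of mood-driven reply media selection in publication 20150031342.
    def classify_mood(facial_expression, gesture, voice_tone):
        # Stand-in for the device's user-characteristic analysis; returns a coarse mood label.
        if facial_expression == "smile" or voice_tone == "upbeat":
            return "happy"
        if facial_expression == "frown" or gesture == "head_shake":
            return "annoyed"
        return "neutral"

    # Toy media library indexed by the mood its subject matter corresponds to.
    MEDIA_LIBRARY = {
        "happy": ["celebration.gif", "thumbs_up.png"],
        "annoyed": ["eye_roll.gif"],
        "neutral": ["ok.png"],
    }

    def media_for_reply(facial_expression, gesture, voice_tone):
        # Identify media matching the user's reaction to the incoming communication,
        # for inclusion in the outgoing reply.
        mood = classify_mood(facial_expression, gesture, voice_tone)
        return MEDIA_LIBRARY.get(mood, [])

    print(media_for_reply("smile", "nod", "upbeat"))  # ['celebration.gif', 'thumbs_up.png']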