Patents by Inventor Javier Hernandez Rivera

Javier Hernandez Rivera has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240290462
    Abstract: A technique for providing multi-sensory content receives input information that expresses a physiological state and an experienced emotional state of a user. The technique generates prompt information that describes at least an objective of the guidance to be delivered and the input information. The technique maps the prompt information to output information using a pattern completion component. The output information contains control instructions for controlling an output system to deliver the guidance via generated content. In some implementations, the pattern completion component is a machine-trained pattern completion model. In some implementations, a reward-driven machine-trained model further processes the input information and/or the output information. The reward-driven machine-trained model is trained by reinforcement learning to promote the objective of the guidance. In other implementations, the reward-driven machine-trained model operates by itself, without the pattern completion component.
    Type: Application
    Filed: February 28, 2023
    Publication date: August 29, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Judith AMORES FERNANDEZ, Mary Patricia CZERWINSKI, Javier HERNANDEZ RIVERA
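
The abstract above describes a processing flow rather than an API, so the following is a minimal Python sketch of that flow under assumed names: sensed input information is folded into a prompt, and a stand-in pattern-completion step maps the prompt to control instructions for an output system. Every identifier, rule, and constant here is illustrative rather than taken from the patent, and the reward-driven model is omitted for brevity.

```python
# Hypothetical sketch of the flow in publication 20240290462:
# sensed user state -> prompt information -> pattern completion -> controls.
# All names and rules are illustrative; the patent does not specify an API.
from dataclasses import dataclass

@dataclass
class UserState:
    heart_rate_bpm: float   # physiological state
    reported_emotion: str   # experienced emotional state

def build_prompt(state: UserState, objective: str) -> str:
    """Combine the guidance objective with the sensed input information."""
    return (
        f"Objective: {objective}\n"
        f"Physiology: heart rate {state.heart_rate_bpm:.0f} bpm\n"
        f"Emotion: {state.reported_emotion}\n"
        "Produce control instructions for the output system."
    )

def pattern_completion(prompt: str) -> dict:
    """Stand-in for the machine-trained pattern completion component.
    A real system would query a generative model; a fixed mapping is
    used here so the sketch runs end to end."""
    calming = "stressed" in prompt or "anxious" in prompt
    return {
        "audio": "slow ambient track" if calming else "neutral soundscape",
        "visual": "dim warm lighting" if calming else "default lighting",
    }

state = UserState(heart_rate_bpm=96, reported_emotion="stressed")
print(pattern_completion(build_prompt(state, "reduce stress")))
```
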
  • Publication number: 20230419581
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
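
This application shares its abstract with publication 20210398337 and patent 11790586 below, so one illustration covers all three. The sketch modulates a base skin albedo with a toy blood-volume-pulse waveform before each rendered frame; the constants, the waveform, and the additive blending rule are assumptions for illustration, not the patented method.

```python
# Illustrative take on the rendering idea in publication 20230419581:
# shift an avatar's sub-surface skin color with a periodic
# blood-volume-pulse signal at each video frame. Values are assumed.
import math

BASE_ALBEDO = (0.80, 0.58, 0.50)    # RGB skin albedo in [0, 1]
PULSE_GAIN = (0.000, 0.015, 0.008)  # per-channel pulse strength (assumed)

def pulse_signal(t: float, heart_rate_bpm: float = 70.0) -> float:
    """Toy blood-volume-pulse waveform: a raised sinusoid in [0, 1]."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * heart_rate_bpm / 60.0 * t))

def skin_color_at(t: float) -> tuple:
    """Sub-surface skin color for frame time t: albedo shifted by the pulse."""
    p = pulse_signal(t)
    return tuple(min(1.0, c + g * p) for c, g in zip(BASE_ALBEDO, PULSE_GAIN))

# Render-loop stand-in: print the modulated color for the first few frames.
for frame in range(5):
    t = frame / 30.0   # 30 fps video
    print(f"frame {frame}: skin RGB = {skin_color_at(t)}")
```

The intent, as the abstract describes it, is that a detector trained on frames carrying this subtle periodic color variation can learn to recover the physiological characteristic from real video.
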
  • Publication number: 20230334514
    Abstract: Aspects of the present disclosure relate to generating an engagement model to predict actions that may have a high probability of maintaining user engagement in-application or causing a user to reengage with the application. To generate the engagement model, an approach has been developed that incorporates feature analysis of the application and application users. Users may be grouped based on similar features that are used to generate machine learning engagement models. The output of an engagement model may be a prediction of whether a user will continue to engage with an application. The prediction may be provided to a reengagement model, which may output prompts to help increase user engagement with the application. The prompts may be based on an understanding of application users and their preferences.
    Type: Application
    Filed: April 18, 2022
    Publication date: October 19, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Javier HERNANDEZ RIVERA, Mar GONZALEZ FRANCO, Melanie J. KNEISEL, Adam B. GLASS, Jarnail CHUDGE, Tiffany LIU, Antonella MASELLI, Amos MILLER
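
As a rough illustration of the two-stage design described above, the sketch below groups users by a shared feature, scores engagement with a toy model, and routes low scores to a reengagement prompt. The features, grouping rule, model, and threshold are all assumptions, not details from the application.

```python
# Hypothetical sketch of publication 20230334514: an engagement model
# predicts continued engagement, and a reengagement model turns a low
# prediction into a tailored prompt. All specifics are assumed.
from dataclasses import dataclass

@dataclass
class UserFeatures:
    sessions_last_week: int
    avg_session_minutes: float
    likes_social_features: bool

def user_group(u: UserFeatures) -> str:
    """Group users with similar features (stand-in for learned grouping)."""
    return "social" if u.likes_social_features else "solo"

def engagement_model(u: UserFeatures) -> float:
    """Toy engagement probability; a trained model per group in practice."""
    return min(1.0, 0.1 * u.sessions_last_week + 0.02 * u.avg_session_minutes)

def reengagement_prompt(u: UserFeatures) -> str:
    """Pick a prompt matched to the user's group and preferences."""
    if user_group(u) == "social":
        return "Your friends posted new content - come take a look!"
    return "Your saved project is waiting - pick up where you left off."

user = UserFeatures(sessions_last_week=1, avg_session_minutes=5.0,
                    likes_social_features=True)
if engagement_model(user) < 0.5:   # assumed decision threshold
    print(reengagement_prompt(user))
```
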
  • Patent number: 11790586
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McDuff, Javier Hernandez Rivera, Tadas Baltrusaitis, Erroll William Wood
  • Publication number: 20230083418
    Abstract: Various methods and apparatus relating to estimating and mitigating a stress level of a user are disclosed herein. Methods can include collecting potential stress indicator data from the user interacting with a computing device. The potential stress indicator data can include one or more of environmental data and contextual data associated with the user. Methods can include estimating the stress level of the user based on the potential stress indicator data. Methods can include performing an evaluation of whether to mitigate the stress level of the user via one or more stress mitigation interventions. Methods can include presenting the one or more stress mitigation interventions to the user via a graphical user interface (GUI) when the evaluation indicates that the stress level should be mitigated.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Mary P. CZERWINSKI, Kael R. ROWAN, Jin A. SUH, Gonzalo A. RAMOS
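
The sense-estimate-intervene loop in this abstract can be made concrete with a short sketch. The indicator set, the weights, and the 0.6 threshold below are invented for illustration; the application does not publish a formula.

```python
# Minimal sketch of the loop in publication 20230083418: collect stress
# indicators, estimate a stress level, and decide whether to present a
# mitigation intervention. Weights and threshold are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StressIndicators:
    typing_speed_ratio: float  # contextual: current vs. baseline typing speed
    meetings_today: int        # contextual: calendar load
    ambient_noise_db: float    # environmental

def estimate_stress(x: StressIndicators) -> float:
    """Combine indicators into a 0..1 stress estimate (assumed weights)."""
    s = 0.4 * max(0.0, x.typing_speed_ratio - 1.0)    # typing faster than usual
    s += 0.1 * x.meetings_today
    s += 0.005 * max(0.0, x.ambient_noise_db - 50.0)  # above quiet-office level
    return min(1.0, s)

def maybe_intervene(stress: float, threshold: float = 0.6) -> Optional[str]:
    """Evaluation step: only surface an intervention above the threshold."""
    if stress < threshold:
        return None
    return "Stress seems elevated. Try a 2-minute breathing exercise?"

indicators = StressIndicators(typing_speed_ratio=1.4, meetings_today=6,
                              ambient_noise_db=65.0)
print(maybe_intervene(estimate_stress(indicators)) or "No intervention needed.")
```
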
  • Publication number: 20220400994
    Abstract: Systems and methods for providing a user characteristic to a service provider for a virtual conference with a user are provided. In particular, a computing device may collect raw media data associated with the user during the virtual conference between the user and the service provider. During the virtual conference, the computing device may perform a first processing of the raw media data to extract intermediate user data, wherein the intermediate user data comprises one or more of a physiological signal and a behavioral signal associated with the user. The computing device may further transform the raw media data into transformed media data and transmit the transformed media data with the intermediate user data to a server for second processing of the intermediate user data.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 22, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Javier HERNANDEZ RIVERA, Daniel J. MCDUFF, Jin A. SUH, Kael R. ROWAN, Mary P. CZERWINSKI
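
The key property in this abstract is the split between on-device "first processing" and server-side "second processing". The sketch below mimics that split with stand-ins: a coarse green-channel signal plays the role of the intermediate user data, and heavy downsampling plays the role of the media transform. Neither is claimed to be the patented technique.

```python
# Sketch of the split-processing idea in publication 20220400994: extract
# intermediate signals from raw media on the client, transform the raw
# media, and upload only the transformed media plus the signals.
from statistics import mean

def extract_intermediate_signals(frames):
    """First processing on-device: mean green-channel intensity per frame,
    a common input for remote physiological sensing (stand-in signal)."""
    return {"green_means": [mean(frame) for frame in frames]}

def transform_media(frames):
    """Privacy-motivated transform: keep every 4th sample of each frame."""
    return [frame[::4] for frame in frames]

def send_to_server(payload):
    """Stand-in for the upload; a real client would use the network."""
    for key, value in payload.items():
        print(f"uploading {key}: {value}")

raw_frames = [[0.2, 0.5, 0.4, 0.6] * 4 for _ in range(3)]  # toy "video"
signals = extract_intermediate_signals(raw_frames)
send_to_server({"media": transform_media(raw_frames), "signals": signals})
```
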
  • Publication number: 20220358308
    Abstract: The present disclosure relates to highlighting audience members who react to a presenter during an online meeting. Unlike a physical, face-to-face meeting, which enables spontaneous interactions between the presenter and audience members collocated with the presenter, presenting materials during an online meeting leaves the presenter unable to see real-time reactions or feedback from the audience members. The present disclosure addresses the issue by dynamically determining one or more audience members who indicate reactions during the online meeting or presentation and displaying the faces of those audience members under a spotlight to the presenter. The presenter sees the faces of the reacting audience members during the online presentation and can respond to them to keep the audience engaged. A spotlight audience server analyzes video frames and determines the types of reactions of the audience members.
    Type: Application
    Filed: June 24, 2021
    Publication date: November 10, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Javier HERNANDEZ RIVERA, Daniel J. MCDUFF, Jin A. SUH, Kael R. ROWAN, Mary P. CZERWINSKI, Prasanth MURALI, Mohammad AKRAM
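
A small ranking sketch shows the shape of the spotlighting step: score each audience stream for visible reactions and surface the top scorers to the presenter. The per-frame reaction tags below stand in for the video-frame analysis the abstract attributes to the spotlight audience server.

```python
# Illustrative sketch of publication 20220358308: rank audience members
# by reaction strength and spotlight the strongest reactors. The tagged
# frames stand in for real reaction detection on video.
def reaction_score(face_frames):
    """Fraction of a member's frames that show any reaction."""
    return sum(1 for f in face_frames if f != "neutral") / len(face_frames)

def pick_spotlights(audience, k=2):
    """Return the k members whose streams show the strongest reactions."""
    ranked = sorted(audience, key=lambda name: reaction_score(audience[name]),
                    reverse=True)
    return ranked[:k]

audience = {
    "Ana":  ["smile", "smile", "neutral", "nod"],
    "Ben":  ["neutral", "neutral", "neutral", "neutral"],
    "Cleo": ["surprise", "neutral", "smile", "neutral"],
}
print("Spotlight:", pick_spotlights(audience))  # Ana and Cleo react most
```
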
  • Publication number: 20220138583
    Abstract: Generally discussed herein are devices, systems, and methods for data normalization. A method can include obtaining a normalizing autoencoder trained based on first data samples of a template person and second data samples of a variety of people; normalizing, by the normalizing autoencoder, an input data sample by combining dynamic characteristics of a person in the input data sample with static characteristics of the first data samples to generate normalized data; and providing the normalized data as input to a classifier model to classify the input data based on the dynamic characteristics of the input data and the static characteristics of the first data samples.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Inventors: Javier Hernandez Rivera, Daniel McDuff, Mary P. Czerwinski
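
The normalization idea reads abstractly, so here is a conceptual sketch: keep a sample's dynamic characteristics but adopt the template person's static characteristics before classification. In the publication that recombination is learned by an autoencoder; the explicit feature swap below is only a stand-in for the learned mapping.

```python
# Conceptual sketch of publication 20220138583: swap a sample's static
# (identity-like) traits for a fixed template's while keeping the
# sample's dynamics, then classify the normalized result. The swap
# stands in for the learned normalizing autoencoder.
from dataclasses import dataclass, replace

@dataclass
class Sample:
    static: dict   # identity-like traits (e.g., face shape)
    dynamic: dict  # behavior over time (e.g., expression intensity)

TEMPLATE = Sample(static={"face_shape": "template"}, dynamic={})

def normalize(sample: Sample) -> Sample:
    """Keep the input's dynamics; adopt the template's static traits."""
    return replace(sample, static=dict(TEMPLATE.static))

def classify(sample: Sample) -> str:
    """Toy downstream classifier over the dynamic characteristics."""
    smiling = sample.dynamic.get("smile_intensity", 0.0) > 0.5
    return "smiling" if smiling else "not smiling"

raw = Sample(static={"face_shape": "person_42"},
             dynamic={"smile_intensity": 0.8})
print(classify(normalize(raw)))  # identity removed, dynamics preserved
```
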
  • Publication number: 20210398337
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiologic characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
  • Patent number: 8943526
    Abstract: Technologies described herein relate to estimating engagement of a person with respect to content being presented to the person. A sensor outputs a stream of data relating to the person as the person is consuming the content. At least one feature is extracted from the stream of data, and a level of engagement of the person is estimated based at least in part upon the at least one feature. A computing function is performed based upon the estimated level of engagement of the person.
    Type: Grant
    Filed: April 19, 2013
    Date of Patent: January 27, 2015
    Assignee: Microsoft Corporation
    Inventors: Javier Hernandez Rivera, Zicheng Liu, Geoff Hulten, Michael Conrad, Kyle Krum, David DeBarr, Zhengyou Zhang
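
This grant shares its abstract with publication 20130232515 below, so a single sketch covers both: extract a feature from a sensor stream, map it to an engagement level, and key a computing function on the result. The gaze-on-screen fraction and the 0.4 threshold are invented stand-ins for whichever features and model a real system would use.

```python
# Toy sketch of patent 8943526: feature extraction from a sensor stream,
# engagement estimation, and a downstream computing function. The feature
# and threshold are assumptions, not the patented choices.
def gaze_on_screen_fraction(gaze_samples):
    """Feature extraction: share of samples where gaze hits the screen."""
    return sum(gaze_samples) / len(gaze_samples)

def estimate_engagement(gaze_samples):
    """Map the feature to an engagement level (a trained model in practice)."""
    return gaze_on_screen_fraction(gaze_samples)

def computing_function(engagement):
    """Example action keyed on the estimate."""
    return "pause content" if engagement < 0.4 else "keep playing"

stream = [True, True, False, True, False, False, True, True]  # sensor output
print(computing_function(estimate_engagement(stream)))  # 0.625 -> keep playing
```
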
  • Publication number: 20130232515
    Abstract: Technologies described herein relate to estimating engagement of a person with respect to content being presented to the person. A sensor outputs a stream of data relating to the person as the person is consuming the content. At least one feature is extracted from the stream of data, and a level of engagement of the person is estimated based at least in part upon the at least one feature. A computing function is performed based upon the estimated level of engagement of the person.
    Type: Application
    Filed: April 19, 2013
    Publication date: September 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Javier Hernandez Rivera, Zicheng Liu, Geoff Hulten, Michael Conrad, Kyle Krum, David DeBarr, Zhengyou Zhang