Patents by Inventor Daniel J. McDuff

Daniel J. McDuff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240086761
    Abstract: The present concepts include a neuroergonomic service that processes multimodal physiological, digital, and/or environmental inputs from a user and predicts cognitive states of the user. The neuroergonomic service thus provides personalized feedback to the user about their current mental and physiological well-being, enabling modulation of mood, stress, attention, and other cognitive measures for improved productivity and satisfaction. The neuroergonomic service utilizes machine learning models that are trained offline using sensor inputs taken from participants in controlled sessions that purposefully induce an array of cognitive states in the participants.
    Type: Application
    Filed: September 13, 2022
    Publication date: March 14, 2024
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Aashish PATEL, Weiwei YANG, Hayden HELM, Daniel J. MCDUFF, Siddharth SIDDHARTH, Jen-Tse DONG
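
A minimal sketch of the offline training step described in the abstract above, assuming the multimodal inputs reduce to four toy features (heart rate, electrodermal activity, keystroke rate, ambient noise). The feature set, the synthetic data, and the gradient-boosted classifier are illustrative assumptions, not the method the filing specifies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Each row: [heart_rate_bpm, eda_microsiemens, keystrokes_per_min, ambient_noise_db]
X = rng.normal(loc=[72, 4.0, 180, 45], scale=[10, 1.5, 60, 8], size=(200, 4))
y = rng.integers(0, 3, size=200)  # induced states, e.g. 0=calm, 1=focused, 2=stressed

# Train offline on labeled sessions; the deployed service would apply the
# model to live sensor features to predict the user's cognitive state.
model = GradientBoostingClassifier().fit(X, y)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```
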
  • Publication number: 20230419581
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiological characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: September 11, 2023
    Publication date: December 28, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
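
A toy sketch of the color-modulation idea in the abstract above, assuming the sub-surface change amounts to shifting skin-pixel color channels by a blood-volume-pulse (BVP) waveform. The albedo patch, the per-channel coefficients, and the sinusoidal pulse model are all assumptions, not the patented rendering pipeline.

```python
import numpy as np

fps, seconds, pulse_hz = 30, 2, 1.2                   # ~72 bpm heart rate
t = np.arange(fps * seconds) / fps
bvp = 0.5 * (1 + np.sin(2 * np.pi * pulse_hz * t))    # normalized pulse waveform

albedo = np.full((64, 64, 3), [0.80, 0.55, 0.45])     # toy skin albedo (RGB)
skin_mask = np.ones((64, 64, 1))                      # assume the whole patch is skin

frames = []
for sample in bvp:
    # Higher blood volume absorbs more green (and some blue) light
    shift = np.array([0.0, -0.004, -0.001]) * sample
    frames.append(np.clip(albedo + skin_mask * shift, 0.0, 1.0))

video = np.stack(frames)  # (frames, H, W, 3): a pulsing, physio-realistic sequence
print(video.shape)
```

A sequence like this, rendered at scale, is what the abstract describes feeding to a machine learning model as labeled training video.
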
  • Patent number: 11813091
    Abstract: Systems and methods are directed to animating subtle physiological processes directly on avatars and photos. That is, physiologically-grounded spatial, color space, and temporal modifications may be made to the appearance of an avatar to simulate a physiological characteristic, such as blood flow. More specifically, a frame of a video sequence and a physiological signal may be received. An attention mask may be generated based on the received physiological signal, where the attention mask includes attention weights indicative of a strength of the physiological signal for differing portions of the frame of the video sequence. Accordingly, a pixel adjustment value based on the physiological signal and the attention mask may be generated and applied to an identified pixel in the frame of the video sequence.
    Type: Grant
    Filed: February 9, 2023
    Date of Patent: November 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Daniel J. McDuff
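
An illustrative sketch of the per-pixel adjustment described in the abstract above: an attention mask weights how strongly the physiological signal modulates each pixel of the frame. The Gaussian-shaped mask and the green-channel adjustment are stand-in assumptions, not the patented mask-generation method.

```python
import numpy as np

h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
# Attention strongest at the center of the (toy) face region, falling off outward
attention = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * 15.0 ** 2))

signal_value = 0.8                        # physiological signal at this frame
frame = np.full((h, w, 3), 0.6)           # toy video frame
adjustment = -0.01 * signal_value * attention    # per-pixel adjustment values
frame[..., 1] = np.clip(frame[..., 1] + adjustment, 0.0, 1.0)  # apply to green
```
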
  • Patent number: 11790586
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiological characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: October 17, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McDuff, Javier Hernandez Rivera, Tadas Baltrusaitis, Erroll William Wood
  • Publication number: 20230181124
    Abstract: Systems and methods are directed to animating subtle physiological processes directly on avatars and photos. That is, physiologically-grounded spatial, color space, and temporal modifications may be made to the appearance of an avatar to simulate a physiological characteristic, such as blood flow. More specifically, a frame of a video sequence and a physiological signal may be received. An attention mask may be generated based on the received physiological signal, where the attention mask includes attention weights indicative of a strength of the physiological signal for differing portions of the frame of the video sequence. Accordingly, a pixel adjustment value based on the physiological signal and the attention mask may be generated and applied to an identified pixel in the frame of the video sequence.
    Type: Application
    Filed: February 9, 2023
    Publication date: June 15, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventor: Daniel J. MCDUFF
  • Publication number: 20230083418
    Abstract: Various methods and apparatus relating to estimating and mitigating a stress level of a user are disclosed herein. Methods can include collecting potential stress indicator data from the user interacting with a computing device. The potential stress indicator data can include one or more of environmental data and contextual data associated with the user. Methods can include estimating the stress level of the user based on the potential stress indicator data. Methods can include performing an evaluation of whether to mitigate the stress level of the user via one or more stress mitigation interventions. Methods can include presenting the one or more stress mitigation interventions to the user via a graphical user interface (GUI) when the evaluation indicates that the stress level should be mitigated.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Mary P. CZERWINSKI, Kael R. ROWAN, Jin A. SUH, Gonzalo A. RAMOS
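
The estimate-then-evaluate flow in the abstract above might look like the following sketch, where the indicator set, the linear weighting, and the intervention threshold are all illustrative assumptions rather than the claimed method.

```python
from dataclasses import dataclass

@dataclass
class StressIndicators:
    typing_speed_drop: float   # contextual: fraction below the user's baseline
    meeting_hours: float       # contextual: meeting load today
    ambient_noise_db: float    # environmental

def estimate_stress(ind: StressIndicators) -> float:
    """Combine indicators into a 0-1 stress score (toy linear model)."""
    score = (0.5 * ind.typing_speed_drop
             + 0.08 * ind.meeting_hours
             + 0.005 * max(ind.ambient_noise_db - 50, 0))
    return min(score, 1.0)

def maybe_intervene(ind: StressIndicators, threshold: float = 0.6) -> list[str]:
    """Return stress mitigation interventions for the GUI, or none."""
    if estimate_stress(ind) < threshold:
        return []
    return ["Take a 5-minute breathing break",
            "Block focus time on your calendar"]

print(maybe_intervene(StressIndicators(0.8, 5, 62)))
```
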
  • Patent number: 11602314
    Abstract: Systems and methods are directed to animating subtle physiological processes directly on avatars and photos. That is, physiologically-grounded spatial, color space, and temporal modifications may be made to the appearance of an avatar to simulate a physiological characteristic, such as blood flow. More specifically, a frame of a video sequence and a physiological signal may be received. An attention mask may be generated based on the received physiological signal, where the attention mask includes attention weights indicative of a strength of the physiological signal for differing portions of the frame of the video sequence. Accordingly, a pixel adjustment value based on the physiological signal and the attention mask may be generated and applied to an identified pixel in the frame of the video sequence.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: March 14, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Daniel J. McDuff
  • Publication number: 20220400994
    Abstract: Systems and methods for providing a user characteristic to a service provider for a virtual conference with a user are provided. In particular, a computing device may collect raw media data associated with the user during the virtual conference between the user and the service provider. During the virtual conference, the computing device may perform a first processing of the raw media data to extract intermediate user data, wherein the intermediate user data comprises one or more of a physiological signal and a behavioral signal associated with the user. The computing device may further transform the raw media data into transformed media data and transmit the transformed media data with the intermediate user data to a server for second processing of the intermediate user data.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 22, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Javier Hernandez RIVERA, Daniel J. MCDUFF, Jin A. SUH, Kael R. ROWAN, Mary P. CZERWINSKI
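
The two-stage split described above, with on-device first processing plus a fidelity-reducing transform before upload, could be sketched as below. The function names, the mean-green proxy signal, and the downsampling transform are hypothetical stand-ins for the claimed processing steps.

```python
import numpy as np

def first_processing(raw_frames: np.ndarray) -> dict:
    """Extract an intermediate signal on-device (toy proxy for a pulse trace)."""
    green_trace = raw_frames[..., 1].mean(axis=(1, 2))  # mean green per frame
    return {"green_trace": green_trace.tolist()}

def transform_media(raw_frames: np.ndarray, factor: int = 8) -> np.ndarray:
    """Reduce media fidelity before transmission (toy privacy transform)."""
    return raw_frames[:, ::factor, ::factor, :]

raw = np.random.rand(30, 64, 64, 3)          # one second of toy video
payload = {"intermediate": first_processing(raw),
           "media": transform_media(raw)}     # what the server receives
print(payload["media"].shape)
```
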
  • Publication number: 20220358308
    Abstract: The present disclosure relates to highlighting audience members who react to a presenter during an online meeting. Unlike a physical, face-to-face meeting, which enables spontaneous interaction between the presenter and a collocated audience, presenting material during an online meeting leaves the presenter unable to see real-time reactions or feedback from audience members. The present disclosure addresses this issue by dynamically determining one or more audience members who indicate reactions during the online meeting or presentation and displaying the faces of those audience members under a spotlight to the presenter. The presenter sees the faces of reacting audience members during the online presentation and can respond to them to keep the audience engaged. The spotlight audience server analyzes video frames and determines the types of reactions of the audience members.
    Type: Application
    Filed: June 24, 2021
    Publication date: November 10, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Javier HERNANDEZ RIVERA, Daniel J. MCDUFF, Jin A. SUH, Kael R. ROWAN, Mary P. CZERWINSKI, Prasanth MURALI, Mohammad AKRAM
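
One way to picture the spotlighting step in the abstract above: score each audience member's latest frame for reaction strength and surface the top-k faces to the presenter. The detect_reaction stand-in below (frame variance as "activity") is an assumption in place of the server's actual facial-expression analysis.

```python
import numpy as np

def detect_reaction(frame: np.ndarray) -> float:
    """Stand-in reaction score; a real system would run an expression model."""
    return float(frame.std())

def spotlight(frames_by_user: dict[str, np.ndarray], k: int = 2) -> list[str]:
    """Pick the k audience members with the strongest reactions."""
    scores = {user: detect_reaction(f) for user, f in frames_by_user.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

audience = {name: np.random.rand(64, 64, 3) for name in ["ana", "bo", "chen", "dee"]}
print("Spotlight:", spotlight(audience))
```
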
  • Publication number: 20210398337
    Abstract: Systems and methods are provided that are directed to generating video sequences including physio-realistic avatars. In examples, an albedo for an avatar is received, a sub-surface skin color associated with the albedo is modified based on physiological data associated with a physiological characteristic, and an avatar based on the albedo and the modified sub-surface skin color is rendered. The rendered avatar may then be synthesized in a frame of video. In some examples, a video including the synthesized avatar may be used to train a machine learning model to detect a physiological characteristic. The machine learning model may receive a plurality of video segments, where one or more of the video segments includes a synthetic physio-realistic avatar generated with the physiological characteristic. The machine learning model may be trained using the plurality of video segments. The trained model may be provided to a requesting entity.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 23, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. MCDUFF, Javier HERNANDEZ RIVERA, Tadas BALTRUSAITIS, Erroll William WOOD
  • Publication number: 20210386383
    Abstract: Systems and methods are directed to animating subtle physiological processes directly on avatars and photos. That is, physiologically-grounded spatial, color space, and temporal modifications may be made to the appearance of an avatar to simulate a physiological characteristic, such as blood flow. More specifically, a frame of a video sequence and a physiological signal may be received. An attention mask may be generated based on the received physiological signal, where the attention mask includes attention weights indicative of a strength of the physiological signal for differing portions of the frame of the video sequence. Accordingly, a pixel adjustment value based on the physiological signal and the attention mask may be generated and applied to an identified pixel in the frame of the video sequence.
    Type: Application
    Filed: June 15, 2020
    Publication date: December 16, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventor: Daniel J. McDuff
  • Publication number: 20200279553
    Abstract: A conversational agent that is implemented as a voice-only agent or embodied with a face may match the speech and facial expressions of a user. Linguistic style-matching by the conversational agent may be implemented by identifying prosodic characteristics of the user's speech and synthesizing speech for the virtual agent with the same or similar characteristics. The facial expressions of the user can be identified and mimicked by the face of an embodied conversational agent. Utterances by the virtual agent may be based on a combination of predetermined scripted responses and open-ended responses generated by machine learning techniques. A conversational agent that aligns with the conversational style and facial expressions of the user may be perceived as more trustworthy and easier to understand, and may create a more natural human-machine interaction.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Daniel J. MCDUFF, Kael R. ROWAN, Mary P. CZERWINSKI, Deepali ANEJA, Rens HOEGEN
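
A minimal sketch of the prosody matching described above, under the assumption that conversational style reduces to two features (speaking rate and mean pitch) that the agent's synthesizer can blend toward. The Prosody type and the blend rule are illustrative assumptions, not the filed method.

```python
from dataclasses import dataclass

@dataclass
class Prosody:
    words_per_min: float
    pitch_hz: float

def match(user: Prosody, blend: float = 0.7) -> Prosody:
    """Move the agent's default prosody toward the user's, without full mimicry."""
    default = Prosody(words_per_min=150, pitch_hz=120)
    return Prosody(
        words_per_min=default.words_per_min
                      + blend * (user.words_per_min - default.words_per_min),
        pitch_hz=default.pitch_hz + blend * (user.pitch_hz - default.pitch_hz),
    )

print(match(Prosody(words_per_min=190, pitch_hz=200)))
```
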
  • Publication number: 20110251493
    Abstract: Method and system for measuring physiological parameters. The method includes capturing a video sequence of images of a human face, identifying the location of the face in a frame of the video, and establishing a region of interest including the face. Pixels in the region of interest in a frame are separated into at least two channel values, forming raw traces over time. The raw traces are decomposed into at least two independent source signals. At least one of the source signals is processed to obtain a physiological parameter.
    Type: Application
    Filed: March 16, 2011
    Publication date: October 13, 2011
    Applicant: Massachusetts Institute of Technology
    Inventors: Ming-Zher Poh, Daniel J. McDuff, Rosalind W. Picard
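
The pipeline in the abstract above (channel traces from a face region, decomposition into independent sources, and recovery of a physiological parameter) maps naturally onto independent component analysis plus a spectral peak pick. In this sketch, synthetic RGB traces stand in for real video, and the heart-rate band limits and FastICA choice are assumptions consistent with, but not mandated by, the abstract.

```python
import numpy as np
from sklearn.decomposition import FastICA

fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
pulse = np.sin(2 * np.pi * 1.2 * t)                 # 72 bpm cardiac component
noise = np.random.default_rng(1).normal(size=(3, t.size))
traces = np.stack([0.3 * pulse, 1.0 * pulse, 0.6 * pulse]) + 0.8 * noise  # R, G, B

# Decompose the raw traces into independent source signals
sources = FastICA(n_components=3, random_state=0).fit_transform(traces.T).T

freqs = np.fft.rfftfreq(t.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)                # plausible heart-rate band

# Pick the source with the strongest in-band peak and report its frequency
spectra = np.abs(np.fft.rfft(sources, axis=1))
best = spectra[:, band].max(axis=1).argmax()
bpm = 60 * freqs[band][spectra[best, band].argmax()]
print(f"Estimated heart rate: {bpm:.0f} bpm")
```
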