Patents by Inventor Daniel Jonathan McDuff

Daniel Jonathan McDuff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220378310
    Abstract: A head-mounted device includes one or more eye-tracking cameras and one or more computer-readable hardware storage devices having stored thereon computer-executable instructions, including a machine-learned artificial intelligence (AI) model. The head-mounted device is configured to cause the one or more eye-tracking cameras to take a series of images of one or more areas of skin around one or more eyes of a wearer, and use the machine-learned AI model to analyze the series of images to extract a photoplethysmography waveform. A heart rate is then detected based on the photoplethysmography waveform.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Inventors: Jonathan Eric FOSTER, Daniel Jonathan MCDUFF
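
A short illustration may help with the last step this abstract describes: detecting a heart rate once the photoplethysmography (PPG) waveform has been extracted. The patent's machine-learned model is not reproduced here, so the sketch below assumes the waveform is already available as a 1-D array; the function name, the filter order, and the 0.7–4.0 Hz pass band (roughly 42–240 beats per minute) are illustrative choices, not details from the filing.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(ppg: np.ndarray, fs: float) -> float:
    """Estimate heart rate in BPM from a PPG waveform sampled at fs Hz."""
    # Band-pass to the plausible human heart-rate band (~42-240 BPM).
    low, high = 0.7, 4.0
    b, a = butter(3, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, ppg - ppg.mean())

    # Heart rate = dominant spectral peak inside the pass band.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fs)
    in_band = (freqs >= low) & (freqs <= high)
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return peak_hz * 60.0

# Synthetic check: a noisy 1.25 Hz oscillation should read as ~75 BPM.
fs = 30.0  # a typical eye-tracking camera frame rate
t = np.arange(0, 20, 1.0 / fs)
fake_ppg = np.sin(2 * np.pi * 1.25 * t) + 0.1 * np.random.randn(t.size)
print(round(estimate_heart_rate(fake_ppg, fs)))  # ~75
```
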
  • Patent number: 10799182
    Abstract: Frames of a video frame sequence capturing one or more skin regions of a body are provided to a first neural network. The first neural network generates respective appearance representations based on the frames. An appearance representation generated based on a particular frame is indicative of a spatial distribution of a physiological signal across the particular frame. Simultaneously with providing the frames to the first neural network, the frames are also provided to a second neural network. The second neural network determines the physiological signal based on the frames. Determining the physiological signal by the second neural network includes applying the appearance representations, generated by the first neural network, to outputs of one or more layers of the second neural network to emphasize regions, in the frames, that exhibit a relatively stronger presence of the physiological signal and deemphasize regions that exhibit a relatively weaker presence of the physiological signal.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: October 13, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Jonathan McDuff, Weixuan Chen
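
The core mechanism in this grant, an appearance network whose outputs gate a second signal-estimation network, can be sketched compactly. The PyTorch fragment below is a minimal illustration under stated assumptions: the layer widths, the tanh activations, the 1x1-convolution-plus-sigmoid attention head, the L1 mask normalization, and the use of frame differences as the second branch's input are all choices made here for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn

class AttentionMask(nn.Module):
    """Map appearance features to a normalized soft-attention mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, appearance_feat: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.conv(appearance_feat))  # (B, 1, H, W)
        # L1-normalize so the mask re-weights regions rather than
        # rescaling the overall feature magnitude.
        norm = mask.sum(dim=(2, 3), keepdim=True)
        return mask * mask.shape[2] * mask.shape[3] / (2.0 * norm)

class TwoBranchPulseNet(nn.Module):
    """Appearance branch gates a second branch that regresses the signal."""
    def __init__(self):
        super().__init__()
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
            nn.Conv2d(32, 32, 3, padding=1), nn.Tanh(),
        )
        self.attention = AttentionMask(32)
        self.motion = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.Tanh(),
            nn.Conv2d(32, 32, 3, padding=1), nn.Tanh(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, frame: torch.Tensor, frame_diff: torch.Tensor):
        # Where in the frame the physiological signal appears strongest.
        mask = self.attention(self.appearance(frame))
        # Emphasize those regions in the second branch's feature maps.
        gated = self.motion(frame_diff) * mask
        return self.head(gated)  # one signal value per frame

net = TwoBranchPulseNet()
frames = torch.randn(8, 3, 36, 36)   # raw video frames
diffs = torch.randn(8, 3, 36, 36)    # e.g. normalized frame differences
print(net(frames, diffs).shape)      # torch.Size([8, 1])
```
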
  • Publication number: 20200121256
    Abstract: Frames of a video frame sequence capturing one or more skin regions of a body are provided to a first neural network. The first neural network generates respective appearance representations based on the frames. An appearance representation generated based on a particular frame is indicative of a spatial distribution of a physiological signal across the particular frame. Simultaneously with providing the frames to the first neural network, the frames are also provided to a second neural network. The second neural network determines the physiological signal based on the frames. Determining the physiological signal by the second neural network includes applying the appearance representations, generated by the first neural network, to outputs of one or more layers of the second neural network to emphasize regions, in the frames, that exhibit a relatively stronger presence of the physiological signal and deemphasize regions that exhibit a relatively weaker presence of the physiological signal.
    Type: Application
    Filed: October 19, 2018
    Publication date: April 23, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel Jonathan MCDUFF, Weixuan CHEN
  • Publication number: 20130204535
    Abstract: Described herein are various technologies pertaining to estimating affective states of a user by way of monitoring data streams output by sensors and user activity on a computing device. Models of valence, arousal, and engagement can be learned during a training phase, and such models can be employed to compute values that are indicative of valence, arousal, and engagement of a user in near-real time. A visualization that represents estimated affective states of a user over time is generated to facilitate user reflection.
    Type: Application
    Filed: February 3, 2012
    Publication date: August 8, 2013
    Applicant: Microsoft Corporation
    Inventors: Ashish Kapoor, Amy Karlson, Mary P. Czerwinski, Asta Roseway, Daniel Jonathan McDuff
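
The abstract above separates an offline training phase, in which models of valence, arousal, and engagement are learned, from a near-real-time phase in which those models score incoming data. The sketch below illustrates only that two-phase shape; the feature set, the synthetic stand-in data, and the ridge-regression learner are all assumptions made for illustration, since the abstract does not commit to particular features or model families.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical per-window features from sensor streams and device activity,
# e.g. heart rate, skin conductance, keystroke rate, window switches.
X_train = rng.normal(size=(500, 4))

# Self-reported ratings collected during the training phase (synthetic here).
labels = {
    "valence": rng.uniform(-1.0, 1.0, 500),
    "arousal": rng.uniform(0.0, 1.0, 500),
    "engagement": rng.uniform(0.0, 1.0, 500),
}

# Training phase: one model per affective dimension.
models = {name: Ridge(alpha=1.0).fit(X_train, y) for name, y in labels.items()}

def estimate_affect(window_features: np.ndarray) -> dict:
    """Score one new feature window on each affective dimension."""
    x = window_features.reshape(1, -1)
    return {name: float(m.predict(x)[0]) for name, m in models.items()}

# Near-real-time phase: score each incoming window as it arrives; the
# resulting time series is what a reflection visualization would plot.
print(estimate_affect(rng.normal(size=4)))
```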