Patents by Inventor Daniel McDuff

Daniel McDuff has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11657288
    Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: May 23, 2023
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
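
The entry above describes a multilayered analysis engine built on a convolutional neural network that locates a facial portion in an image and classifies its expression. Below is a minimal, hypothetical sketch of such a classifier in PyTorch; the layer sizes, the 64x64 face crop, and the expression labels are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch of a CNN that maps a cropped face image to a facial
# expression class. Architecture and labels are illustrative assumptions.
import torch
import torch.nn as nn

EXPRESSIONS = ["neutral", "smile", "brow_furrow", "surprise"]  # assumed labels

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        # face: batch of 3x64x64 crops of the facial portion of the image
        x = self.features(face)
        return self.classifier(x.flatten(start_dim=1))

model = ExpressionCNN()
logits = model(torch.randn(1, 3, 64, 64))       # evaluate a subject image
print(EXPRESSIONS[int(logits.argmax(dim=1))])   # predicted facial expression
```
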
  • Patent number: 11430260
    Abstract: Techniques for performing viewing verification using a plurality of classifiers are disclosed. Images of an individual may be obtained concurrently with an electronic display presenting one or more images. Image classifiers for facial and head pose analysis may be obtained. The images of the individual may be analyzed to identify a face of the individual in one of the plurality of images. A viewing verification metric may be calculated using the image classifiers and a verified viewing duration of the screen images by the individual may be calculated based on the plurality of images and the analyzing. Viewing verification can involve determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. A viewing verification metric can be generated in order to determine a level of interest of the individual in particular media and images.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: August 30, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
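
As a rough illustration of the verified-viewing computation described in the entry above, the sketch below assumes per-frame classifier outputs (face present, facing the screen, gazing at the screen) and counts a frame toward verified viewing only when all three hold; the FrameResult structure and the frame interval are assumptions for illustration.

```python
# Hypothetical sketch: compute a verified viewing duration and a viewing
# verification metric from per-frame classifier outputs.
from dataclasses import dataclass

@dataclass
class FrameResult:            # assumed per-frame classifier output
    face_present: bool
    facing_screen: bool
    gazing_at_screen: bool

def verified_viewing(frames: list[FrameResult], frame_interval_s: float) -> tuple[float, float]:
    """Return (verified viewing duration in seconds, viewing verification metric 0..1)."""
    verified = sum(f.face_present and f.facing_screen and f.gazing_at_screen for f in frames)
    duration = verified * frame_interval_s
    metric = verified / len(frames) if frames else 0.0
    return duration, metric

frames = [FrameResult(True, True, True), FrameResult(True, True, False), FrameResult(True, True, True)]
print(verified_viewing(frames, frame_interval_s=1 / 30))
```
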
  • Patent number: 11393133
    Abstract: A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: July 19, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
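
The entry above translates facial content into an emoji by summing classifier confidence values over facial action units. A minimal, hypothetical sketch of that scoring step follows; the action units, confidences, and emoji-to-action-unit associations are invented for illustration.

```python
# Hypothetical sketch: pick an emoji by summing the action-unit confidences
# associated with each candidate emoji. All values below are illustrative.
AU_CONFIDENCE = {"AU6_cheek_raiser": 0.8, "AU12_lip_corner_puller": 0.9, "AU4_brow_lowerer": 0.1}

EMOJI_ACTION_UNITS = {                      # assumed emoji-to-action-unit mapping
    "😀": ["AU6_cheek_raiser", "AU12_lip_corner_puller"],
    "😠": ["AU4_brow_lowerer"],
}

def select_emoji(confidences: dict[str, float]) -> str:
    scores = {emoji: sum(confidences.get(au, 0.0) for au in aus)
              for emoji, aus in EMOJI_ACTION_UNITS.items()}
    return max(scores, key=scores.get)

print(select_emoji(AU_CONFIDENCE))  # -> "😀"
```
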
  • Publication number: 20220138583
    Abstract: Generally discussed herein are devices, systems, and methods for data normalization. A method can include obtaining a normalizing autoencoder trained on first data samples of a template person and second data samples of a variety of people; normalizing, by the normalizing autoencoder, an input data sample by combining dynamic characteristics of a person in the input data sample with static characteristics in the first data samples, to generate normalized data; and providing the normalized data as input to a classifier model to classify the input data based on the dynamic characteristics of the input data and the static characteristics of the first data samples.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Inventors: Javier Hernandez Rivera, Daniel McDuff, Mary P. Czerwinski
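
A minimal, hypothetical sketch of the normalizing-autoencoder idea in the application above: an encoder extracts dynamic characteristics from an input sample, and a decoder recombines them with a fixed embedding of the template person's static characteristics before the result is passed to a downstream classifier. The feature dimensions and the learned template embedding are assumptions, not details from the application.

```python
# Hypothetical sketch of a normalizing autoencoder that combines the dynamic
# content of an input with a template person's static characteristics.
import torch
import torch.nn as nn

class NormalizingAutoencoder(nn.Module):
    def __init__(self, feat_dim: int = 128, dyn_dim: int = 32, static_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, dyn_dim))
        self.decoder = nn.Sequential(nn.Linear(dyn_dim + static_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
        # learned embedding of the template person's static characteristics (assumed)
        self.template_static = nn.Parameter(torch.zeros(static_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dynamic = self.encoder(x)                                   # person-varying content
        static = self.template_static.expand(x.shape[0], -1)        # template's static traits
        return self.decoder(torch.cat([dynamic, static], dim=-1))   # normalized sample

normalizer = NormalizingAutoencoder()
normalized = normalizer(torch.randn(4, 128))   # normalized data for a downstream classifier
print(normalized.shape)
```
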
  • Patent number: 11232290
    Abstract: Images are analyzed using sub-sectional component evaluation in order to augment classifier usage. An image of an individual is obtained. The face of the individual is identified, and regions within the face are determined. The individual is evaluated to be within a sub-sectional component of a population based on a demographic or based on an activity. An evaluation of content of the face is performed based on the individual being within a sub-sectional component of a population. The sub-sectional component of a population is used for disambiguating among content types for the content of the face. A Bayesian framework that includes a conditional probability is used to perform the evaluation of the content of the face, and the evaluation is further based on a prior event that occurred.
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: January 25, 2022
    Assignee: Affectiva, Inc.
    Inventors: Daniel McDuff, Rana el Kaliouby
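
The entry above evaluates facial content with a Bayesian framework in which the sub-sectional component of the population an individual belongs to helps disambiguate among content types. The sketch below is one hypothetical way to express that: a classifier likelihood is combined with a subgroup-conditioned prior via Bayes' rule. All subgroups and probabilities shown are illustrative assumptions.

```python
# Hypothetical sketch: disambiguate facial content types by combining a
# classifier likelihood with a prior conditioned on the viewer's subgroup.
PRIOR_BY_SUBGROUP = {                      # P(content | subgroup), assumed values
    "adult": {"smile": 0.5, "smirk": 0.5},
    "child": {"smile": 0.8, "smirk": 0.2},
}

def posterior(likelihood: dict[str, float], subgroup: str) -> dict[str, float]:
    prior = PRIOR_BY_SUBGROUP[subgroup]
    unnormalized = {c: likelihood[c] * prior[c] for c in prior}
    z = sum(unnormalized.values())
    return {c: v / z for c, v in unnormalized.items()}

# The same classifier evidence is resolved differently for different subgroups.
likelihood = {"smile": 0.6, "smirk": 0.4}  # P(observation | content) from an image classifier
print(posterior(likelihood, "adult"))
print(posterior(likelihood, "child"))
```
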
  • Patent number: 10874310
    Abstract: In illustrative implementations of this invention, a photoplethysmographic device measures variations of light that is reflected from, or transmitted through, human skin. In some implementations, the device includes a camera that takes the measurements remotely. In others, the device touches the skin during the measurements. The device includes a camera or other light sensor, which includes at least orange, green and cyan color channels. In some cases, such as a contact device, the device includes three or more colors of active light sources, including at least orange, green and cyan light sources. A computer analyzes the sensor data, in order to estimate a cardiac blood volume pulse wave. For each cardiac pulse, a computer detects the systolic peak and diastolic inflection of the wave, by calculating a second derivative of the wave. From the estimated wave, a computer estimates heart rate, heart rate variability and respiration rate.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: December 29, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Daniel McDuff, Rosalind Picard, Sarah Pratt
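
The patent above detects the systolic peak and diastolic inflection of a blood volume pulse wave with the help of its second derivative, and estimates heart rate and heart rate variability from the result. The sketch below is a rough, hypothetical version of that pipeline on a synthetic signal; the sampling rate, peak-spacing constraint, and SDNN-style variability measure are assumptions for illustration.

```python
# Hypothetical sketch: locate systolic peaks in a pulse wave with help from
# its second derivative, then estimate heart rate and its variability.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                              # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
bvp = np.sin(2 * np.pi * 1.2 * t)                       # synthetic 72 bpm pulse wave

d2 = np.gradient(np.gradient(bvp, 1 / fs), 1 / fs)      # second derivative of the wave
systolic_peaks, _ = find_peaks(bvp, distance=fs * 0.4)  # candidate systolic peaks
# keep peaks where the wave is concave (second derivative below zero)
systolic_peaks = systolic_peaks[d2[systolic_peaks] < 0]

intervals = np.diff(systolic_peaks) / fs                # inter-beat intervals in seconds
print("heart rate (bpm):", 60.0 / intervals.mean())
print("heart rate variability (SDNN, s):", intervals.std())
```
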
  • Publication number: 20200302235
    Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
    Type: Application
    Filed: June 8, 2020
    Publication date: September 24, 2020
    Applicant: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
  • Publication number: 20200219295
    Abstract: A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
    Type: Application
    Filed: March 19, 2020
    Publication date: July 9, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
  • Publication number: 20200175262
    Abstract: Techniques for performing robotic assistance are disclosed. A plurality of images of an individual is obtained by an imagery module associated with an autonomous mobile robot. Cognitive state data including facial data for the individual in the plurality of images is identified by an analysis module associated with the autonomous mobile robot. A facial expression metric, based on the facial data for the individual in the plurality of images, is calculated. A cognitive state metric for the individual is generated by the analysis module based on the cognitive state data. The autonomous mobile robot initiates one or more responses based on the cognitive state metric. The one or more responses include one or more electromechanical responses. The one or more electromechanical responses cause the robot to change locations.
    Type: Application
    Filed: February 4, 2020
    Publication date: June 4, 2020
    Applicant: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
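
As a hypothetical illustration of the control flow described in the application above, the sketch below aggregates per-frame facial expression metrics into a cognitive state metric and selects an electromechanical response (such as changing location) when the metric falls below a threshold; the aggregation rule, the 0.3 threshold, and the response names are assumptions.

```python
# Hypothetical sketch: derive a cognitive state metric from facial expression
# metrics and choose a robot response based on it.
from statistics import mean

def cognitive_state_metric(facial_expression_metrics: list[float]) -> float:
    """Aggregate per-frame facial expression metrics into one cognitive state score."""
    return mean(facial_expression_metrics) if facial_expression_metrics else 0.0

def choose_response(metric: float) -> str:
    # below the assumed threshold, the robot changes location to re-engage the individual
    return "approach_individual" if metric < 0.3 else "hold_position"

metric = cognitive_state_metric([0.1, 0.2, 0.25])
print(metric, choose_response(metric))
```
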
  • Publication number: 20200134295
    Abstract: Techniques for performing viewing verification using a plurality of classifiers are disclosed. Images of an individual may be obtained concurrently with an electronic display presenting one or more images. Image classifiers for facial and head pose analysis may be obtained. The images of the individual may be analyzed to identify a face of the individual in one of the plurality of images. A viewing verification metric may be calculated using the image classifiers and a verified viewing duration of the screen images by the individual may be calculated based on the plurality of images and the analyzing. Viewing verification can involve determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. A viewing verification metric can be generated in order to determine a level of interest of the individual in particular media and images.
    Type: Application
    Filed: December 24, 2019
    Publication date: April 30, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
  • Publication number: 20190034706
    Abstract: Concepts for facial tracking with classifiers are disclosed. A plurality of images is captured, received, and partitioned into a series of image frames. The plurality of images is captured on an individual viewing a display. One or more faces are identified and tracked in the image frames using a plurality of classifiers. The plurality of classifiers is used to perform head pose estimation. The plurality of images is analyzed to evaluate a query of whether the electronic display was attended by the individual with the face. The analyzing includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for media and images provided on the display. A result is rendered for the query, based on the analysis.
    Type: Application
    Filed: September 28, 2018
    Publication date: January 31, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
  • Publication number: 20180279893
    Abstract: In illustrative implementations of this invention, a photoplethysmographic device measures variations of light that is reflected from, or transmitted through, human skin. In some implementations, the device includes a camera that takes the measurements remotely. In others, the device touches the skin during the measurements. The device includes a camera or other light sensor, which includes at least orange, green and cyan color channels. In some cases, such as a contact device, the device includes three or more colors of active light sources, including at least orange, green and cyan light sources. A computer analyzes the sensor data, in order to estimate a cardiac blood volume pulse wave. For each cardiac pulse, a computer detects the systolic peak and diastolic inflection of the wave, by calculating a second derivative of the wave. From the estimated wave, a computer estimates heart rate, heart rate variability and respiration rate.
    Type: Application
    Filed: May 31, 2018
    Publication date: October 4, 2018
    Inventors: Daniel McDuff, Rosalind Picard, Sarah Pratt
  • Patent number: 10028669
    Abstract: In illustrative implementations of this invention, a photoplethysmographic device measures variations of light that is reflected from, or transmitted through, human skin. In some implementations, the device includes a camera that takes the measurements remotely. In others, the device touches the skin during the measurements. The device includes a camera or other light sensor, which includes at least orange, green and cyan color channels. In some cases, such as a contact device, the device includes three or more colors of active light sources, including at least orange, green and cyan light sources. A computer analyzes the sensor data, in order to estimate a cardiac blood volume pulse wave. For each cardiac pulse, a computer detects the systolic peak and diastolic inflection of the wave, by calculating a second derivative of the wave. From the estimated wave, a computer estimates heart rate, heart rate variability and respiration rate.
    Type: Grant
    Filed: April 2, 2015
    Date of Patent: July 24, 2018
    Assignee: Massachusetts Institute of Technology
    Inventors: Daniel McDuff, Rosalind Picard, Sarah Gontarek
  • Publication number: 20170330029
    Abstract: Disclosed embodiments provide for deep convolutional computing image analysis. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. The multilayered analysis engine is provided multiple images, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. Mental states are inferred using the deep convolutional computing multilayered analysis engine based on the facial expression.
    Type: Application
    Filed: August 1, 2017
    Publication date: November 16, 2017
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
  • Publication number: 20170109571
    Abstract: Images are analyzed using sub-sectional component evaluation in order to augment classifier usage. An image of an individual is obtained. The face of the individual is identified, and regions within the face are determined. The individual is evaluated to be within a sub-sectional component of a population based on a demographic or based on an activity. An evaluation of content of the face is performed based on the individual being within a sub-sectional component of a population. The sub-sectional component of a population is used for disambiguating among content types for the content of the face. A Bayesian framework that includes a conditional probability is used to perform the evaluation of the content of the face, and the evaluation is further based on a prior event that occurred.
    Type: Application
    Filed: December 30, 2016
    Publication date: April 20, 2017
    Applicant: Affectiva, Inc.
    Inventors: Daniel McDuff, Rana el Kaliouby
  • Publication number: 20170098122
    Abstract: Image content is analyzed in order to present an associated representation expression. Images of one or more individuals are obtained, and one or more processors are used to identify the faces of the one or more individuals in the images. Facial features are extracted from the identified faces and facial landmark detection is performed. Classifiers are used to map the facial landmarks to various emotional content. The identified facial landmarks are translated into a representative icon, where the translation is based on classifiers. A set of emoji can be imported and the representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face. The selected emoji can be static, animated, or cartoon representations of emotion. The individuals can share the selected emoji through insertion into email, texts, and social sharing websites.
    Type: Application
    Filed: December 9, 2016
    Publication date: April 6, 2017
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
  • Publication number: 20170011258
    Abstract: Facial expressions are evaluated for control of robots. One or more images of a face are captured. The images are analyzed for mental state data. The images are analyzed to determine a facial expression of the face within an identified region of interest. Mental state information is generated. A context for the robot operation is determined. A context for the individual is determined. The actions of a robot are then controlled based on the facial expressions and the mental state information that was generated. Displays, color, sound, motion, and voice response for the robot are controlled based on the facial expressions of one or more people.
    Type: Application
    Filed: September 23, 2016
    Publication date: January 12, 2017
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
  • Publication number: 20160379505
    Abstract: Mental state event signatures are used to assess how members of a specific social group react to various stimuli such as video advertisements. The likelihood that a video will go viral is computed based on mental state event signatures. Automated facial expression analysis is utilized to determine an emotional response curve for viewers of a video. The emotional response curve is used to derive a virality probability index for the video. The virality probability index is an indicator of the propensity to go viral for a given video. The emotional response curves are processed according to various demographic criteria in order to account for cultural differences amongst various demographic groups and geographic regions.
    Type: Application
    Filed: September 12, 2016
    Publication date: December 29, 2016
    Inventors: Rana el Kaliouby, Evan Kodra, Daniel McDuff, Thomas James Vandal
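
The application above derives a virality probability index from an emotional response curve produced by automated facial expression analysis. The sketch below shows one hypothetical mapping from a response curve to a 0-to-1 index using a logistic function of the curve's peak and mean; the weights and functional form are assumptions, not the formula in the application.

```python
# Hypothetical sketch: collapse an emotional response curve into a single
# virality probability index in the range 0..1.
import math

def virality_probability_index(response_curve: list[float]) -> float:
    peak, avg = max(response_curve), sum(response_curve) / len(response_curve)
    score = 4.0 * peak + 2.0 * avg - 3.0        # assumed weights on peak and mean response
    return 1.0 / (1.0 + math.exp(-score))       # squash to a 0..1 probability

print(virality_probability_index([0.1, 0.2, 0.8, 0.6, 0.3]))
```
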
  • Publication number: 20160191995
    Abstract: Facial evaluation is performed on one or more videos captured from an individual viewing a display. The images are evaluated to determine whether the display was viewed by the individual. The individual views a media presentation that includes incorporated tags and is rendered on the display. Based on the tags, video of the individual is captured and evaluated using a classifier. The evaluating includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for media and images provided on the display.
    Type: Application
    Filed: March 4, 2016
    Publication date: June 30, 2016
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
  • Publication number: 20160007935
    Abstract: A sensor system includes one or more gyroscopes and one or more accelerometers, for measuring subtle motions of a user's body. The system estimates physiological parameters of a user, such as heart rate, breathing rate and heart rate variability. When making the estimates, different weights are assigned to data from different sensors. For at least one estimate, weight assigned to data from at least one gyroscope is different than weight assigned to data from at least one accelerometer. Also, for at least one estimate, a weight assigned to one or more sensors located in a first region relative to the user's body is different than a weight assigned to one or more sensors located in a second region relative to the user's body. Furthermore, weight assigned to data from at least one sensor changes over time.
    Type: Application
    Filed: September 22, 2015
    Publication date: January 14, 2016
    Applicant: MASSACHUSETTS INSTITUTE OF TECHNOLOGY
    Inventors: Javier Hernandez, Daniel McDuff, Rosalind Picard
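
The application above fuses gyroscope and accelerometer data with weights that depend on the sensor type and on where the sensor sits relative to the body, and that can change over time. A minimal, hypothetical sketch of that weighted fusion for a single heart rate estimate follows; the sensor placements and weight values are illustrative assumptions.

```python
# Hypothetical sketch: combine per-sensor heart rate estimates with weights
# keyed by sensor type and body region.
from dataclasses import dataclass

@dataclass
class SensorEstimate:
    sensor_type: str      # "gyroscope" or "accelerometer"
    body_region: str      # e.g. "chest" or "wrist"
    heart_rate_bpm: float

WEIGHTS = {               # assumed weights keyed by (sensor type, body region)
    ("gyroscope", "chest"): 0.5,
    ("accelerometer", "chest"): 0.3,
    ("accelerometer", "wrist"): 0.2,
}

def fuse_heart_rate(estimates: list[SensorEstimate]) -> float:
    total = sum(WEIGHTS[(e.sensor_type, e.body_region)] for e in estimates)
    return sum(WEIGHTS[(e.sensor_type, e.body_region)] * e.heart_rate_bpm for e in estimates) / total

readings = [SensorEstimate("gyroscope", "chest", 71.0),
            SensorEstimate("accelerometer", "chest", 74.0),
            SensorEstimate("accelerometer", "wrist", 78.0)]
print(fuse_heart_rate(readings))
```
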