Patents Assigned to Affectiva, Inc.
  • Patent number: 11292477
    Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained, and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. The additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking. (An illustrative sketch of the loading-curve mapping follows this entry.)
    Type: Grant
    Filed: June 2, 2019
    Date of Patent: April 5, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
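A minimal sketch of the loading-curve mapping described in patent 11292477. The abstract does not specify the curve's shape or the stimulation actions, so the logistic curve, thresholds, and cabin settings below are all assumptions for illustration:

```python
import math

def loading_curve(cognitive_load: float) -> float:
    """Map a cognitive load estimate in [0, 1] onto a continuous
    loading spectrum via a logistic curve (an assumed shape)."""
    return 1.0 / (1.0 + math.exp(-10.0 * (cognitive_load - 0.5)))

def adjust_stimulation(cognitive_load: float) -> dict:
    """Pick sensory-stimulation changes from the curve position.
    Thresholds and actions are illustrative, not from the patent."""
    level = loading_curve(cognitive_load)
    if level > 0.8:          # occupant overloaded: calm the cabin
        return {"music_tempo": "slow", "cabin_light": "dim", "hvac": "cool"}
    if level < 0.2:          # occupant underloaded: raise alertness
        return {"music_tempo": "up", "cabin_light": "bright", "hvac": "neutral"}
    return {}                # mid-curve: leave stimulation unchanged

print(adjust_stimulation(0.9))  # {'music_tempo': 'slow', ...}
```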
  • Publication number: 20220101146
    Abstract: Techniques for machine learning based on neural network training with bias mitigation are disclosed. Facial images for a neural network configuration and a neural network training dataset are obtained. The training dataset is associated with the neural network configuration. The facial images are partitioned into multiple subgroups, wherein the subgroups represent demographics with potential for biased training. A multifactor key performance indicator (KPI) is calculated per image. The calculating is based on analyzing performance of two or more image classifier models. The neural network configuration and the training dataset are promoted to a production neural network, wherein the promoting is based on the KPI. The KPI identifies bias in the training dataset. Promotion of the neural network configuration and the neural network training dataset is based on identified bias. (A sketch of the per-subgroup KPI check follows this entry.)
    Type: Application
    Filed: September 23, 2021
    Publication date: March 31, 2022
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Sneha Bhattacharya, Taniya Mishra, Shruti Ranjalkar
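One way to read the bias check in publication 20220101146: score each image under two or more classifier models, then compare per-subgroup aggregates before promoting to production. In this sketch a plain average stands in for the unspecified multifactor KPI, and the gap threshold is invented:

```python
from collections import defaultdict
from statistics import mean

def per_image_kpi(scores: list[float]) -> float:
    """Combine per-model scores for one image into a KPI.
    A plain average stands in for the multifactor calculation."""
    return mean(scores)

def subgroup_bias(images: list[dict], gap_threshold: float = 0.1) -> bool:
    """images: [{'subgroup': str, 'model_scores': [float, ...]}, ...].
    Flags bias when subgroup mean KPIs diverge beyond a threshold."""
    kpis = defaultdict(list)
    for img in images:
        kpis[img["subgroup"]].append(per_image_kpi(img["model_scores"]))
    means = {g: mean(v) for g, v in kpis.items()}
    return max(means.values()) - min(means.values()) > gap_threshold

data = [{"subgroup": "A", "model_scores": [0.9, 0.95]},
        {"subgroup": "B", "model_scores": [0.6, 0.7]}]
print(subgroup_bias(data))  # True: promotion to production would be blocked
```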
  • Publication number: 20220067519
    Abstract: Disclosed techniques include neural network architecture using encoder-decoder models. A facial image is obtained for processing on a neural network. The facial image includes unpaired facial image attributes. The facial image is processed through a first encoder-decoder pair and a second encoder-decoder pair. The first encoder-decoder pair decomposes a first image attribute subspace. The second encoder-decoder pair decomposes a second image attribute subspace. The first encoder-decoder pair outputs a first image transformation mask based on the first image attribute subspace. The second encoder-decoder pair outputs a second image transformation mask based on the second image attribute subspace. The first image transformation mask and the second image transformation mask are concatenated to enable downstream processing. The concatenated transformation masks are processed on a third encoder-decoder pair and a resulting image is output. The resulting image eliminates a paired training data requirement. (A toy two-branch encoder-decoder sketch follows this entry.)
    Type: Application
    Filed: August 27, 2021
    Publication date: March 3, 2022
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, Sandipan Banerjee, Ajjen Das Joshi
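A toy PyTorch rendering of the three-encoder-decoder wiring in publication 20220067519. The layer sizes are arbitrary, and which attribute each subspace captures (expression, illumination) is an assumption; only the mask-concatenation topology comes from the abstract:

```python
import torch
import torch.nn as nn

def enc_dec(in_ch: int, out_ch: int) -> nn.Module:
    """A toy encoder-decoder: downsample, then upsample back."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, out_ch, 4, stride=2, padding=1), nn.Tanh(),
    )

pair1 = enc_dec(3, 3)   # decomposes, e.g., an expression subspace (assumed)
pair2 = enc_dec(3, 3)   # decomposes, e.g., an illumination subspace (assumed)
pair3 = enc_dec(6, 3)   # consumes the concatenated masks

face = torch.randn(1, 3, 64, 64)           # unpaired facial image
mask1 = pair1(face)                        # first image transformation mask
mask2 = pair2(face)                        # second image transformation mask
stacked = torch.cat([mask1, mask2], dim=1) # channel-wise concatenation
result = pair3(stacked)                    # resulting transformed image
print(result.shape)                        # torch.Size([1, 3, 64, 64])
```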
  • Patent number: 11232290
    Abstract: Images are analyzed using sub-sectional component evaluation in order to augment classifier usage. An image of an individual is obtained. The face of the individual is identified, and regions within the face are determined. The individual is evaluated to be within a sub-sectional component of a population based on a demographic or based on an activity. An evaluation of content of the face is performed based on the individual being within a sub-sectional component of a population. The sub-sectional component of a population is used for disambiguating among content types for the content of the face. A Bayesian framework that includes a conditional probability is used to perform the evaluation of the content of the face, and the evaluation is further based on a prior event that occurred. (A worked Bayesian example follows this entry.)
    Type: Grant
    Filed: December 30, 2016
    Date of Patent: January 25, 2022
    Assignee: Affectiva, Inc.
    Inventors: Daniel McDuff, Rana el Kaliouby
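A worked example of the Bayesian disambiguation in patent 11232290. The numbers and the two content types are invented; the point is that subgroup membership (here, an activity) sets the prior over content types, and Bayes' rule then disambiguates the same facial evidence differently:

```python
def posterior(priors: dict, likelihoods: dict) -> dict:
    """Bayes' rule: P(content | evidence, subgroup), normalized."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Illustrative numbers only: a raised cheek reads differently depending
# on the population subgroup, which shifts the prior over content types.
priors_comedy  = {"smile": 0.7, "squint": 0.3}   # subgroup: comedy viewers
priors_outdoor = {"smile": 0.2, "squint": 0.8}   # subgroup: outdoor activity
likelihood     = {"smile": 0.6, "squint": 0.5}   # P(observed region | content)

print(posterior(priors_comedy,  likelihood))  # smile dominates (~0.74)
print(posterior(priors_outdoor, likelihood))  # squint dominates (~0.77)
```

A "prior event that occurred" would enter the same way, by further conditioning the priors before the evidence is applied.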
  • Publication number: 20210339759
    Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data and mode data for the vehicle. The updated information is provided for vehicle control. The mode data is configurable based on a mode setting. The mode data is weighted based on additional information. (A sketch of the segment mapping follows this entry.)
    Type: Application
    Filed: July 19, 2021
    Publication date: November 4, 2021
    Applicant: Affectiva, Inc.
    Inventors: Maha Amr Mohamed Fouad, Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
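A sketch of mapping cognitive state samples to route-segment location data, as in publication 20210339759. The abstract leaves the mode weighting configurable, so the scalar weight and the sample values here are assumptions:

```python
from statistics import mean

def update_route_info(samples, mode_weight=1.0):
    """samples: [(segment_id, cognitive_state_score), ...] collected along
    the travel route. Returns a per-segment rating weighted by mode data.
    The weighting scheme is assumed; the patent leaves it configurable."""
    by_segment = {}
    for seg, score in samples:
        by_segment.setdefault(seg, []).append(score)
    return {seg: mode_weight * mean(scores)
            for seg, scores in by_segment.items()}

# Two occupant stress readings per segment; the mode setting downweights them.
samples = [("seg-1", 0.2), ("seg-1", 0.4), ("seg-2", 0.9), ("seg-2", 0.7)]
print(update_route_info(samples, mode_weight=0.8))
# {'seg-1': 0.24..., 'seg-2': 0.64...}  -> seg-2 flagged as stressful
```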
  • Patent number: 11151610
    Abstract: Video of one or more vehicle occupants is obtained and analyzed. Heart rate information is determined from the video. The heart rate information is used in cognitive state analysis. The heart rate information and resulting cognitive state analysis are correlated to stimuli, such as digital media, which is consumed or with which a vehicle occupant interacts. The heart rate information is used to infer cognitive states. The inferred cognitive states are used to output a mood measurement. The cognitive states are used to modify the behavior of a vehicle. The vehicle is an autonomous or semi-autonomous vehicle. Training is employed in the analysis. Machine learning is engaged to facilitate the training. Near-infrared image processing is used to obtain the video. The analysis is augmented by audio information obtained from the vehicle occupant. (A minimal heart-rate-from-video sketch follows this entry.)
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: October 19, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Viprali Bhatkar, Niels Haering, Youssef Kashef, Ahmed Adel Osman
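Heart rate from face video, as in patent 11151610, is commonly done with remote photoplethysmography: track the mean intensity of the face region per frame and find the dominant cardiac-band frequency. This sketch uses the green channel on a synthetic trace; the patent additionally covers near-infrared capture, and the 0.7-4 Hz band is a conventional choice, not from the abstract:

```python
import numpy as np

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the mean green-channel intensity of the
    face region per frame. Picks the dominant frequency in the plausible
    cardiac band of 0.7-4 Hz (42-240 bpm)."""
    signal = green_means - green_means.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 30 fps trace with a 1.2 Hz (72 bpm) pulse plus noise.
fps, t = 30.0, np.arange(0, 10, 1 / 30.0)
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(len(t))
print(round(heart_rate_bpm(trace, fps)))  # ~72
```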
  • Publication number: 20210279514
    Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine; the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine. (A toy analysis engine follows this entry.)
    Type: Application
    Filed: May 24, 2021
    Publication date: September 9, 2021
    Applicant: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
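A toy version of the multilayered analysis engine from publication 20210279514, written with PyTorch: convolutional layers followed by hidden (fully connected) layers. Layer sizes and the three example cognitive-state classes are assumptions:

```python
import torch
import torch.nn as nn

# Convolutional layers extract facial features; hidden layers map them
# to cognitive-state classes. All dimensions here are illustrative.
engine = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 64), nn.ReLU(),   # hidden layer
    nn.Linear(64, 3),                          # e.g. calm / drowsy / stressed
)

frames = torch.randn(4, 3, 64, 64)  # batch of in-cabin facial crops
logits = engine(frames)
print(logits.shape)                 # torch.Size([4, 3])
```

Retraining on the opted-in, crowdsourced images would amount to continuing optimization of this same module on the additional image batches.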
  • Patent number: 11073899
    Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications. (A background-monitoring sketch follows this entry.)
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 27, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
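A sketch of the background-process pattern in patent 11073899. A thread stands in for the remote background process and a queue stands in for the notification channel to a distinct device; the event schema and the multimodal expression definition (face plus audio) are invented for illustration:

```python
import queue
import threading

notifications = queue.Queue()   # stands in for a push to a distinct device

def monitor(events, expression="smile-with-speech"):
    """Background process: watch multimodal events for the target
    expression and enqueue a notification on each match."""
    for event in events:
        if event.get("face") == "smile" and event.get("audio") == "speech":
            notifications.put(f"{expression} detected at t={event['t']}")

events = [{"t": 0, "face": "neutral", "audio": "silence"},
          {"t": 1, "face": "smile",   "audio": "speech"}]
worker = threading.Thread(target=monitor, args=(events,), daemon=True)
worker.start()
worker.join()
print(notifications.get_nowait())  # smile-with-speech detected at t=1
```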
  • Patent number: 11067405
    Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data. The updated information is provided for vehicle control. The updated information is rendered on a second computing device. The updated information includes road ratings for segments of the vehicle travel route. The updated information includes an emotion metric for vehicle travel route segments.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: July 20, 2021
    Assignee: Affectiva, Inc.
    Inventors: Maha Amr Mohamed Fouad, Chilton Lyons Cabot, Rana el Kaliouby, Forest Jay Handford
  • Patent number: 11056225
    Abstract: Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment. The interactive digital environment can be a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants. Results of the analyzing of the emotional content within the group of images are provided to a second set of participants within the group of participants. The analyzing emotional content includes identifying an image of an individual, identifying a face of the individual, determining facial regions, and performing content evaluation based on applying image classifiers.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: July 6, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, James Henry Deal, Jr., Forest Jay Handford, Panu James Turcot, Gabriele Zijderveld
  • Publication number: 20210201003
    Abstract: Machine learning is performed using synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into vector representations. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations, wherein the one or more synthetic vectors enable avoidance of discriminator detection in the GAN. The training of the GAN further comprises determining a generator accuracy using the discriminator. The generator accuracy can enable a classifier, where the classifier comprises a multi-layer perceptron. Additional synthetic vectors are generated in the GAN, wherein the additional synthetic vectors avoid discriminator detection. A machine learning neural network is trained using the additional synthetic vectors. (A minimal GAN-on-vectors sketch follows this entry.)
    Type: Application
    Filed: December 29, 2020
    Publication date: July 1, 2021
    Applicant: Affectiva, Inc.
    Inventors: Sandipan Banerjee, Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Taniya Mishra
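A minimal GAN over vectors in the spirit of publication 20210201003, in PyTorch. Random tensors stand in for the encoded facial-element vectors, and all dimensions and hyperparameters are invented; the generator loss measured against the discriminator plays the role of the abstract's "generator accuracy":

```python
import torch
import torch.nn as nn

dim = 16                                  # facial-element vector size (assumed)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_vectors = torch.randn(64, dim)       # stand-in for encoded facial elements

for step in range(200):
    fake = G(torch.randn(64, 8))
    # Discriminator: tell real vector representations from synthetic ones.
    d_loss = bce(D(real_vectors), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: produce vectors the discriminator cannot flag as fake.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic = G(torch.randn(1000, 8)).detach()  # additional training vectors
print(synthetic.shape)                        # torch.Size([1000, 16])
```

The `synthetic` batch would then augment the training set for the downstream neural network (e.g., the multi-layer perceptron classifier the abstract mentions).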
  • Publication number: 20210188291
    Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed, using one or more processors, to detect a blink event based on a previously determined blink classifier. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual. (A blink-rate sketch follows this entry.)
    Type: Application
    Filed: December 11, 2020
    Publication date: June 24, 2021
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
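A sketch of the temporal analysis in publication 20210188291. Per-frame eyes-closed flags stand in for the blink classifier's output; the form of the demographic compensation factor is assumed, since the abstract only says such compensation exists:

```python
def blink_events(eyes_closed: list[bool]) -> list[tuple[int, int]]:
    """Temporal analysis: group consecutive eyes-closed frames into
    blink events, returned as (start_frame, duration_in_frames)."""
    events, start = [], None
    for i, closed in enumerate(eyes_closed + [False]):  # sentinel frame
        if closed and start is None:
            start = i
        elif not closed and start is not None:
            events.append((start, i - start))
            start = None
    return events

def drowsiness_metric(eyes_closed, fps=30.0, demo_factor=1.0):
    """Blinks per minute, scaled by a demographic compensation factor
    (the factor's form is assumed)."""
    minutes = len(eyes_closed) / fps / 60.0
    return demo_factor * len(blink_events(eyes_closed)) / minutes

# Three blinks in three seconds of (toy) per-frame classifier output.
flags = ([False] * 15 + [True] * 3 + [False] * 12) * 3
print(blink_events(flags)[:1])          # [(15, 3)] -> blink duration 3 frames
print(round(drowsiness_metric(flags)))  # 60 blinks/min -> manipulate vehicle
```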
  • Patent number: 11017250
    Abstract: Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. Further images are analyzed using the multilayered analysis engine; the analyzing provides the cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: May 25, 2021
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
  • Publication number: 20210125065
    Abstract: Deep learning in situ retraining uses deep learning nodes to provide a human perception state on a user device. A plurality of images including facial data is obtained for human perception state analysis. A server device trains a set of weights on a set of layers for deep learning that implements the analysis, where the training is performed with a first set of training data. A subset of the weights is deployed on deep learning nodes on a user device, where the deploying enables at least part of the human perception state analysis. An additional set of weights is retrained on the user device, where the additional set of weights is trained using a second set of training data. A human perception state is provided on the user device, based on the subset of weights, the additional set of weights, and input images obtained by the user device. (A split-deployment sketch follows this entry.)
    Type: Application
    Filed: October 23, 2020
    Publication date: April 29, 2021
    Applicant: Affectiva, Inc.
    Inventors: Panu James Turcot, Seyedmohammad Mavadati
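A sketch of the split deployment in publication 20210125065, under the assumption that the server-trained "subset of weights" maps to a frozen backbone shipped to the device and the "additional set of weights" to a small head retrained in situ. Sizes and the two-class output are invented:

```python
import torch
import torch.nn as nn

# Server-trained perception stack (sizes assumed). The backbone ships to
# the user device frozen; only the small head is retrained in situ.
base = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)                       # additional on-device weights

for p in base.parameters():                  # deployed subset: frozen
    p.requires_grad = False

opt = torch.optim.SGD(head.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
images = torch.randn(16, 3, 32, 32)          # second (on-device) training set
labels = torch.randint(0, 2, (16,))

for _ in range(5):                           # in situ retraining loop
    opt.zero_grad()
    loss = loss_fn(head(base(images)), labels)
    loss.backward()
    opt.step()

print(head(base(images)).argmax(dim=1))      # human perception state output
```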
  • Patent number: 10922567
    Abstract: Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible light-based images and near-infrared based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: February 16, 2021
    Assignee: Affectiva, Inc.
    Inventors: Abdelrahman N. Mahmoud, Rana el Kaliouby, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 10922566
    Abstract: Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: February 16, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 10911829
    Abstract: Techniques are disclosed for vehicle video recommendation via affect. A first media presentation is played to a vehicle occupant. The playing is accomplished using a video client. Cognitive state data for the vehicle occupant is captured, where the cognitive state data includes video facial data from the vehicle occupant during the first media presentation playing. The first media presentation is ranked, on an analysis server, relative to another media presentation based on the cognitive state data which was captured for the vehicle occupant. The ranking is determined for the vehicle occupant. The cognitive state data which was captured for the vehicle occupant is correlated, on the analysis server, to cognitive state data collected from other people who experienced the first media presentation. One or more further media presentation selections are recommended to the vehicle occupant, based on the ranking and the correlating.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: February 2, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
  • Patent number: 10897650
    Abstract: Content manipulation uses cognitive states for vehicle content recommendation. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles.
    Type: Grant
    Filed: December 6, 2018
    Date of Patent: January 19, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
  • Publication number: 20210001862
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle. (A view-manipulation sketch follows this entry.)
    Type: Application
    Filed: July 14, 2020
    Publication date: January 7, 2021
    Applicant: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
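One plausible reading of the geometry-based view manipulation in publication 20210001862: warp each seat's face region to a fronto-parallel view using a perspective transform derived from the cabin geometry. The per-seat corner coordinates below are invented, and OpenCV is an assumed implementation choice:

```python
import cv2
import numpy as np

# Per-seat geometry: where each seat's face region lands in the wide
# in-cabin camera frame (corner coordinates invented for illustration).
SEAT_QUADS = {
    "driver":    np.float32([[40, 60], [220, 80], [210, 260], [30, 240]]),
    "passenger": np.float32([[420, 80], [600, 60], [610, 240], [430, 260]]),
}

def frontalize(frame: np.ndarray, seat: str, size: int = 128) -> np.ndarray:
    """Manipulate the detected face view: warp the seat-specific quad
    to a fronto-parallel square before cognitive state analysis."""
    dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
    M = cv2.getPerspectiveTransform(SEAT_QUADS[seat], dst)
    return cv2.warpPerspective(frame, M, (size, size))

cabin = np.zeros((360, 640, 3), dtype=np.uint8)  # stand-in camera frame
print(frontalize(cabin, "driver").shape)         # (128, 128, 3)
```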
  • Patent number: 10869626
    Abstract: Techniques are described for image analysis and representation for emotional metric threshold generation. A client device is used to collect image data of a user interacting with a media presentation, where the image data includes facial images of the user. One or more processors are used to analyze the image data to extract emotional content of the facial images. One or more emotional intensity metrics are determined based on the emotional content. The one or more emotional intensity metrics are stored into a digital storage component. The one or more emotional intensity metrics, obtained from the digital storage component, are coalesced into a summary emotional intensity metric. The summary emotional intensity metric is represented.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: December 22, 2020
    Assignee: Affectiva, Inc.
    Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Chilton Lyons Cabot