Patents Assigned to Affectiva, Inc.
  • Patent number: 10628741
    Abstract: Techniques are described for machine-trained analysis for multimodal machine learning. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels, wherein the trained weights cover both the audio information and the video information and are trained simultaneously, and wherein the learning facilitates emotional analysis of the audio information and the video information. A second computing device captures further information and analyzes the further information using the trained weights to provide an emotion metric based on the further information.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: April 21, 2020
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
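    The joint training described above, where a single loss updates audio and video weights simultaneously, can be pictured as a small two-branch network. Below is a minimal PyTorch sketch; the layer sizes, input shapes, and seven-class emotion set are illustrative assumptions, not the patented architecture.

    ```python
    # Two-branch multimodal sketch: one loss trains both modalities at once.
    # All dimensions and the 7-class label set are assumptions for illustration.
    import torch
    import torch.nn as nn

    class MultimodalEmotionNet(nn.Module):
        def __init__(self, num_emotions: int = 7):
            super().__init__()
            # Video branch: 2D convolution over a face crop.
            self.video = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),   # -> 16*4*4 features
            )
            # Audio branch: 1D convolution over spectrogram-like input.
            self.audio = nn.Sequential(
                nn.Conv1d(40, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(4), nn.Flatten(),   # -> 16*4 features
            )
            # One fused head, so a single loss reaches both branches.
            self.head = nn.Linear(16 * 4 * 4 + 16 * 4, num_emotions)

        def forward(self, video_frame, audio_feats):
            fused = torch.cat([self.video(video_frame), self.audio(audio_feats)], dim=1)
            return self.head(fused)

    model = MultimodalEmotionNet()
    video = torch.randn(8, 3, 64, 64)    # batch of contemporaneous face crops
    audio = torch.randn(8, 40, 100)      # matching 40-bin audio slices
    labels = torch.randint(0, 7, (8,))
    loss = nn.functional.cross_entropy(model(video, audio), labels)
    loss.backward()  # gradients update audio and video weights together
    ```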
  • Patent number: 10627817
    Abstract: Vehicle manipulation is performed using occupant image analysis. A camera within a vehicle is used to collect cognitive state data, including facial data, on an occupant of a vehicle. A cognitive state profile is learned, on a first computing device, for the occupant based on the cognitive state data. The cognitive state profile includes information on absolute time. The cognitive state profile includes information on trip duration time. Voice data is collected and the cognitive state data is augmented with the voice data. Further cognitive state data is captured, on a second computing device, on the occupant while the occupant is in a second vehicle. The further cognitive state data is compared, on a third computing device, with the cognitive state profile that was learned for the occupant. The second vehicle is manipulated based on the comparing of the further cognitive state data.
    Type: Grant
    Filed: January 19, 2018
    Date of Patent: April 21, 2020
    Assignee: Affectiva, Inc.
    Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
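    One way to picture the learned profile is as per-context baselines keyed by time of day and trip duration, with later observations scored by how far they deviate. This is a hedged sketch: the `CognitiveStateProfile` class, the bucketing scheme, and the z-score test are assumptions for illustration, not the patented method.

    ```python
    # Toy "cognitive state profile": per-(hour, trip-duration-bucket) baselines.
    # Bucketing and the z-score deviation test are illustrative assumptions.
    from collections import defaultdict
    import statistics

    class CognitiveStateProfile:
        def __init__(self):
            # (hour_of_day, 15-minute trip bucket) -> observed scores
            self.buckets = defaultdict(list)

        def learn(self, hour: int, trip_minutes: float, score: float) -> None:
            self.buckets[(hour, int(trip_minutes // 15))].append(score)

        def deviation(self, hour: int, trip_minutes: float, score: float) -> float:
            """How far a new observation sits from the learned baseline."""
            history = self.buckets.get((hour, int(trip_minutes // 15)), [])
            if len(history) < 2:
                return 0.0
            mean = statistics.mean(history)
            spread = statistics.stdev(history) or 1.0
            return abs(score - mean) / spread

    profile = CognitiveStateProfile()
    for s in (0.2, 0.3, 0.25):                  # trips in the first vehicle
        profile.learn(hour=8, trip_minutes=20, score=s)
    # Further data captured in a second vehicle; a large deviation could
    # drive manipulation of that vehicle (e.g., climate or seat changes).
    print(profile.deviation(hour=8, trip_minutes=22, score=0.9))
    ```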
  • Patent number: 10614289
    Abstract: Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations for the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and estimation of future locations of the landmarks are insensitive to rotation, orientation, scaling, or mirroring of the face.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 7, 2020
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot
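    Two of the steps above, estimating a rough bounding box from landmark locations and predicting future landmark locations, are easy to sketch. The constant-velocity extrapolation below is an illustrative assumption standing in for whatever estimator the patent covers, and the classifier-based detection step is omitted.

    ```python
    # Rough bounding box from landmarks, plus a constant-velocity guess at
    # where the landmarks will sit in a future frame (illustrative only).
    import numpy as np

    def bounding_box(landmarks: np.ndarray) -> tuple:
        """Smallest box containing an (N, 2) array of (x, y) landmarks."""
        x_min, y_min = landmarks.min(axis=0)
        x_max, y_max = landmarks.max(axis=0)
        return (x_min, y_min, x_max, y_max)

    def predict_future(prev: np.ndarray, curr: np.ndarray, steps: int = 1) -> np.ndarray:
        """Constant-velocity extrapolation of landmark positions."""
        return curr + steps * (curr - prev)

    frame1 = np.array([[10.0, 12.0], [30.0, 12.0], [20.0, 25.0]])  # eyes, nose tip
    frame2 = frame1 + np.array([2.0, 0.5])   # slight head movement
    print(bounding_box(frame2))              # rough box around the landmarks
    print(predict_future(frame1, frame2))    # estimate for the next frame
    ```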
  • Publication number: 20200104616
    Abstract: Mental state analysis for drowsiness is performed using blink rate. Video is obtained of an individual or group. The individual or group can be within a vehicle. The video is analyzed to detect a blink event based on a classifier, where the blink event is determined by identifying that eyes are closed for a frame in the video. A blink duration is evaluated for the blink event. Blink-rate information is determined using the blink event and one or more other blink events. The evaluating can include evaluating blinking for a group of people. The blink-rate information is compensated to determine drowsiness, based on the temporal distribution mapping of the blink-rate information. Mental states of the individual are inferred for the blink event based on the blink event, the blink duration of the individual, and the blink-rate information that was compensated. The compensating is biased based on demographic information of the individual.
    Type: Application
    Filed: November 15, 2019
    Publication date: April 2, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
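    The core blink mechanics, turning per-frame "eyes closed" classifier outputs into blink events, durations, and a rate, can be shown compactly. In the sketch below the 30 fps rate, the 0.2 s duration cutoff, and the 30 blinks-per-minute threshold are illustrative assumptions; the patent's compensation and demographic biasing are not modeled.

    ```python
    # Blink events from per-frame eyes-closed flags; duration and rate are
    # then checked against illustrative drowsiness thresholds.
    def blink_events(eyes_closed: list[bool], fps: float) -> list[float]:
        """Return blink durations in seconds from per-frame closed flags."""
        durations, run = [], 0
        for closed in eyes_closed:
            if closed:
                run += 1
            elif run:
                durations.append(run / fps)
                run = 0
        if run:
            durations.append(run / fps)
        return durations

    frames = [False] * 20 + [True] * 3 + [False] * 40 + [True] * 8 + [False] * 20
    blinks = blink_events(frames, fps=30.0)
    rate_per_min = len(blinks) / (len(frames) / 30.0) * 60.0
    # Long closures and unusually high blink rates both suggest drowsiness.
    drowsy = any(d > 0.2 for d in blinks) or rate_per_min > 30
    print(blinks, round(rate_per_min, 1), drowsy)
    ```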
  • Patent number: 10592757
    Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The cognitive state data from multiple sources is tagged. The cognitive state data from the multiple sources is aggregated. The cognitive state data is interpolated when collection is intermittent. The cognitive state analysis is interpolated when the cognitive state data is intermittent.
    Type: Grant
    Filed: February 1, 2018
    Date of Patent: March 17, 2020
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
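    The tagging, aggregation, and interpolation steps are straightforward to illustrate: tag each sample with its source, merge samples onto one timeline, and fill gaps when collection was intermittent. The source tags and the choice of linear interpolation below are assumptions for the example.

    ```python
    # Aggregate tagged cognitive-state samples from multiple sources and
    # interpolate across gaps in intermittent collection (linear, assumed).
    import numpy as np

    samples = [  # (timestamp_s, source_tag, cognitive-state score)
        (0.0, "dash_cam", 0.60),
        (2.0, "phone_cam", 0.55),
        (8.0, "dash_cam", 0.30),
    ]

    times = np.array([t for t, _, _ in samples])
    scores = np.array([s for _, _, s in samples])

    # Aggregate onto a regular 1 Hz timeline, interpolating the gaps.
    timeline = np.arange(0.0, 9.0, 1.0)
    interpolated = np.interp(timeline, times, scores)
    print(dict(zip(timeline.tolist(), interpolated.round(2).tolist())))
    ```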
  • Publication number: 20200074154
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
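    In software terms the chip implements a staged pipeline: pixels to facial portion, facial portion to expression, expression to cognitive state, cognitive state to a vehicle signal. The sketch below shows only that dataflow; every stage is a stub standing in for on-chip convolutional logic, and the function names are invented for illustration.

    ```python
    # Dataflow-only sketch of the staged pipeline; each stage is a stub.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        pixels: bytes

    def locate_face(frame: Frame) -> tuple:
        return (10, 10, 54, 54)          # stub: facial-portion bounding box

    def classify_expression(face_box: tuple) -> str:
        return "smile"                   # stub: smile/frown/smirk/grimace

    def cognitive_state(expression: str) -> dict:
        return {"valence": 0.8 if expression == "smile" else -0.4}

    def vehicle_signal(state: dict) -> str:
        return "no_action" if state["valence"] >= 0 else "suggest_break"

    frame = Frame(pixels=b"\x00" * 64 * 64)
    print(vehicle_signal(cognitive_state(classify_expression(locate_face(frame)))))
    ```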
  • Patent number: 10573313
    Abstract: Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data is obtained, on a second computing device, which corresponds to the video data. A face within the video data is identified. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features corresponding to the cognitive content of the video data are extracted. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: February 25, 2020
    Assignee: Affectiva, Inc.
    Inventors: Taniya Mishra, Rana el Kaliouby
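    The cross-modal idea, labels derived from face analysis supervising an audio model, can be hedged into a few lines. Below, a logistic regression stands in for the learned audio classifier and random vectors stand in for extracted audio features; none of this is the patent's actual model.

    ```python
    # Face-derived labels supervise an audio classifier (illustrative only).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    audio_features = rng.normal(size=(200, 13))           # MFCC-like vectors
    face_labels = (audio_features[:, 0] > 0).astype(int)  # stand-in for labels
                                                          # from face analysis
    clf = LogisticRegression().fit(audio_features, face_labels)

    further_audio = rng.normal(size=(3, 13))
    print(clf.predict(further_audio))  # analyze further audio with the classifier
    ```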
  • Publication number: 20200026347
    Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
    Type: Application
    Filed: September 30, 2019
    Publication date: January 23, 2020
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
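    The monitoring arrangement, a background process watching multimodal input and notifying a separate component on a match, maps naturally onto a worker thread and a queue. The queue plumbing and the trivial "smile plus raised voice" match rule below are assumptions for illustration.

    ```python
    # Background monitoring with a notification callback; the match rule
    # and queue-based plumbing are illustrative assumptions.
    import queue
    import threading

    samples = queue.Queue()  # (image_expression, audio_loudness) pairs

    def notify(msg: str) -> None:      # stand-in for a push to another device
        print("notify:", msg)

    def monitor() -> None:
        while True:
            expression, loudness = samples.get()
            if expression is None:     # shutdown sentinel
                break
            if expression == "smile" and loudness > 0.7:  # multimodal match
                notify("target expression detected")

    worker = threading.Thread(target=monitor, daemon=True)
    worker.start()
    samples.put(("neutral", 0.2))
    samples.put(("smile", 0.9))        # triggers the notification
    samples.put((None, 0.0))
    worker.join()
    ```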
  • Patent number: 10517521
    Abstract: Video of one or more people is obtained and analyzed. Heart rate information is determined from the video. The heart rate information is used in mental state analysis. The heart rate information and resulting mental state analysis are correlated to stimuli, such as digital media, which is consumed or with which a person interacts. The heart rate information is used to infer mental states. The inferred mental states are used to output a mood measurement. The mental state analysis, based on the heart rate information, is used to optimize digital media or modify a digital game. Training is employed in the analysis. Machine learning is engaged to facilitate the training.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: December 31, 2019
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Viprali Bhatkar, Niels Haering, Youssef Kashef, Ahmed Adel Osman
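    Heart rate can be recovered from video because skin color varies faintly with the cardiac pulse; the dominant frequency of the mean green-channel signal over the face is a common estimate. The sketch below substitutes a synthetic 72 BPM signal for real video and is not the patent's specific method.

    ```python
    # Remote-photoplethysmography sketch: heart rate from the dominant
    # frequency of a (here synthetic) green-channel signal.
    import numpy as np

    fps = 30.0
    t = np.arange(0, 10, 1 / fps)                          # 10 s of "video"
    green_mean = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)  # 1.2 Hz = 72 BPM

    signal = green_mean - green_mean.mean()                # drop DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fps)

    band = (freqs >= 0.7) & (freqs <= 4.0)                 # 42-240 BPM range
    bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
    print(round(bpm, 1))                                   # ~72.0
    ```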
  • Patent number: 10482333
    Abstract: Mental state analysis is performed using blink rate within vehicles. Video is obtained of an individual or a group within a vehicle. The video is analyzed to detect a blink event based on a classifier, where the blink event is determined by identifying that eyes are closed for a frame in the video. A blink duration is evaluated for the blink event. Blink-rate information is determined using the blink event and one or more other blink events. The evaluating can include evaluating blinking for a group of people. The blink-rate information is compensated, using the processors, for a context. Mental states of the individual are inferred for the blink event, where the mental states are based on the blink event, the blink duration of the individual, and the blink-rate information that was compensated. A difference in blinking between the individual and the remainder of a group can be determined.
    Type: Grant
    Filed: September 10, 2018
    Date of Patent: November 19, 2019
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 10474875
    Abstract: Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is evaluated in the video. The one or more faces are tracked, and identifiers for the faces are provided.
    Type: Grant
    Filed: November 20, 2015
    Date of Patent: November 12, 2019
    Assignee: Affectiva, Inc.
    Inventors: Boisy G. Pitre, Rana el Kaliouby, Panu James Turcot
  • Publication number: 20190283762
    Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
    Type: Application
    Filed: June 2, 2019
    Publication date: September 19, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
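    The loading curve can be read as a continuous map from a cognitive-state score to a load level, with the manipulation chosen from where the occupant lands on it. The control points and action thresholds below are assumptions for the example, not values from the patent.

    ```python
    # Cognitive-state score -> loading curve -> sensory-stimulation change.
    # Curve control points and thresholds are illustrative assumptions.
    import numpy as np

    state_scores = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # assumed curve
    load_levels = np.array([0.1, 0.2, 0.5, 0.85, 1.0])

    def loading(score: float) -> float:
        """Continuous interpolation along the loading curve."""
        return float(np.interp(score, state_scores, load_levels))

    def stimulation_change(load: float) -> str:
        if load > 0.8:
            return "reduce stimulation (dim display, soften audio)"
        if load < 0.2:
            return "increase stimulation (brighten cabin, alert chime)"
        return "no change"

    for s in (0.1, 0.6, 0.9):
        print(s, "->", round(loading(s), 2), "->", stimulation_change(loading(s)))
    ```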
  • Patent number: 10401860
    Abstract: Image analysis is performed for a two-sided data hub. Data reception on a first computing device is enabled by an individual and a content provider. Cognitive state data including facial data on the individual is collected on a second computing device. The cognitive state data is analyzed on a third computing device and the analysis is provided to the individual. The cognitive state data is evaluated and the evaluation is provided to the content provider. A mood dashboard is displayed to the individual based on the analyzing. The individual opts in to enable data reception for the individual. The content provider provides content via a website.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: September 3, 2019
    Assignee: Affectiva, Inc.
    Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Gabriele Zijderveld, Chilton Lyons Cabot
  • Publication number: 20190172462
    Abstract: Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data is obtained, on a second computing device, which corresponds to the video data. A face within the video data is identified. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features corresponding to the cognitive content of the video data are extracted. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.
    Type: Application
    Filed: February 11, 2019
    Publication date: June 6, 2019
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, Rana el Kaliouby
  • Publication number: 20190172458
    Abstract: Techniques are described for speech analysis for cross-language mental state identification. A first group of utterances in a first language is collected, on a computing device, with an associated first set of mental states. The first group of utterances and the associated first set of mental states are stored on an electronic storage device. A machine learning system is trained using the first group of utterances and the associated first set of mental states that were stored. A second group of utterances from a second language is processed, on the machine learning system that was trained, wherein the processing determines a second set of mental states corresponding to the second group of utterances. The second set of mental states is output. A series of heuristics is output, based on the correspondence between the first group of utterances and the associated first set of mental states.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 6, 2019
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, Islam Faisal, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
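    The transfer step, training on labeled utterances from one language and classifying utterances from another, relies on a shared feature space. In the sketch below both languages are reduced to the same acoustic-statistics vectors, with random data and a k-nearest-neighbors model standing in for the trained machine learning system.

    ```python
    # Train on language-1 utterances, classify language-2 utterances that
    # share the same (assumed) acoustic feature space.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)
    lang1_features = rng.normal(size=(150, 8))   # e.g., pitch/energy statistics
    lang1_states = rng.integers(0, 3, size=150)  # stand-in mental-state labels

    model = KNeighborsClassifier(n_neighbors=5).fit(lang1_features, lang1_states)

    lang2_features = rng.normal(size=(4, 8))     # utterances, second language
    print(model.predict(lang2_features))         # second set of mental states
    ```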
  • Publication number: 20190172243
    Abstract: Techniques are described for image generation for avatar image animation using translation vectors. An avatar image is obtained for representation on a first computing device. An autoencoder is trained, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces. A plurality of translation vectors is identified corresponding to a plurality of emotion metrics, based on the training. A bottleneck layer within the autoencoder is used to identify the plurality of translation vectors. A subset of the plurality of translation vectors is applied to the avatar image, wherein the subset represents an emotion metric input. The emotion metric input is obtained from facial analysis of an individual. An animated avatar image is generated for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input and the avatar image includes vocalizations.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 6, 2019
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, George Alexander Reichenbach, Rana el Kaliouby
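    The translation-vector idea is latent-space arithmetic: take the mean bottleneck-code difference between emotive and neutral faces, scale it by the emotion metric, and add it to the avatar's code before decoding. The random projections below stand in for a trained autoencoder; only the arithmetic is the point.

    ```python
    # Emotion "translation vector" in an autoencoder bottleneck (sketch).
    import numpy as np

    rng = np.random.default_rng(1)
    W_enc = rng.normal(size=(64, 16))  # stand-in for a trained encoder
    W_dec = rng.normal(size=(16, 64))  # stand-in for a trained decoder

    def encode(x):                     # image vector -> bottleneck code
        return x @ W_enc

    def decode(z):                     # bottleneck code -> image vector
        return z @ W_dec

    neutral = rng.normal(size=(100, 64))
    smiling = neutral + 0.5            # toy systematic "smile" difference

    # Translation vector identified in the bottleneck layer.
    smile_vec = encode(smiling).mean(axis=0) - encode(neutral).mean(axis=0)

    avatar = rng.normal(size=64)
    animated = decode(encode(avatar) + 0.8 * smile_vec)  # 0.8 = emotion metric
    print(animated.shape)              # (64,) animated avatar image vector
    ```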
  • Publication number: 20190162549
    Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including in an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data. The updated information is provided for vehicle control. The updated information is rendered on a second computing device. The updated information includes road ratings for segments of the vehicle travel route. The updated information includes an emotion metric for vehicle travel route segments.
    Type: Application
    Filed: January 30, 2019
    Publication date: May 30, 2019
    Applicant: Affectiva, Inc.
    Inventors: Maha Amr Mohamed Fouad, Chilton Lyons Cabot, Rana el Kaliouby, Forest Jay Handford
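    Mapping cognitive state data to route locations and rolling it up into road ratings amounts to per-segment aggregation. The segment keys, valence scores, and the 1-to-5 rating scale below are assumptions for the example.

    ```python
    # Per-segment aggregation of cognitive-state samples into road ratings.
    from collections import defaultdict

    samples = [  # (route_segment_id, valence in [-1, 1])
        ("I-90:mile12", 0.4), ("I-90:mile12", 0.1),
        ("I-90:mile13", -0.6), ("I-90:mile13", -0.2),
    ]

    by_segment = defaultdict(list)
    for segment, valence in samples:
        by_segment[segment].append(valence)

    for segment, vals in by_segment.items():
        mean = sum(vals) / len(vals)
        rating = round(1 + 2 * (mean + 1))   # map [-1, 1] onto a 1-5 scale
        print(segment, "rating:", rating)
    ```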
  • Publication number: 20190152492
    Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 23, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
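    The decision itself combines the vehicle's state of operation with the cognitive scoring metric. A minimal sketch follows; the 0.7 and 0.4 thresholds and the two-mode model are illustrative assumptions, not the patent's logic.

    ```python
    # Control-transfer decision from operating mode + cognitive score.
    def transfer_decision(mode: str, score: float) -> str:
        """mode: 'autonomous' or 'manual'; score: higher = more capable."""
        if mode == "autonomous" and score >= 0.7:
            return "offer control to the individual"
        if mode == "manual" and score < 0.4:
            return "transfer control to the vehicle"
        return "no transfer"

    for mode, score in [("autonomous", 0.9), ("manual", 0.2), ("manual", 0.8)]:
        print(mode, score, "->", transfer_decision(mode, score))
    ```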
  • Patent number: 10289898
    Abstract: Analysis of mental state data is provided to enable video recommendations via affect. Analysis and recommendation are made for socially shared live-stream video. Video response is evaluated based on viewing and sampling various videos. Data is captured for viewers of a video, where the data includes facial information and/or physiological data. Facial and physiological information is gathered for a group of viewers. In some embodiments, demographic information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: May 14, 2019
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman Mahmoud, Panu James Turcot
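    Ranking videos by captured affect reduces, in the simplest reading, to ordering them by an aggregate viewer score. The per-viewer scores and mean-based ranking below are assumptions for the example.

    ```python
    # Rank videos by mean positive-affect score across their viewers.
    viewer_affect = {
        "video_a": [0.8, 0.6, 0.9],   # per-viewer positive-affect scores
        "video_b": [0.2, 0.3],
        "video_c": [0.7, 0.75, 0.4],
    }

    def mean_affect(video: str) -> float:
        scores = viewer_affect[video]
        return sum(scores) / len(scores)

    ranked = sorted(viewer_affect, key=mean_affect, reverse=True)
    print(ranked)  # recommend the top-ranked videos first
    ```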
  • Publication number: 20190133510
    Abstract: Mental state analysis uses sporadic collection of affect data within a vehicle. Mental state data of a vehicle occupant is collected within a vehicle on an intermittent basis. The mental state data includes facial image data and the facial image data is collected intermittently across a plurality of devices within the vehicle. The mental state data further includes audio information. Processors are used to interpolate mental state data in between the collecting which is intermittent. Analysis of the mental state data is obtained on the vehicle occupant, where the analysis of the mental state data includes analyzing the facial image data. An output is rendered based on the analysis of the mental state data. The rendering includes communicating via a virtual assistant, communicating with a navigation component, and manipulating the vehicle. The mental state data is translated into an emoji.
    Type: Application
    Filed: December 3, 2018
    Publication date: May 9, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
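    The final rendering step, translating an (interpolated) mental-state value into an emoji, is simple to show; the bucket boundaries and emoji choices below are assumptions for the example.

    ```python
    # Translate a mental-state valence score into an emoji for rendering.
    def to_emoji(valence: float) -> str:
        if valence > 0.3:
            return "😀"
        if valence < -0.3:
            return "☹️"
        return "😐"

    for v in (0.6, 0.0, -0.5):
        print(v, to_emoji(v))
    ```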