Patents by Inventor Rana el Kaliouby

Rana el Kaliouby has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190283762
    Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states, which are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
    Type: Application
    Filed: June 2, 2019
    Publication date: September 19, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
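
A minimal Python sketch of the loading-curve idea in the entry above: a scalar cognitive-load estimate is mapped onto a continuous curve, and the curve position drives a sensory-stimulation change. The logistic shape, thresholds, and function names are illustrative assumptions, not details from the publication.

```python
import numpy as np

def loading_curve(load: float) -> float:
    # Map a raw cognitive-load estimate in [0, 1] onto a smooth,
    # continuous loading curve; a logistic curve is one plausible shape.
    return 1.0 / (1.0 + np.exp(-10.0 * (load - 0.5)))

def select_stimulation(curve_value: float) -> str:
    # Choose a sensory-stimulation change from the curve position.
    if curve_value < 0.3:
        return "increase stimulation: brighten cabin, raise audio tempo"
    if curve_value > 0.7:
        return "decrease stimulation: dim displays, soften audio"
    return "no change"

load = 0.15  # hypothetical output of the image and audio analysis
print(select_stimulation(loading_curve(load)))
```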
  • Patent number: 10401860
    Abstract: Image analysis is performed for a two-sided data hub. Data reception on a first computing device is enabled by an individual and a content provider. Cognitive state data including facial data on the individual is collected on a second computing device. The cognitive state data is analyzed on a third computing device and the analysis is provided to the individual. The cognitive state data is evaluated and the evaluation is provided to the content provider. A mood dashboard is displayed to the individual based on the analyzing. The individual opts in to enable data reception, and the content provider provides content via a website.
    Type: Grant
    Filed: March 12, 2018
    Date of Patent: September 3, 2019
    Assignee: Affectiva, Inc.
    Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Gabriele Zijderveld, Chilton Lyons Cabot
  • Publication number: 20190268660
    Abstract: Techniques are disclosed for vehicle video recommendation via affect. A first media presentation is played to a vehicle occupant. The playing is accomplished using a video client. Cognitive state data for the vehicle occupant is captured, where the cognitive state data includes video facial data from the vehicle occupant during the first media presentation playing. The first media presentation is ranked, on an analysis server, relative to another media presentation based on the cognitive state data which was captured for the vehicle occupant. The ranking is determined for the vehicle occupant. The cognitive state data which was captured for the vehicle occupant is correlated, on the analysis server, to cognitive state data collected from other people who experienced the first media presentation. One or more further media presentation selections are recommended to the vehicle occupant, based on the ranking and the correlating.
    Type: Application
    Filed: May 10, 2019
    Publication date: August 29, 2019
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
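
A small sketch of the ranking and correlating steps from the entry above, assuming per-presentation affect scores and per-frame affect traces are already available; all names and data are hypothetical placeholders.

```python
import numpy as np

def rank_presentations(scores):
    # Rank media presentations by the occupant's affect score, descending.
    return sorted(scores, key=scores.get, reverse=True)

def most_similar_viewer(occupant_trace, other_traces):
    # Correlate the occupant's per-frame affect trace with traces from
    # other people who experienced the same presentation.
    best, best_r = None, -2.0
    for viewer, trace in other_traces.items():
        r = np.corrcoef(occupant_trace, trace)[0, 1]
        if r > best_r:
            best, best_r = viewer, r
    return best, best_r

scores = {"clip_a": 0.62, "clip_b": 0.81, "clip_c": 0.40}
occupant = np.array([0.1, 0.4, 0.7, 0.6, 0.8])
others = {"viewer_1": np.array([0.2, 0.5, 0.6, 0.6, 0.9]),
          "viewer_2": np.array([0.9, 0.1, 0.2, 0.4, 0.1])}
print(rank_presentations(scores))        # recommendation order
print(most_similar_viewer(occupant, others))
```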
  • Publication number: 20190197330
    Abstract: Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible-light-based images and near-infrared-based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared-based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.
    Type: Application
    Filed: March 1, 2019
    Publication date: June 27, 2019
    Inventors: Abdelrahman N. Mahmoud, Rana el Kaliouby, Seyedmohammad Mavadati, Panu James Turcot
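
One way to read "train on visible light, then modify with near-infrared" is incremental fine-tuning. A sketch using scikit-learn's SGDClassifier with random stand-in features; the feature dimensions, labels, and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Hypothetical embeddings: 64-d face features from visible-light frames.
X_rgb, y_rgb = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)
# A smaller batch of near-infrared frames of the same occupant.
X_nir, y_nir = rng.normal(0.3, 1.0, size=(80, 64)), rng.integers(0, 2, 80)

clf = SGDClassifier(loss="log_loss", random_state=0)
# Step 1: train the cognitive-state classifier on visible-light content.
clf.partial_fit(X_rgb, y_rgb, classes=np.array([0, 1]))
# Step 2: modify (fine-tune) the same model with near-infrared content.
clf.partial_fit(X_nir, y_nir)
# Step 3: deploy on additional near-infrared images.
print(clf.predict(X_nir[:5]))
```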
  • Publication number: 20190172462
    Abstract: Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data corresponding to the video data is obtained on a second computing device. A face within the video data is identified. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features corresponding to the cognitive content of the video data are extracted. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.
    Type: Application
    Filed: February 11, 2019
    Publication date: June 6, 2019
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, Rana el Kaliouby
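
A compact sketch of the cross-modal idea in the entry above: cognitive-state labels derived from facial analysis of video segments supervise an audio classifier, so later audio can be analyzed without video. The features and labels below are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-segment data: facial analysis supplies a cognitive-state
# label for each video segment; audio features cover the same time spans.
n_segments = 200
audio_features = rng.normal(size=(n_segments, 40))   # e.g. MFCC statistics
face_labels = rng.integers(0, 2, n_segments)         # from video analysis

# Learn an audio classifier using labels derived from the face analysis,
# so further audio data can be analyzed with no accompanying video.
audio_clf = LogisticRegression(max_iter=1000).fit(audio_features, face_labels)
new_audio = rng.normal(size=(3, 40))
print(audio_clf.predict(new_audio))
```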
  • Publication number: 20190172243
    Abstract: Techniques are described for image generation for avatar image animation using translation vectors. An avatar image is obtained for representation on a first computing device. An autoencoder is trained, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces. A plurality of translation vectors is identified corresponding to a plurality of emotion metrics, based on the training. A bottleneck layer within the autoencoder is used to identify the plurality of translation vectors. A subset of the plurality of translation vectors is applied to the avatar image, wherein the subset represents an emotion metric input. The emotion metric input is obtained from facial analysis of an individual. An animated avatar image is generated for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input and the avatar image includes vocalizations.
    Type: Application
    Filed: November 30, 2018
    Publication date: June 6, 2019
    Applicant: Affectiva, Inc.
    Inventors: Taniya Mishra, George Alexander Reichenbach, Rana el Kaliouby
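
A toy sketch of applying a translation vector at an autoencoder's bottleneck, as the entry above describes. Linear encoder and decoder matrices stand in for the trained neural network; all dimensions and vectors are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
latent_dim, image_dim = 16, 1024

# Stand-ins for a trained autoencoder's encoder and decoder (linear here
# for brevity; the publication describes a neural autoencoder).
W_enc = rng.normal(size=(latent_dim, image_dim)) * 0.01
W_dec = rng.normal(size=(image_dim, latent_dim)) * 0.01

# One translation vector per emotion metric, identified at the bottleneck.
translations = {"smile": rng.normal(size=latent_dim),
                "brow_raise": rng.normal(size=latent_dim)}

def animate(avatar_image, emotion, intensity):
    # Shift the avatar's bottleneck code along an emotion's translation
    # vector, scaled by the measured intensity, then decode.
    z = W_enc @ avatar_image
    z_shifted = z + intensity * translations[emotion]
    return W_dec @ z_shifted

avatar = rng.normal(size=image_dim)     # hypothetical flattened avatar image
frame = animate(avatar, "smile", 0.8)   # intensity from facial analysis
print(frame.shape)
```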
  • Publication number: 20190162549
    Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data. The updated information is provided for vehicle control. The updated information is rendered on a second computing device. The updated information includes road ratings for segments of the vehicle travel route. The updated information includes an emotion metric for vehicle travel route segments.
    Type: Application
    Filed: January 30, 2019
    Publication date: May 30, 2019
    Applicant: Affectiva, Inc.
    Inventors: Maha Amr Mohamed Fouad, Chilton Lyons Cabot, Rana el Kaliouby, Forest Jay Handford
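
A minimal sketch of mapping cognitive state samples to route segments and deriving per-segment road ratings; the segment IDs, metric values, and mean-based rating rule are assumptions for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical samples: (route_segment_id, emotion_metric in [0, 1]),
# produced by pairing cognitive-state timestamps with GPS locations.
samples = [("seg_1", 0.8), ("seg_1", 0.7), ("seg_2", 0.2),
           ("seg_2", 0.3), ("seg_3", 0.9)]

by_segment = defaultdict(list)
for segment, metric in samples:
    by_segment[segment].append(metric)

# One plausible road rating: the mean emotion metric per segment,
# usable to update route information provided for vehicle control.
ratings = {seg: round(mean(vals), 2) for seg, vals in by_segment.items()}
print(ratings)  # e.g. {'seg_1': 0.75, 'seg_2': 0.25, 'seg_3': 0.9}
```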
  • Publication number: 20190152492
    Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
    Type: Application
    Filed: December 28, 2018
    Publication date: May 23, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
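
A sketch of the scoring and control-transfer logic described above; the weights and threshold are illustrative values, not taken from the publication.

```python
def cognitive_score(attention, drowsiness, distraction):
    # Collapse cognitive-state information into one scoring metric.
    # The weights are illustrative assumptions.
    return 0.5 * attention - 0.3 * drowsiness - 0.2 * distraction

def transfer_control(vehicle_state, score, threshold=0.25):
    # Decide whether control moves between the vehicle and the individual,
    # based on the vehicle's state of operation and the individual's score.
    if vehicle_state == "autonomous" and score >= threshold:
        return "offer manual control to individual"
    if vehicle_state == "manual" and score < threshold:
        return "vehicle assumes autonomous control"
    return "no transfer"

# Hypothetical metrics from in-vehicle infrared/facial/audio/biosensor data.
score = cognitive_score(attention=0.9, drowsiness=0.7, distraction=0.1)
print(transfer_control("manual", score))
```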
  • Patent number: 10289898
    Abstract: Analysis of mental state data is provided to enable video recommendations via affect. Analysis and recommendation are made for socially shared live-stream video. Video response is evaluated based on viewing and sampling various videos. Data is captured for viewers of a video, where the data includes facial information and/or physiological data. Facial and physiological information is gathered for a group of viewers. In some embodiments, demographic information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.
    Type: Grant
    Filed: November 21, 2016
    Date of Patent: May 14, 2019
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman Mahmoud, Panu James Turcot
  • Publication number: 20190133510
    Abstract: Mental state analysis uses sporadic collection of affect data within a vehicle. Mental state data of a vehicle occupant is collected within a vehicle on an intermittent basis. The mental state data includes facial image data, and the facial image data is collected intermittently across a plurality of devices within the vehicle. The mental state data further includes audio information. Processors are used to interpolate mental state data between the intermittent collections. Analysis of the mental state data is obtained on the vehicle occupant, where the analysis includes analyzing the facial image data. An output is rendered based on the analysis of the mental state data. The rendering includes communicating by a virtual assistant, communicating with a navigation component, and manipulating the vehicle. The mental state data is translated into an emoji.
    Type: Application
    Filed: December 3, 2018
    Publication date: May 9, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
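
The interpolation step lends itself to a short sketch: given intermittent valence samples with timestamps, estimate values between collections. Linear interpolation is one simple choice; the timestamps and values are invented.

```python
import numpy as np

# Sparse, intermittent samples: (seconds into drive, valence in [-1, 1]),
# collected across several in-vehicle devices.
times = np.array([0.0, 12.0, 31.0, 55.0])
valence = np.array([0.1, 0.4, -0.2, 0.3])

# Interpolate mental state data between the intermittent collections
# to get a continuous estimate along the drive.
query_times = np.arange(0.0, 56.0, 5.0)
estimated = np.interp(query_times, times, valence)
print(np.round(estimated, 2))
```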
  • Publication number: 20190110103
    Abstract: Content manipulation uses cognitive states for vehicle content recommendation. Images are obtained of a vehicle occupant using imaging devices within a vehicle. The images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or in different vehicles.
    Type: Application
    Filed: December 6, 2018
    Publication date: April 11, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
  • Publication number: 20190073547
    Abstract: Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
    Type: Application
    Filed: October 29, 2018
    Publication date: March 7, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Gabriele Zijderveld
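
A sketch of categorizing an individual's accumulated cognitive-state averages against population norms, as the entry above describes; z-score thresholds are one plausible comparison, and all numbers here are invented.

```python
# Hypothetical population norms accumulated from many individuals.
norm_mean = {"arousal": 0.45, "valence": 0.10}
norm_std = {"arousal": 0.15, "valence": 0.20}

def categorize(profile):
    # Place an individual's accumulated averages against population norms;
    # a z-score against the norm is one plausible comparison rule.
    labels = {}
    for metric, value in profile.items():
        z = (value - norm_mean[metric]) / norm_std[metric]
        labels[metric] = "high" if z > 1 else "low" if z < -1 else "typical"
    return labels

individual = {"arousal": 0.70, "valence": 0.05}  # accumulated over time
print(categorize(individual))  # {'arousal': 'high', 'valence': 'typical'}
```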
  • Patent number: 10204625
    Abstract: Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data is obtained, on a second computing device, which corresponds to the video data. A face is identified within the video data. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features are extracted corresponding to the cognitive content of the video data. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.
    Type: Grant
    Filed: January 4, 2018
    Date of Patent: February 12, 2019
    Assignee: Affectiva, Inc.
    Inventors: Taniya Mishra, Rana el Kaliouby
  • Publication number: 20190034706
    Abstract: Concepts for facial tracking with classifiers are disclosed. A plurality of images is captured, received, and partitioned into a series of image frames. The plurality of images is captured of an individual viewing a display. One or more faces are identified and tracked in the image frames using a plurality of classifiers. The plurality of classifiers is used to perform head pose estimation. The plurality of images is analyzed to evaluate the query of whether the display was attended by the individual whose face was identified. The analyzing includes determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. An engagement score and emotional responses are determined for media and images provided on the display. A result is rendered for the query, based on the analysis.
    Type: Application
    Filed: September 28, 2018
    Publication date: January 31, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
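
The attendance query decomposes into three checks: present in front of the screen, facing it, and gazing at it. A sketch with illustrative head-pose tolerances; the per-frame inputs stand in for classifier outputs.

```python
def attended_display(face_detected, yaw_deg, pitch_deg, gaze_on_screen):
    # Evaluate the query "did the individual attend the display?" as the
    # conjunction of three checks: in front of the screen, facing it
    # (head pose within tolerance), and gazing at it.
    facing = abs(yaw_deg) < 20 and abs(pitch_deg) < 15  # illustrative limits
    return face_detected and facing and gaze_on_screen

# Hypothetical per-frame outputs from the classifier ensemble.
frames = [(True, 5.0, 3.0, True), (True, 40.0, 2.0, True),
          (False, 0.0, 0.0, False)]
attention = [attended_display(*f) for f in frames]
engagement_score = sum(attention) / len(attention)
print(attention, engagement_score)
```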
  • Publication number: 20190012599
    Abstract: Techniques are described for machine-trained analysis for multimodal machine learning. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels, wherein the trained weights cover both the audio information and the video information and are trained simultaneously, and wherein the learning facilitates emotional analysis of the audio information and the video information. A second computing device captures further information and analyzes the further information using trained weights to provide an emotion metric based on the further information.
    Type: Application
    Filed: September 11, 2018
    Publication date: January 10, 2019
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
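
A stand-in for simultaneous training over both modalities: one weight vector spans concatenated audio and video features and is updated in a single loop. Plain logistic regression is used here for brevity, not the multilayered convolutional system the publication names, and the data is random.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_audio, d_video = 400, 20, 30

# Contemporaneous audio and video features for the same moments.
X = np.hstack([rng.normal(size=(n, d_audio)), rng.normal(size=(n, d_video))])
y = rng.integers(0, 2, n)

# One weight vector covering both modalities, trained simultaneously
# with logistic-regression gradient descent.
w = np.zeros(d_audio + d_video)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

emotion_metric = 1.0 / (1.0 + np.exp(-X[:3] @ w))  # scores for new capture
print(np.round(emotion_metric, 3))
```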
  • Patent number: 10143414
    Abstract: An individual can exhibit one or more mental states when reacting to a stimulus. A camera or other monitoring device can be used to collect, on an intermittent basis, mental state data including facial data. The mental state data can be interpolated between the intermittent collections. The facial data can be obtained from a series of images of the individual, where the images are captured intermittently. A second face can be identified, and both the first and second faces can be tracked.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: December 4, 2018
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Daniel Bender, Evan Kodra, Oliver Ernst Nowak, Richard Scott Sadowsky
  • Publication number: 20180330178
    Abstract: Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.
    Type: Application
    Filed: May 9, 2018
    Publication date: November 15, 2018
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 10111611
    Abstract: The mental state of an individual is obtained in order to generate an emotional profile for the individual. The individual's mental state is derived from an analysis of the individual's facial and physiological information. The emotional profiles of other individuals are correlated with that of the first individual for comparison. Various categories of emotional profiles are defined based upon the correlation. The emotional profile of the individual or group of individuals is rendered for display, used to provide feedback and to recommend activities for the individual, or used to provide information about the individual.
    Type: Grant
    Filed: July 10, 2014
    Date of Patent: October 30, 2018
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Avril England
  • Publication number: 20180303397
    Abstract: Techniques are described for image analysis and representation for emotional metric threshold generation. A client device is used to collect image data of a user interacting with a media presentation, where the image data includes facial images of the user. One or more processors are used to analyze the image data to extract the emotional content of the facial images. One or more emotional intensity metrics are determined based on the emotional content. The one or more emotional intensity metrics are stored in a digital storage component. The one or more emotional intensity metrics, obtained from the digital storage component, are coalesced into a summary emotional intensity metric. The summary emotional intensity metric is represented.
    Type: Application
    Filed: June 25, 2018
    Publication date: October 25, 2018
    Applicant: Affectiva, Inc.
    Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Chilton Lyons Cabot
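
Coalescing the stored intensity metrics into a summary metric could be as simple as a weighted blend of mean and peak intensity; the formula below is an assumption for illustration, since the publication does not specify one.

```python
from statistics import mean

# Emotional intensity metrics stored per interaction, as described above.
stored_metrics = [0.32, 0.55, 0.61, 0.48, 0.90, 0.27]

# Coalesce into one summary metric: a blend of average and peak intensity
# (the 0.7/0.3 weighting is a placeholder choice).
summary = 0.7 * mean(stored_metrics) + 0.3 * max(stored_metrics)
print(round(summary, 3))
```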
  • Patent number: 10108852
    Abstract: A system and method for facial analysis to detect asymmetric expressions is disclosed. A series of facial images is collected, and an image from the series is evaluated with a classifier. The image is then flipped to create a mirror image, and the flipped image is evaluated with the same classifier. The results of the evaluation of the original image and the flipped image are compared. Asymmetric features such as a wink, a raised eyebrow, a smirk, or a wince are identified. These asymmetric features are associated with mental states such as skepticism, contempt, condescension, repugnance, disgust, disbelief, cynicism, pessimism, doubt, suspicion, and distrust.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: October 23, 2018
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby
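
The flip-and-compare technique in the entry above is concrete enough to sketch directly: evaluate an image and its mirror with the same classifier and flag a large score gap as asymmetry. The toy classifier and threshold here are placeholders for a trained expression classifier.

```python
import numpy as np

def expression_score(image):
    # Stand-in for the trained expression classifier: a toy score from the
    # brightness difference between the left and right halves of the face.
    _, w = image.shape
    return float(image[:, : w // 2].mean() - image[:, w // 2 :].mean())

def asymmetry(image, threshold=0.1):
    # Evaluate the image and its horizontal flip with the same classifier;
    # a large score gap indicates an asymmetric expression such as a smirk.
    flipped = np.fliplr(image)
    gap = abs(expression_score(image) - expression_score(flipped))
    return gap > threshold, gap

rng = np.random.default_rng(4)
face = rng.random((64, 64))
face[:, :32] += 0.3   # simulate a one-sided (asymmetric) facial feature
print(asymmetry(face))
```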