Patents by Inventor Rana el Kaliouby

Rana el Kaliouby has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11935281
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: March 19, 2024
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
  • Patent number: 11887352
    Abstract: Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.
    Type: Grant
    Filed: March 25, 2020
    Date of Patent: January 30, 2024
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Graham John Page, Gabriele Zijderveld
  • Patent number: 11887383
    Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 30, 2024
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11823055
    Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: November 21, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11769056
    Abstract: Machine learning is performed using synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into vector representations of the facial elements. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations, wherein the one or more synthetic vectors enable avoidance of discriminator detection in the GAN. The training a GAN further comprises determining a generator accuracy using the discriminator. The generator accuracy can enable a classifier, where the classifier comprises a multi-layer perceptron. Additional synthetic vectors are generated in the GAN, wherein the additional synthetic vectors avoid discriminator detection. A machine learning neural network is trained using the additional synthetic vectors.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: September 26, 2023
    Assignee: Affectiva, Inc.
    Inventors: Sandipan Banerjee, Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Taniya Mishra
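The abstract above describes monitoring a "generator accuracy" with the discriminator: the fraction of synthetic vectors that avoid discriminator detection. A minimal NumPy sketch of that check follows; the linear generator, logistic discriminator, and all parameter names are hypothetical illustrations, not the patented implementation.

```python
import numpy as np

def generator(z, W):
    # Hypothetical linear generator mapping noise vectors to
    # synthetic facial-element vectors.
    return z @ W

def discriminator_real_prob(x, w, b):
    # Hypothetical logistic discriminator: probability that each
    # row of x is a real (non-synthetic) vector.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def generator_accuracy(synthetic, w, b, threshold=0.5):
    """Fraction of synthetic vectors that the discriminator scores
    as real, i.e. that avoid discriminator detection."""
    probs = discriminator_real_prob(synthetic, w, b)
    return float(np.mean(probs >= threshold))
```

In a full GAN training loop, this accuracy would be recomputed each epoch to decide when the generator's synthetic vectors are good enough to augment the neural network training dataset.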
  • Patent number: 11704574
    Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: July 18, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
  • Patent number: 11700420
    Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 11, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra
  • Patent number: 11657288
    Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: May 23, 2023
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
  • Patent number: 11587357
    Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: February 21, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Publication number: 20230033776
    Abstract: Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
    Type: Application
    Filed: October 10, 2022
    Publication date: February 2, 2023
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11511757
    Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: November 29, 2022
    Assignee: Affectiva, Inc.
    Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
  • Patent number: 11484685
    Abstract: Techniques for robotic control using profiles are disclosed. Cognitive state data for an individual is obtained. A cognitive state profile for the individual is learned using the cognitive state data that was obtained. Further cognitive state data for the individual is collected. The further cognitive state data is compared with the cognitive state profile. Stimuli are provided by a robot to the individual based on the comparing. The robot can be a smart toy. The cognitive state data can include facial image data for the individual. The further cognitive state data can include audio data for the individual. The audio data can be voice data. The voice data augments the cognitive state data. Cognitive state data for the individual is obtained using another robot. The cognitive state profile is updated based on input from either of the robots.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: November 1, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Jason Krupat
  • Patent number: 11465640
    Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: October 11, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11430561
    Abstract: Remote computing analysis for cognitive state data metrics is performed. Cognitive state data from a plurality of people is collected as they interact with a rendering. The cognitive state data includes video facial data collected on one or more local devices from the plurality of people. Information is uploaded to a remote server. The information includes the cognitive state data. A facial expression metric based on a plurality of image classifiers is calculated for each individual within the plurality of people. Cognitive state information is generated for each individual, based on the facial expression metric for each individual. The cognitive state information for each individual within the plurality of people who interacted with the rendering is aggregated. The aggregation is based on the facial expression metric for each individual. The cognitive state information that was aggregated is displayed on at least one of the one or more local devices.
    Type: Grant
    Filed: July 21, 2020
    Date of Patent: August 30, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Rosalind Wright Picard, Richard Scott Sadowsky
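The aggregation step above combines per-individual cognitive state information based on each individual's facial expression metric. One plausible reading is a weighted average, sketched below; the field names and the weighting scheme are assumptions for illustration only.

```python
def aggregate_cognitive_state(individuals):
    """individuals: list of dicts, each with a per-person facial
    'expression_metric' and a numeric cognitive 'state' value
    (hypothetical field names). Each person's state is weighted by
    their expression metric, so strongly expressive faces
    contribute more to the aggregate."""
    total_weight = sum(p["expression_metric"] for p in individuals)
    if total_weight == 0:
        return 0.0
    weighted = sum(p["expression_metric"] * p["state"] for p in individuals)
    return weighted / total_weight
```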
  • Patent number: 11430260
    Abstract: Techniques for performing viewing verification using a plurality of classifiers are disclosed. Images of an individual may be obtained concurrently with an electronic display presenting one or more images. Image classifiers for facial and head pose analysis may be obtained. The images of the individual may be analyzed to identify a face of the individual in one of the plurality of images. A viewing verification metric may be calculated using the image classifiers and a verified viewing duration of the screen images by the individual may be calculated based on the plurality of images and the analyzing. Viewing verification can involve determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. A viewing verification metric can be generated in order to determine a level of interest of the individual in particular media and images.
    Type: Grant
    Filed: December 24, 2019
    Date of Patent: August 30, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
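The viewing verification described above requires that the individual is in front of the screen, facing it, and gazing at it. A minimal sketch of turning per-frame classifier outputs into a verified viewing duration and metric follows; the per-frame boolean fields are hypothetical stand-ins for the patent's image classifiers.

```python
def verified_viewing_duration(frames, fps=30):
    """frames: per-frame dicts with hypothetical classifier outputs
    'present' (in front of the screen), 'facing', and 'gazing'.
    A frame counts as verified viewing only when all three hold.
    Returns (duration_seconds, metric in [0, 1])."""
    verified = sum(
        1 for f in frames if f["present"] and f["facing"] and f["gazing"]
    )
    duration = verified / fps
    metric = verified / len(frames) if frames else 0.0
    return duration, metric
```

The resulting metric can serve as the level-of-interest signal the abstract mentions: a value near 1.0 means the individual viewed the media for nearly the full presentation.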
  • Patent number: 11410438
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 9, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
  • Patent number: 11393133
    Abstract: A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
    Type: Grant
    Filed: March 19, 2020
    Date of Patent: July 19, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
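The emoji translation above sums classifier confidence values over facial action units (AUs). A toy sketch of that selection step follows; the AU-to-emoji profiles and weights are hypothetical examples, not drawn from the patent.

```python
# Hypothetical AU-confidence profiles per emoji (illustrative only).
EMOJI_AU_PROFILES = {
    "smiling": {"AU6": 1.0, "AU12": 1.0},    # cheek raiser + lip corner puller
    "frowning": {"AU4": 1.0, "AU15": 1.0},   # brow lowerer + lip corner depressor
    "surprised": {"AU1": 1.0, "AU2": 1.0, "AU26": 1.0},
}

def select_emoji(au_confidences):
    """Score each candidate emoji by summing the classifier
    confidence values for its associated action units, then select
    the highest-scoring emoji from the imported set."""
    def score(profile):
        return sum(au_confidences.get(au, 0.0) * w for au, w in profile.items())
    return max(EMOJI_AU_PROFILES, key=lambda e: score(EMOJI_AU_PROFILES[e]))
```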
  • Patent number: 11318949
    Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: May 3, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
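The blink-rate pipeline above groups closed-eye frames into blink events, derives a blink rate, and maps it to a drowsiness metric. A minimal sketch of that temporal analysis follows, assuming a hypothetical per-frame eye-state classifier and a nominal alert-baseline blink rate; the demographic compensation step is omitted.

```python
def detect_blink_events(eyes_closed):
    """Group consecutive closed-eye frames into blink events.

    eyes_closed: per-frame booleans (True = eyes closed) from a
    hypothetical frame classifier. Returns a list of
    (start_frame, duration_frames) tuples."""
    events, start = [], None
    for i, closed in enumerate(eyes_closed):
        if closed and start is None:
            start = i
        elif not closed and start is not None:
            events.append((start, i - start))
            start = None
    if start is not None:
        events.append((start, len(eyes_closed) - start))
    return events

def drowsiness_metric(eyes_closed, fps=30, baseline_bpm=15.0):
    """Blink rate in blinks per minute, normalised against an
    assumed alert baseline; values above 1.0 suggest elevated
    drowsiness."""
    events = detect_blink_events(eyes_closed)
    minutes = len(eyes_closed) / fps / 60.0
    bpm = len(events) / minutes if minutes > 0 else 0.0
    return bpm / baseline_bpm
```

Each event's duration in frames also gives the blink duration the abstract evaluates, since duration_seconds = duration_frames / fps.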
  • Patent number: 11292477
    Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
    Type: Grant
    Filed: June 2, 2019
    Date of Patent: April 5, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
  • Publication number: 20220101146
    Abstract: Techniques for machine learning based on neural network training with bias mitigation are disclosed. Facial images for a neural network configuration and a neural network training dataset are obtained. The training dataset is associated with the neural network configuration. The facial images are partitioned into multiple subgroups, wherein the subgroups represent demographics with potential for biased training. A multifactor key performance indicator (KPI) is calculated per image. The calculating is based on analyzing performance of two or more image classifier models. The neural network configuration and the training dataset are promoted to a production neural network, wherein the promoting is based on the KPI. The KPI identifies bias in the training dataset. Promotion of the neural network configuration and the neural network training dataset is based on identified bias.
    Type: Application
    Filed: September 23, 2021
    Publication date: March 31, 2022
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Sneha Bhattacharya, Taniya Mishra, Shruti Ranjalkar
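The bias-mitigation approach above computes a per-image multifactor KPI from two classifier models and uses it to flag biased training across demographic subgroups. A minimal sketch follows; averaging the two model scores and using the max-min subgroup gap as the bias signal are assumptions for illustration, not the patented formula.

```python
from statistics import mean

def multifactor_kpi(per_image_scores):
    """per_image_scores: list of dicts like
    {"subgroup": "A", "model1": 0.9, "model2": 0.8}, where the
    model fields are hypothetical per-image scores from two image
    classifier models. Computes a per-image KPI (here, the mean of
    the two model scores), averages it per subgroup, and reports
    the gap between the best and worst subgroups; a large gap
    flags potential bias in the training dataset."""
    groups = {}
    for row in per_image_scores:
        kpi = mean([row["model1"], row["model2"]])
        groups.setdefault(row["subgroup"], []).append(kpi)
    per_group = {g: mean(v) for g, v in groups.items()}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap
```

A promotion gate could then require the gap to stay below a chosen threshold before the configuration and dataset are promoted to production.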