Patents by Inventor Gabriele Zijderveld
Gabriele Zijderveld has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11887352
Abstract: Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.
Type: Grant
Filed: March 25, 2020
Date of Patent: January 30, 2024
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Graham John Page, Gabriele Zijderveld
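The abstract above describes an on-device analysis flow in which participants' images never leave the local device and only derived results reach the second set of participants. The following is a minimal sketch of that data flow, assuming hypothetical engagement and valence scores and a placeholder in place of the actual on-device classifier; none of these names come from the patent.

```python
# Minimal sketch (not Affectiva's implementation): frames are analyzed on the
# participant's own device, and only derived cognitive-state results -- never
# the images themselves -- are published to the second set of participants.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CognitiveStateResult:
    participant_id: str
    engagement: float   # hypothetical 0..1 score
    valence: float      # hypothetical -1..1 score

def analyze_frame_locally(participant_id: str, frame: bytes) -> CognitiveStateResult:
    """Placeholder for an on-device classifier; the frame never leaves this function."""
    score = (sum(frame) % 100) / 100.0 if frame else 0.0  # stand-in for a real model
    return CognitiveStateResult(participant_id, engagement=score, valence=2 * score - 1)

def stream_results(frames, participant_id: str, publish: Callable[[dict], None]) -> None:
    for frame in frames:
        result = analyze_frame_locally(participant_id, frame)
        # Only the numeric summary is transmitted to non-local viewers.
        publish({"participant": result.participant_id,
                 "engagement": result.engagement,
                 "valence": result.valence})

if __name__ == "__main__":
    stream_results([b"\x01\x02\x03", b"\x10\x20\x30"], "presenter-1", print)
```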
-
Patent number: 11887383
Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
Type: Grant
Filed: August 28, 2020
Date of Patent: January 30, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
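The abstract above walks through detection, classification, interaction estimation, and left-behind determination. Below is a hedged sketch of those steps with a hypothetical detector output and an assumed reachability threshold; the labels, distances, and threshold are illustrative, not values from the patent.

```python
# Illustrative flow only: classify detected interior objects, estimate the
# occupant's level of interaction from distance, and flag objects left behind
# once the occupant has exited the vehicle.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str            # e.g. "cell phone", "child", "pet"
    distance_m: float     # distance between the occupant and the object

REACHABLE_DISTANCE_M = 0.75  # assumed threshold, not from the patent

def interaction_level(obj: DetectedObject, occupant_present: bool) -> str:
    if not occupant_present:
        return "none"
    return "high" if obj.distance_m <= REACHABLE_DISTANCE_M else "low"

def left_behind(objects, occupant_present: bool):
    return [o.label for o in objects if not occupant_present]

if __name__ == "__main__":
    objects = [DetectedObject("cell phone", 0.4), DetectedObject("luggage", 1.8)]
    print(interaction_level(objects[0], occupant_present=True))  # "high"
    print(left_behind(objects, occupant_present=False))          # both flagged
    # A control element of the vehicle (e.g. an alert) would change based on these results.
```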
-
Patent number: 11823055
Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
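The abstract above describes fusing multi-camera in-cabin observations into per-occupant and vehicle-level moods and promoting the metric to a using application. A minimal sketch of that aggregation follows, assuming hypothetical torso-confidence and mood scores and a stubbed consuming application; the data shapes are mine, not the patent's.

```python
# Illustrative sketch: filter per-camera observations by an assumed upper-torso
# confidence, derive occupant and vehicle moods, and promote the metric to a
# stand-in for a using application (e.g. an autonomous-vehicle planner).
from dataclasses import dataclass
from statistics import mean

@dataclass
class OccupantObservation:
    camera_id: str
    torso_confidence: float   # hypothetical upper-torso detector score
    seat_position: str
    mood_score: float         # hypothetical -1..1 valence estimate

def vehicle_mood(observations) -> float:
    """Aggregate per-occupant moods into a single vehicle-level mood."""
    return mean(o.mood_score for o in observations) if observations else 0.0

def promote(metric: dict) -> None:
    """Stand-in for handing the human perception metric to a using application."""
    print("promoted:", metric)

if __name__ == "__main__":
    obs = [OccupantObservation("cam-front", 0.92, "driver", 0.3),
           OccupantObservation("cam-rear", 0.88, "rear-left", -0.1)]
    occupants = [o for o in obs if o.torso_confidence > 0.8]
    promote({"occupant_moods": {o.seat_position: o.mood_score for o in occupants},
             "vehicle_mood": vehicle_mood(occupants)})
```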
-
Publication number: 20230033776
Abstract: Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Application
Filed: October 10, 2022
Publication date: February 2, 2023
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
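The decision step in the abstract above combines a cognitive scoring metric with the vehicle's state of operation. Here is a hedged sketch of that combination, using a toy scoring formula and an assumed alertness threshold of my own; it is a simplification, not the claimed method.

```python
# Toy sketch of the control-transfer decision: score the individual from
# cognitive state data, then decide on a transfer given the vehicle's state
# of operation. The formula and threshold are assumptions for illustration.
from enum import Enum

class OperationState(Enum):
    MANUAL = "manual"
    AUTONOMOUS = "autonomous"

ALERTNESS_THRESHOLD = 0.6  # assumed cutoff for the cognitive scoring metric

def cognitive_score(drowsiness: float, distraction: float) -> float:
    """Toy cognitive scoring metric in [0, 1]; higher means more fit to drive."""
    return max(0.0, 1.0 - 0.6 * drowsiness - 0.4 * distraction)

def transfer_decision(state: OperationState, score: float) -> str:
    if state is OperationState.MANUAL and score < ALERTNESS_THRESHOLD:
        return "transfer control to vehicle"
    if state is OperationState.AUTONOMOUS and score >= ALERTNESS_THRESHOLD:
        return "individual may take control"
    return "no transfer"

if __name__ == "__main__":
    print(transfer_decision(OperationState.MANUAL, cognitive_score(0.8, 0.5)))
```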
-
Patent number: 11511757
Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Grant
Filed: April 20, 2020
Date of Patent: November 29, 2022
Assignee: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
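The abstract above describes learning cognitive state profiles keyed to absolute or trip duration time, then comparing further data against them to drive a vehicle manipulation. A minimal sketch under my own assumptions (bucketed trip minutes, a valence value, and a fixed deviation threshold) is shown below.

```python
# Sketch, not the patented method: learn a per-occupant profile keyed by trip
# duration time, compare new cognitive state data against the learned profile,
# and manipulate the vehicle when the deviation from the profile is large.
from collections import defaultdict
from statistics import mean

class CognitiveStateProfile:
    def __init__(self):
        self._samples = defaultdict(list)   # trip-duration bucket -> valence samples

    def learn(self, trip_minute: int, valence: float) -> None:
        self._samples[trip_minute // 10].append(valence)

    def expected(self, trip_minute: int) -> float:
        bucket = self._samples.get(trip_minute // 10, [])
        return mean(bucket) if bucket else 0.0

def manipulate_vehicle(deviation: float) -> str:
    # e.g. adjust climate or media when the occupant departs from their learned norm
    return "adjust cabin settings" if abs(deviation) > 0.3 else "no change"

if __name__ == "__main__":
    profile = CognitiveStateProfile()
    for minute, valence in [(5, 0.2), (15, 0.4), (25, 0.1)]:
        profile.learn(minute, valence)
    observed = -0.4
    print(manipulate_vehicle(observed - profile.expected(15)))
```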
-
Patent number: 11465640
Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Grant
Filed: December 28, 2018
Date of Patent: October 11, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
-
Patent number: 11056225
Abstract: Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment. The interactive digital environment can be a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants. Results of the analyzing of the emotional content within the group of images are provided to a second set of participants within the group of participants. The analyzing emotional content includes identifying an image of an individual, identifying a face of the individual, determining facial regions, and performing content evaluation based on applying image classifiers.
Type: Grant
Filed: February 28, 2017
Date of Patent: July 6, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, James Henry Deal, Jr., Forest Jay Handford, Panu James Turcot, Gabriele Zijderveld
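The abstract above spells out a pipeline: identify the individual's image, identify the face, determine facial regions, and apply image classifiers. The stubbed sketch below mirrors that shape; the region names, activation values, and the joy/surprise mapping are illustrative assumptions, not Affectiva's classifiers.

```python
# Illustrative pipeline sketch with stubbed classifiers: split a detected face
# into facial regions and apply simple per-region classifiers to evaluate
# emotional content. Only the results would be shared with other participants.
from dataclasses import dataclass

@dataclass
class FaceRegion:
    name: str          # e.g. "brows", "eyes", "mouth"
    activation: float  # hypothetical feature value in [0, 1]

def classify_regions(regions):
    """Stand-in for applying image classifiers to each facial region."""
    smile = next((r.activation for r in regions if r.name == "mouth"), 0.0)
    brow_raise = next((r.activation for r in regions if r.name == "brows"), 0.0)
    return {"joy": smile, "surprise": brow_raise}

def analyze_participant(frame_regions):
    # Results, not raw frames, are what get provided to the second set of participants.
    return classify_regions(frame_regions)

if __name__ == "__main__":
    print(analyze_participant([FaceRegion("mouth", 0.8), FaceRegion("brows", 0.2)]))
```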
-
Patent number: 10897650
Abstract: Content manipulation uses cognitive states for vehicle content recommendation. Images are obtained of a vehicle occupant using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the one or more images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles.
Type: Grant
Filed: December 6, 2018
Date of Patent: January 19, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
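The abstract above correlates a determined cognitive state with the occupant's content ingestion history to recommend further selections. A minimal sketch of one way such a correlation-and-recommendation step could look, with assumed data shapes (a valence score per history entry) that are not taken from the patent:

```python
# Hedged sketch: recommend the audio/video selections whose previously recorded
# cognitive-state response is closest to the occupant's current cognitive state.
from dataclasses import dataclass

@dataclass
class HistoryEntry:
    selection: str       # audio or video item previously ingested
    valence: float       # hypothetical cognitive state recorded while it played

def recommend(current_valence: float, history, k: int = 2):
    ranked = sorted(history, key=lambda e: abs(e.valence - current_valence))
    return [e.selection for e in ranked[:k]]

if __name__ == "__main__":
    history = [HistoryEntry("upbeat playlist", 0.7),
               HistoryEntry("news podcast", -0.1),
               HistoryEntry("calm piano", 0.2)]
    print(recommend(current_valence=0.25, history=history))
```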
-
Publication number: 20200394428
Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
Type: Application
Filed: August 28, 2020
Publication date: December 17, 2020
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
-
Patent number: 10796176
Abstract: Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
Type: Grant
Filed: October 29, 2018
Date of Patent: October 6, 2020
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Gabriele Zijderveld
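The abstract above builds a personal emotional profile by comparing an individual's accumulated cognitive state data with norms from other individuals, then categorizes new readings against that profile. Below is a simplified sketch of those two steps; the baseline math and the ±0.2 category bands are my own illustrative assumptions.

```python
# Hedged sketch of profile generation and categorization: compare the
# individual's accumulated readings to a population norm, then categorize a
# new reading relative to the individual's personal baseline.
from statistics import mean

def build_profile(individual_samples, population_norm: float) -> dict:
    baseline = mean(individual_samples)
    return {"baseline": baseline, "offset_from_norm": baseline - population_norm}

def categorize(reading: float, profile: dict) -> str:
    delta = reading - profile["baseline"]
    if delta > 0.2:
        return "above personal baseline"
    if delta < -0.2:
        return "below personal baseline"
    return "near personal baseline"

if __name__ == "__main__":
    profile = build_profile([0.1, 0.3, 0.2], population_norm=0.0)
    # A vehicle manipulation (e.g. music or climate change) would key off this category.
    print(categorize(-0.15, profile))
```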
-
Publication number: 20200311475
Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
Type: Application
Filed: March 30, 2020
Publication date: October 1, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
-
Publication number: 20200239005
Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Application
Filed: April 20, 2020
Publication date: July 30, 2020
Applicant: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Publication number: 20200228359
Abstract: Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.
Type: Application
Filed: March 25, 2020
Publication date: July 16, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Graham John Page, Gabriele Zijderveld
-
Publication number: 20200175262
Abstract: Techniques for performing robotic assistance are disclosed. A plurality of images of an individual is obtained by an imagery module associated with an autonomous mobile robot. Cognitive state data including facial data for the individual in the plurality of images is identified by an analysis module associated with the autonomous mobile robot. A facial expression metric, based on the facial data for the individual in the plurality of images, is calculated. A cognitive state metric for the individual is generated by the analysis module based on the cognitive state data. The autonomous mobile robot initiates one or more responses based on the cognitive state metric. The one or more responses include one or more electromechanical responses. The one or more electromechanical responses cause the robot to change locations.
Type: Application
Filed: February 4, 2020
Publication date: June 4, 2020
Applicant: Affectiva, Inc.
Inventors: Boisy G. Pitre, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff, Panu James Turcot, Gabriele Zijderveld
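The abstract above turns per-image facial expression metrics into a cognitive state metric and then into an electromechanical response that changes the robot's location. The sketch below shows one possible mapping, with an assumed aggregation and assumed response names; the actual robot control is not described at this level in the publication.

```python
# Sketch only: aggregate per-image facial expression metrics into one cognitive
# state metric and map it to an electromechanical response, such as moving the
# robot closer to or away from the individual.
from statistics import mean

def cognitive_state_metric(facial_expression_metrics) -> float:
    """Aggregate per-image facial expression metrics into a single metric."""
    return mean(facial_expression_metrics) if facial_expression_metrics else 0.0

def choose_response(metric: float) -> str:
    # Hypothetical mapping from the metric to an electromechanical response.
    if metric > 0.5:
        return "approach individual"
    if metric < -0.5:
        return "retreat to charging station"
    return "hold position"

if __name__ == "__main__":
    print(choose_response(cognitive_state_metric([0.7, 0.6, 0.8])))
```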
-
Patent number: 10627817
Abstract: Vehicle manipulation is performed using occupant image analysis. A camera within a vehicle is used to collect cognitive state data, including facial data, on an occupant of a vehicle. A cognitive state profile is learned, on a first computing device, for the occupant based on the cognitive state data. The cognitive state profile includes information on absolute time. The cognitive state profile includes information on trip duration time. Voice data is collected and the cognitive state data is augmented with the voice data. Further cognitive state data is captured, on a second computing device, on the occupant while the occupant is in a second vehicle. The further cognitive state data is compared, on a third computing device, with the cognitive state profile that was learned for the occupant. The second vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Grant
Filed: January 19, 2018
Date of Patent: April 21, 2020
Assignee: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Patent number: 10401860
Abstract: Image analysis is performed for a two-sided data hub. Data reception on a first computing device is enabled by an individual and a content provider. Cognitive state data including facial data on the individual is collected on a second computing device. The cognitive state data is analyzed on a third computing device and the analysis is provided to the individual. The cognitive state data is evaluated and the evaluation is provided to the content provider. A mood dashboard is displayed to the individual based on the analyzing. The individual opts in to enable data reception for the individual. The content provider provides content via a website.
Type: Grant
Filed: March 12, 2018
Date of Patent: September 3, 2019
Assignee: Affectiva, Inc.
Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Gabriele Zijderveld, Chilton Lyons Cabot
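The abstract above describes a two-sided hub: the individual opts in, analysis feeds the individual's mood dashboard, and a separate evaluation goes to the content provider. A minimal sketch of that routing follows; the class, field names, and the engagement rule are assumptions made for illustration only.

```python
# Minimal two-sided hub sketch: opt-in gating, one analysis returned for the
# individual's mood dashboard, and a separate evaluation for the content provider.
from statistics import mean

class TwoSidedHub:
    def __init__(self):
        self.opted_in = set()

    def opt_in(self, individual_id: str) -> None:
        self.opted_in.add(individual_id)

    def ingest(self, individual_id: str, valence_samples):
        if individual_id not in self.opted_in:
            return None, None
        analysis = {"mood": mean(valence_samples)}           # shown on the mood dashboard
        evaluation = {"engaged": analysis["mood"] > 0.2}     # provided to the content provider
        return analysis, evaluation

if __name__ == "__main__":
    hub = TwoSidedHub()
    hub.opt_in("viewer-42")
    dashboard, provider_view = hub.ingest("viewer-42", [0.3, 0.5, 0.1])
    print(dashboard, provider_view)
```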
-
Publication number: 20190152492
Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Application
Filed: December 28, 2018
Publication date: May 23, 2019
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
-
Publication number: 20190110103
Abstract: Content manipulation uses cognitive states for vehicle content recommendation. Images are obtained of a vehicle occupant using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the one or more images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles.
Type: Application
Filed: December 6, 2018
Publication date: April 11, 2019
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
-
Publication number: 20190073547
Abstract: Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
Type: Application
Filed: October 29, 2018
Publication date: March 7, 2019
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Gabriele Zijderveld
-
Publication number: 20180196432
Abstract: Image analysis is performed for a two-sided data hub. Data reception on a first computing device is enabled by an individual and a content provider. Cognitive state data including facial data on the individual is collected on a second computing device. The cognitive state data is analyzed on a third computing device and the analysis is provided to the individual. The cognitive state data is evaluated and the evaluation is provided to the content provider. A mood dashboard is displayed to the individual based on the analyzing. The individual opts in to enable data reception for the individual. The content provider provides content via a website.
Type: Application
Filed: March 12, 2018
Publication date: July 12, 2018
Applicant: Affectiva, Inc.
Inventors: Jason Krupat, Rana el Kaliouby, Jason Radice, Gabriele Zijderveld, Chilton Lyons Cabot