Patents by Inventor Abdelrahman N. Mahmoud
Abdelrahman N. Mahmoud has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12076149
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine; the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Grant
Filed: May 24, 2021
Date of Patent: September 3, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
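As a rough illustration only, the sketch below shows one way a multilayered analysis engine of this kind could be structured in PyTorch: convolutional layers feeding hidden (fully connected) layers that output per-image cognitive state scores. The class name, layer sizes, and four-state output are assumptions for the example, not details from the patent.

```python
import torch
import torch.nn as nn

class MultilayeredAnalysisEngine(nn.Module):
    """Illustrative stand-in: convolutional layers followed by hidden layers."""
    def __init__(self, num_cognitive_states: int = 4):
        super().__init__()
        # Convolutional layers extract facial features from in-cabin images.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Hidden (fully connected) layers produce the cognitive state analysis.
        self.hidden = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, num_cognitive_states),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.hidden(self.conv(images))

# Evaluate further images (e.g., faces captured in a second vehicle).
engine = MultilayeredAnalysisEngine()
further_images = torch.rand(8, 3, 64, 64)        # batch of face crops
cognitive_state_logits = engine(further_images)  # per-image state scores
print(cognitive_state_logits.shape)              # torch.Size([8, 4])
```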
-
Patent number: 11887383
Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
Type: Grant
Filed: August 28, 2020
Date of Patent: January 30, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
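A minimal sketch of the object-management logic the abstract describes, assuming a simple 2-D cabin coordinate frame, an illustrative reachable-distance threshold, and hypothetical helper names; none of these specifics come from the patent.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str       # e.g. "cell phone", "purse", "child", "pet"
    position: tuple  # (x, y) in cabin coordinates, metres

REACHABLE_DISTANCE_M = 0.8  # assumed threshold, not from the patent

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def interaction_level(occupant_pos, obj: DetectedObject) -> str:
    """Coarse interaction estimate from occupant-to-object distance."""
    d = distance(occupant_pos, obj.position)
    return "reachable" if d <= REACHABLE_DISTANCE_M else "out_of_reach"

def left_behind(objects, occupant_present: bool):
    """Objects still detected after the occupant has left the vehicle."""
    return [] if occupant_present else [o.label for o in objects]

cabin_objects = [DetectedObject("cell phone", (0.4, 0.2)),
                 DetectedObject("briefcase", (1.6, 0.9))]
occupant = (0.0, 0.0)
print([(o.label, interaction_level(occupant, o)) for o in cabin_objects])
print(left_behind(cabin_objects, occupant_present=False))  # items left behind
```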
-
Patent number: 11823055
Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
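A toy sketch of how a human perception metric combining per-occupant moods and a vehicle-level mood might be assembled and promoted to a using application; the majority-vote heuristic and the callback interface are assumptions for illustration.

```python
# Hypothetical aggregation of per-occupant moods into a vehicle-level mood,
# "promoted" to a using application via a simple callback.
def vehicle_mood(occupant_moods: dict) -> str:
    """Majority mood across detected occupants (illustrative heuristic)."""
    counts = {}
    for mood in occupant_moods.values():
        counts[mood] = counts.get(mood, 0) + 1
    return max(counts, key=counts.get)

def promote(metric: dict, using_application) -> None:
    using_application(metric)

occupant_moods = {"driver": "calm", "rear_left": "agitated", "rear_right": "calm"}
metric = {"occupant_moods": occupant_moods,
          "vehicle_mood": vehicle_mood(occupant_moods)}
promote(metric, using_application=print)  # e.g. input to an autonomous-driving stack
```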
-
Patent number: 11587357
Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
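One plausible, simplified way to combine tagged cognitive state samples from multiple sources while discounting partially occluded faces; the per-sample fields and the occlusion-weighted average are assumptions, not the patented analysis.

```python
import numpy as np

# Hypothetical merge of cognitive state data from multiple tagged sources,
# down-weighting frames whose faces are partially occluded.
def merge_sources(samples):
    """samples: dicts with 'source', 'timestamp', 'score', 'occluded_fraction'."""
    weights = np.array([1.0 - s["occluded_fraction"] for s in samples])
    scores = np.array([s["score"] for s in samples])
    if weights.sum() == 0:
        return None
    return float(np.average(scores, weights=weights))

samples = [
    {"source": "dash_cam",   "timestamp": 12.0, "score": 0.62, "occluded_fraction": 0.1},
    {"source": "mirror_cam", "timestamp": 12.0, "score": 0.48, "occluded_fraction": 0.6},
]
print(merge_sources(samples))  # occlusion-weighted cognitive state estimate
```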
-
Publication number: 20230033776
Abstract: Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Application
Filed: October 10, 2022
Publication date: February 2, 2023
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
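A hedged sketch of a control-transfer rule driven by a cognitive scoring metric and the vehicle's state of operation; the scoring blend and threshold are invented for the example.

```python
# Hypothetical control-transfer rule combining a cognitive scoring metric
# with the vehicle's state of operation. Threshold values are assumptions.
ALERTNESS_THRESHOLD = 0.7

def cognitive_score(cognitive_state_info: dict) -> float:
    """Toy score: weighted blend of attention and drowsiness estimates."""
    return (0.6 * cognitive_state_info["attention"]
            + 0.4 * (1.0 - cognitive_state_info["drowsiness"]))

def transfer_decision(vehicle_state: str, score: float) -> str:
    if vehicle_state == "autonomous" and score >= ALERTNESS_THRESHOLD:
        return "offer_manual_control"
    if vehicle_state == "manual" and score < ALERTNESS_THRESHOLD:
        return "request_autonomous_takeover"
    return "no_change"

info = {"attention": 0.9, "drowsiness": 0.2}
print(transfer_decision("autonomous", cognitive_score(info)))  # offer_manual_control
```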
-
Patent number: 11511757
Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Grant
Filed: April 20, 2020
Date of Patent: November 29, 2022
Assignee: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
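An illustrative reduction of the profile-learning and comparison steps: profiles here are mean feature vectors keyed by trip-duration bucket, and the comparison is a simple distance. The bucketing and distance measure are assumptions.

```python
import numpy as np

# Hypothetical cognitive state profile: mean feature vector per trip-duration
# bucket, learned from crowdsourced occupants and compared against a new occupant.
def learn_profile(samples):
    """samples: list of (trip_minutes, feature_vector)."""
    buckets = {}
    for minutes, vec in samples:
        buckets.setdefault(minutes // 10, []).append(vec)
    return {b: np.mean(vs, axis=0) for b, vs in buckets.items()}

def compare(profile, trip_minutes, feature_vector):
    reference = profile.get(trip_minutes // 10)
    if reference is None:
        return None
    return float(np.linalg.norm(np.asarray(feature_vector) - reference))

crowd = [(5, [0.2, 0.7]), (8, [0.3, 0.6]), (25, [0.6, 0.2])]
profile = learn_profile(crowd)
print(compare(profile, 7, [0.8, 0.1]))  # large distance -> deviates from profile
```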
-
Patent number: 11465640
Abstract: Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
Type: Grant
Filed: December 28, 2018
Date of Patent: October 11, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Andrew Todd Zeilman, Gabriele Zijderveld
-
Patent number: 11410438
Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 9, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
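A tiny sketch of the classification step from facial expressions to cognitive response content; the expression-to-response mapping is illustrative, and the on-chip expression scores are assumed inputs.

```python
# Hypothetical mapping from detected facial expressions to cognitive response
# content; the expression scores would come from the on-chip classifier.
EXPRESSION_TO_RESPONSE = {        # illustrative mapping, not from the patent
    "smile": "positive",
    "frown": "negative",
    "smirk": "ambiguous",
    "grimace": "negative",
}

def cognitive_response(expression_scores: dict) -> str:
    dominant = max(expression_scores, key=expression_scores.get)
    return EXPRESSION_TO_RESPONSE.get(dominant, "neutral")

print(cognitive_response({"smile": 0.1, "frown": 0.7, "smirk": 0.2}))  # negative
```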
-
Patent number: 11393133
Abstract: A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating of the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
Type: Grant
Filed: March 19, 2020
Date of Patent: July 19, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
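A small sketch of the described selection step: summing action-unit confidence values per candidate emoji and picking the highest total. The AU groupings per emoji are assumptions for the example.

```python
# Hypothetical emoji selection: per-emoji scores are built by summing the
# confidence values of the action units associated with that emoji.
EMOJI_ACTION_UNITS = {            # illustrative AU groupings, not from the patent
    "😀": ["AU6", "AU12"],         # cheek raiser + lip corner puller
    "😠": ["AU4", "AU7", "AU23"],  # brow lowerer + lid tightener + lip tightener
    "😮": ["AU1", "AU2", "AU26"],  # inner/outer brow raiser + jaw drop
}

def select_emoji(au_confidences: dict) -> str:
    scores = {emoji: sum(au_confidences.get(au, 0.0) for au in aus)
              for emoji, aus in EMOJI_ACTION_UNITS.items()}
    return max(scores, key=scores.get)

frame_aus = {"AU6": 0.8, "AU12": 0.9, "AU4": 0.1}
print(select_emoji(frame_aus))    # 😀
```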
-
Patent number: 11318949
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Grant
Filed: December 11, 2020
Date of Patent: May 3, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
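A compact sketch of the blink-rate pipeline described above: per-frame eye-closed decisions become blink events, blink events become a blink rate, and the rate becomes a drowsiness metric with an optional demographic compensation factor. The frame rate, nominal blink rate, and compensation handling are assumptions.

```python
# Hypothetical blink-rate pipeline: per-frame eye-closed flags -> blink events
# -> blinks per minute -> drowsiness metric, with a simple compensation factor.
def blink_events(eyes_closed_per_frame):
    """Count transitions from open to closed as blink events."""
    events = 0
    previous = False
    for closed in eyes_closed_per_frame:
        if closed and not previous:
            events += 1
        previous = closed
    return events

def drowsiness_metric(eyes_closed_per_frame, fps=30.0, compensation=1.0):
    minutes = len(eyes_closed_per_frame) / (fps * 60.0)
    blink_rate = blink_events(eyes_closed_per_frame) / minutes
    # Low blink rate relative to a nominal 15 blinks/min is treated as drowsier.
    return max(0.0, 1.0 - (blink_rate * compensation) / 15.0)

frames = [False] * 200 + [True] * 5 + [False] * 1595  # one blink in ~1 minute
print(round(drowsiness_metric(frames), 2))
```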
-
Patent number: 11292477
Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
Type: Grant
Filed: June 2, 2019
Date of Patent: April 5, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
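One way to read the loading-curve idea, as an illustration only: a continuous curve maps estimated cognitive load to an amount of sensory-stimulation change, here via simple interpolation over assumed calibration points.

```python
import numpy as np

# Hypothetical loading curve: a continuous mapping from a cognitive load
# estimate (0..1) to an intensity of sensory stimulation change.
LOAD_POINTS     = np.array([0.0, 0.3, 0.6, 1.0])  # assumed calibration points
STIMULUS_POINTS = np.array([0.8, 0.4, 0.1, 0.0])  # more stimulation when under-loaded

def stimulation_change(cognitive_load: float) -> float:
    """Interpolate along the loading curve to pick a stimulation adjustment."""
    return float(np.interp(cognitive_load, LOAD_POINTS, STIMULUS_POINTS))

print(stimulation_change(0.15))  # under-loaded occupant -> raise stimulation
print(stimulation_change(0.9))   # heavily loaded occupant -> minimal change
```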
-
Publication number: 20210339759
Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data and mode data for the vehicle. The updated information is provided for vehicle control. The mode data is configurable based on a mode setting. The mode data is weighted based on additional information.
Type: Application
Filed: July 19, 2021
Publication date: November 4, 2021
Applicant: Affectiva, Inc.
Inventors: Maha Amr Mohamed Fouad, Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
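An illustrative sketch of mapping cognitive state samples onto route segments and weighting them by a configurable mode setting; the mode weights and per-segment averaging are assumptions.

```python
# Hypothetical route annotation: cognitive state samples are mapped to route
# segments and combined using a mode-dependent weight.
MODE_WEIGHTS = {"comfort": 1.0, "fastest": 0.25}  # assumed configuration

def annotate_route(samples, mode="comfort"):
    """samples: list of (segment_id, stress_score); returns weighted mean per segment."""
    weight = MODE_WEIGHTS[mode]
    totals, counts = {}, {}
    for segment, stress in samples:
        totals[segment] = totals.get(segment, 0.0) + weight * stress
        counts[segment] = counts.get(segment, 0) + 1
    return {seg: totals[seg] / counts[seg] for seg in totals}

samples = [("seg_12", 0.8), ("seg_12", 0.6), ("seg_13", 0.1)]
print(annotate_route(samples, mode="comfort"))  # seg_12 flagged as stressful
```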
-
Publication number: 20210279514
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine; the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Application
Filed: May 24, 2021
Publication date: September 9, 2021
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Publication number: 20210188291
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Application
Filed: December 11, 2020
Publication date: June 24, 2021
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 11017250
Abstract: Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. Further images are analyzed using the multilayered analysis engine; the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis.
Type: Grant
Filed: March 2, 2018
Date of Patent: May 25, 2021
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
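Where this abstract adds voice data to augment the cognitive state analysis, one simple (assumed) approach is late fusion of image-based and voice-based state scores; the 0.7/0.3 weighting below is purely illustrative.

```python
# Hypothetical late fusion of the image-based cognitive state analysis with
# a voice-based estimate; the weighting is an assumption.
def augment_with_voice(image_scores: dict, voice_scores: dict,
                       image_weight: float = 0.7) -> dict:
    states = set(image_scores) | set(voice_scores)
    return {s: image_weight * image_scores.get(s, 0.0)
               + (1.0 - image_weight) * voice_scores.get(s, 0.0)
            for s in states}

image_scores = {"calm": 0.6, "frustrated": 0.4}
voice_scores = {"calm": 0.2, "frustrated": 0.8}
print(augment_with_voice(image_scores, voice_scores))
```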
-
Patent number: 10922567
Abstract: Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible light-based images and near-infrared based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.
Type: Grant
Filed: March 1, 2019
Date of Patent: February 16, 2021
Assignee: Affectiva, Inc.
Inventors: Abdelrahman N. Mahmoud, Rana el Kaliouby, Seyedmohammad Mavadati, Panu James Turcot
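The "modified based on the near-infrared image content" step could plausibly be realized as fine-tuning; the sketch below freezes an assumed visible-light feature extractor and retrains only the classifier head on NIR images. Layer sizes, the channel-replication trick, and the training loop are all illustrative, not the patented method.

```python
import torch
import torch.nn as nn

# Assumed visible-light model: a frozen feature extractor plus a classifier head
# that is fine-tuned on near-infrared images.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier_head = nn.Linear(8, 3)          # three cognitive state classes (assumed)

for p in feature_extractor.parameters():   # keep visible-light features fixed
    p.requires_grad = False

optimizer = torch.optim.Adam(classifier_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Replicate the single NIR channel to match the visible-light input layout.
nir_images = torch.rand(16, 1, 64, 64).repeat(1, 3, 1, 1)
labels = torch.randint(0, 3, (16,))
for _ in range(5):                          # brief fine-tuning loop
    optimizer.zero_grad()
    logits = classifier_head(feature_extractor(nir_images))
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
print(loss.item())
```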
-
Patent number: 10922566
Abstract: Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.
Type: Grant
Filed: May 9, 2018
Date of Patent: February 16, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 10911829
Abstract: Techniques are disclosed for vehicle video recommendation via affect. A first media presentation is played to a vehicle occupant. The playing is accomplished using a video client. Cognitive state data for the vehicle occupant is captured, where the cognitive state data includes video facial data from the vehicle occupant during the first media presentation playing. The first media presentation is ranked, on an analysis server, relative to another media presentation based on the cognitive state data which was captured for the vehicle occupant. The ranking is determined for the vehicle occupant. The cognitive state data which was captured for the vehicle occupant is correlated, on the analysis server, to cognitive state data collected from other people who experienced the first media presentation. One or more further media presentation selections are recommended to the vehicle occupant, based on the ranking and the correlating.
Type: Grant
Filed: May 10, 2019
Date of Patent: February 2, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
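A minimal sketch of the ranking and correlation steps: presentations are ordered by an engagement score derived from facial data, and the occupant's reaction series is correlated with other viewers' reactions to the same presentation. The engagement scores and the choice of Pearson correlation are assumptions.

```python
import numpy as np

# Hypothetical ranking/correlation step for affect-based media recommendation.
def rank_presentations(engagement_by_media: dict):
    """Order media presentations by engagement score, highest first."""
    return sorted(engagement_by_media, key=engagement_by_media.get, reverse=True)

def correlate(occupant_series, other_series):
    """Pearson correlation between the occupant and other viewers."""
    return float(np.corrcoef(occupant_series, other_series)[0, 1])

engagement = {"clip_a": 0.74, "clip_b": 0.41}
print(rank_presentations(engagement))               # ['clip_a', 'clip_b']
print(correlate([0.2, 0.5, 0.9], [0.1, 0.6, 0.8]))  # similar reactions -> high value
```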
-
Patent number: 10897650
Abstract: Content manipulation uses cognitive states for vehicle content recommendation. Images are obtained of a vehicle occupant using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the one or more images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles.
Type: Grant
Filed: December 6, 2018
Date of Patent: January 19, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
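An illustrative sketch of correlating a cognitive state with a content ingestion history to order recommendations; tagging each past selection with the state it was consumed in is an assumption made for the example.

```python
# Hypothetical recommendation step: past selections are tagged with the
# cognitive state they were ingested in, and selections whose tag matches
# the occupant's current state are recommended first.
def recommend(current_state: str, ingestion_history):
    """ingestion_history: list of (selection, cognitive_state_during_playback)."""
    matches = [sel for sel, state in ingestion_history if state == current_state]
    others  = [sel for sel, state in ingestion_history if state != current_state]
    return matches + others

history = [("podcast_news", "alert"), ("playlist_chill", "relaxed"),
           ("audiobook", "relaxed")]
print(recommend("relaxed", history))  # relaxed-state selections first
```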
-
Publication number: 20200394428
Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
Type: Application
Filed: August 28, 2020
Publication date: December 17, 2020
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld