Patents by Inventor Seyedmohammad Mavadati
Seyedmohammad Mavadati has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12076149
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine, and the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Grant
Filed: May 24, 2021
Date of Patent: September 3, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Patent number: 11704574
Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
Type: Grant
Filed: April 20, 2020
Date of Patent: July 18, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
-
Patent number: 11587357
Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 11511757
Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Grant
Filed: April 20, 2020
Date of Patent: November 29, 2022
Assignee: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
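The profile comparison this abstract describes can be sketched in a few lines. The representation below (mean cognitive state score per trip-duration minute) and the deviation threshold are illustrative assumptions, not the patented method:

```python
# Hypothetical sketch: learn a cognitive state profile keyed by trip duration
# time, then compare fresh data against the learned norm. All names and the
# 0.3 threshold are assumptions for illustration only.

def learn_profile(samples):
    """samples: list of (trip_minute, score). Returns mean score per minute bucket."""
    buckets = {}
    for minute, score in samples:
        buckets.setdefault(minute, []).append(score)
    return {m: sum(v) / len(v) for m, v in buckets.items()}

def deviates_from_profile(profile, trip_minute, score, threshold=0.3):
    """True when a new observation differs from the learned norm for that minute."""
    baseline = profile.get(trip_minute)
    if baseline is None:
        return False  # no norm was learned for this point in the trip
    return abs(score - baseline) > threshold
```

A deviation flagged this way could then drive the vehicle manipulation step (for example, suggesting a break), though the abstract does not specify the comparison function.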
-
Patent number: 11430260
Abstract: Techniques for performing viewing verification using a plurality of classifiers are disclosed. Images of an individual may be obtained concurrently with an electronic display presenting one or more images. Image classifiers for facial and head pose analysis may be obtained. The images of the individual may be analyzed to identify a face of the individual in one of the plurality of images. A viewing verification metric may be calculated using the image classifiers, and a verified viewing duration of the screen images by the individual may be calculated based on the plurality of images and the analyzing. Viewing verification can involve determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. A viewing verification metric can be generated in order to determine a level of interest of the individual in particular media and images.
Type: Grant
Filed: December 24, 2019
Date of Patent: August 30, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Nicholas Langeveld, Daniel McDuff, Seyedmohammad Mavadati
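The three-condition check in this abstract (in front of the screen, facing it, gazing at it) lends itself to a small sketch. The per-frame classifier outputs, field names, and frame rate below are assumptions; the patent does not publish its implementation:

```python
# Hypothetical sketch of a verified-viewing-duration calculation from
# per-frame classifier outputs. Field names and fps are illustrative.

def verified_viewing_duration(frames, fps=30):
    """Seconds during which all three viewing conditions hold simultaneously."""
    verified = sum(
        1 for f in frames
        if f["face_present"] and f["facing_screen"] and f["gazing_at_screen"]
    )
    return verified / fps

def viewing_verification_metric(frames, fps=30):
    """Fraction of the presentation time that was verified viewing."""
    total = len(frames) / fps
    return verified_viewing_duration(frames, fps) / total if total else 0.0
```

The resulting fraction could serve as the level-of-interest signal the abstract mentions, with higher values indicating more sustained attention to the media.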
-
Patent number: 11393133
Abstract: A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
Type: Grant
Filed: March 19, 2020
Date of Patent: July 19, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, May Amr Fouad, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Daniel McDuff
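The summing step described here can be illustrated with a minimal sketch. The action-unit-to-emoji mapping below is invented for the example (the real mapping and classifier outputs are not published in the abstract):

```python
# Hypothetical sketch: pick a representative emoji by summing classifier
# confidence values over the facial action units (AUs) associated with each
# emoji. The AU groupings below are illustrative assumptions.

EMOJI_AUS = {
    "smile": ["AU6", "AU12"],          # cheek raiser + lip corner puller
    "sad": ["AU1", "AU4", "AU15"],     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": ["AU1", "AU2", "AU26"],  # raised brows + jaw drop
}

def select_emoji(au_confidences, emoji_set=EMOJI_AUS):
    """Return the emoji label whose action units have the highest summed confidence."""
    def score(label):
        return sum(au_confidences.get(au, 0.0) for au in emoji_set[label])
    return max(emoji_set, key=score)
```

For a face scoring high on AU6 and AU12, the summed confidences favor the smile emoji over the others.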
-
Patent number: 11318949
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Grant
Filed: December 11, 2020
Date of Patent: May 3, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
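The blink-event grouping and compensated blink-rate computation can be sketched as follows. The function names, frame rate, and the scalar demographic compensation factor are assumptions made for illustration, not the patented implementation:

```python
# Hypothetical sketch of the blink-rate pipeline: group consecutive
# eyes-closed frames into blink events, then compute a compensated
# blinks-per-minute drowsiness metric.

def detect_blink_events(eyes_closed, fps=30):
    """eyes_closed: per-frame booleans from a blink classifier.
    Returns a list of (start_frame, duration_seconds) blink events."""
    events, start = [], None
    for i, closed in enumerate(eyes_closed):
        if closed and start is None:
            start = i
        elif not closed and start is not None:
            events.append((start, (i - start) / fps))
            start = None
    if start is not None:  # video ends mid-blink
        events.append((start, (len(eyes_closed) - start) / fps))
    return events

def drowsiness_metric(events, clip_seconds, demographic_factor=1.0):
    """Blinks per minute, scaled by a demographic baseline factor (assumed scalar)."""
    blink_rate = len(events) / (clip_seconds / 60.0)
    return blink_rate * demographic_factor
```

The per-event duration returned by `detect_blink_events` corresponds to the blink-duration evaluation the abstract mentions; long closures could be weighted more heavily in a fuller implementation.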
-
Publication number: 20210279514
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine, and the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Application
Filed: May 24, 2021
Publication date: September 9, 2021
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Patent number: 11073899
Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
Type: Grant
Filed: September 30, 2019
Date of Patent: July 27, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
-
Publication number: 20210188291
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Application
Filed: December 11, 2020
Publication date: June 24, 2021
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 11017250
Abstract: Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. Further images are analyzed using the multilayered analysis engine, and the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis.
Type: Grant
Filed: March 2, 2018
Date of Patent: May 25, 2021
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Publication number: 20210125065
Abstract: Deep learning in situ retraining uses deep learning nodes to provide a human perception state on a user device. A plurality of images including facial data is obtained for human perception state analysis. A server device trains a set of weights on a set of layers for deep learning that implements the analysis, where the training is performed with a first set of training data. A subset of weights is deployed on deep learning nodes on a user device, where the deploying enables at least part of the human perception state analysis. An additional set of weights is retrained on the user device, where the additional set of weights is trained using a second set of training data. A human perception state based on the subset of the set of weights, the additional set of weights, and input images obtained by the user device is provided on the user device.
Type: Application
Filed: October 23, 2020
Publication date: April 29, 2021
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Seyedmohammad Mavadati
-
Patent number: 10922566
Abstract: Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.
Type: Grant
Filed: May 9, 2018
Date of Patent: February 16, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 10922567
Abstract: Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible light-based images and near-infrared based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.
Type: Grant
Filed: March 1, 2019
Date of Patent: February 16, 2021
Assignee: Affectiva, Inc.
Inventors: Abdelrahman N. Mahmoud, Rana el Kaliouby, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 10867197
Abstract: Drowsiness mental state analysis is performed using blink rate. Video is obtained of an individual or group. The individual or group can be within a vehicle. The video is analyzed to detect a blink event based on a classifier, where the blink event is determined by identifying that eyes are closed for a frame in the video. A blink duration is evaluated for the blink event. Blink-rate information is determined using the blink event and one or more other blink events. The evaluating can include evaluating blinking for a group of people. The blink-rate information is compensated to determine drowsiness, based on the temporal distribution mapping of the blink-rate information. Mental states of the individual are inferred for the blink event based on the blink event, the blink duration of the individual, and the blink-rate information that was compensated. The compensating is biased based on demographic information of the individual.
Type: Grant
Filed: November 15, 2019
Date of Patent: December 15, 2020
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 10796176
Abstract: Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
Type: Grant
Filed: October 29, 2018
Date of Patent: October 6, 2020
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Gabriele Zijderveld
-
Patent number: 10779761
Abstract: Mental state analysis uses sporadic collection of affect data within a vehicle. Mental state data of a vehicle occupant is collected within a vehicle on an intermittent basis. The mental state data includes facial image data, and the facial image data is collected intermittently across a plurality of devices within the vehicle. The mental state data further includes audio information. Processors are used to interpolate mental state data in between the collecting which is intermittent. Analysis of the mental state data is obtained on the vehicle occupant, where the analysis of the mental state data includes analyzing the facial image data. An output is rendered based on the analysis of the mental state data. The rendering includes communicating by a virtual assistant, communicating with a navigation component, and manipulating the vehicle. The mental state data is translated into an emoji.
Type: Grant
Filed: December 3, 2018
Date of Patent: September 22, 2020
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
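The interpolation step this abstract mentions can be sketched simply. Linear interpolation between timestamped samples is an assumption for illustration; the patent does not disclose the interpolation method it uses:

```python
# Hypothetical sketch: fill in mental state scores between intermittent
# collections by linear interpolation over timestamps.

def interpolate_state(samples, t):
    """samples: time-sorted list of (timestamp_seconds, score) pairs.
    Returns the score at time t, interpolated between the nearest samples;
    times outside the sampled range clamp to the nearest endpoint."""
    if t <= samples[0][0]:
        return samples[0][1]
    if t >= samples[-1][0]:
        return samples[-1][1]
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
```

This gives a continuous estimate of the occupant's state even though the devices only collect data sporadically, which is what allows the downstream rendering to run at any moment.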
-
Publication number: 20200239005
Abstract: Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
Type: Application
Filed: April 20, 2020
Publication date: July 30, 2020
Applicant: Affectiva, Inc.
Inventors: Gabriele Zijderveld, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Publication number: 20200242383
Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
Type: Application
Filed: April 20, 2020
Publication date: July 30, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
-
Publication number: 20200226355
Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
Type: Application
Filed: March 16, 2020
Publication date: July 16, 2020
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot