Patents by Inventor Panu James Turcot
Panu James Turcot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12076149
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine, and the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Grant
Filed: May 24, 2021
Date of Patent: September 3, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
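The kind of engine this abstract describes, convolutional layers feeding hidden layers that score facial images for cognitive state, can be illustrated with a minimal PyTorch sketch. This is not the patented implementation; the class name, layer sizes, and four-state output are assumptions made for illustration.

```python
# Minimal sketch (not the patented implementation) of a "multilayered
# analysis engine": convolutional layers followed by hidden fully
# connected layers that map an in-cabin facial image to cognitive-state
# scores. Layer sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class MultilayeredAnalysisEngine(nn.Module):
    def __init__(self, num_cognitive_states: int = 4):
        super().__init__()
        # Convolutional layers extract facial features from the image.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Hidden layers turn the pooled features into cognitive-state scores.
        self.hidden = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_cognitive_states),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.hidden(self.conv(images))

# Evaluate a batch of further images (e.g. from a second vehicle).
engine = MultilayeredAnalysisEngine()
further_images = torch.randn(8, 3, 64, 64)        # stand-in for camera frames
cognitive_state_scores = engine(further_images)   # one score vector per image
```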
-
Publication number: 20240190206
Abstract: Disclosed embodiments provide techniques for solar load usage in a vehicular environment. One or more processors are used to detect at least one individual in a vehicle. Sunlight contact is detected on each individual in the vehicle. A three-dimensional model of each individual in the vehicle is dynamically developed to estimate the total sunlit surface of the individual. A sunlight intensity metric is calculated based on detected sunlight. A shade intensity metric is also calculated and compared to the sunlight intensity metric. The thermal load for each individual in the vehicle is determined based on the sunlight intensity metric. The thermal load determination takes the time of day, length of exposure, and the location of sunlight into account. Climate control within the vehicle is adjusted to compensate for the determined thermal load. Climate control can be adjusted for each individual based on the thermal load determined for the individual.
Type: Application
Filed: December 7, 2023
Publication date: June 13, 2024
Applicant: Affectiva, Inc.
Inventor: Panu James Turcot
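As a rough illustration of the mechanism described, the sketch below combines a sunlight intensity metric, a shade intensity metric, sunlit surface area, and exposure time into a per-occupant thermal load and a per-seat climate offset. The constants, field names, and linear form are assumptions made for illustration, not values from the application.

```python
# Illustrative, hedged sketch only: a toy per-occupant thermal-load estimate.
# The weighting constants and the linear form are assumptions, not from the patent.
from dataclasses import dataclass

@dataclass
class Occupant:
    sunlit_fraction: float      # fraction of the modeled body surface in sunlight
    exposure_minutes: float     # how long that surface has been exposed

def thermal_load(occ: Occupant, sunlight_intensity: float, shade_intensity: float) -> float:
    """Return a unitless thermal-load score for one occupant."""
    # Net intensity: sunlight metric compared against the shade metric.
    net_intensity = max(sunlight_intensity - shade_intensity, 0.0)
    # Load grows with sunlit surface area and with length of exposure.
    return net_intensity * occ.sunlit_fraction * (1.0 + occ.exposure_minutes / 60.0)

def climate_offset(load: float, gain: float = -0.5) -> float:
    """Map a thermal load to a per-seat temperature offset (degrees C)."""
    return gain * load

occupants = [Occupant(sunlit_fraction=0.6, exposure_minutes=20),
             Occupant(sunlit_fraction=0.1, exposure_minutes=20)]
for seat, occ in enumerate(occupants):
    load = thermal_load(occ, sunlight_intensity=1.0, shade_intensity=0.3)
    print(f"seat {seat}: thermal load {load:.2f}, temp offset {climate_offset(load):+.1f} C")
```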
-
Patent number: 11935281
Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
Type: Grant
Filed: July 14, 2020
Date of Patent: March 19, 2024
Assignee: Affectiva, Inc.
Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
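The per-seat scanning step can be sketched as running a face detector once over each fixed seating region of the cabin image. Everything below is an assumption for illustration: the seat regions, the detect_face() stand-in, and the placeholder geometry-aware view manipulation are not specified by the patent.

```python
# Sketch under stated assumptions; any face-detection model can back detect_face().
from typing import Callable, Optional, Tuple
import numpy as np

# Fixed crop regions (x, y, w, h) for each seating location in the cabin image.
SEAT_REGIONS = {
    "driver":    (0,   0, 320, 240),
    "passenger": (320, 0, 320, 240),
}

def scan_seats(frame: np.ndarray,
               detect_face: Callable[[np.ndarray], Optional[Tuple[int, int, int, int]]]):
    """Run facial detection once per seating location and return hits."""
    detections = {}
    for seat, (x, y, w, h) in SEAT_REGIONS.items():
        crop = frame[y:y + h, x:x + w]
        box = detect_face(crop)          # plug in any trained facial detection model
        if box is not None:
            detections[seat] = box
    return detections

def manipulate_view(face_crop: np.ndarray, seat: str) -> np.ndarray:
    """Placeholder for geometry-aware view manipulation (e.g. a per-seat warp)."""
    # A real system would apply a perspective transform derived from the camera's
    # position relative to this seat; here the crop is returned unchanged.
    return face_crop
```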
-
Patent number: 11887383
Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
Type: Grant
Filed: August 28, 2020
Date of Patent: January 30, 2024
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
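The distance, interaction-level, and left-behind logic lends itself to a short sketch. The class labels, the 0.8 m reachability threshold, and the interaction rule below are illustrative assumptions, not thresholds from the patent.

```python
# Toy sketch of the object-management logic described above (not the patented method).
from dataclasses import dataclass
import math

@dataclass
class DetectedObject:
    label: str          # e.g. "cell phone", "purse", "child", "pet"
    position: tuple     # (x, y) in cabin coordinates, metres

@dataclass
class Occupant:
    position: tuple

REACHABLE_DISTANCE_M = 0.8   # assumed reachability threshold

def interaction_level(occupant: Occupant, obj: DetectedObject) -> str:
    """Estimate interaction from the occupant-to-object distance."""
    dist = math.dist(occupant.position, obj.position)
    return "within reach" if dist <= REACHABLE_DISTANCE_M else "out of reach"

def left_behind(objects_after_exit: list[DetectedObject]) -> list[str]:
    """Objects still detected once the occupant has left the vehicle."""
    return [obj.label for obj in objects_after_exit]

occupant = Occupant(position=(0.0, 0.0))
phone = DetectedObject(label="cell phone", position=(0.3, 0.2))
print(interaction_level(occupant, phone))   # -> "within reach"
print(left_behind([phone]))                 # -> ["cell phone"]
```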
-
Publication number: 20230419642
Abstract: Machine learning is used for a neural network multi-attribute facial encoder and decoder. A facial image is obtained for processing on a neural network and is encoded into two or more orthogonal feature subspaces. The encoding is performed by a single, trained encoder. The encoder is a downsampling encoder, orthogonality of the feature subspaces is established using metrics, and orthogonality enables separability of the feature subspaces. Embeddings are generated for two or more attributes of the facial image, wherein the embeddings are generated using one or more copies of the single, trained encoder. The embeddings comprise a vector representation of the two or more attributes of the facial image. A neural network is trained for a multi-task objective, wherein the training is based on the embeddings. The embeddings replace and augment training images. The multi-task objective provides identification of the two or more attributes of the facial image.
Type: Application
Filed: June 22, 2023
Publication date: December 28, 2023
Applicant: Smart Eye International Inc.
Inventors: Ajjen Das Joshi, Sandipan Banerjee, Panu James Turcot
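A minimal sketch of this idea, and not Smart Eye's implementation, is a single downsampling encoder whose output is projected into two embedding subspaces, with a cosine-style penalty that encourages the subspaces to stay orthogonal. The dimensions and the particular penalty are assumptions.

```python
# Hedged sketch: one shared downsampling encoder, two attribute embeddings,
# and a simple orthogonality metric between them. Dimensions are assumptions.
import torch
import torch.nn as nn

class MultiAttributeEncoder(nn.Module):
    def __init__(self, dim_per_attr: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(                 # single, shared downsampling encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head_a = nn.Linear(32, dim_per_attr)      # e.g. identity subspace
        self.head_b = nn.Linear(32, dim_per_attr)      # e.g. expression subspace

    def forward(self, x):
        feats = self.backbone(x)
        return self.head_a(feats), self.head_b(feats)

def orthogonality_penalty(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    """Mean squared cosine similarity between the two embedding sets."""
    a = nn.functional.normalize(emb_a, dim=1)
    b = nn.functional.normalize(emb_b, dim=1)
    return ((a * b).sum(dim=1) ** 2).mean()

encoder = MultiAttributeEncoder()
images = torch.randn(4, 3, 64, 64)
emb_a, emb_b = encoder(images)
loss = orthogonality_penalty(emb_a, emb_b)   # would be added to the multi-task loss
```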
-
Patent number: 11823055
Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
Type: Grant
Filed: March 30, 2020
Date of Patent: November 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
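One way to picture the "mood for the vehicle" part of the human perception metric is a simple aggregation of per-occupant moods; the majority-vote rule and mood labels below are assumptions made for illustration, not the patented method.

```python
# Toy sketch: combine per-occupant moods into a vehicle mood before the
# human perception metric is promoted to a using application.
from collections import Counter

def vehicle_mood(occupant_moods: list[str]) -> str:
    """Most common occupant mood becomes the mood for the vehicle (assumed rule)."""
    return Counter(occupant_moods).most_common(1)[0][0]

def perception_metric(occupant_moods: list[str]) -> dict:
    return {"occupant_moods": occupant_moods, "vehicle_mood": vehicle_mood(occupant_moods)}

print(perception_metric(["calm", "drowsy", "calm"]))   # promoted to the using application
```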
-
Patent number: 11704574
Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using the trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
Type: Grant
Filed: April 20, 2020
Date of Patent: July 18, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
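"Trained simultaneously" can be illustrated with a model whose audio and video branches are updated in the same backward pass. This is a hedged sketch, not the patented architecture; the branch shapes and fusion-by-concatenation choice are assumptions.

```python
# Sketch of simultaneous audio/video weight training under stated assumptions.
import torch
import torch.nn as nn

class MultimodalCognitiveStateNet(nn.Module):
    def __init__(self, num_states: int = 4):
        super().__init__()
        self.video_branch = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # 128 audio features
        self.classifier = nn.Linear(64 + 64, num_states)

    def forward(self, video, audio):
        fused = torch.cat([self.video_branch(video), self.audio_branch(audio)], dim=1)
        return self.classifier(fused)

model = MultimodalCognitiveStateNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
video = torch.randn(8, 3, 32, 32)    # contemporaneous video frames
audio = torch.randn(8, 128)          # contemporaneous audio features
labels = torch.randint(0, 4, (8,))

# One training step: a single backward pass updates the audio weights and the
# video weights at the same time, i.e. they are trained simultaneously.
loss = nn.functional.cross_entropy(model(video, audio), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```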
-
Patent number: 11700420
Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
Type: Grant
Filed: June 12, 2020
Date of Patent: July 11, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra
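One simple way to picture rolling per-frame emotional intensity metrics into an engagement score, and using it to shorten low-engagement portions, is sketched below. The averaging rule and the 0.4 threshold are assumptions made for illustration, not values from the patent.

```python
# Illustrative only: engagement from emotional intensity, then segment trimming.
def engagement_score(intensity_by_frame: list[float]) -> float:
    """Engagement for a media segment, based on emotional intensity metrics."""
    return sum(intensity_by_frame) / len(intensity_by_frame)

def trim_low_engagement(segments: list[dict], threshold: float = 0.4) -> list[dict]:
    """Drop segments whose engagement falls below the threshold."""
    return [seg for seg in segments if engagement_score(seg["intensity"]) >= threshold]

segments = [
    {"name": "intro",        "intensity": [0.2, 0.3, 0.2]},
    {"name": "brand reveal", "intensity": [0.7, 0.8, 0.9]},
]
print([seg["name"] for seg in trim_low_engagement(segments)])   # -> ['brand reveal']
```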
-
Patent number: 11657288
Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
Type: Grant
Filed: June 8, 2020
Date of Patent: May 23, 2023
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
-
Patent number: 11587357
Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
Type: Grant
Filed: March 16, 2020
Date of Patent: February 21, 2023
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 11410438
Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
Type: Grant
Filed: November 8, 2019
Date of Patent: August 9, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
-
Patent number: 11318949
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Grant
Filed: December 11, 2020
Date of Patent: May 3, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
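The blink-event to blink-rate to drowsiness pipeline can be sketched from per-frame closed-eye classifier output. This is an illustration under assumptions: the 30 fps frame rate, the baseline of 15 blinks per minute, the drowsiness formula, and the demographic compensation factor are not taken from the patent.

```python
# Hedged sketch: blink events from closed-eye frames, then a drowsiness metric.
def blink_events(eyes_closed: list[bool]) -> int:
    """Count blink events: transitions from open to closed across frames."""
    return sum(1 for prev, cur in zip(eyes_closed, eyes_closed[1:]) if cur and not prev)

def blink_rate_per_minute(eyes_closed: list[bool], fps: float = 30.0) -> float:
    duration_min = len(eyes_closed) / fps / 60.0
    return blink_events(eyes_closed) / duration_min if duration_min else 0.0

def drowsiness_metric(rate_per_min: float, demographic_factor: float = 1.0) -> float:
    """Higher score for unusually low blink rates, compensated per demographic."""
    baseline = 15.0 * demographic_factor   # assumed baseline blinks/minute
    return max(0.0, (baseline - rate_per_min) / baseline)

# Per-frame classifier output for a short clip: True means eyes closed.
frames = [False] * 100 + [True] * 3 + [False] * 200
rate = blink_rate_per_minute(frames)
print(f"blink rate {rate:.1f}/min, drowsiness {drowsiness_metric(rate):.2f}")
```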
-
Patent number: 11292477
Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
Type: Grant
Filed: June 2, 2019
Date of Patent: April 5, 2022
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
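A toy version of mapping a cognitive state onto a continuous loading curve, and choosing a sensory-stimulation change from the mapped value, is shown below. The breakpoints, interpolation, and actions are assumptions for illustration, not the patented loading curve.

```python
# Hedged sketch: piecewise-linear loading curve and a stimulation decision.
import bisect

# (cognitive-state score, cognitive load) breakpoints; values are assumed.
LOADING_CURVE = [(0.0, 0.1), (0.5, 0.4), (0.8, 0.7), (1.0, 1.0)]

def cognitive_load(score: float) -> float:
    """Linearly interpolate the loading curve at the given score."""
    xs = [x for x, _ in LOADING_CURVE]
    i = min(bisect.bisect_left(xs, score), len(xs) - 1)
    if i == 0:
        return LOADING_CURVE[0][1]
    (x0, y0), (x1, y1) = LOADING_CURVE[i - 1], LOADING_CURVE[i]
    return y0 + (y1 - y0) * (score - x0) / (x1 - x0)

def sensory_adjustment(load: float) -> str:
    """Pick a vehicle-occupant stimulation change from the estimated load."""
    if load > 0.8:
        return "reduce stimulation: dim displays, lower audio"
    if load < 0.2:
        return "increase stimulation: brighten cabin, raise alert tones"
    return "no change"

print(sensory_adjustment(cognitive_load(0.9)))   # -> reduce stimulation...
```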
-
Publication number: 20210339759
Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data and mode data for the vehicle. The updated information is provided for vehicle control. The mode data is configurable based on a mode setting. The mode data is weighted based on additional information.
Type: Application
Filed: July 19, 2021
Publication date: November 4, 2021
Applicant: Affectiva, Inc.
Inventors: Maha Amr Mohamed Fouad, Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
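The mapping of cognitive state data to route locations, weighted by vehicle mode, can be sketched as a per-segment aggregation. The segment identifiers, mode weights, and averaging rule below are assumptions for illustration, not the weighting from the application.

```python
# Illustrative sketch: cognitive-state samples keyed to route locations,
# weighted by vehicle mode, and summarized per segment for vehicle control.
from collections import defaultdict
from statistics import mean

samples = [
    # (road segment id, occupant cognitive-state score, vehicle mode)
    ("segment-12", 0.8, "autonomous"),
    ("segment-12", 0.7, "autonomous"),
    ("segment-13", 0.2, "manual"),
]

MODE_WEIGHT = {"autonomous": 1.0, "manual": 0.5}   # assumed mode-data weighting

by_segment = defaultdict(list)
for segment, score, mode in samples:
    by_segment[segment].append(score * MODE_WEIGHT[mode])

route_info = {segment: mean(scores) for segment, scores in by_segment.items()}
print(route_info)   # per-segment summary used to update travel-route information
```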
-
Publication number: 20210279514
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine, and the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating of the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
Type: Application
Filed: May 24, 2021
Publication date: September 9, 2021
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Patent number: 11073899
Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
Type: Grant
Filed: September 30, 2019
Date of Patent: July 27, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
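The background-process monitoring and notification flow can be pictured with a thread that watches a stream of expression observations and posts a notification to a separate consumer when the target multimodal expression occurs. The queue/threading choices and expression labels below are assumptions, not the patented mechanism.

```python
# Hedged sketch of the notification flow only.
import queue
import threading

observations = queue.Queue()    # (image cue, audio cue) pairs from the local device
notifications = queue.Queue()   # stands in for the distinct device being notified

def background_monitor(target=("smile", "laugh")):
    """Runs apart from the individual; watches for the defined expression."""
    while True:
        image_cue, audio_cue = observations.get()
        if (image_cue, audio_cue) == target:   # multimodal match: image data + audio data
            notifications.put("expression identified: " + "+".join(target))
            break

worker = threading.Thread(target=background_monitor, daemon=True)
worker.start()
observations.put(("neutral", "silence"))
observations.put(("smile", "laugh"))
worker.join()
print(notifications.get())
```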
-
Patent number: 11056225
Abstract: Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment. The interactive digital environment can be a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants. Results of the analyzing of the emotional content within the group of images are provided to a second set of participants within the group of participants. The analyzing of emotional content includes identifying an image of an individual, identifying a face of the individual, determining facial regions, and performing content evaluation based on applying image classifiers.
Type: Grant
Filed: February 28, 2017
Date of Patent: July 6, 2021
Assignee: Affectiva, Inc.
Inventors: Rana el Kaliouby, James Henry Deal, Jr., Forest Jay Handford, Panu James Turcot, Gabriele Zijderveld
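The sharing step, analyzing one set of participants and delivering the results to a second set, can be sketched as below. The analyze_image() stand-in and the payload shape are assumptions for illustration; any trained image classifier could back the analysis.

```python
# Toy sketch of the result-sharing step only (not the patented system).
from typing import Callable, Dict

def share_emotion_results(images: Dict[str, bytes],
                          analyzed_participants: set,
                          viewing_participants: set,
                          analyze_image: Callable[[bytes], dict]) -> Dict[str, dict]:
    """Analyze the first set's images and build a payload for the second set."""
    results = {pid: analyze_image(img)
               for pid, img in images.items() if pid in analyzed_participants}
    return {viewer: results for viewer in viewing_participants}

def stand_in_classifier(img: bytes) -> dict:
    # A real classifier would locate the face, determine facial regions, and
    # evaluate emotional content; this stand-in returns fixed scores.
    return {"engagement": 0.7, "valence": 0.2}

payload = share_emotion_results({"p1": b"frame-1", "p2": b"frame-2"},
                                analyzed_participants={"p1"},
                                viewing_participants={"p2"},
                                analyze_image=stand_in_classifier)
print(payload)   # {'p2': {'p1': {'engagement': 0.7, 'valence': 0.2}}}
```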
-
Publication number: 20210188291
Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
Type: Application
Filed: December 11, 2020
Publication date: June 24, 2021
Applicant: Affectiva, Inc.
Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
-
Patent number: 11017250
Abstract: Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. Further images are analyzed using the multilayered analysis engine, and the evaluation of those images provides the cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis.
Type: Grant
Filed: March 2, 2018
Date of Patent: May 25, 2021
Assignee: Affectiva, Inc.
Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
-
Publication number: 20210125065
Abstract: Deep learning in situ retraining uses deep learning nodes to provide a human perception state on a user device. A plurality of images including facial data is obtained for human perception state analysis. A server device trains a set of weights on a set of layers for deep learning that implements the analysis, where the training is performed with a first set of training data. A subset of weights is deployed on deep learning nodes on a user device, where the deploying enables at least part of the human perception state analysis. An additional set of weights is retrained on the user device, where the additional set of weights is trained using a second set of training data. A human perception state is provided on the user device, based on the subset of the set of weights, the additional set of weights, and input images obtained by the user device.
Type: Application
Filed: October 23, 2020
Publication date: April 29, 2021
Applicant: Affectiva, Inc.
Inventors: Panu James Turcot, Seyedmohammad Mavadati
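A minimal sketch of split training of this kind, and not the method from the application itself, is a server-trained backbone whose weights are deployed and frozen on the device, with a small additional head retrained there on local data. The layer shapes and two-stage split are assumptions for illustration.

```python
# Hedged sketch: server-side training, on-device deployment of a weight subset,
# and in situ retraining of an additional set of weights.
import torch
import torch.nn as nn

# --- server side: train the full set of weights with a first set of training data ---
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
server_head = nn.Linear(64, 4)
server_data = torch.randn(32, 3, 32, 32)
server_labels = torch.randint(0, 4, (32,))
opt = torch.optim.Adam(list(backbone.parameters()) + list(server_head.parameters()), lr=1e-3)
loss = nn.functional.cross_entropy(server_head(backbone(server_data)), server_labels)
opt.zero_grad()
loss.backward()
opt.step()

# --- user device: deploy the backbone subset of weights, frozen ---
for p in backbone.parameters():
    p.requires_grad_(False)

# Additional set of weights, retrained in situ with a second set of training data.
device_head = nn.Linear(64, 4)
device_data = torch.randn(16, 3, 32, 32)        # images obtained by the user device
device_labels = torch.randint(0, 4, (16,))
device_opt = torch.optim.Adam(device_head.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(device_head(backbone(device_data)), device_labels)
device_opt.zero_grad()
loss.backward()
device_opt.step()

# Inference on the device combines the deployed subset and the retrained weights.
perception_state = device_head(backbone(torch.randn(1, 3, 32, 32))).argmax(dim=1)
```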