Patents by Inventor Panu James Turcot

Panu James Turcot has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11935281
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: March 19, 2024
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
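The per-seat scanning described in this abstract can be illustrated with a toy sketch: iterate over known seating regions of the cabin image and run facial detection on each crop. The seat geometry, image representation, and stub detector below are illustrative stand-ins, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class SeatRegion:
    name: str
    x: int
    y: int
    w: int
    h: int  # region of interest within the in-cabin image

def detect_face(crop):
    # Placeholder for a trained facial detection model:
    # report a face when the crop contains any non-zero pixel.
    return any(any(row) for row in crop)

def scan_seats(image, seats):
    """Run facial detection on each seating location's image region."""
    results = {}
    for seat in seats:
        crop = [row[seat.x:seat.x + seat.w]
                for row in image[seat.y:seat.y + seat.h]]
        results[seat.name] = detect_face(crop)
    return results

seats = [SeatRegion("driver", 0, 0, 2, 2), SeatRegion("passenger", 2, 0, 2, 2)]
image = [[0, 0, 1, 0],
         [0, 0, 0, 0]]
print(scan_seats(image, seats))  # → {'driver': False, 'passenger': True}
```

In the patent's terms, a real pipeline would replace `detect_face` with a trained model and derive `SeatRegion` bounds from the vehicle interior's geometry.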
  • Patent number: 11887383
    Abstract: Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: January 30, 2024
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Andrew Todd Zeilman, Gabriele Zijderveld
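Two of the steps in this abstract — classifying whether an object is within reachable distance of an occupant, and flagging an object left behind after the occupant exits — reduce to simple predicates. The positions, reach threshold, and labels below are illustrative assumptions, not values from the patent.

```python
import math

def interaction_level(occupant_pos, object_pos, reach=0.8):
    """Classify whether the object is within reachable distance (meters, assumed)."""
    d = math.dist(occupant_pos, object_pos)
    return "reachable" if d <= reach else "out_of_reach"

def left_behind(occupant_in_vehicle, object_in_vehicle):
    """Flag an object as left behind once the occupant has exited the vehicle."""
    return object_in_vehicle and not occupant_in_vehicle

print(interaction_level((0.0, 0.0), (0.5, 0.3)))  # distance ≈ 0.58 → reachable
print(left_behind(False, True))                    # True: object remains after exit
```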
  • Publication number: 20230419642
    Abstract: Machine learning is used for a neural network multi-attribute facial encoder and decoder. A facial image is obtained for processing on a neural network and is encoded into two or more orthogonal feature subspaces. The encoding is performed by a single, trained encoder. The encoder is a downsampling encoder, orthogonality of the feature subspaces is established using metrics, and orthogonality enables separability of the feature subspaces. Embeddings are generated for two or more attributes of the facial image, wherein the embeddings are generated using one or more copies of the single, trained encoder. The embeddings comprise a vector representation of the two or more attributes of the facial image. A neural network is trained for a multi-task objective, wherein the training is based on the embeddings. The embeddings replace and augment training images. The multi-task objective provides identification of the two or more attributes of the facial image.
    Type: Application
    Filed: June 22, 2023
    Publication date: December 28, 2023
    Applicant: Smart Eye International Inc.
    Inventors: Ajjen Das Joshi, Sandipan Banerjee, Panu James Turcot
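The abstract states that orthogonality of the feature subspaces is established using metrics. One common such metric is absolute cosine similarity between embedding vectors: near zero means the attribute subspaces are nearly orthogonal, hence separable. The tiny vectors below are hypothetical, not from the patent.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    return sum(a * a for a in v) ** 0.5

def orthogonality_metric(u, v):
    """Absolute cosine similarity: ~0 indicates (near-)orthogonal subspaces."""
    return abs(dot(u, v)) / (norm(u) * norm(v))

identity_emb = [1.0, 0.0, 0.0]    # hypothetical identity-attribute embedding
expression_emb = [0.0, 1.0, 0.2]  # hypothetical expression-attribute embedding
print(orthogonality_metric(identity_emb, expression_emb))  # → 0.0
```

In training, a term like this can be added to the multi-task loss to push attribute embeddings produced by the shared encoder toward orthogonality.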
  • Patent number: 11823055
    Abstract: Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: November 21, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed, Panu James Turcot, Andrew Todd Zeilman, Gabriele Zijderveld
  • Patent number: 11704574
    Abstract: Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: July 18, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Panu James Turcot
  • Patent number: 11700420
Abstract: Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metrics. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
    Type: Grant
    Filed: June 12, 2020
    Date of Patent: July 11, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Melissa Sue Burke, Andrew Edwin Dreisch, Graham John Page, Panu James Turcot, Evan Kodra
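The engagement-score and length-optimization steps in this abstract can be sketched in a few lines: score each portion of the presentation from its intensity metrics, then shorten portions that fall below a threshold. The averaging rule, threshold, and shrink factor are illustrative assumptions, not the patent's method.

```python
def engagement_score(intensity_metrics):
    """Engagement as the mean of per-sample emotional intensity metrics (0-1)."""
    return sum(intensity_metrics) / len(intensity_metrics)

def trim_portion_lengths(portion_lengths, portion_scores, threshold=0.5, factor=0.5):
    """Hypothetical optimization: shorten portions whose engagement falls
    below the threshold; leave well-received portions untouched."""
    return [length * factor if score < threshold else length
            for length, score in zip(portion_lengths, portion_scores)]

scores = [engagement_score([0.7, 0.9]), engagement_score([0.2, 0.4])]
print(trim_portion_lengths([10.0, 20.0], scores))  # → [10.0, 10.0]
```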
  • Patent number: 11657288
    Abstract: Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: May 23, 2023
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Daniel McDuff
  • Patent number: 11587357
    Abstract: Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.
    Type: Grant
    Filed: March 16, 2020
    Date of Patent: February 21, 2023
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 11410438
    Abstract: Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 9, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Taniya Mishra, Boisy G. Pitre, Panu James Turcot, Andrew Todd Zeilman
  • Patent number: 11318949
    Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: May 3, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
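The temporal analysis described here — grouping consecutive closed-eye frames into blink events, then deriving a blink rate and a drowsiness metric — can be sketched directly. The frame rate, baseline blink rate, and the ratio-based metric are illustrative assumptions; a demographic baseline would stand in for the compensation step.

```python
def blink_events(eyes_closed):
    """Group consecutive closed-eye frames into blink events (start, duration)."""
    events, start = [], None
    for i, closed in enumerate(eyes_closed):
        if closed and start is None:
            start = i                      # blink begins
        elif not closed and start is not None:
            events.append((start, i - start))  # blink ends; record duration
            start = None
    if start is not None:                  # video ended mid-blink
        events.append((start, len(eyes_closed) - start))
    return events

def drowsiness_metric(eyes_closed, fps=30.0, baseline_bpm=15.0):
    """Blink rate relative to a (hypothetical) demographic baseline; >1.0
    suggests an elevated blink rate for this individual."""
    minutes = len(eyes_closed) / fps / 60.0
    if minutes == 0.0:
        return 0.0
    bpm = len(blink_events(eyes_closed)) / minutes
    return bpm / baseline_bpm

frames = [False, True, True, False, False, True, False]  # per-frame eye state
print(blink_events(frames))  # → [(1, 2), (5, 1)]
```

The per-event durations returned here also support the abstract's blink-duration evaluation.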
  • Patent number: 11292477
    Abstract: Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
    Type: Grant
    Filed: June 2, 2019
    Date of Patent: April 5, 2022
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot, Andrew Todd Zeilman, Taniya Mishra
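Mapping a cognitive state onto a continuous loading curve, and then choosing a sensory-stimulation change, can be illustrated with piecewise-linear interpolation over control points. The curve shape, thresholds, and action labels below are hypothetical, not taken from the patent.

```python
def loading_curve(score, curve):
    """Map a cognitive state score onto a continuous loading curve given as
    sorted (score, load) control points; linear interpolation in between."""
    if score <= curve[0][0]:
        return curve[0][1]
    if score >= curve[-1][0]:
        return curve[-1][1]
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        if x0 <= score <= x1:
            return y0 + (y1 - y0) * (score - x0) / (x1 - x0)

def stimulation_adjustment(load, low=0.3, high=0.7):
    """Hypothetical alteration rule: raise stimulation when load is low,
    reduce it when load is high, otherwise leave it unchanged."""
    if load < low:
        return "increase_stimulation"
    if load > high:
        return "decrease_stimulation"
    return "no_change"

curve = [(0.0, 0.0), (0.5, 0.2), (1.0, 1.0)]  # illustrative loading curve
load = loading_curve(0.75, curve)             # ≈ 0.6
print(stimulation_adjustment(load))           # → no_change
```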
  • Publication number: 20210339759
    Abstract: Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data and mode data for the vehicle. The updated information is provided for vehicle control. The mode data is configurable based on a mode setting. The mode data is weighted based on additional information.
    Type: Application
    Filed: July 19, 2021
    Publication date: November 4, 2021
    Applicant: Affectiva, Inc.
    Inventors: Maha Amr Mohamed Fouad, Rana el Kaliouby, Abdelrahman N. Mahmoud, Panu James Turcot
  • Publication number: 20210279514
Abstract: Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine; the evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on the evaluating the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.
    Type: Application
    Filed: May 24, 2021
    Publication date: September 9, 2021
    Applicant: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
  • Patent number: 11073899
    Abstract: Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.
    Type: Grant
    Filed: September 30, 2019
    Date of Patent: July 27, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Seyedmohammad Mavadati, Taniya Mishra, Timothy Peacock, Gregory Poulin, Panu James Turcot
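The background-process monitoring and cross-device notification described in this abstract can be sketched with a worker thread that watches an event stream for a target expression and pushes a notification to a separate queue. The expression labels, sentinel shutdown, and queue-based "devices" are illustrative simplifications.

```python
import queue
import threading

def monitor(events, notify, target="smile"):
    """Background process: watch an incoming expression stream and push a
    notification when the target expression occurs (names are illustrative)."""
    while True:
        expression = events.get()
        if expression is None:   # sentinel: stop monitoring
            return
        if expression == target:
            notify.put(f"detected: {target}")

events, notify = queue.Queue(), queue.Queue()
worker = threading.Thread(target=monitor, args=(events, notify), daemon=True)
worker.start()
for e in ["neutral", "smile", None]:   # simulated multimodal expression stream
    events.put(e)
worker.join()                          # background process has drained the stream
msg = notify.get_nowait()
print(msg)  # → detected: smile
```

In the patent's framing, `events` would be fed by a device local to the individual and `notify` would be read by a distinct device providing emotion services.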
  • Patent number: 11056225
    Abstract: Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment. The interactive digital environment can be a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants. Results of the analyzing of the emotional content within the group of images are provided to a second set of participants within the group of participants. The analyzing emotional content includes identifying an image of an individual, identifying a face of the individual, determining facial regions, and performing content evaluation based on applying image classifiers.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: July 6, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, James Henry Deal, Jr., Forest Jay Handford, Panu James Turcot, Gabriele Zijderveld
  • Publication number: 20210188291
    Abstract: Disclosed techniques include in-vehicle drowsiness analysis using blink-rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 24, 2021
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Ajjen Das Joshi, Survi Kyal, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 11017250
Abstract: Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. The evaluating provides a cognitive state analysis. Further images are analyzed using the multilayered analysis engine. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: May 25, 2021
    Assignee: Affectiva, Inc.
    Inventors: Panu James Turcot, Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati
  • Publication number: 20210125065
    Abstract: Deep learning in situ retraining uses deep learning nodes to provide a human perception state on a user device. A plurality of images including facial data is obtained for human perception state analysis. A server device trains a set of weights on a set of layers for deep learning that implements the analysis, where the training is performed with a first set of training data. A subset of weights is deployed on deep learning nodes on a user device, where the deploying enables at least part of the human perception state analysis. An additional set of weights is retrained on the user device, where the additional set of weights is trained using a second set of training data. A human perception state based on the subset of the set of weights, the additional set of weights, and input images obtained by the user device is provided on the user device.
    Type: Application
    Filed: October 23, 2020
    Publication date: April 29, 2021
    Applicant: Affectiva, Inc.
    Inventors: Panu James Turcot, Seyedmohammad Mavadati
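The split described in this abstract — a deployed subset of weights that stays fixed, plus an additional set of weights retrained on the user device — can be illustrated with a frozen base vector and a perceptron-style trainable head. The model, update rule, and data are deliberately tiny stand-ins, not the patent's architecture.

```python
def predict(features, base_w, head_w):
    """Score combines the frozen base weights with the on-device head weights."""
    z = sum(f * (b + h) for f, b, h in zip(features, base_w, head_w))
    return 1.0 if z > 0 else 0.0

def retrain_head(samples, base_w, lr=0.1, epochs=10):
    """Perceptron-style updates touch only the head weights; the deployed
    base weights stay fixed, mirroring in situ retraining on the device."""
    head_w = [0.0] * len(base_w)
    for _ in range(epochs):
        for features, label in samples:
            err = label - predict(features, base_w, head_w)
            head_w = [h + lr * err * f for h, f in zip(head_w, features)]
    return head_w

base_w = [1.0, -1.0]            # subset of weights deployed from the server
samples = [([1.0, 2.0], 1.0)]   # second set of training data, gathered on-device
head_w = retrain_head(samples, base_w)
print(predict([1.0, 2.0], base_w, head_w))  # → 1.0 after on-device retraining
```

The final prediction uses both the deployed subset and the retrained additional weights, as the abstract describes.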
  • Patent number: 10922566
    Abstract: Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: February 16, 2021
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Abdelrahman N. Mahmoud, Seyedmohammad Mavadati, Panu James Turcot
  • Patent number: 10922567
    Abstract: Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible light-based images and near-infrared based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.
    Type: Grant
    Filed: March 1, 2019
    Date of Patent: February 16, 2021
    Assignee: Affectiva, Inc.
    Inventors: Abdelrahman N. Mahmoud, Rana el Kaliouby, Seyedmohammad Mavadati, Panu James Turcot