Patents by Inventor Thibaud Senechal

Thibaud Senechal has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11935281
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected; the data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined based on the images. The set of seating locations is scanned, and facial detection is performed for each seating location using a facial detection model. A view of a detected face is manipulated based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed; the analysis is based on additional images of the detected face and uses the manipulated view. The cognitive state data analysis is provided to a using application, which in turn provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
    Type: Grant
    Filed: July 14, 2020
    Date of Patent: March 19, 2024
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
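The scanning loop in the abstract above lends itself to a short illustration. This is a minimal sketch under stated assumptions: the fixed seat regions and the detect_face() stub are hypothetical stand-ins for the vehicle geometry and the trained facial detection model the patent relies on.

```python
# Illustrative sketch of scanning fixed seating locations for faces.
# SEAT_ROIS and detect_face() are assumptions, not details from the patent.
import numpy as np

SEAT_ROIS = {                      # (top, left, bottom, right) in pixels
    "driver":    (100, 600, 400, 900),
    "passenger": (100, 100, 400, 400),
    "rear_left": (420, 100, 700, 400),
}

def detect_face(crop: np.ndarray):
    """Stub for a trained facial-detection model; returns a box or None."""
    return None

def scan_seats(frame: np.ndarray) -> dict:
    """Run facial detection once per known seating location."""
    faces = {}
    for seat, (t, l, b, r) in SEAT_ROIS.items():
        box = detect_face(frame[t:b, l:r])   # restrict search to one seat
        if box is not None:
            faces[seat] = box
    return faces
```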
  • Patent number: 11521599
    Abstract: A system and method perform wakeword detection using a feedforward neural network model. A first output of the model indicates when the wakeword appears on the right side of a first window of input audio data. A second output of the model indicates when the wakeword appears in the center of a second window of input audio data. A third output of the model indicates when the wakeword appears on the left side of a third window of input audio data. Using these outputs, the system and method determine a beginpoint and an endpoint of the wakeword.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: December 6, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Christin Jose, Yuriy Mishchenko, Anish N. Shah, Alex Escott, Parind Shah, Shiv Naga Prasad Vitaladevuni, Thibaud Senechal
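To make the three-output scheme concrete, here is a hedged sketch of deriving a beginpoint and endpoint from the alignment posteriors. The window length, frame hop, threshold, and the exact mapping from edge alignment to timestamps are illustrative assumptions, not details taken from the patent.

```python
# Hedged sketch: recover wakeword begin/end times from the model's
# "right-edge" and "left-edge" alignment posteriors. FRAME_MS, WINDOW_MS,
# and THRESH are illustrative; both posteriors are assumed to cross the
# threshold at least once.
import numpy as np

FRAME_MS  = 10      # hop between successive input windows
WINDOW_MS = 1000    # length of each input window
THRESH    = 0.5

def wakeword_extent(right_post: np.ndarray, left_post: np.ndarray):
    """Return (begin_ms, end_ms) estimated from per-window posteriors."""
    t_right = int(np.argmax(right_post > THRESH))  # wakeword at right edge
    t_left  = int(np.argmax(left_post > THRESH))   # wakeword at left edge
    end_ms   = t_right * FRAME_MS + WINDOW_MS      # right edge of that window
    begin_ms = t_left * FRAME_MS                   # left edge of that window
    return begin_ms, end_ms
```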
  • Patent number: 11355102
    Abstract: A neural network model of a user device is trained to map different words represented in audio data to different points in an N-dimensional embedding space. When the user device determines that a mapped point corresponds to a wakeword, it causes further audio processing, such as automatic speech recognition or natural-language understanding, to be performed on the audio data. The user device may create the wakeword by first processing audio data representing the wakeword to determine its mapped point in the embedding space.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: June 7, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Yuriy Mishchenko, Thibaud Senechal, Anish N. Shah, Shiv Naga Prasad Vitaladevuni
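A minimal sketch of the enrollment-then-detection flow described above, assuming embed() stands in for the trained N-dimensional embedding model and that closeness is measured by Euclidean distance (the patent does not prescribe the metric used here):

```python
# Sketch of embedding-based custom wakewords: enrollment maps the spoken
# word to a reference point; detection checks whether new audio maps
# nearby. embed() and the distance threshold are illustrative stubs.
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Stub for the trained model mapping audio to an N-dim point."""
    return np.zeros(128)

class CustomWakeword:
    def __init__(self, enrollment_audio: np.ndarray, threshold: float = 0.7):
        self.reference = embed(enrollment_audio)   # enrollment step
        self.threshold = threshold

    def detected(self, audio: np.ndarray) -> bool:
        # On a match, further processing (ASR/NLU) would be triggered.
        return float(np.linalg.norm(embed(audio) - self.reference)) < self.threshold
```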
  • Patent number: 11205420
    Abstract: A system and method performs wakeword detection using a neural network model that includes a recurrent neural network (RNN) for processing variable-length wakewords. To prevent the model from being influenced by non-wakeword speech, multiple instances of the model are created to process audio data, and each instance is configured to use weights determined by training data. The model may instead or in addition be used to process the audio data only when a likelihood that the audio data corresponds to the wakeword is greater than a threshold. The model may process the audio data as represented by groups of acoustic feature vectors; computations for feature vectors common to different groups may be re-used.
    Type: Grant
    Filed: June 10, 2019
    Date of Patent: December 21, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Gengshen Fu, Thibaud Senechal, Shiv Naga Prasad Vitaladevuni, Michael J. Rodehorst, Varun K. Nagaraja
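The computation-reuse idea in the abstract can be illustrated with a small cache: overlapping groups of acoustic feature vectors share frames, so each frame's encoding is computed at most once. The encode_frame() body is a placeholder, not the patent's network.

```python
# Sketch of re-using per-frame computation across overlapping groups of
# acoustic feature vectors.
import numpy as np

class CachedEncoder:
    def __init__(self, frames):
        self.frames = frames          # sequence of acoustic feature vectors
        self.cache = {}               # frame index -> encoded frame

    def encode_frame(self, i: int) -> np.ndarray:
        if i not in self.cache:       # computed at most once per frame
            self.cache[i] = np.tanh(self.frames[i])   # placeholder layers
        return self.cache[i]

    def encode_group(self, start: int, length: int) -> np.ndarray:
        # Overlapping groups hit the cache for their shared frames.
        return np.stack([self.encode_frame(i) for i in range(start, start + length)])
```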
  • Publication number: 20210358497
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function to smooth it and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 18, 2021
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
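As a rough sketch of the shared front end feeding two heads, the snippet below wires one LFBE frame into a wakeword head (softmax) and an acoustic-event head (sigmoid). The random weights, 40-dimensional features, and five event types are all illustrative stand-ins for trained models.

```python
# Sketch: one LFBE front end shared by a wakeword head and an
# acoustic-event head, as described in the abstract above.
import numpy as np

rng = np.random.default_rng(0)
W_ww  = rng.standard_normal((40, 2))   # wakeword head: [non-wakeword, wakeword]
W_aed = rng.standard_normal((40, 5))   # event head: 5 illustrative event types

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def process(lfbe_frame: np.ndarray):
    """lfbe_frame: 40-dim log-filterbank-energy vector for one frame."""
    ww_post   = softmax(lfbe_frame @ W_ww)    # smoothed/spike-checked downstream
    aed_score = sigmoid(lfbe_frame @ W_aed)   # fed to a per-event classifier
    return ww_post, aed_score
```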
  • Patent number: 11132990
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function to smooth it and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: September 28, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
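This grant shares its disclosure with the application above, so rather than repeat that sketch, here is a different facet: a simplified LFBE front end. The frame size, filter count, and mel construction are textbook defaults, not values from the patent.

```python
# Simplified log-filterbank-energy (LFBE) computation from windowed audio.
import numpy as np

def lfbe(frames: np.ndarray, sr=16000, n_fft=512, n_filt=40) -> np.ndarray:
    """frames: (num_frames, n_fft) windowed audio; returns (num_frames, n_filt)."""
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Mel-spaced triangular filterbank (illustrative construction).
    mel = np.linspace(0, 2595 * np.log10(1 + (sr / 2) / 700), n_filt + 2)
    hz = 700 * (10 ** (mel / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for m in range(1, n_filt + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(power @ fbank.T + 1e-6)   # small floor avoids log(0)
```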
  • Patent number: 11043218
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function to smooth it and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: June 22, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
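A third facet of the same disclosure: smoothing the wakeword head's posterior and firing on threshold crossings ("spikes"). The window length and threshold are assumptions.

```python
# Sketch of posterior smoothing and spike detection for the wakeword head.
import numpy as np

def smooth(posteriors: np.ndarray, k: int = 30) -> np.ndarray:
    """Moving average of per-frame wakeword posteriors."""
    return np.convolve(posteriors, np.ones(k) / k, mode="same")

def spikes(posteriors: np.ndarray, thresh: float = 0.8) -> np.ndarray:
    s = smooth(posteriors)
    # A spike is a frame where the smoothed posterior crosses the threshold.
    return np.flatnonzero((s[1:] >= thresh) & (s[:-1] < thresh)) + 1
```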
  • Publication number: 20210001862
    Abstract: Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected; the data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined based on the images. The set of seating locations is scanned, and facial detection is performed for each seating location using a facial detection model. A view of a detected face is manipulated based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed; the analysis is based on additional images of the detected face and uses the manipulated view. The cognitive state data analysis is provided to a using application, which in turn provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.
    Type: Application
    Filed: July 14, 2020
    Publication date: January 7, 2021
    Applicant: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot, Mohamed Ezzeldin Abdelmonem Ahmed Mohamed
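This application shares its disclosure with patent 11935281 above, so here is a complementary facet: manipulating the view of a detected face using the cabin geometry, sketched as a perspective warp. The four-corner input and output size are illustrative; the patent does not specify this particular transform.

```python
# Sketch: warp an off-axis face region to a frontal, canonical view.
import cv2
import numpy as np

def frontalize(frame: np.ndarray, face_quad, out_size=(224, 224)) -> np.ndarray:
    """face_quad: 4 corners (tl, tr, br, bl) of the face region, which in
    practice would be derived from the vehicle's known interior geometry."""
    src = np.asarray(face_quad, dtype=np.float32)
    w, h = out_size
    dst = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, out_size)
```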
  • Patent number: 10872599
    Abstract: A device monitors audio data for a predetermined and/or user-defined wakeword. The device detects an error in detecting the wakeword in the audio data, such as a false-positive detection of the wakeword or a false-negative detection of the wakeword. Upon detecting the error, the device updates a model trained to detect the wakeword to create an updated trained model; the updated trained model reduces or eliminates further errors in detecting the wakeword. Data corresponding to the updated trained model may be collected by a server from a plurality of devices and used to create an updated trained model aggregating the data; this updated trained model may be sent to some or all of the devices.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: December 22, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Shuang Wu, Thibaud Senechal, Gengshen Fu, Shiv Naga Prasad Vitaladevuni
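The server-side aggregation step can be sketched briefly. Plain weight averaging is an assumed rule; the patent only says data collected from many devices is combined into an updated trained model.

```python
# Sketch: average locally updated model weights reported by many devices.
import numpy as np

def aggregate(device_weights: list) -> np.ndarray:
    """device_weights: per-device weight arrays of identical shape."""
    return np.mean(np.stack(device_weights), axis=0)

# Example: three devices report weights updated after wakeword errors.
updated = aggregate([np.array([0.1, 0.9]),
                     np.array([0.2, 0.8]),
                     np.array([0.3, 0.7])])   # -> [0.2, 0.8]
```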
  • Patent number: 10614289
    Abstract: Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations of the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face.
    Type: Grant
    Filed: September 8, 2015
    Date of Patent: April 7, 2020
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot
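Two pieces of the abstract above are easy to illustrate: the rough bounding box around the landmarks and the estimation of future landmark locations. The constant-velocity extrapolation below is an illustrative simplification of the latter; the real system refines locations with localized image information.

```python
# Sketch: rough bounding box and future-location estimation for landmarks.
import numpy as np

def bounding_box(landmarks: np.ndarray):
    """landmarks: (num_landmarks, 2) array of (x, y); returns a rough box."""
    (x0, y0), (x1, y1) = landmarks.min(axis=0), landmarks.max(axis=0)
    return x0, y0, x1, y1

def predict_next(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Estimate landmark positions in a future frame from the last two."""
    return curr + (curr - prev)    # constant-velocity assumption
```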
  • Patent number: 10108852
    Abstract: A system and method for facial analysis to detect asymmetric expressions are disclosed. A series of facial images is collected, and an image from the series is evaluated with a classifier. The image is then flipped to create a flipped image, and the flipped image is evaluated with the same classifier. The results of evaluating the original image and the flipped image are compared. Asymmetric features such as a wink, a raised eyebrow, a smirk, or a wince are identified. These asymmetric features are associated with mental states such as skepticism, contempt, condescension, repugnance, disgust, disbelief, cynicism, pessimism, doubt, suspicion, and distrust.
    Type: Grant
    Filed: September 19, 2013
    Date of Patent: October 23, 2018
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby
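The flip-and-compare test described above reduces to a few lines: a symmetric expression should score similarly on the image and its mirror, so a large score gap flags an asymmetric feature. The score() stub stands in for the trained expression classifier, and the gap threshold is an assumption.

```python
# Sketch of detecting asymmetric expressions by comparing classifier
# scores on an image and its horizontal mirror.
import numpy as np

def score(image: np.ndarray) -> float:
    """Stub for the trained expression classifier."""
    return float(image.mean()) / 255.0

def is_asymmetric(image: np.ndarray, gap: float = 0.2) -> bool:
    flipped = image[:, ::-1]               # horizontal mirror
    return abs(score(image) - score(flipped)) > gap
```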
  • Patent number: 9934425
    Abstract: A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and mental states inferred from these performances. Multiple devices, including mobile devices, can observe and record or transmit a user's mental state data. The mental state data collected from the multiple devices can be used to analyze the mental states of the user. The mental state data can be in the form of facial expressions, electrodermal activity, movements, or other detectable manifestations. Multiple cameras on the multiple devices can be usefully employed to collect facial data. An output can be rendered based on an analysis of the mental state data.
    Type: Grant
    Filed: December 30, 2013
    Date of Patent: April 3, 2018
    Assignee: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Daniel Abraham Bender, Evan Kodra, Oliver Ernst Nowak, Richard Scott Sadowsky, Thibaud Senechal, Panu James Turcot
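As a small sketch of combining mental state data arriving from several devices, the snippet below merges per-device observations onto one timeline before analysis. The record layout is an illustrative assumption.

```python
# Sketch: merge multi-device mental state observations on a shared timeline.
from dataclasses import dataclass

@dataclass
class Observation:
    timestamp: float        # seconds since session start
    device: str             # e.g. "phone-camera", "laptop-camera", "eda-band"
    modality: str           # "facial", "electrodermal", "movement"
    value: float

def merge(*device_streams):
    """Interleave per-device observation lists into one sorted timeline."""
    merged = [obs for stream in device_streams for obs in stream]
    return sorted(merged, key=lambda o: o.timestamp)
```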
  • Patent number: 9723992
    Abstract: Mental state analysis is performed by obtaining video of an individual as the individual interacts with a computer, either by performing various operations or by consuming a media presentation. The video is analyzed to determine eye-blink information for the individual, such as eye-blink rate or eye-blink duration. A mental state of the individual is then inferred based on the eye-blink information. The blink-rate information and associated mental states can be used to modify an advertisement, a media presentation, or a digital game.
    Type: Grant
    Filed: March 15, 2014
    Date of Patent: August 8, 2017
    Assignee: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Niels Haering
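Extracting blink rate and duration from a per-frame eye-openness signal can be sketched as follows; the openness signal itself would come from the video analysis, and the closed-eye threshold is an assumption.

```python
# Sketch: blink rate (per minute) and mean blink duration (seconds) from
# a per-frame eye-openness signal (0 = closed, 1 = open).
import numpy as np

def blink_stats(openness: np.ndarray, fps=30.0, closed_thresh=0.2):
    closed = openness < closed_thresh
    # A blink starts where the eye transitions open -> closed.
    starts = np.flatnonzero(closed[1:] & ~closed[:-1]) + 1
    rate = len(starts) / (len(openness) / fps) * 60.0    # blinks per minute
    duration = closed.sum() / max(len(starts), 1) / fps  # mean seconds closed
    return rate, duration
```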
  • Patent number: 9367736
    Abstract: A multi-orientation text detection method and an associated system are disclosed that utilize orientation-variant glyph features to determine a text line in an image regardless of the orientation of the text line. Glyph features are determined for each glyph in an image with respect to a neighboring glyph. The glyph features are provided to a learned classifier that outputs a glyph-pair score for each neighboring glyph pair. Each glyph-pair score indicates the likelihood that the corresponding pair of neighboring glyphs form part of the same text line. The glyph-pair scores are used to identify candidate text lines, which are then ranked to select a final set of text lines in the image.
    Type: Grant
    Filed: September 1, 2015
    Date of Patent: June 14, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Thibaud Senechal, Quan Wang, Daniel Makoto Willenson, Shuang Wu, Yue Liu, Shiv Naga Prasad Vitaladevuni, David Paul Ramos, Qingfeng Yu
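The pairing-and-chaining step described above can be sketched as follows. pair_score() is a stand-in for the learned classifier over glyph-pair features, and the greedy chaining is an illustrative simplification of the candidate-line ranking.

```python
# Sketch: score neighboring glyph pairs, then chain high-scoring pairs
# into candidate text lines.
def pair_score(glyph_a, glyph_b) -> float:
    """Stub for the learned classifier over orientation-variant glyph-pair
    features (relative position, scale, angle, ...)."""
    return 0.0

def candidate_lines(glyphs, neighbors, thresh=0.5):
    """neighbors: list of (i, j) index pairs of neighboring glyphs."""
    links = {i: j for i, j in neighbors
             if pair_score(glyphs[i], glyphs[j]) > thresh}
    lines, used = [], set()
    for i in links:
        if i in used:
            continue
        line = [i]
        while line[-1] in links and links[line[-1]] not in used:
            line.append(links[line[-1]])
            used.add(line[-1])
        used.update(line)
        lines.append(line)
    return lines
```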
  • Publication number: 20160004904
    Abstract: Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations of the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face.
    Type: Application
    Filed: September 8, 2015
    Publication date: January 7, 2016
    Inventors: Thibaud Senechal, Rana el Kaliouby, Panu James Turcot
  • Publication number: 20140200417
    Abstract: Mental state analysis is performed by obtaining video of an individual as the individual interacts with a computer, either by performing various operations or by consuming a media presentation. The video is analyzed to determine eye-blink information for the individual, such as eye-blink rate or eye-blink duration. A mental state of the individual is then inferred based on the eye-blink information. The blink-rate information and associated mental states can be used to modify an advertisement, a media presentation, or a digital game.
    Type: Application
    Filed: March 15, 2014
    Publication date: July 17, 2014
    Applicant: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby, Niels Haering
  • Publication number: 20140112540
    Abstract: A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and mental states inferred from these performances. Multiple devices, including mobile devices, can observe and record or transmit a user's mental state data. The mental state data collected from the multiple devices can be used to analyze the mental states of the user. The mental state data can be in the form of facial expressions, electrodermal activity, movements, or other detectable manifestations. Multiple cameras on the multiple devices can be usefully employed to collect facial data. An output can be rendered based on an analysis of the mental state data.
    Type: Application
    Filed: December 30, 2013
    Publication date: April 24, 2014
    Applicant: Affectiva, Inc.
    Inventors: Rana el Kaliouby, Daniel Abraham Bender, Evan Kodra, Oliver Ernst Nowak, Richard Scott Sadowsky, Thibaud Senechal, Panu James Turcot
  • Publication number: 20140016860
    Abstract: A system and method for facial analysis to detect asymmetric expressions are disclosed. A series of facial images is collected, and an image from the series is evaluated with a classifier. The image is then flipped to create a flipped image, and the flipped image is evaluated with the same classifier. The results of evaluating the original image and the flipped image are compared. Asymmetric features such as a wink, a raised eyebrow, a smirk, or a wince are identified. These asymmetric features are associated with mental states such as skepticism, contempt, condescension, repugnance, disgust, disbelief, cynicism, pessimism, doubt, suspicion, and distrust.
    Type: Application
    Filed: September 19, 2013
    Publication date: January 16, 2014
    Applicant: Affectiva, Inc.
    Inventors: Thibaud Senechal, Rana el Kaliouby