Patents by Inventor Spyridon Matsoukas

Spyridon Matsoukas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11935525
    Abstract: Systems and methods for utilizing microphone array information for acoustic modeling are disclosed. Audio data may be received from a device having a microphone array configuration. Microphone configuration data may also be received that indicates the configuration of the microphone array. The microphone configuration data may be utilized as an input vector to an acoustic model, along with the audio data, to generate phoneme data. Additionally, the microphone configuration data may be utilized to train and/or generate acoustic models, select an acoustic model to perform speech recognition with, and/or to improve trigger sound detection.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: March 19, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Shiva Kumar Sundaram, Minhua Wu, Anirudh Raju, Spyridon Matsoukas, Arindam Mandal, Kenichi Kumatani
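
The sketch below (Python, illustrative only) shows the kind of conditioning the abstract above describes: a microphone-array configuration vector is concatenated with per-frame acoustic features before they reach the acoustic model. The model here is a random, untrained single-layer network, and all names and dimensions are assumptions, not details taken from the patent.

```python
import numpy as np

NUM_PHONEMES = 40
FEATURE_DIM = 64          # per-frame acoustic features (e.g., filterbank energies)
MIC_CONFIGS = ["2_mic_linear", "4_mic_planar", "7_mic_circular"]   # hypothetical labels

def mic_config_vector(config_name: str) -> np.ndarray:
    """One-hot encoding of the device's microphone-array configuration."""
    vec = np.zeros(len(MIC_CONFIGS))
    vec[MIC_CONFIGS.index(config_name)] = 1.0
    return vec

def acoustic_model(frame_features: np.ndarray, mic_vec: np.ndarray,
                   weights: np.ndarray) -> np.ndarray:
    """Return phoneme posteriors for one frame, conditioned on the mic config."""
    x = np.concatenate([frame_features, mic_vec])   # audio features + config vector
    logits = weights @ x
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                          # softmax over phonemes

# Illustrative only: a random, untrained weight matrix stands in for a trained model.
rng = np.random.default_rng(0)
W = rng.normal(size=(NUM_PHONEMES, FEATURE_DIM + len(MIC_CONFIGS)))
frame = rng.normal(size=FEATURE_DIM)                # stand-in acoustic features
posteriors = acoustic_model(frame, mic_config_vector("7_mic_circular"), W)
print(posteriors.argmax(), round(float(posteriors.max()), 3))
```
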
  • Patent number: 11893999
    Abstract: Techniques for enrolling a user in a system's user recognition functionality without requiring the user to speak particular speech are described. The system may determine characteristics unique to a user input. The system may generate an implicit voice profile from user inputs having similar characteristics. After an implicit voice profile is generated, the system may receive a user input having speech characteristics similar to those of the implicit voice profile. The system may ask the user if the user wants the system to associate the implicit voice profile with a particular user identifier. If the user responds affirmatively, the system may request an identifier of a user profile (e.g., a user name). In response to receiving the user's name, the system may identify a user profile associated with the name and associate the implicit voice profile with the user profile, thereby converting the implicit voice profile into an explicit voice profile.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: February 6, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Sai Sailesh Kopuri, John Moore, Sundararajan Srinivasan, Aparna Khare, Arindam Mandal, Spyridon Matsoukas, Rohit Prasad
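
A minimal sketch, under an assumed similarity threshold and embedding dimension, of the implicit-to-explicit flow described in the entry above: utterance embeddings with similar characteristics are grouped into an implicit profile, and a later user confirmation attaches a user identifier, converting it to an explicit profile.

```python
import numpy as np

SIM_THRESHOLD = 0.8          # assumed similarity threshold, not from the patent

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class ImplicitVoiceProfile:
    def __init__(self, first_embedding: np.ndarray):
        self.embeddings = [first_embedding]
        self.user_id = None                      # unknown until the user confirms

    def centroid(self) -> np.ndarray:
        return np.mean(self.embeddings, axis=0)

    def matches(self, embedding: np.ndarray) -> bool:
        return cosine(self.centroid(), embedding) >= SIM_THRESHOLD

    def add(self, embedding: np.ndarray) -> None:
        self.embeddings.append(embedding)

    def make_explicit(self, user_id: str) -> None:
        """Called after the user affirms the association with their profile."""
        self.user_id = user_id

# Group incoming utterance embeddings by similarity, then convert on confirmation.
rng = np.random.default_rng(1)
base = rng.normal(size=16)
profiles = []
for utterance in [base + 0.05 * rng.normal(size=16) for _ in range(3)]:
    for p in profiles:
        if p.matches(utterance):
            p.add(utterance)
            break
    else:
        profiles.append(ImplicitVoiceProfile(utterance))

profiles[0].make_explicit("user_profile_jane")   # the user answered "yes, that's me"
print(len(profiles), profiles[0].user_id)
```
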
  • Publication number: 20230410833
    Abstract: A speech-capture device can capture audio data during wakeword monitoring and use the audio data to determine if a user is present near the device, even if no wakeword is spoken. Audio such as speech, human-originating sounds (e.g., coughing, sneezing), or other human-related noises (e.g., footsteps, doors closing) can be used to detect human presence. Audio frames are individually scored as to whether a human presence is detected in the particular audio frames. The scores are then smoothed relative to nearby frames to create a decision for a particular frame. Presence information can then be sent according to a periodic schedule to a remote device to create a presence “heartbeat” that regularly identifies whether a user is detected proximate to a speech-capture device.
    Type: Application
    Filed: April 6, 2023
    Publication date: December 21, 2023
    Inventors: Shiva Kumar Sundaram, Chao Wang, Shiv Naga Prasad Vitaladevuni, Spyridon Matsoukas, Arindam Mandal
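
A minimal sketch of the frame-scoring approach summarized above, assuming a moving-average smoother, a fixed threshold, and a fixed reporting period; none of these specifics come from the publication.

```python
import numpy as np

def smooth_scores(frame_scores: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing of per-frame human-presence scores."""
    kernel = np.ones(window) / window
    return np.convolve(frame_scores, kernel, mode="same")

def heartbeat(frame_scores: np.ndarray, threshold: float = 0.5,
              frames_per_beat: int = 50) -> list:
    """One presence decision per reporting period, suitable for sending to a remote device."""
    decisions = smooth_scores(frame_scores) > threshold
    return [bool(decisions[start:start + frames_per_beat].any())
            for start in range(0, len(decisions), frames_per_beat)]

# Stand-in detector output: low scores with a burst of human-related sound.
rng = np.random.default_rng(2)
scores = np.clip(rng.normal(0.3, 0.1, size=200), 0.0, 1.0)
scores[80:120] += 0.5
print(heartbeat(scores))     # e.g., [False, True, True, False]
```
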
  • Patent number: 11670299
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function, to smooth it, and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: June 6, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
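
The untrained sketch below illustrates the shared-feature arrangement in the abstract above: the same LFBE-like features feed a wakeword branch (softmax posterior, smoothed and checked for spikes) and an acoustic-event branch (sigmoid score plus a threshold). The weights are random placeholders, not the patent's learned models, and the thresholds are assumptions.

```python
import numpy as np

LFBE_DIM = 20                             # assumed feature size, illustrative only
rng = np.random.default_rng(3)
W_wake = rng.normal(size=(2, LFBE_DIM))   # random stand-in: [non-wakeword, wakeword] logits
w_event = rng.normal(size=LFBE_DIM)       # random stand-in: acoustic-event logit

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def process_frames(lfbe_frames: np.ndarray):
    # Wakeword branch: softmax posterior per frame, smoothed, then a spike check.
    wake_posteriors = np.array([softmax(W_wake @ f)[1] for f in lfbe_frames])
    smoothed = np.convolve(wake_posteriors, np.ones(3) / 3, mode="same")
    wakeword_detected = bool((smoothed > 0.9).any())
    # Event branch: sigmoid score per frame, then a simple threshold classifier.
    event_detected = bool((sigmoid(lfbe_frames @ w_event) > 0.8).any())
    return wakeword_detected, event_detected

frames = rng.normal(size=(100, LFBE_DIM))   # stand-in LFBE features
print(process_frames(frames))
```
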
  • Patent number: 11657804
    Abstract: Features are disclosed for detecting words in audio using contextual information in addition to automatic speech recognition results. A detection model can be generated and used to determine whether a particular word, such as a keyword or “wake word,” has been uttered. The detection model can operate on features derived from an audio signal, contextual information associated with generation of the audio signal, and the like. In some embodiments, the detection model can be customized for particular users or groups of users based on usage patterns associated with the users.
    Type: Grant
    Filed: November 5, 2020
    Date of Patent: May 23, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Rohit Prasad, Kenneth John Basye, Spyridon Matsoukas, Rajiv Ramachandran, Shiv Naga Prasad Vitaladevuni, Bjorn Hoffmeister
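
A hedged sketch of blending audio-derived evidence with contextual signals into one detection score, as the abstract above describes at a high level. The feature names, weights, and threshold are invented for illustration; the patented detection model is learned from usage data.

```python
import math

def wake_word_score(asr_keyword_confidence: float,
                    device_is_playing_media: bool,
                    seconds_since_last_interaction: float) -> float:
    """Blend ASR-derived evidence with contextual signals into one detection score."""
    features = [
        asr_keyword_confidence,
        1.0 if device_is_playing_media else 0.0,
        min(seconds_since_last_interaction / 60.0, 1.0),
    ]
    weights = [4.0, -1.0, -0.5]            # hand-set, illustrative weights
    bias = -1.5
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))      # logistic score in [0, 1]

score = wake_word_score(0.92, device_is_playing_media=True,
                        seconds_since_last_interaction=5.0)
print(round(score, 3), score > 0.5)
```
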
  • Patent number: 11657832
    Abstract: A speech-capture device can capture audio data during wakeword monitoring and use the audio data to determine if a user is present near the device, even if no wakeword is spoken. Audio such as speech, human-originating sounds (e.g., coughing, sneezing), or other human-related noises (e.g., footsteps, doors closing) can be used to detect human presence. Audio frames are individually scored as to whether a human presence is detected in the particular audio frames. The scores are then smoothed relative to nearby frames to create a decision for a particular frame. Presence information can then be sent according to a periodic schedule to a remote device to create a presence “heartbeat” that regularly identifies whether a user is detected proximate to a speech-capture device.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: May 23, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Shiva Kumar Sundaram, Chao Wang, Shiv Naga Prasad Vitaladevuni, Spyridon Matsoukas, Arindam Mandal
  • Publication number: 20230032575
    Abstract: A system capable of performing natural language understanding (NLU) on utterances including complex command structures such as sequential commands (e.g., multiple commands in a single utterance), conditional commands (e.g., commands that are only executed if a condition is satisfied), and/or repetitive commands (e.g., commands that are executed until a condition is satisfied). Audio data may be processed using automatic speech recognition (ASR) techniques to obtain text. The text may then be processed using machine learning models that are trained to parse text of incoming utterances. The models may identify complex utterance structures and may identify what command portions of an utterance go with what conditional statements. Machine learning models may also identify what data is needed to determine when the conditionals are true so the system may cause the commands to be executed (and stopped) at the appropriate times.
    Type: Application
    Filed: August 8, 2022
    Publication date: February 2, 2023
    Inventors: Cengiz Erbas, Thomas Kollar, Avnish Sikka, Spyridon Matsoukas, Simon Peter Reavely
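
A toy, rule-based sketch of the target output structure for the complex-command parsing described above: an utterance is split into sequential commands and a condition is attached to the command it governs. The real system uses trained parsing models; the string matching here is purely illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ParsedCommand:
    action: str
    condition: Optional[str] = None    # executed only if/while the condition holds

def parse_utterance(text: str) -> List[ParsedCommand]:
    commands = []
    for part in text.lower().split(" and then "):        # sequential commands
        if " if " in part:
            action, condition = part.split(" if ", 1)    # conditional command
            commands.append(ParsedCommand(action.strip(), condition.strip()))
        else:
            commands.append(ParsedCommand(part.strip()))
    return commands

print(parse_utterance(
    "Turn on the porch light if it is after sunset and then play jazz"))
```
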
  • Patent number: 11410646
    Abstract: A system capable of performing natural language understanding (NLU) on utterances including complex command structures such as sequential commands (e.g., multiple commands in a single utterance), conditional commands (e.g., commands that are only executed if a condition is satisfied), and/or repetitive commands (e.g., commands that are executed until a condition is satisfied). Audio data may be processed using automatic speech recognition (ASR) techniques to obtain text. The text may then be processed using machine learning models that are trained to parse text of incoming utterances. The models may identify complex utterance structures and may identify what command portions of an utterance go with what conditional statements. Machine learning models may also identify what data is needed to determine when the conditionals are true so the system may cause the commands to be executed (and stopped) at the appropriate times.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: August 9, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Cengiz Erbas, Thomas Kollar, Avnish Sikka, Spyridon Matsoukas, Simon Peter Reavely
  • Publication number: 20220189458
    Abstract: Systems, methods, and devices for verifying a user are disclosed. A speech-controlled device captures a spoken command, and sends audio data corresponding thereto to a server. The server performs ASR on the audio data to determine ASR confidence data. The server, in parallel, performs user verification on the audio data to determine user verification confidence data. The server may modify the user verification confidence data using the ASR confidence data. In addition or alternatively, the server may modify the user verification confidence data using at least one of a location of the speech-controlled device within a building, a type of the speech-controlled device, or a geographic location of the speech-controlled device.
    Type: Application
    Filed: January 26, 2022
    Publication date: June 16, 2022
    Inventors: Spyridon Matsoukas, Aparna Khare, Vishwanathan Krishnamoorthy, Shamitha Somashekar, Arindam Mandal
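
An illustrative sketch, with invented priors and weights, of the score adjustment described above: a user-verification confidence is modified by ASR confidence and by device context such as location within a building and device type.

```python
def adjusted_verification_confidence(user_verification_conf: float,
                                     asr_conf: float,
                                     device_location: str,
                                     device_type: str) -> float:
    conf = user_verification_conf
    # Low ASR confidence suggests noisy or unclear audio, so discount the voice match.
    conf *= 0.5 + 0.5 * asr_conf
    # Hypothetical context priors: a bedroom device is more likely to hear its
    # usual user than a shared kitchen device; far-field audio is less reliable.
    location_prior = {"bedroom": 1.1, "kitchen": 0.95, "living_room": 1.0}
    type_prior = {"headset": 1.1, "far_field_speaker": 0.9}
    conf *= location_prior.get(device_location, 1.0)
    conf *= type_prior.get(device_type, 1.0)
    return min(conf, 1.0)

print(round(adjusted_verification_confidence(0.82, asr_conf=0.95,
                                             device_location="bedroom",
                                             device_type="far_field_speaker"), 3))
```
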
  • Patent number: 11361763
    Abstract: A speech-processing system capable of receiving and processing audio data to determine if the audio data includes speech that was intended for the system. Non-system directed speech may be filtered out while system-directed speech may be selected for further processing. A system-directed speech detector may use a trained machine learning model (such as a deep neural network or the like) to process a feature vector representing a variety of characteristics of the incoming audio data, including the results of automatic speech recognition and/or other data. Using the feature vector, the model may output an indicator as to whether the speech is system-directed. The system may also incorporate other filters such as voice activity detection prior to speech recognition, or the like.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: June 14, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Roland Maximilian Rolf Maas, Sri Harish Reddy Mallidi, Spyridon Matsoukas, Bjorn Hoffmeister
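
A minimal sketch of the decision flow above: a feature vector built from ASR results and other signals passes through a small network (untrained and random here) and the output is thresholded to decide whether the speech was system-directed. Feature choices and dimensions are assumptions, not details from the patent.

```python
import numpy as np

# Random weights stand in for the trained model (e.g., a deep neural network).
rng = np.random.default_rng(4)
W1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=8)

def is_system_directed(asr_confidence: float, speech_duration_s: float,
                       wakeword_present: bool, num_asr_tokens: int) -> bool:
    features = np.array([
        asr_confidence,
        min(speech_duration_s / 10.0, 1.0),
        1.0 if wakeword_present else 0.0,
        min(num_asr_tokens / 20.0, 1.0),
    ])
    hidden = np.tanh(W1 @ features)
    score = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))
    return bool(score > 0.5)

print(is_system_directed(0.9, 2.5, wakeword_present=True, num_asr_tokens=6))
```
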
  • Patent number: 11335346
    Abstract: Techniques for processing a user input are described. Text data representing a user input is processed with respect to at least one finite state transducer (FST) to generate at least one FST hypothesis. Context information may be required to traverse one or more paths of the at least one FST. The text data is also processed using at least one statistical model (e.g., perform intent classification, named entity recognition, and/or domain classification processing) to generate at least one statistical model hypothesis. The at least one FST hypothesis and the at least one statistical model hypothesis are input to a reranker that determines a most likely interpretation of the user input.
    Type: Grant
    Filed: December 10, 2018
    Date of Patent: May 17, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Chengwei Su, Spyridon Matsoukas, Sankaranarayanan Ananthakrishnan, Shirin Saleem, Chungnam Chan, Yugang Li, Mallory McManamon, Rahul Gupta, Luca Soldaini
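
A simplified sketch of the hybrid interpretation flow above: deterministic FST-style matches and statistical-model hypotheses are pooled, and a reranker picks the most likely interpretation. The "reranker" below is just a weighted score and the hypotheses are hard-coded placeholders; the patent's reranker is a trained model.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NluHypothesis:
    intent: str
    slots: Dict[str, str]
    score: float
    source: str                      # "fst" or "statistical"

def fst_hypotheses(text: str) -> List[NluHypothesis]:
    # A deterministic pattern stands in for traversing an FST path.
    if text.startswith("play "):
        return [NluHypothesis("PlayMusic", {"song": text[5:]}, 0.9, "fst")]
    return []

def statistical_hypotheses(text: str) -> List[NluHypothesis]:
    # Placeholder for intent classification / named entity recognition output.
    return [NluHypothesis("PlayMusic", {"song": text.replace("play ", "")}, 0.7, "statistical"),
            NluHypothesis("GetWeather", {}, 0.2, "statistical")]

def rerank(hypotheses: List[NluHypothesis]) -> NluHypothesis:
    source_weight = {"fst": 1.1, "statistical": 1.0}     # illustrative weights
    return max(hypotheses, key=lambda h: h.score * source_weight[h.source])

text = "play yellow submarine"
print(rerank(fst_hypotheses(text) + statistical_hypotheses(text)))
```
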
  • Patent number: 11302329
    Abstract: A system may include an acoustic event detection component for detecting acoustic events, which may be non-speech sounds. Upon detection of a command to detect a new sound, a device may prompt a user to cause occurrence of the sound one or more times. The acoustic event detection component may then be reconfigured, using audio data corresponding to the occurrences, to detect future occurrences of the event.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: April 12, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Ming Sun, Spyridon Matsoukas, Venkata Naga Krishna Chaitanya Puvvada, Chao Wang, Chieh-Chi Kao
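
A hedged sketch of enrolling a new custom sound as described above: a few user-provided occurrences are averaged into an embedding template, and later audio is flagged when its embedding falls close to that template. The embedding function and distance threshold are placeholders, not the patented model.

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a learned audio embedding network."""
    return np.array([audio.mean(), audio.std(), np.abs(audio).max()])

def enroll_event(example_clips: list) -> np.ndarray:
    """Build a template from the occurrences the user was prompted to produce."""
    return np.mean([embed(clip) for clip in example_clips], axis=0)

def detect_event(audio: np.ndarray, template: np.ndarray,
                 max_distance: float = 0.5) -> bool:
    return bool(np.linalg.norm(embed(audio) - template) < max_distance)

rng = np.random.default_rng(5)
doorbell_clips = [np.sin(np.linspace(0, 60, 1600)) + 0.05 * rng.normal(size=1600)
                  for _ in range(3)]                      # user-provided occurrences
template = enroll_event(doorbell_clips)
print(detect_event(doorbell_clips[0], template),          # the enrolled sound
      detect_event(rng.normal(size=1600), template))      # unrelated noise
```
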
  • Patent number: 11276403
    Abstract: Techniques for limiting natural language processing performed on input data are described. A system receives input data from a device. The input data corresponds to a command to be executed by the system. The system determines applications likely configured to execute the command. The system performs named entity recognition and intent classification with respect to only the applications likely configured to execute the command.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: March 15, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Ruhi Sarikaya, Rohit Prasad, Kerry Hammil, Spyridon Matsoukas, Nikko Strom, Frédéric Johan Georges Deramat, Stephen Frederick Potter, Young-Bum Kim
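
A minimal sketch of the shortlisting idea above: pick the applications likely to handle the input, then run intent classification and named entity recognition only for that subset. The keyword shortlister and per-application NLU below are trivial placeholders for the trained components.

```python
from typing import Dict, List

def shortlist_applications(text: str) -> List[str]:
    """Trivial keyword shortlister standing in for a trained application selector."""
    keywords = {"music": ["play", "song"], "weather": ["weather", "rain"],
                "shopping": ["buy", "order"]}
    lowered = text.lower()
    return [app for app, words in keywords.items() if any(w in lowered for w in words)]

def run_nlu_for_app(app: str, text: str) -> Dict[str, str]:
    """Placeholder for per-application intent classification and named entity recognition."""
    return {"application": app, "intent": f"{app}_intent", "text": text}

def process(text: str) -> List[Dict[str, str]]:
    candidates = shortlist_applications(text) or ["fallback"]
    # NER and intent classification run only for the shortlisted applications.
    return [run_nlu_for_app(app, text) for app in candidates]

print(process("Order more coffee"))
```
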
  • Patent number: 11270685
    Abstract: Systems, methods, and devices for verifying a user are disclosed. A speech-controlled device captures a spoken command, and sends audio data corresponding thereto to a server. The server performs ASR on the audio data to determine ASR confidence data. The server, in parallel, performs user verification on the audio data to determine user verification confidence data. The server may modify the user verification confidence data using the ASR confidence data. In addition or alternatively, the server may modify the user verification confidence data using at least one of a location of the speech-controlled device within a building, a type of the speech-controlled device, or a geographic location of the speech-controlled device.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: March 8, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Spyridon Matsoukas, Aparna Khare, Vishwanathan Krishnamoorthy, Shamitha Somashekar, Arindam Mandal
  • Patent number: 11227585
    Abstract: Methods and systems for determining an intent of an utterance using contextual information associated with a requesting device are described herein. Voice activated electronic devices may, in some embodiments, be capable of displaying content using a display screen. Entity data representing the content rendered by the display screen may describe entities having similar attributes as an identified intent from natural language understanding processing. Natural language understanding processing may attempt to resolve one or more declared slots for a particular intent and may generate an initial list of intent hypotheses ranked to indicate which are most likely to correspond to the utterance. The entity data may be compared with the declared slots for the intent hypotheses, and the list of intent hypotheses may be re-ranked to account for matching slots from the contextual metadata. The top-ranked intent hypothesis after re-ranking may then be selected as the utterance's intent.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: January 18, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Alexandra R. Shapiro, Melanie Chie Bomke Gens, Spyridon Matsoukas, Kellen Gillespie, Rahul Goel
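
An illustrative sketch of the re-ranking step described above: NLU produces ranked intent hypotheses with declared slots, and hypotheses whose slots match entities currently rendered on the device's screen are boosted. Scores, slot names, and the boost value are invented for the example.

```python
from typing import List

def rerank_with_screen_entities(hypotheses: List[dict],
                                screen_entities: List[dict]) -> List[dict]:
    """Boost hypotheses whose declared slots are resolved by on-screen entities."""
    reranked = []
    for hyp in hypotheses:
        boost = 0.0
        for slot_name, slot_value in hyp["slots"].items():
            for entity in screen_entities:
                if entity["type"] == slot_name and entity["value"] == slot_value:
                    boost += 0.3              # illustrative boost per matching slot
        reranked.append({**hyp, "score": hyp["score"] + boost})
    return sorted(reranked, key=lambda h: h["score"], reverse=True)

hypotheses = [
    {"intent": "PlayVideo", "slots": {"title": "cooking show"}, "score": 0.55},
    {"intent": "AddToCart", "slots": {"item": "cooking show"}, "score": 0.60},
]
screen = [{"type": "title", "value": "cooking show"}]     # content on the display
print(rerank_with_screen_entities(hypotheses, screen)[0]["intent"])   # PlayVideo
```
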
  • Patent number: 11200884
    Abstract: Techniques for labeling user inputs for updating user recognition voice profiles are described. A system may leverage various signals, generated during or after processing of a user input, to retroactively determine which user spoke the user input. For example, after the system receives the user input, the user may provide the system with non-spoken user verification information. Based on such user verification information, the system may label the previously spoken user input as originating from the particular user. The system may also or alternatively use system usage history to retroactively label user inputs.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: December 14, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Sundararajan Srinivasan, Arindam Mandal, Krishna Subramanian, Spyridon Matsoukas, Aparna Khare, Rohit Prasad
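
A hedged sketch of retroactive labeling as described above: spoken inputs are stored unlabeled, and when a later non-spoken verification signal (a hypothetical PIN entry here) identifies the user, the pending inputs are labeled with that user for voice-profile updates.

```python
from collections import deque

class UtteranceLabeler:
    def __init__(self, max_pending: int = 10):
        self.pending = deque(maxlen=max_pending)   # utterances awaiting a speaker label
        self.labeled = []                          # (user_id, utterance_id) pairs

    def record_utterance(self, utterance_id: str) -> None:
        self.pending.append(utterance_id)

    def on_user_verified(self, user_id: str) -> None:
        """A non-spoken verification arrived; retroactively label the pending utterances."""
        while self.pending:
            self.labeled.append((user_id, self.pending.popleft()))

labeler = UtteranceLabeler()
labeler.record_utterance("utt_001")
labeler.record_utterance("utt_002")
labeler.on_user_verified("user_alex")              # e.g., the user typed a PIN
print(labeler.labeled)
```
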
  • Publication number: 20210358497
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function, to smooth it, and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 18, 2021
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
  • Publication number: 20210304774
    Abstract: Techniques for updating voice profiles used to perform user recognition are described. A system may use clustering techniques to update voice profiles. When the system receives audio data representing a spoken user input, the system may store the audio data. Periodically, the system may recall, from storage, audio data (representing previous user inputs). The system may identify clusters of the audio data, with each cluster including similar or identical speech characteristics. The system may determine a cluster is substantially similar to an existing voice profile. If this occurs, the system may create an updated voice profile using the original voice profile and the cluster of audio data.
    Type: Application
    Filed: April 13, 2021
    Publication date: September 30, 2021
    Inventors: Sundararajan Srinivasan, Arindam Mandal, Krishna Subramanian, Spyridon Matsoukas, Aparna Khare, Rohit Prasad
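
A simplified sketch of the periodic update described above: stored utterance embeddings are clustered, and a cluster whose centroid lies close to an existing voice profile is blended into an updated profile. The tiny k-means and the distance threshold are illustrative choices, not the patent's method.

```python
import numpy as np

def cluster_embeddings(embeddings: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Tiny k-means over stored utterance embeddings; returns k cluster centroids."""
    rng = np.random.default_rng(0)
    centroids = embeddings[rng.choice(len(embeddings), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([embeddings[labels == i].mean(axis=0)
                              if np.any(labels == i) else centroids[i]
                              for i in range(k)])
    return centroids

def update_profile(profile: np.ndarray, centroids: np.ndarray,
                   max_distance: float = 1.0) -> np.ndarray:
    """Blend a sufficiently similar cluster into the existing profile, else keep it."""
    for c in centroids:
        if np.linalg.norm(c - profile) < max_distance:
            return 0.5 * profile + 0.5 * c
    return profile

rng = np.random.default_rng(6)
profile = rng.normal(size=8)                                   # existing voice profile
stored = np.vstack([profile + 0.1 * rng.normal(size=(20, 8)),  # same speaker
                    rng.normal(size=(20, 8)) + 5.0])           # a different speaker
updated = update_profile(profile, cluster_embeddings(stored, k=2))
print(np.allclose(updated, profile), float(np.linalg.norm(updated - profile)))
```
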
  • Patent number: 11132990
    Abstract: A system processes audio data to detect when it includes a representation of a wakeword or of an acoustic event. The system may receive or determine acoustic features for the audio data, such as log-filterbank energy (LFBE). The acoustic features may be used by a first, wakeword-detection model to detect the wakeword; the output of this model may be further processed using a softmax function, to smooth it, and to detect spikes. The same acoustic features may also be used by a second, acoustic-event-detection model to detect the acoustic event; the output of this model may be further processed using a sigmoid function and a classifier. Another model may be used to extract additional features from the LFBE data; these additional features may be used by the other models.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: September 28, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Ming Sun, Thibaud Senechal, Yixin Gao, Anish N. Shah, Spyridon Matsoukas, Chao Wang, Shiv Naga Prasad Vitaladevuni
  • Patent number: 11081104
    Abstract: A natural language understanding system that can determine an overall score for a natural language hypothesis using hypothesis-specific component scores from different aspects of NLU processing as well as context data describing the context surrounding the utterance corresponding to the natural language hypotheses. The individual component scores may be input into a feature vector at a location corresponding to the type of the device that captured the utterance. Other locations in the feature vector corresponding to other device types may be populated with zero values. The feature vector may also be populated with other values representing other context data. The feature vector may then be multiplied by a weight vector comprising trained weights corresponding to the feature vector positions to determine a new overall score for each hypothesis, where the overall score incorporates the impact of the context data. Natural language hypotheses can be ranked using their respective new overall scores.
    Type: Grant
    Filed: December 12, 2017
    Date of Patent: August 3, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Chengwei Su, Sankaranarayanan Ananthakrishnan, Spyridon Matsoukas, Shirin Saleem, Rahul Gupta, Kavya Ravikumar, John Will Crimmins, Kelly James Vanee, John Pelak, Melanie Chie Bomke Gens
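
A minimal sketch of the scoring scheme described above: per-hypothesis component scores are written into the feature-vector block reserved for the device type that captured the utterance, other device-type blocks stay zero, context features are appended, and a dot product with a weight vector (random here, trained in the patent) yields the overall score used for ranking. Device types, feature counts, and scores are invented for the example.

```python
import numpy as np

DEVICE_TYPES = ["headless_speaker", "screen_device", "automotive"]   # hypothetical
COMPONENT_SCORES = 3          # e.g., intent, entity, and domain scores
CONTEXT_FEATURES = 2          # e.g., dialog state, time of day

def build_feature_vector(component_scores, device_type, context):
    """Place component scores in the block for the capturing device type; zeros elsewhere."""
    vec = np.zeros(len(DEVICE_TYPES) * COMPONENT_SCORES + CONTEXT_FEATURES)
    offset = DEVICE_TYPES.index(device_type) * COMPONENT_SCORES
    vec[offset:offset + COMPONENT_SCORES] = component_scores
    vec[-CONTEXT_FEATURES:] = context
    return vec

# A random weight vector stands in for the trained weights in the patent.
rng = np.random.default_rng(7)
weights = rng.normal(size=len(DEVICE_TYPES) * COMPONENT_SCORES + CONTEXT_FEATURES)

hypotheses = [
    {"text": "play relaxing music", "scores": [0.8, 0.7, 0.9]},
    {"text": "play the movie relaxing", "scores": [0.6, 0.8, 0.5]},
]
for hyp in hypotheses:
    fv = build_feature_vector(hyp["scores"], "screen_device", [1.0, 0.25])
    hyp["overall"] = float(weights @ fv)     # context-aware overall score

print(max(hypotheses, key=lambda h: h["overall"])["text"])
```
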