Patents by Inventor Matthew Garland

Matthew Garland has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10692502
    Abstract: An automated speaker verification (ASV) system incorporates a first deep neural network to extract deep acoustic features, such as deep CQCC features, from a received voice sample. The deep acoustic features are processed by a second deep neural network that classifies the deep acoustic features according to a determined likelihood of including a spoofing condition. A binary classifier then classifies the voice sample as being genuine or spoofed.
    Type: Grant
    Filed: March 2, 2018
    Date of Patent: June 23, 2020
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Parav Nagarsheth, Kailash Patil, Matthew Garland
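The two-stage pipeline in the abstract above can be sketched in Python. This is an illustrative sketch only, not the patented implementation: the random projections stand in for the two trained deep networks, and the feature dimension, threshold, and function names are assumptions.

```python
import numpy as np

def extract_deep_features(voice_sample):
    # Stand-in for the first DNN: maps a raw voice sample to a small
    # vector of deep acoustic features (deep CQCC-like embeddings).
    w = np.random.default_rng(1).standard_normal((voice_sample.shape[0], 8))
    return np.tanh(voice_sample @ w)

def spoof_likelihood(features):
    # Stand-in for the second DNN: scores the likelihood that the
    # features reflect a spoofing condition (replay, synthesis, etc.).
    w = np.random.default_rng(2).standard_normal(features.shape[0])
    return 1.0 / (1.0 + np.exp(-(features @ w)))

def classify_sample(voice_sample, threshold=0.5):
    # Final binary decision: genuine vs. spoofed.
    score = spoof_likelihood(extract_deep_features(voice_sample))
    return "spoofed" if score >= threshold else "genuine"
```

In a real system both networks would be trained on labeled genuine and spoofed samples; here they are fixed projections so the control flow stays visible.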
  • Patent number: 10679630
    Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speakers identified, by using diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation, while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: June 9, 2020
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
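The deduction step described in the abstract above can be illustrated with a toy rule in place of the trained probabilistic model. The phrase patterns, the turn representation, and the fallback labels are assumptions made for the sketch.

```python
import re

def assign_names(turns):
    """turns: list of (cluster_id, text) pairs produced by diarization.
    Special-form phrases such as 'this is <Name>' name the cluster that
    spoke them; every other turn inherits its cluster's name."""
    names = {}
    for cluster, text in turns:
        m = re.search(r"(?:this is|my name is) (\w+)", text, re.I)
        if m:
            names[cluster] = m.group(1)
    # Clusters never named by a phrase keep a generic placeholder label.
    return [(names.get(cluster, f"speaker-{cluster}"), text)
            for cluster, text in turns]
```

A real system would weigh competing evidence probabilistically rather than taking the last matching phrase.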
  • Patent number: 10672403
    Abstract: A score indicating a likelihood that a first subject is the same as a second subject may be calibrated to compensate for aging of the first subject between samples of age-sensitive biometric characteristics. The age of the first subject obtained at a first sample time and the age of the second subject obtained at a second sample time may be averaged, and an age approximation may be generated based on at least the age average and the interval between the first and second sample times. The age approximation, the interval between the sample times, and the obtained gender of the subject are used to calibrate the likelihood score.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: June 2, 2020
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
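The calibration inputs named in the abstract above (age average, age approximation, interval, gender) can be wired together as follows. Every coefficient in this sketch is an illustrative placeholder, not a value from the patent.

```python
def calibrate_score(raw_score, age_first, age_second, interval_years, gender):
    # Average the two reported ages, then approximate the subject's age
    # using the interval between the two sample times.
    age_avg = (age_first + age_second) / 2.0
    age_approx = age_avg + interval_years / 2.0
    # Compensation grows with the interval, differs by gender, and
    # (hypothetically) increases for older subjects.
    drift = 0.01 * interval_years * (1.2 if gender == "male" else 1.0)
    drift *= 1.0 + max(0.0, age_approx - 60.0) / 100.0
    return min(1.0, raw_score + drift)
```

The point of the sketch is the data flow: the same raw score is boosted more when a longer interval (and thus more aging) separates the two biometric samples.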
  • Patent number: 10553218
    Abstract: In a speaker recognition apparatus, audio features are extracted from a received recognition speech signal, and first-order Gaussian mixture model (GMM) statistics are generated from them based on a universal background model that includes a plurality of speaker models. The first-order GMM statistics are normalized with respect to the duration of the received speech signal. A deep neural network reduces the dimensionality of the normalized first-order GMM statistics and outputs a voiceprint corresponding to the recognition speech signal.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: February 4, 2020
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
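The statistics-then-reduce flow in the abstract above can be sketched with a toy UBM and a fixed projection standing in for the deep neural network. The mixture count, feature dimension, and projection are assumptions of the sketch.

```python
import numpy as np

def first_order_stats(frames, means, weights):
    # First-order Baum-Welch statistics against a toy UBM: a softmax
    # over negative squared distances stands in for true GMM posteriors.
    d = -((frames[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    d -= d.max(axis=1, keepdims=True)          # numerical stability
    resp = np.exp(d) * weights
    resp /= resp.sum(axis=1, keepdims=True)
    return resp.T @ frames                      # shape (mixtures, feat_dim)

def voiceprint(frames, means, weights, proj):
    stats = first_order_stats(frames, means, weights)
    stats /= len(frames)        # normalize for utterance duration
    return stats.ravel() @ proj  # stand-in for the DNN's dimensionality reduction
```

The duration normalization is what makes the output comparable across utterances of different lengths: repeating the same frames leaves the voiceprint unchanged.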
  • Publication number: 20190392842
    Abstract: The present invention is directed to a deep neural network (DNN) having a triplet network architecture suitable for performing speaker recognition. In particular, the DNN includes three feed-forward neural networks that are trained according to a batch process utilizing a cohort set of negative training samples. After each batch of training samples is processed, the DNN may be trained according to a loss function, e.g., one utilizing a cosine measure of similarity between respective samples along with positive and negative margins, to provide a robust representation of voiceprints.
    Type: Application
    Filed: August 8, 2019
    Publication date: December 26, 2019
    Inventors: Elie KHOURY, Matthew GARLAND
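The loss described in the abstract above, cosine similarity with positive and negative margins over a cohort of negatives, can be written directly. The margin values here are illustrative, not the patent's.

```python
import numpy as np

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def triplet_loss(anchor, positive, negatives, pos_margin=0.9, neg_margin=0.3):
    # Hinge-style terms: the positive pair should score above pos_margin;
    # every cohort negative should score below neg_margin.
    loss = max(0.0, pos_margin - cosine(anchor, positive))
    loss += sum(max(0.0, cosine(anchor, n) - neg_margin) for n in negatives)
    return loss
```

During training, the three feed-forward networks produce the anchor, positive, and negative embeddings; the loss is zero only when the anchor is close to its positive and far from every negative in the cohort.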
  • Publication number: 20190333521
    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
    Type: Application
    Filed: July 8, 2019
    Publication date: October 31, 2019
    Inventors: Elie KHOURY, Matthew GARLAND
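The degrade-extract-compare training loop in the abstract above can be sketched with a single convolutional layer in place of the CNN and a numerical gradient in place of backpropagation. The impulse response, the "handcrafted" target features, and the learning rate are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

def degrade(signal):
    # Channel-noise simulator: a short random impulse response plus
    # additive noise stands in for real telephony channel effects.
    ir = 0.2 * rng.standard_normal(5)
    ir[0] = 1.0
    noisy = np.convolve(signal, ir)[: len(signal)]
    return noisy + 0.01 * rng.standard_normal(len(signal))

def cnn_features(signal, kernel):
    # A single convolution + ReLU stands in for the feed-forward CNN.
    return np.maximum(np.convolve(signal, kernel, mode="same"), 0.0)

def loss(kernel, degraded, target):
    # Difference between channel-compensated features of the degraded
    # signal and handcrafted features of the clean signal.
    return np.mean((cnn_features(degraded, kernel) - target) ** 2)

def train_step(kernel, degraded, target, lr=0.05, eps=1e-4):
    # Central-difference gradient keeps the sketch dependency-free.
    grad = np.zeros_like(kernel)
    for i in range(len(kernel)):
        up, down = kernel.copy(), kernel.copy()
        up[i] += eps
        down[i] -= eps
        grad[i] = (loss(up, degraded, target)
                   - loss(down, degraded, target)) / (2 * eps)
    return kernel - lr * grad
```

Each step nudges the front-end toward producing, from the degraded signal, the features the handcrafted extractor would have produced from the clean one, which is the channel-compensation objective the abstract describes.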
  • Publication number: 20190304468
    Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speakers identified, by using diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation, while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
    Type: Application
    Filed: June 14, 2019
    Publication date: October 3, 2019
    Inventors: Elie KHOURY, Matthew GARLAND
  • Patent number: 10381009
    Abstract: The present invention is directed to a deep neural network (DNN) having a triplet network architecture suitable for performing speaker recognition. In particular, the DNN includes three feed-forward neural networks that are trained according to a batch process utilizing a cohort set of negative training samples. After each batch of training samples is processed, the DNN may be trained according to a loss function, e.g., one utilizing a cosine measure of similarity between respective samples along with positive and negative margins, to provide a robust representation of voiceprints.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: August 13, 2019
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
  • Patent number: 10347256
    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: July 9, 2019
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
  • Patent number: 10325601
    Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speakers identified, by using diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation, while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: June 18, 2019
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
  • Publication number: 20190096424
    Abstract: Methods, systems, and apparatuses for audio event detection in which the type of sound data is determined at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
    Type: Application
    Filed: November 26, 2018
    Publication date: March 28, 2019
    Inventors: Elie KHOURY, Matthew GARLAND
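The cluster-level (rather than frame-level) decision described in the abstract above can be sketched with a plain K-means pass followed by one classification per cluster. The deterministic initialization and the toy classifier are assumptions; the patent also covers GMM, i-vector, and agglomerative variants not shown here.

```python
import numpy as np

def kmeans(frames, k, iters=10):
    # Deterministic spread initialization keeps the sketch reproducible.
    centers = frames[np.linspace(0, len(frames) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = ((frames[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = frames[labels == j].mean(axis=0)
    return labels, centers

def classify_clusters(frames, labels, classifier):
    # One decision per cluster, made from pooled (mean) features, then
    # broadcast back to every frame in the cluster.
    cluster_class = {j: classifier(frames[labels == j].mean(axis=0))
                     for j in np.unique(labels)}
    return np.array([cluster_class[j] for j in labels])
```

Because the classifier sees pooled statistics, a few atypical frames cannot flip the label of their cluster, which is the robustness to local feature behavior the abstract claims.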
  • Patent number: 10141009
    Abstract: Methods, systems, and apparatuses for audio event detection in which the type of sound data is determined at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: November 27, 2018
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
  • Publication number: 20180254046
    Abstract: An automated speaker verification (ASV) system incorporates a first deep neural network to extract deep acoustic features, such as deep CQCC features, from a received voice sample. The deep acoustic features are processed by a second deep neural network that classifies the deep acoustic features according to a determined likelihood of including a spoofing condition. A binary classifier then classifies the voice sample as being genuine or spoofed.
    Type: Application
    Filed: March 2, 2018
    Publication date: September 6, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Parav NAGARSHETH, Kailash PATIL, Matthew GARLAND
  • Publication number: 20180226079
    Abstract: A score indicating a likelihood that a first subject is the same as a second subject may be calibrated to compensate for aging of the first subject between samples of age-sensitive biometric characteristics. The age of the first subject obtained at a first sample time and the age of the second subject obtained at a second sample time may be averaged, and an age approximation may be generated based on at least the age average and the interval between the first and second sample times. The age approximation, the interval between the sample times, and the obtained gender of the subject are used to calibrate the likelihood score.
    Type: Application
    Filed: February 7, 2018
    Publication date: August 9, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Publication number: 20180082691
    Abstract: In a speaker recognition apparatus, audio features are extracted from a received recognition speech signal, and first-order Gaussian mixture model (GMM) statistics are generated from them based on a universal background model that includes a plurality of speaker models. The first-order GMM statistics are normalized with respect to the duration of the received speech signal. A deep neural network reduces the dimensionality of the normalized first-order GMM statistics and outputs a voiceprint corresponding to the recognition speech signal.
    Type: Application
    Filed: September 19, 2017
    Publication date: March 22, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Publication number: 20180082692
    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
    Type: Application
    Filed: September 19, 2017
    Publication date: March 22, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Publication number: 20180082689
    Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speakers identified, by using diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation, while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
    Type: Application
    Filed: September 19, 2017
    Publication date: March 22, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Publication number: 20180075849
    Abstract: The present invention is directed to a deep neural network (DNN) having a triplet network architecture suitable for performing speaker recognition. In particular, the DNN includes three feed-forward neural networks that are trained according to a batch process utilizing a cohort set of negative training samples. After each batch of training samples is processed, the DNN may be trained according to a loss function, e.g., one utilizing a cosine measure of similarity between respective samples along with positive and negative margins, to provide a robust representation of voiceprints.
    Type: Application
    Filed: November 20, 2017
    Publication date: March 15, 2018
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Publication number: 20170372725
    Abstract: Methods, systems, and apparatuses for audio event detection in which the type of sound data is determined at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 28, 2017
    Applicant: PINDROP SECURITY, INC.
    Inventors: Elie KHOURY, Matthew GARLAND
  • Patent number: 9824692
    Abstract: The present invention is directed to a deep neural network (DNN) having a triplet network architecture suitable for performing speaker recognition. In particular, the DNN includes three feed-forward neural networks that are trained according to a batch process utilizing a cohort set of negative training samples. After each batch of training samples is processed, the DNN may be trained according to a loss function, e.g., one utilizing a cosine measure of similarity between respective samples along with positive and negative margins, to provide a robust representation of voiceprints.
    Type: Grant
    Filed: September 12, 2016
    Date of Patent: November 21, 2017
    Assignee: PINDROP SECURITY, INC.
    Inventors: Elie Khoury, Matthew Garland