Patents by Inventor Elie Khoury

Elie Khoury has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12380892
    Abstract: Disclosed are systems and methods including computing processes executing machine-learning architectures that extract vectors representing disparate types of data and output predicted identities of users accessing computing services, without express identity assertions, across multiple computing services, analyzing data from multiple modalities, for various user devices, and agnostic to the architectures hosting the disparate computing services. The system invokes the identification operations of the machine-learning architecture, which extracts biometric embeddings from biometric data and context embeddings representing all or most of the types of metadata features analyzed by the system. The context embeddings help identify a subset of potentially matching identities of possible users, which limits the number of biometric prints the system compares against an inbound biometric embedding for authentication.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: August 5, 2025
    Assignee: Pindrop Security, Inc.
    Inventors: Payas Gupta, Elie Khoury, Terry Nelms, II, Vijay Balasubramaniyan
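A minimal sketch of the two-stage matching described in the abstract above: a context embedding first shortlists candidate identities, and only the shortlisted biometric prints are compared against the inbound biometric embedding. This is illustrative only, not the patented implementation; the embedding dimensions, cosine-similarity scoring, top-k shortlist, and threshold are assumptions.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(inbound_context, inbound_biometric, enrolled, context_top_k=3, threshold=0.7):
    """Shortlist identities by context similarity, then verify with biometric prints.

    `enrolled` maps identity -> {"context": vector, "biometric": vector}.
    """
    # Stage 1: rank enrolled identities by context-embedding similarity.
    ranked = sorted(enrolled,
                    key=lambda ident: cosine(inbound_context, enrolled[ident]["context"]),
                    reverse=True)
    shortlist = ranked[:context_top_k]

    # Stage 2: compare the inbound biometric embedding only against the shortlist.
    scores = {ident: cosine(inbound_biometric, enrolled[ident]["biometric"])
              for ident in shortlist}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = {f"user{i}": {"context": rng.normal(size=16),
                             "biometric": rng.normal(size=32)} for i in range(10)}
    # Simulate an inbound attempt by user3 with small perturbations.
    probe_ctx = enrolled["user3"]["context"] + 0.05 * rng.normal(size=16)
    probe_bio = enrolled["user3"]["biometric"] + 0.05 * rng.normal(size=32)
    print(identify(probe_ctx, probe_bio, enrolled))
```

As the abstract notes, the shortlist keeps the number of biometric-print comparisons small even as the enrolled population grows.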
  • Patent number: 12266368
    Abstract: Embodiments described herein provide for systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate variations in audio signals received across any number of communications channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to generate estimated wideband audio signals corresponding to those narrowband signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or an embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or enrollment signal.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: April 1, 2025
    Assignee: Pindrop Security, Inc.
    Inventors: Ganesh Sivaraman, Elie Khoury, Avrosh Kumar
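A toy sketch of the cross-channel pipeline in the abstract above: a bandwidth expansion network maps narrowband (8 kHz) telephony audio to an estimated wideband (16 kHz) signal before speaker embeddings are extracted and compared. The layer shapes, sample rates, and cosine scoring below are assumptions and training is omitted; this is not the patented architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BandwidthExpander(nn.Module):
    """Maps an 8 kHz narrowband waveform to an estimated 16 kHz wideband waveform."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="linear", align_corners=False),
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):            # x: (batch, 1, samples)
        return self.net(x)

class SpeakerEmbedder(nn.Module):
    """Produces a fixed-size speaker embedding from a waveform."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv1d(1, 32, kernel_size=16, stride=8)
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        h = F.relu(self.conv(x))
        return self.fc(self.pool(h).squeeze(-1))

if __name__ == "__main__":
    expander, embedder = BandwidthExpander(), SpeakerEmbedder()
    narrowband_call = torch.randn(1, 1, 8000)      # 1 s of 8 kHz telephony audio
    enrolled_wideband = torch.randn(1, 1, 16000)   # 1 s of 16 kHz enrollment audio
    estimated_wideband = expander(narrowband_call)
    score = F.cosine_similarity(embedder(estimated_wideband),
                                embedder(enrolled_wideband))
    print("verification score:", score.item())
```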
  • Patent number: 12190905
    Abstract: Embodiments described herein provide for a machine-learning architecture for modeling quality measures for enrollment signals. Modeling these enrollment signals enables the machine-learning architecture to identify deviations from an expected or ideal enrollment signal in future test-phase calls. These deviations can be used to generate quality measures for the various audio descriptors or characteristics of audio signals. The quality measures can then be fused at the score level with the speaker recognition model's embedding comparisons for verifying the speaker. Fusing the quality measures with the similarity scoring essentially calibrates the speaker recognition outputs based on what is actually expected for the enrolled caller and what was actually observed for the current inbound caller.
    Type: Grant
    Filed: August 20, 2021
    Date of Patent: January 7, 2025
    Assignee: Pindrop Security, Inc.
    Inventors: Hrishikesh Rao, Kedar Phatak, Elie Khoury
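A hedged sketch of the score-level fusion described above: quality measures for audio descriptors are combined with the voiceprint similarity score before thresholding. The descriptors (SNR, duration, codec), the linear weighting, and the cosine similarity are illustrative assumptions; a real system could learn the fusion weights.

```python
import numpy as np

def fused_verification_score(enrolled_voiceprint, inbound_embedding,
                             quality_measures, weights=(1.0, 0.5)):
    """Fuse a speaker-similarity score with quality measures at the score level.

    `quality_measures` is a dict of per-descriptor scores (e.g. how consistent
    the inbound audio's SNR or duration is with what was seen at enrollment),
    each scaled so that higher means more consistent with the enrolled caller.
    """
    sim = float(np.dot(enrolled_voiceprint, inbound_embedding) /
                (np.linalg.norm(enrolled_voiceprint) * np.linalg.norm(inbound_embedding)))
    quality = float(np.mean(list(quality_measures.values())))
    w_sim, w_quality = weights
    return w_sim * sim + w_quality * quality   # calibrated, fused score

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    voiceprint = rng.normal(size=128)
    inbound = voiceprint + 0.1 * rng.normal(size=128)
    print(fused_verification_score(voiceprint, inbound,
                                   {"snr": 0.8, "duration": 0.9, "codec": 0.7}))
```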
  • Patent number: 12142083
    Abstract: Embodiments described herein execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: November 12, 2024
    Assignee: Pindrop Security, Inc.
    Inventors: Tianxiang Chen, Elie Khoury
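An illustrative combination of the per-component scores named in the abstract above (speaker/facial deepfake detection, speaker/facial recognition, and lip-sync estimation). The weighted-sum fusion, weights, and thresholds are assumptions, not the patented scoring logic.

```python
def audiovisual_decision(scores, weights, fake_threshold=0.5, match_threshold=0.5):
    """Combine per-modality scores into deepfake and identity-match decisions.

    `scores` holds sub-architecture outputs in [0, 1]: speaker/face deepfake
    likelihoods, speaker/face match likelihoods, and a lip-sync consistency
    score (low consistency suggests manipulated audiovisual content).
    """
    deepfake_likelihood = (
        weights["audio_fake"] * scores["speaker_deepfake"] +
        weights["video_fake"] * scores["face_deepfake"] +
        weights["lip_sync"] * (1.0 - scores["lip_sync_consistency"])
    )
    identity_likelihood = (
        weights["speaker_match"] * scores["speaker_match"] +
        weights["face_match"] * scores["face_match"]
    )
    return {
        "is_deepfake": deepfake_likelihood >= fake_threshold,
        "identity_verified": identity_likelihood >= match_threshold,
        "deepfake_likelihood": deepfake_likelihood,
        "identity_likelihood": identity_likelihood,
    }

if __name__ == "__main__":
    print(audiovisual_decision(
        scores={"speaker_deepfake": 0.1, "face_deepfake": 0.2,
                "lip_sync_consistency": 0.9, "speaker_match": 0.85, "face_match": 0.8},
        weights={"audio_fake": 0.4, "video_fake": 0.4, "lip_sync": 0.2,
                 "speaker_match": 0.5, "face_match": 0.5}))
```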
  • Publication number: 20240363103
    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
    Type: Application
    Filed: November 9, 2023
    Publication date: October 31, 2024
    Applicant: Pindrop Security, Inc.
    Inventors: Umair Altaf, Sai Pradeep Peri, Lakshay Phatela, Payas Gupta, Yitao Sun, Svetlana Afanaseva, Kailash Patil, Elie Khoury, Bradley Magnetta, Vijay Balasubramaniyan, Tianxiang Chen
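The phrase repetition detection mentioned in this and the following related publications can be sketched as simple n-gram counting over the call transcript. This is a deliberately minimal illustration; the n-gram size and count threshold are assumptions, and the described system combines this signal with background change detection and passive liveness detection.

```python
from collections import Counter

def repeated_phrases(transcript, n=3, min_count=2):
    """Flag n-gram phrases that recur verbatim in a call transcript.

    Repeated scripted phrases can be one weak signal of synthetic or replayed
    speech; a production system would combine this with other liveness signals.
    """
    words = transcript.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

if __name__ == "__main__":
    call = ("hello i am calling about my account please reset my password "
            "hello i am calling about my account please reset my password")
    print(repeated_phrases(call))
```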
  • Publication number: 20240363099
    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
    Type: Application
    Filed: November 9, 2023
    Publication date: October 31, 2024
    Applicant: Pindrop Security, Inc.
    Inventors: Umair Altaf, Sai Pradeep Peri, Lakshay Phatela, Payas Gupta, Yitao Sun, Svetlana Afanaseva, Kailash Patil, Elie Khoury, Bradley Magnetta, Vijay Balasubramaniyan, Tianxiang Chen
  • Publication number: 20240355322
    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
    Type: Application
    Filed: November 9, 2023
    Publication date: October 24, 2024
    Applicant: Pindrop Security, Inc.
    Inventors: Umair Altaf, Sai Pradeep Peri, Lakshay Phatela, Payas Gupta, Yitao Sun, Svetlana Afanaseva, Kailash Patil, Elie Khoury, Bradley Magnetta, Vijay Balasubramaniyan, Tianxiang Chen
  • Publication number: 20240355337
    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
    Type: Application
    Filed: November 9, 2023
    Publication date: October 24, 2024
    Applicant: Pindrop Security, Inc.
    Inventors: Umair Altaf, Sai Pradeep Peri, Lakshay Phatela, Payas Gupta, Yitao Sun, Svetlana Afanaseva, Kailash Patil, Elie Khoury, Bradley Magnetta, Vijay Balasubramaniyan, Tianxiang Chen
  • Publication number: 20240355319
    Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech (“deepfakes”) in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice “liveness” detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
    Type: Application
    Filed: November 9, 2023
    Publication date: October 24, 2024
    Applicant: Pindrop Security, Inc.
    Inventors: Umair Altaf, Sai Pradeep Peri, Lakshay Phatela, Payas Gupta, Yitao Sun, Svetlana Afanaseva, Kailash Patil, Elie Khoury, Bradley Magnetta, Vijay Balasubramaniyan, Tianxiang Chen
  • Patent number: 12015637
    Abstract: Embodiments described herein provide for automatically detecting whether an audio signal is a spoofed audio signal or a genuine audio signal. A spoof detection system can include an audio signal transforming front end and a classification back end. Both the front end and the back end can include neural networks that can be trained using the same set of labeled audio signals. The audio signal transforming front end can include one or more neural networks for per-channel energy normalization transformation of the audio signal, and the back end can include a convolutional neural network for classification into spoofed or genuine audio signal. In some embodiments, the audio signal transforming front end can include one or more neural networks for bandpass filtering of the audio signals, and the back end can include a residual neural network for audio signal classification into spoofed or genuine audio signal.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: June 18, 2024
    Assignee: Pindrop Security, Inc.
    Inventors: Khaled Lakhdhar, Parav Nagarsheth, Tianxiang Chen, Elie Khoury
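For reference, the per-channel energy normalization (PCEN) transform mentioned above can be written in a few lines. The sketch below uses the standard fixed-parameter PCEN formulation with assumed parameter values; the patent describes neural-network-based (trainable) variants of this front end, which are not reproduced here.

```python
import numpy as np

def pcen(energy, s=0.025, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    """Per-channel energy normalization of a (frames x bands) energy spectrogram.

    M is a first-order IIR-smoothed energy per band; the AGC-like division by
    M**alpha followed by root compression replaces static log compression.
    """
    smoothed = np.zeros_like(energy)
    smoothed[0] = energy[0]
    for t in range(1, energy.shape[0]):
        smoothed[t] = (1 - s) * smoothed[t - 1] + s * energy[t]
    return (energy / (eps + smoothed) ** alpha + delta) ** r - delta ** r

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mel_energy = np.abs(rng.normal(size=(200, 40)))   # e.g. 200 frames x 40 mel bands
    features = pcen(mel_energy)
    print(features.shape, float(features.mean()))
```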
  • Patent number: 11948553
    Abstract: Embodiments described herein provide for audio processing operations that evaluate characteristics of audio signals that are independent of the speaker's voice. A neural network architecture trains and applies discriminatory neural networks tasked with modeling and classifying speaker-independent characteristics. The task-specific models generate or extract feature vectors from input audio data based on the trained embedding extraction models. The embeddings from the task-specific models are concatenated to form a deep-phoneprint (DP) vector for the input audio signal. The DP vector is a low-dimensional representation of each of the speaker-independent characteristics of the audio signal and is applied in various downstream operations.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: April 2, 2024
    Assignee: Pindrop Security, Inc.
    Inventors: Kedar Phatak, Elie Khoury
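A minimal sketch of forming the deep-phoneprint (DP) vector by concatenating the task-specific embeddings. The characteristic names (codec, device type, network type) and embedding sizes are hypothetical placeholders, not taken from the patent.

```python
import numpy as np

def deep_phoneprint(task_embeddings):
    """Concatenate embeddings from task-specific models into one DP vector.

    `task_embeddings` maps a speaker-independent characteristic to the
    embedding produced by the discriminative model trained for that task.
    """
    return np.concatenate([task_embeddings[name] for name in sorted(task_embeddings)])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    dp = deep_phoneprint({"codec": rng.normal(size=32),
                          "device_type": rng.normal(size=32),
                          "network_type": rng.normal(size=32)})
    print(dp.shape)   # (96,) low-dimensional representation of the call audio
```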
  • Patent number: 11862177
    Abstract: Embodiments described herein provide for systems and methods for implementing a neural network architecture for spoof detection in audio signals. The neural network architecture contains layers defining embedding extractors that extract embeddings from input audio signals. Spoofprint embeddings are generated for particular system enrollees to detect attempts to spoof the enrollee's voice. Optionally, voiceprint embeddings are generated for the system enrollees to recognize the enrollee's voice. The voiceprints are extracted using features related to the enrollee's voice. The spoofprints are extracted using features related to how the enrollee speaks, along with other artifacts. The spoofprints facilitate detection of efforts to fool voice biometrics using synthesized speech (e.g., deepfakes) that spoofs and emulates the enrollee's voice.
    Type: Grant
    Filed: January 22, 2021
    Date of Patent: January 2, 2024
    Assignee: Pindrop Security, Inc.
    Inventors: Tianxiang Chen, Elie Khoury
  • Patent number: 11842748
    Abstract: Disclosed are methods, systems, and apparatuses for audio event detection in which the determination of the type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: December 12, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
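A small sketch of cluster-level classification with GMMs, one of the options described above: a GMM is fit per audio-event class, and an entire cluster is labeled by the class whose GMM gives the highest total log-likelihood over the cluster's frames. Feature dimensions, component counts, and the synthetic data are assumptions; segmentation (GLR/BIC) and clustering (HAC or K-means) are not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_gmms(labeled_frames, n_components=4, seed=0):
    """Fit one GMM per audio-event class on that class's frame-level features."""
    return {label: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
            for label, X in labeled_frames.items()}

def classify_cluster(cluster_frames, class_gmms):
    """Score a whole cluster of frames under each class GMM and pick the best.

    Deciding at the cluster level (total log-likelihood over all frames in the
    cluster) is more robust than labeling each frame independently.
    """
    totals = {label: gmm.score_samples(cluster_frames).sum()
              for label, gmm in class_gmms.items()}
    return max(totals, key=totals.get)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    training = {"speech": rng.normal(0.0, 1.0, size=(500, 20)),
                "ringback": rng.normal(3.0, 1.0, size=(500, 20))}
    gmms = train_class_gmms(training)
    cluster = rng.normal(3.0, 1.0, size=(80, 20))   # one clustered segment
    print(classify_cluster(cluster, gmms))           # -> "ringback"
```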
  • Patent number: 11756564
    Abstract: A computer may segment a noisy audio signal into audio frames and execute a deep neural network (DNN) to estimate an instantaneous function of the clean speech spectrum and the noisy audio spectrum in each audio frame. This instantaneous function may correspond to a ratio of the a-priori signal-to-noise ratio (SNR) and the a-posteriori SNR of the audio frame. The computer may add the estimated instantaneous function to the original noisy audio frame to output an enhanced speech audio frame.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: September 12, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Ganesh Sivaraman, Elie Khoury
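One way to read the abstract above is that, in the log-spectral domain, the ratio of the a-priori SNR to the a-posteriori SNR equals the clean-to-noisy power ratio, so adding its logarithm to the noisy log spectrum approximates the clean log spectrum. The sketch below illustrates that interpretation with an oracle standing in for the trained DNN; this reading and the placeholder are assumptions, not the patented method.

```python
import numpy as np

def enhance_frame(noisy_log_power, estimate_log_snr_ratio):
    """Enhance one frame's log-power spectrum with a predicted SNR-ratio term.

    `estimate_log_snr_ratio(frame)` stands in for the trained DNN; it returns
    log(a-priori SNR / a-posteriori SNR), which equals log(clean power) minus
    log(noisy power), so adding it to the noisy log spectrum approximates the
    clean log spectrum.
    """
    return noisy_log_power + estimate_log_snr_ratio(noisy_log_power)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    clean = np.abs(rng.normal(size=257)) ** 2
    noise = 0.2 * np.abs(rng.normal(size=257)) ** 2
    noisy_log = np.log(clean + noise)
    # An "oracle" stand-in for the DNN, for illustration only.
    oracle = lambda frame: np.log(clean) - frame
    enhanced_log = enhance_frame(noisy_log, oracle)
    print(np.allclose(enhanced_log, np.log(clean)))   # True for the oracle
```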
  • Patent number: 11727942
    Abstract: Systems and methods may generate, by a computer, a voice model for an enrollee based upon a set of one or more features extracted from a first audio sample received at a first time; receive at a second time a second audio sample associated with a caller; generate a likelihood score for the second audio sample by applying the voice model associated with the enrollee on the set of features extracted from the second audio sample associated with the caller, the likelihood score indicating a likelihood that the caller is the enrollee; calibrate the likelihood score based upon a time interval from the first time to the second time and at least one of: an enrollee age at the first time and an enrollee gender; and authenticate the caller as the enrollee upon the computer determining that the likelihood score satisfies a predetermined threshold score.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: August 15, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
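A hedged sketch of calibrating the likelihood score by the enrollment-to-call time interval and the enrollee's age or gender before thresholding, as described above. The direction and magnitude of the adjustment and the age/gender factors are invented for illustration; a deployed system would learn such calibration parameters from data.

```python
def calibrate_score(raw_score, years_since_enrollment,
                    enrollee_age_at_enrollment=None, enrollee_gender=None):
    """Adjust a voice-match score for expected drift since enrollment.

    Illustrative drift model (not from the patent): genuine callers' scores are
    assumed to drop slowly as the voice ages, faster for enrollees who were
    minors, so the calibration adds back the expected drop before thresholding.
    """
    drift_per_year = 0.01
    if enrollee_age_at_enrollment is not None and enrollee_age_at_enrollment < 18:
        drift_per_year = 0.03
    if enrollee_gender == "male":
        drift_per_year *= 1.1   # purely illustrative adjustment
    return raw_score + drift_per_year * years_since_enrollment

def authenticate(raw_score, years_since_enrollment, threshold=0.7, **kwargs):
    """Authenticate the caller if the calibrated score meets the threshold."""
    return calibrate_score(raw_score, years_since_enrollment, **kwargs) >= threshold

if __name__ == "__main__":
    print(authenticate(0.68, years_since_enrollment=3,
                       enrollee_age_at_enrollment=16, enrollee_gender="male"))
```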
  • Patent number: 11715460
    Abstract: Described herein are systems and methods for improved audio analysis using a computer-executed neural network having one or more in-network data augmentation layers. The systems described herein help ease or avoid unwanted strain on computing resources by employing the data augmentation techniques within the layers of the neural network. The in-network data augmentation layers produce various types of simulated audio data when the computer applies the neural network to an inputted audio signal during a training phase, enrollment phase, and/or testing phase. Subsequent layers of the neural network (e.g., convolutional layer, pooling layer, data augmentation layer) ingest the simulated audio data and the inputted audio signal and perform various operations.
    Type: Grant
    Filed: October 8, 2020
    Date of Patent: August 1, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Ganesh Sivaraman, Tianxiang Chen, Amruta Vidwans
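A compact sketch of an in-network augmentation layer of the kind described above: during training it degrades the input with additive noise at a random SNR before the following convolutional and pooling layers, and at inference it passes the signal through unchanged. The noise model, SNR range, and toy downstream layers are assumptions; the patent contemplates various types of simulated audio data, and a fuller layer could also forward the original signal alongside the simulated one.

```python
import torch
import torch.nn as nn

class NoiseAugmentLayer(nn.Module):
    """In-network data augmentation: degrade the input while the model trains.

    During training the layer replaces each waveform with a copy carrying
    additive noise at a randomly drawn SNR, so downstream convolutional and
    pooling layers see simulated channel conditions without augmented copies
    of the dataset being stored; at inference it is a no-op.
    """
    def __init__(self, snr_db_range=(5.0, 20.0)):
        super().__init__()
        self.snr_db_range = snr_db_range

    def forward(self, x):                        # x: (batch, 1, samples)
        if not self.training:
            return x
        snr_db = torch.empty(x.shape[0], 1, 1).uniform_(*self.snr_db_range)
        signal_power = x.pow(2).mean(dim=-1, keepdim=True)
        noise_power = signal_power / (10.0 ** (snr_db / 10.0))
        return x + torch.randn_like(x) * noise_power.sqrt()

if __name__ == "__main__":
    model = nn.Sequential(NoiseAugmentLayer(),
                          nn.Conv1d(1, 8, kernel_size=16, stride=8), nn.ReLU(),
                          nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 2))
    audio = torch.randn(4, 1, 16000)
    model.train()
    print(model(audio).shape)   # training pass sees noise-degraded audio
    model.eval()
    print(model(audio).shape)   # inference pass sees the clean signal
```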
  • Patent number: 11670304
    Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speaker identified, by use of diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
    Type: Grant
    Filed: June 8, 2020
    Date of Patent: June 6, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
  • Patent number: 11657823
    Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed-forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 23, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
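A sketch of the training objective described above: a feed-forward CNN front end maps a channel-degraded waveform to features, and its output is driven toward handcrafted features computed for the same raw speech, with a mean-squared-error loss used here as an assumed stand-in for the patent's loss function. The layer shapes, frame rate, and random placeholder targets are illustrative only.

```python
import torch
import torch.nn as nn

class ChannelCompensatingFrontEnd(nn.Module):
    """Feed-forward CNN that maps a degraded waveform to compensated features."""
    def __init__(self, n_features=40):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=400, stride=160), nn.ReLU(),   # ~25 ms window, 10 ms hop
            nn.Conv1d(32, n_features, kernel_size=1),
        )

    def forward(self, degraded):                 # (batch, 1, samples)
        return self.conv(degraded)               # (batch, n_features, frames)

def training_step(front_end, optimizer, degraded, handcrafted_clean):
    """One update: make CNN features of the degraded audio match handcrafted
    features computed on the clean original."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(front_end(degraded), handcrafted_clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    front_end = ChannelCompensatingFrontEnd()
    opt = torch.optim.Adam(front_end.parameters(), lr=1e-3)
    clean = torch.randn(2, 1, 16000)
    degraded = clean + 0.1 * torch.randn_like(clean)        # stand-in channel noise
    frames = front_end(degraded).shape[-1]
    handcrafted = torch.randn(2, 40, frames)                # placeholder target features
    print(training_step(front_end, opt, degraded, handcrafted))
```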
  • Publication number: 20230005486
    Abstract: Embodiments include a computer executing voice biometric machine-learning for speaker recognition. The machine-learning architecture includes embedding extractors that extract embeddings for enrollment or for verifying inbound speakers, and embedding convertors that convert enrollment voiceprints from a first type of embedding to a second type of embedding. The embedding convertor maps the feature vector space of the first type of embedding to the feature vector space of the second type of embedding. The embedding convertor takes as input enrollment embeddings of the first type of embedding and generates as output converted enrolled embeddings that are aggregated into a converted enrolled voiceprint of the second type of embedding.
    Type: Application
    Filed: June 30, 2022
    Publication date: January 5, 2023
    Applicant: Pindrop Security, Inc.
    Inventors: Tianxiang Chen, Elie Khoury
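A minimal sketch of the embedding convertor described above: a small network maps enrollment embeddings from one extractor's vector space to another's, and the converted embeddings are aggregated into a converted enrolled voiceprint. The 128/192 dimensions, the two-layer MLP, and mean aggregation are assumptions; training of the convertor is omitted.

```python
import torch
import torch.nn as nn

class EmbeddingConvertor(nn.Module):
    """Maps enrollment embeddings from one extractor's vector space to another's.

    This lets voiceprints enrolled under an older embedding extractor be reused
    with a newer extractor without re-enrolling the speaker.
    """
    def __init__(self, dim_old=128, dim_new=192):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_old, 256), nn.ReLU(),
                                 nn.Linear(256, dim_new))

    def forward(self, old_embeddings):           # (n_enrollment_utterances, dim_old)
        return self.net(old_embeddings)

def converted_voiceprint(convertor, old_enrollment_embeddings):
    """Convert each enrollment embedding, then aggregate into one voiceprint."""
    with torch.no_grad():
        converted = convertor(old_enrollment_embeddings)
    return converted.mean(dim=0)                 # simple mean aggregation

if __name__ == "__main__":
    convertor = EmbeddingConvertor()
    old_embeddings = torch.randn(5, 128)         # five enrollment utterances
    new_voiceprint = converted_voiceprint(convertor, old_embeddings)
    print(new_voiceprint.shape)                  # torch.Size([192])
```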
  • Patent number: 11488605
    Abstract: An automated speaker verification (ASV) system incorporates a first deep neural network to extract deep acoustic features, such as deep CQCC features, from a received voice sample. The deep acoustic features are processed by a second deep neural network that classifies the deep acoustic features according to a determined likelihood of including a spoofing condition. A binary classifier then classifies the voice sample as being genuine or spoofed.
    Type: Grant
    Filed: June 22, 2020
    Date of Patent: November 1, 2022
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Parav Nagarsheth, Kailash Patil, Matthew Garland