Abstract: A computer-implemented method and apparatus for extracting key information from conversational voice data. The method comprises receiving a first speaker text corresponding to the speech of a first speaker in a conversation with a second speaker, the conversation comprising multiple turns of speech between the first speaker and the second speaker, the first speaker text comprising multiple question lines arranged chronologically, each question line corresponding to the speech of the first speaker at a corresponding turn. Feature words are identified, and the frequency of occurrence of each feature word in each question line is determined. Question lines containing none of the feature words are removed, yielding candidate question lines, and a mathematical representation is generated for each candidate question line. A similarity score is computed for each candidate question line with respect to each subsequent candidate question line, and the candidate question line with the highest score is identified as a key question.
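The pipeline above can be sketched in a few lines. This is a minimal illustration, assuming bag-of-words counts as the "mathematical representation" and cosine similarity as the "similarity score"; the abstract specifies neither, so both choices are assumptions.

```python
from collections import Counter
import math

def extract_key_question(question_lines, feature_words):
    # Remove question lines containing none of the feature words;
    # the survivors are the candidate question lines.
    candidates = [q for q in question_lines
                  if any(w in q.lower().split() for w in feature_words)]

    # Assumed mathematical representation: bag-of-words term counts.
    vectors = [Counter(q.lower().split()) for q in candidates]

    def cosine(a, b):
        # Assumed similarity score: cosine similarity of count vectors.
        num = sum(a[t] * b[t] for t in set(a) & set(b))
        den = (math.sqrt(sum(v * v for v in a.values()))
               * math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    # Score each candidate against every subsequent candidate; a question
    # that recurs later in the call accumulates the highest total score.
    best_line, best_score = None, -1.0
    for i, vi in enumerate(vectors):
        score = sum(cosine(vi, vj) for vj in vectors[i + 1:])
        if score > best_score:
            best_line, best_score = candidates[i], score
    return best_line
```

For example, given agent turns where "Is there anything else" contains no feature words, that line is dropped, and the question repeated later in the call is returned as the key question.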
Abstract: A method and an apparatus for predicting satisfaction of a customer pursuant to a call between the customer and an agent. The method comprises receiving a transcribed text of the call, dividing the transcribed text into a plurality of phases of a conversation, extracting at least one call feature for each of the plurality of phases, receiving call metadata, extracting metadata features from the call metadata, combining the call features and the metadata features, and generating an output, using a trained machine learning (ML) model, based on the combined features, indicating whether or not the customer is satisfied. The ML model is trained to generate such an output from an input of the combined features.
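The method above can be sketched end to end. This is a toy illustration under loud assumptions: phases are taken as equal-length thirds of the transcript (the abstract does not say how phases are delimited), the per-phase call feature is the fraction of negative words, the metadata features (`duration_s`, `holds`) are hypothetical, and the "trained ML model" is replaced by a fixed linear scorer with made-up weights.

```python
def split_into_phases(tokens, n_phases=3):
    # Assumption: phases are equal-length thirds (opening, issue, closing).
    k = max(1, len(tokens) // n_phases)
    phases = [tokens[i * k:(i + 1) * k] for i in range(n_phases - 1)]
    phases.append(tokens[(n_phases - 1) * k:])
    return phases

NEGATIVE = {"unhappy", "frustrated", "cancel", "complaint"}

def phase_feature(phase):
    # One illustrative call feature per phase: fraction of negative words.
    return sum(t in NEGATIVE for t in phase) / max(1, len(phase))

def predict_satisfaction(transcript, metadata, model):
    tokens = transcript.lower().split()
    # Call features (one per phase) combined with metadata features.
    feats = [phase_feature(p) for p in split_into_phases(tokens)]
    feats += [metadata["duration_s"] / 600.0, metadata["holds"] / 5.0]
    return model(feats)  # True = satisfied

def toy_model(feats, weights=(-2.0, -2.0, -2.0, -0.5, -0.5), bias=1.0):
    # Stand-in for the trained ML model: a linear scorer with invented weights.
    return bias + sum(w * f for w, f in zip(weights, feats)) > 0
```

A short, calm call with no holds scores as satisfied; a long call full of negative words and holds does not.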
Abstract: A method and an apparatus for coaching call center agents is provided. The method includes analyzing a conversation of an agent with a first customer, determining a first performance of the agent on at least one behavioral skill based on the analysis, automatically generating a custom training package (CTP) based on the determined first performance, and sending the CTP for presentation on a device of the agent.
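The CTP generation step can be illustrated with a small sketch. The skill names, module identifiers, and scoring threshold below are all hypothetical; the abstract does not enumerate behavioral skills or define how performance is scored.

```python
# Hypothetical catalog mapping behavioral skills to training modules.
TRAINING_MODULES = {
    "empathy": "module_empathy_101",
    "active_listening": "module_listening_basics",
    "ownership": "module_ownership_drills",
}

def generate_ctp(skill_scores, threshold=0.7):
    # Include a training module for every behavioral skill the agent
    # scored below threshold on, weakest skill first; the resulting
    # list is the custom training package (CTP) sent to the agent device.
    weak = sorted((s for s in skill_scores if skill_scores[s] < threshold),
                  key=lambda s: skill_scores[s])
    return [TRAINING_MODULES[s] for s in weak]
```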
Abstract: A voice biometrics system adapted to authenticate a user based on speech diagnostics is provided. The system includes a pre-processing module to receive and pre-process an input voice sample. The pre-processing module includes a clipping module to clip the input voice sample based on a clipping threshold, a voice activity detection module to apply a detection model on the input voice sample to determine an audible region and a non-audible region therein, and a noise reduction module to apply a noise reduction model to remove noise components from the input voice sample. The voice biometrics system further includes a feature extraction module to extract features from the pre-processed input voice sample, and an authentication module to authenticate the user by comparing a plurality of features extracted from the pre-processed input voice sample to a plurality of enrollment features.
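The pre-processing and authentication stages can be sketched as follows. This is a simplified illustration: the "detection model" is replaced by a frame-energy heuristic, noise reduction is omitted, and cosine similarity with an assumed acceptance threshold stands in for the comparison against enrollment features; none of these specifics come from the abstract.

```python
import math

def clip_sample(samples, threshold=0.99):
    # Clipping module: limit amplitudes to the clipping threshold.
    return [max(-threshold, min(threshold, s)) for s in samples]

def detect_voice_activity(samples, frame=4, energy_thresh=0.01):
    # Voice activity detection stand-in for the detection model: a frame
    # is "audible" when its mean squared amplitude exceeds the threshold;
    # non-audible frames are discarded.
    audible = []
    for i in range(0, len(samples), frame):
        f = samples[i:i + frame]
        if sum(x * x for x in f) / len(f) > energy_thresh:
            audible.extend(f)
    return audible

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (math.sqrt(sum(x * x for x in a))
           * math.sqrt(sum(y * y for y in b)))
    return num / den if den else 0.0

def authenticate(features, enrollment_features, accept=0.85):
    # Authentication module: accept the user when the extracted features
    # are sufficiently close to the stored enrollment features.
    return cosine(features, enrollment_features) >= accept
```

In use, an input sample would be clipped, passed through voice activity detection, denoised, and reduced to a feature vector before the final comparison.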