Patents by Inventor Aravind Ganapathiraju

Aravind Ganapathiraju has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230315992
    Abstract: A method for deriving a model for a chatbot for predicting entities in a sentence. The sentence is input into a named-entity recognition module and features are obtained. An LSTM RNN forward pass and backward pass are performed on the features to obtain a first and a second set of results, respectively. A first concatenation is performed on the first set of results and the second set of results. A second concatenation is performed on the first concatenation using output target entities. A connected set of neurons from the second concatenation is obtained. An output is obtained, and a prediction is collected on the next output by summing the outputs previous to that output. The prediction is input into the second concatenation step, and the method is performed cyclically until all outputs have been processed with input predictions.
    Type: Application
    Filed: June 9, 2023
    Publication date: October 5, 2023
    Applicant: GENESYS CLOUD SERVICES, INC.
    Inventors: FELIX IMMANUEL WYSS, ARAVIND GANAPATHIRAJU, PAVAN BUDUGUPPA
  • Patent number: 11714965
    Abstract: A system and method are presented for model derivation for entity prediction. An LSTM with 100 memory cells is used in the system architecture. Sentences are truncated and provided with feature information to a named-entity recognition model. A forward and a backward pass of the LSTM are performed, and each pass is concatenated. The concatenated bi-directional LSTM encodings are obtained for the various features for each word. A fully connected set of neurons shared across all encoded words is obtained, and the final encoded outputs with dimensions equal to the number of entities are determined.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: August 1, 2023
    Inventors: Felix Immanuel Wyss, Aravind Ganapathiraju, Pavan Buduguppa
  • Patent number: 11694697
    Abstract: A system and method are presented for the correction of packet loss in audio in automatic speech recognition (ASR) systems. Packet loss correction, as presented herein, occurs at the recognition stage without modifying any of the acoustic models generated during training. The behavior of the ASR engine in the absence of packet loss is thus not altered. To accomplish this, the actual input signal may be rectified, the recognition scores may be normalized to account for signal errors, and a best-estimate method using information from previous frames and acoustic models may be used to replace the noisy signal.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: July 4, 2023
    Inventors: Srinath Cheluvaraja, Ananth Nagaraja Iyer, Aravind Ganapathiraju, Felix Immanuel Wyss
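
The replacement of a lost or noisy frame by a best estimate from previous frames, as described in the abstract above, can be illustrated with a minimal sketch. This is loosely analogous only: the patent's estimate also draws on acoustic models, while here a lost frame is simply replaced by an average over recent good frames, and all values are made up.

```python
# Minimal frame-repair sketch: lost frames (None) are replaced by an
# estimate computed from the preceding repaired frames.

def conceal(frames, history=3):
    """Replace None (lost) frames with the mean of recent good frames."""
    repaired = []
    for f in frames:
        if f is None and repaired:
            recent = repaired[-history:]
            f = sum(recent) / len(recent)
        elif f is None:
            f = 0.0  # no history yet; fall back to silence
        repaired.append(f)
    return repaired

signal = [0.2, 0.4, None, 0.6, None, None]
repaired = conceal(signal)
```

Because correction happens on the input side, the downstream recognizer and its acoustic models are left untouched, which is the point the abstract emphasizes.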
  • Publication number: 20230133027
    Abstract: In a method and apparatus for intent-guided automatic speech recognition (ASR) in customer service center environments, the method includes detecting, at a call analytics server (CAS), from the call audio of a call between at least two persons comprising a first person and a second person, an intent expressed by the first person or the second person. The method further includes verifying that the detected intent is on a predefined list of intents, and focusing the range of applicability of a language prediction (LP) module, which uses one or more language models (LMs) and is used by the CAS to generate transcribed text from the call audio, to a conversational domain corresponding to the detected intent.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 4, 2023
    Inventor: Aravind GANAPATHIRAJU
  • Publication number: 20230132710
    Abstract: In a method and apparatus for improved entity extraction in an audio of a conversation or a call, the method includes generating, at a server, from speech data of a conversation between at least two persons, text data and associated preliminary entity prediction data, using an automated speech recognition (ASR) engine comprising one or more neural networks trained via multi-task training. The method further includes identifying, using the text data and associated preliminary entity prediction data, at least one named entity in said speech data.
    Type: Application
    Filed: October 31, 2022
    Publication date: May 4, 2023
    Inventor: Aravind GANAPATHIRAJU
  • Publication number: 20230136746
    Abstract: A method and apparatus for automatically generating a call summary in call center environments is provided. The method includes identifying, from a transcript of a call between a first person and a second person, two or more consecutive mergeable turns of the first person from multiple consecutive turns of the first person, if the two or more consecutive mergeable turns of the first person are interjected by a turn of the second person. The two or more consecutive mergeable turns of the first person are merged into a single merged turn of the first person. In some embodiments, multiple entities, entity values and intents are determined, and each of multiple entity values is mapped to a corresponding entity from multiple entities. A call summary is generated based on the identified entity(ies) and corresponding entity value(s), and the identified intent(s).
    Type: Application
    Filed: December 28, 2022
    Publication date: May 4, 2023
    Inventors: Maragathamani BOOTHLINGAM, Aravind GANAPATHIRAJU
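
The turn-merging step described in the abstract above can be sketched in a few lines. This is an illustrative simplification, not the claimed method: the transcript format, speaker labels, and the rule of merging exactly two turns split by a single interjection are all assumptions for the example.

```python
# Merge two consecutive turns of one speaker that are interrupted by a
# single turn of the other party, keeping the interjection afterwards.

def merge_turns(transcript, speaker="agent"):
    """transcript: ordered list of (speaker, text) pairs."""
    merged = []
    i = 0
    while i < len(transcript):
        spk, text = transcript[i]
        if (spk == speaker and i + 2 < len(transcript)
                and transcript[i + 1][0] != speaker
                and transcript[i + 2][0] == speaker):
            # Rejoin the split turn, then keep the interjection.
            merged.append((speaker, text + " " + transcript[i + 2][1]))
            merged.append(transcript[i + 1])
            i += 3
        else:
            merged.append((spk, text))
            i += 1
    return merged

call = [("agent", "Your balance is"),
        ("customer", "uh huh"),
        ("agent", "two hundred dollars."),
        ("customer", "Thanks!")]
merged = merge_turns(call)
```

Rejoining split turns like this gives the downstream entity and intent extraction complete sentences to work with, which is why the merging precedes summary generation.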
  • Patent number: 11574642
    Abstract: A system and method are presented for the correction of packet loss in audio in automatic speech recognition (ASR) systems. Packet loss correction, as presented herein, occurs at the recognition stage without modifying any of the acoustic models generated during training. The behavior of the ASR engine in the absence of packet loss is thus not altered. To accomplish this, the actual input signal may be rectified, the recognition scores may be normalized to account for signal errors, and a best-estimate method using information from previous frames and acoustic models may be used to replace the noisy signal.
    Type: Grant
    Filed: June 29, 2020
    Date of Patent: February 7, 2023
    Inventors: Srinath Cheluvaraja, Ananth Nagaraja Iyer, Aravind Ganapathiraju, Felix Immanuel Wyss
  • Patent number: 11568305
    Abstract: A system and method are presented for customer journey event representation learning and outcome prediction using neural sequence models. A plurality of events are input into a module where each event has a schema comprising characteristics of the events and their modalities (web clicks, calls, emails, chats, etc.). The events of different modalities can be captured using different schemas and therefore embodiments described herein are schema-agnostic. Each event is represented by the module as a numeric vector, with a plurality of vectors generated in total for each customer visit. The vectors are then used in sequence learning to predict real-time next best actions or outcome probabilities in a customer journey using machine learning algorithms such as recurrent neural networks.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: January 31, 2023
    Inventors: Sapna Negi, Maciej Dabrowski, Aravind Ganapathiraju, Emir Munoz, Veera Elluru Raghavendra, Felix Immanuel Wyss
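
The schema-agnostic pipeline described above (any event, whatever its modality, becomes a fixed-length vector, and a recurrent pass over the vectors yields an outcome probability) can be sketched as follows. Everything here is invented for illustration: the hash-based embedding, the toy recurrent update, and the journey data are not from the patent.

```python
import math

DIM = 4  # illustrative embedding size

def embed(event):
    """Map any event dict to a DIM-length vector (toy hash embedding).
    Works for any schema, which is the schema-agnostic property."""
    vec = [0.0] * DIM
    for key, value in sorted(event.items()):
        idx = hash(f"{key}={value}") % DIM
        vec[idx] += 1.0
    return vec

def outcome_probability(events):
    """Toy recurrent pass over event vectors, squashed to a probability."""
    h = [0.0] * DIM
    for e in events:
        v = embed(e)
        h = [math.tanh(0.5 * hi + 0.5 * vi) for hi, vi in zip(h, v)]
    score = sum(h)
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid over the final state

journey = [{"modality": "web_click", "page": "pricing"},
           {"modality": "chat", "agent": "bot"},
           {"modality": "call", "queue": "support"}]
p = outcome_probability(journey)
```

A production system would learn the embedding and recurrent weights jointly, as the abstract's mention of representation learning with recurrent neural networks implies.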
  • Patent number: 11302307
    Abstract: A system and method are presented for F0 transfer learning for improving F0 prediction with deep neural network models. Larger models are trained using long short-term memory (LSTM) and multi-layer perceptron (MLP) feed-forward hidden layer modeling. The fundamental frequency values for voiced and unvoiced segments are identified and extracted from the larger models. The values for voiced regions are transferred and applied to training a smaller model and the smaller model is applied in the text to speech system for real-time speech synthesis output.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: April 12, 2022
    Inventors: Elluru Veera Raghavendra, Aravind Ganapathiraju
  • Patent number: 11211065
    Abstract: A system and method are presented for the automatic filtering of test utterance mismatches in automatic speech recognition (ASR) systems. Test data are evaluated for match between audio and text in a language-independent manner. Utterances having mismatch are identified and isolated for either removal or manual verification to prevent incorrect measurements of the ASR system performance. In an embodiment, contiguous stretches of low probabilities in every utterance are searched for and removed. Such segments may be intra-word or cross-word. In another embodiment, scores may be determined using log DNN probability for every word in each utterance. Words may be sorted in the order of the scores and those utterances containing the least word scores are removed.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: December 28, 2021
    Inventors: Tejas Godambe, Aravind Ganapathiraju
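
The second embodiment above (score every word, sort, and remove the utterances with the lowest word scores) is easy to sketch. The word log-probabilities and the drop fraction below are made up for illustration; in the patented system the scores come from a DNN acoustic model.

```python
# Rank utterances by their worst per-word log-probability and drop the
# lowest-scoring fraction as likely audio/text mismatches.

def filter_utterances(utterances, drop_fraction=0.25):
    """utterances: list of (id, [word log-probabilities]) pairs."""
    ranked = sorted(utterances, key=lambda u: min(u[1]))
    n_drop = int(len(ranked) * drop_fraction)
    kept = ranked[n_drop:]
    return sorted(u[0] for u in kept)

data = [("utt1", [-0.2, -0.1, -0.3]),
        ("utt2", [-5.0, -0.2, -0.4]),   # one very unlikely word: suspect
        ("utt3", [-0.3, -0.6, -0.2]),
        ("utt4", [-0.1, -0.2, -0.1])]
kept = filter_utterances(data)
```

Because the scoring uses only model probabilities, not language identity, the filtering is language-independent, matching the abstract's claim.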
  • Patent number: 11195514
    Abstract: A system and method are presented for a multiclass approach for confidence modeling in automatic speech recognition systems. A confidence model may be trained offline using supervised learning. A decoding module is utilized within the system that generates features for audio files in audio data. The features are used to generate a hypothesized segment of speech which is compared to a known segment of speech using edit distances. Comparisons are labeled from one of a plurality of output classes. The labels correspond to the degree to which speech is converted to text correctly or not. The trained confidence models can be applied in a variety of systems, including interactive voice response systems, keyword spotters, and open-ended dialog systems.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: December 7, 2021
    Inventors: Ramasubramanian Sundaram, Aravind Ganapathiraju, Yingyi Tan
  • Patent number: 11134155
    Abstract: A method for automated generation of contact center system embeddings according to one embodiment includes determining, by a computing system, contact center system agents, contact center system agent skills, and/or contact center system virtual queue experiences; generating, by the computing system, a matrix representation based on the contact center system agents, the contact center system agent skills, and/or the contact center system virtual queue experiences; generating, by the computing system and based on the matrix representation, contact center system agent identifiers, contact center system agent skills identifiers, and/or contact center system virtual queue identifiers; transforming, by the computing system, the contact center system agent identifiers, the contact center system agent skills identifiers, and/or the contact center system virtual queue identifiers into the contact center system agent embeddings, contact center system agent skills embeddings, and/or contact center system virtual queue
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: September 28, 2021
    Assignee: Genesys Telecommunications Laboratories, Inc.
    Inventors: Felix Immanuel Wyss, Ramasubramanian Sundaram, Aravind Ganapathiraju
  • Publication number: 20200335110
    Abstract: A system and method are presented for the correction of packet loss in audio in automatic speech recognition (ASR) systems. Packet loss correction, as presented herein, occurs at the recognition stage without modifying any of the acoustic models generated during training. The behavior of the ASR engine in the absence of packet loss is thus not altered. To accomplish this, the actual input signal may be rectified, the recognition scores may be normalized to account for signal errors, and a best-estimate method using information from previous frames and acoustic models may be used to replace the noisy signal.
    Type: Application
    Filed: June 29, 2020
    Publication date: October 22, 2020
    Applicant: GENESYS TELECOMMUNICATIONS LABORATORIES, INC.
    Inventors: SRINATH CHELUVARAJA, ANANTH NAGARAJA IYER, ARAVIND GANAPATHIRAJU, FELIX IMMANUEL WYSS
  • Publication number: 20200327444
    Abstract: A system and method are presented for customer journey event representation learning and outcome prediction using neural sequence models. A plurality of events are input into a module where each event has a schema comprising characteristics of the events and their modalities (web clicks, calls, emails, chats, etc.). The events of different modalities can be captured using different schemas and therefore embodiments described herein are schema-agnostic. Each event is represented as a vector of some number of numbers by the module with a plurality of vectors being generated in total for each customer visit. The vectors are then used in sequence learning to predict real-time next best actions or outcome probabilities in a customer journey using machine learning algorithms such as recurrent neural networks.
    Type: Application
    Filed: April 9, 2019
    Publication date: October 15, 2020
    Inventors: Sapna Negi, Maciej Dabrowski, Aravind Ganapathiraju, Emir Munoz, Veera Elluru Raghavendra, Felix Immanuel Wyss
  • Patent number: 10789962
    Abstract: A system and method are presented for the correction of packet loss in audio in automatic speech recognition (ASR) systems. Packet loss correction, as presented herein, occurs at the recognition stage without modifying any of the acoustic models generated during training. The behavior of the ASR engine in the absence of packet loss is thus not altered. To accomplish this, the actual input signal may be rectified, the recognition scores may be normalized to account for signal errors, and a best-estimate method using information from previous frames and acoustic models may be used to replace the noisy signal.
    Type: Grant
    Filed: November 12, 2018
    Date of Patent: September 29, 2020
    Inventors: Srinath Cheluvaraja, Ananth Nagaraja Iyer, Aravind Ganapathiraju, Felix Immanuel Wyss
  • Patent number: 10755718
    Abstract: A method for classifying speakers includes: receiving, by a speaker recognition system including a processor and memory, input audio including speech from a speaker; extracting, by the speaker recognition system, a plurality of speech frames containing voiced speech from the input audio; computing, by the speaker recognition system, a plurality of features for each of the speech frames of the input audio; computing, by the speaker recognition system, a plurality of recognition scores for the plurality of features; computing, by the speaker recognition system, a speaker classification result in accordance with the recognition scores; and outputting, by the speaker recognition system, the speaker classification result.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: August 25, 2020
    Inventors: Zhenhao Ge, Ananth N. Iyer, Srinath Cheluvaraja, Ram Sundaram, Aravind Ganapathiraju
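
The claim above enumerates a concrete pipeline: extract voiced frames, compute per-frame features, score the features, and classify. A toy end-to-end sketch follows; the energy threshold, the two-number features, and the distance-based "speaker models" are all inventions for the example, not the patented recognizers.

```python
# Toy speaker classification following the claimed steps.

def voiced_frames(frames, energy_threshold=0.5):
    """Keep frames whose energy suggests voiced speech."""
    return [f for f in frames if sum(x * x for x in f) > energy_threshold]

def features(frame):
    # Toy per-frame features: mean and peak amplitude.
    return (sum(frame) / len(frame), max(frame))

def classify(frames, speaker_models):
    """Score features against per-speaker references; return best match."""
    feats = [features(f) for f in voiced_frames(frames)]
    scores = {}
    for name, (mean_ref, peak_ref) in speaker_models.items():
        # Negative L1 distance to the reference acts as the score.
        scores[name] = -sum(abs(m - mean_ref) + abs(p - peak_ref)
                            for m, p in feats)
    return max(scores, key=scores.get)

models = {"alice": (0.5, 0.9), "bob": (0.1, 0.3)}
audio = [[0.4, 0.5, 0.6], [0.0, 0.1, 0.0], [0.8, 0.9, 1.0]]
best = classify(audio, models)
```

Restricting scoring to voiced frames keeps silence and background noise from diluting the speaker evidence, which is why the claim extracts them first.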
  • Patent number: 10733974
    Abstract: A system and method are presented for the synthesis of speech from provided text. Particularly, the generation of parameters within the system is performed as a continuous approximation in order to mimic the natural flow of speech as opposed to a step-wise approximation of the feature stream. Provided text may be partitioned and parameters generated using a speech model. The generated parameters from the speech model may then be used in a post-processing step to obtain a new set of parameters for application in speech synthesis.
    Type: Grant
    Filed: January 18, 2018
    Date of Patent: August 4, 2020
    Inventors: Yingyi Tan, Aravind Ganapathiraju, Felix Immanuel Wyss
  • Publication number: 20200151248
     Abstract: A system and method are presented for model derivation for entity prediction. An LSTM with 100 memory cells is used in the system architecture. Sentences are truncated and provided with feature information to a named-entity recognition model. A forward and a backward pass of the LSTM are performed, and each pass is concatenated. The concatenated bi-directional LSTM encodings are obtained for the various features for each word. A fully connected set of neurons shared across all encoded words is obtained, and the final encoded outputs with dimensions equal to the number of entities are determined.
    Type: Application
    Filed: November 8, 2019
    Publication date: May 14, 2020
    Inventors: Felix Immanuel Wyss, Aravind Ganapathiraju, Pavan Buduguppa
  • Patent number: 10621969
     Abstract: A system and method are presented for forming the excitation signal for a glottal pulse model based parametric speech synthesis system. The excitation signal may be formed by using a plurality of sub-band templates instead of a single one. The plurality of sub-band templates may be combined to form the excitation signal, wherein the proportion in which the templates are added is determined dynamically from energy coefficients. These coefficients vary from frame to frame and are learned, along with the spectral parameters, during feature training. The coefficients are appended to the feature vector, which comprises spectral parameters and is modeled using HMMs, and the excitation signal is determined.
    Type: Grant
    Filed: February 11, 2019
    Date of Patent: April 14, 2020
    Inventors: Rajesh Dachiraju, E. Veera Raghavendra, Aravind Ganapathiraju
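
The combination step above (mix several sub-band templates in proportions given by per-frame energy coefficients) reduces to a weighted sum, sketched below. The templates and coefficients are made-up values; in the patented system the coefficients are learned during feature training alongside the spectral parameters.

```python
# Mix sub-band excitation templates per frame using energy coefficients.

def mix_excitation(templates, coefficients):
    """Weighted sum of equal-length sub-band templates."""
    assert len(templates) == len(coefficients)
    out = [0.0] * len(templates[0])
    for tpl, c in zip(templates, coefficients):
        for i, v in enumerate(tpl):
            out[i] += c * v
    return out

low_band = [1.0, 0.5, 0.0, -0.5]
high_band = [0.0, 1.0, 0.0, -1.0]
frame_coeffs = [0.7, 0.3]  # would vary frame to frame in practice
excitation = mix_excitation([low_band, high_band], frame_coeffs)
```

Letting the mixing proportions vary per frame is what allows the excitation to track changing voice quality, which a single fixed template cannot do.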
  • Patent number: 10614814
    Abstract: Technologies for authenticating a speaker in a voice authentication system using voice biometrics include a speech collection computing device and a speech authentication computing device. The speech collection computing device is configured to collect a speech signal from a speaker and transmit the speech signal to the speech authentication computing device. The speech authentication computing device is configured to compute a speech signal feature vector for the received speech signal, retrieve a speech signal classifier associated with the speaker, and feed the speech signal feature vector to the retrieved speech signal classifier. Additionally, the speech authentication computing device is configured to determine whether the speaker is an authorized speaker based on an output of the retrieved speech signal classifier. Additional embodiments are described herein.
    Type: Grant
    Filed: June 2, 2017
    Date of Patent: April 7, 2020
    Inventors: Rajesh Dachiraju, Aravind Ganapathiraju, Ananth Nagaraja Iyer, Felix Immanuel Wyss