Patents Examined by Michael C. Lee
  • Patent number: 11804226
    Abstract: A method includes providing audio signals of an interaction between a plurality of human speakers, the speakers speaking into electronic devices to record the audio signals. The audio signals, which are optionally combined, include agent audio and subject audio. The method further includes automatically processing the audio signals to generate a speaker-separated natural language transcript of the interaction, including agent text and subject text. Questions asked by the at least one agent are identified from the agent text, and for each identified question a subject response is identified. From the agent text, it is determined whether each question asked by the at least one agent is an open question or a closed question. A decision engine is used to determine the veracity of the subject response, and the subject response is flagged if indicia of the likelihood of deception in the subject response exceed a predetermined value.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: October 31, 2023
    Assignee: Lexiqal Ltd
    Inventors: James Laird, Nigel Cannings, Cornelius Patrick Glackin, Julie Ann Wall, Nikesh Bajaj
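    Illustrative sketch (not from the patent): the abstract above describes diarizing audio into agent and subject text, classifying agent questions as open or closed, and flagging responses whose deception indicia exceed a threshold. The snippet below assumes a transcript of (speaker, text) turns; the auxiliary-verb rule and the hedging-word score are stand-ins for the patent's question classifier and decision engine.

    ```python
    # Toy question/response flagging flow. The transcript format, the
    # closed-question word list, and the hedging-based deception score are
    # assumptions, not Lexiqal's actual decision engine.
    CLOSED_OPENERS = {"do", "did", "does", "is", "are", "was", "were", "have", "has", "can", "will"}
    HEDGE_WORDS = {"maybe", "possibly", "honestly", "basically", "sort", "kind"}

    def is_open_question(question: str) -> bool:
        """Closed questions typically open with an auxiliary verb; open questions do not."""
        first = question.strip().lower().split()[0]
        return first not in CLOSED_OPENERS

    def deception_score(response: str) -> float:
        """Toy indicia of deception: density of hedging words in the subject's response."""
        words = [w.strip(",.") for w in response.lower().split()]
        return sum(w in HEDGE_WORDS for w in words) / max(len(words), 1)

    def flag_responses(transcript, threshold=0.15):
        """transcript: list of (speaker, text) turns, speaker in {'agent', 'subject'}."""
        flags = []
        for (speaker, text), (next_speaker, next_text) in zip(transcript, transcript[1:]):
            if speaker == "agent" and text.endswith("?") and next_speaker == "subject":
                score = deception_score(next_text)
                flags.append({
                    "question": text,
                    "open_question": is_open_question(text),
                    "response": next_text,
                    "flagged": score > threshold,
                })
        return flags

    turns = [
        ("agent", "Where were you on Tuesday evening?"),
        ("subject", "Honestly, I was maybe at home, sort of resting."),
    ]
    print(flag_responses(turns))
    ```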
  • Patent number: 11797776
    Abstract: A device may receive training data that includes datasets associated with natural language processing, and may mask the training data to generate masked training data. The device may train a masked event C-BERT model, with the masked training data, to generate pretrained weights and a trained masked event C-BERT model, and may train an event aware C-BERT model, with the training data and the pretrained weights, to generate a trained event aware C-BERT model. The device may receive natural language text data identifying natural language events, and may process the natural language text data, with the trained masked event C-BERT model, to determine weights. The device may process the natural language text data and the weights, with the trained event aware C-BERT model, to predict causality relationships between the natural language events, and may perform actions, based on the causality relationships.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: October 24, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Vivek Kumar Khetan, Mayuresh Anand, Roshni Ramesh Ramnani, Shubhashis Sengupta, Andrew E. Fano
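    Illustrative sketch (not from the patent): the two-stage flow above masks the training data, pretrains on it, reuses the pretrained weights to train an event-aware model, and then predicts causality between events. The ToyModel class, the masking rate, and the coverage-based causality rule below are placeholders for Accenture's C-BERT models.

    ```python
    # Two-stage training flow with stand-in components: ToyModel and the
    # coverage-based causality rule are assumptions, not the C-BERT models.
    import random

    def mask_tokens(sentence: str, rate: float = 0.15) -> str:
        """Replace a fraction of tokens with a [MASK] placeholder."""
        return " ".join("[MASK]" if random.random() < rate else tok
                        for tok in sentence.split())

    class ToyModel:
        def __init__(self, weights=None):
            # Pretrained weights, when given, seed the second training stage.
            self.weights = dict(weights or {})

        def train(self, corpus):
            for sentence in corpus:
                for tok in sentence.split():
                    self.weights[tok] = self.weights.get(tok, 0) + 1
            return self.weights

        def predict_causality(self, event_a: str, event_b: str) -> bool:
            # Stand-in rule: both events must be well covered by the learned vocabulary.
            def coverage(event):
                toks = event.split()
                return sum(tok in self.weights for tok in toks) / len(toks)
            return coverage(event_a) > 0.5 and coverage(event_b) > 0.5

    corpus = ["heavy rain caused flooding", "flooding closed the road"]
    masked_model = ToyModel()
    pretrained = masked_model.train([mask_tokens(s) for s in corpus])  # stage 1: masked pretraining
    event_model = ToyModel(weights=pretrained)                         # stage 2: reuse pretrained weights
    event_model.train(corpus)
    print(event_model.predict_causality("heavy rain", "flooding closed the road"))
    ```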
  • Patent number: 11790893
    Abstract: A voice processing method is disclosed. The voice processing method applies first and second sentence vectors extracted from first and second utterances, that are included in one dialog group and are separated from each other, to a learning model and generates an output from which at least one word having an overlapping meaning is removed. The voice processing method can be associated with an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, devices related to 5G services, and the like.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: October 17, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Kwangyong Lee, Hyun Yu, Byeongha Kim, Yejin Kim
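    Illustrative sketch (not from the patent): the method above applies sentence vectors of two separated utterances from one dialog group to a learning model that drops words with overlapping meaning. Below, a bag-of-words vector and plain token overlap stand in for the sentence vectors and the trained model.

    ```python
    # Toy overlap removal between two utterances of one dialog group.
    from collections import Counter

    def sentence_vector(utterance: str) -> Counter:
        """Toy sentence vector: token counts (a real system would use embeddings)."""
        return Counter(utterance.lower().split())

    def remove_overlap(first: str, second: str) -> str:
        """Drop tokens of the second utterance already covered by the first."""
        v1, v2 = sentence_vector(first), sentence_vector(second)
        overlap = set(v1) & set(v2)
        kept = [tok for tok in second.split() if tok.lower() not in overlap]
        return " ".join(kept)

    print(remove_overlap("turn on the living room lights",
                         "turn the lights to blue please"))
    # -> "to blue please"
    ```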
  • Patent number: 11790884
    Abstract: A computer-implemented method of generating speech audio in a video game is provided. The method includes inputting, into a synthesizer module, input data that represents speech content. Source acoustic features for the speech content in the voice of a source speaker are generated and are input, along with a speaker embedding associated with a player of the video game, into an acoustic feature encoder of a voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder and are inputted into an acoustic feature decoder of the voice convertor to generate target acoustic features. The target acoustic features are processed with one or more modules to generate speech audio in the voice of the player.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: October 17, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Zahra Shakeri, Jervis Pinto, Kilol Gupta, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kenneth Moss
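    Illustrative sketch (not from the patent): the data flow above runs text to source acoustic features, encodes them with the player's speaker embedding, decodes to target acoustic features, and renders audio. The random matrices below are placeholders for the trained synthesizer, voice convertor, and audio-rendering modules.

    ```python
    # Data-flow sketch of the voice-conversion pipeline with stand-in weights.
    import numpy as np

    rng = np.random.default_rng(0)
    FEATURE_DIM, EMBED_DIM = 80, 16

    def synthesizer(text: str) -> np.ndarray:
        """Source acoustic features for the speech content, one frame per token."""
        return rng.standard_normal((len(text.split()), FEATURE_DIM))

    def acoustic_encoder(features: np.ndarray, speaker_embedding: np.ndarray) -> np.ndarray:
        """Concatenate the player's speaker embedding onto every frame."""
        tiled = np.tile(speaker_embedding, (features.shape[0], 1))
        return np.concatenate([features, tiled], axis=1)

    def acoustic_decoder(encodings: np.ndarray) -> np.ndarray:
        """Project encodings back to target acoustic features (stand-in weights)."""
        projection = rng.standard_normal((encodings.shape[1], FEATURE_DIM))
        return encodings @ projection

    def render_audio(target_features: np.ndarray) -> np.ndarray:
        """Stand-in for the module(s) that render audio samples from features."""
        return target_features.mean(axis=1)

    player_embedding = rng.standard_normal(EMBED_DIM)
    source = synthesizer("ready player one")
    audio = render_audio(acoustic_decoder(acoustic_encoder(source, player_embedding)))
    print(audio.shape)  # one toy "sample" per frame
    ```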
  • Patent number: 11775764
    Abstract: A computer-implemented method is provided for estimating output confidence of a black box Application Programming Interface (API). The method includes generating paraphrases for an input text. The method further includes calculating a distance between the input text and each respective one of the paraphrases. The method also includes sorting the paraphrases in ascending order of the distance. The method additionally includes selecting a top predetermined number of the paraphrases. The method further includes inputting the input text and the selected paraphrases into the API to obtain an output confidence score for each of the input text and the selected paraphrases. The method also includes estimating, by a hardware processor, the output confidence of the input text from a robustness of output scores of the input text and the selected paraphrases.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: October 3, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Yohei Ikawa, Issei Yoshida, Sachiko Yoshihama, Miki Ishikawa, Kohichi Kamijoh
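    Illustrative sketch (not from the patent): paraphrases are generated, sorted by distance to the input text, the closest ones are sent to the black-box API, and the output confidence is estimated from how robust the returned scores are. The paraphraser, distance metric, mocked API, and combination rule below are assumptions.

    ```python
    # Toy confidence estimation for a black-box API via nearby paraphrases.
    from difflib import SequenceMatcher
    from statistics import mean, pstdev

    def generate_paraphrases(text: str) -> list[str]:
        # Stand-in paraphraser: trivial rewordings of the input.
        return [text.replace("movie", "film"), "I really " + text, text + " overall"]

    def distance(a: str, b: str) -> float:
        return 1.0 - SequenceMatcher(None, a, b).ratio()

    def black_box_api(text: str) -> float:
        # Mocked API: returns a confidence score for its (hidden) prediction.
        return 0.9 if "great" in text else 0.6

    def estimate_confidence(text: str, top_k: int = 2) -> float:
        paraphrases = sorted(generate_paraphrases(text), key=lambda p: distance(text, p))
        selected = paraphrases[:top_k]                      # closest paraphrases only
        scores = [black_box_api(t) for t in [text] + selected]
        robustness = 1.0 - pstdev(scores)                   # stable scores => robust
        return mean(scores) * robustness                    # toy combination rule

    print(estimate_confidence("that movie was great"))
    ```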
  • Patent number: 11763086
    Abstract: Systems and techniques are generally described for anomaly detection in text. In some examples, text data comprising a plurality of words may be received. An image of a first word of the plurality of words may be generated. A feature representation of the first word may be generated using a variational autoencoder. A score may be generated based at least in part on the feature representation. In various examples, the score may indicate a likelihood that an appearance of the first word in the image of the first word is anomalous with respect to at least some other words of the plurality of words.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: September 19, 2023
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Ionut Catalin Sandu, Alin-Ionut Popa, Daniel Voinea
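    Illustrative sketch (not from the patent): each word is rendered as an image, encoded to a feature representation, and scored by how far that representation sits from the other words'. The 1-D character bitmap and the fixed linear encoder below are toy stand-ins for the patent's variational autoencoder.

    ```python
    # Toy appearance-anomaly scoring for words rendered as "images".
    import numpy as np

    rng = np.random.default_rng(0)
    IMG_W, FEAT_DIM = 16, 8
    ENCODER = rng.standard_normal((IMG_W, FEAT_DIM))  # stand-in for a trained VAE encoder

    def word_image(word: str) -> np.ndarray:
        """Toy 1-D 'image': character codes padded/truncated to a fixed width."""
        codes = [ord(c) / 255.0 for c in word[:IMG_W]]
        return np.array(codes + [0.0] * (IMG_W - len(codes)))

    def encode(image: np.ndarray) -> np.ndarray:
        return image @ ENCODER

    def anomaly_scores(words: list[str]) -> dict[str, float]:
        feats = np.stack([encode(word_image(w)) for w in words])
        center = feats.mean(axis=0)
        dists = np.linalg.norm(feats - center, axis=1)
        return dict(zip(words, dists / dists.max()))   # 1.0 = most anomalous appearance

    print(anomaly_scores(["total", "amount", "T0TAL!!", "invoice"]))
    ```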
  • Patent number: 11741941
    Abstract: A discriminator trained on labeled samples of speech can compute probabilities of voice properties. A speech synthesis generative neural network that takes in text and continuous scale values of voice properties is trained to synthesize speech audio that the discriminator will infer as matching the values of the input voice properties. Voice parameters can include speaker voice parameters, accents, and attitudes, among others. Training can be done by transfer learning from an existing neural speech synthesis model, or such a model can be trained with a loss function that considers speech and parameter values. A graphical user interface can allow voice designers for products to synthesize speech with a desired voice or generate a speech synthesis engine with frozen voice parameters. A vector of parameters can be used for comparison to previously registered voices in databases such as ones for trademark registration.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: August 29, 2023
    Assignee: SoundHound, Inc.
    Inventor: Andrew Richards
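    Illustrative sketch (not from the patent): a generator conditioned on continuous voice-property values and a discriminator that infers those properties back from the synthesized audio, giving a loss that pushes the two to agree. The linear maps and the three property dimensions below are placeholders, not the trained neural models.

    ```python
    # Toy generator/discriminator agreement loss over voice properties.
    import numpy as np

    rng = np.random.default_rng(0)
    N_PROPS, AUDIO_DIM = 3, 32                           # e.g. accent, pitch, attitude

    GEN = rng.standard_normal((N_PROPS, AUDIO_DIM))      # stand-in generator weights
    DISC = rng.standard_normal((AUDIO_DIM, N_PROPS))     # stand-in discriminator weights

    def synthesize(text: str, voice_props: np.ndarray) -> np.ndarray:
        """Toy 'speech audio' conditioned on the requested voice-property values."""
        return voice_props @ GEN + 0.01 * len(text)

    def infer_props(audio: np.ndarray) -> np.ndarray:
        """Discriminator's estimate of the voice properties present in the audio."""
        return audio @ DISC

    def property_loss(text: str, voice_props: np.ndarray) -> float:
        """Training signal: how far the inferred properties drift from the inputs."""
        return float(np.mean((infer_props(synthesize(text, voice_props)) - voice_props) ** 2))

    requested = np.array([0.2, 0.8, 0.5])                # designer-chosen property values
    print(property_loss("hello world", requested))
    ```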
  • Patent number: 11699434
    Abstract: Embodiments provide for improved data sequence validity processing, for example to determine validity of sentences or other language within a particular language domain. Such improved processing is useful at least for arranging data sequences based on determined validity, and/or making determinations and/or performing actions based on the determined validity. A determined probability (e.g., transformed into the perplexity space) of each token appearing in a data sequence is used in any of a myriad of manners to perform such data sequence validity processing. Example embodiments provide for generating a perplexity value set for each data sequence in a plurality of data sequences, generating a probabilistic ranking set for the plurality of data sequences based on the perplexity value sets and at least one sequence ranking metric, and generating an arrangement of the plurality of data sequences based on the probabilistic ranking set.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: July 11, 2023
    Assignee: ARRIA DATA2TEXT LIMITED
    Inventors: Daniel da Silva De Paiva, Gowri Somayajulu Sripada, Craig Thomson
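    Illustrative sketch (not from the patent): per-token probabilities are transformed into a perplexity value per data sequence, a ranking metric orders the sequences, and an arrangement is produced. The bigram model below is an assumed stand-in for whatever probability source the real system uses.

    ```python
    # Perplexity-based ranking with a stand-in probability model; the bigram
    # model and add-one smoothing are assumptions, not ARRIA's language model.
    import math
    from collections import Counter

    def bigram_model(corpus: list[str]):
        """Tiny add-one-smoothed bigram model used as the token probability source."""
        unigrams, bigrams = Counter(), Counter()
        for sent in corpus:
            toks = ["<s>"] + sent.split()
            unigrams.update(toks)
            bigrams.update(zip(toks, toks[1:]))
        vocab = len(unigrams)
        return lambda prev, tok: (bigrams[(prev, tok)] + 1) / (unigrams[prev] + vocab)

    def perplexity(sentence: str, prob) -> float:
        """Transform per-token probabilities into the perplexity space."""
        toks = ["<s>"] + sentence.split()
        log_p = sum(math.log(prob(p, t)) for p, t in zip(toks, toks[1:]))
        return math.exp(-log_p / (len(toks) - 1))

    corpus = ["the cat sat on the mat", "the dog sat on the rug"]
    prob = bigram_model(corpus)

    candidates = ["the cat sat on the rug", "rug the sat dog on"]
    ranked = sorted(candidates, key=lambda s: perplexity(s, prob))   # ranking metric
    for sent in ranked:                                              # arrangement
        print(f"{perplexity(sent, prob):8.2f}  {sent}")
    ```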
  • Patent number: 11636850
    Abstract: A method and system for performing real-time sentiment modulation in conversation systems is disclosed. The method includes generating an impact table comprising a plurality of sentiment vectors and a plurality of emotion vectors associated with a plurality of sentences. The method further includes generating, for each of the plurality of sentences, a dependency vector based on the associated sentiment vector and the associated emotion vector. The method further includes stacking the dependency vectors so generated to generate a waveform representing variance in sentiment and emotions across words within the plurality of sentences. The method further includes altering at least one portion of the waveform based on a desired emotional output to generate a reshaped waveform. The method further includes generating a set of rephrased sentences associated with the at least one portion, based on the reshaped waveform, the plurality of sentences, and a user-defined sentiment output.
    Type: Grant
    Filed: July 24, 2020
    Date of Patent: April 25, 2023
    Assignee: Wipro Limited
    Inventor: Manjunath Ramachandra Iyer
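    Illustrative sketch (not from the patent): per-sentence sentiment/emotion values are stacked into a waveform, the portion falling below the desired emotional output is reshaped, and the affected sentences are rephrased. The lexicon scorer and synonym softening below are toy stand-ins for the impact table and the rephrasing model.

    ```python
    # Toy sentiment-modulation flow: score sentences, reshape the low portion,
    # and rephrase the affected sentences.
    POSITIVE = {"great", "happy", "resolved", "glad"}
    NEGATIVE = {"terrible", "angry", "delay", "unhappy"}
    SOFTEN = {"terrible": "not ideal", "angry": "concerned", "unhappy": "disappointed"}

    def dependency_vector(sentence: str) -> float:
        """Toy combined sentiment/emotion score in [-1, 1] for one sentence."""
        toks = sentence.lower().split()
        return (sum(t in POSITIVE for t in toks) - sum(t in NEGATIVE for t in toks)) / max(len(toks), 1)

    def modulate(sentences: list[str], desired: float = 0.0) -> list[str]:
        waveform = [dependency_vector(s) for s in sentences]        # stacked dependency vectors
        rephrased = []
        for sentence, value in zip(sentences, waveform):
            if value < desired:                                     # portion of the waveform to reshape
                for word, softer in SOFTEN.items():
                    sentence = sentence.replace(word, softer)
            rephrased.append(sentence)
        return rephrased

    print(modulate(["I am angry about the delay", "Support was great"]))
    # -> ["I am concerned about the delay", "Support was great"]
    ```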