Patents Examined by Douglas Godbold
-
Patent number: 11599720
Abstract: A method may include receiving an electronic message from a sender. The method may further include parsing the electronic message into a set of sections, the set of sections including structured sections and an unstructured section. The method may further include detecting etiquette errors in the structured sections of the electronic message, wherein the etiquette errors include at least one of a missing word, a redundant word, an incorrect usage of a word, a style error, an emotional punctuation error, or a punctuation error. The method may further include generating an etiquette score based on the etiquette errors.
Type: Grant
Filed: July 28, 2020
Date of Patent: March 7, 2023
Assignee: SHL (India) Private Limited
Inventors: Varun Aggarwal, Rohit Takhar, Abhishek Unnam
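The pipeline described above (parse a message into sections, detect rule-based etiquette errors, deduct from a score) can be sketched minimally as follows; the two rules and the penalty scheme are hypothetical illustrations, not the patented detector:

```python
import re

def detect_etiquette_errors(section_text):
    """Toy rule checks (illustrative only): a redundant repeated word,
    and repeated exclamation marks as an 'emotional punctuation' error."""
    errors = []
    if re.search(r"\b(\w+)\s+\1\b", section_text, re.IGNORECASE):
        errors.append("redundant word")
    if "!!" in section_text:
        errors.append("emotional punctuation")
    return errors

def etiquette_score(sections, max_score=100, penalty=10):
    """Score the message by deducting a fixed penalty per detected error."""
    n_errors = sum(len(detect_etiquette_errors(s)) for s in sections)
    return max(0, max_score - penalty * n_errors)
```

A message whose structured sections contain one repeated word and one run of exclamation marks would lose two penalties from the maximum score.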
-
Patent number: 11599721
Abstract: A natural language processing system that trains task models for particular natural language tasks programmatically generates additional utterances for inclusion in the training set, based on the existing utterances in the training set and the existing state of a task model as generated from the original (non-augmented) training set. More specifically, the training augmentation module 220 identifies specific textual units of utterances and generates variants of the utterances based on those identified units. The identification is based on determined importances of the textual units to the output of the task model, as well as on task rules that correspond to the natural language task for which the task model is being generated. The generation of the additional utterances improves the quality of the task model without the expense of manual labeling of utterances for training set inclusion.
Type: Grant
Filed: August 25, 2020
Date of Patent: March 7, 2023
Assignee: Salesforce, Inc.
Inventors: Shiva Kumar Pentyala, Mridul Gupta, Ankit Chadha, Indira Iyer, Richard Socher
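The augmentation idea (rank textual units by their importance to the task model, then generate utterance variants around the important units) might look like this sketch; `model_importance` and the synonym table stand in for the model-derived importances and task rules described in the abstract:

```python
import random

def important_units(utterance, model_importance):
    """Rank tokens by their (assumed precomputed) importance to the task model."""
    tokens = utterance.split()
    return sorted(tokens, key=lambda t: model_importance.get(t, 0.0), reverse=True)

def generate_variants(utterance, model_importance, synonyms, n_variants=3):
    """Create variant utterances by swapping a high-importance token
    for one of its synonyms (standing in for the task rules)."""
    variants = []
    targets = [t for t in important_units(utterance, model_importance)
               if t in synonyms]
    for _ in range(n_variants):
        if not targets:
            break
        target = random.choice(targets)
        tokens = [random.choice(synonyms[target]) if t == target else t
                  for t in utterance.split()]
        variants.append(" ".join(tokens))
    return variants
```

Each generated variant is a new, automatically labeled training utterance, which is the cost saving the abstract claims over manual labeling.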
-
Patent number: 11600284
Abstract: A voice morphing apparatus having adjustable parameters is described. The disclosed system and method include a voice morphing apparatus that morphs input audio to mask a speaker's identity. Parameter adjustment uses evaluation of an objective function that is based on the input audio and output of the voice morphing apparatus. The voice morphing apparatus includes objectives that are based adversarially on speaker identification and positively on audio fidelity. Thus, the voice morphing apparatus is adjusted to reduce identifiability of speakers while maintaining fidelity of the morphed audio. The voice morphing apparatus may be used as part of an automatic speech recognition system.
Type: Grant
Filed: January 11, 2020
Date of Patent: March 7, 2023
Assignee: SOUNDHOUND, INC.
Inventor: Steve Pearson
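The parameter adjustment can be sketched as maximizing an objective that rewards audio fidelity and adversarially penalizes speaker identifiability; the scalar scores and the simple grid search below are illustrative assumptions, not the patented apparatus:

```python
def morphing_objective(fidelity_score, speaker_id_confidence, weight=1.0):
    """Combined objective: positive on fidelity, adversarial (negative)
    on the speaker-identification system's confidence."""
    return fidelity_score - weight * speaker_id_confidence

def tune_parameter(candidates, evaluate):
    """Pick the morphing parameter value that maximizes the objective.
    `evaluate` maps a candidate parameter to its objective value."""
    return max(candidates, key=evaluate)
```

In a real system `evaluate` would run the morpher on input audio and score the output with a fidelity metric and a speaker-ID model; here it is any callable.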
-
Patent number: 11594211
Abstract: Methods and systems for correcting transcribed text. One method includes receiving audio data from one or more audio data sources and transcribing the audio data based on a voice model to generate text data. The method also includes making the text data available to a plurality of users over at least one computer network and receiving corrected text data over the at least one computer network from the plurality of users. In addition, the method can include modifying the voice model based on the corrected text data.
Type: Grant
Filed: November 4, 2020
Date of Patent: February 28, 2023
Assignee: III Holdings 1, LLC
Inventor: Paul M. Hager
-
Patent number: 11587562
Abstract: A user directed verbal interactive method and system for requesting an evaluation and obtaining a customized verbal therapy routine based on the evaluation obtained. The method and system allow users to interact with an artificial intelligence agent by answering a series of system directed questions that guides the users through evaluation and treatment of physical pain using a customized verbal interaction and delivery regimen. Users verbally engage with the artificial intelligence agent to create respective profiles. The system develops therapies based on their current physiological state and profile. The users are then delivered verbal therapy prompts through the system to implement the developed therapy routines.
Type: Grant
Filed: January 27, 2020
Date of Patent: February 21, 2023
Inventor: John Lemme
-
Patent number: 11580972
Abstract: A robot teaching device includes: a display device; an operation key formed of a hard key or a soft key and including an input changeover switch; a microphone; a voice recognition section; a correspondence storage section storing each of a plurality of types of commands and a recognition target word in association with each other; a recognition target word determination section configured to determine whether a phrase represented by character information includes the recognition target word; and a command execution signal output section configured to switch, in response to the input changeover switch being operated, between a first operation in which a signal for executing the command corresponding to an operation to the operation key is outputted and a second operation in which a signal for executing the command associated with the recognition target word represented by the character information is outputted.
Type: Grant
Filed: April 15, 2020
Date of Patent: February 14, 2023
Assignee: Fanuc Corporation
Inventor: Naruki Shinohara
-
Patent number: 11580989
Abstract: A training method of training a speaker identification model which receives voice data as an input and outputs speaker identification information for identifying a speaker of an utterance included in the voice data is provided. The training method includes: performing voice quality conversion of first voice data of a first speaker to generate second voice data of a second speaker; and performing training of the speaker identification model using, as training data, the first voice data and the second voice data.
Type: Grant
Filed: August 18, 2020
Date of Patent: February 14, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Misaki Doi, Takahiro Kamai, Kousuke Itakura
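The augmentation step (convert one speaker's audio into a target speaker's voice, then train on both the original and converted data) reduces to a simple dataset transformation; `convert` below is a stand-in for the voice-quality conversion model, which the patent does not specify here:

```python
def augment_training_set(samples, convert, target_speaker):
    """samples: list of (audio, speaker_id) pairs.
    convert(audio, target_speaker): hypothetical voice-quality conversion
    that re-renders audio in the target speaker's voice.
    Returns the original samples plus converted copies labeled with
    the target speaker, ready for speaker-ID model training."""
    augmented = list(samples)
    for audio, speaker in samples:
        if speaker != target_speaker:
            augmented.append((convert(audio, target_speaker), target_speaker))
    return augmented
```

The point of the design is that labeled data for the second speaker is manufactured rather than recorded, enlarging the training set for the speaker identification model.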
-
Patent number: 11580310
Abstract: A computing system can include one or more machine-learned models configured to receive context data that describes one or more entities to be named. In response to receipt of the context data, the machine-learned model(s) can generate output data that describes one or more names for the entity or entities described by the context data. The computing system can be configured to perform operations including inputting the context data into the machine-learned model(s). The operations can include receiving, as an output of the machine-learned model(s), the output data that describes the name(s) for the entity or entities described by the context data. The operations can include storing at least one name described by the output data.
Type: Grant
Filed: August 27, 2019
Date of Patent: February 14, 2023
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Alexandru-Marian Damian
-
Patent number: 11580986
Abstract: Obtaining configuration audio data including voice information for a plurality of meeting participants. Generating localization information indicating a respective location for each meeting participant. Generating a respective voiceprint for each meeting participant. Obtaining meeting audio data. Identifying a first meeting participant and a second meeting participant. Linking a first meeting participant identifier of the first meeting participant with a first segment of the meeting audio data. Linking a second meeting participant identifier of the second meeting participant with a second segment of the meeting audio data. Generating a GUI indicating the respective locations of the first and second meeting participants, and the GUI indicating a first transcription of the first segment and a second transcription of the second segment. The first transcription is associated with the first meeting participant in the GUI, and the second transcription is associated with the second meeting participant in the GUI.
Type: Grant
Filed: November 17, 2020
Date of Patent: February 14, 2023
Inventors: Timothy Degraye, Liliane Huguet
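The linking step, assigning an audio segment to the participant whose stored voiceprint it most resembles, can be sketched with cosine similarity over fixed-length embedding vectors; the vector representation is an assumption, since the abstract does not specify the voiceprint format:

```python
def identify_segment(segment_print, voiceprints):
    """Return the participant ID whose voiceprint is closest (by cosine
    similarity) to the segment's embedding.
    voiceprints: dict mapping participant ID -> embedding vector."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    return max(voiceprints, key=lambda pid: cosine(segment_print, voiceprints[pid]))
```

Each segment's transcription would then be displayed in the GUI under the returned participant identifier.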
-
Patent number: 11574642
Abstract: A system and method are presented for the correction of packet loss in audio in automatic speech recognition (ASR) systems. Packet loss correction, as presented herein, occurs at the recognition stage without modifying any of the acoustic models generated during training. The behavior of the ASR engine in the absence of packet loss is thus not altered. To accomplish this, the actual input signal may be rectified, the recognition scores may be normalized to account for signal errors, and a best-estimate method using information from previous frames and acoustic models may be used to replace the noisy signal.
Type: Grant
Filed: June 29, 2020
Date of Patent: February 7, 2023
Inventors: Srinath Cheluvaraja, Ananth Nagaraja Iyer, Aravind Ganapathiraju, Felix Immanuel Wyss
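The best-estimate replacement of lost frames using information from previous frames can be sketched, at its simplest, as last-good-frame substitution; a real system would interpolate using the acoustic models, so this is only the skeleton of the idea:

```python
def repair_lost_frames(frames):
    """Replace lost frames (marked None) with the most recent good frame
    as a best estimate; leading losses fall back to the first good frame."""
    repaired = []
    last_good = None
    for f in frames:
        if f is None:
            repaired.append(last_good)
        else:
            repaired.append(f)
            last_good = f
    # Back-fill any losses that occurred before the first good frame.
    first_good = next((f for f in repaired if f is not None), None)
    return [f if f is not None else first_good for f in repaired]
```

Because the repair happens on the input side, the recognizer's acoustic models are untouched, matching the abstract's constraint that behavior without packet loss is unchanged.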
-
Patent number: 11574644
Abstract: The present technology relates to a signal processing device and method, and a program making it possible to reduce the computational complexity of decoding at low cost. A signal processing device includes: a priority information generation unit configured to generate priority information about an audio object on the basis of a plurality of elements expressing a feature of the audio object. The present technology may be applied to an encoding device and a decoding device.
Type: Grant
Filed: April 12, 2018
Date of Patent: February 7, 2023
Assignee: Sony Corporation
Inventors: Yuki Yamamoto, Toru Chinen, Minoru Tsuji
-
Patent number: 11557307
Abstract: Embodiments include techniques and objects related to a wearable audio device that includes a microphone to detect a plurality of sounds in an environment in which the wearable audio device is located. The wearable audio device further includes a non-acoustic sensor to detect that a user of the wearable audio device is speaking. The wearable audio device further includes one or more processors communicatively to alter, based on an identification by the non-acoustic sensor that the user of the wearable audio device is speaking, one or more of the plurality of sounds to generate a sound output. Other embodiments may be described or claimed.
Type: Grant
Filed: October 16, 2020
Date of Patent: January 17, 2023
Assignee: LISTEN AS
Inventors: Anders Boeen, Snorre Vevstad, Aksel Kvalheim Johnsby, Rafael Ignacio Gallegos, Soreti Darge Gemeda
-
Patent number: 11551699
Abstract: Provided are a method of authenticating a voice input provided from a user and a method of detecting a voice input having a strong attack tendency. The voice input authentication method includes: receiving the voice input; obtaining, from the voice input, signal characteristic data representing signal characteristics of the voice input; and authenticating the voice input by applying the obtained signal characteristic data to a first learning model configured to determine an attribute of the voice input, wherein the first learning model is trained to determine the attribute of the voice input based on a voice uttered by a person and a voice output by an apparatus.
Type: Grant
Filed: April 30, 2019
Date of Patent: January 10, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Junho Huh, Hyoungshik Kim, Muhammad Ejaz Ahmed, Ilyoup Kwak, Iljoo Kim, Sangjoon Je
-
Patent number: 11551667
Abstract: A learning device (10) includes a feature extracting unit (11) that extracts features of speech from speech data for training, a probability calculating unit (12) that, on the basis of the features of speech, performs prefix searching using a speech recognition model of which a neural network is representative, and calculates a posterior probability of a recognition character string to obtain a plurality of hypothetical character strings, an error calculating unit (13) that calculates an error by word error rates of the plurality of hypothetical character strings and a correct character string for training, and obtains a parameter for the entire model that minimizes an expected value of summation of loss in the word error rates, and an updating unit (14) that updates a parameter of the model in accordance with the parameter obtained by the error calculating unit (13).
Type: Grant
Filed: February 1, 2019
Date of Patent: January 10, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shigeki Karita, Atsunori Ogawa, Marc Delcroix, Tomohiro Nakatani
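The training criterion, minimizing the expected word error count over hypothesis strings weighted by their (normalized) posterior probabilities, can be illustrated with a plain word-level edit distance; the hypothesis list and posteriors would come from the prefix search, which is not reproduced here:

```python
def word_errors(hyp, ref):
    """Word-level Levenshtein distance between hypothesis and reference."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(h)][len(r)]

def expected_wer_loss(hypotheses, posteriors, reference):
    """Expected word-error count under the model's posterior distribution
    over the hypothesis strings (posteriors are renormalized)."""
    total = sum(posteriors)
    return sum(p / total * word_errors(h, reference)
               for h, p in zip(hypotheses, posteriors))
```

Gradient-based training would then adjust model parameters to push posterior mass toward low-error hypotheses, lowering this expected loss.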
-
Patent number: 11531811
Abstract: Example implementations described herein involve extracting keywords and dependency information from a text; and generating a co-occurrence dictionary for the text, the generating the co-occurrence dictionary involving selecting ones of the keywords for inclusion in the co-occurrence dictionary based on a number of times the ones of the keywords satisfy the dependency rules; determining for the selected ones of the keywords included in the co-occurrence dictionary, surrounding words to be associated with the selected ones of the keywords in the co-occurrence dictionary based on a number of instances of co-occurrence of the surrounding words with the selected ones of the keywords; and generating weights for each of the selected ones of the keywords in the co-occurrence dictionary based on a number of the surrounding words associated with the selected ones of the keywords.
Type: Grant
Filed: July 23, 2020
Date of Patent: December 20, 2022
Assignee: Hitachi, Ltd.
Inventors: Ken Sugimoto, Kazuhide Aikoh, Shouchun Peng, Jose Luis Beltran-Guerrero
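Building such a co-occurrence dictionary (surrounding words collected within a window of each keyword, kept above a co-occurrence count threshold, with the keyword's weight derived from the number of retained surrounding words) might look like this sketch; the dependency-rule filtering of keywords is omitted and the keywords are simply given:

```python
from collections import Counter

def build_cooccurrence_dict(sentences, keywords, window=2, min_count=1):
    """For each keyword, count surrounding words within `window` tokens
    across all sentences; keep those co-occurring at least `min_count`
    times, and weight the keyword by how many surrounding words remain."""
    entries = {}
    for kw in keywords:
        counts = Counter()
        for sent in sentences:
            tokens = sent.split()
            for i, t in enumerate(tokens):
                if t == kw:
                    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                    counts.update(w for w in tokens[lo:hi] if w != kw)
        surrounding = {w: c for w, c in counts.items() if c >= min_count}
        entries[kw] = {"surrounding": surrounding, "weight": len(surrounding)}
    return entries
```

The weight gives a rough salience signal: keywords that attract many distinct surrounding words are treated as more central to the text.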
-
Patent number: 11527250
Abstract: Methods, systems, and computing platforms for mobile data communication are disclosed. Processor(s) may be configured to electronically receive a plurality of user mobile interaction data to initiate a session on a processing server. The processor(s) may be configured to electronically process the user mobile interaction data with AI including predefined user activity data for actions. The processor(s) may be configured to electronically determine whether one or more of the user mobile voice data samples includes a session key data command and, responsive to the session key data command, electronically initiate a biometric authentication of the user mobile voice data samples. In some implementations, the system processor(s) may be configured to electronically initiate a second computing session for the computer platform while receiving the plurality of user mobile interaction voice data.
Type: Grant
Filed: July 1, 2020
Date of Patent: December 13, 2022
Assignee: Bank of America Corporation
Inventors: Sandeep Kumar Chauhan, Udaya Kumar Raju Ratnakaram
-
Patent number: 11521622
Abstract: A system and method for efficient universal background model (UBM) training for speaker recognition, including: receiving an audio input, divisible into a plurality of audio frames, wherein at least a first audio frame of the plurality of audio frames includes an audio sample having a length above a first threshold; extracting at least one identifying feature from the first audio frame and generating a feature vector based on the at least one identifying feature; generating an optimized training sequence computation based on the feature vector and a Gaussian Mixture Model (GMM), wherein the GMM is associated with a plurality of components, wherein each of the plurality of components is defined by a covariance matrix, a mean vector, and a weight vector; and updating any of the associated components of the GMM based on the generated optimized training sequence computation.
Type: Grant
Filed: October 27, 2020
Date of Patent: December 6, 2022
Assignee: ILLUMA Labs Inc.
Inventor: Milind Borkar
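The core of UBM training, re-estimating each GMM component's weight, mean, and (co)variance from feature frames, can be sketched as one standard EM step in one dimension; the patent's optimized training-sequence computation is replaced here by the plain textbook update:

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def update_gmm(frames, means, variances, weights):
    """One EM step: compute each component's responsibility for every
    frame, then re-estimate weights, means, and variances (1-D for
    illustration; a real UBM uses multivariate features)."""
    K = len(means)
    resp = []
    for x in frames:
        probs = [weights[k] * gaussian_pdf(x, means[k], variances[k])
                 for k in range(K)]
        total = sum(probs)
        resp.append([p / total for p in probs])
    new_means, new_vars, new_weights = [], [], []
    for k in range(K):
        nk = sum(r[k] for r in resp)
        mu = sum(r[k] * x for r, x in zip(resp, frames)) / nk
        var = sum(r[k] * (x - mu) ** 2 for r, x in zip(resp, frames)) / nk
        new_means.append(mu)
        new_vars.append(max(var, 1e-6))  # floor to avoid collapse
        new_weights.append(nk / len(frames))
    return new_means, new_vars, new_weights
```

Repeating this step over many speakers' frames yields the speaker-independent background model against which individual speaker models are later adapted.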
-
Patent number: 11521641
Abstract: State-of-satisfaction change pattern models each including a set of transition weights in state sequences of the states of satisfaction are obtained for predetermined change patterns of the states of satisfaction, and a state-of-satisfaction estimation model for obtaining the posteriori probability of the utterance feature amount given the state of satisfaction of an utterer is obtained by using the utterance-for-learning feature amount and a correct value of the state of satisfaction of an utterer who gave an utterance for learning corresponding to the utterance-for-learning feature amount. By using the input utterance feature amount and the state-of-satisfaction change pattern models and the state-of-satisfaction estimation model, an estimated value of the state of satisfaction of an utterer who gave an utterance corresponding to the input utterance feature amount is obtained.
Type: Grant
Filed: February 2, 2018
Date of Patent: December 6, 2022
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Atsushi Ando, Hosana Kamiyama, Satoshi Kobashikawa
-
Patent number: 11520994
Abstract: The present disclosure relates to a method of evaluating accuracy of a summary of a document. The method includes receiving a plurality of reference summaries of a document and a system summary of the document. The system summary is generated by a machine. The method further includes extracting, for each reference summary, a tuple that is a pair of words composed of a modified word and a dependent word having a dependency relation to the modified word and a label representing the dependency relation. The method further includes replacing, for each of the extracted tuples, each of the modified word of the tuple's word pair and the dependent word with a class predetermined for the words. The method further generates a score of the system summary based on the class and a set of tuples of the system summary.
Type: Grant
Filed: February 14, 2019
Date of Patent: December 6, 2022
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tsutomu Hirao, Masaaki Nagata
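Scoring a system summary by overlap of class-level dependency tuples against a reference can be sketched as a set intersection; the tuples are assumed to come from an external dependency parser, and the word-to-class mapping is an illustrative stand-in for the predetermined classes:

```python
def class_tuples(tuples, word_class):
    """Map each word in (dependent, modified, label) tuples to its class,
    leaving words without a class entry unchanged."""
    return {(word_class.get(d, d), word_class.get(m, m), lab)
            for d, m, lab in tuples}

def tuple_overlap_score(system_tuples, reference_tuples, word_class):
    """Fraction of the system summary's class-level tuples that also
    appear in the reference summary's class-level tuples."""
    sys_set = class_tuples(system_tuples, word_class)
    ref_set = class_tuples(reference_tuples, word_class)
    return len(sys_set & ref_set) / len(sys_set) if sys_set else 0.0
```

Replacing words with classes before matching lets "cat" and "dog" count as the same ANIMAL-subject relation, which is the robustness the class-replacement step buys.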
-
Patent number: 11507746
Abstract: A memory stores therein a document and a plurality of word vectors that are word embeddings respectively computed for a plurality of words. A processor extracts, with respect to one of the words, two or more surrounding words within a prescribed range from one occurrence position where the one word occurs, from the document, and computes a sum vector by adding word vectors corresponding to the surrounding words. The processor determines a parameter such as to predict the surrounding words from the sum vector and the parameter using a machine learning model. The processor stores the parameter as context information for the one occurrence position, in association with the word vector corresponding to the one word.
Type: Grant
Filed: October 7, 2019
Date of Patent: November 22, 2022
Assignee: FUJITSU LIMITED
Inventors: Seiji Okura, Masahiro Kataoka, Satoshi Onoue
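The first stage, computing the sum vector of the surrounding words' embeddings at one occurrence position, can be sketched directly; the subsequent parameter learning step is model-specific and not reproduced:

```python
def context_sum_vector(tokens, position, word_vectors, window=2):
    """Sum the embedding vectors of the tokens within `window` positions
    of `position` (excluding the occurrence position itself); tokens
    without a stored embedding are skipped."""
    dim = len(next(iter(word_vectors.values())))
    total = [0.0] * dim
    lo, hi = max(0, position - window), min(len(tokens), position + window + 1)
    for i in range(lo, hi):
        if i == position:
            continue
        vec = word_vectors.get(tokens[i])
        if vec is not None:
            total = [a + b for a, b in zip(total, vec)]
    return total
```

The resulting sum vector is the per-occurrence context representation from which the stored parameter is then fit.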