Patents Examined by Michelle M Koeth
-
Patent number: 12050509
Abstract: Examples provide a system and method for retraining a machine learning (ML) algorithm associated with a trained model using root cause pattern recognition. The system analyzes the results of parsing unstructured data and identifies a root cause pattern that causes the trained model to underperform when parsing data including the identified pattern. Examples of data including the pattern are created for use in retraining the model to correctly detect and parse data following the identified pattern. Once retrained, the model is able to parse unstructured data, including data having the identified pattern, in accordance with expected performance metrics. The system automatically identifies parsing errors, identifies the root cause patterns for those errors, and retrains the models to correctly handle those patterns for more accurate and efficient handling of unstructured data by trained models.
Type: Grant
Filed: January 27, 2021
Date of Patent: July 30, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Allison Mae Giddings, Mo Zhou, Dong Yuan
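As a rough illustration of the retraining workflow this abstract describes, the sketch below tallies the pattern behind each parsing failure, treats the most frequent one as the root cause, and synthesizes labeled examples containing that pattern for retraining. The pattern keys, the generator callback, and the data shapes are all assumptions for illustration, not details from the patent.

```python
from collections import Counter

def build_retraining_examples(parse_failures, example_generator, n_examples=5):
    """Illustrative root-cause step: tally the pattern behind each parsing failure,
    take the most common one as the root cause, and synthesize new labeled examples
    containing that pattern for retraining the model."""
    pattern_counts = Counter(failure["pattern"] for failure in parse_failures)
    root_cause, _ = pattern_counts.most_common(1)[0]
    return root_cause, [example_generator(root_cause) for _ in range(n_examples)]

# Hypothetical failures keyed by the pattern that caused them.
failures = [{"pattern": "date-without-year"}, {"pattern": "date-without-year"},
            {"pattern": "nested-quotes"}]
pattern, examples = build_retraining_examples(
    failures, lambda p: {"text": f"sample containing {p}", "label": p})
print(pattern, len(examples))  # date-without-year 5
```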
-
Patent number: 12050881
Abstract: A text translation method includes: obtaining a to-be-translated text sequence; encoding the to-be-translated text sequence to obtain a first hidden state sequence; obtaining a first state vector; generating a second hidden state sequence according to the first state vector and the first hidden state sequence; generating a context vector corresponding to a current word according to the second hidden state sequence and the first state vector; and determining a second target word according to the context vector, the first state vector, and a first target word. The first state vector corresponds to a predecessor word of the current word, the current word is a to-be-translated word in the source language text, and the predecessor word is a word that has already been translated in the source language text. The first target word is the translation result of the predecessor word, and the second target word is the translation result of the current word.
Type: Grant
Filed: February 24, 2021
Date of Patent: July 30, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Zhaopeng Tu, Xinwei Geng, Longyue Wang, Xing Wang
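A minimal numerical sketch of the decoding step outlined above, assuming an attention-style interaction between the encoder's first hidden state sequence and the first state vector; the projection matrix, the softmax weighting, and all names are illustrative, not the patent's actual architecture.

```python
import numpy as np

def decode_step(first_hidden_states, first_state_vector, projection):
    """Illustrative decoding step: mix each encoder hidden state with the decoder
    state for the predecessor word (second hidden state sequence), then attend over
    it to build the context vector for the current word."""
    repeated_state = np.repeat(first_state_vector[None, :], len(first_hidden_states), axis=0)
    second_hidden_states = np.tanh(
        np.concatenate([first_hidden_states, repeated_state], axis=1) @ projection.T)
    scores = second_hidden_states @ first_state_vector          # attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ second_hidden_states                       # context vector

rng = np.random.default_rng(0)
encoder_states = rng.standard_normal((6, 8))       # 6 source tokens, dimension 8
decoder_state = rng.standard_normal(8)             # first state vector (predecessor word)
W = 0.1 * rng.standard_normal((8, 16))             # hypothetical learned projection
print(decode_step(encoder_states, decoder_state, W).shape)  # (8,)
```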
-
Patent number: 12050864
Abstract: A provider computing system includes a service recommendation database configured to retrievably store recommendation information, a network interface configured to communicate data over a network, and a processing circuit that includes a processor and memory. The memory is structured to store instructions that are executable by the processor and cause the processing circuit to receive a first set of recommendation information derived from a first set of voice data received by a local voice assistant and store the first set of recommendation information in the service recommendation database. The processing circuit is further caused to receive a request to generate a recommendation from a user voice assistant, the request being derived from a second set of voice data received by the user voice assistant. The processing circuit is also caused to access the first set of recommendation information in response to the request and transmit the recommendation to the user voice assistant.
Type: Grant
Filed: March 4, 2021
Date of Patent: July 30, 2024
Assignee: Wells Fargo Bank, N.A.
Inventors: Margaret Duguid, Chris Kalaboukis, Janet McClellan, Wendy Minkus, Robert M. Ronnau, Donald Winans
-
Patent number: 12039278
Abstract: A method and system may select an interaction involving an agent; send the selected interaction and a computerized form to the agent and an evaluator, simultaneously or concurrently; display to the evaluator and the agent screens defined by the form, each screen including an evaluation question; accept from the agent, for each evaluation question, an agent answer having an associated rating; accept from the evaluator a submission indicating that the evaluator has completed the computerized evaluation form; accept from the agent a submission indicating that the agent has completed the computerized evaluation form; sum an agent rating from the ratings associated with the agent answers provided by the agent; sum an evaluator rating from the ratings associated with the evaluator answers provided by the evaluator; and calculate a variance from the agent rating and the evaluator rating.
Type: Grant
Filed: June 8, 2021
Date of Patent: July 16, 2024
Assignee: Nice Ltd.
Inventors: Abhijit Mokashi, Amram Amir Cohen
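The scoring arithmetic at the end of the abstract fits in a few lines; note that treating the variance as a simple difference between the two summed ratings is an assumption, since the abstract only says a variance is calculated from them.

```python
def score_evaluation(agent_answer_ratings, evaluator_answer_ratings):
    """Sum the ratings attached to the agent's and the evaluator's answers and
    report the variance between them (assumed here to be a simple difference)."""
    agent_rating = sum(agent_answer_ratings)
    evaluator_rating = sum(evaluator_answer_ratings)
    variance = evaluator_rating - agent_rating
    return agent_rating, evaluator_rating, variance

# Example: ratings implied by each answer on a five-question form.
print(score_evaluation([3, 4, 5, 2, 4], [2, 4, 4, 2, 3]))  # (18, 15, -3)
```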
-
Patent number: 12020688
Abstract: An evaluation method and system for an air traffic control (ATC) speech recognition system are provided. The method includes: S1, obtaining, grading, and inputting ATC speech data; S2, constructing an evaluation index system; S3, determining weights of utility layer indexes and support layer indexes under different levels of ATC speech data; S4, calculating a utility layer score and a support layer score, and adding the scores of the utility layer and the support layer to obtain comprehensive scores of the utility layer and the support layer; S5, determining weights of the utility layer and the support layer; S6, adding the product of the utility layer weight and the utility layer comprehensive score to the product of the support layer weight and the support layer comprehensive score to obtain a comprehensive score of the speech recognition system; and S7, determining a level of the recognition performance of the speech recognition system.
Type: Grant
Filed: December 26, 2023
Date of Patent: June 25, 2024
Assignee: CIVIL AVIATION FLIGHT UNIVERSITY OF CHINA
Inventors: Weijun Pan, Peiyuan Jiang, Qinghai Zuo, Xuan Wang, Rundong Wang, Tian Luan, Zixuan Wang, Jian Zhang, Qianlan Jiang, Yidi Wang
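Step S6 reduces to a weighted sum, sketched below with made-up scores and weights.

```python
def comprehensive_score(utility_score, support_score, utility_weight, support_weight):
    """Step S6 of the abstract: weight each layer's comprehensive score and add the
    two products to obtain the overall score of the speech recognition system."""
    return utility_weight * utility_score + support_weight * support_score

# Illustrative layer scores and weights (weights chosen to sum to 1).
print(round(comprehensive_score(utility_score=82.0, support_score=74.0,
                                utility_weight=0.6, support_weight=0.4), 2))  # 78.8
```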
-
Patent number: 12002484
Abstract: This application discloses a method and an apparatus for processing an audio signal. The method includes obtaining a first speech signal acquired by a first device; performing frame blocking on the first speech signal, to obtain multiple speech signal frames; converting the multiple speech signal frames into multiple first frequency domain signal frames; performing aliasing processing on a first sub-frequency domain signal frame among the multiple first frequency domain signal frames with a frequency lower than or equal to a target frequency threshold, and retaining a second sub-frequency domain signal frame among the multiple first frequency domain signal frames with a frequency higher than the target frequency threshold, to obtain multiple second frequency domain signal frames, the target frequency threshold being related to a sampling frequency of a second device; and performing frame fusion on the multiple second frequency domain signal frames, to obtain a second speech signal.
Type: Grant
Filed: May 4, 2022
Date of Patent: June 4, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yang Yu, Yu Chen
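A toy version of the processing chain in this abstract, assuming an FFT-based frame conversion and a simple attenuation standing in for the "aliasing processing"; the frame length, cutoff bin, and 0.5 factor are arbitrary illustration values.

```python
import numpy as np

def process_speech(signal, frame_len, cutoff_bin):
    """Illustrative pipeline: block the signal into frames, convert each frame to the
    frequency domain, alter only the bins at or below a cutoff tied to the second
    device's sampling rate, keep the higher bins as-is, and fuse the frames back
    into a time-domain signal."""
    n_frames = len(signal) // frame_len
    out = np.zeros(n_frames * frame_len)
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        spectrum = np.fft.rfft(frame)
        spectrum[:cutoff_bin + 1] *= 0.5   # placeholder for the aliasing processing
        out[i * frame_len:(i + 1) * frame_len] = np.fft.irfft(spectrum, n=frame_len)
    return out

rng = np.random.default_rng(1)
speech = rng.standard_normal(1024)
print(process_speech(speech, frame_len=256, cutoff_bin=32).shape)  # (1024,)
```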
-
Patent number: 11996094
Abstract: User interaction may be supported with an audio presentation by an automated assistant, and in particular with the spoken content of such an audio presentation that is presented at particular points within the audio presentation. Analysis of an audio presentation may be performed to identify one or more entities addressed by, mentioned by, or otherwise associated with the audio presentation, and utterance classification may be performed to determine whether an utterance received during playback of the audio presentation is directed to the audio presentation, and in some instances, to a particular entity and/or point of playback in the audio presentation, thereby enabling a suitable response to be generated to the utterance.
Type: Grant
Filed: July 15, 2020
Date of Patent: May 28, 2024
Assignee: GOOGLE LLC
Inventors: Victor Carbune, Matthew Sharifi
-
Patent number: 11995406
Abstract: An encoding method is provided for an encoding apparatus. The method includes: obtaining a target paragraph and a preset database, and inputting the target paragraph and the preset database into a memory encoding model; obtaining, in an input layer, an original vector set of the target paragraph and a knowledge vector set of the preset database; obtaining, in a first memory layer, a first target sentence matrix according to the original vector set and the knowledge vector set; obtaining, in an output layer, a paragraph vector of the target paragraph according to the first target sentence matrix; and performing processing based on the paragraph vector.
Type: Grant
Filed: June 23, 2021
Date of Patent: May 28, 2024
Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Yizhang Tan, Shuo Sun, Jie Cao, Le Tian, Cheng Niu, Jie Zhou
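One way to picture the memory-layer computation, assuming the first memory layer attends from each sentence vector of the paragraph over the knowledge vectors and the output layer mean-pools the resulting matrix; both assumptions go beyond what the abstract states.

```python
import numpy as np

def encode_paragraph(sentence_vectors, knowledge_vectors):
    """Illustrative memory-layer step: attend from each sentence vector over the
    knowledge vectors, add the retrieved memory back to form a target sentence
    matrix, then pool into a single paragraph vector."""
    scores = sentence_vectors @ knowledge_vectors.T                   # (S, K)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    target_sentence_matrix = sentence_vectors + weights @ knowledge_vectors
    return target_sentence_matrix.mean(axis=0)                        # paragraph vector

rng = np.random.default_rng(2)
paragraph = rng.standard_normal((4, 16))    # 4 sentence vectors
knowledge = rng.standard_normal((10, 16))   # 10 knowledge vectors
print(encode_paragraph(paragraph, knowledge).shape)  # (16,)
```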
-
Patent number: 11983501
Abstract: The present invention relates to an apparatus and method for automatically generating machine reading comprehension training data, and more particularly, to an apparatus and method for automatically generating and managing machine reading comprehension training data based on text semantic analysis. The apparatus for automatically generating machine reading comprehension training data according to the present invention includes a domain selection text collection unit configured to collect pieces of text data according to domains and subjects, a paragraph selection unit configured to select a paragraph using the pieces of collected text data and determine whether questions and correct answers can be generated, and a question and correct answer generation unit configured to generate questions and correct answers from the selected paragraph.
Type: Grant
Filed: October 7, 2021
Date of Patent: May 14, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Yong Jin Bae, Joon Ho Lim, Min Ho Kim, Hyun Kim, Hyun Ki Kim, Ji Hee Ryu, Kyung Man Bae, Hyung Jik Lee, Soo Jong Lim, Myung Gil Jang, Mi Ran Choi, Jeong Heo
-
Patent number: 11978433
Abstract: An end-to-end automatic speech recognition (ASR) system includes: a first encoder configured for close-talk input captured by a close-talk input mechanism; a second encoder configured for far-talk input captured by a far-talk input mechanism; and an encoder selection layer configured to select at least one of the first and second encoders for use in producing ASR output. The selection is made based on at least one of a short-time Fourier transform (STFT), Mel-frequency cepstral coefficients (MFCC), and filter bank features derived from at least one of the close-talk input and the far-talk input. If signals from both the close-talk input mechanism and the far-talk input mechanism are present for a speech segment, the encoder selection layer dynamically selects between the close-talk encoder and the far-talk encoder, choosing the encoder that better recognizes the speech segment. An encoder-decoder model is used to produce the ASR output.
Type: Grant
Filed: June 22, 2021
Date of Patent: May 7, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Felix Weninger, Marco Gaudesi, Ralf Leibold, Puming Zhan
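A crude stand-in for the encoder selection layer: when both channels are present, each is scored from its features and the better-scoring encoder is chosen. The energy-based score below is purely illustrative; the patent's layer bases the choice on STFT, MFCC, or filter bank features and is presumably learned.

```python
from typing import Optional
import numpy as np

def select_encoder(close_talk_features: Optional[np.ndarray],
                   far_talk_features: Optional[np.ndarray]) -> str:
    """Pick the encoder expected to recognize the segment better; fall back to the
    only available channel when one input is missing."""
    if close_talk_features is None:
        return "far_talk_encoder"
    if far_talk_features is None:
        return "close_talk_encoder"
    close_score = float(np.mean(close_talk_features ** 2))   # illustrative quality score
    far_score = float(np.mean(far_talk_features ** 2))
    return "close_talk_encoder" if close_score >= far_score else "far_talk_encoder"

rng = np.random.default_rng(3)
print(select_encoder(rng.standard_normal((40, 80)), 0.1 * rng.standard_normal((40, 80))))
```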
-
Patent number: 11978441
Abstract: According to one embodiment, a speech recognition apparatus includes processing circuitry. The processing circuitry generates, based on sensor information, environmental information relating to an environment in which the sensor information has been acquired, generates, based on the environmental information and generic speech data, an adapted acoustic model obtained by adapting a base acoustic model to the environment, acquires speech uttered in the environment as input speech data, and subjects the input speech data to a speech recognition process using the adapted acoustic model.
Type: Grant
Filed: February 26, 2021
Date of Patent: May 7, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Daichi Hayakawa, Takehiko Kagoshima, Kenji Iwata
-
Patent number: 11972759
Abstract: Mitigating mistranscriptions involves resolving errors in a transcription of the audio portion of a video based on semantic matching with contextualizing data electronically garnered from one or more sources other than the audio portion of the video. A mistranscription is identified using a pretrained word embedding model that maps words to an embedding space derived from the contextualizing data. A similarity value for each vocabulary word of a multi-word vocabulary of the pretrained word embedding model is determined in relation to the mistranscription. Candidate words are selected based on the similarity values, each indicating a closeness of a corresponding vocabulary word to the mistranscription. The textual rendering is modified by replacing the mistranscription with a candidate word that, based on average semantic similarity values, is more similar to the mistranscription than is each other candidate word.
Type: Grant
Filed: December 2, 2020
Date of Patent: April 30, 2024
Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Shikhar Kwatra, Vijay Ekambaram, Hemant Kumar Sivaswamy, Rodrigo Goulart Silva
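A small sketch of the candidate-selection step: score every vocabulary word's embedding by cosine similarity to the mistranscribed word's embedding and keep the closest matches. The toy random vectors stand in for a word embedding model pretrained on the contextualizing data, and the top-k cutoff is an assumption; the abstract describes averaging semantic similarity values to pick the final replacement.

```python
import numpy as np

def candidate_replacements(mistranscription, embeddings, top_k=3):
    """Illustrative candidate selection: rank every other vocabulary word by cosine
    similarity to the mistranscribed word and return the top matches."""
    target = embeddings[mistranscription]

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(w, cosine(v, target)) for w, v in embeddings.items() if w != mistranscription]
    scored.sort(key=lambda wv: wv[1], reverse=True)
    return [w for w, _ in scored[:top_k]]

rng = np.random.default_rng(4)
vocab = {w: rng.standard_normal(16) for w in ["nodes", "notes", "votes", "goats", "modes"]}
print(candidate_replacements("nodes", vocab))
```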
-
Patent number: 11972221
Abstract: Methods and systems are described for generating dynamic conversational responses using machine learning models. The dynamic conversational responses may be generated in real time and reflect the likely goals and/or intents of a user. The machine learning model may provide these features by monitoring one or more user actions and/or lengths of time between one or more user actions during conversational interactions.
Type: Grant
Filed: April 24, 2023
Date of Patent: April 30, 2024
Assignee: Capital One Services, LLC
Inventors: Victor Alvarez Miranda, Rui Zhang
-
Patent number: 11961510
Abstract: According to one embodiment, an information processing apparatus includes the following units. The acquisition unit acquires first training data including a combination of a voice feature quantity and a correct phoneme label of the voice feature quantity. The training unit trains an acoustic model using the first training data so that it outputs the correct phoneme label in response to input of the voice feature quantity. The extraction unit extracts from the first training data second training data including voice feature quantities of at least one of a keyword, a sub-word, a syllable, or a phoneme included in the keyword. The adaptation processing unit adapts the trained acoustic model to a keyword detection model using the second training data.
Type: Grant
Filed: February 28, 2020
Date of Patent: April 16, 2024
Assignee: KABUSHIKI KAISHA TOSHIBA
Inventors: Ning Ding, Hiroshi Fujimura
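The extraction step lends itself to a short sketch: keep only the training pairs whose phoneme labels belong to the target keyword. The tuple layout and the phoneme set here are invented for illustration.

```python
def extract_keyword_training_data(first_training_data, keyword_phonemes):
    """Illustrative extraction step: keep only (feature, phoneme-label) pairs whose
    label belongs to the target keyword, producing the second training data used to
    adapt the acoustic model into a keyword detection model."""
    return [(features, label) for features, label in first_training_data
            if label in keyword_phonemes]

# Hypothetical (feature-vector, phoneme-label) pairs.
data = [([0.1, 0.2], "h"), ([0.3, 0.1], "e"), ([0.2, 0.9], "z"), ([0.5, 0.4], "l")]
print(extract_keyword_training_data(data, keyword_phonemes={"h", "e", "l", "o"}))
```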
-
Patent number: 11961096
Abstract: Systems, methods, and apparatuses are described for determining compliance with a plurality of restrictions associated with one or more devices in an organization. First text indicating restrictions may be received, and second text indicating a current configuration of one or more devices may be received. Both sets of text may be processed by, e.g., removing a portion of the text based on a predetermined list of terms and simplifying the text using a lemmatization algorithm. A first vector and second vector may be generated based on the processed sets of text, and each vector may be weighted based on an inverse frequency of words in their respective text. Each vector may be normalized based on semantic analysis. The two vectors may be compared. Based on the comparison, third text corresponding to a portion of the second vector may be generated and transmitted to a third computing device.
Type: Grant
Filed: April 28, 2023
Date of Patent: April 16, 2024
Assignee: Capital One Services, LLC
Inventors: David Spencer Warren, Daniel Lantz, Ricky Su, Shannon Hsu, Scott Anderson
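A compact sketch of the described text pipeline: strip terms from a predetermined list, weight term counts by inverse document frequency, length-normalize, and compare the two vectors by cosine similarity. The stop-word list, the smoothed IDF formula, and the sample sentences are illustrative, and the lemmatization and semantic-normalization steps are omitted.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "be", "must", "shall"}  # illustrative term list

def preprocess(text):
    """Remove terms from a predetermined list; a real pipeline would also lemmatize."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def weighted_vector(tokens, doc_freq, n_docs):
    """Weight counts by a smoothed inverse document frequency, then length-normalize."""
    counts = Counter(tokens)
    vec = {w: c * (math.log((1 + n_docs) / (1 + doc_freq.get(w, 0))) + 1.0)
           for w, c in counts.items()}
    norm = math.sqrt(sum(v * v for v in vec.values())) or 1.0
    return {w: v / norm for w, v in vec.items()}

def cosine(a, b):
    return sum(a[w] * b.get(w, 0.0) for w in a)

restrictions = preprocess("Devices must disable remote login and encrypt stored data")
config = preprocess("Remote login is disabled; stored data is not encrypted")
df = Counter(set(restrictions)) + Counter(set(config))
v1 = weighted_vector(restrictions, df, n_docs=2)
v2 = weighted_vector(config, df, n_docs=2)
print(round(cosine(v1, v2), 3))
```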
-
Patent number: 11960842
Abstract: This application relates to apparatus and methods for natural language understanding in conversational systems using machine learning processes. In some examples, a computing device receives a request that identifies textual data. The computing device applies a natural language model to the textual data to generate first embeddings. In some examples, the natural language model is trained on retail data, such as item descriptions and chat session data. The computing device also applies a dependency based model to the textual data to generate second embeddings. Further, the computing device concatenates the first and second embeddings, and applies an intent and entity classifier to the concatenated embeddings to determine entities, and an intent, for the request. The computing device may generate a response to the request based on the determined intent and entities.
Type: Grant
Filed: February 27, 2021
Date of Patent: April 16, 2024
Assignee: Walmart Apollo, LLC
Inventors: Pratik Sridatt Jayarao, Arpit Sharma, Deepa Mohan
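The fusion-and-classification step can be pictured as concatenating the two embedding vectors and applying a linear classifier. The dimensions, weights, and intent labels below are made up, and the patent's classifier also predicts entities, which this sketch omits.

```python
import numpy as np

def classify_request(text_embedding, dependency_embedding, intent_weights, intents):
    """Illustrative fusion step: concatenate the embeddings from the natural language
    model and the dependency-based model, then score each intent with a linear layer."""
    combined = np.concatenate([text_embedding, dependency_embedding])
    scores = intent_weights @ combined
    return intents[int(np.argmax(scores))]

rng = np.random.default_rng(5)
intents = ["track_order", "return_item", "product_question"]   # hypothetical labels
print(classify_request(rng.standard_normal(8), rng.standard_normal(8),
                       rng.standard_normal((3, 16)), intents))
```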
-
Patent number: 11937911
Abstract: Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to a Voice Analysis Engine. According to various embodiments, the Voice Analysis Engine receives first streaming prompt data from a computing device. The Voice Analysis Engine analyzes the first streaming prompt data to provide feedback to the user of the computing device. Upon determining the first streaming prompt data satisfies one or more criteria, the Voice Analysis Engine receives second streaming prompt data from the computing device. The Voice Analysis Engine analyzes the second streaming prompt data to predict a respiratory state of the user of the computing device.
Type: Grant
Filed: November 25, 2020
Date of Patent: March 26, 2024
Assignee: DeepConvo Inc.
Inventors: Satya Venneti, Mir Mohammed Daanish Ali Khan, Rajat Kulshreshtha, Prakhar Pradeep Naval
-
Patent number: 11900953
Abstract: An audio processing method includes the following operations. A calculated value is obtained according to multiple pieces of audio clock frequency information contained in multiple audio input packets. An audio sampling frequency is generated according to the calculated value and a link symbol clock signal. Multiple audio output packets corresponding to the audio input packets are generated according to the audio sampling frequency.
Type: Grant
Filed: January 27, 2021
Date of Patent: February 13, 2024
Assignee: REALTEK SEMICONDUCTOR CORPORATION
Inventors: Chun-Chang Liu, Jing-Chu Chan, Hung-Yi Chang
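A bare-bones illustration of the calculation, assuming the "calculated value" is an average of per-packet clock ratios that scales the link symbol clock; the abstract does not spell out the formula, so the ratio form and the numbers are assumptions.

```python
def regenerate_sampling_frequency(audio_clock_values, link_symbol_clock_hz):
    """Average the clock-frequency values carried in the input packets and scale the
    link symbol clock by that average (an assumed form of the calculation)."""
    calculated_value = sum(audio_clock_values) / len(audio_clock_values)
    return link_symbol_clock_hz * calculated_value

# Illustrative per-packet ratio values for a roughly 48 kHz stream on a 270 MHz link clock.
ratios = [0.0001777, 0.0001779, 0.0001778]
print(round(regenerate_sampling_frequency(ratios, 270_000_000)))  # about 48 kHz
```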
-
Patent number: 11900946
Abstract: A voice recognition method is provided. The voice recognition method includes: collecting a plurality of voice signals; extracting the voiceprint features of each of the voice signals; performing a data process on the voiceprint features to convert the voiceprint features into an N-dimensional matrix, where N is an integer greater than or equal to 2; performing a feature normalization process on the N-dimensional matrix to obtain a plurality of voiceprint data; classifying the voiceprint data to generate a clustering result; and finding a centroid of each cluster according to the clustering result and registering the voiceprint data adjacent to each of the centroids. The disclosure also provides an electronic device adapted for the voice recognition method.
Type: Grant
Filed: July 21, 2021
Date of Patent: February 13, 2024
Assignee: ASUSTEK COMPUTER INC.
Inventor: Pei-Lin Liang
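A toy rendering of the clustering and registration steps, using a small k-means over the voiceprint vectors (the feature normalization step is omitted) and registering the vector nearest each centroid. K-means, the cluster count, and the data are illustrative, since the abstract only says the data are classified and centroids are found.

```python
import numpy as np

def register_voiceprints(voiceprint_data, n_clusters=2, n_iter=10):
    """Cluster the voiceprint vectors with a tiny k-means, then register the index of
    the voiceprint closest to each cluster centroid."""
    rng = np.random.default_rng(0)
    centroids = voiceprint_data[rng.choice(len(voiceprint_data), n_clusters, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(voiceprint_data[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centroids[k] = voiceprint_data[labels == k].mean(axis=0)
    # Register the voiceprint nearest to each centroid.
    dists = np.linalg.norm(voiceprint_data[:, None, :] - centroids[None, :, :], axis=2)
    return [int(i) for i in dists.argmin(axis=0)]

rng = np.random.default_rng(6)
data = np.vstack([rng.standard_normal((5, 4)) + 3, rng.standard_normal((5, 4)) - 3])
print(register_voiceprints(data))  # indices of the registered voiceprints
```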
-
Patent number: 11880667
Abstract: This application discloses an information conversion method and apparatus, a storage medium, and an electronic apparatus.
Type: Grant
Filed: May 22, 2020
Date of Patent: January 23, 2024
Assignee: Tencent Technology (Shenzhen) Company Limited
Inventors: Jun Xie, Mingxuan Wang, Jiangquan Huang, Jian Yao