Patents Examined by Edgar X Guerra-Erazo
  • Patent number: 11562137
    Abstract: A system retrains a natural language understanding (NLU) model by regularly analyzing electronic documents, including web publications such as online newspapers, blogs, social media posts, etc., to understand how word and phrase usage is evolving. Generally, the system determines the frequency of words and phrases in the electronic documents and updates an NLU dictionary depending on whether certain words or phrases are being used more frequently or less frequently. This dictionary is then used to retrain the NLU model, which is then applied to predict the meaning of text or speech communicated by a people group. By analyzing electronic documents such as web publications, the system is able to stay up-to-date on the vocabulary of the people group and make correct predictions as the vocabulary changes (e.g., due to a natural disaster). In this manner, the safety of the people is improved.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: January 24, 2023
    Assignee: Bank of America Corporation
    Inventors: Utkarsh Raj, Maharaj Mukherjee
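    Example (illustrative, not from the patent): a minimal Python sketch of the frequency-driven dictionary update this abstract describes; the whitespace tokenization, growth/decay thresholds, and function names are assumptions.

      from collections import Counter

      def update_nlu_dictionary(documents, dictionary, previous_counts,
                                grow_ratio=2.0, decay_ratio=0.5):
          """Add terms whose usage has risen sharply; drop terms that are fading."""
          counts = Counter()
          for doc in documents:
              counts.update(doc.lower().split())      # crude whitespace tokenization
          for term, count in counts.items():
              old = previous_counts.get(term, 0)
              if old == 0 or count / old >= grow_ratio:
                  dictionary.add(term)                # rising usage: add to the dictionary
          for term in list(dictionary):
              old = max(previous_counts.get(term, 1), 1)
              if counts.get(term, 0) / old <= decay_ratio:
                  dictionary.discard(term)            # fading usage: drop before retraining
          return dictionary, counts
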
  • Patent number: 11562756
    Abstract: An apparatus for post-processing an audio signal is described, having: a time-spectrum converter for converting the audio signal into a spectral representation comprising a sequence of spectral frames; a prediction analyzer for calculating prediction filter data for a prediction over frequency within a spectral frame; a shaping filter, controlled by the prediction filter data, for shaping the spectral frame to enhance a transient portion within the spectral frame; and a spectrum-time converter for converting a sequence of spectral frames, including a shaped spectral frame, back into the time domain.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: January 24, 2023
    Assignee: FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
    Inventors: Sascha Disch, Christian Uhle, Jürgen Herre, Peter Prokein, Patrick Gampp, Antonios Karampourniotis, Julia Havenstein, Oliver Hellmuth, Daniel Richter
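    Example (illustrative, not from the patent): one classical way to realize "prediction over frequency" is linear prediction across the bins of a real-valued spectral frame, with the prediction-error filter acting as the shaping filter; this numpy sketch assumes real (e.g. MDCT-like) frames and is only a stand-in for the claimed apparatus.

      import numpy as np

      def lpc_over_frequency(frame, order=8):
          """Fit linear-prediction coefficients across the bins of one real spectral frame."""
          r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R + 1e-9 * np.eye(order), r[1:order + 1])

      def shape_frame(frame, coeffs):
          """Apply the prediction-error (flattening) filter over frequency; a flatter
          envelope across frequency corresponds to a sharper transient in time."""
          order, shaped = len(coeffs), frame.astype(float)
          for k in range(order, len(frame)):
              shaped[k] = frame[k] - np.dot(coeffs, frame[k - order:k][::-1])
          return shaped
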
  • Patent number: 11562145
    Abstract: This application relates to a text classification method.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: January 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Run Tu
  • Patent number: 11562749
    Abstract: Systems, methods, and computer-readable storage media for responding to a query using a neural network and natural language processing. If necessary, the system can request disambiguation, then parse the query using a trained machine-learning classifier, resulting in at least one of an identified subject or an identified domain of the text query. The system can determine if the user is authorized to retrieve answers to the query and, if so, retrieve factual data associated with the query. The system can then retrieve a response template and fill in the template with the retrieved facts. The system can then determine, by executing a machine comprehension model on the filled response template, a probable readability of at least a portion of the filled response template and, upon identifying that the probable readability is above a threshold, reply to the text query with that portion of the filled response template.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: January 24, 2023
    Assignee: ADP, Inc.
    Inventors: Guilherme Gomes, Bruno Apel, Jarismar Silva, Vincent Kellers, Roberto Rodrigues Dias, Roberto Masiero, Roberto Silveira
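    Example (illustrative, not from the patent): a compressed sketch of the described pipeline; the classifier, fact store, templates, and readability scorer are caller-supplied stand-ins, and the 0.7 threshold is an assumption.

      def answer_query(query, user_domains, classify, lookup_facts, templates,
                       score_readability, threshold=0.7):
          """Parse, authorize, fill a template with facts, and reply only if readable enough."""
          subject, domain = classify(query)                 # identified subject / domain
          if domain not in user_domains:                    # authorization check
              return "Not authorized for this topic."
          facts = lookup_facts(subject, domain)             # retrieve factual data
          filled = templates[domain].format(**facts)        # fill the response template
          if score_readability(filled) >= threshold:        # machine-comprehension check
              return filled
          return None
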
  • Patent number: 11538491
    Abstract: An interaction system that interacts with a user is disclosed. The interaction system includes: an input device that receives a speech signal of the user; a computing device that determines the interaction system's speech content, in response to the speech content acquired from the user's speech signal, such that a frequency distribution of speech feature values of the interaction system's speech content approaches an ideal frequency distribution; and an output device that outputs the determined speech content of the interaction system.
    Type: Grant
    Filed: September 24, 2020
    Date of Patent: December 27, 2022
    Assignee: HITACHI, LTD.
    Inventors: Takashi Numata, Ryuji Mine, Yasuhiro Asa
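    Example (illustrative, not from the patent): choosing, among candidate system utterances, the one that moves the running histogram of a speech feature closest to the ideal distribution; the featurizer, binning, and L2 distance are assumptions.

      import numpy as np

      def choose_system_utterance(candidates, history_features, ideal_hist, featurize, bins):
          """Pick the candidate whose feature value moves the running histogram of speech
          feature values closest (L2) to the ideal frequency distribution."""
          best, best_dist = None, float("inf")
          for cand in candidates:
              feats = history_features + [featurize(cand)]
              hist, _ = np.histogram(feats, bins=bins, density=True)
              dist = np.linalg.norm(hist - ideal_hist)
              if dist < best_dist:
                  best, best_dist = cand, dist
          return best
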
  • Patent number: 11526760
    Abstract: An architecture for training the weights of artificial neural networks provides a global constrainer modifying the neuron weights in each iteration not only by the back-propagated error but also by a global constraint constraining these weights based on the value of all weights at that iteration. The ability to accommodate a global constraint is made practical by using a constrained gradient descent which approximates the error gradient deduced in the training as a plane, offsetting the increased complexity of the global constraint.
    Type: Grant
    Filed: November 9, 2018
    Date of Patent: December 13, 2022
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Sathya Narayanan Ravi, Tuan Quang Dinh, Vishnu Sai Rao Suresh Lokhande, Vikas Singh
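    Example (illustrative, not from the patent): a single update step that follows the first-order ("plane") approximation of the error and then enforces a constraint over all weights; the norm-ball constraint and step size are stand-ins for whatever global constraint is actually used.

      import numpy as np

      def constrained_step(weights, grad, lr=0.01, radius=10.0):
          """Take a descent step along the linearized ('plane') error surface, then project
          all weights back onto a global norm ball acting as the global constraint."""
          w = weights - lr * grad
          norm = np.linalg.norm(w)
          if norm > radius:
              w = w * (radius / norm)
          return w
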
  • Patent number: 11521605
    Abstract: A method of training a natural language neural network comprises obtaining at least one constituency span; obtaining a training video input; applying a multi-modal transform to the video input, thereby generating a transformed video input; comparing the at least one constituency span and the transformed video input using a compound Probabilistic Context-Free Grammar (PCFG) model to match the at least one constituency span with corresponding portions of the transformed video input; and using results from the comparison to learn a constituency parser.
    Type: Grant
    Filed: February 22, 2021
    Date of Patent: December 6, 2022
    Assignee: TENCENT AMERICA LLC
    Inventor: Linfeng Song
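    Example (illustrative, not from the patent): a toy matching objective between constituency-span embeddings and segments of a transformed video, of the kind a parser could be trained against; the cosine-similarity scoring and embedding inputs are assumptions.

      import numpy as np

      def span_video_matching_loss(span_embeddings, video_embeddings):
          """Toy objective: reward each constituency-span embedding for aligning well with
          its best-matching video segment embedding (cosine similarity)."""
          loss = 0.0
          for s in span_embeddings:
              sims = [np.dot(s, v) / (np.linalg.norm(s) * np.linalg.norm(v) + 1e-9)
                      for v in video_embeddings]
              loss += 1.0 - max(sims)
          return loss / len(span_embeddings)
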
  • Patent number: 11521597
    Abstract: Implementations can receive audio data corresponding to a spoken utterance of a user, process the audio data to generate a plurality of speech hypotheses, determine an action to be performed by an automated assistant based on the speech hypotheses, and cause the computing device to render an indication of the action. In response to the computing device rendering the indication, implementations can receive additional audio data corresponding to an additional spoken utterance of the user, process the additional audio data to determine that a portion of the spoken utterance is similar to an additional portion of the additional spoken utterance, supplant the action with an alternate action, and cause the automated assistant to initiate performance of the alternate action. Some implementations can determine whether to render the indication of the action based on a confidence level associated with the action.
    Type: Grant
    Filed: September 3, 2020
    Date of Patent: December 6, 2022
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
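    Example (illustrative, not from the patent): deciding whether a follow-up utterance is a restatement that should supplant the chosen action with an alternate one; the string-similarity measure, both thresholds, and the ranked-hypotheses layout are assumptions.

      from difflib import SequenceMatcher

      def maybe_supplant(action, ranked_hypotheses, first_utterance, second_utterance,
                         confidence, confidence_threshold=0.8, similarity_threshold=0.6):
          """If the follow-up utterance largely repeats the first and confidence was low,
          treat it as a correction and swap in the next-best hypothesis' action."""
          similarity = SequenceMatcher(None, first_utterance, second_utterance).ratio()
          if similarity >= similarity_threshold and confidence < confidence_threshold:
              if len(ranked_hypotheses) > 1:
                  return ranked_hypotheses[1]["action"]   # alternate action supplants the first
          return action
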
  • Patent number: 11514891
    Abstract: A named entity recognition method, a named entity recognition device, and a medium are disclosed. The method includes: acquiring a voice signal; extracting a voice feature vector from the voice signal; performing voice recognition on the voice signal to obtain a text result and extracting a text feature vector from that result; splicing the voice feature vector and the text feature vector to obtain a composite feature vector for each word in the voice signal; and processing the composite feature vector of each word through a deep learning model to obtain a named entity recognition result.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: November 29, 2022
    Assignee: BEIJING BOE TECHNOLOGY DEVELOPMENT CO., LTD.
    Inventor: Fengshuo Hu
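    Example (illustrative, not from the patent): the "splicing" step, concatenating per-word voice and text feature vectors into composite vectors before the deep learning model; the per-word vector layout is an assumption.

      import numpy as np

      def splice_features(voice_vectors, text_vectors):
          """Concatenate each word's voice feature vector with its text feature vector to
          form the composite vector that is fed to the deep learning model."""
          return [np.concatenate([v, t]) for v, t in zip(voice_vectors, text_vectors)]
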
  • Patent number: 11514915
    Abstract: A system and corresponding method are provided for generating responses for a dialogue between a user and a computer. The system includes a memory storing information for a dialogue history and a knowledge base. An encoder may receive a new utterance from the user and generate a global memory pointer used for filtering the knowledge base information in the memory. A decoder may generate at least one local memory pointer and a sketch response for the new utterance. The sketch response includes at least one sketch tag to be replaced by knowledge base information from the memory. The system generates the dialogue computer response using the local memory pointer to select a word from the filtered knowledge base information to replace the at least one sketch tag in the sketch response.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: November 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Chien-Sheng Wu, Caiming Xiong, Richard Socher
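    Example (illustrative, not from the patent): realizing a sketch response by replacing each sketch tag with the knowledge-base entry selected by a local memory pointer; the "@" tag syntax and index-based pointers are assumptions.

      def realize_sketch(sketch_response, filtered_kb, local_pointers):
          """Replace each sketch tag (here marked with '@') with the knowledge-base entry
          selected by the corresponding local memory pointer (an index into filtered_kb)."""
          out, pointers = [], iter(local_pointers)
          for token in sketch_response.split():
              out.append(filtered_kb[next(pointers)] if token.startswith("@") else token)
          return " ".join(out)
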
  • Patent number: 11514886
    Abstract: Disclosed are an emotion classification information-based text-to-speech (TTS) method and device. The emotion classification information-based TTS method according to an embodiment of the present invention may, when emotion classification information is set in a received message, transmit metadata corresponding to the set emotion classification information to a speech synthesis engine and, when no emotion classification information is set in the received message, generate new emotion classification information through semantic analysis and context analysis of sentences in the received message and transmit the metadata to the speech synthesis engine. The speech synthesis engine may perform speech synthesis by carrying emotion classification information based on the transmitted metadata.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: November 29, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Siyoung Yang, Yongchul Park, Juyeong Jang, Jonghoon Chae, Sungmin Han
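    Example (illustrative, not from the patent): the branching the abstract describes, forwarding existing emotion classification info as metadata or inferring it first; the message layout and helper callables are assumptions.

      def prepare_tts_request(message, analyze_emotion, to_metadata):
          """Forward existing emotion classification info as metadata; if none is set,
          infer it from the sentences first (standing in for semantic/context analysis)."""
          emotion = message.get("emotion")
          if emotion is None:
              emotion = analyze_emotion(message["text"])
          return {"text": message["text"], "metadata": to_metadata(emotion)}
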
  • Patent number: 11514332
    Abstract: A method, computer program product, and system for a cognitive dialoguing avatar are provided. The method includes identifying a user, a target entity, and a user goal; initiating communication with the target entity; cognitively evaluating a question from a dialog with the target entity; cognitively determining an answer to the question by evaluating stored user information so as to progress toward the user goal; and communicating the determined answer to the target entity.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: November 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Adam T. Clark, Nathaniel D. Lee, Daniel J. Strauss
  • Patent number: 11508386
    Abstract: The inventive concept relates to an audio coding method that applies CNN-based frequency spectrum recovery. The encoder transmits a part of the frequency spectral coefficients generated by transform coding to a decoder, and the decoder recovers the frequency spectral coefficients that were not transmitted. Furthermore, the signs of the frequency spectral coefficients are transmitted from the encoder to the decoder according to a sign transmission rule.
    Type: Grant
    Filed: April 8, 2020
    Date of Patent: November 22, 2022
    Assignees: Electronics and Telecommunications Research Institute, Kwangwoon University Industry-Academic Collaboration Foundation
    Inventors: Hochong Park, Seung Kwon Beack, Jongmo Sung, Seong-Hyeon Shin, Mi Suk Lee, Tae Jin Lee, Jin Soo Choi
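    Example (illustrative, not from the patent): the transmit/recover split, where only masked coefficients and rule-selected signs are sent and a model (a CNN in the patent, an arbitrary callable here) fills in the rest; the masks and data layout are assumptions.

      import numpy as np

      def encode_frame(coeffs, keep_mask, sign_mask):
          """Transmit only the coefficients selected by keep_mask, plus the signs selected
          by the sign-transmission rule (sign_mask) for coefficients that are dropped."""
          return coeffs * keep_mask, np.sign(coeffs) * sign_mask

      def decode_frame(kept, signs, keep_mask, recover_magnitudes):
          """Recover non-transmitted coefficients: a model predicts their magnitudes and
          any transmitted signs are re-applied."""
          predicted = np.abs(recover_magnitudes(kept))
          filled = np.where(signs != 0, predicted * signs, predicted)
          return np.where(keep_mask > 0, kept, filled)
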
  • Patent number: 11508358
    Abstract: Disclosed herein is an artificial intelligence apparatus for recognizing speech in consideration of an utterance style, including a microphone and a processor configured to obtain, via the microphone, speech data including speech of a user; extract an utterance feature vector from the obtained speech data; determine an utterance style corresponding to the speech based on the extracted utterance feature vector; and generate a speech recognition result using a speech recognition model corresponding to the determined utterance style.
    Type: Grant
    Filed: December 10, 2019
    Date of Patent: November 22, 2022
    Assignee: LG Electronics Inc.
    Inventor: Jonghoon Chae
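    Example (illustrative, not from the patent): dispatching to a speech recognition model keyed by the detected utterance style; the style labels and dictionary layout are assumptions.

      def recognize_with_style(audio_features, classify_style, models):
          """Dispatch to the speech recognition model matching the detected utterance
          style, falling back to a default model for unknown styles."""
          style = classify_style(audio_features)      # e.g. "fast", "whisper", "dialect"
          model = models.get(style, models["default"])
          return model(audio_features)
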
  • Patent number: 11501080
    Abstract: Examples of a sentence phrasing system are provided. The system may obtain a user question from a user. The system may obtain question entailment data from a plurality of data sources. The system may implement an artificial intelligence component to identify a word index from the question entailment data and to identify a question premise from the user question. The system may implement a first cognitive learning operation to determine an answer premise corresponding to the question premise comprising a second-word data set. The system may determine a subject component corresponding to the question premise. The system may generate an object component and a predicate component from the second-word data set corresponding to the subject component. The system may generate an integrated answer relevant for resolving the user question and comprising the subject component, the object component, and the predicate component concatenated to form an answer sentence.
    Type: Grant
    Filed: December 30, 2019
    Date of Patent: November 15, 2022
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Shaun Cyprian D'Souza, Subhashini Lakshminarayanan, Gopali Raval Contractor
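    Example (illustrative, not from the patent): concatenating the subject, predicate, and object components into the integrated answer sentence; the derive_* helpers are hypothetical stand-ins for the cognitive learning operations.

      def compose_integrated_answer(question_premise, answer_words, find_subject,
                                    derive_object, derive_predicate):
          """Concatenate the subject, predicate, and object components into the answer
          sentence; the derive_* callables stand in for the cognitive learning operations."""
          subject = find_subject(question_premise)
          obj = derive_object(answer_words, subject)
          predicate = derive_predicate(answer_words, subject)
          return f"{subject} {predicate} {obj}."
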
  • Patent number: 11501766
    Abstract: Provided are a device and a method for providing a response message to a voice input of a user. The method, performed by a device, of providing a response message to a voice input of a user includes: receiving the voice input of the user; determining a destination of the user and an intention of the user, by analyzing the received voice input; obtaining association information related to the destination; generating the response message that recommends a substitute destination related to the intention of the user, based on the obtained association information; and displaying the generated response message.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: November 15, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ga-hee Lee, In-dong Lee, Se-chun Kang, Hyung-rai Oh
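    Example (illustrative, not from the patent): the response flow, parsing destination and intention from the voice input, pulling association information, and recommending a substitute destination; all helper callables and the reply wording are assumptions.

      def respond_to_voice_input(voice_text, parse_intent, get_association_info,
                                 find_substitute):
          """Parse the destination and intention, look up association information about the
          destination, and recommend a substitute destination that still fits the intent."""
          destination, intention = parse_intent(voice_text)
          info = get_association_info(destination)    # e.g. hours, closures, crowding
          substitute = find_substitute(destination, intention, info)
          return f"{destination} may not suit your plans; how about {substitute} instead?"
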
  • Patent number: 11495244
    Abstract: A computer may train a single-class machine learning model using normal speech recordings. The machine learning model, or any other model, may estimate the normal range of parameters of a physical speech production model based on the normal speech recordings. For example, the computer may use a source-filter model of speech production, where voiced speech is represented by a pulse train and unvoiced speech by random noise, and a combination of the pulse train and the random noise is passed through an auto-regressive filter that emulates the human vocal tract. The computer leverages the fact that intentional modification of human voice introduces errors into the source-filter model or any other physical model of speech production. The computer may identify anomalies in the physical model to generate a voice modification score for an audio signal. The voice modification score may indicate a degree of abnormality of human voice in the audio signal.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: November 8, 2022
    Assignee: PINDROP SECURITY, INC.
    Inventors: David Looney, Nikolay D. Gaubitch
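    Example (illustrative, not from the patent): fitting a single-class model to source-filter parameter vectors from normal speech and scoring new audio by how far it falls outside that region; scikit-learn's OneClassSVM is used here only as a convenient stand-in.

      import numpy as np
      from sklearn.svm import OneClassSVM

      def train_normal_voice_model(normal_param_vectors):
          """Fit a single-class model on source-filter parameter vectors from normal speech."""
          model = OneClassSVM(gamma="scale", nu=0.05)
          model.fit(np.asarray(normal_param_vectors))
          return model

      def voice_modification_score(model, param_vector):
          """Higher score = further outside the normal region = more likely modified voice."""
          return -float(model.decision_function(np.asarray(param_vector).reshape(1, -1))[0])
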
  • Patent number: 11488589
    Abstract: Techniques for processing a voice initiated request by a web server are presented. The techniques may include receiving, by a web server, request data representing a voice command to a user device, the request data including an identification of a requested webpage; determining, by the web server, that a response to the request data will continue a voice interaction; and providing, by the web server and to the user device, data for a voice enabled webpage associated with the requested webpage, where the data for the voice enabled webpage is configured to invoke a voice interface for the user device.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: November 1, 2022
    Assignee: VeriSign, Inc.
    Inventors: Andrew Fregly, Andrew Kaizer, Burton S. Kaliski, Jr., Patrick Kane, Swapneel Sheth, Hari Sola, Paul Tidwell, Pedro Vasquez
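    Example (illustrative, not from the patent): a server-side decision that returns voice-enabled page data, plus a flag to invoke the device's voice interface, when the request originated from a voice command; the request/response dictionaries are assumptions.

      def handle_voice_request(request_data, voice_pages):
          """Return voice-enabled page data, plus a flag telling the device to invoke its
          voice interface, when the request originated from a voice command."""
          page = request_data["page"]
          if request_data.get("voice_command") and page in voice_pages:
              return {"page": voice_pages[page], "invoke_voice_interface": True}
          return {"page": page, "invoke_voice_interface": False}
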
  • Patent number: 11481548
    Abstract: A method, computer program, and computer system for recovering a dropped pronoun are provided, comprising receiving data corresponding to one or more input words and determining contextual representations for the received input word data. The dropped pronoun may be identified based on a probability value associated with the contextual representations, and a span that is associated with one or more of the received input words and that indicates which of the input words the dropped pronoun refers to may be determined.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: October 25, 2022
    Assignee: TENCENT AMERICA LLC
    Inventor: Linfeng Song
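    Example (illustrative, not from the patent): selecting the most probable dropped-pronoun position and the input-word span it refers to from model scores; the score formats and threshold are assumptions.

      def recover_dropped_pronoun(drop_probs, span_scores, threshold=0.5):
          """Pick the position most likely to hold a dropped pronoun and the input-word
          span (start, end) it refers to, from per-position and per-span model scores."""
          position, prob = max(enumerate(drop_probs), key=lambda x: x[1])
          if prob < threshold:
              return None
          referent_span = max(span_scores, key=span_scores.get)
          return {"position": position, "referent_span": referent_span}
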
  • Patent number: 11445145
    Abstract: The present application relates to the technical field of communication and provides a method and a device for controlling camera shooting, a smart device, and a computer storage medium. The method includes: collecting voice data of a sound source object; extracting a voice feature based on the voice data of the sound source object; determining a current voice scene according to the extracted voice feature and the voice features corresponding to preset voice scenes; and acquiring a shooting mode corresponding to the current voice scene and controlling movement of the camera according to that shooting mode. With the method above, frequent shaking of the camera can be avoided, and shooting efficiency and user experience can be improved.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: September 13, 2022
    Assignee: SHENZHEN GRANDSUN ELECTRONIC CO., LTD.
    Inventors: Dejun Jiang, Haiquan Wu, Enqin Zhang, Lei Cao, Ruiwen Shi
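    Example (illustrative, not from the patent): matching the extracted voice feature against preset voice-scene profiles and returning the corresponding shooting mode; the distance measure and data layout are assumptions.

      import numpy as np

      def pick_shooting_mode(voice_feature, scene_profiles, shooting_modes):
          """Compare the extracted voice feature with each preset scene's reference feature
          and return the shooting mode of the closest scene."""
          distances = {scene: np.linalg.norm(np.asarray(voice_feature) - np.asarray(ref))
                       for scene, ref in scene_profiles.items()}
          current_scene = min(distances, key=distances.get)
          return shooting_modes[current_scene]
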