Patents Examined by Uthej Kunamneni
  • Patent number: 11861492
    Abstract: Various embodiments provide for quantizing a trained neural network with removal of normalization with respect to at least one layer of the quantized neural network, such as a quantized multiple fan-in layer (e.g., element-wise add or sum layer).
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: January 2, 2024
    Assignee: Cadence Design Systems, Inc.
    Inventor: Ming Kai Hsu
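The abstract of 11861492 above concerns running a quantized multiple fan-in layer (such as an element-wise add) without a separate normalization/requantization stage. A minimal sketch of one way to read that idea, assuming simple per-tensor affine quantization and hypothetical scales; this is an illustration, not the patented method:

```python
import numpy as np

def quantize(x, scale, zero_point=0, dtype=np.int8):
    """Per-tensor affine quantization: q = round(x / scale) + zero_point."""
    q = np.round(x / scale) + zero_point
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(q, scale, zero_point=0):
    return (q.astype(np.float32) - zero_point) * scale

def quantized_add(qa, scale_a, qb, scale_b, scale_out):
    """Element-wise add of two quantized tensors without a separate
    dequantize -> normalize -> requantize stage: each input is rescaled to the
    output scale with a fixed-point multiplier folded into the add itself."""
    ma = np.int32(round(scale_a / scale_out * 256))   # fixed-point multipliers
    mb = np.int32(round(scale_b / scale_out * 256))
    acc = qa.astype(np.int32) * ma + qb.astype(np.int32) * mb
    out = np.right_shift(acc + 128, 8)                # divide by 256, rounded
    return np.clip(out, -128, 127).astype(np.int8)

# Toy usage with hypothetical scales.
a = np.random.randn(4).astype(np.float32)
b = np.random.randn(4).astype(np.float32)
sa, sb, so = 0.02, 0.03, 0.04
q_sum = quantized_add(quantize(a, sa), sa, quantize(b, sb), sb, so)
print(dequantize(q_sum, so), a + b)
```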
  • Patent number: 11830477
    Abstract: An automatic speech recognition (ASR) system that determines a textual representation of a word from a word spoken in a natural language is provided. The ASR system uses an acoustic model, a language model, and a decoder. When the ASR system receives a spoken word, the acoustic model generates word candidates for the spoken word. The language model determines an n-gram score for each word candidate. The n-gram score includes a base score and a bias score. The bias score is based on a logarithmic probability of the word candidate, where the logarithmic probability is derived using a class-based language model where the words are clustered into non-overlapping clusters according to word statistics. The decoder decodes a textual representation of the spoken word from the word candidates and the corresponding n-gram score for each word candidate.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: November 28, 2023
    Assignee: Salesforce, Inc.
    Inventors: Young Mo Kang, Yingbo Zhou
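The scoring in 11830477 above adds a bias, derived from a class-based language model's log probability, to a base n-gram score. A minimal sketch with hypothetical clusters, probabilities, and bias weight:

```python
import math

# Hypothetical class-based language model: each word belongs to exactly one
# cluster; P(word) = P(cluster) * P(word | cluster).
cluster_of = {"playlist": "MEDIA", "podcast": "MEDIA", "invoice": "BUSINESS"}
log_p_cluster = {"MEDIA": math.log(0.2), "BUSINESS": math.log(0.05)}
log_p_word_in_cluster = {"playlist": math.log(0.6),
                         "podcast": math.log(0.4),
                         "invoice": math.log(1.0)}

def bias_score(word, weight=0.5):
    """Bias derived from the class-based LM log probability of the candidate."""
    c = cluster_of[word]
    return weight * (log_p_cluster[c] + log_p_word_in_cluster[word])

def ngram_score(word, base_log_prob, weight=0.5):
    """Total n-gram score = base score + bias score, as in the abstract."""
    return base_log_prob + bias_score(word, weight)

# Rescoring two acoustic-model word candidates with hypothetical base scores.
for cand, base in [("playlist", math.log(0.01)), ("invoice", math.log(0.02))]:
    print(cand, round(ngram_score(cand, base), 3))
```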
  • Patent number: 11823666
    Abstract: Automatic measurement of semantic textual similarity of conversations, by: receiving two conversation texts, each comprising a sequence of utterances; encoding each of the sequences of utterances into a corresponding sequence of semantic representations; computing a minimal edit distance between the sequences of semantic representations; and, based on the computation of the minimal edit distance, performing at least one of: quantifying a semantic similarity between the two conversation texts, and outputting an alignment of the two sequences of utterances with each other.
    Type: Grant
    Filed: October 4, 2021
    Date of Patent: November 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Ofer Lavi, Inbal Ronen, Ella Rabinovich, David Boaz, David Amid, Segev Shlomov, Ateret Anaby-Tavor
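The minimal-edit-distance computation in 11823666 above can be sketched with standard dynamic programming, using cosine distance between (here randomly generated, hypothetical) utterance embeddings as the substitution cost; backtracking over the same table would yield the utterance alignment:

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_edit_distance(embs_a, embs_b, indel_cost=1.0):
    """Edit distance between two sequences of utterance embeddings.
    Substitution cost is the cosine distance between embeddings, so
    semantically similar utterances are cheap to align with each other."""
    n, m = len(embs_a), len(embs_b)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1) * indel_cost
    dp[0, :] = np.arange(m + 1) * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = min(dp[i - 1, j] + indel_cost,          # delete
                           dp[i, j - 1] + indel_cost,          # insert
                           dp[i - 1, j - 1] + cosine_distance(embs_a[i - 1],
                                                              embs_b[j - 1]))
    return dp[n, m]   # backtracking over dp would give the alignment

# Toy usage with random stand-ins for sentence-encoder outputs.
rng = np.random.default_rng(0)
conv_a, conv_b = rng.normal(size=(5, 16)), rng.normal(size=(6, 16))
print(semantic_edit_distance(conv_a, conv_b))
```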
  • Patent number: 11798529
    Abstract: A language module is jointly trained with a knowledge module for natural language understanding by aligning a first knowledge graph with a second knowledge graph. The knowledge module is trained on the aligned knowledge graphs. Then, the knowledge module is integrated with the language module to generate an integrated knowledge-language module.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 24, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chenguang Zhu, Nanshan Zeng
  • Patent number: 11790929
    Abstract: According to an aspect, a WPE-based dereverberation apparatus using virtual acoustic channel expansion based on a deep neural network includes a signal reception unit for receiving as input a first speech signal through a single-channel microphone, a signal generation unit for generating a second speech signal by applying a virtual acoustic channel expansion algorithm based on a deep neural network to the first speech signal, and a dereverberation unit for removing reverberation of the first speech signal and generating a dereverberated signal from which the reverberation has been removed by applying a dual-channel weighted prediction error (WPE) algorithm based on a deep neural network to the first speech signal and the second speech signal.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: October 17, 2023
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon Hyuk Chang, Joon Young Yang
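A compact sketch of the chain in 11790929 above, working directly on STFTs: a placeholder function stands in for the DNN-based virtual acoustic channel expansion, and a basic single-iteration dual-channel weighted prediction error (WPE) filter performs the dereverberation. This is a simplified illustration under those assumptions, not the patented system:

```python
import numpy as np

def make_virtual_channel(stft_1ch):
    """Stand-in for the DNN-based virtual acoustic channel expansion:
    here it merely perturbs the real channel. A trained network would go here."""
    rng = np.random.default_rng(0)
    return stft_1ch * (1.0 + 0.05 * rng.standard_normal(stft_1ch.shape))

def dual_channel_wpe(Y, taps=8, delay=3, eps=1e-8):
    """Single-iteration weighted prediction error (WPE) dereverberation.
    Y: complex STFT of shape (channels, frames, freq_bins)."""
    D, T, F = Y.shape
    X = Y.copy()
    for f in range(F):
        Yf = Y[:, :, f]                                  # (D, T)
        lam = np.mean(np.abs(Yf) ** 2, axis=0) + eps     # power weights, (T,)
        # Stack delayed observations into shape (D * taps, T).
        Ytil = np.zeros((D * taps, T), dtype=complex)
        for k in range(taps):
            shift = delay + k
            Ytil[k * D:(k + 1) * D, shift:] = Yf[:, :T - shift]
        R = (Ytil / lam) @ Ytil.conj().T                 # weighted correlation
        P = (Ytil / lam) @ Yf.conj().T
        G = np.linalg.solve(R + eps * np.eye(D * taps), P)
        X[:, :, f] = Yf - (G.conj().T @ Ytil)            # subtract late reverb
    return X

# Toy usage: a random complex array stands in for the microphone signal's STFT.
stft_mic = np.random.randn(1, 100, 65) + 1j * np.random.randn(1, 100, 65)
stft_pair = np.concatenate([stft_mic, make_virtual_channel(stft_mic)], axis=0)
dereverbed = dual_channel_wpe(stft_pair)[0]              # keep the real channel
```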
  • Patent number: 11775763
    Abstract: Systems and methods are provided for weakly supervised training of a machine-learning model to perform named-entity recognition. All possible entity candidates and all possible rule candidates are automatically identified in an input data set of unlabeled text. An initial training of the machine-learning model is performed using labels assigned to entity candidates by a set of seeding rules as a first set of training data. The trained machine-learning model is then applied to the unlabeled text, and a subset of rules from the rule candidates is identified that produces labels that most accurately match the labels assigned by the trained machine-learning model. The machine-learning model is then retrained using the labels assigned by the identified subset of rules as the second set of training data. This process is iteratively repeated to further refine and improve the performance of the machine-learning model for named-entity recognition.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: October 3, 2023
    Assignee: Robert Bosch GmbH
    Inventors: Jiacheng Li, Haibo Ding, Zhe Feng
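The rule-selection step in 11775763 above keeps the candidate labeling rules whose labels best agree with the current model's predictions, then retrains on those rules' labels. A minimal sketch of that selection, with hypothetical rules, model predictions, and agreement threshold:

```python
from typing import Callable, Dict, List, Optional

Token = str
Label = Optional[str]          # e.g. "ORG", "LOC", or None for "rule not fired"
Rule = Callable[[Token], Label]

def rule_precision(rule: Rule, tokens: List[Token],
                   model_labels: Dict[Token, Label]) -> float:
    """Fraction of tokens the rule labels that agree with the model."""
    fired = [(t, rule(t)) for t in tokens if rule(t) is not None]
    if not fired:
        return 0.0
    return sum(model_labels.get(t) == lab for t, lab in fired) / len(fired)

def select_rules(candidates: Dict[str, Rule], tokens: List[Token],
                 model_labels: Dict[Token, Label],
                 threshold: float = 0.8) -> Dict[str, Rule]:
    """Keep candidate rules whose labels most accurately match the model's
    predictions; their labels become the next round's training data."""
    return {name: r for name, r in candidates.items()
            if rule_precision(r, tokens, model_labels) >= threshold}

# Toy usage: two candidate rules checked against hypothetical model output.
tokens = ["Bosch", "Stuttgart", "engine", "GmbH"]
model_labels = {"Bosch": "ORG", "Stuttgart": "LOC", "GmbH": "ORG"}
candidates = {
    "ends_with_gmbh": lambda t: "ORG" if t.endswith("GmbH") else None,
    "is_capitalized": lambda t: "ORG" if t[:1].isupper() else None,
}
print(list(select_rules(candidates, tokens, model_labels)))
```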
  • Patent number: 11748594
    Abstract: An electronic apparatus includes a memory configured to store a first artificial intelligence model and a processor connected to the memory. The processor is configured to: based on receiving an input audio signal, obtain an input frequency spectrum image representing a frequency spectrum of the input audio signal; input the input frequency spectrum image to the first artificial intelligence model; obtain an output frequency spectrum image from the first artificial intelligence model; and obtain an output audio signal based on the output frequency spectrum image. The first artificial intelligence model is trained based on a target learning image, and the target learning image represents a target frequency spectrum of a specific style and is obtained from a second artificial intelligence model based on a random value.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: September 5, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Anant Baijal, Jeongrok Jang
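The signal flow in 11748594 above — input audio to a frequency-spectrum image, through a model, and back to audio — can be sketched with an STFT round trip. The style_model below is an identity placeholder; the trained first AI model of the abstract would take its place, and reusing the input phase is an assumption of this sketch:

```python
import numpy as np
from scipy.signal import stft, istft

def style_model(spectrum_image: np.ndarray) -> np.ndarray:
    """Placeholder for the first AI model: returns the magnitude image
    unchanged. A trained network would map it to the target style here."""
    return spectrum_image

def process(audio: np.ndarray, fs: int = 16000) -> np.ndarray:
    # 1) Input audio -> frequency-spectrum image (magnitude) plus phase.
    _, _, Z = stft(audio, fs=fs, nperseg=512)
    magnitude, phase = np.abs(Z), np.angle(Z)
    # 2) Feed the spectrum image to the model, get an output spectrum image.
    styled = style_model(magnitude)
    # 3) Output spectrum image -> output audio (reusing the input phase).
    _, out = istft(styled * np.exp(1j * phase), fs=fs, nperseg=512)
    return out

# Toy usage on one second of noise.
print(process(np.random.randn(16000)).shape)
```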
  • Patent number: 11741967
    Abstract: An automatic speech recognition system and a method thereof are provided. The system includes an encoder and a decoder. The encoder comprises a plurality of encoder layers. At least one encoder layer includes a plurality of encoder sublayers fused into one or more encoder kernels. The system further comprises a first pair of ping-pong buffers communicating with the one or more encoder kernels. The decoder comprises a plurality of decoder layers. At least one decoder layer includes a plurality of decoder sublayers fused into one or more decoder kernels. The decoder receives a decoder input related to the encoder output and generates a decoder output. The decoder sends the decoder output to a beam search kernel.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: August 29, 2023
    Assignee: KWAI INC.
    Inventors: Yongxiong Ren, Heng Liu, Yang Liu, Lingzhi Liu, Jie Li, Yuanyuan Zhao, Xiaorui Wang
  • Patent number: 11741317
    Abstract: The disclosure relates to a system and method for processing multilingual user inputs using a Single Natural Language Processing (SNLP) model. The method includes receiving a user input in a source language and translating the user input to generate a plurality of translated user inputs in an intermediate language. The method includes using the SNLP model, configured only using the intermediate language, to generate a plurality of sets of intermediate input vectors in the intermediate language. The method includes processing the plurality of sets of intermediate input vectors in the intermediate language using at least one of a plurality of predefined mechanisms to identify a predetermined response. The method includes translating the predetermined response to generate a translated response that is rendered to the user.
    Type: Grant
    Filed: May 25, 2021
    Date of Patent: August 29, 2023
    Inventor: Rajiv Trehan
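The pivot flow in 11741317 above — translate the input into an intermediate language, run a single NLP model configured only for that language, then translate the chosen response back — can be sketched as follows. The translation and SNLP functions are hypothetical placeholders, and a single translation stands in for the plurality of translated inputs described in the abstract:

```python
# Hypothetical placeholders: a real system would call machine-translation
# services and the Single NLP (SNLP) model trained on the intermediate language.
def translate(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"

def snlp_identify_response(intermediate_text: str) -> str:
    # The SNLP model maps intermediate-language input to a predetermined response.
    return "Your order has shipped."

def handle_user_input(text: str, source_lang: str,
                      intermediate_lang: str = "en") -> str:
    pivoted = translate(text, source_lang, intermediate_lang)   # to the pivot
    response = snlp_identify_response(pivoted)                  # one model only
    return translate(response, intermediate_lang, source_lang)  # back to user

print(handle_user_input("¿Dónde está mi pedido?", source_lang="es"))
```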
  • Patent number: 11727940
    Abstract: The present disclosure relates to automatically correcting mispronounced keywords during a conference session. More particularly, the present invention provides methods and systems for automatically correcting audio data generated from audio input having indications of mispronounced keywords in an audio/video conferencing system. In some embodiments, the process of automatically correcting the audio data may require a re-encoding process of the audio data at the conference server. In alternative embodiments, the process may require updating the audio data at the receiver end of the conferencing system.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: August 15, 2023
    Assignee: Rovi Guides, Inc.
    Inventors: Daina Emmanuel, Padmassri Chandrashekar
  • Patent number: 11729236
    Abstract: A sampling rate processing method performed by a computer device is disclosed. The method includes: obtaining a first audio signal recorded by a transmitting device, the first audio signal being recorded according to an initial sampling rate of the transmitting device; obtaining a second audio signal recorded by a receiving device during playing of the first audio signal, the second audio signal being recorded according to the initial sampling rate; determining a frequency response gain value of the receiving device according to a power spectrum of the first audio signal and a power spectrum of the second audio signal; determining a target sampling rate of the transmitting device according to the initial sampling rate and the frequency response gain value; and configuring the transmitting device to record audio signals according to the target sampling rate.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: August 15, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Junbin Liang
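One way to read the steps in 11729236 above: estimate the power spectra of the played and re-recorded signals, take their ratio as the frequency response gain, and choose a target sampling rate that covers only the band the receiving device actually reproduces. The gain floor and the "lowest common rate whose Nyquist frequency covers the usable band" rule below are assumptions for illustration, not the patent's rule:

```python
import numpy as np
from scipy.signal import butter, lfilter, welch

def frequency_response_gain(sent, received, fs):
    """Gain of the receiving device: ratio of received to sent power spectra."""
    f, p_sent = welch(sent, fs=fs, nperseg=1024)
    _, p_recv = welch(received, fs=fs, nperseg=1024)
    return f, p_recv / (p_sent + 1e-12)

def target_sampling_rate(sent, received, fs_initial, gain_floor=1e-3):
    """Pick the lowest common rate whose Nyquist frequency still covers the
    highest frequency at which the measured gain stays above gain_floor."""
    f, gain = frequency_response_gain(sent, received, fs_initial)
    usable = f[gain > gain_floor]
    f_max = usable.max() if usable.size else 0.0
    for rate in (8000, 16000, 24000, 32000, 44100, 48000):
        if rate / 2 >= f_max:
            return rate
    return fs_initial

# Toy usage: the "receiving device" only reproduces content below ~4 kHz.
fs = 48000
sent = np.random.randn(fs)
b, a = butter(8, 4000, fs=fs)            # crude model of the receiver's response
received = lfilter(b, a, sent)
print(target_sampling_rate(sent, received, fs))
```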
  • Patent number: 11688408
    Abstract: Embodiments provide an audio processor for processing an audio signal to obtain a subband representation of the audio signal. The audio processor is configured to perform a cascaded lapped critically sampled transform on at least two partially overlapping blocks of samples of the audio signal, to obtain a set of subband samples on the basis of a first block of samples of the audio signal, and to obtain a corresponding set of subband samples on the basis of a second block of samples of the audio signal. Further, the audio processor is configured to perform a weighted combination of two corresponding sets of subband samples, one obtained on the basis of the first block of samples of the audio signal and one obtained on the basis of the second block of samples of the audio signal, to obtain an aliasing-reduced subband representation of the audio signal.
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: June 27, 2023
    Assignee: FRAUNHOFER-GESELLSCHAFT ZUR FĂ–RDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
    Inventors: Nils Werner, Bernd Edler, Sascha Disch
  • Patent number: 11645464
    Abstract: Systems, computer-implemented methods, and computer program products to transform a lexicon that describes an information asset are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a term validation component that can determine, from a subject matter expert, a validated term that indicates validation of a candidate term describing an information asset. The computer executable components can further comprise a lexicon transforming component that, based on the validated term, can transform a lexicon that describes the information asset by incorporating the validated term into the lexicon.
    Type: Grant
    Filed: March 18, 2021
    Date of Patent: May 9, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Anna Lisa Gentile, Chad Eric DeLuca, Petar Ristoski, Ismini Lourentzou, Linda Ha Kato, Alfredo Alba, Daniel Gruhl, Steven R. Welch
  • Patent number: 11615252
    Abstract: A dispatcher virtual assistant (DVA) that can augment the capability of emergency dispatchers while reducing human errors. Major functions of the DVA include updating an emergency incident's status in real time, recommending or reminding the dispatcher to take proper actions at the right time, answering the dispatcher's inquiries for task-related information, and fulfilling the dispatcher's request for an incident report. The DVA system includes a dispatcher language model, based on machine-learning and deep-learning algorithms, for extracting the status of a live incident from incoming incident logs and for processing and answering inquiries or requests from the dispatcher. It is customizable for different types of emergencies and for different local communities. The DVA can be used in tandem with an existing computer-aided dispatch (CAD) system.
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: March 28, 2023
    Assignee: D8AI Inc.
    Inventors: Yin-Hsuan Wei, Angela Chen, Yuh-Bin Tsai, Fu-Chieh Chang, You-Zheng Yin, Zai-Ching Wen, Pei-Hua Chen, Hsiang-Pin Lee, Richard Li-Cheng Sheng, Hui Hsiung
  • Patent number: 11538487
    Abstract: The disclosure describes a voice signal enhancing method and device, which divide a voice signal captured at the present scene into multiple frame signals based on a preset time interval; feed the frame signals into a trained neural network based on a preset step size; perform convolution operations on the frame signals through skip-connected convolutional layers to obtain multiple enhanced frame signals; and superpose the enhanced frame signals according to their positions in the time domain to obtain an enhanced voice signal. Compared with the prior art, the present disclosure enhances voice signals automatically through the neural network, without manual intervention, so the effectiveness and applicable scenes of voice enhancement need not be limited by a preset method or its designers, thereby reducing the occurrence of signal distortion and extra noise and in turn improving the quality of voice signal enhancement.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: December 27, 2022
    Assignee: YEALINK (XIAMEN) NETWORK TECHNOLOGY CO., LTD.
    Inventors: Wanjian Feng, Lianchang Zhang, Jiantao Liu
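The pipeline in 11538487 above — divide the signal into frames at a preset interval, pass each frame through skip-connected convolutional layers, then superpose the enhanced frames in the time domain — can be sketched as follows. The tiny random, untrained "network" is a placeholder for the trained model:

```python
import numpy as np

def skip_conv_layer(frame, kernel):
    """One convolutional layer with a skip connection: the convolution output
    is added back onto the layer input (random kernel stands in for weights)."""
    return frame + np.convolve(frame, kernel, mode="same")

def enhance_frame(frame, kernels):
    for k in kernels:
        frame = skip_conv_layer(frame, k)
    return frame

def enhance_voice(signal, frame_len=400, hop=200):
    """Split into frames at a preset interval, enhance each frame, and
    superpose them (windowed overlap-add) in the time domain."""
    rng = np.random.default_rng(0)
    kernels = [0.01 * rng.standard_normal(9) for _ in range(3)]
    window = np.hanning(frame_len)        # 50% overlap sums to ~constant gain
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        out[start:start + frame_len] += window * enhance_frame(frame, kernels)
    return out

print(enhance_voice(np.random.randn(16000)).shape)
```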
  • Patent number: 11514332
    Abstract: A method, computer program product, and system for a cognitive dialoguing avatar are provided. The method includes identifying a user, a target entity, and a user goal; initiating communication with the target entity; cognitively evaluating a question from a dialog with the target entity; cognitively determining an answer to the question by evaluating stored user information to progress toward the user goal; and communicating the determined answer to the target entity.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: November 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Adam T. Clark, Nathaniel D. Lee, Daniel J. Strauss
  • Patent number: 11514915
    Abstract: A system and corresponding method are provided for generating responses for a dialogue between a user and a computer. The system includes a memory storing information for a dialogue history and a knowledge base. An encoder may receive a new utterance from the user and generate a global memory pointer used for filtering the knowledge base information in the memory. A decoder may generate at least one local memory pointer and a sketch response for the new utterance. The sketch response includes at least one sketch tag to be replaced by knowledge base information from the memory. The system generates the dialogue computer response using the local memory pointer to select a word from the filtered knowledge base information to replace the at least one sketch tag in the sketch response.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: November 29, 2022
    Assignee: salesforce.com, inc.
    Inventors: Chien-Sheng Wu, Caiming Xiong, Richard Socher
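The final composition step in 11514915 above — fill each sketch tag in the sketch response with a knowledge-base word selected via the memory pointers — can be sketched as follows. The global pointer is treated here as a per-record gate and the local pointer as a score over the filtered records; all names and scores are hypothetical:

```python
import re
from typing import List

# Hypothetical filtered knowledge base: the global memory pointer has already
# scored these records (gates in [0, 1]); each record is tagged with a type
# matching the sketch tags the decoder emits.
knowledge_base = [
    {"word": "Pizza Palace", "type": "restaurant", "global_gate": 0.9},
    {"word": "Sushi Bar",    "type": "restaurant", "global_gate": 0.2},
    {"word": "5 pm",         "type": "time",       "global_gate": 0.8},
]

def fill_sketch(sketch: str, local_pointer: List[float]) -> str:
    """Replace each @tag in the sketch response with the knowledge-base word
    whose combined global-gate * local-pointer score is highest among records
    of that tag's type."""
    def pick(match):
        tag = match.group(1)
        scored = [(rec["global_gate"] * local_pointer[i], rec["word"])
                  for i, rec in enumerate(knowledge_base) if rec["type"] == tag]
        return max(scored)[1] if scored else match.group(0)
    return re.sub(r"@(\w+)", pick, sketch)

# Sketch response produced by the decoder, with sketch tags still in place.
print(fill_sketch("How about @restaurant at @time?", [0.7, 0.9, 0.8]))
```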