Patents Examined by Mulugeta Tuji Dugda
  • Patent number: 12346657
    Abstract: Systems and methods are provided for adapting a pretrained language model to perform cybersecurity-specific named entity recognition and relation extraction. The method includes introducing a pretrained language model and a corpus of security text to a model adaptor, and generating a fine-tuned language model through unsupervised training utilizing the security text corpus. The method further includes combining a joint extraction model from a head for joint extraction with the fine-tuned language model to form an adapted joint extraction model that can perform entity and relation label prediction. The method further includes applying distant labels to security text in the corpus of security text to produce security text with distant labels, and performing Distant Supervision Training for joint extraction on the adapted joint extraction model using the security text to transform the adapted joint extraction model into a Security Language Model for named-entity recognition (NER) and relation extraction (RE).
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: July 1, 2025
    Assignee: NEC Corporation
    Inventors: Xiao Yu, Yanchi Liu, Haifeng Chen, Yufei Li
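    The distant-labeling step described above can be illustrated with a minimal sketch: token spans that match a seed dictionary of known security entities receive entity labels without human annotation. The dictionary entries and entity types below are hypothetical stand-ins, not the patent's actual labeling scheme.

```python
def distant_label(tokens, entity_dict):
    """Assign BIO-style entity labels to tokens by dictionary lookup
    (distant supervision: no human-annotated labels needed)."""
    labels = ["O"] * len(tokens)
    for i, tok in enumerate(tokens):
        etype = entity_dict.get(tok.lower())
        if etype:
            labels[i] = "B-" + etype
    return labels

# Hypothetical seed dictionary of security entities
seed = {"mimikatz": "MALWARE", "cve-2021-44228": "VULN"}
tokens = "Attackers used Mimikatz to exploit CVE-2021-44228".split()
print(distant_label(tokens, seed))
# -> ['O', 'O', 'B-MALWARE', 'O', 'O', 'B-VULN']
```

    In practice these noisy distant labels would then drive the joint NER/RE training described in the abstract.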
  • Patent number: 12340806
    Abstract: A method of transcript presentation may include generating, by a device, audio data by capturing via a microphone of the device an audible audio signal that is broadcast by the device. The audible audio signal may include words. The method may also include obtaining, at the device, transcript data. The transcript data may be generated using the audio data and may include a transcription of the words of the audible audio signal. The method may also include presenting, by the device, the transcription.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: June 24, 2025
    Assignee: Sorenson IP Holdings, LLC
    Inventor: David Thomson
  • Patent number: 12300259
    Abstract: A method for dynamically controlling enhancement of an audio stream is provided, where the audio stream defines a sequence of audio segments over time. Each audio segment defines a waveform having a plurality of waveform attributes. For each audio segment of the sequence of audio segments, the method includes: (i) determining a set of waveform-attribute values of the audio segment's waveform attributes, (ii) computing a first distance between the determined set of waveform-attribute values and a first predefined set of waveform-attribute values representative of speech, and computing a second distance between the determined set of waveform-attribute values and a second predefined set of waveform-attribute values representative of music, (iii) using the computed first and second distances as a basis to classify the audio segment as primarily speech or primarily music, and (iv) controlling, based on the classifying, whether or not to enhance the audio segment for output.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: May 13, 2025
    Assignee: Roku, Inc.
    Inventors: David Henry Friedman, Alan Robert Bithell, Robert Caston Curtis
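    The distance-based classification in steps (i)-(iv) can be sketched directly. The reference vectors and the enhancement decision here are placeholders, assuming Euclidean distance over the waveform-attribute values; the patent does not specify a particular distance metric.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_segment(attrs, speech_ref, music_ref):
    """Classify a segment as 'speech' or 'music' by comparing its
    waveform-attribute vector to predefined reference vectors."""
    d_speech = euclidean(attrs, speech_ref)   # step (ii), first distance
    d_music = euclidean(attrs, music_ref)     # step (ii), second distance
    return "speech" if d_speech <= d_music else "music"  # step (iii)

def should_enhance(attrs, speech_ref, music_ref):
    # Step (iv): enhance (e.g., dialog boost) only speech-like segments.
    return classify_segment(attrs, speech_ref, music_ref) == "speech"
```

    A segment whose attribute vector sits closer to the speech reference than to the music reference is enhanced; otherwise it is passed through unmodified.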
  • Patent number: 12293770
    Abstract: A speech signal dereverberation processing method includes extracting an amplitude spectrum feature and a phase spectrum feature of a current frame in an original speech signal, extracting subband amplitude spectrums from the amplitude spectrum feature corresponding to the current frame, determining, based on the subband amplitude spectrums and by using a first reverberation predictor, a reverberation strength indicator corresponding to the current frame, and determining, based on the subband amplitude spectrums and the reverberation strength indicator, and by using a second reverberation predictor, a clean speech subband spectrum corresponding to the current frame.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: May 6, 2025
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Rui Zhu, Juan Juan Li, Yan Nan Wang, Yue Peng Li
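    The per-frame, two-predictor pipeline in this abstract can be sketched as follows. The two predictors are passed in as callables because the patent treats them as learned models; the trivial predictors in the test are stand-ins, and the equal-width subband split is an assumption.

```python
def subband_amplitudes(amplitude_spectrum, n_subbands):
    """Split a frame's amplitude spectrum into averaged subband amplitudes."""
    step = len(amplitude_spectrum) // n_subbands
    return [sum(amplitude_spectrum[i * step:(i + 1) * step]) / step
            for i in range(n_subbands)]

def dereverb_frame(amplitude_spectrum, phase, predict_strength, predict_clean,
                   n_subbands=4):
    """Per-frame dereverberation: extract subband amplitudes, estimate a
    reverberation strength indicator (first predictor), then estimate the
    clean speech subband spectrum (second predictor). Phase is kept as-is."""
    subbands = subband_amplitudes(amplitude_spectrum, n_subbands)
    strength = predict_strength(subbands)
    clean_subbands = predict_clean(subbands, strength)
    return clean_subbands, phase
```

    The clean subband spectrum would then be recombined with the original phase feature to synthesize the dereverberated frame.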
  • Patent number: 12293760
    Abstract: A vehicle agent device receives utterance information from an on-board unit, analyzes the content of the utterance, and detects from a database, as a non-installed function, a function that an occupant intended to utilize but that was not installed and is installable. The device generates proposal information for furnishing the occupant with information relating to the detected non-installed function, and sends the generated proposal information to the on-board unit, thereby sending the information relating to the non-installed function to a preregistered mobile device carried by the occupant.
    Type: Grant
    Filed: October 6, 2021
    Date of Patent: May 6, 2025
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventors: Chikage Kubo, Keiko Nakano, Eiichi Maeda, Hiroyuki Nishizawa
  • Patent number: 12283281
    Abstract: Embodiments are disclosed for bitrate distribution in immersive voice and audio services. In an embodiment, a method of encoding an IVAS bitstream comprises: receiving an input audio signal; downmixing the input audio signal into one or more downmix channels and spatial metadata; reading a set of one or more bitrates for the downmix channels and a set of quantization levels for the spatial metadata from a bitrate distribution control table; determining a combination of the one or more bitrates for the downmix channels; determining a metadata quantization level from the set of metadata quantization levels using a bitrate distribution process; quantizing and coding the spatial metadata using the metadata quantization level; generating, using the combination of one or more bitrates, a downmix bitstream for the one or more downmix channels; combining the downmix bitstream, the quantized and coded spatial metadata and the set of quantization levels into the IVAS bitstream.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: April 22, 2025
    Assignee: Dolby Laboratories Licensing Corporation
    Inventors: Rishabh Tyagi, Juan Felix Torres, Stefanie Brown
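    The table-driven bitrate distribution can be sketched as a lookup plus a budget check. The control-table contents and the metadata cost function below are hypothetical, assuming the encoder picks the finest metadata quantization level that fits the bitrate left over after the downmix channels are served.

```python
# Hypothetical bitrate distribution control table:
# total IVAS bitrate (kbps) -> (downmix channel bitrates, quantization levels)
CONTROL_TABLE = {
    32: ((24,), [8, 16]),
    64: ((24, 24), [16, 32]),
}

def distribute_bitrate(total_kbps, metadata_cost):
    """Read channel bitrates and candidate quantization levels from the
    control table, then choose the finest level that fits the remaining
    metadata budget."""
    channel_rates, quant_levels = CONTROL_TABLE[total_kbps]
    budget = total_kbps - sum(channel_rates)
    for level in sorted(quant_levels, reverse=True):
        if metadata_cost(level) <= budget:
            return channel_rates, level
    return channel_rates, min(quant_levels)  # fall back to coarsest level
```

    The chosen channel bitrates drive downmix coding, while the selected quantization level is applied to the spatial metadata and signaled in the bitstream.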
  • Patent number: 12277393
    Abstract: A method of training a ranking model, and an electronic device, which relate to the technical fields of natural language processing and intelligent search. The method includes: in training the ranking model, first acquiring a plurality of first sample pairs and their respective label information; for each first sample pair, inputting a first search text, a first title text of a first candidate text, and a first target summary corresponding to the first candidate text into an initial language model to obtain a second relevance score corresponding to each first sample pair; then using the first target summary to replace the first candidate text to participate in the training of the ranking model, and updating at least one network parameter of the initial language model according to the label information and the second relevance score corresponding to each first sample pair.
    Type: Grant
    Filed: March 9, 2022
    Date of Patent: April 15, 2025
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventor: Lixin Zou
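    The idea of scoring (query, title, summary) triples and training on labeled pairs can be sketched with a toy relevance function and a standard pairwise ranking loss. The overlap-based scorer stands in for the initial language model, and the logistic pairwise loss is an assumption; the patent does not specify the loss.

```python
import math

def relevance_score(query, title, summary, weights):
    """Toy relevance model: weighted word overlap of the query with the
    candidate's title and with its target summary (summary replaces the
    full candidate text, as in the abstract)."""
    q = set(query.split())
    overlap_title = len(q & set(title.split()))
    overlap_summary = len(q & set(summary.split()))
    return weights[0] * overlap_title + weights[1] * overlap_summary

def pairwise_loss(score_pos, score_neg):
    # Pairwise logistic loss: the relevant sample should outscore the
    # irrelevant one; the gradient of this loss updates model parameters.
    return math.log(1 + math.exp(score_neg - score_pos))
```

    Replacing the long candidate text with its summary keeps the model input short while preserving the signal used to compute the relevance score.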
  • Patent number: 12271700
    Abstract: A method for interpreting structured and unstructured content to facilitate tailored transactions is provided. The method includes acquiring, from one or more unstructured data sources, unstructured data, and acquiring, from one or more structured data sources, structured data. The method further includes performing natural language processing (NLP) on both the structured data and the unstructured data using a machine learning algorithm, and generating, via the machine learning algorithm, an NLP response based on the NLP. Based on the NLP response, the method further performs identifying at least one candidate object, and generating a list of actions corresponding to the candidate object.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: April 8, 2025
    Assignee: JPMORGAN CHASE BANK, N.A.
    Inventors: Oleh Filipchuk, Richard Lascelles
  • Patent number: 12266354
    Abstract: Systems and processes for speech interpretation based on environmental context are provided. For example, a user gaze direction is detected, and a speech input is received from a first user of the electronic device. In accordance with a determination that the user gaze is directed at a digital assistant object, the speech input is processed by the digital assistant. In accordance with a determination that the user gaze is not directed at a digital assistant object, contextual information associated with the electronic device is obtained, wherein the contextual information includes speech from a second user. A determination is made as to whether the speech input is directed to a digital assistant of the electronic device. In accordance with a determination that the speech input is directed to the digital assistant of the electronic device, the speech input is processed by the digital assistant.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Brad Kenneth Herman, Shiraz Akmal, Aaron Mackay Burns, David A. Carson
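    The branching logic in this abstract reduces to a small dispatch routine. The `is_directed` callable stands in for the context-based determination (which the patent leaves to the system); everything here is an illustrative sketch, not Apple's implementation.

```python
def route_speech(gaze_on_assistant, speech_input, other_user_speech, is_directed):
    """Decide whether a speech input is handled by the digital assistant.

    gaze_on_assistant: was the user's gaze on the assistant object?
    other_user_speech: contextual speech from a second user.
    is_directed: context-based predicate standing in for the patent's
                 determination step (hypothetical signature).
    """
    if gaze_on_assistant:
        return "assistant"  # gaze at assistant object: process directly
    # Gaze elsewhere: consult environmental context before deciding.
    if is_directed(speech_input, other_user_speech):
        return "assistant"
    return "ignore"
```

    Gaze acts as a fast path; only when it is absent does the system fall back to the costlier contextual determination.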
  • Patent number: 12230275
    Abstract: A speech instruction recognition method, an electronic device, and a non-transitory computer-readable storage medium are provided. The speech instruction recognition method comprises: acquiring a target speech; processing the target speech to obtain a target speech vector corresponding to the target speech; performing speech recognition on the target speech to obtain a target speech text of the target speech, and processing the target speech text to obtain a target text vector corresponding to the target speech text; and inputting the target speech vector and the target text vector to a pre-trained instruction recognition model to obtain an instruction category corresponding to the target speech.
    Type: Grant
    Filed: January 6, 2021
    Date of Patent: February 18, 2025
    Assignee: BOE Technology Group Co., Ltd.
    Inventor: Shaoxun Su
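    The fusion of the speech vector and the text vector can be sketched with simple concatenation feeding a classifier. The nearest-prototype classifier and the instruction labels below are hypothetical stand-ins for the pre-trained instruction recognition model.

```python
def fuse_and_classify(speech_vec, text_vec, classify):
    """Concatenate the speech embedding and text embedding, then hand the
    joint vector to the instruction recognition model."""
    return classify(speech_vec + text_vec)  # list concatenation

def make_prototype_classifier(prototypes):
    """Hypothetical stand-in model: assign the instruction category whose
    prototype vector is nearest (squared Euclidean) to the joint vector."""
    def classify(vec):
        def sq_dist(item):
            _, proto = item
            return sum((a - b) ** 2 for a, b in zip(proto, vec))
        return min(prototypes.items(), key=sq_dist)[0]
    return classify
```

    Using both modalities lets acoustic cues (tone, emphasis) complement the recognized text when the two disagree.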
  • Patent number: 12229514
    Abstract: A system and method are provided for identification and classification of multilingual messages that would be considered inappropriate in an online interactive portal. The system may include processors to generate a set of data of intended inappropriate multilingual messages to train a classification model. The set of data with labels is classified by assigning unique identifiers. The system includes a pre-processing module to eliminate unwanted characters from the set of data to train the classification model. The classification model may be trained by a multilingual representation module based at least in part on the set of data with labels. The classification model determines whether the set of data with one or more labels includes intended inappropriate multilingual messages. Furthermore, a feedback loop module is utilised to retrain the classification model recurrently to update the set of data.
    Type: Grant
    Filed: January 25, 2022
    Date of Patent: February 18, 2025
    Assignee: Extramarks Education India Pvt Ltd.
    Inventors: Ritvik Kulshrestha, Gaurav Sharma, Deep Dwivedi, Abhra Das, Suman Gadhawal, Vipin Tripathi
  • Patent number: 12229510
    Abstract: There are provided systems and methods for named entity recognition in chat dialogues for customer relationship management systems. A service provider, such as an electronic transaction processor for digital transactions, may provide live chat service channels for assistance through live agents and chatbot services. When interacting with these channels, a user may engage in a chat dialogue with live agents. This may include lines of texts corresponding to the exchanged messages and may include named entities for particular types or categories of words that refer to a particular object or thing. To identify these named entities, a natural language processor may utilize machine learning and other engines for named entity recognition in customer relationship management systems to highlight the named entities in live service chats. Agents of the systems may view content that identify the named entities and interact with the named entities to view descriptions.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: February 18, 2025
    Assignee: PAYPAL, INC.
    Inventors: Nikita Alekseyevich Lukyanenko, Alexander Shvid
  • Patent number: 12211491
    Abstract: One or more computer processors obtain an initial subnetwork at a target sparsity and an initial pruning mask from a pre-trained self-supervised learning (SSL) speech model. The one or more computer processors finetune the initial subnetwork by zeroing out one or more masked weights in the initial subnetwork specified by the initial pruning mask, training a new subnetwork from the zeroed-out subnetwork, and pruning one or more weights of lowest magnitude in the new subnetwork, regardless of network structure, to satisfy the target sparsity. The one or more computer processors classify an audio segment with the finetuned subnetwork.
    Type: Grant
    Filed: May 9, 2022
    Date of Patent: January 28, 2025
    Assignee: International Business Machines Corporation
    Inventors: Cheng-I Lai, Yang Zhang, Kaizhi Qian, Chuang Gan, James R. Glass, Alexander Haojan Liu
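    The unstructured magnitude-pruning step can be sketched on a flat weight list. This is a minimal illustration of pruning to a target sparsity, not the patent's full finetune-then-prune loop; in a real SSL model the weights would be tensors across many layers.

```python
def prune_to_sparsity(weights, target_sparsity):
    """Zero out the lowest-magnitude weights, regardless of structure,
    until the target sparsity (fraction of zeroed weights) is reached.
    Returns the pruned weights and the binary pruning mask."""
    n_prune = int(len(weights) * target_sparsity)
    lowest = set(sorted(range(len(weights)),
                        key=lambda i: abs(weights[i]))[:n_prune])
    mask = [0 if i in lowest else 1 for i in range(len(weights))]
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask
```

    The returned mask plays the role of the pruning mask in the abstract: it is reapplied at the start of the next finetuning round so the zeroed weights stay zero.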
  • Patent number: 12093297
    Abstract: The present disclosure provides a summary generation model training method and apparatus, a device and a storage medium, and relates to the field of computer technologies, and in particular, to the field of artificial intelligence such as natural language processing and deep learning. The summary generation model training method includes: acquiring a document representation corresponding to a document sample; constructing, based on the document representation, a summary representation corresponding to the document representation, the summary representation including a positive summary representation and a negative summary representation; and constructing a total contrastive loss function based on the document representation, the positive summary representation and the negative summary representation, and training a summary generation model based on the total contrastive loss function. The present disclosure may improve accuracy of the summary generation model.
    Type: Grant
    Filed: January 18, 2022
    Date of Patent: September 17, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Wenhao Wu, Wei Li, Xinyan Xiao, Jiachen Liu
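    The total contrastive loss over document, positive-summary, and negative-summary representations can be sketched with an InfoNCE-style objective. The cosine similarity and the temperature value are assumptions; the patent specifies only that a contrastive loss is built from the three representations.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def contrastive_loss(doc, pos_summary, neg_summaries, tau=0.1):
    """InfoNCE-style contrastive loss: pull the positive summary
    representation toward the document representation, push the
    negative summary representations away."""
    pos = math.exp(cosine(doc, pos_summary) / tau)
    negs = sum(math.exp(cosine(doc, n) / tau) for n in neg_summaries)
    return -math.log(pos / (pos + negs))
```

    Minimizing this loss during training makes the summary generation model prefer outputs whose representation stays close to the source document, which is the accuracy gain the abstract claims.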