Patents Examined by Jesse S Pullias
  • Patent number: 12165661
    Abstract: There is inter alia disclosed an apparatus for spatial audio encoding which can receive or determine, for one or more audio signals (102), spatial audio parameters (106) on a sub-band basis for providing spatial audio reproduction; the spatial audio parameters can comprise a coherence value (112) for each sub-band of a plurality of sub-bands (202) of a frame. The apparatus then determines a significance measure for the coherence values (401) of the plurality of sub-bands of the frame and uses the significance measure to determine whether to encode (403) the coherence values of the plurality of sub-bands of the frame.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: December 10, 2024
    Assignee: NOKIA TECHNOLOGIES OY
    Inventors: Mikko-Ville Laitinen, Adriana Vasilache
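    Illustrative sketch (not from the patent): a minimal Python example of gating coherence encoding on a per-frame significance measure; the mean-squared-coherence measure and the threshold value are assumptions, not the patented definitions.
```python
import numpy as np

def should_encode_coherence(coherence, threshold=0.05):
    """Decide whether per-sub-band coherence values are worth encoding.

    `coherence` holds per-sub-band coherence values (0..1) for one frame.
    The significance measure (mean squared coherence) and the threshold are
    illustrative choices only.
    """
    coherence = np.asarray(coherence, dtype=float)
    significance = float(np.mean(coherence ** 2))  # hypothetical measure
    return significance >= threshold

# A frame whose sub-band coherence is negligible is not encoded.
frame = [0.01, 0.02, 0.0, 0.03, 0.01]
print(should_encode_coherence(frame))  # False -> coherence bits can be saved
```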
  • Patent number: 12164828
    Abstract: A method in an interactive computing system includes pre-processing natural-language (NL) input from a user command based on natural language processing (NLP) for classifying speech information and non-speech information, obtaining an NLP result from the user command, fetching device-specific information from one or more IoT devices operating in an environment based on the NLP result, generating one or more contextual parameters based on the NLP result and the device-specific information, selecting at least one speaker embedding stored in a database for the one or more IoT devices based on the one or more contextual parameters, and outputting the selected at least one speaker embedding for playback to the user.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: December 10, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sourabh Tiwari, Akshit Jindal, Saksham Goyal, Vinay Vasanth Patage, Ravibhushan B. Tayshete
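    Illustrative sketch (not from the patent): a toy Python selection of a stored speaker embedding by overlap between its context tags and the contextual parameters; the tag-overlap scoring and the data layout are assumptions.
```python
from dataclasses import dataclass

@dataclass
class SpeakerEmbedding:
    name: str
    context_tags: frozenset  # e.g. {"kitchen", "morning"}
    vector: tuple            # the stored embedding itself

def select_embedding(embeddings, contextual_params):
    """Pick the stored embedding whose tags best overlap the contextual
    parameters derived from the NLP result and device information."""
    return max(embeddings, key=lambda e: len(e.context_tags & contextual_params))

db = [
    SpeakerEmbedding("calm_female", frozenset({"night", "bedroom"}), (0.1, 0.7)),
    SpeakerEmbedding("bright_male", frozenset({"morning", "kitchen"}), (0.9, 0.2)),
]
print(select_embedding(db, {"kitchen", "morning"}).name)  # bright_male
```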
  • Patent number: 12158902
    Abstract: Methods, systems, and computer programs are presented for searching the content of voice conversations. The conversations are translated into text and analysis of the conversation is performed to identify information in the conversation. The identified information includes turn-taking data in the conversation and states identified within each turn. A powerful user interface (UI) is provided to review the conversations and add annotations that tag the different turns. Additionally, parameter values are extracted from the text. A powerful search engine is provided with multiple search options, such as searching for text, searching by state within the conversation, searching by parameters extracted from the conversation, or a combination thereof.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: December 3, 2024
    Assignee: Twilio Inc.
    Inventors: Luke Percival de Oliveira, Umair Akeel, Alfredo Láinez Rodrigo, Nicolas Acosta Amador, Sahil Kumar, Liat Barda Dremer, Byeongung Ahn, Tyler Cole
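    Illustrative sketch (not from the patent): a minimal Python filter combining the three search options named in the abstract (text, state, extracted parameters) over annotated turns; the turn dictionary layout is an assumption.
```python
def search_turns(turns, text=None, state=None, params=None):
    """Filter conversation turns by free text, dialog state, and extracted
    parameters, mirroring the combined search options described above."""
    results = []
    for turn in turns:
        if text and text.lower() not in turn["text"].lower():
            continue
        if state and turn["state"] != state:
            continue
        if params and any(turn["params"].get(k) != v for k, v in params.items()):
            continue
        results.append(turn)
    return results

turns = [
    {"text": "I'd like to book a flight to Lima", "state": "collect_destination",
     "params": {"destination": "Lima"}},
    {"text": "Yes, confirm the booking", "state": "confirm", "params": {}},
]
print(search_turns(turns, state="confirm"))
```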
  • Patent number: 12159251
    Abstract: The present disclosure includes systems, apparatuses, and methods for event identification. In some aspects, a method includes receiving data including text and performing natural language processing on the received data to generate processed data that indicates one or more sentences. The method also includes generating, based on a first keyword set, a second keyword set having more keywords than the first keyword set. The method further includes, for each of the first and second keyword sets: detecting one or more keywords and one or more entities included in the processed data, determining one or more matched pairs based on the detected keywords and entities, and extracting a sentence, such as a single sentence or multiple sentences, from a document based on the one or more sentences indicated by the processed data. The method may also include outputting at least one extracted sentence.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: December 3, 2024
    Assignee: Thomson Reuters Enterprise Centre GmbH
    Inventors: Berk Ekmekci, Eleanor Hagerman, Blake Stephen Howald
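    Illustrative sketch (not from the patent): a small Python example of growing a first keyword set into a larger second set and extracting sentences that contain a matched keyword/entity pair; the synonym-table expansion and substring matching are stand-ins for the unspecified mechanisms.
```python
def expand_keywords(seed_keywords, synonyms):
    """Grow the first keyword set into a larger second set (here via a
    hand-made synonym table; the patent does not specify the mechanism)."""
    expanded = set(seed_keywords)
    for kw in seed_keywords:
        expanded.update(synonyms.get(kw, []))
    return expanded

def extract_sentences(sentences, keywords, entities):
    """Return sentences containing at least one (keyword, entity) matched pair."""
    hits = []
    for s in sentences:
        lowered = s.lower()
        kw = [k for k in keywords if k in lowered]
        ent = [e for e in entities if e.lower() in lowered]
        if kw and ent:
            hits.append((s, kw[0], ent[0]))
    return hits

keywords = expand_keywords({"acquire"}, {"acquire": ["acquisition", "buy"]})
sents = ["Acme announced the acquisition of Beta Corp.", "Weather was mild."]
print(extract_sentences(sents, keywords, entities=["Acme", "Beta Corp"]))
```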
  • Patent number: 12154582
    Abstract: A system and method code an object-based audio signal comprising audio objects in response to audio streams with associated metadata. In the system and method, a metadata processor codes the metadata and generates information about bit-budgets for the coding of the metadata of the audio objects. An encoder codes the audio streams while a bit-budget allocator is responsive to the information about the bit-budgets for the coding of the metadata of the audio objects from the metadata processor to allocate bitrates for the coding of the audio streams by the encoder.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: November 26, 2024
    Inventor: Vaclav Eksler
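    Illustrative sketch (not from the patent): a minimal Python bit-budget allocator that subtracts the reported metadata bit-budgets from a frame budget and splits the remainder across audio streams; the proportional-split policy is an assumption.
```python
def allocate_stream_bitrates(total_bits, metadata_budgets, weights=None):
    """Split the bits left over after metadata coding among the audio streams.

    `metadata_budgets[i]` is the bit-budget reported for object i's metadata;
    the proportional split by `weights` is an illustrative policy only.
    """
    n = len(metadata_budgets)
    weights = weights or [1.0] * n
    remaining = total_bits - sum(metadata_budgets)
    if remaining <= 0:
        raise ValueError("metadata budgets exceed the frame bit budget")
    total_w = sum(weights)
    return [int(remaining * w / total_w) for w in weights]

print(allocate_stream_bitrates(32000, metadata_budgets=[400, 250, 600]))
```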
  • Patent number: 12154586
    Abstract: A computer-implemented method for suppressing noise from an audio signal uses both statistical noise estimation and neural network noise estimation to achieve more desirable noise reduction. The method is performed by a noise suppression computer software application running on an electronic device. The noise suppression computer software application first transforms the speech signal from the time domain into the frequency domain before determining a statistical noise estimate and a neural network noise estimate. The noise suppression computer software application merges the two noise estimates to derive a final noise estimate, and determines and refines a noise suppression filter. The filter is applied to the speech signal in the frequency domain to obtain an enhanced signal. The enhanced signal is transformed back into the time domain.
    Type: Grant
    Filed: May 24, 2022
    Date of Patent: November 26, 2024
    Assignee: Agora Lab, Inc.
    Inventors: Jimeng Zheng, Bo Wu, Xiaohan Zhao, Liangliang Wang, Ruofei Chen
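    Illustrative sketch (not from the patent): a one-frame NumPy example of the merge-then-filter idea; the element-wise maximum merge and Wiener-style gain are assumed stand-ins for the patented merging and refinement steps, and the two noise PSDs are passed in rather than estimated.
```python
import numpy as np

def suppress_noise(frame, stat_noise_psd, nn_noise_psd, floor=0.1):
    """Merge a statistical and a neural-network noise estimate, build a
    suppression gain, and apply it in the frequency domain."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    signal_psd = np.abs(spectrum) ** 2
    noise_psd = np.maximum(stat_noise_psd, nn_noise_psd)       # merged estimate
    gain = np.maximum(1.0 - noise_psd / (signal_psd + 1e-12), floor)
    return np.fft.irfft(gain * spectrum, n=len(frame))         # back to time domain

frame = np.random.randn(256)      # stand-in noisy frame
flat = np.full(129, 0.5)          # flat noise PSDs for illustration
print(suppress_noise(frame, flat, flat).shape)  # (256,)
```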
  • Patent number: 12147907
    Abstract: A method executed by a computing device includes determining to update an entigen group within a knowledge database, where the entigen group includes a set of linked entigens that represent knowledge associated with an object. The method further includes determining a set of identigens for each word of a phrase associated with the object to produce sets of identigens. The method further includes interpreting the sets of identigens to determine a most likely meaning interpretation of the phrase and produce a new entigen group. The method further includes updating the entigen group utilizing the new entigen group to produce a curated entigen group such that a deficiency of the entigen group is resolved, providing more accurate and more complete knowledge of the object.
    Type: Grant
    Filed: February 10, 2022
    Date of Patent: November 19, 2024
    Assignee: entigenlogic LLC
    Inventors: Frank John Williams, David Ralph Lazzara, Donald Joseph Wurzel, Paige Kristen Thompson, Stephen Emerson Sundberg, Ameeta Vasant Reed, Stephen Chen, Dennis Arlen Roberson, Thomas James MacTavish, Karl Olaf Knutson, Jessy Thomas, David Michael Corns, II, Andrew Chu, Theodore Mazurkiewicz, Gary W. Grube
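    Illustrative sketch (not from the patent): "entigen" and "identigen" are the assignee's own constructs, so this Python fragment only models an entigen group as a small linked-node dictionary and shows a union-style merge of a newly interpreted group into the stored one; the representation and merge rule are assumptions.
```python
def update_entigen_group(entigen_group, new_group):
    """Merge a newly interpreted entigen group into the stored one, adding
    links the stored group was missing (a stand-in for resolving a
    deficiency to produce a curated entigen group)."""
    curated = {node: set(links) for node, links in entigen_group.items()}
    for node, links in new_group.items():
        curated.setdefault(node, set()).update(links)
    return curated

stored = {"dog": {"animal"}}
interpreted = {"dog": {"animal", "pet"}, "pet": {"companion"}}
print(update_entigen_group(stored, interpreted))
```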
  • Patent number: 12141179
    Abstract: A system and method for automatically generating an organization-level ontology for knowledge retrieval are provided. An input/output unit receives a plurality of documents from document sources, and an ontology generation system generates the organization-level ontology based on the documents. The ontology generation system extracts one or more nodes and directed relationships from each document and generates an intermediate document ontology for each document. A combination of syntactic, semantic, and pragmatic assessment of each intermediate document ontology is performed to assess at least the structure and adaptability of the ontology. The ontology generation system further generates a refined document ontology, based on the assessment, to satisfy one or more quality metrics. The refined document ontologies are then integrated together to generate the organization-level ontology.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: November 12, 2024
    Assignee: OntagenAI, Inc.
    Inventors: Diego Fernando Martinez Ayala, Brian Sanchez, Carlos Alejandro Jimenez Holmquist
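    Illustrative sketch (not from the patent): a minimal Python integration step that combines per-document ontologies (modelled as sets of (head, relation, tail) edges) into one organization-level ontology; the minimum-support filter is an assumed stand-in for the quality metrics.
```python
from collections import Counter

def integrate_ontologies(document_ontologies, min_support=1):
    """Combine per-document ontologies into one organization-level ontology,
    keeping only edges attested in at least `min_support` documents."""
    support = Counter(edge for onto in document_ontologies for edge in set(onto))
    return {edge for edge, count in support.items() if count >= min_support}

doc_a = {("Invoice", "issued_by", "Vendor"), ("Invoice", "has", "LineItem")}
doc_b = {("Invoice", "issued_by", "Vendor"), ("PO", "references", "Invoice")}
print(integrate_ontologies([doc_a, doc_b], min_support=2))
```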
  • Patent number: 12135950
    Abstract: Using a logistic regression classification model executing on a processor, a topic is classified into an interaction type in a set of predefined interaction types. A set of documents corresponding to the topic is extracted from a document repository. Using a generative adversarial model executing on a processor, a sentiment corresponding to a reaction to a previous presentation is scored, the scoring resulting in a scored sentiment. Using a trained attention layer model, the interaction type, the set of documents, and the scored sentiment are weighted, the weighting generating a weighted interaction type, a weighted set of documents, and a weighted scored sentiment. Using a natural language generation transformer model executing on the processor, a document in the weighted set of documents is weighted according to the weighted interaction type and the weighted scored sentiment.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: November 5, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Micah Forster, Sai Krishna Reddy Gudimetla, Aaron K. Baughman
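    Illustrative sketch (not from the patent): a toy Python example of re-weighting candidate documents by an interaction-type weight and a scored-sentiment weight before generation; the softmax stand-in for the trained attention layer and the document scoring fields are assumptions.
```python
import math

def attention_weights(scores):
    """Softmax used as a stand-in for the trained attention layer."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def rank_documents(docs, interaction_weight, sentiment_weight):
    """Re-weight candidate documents before handing them to a text generator
    (illustrative scoring only)."""
    scored = [(d["relevance"] * interaction_weight + d["tone"] * sentiment_weight,
               d["id"]) for d in docs]
    return sorted(scored, reverse=True)

docs = [{"id": "recap", "relevance": 0.9, "tone": 0.2},
        {"id": "apology", "relevance": 0.4, "tone": 0.9}]
w_interaction, w_sentiment = attention_weights([1.2, 0.3])
print(rank_documents(docs, w_interaction, w_sentiment))
```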
  • Patent number: 12135938
    Abstract: Systems, methods, apparatuses, and computer program products for natural language processing are provided. One method may include utilizing a trained machine learning model to learn syntax dependency patterns and parts of speech tag patterns of text based on labeled training data. The method may also include contextualizing vector embeddings from a language model for each word in the text, and extracting relationships for a given fragment of the text based on the contextualization. The method may further include resolving relationships between identified verbs based on a plurality of heuristics to identify the syntax dependency patterns, identifying nested relationships, and capturing metadata associated with the nested relationships.
    Type: Grant
    Filed: May 11, 2022
    Date of Patent: November 5, 2024
    Assignee: CORASCLOUD, INC.
    Inventors: Ajay Patel, Alex Sands
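    Illustrative sketch (not from the patent): a very small Python stand-in for verb-centred relation extraction over part-of-speech-tagged tokens; the patented method uses learned dependency and POS patterns plus contextual embeddings, whereas this uses only a nearest-noun adjacency heuristic.
```python
def extract_relations(tagged_tokens):
    """Pair the nearest noun before and after each verb into a relation
    triple (a toy approximation of pattern-based relation extraction)."""
    relations = []
    for i, (word, pos) in enumerate(tagged_tokens):
        if pos != "VERB":
            continue
        left = next((w for w, p in reversed(tagged_tokens[:i]) if p == "NOUN"), None)
        right = next((w for w, p in tagged_tokens[i + 1:] if p == "NOUN"), None)
        if left and right:
            relations.append((left, word, right))
    return relations

tokens = [("Alice", "NOUN"), ("founded", "VERB"), ("the", "DET"), ("company", "NOUN")]
print(extract_relations(tokens))  # [('Alice', 'founded', 'company')]
```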
  • Patent number: 12131122
    Abstract: At least one processor may obtain a document comprising text tokens. The at least one processor may determine, based on a pre-trained language model, word embeddings corresponding to the text tokens. The at least one processor may determine, based on the word embeddings, named entities corresponding to the text tokens and one or more accuracy predictions corresponding to the named entities. The at least one processor may compare the one or more accuracy predictions with at least one threshold. The at least one processor may associate, based on the comparing, the named entities with one or more confidence levels. The at least one processor may deliver the named entities and the one or more confidence levels.
    Type: Grant
    Filed: December 21, 2022
    Date of Patent: October 29, 2024
    Assignee: INTUIT INC.
    Inventor: Terrence J. Torres
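    Illustrative sketch (not from the patent): a minimal Python mapping of per-entity accuracy predictions to confidence levels via thresholds; the two threshold values and three-level labels are assumptions.
```python
def label_confidence(entities, predictions, high=0.9, low=0.6):
    """Bucket named entities into confidence levels by comparing the model's
    accuracy predictions against thresholds (threshold values are examples)."""
    labelled = []
    for entity, p in zip(entities, predictions):
        if p >= high:
            level = "high"
        elif p >= low:
            level = "medium"
        else:
            level = "low"
        labelled.append({"entity": entity, "score": p, "confidence": level})
    return labelled

print(label_confidence(["Intuit Inc.", "March 2024"], [0.97, 0.55]))
```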
  • Patent number: 12130923
    Abstract: In some embodiments, a processor receives natural language data for performing an identified cybersecurity task. The processor can provide the natural language data to a first machine learning (ML) model. The first ML model can automatically infer a template query based on the natural language data. The processor can receive user input indicating a finalized query and provide the finalized query as input to a system configured to perform the identified computational task. The processor can provide the finalized query as a reference phrase to a second ML model, the second ML model configured to generate a set of natural language phrases similar to the reference phrase. The processor can generate supplemental training data using the set of natural language phrases similar to the reference phrase to augment training data used to improve performance of the first ML model and/or the second ML model.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: October 29, 2024
    Assignee: Sophos Limited
    Inventors: Younghoo Lee, Miklós Sándor Béky, Joshua Daniel Saxe
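    Illustrative sketch (not from the patent): a toy Python version of the two-stage idea, with keyword-triggered template inference standing in for the first ML model and the finalized query paired with generated paraphrases as supplemental training data; the trigger words, template string, and paraphrases are made up for illustration.
```python
def infer_template_query(nl_request, templates):
    """Pick a query template whose trigger words appear in the request
    (a toy stand-in for the first ML model)."""
    words = set(nl_request.lower().split())
    for triggers, template in templates:
        if triggers & words:
            return template
    return None

def make_training_pairs(final_query, paraphrases):
    """Pair the finalized query with generated paraphrases to form
    supplemental (phrase, query) training examples."""
    return [(p, final_query) for p in paraphrases]

templates = [({"failed", "login", "logins"}, "event.type = 'auth_failure'")]
query = infer_template_query("show failed logins since yesterday", templates)
print(make_training_pairs(query, ["list unsuccessful sign-ins", "auth failures today"]))
```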
  • Patent number: 12118319
    Abstract: The present disclosure provides a dialog method and system, an electronic device and a storage medium, and relates to the field of artificial intelligence (AI) technologies such as deep learning and natural language processing. A specific implementation scheme involves: rewriting a corresponding dialog state based on received dialog information of a user; determining to-be-used dialog action information based on the dialog information of the user and the dialog state; and generating a reply statement based on the dialog information of the user and the dialog action information. According to the present disclosure, the to-be-used dialog action information can be determined based on the dialog information of the user and the dialog state; and then the reply statement is generated based on the dialog action information, thereby providing an efficient dialog scheme.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: October 15, 2024
    Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
    Inventors: Jun Xu, Zeming Liu, Zeyang Lei, Zhengyu Niu, Hua Wu, Haifeng Wang
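    Illustrative sketch (not from the patent): one dialog turn in Python following the rewrite-state, choose-action, generate-reply sequence from the abstract; the slot rewrite rule, the action policy, and the reply templates are assumptions, since the patent leaves the concrete models open.
```python
def dialog_turn(state, user_utterance):
    """Rewrite the dialog state, determine the to-be-used dialog action,
    and generate a reply statement."""
    # Rewrite the dialog state using the new user information.
    if "pizza" in user_utterance:
        state["order"] = "pizza"
    # Determine the dialog action from the utterance and the state.
    action = "confirm" if "size" in state else "ask_size"
    # Generate the reply statement from the dialog action and the state.
    if action == "confirm":
        reply = "Your {order} ({size}) is confirmed.".format(**state)
    else:
        reply = "What size would you like?"
    return state, action, reply

state, action, reply = dialog_turn({}, "I want a pizza")
print(action, "->", reply)  # ask_size -> What size would you like?
```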
  • Patent number: 12112278
    Abstract: A system and method for analyzing a response to an interview question are disclosed, including a speech recognition engine to receive audio information corresponding to a response and create a transcription of the response. A segmentation engine segments the transcription into one or more segments. A segment classification engine classifies the one or more segments into one or more functional units and groups the functional units by at least one structure. A presentation skill KPI engine, a structure sequence KPI engine, and a content KPI engine analyze the response and apply the analysis to a composite model to provide an overall rating of the response.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: October 8, 2024
    Inventor: Ashwarya Poddar
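    Illustrative sketch (not from the patent): a minimal Python composite model that combines the three KPI engine outputs into an overall rating; the 0-1 score scale and the weighted average are assumptions.
```python
def composite_rating(presentation_kpi, structure_kpi, content_kpi,
                     weights=(0.3, 0.3, 0.4)):
    """Combine the presentation, structure-sequence, and content KPI scores
    (assumed 0-1) into one overall rating via a weighted average."""
    scores = (presentation_kpi, structure_kpi, content_kpi)
    return sum(w * s for w, s in zip(weights, scores))

print(round(composite_rating(0.8, 0.6, 0.9), 2))  # 0.78
```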
  • Patent number: 12112740
    Abstract: A computer-implemented method for measuring cognitive load of a user creating a creative work in a creative work system, may include generating at least one verbal statement capable of provoking at least one verbal response from the user, prompting the user to vocally interact with the creative work system by vocalizing the at least one generated verbal statement to the user via an audio interface of the creative work system, and obtaining the at least one verbal response from the user via the audio interface, and determining the cognitive load of the user based on the at least one verbal response obtained from the user, wherein generating the at least one verbal statement is based on at least one predicted verbal response suitable for determining the cognitive load of the user.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: October 8, 2024
    Assignee: SOCIÉTÉ BIC
    Inventors: David Duffy, Bernadette Elliott-Bowman
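    Illustrative sketch (not from the patent): a toy Python scoring of cognitive load from a verbal response; the hesitation-marker and delay features are assumptions, whereas the patented method determines load by comparing the obtained response against predicted verbal responses.
```python
def cognitive_load_score(response_text, response_delay_s):
    """Toy cognitive-load estimate: hesitation markers and a long response
    delay raise the score (capped at 1.0)."""
    fillers = sum(response_text.lower().count(f) for f in ("um", "uh", "hmm"))
    return min(1.0, 0.2 * fillers + 0.1 * response_delay_s)

print(cognitive_load_score("um, I think, uh, the blue one", response_delay_s=3.0))
```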
  • Patent number: 12112743
    Abstract: A speech recognition method includes: obtaining speech data; performing feature extraction on speech data, to obtain speech features of at least two speech segments; inputting the speech features of the at least two speech segments into the speech recognition model, and processing the speech features of the speech segments by using cascaded hidden layers in the speech recognition model, to obtain hidden layer features of the speech segments, a hidden layer feature of an ith speech segment being determined based on speech features of n speech segments located after the ith speech segment in a time sequence and a speech feature of the ith speech segment; and obtaining text information corresponding to the speech data based on the hidden layer features of the speech segments.
    Type: Grant
    Filed: March 30, 2022
    Date of Patent: October 8, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Xilin Zhang, Bo Liu
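    Illustrative sketch (not from the patent): a NumPy example of the key property described in the abstract, namely that the hidden-layer feature of the i-th speech segment depends on that segment plus the n segments after it in time; averaging stands in for the cascaded hidden layers.
```python
import numpy as np

def lookahead_hidden_features(segment_features, n=2):
    """Compute a hidden feature for each segment from its own feature and up
    to n future segments (averaging is a stand-in for the hidden layers)."""
    feats = np.asarray(segment_features, dtype=float)
    hidden = []
    for i in range(len(feats)):
        window = feats[i:i + n + 1]   # segment i plus up to n future segments
        hidden.append(window.mean(axis=0))
    return np.stack(hidden)

segments = np.random.randn(5, 8)      # 5 segments, 8-dimensional features each
print(lookahead_hidden_features(segments).shape)  # (5, 8)
```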
  • Patent number: 12100391
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media for speech recognition. One method includes obtaining an input acoustic sequence, the input acoustic sequence representing an utterance, and the input acoustic sequence comprising a respective acoustic feature representation at each of a first number of time steps; processing the input acoustic sequence using a first neural network to convert the input acoustic sequence into an alternative representation for the input acoustic sequence; processing the alternative representation for the input acoustic sequence using an attention-based Recurrent Neural Network (RNN) to generate, for each position in an output sequence order, a set of substring scores that includes a respective substring score for each substring in a set of substrings; and generating a sequence of substrings that represent a transcription of the utterance.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: September 24, 2024
    Assignee: Google LLC
    Inventors: William Chan, Navdeep Jaitly, Quoc V. Le, Oriol Vinyals, Noam M. Shazeer
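    Illustrative sketch (not from the patent): a tiny Python decoder over per-position substring scores, greedily picking the best substring at each output position; greedy decoding and the `<eos>` marker are assumptions, and the actual system would typically use beam search over the attention-based RNN's scores.
```python
def greedy_transcribe(substring_scores):
    """Greedy decode: at each output position, take the substring (character
    or word piece) with the highest score, stopping at an end marker."""
    pieces = []
    for scores in substring_scores:            # one score dict per position
        best = max(scores, key=scores.get)
        if best == "<eos>":
            break
        pieces.append(best)
    return "".join(pieces)

scores = [{"he": 0.7, "the": 0.2}, {"llo": 0.6, "lp": 0.3}, {"<eos>": 0.9, "!": 0.1}]
print(greedy_transcribe(scores))  # "hello"
```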
  • Patent number: 12100417
    Abstract: Disclosed embodiments may include a system that may receive an audio file comprising an interaction between a first user and a second user. The system may detect, using a deep neural network (DNN), moment(s) of interruption between the first and second users from the audio file. The system may extract, using the DNN, vocal feature(s) from the moment(s) of interruption. The system may determine, using a machine learning model (MLM) and based on the vocal feature(s), whether a threshold number of moments of the moment(s) of interruption corresponds to a first emotion type. When the threshold number of moments corresponds to the first emotion type, the system may transmit a first message comprising a first binary indication. When the threshold number of moments do not correspond to the first emotion type, the system may transmit a second message comprising a second binary indication.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: September 24, 2024
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventor: Vahid Khanagha
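    Illustrative sketch (not from the patent): a minimal Python thresholding step that emits a binary indication depending on how many interruption moments were classified as the first emotion type; the emotion label and threshold value are examples only.
```python
def interruption_alert(moment_emotions, target_emotion="frustration", threshold=3):
    """Return a binary indication based on whether at least `threshold`
    interruption moments correspond to the first emotion type."""
    hits = sum(1 for e in moment_emotions if e == target_emotion)
    return {"alert": hits >= threshold, "moments": hits}

print(interruption_alert(["neutral", "frustration", "frustration", "frustration"]))
```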
  • Patent number: 12086564
    Abstract: A system and method for masking an identity of a speaker of natural language speech, such as speech clips to be labeled by humans in a system generating voice transcriptions for training an automatic speech recognition model. The natural language speech is morphed prior to being presented to the human for labeling. In one embodiment, morphing comprises pitch shifting the speech randomly either up or down, then frequency shifting the speech, then pitch shifting the speech in a direction opposite the first pitch shift. Labeling the morphed speech comprises at least one or more of transcribing the morphed speech, identifying a gender of the speaker, identifying an accent of the speaker, and identifying a noise type of the morphed speech.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: September 10, 2024
    Assignee: SoundHound AI IP, LLC.
    Inventor: Dylan H. Ross
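    Illustrative sketch (not from the patent): a crude NumPy rendering of the three-step morphing order (pitch shift up, frequency shift, pitch shift back down); the resampling pitch shift, FFT-bin frequency shift, fixed direction, and parameter values are all assumptions rather than the patented processing.
```python
import numpy as np

def pitch_shift(signal, factor):
    """Crude pitch shift by resampling (duration changes as a side effect)."""
    idx = np.arange(0, len(signal), factor)
    return np.interp(idx, np.arange(len(signal)), signal)

def frequency_shift(signal, rate, shift_hz):
    """Shift all spectral content by a fixed offset via FFT bin rotation."""
    spectrum = np.fft.rfft(signal)
    bins = int(round(shift_hz * len(signal) / rate))
    return np.fft.irfft(np.roll(spectrum, bins), n=len(signal))

def morph_voice(signal, rate, up_factor=1.12, shift_hz=40.0):
    """Pitch up, frequency-shift, then pitch back down, loosely following the
    morphing order described in the abstract."""
    shifted = pitch_shift(signal, up_factor)
    moved = frequency_shift(shifted, rate, shift_hz)
    return pitch_shift(moved, 1.0 / up_factor)

rate = 16000
tone = np.sin(2 * np.pi * 220 * np.arange(rate) / rate)  # 1 s test tone
print(morph_voice(tone, rate).shape)
```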
  • Patent number: 12073821
    Abstract: A system capable of speech gap modulation is configured to: receive at least one composite speech portion, which comprises at least one speech portion and at least one dynamic-gap portion, wherein the speech portion(s) comprise at least one variable-value speech portion, and wherein the dynamic-gap portion(s) are associated with a pause in speech; receive at least one synchronization point, wherein the synchronization point(s) associate a point in time in the composite speech portion(s) with a point in time in other media portion(s); and modulate the dynamic-gap portion(s), based at least partially on the at least one variable-value speech portion(s) and on the synchronization point(s), thereby generating at least one modulated composite speech portion. This facilitates improved synchronization of the modulated composite speech portion(s) and the other media portion(s) at the synchronization point(s) when combining the other media portion(s) and the audio-format modulated composite speech portion(s) into a synchronized multimedia output.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: August 27, 2024
    Assignee: IGENTIFY LTD.
    Inventors: Zohar Sherman, Ori Inbar
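    Illustrative sketch (not from the patent): a minimal Python calculation that stretches or shrinks a single dynamic-gap portion so the composite speech ends at the synchronization point of the other media; assuming one gap and an end-of-speech sync point purely for illustration.
```python
def modulate_gap(speech_durations, gap_after_index, sync_time):
    """Size one dynamic-gap portion so that the composite speech finishes
    exactly at the synchronization point (all durations in seconds)."""
    total_speech = sum(speech_durations)
    gap = round(sync_time - total_speech, 3)
    if gap < 0:
        raise ValueError("speech is longer than the time to the sync point")
    return {"gap_after_segment": gap_after_index, "gap_seconds": gap}

# Two speech portions (one of variable length) must finish at t = 12.5 s.
print(modulate_gap([4.2, 6.1], gap_after_index=0, sync_time=12.5))
```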