Patents Examined by Jesse S Pullias
  • Patent number: 11449688
    Abstract: An engine interposed between an application translator and a UI intercepts string templates populated with string variables and output by the translator. The engine determines the translation-resistance of the string template. Such determination can be based upon an existing mark inserted by the translator, the number of string variables in the string template, a comment in the string template, user settings, or syntax rules. Frequently, translation-resistance of the string template is not indicated and the engine simply forwards the string template to the user. Less frequently, the engine determines the string template to be resistant to translation. Then, the engine causes the string template to be processed according to a prefix and a suffix inserted into it. The processing can comprise forwarding the string template for machine translation, and/or falling back to a simpler string variant. The modified content resulting from the processing is communicated by the engine to the user.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: September 20, 2022
    Assignee: SAP SE
    Inventors: Jens Scharnbacher, Michail Vasiltschenko
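The screening heuristics described in the abstract could be sketched as follows; the mark string, `{variable}` placeholder syntax, threshold, and prefix/suffix delimiters are illustrative stand-ins, not taken from the patent:

```python
import re

def is_translation_resistant(template, *, resist_mark="@NO_TRANSLATE",
                             max_variables=3):
    """Hypothetical heuristics for flagging a string template as
    resistant to translation: an explicit mark left by the translator,
    or too many string variables in the template."""
    if resist_mark in template:
        return True
    # Count {placeholder}-style string variables in the template.
    if len(re.findall(r"\{\w+\}", template)) > max_variables:
        return True
    return False

def process_template(template, prefix="<<", suffix=">>"):
    """Wrap a resistant template in a prefix and suffix so a downstream
    stage (machine translation or a simpler fallback variant) can find
    it; otherwise forward the template unchanged."""
    if is_translation_resistant(template):
        return f"{prefix}{template}{suffix}"
    return template
```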
  • Patent number: 11449685
    Abstract: Certain aspects of the present disclosure provide techniques for generating a compliance graph based on a compliance rule to implement in a software program product for determining user compliance. To generate a compliance graph, an encoder receives a compliance rule in a source language and generates a set of corresponding vectors. The decoder, which has been trained using verified training pairs and synthetic data, generates a sequence of operations based on the vectors from the encoder. The sequence of operations is then used to build a graph in which each operation is a node in the graph and each node is connected to at least one other node in the same graph or a separate graph.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: September 20, 2022
    Assignee: INTUIT INC.
    Inventor: Conrad De Peuter
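The final graph-building step might look like this minimal sketch, assuming each decoded operation is simply linked to its neighbors in the sequence (the real decoder output and node semantics are far richer):

```python
def build_compliance_graph(operations):
    """Build a simple adjacency-list graph from a decoded sequence of
    operations: each operation becomes a node, and each node is linked
    to the nodes for the adjacent operations, so every node is
    connected to at least one other node."""
    graph = {op: set() for op in operations}
    for prev, cur in zip(operations, operations[1:]):
        graph[prev].add(cur)
        graph[cur].add(prev)
    return graph
```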
  • Patent number: 11437022
    Abstract: A method of speaker recognition comprises receiving an audio signal representing speech. A speaker change detection process is performed on the received audio signal. A trigger phrase detection process is also performed on the received audio signal. On detecting the trigger phrase in the received audio signal, a speaker recognition process is performed on the detected trigger phrase and on any speech preceding the detected trigger phrase and following an immediately preceding speaker change.
    Type: Grant
    Filed: June 15, 2020
    Date of Patent: September 6, 2022
    Assignee: Cirrus Logic, Inc.
    Inventor: John Paul Lesso
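The segment-selection logic (run speaker recognition on the trigger phrase plus any speech since the immediately preceding speaker change) could be sketched as follows, with times in seconds and all names hypothetical:

```python
def recognition_segment(speaker_changes, trigger_start, trigger_end):
    """Pick the audio span to run speaker recognition on: the detected
    trigger phrase plus any speech following the speaker change that
    immediately precedes it. If no change precedes the trigger, start
    from the beginning of the buffer."""
    preceding = [t for t in speaker_changes if t <= trigger_start]
    segment_start = max(preceding) if preceding else 0.0
    return (segment_start, trigger_end)
```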
  • Patent number: 11430448
    Abstract: A method and apparatus for processing voice data of a speech received from a speaker are provided. The method includes extracting a speaker feature vector from the voice data of the speech received from a speaker, generating a speaker feature map by positioning the extracted speaker feature vector at a specific position on a multi-dimensional vector space, forming a plurality of clusters indicating features of voices of a plurality of speakers by grouping at least one speaker feature vector positioned on the speaker feature map, and classifying the plurality of speakers according to the plurality of clusters.
    Type: Grant
    Filed: November 22, 2019
    Date of Patent: August 30, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaeyoung Roh, Keunseok Cho, Jiwon Hyung, Donghan Jang, Jaewon Lee
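A drastically simplified stand-in for the clustering step, grouping speaker feature vectors by a plain Euclidean distance threshold rather than positions on a learned multi-dimensional feature map:

```python
import math

def cluster_speakers(vectors, threshold=1.0):
    """Greedy one-pass clustering of speaker feature vectors: a vector
    joins the first cluster whose running centroid lies within
    `threshold` Euclidean distance, otherwise it starts a new cluster.
    Each resulting cluster stands in for one speaker."""
    clusters = []  # each cluster is a list of member vectors
    for v in vectors:
        for members in clusters:
            centroid = [sum(dim) / len(members) for dim in zip(*members)]
            if math.dist(v, centroid) <= threshold:
                members.append(v)
                break
        else:
            clusters.append([v])
    return clusters
```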
  • Patent number: 11430438
    Abstract: An electronic device includes a microphone, a communication circuit, and a processor configured to obtain a user's utterance through the microphone, transmit first information about the utterance through the communication circuit to an external server for at least partially automatic speech recognition (ASR) or natural language understanding (NLU), obtain a second text from the external server through the communication circuit, the second text being a text resulting from modifying at least part of a first text included in a neutral response to the utterance based on parameters corresponding to the user's conversation style and emotion identified based on the first information, and provide a voice corresponding to the second text or a message including the second text in response to the utterance.
    Type: Grant
    Filed: March 11, 2020
    Date of Patent: August 30, 2022
    Inventors: Piotr Andruszkiewicz, Tomasz Latkowski, Kamil Herba, Maciej Pienkosz, Iryna Orlova, Jakub Staniszewski, Krystian Koziel
  • Patent number: 11392766
    Abstract: Disclosed embodiments relate to systems and methods for automatically mediating among diversely structured operational policies. Techniques include identifying a first communication of a computing resource that is associated with an operational policy, identifying a second computing resource, determining if there is a conflict between the first communication and the second computing resource, applying a language processing protocol to the communication, normalizing the communication and policy, and generating a mediated communication. Other techniques include transmitting the mediated communication, generating a recommendation for implementing a security control on the first communication, and applying a security policy to the first communication.
    Type: Grant
    Filed: February 26, 2020
    Date of Patent: July 19, 2022
    Assignee: CyberArk Software Ltd.
    Inventors: Tal Kandel, Lavi Lazarovitz
  • Patent number: 11393491
    Abstract: An artificial intelligence device includes a microphone configured to receive a command uttered by a user, a wireless communication unit configured to perform communication with an external artificial intelligence device, and a processor configured to receive a first operation command through the microphone, acquire a first speech quality level and a first intention of the received first operation command, determine a first external artificial intelligence device to perform the acquired first intention, transmit a first control command corresponding to the first intention to the determined first external artificial intelligence device, receive a second operation command through the microphone, acquire a second speech quality level and a second intention of the received second operation command, and determine that a device to be controlled is changed when a difference between the first speech quality level and the second speech quality level is equal to or greater than a predetermined level range.
    Type: Grant
    Filed: June 4, 2019
    Date of Patent: July 19, 2022
    Inventors: Jongwoo Han, Hangil Jeong, Heeyeon Choi
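The device-change decision reduces to a threshold test on the difference between the two speech quality levels; a sketch with hypothetical units and threshold:

```python
def target_device_changed(level_1, level_2, threshold=10.0):
    """Decide whether the device to be controlled has changed: if the
    speech quality level shifts by at least `threshold` between two
    commands (e.g. the user turned toward a different appliance),
    report a change of target device."""
    return abs(level_1 - level_2) >= threshold
```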
  • Patent number: 11386914
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: July 12, 2022
    Assignee: DeepMind Technologies Limited
    Inventors: Aaron Gerard Antonius van den Oord, Sander Etienne Lea Dieleman, Nal Emmerich Kalchbrenner, Karen Simonyan, Oriol Vinyals
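The autoregressive loop in the abstract can be sketched without the neural network itself, using a stand-in callable for the convolutional subnetwork and output layer:

```python
import random

def generate_audio(subnetwork, n_steps, n_levels=4, seed=0):
    """Autoregressive sampling loop: at each time step the (stand-in)
    subnetwork scores every possible audio sample given all samples
    generated so far, and the next sample is drawn from that score
    distribution. `n_levels` is the number of possible sample values."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_steps):
        scores = subnetwork(samples)  # one non-negative score per level
        samples.append(rng.choices(range(n_levels), weights=scores)[0])
    return samples
```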
  • Patent number: 11380344
    Abstract: A device and method for controlling a speaker according to priority data are provided. An audio processor, in communication with a speaker-controlling processor at a device, processes remote audio data, the remote audio data remote to the speaker-controlling processor. The audio processor assigns priority data to the remote audio data. The audio processor provides the remote audio data and the priority data to the speaker-controlling processor. The speaker-controlling processor processes local audio data, the local audio data local to the speaker-controlling processor. The speaker-controlling processor controls a speaker, with respect to the local audio data and the remote audio data, according to the priority data.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: July 5, 2022
    Inventors: Mark A. Boerger, Sean Regan, Jesus F. Corretjer
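A minimal sketch of the priority arbitration, assuming integer priorities; the values and the tie-breaking rule (prefer local audio) are illustrative:

```python
def select_audio(local, remote, remote_priority, local_priority=5):
    """Speaker-controlling processor's arbitration: play whichever
    stream carries the higher priority, preferring local audio when
    priorities are equal."""
    if remote_priority > local_priority:
        return remote
    return local
```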
  • Patent number: 11380316
    Abstract: The present invention discloses a speech interaction method and apparatus, and pertains to the field of speech processing technologies. The method includes: acquiring speech data of a user; performing user attribute recognition on the speech data to obtain a first user attribute recognition result; performing content recognition on the speech data to obtain a content recognition result of the speech data; and performing a corresponding operation according to at least the first user attribute recognition result and the content recognition result, so as to respond to the speech data. According to the present invention, after speech data is acquired, user attribute recognition and content recognition are separately performed on the speech data to obtain a first user attribute recognition result and a content recognition result, and a corresponding operation is performed according to at least the first user attribute recognition result and the content recognition result.
    Type: Grant
    Filed: October 10, 2019
    Date of Patent: July 5, 2022
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Hongbo Jin, Zhuolin Jiang
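How the two recognition results might combine into one responsive operation, with illustrative attribute and intent labels not taken from the patent:

```python
def respond_to_speech(user_attribute, content_intent):
    """Combine the user-attribute recognition result with the content
    recognition result: the same spoken request can be handled
    differently depending on who is judged to be speaking, e.g. a
    child's video request is routed to a restricted catalog."""
    if content_intent == "play_video" and user_attribute == "child":
        return "play_kids_video"
    return content_intent
```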
  • Patent number: 11348601
    Abstract: A system is provided for using voice characteristics in determining a user intent corresponding to an utterance. The system processes an NLU hypothesis and voice characteristics data, using a trained model, to determine an alternate NLU hypothesis based on the voice characteristics data. The voice characteristics data may indicate a user's level of uncertainty when speaking the utterance, an age group of the user, a sentiment of the user when speaking the utterance, and other data.
    Type: Grant
    Filed: June 6, 2019
    Date of Patent: May 31, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Avani Deshpande, Jie Liang
  • Patent number: 11341956
    Abstract: The present invention provides a method and a system utilizing an AI entity for confirming that an agreement has been entered into between a first entity and a second entity during a verbal communication, capturing the portions of the communication that constitute the elements of the agreement, and storing the portions for later verification of the agreement.
    Type: Grant
    Filed: June 1, 2020
    Date of Patent: May 24, 2022
    Assignee: United Services Automobile Association (USAA)
    Inventor: Brady Carl Stephenson
  • Patent number: 11341330
    Abstract: Disclosed herein is computer technology that provides adaptive mechanisms for learning concepts that are expressed by natural language sentences, and then applies this learning to appropriately classify new natural language sentences with the relevant concept that they express. The computer technology can also discover the uniqueness of terms within a training corpus, and sufficiently unique terms can be flagged for the user for possible updates to an ontology for the system.
    Type: Grant
    Filed: January 16, 2020
    Date of Patent: May 24, 2022
    Inventors: Michael Justin Smathers, Daniel Joseph Platt, Nathan D. Nichols, Jared Lorince
  • Patent number: 11328131
    Abstract: A method for translating messages between users is presented. The method includes receiving, at a first computing device, a message from a first user associated with a first language, in which content of the message is in the first language. The method also includes transmitting the message to a second computing device associated with a second language of a second user. The method further includes receiving the message and an indication of the second language from the second computing device as well as transmitting the message to a translation server to be translated to the second language. Furthermore, the method includes receiving a translated message from the translation server and transmitting the translated message to the second computing device of the second user.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: May 10, 2022
    Inventors: Jordan Abbott Orlick, Matthew Jason Weisman, Collin Javon Alford, Jr.
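The relay described above could be sketched as follows, with the translation server reduced to a stand-in callable:

```python
def relay_message(message, sender_lang, receiver_lang, translate):
    """Relay flow, simplified: the receiving device reports its
    language; if it differs from the sender's, the message is sent to
    a translation service (`translate` is a stand-in callable) before
    final delivery, otherwise it is delivered unchanged."""
    if sender_lang == receiver_lang:
        return message
    return translate(message, sender_lang, receiver_lang)
```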
  • Patent number: 11321531
    Abstract: A system for automatically updating a process model is provided. The system uses semantic similarities between externally sourced textual data and textual descriptions contained in the process model to classify words in the externally sourced textual data into one of multiple possible actionable categories. The textual data is then parsed for dependent words that are used to automatically update an existing process model.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: May 3, 2022
    Assignee: SOFTWARE AG
    Inventors: Ganesh Swamypillai, Shriram Venkatnarayanan
  • Patent number: 11321529
    Abstract: A date extractor disclosed herein allows extracting dates and date ranges from documents. An implementation of the date extractor is implemented using various computer process instructions including scanning a document to generate a plurality of tokens, assigning labels to the tokens using a named entity recognition machine to generate a named entity vector, extracting dates from the named entity vector by comparing each of the named entities of the named entity vector to predetermined patterns of dates to generate a date vector, generating a plurality of date pairs from the date vector, and extracting date ranges by comparing the plurality of date pairs to predetermined patterns of date ranges.
    Type: Grant
    Filed: December 25, 2018
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ying Wang, Min Li, Mengyan Lu
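A toy version of the extraction pipeline, assuming a single ISO-8601 pattern stands in for the patent's set of predetermined date patterns:

```python
import re
from datetime import date

DATE_PAT = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

def extract_dates(text):
    """Pull ISO-formatted dates out of raw text; one pattern stands in
    for the set of predetermined date patterns."""
    return [date(int(y), int(m), int(d))
            for y, m, d in DATE_PAT.findall(text)]

def extract_date_ranges(text):
    """Pair consecutive extracted dates and keep the ordered pairs as
    candidate date ranges, loosely following the abstract's pipeline."""
    dates = extract_dates(text)
    return [(a, b) for a, b in zip(dates, dates[1:]) if a <= b]
```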
  • Patent number: 11321890
    Abstract: Generation of expressive content is provided. An expressive synthesized speech system provides improved voice authoring user interfaces by which a user is enabled to efficiently author content for generating expressive output. An expressive synthesized speech system provides an expressive keyboard for enabling input of textual content and for selecting expressive operators, such as emoji objects or punctuation objects for applying predetermined prosody attributes or visual effects to the textual content. A voicesetting editor mode enables the user to author and adjust particular prosody attributes associated with the content for composing carefully-crafted synthetic speech. An active listening mode (ALM) is provided; when it is selected, a set of ALM effect options is displayed, wherein each option is associated with a particular sound effect and/or visual effect. The user is enabled to rapidly respond with expressive vocal sound effects or visual effects while listening to others speak.
    Type: Grant
    Filed: November 9, 2016
    Date of Patent: May 3, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ann M. Paradiso, Jonathan Campbell, Edward Bryan Cutrell, Harish Kulkarni, Meredith Morris, Alexander John Fiannaca, Kiley Rebecca Sobel
  • Patent number: 11315582
    Abstract: A method for recovering audio signals, a terminal and a storage medium are provided. The method includes: buffering an audio signal sampled at a preset number of sampling points each time, and then performing frequency spectrum analysis on the sampled audio signal by FFT; when it is determined that the audio signal is compressed, locating a cutoff frequency point; recovering high-frequency signals based on the audio signals below that frequency point; and performing phase recovery on the recovered high-frequency signals. Thus, high-frequency signals removed by compression may be recovered.
    Type: Grant
    Filed: November 27, 2018
    Date of Patent: April 26, 2022
    Inventors: Jiaze Liu, Yufei Wang
  • Patent number: 11308274
    Abstract: A computer-implemented method is provided. The method includes acquiring a seed word; calculating a similarity score of each of a plurality of words relative to the seed word for each of a plurality of models to calculate a weighted sum of similarity scores for each of the plurality of words; outputting a plurality of candidate words among the plurality of words; acquiring annotations indicating at least one of preferred words and non-preferred words among the plurality of the candidate words; updating weights of the plurality of models in a manner to cause weighted sums of similarity scores for the preferred words to be relatively larger than the weighted sums of the similarity scores for the non-preferred words, based on the annotations; and grouping the plurality of candidate words output based on the weighted sum of similarity scores calculated with updated weights of the plurality of models.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: April 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Ryosuke Kohita, Issei Yoshida, Tetsuya Nasukawa, Hiroshi Kanayama
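The scoring and weight-update steps might be sketched like this; the additive update rule is a hypothetical simplification of causing weighted sums for preferred words to be relatively larger than those for non-preferred words:

```python
def weighted_scores(per_model_scores, weights):
    """Weighted sum of each candidate word's per-model similarity
    scores relative to the seed word."""
    return {w: sum(wt * s for wt, s in zip(weights, scores))
            for w, scores in per_model_scores.items()}

def update_weights(per_model_scores, weights, preferred, non_preferred,
                   lr=0.1):
    """Nudge each model's weight by its score margin between the
    annotated preferred and non-preferred words, so that models
    agreeing with the annotations gain influence."""
    new = list(weights)
    for i in range(len(weights)):
        pref = sum(per_model_scores[w][i] for w in preferred)
        non = sum(per_model_scores[w][i] for w in non_preferred)
        new[i] += lr * (pref - non)
    return new
```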
  • Patent number: 11308283
    Abstract: Text data including at least named entities can be received. From the named entities, continuous entities, overlapping entities and disjoint entities can be identified. The overlapping entities can be transformed into continuous entities. The continuous entities, the transformed entities and the disjoint entities can be encoded. The encoded entities can be input to a machine learning language model to train the machine learning model to predict candidate entities. The predicted entities can be decoded to reconstruct the predicted entities.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: April 19, 2022
    Assignee: International Business Machines Corporation
    Inventors: Diwakar Mahajan, Ananya Aniruddha Poddar, Bharath Dandala, Ching-Huei Tsou
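One plausible reading of the overlapping-to-continuous transformation, treating each entity as a (start, end) token span with an exclusive end:

```python
def transform_overlapping(entities):
    """Merge entities whose token spans overlap into single continuous
    entities: spans are sorted, and any span starting inside the
    previous merged span extends it."""
    merged = []
    for start, end in sorted(entities):
        if merged and start < merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```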