Patents Examined by Michael N. Opsasnick
  • Patent number: 11954445
    Abstract: Artificial intelligence (AI) technology can be used in combination with composable communication goal statements to facilitate a user's ability to quickly structure story outlines using “explanation” communication goals in a manner usable by an NLG narrative generation system without any need for the user to directly author computer code. This AI technology permits NLG systems to determine the appropriate content for inclusion in a narrative story about a data set in a manner that will satisfy a desired explanation communication goal such that the narratives will express various ideas that are deemed relevant to a given explanation communication goal.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: April 9, 2024
    Assignee: Narrative Science LLC
    Inventors: Nathan D. Nichols, Andrew R. Paley, Maia Lewis Meza, Santiago Santana
  • Patent number: 11954432
    Abstract: This disclosure relates to a method of generating a symbol string based on an input sentence represented by a sequence of symbols. In particular, the method involves receiving an input symbol string representing a sentence and generating, using a neural network based on the dependency structure of elements in the input symbol string, an output symbol string corresponding to the input sentence. The neural network includes an encoder that converts elements of the input symbol string to a first hidden state in the form of a multi-dimensional vector, an attention mechanism that applies a weight to the first hidden state and generates the weighted first hidden state as a second hidden state, a decoder that generates a third hidden state based on at least one element of the input symbol string, an element of the output symbol string, and the second hidden state, and an output generator that generates an element of the output symbol string based on the second hidden state and the third hidden state.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: April 9, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Hidetaka Kamigaito, Masaaki Nagata, Tsutomu Hirao
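
The abstract above names four cooperating components (encoder, attention mechanism, decoder, output generator) and three hidden states. The sketch below is a minimal NumPy illustration of that data flow only; the dimensions, weight matrices, and greedy decoding loop are assumptions, and the dependency-structure conditioning of the actual patent is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                     # hypothetical hidden size
vocab = 20                                # hypothetical output vocabulary size

# Encoder: convert each (already embedded) input element to a "first hidden state".
W_enc = rng.normal(size=(d, d))
inputs = rng.normal(size=(5, d))          # 5 embedded input symbols
first_hidden = np.tanh(inputs @ W_enc)    # (5, d)

W_dec = rng.normal(size=(3 * d, d))       # decoder producing the "third hidden state"
W_out = rng.normal(size=(2 * d, vocab))   # output generator over the vocabulary
E_out = rng.normal(size=(vocab, d))       # embeddings of output symbols

prev_out = np.zeros(d)                    # embedding of the previous output element
third_hidden = np.zeros(d)
for _ in range(4):                        # emit 4 output elements
    # Attention: weight the first hidden states by similarity to the decoder
    # state; the weighted sum acts as the "second hidden state".
    scores = first_hidden @ third_hidden
    weights = np.exp(scores) / np.exp(scores).sum()
    second_hidden = weights @ first_hidden

    # Decoder: third hidden state from an input element, the previous output
    # element, and the second hidden state.
    third_hidden = np.tanh(
        np.concatenate([inputs[0], prev_out, second_hidden]) @ W_dec)

    # Output generator: next output element from the second and third states.
    logits = np.concatenate([second_hidden, third_hidden]) @ W_out
    symbol = int(np.argmax(logits))
    prev_out = E_out[symbol]
    print("emitted symbol id:", symbol)
```
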
  • Patent number: 11947917
    Abstract: The present disclosure provides systems and methods that perform machine-learned natural language processing. A computing system can include a machine-learned natural language processing model that includes an encoder model trained to receive a natural language text body and output a knowledge graph and a programmer model trained to receive a natural language question and output a program. The computing system can include a computer-readable medium storing instructions that, when executed, cause the processor to perform operations. The operations can include obtaining the natural language text body, inputting the natural language text body into the encoder model, receiving, as an output of the encoder model, the knowledge graph, obtaining the natural language question, inputting the natural language question into the programmer model, receiving the program as an output of the programmer model, and executing the program on the knowledge graph to produce an answer to the natural language question.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: April 2, 2024
    Assignee: GOOGLE LLC
    Inventors: Ni Lao, Jiazhong Nie, Fan Yang
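
The entry above describes an encoder that emits a knowledge graph, a programmer model that emits a program, and an execution step that runs the program on the graph to answer a question. The sketch below illustrates only the execution step under assumed, hard-coded stand-ins for the graph and program; it is not the patented models or their output formats.

```python
# Hypothetical stand-in: in the patent, an encoder would produce the knowledge
# graph and a programmer model would produce the program; here both are
# hard-coded so the final execution step can be shown end to end.

knowledge_graph = {                       # subject -> relation -> objects
    "ada": {"born_in": ["london"], "field": ["mathematics"]},
    "london": {"capital_of": ["england"]},
}

# A "program" as a list of relation hops, e.g. for a chained question such as
# "Of which country is the city where Ada was born the capital?"
program = [("start", "ada"), ("hop", "born_in"), ("hop", "capital_of")]

def execute(program, kg):
    """Run a tiny hop-based program over the knowledge graph."""
    current = set()
    for op, arg in program:
        if op == "start":
            current = {arg}
        elif op == "hop":
            current = {obj for ent in current
                       for obj in kg.get(ent, {}).get(arg, [])}
    return current

print(execute(program, knowledge_graph))  # {'england'}
```
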
  • Patent number: 11942103
    Abstract: Simultaneous transmission and reproduction of a compressed audio signal and a linear PCM signal are achieved satisfactorily. An audio signal of a predetermined unit is sequentially transmitted via a predetermined transmission line to a reception side. The audio signal of the predetermined unit is a mixed signal of a compressed audio signal and a linear PCM signal. For example, the audio signal of the predetermined unit is an audio signal of a sub-frame unit. In this case, for example, in the audio signal of the sub-frame unit, the compressed audio signal is arranged on an upper-order bit side, and the linear PCM signal is arranged on a lower-order bit side.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: March 26, 2024
    Assignee: SONY CORPORATION
    Inventor: Gen Ichimura
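
The abstract above places the compressed audio on the upper-order bits and the linear PCM on the lower-order bits of each sub-frame word. The snippet below shows one hypothetical way to pack and unpack such a word; the 24-bit word size and the 8/16 bit split are illustrative assumptions, not the patent's format.

```python
# Hypothetical packing of one sub-frame word: compressed-audio bits on the
# upper-order side, linear PCM bits on the lower-order side.

WORD_BITS = 24
COMPRESSED_BITS = 8                              # assumed split, not the patent's
PCM_BITS = WORD_BITS - COMPRESSED_BITS

def pack(compressed: int, pcm: int) -> int:
    assert 0 <= compressed < (1 << COMPRESSED_BITS)
    assert 0 <= pcm < (1 << PCM_BITS)
    return (compressed << PCM_BITS) | pcm        # compressed occupies the upper bits

def unpack(word: int) -> tuple[int, int]:
    return word >> PCM_BITS, word & ((1 << PCM_BITS) - 1)

word = pack(compressed=0xA5, pcm=0x1234)
print(hex(word))          # 0xa51234
print(unpack(word))       # (165, 4660)
```
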
  • Patent number: 11922323
    Abstract: A method for deep reinforcement learning using a neural network model includes receiving a distribution including a plurality of related tasks. Parameters for the reinforcement learning neural network model are trained based on gradient estimation associated with the parameters, using samples associated with the plurality of related tasks. Control variates are incorporated into the gradient estimation by automatic differentiation.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: March 5, 2024
    Assignee: Salesforce, Inc.
    Inventor: Hao Liu
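
To make "control variates incorporated into the gradient estimation by automatic differentiation" concrete, the sketch below shows a generic baseline-subtracted score-function estimator with gradients taken by PyTorch autodiff. It is a textbook illustration of the control-variate idea, not the meta-learning method of the patent; the toy bandit task, baseline values, and sample count are assumptions.

```python
import torch

logits = torch.zeros(2, requires_grad=True)     # policy parameters

def reward(action: int) -> float:               # toy bandit: action 1 is better
    return 1.0 if action == 1 else 0.0

def grad_estimates(baseline: float, n: int = 2000) -> torch.Tensor:
    grads = []
    for _ in range(n):
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        # (r - b) * grad(log pi(a)): subtracting the baseline b (the control
        # variate) leaves the expected gradient unchanged but cuts its variance.
        surrogate = -(reward(a.item()) - baseline) * dist.log_prob(a)
        (g,) = torch.autograd.grad(surrogate, logits)   # automatic differentiation
        grads.append(g)
    return torch.stack(grads)

for b in (0.0, 0.5):                            # 0.5 is roughly the mean reward here
    g = grad_estimates(b)
    print(f"baseline={b}: mean={g.mean(0)}, per-coordinate std={g.std(0)}")
```

In this toy setup, a baseline near the mean reward removes nearly all of the estimator's variance while leaving its mean unchanged, which is exactly the property a control variate relies on.
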
  • Patent number: 11922118
    Abstract: The present disclosure relates generally to systems and methods for analyzing intent. Intents may be analyzed to determine to which device or agent to route a communication. The analyzed intent information can also be used to formulate reports and analyze the accuracy of the identified intents with respect to the received communication.
    Type: Grant
    Filed: September 7, 2022
    Date of Patent: March 5, 2024
    Assignee: LIVEPERSON, INC.
    Inventors: Matthew Dunn, Joe Bradley, Laura Onu
  • Patent number: 11915701
    Abstract: Computer-readable media, systems and methods may improve automatic summarization of transcripts of financial earnings calls. For example, a system may generate segments, such as by disambiguating sentences, from a transcript to be summarized. The system may use an estimator that assesses whether or not the segment should be included in the summary. Different types of estimators may be used. For example, the estimator may be rule-based, trained using machine-learning techniques, or trained using machine learning with language modeling, applying natural language processing to fine-tune language models specific to financial earnings calls.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: February 27, 2024
    Assignee: REFINITIV US ORGANIZATION LLC
    Inventors: Jochen Lothar Leidner, Georgios Gkotsis, Tim Nugent
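
The abstract above mentions that the segment-inclusion estimator can be rule-based. The sketch below is a deliberately crude rule-based stand-in: split a transcript into sentence segments and keep those that look summary-worthy. The keyword list, threshold, and sample transcript are illustrative assumptions, not the patented estimator.

```python
import re

KEYWORDS = {"revenue", "guidance", "margin", "growth", "eps", "outlook"}

def segments(transcript: str) -> list[str]:
    # naive sentence disambiguation on ., !, ? boundaries
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]

def include(segment: str, threshold: int = 1) -> bool:
    # "estimator": include a segment if it contains enough financial keywords
    hits = sum(1 for w in re.findall(r"[a-z]+", segment.lower()) if w in KEYWORDS)
    return hits >= threshold

transcript = ("Good morning and thank you for joining. "
              "Revenue grew 12% year over year and margin expanded. "
              "We are raising full-year guidance.")
summary = [s for s in segments(transcript) if include(s)]
print(summary)
```
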
  • Patent number: 11907672
    Abstract: Computer-readable media, systems and methods may improve classification of content based on a machine-learning natural language processing (ML-NLP) classifier. The system may train a general language model on a general corpus, further train the general language model on a domain-specific corpus to generate a domain-specific language model, and conduct supervised machine learning on the domain-specific language model using a topic-specific corpus labeled as relating to topics of interest to generate the ML-NLP classifier. Accordingly, the ML-NLP classifier may be trained on a general corpus, further trained on a domain-specific corpus, and fine-tuned on a topic-specific corpus. In this manner, the ML-NLP classifier may classify domain-specific content into the topics of interest.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: February 20, 2024
    Assignee: REFINITIV US ORGANIZATION LLC
    Inventors: Tim Nugent, Matt Harding, Jochen Lothar Leidner
  • Patent number: 11907679
    Abstract: An arithmetic operation device is provided that removes a part of the parameters from a first machine learning model, which includes a predetermined number of parameters and is trained to output second data corresponding to input first data; determines the number of bits of a weight parameter according to required performance related to an inference, thereby generating a second machine learning model; and acquires data output from the second machine learning model corresponding to the input first data with a smaller computational complexity than the first machine learning model.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: February 20, 2024
    Assignee: Kioxia Corporation
    Inventors: Kengo Nakata, Asuka Maki, Daisuke Miyashita
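
The abstract above combines two steps: removing a portion of a model's parameters and choosing a weight bit width from a required-performance setting. The sketch below illustrates those two steps generically with magnitude pruning and uniform quantization; the pruning ratio and the performance-to-bits table are assumptions, not the patented procedure.

```python
import numpy as np

def prune(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `ratio` of the weights."""
    threshold = np.quantile(np.abs(weights), ratio)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize weights to `bits` bits over their value range."""
    levels = 2 ** bits - 1
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / levels
    return np.round((weights - lo) / step) * step + lo

# Hypothetical mapping from required inference performance to weight bit width.
BITS_FOR_PERFORMANCE = {"high": 8, "medium": 4, "low": 2}

rng = np.random.default_rng(0)
w_first = rng.normal(size=(4, 4))            # weights of the "first" model
w_second = quantize(prune(w_first, ratio=0.5),
                    bits=BITS_FOR_PERFORMANCE["medium"])
print(w_second)                              # weights of the cheaper "second" model
```
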
  • Patent number: 11893995
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for collaboration between multiple voice controlled devices are disclosed. In one aspect, a method includes the actions of identifying, by a first computing device, a second computing device that is configured to respond to a particular, predefined hotword; receiving audio data that corresponds to an utterance; receiving a transcription of additional audio data outputted by the second computing device in response to the utterance; based on the transcription of the additional audio data and based on the utterance, generating a transcription that corresponds to a response to the additional audio data; and providing, for output, the transcription that corresponds to the response.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: February 6, 2024
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Pedro Gonnet Anders, Thomas Deselaers, Sandro Feuz
  • Patent number: 11887582
    Abstract: Systems, methods, and devices for training and testing utterance-based frameworks are disclosed. The training and testing can be conducted using synthetic utterance samples in addition to natural utterance samples. The synthetic utterance samples can be generated based on a vector space representation of natural utterances. In one method, a synthetic weight vector associated with a vector space is generated. An average representation of the vector space is added to the synthetic weight vector to form a synthetic feature vector. The synthetic feature vector is used to generate a synthetic voice sample. The synthetic voice sample is provided to the utterance-based framework as at least one of a testing or training sample.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: January 30, 2024
    Assignee: Spotify AB
    Inventor: Daniel Bromand
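
The abstract above forms a synthetic feature vector by adding an average representation of the vector space to a synthetic weight vector. The snippet below shows that vector arithmetic under assumed toy embeddings; the synthesis step that would turn the feature vector into audio is elided, and the way the synthetic weight vector is drawn here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vector-space representations of natural utterances.
natural_features = rng.normal(loc=1.0, size=(100, 16))

# Average representation of the vector space.
average_representation = natural_features.mean(axis=0)

# Hypothetical synthetic weight vector: a random direction scaled by the
# observed spread of the natural samples.
synthetic_weight = rng.normal(size=16) * natural_features.std(axis=0)

# Synthetic feature vector = average representation + synthetic weight vector.
synthetic_feature = average_representation + synthetic_weight
print(synthetic_feature.shape)   # (16,) -> input to a (not shown) voice synthesizer
```
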
  • Patent number: 11887578
    Abstract: A method and system for automatic dubbing are disclosed, comprising: responsive to receiving a selection of media content for playback on a user device by a user of the user device, processing extracted speeches of a first voice from the media content to generate replacement speeches using a set of phonemes of a second voice of the user of the user device, and replacing the extracted speeches of the first voice with the generated replacement speeches in the audio portion of the media content for playback on the user device.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Henry Gabryjelski, Jian Luan, Dapeng Li
  • Patent number: 11886812
    Abstract: In an embodiment, the disclosed technologies are capable of receiving, by a digital model, data representing a first text sequence in a first language; using the digital model, modifying the first text sequence to result in creating and digitally storing a second text sequence in the first language; and outputting, by the digital model, the second text sequence in the first language. The modifying may include any one or more of: deleting text from the first text sequence, adding text to the first text sequence, modifying text of the first text sequence, reordering text of the first text sequence, adding a digital markup to the first text sequence. The digital model may have been fine-tuned, after having been machine-learned, using a subset of values of model parameters associated with an encoding layer or an embedding layer or both the encoding layer and the embedding layer.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: January 30, 2024
    Assignee: Grammarly, Inc.
    Inventors: Maria Nadejde, Joel Tetreault
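
The abstract above lists the edit operations the model may apply (delete, add, modify, reorder, markup). The snippet below shows one hypothetical way to represent and apply such operations to a tokenized first text sequence; the tuple encoding and the right-to-left application order are illustrative conveniences, not Grammarly's output format.

```python
first = ["the", "the", "report", "was", "wrote", "quickly"]

# Edits expressed against original token indices of the first text sequence.
edits = [
    ("delete", 1, None),            # drop the duplicated "the"
    ("modify", 4, "written"),       # fix the verb form
    ("add", 6, "yesterday"),        # append a token
    ("markup", 4, "<verb-fix>"),    # attach a digital markup to original index 4
]

def apply_edits(tokens, edits):
    tokens = list(tokens)
    markup = {}
    # Apply position-based edits from right to left so original indices stay valid.
    for op, i, arg in sorted(edits, key=lambda e: e[1], reverse=True):
        if op == "delete":
            del tokens[i]
        elif op == "modify":
            tokens[i] = arg
        elif op == "add":
            tokens.insert(i, arg)
        elif op == "markup":
            markup[i] = arg                 # keyed by original token index
    return tokens, markup

second, markup = apply_edits(first, edits)
print(" ".join(second))   # "the report was written quickly yesterday"
print(markup)             # {4: '<verb-fix>'}
```
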
  • Patent number: 11887615
    Abstract: A method and device for transparency processing of music are provided. The method comprises: obtaining a characteristic of a piece of music to be played; inputting the characteristic into a transparency probability neural network to obtain a transparency probability of the music to be played; and determining a transparency enhancement parameter corresponding to the transparency probability, where the transparency enhancement parameter is used to perform transparency adjustment on the music to be played. The invention constructs the transparency probability neural network in advance based on deep learning and builds a mapping relationship between the transparency probability and the transparency enhancement parameter, so that transparency enhancement can be applied to the music to be played automatically.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 30, 2024
    Assignee: Anker Innovations Technology Co., Ltd.
    Inventors: Qingshan Yao, Yu Qin, Haowen Yu, Feng Lu
  • Patent number: 11875807
    Abstract: A deep-learning-based tonal balancing method, apparatus, and system are provided. The method includes: extracting features from audio data to obtain audio data features, and generating audio balancing results by using a trained audio balancing model based on the obtained audio data features. The invention employs deep neural networks and an unsupervised deep learning method to solve the problem of audio balancing for unlabeled music and music of unknown style. It also combines user preference statistics to achieve a more rational multi-style audio balancing design that meets individual needs.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 16, 2024
    Assignee: Anker Innovations Technology Co., Ltd.
    Inventors: Qingshan Yao, Yu Qin, Haowen Yu, Feng Lu
  • Patent number: 11869529
    Abstract: The aim is to convert a speech rhythm accurately. A model storage unit (10) stores a speech rhythm conversion model, which is a neural network that receives, as an input, a first feature value vector including information related to a speech rhythm of at least a phoneme extracted from a first speech signal resulting from a speech uttered by a speaker in a first group, converts the speech rhythm of the first speech signal to a speech rhythm of a speaker in a second group, and outputs the speech rhythm of the speaker in the second group. A feature value extraction unit (11) extracts, from the input speech signal resulting from the speech uttered by the speaker in the first group, information related to a vocal tract spectrum and information related to the speech rhythm.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: January 9, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Sadao Hiroya
  • Patent number: 11862153
    Abstract: An audio controlled assistant captures environmental noise and converts the environmental noise into audio signals. The audio signals are provided to a system that analyzes them for a plurality of audio prompts, which have been customized for the acoustic environment surrounding the audio controlled assistant by an acoustic modeling system. The system is configured to detect the presence of an audio prompt in the audio signals and, in response, transmit instructions associated with the detected audio prompt to at least one of the audio controlled assistant or one or more cloud-based services.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: January 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: John Daniel Thimsen, Gregory Michael Hart, Ryan Paul Thomas
  • Patent number: 11862152
    Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and computer program product embodiments for adapting an automated speech recognition system to provide more accurate suggestions for voice queries involving media content, including recently created or recently available content. An example computer-implemented method includes transcribing the voice query, identifying respective components of the query, such as the media content being requested and the action to be performed, and generating fuzzy candidates that potentially match the media content based on phonetic representations of the identified components. Phonetic representations of domain-specific candidates are stored in a domain entities index, which is continuously updated with new entries so as to maintain the accuracy of speech recognition for voice queries about recently created or recently available content.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: January 2, 2024
    Assignee: ROKU, INC.
    Inventors: Atul Kumar, Elizabeth O. Bratt, Minsuk Heo, Nidhi Rajshree, Praful Chandra Mangalath
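
The abstract above matches phonetic representations of a transcribed query against a domain entities index to produce fuzzy candidates. The sketch below is a crude stand-in for that idea: a rough phonetic key plus edit distance over a tiny index. The key function, distance measure, and index entries are illustrative assumptions, not the patented implementation.

```python
import re

def phonetic_key(text: str) -> str:
    """Very rough phonetic key: lowercase, strip non-letters, drop vowels after the first."""
    letters = re.sub(r"[^a-z]", "", text.lower())
    return letters[:1] + re.sub(r"[aeiou]", "", letters[1:])

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance between two keys."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,                         # deletion
                           dp[i][j - 1] + 1,                         # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[-1][-1]

# A tiny "domain entities index"; in practice it would be continuously updated.
domain_entities = ["Stranger Things", "The Crown", "Strange New Worlds"]
index = {title: phonetic_key(title) for title in domain_entities}

query = "stranger thing"                     # imperfect ASR transcription
key = phonetic_key(query)
best = min(index, key=lambda title: edit_distance(key, index[title]))
print(best)                                  # fuzzy candidate for the voice query
```
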
  • Patent number: 11854570
    Abstract: An electronic apparatus, method, and computer readable medium are provided. The electronic apparatus includes a communicator and a controller. The controller, based on a first voice input being received, controls the communicator to receive data including first response information corresponding to the first voice input from a server and outputs the first response information on a display, and, based on a second voice input being received, controls the communicator to receive data including second response information corresponding to the second voice input from the server and outputs the second response information on the display. Based on whether the second voice input is received within a predetermined time from a time corresponding to the output of the first response information, whether to use utterance history information is determined, and the second response information is displayed differently depending on whether the second voice input is received within the predetermined time.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-hye Chung, Cheong-jae Lee, Hye-jeong Lee, Yong-wook Shin
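
The timing rule in the abstract above (use utterance history and display the second response differently only if the second voice input arrives within a predetermined time of the first response) can be illustrated with a few lines of code. The 10-second window and the request dictionary below are illustrative assumptions, not Samsung's protocol.

```python
import time

FOLLOW_UP_WINDOW_S = 10.0        # assumed "predetermined time"

def build_request(query: str, history: list[str], last_response_time: float) -> dict:
    # Was the second voice input received within the window of the first response?
    within_window = (time.monotonic() - last_response_time) <= FOLLOW_UP_WINDOW_S
    return {
        "query": query,
        "use_history": within_window,                       # use utterance history?
        "history": history if within_window else [],
        "display_style": "follow_up" if within_window else "standalone",
    }

history = ["What's the weather in Seoul?"]
t_first_response = time.monotonic()          # time the first response was shown

time.sleep(0.1)                              # second utterance arrives shortly after
print(build_request("How about tomorrow?", history, t_first_response))
```
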
  • Patent number: 11847380
    Abstract: Systems and methods for providing supplemental information with a response to a command are provided herein. In some embodiments, audio data representing a spoken command may be received by a cloud-based information system. A response to the command may be retrieved from a category related to the context of the command. A supplemental information database may also be provided that is pre-populated with supplemental information related to an individual having a registered account on the cloud-based information system. In response to retrieving the response to the command, supplemental information may be selected from the supplemental information database to be appended to the response to the command. A message may then be generated including the response and the supplemental information appended thereto, which in turn may be converted into audio data representing the message, which may be sent to a voice-controlled electronic device of the individual.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: December 19, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Srikanth Doss Kadarundalagi Raghuram Doss, Jeffery David Wells, Richard Dault, Benjamin Joseph Tobin, Mark Douglas Elders, Stanislava R. Vlasseva, Skeets Jonathan Norquist, Nathan Lee Bosen, Ryan Christopher Rapp