Patents Examined by Michael N. Opsasnick
  • Patent number: 11907679
    Abstract: An arithmetic operation device is provided that removes a subset of the parameters of a first machine learning model, which includes a predetermined number of parameters and is trained to output second data corresponding to input first data; determines the number of bits of a weight parameter according to the required inference performance to generate a second machine learning model; and acquires data output from the second machine learning model corresponding to the input first data with lower computational complexity than the first machine learning model.
    Type: Grant
    Filed: March 13, 2020
    Date of Patent: February 20, 2024
    Assignee: Kioxia Corporation
    Inventors: Kengo Nakata, Asuka Maki, Daisuke Miyashita
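
A minimal numpy sketch of the workflow this abstract describes: prune part of a trained model's parameters, then quantize the remaining weights to a bit width chosen from a required inference-performance level. The pruning ratio, the bit-width table, and the random weights are illustrative assumptions, not Kioxia's method.

```python
# Illustrative sketch: magnitude pruning followed by uniform quantization at a
# bit width chosen from a hypothetical required-performance setting.
import numpy as np

def prune_and_quantize(weights: np.ndarray, prune_ratio: float, n_bits: int) -> np.ndarray:
    """Zero out the smallest-magnitude weights, then quantize the survivors."""
    threshold = np.quantile(np.abs(weights), prune_ratio)     # magnitude cutoff
    pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

    scale = np.max(np.abs(pruned)) / (2 ** (n_bits - 1) - 1)  # symmetric uniform step
    return pruned if scale == 0 else np.round(pruned / scale) * scale

# Hypothetical mapping from required inference performance to weight bit width.
BITS_FOR_REQUIREMENT = {"low": 4, "medium": 8, "high": 16}

first_model = np.random.randn(256, 256).astype(np.float32)    # stand-in parameters
second_model = prune_and_quantize(first_model, prune_ratio=0.5,
                                  n_bits=BITS_FOR_REQUIREMENT["medium"])
```
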
  • Patent number: 11907672
    Abstract: Computer-readable media, systems, and methods may improve classification of content using a machine-learning natural language processing (ML-NLP) classifier. The system may train a general language model on a general corpus, further train the general language model on a domain-specific corpus to generate a domain-specific language model, and conduct supervised machine learning on the domain-specific language model using a topic-specific corpus labeled as relating to topics of interest to generate the ML-NLP classifier. Accordingly, the ML-NLP classifier may be trained on a general corpus, further trained on a domain-specific corpus, and fine-tuned on a topic-specific corpus. In this manner, the ML-NLP classifier may classify domain-specific content into the topics of interest.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: February 20, 2024
    Assignee: REFINITIV US ORGANIZATION LLC
    Inventors: Tim Nugent, Matt Harding, Jochen Lothar Leidner
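
The three training stages named in this abstract (general pretraining, domain-adaptive pretraining, topic-labeled fine-tuning) map naturally onto common open-source tooling. The sketch below assumes the Hugging Face transformers and datasets libraries, a generic bert-base-uncased checkpoint, and tiny placeholder corpora; it illustrates the staging only and is not Refinitiv's implementation.

```python
# Hypothetical staging of an ML-NLP classifier: general LM -> domain-adapted LM
# -> topic classifier. Model names, corpora, and settings are placeholders.
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoModelForSequenceClassification,
                          AutoTokenizer, DataCollatorForLanguageModeling,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")       # stage 1: general model

# Stage 2: continue masked-LM training on a domain-specific corpus.
domain = Dataset.from_dict({"text": ["quarterly earnings beat guidance", "bond yields fell"]})
domain = domain.map(lambda b: tok(b["text"], truncation=True), batched=True)
stage2 = Trainer(model=AutoModelForMaskedLM.from_pretrained("bert-base-uncased"),
                 args=TrainingArguments(output_dir="domain_lm", num_train_epochs=1),
                 train_dataset=domain,
                 data_collator=DataCollatorForLanguageModeling(tok))
stage2.train()
stage2.save_model("domain_lm")

# Stage 3: supervised fine-tuning on a topic-labeled corpus.
topics = Dataset.from_dict({"text": ["merger announced", "weather was mild"], "label": [1, 0]})
topics = topics.map(lambda b: tok(b["text"], truncation=True), batched=True)
stage3 = Trainer(model=AutoModelForSequenceClassification.from_pretrained("domain_lm", num_labels=2),
                 args=TrainingArguments(output_dir="topic_clf", num_train_epochs=1),
                 train_dataset=topics,
                 data_collator=DataCollatorWithPadding(tok))
stage3.train()
```
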
  • Patent number: 11893995
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for collaboration between multiple voice controlled devices are disclosed. In one aspect, a method includes the actions of identifying, by a first computing device, a second computing device that is configured to respond to a particular, predefined hotword; receiving audio data that corresponds to an utterance; receiving a transcription of additional audio data outputted by the second computing device in response to the utterance; based on the transcription of the additional audio data and based on the utterance, generating a transcription that corresponds to a response to the additional audio data; and providing, for output, the transcription that corresponds to the response.
    Type: Grant
    Filed: December 5, 2022
    Date of Patent: February 6, 2024
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Pedro Gonnet Anders, Thomas Deselaers, Sandro Feuz
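
A toy, text-only simulation of the collaboration flow in this abstract: a second device answers the user first, and the first device receives a transcription of that output and generates a response that builds on it. The class, hotword, and canned replies are invented for illustration and are not Google's implementation.

```python
# Toy simulation of two hotword-triggered devices coordinating a response.
# The device class, hotword, and replies are invented examples.
from typing import Optional

class VoiceDevice:
    def __init__(self, name: str, hotword: str = "ok assistant"):
        self.name = name
        self.hotword = hotword

    def respond(self, utterance: str, peer_transcription: Optional[str] = None) -> str:
        if not utterance.lower().startswith(self.hotword):
            return ""                                  # hotword not detected: stay silent
        if peer_transcription is None:
            return f"{self.name}: turning off the kitchen lights."
        # Build the reply on both the utterance and the peer's transcribed output.
        return (f"{self.name}: since the other speaker said {peer_transcription!r}, "
                f"I will lock the front door as well.")

device_a, device_b = VoiceDevice("SpeakerA"), VoiceDevice("SpeakerB")
utterance = "ok assistant, good night"
peer_audio_transcription = device_b.respond(utterance)   # transcription of B's audio output
print(device_a.respond(utterance, peer_transcription=peer_audio_transcription))
```
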
  • Patent number: 11887582
    Abstract: Systems, methods, and devices for training and testing utterance based frameworks are disclosed. The training and testing can be conducting using synthetic utterance samples in addition to natural utterance samples. The synthetic utterance samples can be generated based on a vector space representation of natural utterances. In one method, a synthetic weight vector associated with a vector space is generated. An average representation of the vector space is added to the synthetic weight vector to form a synthetic feature vector. The synthetic feature vector is used to generate a synthetic voice sample. The synthetic voice sample is provided to the utterance-based framework as at least one of a testing or training sample.
    Type: Grant
    Filed: February 11, 2021
    Date of Patent: January 30, 2024
    Assignee: Spotify AB
    Inventor: Daniel Bromand
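
A small numpy sketch of the vector-space step this abstract outlines: add a randomly drawn synthetic weight vector to the average representation of natural-utterance features, then hand the resulting synthetic feature vector to a voice-synthesis stage (stubbed out here). Dimensions, distributions, and the stub are assumptions, not Spotify's pipeline.

```python
# Illustrative synthetic feature vectors built from a natural-utterance vector
# space; the synthesis stage is a stub and all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
natural_features = rng.normal(size=(500, 64))         # stand-in natural utterance vectors

mean_vector = natural_features.mean(axis=0)           # average representation of the space
spread = natural_features.std(axis=0)

def synthetic_feature_vector() -> np.ndarray:
    synthetic_weights = rng.normal(scale=spread)       # synthetic weight vector
    return mean_vector + synthetic_weights             # average + weights -> synthetic vector

def synthesize_voice_sample(features: np.ndarray) -> np.ndarray:
    """Placeholder for the vocoder/TTS stage that renders audio from features."""
    return np.tanh(features)

training_or_testing_sample = synthesize_voice_sample(synthetic_feature_vector())
```
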
  • Patent number: 11886812
    Abstract: In an embodiment, the disclosed technologies are capable of receiving, by a digital model, data representing a first text sequence in a first language; using the digital model, modifying the first text sequence to result in creating and digitally storing a second text sequence in the first language; and outputting, by the digital model, the second text sequence in the first language. The modifying may include any one or more of: deleting text from the first text sequence, adding text to the first text sequence, modifying text of the first text sequence, reordering text of the first text sequence, adding a digital markup to the first text sequence. The digital model may have been fine-tuned, after having been machine-learned, using a subset of values of model parameters associated with an encoding layer or an embedding layer or both the encoding layer and the embedding layer.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: January 30, 2024
    Assignee: Grammarly, Inc.
    Inventors: Maria Nadejde, Joel Tetreault
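
The last sentence of this abstract refers to fine-tuning only parameters associated with the encoding and/or embedding layers. The PyTorch fragment below shows one conventional way to restrict training to such a subset by freezing everything else; the toy model and layer names are assumptions, not Grammarly's architecture.

```python
# Sketch: fine-tune only the embedding- and encoder-layer parameters of a
# pretrained text model; the toy model and name prefixes are assumptions.
import torch
from torch import nn

model = nn.ModuleDict({
    "embedding": nn.Embedding(30_000, 512),
    "encoder": nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=4),
    "output": nn.Linear(512, 30_000),
})

TRAINABLE_PREFIXES = ("embedding", "encoder")          # subset used for fine-tuning
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(TRAINABLE_PREFIXES)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)
```
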
  • Patent number: 11887578
    Abstract: A method and system for automatic dubbing are disclosed, comprising: responsive to receiving a selection of media content for playback on a user device by a user of the user device, processing extracted speeches of a first voice from the media content to generate replacement speeches using a set of phonemes of a second voice of the user of the user device, and replacing the extracted speeches of the first voice with the generated replacement speeches in the audio portion of the media content for playback on the user device.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: January 30, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Henry Gabryjelski, Jian Luan, Dapeng Li
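
A compact sketch of the splice step implied by this abstract: given time spans where the first voice was extracted, synthesize replacement speech (stubbed here) and overwrite those spans in the audio track. Sample rate, the synthesis stub, and function names are hypothetical, not Microsoft's API.

```python
# Sketch of replacing extracted speech spans with synthesized replacements.
# The synthesis function is a stub; names and sample rate are hypothetical.
import numpy as np

SAMPLE_RATE = 16_000

def synthesize_in_second_voice(text: str, n_samples: int) -> np.ndarray:
    """Stub for TTS built from the second voice's phoneme set."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return 0.1 * np.sin(2 * np.pi * 220 * t)

def dub(audio: np.ndarray, segments: list) -> np.ndarray:
    """segments: (start_s, end_s, transcript) spans spoken by the first voice."""
    dubbed = audio.copy()
    for start_s, end_s, transcript in segments:
        start, end = int(start_s * SAMPLE_RATE), int(end_s * SAMPLE_RATE)
        dubbed[start:end] = synthesize_in_second_voice(transcript, end - start)
    return dubbed

track = np.zeros(10 * SAMPLE_RATE)                     # stand-in audio portion of the media
dubbed_track = dub(track, [(1.0, 2.5, "hello there"), (4.0, 5.0, "goodbye")])
```
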
  • Patent number: 11887615
    Abstract: A method and device for transparency processing of music are provided. The method comprises: obtaining a characteristic of music to be played; inputting the characteristic into a transparency probability neural network to obtain a transparency probability of the music to be played; and determining a transparency enhancement parameter corresponding to the transparency probability, the transparency enhancement parameter being used to perform transparency adjustment on the music to be played. The present invention constructs a transparency probability neural network in advance based on deep learning and builds a mapping relationship between the transparency probability and the transparency enhancement parameters, so that the transparency of the music to be played can be adjusted automatically.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 30, 2024
    Assignee: Anker Innovations Technology Co., Ltd.
    Inventors: Qingshan Yao, Yu Qin, Haowen Yu, Feng Lu
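
A rough sketch of the two-step mapping in this abstract: a small network predicts a transparency probability from audio features, and the probability is then mapped to an enhancement parameter (here a simple high-frequency gain). The feature set, network shape, and gain mapping are assumptions, not Anker's design.

```python
# Sketch: audio features -> transparency probability -> enhancement parameter.
# The network shape and the probability-to-gain mapping are assumptions.
import numpy as np
import torch
from torch import nn

transparency_net = nn.Sequential(      # stands in for the trained probability network
    nn.Linear(40, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

def enhancement_gain_db(features: np.ndarray) -> float:
    """Map the predicted transparency probability to a treble gain in dB."""
    with torch.no_grad():
        prob = transparency_net(torch.from_numpy(features).float()).item()
    # Hypothetical mapping: less transparent music gets a stronger boost.
    return round(6.0 * (1.0 - prob), 2)

features = np.random.rand(40).astype(np.float32)       # e.g. mel-band energies of the track
gain_db = enhancement_gain_db(features)
```
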
  • Patent number: 11875807
    Abstract: A deep-learning-based tonal balancing method, apparatus, and system are provided. The method includes extracting features from audio data to obtain audio data features and generating audio balancing results using a trained audio balancing model based on the obtained features. The present invention employs deep neural networks and an unsupervised deep learning method to solve the problem of audio balancing for unlabeled music and music of unknown style. The present invention also incorporates user preference statistics to achieve a more rational multi-style audio balancing design that meets individual needs.
    Type: Grant
    Filed: June 3, 2019
    Date of Patent: January 16, 2024
    Assignee: Anker Innovations Technology Co., Ltd.
    Inventors: Qingshan Yao, Yu Qin, Haowen Yu, Feng Lu
  • Patent number: 11869529
    Abstract: It is intended to accurately convert a speech rhythm. A model storage unit (10) stores a speech rhythm conversion model which is a neural network that receives, as an input thereto, a first feature value vector including information related to a speech rhythm of at least a phoneme extracted from a first speech signal resulting from a speech uttered by a speaker in a first group, converts the speech rhythm of the first speech signal to a speech rhythm of a speaker in a second group, and outputs the speech rhythm of the speaker in the second group. A feature value extraction unit (11) extracts, from the input speech signal resulting from the speech uttered by the speaker in the first group, information related to a vocal tract spectrum and information related to the speech rhythm.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: January 9, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventor: Sadao Hiroya
  • Patent number: 11862153
    Abstract: An audio controlled assistant captures environmental noise and converts the environmental noise into audio signals. The audio signals are provided to a system which analyzes the audio signals for a plurality of audio prompts that have been customized for the acoustic environment surrounding the audio controlled assistant by an acoustic modeling system. The system is configured to detect the presence of an audio prompt in the audio signals and, in response, transmit instructions associated with the detected audio prompt to at least one of the audio controlled assistant or one or more cloud-based services.
    Type: Grant
    Filed: September 23, 2019
    Date of Patent: January 2, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: John Daniel Thimsen, Gregory Michael Hart, Ryan Paul Thomas
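
One lightweight way to detect a stored audio prompt inside captured environmental audio, in the spirit of this abstract, is normalized cross-correlation against a template. The threshold and the synthetic signals below are illustrative, not Amazon's acoustic model.

```python
# Illustrative prompt detection by normalized cross-correlation with a template.
# The threshold and template are assumptions, not the patented acoustic model.
import numpy as np

def detect_prompt(signal: np.ndarray, template: np.ndarray, threshold: float = 0.6) -> bool:
    signal = (signal - signal.mean()) / (signal.std() + 1e-9)
    template = (template - template.mean()) / (template.std() + 1e-9)
    corr = np.correlate(signal, template, mode="valid") / len(template)
    return bool(np.max(corr) > threshold)

rng = np.random.default_rng(1)
prompt = rng.normal(size=1600)                         # stored audio prompt template
environment = np.concatenate([rng.normal(size=8000), prompt, rng.normal(size=8000)])
if detect_prompt(environment, prompt):
    print("prompt detected -> forward instructions to the assistant / cloud service")
```
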
  • Patent number: 11862152
    Abstract: Disclosed herein are system, apparatus, article of manufacture, method, and computer program product embodiments for adapting an automated speech recognition system to provide more accurate suggestions for voice queries involving media content, including recently created or recently available content. An example computer-implemented method includes transcribing the voice query, identifying respective components of the query such as the media content being requested and the action to be performed, and generating fuzzy candidates that potentially match the media content based on phonetic representations of the identified components. Phonetic representations of domain-specific candidates are stored in a domain entities index, which is continuously updated with new entries so as to maintain the accuracy of speech recognition for voice queries about recently created or recently available content.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: January 2, 2024
    Assignee: ROKU, INC.
    Inventors: Atul Kumar, Elizabeth O. Bratt, Minsuk Heo, Nidhi Rajshree, Praful Chandra Mangalath
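
A toy version of the fuzzy phonetic lookup sketched in this abstract: each domain entity (for example, a show title) is indexed under a crude phonetic key, and a transcribed query component is matched against the index with string similarity. The key function and similarity cutoff are simplifications for illustration only, not Roku's algorithm.

```python
# Toy domain-entities index with a crude phonetic key plus fuzzy matching.
# The key function and cutoff are simplifications for illustration only.
import difflib

def phonetic_key(text: str) -> str:
    """Very rough phonetic normalization: lowercase, drop vowels after the first letter."""
    text = "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ")
    words = [w[0] + "".join(c for c in w[1:] if c not in "aeiou") for w in text.split()]
    return " ".join(words)

domain_entities_index: dict = {}                       # phonetic key -> canonical title

def add_entity(title: str) -> None:
    """Called whenever new content becomes available, keeping the index current."""
    domain_entities_index[phonetic_key(title)] = title

def fuzzy_candidates(transcribed_component: str, n: int = 3) -> list:
    keys = difflib.get_close_matches(
        phonetic_key(transcribed_component), domain_entities_index, n=n, cutoff=0.5)
    return [domain_entities_index[k] for k in keys]

for title in ["Stranger Things", "The Mandalorian", "Severance"]:
    add_entity(title)
print(fuzzy_candidates("strangler things"))            # e.g. ['Stranger Things']
```
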
  • Patent number: 11854570
    Abstract: An electronic apparatus, method, and computer readable medium are provided. The electronic apparatus includes a communicator and a controller. The controller, based on a first voice input being received, controls the communicator to receive data including first response information corresponding to the first voice input from a server and outputs the first response information on a display, and based on a second voice input being received, controls the communicator to receive data including second response information corresponding to the second voice input from the server and outputs the second response information on the display. Whether to use utterance history information is determined based on whether the second voice input is received within a predetermined time from a time corresponding to the output of the first response information, and the second response information is displayed differently depending on whether the second voice input is received within that predetermined time.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: December 26, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ji-hye Chung, Cheong-jae Lee, Hye-jeong Lee, Yong-wook Shin
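
A minimal sketch of the timing rule in this abstract: when the follow-up voice input arrives within a predetermined window after the first response was output, the utterance history is used to interpret it; otherwise it is treated as a fresh query. The window length and response formatting are assumptions, not Samsung's implementation.

```python
# Sketch: use utterance history only for follow-ups within a time window.
# The window length and response text are illustrative assumptions.
import time
from typing import Optional

FOLLOW_UP_WINDOW_S = 10.0

class DialogSession:
    def __init__(self) -> None:
        self.history = []                              # prior utterances
        self.last_response_time: Optional[float] = None

    def respond(self, voice_input: str) -> str:
        now = time.monotonic()
        in_window = (self.last_response_time is not None
                     and now - self.last_response_time <= FOLLOW_UP_WINDOW_S)
        if in_window:
            response = f"(follow-up, using history {self.history}) -> answer to {voice_input!r}"
        else:
            self.history.clear()                       # stale context: start over
            response = f"(new query) -> answer to {voice_input!r}"
        self.history.append(voice_input)
        self.last_response_time = now
        return response

session = DialogSession()
print(session.respond("what's the weather in Seoul"))
print(session.respond("and tomorrow?"))                # within the window: uses history
```
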
  • Patent number: 11847380
    Abstract: Systems and methods for providing supplemental information with a response to a command are provided herein. In some embodiments, audio data representing a spoken command may be received by a cloud-based information system. A response to the command may be retrieved from a category related to the context of the command. A supplemental information database may also be provided that is pre-populated with supplemental information related to an individual having a registered account on the cloud-based information system. In response to retrieving the response to the command, supplemental information may be selected from the supplemental information database to be appended to the response to the command. A message may then be generated including the response and the supplemental information appended thereto, which in turn may be converted into audio data representing the message, which may be sent to a voice-controlled electronic device of the individual.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: December 19, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Srikanth Doss Kadarundalagi Raghuram Doss, Jeffery David Wells, Richard Dault, Benjamin Joseph Tobin, Mark Douglas Elders, Stanislava R. Vlasseva, Skeets Jonathan Norquist, Nathan Lee Bosen, Ryan Christopher Rapp
  • Patent number: 11842377
    Abstract: In an example embodiment, text is received at an ecommerce service from a first user, the text in a first language and pertaining to a first listing on the ecommerce service. Contextual information about the first listing may be retrieved. The text may be translated to a second language. Then, a plurality of text objects, in the second language, similar to the translated text may be located in a database, each of the text objects corresponding to a listing. Then, the plurality of text objects similar to the translated text may be ranked based on a comparison of the contextual information about the first listing and contextual information stored in the database for the listings corresponding to the plurality of text objects similar to the translated text. At least one of the ranked plurality of text objects may then be translated to the first language.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: December 12, 2023
    Assignee: EBAY INC.
    Inventor: Yan Chelly
  • Patent number: 11836454
    Abstract: A computer-implemented method is provided for translating input text from a source language to a target language, including receiving, by an interface, the input text in a source language, and identifying, by a processor coupled to the interface, at least one portion of the input text. The method includes replacing each portion with a corresponding semantic structure to produce at least one semantic structure, and organizing the at least one semantic structure into a semantic tree. The method includes matching a portion of the semantic tree to at least one phrase from a stored phrase bank, and providing one or more versions of the at least one phrase in the source language. The method includes receiving a selection of one of the versions, translating the selected version from the source language to the target language, and providing the selected version in the target language.
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: December 5, 2023
    Assignee: Language Scientific, Inc.
    Inventor: Leonid Fridman
  • Patent number: 11837245
    Abstract: A method, system, and computer readable medium for decomposing an audio signal into different isolated sources. The techniques and mechanisms convert an audio signal into K input spectrogram fragments. The fragments are sent into a deep neural network to isolate for different sources. The isolated fragments are then combined to form full isolated source audio signals.
    Type: Grant
    Filed: November 1, 2022
    Date of Patent: December 5, 2023
    Assignee: AUDIOSHAKE, INC.
    Inventor: Luke Miner
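
A skeletal version of the flow described in this abstract: split an audio signal into K spectrogram fragments, run each through a separation model (stubbed here as a simple mask), and stitch the isolated fragments back into a full-length source signal. The STFT settings and the stand-in "network" are assumptions, not AudioShake's model.

```python
# Skeleton of fragment-wise source separation; the "network" is a stub mask
# and all settings are illustrative, not the patented model.
import numpy as np
from scipy.signal import stft, istft

FS, NPERSEG, K = 16_000, 512, 8

def separation_network(fragment: np.ndarray) -> np.ndarray:
    """Stub: a real DNN would predict a per-source mask for the fragment."""
    mask = (np.abs(fragment) > np.median(np.abs(fragment))).astype(float)
    return fragment * mask                             # keep only the dominant bins

audio = np.random.randn(FS * 4)                        # 4 s of placeholder audio
_, _, spec = stft(audio, fs=FS, nperseg=NPERSEG)       # full spectrogram
fragments = np.array_split(spec, K, axis=1)            # K input spectrogram fragments

isolated = np.concatenate([separation_network(f) for f in fragments], axis=1)
_, isolated_audio = istft(isolated, fs=FS, nperseg=NPERSEG)   # full isolated source signal
```
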
  • Patent number: 11829720
    Abstract: Systems and methods for analysis and validation of language models trained using data that is unavailable or inaccessible are provided. One example method includes, at an electronic device with one or more processors and memory, obtaining a first set of data corresponding to one or more tokens predicted based on one or more previous tokens. The method determines a probability that the first set of data corresponds to a prediction generated by a first language model trained using a user privacy preserving training process. In accordance with a determination that the probability is within a predetermined range, the method determines that the one or more tokens correspond to a prediction associated with the user privacy preserving training process and outputs a predicted token sequence including the one or more tokens and the one or more previous tokens.
    Type: Grant
    Filed: December 1, 2020
    Date of Patent: November 28, 2023
    Assignee: Apple Inc.
    Inventors: Jerome R. Bellegarda, Bishal Barman, Brent D. Ramerth
  • Patent number: 11832068
    Abstract: Methods and apparatus for identifying a music service based on a user command. A content type is identified from a received user command and a music service is selected that supports the content type. A selected music service can then transmit audio content associated with the content type for playback.
    Type: Grant
    Filed: November 22, 2021
    Date of Patent: November 28, 2023
    Assignee: Sonos, Inc.
    Inventors: Simon Jarvis, Mark Plagge, Christopher Butts
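
The selection logic in this abstract reduces to mapping a requested content type onto the registered services that support it. A toy lookup in that spirit is shown below; the service names and content types are made-up examples, not Sonos's catalog.

```python
# Toy selection of a music service by the content type named in a command.
# Service names and supported types are made-up examples.
SERVICES = {
    "ServiceA": {"playlist", "track", "album"},
    "ServiceB": {"podcast", "track"},
    "ServiceC": {"radio_station"},
}

def identify_content_type(command: str) -> str:
    for content_type in {t for types in SERVICES.values() for t in types}:
        if content_type.replace("_", " ") in command.lower():
            return content_type
    return "track"                                     # default assumption

def select_service(command: str) -> str:
    content_type = identify_content_type(command)
    candidates = [name for name, types in SERVICES.items() if content_type in types]
    return candidates[0]                               # real logic could rank by user preference

print(select_service("play my workout playlist"))      # -> ServiceA
```
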
  • Patent number: 11816431
    Abstract: A computer-implemented method and a system are provided for autocompletion of text based on the context associated with the text. The computer-implemented method includes the steps of receiving input text; identifying a certain context associated with the input text from multiple predefined contexts by feeding the input text into a context-prediction component of a machine learning model that predicts the certain context; selecting a certain context-specific component of the machine learning model from multiple context-specific components according to the identified context; and feeding the input text into the selected context-specific component, which outputs autocomplete text associated with the identified context. The context-specific components are each trained to generate autocompleted text associated with the respective context pre-defined for that context-specific component.
    Type: Grant
    Filed: April 12, 2020
    Date of Patent: November 14, 2023
    Assignee: Salesforce, Inc.
    Inventor: Yang Zhang
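
A skeletal router matching the structure in this abstract: a context-prediction component picks one of several context-specific completion components, and the input text is fed into the chosen one. Both components are stubs here; the contexts and completions are invented examples, not Salesforce's models.

```python
# Skeletal context routing for autocomplete; the predictor and the per-context
# completion components are stubs with invented behavior.
def predict_context(text: str) -> str:
    """Stub context-prediction component."""
    return "support_ticket" if "error" in text.lower() else "sales_email"

CONTEXT_COMPONENTS = {
    # Each entry stands in for a context-specific model trained for that context.
    "support_ticket": lambda text: text + " and we are investigating the issue.",
    "sales_email": lambda text: text + " and I'd love to schedule a quick call.",
}

def autocomplete(text: str) -> str:
    context = predict_context(text)                    # 1) identify the context
    component = CONTEXT_COMPONENTS[context]            # 2) select its component
    return component(text)                             # 3) generate the completion

print(autocomplete("We noticed an error in the billing module"))
```
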
  • Patent number: 11817115
    Abstract: Methods and systems for deessing of speech signals are described. A deesser of a speech processing system includes an analyzer configured to receive a full spectral envelope for each time frame of a speech signal presented to the speech processing system, and to analyze the full spectral envelope to identify frequency content for deessing. The deesser also includes a compressor configured to receive results from the analyzer and to spectrally weight the speech signal as a function of the results of the analyzer. The analyzer can be configured to calculate a psychoacoustic measure from the full spectral envelope, and may be further configured to detect sibilant sounds of the speech signal using the psychoacoustic measure. The psychoacoustic measure can include, for example, a measure of sharpness, and the analyzer may be further configured to calculate deesser weights based on the measure of sharpness. Example applications include in-car communications.
    Type: Grant
    Filed: September 1, 2016
    Date of Patent: November 14, 2023
    Assignee: Cerence Operating Company
    Inventors: Tobias Herbig, Stefan Richardt
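
A bare-bones numpy rendering of the analyzer/compressor split described in this abstract: compute a crude sharpness-like measure from each frame's spectral envelope, flag sibilant frames, and attenuate the high band accordingly. The sharpness proxy, band edges, and gain law are assumptions, not Cerence's psychoacoustic model.

```python
# Rough sketch of per-frame deessing: a sharpness-like measure from the spectral
# envelope, then frequency weighting. All constants are illustrative assumptions.
import numpy as np

FS, NFFT = 16_000, 512
freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)
high_band = freqs >= 4_000                             # where sibilance mostly lives

def sharpness(envelope: np.ndarray) -> float:
    """Crude proxy: fraction of spectral energy above 4 kHz (not a Zwicker-style measure)."""
    total = envelope.sum() + 1e-12
    return float(envelope[high_band].sum() / total)

def deess_weights(envelope: np.ndarray, threshold: float = 0.35) -> np.ndarray:
    weights = np.ones_like(envelope)
    s = sharpness(envelope)
    if s > threshold:                                  # sibilant frame detected
        weights[high_band] = threshold / s             # attenuate the high band
    return weights

envelope = np.abs(np.fft.rfft(np.random.randn(NFFT)))  # stand-in spectral envelope for one frame
weighted_envelope = envelope * deess_weights(envelope)
```
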