Neural Network Patents (Class 704/232)
  • Patent number: 12380877
    Abstract: A method may include obtaining first audio data of a first communication session between a first and second device; during the first communication session, obtaining a first text string that is a transcription of the first audio data; and training a model of an automatic speech recognition system using the first text string and the first audio data. The method may further include, in response to completion of the training, deleting the first audio data and the first text string; after deleting the first audio data and the first text string, obtaining second audio data of a second communication session between a third and fourth device; during the second communication session, obtaining a second text string that is a transcription of the second audio data; and further training the model of the automatic speech recognition system using the second text string and the second audio data.
    Type: Grant
    Filed: November 8, 2021
    Date of Patent: August 5, 2025
    Assignee: Sorenson IP Holdings, LLC
    Inventors: David Thomson, Jadie Adams
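
The session-by-session train-then-delete cycle this abstract describes can be sketched as a loop that performs one model update per session and discards the session's audio and transcript once the update completes. The frequency-count "model" and `update_model` step below are hypothetical stand-ins for a real ASR trainer, not the patented method.

```python
# Sketch of the train-then-delete cycle: each communication session's audio
# and transcript drive one model update, after which the raw data is deleted.
from collections import Counter

def update_model(model: Counter, audio: bytes, transcript: str) -> None:
    """One training step: here, just accumulate word statistics (stand-in)."""
    model.update(transcript.lower().split())

def train_on_session(model: Counter, session_store: dict, session_id: str) -> None:
    audio, transcript = session_store[session_id]
    update_model(model, audio, transcript)
    # In response to completion of the training, delete the session data.
    del session_store[session_id]

model = Counter()
sessions = {
    "s1": (b"...", "hello how are you"),
    "s2": (b"...", "hello again"),
}
train_on_session(model, sessions, "s1")
assert "s1" not in sessions          # first session's data deleted
train_on_session(model, sessions, "s2")
assert model["hello"] == 2           # model kept learning across sessions
assert not sessions                  # no raw audio or text retained
```

The point of the pattern is that training state persists while the privacy-sensitive inputs do not.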
  • Patent number: 12374347
    Abstract: A voice conversion method used in an electronic device, including determining a target speech to be converted and a content representation vector corresponding to the target speech, where the target speech has a first content and a first voiceprint, and the content representation vector is obtained based on the speech waveform of the target speech; determining a reference speech and a voiceprint representation vector corresponding to the reference speech, where the reference speech has a second voiceprint, and the second voiceprint is different from the first voiceprint; generating a converted speech based on a speech generator according to the content representation vector and the voiceprint representation vector; where the converted speech has the first content and the second voiceprint; where the speech generator is obtained by jointly training a preset speech generation network and a preset discriminator network by using a training speech having the second voiceprint.
    Type: Grant
    Filed: June 7, 2023
    Date of Patent: July 29, 2025
    Assignee: Shanghai Lilith Technology Corporation
    Inventors: Huanhua Liao, Junfeng Li, Zhikai Li, Haonan Zhao, Xin Xiong
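
The conversion step in this abstract pairs a content representation from the target speech with a voiceprint representation from a reference speaker and feeds both to a generator. The element-wise "generator" below is a hypothetical linear stand-in for the adversarially trained speech generation network the patent describes.

```python
# Sketch of voice conversion: combine a content vector (first content) with a
# reference speaker's voiceprint vector (second voiceprint) in a generator.
def generator(content_vec, voiceprint_vec):
    # A real generator is a trained neural network; this placeholder just
    # mixes the two representations element-wise.
    return [c + 0.5 * v for c, v in zip(content_vec, voiceprint_vec)]

def convert(target_speech_content, reference_voiceprint):
    return generator(target_speech_content, reference_voiceprint)

content = [1.0, 2.0, 3.0]        # content representation of the target speech
voiceprint = [0.2, 0.4, 0.6]     # voiceprint representation of the reference
converted = convert(content, voiceprint)
assert all(abs(c - e) < 1e-9 for c, e in zip(converted, [1.1, 2.2, 3.3]))
```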
  • Patent number: 12367165
    Abstract: The present disclosure provides example operation accelerators and compression methods. One example operation accelerator includes a storage configured to store first input data, weight data, and a control instruction, and an operation circuit connected to the storage and configured to perform matrix multiplication on the first input data and the weight data, to obtain a computation result. The operation accelerator further includes a compression module configured to compress the computation result to obtain compressed data, as well as a controller connected to the storage and configured to obtain the control instruction from the storage, and when the control instruction includes instructions to compress the computation result, control the compression module to compress the computation result to obtain the compressed data.
    Type: Grant
    Filed: March 11, 2024
    Date of Patent: July 22, 2025
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Baoqing Liu, Hu Liu, Qinglong Chen
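
The accelerator's data path can be sketched in software: a matrix multiply produces a computation result, and a controller flag decides whether a compression module is applied to it. The zero run-length coding below is an assumed placeholder; the abstract does not specify the compression scheme.

```python
# Sketch of the accelerator: matmul, then optional compression under the
# control instruction. Run-length coding stands in for the real scheme.
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def compress(flat):
    """Run-length encode as [value, count] pairs (illustrative only)."""
    out = []
    for x in flat:
        if out and out[-1][0] == x:
            out[-1][1] += 1
        else:
            out.append([x, 1])
    return out

def accelerator(a, b, control_compress: bool):
    result = matmul(a, b)                      # operation circuit
    flat = [x for row in result for x in row]
    # Controller: compress only when the control instruction says so.
    return compress(flat) if control_compress else flat

a = [[1, 0], [0, 0]]
b = [[1, 1], [1, 1]]
assert accelerator(a, b, False) == [1, 1, 0, 0]
assert accelerator(a, b, True) == [[1, 2], [0, 2]]
```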
  • Patent number: 12367863
    Abstract: A computer-implemented method for training a neural transducer is provided including, by using audio data and transcription data of the audio data as input data, obtaining outputs from a trained language model and a seed neural transducer, respectively, combining the outputs to obtain a supervisory output, and updating parameters of another neural transducer in training so that its output is close to the supervisory output. The neural transducer can be a Recurrent Neural Network Transducer (RNN-T).
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: July 22, 2025
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Gakuto Kurata
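
The knowledge-distillation step here combines outputs of a trained language model and a seed transducer into a supervisory output, then updates a student transducer toward it. The interpolation weight and the simple gradient-style update below are illustrative assumptions, not the patented training procedure.

```python
# Sketch of the distillation step: mix teacher distributions into a
# supervisory output and nudge the student toward it.
def combine(lm_probs, seed_probs, weight=0.5):
    mixed = [weight * p + (1 - weight) * q for p, q in zip(lm_probs, seed_probs)]
    z = sum(mixed)
    return [m / z for m in mixed]            # renormalized supervisory output

def update_student(student_probs, supervisory, lr=0.1):
    # Move the student distribution a small step toward the supervisory one.
    return [s + lr * (t - s) for s, t in zip(student_probs, supervisory)]

lm = [0.7, 0.2, 0.1]                         # trained language model output
seed = [0.5, 0.3, 0.2]                       # seed neural transducer output
target = combine(lm, seed)
student = [1 / 3] * 3
for _ in range(50):
    student = update_student(student, target)
# After repeated updates the student output is close to the supervisory one.
assert all(abs(s - t) < 1e-2 for s, t in zip(student, target))
```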
  • Patent number: 12361931
    Abstract: A method includes receiving, by at least one processing device of an electronic device, an utterance provided by a user. The method also includes delexicalizing at least a portion of the utterance using a named entity database stored on the electronic device to create an encoded utterance. The method further includes transmitting the encoded utterance to a server on which a language model is stored. The method also includes receiving an intent and one or more slots associated with the utterance, where at least one slot of the one or more slots is a representative tag. The method further includes identifying a named entity corresponding to the at least one slot based on the named entity database. In addition, the method includes performing an action based on the intent and the one or more slots.
    Type: Grant
    Filed: March 29, 2023
    Date of Patent: July 15, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Tapas Kanungo, Qingxiaoyang Zhu, Nehal A. Bengre
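
The privacy mechanism in this abstract, replacing named entities with representative tags before the utterance leaves the device and resolving the returned slot tag locally, can be sketched as follows. The tag names, intent format, and lookup scheme are illustrative assumptions.

```python
# Sketch of on-device delexicalization: entities are swapped for tags before
# transmission, and the returned slot tag is resolved from the local database.
named_entity_db = {"alice": "<CONTACT_1>", "bob": "<CONTACT_2>"}

def delexicalize(utterance: str) -> str:
    encoded = utterance.lower()
    for entity, tag in named_entity_db.items():
        encoded = encoded.replace(entity, tag)
    return encoded

def resolve_slot(tag: str) -> str:
    reverse = {t: entity for entity, t in named_entity_db.items()}
    return reverse[tag]

encoded = delexicalize("call Alice")
assert encoded == "call <CONTACT_1>"          # entity never sent to the server
# The server (not shown) returns an intent and a representative-tag slot:
intent, slot = "make_call", "<CONTACT_1>"
assert resolve_slot(slot) == "alice"          # named entity identified on device
```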
  • Patent number: 12361215
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on an input to generate an output. In one aspect, one of the methods includes: receiving input data that describes an input of a machine learning task; receiving candidate output data that describes a set of candidate classification outputs of the machine learning task for the input; generating an input sequence that includes the input and the set of candidate classification outputs; processing the input sequence using a neural network to generate a network output that specifies a respective score for each candidate classification output in the set of candidate classification outputs; and generating an output of the machine learning task for the input, comprising selecting, as the output, a selected candidate classification output from the set of candidate classification outputs using the respective scores.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: July 15, 2025
    Assignee: Google LLC
    Inventors: Jason Weng Wei, Maarten Paul Bosma, Yuzhe Zhao, Jr., Kelvin Gu, Quoc V. Le
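
The classification-by-scoring scheme above, packing the input and all candidate outputs into one sequence, scoring each candidate, and selecting the best, might be sketched as below. The `[SEP]` delimiter and the keyword-overlap scorer are hypothetical stand-ins for the neural network.

```python
# Sketch: build one input sequence from the input plus all candidates, score
# each candidate, and select the highest-scoring one as the task output.
def build_input_sequence(text: str, candidates: list) -> str:
    return " [SEP] ".join([text] + candidates)

def score_candidates(text: str, candidates: list) -> list:
    # Word-overlap scoring stands in for the neural network's scores.
    words = set(text.lower().split())
    return [len(words & set(c.lower().split())) for c in candidates]

def classify(text: str, candidates: list) -> str:
    seq = build_input_sequence(text, candidates)   # the single packed sequence
    scores = score_candidates(text, candidates)
    return candidates[scores.index(max(scores))]

candidates = ["positive movie review", "negative movie review"]
assert classify("a wonderfully positive film", candidates) == "positive movie review"
```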
  • Patent number: 12344277
    Abstract: An authentication system for a voice controlled autonomous driving system of a vehicle includes a plurality of perception sensors for collecting perception data indicative of an environment surrounding the vehicle and one or more controllers in electronic communication with one or more autonomous driving controllers of the voice controlled autonomous driving system and the plurality of perception sensors. The voice controlled autonomous driving system determines the trajectory of the vehicle based on a voice command generated by an occupant of the vehicle. The one or more controllers execute instructions to calculate a credibility score of the occupant that quantifies a reliability of one or more voice-based commands generated by the occupant. In response to determining that the overall credibility score of the occupant is less than a threshold access score, the one or more controllers adjust an access level of the voice controlled autonomous driving system for the occupant.
    Type: Grant
    Filed: September 15, 2023
    Date of Patent: July 1, 2025
    Assignee: GM GLOBAL TECHNOLOGY OPERATIONS LLC
    Inventors: Alireza Esna Ashari Esfahani, Rouhollah Sayed Jafari, Upali P. Mudalige
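
The access-gating logic can be sketched as a score aggregated over past voice commands and compared against a threshold. The per-command scores, access levels, and threshold value below are illustrative assumptions, not values from the patent.

```python
# Sketch: an overall credibility score quantifies how reliable the occupant's
# voice commands have been; falling below a threshold restricts access.
THRESHOLD_ACCESS_SCORE = 0.6     # illustrative value

def overall_credibility(command_scores: list) -> float:
    return sum(command_scores) / len(command_scores)

def adjust_access(current_level: str, score: float) -> str:
    if score < THRESHOLD_ACCESS_SCORE:
        return "restricted"      # limit voice control of the trajectory
    return current_level

history = [0.9, 0.4, 0.3]        # per-command credibility estimates
score = overall_credibility(history)
assert abs(score - 1.6 / 3) < 1e-9
assert adjust_access("full", score) == "restricted"
assert adjust_access("full", 0.8) == "full"
```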
  • Patent number: 12334067
    Abstract: A plurality of microphone signals can be obtained. In the plurality of microphone signals, speech of a user can be detected. A gaze of the user can be determined based on the plurality of microphone signals. A voice activated response of the computing device can be performed in response to the gaze of the user being directed at the computing device. Other aspects are described and claimed.
    Type: Grant
    Filed: August 8, 2022
    Date of Patent: June 17, 2025
    Assignee: Apple Inc.
    Inventors: Prateek Murgai, Ashrith Deshpande
  • Patent number: 12334059
    Abstract: A method includes receiving a plurality of unlabeled audio samples corresponding to spoken utterances not paired with corresponding transcriptions. At a target branch of a contrastive Siamese network, the method also includes generating a sequence of encoder outputs for the plurality of unlabeled audio samples and modifying time characteristics of the encoder outputs to generate a sequence of target branch outputs. At an augmentation branch of the contrastive Siamese network, the method also includes performing augmentation on the unlabeled audio samples, generating a sequence of augmented encoder outputs for the augmented unlabeled audio samples, and generating predictions of the sequence of target branch outputs generated at the target branch. The method also includes determining an unsupervised loss term based on the target branch outputs and the predictions of the sequence of target branch outputs. The method also includes updating parameters of the audio encoder based on the unsupervised loss term.
    Type: Grant
    Filed: March 28, 2024
    Date of Patent: June 17, 2025
    Assignee: Google LLC
    Inventors: Jaeyoung Kim, Soheil Khorram, Hasim Sak, Anshuman Tripathi, Han Lu, Qian Zhang
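
The unsupervised objective above can be sketched numerically: the target branch encodes the clean audio, the augmentation branch encodes an augmented copy and predicts the target outputs, and the discrepancy between the two drives the encoder update. The encoder, augmentation, prediction head, and mean-squared-error loss below are all simplified stand-ins.

```python
# Sketch of the contrastive-Siamese unsupervised loss on toy audio frames.
import random

def encoder(frames):             # shared audio encoder (stand-in)
    return [2.0 * f for f in frames]

def augment(frames):             # e.g. small additive noise
    return [f + random.uniform(-0.1, 0.1) for f in frames]

def predict(aug_encodings):      # augmentation branch's prediction head
    return aug_encodings         # identity stand-in

def unsupervised_loss(frames):
    targets = encoder(frames)                   # target branch outputs
    preds = predict(encoder(augment(frames)))   # augmentation branch predictions
    return sum((t - p) ** 2 for t, p in zip(targets, preds)) / len(frames)

random.seed(0)
loss = unsupervised_loss([0.5, -0.2, 0.3])
assert 0.0 <= loss < 0.05        # small perturbation -> small loss
```

A real training loop would backpropagate this loss into the audio encoder's parameters.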
  • Patent number: 12315497
    Abstract: A method includes receiving, as input to a speech recognition model, audio data corresponding to a spoken utterance. The method also includes performing, using the speech recognition model, speech recognition on the audio data by, at each of a plurality of time steps, encoding, using an audio encoder, the audio data corresponding to the spoken utterance into a corresponding audio encoding, and decoding, using a speech recognition joint network, the corresponding audio encoding into a probability distribution over possible output labels. At each of the plurality of time steps, the method also includes determining, using an intended query (IQ) joint network configured to receive a label history representation associated with a sequence of non-blank symbols output by a final softmax layer, an intended query decision indicating whether or not the spoken utterance includes a query intended for a digital assistant.
    Type: Grant
    Filed: March 20, 2023
    Date of Patent: May 27, 2025
    Assignee: Google LLC
    Inventors: Shuo-yiin Chang, Guru Prakash Arumugam, Zelin Wu, Tara N. Sainath, Bo Li, Qiao Liang, Adam Stambler, Shyam Upadhyay, Manaal Faruqui, Trevor Strohman
  • Patent number: 12315532
    Abstract: Proposed is an audio data identification apparatus for collecting random audio data and identifying an audio resource obtained by extracting any one section of the collected audio data. The audio data identification apparatus includes: a communication unit that collects and transmits the random audio data; and a control unit that identifies the collected audio data. The control unit includes: a parsing unit that parses the collected audio data into predetermined units; an extraction unit that selects, as the audio resource, any one of a plurality of parsed sections of the audio data; a matching unit that matches identification information of the audio resource via a pre-loaded artificial intelligence algorithm; and a verification unit that verifies the identification information matched to the audio resource.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: May 27, 2025
    Assignee: Cochl Inc
    Inventors: Ilyoung Jeong, Hyungui Lim, Yoonchang Han, Subin Lee, Jeongsoo Park, Donmoon Lee
  • Patent number: 12315507
    Abstract: Techniques for performing automatic speech recognition (ASR) processing are described. The ASR processing may involve use of a segmenter and a decoder. The segmenter may be configured to identify audio segments, from audio data containing speech, based on word boundaries. The decoder may be configured to generate multiple word hypotheses for individual audio segments. A fixed-size representation of an audio segment may be generated. The described ASR processing techniques may be computationally less expensive than at least some other systems. Also, the described ASR processing techniques may maintain a larger number of word predictions per audio segment as compared to at least some other systems.
    Type: Grant
    Filed: September 16, 2022
    Date of Patent: May 27, 2025
    Assignee: Amazon Technologies, Inc.
    Inventor: Denis Filimonov
  • Patent number: 12300251
    Abstract: The present invention relates to speaker diarization technology, and more specifically to an end-to-end speaker diarization system and method based on transformer learning with an auxiliary-loss-based residual connection, which separates speakers by dividing them per time interval. The end-to-end speaker diarization system and method using an auxiliary loss can differentiate and separate speakers through speaker labeling based on transformer learning with an auxiliary loss, even when speaker speeches overlap in a multi-speaker environment.
    Type: Grant
    Filed: November 29, 2022
    Date of Patent: May 13, 2025
    Assignee: Gwangju Institute of Science and Technology
    Inventors: Dong Keon Park, Hong Kook Kim, Ye Chan Yu
  • Patent number: 12300220
    Abstract: The present disclosure provides a pitch-based speech conversion model training method and a speech conversion system, wherein an audio feature code is output by a priori encoder, and a pitch feature is extracted by a pitch extraction module. A linear spectrum corresponding to the reference speech is input into the posteriori encoder to obtain an audio latent variable. In addition, the audio feature code, a speech concatenation feature obtained by concatenation of the audio feature code and the pitch feature, and the audio latent variable are input into a temporal alignment module to obtain a converted speech code, and the converted speech code is decoded by a decoder to obtain a converted speech. The training loss of the converted speech is then calculated to determine the degree of convergence of the speech conversion model.
    Type: Grant
    Filed: December 23, 2024
    Date of Patent: May 13, 2025
    Assignee: NANJING SILICON INTELLIGENCE TECHNOLOGY CO., LTD.
    Inventors: Huapeng Sima, Ran Xu
  • Patent number: 12292912
    Abstract: A system for intent-based action recommendations and/or fulfillment in a messaging platform, preferably including and/or interfacing with: a set of user interfaces; a set of models; and/or a messaging platform. A method for intent-based action recommendations and/or fulfillment in a messaging platform, preferably including any or all of: receiving a set of information associated with a request; producing and sending a set of intent options; receiving a selected intent; generating a message based on the selected intent and/or the set of information; and providing the message.
    Type: Grant
    Filed: November 26, 2024
    Date of Patent: May 6, 2025
    Assignee: OrangeDot, Inc.
    Inventor: William Kearns
  • Patent number: 12288145
    Abstract: A computer-implemented method, a computer program product, and a computer system for parallel cross validation in collaborative machine learning. A server groups local models into groups. In each group, each local device uses its local data to validate accuracies of the local models and sends a validation result to a group leader or the server. The group leader or the server selects groups whose variances of the accuracies are not below a predetermined variance threshold. In each selected group, the group leader or the server compares an accuracy of each local model with an average value of the accuracies and randomly selects one or more local models whose accuracies do not exceed a predetermined accuracy threshold. The server obtains weight parameters of selected local models and updates the global model based on the weight parameters.
    Type: Grant
    Filed: April 27, 2021
    Date of Patent: April 29, 2025
    Assignee: International Business Machines Corporation
    Inventors: Kenichi Takasaki, Shoichiro Watanabe, Mari Abe Fukuda, Sanehiro Furuichi, Yasutaka Nishimura
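
The group-selection logic in this abstract might be sketched as follows: each group reports per-model validation accuracies, groups whose accuracy variance falls below a threshold are dropped, and within a kept group only models at or below an accuracy cutoff become candidates. The threshold values and the exact selection rule are illustrative assumptions.

```python
# Sketch of parallel cross-validation group selection: keep only groups with
# variance at or above the threshold, then pick low-accuracy models in them.
from statistics import pvariance

def select_models(groups: dict, var_threshold: float, acc_threshold: float):
    selected = []
    for name, accuracies in groups.items():
        if pvariance(accuracies) < var_threshold:
            continue                     # variance below threshold: skip group
        for model_idx, acc in enumerate(accuracies):
            if acc <= acc_threshold:     # model does not exceed the cutoff
                selected.append((name, model_idx))
    return selected

groups = {
    "g1": [0.90, 0.91, 0.89],    # low variance: group not selected
    "g2": [0.95, 0.60, 0.75],    # high variance: inspect its models
}
picked = select_models(groups, var_threshold=0.001, acc_threshold=0.8)
assert picked == [("g2", 1), ("g2", 2)]
```

The server would then pull weight parameters only from the picked models to update the global model.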
  • Patent number: 12282731
    Abstract: Systems and methods for using a generative artificial intelligence (AI) model to generate a suggested draft reply to a selected message. A message generation system and method are described that use guardrails that prevent unnecessary AI model processing and accidental sending of an AI model-generated draft. In some examples, draft reply-generation is limited to a subset of messages (e.g., focused, non-confidential) and triggering of the draft reply generation is performed only after user interaction criteria are satisfied. In some examples, a confirmation message is presented when the draft reply is attempted to be sent with no changes or quickly after the draft is generated. For instance, the guardrails limit the number of times the AI model is invoked to generate suggested replies and further prevents users from accidentally sending drafts generated from the AI model.
    Type: Grant
    Filed: March 3, 2023
    Date of Patent: April 22, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Caleb Whitmore, Susan Marie Grimshaw, Poonam Ganesh Hattangady
  • Patent number: 12277939
    Abstract: A method includes receiving, by a first encoder, an original speech segment, receiving, by a second encoder, an augmented speech segment of the original speech segment, generating, by the first encoder, a first speaker representation based on the original speech segment, generating, by the second encoder, a second speaker representation based on the augmented speech segment, and generating a contrastive loss based on the first speaker representation and the second speaker representation.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: April 15, 2025
    Assignee: TENCENT AMERICA LLC
    Inventors: Chunlei Zhang, Dong Yu
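
The two-encoder objective above can be sketched with a cosine-based loss that pulls the speaker representations of an original segment and its augmented copy together. Cosine distance is an assumed choice here; the abstract says only "contrastive loss", and the encoders are stand-ins.

```python
# Sketch: embed the original and augmented segments with two encoders and
# penalize misalignment between the two speaker representations.
import math

def embed(segment, scale=1.0):           # stand-in speaker encoder
    return [scale * x for x in segment]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(original, augmented):
    rep1 = embed(original)               # first speaker representation
    rep2 = embed(augmented, scale=1.1)   # second speaker representation
    return 1.0 - cosine(rep1, rep2)      # zero when representations align

segment = [0.3, -0.1, 0.7]
loss = contrastive_loss(segment, segment)
assert abs(loss) < 1e-9                  # parallel vectors -> zero loss
```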
  • Patent number: 12266342
    Abstract: A method for generating speech through multi-speaker neural text-to-speech (TTS) synthesis is provided. A text input may be received. Speaker latent space information of a target speaker may be provided through at least one speaker model. At least one acoustic feature may be predicted through an acoustic feature predictor based on the text input and the speaker latent space information. A speech waveform corresponding to the text input may be generated through a neural vocoder based on the at least one acoustic feature and the speaker latent space information.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: April 1, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Yan Deng, Lei He
  • Patent number: 12266373
    Abstract: A method and apparatus for audio processing, an electronic device and a storage medium are provided. The method includes: obtaining an audio encoding result, wherein each element in the audio encoding result has a coordinate in an audio frame number dimension and a coordinate in a text label sequence dimension; in response to an output result of an ith frame in a decoding path being a non-null character, respectively increasing the coordinate in the audio frame number dimension and the coordinate in the text label sequence dimension corresponding to an output position of the ith frame by 1 to obtain an output position of a (i+1)th frame in the decoding path; and determining an output result corresponding to the output position of the (i+1)th frame according to the output result of the ith frame in the decoding path and an element of the (i+1)th frame in the audio encoding result.
    Type: Grant
    Filed: December 9, 2022
    Date of Patent: April 1, 2025
    Assignee: BEIJING XIAOMI MOBILE SOFTWARE CO., LTD.
    Inventors: Mingshuang Luo, Fangjun Kuang, Liyong Guo, Long Lin, Wei Kang, Zengwei Yao, Povey Daniel
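
The decoding rule in this abstract, a decode position with a frame-axis coordinate and a label-axis coordinate, where a non-null output at frame i advances the label axis for frame i+1, can be sketched as below. The tiny lookup grid stands in for a real encoder output matrix, and the null-output case (only the frame axis advancing) is an assumption the abstract leaves implicit.

```python
# Sketch of the coordinate-advancing decode loop over an "audio encoding
# result" indexed by (frame number, text label position).
BLANK = "<null>"

def decode(encoding_result):
    """encoding_result[frame][label_pos] -> output symbol (stand-in)."""
    frame_coord, label_coord = 0, 0
    outputs = []
    for i in range(len(encoding_result)):
        symbol = encoding_result[frame_coord][label_coord]
        outputs.append(symbol)
        frame_coord += 1                 # always move to the next frame
        if symbol != BLANK:
            label_coord += 1             # non-null: advance the label axis too
    return [s for s in outputs if s != BLANK]

grid = [
    ["h", "?", "?"],
    [BLANK, BLANK, "?"],
    ["?", "i", "?"],
]
assert decode(grid) == ["h", "i"]
```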
  • Patent number: 12261827
    Abstract: Systems, methods, and apparatus, including computer programs encoded on a computer storage medium for managing network traffic to and from a server configured to: (i) receive, from a client device, a query in a natural language, and (ii) generate a response to the query in the natural language. In one aspect, a method includes: receiving, from the client device via a network connection, a network message including a new query for the server; processing the new query, using a text encoder, to generate an embedding vector of the new query; identifying, from amongst multiple entries of a vector database, a particular entry based on a similarity metric between: (i) the embedding vector of the new query, and (ii) an embedding vector of a particular query stored in the particular entry; and determining whether the similarity metric is greater than a threshold similarity value.
    Type: Grant
    Filed: January 19, 2024
    Date of Patent: March 25, 2025
    Assignee: Auradine, Inc.
    Inventors: Tao Xu, Barun Kar
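
The traffic-management idea above, embedding an incoming query, comparing it against cached query embeddings, and reusing a stored response when similarity exceeds a threshold, can be sketched as follows. The bag-of-words "encoder", the cosine metric, and the threshold value are illustrative stand-ins for the patent's text encoder and vector database.

```python
# Sketch of the query cache: embed the new query, find the most similar cached
# query, and reuse its response when similarity exceeds the threshold.
import math

def embed(text: str) -> dict:
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(u: dict, v: dict) -> float:
    dot = sum(u[w] * v.get(w, 0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def lookup(query: str, vector_db: list, threshold: float):
    emb = embed(query)
    best = max(vector_db, key=lambda entry: cosine(emb, entry["embedding"]))
    if cosine(emb, best["embedding"]) > threshold:
        return best["response"]          # cache hit: skip the server's model
    return None                          # miss: forward the query to the server

vector_db = [
    {"embedding": embed("what is the capital of france"), "response": "Paris"},
    {"embedding": embed("how do i reset my password"), "response": "Use settings."},
]
assert lookup("what is the capital of france", vector_db, 0.9) == "Paris"
assert lookup("tell me a joke", vector_db, 0.9) is None
```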
  • Patent number: 12260863
    Abstract: In an embodiment, the disclosure relates to a device for assisting a respondent in a conversation. The device includes a microphone configured to detect a voice input, and a transmitter communicatively coupled to a server and configured to transmit the voice input to the server. The server is configured to generate vectors associated with the voice input, feed the vectors associated with the voice input to an Artificial Intelligence utilizing a trained Machine Learning (ML) model, and obtain, from the trained ML model, an output corresponding to the vectors. The device further includes a receiver communicatively coupled to the server, and configured to receive from the server, the output generated by the ML model. A speaker is communicatively coupled with the receiver and is configured to generate a voice-based response based on the output, for assisting the respondent in responding to the conversation.
    Type: Grant
    Filed: May 6, 2024
    Date of Patent: March 25, 2025
    Inventor: Leigh M. Rothschild
  • Patent number: 12254885
    Abstract: Techniques are described herein for detecting and handling failures in other automated assistants. A method includes: executing a first automated assistant in an inactive state at least in part on a computing device operated by a user; while in the inactive state, determining, by the first automated assistant, that a second automated assistant failed to fulfill a request of the user; in response to determining that the second automated assistant failed to fulfill the request of the user, the first automated assistant processing cached audio data that captures a spoken utterance of the user comprising the request that the second automated assistant failed to fulfill, or features of the cached audio data, to determine a response that fulfills the request of the user; and providing, by the first automated assistant to the user, the response that fulfills the request of the user.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: March 18, 2025
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Matthew Sharifi
  • Patent number: 12243545
    Abstract: A method and system of neural network dynamic noise suppression is provided for audio processing.
    Type: Grant
    Filed: December 24, 2021
    Date of Patent: March 4, 2025
    Assignee: Intel Corporation
    Inventors: Adam Kupryjanow, Lukasz Pindor
  • Patent number: 12244792
    Abstract: A method of processing, prior to encoding using an external encoder, image data using an artificial neural network is provided. The external encoder is operable in a plurality of encoding modes. At the neural network, image data representing one or more images is received. The image data is processed using the neural network to generate output data indicative of an encoding mode selected from the plurality of encoding modes of the external encoder. The neural network is trained to select, using image data, an encoding mode from the plurality of encoding modes of the external encoder, using one or more differentiable functions configured to emulate an encoding process. The generated output data is outputted from the neural network to the external encoder to enable the external encoder to encode the image data using the selected encoding mode.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: March 4, 2025
    Assignee: Sony Interactive Entertainment Europe Limited
    Inventors: Aaron Chadha, Ioannis Andreopoulos
  • Patent number: 12228476
    Abstract: Provided is a fault signal locating and identifying method for industrial equipment based on a microphone array. The method includes the steps of: acquiring sound signals and dividing the acquired signals into a training set, a verifying set and a test set; performing feature extraction on the sound signals in the training set, and extracting a phase spectrogram and an amplitude spectrogram of a spectrogram; sending an output of a feature extraction module, as an input, to a CNN, and in each layer of the CNN, learning a translation invariance in the spectrogram by using a 2D CNN; between the layers of the CNN, normalizing the output by using a batch normalization, and reducing a dimension by using a maximum pooling layer along a frequency axis; sending an output from the layers of the CNN to layers of an RNN; using a linear activation function; and inputting an output of a full connection layer to two parallel full connection layer branches for fault identification and fault location, respectively.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: February 18, 2025
    Assignee: NORTHEASTERN UNIVERSITY
    Inventors: Feng Luan, Xu Li, Ziming Zhang, Yan Wu, Yuejiao Han, Dianhua Zhang
  • Patent number: 12225317
    Abstract: According to one embodiment, a method, computer system, and computer program product for front-end clipping reduction is provided. The embodiment may include capturing input, including at least one visual input and at least one audio input. The embodiment may also include modeling data regarding visual cues based on a visual input from the at least one visual input. The embodiment may further include marking one or more timestamps which, in light of the modeled data, correspond to speech in the at least one audio input. The embodiment may also include transmitting an audio input from within the at least one audio input corresponding to the one or more marked timestamps.
    Type: Grant
    Filed: March 3, 2022
    Date of Patent: February 11, 2025
    Assignee: International Business Machines Corporation
    Inventors: Joseph Sayer, Andrew David Lyell, Benjamin David Cox
  • Patent number: 12223953
    Abstract: A contextual end-to-end automatic speech recognition (ASR) system includes: an audio encoder configured to process input audio signal to produce as output encoded audio signal; a bias encoder configured to produce as output at least one bias entry corresponding to a word to bias for recognition by the ASR system; a transcription token probability prediction network configured to produce as output a probability of a selected transcription token, based at least in part on the output of the bias encoder and the output of the audio encoder; a first attention mechanism configured to receive the at least one bias entry and determine whether the at least one bias entry is suitable to be transcribed at a specific moment of an ongoing transcription; and a second attention mechanism configured to produce prefix penalties for restricting the first attention mechanism to only entries fitting a current transcription context.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: February 11, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alejandro Coucheiro Limeres, Junho Park
  • Patent number: 12217012
    Abstract: A method classifies feedback from transcripts. The method includes receiving an utterance from a transcript from a communication session and processing the utterance with a classifier model to identify a topic label for the utterance. The classifier model is trained to identify topic labels for training utterances. The topic labels correspond to topics of clusters of the training utterances. The training utterances are selected using attention values for the training utterances and clustered using encoder values for the utterances. The method further includes routing the communication session using the topic label for the utterance.
    Type: Grant
    Filed: July 31, 2023
    Date of Patent: February 4, 2025
    Assignee: Intuit Inc.
    Inventors: Nitzan Gado, Adi Shalev, Talia Tron, Noa Haas, Oren Dar, Rami Cohen
  • Patent number: 12217159
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM) to store parameters of an artificial neural network (ANN). The device can generate random bit errors to simulate compromised or corrupted memory cells in a portion of the RAM accessed during computations of a first ANN output. A second ANN output is generated with the random bit errors applied to the data retrieved from the portion of the RAM. Based on a difference between the first and second ANN outputs, the device may adjust the ANN computation to reduce sensitivity to compromised or corrupted memory cells in the portion of the RAM. For example, the sensitivity reduction may be performed through ANN training using machine learning.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: February 4, 2025
    Assignee: Micron Technology, Inc.
    Inventor: Poorna Kale
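
The sensitivity check in this abstract, computing an ANN output twice, the second time with random bit errors injected into a portion of the stored parameters, and comparing the two outputs, can be sketched as follows. Integer weights and XOR bit flips keep the sketch exact; a real device would flip bits in the RAM holding the ANN parameters.

```python
# Sketch: measure how sensitive a toy ANN output is to simulated corrupted
# memory cells by flipping random bits in its weights.
import random

def ann_output(weights, inputs):
    # Stand-in for a full ANN computation: a single dot product.
    return sum(w * x for w, x in zip(weights, inputs))

def inject_bit_errors(weights, n_flips, rng):
    corrupted = list(weights)
    for _ in range(n_flips):
        idx = rng.randrange(len(corrupted))
        bit = rng.randrange(8)
        corrupted[idx] ^= 1 << bit       # simulate a compromised memory cell
    return corrupted

rng = random.Random(42)
weights = [3, -1, 4, 1, 5]
inputs = [1, 2, 3, 4, 5]
first = ann_output(weights, inputs)                         # first ANN output
second = ann_output(inject_bit_errors(weights, 3, rng), inputs)  # with errors
sensitivity = abs(first - second)
assert first == 42
assert sensitivity >= 0   # a large gap would motivate retraining, per the patent
```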
  • Patent number: 12205576
    Abstract: An electronic apparatus includes a memory storing a speech recognition model and first recognition information corresponding to a first user voice obtained through the speech recognition model, the speech recognition model including a first network, a second network, and a third network; and a processor configured to: obtain a first vector by inputting voice data corresponding to a second user voice to the first network, obtain a second vector by inputting the first recognition information to the second network which generates a vector based on first weight information, and obtain second recognition information corresponding to the second user voice by inputting the first vector and the second vector to the third network which generates recognition information based on second weight information, wherein at least a part of the second weight information is the same as the first weight information.
    Type: Grant
    Filed: October 18, 2022
    Date of Patent: January 21, 2025
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jinhwan Park, Sungsoo Kim, Sichen Jin, Junmo Park, Dhairya Sandhyana, Changwoo Han
  • Patent number: 12190062
    Abstract: Various embodiments of the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like for performing natural language processing operations using a hybrid reason code prediction machine learning framework. Certain embodiments of the present invention utilize systems, methods, and computer program products that perform natural language processing using a hybrid reason code prediction machine learning framework that comprises one or more of the following: (i) a hierarchical transformer machine learning model, (ii) an utterance prediction machine learning model, (iii) an attention distribution generation machine learning model, (iv) an utterance-code pair prediction machine learning model, and (v) a hybrid prediction machine learning model.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: January 7, 2025
    Assignee: Optum, Inc.
    Inventors: Suman Roy, Thomas G. Sullivan, Vijay Varma Malladi, Matthew J. Stewart, Abraham Gebru Tesfay, Gaurav Ranjan
  • Patent number: 12190877
    Abstract: Devices and techniques are generally described for nearest device arbitration. In various examples, a first device may receive first audio data representing a wakeword spoken by a first speaker at a first time. In some examples, a second device may receive second audio data representing the wakeword spoken by the first speaker at the first time. In some cases, the first device may generate first feature data representing the first audio data and the second device may generate second feature data representing the second audio data. In various examples, a machine learning model may use the first feature data and the second feature data to generate first prediction data representing a prediction that the first device is closer to the first speaker than the second device.
    Type: Grant
    Filed: March 2, 2022
    Date of Patent: January 7, 2025
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Jarred Barber, Tao Zhang, Yifeng Fan
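The arbitration idea above can be sketched with toy features and a stand-in logistic model. The feature choices (log energy, peak amplitude), the weights, and the `arbitrate` helper are all illustrative assumptions, not from the patent:

```python
import numpy as np

def wakeword_features(audio):
    """Per-device feature data: log energy and peak amplitude (illustrative choices)."""
    energy = float(np.mean(audio ** 2))
    return np.array([np.log(energy + 1e-9), float(np.max(np.abs(audio)))])

def arbitrate(feat_a, feat_b, w, b):
    """Logistic model standing in for the trained arbitrator: probability that
    device A is closer to the speaker than device B."""
    z = float(w @ (feat_a - feat_b) + b)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
near = 0.5 * rng.standard_normal(16000)   # louder capture: device A is closer
far = 0.05 * rng.standard_normal(16000)   # attenuated capture at device B
w, b = np.array([1.0, 1.0]), 0.0          # toy weights; in practice these are learned
p_a_closer = arbitrate(wakeword_features(near), wakeword_features(far), w, b)
```

Comparing feature differences rather than raw audio keeps the arbitration exchange between devices small, which is one practical motivation for feature-level arbitration.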
  • Patent number: 12190896
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for processing an input audio waveform using a generator neural network to generate an output audio waveform. In one aspect, a method comprises: receiving an input audio waveform; processing the input audio waveform using an encoder neural network to generate a set of feature vectors representing the input audio waveform; and processing the set of feature vectors representing the input audio waveform using a decoder neural network to generate an output audio waveform that comprises a respective output audio sample for each of a plurality of output time steps.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: January 7, 2025
    Assignee: Google LLC
    Inventors: Yunpeng Li, Marco Tagliasacchi, Dominik Roblek, Félix de Chaumont Quitry, Beat Gfeller, Hannah Raphaelle Muckenhirn, Victor Ungureanu, Oleg Rybakov, Karolis Misiunas, Zalán Borsos
  • Patent number: 12190869
    Abstract: A computer-implemented method includes receiving a sequence of acoustic frames as input to an automatic speech recognition (ASR) model. Here, the ASR model includes a causal encoder and a decoder. The method also includes generating, by the causal encoder, a first higher order feature representation for a corresponding acoustic frame in the sequence of acoustic frames. The method also includes generating, by the decoder, a first probability distribution over possible speech recognition hypotheses. Here, the causal encoder includes a stack of causal encoder layers each including a Recurrent Neural Network (RNN) Attention-Performer module that applies linear attention.
    Type: Grant
    Filed: September 29, 2022
    Date of Patent: January 7, 2025
    Assignee: Google LLC
    Inventors: Tara N. Sainath, Rami Botros, Anmol Gulati, Krzysztof Choromanski, Ruoming Pang, Trevor Strohman, Weiran Wang, Jiahui Yu
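The linear attention applied by the causal encoder layers can be illustrated with a generic kernel-feature-map formulation. The `elu(x) + 1` feature map and the running-sum recursion below are a common linear-attention construction, a simplified stand-in rather than the patented RNN Attention-Performer module:

```python
import numpy as np

def feature_map(x):
    """Positive feature map phi(x) = elu(x) + 1, as used in linear-attention variants."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(Q, K, V):
    """Causal attention with O(1) state per step: running sums of phi(k_t) v_t^T and
    phi(k_t) replace the T x T softmax matrix of standard attention."""
    T, d = Q.shape
    Qf, Kf = feature_map(Q), feature_map(K)
    S = np.zeros((d, V.shape[1]))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d)                 # running sum of phi(k_t)
    out = np.zeros_like(V)
    for t in range(T):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = (Qf[t] @ S) / (Qf[t] @ z + 1e-9)
    return out

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((5, 4)) for _ in range(3))
Y = causal_linear_attention(Q, K, V)
```

Because the state is a fixed-size pair (S, z), the per-step cost is independent of sequence length, which is what makes this family of modules attractive for streaming encoders.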
  • Patent number: 12183204
    Abstract: Techniques are discussed for determining prediction probabilities of an object based on a top-down representation of an environment. Data representing objects in an environment can be captured. Aspects of the environment can be represented as map data. A multi-channel image representing a top-down view of object(s) in the environment can be generated based on the data representing the objects and map data. The multi-channel image can be used to train a machine learned model by minimizing an error between predictions from the machine learned model and a captured trajectory associated with the object. Once trained, the machine learned model can be used to generate prediction probabilities of objects in an environment, and the vehicle can be controlled based on such prediction probabilities.
    Type: Grant
    Filed: December 6, 2021
    Date of Patent: December 31, 2024
    Assignee: Zoox, Inc.
    Inventors: Xi Joey Hong, Benjamin John Sapp, James William Vaisey Philbin, Kai Zhenyu Wang
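A minimal sketch of building a multi-channel top-down image from object data follows. The channel layout (occupancy plus velocity components) and grid parameters are assumed examples, not the patent's specific representation:

```python
import numpy as np

def rasterize_topdown(objects, grid=64, channels=3, cell=1.0):
    """Render objects into a multi-channel top-down image: one channel per
    attribute (here: occupancy, velocity-x, velocity-y; illustrative choices)."""
    img = np.zeros((channels, grid, grid), dtype=np.float32)
    for x, y, vx, vy in objects:
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < grid and 0 <= col < grid:
            img[0, row, col] = 1.0   # occupancy
            img[1, row, col] = vx
            img[2, row, col] = vy
    return img

# Two objects at (10, 20) and (40, 40) metres with toy velocities.
img = rasterize_topdown([(10.0, 20.0, 1.5, -0.5), (40.0, 40.0, 0.0, 2.0)])
```

In practice further channels would encode map data (lanes, crosswalks) so a convolutional model can consume environment and objects in one tensor.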
  • Patent number: 12175202
    Abstract: A method includes receiving a sequence of audio features characterizing an utterance and processing, using an encoder neural network, the sequence of audio features to generate a sequence of encodings. At each of a plurality of output steps, the method also includes determining a corresponding hard monotonic attention output to select an encoding from the sequence of encodings, identifying a proper subset of the sequence of encodings based on a position of the selected encoding in the sequence of encodings, and performing soft attention over the proper subset of the sequence of encodings to generate a context vector at the corresponding output step. The method also includes processing, using a decoder neural network, the context vector generated at the corresponding output step to predict a probability distribution over possible output labels at the corresponding output step.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: December 24, 2024
    Assignee: Google LLC
    Inventors: Chung-Cheng Chiu, Colin Abraham Raffel
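The hard-then-soft attention loop of the abstract can be sketched as follows. Dot-product scoring, argmax selection, and a fixed window size are simplifying assumptions; the patented hard monotonic attention is a learned mechanism rather than a plain argmax:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def monotonic_chunk_context(encodings, query, prev_pos, chunk=3):
    """Select an encoding with hard monotonic attention (never moving backwards
    from the previously selected position), then run soft attention over the
    proper subset of encodings ending at that position."""
    scores = encodings @ query
    pos = prev_pos + int(np.argmax(scores[prev_pos:]))     # hard, monotonic selection
    window = encodings[max(0, pos - chunk + 1): pos + 1]   # proper subset ending at pos
    weights = softmax(window @ query)                       # soft attention over window
    return weights @ window, pos

rng = np.random.default_rng(2)
enc = rng.standard_normal((8, 4))       # sequence of encodings
query = rng.standard_normal(4)
context, pos = monotonic_chunk_context(enc, query, prev_pos=0)
```

Restricting soft attention to a bounded window behind the monotonic head is what allows this style of model to run in streaming fashion with constant latency per output step.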
  • Patent number: 12169779
    Abstract: The present disclosure provides systems and methods that enable parameter-efficient transfer learning, multi-task learning, and/or other forms of model re-purposing such as model personalization or domain adaptation. In particular, as one example, a computing system can obtain a machine-learned model that has been previously trained on a first training dataset to perform a first task. The machine-learned model can include a first set of learnable parameters. The computing system can modify the machine-learned model to include a model patch, where the model patch includes a second set of learnable parameters. The computing system can train the machine-learned model on a second training dataset to perform a second task that is different from the first task, which may include learning new values for the second set of learnable parameters included in the model patch while keeping at least some (e.g., all) of the first set of parameters fixed.
    Type: Grant
    Filed: May 2, 2023
    Date of Patent: December 17, 2024
    Assignee: GOOGLE LLC
    Inventors: Mark Sandler, Andrew Gerald Howard, Andrey Zhmoginov, Pramod Kaushik Mudrakarta
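The model-patch idea, training a small second parameter set while the first set stays frozen, can be sketched on a linear toy model. The per-output scale-and-bias patch is an illustrative choice, not the patent's specific patch structure:

```python
import numpy as np

# Base model previously trained on a first task: y = W x. Its first set of
# learnable parameters, W, stays frozen. The "model patch" is a small second
# set of parameters, a per-output scale s and bias b, trained alone on task 2.
W = np.array([[1.0, -0.5, 0.3],
              [0.2, 0.8, -1.0]])          # frozen base parameters
s = np.ones(2)                            # model patch (trainable)
b = np.zeros(2)

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 3))
Y = 2.0 * (X @ W.T) + 1.0                 # second-task targets

H = X @ W.T                               # frozen base activations, computed once
for _ in range(500):                      # gradient descent on the patch only
    err = s * H + b - Y
    s -= 0.05 * np.mean(err * H, axis=0)
    b -= 0.05 * np.mean(err, axis=0)
```

The patch converges to the task-2 solution (scale 2, bias 1) while W never changes, so one frozen base can serve many tasks, each with its own tiny patch.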
  • Patent number: 12165633
    Abstract: The present disclosure relates to Communicational and Conversational Artificial Intelligence, Machine Perception, Perceptual-User-Interface, and a professional training method. A chatbot may comprise at least one skills module. The chatbot engages with trainee(s) on communicational training on a subject matter provided by the skills module. A trainer may create, remove, or update a skills module with interaction skills and training materials through an onboarding module. A trainee can upload recorded interactions to a skills module for evaluation or for role playing an interaction without a trainer or partner. An administrator may monitor a trainee's performance, and correlate it with the organization's metrics. Based on the evaluation, the trainer or chatbot may provide the trainee with feedback and recommended improvement plans. The chatbot may be implemented in an Internet-of-Things or any device. The subject matter may extend to different industries and markets.
    Type: Grant
    Filed: May 10, 2022
    Date of Patent: December 10, 2024
    Assignee: AskWisy, Inc.
    Inventors: Patrick Pak Tak Leong, Kwok-Cheung Ellis Hung
  • Patent number: 12159619
    Abstract: According to an embodiment, an electronic device comprises: a memory and at least one processor operatively connected with the memory. The at least one processor is configured to: in response to a voice assistant application being executed, identify a pronunciation variant for which an amount of sound source data stored in the memory is less than a specified value among a plurality of pronunciation variants, identify a subject based on the identified pronunciation variant, obtain a question text corresponding to a word including the identified pronunciation variant among a plurality of words included in the subject, output a question speech corresponding to the question text, and receive an utterance after outputting the question speech.
    Type: Grant
    Filed: March 15, 2022
    Date of Patent: December 3, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Cheol Ryu, Kwanghoon Kim, Junesig Sung
  • Patent number: 12159111
    Abstract: A system and method for providing a voice assistant service for text including an anaphor are provided. A method, performed by an electronic device, of providing a voice assistant service includes: obtaining first text generated from a first input, detecting a target word within the first text and generating common information related to the detected target word, using a first natural language understanding (NLU) model, obtaining second text generated from a second input, inputting the common information and the second text to a second NLU model, detecting an anaphor included in the second text and outputting an intent and a parameter, based on common information corresponding to the detected anaphor, using the second NLU model, and generating response information related to the intent and the parameter.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: December 3, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yeonho Lee, Munjo Kim, Sangwook Park, Youngbin Shin, Kookjin Yeo
  • Patent number: 12153423
    Abstract: A system for transportation includes a vehicle interface for gathering hormonal state data of a rider in the vehicle. The system further includes an artificial intelligence-based circuit that is trained on a set of outcomes related to rider in-vehicle experience and that induces, responsive to the sensed rider hormonal state data, variation in one or more of the user experience parameters to achieve at least one desired outcome in the set of outcomes. The set of outcomes includes at least one outcome that promotes rider safety. The inducing variation includes control of timing and extent of the variation.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: November 26, 2024
    Assignee: Strong Force TP Portfolio 2022, LLC
    Inventor: Charles Howard Cella
  • Patent number: 12154546
    Abstract: A method and system for acoustic model conditioning on non-phoneme information features for optimized automatic speech recognition is provided. The method includes using an encoder model to encode sound embedding from a known key phrase of speech and conditioning an acoustic model with the sound embedding to optimize its performance in inferring the probabilities of phonemes in the speech. The sound embedding can comprise non-phoneme information related to the key phrase and the following utterance. Further, the encoder model and the acoustic model can be neural networks that are jointly trained with audio data.
    Type: Grant
    Filed: July 6, 2023
    Date of Patent: November 26, 2024
    Assignee: SoundHound AI IP, LLC
    Inventors: Zizu Gowayyed, Keyvan Mohajer
  • Patent number: 12142258
    Abstract: Text corresponding to speech is labeled without first dividing the speech into units such as words or characters. A speech distributed representation sequence converting unit 11 converts an acoustic feature sequence into a speech distributed representation. A symbol distributed representation converting unit 12 converts each symbol in the symbol sequence corresponding to the acoustic feature sequence into a symbol distributed representation. A label estimation unit 13 estimates the label corresponding to each symbol from a fixed-length vector of the symbol, generated using the speech distributed representation, the symbol distributed representation, and the fixed-length vectors of the previous and next symbols.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: November 12, 2024
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Tomohiro Tanaka, Ryo Masumura, Takanobu Oba
  • Patent number: 12125472
    Abstract: Methods, apparatus, and systems are disclosed to segment audio and determine audio segment similarities. An example apparatus includes at least one memory storing instructions and processor circuitry to execute instructions to at least select an anchor index beat of digital audio, identify a first segment of the digital audio based on the anchor index beat to analyze, the first segment having at least two beats and a respective center beat, concatenate time-frequency data of the at least two beats and the respective center beat to form a matrix of the first segment, generate a first deep feature based on the first segment, the first deep feature indicative of a descriptor of the digital audio, and train internal coefficients to classify the first deep feature as similar to a second deep feature based on the descriptor of the first deep feature and a descriptor of a second deep feature.
    Type: Grant
    Filed: April 10, 2023
    Date of Patent: October 22, 2024
    Assignee: Gracenote, Inc.
    Inventor: Matthew McCallum
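Concatenating the time-frequency data of a center beat and its neighbours into a single segment matrix might look like the sketch below, using synthetic spectrogram slices; the one-neighbour-each-side window is an assumption:

```python
import numpy as np

def segment_matrix(tf_data, anchor, half_width=1):
    """Concatenate time-frequency data of the beats around an anchor index beat
    (the segment's center beat plus its neighbours) into one matrix."""
    beats = tf_data[anchor - half_width: anchor + half_width + 1]  # >= 2 beats + center
    return np.concatenate(beats, axis=1)   # stack along the time axis

# tf_data: one (n_bins x frames) spectrogram slice per detected beat (synthetic here).
rng = np.random.default_rng(4)
tf_data = [rng.random((12, 8)) for _ in range(10)]
m = segment_matrix(tf_data, anchor=5)
```

Beat-synchronous matrices like this give a tempo-invariant input, so the deep features learned from them describe musical content rather than absolute timing.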
  • Patent number: 12106749
    Abstract: A method for performing automatic speech recognition (ASR) using sequence-to-sequence models includes receiving audio data for an utterance and providing features indicative of acoustic characteristics of the utterance as input to an encoder. The method also includes processing an output of the encoder using an attender to generate a context vector, generating speech recognition scores using the context vector and a decoder trained using a training process, and generating a transcription for the utterance using word elements selected based on the speech recognition scores. The transcription is provided as an output of the ASR system.
    Type: Grant
    Filed: September 20, 2021
    Date of Patent: October 1, 2024
    Assignee: Google LLC
    Inventors: Rohit Prakash Prabhavalkar, Zhifeng Chen, Bo Li, Chung-cheng Chiu, Kanury Kanishka Rao, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Michiel A. U. Bacchiani, Tara N. Sainath, Jan Kazimierz Chorowski, Anjuli Patricia Kannan, Ekaterina Gonina, Patrick An Phu Nguyen
  • Patent number: 12087306
    Abstract: In one embodiment, a method includes receiving a user's utterance comprising a word in a custom vocabulary list of the user, generating a previous token to represent a previous audio portion of the utterance, and generating a current token to represent a current audio portion of the utterance by generating a bias embedding by using the previous token to query a trie of wordpieces representing the custom vocabulary list, generating first probabilities of respective first candidate tokens likely uttered in the current audio portion based on the bias embedding and the current audio portion, generating second probabilities of respective second candidate tokens likely uttered after the previous token based on the previous token and the bias embedding, and generating the current token to represent the current audio portion of the utterance based on the first probabilities of the first candidate tokens and the second probabilities of the second candidate tokens.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: September 10, 2024
    Assignee: Meta Platforms, Inc.
    Inventors: Duc Hoang Le, FNU Mahaveer, Gil Keren, Christian Fuegen, Yatharth Saraf
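Querying a trie of wordpieces with the previous token to decide which candidates to boost can be sketched as below. The wordpiece splits are hypothetical, and the depth-one query is a simplification of the patented bias-embedding mechanism:

```python
def build_trie(vocab_wordpieces):
    """Trie over wordpiece sequences from the user's custom vocabulary list."""
    root = {}
    for pieces in vocab_wordpieces:
        node = root
        for p in pieces:
            node = node.setdefault(p, {})
    return root

def bias_candidates(trie, prev_token):
    """Query the trie with the previous token: wordpieces that can follow it in
    some custom-vocabulary word get boosted (a stand-in for the bias embedding)."""
    return set(trie.get(prev_token, {}))

# Custom vocabulary "Khaleesi" and "Daenerys" as wordpiece sequences (hypothetical split).
trie = build_trie([["Kha", "lee", "si"], ["Dae", "ner", "ys"]])
boost = bias_candidates(trie, "Kha")
```

Biasing only the continuations the trie permits keeps rare personal vocabulary recognizable without retraining the underlying model on each user's word list.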
  • Patent number: 12086704
    Abstract: Representative embodiments disclose machine learning classifiers used in scenarios such as speech recognition, image captioning, machine translation, or other sequence-to-sequence embodiments. The machine learning classifiers have a plurality of time layers, each layer having a time processing block and a depth processing block. The time processing block is a recurrent neural network such as a Long Short Term Memory (LSTM) network. The depth processing blocks can be an LSTM network, a gated Deep Neural Network (DNN) or a maxout DNN. The depth processing blocks account for the hidden states of each time layer and uses summarized layer information for final input signal feature classification. An attention layer can also be used between the top depth processing block and the output layer.
    Type: Grant
    Filed: November 3, 2021
    Date of Patent: September 10, 2024
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong
  • Patent number: 12087307
    Abstract: An apparatus for processing speech data may include a processor configured to: separate an input speech into speech signals; identify a bandwidth of each of the speech signals; extract speaker embeddings from the speech signals based on the bandwidth of each of the speech signals, using at least one neural network configured to receive the speech signals and output the speaker embeddings; and cluster the speaker embeddings into one or more speaker clusters, each speaker cluster corresponding to a speaker identity.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: September 10, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Myungjong Kim, Vijendra Raj Apsingekar, Aviral Anshu, Taeyeon Ki
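Identifying each speech signal's bandwidth before extracting speaker embeddings can be approximated by measuring spectral energy above the narrowband cutoff. The cutoff, threshold, and FFT-ratio test below are illustrative assumptions, not the patent's method:

```python
import numpy as np

def is_wideband(signal, sr=16000, cutoff=4000.0, ratio_threshold=0.01):
    """Classify a signal's bandwidth by the share of spectral energy above the
    narrowband cutoff; upsampled telephone (8 kHz) speech has almost none there."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    high = spec[freqs >= cutoff].sum()
    return high / (spec.sum() + 1e-12) > ratio_threshold

t = np.arange(16000) / 16000.0
narrow = np.sin(2 * np.pi * 1000 * t)                 # energy only at 1 kHz
wide = narrow + 0.5 * np.sin(2 * np.pi * 6000 * t)    # adds a 6 kHz component
```

Routing narrowband and wideband signals to bandwidth-matched embedding extractors avoids clustering errors caused purely by channel differences rather than speaker identity.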
  • Patent number: 12079913
    Abstract: This specification relates to the generation of animation data using recurrent neural networks. According to a first aspect of this specification, there is described a computer implemented method comprising: sampling an initial hidden state of a recurrent neural network (RNN) from a distribution; generating, using the RNN, a sequence of frames of animation from the initial state of the RNN and an initial set of animation data comprising a known initial frame of animation, the generating comprising, for each generated frame of animation in the sequence of frames of animation: inputting, into the RNN, a respective set of animation data comprising the previous frame of animation data in the sequence of frames of animation; generating, using the RNN and based on a current hidden state of the RNN, the frame of animation data; and updating the hidden state of the RNN based on the input respective set of animation data.
    Type: Grant
    Filed: March 31, 2022
    Date of Patent: September 3, 2024
    Assignee: ELECTRONIC ARTS INC.
    Inventor: Elaheh Akhoundi
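The per-frame generation loop of the abstract (input the previous frame, update the hidden state, emit the next frame) can be sketched with a plain tanh RNN; sizes and weights are toy values standing in for a trained network:

```python
import numpy as np

def generate_frames(h0, frame0, Wh, Wx, Wo, n_frames):
    """Autoregressive rollout: each step feeds the previous animation frame into
    the RNN, updates the hidden state, and emits the next frame from that state."""
    h, prev = h0, frame0
    frames = []
    for _ in range(n_frames):
        h = np.tanh(Wh @ h + Wx @ prev)   # update hidden state with previous frame
        prev = np.tanh(Wo @ h)            # generate next frame from current state
        frames.append(prev)
    return np.stack(frames)

rng = np.random.default_rng(5)
h0 = rng.standard_normal(8)               # initial hidden state sampled from a distribution
frame0 = rng.standard_normal(4)           # known initial frame of animation (toy pose vector)
Wh, Wx, Wo = (0.5 * rng.standard_normal(shape)
              for shape in [(8, 8), (8, 4), (4, 8)])
frames = generate_frames(h0, frame0, Wh, Wx, Wo, n_frames=6)
```

Sampling h0 from a distribution rather than zeroing it is what lets a single trained network produce varied animation sequences from the same initial frame.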