Patents Examined by Brian L. Albertalli
  • Patent number: 11907666
    Abstract: Various embodiments of a system and associated method for anonymizing text without losing its semantic utility are disclosed herein. The system extracts a latent embedding representation of content with respect to a given task and learns an optimal strategy for manipulating the text embedding to satisfy both privacy and utility requirements. In particular, the system balances private attribute obfuscation with retained semantic utility.
    Type: Grant
    Filed: November 16, 2021
    Date of Patent: February 20, 2024
    Assignee: Arizona Board of Regents on Behalf of Arizona State University
    Inventors: Ahmadreza Mosallanezhad, Ghazaleh Beigi, Huan Liu
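The embedding-manipulation idea above can be illustrated with a toy sketch. Everything here is hypothetical: the `embed` function is a deterministic stand-in for the patent's learned task-specific encoder, and `anonymize` perturbs only the dimensions assumed to carry the private attribute.

```python
import random

def embed(text, dim=8):
    # Toy stand-in for a learned latent embedding: derives deterministic
    # pseudo-features from the text (the patent assumes a trained encoder).
    rnd = random.Random(sum(ord(c) for c in text))
    return [rnd.uniform(-1, 1) for _ in range(dim)]

def anonymize(vec, private_dims, noise=0.5, seed=1):
    # Perturb only the dimensions correlated with the private attribute,
    # leaving the task-relevant (utility) dimensions untouched.
    rnd = random.Random(seed)
    return [v + rnd.gauss(0, noise) if i in private_dims else v
            for i, v in enumerate(vec)]

vec = embed("patient visited the clinic")
vec_anon = anonymize(vec, private_dims={0, 1})
```

A real system would learn which dimensions to perturb (and by how much) from privacy and utility objectives rather than fixing them by hand.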
  • Patent number: 11900937
    Abstract: Example techniques involve suppressing a wake word response to a local wake word. An example implementation involves a playback device receiving audio content for playback by the playback device and providing a sound data stream representing the received audio content to a voice assistant service (VAS) wake-word engine and a local keyword engine. The playback device plays back a first portion of the audio content and detects, via the local keyword engine, that a second portion of the received audio content includes sound data matching one or more particular local keywords. Before the second portion of the received audio content is played back, the playback device disables a local keyword response of the local keyword engine to the one or more particular local keywords and then plays back the second portion of the audio content via one or more speakers.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: February 13, 2024
    Assignee: Sonos, Inc.
    Inventor: Jonathan P. Lang
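The suppression flow described in the abstract can be sketched as a device that inspects each content chunk before it is audible and disarms its keyword response for the duration of that chunk. All class and method names below are illustrative, not Sonos's actual API.

```python
class PlaybackDevice:
    """Sketch of content-side wake-word suppression (names hypothetical)."""
    def __init__(self, keywords):
        self.keywords = set(keywords)
        self.response_enabled = True
        self.actions = []

    def keywords_in(self, audio_text):
        return [k for k in self.keywords if k in audio_text]

    def on_detect(self, audio_text):
        # Invoked by the local keyword engine on any detection.
        if self.response_enabled:
            self.actions.extend(self.keywords_in(audio_text))

    def play(self, chunks):
        for chunk in chunks:
            # Inspect the upcoming chunk *before* it is audible: a keyword
            # embedded in the content must not trigger the device.
            if self.keywords_in(chunk):
                self.response_enabled = False
            self.on_detect(chunk)           # engine "hears" the playback
            self.response_enabled = True    # re-arm for live speech

device = PlaybackDevice({"hey sonos"})
device.play(["some music", "an ad saying hey sonos buy now"])
```

A keyword inside played-back content is ignored, while the same keyword spoken live still triggers a response.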
  • Patent number: 11902766
    Abstract: An illustrative collaboration space provider system provides a virtual collaboration session that allows for audio communication between a user and one or more other users virtually located within a virtual collaboration space. The user is represented by an avatar located at an avatar location within the virtual collaboration space. The collaboration space provider system receives user input from the user, the user input representative of a voice origination location that is within the virtual collaboration space and is distinct from the avatar location. During the virtual collaboration session, the collaboration space provider system simulates propagation within the virtual collaboration space of a voice communication spoken by the user. The propagation of the voice communication is simulated to originate from the voice origination location and not from the avatar location. Corresponding methods and systems are also disclosed.
    Type: Grant
    Filed: July 30, 2021
    Date of Patent: February 13, 2024
    Assignee: Verizon Patent and Licensing Inc.
    Inventors: Samuel Charles Mindlin, Kunal Jathal, Shan Anis, David Skuratowicz
  • Patent number: 11893309
    Abstract: In response to a user interacting with a tangible peripheral assistant control device (e.g., depressing a button of the device), causing an automated assistant to perform one or more actions. The action(s) performed can be based on input previously provided by the user in configuring the peripheral assistant control device. The action(s) performed in response to interaction with the peripheral assistant control device can vary based on one or more conditions, such as which user is currently active, where the peripheral assistant control device is currently located (which can optionally be inferred based on which of multiple assistant computing devices the button is paired with), and/or the current state of one or more smart devices and/or other devices (e.g., as determined based on a device topology). A utility of the peripheral assistant control device can be automatically extended beyond what was specifically requested by a user during configuration.
    Type: Grant
    Filed: March 10, 2022
    Date of Patent: February 6, 2024
    Assignee: GOOGLE LLC
    Inventors: Tomer Amarilio, Yuzhao Ni, Bryan Allen, Norbert Tydingco, Will Donnelly, Feng Yuan, Nathaniel Nesiba, Anurag Jain, Jacky Cheung, Ronghui Zhu, Chunya Hua, Gregory Kielian
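The condition-dependent dispatch described above (same button, different action depending on user, pairing location, and device state) can be sketched as a small resolver. The binding keys and device-state names are invented for illustration.

```python
def button_action(bindings, active_user, paired_room, device_states):
    """Resolve one button press from context: who pressed it, which room
    the button is paired to, and current smart-device state (all names
    here are illustrative)."""
    action = (bindings.get((active_user, paired_room))
              or bindings.get((None, paired_room)))   # per-user, then default
    if action == "toggle_lights":
        lit = device_states.get(paired_room + ".lights") == "on"
        return "lights_off" if lit else "lights_on"
    return action

bindings = {("alice", "kitchen"): "toggle_lights", (None, "den"): "play_news"}
```

The same press yields `lights_off` or `lights_on` depending on the current device topology state, and falls back to a room-level default for unconfigured users.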
  • Patent number: 11887612
    Abstract: Disclosed is an LPC residual signal encoding/decoding apparatus of an MDCT based unified voice and audio encoding device. The LPC residual signal encoding apparatus analyzes a property of an input signal, selects an encoding method of an LPC filtered signal, and encodes the LPC residual signal based on one of a real filterbank, a complex filterbank, and algebraic code excited linear prediction (ACELP).
    Type: Grant
    Filed: August 25, 2022
    Date of Patent: January 30, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Seung Kwon Beack, Tae Jin Lee, Min Je Kim, Kyeongok Kang, Dae Young Jang, Jin Woo Hong, Jeongil Seo, Chieteuk Ahn, Hochong Park, Young-Cheol Park
  • Patent number: 11886820
    Abstract: A method and system are provided for training a machine-learning (ML) system/module and to provide an ML model. In one embodiment, a method includes using a labeled entities set to train a machine learning (ML) system, to obtain an ML model, and using the trained ML model to predict labels for entities in an unlabeled entities set, yielding a machine-labeled entities set. One or more individual ML models may be trained and used in this way, where each individual ML model corresponds to a respective document source. The document sources can be identified via classification of a corpus of documents. The prediction of labels provides a respective confidence score for each machine-labeled entity. The method also includes selecting from the machine-labeled entities set, a subset of machine-labeled entities having a respective confidence score at least equal to a threshold confidence score; and updating the labeled entities set by adding thereto the selected subset of machine-labeled entities.
    Type: Grant
    Filed: October 6, 2020
    Date of Patent: January 30, 2024
    Assignee: Genpact Luxembourg S.à r.l. II
    Inventors: Sreekanth Menon, Prakash Selvakumar, Sudheesh Sudevan
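The train-predict-promote loop in this abstract is a standard self-training pattern and can be sketched directly. The toy nearest-neighbour `train_fn` stands in for the patent's trained ML model; only the loop structure reflects the claimed method.

```python
def self_train(train_fn, labeled, unlabeled, threshold=0.9, rounds=3):
    """Confidence-thresholded self-training: train, predict labels with
    confidence scores, promote confident machine labels, repeat."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = train_fn(list(labeled))     # snapshot of current label set
        keep = []
        for x in pool:
            label, conf = model(x)
            if conf >= threshold:
                labeled.append((x, label))  # promote machine-labeled entity
            else:
                keep.append(x)
        pool = keep
        if not pool:
            break
    return labeled, pool

def train_fn(examples):
    # Toy 1-D nearest-neighbour "model"; confidence decays with distance.
    def model(x):
        xl, yl = min(examples, key=lambda p: abs(p[0] - x))
        return yl, 1.0 / (1.0 + abs(xl - x))
    return model

out, rest = self_train(train_fn, [(0, "low"), (10, "high")], [1, 9, 5],
                       threshold=0.5)
```

Points close to existing labels are absorbed into the labeled set; the ambiguous midpoint stays unlabeled rather than polluting later training rounds.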
  • Patent number: 11881225
    Abstract: An audio encoder for encoding a multichannel signal is shown. The audio encoder includes a downmixer for downmixing the multichannel signal to obtain a downmix signal, a linear prediction domain core encoder for encoding the downmix signal, wherein the downmix signal has a low band and a high band, wherein the linear prediction domain core encoder is configured to apply a bandwidth extension processing for parametrically encoding the high band, a filterbank for generating a spectral representation of the multichannel signal, and a joint multichannel encoder configured to process the spectral representation including the low band and the high band of the multichannel signal to generate multichannel information.
    Type: Grant
    Filed: January 13, 2022
    Date of Patent: January 23, 2024
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Disch, Guillaume Fuchs, Emmanuel Ravelli, Christian Neukam, Konstantin Schmidt, Conrad Benndorf, Andreas Niedermeier, Benjamin Schubert, Ralf Geiger
  • Patent number: 11881218
    Abstract: Prevention of voice misappropriation in voice interaction/response systems. The system relies on telemetry data, including thermal data of components, to determine whether a received voice command was made by an actual voice. If the voice command is determined to have been made by an actual voice, a response to the command is generated and transmitted; otherwise, if the voice command was likely not made by an actual voice (e.g., artificial means replicating a voice, such as a laser or the like), no response is transmitted and no action is taken with respect to the command.
    Type: Grant
    Filed: July 12, 2021
    Date of Patent: January 23, 2024
    Assignee: BANK OF AMERICA CORPORATION
    Inventor: Steven Mark DiMaria
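The telemetry-based liveness gate can be sketched as a simple check before command handling. The thermal threshold and telemetry shape below are invented for illustration; the patent's actual decision logic is not specified in the abstract.

```python
def is_live_voice(telemetry, baseline, max_delta=5.0):
    """Sketch: a sudden component-temperature spike (e.g. a laser exciting
    the microphone) marks the command as not made by an actual voice.
    The 5-degree threshold is purely illustrative."""
    return all(temp - baseline.get(part, temp) <= max_delta
               for part, temp in telemetry.items())

def handle_command(command, telemetry, baseline):
    if not is_live_voice(telemetry, baseline):
        return None                # no response transmitted, no action taken
    return "executing: " + command
```

A command arriving with normal thermals is executed; one coinciding with an abnormal microphone temperature spike is silently dropped.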
  • Patent number: 11868724
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating author vectors. One of the methods includes obtaining a set of sequences of words, the set of sequences of words comprising a plurality of first sequences of words and, for each first sequence of words, a respective second sequence of words that follows the first sequence of words, wherein each first sequence of words and each second sequence of words has been classified as being authored by a first author; and training a neural network system on the first sequences and the second sequences to determine an author vector for the first author, wherein the author vector characterizes the first author.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: January 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Quoc V. Le, Brian Patrick Strope
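The data flow (paired first/second sequences per author, reduced to a fixed-length author vector) can be shown with a crude stand-in. The patent trains a neural network to produce the vector; the word-distribution below only mimics the input/output shape, not the training.

```python
from collections import Counter

def author_vector(sequence_pairs, vocab):
    """Crude stand-in for the trained neural embedding: characterize an
    author by the normalized word distribution over their paired first and
    second word sequences."""
    counts = Counter()
    for first, second in sequence_pairs:
        counts.update(first.split())
        counts.update(second.split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]

pairs = [("the cat sat", "on the mat")]
vec = author_vector(pairs, ["the", "cat", "dog"])
```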
  • Patent number: 11853706
    Abstract: Sentiment analysis is a task in natural language processing. The embodiments are directed to using a generative language model to extract an aspect term, aspect category and their corresponding polarities. The generative language model may be trained as a single, joint, and multi-task model. The single-task generative language model determines a term polarity from the aspect term in the sentence or a category polarity from an aspect category in the sentence. The joint-task generative language model determines both the aspect term and the term polarity or the aspect category and the category polarity. The multi-task generative language model determines the aspect term, term polarity, aspect category and category polarity of the sentence.
    Type: Grant
    Filed: September 8, 2021
    Date of Patent: December 26, 2023
    Assignee: salesforce.com, inc.
    Inventors: Ehsan Hosseini-Asl, Wenhao Liu
  • Patent number: 11848015
    Abstract: The invention is directed towards an audio scrubbing system that allows for scrubbing recognized voice commands from audio data and replacing the recognized voice commands with environment audio data. Specifically, as a user captures video and audio data via an HMD, audio data captured by the HMD may be processed by an audio scrubbing module to identify voice commands in the audio data that are used for controlling the HMD. When a voice command is identified in the audio data, timestamps corresponding to the voice command may be determined. Filler audio data may then be generated to imitate the environment by processing at least a portion of the audio data by a neural network of a machine learning model. The filler audio data may then be used to replace the audio data corresponding to the identified voice commands, thereby scrubbing the voice command from the audio data.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: December 19, 2023
    Assignee: RealWear, Inc.
    Inventor: Christopher Iain Parkinson
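The timestamp-and-replace mechanism can be sketched over a sample buffer. The averaging `ambient_filler` is a deliberately crude stand-in for the patent's neural filler-generation model; command spans are given as sample indices.

```python
def ambient_filler(audio, start, end, context=4):
    # Toy "environment imitation": hold the average of surrounding samples
    # (the patent generates filler with a trained neural network).
    around = audio[max(0, start - context):start] + audio[end:end + context]
    level = sum(around) / len(around) if around else 0
    return [level] * (end - start)

def scrub(audio, command_spans, filler_fn=ambient_filler):
    """Replace each recognized-command span (timestamps given as sample
    index pairs) with generated filler audio."""
    out = list(audio)
    for start, end in command_spans:
        out[start:end] = filler_fn(audio, start, end)
    return out
```

The command samples vanish into the surrounding ambience while the rest of the recording is untouched.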
  • Patent number: 11842748
    Abstract: Methods, systems, and apparatuses for audio event detection, where the determination of a type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
    Type: Grant
    Filed: December 14, 2020
    Date of Patent: December 12, 2023
    Assignee: Pindrop Security, Inc.
    Inventors: Elie Khoury, Matthew Garland
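The key idea of the abstract, deciding the sound type per cluster rather than per frame, can be shown with a 1-D stand-in. Real systems cluster i-vectors or GMM statistics; the two-means clustering over scalar frame energies below only demonstrates why a cluster-level decision is robust to noisy individual frames.

```python
def two_means(frames, iters=10):
    """1-D two-means over per-frame energies (a stand-in for the real
    feature space and hierarchical clustering)."""
    lo, hi = min(frames), max(frames)
    a, b = [], []
    for _ in range(iters):
        a = [f for f in frames if abs(f - lo) <= abs(f - hi)]
        b = [f for f in frames if abs(f - lo) > abs(f - hi)]
        lo = sum(a) / len(a) if a else lo
        hi = sum(b) / len(b) if b else hi
    return a, b

def classify_cluster(cluster, speech_threshold=0.5):
    # One decision per cluster: a single outlier frame cannot flip it.
    mean = sum(cluster) / len(cluster)
    return "speech" if mean > speech_threshold else "non-speech"

quiet, loud = two_means([0.1, 0.2, 0.15, 0.9, 0.8, 0.05, 0.95])
```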
  • Patent number: 11829724
    Abstract: Support for natural language expressions is provided by the use of semantic grammars that describe the structure of expressions in that grammar and that construct the meaning of a corresponding natural language expression. A semantic grammar extension mechanism is provided, which allows one semantic grammar to be used in the place of another semantic grammar. This enriches the expressivity of semantic grammars in a simple, natural, and decoupled manner.
    Type: Grant
    Filed: July 16, 2021
    Date of Patent: November 28, 2023
    Assignee: SOUNDHOUND AI IP, LLC
    Inventors: Bernard Mont-Reynaud, Christopher S. Wilson, Keyvan Mohajer
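The extension mechanism, one semantic grammar usable in place of another, can be sketched minimally. The grammar names and meaning structures below are invented; a real semantic grammar would build meanings compositionally rather than via lookup lambdas.

```python
class SemanticGrammar:
    """Minimal sketch: a grammar parses a phrase into a meaning, and
    `extends` lets it be accepted wherever the extended grammar is
    expected."""
    def __init__(self, name, parse, extends=None):
        self.name, self.parse, self.extends = name, parse, extends

def interpret(phrase, expected, grammars):
    for g in grammars:
        # A grammar matches if it *is* the expected grammar, or extends it.
        if g.name == expected or g.extends == expected:
            meaning = g.parse(phrase)
            if meaning is not None:
                return meaning
    return None

date = SemanticGrammar("DATE",
                       lambda p: {"day": "+1"} if p == "tomorrow" else None)
holiday = SemanticGrammar("HOLIDAY",
                          lambda p: {"day": "12-25"} if p == "christmas" else None,
                          extends="DATE")
```

"christmas" is interpreted in a DATE slot even though the DATE grammar itself knows nothing about holidays, which is the decoupling the abstract describes.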
  • Patent number: 11798568
    Abstract: Conventional audio compression technologies perform a standardized signal transformation, independent of the type of the content. Multi-channel signals are decomposed into their signal components, subsequently quantized and encoded. This is disadvantageous due to lack of knowledge on the characteristics of scene composition, especially for e.g. multi-channel audio or Higher-Order Ambisonics (HOA) content. A method for decoding an encoded bitstream of multi-channel audio data and associated metadata is provided, including transforming the first Ambisonics format of the multi-channel audio data to a second Ambisonics format representation of the multi-channel audio data, wherein the transforming maps the first Ambisonics format of the multi-channel audio data into the second Ambisonics format representation of the multi-channel audio data.
    Type: Grant
    Filed: August 2, 2021
    Date of Patent: October 24, 2023
    Assignee: DOLBY LABORATORIES LICENSING CORPORATION
    Inventors: Oliver Wuebbolt, Johannes Boehm, Peter Jax
  • Patent number: 11790925
    Abstract: The present technology relates to an information processing device and method, and a program capable of reducing a code amount. The information processing device includes: an acquisition unit that acquires space information regarding a position and a size of a child space within a parent space and position information in the child space indicating a position of an object within the child space, the child space being included in the parent space, and the object being included in the child space; and a calculation unit that calculates position information in the parent space indicating a position of the object within the parent space on the basis of the space information and the position information in the child space. The present technology can be applied to a signal processing device.
    Type: Grant
    Filed: June 20, 2019
    Date of Patent: October 17, 2023
    Assignee: Sony Corporation
    Inventors: Mitsuyuki Hatanaka, Toru Chinen, Minoru Tsuji, Hiroyuki Honma, Yuki Yamamoto
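The parent/child position calculation is a plain affine transform and can be shown directly. The dictionary layout and normalized-coordinate convention are assumptions for illustration; the code-amount saving comes from sending one shared space descriptor plus compact per-object positions relative to the small child space.

```python
def to_parent(child_space, pos_in_child):
    """Recover an object's parent-space position from the child space's
    origin and size plus the object's position relative to the child
    space (child coordinates normalized to [0, 1] here)."""
    return tuple(origin + size * c
                 for origin, size, c in zip(child_space["origin"],
                                            child_space["size"],
                                            pos_in_child))

room = {"origin": (10.0, 0.0, 5.0), "size": (2.0, 2.0, 2.0)}
```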
  • Patent number: 11790937
    Abstract: Systems and methods for optimizing voice detection via a network microphone device (NMD) are disclosed herein. In one example, individual microphones of a network microphone device detect sound. The sound data is captured in a first buffer and analyzed to detect a trigger event. Metadata associated with the sound data is captured in a second buffer and provided to at least one network device to determine at least one characteristic of the detected sound based on the metadata. The network device provides a response that includes an instruction, based on the determined characteristic, to modify at least one performance parameter of the NMD. The NMD then modifies the at least one performance parameter based on the instruction.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: October 17, 2023
    Assignee: Sonos, Inc.
    Inventors: Connor Kristopher Smith, Kurt Thomas Soto, Charles Conor Sleith
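The two-buffer pattern, raw sound in one buffer and derived metadata in another, with a network service returning a tuning instruction, can be sketched as follows. Buffer contents, metadata fields, and the noise-gate parameter are all invented for illustration.

```python
class NMD:
    """Toy network microphone device with separate sound/metadata buffers."""
    def __init__(self):
        self.sound_buffer = []
        self.metadata_buffer = []
        self.params = {"noise_gate_db": -60}

    def capture(self, chunk_db):
        self.sound_buffer.append(chunk_db)
        # Only derived metadata (here: peak level) is shared upstream.
        self.metadata_buffer.append({"level_db": max(chunk_db)})

    def apply(self, instruction):
        self.params.update(instruction)

def network_service(metadata):
    # Characterize the environment from metadata and respond with tuning.
    avg = sum(m["level_db"] for m in metadata) / len(metadata)
    return {"noise_gate_db": -40 if avg > -30 else -60}

nmd = NMD()
nmd.capture([-20, -25])
nmd.capture([-22, -28])
instruction = network_service(nmd.metadata_buffer)
nmd.apply(instruction)
```

Because the service sees only metadata, the device can tune its detection parameters without streaming raw audio off-device.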
  • Patent number: 11783831
    Abstract: A user may access multiple virtual assistants via a voice-enabled device. The device may receive a command from the user, detect a wakeword corresponding to one of the assistants, and send audio data to a command processing system corresponding to the selected assistant. The device transmits encrypted audio data to one or more systems and, upon detecting a wakeword or wake command corresponding to one of the systems, the device may provide an encryption key to that particular system. The system may decrypt and process the audio data without additional latency introduced by having to wait for the audio data to arrive.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: October 10, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Philippe Andre Lantin, Ori Neidich, David Berol
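The latency trick, streaming ciphertext to every candidate system up front and releasing the key only to the one whose wakeword fired, can be sketched with a toy cipher. The XOR cipher and all names below are stand-ins; a real system would use proper encryption and asynchronous transport.

```python
def xor_cipher(data, key):
    # Symmetric toy cipher standing in for real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class VoiceDevice:
    """Sketch: ciphertext is assumed already in flight to every system;
    only the system whose wakeword matched receives the key, so decryption
    adds no extra round-trip for the audio itself."""
    def __init__(self, wakewords):
        self.wakewords = wakewords            # system name -> wakeword

    def handle(self, utterance, key=b"k3y"):
        ciphertext = xor_cipher(utterance.encode(), key)
        for system, wake in self.wakewords.items():
            if utterance.startswith(wake):
                # Release the key to this system only; it decrypts the
                # audio it already holds.
                return system, xor_cipher(ciphertext, key).decode()
        return None, None                     # no wakeword: nobody gets the key

device = VoiceDevice({"system_a": "alexa", "system_b": "hey google"})
```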
  • Patent number: 11763813
    Abstract: Implementations described herein relate to reducing latency in automated assistant interactions. In some implementations, a client device can receive audio data that captures a spoken utterance of a user. The audio data can be processed to determine an assistant command to be performed by an automated assistant. The assistant command can be processed, using a latency prediction model, to generate a predicted latency to fulfill the assistant command. Further, the client device (or the automated assistant) can determine, based on the predicted latency, whether to audibly render pre-cached content for presentation to the user prior to audibly rendering content that is responsive to the spoken utterance. The pre-cached content can be tailored to the assistant command and audibly rendered for presentation to the user while the content is being obtained, and the content can be audibly rendered for presentation to the user subsequent to the pre-cached content.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: September 19, 2023
    Assignee: GOOGLE LLC
    Inventors: Lior Alon, Rafael Goldfarb, Dekel Auster, Dan Rasin, Michael Andrew Goodman, Trevor Strohman, Nino Tasca, Valerie Nygaard, Jaclyn Konzelmann
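The gating logic, render tailored pre-cached content first only when the predicted fulfillment latency is high, can be sketched as below. The latency model, cache keys, and threshold are illustrative; the patent trains a latency prediction model rather than using rules.

```python
def respond(command, predict_latency, precached, threshold_s=1.5):
    """If predicted fulfillment latency exceeds the threshold, render
    tailored pre-cached content first so the interaction never goes
    silent while the real response is fetched."""
    rendered = []
    if predict_latency(command) >= threshold_s:
        rendered.append(precached.get(command.split()[0], "One moment."))
    rendered.append("<result of: " + command + ">")
    return rendered

slow_model = lambda cmd: 2.0 if "weather" in cmd else 0.2  # stand-in model
cache = {"weather": "Checking the forecast..."}
```

Fast commands skip the filler entirely, so pre-cached content never delays a response that was going to be quick anyway.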
  • Patent number: 11755844
    Abstract: Servers configured to perform automatic summarization of content in electronic messages are disclosed herein. In one embodiment, upon receiving an email, a server determines whether the incoming email is a templated message. In response to determining that the incoming email is not a templated message, the server classifies one or more sentences in the email as a statement of decision, judgement, inference, or fact, clusters the classified statements into clusters, and selects one or more of the clusters to automatically generate summaries of the incoming email. The server can then insert data representing the generated summaries into the email before transmitting the email to a destination via a computer network.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: September 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kausik Ghatak, Ganessh Kumar R P, Priyanka Goel, Neeraj Singh, Swathi Karri
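The classify-cluster-select pipeline can be sketched end to end. The keyword rules stand in for the patent's trained sentence classifiers, and the choice to keep decision and judgement clusters is an invented selection policy.

```python
def summarize(sentences, is_templated=False):
    """Sketch of the pipeline: skip templated messages, classify each
    sentence, cluster by statement type, select informative clusters."""
    if is_templated:
        return []
    cues = {"decided": "decision", "must": "judgement",
            "therefore": "inference"}
    # Classify: keyword cues stand in for trained classifiers.
    classified = [(next((kind for cue, kind in cues.items() if cue in s),
                        "fact"), s)
                  for s in sentences]
    # Cluster the classified statements by type.
    clusters = {}
    for kind, s in classified:
        clusters.setdefault(kind, []).append(s)
    # Select clusters to form the summary (illustrative policy).
    picked = [clusters[k] for k in ("decision", "judgement") if k in clusters]
    return [s for group in picked for s in group]

summary = summarize(["We decided to ship Friday",
                     "You must update the doc",
                     "The sky is blue"])
```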
  • Patent number: 11756537
    Abstract: Techniques are described herein for enabling an automated assistant to adjust its behavior depending on a detected age range and/or “vocabulary level” of a user who is engaging with the automated assistant. In various implementations, data indicative of a user's utterance may be used to estimate one or more of the user's age range and/or vocabulary level. The estimated age range/vocabulary level may be used to influence various aspects of a data processing pipeline employed by an automated assistant. In various implementations, aspects of the data processing pipeline that may be influenced by the user's age range/vocabulary level may include one or more of automated assistant invocation, speech-to-text (“STT”) processing, intent matching, intent resolution (or fulfillment), natural language generation, and/or text-to-speech (“TTS”) processing. In some implementations, one or more tolerance thresholds associated with one or more of these aspects, such as grammatical tolerances, vocabularic tolerances, etc., may be adjusted based on the estimated age range and/or vocabulary level.
    Type: Grant
    Filed: October 10, 2022
    Date of Patent: September 12, 2023
    Assignee: GOOGLE LLC
    Inventors: Pedro Gonnet Anders, Victor Carbune, Daniel Keysers, Thomas Deselaers, Sandro Feuz