Patents Issued on February 20, 2024
  • Patent number: 11908434
    Abstract: A display assembly displays a virtual object in a select location wherein an eye viewing the virtual object has an expected gaze direction. Deformation of the display assembly is detected. The deformation causes the virtual object to be viewable in an altered location wherein the eye has an altered gaze direction. The virtual object may be displayed in a corrected location wherein the eye viewing the virtual object in the corrected location has a corrected gaze direction that is moved closer to the expected gaze direction than the altered gaze direction.
    Type: Grant
    Filed: October 14, 2021
    Date of Patent: February 20, 2024
    Assignee: Magic Leap, Inc.
    Inventor: Adam Neustein
  • Patent number: 11908435
    Abstract: Percussion instruments configured for outdoor installation are disclosed. The percussion instrument comprises a support post, wherein the support post is configured for attachment to an outdoor surface, a mounting base secured to the support post, one or more metal discs, wherein each metal disc is tuned to produce a note on a musical scale when struck by a user, and one or more fasteners securing the one or more metal discs to the mounting base.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: February 20, 2024
    Assignee: PLAYCORE WISCONSIN, INC.
    Inventor: Richard Cooke
  • Patent number: 11908436
    Abstract: An instrument pick has a triangular planar body; a concave thumb platform; and an index finger slot. The body has rounded vertices, including a plucking vertex. The thumb platform has concentric grooves and a thumb cavity sidewall extending between the thumb platform and the body to form a drainage cavity. The thumb platform sidewall has an opening that connects to the drainage cavity. The finger slot has a base flanked by a pair of curved side walls. The side walls taper together to form a ridge that resists the user's index finger from sliding towards the plucking vertex. The finger slot has ventilation openings that connect to the drainage cavity to collect perspiration. The pick stays in place between thumb and finger even with a loose grip and can be customized for different playing styles.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: February 20, 2024
    Inventor: Kevin Randall Goold
  • Patent number: 11908437
    Abstract: The stringed instrument-playing machine is an automated electromechanical musical instrument that automatically sounds the musical notes of a song. It comprises a stringed instrument and a playing device. The stringed instrument is a mechanical structure that generates audible sounds in the form of a plurality of notes. The playing device is an electrically powered electromechanical device that mechanically plays the stringed instrument. By playing the stringed instrument is meant that the playing device determines and causes the stringed instrument to generate: a) the notes that are played; b) the order in which the notes are played; and c) the length of time each note is played.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: February 20, 2024
    Inventor: Olayinka Adetoye
  • Patent number: 11908438
    Abstract: Computer-based systems, devices, and methods for generating variations of musical compositions are described. Musical compositions stored in digital media include one or more music data object(s) that encode notes. A first set of notes is characterized and a transformation is applied to replace at least one note in the first set of notes with at least one note in a second set of notes. The transformation may explore or call upon the full range of musical notes available without being constrained by conventions of musicality and harmony. For each particular note in the second set of notes that replaces a note in the first set of notes, whether the particular note is in musical harmony with other notes in the music data object is separately assessed and, if not, the particular note is adjusted to bring it into musical harmony with other notes in the music data object.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 20, 2024
    Assignee: Obeebo Labs Ltd.
    Inventor: Colin P. Williams
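The transform-then-repair approach this abstract describes (replace notes without harmonic constraints, then separately check and adjust each replacement) can be sketched in Python. The C-major scale, the function names, and the nearest-pitch repair rule are illustrative assumptions, not details from the patent:

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes assumed to be "in harmony"

def in_harmony(note, scale=C_MAJOR):
    """Assess whether a MIDI note's pitch class falls in the reference scale."""
    return note % 12 in scale

def adjust_to_harmony(note, scale=C_MAJOR):
    """Move a note to the nearest pitch whose class is in the scale."""
    for delta in range(12):
        for candidate in (note - delta, note + delta):
            if candidate % 12 in scale:
                return candidate
    return note

def vary(notes, transform):
    """Apply an unconstrained transform, then repair harmony note by note."""
    return [n if in_harmony(n) else adjust_to_harmony(n)
            for n in (transform(x) for x in notes)]
```

For example, transposing a C-major triad up a semitone produces out-of-scale notes that the repair step pulls back into the scale.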
  • Patent number: 11908439
    Abstract: Signalling apparatus for a sound system, the signalling apparatus being configured for communicating with a processor, the signalling apparatus comprising: an identifier comprising a unique parameter value for identifying a sound property associated with the signalling apparatus; and an electrical contact configured to electrically connect with an electrical contact associated with the processor to communicate the parameter value with the processor, wherein the signalling apparatus is configured such that the electrical contact of the signalling apparatus is substantially fixed with respect to the electrical contact associated with the processor during activation of the sound system and/or signalling apparatus.
    Type: Grant
    Filed: May 28, 2019
    Date of Patent: February 20, 2024
    Assignee: Spark and Rocket Ltd.
    Inventor: Michael Tougher
  • Patent number: 11908440
    Abstract: [Problem] To provide an arpeggiator that enables musically natural arpeggio playing, and a program providing such a function. [Solution] In the present invention, a synthesizer 1 resets the octave count in an octave counter memory 12d to zero at the beginning of each bar when the octave reset function is on. With this configuration, the sounds generated at the beginning of each bar, that is, when the step count is 0, all have the same note number. Consequently, the sound-generation-timing cycle of the arpeggio pattern in each bar can be synchronized with the pitch-variation cycle, giving the listener the impression that a consistent phrase is being played.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: February 20, 2024
    Assignee: Roland Corporation
    Inventors: Akihiro Nagata, Takaaki Hagino
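The octave-reset behavior described in this abstract, where the octave counter returns to zero at every bar boundary so the first note of each bar shares the same note number, can be sketched as follows. The function signature, step layout, and octave-advance rule are illustrative assumptions, not details from the patent:

```python
def arpeggio_notes(base_note, pattern, octave_range, steps_per_bar,
                   total_steps, octave_reset=True):
    """Generate arpeggio note numbers, optionally resetting the octave
    counter at the beginning of each bar (step count 0)."""
    notes = []
    octave = 0
    for step in range(total_steps):
        if octave_reset and step % steps_per_bar == 0:
            octave = 0  # reset the octave counter at each bar start
        notes.append(base_note + 12 * octave + pattern[step % len(pattern)])
        # advance the octave once per full pass through the pattern
        if (step + 1) % len(pattern) == 0:
            octave = (octave + 1) % octave_range
    return notes
```

With the reset on, the note sounded at each bar start is always `base_note + pattern[0]`; with it off, the octave drift carries across bars.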
  • Patent number: 11908441
    Abstract: A sound muffling chamber covers a nose and mouth in an airtight manner. The sound muffling chamber includes a microphone that detects vocal sound waves in the sound muffling chamber and generates a voice signal. A digital signal processing system analyzes the voice signal and generates a voice cancellation signal. A sound silencing chamber includes a speaker that generates out of phase sound waves in response to the voice cancellation signal that superimposes on and cancels the vocal sound waves. A sound decelerator is positioned between the sound muffling chamber and the sound silencing chamber and configured to increase a traveling time of the vocal sound waves such that the vocal sound waves' arrival at the sound silencing chamber may be synchronized with the arrival of a voice cancellation signal. The sound muffling chamber may include inflatable cells separated by slats such that the sound muffling chamber is foldable.
    Type: Grant
    Filed: October 19, 2023
    Date of Patent: February 20, 2024
    Inventor: Kevin Chong Kim
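The core cancellation idea here, generating an out-of-phase copy of the detected voice so that superposition yields silence once the decelerated voice and the cancellation signal arrive in sync, can be sketched as follows. The sample rate, test tone, and perfect time alignment are illustrative assumptions:

```python
import math

def cancellation_signal(voice):
    """Out-of-phase (inverted) copy of the detected voice waveform."""
    return [-s for s in voice]

# A 440 Hz test tone sampled at 16 kHz, standing in for the voice signal.
fs = 16000
voice = [math.sin(2 * math.pi * 440 * n / fs) for n in range(160)]

# When both waves arrive at the silencing chamber in sync, their
# superposition is (ideally) silence; misalignment leaves a residual.
residual = [v + c for v, c in zip(voice, cancellation_signal(voice))]
```

The sound decelerator in the abstract exists precisely to make this alignment possible despite the processing delay of the cancellation path.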
  • Patent number: 11908442
    Abstract: A wireless earpiece includes a wireless earpiece housing, a processor disposed within the wireless earpiece housing, at least one microphone operatively connected to the processor, and at least one speaker operatively connected to the processor. The processor is configured to receive audio from the at least one microphone, perform processing of the audio to provide processed audio, and output the processed audio to the at least one speaker. The processing of the audio involves identifying body generated sounds generated by a body of a user of the wireless earpiece and removing the body generated sounds.
    Type: Grant
    Filed: August 2, 2022
    Date of Patent: February 20, 2024
    Assignee: BRAGI GMBH
    Inventor: Peter Vincent Boesen
  • Patent number: 11908443
    Abstract: Disclosed herein is a system for facilitating stress adaptation at a workstation. The system may include one or more microphones disposed on the workstation, configured for generating first sound signals of first sounds associated with an environment of the workstation. Further, the system may include a processing device communicatively coupled with the microphones, configured for analyzing the first sound signals, determining first sound characteristics of the first sounds, determining second sound characteristics of second sounds, and generating second sound signals for the second sounds. Further, the system may include acoustic devices disposed on the workstation and communicatively coupled with the processing device, configured for emitting the second sounds based on the second sound signals.
    Type: Grant
    Filed: January 27, 2022
    Date of Patent: February 20, 2024
    Assignee: BRELYON INC.
    Inventors: Alok Ajay Mehta, Barmak Heshmat Dehkordi
  • Patent number: 11908444
    Abstract: An apparatus for providing active noise control, includes: one or more microphones configured to detect sound entering through an aperture of a building structure; a set of speakers configured to provide sound output for cancelling or reducing at least some of the sound; and a processing unit communicatively coupled to the set of speakers, wherein the processing unit is configured to provide control signals to operate the speakers, wherein the control signals are independent of an error-microphone output.
    Type: Grant
    Filed: October 25, 2021
    Date of Patent: February 20, 2024
    Assignee: GN HEARING A/S
    Inventors: Willem Bastiaan Kleijn, Daan Ratering
  • Patent number: 11908445
    Abstract: A method for proactive notifications in a voice interface device includes: receiving a first user voice request for an action with a future performance time; assigning the first user voice request to a voice assistant service for performance; subsequent to the receiving, receiving a second user voice request and in response to the second user voice request initiating a conversation with the user; and during the conversation: receiving a notification from the voice assistant service of performance of the action; triggering a first audible announcement to the user to indicate a transition from the conversation and interrupting the conversation; triggering a second audible announcement to the user to indicate performance of the action; and triggering a third audible announcement to the user to indicate a transition back to the conversation and rejoining the conversation.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: February 20, 2024
    Assignee: Google LLC
    Inventors: Kenneth Mixter, Daniel Colish, Tuan Nguyen
  • Patent number: 11908446
    Abstract: The wearable audio-visual translation system is a device that includes a wearable camera and audio system that can take photos of a signboard using the wearable camera, send the images wirelessly to the user's smartphone for translation, and send the translation back to the user in the form of an audio signal with minimal delay. To accomplish this, the device is mounted onto eyewear, such that the camera system can capture visual signs instantaneously as the user is looking at them. Further, the device comprises associated electrical and electronic circuitry mounted onto the same eyewear that enables streaming of the photos taken by the camera system to a wirelessly connected smartphone. The smartphone performs image processing and recognition on the images with the help of a translator software application, and the translated signs are synthesized to audio signals and played out on an audio device.
    Type: Grant
    Filed: October 5, 2023
    Date of Patent: February 20, 2024
    Inventor: Eunice Jia Min Yong
  • Patent number: 11908447
    Abstract: According to an aspect, a method for synthesizing multi-speaker speech using an artificial neural network comprises generating and storing a speech learning model for a plurality of users by subjecting a synthetic artificial neural network of a speech synthesis model to learning, based on speech data of the plurality of users, generating speaker vectors for a new user who has not been learned and the plurality of users who have already been learned by using a speaker recognition model, determining a speaker vector having the most similar relationship with the speaker vector of the new user according to preset criteria out of the speaker vectors of the plurality of users who have already been learned, and generating and learning a speaker embedding of the new user by subjecting the synthetic artificial neural network of the speech synthesis model to learning, by using a value of a speaker embedding of a user for the determined speaker vector as an initial value and based on speaker data of the new user.
    Type: Grant
    Filed: August 4, 2021
    Date of Patent: February 20, 2024
    Assignee: IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
    Inventors: Joon Hyuk Chang, Jae Uk Lee
  • Patent number: 11908448
    Abstract: A method for training a non-autoregressive TTS model includes receiving training data that includes a reference audio signal and a corresponding input text sequence. The method also includes encoding the reference audio signal into a variational embedding that disentangles the style/prosody information from the reference audio signal and encoding the input text sequence into an encoded text sequence. The method also includes predicting a phoneme duration for each phoneme in the input text sequence and determining a phoneme duration loss based on the predicted phoneme durations and a reference phoneme duration. The method also includes generating one or more predicted mel-frequency spectrogram sequences for the input text sequence and determining a final spectrogram loss based on the predicted mel-frequency spectrogram sequences and a reference mel-frequency spectrogram sequence. The method also includes training the TTS model based on the final spectrogram loss and the corresponding phoneme duration loss.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: February 20, 2024
    Assignee: Google LLC
    Inventors: Isaac Elias, Jonathan Shen, Yu Zhang, Ye Jia, Ron J. Weiss, Yonghui Wu, Byungha Chun
  • Patent number: 11908449
    Abstract: A system and method for translating audio, and video when desired. The translations include synthetic media and data generated using AI systems. Through unique processors and generators executing a unique sequence of steps, the system and method produces more accurate translations that can account for various speech characteristics (e.g., emotion, pacing, idioms, sarcasm, jokes, tone, phonemes, etc.). These speech characteristics are identified in the input media and synthetically incorporated into the translated outputs to mirror the characteristics in the input media. Some embodiments further include systems and methods that manipulate the input video such that the speakers' faces and/or lips appear as if they are natively speaking the generated audio.
    Type: Grant
    Filed: November 29, 2022
    Date of Patent: February 20, 2024
    Assignee: Deep Media Inc.
    Inventors: Rijul Gupta, Emma Brown
  • Patent number: 11908450
    Abstract: A conversation design is received for a conversation bot that enables the conversation bot to provide a service using a conversation flow specified at least in part by the conversation design. The conversation design specifies in a first human language at least a portion of a message content to be provided by the conversation bot. It is identified that an end-user of the conversation bot prefers to converse in a second human language different from the first human language. In response to a determination that the message content is to be provided by the conversation bot to the end-user, the message content of the conversation design is dynamically translated for the end-user from the first human language to the second human language. The translated message content is provided to the end-user in a message from the conversation bot.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: February 20, 2024
    Assignee: ServiceNow, Inc.
    Inventors: Jebakumar Mathuram Santhosm Swvigaradoss, Satya Sarika Sunkara, Ankit Goel, Rajesh Voleti, Rishabh Verma, Patrick Casey, Rao Surapaneni
  • Patent number: 11908451
    Abstract: A text-based virtual object animation generation method includes acquiring text information, where the text information includes an original text of a virtual object animation to be generated; analyzing an emotional feature of the text information; performing speech synthesis according to the emotional feature, a rhyme boundary, and the text information to obtain audio information, where the audio information includes emotional speech obtained by conversion based on the original text; and generating a corresponding virtual object animation based on the text information and the audio information, where the virtual object animation is synchronized in time with the audio information.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: February 20, 2024
    Assignees: Mofa (Shanghai) Information Technology Co., Ltd., Shanghai Movu Technology Co., Ltd.
    Inventors: Congyi Wang, Yu Chen, Jinxiang Chai
  • Patent number: 11908452
    Abstract: Techniques for presenting an alternative input representation to a user for testing and collecting processing data are described. A system may determine that a received spoken input triggers an alternative input representation for presenting. The system may output data corresponding to the alternative input representation in response to the received spoken input, and the system may receive user feedback from the user. The system may store the user feedback and processing data corresponding to processing of the alternative input representation, which may be later used to update an alternative input component configured to determine alternative input representations for spoken inputs.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Sixing Lu, Chengyuan Ma, Chenlei Guo, Fangfu Li
  • Patent number: 11908453
    Abstract: A method and a system for training a machine-learning algorithm (MLA) to determine a user class of a user of an electronic device are provided. The method comprises: receiving a training audio signal representative of a training user utterance; soliciting, by the processor, a plurality of assessor-generated labels for the training audio signal, a given one of the plurality of assessor-generated labels being indicative of whether the training user is perceived to be one of a first class and a second class; generating an amalgamated assessor-generated label for the training audio signal, the amalgamated assessor-generated label being indicative of a label distribution of the plurality of assessor-generated labels between the first class and the second class; and generating a training set of data including the training audio signal and the amalgamated assessor-generated label to train the MLA to determine the user class of the user producing an in-use user utterance.
    Type: Grant
    Filed: August 23, 2021
    Date of Patent: February 20, 2024
    Assignee: Direct Cursus Technology L.L.C
    Inventors: Vladimir Andreevich Aliev, Stepan Aleksandrovich Kargaltsev, Artem Valerevich Babenko
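The amalgamated label described in this abstract is essentially a distribution of assessor votes over the two classes. A minimal sketch, where the class names and function signature are assumptions rather than details from the patent:

```python
from collections import Counter

def amalgamate(labels, classes=("first", "second")):
    """Turn per-assessor class labels into a soft label: the fraction of
    assessors who perceived the training user as each class."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: counts[cls] / total for cls in classes}
```

Training against such a soft label lets the model reflect assessor disagreement instead of forcing a hard class decision.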
  • Patent number: 11908454
    Abstract: A processor-implemented method trains an automatic speech recognition system using speech data and text data. A computing device receives speech data and generates a spectrogram based on the speech data. The computing device receives text data associated with an entire corpus of text data, and generates a textogram based upon the text data. The computing device trains an automatic speech recognition system using the spectrogram and the textogram.
    Type: Grant
    Filed: December 1, 2021
    Date of Patent: February 20, 2024
    Assignee: International Business Machines Corporation
    Inventors: Samuel Thomas, Hong-Kwang Kuo, Brian E. D. Kingsbury, George Andrei Saon, Gakuto Kurata
  • Patent number: 11908455
    Abstract: A speech separation model training method and apparatus, a computer-readable storage medium, and a computer device are provided, the method including: obtaining first audio and second audio, the first audio including target audio and having corresponding labeled audio, and the second audio including noise audio.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jun Wang, Wingyip Lam, Dan Su, Dong Yu
  • Patent number: 11908456
    Abstract: Embodiments of this application disclose an azimuth estimation method performed at a computing device, the method including: obtaining, in real time, multi-channel sampling signals and buffering the multi-channel sampling signals; performing wakeup word detection on one or more sampling signals of the multi-channel sampling signals, and determining a wakeup word detection score for each channel of the one or more sampling signals; performing a spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, when the wakeup word detection scores of the one or more sampling signals indicate that a wakeup word exists in the one or more sampling signals; and determining an azimuth of a target voice associated with the multi-channel sampling signals according to the spatial spectrum estimation result and a highest wakeup word detection score, thereby improving the accuracy of the azimuth estimation in a voice interaction process.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Jimeng Zheng, Yi Gao, Meng Yu, Ian Ernan Liu
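A highly simplified sketch of the gating logic this abstract describes: the spatial-spectrum peak is only consulted once the wakeup-word scores indicate a wakeup word was heard. The threshold, data layout, and the way score and spectrum are combined are illustrative assumptions; the patent's actual combination is more involved:

```python
def estimate_azimuth(spatial_spectrum, azimuths, wakeup_scores, threshold=0.5):
    """Return the azimuth at the spatial-spectrum peak, but only when the
    best per-channel wakeup score indicates the wakeup word was detected."""
    if max(wakeup_scores) < threshold:
        return None  # no wakeup word detected, so no estimation is triggered
    peak = max(range(len(spatial_spectrum)), key=spatial_spectrum.__getitem__)
    return azimuths[peak]
```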
  • Patent number: 11908457
    Abstract: A method for operating a neural network includes receiving an input sequence at an encoder. The input sequence is encoded to produce a set of hidden representations. Attention-heads of the neural network calculate attention weights based on the hidden representations. A context vector is calculated for each attention-head based on the attention weights and the hidden representations. Each of the context vectors corresponds to a portion of the input sequence. An inference is output based on the context vectors.
    Type: Grant
    Filed: July 3, 2020
    Date of Patent: February 20, 2024
    Assignee: QUALCOMM Incorporated
    Inventors: Mingu Lee, Jinkyu Lee, Hye Jin Jang, Kyu Woong Hwang
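The per-head context-vector computation this abstract outlines resembles standard scaled dot-product attention: each head weights the hidden representations and takes a weighted sum. A minimal NumPy sketch under that assumption (the head slicing and the single query vector are simplifications, not details from the patent):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_vectors(hidden, query, num_heads):
    """For each head: attention weights over the hidden representations,
    then a weighted sum producing one context vector per head."""
    seq_len, d_model = hidden.shape
    d_head = d_model // num_heads
    contexts = []
    for h in range(num_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = hidden[:, sl] @ query[sl] / np.sqrt(d_head)  # (seq_len,)
        weights = softmax(scores)
        contexts.append(weights @ hidden[:, sl])  # (d_head,)
    return contexts
```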
  • Patent number: 11908458
    Abstract: A computer-implemented method for customizing a recurrent neural network transducer (RNN-T) is provided. The computer-implemented method includes synthesizing first domain audio data from first domain text data, and feeding the synthesized first domain audio data into a trained encoder of the recurrent neural network transducer (RNN-T) having an initial condition, wherein the encoder is updated using the synthesized first domain audio data and the first domain text data. The computer-implemented method further includes synthesizing second domain audio data from second domain text data, and feeding the synthesized second domain audio data into the updated encoder of the recurrent neural network transducer (RNN-T), wherein the prediction network is updated using the synthesized second domain audio data and the second domain text data. The computer-implemented method further includes restoring the updated encoder to the initial condition.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: February 20, 2024
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Gakuto Kurata, George Andrei Saon, Brian E. D. Kingsbury
  • Patent number: 11908459
    Abstract: The present disclosure generally relates to a data processing system that detects potential exfiltration of audio data by agent applications. The data processing system can identify, from an I/O record, an input received from the digital assistant application via a microphone of a client device, an output received from the agent application after the input, and a microphone status for the microphone. The data processing system can determine that the output is terminal based on the input and the output. The data processing system can identify the microphone status as in the enabled state subsequent to the input. The data processing system can determine that the agent application is unauthorized to access audio data acquired via the microphone of the client device based on determining that the output is terminal and identifying the microphone status as enabled.
    Type: Grant
    Filed: May 14, 2021
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Yan Huang, Nikhil Rao
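The decision rule here, flag an agent whose output is terminal (the exchange is over) while the microphone stays enabled, can be sketched as follows. The terminal-phrase heuristic is an illustrative stand-in for the patent's actual input/output-based determination:

```python
TERMINAL_PHRASES = ("goodbye", "thanks for", "your order is confirmed")

def output_is_terminal(output_text):
    """Crude heuristic stand-in for determining that an agent's output
    ends the exchange."""
    text = output_text.lower()
    return any(phrase in text for phrase in TERMINAL_PHRASES)

def flags_exfiltration(output_text, mic_status):
    # Terminal response while the microphone remains enabled suggests the
    # agent may be capturing audio it is not authorized to access.
    return output_is_terminal(output_text) and mic_status == "enabled"
```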
  • Patent number: 11908460
    Abstract: Disclosed herein are techniques for using a generative adversarial network (GAN) to train a semantic parser of a dialog system. A method described herein involves accessing seed data that includes seed tuples. Each seed tuple includes a respective seed utterance and a respective seed logical form corresponding to the respective seed utterance. The method further includes training a semantic parser and a discriminator in a GAN. The semantic parser learns to map utterances to logical forms based on output from the discriminator, and the discriminator learns to recognize authentic logical forms based on output from the semantic parser. The semantic parser may then be integrated into a dialog system.
    Type: Grant
    Filed: August 13, 2020
    Date of Patent: February 20, 2024
    Assignee: Oracle International Corporation
    Inventors: Thanh Long Duong, Mark Edward Johnson
  • Patent number: 11908461
    Abstract: A method of performing speech recognition using a two-pass deliberation architecture includes receiving a first-pass hypothesis and an encoded acoustic frame and encoding the first-pass hypothesis at a hypothesis encoder. The first-pass hypothesis is generated by a recurrent neural network (RNN) decoder model for the encoded acoustic frame. The method also includes generating, using a first attention mechanism attending to the encoded acoustic frame, a first context vector, and generating, using a second attention mechanism attending to the encoded first-pass hypothesis, a second context vector. The method also includes decoding the first context vector and the second context vector at a context vector decoder to form a second-pass hypothesis.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: February 20, 2024
    Assignee: Google LLC
    Inventors: Ke Hu, Tara N. Sainath, Ruoming Pang, Rohit Prakash Prabhavalkar
  • Patent number: 11908462
    Abstract: The systems and methods of the present disclosure generally relate to a data processing system that can identify and surface alternative requests when presented with ambiguous, unclear, or other requests to which a data processing system may not be able to respond. The data processing system can improve the efficiency of network transmissions to reduce network bandwidth usage and processor utilization by selecting alternative requests that are responsive to the intent of the original request.
    Type: Grant
    Filed: March 21, 2022
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Gleb Skobeltsyn, Mihaly Kozsevnyikov, Vladimir Vuskovic
  • Patent number: 11908463
    Abstract: Techniques for storing and using multi-session context are described. A system may store context data corresponding to a first interaction, where the context data may include action data, entity data and a profile identifier for a user. Later the stored context data may be retrieved during a second interaction corresponding to the entity of the second interaction. The second interaction may take place at a system different than the first interaction. The system may generate a response during the second interaction using the stored context data of the prior interaction.
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Arjit Biswas, Shishir Bharathi, Anushree Venkatesh, Yun Lei, Ashish Kumar Agrawal, Siddhartha Reddy Jonnalagadda, Prakash Krishnan, Arindam Mandal, Raefer Christopher Gabriel, Abhay Kumar Jha, David Chi-Wai Tang, Savas Parastatidis
  • Patent number: 11908464
    Abstract: An electronic device and a method for controlling the same are provided. The electronic device comprises: a communication unit; and a processor configured to receive multiple audio signals via the communication unit, the multiple audio signals being acquired, via their respective microphones, by multiple external electronic devices positioned at different places, the processor being configured to determine, among the multiple audio signals, at least one audio signal including a user voice uttered by a user, and to perform voice recognition on an audio signal selected from the determined audio signals on the basis of the intensity of the determined audio signals.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: February 20, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jaesun Shin, Joonrae Cho, Jeongman Lee
  • Patent number: 11908465
    Abstract: An approach for controlling an electronic device is provided. The approach acquires voice information and image information for setting an action to be executed according to a condition, the voice information and the image information being respectively generated from a voice and a behavior associated with the voice of a user. The approach determines an event to be detected according to the condition and a function to be executed according to the action when the event is detected, based on the acquired voice information and the acquired image information. The approach determines at least one detection resource to detect the determined event. In response to the at least one determined detection resource detecting at least one event satisfying the condition, the approach executes the function according to the action.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: February 20, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Young-chul Sohn, Gyu-tae Park, Ki-beom Lee, Jong-ryul Lee
  • Patent number: 11908466
    Abstract: One or more parameters of one or more processes identified as belonging to a specific process grouping among a plurality of process groupings are obtained. Eligible token words in the one or more parameters are identified. The eligible token words are processed to select a subset within the eligible token words that are likely descriptive of the specific process grouping. The selected subset within the eligible token words is utilized to determine a descriptive identifier associated with the specific process grouping.
    Type: Grant
    Filed: June 9, 2020
    Date of Patent: February 20, 2024
    Assignee: ServiceNow, Inc.
    Inventors: Asaf Garty, Robert Bitterfeld
  • Patent number: 11908467
    Abstract: Systems, methods, and computer-readable media are disclosed for dynamic voice search transitioning. Example methods may include receiving, by a computer system in communication with a display, a first incoming voice data indication, initiating a first user interface theme for presentation at a display, wherein the first user interface theme is a default user interface theme, and receiving first voice data. Example methods may include sending the first voice data to a remote server for processing, receiving an indication from the remote server to initiate a second user interface theme, and initiating the second user interface theme for presentation at the display.
    Type: Grant
    Filed: August 24, 2020
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Rohit Prasad, Anna Santos, David Sanchez, Jared Strawderman, Sarah Castle, Kerry Hammil, Christopher Schindler, Timothy Twerdahl, Joseph Tavares, Bartosz Gulik
  • Patent number: 11908468
    Abstract: A system that is capable of resolving anaphora using timing data received by a local device. A local device outputs audio representing a list of entries. The audio may represent synthesized speech of the list of entries. A user can interrupt the device to select an entry in the list, such as by saying “that one.” The local device can determine an offset time representing the time between when audio playback began and when the user interrupted. The local device sends the offset time and audio data representing the utterance to a speech processing system, which can then use the offset time and stored data to identify which entry on the list was most recently output by the local device when the user interrupted. The system can then resolve anaphora to match that entry and can perform additional processing based on the referred-to item.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Prakash Krishnan, Arindam Mandal, Siddhartha Reddy Jonnalagadda, Nikko Strom, Ariya Rastrow, Ying Shi, David Chi-Wai Tang, Nishtha Gupta, Aaron Challenner, Bonan Zheng, Angeliki Metallinou, Vincent Auvray, Minmin Shen
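    The offset-time lookup at the heart of this abstract can be sketched as below: given each list entry's playback start time, the entry most recently begun at the moment of interruption is the referent of “that one.” A minimal sketch under assumed data shapes, not the patented method.

    ```python
    def resolve_anaphora(entries, offset_ms):
        """entries: list of {"name": str, "start_ms": int}, ordered by playback.
        offset_ms: time from playback start to the user's interruption.
        Returns the entry most recently begun when the user interrupted."""
        spoken = [e for e in entries if e["start_ms"] <= offset_ms]
        return spoken[-1]["name"] if spoken else None
    ```

    The server-side system would derive `start_ms` values from the TTS output it synthesized, so only the single offset number needs to travel from the device.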
  • Patent number: 11908469
    Abstract: An embodiment dashboard voice control system for a motorcycle comprises receiver circuitry to receive voice-generated signals, command recognition circuitry to recognize voice-generated command signals for a motorcycle dashboard out of the voice-generated signals received at the receiver circuitry as well as command implementation circuitry to implement motorcycle dashboard actions as a function of voice-generated command signals recognized by the command recognition circuitry.
    Type: Grant
    Filed: January 8, 2021
    Date of Patent: February 20, 2024
    Assignee: STMicroelectronics S.r.l.
    Inventors: Nicola Magistro, Alessandro Mariani, Riccardo Parisi
  • Patent number: 11908470
    Abstract: A method of dispensing a beverage from a beverage dispenser includes: detecting a user in proximity to the beverage dispenser; prompting the user to provide a first input, wherein the first input is audible; retrieving a user profile for the user based on the first input; receiving a second input from the user, wherein the second input comprises information about a beverage selection of the user, and wherein the second input is provided in a different manner than the first input; and dispensing the beverage.
    Type: Grant
    Filed: May 3, 2021
    Date of Patent: February 20, 2024
    Assignee: PepsiCo, Inc.
    Inventor: Robert Crawford
  • Patent number: 11908471
    Abstract: Methods, apparatuses, and computing systems are provided for integrating logic services with a group communication service. In an implementation, a method may include receiving a spoken message from a communication node in a communication group, determining that the spoken message relates to a logic service, and transferring the spoken message to a voice assistant service with an indication that the spoken message relates to the logic service. The method also includes receiving status information from the logic service indicative of a status of a networked device associated with the logic service. The method further includes sending an audible announcement to the communication nodes in the communication group expressive of the status of the networked device.
    Type: Grant
    Filed: December 20, 2021
    Date of Patent: February 20, 2024
    Assignee: Orion Labs, Inc.
    Inventors: Greg Albrecht, Ellen Juhlin, Jesse Robbins, Justin Black
  • Patent number: 11908472
    Abstract: Coordinated operation of a voice-controlled device and an accessory device in an environment is described. A remote system processes audio data it receives from the voice-controlled device in the environment to identify a first intent associated with a first domain, a second intent associated with a second domain, and a named entity associated with the audio data. The remote system sends, to the voice-controlled device, first information for accessing main content associated with the named entity, and a first instruction corresponding to the first intent. The remote system also sends, to the accessory device, second information for accessing control information or supplemental content associated with the main content, and a second instruction corresponding to the second intent. The first and second instructions, when processed by the devices in the environment, cause coordinated operation of the voice-controlled device and the accessory device.
    Type: Grant
    Filed: September 9, 2022
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Derick Deller, Link Cornelius, Apoorv Naik, Zoe Adams, Aslan Appleman, Pete Klein
  • Patent number: 11908473
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. An example process includes, at an electronic device having one or more processors and memory: performing a first task specified in a first user speech input; receiving a second user speech input; and in accordance with a determination that the second user speech input includes a modification to the first task, performing a second task, wherein performance of the second task modifies at least a portion of the performance of the first task.
    Type: Grant
    Filed: September 21, 2022
    Date of Patent: February 20, 2024
    Assignee: Apple Inc.
    Inventors: Yi Ma, Arash Dawoodi, Antoine R. Raux, Humza M. Siddiqui
  • Patent number: 11908474
    Abstract: [Problem] Provided is a system that can objectively evaluate a person who makes a presentation (a presenter). [Solution] A presentation evaluation system 1 includes: a voice analysis unit 3 that analyzes the content of a conversation; a presentation material related information storage unit 5 that stores information related to a presentation material; a keyword storage unit 7 that stores information related to a keyword in each page of the presentation material; a related term storage unit 9 that stores a related term of each keyword; and an evaluation unit 11 that evaluates the content of the conversation analyzed by the voice analysis unit 3, or a person who had the conversation.
    Type: Grant
    Filed: December 28, 2021
    Date of Patent: February 20, 2024
    Assignee: Interactive Solutions Corp.
    Inventor: Kiyoshi Sekine
  • Patent number: 11908475
    Abstract: A method, system, and non-transitory computer readable media for converting input from a user into a human interface device (HID) output to cause a corresponding action at a mapped device includes: receiving one or more user inputs from a user at an input device; analyzing the user input; selecting a command from a command profile that maps at least one of the received user inputs to one or more mapped tasks; executing the one or more mapped tasks associated with the selected command; and causing one or more corresponding actions at one or more mapped devices associated with the one or more mapped tasks.
    Type: Grant
    Filed: February 10, 2023
    Date of Patent: February 20, 2024
    Assignee: CEPHABLE INC.
    Inventor: Alexander Dunn
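    The command-profile lookup this abstract describes is essentially a mapping from a recognized input to a list of tasks to run. A rough sketch under assumed names; the profile contents and task names here are hypothetical, not drawn from the patent.

    ```python
    # Hypothetical command profile mapping recognized inputs to mapped tasks.
    COMMAND_PROFILE = {
        "open": ["focus_window", "maximize"],
        "next": ["scroll_down"],
    }

    def handle_input(user_input, profile, execute):
        """Select the command matching the user input, then execute each
        mapped task via the supplied `execute` callback. Returns the tasks run."""
        tasks = profile.get(user_input.strip().lower(), [])
        for task in tasks:
            execute(task)
        return tasks
    ```

    In practice `execute` would emit HID events (keystrokes, pointer moves) toward the mapped device rather than just recording task names.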
  • Patent number: 11908476
    Abstract: An artificial intelligence enabled system is disclosed. The system includes a core component for enabling AI-powered interactions between the system and its users and one or more agents that understand user intent and automatically interact with products and services on the web and/or in the physical world through imitation of a human user.
    Type: Grant
    Filed: September 21, 2023
    Date of Patent: February 20, 2024
    Assignee: Rabbit Inc.
    Inventors: Cheng Lyu, Peiyuan Liao, Zhuoheng Yang
  • Patent number: 11908477
    Abstract: This disclosure describes techniques for generating a conversation summary. The techniques may include processing at least one statement indication of the conversation to determine at least one statement that is a candidate highlight of the conversation. The techniques may further include applying linguistic filtering rules to the candidate highlight to determine the candidate highlight is an actual highlight. The techniques may further include generating the conversation summary including providing the actual highlight as at least a portion of the conversation summary.
    Type: Grant
    Filed: August 28, 2020
    Date of Patent: February 20, 2024
    Inventors: Varsha Ravikumar Embar, Karthik Raghunathan
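    The two-stage pipeline in this abstract — score statements to find candidate highlights, then apply linguistic filtering rules — can be sketched as follows. The scoring function, threshold, and rules are stand-ins, not the patented models.

    ```python
    def summarize(statements, score, rules, threshold=0.5):
        """Keep statements scoring at or above the threshold as candidate
        highlights, then keep only candidates that pass every linguistic
        filtering rule; the survivors form the conversation summary."""
        candidates = [s for s in statements if score(s) >= threshold]
        highlights = [s for s in candidates if all(rule(s) for rule in rules)]
        return " ".join(highlights)
    ```

    A deployed system would use a trained highlight classifier for `score` and rules such as dropping fillers or incomplete clauses.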
  • Patent number: 11908478
    Abstract: A method for generating speech includes uploading a reference set of features that were extracted from sensed movements of one or more target regions of skin on faces of one or more reference human subjects in response to words articulated by the subjects and without contacting the one or more target regions. A test set of features is extracted from the sensed movements of at least one of the target regions of skin on a face of a test subject in response to words articulated silently by the test subject and without contacting the one or more target regions. The extracted test set of features is compared to the reference set of features, and, based on the comparison, a speech output is generated that includes the articulated words of the test subject.
    Type: Grant
    Filed: March 7, 2023
    Date of Patent: February 20, 2024
    Assignee: Q (Cue) Ltd.
    Inventors: Aviad Maizels, Avi Barliya, Yonatan Wexler
  • Patent number: 11908479
    Abstract: In one example, a method includes: receiving audio data generated by a microphone of a current computing device; identifying, based on the audio data, one or more computing devices that each emitted a respective audio signal in response to speech reception being activated at the current computing device; and selecting either the current computing device or a particular computing device from the identified one or more computing devices to satisfy a spoken utterance determined based on the audio data.
    Type: Grant
    Filed: July 1, 2022
    Date of Patent: February 20, 2024
    Assignee: GOOGLE LLC
    Inventor: Jian Wei Leong
  • Patent number: 11908480
    Abstract: This disclosure proposes systems and methods for processing natural language inputs using data associated with multiple language recognition contexts (LRC). A system using multiple LRCs can receive input data from a device, identify a first identifier associated with the device, and further identify second identifiers associated with the first identifier and representing candidate users of the device. The system can access language processing data used for natural language processing for the LRCs corresponding to each of the first and second identifiers, and process the input data using the language processing data at one or more stages of automatic speech recognition, natural language understanding, entity resolution, and/or command execution. User recognition can reduce the number of candidate users, and thus the amount of data used to process the input data. Dynamic arbitration can select from between competing hypotheses representing the first identifier and a second identifier, respectively.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: February 20, 2024
    Assignee: Amazon Technologies, Inc.
    Inventors: Da Teng, Adrian Evans, Naresh Narayanan
  • Patent number: 11908481
    Abstract: Provided is a method for encoding live-streaming data, including: acquiring first state information associated with a current data frame; generating backup state information by backing up the first state information; generating a first encoded data frame by encoding the current data frame based on a first bit rate and the first state information; generating reset state information by resetting the updated first state information based on the backup state information; generating a second encoded data frame by encoding the current data frame based on a second bit rate and the reset state information; and generating a first target data frame corresponding to the current data frame based on the first encoded data frame and the second encoded data frame.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: February 20, 2024
    Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Wenhao Xing, Chen Zhang
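    The backup-encode-reset-encode sequence this abstract describes lets one stateful encoder produce the same frame at two bit rates from an identical starting state. A minimal sketch with an assumed encoder interface (`encode(frame, bitrate, state)` mutating `state`); the bit-rate values are illustrative.

    ```python
    import copy

    def encode_dual_rate(frame, state, encode):
        """Encode one frame at two bit rates from the same encoder state:
        back up the state, encode at the first rate (which updates the state),
        reset to the backup, then encode at the second rate."""
        backup = copy.deepcopy(state)          # back up first state information
        first = encode(frame, 128_000, state)  # encoding updates `state`
        state.clear()
        state.update(backup)                   # reset state from the backup
        second = encode(frame, 32_000, state)  # second encode sees the same start state
        return first, second
    ```

    Without the backup/reset, the second encode would start from the state left behind by the first and the two streams would diverge.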
  • Patent number: 11908482
    Abstract: This application provides a packet loss retransmission method, a computer-readable storage medium, and an electronic device. The packet loss retransmission method includes: obtaining a loudness corresponding to a target audio data packet; and in response to receiving a packet loss state indicating that the target audio data packet is lost, in accordance with a determination that the loudness corresponding to the target audio data packet meets a first threshold: retransmitting the target audio data packet. The technical solutions of this application may alleviate the problem of long data retransmission time, and improve data transmission efficiency.
    Type: Grant
    Filed: April 26, 2022
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Junbin Liang
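    The retransmission decision in this abstract reduces to a loudness gate: a lost packet is worth retransmitting only if it was loud enough to matter perceptually. A minimal sketch; the dB threshold is an assumed illustrative value, not taken from the patent.

    ```python
    def should_retransmit(loudness_db, packet_lost, threshold_db=-30.0):
        """Request retransmission of a lost audio packet only when its
        recorded loudness meets the threshold; quieter lost packets are
        left to concealment, saving retransmission time and bandwidth."""
        return packet_lost and loudness_db >= threshold_db
    ```

    The sender would need to track per-packet loudness (e.g. computed at encode time) so the check can run as soon as a loss report arrives.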
  • Patent number: 11908483
    Abstract: This application relates to a method of extracting an inter channel feature from a multi-channel multi-sound source mixed audio signal performed at a computing device.
    Type: Grant
    Filed: August 12, 2021
    Date of Patent: February 20, 2024
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Rongzhi Gu, Shixiong Zhang, Lianwu Chen, Yong Xu, Meng Yu, Dan Su, Dong Yu