Patents Examined by Andrew C Flanders
  • Patent number: 11996093
    Abstract: An information processing apparatus and an information processing method are provided that enable suitable determination of sensing results used in estimating a user state. The information processing apparatus is provided with a determination unit that determines, on the basis of a predetermined reference, one or more second sensing results used in estimating the user state from among a plurality of first sensing results received from a plurality of devices. The information processing apparatus is further provided with an output control unit that controls an output of information on the basis of the one or more second sensing results.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: May 28, 2024
    Inventors: Shinichi Kawano, Hiro Iwase, Mari Saito, Yuhei Taki
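The selection step this abstract describes — keeping only the sensing results that satisfy a predetermined reference — can be sketched as a simple filter. This is not from the patent; the field names, confidence/recency criteria, and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensingResult:
    device_id: str
    confidence: float   # sensor's self-reported reliability (assumed field)
    age_s: float        # seconds since the reading was taken (assumed field)

def select_results(first_results, min_confidence=0.5, max_age_s=60.0):
    """Keep only the first sensing results that meet the predetermined
    reference: confident enough and recent enough."""
    return [r for r in first_results
            if r.confidence >= min_confidence and r.age_s <= max_age_s]

readings = [
    SensingResult("watch", 0.9, 5.0),
    SensingResult("phone", 0.3, 2.0),     # too unreliable
    SensingResult("speaker", 0.8, 300.0), # too stale
]
selected = select_results(readings)  # the "second sensing results"
```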
  • Patent number: 11861295
    Abstract: Described herein are techniques for using a graph neural network to encode online job postings as embeddings. First, an input graph is defined by processing one or more rules to discover edges that connect nodes in an input graph, where the nodes of the input graph represent job postings or standardized job attributes, and the edges are determined based on analyzing a log of user activity directed to online job postings. Next, a graph neural network (GNN) is trained based on an edge prediction task. Finally, once trained, the GNN is used to derive node embeddings for the nodes (e.g., job postings) of the input graph, and in some instances, new online job postings not represented in the original input graph.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: January 2, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shan Li, Baoxu Shi, Jaewon Yang
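One plausible rule for the edge-discovery step — connecting two job postings when users interact with both — can be sketched as below. The co-occurrence rule and threshold are assumptions for illustration, not the patent's actual method.

```python
from collections import defaultdict
from itertools import combinations

def discover_edges(activity_log, min_cooccurrence=2):
    """Connect two job postings when at least `min_cooccurrence` users
    interacted with both -- one simple rule for building graph edges
    from a log of user activity."""
    jobs_by_user = defaultdict(set)
    for user, job in activity_log:
        jobs_by_user[user].add(job)
    counts = defaultdict(int)
    for jobs in jobs_by_user.values():
        for a, b in combinations(sorted(jobs), 2):
            counts[(a, b)] += 1
    return {edge for edge, n in counts.items() if n >= min_cooccurrence}

log = [("u1", "j1"), ("u1", "j2"), ("u2", "j1"), ("u2", "j2"),
       ("u3", "j1"), ("u3", "j3")]
edges = discover_edges(log)  # only (j1, j2) co-occurs twice
```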
  • Patent number: 11854572
    Abstract: Computer-implemented methods, computer program products, and computer systems for mitigating frequency loss may include one or more processors configured for receiving first audio data corresponding to unobstructed user utterances, receiving second audio data corresponding to first obstructed user utterances, generating a frequency loss (FL) model representing frequency loss between the first audio data and the second audio data, receiving third audio data corresponding to one or more second obstructed user utterances, processing the third audio data using the FL model to generate fourth audio data corresponding to a frequency loss mitigated version of the second obstructed user utterances, and transmitting the fourth audio data to a recipient computing device. The first obstructed user utterances are obstructed by a facemask and the one or more second obstructed user utterances are obstructed by the facemask. The FL model may be executed as an audio plugin in a web conferencing program.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: December 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Mary D. Swift, Irene Lizeth Manotas Gutiérrez, Kelley Anders, Jonathan D. Dunne
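A minimal stand-in for the FL model: estimate a per-band gain from paired clean and masked magnitude spectra, then apply it to new masked speech. The spectral-ratio formulation and the synthetic data are assumptions, not the patent's implementation.

```python
import numpy as np

def fit_fl_model(clean_spec, masked_spec, eps=1e-8):
    """Per-band gain mapping masked-speech magnitudes back toward the
    clean-speech magnitudes (a simple spectral-ratio FL model).
    Inputs are (frames, bands) magnitude spectrograms."""
    clean_avg = clean_spec.mean(axis=0)
    masked_avg = masked_spec.mean(axis=0)
    return clean_avg / (masked_avg + eps)

def mitigate(spec, fl_gain):
    """Apply the learned per-band gain to new masked speech frames."""
    return spec * fl_gain

rng = np.random.default_rng(0)
clean = np.abs(rng.normal(1.0, 0.1, size=(100, 8)))
attenuation = np.linspace(1.0, 0.3, 8)   # a mask attenuates high bands more
masked = clean * attenuation

gain = fit_fl_model(clean, masked)       # "first" and "second" audio data
restored = mitigate(masked, gain)        # mitigated "fourth" audio data
```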
  • Patent number: 11848029
    Abstract: A method for detecting an audio signal, the method comprises: obtaining a speech segment and a non-speech segment of an audio signal to be detected, extracting a first audio feature of the speech segment and a second audio feature of the non-speech segment, detecting the first audio feature using a predetermined speech segment detection model to obtain a first detection score, detecting the second audio feature using a predetermined non-speech segment detection model to obtain a second detection score, and determining whether the audio signal belongs to a target audio based on the first detection score and the second detection score.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: December 19, 2023
    Assignee: BEIJING XIAOMI PINECONE ELECTRONICS CO., LTD.
    Inventors: Yifeng Wang, Guodu Cai, Shuo Yang, Lihan Li, Peng Gao
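The final decision step — combining the speech-segment and non-speech-segment detection scores — can be sketched as a weighted fusion. The weights and threshold here are illustrative assumptions; the patent does not specify the combination rule.

```python
def is_target_audio(speech_score, nonspeech_score,
                    w_speech=0.7, threshold=0.6):
    """Fuse the two detection scores into one target-audio decision
    (weight and threshold values are illustrative)."""
    fused = w_speech * speech_score + (1 - w_speech) * nonspeech_score
    return fused >= threshold

# Strong speech-model evidence outweighs a weak non-speech score.
decision = is_target_audio(0.9, 0.3)
```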
  • Patent number: 11842718
    Abstract: An unambiguous phonics system (UPS) is capable of presenting text in a format with unambiguous pronunciation. The system can translate input text written in a given language (e.g., English) into a UPS representation of the text written in a UPS alphabet. A unique UPS grapheme can be used to represent each unique grapheme-phoneme combination in the input text. Thus, each letter of the input text is represented in the UPS spelling and each letter of the UPS spelling unambiguously indicates the phoneme used. For all the various grapheme-phoneme combinations for a given input grapheme, the corresponding UPS graphemes can be constructed to have visual similarity with the given input grapheme, thus easing an eventual transition from UPS spelling to traditional spelling. The UPS can include translation, complexity scoring, word/phoneme-grapheme searching, and other modules. The UPS can also include techniques to provide efficient, level-based training of the UPS alphabet.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: December 12, 2023
    Assignee: TINYIVY, INC.
    Inventor: Zachary Silverzweig
  • Patent number: 11837254
    Abstract: Disclosed are systems and methods for a frontend capture module of a video conferencing application, which can modify an input signal received from a microphone device to match predetermined signal characteristics, such as voice signal level and expected noise floor. An input stage, a suppression module, and an output stage amplify the voice signal portion of the input signal and suppress the noise signal of the input signal to predetermined ranges. The input stage selectively applies gains defined by a gain table, based on the signal level of the input signal. The suppression module selectively applies a suppression gain to the input signal based on the presence or absence of a voice signal in the input signal. The output stage further amplifies the portions of the input signal having a voice signal and applies a gain table to maintain a consistent noise floor.
    Type: Grant
    Filed: October 15, 2021
    Date of Patent: December 5, 2023
    Assignee: Zoom Video Communications, Inc.
    Inventor: Yu Rao
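The input stage's level-dependent gain lookup can be sketched as a table scan. The table entries and dBFS breakpoints below are invented for illustration; the patent does not disclose specific values.

```python
# Gain table: (max input level in dBFS, gain in dB) -- illustrative values.
GAIN_TABLE = [(-40.0, 12.0), (-25.0, 6.0), (-12.0, 0.0), (0.0, -3.0)]

def input_stage_gain(level_dbfs):
    """Pick the gain for the measured input level, so quiet voices are
    boosted and hot signals are attenuated toward the target range."""
    for max_level, gain_db in GAIN_TABLE:
        if level_dbfs <= max_level:
            return gain_db
    return GAIN_TABLE[-1][1]

quiet = input_stage_gain(-50.0)  # boosted
hot = input_stage_gain(-5.0)     # trimmed
```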
  • Patent number: 11830486
    Abstract: Techniques are described herein for identifying a failed hotword attempt. A method includes: receiving first audio data; processing the first audio data to generate a first predicted output; determining that the first predicted output satisfies a secondary threshold but does not satisfy a primary threshold; receiving second audio data; processing the second audio data to generate a second predicted output; determining that the second predicted output satisfies the secondary threshold but does not satisfy the primary threshold; in response to the first predicted output and the second predicted output satisfying the secondary threshold but not satisfying the primary threshold, and in response to the first spoken utterance and the second spoken utterance satisfying one or more temporal criteria relative to one another, identifying a failed hotword attempt; and in response to identifying the failed hotword attempt, providing a hint that is responsive to the failed hotword attempt.
    Type: Grant
    Filed: October 27, 2020
    Date of Patent: November 28, 2023
    Assignee: GOOGLE LLC
    Inventors: Matthew Sharifi, Victor Carbune
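The core logic — two utterances whose scores land between the secondary and primary thresholds within a temporal window — can be sketched directly. Threshold and window values are illustrative assumptions.

```python
def detect_failed_hotword(events, primary=0.8, secondary=0.4, window_s=5.0):
    """events: list of (timestamp_s, score) pairs. Flags a failed
    hotword attempt when two near-miss scores (above the secondary
    threshold but below the primary) occur within the time window."""
    near_misses = [(t, s) for t, s in events if secondary <= s < primary]
    for (t1, _), (t2, _) in zip(near_misses, near_misses[1:]):
        if t2 - t1 <= window_s:
            return True
    return False

# Two near-miss utterances three seconds apart -> provide a hint.
failed = detect_failed_hotword([(0.0, 0.55), (3.0, 0.6)])
```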
  • Patent number: 11823707
    Abstract: An audio spotting system configured for various operating modes including a regular mode and a sensitivity mode is described. An example cascade audio spotting system may include a high-power subsystem including a high-power trigger and a transfer module. This high-power trigger includes one or more detection models used to detect whether a target sound activity is included in the one or more audio streams. The one or more detection models are associated with a first set of hyperparameters when the cascade audio spotting system is in a regular mode, and the one or more detection models are associated with a second set of hyperparameters when the cascade audio spotting system is in a sensitivity mode. The transfer module provides at least one of one or more processed audio streams for further processing in response to the high-power trigger detecting the target sound activity in the one or more audio streams.
    Type: Grant
    Filed: January 10, 2022
    Date of Patent: November 21, 2023
    Assignee: Synaptics Incorporated
    Inventor: Saeed Mosayyebpour Kaskari
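Mode-dependent hyperparameter sets can be sketched as a small lookup that changes the trigger's behavior. The hyperparameter names and values are illustrative assumptions, not the patent's.

```python
# Illustrative hyperparameter sets for the two operating modes.
HYPERPARAMS = {
    "regular":     {"threshold": 0.7, "min_frames": 10},
    "sensitivity": {"threshold": 0.4, "min_frames": 5},
}

class HighPowerTrigger:
    def __init__(self, mode="regular"):
        self.params = HYPERPARAMS[mode]

    def fires(self, scores):
        """Trigger when enough frame scores exceed the mode's threshold."""
        hits = sum(s >= self.params["threshold"] for s in scores)
        return hits >= self.params["min_frames"]

frames = [0.5] * 8
regular = HighPowerTrigger("regular").fires(frames)        # stays quiet
sensitive = HighPowerTrigger("sensitivity").fires(frames)  # fires
```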
  • Patent number: 11810550
    Abstract: A computer system may connect to various customer-facing devices and manage or automate the order process between a retail store and the customer. The computer system may perform the dialogue and receive an order for items from the retail store and may perform quality control monitoring of the dialogue between customers and employees taking orders. The ordering system may utilize the ordered items in combination with various contextual cues to determine a customer identity which may then be linked to past orders and/or various order preferences. Based on the determined customer identity, the system may provide recommendations of additional order items or order alterations to the customer before personally identifying information has been collected from the customer. The determination of the customer identity and the determination of recommendations may be performed by machine learning algorithms that were trained on customer data and the retail store products.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: November 7, 2023
    Inventors: Vinay Kumar Shukla, Rahul Aggarwal, Pranav Nirmal Mehra, Vrajesh Navinchandra Sejpal, Akshay Labh Kayastha, Yuganeshan A J
  • Patent number: 11804229
    Abstract: An apparatus for providing a processed audio signal representation on the basis of an input audio signal representation is configured to apply an un-windowing in order to provide the processed audio signal representation on the basis of the input audio signal representation. The apparatus is configured to adapt the un-windowing in dependence on one or more signal characteristics and/or in dependence on one or more processing parameters used for a provision of the input audio signal representation.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: October 31, 2023
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Stefan Bayer, Pallavi Maben, Emmanuel Ravelli, Guillaume Fuchs, Eleni Fotopoulou, Markus Multrus
  • Patent number: 11790884
    Abstract: A computer-implemented method of generating speech audio in a video game is provided. The method includes inputting, into a synthesizer module, input data that represents speech content. Source acoustic features for the speech content in the voice of a source speaker are generated and are input, along with a speaker embedding associated with a player of the video game into an acoustic feature encoder of a voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder, which are inputted into an acoustic feature decoder of the voice convertor to generate target acoustic features. The target acoustic features are processed with one or more modules, to generate speech audio in the voice of the player.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: October 17, 2023
    Assignee: ELECTRONIC ARTS INC.
    Inventors: Zahra Shakeri, Jervis Pinto, Kilol Gupta, Mohsen Sardari, Harold Chaput, Navid Aghdaie, Kenneth Moss
  • Patent number: 11790893
    Abstract: A voice processing method is disclosed. The voice processing method applies first and second sentence vectors extracted from first and second utterances, that are included in one dialog group and are separated from each other, to a learning model and generates an output from which at least one word having an overlapping meaning is removed. The voice processing method can be associated with an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, devices related to 5G services, and the like.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: October 17, 2023
    Assignee: LG ELECTRONICS INC.
    Inventors: Kwangyong Lee, Hyun Yu, Byeongha Kim, Yejin Kim
  • Patent number: 11769007
    Abstract: An approach for generating synthetic treebanks to be used in training a parser in a production system is provided. A processor receives a request to generate one or more synthetic treebanks from a production system, wherein the request indicates a language for the one or more synthetic treebanks. A processor retrieves at least one corpus of text in which the requested language is present. A processor provides the at least one corpus to a transformer enhanced parser neural network model. A processor generates at least one synthetic treebank associated with a string of text from the at least one corpus of text in which the requested language is present. A processor sends the at least one synthetic treebank to the production system, wherein the production system trains a parser utilized by the production system with the at least one synthetic treebank.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: September 26, 2023
    Assignee: International Business Machines Corporation
    Inventors: Yousef El-Kurdi, Radu Florian, Hiroshi Kanayama, Efsun Kayi, Laura Chiticariu, Takuya Ohko, Robert Todd Ward
  • Patent number: 11749261
    Abstract: Implementations disclosed herein are directed to federated learning of machine learning (“ML”) model(s) based on gradient(s) generated at corresponding client devices and a remote system. Processor(s) of the corresponding client devices can process client data generated locally at the corresponding client devices using corresponding on-device ML model(s) to generate corresponding predicted outputs, generate corresponding client gradients based on the corresponding predicted outputs, and transmit the corresponding client gradients to the remote system. Processor(s) of the remote system can process remote data obtained from remote database(s) using global ML model(s) to generate additional corresponding predicted outputs, generate corresponding remote gradients based on the additional corresponding predicted outputs. Further, the remote system can utilize the corresponding client gradients and the corresponding remote gradients to update the global ML model(s) or weights thereof.
    Type: Grant
    Filed: March 10, 2021
    Date of Patent: September 5, 2023
    Assignee: GOOGLE LLC
    Inventors: Françoise Beaufays, Andrew Hard, Swaroop Indra Ramaswamy, Om Dipakbhai Thakkar, Rajiv Mathews
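The update step — combining client gradients with server-side gradients to update the global model — can be sketched as gradient averaging plus an SGD step. This is a minimal sketch under assumed shapes and learning rate, not the disclosed system.

```python
import numpy as np

def sgd_step(weights, gradients, lr=0.1):
    """Apply the average of client and remote gradients to the
    global model weights (one federated update round)."""
    avg_grad = np.mean(gradients, axis=0)
    return weights - lr * avg_grad

global_w = np.zeros(3)
client_grads = [np.array([1.0, 0.0, 0.0]),   # from client device A
                np.array([0.0, 1.0, 0.0])]   # from client device B
remote_grad = np.array([0.0, 0.0, 1.0])      # computed at the remote system

global_w = sgd_step(global_w, client_grads + [remote_grad])
```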
  • Patent number: 11748573
    Abstract: This disclosure relates to a system and method for quantitative measure of subject specific sentiment analysis of a text input. The text input comprises subjects and objects. The text input is tokenized, and each word of the tokenized text input is tagged based on a part-of-speech (POS) and a universal dependency tag. A universal dependency tag tree is prepared based on dependency tags. Further, the subjects and objects are identified using a subject-verb-object (SVO) detection. The universal dependency tree is analyzed for each identified subject to determine a token dependency of the subject. The identified subject is quantified using a deep learning-based sentiment analyzer and finally a sentiment score is recommended for the subject using a probability score and a class score is assigned to the subject.
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: September 5, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Sitarama Brahmam Gunturi, Pranavi Sura, Brajesh Singh
  • Patent number: 11741944
    Abstract: A method of training a speech model includes receiving, at a voice-enabled device, a fixed set of training utterances where each training utterance in the fixed set of training utterances includes a transcription paired with a speech representation of the corresponding training utterance. The method also includes sampling noisy audio data from an environment of the voice-enabled device. For each training utterance in the fixed set of training utterances, the method further includes augmenting, using the noisy audio data sampled from the environment of the voice-enabled device, the speech representation of the corresponding training utterance to generate noisy audio samples and pairing each of the noisy audio samples with the corresponding transcription of the corresponding training utterance. The method additionally includes training a speech model on the noisy audio samples generated for each speech representation in the fixed set of training utterances.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: August 29, 2023
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Victor Carbune
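The augmentation step — mixing noise sampled from the device's environment into each fixed training utterance — can be sketched as SNR-controlled mixing. The target SNR and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def augment_with_noise(speech, noise, snr_db=10.0):
    """Mix environment noise into a training utterance at a target
    SNR, yielding a noisy sample that keeps the same transcription."""
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(1)
utterance = np.sin(np.linspace(0, 20 * np.pi, 1600))  # stand-in speech
ambient = rng.normal(0, 0.5, size=1600)               # sampled environment noise
noisy = augment_with_noise(utterance, ambient, snr_db=10.0)
```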
  • Patent number: 11715465
    Abstract: A de-coupled computing infrastructure is described that is adapted to provide domain specific contextual engines based on conversational flow. The computing infrastructure further includes, in some embodiments, a mechanism for directing conversational flow in respect of a backend natural language processing engine. The computing infrastructure is adapted to control or manage conversational flows using a plurality of natural language processing agents.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: August 1, 2023
    Assignee: ROYAL BANK OF CANADA
    Inventors: MohammadHosein Ahmadidaneshashtiani, Ian Robert Middleton, Shawn Harold Munro, Darren Michael MacNamara, Bo Sang, Devina Jaiswal, Hanke Liu, Kylie To
  • Patent number: 11710003
    Abstract: Embodiments of this application include an information conversion method for translating source information. The source information is encoded to obtain a first code. A preset conversion condition is obtained. The preset conversion condition indicates a mapping relationship between the source information and a conversion result. The first code is decoded according to the source information, the preset conversion condition, and translated information to obtain target information. The target information and the source information are in different languages. Further, the translated information includes a word obtained through conversion of the source information into a language of the target information.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: July 25, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventors: Mingxuan Wang, Jun Xie, Jian Yao, Jiangquan Huang
  • Patent number: 11694696
    Abstract: A method and apparatus for generating a speaker identification neural network include generating a first neural network that is trained to identify a first speaker with respect to a first voice signal in a first environment, generating a second neural network for identifying a second speaker with respect to a second voice signal in a second environment, and generating the speaker identification neural network by training the second neural network based on a teacher-student training model in which the first neural network is set to a teacher neural network and the second neural network is set to a student neural network.
    Type: Grant
    Filed: November 25, 2019
    Date of Patent: July 4, 2023
    Assignees: SAMSUNG ELECTRONICS CO., LTD., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Sungchan Kang, Namsoo Kim, Cheheung Kim, Seokwan Chae
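The teacher-student training the abstract describes typically minimizes the student's cross-entropy against the teacher's softened outputs. A minimal distillation-loss sketch follows; the temperature and logits are illustrative assumptions.

```python
import numpy as np

def softmax(z, temp=1.0):
    e = np.exp((z - z.max()) / temp)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temp=2.0):
    """Cross-entropy of the student against the teacher's softened
    speaker posteriors -- the core of teacher-student training."""
    t = softmax(teacher_logits, temp)
    s = softmax(student_logits, temp)
    return -np.sum(t * np.log(s + 1e-12))

teacher = np.array([4.0, 1.0, 0.5])   # first (teacher) network's logits
matched = distillation_loss(teacher, np.array([4.0, 1.0, 0.5]))
off     = distillation_loss(teacher, np.array([0.5, 1.0, 4.0]))
```

A student that reproduces the teacher's logits incurs a lower loss than one that disagrees, which is what drives the second network toward the first network's behavior in the new environment.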
  • Patent number: 11670308
    Abstract: A method for generating a comfort noise (CN) parameter is provided. The method includes receiving an audio input; detecting, with a Voice Activity Detector (VAD), a current inactive segment in the audio input; as a result of detecting, with the VAD, the current inactive segment in the audio input, calculating a CN parameter CNused; and providing the CN parameter CNused to a decoder. The CN parameter CNused is calculated based at least in part on the current inactive segment and a previous inactive segment.
    Type: Grant
    Filed: June 26, 2019
    Date of Patent: June 6, 2023
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Fredrik Jansson, Tomas Jansson Toftgård
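Calculating CN_used from both the current and previous inactive segments can be sketched as a smoothed noise-energy estimate. The exponential-smoothing form and the factor are illustrative assumptions, not the claimed formula.

```python
def update_cn_parameter(current_energy, previous_energy, alpha=0.8):
    """CN_used blends the current inactive segment's noise estimate
    with the previous one, smoothing the comfort noise the decoder
    synthesizes (alpha is an illustrative smoothing factor)."""
    return alpha * previous_energy + (1 - alpha) * current_energy

cn_used = update_cn_parameter(current_energy=2.0, previous_energy=1.0)
```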