Patents Examined by Jesse S Pullias
  • Patent number: 11868730
    Abstract: System and method for aspect-level sentiment classification. The system includes a computing device, the computing device having a processor and a storage device storing computer executable code. The computer executable code is configured to: receive a sentence having a labeled aspect term and context; convert the sentence into a dependency tree graph; calculate an attention matrix of the dependency tree graph based on one-hop attention between any two nodes of the graph; calculate multi-head attention diffusion for any two nodes from the attention matrix; obtain an updated embedding of the graph using the multi-head attention diffusion; classify the aspect term based on the updated embedding of the graph to obtain a predicted classification of the aspect term; calculate a loss function based on the predicted classification and the ground truth label of the aspect term; and adjust parameters of models in the computer executable code based on the loss function.
    Type: Grant
    Filed: May 24, 2021
    Date of Patent: January 9, 2024
    Assignees: JINGDONG DIGITS TECHNOLOGY HOLDING CO., LTD., JD FINANCE AMERICA CORPORATION
    Inventors: Xiaochen Hou, Jing Huang, Guangtao Wang, Xiaodong He, Bowen Zhou
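    The following is a minimal, illustrative numpy sketch of the attention-diffusion idea summarized in the abstract above: a one-hop attention matrix over a dependency graph is expanded into multi-hop diffused attention via a truncated geometric series, computed independently per head. Function names such as `one_hop_attention` and `attention_diffusion`, and all constants, are hypothetical and not taken from the patent.
```python
# Minimal sketch (not the patented implementation): approximate multi-hop
# attention diffusion over a dependency graph as a truncated geometric
# series of the one-hop attention matrix, one series per head.
import numpy as np

def one_hop_attention(node_feats, adjacency, w_q, w_k):
    """Masked scaled dot-product attention restricted to graph edges."""
    q = node_feats @ w_q
    k = node_feats @ w_k
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(adjacency > 0, scores, -1e9)   # keep only one-hop edges
    scores = scores - scores.max(axis=-1, keepdims=True)
    exp = np.exp(scores)
    return exp / exp.sum(axis=-1, keepdims=True)

def attention_diffusion(attn, alpha=0.15, hops=4):
    """Diffused attention: sum_k alpha * (1 - alpha)^k * attn^k."""
    diffused = alpha * np.eye(attn.shape[0])
    power = np.eye(attn.shape[0])
    for k in range(1, hops + 1):
        power = power @ attn
        diffused += alpha * (1 - alpha) ** k * power
    return diffused

rng = np.random.default_rng(0)
n, d, heads = 6, 16, 4                      # nodes in a toy dependency graph
feats = rng.normal(size=(n, d))
# chain-shaped graph with self-loops as a stand-in for a dependency tree
adj = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
head_outputs = []
for _ in range(heads):                      # independent parameters per head
    w_q, w_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    diffused = attention_diffusion(one_hop_attention(feats, adj, w_q, w_k))
    head_outputs.append(diffused @ feats)   # updated node embeddings
updated = np.concatenate(head_outputs, axis=-1)
print(updated.shape)                        # (6, 64)
```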
  • Patent number: 11869530
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
    Type: Grant
    Filed: June 13, 2022
    Date of Patent: January 9, 2024
    Assignee: DeepMind Technologies Limited
    Inventors: Aaron Gerard Antonius van den Oord, Sander Etienne Lea Dieleman, Nal Emmerich Kalchbrenner, Karen Simonyan, Oriol Vinyals
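    A toy sketch of the autoregressive sampling loop described in the abstract above: at each time step a stand-in "convolutional subnetwork" summarizes the samples generated so far into an alternative representation, and an output layer turns it into a score distribution from which the next sample is drawn. The model internals here are placeholders, not DeepMind's architecture.
```python
# Minimal sketch (toy stand-in, not DeepMind's model): an autoregressive
# sampling loop in which a "convolutional subnetwork" summarizes the samples
# generated so far and an output layer turns that summary into a score
# distribution over possible next audio samples.
import numpy as np

QUANTIZATION_LEVELS = 256          # possible audio sample values
rng = np.random.default_rng(0)

def convolutional_subnetwork(current_sequence):
    """Toy alternative representation: a causal summary of recent samples."""
    window = current_sequence[-8:]                 # causal receptive field
    rep = np.zeros(QUANTIZATION_LEVELS)
    np.add.at(rep, window, 1.0)                    # histogram of recent samples
    return rep

def output_layer(alternative_representation):
    """Toy output layer: softmax scores over the possible next samples."""
    logits = alternative_representation + rng.normal(scale=0.1,
                                                     size=QUANTIZATION_LEVELS)
    logits -= logits.max()
    probs = np.exp(logits)
    return probs / probs.sum()

def generate(num_steps=100):
    sequence = [QUANTIZATION_LEVELS // 2]          # seed sample
    for _ in range(num_steps):
        rep = convolutional_subnetwork(np.array(sequence))
        scores = output_layer(rep)
        sequence.append(int(rng.choice(QUANTIZATION_LEVELS, p=scores)))
    return sequence[1:]

print(generate()[:10])
```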
  • Patent number: 11869527
    Abstract: A method at an electronic device with one or more microphones and a speaker, the electronic device configured to be responsive to any of a plurality of affordances including a voice-based affordance, includes determining background noise of an environment associated with the electronic device, and before detecting the voice-based affordance: determining whether the background noise would interfere with recognition of a hotword in voice inputs detected by the electronic device, and if so, indicating to a user to use an affordance other than the voice-based affordance.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventor: Kenneth Mixter
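    A small sketch of the noise-gating behavior described above, under assumed thresholds: estimate the background noise level and, if the signal-to-noise margin for hotword recognition looks too small, tell the user to use a different affordance. The dB scale, constants, and function names are illustrative assumptions, not Google's implementation.
```python
# Minimal sketch (hypothetical thresholds): estimate background noise from
# microphone samples and, if it is likely to interfere with hotword
# recognition, tell the user to use another affordance (e.g. a touch control)
# instead of the voice-based one.
import math

HOTWORD_SNR_FLOOR_DB = 10.0        # assumed minimum SNR for reliable hotword spotting
EXPECTED_SPEECH_LEVEL_DB = 60.0    # assumed typical near-field speech level

def rms_db(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9)) + 90   # offset to a rough dB scale

def background_would_interfere(noise_samples):
    noise_db = rms_db(noise_samples)
    return EXPECTED_SPEECH_LEVEL_DB - noise_db < HOTWORD_SNR_FLOOR_DB

noise = [0.02, -0.03, 0.05, -0.04, 0.03, -0.02]   # pretend microphone frame
if background_would_interfere(noise):
    print("Environment is noisy: please use the touch control instead of the hotword.")
else:
    print("Voice-based affordance available.")
```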
  • Patent number: 11860915
    Abstract: Methods and systems are provided for generating automatic program recommendations based on user interactions. In some embodiments, control circuitry processes verbal data received during an interaction between a user of a user device and a person with whom the user is interacting. The control circuitry analyzes the verbal data to automatically identify a media asset referred to during the interaction by at least one of the user and the person with whom the user is interacting. The control circuitry adds the identified media asset to a list of media assets associated with the user of the user device. The list of media assets is transmitted to a second user device of the user.
    Type: Grant
    Filed: February 18, 2021
    Date of Patent: January 2, 2024
    Assignee: Rovi Guides, Inc.
    Inventors: Brian Fife, Jason Braness, Michael Papish, Thomas Steven Woods
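    A minimal sketch of the media-asset identification step described above, using simple substring matching against a toy catalog rather than the patented processing of verbal data; the catalog titles and function names are made up.
```python
# Minimal sketch (keyword matching, not Rovi's implementation): scan verbal
# data from a conversation for titles in a media catalog and add any mention
# to the user's list of media assets.
MEDIA_CATALOG = {"the last voyage", "harbor lights", "midnight desk"}   # toy catalog

def media_assets_mentioned(verbal_data):
    lowered = verbal_data.lower()
    return {title for title in MEDIA_CATALOG if title in lowered}

def update_user_list(user_list, verbal_data):
    for title in media_assets_mentioned(verbal_data):
        if title not in user_list:
            user_list.append(title)
    return user_list

conversation = "You have to watch Harbor Lights, the pilot is great."
watch_list = update_user_list([], conversation)
print(watch_list)     # this list would then be sent to the user's second device
```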
  • Patent number: 11862145
    Abstract: A method for processing multi-modal input includes receiving multiple signal inputs, each signal input having a corresponding input mode. Each signal input is processed in a series of mode-specific processing stages. Each successive mode-specific stage is associated with a successively longer scale of analysis of the signal input. A fused output is generated based on the output of a series of fused processing stages. Each successive fused processing stage is associated with a successively longer scale of analysis of the signal input. Multiple fused processing stages receive inputs from corresponding mode-specific processing stages, so that the fused output depends on the multiple signal inputs.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: January 2, 2024
    Assignee: Behavioral Signal Technologies, Inc.
    Inventors: Efthymis Georgiou, Georgios Paraskevopoulos, James Gibson, Alexandros Potamianos, Shrikanth Narayanan
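    A compact sketch of the stage layout the abstract describes, assuming a simple feed-forward stand-in for each stage: every mode has its own series of stages, and a parallel series of fused stages consumes the mode-specific outputs at each successive scale. Dimensions and the `HierarchicalFusion` class are hypothetical, not the patented architecture.
```python
# Minimal sketch: each input mode runs through its own series of stages at
# successively longer analysis scales, and a parallel series of fused stages
# combines the corresponding mode-specific outputs at each scale.
import numpy as np

rng = np.random.default_rng(0)

def make_stage(in_dim, out_dim):
    w = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ w)

class HierarchicalFusion:
    def __init__(self, modes, dim=16, num_scales=3):
        self.mode_stages = {m: [make_stage(dim, dim) for _ in range(num_scales)]
                            for m in modes}
        # each fused stage sees the previous fused output plus every mode output
        self.fused_stages = [make_stage(dim * (len(modes) + 1), dim)
                             for _ in range(num_scales)]
        self.dim = dim

    def __call__(self, inputs):
        mode_states = dict(inputs)                 # mode name -> feature vector
        fused = np.zeros(self.dim)
        for scale, fused_stage in enumerate(self.fused_stages):
            for mode, stages in self.mode_stages.items():
                mode_states[mode] = stages[scale](mode_states[mode])
            stacked = np.concatenate([fused] + [mode_states[m]
                                                for m in sorted(mode_states)])
            fused = fused_stage(stacked)           # fused output at this scale
        return fused

model = HierarchicalFusion(modes=["audio", "text"])
out = model({"audio": rng.normal(size=16), "text": rng.normal(size=16)})
print(out.shape)    # (16,)
```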
  • Patent number: 11853710
    Abstract: Natural language elements are present in both the executable and non-executable lines of code. The rich information hidden within them is often ignored in code analysis, as extracting meaningful insights from its raw form is not straightforward. A system and method for extracting natural language elements from an application source code are provided. The disclosure provides a method for performing detailed analytics on the natural language elements, classifying them using deep learning networks, and creating meaningful insights. The system understands the different types of natural language elements and comment patterns present in the application source code and segregates the authentic comments having valuable insights, version comments, and data-element-level comments from other non-value-adding comments.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: December 26, 2023
    Assignee: TATA CONSULTANCY SERVICES LIMITED
    Inventors: Yogananda Ravindranath, Tamildurai Mehalingam, Aditya Thuruvas Senthil, Reshinth Gnana Adithyan, Shrayan Banerjee, Balakrishnan Venkatanarayanan
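    A heuristic sketch of comment extraction and segregation along the lines the abstract describes: comments are pulled from source text with regular expressions and sorted into version comments, data-element comments, authentic comments, and non-value-adding comments. The regexes and keyword rules are illustrative, not the patented deep-learning classifier.
```python
# Minimal sketch (heuristic stand-in): pull comments out of source code and
# segregate a few of the comment patterns the abstract mentions.
import re

LINE_COMMENT_RE = re.compile(r"//(.*)")
BLOCK_COMMENT_RE = re.compile(r"/\*(.*?)\*/", re.DOTALL)

def extract_comments(source):
    comments = [m.group(1) for m in LINE_COMMENT_RE.finditer(source)]
    comments += [m.group(1) for m in BLOCK_COMMENT_RE.finditer(source)]
    return comments

def classify_comment(text):
    lowered = text.lower()
    if re.search(r"\bv?\d+\.\d+\b|version|changelog", lowered):
        return "version_comment"
    if re.search(r"\bfield\b|\bcolumn\b|\bvariable\b", lowered):
        return "data_element_comment"
    if len(lowered.split()) < 3:
        return "non_value_adding"
    return "authentic_comment"

sample = """
// v2.1 updated tax calculation
int tax_rate;  // column: TAX_RT, percentage applied per invoice
/* TODO */
// Applies the regional surcharge before rounding to two decimals
"""
for comment in extract_comments(sample):
    print(classify_comment(comment), "->", comment.strip())
```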
  • Patent number: 11854532
    Abstract: Disclosed is a system and method for detecting and addressing bias in training data prior to building language models based on the training data. Accordingly, the system and method detect bias in training data for Intelligent Virtual Assistant (IVA) understanding and highlight any bias found, and suggestions for reducing or eliminating it may be provided. This detection may be done for each model within the Natural Language Understanding (NLU) component. For example, the language model, as well as any sentiment or other metadata models used by the NLU, can introduce understanding bias. For each model deployed, training data is automatically analyzed for bias and corrections are suggested.
    Type: Grant
    Filed: January 3, 2022
    Date of Patent: December 26, 2023
    Assignee: Verint Americas Inc.
    Inventor: Ian Beaver
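    A minimal sketch of one way training-data bias of the kind described above could be detected automatically: count demographic terms per intent label and flag labels whose examples are heavily skewed toward one term, with a suggested correction. The term list, threshold, and data are invented for illustration, not Verint's method.
```python
# Minimal sketch (simple frequency heuristic): flag intent labels whose
# training examples over-represent a demographic term, one way "understanding
# bias" in NLU training data can show up.
from collections import Counter, defaultdict

GENDERED_TERMS = {"he", "she", "his", "her", "him"}   # illustrative term list

def detect_bias(training_data, skew_threshold=0.8):
    """training_data: list of (utterance, intent_label) pairs."""
    counts = defaultdict(Counter)
    for utterance, label in training_data:
        for token in utterance.lower().split():
            if token in GENDERED_TERMS:
                counts[label][token] += 1
    findings = []
    for label, term_counts in counts.items():
        total = sum(term_counts.values())
        term, freq = term_counts.most_common(1)[0]
        if freq / total >= skew_threshold:
            findings.append((label, term,
                             f"consider adding examples without '{term}'"))
    return findings

data = [
    ("check his balance please", "check_balance"),
    ("what is his statement total", "check_balance"),
    ("she wants to transfer money", "transfer"),
    ("he needs to transfer money to her", "transfer"),
]
for label, term, suggestion in detect_bias(data):
    print(label, term, suggestion)
```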
  • Patent number: 11854540
    Abstract: A device may receive text data, audio data, and video data associated with a user, and may process the received data, with a first model, to determine a stress level of the user. The device may process the received data, with second models, to determine depression levels of the user, and may combine the depression levels to identify an overall depression level. The device may process the received data, with a third model, to determine a continuous affect prediction, and may process the received data, with a fourth model, to determine an emotion of the user. The device may process the received data, with a fifth model, to determine a response to the user, and may utilize a sixth model to determine a context for the response. The device may utilize seventh models to generate contextual conversation data, and may perform actions based on the contextual conversation data.
    Type: Grant
    Filed: April 5, 2021
    Date of Patent: December 26, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Anutosh Maitra, Shubhashis Sengupta, Sowmya Rasipuram, Roshni Ramesh Ramnani, Junaid Hamid Bhat, Sakshi Jain, Manish Agnihotri, Dinesh Babu Jayagopi
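    An illustrative sketch of the model-combination step described above: per-modality depression levels are merged into an overall level, which is then used together with stress and emotion outputs to pick a response. All scores, weights, and responses are made up, not Accenture's models.
```python
# Minimal sketch: combine the depression levels produced by several
# per-modality models into an overall level, alongside separately produced
# stress and emotion outputs.
def overall_depression_level(model_levels, weights=None):
    """model_levels: list of levels in [0, 1], one per model (e.g. text/audio/video)."""
    weights = weights or [1.0] * len(model_levels)
    return sum(l * w for l, w in zip(model_levels, weights)) / sum(weights)

stress_level = 0.62                      # output of the first (stress) model
depression_levels = [0.40, 0.55, 0.35]   # outputs of the second (per-modality) models
emotion = "sad"                          # output of the emotion model

overall = overall_depression_level(depression_levels, weights=[0.5, 0.3, 0.2])
if overall > 0.5 or stress_level > 0.7:
    response = "I hear this has been a difficult week. Would you like to talk about it?"
else:
    response = "Thanks for sharing. How can I help you today?"
print(f"overall depression level: {overall:.2f}; emotion: {emotion}")
print(response)
```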
  • Patent number: 11853340
    Abstract: In one aspect, a system receives a request to cluster a set of log records. Responsive to receiving the request, the system identifies at least one dictionary that defines a set of tokens and corresponding token weights and generates, based at least in part on the set of tokens and corresponding token weights, a set of clusters such that each cluster in the set of clusters represents a unique combination of two or more tokens from the dictionary and groups a subset of log records mapped to the unique combination of two or more tokens. The system may then perform one or more automated actions based on at least one cluster in the set of clusters.
    Type: Grant
    Filed: February 24, 2021
    Date of Patent: December 26, 2023
    Assignee: Oracle International Corporation
    Inventors: Dhileeban Kumaresan, Sreeji Krishnan Das, Adrienne Wong
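    A small sketch of the clustering idea in the abstract above: each log record is mapped to the unique combination of sufficiently weighted dictionary tokens it contains, and records sharing a combination are grouped into one cluster. The dictionary, weights, and cutoff are hypothetical, not Oracle's implementation.
```python
# Minimal sketch (hypothetical dictionary): map each log record to the unique
# combination of weighted dictionary tokens it contains, and group records
# that share that combination into one cluster.
from collections import defaultdict

dictionary = {            # token -> weight; low-weight tokens are ignored
    "error": 1.0, "timeout": 0.9, "login": 0.8, "disk": 0.7, "retry": 0.2,
}
WEIGHT_CUTOFF = 0.5

def cluster_logs(records):
    clusters = defaultdict(list)
    for record in records:
        tokens = {t for t in record.lower().replace(":", " ").split()
                  if dictionary.get(t, 0.0) >= WEIGHT_CUTOFF}
        if len(tokens) >= 2:                 # combinations of two or more tokens
            clusters[frozenset(tokens)].append(record)
    return clusters

logs = [
    "ERROR: login timeout for user 42",
    "error login timeout retry scheduled",
    "ERROR disk full on /var",
    "disk error while writing block 7",
]
for key, members in cluster_logs(logs).items():
    print(sorted(key), "->", len(members), "records")
```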
  • Patent number: 11853902
    Abstract: Techniques and a framework are described herein for constructing and/or updating, e.g., on top of a general-purpose knowledge graph, an “event-specific provisional knowledge graph.” In various implementations, live data stream(s) may be analyzed to identify entity(s) associated with a developing event. The entity(s) may form part of a general-purpose knowledge graph that includes entity nodes and edges between the entity nodes. Based on the identified one or more entities, an event-specific provisional knowledge graph may be constructed or updated in association with the developing event. In some implementations, the event-specific provisional knowledge graph may be queried for new information about the developing event. Computing devices may be caused to render, as output, the new information.
    Type: Grant
    Filed: January 11, 2022
    Date of Patent: December 26, 2023
    Assignee: GOOGLE LLC
    Inventors: Victor Carbune, Sandro Feuz
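    A toy sketch of building an event-specific provisional knowledge graph on top of a general-purpose graph, as the abstract describes: entities mentioned in a live stream are looked up in the general graph, their edges seed the provisional graph, and the provisional graph can then be queried. The data structures and example entities are invented.
```python
# Minimal sketch (toy data structures, not Google's system): pick out entities
# mentioned in a live data stream, look them up in a general-purpose knowledge
# graph, and build a small event-specific provisional graph that can be
# queried for new information about the developing event.
general_kg = {                      # entity -> set of (relation, entity) edges
    "Oakville": {("located_in", "Harper County")},
    "Route 9": {("passes_through", "Oakville")},
    "Harper County": {("part_of", "Westland")},
}

def extract_entities(stream_items):
    mentioned = set()
    for text in stream_items:
        mentioned.update(e for e in general_kg if e.lower() in text.lower())
    return mentioned

def build_provisional_graph(event_name, stream_items):
    entities = extract_entities(stream_items)
    graph = {"event": event_name, "nodes": set(entities), "edges": set()}
    for entity in entities:
        for relation, target in general_kg.get(entity, ()):
            graph["edges"].add((entity, relation, target))
            graph["nodes"].add(target)
    return graph

def query(graph, entity):
    return [edge for edge in graph["edges"] if entity in (edge[0], edge[2])]

stream = ["Flooding reported along Route 9 near Oakville this morning."]
provisional = build_provisional_graph("Oakville flooding", stream)
print(query(provisional, "Oakville"))
```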
  • Patent number: 11847419
    Abstract: Disclosed herein are system, method, and computer program product embodiments for recognizing a human emotion in a message. An embodiment operates by receiving a message from a user. The embodiment labels each word of the message with a part of speech (POS) thereby creating a POS set. The embodiment determines an incongruity score for a combination of words in the POS set using a knowledgebase. The embodiment determines a preliminary emotion detection score for an emotion for the message based on the POS set. Finally, the embodiment calculates a final emotion detection score for the emotion for the message based on the preliminary emotion detection score and the incongruity score.
    Type: Grant
    Filed: October 1, 2021
    Date of Patent: December 19, 2023
    Assignee: VIRTUAL EMOTION RESOURCE NETWORK, LLC
    Inventors: Craig Tucker, Bryan Novak
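    A minimal sketch of combining a preliminary emotion score with an incongruity score, in the spirit of the abstract above: a toy lexicon supplies POS tags and emotion scores, incongruous word pairs (e.g. sarcasm cues) raise an incongruity score, and the final emotion score discounts the preliminary score accordingly. The lexicon, pair list, and weighting are assumptions, not the patented knowledgebase.
```python
# Minimal sketch: label words with a part of speech, score an emotion from the
# POS set, score incongruity for word combinations, and fold both into a final
# emotion detection score.
EMOTION_LEXICON = {"love": ("verb", "joy", 0.9), "great": ("adj", "joy", 0.7),
                   "traffic": ("noun", "joy", 0.0), "monday": ("noun", "joy", 0.1)}
INCONGRUOUS_PAIRS = {("love", "traffic"), ("great", "monday")}   # sarcasm cues

def pos_set(message):
    words = [w.strip(".,!?").lower() for w in message.split()]
    return [(w, EMOTION_LEXICON.get(w, (None, None, 0.0))[0]) for w in words]

def incongruity_score(tagged):
    words = [w for w, _ in tagged]
    hits = sum((a, b) in INCONGRUOUS_PAIRS for a in words for b in words)
    return min(1.0, hits * 0.5)

def preliminary_emotion_score(tagged, emotion="joy"):
    scores = [EMOTION_LEXICON[w][2] for w, _ in tagged
              if w in EMOTION_LEXICON and EMOTION_LEXICON[w][1] == emotion]
    return sum(scores) / len(scores) if scores else 0.0

message = "I just love Monday traffic!"
tagged = pos_set(message)
preliminary = preliminary_emotion_score(tagged)
final = preliminary * (1.0 - incongruity_score(tagged))   # discount for incongruity
print(f"preliminary joy: {preliminary:.2f}, final joy: {final:.2f}")
```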
  • Patent number: 11847414
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a text classification machine learning model. One of the methods includes training a model having a plurality of parameters and configured to generate a classification of a text sample comprising a plurality of words by processing a model input that includes a combined feature representation of the plurality of words in the text sample, wherein the training comprises receiving a text sample and a target classification for the text sample; generating a plurality of perturbed combined feature representations; determining, based on the plurality of perturbed combined feature representations, a region in the embedding space; and determining an update to the parameters based on an adversarial objective that encourages the model to assign the target classification for the text sample for all of the combined feature representations in the region in the embedding space.
    Type: Grant
    Filed: April 23, 2021
    Date of Patent: December 19, 2023
    Assignee: DeepMind Technologies Limited
    Inventors: Krishnamurthy Dvijotham, Anton Zhernov, Sven Adrian Gowal, Conrad Grobler, Robert Stanforth
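    A numpy sketch of the adversarial idea in the abstract above, under simplifying assumptions: the combined feature representation of a text sample is the mean of its word embeddings, perturbed representations are sampled in a small L2 region around it, and a penalty measures how often a toy linear model leaves the target class within that region. None of the names or constants come from the patent.
```python
# Minimal sketch (numpy toy, not DeepMind's training procedure): perturb the
# combined (mean) word-embedding representation of a text sample within a small
# region of embedding space and penalize perturbations that change the model's
# predicted class, which is the spirit of the adversarial objective.
import numpy as np

rng = np.random.default_rng(0)
vocab_dim, num_classes, num_perturbations, radius = 32, 3, 20, 0.05
weights = rng.normal(scale=0.1, size=(vocab_dim, num_classes))   # toy linear classifier

def combined_representation(word_embeddings):
    return word_embeddings.mean(axis=0)                 # combine the words' features

def predict(representation):
    return int(np.argmax(representation @ weights))

def adversarial_penalty(word_embeddings, target_class):
    base = combined_representation(word_embeddings)
    # sample perturbed representations at distance `radius` around base: a simple "region"
    noise = rng.normal(size=(num_perturbations, vocab_dim))
    noise *= radius / np.linalg.norm(noise, axis=1, keepdims=True)
    perturbed = base + noise
    wrong = sum(predict(p) != target_class for p in perturbed)
    return wrong / num_perturbations      # fraction of the region misclassified

sample_embeddings = rng.normal(size=(10, vocab_dim))     # 10 "words"
target = predict(combined_representation(sample_embeddings))
print("adversarial penalty:", adversarial_penalty(sample_embeddings, target))
```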
  • Patent number: 11842151
    Abstract: A method and system for compressing and decompressing a localized software resource are disclosed. The method may include receiving a software resource, the software resource being in a first language, and receiving a localized software resource for compression, where the software resource in the first language is a counterpart of the localized software resource in a second language. Upon receiving the software resources, the method may include creating a first local dictionary for the localized software resource based at least in part on one or more first-language words in the software resource and on data from a global dictionary, and compressing the localized software resource based on the local dictionary.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: December 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Anatoliy Burukhin
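    A toy sketch of the dictionary-based compression scheme outlined above: a local dictionary is built from the first-language resource, its localized counterpart, and a global dictionary, and the localized strings are then compressed by replacing dictionary words with short indices. The dictionary contents and index format are invented, not Microsoft's format.
```python
# Minimal sketch (toy scheme): build a local dictionary for a localized
# resource from its first-language counterpart and a global dictionary, then
# compress the localized strings by replacing dictionary words with indices.
GLOBAL_DICTIONARY = ["file", "open", "save", "cancel", "Datei", "öffnen",
                     "speichern", "abbrechen"]          # shared across resources

def build_local_dictionary(source_resource, localized_resource):
    words = []
    for text in list(source_resource.values()) + list(localized_resource.values()):
        for word in text.split():
            if word in GLOBAL_DICTIONARY and word not in words:
                words.append(word)
    return words

def compress(localized_resource, local_dictionary):
    compressed = {}
    for key, text in localized_resource.items():
        compressed[key] = [f"#{local_dictionary.index(w)}"
                           if w in local_dictionary else w
                           for w in text.split()]
    return compressed

english = {"menu.open": "open file", "menu.save": "save file"}
german = {"menu.open": "Datei öffnen", "menu.save": "Datei speichern"}
local_dict = build_local_dictionary(english, german)
print(local_dict)
print(compress(german, local_dict))
```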
  • Patent number: 11837236
    Abstract: A speech processing device is provided with: a contribution degree estimation means, which calculates a contribution degree representing the quality of a segment of a speech signal; and a speaker feature calculation means, which calculates a feature from the speech signal for recognizing attribute information of the speech signal, using the contribution degree as a weight for the segment of the speech signal.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: December 5, 2023
    Assignee: NEC CORPORATION
    Inventors: Hitoshi Yamamoto, Takafumi Koshinaka
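    A short numpy sketch of the weighting idea described above: a per-segment contribution degree is estimated from a simple quality proxy (segment energy) and used as the weight when pooling segment features into a single speaker feature. The feature dimensions and quality proxy are assumptions, not NEC's method.
```python
# Minimal sketch: estimate a per-segment contribution degree from a simple
# quality proxy and use it as a weight when pooling segment features into a
# single speaker feature.
import numpy as np

rng = np.random.default_rng(0)
num_segments, feat_dim = 50, 24
segment_features = rng.normal(size=(num_segments, feat_dim))
segment_audio = rng.normal(scale=np.linspace(0.2, 1.0, num_segments)[:, None],
                           size=(num_segments, 160))

def contribution_degrees(audio_segments):
    energy = (audio_segments ** 2).mean(axis=1)          # quality proxy per segment
    return energy / energy.sum()                         # normalized weights

def speaker_feature(features, weights):
    return (weights[:, None] * features).sum(axis=0)     # weighted pooling

weights = contribution_degrees(segment_audio)
embedding = speaker_feature(segment_features, weights)
print(embedding.shape)    # (24,)
```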
  • Patent number: 11830473
    Abstract: A system for synthesising expressive speech includes: an interface configured to receive an input text for conversion to speech; a memory; and at least one processor coupled to the memory. The processor is configured to generate, using an expressivity characterisation module, a plurality of expression vectors, wherein each expression vector is a representation of prosodic information in a reference audio style file, and synthesise expressive speech from the input text, using an expressive acoustic model comprising a deep convolutional neural network that is conditioned by at least one of the plurality of expression vectors.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 28, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jesus Monge Alvarez, Holly Francois, Hosang Sung, Seungdo Choi, Kihyun Choo, Sangjun Park
  • Patent number: 11804226
    Abstract: A method includes providing audio signals of an interaction between a plurality of human speakers, the speakers speaking into electronic devices to record the audio signals. The audio signals, which are optionally combined, include agent audio and subject audio. The method further includes automatically processing the audio signals to generate a speaker-separated natural language transcript of the interaction. For each question identified in the agent text, a subject response is identified, and it is determined whether the question asked by the at least one agent is an open question or a closed question. A decision engine is used to determine the veracity of the subject response, and the subject response is flagged if the indicia of the likelihood of deception in the subject response exceed a predetermined value.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: October 31, 2023
    Assignee: Lexiqal Ltd
    Inventors: James Laird, Nigel Cannings, Cornelius Patrick Glackin, Julie Ann Wall, Nikesh Bajaj
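    A keyword-heuristic sketch of two steps mentioned in the abstract above: classifying each agent question as open or closed, and flagging a subject response when a toy deception score crosses a threshold. The starter words, hedge list, and threshold are illustrative, not Lexiqal's decision engine.
```python
# Minimal sketch (keyword heuristics): classify each agent question as open or
# closed and flag the subject's response when a toy deception score passes a
# threshold.
OPEN_STARTERS = ("what", "why", "how", "describe", "tell me")
HEDGES = {"honestly", "basically", "believe", "swear", "maybe", "sort", "kind"}
DECEPTION_THRESHOLD = 0.34

def question_type(question):
    return "open" if question.lower().startswith(OPEN_STARTERS) else "closed"

def deception_score(response):
    tokens = response.lower().replace(",", " ").split()
    return sum(t in HEDGES for t in tokens) / max(len(tokens), 1)

transcript = [
    ("Did you sign the contract yourself?", "Honestly, I swear I basically did."),
    ("How did you learn about the payment?", "My manager forwarded the invoice."),
]
for question, response in transcript:
    score = deception_score(response)
    flag = " [FLAGGED]" if score > DECEPTION_THRESHOLD else ""
    print(f"{question_type(question):6s} question -> score {score:.2f}{flag}")
```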
  • Patent number: 11797776
    Abstract: A device may receive training data that includes datasets associated with natural language processing, and may mask the training data to generate masked training data. The device may train a masked event C-BERT model, with the masked training data, to generate pretrained weights and a trained masked event C-BERT model, and may train an event aware C-BERT model, with the training data and the pretrained weights, to generate a trained event aware C-BERT model. The device may receive natural language text data identifying natural language events, and may process the natural language text data, with the trained masked event C-BERT model, to determine weights. The device may process the natural language text data and the weights, with the trained event aware C-BERT model, to predict causality relationships between the natural language events, and may perform actions, based on the causality relationships.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: October 24, 2023
    Assignee: Accenture Global Solutions Limited
    Inventors: Vivek Kumar Khetan, Mayuresh Anand, Roshni Ramesh Ramnani, Shubhashis Sengupta, Andrew E. Fano
  • Patent number: 11797777
    Abstract: A natural language understanding server includes grammars specified in a modified extended Backus-Naur form (MEBNF) that includes an agglutination metasymbol not supported by conventional EBNF grammar parsers, as well as an agglutination preprocessor. The agglutination preprocessor applies one or more sets of agglutination rewrite rules to the MEBNF grammars, transforming them to EBNF grammars that can be processed by conventional EBNF grammar parsers. Permitting grammars to be specified in MEBNF form greatly simplifies the authoring and maintenance of grammars supporting inflected forms of words in the languages described by the grammars.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: October 24, 2023
    Assignee: SOUNDHOUND AI IP HOLDING, LLC
    Inventors: Bernard Mont-Reynaud, Seth Taron
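    The patent listing above does not disclose the agglutination metasymbol itself, so the sketch below assumes a hypothetical notation: `"stem" ~ ("a" | "b")` is rewritten into plain EBNF alternatives that a conventional EBNF parser can consume, which is the general shape of the preprocessing the abstract describes. The metasymbol, regex, and rule are assumptions.
```python
# Minimal sketch (entirely hypothetical notation): a preprocessor that rewrites
# an assumed agglutination metasymbol  "stem" ~ ("a" | "b")  into plain EBNF
# alternatives  ("stema" | "stemb")  so a conventional EBNF parser can consume
# the grammar.
import re

AGGLUTINATION_RE = re.compile(r'"(\w+)"\s*~\s*\(([^)]*)\)')

def expand_agglutination(match):
    stem = match.group(1)
    suffixes = [s.strip().strip('"') for s in match.group(2).split("|")]
    return "(" + " | ".join(f'"{stem}{suffix}"' for suffix in suffixes) + ")"

def preprocess(mebnf_rule):
    return AGGLUTINATION_RE.sub(expand_agglutination, mebnf_rule)

rule = 'verb = "walk" ~ ("" | "s" | "ed" | "ing") ;'
print(preprocess(rule))
# verb = ("walk" | "walks" | "walked" | "walking") ;
```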
  • Patent number: 11797263
    Abstract: Systems and methods for media playback via a media playback system include (i) capturing a voice input comprising a request for media content, (ii) receiving information derived at least from the request for media content, (iii) requesting and receiving information from at least one remote computing device associated with a first media content service and at least one remote computing device associated with a second media content service, wherein (a) the information identifies first media content available via the first media content service for playback and identifies second media content available via the second media content service for playback, and (b) the first and second media content are related to the requested media content, and (iv) after receiving at least one of the first information and the second information, (a) selecting the first media content instead of the second media content, and (b) playing back the first media content.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: October 24, 2023
    Assignee: Sonos, Inc.
    Inventors: Sherwin Liu, Paul Bates
  • Patent number: 11797781
    Abstract: A multi-layer language translator operating in conjunction with a syntax-based model, coupled with machine learning and artificial intelligence, performs language translations from a source language text to text expressed in a target language. A relevancy-based “chunking” module breaks a source text into smaller units and applies a part-of-speech tag to some or all of the units. A hierarchy-based structuring module determines grammatical structure of the source text based, at least in part, on the applied part-of-speech tags. The hierarchy-based structuring module recursively combines grammatically linked units into one or more phrases, and applies to the phrases higher-level tags. A syntax-based translating module translates the units and/or phrases into the target language, and based on syntax differences between the source and target languages, reconfigures the translated text, as needed, such that the translated text is expressed in the target language using target language syntax rules and conventions.
    Type: Grant
    Filed: August 6, 2020
    Date of Patent: October 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Fan Wang, Li Cao, Enrico James Monteleone
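    A tiny end-to-end sketch of the pipeline outlined in the abstract above: units are tagged with parts of speech, determiner/adjective/noun units are folded into noun phrases, and the translated phrase is reordered for a target language whose syntax places adjectives after nouns. The tag table, lexicon, and rewrite rule are invented for illustration, not IBM's translator.
```python
# Minimal sketch (tiny tag table and one rewrite rule): break a source sentence
# into units, tag parts of speech, combine grammatically linked units into
# phrases, and reorder adjective-noun phrases for the target language's syntax.
POS_TABLE = {"the": "DET", "red": "ADJ", "car": "NOUN", "stopped": "VERB"}
TARGET_LEXICON = {"the": "la", "red": "rouge", "car": "voiture",
                  "stopped": "s'est arrêtée"}

def chunk_and_tag(sentence):
    return [(w, POS_TABLE.get(w.lower(), "X")) for w in sentence.split()]

def combine_phrases(tagged):
    """Greedily fold DET/ADJ + NOUN sequences into noun phrases."""
    phrases, current = [], []
    for word, pos in tagged:
        current.append((word, pos))
        if pos not in ("DET", "ADJ"):
            phrases.append((current, "NP" if pos == "NOUN" else pos))
            current = []
    return phrases

def translate(sentence):
    out = []
    for units, tag in combine_phrases(chunk_and_tag(sentence)):
        words = [TARGET_LEXICON.get(w.lower(), w) for w, _ in units]
        if tag == "NP":                      # target syntax: noun before adjective
            words = [w for w, (_, p) in zip(words, units) if p != "ADJ"] + \
                    [w for w, (_, p) in zip(words, units) if p == "ADJ"]
        out.append(" ".join(words))
    return " ".join(out)

print(translate("the red car stopped"))      # la voiture rouge s'est arrêtée
```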