Patents Examined by Richard Z Zhu
  • Patent number: 11580309
    Abstract: Provided is a method including obtaining a first data object including a first set of data entries, wherein each data entry of the first set of data entries includes text content associated with a time entry. The method includes generating a first data object score using the text content and the time entries included in the first set of data entries and using scoring parameters; determining that the first data object score satisfies a data object score condition; and performing, in response to the first data object score satisfying the data object score condition, a condition-specific action associated with the data object score condition.
    Type: Grant
    Filed: March 14, 2022
    Date of Patent: February 14, 2023
    Assignee: STATES TITLE, LLC
    Inventors: Erica Kimball Mason, Timothy Nathaniel Rubin
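A minimal Python sketch of the scoring-and-dispatch flow described in the abstract. The entry fields, weights, and threshold are illustrative stand-ins for the patent's "scoring parameters" and "data object score condition", not details from the patent itself:

```python
from dataclasses import dataclass

@dataclass
class DataEntry:
    text: str     # text content associated with the time entry
    hours: float  # duration recorded by the time entry

def score_data_object(entries, word_weight=1.0, hour_weight=2.0):
    """Generate a data object score from text content and time entries."""
    return sum(
        len(e.text.split()) * word_weight + e.hours * hour_weight
        for e in entries
    )

def evaluate(entries, threshold, action):
    """If the score satisfies the score condition, perform the
    condition-specific action and return its result."""
    score = score_data_object(entries)
    if score >= threshold:
        return action(score)
    return None
```

In this sketch the condition is a simple threshold and the action is any callable; the patent leaves both as configurable parameters.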
  • Patent number: 11562736
    Abstract: A speech recognition method includes segmenting captured voice information to obtain a plurality of voice segments, and extracting voiceprint information of the voice segments; matching the voiceprint information of the voice segments with a first stored voiceprint information to determine a set of filtered voice segments having voiceprint information that successfully matches the first stored voiceprint information; combining the set of filtered voice segments to obtain combined voice information, and determining combined semantic information of the combined voice information; and using the combined semantic information as a speech recognition result when the combined semantic information satisfies a preset rule.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: January 24, 2023
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Qiusheng Wan
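The segment-filter-combine pipeline in this abstract can be sketched as follows. Cosine similarity over fixed-length vectors is an illustrative stand-in for voiceprint matching; the segment format and threshold are assumptions:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recognize(segments, stored_voiceprint, threshold=0.9):
    """segments: (voiceprint_vector, transcript) pairs obtained by
    segmenting the captured voice information."""
    # keep only segments whose voiceprint matches the stored voiceprint
    filtered = [text for vp, text in segments
                if cosine_similarity(vp, stored_voiceprint) >= threshold]
    # combine the filtered segments into combined voice information
    return " ".join(filtered)
```

Segments from other speakers fail the voiceprint match and are dropped before the combined semantic information is produced.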
  • Patent number: 11557289
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for language models using domain-specific model components. In some implementations, context data for an utterance is obtained. A domain-specific model component is selected from among multiple domain-specific model components of a language model based on the non-linguistic context of the utterance. A score for a candidate transcription for the utterance is generated using the selected domain-specific model component and a baseline model component of the language model that is domain-independent. A transcription for the utterance is determined using the score, and the transcription is provided as output of an automated speech recognition system.
    Type: Grant
    Filed: October 1, 2020
    Date of Patent: January 17, 2023
    Assignee: Google LLC
    Inventors: Fadi Biadsy, Diamantino Antonio Caseiro
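One common way to combine a baseline component with a domain-specific one, as the abstract describes, is log-probability interpolation. A minimal sketch, assuming dictionary-backed model components and an equal mixing weight (both illustrative):

```python
def transcription_score(candidate, context, baseline_lm, domain_lms, mix=0.5):
    """Score a candidate transcription by interpolating a domain-independent
    baseline component with a component selected by non-linguistic context.

    baseline_lm / domain_lms map candidate strings to log-probabilities;
    context names the domain (e.g. the app in the foreground).
    """
    domain_lm = domain_lms[context]  # select component by non-linguistic context
    return (1 - mix) * baseline_lm[candidate] + mix * domain_lm[candidate]
```

A candidate that the domain component favors is pulled up relative to its baseline score, which is the effect the patent relies on.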
  • Patent number: 11551042
    Abstract: Sentiment classification can be implemented by an entity-level multimodal sentiment classification neural network. The neural network can include left, right, and target entity subnetworks. The neural network can further include an image network that generates representation data that is combined and weighted with data output by the left, right, and target entity subnetworks to output a sentiment classification for an entity included in a network post.
    Type: Grant
    Filed: August 27, 2019
    Date of Patent: January 10, 2023
    Assignee: Snap Inc.
    Inventors: Jianfei Yu, Luis Carlos Dos Santos Marujo, Venkata Satya Pradeep Karuturi, Leonardo Ribas Machado das Neves, Ning Xu, William Brendel
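The combination step of the entity-level classifier can be sketched as a weighted sum of per-class score vectors from the three text subnetworks and the image network, followed by an argmax. The three-class label set and uniform weights are assumptions for illustration:

```python
def classify_sentiment(left, target, right, image,
                       weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine per-class scores from the left/right/target entity
    subnetworks with the image representation, weighted, then argmax."""
    labels = ("negative", "neutral", "positive")
    combined = [
        sum(w * vec[i] for w, vec in zip(weights, (left, target, right, image)))
        for i in range(len(labels))
    ]
    return labels[max(range(len(labels)), key=combined.__getitem__)]
```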
  • Patent number: 11551690
    Abstract: In one aspect, a playback device is configured to identify in an audio stream, via a second wake-word engine, a false wake word for a first wake-word engine that is configured to receive as input sound data based on sound detected by a microphone. The first and second wake-word engines are configured according to different sensitivity levels for false positives of a particular wake word. Based on identifying the false wake word, the playback device is configured to (i) deactivate the first wake-word engine and (ii) cause at least one network microphone device to deactivate a wake-word engine for a particular amount of time. While the first wake-word engine is deactivated, the playback device is configured to cause at least one speaker to output audio based on the audio stream. After a predetermined amount of time has elapsed, the playback device is configured to reactivate the first wake-word engine.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: January 10, 2023
    Assignee: Sonos, Inc.
    Inventors: Connor Kristopher Smith, Charles Conor Sleith, Kurt Thomas Soto
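The deactivate-then-reactivate timing logic above can be sketched with a small guard object. Class and method names are hypothetical; timestamps are passed in explicitly so the behavior is easy to test:

```python
class WakeWordGuard:
    """Tracks whether the first (primary) wake-word engine is active.

    A second, more sensitive engine scans the audio stream the device is
    about to play; when it flags a false wake word, the primary engine is
    deactivated for a fixed interval and reactivates afterwards."""

    def __init__(self, deactivation_secs):
        self.deactivation_secs = deactivation_secs
        self._reactivate_at = 0.0

    def on_false_wake_word(self, now):
        # deactivate the primary engine for the configured interval
        self._reactivate_at = now + self.deactivation_secs

    def primary_active(self, now):
        return now >= self._reactivate_at
```

While `primary_active` is false the device plays the stream without risk of triggering on its own output.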
  • Patent number: 11551686
    Abstract: A home appliance is provided. The home appliance includes a sensor, a microphone, a speaker, and a processor. The processor is configured to, based on the occurrence of either a first event in which a user action is detected through the sensor or a second event in which a trigger command for initiating a voice recognition mode is input through the microphone, operate in the voice recognition mode and control the speaker to output an audio signal corresponding to the event that occurred, the audio signal being set differently for each of the first event and the second event.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: January 10, 2023
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Changkyu Ahn, Minkyong Kim, Miyoung Yoo, Hyoungjin Lee
  • Patent number: 11521642
    Abstract: Methods and systems include sending recording data of a call to a first server and a second server, wherein the recording data includes a first voice of a first participant of the call and a second voice of a second participant of the call; receiving, from the first server, a first emotion score representing a degree of a first emotion associated with the first voice, and a second emotion score representing a degree of a second emotion associated with the first voice; receiving, from the second server, a first sentiment score, a second sentiment score, and a third sentiment score; determining a quality score and classification data for the recording data based on the five received scores; and outputting the quality score and the classification data for visualization of the recording data.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 6, 2022
    Assignee: Fidelity Information Services, LLC
    Inventor: Rajiv Ramanjani
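The determination step, combining the two emotion scores from one server with the three sentiment scores from the other into a quality score and a classification, might look like the following sketch. The averaging, weights, threshold, and labels are illustrative assumptions, not details from the patent:

```python
def quality_and_class(emotion_scores, sentiment_scores,
                      emotion_weight=0.4, sentiment_weight=0.6,
                      threshold=0.5):
    """Combine emotion scores (from the first server) and sentiment
    scores (from the second server) into a quality score, then classify."""
    emotion = sum(emotion_scores) / len(emotion_scores)
    sentiment = sum(sentiment_scores) / len(sentiment_scores)
    quality = emotion_weight * emotion + sentiment_weight * sentiment
    label = "acceptable" if quality >= threshold else "needs_review"
    return quality, label
```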
  • Patent number: 11514895
    Abstract: A method and system are provided. The method includes separating a predicate that specifies a set of events into a temporal part and a non-temporal part. The method further includes comparing the temporal part of the predicate against a predicate of a known window type. The method also includes determining whether the temporal part of the predicate matches the predicate of the known window type. The method additionally includes replacing (i) the non-temporal part of the predicate by a filter, and (ii) the temporal part of the predicate by an instance of the known window type, responsive to the temporal part of the predicate matching the predicate of the known window type. The instance is parameterized with substitutions used to match the temporal part of the predicate to the predicate of the known window type.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: November 29, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Martin J. Hirzel, Christopher Hyland, Nicolas C. Ke
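The split-match-replace rewrite the abstract describes can be sketched over string clauses. The clause syntax, the single known window type (a "last N seconds" sliding window), and the output tuples are all illustrative assumptions:

```python
import re

# known window type: "last N seconds" sliding window, as a regex pattern
KNOWN_WINDOW = re.compile(r"^time > now\(\) - (\d+)s$")

def split_predicate(clauses):
    """Separate a conjunctive predicate into temporal and non-temporal parts."""
    temporal = [c for c in clauses if c.startswith("time")]
    non_temporal = [c for c in clauses if not c.startswith("time")]
    return temporal, non_temporal

def rewrite(clauses):
    """If the temporal part matches the known window type, replace it by a
    parameterized window instance and the non-temporal part by filters."""
    temporal, non_temporal = split_predicate(clauses)
    if len(temporal) == 1:
        m = KNOWN_WINDOW.match(temporal[0])
        if m:
            window = ("SlidingWindow", int(m.group(1)))  # parameterized instance
            filters = [("Filter", c) for c in non_temporal]
            return window, filters
    return None  # temporal part does not match the known window type
```

The captured group plays the role of the substitution that parameterizes the window instance.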
  • Patent number: 11514903
    Abstract: The present technology relates to an information processing device and an information processing method that make it possible to generate interaction data at lower cost. The information processing device includes a processor that generates, on the basis of interaction history information, a coupling context to be coupled to a context of interest among a plurality of contexts. The present technology is applicable, for example, as a server-side service of a voice interaction system.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: November 29, 2022
    Assignee: SONY CORPORATION
    Inventor: Junki Ohmura
  • Patent number: 11514897
    Abstract: A method for intent mining that includes: receiving conversation data; using an intent mining algorithm to automatically mine intents from the conversation data; and uploading the mined intents into the conversational bot. The intent mining algorithm may include: analyzing utterances of the conversation data to identify intent-bearing utterances; analyzing the identified intent-bearing utterances to identify candidate intents; selecting salient intents from the candidate intents; grouping the selected salient intents into salient intent groups in accordance with a degree of semantic similarity; for each of the salient intent groups, selecting one of the salient intents as the intent label and designating the others as the intent alternatives; and associating the intent-bearing utterances with the salient intent groups via determining a degree of semantic similarity between the candidate intents present in the intent-bearing utterance and the intent alternatives within each group.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: November 29, 2022
    Inventors: Basil George, Ramasubramanian Sundaram
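The grouping step of the mining algorithm, clustering salient intents by semantic similarity and picking a label per group, can be sketched greedily. Token-level Jaccard similarity is a crude stand-in for the semantic similarity the patent leaves unspecified:

```python
def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def group_intents(candidate_intents, threshold=0.5):
    """Greedily group candidate intents by similarity. The first intent in
    each group becomes the intent label; the rest are its alternatives."""
    groups = []  # list of (label, alternatives)
    for intent in candidate_intents:
        for group in groups:
            if jaccard(intent, group[0]) >= threshold:
                group[1].append(intent)
                break
        else:
            groups.append((intent, []))
    return groups
```

Associating an intent-bearing utterance with a group would then reuse the same similarity measure against each group's label and alternatives.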
  • Patent number: 11508372
    Abstract: Techniques for performing runtime ranking of skill components are described. A skill developer may generate a rule indicating a skill component is to be invoked at runtime when a natural language input corresponds to a specific context. At runtime, a virtual assistant system may implement a machine learned model to generate an initial ranking of skill components. Thereafter, the virtual assistant system may use skill component-specific rules to adjust the initial ranking, and this second ranking is used to determine which skill component to invoke to respond to the natural language input. Over time, if a rule results in beneficial user experiences, the virtual assistant system may incorporate the rule into the machine learned model.
    Type: Grant
    Filed: June 18, 2020
    Date of Patent: November 22, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Michael Schwartz, Joe Pemberton, Steven Mack Saunders, Archit Jain, Alexander Go
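The two-stage ranking (ML initial scores, then rule-based adjustment) can be sketched as below. The score/rule representations and the additive boost are illustrative choices:

```python
def rank_skills(model_scores, rules, context):
    """Rank skill components: start from machine-learned initial scores,
    then apply skill-specific rules that boost a skill when the input
    matches the context its developer registered."""
    adjusted = dict(model_scores)  # initial ranking from the ML model
    for skill, (rule_context, boost) in rules.items():
        if rule_context == context:
            adjusted[skill] = adjusted.get(skill, 0.0) + boost
    # the second ranking determines which skill component to invoke
    return sorted(adjusted, key=adjusted.get, reverse=True)
```

The first element of the returned list is the skill component that would be invoked.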
  • Patent number: 11490229
    Abstract: Various embodiments generally relate to systems and methods for creation of voice memos while an electronic device is in a driving mode. In some embodiments, a triggering event can be used to indicate that the electronic device is within a car or about to be within a car and that text communications should be translated (e.g., via an application or a conversion platform) into a voice memo that can be played via a speaker. These triggering events can include a manual selection or an automatic selection based on a set of transition criteria (e.g., electronic device moving above a certain speed, following a roadway, approaching a location in a map of a marked car, etc.).
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: November 1, 2022
    Assignee: T-Mobile USA, Inc.
    Inventor: Niraj Nayak
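The triggering decision, manual selection or automatic transition criteria, reduces to a simple predicate. The speed threshold and the particular criteria checked are illustrative; the abstract also mentions map-based signals that are omitted here:

```python
def in_driving_mode(manual_selection, speed_mps, on_roadway,
                    speed_threshold_mps=8.0):
    """Decide whether incoming text communications should be converted
    to voice memos: either the user selected driving mode manually, or
    the device is moving above a threshold speed while following a
    roadway."""
    return manual_selection or (speed_mps > speed_threshold_mps and on_roadway)
```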
  • Patent number: 11482222
    Abstract: A method and apparatus for determining a unique wake word for devices within an incident. One system includes an electronic computing device comprising a transceiver and an electronic processor communicatively coupled to the transceiver. The electronic processor is configured to receive a notification indicative of an occurrence of an incident and one or more communication devices present at the incident, determine contextual information associated with the incident and the one or more communication devices, and identify one or more wake words based on the contextual information. The electronic processor is further configured to determine a phonetic distance for each pair of wake words included in the one or more wake words, and select a unique wake word from the one or more wake words for each communication device of the one or more communication devices based on the determined phonetic distance.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: October 25, 2022
    Assignee: MOTOROLA SOLUTIONS, INC.
    Inventors: Sean Regan, Maryam Eneim, Melanie King, Manoj Prasad Nagendra Prasad
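The selection step, choosing wake words that are mutually far apart so devices at the same incident do not trigger each other, can be sketched with edit distance as a crude stand-in for the phonetic distance the patent computes:

```python
def edit_distance(a, b):
    """Levenshtein distance, used here as a proxy for phonetic distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def assign_wake_words(devices, candidate_words, min_distance=3):
    """Assign each device a wake word at least min_distance away from
    every word already assigned at the incident."""
    assigned = {}
    for device in devices:
        for word in candidate_words:
            if all(edit_distance(word, w) >= min_distance
                   for w in assigned.values()):
                assigned[device] = word
                break
    return assigned
```

A real implementation would derive candidates from the incident's contextual information and compare phoneme sequences rather than spellings.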
  • Patent number: 11482227
    Abstract: A server controlling an external device is provided. The server includes a communicator, a processor, and a memory which stores at least one natural language understanding (NLU) engine for generating a command corresponding to a user's utterance. The server receives, from a pairing device paired to the external device, the user's utterance controlling the external device and information about at least one external device registered with the pairing device, via the communicator, determines an NLU engine corresponding to the external device, from among the at least one NLU engine, based on the user's utterance controlling the external device and the information about the at least one external device, and generates the command controlling the external device based on the user's utterance, by using the determined NLU engine.
    Type: Grant
    Filed: April 9, 2020
    Date of Patent: October 25, 2022
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Dong-hyun Choi
  • Patent number: 11468886
    Abstract: According to an embodiment of the present invention, an artificial intelligence (AI) apparatus for performing voice control includes a memory configured to store a voice extraction filter for extracting a voice of a registered user, and a processor configured to receive identification information of a user and a first voice signal of the user, to register the user using the received identification information, to extract, when a second voice signal is received, a voice of the registered user from the second voice signal by using the voice extraction filter corresponding to the registered user, and to perform a control operation corresponding to intention information of the extracted voice of the registered user. The voice extraction filter is generated by using the received first voice signal of the registered user.
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: October 11, 2022
    Assignee: LG ELECTRONICS INC.
    Inventor: Jaehong Kim
  • Patent number: 11455467
    Abstract: A method, computer program, and computer system are provided for extracting relations between one or more entities in a sentence. A forest corresponding to probabilities of relations between each pair of the entities is generated, and the generated forest is encoded with relation information for each of the pairs of entities. One or more features are extracted based on the generated forest and the encoded relation information, and a relation is predicted between the entities of each pair of entities based on the extracted features.
    Type: Grant
    Filed: January 30, 2020
    Date of Patent: September 27, 2022
    Assignee: TENCENT AMERICA LLC
    Inventor: Linfeng Song
  • Patent number: 11443742
    Abstract: In a method for determining a dialog state, first dialog information is obtained. The first dialog information is dialog information inputted during a dialog process. Based on the first dialog information, target scenario information corresponding to the first dialog information is determined. The target scenario information is used to indicate a dialog scenario of the first dialog information. Based on the first dialog information and the target scenario information, a first dialog state corresponding to the first dialog information is obtained. The first dialog state is used to represent a response mode for responding to the first dialog information.
    Type: Grant
    Filed: July 6, 2020
    Date of Patent: September 13, 2022
    Assignee: Tencent Technology (Shenzhen) Company Limited
    Inventor: Xiaodong Lu
  • Patent number: 11436412
    Abstract: An apparatus includes at least one processing device configured to obtain event metadata for events published by event sources to an event platform, the event metadata comprising static event tags for respective ones of the events. The at least one processing device is also configured to generate dynamic event tags having an association with event types based at least in part on analysis of real-time event traffic comprising a subset of the events published by the event sources to the event platform over a designated time period. The at least one processing device is further configured to train a machine learning model utilizing the static event tags and the association of the dynamic event tags with the event types, receive a query comprising event parameters, and provide a response to the query by utilizing the trained machine learning model to match events with the event parameters in the query.
    Type: Grant
    Filed: February 18, 2020
    Date of Patent: September 6, 2022
    Assignee: EMC IP Holding Company LLC
    Inventor: Mohammad Rafey
  • Patent number: 11429428
    Abstract: Systems and methods of invoking functions of agents via digital assistant applications are provided. Each action-inventory can have an address template for an action by an agent. The address template can include a portion having an input variable used to execute the action. A data processing system can parse an input audio signal from a client device to identify a request and a parameter to be executed by the agent. The data processing system can select an action-inventory for the action corresponding to the request. The data processing system can generate, using the address template, an address. The address can include a substring having the parameter used to control execution of the action. The data processing system can direct an action data structure including the address to the agent to cause the agent to execute the action and to provide output for presentation.
    Type: Grant
    Filed: May 6, 2019
    Date of Patent: August 30, 2022
    Assignee: GOOGLE LLC
    Inventors: Jason Douglas, Carey Radebaugh, Ilya Firman, Ulas Kirazci, Luv Kothari
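The address-generation step, filling an action-inventory's address template with parameters parsed from the input audio, can be sketched with standard string templating. The `$name` placeholder syntax and the agent URL are illustrative choices, not the patent's format:

```python
from string import Template

def build_action_address(address_template, parameters):
    """Fill an action-inventory's address template with the request
    parameters parsed from the input audio signal."""
    return Template(address_template).substitute(parameters)
```

The resulting address would be placed in the action data structure directed to the agent, which executes the action it encodes.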
  • Patent number: 11425494
    Abstract: A device capable of motion includes a beamformer for determining audio data corresponding to one or more directions. The beamformer includes a target beamformer that boosts audio from a target direction and a null beamformer that suppresses audio from that direction. When the device outputs sound while moving, the target and null beamformers capture and compensate for Doppler effects in output audio that reflects from nearby surfaces back to the device.
    Type: Grant
    Filed: June 12, 2019
    Date of Patent: August 23, 2022
    Assignee: Amazon Technologies, Inc.
    Inventors: Navin Chatlani, Amit Singh Chhetri