Patents Examined by Jesse S Pullias
  • Patent number: 10923100
    Abstract: In some implementations, a language proficiency of a user of a client device is determined by one or more computers. The one or more computers then determines a text segment for output by a text-to-speech module based on the determined language proficiency of the user. After determining the text segment for output, the one or more computers generates audio data including a synthesized utterance of the text segment. The audio data including the synthesized utterance of the text segment is then provided to the client device for output.
    Type: Grant
    Filed: September 17, 2019
    Date of Patent: February 16, 2021
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Jakob Nicolaus Foerster
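The selection step this abstract describes (choosing a text segment to synthesize based on the user's language proficiency) can be sketched as follows. This is a hypothetical illustration, not Google's implementation; the `select_text_segment` helper, the candidate list, and the 0-to-1 proficiency scale are all assumptions.

```python
# Hypothetical sketch: pick the candidate text segment whose complexity
# best matches the user's estimated language proficiency, then hand the
# chosen text to a text-to-speech module.

def select_text_segment(candidates, proficiency):
    """Return the candidate whose complexity score is closest to the
    user's proficiency level (both on an assumed 0.0-1.0 scale)."""
    return min(candidates, key=lambda c: abs(c["complexity"] - proficiency))

candidates = [
    {"text": "Turn left.", "complexity": 0.2},
    {"text": "Bear left at the upcoming intersection.", "complexity": 0.6},
    {"text": "At the forthcoming junction, veer leftward.", "complexity": 0.9},
]

segment = select_text_segment(candidates, proficiency=0.55)
print(segment["text"])  # the mid-complexity phrasing
```

In the patented method the selected text would then be passed to TTS synthesis; only the selection logic is sketched here.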
  • Patent number: 10909975
    Abstract: Systems, devices and methods are described herein for segmentation of content, and more specifically for segmentation of content in a content management system. In one aspect, a method may include receiving content associated with speech, text, or closed captioning data. The speech, the text, or the closed captioning data may be analyzed to derive at least one of a topic, subject, or event for at least a portion of the content. The content may be divided into two or more content segments based on the analyzing. At least one of the topic, the subject, or the event may be associated with at least one of the two or more content segments based on the analyzing. At least one of the two or more content segments may then be published such that each of the two or more content segments is individually accessible.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: February 2, 2021
    Assignee: Sinclair Broadcast Group, Inc.
    Inventors: Benjamin Aaron Miller, Jason D. Justman, Lora Clark Bouchard, Michael Ellery Bouchard, Kevin James Cotlove, Mathew Keith Gitchell, Stacia Lynn Haisch, Jonathan David Kersten, Matthew Karl Marchio, Peter Arthur Pulliam, George Allen Smith, Todd Christopher Tibbetts
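The segmentation step this abstract describes (dividing content wherever the derived topic changes, so each segment is individually accessible) can be sketched as below. This is a hypothetical toy, not Sinclair's system; the pre-derived `(topic, text)` pairs stand in for the analysis of speech, text, or closed-captioning data.

```python
# Hypothetical sketch: divide a captioned content stream into
# individually publishable segments wherever the derived topic changes.

def segment_by_topic(items):
    """items: list of (topic, text) pairs in broadcast order.
    Returns one segment per contiguous run of a topic."""
    segments = []
    for topic, text in items:
        if not segments or segments[-1]["topic"] != topic:
            segments.append({"topic": topic, "texts": []})
        segments[-1]["texts"].append(text)
    return segments

caption_stream = [
    ("weather", "Sunny skies expected today."),
    ("weather", "Highs near 70 degrees."),
    ("sports", "The home team won in overtime."),
]
segments = segment_by_topic(caption_stream)
print([s["topic"] for s in segments])  # ['weather', 'sports']
```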
  • Patent number: 10902840
    Abstract: One embodiment provides a method, including: collecting, at an information handling device, at least one signal received from a living object, in response to an event associated with the living object, wherein the living object is in a communicative state with a person talking to the living object; extracting, using a processor, a set of predetermined features from the signal collected; and determining, responsive to the extracting, an intent of the living object in response to the event, based on the strength of the signal. Other embodiments are disclosed and described.
    Type: Grant
    Filed: May 30, 2018
    Date of Patent: January 26, 2021
    Assignee: SAI SOCIETY FOR ADVANCED SCIENTIFIC RESEARCH
    Inventors: Prabhakara Rao Venkata Gouripeddi, Hanumantha Rao Naidu Devireddy
  • Patent number: 10902844
    Abstract: Techniques for automated training content generation are provided. A plurality of questions are retrieved, where each of the plurality of questions is associated with an answer in a plurality of answers. Further, it is determined that a first and a second answer in the plurality of answers are equivalent. A first question corresponding to the first answer and a second question corresponding to the second answer are identified, and a first question cluster including the first question and the second question is generated. The first question cluster is associated with at least one of the first answer and the second answer. Finally, upon determining that a number of questions in the plurality of questions that are included in the first question cluster exceeds a first predefined threshold, the first question cluster is ingested into a question answering system.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Tracy Canada, Eduardo Kaufmann-Malaga, Jim Dewan
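The clustering step this abstract describes (grouping questions whose answers are equivalent, then ingesting only clusters above a size threshold) can be sketched as follows. A hypothetical illustration, not IBM's implementation: here answer equivalence is approximated by normalized-text matching, whereas the patent covers more general equivalence determination.

```python
# Hypothetical sketch: cluster questions whose answers are equivalent,
# and keep for ingestion only clusters exceeding a minimum size.

from collections import defaultdict

def build_question_clusters(qa_pairs, min_cluster_size):
    """qa_pairs: list of (question, answer) pairs. Answers are treated
    as equivalent when their normalized text matches."""
    clusters = defaultdict(list)
    for question, answer in qa_pairs:
        clusters[answer.strip().lower()].append(question)
    return [qs for qs in clusters.values() if len(qs) >= min_cluster_size]

qa_pairs = [
    ("What is the capital of France?", "Paris"),
    ("Which city is France's capital?", "paris"),
    ("What is 2 + 2?", "4"),
]
ingestable = build_question_clusters(qa_pairs, min_cluster_size=2)
print(ingestable)  # one cluster containing both Paris questions
```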
  • Patent number: 10902846
    Abstract: A spoken language understanding apparatus according to embodiments of the present disclosure may include: a slot tagging module including: a morpheme analysis unit configured to analyze morphemes with respect to an uttered sentence, a slot tagging unit configured to tag slots corresponding to a semantic entity from a plurality of input tokens generated according to the analyzed morphemes, and a slot name conversion unit configured to convert phrases corresponding to the tagged slots into delexicalized slot names based on neighboring contextual information; and a language generation module configured to generate a combined sequence by combining the delexicalized slot names based on the plurality of input tokens.
    Type: Grant
    Filed: December 11, 2018
    Date of Patent: January 26, 2021
    Assignees: Hyundai Motor Company, Kia Motors Corporation, HYUNDAI MNSOFT, INC., SNU R&DB FOUNDATION
    Inventors: Bi Ho Kim, Sung Soo Park, Sang Goo Lee, You Hyun Shin, Kang Min Yoo, Sang Hoon Lee, Myoung Ki Sung
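The delexicalization step this abstract describes (converting tagged slot phrases into slot names) can be sketched minimally as below. This is a hypothetical toy, not the patented apparatus: the real slot tagging unit works from morpheme analysis and neighboring context, while this sketch uses a simple token lexicon.

```python
# Hypothetical sketch: replace slot phrases in an utterance with
# delexicalized slot names to form a combined sequence.

def delexicalize(tokens, slot_lexicon):
    """Replace any token found in slot_lexicon with its slot name;
    leave all other tokens unchanged."""
    return [slot_lexicon.get(tok, tok) for tok in tokens]

slot_lexicon = {"Seoul": "<CITY>", "tomorrow": "<DATE>"}
tokens = ["navigate", "to", "Seoul", "tomorrow"]
print(" ".join(delexicalize(tokens, slot_lexicon)))
# navigate to <CITY> <DATE>
```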
  • Patent number: 10891966
    Abstract: An audio processing device includes a feature extraction unit and a signal generating unit. The feature extraction unit is configured to extract a feature quantity of a first audio signal for each of a plurality of periods. The signal generating unit is configured to generate a second audio signal by time axis expanding/compressing either a section of the first audio signal in which the feature quantity is steadily maintained for a period of time, or a section of the first audio signal in which a fluctuation of the feature quantity is repeated, while excluding from the time axis expanding/compressing a section of the first audio signal in which a fluctuation of the feature quantity is not similar to that of other sections of the first audio signal.
    Type: Grant
    Filed: September 19, 2018
    Date of Patent: January 12, 2021
    Assignee: YAMAHA CORPORATION
    Inventor: Akira Maezawa
  • Patent number: 10885909
    Abstract: A speech recognition method to be performed by a computer, the method including: detecting a first keyword uttered by a user from an audio signal representing voice of the user; detecting a term indicating a request of the user from sections that follow the first keyword in the audio signal; and determining a type of speech recognition processing applied to the following sections in accordance with the detected term indicating the request of the user.
    Type: Grant
    Filed: February 6, 2018
    Date of Patent: January 5, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Chikako Matsumoto, Naoshi Matsuo
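The control flow this abstract describes (detect a first keyword, inspect the following sections for a request term, choose the recognition type accordingly) can be sketched as follows. A hypothetical illustration on word lists rather than audio, not Fujitsu's method; the wake word, request terms, and mode names are invented.

```python
# Hypothetical sketch: after a wake keyword, scan the following words
# for a term indicating the user's request, and choose the type of
# speech recognition processing applied to those sections.

REQUEST_MODES = {"search": "web-query", "dictate": "free-dictation"}

def choose_recognition_mode(words, keyword="assistant"):
    """Return the recognition mode for the sections after the keyword,
    or None when the keyword is not detected."""
    if keyword not in words:
        return None
    following = words[words.index(keyword) + 1:]
    for word in following:
        if word in REQUEST_MODES:
            return REQUEST_MODES[word]
    return "command"  # default mode when no request term is found

print(choose_recognition_mode(["hey", "assistant", "search", "weather"]))
# web-query
```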
  • Patent number: 10878004
    Abstract: A keyword extraction method is provided. A candidate keyword from target text is extracted by a server. For each candidate keyword, each effective feature corresponding to the candidate keyword is obtained by the server. Calculation is performed by the server according to each effective feature corresponding to the candidate keyword and a weighting coefficient respectively corresponding to each effective feature, to obtain a probability that the candidate keyword belongs to a target keyword, and the candidate keyword is determined as the target keyword of the target text based on the probability.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: December 29, 2020
    Assignee: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
    Inventor: Xiao Bao
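The scoring step this abstract describes (combining each candidate keyword's effective features with per-feature weighting coefficients into a probability, then thresholding) can be sketched as below. This is a hypothetical illustration, not Tencent's implementation; the logistic squashing, the feature names, and the 0.8 cutoff are all assumptions.

```python
# Hypothetical sketch: score each candidate keyword with a weighted sum
# of its effective features squashed to a probability, and keep the
# candidates whose probability clears a threshold.

import math

def keyword_probability(features, weights):
    """Logistic combination of a candidate's feature values."""
    z = sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

weights = {"tf_idf": 2.0, "in_title": 1.5, "position": -0.5}
candidates = {
    "neural": {"tf_idf": 1.2, "in_title": 1.0, "position": 0.1},
    "the":    {"tf_idf": 0.0, "in_title": 0.0, "position": 0.9},
}
target_keywords = [w for w, f in candidates.items()
                   if keyword_probability(f, weights) > 0.8]
print(target_keywords)  # ['neural']
```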
  • Patent number: 10878816
    Abstract: The present disclosure involves systems, software, and computer implemented methods for personalizing interactions within a conversational interface based on an input context. One example system performs operations including receiving a conversational input via a conversational interface associated with a particular user profile. The input is analyzed via a natural language processing engine to determine an intent and a personality input type. A persona response type associated with the determined personality input type is identified, and responsive content is determined. A particular persona associated with the particular user profile is then identified based on a related set of social network activity information associated with the user profile, where the identified persona corresponds to the identified persona response type.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: December 29, 2020
    Assignee: The Toronto-Dominion Bank
    Inventors: Dean C. N. Tseretopoulos, Robert Alexander McCarter, Sarabjit Singh Walia, Vipul Kishore Lalka, Nadia Moretti, Paige Elyse Dickie, Denny Devasia Kuruvilla, Milos Dunjic, Dino Paul D'Agostino, Arun Victor Jagga, John Jong-Suk Lee, Rakesh Thomas Jethwa
  • Patent number: 10867135
    Abstract: A computer implemented method includes building a Positive Knowledge Base with directive words, designated verbs and designated objects. A Negative Knowledge Base with designated phrases and designated legal terms is built. Tasks and phrases from the Positive Knowledge Base and the Negative Knowledge Base are built. Regulations are received. Phrases from the regulations are weighted against the Positive Knowledge Base and the Negative Knowledge Base to isolate positive Maintenance Compliances. The positive Maintenance Compliances are matched to tasks to derive ranked Maintenance Compliances. The ranked Maintenance Compliances are supplied.
    Type: Grant
    Filed: June 11, 2020
    Date of Patent: December 15, 2020
    Assignee: Leonardo247, Inc.
    Inventors: Daniel Cunningham, Baron R. K. Von Wolfsheild
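The weighting step this abstract describes (scoring regulation phrases against the Positive and Negative Knowledge Bases to isolate and rank Maintenance Compliances) can be sketched as follows. A hypothetical toy, not Leonardo247's system; the word sets and the hit-count scoring are invented stand-ins for the patented knowledge bases.

```python
# Hypothetical sketch: weigh regulation phrases against a positive
# knowledge base (directive words) and a negative knowledge base
# (legal terms) to isolate and rank maintenance compliances.

POSITIVE_KB = {"inspect", "repair", "replace", "test"}
NEGATIVE_KB = {"liability", "hereinafter", "indemnify"}

def score_phrase(phrase):
    """Positive-word hits add weight; negative (legal) hits subtract."""
    words = set(phrase.lower().split())
    return len(words & POSITIVE_KB) - len(words & NEGATIVE_KB)

phrases = [
    "inspect and test smoke detectors annually",
    "the owner shall indemnify the manager hereinafter",
]
compliances = sorted((p for p in phrases if score_phrase(p) > 0),
                     key=score_phrase, reverse=True)
print(compliances)  # only the positively scored maintenance phrase
```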
  • Patent number: 10854188
    Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; selecting, based on the utterance, an agent from a plurality of agents, wherein the plurality of agents includes one or more first party agents and a plurality of third-party agents; responsive to determining that the selected agent comprises a first party agent, selecting a reserved voice from a plurality of voices; and outputting synthesized audio data using the selected voice to satisfy the utterance.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: December 1, 2020
    Assignee: GOOGLE LLC
    Inventors: Valerie Nygaard, Bogdan Caprita, Robert Stets, Saisuresh Krishnakumaran, Jason Brant Douglas
  • Patent number: 10853747
    Abstract: An example method includes receiving, by a computational assistant executing at one or more processors, a representation of an utterance spoken at a computing device; identifying, based on the utterance, a task to be performed; determining a capability level of a first party (1P) agent to perform the task; determining capability levels of respective third party (3P) agents of a plurality of 3P agents to perform the task; responsive to determining that the capability level of the 1P agent does not satisfy a threshold capability level, that a capability level of a particular 3P agent of the plurality of 3P agents is a greatest of the determined capability levels, and that the capability level of the particular 3P agent satisfies the threshold capability level, selecting the particular 3P agent to perform the task; and performing one or more actions determined by the selected agent to perform the task.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: December 1, 2020
    Assignee: GOOGLE LLC
    Inventors: Bo Wang, Lei Zhong, Barnaby John James, Saisuresh Krishnakumaran, Robert Stets, Bogdan Caprita, Valerie Nygaard
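The selection logic this abstract describes (use the first-party agent if it meets the capability threshold, otherwise the most capable third-party agent that does) can be sketched as below. This is a hypothetical illustration, not Google's implementation; the score scale, threshold value, and agent names are assumptions.

```python
# Hypothetical sketch: prefer the first-party (1P) agent when its
# capability level satisfies the threshold; otherwise fall back to the
# most capable third-party (3P) agent that satisfies it.

def select_agent(first_party_score, third_party_scores, threshold=0.7):
    """Scores are assumed capability levels in [0, 1] for a task."""
    if first_party_score >= threshold:
        return "1P"
    best_agent = max(third_party_scores, key=third_party_scores.get)
    if third_party_scores[best_agent] >= threshold:
        return best_agent
    return None  # no agent is capable enough for this task

third_party = {"music_3p": 0.9, "weather_3p": 0.4}
print(select_agent(0.5, third_party))  # music_3p
```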
  • Patent number: 10853588
    Abstract: An electronic device includes a controller and a communication device. The controller acquires data indicating first text in a first language. The controller determines whether or not a secret character string is included in the first text. Upon determining that the secret character string is included in the first text, the controller converts the secret character string into a mask character string. The mask character string is for hiding the secret character string. The controller transmits data indicating first text including the mask character string to a translation server through the communication device. The translation server translates the first text into second text in a second language. When the communication device receives data indicating the second text, the controller searches the second text for a translated mask character string. The controller converts the mask character string in the second language into a secret character string in the second language.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: December 1, 2020
    Assignee: KYOCERA Document Solutions Inc.
    Inventor: Atsushi Nishida
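The mask-translate-restore flow this abstract describes can be sketched as follows. A hypothetical illustration, not Kyocera's device: the `[MASKn]` token format is invented, and the translation-server round trip is omitted (the sketch only shows masking before transmission and unmasking after receipt).

```python
# Hypothetical sketch: hide secret character strings behind mask tokens
# before sending text to a translation server, then restore them in the
# translated result.

def mask_secrets(text, secrets):
    """Replace each secret string with a numbered mask token and
    return the masked text plus the token-to-secret mapping."""
    mapping = {}
    for i, secret in enumerate(secrets):
        token = f"[MASK{i}]"
        text = text.replace(secret, token)
        mapping[token] = secret
    return text, mapping

def unmask(translated, mapping):
    """Convert mask tokens in the translated text back to secrets."""
    for token, secret in mapping.items():
        translated = translated.replace(token, secret)
    return translated

masked, mapping = mask_secrets("Send code X7-42 to Alice", ["X7-42"])
print(masked)                    # Send code [MASK0] to Alice
print(unmask(masked, mapping))   # Send code X7-42 to Alice
```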
  • Patent number: 10832668
    Abstract: Techniques for dynamically maintaining speech processing data on a local device for frequently input commands are described. One or more devices receive speech processing data specific to one or more commands associated with system input frequencies satisfying an input frequency threshold. The device(s) then receives input audio corresponding to an utterance and generates input audio data corresponding thereto. The device(s) performs speech recognition processing on the input audio data to generate input text data using a portion of the received speech processing data. The device(s) determines a probability score associated with the input text data and determines the probability score satisfies a threshold probability score. The device(s) then performs natural language processing on the input text data to determine the command using a portion of the speech processing data. The device(s) then outputs audio data responsive to the command.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: November 10, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: David William Devries, Rajesh Mittal
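The caching criterion this abstract describes (keep local speech processing data only for commands whose system-wide input frequency satisfies a threshold) can be sketched as below. A hypothetical illustration, not Amazon's implementation; the frequency values and threshold are invented.

```python
# Hypothetical sketch: retain on-device speech processing data only for
# commands whose system input frequency meets the threshold.

def commands_to_cache(command_frequencies, threshold):
    """Return the commands frequent enough to warrant local models."""
    return {cmd for cmd, freq in command_frequencies.items()
            if freq >= threshold}

freqs = {"play music": 0.30, "set a timer": 0.25, "read my horoscope": 0.01}
print(sorted(commands_to_cache(freqs, threshold=0.1)))
# ['play music', 'set a timer']
```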
  • Patent number: 10832013
    Abstract: An information processing device acts as a neural network based on time-series data. The information processing device includes a memory and a processor. The memory stores an input variable having an ordinal number in the time-series data, and a parameter group for the neural network. The processor calculates an intermediate variable for each ordinal number based on the input variable having the ordinal number by performing transformation based on the parameter group, and calculates an output variable having the ordinal number based on the calculated intermediate variable. Upon calculating an (n+1)-th intermediate variable, the processor performs weighted sum of a calculation result of an n-th intermediate variable and a transformation result in which the n-th intermediate variable and an (n+1)-th input variable are transformed based on the parameter group, to calculate the (n+1)-th intermediate variable.
    Type: Grant
    Filed: July 29, 2018
    Date of Patent: November 10, 2020
    Assignee: Panasonic Intellectual Property Management Co., Ltd.
    Inventor: Ryo Ishida
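The recurrence this abstract describes (the (n+1)-th intermediate variable as a weighted sum of the n-th intermediate variable and a parameterized transformation of that variable with the (n+1)-th input) can be sketched in scalar form as below. A hypothetical illustration, not Panasonic's device; the tanh transformation, scalar parameters, and fixed mixing weight `alpha` are assumptions.

```python
# Hypothetical scalar sketch of the recurrence:
#   h_{n+1} = alpha * h_n + (1 - alpha) * f(h_n, x_{n+1})
# where f is a parameterized transformation of the previous
# intermediate variable and the next input variable.

import math

def step(h_prev, x_next, w_h, w_x, alpha=0.5):
    """One update of the intermediate variable."""
    transformed = math.tanh(w_h * h_prev + w_x * x_next)
    return alpha * h_prev + (1.0 - alpha) * transformed

h = 0.0
for x in [1.0, -0.5, 0.25]:  # the time-series input variables
    h = step(h, x, w_h=0.8, w_x=1.2)
print(round(h, 4))
```

The weighted sum lets part of the previous intermediate variable pass through unchanged, which is the mechanism the abstract highlights.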
  • Patent number: 10827067
    Abstract: A text-to-speech method includes outputting an instruction according to voice information entered by a user; obtaining text information according to the instruction; converting the text information to audio; and playing the audio. According to the embodiments of the present invention, news or other text content in a browser can be played by voice, freeing the user's hands and eyes. The user can thus use the browser in scenarios where manual operation is difficult, such as while driving a car, thereby improving user experience.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: November 3, 2020
    Assignee: Guangzhou UCWeb Computer Technology Co., Ltd.
    Inventor: Xiang Liu
  • Patent number: 10817568
    Abstract: Embodiments for recommending predictive modeling methods and features by a processor. One or more extracted methods and features of one or more predictive models are received according to selected criteria from both a structured database and from one or more data sources from a remote database. One or more extracted predictive model methods and features may be recommended according to the selected criteria.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: October 27, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Carlos Alzate Perez, Bei Chen, Ulrike Fischer, Yassine Lassoued
  • Patent number: 10817668
    Abstract: Methods, systems, and computer-readable storage media for receiving a source domain data set including a set of source document and source label pairs, each source label corresponding to a source domain and indicating a sentiment attributed to a respective source document, receiving a target domain data set including a set of target documents absent target labels, processing documents of the source and target domains using a feature encoder of a DAS platform, to map the documents of the source and target domains to a shared feature space through feature representations, the processing including minimizing a distance between the feature representations of the source domain, and feature representations of the target domain based on a set of loss functions, providing an ensemble prediction from the processing, and providing predicted labels based on the ensemble prediction, the predicted labels being used by the sentiment classifier to classify documents from the target domain.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: October 27, 2020
    Assignee: SAP SE
    Inventor: Ruidan He
  • Patent number: 10811000
    Abstract: Systems and methods for a speech recognition system for recognizing speech, including overlapping speech by multiple speakers. The system includes a hardware processor and a computer storage memory that stores data and has computer-executable instructions stored thereon that, when executed by the processor, implement a stored speech recognition network. An input interface receives an acoustic signal, the received acoustic signal including a mixture of speech signals by multiple speakers, wherein the multiple speakers include target speakers. An encoder network and a decoder network of the stored speech recognition network are trained to transform the received acoustic signal into a text for each target speaker, such that the encoder network outputs a set of recognition encodings and the decoder network uses the set of recognition encodings to output the text for each target speaker. An output interface transmits the text for each target speaker.
    Type: Grant
    Filed: April 13, 2018
    Date of Patent: October 20, 2020
    Assignee: Mitsubishi Electric Research Laboratories, Inc.
    Inventors: Jonathan Le Roux, Takaaki Hori, Shane Settle, Hiroshi Seki, Shinji Watanabe, John Hershey
  • Patent number: 10803884
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating an output sequence of audio data that comprises a respective audio sample at each of a plurality of time steps. One of the methods includes, for each of the time steps: providing a current sequence of audio data as input to a convolutional subnetwork, wherein the current sequence comprises the respective audio sample at each time step that precedes the time step in the output sequence, and wherein the convolutional subnetwork is configured to process the current sequence of audio data to generate an alternative representation for the time step; and providing the alternative representation for the time step as input to an output layer, wherein the output layer is configured to: process the alternative representation to generate an output that defines a score distribution over a plurality of possible audio samples for the time step.
    Type: Grant
    Filed: April 22, 2019
    Date of Patent: October 13, 2020
    Assignee: DeepMind Technologies Limited
    Inventors: Aaron Gerard Antonius van den Oord, Sander Etienne Lea Dieleman, Nal Emmerich Kalchbrenner, Karen Simonyan, Oriol Vinyals
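The autoregressive loop this abstract describes (feed the samples generated so far through a network to get a score distribution over possible next samples, then extend the sequence) can be sketched as a toy below. This is a hypothetical illustration, not DeepMind's WaveNet: the "network" is a stand-in function, and the 4-level quantization replaces real audio sample values.

```python
# Hypothetical toy sketch of autoregressive audio generation: at each
# time step, map the current sequence of samples to a score
# distribution over possible next samples and draw one sample from it.

import random

NUM_LEVELS = 4  # toy quantization: possible sample values 0..3

def score_distribution(current_sequence):
    """Stand-in for the convolutional subnetwork plus output layer;
    this toy version favors repeating the most recent sample."""
    scores = [1.0] * NUM_LEVELS
    if current_sequence:
        scores[current_sequence[-1]] += 2.0
    total = sum(scores)
    return [s / total for s in scores]

random.seed(0)
sequence = []
for _ in range(8):  # generate eight samples, one per time step
    probs = score_distribution(sequence)
    sequence.append(random.choices(range(NUM_LEVELS), weights=probs)[0])
print(sequence)
```

In the patented method the distribution comes from a trained convolutional subnetwork conditioned on all preceding samples; the sampling loop itself has this shape.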