Patents Examined by Abdelali Serrou
  • Patent number: 10664665
    Abstract: Computer-implemented techniques can include receiving a selected word in a source language, obtaining one or more parts of speech for the selected word, and for each of the one or more parts of speech, obtaining candidate translations of the selected word to a different target language, each candidate translation corresponding to a particular semantic meaning of the selected word. The techniques can include, for each semantic meaning of the selected word: obtaining an image corresponding to the semantic meaning of the selected word, and compiling translation information including (i) the semantic meaning, (ii) a corresponding part of speech, (iii) the image, and (iv) at least one corresponding candidate translation. The techniques can also include outputting the translation information.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: May 26, 2020
    Assignee: Google LLC
    Inventors: Alexander Jay Cuthbert, Barak Turovsky
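The abstract above describes a concrete data flow (word → parts of speech → senses → candidate translations, plus an image per sense, compiled into one record). Below is a minimal sketch of that flow; `pos_lookup`, `senses_lookup`, and `image_lookup` are hypothetical caller-supplied helpers, not names taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TranslationInfo:
    semantic_meaning: str
    part_of_speech: str
    image_url: str
    candidate_translations: List[str] = field(default_factory=list)

def compile_translation_info(word, pos_lookup, senses_lookup, image_lookup):
    """Build one TranslationInfo record per semantic meaning of `word`.

    pos_lookup(word)         -> list of parts of speech
    senses_lookup(word, pos) -> {semantic_meaning: [candidate translations]}
    image_lookup(meaning)    -> image URL/handle for that meaning
    (All three are assumed, caller-supplied callables.)
    """
    results = []
    for pos in pos_lookup(word):
        for meaning, translations in senses_lookup(word, pos).items():
            results.append(TranslationInfo(
                semantic_meaning=meaning,
                part_of_speech=pos,
                image_url=image_lookup(meaning),
                candidate_translations=translations,
            ))
    return results

# Tiny stand-in example for the English word "bank" translated to Spanish.
if __name__ == "__main__":
    info = compile_translation_info(
        "bank",
        pos_lookup=lambda w: ["noun"],
        senses_lookup=lambda w, p: {
            "financial institution": ["banco"],
            "edge of a river": ["orilla", "ribera"],
        },
        image_lookup=lambda m: f"https://example.invalid/images/{m.replace(' ', '_')}.png",
    )
    for record in info:
        print(record)
```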
  • Patent number: 10656830
    Abstract: The proposed invention relates to the field of inputting simplified Chinese characters, as well as characters of other writing systems based on the Chinese character writing system. The invention offers increased efficiency and speed when inputting such characters.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: May 19, 2020
    Inventor: Boris Mikhailovich Putko
  • Patent number: 10635751
    Abstract: Examples of the present disclosure can comprise systems and methods for creating and modifying named entity recognition models. The system can use two or more existing named entity recognition models to output responses to natural language queries for which the models have not yet been trained. When the outputs from the two or more models match, the query and the resulting output can be stored as training data for a new named entity recognition model. If the outputs from the two or more models do not match, the query and the outputs can be stored in an exceptions file for additional review. In some embodiments, the system can comprise one or more processors and a display for providing a user interface (UI).
    Type: Grant
    Filed: May 23, 2019
    Date of Patent: April 28, 2020
    Assignee: CAPITAL ONE SERVICES, LLC
    Inventors: Aditya Relangi, Michael Langford
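A minimal sketch of the agreement check described above: two existing NER models are queried, matching outputs become training data, and disagreements go to an exceptions list. The two toy models and the `(entity_text, entity_label)` output format are assumptions for illustration.

```python
def build_training_data(queries, model_a, model_b):
    """Route each query to two existing NER models and compare their outputs.

    `model_a` and `model_b` are assumed callables mapping a natural-language
    query to a list of (entity_text, entity_label) tuples.  Matching outputs
    become training data for a new model; disagreements are stored for
    additional review, as the abstract above describes.
    """
    training_data = []
    exceptions = []
    for query in queries:
        entities_a = model_a(query)
        entities_b = model_b(query)
        if entities_a == entities_b:
            training_data.append({"query": query, "entities": entities_a})
        else:
            exceptions.append({"query": query,
                               "model_a": entities_a,
                               "model_b": entities_b})
    return training_data, exceptions

# Toy stand-in models: tag capitalized non-initial words as CARD entities.
if __name__ == "__main__":
    model_a = lambda q: [(w, "CARD") for w in q.split()[1:] if w.istitle()]
    model_b = lambda q: [(w, "CARD") for w in q.split()[1:] if w[0].isupper()]
    train, exc = build_training_data(
        ["Pay my Visa bill", "Show my VISA card"], model_a, model_b)
    print("training data:", train)
    print("exceptions:", exc)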
  • Patent number: 10629210
    Abstract: A voice recognition apparatus may include a receiver configured to receive a voice command; a provider configured to output a guidance message; and a controller configured to control the provider in response to the voice command, analyze a listening pattern of the guidance message transmitted by the receiver, and adjust an output of the guidance message based on the listening pattern.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: April 21, 2020
    Assignees: Hyundai Motor Company, Kia Motors Corporation
    Inventor: BiHo Kim
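One way to read the abstract above is that the controller watches how much of each guidance prompt the user actually listens to and shortens the prompt when users keep cutting it off. The sketch below implements that reading; the 0.5 threshold, 5-event window, and short/long prompt pair are illustrative assumptions, not values from the patent.

```python
from collections import deque

class GuidanceAdjuster:
    """Track the fraction of each guidance prompt the user listens to and
    switch to a shorter prompt once the recent listening ratio drops."""

    def __init__(self, window=5, threshold=0.5):
        self.history = deque(maxlen=window)   # fraction of prompt heard, 0..1
        self.threshold = threshold

    def record(self, heard_seconds, prompt_seconds):
        self.history.append(min(heard_seconds / prompt_seconds, 1.0))

    def next_prompt(self, full_prompt, short_prompt):
        if self.history and sum(self.history) / len(self.history) < self.threshold:
            return short_prompt      # user keeps interrupting the prompt
        return full_prompt

if __name__ == "__main__":
    adj = GuidanceAdjuster()
    for heard in (1.0, 1.2, 0.8):    # user interrupts a 4-second prompt
        adj.record(heard, prompt_seconds=4.0)
    print(adj.next_prompt(
        "Please say the name of the destination you would like to navigate to.",
        "Destination?"))
```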
  • Patent number: 10621980
    Abstract: Performing speech recognition in a multi-device system includes receiving a first audio signal that is generated by a first microphone in response to a verbal utterance, and a second audio signal that is generated by a second microphone in response to the verbal utterance; dividing the first audio signal into a first sequence of temporal segments; dividing the second audio signal into a second sequence of temporal segments; comparing a sound energy level associated with a first temporal segment of the first sequence to a sound energy level associated with a first temporal segment of the second sequence; based on the comparing, selecting, as a first temporal segment of a speech recognition audio signal, one of the first temporal segment of the first sequence and the first temporal segment of the second sequence; and performing speech recognition on the speech recognition audio signal.
    Type: Grant
    Filed: March 21, 2017
    Date of Patent: April 14, 2020
    Assignee: Harman International Industries, Inc.
    Inventor: Seon Man Kim
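The abstract above is essentially a per-segment selection loop over two time-aligned microphone signals. A minimal sketch follows; plain RMS energy and Python lists of samples stand in for whatever sound-energy measure and audio representation the patented system actually uses.

```python
import math

def rms(segment):
    """Root-mean-square energy of a list of samples."""
    return math.sqrt(sum(s * s for s in segment) / len(segment)) if segment else 0.0

def select_segments(signal_a, signal_b, segment_len):
    """Split two time-aligned microphone signals into equal temporal segments
    and, at each position, keep the segment with the higher energy.

    Returns the composite signal on which speech recognition would be run.
    """
    composite = []
    for start in range(0, min(len(signal_a), len(signal_b)), segment_len):
        seg_a = signal_a[start:start + segment_len]
        seg_b = signal_b[start:start + segment_len]
        composite.extend(seg_a if rms(seg_a) >= rms(seg_b) else seg_b)
    return composite

if __name__ == "__main__":
    near_mic = [0.9, 0.8, 0.1, 0.1]   # louder in the first segment
    far_mic  = [0.2, 0.2, 0.7, 0.6]   # louder in the second segment
    print(select_segments(near_mic, far_mic, segment_len=2))  # [0.9, 0.8, 0.7, 0.6]
```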
  • Patent number: 10613826
    Abstract: Operability of head-mounted display systems is enhanced by incorporating the following: a microphone which receives an utterance input by a person and outputs voice information; a character string generation unit which generates an uttered character string by converting the voice information into a character string; a specific utterance information storage unit which stores specific utterance information that associates at least one program to be started or stopped and/or at least one operating mode to be started or stopped, with specific utterances for starting or stopping each of the programs and/or operating modes; a specific utterance extraction unit which extracts a specific utterance included in the uttered character string with reference to the specific utterance information, and generates an extracted specific utterance signal indicating the extraction result; and a control unit which starts or stops a program or an operating mode with reference to the extracted specific utterance signal.
    Type: Grant
    Filed: December 25, 2014
    Date of Patent: April 7, 2020
    Assignee: MAXELL, LTD.
    Inventor: Seiji Imagawa
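The stored "specific utterance information" described above is, at its core, a mapping from trigger phrases to start/stop actions on programs or operating modes. A minimal sketch of that lookup-and-control loop follows; the phrase table and target names are invented examples.

```python
# Maps specific utterances to (action, target) pairs; the entries are
# invented examples, not the patent's stored specific utterance information.
SPECIFIC_UTTERANCES = {
    "start camera": ("start", "camera_app"),
    "stop camera": ("stop", "camera_app"),
    "start navigation mode": ("start", "navigation_mode"),
    "stop navigation mode": ("stop", "navigation_mode"),
}

def extract_specific_utterance(uttered_text):
    """Return the (action, target) whose trigger phrase occurs in the
    recognized character string, or None if no specific utterance is found."""
    lowered = uttered_text.lower()
    for phrase, command in SPECIFIC_UTTERANCES.items():
        if phrase in lowered:
            return command
    return None

def control(uttered_text, running):
    """Start or stop a program/operating mode based on the extracted utterance.
    `running` is the set of currently active targets."""
    command = extract_specific_utterance(uttered_text)
    if command is None:
        return running
    action, target = command
    (running.add if action == "start" else running.discard)(target)
    return running

if __name__ == "__main__":
    active = set()
    active = control("please start camera now", active)
    active = control("ok stop camera", active)
    print(active)   # set()
```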
  • Patent number: 10614813
    Abstract: Caller identity verification can be improved by employing a multi-step verification that leverages speech features obtained from multiple interactions with a caller. An enrollment is performed in which customer speech features and customer information are collected. When a caller calls into the call center, an attempt is made to verify the caller's identity by requesting the caller to speak a predefined phrase, extracting speech features from the spoken phrase, and comparing those features to the enrollment features. If the purported identity of the caller can be matched with one of the customers based on the comparison, the identity of the caller is verified. If the match cannot be made with a high enough degree of confidence, the caller is asked to speak any phrase that is not predefined. Features are extracted from the caller's speech, combined with features previously extracted from the predefined speech, and compared to the enrollment features.
    Type: Grant
    Filed: November 3, 2017
    Date of Patent: April 7, 2020
    Assignee: Intellisist, Inc.
    Inventors: Gilad Odinak, Yishay Carmiel
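A minimal sketch of the two-step comparison described above, under simplifying assumptions: speech features are plain vectors, similarity is cosine similarity, features are "combined" by averaging, and the 0.85/0.65 thresholds are invented. None of these choices come from the patent.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def verify_caller(enrolled, step1_features, step2_features=None,
                  accept=0.85, retry=0.65):
    """Two-step verification: if the predefined phrase (step 1) is not
    conclusive but close, features from a free-form phrase (step 2) are
    combined with step 1 and compared to the enrollment features again."""
    score = cosine(enrolled, step1_features)
    if score >= accept:
        return True
    if score < retry or step2_features is None:
        return False
    combined = [(a + b) / 2 for a, b in zip(step1_features, step2_features)]
    return cosine(enrolled, combined) >= accept

if __name__ == "__main__":
    enrolled = [0.9, 0.1, 0.4]
    print(verify_caller(enrolled, [0.88, 0.12, 0.41]))                 # step 1 suffices
    print(verify_caller(enrolled, [0.4, 0.6, 0.4], [0.9, 0.1, 0.4]))   # needs step 2
```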
  • Patent number: 10600419
    Abstract: Techniques for performing command processing are described. A system receives, from a device, input data corresponding to a command. The system determines NLU processing results associated with multiple applications. The system also determines NLU confidences for the NLU processing results for each application. The system sends NLU processing results to a portion of the multiple applications, and receives output data or instructions from the portion of the applications. The system ranks the portion of the applications based at least in part on the NLU processing results associated with the portion of the applications as well as the output data or instructions provided by the portion of the applications. The system may also rank the portion of the applications using other data. The system causes content corresponding to output data or instructions provided by the highest ranked application to be output to a user.
    Type: Grant
    Filed: September 22, 2017
    Date of Patent: March 24, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Ruhi Sarikaya, Rohit Prasad, Kerry Hammil, Spyridon Matsoukas, Nikko Strom, Frédéric Johan Georges Deramat, Stephen Frederick Potter, Young-Bum Kim
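The ranking step described above combines per-application NLU confidences with the quality of the output each application returns. The sketch below uses a simple linear weighting as an illustrative stand-in for the system's ranker; the weights and score ranges are assumptions.

```python
def rank_applications(nlu_results, app_responses, weight_nlu=0.6, weight_resp=0.4):
    """Rank candidate applications for a spoken command.

    nlu_results:   {app_name: NLU confidence in [0, 1]}
    app_responses: {app_name: response-quality score in [0, 1]} produced after
                   each application saw the NLU results and returned output
                   data or instructions.
    """
    return sorted(
        nlu_results,
        key=lambda app: (weight_nlu * nlu_results[app]
                         + weight_resp * app_responses.get(app, 0.0)),
        reverse=True,
    )

if __name__ == "__main__":
    nlu = {"music_player": 0.82, "weather": 0.45, "shopping": 0.40}
    responses = {"music_player": 0.9, "weather": 0.7, "shopping": 0.2}
    order = rank_applications(nlu, responses)
    print(order[0], "handles the command")   # music_player handles the command
```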
  • Patent number: 10593345
    Abstract: An apparatus for decoding an encoded audio signal that includes an encoded core signal and parametric data, the apparatus including: a core decoder for decoding the encoded core signal to obtain a decoded core signal; an analyzer for analyzing the decoded core signal, before or after a frequency regeneration operation is performed, to provide an analysis result; and a frequency regenerator for regenerating spectral portions not included in the decoded core signal using a spectral portion of the decoded core signal, the parametric data, and the analysis result.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: March 17, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Disch, Ralf Geiger, Christian Helmrich, Frederik Nagel, Christian Neukam, Konstantin Schmidt, Michael Fischer
  • Patent number: 10586555
    Abstract: Architectures and techniques to visually indicate an operational state of an electronic device. In some instances, the electronic device comprises a voice-controlled device configured to interact with a user through voice input and visual output. The voice-controlled device may be positioned in a home environment, such as on a table in a room of the environment. The user may interact with the voice-controlled device through speech and the voice-controlled device may perform operations requested by the speech. As the voice-controlled device enters different operational states while interacting with the user, one or more lights of the voice-controlled device may be illuminated to indicate the different operational states.
    Type: Grant
    Filed: August 24, 2017
    Date of Patent: March 10, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Scott Blanksteen, Gregory M. Hart, Charles S. Rogers, III, Heinz-Dominik Langhammer, Ronald Edward Webber
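The behavior described above reduces to a mapping from operational states to light output. A minimal sketch follows; the state names, colors, and patterns are invented for illustration, since the patent only says that lights are illuminated differently per operational state.

```python
from enum import Enum

class DeviceState(Enum):
    IDLE = "idle"
    LISTENING = "listening"
    THINKING = "thinking"
    SPEAKING = "speaking"
    ERROR = "error"

# Colors and patterns are illustrative assumptions, not taken from the patent.
LIGHT_PATTERNS = {
    DeviceState.IDLE:      ("off",  None),
    DeviceState.LISTENING: ("blue", "solid"),
    DeviceState.THINKING:  ("blue", "spinning"),
    DeviceState.SPEAKING:  ("cyan", "pulsing"),
    DeviceState.ERROR:     ("red",  "blinking"),
}

def indicate(state: DeviceState) -> str:
    """Return the light command for the device's current operational state."""
    color, pattern = LIGHT_PATTERNS[state]
    return f"LED ring -> {color}" + (f" ({pattern})" if pattern else "")

if __name__ == "__main__":
    for state in (DeviceState.LISTENING, DeviceState.THINKING, DeviceState.SPEAKING):
        print(indicate(state))
```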
  • Patent number: 10572826
    Abstract: Methods, computer program products, and systems are presented. The methods include, for instance: obtaining an utterance input from a user agent and collecting context data of the utterance input. A context tag is generated based on the context data, and one or more ground truths having utterances semantically identical to the utterance input are selected. The semantic relationship between the context tag and an intent of each selected ground truth is examined, and the selected ground truth is updated with the context tag.
    Type: Grant
    Filed: April 18, 2017
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Faheem Altaf, Lisa Seacat Deluca, Raghuram Srinivas
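A minimal sketch of the tagging-and-update step described above. Exact lowercase matching stands in for "semantically identical" utterance selection, the tag format is invented, and `related_intents` is an assumed callable representing the semantic-relationship check.

```python
def make_context_tag(context):
    """Collapse collected context data into a single tag string.
    The key set and tag format are invented for illustration."""
    return "|".join(f"{k}={context[k]}" for k in sorted(context))

def update_ground_truths(utterance, context, ground_truths, related_intents):
    """Attach the context tag to every ground truth whose utterance matches
    the input and whose intent is consistent with the context.

    `related_intents(tag)` is an assumed callable returning the set of intents
    semantically related to that context tag.
    """
    tag = make_context_tag(context)
    allowed = related_intents(tag)
    for gt in ground_truths:
        if gt["utterance"].lower() == utterance.lower() and gt["intent"] in allowed:
            gt.setdefault("context_tags", []).append(tag)
    return ground_truths

if __name__ == "__main__":
    gts = [{"utterance": "turn it up", "intent": "increase_volume"}]
    updated = update_ground_truths(
        "Turn it up",
        {"device": "speaker", "time": "evening"},
        gts,
        related_intents=lambda tag: {"increase_volume"},
    )
    print(updated)
```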
  • Patent number: 10573334
    Abstract: An apparatus for decoding an encoded audio signal includes: a spectral domain audio decoder for generating a first decoded representation of a first set of first spectral portions, the decoded representation having a first spectral resolution; a parametric decoder for generating a second decoded representation of a second set of second spectral portions having a second spectral resolution lower than the first spectral resolution; a frequency regenerator for regenerating every reconstructed second spectral portion having the first spectral resolution using a first spectral portion and spectral envelope information for the second spectral portion; and a spectrum-time converter for converting the first decoded representation and the reconstructed second spectral portion into a time representation.
    Type: Grant
    Filed: January 20, 2016
    Date of Patent: February 25, 2020
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Disch, Frederik Nagel, Ralf Geiger, Balaji Nagendran Thoshkahna, Konstantin Schmidt, Stefan Bayer, Christian Neukam, Bernd Edler, Christian Helmrich
  • Patent number: 10564925
    Abstract: Many headsets include automatic noise cancellation (ANC) which dramatically reduces perceived background noise and improves user listening experience. Unfortunately, the voice microphones in these devices often capture ambient noise that the headsets output during phone calls or other communication sessions to other users. In response, many headsets and communication devices provide manual muting circuitry, but users frequently forget to turn the muting on and/or off, creating further problems as they communicate. To address this, the present inventors devised, among other things, an exemplary headset that detects the absence or presence of user speech, automatically muting and unmuting the voice microphone without user intervention. Some embodiments leverage relationships between feedback and feedforward signals in ANC circuitry to detect user speech, avoiding the addition of extra hardware to the headset.
    Type: Grant
    Filed: September 21, 2017
    Date of Patent: February 18, 2020
    Inventors: Jiajin An, Michael Jon Wurtz, David Wurtz, Manpreet Khaira, Amit Kumar, Shawn O'Connor, Shankar Rathoud, James Scanlan, Eric Sorensen
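The abstract above says the detector leverages relationships between the ANC feedback and feedforward signals to decide whether the wearer is speaking, and mutes automatically otherwise. The sketch below uses one plausible heuristic for that relationship, comparing the two signal levels with an invented ratio threshold; it is an assumption, not the patented detector.

```python
def detect_user_speech(feedback_rms, feedforward_rms, ratio_threshold=1.5):
    """Heuristic speech detector: assume the wearer's own voice raises the
    in-ear feedback level relative to the outside feedforward level.
    The ratio test and threshold are illustrative assumptions."""
    return feedback_rms > ratio_threshold * feedforward_rms

def auto_mute(frames):
    """Yield (frame, muted) pairs, muting whenever no user speech is detected.
    `frames` is an iterable of (feedback_rms, feedforward_rms, audio_frame)."""
    for feedback_rms, feedforward_rms, frame in frames:
        muted = not detect_user_speech(feedback_rms, feedforward_rms)
        yield frame, muted

if __name__ == "__main__":
    frames = [(0.8, 0.3, "frame-0"),   # wearer talking -> unmuted
              (0.2, 0.3, "frame-1")]   # ambient only   -> muted
    for frame, muted in auto_mute(frames):
        print(frame, "muted" if muted else "live")
```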
  • Patent number: 10565989
    Abstract: A system for easily importing content related to a device into a speech-controlled system in a manner that makes the content easily accessible using voice commands. The speech-controlled system detects the device type from which audio data is received and can determine whether the utterance in the audio data includes a query related to that specific device. The system can then obtain and ingest content related to the device and analyze that content to identify the portion of the content responsive to the query. The remaining content can be stored to potentially respond to future queries.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: February 18, 2020
    Assignee: Amazon Technologies, Inc.
    Inventors: Christopher Wheeler, Chase Brown, Kevin Bedell
  • Patent number: 10552547
    Abstract: Embodiments of the present invention provide a computer-implemented method for a real-time translation evaluation service. A plurality of text translations are received for an input text from a plurality of different machine translation servers. A similarity analyzer is executed that generates a first similarity score for each given text translation of the plurality of text translations by comparing the given text translation with the others of the plurality of text translations. A translation evaluator is executed that generates a second similarity score for each given text translation by comparing the given text translation against a plurality of comparison factors that include company word/term usage guidelines and product translation consistency rules. A best translation is identified and transmitted to an integrated development environment, the identification being based at least in part on the first similarity scores and the second similarity scores.
    Type: Grant
    Filed: October 10, 2017
    Date of Patent: February 4, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Debbie Anglin, Su Liu, Boyi Tzen, Fan Yang
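A minimal sketch of the two-score selection described above. Character-level string similarity stands in for the similarity analyzer, a term-coverage check stands in for the guideline/consistency evaluator, and the 50/50 weighting is an assumption.

```python
from difflib import SequenceMatcher

def cross_similarity(candidate, others):
    """First score: average string similarity of one translation against the
    translations returned by the other machine-translation servers."""
    if not others:
        return 0.0
    return sum(SequenceMatcher(None, candidate, o).ratio() for o in others) / len(others)

def guideline_score(candidate, preferred_terms):
    """Second score: a toy stand-in for the company term-usage and
    consistency check, here the fraction of preferred terms used."""
    if not preferred_terms:
        return 1.0
    hits = sum(1 for term in preferred_terms if term.lower() in candidate.lower())
    return hits / len(preferred_terms)

def pick_best(translations, preferred_terms, w1=0.5, w2=0.5):
    """Return the translation with the best weighted combination of scores."""
    def combined(t):
        others = [o for o in translations if o is not t]
        return w1 * cross_similarity(t, others) + w2 * guideline_score(t, preferred_terms)
    return max(translations, key=combined)

if __name__ == "__main__":
    candidates = ["Install the software package.",
                  "Set up the software package.",
                  "Install the program bundle."]
    print(pick_best(candidates, preferred_terms=["software package"]))
```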
  • Patent number: 10553199
    Abstract: A method of providing real-time speech synthesis based on user input includes presenting a graphical user interface having a low-dimensional representation of a multi-dimensional phoneme space, a first dimension representing degree of vocal tract constriction and voicing, a second dimension representing location in a vocal tract. One example employs a disk-shaped layout. User input is received via the interface and translated into a sequence of phonemes that are rendered on an audio output device. Additionally, a synthesis method includes maintaining a library of prerecorded samples of diphones organized into diphone groups, continually receiving a time-stamped sequence of phonemes to be synthesized, and selecting a sequence of diphone groups with their time stamps. A best diphone within each group is identified and placed into a production buffer from which diphones are rendered according to their time stamps.
    Type: Grant
    Filed: May 20, 2016
    Date of Patent: February 4, 2020
    Assignee: Trustees of Boston University
    Inventors: Frank Harold Guenther, Alfonso Nieto-Castanon
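The second method above (best diphone per group, placed in a production buffer and rendered by time stamp) maps naturally to a priority queue. The sketch below uses pitch distance as an invented selection criterion and a made-up library layout; neither comes from the patent.

```python
import heapq

def best_diphone(group, target_pitch):
    """Pick one prerecorded sample from a diphone group; here simply the
    sample whose pitch is closest to the target (an invented criterion)."""
    return min(group, key=lambda sample: abs(sample["pitch"] - target_pitch))

def schedule(diphone_requests, library, target_pitch=120.0):
    """Fill a production buffer with one best diphone per request and yield
    them in time-stamp order, ready to be rendered at the stamped times.

    diphone_requests: iterable of (timestamp_seconds, diphone_name)
    library:          {diphone_name: [ {"pitch": ..., "wav": ...}, ... ]}
    """
    buffer = []
    for timestamp, name in diphone_requests:
        sample = best_diphone(library[name], target_pitch)
        heapq.heappush(buffer, (timestamp, name, sample["wav"]))
    while buffer:
        yield heapq.heappop(buffer)

if __name__ == "__main__":
    library = {"h-e": [{"pitch": 110.0, "wav": "h-e_low.wav"},
                       {"pitch": 125.0, "wav": "h-e_mid.wav"}],
               "e-l": [{"pitch": 118.0, "wav": "e-l_mid.wav"}]}
    for item in schedule([(0.00, "h-e"), (0.12, "e-l")], library):
        print(item)
```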
  • Patent number: 10547936
    Abstract: The examples relate to implementations of apparatuses, such as lighting devices, and a system that uses a speech-based user interface to provide speech-based navigation services. The speech-based user interface provides navigation instructions that direct a person to the location of an item within a premises. The person interacts with a speech-based apparatus to receive the navigation instructions as speech-based directions through the premises from a specified location to the item location, or as static navigation instructions enabling the person to navigate from the specified location to the item location. A directional microphone and a controllable speaker receive audio inputs from and output audio outputs to a specified location or subarea of the premises to a person using the speech-based user interface. The audio outputs are directed to the person in the subarea of the premises, and have a higher amplitude within the subarea than outside the subarea of the premises.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: January 28, 2020
    Assignee: ABL IP HOLDING LLC
    Inventors: Vernon J. Nagel, Jenish S. Kastee, Jack C. Rains, Jr., Nathaniel W. Hixon, Youssef F. Baker, Daniel M. Megginson, Sean P. White, Niels G. Eegholm
  • Patent number: 10540439
    Abstract: Systems and methods for semantically analyzing digital information. A cognitive engine is configured to determine useful evidentiary information from large digital content data sets. Further, the cognitive engine can analyze or manipulate the evidentiary information to derive data needed to solve problems, identify issues, and identify patterns. The results can then be applied to any application, interface, or automation as appropriate.
    Type: Grant
    Filed: April 17, 2017
    Date of Patent: January 21, 2020
    Assignee: MARCA RESEARCH & DEVELOPMENT INTERNATIONAL, LLC
    Inventors: Mahmoud Azmi Khamis, Bruce Golden, Rami Ikhreishi
  • Patent number: 10515150
    Abstract: A method for configuring an automated, speech driven self-help system based on prior interactions between a plurality of customers and a plurality of agents includes: recognizing, by a processor, speech in the prior interactions between customers and agents to generate recognized text; detecting, by the processor, a plurality of phrases in the recognized text; clustering, by the processor, the plurality of phrases into a plurality of clusters; generating, by the processor, a plurality of grammars describing corresponding ones of the clusters; outputting, by the processor, the plurality of grammars; and invoking configuration of the automated self-help system based on the plurality of grammars.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: December 24, 2019
    Inventors: Yoni Lev, Tamir Tapuhi, Avraham Faizakof, Amir Lev-Tov, Yochai Konig
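A minimal sketch of the phrase-clustering and grammar-generation steps described above, assuming recognized text is already available. The normalization, the Jaccard-overlap threshold of 0.5, and the alternation-style grammar output are illustrative assumptions, not the patented method.

```python
def normalize(phrase):
    """Crude normalization: lowercase and strip filler words.
    Stands in for the recognition/phrase-detection front end."""
    fillers = {"please", "i", "would", "like", "to", "my"}
    return tuple(w for w in phrase.lower().split() if w not in fillers)

def cluster_phrases(phrases):
    """Group phrases whose normalized token sets overlap heavily
    (Jaccard similarity >= 0.5, an illustrative threshold)."""
    clusters = []
    for phrase in phrases:
        tokens = set(normalize(phrase))
        for cluster in clusters:
            union = tokens | cluster["tokens"]
            if union and len(tokens & cluster["tokens"]) / len(union) >= 0.5:
                cluster["phrases"].append(phrase)
                cluster["tokens"] |= tokens
                break
        else:
            clusters.append({"tokens": set(tokens), "phrases": [phrase]})
    return clusters

def cluster_to_grammar(cluster):
    """Emit a simple alternation grammar covering the cluster's phrases."""
    return "( " + " | ".join(sorted(set(cluster["phrases"]))) + " )"

if __name__ == "__main__":
    calls = ["I would like to reset my password",
             "please reset my password",
             "check my account balance"]
    for c in cluster_phrases(calls):
        print(cluster_to_grammar(c))
```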
  • Patent number: 10515646
    Abstract: A quantization apparatus comprises: a first quantization module for performing quantization without an inter-frame prediction; and a second quantization module for performing quantization with an inter-frame prediction, and the first quantization module comprises: a first quantization part for quantizing an input signal; and a third quantization part for quantizing a first quantization error signal, and the second quantization module comprises: a second quantization part for quantizing a prediction error; and a fourth quantization part for quantizing a second quantization error signal, and the first quantization part and the second quantization part comprise a trellis structured vector quantizer.
    Type: Grant
    Filed: March 30, 2015
    Date of Patent: December 24, 2019
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Ho-sang Sung
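A minimal sketch of the two-mode, two-stage structure described above: one path quantizes the input directly and then its quantization error, the other quantizes an inter-frame prediction error and then its residual. A uniform scalar quantizer and a trivial previous-frame predictor stand in for the trellis structured vector quantizer and the real predictor.

```python
def quantize(value, step=0.5):
    """Uniform scalar quantizer used as a stand-in for the patent's trellis
    structured vector quantizer.  Returns (index, reconstructed_value)."""
    index = round(value / step)
    return index, index * step

def encode_frame(x, previous, use_prediction):
    """Two-stage quantization of one parameter.

    Without prediction: quantize x, then quantize the first-stage error.
    With prediction:    quantize the prediction error (x - previous), then
                        quantize the residual of that stage.
    Mirrors the first/third vs. second/fourth quantization parts above,
    with simplified components.
    """
    target = x - previous if use_prediction else x
    idx1, q1 = quantize(target, step=0.5)        # coarse stage
    idx2, q2 = quantize(target - q1, step=0.1)   # error-refinement stage
    reconstructed = q1 + q2 + (previous if use_prediction else 0.0)
    return (idx1, idx2), reconstructed

if __name__ == "__main__":
    previous_frame = 1.30
    current = 1.47
    for use_prediction in (False, True):
        indices, rec = encode_frame(current, previous_frame, use_prediction)
        print(use_prediction, indices, round(rec, 3))
```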