Patents Examined by Shaun Roberts
  • Patent number: 10572592
    Abstract: Disclosed is a method for providing at least one word linguistically associated with at least one searched word belonging to a set of words. A first database of expressions is queried (325) to obtain a set of expressions including the at least one searched word. For each expression of at least a subset of the obtained expressions, a second database is then queried (340) to obtain at least one word linguistically associated with the at least one searched word. Next, at least one obtained word linguistically associated with the at least one searched word is selected (350).
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: February 25, 2020
    Inventor: Theo Hoffenberg
  • Patent number: 10572605
    Abstract: An electronic device and method for providing a translation service are disclosed. The electronic device for providing a translation service includes an input unit comprising input circuitry configured to receive input text of a first language, a processor configured to divide the input text into a main segment and a sub-segment and to generate output text of a second language by selecting translation candidate text corresponding to the input text from translation candidate text of the second language, based on a meaning of text included in the sub-segment, and an output unit comprising output circuitry configured to output the output text.
    Type: Grant
    Filed: June 12, 2017
    Date of Patent: February 25, 2020
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Young-ho Han, Il-hwan Kim, Chi-youn Park, Nam-hoon Kim, Kyung-min Lee
  • Patent number: 10565996
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing speaker identification. In some implementations, data identifying a media item including speech of a speaker is received. Based on the received data, one or more other media items that include speech of the speaker are identified. One or more search results are generated that each reference a respective media item of the one or more other media items that include speech of the speaker. The one or more search results are provided for display.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: February 18, 2020
    Assignee: Google LLC
    Inventors: Matthew Sharifi, Ignacio Lopez Moreno, Ludwig Schmidt
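The matching step in this abstract (identifying other media items containing speech of the same speaker) might, for example, compare speaker embeddings by cosine similarity. A minimal sketch, assuming precomputed per-item embeddings and a hypothetical similarity threshold; the abstract specifies neither the representation nor the comparison method:

```python
import math

def cosine(a, b):
    """Cosine similarity between two speaker embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def find_items_with_speaker(query_embedding, library, threshold=0.8):
    """Return media items whose stored speaker embedding is close enough
    to the embedding of the speaker in the query item. The names and
    threshold are illustrative, not taken from the patent."""
    return [item for item, emb in library.items()
            if cosine(query_embedding, emb) >= threshold]
```

The returned item list would then be formatted as search results, one per matching media item, as the abstract describes.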
  • Patent number: 10558748
    Abstract: A computer-implemented method includes: receiving, by a computing device, an input file defining correct spellings of one or more transliterated words; generating, by the computing device, suffix outputs based on the one or more transliterated words; generating, by the computing device, a dictionary that maps the suffix outputs to the one or more transliterated words; recognizing, by the computing device, an alternatively spelled transliterated word included in a document as one of the one or more correctly spelled transliterated words using the dictionary; and outputting, by the computing device, information corresponding to the recognized transliterated word.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: February 11, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: James E. Bostick, John M. Ganci, Jr., Martin G. Keen, Craig M. Trim
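The suffix-dictionary approach in this abstract can be sketched roughly as follows. This is a simplified interpretation: it enumerates every suffix of each correctly spelled transliterated word and matches an alternative spelling by its longest known suffix. Function names and the collision handling (first word wins) are illustrative assumptions:

```python
def build_suffix_dictionary(words):
    """Map each suffix of a correctly spelled transliterated word
    back to that word. On suffix collisions, keep the first word."""
    suffix_map = {}
    for word in words:
        for i in range(len(word)):
            suffix_map.setdefault(word[i:], word)
    return suffix_map

def recognize(token, suffix_map):
    """Match an alternatively spelled token against the dictionary
    by trying its suffixes, longest first."""
    for i in range(len(token)):
        suffix = token[i:]
        if suffix in suffix_map:
            return suffix_map[suffix]
    return None
```

For example, a misspelling that differs from the canonical word only in its early characters still shares a long suffix with it and is resolved to the correct spelling.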
  • Patent number: 10552743
    Abstract: A management system for guiding an agent in a media-specific dialogue has a conversion engine for instantiating ongoing dialogue as machine-readable text, if the dialogue is in voice media, a context analysis engine for determining facts from the text, a rules engine for asserting rules based on fact input, and a presentation engine for presenting information to the agent to guide the agent in the dialogue. The context analysis engine passes determined facts to the rules engine, which selects and asserts to the presentation engine rules based on the facts, and the presentation engine provides periodically updated guidance to the agent based on the rules asserted.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: February 4, 2020
    Inventors: Dave Sneyders, Brian Galvin, S. Michael Perlmutter
  • Patent number: 10552543
    Abstract: A computer natural language conversational agent authors an event-processing rule by carrying out a dialog in natural language with a user. A data model that customizes a dialog and building of the event-processing rule is received. A partial tree data structure is constructed based on a rule's grammar, and specialized based on tokens extracted from the data model. An utterance is received from a user and interpreted according to the grammar as specialized to the data model. Based on the interpreting of the utterance, the grammar, the data model, and context of interactions with the user, a natural language prompt is determined for the computer natural language conversational agent to output to the user. The partial tree data structure is filled based on the natural language prompt and the utterance from the user. The event-processing rule is generated based on the partial tree data structure filled during the dialog.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: February 4, 2020
    Assignee: International Business Machines Corporation
    Inventors: Martin J. Hirzel, Avraham E. Shinnar, Jerome Simeon
  • Patent number: 10540963
    Abstract: A computer-implemented method for generating an input for a classifier. The method includes obtaining the n-best hypotheses output by an automatic speech recognition (ASR) system for an utterance, combining the n-best hypotheses horizontally in a predetermined order with a separator between each pair of hypotheses, and outputting the combined n-best hypotheses as a single text input to a classifier.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Nobuyasu Itoh, Gakuto Kurata, Ryuki Tachibana
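The combining step described in this abstract is straightforward to sketch. The separator token below is an illustrative assumption; the patent does not specify a particular separator string:

```python
def combine_nbest(hypotheses, separator=" <sep> "):
    """Concatenate the n-best ASR hypotheses, in their given order,
    into a single text string with a separator between each pair,
    ready to be fed to a downstream classifier."""
    return separator.join(hypotheses)
```

Feeding all n hypotheses as one input lets the classifier see recognition alternatives that a single 1-best transcript would discard.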
  • Patent number: 10535343
    Abstract: A method at an electronic device with an audio input system includes: receiving a verbal input at the device; processing the verbal input; transmitting a request to a remote system, the request including information determined based on the verbal input; receiving a response to the request, wherein the response is generated by the remote system in accordance with the information based on the verbal input; and performing an operation in accordance with the response, where one or more of the receiving, processing, transmitting, receiving, and performing are performed by one or more voice processing modules of a voice assistant library executing on the electronic device, the voice processing modules providing a plurality of voice processing operations that are accessible to one or more application programs and/or operating software executing or executable on the electronic device.
    Type: Grant
    Filed: May 10, 2017
    Date of Patent: January 14, 2020
    Assignee: GOOGLE LLC
    Inventor: Kenneth Mixter
  • Patent number: 10522142
    Abstract: A vehicle-mounted charger having a voice control function includes: a contact plug, a charging socket, a voice acquisition unit, a recognition control unit, and a first BLUETOOTH unit. The voice acquisition unit collects a voice signal and converts the voice signal to an electrical signal. The recognition control unit includes a conversion module, a first storage module, an operation module, and an executive module. The conversion module converts the electrical signal into a data signal and sends the data signal to the operation module. The operation module compares the data signal with predetermined storage data stored by the first storage module, operates on the data signal, and sends a control command to the executive module. The executive module sends an executive command according to the control command. The first BLUETOOTH unit sends a BLUETOOTH signal to a portable device according to the executive command to control the portable device to execute a corresponding operation.
    Type: Grant
    Filed: February 7, 2018
    Date of Patent: December 31, 2019
    Assignee: SHENZHEN CHUANGYUANTENG TECHNOLOGY CO., LTD.
    Inventors: Pin Wang, Lichao Shi
  • Patent number: 10515637
    Abstract: Techniques for dynamically maintaining speech processing data on a local device for frequently input commands are described. A system determines a usage history associated with a user profile. The usage history represents at least a first command. The system determines that the first command is associated with an input frequency that satisfies an input frequency threshold. The system also determines that the first command is missing from first speech processing data stored by a device associated with the user profile. The system then generates second speech processing data specific to the first command and sends the second speech processing data to the device.
    Type: Grant
    Filed: September 19, 2017
    Date of Patent: December 24, 2019
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: David William Devries, Rajesh Mittal
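The selection logic in this abstract, deciding which commands are frequent enough to warrant local speech processing data but are not yet on the device, can be sketched as follows. The count-based threshold and data shapes are illustrative assumptions:

```python
from collections import Counter

def commands_to_cache(usage_history, local_commands, threshold=5):
    """Return commands whose input frequency meets the threshold but
    which are missing from the device's local speech processing data.
    usage_history: list of command strings from the user profile.
    local_commands: set of commands already covered on the device."""
    counts = Counter(usage_history)
    return [cmd for cmd, n in counts.items()
            if n >= threshold and cmd not in local_commands]
```

The system would then generate command-specific speech processing data for each returned command and push it to the device.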
  • Patent number: 10515652
    Abstract: Apparatus for decoding an encoded audio signal including an encoded core signal, including: a core decoder for decoding the encoded core signal to obtain a decoded core signal; a tile generator for generating one or more spectral tiles having frequencies not included in the decoded core signal using a spectral portion of the decoded core signal; and a cross-over filter for spectrally cross-over filtering the decoded core signal and a first frequency tile having frequencies extending from a gap filling frequency to an upper border frequency or for spectrally cross-over filtering a first frequency tile and a second frequency tile.
    Type: Grant
    Filed: May 22, 2018
    Date of Patent: December 24, 2019
    Assignee: Fraunhofer-Gesellschaft zur Foerderung der angewandten Forschung e.V.
    Inventors: Sascha Disch, Ralf Geiger, Christian Helmrich, Frederik Nagel, Christian Neukam, Konstantin Schmidt, Michael Fischer
  • Patent number: 10497368
    Abstract: Apparatuses, methods, systems, and program products are disclosed for transmitting audio to an identified recipient. A method includes detecting, by a processor, audio input at a first information handling device. The audio input is intended for a recipient. The method includes deriving an identity of the intended recipient of the audio input based on the audio input. The method includes transmitting the audio input to a second information handling device that is associated with the intended recipient.
    Type: Grant
    Filed: August 15, 2017
    Date of Patent: December 3, 2019
    Assignee: Lenovo (Singapore) PTE. LTD.
    Inventors: Amy Leigh Rose, John Scott Crowe, Gary David Cudak, Jennifer Lee-Baron, Nathan J. Peterson
  • Patent number: 10497371
    Abstract: A system, method and computer-readable storage devices are disclosed for multi-modal interactions with a system via a long-touch gesture on a touch-sensitive display. A system operating per this disclosure can receive a multi-modal input comprising speech and a touch on a display, wherein the speech comprises a pronoun. When the touch on the display has a duration longer than a threshold duration, the system can identify an object within a threshold distance of the touch, associate the object with the pronoun in the speech, to yield an association, and perform an action based on the speech and the association.
    Type: Grant
    Filed: April 29, 2019
    Date of Patent: December 3, 2019
    Assignee: AT&T INTELLECTUAL PROPERTY I, L.P.
    Inventors: Brant J. Vasilieff, Patrick Ehlen, Michael J. Johnston
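The long-touch/pronoun binding described in this abstract can be sketched as below. The pronoun set, duration and distance thresholds, and the dictionary shapes for the touch event and on-screen objects are all illustrative assumptions, not details from the patent:

```python
import math

PRONOUNS = {"this", "that", "it", "these", "those"}

def resolve_multimodal(speech_tokens, touch, objects,
                       duration_threshold=0.8, distance_threshold=50.0):
    """Bind a pronoun in the speech to the on-screen object nearest a
    long touch. `touch` is {"x", "y", "duration"} (duration in seconds);
    each object is {"id", "x", "y"}. Returns the association, or None."""
    if touch["duration"] < duration_threshold:
        return None  # not a long touch
    pronoun = next((t for t in speech_tokens if t in PRONOUNS), None)
    if pronoun is None:
        return None
    nearest = min(objects,
                  key=lambda o: math.hypot(o["x"] - touch["x"],
                                           o["y"] - touch["y"]))
    dist = math.hypot(nearest["x"] - touch["x"], nearest["y"] - touch["y"])
    if dist > distance_threshold:
        return None  # no object within the threshold distance
    return {"pronoun": pronoun, "object": nearest["id"]}
```

An action handler would then execute the spoken command ("delete that") against the associated object.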
  • Patent number: 10490198
    Abstract: A sensor device may include a computing device in communication with multiple microphones. A neural network executing on the computing device may receive audio signals from each microphone. One microphone signal may serve as a reference signal. The neural network may extract differences in signal characteristics of the other microphone signals as compared to the reference signal. The neural network may combine these signal differences into a lossy compressed signal. The sensor device may transmit the lossy compressed signal and the lossless reference signal to a remote neural network executing in a cloud computing environment for decompression and sound recognition analysis.
    Type: Grant
    Filed: December 18, 2017
    Date of Patent: November 26, 2019
    Assignee: GOOGLE LLC
    Inventors: Chanwoo Kim, Rajeev Conrad Nongpiur, Tara Sainath
  • Patent number: 10474756
    Abstract: Systems and methods for using autoencoders for training natural language classifiers. An example method comprises: producing, by a computer system, a plurality of feature vectors, wherein each feature vector represents a natural language text of a text corpus, wherein the text corpus comprises a first plurality of annotated natural language texts and a second plurality of un-annotated natural language texts; training, using the plurality of feature vectors, an autoencoder represented by an artificial neural network; producing, by the autoencoder, an output of the hidden layer, by processing a training data set comprising the first plurality of annotated natural language texts; and training, using the training data set, a text classifier that accepts an input vector comprising the output of the hidden layer and yields a degree of association, with a certain text category, of a natural language text utilized to produce the output of the hidden layer.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: November 12, 2019
    Assignee: ABBYY Production LLC
    Inventors: Konstantin Vladimirovich Anisimovich, Evgenii Mikhailovich Indenbom, Ivan Ivanovich Ivashnev
  • Patent number: 10468016
    Abstract: Disclosed herein is a system for compensating for dialects and accents, comprising an automatic speech recognition system that includes: an automatic speech recognition device operative to receive an utterance in an acoustic format from a user with a user interface; a speech to text conversion engine operative to receive the utterance from the automatic speech recognition device and to prepare a textual statement of the utterance; and a correction database operative to store textual statements of all utterances. The correction database is operative to secure a corrected transcript of the textual statement of the utterance from the speech to text conversion engine and to add it to the correction database if a corrected transcript of the textual statement of the utterance is not already available.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: November 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: David Jaramillo, Neil Katz, Robert Smart, Viney A. Ugave
  • Patent number: 10468031
    Abstract: An approach is provided that receives an audio stream and utilizes a voice activation detection (VAD) process to create a digital audio stream of voices from at least two different speakers. An automatic speech recognition (ASR) process is applied to the digital stream with the ASR process resulting in the spoken words to which a speaker turn detection (STD) process is applied to identify a number of speaker segments with each speaker segment ending at a word boundary. The STD process analyzes a number of speaker segments using a language model that determines when speaker changes occur. A speaker clustering algorithm is then applied to the speaker segments to associate one of the speakers with each of the speaker segments.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: November 5, 2019
    Assignee: International Business Machines Corporation
    Inventors: Kenneth W. Church, Dimitrios B. Dimitriadis, Petr Fousek, Miroslav Novak, George A. Saon
  • Patent number: 10460721
    Abstract: A dialogue act estimation method, in a dialogue act estimation apparatus, includes acquiring first training data indicating, in a mutually associated manner, text data of a first sentence that can be a current uttered sentence, and text data of a second sentence that can be an uttered sentence immediately previous to the first sentence. The first training data also indicates speaker change information indicating whether a speaker of the first sentence is the same as a speaker of the second sentence, and dialogue act information indicating a class of the first sentence. The method further includes learning an association between the current uttered sentence and the dialogue act information by applying the first training data to a model, and storing a result of the learning as learning result information in a memory.
    Type: Grant
    Filed: June 7, 2017
    Date of Patent: October 29, 2019
    Assignee: PANASONIC INTELLECTUAL PROPERTY MANAGEMENT CO., LTD.
    Inventor: Takashi Ushio
  • Patent number: 10452674
    Abstract: A device receives, from a virtual assistant device, a first user input associated with a first account of a user, and causes a natural language processing analysis to be performed on the first user input to identify first information, the first account, and a first operation to be performed in association with the first information in the first account. The device identifies a first data management platform, associated with the first account, that is configured to maintain the first information in a first data structure associated with the first data management platform, and determines that the first data management platform is a first type of data management platform based on the first data structure. The device causes the first operation to be performed using robotic process automation (RPA) that uses a user interface of the first data management platform, based on the first data management platform being the first type of data management platform.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: October 22, 2019
    Assignee: Accenture Global Solutions Limited
    Inventors: Gaurav Diwan, Tracy Ann Goguen
  • Patent number: 10446136
    Abstract: A system and method for accent invariant speech recognition comprising: maintaining a database storing a set of language units in a given language, and for each of the language units, storing audio samples of pronunciation variations of the language unit pronounced by a plurality of speakers; extracting and storing in the database a feature vector for locating each of the audio samples in a feature space; identifying pronunciation variation distances, which are distances between locations of audio samples of the same language unit in the feature space, and inter-unit distances, which are distances between locations of audio samples of different language units in the feature space; calculating a transformation applicable on the feature space to reduce the pronunciation variation distances relative to the inter-unit distances; and, based on the calculated transformation, training a processor to classify pronunciation variations of the same language unit as the same language unit.
    Type: Grant
    Filed: May 11, 2017
    Date of Patent: October 15, 2019
    Assignee: ANTS TECHNOLOGY (HK) LIMITED
    Inventors: Ron Fridental, Ilya Blayvas, Pavel Nosko
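The transformation described in this abstract, shrinking within-unit (pronunciation) distances relative to inter-unit distances, is in the same spirit as linear discriminant analysis. A deliberately crude stand-in, restricted to a per-dimension scaling rather than the general transformation the patent claims; all names and the variance-ratio heuristic are assumptions:

```python
def mean(vectors):
    """Componentwise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def accent_invariant_scaling(samples, eps=1e-9):
    """samples: dict mapping language unit -> list of feature vectors.
    Returns a per-dimension scale that amplifies dimensions which
    separate units and damps dimensions dominated by pronunciation
    variation within a unit."""
    unit_means = {u: mean(vs) for u, vs in samples.items()}
    dims = len(next(iter(unit_means.values())))
    # within-unit variance per dimension (pronunciation variation)
    within, n = [0.0] * dims, 0
    for u, vs in samples.items():
        for v in vs:
            for d in range(dims):
                within[d] += (v[d] - unit_means[u][d]) ** 2
            n += 1
    within = [w / n for w in within]
    # between-unit variance per dimension (inter-unit separation)
    grand = mean(list(unit_means.values()))
    between = [0.0] * dims
    for m in unit_means.values():
        for d in range(dims):
            between[d] += (m[d] - grand[d]) ** 2
    between = [b / len(unit_means) for b in between]
    return [(b / (w + eps)) ** 0.5 for b, w in zip(between, within)]
```

Applying the scale to every feature vector stretches the discriminative dimensions and compresses the accent-sensitive ones before the classifier is trained.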