Patents Examined by Satwant Singh
  • Patent number: 9704501
    Abstract: The present invention relates to a codec device and method for encoding/decoding voice and audio signals in a communication system, wherein: a fixed codebook excited signal is generated by using a pulse index for a voice signal; a first adaptive codebook excited signal is generated by using a pitch index for the voice signal; a fixed codebook signal is generated by multiplying the fixed codebook excited signal by a fixed codebook gain; a first adaptive codebook signal is generated by multiplying the first adaptive codebook excited signal by a first adaptive codebook gain; and a synthesized filter excited signal is generated by adding the fixed codebook signal and the first adaptive codebook signal.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: July 11, 2017
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Mi-Suk Lee
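The abstract above describes summing the gain-scaled fixed and adaptive codebook excitations to form the synthesis-filter excitation. A minimal Python sketch of that combination follows; the function name, list-based signals, and toy values are assumptions for illustration, not the patented implementation.

```python
def build_excitation(fixed_excitation, adaptive_excitation, fixed_gain, adaptive_gain):
    """Scale each codebook excitation by its gain and sum them sample by sample."""
    return [fixed_gain * f + adaptive_gain * a
            for f, a in zip(fixed_excitation, adaptive_excitation)]

# Toy 5-sample excitations; in the codec these come from the pulse and pitch indices.
fixed = [1.0, 0.0, -1.0, 0.0, 0.5]
adaptive = [0.2, 0.3, 0.1, -0.2, 0.0]
print(build_excitation(fixed, adaptive, fixed_gain=0.8, adaptive_gain=1.1))
```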
  • Patent number: 9697194
    Abstract: A processor generates a temporary dictionary of words, phrases, or both, based on access to a first application. The processor uses the temporary dictionary to carry out auto-correct operations on text included in a second application.
    Type: Grant
    Filed: June 8, 2015
    Date of Patent: July 4, 2017
    Assignee: International Business Machines Corporation
    Inventor: Adam H.E. Eberbach
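As a rough illustration of the idea in the abstract above (a dictionary harvested from one application driving auto-correct in another), here is a hedged Python sketch; the helper names, the cutoff value, and the use of difflib for fuzzy matching are assumptions, not the patented method.

```python
import difflib

def build_temporary_dictionary(first_app_text):
    """Collect the words seen in the first application into a temporary dictionary."""
    return set(word.strip(".,!?").lower() for word in first_app_text.split())

def autocorrect(word, temp_dict, cutoff=0.8):
    """Replace a word with its closest temporary-dictionary entry, if one is close enough."""
    matches = difflib.get_close_matches(word.lower(), temp_dict, n=1, cutoff=cutoff)
    return matches[0] if matches else word

temp_dict = build_temporary_dictionary("Meet about the Kubernetes migration on Friday")
print(autocorrect("Kubernets", temp_dict))  # corrected using the first application's text
```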
  • Patent number: 9697196
    Abstract: Systems and methods are disclosed for determining the connotation or sentiment type of a text unit comprising multiple terms and having a grammatical structure, such as subject+verb, verb+object, adjective+noun, noun+noun, or noun+preposition+noun. The connotation or sentiment type of the text unit is determined by applying context rules where the context of the grammatical structure may change the inherent or default connotations of individual terms in the text unit. The methods address the challenge of correctly or accurately determining the sentiment type of various linguistic structures in different contexts, and offer an alternative to the simplistic approach of using the inherent or default connotation of individual terms for the linguistic structure containing them.
    Type: Grant
    Filed: March 17, 2013
    Date of Patent: July 4, 2017
    Inventors: Guangsheng Zhang, Chizhong Zhang
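The abstract above determines sentiment by letting context rules override default term connotations. The sketch below shows one such rule for a verb+object structure; the connotation table, the rule, and the examples are illustrative assumptions only.

```python
# Default (inherent) connotations of individual terms; illustrative values.
DEFAULT_CONNOTATION = {"reduce": "negative", "cost": "negative", "quality": "positive"}

def verb_object_sentiment(verb, obj):
    """Apply a context rule for verb+object instead of reading terms in isolation."""
    v = DEFAULT_CONNOTATION.get(verb, "neutral")
    o = DEFAULT_CONNOTATION.get(obj, "neutral")
    if v == "negative" and o == "negative":
        return "positive"          # e.g. "reduce cost"
    if v == "negative" and o == "positive":
        return "negative"          # e.g. "reduce quality"
    return o

print(verb_object_sentiment("reduce", "cost"))     # positive
print(verb_object_sentiment("reduce", "quality"))  # negative
```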
  • Patent number: 9699570
    Abstract: A voice signal processing apparatus and a voice signal processing method are provided. A last sampling point of an mth original frequency-lowered signal frame is determined according to a phase reference sampling point number of the mth original frequency-lowered signal frame. Here, the phase reference sampling point number corresponds to a middle sampling point of an mth renovating frequency-lowered signal frame, and the last sampling point is phase-matched with a sampling point corresponding to the phase reference sampling point number in the mth original frequency-lowered signal frame. P consecutive sampling points starting from the last sampling point are applied as sampling points of an (m+1)th renovating frequency-lowered signal frame.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: July 4, 2017
    Assignee: Acer Incorporated
    Inventors: Po-Jen Tu, Jia-Ren Chang, Kai-Meng Tzeng
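A possible reading of the splicing step described above: locate the sampling point in the original frequency-lowered frame that is phase-matched to a reference point, then take P consecutive samples from it for the next renovating frame. The correlation criterion, window size, and toy sinusoids below are assumptions, not the patented procedure.

```python
import math

def pick_phase_matched_samples(original_frame, reference_frame, ref_index, P, window=8):
    """Find the point in original_frame whose neighbourhood best correlates with the
    reference point's neighbourhood, then return P consecutive samples from there."""
    ref = reference_frame[ref_index - window: ref_index + window]
    best_i, best_score = None, float("-inf")
    for i in range(window, len(original_frame) - max(window, P)):
        cand = original_frame[i - window: i + window]
        score = sum(a * b for a, b in zip(ref, cand))  # plain correlation as the match score
        if score > best_score:
            best_i, best_score = i, score
    return original_frame[best_i: best_i + P]

# Toy sinusoids standing in for frequency-lowered frames.
orig = [math.sin(2 * math.pi * 0.05 * n) for n in range(200)]
ref = [math.sin(2 * math.pi * 0.05 * n + 0.3) for n in range(200)]
print(pick_phase_matched_samples(orig, ref, ref_index=100, P=4))
```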
  • Patent number: 9672831
    Abstract: A computer-implemented method, computer program product, and computing system is provided for managing quality of experience for communication sessions. In an implementation, a method may include determining a language spoken on a communication session. The method may also include selecting a codec for the communication session based upon, at least in part, the language spoken on the communication session. The method may further include transacting the communication session using the selected codec for the communication session.
    Type: Grant
    Filed: February 25, 2015
    Date of Patent: June 6, 2017
    Assignee: International Business Machines Corporation
    Inventors: Hitham Ahmed Assem Aly Salama, Jonathan Dunne, James P. Galvin, Jr., Liam Harpur
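The core idea above (codec selection driven by the language spoken on the session) can be sketched as a simple lookup; the language-to-codec table and the fallback codec are illustrative assumptions.

```python
# Hypothetical preference table mapping a detected language to a codec choice.
LANGUAGE_CODEC_PREFERENCE = {
    "en": "OPUS",
    "zh": "G.722.1",
    "es": "AMR-WB",
}

def select_codec(detected_language, default="G.711"):
    """Pick the codec for the session based on the language spoken on it."""
    return LANGUAGE_CODEC_PREFERENCE.get(detected_language, default)

print(select_codec("en"))   # OPUS
print(select_codec("fr"))   # falls back to G.711
```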
  • Patent number: 9672814
    Abstract: Software that trains an artificial neural network for generating vector representations for natural language text, by performing the following steps: (i) receiving, by one or more processors, a set of natural language text; (ii) generating, by one or more processors, a set of first metadata for the set of natural language text, where the first metadata is generated using supervised learning method(s); (iii) generating, by one or more processors, a set of second metadata for the set of natural language text, where the second metadata is generated using unsupervised learning method(s); and (iv) training, by one or more processors, an artificial neural network adapted to generate vector representations for natural language text, where the training is based, at least in part, on the received natural language text, the generated set of first metadata, and the generated set of second metadata.
    Type: Grant
    Filed: May 8, 2015
    Date of Patent: June 6, 2017
    Assignee: International Business Machines Corporation
    Inventors: Liangliang Cao, James J. Fan, Chang Wang, Bing Xiang, Bowen Zhou
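To make the training setup above concrete, the sketch below pairs each token with one supervised and one unsupervised metadata label before they would be fed to the network; the stand-in tagger, the clustering rule, and the function names are assumptions.

```python
def supervised_metadata(tokens):
    """Stand-in for a supervised method, e.g. crude part-of-speech guesses."""
    return ["VERB" if t.endswith("ing") else "NOUN" for t in tokens]

def unsupervised_metadata(tokens):
    """Stand-in for an unsupervised method, e.g. bucketing words into clusters."""
    return [f"cluster_{len(t) % 3}" for t in tokens]

def training_examples(text):
    """Each (token, supervised tag, unsupervised cluster) triple would feed the network."""
    tokens = text.lower().split()
    return list(zip(tokens, supervised_metadata(tokens), unsupervised_metadata(tokens)))

print(training_examples("running long experiments"))
```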
  • Patent number: 9672206
    Abstract: The present invention relates to an apparatus, system, and method for creating a customizable and application-specific semantic similarity utility that uses a single similarity-measuring algorithm with data from broad-coverage structured lexical knowledge bases (dictionaries and thesauri) and corpora (document collections). More specifically, the invention includes the use of data from custom or application-specific structured lexical knowledge bases and corpora, and semantic mappings from variant expressions to their canonical forms. The invention uses a combination of technologies to simplify the development of a generic semantic similarity utility and to minimize the effort and complexity of customizing the generic utility for a domain- or topic-dependent application. The invention makes customization modular and data-driven, allowing developers to create implementations at varying degrees of customization (e.g., generic, domain-level, company-level, application-level) and also as changes occur over time (e.g.
    Type: Grant
    Filed: June 1, 2015
    Date of Patent: June 6, 2017
    Assignee: Information Extraction Systems, Inc.
    Inventors: Alwin B Carus, Thomas J. DePlonty
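One way to picture the "single similarity algorithm plus swappable data" design described above is a similarity function parameterized by a variant-to-canonical map and a synonym table; the Jaccard-style score and the sample tables below are assumptions.

```python
def make_similarity(canonical_map, synonyms):
    """Build one similarity function whose behaviour is driven entirely by the data passed in."""
    def expand(term):
        term = canonical_map.get(term, term)        # map variant expressions to canonical forms
        return {term} | synonyms.get(term, set())   # add thesaurus-style related terms
    def similarity(a, b):
        ea, eb = expand(a), expand(b)
        return len(ea & eb) / len(ea | eb)          # Jaccard overlap of the expansions
    return similarity

generic = make_similarity(canonical_map={"physician": "doctor"},
                          synonyms={"doctor": {"clinician"}, "nurse": {"clinician"}})
print(generic("physician", "nurse"))  # nonzero via the shared "clinician" sense
```

Swapping in domain-, company-, or application-level tables would customize the utility without touching the algorithm, which mirrors the modular, data-driven customization described in the abstract.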
  • Patent number: 9672497
    Abstract: Methods and systems for using natural language processing and machine-learning algorithms to process vehicle-service data to generate metadata regarding the vehicle-service data are described herein. A processor can discover vehicle-service data that can be clustered together based on the vehicle-service data having common characteristics. The clustered vehicle-service data can be classified (e.g., categorized) into any one of a plurality of categories. One of the categories can be for clustered vehicle-service data that is tip-worthy (e.g., determined to include data worthy of generating vehicle-service content, such as a repair hint). Another category can track instances of vehicle-service data that are considered to be common to an instance of vehicle-service data classified into the tip-worthy category. The vehicle-service data can be collected from repair orders from a plurality of repair shops. The vehicle-service content generated by the systems can be provided to those or other repair shops.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: June 6, 2017
    Assignee: Snap-on Incorporated
    Inventors: Bradley R. Lewis, Patrick S. Merg, Tilak B. Kasturi, Brett A. Kelley, Jacob G. Foreman
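A hedged sketch of the clustering-and-classification flow described above: repair orders are grouped by a shared key and clusters are flagged as tip-worthy once they recur often enough. The key choice, the threshold, and the data layout are assumptions.

```python
from collections import defaultdict

def cluster_repair_orders(repair_orders):
    """Group repair orders that share common characteristics (here: model and complaint)."""
    clusters = defaultdict(list)
    for order in repair_orders:
        clusters[(order["model"], order["complaint"])].append(order)
    return clusters

def classify_clusters(clusters, tip_threshold=3):
    """Flag clusters with enough recurring instances as tip-worthy; track the rest."""
    return {key: ("tip-worthy" if len(orders) >= tip_threshold else "tracking")
            for key, orders in clusters.items()}

orders = [{"model": "X", "complaint": "no start"}] * 3 + [{"model": "Y", "complaint": "noise"}]
print(classify_clusters(cluster_repair_orders(orders)))
```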
  • Patent number: 9659560
    Abstract: Software that trains an artificial neural network for generating vector representations for natural language text, by performing the following steps: (i) receiving, by one or more processors, a set of natural language text; (ii) generating, by one or more processors, a set of first metadata for the set of natural language text, where the first metadata is generated using supervised learning method(s); (iii) generating, by one or more processors, a set of second metadata for the set of natural language text, where the second metadata is generated using unsupervised learning method(s); and (iv) training, by one or more processors, an artificial neural network adapted to generate vector representations for natural language text, where the training is based, at least in part, on the received natural language text, the generated set of first metadata, and the generated set of second metadata.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: May 23, 2017
    Assignee: International Business Machines Corporation
    Inventors: Liangliang Cao, James J. Fan, Chang Wang, Bing Xiang, Bowen Zhou
  • Patent number: 9659561
    Abstract: A recording support method includes: receiving audio data; acquiring voice data from the audio data; receiving or generating text data corresponding to the voice data; storing at least part of the voice data and at least part of the text data corresponding to the at least part of the voice data; and outputting the received or generated text data, wherein the stored at least part of the voice data and the stored at least part of the text data are associated with each other, and wherein the at least part of the voice data comprises one or more units.
    Type: Grant
    Filed: April 3, 2015
    Date of Patent: May 23, 2017
    Assignee: SAMSUNG ELECTRONICS CO., LTD
    Inventor: Sung Woon Jang
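The abstract above stores units of voice data associated with corresponding text. The sketch below models that association with simple in-memory records; the class names and fields are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RecordingUnit:
    voice_samples: List[float]   # a stored part of the voice data
    text: str                    # the text corresponding to that part

@dataclass
class Recording:
    units: List[RecordingUnit] = field(default_factory=list)

    def add_unit(self, voice_samples, text):
        """Store a voice-data unit together with its associated text."""
        self.units.append(RecordingUnit(voice_samples, text))

    def transcript(self):
        """Output the text data, unit by unit."""
        return " ".join(u.text for u in self.units)

rec = Recording()
rec.add_unit([0.1, 0.2], "hello")
rec.add_unit([0.0, -0.1], "world")
print(rec.transcript())  # "hello world"
```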
  • Patent number: 9653080
    Abstract: A method and apparatus for voice control of a mobile device are provided. The method establishes a connection between the mobile device and a voice-control module. Responsive to establishing the connection, the mobile device enters into an intermediate mode; and the voice-control module monitors for verbal input comprising a verbal command from among a set of predetermined verbal commands. The voice-control module sends instructions to the mobile device related to the verbal command received; and the mobile device acts on the received instructions. An apparatus/voice control module (VCM) for voice control of a mobile device, wherein the VCM includes a connection module configured for establishing a connection between the VCM and the mobile device; a monitoring module configured for monitoring for a verbal command from among a set of predetermined verbal commands; and a communications module configured for sending instructions to the mobile device related to the verbal command received.
    Type: Grant
    Filed: March 7, 2016
    Date of Patent: May 16, 2017
    Assignee: BlackBerry Limited
    Inventors: Ahmed Abdelsamie, Nicholas Shane Choo, Guowei Zhang, Omar George Joseph Barake, Steven Anthony Lill
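A minimal sketch of the voice-control module described above: it watches for one of a set of predetermined verbal commands and sends a matching instruction to the mobile device. The command table, instruction strings, and callback are assumptions; no BlackBerry API is implied.

```python
# Hypothetical set of predetermined verbal commands and the instructions they map to.
PREDETERMINED_COMMANDS = {
    "answer call": "ACTION_ANSWER",
    "decline call": "ACTION_DECLINE",
    "read message": "ACTION_READ_SMS",
}

class VoiceControlModule:
    def __init__(self, send_instruction):
        self.send_instruction = send_instruction   # callback standing in for the connection to the device

    def on_verbal_input(self, utterance):
        """Monitor verbal input; forward an instruction only for recognized commands."""
        instruction = PREDETERMINED_COMMANDS.get(utterance.strip().lower())
        if instruction is not None:
            self.send_instruction(instruction)

vcm = VoiceControlModule(send_instruction=lambda i: print("device received", i))
vcm.on_verbal_input("Answer call")
```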
  • Patent number: 9653083
    Abstract: The present disclosure relates to a data processing method and system. The method includes obtaining network data and a sound wave synthesized with the network data by a terminal, the sound wave being obtained by performing an encoding conversion on resource data; and according to an operation performed by a user on the network data on the terminal, invoking an audio playback apparatus of the terminal to play the sound wave synthesized with the network data to terminals of one or more users nearby.
    Type: Grant
    Filed: November 25, 2014
    Date of Patent: May 16, 2017
    Assignee: Alibaba Group Holding Limited
    Inventor: Junxue Rao
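The abstract above plays a sound wave obtained by encoding resource data. As a rough illustration, the sketch below maps bytes of resource data to a sequence of tone frequencies that an audio playback apparatus could emit; the frequency plan is an assumption.

```python
BASE_FREQ_HZ = 17000   # assumed near-ultrasonic band for device-to-device audio
STEP_HZ = 50

def encode_to_tones(resource_data: bytes):
    """Map each 4-bit nibble of the resource data to one tone frequency."""
    tones = []
    for byte in resource_data:
        tones.append(BASE_FREQ_HZ + (byte >> 4) * STEP_HZ)
        tones.append(BASE_FREQ_HZ + (byte & 0x0F) * STEP_HZ)
    return tones

print(encode_to_tones(b"coupon-42")[:6])  # first few tones of the synthesized sound wave
```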
  • Patent number: 9640193
    Abstract: To improve the intelligibility of speech for users with high-frequency hearing loss, the present systems and methods provide an improved frequency lowering system with enhancement of spectral features responsive to place-of-articulation of the input speech. High frequency components of speech, such as fricatives, may be classified based on one or more features that distinguish place of articulation, including spectral slope, peak location, relative amplitudes in various frequency bands, or a combination of these or other such features. Responsive to the classification of the input speech, a signal or signals may be added to the input speech in a frequency band audible to the hearing-impaired listener, said signal or signals having predetermined distinct spectral features corresponding to the classification, and allowing a listener to easily distinguish various consonants in the input.
    Type: Grant
    Filed: November 1, 2012
    Date of Patent: May 2, 2017
    Assignee: Northeastern University
    Inventor: Ying-Yee Kong
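The sketch below illustrates the classification-then-cue idea described above: a high-frequency segment is classified from simple spectral features and a distinct low-frequency cue is chosen for it. The thresholds, feature set, and cue frequencies are assumptions, not the patented parameters.

```python
def classify_fricative(spectral_slope, peak_hz):
    """Classify place of articulation from two toy spectral features."""
    if peak_hz > 6000 and spectral_slope > 0:
        return "alveolar"         # e.g. /s/-like
    if peak_hz > 3000:
        return "palato-alveolar"  # e.g. /sh/-like
    return "labiodental"          # e.g. /f/-like

# Distinct, audible cue frequencies assigned per class (illustrative values).
CUE_FREQ_HZ = {"alveolar": 1800, "palato-alveolar": 1400, "labiodental": 1000}

def cue_for_segment(spectral_slope, peak_hz):
    """Pick the low-frequency cue to add for a classified high-frequency segment."""
    return CUE_FREQ_HZ[classify_fricative(spectral_slope, peak_hz)]

print(cue_for_segment(spectral_slope=0.4, peak_hz=7200))  # 1800
```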
  • Patent number: 9642221
    Abstract: A user interface, a method, and a computer program product are provided for enabling a user to voice control over at least one setting of an apparatus such as a lighting system. The user interface determines a characteristic of an audio signal converted from vocal input of a user. A first setting of the apparatus is adjusted proportionally to a variation in the characteristic. Another setting of the apparatus may be adjusted on the basis of another characteristic of the audio signal. As a result, the user interface enables the user to control a lighting system over a substantially large or continuous range of output.
    Type: Grant
    Filed: November 2, 2012
    Date of Patent: May 2, 2017
    Assignee: PHILIPS LIGHTING HOLDING B.V.
    Inventor: Lucas Josef Maria Schlangen
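A small sketch of the proportional mapping described above: one vocal characteristic (loudness) drives one lighting setting (brightness) and a second characteristic (pitch) drives another (colour temperature). The ranges and the linear scaling are assumptions.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Map a value from one range to another, clamped to the input range."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def lighting_settings(loudness_db, pitch_hz):
    brightness = scale(loudness_db, 40, 80, 0, 100)      # percent, proportional to loudness
    colour_temp = scale(pitch_hz, 80, 400, 2700, 6500)   # kelvin, driven by a second characteristic
    return {"brightness_pct": round(brightness), "colour_temp_k": round(colour_temp)}

print(lighting_settings(loudness_db=65, pitch_hz=220))
```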
  • Patent number: 9640183
    Abstract: An electronic device is provided. The electronic device includes a processor configured to perform automatic speech recognition (ASR) on a speech input by using a speech recognition model that is stored in a memory, and a communication module configured to provide the speech input to a server and receive a speech instruction, which corresponds to the speech input, from the server. The electronic device may perform different operations according to a confidence score of a result of the ASR. In addition, various other embodiments contemplated by the specification are possible.
    Type: Grant
    Filed: April 7, 2015
    Date of Patent: May 2, 2017
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Seok Yeong Jung, Kyung Tae Kim
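The confidence-dependent behaviour described above can be sketched as a simple branch: accept the on-device ASR result when its confidence score is high enough, otherwise defer to the server. The threshold and function names are assumptions.

```python
CONFIDENCE_THRESHOLD = 0.85   # assumed cut-off for trusting the on-device recognizer

def handle_speech(local_text, local_confidence, ask_server):
    """Perform different operations according to the ASR confidence score."""
    if local_confidence >= CONFIDENCE_THRESHOLD:
        return local_text              # trust the on-device result
    return ask_server(local_text)      # otherwise use the server's speech instruction

result = handle_speech("call mom", 0.62, ask_server=lambda t: "server: call mom")
print(result)
```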
  • Patent number: 9633665
    Abstract: Processes are described herein for transforming an audio mixture signal data structure into a specified component data structure and a background component data structure. In the processes described herein, pitch differences between a guide signal and a dialogue component of an audio mixture signal are accounted for by explicit modeling.
    Type: Grant
    Filed: November 26, 2014
    Date of Patent: April 25, 2017
    Assignee: AUDIONAMIX
    Inventor: Romain Hennequin
  • Patent number: 9626964
    Abstract: A voice recognition terminal is provided that is able to communicate with a server capable of voice recognition, and includes a voice input acceptance portion accepting voice input from a user, a voice recognition portion carrying out voice recognition of the accepted voice input, a response processing execution portion performing processing for responding to the user based on a result of voice recognition of the accepted voice input, and a communication portion transmitting the voice input accepted by the voice input acceptance portion to the server and receiving a result of voice recognition from the server. The response processing execution portion performs the processing for responding to the user based on whichever result is determined to be more suitable: the result of voice recognition by the voice recognition portion or the result of voice recognition received from the server.
    Type: Grant
    Filed: November 24, 2014
    Date of Patent: April 18, 2017
    Assignee: SHARP KABUSHIKI KAISHA
    Inventors: Masafumi Hirata, Akira Tojima, Yuri Iwano
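A hedged sketch of the arbitration described above, where the terminal keeps whichever recognition result (local or server) is judged more suitable; here suitability is simplified to a confidence score, which is an assumption.

```python
def choose_result(local_result, server_result):
    """Each result is a (text, confidence) pair; keep the more suitable (higher-confidence) one."""
    return max([local_result, server_result], key=lambda r: r[1])[0]

print(choose_result(("turn on the TV", 0.71), ("turn on the radio", 0.64)))
```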
  • Patent number: 9613616
    Abstract: A system and computer-implemented method for synthesizing multi-person speech into an aggregate voice are disclosed. The method may include crowd-sourcing a data message configured to include a textual passage. The method may include collecting, from a plurality of speakers, a set of vocal data for the textual passage. Additionally, the method may also include mapping a source voice profile to a subset of the set of vocal data to synthesize the aggregate voice.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: April 4, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jose A. G. de Freitas, Guy P. Hindle, James S. Taylor
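To illustrate the mapping step above, the sketch below selects the crowd-sourced recordings closest to a source voice profile and averages them into aggregate voice parameters; reducing a voice to a single pitch feature is an assumption made for brevity.

```python
def closest_subset(recordings, source_pitch_hz, k=2):
    """Map the source voice profile to the k closest crowd-sourced recordings."""
    return sorted(recordings, key=lambda r: abs(r["pitch_hz"] - source_pitch_hz))[:k]

def aggregate_voice(recordings):
    """Average the selected subset into aggregate voice parameters."""
    return {"pitch_hz": sum(r["pitch_hz"] for r in recordings) / len(recordings)}

crowd = [{"speaker": "a", "pitch_hz": 110}, {"speaker": "b", "pitch_hz": 210},
         {"speaker": "c", "pitch_hz": 190}]
print(aggregate_voice(closest_subset(crowd, source_pitch_hz=200)))
```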
  • Patent number: 9606988
    Abstract: A system and method predict the translation quality of a translated input document. The method includes receiving an input document pair composed of a plurality of sentence pairs, each sentence pair including a source sentence in a source language and a machine translation of the source language sentence to a target language sentence. For each of the sentence pairs, a representation of the sentence pair is generated, based on a set of features extracted for the sentence pair. Using a generative model, a representation of the input document pair is generated, based on the sentence pair representations. A translation quality of the translated input document is computed, based on the representation of the input document pair.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: March 28, 2017
    Assignee: XEROX CORPORATION
    Inventors: Jean-Marc Andreoli, Diane Larlus-Larrondo, Jean-Luc Meunier
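The document-level pipeline above can be sketched as: extract features per sentence pair, pool them into a document representation, then score it. The two toy features and the linear scorer below are assumptions; the patent builds the document representation with a generative model, which is simplified here to mean pooling.

```python
def sentence_pair_features(source, target):
    """Toy features for a sentence pair: length ratio and absolute length difference."""
    ls, lt = len(source.split()), len(target.split())
    return [lt / max(ls, 1), abs(ls - lt)]

def document_representation(sentence_pairs):
    """Pool the per-pair features into one document-pair representation (mean pooling)."""
    feats = [sentence_pair_features(s, t) for s, t in sentence_pairs]
    n = len(feats)
    return [sum(col) / n for col in zip(*feats)]

def quality_score(doc_repr, weights=(1.0, -0.1), bias=0.0):
    """Turn the document representation into a translation-quality estimate."""
    return bias + sum(w * x for w, x in zip(weights, doc_repr))

pairs = [("le chat dort", "the cat sleeps"), ("bonjour", "hello there friend")]
print(quality_score(document_representation(pairs)))
```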
  • Patent number: 9601108
    Abstract: Incorporation of an exogenous large-vocabulary model into rule-based speech recognition is provided. An audio stream is received by a local small-vocabulary rule-based speech recognition system (SVSRS), and is streamed to a large-vocabulary statistically-modeled speech recognition system (LVSRS). The SVSRS and LVSRS perform recognitions of the audio. If a portion of the audio is not recognized by the SVSRS, a rule is triggered that inserts a mark-up in the recognition result. The recognition result is sent to the LVSRS. If a mark-up is detected, recognition of a specified portion of the audio is performed. The LVSRS result is unified with the SVSRS result and sent as a hybrid response back to the SVSRS. If the hybrid-recognition rule is not triggered, an arbitration algorithm is invoked to determine whether the SVSRS or the LVSRS recognition has a lesser word error rate. The determined recognition is sent as a response to the SVSRS.
    Type: Grant
    Filed: January 17, 2014
    Date of Patent: March 21, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Travis Wilson, Salman Quazi, John Vicondoa, Pradip Fatehpuria
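The hybrid flow above can be pictured as: the small-vocabulary recognizer marks spans it cannot recognize, the large-vocabulary recognizer fills those spans, and the two results are unified. The mark-up token, the toy grammar, and the stand-in recognizers below are assumptions.

```python
MARKUP = "<unrecognized>"                 # assumed mark-up inserted by the triggered rule
SMALL_VOCAB = {"call", "text", "mom", "work"}   # toy rule-based grammar

def small_vocab_recognize(words):
    """SVSRS stand-in: recognize in-grammar words, mark up everything else."""
    return [w if w in SMALL_VOCAB else MARKUP for w in words]

def large_vocab_recognize(word):
    """LVSRS stand-in: pretend the statistical recognizer handles the marked-up span."""
    return word.lower()

def hybrid_recognize(words):
    """Unify the SVSRS result with LVSRS recognition of the marked-up portions."""
    svsrs = small_vocab_recognize(words)
    return [large_vocab_recognize(w) if r == MARKUP else r
            for r, w in zip(svsrs, words)]

print(hybrid_recognize(["call", "Anastasia", "mom"]))
```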