Patents Examined by Satwant Singh
  • Patent number: 10212465
    Abstract: Apparatus and methods to implement a technique for using voice input to control a network-enabled device. In one implementation, this feature allows the user to conveniently register and manage an IPTV device using voice input rather than employing a bulky remote control or a separate registration website.
    Type: Grant
    Filed: November 14, 2016
    Date of Patent: February 19, 2019
    Assignee: Sony Interactive Entertainment LLC
    Inventors: True Xiong, Charles McCoy
  • Patent number: 10199050
    Abstract: The present invention relates to a codec device and method for encoding/decoding voice and audio signals in a communication system, wherein: a fixed codebook excited signal is generated by using a pulse index for a voice signal; a first adaptive codebook excited signal is generated by using a pitch index for the voice signal; a fixed codebook signal is generated by multiplying the fixed codebook excited signal by a fixed codebook gain; a first adaptive codebook signal is generated by multiplying the first adaptive codebook excited signal by a first adaptive codebook gain; and a synthesized filter excited signal is generated by adding the fixed codebook signal and the first adaptive codebook signal.
    Type: Grant
    Filed: July 10, 2017
    Date of Patent: February 5, 2019
    Assignee: Electronics and Telecommunications Research Institute
    Inventor: Mi-Suk Lee
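The excitation construction described in this abstract follows the classic CELP pattern: a sparse fixed-codebook contribution and a pitch-delayed adaptive-codebook contribution, each scaled by its gain and summed to drive the synthesis filter. The sketch below illustrates that combination only; the pulse positions, pitch lag, gains, and frame length are made-up values, not taken from the patent.

```python
# Hedged sketch of the CELP-style excitation combination; all numbers are illustrative.
import numpy as np

def build_excitation(past_excitation, pulse_positions, pulse_signs,
                     pitch_lag, fixed_gain, adaptive_gain, frame_len=64):
    # Fixed codebook excited signal: sparse pulses placed by the pulse index.
    fixed_cb = np.zeros(frame_len)
    for pos, sign in zip(pulse_positions, pulse_signs):
        fixed_cb[pos] = sign

    # Adaptive codebook excited signal: past excitation delayed by the pitch index,
    # repeated if the pitch lag is shorter than the frame.
    reps = int(np.ceil(frame_len / pitch_lag))
    adaptive_cb = np.tile(past_excitation[-pitch_lag:], reps)[:frame_len]

    # Multiply each contribution by its gain and add them to form the
    # synthesis-filter excited signal.
    return fixed_gain * fixed_cb + adaptive_gain * adaptive_cb

exc = build_excitation(np.random.randn(256), [3, 19, 40, 57], [1, -1, 1, -1],
                       pitch_lag=45, fixed_gain=0.8, adaptive_gain=0.9)
```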
  • Patent number: 10192551
    Abstract: Methods, apparatus, and computer readable media related to receiving textual input of a user during a dialog between the user and an automated assistant (and optionally one or more additional users), and generating responsive reply content based on the textual input and based on user state information. The reply content is provided for inclusion in the dialog. In some implementations, the reply content is provided as a reply, by the automated assistant, to the user's textual input and may optionally be automatically incorporated in the dialog between the user and the automated assistant. In some implementations, the reply content is suggested by the automated assistant for inclusion in the dialog and is only included in the dialog in response to further user interface input.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: January 29, 2019
    Assignee: Google LLC
    Inventors: Victor Carbune, Daniel Keysers, Thomas Deselaers
  • Patent number: 10170122
    Abstract: A speech recognition method, an electronic device and a speech recognition system are provided. When it is determined that a local device is not connected to the Internet, a voiceprint comparison between the received voice data and the history voice data stored in the voice database is executed to obtain the corresponding history voice data, and associated history text data is found from a result database of the local device according to the obtained history voice data.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: January 1, 2019
    Assignee: ASUSTeK COMPUTER INC.
    Inventors: Yen-Chun Lee, Hsiao-Chien Chien, Yen-Hwa Chen
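As a hedged illustration of the offline fallback described in this abstract, the sketch below matches an incoming voiceprint against stored history voiceprints and returns the text previously recognized for the closest match. The cosine-similarity matching, threshold, and database layout are assumptions for illustration, not the patented implementation.

```python
# Assumed representation: voiceprints as vectors, history keyed by utterance id.
import numpy as np

def offline_recognize(voiceprint, voice_db, result_db, threshold=0.85):
    """voice_db: {utterance id: stored voiceprint vector}; result_db: {utterance id: text}."""
    best_id, best_sim = None, -1.0
    for utt_id, stored in voice_db.items():
        sim = float(np.dot(voiceprint, stored) /
                    (np.linalg.norm(voiceprint) * np.linalg.norm(stored)))
        if sim > best_sim:
            best_id, best_sim = utt_id, sim
    # Return the history text data associated with the closest history voice data.
    return result_db[best_id] if best_sim >= threshold else None

db = {"u1": np.array([0.9, 0.1, 0.0]), "u2": np.array([0.1, 0.9, 0.2])}
texts = {"u1": "turn on the light", "u2": "play some music"}
print(offline_recognize(np.array([0.88, 0.12, 0.05]), db, texts))   # "turn on the light"
```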
  • Patent number: 10152474
    Abstract: A device may obtain a document. The device may identify a skip value for the document. The skip value may relate to a quantity of words or a quantity of characters that are to be skipped in an n-gram. The device may determine one or more skip n-grams using the skip value for the document. A skip n-gram, of the one or more skip n-grams, may include a sequence of one or more words or one or more characters with a set of occurrences in the document. The sequence of one or more words or one or more characters may include a skip value quantity of words or characters within the sequence. The device may extract one or more terms from the document based on the one or more skip n-grams. The device may provide information identifying the one or more terms.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: December 11, 2018
    Assignee: Accenture Global Services Limited
    Inventors: Anurag Dwarakanath, Aditya Priyadarshi, Bhanu Anand, Bindu Madhav Tummalapalli, Bargav Jayaraman, Nisha Ramachandra, Anitha Chandran, Parvathy Vijay Raghavan, Shalini Chaudhari, Neville Dubash, Sanjay Podder
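The skip n-gram idea in this abstract can be illustrated with a short sketch. The generator below allows up to `skip` skipped words between picked words; the term-selection step (keeping the most frequent skip n-grams) is a stand-in heuristic, not the patented scoring.

```python
# Illustrative skip n-gram extraction; the top-N-by-count selection is an assumption.
from collections import Counter
from itertools import combinations

def skip_ngrams(tokens, n=2, skip=1):
    """Yield n-token sequences allowing up to `skip` skipped words between picks."""
    window = n + skip
    for start in range(len(tokens) - n + 1):
        span = tokens[start:start + window]
        for combo in combinations(range(1, len(span)), n - 1):
            yield tuple([span[0]] + [span[i] for i in combo])

def extract_terms(text, n=2, skip=1, top=5):
    tokens = text.lower().split()
    counts = Counter(skip_ngrams(tokens, n, skip))
    return counts.most_common(top)

print(extract_terms("the device may extract terms from the device document"))
```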
  • Patent number: 10148846
    Abstract: An image forming apparatus that functions as a client of a distributed file system is provided, in which the image forming apparatus includes: a distributed file system process part for mounting a file system of a server apparatus on the image forming apparatus to enable the image forming apparatus to access the file system of the server apparatus as the distributed file system of the image forming apparatus; and a storing process part for accessing the file system of the server apparatus and storing, in the file system, information that is stored in a storage unit used by the image forming apparatus.
    Type: Grant
    Filed: September 6, 2017
    Date of Patent: December 4, 2018
    Assignee: Ricoh Company, Ltd.
    Inventors: Tsutomu Ohishi, Hiroyuki Tanaka, Kazumi Fujisaki
  • Patent number: 10147418
    Abstract: Systems and methods automatically evaluate transcription quality. Audio data is obtained. The audio data is segmented into a plurality of utterances with a voice activity detector operating on a computer processor. The plurality of utterances are transcribed into at least one word lattice with a large-vocabulary continuous speech recognition system operating on the processor. A minimum Bayes risk decoder is applied to the at least one word lattice to create at least one confusion network. At least one conformity ratio is calculated from the at least one confusion network.
    Type: Grant
    Filed: August 14, 2017
    Date of Patent: December 4, 2018
    Assignee: VERINT SYSTEMS LTD.
    Inventors: Oana Sidi, Ron Wein
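The abstract does not define the conformity ratio precisely, so the sketch below shows only one plausible reading, stated as an assumption: the fraction of confusion-network slots whose best word hypothesis carries a posterior above a confidence threshold.

```python
# Hypothetical conformity-ratio computation over a confusion network.
def conformity_ratio(confusion_network, threshold=0.8):
    """confusion_network: list of slots, each a dict mapping word -> posterior."""
    confident = sum(1 for slot in confusion_network
                    if max(slot.values()) >= threshold)
    return confident / len(confusion_network) if confusion_network else 0.0

cn = [{"hello": 0.95, "hollow": 0.05},
      {"world": 0.60, "word": 0.40},
      {"today": 0.90, "to": 0.10}]
print(conformity_ratio(cn))   # 2 of 3 slots exceed the threshold -> 0.667
```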
  • Patent number: 10133731
    Abstract: There is disclosed a computer-implemented method for generating a summary of a digital text. The method is executable on a server coupled to a communication network. Embodiments of the methods disclosed herein generate a summary of the digital text by selecting sentences from the digital text based on a calculated sentence value. The sentence value is calculated by relying on the digital text itself, without use of ontology dictionaries. Embodiments of the present method determine the sentence value by first breaking the sentence into one or more concept phrases and then determining, for a given sentence of the digital text: (i) a non-contextual value for its concept phrases and (ii) a contextual value for its concept phrases.
    Type: Grant
    Filed: February 8, 2017
    Date of Patent: November 20, 2018
    Assignee: Yandex Europe AG
    Inventor: Yury Grigorievich Zelenkov
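As a rough illustration of the sentence-value idea above, the sketch below splits sentences into stand-in "concept phrases" (plain words) and combines a non-contextual value with a contextual value computed from the text itself. Both value functions are placeholders, not the patented formulas.

```python
# Placeholder scoring only: length-based non-contextual value, frequency-based contextual value.
from collections import Counter

def summarize(sentences, top=2):
    words_per_sentence = [s.lower().split() for s in sentences]     # crude "concept phrases"
    corpus_counts = Counter(w for ws in words_per_sentence for w in ws)

    def sentence_value(words):
        non_contextual = sum(len(w) for w in words)                  # value of the phrases alone
        contextual = sum(corpus_counts[w] for w in words)            # value w.r.t. the whole text
        return non_contextual + contextual

    ranked = sorted(sentences, key=lambda s: sentence_value(s.lower().split()), reverse=True)
    return ranked[:top]

print(summarize([
    "Concept phrases drive the sentence value.",
    "The method needs no ontology dictionaries.",
    "Weather was fine.",
]))
```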
  • Patent number: 10134407
    Abstract: A method is provided for transmitting an arbitrary signal using an acoustic sound, with little effect on the character (quality) of the original acoustic sound even when an arbitrary signal within the audible frequency range is combined into an acoustic sound such as music. The method comprises a step of finding, among the plural sounds composing the acoustic sound, a separable sound whose fundamental element (fundamental sound) b2, contributing mainly to recognition of a single sound, and additional element, contributing collaterally to recognition of the single sound, are separable on the temporal axis or frequency axis, or of inserting such a separable sound into the plural sounds composing the acoustic sound. The additional element of the separable sound so found or inserted is transcribed by a signal pattern b1-1, b1-2 of the arbitrary signal. The arbitrary signal b1-1, b1-2 is then transmitted by means of the acoustic sound whose additional element has been transcribed.
    Type: Grant
    Filed: February 27, 2015
    Date of Patent: November 20, 2018
    Inventor: Masuo Karasawa
  • Patent number: 10134424
    Abstract: This disclosure generally relates to a portable device, and specifically to a portable word counter device. The portable word counter device includes a digital microcontroller circuit. The digital microcontroller circuit includes a syllable detector that detects syllables in spoken speech. The syllable detector aggregates the number of detected syllables and applies a syllable-to-word ratio. Based on the syllable-to-word ratio, the syllable detector determines the number of words spoken and transmits it to a mobile device.
    Type: Grant
    Filed: June 24, 2016
    Date of Patent: November 20, 2018
    Assignee: Versame, Inc.
    Inventors: Alvin Lacson, Jill Desmond, Andy Turk
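The syllable-to-word conversion described above reduces to a single division. In the sketch below, the ratio of roughly 1.5 syllables per English word is an assumed figure for illustration only.

```python
# Minimal sketch of estimating the spoken word count from a syllable count.
def words_from_syllables(syllable_count, syllables_per_word=1.5):
    """Divide the aggregated syllable count by an assumed average syllables-per-word ratio."""
    return round(syllable_count / syllables_per_word)

print(words_from_syllables(450))   # ~300 words estimated from 450 detected syllables
```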
  • Patent number: 10134393
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating representations of acoustic sequences. One of the methods includes: receiving an acoustic sequence, the acoustic sequence comprising a respective acoustic feature representation at each of a plurality of time steps; processing the acoustic feature representation at an initial time step using an acoustic modeling neural network; for each subsequent time step of the plurality of time steps: receiving an output generated by the acoustic modeling neural network for a preceding time step, generating a modified input from the output generated by the acoustic modeling neural network for the preceding time step and the acoustic representation for the time step, and processing the modified input using the acoustic modeling neural network to generate an output for the time step; and generating a phoneme representation for the utterance from the outputs for each of the time steps.
    Type: Grant
    Filed: July 31, 2017
    Date of Patent: November 20, 2018
    Assignee: Google LLC
    Inventors: Hasim Sak, Andrew W. Senior
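A hedged sketch of the feedback loop described above: at each time step after the first, the previous network output is combined with the current acoustic features to form the modified input. Concatenation as the combination rule and the toy stand-in model are assumptions; the abstract only says a modified input is generated.

```python
# Feedback loop over time steps; `model` is a stand-in callable, not the patented network.
import numpy as np

def run_acoustic_model(model, features):
    """features: array of shape [num_time_steps, feat_dim]; model: callable returning a vector."""
    outputs = [model(features[0])]                        # initial time step: features only
    for t in range(1, len(features)):
        # Modified input = previous output combined with the current acoustic features.
        modified_input = np.concatenate([outputs[-1], features[t]])
        outputs.append(model(modified_input))
    return outputs                                        # per-step outputs feed the phoneme representation

# Toy stand-in "network" that accepts any input length and returns a 4-dim output.
toy_model = lambda x: np.tanh(np.ones(4) * float(np.mean(x)))
outs = run_acoustic_model(toy_model, np.random.randn(10, 13))
```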
  • Patent number: 10127226
    Abstract: A method for performing a dialog between a machine, preferably a humanoid robot, and at least one human speaker, comprises the following steps, implemented by a computer: a) identifying the human speaker; b) extracting from a database a speaker profile comprising a plurality of dialog variables, at least one value being assigned to at least one of the dialog variables; c) receiving and analyzing at least one sentence originating from the speaker; and d) formulating and emitting at least one response sentence as a function at least of the sentence received and interpreted in step c) and of one dialog variable of the speaker profile.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: November 13, 2018
    Assignee: SOFTBANK ROBOTICS EUROPE
    Inventors: Magali Patris, David Houssin, Jérôme Monceaux
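Steps (a) through (d) of this abstract can be sketched as follows. The profile store, dialog variables, and response template are hypothetical; only the flow (identify the speaker, load a profile, analyze the sentence, answer using a dialog variable) follows the abstract.

```python
# Hypothetical speaker-profile store keyed by speaker id; values are dialog variables.
PROFILES = {"alice": {"first_name": "Alice", "favorite_topic": "astronomy"}}

def respond(speaker_id, sentence):
    profile = PROFILES.get(speaker_id, {})                  # step b: extract the speaker profile
    topic = profile.get("favorite_topic", "that")
    if "hello" in sentence.lower():                         # step c: analyze the received sentence
        return f"Hello {profile.get('first_name', '')}! Shall we talk about {topic}?"
    return f"Tell me more -- is this related to {topic}?"   # step d: reply uses a dialog variable

print(respond("alice", "Hello robot"))
```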
  • Patent number: 10122720
    Abstract: A system and method for automated web source content analysis. The system of automated content analysis performs the following: searches the text content for terms, i.e., key words and phrases, presented in a special dictionary; executes a multi-factor genre content analysis based on structural, pragmatic and stylistic properties; executes a thematic content analysis using a rubricator built from illegal subjects and topics and their antagonists; and makes a decision based on a combination of the thematic and genre properties of the text. The proposed method provides a final decision in terms that are easily understood by a user.
    Type: Grant
    Filed: February 7, 2017
    Date of Patent: November 6, 2018
    Assignee: Plesk International GmbH
    Inventors: Sergey Oleynikov, Yury Zagorulko, Elena Sidorova
  • Patent number: 10108607
    Abstract: A machine translation method includes determining source language text to be translated and obtaining a translation rule table, trained in advance, that includes multiple translation rules associating the target language text with source language text in multiple languages; determining candidate results of the target language text; and determining the target language text to be output based on the candidate results. During the translation, the specific language of the source language text need not be specified by a user. The implementations improve accuracy of the translation and avoid errors introduced by language identification when recognizing unknown languages. The implementations also avoid developing a separate translation engine for each individual source language for a given target language, and therefore save development costs and computing resources.
    Type: Grant
    Filed: July 29, 2016
    Date of Patent: October 23, 2018
    Assignee: Alibaba Group Holding Limited
    Inventors: Kai Song, Weihua Luo, Feng Lin
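One way to picture a translation rule table that does not require the source language to be named is a table keyed by source phrases from any language, as in the sketch below. The rules, scores, and the pick-the-best-candidate step are illustrative assumptions.

```python
# Made-up rule table: source phrases from any language -> scored English candidates.
RULE_TABLE = {
    "bonjour": [("hello", 0.9), ("good day", 0.6)],
    "hallo":   [("hello", 0.8)],
    "merci":   [("thanks", 0.9), ("thank you", 0.85)],
}

def translate(source_text):
    output = []
    for token in source_text.lower().split():
        candidates = RULE_TABLE.get(token, [(token, 0.0)])   # unknown tokens pass through
        best, _ = max(candidates, key=lambda c: c[1])        # keep the highest-scoring candidate
        output.append(best)
    return " ".join(output)

print(translate("Bonjour merci"))   # "hello thanks"
```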
  • Patent number: 10108602
    Abstract: An approach is provided to discover new portmanteaus, such as when ingesting documents into a question answering (QA) system. The approach works by analyzing words included in electronic documents and identifying words that are possible portmanteaus. To analyze a portmanteau found in a document, the approach identifies morphemes that are included in the identified portmanteau and candidate words that correspond to each of the identified morphemes. A meaning for the new portmanteau is then derived from the meanings of the candidate words.
    Type: Grant
    Filed: August 27, 2017
    Date of Patent: October 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Corville O. Allen, Albert A. Chung, Andrew R. Freed, Sorabh Murgai
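A hedged sketch of the morpheme-matching step described above: split the candidate word at each position and look for vocabulary words that share the resulting prefix and suffix, from which a meaning for the blend could then be derived. The vocabulary and the overlap rule are illustrative, not the QA ingestion pipeline itself.

```python
# Illustrative prefix/suffix matching for a candidate portmanteau.
def candidate_blends(word, vocabulary, min_part=2):
    matches = []
    for i in range(min_part, len(word) - min_part + 1):
        prefix, suffix = word[:i], word[i:]
        starts = [v for v in sorted(vocabulary) if v.startswith(prefix) and v != word]
        ends = [v for v in sorted(vocabulary) if v.endswith(suffix) and v != word]
        if starts and ends:
            # A meaning for the blend could be derived from the matched words' meanings.
            matches.append((prefix, suffix, starts[0], ends[0]))
    return matches

vocab = {"breakfast", "lunch", "smoke", "fog", "motor", "hotel"}
print(candidate_blends("brunch", vocab))   # [('br', 'unch', 'breakfast', 'lunch')]
```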
  • Patent number: 10102198
    Abstract: Examples of techniques for generating a plurality of action items from the transcript of a meeting are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method comprises chunking the meeting transcript into a plurality of chunks using a meeting topic model. The computer-implemented method also comprises performing, by a processor, information extraction on the plurality of chunks to extract action item information from the plurality of chunks. The computer-implemented method further comprises generating the plurality of action items based on the extracted action item information.
    Type: Grant
    Filed: December 8, 2015
    Date of Patent: October 16, 2018
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Tara Astigarraga, Ossama S. Emam, Al Hamid, Sara Noeman, Kimberly G. Starks
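As a minimal illustration of the chunk-then-extract flow above, the sketch below reduces topic chunking to splitting on blank lines and action-item extraction to a regular-expression pattern; both are stand-ins for the patent's topic model and information extraction.

```python
# Stand-in chunking and extraction; the pattern and fields are assumptions.
import re

ACTION_PATTERN = re.compile(r"\b(\w+) (?:will|to) (\w[\w ]*)", re.IGNORECASE)

def extract_action_items(transcript):
    chunks = [c.strip() for c in transcript.split("\n\n") if c.strip()]   # crude "topic" chunks
    items = []
    for chunk in chunks:
        for owner, task in ACTION_PATTERN.findall(chunk):
            items.append({"owner": owner, "task": task.strip()})
    return items

print(extract_action_items("Budget review.\n\nAlice will send the revised forecast by Friday."))
```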
  • Patent number: 10102861
    Abstract: Methods, systems, and media for managing a plurality of target devices are provided. A voice command is received by an input associated with a first target device. The voice command includes first voice information and indicates an operation instruction. The first voice information includes identification information. The first target device is specified by referencing a database in which the identification information and a device ID of the first target device are associated. It is determined whether the voice command includes second voice information that identifies a second target device as an operation object for the operation instruction. When the second voice information is not included, the first target device is caused to execute the operation instruction. When the second voice information is included, a control command is transmitted to the second target device for causing the second target device to execute the operation instruction.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: October 16, 2018
    Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
    Inventors: Yoshihiro Kojima, Yoichi Ikeda
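The routing decision in this abstract can be sketched as below. The device registry and command fields are hypothetical; only the first-target-versus-second-target logic follows the abstract.

```python
# Hypothetical registry mapping spoken device names to device IDs.
DEVICE_REGISTRY = {"living room tv": "tv-01", "kitchen speaker": "spk-02"}

def handle_voice_command(command, receiving_device_id):
    # Second voice information present: transmit a control command to the named device.
    target_name = command.get("target_device_name")
    if target_name:
        target_id = DEVICE_REGISTRY.get(target_name.lower())
        return {"send_to": target_id, "operation": command["operation"]}
    # Otherwise the device that received the voice input executes the instruction itself.
    return {"send_to": receiving_device_id, "operation": command["operation"]}

print(handle_voice_command({"operation": "volume_up", "target_device_name": "Kitchen speaker"}, "tv-01"))
print(handle_voice_command({"operation": "volume_up"}, "tv-01"))
```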
  • Patent number: 10096321
    Abstract: Techniques are provided for reverberation compensation for far-field speaker recognition. A methodology implementing the techniques according to an embodiment includes receiving an authentication audio signal associated with speech of a user and extracting features from the authentication audio signal. The method also includes scoring results of application of one or more speaker models to the extracted features. Each of the speaker models is trained based on a training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with that speaker model. The method further includes selecting one of the speaker models, based on the score, and mapping the selected speaker model to a known speaker identification or label that is associated with the user.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: October 9, 2018
    Assignee: Intel Corporation
    Inventors: Gokcen Cilingir, Narayan Biswal
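A sketch of the model-selection step above: score each reverberation-conditioned speaker model against the extracted features and map the winner to its enrolled speaker label. The scoring function here (negative feature distance) and the vector-per-model representation are stand-ins, not the patented scoring.

```python
# Stand-in scoring: each "model" is a reference vector; higher (less negative) score wins.
import numpy as np

def identify_speaker(features, speaker_models):
    """speaker_models: {label: model vector trained on a reverberation-simulated signal}."""
    scores = {label: -float(np.linalg.norm(features - model))
              for label, model in speaker_models.items()}
    best = max(scores, key=scores.get)                 # select the highest-scoring model
    return best, scores[best]                          # label maps back to the known speaker

models = {"alice": np.ones(8), "bob": np.zeros(8)}
print(identify_speaker(np.full(8, 0.9), models))       # -> ('alice', score)
```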
  • Patent number: 10096330
    Abstract: An utterance condition determination device includes a memory configured to store a voice signal of a first speaker and a voice signal of a second speaker, and a processor configured to estimate an average backchannel frequency that represents a backchannel frequency of the second speaker in a period of time from a voice start time of the voice signal of the second speaker to a predetermined time based on the voice signal of the first speaker and the voice signal of the second speaker, to calculate the backchannel frequency of the second speaker for each unit of time based on the voice signal of the first speaker and the voice signal of the second speaker, and to determine a satisfaction level of the second speaker based on the estimated average backchannel frequency and the calculated backchannel frequency.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: October 9, 2018
    Assignee: FUJITSU LIMITED
    Inventors: Sayuri Kohmura, Taro Togawa, Takeshi Otani
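The comparison described above can be sketched as below. Counting backchannels from timestamps, the baseline window, and the simple above/below-average rule are assumptions; the abstract only specifies comparing the estimated average frequency with the per-unit-time frequency to judge satisfaction.

```python
# Sketch: estimate an average backchannel frequency early in the call, then compare
# the per-unit-time frequency against it.
def satisfaction_levels(backchannel_times, call_start, call_end, baseline_window=30.0, unit=10.0):
    baseline = [t for t in backchannel_times if t < call_start + baseline_window]
    avg_freq = len(baseline) / baseline_window                       # average backchannel frequency

    levels = []
    t = call_start + baseline_window
    while t < call_end:
        count = sum(1 for bt in backchannel_times if t <= bt < t + unit)
        freq = count / unit                                          # frequency for this unit of time
        levels.append("satisfied" if freq >= avg_freq else "dissatisfied")
        t += unit
    return levels

print(satisfaction_levels([2, 8, 14, 21, 29, 33, 47], call_start=0, call_end=60))
```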
  • Patent number: 10096318
    Abstract: A system and method for processing speech includes receiving a first information stream associated with speech, the first information stream comprising micro-modulation features and receiving a second information stream associated with the speech, the second information stream comprising features. The method includes combining, via a non-linear multilayer perceptron, the first information stream and the second information stream to yield a third information stream. The system performs automatic speech recognition on the third information stream. The third information stream can also be used for training HMMs.
    Type: Grant
    Filed: August 29, 2017
    Date of Patent: October 9, 2018
    Assignee: NUANCE COMMUNICATIONS, INC.
    Inventors: Enrico Luigi Bocchieri, Dimitrios Dimitriadis
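As a hedged sketch of fusing the two feature streams with a nonlinear multilayer perceptron, the code below concatenates the streams and passes them through a small two-layer network. Layer sizes, the tanh nonlinearity, and the random weights are illustrative only.

```python
# Illustrative two-layer MLP fusion of micro-modulation and standard feature streams.
import numpy as np

rng = np.random.default_rng(0)

def mlp_fuse(micro_modulation_feats, spectral_feats, hidden=32, out=16):
    x = np.concatenate([micro_modulation_feats, spectral_feats])      # joint input stream
    w1 = rng.standard_normal((hidden, x.size)) * 0.1
    w2 = rng.standard_normal((out, hidden)) * 0.1
    h = np.tanh(w1 @ x)                                               # non-linear hidden layer
    return np.tanh(w2 @ h)                                            # third (fused) stream

fused = mlp_fuse(rng.standard_normal(20), rng.standard_normal(40))
print(fused.shape)   # (16,) -- fused features passed on to recognition / HMM training
```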