Patents Examined by Vincent P. Harper
  • Patent number: 9418060
    Abstract: Techniques for obtaining and utilizing a sample translation of a work and evaluating a translator are described herein. The techniques may include obtaining a translation of a portion of a work from a translator of a specified level of experience. The translation may be sent to a reader and feedback may be received from the reader regarding the translation. A determination may be made as to whether to proceed with obtaining a complete translation of the work based on the feedback. In some instances, the translator may be evaluated based on the feedback.
    Type: Grant
    Filed: March 19, 2012
    Date of Patent: August 16, 2016
    Assignee: Amazon Technologies, Inc.
    Inventors: Mark A. Winham, Daniel Leng, Mary Ellen Fullhart, Iliana C Sach, Chong Chung, Troy Fendall
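A minimal sketch of the decision flow this abstract describes — sample translation, reader feedback, then a go/no-go decision — assuming an invented 1-5 feedback scale and threshold (nothing here comes from the patent itself):

```python
def decide_full_translation(feedback_scores, threshold=3.5):
    """Proceed with a complete translation if mean reader feedback
    meets the threshold. The 1-5 scale and 3.5 cutoff are assumptions,
    not values from the patent."""
    if not feedback_scores:
        return False
    return sum(feedback_scores) / len(feedback_scores) >= threshold

def evaluate_translator(feedback_scores):
    """The same feedback can double as a translator evaluation signal."""
    if not feedback_scores:
        return None
    return sum(feedback_scores) / len(feedback_scores)
```

In this sketch the same reader scores drive both the commissioning decision and the translator's running evaluation, mirroring the abstract's dual use of feedback.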
  • Patent number: 9122772
    Abstract: A method for analyzing a large number of messages, wherein the number of messages is reduced based on pattern recognition and pattern simplification, rules for the pattern recognition and pattern simplification are based on a regular grammatical structure, and patterns are sought in the remaining messages, or directly in the original messages, i.e., without prior simplification. Syntactic pattern recognition is used for each type of pattern search, and a finite machine is derivable from the regular grammatical structure underlying each pattern recognition by transforming the mapping rules into a transfer function, such that structural connections between the messages can be displayed graphically.
    Type: Grant
    Filed: May 18, 2009
    Date of Patent: September 1, 2015
    Assignee: Siemens Aktiengesellschaft
    Inventors: Jens Folmer, Uwe Katzke, Dorothea Pantförder, Bernd-Markus Pfeiffer, Birgit Vogel-Heuser
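The reduction step can be illustrated with invented simplification rules: regular-expression rules (drawn from a regular grammar) replace variable fields with placeholders, so structurally identical messages collapse into one pattern with a count. The rules and message formats below are illustrative, not the patent's:

```python
import re

# Invented rules; more specific rules run first so placeholders don't
# clobber each other (e.g. <ID> before the generic <NUM>).
RULES = [
    (re.compile(r"\b[A-Z]\d{3}\b"), "<ID>"),   # alarm IDs like A123
    (re.compile(r"\d+"), "<NUM>"),             # any remaining number
]

def simplify(message):
    """Apply the simplification rules in order."""
    for pattern, placeholder in RULES:
        message = pattern.sub(placeholder, message)
    return message

def reduce_messages(messages):
    """Collapse a message stream into patterns with occurrence counts."""
    counts = {}
    for m in messages:
        key = simplify(m)
        counts[key] = counts.get(key, 0) + 1
    return counts
```

Two messages differing only in a numeric field thus reduce to a single pattern, shrinking the set before any further pattern search.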
  • Patent number: 9098488
    Abstract: A communication object including a plurality of object words may be received. The communication object may be parsed to identify each of the object words as tokens. A first natural language and at least one embedded natural language different from the first natural language may be determined for the plurality of object words, based on a language analysis of the tokens. Tokens associated with the first natural language and tokens included in embedded word phrases associated with the embedded natural language may be translated, via a translating device processor, to a target natural language, based on at least one context associated with the communication object.
    Type: Grant
    Filed: April 3, 2011
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ahmed Abdul Hamid, Kareem Darwish
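As a toy stand-in for the per-token language analysis, tokens can be tagged by Unicode script, e.g. separating Arabic from Latin tokens in mixed text. This is a crude heuristic for illustration only; the patent's analysis is more general:

```python
def tag_token(token):
    """Tag a token as 'arabic' or 'latin' by its first letter — a crude
    script-based proxy for the language analysis in the abstract."""
    for ch in token:
        if '\u0600' <= ch <= '\u06FF':   # Arabic Unicode block
            return 'arabic'
        if ch.isalpha():
            return 'latin'
    return 'other'

def split_by_language(text):
    """Return (token, language) pairs for whitespace-separated tokens;
    downstream, tokens in the embedded language would be routed to
    translation."""
    return [(tok, tag_token(tok)) for tok in text.split()]
```

A real system would use statistical language identification rather than script ranges, since many language pairs share a script.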
  • Patent number: 9093080
    Abstract: Provided is a bandwidth extension method which reduces the amount of computation required for bandwidth extension and suppresses quality deterioration in the extended bandwidth. In the bandwidth extension method: a low frequency bandwidth signal is transformed into a QMF domain to generate a first low frequency QMF spectrum; pitch-shifted signals are generated by applying different shifting factors on the low frequency bandwidth signal; a high frequency QMF spectrum is generated by time-stretching the pitch-shifted signals in the QMF domain; the high frequency QMF spectrum is modified; and the modified high frequency QMF spectrum is combined with the first low frequency QMF spectrum.
    Type: Grant
    Filed: June 6, 2011
    Date of Patent: July 28, 2015
    Assignee: Panasonic Intellectual Property Corporation of America
    Inventors: Tomokazu Ishikawa, Takeshi Norimatsu, Huan Zhou, Kok Seng Chong, Haishan Zhong
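The pitch-shifting step alone can be sketched as resampling with a shifting factor via linear interpolation. This illustrates only that one step, not the QMF transform or time-stretching, and the interpolation scheme is an assumption:

```python
def pitch_shift(samples, factor):
    """Resample by `factor` with linear interpolation; factor > 1 raises
    pitch when the result is replayed at the original rate. Sketch of
    the shifting step only, not the patented QMF-domain processing."""
    n = int(len(samples) / factor)
    out = []
    for i in range(n):
        pos = i * factor
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a * (1 - frac) + b * frac)
    return out
```

Applying several different factors yields the family of pitch-shifted signals the abstract mentions, each shorter or longer than the original, which is why a time-stretching stage follows.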
  • Patent number: 9093151
    Abstract: A regular expression matcher system, including: a deterministic finite state machine (DFSM); a ternary content addressable memory (TCAM) matcher to compare a word stored at the TCAM matcher to an input stream, wherein the word determines a state-to-state transition of the DFSM from a comparison result; a programmable logic connected to an output of the TCAM matcher to identify a next state in the DFSM based on the comparison result; a state register to update a current state of the DFSM to the next state; and a collection data structure coupled to the TCAM matcher and the programmable logic to store a sequence of required state transitions for the DFSM, wherein the programmable logic determines a next required state transition to be matched from the sequence.
    Type: Grant
    Filed: June 13, 2012
    Date of Patent: July 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Richard F. Freitas, Robert K. Montoye, Rajendra Shinde
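The TCAM-backed transition logic can be sketched in software: each entry is a (state, pattern) pair where `?` is a wildcard position (the "ternary" aspect), entries are searched in priority order, and the first match supplies the next state. The entries and alphabet below are invented for illustration:

```python
def tcam_match(pattern, symbol):
    """Ternary compare: '?' matches any character in that position."""
    return (len(pattern) == len(symbol) and
            all(p == '?' or p == s for p, s in zip(pattern, symbol)))

def run_dfsm(entries, accept_states, words):
    """entries: list of ((state, pattern), next_state); the first
    matching entry wins, mirroring TCAM priority order."""
    state = 0
    for w in words:
        for (s, pat), nxt in entries:
            if s == state and tcam_match(pat, w):
                state = nxt
                break
        else:
            state = 0  # no entry matched: fall back to the start state
    return state in accept_states

# Invented two-step sequence: any 'ab?' word, then 'cd'.
entries = [((0, 'ab?'), 1), ((1, 'cd'), 2)]
```

The inner loop plays the role of the TCAM lookup plus the programmable next-state logic; the required-transition sequence in the abstract corresponds to the chain of states the entries encode.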
  • Patent number: 9076447
    Abstract: Streaming audio is received. The streaming audio includes a frame having a plurality of samples. An energy estimate is obtained for the plurality of samples. The energy estimate is compared to at least one threshold. In addition, a band-pass estimate of the signal is determined. An energy estimate is obtained for the band-passed plurality of samples. The two energy estimates are each compared to at least one threshold. Based upon the comparison operation, a determination is made as to whether speech is detected.
    Type: Grant
    Filed: October 23, 2014
    Date of Patent: July 7, 2015
    Inventors: Dibyendu Nandy, Yang Li, Henrick Thomsen, Claus Furst
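The two-estimate comparison can be sketched directly; here a first-difference filter stands in for the band-pass stage, and both thresholds are invented:

```python
def frame_energy(samples):
    """Mean squared sample value of one frame."""
    return sum(s * s for s in samples) / len(samples)

def band_pass(samples):
    """First difference as a crude high-pass stand-in for the
    band-pass estimate in the abstract (illustrative only)."""
    return [b - a for a, b in zip(samples, samples[1:])]

def is_speech(samples, full_thresh=0.01, band_thresh=0.005):
    """Declare speech when both the full-band and band-passed energy
    estimates exceed their thresholds (threshold values are invented)."""
    return (frame_energy(samples) > full_thresh and
            frame_energy(band_pass(samples)) > band_thresh)
```

Requiring both estimates to clear their thresholds is what lets the detector reject broadband hum (low band-passed energy) as well as silence (low full-band energy).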
  • Patent number: 9047862
    Abstract: An audio apparatus including: a decorrelator for generating decorrelated signals by applying, to the audio signals included in a multi-channel signal, a phase shifting value adjusted based on a correlation difference between those audio signals; and a speaker set including at least two speakers for outputting acoustic signals corresponding to the decorrelated signals.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: June 2, 2015
    Inventors: Jae-hoon Jeong, So-young Jeong, Jeong-su Kim, Jung-eun Park, Woo-jung Lee
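A rough sketch of correlation-driven decorrelation: compute the normalized correlation between two channels, then apply a delay (standing in for the phase shift) scaled by how correlated the channels already are. The delay-based shift and scaling rule are assumptions, not the patented method:

```python
def correlation(x, y):
    """Normalized cross-correlation at lag zero."""
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den if den else 0.0

def decorrelate(x, y, max_delay=4):
    """Delay one channel (a crude stand-in for a phase shift) by an
    amount scaled by the measured correlation: highly correlated
    channels get the largest shift."""
    delay = max(0, int(round(max_delay * correlation(x, y))))
    return x, [0.0] * delay + y[:len(y) - delay]
```

Identical channels get the maximum shift, while already-decorrelated channels are left nearly untouched, matching the adaptive spirit of the abstract.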
  • Patent number: 9047275
    Abstract: Computer-implemented systems and methods align fragments of a first text with corresponding fragments of a second text, which is a translation of the first text. One preferred embodiment preliminarily divides the first and second texts into fragments; generates a hypothesis about the correspondence between the fragments of the first and second texts; performs a lexico-morphological analysis of the fragments using linguistic descriptions; performs a syntactic analysis of the fragments using linguistic descriptions and generates syntactic structures for the fragments; generates semantic structures for the fragments; and estimates the degree of correspondence between the semantic structures.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: June 2, 2015
    Assignee: ABBYY InfoPoisk LLC
    Inventors: Tatiana Parfentieva, Anton Krivenko, Konstantin Zuev, Konstantin Anisimovich, Vladimir Selegey
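As a greatly simplified stand-in for the lexico-morphological, syntactic, and semantic comparison, fragment correspondence can be scored by character-length ratio (in the spirit of classic length-based aligners). Everything below is an illustrative simplification, not the ABBYY method:

```python
def length_score(src, tgt, ratio=1.0):
    """Score a candidate pairing by how close the length ratio is to an
    expected translation ratio (1.0 here; real pairs differ). 1.0 means
    a perfect length match, 0.0 a total mismatch."""
    a, b = len(src), len(tgt)
    if max(a * ratio, b) == 0:
        return 1.0
    return 1.0 - abs(a * ratio - b) / max(a * ratio, b)

def align(src_fragments, tgt_fragments):
    """Greedy 1-to-1 alignment of equal-length fragment lists, with a
    correspondence estimate per pair."""
    return [(s, t, round(length_score(s, t), 2))
            for s, t in zip(src_fragments, tgt_fragments)]
```

The patent's pipeline replaces this length heuristic with semantic-structure comparison, which is what lets it align freely reordered translations that length alone would mis-score.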
  • Patent number: 9043211
    Abstract: In a mobile device, a bone conduction or vibration sensor is used to detect the user's speech, and the resulting output is used as the source for a low power Voice Trigger (VT) circuit that can activate the Automatic Speech Recognition (ASR) of the host device. This invention is applicable to mobile devices such as wearable computers with head mounted displays, mobile phones, and wireless headsets and headphones which use speech recognition for entering input commands and control. The speech sensor can be a bone conduction microphone used to detect sound vibrations in the skull, or a vibration sensor used to detect sound pressure vibrations from the user's speech. This VT circuit can be independent of any audio components of the host device and can therefore be designed to consume ultra-low power. Hence, this VT circuit can be active when the host device is in a sleeping state and can be used to wake the host device on detection of speech from the user.
    Type: Grant
    Filed: May 8, 2014
    Date of Patent: May 26, 2015
    Assignee: DSP GROUP LTD.
    Inventors: Moshe Haiut, Arie Heiman, Uri Yehuday
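The wake-on-speech behavior can be sketched as a tiny energy detector watching the sensor stream; the threshold and frame size are illustrative, and a real VT circuit implements this in ultra-low-power hardware rather than software:

```python
class VoiceTrigger:
    """Sketch of the low-power voice trigger: monitor a bone-conduction
    or vibration sensor stream and wake the host ASR when short-term
    energy crosses a threshold (threshold value is invented)."""
    def __init__(self, threshold=0.02):
        self.threshold = threshold
        self.host_awake = False

    def feed(self, frame):
        """Process one sensor frame; returns whether the host is awake."""
        energy = sum(s * s for s in frame) / len(frame)
        if energy > self.threshold:
            self.host_awake = True  # wake the host, hand off to ASR
        return self.host_awake
```

Because the detector only needs the vibration sensor, the host's audio path can stay powered down until `feed` reports a wake event.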
  • Patent number: 9037460
    Abstract: Dynamic features are utilized with CRFs to handle long-distance dependencies of output labels. The dynamic features present a probability distribution involved in explicit distance from/to a special output label that is pre-defined according to each application scenario. Besides the number of units in the segment (from the previous special output label to the current unit), the dynamic features may also include the sum of any basic features of units in the segment. Since the added dynamic features are involved in the distance from the previous specific label, the searching lattice associated with Viterbi searching is expanded to distinguish the nodes with various distances. The dynamic features may be used in a variety of different applications, such as Natural Language Processing, Text-To-Speech and Automatic Speech Recognition. For example, the dynamic features may be used to assist in prosodic break and pause prediction.
    Type: Grant
    Filed: March 28, 2012
    Date of Patent: May 19, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jian Luan, Linfang Wang, Hairong Xia, Sheng Zhao, Daniela Braga
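The core dynamic feature — explicit distance from the previous special output label — can be sketched as follows; the label name `'B'` (for a prosodic break) is invented for illustration:

```python
def dynamic_features(labels):
    """For each position, the number of units since the last special
    label 'B' — the explicit-distance dynamic feature described above.
    The label inventory is an assumption, not the patent's."""
    feats, dist = [], 0
    for lab in labels:
        feats.append(dist)
        dist = 0 if lab == 'B' else dist + 1
    return feats
```

In training and decoding, this distance becomes part of the feature vector at each node, which is why the abstract notes that the Viterbi lattice must be expanded to keep nodes with different distances apart.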
  • Patent number: 9009027
    Abstract: Computer-implemented systems and methods are provided for determining an overall mood score of a document. For example, the document is received from a computer-readable medium. A text segment in a document is identified to be indicative of a mood of the document. The text segment is mapped to a mood scale among a predetermined set of mood scales. A mood weight associated with the mood scale for the text segment is generated. An overall mood score of the document is determined based at least in part on the mood weight.
    Type: Grant
    Filed: May 30, 2012
    Date of Patent: April 14, 2015
    Assignee: SAS Institute Inc.
    Inventors: Thomas Lehman, Jody Porowski, Bruce Monroe Mills, Michael T. Brooks, Heather Michelle Goodykoontz
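The mapping from text segments to mood scales and weights can be sketched with a toy lexicon; the scales, weights, and scoring rule below are all invented stand-ins for the patented method:

```python
MOOD_SCALES = {  # invented mini-lexicon: segment -> (scale, weight)
    "delighted": ("positive", 0.9),
    "fine": ("positive", 0.3),
    "awful": ("negative", 0.8),
}

def overall_mood(document):
    """Sum signed mood weights over matched segments; sign encodes the
    scale — a simplification of the abstract's per-scale weighting."""
    score = 0.0
    for word in document.lower().split():
        if word in MOOD_SCALES:
            scale, weight = MOOD_SCALES[word]
            score += weight if scale == "positive" else -weight
    return score
```

A document mixing strongly positive and strongly negative segments lands near zero, which is the intended behavior of an overall score rather than a per-sentence one.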
  • Patent number: 9002709
    Abstract: Provided is a voice recognition system capable of correctly estimating the utterance sections that are to be recognized while suppressing negative influences from sound that is not to be recognized. A voice segmenting means calculates voice feature values, and segments voice sections or non-voice sections by comparing the voice feature values with a threshold value. Then, the voice segmenting means determines, to be first voice sections, those segmented sections or sections obtained by adding a margin to the front and rear of each of those segmented sections. On the basis of voice and non-voice likelihoods, a search means determines, to be second voice sections, sections to which voice recognition is to be applied. A parameter updating means updates the threshold value and the margin. The voice segmenting means determines the first voice sections by using the one of the threshold value and the margin which has been updated by the parameter updating means.
    Type: Grant
    Filed: November 26, 2010
    Date of Patent: April 7, 2015
    Assignee: NEC Corporation
    Inventor: Takayuki Arakawa
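The first-stage segmentation — threshold the feature values, then widen each detected run by a margin — can be sketched directly (feature values, threshold, and margin here are illustrative):

```python
def first_voice_sections(energies, threshold, margin):
    """Mark frames whose feature value exceeds the threshold as voice,
    then widen each voiced run by `margin` frames on both sides, as in
    the abstract's first stage. Units are frames."""
    voiced = [e > threshold for e in energies]
    out = [False] * len(voiced)
    for i, v in enumerate(voiced):
        if v:
            lo = max(0, i - margin)
            hi = min(len(out), i + margin + 1)
            for j in range(lo, hi):
                out[j] = True
    return out
```

The second stage would then rescore only these candidate sections with voice/non-voice likelihoods, and the parameter updater would tune `threshold` and `margin` between utterances.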
  • Patent number: 8996351
    Abstract: Techniques are provided for translating a document that was scanned by a multi-function peripheral (MFP). A server within a computing cloud receives an MFP identifier and processed scan data that results from optical character recognition and/or natural language translation having been performed on scan data produced by the MFP. In response to the receipt of the processed scan data at the server, the server selects a set of rules that is mapped to a context to which the MFP identifier is mapped. Corrected processed scan data is generated by applying the set of rules to the processed scan data that was received by the server. Manual corrections made to the corrected processed scan data may be used to update the set of rules so that those corrections are also made to other processed scan data produced by MFPs having identifiers mapped to the same context.
    Type: Grant
    Filed: August 24, 2011
    Date of Patent: March 31, 2015
    Assignee: Ricoh Company, Ltd.
    Inventor: Deeksha Sharma
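The context-mapped rule application can be sketched as two lookups and a substitution pass; the contexts, rules, and MFP identifier below are all hypothetical:

```python
CONTEXT_RULES = {  # invented: correction rules keyed by context
    "legal":   [("recieve", "receive"), ("§ ", "Section ")],
    "medical": [("recieve", "receive"), ("Rx:", "Prescription:")],
}
MFP_CONTEXT = {"mfp-042": "legal"}  # hypothetical MFP-to-context map

def correct(mfp_id, text):
    """Apply the rule set for the context this MFP is mapped to;
    unknown MFPs get no corrections."""
    context = MFP_CONTEXT.get(mfp_id, "")
    for wrong, right in CONTEXT_RULES.get(context, []):
        text = text.replace(wrong, right)
    return text
```

In the patented flow, manual fixes to the corrected output would feed back into `CONTEXT_RULES`, so every MFP mapped to the same context benefits from each correction.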
  • Patent number: 8977550
    Abstract: Part units of speech information are arranged in a predetermined order to generate a sentence unit of a speech information set. Each speech part unit of the speech information is assigned one of two attributes: "interrupt possible after reproduction," with which reproduction of priority interrupt information can be started after that speech part unit is reproduced, or "interrupt impossible after reproduction," with which reproduction of the priority interrupt information cannot be started even after that speech part unit is reproduced. When priority interrupt information having a higher priority rank than the speech information set currently being reproduced is inputted, if the attribute of the speech information being reproduced at that point in time is "interrupt impossible after reproduction," the priority interrupt information is reproduced after the speech information is reproduced.
    Type: Grant
    Filed: May 6, 2011
    Date of Patent: March 10, 2015
    Assignee: Honda Motor Co., Ltd.
    Inventor: Tokujiro Kizaki
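The deferral logic can be sketched as a playback loop that holds a pending interrupt until it reaches a part unit marked interruptible-after-reproduction; the part texts and flags below are invented:

```python
def play_sequence(parts, interrupt_at, interrupt_msg):
    """parts: list of (text, interruptible_after) tuples. An interrupt
    arriving at index `interrupt_at` plays only after the first
    subsequent part whose flag permits it; otherwise it waits until
    the sequence ends."""
    played = []
    pending = None
    for i, (text, ok_after) in enumerate(parts):
        played.append(text)
        if i == interrupt_at:
            pending = interrupt_msg
        if pending and ok_after:
            played.append(pending)
            pending = None
    if pending:
        played.append(pending)
    return played
```

Marking tightly coupled parts (e.g. "turn" + "left") as non-interruptible keeps a high-priority alert from splitting a guidance phrase mid-instruction.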
  • Patent number: 8965776
    Abstract: A system is configured to: receive a word on which to perform error correction; obtain segments, from the word, each segment including a respective subset of samples; update, on a per segment basis, the word based on extrinsic information associated with a previous word; identify sets of least reliable positions (LRPs) associated with the segments; create a subset of LRPs based on a subset of samples within the sets of LRPs; generate candidate words based on the subset of LRPs; identify errors within the word or the candidate words; update, using the extrinsic information, a segment of the word that includes an error; determine distances between the candidate words and the updated word that includes the updated segment; identify best words associated with shortest distances; and perform error correction, on a next word, using other extrinsic information that is based on the best words.
    Type: Grant
    Filed: March 30, 2012
    Date of Patent: February 24, 2015
    Assignee: Infinera Corporation
    Inventors: Stanley H. Blakey, Alexander Kaganov, Yuejian Wu, Sandy Thomson
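The candidate-generation step over least reliable positions can be sketched Chase-style: take the hard-decision bits, find the n positions with the lowest reliability, and enumerate every bit pattern in those positions. This illustrates only that one stage of the decoder, with invented inputs:

```python
from itertools import product

def candidates(hard_bits, reliabilities, n_lrp=2):
    """Generate candidate words by trying every bit pattern in the
    n_lrp least reliable positions — a Chase-style sketch of the
    LRP step in the abstract."""
    lrps = sorted(range(len(hard_bits)),
                  key=lambda i: reliabilities[i])[:n_lrp]
    words = []
    for flips in product([0, 1], repeat=n_lrp):
        w = list(hard_bits)
        for pos, bit in zip(lrps, flips):
            w[pos] = bit
        words.append(w)
    return words
```

Each candidate would then be checked for errors and scored by distance to the received word; the best candidates supply the extrinsic information carried forward to the next word.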
  • Patent number: 8965775
    Abstract: A method of binary allocation in an enhancement coding/decoding for improving a hierarchical coding/decoding of digital audio signals, including a core coding/decoding in a first frequency band and a band extension coding/decoding in a second frequency band. For a predetermined number of bits to be allocated for the enhancement coding/decoding, a first number of bits is allocated to a coding/decoding for correcting the core coding/decoding in the first frequency band and according to a first mode of coding/decoding and a second number of bits is allocated to an enhancement coding/decoding for improving the extension coding/decoding in the second frequency band and according to a second mode of coding/decoding. Also provided are an allocation module implementing the method and a coder and decoder including this module.
    Type: Grant
    Filed: June 25, 2010
    Date of Patent: February 24, 2015
    Assignee: Orange
    Inventors: David Virette, Pierre Berthet
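The binary allocation itself reduces to splitting one bit budget between the two enhancement targets; the 60/40 share below is purely illustrative (the patent adapts the split per frame):

```python
def allocate_bits(total_bits, core_share=0.6):
    """Split the enhancement-layer budget between correcting the core
    coding in the first band and improving the band extension in the
    second band. The fixed share is an assumption for illustration."""
    core_bits = int(round(total_bits * core_share))
    return core_bits, total_bits - core_bits
```

A coder and decoder must compute the same split from the same inputs so no side information is needed to describe the allocation.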
  • Patent number: 8949115
    Abstract: In an audio output terminal device, a buffer control unit adjusts the buffer size of a jitter buffer in accordance with the setting of a sound output mode instructed in an instruction receiving unit. If the instruction receiving unit acknowledges an instruction for setting an audio output mode that requires low delay in outputting sound, the buffer control unit reduces the buffer size of the jitter buffer. Further, the buffer control unit controls, in accordance with the instructed setting of the sound output mode, timing for allowing a media buffer to transmit one or more voice packets to the jitter buffer.
    Type: Grant
    Filed: September 16, 2010
    Date of Patent: February 3, 2015
    Assignees: Sony Corporation, Sony Computer Entertainment Inc.
    Inventors: Kiyoto Shibuya, Jin Nakamura, Katsuhiko Shibata, Kazuhiro Yanase, Akitoshi Yamaguchi, Akiyoshi Morita, Kouichi Kazama
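The mode-driven buffer sizing can be sketched as a simple mapping from output mode to jitter-buffer size; the mode names and millisecond values are invented:

```python
MODE_BUFFER_MS = {  # invented mode-to-buffer-size mapping
    "voice_chat": 40,   # low-delay mode: small jitter buffer
    "playback": 200,    # delay-tolerant mode: large buffer absorbs jitter
}

def buffer_size_for(mode, default_ms=100):
    """Return the jitter-buffer size (ms) for the instructed sound
    output mode, falling back to a default for unknown modes."""
    return MODE_BUFFER_MS.get(mode, default_ms)
```

Shrinking the buffer trades jitter resilience for latency, which is why the low-delay mode accepts the smaller size; the same mode would also gate how eagerly the media buffer forwards voice packets.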
  • Patent number: 8942977
    Abstract: The present invention defines a pitch-synchronous parametrical representation of speech signals as the basis of speech recognition, and discloses methods of generating the said pitch-synchronous parametrical representation from speech signals. The speech signal first passes through a pitch-mark picking program to identify the pitch periods. The speech signal is then segmented into pitch-synchronous frames. An ends-matching program equalizes the values at the two ends of the waveform in each frame. Using Fourier analysis, the speech signal in each frame is converted into a pitch-synchronous amplitude spectrum. Using Laguerre functions, the said amplitude spectrum is converted into a unit vector, referred to as the timbre vector. By using a database of correlated phonemes and timbre vectors, the most likely phoneme sequence of an input speech signal can be decoded in the acoustic stage of a speech recognition system.
    Type: Grant
    Filed: March 17, 2014
    Date of Patent: January 27, 2015
    Inventor: Chengjun Julian Chen
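The segmentation and ends-matching steps can be sketched together: cut at the pitch marks, then subtract a linear ramp from each frame so its two endpoints are equal. The linear-ramp ends-matching is an assumption about how the equalization might be done, not the patented algorithm:

```python
def pitch_frames(signal, pitch_marks):
    """Cut the signal into pitch-synchronous frames at the given pitch
    marks, then equalize each frame's endpoints by subtracting a linear
    ramp (a sketch of the ends-matching step)."""
    frames = []
    for a, b in zip(pitch_marks, pitch_marks[1:]):
        frame = signal[a:b]
        n = len(frame)
        step = (frame[-1] - frame[0]) / (n - 1) if n > 1 else 0.0
        frames.append([s - i * step for i, s in enumerate(frame)])
    return frames
```

Matching the frame ends removes the discontinuity that would otherwise smear the subsequent Fourier analysis, which is the point of doing it before computing the amplitude spectrum.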
  • Patent number: 8935169
    Abstract: According to one embodiment, an electronic apparatus includes an acquiring module and a display process module. The acquiring module is configured to acquire information regarding a plurality of persons using information of video content data, the plurality of persons appearing in a plurality of sections in the video content data. The display process module is configured to display (i) a time bar representative of a sequence of the video content data, (ii) information regarding a first person appearing in a first section of the sections, and (iii) information regarding a second person different from the first person, the second person appearing in a second section of the sections. A first area of the time bar, corresponding to the first section, is displayed in a first form, and a second area of the time bar, corresponding to the second section, is displayed in a second form different from the first form.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: January 13, 2015
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Tetsuya Fujii
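The mapping from appearance sections to time-bar areas is a simple proportional projection; the frame counts and bar width below are illustrative:

```python
def bar_areas(sections, total_frames, bar_width=100):
    """Map each (person, start_frame, end_frame) section to a
    (person, x0, x1) area on a time bar of bar_width pixels; each
    person's areas would then be drawn in a distinct form."""
    return [(p, s * bar_width // total_frames, e * bar_width // total_frames)
            for p, s, e in sections]
```

Rendering each person's areas in a different form (color, hatch, etc.) on the same bar gives the at-a-glance view of who appears where in the video.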
  • Patent number: 8909525
    Abstract: An interactive voice recognition electronic device converts a received voice signal to a text, and searches a voice database to find a matched voice text for the converted text. The matched voice text is taken as the recognized voice text of the voice signal if the matched voice text exists in the voice database. The electronic device obtains a predetermined number of similar voice texts if no matched voice text exists in the voice database. The electronic device converts the predetermined number of similar voice texts to voice signals, outputs the converted voice signals in turn, and selects one of the similar voice texts as the recognized voice text according to the selection of the user. The electronic device obtains the associated answer text of the recognized voice text in the voice database and converts the answer text to voice signals.
    Type: Grant
    Filed: August 9, 2011
    Date of Patent: December 9, 2014
    Assignees: Fu Tai Hua Industry (Shenzhen) Co., Ltd., Hon Hai Precision Industry Co., Ltd.
    Inventors: Yu-Kai Xiong, Xin Lu, Shih-Fang Wong, Dong-Sheng Lv, Xin-Hua Li, Yu-Yong Zhang, Jian-Jian Zhu
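The exact-match-then-similar-candidates fallback can be sketched with `difflib` standing in for the device's similarity search; the database contents are invented:

```python
import difflib

VOICE_DB = {  # invented voice-text -> answer-text database
    "what time is it": "It is noon.",
    "what day is it": "It is Monday.",
}

def recognize(converted_text, n_similar=3):
    """Return (recognized_text, answer) on an exact match; otherwise
    return (similar_candidates, None) so the user can pick one, as in
    the abstract's fallback path."""
    if converted_text in VOICE_DB:
        return converted_text, VOICE_DB[converted_text]
    similar = difflib.get_close_matches(converted_text, list(VOICE_DB),
                                        n=n_similar)
    return similar, None
```

Once the user selects a candidate, the device would look its answer text up in `VOICE_DB` and synthesize it back to speech.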