Formant Patents (Class 704/209)
  • Patent number: 7933768
    Abstract: A vocoder system for improving the performance expression of an output sound while lightening the computational load. The system includes formant detection means and division means in which the center frequencies have been fixed. The modulation levels of the frequency bands divided by the division means are set by a setting means based on the levels of the corresponding frequency bands detected by the formant detection means and on formant information with which the formants are changed. It is therefore possible to improve the performance expression of the output sound with a light computational load, without the need to recalculate the filter coefficients of each filter for every sample in order to change the center frequency and bandwidth of each of the filters comprising the division means.
    Type: Grant
    Filed: March 23, 2004
    Date of Patent: April 26, 2011
    Assignee: Roland Corporation
    Inventor: Tadao Kikumoto
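    Illustrative sketch: the key point of the abstract above is that band levels are analyzed in a fixed filter bank and re-applied with a formant-shifting modulation, so no filter coefficients are recomputed per sample. The Python sketch below shows one way to read that idea; the filter design, band count, envelope smoother, and the formant_shift remapping are all assumptions, not the patented implementation.

      import numpy as np
      from scipy.signal import butter, lfilter

      def fixed_band_vocoder(modulator, carrier, fs, n_bands=16, formant_shift=1.2):
          """Channel-vocoder sketch with fixed band centers (illustrative only).

          `modulator` and `carrier` are float arrays of equal length at rate `fs`.
          Band levels measured on the modulator are applied to the carrier, read
          from shifted bands to move the apparent formants, so the filters
          themselves never change.
          """
          edges = np.geomspace(100.0, 0.45 * fs, n_bands + 1)          # fixed band edges
          bank = [butter(2, (lo, hi), btype="bandpass", fs=fs)
                  for lo, hi in zip(edges[:-1], edges[1:])]

          mod_bands = [lfilter(b, a, modulator) for b, a in bank]
          car_bands = [lfilter(b, a, carrier) for b, a in bank]

          # Per-band level: low-pass filtered rectified modulator band.
          smooth_b, smooth_a = butter(2, 50.0, fs=fs)
          env = [lfilter(smooth_b, smooth_a, np.abs(m)) for m in mod_bands]

          centers = np.sqrt(edges[:-1] * edges[1:])
          out = np.zeros_like(carrier, dtype=float)
          for k in range(n_bands):
              # Take the level from the band nearest centers[k] / formant_shift,
              # which shifts the formant structure without redesigning any filter.
              src = int(np.argmin(np.abs(centers - centers[k] / formant_shift)))
              out += car_bands[k] * env[src]
          return out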
  • Patent number: 7912709
    Abstract: A degree of voicing is extracted using the characteristic of harmonic peaks existing in a constant period by converting an input speech or audio signal to a speech signal of the frequency domain, selecting the greatest peak in a first pitch period of the converted speech signal as a harmonic peak, thereafter selecting a peak having the greatest spectral value among peaks existing in each peak search range of the speech signal as a harmonic peak, extracting harmonic spectral envelope information by performing interpolation of the selected harmonic peaks, extracting non-harmonic spectral envelope information by performing interpolation of the non-harmonic peaks, and comparing the two pieces of envelope information to each other.
    Type: Grant
    Filed: April 4, 2007
    Date of Patent: March 22, 2011
    Assignee: Samsung Electronics Co., Ltd
    Inventor: Hyun-Soo Kim
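    Illustrative sketch: the voicing measure above rests on splitting spectral peaks into harmonic and non-harmonic sets and comparing their interpolated envelopes. A minimal reading in Python follows; the window, the f0/4 harmonic tolerance, and the final energy-ratio score are assumptions rather than the patent's exact definitions.

      import numpy as np

      def degree_of_voicing(frame, fs, f0):
          """Sketch: compare harmonic vs. non-harmonic spectral envelopes."""
          spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
          freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)

          # Local maxima of the magnitude spectrum.
          peaks = [i for i in range(1, len(spec) - 1)
                   if spec[i] > spec[i - 1] and spec[i] >= spec[i + 1]]

          harmonic, other = [], []
          for i in peaks:
              # Treat a peak as harmonic if it lies within f0/4 of a pitch multiple.
              k = max(1, round(freqs[i] / f0))
              (harmonic if abs(freqs[i] - k * f0) < f0 / 4 else other).append(i)
          if not harmonic or not other:
              return 1.0 if harmonic else 0.0

          grid = freqs[1:-1]
          h_env = np.interp(grid, freqs[harmonic], spec[harmonic])
          n_env = np.interp(grid, freqs[other], spec[other])
          # Assumed metric: share of envelope energy explained by the harmonics.
          return float(np.sum(h_env ** 2) / (np.sum(h_env ** 2) + np.sum(n_env ** 2)))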
  • Patent number: 7881926
    Abstract: The invention relates to the field of automated processing of speech signals and particularly to a method for tracking the formant frequencies in a speech signal, comprising the steps of: obtaining an auditory image of the speech signal; sequentially estimating formant locations; segmenting the frequency range into sub-regions; smoothing the obtained component filtering distributions; and calculating exact formant locations.
    Type: Grant
    Filed: September 20, 2007
    Date of Patent: February 1, 2011
    Assignee: Honda Research Institute Europe GmbH
    Inventors: Frank Joublin, Martin Heckmann, Claudius Glaeser
  • Patent number: 7860714
    Abstract: The present invention is a detection system for a segment including a specific sound signal, which detects a segment in a stored sound signal similar to a reference sound signal, including: a reference signal spectrogram division portion which divides a reference signal spectrogram into spectrograms of small regions; a small-region reference signal spectrogram coding portion which encodes the small-region reference signal spectrogram to a reference signal small-region code; a small-region stored signal spectrogram coding portion which encodes a small-region stored signal spectrogram to a stored signal small-region code; a similar small-region spectrogram detection portion which detects a small-region spectrogram similar to the small-region reference signal spectrograms based on a degree of similarity of a code; and a degree of segment similarity calculation portion which uses a degree of small-region similarity and calculates a degree of similarity between the segment of the stored signal and the reference signal.
    Type: Grant
    Filed: July 1, 2005
    Date of Patent: December 28, 2010
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hidehisa Nagano, Takayuki Kurozumi, Kunio Kashino
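    Illustrative sketch: the matching scheme above encodes small spectrogram regions as codes and scores segments by code similarity. The sketch below substitutes a deliberately simple code (binarizing each region against its own mean) and an aligned-region comparison; the actual codes and the similar-small-region detection step in the patent are different.

      import numpy as np

      def region_codes(spectrogram, region=(8, 8)):
          """Encode each small region as a binary pattern vs. the region mean (assumed code)."""
          fbins, frames = spectrogram.shape
          fr, tr = region
          codes = []
          for f0 in range(0, fbins - fr + 1, fr):
              for t0 in range(0, frames - tr + 1, tr):
                  block = spectrogram[f0:f0 + fr, t0:t0 + tr]
                  codes.append((block > block.mean()).astype(np.int8).ravel())
          return np.array(codes)

      def segment_similarity(stored_spec, ref_spec, region=(8, 8)):
          """Fraction of matching code bits between two spectrogram segments."""
          a = region_codes(stored_spec, region)
          b = region_codes(ref_spec, region)
          n = min(len(a), len(b))
          return float((a[:n] == b[:n]).mean())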
  • Publication number: 20100318350
    Abstract: A voice band expansion device includes a time-frequency converter that calculates a frequency spectrum of a voice signal having a first frequency band; a separator that extracts, from the frequency spectrum, an envelope amplitude spectrum, a periodic amplitude spectrum, and a random amplitude spectrum; an envelope amplitude spectrum band expander that expands a frequency band of the envelope amplitude spectrum to a second frequency band that is different from the first frequency band; a periodic amplitude spectrum band expander that expands a frequency band of the periodic amplitude spectrum to the second frequency band; a random amplitude spectrum band expander that expands a frequency band of the random amplitude spectrum to the second frequency band; a broadband spectrum calculator that calculates a broadband frequency spectrum having the first frequency band and the second frequency band; and a frequency-time converter that generates a voice signal having the first frequency band and the second frequency band.
    Type: Application
    Filed: May 11, 2010
    Publication date: December 16, 2010
    Applicant: FUJITSU LIMITED
    Inventors: Kaori ENDO, Takeshi Otani, Taro Togawa, Yasuji Ota
  • Patent number: 7818169
    Abstract: A formant frequency estimation method, which yields information important in speech recognition by accelerating a spectrum using a pitch frequency, and an apparatus using the method are provided. That is, the formant frequency estimation method includes preprocessing an input speech signal and generating a spectrum by fast Fourier transforming the preprocessed input speech signal; smoothing the generated spectrum; accelerating the smoothed spectrum; and determining a formant frequency on the basis of the accelerated spectrum.
    Type: Grant
    Filed: January 4, 2007
    Date of Patent: October 19, 2010
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kwang Cheol Oh, Jae-Hoon Jeong, So-Young Jeong
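    Illustrative sketch: reading "accelerating" as taking the second derivative (acceleration) of the smoothed spectrum is an assumption on our part, but it lets the pipeline in the abstract (FFT, smoothing, acceleration, formant decision) be written down concretely. The smoothing kernel, the 200 Hz minimum separation, and the peak-picking rule below are illustrative.

      import numpy as np

      def estimate_formants(frame, fs, n_formants=3, smooth_bins=9):
          """Sketch: FFT -> smooth -> second difference -> formant frequencies."""
          spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
          log_spec = np.log(spec + 1e-12)

          # Moving-average smoothing of the log spectrum.
          kernel = np.ones(smooth_bins) / smooth_bins
          smooth = np.convolve(log_spec, kernel, mode="same")

          accel = np.diff(smooth, n=2)                         # "accelerated" spectrum (assumed reading)
          freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)[1:-1]

          order = np.argsort(accel)                            # most negative curvature first
          picked = sorted(freqs[order[:n_formants * 4]])       # over-pick, then thin
          formants = []
          for f in picked:
              if not formants or f - formants[-1] > 200.0:     # assumed minimum spacing
                  formants.append(float(f))
          return formants[:n_formants]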
  • Patent number: 7818168
    Abstract: A method of measuring the degree of enhancement made to a voice signal by receiving the voice signal, identifying formant regions in the voice signal, computing stationarity for each identified formant region, enhancing the voice signal, identifying formant regions in the enhanced voice signal that correspond to those identified in the received voice signal, computing stationarity for each formant region identified in the enhanced voice signal, comparing corresponding stationarity results for the received and enhanced voice signals, and calculating at least one user-definable statistic of the comparison results as the degree of enhancement made to the received voice signal.
    Type: Grant
    Filed: December 1, 2006
    Date of Patent: October 19, 2010
    Assignee: The United States of America as represented by the Director, National Security Agency
    Inventor: Adolf Cusmariu
  • Patent number: 7756700
    Abstract: Pitch estimation and classification into voiced, unvoiced and transitional speech were performed by a spectro-temporal auto-correlation technique. A peak picking formula was then employed. A weighting function was then applied to the power spectrum. The harmonics weighted power spectrum underwent mel-scaled band-pass filtering, and the log-energy of the filter's output was discrete cosine transformed to produce cepstral coefficients. A within-filter cubic-root amplitude compression was applied to reduce amplitude variation without compromise of the gain invariance properties.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: July 13, 2010
    Assignee: The Regents of the University of California
    Inventors: Kenneth Rose, Liang Gu
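    Illustrative sketch: the feature-extraction chain in the abstract (harmonics-weighted power spectrum, mel filtering with within-filter cubic-root compression, log energies, DCT) maps fairly directly to code. The sketch below assumes the per-bin harmonic weights are supplied by the caller; the filter count, cepstral order, and weight construction are not taken from the patent.

      import numpy as np
      from scipy.fftpack import dct

      def mel(f):
          return 2595.0 * np.log10(1.0 + f / 700.0)

      def mel_inv(m):
          return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

      def harmonic_weighted_cepstrum(power_spec, fs, harmonic_weight,
                                     n_filters=24, n_ceps=13):
          """Sketch: weighted spectrum -> mel filters with cubic-root compression -> DCT."""
          freqs = np.linspace(0.0, fs / 2.0, len(power_spec))
          weighted = power_spec * harmonic_weight              # emphasize harmonic bins

          centers = mel_inv(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
          log_energies = np.zeros(n_filters)
          for k in range(n_filters):
              lo, c, hi = centers[k], centers[k + 1], centers[k + 2]
              tri = np.minimum(np.clip((freqs - lo) / (c - lo), 0.0, 1.0),
                               np.clip((hi - freqs) / (hi - c), 0.0, 1.0))
              # Within-filter cubic-root amplitude compression before summing.
              log_energies[k] = np.log(np.sum(tri * np.cbrt(weighted)) + 1e-12)

          return dct(log_energies, type=2, norm="ortho")[:n_ceps]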
  • Patent number: 7756703
    Abstract: A formant tracking apparatus and a formant tracking method are provided. The formant tracking apparatus includes: a framing unit dividing an input voice signal into a plurality of frames; a linear prediction analyzing unit obtaining linear prediction coefficients for each frame; a segmentation unit segmenting each of the linear prediction coefficients into a plurality of segments; a formant candidate determining unit obtaining formant candidates by using the linear prediction coefficients, and summing the formant candidates for each segment to determine formant candidates for each segment; a formant number determining unit determining a number of tracking formants for each segment among the formant candidates satisfying a predetermined condition; and a tracking unit searching, among the formant candidates belonging to each segment, for as many tracking formants as the number determined by the formant number determining unit.
    Type: Grant
    Filed: October 12, 2005
    Date of Patent: July 13, 2010
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yongbeom Lee, Yuan Yuan Shi, Jaewon Lee
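    Illustrative sketch: the formant candidates mentioned in the abstract are conventionally obtained from the roots of the linear prediction polynomial. The sketch below shows that step only; the segmentation, candidate summing, and per-segment tracking are not reproduced, and the LPC order and frequency limits are assumptions.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def lpc_formant_candidates(frame, fs, order=12):
          """Sketch: autocorrelation LPC -> polynomial roots -> (frequency, bandwidth) pairs."""
          x = frame * np.hamming(len(frame))
          r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
          a = solve_toeplitz(r[:order], r[1:order + 1])        # LPC coefficients
          poly = np.concatenate(([1.0], -a))                   # A(z) = 1 - sum a_k z^-k

          roots = np.roots(poly)
          roots = roots[np.imag(roots) > 0]                    # one of each conjugate pair
          freqs = np.angle(roots) * fs / (2.0 * np.pi)
          bws = -np.log(np.abs(roots) + 1e-12) * fs / np.pi
          return sorted((float(f), float(b)) for f, b in zip(freqs, bws)
                        if 90.0 < f < fs / 2.0 - 90.0)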
  • Patent number: 7684978
    Abstract: The present invention overcomes problems of tandem coding methods such as degradation of speech quality, increased system latency, and increased computation. An apparatus for trans-coding between code excited linear prediction (CELP) type codecs with different bandwidths includes: a formant parameter translating unit for generating output formant parameters by translating formant parameters from the input CELP format to the output CELP format; a formant parameter quantizing unit for receiving the output-format formant parameters and quantizing the output-format formant filter coefficients; an excitation parameter translating unit for generating output excitation parameters by translating excitation parameters from the input CELP format to the output CELP format; and an excitation quantizing unit for receiving the output-format excitation parameters and quantizing them.
    Type: Grant
    Filed: October 30, 2003
    Date of Patent: March 23, 2010
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jongmo Sung, Sang Taick Park, Do Young Kim, Bong Tae Kim
  • Patent number: 7676362
    Abstract: A speech filter (108) enhances the loudness of a speech signal by expanding the formant regions of the speech signal beyond a natural bandwidth of the formant regions. The energy level of the speech signal is maintained so that the filtered speech signal contains the same energy as the pre-filtered signal. By expanding the formant regions of the speech signal on a critical band scale corresponding to human hearing, the listener of the speech signal perceives it to be louder even though the signal contains the same energy.
    Type: Grant
    Filed: December 31, 2004
    Date of Patent: March 9, 2010
    Assignee: Motorola, Inc.
    Inventors: Marc A. Boillot, John G. Harris
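    Illustrative sketch: one standard way to widen formant bandwidths without moving their center frequencies, in the spirit of the abstract above, is to filter the signal with A(z)/A(z/gamma), which scales the LPC poles toward the origin, and then renormalize the energy. The LPC order and gamma below are illustrative, and the critical-band-scale expansion described in the patent is not reproduced.

      import numpy as np
      from scipy.linalg import solve_toeplitz
      from scipy.signal import lfilter

      def expand_formant_bandwidths(speech, fs, order=10, gamma=0.9):
          """Sketch: broaden formants via pole scaling, keep signal energy constant."""
          x = speech * np.hamming(len(speech))
          r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
          a = solve_toeplitz(r[:order], r[1:order + 1])
          A = np.concatenate(([1.0], -a))                      # analysis filter A(z)
          A_wide = A * (gamma ** np.arange(order + 1))         # A(z/gamma): poles pulled inward

          y = lfilter(A, A_wide, speech)                       # whiten with A, reshape with 1/A(z/gamma)
          # Same energy in and out, so any loudness gain is purely perceptual.
          y *= np.sqrt(np.sum(speech ** 2) / (np.sum(y ** 2) + 1e-12))
          return y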
  • Patent number: 7672838
    Abstract: In accordance with the present invention, computer implemented methods and systems are provided for representing and modeling the temporal structure of audio signals. In response to receiving a signal, a time-to-frequency domain transformation is performed on at least a portion of the received signal to generate a frequency domain representation. The time-to-frequency domain transformation converts the signal from a time domain representation to the frequency domain representation. A frequency domain linear prediction (FDLP) is performed on the frequency domain representation to estimate a temporal envelope of the frequency domain representation. Based on the temporal envelope, one or more speech features are generated.
    Type: Grant
    Filed: December 1, 2004
    Date of Patent: March 2, 2010
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Marios Athineos, Hynek Hermansky, Daniel P. W. Ellis
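    Illustrative sketch: frequency domain linear prediction (FDLP) applies linear prediction to a cosine transform of the signal; the resulting all-pole model traces the signal's temporal (Hilbert) envelope rather than its spectrum. The model order and scaling below are illustrative choices, not taken from the patent.

      import numpy as np
      from scipy.fftpack import dct
      from scipy.linalg import solve_toeplitz
      from scipy.signal import freqz

      def fdlp_temporal_envelope(signal, order=20, n_points=None):
          """Sketch: LPC on the DCT of a signal approximates its squared temporal envelope."""
          n_points = n_points or len(signal)
          c = dct(np.asarray(signal, dtype=float), type=2, norm="ortho")

          r = np.correlate(c, c, mode="full")[len(c) - 1:len(c) + order]
          a = solve_toeplitz(r[:order], r[1:order + 1])
          A = np.concatenate(([1.0], -a))
          gain = max(r[0] - np.dot(a, r[1:order + 1]), 1e-12)  # prediction error power

          # Evaluating the all-pole model on a dense grid yields the envelope over time.
          _, h = freqz([np.sqrt(gain)], A, worN=n_points)
          return np.abs(h) ** 2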
  • Patent number: 7653535
    Abstract: A statistical trajectory speech model is constructed where the targets for vocal tract resonances are represented as random vectors and where the mean vectors of the target distributions are estimated using a likelihood function for joint acoustic observation vectors. The target mean vectors can be estimated without formant data. To form the model, time-dependent filter parameter vectors based on time-dependent coarticulation parameters are constructed that are a function of the ordering and identity of the phones in the phone sequence in each speech utterance. The filter parameter vectors are also a function of the temporal extent of coarticulation and of the speaker's speaking effort.
    Type: Grant
    Filed: December 15, 2005
    Date of Patent: January 26, 2010
    Assignee: Microsoft Corporation
    Inventors: Li Deng, Dong Yu, Alejandro Acero
  • Patent number: 7643989
    Abstract: A method and apparatus map a set of vocal tract resonant frequencies, together with their corresponding bandwidths, into a simulated acoustic feature vector in the form of LPC cepstrum by calculating a separate function for each individual vocal tract resonant frequency/bandwidth and summing the result to form an element of the simulated feature vector. The simulated feature vector is applied to a model along with an input feature vector to determine a probability that the set of vocal tract resonant frequencies is present in a speech signal. Under one embodiment, the model includes a target-guided transition model that provides a probability of a vocal tract resonant frequency based on a past vocal tract resonant frequency and a target for the vocal tract resonant frequency. Under another embodiment, the phone segmentation is provided by an HMM system and is used to precisely determine which target value to use at each frame.
    Type: Grant
    Filed: August 29, 2003
    Date of Patent: January 5, 2010
    Assignee: Microsoft Corporation
    Inventors: Li Deng, Alejandro Acero, Issam Bazzi
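    Illustrative sketch: the per-resonance function summed in the abstract can be written with the standard closed-form cepstrum of a complex-conjugate pole pair, c_n = sum_k (2/n) * exp(-pi n b_k / fs) * cos(2 pi n f_k / fs). The code below computes only that simulated feature vector; the target-guided transition model and HMM phone segmentation are not reproduced, and the sampling rate, cepstral order, and example formant values are illustrative.

      import numpy as np

      def vtr_to_lpc_cepstrum(freqs_hz, bws_hz, fs, n_ceps=15):
          """Map vocal tract resonance (frequency, bandwidth) pairs to an LPC cepstrum."""
          n = np.arange(1, n_ceps + 1)[:, None]                # cepstral index (column)
          f = np.asarray(freqs_hz, dtype=float)[None, :]
          b = np.asarray(bws_hz, dtype=float)[None, :]
          # One term per resonance, summed across resonances.
          terms = (2.0 / n) * np.exp(-np.pi * n * b / fs) * np.cos(2.0 * np.pi * n * f / fs)
          return terms.sum(axis=1)

      # Example with typical (illustrative) first three formants of an /a/-like vowel.
      print(vtr_to_lpc_cepstrum([730.0, 1090.0, 2440.0], [60.0, 100.0, 120.0], fs=16000.0))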
  • Patent number: 7636659
    Abstract: In accordance with the present invention, computer implemented methods and systems are provided for representing and modeling the temporal structure of audio signals. In response to receiving a signal, a time-to-frequency domain transformation is performed on at least a portion of the received signal to generate a frequency domain representation. The time-to-frequency domain transformation converts the signal from a time domain representation to the frequency domain representation. A frequency domain linear prediction (FDLP) is performed on the frequency domain representation to estimate a temporal envelope of the frequency domain representation. Based on the temporal envelope, one or more speech features are generated.
    Type: Grant
    Filed: March 25, 2005
    Date of Patent: December 22, 2009
    Assignee: The Trustees of Columbia University in the City of New York
    Inventors: Marios Athineos, Daniel P. W. Ellis
  • Patent number: 7627468
    Abstract: An apparatus enabling automatic determination of a portion that reliably represents a feature of a speech waveform includes: an acoustic/prosodic analysis unit calculating, from data, the distribution of energy in a prescribed frequency range of the speech waveform on a time axis, and extracting, among various syllables of the speech waveform, a range that is generated stably, based on the distribution and the pitch of the speech waveform; a cepstral analysis unit estimating, based on the spectral distribution of the speech waveform on the time axis, a range of the speech waveform whose change is well controlled by a speaker; and a pseudo-syllabic center extracting unit extracting, as a portion of high reliability of the speech waveform, that range which has been estimated to be the stably generated range and whose change is estimated to be well controlled by the speaker.
    Type: Grant
    Filed: February 21, 2003
    Date of Patent: December 1, 2009
    Assignees: Japan Science and Technology Agency, Advanced Telecommunication Research Institute International
    Inventors: Nick Campbell, Parham Mokhtari
  • Patent number: 7606702
    Abstract: A code separation/decoding unit restores a vocal tract characteristic sp1 and a vocal source signal r1. A vocal tract characteristic modification unit modifies the vocal tract characteristic sp1 and outputs the modified vocal tract characteristic sp2. In this method, an emphasized vocal tract characteristic sp2 is generated and output by applying formant emphasis, using amplification ratios calculated from estimated formants, directly to the vocal tract characteristic sp1, for instance. A signal synthesis unit synthesizes the modified vocal tract characteristic sp2 and the vocal source signal r1 to generate and output an output voice s.
    Type: Grant
    Filed: April 27, 2005
    Date of Patent: October 20, 2009
    Assignee: Fujitsu Limited
    Inventors: Masakiyo Tanaka, Masanao Suzuki, Yasuji Ota, Yoshiteru Tsuchinaga
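    Illustrative sketch: the formant emphasis step above amounts to raising the decoded vocal tract spectrum sp1 around its formant peaks to obtain sp2. The sketch below boosts every local maximum with a fixed, tapered gain; the patent instead derives amplification ratios from estimated formants, so the boost amount, taper, and peak selection here are assumptions.

      import numpy as np

      def emphasize_formants(envelope, boost_db=3.0, width_bins=8):
          """Sketch: amplify a vocal tract spectrum (sp1) around its peaks to get sp2."""
          sp2 = np.array(envelope, dtype=float)
          gain = 10.0 ** (boost_db / 20.0)
          peaks = [i for i in range(1, len(sp2) - 1)
                   if sp2[i] > sp2[i - 1] and sp2[i] >= sp2[i + 1]]
          for p in peaks:
              for d in range(-width_bins, width_bins + 1):
                  i = p + d
                  if 0 <= i < len(sp2):
                      taper = 1.0 - abs(d) / (width_bins + 1)  # linear taper around the peak
                      sp2[i] *= 1.0 + (gain - 1.0) * taper
          return sp2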
  • Patent number: 7546237
    Abstract: A system extends the bandwidth of a narrowband speech signal into a wideband spectrum. The system includes a high-band generator that generates a high frequency spectrum based on a narrowband spectrum. A background noise generator generates a high frequency background noise spectrum based on a background noise within the narrowband spectrum. A summing circuit linked to the high-band generator and the background noise generator combines the high frequency spectrum and narrowband spectrum and the high frequency background noise spectrum.
    Type: Grant
    Filed: December 23, 2005
    Date of Patent: June 9, 2009
    Assignee: QNX Software Systems (Wavemakers), Inc.
    Inventors: Rajeev Nongpiur, Xueman Li, Phillip A. Hetherington
  • Patent number: 7542956
    Abstract: Disparate data and commands are received from a managed resource (102) and have potentially different semantics. The disparate data and commands are processed according to rules received from an autonomic manager (112) to produce a single normalized view of this information. The actual state of the managed resource is determined from the normalized view of disparate data. The actual state of the managed resource (102) is compared to a desired state of the managed resource (102). When a match does not exist between the actual state and the desired state, a configuration adjustment to the managed resource (102) and/or another resource is determined to allow the actual state to become the same as the desired state. Then, the configuration adjustment is applied to the managed resource (102). When a match exists between the actual state and the desired state, maintenance functions associated with the managed resource (102) are performed.
    Type: Grant
    Filed: June 7, 2006
    Date of Patent: June 2, 2009
    Assignee: Motorola, Inc.
    Inventors: John C. Strassner, Barry J. Menich
  • Patent number: 7519531
    Abstract: A computer-implemented method is provided for training a hidden trajectory model, of a speech recognition system, which generates Vocal Tract Resonance (VTR) targets. The method includes obtaining generic VTR target parameters corresponding to a generic speaker used by a target selector to generate VTR target sequences. The generic VTR target parameters are scaled for a particular speaker using a speaker-dependent scaling factor for the particular speaker to generate speaker-adaptive VTR target parameters. This scaling is performed for both the training data and the test data, and for the training data, the scaling is performed iteratively with the process of obtaining the generic targets. The computation of the scaling factor makes use of the results of a VTR tracker. The speaker-adaptive VTR target parameters for the particular speaker are then stored in order to configure the hidden trajectory model to perform speech recognition for the particular speaker using the speaker-adaptive VTR target parameters.
    Type: Grant
    Filed: March 30, 2005
    Date of Patent: April 14, 2009
    Assignee: Microsoft Corporation
    Inventors: Alejandro Acero, Dong Yu, Li Deng
  • Patent number: 7475011
    Abstract: A method and apparatus identify values for components of a vocal tract resonance vector by sequentially determining values for each component of the vocal tract resonance vector. To determine a value for a component, the other components are set to static values. A plurality of values for a function are then determined using a plurality of values for the component that is being determined while using the static values for all of the other components. One of the plurality of values for the component is then selected based on the plurality of values for the function.
    Type: Grant
    Filed: August 25, 2004
    Date of Patent: January 6, 2009
    Assignee: Microsoft Corporation
    Inventors: Li Deng, Alejandro Acero, Issam H. Bazzi
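    Illustrative sketch: the sequential, component-by-component search described above is essentially one pass of coordinate optimization: all other components are frozen at static values, a grid of candidate values is scored for the current component, and the best value is kept. The score function, grids, and the toy usage below are placeholders, not the patent's model.

      import numpy as np

      def coordinate_search(score_fn, grids, init):
          """Sketch: set each vector component in turn, others held at static values."""
          vec = list(init)
          for i, grid in enumerate(grids):
              scores = []
              for v in grid:
                  trial = list(vec)
                  trial[i] = v
                  scores.append(score_fn(trial))               # other components stay fixed
              vec[i] = grid[int(np.argmax(scores))]
          return vec

      # Toy usage with a made-up score favoring resonances near a target triple.
      target = np.array([500.0, 1500.0, 2500.0])
      score = lambda v: -float(np.sum((np.array(v) - target) ** 2))
      grids = [np.arange(200.0, 1000.0, 50.0),
               np.arange(800.0, 2500.0, 50.0),
               np.arange(1500.0, 3500.0, 50.0)]
      print(coordinate_search(score, grids, init=[300.0, 1000.0, 2000.0]))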
  • Patent number: 7454350
    Abstract: A method for evaluating near-term suicidal risk by analysis of a series of spoken words includes converting the spoken series of words into a signal having characteristics indicative of said words as spoken, dynamically monitoring said signal to detect changes therein and identifying the person as having a relatively high near-term risk of suicide on the basis of such monitored changes in the signal relative to the speech of individuals in good mental health having no near-term suicidal risk.
    Type: Grant
    Filed: May 24, 2006
    Date of Patent: November 18, 2008
    Inventors: Marilyn K. Silverman, legal representative, Stephen E. Silverman
  • Patent number: 7424423
    Abstract: A method of tracking formants defines a formant search space comprising sets of formants to be searched. Formants are identified for a first frame in the speech utterance by searching the entirety of the formant search space using the codebook, and for the remaining frames by searching the same space using both the codebook and the continuity constraint across adjacent frames. Under one embodiment, the formants are identified by mapping sets of formants into feature vectors and applying the feature vectors to a model. Formants are also identified by applying dynamic programming to search for the best sequence that optimally satisfies the continuity constraint required by the model.
    Type: Grant
    Filed: April 1, 2003
    Date of Patent: September 9, 2008
    Assignee: Microsoft Corporation
    Inventors: Issam Bazzi, Li Deng, Alejandro Acero
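    Illustrative sketch: the dynamic programming step in the abstract can be shown with a generic Viterbi-style search over per-frame formant candidate sets, trading a per-frame cost against a continuity penalty between adjacent frames. The local_cost callable stands in for the patent's codebook-plus-model scoring, which is not reproduced, and the quadratic transition penalty is an assumption.

      import numpy as np

      def track_formants(candidates, local_cost, continuity_weight=1.0):
          """Sketch: dynamic-programming formant tracking with a continuity constraint.

          `candidates[t]` lists the formant sets (e.g. (F1, F2, F3) tuples) searched
          at frame t; `local_cost(t, s)` scores set s against frame t.
          """
          T = len(candidates)
          cost = [np.array([local_cost(0, s) for s in candidates[0]])]
          back = [np.zeros(len(candidates[0]), dtype=int)]
          for t in range(1, T):
              prev = cost[-1]
              cur, bp = [], []
              for s in candidates[t]:
                  trans = np.array([continuity_weight * np.sum((np.array(s) - np.array(p)) ** 2)
                                    for p in candidates[t - 1]])
                  j = int(np.argmin(prev + trans))
                  cur.append(local_cost(t, s) + prev[j] + trans[j])
                  bp.append(j)
              cost.append(np.array(cur))
              back.append(np.array(bp))

          # Backtrack the lowest-cost candidate sequence.
          path = [int(np.argmin(cost[-1]))]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t][path[-1]]))
          path.reverse()
          return [candidates[t][i] for t, i in enumerate(path)]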
  • Patent number: 7376562
    Abstract: The present invention relates to systems and methods for processing acoustic signals, such as music and speech. The method involves nonlinear frequency analysis of an incoming acoustic signal. In one aspect, a network of nonlinear oscillators, each with a distinct frequency, is applied to process the signal. The frequency, amplitude, and phase of each signal component are identified. In addition, nonlinearities in the network recover components that are not present or not fully resolvable in the input signal. In another aspect, a modification of the nonlinear oscillator network is used to track changing frequency components of an input signal.
    Type: Grant
    Filed: June 22, 2004
    Date of Patent: May 20, 2008
    Assignees: Florida Atlantic University, Circular Logic, Inc.
    Inventor: Edward W. Large
  • Patent number: 7366656
    Abstract: A method of identifying patterns in a digitized acoustic signal is disclosed. The method comprises: (i) converting the digitized acoustic signal into a spatial representation defined by a plurality of regions on a vibrating membrane, each of the regions having a different vibration resonance, each vibration resonance corresponding to a different frequency of the acoustic signal; (ii) iteratively calculating a weight function, the weight function having a spatial dependence representative of acoustic patterns of each region of the plurality of regions; and (iii) using the weight function for converting the spatial representation into a reconstructed acoustic signal; thereby identifying the patterns in the acoustic signal.
    Type: Grant
    Filed: February 19, 2004
    Date of Patent: April 29, 2008
    Assignee: Ramot At Tel Aviv University Ltd.
    Inventors: Miriam Furst-Yust, Azaria Cohen, Vered Weisz
  • Publication number: 20080082322
    Abstract: The invention relates to the field of automated processing of speech signals and particularly to a method for tracking the formant frequencies in a speech signal, comprising the steps of: obtaining an auditory image of the speech signal; sequentially estimating formant locations; segmenting the frequency range into sub-regions; smoothing the obtained component filtering distributions; and calculating exact formant locations.
    Type: Application
    Filed: September 20, 2007
    Publication date: April 3, 2008
    Applicant: Honda Research Institute Europe GmbH
    Inventors: Frank Joublin, Martin Heckmann, Claudius Glaeser
  • Patent number: 7337107
    Abstract: Pitch estimation and classification into voiced, unvoiced and transitional speech were performed by a spectro-temporal auto-correlation technique. A peak picking formula was then employed. A weighting function was then applied to the power spectrum. The harmonics weighted power spectrum underwent mel-scaled band-pass filtering, and the log-energy of the filter's output was discrete cosine transformed to produce cepstral coefficients. A within-filter cubic-root amplitude compression was applied to reduce amplitude variation without compromise of the gain invariance properties.
    Type: Grant
    Filed: October 2, 2001
    Date of Patent: February 26, 2008
    Assignee: The Regents of the University of California
    Inventors: Kenneth Rose, Liang Gu
  • Patent number: 7330813
    Abstract: A speech processing apparatus able to enhance formants more naturally, wherein a speech analyzing unit analyzes an input speech signal to find LPCs and converts the LPCs to LSPs, a speech decoding unit calculates a distance between adjacent orders of the LSPs by an LSP analytical processing unit and calculates LSP adjusting amounts of larger values for LSPs of adjacent orders closer in distance by an LSP adjusting amount calculating unit, an LSP adjusting unit adjusts the LSPs based on the LSP adjusting amounts such that the LSPs of adjacent orders closer in distance become closer, an LSP-LPC converting unit converts the adjusted LSPs to LPCs, and an LPC combining unit uses the LPCs and sound source parameters to obtain formant-enhanced speech.
    Type: Grant
    Filed: August 5, 2003
    Date of Patent: February 12, 2008
    Assignee: Fujitsu Limited
    Inventor: Mutsumi Saito
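    Illustrative sketch: the adjustment stage above can be pictured as pulling closely spaced adjacent LSPs slightly closer together (a close LSP pair marks a formant peak, so narrowing it sharpens the formant). The inverse-distance rule and strength parameter below are assumptions, and the LPC-to-LSP and LSP-to-LPC conversions around this step are not shown.

      import numpy as np

      def sharpen_lsp_pairs(lsp, strength=0.1, min_gap=0.01):
          """Sketch: pull closely spaced adjacent LSPs closer to sharpen formants.

          `lsp` is an ascending array of line spectral pairs in radians (0..pi).
          """
          out = np.array(lsp, dtype=float)
          gaps = np.diff(out)
          for i, g in enumerate(gaps):
              adjust = strength * min_gap / max(g, min_gap)    # larger move for smaller gaps
              shift = 0.5 * adjust * g
              out[i] += shift
              out[i + 1] -= shift
          # Keep the LSPs ordered and strictly inside (0, pi).
          return np.clip(np.sort(out), 1e-3, np.pi - 1e-3)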
  • Patent number: 7272559
    Abstract: Noninvasive, remote methods and apparatus for detecting early phases of neurological diseases such as the non-tremor phase of Parkinson's disease, dyskinesia, dyslexia, neuroatrophy, etc., are disclosed. Five words are spoken either directly into a microphone connected to a local analysis system or remotely, as by way of a telephone link, to a system that analyzes the time and frequency domains of speech characteristics representative of the presence of disease. The method includes the steps of transducing a set of unmodified spoken words or numbers into electrical signals which are bandlimited and amplified. These signals are analyzed in both time and frequency domains to detect and measure the manifestation of neurological disorders in the envelope of the time representation and spectral density of the words.
    Type: Grant
    Filed: October 2, 2003
    Date of Patent: September 18, 2007
    Assignee: CEIE specs, Inc.
    Inventor: Harbhajan S. Hayre
  • Publication number: 20070192088
    Abstract: A formant frequency estimation method, which yields information important in speech recognition by accelerating a spectrum using a pitch frequency, and an apparatus using the method are provided. That is, the formant frequency estimation method includes preprocessing an input speech signal and generating a spectrum by fast Fourier transforming the preprocessed input speech signal; smoothing the generated spectrum; accelerating the smoothed spectrum; and determining a formant frequency on the basis of the accelerated spectrum.
    Type: Application
    Filed: January 4, 2007
    Publication date: August 16, 2007
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Kwang Cheol Oh, Jae-Hoon Jeong, So-Young Jeong
  • Patent number: 7219061
    Abstract: Predetermined macrosegments of the fundamental frequency are determined by a neural network, and these predefined macrosegments are reproduced by fundamental-frequency sequences stored in a database. The fundamental frequency is generated on the basis of a relatively large text section which is analyzed by the neural network. Microstructures from the database are received in the fundamental frequency. The fundamental frequency thus formed is thus optimized both with regard to its macrostructure and to its microstructure. As a result, an extremely natural sound is achieved.
    Type: Grant
    Filed: October 24, 2000
    Date of Patent: May 15, 2007
    Assignee: Siemens Aktiengesellschaft
    Inventors: Caglayan Erdem, Martin Holzapfel
  • Patent number: 7177803
    Abstract: Human hearing perceives loudness based on critical bands corresponding to different frequency ranges. As a sound's frequency spectrum increases beyond a critical band into a previously unexcited critical band, the perception is that the sound has increased in loudness. To take advantage of this principle, a filter is applied to a speech signal so as to expand the formant bandwidths of formants in the speech sample.
    Type: Grant
    Filed: October 22, 2002
    Date of Patent: February 13, 2007
    Assignee: Motorola, Inc.
    Inventors: Marc A. Boillot, John G. Harris, Thomas L. Reinke, Zaffer S. Merchant, Jaime A. Borras
  • Patent number: 7155311
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: December 26, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7155312
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: December 26, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7155314
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: December 26, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7155310
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: December 26, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7155313
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: December 26, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7151984
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 4, 2005
    Date of Patent: December 19, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7151983
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 1, 2005
    Date of Patent: December 19, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7149603
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 4, 2005
    Date of Patent: December 12, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7146251
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: December 5, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7146249
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 4, 2005
    Date of Patent: December 5, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7146250
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: December 5, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7146252
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: December 5, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7142946
    Abstract: A robot apparatus is provided. A CPU 15 determines an output of a feeling model based on signals supplied from a touch sensor 20. The CPU 15 also deciphers whether or not an output value of the feeling model exceeds a pre-set threshold value. If the CPU finds that the output value exceeds the pre-set threshold value, it verifies whether or not there is any vacant area in a memory card 13. If the CPU finds that there is a vacant area in the memory card 13, it causes the picture data captured from the CCD video camera 11 to be stored in the vacant area in the memory card 13. At this time, the CPU 15 causes the time and date data and the feeling parameter to be stored in the memory card 13 in association with the picture data. The CPU 15 also re-arrays the picture data stored in the memory card 13 in the sequence of the decreasing magnitude of the feeling model output.
    Type: Grant
    Filed: April 5, 2005
    Date of Patent: November 28, 2006
    Assignee: Sony Corporation
    Inventors: Kotaro Sabe, Masahiro Fujita
  • Patent number: 7136876
    Abstract: A method and apparatus for building an abbreviation dictionary involves searching through a set of source documents. The abbreviations having likely definitions are identified and the definitions extracted from the documents. The definitions having identical associated abbreviations are grouped together. The definition groups are each arranged into clusters based on an n-gram or other combinatorial method to determine similar definitions. Further disambiguation is provided by looking at similarity between clusters using an annotation associated with the source documents from which the definitions were extracted.
    Type: Grant
    Filed: March 3, 2003
    Date of Patent: November 14, 2006
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Eytan Adar, Lada A. Adamic
  • Patent number: 7124077
    Abstract: A method and system of performing postfiltering in the frequency domain to improve the quality of a speech signal, especially for synthesized speech resulting from low bit-rate codecs, is provided. The method comprises LPC tilt computation and compensation methods and modules, a formant filter gain computation method and module, and an anti-aliasing method and module. The formant filter gain calculation employs an LPC representation, an all-pole modeling, a non-linear transformation and a phase computation. The LPC used for deriving the postfilter may be transmitted from an encoder or may be estimated from a synthesized or other speech signal in a decoder or receiver. The invention may be implemented in a linked decoder and encoder. A separate LPC evaluation unit that is responsible for processing and/or deriving the LPC may be implemented within the invention.
    Type: Grant
    Filed: January 28, 2005
    Date of Patent: October 17, 2006
    Assignee: Microsoft Corporation
    Inventors: Hong Wang, Vladimir Cuperman, Allen Gersho, Hosam A. Khalil
  • Patent number: 7120575
    Abstract: A digitized speech signal (600) is input to an F0 (fundamental frequency) processor that computes (610) continuous F0 data from the speech signal. Based on the criterion of voicing state transitions (voiced/unvoiced transitions), the speech signal is presegmented (620) into segments. For each segment (630) it is evaluated (640) whether F0 is defined or not defined, i.e. whether F0 is ON or OFF. In the case of F0=OFF, a candidate segment boundary is assumed as described above and, starting from that boundary, prosodic features are computed (650). The feature values are input into a classification tree and each candidate segment is classified, thereby revealing, as a result, the existence or non-existence of a semantic or syntactic speech unit.
    Type: Grant
    Filed: August 2, 2001
    Date of Patent: October 10, 2006
    Assignee: International Business Machines Corporation
    Inventors: Martin Haase, Werner Kriechbaum, Gerhard Stenzel
  • Patent number: 7117152
    Abstract: A communication system includes communications equipment having a voice input device, an acoustic output device, and a visual display device. The communications equipment receives voice information from a user using the voice input device, converts the voice information into text, and communicates packets encoding the voice information and the text to a remote location. The communications equipment also receives packets encoding voice and text information from the remote location, outputs the voice information using the acoustic output device, and outputs the text information using the visual display device.
    Type: Grant
    Filed: June 23, 2000
    Date of Patent: October 3, 2006
    Assignee: Cisco Technology, Inc.
    Inventors: Arijit Mukherji, Gerardo Chaves, Christopher A White
  • Patent number: RE40634
    Abstract: A signal monitoring apparatus and method involving devices for monitoring signals representing communications traffic, devices for identifying at least one predetermined parameter by analyzing the context of the at least one monitoring signal, a device for recording the occurrence of the identified parameter, a device for identifying the traffic stream associated with the identified parameter, a device for analyzing the recorded data relating to the occurrence, and a device, responsive to the analysis of the recorded data, for controlling the handling of communications traffic within the apparatus.
    Type: Grant
    Filed: August 24, 2006
    Date of Patent: February 10, 2009
    Assignee: Verint Americas
    Inventors: Christopher Douglas Blair, Roger Louis Keenan