Patents by Inventor Hiraku Kayama

Hiraku Kayama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180122397
    Abstract: A sound processing method includes a step of applying a nonlinear filter to a temporal sequence of spectral envelope of an acoustic signal, wherein the nonlinear filter smooths a fine temporal perturbation of the spectral envelope without smoothing out a large temporal change. A sound processing apparatus includes a smoothing processor configured to apply a nonlinear filter to a temporal sequence of spectral envelope of an acoustic signal, wherein the nonlinear filter smooths a fine temporal perturbation of the spectral envelope without smoothing out a large temporal change.
    Type: Application
    Filed: November 1, 2017
    Publication date: May 3, 2018
    Inventors: Ryunosuke DAIDO, Hiraku KAYAMA
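    Note: the abstract of 20180122397 describes an edge-preserving smoother for a frame-by-frame spectral envelope: fine temporal jitter is removed while large, deliberate changes survive. A running median along the time axis is one well-known nonlinear filter with that property. The sketch below is only an illustration of the general idea under assumed frame/band shapes, not the filter disclosed in the application.
    ```python
    import numpy as np
    from scipy.signal import medfilt

    def smooth_envelope_sequence(env, kernel=5):
        """Run a median filter along the time axis of a (frames x bands)
        spectral-envelope sequence. Unlike a moving average, a median filter
        suppresses small frame-to-frame perturbations without blurring a
        large step change that persists for several frames."""
        env = np.asarray(env, dtype=float)
        return medfilt(env, kernel_size=(kernel, 1))  # kernel must be odd

    # Toy data: a flat envelope with jitter plus one genuine step at frame 60.
    rng = np.random.default_rng(0)
    env = 0.1 * rng.standard_normal((100, 4))
    env[60:] += 3.0
    smoothed = smooth_envelope_sequence(env)
    print(smoothed[55:65, 0].round(2))  # perturbations reduced, the step at frame 60 stays sharp
    ```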
  • Publication number: 20170263270
    Abstract: Information related to voice of a question and information related to voice of a response to the question are received. An analysis section acquires a representative pitch of the question (e.g., a pitch of the end of the question) and a representative pitch of the response (e.g., an average pitch of the response) based on the received information. By comparing the representative pitch of the question with the representative pitch of the response, an evaluation section evaluates the voice of the response on the basis of how far the difference between the two representative pitches is from a predetermined reference value (e.g., a consonant interval of a fifth). Further, a conversation interval detection section is provided for detecting a conversation interval, i.e., a time interval from the end of the question to the start of the response.
    Type: Application
    Filed: May 31, 2017
    Publication date: September 14, 2017
    Inventor: Hiraku KAYAMA
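    Note: a minimal sketch of the kind of pitch-interval comparison described in 20170263270, assuming pitches are compared on a semitone scale and that the reference value is a consonant interval of a fifth (about 7 semitones). The function names and the scoring rule are illustrative, not taken from the application.
    ```python
    import math

    def hz_to_semitones(f_hz, ref_hz=440.0):
        """Express a frequency as semitones relative to a reference pitch."""
        return 12.0 * math.log2(f_hz / ref_hz)

    def evaluate_response(question_end_hz, response_avg_hz, reference_interval=7.0):
        """Score how close the question/response pitch difference is to the
        reference interval (7 semitones ~ a fifth). 1.0 = exact match,
        falling toward 0.0 as the deviation grows to an octave."""
        interval = abs(hz_to_semitones(question_end_hz)
                       - hz_to_semitones(response_avg_hz))
        deviation = abs(interval - reference_interval)
        return max(0.0, 1.0 - deviation / 12.0)

    # 220 Hz -> 330 Hz is a 3:2 ratio, i.e. almost exactly a fifth.
    print(round(evaluate_response(220.0, 330.0), 3))   # ~1.0
    print(round(evaluate_response(220.0, 233.1), 3))   # ~0.5 (only a semitone apart)
    ```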
  • Publication number: 20170221470
    Abstract: This invention is an improvement of technology for automatically generating response voice to voice uttered by a speaker (user), and is characterized by controlling a pitch of the response voice in accordance with a pitch of the speaker's utterance. A voice signal of the speaker's utterance (e.g., question) is received, and a pitch (e.g., highest pitch) of a representative portion of the utterance is detected. Voice data of a response to the utterance is acquired, and a pitch (e.g., average pitch) based on the acquired response voice data is acquired. A pitch shift amount for shifting the acquired pitch to a target pitch having a particular relationship to the pitch of the representative portion is determined. When response voice is to be synthesized on the basis of the response voice data, the pitch of the response voice to be synthesized is shifted in accordance with the pitch shift amount.
    Type: Application
    Filed: April 19, 2017
    Publication date: August 3, 2017
    Inventors: Hiraku KAYAMA, Hiroaki MATSUBARA
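    Note: a small sketch of the pitch-shift-amount computation described in 20170221470. The specific rule used here (placing the response's average pitch a fifth below the highest pitch of the utterance) is only an assumed example of "a target pitch having a particular relationship to the pitch of the representative portion."
    ```python
    import math

    def semitone_shift(from_hz, to_hz):
        """Pitch-shift amount, in semitones, that moves from_hz onto to_hz."""
        return 12.0 * math.log2(to_hz / from_hz)

    def response_shift_amount(utterance_peak_hz, response_avg_hz,
                              interval_semitones=-7.0):
        """Compute the shift to apply when synthesizing the stored response:
        its average pitch should land at a fixed interval (here a fifth below)
        relative to the highest pitch detected in the user's utterance."""
        target_hz = utterance_peak_hz * 2.0 ** (interval_semitones / 12.0)
        return semitone_shift(response_avg_hz, target_hz)

    # Utterance peaks at 300 Hz, stored response averages 180 Hz:
    print(round(response_shift_amount(300.0, 180.0), 2))   # ~1.84 semitones up
    ```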
  • Patent number: 9640172
    Abstract: A sound synthesizing apparatus includes a waveform storing section which stores a plurality of unit waveforms extracted from different positions, on a time axis, of a sound waveform indicating a voiced sound, and a waveform generating section which generates, for each of a first processing period and a second processing period, a synthesized waveform by arranging the plurality of unit waveforms on the time axis, wherein the second processing period is an immediately succeeding processing period after the first processing period.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: May 2, 2017
    Assignee: Yamaha Corporation
    Inventor: Hiraku Kayama
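    Note: patent 9640172 concerns arranging stored unit waveforms (short excerpts of a voiced sound) along the time axis to build a synthesized waveform for successive processing periods. The overlap-add sketch below is a generic illustration of that arrangement step under assumed unit shapes and spacing; it does not reproduce the patented method.
    ```python
    import numpy as np

    def arrange_unit_waveforms(units, period_samples, n_periods):
        """Place unit waveforms on the time axis at a fixed spacing and sum
        where they overlap, cycling through the stored units."""
        unit_len = max(len(u) for u in units)
        out = np.zeros(n_periods * period_samples + unit_len)
        for i in range(n_periods):
            u = units[i % len(units)]
            start = i * period_samples
            out[start:start + len(u)] += u
        return out

    # Toy units: two Hann-windowed cycles "extracted" from different positions
    # of a voiced waveform (here just synthetic sine segments).
    t = np.arange(256)
    units = [np.hanning(256) * np.sin(2 * np.pi * 2 * t / 256),
             np.hanning(256) * np.sin(2 * np.pi * 2 * t / 256 + 0.3)]
    signal = arrange_unit_waveforms(units, period_samples=128, n_periods=50)
    print(signal.shape)   # (6656,)
    ```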
  • Patent number: 9552806
    Abstract: A sound synthesizing apparatus includes a processor coupled to a memory. The processor is configured to execute computer-executable units comprising: an information acquirer adapted to acquire synthesis information which specifies a duration and an utterance content for each unit sound; a prolongation setter adapted to set whether prolongation is permitted or inhibited for each of a plurality of phonemes corresponding to the utterance content of each unit sound; and a sound synthesizer adapted to generate a synthesized sound corresponding to the synthesis information by connecting a plurality of sound fragments corresponding to the utterance content of each unit sound. The sound synthesizer prolongs a sound fragment corresponding to the phoneme whose prolongation is permitted, in accordance with the duration of the unit sound.
    Type: Grant
    Filed: February 26, 2013
    Date of Patent: January 24, 2017
    Assignee: Yamaha Corporation
    Inventors: Hiraku Kayama, Motoki Ogasawara
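    Note: patent 9552806 marks, per phoneme, whether prolongation is permitted, and then stretches only the permitted fragments to fill the duration specified for the unit sound. The sketch below shows that idea with hypothetical data structures; the fragment lengths and the uniform scaling rule are assumptions, not the patented algorithm.
    ```python
    from dataclasses import dataclass, replace

    @dataclass
    class Fragment:
        phoneme: str
        length_ms: float
        prolongable: bool   # set per phoneme by the "prolongation setter"

    def fit_to_duration(fragments, target_ms):
        """Stretch only the prolongable fragments so the unit sound's total
        length matches the duration given in the synthesis information."""
        fixed = sum(f.length_ms for f in fragments if not f.prolongable)
        flexible = sum(f.length_ms for f in fragments if f.prolongable)
        if flexible <= 0:
            return list(fragments)            # nothing may be prolonged
        scale = max(0.0, target_ms - fixed) / flexible
        return [replace(f, length_ms=f.length_ms * scale) if f.prolongable else f
                for f in fragments]

    # "sa" sung for 600 ms: the consonant /s/ keeps its length, the vowel /a/ stretches.
    unit = [Fragment("s", 80.0, False), Fragment("a", 150.0, True)]
    print(fit_to_duration(unit, 600.0))
    ```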
  • Publication number: 20140136207
    Abstract: A voice synthesizing apparatus includes a first receiver configured to receive first utterance control information generated by detecting a start of a manipulation on a manipulating member by a user, a first synthesizer configured to synthesize, in response to a reception of the first utterance control information, a first voice corresponding to a first phoneme in a phoneme sequence of a voice to be synthesized to output the first voice, a second receiver configured to receive second utterance control information generated by detecting a completion of the manipulation on the manipulating member or a manipulation on a different manipulating member, and a second synthesizer configured to synthesize, in response to a reception of the second utterance control information, a second voice including at least the first phoneme and a succeeding phoneme being subsequent to the first phoneme of the voice to be synthesized to output the second voice.
    Type: Application
    Filed: November 14, 2013
    Publication date: May 15, 2014
    Applicant: Yamaha Corporation
    Inventors: Hiraku KAYAMA, Yoshiki NISHITANI
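    Note: application 20140136207 splits synthesis into two stages driven by a physical control: the start of the manipulation triggers the first phoneme, and its completion (or a second control) triggers the first phoneme together with the succeeding one. The event-handler sketch below is a loose, assumed illustration of that flow; the class and method names are invented for the example.
    ```python
    class TwoStageVoice:
        """Minimal event-driven sketch of two-stage phoneme output."""

        def __init__(self, phoneme_sequence):
            self.phonemes = list(phoneme_sequence)

        def on_manipulation_start(self):
            # First utterance control information -> the first phoneme only.
            return self._render(self.phonemes[:1])

        def on_manipulation_complete(self):
            # Second utterance control information -> first phoneme + successor.
            return self._render(self.phonemes[:2])

        def _render(self, phonemes):
            # Stand-in for the actual synthesizer call.
            return "synthesized: " + "".join(phonemes)

    voice = TwoStageVoice(["s", "a"])
    print(voice.on_manipulation_start())     # when the key starts to move
    print(voice.on_manipulation_complete())  # when the key bottoms out
    ```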
  • Publication number: 20130262121
    Abstract: A sound synthesizing apparatus includes a processor coupled to a memory. The processor is configured to execute computer-executable units comprising: an information acquirer adapted to acquire synthesis information which specifies a duration and an utterance content for each unit sound; a prolongation setter adapted to set whether prolongation is permitted or inhibited for each of a plurality of phonemes corresponding to the utterance content of each unit sound; and a sound synthesizer adapted to generate a synthesized sound corresponding to the synthesis information by connecting a plurality of sound fragments corresponding to the utterance content of each unit sound. The sound synthesizer prolongs a sound fragment corresponding to the phoneme whose prolongation is permitted, in accordance with the duration of the unit sound.
    Type: Application
    Filed: February 26, 2013
    Publication date: October 3, 2013
    Applicant: Yamaha Corporation
    Inventors: Hiraku Kayama, Motoki Ogasawara
  • Publication number: 20130231928
    Abstract: A sound synthesizing apparatus includes a waveform storing section which stores a plurality of unit waveforms extracted from different positions, on a time axis, of a sound waveform indicating a voiced sound, and a waveform generating section which generates a synthesized waveform by arranging the plurality of unit waveforms on the time axis.
    Type: Application
    Filed: August 30, 2012
    Publication date: September 5, 2013
    Applicant: Yamaha Corporation
    Inventor: Hiraku KAYAMA
  • Patent number: 7999169
    Abstract: A sound synthesizer has a storage unit, a setting unit and a sound synthesis unit. The storage unit stores a plurality of sound data respectively representing a plurality of sounds collected by different sound collecting points corresponding to the plurality of the sound data. The setting unit variably sets a position of a sound receiving point according to an instruction from a user. The sound synthesis unit synthesizes a sound by processing each of the plurality of the sound data according to a relation between a position of the sound collecting point corresponding to the sound data and the position of the sound receiving point specified by the user.
    Type: Grant
    Filed: June 3, 2009
    Date of Patent: August 16, 2011
    Assignee: Yamaha Corporation
    Inventor: Hiraku Kayama
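    Note: patent 7999169 synthesizes a sound from recordings made at several collecting points, processing each according to its spatial relation to a user-positioned receiving point. The inverse-distance mix below is a simple assumed stand-in for that processing (a real renderer would also model delay, filtering, and so on).
    ```python
    import numpy as np

    def mix_for_receiving_point(sounds, collect_positions, receive_position):
        """Weight each recorded sound by the inverse of the distance between
        its collecting point and the receiving point, then sum."""
        receive = np.asarray(receive_position, dtype=float)
        out = np.zeros(max(len(s) for s in sounds))
        for sound, pos in zip(sounds, collect_positions):
            dist = np.linalg.norm(np.asarray(pos, dtype=float) - receive)
            gain = 1.0 / max(dist, 1e-3)      # nearer collecting points dominate
            out[:len(sound)] += gain * np.asarray(sound, dtype=float)
        return out

    # Two collecting points on a line; the receiving point sits nearer the first.
    s1, s2 = np.ones(4), np.ones(4)
    print(mix_for_receiving_point([s1, s2], [(0.0, 0.0), (2.0, 0.0)], (0.5, 0.0)))
    ```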
  • Publication number: 20090308230
    Abstract: A sound synthesizer has a storage unit, a setting unit and a sound synthesis unit. The storage unit stores a plurality of sound data respectively representing a plurality of sounds collected by different sound collecting points corresponding to the plurality of the sound data. The setting unit variably sets a position of a sound receiving point according to an instruction from a user. The sound synthesis unit synthesizes a sound by processing each of the plurality of the sound data according to a relation between a position of the sound collecting point corresponding to the sound data and the position of the sound receiving point specified by the user.
    Type: Application
    Filed: June 3, 2009
    Publication date: December 17, 2009
    Applicant: Yamaha Corporation
    Inventor: Hiraku Kayama
  • Patent number: 7606709
    Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal composed of original sinusoidal components and original residual components other than the original sinusoidal components. An extracting device extracts original attribute data from at least the sinusoidal components of the input voice signal. The original attribute data is characteristic of the input voice signal. A synthesizing device synthesizes new attribute data based on both of the original attribute data derived from the input voice signal and target attribute data being characteristic of the target voice signal composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
    Type: Grant
    Filed: October 29, 2002
    Date of Patent: October 20, 2009
    Assignees: Yamaha Corporation, Pompeu Fabra University
    Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada
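    Note: patent 7606709 synthesizes new attribute data from attribute data extracted from the input voice and from the target voice. The per-frame linear blend below is only a generic illustration of combining two attribute sets; the attribute names and the interpolation rule are assumptions for the example.
    ```python
    def morph_attributes(input_attrs, target_attrs, amount=0.5):
        """Blend attribute data of the input voice with that of the target
        voice. amount = 0 keeps the input, 1 takes the target; intermediate
        values interpolate every attribute the two frames share."""
        return {key: (1.0 - amount) * input_attrs[key] + amount * target_attrs[key]
                for key in input_attrs.keys() & target_attrs.keys()}

    # Hypothetical per-frame attributes derived from the sinusoidal components.
    input_frame = {"pitch_hz": 180.0, "amplitude": 0.60, "brightness": 0.30}
    target_frame = {"pitch_hz": 240.0, "amplitude": 0.45, "brightness": 0.70}
    print(morph_attributes(input_frame, target_frame, amount=0.5))
    ```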
  • Patent number: 7249022
    Abstract: There are provided a singing voice-synthesizing method and apparatus capable of performing synthesis of natural singing voices close to human singing voices based on performance data being input in real time. Performance data is inputted for each phonetic unit constituting a lyric, to supply phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is inputted at a timing earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. By using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time.
    Type: Grant
    Filed: December 1, 2005
    Date of Patent: July 24, 2007
    Assignee: Yamaha Corporation
    Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
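    Note: in the real-time singing synthesis of patent 7249022, performance data arrives ahead of the nominal singing-starting time so that phoneme transitions can begin early. The timing sketch below assumes a simple consonant+vowel unit and an assumed rule (start the consonant early by its transition length so the vowel lands on the note-on); it is an illustration, not the determination procedure claimed in the patent.
    ```python
    def phoneme_timing(note_on_ms, note_length_ms, consonant_transition_ms):
        """Return (start, duration) pairs for a consonant+vowel unit: the
        consonant starts early by its transition length so that the vowel
        begins exactly at the note-on and fills the singing length."""
        return {
            "consonant": (note_on_ms - consonant_transition_ms, consonant_transition_ms),
            "vowel": (note_on_ms, note_length_ms),
        }

    # A "sa" unit on a note at t = 1000 ms lasting 500 ms, with a 60 ms /s/ transition.
    print(phoneme_timing(1000.0, 500.0, 60.0))
    # {'consonant': (940.0, 60.0), 'vowel': (1000.0, 500.0)}
    ```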
  • Patent number: 7149682
    Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal composed of original sinusoidal components and original residual components other than the original sinusoidal components. An extracting device extracts original attribute data from at least the sinusoidal components of the input voice signal. The original attribute data is characteristic of the input voice signal. A synthesizing device synthesizes new attribute data based on both of the original attribute data derived from the input voice signal and target attribute data being characteristic of the target voice signal composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
    Type: Grant
    Filed: October 29, 2002
    Date of Patent: December 12, 2006
    Assignees: Yamaha Corporation, Pompeu Fabra University
    Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada
  • Patent number: 7124084
    Abstract: There are provided a singing voice-synthesizing method and apparatus capable of performing synthesis of natural singing voices close to human singing voices based on performance data being input in real time. Performance data is inputted for each phonetic unit constituting a lyric, to supply phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is inputted at a timing earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. By using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time.
    Type: Grant
    Filed: December 27, 2001
    Date of Patent: October 17, 2006
    Assignee: Yamaha Corporation
    Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
  • Patent number: 7094962
    Abstract: For a plurality of types of additional attribute data included in note data, a selection section selects one or more of the plurality of types of additional attribute data. For a plurality of the note data, a display section displays pictorial figures or the like representative of the contents of the additional attribute data of the types selected by the selection section, in proximity to pictorial figures or the like representative of pitches and sounding periods of the note data. The display section also displays pictorial figures or the like indicative of the contents of the additional attribute data, at positions and in sizes corresponding to periods or timing when musical expressions or the like indicated by the additional attribute data are to be applied.
    Type: Grant
    Filed: February 25, 2004
    Date of Patent: August 22, 2006
    Assignee: Yamaha Corporation
    Inventor: Hiraku Kayama
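    Note: patent 7094962 displays figures for selected additional attribute data near the note figures, positioned and sized according to when the expression applies. The layout helper below is a hypothetical piano-roll-style example of that placement rule; the pixel scale and field names are assumptions.
    ```python
    def attribute_icon_layout(note, attribute, px_per_ms=0.1, row_offset_px=14):
        """Place a figure for one piece of additional attribute data just
        below its note's bar, with x-position and width taken from the time
        span over which the expression is to be applied."""
        x = attribute["start_ms"] * px_per_ms
        width = (attribute["end_ms"] - attribute["start_ms"]) * px_per_ms
        y = note["row_y"] + row_offset_px     # drawn in proximity to the note
        return {"type": attribute["type"], "x": x, "y": y, "width": width}

    note = {"pitch": 64, "start_ms": 0, "end_ms": 800, "row_y": 120}
    vibrato = {"type": "vibrato", "start_ms": 300, "end_ms": 800}
    print(attribute_icon_layout(note, vibrato))
    ```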
  • Publication number: 20060085198
    Abstract: There are provided a singing voice-synthesizing method and apparatus capable of performing synthesis of natural singing voices close to human singing voices based on performance data being input in real time. Performance data is inputted for each phonetic unit constituting a lyric, to supply phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is inputted at a timing earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. By using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time.
    Type: Application
    Filed: December 1, 2005
    Publication date: April 20, 2006
    Applicant: YAMAHA CORPORATION
    Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
  • Publication number: 20060085196
    Abstract: There are provided a singing voice-synthesizing method and apparatus capable of performing synthesis of natural singing voices close to human singing voices based on performance data being input in real time. Performance data is inputted for each phonetic unit constituting a lyric, to supply phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is inputted at a timing earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. By using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time.
    Type: Application
    Filed: December 1, 2005
    Publication date: April 20, 2006
    Applicant: YAMAHA CORPORATION
    Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
  • Publication number: 20060085197
    Abstract: There are provided a singing voice-synthesizing method and apparatus capable of performing synthesis of natural singing voices close to human singing voices based on performance data being input in real time. Performance data is inputted for each phonetic unit constituting a lyric, to supply phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is inputted at a timing earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. By using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, for each phoneme, a singing voice is generated at the determined singing-starting time point and continues to be generated for the determined singing duration time.
    Type: Application
    Filed: December 1, 2005
    Publication date: April 20, 2006
    Applicant: YAMAHA CORPORATION
    Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
  • Publication number: 20040177745
    Abstract: For a plurality of types of additional attribute data included in note data, a selection section selects one or more of the plurality of types of additional attribute data. For a plurality of the note data, a display section displays pictorial figures or the like representative of the contents of the additional attribute data of the types selected by the selection section, in proximity to pictorial figures or the like representative of pitches and sounding periods of the note data. The display section also displays pictorial figures or the like indicative of the contents of the additional attribute data, at positions and in sizes corresponding to periods or timing when musical expressions or the like indicated by the additional attribute data are to be applied.
    Type: Application
    Filed: February 25, 2004
    Publication date: September 16, 2004
    Applicant: YAMAHA CORPORATION
    Inventor: Hiraku Kayama
  • Publication number: 20030061047
    Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal composed of original sinusoidal components and original residual components other than the original sinusoidal components. An extracting device extracts original attribute data from at least the sinusoidal components of the input voice signal. The original attribute data is characteristic of the input voice signal. A synthesizing device synthesizes new attribute data based on both of the original attribute data derived from the input voice signal and target attribute data being characteristic of the target voice signal composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
    Type: Application
    Filed: October 29, 2002
    Publication date: March 27, 2003
    Applicant: YAMAHA CORPORATION
    Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada