Patents by Inventor Hiraku Kayama
Hiraku Kayama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180122397
Abstract: A sound processing method includes a step of applying a nonlinear filter to a temporal sequence of spectral envelope of an acoustic signal, wherein the nonlinear filter smooths a fine temporal perturbation of the spectral envelope without smoothing out a large temporal change. A sound processing apparatus includes a smoothing processor configured to apply a nonlinear filter to a temporal sequence of spectral envelope of an acoustic signal, wherein the nonlinear filter smooths a fine temporal perturbation of the spectral envelope without smoothing out a large temporal change.
Type: Application
Filed: November 1, 2017
Publication date: May 3, 2018
Inventors: Ryunosuke DAIDO, Hiraku KAYAMA
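The abstract leaves the filter unspecified beyond "nonlinear"; one edge-preserving scheme with the stated behavior is an exponential smoother that is bypassed whenever the frame-to-frame change exceeds a threshold. A minimal sketch, where the function name, `alpha`, and the threshold are illustrative assumptions rather than the patented filter:

```python
def smooth_envelope(frames, alpha=0.3, jump_threshold=6.0):
    """Edge-preserving temporal smoothing of a spectral-envelope sequence.

    Small frame-to-frame perturbations (|delta| < jump_threshold) are
    exponentially smoothed; large changes pass through untouched, so
    genuine envelope transitions are preserved.
    """
    if not frames:
        return []
    smoothed = [list(frames[0])]
    for frame in frames[1:]:
        prev = smoothed[-1]
        out = []
        for x, p in zip(frame, prev):
            delta = x - p
            if abs(delta) >= jump_threshold:
                out.append(x)                   # large change: keep as-is
            else:
                out.append(p + alpha * delta)   # fine perturbation: smooth
        smoothed.append(out)
    return smoothed
```

Each frame here is a list of envelope values (e.g., in dB per band); the filter is nonlinear because its behavior switches on the magnitude of the change.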
-
Publication number: 20170263270
Abstract: Information related to voice of a question and information related to voice of a response to the question are received. An analysis section acquires a representative pitch of the question (e.g., a pitch of the end of the question) and a representative pitch of the response (e.g., an average pitch of the response) based on the received information. By comparing the representative pitch of the question with that of the response, an evaluation section evaluates the voice of the response on the basis of how far the difference between the two representative pitches is from a predetermined reference value (e.g., a consonant interval such as a fifth). Further, a conversation interval detection section is provided for detecting a conversation interval, i.e., a time interval from the end of the question to the start of the response.
Type: Application
Filed: May 31, 2017
Publication date: September 14, 2017
Inventor: Hiraku KAYAMA
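As a rough illustration of the evaluation described above, the sketch below converts the two representative pitches into a semitone interval and scores the response by its distance from an assumed reference interval of a perfect fifth (7 semitones); the names and the scoring formula are hypothetical:

```python
import math

def interval_semitones(f_question, f_response):
    """Signed interval between two pitches (in Hz), in semitones."""
    return 12.0 * math.log2(f_response / f_question)

def evaluate_response(question_end_pitch, response_avg_pitch, reference=7.0):
    """Score a response by how far the pitch interval between the question's
    ending pitch and the response's average pitch deviates from a reference
    consonant interval (assumed here: a perfect fifth, 7 semitones).
    Lower scores mean a more consonant, better-evaluated response."""
    interval = interval_semitones(question_end_pitch, response_avg_pitch)
    return abs(abs(interval) - reference)
```

For example, a response averaging a fifth below a question ending at 220 Hz scores near zero, while a response at the identical pitch scores 7.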
-
Publication number: 20170221470
Abstract: This invention is an improvement of technology for automatically generating response voice to voice uttered by a speaker (user), and is characterized by controlling a pitch of the response voice in accordance with a pitch of the speaker's utterance. A voice signal of the speaker's utterance (e.g., question) is received, and a pitch (e.g., highest pitch) of a representative portion of the utterance is detected. Voice data of a response to the utterance is acquired, and a pitch (e.g., average pitch) of the response voice data is acquired. A pitch shift amount for shifting the acquired pitch to a target pitch having a particular relationship to the pitch of the representative portion is determined. When response voice is to be synthesized on the basis of the response voice data, the pitch of the response voice to be synthesized is shifted in accordance with the pitch shift amount.
Type: Application
Filed: April 19, 2017
Publication date: August 3, 2017
Inventors: Hiraku KAYAMA, Hiroaki MATSUBARA
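The shift computation can be sketched as follows, assuming (hypothetically) that the target pitch sits a fixed number of semitones from the utterance's representative pitch and that the shift amount is expressed in cents; the interval value and all names are illustrative:

```python
import math

def pitch_shift_cents(response_pitch, utterance_pitch, interval_semitones=-5.0):
    """Shift amount (in cents) that moves the response voice's representative
    pitch to a target pitch standing in an assumed fixed relationship
    (here: a fourth below, -5 semitones) to the utterance's pitch."""
    target = utterance_pitch * 2.0 ** (interval_semitones / 12.0)
    return 1200.0 * math.log2(target / response_pitch)
```

A synthesizer would then apply this cent value uniformly while rendering the response voice data.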
-
Patent number: 9640172
Abstract: A sound synthesizing apparatus includes a waveform storing section which stores a plurality of unit waveforms extracted from different positions, on a time axis, of a sound waveform indicating a voiced sound, and a waveform generating section which generates, for each of a first processing period and a second processing period, a synthesized waveform by arranging the plurality of unit waveforms on the time axis, wherein the second processing period is an immediately succeeding processing period after the first processing period.
Type: Grant
Filed: August 30, 2012
Date of Patent: May 2, 2017
Assignee: Yamaha Corporation
Inventor: Hiraku Kayama
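Arranging unit waveforms on the time axis and summing where they overlap is the classic overlap-add pattern; the minimal sketch below cycles through the stored units at a fixed hop and is only an illustration, not the patented arrangement logic (which also distinguishes successive processing periods):

```python
def overlap_add(units, hop, length):
    """Arrange unit waveforms on the time axis at a fixed hop (e.g., one
    pitch period) and sum them where they overlap, yielding a synthesized
    waveform of the requested length."""
    out = [0.0] * length
    pos = 0
    i = 0
    while pos < length:
        unit = units[i % len(units)]   # cycle through stored unit waveforms
        for k, s in enumerate(unit):
            if pos + k >= length:
                break
            out[pos + k] += s
        pos += hop
        i += 1
    return out
```

Choosing the hop equal to the desired pitch period sets the fundamental frequency of the synthesized voiced sound.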
-
Patent number: 9552806
Abstract: A sound synthesizing apparatus includes a processor coupled to a memory. The processor is configured to execute computer-executable units comprising: an information acquirer adapted to acquire synthesis information which specifies a duration and an utterance content for each unit sound; a prolongation setter adapted to set, for each of a plurality of phonemes corresponding to the utterance content of each unit sound, whether prolongation is permitted or inhibited; and a sound synthesizer adapted to generate a synthesized sound corresponding to the synthesis information by connecting a plurality of sound fragments corresponding to the utterance content of each unit sound. The sound synthesizer prolongs a sound fragment corresponding to a phoneme whose prolongation is permitted, in accordance with the duration of the unit sound.
Type: Grant
Filed: February 26, 2013
Date of Patent: January 24, 2017
Assignee: Yamaha Corporation
Inventors: Hiraku Kayama, Motoki Ogasawara
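One way to read the prolongation rule is as a duration-allocation step: phonemes whose prolongation is inhibited keep their base length, and any remaining time of the unit sound is spread over the permitted ones. A sketch under that assumption (all names and the proportional-allocation rule are hypothetical):

```python
def allocate_durations(fragments, note_duration):
    """fragments: list of (phoneme, base_duration, prolongable) tuples.
    Distributes the unit sound's extra time proportionally over phonemes
    whose prolongation is permitted; inhibited phonemes keep their base
    duration. Returns (phoneme, duration) pairs."""
    base_total = sum(d for _, d, _ in fragments)
    stretch_total = sum(d for _, d, p in fragments if p)
    extra = note_duration - base_total
    result = []
    for name, d, p in fragments:
        if p and stretch_total > 0:
            result.append((name, d + extra * d / stretch_total))
        else:
            result.append((name, d))
    return result
```

For a unit sound "sa" of 0.5 s with a 0.1 s non-prolongable /s/ and a 0.2 s prolongable /a/, the vowel absorbs the extra 0.2 s.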
-
Publication number: 20140136207
Abstract: A voice synthesizing apparatus includes a first receiver configured to receive first utterance control information generated by detecting a start of a manipulation on a manipulating member by a user, a first synthesizer configured to synthesize, in response to a reception of the first utterance control information, a first voice corresponding to a first phoneme in a phoneme sequence of a voice to be synthesized to output the first voice, a second receiver configured to receive second utterance control information generated by detecting a completion of the manipulation on the manipulating member or a manipulation on a different manipulating member, and a second synthesizer configured to synthesize, in response to a reception of the second utterance control information, a second voice including at least the first phoneme and a succeeding phoneme being subsequent to the first phoneme of the voice to be synthesized to output the second voice.
Type: Application
Filed: November 14, 2013
Publication date: May 15, 2014
Applicant: Yamaha Corporation
Inventors: Hiraku KAYAMA, Yoshiki NISHITANI
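The two-stage control flow can be sketched as a small state holder: the start of the manipulation yields only the first phoneme, and its completion yields the first phoneme plus the succeeding ones. The class name and event API below are illustrative, not the patent's interface:

```python
class TwoStageSynth:
    """Splits synthesis of one syllable across two control events: the
    start of a key manipulation triggers only the first (e.g., consonant)
    phoneme; its completion triggers the full phoneme sequence."""

    def __init__(self, phonemes):
        self.phonemes = list(phonemes)

    def on_start(self):
        # first utterance control information -> first phoneme only
        return self.phonemes[:1]

    def on_complete(self):
        # second utterance control information -> first + succeeding phonemes
        return self.phonemes
```

This lets, say, the /s/ of "sa" sound as soon as a key begins to move, with the vowel following when the keystroke completes.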
-
Publication number: 20130262121
Abstract: A sound synthesizing apparatus includes a processor coupled to a memory. The processor is configured to execute computer-executable units comprising: an information acquirer adapted to acquire synthesis information which specifies a duration and an utterance content for each unit sound; a prolongation setter adapted to set, for each of a plurality of phonemes corresponding to the utterance content of each unit sound, whether prolongation is permitted or inhibited; and a sound synthesizer adapted to generate a synthesized sound corresponding to the synthesis information by connecting a plurality of sound fragments corresponding to the utterance content of each unit sound. The sound synthesizer prolongs a sound fragment corresponding to a phoneme whose prolongation is permitted, in accordance with the duration of the unit sound.
Type: Application
Filed: February 26, 2013
Publication date: October 3, 2013
Applicant: Yamaha Corporation
Inventors: Hiraku Kayama, Motoki Ogasawara
-
Publication number: 20130231928
Abstract: A sound synthesizing apparatus includes a waveform storing section which stores a plurality of unit waveforms extracted from different positions, on a time axis, of a sound waveform indicating a voiced sound, and a waveform generating section which generates a synthesized waveform by arranging the plurality of unit waveforms on the time axis.
Type: Application
Filed: August 30, 2012
Publication date: September 5, 2013
Applicant: Yamaha Corporation
Inventor: Hiraku KAYAMA
-
Patent number: 7999169
Abstract: A sound synthesizer has a storage unit, a setting unit and a sound synthesis unit. The storage unit stores a plurality of sound data respectively representing a plurality of sounds collected at different sound collecting points. The setting unit variably sets a position of a sound receiving point according to an instruction from a user. The sound synthesis unit synthesizes a sound by processing each of the sound data according to a relation between the position of its sound collecting point and the position of the sound receiving point specified by the user.
Type: Grant
Filed: June 3, 2009
Date of Patent: August 16, 2011
Assignee: Yamaha Corporation
Inventor: Hiraku Kayama
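The abstract leaves the per-recording processing open; a simple assumed model weights each recording by inverse distance between its collecting point and the user-set receiving point and mixes the results. The weighting formula and all names below are hypothetical:

```python
import math

def mix_at_receiver(sources, receiver):
    """sources: list of (position, samples) pairs, where position is the
    (x, y) coordinate of the sound collecting point. Weights each recording
    by inverse distance from the user-set receiving point and sums them,
    approximating what would be heard at that point."""
    length = max(len(samples) for _, samples in sources)
    out = [0.0] * length
    for pos, samples in sources:
        weight = 1.0 / (1.0 + math.dist(pos, receiver))
        for i, v in enumerate(samples):
            out[i] += weight * v
    return out
```

Moving the receiving point therefore re-balances the contributions of the stored recordings rather than re-recording anything.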
-
Publication number: 20090308230
Abstract: A sound synthesizer has a storage unit, a setting unit and a sound synthesis unit. The storage unit stores a plurality of sound data respectively representing a plurality of sounds collected at different sound collecting points. The setting unit variably sets a position of a sound receiving point according to an instruction from a user. The sound synthesis unit synthesizes a sound by processing each of the sound data according to a relation between the position of its sound collecting point and the position of the sound receiving point specified by the user.
Type: Application
Filed: June 3, 2009
Publication date: December 17, 2009
Applicant: Yamaha Corporation
Inventor: Hiraku Kayama
-
Patent number: 7606709
Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal, composed of original sinusoidal components and original residual components other than the sinusoidal components. An extracting device extracts original attribute data, characteristic of the input voice signal, from at least its sinusoidal components. A synthesizing device synthesizes new attribute data based on both the original attribute data derived from the input voice signal and target attribute data characteristic of the target voice signal, which is composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
Type: Grant
Filed: October 29, 2002
Date of Patent: October 20, 2009
Assignees: Yamaha Corporation, Pompeu Fabra University
Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada
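Synthesizing new attribute data "based on both" sources is often realized in voice-morphing systems as per-attribute interpolation; the sketch below assumes linear interpolation with a single blend weight, which the patent does not specify:

```python
def morph_attributes(input_attrs, target_attrs, weight=0.5):
    """Synthesize new attribute data by interpolating, per attribute
    (e.g., pitch, amplitude), between values extracted from the input
    voice's sinusoidal components and those of the target voice.
    weight=0.0 keeps the input; weight=1.0 matches the target."""
    return {
        key: (1.0 - weight) * input_attrs[key] + weight * target_attrs[key]
        for key in input_attrs
    }
```

The output voice is then resynthesized from the morphed attributes plus residual components, giving a result partway between the two voices.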
-
Patent number: 7249022
Abstract: There are provided a singing voice-synthesizing method and apparatus capable of synthesizing natural singing voices, close to human singing, from performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is input earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, a singing voice is generated for each phoneme at the determined singing-starting time point and continues for the determined singing duration time.
Type: Grant
Filed: December 1, 2005
Date of Patent: July 24, 2007
Assignee: Yamaha Corporation
Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
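The timing logic can be illustrated with a common singing-synthesis convention: start the consonant of a consonant+vowel phonetic unit ahead of the nominal note-on by the transition time length, so the vowel lands exactly on the beat. The sketch below is that convention only, with hypothetical names, not the patent's exact computation:

```python
def phoneme_timing(note_on, note_length, transition_length):
    """For a consonant+vowel phonetic unit, start the consonant ahead of
    the nominal singing-starting time point so that the vowel begins at
    note-on. Returns ((start, duration), (start, duration)) for the
    first and second phonemes."""
    consonant = (note_on - transition_length, transition_length)
    vowel = (note_on, note_length)
    return consonant, vowel
```

This is why the abstract requires each piece of performance data to arrive earlier than the actual singing-starting time point: the consonant must begin sounding before the note nominally starts.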
-
Patent number: 7149682
Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal, composed of original sinusoidal components and original residual components other than the sinusoidal components. An extracting device extracts original attribute data, characteristic of the input voice signal, from at least its sinusoidal components. A synthesizing device synthesizes new attribute data based on both the original attribute data derived from the input voice signal and target attribute data characteristic of the target voice signal, which is composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
Type: Grant
Filed: October 29, 2002
Date of Patent: December 12, 2006
Assignees: Yamaha Corporation, Pompeu Fabra University
Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada
-
Patent number: 7124084
Abstract: There are provided a singing voice-synthesizing method and apparatus capable of synthesizing natural singing voices, close to human singing, from performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is input earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, a singing voice is generated for each phoneme at the determined singing-starting time point and continues for the determined singing duration time.
Type: Grant
Filed: December 27, 2001
Date of Patent: October 17, 2006
Assignee: Yamaha Corporation
Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
-
Patent number: 7094962
Abstract: For a plurality of types of additional attribute data included in note data, a selection section selects one or more of the plurality of types of additional attribute data. For a plurality of the note data, a display section displays pictorial figures or the like representative of the contents of the additional attribute data of the types selected by the selection section, in proximity to pictorial figures or the like representative of pitches and sounding periods of the note data. The display section also displays pictorial figures or the like indicative of the contents of the additional attribute data, at positions and in sizes corresponding to periods or timing when musical expressions or the like indicated by the additional attribute data are to be applied.
Type: Grant
Filed: February 25, 2004
Date of Patent: August 22, 2006
Assignee: Yamaha Corporation
Inventor: Hiraku Kayama
-
Publication number: 20060085198
Abstract: There are provided a singing voice-synthesizing method and apparatus capable of synthesizing natural singing voices, close to human singing, from performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is input earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, a singing voice is generated for each phoneme at the determined singing-starting time point and continues for the determined singing duration time.
Type: Application
Filed: December 1, 2005
Publication date: April 20, 2006
Applicant: YAMAHA CORPORATION
Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
-
Publication number: 20060085196
Abstract: There are provided a singing voice-synthesizing method and apparatus capable of synthesizing natural singing voices, close to human singing, from performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is input earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, a singing voice is generated for each phoneme at the determined singing-starting time point and continues for the determined singing duration time.
Type: Application
Filed: December 1, 2005
Publication date: April 20, 2006
Applicant: YAMAHA CORPORATION
Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
-
Publication number: 20060085197
Abstract: There are provided a singing voice-synthesizing method and apparatus capable of synthesizing natural singing voices, close to human singing, from performance data input in real time. Performance data is input for each phonetic unit constituting a lyric, supplying phonetic unit information, singing-starting time point information, singing length information, etc. Each piece of performance data is input earlier than the actual singing-starting time point, and a phonetic unit transition time length is generated. Using the phonetic unit transition time length, the singing-starting time point information, and the singing length information, the singing-starting time points and singing duration times of the first and second phonemes are determined. In the singing voice synthesis, a singing voice is generated for each phoneme at the determined singing-starting time point and continues for the determined singing duration time.
Type: Application
Filed: December 1, 2005
Publication date: April 20, 2006
Applicant: YAMAHA CORPORATION
Inventors: Hiraku Kayama, Oscar Celma, Jaume Ortola
-
Publication number: 20040177745
Abstract: For a plurality of types of additional attribute data included in note data, a selection section selects one or more of the plurality of types of additional attribute data. For a plurality of the note data, a display section displays pictorial figures or the like representative of the contents of the additional attribute data of the types selected by the selection section, in proximity to pictorial figures or the like representative of pitches and sounding periods of the note data. The display section also displays pictorial figures or the like indicative of the contents of the additional attribute data, at positions and in sizes corresponding to periods or timing when musical expressions or the like indicated by the additional attribute data are to be applied.
Type: Application
Filed: February 25, 2004
Publication date: September 16, 2004
Applicant: YAMAHA CORPORATION
Inventor: Hiraku Kayama
-
Publication number: 20030061047
Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal, composed of original sinusoidal components and original residual components other than the sinusoidal components. An extracting device extracts original attribute data, characteristic of the input voice signal, from at least its sinusoidal components. A synthesizing device synthesizes new attribute data based on both the original attribute data derived from the input voice signal and target attribute data characteristic of the target voice signal, which is composed of target sinusoidal components and target residual components other than the sinusoidal components. The target attribute data is derived from at least the target sinusoidal components.
Type: Application
Filed: October 29, 2002
Publication date: March 27, 2003
Applicant: YAMAHA CORPORATION
Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada