Patents by Inventor Ryunosuke DAIDO

Ryunosuke DAIDO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11295723
    Abstract: A voice synthesis method includes: supplying a first trained model with control data including phonetic identifier data to generate a series of frequency spectra of harmonic components; supplying a second trained model with the control data to generate a waveform signal representative of non-harmonic components; and generating a voice signal including the harmonic components and the non-harmonic components based on the series of frequency spectra of the harmonic components generated by the first trained model and the waveform signal representative of the non-harmonic components generated by the second trained model.
    Type: Grant
    Filed: May 28, 2020
    Date of Patent: April 5, 2022
    Assignee: YAMAHA CORPORATION
    Inventors: Ryunosuke Daido, Masahiro Shimizu
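The final step of this abstract — combining harmonic frequency spectra with a non-harmonic waveform — could be sketched as below. This is an illustration only: the function names, the Hann window, the hop size, and the inverse-FFT/overlap-add rendering are assumptions for the sketch, not details taken from the patent.

```python
import numpy as np

def overlap_add(frames, hop):
    """Render a waveform from a series of per-frame spectra by inverse FFT
    plus windowed overlap-add."""
    n_frames, n_bins = frames.shape
    frame_len = 2 * (n_bins - 1)
    out = np.zeros(hop * (n_frames - 1) + frame_len)
    window = np.hanning(frame_len)
    for i, spec in enumerate(frames):
        out[i * hop : i * hop + frame_len] += window * np.fft.irfft(spec)
    return out

def combine(harmonic_spectra, aperiodic_wave, hop=128):
    """Sum the harmonic signal rendered from the first model's frame spectra
    with the second model's non-harmonic waveform, trimmed to equal length."""
    harmonic_wave = overlap_add(harmonic_spectra, hop)
    n = min(len(harmonic_wave), len(aperiodic_wave))
    return harmonic_wave[:n] + aperiodic_wave[:n]
```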
  • Patent number: 11289066
    Abstract: A voice synthesis method includes: sequentially acquiring voice units each comprising at least one of a diphone or a triphone in accordance with synthesis information for synthesizing voices; generating statistical spectral envelopes, using a statistical model built by machine learning, in accordance with the synthesis information; and concatenating the sequentially acquired voice units and modifying a frequency spectral envelope of each voice unit in accordance with the generated statistical spectral envelope, thereby synthesizing a voice signal based on the concatenated voice units having the modified frequency spectral envelopes.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: March 29, 2022
    Assignee: YAMAHA CORPORATION
    Inventors: Yuji Hisaminato, Ryunosuke Daido, Keijiro Saino, Jordi Bonada, Merlijn Blaauw
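The envelope-modification step in this abstract — reshaping a voice unit's spectrum toward a statistically generated envelope — might look like the following minimal sketch. It assumes magnitude spectra and a simple whiten-then-recolor division; the function name and the `eps` guard are illustrative, not from the patent.

```python
import numpy as np

def apply_statistical_envelope(unit_spectrum, unit_envelope, target_envelope, eps=1e-9):
    """Divide out the unit's own spectral envelope (whitening), then
    multiply by the statistically generated target envelope (recoloring)."""
    return unit_spectrum * (target_envelope / (unit_envelope + eps))
```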
  • Publication number: 20220084492
    Abstract: A method analyzes a reference signal representing a sound in each of analysis periods to obtain a series of phase spectra and a series of amplitude spectra over the analysis periods; adjusts, for each analysis period, the phase of each harmonic component in the corresponding phase spectrum to a target value at the pitch mark corresponding to that analysis period; synthesizes a first sound signal over the analysis periods based on the adjusted series of phase spectra and the obtained series of amplitude spectra of the reference signal; prepares training data including first control data representing generative conditions of the reference signal and the first sound signal synthesized from the reference signal; and establishes, through machine learning using the training data, a generative model that generates a second sound signal based on second control data representing generative conditions of the second sound signal.
    Type: Application
    Filed: November 24, 2021
    Publication date: March 17, 2022
    Inventor: Ryunosuke DAIDO
  • Publication number: 20210383816
    Abstract: A computer-implemented sound signal generation method includes: obtaining a first sound source spectrum of a sound signal to be generated; obtaining a first spectral envelope of the sound signal; and estimating fragment data representative of samples of the sound signal based on the obtained first sound source spectrum and the obtained first spectral envelope.
    Type: Application
    Filed: August 18, 2021
    Publication date: December 9, 2021
    Inventors: Jordi BONADA, Merlijn BLAAUW, Ryunosuke DAIDO
  • Publication number: 20210375248
    Abstract: A computer-implemented sound signal synthesis method includes: generating, based on first control data representative of a plurality of conditions of a sound signal to be generated, (i) first data representative of a sound source spectrum of the sound signal, and (ii) second data representative of a spectral envelope of the sound signal; and synthesizing the sound signal based on the sound source spectrum indicated by the first data and the spectral envelope indicated by the second data.
    Type: Application
    Filed: August 18, 2021
    Publication date: December 2, 2021
    Inventors: Jordi BONADA, Merlijn BLAAUW, Ryunosuke DAIDO
  • Publication number: 20210366454
    Abstract: A sound signal synthesis method includes generating first data representing a deterministic component of a sound signal based on second control data representing conditions of the sound signal, generating, using a first generation model, second data representing a stochastic component of the sound signal based on the first data and first control data representing conditions of the sound signal, and combining the deterministic component represented by the first data and the stochastic component represented by the second data and thereby generating the sound signal.
    Type: Application
    Filed: August 3, 2021
    Publication date: November 25, 2021
    Inventor: Ryunosuke DAIDO
  • Publication number: 20210350783
    Abstract: A sound signal synthesis method includes inputting control data representing conditions of a sound signal into a neural network, and thereby estimating first data representing a deterministic component of the sound signal and second data representing a stochastic component of the sound signal, and combining the deterministic component represented by the first data and the stochastic component represented by the second data, and thereby generating the sound signal. The neural network has learned a relationship between control data that represents conditions of a sound signal of a reference signal, a deterministic component of the sound signal of the reference signal, and a stochastic component of the sound signal of the reference signal.
    Type: Application
    Filed: July 20, 2021
    Publication date: November 11, 2021
    Inventor: Ryunosuke DAIDO
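The two abstracts above share one final operation: summing a deterministic (pitched, repeatable) component with a stochastic (noise-like) component. The toy sketch below stands in for the trained models with a sine for the deterministic part and scaled noise for the stochastic part; every name and constant here is an assumption for illustration, not the patented method.

```python
import numpy as np

def toy_synthesize(f0, n=1024, sr=16000, noise_gain=0.05, seed=0):
    """Toy two-component synthesis: a deterministic harmonic part derived
    from the pitch condition f0, plus a stochastic part drawn from noise,
    combined by simple addition into one sound signal."""
    t = np.arange(n) / sr
    deterministic = np.sin(2 * np.pi * f0 * t)          # repeatable given f0
    rng = np.random.default_rng(seed)
    stochastic = noise_gain * rng.standard_normal(n)    # noise-like residue
    return deterministic + stochastic
```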
  • Publication number: 20210256960
    Abstract: An information processing system includes at least one memory storing a program and at least one processor. The at least one processor implements the program to input a piece of sound source data representative of a sound source, a piece of style data representative of a performance style, and synthesis data representative of sounding conditions into a synthesis model generated by machine learning, and to generate, using the synthesis model, feature data representative of acoustic features of a target sound of the sound source to be generated in the performance style and according to the sounding conditions, and to generate an audio signal corresponding to the target sound using the generated feature data.
    Type: Application
    Filed: May 4, 2021
    Publication date: August 19, 2021
    Inventors: Ryunosuke DAIDO, Merlijn BLAAUW, Jordi BONADA
  • Publication number: 20210256959
    Abstract: An audio processing system includes a memory and a processor. The processor implements instructions to: establish a re-trained synthesis model by additionally training a pre-trained synthesis model for generating feature data representative of acoustic features of an audio signal according to condition data representative of sounding conditions, using: first condition data representative of sounding conditions identified from a first audio signal of a first sound source; and first feature data representative of acoustic features of the first audio signal; receive an instruction to modify at least one of the sounding conditions of the first audio signal; generate second feature data by inputting second condition data representative of the modified at least one sounding condition into the re-trained synthesis model established by the additional training; and generate a modified audio signal in accordance with the generated second feature data.
    Type: Application
    Filed: May 3, 2021
    Publication date: August 19, 2021
    Inventor: Ryunosuke DAIDO
  • Patent number: 11094312
    Abstract: A voice synthesis method designates a target feature of a voice to be synthesized; specifies harmonic frequencies for a plurality of respective harmonic components of the voice and an amplitude spectrum envelope of the voice; specifies a harmonic amplitude distribution of each of the plurality of respective harmonic components based on (i) the target feature, (ii) the amplitude spectrum envelope, and (iii) the harmonic frequency specified for the respective harmonic component, the harmonic amplitude distribution representing a distribution of amplitudes in a unit band with a peak amplitude corresponding to the respective harmonic component; and generates a frequency spectrum of the voice with the target feature based on harmonic amplitude distributions specified for each of the plurality of respective harmonic components and the amplitude spectrum envelope.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: August 17, 2021
    Assignee: YAMAHA CORPORATION
    Inventor: Ryunosuke Daido
  • Publication number: 20210166128
    Abstract: A computer-implemented method generates a frequency component vector of time series data, by executing a first process and a second process in each unit step. The first process includes: receiving first data; and processing the first data using a first neural network to generate intermediate data. The second process includes: receiving the generated intermediate data; and generating a plurality of component values corresponding to a plurality of frequency bands based on the generated intermediate data such that: a first component value corresponding to a first frequency band is generated using a second neural network based on the generated intermediate data; and a second component value corresponding to a second frequency band different from the first frequency band is generated using the second neural network based on the generated intermediate data and the generated first component value corresponding to the first frequency band.
    Type: Application
    Filed: February 9, 2021
    Publication date: June 3, 2021
    Inventors: Ryunosuke DAIDO, Kanru HUA
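The band-ordering described in this abstract — a later frequency band conditioned on an earlier band's already-generated value — can be sketched with random linear maps standing in for the two neural networks. The weights, shapes, and `tanh` nonlinearity are placeholders chosen only to make the conditioning structure concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # stand-in for the first network: input -> intermediate data
w_band1 = rng.normal(size=4)   # second-network head for the first frequency band
w_band2 = rng.normal(size=5)   # second-network head for band 2; also sees band 1's value

def generate_step(x):
    """One unit step: compute intermediate data, then the band component
    values in order, each later band conditioned on the earlier one."""
    h = np.tanh(W1 @ x)                               # intermediate data
    c1 = float(w_band1 @ h)                           # first frequency band
    c2 = float(w_band2 @ np.concatenate([h, [c1]]))   # second band uses c1
    return c1, c2
```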
  • Publication number: 20210005176
    Abstract: A sound processing method obtains note data representative of a note; obtains an audio signal to be processed; specifies, in accordance with the note, an expression sample representative of a sound expression to be imparted to the note and an expression period, of the audio signal, to which the sound expression is to be imparted; and specifies, in accordance with the expression sample and the expression period, a processing parameter relating to an expression imparting processing for imparting the sound expression to a portion corresponding to the expression period in the audio signal. The method then generates a processed audio signal by performing the expression imparting processing on the audio signal in accordance with the expression sample, the expression period, and the processing parameter.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 7, 2021
    Inventors: Merlijn BLAAUW, Jordi BONADA, Ryunosuke DAIDO, Yuji HISAMINATO
  • Publication number: 20200402525
    Abstract: A method obtains a first sound signal representative of a first sound, including a first spectrum envelope contour and a first reference spectrum envelope contour; obtains a second sound signal, representative of a second sound differing in sound characteristics from the first sound, including a second spectrum envelope contour and a second reference spectrum envelope contour; generates a synthesis spectrum envelope contour by transforming the first spectrum envelope contour based on a first difference between the first spectrum envelope contour and the first reference spectrum envelope contour at a first time point of the first sound signal, and a second difference between the second spectrum envelope contour and the second reference spectrum envelope contour at a second time point of the second sound signal; and generates a third sound signal representative of the first sound that has been transformed using the generated synthesis spectrum envelope contour.
    Type: Application
    Filed: September 8, 2020
    Publication date: December 24, 2020
    Inventors: Ryunosuke DAIDO, Hiraku KAYAMA
  • Publication number: 20200365170
    Abstract: A voice processing method realized by a computer includes compressing forward a first steady period of a plurality of steady periods in a voice signal representing voice, and extending forward a transition period between the first steady period and a second steady period of the plurality of steady periods in the voice signal. Each of the plurality of steady periods is a period in which acoustic characteristics are temporally stable. The second steady period is a period immediately after the first steady period and has a pitch that is different from a pitch of the first steady period.
    Type: Application
    Filed: July 31, 2020
    Publication date: November 19, 2020
    Inventors: Ryunosuke DAIDO, Hiraku KAYAMA
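The compress/extend operation in this abstract amounts to time-scaling different segments of a signal by different factors. A minimal sketch via segment-wise linear-interpolation resampling is shown below (real pitch-preserving time stretching would use a more elaborate method; the factors and boundaries here are illustrative).

```python
import numpy as np

def retime(signal, seg_bounds, factors):
    """Resample each [a, b) segment by its factor: a factor < 1 compresses
    (e.g. a steady period), a factor > 1 extends (e.g. a transition)."""
    out = []
    for (a, b), f in zip(seg_bounds, factors):
        seg = signal[a:b]
        n_new = max(1, int(round(len(seg) * f)))
        t = np.linspace(0, len(seg) - 1, n_new)
        out.append(np.interp(t, np.arange(len(seg)), seg))
    return np.concatenate(out)
```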
  • Publication number: 20200342848
    Abstract: A voice synthesis method designates a target feature of a voice to be synthesized; specifies harmonic frequencies for a plurality of respective harmonic components of the voice and an amplitude spectrum envelope of the voice; specifies a harmonic amplitude distribution of each of the plurality of respective harmonic components based on (i) the target feature, (ii) the amplitude spectrum envelope, and (iii) the harmonic frequency specified for the respective harmonic component, the harmonic amplitude distribution representing a distribution of amplitudes in a unit band with a peak amplitude corresponding to the respective harmonic component; and generates a frequency spectrum of the voice with the target feature based on harmonic amplitude distributions specified for each of the plurality of respective harmonic components and the amplitude spectrum envelope.
    Type: Application
    Filed: July 9, 2020
    Publication date: October 29, 2020
    Inventor: Ryunosuke DAIDO
  • Publication number: 20200294486
    Abstract: A voice synthesis method includes: supplying a first trained model with control data including phonetic identifier data to generate a series of frequency spectra of harmonic components; supplying a second trained model with the control data to generate a waveform signal representative of non-harmonic components; and generating a voice signal including the harmonic components and the non-harmonic components based on the series of frequency spectra of the harmonic components generated by the first trained model and the waveform signal representative of the non-harmonic components generated by the second trained model.
    Type: Application
    Filed: May 28, 2020
    Publication date: September 17, 2020
    Inventors: Ryunosuke DAIDO, Masahiro SHIMIZU
  • Publication number: 20200294484
    Abstract: A voice synthesis method and apparatus generate second control data using an intermediate trained model supplied with first input data including first control data designating phonetic identifiers, change the second control data in accordance with a first user instruction provided by a user, generate synthesis data representing frequency characteristics of a voice to be synthesized using a final trained model supplied with final input data including the first control data and the changed second control data, and generate a voice signal based on the generated synthesis data.
    Type: Application
    Filed: May 28, 2020
    Publication date: September 17, 2020
    Inventor: Ryunosuke DAIDO
  • Patent number: 10482893
    Abstract: A sound processing method includes a step of applying a nonlinear filter to a temporal sequence of spectral envelopes of an acoustic signal, wherein the nonlinear filter smooths fine temporal perturbations of the spectral envelopes without smoothing out large temporal changes. A sound processing apparatus includes a smoothing processor configured to apply a nonlinear filter to a temporal sequence of spectral envelopes of an acoustic signal, wherein the nonlinear filter smooths fine temporal perturbations of the spectral envelopes without smoothing out large temporal changes.
    Type: Grant
    Filed: November 1, 2017
    Date of Patent: November 19, 2019
    Assignee: YAMAHA CORPORATION
    Inventors: Ryunosuke Daido, Hiraku Kayama
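A running median along time is one classic nonlinear filter with exactly the property this abstract describes: brief frame-to-frame jitter is flattened, while a sustained step (a large change lasting longer than about half the window) passes through unchanged. The patent does not specify its filter, so the sketch below is only an illustration of the filter class, with an assumed window length.

```python
import numpy as np

def median_smooth(envelopes, k=5):
    """Apply a length-k running median along the time axis to each envelope
    bin. Isolated spikes (fine temporal perturbation) are removed; a
    persistent level step (large temporal change) is preserved."""
    t, n_bins = envelopes.shape
    pad = k // 2
    padded = np.pad(envelopes, ((pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(envelopes)
    for i in range(t):
        out[i] = np.median(padded[i : i + k], axis=0)
    return out
```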
  • Publication number: 20190251950
    Abstract: A voice synthesis method according to an embodiment includes altering a series of synthesis spectra in a partial period of a synthesis voice based on a series of amplitude spectrum envelope contours of a voice expression to obtain a series of altered spectra to which the voice expression has been imparted, and synthesizing a series of voice samples to which the voice expression has been imparted, based on the series of altered spectra.
    Type: Application
    Filed: April 26, 2019
    Publication date: August 15, 2019
    Inventors: Jordi BONADA, Merlijn BLAAUW, Keijiro SAINO, Ryunosuke DAIDO, Michael WILSON, Yuji HISAMINATO
  • Patent number: 10354631
    Abstract: A method for processing an input sound signal of a singing voice, to obtain a sound signal with an impression different from the input sound signal, includes: selecting a genre from among a plurality of tune genres in accordance with a selection operation by a user; setting, to a first unit, a set of first parameters corresponding to the selected genre; displaying a first impression identifier corresponding to the selected genre for a first control of a first user parameter in the set of first parameters; changing the first user parameter in accordance with a change operation on the first control by the user; and strengthening, by the first unit, signal components within a particular frequency band of the sound signal in accordance with the set of first parameters including the changed first user parameter.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: July 16, 2019
    Assignee: YAMAHA CORPORATION
    Inventors: Ryunosuke Daido, Hiraku Kayama, Sumiya Ishikawa
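The "strengthening signal components within a particular frequency band" step of this last abstract could be sketched, in its simplest form, as scaling FFT bins inside the band. The band edges, gain, and function name below are assumptions for illustration; a product implementation would use a proper parametric equalizer rather than spectral masking.

```python
import numpy as np

def boost_band(signal, sr, lo, hi, gain_db):
    """Strengthen components inside [lo, hi] Hz by scaling the
    corresponding FFT bins, then inverse-transforming."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    mask = (freqs >= lo) & (freqs <= hi)
    spec[mask] *= 10 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(signal))
```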