Patents by Inventor Jordi Bonada

Jordi Bonada has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8706496
    Abstract: A sequence is received of time domain digital audio samples representing sound (e.g., a sound generated by a human voice or a musical instrument). The time domain digital audio samples are processed to derive a corresponding sequence of audio pulses in the time domain. Each of the audio pulses is associated with a characteristic frequency. Frequency domain information is derived about each of at least some of the audio pulses. The sound represented by the time domain digital audio samples is transformed by processing the audio pulses using the frequency domain information. The sound transformation utilizes overlapping windows, and a computational cost function is determined that depends on the product of the number of pitch periods and the inverse of the minimum fundamental frequency within the window.
    Type: Grant
    Filed: September 13, 2007
    Date of Patent: April 22, 2014
    Assignee: Universitat Pompeu Fabra
    Inventor: Jordi Bonada Sanjaume
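
The window cost described in the abstract has a simple closed form: it grows with the number of pitch periods in the window and with the inverse of the lowest fundamental frequency found there. A minimal sketch (not the patented implementation; the `pitch_marks`/`f0_values` representation and units are assumptions):

```python
def window_cost(pitch_marks, f0_values):
    """Cost term for one analysis window.

    pitch_marks: sample indices of pulse onsets inside the window;
    f0_values: fundamental-frequency estimates (Hz) for those pulses.
    """
    n_periods = max(len(pitch_marks) - 1, 0)  # pitch periods spanned by the marks
    f0_min = min(f0_values)                   # lowest fundamental in the window
    return n_periods * (1.0 / f0_min)         # cost = periods x inverse of min F0
```

Windows with many periods of a low-pitched signal are thus the most expensive to process, which matches the product structure stated in the abstract.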
  • Publication number: 20140006018
    Abstract: In a voice processing apparatus, a processor is configured to adjust a fundamental frequency of a first voice signal corresponding to a voice having target voice characteristics to a fundamental frequency of a second voice signal corresponding to a voice having initial voice characteristics different from the target voice characteristics. The processor is further configured to sequentially generate a processed spectrum based on a spectrum of the first voice signal and a spectrum of the second voice signal by: dividing the spectrum of the first voice signal into a plurality of harmonic band components after the fundamental frequency of the first voice signal has been adjusted; allocating each harmonic band component of the first voice signal to each harmonic frequency associated with the fundamental frequency of the second voice signal; and adjusting an envelope and phase of each harmonic band component according to the spectrum of the second voice signal.
    Type: Application
    Filed: June 20, 2013
    Publication date: January 2, 2014
    Applicant: Yamaha Corporation
    Inventors: Jordi BONADA, Merlijn BLAAUW, Yuji HISAMINATO
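
The band-allocation step above amounts to cutting the first spectrum into slices centered on its harmonics and re-centering each slice on the matching harmonic of the second signal. A toy magnitude-only sketch under assumed conventions (F0s given as FFT-bin indices, fixed band width, no envelope/phase adjustment):

```python
def harmonic_band_allocation(mag_a, f0_a_bin, f0_b_bin, n_harm):
    """Move bands around harmonics k*f0_a_bin of mag_a to harmonics k*f0_b_bin.

    mag_a: magnitude spectrum as a list; f0 values are bin indices (assumption).
    """
    out = [0.0] * len(mag_a)
    half = f0_a_bin // 2  # half-width of each harmonic band
    for k in range(1, n_harm + 1):
        src, dst = k * f0_a_bin, k * f0_b_bin
        for i in range(max(src - half, 0), min(src + half + 1, len(mag_a))):
            j = i + (dst - src)        # shift the band onto the target harmonic
            if 0 <= j < len(out):
                out[j] = max(out[j], mag_a[i])
    return out
```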
  • Publication number: 20130311189
    Abstract: In a voice processing apparatus, a processor performs generating a converted feature by applying a source feature of source voice to a conversion function, generating an estimated feature based on a probability that the source feature belongs to each element distribution of a mixture distribution model that approximates distribution of features of voices having different characteristics, generating a first conversion filter based on a difference between a first spectrum corresponding to the converted feature and an estimated spectrum corresponding to the estimated feature, generating a second spectrum by applying the first conversion filter to a source spectrum corresponding to the source feature, generating a second conversion filter based on a difference between the first spectrum and the second spectrum, and generating target voice by applying the first conversion filter and the second conversion filter to the source spectrum.
    Type: Application
    Filed: May 16, 2013
    Publication date: November 21, 2013
    Applicant: Yamaha Corporation
    Inventors: Fernando Villavicencio, Jordi Bonada
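
The two-filter cascade above can be sketched on magnitude spectra, with division standing in for the "difference" (which it becomes in the log-magnitude domain). Function names and the epsilon guard are assumptions, not the patented formulation:

```python
def two_stage_conversion(conv_spec, est_spec, src_spec, eps=1e-9):
    """Apply a first filter (converted vs. estimated) and a second
    compensating filter (residual difference) to the source spectrum."""
    f1 = [c / (e + eps) for c, e in zip(conv_spec, est_spec)]   # first conversion filter
    spec2 = [f * s for f, s in zip(f1, src_spec)]               # second spectrum
    f2 = [c / (s + eps) for c, s in zip(conv_spec, spec2)]      # second conversion filter
    return [f * s for f, s in zip(f2, spec2)]                   # target spectrum
```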
  • Patent number: 8423367
    Abstract: Variation over time in fundamental frequency in singing voices is separated into a melody-dependent component and a phoneme-dependent component, modeled for each of the components and stored into a singing synthesizing database. In execution of singing synthesis, a pitch curve indicative of variation over time in fundamental frequency of the melody is synthesized in accordance with an arrangement of notes represented by a singing synthesizing score and the melody-dependent component, and the pitch curve is corrected, for each of pitch curve sections corresponding to phonemes constituting lyrics, using a phoneme-dependent component model corresponding to the phoneme. Such arrangements can accurately model a singing expression, unique to a singing person and appearing in a melody singing style of the person, while taking into account phoneme-dependent pitch variation, and thereby permits synthesis of singing voices that sound more natural.
    Type: Grant
    Filed: July 1, 2010
    Date of Patent: April 16, 2013
    Assignee: Yamaha Corporation
    Inventors: Keijiro Saino, Jordi Bonada
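
The melody/phoneme decomposition above implies a two-stage synthesis: build a pitch curve from the melody component, then correct each section with its phoneme's model. A minimal sketch assuming constant per-phoneme offsets (the real models are learned contours, not constants):

```python
def corrected_pitch_curve(melody_curve, phoneme_segments, phoneme_models):
    """Correct a melody pitch curve section by section.

    melody_curve: per-frame pitch values from the melody component;
    phoneme_segments: (start, end, phoneme) frame ranges over the curve;
    phoneme_models: hypothetical per-phoneme offsets (e.g. in cents).
    """
    curve = list(melody_curve)
    for start, end, phoneme in phoneme_segments:
        offset = phoneme_models.get(phoneme, 0.0)
        for i in range(start, end):
            curve[i] += offset  # phoneme-dependent correction of this section
    return curve
```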
  • Patent number: 8338687
    Abstract: Waveform data representative of singing voices of a singing music piece are analyzed to generate melody component data representative of variation over time in fundamental frequency component presumed to represent a melody in the singing voices. Then, through machine learning that uses score data representative of a musical score of the singing music piece and the melody component data, a melody component model, representative of a variation component presumed to represent the melody among the variation over time in fundamental frequency component, is generated for each combination of notes. Parameters defining the melody component models and note identifiers indicative of the combinations of notes whose variation over time in fundamental frequency component are represented by the melody component models are stored into a pitch curve generating database in association with each other.
    Type: Grant
    Filed: January 10, 2012
    Date of Patent: December 25, 2012
    Assignee: Yamaha Corporation
    Inventors: Keijiro Saino, Jordi Bonada
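
The database described above keys learned pitch-variation models by note-combination identifiers. As a stand-in for the machine-learning step, a sketch that simply averages the observed F0 contours per note transition (the actual models and parameters are more elaborate):

```python
from collections import defaultdict
from statistics import fmean

def train_melody_components(samples):
    """Build a toy pitch-curve database keyed by note combinations.

    samples: list of ((note_a, note_b), f0_contour) pairs, where each
    contour is a list of F0 values of equal length (an assumption here).
    """
    buckets = defaultdict(list)
    for note_pair, contour in samples:
        buckets[note_pair].append(contour)
    # "model" per note combination: the frame-wise mean contour
    return {pair: [fmean(frame) for frame in zip(*contours)]
            for pair, contours in buckets.items()}
```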
  • Publication number: 20120310650
    Abstract: In a voice synthesis apparatus, a phoneme piece interpolator acquires first phoneme piece data corresponding to a first value of sound characteristic, and second phoneme piece data corresponding to a second value of the sound characteristic. The first and second phoneme piece data indicate a spectrum of each frame of a phoneme piece. The phoneme piece interpolator interpolates between each frame of the first phoneme piece data and each frame of the second phoneme piece data so as to create phoneme piece data of the phoneme piece corresponding to a target value of the sound characteristic which is different from either of the first and second values of the sound characteristic. A voice synthesizer generates a voice signal having the target value of the sound characteristic based on the created phoneme piece data.
    Type: Application
    Filed: May 24, 2012
    Publication date: December 6, 2012
    Applicant: YAMAHA CORPORATION
    Inventors: Jordi BONADA, Merlijn BLAAUW, Makoto Tachibana
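
The frame interpolation above can be written as a per-bin linear blend, weighted by where the target characteristic value falls between the two recorded values. A sketch (spectra as plain lists; linear weighting is an assumption):

```python
def interpolate_frames(frame1, frame2, v1, v2, v_target):
    """Interpolate between two phoneme-piece frames recorded at sound
    characteristic values v1 and v2, toward a target value in between."""
    w = (v_target - v1) / (v2 - v1)                 # 0 at v1, 1 at v2
    return [(1 - w) * a + w * b for a, b in zip(frame1, frame2)]
```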
  • Publication number: 20120201385
    Abstract: Signal processing section of a terminal converts acquired audio signals of a plurality of channels into a set of frequency spectra, calculates sound image positions corresponding to individual frequency components, and displays the calculated sound image positions on a display screen by use of a coordinate system having coordinate axes of the frequency components and sound image positions. A user-designated partial region of the coordinate system is set as a designated region and an amplitude-level adjusting amount is set for the designated region, so that the signal processing section adjusts amplitude levels of frequency components included in the frequency spectra and in the designated region, converts the adjusted frequency components into audio signals and outputs the converted audio signals.
    Type: Application
    Filed: February 7, 2012
    Publication date: August 9, 2012
    Applicant: YAMAHA CORPORATION
    Inventors: Yasuyuki UMEYAMA, Kazunobu Kondo, Yu Takahashi, Jordi Bonada, Jordi Janer, Ricard Marxer
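
A per-component sound image position can be estimated from the inter-channel amplitude balance. A stereo sketch (the balance formula is an assumption; the patent does not specify it):

```python
def image_positions(mag_l, mag_r, eps=1e-12):
    """Per-bin sound image position in [-1, 1] from left/right magnitudes:
    -1 = fully left, 0 = center, +1 = fully right."""
    return [(r - l) / (l + r + eps) for l, r in zip(mag_l, mag_r)]
```

Plotted against frequency, these positions give exactly the kind of frequency-vs-position coordinate display the abstract describes.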
  • Publication number: 20120106746
    Abstract: Candidate frequencies per unit segment of an audio signal are identified. First processing section identifies an estimated train that is a time series of candidate frequencies, each selected for a different one of the segments, arranged over a plurality of the unit segments and that has a high likelihood of corresponding to a time series of fundamental frequencies of a target component. Second processing section identifies a state train of states, each indicative of one of sound-generating and non-sound-generating states of the target component in a different one of the segments, arranged over the unit segments. Frequency information which designates, as a fundamental frequency of the target component, a candidate frequency corresponding to the unit segment in the estimated train is generated for each unit segment corresponding to the sound-generating state. Frequency information indicative of no sound generation is generated for each unit segment corresponding to the non-sound-generating state.
    Type: Application
    Filed: October 28, 2011
    Publication date: May 3, 2012
    Applicant: YAMAHA CORPORATION
    Inventors: Jordi BONADA, Jordi Janer, Ricard Marxer, Yasuyuki Umeyama, Kazunobu Kondo, Francisco Garcia
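
Selecting one candidate frequency per unit segment so that the whole train is most plausible is a dynamic-programming problem. A Viterbi-style sketch (the salience scores and jump penalty are hypothetical; candidates within a segment are assumed distinct):

```python
def best_frequency_train(candidates, salience, jump_cost=1.0):
    """Pick one candidate F0 per segment, maximizing total salience
    minus a penalty proportional to pitch jumps between segments."""
    prev = {f: salience[0][i] for i, f in enumerate(candidates[0])}
    back = []
    for seg in range(1, len(candidates)):
        cur, bk = {}, {}
        for i, f in enumerate(candidates[seg]):
            # best predecessor under score-minus-jump-penalty
            g = max(prev, key=lambda p: prev[p] - jump_cost * abs(f - p))
            cur[f] = salience[seg][i] + prev[g] - jump_cost * abs(f - g)
            bk[f] = g
        back.append(bk)
        prev = cur
    f = max(prev, key=prev.get)      # best final candidate
    path = [f]
    for bk in reversed(back):        # backtrack through the pointers
        f = bk[f]
        path.append(f)
    return path[::-1]
```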
  • Publication number: 20120106758
    Abstract: A coefficient train processing section, which sequentially generates per unit segment a processing coefficient train for suppressing a target component of an audio signal, includes a basic coefficient train generation section and coefficient train processing section. The basic coefficient train generation section generates a basic coefficient train where basic coefficient values corresponding to frequencies within a particular frequency band range are each set at a suppression value that suppresses the audio signal while coefficient values corresponding to frequencies outside the particular frequency band range are each set at a pass value that maintains the audio signal. The coefficient train processing section generates the processing coefficient train, per unit segment, by changing, to the pass value, each of the coefficient values corresponding to frequencies other than the target component among the coefficient values corresponding to the frequencies within the particular frequency band range.
    Type: Application
    Filed: October 28, 2011
    Publication date: May 3, 2012
    Applicant: YAMAHA CORPORATION
    Inventors: Jordi BONADA, Jordi JANER, Ricard MARXER, Yasuyuki UMEYAMA, Kazunobu KONDO
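
The coefficient-train logic above is a two-step mask: suppress everything inside the band, then restore bins in the band that do not belong to the target component. A sketch with assumed suppression/pass values of 0 and 1:

```python
def processing_coefficients(freqs, band, target_bins):
    """Build a processing coefficient train for one unit segment.

    freqs: bin center frequencies; band: (low, high) of the particular
    frequency band range; target_bins: frequencies of the target component.
    """
    SUPPRESS, PASS = 0.0, 1.0
    # basic coefficient train: suppress inside the band, pass outside
    basic = [SUPPRESS if band[0] <= f <= band[1] else PASS for f in freqs]
    # restore non-target frequencies inside the band to the pass value
    return [PASS if (band[0] <= f <= band[1] and f not in target_bins) else c
            for f, c in zip(freqs, basic)]
```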
  • Patent number: 8170870
    Abstract: In an audio signal processing apparatus, a generation section generates an audio signal representing a voice. A distribution section distributes the audio signal generated by the generation section to a first channel and a second channel, respectively. A delay section delays the audio signal of the first channel relative to the audio signal of the second channel for creating a phase difference between the audio signal of the first channel and the audio signal of the second channel such that the created phase difference has a duration corresponding to either an added value of a first duration which is approximately one half of a period of the audio signal generated by the generation section and a second duration which is set shorter than the first duration, or a difference value of the first duration and the second duration.
    Type: Grant
    Filed: November 14, 2005
    Date of Patent: May 1, 2012
    Assignee: Yamaha Corporation
    Inventors: Hideki Kemmochi, Jordi Bonada
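
The delay described above is roughly half the signal period plus or minus a shorter second duration. A direct sketch in samples (parameter names and the rounding are assumptions):

```python
def channel_delay_samples(f0_hz, sample_rate, second_duration_s, add=True):
    """Delay of the first channel relative to the second, in samples:
    half the period of the generated signal, plus (or minus) a shorter
    second duration."""
    half_period = 0.5 / f0_hz  # first duration: approx. half a period, seconds
    d = half_period + second_duration_s if add else half_period - second_duration_s
    return round(d * sample_rate)
```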
  • Patent number: 8158871
    Abstract: An audio recording is processed and evaluated. A sequence of identified notes corresponding to the audio recording is determined by iteratively identifying potential notes within the audio recording. A rating for the audio recording is determined using a tuning rating and an expression rating. The audio recording includes a recording of at least a portion of a musical composition.
    Type: Grant
    Filed: April 29, 2011
    Date of Patent: April 17, 2012
    Assignees: Universitat Pompeu Fabra, BMAT Licensing, S.L.
    Inventors: Jordi Janer Mestres, Jordi Bonada Sanjaume, Maarten de Boer, Alex Loscos Mira
  • Patent number: 8115089
    Abstract: Waveform data representative of singing voices of a singing music piece are analyzed to generate melody component data representative of variation over time in fundamental frequency component presumed to represent a melody in the singing voices. Then, through machine learning that uses score data representative of a musical score of the singing music piece and the melody component data, a melody component model, representative of a variation component presumed to represent the melody among the variation over time in fundamental frequency component, is generated for each combination of notes. Parameters defining the melody component models and note identifiers indicative of the combinations of notes whose variation over time in fundamental frequency component are represented by the melody component models are stored into a pitch curve generating database in association with each other.
    Type: Grant
    Filed: July 1, 2010
    Date of Patent: February 14, 2012
    Assignee: Yamaha Corporation
    Inventors: Keijiro Saino, Jordi Bonada
  • Patent number: 8013231
    Abstract: A sound signal processing apparatus which is capable of correctly detecting expression modes and expression transitions of a song or performance from an input sound signal. A sound signal produced by performance or singing of musical tones is input and divided into frames of predetermined time periods. Characteristic parameters of the input sound signal are detected on a frame-by-frame basis. An expression determining process is carried out in which a plurality of expression modes of a performance or song are modeled as respective states, the probability that a section including a frame or a plurality of continuous frames lies in a specific state is calculated with respect to a predetermined observed section based on the characteristic parameters, and the optimum route of state transition in the predetermined observed section is determined based on the calculated probabilities so as to determine expression modes of the sound signal and lengths thereof.
    Type: Grant
    Filed: May 24, 2006
    Date of Patent: September 6, 2011
    Assignee: Yamaha Corporation
    Inventors: Takuya Fujishima, Alex Loscos, Jordi Bonada, Oscar Mayor
  • Publication number: 20110209596
    Abstract: An audio recording is processed and evaluated. A sequence of identified notes corresponding to the audio recording is determined by iteratively identifying potential notes within the audio recording. A rating for the audio recording is determined using a tuning rating and an expression rating. The audio recording includes a recording of at least a portion of a musical composition.
    Type: Application
    Filed: April 29, 2011
    Publication date: September 1, 2011
    Inventors: Jordi Janer Mestres, Jordi Bonada Sanjaume, Maarten de Boer, Alex Loscos Mira
  • Patent number: 7945446
    Abstract: Spectrum envelope of an input sound is detected. In the meantime, a converting spectrum is acquired which is a frequency spectrum of a converting sound comprising a plurality of sounds, such as unison sounds. Output spectrum is generated by imparting the detected spectrum envelope of the input sound to the acquired converting spectrum. Sound signal is synthesized on the basis of the generated output spectrum. Further, a pitch of the input sound may be detected, and frequencies of peaks in the acquired converting spectrum may be varied in accordance with the detected pitch of the input sound. In this manner, the output spectrum can have the pitch and spectrum envelope of the input sound and spectrum frequency components of the converting sound comprising a plurality of sounds, and thus, unison sounds can be readily generated with simple arrangements.
    Type: Grant
    Filed: March 9, 2006
    Date of Patent: May 17, 2011
    Assignee: Yamaha Corporation
    Inventors: Hideki Kemmochi, Yasuo Yoshioka, Jordi Bonada
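
Imparting the input sound's envelope to the converting spectrum can be sketched as: flatten the converting spectrum by its own coarse envelope, then multiply by the input's envelope. The moving-average envelope below is a crude stand-in for a real envelope estimator:

```python
def smooth_envelope(mag, width=5):
    """Crude spectral envelope: moving average of a magnitude spectrum."""
    half = width // 2
    return [fsum(mag[max(i - half, 0):i + half + 1]) /
            len(mag[max(i - half, 0):i + half + 1]) for i in range(len(mag))]

def impose_envelope(mag_input, mag_convert, eps=1e-9):
    """Give the converting spectrum (e.g. unison sounds) the spectral
    envelope of the input sound, bin by bin."""
    env_in = smooth_envelope(mag_input)
    env_cv = smooth_envelope(mag_convert)
    return [m / (ec + eps) * ei
            for m, ec, ei in zip(mag_convert, env_cv, env_in)]

from math import fsum  # needed by smooth_envelope
```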
  • Publication number: 20110000360
    Abstract: Waveform data representative of singing voices of a singing music piece are analyzed to generate melody component data representative of variation over time in fundamental frequency component presumed to represent a melody in the singing voices. Then, through machine learning that uses score data representative of a musical score of the singing music piece and the melody component data, a melody component model, representative of a variation component presumed to represent the melody among the variation over time in fundamental frequency component, is generated for each combination of notes. Parameters defining the melody component models and note identifiers indicative of the combinations of notes whose variation over time in fundamental frequency component are represented by the melody component models are stored into a pitch curve generating database in association with each other.
    Type: Application
    Filed: July 1, 2010
    Publication date: January 6, 2011
    Applicant: Yamaha Corporation
    Inventors: Keijiro SAINO, Jordi BONADA
  • Publication number: 20110004476
    Abstract: Variation over time in fundamental frequency in singing voices is separated into a melody-dependent component and a phoneme-dependent component, modeled for each of the components and stored into a singing synthesizing database. In execution of singing synthesis, a pitch curve indicative of variation over time in fundamental frequency of the melody is synthesized in accordance with an arrangement of notes represented by a singing synthesizing score and the melody-dependent component, and the pitch curve is corrected, for each of pitch curve sections corresponding to phonemes constituting lyrics, using a phoneme-dependent component model corresponding to the phoneme. Such arrangements can accurately model a singing expression, unique to a singing person and appearing in a melody singing style of the person, while taking into account phoneme-dependent pitch variation, and thereby permits synthesis of singing voices that sound more natural.
    Type: Application
    Filed: July 1, 2010
    Publication date: January 6, 2011
    Applicant: Yamaha Corporation
    Inventors: Keijiro Saino, Jordi Bonada
  • Patent number: 7812239
    Abstract: Storage section has stored therein music piece data sets of a plurality of music pieces, each of the music piece data sets including respective tone data of a plurality of fragments of the music piece and respective character values indicative of musical characters of the fragments. Each of the fragments of a selected main music piece is selected as a main fragment, and each one, other than the selected main fragment, of a plurality of fragments of two or more music pieces is selected as a sub fragment. A similarity index value indicative of a degree of similarity between the character value of the main fragment and the character value of the specified sub fragment is calculated. For each of the main fragments, a sub fragment presenting a similarity index value that satisfies a predetermined selection condition is selected for processing the tone data of the main music piece.
    Type: Grant
    Filed: July 15, 2008
    Date of Patent: October 12, 2010
    Assignee: Yamaha Corporation
    Inventors: Takuya Fujishima, Maarten De Boer, Jordi Bonada, Samuel Roig, Fokke De Jong, Sebastian Streich
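
The similarity-index selection above (shared in spirit by the two fragment-search patents that follow) can be sketched with a Euclidean distance over character-value vectors and a threshold as the selection condition; both choices are assumptions, since the abstract does not fix the metric:

```python
def pick_sub_fragment(main_char, candidates, threshold):
    """Return the index of the most similar candidate sub fragment,
    or None if no candidate satisfies the selection condition.

    main_char / candidates: numeric character-value vectors.
    """
    def dist(a, b):  # Euclidean distance as a stand-in similarity index
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d, i = min((dist(main_char, c), i) for i, c in enumerate(candidates))
    return i if d <= threshold else None
```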
  • Patent number: 7812240
    Abstract: Analysis section divides waveform data of a given music piece into waveform data of a plurality of fragments and divides the waveform data of each of the fragments into one or more events of sound, and obtains a character value indicative of a character of the waveform data pertaining to each of the divided events. Storage section stores respective music piece data and music piece composing data of one or more music pieces. The music piece composing data include a character value indicative of a character of the waveform data pertaining to each of the events of each of the fragments. Search section searches (or retrieves) for, from among the stored music piece composing data, one event or a plurality of successive events having a character value of a high degree of similarity to one or more events included in a designated fragment.
    Type: Grant
    Filed: October 10, 2008
    Date of Patent: October 12, 2010
    Assignee: Yamaha Corporation
    Inventors: Sebastian Streich, Jordi Bonada, Samuel Roig
  • Patent number: 7750228
    Abstract: For at least one music piece, a storage section stores tone data of each of a plurality of fragments segmented from the music piece and stores a first descriptor indicative of a musical character of each of the fragments in association with the fragment. Descriptor generation section receives input data based on operation by a user and generates a second descriptor, indicative of a musical character, on the basis of the received input data. Determination section determines similarity between the second descriptor and the first descriptor of each of the fragments. Selection section selects the tone data of at least one fragment on the basis of a result of the similarity determination by the determination section. On the basis of the tone data of the selected at least one fragment, a data generation section generates tone data to be outputted.
    Type: Grant
    Filed: January 7, 2008
    Date of Patent: July 6, 2010
    Assignee: Yamaha Corporation
    Inventors: Takuya Fujishima, Jordi Bonada, Maarten De Boer