Patents by Inventor Jordi Janer

Jordi Janer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11087727
    Abstract: A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the steps of generating a music accompaniment (23) as a function of the estimated musical key (15b), the tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) with the music accompaniment (23) to obtain the song (12). A system, a server and a device are also disclosed. (An illustrative code sketch of this pipeline follows this entry.)
    Type: Grant
    Filed: April 9, 2018
    Date of Patent: August 10, 2021
    Assignee: SUGARMUSIC S.P.A.
    Inventors: Filippo Sugar, Jordi Janer, Roberto Vernetti, Oscar Mayor
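    Illustrative code sketch: the pipeline described in the abstract above can be approximated with open-source tools; this is a minimal sketch, assuming librosa for tempo estimation and pitch/time alignment, a Krumhansl-profile correlation for the key, and a trivial drone standing in for the arrangement database. All function names, parameters and the synthetic input are assumptions, not the patent's implementation.

        # Hypothetical sketch of the voice-to-song pipeline; names are illustrative.
        import numpy as np
        import librosa

        KRUMHANSL_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                                    2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
        PITCHES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

        def estimate_key_and_tempo(voice, sr):
            """Estimate a musical key (major keys only, for brevity) and a tempo."""
            chroma = librosa.feature.chroma_stft(y=voice, sr=sr).mean(axis=1)
            scores = [np.corrcoef(np.roll(KRUMHANSL_MAJOR, k), chroma)[0, 1]
                      for k in range(12)]
            key = PITCHES[int(np.argmax(scores))]
            tempo, _ = librosa.beat.beat_track(y=voice, sr=sr)
            return key, float(tempo)

        def align_voice(voice, sr, semitone_shift, stretch_rate):
            """Tuning control (pitch shift) and timing control (time stretch)."""
            tuned = librosa.effects.pitch_shift(voice, sr=sr, n_steps=semitone_shift)
            return librosa.effects.time_stretch(tuned, rate=stretch_rate)

        def generate_accompaniment(key, tempo, n_samples, sr):
            """Stand-in for querying an arrangement database: a root-note drone."""
            root_hz = librosa.note_to_hz(key + '3')
            t = np.arange(n_samples) / sr
            return 0.2 * np.sin(2 * np.pi * root_hz * t)

        if __name__ == '__main__':
            sr = 22050
            # Synthetic "singing voice" stand-in: an A4 tone with vibrato,
            # pulsed at 2 Hz so that tempo estimation has onsets to work with.
            t = np.arange(4 * sr) / sr
            voice = 0.5 * np.sin(2 * np.pi * 440 * t + 0.3 * np.sin(2 * np.pi * 5 * t))
            voice *= (np.sin(2 * np.pi * 2 * t) > 0)

            key, tempo = estimate_key_and_tempo(voice, sr)
            aligned = align_voice(voice, sr, semitone_shift=0.0, stretch_rate=1.0)
            accomp = generate_accompaniment(key, tempo, len(aligned), sr)
            song = aligned + accomp[:len(aligned)]
            print(f'key={key}, tempo={tempo:.1f} BPM, length={len(song) / sr:.1f}s')
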
  • Publication number: 20200074966
    Abstract: A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the steps of generating a music accompaniment (23) as a function of the estimated musical key (15b), the tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) with the music accompaniment (23) to obtain the song (12). A system, a server and a device are also disclosed.
    Type: Application
    Filed: April 9, 2018
    Publication date: March 5, 2020
    Inventors: Filippo Sugar, Jordi Janer, Roberto Vernetti, Oscar Mayor
  • Patent number: 9224406
    Abstract: Candidate frequencies are identified per unit segment of an audio signal. A first processing section identifies an estimated train, that is, a time series of candidate frequencies, each selected for a different one of the unit segments and arranged over a plurality of the unit segments, which has a high likelihood of corresponding to a time series of fundamental frequencies of a target component. A second processing section identifies a state train of states, each indicative of either a sound-generating or a non-sound-generating state of the target component in a different one of the unit segments, arranged over the unit segments. For each unit segment corresponding to the sound-generating state, frequency information is generated that designates, as the fundamental frequency of the target component, the candidate frequency corresponding to that unit segment in the estimated train; for each unit segment corresponding to the non-sound-generating state, frequency information indicative of no sound generation is generated. (An illustrative code sketch of this two-stage estimation follows this entry.)
    Type: Grant
    Filed: October 28, 2011
    Date of Patent: December 29, 2015
    Assignee: Yamaha Corporation
    Inventors: Jordi Bonada, Jordi Janer, Ricard Marxer, Yasuyuki Umeyama, Kazunobu Kondo, Francisco Garcia
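    Illustrative code sketch: a minimal sketch of the two-stage idea in the abstract above, assuming a Viterbi-style dynamic-programming search over per-segment candidate frequencies followed by a simple energy-based sound-generating/non-sound-generating decision. The candidate generation, salience values and threshold are invented for the example and are not the patent's method.

        # Names and synthetic data are assumptions, not the patent's notation.
        import numpy as np

        def best_candidate_train(candidates, saliences, jump_penalty=0.01):
            """Dynamic programming: pick one candidate per segment so that the
            train is salient overall while avoiding large frequency jumps."""
            n_seg, n_cand = candidates.shape
            score = saliences[0].copy()
            back = np.zeros((n_seg, n_cand), dtype=int)
            for t in range(1, n_seg):
                # transition cost grows with the jump between consecutive segments
                jump = np.abs(candidates[t][None, :] - candidates[t - 1][:, None])
                total = score[:, None] + saliences[t][None, :] - jump_penalty * jump
                back[t] = np.argmax(total, axis=0)
                score = np.max(total, axis=0)
            # backtrack to recover the estimated train
            idx = np.zeros(n_seg, dtype=int)
            idx[-1] = int(np.argmax(score))
            for t in range(n_seg - 1, 0, -1):
                idx[t - 1] = back[t, idx[t]]
            return candidates[np.arange(n_seg), idx]

        def state_train(energies, threshold=0.1):
            """Sound-generating state per unit segment (a crude energy threshold)."""
            return energies > threshold

        if __name__ == '__main__':
            rng = np.random.default_rng(0)
            n_seg = 20
            true_f0 = 220.0 + 5.0 * np.sin(np.linspace(0, np.pi, n_seg))
            # three candidates per segment: the true F0, an octave error, noise
            candidates = np.stack([true_f0, 2 * true_f0,
                                   rng.uniform(100, 800, n_seg)], axis=1)
            saliences = np.tile([1.0, 0.8, 0.3], (n_seg, 1))
            energies = np.r_[np.full(15, 0.5), np.full(5, 0.01)]  # tail is silent

            f0_train = best_candidate_train(candidates, saliences)
            voiced = state_train(energies)
            f0_out = np.where(voiced, f0_train, 0.0)  # 0.0 marks "no sound generation"
            print(np.round(f0_out, 1))
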
  • Patent number: 9070370
    Abstract: A coefficient train processing section, which sequentially generates, per unit segment, a processing coefficient train for suppressing a target component of an audio signal, includes a basic coefficient train generation section and a coefficient train processing section. The basic coefficient train generation section generates a basic coefficient train in which basic coefficient values corresponding to frequencies within a particular frequency band range are each set at a suppression value that suppresses the audio signal, while coefficient values corresponding to frequencies outside the particular frequency band range are each set at a pass value that maintains the audio signal. The coefficient train processing section then generates the processing coefficient train, per unit segment, by changing to the pass value each of the coefficient values that correspond to frequencies other than those of the target component among the coefficient values corresponding to frequencies within the particular frequency band range. (An illustrative code sketch of this masking scheme follows this entry.)
    Type: Grant
    Filed: October 28, 2011
    Date of Patent: June 30, 2015
    Assignee: Yamaha Corporation
    Inventors: Jordi Bonada, Jordi Janer, Ricard Marxer, Yasuyuki Umeyama, Kazunobu Kondo
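    Illustrative code sketch: a minimal sketch of the coefficient-train idea in the abstract above, assuming an STFT-style spectrum so that each coefficient multiplies one frequency bin. The band limits, the harmonic definition of the target component and the pass/suppression values are assumptions made for the example.

        # Illustrative only; names do not follow the patent's terminology.
        import numpy as np

        PASS, SUPPRESS = 1.0, 0.0

        def basic_coefficient_train(freqs, band):
            """Suppression value inside the band, pass value outside."""
            lo, hi = band
            return np.where((freqs >= lo) & (freqs <= hi), SUPPRESS, PASS)

        def processed_coefficient_train(basic, target_bins):
            """Restore the pass value at in-band bins that are NOT the target."""
            coeffs = np.full_like(basic, PASS)
            coeffs[target_bins] = basic[target_bins]
            return coeffs

        if __name__ == '__main__':
            sr, n_fft = 22050, 1024
            freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
            band = (200.0, 1000.0)            # "particular frequency band range"
            basic = basic_coefficient_train(freqs, band)
            # assume the target component occupies harmonics of 220 Hz (+/- 20 Hz)
            harmonics = np.arange(1, 5) * 220.0
            target_bins = np.any(np.abs(freqs[:, None] - harmonics[None, :]) < 20.0,
                                 axis=1)
            coeffs = processed_coefficient_train(basic, target_bins)
            # per unit segment the train multiplies one spectrum frame bin-wise
            frame = np.ones_like(freqs)       # stand-in for one magnitude frame
            print('suppressed bins:', int(np.sum(coeffs * frame == SUPPRESS)))
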
  • Patent number: 9002035
    Abstract: A signal processing section of a terminal converts acquired audio signals of a plurality of channels into a set of frequency spectra, calculates sound image positions corresponding to the individual frequency components, and displays the calculated sound image positions on a display screen by use of a coordinate system whose coordinate axes are the frequency components and the sound image positions. A user-designated partial region of the coordinate system is set as a designated region, and an amplitude-level adjusting amount is set for the designated region, so that the signal processing section adjusts the amplitude levels of the frequency components included in the frequency spectra and in the designated region, converts the adjusted frequency components into audio signals, and outputs the converted audio signals. (An illustrative code sketch of this region-based editing follows this entry.)
    Type: Grant
    Filed: February 7, 2012
    Date of Patent: April 7, 2015
    Assignee: Yamaha Corporation
    Inventors: Yasuyuki Umeyama, Kazunobu Kondo, Yu Takahashi, Jordi Bonada, Jordi Janer, Ricard Marxer
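    Illustrative code sketch: a minimal sketch of the frequency/sound-image editing described above, assuming a stereo signal, scipy's STFT, and a simple left/right magnitude ratio as the "sound image position". The designated region and the gain are stand-ins for user input; none of this is the patented implementation.

        # Illustrative only; the pan estimate and region handling are assumptions.
        import numpy as np
        from scipy.signal import stft, istft

        def sound_image_positions(L, R, eps=1e-12):
            """Pan per time-frequency bin: -1 = full left, +1 = full right."""
            mL, mR = np.abs(L), np.abs(R)
            return (mR - mL) / (mR + mL + eps)

        def adjust_region(L, R, freqs, pan, freq_range, pan_range, gain):
            """Scale bins whose (frequency, position) lies in the designated region."""
            in_freq = (freqs >= freq_range[0]) & (freqs <= freq_range[1])
            mask = in_freq[:, None] & (pan >= pan_range[0]) & (pan <= pan_range[1])
            return np.where(mask, L * gain, L), np.where(mask, R * gain, R)

        if __name__ == '__main__':
            fs = 8000
            t = np.arange(2 * fs) / fs
            # stereo stand-in: a centred 440 Hz tone plus a right-panned 1 kHz tone
            left = np.sin(2 * np.pi * 440 * t) + 0.2 * np.sin(2 * np.pi * 1000 * t)
            right = np.sin(2 * np.pi * 440 * t) + 0.8 * np.sin(2 * np.pi * 1000 * t)

            freqs, _, Lspec = stft(left, fs=fs, nperseg=512)
            _, _, Rspec = stft(right, fs=fs, nperseg=512)
            pan = sound_image_positions(Lspec, Rspec)

            # designated region: 800-1200 Hz, panned right; attenuate by 12 dB
            Ladj, Radj = adjust_region(Lspec, Rspec, freqs, pan,
                                       freq_range=(800, 1200),
                                       pan_range=(0.2, 1.0),
                                       gain=10 ** (-12 / 20))
            _, left_out = istft(Ladj, fs=fs, nperseg=512)
            _, right_out = istft(Radj, fs=fs, nperseg=512)
            print('output length (samples):', left_out.shape[0])
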
  • Publication number: 20120201385
    Abstract: A signal processing section of a terminal converts acquired audio signals of a plurality of channels into a set of frequency spectra, calculates sound image positions corresponding to the individual frequency components, and displays the calculated sound image positions on a display screen by use of a coordinate system whose coordinate axes are the frequency components and the sound image positions. A user-designated partial region of the coordinate system is set as a designated region, and an amplitude-level adjusting amount is set for the designated region, so that the signal processing section adjusts the amplitude levels of the frequency components included in the frequency spectra and in the designated region, converts the adjusted frequency components into audio signals, and outputs the converted audio signals.
    Type: Application
    Filed: February 7, 2012
    Publication date: August 9, 2012
    Applicant: Yamaha Corporation
    Inventors: Yasuyuki Umeyama, Kazunobu Kondo, Yu Takahashi, Jordi Bonada, Jordi Janer, Ricard Marxer
  • Publication number: 20120106758
    Abstract: A coefficient train processing section, which sequentially generates, per unit segment, a processing coefficient train for suppressing a target component of an audio signal, includes a basic coefficient train generation section and a coefficient train processing section. The basic coefficient train generation section generates a basic coefficient train in which basic coefficient values corresponding to frequencies within a particular frequency band range are each set at a suppression value that suppresses the audio signal, while coefficient values corresponding to frequencies outside the particular frequency band range are each set at a pass value that maintains the audio signal. The coefficient train processing section then generates the processing coefficient train, per unit segment, by changing to the pass value each of the coefficient values that correspond to frequencies other than those of the target component among the coefficient values corresponding to frequencies within the particular frequency band range.
    Type: Application
    Filed: October 28, 2011
    Publication date: May 3, 2012
    Applicant: Yamaha Corporation
    Inventors: Jordi Bonada, Jordi Janer, Ricard Marxer, Yasuyuki Umeyama, Kazunobu Kondo
  • Publication number: 20120106746
    Abstract: Candidate frequencies are identified per unit segment of an audio signal. A first processing section identifies an estimated train, that is, a time series of candidate frequencies, each selected for a different one of the unit segments and arranged over a plurality of the unit segments, which has a high likelihood of corresponding to a time series of fundamental frequencies of a target component. A second processing section identifies a state train of states, each indicative of either a sound-generating or a non-sound-generating state of the target component in a different one of the unit segments, arranged over the unit segments. For each unit segment corresponding to the sound-generating state, frequency information is generated that designates, as the fundamental frequency of the target component, the candidate frequency corresponding to that unit segment in the estimated train; for each unit segment corresponding to the non-sound-generating state, frequency information indicative of no sound generation is generated.
    Type: Application
    Filed: October 28, 2011
    Publication date: May 3, 2012
    Applicant: Yamaha Corporation
    Inventors: Jordi Bonada, Jordi Janer, Ricard Marxer, Yasuyuki Umeyama, Kazunobu Kondo, Francisco Garcia
  • Patent number: 8158871
    Abstract: An audio recording is processed and evaluated. A sequence of identified notes corresponding to the audio recording is determined by iteratively identifying potential notes within the audio recording. A rating for the audio recording is determined using a tuning rating and an expression rating. The audio recording includes a recording of at least a portion of a musical composition. (An illustrative code sketch of this rating scheme follows this entry.)
    Type: Grant
    Filed: April 29, 2011
    Date of Patent: April 17, 2012
    Assignees: Universitat Pompeu Fabra, BMAT Licensing, S.L.
    Inventors: Jordi Janer Mestres, Jordi Bonada Sanjaume, Maarten de Boer, Alex Loscos Mira
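    Illustrative code sketch: a minimal sketch (not the patented algorithm) of combining a tuning rating and an expression rating into an overall score, working from a pre-computed F0 trajectory and an RMS envelope. The rating formulas, weights and synthetic input are assumptions made for the example.

        # Illustrative only; hypothetical helper names and heuristic ratings.
        import numpy as np

        def hz_to_midi(f0):
            return 69 + 12 * np.log2(f0 / 440.0)

        def tuning_rating(f0):
            """How close, on average, the pitch stays to the nearest semitone (0..1)."""
            midi = hz_to_midi(f0)
            cents_off = 100 * np.abs(midi - np.round(midi))
            return float(np.clip(1.0 - np.mean(cents_off) / 50.0, 0.0, 1.0))

        def expression_rating(f0, rms):
            """Crude proxy: reward pitch modulation (vibrato) and dynamic variation."""
            vibrato = np.std(np.diff(hz_to_midi(f0)))
            dynamics = np.std(rms) / (np.mean(rms) + 1e-12)
            return float(np.clip(0.5 * vibrato + 0.5 * dynamics, 0.0, 1.0))

        def overall_rating(f0, rms, w_tuning=0.6, w_expression=0.4):
            return w_tuning * tuning_rating(f0) + w_expression * expression_rating(f0, rms)

        if __name__ == '__main__':
            t = np.linspace(0, 2, 200)
            f0 = 440 * 2 ** (0.1 * np.sin(2 * np.pi * 5 * t) / 12)  # A4 with vibrato
            rms = 0.4 + 0.1 * np.sin(2 * np.pi * 0.5 * t)           # slow dynamics
            print(f'overall rating = {overall_rating(f0, rms):.2f}')
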
  • Publication number: 20110209596
    Abstract: An audio recording is processed and evaluated. A sequence of identified notes corresponding to the audio recording is determined by iteratively identifying potential notes within the audio recording. A rating for the audio recording is determined using a tuning rating and an expression rating. The audio recording includes a recording of at least a portion of a musical composition.
    Type: Application
    Filed: April 29, 2011
    Publication date: September 1, 2011
    Inventors: Jordi Janer Mestres, Jordi Bonada Sanjaume, Maarten de Boer, Alex Loscos Mira
  • Publication number: 20090193959
    Abstract: An audio recording is processed and evaluated. A sequence of identified notes corresponding to the audio recording is determined by iteratively identifying potential notes within the audio recording. A rating for the audio recording is determined using a tuning rating and an expression rating. The audio recording includes a recording of at least a portion of a musical composition.
    Type: Application
    Filed: February 6, 2008
    Publication date: August 6, 2009
    Inventors: Jordi Janer Mestres, Jordi Bonada Sanjaume, Maarten De Boer, Alex Loscos Mira
  • Publication number: 20080300702
    Abstract: Systems and methods for determining similarity between two or more audio pieces are disclosed. An illustrative method for determining musical similarities includes extracting one or more descriptors from each audio piece, generating a vector for each of the audio pieces, extracting one or more audio features from each of the audio pieces, calculating values for each audio feature, calculating a distance between the vector containing the normalized values for one audio piece and the vectors of the other audio pieces, and outputting a response to a user or process indicating the similarity between the audio pieces. The descriptors can be used in performing content-based audio classification and for determining similarities between pieces of music. The descriptors that can be extracted from each audio piece include tonal descriptors, dissonance descriptors, rhythm descriptors, and spatial descriptors. (An illustrative code sketch of descriptor-based similarity follows this entry.)
    Type: Application
    Filed: May 29, 2008
    Publication date: December 4, 2008
    Applicant: UNIVERSITAT POMPEU FABRA
    Inventors: Emilia Gomez, Perfecto Herrera, Pedro Cano Vila, Jordi Janer, Joan Serra, Jordi Bonada, Shadi Walid El-Hajj, Thomas Etienne Aussenac, Gunnar Nils Holmberg
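    Illustrative code sketch: a minimal sketch of the descriptor-vector similarity idea above, with a few generic librosa features (chroma, tempo, spectral centroid) standing in for the tonal, dissonance, rhythm and spatial descriptors named in the abstract. The feature choice, normalisation and Euclidean distance are assumptions, not the patented method.

        # Illustrative only; descriptors and distance are stand-ins.
        import numpy as np
        import librosa

        def descriptor_vector(y, sr):
            """One fixed-length vector of descriptors per audio piece."""
            chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)    # tonal
            tempo, _ = librosa.beat.beat_track(y=y, sr=sr)                   # rhythm
            centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()  # timbre
            return np.concatenate([chroma, [float(tempo), float(centroid)]])

        def distance_matrix(vectors):
            """Normalise each descriptor, then take Euclidean distances between pieces."""
            V = np.asarray(vectors)
            V = (V - V.mean(axis=0)) / (V.std(axis=0) + 1e-12)
            diff = V[:, None, :] - V[None, :, :]
            return np.sqrt((diff ** 2).sum(axis=-1))  # smaller = more similar

        if __name__ == '__main__':
            sr = 22050
            t = np.arange(5 * sr) / sr
            pieces = [np.sin(2 * np.pi * f * t) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))
                      for f in (220.0, 222.0, 660.0)]  # synthetic stand-in "pieces"
            vectors = [descriptor_vector(y, sr) for y in pieces]
            print(np.round(distance_matrix(vectors), 2))
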