Patents by Inventor Takuya Fujishima
Takuya Fujishima has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9024169
Abstract: A music analysis apparatus has a feature extractor and an analysis processor. The feature extractor generates a time series of feature values from a sequence of notes which is designated as an object of analysis. The analysis processor computes an evaluation index value which indicates a probability that the designated sequence of notes is present in each of a plurality of reference music pieces by applying a probabilistic model to the time series of the feature values generated from the designated sequence of notes. The probabilistic model is generated by machine learning of the plurality of reference music pieces using time series of feature values obtained from the reference music pieces.
Type: Grant
Filed: July 26, 2012
Date of Patent: May 5, 2015
Assignee: Yamaha Corporation
Inventors: Kouhei Sumi, Takuya Fujishima, Takeshi Kikuchi
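The entry above describes scoring a query note sequence against per-piece probabilistic models learned from feature time series. The patent does not disclose a specific feature type or model class, so the sketch below is only illustrative: it uses pitch intervals as the feature time series and a smoothed interval distribution per reference piece as the model.

```python
import math
from collections import Counter

def interval_features(notes):
    """Feature time series: pitch intervals between successive MIDI notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

def train_models(reference_pieces, vocab=range(-12, 13), alpha=1.0):
    """One smoothed interval distribution per reference piece (a stand-in
    for the patent's machine-learned probabilistic model)."""
    models = []
    for notes in reference_pieces:
        counts = Counter(interval_features(notes))
        total = sum(counts.values()) + alpha * len(vocab)
        models.append({v: (counts[v] + alpha) / total for v in vocab})
    return models

def evaluation_index(query_notes, model):
    """Evaluation index: log-probability that the query sequence occurs
    in the piece the model was trained on."""
    return sum(math.log(model[f]) for f in interval_features(query_notes))
```

A query of stepwise-ascending notes then scores highest against the reference piece that moves in the same intervals.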
-
Publication number: 20140033902
Abstract: When a music audio signal to be analyzed is divided into a plurality of possible pattern segments based on estimated beat positions, it is divided in a plurality of different ways according to individual division models obtained by sequentially shifting the leading (first) beat position of the possible pattern segments. Such division provides plural sets of possible pattern segments, each set corresponding to one of the division models. For each set, individual possible pattern segments are compared against individual reference performance patterns, and for each possible pattern segment of the set, at least one matching reference performance pattern is determined. Then, at least one combination of reference performance patterns is identified for each set of possible pattern segments, and one optimal combination is selected from among the identified combinations.
Type: Application
Filed: July 31, 2013
Publication date: February 6, 2014
Applicant: Yamaha Corporation
Inventors: Dan Sasai, Takuya Fujishima
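The division-model idea above (splitting the same beat sequence several ways by shifting the first beat, then scoring each resulting set against reference patterns) can be sketched as follows. The distance function and the four-beat pattern length are assumptions for illustration, not details from the publication.

```python
def division_models(beats, beats_per_pattern=4):
    """One division model per shift of the leading beat: each model yields
    a set of possible pattern segments covering the beat sequence."""
    models = []
    for shift in range(beats_per_pattern):
        segments = [beats[i:i + beats_per_pattern]
                    for i in range(shift, len(beats), beats_per_pattern)]
        models.append([s for s in segments if len(s) == beats_per_pattern])
    return models

def best_division(models, references, distance):
    """Match every segment of every set to its closest reference pattern and
    return the index of the division model with the best (lowest) average."""
    def set_score(segments):
        total = sum(min(distance(s, r) for r in references) for s in segments)
        return total / max(len(segments), 1)
    return min(range(len(models)), key=lambda i: set_score(models[i]))
```

With reference patterns that line up with an unshifted four-beat grid, the shift-0 division model is selected.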
-
Patent number: 8543387
Abstract: Disclosed herein is a pitch estimation apparatus and associated methods for estimating a fundamental frequency of an audio signal from a fundamental frequency probability density function by modeling the audio signal as a weighted mixture of a plurality of tone models corresponding respectively to harmonic structures of individual fundamental frequencies, so that the fundamental frequency probability density function of the audio signal is given as a distribution of respective weights of the plurality of the tone models.
Type: Grant
Filed: August 31, 2007
Date of Patent: September 24, 2013
Assignee: Yamaha Corporation
Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
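The weighted-mixture idea above (estimating an F0 probability density as the weights of harmonic tone models) can be sketched with an EM-style weight update. This is an illustrative simplification with assumed Gaussian-bump tone models, not the patented method itself.

```python
import numpy as np

def tone_model(f0, freqs, n_harmonics=4, width=3.0):
    """Harmonic-structure tone model: Gaussian bumps at multiples of f0,
    with amplitude decaying over harmonic number."""
    spec = np.zeros_like(freqs)
    for h in range(1, n_harmonics + 1):
        spec += (1.0 / h) * np.exp(-0.5 * ((freqs - h * f0) / width) ** 2)
    return spec / spec.sum()

def f0_probability_density(observed, candidate_f0s, freqs, n_iter=30):
    """Sequentially update the mixture weights so the weighted sum of tone
    models fits the observed spectrum; the final weights form the F0 PDF."""
    models = np.array([tone_model(c, freqs) for c in candidate_f0s])
    w = np.full(len(candidate_f0s), 1.0 / len(candidate_f0s))
    obs = observed / observed.sum()
    for _ in range(n_iter):
        mix = w @ models + 1e-12          # current mixture spectrum
        w = w * (models @ (obs / mix))    # reweight by responsibility
        w /= w.sum()
    return w
```

Feeding in a spectrum generated by the 200 Hz tone model concentrates the weight on the 200 Hz candidate.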
-
Publication number: 20130192445
Abstract: A music analysis apparatus has a feature extractor and an analysis processor. The feature extractor generates a time series of feature values from a sequence of notes which is designated as an object of analysis. The analysis processor computes an evaluation index value which indicates a probability that the designated sequence of notes is present in each of a plurality of reference music pieces by applying a probabilistic model to the time series of the feature values generated from the designated sequence of notes. The probabilistic model is generated by machine learning of the plurality of reference music pieces using time series of feature values obtained from the reference music pieces.
Type: Application
Filed: July 26, 2012
Publication date: August 1, 2013
Applicant: Yamaha Corporation
Inventors: Kouhei Sumi, Takuya Fujishima, Takeshi Kikuchi
-
Patent number: 8494668
Abstract: A character value of a sound signal is extracted for each unit portion, and degrees of similarity between the character values of the individual unit portions are calculated and arranged in a matrix. Each column of the matrix holds the degrees of similarity acquired by comparing, for each unit portion, the sound signal against a delayed sound signal obtained by delaying the sound signal by a time difference equal to an integral multiple of the time length of a unit portion, and the matrix has a plurality of such columns in association with different time differences. A repetition probability is calculated for each of the columns corresponding to the different time differences, a plurality of peaks in the distribution of the repetition probabilities are identified, and a loop region in the sound signal is identified by collating a reference matrix with the degree-of-similarity matrix.
Type: Grant
Filed: February 19, 2009
Date of Patent: July 23, 2013
Assignee: Yamaha Corporation
Inventors: Bee Suan Ong, Sebastian Streich, Takuya Fujishima, Keita Arimoto
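The matrix described above is a lag-domain self-similarity matrix: column d compares the signal with a copy of itself delayed by d unit portions, and averaging down a column gives a repetition score for that lag. A minimal NumPy sketch with an assumed similarity measure:

```python
import numpy as np

def lag_similarity_matrix(features, max_lag):
    """sim[t, d-1]: similarity between unit portion t and the portion d
    units earlier (the signal compared with its delayed copy)."""
    n = len(features)
    sim = np.zeros((n, max_lag))
    for d in range(1, max_lag + 1):
        for t in range(d, n):
            diff = np.linalg.norm(features[t] - features[t - d])
            sim[t, d - 1] = 1.0 / (1.0 + diff)
    return sim

def repetition_profile(sim):
    """Repetition probability per lag: the mean of each column."""
    return sim.mean(axis=0)
```

For a signal repeating every four unit portions, the profile peaks at lag 4, which marks candidate loop lengths.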
-
Publication number: 20120278358
Abstract: A user inputs, as a query pattern, a desired search-object rhythm pattern using a control, corresponding to a desired one of a plurality of performance parts constituting a performance data set (automatic accompaniment data set), in a rhythm input device. An input rhythm pattern storage section stores the input rhythm pattern (query pattern) into a RAM on the basis of a clock signal output from a bar line clock output section and input trigger data. A part identification section identifies a search-object performance part corresponding to the user-operated control. For the identified performance part, a rhythm pattern search section searches an automatic accompaniment database for an automatic accompaniment data set including a rhythm pattern that matches, i.e. has the highest similarity to, the input rhythm pattern (query pattern).
Type: Application
Filed: April 19, 2012
Publication date: November 1, 2012
Applicant: Yamaha Corporation
Inventors: Daichi Watanabe, Takuya Fujishima
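The search step above ranks stored accompaniment patterns by similarity to the input rhythm. As an illustrative sketch only (the publication's actual similarity measure is not reproduced here), rhythm patterns can be represented as onset times within a bar and compared by nearest-onset distance:

```python
def rhythm_distance(query, candidate):
    """Distance between rhythm patterns given as onset times within a bar
    (0.0-1.0): nearest-onset differences plus a count-mismatch penalty."""
    if not query or not candidate:
        return float("inf")
    nearest = sum(min(abs(q - c) for c in candidate) for q in query)
    return nearest / len(query) + abs(len(query) - len(candidate))

def search_accompaniment(query, database):
    """Rank automatic accompaniment entries (for the identified performance
    part) by similarity of their rhythm pattern to the query pattern."""
    return sorted(database, key=lambda e: rhythm_distance(query, e["pattern"]))
```

A slightly imprecise straight-eighths query then retrieves the straight-eighths entry ahead of a shuffle entry.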
-
Publication number: 20120271847
Abstract: A user inputs, as a query pattern, a desired search-object rhythm pattern by operating a rhythm input device. At that time, a determination is made of the style of operation performed by the user on the rhythm input device, such as whether the user has operated a single key or a plurality of keys, or the duration or intensity of the operation. A user-intended performance part is then identified, on the basis of the determined style of operation, from among one or more performance parts constituting a performance data set (automatic accompaniment data set). For the thus-identified performance part, a rhythm pattern search section searches an automatic accompaniment database for an automatic accompaniment data set including a rhythm pattern that matches the input rhythm pattern (query pattern).
Type: Application
Filed: April 19, 2012
Publication date: October 25, 2012
Applicant: Yamaha Corporation
Inventors: Daichi Watanabe, Takuya Fujishima
-
Patent number: 8269091
Abstract: A mask generation section generates an evaluating mask, indicative of a degree of dissonance with a target sound at each frequency along a frequency axis, by setting, for each of a plurality of peaks in the spectra of the target sound, a dissonance function indicative of the relationship between a frequency difference from the peak and a degree of dissonance with a component of the peak. An index calculation section collates spectra of an evaluated sound with the evaluating mask to calculate a consonance index value indicative of a degree of consonance or dissonance between the target sound and the evaluated sound.
Type: Grant
Filed: June 18, 2009
Date of Patent: September 18, 2012
Assignee: Yamaha Corporation
Inventors: Sebastian Streich, Takuya Fujishima
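The evaluating-mask construction above can be sketched as follows. The dissonance curve used here (roughness that is zero at a partial, peaks a fraction of a semitone away, and fades with distance) is an assumed stand-in for the patent's dissonance function, not the disclosed one.

```python
import numpy as np

def dissonance_curve(semitone_diff, sharpness=1.0):
    """Assumed dissonance-vs-distance shape: zero at the peak itself,
    maximal under half a semitone away, fading further out."""
    x = np.abs(semitone_diff)
    return np.exp(-x / sharpness) * (1.0 - np.exp(-4.0 * x / sharpness))

def evaluating_mask(peak_freqs, grid):
    """Per-frequency degree of dissonance with the target sound: the
    maximum over the curves set around each target-sound peak."""
    mask = np.zeros_like(grid)
    for p in peak_freqs:
        mask = np.maximum(mask, dissonance_curve(12.0 * np.log2(grid / p)))
    return mask

def dissonance_index(freqs, mags, grid, mask):
    """Collate an evaluated sound's spectrum with the mask; a higher value
    means more dissonant against the target sound."""
    return float(np.sum(np.interp(freqs, grid, mask) * mags) / np.sum(mags))
```

An evaluated sound in unison with the target peaks then scores lower (more consonant) than the same sound detuned by roughly a quarter tone.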
-
Patent number: 8013231
Abstract: A sound signal processing apparatus correctly detects expression modes and expression transitions of a song or performance from an input sound signal. A sound signal produced by performance or singing of musical tones is input and divided into frames of predetermined time periods. Characteristic parameters of the input sound signal are detected on a frame-by-frame basis. An expression determining process is carried out in which a plurality of expression modes of a performance or song are modeled as respective states, the probability that a section including a frame or a plurality of continuous frames lies in a specific state is calculated with respect to a predetermined observed section based on the characteristic parameters, and the optimum route of state transition in the predetermined observed section is determined based on the calculated probabilities so as to determine expression modes of the sound signal and their lengths.
Type: Grant
Filed: May 24, 2006
Date of Patent: September 6, 2011
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Alex Loscos, Jordi Bonada, Oscar Mayor
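The optimum-route step described above is classic Viterbi decoding over modeled expression states. A minimal sketch in log probabilities; the states, transition scores, and observation model in the example are placeholders, not values from the patent:

```python
def viterbi(obs_loglik, trans_logp, init_logp):
    """obs_loglik[t][s]: log-probability that frame t lies in state s.
    Returns the optimum state route over the observed section."""
    n_frames, n_states = len(obs_loglik), len(init_logp)
    score = [init_logp[s] + obs_loglik[0][s] for s in range(n_states)]
    backptr = []
    for t in range(1, n_frames):
        prev = score
        ptrs, score = [], []
        for s in range(n_states):
            # best predecessor state for landing in state s at frame t
            best = max(range(n_states), key=lambda r: prev[r] + trans_logp[r][s])
            ptrs.append(best)
            score.append(prev[best] + trans_logp[best][s] + obs_loglik[t][s])
        backptr.append(ptrs)
    # trace the best final state back through the stored pointers
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptrs in reversed(backptr):
        path.append(ptrs[path[-1]])
    path.reverse()
    return path
```

With two placeholder expression states and observations that favor state 0 for the first two frames and state 1 afterwards, the decoded route switches state once, at frame 2.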
-
Patent number: 7858869
Abstract: A sound analysis apparatus employs tone models which are associated with various fundamental frequencies and each of which simulates a harmonic structure of a performance sound generated by a musical instrument. The apparatus defines a weighted mixture of the tone models to simulate frequency components of the performance sound, sequentially updates and optimizes the weight values of the respective tone models so that a frequency distribution of the weighted mixture corresponds to a distribution of the frequency components of the performance sound, and estimates the fundamental frequency of the performance sound based on the optimized weight values.
Type: Grant
Filed: February 25, 2008
Date of Patent: December 28, 2010
Assignees: National Institute of Advanced Industrial Science and Technology, Yamaha Corporation
Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
-
Patent number: 7812239
Abstract: A storage section stores music piece data sets of a plurality of music pieces, each including respective tone data of a plurality of fragments of the music piece and respective character values indicative of musical characters of the fragments. Each of the fragments of a selected main music piece is selected as a main fragment, and each fragment, other than the selected main fragment, of a plurality of fragments of two or more music pieces is selected as a sub fragment. A similarity index value indicative of a degree of similarity between the character value of the main fragment and the character value of the sub fragment is calculated. For each of the main fragments, a sub fragment presenting a similarity index value that satisfies a predetermined selection condition is selected for processing the tone data of the main music piece.
Type: Grant
Filed: July 15, 2008
Date of Patent: October 12, 2010
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Maarten De Boer, Jordi Bonada, Samuel Roig, Fokke De Jong, Sebastian Streich
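The selection rule above (for each main fragment, pick a sub fragment whose character values are similar enough) reduces to a nearest-neighbour search over character-value vectors. A sketch with an assumed Euclidean distance and threshold as the selection condition:

```python
import numpy as np

def select_sub_fragments(main_fragments, sub_fragments, threshold):
    """For each main fragment's character-value vector, pick the index of
    the most similar sub fragment, provided the similarity satisfies the
    selection condition (here: Euclidean distance below a threshold);
    otherwise record None for that main fragment."""
    choices = []
    for m in main_fragments:
        dists = [np.linalg.norm(np.asarray(m) - np.asarray(s))
                 for s in sub_fragments]
        best = int(np.argmin(dists))
        choices.append(best if dists[best] <= threshold else None)
    return choices
```

The chosen sub fragments would then supply the tone data used to process the corresponding main fragments.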
-
Patent number: 7754958
Abstract: A sound analysis apparatus stores sound source structure data defining a constraint on one or more sounds that can be simultaneously generated by a sound source of an input audio signal. A form estimation part selects, from various fundamental frequencies, the fundamental frequencies of one or more sounds likely to be contained in the input audio signal with peaked weights, during sequential updating and optimizing of weights of tone models corresponding to the various fundamental frequencies, so that the sounds of the selected fundamental frequencies satisfy the sound source structure data, and creates form data specifying the selected fundamental frequencies. A prior distribution imparting part imparts a prior distribution to the weights of the tone models corresponding to the various fundamental frequencies so as to emphasize the weights corresponding to the fundamental frequencies specified by the form data created by the form estimation part.
Type: Grant
Filed: August 31, 2007
Date of Patent: July 13, 2010
Assignee: Yamaha Corporation
Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
-
Patent number: 7750228
Abstract: For at least one music piece, a storage section stores tone data of each of a plurality of fragments segmented from the music piece and stores a first descriptor, indicative of a musical character of each of the fragments, in association with the fragment. A descriptor generation section receives input data based on operation by a user and generates a second descriptor, indicative of a musical character, on the basis of the received input data. A determination section determines similarity between the second descriptor and the first descriptor of each of the fragments. A selection section selects the tone data of at least one fragment on the basis of a result of the similarity determination. On the basis of the tone data of the selected fragment or fragments, a data generation section generates the tone data to be output.
Type: Grant
Filed: January 7, 2008
Date of Patent: July 6, 2010
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Jordi Bonada, Maarten De Boer
-
Patent number: 7728212
Abstract: Music piece data composed of audio waveform data are stored in a memory. An analysis section analyzes the music piece data stored in the memory to determine sudden change points of sound condition in the music piece data. A display device displays the individual sound fragment data, obtained by dividing the music piece data at the sudden change points, in a menu format in which the sound fragment data are arranged in order of their complexity. Through the user's operation via an operation section, desired sound fragment data is selected from the menu displayed on the display device, and a time-axial position where the selected sound fragment data is to be positioned is designated. A new music piece data set is created by positioning each user-selected sound fragment data at its user-designated time-axial position.
Type: Grant
Filed: July 11, 2008
Date of Patent: June 1, 2010
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Naoaki Kojima, Kiyohisa Sugii
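Two of the steps above can be sketched briefly: detecting sudden change points as large frame-to-frame feature jumps, and ordering the resulting fragments by a complexity measure for the menu. Both the per-frame feature and the complexity proxy (frame variance) are illustrative assumptions, not details from the patent.

```python
import numpy as np

def change_points(frame_features, threshold):
    """Sudden change points: frame indices where the feature jump from the
    previous frame exceeds a threshold."""
    diffs = np.linalg.norm(np.diff(frame_features, axis=0), axis=1)
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]

def order_by_complexity(fragments):
    """Menu order: fragments sorted by a simple complexity proxy
    (variance of their feature frames; an assumed stand-in measure)."""
    return sorted(fragments, key=lambda f: float(np.var(f)))
```

A new piece would then be assembled by placing user-selected fragments at user-designated time-axial positions.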
-
Patent number: 7642444
Abstract: For each of a plurality of music pieces, a storage device stores respective tone data of a plurality of fragments of the music piece and respective musical character values of the fragments. A similarity determination section calculates a similarity index value indicative of a degree of similarity between the character values of each of the fragments of a main music piece and the character values of each individual fragment of a plurality of sub music pieces. Each of the similarity index values calculated for the fragments of each of the sub music pieces can be adjusted in accordance with a user's control. A processing section processes the tone data of each of the fragments of the main music piece on the basis of the tone data of any one of the fragments of the sub music pieces whose similarity index value indicates sufficient similarity.
Type: Grant
Filed: November 13, 2007
Date of Patent: January 5, 2010
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Jordi Bonada, Maarten De Boer, Sebastian Streich, Bee Suan Ong
-
Publication number: 20090316915
Abstract: A mask generation section generates an evaluating mask, indicative of a degree of dissonance with a target sound at each frequency along a frequency axis, by setting, for each of a plurality of peaks in the spectra of the target sound, a dissonance function indicative of the relationship between a frequency difference from the peak and a degree of dissonance with a component of the peak. An index calculation section collates spectra of an evaluated sound with the evaluating mask to calculate a consonance index value indicative of a degree of consonance or dissonance between the target sound and the evaluated sound.
Type: Application
Filed: June 18, 2009
Publication date: December 24, 2009
Applicant: Yamaha Corporation
Inventors: Sebastian Streich, Takuya Fujishima
-
Publication number: 20090216354
Abstract: A character value of a sound signal is extracted for each unit portion, and degrees of similarity between the character values of the individual unit portions are calculated and arranged in a matrix. Each column of the matrix holds the degrees of similarity acquired by comparing, for each unit portion, the sound signal against a delayed sound signal obtained by delaying the sound signal by a time difference equal to an integral multiple of the time length of a unit portion, and the matrix has a plurality of such columns in association with different time differences. A repetition probability is calculated for each of the columns corresponding to the different time differences, a plurality of peaks in the distribution of the repetition probabilities are identified, and a loop region in the sound signal is identified by collating a reference matrix with the degree-of-similarity matrix.
Type: Application
Filed: February 19, 2009
Publication date: August 27, 2009
Applicant: Yamaha Corporation
Inventors: Bee Suan Ong, Sebastian Streich, Takuya Fujishima, Keita Arimoto
-
Patent number: 7490035
Abstract: A pitch shifting apparatus detects peak spectra P1 and P2 from amplitude spectra of an input sound. The apparatus compresses or expands an amplitude spectrum distribution AM1 in a first frequency region A1, including a first frequency f1 of the peak spectrum P1, using a pitch shift ratio which keeps its shape, to obtain an amplitude spectrum distribution AM10 for a pitch-shifted first frequency region A10. The apparatus similarly compresses or expands an amplitude spectrum distribution AM2 adjacent to the peak spectrum P2 to obtain an amplitude spectrum distribution AM20. Pitch shifting is performed by compressing or expanding amplitude spectra in an intermediate frequency region A3, between the peak spectra P1 and P2, at a given pitch shift ratio in accordance with each amplitude spectrum.
Type: Grant
Filed: April 25, 2007
Date of Patent: February 10, 2009
Assignee: Yamaha Corporation
Inventors: Takuya Fujishima, Jordi Bonada
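The region-wise operation above can be illustrated in a simplified form: move the amplitude-spectrum distribution surrounding a detected peak to the pitch-shifted peak position while preserving its shape. This sketch translates whole-bin regions and ignores phase, interpolation, and the intermediate-region handling the patent describes.

```python
import numpy as np

def shift_peak_region(mags, peak_bin, region, ratio):
    """Copy the amplitude distribution in `region` (lo, hi) so that its
    peak lands at round(peak_bin * ratio), keeping the shape intact."""
    out = np.zeros_like(mags)
    lo, hi = region
    offset = int(round(peak_bin * ratio)) - peak_bin
    for b in range(lo, hi):
        if 0 <= b + offset < len(out):
            out[b + offset] = mags[b]
    return out
```

Shifting a peak at bin 20 by a ratio of 1.5 moves the whole surrounding distribution, unchanged in shape, to bin 30.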
-
Publication number: 20090019996
Abstract: A storage section stores music piece data sets of a plurality of music pieces, each including respective tone data of a plurality of fragments of the music piece and respective character values indicative of musical characters of the fragments. Each of the fragments of a selected main music piece is selected as a main fragment, and each fragment, other than the selected main fragment, of a plurality of fragments of two or more music pieces is selected as a sub fragment. A similarity index value indicative of a degree of similarity between the character value of the main fragment and the character value of the sub fragment is calculated. For each of the main fragments, a sub fragment presenting a similarity index value that satisfies a predetermined selection condition is selected for processing the tone data of the main music piece.
Type: Application
Filed: July 15, 2008
Publication date: January 22, 2009
Applicant: Yamaha Corporation
Inventors: Takuya Fujishima, Maarten De Boer, Jordi Bonada, Samuel Roig, Fokke De Jong, Sebastian Streich
-
Publication number: 20090013855
Abstract: Music piece data composed of audio waveform data are stored in a memory. An analysis section analyzes the music piece data stored in the memory to determine sudden change points of sound condition in the music piece data. A display device displays the individual phoneme component data, obtained by dividing the music piece data at the sudden change points, in a menu format in which the phoneme component data are arranged in order of their complexity. Through the user's operation via an operation section, desired phoneme component data is selected from the menu displayed on the display device, and a time-axial position where the selected phoneme component data is to be positioned is designated. A new music piece data set is created by positioning each user-selected phoneme component data at its user-designated time-axial position.
Type: Application
Filed: July 11, 2008
Publication date: January 15, 2009
Applicant: Yamaha Corporation
Inventors: Takuya Fujishima, Naoaki Kojima, Kiyohisa Sugii