Audio processing method and audio processing apparatus, and training method
An audio processing method, an audio processing apparatus, and a training method are described. According to embodiments of the application, an accent identifier is used to identify accent frames from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames. A tempo estimator is then used to estimate a tempo sequence of the plurality of audio frames based on the accent sequence. The embodiments adapt well to changes of tempo and can further be used to track beats properly.
This application claims the benefit of priority to related U.S. Provisional Patent Application No. 61/837,275, filed on Jun. 20, 2013 and entitled “Audio Processing Method and Audio Processing Apparatus, and Training Method,” and Chinese Patent Application No. 201310214901.6, filed on May 31, 2013 and entitled “Audio Processing Method and Audio Processing Apparatus, and Training Method,” both of which are incorporated herein by reference in their entirety.
TECHNICAL FIELD

The present invention relates generally to audio signal processing. More specifically, embodiments of the present invention relate to audio processing methods and audio processing apparatus for estimating tempo values of an audio segment, and a training method for training an audio classifier.
BACKGROUND

Although some existing tempo estimating methods are very successful, there are still certain limitations and problems with them. For example, they apply primarily to a constrained range of genres and instruments, such as pop-dance music with drums in a static tempo or with “strong beats”. However, it is challenging to maintain performance and accuracy when confronting a wide diversity of music, such as pieces with soft notes, with time-varying tempos, or with very noisy and complex musical onset representations.
SUMMARY

According to an embodiment of the present application, an audio processing apparatus is provided, comprising: an accent identifier for identifying accent frames from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames; and a tempo estimator for estimating a tempo sequence of the plurality of audio frames based on the accent sequence.
According to another embodiment, an audio processing method is provided, comprising: identifying accent frames from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames; and estimating a tempo sequence of the plurality of audio frames based on the accent sequence.
According to yet another embodiment, a method for training an audio classifier for identifying accent/non-accent frames in an audio segment is provided, comprising: transforming a training audio segment into a plurality of frames; labeling accent frames among the plurality of frames; randomly selecting at least one frame from between two adjacent accent frames and labeling it as a non-accent frame; and training the audio classifier using the accent frames plus the non-accent frames as the training dataset.
Yet another embodiment involves a computer-readable medium having computer program instructions recorded thereon which, when executed by a processor, enable the processor to execute an audio processing method as described above.
Yet another embodiment involves a computer-readable medium having computer program instructions recorded thereon which, when executed by a processor, enable the processor to execute a method for training an audio classifier for identifying accent/non-accent frames in an audio segment as described above.
According to the embodiments of the present application, the audio processing apparatus and methods can, at least, adapt well to changes of tempo, and can further be used to track beats properly.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements and in which:
DETAILED DESCRIPTION OF EMBODIMENTS

The embodiments of the present invention are described below with reference to the drawings. It is to be noted that, for the purpose of clarity, representations and descriptions of components and processes that are known to those skilled in the art but are not necessary to understand the present invention are omitted from the drawings and the description.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, a device (e.g., a cellular telephone, a portable media player, a personal computer, a server, a television set-top box, or a digital video recorder, or any other media player), a method or a computer program product. Accordingly, aspects of the present invention may take the form of a hardware embodiment, a software embodiment (including firmware, resident software, microcodes, etc.) or an embodiment combining both software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable mediums having computer readable program code embodied thereon.
Any combination of one or more computer readable mediums may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic or optical signal, or any suitable combination thereof.
A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired line, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer as a stand-alone software package, or partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Overall Solutions

As shown in the accompanying figures, in a first embodiment the audio processing apparatus 100 comprises an accent identifier 200 and a tempo estimator 300. The first embodiment will be described below.
In the accent identifier 200, accent frames are identified from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames. In the tempo estimator 300, a tempo sequence of the plurality of audio frames is estimated based on the accent sequence obtained by the accent identifier 200.
The plurality of audio frames may be prepared by any existing technique. The input audio signal may be re-sampled into a mono signal with a pre-defined sampling rate, and then divided into frames. However, the present application is not limited thereto, and audio frames on multiple channels may also be processed with the solutions in the present application.
The audio frames may be successive to each other, but may also be overlapped with each other to some extent for the purpose of the present application. As an exemplary implementation, an audio signal may be re-sampled to 44.1 kHz, and divided into 2048-sample (0.0464 seconds) frames with 512-sample hop size. That is, the overlapped portion occupies 75% of a frame. Of course, the re-sampling frequency, the sample numbers in a frame and the hop size (and thus the overlapping ratio) may be other values.
The accent identifier 200 may work in either the time domain or the frequency domain. In other words, each of the plurality of audio frames may be in the form of a time-domain signal, or may be transformed into various spectrums, such as a frequency spectrum or an energy spectrum. For example, each audio frame may be converted into the FFT frequency domain. A short-time Fourier transform (STFT) may be used to obtain a spectrum for each audio frame:
X(t,k),k=1,2, . . . , K. (1)
Where K is the number of Fourier coefficients for an audio frame, and t is the temporal index (sequential number) of the audio frame.
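For illustration only, a minimal numpy sketch of the framing and per-frame STFT just described might look as follows; the 44.1 kHz rate, 2048-sample frame and 512-sample hop are the exemplary values mentioned above, the Hann window is an assumption, and the function names are placeholders:

```python
import numpy as np

def frame_signal(x, frame_size=2048, hop=512):
    """Split a mono signal x (assumed already re-sampled, e.g. to 44.1 kHz)
    into overlapping frames of `frame_size` samples with a `hop`-sample step."""
    n_frames = 1 + max(0, (len(x) - frame_size) // hop)
    return np.stack([x[t * hop: t * hop + frame_size] for t in range(n_frames)])

def stft_spectrum(frames):
    """Return the magnitude spectrum X(t, k), k = 1..K, one row per frame (cf. equation (1))."""
    window = np.hanning(frames.shape[1])
    return np.abs(np.fft.rfft(frames * window, axis=1))  # shape: (T, K)

# usage: X = stft_spectrum(frame_signal(audio_samples))
```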
Other kinds of spectrum may also be used, such as a Time-Corrected Instantaneous Frequency (TCIF) spectrum or a Complex Quadrature Mirror Filter (CQMF) transformed spectrum, and such a spectrum may also be represented as X(t,k).
The term “accent” as used herein means, in music, an emphasis placed on a particular note. Accents contribute to the articulation and prosody of a performance of a musical phrase. Compared to surrounding notes: 1) a dynamic accent or stress accent is an emphasis using louder sound, typically most pronounced on the attack of the sound; 2) a tonic accent is an emphasis on notes by virtue of being higher in pitch as opposed to higher in volume; and 3) an agogic accent is an emphasis by virtue of being longer in duration. In addition, in a rhythmic context, accents have some perceptual properties; for example, percussive sounds, bass, etc. may generally be considered as accents.
The present application is not limited to accents in music. In some applications, “accent” may mean phonetic prominence given to a particular syllable in a word, or to a particular word within a phrase. When this prominence is produced through greater dynamic force, typically signalled by a combination of amplitude (volume), syllable or vowel length, full articulation of the vowel, and a non-distinctive change in pitch, the result is called stress accent, dynamic accent, or simply stress; when it is produced through pitch alone, it is called pitch accent; and when it is produced through length alone it is called quantitative accent.
In other audio signals than music or speech, accents may also exist, such as in the rhythm of heart, or clapping, and may be described with properties similar to above.
The definition of “accent” described above implies inherent properties of accents in an audio signal or audio frames. Based on such inherent properties, in the accent identifier 200, features may be extracted and audio frames may be classified based on the features. In other words, the accent identifier 200 may comprise a machine-learning based classifier 210 (shown in the accompanying figures) for classifying the audio frames into accent and non-accent frames based on at least one extracted feature.
The features may include, for example, a complex domain feature combining spectral amplitude and phase information, or any other features reflecting one or more facets of the music rhythmic properties. Further features may include timbre-related features consisting of at least one of Mel-frequency Cepstral Coefficients (MFCC), spectral centroid and spectral roll-off; energy-related features consisting of at least one of spectrum fluctuation (spectral flux) and Mel energy distribution; and melody-related features consisting of bass Chroma and Chroma. For example, the positions where the Chroma changes generally indicate chord changes, which are by and large the downbeat points for certain music styles.
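As a hedged illustration (not necessarily the exact feature definitions used in the embodiments), a few of the features named above can be computed from the magnitude spectrogram X(t,k) with standard textbook formulas; MFCC, Mel energy and Chroma would typically come from an existing audio feature library:

```python
import numpy as np

def spectral_features(X, sr=44100):
    """Compute spectral flux, spectral centroid and 85% spectral roll-off from the
    magnitude spectrogram X of shape (T, K); formulas are standard, illustrative choices."""
    freqs = np.linspace(0, sr / 2, X.shape[1])
    flux = np.r_[0.0, np.sum(np.maximum(np.diff(X, axis=0), 0.0), axis=1)]  # spectral flux
    centroid = np.sum(freqs * X, axis=1) / (np.sum(X, axis=1) + 1e-10)      # spectral centroid
    cumulative = np.cumsum(X, axis=1)
    rolloff_bin = np.argmax(cumulative >= 0.85 * cumulative[:, -1:], axis=1)
    rolloff = freqs[rolloff_bin]                                            # spectral roll-off
    return np.column_stack([flux, centroid, rolloff])  # shape: (T, 3), one row per frame
```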
These features may be extracted with existing techniques. The corresponding hardware components or software modules are indicated as the “feature extractor set” 206 in the accompanying figures.
As a modification to the embodiment, the accent identifier 200 may comprise as many feature extractors as possible in the feature extractor set 206 and obtain a feature set comprising as many features as possible. Then a subset selector 208 (shown in the accompanying figures) may be used to adaptively select a proper subset of the features to be used by the classifier 210.
The classifier 210 may be any type of classifier in the art. In one embodiment, Bidirectional Long Short Term Memory (BLSTM) may be adopted as the classifier 210. It is a neural network learning model, where ‘bidirectional’ means the input is presented forwards and backwards to two separate recurrent nets, both of which are connected to the same output layer, and ‘long short term memory’ refers to an alternative neural architecture capable of learning long time-dependencies, which our experiments have shown to be well suited to tasks such as accent/non-accent classification. AdaBoost can also be adopted as an alternative algorithm for the accent/non-accent classification. Conceptually, AdaBoost builds a strong classifier by combining a sequence of weak classifiers, with an adaptive weight for each weak classifier according to its error rate. A variety of other classifiers can also be used for this task, such as a Support Vector Machine (SVM), a Hidden Markov Model (HMM), a Gaussian Mixture Model (GMM), or a Decision Tree (DT).
Among the various classifiers, BLSTM is preferable for estimating posterior probabilities of accents. Other classification approaches such as AdaBoost and SVM maximize the differences between positive and negative classes, but suffer from a large imbalance between them, especially when positive samples (e.g., accent samples) are infrequent, whereas BLSTM does not suffer from this issue. Furthermore, with classification approaches such as AdaBoost and SVM, long-term information is lost, since features such as the first- and second-order differences of spectral flux and MFCC only carry short-term sequence information. In contrast, the bidirectional structure of BLSTM can encode long-term information in both directions, and hence it is more appropriate for accent tracking tasks. Our evaluations show that BLSTM gives consistently improved performance for accent classification compared to the conventional classifiers.
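The exact BLSTM topology is not prescribed here; as a sketch only, a frame-wise accent classifier with the bidirectional long short-term memory structure described above could be set up in PyTorch along the following lines (the layer sizes and the sigmoid output are illustrative assumptions):

```python
import torch
import torch.nn as nn

class BLSTMAccentClassifier(nn.Module):
    """Maps a sequence of per-frame feature vectors to per-frame accent probabilities."""
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        # 'bidirectional=True' gives the forward/backward recurrent nets that
        # share one output layer, as described for BLSTM above.
        self.blstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                             num_layers=1, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_size, 1)

    def forward(self, features):          # features: (batch, T, n_features)
        h, _ = self.blstm(features)       # h: (batch, T, 2 * hidden_size)
        return torch.sigmoid(self.out(h)).squeeze(-1)  # accent sequence Y(t) in [0, 1]

# usage: probs = BLSTMAccentClassifier(n_features=30)(torch.randn(1, 500, 30))
```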
The classifier 210 may be trained beforehand with any conventional approach. That is, in a dataset used to train an accent/non-accent classifier, each frame is labelled as belonging to the accent or the non-accent class. However, the two classes are very unbalanced, as there are many more non-accent frames than accent frames. To alleviate this imbalance problem, it is proposed in the present application that the non-accent training frames be generated by randomly selecting at least one frame between each pair of adjacent accent frames.
Therefore, in the present application a method for training an audio classifier for identifying accent/non-accent frames in an audio segment is also provided, as shown in the accompanying flowchart: a training audio segment is transformed into a plurality of frames, accent frames are labeled among the plurality of frames, at least one frame between each pair of adjacent accent frames is randomly selected and labeled as a non-accent frame, and the audio classifier is trained using the accent frames plus the non-accent frames as the training dataset.
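A minimal sketch of this balancing strategy, assuming the accent labels are available as sorted frame indices (all names are illustrative):

```python
import numpy as np

def balanced_training_indices(accent_frames, rng=np.random.default_rng(0)):
    """Given sorted accent frame indices, randomly pick one non-accent frame between
    each pair of adjacent accent frames, so the two classes are less unbalanced."""
    accent_frames = np.asarray(sorted(accent_frames))
    non_accent = []
    for a, b in zip(accent_frames[:-1], accent_frames[1:]):
        if b - a > 1:                                   # at least one frame strictly in between
            non_accent.append(rng.integers(a + 1, b))   # uniform over (a, b)
    return accent_frames, np.asarray(non_accent)

# usage: pos, neg = balanced_training_indices([10, 55, 102, 148])
```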
Returning to the overall solution, the tempo estimator 300 will now be described.
In musical terminology, tempo is the speed or pace of a given piece. Tempo is usually indicated in beats per minute (BPM). This means that a particular note value (for example, a quarter note or crotchet) is specified as the beat, and a certain number of these beats must be played per minute. The greater the tempo, the larger the number of beats that must be played in a minute is, and, therefore, the faster a piece must be played. The beat is the basic unit of time, the pulse of the mensural level. Beat relates to the rhythmic element of music. Rhythm in music is characterized by a repeating sequence of stressed and unstressed beats (often called “strong” and “weak”).
The present application is not limited to music. For other audio signals than music, tempo and beat may have similar meaning and correspondingly similar physical properties.
Basically, all beats are accents but not all accents are beats, although there are also some exceptions in which some beats are not accents. Considering that there are more accents than beats, it will be more accurate to estimate the tempo based on accents than based on beats. Therefore, in the present application it is proposed to estimate the tempo value by detecting accents. Specifically, the tempo estimator 300 estimates a tempo sequence based on the accent sequence obtained by the accent identifier 200. Moreover, rather than estimating a single constant tempo value, the tempo estimator 300 obtains a tempo sequence, which may consist of a sequence of tempo values varying with the frames, that is, varying with time. In other words, each frame (or every several frames) has its (or their) own tempo value.
The tempo estimator 300 may be realized with any periodicity estimating techniques. If periodicity is found in an audio segment (in the form of an accent sequence), the period τ corresponds to a tempo value.
Possible periodicity estimating techniques include: the autocorrelation function (ACF), wherein the autocorrelation value at a specific lag reflects the probability score of that lag (which corresponds to the period τ, and further corresponds to the tempo value); comb filtering, wherein the cross-correlation value at a specific period/lag τ reflects the probability score of the period/lag; the histogram technique, wherein the occurrence probability/count of the period/lag between every two detected accents may reflect the probability score of the period/lag; periodicity transforms such as the Fast Fourier Transform (FFT) (here it is the accent sequence, not the original audio signal/frames, that is Fourier transformed), wherein the FFT value at a certain period/lag τ may reflect the probability score of the period/lag; and the multi-agent based induction method, wherein the goodness of match obtained by using a specific period/lag τ (representing an agent) in tempo tracking/estimation may reflect the probability score of the period/lag. In each possible technique, for a specific frame or a specific audio segment, the period/lag with the highest probability score shall be selected.
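Whatever periodicity estimating technique is chosen, the selected period/lag (counted in frames) maps to a tempo value in beats per minute through the frame hop duration. A minimal sketch of this conversion, assuming the exemplary 512-sample hop at 44.1 kHz mentioned earlier (the function name is a placeholder):

```python
def lag_to_bpm(lag_frames, hop=512, sr=44100):
    """Convert an inter-accent lag in frames to a tempo value in beats per minute."""
    seconds_per_frame = hop / sr          # about 0.0116 s for the exemplary values
    return 60.0 / (lag_frames * seconds_per_frame)

# usage: lag_to_bpm(43) -> roughly 120 BPM
```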
The audio processing apparatus 100 may, in a second embodiment, further comprise a beat tracking unit 400 for estimating a sequence of beat positions in a section of the accent sequence based on the tempo sequence. Again, since the estimated tempo sequence may well reflect the variation of the tempo, the estimated beat positions will not follow a constant periodicity and may well match the changing tempo values. Compared with conventional techniques of directly estimating beat positions (and then estimating tempo values based thereon), the present embodiments, which first estimate tempo values based on accent estimation and then estimate beat positions based on the tempo values, may obtain more accurate results.
A specific tempo value corresponds to a specific period or inter-beat duration (lag). Therefore, if one ground truth beat position is obtained, then all other beat positions may be obtained according to the tempo sequence. The one ground truth beat position may be called a “seed” of the beat positions.
In the present application, the beat position seed may be estimated using any technique. For example, the accent in the accent sequence with the highest probability score may be taken as the beat position seed. Alternatively, any other existing technique for beat estimation may be used, but only to obtain the seed, not all the beat positions, because the other beat positions will be determined based on the tempo sequence. Such existing techniques may include, but are not limited to, a peak picking method, a machine-learning based beat classifier, or a pattern-recognition based beat identifier.
Attack Saliency Feature

In a third embodiment, a new feature is proposed to enrich the feature space used by the classifier 210 (and/or the subset selector 208), and thereby improve the performance of the classifier 210, and thus the performance of the accent identifier 200, significantly. The new feature may be called the “attack saliency feature”, but it should be noted that this name is not intended to limit the feature or the present application in any sense.
Accordingly, a first feature extractor 202 (shown in the accompanying figures) may be provided in the accent identifier 200 for extracting at least one attack saliency feature from each of the plurality of audio frames, to be used by the classifier 210.
Simply speaking, an attack saliency feature represents the proportion that an elementary attack sound component takes in an audio frame. The term “attack” means a perceptible sound impulse or a perceptible start/onset of an auditory sound event. Examples of “attack” sounds include the sounds of percussive instruments such as the hi-hat, cymbal or drum (including snare drum, kick, tom, bass drum, etc.), and the sounds of hand-clapping or stamping. An attack sound has its own physical properties and may be decomposed into a series of elementary attack sound components which may be regarded as characterizing the attack sound. Therefore, the proportion of an elementary attack sound component in an audio frame may be used as an attack saliency feature indicating to what extent the audio frame sounds like an attack and is thus likely to be an accent.
The elementary attack sound components may be known beforehand. In one aspect, the elementary attack sound components may be learned from a collection of various attack sound sources like those listed in the previous paragraph. For this purpose, any decomposition algorithm or source separation method may be adopted, such as the Non-negative Matrix Factorization (NMF) algorithm, Principal Component Analysis (PCA) or Independent Component Analysis (ICA). That is, a general attack sound source generalized from the collection of various attack sound sources can be regarded as being decomposed into a plurality of elementary attack sound components (still taking the STFT spectrum as an example, although other spectrums are also feasible):
Xs(t,k)=A(t,n)*D(n,k)=[Aatt(t,1),Aatt(t,2), . . . , Aatt(t,N)]*[Datt(1,k),Datt(2,k), . . . , Datt(N,k)]′ (2)
Where Xs(t,k) is the attack sound source, k=1, 2, . . . , K, K is the number of Fourier coefficients for an audio frame, and t is the temporal index of the audio frame; D(n,k)=[Datt(1,k), Datt(2,k), . . . , Datt(N,k)]′ is the set of elementary attack sound components, n=1, 2, . . . , N, where N is the number of elementary attack sound components; and A(t,n)=[Aatt(t,1), Aatt(t,2), . . . , Aatt(t,N)] is a matrix of mixing factors of the respective elementary attack sound components.
In the learning stage, through the decomposition algorithm or source separation method stated above (but not limited thereto), both the matrix of mixing factors A(t,n) and the set of elementary attack sound components D(n,k) may be obtained; however, only D(n,k) is needed, and A(t,n) may be discarded.
Then, in the accent identifier 200, the first feature extractor 202 uses the same or a similar decomposition algorithm or source separation method to decompose an audio frame to be processed onto at least one of the elementary attack sound components D(n,k) obtained in the learning stage, resulting in a matrix of mixing factors, which may collectively or individually be used as the at least one attack saliency feature. That is,

X(t,k)=F(t,n)*D(n,k)=[Fatt(t,1),Fatt(t,2), . . . , Fatt(t,N)]*[Datt(1,k),Datt(2,k), . . . , Datt(N,k)]′ (3)

Where X(t,k) is the audio frame obtained in equation (1), k=1, 2, . . . , K, K is the number of Fourier coefficients for an audio frame, and t is the temporal index of the audio frame; D(n,k) is the set of elementary attack sound components obtained in equation (2), n=1, 2, . . . , N, where N is the number of elementary attack sound components; and F(t,n)=[Fatt(t,1), Fatt(t,2), . . . , Fatt(t,N)] is a matrix of mixing factors of the respective elementary attack sound components. The matrix F(t,n) as a whole, or any element in the matrix, may be used as the at least one attack saliency feature. The matrix of mixing factors may be further processed to derive the attack saliency feature, for example as some statistics of the mixing factors, or a linear/nonlinear combination of some or all of the mixing factors.
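As a sketch of one possible realization of equations (2) and (3) (the library choice and parameters are assumptions and not part of the embodiments), Non-negative Matrix Factorization from scikit-learn can learn the elementary attack sound components D(n,k) from a collection of attack-sound spectrograms and then produce the mixing factors F(t,n) for the frames of a signal to be analyzed:

```python
import numpy as np
from sklearn.decomposition import NMF

def learn_attack_components(attack_spectrograms, n_components=8):
    """Learning stage: stack magnitude spectrograms (frames x bins) of attack sound
    sources and factorize Xs ~ A * D; only the components D(n, k) need to be kept."""
    Xs = np.vstack(attack_spectrograms)
    model = NMF(n_components=n_components, init="nndsvd", max_iter=400)
    model.fit(Xs)                       # model.components_ plays the role of D, shape (N, K)
    return model

def attack_saliency(model, X):
    """Extraction stage: for frames X(t, k), compute mixing factors F(t, n) with D fixed;
    these (or statistics of them) serve as the attack saliency features."""
    return model.transform(X)           # shape (T, N)
```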
In a variant of the embodiment, the at least one elementary attack sound component may also be derived beforehand from musicology knowledge by manual construction. This is because an attack sound source has inherent physical properties and its own specific spectrum. Based on knowledge about the spectrum properties of attack sound sources, elementary attack sound components may therefore be constructed manually.
In a further variant of the embodiment, non-attack sound components may also be considered, since even an attack sound source such as a percussive instrument may comprise some non-attack sound components, which are nevertheless also characteristic of the attack sound source. In a real piece of music, it is the whole sound of the percussive instrument, such as a drum, rather than only some components of the drum, that indicates the accents or beats in the music. From another viewpoint, even if the mixing factors of the non-attack sound components are not ultimately included in the attack saliency features, more accurate results may be obtained if the decomposition algorithm takes all possible components, including non-attack sound components, into account; in other words, with non-attack components taken into account, all kinds of audio signals can be properly decomposed even if they contain some non-attack sound components or are mostly or entirely comprised of non-attack sound components.
Accordingly, in the learning stage, the sound source may be decomposed as follows:
Xs(t,k)=A(t,n)*D(n,k)=[Aatt(t,1),Aatt(t,2), . . . , Aatt(t,N1),Anon(t,N1+1),Anon(t,N1+2), . . . , Anon(t,N1+N2)]*[Datt(1,k),Datt(2,k), . . . , Datt(N1,k),Dnon(N1+1,k),Dnon(N1+2,k), . . . , Dnon(N1+N2,k)]′ (4)
Where Xs(t,k) is the attack sound source, k=1, 2, . . . , K, K is the number of Fourier coefficients for an audio frame, and t is the temporal index of the audio frame; D(n,k)=[Datt(1,k), Datt(2,k), . . . , Datt(N1,k), Dnon(N1+1,k), Dnon(N1+2,k), . . . , Dnon(N1+N2,k)]′ is the set of elementary sound components, n=1, 2, . . . , N1+N2, where N1 is the number of elementary attack sound components and N2 is the number of elementary non-attack sound components; and A(t,n)=[Aatt(t,1), Aatt(t,2), . . . , Aatt(t,N1), Anon(t,N1+1), Anon(t,N1+2), . . . , Anon(t,N1+N2)] is a matrix of mixing factors of the respective elementary sound components.
In a further variant, some non-attack sound sources, such as non-percussive instruments or singing voices, may also be added into the collection of sound sources in the learning stage, in addition to the attack sound sources. In such a situation, Xs(t,k) in equation (4) will comprise both attack sound sources and non-attack sound sources.
Then, in the accent identifier 200, the first feature extractor 202 uses the same or a similar decomposition algorithm or source separation method to decompose an audio frame to be processed onto at least one of the elementary sound components D(n,k) obtained in the learning stage, resulting in a matrix of mixing factors, which may be collectively or individually used as the at least one attack saliency feature. That is,

X(t,k)=F(t,n)*D(n,k) (5)

Where X(t,k) is the audio frame obtained in equation (1), k=1, 2, . . . , K, K is the number of Fourier coefficients for an audio frame, and t is the temporal index of the audio frame; D(n,k) is the set of elementary sound components obtained in equation (4), n=1, 2, . . . , N1+N2, where N1 is the number of elementary attack sound components and N2 is the number of elementary non-attack sound components; and F(t,n) is a matrix of mixing factors of the respective elementary sound components. The matrix F(t,n) as a whole, or any element in the matrix, may be used as the at least one attack saliency feature. The matrix of mixing factors may be further processed to derive the attack saliency feature, for example as some statistics of the mixing factors, or a linear/nonlinear combination of some or all of the mixing factors. As a further variant, although mixing factors Fnon(t,N1+1), Fnon(t,N1+2), . . . , Fnon(t,N1+N2) are also obtained for the elementary non-attack sound components, only the mixing factors Fatt(t,1), Fatt(t,2), . . . , Fatt(t,N1) for the elementary attack sound components may be considered when deriving the attack saliency feature.
A further variant is shown in the accompanying figures.
Relative Strength Feature

In a fourth embodiment, a further new feature is proposed to enrich the feature space used by the classifier 210 (and/or the subset selector 208), and thereby improve the performance of the classifier 210, and thus the performance of the accent identifier 200, significantly. The new feature may be called the “relative strength feature”, but it should be noted that this name is not intended to limit the feature or the present application in any sense.
Accordingly, a second feature extractor 204 (shown in the accompanying figures) may be provided in the accent identifier 200 for extracting at least one relative strength feature from each of the plurality of audio frames, to be used by the classifier 210.
Simply speaking, a relative strength feature of an audio frame represents the change of strength of the audio frame with respect to at least one adjacent audio frame. From the definition of accent, we know that an accent generally has greater strength than the adjacent (previous or subsequent) audio frames; therefore, the change of strength can be used as a feature for identifying accent frames. If real-time processing is considered, usually a previous frame is used to calculate the change (in the present application the previous frame is taken as an example). However, if the processing does not need to be in real time, a subsequent frame may also be used, or both may be used.
The change of strength may be computed based on the change of the signal energy or spectrum, such as the energy spectrum or the STFT spectrum. To more accurately track the instantaneous frequencies of signal components, a modification of the FFT spectrum may be exploited to derive the relative strength feature. This modified spectrum is called the time-corrected instantaneous frequency (TCIF) spectrum. The process of extracting the relative strength feature using this TCIF spectrum is presented below as an example, but the present application is not limited thereto, and the following processing can equally be applied to other spectrums, including the energy spectrum.
In a variant, the relative strength feature may be calculated as a difference between the spectrums of two concerned audio frames:
ΔX(t,k)=X(t,k)−X(t−1,k) (6)
Where t−1 indicates the previous frame.
In an alternative to the variant above, a ratio between spectrums of concerned frames may be used instead of the difference.
In a further alternative, the spectrum may be converted to log-scale, and a log-scale difference between concerned frames may be calculated as the difference:
Xlog(t,k)=log(X(t,k)) (7)
ΔXlog(t,k)=Xlog(t,k)−Xlog(t−1,k) (8)
Then, for each frame, K differences (or ratios) are obtained, each corresponding to a frequency bin. At least one of them may be used as the at least one relative strength feature. The differences (or ratios) may be further processed to derive the relative strength feature, for example as some statistics of the differences (or ratios), or a linear/nonlinear combination of some or all of the differences (or ratios). For example, as shown in the accompanying figures, an addition unit 2044 may be used to sum the differences over all K frequency bins to obtain a single relative strength value for each frame.
In a variant, the differences (including the log differences and the ratios) and/or the sum described above may be subjected to a half-wave rectification, which shifts the average of the differences and/or the sum approximately to zero and discards those values lower than the average. Accordingly, a first half-wave rectifier 2042 (shown in the accompanying figures) may be comprised in the second feature extractor 204 for this purpose, and the rectified difference after half-wave rectification may be denoted ΔXrect(t,k).
In a further variant, as shown in the accompanying figures, a low-pass filter 2046 may be comprised in the second feature extractor 204 for smoothing the differences and/or the sum along the time line.
Please note that the operations of the first half-wave rectifier 2042, the addition unit 2044 and the low-pass filter 2046 may be performed separately or in any combination and in any sequence. Accordingly, the second feature extractor 204 may comprise only one of them or any combination thereof.
In the above description a TCIF spectrum is taken as an example, and as stated before any spectrum, including the energy spectrum, may be processed similarly. In a further variant, any spectrum may be converted onto Mel bands to form a Mel spectrum, which may then be subjected to the operations above. The conversion may be expressed as:
X(t,k)→Xmel(t,k′) (10)
That is, the original spectrum X(t,k) on K frequency bins is converted into a Mel spectrum Xmel(t,k′) on K′ Mel bands, where k=1, 2, . . . , K, and k′=1, 2, . . . , K′.
Then all the operations (equations (6)-(9), for example) of the second feature extractor 204, including those of any one of the first half-wave rectifier 2042, the addition unit 2044 and the low-pass filter 2046, may be performed on the Mel spectrum of each audio frame. K′ differences (ratios, or log differences) respectively on the K′ Mel bands are then obtained, and at least one of them may be used as the at least one relative strength feature. If the addition unit is comprised, then the sum may be used alone as a relative strength feature, or together with the differences over the K′ Mel bands to form a vector of K′+1 dimensions as the relative strength feature. Generally K′=40. Since Mel bands represent human auditory perception more accurately, an accent identifier 200 working on Mel bands can better ensure that the identified accents comply with human auditory perception.
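Putting the pieces of this part together in a hedged sketch (the band pooling is a crude stand-in for a proper Mel filterbank, and the per-frame averaging and smoothing constant are assumptions rather than the exact operations of units 2042-2046):

```python
import numpy as np

def mel_like_bands(X, n_bands=40):
    """Crude stand-in for a Mel filterbank: pool the K linear-frequency bins of X(t, k)
    into roughly n_bands log-spaced bands Xmel(t, k') (cf. equation (10))."""
    K = X.shape[1]
    edges = np.unique(np.geomspace(1, K, n_bands + 1).astype(int))
    return np.stack([X[:, lo:hi].sum(axis=1) for lo, hi in zip(edges[:-1], edges[1:])], axis=1)

def relative_strength(X, n_bands=40, smooth=0.8):
    """Log difference between adjacent frames per band (cf. equations (7)-(8)), half-wave
    rectified around its per-frame average, summed over bands, and low-pass filtered."""
    Xlog = np.log(mel_like_bands(X, n_bands) + 1e-10)
    d = np.diff(Xlog, axis=0, prepend=Xlog[:1])              # delta Xlog(t, k')
    d = np.maximum(d - d.mean(axis=1, keepdims=True), 0.0)   # half-wave rectification
    s = d.sum(axis=1)                                        # sum over bands (addition unit)
    out = np.empty_like(s)                                   # one-pole low-pass filter
    acc = 0.0
    for t, v in enumerate(s):
        acc = smooth * acc + (1.0 - smooth) * v
        out[t] = acc
    return d, out    # per-band differences and smoothed overall relative strength

# usage: band_diffs, strength = relative_strength(X)   # X from the STFT sketch above
```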
Tempo Estimation

In the part “Overall Solutions” of this disclosure, some periodicity estimating techniques are introduced; they can be applied to the accent sequence obtained by the accent identifier 200 to obtain a variable tempo sequence.
In this part, as a fourth embodiment of the audio processing apparatus, a novel tempo estimator is proposed to be used in the audio processing apparatus, as shown in the accompanying figures. The tempo estimator 300 may comprise a dynamic programming unit 310 for estimating the tempo sequence based on the accent sequence.
A known example of the dynamic programming unit 310 is a Viterbi decoder, but the present application is not limited thereto, and any other dynamic programming techniques may be adopted. Simply speaking, dynamic programming techniques are used to predict a sequence of values (usually a temporal sequence) by collectively considering a predetermined length of the history and/or future of the sequence with respect to the present time point; this length of history, future, or history plus future may be called the “path depth”. For all the time points within the path depth, the various candidate values for each time point constitute different “paths”; for each possible path, a path metric may be calculated, and the path with the optimal path metric may be selected, thus determining all the values of the time points within the path depth.
The input of the dynamic programming unit 310 may be the accent sequence obtained by the accent identifier 200, denoted Y(t), where t is the temporal index of each audio frame (Y(t) being the accent probability score corresponding to that audio frame). In a variant, a half-wave rectification may be performed on Y(t) by a second half-wave rectifier 304, and the resulting half-wave rectified accent sequence y(t) may be used as the input of the dynamic programming unit 310 instead.
In a further variant, the tempo estimator 300 may comprise a smoothing unit 302 for eliminating noisy peaks in the accent sequence Y(t) before the processing of the dynamic programming unit 310 or the processing of the second half-wave rectifier 304. Alternatively, the smoothing unit 302 may operate on the output y(t) of the second half-wave rectifier 304 and output the smoothed sequence to the dynamic programming unit 310.
In yet a further variant, a periodicity estimation may further be performed, and the dynamic programming unit may operate on the resulting sequence of the periodicity estimation. For estimating the periodicity, the original accent sequence Y(t) or the half-wave rectified accent sequence y(t), either of which may also have been subjected to the smoothing operation of the smoothing unit 302, may be split into windows of length L. The longer the window, the finer the resolution of the tempo estimation, but the worse the tempo variation tracking capability; meanwhile, the higher the overlap, the better the tempo variation tracking. In one embodiment, we may set the window length L equal to 6 seconds and the overlap equal to 4.5 seconds. The non-overlapped portion of the window corresponds to the step size between windows, and the step size may vary from 1 frame (corresponding to one accent probability score Y(t) or its derivation y(t) or the like) to the window length L (no overlap). A window sequence y(m) is thus obtained, where m is the sequence number of the windows. Then any periodicity estimation algorithm, such as those described in the part “Overall Solutions” of this disclosure, may be performed on each window, and for each window a periodicity function γ(l,m) is obtained, representing a score of the periodicity corresponding to a specific period (lag) l. Then, for different values of l and for all the windows within the path depth, an optimal path metric may be selected based at least on the periodicity values, and a path of periodicity values is thus determined. The period l in each window is just the lag corresponding to a specific tempo value s(m), e.g. s(m)=1/l in appropriate units (cf. equation (12)), where s(m) is the tempo value at window m.
Accordingly, the tempo estimator 300 may comprise a periodicity estimator 306 for estimating periodicity values of the accent sequence within a moving window and with respect to different candidate tempo values (lags or periods), and the dynamic programming unit 310 may comprise a path metric unit 312 for calculating the path metric based on the periodicity values with respect to the different candidate tempo values, wherein a tempo value is estimated for each step of the moving window, the size of the moving window depends on the intended precision of the estimated tempo value, and the step size of the moving window depends on the intended sensitivity to tempo variation.
In a variant, the tempo estimator 300 may further comprise a third half-wave rectifier 308, after the periodicity estimator 306 and before the dynamic programming unit 310, for rectifying the periodicity values with respect to a moving average value or a history average value thereof before the dynamic programming processing. The third half-wave rectifier 308 is similar to the first and second half-wave rectifiers, and a detailed description thereof is omitted.
The path metric unit 312 can calculate the path metric using any existing technique. In the present application, a further implementation is proposed, which derives the path metric from at least one of the following probabilities for each candidate tempo value in each candidate tempo sequence (that is, each candidate path): a conditional probability pemi(γ(l,m)|s(m)) of a periodicity value given a specific candidate tempo value; a prior probability pprior(s(m)) of a specific tempo value; and a probability pt(s(m+1)|s(m)) of a transition from one specific tempo value to another specific tempo value in a tempo sequence. In a specific implementation using all three probabilities, the path metric may be calculated as, for example:
p(S,γ)=pprior(s(0))·pemi(γ(l,M)|s(M))·Π0,M-1(pt(s(m+1)|s(m))·pemi(γ(l,m)|s(m))) (13)
Where p(S,γ) is the path metric function of a candidate path S with respect to a periodicity value sequence γ(l,m); the path depth is M, that is, S=s(m)=(s(0), s(1), . . . , s(M)), m=0, 1, 2, . . . , M; pprior(s(0)) is the prior probability of the candidate tempo value of the first moving window; and pemi(γ(l,M)|s(M)) is the conditional probability of a specific periodicity value γ(l,M) for window m=M given that the window is in the tempo state s(M).
For the different values of s(m) of each moving window m in the path, corresponding to different period/lag values l, there are different path metrics p(S,γ). The final tempo sequence is the path making the path metric p(S,γ) optimal:
Ŝ=argmaxS(p(S,γ)) (14)
Then a tempo path or tempo sequence Ŝ=s(m) is obtained. It may be converted into a tempo sequence s(t). If the step size of the moving window is 1 frame, then s(m) is directly s(t), that is, m=t. If the step size of the moving window is more than 1 frame, such as w frames, then in s(t), every w frames have the same tempo value.
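For illustration only, a compact dynamic-programming (Viterbi-style) search corresponding to equations (13) and (14) might look as follows; `periodicity`, `prior` and `transition` stand for pemi, pprior and pt and would be supplied by the probability calculators described next (all names are placeholders, and log probabilities are used for numerical stability):

```python
import numpy as np

def viterbi_tempo_path(periodicity, prior, transition):
    """periodicity: (M, L) scores gamma(l, m) per window m and candidate lag l
    prior:        (L,)   prior probability of each lag/tempo state
    transition:   (L, L) probability of moving from lag i to lag j between windows
    Returns one lag index per window, maximizing the path metric of equation (13)."""
    eps = 1e-12
    emission = periodicity / (periodicity.sum(axis=1, keepdims=True) + eps)   # cf. equation (15)
    M, L = periodicity.shape
    log_delta = np.log(prior + eps) + np.log(emission[0] + eps)
    backptr = np.zeros((M, L), dtype=int)
    for m in range(1, M):
        cand = log_delta[:, None] + np.log(transition + eps)   # (prev lag, next lag)
        backptr[m] = np.argmax(cand, axis=0)
        log_delta = cand[backptr[m], np.arange(L)] + np.log(emission[m] + eps)
    path = np.zeros(M, dtype=int)
    path[-1] = int(np.argmax(log_delta))
    for m in range(M - 1, 0, -1):
        path[m - 1] = backptr[m, path[m]]
    return path   # each lag index can be converted to BPM, e.g. with the lag-to-BPM sketch earlier
```

The step size of the moving window then determines how the per-window path maps back to the per-frame tempo sequence s(t), as described above.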
Accordingly, the path metric unit 312 may comprise at least one of a first probability calculator 2042, a second probability calculator 2044 and a third probability calculator 2046, for calculating the three probabilities pemi(γ(l,m)|s(m)), pprior(s(m)) and pt(s(m+1)|s(m)), respectively.
The conditional probability pemi(γ(l,m)|s(m)) is the probability of a specific periodicity value γ(l,m) for a specific lag l for window m given that the window is in the tempo state s(m) (a tempo value, corresponding to a specific lag or inter-beat duration l). l is related to s(m) and can be obtained from equation (12). In other words, the conditional probability pemi(γ(l,m)|s(m)) is equivalent to a conditional probability pemi(γ(l,m)|l) of the specific periodicity value γ(l,m) for moving window m given a specific lag or inter-beat duration l. This probability may be estimated based on the periodicity value with regard to the specific candidate tempo value l in moving window m and the periodicity values for all possible candidate tempo values l within the moving window m, for example:
pemi(γ(l,m)|s(m))=pemi(γ(l,m)|l)=γ(l,m)/Σlγ(l,m) (15)
For example, for a specific lag l=L0, that is a specific tempo value s(m)=T0=1/L0, we have:
pemi(γ(l,m)|s(m))=pemi(γ(L0,m)|T0)=pemi(γ(L0,m)|L0)=γ(L0,m)/Σlγ(l,m) (15-1)
However, for the path metric p(S,γ) in equation (13), every possible value of l for each moving window m shall be tried so as to find the optimal path. That is, in equation (15-1) the specific lag L0 shall vary within the possible range of l for each moving window m; that is, equation (15) shall be used for the purpose of equation (13).
The prior probability pprior(s(m)) is the probability of a specific tempo state s(m) itself. In music, different tempo values may have a general distribution. For example, generally the tempo value will range from 30 to 500 bpm (beats per minute), then tempo values less than 30 bpm and greater than 500 bpm may have a probability of zero. For other tempo values, each may have a probability value corresponding to the general distribution. Such probability values may be obtained beforehand through statistics or may be calculated with a distribution model such as a Gaussian model.
We know there are different music genres, styles or other metadata relating to audio types. For different types of audio signals, the tempo values may have different distributions. Therefore, in a variant, the second probability calculator 2044 may be configured to calculate the probability of a specific tempo value in a specific moving window based on the probabilities of possible metadata values corresponding to the specific moving window and conditional probability of the specific tempo value given each possible metadata value of the specific moving window, for example:
pprior(s(m))=Σgpprior(s(m)|g)·p(g) (16)
Where pprior(s(m)|g) is conditional probability of s(m) given a metadata value g, and p(g) is the probability of metadata value g.
That is, if the audio signal in the moving window has certain metadata value, then each candidate tempo value in the moving window has its probability corresponding to the metadata value. When the moving window corresponds to multiple possible metadata values, then the probability of each candidate tempo value in the moving window shall be a weighted sum of the probabilities for all possible metadata values. The weights may be, for example, the probabilities of respective metadata values.
Supposing the tempo range of each metadata value g is modeled as a Gaussian function N(μg, σg), where μg is mean and σg is variance, the prior probability of a specific tempo can be predicted as below:
pprior(s(m))=ΣgN(μg,σg)·p(g) (17)
The metadata information (a metadata value and its probability) may have been encoded in the audio signal and may be retrieved using existing techniques, or may be extracted with a metadata extractor 2048 (shown in the accompanying figures), such as an audio classifier.
The probability pt(s(m+1)|s(m)) is a conditional probability of a tempo state s(m+1) given the tempo state of the previous moving window is s(m), or the probability of transition from a specific tempo value for a moving window to a specific tempo value for the next moving window.
Similar to the probability pprior(s(m)), in music, different tempo value transition pairs may have a general distribution, and each pair may have a probability value corresponding to the general distribution. Such probability values may be obtained beforehand through statistics or may be calculated with a distribution model such as a Gaussian model. And similarly, for different metadata values (such as audio types) of audio signals, the tempo value transition pairs may have different distributions. Therefore, in a variant, the third probability calculator 2046 may be configured to calculate the probability of transition from a specific tempo value for a moving window to a specific tempo value for the next moving window based on the probabilities of possible metadata values corresponding to the moving window or the next moving window and the probability of the specific tempo value for the moving window transiting to the specific tempo value for the next moving window for each of the possible metadata values, for example:
pt(s(m+1)|s(m))=Σgpt(s(m+1),s(m)|g)·p(g) (18)
Where pt(s(m+1), s(m)|g) is conditional probability of a consecutive tempo value pair s(m+1) and s(m) given a metadata value g, and p(g) is the probability of metadata value g. Similar to the second probability calculator 2044, g and p(g) may have been encoded in the audio signal and may be simply retrieved, or may be extracted by the metadata extractor 2048, such as an audio classifier.
In a variant, the tempo transition probability pt(s(m+1), s(m)|g) may be modeled as a Gaussian function N(0, σg′) for each metadata value g, where σg′ is variance, and the mean is equal to zero as we favour tempo continuity over time. Then the transition probability can be predicted as below:
pt(s(m+1)|s(m))=ΣgN(0,σg′)·p(g) (19)
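A small sketch of equations (17) and (19), assuming the per-metadata-value Gaussian parameters and the probabilities p(g) are available (e.g. from an audio classifier); the numbers in the usage comment are invented for illustration, and σ is treated as a standard deviation here:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def tempo_prior(tempi, genre_probs, genre_models):
    """Equation (17): weighted sum of per-genre Gaussians over the candidate tempo values."""
    return sum(p * gaussian(tempi, mu, sigma)
               for (mu, sigma), p in zip(genre_models, genre_probs))

def tempo_transition(tempi, genre_probs, genre_widths):
    """Equation (19): zero-mean Gaussian on the tempo change between adjacent windows,
    weighted by the genre probabilities (favoring tempo continuity)."""
    delta = tempi[:, None] - tempi[None, :]          # s(m+1) - s(m) for every candidate pair
    return sum(p * gaussian(delta, 0.0, w) for w, p in zip(genre_widths, genre_probs))

# usage (illustrative numbers only):
# tempi = np.arange(30, 301, dtype=float)                      # candidate tempo values in BPM
# prior = tempo_prior(tempi, [0.7, 0.3], [(120, 25), (90, 40)])
# trans = tempo_transition(tempi, [0.7, 0.3], [5.0, 10.0])     # (L, L) matrix for the Viterbi sketch
```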
As mentioned before, the periodicity estimation algorithm may be implemented with the autocorrelation function (ACF). Therefore, as an example, the periodicity estimator 306 may comprise an autocorrelation function (ACF) calculator for calculating autocorrelation values of the accent probability scores within a moving window as the periodicity values. The autocorrelation values may be further normalized with the size L of the moving window and the candidate tempo value (corresponding to the lag l).
In a variant, the tempo estimator 300 may further comprise an enhancer 314 (shown in the accompanying figures) for enhancing the autocorrelation value at each lag l with the autocorrelation values at integer multiples α·l of that lag, resulting in an enhanced autocorrelation value sequence R(l,m). For example, if the lag l is to be enhanced only with its double and triple, then α may range from 1 to 3, and so on.
With the enhanced autocorrelation value sequence R(l,m), equations (13), (14) and (15) may be rewritten as:

p(S,R)=pprior(s(0))·pemi(R(l,M)|s(M))·Π0,M-1(pt(s(m+1)|s(m))·pemi(R(l,m)|s(m))) (13′)

Ŝ=argmaxS(p(S,R)) (14′)

pemi(R(l,m)|s(m))=R(l,m)/ΣlR(l,m) (15′)
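As a hedged sketch of the ACF calculator and the enhancer 314 described above (the exact normalization of equation (20) and the enhancement of equation (21) are not reproduced; the forms below are plausible stand-ins), the accent sequence is split into overlapping windows, an autocorrelation score is computed per candidate lag, and each lag is then reinforced with its double and triple:

```python
import numpy as np

def windowed_acf(y, win, hop, lags):
    """Autocorrelation of the (rectified) accent sequence y within moving windows.
    Returns gamma with shape (n_windows, n_lags)."""
    starts = range(0, max(1, len(y) - win + 1), hop)
    gamma = []
    for s in starts:
        w = y[s:s + win]
        gamma.append([np.dot(w[:-l], w[l:]) / max(len(w) - l, 1) for l in lags])
    return np.asarray(gamma)

def enhance_acf(gamma, lags):
    """Add the scores at double and triple of each lag (where available), so that a lag
    and its integer multiples reinforce each other; this plays the role of the enhancer 314."""
    lag_index = {l: i for i, l in enumerate(lags)}
    R = gamma.copy()
    for i, l in enumerate(lags):
        for a in (2, 3):
            if a * l in lag_index:
                R[:, i] += gamma[:, lag_index[a * l]]
    return R

# usage (frame counts roughly match the exemplary 6 s window / 1.5 s step at the 512-sample hop):
# lags = list(range(10, 173))                       # roughly 30-500 BPM
# R = enhance_acf(windowed_acf(y, win=517, hop=129, lags=lags), lags)
```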
Beat Tracking

In the part “Overall Solutions” of this disclosure, some beat tracking techniques are introduced; they can be applied to the tempo sequence obtained by the tempo estimator 300 to obtain a beat sequence.
In this part, as a fifth embodiment of the audio processing apparatus, a novel beat tracking unit 400 is proposed to be used in the audio processing apparatus, as shown in the accompanying figures. The beat tracking unit 400 may comprise a predecessor tracking unit 402 and a selection unit 404, described below.
Specifically, as illustrated in the accompanying figures, each position t in the accent sequence may first be assigned an initial score equal to its accent probability score:
scoreini(t)=y(t) (22)
And for an anchor position, for example, its updated score may be a sum of its old score and the score of the previous candidate beat position:
scoreupd(t)=score(t−P)+scoreold(t) (23)
Where scoreupd(t) is the updated score of the anchor position t, score(t−P) is the score of the previous candidate beat position searched out from the anchor position t, assuming that the previous candidate beat position is P frames earlier than the anchor position t, and scoreold(t) is the old score of the anchor position t, that is its score before the updating. If it is the first time of updating for the anchor position, then
scoreold(t)=scoreini(t) (24)
The selection unit 404 uses the finally updated score.
In the embodiment described above, the accent sequence is scanned from the left to the right, each position in turn being taken as the anchor position. In a variant, the accent sequence may instead (or additionally) be scanned from the right to the left, in which case the score of an anchor position may be updated as:
score′upd(t)=score′(t+P′)+score′old(t) (23′)
If it is the first time of updating for the anchor position, then
score′old(t)=scoreini(t) (24′)
Where score′(t+P′) is the score of the previous (with respect to the right-to-left direction) candidate beat position searched out from the anchor position t. In the scanning direction from the right to the left, the previous candidate beat position is searched; viewed in the natural direction of the audio signal, that is, from the left to the right, it is the subsequent candidate beat position that is searched. That is, the frame index of the searched candidate beat position is greater than the anchor frame index t, the difference being assumed to be P′ frames.
In a further variant where the scanning is performed in both directions, then for each position, a combined score may be obtained based on the finally updated scores in both directions. The combination may be of any manner such as addition or multiplication. For example:
scorecom(t)=scoreupd(t)*score′upd(t) (25)
The selection unit 404 uses the combined score.
After the beat position seed has been determined by the selection unit 404, the other beat positions may be deduced from the beat position seed according to the tempo sequence with any existing technique, as mentioned in the part “Overall Solutions” of the present disclosure. As a variant, the other beat positions may be tracked iteratively with the predecessor tracking unit 402 in the forward direction and/or the backward direction. In a further variant, a previous candidate beat position has already been found and may be stored for each anchor position before the beat position seed is selected; then, after the beat position seed has been selected, the other beat positions may be tracked using the stored information. That is, pairs of an “anchor position” and the corresponding “previous candidate beat position” are stored. Take as an example the situation where only scanning in the forward direction is performed, that is, only scoreupd(t) has been obtained. Then, in the backward direction, a previous beat position may be tracked by using the beat position seed as the anchor position and finding the corresponding previous candidate beat position as the previous beat position; a further previous beat position may then be tracked using the tracked previous beat position as the new anchor position, and so on until the start of the accent sequence. In the forward direction, a subsequent beat position may be tracked by regarding the beat position seed as a “previous candidate beat position” and finding the corresponding anchor position as the subsequent beat position; a further subsequent beat position may then be tracked using the tracked subsequent beat position as a new “previous candidate beat position”, and so on until the end of the accent sequence.
When searching the previous candidate beat position based on the anchor position, the predecessor tracking unit 402 may be configured to track the previous candidate beat position by searching a searching range determined based on the tempo value at the corresponding position in the tempo sequence.
As shown in the accompanying figures, for an anchor position t, a preliminary beat period T (in frames) may be deduced from the tempo value at the corresponding position in the tempo sequence, and the previous candidate beat position may be searched among the positions located p frames before the anchor position, where p takes values in the searching range:
p=((0.75T),(0.75T)+1, . . . ,(1.5T)) (26)
where (•) denotes the rounding function.
As stated before, the predecessor tracking unit 402 may adopt any existing technique. In the present application, a new solution is proposed, adopting a cost function that highlights the preliminarily estimated beat period deduced from the corresponding tempo value. For example, we may apply a log-time Gaussian function (but not limited thereto) over the searching range p, as shown in the accompanying figures.
In an implementation, the log-time Gaussian function is used over the searching range p as a weighting window to approximate the transition probability txcost from the anchor position to the previous candidate beat position (note that the maximum of the log-time Gaussian window is located at a distance T from the anchor position).
Search over all possible previous candidate beat positions (predecessors) t−p in the searching range p, and update their scores with the transition probability:
scoreupd(t−p)=α·txcost(t−p)+scoreold(t−p) (28)
where α is the weight applied to the transition cost; it may range from 0 to 1, and a typical value is 0.7. Here, scoreold(t−p) may already have been updated once when the position t−p was used as an anchor position, as described before, and in equation (28) it is updated again. The selection unit 404 uses the finally updated score of each position.
Based on scoreupd(t−p), the best previous candidate beat location t−P with the highest score is found:
t−P=t−argmaxp(scoreupd(t−p)) (29)
Then, as in equation (23), the score of the present anchor position may be updated based on the updated score of the position t−P, that is, score(t−P). Optionally, the position t−P may be stored as the previous candidate beat position with respect to the anchor position t, and may be used in the subsequent steps.
In brief, the predecessor tracking unit 402 may be configured to update the score of each position in the searching range based on a transition cost calculated based on the position and the corresponding tempo value, to select the position having the highest score in the searching range as the previous candidate beat position, and to update the score of the anchor position based on the highest score in the searching range.
Still as shown in the accompanying figures, when the accent sequence is scanned from the right to the left, the previous candidate beat position (with respect to that scanning direction) may be searched among the positions located p′ frames after the anchor position, where p′ takes values in the searching range:
p′=((0.75T),(0.75T)+1, . . . ,(1.5T)) (26′)
where (•) denotes the rounding function.
As an example, for an anchor position t1, the searching range is equivalent to [t1+(0.75T), t1+(1.5T)] in the dimension of t. Similar to equations (23′) and (24′), when the anchor position scans from the right to the left, equations (27) to (29) may be rewritten accordingly for the processing of the predecessor tracking unit, with an apostrophe added to each term and t−p replaced by t+p′.
Referring to equation (23′), the score of the present anchor position may be updated based on the updated score of the position t+P′, that is, score′(t+P′). Optionally, the position t+P′ may be stored as the previous candidate beat position with respect to the anchor position t, and may be used in subsequent steps.
As stated before, the selecting unit 404 selects the highest score among the finally updated scores of all the positions in the accent sequence, and the corresponding position is used as the beat position seed. The finally updated scores may be obtained by the predecessor tracking unit scanning the accent sequence in either the forward or the backward direction. The selecting unit may also select the highest score among combined scores obtained from the finally updated scores of both the forward and backward scans.
After that, the other beat positions may be tracked iteratively with the predecessor tracking unit 402 in the forward and/or backward direction using techniques similar to those discussed above, without the necessity of updating the scores. In a further variant, when searching for the previous candidate beat position (predecessor) of each anchor position, the found previous candidate beat position may be stored; then, after the beat position seed has been selected, the other beat positions may be tracked using the stored information. For example, taking the beat position seed P0 as anchor position, we may obtain two adjacent beat positions P1 and P′1 from the stored previous candidate beat positions in the backward and forward directions, respectively. Then, using P1 and P′1 respectively as anchor positions, we may further obtain two adjacent beat positions P2 and P′2 from the stored previous candidate beat positions, and so on until the two ends of the accent sequence. We thus obtain a sequence of beat positions:
Px, Px−1, . . . , P2, P1, P0, P′1, P′2, . . . , P′y−1, P′y  (30)
where x and y are integers.
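Given stored anchor/predecessor pairs such as those produced by the sketch above, collecting the beat sequence of equation (30) from the beat position seed can be sketched as follows; representing the stored pairs as a simple predecessor array and taking the first matching anchor in the forward direction are simplifications made for illustration.

```python
def track_from_seed(seed, predecessor):
    """Collect beat positions around a beat position seed using stored predecessor pairs.

    predecessor[t] holds the stored previous candidate beat position of anchor t (or -1).
    Backward direction: follow predecessor links from the seed toward the start.
    Forward direction: find an anchor whose stored predecessor is the current beat.
    """
    beats = [seed]
    t = seed
    while predecessor[t] >= 0:                         # backward direction
        t = predecessor[t]
        beats.insert(0, t)
    t = seed
    while True:                                        # forward direction
        successors = [a for a, p in enumerate(predecessor) if p == t]
        if not successors:
            break
        t = successors[0]                              # first matching anchor (simplification)
        beats.append(t)
    return beats                                       # P_x, ..., P_1, P_0, P'_1, ..., P'_y
```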
All the embodiments and variants thereof discussed above may be implemented in any combination thereof, and any components mentioned in different parts/embodiments but having the same or similar functions may be implemented as the same or separate components.
For example, the embodiments and variants illustrated in the different figures may be implemented in any combination with each other.
As discussed at the beginning of the Detailed Description of the present application, the embodiments of the application may be embodied in hardware, in software, or in both.
In the exemplary general-purpose computing apparatus shown, a central processing unit (CPU) 1301 performs various processes in accordance with a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage section 1308 into a random access memory (RAM) 1303, in which data required when the CPU 1301 performs the various processes are also stored as required.
The CPU 1301, the ROM 1302 and the RAM 1303 are connected to one another via a bus 1304. An input/output interface 1305 is also connected to the bus 1304.
The following components are connected to the input/output interface 1305: an input section 1306 including a keyboard, a mouse, or the like; an output section 1307 including a display such as a cathode ray tube (CRT), a liquid crystal display (LCD), or the like, and a loudspeaker or the like; the storage section 1308 including a hard disk or the like; and a communication section 1309 including a network interface card such as a LAN card, a modem, or the like. The communication section 1309 performs a communication process via the network such as the internet.
A drive 1310 is also connected to the input/output interface 1305 as required. A removable medium 1311, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 1310 as required, so that a computer program read therefrom is installed into the storage section 1308 as required.
In the case where the above-described components are implemented by software, the program that constitutes the software is installed from a network such as the Internet or from a storage medium such as the removable medium 1311.
In addition to general-purpose computing apparatus, the embodiments of the present application may also be implemented in a special-purpose computing device, which may be a part of any kind of audio processing apparatus or any kind of voice communication terminal.
The present application may be applied in many areas. The rhythmic information in multiple levels is not only essential for the computational modeling of music understanding and music information retrieval (MIR) applications, but also useful for audio processing applications. For example, once the musical beats are estimated, we can use them as the temporal unit for high-level beat-based computation instead of low-level frame-based computation that cannot tell much about music. Beat and bar detection can be used to align the other low-level features to represent perceptually salient information, so the low-level features are grouped by musically meaningful content. This has recently been proven to be exceptionally useful for mid-specificity MIR tasks such as cover song identification.
In the audio signal post-processing realm, one exemplary application is using tempo estimation to optimize the release time of the compression control applied to an audio signal. For music with a slow tempo, the audio compression processing is suitably applied with a long release time to preserve the integrity and richness of the sound, whereas for music with a fast tempo and salient rhythmic beats, the audio compression processing is suitably applied with a short release time so that the sound does not become muddy.
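As an illustration of this application, a compressor's release time could be scheduled from the estimated tempo roughly as follows; the BPM thresholds and release times below are illustrative assumptions and not values from this disclosure.

```python
def release_time_ms(bpm, slow_bpm=80.0, fast_bpm=140.0, long_ms=500.0, short_ms=80.0):
    """Map an estimated tempo (BPM) to a compressor release time: long release for
    slow tempos, short release for fast tempos, linear interpolation in between.
    All constants are placeholders chosen for illustration only."""
    if bpm <= slow_bpm:
        return long_ms
    if bpm >= fast_bpm:
        return short_ms
    frac = (bpm - slow_bpm) / (fast_bpm - slow_bpm)
    return long_ms + frac * (short_ms - long_ms)
```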
Rhythm is one of the most fundamental and crucial characteristics of audio signals. Automatic estimation of music rhythm can potentially be utilized as a fundamental module in a wide range of applications, such as audio structure segmentation, content-based querying and retrieval, automatic classification, music structure analysis, music recommendation, playlist generation, audio to video (or image) synchronization, etc. Related applications have gained a place in software and web services aimed at recording producers, musicians and mobile application developers, as well as in widely distributed commercial hardware mixers for DJs.
Audio Processing Methods
In the process of describing the audio processing apparatus in the embodiments hereinbefore, some processes or methods are apparently also disclosed. Hereinafter a summary of these methods is given without repeating the details already discussed, but it shall be noted that, although the methods are disclosed in the course of describing the audio processing apparatus, they do not necessarily adopt the components as described and are not necessarily executed by those components. For example, the embodiments of the audio processing apparatus may be realized partially or completely with hardware and/or firmware, while the audio processing methods discussed below may be realized totally by a computer-executable program, although the methods may also adopt the hardware and/or firmware of the audio processing apparatus.
The methods will be described below with reference to the accompanying figures.
As shown in the overall flow, accent frames are first identified from a plurality of audio frames 10, resulting in an accent sequence 20 (operation S20), and a tempo sequence 30 of the plurality of audio frames is then estimated based on the accent sequence 20 (operation S30).
Further, based on the tempo sequence 30, a sequence of beat positions 40 in a section of the accent sequence may be estimated (operation S40).
The operation of identifying the accent frames may be realized with various classifying algorithms as discussed before, especially with a Bidirectional Long Short-Term Memory (BLSTM) network, the advantage of which has been discussed.
For classifying the accent frames, various features may be extracted. In the present application, some new features are proposed, including attack saliency features and relative strength features. These features may be used by any classifier, together with other features, to classify the audio frames 10 (operation S29). In different embodiments, either or both of the attack saliency features and the relative strength features may be extracted and used, optionally together with at least one additional feature.
In a variant, the identifying operation S20 may further comprise selecting a subset of features from among the at least one additional feature (operation S28), the at least one attack saliency feature and/or the at least one relative strength feature, and the classifying operation S29 may be performed based on the subset of features 15.
For extracting the at least one attack saliency feature, a decomposition algorithm may be used, such as the Non-negative Matrix Factorization (NMF) algorithm, Principal Component Analysis (PCA) or Independent Component Analysis (ICA). Specifically, an audio frame may be decomposed into at least one elementary attack sound component, and the mixing factors of the at least one elementary attack sound component may serve, collectively or individually, as the basis of the at least one attack saliency feature.
Generally, an audio signal may comprise not only elementary attack sound components but also elementary non-attack sound components. For decomposing an audio signal more precisely, and for adapting the decomposition algorithm to any audio signal, in the present application an audio frame may be decomposed into both at least one elementary attack sound component and at least one elementary non-attack sound component, resulting in a matrix of mixing factors of the at least one elementary attack sound component and the at least one elementary non-attack sound component, which serve collectively or individually as the basis of the at least one attack saliency feature. In a variant, although mixing factors of both the elementary attack sound components and the elementary non-attack sound components are obtained, only the mixing factors of the elementary attack sound components are used as the basis of the at least one attack saliency feature.
The individual mixing factors, or the matrix as a whole, may be used as the at least one attack saliency feature. Alternatively, any linear or non-linear combination (such as a sum or weighted sum) of some or all of the mixing factors may be used. More complex methods for deriving the attack saliency feature from the mixing factors are also conceivable.
After obtaining the at least one attack saliency feature of each audio frame, the feature may be normalized with the energy of the audio frame (operation S23).
For decomposing the audio frames, the at least one elementary attack sound component and/or the at least one elementary non-attack sound component must be known beforehand. They may be obtained beforehand with any decomposition algorithm from at least one attack sound source and/or non-attack sound source, or may be derived beforehand from musicology knowledge by manual construction.
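A minimal sketch of computing such an attack saliency feature is given below, assuming the elementary attack and non-attack components have already been obtained offline (for example by NMF on attack and non-attack sound sources) and using non-negative least squares for the per-frame mixing factors; the choice of solver and the exact normalization are illustrative assumptions, while the overall scheme follows the description above.

```python
import numpy as np
from scipy.optimize import nnls

def attack_saliency(frame_spec, W_attack, W_nonattack):
    """Mixing factors of pre-learned elementary attack/non-attack components for one
    magnitude-spectrum frame, normalized by the frame energy.

    W_attack, W_nonattack: matrices whose columns are elementary spectral components
    learned beforehand. Solving for the per-frame activations with non-negative least
    squares is an assumption; any NMF-style decomposition with a fixed dictionary works.
    """
    W = np.hstack([W_attack, W_nonattack])   # fixed dictionary of elementary components
    h, _ = nnls(W, frame_spec)               # non-negative mixing factors for this frame
    h_attack = h[:W_attack.shape[1]]         # keep only the attack-component activations
    energy = float(np.sum(frame_spec)) + 1e-12
    return h_attack / energy                 # energy-normalized attack saliency features
```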
Incidentally, the audio frames to be decomposed may be represented by any kind of spectrum (and the elementary attack/non-attack sound components may be of the same kind of spectrum), including a Short-Time Fourier Transform (STFT) spectrum, a Time-Corrected Instantaneous Frequency (TCIF) spectrum, or a Complex Quadrature Mirror Filter (CQMF) transformed spectrum.
The relative strength feature, representing the change of strength of the audio frame with respect to at least one adjacent audio frame, may be a difference or a ratio between a spectrum of the audio frame and that of the at least one adjacent audio frame. As variants, different transformations may be performed on the spectra of the audio frames. For example, the spectrum (such as an STFT, TCIF or CQMF spectrum) may be converted into a log-scale spectrum, a Mel band spectrum or a log-scale Mel band spectrum. For each frame, the difference/ratio may be in the form of a vector comprising differences/ratios in different frequency bins or Mel bands. At least one of these differences/ratios, or any linear/non-linear combination of some or all of them, may be taken as the at least one relative strength feature. For example, for each audio frame, the differences over at least one Mel band/frequency bin may be summed, optionally with weights, and the sum used as a part of the at least one relative strength feature.
In a variant, for each Mel band or each frequency bin, the differences may be further half-wave rectified in the dimension of time (frames). In the half-wave rectification, the reference may be the moving average value or history average value of the differences over the plurality of audio frames (along the time line). The sum/weighted sum of the differences over different frequency bins/Mel bands may be subject to similar processing. Additionally or alternatively, redundant high-frequency components of the differences and/or the sum/weighted sum in the dimension of time may be filtered out, such as by a low-pass filter.
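The following sketch computes one such relative strength feature: the frame-to-frame difference of a log Mel band spectrum, half-wave rectified against a moving average along time and summed over bands. The moving-average window length and the choice of a moving average rather than a history average are assumptions made for illustration.

```python
import numpy as np

def relative_strength(mel_spec, win=16):
    """Relative strength feature per frame from a (n_frames, n_bands) Mel band spectrum."""
    log_mel = np.log(mel_spec + 1e-10)
    diff = np.diff(log_mel, axis=0, prepend=log_mel[:1])   # change vs. the previous frame
    kernel = np.ones(win) / win
    moving_avg = np.apply_along_axis(                      # moving average along time, per band
        lambda x: np.convolve(x, kernel, mode="same"), 0, diff)
    rectified = np.maximum(diff - moving_avg, 0.0)         # half-wave rectification
    return rectified.sum(axis=1)                           # sum over Mel bands
```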
After getting the accent sequence 20, the tempo sequence 30 may be estimated with a dynamic programming algorithm (operation S30), which takes the accent sequence as input and outputs an optimal estimated tempo sequence by minimizing a path metric of a path consisting of a predetermined number of candidate tempo values along the time line.
Before the dynamic programming processing, some pre-processing may be conducted. For example, the accent sequence 20 may be smoothed (operation S31) to eliminate noisy peaks in the accent sequence, and/or half-wave rectified (operation S31) with respect to a moving average value or history average value of the accent sequence.
In one embodiment, the accent sequence 20 may be divided into overlapping segments (moving windows), and periodicity values within each moving window and with respect to different candidate tempo values may first be estimated (operation S33). The path metric may then be calculated based on the periodicity values with respect to the different candidate tempo values.
As further variants of the embodiment, the periodicity values may be further subjected to half-wave rectification (operation S34) and/or an enhancing processing (operation S35). The half-wave rectification may be conducted in the same manner as the other half-wave rectifications discussed before and may be realized with a similar or the same module. The enhancing processing aims to enhance the relatively higher periodicity values of the accent sequence in a moving window when the corresponding candidate tempo value is likely to be correct.
There are different kinds of periodicity values and corresponding estimating algorithms, as discussed before. One example is the autocorrelation values of the accent probability scores within the moving window. In such a case, the autocorrelation values may further be normalized with the size of the moving window and the candidate tempo value. The enhancing operation S35 may then comprise enhancing the autocorrelation value for a specific candidate tempo value with the autocorrelation values at lags that are integer multiples of the lag corresponding to the specific candidate tempo value.
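A sketch of such a normalized and enhanced autocorrelation periodicity for one moving window follows; the normalization by the number of overlapping samples and the number of enhancing multiples n_harm are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

def periodicity(window_accent, candidate_lags, n_harm=3):
    """Autocorrelation periodicity of an accent-score window for candidate tempo lags
    (in frames), normalized and enhanced with integer multiples of each lag."""
    x = np.asarray(window_accent, float)
    x = x - x.mean()
    N = len(x)
    acf = np.correlate(x, x, mode="full")[N - 1:]          # autocorrelation for lags >= 0
    values = []
    for lag in candidate_lags:
        v = sum(acf[k * lag] / (N - k * lag)               # normalized term at each multiple
                for k in range(1, n_harm + 1) if k * lag < N)
        values.append(max(v, 0.0))                         # half-wave rectification
    return np.array(values)
```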
Now turn to the path metric, which may be calculated (operation S368) based on at least one of: a conditional probability of a periodicity value given a specific candidate tempo value; a prior probability of a specific candidate tempo value; and a probability of transition from one specific tempo value to another specific tempo value in a tempo sequence. The conditional probability of a periodicity value of a specific moving window with respect to a specific candidate tempo value may be estimated based on the periodicity value with regard to the specific candidate tempo value and the periodicity values for all possible candidate tempo values of the specific moving window (operation S362). For a specific moving window, the prior probability of a specific candidate tempo value may be estimated based on the probabilities of the possible metadata values corresponding to the specific moving window and the conditional probability of the specific tempo value given each possible metadata value of the specific moving window (operation S364). Finally, the probability of transition from a specific tempo value for a moving window to a specific tempo value for the next moving window may be estimated based on the probabilities of the possible metadata values corresponding to the moving window or the next moving window and, for each of the possible metadata values, the probability of the specific tempo value for the moving window transiting to the specific tempo value for the next moving window (operation S366).
The metadata may represent audio types classified based on any standards. It may indicate music genre, style, etc. The metadata may have been encoded in the audio segment and may be simply retrieved/extracted from the information encoded in the audio stream (operation S363). Alternatively, the metadata may be extracted in real time from the audio content of the audio segment corresponding to the moving window (operation S363). For example, the audio segment may be classified into audio types using any kind of classifier.
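Combining the three terms above, the search for the optimal tempo path can be sketched as a Viterbi-style recursion over candidate tempo values; treating the path metric as a sum of negative log-probabilities, with the prior applied at every window, is an assumption about how the terms are combined rather than the exact formulation of this disclosure.

```python
import numpy as np

def best_tempo_path(obs_ll, prior_ll, trans_ll):
    """Tempo sequence minimizing a path metric over candidate tempo values.

    obs_ll:   (n_windows, n_tempi) log P(periodicity | tempo) per moving window
    prior_ll: (n_windows, n_tempi) log prior of each candidate tempo per window
    trans_ll: (n_tempi, n_tempi)   log transition probability trans_ll[prev, cur]
    Returns the index of the chosen candidate tempo value for each window.
    """
    n_win, n_tempi = obs_ll.shape
    cost = -(obs_ll[0] + prior_ll[0])                       # path metric at the first window
    back = np.zeros((n_win, n_tempi), dtype=int)
    for w in range(1, n_win):
        total = cost[:, None] - trans_ll - (obs_ll[w] + prior_ll[w])[None, :]
        back[w] = np.argmin(total, axis=0)                  # best previous tempo for each tempo
        cost = total[back[w], np.arange(n_tempi)]
    path = [int(np.argmin(cost))]
    for w in range(n_win - 1, 0, -1):                       # backtrack the optimal path
        path.insert(0, int(back[w][path[0]]))
    return path
```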
Now come to beat tracking. As shown in the corresponding flow, each position in a section of the accent sequence may be scanned as an anchor position, a previous candidate beat position may be tracked for each anchor position based on the tempo sequence (operation S42), and the scores of the positions are updated accordingly; the position having the highest finally updated score is then selected as the beat position seed.
After the beat position seed has been found, the other beat positions may be tracked with the same algorithm as the tracking operation discussed above. However, considering that the tracking operation has already been done for each position, it might be unnecessary to repeat the operation. Therefore, in a variant (shown with dotted lines in the corresponding flow), the previous candidate beat position found for each anchor position may be stored, and after the beat position seed has been selected the other beat positions may be tracked directly using the stored information.
The processing described with reference to the flow above may be performed by scanning the accent sequence in the forward direction, the backward direction, or both, consistent with the apparatus embodiments discussed above.
The operation S42 of tracking the previous candidate beat position may be realized with any technique that searches a searching range determined based on the tempo value at the corresponding position in the tempo sequence (the inner cycle of the flow).
The searching range may be determined based on the tempo value corresponding to the anchor position. For example, based on the tempo value, a period between the anchor position and the previous candidate beat position may be estimated, and the searching range may be set around the estimated previous candidate beat position; within the searching range, positions closer to the estimated previous candidate beat position then receive higher weights. A transition cost may be calculated according to such a rule (operation S422), and the score of each position in the searching range may be updated with the transition cost (operation S424). Note again that, within the scanning in one direction (forward or backward), the score of each position is repeatedly updated (and thus accumulated), both when it serves as an anchor position and when it is covered by the searching range of a later anchor position. Between two scans in different directions, however, the scores are independent; that is, the scores in the scan of a different direction are updated from the beginning, i.e. from their initial scores determined based on the probability scores of accent decisions of the corresponding audio frames.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims
1. An audio processing apparatus comprising:
- an accent identifier for identifying accent frames from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames, wherein the accent frames include at least one of an emphasis placed on a particular note and a phonetic prominence given to a particular syllable;
- a tempo estimator for estimating a tempo sequence of the plurality of audio frames based on the accent sequence; and
- an audio processor that uses the tempo sequence to perform an audio processing operation, the audio processing operation including one or more of cover song identification, audio compression control, content-based audio querying and retrieval, automatic audio classification, music structure analysis, music recommendation, music playlist generation, audio to video synchronization, and audio to image synchronization.
2. The audio processing apparatus according to claim 1, wherein the plurality of audio frames are partially overlapped with each other.
3. The audio processing apparatus according to claim 1, wherein the accent identifier comprises:
- a first feature extractor for extracting, from each audio frame, at least one attack saliency feature representing the proportion that at least one elementary attack sound component takes in the audio frame; and
- a classifier for classifying the plurality of audio frames at least based on the at least one attack saliency feature.
4. The audio processing apparatus according to claim 3, wherein the first feature extractor is configured to estimate the at least one attack saliency feature for each audio frame with a decomposition algorithm by decomposing the audio frame into at least one elementary attack sound component, resulting in a matrix of mixing factors of the at least one elementary attack sound component, collectively or individually as the basis of the at least one attack saliency feature.
5. The audio processing apparatus according to claim 3, wherein the first feature extractor further comprises a normalizing unit for normalizing the at least one attack saliency feature of each audio frame with the energy of the audio frame.
6. The audio processing apparatus according to claim 1, wherein the accent identifier comprises:
- a second feature extractor for extracting, from each audio frame, at least one relative strength feature representing change of strength of the audio frame with respect to at least one adjacent audio frame, and
- a classifier for classifying the plurality of audio frames at least based on the at least one relative strength feature.
7. The audio processing apparatus according to claim 6, wherein the accent identifier comprises:
- a first feature extractor for extracting, from each audio frame, at least one attack saliency feature representing the proportion that at least one elementary attack sound component takes in the audio frame;
- a second feature extractor for extracting, from each audio frame, at least one relative strength feature representing change of strength of the audio frame with respect to at least one adjacent audio frame, and
- a classifier for classifying the plurality of audio frames at least based on one of the at least one attack saliency feature and the at least one relative strength feature.
8. The audio processing apparatus according to claim 1, wherein the tempo estimator comprises a dynamic programming unit taking the accent sequence as input and outputting an optimal estimated tempo sequence by minimizing a path metric of a path consisting of a predetermined number of candidate tempo values along time line.
9. The audio processing apparatus according to claim 8, wherein the tempo estimator further comprises a second half-wave rectifier for rectifying, before the processing of the dynamic programming unit, the accent sequence with respect to a moving average value or history average value of the accent sequence.
10. The audio processing apparatus according to claim 8, further comprising:
- a beat tracking unit for estimating a sequence of beat positions in a section of the accent sequence based on the tempo sequence.
11. An audio processing method comprising:
- identifying accent frames from a plurality of audio frames, resulting in an accent sequence comprised of probability scores of accent and/or non-accent decisions with respect to the plurality of audio frames, wherein the accent frames include at least one of an emphasis placed on a particular note and a phonetic prominence given to a particular syllable;
- estimating a tempo sequence of the plurality of audio frames based on the accent sequence; and
- using the tempo sequence to perform an audio processing operation, the audio processing operation including one or more of cover song identification, audio compression control, content-based audio querying and retrieval, automatic audio classification, music structure analysis, music recommendation, music playlist generation, audio to video synchronization, and audio to image synchronization, wherein the audio processing method is implemented with one or more processors and one or more memories, wherein the one or more processors and one or more memories implement an accent identifier and a tempo estimator, wherein the accent identifier identifies the accent frames, and wherein the tempo estimator estimates the tempo sequence.
12. The audio processing method according to claim 11, wherein the plurality of audio frames are partially overlapped with each other.
13. The audio processing method according to claim 11, wherein the identifying operation comprises:
- extracting, from each audio frame, at least one attack saliency feature representing the proportion that at least one elementary attack sound component takes in the audio frame; and
- classifying the plurality of audio frames at least based on the at least one attack saliency feature.
14. The audio processing method according to claim 13, wherein the extracting operation comprises estimating the at least one attack saliency feature for each audio frame with a decomposition algorithm by decomposing the audio frame into at least one elementary attack sound component, resulting in a matrix of mixing factors of the at least one elementary attack sound component, collectively or individually as the basis of the at least one attack saliency feature.
15. The audio processing method according to claim 13, wherein the extracting operation comprises estimating the at least one attack saliency feature with the decomposition algorithm by decomposing each audio frame into at least one elementary attack sound component and at least one elementary non-attack sound component, resulting in a matrix of mixing factors of the at least one elementary attack sound component and the at least one elementary non-attack sound component, collectively or individually as the basis of the at least one attack saliency feature.
16. The audio processing method according to claim 14, wherein the at least one attack sound component is obtained beforehand with the decomposition algorithm from at least one attack sound source.
17. The audio processing method according to claim 14, wherein the at least one elementary attack sound component is derived beforehand from musicology knowledge by manual construction.
18. The audio processing method according to claim 13, further comprising normalizing the at least one attack saliency feature of each audio frame with the energy of the audio frame.
19. An apparatus comprising a processor and configured to perform the method recited in claim 11.
20. A non-transitory computer readable storage medium, comprising software instructions, which when executed by one or more processors cause performance of the method recited in claim 11.
Type: Grant
Filed: May 20, 2014
Date of Patent: Nov 28, 2017
Patent Publication Number: 20140358265
Assignee: Dolby Laboratories Licensing Corporation (San Francisco, CA)
Inventors: Jun Wang (Beijing), Lie Lu (Beijing)
Primary Examiner: Fan Tsang
Assistant Examiner: Eugene Zhao
Application Number: 14/282,654
International Classification: G06F 17/00 (20060101); G10H 1/40 (20060101);