System and method for recognizing audio pieces via audio fingerprinting
An audio fingerprinting system and method. A server receives an audio fingerprint of a first audio piece, searches a database for the audio fingerprint, retrieves an audio profile vector associated with the audio fingerprint, updates user preference information based on the audio profile vector, and selects a second audio piece based on the user preference information. The audio fingerprint is generated by creating a matrix based on the frequency measurements of the audio piece, and performing a singular value decomposition of the matrix. To expedite the search of the database and to increase matching accuracy, a subset of candidates in the database is identified based on the most prominent musical notes of the audio piece, and the search is limited to the identified subset. One of the attributes of the audio profile vector is a particular audio class. An identifier for the audio class is generated based on an average of audio fingerprints of the audio pieces belonging to the audio class.
This application is a continuation of U.S. patent application Ser. No. 10/668,926, filed Sep. 23, 2003, now U.S. Pat. No. 7,013,301, the content of which is hereby incorporated by reference as if set forth in full herein.
FIELD OF THE INVENTION

The present invention is generally related to automatically identifying unknown audio pieces and, more specifically, to a system and method for efficiently identifying unknown audio pieces via their audio fingerprints.
BACKGROUND OF THE INVENTION

It is often desirable to automatically identify an audio piece by analyzing the content of its audio signal, especially when no descriptive data is associated with the audio piece. Prior art fingerprinting systems generally allow recognition of audio pieces based on arbitrary portions of the piece. The fingerprints in the fingerprint database are often time-indexed to allow appropriate alignment of a fingerprint generated from the arbitrary portion with a stored fingerprint. Time-based fingerprinting systems therefore add the complicating step of locating a correct segment in the fingerprint database before any comparison may be performed.
The generating and storing of time-indexed audio fingerprints are redundant if an assumption may be made as to the portion of the audio piece that will be available for fingerprinting. For example, if it is known that the audio piece to be identified will always be available from the beginning of the piece, it is not necessary to maintain time-indexed fingerprints of the audio piece for the various time slices, nor is it necessary to time-align a query fingerprint with a stored fingerprint.
Another problem encountered in prior art fingerprinting systems is that as the number of registered fingerprints in the fingerprint database increases, the time expended to obtain a match also increases.
Thus, what is needed is a fingerprinting system that provides a reliable, fast, and robust identification of audio pieces. Such a system should be configured to reduce the search space in performing the identification for a better matching accuracy and speed.
SUMMARY OF THE INVENTION

According to one embodiment, the invention is directed to a method for making choices from a plurality of audio pieces where the method includes: receiving an audio fingerprint of a first audio piece; searching a database for the audio fingerprint; retrieving an audio profile vector associated with the audio fingerprint, the audio profile vector quantifying a plurality of attributes associated with the audio piece; updating user preference information based on the audio profile vector; and selecting a second audio piece based on the user preference information.
According to another embodiment, the invention is directed to an audio fingerprinting method that includes: receiving an audio signal associated with an audio piece; obtaining a plurality of frequency measurements of the audio signal; building a matrix A based on the frequency measurements; performing a singular value decomposition on the matrix A, wherein A=USVT; retrieving one or more rows of matrix VT; associating the retrieved rows of matrix VT with the audio piece; and storing the retrieved rows of matrix VT in a data store.
According to another embodiment, the invention is directed to an audio indexing method that includes: receiving an audio signal of an audio piece; automatically obtaining from the audio signal a list of musical notes included in the audio piece; determining from the audio signal a prominence of the musical notes in the audio piece; selecting a pre-determined number of most prominent musical notes in the audio piece; generating an index based on the selected musical notes; and searching a database based on the generated index.
According to another embodiment, the invention is directed to a method for generating an identifier for an audio class where the method includes: selecting a plurality of audio pieces associated with the audio class; computing an audio fingerprint for each selected audio piece; calculating an average of the computed audio fingerprints; generating an average fingerprint based on the calculation; associating the average fingerprint to the audio class; and storing the average fingerprint in a data store.
According to another embodiment, the invention is directed to an audio selection system that includes: a first data store storing a plurality of audio fingerprints for a plurality of audio pieces; a second data store storing a plurality of audio profile vectors for the plurality of audio fingerprints, each audio profile vector quantifying a plurality of attributes associated with the audio piece corresponding to the audio fingerprint; means for searching the first data store for an audio fingerprint of a first audio piece; means for retrieving from the second data store an audio profile vector associated with the audio fingerprint; means for updating user preference information based on the retrieved audio profile vector; and means for selecting a second audio piece based on the user preference information.
According to another embodiment, the invention is directed to an audio fingerprinting system that includes a processor configured to: receive an audio signal associated with an audio piece; obtain a plurality of frequency measurements of the audio signal; build a matrix A based on the frequency measurements; perform a singular value decomposition on the matrix A, wherein A=USVT; retrieve one or more rows of matrix VT; and associate the retrieved rows of matrix VT with the audio piece. The audio fingerprint system also includes a data store coupled to the processor for storing the retrieved rows of matrix VT.
According to another embodiment, the invention is directed to an audio indexing system that includes a means for receiving an audio signal of an audio piece; means for automatically obtaining from the audio signal a list of musical notes included in the audio piece; means for determining from the audio signal a prominence of the musical notes in the audio piece; means for selecting a pre-determined number of most prominent musical notes in the audio piece; means for generating an index based on the selected musical notes; and means for searching a database based on the generated index.
According to another embodiment, the invention is directed to a system for generating an identifier for an audio class where the system includes: means for computing an audio fingerprint for each of a plurality of selected audio pieces; means for calculating an average of the computed audio fingerprints; means for associating the calculated average to the audio class; and means for storing the calculated average in a data store.
According to another embodiment, the invention is directed to an article of manufacture comprising a computer readable medium having computer usable program code containing executable instructions that, when executed, cause a computer to perform the steps of: obtaining a plurality of frequency measurements of an audio signal associated with an audio piece; building a matrix A based on the frequency measurements; performing a singular value decomposition on the matrix A, wherein A=USVT; retrieving one or more rows of matrix VT; associating the retrieved rows of matrix VT with the audio piece; and storing the retrieved rows of matrix VT in a data store.
According to another embodiment, the invention is directed to an article of manufacture comprising a computer readable medium having computer usable program code containing executable instructions that, when executed, cause a computer to perform the steps of: automatically obtaining from an audio signal of an audio piece, a list of musical notes included in the audio piece; determining from the audio signal a prominence of the musical notes in the audio piece; selecting a pre-determined number of most prominent musical notes in the audio piece; generating an index based on the selected musical notes; and searching a database based on the generated index.
These and other features, aspects and advantages of the present invention will be more fully understood when considered with respect to the following detailed description, appended claims, and accompanying drawings. Of course, the actual scope of the invention is defined by the appended claims.
The audio file 11 provided to the audio file reader 12 may be an entire audio piece or a portion of the audio piece to be recognized or registered. According to one embodiment of the invention, the audio file contains at least the first thirty seconds of the audio piece. A person of skill in the art should recognize, however, that shorter or longer segments may also be used in alternative embodiments.
The received audio file 11 is transmitted to a music preprocessor 16 which, according to one embodiment of the invention, is configured to take certain pre-processing steps prior to analysis of the audio file. Exemplary pre-processing steps may include normalizing the audio signal to ensure that the maximum level in the signal is the same for all audio samples, transforming the audio data from stereo to mono, eliminating silent portions of the audio file, and the like. A person skilled in the art should recognize, however, that the pre-processing step may be eliminated or may include other types of audio pre-processing steps that are conventional in the art.
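By way of illustration only, a minimal Python sketch of such a pre-processing stage follows; the function name, frame length, and silence threshold are illustrative assumptions rather than values prescribed by the described embodiment.

    import numpy as np

    def preprocess(samples: np.ndarray, frame_len: int = 1024,
                   silence_threshold: float = 1e-3) -> np.ndarray:
        # Transform stereo (channels x samples) audio data to mono.
        if samples.ndim == 2:
            samples = samples.mean(axis=0)
        # Normalize so the maximum level is the same for all audio samples.
        peak = np.max(np.abs(samples))
        if peak > 0:
            samples = samples / peak
        # Eliminate silent portions: keep only frames whose RMS energy
        # exceeds the (illustrative) silence threshold.
        n = len(samples) // frame_len
        frames = samples[: n * frame_len].reshape(n, frame_len)
        rms = np.sqrt((frames ** 2).mean(axis=1))
        return frames[rms >= silence_threshold].ravel()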
The preprocessor 16 is coupled to a fingerprint extraction engine 18, fingerprint analysis engine 20, indexing engine 22, and class identification engine 24. According to one embodiment of the invention, the engines are processors that implement instructions stored in memory. A person of skill in the art should recognize, however, that the engines may be implemented in hardware, firmware (e.g. ASIC), or a combination of hardware, firmware, and software.
According to one embodiment of the invention, the fingerprint extraction engine 18 automatically generates a compact representation, hereinafter referred to as a fingerprint or signature, of the audio file 11, for use as a unique identifier of the audio piece. According to one embodiment of the invention, the audio fingerprint is represented as a matrix.
The fingerprint analysis engine 20 analyzes an audio fingerprint generated by the fingerprint extraction engine 18 for a match against registered fingerprints in a fingerprint database 26. Based on the match, either the fingerprint analysis engine or a separate engine coupled to the fingerprint analysis engine (not shown) retrieves additional data associated with the audio piece. The additional data may be, for example, an audio profile vector (also referred to as acoustic analysis data) that describes the various attributes of the audio piece as is described in further detail in U.S. patent application Ser. No. 10/278,636, filed on Oct. 23, 2002, the content of which is incorporated herein by reference. As described in patent application Ser. No. 10/278,636, the acoustic analysis data is generated based on an automatic processing of audio signals of the audio piece. The acoustic analysis data provides numerical measurements for various predetermined acoustic attributes. Such acoustic attributes include tempo, repeating sections in the audio piece, energy level, presence of particular instruments (e.g. snares, kick drums), rhythm, bass patterns, harmony, particular music classes (e.g. jazz piano trio), and the like. Of course, a person of skill in the art should recognize that other types of data may also be associated with the audio piece, such as, for example, title information, artist or group information, concert information, new release information, and/or links, such as URL links, to further data.
The indexing engine 22 associates the extracted audio fingerprint with an index that may be used by the fingerprint analysis engine 20 to identify a subset of candidates in the fingerprint database 26. According to one embodiment of the invention, the index is generated based on the prominent musical notes contained in the audio piece. Once the index is generated, a subset of audio fingerprints in the fingerprint database 26 to which the audio piece belongs may be identified.
The class identification engine 24 generates identifiers for different sets of audio pieces that belong to particular musical classes. According to one embodiment of the invention, the audio pieces in a particular musical class are similar in terms of overall instrumentation/orchestration. For example, an exemplary musical class may be identified as including a jazz piano trio, a cappella singing, acoustic guitar, acoustic piano, solo acoustic guitar with vocal, or the like. The various musical classes may then be included as attributes of an audio profile vector, where a value set for a particular musical class attribute indicates how close or far the audio piece is to the musical class. The identifiers and information about the various musical classes may then be stored in a musical class database 28.
The fingerprint database 26 stores a plurality of fingerprints of known audio pieces. The fingerprints may be grouped into discrete subsets based on the musical notes contained in the audio pieces. Each audio fingerprint may be associated with the actual audio file, an audio profile vector, a description of the audio piece (e.g. title, artist and/or group), concert information, new release information, URL links to additional data, and/or the like.
Based on the FFT calculation, the fingerprint extraction engine 18 generates, in step 102, a T×F matrix A, where T≧F. According to one embodiment of the invention, the rows of the matrix represent time, and the columns of the matrix represent frequency measurements, also referred to as bins, of the FFT.
In step 104, the fingerprint extraction engine 18 performs the well-known Singular Value Decomposition (SVD) operation on matrix A. In general terms, SVD is a technique that reduces an original matrix into a product of three matrices as follows:
SVD(A)=USVT
where U is a T×F orthogonal matrix, S is an F×F diagonal matrix with positive or zero valued elements, and VT is the transpose of an F×F orthogonal matrix. According to one embodiment of the invention, the rows of V transposed are the coordinates that capture the most variance, that is, retain the most information about the audio piece in decreasing order of significance as measured by the diagonal entries of the S matrix.
In step 106, the fingerprint extraction engine 18 extracts a predetermined number of rows from the matrix VT and in step 108, builds a fingerprint matrix from the extracted rows. In step 110, the fingerprint matrix is set as the audio piece's fingerprint by associating the fingerprint matrix to the audio piece in any manner that may be conventional in the art.
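A minimal Python sketch of steps 102 through 110 follows, assuming NumPy and a simple FFT magnitude spectrogram as the T×F matrix A; the frame length and the number of retained rows are illustrative parameters, not values prescribed by the embodiment.

    import numpy as np

    def extract_fingerprint(samples: np.ndarray, frame_len: int = 512,
                            num_rows: int = 10) -> np.ndarray:
        # Step 102: build the T x F matrix A, where rows represent time
        # and columns represent FFT frequency measurements (bins).
        n = len(samples) // frame_len
        frames = samples[: n * frame_len].reshape(n, frame_len)
        A = np.abs(np.fft.rfft(frames, axis=1))
        # Step 104: singular value decomposition, A = U S V^T; the rows
        # of V^T are ordered by decreasing significance, as measured by
        # the diagonal entries of S.
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        # Steps 106-110: extract a predetermined number of rows of V^T
        # and use them as the fingerprint matrix.
        return Vt[:num_rows]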
In step 112, the fingerprint matrix is stored in a data store. The data store is the fingerprint database 26 if the fingerprint extraction is done for registration purposes. Otherwise, the data store is a temporary storage location for storing the fingerprint matrix for later retrieval by the fingerprint analysis engine 20 for comparing against registered fingerprints.
Unlike many audio fingerprints generated by prior art systems, the audio fingerprint generated via the SVD operation has no notion of time associated with it. A person of skill in the art should recognize, however, that time may be associated with the audio fingerprint generated via the SVD operation. In other words, the process of generating audio fingerprints described with relation to
According to one embodiment of the invention, the fingerprint extraction engine 18 may also incorporate prior art fingerprinting techniques such as, for example, spectral centroid and/or spectral flatness measures which result in time-indexed fingerprint measurements. If used, the results of either or both of these measures may be added to the fingerprint matrix generated by the SVD operation.
On the other hand, if there are more fingerprints in the fingerprint database that have not been analyzed, the fingerprint analysis engine 20 computes, in step 206, a difference between the fingerprint matrix X and a current fingerprint (fingerprint matrix Y) in the fingerprint database 26. According to one embodiment of the invention, the difference is computed by taking the well-known Euclidean distance measure D for each row vector of the fingerprint matrices X and Y as follows:
D = √((x1−y1)² + (x2−y2)² + . . . + (xm−ym)²)
where x1, x2, . . . , xm are the values of a row vector of fingerprint matrix X, and y1, y2, . . . , ym are the values of a row vector of fingerprint matrix Y. The distance measures for all the rows of the matrices are summed and, according to one embodiment of the invention, normalized. In step 208, a determination is made as to whether the sum of the distances exceeds a threshold value. If the answer is NO, a match is declared. Otherwise, a next fingerprint in the fingerprint database is examined for a match.
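A Python sketch of the distance computation and threshold test of steps 206 and 208 follows; normalizing by the number of rows is one possible reading of the normalization mentioned above.

    import numpy as np

    def is_match(X: np.ndarray, Y: np.ndarray, threshold: float) -> bool:
        # Euclidean distance D between corresponding row vectors of the
        # fingerprint matrices X and Y.
        row_distances = np.sqrt(((X - Y) ** 2).sum(axis=1))
        # Sum the distance measures for all rows and normalize.
        total = row_distances.sum() / X.shape[0]
        # Step 208: a match is declared if the sum of the distances does
        # not exceed the threshold value.
        return total <= threshold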
According to one embodiment of the invention, if prior art fingerprinting techniques are also introduced, the time-indexed vectors generated by these techniques are measured for distance against corresponding stored fingerprint vectors and scaled by an appropriate constant. The resulting distance calculation is added to the distance calculation computed in step 206. A weighting factor may also be introduced to give more or less weight to the distance calculation performed by a particular technique. The total distance computation is then tested against the threshold value to determine if a match has been made.
The remainder of the process of
In this regard, the fingerprint analysis engine 20 inquires in step 304 whether there are more fingerprints in the identified subset of the fingerprint database 26 to compare. If the answer is NO, the fingerprint analysis engine returns a no match result in step 306.
If there are more fingerprints in the subset that have not been analyzed, the fingerprint analysis engine 20 computes in step 308 a difference between fingerprint matrix X and a current fingerprint (fingerprint matrix Y) in the subset. In step 310, a determination is made as to whether the difference exceeds a threshold value. If the answer is NO, a match is declared. Otherwise, a next fingerprint in the identified subset is examined for a match.
The process illustrated in
The peak-tracking algorithm generates tracks of local peaks in the FFT which are then analyzed by the indexing engine for their prominence. In this regard, the indexing engine 22 determines in step 404 whether there are any more tracks to examine. If the answer is YES, the engine converts, in step 406, the track's frequency into an integer value that quantizes the track's frequency. According to one embodiment of the invention, this is done by quantizing the track's frequency to a closest MIDI (Musical Instrument Digital Interface) note number in a manner that is well known in the art.
In step 408, the indexing engine 22 computes a prominence value for the track based on factors such as, for example, the track's strength and duration. In step 410, the engine associates the computed prominence value to the track's MIDI note. In step 412, the prominence value for the MIDI note is accumulated into a prominence array. The process then returns to step 404 for analyzing a next track.
If there are no more tracks to examine, the indexing engine 22 selects, in step 414, the MIDI note numbers in the prominence array with the highest prominence values and outputs them as an index of the associated subset in the fingerprint database 26. According to one embodiment of the invention, the four MIDI note numbers with the highest prominence values are selected for the index. According to one embodiment of the invention, the index consists of four unordered numbers, the selected MIDI note numbers; because the numbers are unordered, all 24 possible orderings of the four notes map to the same index.
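A Python sketch of the quantization and index construction of steps 404 through 414 follows; the MIDI conversion uses the standard formula (note 69 corresponds to 440 Hz), and the input representation of the peak tracks is an illustrative assumption.

    import numpy as np

    def to_midi_note(freq_hz: float) -> int:
        # Quantize a track's frequency to the closest MIDI note number.
        return int(round(69 + 12 * np.log2(freq_hz / 440.0)))

    def build_index(tracks, num_notes: int = 4) -> tuple:
        # tracks: iterable of (frequency in Hz, prominence value) pairs.
        prominence = {}
        for freq, value in tracks:
            # Steps 406-412: accumulate each track's prominence value
            # under its quantized MIDI note number.
            note = to_midi_note(freq)
            prominence[note] = prominence.get(note, 0.0) + value
        # Step 414: select the most prominent notes; sorting the result
        # makes the index independent of the order of selection.
        top = sorted(prominence, key=prominence.get, reverse=True)[:num_notes]
        return tuple(sorted(top))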
The process starts, and in step 500, a set of audio pieces that belong to the musical class are selected. The selection of the pieces may be manual or automatic.
In step 502, the class identification engine computes a fingerprint for each audio piece in the set. According to one embodiment of the invention, the class identification engine invokes the fingerprint extraction engine 18 to compute the fingerprints via SVD operations. Other fingerprinting mechanisms may also be used in lieu of, or in addition to, the SVD fingerprinting mechanism.
In step 504, the class identification engine 24 calculates an average of the fingerprints generated for the set. In this regard, the class identification engine computes a matrix, referred to as a class ID matrix, that minimizes a distance measure to all the audio pieces in the set in a manner that is well known in the art.
In step 506, the calculated average of the fingerprints represented by the class ID matrix is associated with the musical class and in step 508, stored in the musical class database 28 as its identifier along with other information about the musical class. Such additional information may include, for example, a list of audio pieces that belong to the class, links to the fingerprint database 26 of audio fingerprints of the audio pieces that belong to the class, links to the audio profile vectors for the audio pieces that belong to the class, and/or the like.
Once the identifiers for the musical classes have been generated, calculations may be made to determine how close or far an audio piece is to a particular musical class. This may be done, for example, by computing the distance between the fingerprint extracted for the audio piece and the class ID matrix for the particular musical class.
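A sketch of the class identifier computation and the distance calculation follows, assuming the distance measure being minimized is the summed squared Euclidean distance, for which the element-wise mean of equally sized fingerprint matrices is the minimizer; the function names are illustrative.

    import numpy as np

    def class_identifier(fingerprints: list) -> np.ndarray:
        # Average of the fingerprints generated for the set (steps 502-504):
        # the element-wise mean minimizes the summed squared Euclidean
        # distance to all fingerprint matrices in the set.
        return np.mean(np.stack(fingerprints), axis=0)

    def distance_to_class(fingerprint: np.ndarray,
                          class_id_matrix: np.ndarray) -> float:
        # Row-wise Euclidean distance between the audio piece's fingerprint
        # and the class ID matrix, summed over all rows.
        diffs = ((fingerprint - class_id_matrix) ** 2).sum(axis=1)
        return float(np.sqrt(diffs).sum())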
According to one embodiment of the invention, the various musical classes are used as attributes of an audio piece's audio profile vector. The distance calculations are stored in the audio profile vector for each attribute as an indication of how close the audio piece is to the associated musical class.
According to one embodiment of the invention, the audio fingerprinting system 10 resides in the server 600. Portions of the audio fingerprinting system may also reside in end terminals 602-608. The server 600 and/or end-terminals 602-608 may also include the music profiler disclosed in U.S. patent application Ser. No. 10/278,636, for automatically analyzing an audio piece and generating an audio profile vector. One or more processors included in the server 600 and/or end terminals 602-608 may further be configured with additional functionality to recommend audio pieces to users based on their preferences. Such functionality includes generating/retrieving audio profile vectors quantifying a plurality of attributes associated with the audio pieces in the audio database, generating/updating user preference vectors, and selecting audio pieces from the audio database based on the user profile vector.
In an exemplary usage of the fingerprinting system 10, a user rates a song that does not have descriptive information associated with it. Instead of transmitting the entire song that the user wants to rate, a fingerprint of the song is transmitted along with the rating information. In this regard, an end terminal used by the user accesses the server 600 and downloads an instance of the fingerprint extraction engine 18 into its memory (not shown). The downloaded fingerprint extraction engine 18 is invoked to extract the fingerprint of the audio piece that is being rated. The extracted fingerprint is transmitted to the server 600 over the internet 610.
Upon receipt of the extracted audio fingerprint, the server 600 invokes the fingerprint analysis engine 20 to determine whether the received fingerprint is registered in the fingerprint database 26. If a match is made, the server retrieves the audio profile vector associated with the fingerprint and uses it to update or generate a user profile vector for the user as is described in further detail in U.S. patent application Ser. No. 10/278,636. Specifically, for a particular piece of music, the audio profile vector quantifies particular attributes found in the music. Such attributes include, but are not limited to, tempo of the music, repeating sections found in the music, energy saturation, snare and kick drum sounds, rhythm, bass pattern, music harmony, and the like. According to one embodiment of the invention, a music profiler analyzes a musical piece and quantifies each attribute in the music vector based on the analysis of such attribute. The user profile vector is then used to recommend other songs to the user.
If a match cannot be made, the audio piece is analyzed, preferably by the end terminal, to generate the audio profile vector as is disclosed in further detail in U.S. patent application Ser. No. 10/278,636.
According to one embodiment of the invention, the end terminal may also download an instance of the indexing engine 22 for determining the index of the subset of fingerprints to which the audio piece that is being rated belongs. The indexing information is then also transmitted to the server 600 along with the fingerprint information to expedite the search of the fingerprint database 26.
Although this invention has been described in certain specific embodiments, those skilled in the art will have no difficulty devising variations to the described embodiment which in no way depart from the scope and spirit of the present invention. Moreover, to those skilled in the various arts, the invention itself herein will suggest solutions to other tasks and adaptations for other applications.
The audio fingerprinting system 10 may have applications above and beyond the recognition of audio pieces for generating audio profile vectors. For example, the system 10 may be used to find associated descriptive data (metadata) for unknown pieces of music. The system 10 may also be used to identify and log transmitted audio program material on broadcasting stations for verification of scheduled transmission of advertisement spots, for securing a composer's royalties for broadcast material, or for statistical analysis of program material.
It is the applicants' intention to cover by claims all such uses of the invention and those changes and modifications that could be made to the embodiments of the invention herein chosen for the purpose of disclosure without departing from the spirit and scope of the invention. Thus, the present embodiments of the invention should be considered in all respects as illustrative and not restrictive, the scope of the invention to be indicated by the appended claims and their equivalents rather than by the foregoing description.
Claims
1. An audio recognition method comprising:
- receiving an audio fingerprint of a musical piece from a client device;
- comparing the received audio fingerprint against a plurality of stored audio fingerprints for a match;
- determining if the received audio fingerprint corresponds to a particular one of the stored audio fingerprints;
- if the received audio fingerprint corresponds to the particular one of the stored audio fingerprints: retrieving an audio profile vector stored in association with the particular one of the stored audio fingerprints, the audio profile vector including at least N numerical values quantifying N acoustic attributes of the musical piece, wherein N>0, and wherein at least one of the acoustic attributes is tempo, and the associated numerical value quantifies the tempo of the musical piece based on an automatic processing of audio signals of the musical piece by a music profiling engine; and transmitting information stored in association with the retrieved audio profile vector to the client device for doing at least one of generating a music playlist, making music related recommendations, and making other music-related selections; and
- if the received audio fingerprint does not correspond to the particular one of the stored audio fingerprints, prompting the client device for generating the audio profile vector.
2. The method of claim 1, wherein the audio profile vector is generated based on an automatic processing of audio signals of the audio piece.
3. The method of claim 1, wherein one of the plurality of acoustic attributes included in the audio profile vector is associated with a particular audio class, and the numerical value indicates a distance of the audio piece to the audio class.
4. The method of claim 3, wherein the audio class is identified based on an audio class fingerprint, the audio class fingerprint being an average of audio fingerprints of audio pieces associated with the audio class.
5. The method of claim 4, wherein the numerical value indicating the distance of the audio piece to the audio class is determined based on a distance calculation of the received audio fingerprint and the audio class fingerprint.
6. The method of claim 1 further comprising:
- identifying an index of a subset of the plurality of stored audio fingerprints, the index identifying a plurality of musical notes determined to be most prominent for the audio fingerprints in the subset; and
- searching the identified subset for the match.
7. The method of claim 1, wherein if none of the stored audio fingerprints correspond to the received audio fingerprint, invoking the client device to generate the audio profile vector of the audio piece.
8. The method of claim 1, wherein the received and stored audio fingerprints are each represented as a matrix of vectors.
9. The method of claim 8, wherein the received audio fingerprint corresponds to the particular one of the stored audio fingerprints if a distance computation between the matrix representing the received audio fingerprint and the matrix representing the particular one of the stored audio fingerprints results in a single scalar distance value that satisfies a threshold distance.
10. The method of claim 1, wherein the information transmitted to the client device is the retrieved audio profile vector.
11. The method of claim 1 further comprising:
- recommending a music item based on the audio profile vector.
12. The method of claim 1 further comprising:
- receiving from the client device a user rating with the received audio fingerprint;
- modifying user preference information based on the user rating and the retrieved audio profile vector; and
- recommending a music item based on the user preference information.
13. The method of claim 1, wherein none of the N numerical values quantifying the N acoustic attributes of the musical piece is determined based on human analysis of the musical piece.
14. An audio recognition method comprising:
- receiving an audio fingerprint of an audio piece;
- comparing the received audio fingerprint against a plurality of stored audio fingerprints for a match;
- identifying the audio piece responsive to a match of the audio fingerprint; and
- retrieving information stored in association with the identified audio piece, wherein the audio fingerprint is a representation of matrix VT generated from a singular value decomposition (SVD) of an N×M matrix A, the matrix A being built based on frequency measurements of audio signals associated with the audio piece, wherein SVD(A)=USVT, where U is an N×M orthogonal matrix, S is an M×M diagonal matrix, and VT is a transpose of an M×M orthogonal matrix.
15. The method of claim 14, wherein rows of the matrix A represent time, and columns of the matrix A represent the frequency measurements.
16. An audio recognition system comprising:
- a first data store storing a plurality of audio fingerprints for a plurality of audio pieces;
- one or more processors;
- one or more memory devices operably coupled to the one or more processors storing program instructions therein, each of the one or more processors being operable to execute one or more of the program instructions, the program instructions including: receiving an audio fingerprint of a particular musical piece from a client device; comparing the received audio fingerprint against the plurality of stored audio fingerprints for a match; determining if the received audio fingerprint corresponds to a particular one of the stored audio fingerprints; retrieving an audio profile vector stored in association with the particular one of the stored audio fingerprints if the received audio fingerprint corresponds to the particular one of the stored audio fingerprints, the audio profile vector including at least N numerical values quantifying N acoustic attributes of the musical piece, wherein N>0, and wherein at least one of the acoustic attributes is tempo, and the associated numerical value quantifies the tempo of the musical piece based on an automatic processing of audio signals of the audio piece by a music profiling engine; transmitting information stored in association with the retrieved audio profile vector to the client device for doing at least one of generating a music playlist, making music related recommendations, and making other music-related selections; and prompting the client device for generating the audio profile vector if the received audio fingerprint does not correspond to the particular one of the stored audio fingerprints.
17. The system of claim 16 further comprising:
- a second data store storing the audio profile vector in association with the particular one of the stored audio fingerprints, the audio profile vector being generated based on an automatic processing of audio signals of the audio piece.
18. The system of claim 17, wherein one of the plurality of acoustic attributes included in the audio profile vector is associated with a particular audio class, and the numerical value indicates a distance of the audio piece to the audio class.
19. The system of claim 18, wherein the audio class is identified based on an audio class fingerprint, the audio class fingerprint being an average of audio fingerprints of audio pieces associated with the audio class.
20. The system of claim 16, wherein the program instructions further include:
- identifying an index of a subset of the plurality of stored audio fingerprints based on the audio fingerprint, the index identifying a plurality of musical notes determined to be most prominent for the audio fingerprints in the subset; and
- searching the identified subset for the match.
21. An audio recognition system comprising:
- a first data store storing a plurality of audio fingerprints for a plurality of audio pieces;
- one or more processors;
- one or more memory devices operably coupled to the one or more processors storing program instructions therein, each of the one or more processors being operable to execute one or more of the program instructions, the program instructions including:
- receiving an audio fingerprint of a particular audio piece;
- comparing the received audio fingerprint against the plurality of stored audio fingerprints for a match;
- identifying the audio piece responsive to a match of the audio fingerprint; and
- retrieving information stored in association with the identified audio piece, wherein the audio fingerprint is a representation of matrix VT generated from a singular value decomposition (SVD) of an N×M matrix A, the matrix A being built based on frequency measurements of audio signals associated with the audio piece, wherein SVD(A)=USVT, where U is an N×M orthogonal matrix, S is an M×M diagonal matrix, and VT is a transpose of an M×M orthogonal matrix.
22. The system of claim 21, wherein rows of the matrix A represent time, and columns of the matrix A represent the frequency measurements.
4807169 | February 21, 1989 | Overbeck |
4996642 | February 26, 1991 | Hey |
5124911 | June 23, 1992 | Sack |
5210611 | May 11, 1993 | Yee et al. |
5233520 | August 3, 1993 | Kretsch et al. |
5412564 | May 2, 1995 | Ecer |
5583763 | December 10, 1996 | Atcheson et al. |
5612729 | March 18, 1997 | Ellis et al. |
5616876 | April 1, 1997 | Cluts |
5644727 | July 1, 1997 | Atkins |
5703308 | December 30, 1997 | Tashiro et al. |
5704017 | December 30, 1997 | Heckerman et al. |
5724567 | March 3, 1998 | Rose et al. |
5734444 | March 31, 1998 | Yoshinobu |
5749081 | May 5, 1998 | Whiteis |
5790426 | August 4, 1998 | Robinson |
5812937 | September 22, 1998 | Takahisa et al. |
5832446 | November 3, 1998 | Neuhaus |
5859414 | January 12, 1999 | Grimes et al. |
5872850 | February 16, 1999 | Klein et al. |
5884282 | March 16, 1999 | Robinson |
5899502 | May 4, 1999 | Del Giorno |
5918223 | June 29, 1999 | Blum et al. |
5954640 | September 21, 1999 | Szabo |
5960440 | September 28, 1999 | Brenner et al. |
5963948 | October 5, 1999 | Shilcrat |
5969283 | October 19, 1999 | Looney et al. |
5978766 | November 2, 1999 | Luciw |
5979757 | November 9, 1999 | Tracy et al. |
5999975 | December 7, 1999 | Kittaka et al. |
6009392 | December 28, 1999 | Kanevsky et al. |
6012051 | January 4, 2000 | Sammon, Jr. et al. |
6018738 | January 25, 2000 | Breese et al. |
6020883 | February 1, 2000 | Herz et al. |
6041311 | March 21, 2000 | Chislenko et al. |
6046021 | April 4, 2000 | Bochner |
6061680 | May 9, 2000 | Scherf et al. |
6088455 | July 11, 2000 | Logan et al. |
6112186 | August 29, 2000 | Bergh et al. |
6148094 | November 14, 2000 | Kinsella |
6192340 | February 20, 2001 | Abecassis |
6232539 | May 15, 2001 | Looney et al. |
6236974 | May 22, 2001 | Kolawa et al. |
6236978 | May 22, 2001 | Tuzhilin |
6236990 | May 22, 2001 | Geller et al. |
6288319 | September 11, 2001 | Catona |
6358546 | March 19, 2002 | Bebiak et al. |
6370513 | April 9, 2002 | Kolawa et al. |
6442517 | August 27, 2002 | Miller et al. |
6453252 | September 17, 2002 | Laroche |
6512837 | January 28, 2003 | Ahmed |
6539395 | March 25, 2003 | Gjerdingen et al. |
6657117 | December 2, 2003 | Weare et al. |
6671550 | December 30, 2003 | Iaizzo et al. |
6697779 | February 24, 2004 | Bellegarda et al. |
6721489 | April 13, 2004 | Benyamin et al. |
6725102 | April 20, 2004 | Sun |
6771797 | August 3, 2004 | Ahmed |
6823225 | November 23, 2004 | Sass |
6941275 | September 6, 2005 | Swierczek |
6941324 | September 6, 2005 | Plastina et al. |
6953886 | October 11, 2005 | Looney et al. |
6961430 | November 1, 2005 | Gaske et al. |
6961550 | November 1, 2005 | Ricard et al. |
6963975 | November 8, 2005 | Weare |
6967275 | November 22, 2005 | Ozick |
6990453 | January 24, 2006 | Wang et al. |
7003515 | February 21, 2006 | Glaser et al. |
7010485 | March 7, 2006 | Baumgartner et al. |
7022905 | April 4, 2006 | Hinman et al. |
7031980 | April 18, 2006 | Logan et al. |
7075000 | July 11, 2006 | Gang et al. |
7081579 | July 25, 2006 | Alcalde et al. |
7171174 | January 30, 2007 | Ellis et al. |
7200529 | April 3, 2007 | Cifra et al. |
7205471 | April 17, 2007 | Looney et al. |
7326848 | February 5, 2008 | Weare et al. |
7373209 | May 13, 2008 | Tagawa et al. |
20010053944 | December 20, 2001 | Marks et al. |
20020037083 | March 28, 2002 | Weare et al. |
20020038597 | April 4, 2002 | Huopaniemi et al. |
20020088336 | July 11, 2002 | Stahl |
20030046421 | March 6, 2003 | Horvitz et al. |
20030055516 | March 20, 2003 | Gang et al. |
20030072463 | April 17, 2003 | Chen |
20030100967 | May 29, 2003 | Ogasawara |
20030106413 | June 12, 2003 | Samadani et al. |
20030183064 | October 2, 2003 | Eugene et al. |
20040002310 | January 1, 2004 | Herley et al. |
20040049540 | March 11, 2004 | Wood |
20040107268 | June 3, 2004 | Iriya et al. |
20050038819 | February 17, 2005 | Hicken et al. |
20050065976 | March 24, 2005 | Holm et al. |
20060004640 | January 5, 2006 | Swierczek |
20060020614 | January 26, 2006 | Kolawa et al. |
20060026048 | February 2, 2006 | Kolawa et al. |
20060190450 | August 24, 2006 | Holm et al. |
20060242665 | October 26, 2006 | Knee et al. |
0 751 471 | January 1997 | EP |
8 063 455 | March 1996 | JP |
8 064 355 | March 1996 | JP |
2002132278 | May 2002 | JP |
- Kim et al., Boosted Binary Audio Fingerprint Based on Spectral Subband Moments, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), Apr. 15-20, 2007, vol. 1, pp. I-241 to I-244.
- Co-pending U.S. Appl. No. 09/792,343; filed Feb. 23, 2001, entitled System and Method for Creating and Submitting Electronic Shopping Lists, 101 pages.
- Co-pending U.S. Appl. No. 09/885,308; filed Jun. 20, 2002, entitled System and Method for Automated Recipe Selection and Shopping List Creation, 9 pages.
- Co-pending U.S. Appl. No. 10/278,636; filed Oct. 23, 2002, entitled: Automated Music Profiling and Recommendation, 70 pages.
- Internet Papers: http://www.iVillage.com; IVillage.com The Women's Network—busy women sharing solutions and advice; (downloaded May 22, 2001, 11:36 AM), 6 pp.
- Internet Papers: http://www.my-meals.com; Meals.com—Recipes, Cooking and Meal Planning; meals.com meal planning made easy; (downloaded May 21, 2001, 5:42 PM); 9 pp.
- Internet Papers: http://www.foodfit.com; FoodFit.com: Food, Nutritional Recipes, . . . efs, Healthy Cooking and Fitness Advice; FoodFit.com; (downloaded May 22, 2001, 9:40 AM); 7 pp.
- Internet Papers: https://www.mealsforyou.com; Meals For You; (downloaded May 21, 2001, 5:37 PM); 4 pp.
- Internet Papers: http://www.ourhouse.com; OurHouse.com: Everything Your House Desires; Tavolo by OurHouse.com; (downloaded May 21, 2001, 6:03 PM); 7 pp.
- Internet Papers: http://www.recipezaar.com; Recipezaar—a recipe food cooking & nutritional info site—Recipezaar; (downloaded May 22, 2001, 10:06 AM); 7 pp.
- Internet Papers: http://www.ucook.coom; The Ultimate Cookbook; (downloaded May 22, 2001, 10:15 AM); 6 pp.
- Unklesbay et al.; An automated system for planning menus for the elderly in title VII nutrition programs; Food Technology 1978, 32 (8) 80-83, 1 page.
- Information Technology-Multimedia Content Description Interface-Part 4: Audio, dated Jun. 9, 2001, 119 pgs.
- Schonberg et al., Fingerprinting and Forensic Analysis of Multimedia, Proceedings of the 12th Annual ACM International Conference on Multimedia, 2004, pp. 788-795.
- Reddo S., On a Spatial Smoothing Technique for Multiple Source Location, 1987, p. 709.
- A Steady Stream of New Applications . . . Institutional Distribution; Nov. 1983, 9 pages.
- Co-Pending U.S. Appl. No. 09/556,051, filed Apr. 21, 2000, entitled Method and Apparatus for Automated Selection Organization and Recommendation of Items Based on User Preference Topography, 84 pgs.
- Co-pending U.S. Appl. No. 09/885,307; filed Jun. 20, 2001, entitled Acoustical Preference Tuner, 37 pages.
- Bill Communications Inc. A Steady Stream of New Applications, Institutional Distribution, v. 19, Nov. 1983, pp. 80-94.
- International Search Report and Written Opinion, dated Jul. 12, 2006, for PCT/US2004/31138, 13 pages.
- Allamanche, Eric et al., “Content-based Identification of Audio Material Using MPEG-7 Low Level Description”; 2001; 8 pp.
- AudioID—Automatic Identification/Fingerprinting of Audio; http://www.emt.iis.fhg.de/produkte/audioid, Fraunhofer Institut Integrierte Schaltungen; 5 pp.
- Cheng, et al., “Beat Detection Algorithm”; http://www-dsp.rice.edu/courses/el...Projects01/beat—sync /beatalgo.html; 2001; 6 pp.
- Doan, A., MongoMusic Fans Include Microsoft; Forbes.com; wysiwyg://53/http://www.forbes.com/2000/09/09/feat2.html, Sep. 9, 2000; 2 pp.
- Music Manager Software—Library View; PhatNoise Music Manager, http://www.phatnoise.com/products/software/library.php, Site Copyright 1999-2004; Printed from Internet Jun. 9, 2004; 4pp.
Type: Grant
Filed: Jan 31, 2006
Date of Patent: Feb 3, 2009
Patent Publication Number: 20060190450
Assignee: MusicIP Corporation (Monrovia, CA)
Inventors: Frode Holm (Santa Barbara, CA), Wendell T. Hicken (La Verne, CA)
Primary Examiner: Baoquoc N To
Attorney: Christie, Parker & Hale, LLP.
Application Number: 11/345,548
International Classification: G06F 17/30 (20060101); G06F 17/00 (20060101);