Fundamental Tone Detection Or Extraction Patents (Class 84/616)
  • Patent number: 7102072
    Abstract: The tone pitch of a given tone waveform is detected by forming an envelope waveform of the given tone waveform, holding the level of the envelope waveform at each of the zero cross times of the given tone waveform, releasing such holding and starting new holding when the level of the given tone waveform exceeds the held level of the envelope waveform, and determining the pitch of the given tone waveform based on the period between adjacent zero cross times. A correction target note pitch is provided, to which the detected tone pitch of the given tone waveform is corrected. The detected tone pitch, the correction target note pitch and the amount of correction are displayed so that the user can visually understand the pitch correction.
    Type: Grant
    Filed: April 22, 2004
    Date of Patent: September 5, 2006
    Assignee: Yamaha Corporation
    Inventor: Toru Kitayama
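    A minimal Python sketch of the idea above, with simplifications: the envelope-hold gating of zero crossings is omitted (a median over the crossing periods stands in for it), and the correction target is simply the nearest chromatic note at A4 = 440 Hz; function names and the test signal are illustrative.
```python
import numpy as np

def zero_cross_pitch(x, sr):
    """Rough pitch estimate from the periods between rising zero crossings.

    Simplification: the patent additionally gates zero crossings with a held
    envelope level so that only crossings belonging to the fundamental period
    count; here the median period is used to reject spurious crossings."""
    rising = np.flatnonzero((x[:-1] < 0) & (x[1:] >= 0))   # upward zero crossings
    if len(rising) < 2:
        return None
    periods = np.diff(rising) / sr                          # seconds between crossings
    return 1.0 / float(np.median(periods))

def correction_to_nearest_note(f0, a4=440.0):
    """Return (nearest MIDI note, correction in cents) for display to the user."""
    midi = 69 + 12 * np.log2(f0 / a4)
    target = int(round(midi))
    return target, (target - midi) * 100.0

sr = 44100
t = np.arange(sr) / sr
f0 = zero_cross_pitch(np.sin(2 * np.pi * 436.0 * t), sr)   # a slightly flat A4
print(f0, correction_to_nearest_note(f0))
```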
  • Patent number: 7081581
    Abstract: In a method for characterizing a signal, which represents an audio content, a measure for a tonality of the signal is determined, whereupon a statement is made about the audio content of the signal based on the measure for the tonality of the signal. The measure for the tonality of the signal for the content analysis is robust against a signal distortion, such as by MP3 encoding, and has a high correlation to the content of the examined signal.
    Type: Grant
    Filed: February 26, 2002
    Date of Patent: July 25, 2006
    Assignee: m2any GmbH
    Inventors: Eric Allamanche, Juergen Herre, Oliver Hellmuth, Bernhard Froeba
  • Patent number: 7064262
    Abstract: In a method for transferring a music signal into a note-based description, a frequency-time representation of the music signal is first generated, the frequency-time representation comprising coordinate tuples, a coordinate tuple including a frequency value and a time value, the time value indicating the time of occurrence of the assigned frequency in the music signal. Thereupon, a fit function will be calculated as a function of the time, the course of which is determined by the coordinate tuples of the frequency-time representation. For time-segmenting the frequency-time representation, at least two adjacent extreme values of the fit function will be determined. On the basis of the determined extreme values, a segmenting will be carried out, a segment being limited by two adjacent extreme values of the fit function, the time length of the segments indicating a time length of a note for the segment. For pitch determination, a pitch for the segment using coordinate tuples in the segment will be determined.
    Type: Grant
    Filed: April 4, 2002
    Date of Patent: June 20, 2006
    Assignee: Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
    Inventors: Frank Klefenz, Karlheinz Brandenburg, Matthias Kaufmann
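    As a rough illustration of the segmentation step, the sketch below smooths the frequency track of the (time, frequency) tuples, treats that smoothed curve as the fit function, splits the track at adjacent extrema, and takes the median frequency in each segment as its pitch. The moving-average fit and its window width are assumptions; the abstract does not prescribe a particular fit function.
```python
import numpy as np

def segment_by_fit_extrema(times, freqs, smooth=9):
    """Segment a frequency-time representation at adjacent extrema of a fit
    function; the note length is the segment length and the note pitch is the
    median of the frequency tuples inside the segment."""
    order = np.argsort(times)
    t, f = np.asarray(times, float)[order], np.asarray(freqs, float)[order]
    fit = np.convolve(f, np.ones(smooth) / smooth, mode="same")   # the "fit function" of time
    d = np.diff(fit)
    extrema = np.flatnonzero(d[:-1] * d[1:] < 0) + 1              # slope sign changes
    bounds = np.concatenate(([0], extrema, [len(t) - 1])).astype(int)
    notes = []
    for a, b in zip(bounds[:-1], bounds[1:]):
        if b > a:
            notes.append({"start": float(t[a]), "end": float(t[b]),
                          "pitch_hz": float(np.median(f[a:b + 1]))})
    return notes
```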
  • Patent number: 7065416
    Abstract: In connection with a classification system for classifying media entities that merges perceptual classification techniques and digital signal processing classification techniques for improved classification of media entities, a system and methods are provided for automatically classifying and characterizing melodic movement properties of media entities. Such a system and methods may be useful for the indexing of a database or other storage collection of media entities, such as media entities that are audio files, or have portions that are audio files. The methods also help to determine media entities that have similar, or dissimilar as a request may indicate, melodic movement by utilizing classification chain techniques that test distances between media entities in terms of their properties. For example, a neighborhood of songs may be determined within which each song has similar melodic movement properties.
    Type: Grant
    Filed: August 29, 2001
    Date of Patent: June 20, 2006
    Assignee: Microsoft Corporation
    Inventors: Christopher B. Weare, Jeffrey S. Hoekman
  • Patent number: 7027983
    Abstract: A system and method for creating a ring tone for an electronic device takes as input a phrase sung in a human voice and transforms it into a control signal controlling, for example, a ringer on a cellular telephone. Time-varying features of the input signal are analyzed to segment the signal into a set of discrete notes and to assign each note a chromatic pitch value. The set of note start and stop times and pitches is then translated into a format suitable for controlling the device.
    Type: Grant
    Filed: December 31, 2001
    Date of Patent: April 11, 2006
    Assignee: Nellymoser, Inc.
    Inventors: John D. Puterbaugh, Eric J. Puterbaugh, Peter Velikonja, Robert A. Baxter
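    A sketch of the note-segmentation step under simple assumptions: a per-frame pitch track (as produced by some upstream pitch detector) is quantized to the nearest chromatic pitch, a new note starts whenever that value changes, and very short fragments are dropped. Hop size, minimum note length, and the MIDI-style output are illustrative choices.
```python
import numpy as np

def frames_to_notes(f0_track, hop_s, min_len_s=0.08):
    """Turn a per-frame pitch track (Hz, 0 = unvoiced) of a sung phrase into
    discrete notes as (start_s, stop_s, midi_pitch) triples."""
    f0 = np.asarray(f0_track, float)
    midi = np.where(f0 > 0, np.rint(69 + 12 * np.log2(np.maximum(f0, 1e-9) / 440.0)), 0)
    notes, start = [], 0
    for i in range(1, len(midi) + 1):
        # a note ends when the quantized chromatic pitch changes (or the track ends)
        if i == len(midi) or midi[i] != midi[start]:
            dur = (i - start) * hop_s
            if midi[start] > 0 and dur >= min_len_s:
                notes.append((start * hop_s, i * hop_s, int(midi[start])))
            start = i
    return notes
```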
  • Patent number: 7024260
    Abstract: Maximal and minimal values represented by samples of a digital audio signal are detected. A number of samples from a sample representing a minimal value to a maximal-value-corresponding sample is detected. A number of samples from a sample representing a maximal value to a minimal-value-corresponding sample is detected. Calculation is given of a first difference between the maximal-value-corresponding sample and the immediately-preceding sample. Calculation is given of a second difference between the minimal-value-corresponding sample and the immediately-preceding sample. First and second coefficients are calculated from the detected sample numbers. The first coefficient and the first difference are multiplied to generate a first multiplication result. The second coefficient and the second difference are multiplied to generate a second multiplication result. The maximal value represented by the maximal-value-corresponding sample is incremented by the first multiplication result.
    Type: Grant
    Filed: November 8, 2001
    Date of Patent: April 4, 2006
    Assignee: Victor Company of Japan, Ltd.
    Inventor: Toshiharu Kuwaoka
  • Patent number: 7015388
    Abstract: To store main information with associated additional information incorporated therein, data constituting the additional information is divided into a plurality of small-size data pieces of, e.g., one bit. Then, the respective values of particular ones of predetermined data units (e.g., bytes) constituting the main information are subjected to arithmetic operations in accordance with a predetermined algorithm containing the value of each of the data pieces as a parameter. In this way, the respective values of the particular data units in the main information are modulated in accordance with the values of the individual data pieces in the additional information; at that time, only some of the data unit values are altered with the others left unaltered. The thus-arithmetically-operated main information is stored into a storage.
    Type: Grant
    Filed: February 20, 2001
    Date of Patent: March 21, 2006
    Assignee: Yamaha Corporation
    Inventor: Hideaki Taruguchi
  • Patent number: 6967275
    Abstract: A song-matching system, which provides real-time, dynamic recognition of a song being sung and providing an audio accompaniment signal in synchronism therewith, includes a song database having a repertoire of songs, each song of the database being stored as a relative pitch template, an audio processing module operative in response to the song being sung to convert the song being sung into a digital signal, an analyzing module operative in response to the digital signal to determine a definition pattern representing a sequence of pitch intervals of the song being sung that have been captured by the audio processing module, a matching module operative to compare the definition pattern of the song being sung with the relative pitch template of each song stored in the song database to recognize one song in the song database as the song being sung, the matching module being further operative to cause the song database to download the unmatched portion of the relative pitch template of the recognized song as a dig
    Type: Grant
    Filed: June 24, 2003
    Date of Patent: November 22, 2005
    Assignee: iRobot Corporation
    Inventor: Daniel Ozick
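    The matching stage can be illustrated with a transposition-invariant comparison: melodies are reduced to pitch-interval sequences and the sung fragment is aligned against each stored template. The edit-distance matcher and the toy song database below are assumptions standing in for the patent's definition-pattern comparison.
```python
def to_intervals(pitches):
    """Relative-pitch representation: differences between successive note pitches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def interval_distance(query, template):
    """Edit distance of the sung interval pattern against the best-matching
    stretch of a stored template (free start and end inside the template)."""
    q, t = len(query), len(template)
    d = [[0] * (t + 1) for _ in range(q + 1)]
    for i in range(1, q + 1):
        d[i][0] = i
    for i in range(1, q + 1):
        for j in range(1, t + 1):
            d[i][j] = min(d[i - 1][j] + 1,
                          d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (query[i - 1] != template[j - 1]))
    return min(d[q])

def recognize(sung_pitches, database):
    """Pick the database song whose relative-pitch template best matches the singing."""
    sung = to_intervals(sung_pitches)
    return min(database, key=lambda name: interval_distance(sung, to_intervals(database[name])))

songs = {"twinkle": [60, 60, 67, 67, 69, 69, 67], "ode_to_joy": [64, 64, 65, 67, 67, 65, 64, 62]}
print(recognize([62, 62, 69, 69, 71, 71, 69], songs))   # transposed "twinkle" still matches
```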
  • Patent number: 6965068
    Abstract: A system and method for analyzing an input signal comprising one or more sinusoidal tones. A processor of the system receives samples of an input signal and operates on the samples to generate a transform array. The processor identifies positive frequency peaks of the transform array, and estimates a set of signal parameters (e.g. tone frequency and complex amplitude) for each of the positive frequency peaks. Each tone is represented in the transform array as a positive frequency image and a corresponding negative frequency image. Using the parameter sets, the processor may estimate the amount of cross-interaction between the images, i.e., may compute the amounts by which each positive frequency peak is affected by the negative frequency images and other positive frequency images. These amounts may be subtracted from each positive frequency peak to generate improved peak values. The processor may use the improved peak values to compute improved estimates for the signal parameters.
    Type: Grant
    Filed: December 27, 2000
    Date of Patent: November 15, 2005
    Assignee: National Instruments Corporation
    Inventor: Alain Moriat
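    A stripped-down illustration of the peak-estimation stage: local maxima are picked in the positive-frequency half of a Hann-windowed FFT and each peak's frequency is refined by parabolic interpolation of the log magnitude. The interpolation and amplitude compensation are standard textbook choices; the patent's correction for cross-interaction with negative-frequency images and other peaks is not reproduced.
```python
import numpy as np

def tone_peaks(x, sr, n_peaks=3):
    """Estimate (frequency, amplitude) of the strongest sinusoidal tones from
    the positive-frequency half of a Hann-windowed FFT."""
    n = len(x)
    win = np.hanning(n)
    mag = np.abs(np.fft.rfft(x * win))
    est = []
    for k in np.argsort(mag)[::-1]:                       # strongest bins first
        if 0 < k < len(mag) - 1 and mag[k] >= mag[k - 1] and mag[k] >= mag[k + 1]:
            a, b, c = np.log(mag[k - 1] + 1e-12), np.log(mag[k] + 1e-12), np.log(mag[k + 1] + 1e-12)
            delta = 0.5 * (a - c) / (a - 2 * b + c)       # parabolic peak offset in bins
            est.append(((k + delta) * sr / n,             # refined frequency
                        2 * mag[k] / win.sum()))          # rough window-compensated amplitude
            if len(est) == n_peaks:
                break
    return est

sr = 48000
t = np.arange(4096) / sr
x = 1.0 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 2500 * t)
print(tone_peaks(x, sr, n_peaks=2))
```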
  • Patent number: 6945784
    Abstract: Generating a pitched musical part from an electronic music file comprised of instrumental parts includes generating a control stream that indicates which of the instrumental parts has a highest value for a period of time, selecting one of the instrumental parts for the period of time based on the control stream, and outputting the selected instrumental part for the period of time to produce the musical part. Generating a non-pitched musical part from an electronic music file includes identifying patterns in the electronic music file and selectively combining the patterns to produce the musical part.
    Type: Grant
    Filed: March 21, 2001
    Date of Patent: September 20, 2005
    Assignee: Namco Holding Corporation
    Inventors: John Paquette, Robert Ferry
  • Patent number: 6930236
    Abstract: An apparatus for analyzing music based on sound information of instruments is provided. The apparatus uses sound information of instruments, or the sound information and score information, in order to analyze digital sounds. The sound information of instruments performed to generate digital sounds is previously stored by pitches and strengths so that monophonic notes and polyphonic notes performed by the instruments can be easily analyzed. In addition, by using the sound information of instruments and score information together, input digital sounds can be accurately analyzed and can be detected in the form of quantitative data.
    Type: Grant
    Filed: December 10, 2002
    Date of Patent: August 16, 2005
    Assignee: Amusetec Co., Ltd.
    Inventor: Doill Jung
  • Patent number: 6911591
    Abstract: Rendition style determining apparatus detects at least one of duration of a first note to be performed at a given time point and time interval between the first note and a second note to be performed following the first note, in order to automatically impart music piece data with an appropriate rendition style. Rendition style to be imparted to the music piece data in relation to the given time point is determined on the basis of the detected duration or time interval. Also, the apparatus can readily control the rendition style to be imparted to the music piece data by appropriately setting/changing rendition style determination conditions, such as reference time lengths. Music piece data is supplied to a determination device, thereby causing the determination device to perform automatic rendition style determination based on the supplied music piece data and then display the rendition style imparted to the music piece data.
    Type: Grant
    Filed: March 14, 2003
    Date of Patent: June 28, 2005
    Assignee: Yamaha Corporation
    Inventors: Eiji Akazawa, Yasuyuki Umeyama, Junji Kuroda
  • Patent number: 6881889
    Abstract: Systems and methods for extracting a music snippet from a music stream are described. In one aspect, one or more music sentences are extracted from the music stream. The one or more sentences are extracted as a function of peaks and valleys of acoustic energy across sequential music stream portions. The music snippet is selected based on the one or more music sentences.
    Type: Grant
    Filed: June 3, 2004
    Date of Patent: April 19, 2005
    Assignee: Microsoft Corporation
    Inventors: Lie Lu, Hong-Jiang Zhang, Po Yuan
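    The sentence-extraction idea can be sketched as valley-picking on a smoothed frame-energy curve, with each valley treated as a sentence boundary; a snippet would then be chosen from the resulting sentences. Frame length, smoothing width, and the plain local-minimum rule are assumptions for illustration.
```python
import numpy as np

def sentence_boundaries(x, sr, frame_s=0.5, smooth=5):
    """Boundary times (s) where the smoothed frame-energy curve of a music
    stream (1-D numpy array x) has a local valley."""
    hop = int(frame_s * sr)
    n = len(x) // hop
    energy = np.array([np.sqrt(np.mean(x[i * hop:(i + 1) * hop] ** 2)) for i in range(n)])
    energy = np.convolve(energy, np.ones(smooth) / smooth, mode="same")
    valleys = [i for i in range(1, n - 1)
               if energy[i] < energy[i - 1] and energy[i] <= energy[i + 1]]
    return [v * frame_s for v in valleys]
```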
  • Patent number: 6864413
    Abstract: An ensemble system reproduces a performance on an automatic player piano expressed by a set of MIDI music data codes in ensemble with another performance recorded in a compact disc in the form of audio data codes; the ensemble system firstly determines the pitch of the fundamental tone produced through vibrations of a string, then searching the audio data codes for a corresponding tone, calculating a ratio between the pitch of the fundamental tone and the pitch of the corresponding tone, and determining a data read-out speed for the audio data codes; while the MIDI data codes are being supplied to the automatic player piano, the audio data codes are transferred to a speaker system at a speed equal to the product between the standard speed and the ratio so that the piano tones are well harmonized with the electronic tones.
    Type: Grant
    Filed: January 14, 2003
    Date of Patent: March 8, 2005
    Assignee: Yamaha Corporation
    Inventor: Yoshihiro Shiiya
  • Patent number: 6856923
    Abstract: A method for analyzing digital-sounds using sound-information of instruments and/or score-information is provided. Particularly, sound-information of instruments which were used or which are being used to generate input digital-sounds is used. Alternatively, in addition to the sound-information, score-information which was used or which is being used to generate the input digital-sounds is also used. According to the method, sound-information including pitches and strengths of notes performed on instruments used to generate the input digital-sounds is stored in advance so that monophonic or polyphonic pitches performed on the instruments can be easily analyzed. Since the sound-information of instruments and the score-information are used together, the input digital-sounds can be accurately analyzed and output as quantitative data.
    Type: Grant
    Filed: December 3, 2001
    Date of Patent: February 15, 2005
    Assignee: AMUSETEC Co., Ltd.
    Inventor: Doill Jung
  • Patent number: 6815599
    Abstract: A musical instrument has play pistons incorporated in a trumpet-shaped housing at the upper central part in the front-and-back direction. An air vibration sensor for inputting voices is provided at the end surface on the front side of housing. When a player inputs voices into air vibration sensor simultaneously with the player's operation of play pistons, a musical tone is generated having a tone pitch that is determined by a combination of the operation of play pistons and the frequency of the voice sound signal. Rings formed of transparent or translucent resin and emitting light by energization of light-emitting elements are incorporated at the foot of play pistons. Through energization of light-emitting elements in accordance with performance data, light emission of rings guides playing. Thus, the player can easily play and practice the musical instrument, such as a trumpet, that determines the tone pitch of a musical tone in accordance with a combination of operation of a plurality of play operators.
    Type: Grant
    Filed: May 7, 2003
    Date of Patent: November 9, 2004
    Assignee: Yamaha Corporation
    Inventor: Yasuhiko Asahi
  • Publication number: 20040216585
    Abstract: Systems and methods for extracting a music snippet from a music stream are described. In one aspect, one or more music sentences are extracted from the music stream. The one or more sentences are extracted as a function of peaks and valleys of acoustic energy across sequential music stream portions. The music snippet is selected based on the one or more music sentences.
    Type: Application
    Filed: June 3, 2004
    Publication date: November 4, 2004
    Applicant: Microsoft Corporation
    Inventors: Lie Lu, Hong-Jiang Zhang, Po Yuan
  • Publication number: 20040200337
    Abstract: A highlight portion is detected to a high accuracy from acoustic signals of, say, an event, and an index is added to the highlight portion. In an acoustic signal processing apparatus 10, a candidate domain extraction unit 13 retains, as a candidate domain, a domain in which the short-term amplitudes calculated by an amplitude calculating unit 11 remain not less than an amplitude threshold value for a length not less than a time threshold value. A feature extraction unit 14 extracts featuring quantities relevant to the sound quality from the acoustic signals, to quantify the sound quality peculiar to a climax. A candidate domain evaluating unit 15 calculates a score value, indicating the degree of the climax, using featuring quantities relevant to the amplitude or the sound quality for each candidate domain, in order to detect a true highlight domain based on the so calculated score value.
    Type: Application
    Filed: December 8, 2003
    Publication date: October 14, 2004
    Inventors: Mototsugu Abe, Akihiro Mukai, Masayuki Nishiguchi
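    The candidate-domain rule reads as a run-length test on a short-term amplitude curve; the sketch below keeps any run whose frames stay at or above an amplitude threshold for at least a time threshold. Frame size and threshold values are illustrative, and the later sound-quality scoring stage is only noted in a comment.
```python
import numpy as np

def candidate_domains(x, sr, frame_s=0.05, amp_thresh=0.2, time_thresh_s=2.0):
    """(start_s, end_s) of runs whose short-term amplitude stays at or above
    amp_thresh for at least time_thresh_s."""
    hop = int(frame_s * sr)
    n = len(x) // hop
    amp = np.array([np.abs(x[i * hop:(i + 1) * hop]).mean() for i in range(n)])
    loud = np.append(amp >= amp_thresh, False)            # sentinel closes a trailing run
    domains, start = [], None
    for i, flag in enumerate(loud):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * frame_s >= time_thresh_s:
                domains.append((start * frame_s, i * frame_s))
            start = None
    return domains   # each candidate would next be scored by sound-quality features
```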
  • Patent number: 6784354
    Abstract: Systems and methods for extracting a music snippet from a music stream are described. In one aspect, the music stream is divided into multiple frames of fixed length. The most-salient frame of the multiple frames is then identified. One or more music sentences are then extracted from the music stream as a function of peaks and valleys of acoustic energy across sequential music stream portions. The music snippet is the sentence that includes the most-salient frame.
    Type: Grant
    Filed: March 13, 2003
    Date of Patent: August 31, 2004
    Assignee: Microsoft Corporation
    Inventors: Lie Lu, Hong-Jiang Zhang, Po Yuan
  • Publication number: 20040144239
    Abstract: A vibration signal (e.g. a human voice) is generated at a desired pitch corresponding to a tone pitch of a musical tone desired to be generated, and is input via a microphone. The pitch of the input vibration signal and an amplitude (volume) level thereof are detected. When an amplitude level equal to or greater than a predetermined threshold level has been detected but the pitch has not been detected yet, an instruction of generation of a noise tone is issued thereby to generate the noise tone. Thereafter, when a certain pitch is detected, a musical tone is generated at a pitch determined according to the detected pitch. In this way, a noise tone is generated during a delay in pitch detection, and a delay in response at the start of sounding is absorbed.
    Type: Application
    Filed: December 23, 2003
    Publication date: July 29, 2004
    Applicant: YAMAHA CORPORATION
    Inventor: Shinya Sakurada
  • Patent number: 6756532
    Abstract: A method generates waveform signals from a plurality of channels to sound a music tone through an electro-acoustic converter in response to sounding instruction information.
    Type: Grant
    Filed: May 25, 2001
    Date of Patent: June 29, 2004
    Assignee: Yamaha Corporation
    Inventors: Masatada Wachi, Masahiro Shimizu, Tsuyoshi Futamase
  • Patent number: 6747201
    Abstract: A method and system for extracting melodic patterns by first recognizing musical “keywords” or themes. The invention searches for all instances of melodic (intervallic) repetition in a piece (patterns). This process generally uncovers a large number of patterns, many of which are either uninteresting or are only superficially prevalent. Filters reduce the number and/or prevalence of such patterns. Patterns are then rated according to characteristics deemed perceptually significant. The top ranked patterns correspond to important thematic or motivic musical content. The system operates robustly across a broad range of styles, and relies on no metadata on its input, allowing it to independently and efficiently catalog multimedia data.
    Type: Grant
    Filed: September 26, 2001
    Date of Patent: June 8, 2004
    Assignee: The Regents of the University of Michigan
    Inventors: William P. Birmingham, Colin J. Meek
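    A toy version of the pattern-discovery stage: the melody is reduced to intervals so transposed repetitions still match, every interval n-gram within a length range is counted, and patterns occurring more than once are ranked by frequency. The length bounds and count-based ranking stand in for the patent's filtering and perceptual-rating steps.
```python
from collections import Counter

def repeated_interval_patterns(pitches, min_len=3, max_len=6):
    """Repeated intervallic (transposition-invariant) patterns in a melody,
    ranked by how often they recur."""
    intervals = tuple(b - a for a, b in zip(pitches, pitches[1:]))
    counts = Counter()
    for n in range(min_len, max_len + 1):
        for i in range(len(intervals) - n + 1):
            counts[intervals[i:i + n]] += 1
    repeated = [(p, c) for p, c in counts.items() if c > 1]
    return sorted(repeated, key=lambda pc: (pc[1], len(pc[0])), reverse=True)

theme = [60, 62, 64, 60, 60, 62, 64, 60, 64, 65, 67]   # "Frere Jacques" opening
print(repeated_interval_patterns(theme)[:3])
```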
  • Patent number: 6740804
    Abstract: A waveform generating method is provided, which is capable of generating expressive musical tones. A plurality of partial waveforms are stored in a partial waveform memory. Property information on respective ones of the partial waveforms stored in the partial waveform memory is stored in a property information memory. The property information is retrieved according to inputted sounding control information in order to read out a partial waveform having property information corresponding to the sounding control information. The readout partial waveform is then processed according to the property information and the sounding control information to generate a waveform corresponding to the sounding control information.
    Type: Grant
    Filed: February 4, 2002
    Date of Patent: May 25, 2004
    Assignee: Yamaha Corporation
    Inventors: Masahiro Shimizu, Yasuhiro Kawano, Hidemichi Kimura
  • Publication number: 20040074378
    Abstract: In a method for characterizing a signal, which represents an audio content, a measure for a tonality of the signal is determined (12), whereupon a statement is made (16) about the audio content of the signal based on the measure for the tonality of the signal. The measure for the tonality of the signal for the content analysis is robust against a signal distortion, such as by MP3 encoding, and has a high correlation to the content of the examined signal.
    Type: Application
    Filed: December 1, 2003
    Publication date: April 22, 2004
    Inventors: Eric Allamanche, Juergen Herre, Oliver Hellmuth, Bernhard Froeba
  • Publication number: 20040060424
    Abstract: In a method for transferring a music signal into a note-based description, a frequency-time representation of the music signal is first generated, with the frequency-time representation comprising coordinate tuples, with a coordinate tuple including a frequency value and a time value, with the time value indicating the time of occurrence of the assigned frequency in the music signal. Thereupon, a fit function will be calculated as a function of the time, the course of which is determined by the coordinate tuples of the frequency-time representation. For time-segmenting the frequency-time representation, at least two adjacent extreme values of the fit function will be determined. On the basis of the determined extreme values, a segmenting will be carried out, with a segment being limited by two adjacent extreme values of the fit function, with the time length of the segments indicating a time length of a note for the segment.
    Type: Application
    Filed: September 26, 2003
    Publication date: April 1, 2004
    Inventors: Frank Klefenz, Karlheinz Brandenburg, Matthias Kaufmann
  • Patent number: 6704671
    Abstract: The present invention provides for a method and system for identifying a sonic event of interest within a received audio signal. A sonic event is characterized by a predetermined rate of change in the perceived audio volume, and is associated with the loudness of the audio. The present invention detects a sonic event such as a percussive hit without requiring that the detector be disabled for a fixed time to avoid false triggering. Because the detector is not disabled during the detection process, sonic events occurring in close proximity are easily recognized and not ignored as in some conventional systems.
    Type: Grant
    Filed: July 22, 1999
    Date of Patent: March 9, 2004
    Assignee: Avid Technology, Inc.
    Inventor: Frederick W. Umminger, III
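    The detection criterion, a predetermined rate of change in perceived volume with no lockout window, can be sketched as a threshold on the frame-to-frame growth of a short-term envelope: every rising edge of the growth ratio is reported, so closely spaced hits are not suppressed. Window length and ratio threshold are illustrative, and x is assumed to be a 1-D numpy array of samples.
```python
import numpy as np

def sonic_events(x, sr, win_s=0.01, rate_thresh=4.0):
    """Times (s) where the short-term volume grows by more than rate_thresh
    from one window to the next; no lockout period is applied afterwards."""
    hop = int(win_s * sr)
    n = len(x) // hop
    env = np.array([np.abs(x[i * hop:(i + 1) * hop]).max() for i in range(n)]) + 1e-9
    ratio = env[1:] / env[:-1]                            # frame-to-frame volume growth
    hits = [i for i in range(1, len(ratio))
            if ratio[i] >= rate_thresh and ratio[i - 1] < rate_thresh]
    return [(i + 1) * win_s for i in hits]
```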
  • Patent number: 6703551
    Abstract: A musical scale recognition apparatus includes an analog/digital converter and a processor, and can determine by the processor how close an input analog audio signal is to a musical scale of a musical tone to be recognized by repeatedly performing a calculation of a cumulative value As to find a coefficient of a Fourier sine series of the audio signal on the basis of a frequency f and digital data D, a cumulative value Ac to find a coefficient of a Fourier cosine series of the audio signal on the basis of the frequency f and the digital data D, and a frequency power spectrum effective value A of the audio signal on the basis of the cumulative value As and the cumulative value Ac, wherein the digital data into which the input analog audio signal is converted by the analog/digital converter is D, the frequency (musical scale) of the musical tone to be recognized is f, and a current time is t.
    Type: Grant
    Filed: March 18, 2002
    Date of Patent: March 9, 2004
    Assignee: SSD Company Limited
    Inventors: Hiromu Ueshima, Shuhei Kato
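    The abstract spells the arithmetic out closely enough for a direct sketch: accumulate D*sin(2*pi*f*t) and D*cos(2*pi*f*t) over the samples and combine the two sums into an effective spectrum value at the scale-note frequency being tested, i.e. a single-bin Fourier analysis. The normalization and the example frequencies are choices of this sketch.
```python
import math

def scale_note_power(samples, sr, f):
    """Effective spectrum value A at frequency f: As accumulates D*sin(2*pi*f*t),
    Ac accumulates D*cos(2*pi*f*t), and A is derived from both (single-bin DFT)."""
    a_s = a_c = 0.0
    for n, d in enumerate(samples):
        t = n / sr
        a_s += d * math.sin(2 * math.pi * f * t)
        a_c += d * math.cos(2 * math.pi * f * t)
    return math.hypot(a_s, a_c) * 2.0 / len(samples)      # ~amplitude of the tone at f

# Comparing A across the scale-note frequencies shows which note the input is closest to.
sr = 8000
sig = [math.sin(2 * math.pi * 440.0 * n / sr) for n in range(sr)]
print(round(scale_note_power(sig, sr, 440.0), 3), round(scale_note_power(sig, sr, 466.16), 3))
```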
  • Patent number: 6703549
    Abstract: There are provided a performance data generating apparatus and method which are capable of automatically converting original performance data providing an expressionless performance into performance data that enable a variety of musically excellent performances, by means of a novel expression-adding converter using various modularized rules and procedures to add expressions based on temporal changes such as tempo and timing, as well as a storage medium that stores a program for executing the method.
    Type: Grant
    Filed: August 8, 2000
    Date of Patent: March 9, 2004
    Assignee: Yamaha Corporation
    Inventors: Tetsuo Nishimoto, Masahiro Kakishita, Yutaka Tohgi, Toru Kitayama, Toshiyuki Iwamoto, Norio Suzuki, Akane Iyatomi, Akira Yamauchi
  • Publication number: 20030221542
    Abstract: A frequency spectrum is detected by analyzing a frequency of a voice waveform corresponding to a voice synthesis unit formed of a phoneme or a phonemic chain. Local peaks are detected on the frequency spectrum, and spectrum distribution regions including the local peaks are designated. For each spectrum distribution region, amplitude spectrum data representing an amplitude spectrum distribution depending on a frequency axis and phase spectrum data representing a phase spectrum distribution depending on the frequency axis are generated. The amplitude spectrum data is adjusted to move the amplitude spectrum distribution represented by the amplitude spectrum data along the frequency axis based on an input note pitch, and the phase spectrum data is adjusted corresponding to the adjustment. Spectrum intensities are adjusted to be along with a spectrum envelope corresponding to a desired tone color. The adjusted amplitude and phase spectrum data are converted into a synthesized voice signal.
    Type: Application
    Filed: February 27, 2003
    Publication date: December 4, 2003
    Inventors: Hideki Kenmochi, Alex Loscos, Jordi Bonada
  • Patent number: 6657114
    Abstract: Sound signal indicative of a human voice or musical tone is input, and the pitch of the input sound signal is detected. Then, a scale note pitch is determined which is nearest to the detected pitch of the input sound signal. In the meantime, a scale note pitch of an additional sound or harmony sound to be added to the input sound is specified in accordance with a harmony mode selected by a user. The scale note pitch of the additional sound to be generated is modified in accordance with a difference between the determined scale note pitch and the detected pitch of the input sound signal. Because the additional sound is generated with the modified pitch, it can appropriately follow a variation in the pitch of the input sound to be in harmony with the input sound, rather than exactly agreeing with the scale note pitch.
    Type: Grant
    Filed: March 1, 2001
    Date of Patent: December 2, 2003
    Assignee: Yamaha Corporation
    Inventor: Kazuhide Iwamoto
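    A small numeric illustration of the pitch-following harmony idea, under simplifying assumptions: the nearest chromatic note stands in for the nearest scale note, the harmony interval is fixed rather than chosen by a harmony mode, and the harmony is detuned by the same amount the input deviates from its own nearest note.
```python
import math

def harmony_pitch(input_hz, harmony_interval=4, a4=440.0):
    """Harmony frequency that tracks the singer's detune: the harmony note is
    shifted by the same deviation the input shows from its nearest scale note."""
    midi_exact = 69 + 12 * math.log2(input_hz / a4)
    nearest = round(midi_exact)                    # nearest scale note (chromatic here)
    deviation = midi_exact - nearest               # how far the input is off-pitch
    harmony_midi = nearest + harmony_interval + deviation
    return a4 * 2 ** ((harmony_midi - 69) / 12)

print(round(harmony_pitch(446.0), 2))   # slightly sharp A4 -> correspondingly sharp C#5
```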
  • Patent number: 6653546
    Abstract: An electronic, voice-controlled musical instrument called the Vocolo is disclosed, in which the player hums into the mouthpiece and the device imitates the sound of a musical instrument whose pitch and volume change in response to the player's voice. The player is given the impression of playing the actual instrument and controlling it intimately with the fine nuances of his voice. The invention comprises techniques for pitch quantization that provide esthetically pleasing note transitions, mechanisms for song recording that are suited for rhythmic repeated playback and for performance evaluation of the player's pitch control, techniques related to expressive control and pitch detection, and techniques for mitigating the effect of pitch detection errors. Embodiments are disclosed for providing finger/hand interaction for expressive control, for a microphone enclosure that mitigates audio feedback, and for providing rhythmic feedback to the player through mechanical vibrations induced in the device.
    Type: Grant
    Filed: September 18, 2002
    Date of Patent: November 25, 2003
    Assignee: Alto Research, LLC
    Inventor: John W. Jameson
  • Publication number: 20030177892
    Abstract: Rendition style determining apparatus detects at least one of duration of a first note to be performed at a given time point and time interval between the first note and a second note to be performed following the first note, in order to automatically impart music piece data with an appropriate rendition style. Rendition style to be imparted to the music piece data in relation to the given time point is determined on the basis of the detected duration or time interval. Also, the apparatus can readily control the rendition style to be imparted to the music piece data by appropriately setting/changing rendition style determination conditions, such as reference time lengths. Music piece data is supplied to a determination device, thereby causing the determination device to perform automatic rendition style determination based on the supplied music piece data and then display the rendition style imparted to the music piece data.
    Type: Application
    Filed: March 14, 2003
    Publication date: September 25, 2003
    Applicant: Yamaha Corporation
    Inventors: Eiji Akazawa, Yasuyuki Umeyama, Junji Kuroda
  • Patent number: 6580024
    Abstract: A musical instrument tuning and intonation aid employs a novel concept and method, which closely approximates a desirable mechanical stroboscopic effect—but without mechanical moving parts. A suitable image pattern is displayed on an electronically controlled, visual display screen of any available type (Liquid Crystal Display, Organic LED Display, Electro-Luminescent Display, Cathode Ray Tube display, etc.) in direct and immediate response to the detection of a unique phase of the fundamental pitch periods in arbitrary acoustic or electrical signals. The displaced position of the pattern at these instants is determined by a pre-calculated reference frequency.
    Type: Grant
    Filed: January 10, 2002
    Date of Patent: June 17, 2003
    Assignee: Peterson Electro-Musical Products, Inc.
    Inventor: Michael J. Skubic
  • Patent number: 6541691
    Abstract: A method for generating accompaniment to a musical presentation, the method comprising steps of providing a note-based code representing musical information corresponding to the musical presentation, generating a code sequence corresponding to new melody lines by using said note-based code as an input for a composing method, and providing accompaniment on the basis of the code sequence corresponding to new melody lines. Providing the note-based code representing the musical information comprises steps of receiving the musical information in the form of an audio signal, and applying an audio-to-notes conversion to the audio signal for generating the note-based code representing the musical information, the audio-to-notes conversion comprising the steps of estimating fundamental frequencies of the audio signal for obtaining a sequence of fundamental frequencies, and detecting note events on the basis of the sequence of fundamental frequencies for obtaining the note-based code.
    Type: Grant
    Filed: June 29, 2001
    Date of Patent: April 1, 2003
    Assignee: Oy Elmorex Ltd.
    Inventors: Tero Tolonen, Ville Pulkki
  • Patent number: 6525255
    Abstract: An average is calculated of every predetermined number of sample amplitude values of a sound signal from an external sound source, and the respective averages are output as a time-series of average level information. On the basis of the average level information, each available section of the sound signal is detected where there appears to be a musical sound. On the basis of degrees of inclination in the average level information within the available section, stable sections are detected for detection of same-waveform sections. On the basis of the signals within the stable sections, a steady section is detected which corresponds to a note. A time-varying band-pass filtering operation is then performed on the sound signal, and detection is made of a plurality of periodic reference points of the sound signal.
    Type: Grant
    Filed: November 19, 1997
    Date of Patent: February 25, 2003
    Assignee: Yamaha Corporation
    Inventor: Tomoyuki Funaki
  • Patent number: 6476308
    Abstract: The present invention is directed to classifying a musical piece based on determined characteristics for each of plural notes contained within the piece. Exemplary embodiments accommodate the fact that in a continuous piece of music, the starting and ending points of a note may overlap previous notes, the next note, or notes played in parallel by one or more instruments. This is complicated by the additional fact that different instruments produce notes with dramatically different characteristics. For example, notes with a sustaining stage, such as those produced by a trumpet or flute, possess high energy in the middle of the sustaining stage, while notes without a sustaining stage, such as those produced by a piano or guitar, possess high energy in the attacking stage when the note is first produced.
    Type: Grant
    Filed: August 17, 2001
    Date of Patent: November 5, 2002
    Assignee: Hewlett-Packard Company
    Inventor: Tong Zhang
  • Patent number: 6476304
    Abstract: A musical instrument is computerized for electronically generating tones, and an information processing subsystem assists users in selectively changing the attributes of the tones, wherein the information processing subsystem includes a speaker recognition engine for identifying a player as one of the registered users and a graphical user interface transferring a digital image signal representative of a picture customized by the registered user to a display unit so that the user can check the attributes during the performance.
    Type: Grant
    Filed: June 19, 2001
    Date of Patent: November 5, 2002
    Assignee: Yamaha Corporation
    Inventor: Haruki Uehara
  • Patent number: 6437227
    Abstract: The invention relates to methods for recognizing and for selecting a tone sequence, particularly a piece of music, which permit a user to request a particular piece of music by singing a section of the piece of music, whose title is unknown to him.
    Type: Grant
    Filed: October 11, 2000
    Date of Patent: August 20, 2002
    Assignee: Nokia Mobile Phones Ltd.
    Inventor: Wolfgang Theimer
  • Patent number: 6405163
    Abstract: A method and apparatus for removing or amplifying voice or other signals panned to the center of a stereo recording utilizes frequency domain techniques to calculate a frequency dependent gain factor based on the difference between the frequency domain spectra of the stereo channels.
    Type: Grant
    Filed: September 27, 1999
    Date of Patent: June 11, 2002
    Assignee: Creative Technology Ltd.
    Inventor: Jean Laroche
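    The core of the method, a frequency-dependent gain derived from the difference between the two channel spectra, can be sketched with a short-time Fourier transform: bins where left and right are nearly identical (center-panned content) get a small gain, dissimilar bins keep theirs. The particular gain law, window, and overlap below are assumptions; inputs are equal-length 1-D numpy arrays.
```python
import numpy as np

def attenuate_center(left, right, n_fft=2048, hop=512, floor=0.1):
    """Suppress center-panned material: per STFT bin, the gain is the normalized
    magnitude difference of the two channel spectra (never below floor)."""
    win = np.hanning(n_fft)
    out_l, out_r = np.zeros(len(left)), np.zeros(len(right))
    for start in range(0, len(left) - n_fft, hop):
        sl = np.fft.rfft(left[start:start + n_fft] * win)
        sr = np.fft.rfft(right[start:start + n_fft] * win)
        gain = np.maximum(np.abs(sl - sr) / (np.abs(sl) + np.abs(sr) + 1e-12), floor)
        out_l[start:start + n_fft] += np.fft.irfft(sl * gain) * win   # overlap-add
        out_r[start:start + n_fft] += np.fft.irfft(sr * gain) * win
    return out_l, out_r
```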
  • Patent number: 6369311
    Abstract: A harmony tone generating apparatus receives a voice wave signal bearing musical tone pitches such as a singer's or a speaker's human voice signal and a musician's instrumental voice signal, and plural kinds of performance data, each representing a musical note pitch, from plural kinds of performance sources. The latest received performance data from among the plural performance sources is captured and supplied to a harmony tone data generator, which in turn generates harmony tone data defining a tone pitch which is determined by the note pitch represented by the performance data. As the harmony tone data generator includes a limited number of data generation-channels, the limited number of the latest captured performance data pieces are assigned thereto for harmony data generation, truncating the oldest supplied performance data for new data assignment, where there is no empty channel left and newly captured performance data is supplied.
    Type: Grant
    Filed: June 22, 2000
    Date of Patent: April 9, 2002
    Assignee: Yamaha Corporation
    Inventor: Kazuhide Iwamoto
  • Publication number: 20020035915
    Abstract: A method for generating accompaniment to a musical presentation, the method comprising steps of providing a note-based code representing musical information corresponding to the musical presentation, generating a code sequence corresponding to new melody lines by using said note-based code as an input for a composing method, and providing accompaniment on the basis of the code sequence corresponding to new melody lines. Providing the note-based code representing the musical information comprises steps of receiving the musical information in the form of an audio signal, and applying an audio-to-notes conversion to the audio signal for generating the note-based code representing the musical information, the audio-to-notes conversion comprising the steps of estimating fundamental frequencies of the audio signal for obtaining a sequence of fundamental frequencies, and detecting note events on the basis of the sequence of fundamental frequencies for obtaining the note-based code.
    Type: Application
    Filed: June 29, 2001
    Publication date: March 28, 2002
    Inventors: Tero Tolonen, Ville Pulkki
  • Patent number: 6362409
    Abstract: A software based digital wavetable synthesizer receives musical data from an external source and generates a plurality of digital sample values corresponding to the musical source. The musical source may be a synthesized music source or an actual instrument. In an exemplary embodiment, a sample for each semi-tone of the musical instrument is sampled and stored. A subsequent process analyzes the samples and selects a single cycle representing that musical instrument at each of the semi-tones. The data is subsequently normalized such that each cycle begins with a zero value, and the normalized data is stored in a data structure along with labels indicative of the musical instrument and the musical note. In subsequent use, the user can create synthesized music by selecting the desired instrument and notes. Additional musical rules, such as rules associated with Indian classical music, may be applied to specify the synthesis process.
    Type: Grant
    Filed: November 24, 1999
    Date of Patent: March 26, 2002
    Assignee: IMMS, Inc.
    Inventor: Sharadchandra H. Gadre
  • Patent number: 6355869
    Abstract: A method and a system for obtaining a musical score from a musical recording is disclosed. The present invention further provides a method and system for creating an editable music file from a musical recording, such that a user may modify the musical recording and obtain a musical score from the modified musical recording. The method comprises operations of: (1) storing a musical recording as a wave file; (2) generating a pseudo wave file for each segment of interest from the wave file; (3) generating a sequence file for each of the pseudo wave files; (4) generating a list of events from the sequence files; (5) converting the list of events from the sequence file into a MIDI or other notation readable program file; and (6) importing the MIDI or other file into the notation program in order to print the musical score.
    Type: Grant
    Filed: August 21, 2000
    Date of Patent: March 12, 2002
    Inventor: Duane Mitton
  • Patent number: 6140568
    Abstract: A system and method for automatically detecting and identifying a plurality of frequencies simultaneously present in an audio signal, as well as the duration, amplitude, and phase of those frequencies, then filtering out harmonic components to determine which frequencies are fundamentals. The system includes a computer readable medium of instruction code that decomposes the signal into its component sine waves by computing and comparing correlations between the input signal and sine waves at various phase and amplitude combinations. The system also employs several optimization and error correction routines.
    Type: Grant
    Filed: November 5, 1998
    Date of Patent: October 31, 2000
    Assignee: Innovative Music Systems, Inc.
    Inventor: Joseph Louis Kohler
  • Patent number: 6124544
    Abstract: A method for detecting the pitch of a musical signal comprising the steps of receiving the musical signal, identifying an active portion of the musical signal, identifying a periodic portion of the active portion of the musical signal, and determining a fundamental frequency of the periodic portion of the musical signal.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: September 26, 2000
    Assignee: Lyrrus Inc.
    Inventors: John Stern Alexander, Themistoclis George Katsianos
  • Patent number: 6121533
    Abstract: An initial note series is collected from a real-time source of musical input material such as a keyboard or a sequencer playing back musical data, or extracted from musical data stored in memory. The initial note series may be altered to create variations of the initial note series using various mathematical operations. The resulting altered note series, or other data stored in memory is read out according to one or more patterns. The patterns may have steps containing pools of independently selectable items from which random selections are made. A pseudo-random number generator is employed to perform the random selections during processing, where the random sequences thereby generated have the ability to be repeated at specific musical intervals. The resulting musical effect may additionally incorporate a repeated effect, or a repeated effect can be independently performed from input notes in the musical input material.
    Type: Grant
    Filed: January 28, 1999
    Date of Patent: September 19, 2000
    Inventor: Stephen Kay
  • Patent number: 6057502
    Abstract: A time fraction or short duration of a musical sound wave is first analyzed by the FFT processing into frequency components in the form of a frequency spectrum having a number of peak energy levels, a predetermined frequency range (e.g. 63.5-2032 Hz) of the spectrum is cut out for the analysis of chord recognition, the cut-out frequency spectrum is then folded on an octave span basis to enhance spectrum peaks within a musical octave span, the frequency axis is adjusted by an amount of difference between the reference tone pitch as defined by the peak frequency positions of the analyzed spectrum and the reference tone pitch used in the processing system, and then a chord is determined from the locations of those peaks in the established octave spectrum by pattern comparison with the reference frequency component patterns of the respective chord types. Thus, the musical chords included in a musical performance are recognized from the sound wave of the musical performance.
    Type: Grant
    Filed: March 30, 1999
    Date of Patent: May 2, 2000
    Assignee: Yamaha Corporation
    Inventor: Takuya Fujishima
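    The pipeline above (FFT, 63.5-2032 Hz band, octave folding, pattern comparison) maps naturally onto a 12-bin pitch-class profile. The sketch below folds spectral magnitude into pitch classes and scores two chord-type templates at every root with a plain sum; the reference-pitch adjustment step is omitted and the template set is illustrative.
```python
import numpy as np

# One reference pattern per chord type; real systems carry many more.
CHORD_TEMPLATES = {"maj": (0, 4, 7), "min": (0, 3, 7)}

def recognize_chord(x, sr, fmin=63.5, fmax=2032.0):
    """Fold the 63.5-2032 Hz part of the spectrum into a 12-bin octave profile
    and return the (root pitch class, chord type) whose template scores best."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    band = (freqs >= fmin) & (freqs <= fmax)
    pitch_class = (np.rint(12 * np.log2(freqs[band] / 440.0)).astype(int) + 69) % 12
    profile = np.zeros(12)
    np.add.at(profile, pitch_class, spec[band])           # octave folding
    best, best_score = None, -1.0
    for root in range(12):
        for name, pcs in CHORD_TEMPLATES.items():
            score = sum(profile[(root + pc) % 12] for pc in pcs)
            if score > best_score:
                best, best_score = (root, name), score
    return best
```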
  • Patent number: 6002080
    Abstract: There are provided a plurality of keys for designating a pitch and a plurality of sensors for detecting operating states of the keys. A main pitch determination section identifies respective ON/OFF states of the keys on the basis of output values from the sensors to thereby determine a performed pitch. When the output value from the sensor, corresponding to a particular one of the keys determined to be in the ON or OFF state, presents a predetermined intermediate value, a subsidiary pitch determination section determines another pitch on the basis of predetermined assumptive ON/OFF states of the keys. The assumptive ON/OFF states are similar to the ON/OFF states of the keys identified by the main pitch determination section except that the ON or OFF state of the particular key is inverted to the OFF or ON state.
    Type: Grant
    Filed: June 16, 1998
    Date of Patent: December 14, 1999
    Assignee: Yamaha Corporation
    Inventor: So Tanaka
  • Patent number: 5998725
    Abstract: A musical sound synthesizer generates a predetermined singing sound based on performance data. A compression device determines whether each of a plurality of phonemes forming the predetermined singing sound is a first phoneme to be sounded in accordance with a note-on signal indicative of a note-on of the performance data, and compresses a rise time of the first phoneme when the first phoneme is sounded in accordance with occurrence of the note-on signal of the performance data.
    Type: Grant
    Filed: July 29, 1997
    Date of Patent: December 7, 1999
    Assignee: Yamaha Corporation
    Inventor: Shinichi Ohta
  • Patent number: 5963907
    Abstract: A voice converter provides for pitch and formant shifting of an input voice signal. An audio filter extracts the volume level of the input voice signal, and outputs the extracted volume level as first volume data. A second audio filter extracts the volume level of an output voice signal, and outputs the extracted volume level as second volume data. A difference judging circuit compares the first and second volume data with each other, and determines a volume gain and a distorting factor which is supplied to a distortion circuit. When the volume of the output voice after conversion is smaller than that of the input voice, the volume gain is increased.
    Type: Grant
    Filed: August 29, 1997
    Date of Patent: October 5, 1999
    Assignee: Yamaha Corporation
    Inventor: Shuichi Matsumoto