Abstract: A music data memory contains pieces of music within a group and other pieces of music outside the group. The next piece to be played is determined automatically among pieces within the group by means of a random table. A favorite or newest piece is weighted so that it is played more frequently within the group. A piece in the music data memory is automatically included into the group by the random table, and a newly downloaded piece is included into the group with priority. The most frequently played piece is excluded from the group in place of a newly included piece; favorite or newest pieces may be exempted from exclusion. The next piece can be played at a tempo similar to that of the preceding piece, by means of tempo adjustment, piece replacement, or repetition of the same piece, so that continued baby cradling stays in synchronism with the tempo of succeeding pieces.
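The weighted in-group selection described in this abstract amounts to a weighted random draw over the group; a minimal sketch follows, in which the group contents, the 5x weight for the favorite piece, and the helper name `pick_next` are illustrative assumptions rather than details from the patent.

```python
import random

def pick_next(group, weights):
    # Weighted random selection: pieces with larger weights (e.g. a
    # favorite or newest piece) are drawn more often from the group.
    w = [weights.get(p, 1.0) for p in group]
    return random.choices(group, weights=w, k=1)[0]

group = ["piece_a", "piece_b", "favorite"]
weights = {"favorite": 5.0}   # assumed weighting: favorite plays ~5x as often
random.seed(0)                # reproducible demo draw
picks = [pick_next(group, weights) for _ in range(1000)]
```

With these weights the favorite piece accounts for roughly five sevenths of the draws, which matches the abstract's idea of weighting a favorite or newest piece to be played more frequently within the group.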
Abstract: A multi-channel noise reduction system provides improved noise reduction with direct instrument tracking of all channels. In a two-channel noise reduction system, both channels detect and track the input level and dynamic range of the guitar directly, with one channel of dynamic noise reduction between the guitar and the input of a guitar amplifier to eliminate the noise of the instrument, and another channel of noise reduction connected in the effects loop of the guitar amplifier. Multiple channels of noise reduction can be implemented with separate threshold controls, and with low-level expansion and dynamic filtering combined so as to detect and track the input level and dynamic range of the guitar directly. A buffer amplifier can be used to feed the direct guitar signal to the detectors of the noise reduction system and the input of a stereo guitar system.
Abstract: A music reproduction apparatus includes: a reproduction section for reproducing user-selected music piece data; a generation section for generating control information including music piece information identifying a music piece reproduced by the reproduction section and reproduced-position information indicative of a currently-reproduced position; a modulation section for outputting, on the basis of the generated control information, an audio signal of a predetermined frequency band carrying the control information; and an output section for transmitting the audio signal generated by the modulation section to the outside.
Abstract: A method for composing a musical piece is carried out by identifying spectral data characterizing a selected chemical composition, said spectral data representing a plurality of transmittance peaks for the selected chemical composition in a spectral range (e.g., a wave number range), including identifying the spectral range and the transmittance values and positions of a sequence of transmittance peaks within the spectral range; assigning a melody duration to the identified spectral range; generating a sequence of musical tones by assigning musical tones to the sequence of transmittance peaks according to the identified transmittance values; and assigning a duration to each musical tone. A computer-readable medium includes code for carrying out the method on a general-purpose computer.
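One plausible reading of the peak-to-tone assignment can be sketched as below; the C-major scale, the even split of the melody duration across peaks, and the rule mapping a normalized transmittance value to a scale degree are all assumptions for illustration, not the patented mapping.

```python
def peaks_to_melody(peaks, melody_seconds, scale):
    # Each peak is (position, transmittance in 0.0..1.0); the transmittance
    # value selects a scale degree, and the melody duration assigned to the
    # spectral range is split evenly across the peaks.
    dur = melody_seconds / len(peaks)
    melody = []
    for _pos, trans in peaks:
        idx = int(trans * (len(scale) - 1))   # 0.0..1.0 -> scale index
        melody.append((scale[idx], dur))
    return melody

scale = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]   # assumed scale
peaks = [(600, 0.2), (1100, 0.9), (1700, 0.5)]             # assumed spectrum
melody = peaks_to_melody(peaks, 6.0, scale)
```

Here three transmittance peaks over a 6-second melody duration yield three tones of 2 seconds each, with louder (higher-transmittance) peaks mapped to higher scale degrees.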
Abstract: A band division apparatus obtains an analog low-band signal, an analog intermediate-band signal and an analog high-band signal from a digital sound signal. In the band division apparatus, a digital filter separates the digital sound signal into a digital intermediate-band signal having the intermediate frequency band while the high frequency band and the low frequency band are attenuated, and a digital low-high-band signal having a frequency band combining the low frequency band and the high frequency band while the intermediate frequency band is attenuated. A DA converter converts the digital low-high-band signal into an analog low-high-band signal. Another DA converter converts the digital intermediate-band signal into the analog intermediate-band signal. An analog filter separates the analog low-high-band signal into the analog low-band signal and the analog high-band signal.
Abstract: In some embodiments, an electric guitar interface system includes a first touchpad of an electric guitar configured to detect a user input, and a control unit coupled to the first touchpad. The control unit may be configured to set a first parameter of the electric guitar's output as a function of a position of the user input along a first axis of the touchpad. The first parameter may include a first pickup gain, which may be for at least one of a bridge pickup, a middle pickup, and a neck pickup, and may further include a second pickup gain.
Abstract: A sound analysis and associated sound synthesis method is provided. A first input sound signal is received and analyzed to determine its corresponding impulse response, representative of a timbre of the input sound signal. A second input sound signal is received and processed into a form to which the corresponding impulse response can be applied, wherein the processing includes generating a “pink noise” equivalent frequency spectrum of the second input sound signal. The impulse response is applied to the processed second input sound signal to generate an output sound signal, wherein the output sound signal includes at least timbral nuances of the first input sound signal.
Abstract: Systems and methods for performing adaptive audio signal processing using music as a measurement stimulus signal. A musical stimuli generator may be used to generate musical stimulus signals composed to provide a stimulus with a spectrum that is substantially dense, and ideally white or pink, over a selected frequency range, so that all frequencies of interest are stimulated. The musical stimuli generator may generate melodically pleasing musical stimulus signals using music clips that include any of: a chromatic sequence, a chromatic sequence including chromatic tones over a plurality of octaves, a chromatic sequence including chromatic tones over a selected plurality of octaves, or an algorithmically composed chromatic sequence, to cover a selected frequency range. The musical stimulus signal may be generated as sound into the environment of use. An audio input picks up the sound from the environment, and a sound processor uses the received musical stimulus signal to determine a transfer function.
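A chromatic sequence that covers a selected frequency range can be sketched by enumerating 12-tone-equal-temperament frequencies in semitone steps; the A4 = 440 Hz reference and the rounding are illustrative choices, and the actual clip composition in the patent is not reproduced here.

```python
def chromatic_stimulus(f_low, f_high, a4=440.0):
    # Enumerate 12-TET frequencies (semitone spacing) that fall inside
    # [f_low, f_high], so every chromatic pitch in the selected range
    # is stimulated.
    freqs = []
    n = -48                      # start well below the range of interest
    while True:
        f = a4 * 2 ** (n / 12)
        if f > f_high:
            break
        if f >= f_low:
            freqs.append(round(f, 2))
        n += 1
    return freqs

freqs = chromatic_stimulus(100.0, 1000.0)
```

Over 100–1000 Hz this yields forty semitone-spaced tones spanning a bit over three octaves, giving the dense spectral coverage the abstract calls for.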
June 29, 2011
Date of Patent: November 11, 2014
Harman International Industries, Incorporated
Abstract: An acoustic effect impartment apparatus detects striking of any one of strings by a corresponding hammer in an acoustic piano, such as a grand piano, and vibrates a vibration section with a driving waveform signal obtained by synthesizing sine wave signals of the fundamental frequency and harmonic frequencies of the hammer-struck string. Such vibration of the vibration section is transmitted to the keys via a soundboard and bridge of the piano. Thus, vibration is excited in the hammer-struck string not only by the striking with the hammer but also by the driving waveform signal, so that an acoustic effect corresponding to the driving waveform signal is imparted. Because the driving waveform signal is a simple signal using the sine wave signals corresponding to the fundamental frequency of the string, the natural feeling of the acoustic piano is not lost even when the acoustic effect is imparted.
Abstract: A controller having proximity sensors associated with a trigger, such as beam sensors, configured to generate proximity data as a function of where each beam is broken along its span. A variety of control signals can be generated, whereby each beam can be configured to be spatially controlled and mapped to mimic other controllers, such as those of a DJ controller or other entertainment device. MIDI messages may be generated in response to positioning a member in a beam as detected by the proximity sensors. Each beam may be configured into a plurality of proximity zones, where a different MIDI message is generated when the member is positioned in the respective proximity zone.
November 29, 2012
Date of Patent: October 28, 2014
Beamz Interactive, Inc.
Dave Sandler, Cody Myer, Gerald H. Riopelle
Abstract: A method of performing audio synthesis is disclosed. An audio event is input to an audio algorithm along with associated parameters including source sample data. An interpolation function is provided and the source sample data are interpolated to generate one or more interpolated samples based on the source sample data. A filter function is provided and at least one of the interpolated samples is filtered to generate a filtered sample. A gain function is provided and the filtered sample is processed to generate a gained sample. At least one of the interpolation, filter, and gain functions includes outputting an earlier-calculated value along with an estimated difference value in lieu of calculating a new value.
April 15, 2011
Date of Patent: August 26, 2014
MediaLab Solutions Corp.
Alain Georges, Voislav Damevski, Peter M. Blair, Christian Laffitte, Yves Wenzinger
Abstract: In the present invention, a click sound corresponding to key depression speed is generated, and the production timings of fundamental and harmonic components respectively corresponding to each footage are changed to vary from one another in accordance with a wait time, whereby the fundamental and the harmonic components to be synthesized by additive synthesis are changed to differ from one another. Next, a click sound corresponding to key release speed is generated, and the stop timings of the fundamental and harmonic components are changed to vary from one another in accordance with a wait time, whereby the fundamental and the harmonic components to be muted are changed to differ from one another. Accordingly, by the click sound being mixed with the drawbar sound having these slight tone changes, a unique drawbar sound such as that generated by the sound producing mechanism of an actual drawbar organ is generated.
Abstract: Synthetic multi-string musical instruments have been developed for capturing and rendering musical performances on handheld or other portable devices in which a multi-touch sensitive display provides one of the input vectors for an expressive performance by a user or musician. Visual cues may be provided on the multi-touch sensitive display to guide the user in a performance based on a musical score. Alternatively, or in addition, uncued freestyle modes of operation may be provided. In either case, it is not the musical score that drives digital synthesis and audible rendering of the synthetic multi-string musical instrument. Rather, it is the stream of user gestures captured at least in part using the multi-touch sensitive display that drives the digital synthesis and audible rendering.
November 9, 2011
Date of Patent: July 8, 2014
Ge Wang, Jeannie Yang, Jieun Oh, Tom Lieber
Abstract: A system and method for audio synthesizer utilizing frequency aperture cells (FAC) and frequency aperture arrays (FAA). In accordance with an embodiment, an audio processing system can be provided for the transformation of audio-band frequencies for musical and other purposes. In accordance with an embodiment, a single stream of mono, stereo, or multi-channel monophonic audio can be transformed into polyphonic music, based on a desired target musical note or set of multiple notes. At its core, the system utilizes an input waveform(s) (which can be either file-based or streamed) which is then fed into an array of filters, which are themselves optionally modulated, to generate a new synthesized audio output.
August 26, 2011
Date of Patent: June 24, 2014
James Edwin Van Buskirk, Jennifer Hruska, Jason Jordan, Al Joelson, Borislav Zlatkov
Abstract: A lead tone is generated on the basis of an input tone signal. Meanwhile, a specific pitch of the input tone signal is sequentially detected, from which is detected a normalized pitch corresponding to any one of the musical pitch names. Then, difference information is obtained which pertains to a difference between the specific pitch and the normalized pitch, and a pitch having a given pitch interval from the normalized pitch is determined as a target pitch of a tone signal to be generated. Then, a harmony tone is generated which has a pitch obtained by modulating the target pitch in accordance with the difference information.
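The pitch normalization and difference step can be sketched with 12-tone-equal-temperament math, using cents as the "difference information"; the A4 = 440 Hz reference and the cents-based re-modulation of the target pitch are illustrative assumptions, not the patented procedure.

```python
import math

A4 = 440.0

def normalize_pitch(freq):
    # Snap a detected frequency to the nearest 12-TET pitch; the cents
    # offset plays the role of the abstract's "difference information".
    n = round(12 * math.log2(freq / A4))
    norm = A4 * 2 ** (n / 12)
    cents = 1200 * math.log2(freq / norm)
    return norm, cents

def harmony_pitch(freq, interval_semitones):
    # Target pitch: normalized pitch shifted by a given interval, then
    # modulated by the original cents deviation so the harmony tone
    # tracks the detuning of the input.
    norm, cents = normalize_pitch(freq)
    target = norm * 2 ** (interval_semitones / 12)
    return target * 2 ** (cents / 1200)
```

For example, an input detected at 446 Hz normalizes to A4 with a roughly +23-cent offset, and an octave harmony comes out at 892 Hz, preserving the detuning.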
Abstract: A thereminist robot has a characteristic model of a theremin and is capable of performing in response to the environment of a theremin performance by calibrating the characteristic model before the performance. A robot 10 has a first arm 12, a second arm 11, and a pitch model indicating the arm position corresponding to a pitch of the theremin. The robot 10 plays the theremin by moving the first arm 12 to the arm position corresponding to a musical note based on the target note and the pitch model. The robot further has a parameter adjustment unit for adjusting parameters of the pitch model that change depending on the environment surrounding the theremin.
Abstract: A digital multi-media device provides features for a user unskilled in musical arts or sound handling techniques that provides automatic musical score composition in accordance with contained composition instructions. Stored sound samples and interfaces for obtaining external signals provide signals for merger with visual and sound presentations to obtain altered presentations either time shifted or in real time. In this fashion the user can create simulated radio stations for playback of prearranged and composed audio material. Further, the automatically composed musical score may be mixed with synthesized, digitized signals from the stored sound samples and external signals obtained through the device interfaces.
Abstract: A method and user interface for data sonification for representing multidimensional numerical information with a plurality of variable-timbre channels are described. The method includes providing a user interface that includes a plurality of metaphors and spatial sound rendering operations resulting in stereo audio output. In one implementation, a metaphor is used in making parameter assignments according to which audio-frequency waveforms having adjustable timbre attributes are generated. In another implementation, timbre attributes are adjustable over a range of timbre variation and are further associated with corresponding metaphors.
Abstract: A method of matching the tempo and phase in pieces of music which allows the conjunction of the pieces of music to form a continuous stream of music. The interactive music player which digitally executes the method of matching the tempo and phase in pieces of music is also disclosed.
Abstract: The sound-processing apparatus of the present invention generates plural frequency data by decoding plural encoded sound data and applying inverse quantization. Each of the frequency data is subjected to sound processing and then synthesized into a single set of frequency data. Transformation from the frequency domain to the time domain is applied to the synthesized frequency data to generate sound data in the time domain, thereby reducing the computation amount of the decoding process.
Abstract: A system and method for synthesizing audio is disclosed. It allows specification of a musical sound to be generated. It synthesizes an audio source, such as noise, using parameters that specify the desired frequency slit spacing and the desired noise-to-frequency-band ratio, then filters the audio source through a sequence of filters to obtain that spacing and ratio. It allows modulation of the filters in the sequence, and it outputs the musical sound.
Abstract: When a signal representing a first set value or more is received from a detecting section, a determination is made as to whether a boost value is a predetermined value or more. When the boost value is the predetermined value or more, the currently set boost value is reduced by a predetermined step, so that a new boost value is set. When the boost value is less than the predetermined value, the currently set volume level is reduced by a predetermined step, and a new volume level is set. Therefore, in a case where the set boost value is gradually reduced and finally falls below the predetermined value while the audio signal still remains at or above the first set value, the volume level is further reduced by each predetermined step. Accordingly, the audio signal can be reliably prevented from being clipped.
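The boost-first, volume-second reduction rule reads like a simple control step that repeats while the signal stays hot; a sketch under that reading follows, with the function name, parameter names, and step sizes chosen for illustration.

```python
def adjust(boost, volume, signal, set_value, boost_floor, step):
    # One control pass: if the detected signal meets or exceeds the set
    # value, back off the boost first; once boost drops below the floor,
    # reduce the volume level instead.
    if signal < set_value:
        return boost, volume
    if boost >= boost_floor:
        return boost - step, volume
    return boost, volume - step

# A persistently hot signal first walks the boost down, then the volume.
b, v = 3.0, 10.0
for _ in range(5):
    b, v = adjust(b, v, signal=1.0, set_value=0.9, boost_floor=1.0, step=1.0)
```

After five passes the boost has been stepped down to zero and the volume level has begun to fall, matching the abstract's two-stage anti-clipping behavior.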
Abstract: A musical tone signal is synthesized based on performance information to simulate a sound generated from a musical instrument having a string and a body that supports the string by a support. There is provided a closed loop circuit having a delay element that simulates delay characteristic of vibration propagated through the string and a characteristic control element that simulates a variation in amplitude or frequency. A string model calculation circuit inputs an excitation signal based on the performance information to the closed loop circuit, and calculates first information representing a force of the string acting on the support based on a cyclic signal generated in the closed loop circuit and representing the vibration of the string. A body model calculation circuit calculates second information representing a displacement of the body or a derivative of the displacement. A musical tone signal calculation circuit calculates the musical tone signal.
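The closed loop with a delay element and a characteristic control element is reminiscent of a Karplus-Strong / digital-waveguide loop. The sketch below shows only that loop (noise-burst excitation, delay line, damped two-point average); the patent's string-force and body-model calculations are not reproduced, and all names and constants are illustrative.

```python
import random

def string_loop(delay_len, n_samples, damping=0.996):
    # Closed-loop string model: a circular delay line fed back through a
    # damped two-point average (the delay element plus a characteristic
    # control element shaping amplitude). Excitation is a noise burst.
    random.seed(1)
    buf = [random.uniform(-1.0, 1.0) for _ in range(delay_len)]
    out = []
    for i in range(n_samples):
        s = buf[i % delay_len]
        nxt = buf[(i + 1) % delay_len]
        buf[i % delay_len] = damping * 0.5 * (s + nxt)   # loss + lowpass
        out.append(s)
    return out

sig = string_loop(delay_len=64, n_samples=2000)
```

The loop produces a cyclic signal whose energy decays over time, the way a plucked string's vibration does; the delay length sets the fundamental period.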
Abstract: The invention provides for the integration of video into electronic music technology. Cameras on instruments, such as guitars, are provided for display during performance, recording, and for creating control signals. Extraction of control signals from camera images, including patterns and gestures in 2D and 3D, may be included. All such outgoing control signals may be MIDI, and extraction algorithms may be selected and controlled by MIDI. A range of video signal generation, video signal processing, and camera image capture control functions relevant to the use of video in live performances may be controlled by MIDI to unify a common control infrastructure for a live performance environment.
Abstract: Interactive data sonification for representing multidimensional numerical information with a plurality of variable-timbre channels for use complementing data visualization is described. The method includes generation of a plurality of variable-timbre audio waveforms, each having an audio frequency parameter and at least one timbre modulation parameter having an adjustable value that affects the timbre of the audio waveform. The method includes associating aspects of multidimensional numerical data with the timbre modulation parameter of each audio frequency waveform using a mapping element. The mapping element varies values of timbre modulation parameters responsive to selected values from the multidimensional numerical data. Each audio frequency waveform can be positioned within a sonically-rendered sound field, associating information with positions within the sound field. Mapping elements can vary the positions responsive to selected values from the multidimensional numerical data.
Abstract: A multi-media entertainment device enabling a user to control sound/audio elements of a music or sound program while video is displayed and which is correlated to the played sound elements. The user interacts with triggers such as laser beams that can be interrupted by a player's fingers to play music. The video track is displayed, and the user controls the audio play of the sound track by interrupting the beams, each beam associated with a different instrument. This allows the user to play the multimedia device along with a displayed video performance, in synchronization with one or more musicians displayed on a display. The user's play may be scored as a function of the user's accuracy of engaging the triggers. For instance, a user can strum a trigger associated with a guitar program in unison with a guitarist on the display. The music created by the user interacting with multiple triggers is sympathetic and always synchronized to the video performance.
Abstract: A signal demodulator includes a discriminator for discriminating the modulation technique through which a carrier signal was modulated into a quasi audio signal, and a signal demodulation module for reproducing a continuous data stream from the quasi audio signal through a demodulating technique corresponding to the discriminated modulation technique. The discriminator includes a sampling circuit for extracting groups of samples from the quasi audio signal during each period of the carrier signal, an integrator calculating an integrated value on each group of samples, a comparator comparing the integrated value with a threshold for a neighborhood of zero so as to determine the groups of samples with the integrated value less than the threshold, and a determiner measuring the time period between such groups and discriminating 16DPSK when the time period is equal to the modulation period.
Abstract: A CPU 19a supplies parameters on musical tone signals to a tone generator 17 having a plurality of tone generation channels CH0, CH1, . . . , CH127 each generating a musical tone signal. The parameters include channel information which designates one or more of the tone generation channels, and musical tone information which defines respective musical tone signals which are to be generated in the respective tone generation channels designated by the channel information. The tone generator 17 has a tone generation reservation circuit 17b which makes the designated tone generation channels start generation of the musical tone signals defined by the musical tone information when the respective tone volume levels of musical tone signals currently generated in the tone generation channels designated by the channel information are equal to or below a certain tone volume level.
Abstract: A resonance generation device of an electronic musical instrument, including: a key depression state detecting means for detecting whether a key which is in a specific relation with a played key is already depressed when a key playing operation is performed; a specific relation detecting means for detecting the relation between the played key and the depressed key when the key depression state detecting means detects that the key in the specific relation with the played key is already depressed; and a musical sound generation means for generating a musical sound of the played key when the specific relation detecting means detects that the played key and the depressed key are in the specific relation set in advance, and generating a predetermined musical sound based on the relation between the played key and the depressed key so that the position of the depressed key serves as a sound generation source.
Abstract: The Source-Dependent Instrument is a signal processing and signal generation system that uses one or more signal event generators that can be functionally activated and controlled by the analysis of an external input signal. These output generators and signal processors can be set to re-synthesize aspects of the input or synthesize a more complex or perceptually shifted output based on the input.
Abstract: The teachings described herein are generally directed to a system, method, and apparatus for learning music through an educational audio track embodied on a computer readable medium. The system can comprise components including a processor, an input device, a database, a transformation module, an emulation recording module, an integration engine, an output module, and an output device, wherein each component is operable in itself to perform its function in the system and operable with other system components to provide a system to a user for learning music.
Abstract: An electronic high-hat circuitry system allows the drummer to manually choose the sounds that an electronic high-hat makes when the drummer's foot is off of the pedal and the high-hat instrument is struck. When the pedal is at or near the top of its travel, a primary circuitry switch disables normal foot-controlled positioning circuitry and enables a secondary circuit that sends a selected positioning signal to a drum module. When the pedal is again pressed down, the primary circuitry switch returns control to the primary, pedal controlled circuit. An optional tertiary circuit allows for the choosing of a different sound when the secondary circuit is activated and the high-hat cymbal is tilted. A control panel is used by the drummer to select the desired high-hat sounds of the secondary and tertiary circuits. Also, high-hat instruments are introduced that have removable foot pedals, or no foot pedal.
Abstract: A data sonification method for representing multidimensional numerical information with a plurality of variable-timbre channels rendered in a sound field is described. The method includes generation of a plurality of variable-timbre audio waveforms, each having an audio frequency parameter and at least one timbre modulation parameter having an adjustable value that affects the timbre of the audio waveform. The method includes associating aspects of multidimensional numerical data with the timbre modulation parameter of each audio frequency waveform using a mapping element. The mapping element varies values of timbre modulation parameters responsive to selected values from the multidimensional numerical data. The method also positions each audio frequency waveform within a sonically-rendered sound field, associating aspects of multidimensional numerical data with the sonically-rendered position within the sound field. The sound field can be stereo, two-dimensional, or three-dimensional.
Abstract: An index calculating unit calculates a tonality index of a signal component of each area of the input signal transformed into a time frequency domain based on intensity of the signal component and a function obtained by approximating the intensity of the signal component. A similarity calculating unit calculates a similarity between a feature quantity in each area of the input signal obtained based on the index and the feature quantity in each area of the reference signal obtained based on the index calculated on the reference signal transformed into the time frequency domain. A music identifying unit identifies music of the input signal based on the similarity. The present technology can be applied to a music search apparatus that identifies music from an input signal.
Abstract: An index calculating unit calculates a tonality index of a signal component of each area of an input signal transformed into a time frequency domain based on intensity (for example, power spectrum) of the signal component and a function (quadratic function) obtained by approximating the intensity of the signal component. A music determining unit determines whether or not each area of the input signal includes music based on the tonality index. The present technology can be applied to a music section detecting apparatus that detects a music part from an input signal in which music is mixed with noise.
Abstract: A method of teaching reading includes displaying, by an application executing on a computing device, a singing exercise configured to allow a user to sing along as a song is played. Lyrics of the song are displayed as the song plays, thus allowing the user to read the lyrics as the user sings along to the song. An audio input is monitored as the song is played. A score representing how accurately the audio input matches the song is calculated. The score is provided to the user. A series of target pitch lines representative of target pitches on the display and a target pitch area encompassed about each target pitch line may be displayed. A pitch tracking line from the audio input may be computed and displayed.
Abstract: An object of the present invention is to provide an information processing terminal that identifies emotions from a voice and audibly outputs music suited to the identified emotions, enabling the emotions of the speaker who uttered the voice to be recognized readily. In an information processing terminal according to the present invention, an emotion inferring unit 23 detects, from sound information, at least two emotions of an utterer whose voice is included in the sound information; a music data generating unit 24 synthesizes music data, stored in a music parts database 242 and corresponding to the emotions detected by the emotion inferring unit 23; and a controller 22 reproduces the music data generated by the music data generating unit 24.
Abstract: A morphed musical piece generation system that enables even a user with little knowledge of music to easily generate a morphed musical piece between two different musical pieces is provided. A first intermediate time-span tree data generation section 6 selectively removes difference information between common time-span tree data and first time-span tree data from the first time-span tree data. Also, a second intermediate time-span tree data generation section 7 performs the same operation to obtain second intermediate time-span tree data. A data combining section combines the first intermediate time-span tree data and the second intermediate time-span tree data to generate combined time-span tree data. A musical piece data generation section generates a morphed musical piece on the basis of the combined time-span tree data.
Abstract: A data sonification system for representing a plurality of channels of numerical information is described. The data sonification system includes a plurality of audio waveform generator elements. Each of the audio waveform generator elements generates an associated audio frequency waveform. Each audio frequency waveform has an audio frequency parameter and at least one timbre modulation parameter having a settable value. The timbre modulation parameter affects the timbre of the audio waveform. The data sonification system includes a mapping element for associating aspects of multidimensional numerical data with the timbre modulation parameter of each audio frequency waveform. The mapping element sets the value of the timbre modulation parameter in response to multidimensional numerical data.
Abstract: A method for increasing ring tone volume is provided. The method includes the steps of: reading an audio file which is set as the current ring tone; determining whether the ring tone is an MP3 audio file or a musical instrument digital interface (MIDI) audio file; adjusting frequencies by using an equalizer technique to increase the volume of the ring tone if the ring tone is the MP3 audio file; and, if the ring tone is the MIDI audio file, adjusting the volume level of the ring tone to the highest volume level and adjusting the timbre of the ring tone to increase the ring tone volume by simulating the musical score of the ring tone using different instruments. A related system is also provided.
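The file-type dispatch at the heart of the method can be sketched as below; `boost_ringtone` and the returned action tags are hypothetical placeholders standing in for the equalizer and MIDI-timbre operations, not a real audio API.

```python
def boost_ringtone(path):
    # Dispatch on file type as in the abstract: MP3 files get an
    # equalizer-style frequency boost; MIDI files get maximum volume
    # plus instrument (timbre) substitution; anything else is untouched.
    name = path.lower()
    if name.endswith(".mp3"):
        return "equalizer-boost"
    if name.endswith((".mid", ".midi")):
        return "max-volume+instrument-remap"
    return "unchanged"
```

The case-insensitive suffix check is an assumption; a real implementation would inspect the file header rather than the extension.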
Abstract: A first display object associated with a control operating member and a second display object associated with a tone color effect parameter are displayed, and variation of a displayed position of the first display object is controlled in accordance with operation of the control operating member. A control value of the tone color effect parameter is determined in response to variation in the relationship between the first and second display objects, and tone control is performed on the basis of the determined control value. Further, the variation of the displayed position of the first display object is controlled so as to move on and along a set path, so that displayable positions of the first display object are limited to the set path. The control value of the tone color effect parameter is determined on the basis of the relationship between the displayed positions of the first and second display objects to perform tone control.
Abstract: In a synthesizer 10, when a function of a tone generation module 312 provided by an external tone generation server 310 is usable, a tone generator control module 102 assigns a necessary number of sound generation channels among sound generation channels of an internal tone generation unit 17 and sound generation channels of the external tone generation module 312, for sound generation corresponding to MIDI data. When assigning the sound generation channel of the tone generation module 312, the tone generator control module 102 transmits, to the tone generation server 310, the MIDI data with identification information of the assigned sound generation channel, thereby causing the sound generation channel indicated by the identification information in the tone generation module 312 to generate waveform data according to the transmitted MIDI data.
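The channel-assignment step above can be sketched as a spill-over allocation: internal sound generation channels are used first, and the external tone generation module supplies the remainder. The preference order and function shape are assumptions for illustration.

```python
def assign_channels(needed, internal_free, external_free):
    """Assign sound generation channels for incoming MIDI data,
    preferring the internal tone generation unit and spilling over to
    the external tone generation module when it is usable.
    Returns (internal_ids, external_ids)."""
    if needed > len(internal_free) + len(external_free):
        raise ValueError("not enough free sound generation channels")
    internal = internal_free[:needed]
    external = external_free[:needed - len(internal)]
    # Each external id would be sent to the server with the MIDI data,
    # so the server knows which of its channels renders the waveform.
    return internal, external
```

Transmitting the channel identification alongside the MIDI data is what lets the server-side module render waveform data on exactly the assigned channel.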
Abstract: There is provided a music accompaniment apparatus that is connected to at least one external device to reproduce an audio or video signal, the music accompaniment apparatus including: an audio input section for inputting an external audio signal; an audio signal processing section for processing an audio signal including an accompaniment signal internally provided and an external audio signal input through the audio input section, and externally outputting the processed audio signal; a video signal processing section for processing a video signal including a caption signal, and externally outputting the processed video signal; a time delay calculating section for calculating the difference between a transmission time of a check signal to the at least one external device and a reception time of the check signal from the at least one external device to compute a time delay representing a delay of the audio or video signal for the at least one external device; and a control section for controlling the whole operation of the apparatus.
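The time-delay calculation above can be sketched from the check-signal round trip. Halving the round trip assumes a symmetric path, which is our simplification rather than a claim of the abstract.

```python
def one_way_delay(send_time, recv_time):
    """Estimate a device's delay as half the check-signal round trip
    (the symmetric-path assumption is illustrative)."""
    return (recv_time - send_time) / 2.0

def compensation_target(devices):
    """Pick the largest per-device delay, a natural target when audio
    and caption output must stay synchronized across all devices.
    `devices` is a list of (send_time, recv_time) pairs in seconds."""
    return max(one_way_delay(s, r) for s, r in devices)
```

The apparatus could then delay its own audio or caption output by this amount so that all connected devices stay in step.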
Abstract: In a filter device, a filter coefficient calculation circuit has a parameter table. The parameter table stores a plurality of sets of filter coefficients, each associated with a first parameter based on a frequency and a second parameter based on a respective one of a plurality of levels representing a degree of attenuation or enhancement of the gain of the filter in the filter characteristics. The filter coefficient calculation circuit extracts a set of filter coefficients from the parameter table using the first parameter and the second parameter, which are determined according to the frequency and strength of a musical sound signal, and outputs the extracted set of filter coefficients to the filter. The filter circuit performs filter processing on the musical sound signal based on the filter characteristics determined by the set of filter coefficients.
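The table lookup above can be sketched as a precomputed two-key dictionary, with the signal's measured frequency and strength quantized to the nearest stored parameters. The `design` callback and the nearest-neighbour quantization are illustrative assumptions.

```python
def build_parameter_table(freqs, levels, design):
    """Precompute coefficient sets keyed by the first parameter
    (frequency) and the second parameter (gain level); `design` stands
    in for whatever filter-design rule produced the stored sets."""
    return {(f, l): design(f, l) for f in freqs for l in levels}

def nearest(options, x):
    """Quantize a measured value to the closest tabulated parameter."""
    return min(options, key=lambda o: abs(o - x))

def lookup_coefficients(table, freqs, levels, freq, strength):
    """Extract the stored coefficient set for a signal's frequency and
    strength, as the calculation circuit does with its parameter table."""
    return table[(nearest(freqs, freq), nearest(levels, strength))]
```

Storing finished coefficient sets trades memory for speed: the circuit avoids recomputing a filter design on every change of the musical sound signal.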
Abstract: An apparatus, method and system for generating music in real time are provided. A pipeline for coordinating generation of a musical piece is created. At least one producer is loaded into the pipeline, the at least one producer for producing at least one high level musical element of the musical piece, independent of other producers in the pipeline. At least one generator is called by the at least one producer, the at least one generator for generating at least one low level music element of the musical piece. The at least one low level musical element and the at least one high level musical element are integrated, such that the musical piece is generated in real time.
September 19, 2008
Date of Patent: November 15, 2011
The University of Western Ontario
Maia Hoeberechts, Ryan Demopoulos, Michael Katchabaw
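The producer/generator pipeline described in the abstract above can be sketched as follows; the class shape, the chord generator, and the MIDI note numbers are illustrative stand-ins for the patented architecture.

```python
class Pipeline:
    """Coordinates real-time generation: each loaded producer
    contributes high-level musical elements independently, and the
    pipeline integrates them into the piece."""
    def __init__(self):
        self.producers = []

    def load(self, producer):
        self.producers.append(producer)

    def run(self):
        piece = []
        for produce in self.producers:  # producers are independent
            piece.extend(produce())
        return piece

def chord_generator(root):
    """Low-level generator: a major triad as MIDI note numbers."""
    return [root, root + 4, root + 7]

def harmony_producer():
    """High-level producer: a two-chord progression, delegating the
    low-level material to the generator it calls."""
    return chord_generator(60) + chord_generator(65)

pipeline = Pipeline()
pipeline.load(harmony_producer)
```

Keeping producers independent is what allows the piece to be assembled incrementally, in real time, as each producer's output arrives.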
Abstract: This invention provides a signal processing and signal synthesis technique from a family of signal processing and signal synthesis techniques designed to readily interwork or be used individually in creating new forms of rich musical timbres. Phase staggered multi-channel signal panning creates spatial perturbation and chase effects for subtle or dramatic application, and may be swept with control signals from a low-frequency oscillator, transient envelope, or other source. Phase-staggering and modulation parameters may be recalled from stored program control or modulated in real-time by arbitrary control signals, including those derived from the original input signal. The invention may be used individually or in conjunction with other signal processing and signal synthesis techniques in creating new forms of rich musical timbres. The invention may also be used in spatially-distributed timbre construction.
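The phase-staggered panning above can be sketched as one LFO driving every channel's gain with a per-channel phase offset, which sweeps the signal across the channel array. The raised-sine gain law and parameter names are illustrative choices.

```python
import math

def pan_gains(n_channels, t, rate_hz, stagger):
    """Per-channel gains from a single low-frequency oscillator with a
    per-channel phase stagger, producing the spatial 'chase' effect."""
    return [0.5 * (1.0 + math.sin(2 * math.pi * rate_hz * t + k * stagger))
            for k in range(n_channels)]

def pan_sample(sample, gains):
    """Distribute one mono input sample across the staggered channels."""
    return [sample * g for g in gains]
```

With a small stagger the motion is a subtle spatial perturbation; a stagger near pi/2 makes the peak hop channel to channel for the dramatic chase effect.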
Abstract: This invention provides a signal processing and signal synthesis technique from a family of signal processing and signal synthesis techniques designed to readily interwork or be used individually in creating new forms of rich musical timbres. Amplitude-envelope controlled time-modulation and pitch-modulation are employed to add rich and attention-getting aspects to solo lines and chords. The amplitude envelope may be measured from the signal being modulated, a delayed version of this signal, or another signal source. Modulation characteristics and parameters may be recalled from stored program control or modulated in real-time by arbitrary control signals, including those derived from the original input signal. The invention may be used individually or in conjunction with other signal processing and signal synthesis techniques in creating new forms of rich musical timbres. The invention may also be used in spatially-distributed timbre construction.
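The amplitude-envelope control above can be sketched as an envelope follower whose output drives a pitch-modulation depth. The attack/release coefficients and the linear pitch law are illustrative assumptions.

```python
def envelope_follower(samples, attack=0.5, release=0.1):
    """Track the amplitude envelope of the signal being modulated:
    fast attack on rising level, slow release on falling level."""
    env, out = 0.0, []
    for s in samples:
        coeff = attack if abs(s) > env else release
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def pitch_ratio(env, depth=0.5):
    """Envelope-controlled pitch modulation: louder input bends the
    pitch further (the linear law is an illustrative choice)."""
    return 1.0 + depth * env
```

Because the envelope can also be measured from a delayed copy or another source, the pitch bend need not track the modulated signal itself.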
Abstract: This invention provides a signal processing and signal synthesis technique from a family of signal processing and signal synthesis techniques designed to readily interwork or be used individually in creating new forms of rich musical timbres. A plurality of audio signal delays, each with high resonance positive feedback, distortion characteristics, and selectable delay times corresponding to a desired resonant frequency, provide twang and resonance synthesis for moments of sparkle or vibrantly-responsive ongoing backdrops. The selectable delay times may match a musical scale or other resonant frequency distribution. Delay, feedback, and signal processing characteristics and parameters may be recalled from stored program control or modulated in real-time by arbitrary control signals, including those derived from the original input signal. The invention may be used individually or in conjunction with other signal processing and signal synthesis techniques in creating new forms of rich musical timbres.
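One resonant delay from the plurality above can be sketched as a feedback delay line whose length is set from the desired resonant frequency, in the spirit of Karplus-Strong string synthesis; the distortion stage is omitted and the parameters are illustrative.

```python
def resonant_delay(signal, sample_rate, resonant_hz, feedback=0.9):
    """Delay line with positive feedback whose selectable length
    matches a desired resonant frequency, giving the 'twang'.
    A bank of these, tuned to a musical scale, yields the backdrop."""
    delay_len = max(1, round(sample_rate / resonant_hz))
    buf = [0.0] * delay_len
    out = []
    for i, x in enumerate(signal):
        y = x + feedback * buf[i % delay_len]  # recirculate the delay tap
        buf[i % delay_len] = y
        out.append(y)
    return out
```

With feedback below 1.0 each resonance rings and decays; tuning several such delays to scale degrees produces the vibrantly-responsive ongoing backdrop the abstract describes.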
Abstract: A method of sound-object oriented analysis and note-object oriented processing of a polyphonic digitized sound recording present in the form of a time signal F(A, t) includes the following analytical and processing steps: portion-wise readout of the time signal F(A, t) using a window function and overlapping windows; Fourier-transforming the read-out signal into frequency space, in particular by applying a discrete Fourier transform; calculating an energy value E at each bin from the frequency amplitude resulting from the Fourier transformation, in particular by squaring the real and imaginary parts or forming energy values derived from them; generating a function F(t, f, E); identifying event objects; identifying note objects; comparing the temporal occurrence of event objects and note objects and associating event objects with note objects in the case of plausible time occurrences; and calculating spectral proportion factors for each note object.
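The first three analysis steps above (windowing, discrete Fourier transform, per-bin energy E = Re² + Im²) can be sketched for a single frame; the Hann window is an illustrative choice of window function.

```python
import cmath
import math

def window_energies(frame):
    """Hann-window one portion of the time signal, apply a discrete
    Fourier transform, and square real and imaginary parts to get the
    per-bin energy values E used to build the function F(t, f, E)."""
    n = len(frame)
    windowed = [x * 0.5 * (1 - math.cos(2 * math.pi * i / n))
                for i, x in enumerate(frame)]
    bins = [sum(windowed[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
            for k in range(n // 2 + 1)]  # non-redundant half for real input
    return [b.real ** 2 + b.imag ** 2 for b in bins]
```

Sliding this over overlapping windows yields energy as a function of time and frequency, from which the event and note objects are then identified.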
Abstract: The present invention concerns itself with methods and systems for just intonation tuning of a digital/electrical piano in real time. A simple and economical solution is presented, which makes use of a PLC (i.e., Programmable Logic Controller) having 13 inputs (an octave plus one input for the pedal) and 22 outputs (22 possible frequencies per octave), relays, and parallel connections between octaves and PLC inputs, as well as between PLC outputs and relays.
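The frequency selection behind just intonation can be sketched with a ratio table relative to the tonic. The 5-limit 12-ratio table below is a common textbook choice, simpler than the patent's 22 frequencies per octave, which exist precisely because one fixed 12-note table cannot serve every key.

```python
from fractions import Fraction

# A common 5-limit just-intonation table for 12 scale degrees;
# the patented system switches among 22 frequencies per octave instead.
JI_RATIOS = [Fraction(1, 1), Fraction(16, 15), Fraction(9, 8),
             Fraction(6, 5), Fraction(5, 4), Fraction(4, 3),
             Fraction(45, 32), Fraction(3, 2), Fraction(8, 5),
             Fraction(5, 3), Fraction(16, 9), Fraction(15, 8)]

def ji_frequency(tonic_hz, degree, octave=0):
    """Frequency of scale degree 0-11 above the tonic in just
    intonation, shifted by whole octaves (factor of 2 per octave)."""
    return float(tonic_hz * JI_RATIOS[degree] * 2 ** octave)
```

Small-integer ratios such as 3/2 make intervals beat-free, which is the audible payoff of retuning in real time as the harmony changes.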