Abstract: An electronic musical instrument allows a player to perform music while operating the operators as few times as possible, so that the player can play music easily and enjoyably. For every measure, defined by plural beats counted according to a designated meter, a prior tone is determined from among the automatic playing music data. The prior tone is a musical tone that is made note-on, for example, at the timing of a downbeat in the measure. If a candidate for the prior tone is one of the chord composing tones and that tone can compose a melody, the tone is decided as the prior tone. The prior tones, successively decided from the beginning of the automatic playing music data, are indicated to the player as lighted-up keys. The player operates the lighted-up keys in succession to perform the automatic playing music data.
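The prior-tone selection described above can be sketched as a small function. The downbeat-plus-chord-tone rule follows the abstract; the note representation (beat offsets and MIDI pitches) and the "keep the first downbeat note per measure" tie-break are illustrative assumptions:

```python
def prior_tones(notes, chord_tones, beats_per_measure=4):
    """Pick one 'prior tone' per measure: the note sounding on the
    downbeat, kept only if its pitch class is a chord composing tone.
    notes: list of (beat, midi_pitch); chord_tones: set of pitch classes."""
    tones = {}
    for beat, pitch in notes:
        measure, offset = divmod(beat, beats_per_measure)
        # Downbeat (offset 0) candidates that are chord tones qualify.
        if offset == 0 and pitch % 12 in chord_tones:
            tones.setdefault(measure, pitch)
    return [tones[m] for m in sorted(tones)]
```

In a full instrument, each returned tone would drive one lighted-up key in sequence.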
Abstract: A signal processing apparatus has a first memory storing plural pieces of FIR coefficient data used to implement an FIR filter algorithm, a second memory storing plural pieces of input data to be subjected to the FIR filter algorithm, and a processor that implements the FIR filter algorithm, using the coefficient data stored in the first memory and the input data stored in the second memory, as many times as the number corresponding to a designated filter order; in the filter algorithm, each piece of coefficient data is multiplied by a corresponding piece of input data and the resultant products are summed. The apparatus can thereby implement plural sorts of FIR filter algorithms whose filter order can be changed flexibly.
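The multiply-accumulate loop of a direct-form FIR filter with a configurable order can be sketched as follows; this is a plain software model of the algorithm, not the patented memory/processor architecture:

```python
def fir_filter(coeffs, samples):
    """Direct-form FIR: y[n] = sum_k h[k] * x[n-k].
    The filter order follows len(coeffs), mirroring the
    order-configurable multiply-accumulate described in the abstract."""
    order = len(coeffs)
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k in range(order):
            if n - k >= 0:  # skip samples before the start of input
                acc += coeffs[k] * samples[n - k]
        out.append(acc)
    return out
```

Changing the length of `coeffs` changes the filter order without touching the loop itself, which is the flexibility the abstract claims.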
Abstract: A method for testing the operation of an elevator including an elevator car includes starting movement of the elevator car; thereafter starting a stopping sequence for stopping movement of the elevator car; monitoring movement of the elevator car, the monitoring preferably including monitoring acceleration of the car; detecting a predefined response in the movement of the car, the predefined response preferably being cessation of acceleration of the car; determining the time elapsed between the start of the stopping sequence and the detected predefined response, thereby determining the reaction time of the elevator; and comparing the elapsed time with at least one reference, such as at least one predefined threshold. An elevator for implementing the method is also provided.
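The timing step can be sketched numerically: given sampled acceleration readings and the sample index at which the stopping sequence was started, find when acceleration first ceases and compare the elapsed time to a threshold. The sampling model and the near-zero tolerance `eps` are assumptions for illustration:

```python
def reaction_time(accel_samples, stop_index, dt, eps=1e-3):
    """accel_samples: acceleration readings taken every dt seconds.
    stop_index: sample at which the stopping sequence was started.
    Returns elapsed time until |acceleration| first falls below eps
    (the 'cessation of acceleration' response), or None if not seen."""
    for i in range(stop_index, len(accel_samples)):
        if abs(accel_samples[i]) < eps:
            return (i - stop_index) * dt
    return None

def within_reference(elapsed, threshold):
    """Compare the measured reaction time with a predefined threshold."""
    return elapsed is not None and elapsed <= threshold
```

A test run would pass if the measured reaction time stays under the reference threshold.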
Abstract: An electronic musical instrument includes a frame and a plate-shaped member attached to the frame. The frame includes a first rib extending in a first direction; a second rib being arranged on a second direction side intersecting the first direction with respect to the first rib and extending in the first direction; a first wall portion extending in the second direction and connecting the first rib and the second rib; a second wall portion facing the first wall portion and extending in the second direction; an extending portion being arranged on the first wall portion and being bendable toward the first direction; a first projecting portion projecting from the extending portion toward the second wall portion; and a second projecting portion projecting from the second wall portion toward the first wall portion.
Abstract: In an electronic percussion instrument 100, a mesh-shaped head 101 is supported by a shell 103 formed into a cylindrical shape. The shell 103 includes an optical sensor 105 inside and a sensor cover body 110 outside. The optical sensor 105 includes a light emitter 105a that emits light toward a back surface 101b of the head 101 and a light receiver 105b that receives light reflected from the back surface 101b of the head 101. The optical sensor 105 is supported by a sensor supporting body 104 at a position adjacent to the back surface 101b of the head 101. The sensor cover body 110 is sized to cover the optical sensor 105 at a position opposed to the optical sensor 105 on the struck-surface 101a side, which is opposite the back surface 101b of the head 101 and faces the optical sensor 105.
Abstract: A music data memory includes pieces of music within a group and other pieces of music outside the group. The next piece to be played is automatically determined by a random table from among the pieces within the group. A favorite or the newest piece is weighted so as to be played more frequently within the group. A piece in the music data memory is automatically added to the group by the random table, and a piece newly downloaded into the music data memory is added to the group with priority. The most frequently played piece is removed from the group in place of a newly added piece; a favorite or the newest piece may be exempt from removal. The next piece can be played at a tempo similar to that of the preceding piece, by means of tempo adjustment, piece replacement, or repetition of the same piece, so that continued baby cradling stays in synchronism with the tempo of succeeding pieces.
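The weighted random table can be sketched as follows; the specific weights given to favorite and newest pieces are illustrative assumptions, not values from the abstract:

```python
import random

def build_random_table(group):
    """group: list of (piece, play_count, is_favorite, is_newest).
    Favorites and the newest piece get extra entries in the table
    (hypothetical weighting), so they are drawn more frequently."""
    table = []
    for piece, _plays, fav, new in group:
        weight = 1 + (2 if fav else 0) + (2 if new else 0)
        table.extend([piece] * weight)
    return table

def next_piece(table, rng=random):
    """Draw the next piece to play from the weighted random table."""
    return rng.choice(table)
```

Exclusion of the most frequently played piece would simply drop its entries from `group` before rebuilding the table.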
Abstract: Provided are a display device and a program which allow a user to intuitively recognize the connections between notes and the breathing timing. A CPU (11) generates a guide image based on information about the sound-producing timing and sound length of each note, which are included in a guide melody track. The CPU (11) smoothly connects the respective notes, and thereafter disconnects them at the breathing timing indicated in a breath position track.
Abstract: A musical instrument is provided. The musical instrument includes at least, but is not limited to, a core portion providing a neck and a headstock portion, a body portion attached to the core portion, the body portion including at least a relief configured to accommodate an arm of a chair, and a tuning structure secured to the headstock portion. The musical instrument further includes at least, but is not limited to, a plurality of strings secured to the tuning structure, a bridge portion communicating with the plurality of strings, and a pickup secured to the body portion and interacting with the plurality of strings.
Abstract: An electronic mallet controller includes a plurality of bars representing musical notes. Each active bar produces a signal indicative of the respective musical note when struck by an implement, and all adjacent bars are spaced apart with the same spacing. A first user input permits a user to select the lowest diatonic natural note of the range of the musical instrument, thereby defining the locations of dead notes. A processor circuit interprets each signal as an outputted musical note. Based on the first user input, the processor circuit shifts the mapping between the bars and the musical notes to be outputted, causing the dead-note locations to be associated with certain of the bars, which become inactive bars. An indicator is associated with the inactive bars to indicate the locations of the dead notes to the user.
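One plausible model of the mapping shift is an upper (accidental) row of equally spaced bars laid out diatonically: between E-F and B-C there is no accidental, so those positions become dead bars, and choosing a different lowest natural note moves where those gaps fall. This layout model is an assumption for illustration, not a detail stated in the abstract:

```python
NATURALS = ["C", "D", "E", "F", "G", "A", "B"]
# Whether a sharp exists above each natural; E-F and B-C have none.
SHARP_AFTER = {"C": True, "D": True, "E": False, "F": True,
               "G": True, "A": True, "B": False}

def upper_row_mapping(lowest_natural, n_bars):
    """Map the upper row of bars to output notes, given the
    user-selected lowest diatonic natural note. Bars falling in the
    E-F or B-C gaps become dead (None) and would be marked inactive."""
    start = NATURALS.index(lowest_natural)
    row = []
    for i in range(n_bars):
        nat = NATURALS[(start + i) % 7]
        row.append(nat + "#" if SHARP_AFTER[nat] else None)
    return row
```

Changing `lowest_natural` from "C" to "F" shifts the dead positions, which is the behavior the first user input controls.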
Abstract: An interchangeable pickup support is for a stringed musical instrument. The support is of the type that includes at least a base body provided with an attachment to a stringed musical instrument and a support for supporting and attaching the pickup to the base body. The attachment includes at least a fastener which is movable by an actuator which is rigidly connected to the fastener, the ends of which are configured to project in part through respective holes arranged on the outer surface of the base body and attached to the stringed musical instrument. The base body also includes at least a retainer for the fastener in the attachment to the stringed musical instrument.
Abstract: An application operating on a smartphone records a musician's performance, either vocal or instrumental, in combination with pre-recorded music. The combination allows for auto-tuning of the recording, compression of the recording, equalization of the recording, the addition of reverb, and audio quantization of the rhythm. Once combined, the song is transmitted to social media and/or to an online store for sale. The user can also make a video with the song. Additional marketing features, such as song competitions or music reviews and ratings, are also provided.
Abstract: An electronic musical instrument includes a keyboard, a sound-producing system, and a control unit. A panning value indicating a left-right balance of the sound output from a left speaker and a right speaker in stereo sound production is assigned to each key of the keyboard, such that the left-right balance of the soundboard resonant sound waveform data for certain higher-pitch keys is shifted towards the left speaker as compared with that for certain lower-pitch keys located to the left of the higher-pitch keys, so as to produce realistic piano sound mimicking an actual piano.
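A per-key panning assignment for the resonance layer can be sketched as below; the linear ramp and the pan range are illustrative assumptions, with only the direction of the shift (higher keys toward the left speaker) taken from the abstract:

```python
def resonance_pan(key_index, n_keys=88):
    """Pan value in [-1.0, 1.0] (-1 = full left, +1 = full right) for
    the soundboard-resonance waveform of a given key. Higher keys are
    shifted toward the left speaker; the linear ramp is an assumption."""
    return 1.0 - 2.0 * key_index / (n_keys - 1)
```

Each key would carry its own panning value, so resonance for key 87 sits further left than for key 0.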
Abstract: A method and a device provide a user with the ability to freely improvise or play around with a selection of different chords, with visual guidance assisting the improvisation of a melody, while also providing accompaniment consistent with the selection of chords. Sound files consistent with a user's selection of a chord and/or a dynamic level are selected from an audio library and played, while the user is given visual cues on a user-interface keyboard that assist in selecting notes consistent with the chord chosen for the accompaniment.
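The visual-cue step can be sketched as selecting which keys to highlight for the chosen chord; the chord-interval table and the "highlight all chord tones in range" policy are assumptions for illustration:

```python
def keys_to_highlight(chord_root, chord_type, low=48, high=72):
    """Return MIDI key numbers to light up as visual cues: keys whose
    pitch class belongs to the selected chord. chord_root is a pitch
    class (0 = C); intervals are standard chord spellings."""
    intervals = {"maj": (0, 4, 7), "min": (0, 3, 7), "dom7": (0, 4, 7, 10)}
    pcs = {(chord_root + i) % 12 for i in intervals[chord_type]}
    return [k for k in range(low, high + 1) if k % 12 in pcs]
```

The accompaniment side would separately pick sound files matching the same chord and dynamic level.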
Abstract: An electronic musical instrument includes a plurality of keys respectively specifying different pitches when operated; a memory; and a sound processor. In response to a current operation of a current key, which is one of the plurality of keys, the sound processor retrieves the information stored in the memory for a previous operation, if any, of a previous key, which is the same as the current key or is another one of the plurality of keys, and performs prescribed processing on a beginning part of the waveform data generated for the current operation of the current key in accordance with the retrieved information, so as to generate processed waveform data in response to the current operation. The resulting processed waveform data can be configured to better mimic an artist's performance of an original instrument.
Abstract: In order to help music players without sufficient musical knowledge adapt original music pieces while keeping the original style, the present invention provides an interactive system, and an accompanying method, for creating music by substituting audio tracks. The interactive system includes a database of musical elements comprising tonality, tempo, beat, timbre, texture, chord, and pitch; a database of music containing multiple original music pieces; and a processor. As a result, players without strong knowledge of music theory can create an adapted music piece that matches the style of the original.
Abstract: A musical instrument with two or more classes of pitches where at least one class is a diminished chord extended over an arbitrary number of octaves, and another class is another chord extended over the same span of octaves. The pitches of the second class of pitches are interlaced with the pitches of the diminished chord of the first class of pitches. Additional classes are chords similarly interlaced with other classes, with one class of any interlaced pair of classes being a diminished chord.
Abstract: A method, apparatus, and system that enable a user to find and act upon a sound-containing composition in a group of compositions. One or more sound-segments, intended to prompt a user's memory, may be associated with each composition in a group of compositions. A recognition sound-segment may include a portion of its associated composition that is more recognizable to users than the beginning part of that composition; a recognition-segment may contain one or more highly recognizable portions of a composition. When the user is trying to locate or select a particular composition, the recognition-segments are navigated and played back to the user, based upon a user-device context/mode. When a user recognizes the desired composition from its recognition-segment, the user may initiate a control action to play back, arrange, and/or act upon the composition associated with the currently playing recognition-segment.
Abstract: An audio generation method, server, and storage medium are provided. The method includes obtaining a comparison audio and performing theme extraction on it to obtain a comparison note sequence, the comparison note sequence comprising comparison note positions, comparison note pitches, and comparison note durations; obtaining an original audio matching the comparison audio via audio retrieval, and obtaining an original note sequence corresponding to the original audio by performing theme extraction on the original audio, the original note sequence comprising original note positions, original note pitches, and original note durations; calculating theme distances between fragments of the comparison audio and fragments of the original audio according to the comparison note sequence and the original note sequence; and generating audio by extracting the fragment of the original audio that has the smallest theme distance.
December 18, 2017
Date of Patent: April 16, 2019
TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
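The matching step above can be sketched with a simplified theme distance: slide the comparison note sequence over the original and pick the fragment with the smallest distance. Using the sum of absolute pitch differences as the distance is an illustrative stand-in for the patent's actual metric, which also involves note positions and durations:

```python
def theme_distance(seq_a, seq_b):
    """Distance between two equal-length note sequences: sum of
    absolute pitch differences (a simplified stand-in metric)."""
    return sum(abs(a - b) for a, b in zip(seq_a, seq_b))

def best_fragment(comparison, original):
    """Slide the comparison sequence over the original note sequence
    and return (start_index, fragment) with the smallest theme distance."""
    n = len(comparison)
    best = min(range(len(original) - n + 1),
               key=lambda i: theme_distance(comparison, original[i:i + n]))
    return best, original[best:best + n]
```

The generated audio would then be cut from the original recording at the fragment boundaries found here.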
Abstract: Systems and methods for re-arranging a musical composition using augmented reality are disclosed. A user can be provided with one or more physical image markers representative of at least one of a musical entity and a musical style in which a musical track has been pre-recorded. Upon capturing a visual scene in which one or more of these physical image markers are included, an augmented reality version of the visual scene is displayed along with audio corresponding to the musical style(s) and/or entities represented by the one or more physical image markers.
Abstract: Real-time pitch detection of voiced musical notes involves converting sound waves, produced by a voiced rendition of one or more musical notes, to a time domain electronic audio signal. The electronic audio signal is processed to determine a true pitch of the time domain electronic audio signal. True pitch information is displayed in real-time, concurrent with the voiced rendition of each musical note. A pitch indicator conveys to a user information concerning the true pitch which has been determined. The true pitch is determined by segmenting the electronic audio signal into a plurality of audio signal samples and applying a constant-Q transform. Additional processing steps are applied to reduce pitch detection errors.
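The constant-Q analysis at the core of the method above can be sketched naively for one audio frame: each bin uses a window whose length scales inversely with its center frequency, and the bin with the largest magnitude gives the pitch estimate. This omits the windowing and error-reduction steps the abstract mentions, and the frequency range and bin count are illustrative assumptions:

```python
import cmath
import math

def cqt_pitch(samples, sr, fmin=110.0, bins_per_octave=12, n_bins=36):
    """Estimate pitch of one frame via a naive constant-Q transform.
    Each bin k is centered at fmin * 2^(k / bins_per_octave) and
    analyzed over Q * sr / f_k samples; the peak bin wins."""
    q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)  # constant Q factor
    best_f, best_mag = None, -1.0
    for k in range(n_bins):
        fk = fmin * 2 ** (k / bins_per_octave)
        nk = min(len(samples), int(math.ceil(q * sr / fk)))
        acc = 0j
        for n in range(nk):
            acc += samples[n] * cmath.exp(-2j * math.pi * q * n / nk)
        mag = abs(acc) / nk
        if mag > best_mag:
            best_f, best_mag = fk, mag
    return best_f
```

Running this per segment of the incoming audio signal yields the real-time pitch track shown to the user.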