Patents by Inventor Keita Arimoto
Keita Arimoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11568839
Abstract: A performance support device includes a first surface, a second surface, and a flow path. The first surface is configured to face a blow hole of an air reed instrument. The second surface is configured to be displaced from the blow hole. The flow path penetrates from the first surface to the second surface and is configured to direct exhaled breath toward the blow hole. A cross-sectional area of the flow path at the first surface is smaller than a cross-sectional area of the flow path at a position displaced from the first surface toward the second surface.
Type: Grant
Filed: December 22, 2020
Date of Patent: January 31, 2023
Assignee: Yamaha Corporation
Inventors: Keita Arimoto, Masafumi Fuke, Kazuhiro Fujita, Akira Miki
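As an illustration (not part of the patent text), the converging flow path described above can be understood through the continuity equation for steady, incompressible flow: a smaller cross-sectional area at the first surface implies a faster air stream toward the blow hole. The function name and the numbers below are invented for the example.

```python
def exit_velocity(flow_rate_cm3_s: float, area_cm2: float) -> float:
    """Mean air velocity (cm/s) through a cross-section of the flow path,
    from conservation of mass: flow rate = area x velocity."""
    return flow_rate_cm3_s / area_cm2

# Hypothetical numbers: breath supplied at 500 cm^3/s.
area_inner = 1.0   # cm^2, wider section toward the second surface
area_exit = 0.25   # cm^2, narrower opening at the first surface

v_inner = exit_velocity(500.0, area_inner)
v_exit = exit_velocity(500.0, area_exit)
assert v_exit > v_inner  # the converging path accelerates the exhaled breath
```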
-
Publication number: 20210201859
Abstract: A performance support device includes a first surface, a second surface, and a flow path. The first surface is configured to face a blow hole of an air reed instrument. The second surface is configured to be displaced from the blow hole. The flow path penetrates from the first surface to the second surface and is configured to direct exhaled breath toward the blow hole. A cross-sectional area of the flow path at the first surface is smaller than a cross-sectional area of the flow path at a position displaced from the first surface toward the second surface.
Type: Application
Filed: December 22, 2020
Publication date: July 1, 2021
Inventors: Keita Arimoto, Masafumi Fuke, Kazuhiro Fujita, Akira Miki
-
Patent number: 11011187
Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types for each of the clusters that classify the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representing a feature amount, from among the feature amounts of the audio, that belongs to a cluster, together with the scene type set for that cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
Type: Grant
Filed: July 2, 2020
Date of Patent: May 18, 2021
Assignee: Yamaha Corporation
Inventors: Yuta Yuyama, Keita Arimoto
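As an illustration (not from the patent itself), the pipeline in this abstract can be sketched in a few lines: assign feature vectors to clusters, set one scene type per cluster, emit (feature, scene) learning pairs, and fit a model. The patent only says "machine learning"; a nearest-centroid classifier stands in for the identification model here, and all names and data are invented.

```python
import math

def dist(a, b):
    return math.dist(a, b)

def assign_clusters(features, centroids):
    """Index of the nearest centroid for each feature vector."""
    return [min(range(len(centroids)), key=lambda k: dist(f, centroids[k]))
            for f in features]

def make_learning_data(features, cluster_ids, scene_of_cluster):
    """Pair every feature with the scene type set for its cluster."""
    return [(f, scene_of_cluster[c]) for f, c in zip(features, cluster_ids)]

def fit_identification_model(learning_data):
    """Per-scene mean feature vector (a minimal learned model)."""
    sums, counts = {}, {}
    for f, scene in learning_data:
        acc = sums.setdefault(scene, [0.0] * len(f))
        for i, x in enumerate(f):
            acc[i] += x
        counts[scene] = counts.get(scene, 0) + 1
    return {s: [x / counts[s] for x in acc] for s, acc in sums.items()}

def identify(model, feature):
    return min(model, key=lambda s: dist(feature, model[s]))

# Toy 2-D feature amounts and two clusters labeled with scene types.
features = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
centroids = [(0.1, 0.2), (0.9, 0.85)]
scene_of_cluster = {0: "dialogue", 1: "battle"}

data = make_learning_data(features, assign_clusters(features, centroids),
                          scene_of_cluster)
model = fit_identification_model(data)
assert identify(model, (0.12, 0.22)) == "dialogue"
```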
-
Publication number: 20200335127
Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types for each of the clusters that classify the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representing a feature amount, from among the feature amounts of the audio, that belongs to a cluster, together with the scene type set for that cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
Type: Application
Filed: July 2, 2020
Publication date: October 22, 2020
Inventors: Yuta Yuyama, Keita Arimoto
-
Patent number: 10789972
Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters that classify the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representing a feature amount, from among the feature amounts, that belongs to the cluster, together with the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
Type: Grant
Filed: August 26, 2019
Date of Patent: September 29, 2020
Assignee: Yamaha Corporation
Inventors: Yuta Yuyama, Keita Arimoto
-
Patent number: 10748556
Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters that classify the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representing a feature amount, from among the feature amounts, that belongs to the cluster, together with the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
Type: Grant
Filed: August 26, 2019
Date of Patent: August 18, 2020
Assignee: Yamaha Corporation
Inventors: Yuta Yuyama, Keita Arimoto
-
Publication number: 20190378534
Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters that classify the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representing a feature amount, from among the feature amounts, that belongs to the cluster, together with the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
Type: Application
Filed: August 26, 2019
Publication date: December 12, 2019
Inventors: Yuta Yuyama, Keita Arimoto
-
Patent number: 10298192
Abstract: A recorded signal representing a recorded sound generated by a sound generation source is reproduced. A type of the sound generation source of a performance sound represented by a performance signal is specified. The sound volume of the recorded signal is reduced when the sound generation source of the recorded signal corresponds to the specified type of sound generation source.
Type: Grant
Filed: March 28, 2018
Date of Patent: May 21, 2019
Assignee: Yamaha Corporation
Inventor: Keita Arimoto
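As an illustration (not from the patent), the core logic is a ducking rule: when the recorded track's sound source matches the type identified for the live performance signal, its playback gain is reduced. The gain value and function name below are invented for the example; the patent does not fix an attenuation amount.

```python
DUCK_GAIN = 0.5  # assumed attenuation factor, chosen for illustration

def adjust_playback(recorded_source: str, performance_source: str,
                    samples: list[float]) -> list[float]:
    """Reduce playback volume if both signals share a sound-source type."""
    gain = DUCK_GAIN if recorded_source == performance_source else 1.0
    return [s * gain for s in samples]

# A recorded piano part is attenuated while a piano is being played live;
# a recorded violin part would pass through unchanged.
quieter = adjust_playback("piano", "piano", [0.5, -0.25])
assert quieter == [0.25, -0.125]
```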
-
Patent number: 10243680
Abstract: An audio processing device has an identification module and an adjustment information acquisition module. The identification module identifies the musical instrument corresponding to each of a plurality of audio signals. The adjustment information acquisition module acquires adjustment information for adjusting each of the audio signals according to the combination of the identified musical instruments.
Type: Grant
Filed: March 26, 2018
Date of Patent: March 26, 2019
Assignee: Yamaha Corporation
Inventor: Keita Arimoto
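As an illustration (not from the patent), adjustment information keyed by the combination of identified instruments can be sketched as a lookup table indexed by an order-independent instrument set. The table contents and names are invented; the patent does not specify the adjustment values.

```python
# Hypothetical per-combination adjustment table: the same instrument can get
# different settings depending on which other instruments are present.
ADJUSTMENT_TABLE = {
    frozenset({"vocal", "guitar"}): {"vocal": {"gain": 1.0},
                                     "guitar": {"gain": 0.8}},
    frozenset({"vocal", "guitar", "drums"}): {"vocal": {"gain": 1.0},
                                              "guitar": {"gain": 0.7},
                                              "drums": {"gain": 0.6}},
}

def acquire_adjustment_info(identified: dict[str, str]) -> dict[str, dict]:
    """Map each signal id to adjustment settings for its instrument, chosen
    according to the full combination of identified instruments."""
    combo = frozenset(identified.values())
    per_instrument = ADJUSTMENT_TABLE.get(combo, {})
    return {sig: per_instrument.get(inst, {"gain": 1.0})
            for sig, inst in identified.items()}

info = acquire_adjustment_info({"ch1": "vocal", "ch2": "guitar"})
assert info["ch2"] == {"gain": 0.8}
```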
-
Publication number: 20180219638
Abstract: An audio processing device has an identification module and an adjustment information acquisition module. The identification module identifies the musical instrument corresponding to each of a plurality of audio signals. The adjustment information acquisition module acquires adjustment information for adjusting each of the audio signals according to the combination of the identified musical instruments.
Type: Application
Filed: March 26, 2018
Publication date: August 2, 2018
Inventor: Keita Arimoto
-
Publication number: 20180219521
Abstract: A recorded signal representing a recorded sound generated by a sound generation source is reproduced. A type of the sound generation source of a performance sound represented by a performance signal is specified. The sound volume of the recorded signal is reduced when the sound generation source of the recorded signal corresponds to the specified type of sound generation source.
Type: Application
Filed: March 28, 2018
Publication date: August 2, 2018
Inventor: Keita Arimoto
-
Patent number: 9053696
Abstract: It is an object of the present invention to provide an improved technique for searching for a tone data set of a phrase constructed in a rhythm pattern that satisfies a predetermined condition of similarity to a rhythm pattern intended by a user. The user inputs a rhythm pattern via a rhythm input device. An input rhythm pattern storage section stores the input rhythm pattern into a RAM on the basis of clock signals output from a bar line clock output section and trigger data included in the input rhythm pattern. A rhythm pattern search section searches through a rhythm database for a tone data set presenting the highest degree of similarity to the stored input rhythm pattern. A performance processing section causes a sound output section to audibly output the retrieved tone data set.
Type: Grant
Filed: December 1, 2011
Date of Patent: June 9, 2015
Assignee: Yamaha Corporation
Inventors: Daichi Watanabe, Keita Arimoto
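As an illustration (not from the patent), the search step can be sketched by representing each rhythm pattern as onset times within a bar (normalized to 0..1, as could be derived from bar-line clock signals and trigger data) and returning the database entry with the smallest onset-time distance. The distance measure and all names below are invented; the patent does not specify a particular similarity formula.

```python
def pattern_distance(a: list[float], b: list[float]) -> float:
    """Symmetric sum of nearest-onset distances between two patterns."""
    if not a or not b:
        return float("inf")
    return (sum(min(abs(x - y) for y in b) for x in a)
            + sum(min(abs(y - x) for x in a) for y in b))

def search_rhythm_database(input_pattern, database):
    """Name of the stored tone data set most similar to the input pattern."""
    return min(database,
               key=lambda name: pattern_distance(input_pattern, database[name]))

# Toy rhythm database: onset positions within one bar.
db = {
    "four_on_floor": [0.0, 0.25, 0.5, 0.75],
    "offbeat":       [0.125, 0.375, 0.625, 0.875],
}
assert search_rhythm_database([0.02, 0.26, 0.49, 0.74], db) == "four_on_floor"
```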
-
Patent number: 8853516
Abstract: In an audio analysis apparatus, a component acquirer acquires a component matrix composed of an array of component values, columns of the component matrix corresponding to the sequence of unit periods of an audio signal and rows of the component matrix corresponding to a series of unit bands of the audio signal arranged in a frequency-axis direction. A difference generator generates a plurality of shift matrices each obtained by shifting the columns of the component matrix in the time-axis direction with a different shift amount, and generates a plurality of difference matrices each composed of an array of element values in correspondence to the plurality of the shift matrices, the element value representing a difference between the corresponding component values of the shift matrix and the component matrix.
Type: Grant
Filed: April 6, 2011
Date of Patent: October 7, 2014
Assignee: Yamaha Corporation
Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
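As an illustration (not from the patent), the shift-matrix and difference-matrix construction can be sketched with plain lists: columns are unit periods (time), rows are unit bands (frequency), a shift matrix moves columns in time, and the difference matrix holds element-wise differences against the original. Padding with zeros and the use of absolute differences are assumptions for the example.

```python
def shift_columns(matrix, shift):
    """Shift columns right by `shift` unit periods, padding with zeros."""
    rows, cols = len(matrix), len(matrix[0])
    return [[matrix[r][c - shift] if c - shift >= 0 else 0.0
             for c in range(cols)] for r in range(rows)]

def difference_matrix(matrix, shift):
    """Element-wise difference between the matrix and its shifted copy."""
    shifted = shift_columns(matrix, shift)
    return [[abs(a - b) for a, b in zip(row, srow)]
            for row, srow in zip(matrix, shifted)]

component = [[1.0, 2.0, 1.0, 2.0],   # band 0 over four unit periods
             [0.0, 3.0, 0.0, 3.0]]   # band 1

# A shift of 2 aligns the repeating columns, so the differences vanish
# in the overlapping region -- the cue a repetition detector looks for.
d2 = difference_matrix(component, 2)
assert d2[0][2:] == [0.0, 0.0] and d2[1][2:] == [0.0, 0.0]
```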
-
Patent number: 8543387
Abstract: Disclosed herein are a pitch estimation apparatus and associated methods for estimating a fundamental frequency of an audio signal from a fundamental frequency probability density function by modeling the audio signal as a weighted mixture of a plurality of tone models corresponding respectively to harmonic structures of individual fundamental frequencies, so that the fundamental frequency probability density function of the audio signal is given as a distribution of the respective weights of the tone models.
Type: Grant
Filed: August 31, 2007
Date of Patent: September 24, 2013
Assignee: Yamaha Corporation
Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
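As an illustration (not from the patent), the weighted-mixture idea can be sketched with an EM-style update: each tone model is a normalized spectrum with a harmonic structure, updates raise the weights of models that explain the observed spectrum, and the resulting weight distribution plays the role of the fundamental-frequency probability density. The toy spectra and the specific update rule are assumptions for the example.

```python
def update_weights(observed, models, weights, iterations=50):
    """EM-style reweighting: `observed` and each model are spectra over the
    same frequency bins; returns the normalized weight per tone model."""
    for _ in range(iterations):
        new = [0.0] * len(models)
        for i, obs in enumerate(observed):
            mix = sum(w * m[i] for w, m in zip(weights, models))
            if mix == 0.0:
                continue
            for k, (w, m) in enumerate(zip(weights, models)):
                new[k] += obs * w * m[i] / mix  # responsibility of model k
        total = sum(new)
        weights = [w / total for w in new]
    return weights

# Two toy tone models over 6 bins: one F0 with harmonics at bins 0, 2, 4,
# another with harmonics at bins 1, 3, 5.
model_a = [0.5, 0.0, 0.3, 0.0, 0.2, 0.0]
model_b = [0.0, 0.5, 0.0, 0.3, 0.0, 0.2]
observed = [0.5, 0.0, 0.3, 0.0, 0.2, 0.0]   # matches model_a exactly

weights = update_weights(observed, [model_a, model_b], [0.5, 0.5])
assert weights[0] > 0.99  # the density concentrates on the true F0
```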
-
Patent number: 8494668
Abstract: A character value of a sound signal is extracted for each unit portion, and degrees of similarity between the character values of the individual unit portions are calculated and arranged in a matrix configuration. The matrix has arranged in each column the degrees of similarity acquired by comparing, for each of the unit portions, the sound signal and a delayed sound signal obtained by delaying the sound signal by a time difference equal to an integral multiple of the time length of the unit portion, and has a plurality of columns in association with different time differences. A repetition probability is calculated for each of the columns corresponding to the different time differences in the matrix, and a plurality of peaks in the distribution of the repetition probabilities are identified. A loop region in the sound signal is identified by collating a reference matrix with the similarity matrix.
Type: Grant
Filed: February 19, 2009
Date of Patent: July 23, 2013
Assignee: Yamaha Corporation
Inventors: Bee Suan Ong, Sebastian Streich, Takuya Fujishima, Keita Arimoto
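As an illustration (not from the patent), the repetition-probability step can be sketched as follows: for each time difference (lag), compare the per-unit character values with a delayed copy, average the similarities into one probability per lag, and look for peaks; peak lags suggest candidate loop lengths. The similarity formula and all names are invented for the example.

```python
def repetition_probabilities(values, max_lag):
    """Mean similarity (1 - clipped absolute difference) per lag 1..max_lag."""
    probs = {}
    for lag in range(1, max_lag + 1):
        pairs = list(zip(values, values[lag:]))
        sim = [1.0 - min(abs(a - b), 1.0) for a, b in pairs]
        probs[lag] = sum(sim) / len(sim)
    return probs

def peak_lags(probs):
    """Lags whose repetition probability exceeds both neighbours."""
    lags = sorted(probs)
    return [l for l in lags[1:-1]
            if probs[l] > probs[l - 1] and probs[l] > probs[l + 1]]

# A character-value sequence that repeats every 4 unit portions.
values = [0.1, 0.9, 0.4, 0.6] * 4
probs = repetition_probabilities(values, max_lag=6)
assert 4 in peak_lags(probs)  # the repetition period shows up as a peak
```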
-
Patent number: 8487175
Abstract: In a musical analysis apparatus, a spectrum acquirer acquires a spectrum for each frame of an audio signal representing a piece of music. A beat specifier specifies a sequence of beats of the audio signal. A feature amount extractor divides an interval between the beats into a plurality of analysis periods such that one analysis period contains a plurality of frames, and separates the spectrum of the frames contained in one analysis period into a plurality of analysis bands so as to set a plurality of analysis units in one analysis period in correspondence with the plurality of the analysis bands, such that one analysis unit contains components of the spectrum belonging to the corresponding analysis band.
Type: Grant
Filed: April 6, 2011
Date of Patent: July 16, 2013
Assignee: Yamaha Corporation
Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
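As an illustration (not from the patent), the analysis-unit construction can be sketched as a double split: frames between consecutive beats are divided into analysis periods, each period's spectra are divided into analysis bands, and every (period, band) pair becomes one analysis unit. The even division and all names below are assumptions for the example.

```python
def analysis_units(spectra, beats, num_periods, band_edges):
    """spectra: one spectrum (list of bin magnitudes) per frame.
    beats: frame indices of detected beats. Returns
    {(beat_interval, period, band): list of spectrum slices}."""
    units = {}
    for b, (start, end) in enumerate(zip(beats, beats[1:])):
        frames = list(range(start, end))
        step = max(1, len(frames) // num_periods)
        for p in range(num_periods):
            chunk = frames[p * step:(p + 1) * step]
            for band, (lo, hi) in enumerate(zip(band_edges, band_edges[1:])):
                units[(b, p, band)] = [spectra[f][lo:hi] for f in chunk]
    return units

# 8 frames of 4 spectral bins; one beat interval (frames 0..8) split into
# 2 analysis periods x 2 analysis bands = 4 analysis units.
spectra = [[float(f + k) for k in range(4)] for f in range(8)]
units = analysis_units(spectra, beats=[0, 8], num_periods=2,
                       band_edges=[0, 2, 4])
assert len(units) == 4
```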
-
Publication number: 20120192701
Abstract: It is an object of the present invention to provide an improved technique for searching for a tone data set of a phrase constructed in a rhythm pattern that satisfies a predetermined condition of similarity to a rhythm pattern intended by a user. The user inputs a rhythm pattern via a rhythm input device. An input rhythm pattern storage section stores the input rhythm pattern into a RAM on the basis of clock signals output from a bar line clock output section and trigger data included in the input rhythm pattern. A rhythm pattern search section searches through a rhythm database for a tone data set presenting the highest degree of similarity to the stored input rhythm pattern. A performance processing section causes a sound output section to audibly output the retrieved tone data set.
Type: Application
Filed: December 1, 2011
Publication date: August 2, 2012
Applicant: Yamaha Corporation
Inventors: Daichi Watanabe, Keita Arimoto
-
Publication number: 20110271819
Abstract: In a musical analysis apparatus, a spectrum acquirer acquires a spectrum for each frame of an audio signal representing a piece of music. A beat specifier specifies a sequence of beats of the audio signal. A feature amount extractor divides an interval between the beats into a plurality of analysis periods such that one analysis period contains a plurality of frames, and separates the spectrum of the frames contained in one analysis period into a plurality of analysis bands so as to set a plurality of analysis units in one analysis period in correspondence with the plurality of the analysis bands, such that one analysis unit contains components of the spectrum belonging to the corresponding analysis band.
Type: Application
Filed: April 6, 2011
Publication date: November 10, 2011
Applicant: Yamaha Corporation
Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
-
Publication number: 20110268284
Abstract: In an audio analysis apparatus, a component acquirer acquires a component matrix composed of an array of component values, columns of the component matrix corresponding to the sequence of unit periods of an audio signal and rows of the component matrix corresponding to a series of unit bands of the audio signal arranged in a frequency-axis direction. A difference generator generates a plurality of shift matrices each obtained by shifting the columns of the component matrix in the time-axis direction with a different shift amount, and generates a plurality of difference matrices each composed of an array of element values in correspondence to the plurality of the shift matrices, the element value representing a difference between the corresponding component values of the shift matrix and the component matrix.
Type: Application
Filed: April 6, 2011
Publication date: November 3, 2011
Applicant: Yamaha Corporation
Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
-
Patent number: 7858869
Abstract: A sound analysis apparatus employs tone models that are associated with various fundamental frequencies, each of which simulates a harmonic structure of a performance sound generated by a musical instrument. The apparatus defines a weighted mixture of the tone models to simulate frequency components of the performance sound, sequentially updates and optimizes the weight values of the respective tone models so that the frequency distribution of the weighted mixture corresponds to the distribution of the frequency components of the performance sound, and estimates the fundamental frequency of the performance sound based on the optimized weight values.
Type: Grant
Filed: February 25, 2008
Date of Patent: December 28, 2010
Assignees: National Institute of Advanced Industrial Science and Technology, Yamaha Corporation
Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto