Patents by Inventor Keita Arimoto

Keita Arimoto has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11568839
    Abstract: A performance support device includes a first surface, a second surface, and a flow path. The first surface is configured to face a blow hole of an air reed instrument. The second surface is configured to be displaced from the blow hole. The flow path penetrates from the first surface to the second surface and is configured to direct an exhaled breath toward the blow hole. A cross-sectional area of the flow path at the first surface is smaller than a cross-sectional area of the flow path at a position displaced from the first surface toward the second surface.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: January 31, 2023
    Assignee: Yamaha Corporation
    Inventors: Keita Arimoto, Masafumi Fuke, Kazuhiro Fujita, Akira Miki
  • Publication number: 20210201859
    Abstract: A performance support device includes a first surface, a second surface, and a flow path. The first surface is configured to face a blow hole of an air reed instrument. The second surface is configured to be displaced from the blow hole. The flow path penetrates from the first surface to the second surface and is configured to direct an exhaled breath toward the blow hole. A cross-sectional area of the flow path at the first surface is smaller than a cross-sectional area of the flow path at a position displaced from the first surface toward the second surface.
    Type: Application
    Filed: December 22, 2020
    Publication date: July 1, 2021
    Inventors: Keita Arimoto, Masafumi Fuke, Kazuhiro Fujita, Akira Miki
  • Patent number: 11011187
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types to each of the clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts of the audio, that belongs to a cluster and of the scene type set for that cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: May 18, 2021
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
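A minimal sketch of the training flow described in patent 11011187 above (the same flow recurs, with a user-guided cluster labelling step, in patents 10789972 and 10748556 below): cluster audio feature vectors, assign one scene type per cluster, build labelled learning data, and train an identification model. The feature values, scene-type labels, and the use of scikit-learn's KMeans and SVC are assumptions for illustration, not details taken from the patents.

```python
# Hypothetical sketch: cluster audio feature amounts, label each cluster with a
# scene type, and train an identification model on the labelled data. Feature
# extraction itself is out of scope; random vectors stand in for real features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 20))           # per-segment audio feature amounts
scene_types = ["dialogue", "music", "effects"]  # hypothetical scene-type labels

# 1. Cluster the feature amounts.
kmeans = KMeans(n_clusters=len(scene_types), n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(features)

# 2. Set one scene type per cluster (here simply by index; the patents describe
#    this assignment, optionally guided by a user instruction).
cluster_to_scene = {i: scene_types[i] for i in range(len(scene_types))}

# 3. Build learning data: each feature vector paired with its cluster's scene type.
labels = np.array([cluster_to_scene[c] for c in cluster_ids])

# 4. Train the identification model relating feature amounts to scene types.
model = SVC().fit(features, labels)
print(model.predict(features[:5]))
```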
  • Publication number: 20200335127
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types to each of the clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts of the audio, that belongs to a cluster and of the scene type set for that cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Application
    Filed: July 2, 2020
    Publication date: October 22, 2020
    Inventors: Yuta Yuyama, Keita Arimoto
  • Patent number: 10789972
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and of the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: September 29, 2020
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
  • Patent number: 10748556
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and of the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: August 18, 2020
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
  • Publication number: 20190378534
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory storing instructions. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user indicating one of the clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and of the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 12, 2019
    Inventors: Yuta Yuyama, Keita Arimoto
  • Patent number: 10298192
    Abstract: A recorded signal representing a recorded sound generated by a sound generation source is reproduced. A type of a sound generation source of a performance sound represented by a performance signal is specified. A sound volume of the recorded signal is reduced in a case where the sound generation source of the recorded signal corresponds to the specified type of the sound generation source.
    Type: Grant
    Filed: March 28, 2018
    Date of Patent: May 21, 2019
    Assignee: Yamaha Corporation
    Inventor: Keita Arimoto
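A rough sketch of the ducking behaviour described in patent 10298192 above (and in its published application 20180219521 below): when the recorded signal's sound-generation source matches the type identified for the live performance signal, the recorded signal's volume is reduced. The classifier stub, signal values, and the 0.3 duck gain are invented for illustration.

```python
# Hypothetical sketch of playback ducking keyed on sound-generation source type.
import numpy as np

def classify_source(signal: np.ndarray) -> str:
    """Placeholder for identifying the source type of the performance signal."""
    return "guitar"  # assumption: a real system would infer this from the audio

def mix(recorded: np.ndarray, recorded_source: str,
        performance: np.ndarray, duck_gain: float = 0.3) -> np.ndarray:
    """Reduce the recorded signal's volume when its source matches the performance's."""
    performance_source = classify_source(performance)
    gain = duck_gain if recorded_source == performance_source else 1.0
    return gain * recorded + performance

recorded = 0.5 * np.ones(8)     # stand-in recorded guitar part
performance = 0.2 * np.ones(8)  # stand-in live performance signal
print(mix(recorded, "guitar", performance))  # recorded part ducked to 0.3x
```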
  • Patent number: 10243680
    Abstract: An audio processing device has an identification module and an adjustment information acquisition module. The identification module identifies the musical instrument that corresponds to each of the audio signals. The adjustment information acquisition module acquires adjustment information for adjusting each of the audio signals according to the combination of the identified musical instruments.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: March 26, 2019
    Assignee: Yamaha Corporation
    Inventor: Keita Arimoto
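A hedged sketch of the identification-plus-adjustment lookup in patent 10243680 above: identify the instrument on each audio signal, then acquire adjustment information keyed by the combination of identified instruments. The channel names, instrument labels, and gain table are hypothetical.

```python
# Hypothetical sketch: instrument identification followed by adjustment-information lookup.
from typing import Dict, FrozenSet, List

# Adjustment information indexed by instrument combination (assumed, invented values).
ADJUSTMENTS: Dict[FrozenSet[str], Dict[str, float]] = {
    frozenset({"vocal", "guitar"}): {"vocal": 1.0, "guitar": 0.7},
    frozenset({"vocal", "piano"}): {"vocal": 1.0, "piano": 0.8},
}

def identify_instrument(signal_id: str) -> str:
    """Placeholder for the identification module (a real system analyses the audio)."""
    return {"ch1": "vocal", "ch2": "guitar"}[signal_id]

def acquire_adjustment(signal_ids: List[str]) -> Dict[str, float]:
    """Acquire adjustment information for the combination of identified instruments."""
    instruments = frozenset(identify_instrument(s) for s in signal_ids)
    return ADJUSTMENTS.get(instruments, {})

print(acquire_adjustment(["ch1", "ch2"]))  # {'vocal': 1.0, 'guitar': 0.7}
```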
  • Publication number: 20180219638
    Abstract: An audio processing device has an identification module and an adjustment information acquisition module. The identification module identifies the musical instrument that corresponds to each of the audio signals. The adjustment information acquisition module acquires adjustment information for adjusting each of the audio signals according to the combination of the identified musical instruments.
    Type: Application
    Filed: March 26, 2018
    Publication date: August 2, 2018
    Inventor: Keita Arimoto
  • Publication number: 20180219521
    Abstract: A recorded signal representing a recorded sound generated by a sound generation source is reproduced. A type of a sound generation source of a performance sound represented by a performance signal is specified. A sound volume of the recorded signal is reduced in a case where the sound generation source of the recorded signal corresponds to the specified type of the sound generation source.
    Type: Application
    Filed: March 28, 2018
    Publication date: August 2, 2018
    Inventor: Keita Arimoto
  • Patent number: 9053696
    Abstract: It is an object of the present invention to provide an improved technique for searching for a tone data set of a phrase constructed in a rhythm pattern that satisfies a predetermined condition of similarity to a rhythm pattern intended by a user. The user inputs a rhythm pattern via a rhythm input device. An input rhythm pattern storage section stores the input rhythm pattern into a RAM on the basis of clock signals output from a bar line clock output section and trigger data included in the input rhythm pattern. A rhythm pattern search section searches through a rhythm database for a tone data set presenting the highest degree of similarity to the stored input rhythm pattern. A performance processing section causes a sound output section to audibly output the searched-out tone data set.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: June 9, 2015
    Assignee: Yamaha Corporation
    Inventors: Daichi Watanabe, Keita Arimoto
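A small sketch of the rhythm-pattern search in patent 9053696 above: the onsets of the input rhythm pattern are compared against a rhythm database, and the phrase presenting the highest similarity (smallest onset distance) is returned. The distance measure and the database contents are illustrative assumptions, not the patent's exact similarity condition.

```python
# Hypothetical sketch of searching a rhythm database for the pattern most
# similar to a user-played input pattern (onset times in beats within one bar).
import numpy as np

def pattern_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Average distance from each onset in `a` to its nearest onset in `b`, and vice versa."""
    d_ab = np.mean([np.min(np.abs(b - t)) for t in a])
    d_ba = np.mean([np.min(np.abs(a - t)) for t in b])
    return float(d_ab + d_ba)

rhythm_db = {  # phrase name -> onset times in beats (4/4 bar), invented examples
    "straight_8ths": np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]),
    "backbeat":      np.array([0.0, 1.0, 2.0, 3.0]),
    "clave":         np.array([0.0, 0.75, 1.5, 2.5, 3.0]),
}

input_pattern = np.array([0.0, 1.05, 2.0, 2.95])  # user-played onsets (slightly imprecise)
best = min(rhythm_db, key=lambda k: pattern_distance(input_pattern, rhythm_db[k]))
print(best)  # "backbeat" presents the highest similarity
```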
  • Patent number: 8853516
    Abstract: In an audio analysis apparatus, a component acquirer acquires a component matrix composed of an array of component values, columns of the component matrix corresponding to the sequence of unit periods of an audio signal and rows of the component matrix corresponding to a series of unit bands of the audio signal arranged in a frequency-axis direction. A difference generator generates a plurality of shift matrices each obtained by shifting the columns of the component matrix in the time-axis direction with a different shift amount, and generates a plurality of difference matrices each composed of an array of element values in correspondence to the plurality of the shift matrices, the element value representing a difference between the corresponding component values of the shift matrix and the component matrix.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: October 7, 2014
    Assignee: Yamaha Corporation
    Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
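A compact sketch of the shift/difference construction in patent 8853516 above: columns of the component matrix index unit periods (time) and rows index unit bands (frequency); each shift matrix is the column-shifted component matrix, and each difference matrix holds the element-wise differences against the original. The shift amounts are arbitrary, and the circular shift via numpy.roll is a simplification; a real implementation might pad rather than wrap.

```python
# Hypothetical sketch: build shift matrices and difference matrices from a component matrix.
import numpy as np

rng = np.random.default_rng(0)
component = rng.random((6, 12))   # rows: unit bands (frequency), columns: unit periods (time)

def shift_and_diff(component: np.ndarray, shifts):
    """For each shift amount, shift the columns in time and form the difference matrix."""
    diffs = {}
    for s in shifts:
        shifted = np.roll(component, s, axis=1)   # circular column shift (simplification)
        diffs[s] = np.abs(component - shifted)    # element-wise difference matrix
    return diffs

difference_matrices = shift_and_diff(component, shifts=[1, 2, 4])
print({s: d.shape for s, d in difference_matrices.items()})
```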
  • Patent number: 8543387
    Abstract: Disclosed herein is a pitch estimation apparatus and associated methods for estimating a fundamental frequency of an audio signal from a fundamental frequency probability density function by modeling the audio signal as a weighted mixture of a plurality of tone models corresponding respectively to harmonic structures of individual fundamental frequencies, so that the fundamental frequency probability density function of the audio signal is given as a distribution of respective weights of the plurality of the tone models.
    Type: Grant
    Filed: August 31, 2007
    Date of Patent: September 24, 2013
    Assignee: Yamaha Corporation
    Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
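A toy numerical sketch of the estimation idea in patent 8543387 above: the observed spectrum is modelled as a weighted mixture of harmonic tone models, one per candidate fundamental frequency; the normalised weights serve as the fundamental frequency probability density function, and its peak gives the estimate. The tone-model shape, candidate range, and the EM-style weight update below are simplifying assumptions for illustration.

```python
# Hypothetical sketch: F0 probability density as the weights of a tone-model mixture.
import numpy as np

freqs = np.linspace(50, 2000, 1000)          # frequency axis in Hz
candidates = np.arange(100.0, 401.0, 5.0)    # candidate fundamental frequencies in Hz

def tone_model(f0, n_harm=6, width=10.0):
    """Harmonic structure: Gaussian lobes at f0, 2*f0, ..., normalised to sum to 1."""
    spec = sum(np.exp(-0.5 * ((freqs - h * f0) / width) ** 2) / h
               for h in range(1, n_harm + 1))
    return spec / spec.sum()

models = np.array([tone_model(f0) for f0 in candidates])    # (n_candidates, n_bins)

observed = tone_model(220.0) + 1e-6                         # toy observed spectrum, F0 = 220 Hz
observed /= observed.sum()

weights = np.full(len(candidates), 1.0 / len(candidates))   # initial tone-model weights
for _ in range(30):                                         # EM-style weight re-estimation
    mixture = weights @ models + 1e-12                      # current mixture spectrum
    responsibility = weights[:, None] * models / mixture    # per-candidate share of each bin
    weights = responsibility @ observed                     # updated weights (the F0 PDF)
    weights /= weights.sum()

print(candidates[np.argmax(weights)])                       # peak of the F0 PDF, ~220.0
```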
  • Patent number: 8494668
    Abstract: A character value of a sound signal is extracted for each unit portion, and degrees of similarity between the character values of the individual unit portions are calculated and arranged in a matrix configuration. Each column of the matrix holds the degrees of similarity acquired by comparing, for each of the unit portions, the sound signal with a delayed sound signal obtained by delaying the sound signal by a time difference equal to an integral multiple of the time length of the unit portion, and the matrix has a plurality of such columns in association with different time differences. A repetition probability is calculated for each of the columns corresponding to the different time differences in the matrix. A plurality of peaks in the distribution of the repetition probabilities are identified. The loop region in the sound signal is identified by collating a reference matrix with the similarity matrix.
    Type: Grant
    Filed: February 19, 2009
    Date of Patent: July 23, 2013
    Assignee: Yamaha Corporation
    Inventors: Bee Suan Ong, Sebastian Streich, Takuya Fujishima, Keita Arimoto
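A simplified sketch of the lag-similarity analysis in patent 8494668 above: per-unit character values are compared against copies of the signal delayed by each candidate time difference, the mean similarity per delay plays the role of a repetition probability, and its peak indicates the loop length. The toy features repeat every four units by construction, and the simple peak pick stands in for the patent's reference-matrix collation.

```python
# Hypothetical sketch: repetition probability per time difference (lag) from unit-wise similarity.
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))            # one 4-unit phrase of 8-dimensional character values
features = np.tile(base, (4, 1))          # the phrase looped 4 times -> 16 unit portions
n_units, max_lag = features.shape[0], 8

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# For each time difference, compare the signal with its delayed copy and average
# the similarities; this mimics the per-column repetition probability.
repetition_prob = []
for lag in range(1, max_lag):
    sims = [cosine(features[t], features[t + lag]) for t in range(n_units - lag)]
    repetition_prob.append(np.mean(sims))

best_lag = int(np.argmax(repetition_prob)) + 1
print(best_lag)   # 4: the loop length in unit portions
```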
  • Patent number: 8487175
    Abstract: In a musical analysis apparatus, a spectrum acquirer acquires a spectrum for each frame of an audio signal representing a piece of music. A beat specifier specifies a sequence of beats of the audio signal. A feature amount extractor divides an interval between the beats into a plurality of analysis periods such that one analysis period contains a plurality of frames, and separates the spectrum of the frames contained in one analysis period into a plurality of analysis bands so as to set a plurality of analysis units in one analysis period in correspondence with the plurality of the analysis bands, such that one analysis unit contains components of the spectrum belonging to the corresponding analysis band.
    Type: Grant
    Filed: April 6, 2011
    Date of Patent: July 16, 2013
    Assignee: Yamaha Corporation
    Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
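A hedged sketch of the beat-synchronous analysis layout in patent 8487175 above: each inter-beat interval is divided into analysis periods (groups of frames), the spectrum is split into analysis bands, and every (period, band) cell becomes one analysis unit, summarised here by its mean magnitude. The frame count, beat positions, band edges, and random spectrogram are placeholders.

```python
# Hypothetical sketch: beat-relative analysis units over (analysis period, analysis band) cells.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_bins = 400, 64
spectrogram = rng.random((n_frames, n_bins))        # stand-in magnitude spectrum per frame
beat_frames = np.arange(0, n_frames + 1, 50)        # assumed beat positions (every 50 frames)
periods_per_beat, n_bands = 4, 8
band_edges = np.linspace(0, n_bins, n_bands + 1, dtype=int)

units = []                                          # one grid of analysis units per beat interval
for b0, b1 in zip(beat_frames[:-1], beat_frames[1:]):
    period_edges = np.linspace(b0, b1, periods_per_beat + 1, dtype=int)
    grid = np.empty((periods_per_beat, n_bands))
    for i, (p0, p1) in enumerate(zip(period_edges[:-1], period_edges[1:])):
        for j, (f0, f1) in enumerate(zip(band_edges[:-1], band_edges[1:])):
            grid[i, j] = spectrogram[p0:p1, f0:f1].mean()   # one analysis unit
    units.append(grid)

print(len(units), units[0].shape)   # 8 beat intervals, each a 4x8 grid of analysis units
```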
  • Publication number: 20120192701
    Abstract: It is an object of the present invention to provide an improved technique for searching for a tone data set of a phrase constructed in a rhythm pattern that satisfies a predetermined condition of similarity to a rhythm pattern intended by a user. The user inputs a rhythm pattern via a rhythm input device. An input rhythm pattern storage section stores the input rhythm pattern into a RAM on the basis of clock signals output from a bar line clock output section and trigger data included in the input rhythm pattern. A rhythm pattern search section searches through a rhythm database for a tone data set presenting the highest degree of similarity to the stored input rhythm pattern. A performance processing section causes a sound output section to audibly output the searched-out tone data set.
    Type: Application
    Filed: December 1, 2011
    Publication date: August 2, 2012
    Applicant: YAMAHA CORPORATION
    Inventors: Daichi Watanabe, Keita Arimoto
  • Publication number: 20110271819
    Abstract: In a musical analysis apparatus, a spectrum acquirer acquires a spectrum for each frame of an audio signal representing a piece of music. A beat specifier specifies a sequence of beats of the audio signal. A feature amount extractor divides an interval between the beats into a plurality of analysis periods such that one analysis period contains a plurality of frames, and separates the spectrum of the frames contained in one analysis period into a plurality of analysis bands so as to set a plurality of analysis units in one analysis period in correspondence with the plurality of the analysis bands, such that one analysis unit contains components of the spectrum belonging to the corresponding analysis band.
    Type: Application
    Filed: April 6, 2011
    Publication date: November 10, 2011
    Applicant: YAMAHA CORPORATION
    Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
  • Publication number: 20110268284
    Abstract: In an audio analysis apparatus, a component acquirer acquires a component matrix composed of an array of component values, columns of the component matrix corresponding to the sequence of unit periods of an audio signal and rows of the component matrix corresponding to a series of unit bands of the audio signal arranged in a frequency-axis direction. A difference generator generates a plurality of shift matrices each obtained by shifting the columns of the component matrix in the time-axis direction with a different shift amount, and generates a plurality of difference matrices each composed of an array of element values in correspondence to the plurality of the shift matrices, the element value representing a difference between the corresponding component values of the shift matrix and the component matrix.
    Type: Application
    Filed: April 6, 2011
    Publication date: November 3, 2011
    Applicant: YAMAHA CORPORATION
    Inventors: Keita Arimoto, Sebastian Streich, Bee Suan Ong
  • Patent number: 7858869
    Abstract: A sound analysis apparatus employs tone models which are associated with various fundamental frequencies and each of which simulates a harmonic structure of a performance sound generated by a musical instrument, then defines a weighted mixture of the tone models to simulate frequency components of the performance sound, further sequentially updates and optimizes weight values of the respective tone models so that a frequency distribution of the weighted mixture of the tone models corresponds to a distribution of the frequency components of the performance sound, and estimates the fundamental frequency of the performance sound based on the optimized weight values.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: December 28, 2010
    Assignees: National Institute of Advanced Industrial Science and Technology, Yamaha Corporation
    Inventors: Masataka Goto, Takuya Fujishima, Keita Arimoto
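Patent 7858869 above describes the same weighted-mixture model as patent 8543387 but stresses the sequential updating and optimization of the tone-model weights. A minimal two-model illustration of such an update loop, with invented numbers, follows; the multiplicative re-estimation shown is an assumption about the form of the update, not the patent's exact procedure.

```python
# Hypothetical sketch: sequentially re-estimate tone-model weights so the mixture
# tracks the observed distribution of frequency components (all numbers invented).
import numpy as np

models = np.array([
    [0.50, 0.10, 0.30, 0.05, 0.05],   # tone model A (harmonic structure of one F0)
    [0.10, 0.50, 0.10, 0.25, 0.05],   # tone model B (harmonic structure of another F0)
])
observed = np.array([0.45, 0.15, 0.28, 0.07, 0.05])   # observed components, close to model A

weights = np.array([0.5, 0.5])                        # initial weight values
for step in range(5):
    mixture = weights @ models                        # weighted mixture of the tone models
    responsibility = weights[:, None] * models / mixture
    weights = responsibility @ observed               # sequential weight update
    weights /= weights.sum()
    print(step, np.round(weights, 3))                 # weight of model A grows each step
```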