Patents by Inventor Yuta YUYAMA

Yuta YUYAMA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230386501
    Abstract: A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
    Type: Application
    Filed: August 9, 2023
    Publication date: November 30, 2023
    Inventors: Yuta YUYAMA, Kunihiro KUMAGAI, Ryotaro AOKI
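
The scheme in this family of filings (publication 20230386501 and the related entries below) lends itself to a short illustration: a scene determined from the sound data drives one selection method, a separately determined content attribute drives a second, different method, and the selected processing is handed to the DSP. The sketch below is a loose paraphrase in Python; the scene labels, attribute names, lookup tables, and heuristics are hypothetical and are not taken from the patent text.

```python
# Minimal sketch of the two-stage selection idea: a scene determination drives
# one selection method, and a separately determined content attribute drives a
# different selection method. All names, labels, and rules are hypothetical.

from typing import Optional

SCENE_TO_PROGRAM = {            # first selection method: lookup keyed by scene
    "movie": "cinema_dsp",
    "music": "stereo_direct",
    "sports": "stadium_dsp",
}

ATTRIBUTE_TO_PROGRAM = {        # second selection method: lookup keyed by attribute
    "live_broadcast": "low_latency_dsp",
    "streaming_film": "cinema_dsp",
}

def determine_scene(sound_frame: list) -> str:
    """Stand-in scene determination from sound data (hypothetical heuristic)."""
    energy = sum(x * x for x in sound_frame) / max(len(sound_frame), 1)
    return "movie" if energy > 0.1 else "music"

def determine_attribute(metadata: dict) -> Optional[str]:
    """Stand-in attribute determination from among candidate attributes."""
    value = metadata.get("attribute")
    return value if value in ATTRIBUTE_TO_PROGRAM else None

def select_processing(sound_frame: list, metadata: dict) -> str:
    scene = determine_scene(sound_frame)          # first determination result
    program = SCENE_TO_PROGRAM[scene]             # first selection method
    attribute = determine_attribute(metadata)
    if attribute is not None:                     # second, different selection method
        program = ATTRIBUTE_TO_PROGRAM[attribute]
    return program                                # handed to the DSP stage

if __name__ == "__main__":
    frame = [0.2, -0.3, 0.25, -0.15]
    print(select_processing(frame, {"attribute": "live_broadcast"}))
```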
  • Patent number: 11763837
    Abstract: A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: September 19, 2023
    Assignee: YAMAHA CORPORATION
    Inventors: Yuta Yuyama, Kunihiro Kumagai, Ryotaro Aoki
  • Patent number: 11756571
    Abstract: An apparatus that identifies a scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor and is configured to store instructions executable by the processor. Upon execution of the instructions, the processor is caused to identify a scene type of content that includes video and audio based on a feature amount of the audio in the content.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: September 12, 2023
    Assignee: Yamaha Corporation
    Inventor: Yuta Yuyama
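
Patent 11756571 (and the related publications below) centers on identifying a scene type of audiovisual content from a feature amount of its audio. As a rough, non-authoritative sketch, the snippet below computes a tiny feature vector (RMS and zero-crossing rate) and picks the nearest scene centroid; both the features and the nearest-centroid classifier are illustrative assumptions, not the claimed method.

```python
# Rough sketch: derive a small audio feature amount per frame and map it to the
# closest of a few hypothetical scene centroids.

import math

SCENE_CENTROIDS = {             # hypothetical per-scene feature centroids (rms, zcr)
    "dialogue": (0.05, 0.02),
    "action":   (0.30, 0.12),
    "music":    (0.20, 0.05),
}

def feature_amount(samples: list) -> tuple:
    """Compute a small feature vector (RMS, zero-crossing rate) for one frame."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return rms, zcr

def identify_scene(samples: list) -> str:
    """Return the scene type whose centroid is closest to the frame's features."""
    f = feature_amount(samples)
    return min(SCENE_CENTROIDS, key=lambda s: math.dist(f, SCENE_CENTROIDS[s]))

if __name__ == "__main__":
    frame = [0.3 * math.sin(0.2 * n) for n in range(512)]
    print(identify_scene(frame))
```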
  • Patent number: 11277704
    Abstract: An acoustic processing device including a memory storing instructions and a processor that implements the stored instructions to execute a plurality of tasks, the tasks including: an analyzing task that analyzes an input signal; a determining task that determines an acoustic effect to be applied to the input signal, from among a first acoustic effect of virtual surround and a second acoustic effect of virtual surround different from the first acoustic effect, based on a result of the analyzing task; and an acoustic effect applying task that applies the acoustic effect determined by the determining task to the input signal.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: March 15, 2022
    Assignee: YAMAHA CORPORATION
    Inventor: Yuta Yuyama
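
Patent 11277704 describes analyzing an input signal and then choosing between two different virtual-surround effects. A minimal sketch of that decision step, assuming inter-channel correlation as the analysis and an arbitrary 0.8 threshold (neither of which comes from the patent):

```python
# Illustrative decision between two virtual-surround effects based on a simple
# analysis of the input signal. The analysis and threshold are assumptions.

def correlation(left: list, right: list) -> float:
    """Normalized correlation between the two input channels."""
    num = sum(l * r for l, r in zip(left, right))
    den = (sum(l * l for l in left) * sum(r * r for r in right)) ** 0.5
    return num / den if den else 1.0

def choose_virtual_surround(left: list, right: list) -> str:
    """Near-mono input gets one effect; wide input gets the other."""
    return "effect_A_wide" if correlation(left, right) > 0.8 else "effect_B_discrete"

def apply_effect(effect: str, left: list, right: list):
    # Placeholder for the acoustic-effect-applying task; a real implementation
    # would filter the signal according to the chosen effect.
    return effect, left, right

if __name__ == "__main__":
    l = [0.1, 0.2, 0.1, -0.1]
    r = [0.1, 0.19, 0.11, -0.1]
    print(choose_virtual_surround(l, r))
```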
  • Publication number: 20210327458
    Abstract: An apparatus that identifies a scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor and is configured to store instructions executable by the processor. Upon execution of the instructions, the processor is caused to identify a scene type of content that includes video and audio based on a feature amount of the audio in the content.
    Type: Application
    Filed: July 1, 2021
    Publication date: October 21, 2021
    Inventor: Yuta YUYAMA
  • Patent number: 11087779
    Abstract: An apparatus that identifies a scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor and is configured to store instructions executable by the processor. Upon execution of the instructions, the processor is caused to identify a scene type of content that includes video and audio based on a feature amount of the audio in the content.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: August 10, 2021
    Assignee: Yamaha Corporation
    Inventor: Yuta Yuyama
  • Publication number: 20210225390
    Abstract: A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
    Type: Application
    Filed: April 9, 2021
    Publication date: July 22, 2021
    Inventors: Yuta YUYAMA, Kunihiro KUMAGAI, Ryotaro AOKI
  • Patent number: 11011187
    Abstract: An apparatus for generating relations between feature amounts of audio and scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types to each of clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts of the audio, that belongs to each cluster and the scene type set for each cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: July 2, 2020
    Date of Patent: May 18, 2021
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
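
Patent 11011187 (and the related entries below) describes clustering audio feature amounts, setting a scene type for each cluster, generating learning data from the cluster memberships, and training an identification model by machine learning. The sketch below walks through that sequence with scikit-learn; the specific algorithms (k-means, logistic regression), the synthetic features, and the cluster-to-scene mapping are assumptions chosen purely for illustration.

```python
# Sketch: cluster feature amounts, label each cluster with a scene type, build
# learning data from the memberships, and train an identification model.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = np.vstack([                      # synthetic per-frame audio feature amounts
    rng.normal(loc=0.1, scale=0.02, size=(50, 2)),
    rng.normal(loc=0.5, scale=0.05, size=(50, 2)),
    rng.normal(loc=0.9, scale=0.03, size=(50, 2)),
])

# 1) Classify the feature amounts into clusters.
cluster_ids = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# 2) Set one scene type per cluster (in practice this labeling could come from a user).
cluster_to_scene = {0: "dialogue", 1: "music", 2: "action"}
labels = np.array([cluster_to_scene[c] for c in cluster_ids])

# 3) The (feature, scene) pairs form the learning data; train an identification model.
model = LogisticRegression(max_iter=1000).fit(features, labels)

# 4) The trained model maps new feature amounts to scene types.
print(model.predict([[0.48, 0.52]]))
```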
  • Patent number: 11004460
    Abstract: A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: May 11, 2021
    Assignee: YAMAHA CORPORATION
    Inventors: Yuta Yuyama, Kunihiro Kumagai, Ryotaro Aoki
  • Publication number: 20210021950
    Abstract: An acoustic processing device including a memory storing instructions and a processor that implements the stored instructions to execute a plurality of tasks, the tasks including: an analyzing task that analyzes an input signal; a determining task that determines an acoustic effect to be applied to the input signal, from among a first acoustic effect of virtual surround and a second acoustic effect of virtual surround different from the first acoustic effect, based on a result of the analyzing task; and an acoustic effect applying task that applies the acoustic effect determined by the determining task to the input signal.
    Type: Application
    Filed: July 13, 2020
    Publication date: January 21, 2021
    Inventor: Yuta YUYAMA
  • Patent number: 10848888
    Abstract: An audio data processing device according to an aspect of the present disclosure includes: a sound field effect data generator configured to add sound field effect data to audio data by arithmetic operation processing using a parameter, at least one processor, and at least one memory device that stores a plurality of instructions, which when executed by the at least one processor, causes the at least one processor to operate to: analyze a scene for the audio data, recognize switching of the scene based on an analysis result of the scene, gradually decrease both an input gain and an output gain of the sound field effect data generator, and gradually increase both the input gain and the output gain after changing the parameter.
    Type: Grant
    Filed: December 27, 2018
    Date of Patent: November 24, 2020
    Assignee: YAMAHA CORPORATION
    Inventors: Morishige Fujisawa, Kotaro Nakabayashi, Yuta Yuyama
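
Patent 10848888 describes easing a sound-field-effect parameter change when a scene switch is recognized: both the generator's input and output gains are gradually decreased, the parameter is changed, and the gains are gradually increased again. A compact sketch, with the linear ramp shape and frame count chosen only for illustration:

```python
# Sketch of the gain handling around a scene switch: ramp both gains down,
# change the sound-field-effect parameter, then ramp both gains back up.

RAMP_FRAMES = 32  # illustrative ramp length

class SoundFieldEffectGenerator:
    def __init__(self, parameter: str):
        self.parameter = parameter
        self.input_gain = 1.0
        self.output_gain = 1.0

    def set_gains(self, value: float) -> None:
        self.input_gain = self.output_gain = value

def switch_scene(gen: SoundFieldEffectGenerator, new_parameter: str) -> None:
    # Gradually decrease both gains so the old effect fades out without a click.
    for step in range(RAMP_FRAMES, -1, -1):
        gen.set_gains(step / RAMP_FRAMES)
    gen.parameter = new_parameter          # change the parameter while muted
    # Gradually increase both gains so the new effect fades in.
    for step in range(RAMP_FRAMES + 1):
        gen.set_gains(step / RAMP_FRAMES)

if __name__ == "__main__":
    gen = SoundFieldEffectGenerator("concert_hall")
    switch_scene(gen, "movie_theater")
    print(gen.parameter, gen.input_gain, gen.output_gain)
```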
  • Publication number: 20200335127
    Abstract: An apparatus for generating relations between feature amounts of audio and scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor. The processor is configured to set one of the scene types to each of clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also configured to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts of the audio, that belongs to each cluster and the scene type set for each cluster. The processor is also configured to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Application
    Filed: July 2, 2020
    Publication date: October 22, 2020
    Inventors: Yuta YUYAMA, Keita ARIMOTO
  • Patent number: 10789972
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user to indicate one of clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: September 29, 2020
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
  • Patent number: 10748556
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user to indicate one of clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: August 18, 2020
    Assignee: Yamaha Corporation
    Inventors: Yuta Yuyama, Keita Arimoto
  • Patent number: 10536778
    Abstract: An information processing apparatus includes a processor that performs a separate process to separate a content signal into a primary component which is an objective sound and a secondary component which is other than the objective sound, a speaker that outputs the primary component, and a transmitter that transmits the secondary component to another apparatus.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: January 14, 2020
    Assignee: Yamaha Corporation
    Inventors: Kohei Sekiguchi, Yuta Yuyama, Kunihiro Kumagai
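
Patent 10536778 describes separating a content signal into a primary (objective) component played on the local speaker and a secondary component transmitted to another apparatus. The sketch below stands in a simple mid/side split for the separation and a UDP datagram for the transmitter; both are hypothetical placeholders, not the patented processing.

```python
# Sketch: split a stereo block into a primary and a secondary component, "play"
# the primary locally, and send the secondary to another apparatus.

import json
import socket

def separate(left: list, right: list):
    primary = [(l + r) / 2 for l, r in zip(left, right)]    # center ("objective") part
    secondary = [(l - r) / 2 for l, r in zip(left, right)]  # everything else
    return primary, secondary

def output_to_speaker(primary: list) -> None:
    print(f"speaker: {len(primary)} samples")               # placeholder for playback

def transmit(secondary: list, host: str = "127.0.0.1", port: int = 9999) -> None:
    # Placeholder transmitter: one UDP datagram per block (hypothetical protocol).
    payload = json.dumps(secondary).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

if __name__ == "__main__":
    l = [0.2, 0.4, 0.1, -0.3]
    r = [0.1, 0.4, 0.0, -0.1]
    p, s = separate(l, r)
    output_to_speaker(p)
    transmit(s)
```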
  • Publication number: 20190378535
    Abstract: An apparatus that identifies a scene type includes at least one processor and a memory. The memory is operatively coupled to the at least one processor and is configured to store instructions executable by the processor. Upon execution of the instructions, the processor is caused to identify a scene type of content that includes video and audio based on a feature amount of the audio in the content.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 12, 2019
    Inventor: Yuta YUYAMA
  • Publication number: 20190378534
    Abstract: An apparatus for generating relations between feature amounts of audio and scene types includes at least one processor and a memory. Upon execution of the instructions, the processor is caused to set one of the scene types in accordance with an instruction from a user to indicate one of clusters classifying the feature amounts of audio in one or more pieces of content. The processor is also caused to generate a plurality of pieces of learning data, each representative of a feature amount, from among the feature amounts, that belongs to the cluster and the scene type set for the cluster. The processor is also caused to generate an identification model representative of relations between the feature amounts of audio and the scene types by performing machine learning using the plurality of pieces of learning data.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 12, 2019
    Inventors: Yuta YUYAMA, Keita ARIMOTO
  • Publication number: 20190362739
    Abstract: A data processing device includes: a digital signal processor; at least one processor; and at least one memory device configured to store a plurality of instructions, which when executed by the at least one processor, cause the at least one processor to operate to: output a first determination result relating to a scene of content through use of sound data; select processing for the sound data by a first selection method based on the first determination result; determine an attribute of the content from among a plurality of attribute candidates; and select the processing by a second selection method, which is different from the first selection method, based on a determination result of the attribute, wherein the digital signal processor is configured to execute the processing selected by the at least one processor on the sound data.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 28, 2019
    Inventors: Yuta YUYAMA, Kunihiro KUMAGAI, Ryotaro AOKI
  • Publication number: 20190253798
    Abstract: An information processing apparatus includes a processor that performs a separate process to separate a content signal into a primary component which is an objective sound and a secondary component which is other than the objective sound, a speaker that outputs the primary component, and a transmitter that transmits the secondary component to another apparatus.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 15, 2019
    Inventors: Kohei SEKIGUCHI, Yuta YUYAMA, Kunihiro KUMAGAI
  • Publication number: 20190200151
    Abstract: An audio data processing device according to an aspect of the present disclosure includes: a sound field effect data generator configured to add sound field effect data to audio data by arithmetic operation processing using a parameter, at least one processor, and at least one memory device that stores a plurality of instructions, which when executed by the at least one processor, causes the at least one processor to operate to: analyze a scene for the audio data, recognize switching of the scene based on an analysis result of the scene, gradually decrease both an input gain and an output gain of the sound field effect data generator, and gradually increase both the input gain and the output gain after changing the parameter.
    Type: Application
    Filed: December 27, 2018
    Publication date: June 27, 2019
    Inventors: Morishige FUJISAWA, Kotaro NAKABAYASHI, Yuta YUYAMA