Patents by Inventor Tatsuya Iriyama

Tatsuya Iriyama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240021183
    Abstract: A singing sound output system includes at least one processor configured to execute a teaching unit configured to indicate to a user a progression position in singing data that are temporally associated with accompaniment data and include a plurality of syllables, an acquisition unit configured to acquire at least one piece of sound information input by a performance, a syllable identification unit configured to identify, from the syllables in the singing data, a syllable corresponding to the sound information, a timing identification unit configured to associate, with the sound information, relative information indicating a relative timing with respect to an identified syllable identified by the syllable identification unit, a synthesizing unit configured to synthesize a singing sound based on the identified syllable, and an output unit configured to, based on the relative information, synchronize and output the singing sound and an accompaniment sound based on the accompaniment data.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 18, 2024
    Inventor: Tatsuya IRIYAMA
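    Illustrative sketch (not part of the patent record): a minimal Python sketch of the idea of matching a performed note to a syllable and storing its relative timing; the classes, function names, and values are hypothetical.
      # Match performed notes to lyric syllables and remember each note's
      # timing offset relative to the matched syllable's nominal start.
      from dataclasses import dataclass

      @dataclass
      class Syllable:
          text: str
          start: float  # nominal start time within the accompaniment (seconds)

      def identify_syllable(syllables, progression_pos):
          """Pick the syllable whose nominal start is closest to the indicated position."""
          return min(syllables, key=lambda s: abs(s.start - progression_pos))

      def attach_relative_timing(note_time, syllable):
          """Relative information: how far the performed note is from the syllable's start."""
          return {"syllable": syllable.text, "offset": note_time - syllable.start}

      if __name__ == "__main__":
          lyrics = [Syllable("twin", 0.0), Syllable("kle", 0.5), Syllable("star", 1.0)]
          event = attach_relative_timing(0.53, identify_syllable(lyrics, 0.5))
          # The synthesized singing sound for "kle" would later be scheduled at
          # syllable.start + event["offset"] so it stays in sync with the accompaniment.
          print(event)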
  • Publication number: 20230419946
    Abstract: A sound generation device includes an electronic controller including at least one processor. The electronic controller is configured to execute a first acquisition module configured to acquire first lyrics data in which a plurality of characters to be vocalized are arranged in a time series and that include a first character and a second character that follows the first character, a second acquisition module configured to acquire a vocalization start instruction, and a control module configured to, in response to the acquiring of the vocalization start instruction, output an instruction to generate an audio signal based on a first vocalization corresponding to the first character, in response to the vocalization start instruction satisfying a first condition, and output an instruction to generate the audio signal based on a second vocalization corresponding to the second character, in response to the vocalization start instruction not satisfying the first condition.
    Type: Application
    Filed: September 8, 2023
    Publication date: December 28, 2023
    Inventor: Tatsuya IRIYAMA
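    Illustrative sketch (not part of the patent record): a minimal Python sketch, assuming the "first condition" is a timing window around the expected vocalization time; names and thresholds are hypothetical.
      def select_character(lyrics, index, instruction_time, expected_time, window=0.2):
          """Choose which lyric character to vocalize for a start instruction.

          If the instruction arrives within `window` seconds of the expected time
          (the assumed first condition), vocalize the current character; otherwise
          move on to the character that follows it.
          """
          if abs(instruction_time - expected_time) <= window:
              return lyrics[index]                            # first vocalization
          return lyrics[min(index + 1, len(lyrics) - 1)]      # second vocalization

      if __name__ == "__main__":
          lyrics = ["ha", "ppy", "birth", "day"]
          print(select_character(lyrics, 0, instruction_time=0.05, expected_time=0.0))  # "ha"
          print(select_character(lyrics, 0, instruction_time=0.60, expected_time=0.0))  # "ppy"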
  • Publication number: 20230239520
    Abstract: A first terminal transmits first event data instructing generation of a first sound to a server. A second terminal transmits second event data instructing generation of a second sound to the server. The server transmits data including the first event data and the second event data to the first terminal. The first terminal controls generation of the first sound and the second sound, based on the data including the first event data and the second event data.
    Type: Application
    Filed: March 30, 2023
    Publication date: July 27, 2023
    Inventors: Tatsuya IRIYAMA, Satoshi UKAI
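    Illustrative sketch (not part of the patent record): a minimal in-memory Python sketch of the relay pattern the abstract describes; the Server and Terminal classes are hypothetical stand-ins for networked components.
      class Server:
          def __init__(self):
              self.events = []

          def receive(self, event):
              self.events.append(event)

          def broadcast(self):
              return list(self.events)  # data including first and second event data

      class Terminal:
          def __init__(self, name):
              self.name = name

          def send(self, server, note):
              server.receive({"from": self.name, "note": note})

          def render(self, events):
              # Generation of both sounds is controlled locally from the merged data.
              return [f"{e['from']} plays {e['note']}" for e in events]

      if __name__ == "__main__":
          server, t1, t2 = Server(), Terminal("terminal1"), Terminal("terminal2")
          t1.send(server, "C4")  # first event data
          t2.send(server, "E4")  # second event data
          print(t1.render(server.broadcast()))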
  • Publication number: 20230239621
    Abstract: A signal processing system is a system in which a plurality of devices including at least a first terminal device and a second terminal device that receive streaming data are connected to a communication system capable of communicating with the plurality of devices. The signal processing system includes: a receiving unit that receives a designation of first sound data from the first terminal device that received the streaming data and a designation of second sound data from the second terminal device that received the streaming data; a signal processing unit that obtains first sound data corresponding to the received designation of first sound data and second sound data corresponding to the received designation of second sound data, and generates a third sound signal in which a first sound signal corresponding to the first sound data and a second sound signal corresponding to the second sound data are mixed.
    Type: Application
    Filed: March 28, 2023
    Publication date: July 27, 2023
    Inventor: Tatsuya IRIYAMA
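    Illustrative sketch (not part of the patent record): a minimal Python sketch of looking up two designated sounds and mixing them into a third signal; the library contents and clipping range are assumptions.
      SOUND_LIBRARY = {               # assumed lookup from a designation to samples
          "clap":  [0.0, 0.8, 0.4, 0.1],
          "cheer": [0.2, 0.3, 0.3, 0.2],
      }

      def mix(designation_a, designation_b):
          sig_a = SOUND_LIBRARY[designation_a]   # first sound signal
          sig_b = SOUND_LIBRARY[designation_b]   # second sound signal
          # Third sound signal: sample-wise sum, clipped to a nominal [-1, 1] range.
          return [max(-1.0, min(1.0, a + b)) for a, b in zip(sig_a, sig_b)]

      if __name__ == "__main__":
          print(mix("clap", "cheer"))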
  • Publication number: 20230042477
    Abstract: A reproduction control method implemented by a computer includes receiving, from a first terminal device, a first reproduction request in accordance with an instruction from a first user, receiving, from a second terminal device, a second reproduction request in accordance with an instruction from a second user, acquiring a first acoustic signal representing a first sound in accordance with the first reproduction request, and a second acoustic signal representing a second sound which is in accordance with the second reproduction request and has acoustic characteristics that differ from acoustic characteristics of the first sound represented by the first acoustic signal, mixing the first acoustic signal and the second acoustic signal, thereby generating a third acoustic signal, and causing a reproduction system to reproduce a third sound represented by the third acoustic signal.
    Type: Application
    Filed: October 14, 2022
    Publication date: February 9, 2023
    Inventor: Tatsuya IRIYAMA
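    Illustrative sketch (not part of the patent record): a minimal Python sketch in which the second request is rendered with different acoustic characteristics (here simply a different gain) before the two signals are mixed; the gain values are illustrative.
      BASE_SOUND = [0.5, 0.4, 0.3, 0.2, 0.1]

      def acquire(request):
          # Differing acoustic characteristics per request, modeled as a gain.
          gain = 1.0 if request["user"] == "first" else 0.4
          return [s * gain for s in BASE_SOUND]

      def reproduce(first_request, second_request):
          first, second = acquire(first_request), acquire(second_request)
          third = [a + b for a, b in zip(first, second)]  # mixed third acoustic signal
          return third                                    # handed to the reproduction system

      if __name__ == "__main__":
          print(reproduce({"user": "first"}, {"user": "second"}))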
  • Publication number: 20230012622
    Abstract: A reproduction control method, which is executed by a computer, includes detecting a first state in which an object is separated from an operation surface by a prescribed distance and a second state in which the object is in contact with the operation surface, initiating sound reproduction at a first time point at which the first state is detected, continuing the sound reproduction from the first time point to a third time point which is subsequent to a second time point at which the second state is detected, and controlling a change in a feature amount of a sound during a first time period from the first time point to the second time point.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventor: Tatsuya IRIYAMA
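    Illustrative sketch (not part of the patent record): a minimal Python sketch of the hover/touch timeline, assuming the controlled feature is a volume that ramps between the first and second time points; the curve is an assumption.
      def volume_at(t, t_hover, t_touch, t_stop):
          """Assumed feature control: fade in between hover and touch, then hold."""
          if t < t_hover or t > t_stop:
              return 0.0                                      # not reproducing
          if t < t_touch:
              return (t - t_hover) / (t_touch - t_hover)      # first time period: ramp
          return 1.0                                          # hold until the third time point

      if __name__ == "__main__":
          t_hover, t_touch, t_stop = 0.0, 0.3, 1.0            # first, second, third time points
          for t in (0.0, 0.15, 0.3, 0.8, 1.2):
              print(t, round(volume_at(t, t_hover, t_touch, t_stop), 2))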
  • Publication number: 20230019428
    Abstract: An information processing method, which is executed by a computer, includes detecting a first state in which an object is separated from an operation surface by a prescribed distance, detecting a second state in which the object comes in contact with the operation surface after the first state is detected, executing a first process which includes reading data from a first storage device and loading, into a second storage device, the data that are read, in response to the detecting of the first state, and executing a second process with respect to the data loaded into the second storage device, in response to the detecting of the second state.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventor: Tatsuya IRIYAMA
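    Illustrative sketch (not part of the patent record): a minimal Python sketch, assuming the first process is pre-loading sample data from slow storage on hover and the second process is playing the loaded data on touch; names are hypothetical.
      class Pad:
          def __init__(self, disk):
              self.disk = disk      # first storage device (a dict standing in for files)
              self.memory = None    # second storage device

          def on_hover(self, name):                 # first state detected
              self.memory = self.disk[name]         # first process: read and load

          def on_touch(self):                       # second state detected
              if self.memory is None:
                  raise RuntimeError("nothing pre-loaded")
              return f"playing {len(self.memory)} samples"  # second process

      if __name__ == "__main__":
          pad = Pad(disk={"kick.wav": [0.9, 0.5, 0.2]})
          pad.on_hover("kick.wav")   # the disk-read latency is hidden during the hover
          print(pad.on_touch())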
  • Publication number: 20230013425
    Abstract: A reproduction control method, which is executed by a computer, includes determining, based on an image showing an object, whether a type of the object is a first type or a second type that is different from the first type, and reproducing a sound, triggered by a striking of an operation surface by the object, based on a result of the determining.
    Type: Application
    Filed: September 22, 2022
    Publication date: January 19, 2023
    Inventor: Tatsuya IRIYAMA
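    Illustrative sketch (not part of the patent record): a minimal Python sketch in which a stubbed classifier decides the object type from an image and a strike then triggers the corresponding sound; the classifier rule and sound names are hypothetical.
      def classify(image):
          # Placeholder for an image classifier; the actual determination method
          # is not specified here.
          return "stick" if image.get("elongated") else "hand"

      def on_strike(image):
          kind = classify(image)
          return {"stick": "rimshot.wav", "hand": "conga.wav"}[kind]

      if __name__ == "__main__":
          print(on_strike({"elongated": True}))    # stick -> rimshot.wav
          print(on_strike({"elongated": False}))   # hand  -> conga.wav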
  • Patent number: 9355634
    Abstract: A voice synthesis device includes a sequence data generation unit configured to generate sequence data including a plurality of kinds of parameters for controlling vocalization of a voice to be synthesized based on music information and lyrics information, an output unit configured to output a singing voice based on the sequence data, and a processing content information acquisition unit configured to acquire a plurality of pieces of processing content information, each associated with a piece of preset singing manner information. Each piece of content information indicates contents of edit processing for all or part of the parameters. The sequence data generation unit generates a plurality of pieces of sequence data, which are obtained by editing all or part of the parameters included in the sequence data, based on the content information associated with one of the pieces of singing manner information specified by a user.
    Type: Grant
    Filed: March 5, 2014
    Date of Patent: May 31, 2016
    Assignee: Yamaha Corporation
    Inventor: Tatsuya Iriyama
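    Illustrative sketch (not part of the patent record): a minimal Python sketch in which base sequence data are generated from notes and lyrics and then edited according to the processing contents tied to a chosen singing manner; the parameter names and values are hypothetical.
      SINGING_MANNERS = {                      # processing content information per manner
          "soft":     {"dynamics": 0.6, "vibrato_depth": 0.2},
          "powerful": {"dynamics": 1.0, "vibrato_depth": 0.6},
      }

      def generate_sequence(notes, lyrics):
          return [{"pitch": n, "syllable": s, "dynamics": 0.8, "vibrato_depth": 0.3}
                  for n, s in zip(notes, lyrics)]

      def apply_manner(sequence, manner):
          edits = SINGING_MANNERS[manner]
          return [{**event, **edits} for event in sequence]   # edited sequence data

      if __name__ == "__main__":
          base = generate_sequence([60, 62], ["la", "la"])
          print(apply_manner(base, "soft"))    # sequence edited for the selected manner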
  • Publication number: 20160111083
    Abstract: Provided is a phoneme information synthesis device, including: an operation intensity information acquisition unit configured to acquire information indicating an operation intensity; and a phoneme information generation unit configured to output phoneme information for specifying a phoneme of a singing voice to be synthesized based on the information indicating the operation intensity supplied from the operation intensity information acquisition unit.
    Type: Application
    Filed: October 15, 2015
    Publication date: April 21, 2016
    Inventor: Tatsuya IRIYAMA
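    Illustrative sketch (not part of the patent record): a minimal Python sketch mapping operation intensity to a phoneme; the thresholds and phonemes are illustrative.
      def phoneme_for_intensity(intensity):
          if intensity < 0.3:
              return "n"    # soft operation -> soft consonant
          if intensity < 0.7:
              return "ra"
          return "pa"       # hard operation -> plosive

      if __name__ == "__main__":
          for velocity in (0.1, 0.5, 0.9):
              print(velocity, phoneme_for_intensity(velocity))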
  • Patent number: 9135909
    Abstract: A speech synthesis information editing apparatus is provided. The speech synthesis information editing apparatus includes a phoneme storage unit that stores phoneme information, which designates a duration of each phoneme of speech to be synthesized. The speech synthesis information editing apparatus also includes a feature storage unit that stores feature information, which designates a time variation in a feature of the speech. In addition, the speech synthesis information editing apparatus includes an edition processing unit that changes a duration of each phoneme designated by the phoneme information with an expansion/compression degree, based on a feature designated by the feature information in correspondence to the phoneme.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: September 15, 2015
    Assignee: Yamaha Corporation
    Inventor: Tatsuya Iriyama
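    Illustrative sketch (not part of the patent record): a minimal Python sketch in which each phoneme's duration is changed with its own expansion/compression degree derived from a per-phoneme feature (here, loudness); the weighting is an assumption.
      def stretch(phonemes, overall_ratio):
          """phonemes: list of dicts with 'name', 'duration' (s) and 'loudness' (0..1)."""
          out = []
          for p in phonemes:
              degree = overall_ratio * (0.5 + 0.5 * p["loudness"])  # feature-dependent degree
              out.append({**p, "duration": p["duration"] * degree})
          return out

      if __name__ == "__main__":
          phrase = [{"name": "s", "duration": 0.08, "loudness": 0.2},
                    {"name": "a", "duration": 0.20, "loudness": 0.9}]
          print(stretch(phrase, overall_ratio=1.5))   # the vowel stretches more than the consonant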
  • Patent number: 8975500
    Abstract: A display area, in which a note is displayed on two-axis coordinates configured by a tone pitch axis and a time axis, is displayed on a display device. A display magnification ratio used in the display area is variable. A note image of a given note is displayed in the display area to be arranged in correspondence with a tone pitch and a tone generation time of the note. The size of the note image is varied with the display magnification ratio. Relevant information is displayed in association with the note image displayed in the display area in such a manner that the relevant information is arranged inside the note image of the note in a first display state and the relevant information is arranged outside the note image of the note in a second display state with a display magnification ratio lower than that of the first display state.
    Type: Grant
    Filed: November 2, 2012
    Date of Patent: March 10, 2015
    Assignee: Yamaha Corporation
    Inventor: Tatsuya Iriyama
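    Illustrative sketch (not part of the patent record): a minimal Python sketch of the layout rule, assuming a pixel-width threshold decides whether the relevant information (here a lyric label) fits inside the note image; the threshold is hypothetical.
      def layout_note(note, magnification, min_inner_width=40):
          width_px = note["duration"] * magnification   # note image scales with zoom
          placement = "inside" if width_px >= min_inner_width else "outside"
          return {"pitch": note["pitch"], "width_px": width_px,
                  "label": note["lyric"], "label_placement": placement}

      if __name__ == "__main__":
          note = {"pitch": 64, "duration": 1.0, "lyric": "la"}
          print(layout_note(note, magnification=100))   # zoomed in  -> label inside
          print(layout_note(note, magnification=20))    # zoomed out -> label outside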
  • Publication number: 20140278433
    Abstract: A voice synthesis device includes a sequence data generation unit configured to generate sequence data including a plurality of kinds of parameters for controlling vocalization of a voice to be synthesized based on music information and lyrics information, an output unit configured to output a singing voice based on the sequence data, and a processing content information acquisition unit configured to acquire a plurality of pieces of processing content information, each associated with a piece of preset singing manner information. Each piece of content information indicates contents of edit processing for all or part of the parameters. The sequence data generation unit generates a plurality of pieces of sequence data, which are obtained by editing all or part of the parameters included in the sequence data, based on the content information associated with one of the pieces of singing manner information specified by a user.
    Type: Application
    Filed: March 5, 2014
    Publication date: September 18, 2014
    Applicant: Yamaha Corporation
    Inventor: Tatsuya IRIYAMA
  • Publication number: 20120143600
    Abstract: In a speech synthesis information editing apparatus, a phoneme storage unit stores phoneme information that designates a duration of each phoneme of speech to be synthesized. A feature storage unit stores feature information that designates a time variation in a feature of the speech. An edition processing unit changes a duration of each phoneme designated by the phoneme information with an expansion/compression degree depending on a feature designated by the feature information in correspondence to the phoneme.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 7, 2012
    Applicant: Yamaha Corporation
    Inventor: Tatsuya IRIYAMA
  • Patent number: 7929710
    Abstract: A communication apparatus is disposed at a target place for use in monitoring of sounds. An input section collects various sounds generated at the target place. The collected sounds contain a first type of sound information which should be monitored and a second type of sound information which should not be monitored. The input section converts the collected sounds into a signal capable of conveying the sound information. A signal processing section processes the signal for creating ambiguous sound information by masking, trimming or modifying the second type of the sound information. A transmission section transmits the processed signal to a remote place, where the sounds are reproduced from the transmitted signal and the first type of the sound information is monitored, while the second type of the sound information is not monitored, since the second type of the sound information is altered to the ambiguous sound information.
    Type: Grant
    Filed: September 9, 2004
    Date of Patent: April 19, 2011
    Assignee: Yamaha Corporation
    Inventor: Tatsuya Iriyama
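    Illustrative sketch (not part of the patent record): a minimal Python sketch, assuming frames are already tagged as private or not; private frames are replaced with low-level noise before transmission so only the first type of sound information remains intelligible remotely.
      import random

      def obscure(frames):
          """frames: list of (samples, is_private). Private frames become noise."""
          processed = []
          for samples, is_private in frames:
              if is_private:
                  samples = [random.uniform(-0.05, 0.05) for _ in samples]  # masked
              processed.append(samples)
          return processed   # signal handed to the transmission section

      if __name__ == "__main__":
          frames = [([0.4, 0.3], False),  # e.g., a cry that should be monitored
                    ([0.2, 0.1], True)]   # e.g., a conversation that should not
          print(obscure(frames))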
  • Publication number: 20050052285
    Abstract: A communication apparatus is disposed at a target place for use in monitoring of sounds. An input section collects various sounds generated at the target place. The collected sounds contain a first type of sound information which should be monitored and a second type of sound information which should not be monitored. The input section converts the collected sounds into a signal capable of conveying the sound information. A signal processing section processes the signal for creating ambiguous sound information by masking, trimming or modifying the second type of the sound information. A transmission section transmits the processed signal to a remote place, where the sounds are reproduced from the transmitted signal and the first type of the sound information is monitored, while the second type of the sound information is not monitored, since the second type of the sound information is altered to the ambiguous sound information.
    Type: Application
    Filed: September 9, 2004
    Publication date: March 10, 2005
    Inventor: Tatsuya Iriyama