Patents by Inventor Tatsuya Iriyama
Tatsuya Iriyama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240021183
Abstract: A singing sound output system includes at least one processor configured to execute a teaching unit configured to indicate to a user a progression position in singing data that are temporally associated with accompaniment data and include a plurality of syllables, an acquisition unit configured to acquire at least one piece of sound information input by a performance, a syllable identification unit configured to identify, from the syllables in the singing data, a syllable corresponding to the sound information, a timing identification unit configured to associate, with the sound information, relative information indicating a relative timing with respect to an identified syllable identified by the syllable identification unit, a synthesizing unit configured to synthesize a singing sound based on the identified syllable, and an output unit configured to, based on the relative information, synchronize and output the singing sound and an accompaniment sound based on the accompaniment data.
Type: Application
Filed: September 27, 2023
Publication date: January 18, 2024
Inventor: Tatsuya IRIYAMA
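As a rough illustration of the syllable-matching and relative-timing idea in this abstract (the data layout, nearest-onset matching rule, and all names below are assumptions for the sketch, not the patented method):

```python
def match_syllable(singing_data, note_time):
    """Identify the syllable whose onset is nearest to a performed note,
    and record the note's timing relative to that syllable's onset.

    singing_data: list of (syllable, onset_seconds) pairs, one per syllable.
    """
    syllable, onset = min(singing_data, key=lambda item: abs(item[1] - note_time))
    return syllable, note_time - onset  # (identified syllable, relative information)

# A note played at 0.55 s lands closest to the syllable starting at 0.5 s.
data = [("twin", 0.0), ("kle", 0.5), ("lit", 1.0)]
syllable, offset = match_syllable(data, 0.55)
# syllable is "kle"; offset is the small positive relative timing
```

The stored offset is what would later let the synthesized singing sound be re-synchronized against the accompaniment.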
-
Publication number: 20230419946
Abstract: A sound generation device includes an electronic controller including at least one processor. The electronic controller is configured to execute a first acquisition module configured to acquire first lyrics data in which a plurality of characters to be vocalized are arranged in a time series and that include a first character and a second character that follows the first character, a second acquisition module configured to acquire a vocalization start instruction, and a control module configured to, in response to the acquiring of the vocalization start instruction, output an instruction to generate an audio signal based on a first vocalization corresponding to the first character, in response to the vocalization start instruction satisfying a first condition, and output an instruction to generate the audio signal based on a second vocalization corresponding to the second character, in response to the vocalization start instruction not satisfying the first condition.
Type: Application
Filed: September 8, 2023
Publication date: December 28, 2023
Inventor: Tatsuya IRIYAMA
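A minimal sketch of the conditional character selection described above, assuming the "first condition" is a timing test (the threshold semantics and function names are illustrative, not taken from the patent):

```python
def select_character(lyrics, index, instruction_time, boundary_time):
    """Return (character to vocalize, new index).

    If the vocalization start instruction satisfies the first condition
    (here: it arrives before a boundary time), vocalize the first
    character; otherwise advance to and vocalize the second character.
    """
    if instruction_time < boundary_time:          # first condition satisfied
        return lyrics[index], index
    next_index = min(index + 1, len(lyrics) - 1)  # clamp at the last character
    return lyrics[next_index], next_index

lyrics = ["ha", "ppy", "birth", "day"]
print(select_character(lyrics, 0, instruction_time=0.4, boundary_time=0.5))
# ('ha', 0): the instruction met the first condition, so the first character is used
```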
-
Publication number: 20230239520
Abstract: A first terminal transmits first event data instructing generation of a first sound to a server. A second terminal transmits second event data instructing generation of a second sound to the server. The server transmits data including the first event data and the second event data to the first terminal. The first terminal controls generation of the first sound and the second sound, based on the data including the first event data and the second event data.
Type: Application
Filed: March 30, 2023
Publication date: July 27, 2023
Inventors: Tatsuya IRIYAMA, Satoshi UKAI
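The relay pattern in this abstract can be sketched as follows (a toy model under assumed data shapes; real networking, sessions, and sound generation are out of scope):

```python
class RelayServer:
    """Collects event data from terminals and returns the merged list,
    so each terminal can generate every participant's sound locally."""

    def __init__(self):
        self.events = []

    def receive(self, event_data):
        self.events.append(event_data)

    def send_merged(self):
        return list(self.events)

server = RelayServer()
server.receive({"terminal": 1, "note": 60})  # first event data (first sound)
server.receive({"terminal": 2, "note": 64})  # second event data (second sound)
merged = server.send_merged()
# the first terminal receives both events and controls generation of both sounds
```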
-
Publication number: 20230239621
Abstract: A signal processing system is a system in which a plurality of devices including at least a first terminal device and a second terminal device that receive streaming data are connected to a communication system capable of communicating with the plurality of devices. The signal processing system includes: a receiving unit that receives a designation of first sound data from the first terminal device that received the streaming data and a designation of second sound data from the second terminal device that received the streaming data; and a signal processing unit that obtains first sound data corresponding to the received designation of first sound data and second sound data corresponding to the received designation of second sound data, and generates a third sound signal in which a first sound signal corresponding to the first sound data and a second sound signal corresponding to the second sound data are mixed.
Type: Application
Filed: March 28, 2023
Publication date: July 27, 2023
Inventor: Tatsuya IRIYAMA
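The core mixing step, generating a third signal from two requested signals, can be illustrated with a simple sample-wise sum (a sketch only; a real mixer would also handle gain, clipping, and resampling):

```python
def mix_signals(first_signal, second_signal):
    """Mix two equal-length sample sequences into a third signal."""
    if len(first_signal) != len(second_signal):
        raise ValueError("signals must be the same length")
    return [a + b for a, b in zip(first_signal, second_signal)]

third = mix_signals([0.1, 0.2, 0.3], [0.0, -0.1, 0.1])
# third is the sample-by-sample sum of the two requested sounds
```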
-
Publication number: 20230042477
Abstract: A reproduction control method implemented by a computer includes receiving, from a first terminal device, a first reproduction request in accordance with an instruction from a first user, receiving, from a second terminal device, a second reproduction request in accordance with an instruction from a second user, acquiring a first acoustic signal representing a first sound in accordance with the first reproduction request, and a second acoustic signal representing a second sound which is in accordance with the second reproduction request and has acoustic characteristics that differ from acoustic characteristics of the first sound represented by the first acoustic signal, mixing the first acoustic signal and the second acoustic signal, thereby generating a third acoustic signal, and causing a reproduction system to reproduce a third sound represented by the third acoustic signal.
Type: Application
Filed: October 14, 2022
Publication date: February 9, 2023
Inventor: Tatsuya IRIYAMA
-
Publication number: 20230012622
Abstract: A reproduction control method, which is executed by a computer, includes detecting a first state in which an object is separated from an operation surface by a prescribed distance and a second state in which the object is in contact with the operation surface, initiating sound reproduction at a first time point at which the first state is detected, continuing the sound reproduction from the first time point to a third time point which is subsequent to a second time point at which the second state is detected, and controlling a change in a feature amount of a sound during a first time period from the first time point to the second time point.
Type: Application
Filed: September 22, 2022
Publication date: January 19, 2023
Inventor: Tatsuya IRIYAMA
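One way to picture the "controlling a change in a feature amount" step is a feature that ramps between the hover detection and the touch detection. The linear ramp and the choice of volume as the feature are assumptions for illustration:

```python
def volume_at(t, hover_time, touch_time, v_start=0.0, v_end=1.0):
    """Interpolate a sound feature (here, volume) across the first time
    period, i.e. from the hover detection to the touch detection."""
    if t <= hover_time:
        return v_start
    if t >= touch_time:
        return v_end
    fraction = (t - hover_time) / (touch_time - hover_time)
    return v_start + (v_end - v_start) * fraction

print(volume_at(0.25, hover_time=0.0, touch_time=0.5))  # 0.5
```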
-
Publication number: 20230019428
Abstract: An information processing method, which is executed by a computer, includes detecting a first state in which an object is separated from an operation surface by a prescribed distance, detecting a second state in which the object comes in contact with the operation surface after the first state is detected, executing a first process which includes reading data from a first storage device and loading, into a second storage device, the data that are read, in response to the detecting of the first state, and executing a second process with respect to the data loaded into the second storage device, in response to the detecting of the second state.
Type: Application
Filed: September 22, 2022
Publication date: January 19, 2023
Inventor: Tatsuya IRIYAMA
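This is essentially a hover-triggered prefetch: load on approach, act on contact. A minimal sketch, where the dict-backed "slow storage" and the placeholder second process are assumptions:

```python
class HoverPrefetch:
    def __init__(self, slow_storage):
        self.slow_storage = slow_storage  # stands in for the first storage device
        self.cache = {}                   # stands in for the second storage device

    def on_hover(self, key):
        # first process: read and load the data before the touch happens
        self.cache[key] = self.slow_storage[key]

    def on_touch(self, key):
        # second process: act on the already-loaded data (placeholder transform)
        return self.cache[key].upper()

pad = HoverPrefetch({"kick": "kick sample"})
pad.on_hover("kick")            # first state detected: prefetch into the cache
result = pad.on_touch("kick")   # second state detected: no loading delay
# result == "KICK SAMPLE"
```

The payoff of the pattern is latency: by the time the touch lands, the data are already in the fast storage.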
-
Publication number: 20230013425
Abstract: A reproduction control method, which is executed by a computer, includes determining, based on an image showing an object, whether a type of the object is a first type or a second type that is different from the first type, and reproducing a sound, triggered by a striking of an operation surface by the object, based on a result of the determining.
Type: Application
Filed: September 22, 2022
Publication date: January 19, 2023
Inventor: Tatsuya IRIYAMA
-
Patent number: 9355634
Abstract: A voice synthesis device includes a sequence data generation unit configured to generate sequence data including a plurality of kinds of parameters for controlling vocalization of a voice to be synthesized based on music information and lyrics information, an output unit configured to output a singing voice based on the sequence data, and a processing content information acquisition unit configured to acquire a plurality of pieces of processing content information, each associated with a piece of preset singing manner information. Each piece of the content information indicates the contents of edit processing for all or part of the parameters. The sequence data generation unit generates a plurality of pieces of sequence data, and the sequence data are obtained by editing all or part of the parameters included in the sequence data, based on the content information associated with one of the pieces of singing manner information specified by a user.
Type: Grant
Filed: March 5, 2014
Date of Patent: May 31, 2016
Assignee: Yamaha Corporation
Inventor: Tatsuya Iriyama
-
Publication number: 20160111083
Abstract: Provided is a phoneme information synthesis device, including: an operation intensity information acquisition unit configured to acquire information indicating an operation intensity; and a phoneme information generation unit configured to output phoneme information for specifying a phoneme of a singing voice to be synthesized based on the information indicating the operation intensity supplied from the operation intensity information acquisition unit.
Type: Application
Filed: October 15, 2015
Publication date: April 21, 2016
Inventor: Tatsuya IRIYAMA
-
Patent number: 9135909
Abstract: A speech synthesis information editing apparatus is provided. The speech synthesis information editing apparatus includes a phoneme storage unit that stores phoneme information, which designates a duration of each phoneme of speech to be synthesized. The speech synthesis information editing apparatus also includes a feature storage unit that stores feature information, which designates a time variation in a feature of the speech. In addition, the speech synthesis information editing apparatus includes an edition processing unit that changes a duration of each phoneme designated by the phoneme information with an expansion/compression degree, based on a feature designated by the feature information in correspondence to the phoneme.
Type: Grant
Filed: December 1, 2011
Date of Patent: September 15, 2015
Assignee: Yamaha Corporation
Inventor: Tatsuya Iriyama
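The expansion/compression step can be sketched as a per-phoneme duration scale driven by each phoneme's feature value. The linear mapping and the `strength` parameter are assumptions for illustration, not the patented formula:

```python
def edit_durations(durations_ms, features, strength=0.5):
    """Scale each phoneme's duration by an expansion/compression degree
    derived from the feature value associated with that phoneme."""
    return [d * (1.0 + strength * f) for d, f in zip(durations_ms, features)]

# A phoneme with a high feature value is stretched; a low one is compressed.
print(edit_durations([100, 80], [0.5, -0.5]))  # [125.0, 60.0]
```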
-
Patent number: 8975500
Abstract: A display area, in which a note is displayed on two-axis coordinates configured by a tone pitch axis and a time axis, is displayed on a display device. A display magnification ratio used in the display area is variable. A note image of a given note is displayed in the display area to be arranged in correspondence with a tone pitch and a tone generation time of the note. The size of the note image is varied with the display magnification ratio. Relevant information is displayed in association with the note image displayed in the display area in such a manner that the relevant information is arranged inside the note image of the note in a first display state and the relevant information is arranged outside the note image of the note in a second display state with a display magnification ratio lower than that of the first display state.
Type: Grant
Filed: November 2, 2012
Date of Patent: March 10, 2015
Assignee: Yamaha Corporation
Inventor: Tatsuya Iriyama
-
Publication number: 20140278433
Abstract: A voice synthesis device includes a sequence data generation unit configured to generate sequence data including a plurality of kinds of parameters for controlling vocalization of a voice to be synthesized based on music information and lyrics information, an output unit configured to output a singing voice based on the sequence data, and a processing content information acquisition unit configured to acquire a plurality of pieces of processing content information, each associated with a piece of preset singing manner information. Each piece of the content information indicates the contents of edit processing for all or part of the parameters. The sequence data generation unit generates a plurality of pieces of sequence data, and the sequence data are obtained by editing all or part of the parameters included in the sequence data, based on the content information associated with one of the pieces of singing manner information specified by a user.
Type: Application
Filed: March 5, 2014
Publication date: September 18, 2014
Applicant: Yamaha Corporation
Inventor: Tatsuya IRIYAMA
-
Publication number: 20120143600
Abstract: In a speech synthesis information editing apparatus, a phoneme storage unit stores phoneme information that designates a duration of each phoneme of speech to be synthesized. A feature storage unit stores feature information that designates a time variation in a feature of the speech. An edition processing unit changes a duration of each phoneme designated by the phoneme information with an expansion/compression degree depending on a feature designated by the feature information in correspondence to the phoneme.
Type: Application
Filed: December 1, 2011
Publication date: June 7, 2012
Applicant: Yamaha Corporation
Inventor: Tatsuya IRIYAMA
-
Patent number: 7929710
Abstract: A communication apparatus is disposed at a target place for use in monitoring of sounds. An input section collects various sounds generated at the target place. The collected sounds contain a first type of sound information which should be monitored and a second type of sound information which should not be monitored. The input section converts the collected sounds into a signal capable of conveying the sound information. A signal processing section processes the signal for creating ambiguous sound information by masking, trimming or modifying the second type of the sound information. A transmission section transmits the processed signal to a remote place, where the sounds are reproduced from the transmitted signal and the first type of the sound information is monitored, while the second type of the sound information is not monitored, since the second type of the sound information is altered to the ambiguous sound information.
Type: Grant
Filed: September 9, 2004
Date of Patent: April 19, 2011
Assignee: Yamaha Corporation
Inventor: Tatsuya Iriyama
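A toy sketch of the masking step, assuming segments arrive pre-flagged as private or not (the flagging itself, and silencing as the masking strategy, are assumptions for illustration):

```python
def mask_private_segments(segments):
    """Replace segments flagged as not-to-be-monitored with silence,
    leaving the monitored segments untouched before transmission.

    segments: list of (samples, is_private) pairs.
    """
    processed = []
    for samples, is_private in segments:
        processed.append([0.0] * len(samples) if is_private else samples)
    return processed

signal = [([0.3, 0.5], False),   # first type: should be monitored
          ([0.9, 0.8], True)]    # second type: should not be monitored
print(mask_private_segments(signal))  # [[0.3, 0.5], [0.0, 0.0]]
```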
-
Publication number: 20050052285
Abstract: A communication apparatus is disposed at a target place for use in monitoring of sounds. An input section collects various sounds generated at the target place. The collected sounds contain a first type of sound information which should be monitored and a second type of sound information which should not be monitored. The input section converts the collected sounds into a signal capable of conveying the sound information. A signal processing section processes the signal for creating ambiguous sound information by masking, trimming or modifying the second type of the sound information. A transmission section transmits the processed signal to a remote place, where the sounds are reproduced from the transmitted signal and the first type of the sound information is monitored, while the second type of the sound information is not monitored, since the second type of the sound information is altered to the ambiguous sound information.
Type: Application
Filed: September 9, 2004
Publication date: March 10, 2005
Inventor: Tatsuya Iriyama