Patents by Inventor Yasuo Yoshioka
Yasuo Yoshioka has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20090103740
Abstract: A sound-generating period in an audio signal can be identified with high accuracy even when changes in the environmental noise cannot be anticipated. Sound in the audio space in which an audio signal processing system 1 is disposed is continuously collected by a microphone 20 and input to an audio signal processing device 10 as an audio signal. Before a user carries out a prescribed operation, the audio signals input from the microphone 20 are sequentially stored in a first buffer 121. After the prescribed operation is carried out, the audio signals are sequentially stored in a second buffer 122. A specifying part 114 treats the level of the audio signal stored in the first buffer 121 as the level of the environmental noise, and the level of the audio signal sequentially stored in the second buffer 122 as the level of the sound generated at the current time, and calculates an S/N ratio from the two.
Type: Application
Filed: June 28, 2006
Publication date: April 23, 2009
Applicant: Yamaha Corporation
Inventor: Yasuo Yoshioka
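The two-buffer scheme described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names, the use of RMS as the "level", and the 10 dB decision threshold are all assumptions for the example.

```python
import math

def rms(samples):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr_db(noise_buffer, signal_buffer):
    """Estimate the S/N ratio: noise_buffer holds audio captured before the
    user's operation (treated as environmental noise), signal_buffer holds
    audio captured after it (treated as the currently generated sound)."""
    return 20.0 * math.log10(rms(signal_buffer) / rms(noise_buffer))

def is_sound_period(noise_buffer, signal_buffer, threshold_db=10.0):
    """Flag a sound-generating period when the S/N ratio exceeds a threshold."""
    return snr_db(noise_buffer, signal_buffer) >= threshold_db
```

In practice both buffers would be refilled continuously, so the noise estimate tracks the environment right up to the moment of the operation.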
-
Publication number: 20090063146
Abstract: In a voice processing device, a male voice index calculator calculates a male voice index indicating the similarity of the input sound to a male speaker sound model. A female voice index calculator calculates a female voice index indicating the similarity of the input sound to a female speaker sound model. A first discriminator discriminates the input sound between a non-human-voice sound and a human voice sound, which may be either a male voice sound or a female voice sound. A second discriminator discriminates the input sound between the male voice sound and the female voice sound, based on the male voice index and the female voice index, when the first discriminator detects a human voice sound.
Type: Application
Filed: August 26, 2008
Publication date: March 5, 2009
Applicant: Yamaha Corporation
Inventor: Yasuo Yoshioka
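The two-stage decision structure can be sketched as below. The scalar scores stand in for whatever the speaker sound models actually output; the function names and the 0.5 voice threshold are illustrative assumptions, not values from the application.

```python
def is_human_voice(voice_score, threshold=0.5):
    """First discriminator: human voice vs. non-voice, assuming a scalar
    score derived from the input sound (an assumption for this sketch)."""
    return voice_score >= threshold

def male_or_female(male_index, female_index):
    """Second discriminator: compare the two model-similarity indices."""
    return "male" if male_index >= female_index else "female"

def discriminate(voice_score, male_index, female_index):
    """Run the second stage only when the first stage detects a human voice."""
    if not is_human_voice(voice_score):
        return "non-voice"
    return male_or_female(male_index, female_index)
```

The point of the cascade is that the male/female comparison is only trusted once the input is known to be a voice at all.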
-
Patent number: 7464034
Abstract: A voice converting apparatus is constructed for converting an input voice into an output voice according to a target voice. The apparatus includes a storage section, an analyzing section including a characteristic analyzer, a producing section, a synthesizing section, a memory, an alignment processor, and a target decoder.
Type: Grant
Filed: September 27, 2004
Date of Patent: December 9, 2008
Assignees: Yamaha Corporation, Pompeu Fabra University
Inventors: Takahiro Kawashima, Yasuo Yoshioka, Pedro Cano, Alex Loscos, Xavier Serra, Mark Schiementz, Jordi Bonada
-
Publication number: 20080192954
Abstract: Microphones are provided at an air inlet of the engine and at a vehicle-cabin-side wall surface of the engine room, and engine sounds are picked up. The engine sound is processed by a signal processing section, and the processed engine sound is output from a speaker provided in the vehicle cabin. The signal processing section is provided with a filter that simulates the sound insulation characteristic of the vehicle cabin and a transformation section that processes the engine sound according to the driving condition. A spectrum transformation characteristic of the transformation section is determined according to values detected by a vehicle speed sensor, an engine speed sensor, and an accelerator depression sensor, and the spectrum of the engine sound is transformed by specifying that transformation characteristic, thereby enhancing the engine sound.
Type: Application
Filed: March 10, 2006
Publication date: August 14, 2008
Applicant: Yamaha Corporation
Inventors: Yoshikazu Honji, Yasuo Yoshioka, Tetsu Kobayashi, Akio Takahashi
-
Publication number: 20080154597
Abstract: A voice processing apparatus has a storage device that stores registration information containing a characteristic parameter of a given voice. The voice processing apparatus is further provided with a judgment unit, a management unit and a notification unit. The judgment unit judges whether an input voice is appropriate or not for creating or updating the registration information. The management unit creates or updates the registration information based on a characteristic parameter of the input voice when the judgment unit judges that the input voice is appropriate. The notification unit notifies the speaker of the input voice when the judgment unit judges that the input voice is inappropriate.
Type: Application
Filed: December 20, 2007
Publication date: June 26, 2008
Applicant: Yamaha Corporation
Inventors: Takehiko Kawahara, Yasuo Yoshioka
-
Publication number: 20080154585
Abstract: In a sound signal processing apparatus, a frame information generation section generates frame information for each frame of a sound signal. A storage stores the frame information generated by the frame information generation section. A first interval determination section determines a first utterance interval in the sound signal. A second interval determination section determines a second utterance interval based on the frame information of the first utterance interval stored in the storage, such that the second utterance interval is shorter than the first utterance interval and confined within it, by trimming frames from either the start point or the end point of the first utterance interval.
Type: Application
Filed: December 21, 2007
Publication date: June 26, 2008
Applicant: Yamaha Corporation
Inventor: Yasuo Yoshioka
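The trimming of the second utterance interval might look like this in outline. Using per-frame energy as the stored frame information and a fixed energy threshold are assumptions for the sketch; the application does not specify the trimming criterion.

```python
def second_interval(frame_energy, first_start, first_end, threshold):
    """Derive a second, shorter utterance interval confined within the first
    [first_start, first_end) interval by trimming frames whose stored energy
    falls below a threshold from the start and from the end."""
    start, end = first_start, first_end
    while start < end and frame_energy[start] < threshold:
        start += 1                      # trim weak frames at the start point
    while end > start and frame_energy[end - 1] < threshold:
        end -= 1                        # trim weak frames at the end point
    return start, end
```

Because only the already-stored frame information is consulted, the refinement needs no second pass over the raw signal.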
-
Patent number: 7389231
Abstract: A voice synthesizing apparatus comprises: a storage device that stores a first database storing a first parameter obtained by analyzing a voice and a second database storing a second parameter obtained by analyzing a voice with vibrato; an input device that inputs information for a voice to be synthesized; a generating device that generates a third parameter based on the first parameter read from the first database and the second parameter read from the second database in accordance with the input information; and a synthesizing device that synthesizes the voice in accordance with the third parameter. A highly realistic vibrato effect can thus be added to a synthesized voice.
Type: Grant
Filed: August 30, 2002
Date of Patent: June 17, 2008
Assignee: Yamaha Corporation
Inventors: Yasuo Yoshioka, Alex Loscos
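The generation of the third parameter from the first (plain voice) and second (vibrato voice) parameters can be illustrated as a simple linear blend. The abstract does not specify the combination rule, so the interpolation and the `depth` control below are assumptions for the example.

```python
def blend_parameters(plain, vibrato, depth):
    """Generate a third parameter vector from a plain-voice parameter vector
    and a vibrato-voice parameter vector. depth=0.0 reproduces the plain
    voice, depth=1.0 the full vibrato character (a hypothetical control
    standing in for 'the input information' in the abstract)."""
    return [(1.0 - depth) * p + depth * v for p, v in zip(plain, vibrato)]
```

A synthesizer would then drive its voice model with the blended vector instead of either database entry alone.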
-
Patent number: 7382496
Abstract: A printer provided with a scanner mounted to the printer main body includes control for suppressing the scanning speed of the carriage during a reading operation of a document image by the scanner. With this structure, the printer can perform an image forming operation at maximum speed, and the time required for forming an image can be reduced, while inferior reading caused by vibrations generated by the carriage is prevented by suppressing such vibrations.
Type: Grant
Filed: June 18, 2003
Date of Patent: June 3, 2008
Assignee: Sharp Kabushiki Kaisha
Inventors: Yukihiko Sugimoto, Ryuichi Nakashima, Hiroshi Kubota, Yasuo Yoshioka, Katsuhiko Kyuken, Yoshitaka Yamanaka
-
Publication number: 20080071535
Abstract: In a voice authentication apparatus, a characteristics analyzer analyzes characteristics of a sample noise that is generated around a subject while the subject produces a sample voice for authentication. A setting part sets a correction value according to the characteristics of the sample noise analyzed by the characteristics analyzer. A correction part corrects an index value, which indicates a degree of similarity between a feature quantity of a previously registered reference voice and a feature quantity of the sample voice obtained from the subject, based on the set correction value. A determinator determines the authenticity of the subject by comparing the corrected index value with a predetermined threshold value.
Type: Application
Filed: September 5, 2007
Publication date: March 20, 2008
Applicant: Yamaha Corporation
Inventors: Yasuo Yoshioka, Takehiko Kawahara
-
Publication number: 20080059805
Abstract: In a biometrics authentication apparatus, a storage part stores a dictionary containing biometrics information of a registered person. An information acquisition part acquires biometrics information of a subject person. An authentication part authenticates the validity of the subject person based on an index representing a degree of similarity between the biometrics information of the dictionary stored in the storage part and the acquired biometrics information. A dictionary management part updates the dictionary based on the acquired biometrics information in a first case, where the index represents a degree of similarity higher than a first threshold value. In a second case, where the index represents a degree of similarity lower than the first threshold value but higher than a second threshold value set lower than the first threshold value, the dictionary management part instead creates a new dictionary based on the acquired biometrics information and stores the created dictionary in the storage part.
Type: Application
Filed: August 30, 2007
Publication date: March 6, 2008
Applicant: Yamaha Corporation
Inventors: Yasuo Yoshioka, Takehiko Kawahara
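The two-threshold dictionary policy reads naturally as a three-way decision. The "reject" branch for similarity at or below the second threshold is an assumption (the abstract does not describe that case), and the names and example thresholds are illustrative only.

```python
def manage_dictionary(similarity, first_threshold, second_threshold):
    """Decide the dictionary action for one authentication attempt,
    assuming second_threshold < first_threshold:
    - above the first threshold: confidently the registered person,
      so update the existing dictionary with the new sample;
    - between the two thresholds: plausibly the same person in a new
      condition, so create and store an additional dictionary;
    - otherwise: take no dictionary action (an assumed branch)."""
    if similarity > first_threshold:
        return "update"
    if similarity > second_threshold:
        return "create_new"
    return "reject"
```

Keeping a second, lower threshold lets the system capture legitimate within-person variation (illness, background noise) without polluting the primary dictionary.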
-
Publication number: 20070225979
Abstract: A similarity degree estimation method is performed in two processes. In the first process, an inter-band correlation matrix is created from spectral data of an input voice: the spectral data are divided into a plurality of discrete bands separated from each other along a frequency axis, a plurality of envelope components of the spectral data are obtained from the discrete bands, and the elements of the inter-band correlation matrix are correlation values between the respective envelope components of the input voice. In the second process, a degree of similarity between a pair of input voices to be compared is calculated using the respective inter-band correlation matrices obtained for the pair of input voices through the first process.
Type: Application
Filed: March 20, 2007
Publication date: September 27, 2007
Applicants: Yamaha Corporation, Waseda University
Inventors: Mikio Tohyama, Michiko Kazama, Satoru Goto, Takehiko Kawahara, Yasuo Yoshioka
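The first process can be sketched as computing a Pearson correlation between every pair of per-band envelope sequences. The `similarity` measure in the second function (negative mean absolute element difference) is one hedged choice for comparing two such matrices, not the measure claimed by the application.

```python
import math

def correlation(x, y):
    """Pearson correlation between two envelope-component sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def inter_band_matrix(band_envelopes):
    """Inter-band correlation matrix: element (i, j) is the correlation
    between the envelope components of discrete bands i and j."""
    n = len(band_envelopes)
    return [[correlation(band_envelopes[i], band_envelopes[j])
             for j in range(n)] for i in range(n)]

def similarity(m1, m2):
    """Compare two voices via their matrices: negative mean absolute
    element difference (higher means more similar); an assumed metric."""
    n = len(m1)
    diff = sum(abs(m1[i][j] - m2[i][j])
               for i in range(n) for j in range(n)) / (n * n)
    return -diff
```

Because the matrix captures how envelope shapes co-vary across separated bands, it is relatively insensitive to overall level and pitch.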
-
Patent number: 7149682
Abstract: An apparatus is constructed for converting an input voice signal into an output voice signal according to a target voice signal. In the apparatus, an input device provides the input voice signal composed of original sinusoidal components and original residual components other than the original sinusoidal components. An extracting device extracts original attribute data from at least the sinusoidal components of the input voice signal. The original attribute data are characteristic of the input voice signal. A synthesizing device synthesizes new attribute data based on both the original attribute data derived from the input voice signal and target attribute data characteristic of the target voice signal, which is composed of target sinusoidal components and target residual components other than the target sinusoidal components. The target attribute data are derived from at least the target sinusoidal components.
Type: Grant
Filed: October 29, 2002
Date of Patent: December 12, 2006
Assignees: Yamaha Corporation, Pompeu Fabra University
Inventors: Yasuo Yoshioka, Hiraku Kayama, Xavier Serra, Jordi Bonada
-
Patent number: 7135636
Abstract: A method for synthesizing a natural-sounding singing voice divides performance data into a transition part and a long sound part. The transition part is represented by articulation (phonemic chain) data that is read from an articulation template database and is output without modification. For the long sound part, a new characteristic parameter is generated by linearly interpolating the characteristic parameters of the transition parts positioned before and after the long sound part and adding thereto a changing component of stationary data read from a stationary (constant part) template database. An associated apparatus for carrying out the singing voice synthesizing method includes a phoneme database for storing articulation data for the transition part and stationary data for the long sound part, a first device for outputting the articulation data, and a second device for outputting the newly generated characteristic parameter of the long sound part.
Type: Grant
Filed: February 27, 2003
Date of Patent: November 14, 2006
Assignee: Yamaha Corporation
Inventors: Hideki Kemmochi, Yasuo Yoshioka, Jordi Bonada
-
Patent number: 7117154
Abstract: A voice converter synthesizes an output voice signal from an input voice signal and a reference voice signal. In the voice converter, an analyzer device analyzes a plurality of sinusoidal wave components contained in the input voice signal to derive a parameter set of an original frequency and an original amplitude representing each sinusoidal wave component. A source device provides reference information characteristic of the reference voice signal. A modulator device modulates the parameter set of each sinusoidal wave component according to the reference information. A regenerator device operates according to each of the modulated parameter sets to regenerate each of the sinusoidal wave components, so that at least one of the frequency and the amplitude of each regenerated sinusoidal wave component varies from the original, and mixes the regenerated sinusoidal wave components together to synthesize the output voice signal.
Type: Grant
Filed: October 27, 1998
Date of Patent: October 3, 2006
Assignees: Yamaha Corporation, Pompeu Fabra University
Inventors: Yasuo Yoshioka, Xavier Serra
-
Publication number: 20060212298
Abstract: The spectrum envelope of an input sound is detected. Meanwhile, a converting spectrum is acquired, which is a frequency spectrum of a converting sound comprising a plurality of sounds, such as unison sounds. An output spectrum is generated by imparting the detected spectrum envelope of the input sound to the acquired converting spectrum. A sound signal is synthesized on the basis of the generated output spectrum. Further, a pitch of the input sound may be detected, and the frequencies of peaks in the acquired converting spectrum may be varied in accordance with the detected pitch of the input sound. In this manner, the output spectrum can have the pitch and spectrum envelope of the input sound and the spectrum frequency components of the converting sound comprising a plurality of sounds, and thus unison sounds can be readily generated with a simple arrangement.
Type: Application
Filed: March 9, 2006
Publication date: September 21, 2006
Applicant: Yamaha Corporation
Inventors: Hideki Kemmochi, Yasuo Yoshioka, Jordi Bonada
-
Patent number: 6991083
Abstract: The disclosed bill validator is provided with a plurality of pathway selectors at a position nearer to the inlet than to the outlet, wherein the selectors select pathways in respectively different phases so that the pathways form a cross between the plurality of pathway selectors in every pathway selection.
Type: Grant
Filed: July 11, 2003
Date of Patent: January 31, 2006
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Yasuo Yoshioka, Motoki Sugiyama
-
Publication number: 20060004569
Abstract: An envelope identification section generates input envelope data (DEVin) indicative of a spectral envelope (EVin) of an input voice. A template acquisition section reads out, from a storage section, converting spectrum data (DSPt) indicative of a frequency spectrum (SPt) of a converting voice. On the basis of the input envelope data (DEVin) and the converting spectrum data (DSPt), a data generation section specifies a frequency spectrum (SPnew) corresponding in shape to the frequency spectrum (SPt) of the converting voice and having substantially the same spectral envelope as the spectral envelope (EVin) of the input voice, and the data generation section generates new spectrum data (DSPnew) indicative of the frequency spectrum (SPnew). A reverse FFT section and an output processing section generate an output voice signal (Snew) on the basis of the new spectrum data (DSPnew).
Type: Application
Filed: June 24, 2005
Publication date: January 5, 2006
Applicant: Yamaha Corporation
Inventors: Yasuo Yoshioka, Alex Loscos
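The envelope-imposition step can be reduced to a per-bin sketch: whiten the converting spectrum by its own envelope, then scale by the input voice's envelope, so the result keeps SPt's fine structure under EVin's overall shape. Representing the spectra and envelopes as aligned per-bin magnitude arrays is a simplifying assumption for this example.

```python
def impose_envelope(conv_spectrum, conv_envelope, input_envelope):
    """Sketch of forming SPnew: for each frequency bin, divide the converting
    spectrum magnitude by the converting voice's own envelope (whitening),
    then multiply by the input voice's envelope, so the output follows the
    converting spectrum's shape but the input voice's spectral envelope."""
    return [s / ce * ie
            for s, ce, ie in zip(conv_spectrum, conv_envelope, input_envelope)]
```

A full implementation would work frame by frame on FFT magnitudes and resynthesize with the inverse FFT, as the abstract's reverse FFT section suggests.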
-
Publication number: 20050288921
Abstract: In a sound effect applying apparatus, an input part frequency-analyzes an input signal of sound or voice to detect a plurality of local peaks of harmonics contained in the input signal. A subharmonics provision part adds a spectrum component of subharmonics between the detected local peaks so as to provide the input signal with a sound effect. An output part converts the input signal of the frequency domain, containing the added spectrum component, into an output signal of the time domain to generate the sound or voice provided with the sound effect.
Type: Application
Filed: June 22, 2005
Publication date: December 29, 2005
Applicant: Yamaha Corporation
Inventors: Yasuo Yoshioka, Alex Loscos
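Inserting a subharmonic component between adjacent harmonic peaks might look like this in outline. Representing peaks as (frequency, amplitude) pairs, placing the new component at the midpoint, and giving it half the neighbours' mean amplitude are all assumptions for the sketch.

```python
def add_subharmonics(peaks, gain=0.5):
    """Given detected harmonic peaks as (frequency_hz, amplitude) pairs
    sorted by frequency, insert a subharmonic component midway between each
    adjacent pair, scaled to a fraction of the neighbours' mean amplitude."""
    out = list(peaks)
    for (f1, a1), (f2, a2) in zip(peaks, peaks[1:]):
        out.append(((f1 + f2) / 2.0, gain * (a1 + a2) / 2.0))
    return sorted(out)
```

The enriched peak list would then be rendered back to the time domain by the output part's inverse transform.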
-
Patent number: 6944589
Abstract: A voice analyzing apparatus comprises: a first analyzer that analyzes a voice into harmonic components and inharmonic components; a second analyzer that analyzes a magnitude spectrum envelope of the harmonic components into a magnitude spectrum envelope of a vocal cord vibration waveform, resonances, and a spectrum envelope of the difference between the magnitude spectrum envelope of the harmonic components and the sum of the magnitude spectrum envelope of the vocal cord vibration waveform and the resonances; and a memory that stores the inharmonic components, the magnitude spectrum envelope of the vocal cord vibration waveform, the resonances, and the spectrum envelope of the difference.
Type: Grant
Filed: March 8, 2002
Date of Patent: September 13, 2005
Assignee: Yamaha Corporation
Inventors: Yasuo Yoshioka, Jordi Bonada Sanjaume
-
Publication number: 20050049875
Abstract: A voice converting apparatus is constructed for converting an input voice into an output voice according to a target voice. In the apparatus, a storage section provisionally stores source data, which are associated with and extracted from the target voice. An analyzing section analyzes the input voice to extract therefrom a series of input data frames representing the input voice. A producing section produces a series of target data frames representing the target voice based on the source data, while aligning the target data frames with the input data frames to secure synchronization between them. A synthesizing section synthesizes the output voice according to the target data frames and the input data frames. In the characteristic analysis, a characteristic analyzer extracts a characteristic vector from the input voice. A memory memorizes target behavior data representing a behavior of the target voice.
Type: Application
Filed: September 27, 2004
Publication date: March 3, 2005
Inventors: Takahiro Kawashima, Yasuo Yoshioka, Pedro Cano, Alex Loscos, Xavier Serra, Mark Schiementz, Jordi Bonada