Patents by Inventor Kayoko Yanagisawa
Kayoko Yanagisawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11144597
Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about …
Type: Grant
Filed: March 16, 2018
Date of Patent: October 12, 2021
Assignee: Kabushiki Kaisha Toshiba
Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
-
Patent number: 10446133
Abstract: There is provided a speech synthesizer comprising a processor configured to receive one or more linguistic units, convert said one or more linguistic units into a sequence of speech vectors for synthesizing speech, and output the sequence of speech vectors. Said conversion comprises modelling higher and lower spectral frequencies of the speech data as separate high and low spectral streams by applying a first set of one or more statistical models to the higher spectral frequencies and a second set of one or more statistical models to the lower spectral frequencies.
Type: Grant
Filed: February 24, 2017
Date of Patent: October 15, 2019
Assignee: Kabushiki Kaisha Toshiba
Inventors: Kayoko Yanagisawa, Ranniery Maia, Yannis Stylianou
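The dual-stream idea in this abstract can be sketched in a few lines. This is not the patented method: the cutoff bin, the toy data, and the per-bin Gaussian "statistical models" are all illustrative assumptions standing in for whatever trained models the patent actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 40))  # toy data: 200 frames x 40 spectral bins

CUTOFF = 20  # assumed boundary between the low and high spectral streams
low_stream, high_stream = frames[:, :CUTOFF], frames[:, CUTOFF:]

def fit_gaussian(stream):
    """Per-bin mean and variance, a stand-in for a trained statistical model."""
    return stream.mean(axis=0), stream.var(axis=0)

low_model = fit_gaussian(low_stream)    # first model set: lower spectral frequencies
high_model = fit_gaussian(high_stream)  # second model set: higher spectral frequencies

def generate(model, rng):
    """Sample one band's spectrum from its Gaussian model."""
    mean, var = model
    return mean + np.sqrt(var) * rng.standard_normal(mean.shape)

# Synthesis side: generate each stream separately, then rejoin the bands
# into a single full-band speech vector.
speech_vector = np.concatenate([generate(low_model, rng), generate(high_model, rng)])
```

The point of the split is that each band gets its own model set, so the two frequency regions can be fitted independently before recombination.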
-
Patent number: 10373604
Abstract: An acoustic model, relating acoustic units to speech vectors, is adapted. The acoustic model comprises a set of acoustic model parameters related to a given speech factor, which enable the acoustic model to output speech vectors with different values of the speech factor. The method comprises inputting a sample of speech, with a first value of the speech factor, which is corrupted by noise; determining values of the set of acoustic model parameters which enable the acoustic model to output speech with said first value of the speech factor; and employing said determined values in said acoustic model. The acoustic model parameters are obtained by obtaining corrupted speech factor parameters using the sample of speech, and mapping the corrupted speech factor parameters to clean acoustic model parameters using noise characterization parameters characterizing the noise.
Type: Grant
Filed: February 2, 2017
Date of Patent: August 6, 2019
Assignee: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kayoko Yanagisawa
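The corrupted-to-clean mapping described here can be illustrated with a deliberately simple toy. This is not the patented mapping: treating the "speech factor parameters" as a mean vector, characterizing additive noise by its mean, and compensating by subtraction are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
clean_frames = rng.standard_normal((100, 8)) + 2.0  # hypothetical clean speech frames
noise = rng.standard_normal((100, 8)) * 0.1 + 0.5   # hypothetical additive noise
noisy_frames = clean_frames + noise                  # the corrupted input sample

# Step 1: estimate parameters directly from the corrupted sample.
corrupted_params = noisy_frames.mean(axis=0)

# Step 2: obtain noise characterization parameters from the noise itself
# (in practice these would come from a noise estimator, not ground truth).
noise_characterization = noise.mean(axis=0)

# Step 3: map corrupted parameters to clean ones using the noise
# characterization -- here, a simple additive compensation.
clean_params = corrupted_params - noise_characterization
```

Because the toy corruption is purely additive, the compensated parameters recover the clean-speech mean exactly; a real system would use a learned, generally nonlinear mapping.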
-
Publication number: 20180203946
Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about …
Type: Application
Filed: March 16, 2018
Publication date: July 19, 2018
Applicant: Kabushiki Kaisha Toshiba
Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
-
Patent number: 9959368
Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about …
Type: Grant
Filed: August 13, 2014
Date of Patent: May 1, 2018
Assignee: Kabushiki Kaisha Toshiba
Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
-
Publication number: 20170263239
Abstract: There is provided a speech synthesiser comprising a processor configured to receive one or more linguistic units, convert said one or more linguistic units into a sequence of speech vectors for synthesising speech, and output the sequence of speech vectors. Said conversion comprises modelling higher and lower spectral frequencies of the speech data as separate high and low spectral streams by applying a first set of one or more statistical models to the higher spectral frequencies and a second set of one or more statistical models to the lower spectral frequencies.
Type: Application
Filed: February 24, 2017
Publication date: September 14, 2017
Applicant: Kabushiki Kaisha Toshiba
Inventors: Kayoko Yanagisawa, Ranniery Maia, Yannis Stylianou
-
Publication number: 20170221479
Abstract: A method of adapting an acoustic model relating acoustic units to speech vectors, wherein said acoustic model comprises a set of speech factor parameters related to a given speech factor and which enable the acoustic model to output speech vectors with different values of the speech factor, the method comprising: inputting a sample of speech with a first value of the speech factor; determining values of the set of speech factor parameters which enable the acoustic model to output speech with said first value of the speech factor; and employing said determined values of the set of speech factor parameters in said acoustic model, wherein said sample of speech is corrupted by noise, and wherein said step of determining the values of the set of speech factor parameters comprises: (i) obtaining noise characterization parameters characterizing the noise; (ii) performing a speech factor parameter generation algorithm on the sample of speech, thereby generating corrupted values of the set of speech factor parameters …
Type: Application
Filed: February 2, 2017
Publication date: August 3, 2017
Applicant: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Kayoko Yanagisawa
-
Patent number: 9361722
Abstract: A method of animating a computer generation of a head and displaying the text of an electronic book, such that the head has a mouth which moves in accordance with the speech of the text of the electronic book to be output by the head and a word or group of words from the text is displayed while simultaneously being mimed by the mouth, wherein input text is divided into a sequence of acoustic units, which are converted to a sequence of image vectors and into a sequence of text display indicators. The sequence of image vectors is outputted as video such that the mouth of said head moves to mime the speech associated with the input text with a selected expression, and the sequence of text display indicators is output as video which is synchronized with the lip movement of the head.
Type: Grant
Filed: August 8, 2014
Date of Patent: June 7, 2016
Assignee: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Balakrishna Venkata Jagannadha Kolluru, Ioannis Stylianou, Robert Arthur Blokland, Norbert Braunschweiler, Kayoko Yanagisawa, Langzhou Chen, Ranniery Maia, Robert Anderson, Bjorn Stenger, Roberto Cipolla, Neil Baker
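The synchronization step in this abstract can be sketched as a scheduling problem: each word must be displayed over exactly the video frames in which the mouth mimes it. This is not the patented implementation; the frame rate, the toy word/duration pairs, and the one-word-per-unit grouping are illustrative assumptions.

```python
FPS = 25  # assumed video frame rate

# Toy stand-in for acoustic units with durations predicted by a duration model:
# (word, duration in seconds) pairs.
units = [("hello", 0.4), ("world", 0.6), ("again", 0.5)]

def display_schedule(units, fps=FPS):
    """Map each word to the half-open [start, end) frame range it occupies,
    so the text display indicators stay aligned with the lip movement."""
    schedule, frame = [], 0
    for word, dur in units:
        n_frames = round(dur * fps)  # frames occupied by this unit's mouth motion
        schedule.append((word, frame, frame + n_frames))
        frame += n_frames
    return schedule

print(display_schedule(units))
```

Because each range starts where the previous one ends, the word display and the image-vector (lip) stream advance through the same frame timeline and remain synchronized.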
-
Publication number: 20150052084
Abstract: A system for emulating a subject, to allow a user to interact with a computer generated talking head with the subject's face and voice; said system comprising a processor, a user interface and a personality storage section, the user interface being configured to emulate the subject, by displaying a talking head which comprises the subject's face and output speech from the mouth of the face with the subject's voice, the user interface further comprising a receiver for receiving a query from the user, the emulated subject being configured to respond to the query received from the user, the processor comprising a dialogue section and a talking head generation section, wherein said dialogue section is configured to generate a response to a query inputted by a user from the user interface and generate a response to be outputted by the talking head, the response being generated by retrieving information from said personality storage section, said personality storage section comprising content created by or about …
Type: Application
Filed: August 13, 2014
Publication date: February 19, 2015
Applicant: Kabushiki Kaisha Toshiba
Inventors: Balakrishna Venkata Jagannadha Kolluru, Vincent Ping Leung Wan, Bjorn Dietmar Rafael Stenger, Roberto Cipolla, Javier Latorre-Martinez, Langzhou Chen, Ranniery Da Silva Maia, Kayoko Yanagisawa, Norbert Braunschweiler, Ioannis Stylianou, Robert Arthur Blokland
-
Publication number: 20150042662
Abstract: A method of animating a computer generation of a head and displaying the text of an electronic book, such that the head has a mouth which moves in accordance with the speech of the text of the electronic book to be output by the head and a word or group of words from the text is displayed while simultaneously being mimed by the mouth, said method comprising: inputting the text of said book; dividing said input text into a sequence of acoustic units; determining expression characteristics for the inputted text; calculating a duration for each acoustic unit using a duration model; converting said sequence of acoustic units to a sequence of image vectors using a statistical model, wherein said model has a plurality of model parameters describing probability distributions which relate an acoustic unit to an image vector, said image vector comprising a plurality of parameters which define a face of said head; converting said sequence of acoustic units into a sequence of text display indicators using a text dis…
Type: Application
Filed: August 8, 2014
Publication date: February 12, 2015
Applicant: Kabushiki Kaisha Toshiba
Inventors: Javier Latorre-Martinez, Vincent Ping Leung Wan, Balakrishna Venkata Jagannadha Kolluru, Ioannis Stylianou, Robert Arthur Blokland, Norbert Braunschweiler, Kayoko Yanagisawa, Langzhou Chen, Ranniery Maia, Robert Anderson, Bjorn Stenger, Roberto Cipolla, Neil Baker