Patents by Inventor Chih-Chung Kuo

Chih-Chung Kuo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9691389
    Abstract: In a spoken word generation system for speech recognition, at least one input device receives a plurality of input signals including at least one sound signal; a mode detection module detects the plurality of input signals; when a specific sound event is detected in the at least one sound signal, or at least one control signal is included in the plurality of input signals, a speech training mode is outputted; when no specific sound event is detected and no control signal is included, a speech recognition mode is outputted; a speech training module receives the speech training mode, performs a training process on an audio segment, and outputs a training result; and a speech recognition module receives the speech recognition mode, performs a speech recognition process, and outputs a recognition result.
    Type: Grant
    Filed: May 28, 2014
    Date of Patent: June 27, 2017
    Assignee: Industrial Technology Research Institute
    Inventors: Shih-Chieh Chien, Chih-Chung Kuo
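The mode-selection rule in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the detector and the list-of-samples signal representation are hypothetical.

```python
def select_mode(sound_signals, control_signals, detect_sound_event):
    """Return the system mode per the rule in the abstract: training mode
    when a specific sound event is detected in any sound signal or when
    any control signal is present; recognition mode otherwise."""
    if any(detect_sound_event(s) for s in sound_signals) or control_signals:
        return "training"
    return "recognition"

# Hypothetical detector: flags a loud sample as the "specific sound event".
detector = lambda signal: max(signal, default=0.0) > 0.9

select_mode([[0.1, 0.95]], [], detector)         # -> "training"
select_mode([[0.1, 0.2]], ["button"], detector)  # -> "training"
select_mode([[0.1, 0.2]], [], detector)          # -> "recognition"
```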
  • Publication number: 20160343910
    Abstract: A light-emitting device includes a substrate and a first light-emitting unit. The first light-emitting unit is disposed on the substrate, and includes a first semiconductor layer, a first light-emitting layer, and a second semiconductor layer. The first semiconductor layer is disposed on the substrate. The first light-emitting layer is disposed between the first semiconductor layer and the second semiconductor layer. The second semiconductor layer is disposed on the first light-emitting layer. The first semiconductor layer has a first sidewall and a second sidewall. A first angle is between the substrate and the first sidewall. A second angle is between the substrate and the second sidewall. The first angle is smaller than the second angle.
    Type: Application
    Filed: April 22, 2016
    Publication date: November 24, 2016
    Inventors: Tsung-Syun Huang, Chih-Chung Kuo, Jing-En Huang, Shao-Ying Ting
  • Publication number: 20160247788
    Abstract: The disclosure relates to a high-voltage light-emitting diode (HV LED) and a manufacturing method thereof. A plurality of LED dies connected in series, in parallel, or in series and parallel are formed on a substrate. A side surface of the first semiconductor layer of some of the LED dies is aligned with a side surface of the substrate, so that no space exposing the substrate is reserved between those LED dies and the edges of the substrate. The proportion of the substrate covered by the LED dies is thereby increased, that is, the light-emitting area per unit area is increased, and the light-extraction efficiency of the HV LED is improved.
    Type: Application
    Filed: February 17, 2016
    Publication date: August 25, 2016
    Inventors: Tsung-Syun Huang, Chih-Chung Kuo, Yi-Ru Huang, Chih-Ming Shen, Kuan-Chieh Huang, Jing-En Huang
  • Publication number: 20150269930
    Abstract: In a spoken word generation system for speech recognition, at least one input device receives a plurality of input signals including at least one sound signal; a mode detection module detects the plurality of input signals; when a specific sound event is detected in the at least one sound signal, or at least one control signal is included in the plurality of input signals, a speech training mode is outputted; when no specific sound event is detected and no control signal is included, a speech recognition mode is outputted; a speech training module receives the speech training mode, performs a training process on an audio segment, and outputs a training result; and a speech recognition module receives the speech recognition mode, performs a speech recognition process, and outputs a recognition result.
    Type: Application
    Filed: May 28, 2014
    Publication date: September 24, 2015
    Applicant: Industrial Technology Research Institute
    Inventors: Shih-Chieh Chien, Chih-Chung Kuo
  • Publication number: 20150179171
    Abstract: A recognition network generation device, disposed in an electronic device, comprises: an operation record storage device storing a plurality of operation records of the electronic device, wherein each operation record includes operation content executed by the electronic device and device peripheral information detected by the electronic device when it executes the operation content; an activity model constructor classifying the operation records into a plurality of activity models according to all the device peripheral information of the operation records; an activity predictor selecting at least one activity model according to the degree of similarity between each activity model and the current device peripheral information detected by the electronic device; and a weight adjustor adjusting the weights of a plurality of recognition vocabularies, wherein the recognition vocabularies correspond to all the operation content of the at least one selected activity model.
    Type: Application
    Filed: November 13, 2014
    Publication date: June 25, 2015
    Inventors: Hsin-Chang Chang, Jiang-Chun Chen, Chih-Chung Kuo
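The predictor-plus-weight-adjustor flow described above can be pictured as selecting the activity model closest to the current device context and boosting the weights of its vocabulary. The data layout, similarity function, and boost factor below are hypothetical stand-ins, not taken from the patent:

```python
def adjust_weights(activity_models, current_context, similarity, boost=2.0):
    """Select the activity model most similar to the current device context
    and raise the weights of its recognition vocabularies (a simplified
    scheme; the patent may select several models)."""
    best = max(activity_models,
               key=lambda m: similarity(m["context"], current_context))
    return {word: (boost if model is best else 1.0)
            for model in activity_models for word in model["vocab"]}

# Toy similarity: number of shared context attributes.
overlap = lambda a, b: len(a & b)

models = [
    {"context": {"home", "evening"}, "vocab": ["tv", "lights"]},
    {"context": {"car", "morning"}, "vocab": ["navigate", "call"]},
]
weights = adjust_weights(models, {"car", "noon"}, overlap)
# "navigate" and "call" are boosted; "tv" and "lights" keep weight 1.0
```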
  • Patent number: 8932471
    Abstract: This disclosure relates to a method of recovering and concentrating an aqueous N-methylmorpholine-N-oxide (NMMO) solution.
    Type: Grant
    Filed: September 9, 2011
    Date of Patent: January 13, 2015
    Assignee: Acelon Chemicals & Fiber Corporation
    Inventors: Wen-Tung Chou, Ming-Yi Lai, Kun-Shan Huang, Hsiao-Chi Tsai, Chih-Chung Kuo
  • Patent number: 8898066
    Abstract: A multi-lingual text-to-speech system and method processes a text to be synthesized via an acoustic-prosodic model selection module and an acoustic-prosodic model mergence module, and obtains a phonetic unit transformation table. In an online phase, the acoustic-prosodic model selection module, according to the text and a phonetic unit transcription corresponding to the text, uses at least one controllable accent weighting parameter to select a transformation combination and find a first and a second acoustic-prosodic model. The acoustic-prosodic model mergence module merges the two acoustic-prosodic models into a merged acoustic-prosodic model according to the at least one controllable accent weighting parameter, processes all transformations in the transformation combination, and generates a merged acoustic-prosodic model sequence. A speech synthesizer and the merged acoustic-prosodic model sequence are then applied to synthesize the text into L1-accented L2 speech.
    Type: Grant
    Filed: August 25, 2011
    Date of Patent: November 25, 2014
    Assignee: Industrial Technology Research Institute
    Inventors: Jen-Yu Li, Jia-Jang Tu, Chih-Chung Kuo
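The model-mergence step above can be pictured as a parameter-wise interpolation controlled by the accent weight. This sketch uses named scalar parameters as a hypothetical stand-in for real acoustic-prosodic model parameters (the actual models hold statistical parameter sets, not two floats):

```python
def merge_models(l1_model, l2_model, accent_weight):
    """Interpolate two acoustic-prosodic models parameter by parameter.
    accent_weight in [0, 1]: 0 -> pure L2 model, 1 -> pure L1-accented
    model.  A simplified illustration of accent-weighted mergence."""
    return {k: accent_weight * l1_model[k] + (1 - accent_weight) * l2_model[k]
            for k in l1_model}

# Hypothetical parameter sets for the L1-accented and native L2 models.
l1 = {"pitch_mean": 220.0, "duration": 0.12}
l2 = {"pitch_mean": 180.0, "duration": 0.10}
merge_models(l1, l2, 0.5)  # halfway between the two models
```

Sweeping the weight from 0 to 1 would move the synthesized accent continuously from native L2 toward fully L1-accented speech, which is the point of making the parameter controllable.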
  • Publication number: 20140114663
    Abstract: According to an exemplary embodiment of a guided speaker-adaptive speech synthesis system, a speaker adaptive training module generates adaptation information and a speaker-adapted model based on inputted recording text and recording speech. A text-to-speech engine receives the recording text and the speaker-adapted model and outputs synthesized speech information. A performance assessment module receives the adaptation information and the synthesized speech information to generate assessment information. An adaptation recommendation module selects at least one subsequent recording text from at least one text source as a recommendation for the next adaptation process, according to the adaptation information and the assessment information.
    Type: Application
    Filed: August 28, 2013
    Publication date: April 24, 2014
    Applicant: Industrial Technology Research Institute
    Inventors: Cheng-Yuan Lin, Cheng-Hsien Lin, Chih-Chung Kuo
  • Patent number: 8706493
    Abstract: In one embodiment of a controllable prosody re-estimation system, a TTS/STS engine consists of a prosody prediction/estimation module, a prosody re-estimation module, and a speech synthesis module. The prosody prediction/estimation module generates predicted or estimated prosody information. The prosody re-estimation module then re-estimates this prosody information and produces new prosody information, according to a set of controllable parameters provided by a controllable prosody parameter interface. The new prosody information is provided to the speech synthesis module to produce synthesized speech.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: April 22, 2014
    Assignee: Industrial Technology Research Institute
    Inventors: Cheng-Yuan Lin, Chien-Hung Huang, Chih-Chung Kuo
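One simple way to picture controllable re-estimation is a linear re-mapping of a predicted pitch contour under user-settable parameters. The `scale`/`shift` parameters below are hypothetical stand-ins for the patent's controllable prosody parameter interface, not its actual formula:

```python
import statistics

def reestimate_prosody(pitch, scale=1.0, shift=0.0):
    """Linearly re-map a predicted pitch contour around its own mean:
    `scale` expands or compresses the pitch dynamics, `shift` moves the
    mean.  A sketch of re-estimating prosody under controllable parameters."""
    mean = statistics.fmean(pitch)
    return [mean + shift + scale * (p - mean) for p in pitch]

contour = [100.0, 120.0, 110.0, 130.0]   # predicted pitch values (Hz)
flat = reestimate_prosody(contour, scale=0.5)    # halved pitch dynamics
higher = reestimate_prosody(contour, shift=20.0) # mean raised by 20 Hz
```

The re-estimated contour, rather than the raw prediction, would then be handed to the synthesis module.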
  • Patent number: 8660839
    Abstract: A system for leaving and transmitting speech messages automatically analyzes the input speech of at least one reminder, fetches a plurality of items of tag information, and transmits a speech message to at least one message receiver according to the transmit criteria of the reminder. A command or message parser parses the tag information, which includes at least one reminder ID, at least one transmitted command, and at least one speech message. The tag information is sent to a message composer to be synthesized into a transmitted message. A transmitting controller controls a device switch according to the reminder ID and the transmitted command, to allow the transmitted message to be sent to the message receiver via a transmitting device.
    Type: Grant
    Filed: March 18, 2010
    Date of Patent: February 25, 2014
    Assignee: Industrial Technology Research Institute
    Inventors: Chih-Chung Kuo, Shih-Chieh Chien, Chung-Jen Chiu, Hsin-Chang Chang
  • Publication number: 20130062566
    Abstract: This disclosure relates to a method of recovering and concentrating an aqueous N-methylmorpholine-N-oxide (NMMO) solution.
    Type: Application
    Filed: September 9, 2011
    Publication date: March 14, 2013
    Applicant: Acelon Chemicals & Fiber Corporation
    Inventors: Wen-Tung Chou, Ming-Yi Lai, Kun-Shan Huang, Hsiao-Chi Tsai, Chih-Chung Kuo
  • Publication number: 20130054134
    Abstract: A telematics apparatus, system, and method for providing driving assistance are provided. The telematics apparatus receives position information indicating its current position and transmits a request signal to a server. According to the request signal, the server obtains the position information, time information indicating the current time of the telematics apparatus, and identification information identifying a user of the telematics apparatus. The telematics apparatus displays driving assistance information received from the server, which generates the driving assistance information according to the identification, position, and time information by searching through a usage history of a plurality of routes and referring to a plurality of reference values of the routes. The reference value of each route indicates the user's familiarity with that route.
    Type: Application
    Filed: January 20, 2012
    Publication date: February 28, 2013
    Applicant: Industrial Technology Research Institute
    Inventors: Chih-Hsiang Wang, Jin-Chin Chung, Shih-Tsun Chu, Yuan-Yi Chang, Chun-Lung Huang, Chih-Chung Kuo
  • Publication number: 20120173241
    Abstract: A multi-lingual text-to-speech system and method processes a text to be synthesized via an acoustic-prosodic model selection module and an acoustic-prosodic model mergence module, and obtains a phonetic unit transformation table. In an online phase, the acoustic-prosodic model selection module, according to the text and a phonetic unit transcription corresponding to the text, uses at least one controllable accent weighting parameter to select a transformation combination and find a first and a second acoustic-prosodic model. The acoustic-prosodic model mergence module merges the two acoustic-prosodic models into a merged acoustic-prosodic model according to the at least one controllable accent weighting parameter, processes all transformations in the transformation combination, and generates a merged acoustic-prosodic model sequence. A speech synthesizer and the merged acoustic-prosodic model sequence are then applied to synthesize the text into L1-accented L2 speech.
    Type: Application
    Filed: August 25, 2011
    Publication date: July 5, 2012
    Applicant: Industrial Technology Research Institute
    Inventors: Jen-Yu Li, Jia-Jang Tu, Chih-Chung Kuo
  • Publication number: 20120166198
    Abstract: In one embodiment of a controllable prosody re-estimation system, a TTS/STS engine consists of a prosody prediction/estimation module, a prosody re-estimation module, and a speech synthesis module. The prosody prediction/estimation module generates predicted or estimated prosody information. The prosody re-estimation module then re-estimates this prosody information and produces new prosody information, according to a set of controllable parameters provided by a controllable prosody parameter interface. The new prosody information is provided to the speech synthesis module to produce synthesized speech.
    Type: Application
    Filed: July 11, 2011
    Publication date: June 28, 2012
    Applicant: Industrial Technology Research Institute
    Inventors: Cheng-Yuan Lin, Chien-Hung Huang, Chih-Chung Kuo
  • Patent number: 8175865
    Abstract: A method of text script generation for a corpus-based text-to-speech system includes searching in a source corpus having L sentences, selecting the N sentences with the best integrated efficiency as N best cases, and setting iteration k to 1; for each case n of the N best cases, selecting the Mk+1 best sentences with the best integrated efficiency from the unselected sentences in the source corpus; keeping N best cases out of the total unselected sentences for the next iteration, and increasing iteration k by 1; and if a termination criterion is reached, setting the best case among the N traced cases as the text script, otherwise returning to the (k+1)th iteration of searching the unselected sentences for the (k+1)th sentence; wherein the best integrated efficiency depends on a function combining the covering rate of the synthesis unit type, the hit rate of the synthesis unit type, and the text script size.
    Type: Grant
    Filed: December 14, 2007
    Date of Patent: May 8, 2012
    Assignee: Industrial Technology Research Institute
    Inventors: Chih-Chung Kuo, Jing-Yi Huang
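The iterative keep-N-best search described in the abstract above is a beam-search-style selection. The sketch below illustrates the idea under simplifying assumptions: the `score` callable is a caller-supplied stand-in for the patent's integrated-efficiency function (covering rate, hit rate, and script size combined), and the toy corpus and coverage score are hypothetical.

```python
def select_script(sentences, score, beam_width=3, script_len=5):
    """Beam-style selection: keep `beam_width` best partial scripts, extend
    each with the remaining sentences, and keep the best extensions,
    repeating until `script_len` sentences are chosen."""
    beams = [((), 0.0)]
    for _ in range(script_len):
        candidates = []
        for chosen, _ in beams:
            for s in sentences:
                if s not in chosen:
                    cand = chosen + (s,)
                    candidates.append((cand, score(cand)))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[1], reverse=True)
        seen, beams = set(), []
        for cand, sc in candidates:  # dedupe scripts that differ only in order
            key = frozenset(cand)
            if key not in seen:
                seen.add(key)
                beams.append((cand, sc))
            if len(beams) == beam_width:
                break
    return beams[0][0]

# Toy corpus: each sentence covers a set of synthesis-unit types, and the
# score is simply the number of unit types covered.
units = {"a": {"p1", "p2"}, "b": {"p2"}, "c": {"p3"}}
coverage = lambda script: len(set().union(*(units[s] for s in script)))
best = select_script(list(units), coverage, beam_width=2, script_len=2)
# `best` covers all three unit types with two sentences
```

With beam width N = 1 this reduces to plain greedy selection; keeping N > 1 cases lets the search escape locally optimal but globally poor early picks.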
  • Patent number: 8055501
    Abstract: A speech synthesizer generating system and a method thereof are provided. A speech synthesizer generator in the system automatically generates a speech synthesizer conforming to a speech output specification input by a user. In addition, a recording script is automatically generated by a recording script generator in the system according to the speech output specification, and a customized or expanded speech material is recorded according to the recording script. After the speech material is uploaded to the system, the speech synthesizer generator automatically generates a speech synthesizer conforming to the speech output specification. The speech synthesizer then synthesizes and outputs speech at the user end.
    Type: Grant
    Filed: October 21, 2007
    Date of Patent: November 8, 2011
    Assignee: Industrial Technology Research Institute
    Inventors: Chih-Chung Kuo, Min-Hsin Shen
  • Patent number: 8054953
    Abstract: A method and a system for executing correlative services are provided. In the method and system, an event type corresponding to an input message is determined through semantic analysis. After collecting the necessary execution information for the event type from the input message, from a user database, or by inquiring the user or another system, the system automatically executes the various correlative services of the event type. The system can therefore help users execute correlative services more correctly and efficiently.
    Type: Grant
    Filed: January 23, 2007
    Date of Patent: November 8, 2011
    Assignee: Industrial Technology Research Institute
    Inventors: Shih-Chieh Chien, Chih-Chung Kuo, Jui-Hsin Hung
  • Patent number: 7962327
    Abstract: A method and system for pronunciation assessment based on distinctive feature analysis is provided. It evaluates a user's pronunciation with one or more distinctive feature (DF) assessors. It may further construct a phone assessor from DF assessors to evaluate a user's phone pronunciation, and even construct a continuous speech pronunciation assessor from phone assessors to obtain the final pronunciation score for a word or a sentence. Each DF assessor further includes a feature extractor and a distinctive feature classifier, and can be realized differently, based on the different characteristics of the distinctive features. A score mapper may be included to standardize the output of each DF assessor. Each speech phone can be described as a "bundle" of DFs. The invention is a novel and qualitative solution for pronunciation assessment based on the DFs of speech sounds.
    Type: Grant
    Filed: June 21, 2005
    Date of Patent: June 14, 2011
    Assignee: Industrial Technology Research Institute
    Inventors: Chih-Chung Kuo, Che-Yao Yang, Ke-Shiu Chen, Miao-Ru Hsu
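Since each phone is a "bundle" of distinctive features, a phone assessor can be pictured as combining per-DF scores into one phone score. The weighted average below is a hypothetical combiner (the patent's score mapper standardizes each DF output first), and the feature names and scores are illustrative only:

```python
def phone_score(df_scores, weights=None):
    """Combine distinctive-feature (DF) assessor scores into one phone
    pronunciation score via a weighted average; unweighted by default."""
    if weights is None:
        weights = {f: 1.0 for f in df_scores}
    total = sum(weights.values())
    return sum(weights[f] * s for f, s in df_scores.items()) / total

# Hypothetical DF bundle for one phone, each assessor score in [0, 1].
scores = {"voiced": 0.9, "nasal": 0.4, "high": 0.7}
phone_score(scores)  # unweighted mean of the three DF scores
```

Word- or sentence-level assessment would then aggregate phone scores the same way, one level up.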
  • Publication number: 20110119053
    Abstract: A system for leaving and transmitting speech messages automatically analyzes the input speech of at least one reminder, fetches a plurality of items of tag information, and transmits a speech message to at least one message receiver according to the transmit criteria of the reminder. A command or message parser parses the tag information, which includes at least one reminder ID, at least one transmitted command, and at least one speech message. The tag information is sent to a message composer to be synthesized into a transmitted message. A transmitting controller controls a device switch according to the reminder ID and the transmitted command, to allow the transmitted message to be sent to the message receiver via a transmitting device.
    Type: Application
    Filed: March 18, 2010
    Publication date: May 19, 2011
    Inventors: Chih-Chung Kuo, Shih-Chieh Chien, Chung-Jen Chiu, Hsin-Chang Chang
  • Patent number: 7801725
    Abstract: A method for speech quality degradation estimation, a method for degradation measure calculation, and the apparatuses thereof are provided. The first method estimates the speech quality of a speech signal that is modified by a pitch-synchronous prosody modification method, and comprises the following steps. First, at least one source pitchmark is extracted from the speech signal; the source pitchmark(s) are then mapped to at least one target pitchmark. Finally, at least one degradation measure is calculated based on the mapping between the source and target pitchmarks. The degradation measures include several weighted pitch-related and duration-related functions, where the weighting functions can be calculated based on the speech signal or on the pitchmark mapping mentioned above.
    Type: Grant
    Filed: June 29, 2006
    Date of Patent: September 21, 2010
    Assignee: Industrial Technology Research Institute
    Inventors: Shi-Han Chen, Chih-Chung Kuo, Shun-Ju Chen
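A pitch-related degradation measure of the kind the abstract describes can be illustrated from a one-to-one pitchmark mapping: the intervals between pitchmarks are the pitch periods, and the measure summarizes how much modification changed them. This is a toy, unweighted stand-in for the patent's weighted pitch-related functions; the sample-index pitchmarks are hypothetical.

```python
def pitch_degradation(source_marks, target_marks):
    """Mean relative change of pitch periods (intervals between successive
    pitchmarks) under a one-to-one source-to-target pitchmark mapping."""
    def periods(marks):
        return [b - a for a, b in zip(marks, marks[1:])]
    src, tgt = periods(source_marks), periods(target_marks)
    assert len(src) == len(tgt), "mapping must pair periods one-to-one"
    return sum(abs(t - s) / s for s, t in zip(src, tgt)) / len(src)

# Pitchmarks as sample indices: an unmodified signal scores 0, while a
# uniform 20% period stretch scores 0.2.
pitch_degradation([0, 100, 200], [0, 100, 200])  # -> 0.0
pitch_degradation([0, 100, 200], [0, 120, 240])  # -> 0.2
```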