Patents by Inventor Jia-Ching Wang

Jia-Ching Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11741343
    Abstract: A source separation method, an apparatus, and a non-transitory computer-readable medium are provided. Atrous Spatial Pyramid Pooling (ASPP) is used to reduce the number of model parameters and speed up computation. Conventional upsampling is replaced with a conversion between time and depth, and a receptive-field-preserving decoder is provided. In addition, temporal attention with a dynamic convolution kernel is added to further reduce the model's size and improve the quality of separation.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: August 29, 2023
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Yao-Ting Wang
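
The ASPP idea in the entry above is standard enough to sketch. The following minimal PyTorch sketch pools dilated (atrous) 1-D convolutions at several rates into one feature map, which enlarges the receptive field without a matching growth in parameters; the dilation rates, channel sizes, and 1-D formulation are illustrative assumptions, not values from the patent.

```python
# Minimal ASPP over a 1-D feature map (illustrative, not the patent's exact design).
import torch
import torch.nn as nn

class ASPP1d(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4, 8)):
        super().__init__()
        # One dilated convolution per rate; padding=r keeps the time
        # dimension fixed so the branch outputs can be concatenated.
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv1d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(feats)

if __name__ == "__main__":
    x = torch.randn(2, 64, 1000)      # dummy 2-example batch
    print(ASPP1d()(x).shape)          # torch.Size([2, 64, 1000])
```
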
  • Patent number: 11663462
    Abstract: A machine learning method and a machine learning device are provided. The machine learning method includes: receiving an input signal and performing normalization on the input signal; transmitting the normalized input signal to a convolutional layer; and adding a sparse coding layer after the convolutional layer, wherein the sparse coding layer uses dictionary atoms to reconstruct signals on a projection of the normalized input signal passing through the convolutional layer, and the sparse coding layer receives a mini-batch input to refresh the dictionary atoms.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: May 30, 2023
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Chien-Yao Wang, Chih-Hsuan Yang
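
A rough sketch of the sparse coding step described above, using scikit-learn's MiniBatchDictionaryLearning as a stand-in for the patent's layer: dictionary atoms are refreshed from mini-batch inputs and then used to reconstruct convolutional feature vectors. The feature dimensions and batch size are illustrative assumptions.

```python
# Mini-batch dictionary refresh and sparse reconstruction (illustrative stand-in).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
conv_features = rng.standard_normal((256, 64))   # dummy conv-layer outputs

dico = MiniBatchDictionaryLearning(n_components=32, batch_size=16,
                                   transform_algorithm="omp", random_state=0)
for start in range(0, len(conv_features), 16):
    dico.partial_fit(conv_features[start:start + 16])  # refresh the atoms

codes = dico.transform(conv_features)            # sparse codes
recon = codes @ dico.components_                 # reconstruction from atoms
print("relative reconstruction error:",
      np.linalg.norm(conv_features - recon) / np.linalg.norm(conv_features))
```
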
  • Patent number: 11520997
    Abstract: A device and a method for generating a machine translation model and a machine translation device are disclosed. The device inputs a source training sentence of a source language and dictionary data to a generator network so that the generator network outputs a target training sentence of a target language according to the source training sentence and the dictionary data. The device then inputs the target training sentence and a correct translation of the source training sentence to a discriminator network, calculates an error between the target training sentence and the correct translation according to the output of the discriminator network, and trains the generator network and the discriminator network accordingly. The trained generator network is the machine translation model.
    Type: Grant
    Filed: November 29, 2019
    Date of Patent: December 6, 2022
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Yi-Xing Lin
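
The generator/discriminator loop described above follows the usual adversarial training pattern, sketched minimally below. Sentences are stand-in dense vectors rather than token sequences, and the network shapes, losses, and learning rates are illustrative assumptions.

```python
# Heavily simplified adversarial training loop (sentences as dummy vectors).
import torch
import torch.nn as nn

dim = 32
gen = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
disc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    src = torch.randn(8, dim)        # source training sentences (dummy)
    ref = torch.randn(8, dim)        # their correct translations (dummy)

    # Discriminator step: correct translations -> 1, generated ones -> 0.
    d_loss = bce(disc(ref), torch.ones(8, 1)) + \
             bce(disc(gen(src).detach()), torch.zeros(8, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to fool the discriminator.
    g_loss = bce(disc(gen(src)), torch.ones(8, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, `gen` plays the role of the translation model.
```
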
  • Patent number: 11170203
    Abstract: A training data generation method for human facial recognition and a data generation apparatus are provided. A large number of virtual synthesized models are generated based on a face deformation model, with changes made to face shapes, expressions, and/or angles to increase the diversity of the training data. Experimental results show that such training data may improve the accuracy of human face recognition.
    Type: Grant
    Filed: November 27, 2019
    Date of Patent: November 9, 2021
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Chien-Wei Yeh
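
A face deformation (morphable) model generates new faces as the mean shape plus linear combinations of shape and expression bases, so diversity comes from sampling the coefficients and the viewing angle. The sketch below uses random stand-in bases; a real system would load bases fitted from 3D scans.

```python
# Sampling diverse synthetic faces from a morphable model (stand-in bases).
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 500
mean_shape = rng.standard_normal(3 * n_vertices)         # mean face (x,y,z)
shape_basis = rng.standard_normal((3 * n_vertices, 40))  # identity modes
expr_basis = rng.standard_normal((3 * n_vertices, 20))   # expression modes

def synth_face():
    alpha = rng.normal(scale=0.5, size=40)    # identity coefficients
    beta = rng.normal(scale=0.3, size=20)     # expression coefficients
    face = mean_shape + shape_basis @ alpha + expr_basis @ beta
    yaw = rng.uniform(-np.pi / 4, np.pi / 4)  # random head angle
    rot = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                    [0, 1, 0],
                    [-np.sin(yaw), 0, np.cos(yaw)]])
    return face.reshape(-1, 3) @ rot.T        # rotated vertex array

training_faces = [synth_face() for _ in range(10)]  # diverse synthetic data
print(training_faces[0].shape)                      # (500, 3)
```
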
  • Publication number: 20210224647
    Abstract: A model training apparatus and method are provided. A neural network model includes a convolutional neural network (CNN) and a domain discriminator. The CNN includes multiple feature extractors and a classifier. The model training apparatus inputs multiple pieces of training data into the CNN so that each feature extractor generates a feature block for each piece of training data and so that the classifier generates a classification result for each piece of training data. The model training apparatus generates a vector for each piece of training data based on the corresponding feature blocks. The domain discriminator generates a domain discrimination result for each piece of training data according to the corresponding vector. The apparatus calculates a classification loss value and a domain loss value of the neural network model and determines whether to continue training the neural network model according to the classification loss value and the domain loss value.
    Type: Application
    Filed: January 13, 2021
    Publication date: July 22, 2021
    Inventors: Jia-Ching Wang, Ting-Yu Wang
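
The two losses that drive the training decision above can be sketched compactly. The sketch below uses a single feature extractor rather than the multiple extractors and feature blocks of the application, and the loss weighting and stopping thresholds are illustrative assumptions.

```python
# Classification loss + domain-discriminator loss deciding whether to keep training.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Linear(20, 16), nn.ReLU())   # feature extractor
classifier = nn.Linear(16, 3)                             # 3 classes
domain_disc = nn.Linear(16, 2)                            # 2 domains
opt = torch.optim.Adam([*features.parameters(),
                        *classifier.parameters(),
                        *domain_disc.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 20)                      # dummy training data
y_class = torch.randint(0, 3, (32,))
y_domain = torch.randint(0, 2, (32,))

for epoch in range(50):
    f = features(x)                          # per-example feature vectors
    cls_loss = ce(classifier(f), y_class)    # classification loss
    dom_loss = ce(domain_disc(f), y_domain)  # domain loss
    loss = cls_loss + 0.1 * dom_loss         # assumed weighting
    opt.zero_grad(); loss.backward(); opt.step()
    if cls_loss.item() < 0.05 and dom_loss.item() < 0.8:
        break                                # assumed stopping criterion
```
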
  • Publication number: 20210158020
    Abstract: A training data generation method for human facial recognition and a data generation apparatus are provided. A large number of virtual synthesized models are generated based on a face deformation model, with changes made to face shapes, expressions, and/or angles to increase the diversity of the training data. Experimental results show that such training data may improve the accuracy of human face recognition.
    Type: Application
    Filed: November 27, 2019
    Publication date: May 27, 2021
    Applicant: National Central University
    Inventors: Jia-Ching Wang, Chien-Wei Yeh
  • Publication number: 20210158967
    Abstract: Provided herein is a method for predicting potential health risks, and particularly a method for training artificial neural networks using biological analysis data. The method of the present disclosure is characterized by the combined use of biological analysis and deep learning, in which specific clinical data relating to characteristic gene expression are used to train the artificial neural network and thereby improve its predictive accuracy.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 27, 2021
    Applicant: National Central University
    Inventors: Yi-Chiung Hsu, Jia-Ching Wang, Chung-Yang Sung
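
In the spirit of the abstract above, a small neural network can be trained on gene-expression features to predict a risk label. The sketch below uses synthetic data and scikit-learn's MLPClassifier; the number of "genes", layer sizes, and the label rule are illustrative assumptions.

```python
# Toy risk-prediction network trained on synthetic gene-expression data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))          # 200 patients x 50 gene levels
y = (X[:, :5].sum(axis=1) > 0).astype(int)  # synthetic risk label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```
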
  • Publication number: 20210157991
    Abstract: A device and a method for generating a machine translation model and a machine translation device are disclosed. The device inputs a source training sentence of a source language and dictionary data to a generator network so that the generator network outputs a target training sentence of a target language according to the source training sentence and the dictionary data. The device then inputs the target training sentence and a correct translation of the source training sentence to a discriminator network, calculates an error between the target training sentence and the correct translation according to the output of the discriminator network, and trains the generator network and the discriminator network accordingly. The trained generator network is the machine translation model.
    Type: Application
    Filed: November 29, 2019
    Publication date: May 27, 2021
    Inventors: Jia-Ching Wang, Yi-Xing Lin
  • Publication number: 20210142148
    Abstract: A source separation method, an apparatus, and a non-transitory computer-readable medium are provided. Atrous Spatial Pyramid Pooling (ASPP) is used to reduce the number of model parameters and speed up computation. Conventional upsampling is replaced with a conversion between time and depth, and a receptive-field-preserving decoder is provided. In addition, temporal attention with a dynamic convolution kernel is added to further reduce the model's size and improve the quality of separation.
    Type: Application
    Filed: November 27, 2019
    Publication date: May 13, 2021
    Applicant: National Central University
    Inventors: Jia-Ching Wang, Yao-Ting Wang
  • Patent number: 10685474
    Abstract: The present invention provides a method for repairing an incomplete 3D depth image using 2D image information. The method includes the following steps: obtaining 2D image information and 3D depth image information; dividing the 2D image information into 2D reconstruction blocks and 2D reconstruction boundaries, which correspond to 3D reconstruction blocks and 3D reconstruction boundaries; analyzing each 3D reconstruction block and partitioning it into residual-surface blocks and blocks to be repaired; and performing at least one 3D image reconstruction, which extends the initial depth values of the 3D depth image from each residual-surface block to cover all the corresponding blocks to be repaired, thereby forming repair blocks and repairing the incomplete 3D depth image using the 2D image information.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: June 16, 2020
    Assignee: National Central University
    Inventors: Yeh-Wei Yu, Chi-Chung Lau, Ching-Cherng Sun, Tsung-Hsun Yang, Tzu-Kai Wang, Jia-Ching Wang, Chien-Yao Wang, Kuan-Chung Wang
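
The repair strategy above, reduced to its core: segment the 2D image into blocks, then extend the surviving depth values within each block over that block's missing pixels. Filling each block with its median valid depth, as below, is an illustrative simplification of the patent's surface extension.

```python
# Block-wise depth-hole filling guided by a 2D segmentation (simplified).
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 3.0, size=(8, 8))
labels = (np.arange(8)[:, None] // 4) * 2 + (np.arange(8)[None, :] // 4)
# labels: four 4x4 blocks standing in for 2D reconstruction blocks

holes = rng.random((8, 8)) < 0.3       # incomplete depth measurements
depth[holes] = np.nan

repaired = depth.copy()
for block in np.unique(labels):
    mask = labels == block
    valid = mask & ~np.isnan(depth)
    if valid.any():                     # extend surviving surface depth
        fill = np.median(depth[valid])
        repaired[mask & np.isnan(depth)] = fill
print(np.isnan(repaired).sum(), "holes left")
```
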
  • Publication number: 20200035013
    Abstract: The present invention provides a method for repairing an incomplete 3D depth image using 2D image information. The method includes the following steps: obtaining 2D image information and 3D depth image information; dividing the 2D image information into 2D reconstruction blocks and 2D reconstruction boundaries, which correspond to 3D reconstruction blocks and 3D reconstruction boundaries; analyzing each 3D reconstruction block and partitioning it into residual-surface blocks and blocks to be repaired; and performing at least one 3D image reconstruction, which extends the initial depth values of the 3D depth image from each residual-surface block to cover all the corresponding blocks to be repaired, thereby forming repair blocks and repairing the incomplete 3D depth image using the 2D image information.
    Type: Application
    Filed: November 19, 2018
    Publication date: January 30, 2020
    Inventors: Yeh-Wei Yu, Chi-Chung Lau, Ching-Cherng Sun, Tsung-Hsun Yang, Tzu-Kai Wang, Jia-Ching Wang, Chien-Yao Wang, Kuan-Chung Wang
  • Publication number: 20200012932
    Abstract: A machine learning method and a machine learning device are provided. The machine learning method includes: receiving an input signal and performing normalization on the input signal; transmitting the normalized input signal to a convolutional layer; and adding a sparse coding layer after the convolutional layer, wherein the sparse coding layer uses dictionary atoms to reconstruct signals on a projection of the normalized input signal passing through the convolutional layer, and the sparse coding layer receives a mini-batch input to refresh the dictionary atoms.
    Type: Application
    Filed: July 10, 2018
    Publication date: January 9, 2020
    Applicant: National Central University
    Inventors: Jia-Ching Wang, Chien-Yao Wang, Chih-Hsuan Yang
  • Publication number: 20190251421
    Abstract: A source separation method and a source separation device are provided. The source separation method comprises: obtaining at least two source time-frequency signals and a mixed time-frequency signal of the at least two source time-frequency signals; providing the mixed time-frequency signal to an input layer of a complex-valued deep neural network and taking the at least two source time-frequency signals as targets of the complex-valued deep neural network; calculating a cost function of the complex-valued deep neural network; and performing partial differentiation with respect to the real part and the imaginary part of each network parameter of the complex-valued deep neural network to minimize the cost function.
    Type: Application
    Filed: March 5, 2018
    Publication date: August 15, 2019
    Applicant: National Central University
    Inventors: Jia-Ching Wang, Yuan-Shan Lee, Shu-Fan Wang, Chien-Yao Wang
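
The training rule above differentiates the cost with respect to the real and imaginary parts of each parameter separately. Storing the two parts as separate real tensors, as in the PyTorch sketch below, makes autograd do exactly that; the single complex linear layer and squared-magnitude cost are illustrative assumptions.

```python
# One complex-valued layer trained via gradients on real and imaginary parts.
import torch

torch.manual_seed(0)
W_re = torch.randn(4, 4, requires_grad=True)    # real part of the weights
W_im = torch.randn(4, 4, requires_grad=True)    # imaginary part
opt = torch.optim.SGD([W_re, W_im], lr=0.05)

x = torch.randn(16, 4, dtype=torch.cfloat)      # mixed T-F input (dummy)
target = torch.randn(16, 4, dtype=torch.cfloat) # source T-F targets (dummy)

for step in range(200):
    W = torch.complex(W_re, W_im)
    y = x @ W.T                                  # complex linear layer
    cost = (y - target).abs().pow(2).mean()      # real-valued cost function
    opt.zero_grad(); cost.backward(); opt.step() # dC/dW_re and dC/dW_im
print("final cost:", cost.item())
```
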
  • Patent number: 9612329
    Abstract: An apparatus, a system and a method for space status detection based on acoustic signals are provided. The detecting apparatus includes an audio transmitting device, an audio receiving device, a signal processing device and a decision device. The audio transmitting device transmits an acoustic signal into a space. The audio receiving device receives the varied acoustic signal as a sensing signal. The signal processing device is coupled to the audio receiving device to receive the sensing signal and generates a characteristic parameter of a space status according to the sensing signal. The decision device is coupled to the signal processing device to receive the characteristic parameter and detects a change of the space status according to the characteristic parameter.
    Type: Grant
    Filed: July 14, 2015
    Date of Patent: April 4, 2017
    Assignee: Industrial Technology Research Institute
    Inventors: Ming-Yen Chen, Jia-Ching Wang, Chen-Guei Chang, Chang-Hong Lin
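
The detection chain above can be condensed to: emit a known probe, reduce the received signal to a characteristic parameter, and flag a change when the parameter drifts from its baseline. In the sketch below the parameter is simply the peak correlation with the probe; the probe, feature, and threshold are illustrative assumptions.

```python
# Probe -> characteristic parameter -> change decision (simplified chain).
import numpy as np

rng = np.random.default_rng(0)
probe = np.sin(2 * np.pi * 1000 * np.arange(0, 0.1, 1 / 16000))  # 1 kHz tone

def characteristic(received, probe):
    corr = np.correlate(received, probe, mode="valid")
    return float(np.max(np.abs(corr)))     # one scalar space parameter

empty_room = 0.8 * probe + 0.01 * rng.standard_normal(probe.size)
occupied = 0.5 * probe + 0.01 * rng.standard_normal(probe.size)

baseline = characteristic(empty_room, probe)
current = characteristic(occupied, probe)
changed = abs(current - baseline) / baseline > 0.1   # decision device
print("space status changed:", changed)
```
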
  • Publication number: 20160091604
    Abstract: An apparatus, a system and a method for space status detection based on acoustic signals are provided. The detecting apparatus includes an audio transmitting device, an audio receiving device, a signal processing device and a decision device. The audio transmitting device transmits an acoustic signal into a space. The audio receiving device receives the varied acoustic signal as a sensing signal. The signal processing device is coupled to the audio receiving device to receive the sensing signal and generates a characteristic parameter of a space status according to the sensing signal. The decision device is coupled to the signal processing device to receive the characteristic parameter and detects a change of the space status according to the characteristic parameter.
    Type: Application
    Filed: July 14, 2015
    Publication date: March 31, 2016
    Inventors: Ming-Yen Chen, Jia-Ching Wang, Chen-Guei Chang, Chang-Hong Lin
  • Patent number: 9280914
    Abstract: The present invention discloses a vision-aided hearing assisting device, which includes a display device, a microphone and a processing unit. The processing unit includes a receiving module, a message generating module and a display driving module. The processing unit is electrically connected to the display device and the microphone. The receiving module receives a surrounding sound signal generated by the microphone. The message generating module analyzes the surrounding sound signal according to a present-scenario mode to generate a message related to the surrounding sound signal. The display driving module drives the display device to display the related message.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: March 8, 2016
    Assignee: National Central University
    Inventors: Jia-Ching Wang, Chang-Hong Lin, Chih-Hao Shih
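
The processing unit above decomposes into three small modules: receive, analyze under a scenario mode, and display. The sketch below mirrors that structure, with an energy threshold standing in for the real sound analysis; the scenario table and threshold are illustrative assumptions.

```python
# Receive -> scenario-mode analysis -> display message (simplified pipeline).
import numpy as np

SCENARIO_MESSAGES = {
    "street": "Loud vehicle noise nearby",
    "home": "Doorbell or alarm sounding",
}

def receive(microphone_buffer):
    return np.asarray(microphone_buffer, dtype=float)

def generate_message(sound, mode):
    # Stand-in analysis: flag the sound if its energy crosses a threshold.
    if np.mean(sound ** 2) > 0.1:
        return SCENARIO_MESSAGES[mode]
    return ""

def drive_display(message):
    if message:
        print(f"[DISPLAY] {message}")

sound = receive(np.random.default_rng(0).normal(0, 1, 16000))
drive_display(generate_message(sound, mode="street"))
```
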
  • Publication number: 20140307879
    Abstract: The present invention discloses a vision-aided hearing assisting device, which includes a display device, a microphone and a processing unit. The processing unit includes a receiving module, a message generating module and a display driving module. The processing unit is electrically connected to the display device and the microphone. The receiving module receives a surrounding sound signal generated by the microphone. The message generating module analyzes the surrounding sound signal according to a present-scenario mode to generate a message related to the surrounding sound signal. The display driving module drives the display device to display the related message.
    Type: Application
    Filed: April 10, 2014
    Publication date: October 16, 2014
    Applicant: National Central University
    Inventors: Jia-Ching Wang, Chang-Hong Lin, Chih-Hao Shih
  • Patent number: 8451292
    Abstract: A video summarization method based on mining the story structure and semantic relations among concept entities has the following steps: processing a video to generate multiple important shots that are annotated with respective keywords; performing a concept expansion process by using the keywords to create expansion trees for the annotated shots; rearranging the keywords of the expansion trees and classifying them to calculate their relations; and applying a graph entropy algorithm to determine significant shots and the edges interconnecting those shots. Based on the result of the graph entropy algorithm, a structured relational graph is built to display the significant shots and their edges. Consequently, users can more rapidly browse the content of a video and comprehend whether different shots are related.
    Type: Grant
    Filed: November 23, 2009
    Date of Patent: May 28, 2013
    Assignee: National Cheng Kung University
    Inventors: Jhing-Fa Wang, Bo-Wei Chen, Jia-Ching Wang, Chia-Hung Chang
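
One way to read the graph entropy step above: shots and their expanded keywords form a relation graph, and a shot is significant if removing it changes the graph's entropy substantially. The sketch below uses a degree-distribution entropy over a toy graph; the entropy definition and selection rule are assumptions, since the abstract does not fix them.

```python
# Ranking shots by their contribution to a degree-distribution graph entropy.
import networkx as nx
import numpy as np

def degree_entropy(g):
    degs = np.array([d for _, d in g.degree()], dtype=float)
    p = degs / degs.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

g = nx.Graph()
g.add_edges_from([
    ("shot1", "dog"), ("shot1", "park"),
    ("shot2", "park"), ("shot2", "bench"),
    ("shot3", "dog"), ("shot3", "bench"), ("shot3", "park"),
])

base = degree_entropy(g)
scores = {}
for shot in ("shot1", "shot2", "shot3"):
    h = g.copy(); h.remove_node(shot)
    scores[shot] = base - degree_entropy(h)   # entropy contribution

print("most significant shot:", max(scores, key=scores.get))
```
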
  • Publication number: 20110122137
    Abstract: A video summarization method based on mining the story structure and semantic relations among concept entities has the following steps: processing a video to generate multiple important shots that are annotated with respective keywords; performing a concept expansion process by using the keywords to create expansion trees for the annotated shots; rearranging the keywords of the expansion trees and classifying them to calculate their relations; and applying a graph entropy algorithm to determine significant shots and the edges interconnecting those shots. Based on the result of the graph entropy algorithm, a structured relational graph is built to display the significant shots and their edges. Consequently, users can more rapidly browse the content of a video and comprehend whether different shots are related.
    Type: Application
    Filed: November 23, 2009
    Publication date: May 26, 2011
    Applicant: National Cheng Kung University
    Inventors: Jhing-Fa Wang, Bo-Wei Chen, Jia-Ching Wang, Chia-Hung Chang
  • Patent number: 7613365
    Abstract: The present invention discloses a video summarization system and the method thereof. A similarity computing apparatus computes the similarity between frames to obtain multiple similarity values. A key frame extracting apparatus chooses key frames from the frames such that the sum of the similarity values between the key frames is minimized. A feature space mapping apparatus converts the sentences into multiple corresponding sentence vectors and computes the distance between sentence vectors to obtain multiple distance values. A clustering apparatus divides the sentences into multiple clusters according to the distance values and the importance of the sentences, and also applies a splitting step to split the cluster with the highest importance into multiple new clusters. A key sentence extracting apparatus chooses multiple key sentences from the clusters such that the sum of the importance of the key sentences is maximized.
    Type: Grant
    Filed: July 14, 2006
    Date of Patent: November 3, 2009
    Assignee: National Cheng Kung University
    Inventors: Jhing-Fa Wang, Jia-Ching Wang, Chen-Yu Chen
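
The key-frame criterion above, choosing the frames whose pairwise similarities sum to a minimum, can be sketched directly. Exhaustive search over frame combinations, as below, is an illustrative simplification, and the random frame features are stand-ins.

```python
# Select the k most mutually dissimilar frames (minimum pairwise similarity sum).
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16))            # 8 frames x 16-dim features
frames /= np.linalg.norm(frames, axis=1, keepdims=True)
sim = frames @ frames.T                          # cosine similarity matrix

def key_frames(sim, k=3):
    best, best_sum = None, np.inf
    for combo in combinations(range(len(sim)), k):
        s = sum(sim[i, j] for i, j in combinations(combo, 2))
        if s < best_sum:
            best, best_sum = combo, s
    return best

print("key frames:", key_frames(sim))
```
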