Patents by Inventor Duan-Yu Chen

Duan-Yu Chen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220409126
    Abstract: A method for predicting sleep apnea with neural networks that mainly includes the following steps: a) retrieving an original signal; b) retrieving at least one snoring signal from the original signal by a snoring signal segmentation algorithm and converting the snoring signal into a one-dimensional vector; c) applying a feature extraction algorithm to process the one-dimensional snoring signal and transform it into a two-dimensional feature matrix; and d) classifying the feature matrix by a neural network algorithm to obtain the number of sleep apnea and sleep hypopnea events from the snoring signal. The method can thereby determine whether the snoring signal reveals indications of sleep apnea or sleep hypopnea. A minimal code sketch of this pipeline appears after the listing.
    Type: Application
    Filed: September 2, 2022
    Publication date: December 29, 2022
    Applicant: Far Eastern Memorial Hospital
    Inventors: Tsung-Wei Huang, Duan-Yu Chen, Sheng-Yen Chen
  • Patent number: 11517196
    Abstract: An evaluation device for tear secretion uses an air nozzle and a thermal camera device to cause a minor irritation to the eyes and record the resulting temperature changes, in order to evaluate the quantity of tear secretion and determine whether the subject can perform reflex tearing normally.
    Type: Grant
    Filed: September 12, 2019
    Date of Patent: December 6, 2022
    Assignee: Yuan Ze University
    Inventors: Tai-Yuan Su, Tsung-Yen Tsai, Duan-Yu Chen
  • Publication number: 20210076933
    Abstract: An evaluation device for tear secretion uses an air nozzle and a thermal camera device to cause a minor irritation to the eyes and record the resulting temperature changes, in order to evaluate the quantity of tear secretion and determine whether the subject can perform reflex tearing normally.
    Type: Application
    Filed: September 12, 2019
    Publication date: March 18, 2021
    Inventors: Tai-Yuan Su, Tsung-Yen Tsai, Duan-Yu Chen
  • Publication number: 20200365271
    Abstract: A method for predicting sleep apnea with neural networks that mainly includes the following steps: a) retrieving an original signal; b) retrieving at least one snoring signal from the original signal by a snoring signal segmentation algorithm and converting the snoring signal into a one-dimensional vector; c) applying a feature extraction algorithm to process the one-dimensional snoring signal and transform it into a two-dimensional feature matrix; and d) classifying the feature matrix by a neural network algorithm to obtain the number of sleep apnea and sleep hypopnea events from the snoring signal. The method can thereby determine whether the snoring signal reveals indications of sleep apnea or sleep hypopnea.
    Type: Application
    Filed: November 6, 2019
    Publication date: November 19, 2020
    Inventors: Tsung-Wei Huang, Duan-Yu Chen, Sheng-Ye Chen
  • Patent number: 10779725
    Abstract: A convolutional neural network model distinguishes eyelash images, break-up area images, non-break-up images, sclera images and eyelid images, which correspond to a first, second, third, fourth and fifth prediction score and respectively produce a first, second, third, fourth and fifth label. A break-up area can thereby be detected in a tear film image, and the tear film break-up time can be quantified for detection. A minimal code sketch of this classification scheme appears after the listing.
    Type: Grant
    Filed: January 4, 2019
    Date of Patent: September 22, 2020
    Assignee: Yuan Ze University
    Inventors: Tai-Yuan Su, Zi-Yuan Liu, Duan-Yu Chen
  • Publication number: 20200214554
    Abstract: A convolutional neural network model distinguishes eyelash images, break-up area images, non-break-up images, sclera images and eyelid images, which correspond to a first, second, third, fourth and fifth prediction score and respectively produce a first, second, third, fourth and fifth label. A break-up area can thereby be detected in a tear film image, and the tear film break-up time can be quantified for detection.
    Type: Application
    Filed: January 4, 2019
    Publication date: July 9, 2020
    Inventors: Tai-Yuan Su, Zi-Yuan Liu, Duan-Yu Chen
  • Patent number: 9824434
    Abstract: A system for object recognition includes an image/spectrum sensing device that fetches an object image from a real object and senses spectra at sensing regions of the real object. A fetching module for object image features obtains a real-object image feature pattern for each ROI of the object image. An analyzing module for object image features searches for a first candidate object in a data bank and analyzes the correlation between the real object and the candidate object. A fetching module for object spectrum features obtains a real-object spectrum pattern for each ROI of the object image. An analyzing module for object spectrum features searches for a second candidate object in the data bank and further analyzes the match level between the real object and the second candidate object. A fusion module then combines the image-feature and spectrum-feature information to determine whether a matched object exists and to identify it. A minimal code sketch of this fusion step appears after the listing.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: November 21, 2017
    Assignee: Industrial Technology Research Institute
    Inventors: Szu-Han Tzao, Yue-Min Jiang, Chih-Jen Hsu, Ho-Hsin Lee, Tsai-Ya Lai, Duan-Yu Chen
  • Publication number: 20170053393
    Abstract: A system for object recognition includes an image/spectrum sensing device that fetches an object image from a real object and senses spectra at sensing regions of the real object. A fetching module for object image features obtains a real-object image feature pattern for each ROI of the object image. An analyzing module for object image features searches for a first candidate object in a data bank and analyzes the correlation between the real object and the candidate object. A fetching module for object spectrum features obtains a real-object spectrum pattern for each ROI of the object image. An analyzing module for object spectrum features searches for a second candidate object in the data bank and further analyzes the match level between the real object and the second candidate object. A fusion module then combines the image-feature and spectrum-feature information to determine whether a matched object exists and to identify it.
    Type: Application
    Filed: December 17, 2015
    Publication date: February 23, 2017
    Inventors: Szu-Han Tzao, Yue-Min Jiang, Chih-Jen Hsu, Ho-Hsin Lee, Tsai-Ya Lai, Duan-Yu Chen
  • Patent number: 9305224
    Abstract: A method for instant recognition of a traffic light countdown image that quickly scans for and confirms the circular feature image of a traffic light, retrieves the countdown image by calculating a displacement ratio from the circular image, then enhances, crops and converts the countdown image into a feature image, and performs a similarity comparison with collected data to calculate a percentage of similarity. The method produces a result from the image comparisons, thereby searching for and instantly recognizing the countdown image of a traffic light. A minimal code sketch of the template-matching step appears after the listing.
    Type: Grant
    Filed: September 29, 2014
    Date of Patent: April 5, 2016
    Assignee: Yuan Ze University
    Inventors: Duan-Yu Chen, Yi-Tung Chou
  • Publication number: 20160092742
    Abstract: A method for instant recognition of a traffic light countdown image that quickly scans for and confirms the circular feature image of a traffic light, retrieves the countdown image by calculating a displacement ratio from the circular image, then enhances, crops and converts the countdown image into a feature image, and performs a similarity comparison with collected data to calculate a percentage of similarity. The method produces a result from the image comparisons, thereby searching for and instantly recognizing the countdown image of a traffic light.
    Type: Application
    Filed: September 29, 2014
    Publication date: March 31, 2016
    Inventors: Duan-Yu Chen, Yi-Tung Chou
  • Patent number: 7616779
    Abstract: The method for automatic key posture information abstraction of this invention comprises the steps of: abstracting, from a series of continuous digitized images, the spatial features of objects contained in said images; abstracting shape features of said objects using a probability calculation; detecting key posture information contained in said series of continuous images using an entropy calculation; removing redundant key postures; matching the obtained key postures with key posture templates stored in a codebook; and encoding the matched key postures. A minimal code sketch of the entropy-based selection appears after the listing.
    Type: Grant
    Filed: November 9, 2005
    Date of Patent: November 10, 2009
    Assignee: National Chiao Tung University
    Inventors: Hong-Yuan Liao, Duan-Yu Chen
  • Publication number: 20070104373
    Abstract: The method for automatic key posture information abstraction of this invention comprises the steps of: abstracting, from a series of continuous digitized images, the spatial features of objects contained in said images; abstracting shape features of said objects using a probability calculation; detecting key posture information contained in said series of continuous images using an entropy calculation; removing redundant key postures; matching the obtained key postures with key posture templates stored in a codebook; and encoding the matched key postures.
    Type: Application
    Filed: November 9, 2005
    Publication date: May 10, 2007
    Applicant: National Chiao Tung University
    Inventors: Hong-Yuan Liao, Duan-Yu Chen
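
Illustrative code sketches

For the sleep apnea prediction method (Publication Nos. 20220409126 and 20200365271), the sketch below follows the abstract's pipeline: segment snoring events from the original signal, convert each one-dimensional segment into a two-dimensional feature matrix, and classify that matrix with a neural network. The energy-gating segmentation, the log-spectrogram features, and the untrained linear stand-in for the network are illustrative assumptions, not details taken from the filings.

```python
import numpy as np

def segment_snores(signal, fs, frame_ms=50, threshold=0.02):
    """Return 1-D snoring segments found by a simple short-time-energy gate."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    energy = (signal[: n * frame].reshape(n, frame) ** 2).mean(axis=1)
    active = energy > threshold
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append(signal[start * frame : i * frame])
            start = None
    if start is not None:
        segments.append(signal[start * frame :])
    return segments

def feature_matrix(segment, fs, n_fft=256, hop=128):
    """Turn a 1-D snoring segment into a 2-D log-magnitude spectrogram."""
    windows = [segment[i : i + n_fft] for i in range(0, len(segment) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.array(windows) * np.hanning(n_fft), axis=1))
    return np.log1p(spec).T                     # shape: (frequency bins, time frames)

def classify(features, weights):
    """Stub for the neural network: an untrained linear layer over pooled features."""
    logits = weights @ features.mean(axis=1)    # crude global average pooling
    return int(np.argmax(logits))               # 0 = normal, 1 = apnea/hypopnea event

if __name__ == "__main__":
    fs = 8000
    audio = 0.01 * np.random.randn(fs * 10)                        # placeholder original signal
    audio[fs * 2 : fs * 3] += 0.3 * np.sin(0.3 * np.arange(fs))    # synthetic snore burst
    w = np.random.randn(2, 129)                                    # untrained weights; 129 rfft bins
    events = sum(classify(feature_matrix(s, fs), w) for s in segment_snores(audio, fs))
    print("apnea/hypopnea events detected:", events)
```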
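
For the tear film break-up detection model (Patent No. 10779725 / Publication No. 20200214554), this minimal sketch scores image patches against the five classes named in the abstract and reports the first frame containing a break-up patch as the break-up time. The linear stub in place of the convolutional network, the patch size, and the frame rate are assumptions for illustration only.

```python
import numpy as np

CLASSES = ["eyelash", "break_up", "non_break_up", "sclera", "eyelid"]

def classify_patch(patch, weights, bias):
    """Stub for the CNN: a linear layer over flattened pixels followed by a softmax."""
    logits = weights @ patch.ravel() + bias
    scores = np.exp(logits - logits.max())
    scores /= scores.sum()
    return CLASSES[int(np.argmax(scores))], scores   # (label, five prediction scores)

def break_up_time(frames, weights, bias, patch_size=32, fps=30.0):
    """Return the first time (seconds) at which any patch is labelled break_up."""
    for t, frame in enumerate(frames):
        h, w = frame.shape
        for y in range(0, h - patch_size + 1, patch_size):
            for x in range(0, w - patch_size + 1, patch_size):
                label, _ = classify_patch(frame[y:y+patch_size, x:x+patch_size], weights, bias)
                if label == "break_up":
                    return t / fps
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.random((90, 128, 128))                   # placeholder 3-second recording
    W, b = rng.standard_normal((5, 32 * 32)), rng.standard_normal(5)   # untrained stub weights
    print("estimated break-up time (s):", break_up_time(video, W, b))
```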
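
For the object recognition system (Patent No. 9824434 / Publication No. 20170053393), the sketch below fuses per-ROI image-feature and spectrum similarities against a small data bank. The cosine similarity, the equal fusion weights, and the acceptance threshold are assumed values, not the patented modules.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match(query_img_feats, query_spectra, data_bank, w_img=0.5, w_spec=0.5, accept=0.8):
    """Fuse per-ROI image and spectrum similarities; return the best candidate or None."""
    best_name, best_score = None, -1.0
    for name, entry in data_bank.items():
        img_score = np.mean([cosine(q, r) for q, r in zip(query_img_feats, entry["img"])])
        spec_score = np.mean([cosine(q, r) for q, r in zip(query_spectra, entry["spec"])])
        fused = w_img * img_score + w_spec * spec_score
        if fused > best_score:
            best_name, best_score = name, fused
    return (best_name, best_score) if best_score >= accept else (None, best_score)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    bank = {n: {"img": rng.random((3, 64)), "spec": rng.random((3, 200))}
            for n in ["apple", "plastic_apple"]}
    # Query close to the real apple: its stored features plus a little noise.
    q_img = bank["apple"]["img"] + 0.05 * rng.standard_normal((3, 64))
    q_spec = bank["apple"]["spec"] + 0.05 * rng.standard_normal((3, 200))
    print(match(q_img, q_spec, bank))
```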
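
For the traffic light countdown recognition method (Patent No. 9305224 / Publication No. 20160092742), this sketch locates a candidate countdown patch at a fixed displacement ratio from the detected circular lamp and picks the stored digit template with the highest pixel-agreement similarity. Circle detection is assumed to have been done beforehand; the displacement ratio, patch size, and similarity measure are illustrative.

```python
import numpy as np

def countdown_region(frame, cx, cy, r, dx_ratio=3.0, size=16):
    """Cut the candidate countdown patch, offset horizontally by dx_ratio * radius."""
    x = int(cx + dx_ratio * r)
    y0, x0 = int(cy - size // 2), int(x - size // 2)
    return frame[y0:y0 + size, x0:x0 + size]

def similarity(patch, template, threshold=0.5):
    """Percentage of pixels that agree after binarization."""
    a, b = patch > threshold, template > threshold
    return float((a == b).mean())

def read_countdown(frame, circle, templates):
    """Compare the countdown patch with every digit template; return the best match."""
    patch = countdown_region(frame, *circle)
    scores = {digit: similarity(patch, tpl) for digit, tpl in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    templates = {d: (rng.random((16, 16)) > 0.5).astype(float) for d in "0123456789"}
    frame = np.zeros((120, 200))
    frame[52:68, 100:116] = templates["7"]                    # plant the digit "7" in the frame
    print(read_countdown(frame, (60, 60, 16), templates))     # circle given as (cx, cy, r)
```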
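
For the key posture abstraction method (Patent No. 7616779 / Publication No. 20070104373), the sketch below treats each frame's normalized shape descriptor as a probability distribution, keeps frames whose entropy jumps relative to the last kept frame, and encodes each kept posture as the index of its nearest codebook template. The descriptor, the entropy-change threshold, and the nearest-neighbour encoding are assumptions standing in for the patented calculations.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a non-negative descriptor treated as a distribution."""
    p = p / (p.sum() + 1e-12)
    return float(-(p * np.log2(p + 1e-12)).sum())

def key_postures(shape_feats, delta=0.15):
    """Indices of frames whose shape entropy jumps by more than delta from the last kept frame."""
    keys, last = [0], entropy(shape_feats[0])
    for i, f in enumerate(shape_feats[1:], start=1):
        h = entropy(f)
        if abs(h - last) > delta:
            keys.append(i)
            last = h
    return keys

def encode(shape_feats, keys, codebook):
    """Map each key posture to the index of its nearest codebook template."""
    return [int(np.argmin(np.linalg.norm(codebook - shape_feats[k], axis=1))) for k in keys]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    walk = rng.random((20, 8)) + 0.05       # 20 frames, 8-bin shape descriptors (placeholder)
    walk[10:, :6] *= 0.05                   # simulate a posture change halfway through
    book = rng.random((4, 8))               # 4 codebook templates
    keys = key_postures(walk)
    print("key posture frames:", keys)
    print("codebook encoding:", encode(walk, keys, book))
```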