Patents by Inventor Chuan-Yu Chang
Chuan-Yu Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240164649
Abstract: A physiological signal measuring method includes a training thermal image providing step, a training step, a classification model generating step, a measurement thermal image providing step, a mask-wearing classifying step, a block identifying step and a measurement result generating step. The measurement thermal image providing step includes providing a measurement thermal image, which is an infrared thermal video used for measurement. The measurement result generating step includes generating a measurement result of at least one physiological parameter of the subject according to a plurality of signals of the forehead block, and the mask block or the nasal cavity block.
Type: Application
Filed: May 28, 2023
Publication date: May 23, 2024
Inventors: Chuan-Yu CHANG, Yen-Qun GAO
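The flow this abstract describes (classify mask-wearing, select measurement blocks accordingly, then extract one signal per block) can be sketched minimally in Python. The block names, region coordinates, and temperature values below are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def measurement_blocks(mask_worn):
    """Block identifying step: the forehead block is always used; the
    second block depends on the mask-wearing classification."""
    return ["forehead", "mask" if mask_worn else "nasal_cavity"]

def block_signal(frames, region):
    """One signal per block: the region's mean temperature per frame.
    The region coordinates are illustrative placeholders."""
    coords = {"forehead":     (slice(0, 20),  slice(10, 50)),
              "mask":         (slice(40, 60), slice(10, 50)),
              "nasal_cavity": (slice(30, 40), slice(25, 35))}
    rows, cols = coords[region]
    return [float(f[rows, cols].mean()) for f in frames]

# Three synthetic thermal frames with a slowly rising temperature
frames = [np.full((60, 60), 36.0 + 0.1 * i) for i in range(3)]
signals = {r: block_signal(frames, r)
           for r in measurement_blocks(mask_worn=True)}
print(sorted(signals))   # blocks used when a mask is worn
```

A downstream measurement result generating step would then estimate physiological parameters from these per-block signals.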
-
Patent number: 11608170
Abstract: A buoy position monitoring method includes a buoy positioning step, an unmanned aerial vehicle receiving step and an unmanned aerial vehicle flying step. In the buoy positioning step, a plurality of buoys are put on a water surface. Each of the buoys is capable of sending a detecting signal. Each of the detecting signals is sent periodically and includes a position dataset of each of the buoys. In the unmanned aerial vehicle receiving step, an unmanned aerial vehicle is disposed on an initial position, and the unmanned aerial vehicle receives the detecting signals. In the unmanned aerial vehicle flying step, when at least one of the buoys is lost, the unmanned aerial vehicle flies to a predetermined position to make contact with the at least one lost buoy.
Type: Grant
Filed: April 9, 2020
Date of Patent: March 21, 2023
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Ching-Ju Chen, Chuan-Yu Chang, Chia-Yan Cheng, Meng-Syue Li, Yueh-Min Huang
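The monitoring loop described above (periodic beacons carrying position data, a loss check, and a UAV dispatched to a predetermined position) can be sketched as follows. The timeout value, buoy IDs, and coordinates are illustrative assumptions:

```python
class BuoyMonitor:
    """Track periodic buoy beacons; a buoy whose detecting signal has
    not been heard within `timeout` seconds is considered lost."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}   # buoy_id -> (timestamp, (lat, lon))

    def receive(self, buoy_id, position, timestamp):
        """Unmanned aerial vehicle receiving step: record each beacon."""
        self.last_seen[buoy_id] = (timestamp, position)

    def lost_buoys(self, now):
        return [bid for bid, (ts, _) in self.last_seen.items()
                if now - ts > self.timeout]

    def dispatch_target(self, buoy_id):
        """Predetermined flight target: the buoy's last reported position."""
        return self.last_seen[buoy_id][1]

monitor = BuoyMonitor(timeout=60)
monitor.receive("buoy-1", (23.5, 120.3), timestamp=0)
monitor.receive("buoy-2", (23.6, 120.4), timestamp=50)
print(monitor.lost_buoys(now=90))        # buoy-1 missed its beacon window
print(monitor.dispatch_target("buoy-1"))
```

In practice the dispatch target could be any predetermined position; using the last reported position is one simple choice.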
-
Patent number: 11380348
Abstract: A method for correcting infant crying identification includes the following steps: a detecting step provides an audio unit to detect a sound around an infant to generate a plurality of audio samples. A converting step provides a processing unit to convert the audio samples to generate a plurality of audio spectrograms. An extracting step provides a common model to extract the audio spectrograms to generate a plurality of infant crying features. An incremental training step provides an incremental model to train the infant crying features to generate an identification result. A judging step provides the processing unit to judge whether the identification result is correct according to a real result of the infant. When the identification result is different from the real result, an incorrect result is generated. A correcting step provides the processing unit to correct the incremental model according to the incorrect result.
Type: Grant
Filed: August 27, 2020
Date of Patent: July 5, 2022
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Chuan-Yu Chang, Jun-Ying Li
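The judge-and-correct loop at the heart of this abstract can be sketched with a toy nearest-centroid classifier standing in for the incremental model; the actual model, features, and crying categories used by the patent are not specified here and the labels below are assumptions:

```python
import numpy as np

class IncrementalClassifier:
    """Toy nearest-centroid stand-in for the incremental model: one
    running centroid per crying category, updated on misclassification."""

    def __init__(self):
        self.centroids = {}   # label -> (mean feature vector, count)

    def predict(self, features):
        if not self.centroids:
            return None       # no identification result yet
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(features - self.centroids[c][0]))

    def correct(self, features, true_label):
        """Correcting step: fold the sample into the true label's centroid."""
        mean, n = self.centroids.get(true_label, (np.zeros_like(features), 0))
        self.centroids[true_label] = ((mean * n + features) / (n + 1), n + 1)

    def judge_and_correct(self, features, real_result):
        """Judging step: if the identification result differs from the
        real result, treat it as incorrect and correct the model."""
        if self.predict(features) != real_result:
            self.correct(features, real_result)

model = IncrementalClassifier()
model.judge_and_correct(np.array([1.0, 0.0]), "hungry")
model.judge_and_correct(np.array([0.0, 1.0]), "tired")
print(model.predict(np.array([0.9, 0.1])))   # nearest centroid: "hungry"
```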
-
Publication number: 20220031191
Abstract: A contactless breathing detection method is for detecting a breathing rate of a subject. The contactless breathing detection method includes a photographing step, a capturing step, a calculating step, and a converting step. The photographing step is performed to provide a camera to photograph the subject to generate a facial image. The capturing step is performed to provide a processor module to capture the facial image to generate a plurality of feature points. The calculating step is performed to drive the processor module to calculate the feature points according to an optical flow algorithm to generate a plurality of breathing signals. The converting step is performed to drive the processor module to convert the breathing signals to generate a plurality of power spectra, respectively. The processor module generates an index value by calculating the power spectra, and the breathing rate is extrapolated from the index value.
Type: Application
Filed: September 20, 2020
Publication date: February 3, 2022
Inventors: Chuan-Yu CHANG, Min-Hsiang CHANG
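The spectral part of this method (convert per-feature-point signals to power spectra, derive an index value, read off the rate) can be sketched with NumPy. The sampling rate, the near-DC cutoff, and the synthetic signals are assumptions; a real implementation would feed in optical-flow displacements from tracked facial feature points:

```python
import numpy as np

def breathing_rate(point_signals, fps):
    """Average the per-feature-point power spectra; the peak frequency
    serves as the index value from which the rate is read off."""
    n = len(point_signals[0])
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    avg_power = np.zeros(len(freqs))
    for s in point_signals:
        s = np.asarray(s, float) - np.mean(s)
        avg_power += np.abs(np.fft.rfft(s)) ** 2
    avg_power[freqs < 0.1] = 0.0          # suppress near-DC drift
    return freqs[int(np.argmax(avg_power))] * 60.0

fps = 30
t = np.arange(600) / fps                  # 20 s of feature-point tracking
signals = [np.sin(2 * np.pi * 0.25 * t + p) for p in (0.0, 0.5, 1.0)]
print(round(breathing_rate(signals, fps)))  # 0.25 Hz -> 15 breaths/min
```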
-
Publication number: 20220028409
Abstract: A method for correcting infant crying identification includes the following steps: a detecting step provides an audio unit to detect a sound around an infant to generate a plurality of audio samples. A converting step provides a processing unit to convert the audio samples to generate a plurality of audio spectrograms. An extracting step provides a common model to extract the audio spectrograms to generate a plurality of infant crying features. An incremental training step provides an incremental model to train the infant crying features to generate an identification result. A judging step provides the processing unit to judge whether the identification result is correct according to a real result of the infant. When the identification result is different from the real result, an incorrect result is generated. A correcting step provides the processing unit to correct the incremental model according to the incorrect result.
Type: Application
Filed: August 27, 2020
Publication date: January 27, 2022
Inventors: Chuan-Yu CHANG, Jun-Ying LI
-
Patent number: 10891845
Abstract: A mouth and nose occluded detecting method includes a detecting step and a warning step. The detecting step includes a facial detecting step, an image extracting step and an occluded determining step. In the facial detecting step, an image is captured by an image capturing device, wherein a facial portion image is obtained from the image. In the image extracting step, a mouth portion is extracted from the facial portion image so as to obtain a mouth portion image. In the occluded determining step, the mouth portion image is entered into an occluding convolutional neural network so as to produce a determining result, wherein the determining result is an occluding state or a normal state. In the warning step, a warning is provided according to the determining result.
Type: Grant
Filed: November 28, 2018
Date of Patent: January 12, 2021
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Chuan-Yu Chang, Fu-Jen Tsai
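A schematic of the detect, extract, classify, and warn flow, with placeholder functions standing in for the face detector and the occluding convolutional neural network. The brightness-threshold rule is an assumption for illustration only, not the patented classifier:

```python
import numpy as np

OCCLUDED, NORMAL = "occluding state", "normal state"

def detect_face(image):
    """Placeholder for the facial detecting step; a real system would
    run a face detector here. Returns a face box (x, y, w, h)."""
    return (0, 0, image.shape[1], image.shape[0])

def crop_mouth(image, face_box):
    """Image extracting step: take the lower third of the face region."""
    x, y, w, h = face_box
    return image[y + 2 * h // 3 : y + h, x : x + w]

def classify_mouth(mouth_image):
    """Stand-in for the occluding CNN: a dark (covered) mouth region is
    treated as occluded. A trained network would replace this rule."""
    return OCCLUDED if mouth_image.mean() < 60 else NORMAL

def warn_if_occluded(image):
    """Warning step: emit a warning when the determining result is the
    occluding state."""
    result = classify_mouth(crop_mouth(image, detect_face(image)))
    if result == OCCLUDED:
        print("WARNING: mouth and nose appear occluded")
    return result

frame = np.full((90, 60), 200, dtype=np.uint8)   # bright synthetic face
frame[60:, :] = 10                               # dark, covered mouth area
print(warn_if_occluded(frame))
```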
-
Patent number: 10846518
Abstract: A facial stroking detection method includes a detecting step and a determining step. The detecting step includes a pre-processing step, a feature extracting step and a feature selecting step. In the pre-processing step, an image is captured by an image capturing device, and the image is pre-processed so as to obtain a post-processing image. In the feature extracting step, a plurality of image features are extracted from the post-processing image so as to form an image feature set. In the feature selecting step, a determining feature set is formed by selecting a part of the image features from the image feature set and entered into a classifier. In the determining step, the classifier provides a determining result according to the determining feature set.
Type: Grant
Filed: November 28, 2018
Date of Patent: November 24, 2020
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Chuan-Yu Chang, Man-Ju Cheng, Matthew Huei-Ming Ma
-
Publication number: 20200324897
Abstract: A buoy position monitoring method includes a buoy positioning step, an unmanned aerial vehicle receiving step and an unmanned aerial vehicle flying step. In the buoy positioning step, a plurality of buoys are put on a water surface. Each of the buoys is capable of sending a detecting signal. Each of the detecting signals is sent periodically and includes a position dataset of each of the buoys. In the unmanned aerial vehicle receiving step, an unmanned aerial vehicle is disposed on an initial position, and the unmanned aerial vehicle receives the detecting signals. In the unmanned aerial vehicle flying step, when at least one of the buoys is lost, the unmanned aerial vehicle flies to a predetermined position to make contact with the at least one lost buoy.
Type: Application
Filed: April 9, 2020
Publication date: October 15, 2020
Inventors: Ching-Ju CHEN, Chuan-Yu CHANG, Chia-Yan CHENG, Meng-Syue LI
-
Patent number: 10722126
Abstract: A heart rate detection method includes a facial image data acquiring step, a feature points recognizing step, an effective displacement signal generating step and a heart rate determining step. The feature points recognizing step is for recognizing a plurality of feature points, wherein a number range of the feature points is from three to twenty, and the feature points include a center point between two medial canthi, a point of a pronasale and a point of a subnasale of the face. The effective displacement signal generating step is for calculating an original displacement signal, wherein the original displacement signal is converted to an effective displacement signal. The heart rate determining step is for transforming the effective displacement signal of each of the feature points to an effective spectrum, wherein a heart rate is determined from one of the effective spectra corresponding to the feature points.
Type: Grant
Filed: November 28, 2018
Date of Patent: July 28, 2020
Assignee: NATIONAL YUNLIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
Inventors: Chuan-Yu Chang, Hsiang-Chi Liu, Matthew Huei-Ming Ma
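The heart rate determining step (one spectrum per feature point, with the rate taken from one of those spectra) can be sketched as follows. The 0.75 to 4.0 Hz band, the frame rate, and the synthetic displacement signals are assumptions; picking the strongest in-band peak is one plausible selection rule, not necessarily the patent's:

```python
import numpy as np

def heart_rate_bpm(displacements, fps, band=(0.75, 4.0)):
    """Transform each feature point's displacement signal to a power
    spectrum; take the heart rate from the strongest in-band peak."""
    best_freq, best_power = 0.0, -1.0
    for sig in displacements:
        x = np.asarray(sig, float) - np.mean(sig)
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        in_band = np.where((freqs >= band[0]) & (freqs <= band[1]), power, 0.0)
        idx = int(np.argmax(in_band))
        if in_band[idx] > best_power:
            best_freq, best_power = freqs[idx], in_band[idx]
    return best_freq * 60.0

fps = 60
t = np.arange(600) / fps                     # 10 s at 60 fps
pulse = 0.2 * np.sin(2 * np.pi * 1.2 * t)    # 1.2 Hz -> 72 bpm
noise = 0.05 * np.sin(2 * np.pi * 9.0 * t)   # out-of-band head motion
print(round(heart_rate_bpm([pulse + noise, pulse], fps)))
```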
-
Publication number: 20200168071
Abstract: A mouth and nose occluded detecting method includes a detecting step and a warning step. The detecting step includes a facial detecting step, an image extracting step and an occluded determining step. In the facial detecting step, an image is captured by an image capturing device, wherein a facial portion image is obtained from the image. In the image extracting step, a mouth portion is extracted from the facial portion image so as to obtain a mouth portion image. In the occluded determining step, the mouth portion image is entered into an occluding convolutional neural network so as to produce a determining result, wherein the determining result is an occluding state or a normal state. In the warning step, a warning is provided according to the determining result.
Type: Application
Filed: November 28, 2018
Publication date: May 28, 2020
Inventors: Chuan-Yu CHANG, Fu-Jen TSAI
-
Publication number: 20200167551
Abstract: A facial stroking detection method includes a detecting step and a determining step. The detecting step includes a pre-processing step, a feature extracting step and a feature selecting step. In the pre-processing step, an image is captured by an image capturing device, and the image is pre-processed so as to obtain a post-processing image. In the feature extracting step, a plurality of image features are extracted from the post-processing image so as to form an image feature set. In the feature selecting step, a determining feature set is formed by selecting a part of the image features from the image feature set and entered into a classifier. In the determining step, the classifier provides a determining result according to the determining feature set.
Type: Application
Filed: November 28, 2018
Publication date: May 28, 2020
Inventors: Chuan-Yu CHANG, Man-Ju CHENG, Matthew Huei-Ming MA
-
Publication number: 20200163560
Abstract: A heart rate detection method includes a facial image data acquiring step, a feature points recognizing step, an effective displacement signal generating step and a heart rate determining step. The feature points recognizing step is for recognizing a plurality of feature points, wherein a number range of the feature points is from three to twenty, and the feature points include a center point between two medial canthi, a point of a pronasale and a point of a subnasale of the face. The effective displacement signal generating step is for calculating an original displacement signal, wherein the original displacement signal is converted to an effective displacement signal. The heart rate determining step is for transforming the effective displacement signal of each of the feature points to an effective spectrum, wherein a heart rate is determined from one of the effective spectra corresponding to the feature points.
Type: Application
Filed: November 28, 2018
Publication date: May 28, 2020
Inventors: Chuan-Yu CHANG, Hsiang-Chi LIU, Matthew Huei-Ming MA
-
Publication number: 20120133753
Abstract: A system, device, method, and computer program product for facial defect analysis using an angular facial image are provided. The system includes a storage module, an image angle detection module, a feature definition module, and a skin analysis module. The storage module stores at least one angular facial image, at least one skin defect condition, and at least one of a plurality of multi-angle facial feature conditions. The image angle detection module detects an angle of the angular facial image. The feature definition module analyzes the angular facial image according to the facial feature conditions, so as to obtain at least one facial skin area image. The skin analysis module determines whether at least one skin defect image exists in the at least one facial skin area image by using the skin defect condition, and if so, marks the at least one skin defect image in the at least one facial skin area image.
Type: Application
Filed: December 17, 2010
Publication date: May 31, 2012
Inventors: Chuan-Yu Chang, Pao-Choo Chung, Shung-Cheng Li, Jia-Sin Li, Jui-Yi Kuo, Heng-Yi Liao
-
Publication number: 20110116691
Abstract: A facial skin defect resolution system, method, and computer program product are presented. The system includes a storage module, a feature definition module, and a skin analysis module. The storage module stores at least one facial image of a user, at least one skin defect condition, and at least one of a plurality of facial feature conditions. The skin analysis module analyzes the facial image by using the facial feature conditions to obtain at least one facial skin area image, and then analyzes the facial skin area image according to the skin defect condition so as to mark a skin defect image in the facial skin area image.
Type: Application
Filed: December 10, 2009
Publication date: May 19, 2011
Inventors: Pao-Choo CHUNG, Chuan-Yu Chang, Chi-Lu Yang, Shung-Cheng Li, Jia-Sin Li