Patents by Inventor Bing-Fei Wu
Bing-Fei Wu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11830292
Abstract: A system of image-processing-based emotion recognition is disclosed. The system principally comprises a camera and a main processor. In particular, a plurality of function units are provided in the main processor, including a face detection unit, a feature processing module, a feature combination unit, a conversion module, a facial action judging unit, and an emotion recognition unit. According to the present invention, the emotion recognition unit is configured to utilize a facial emotion recognition (FER) model to evaluate or distinguish an emotional state of a user based on at least one facial action, at least one emotional dimension, and a plurality of emotional scores. As a result, the accuracy of the emotion recognition conducted by the emotion recognition unit is significantly enhanced, because the basis of the recognition comprises basic emotions, emotional dimension(s), and the user's facial action.
Type: Grant
Filed: September 6, 2021
Date of Patent: November 28, 2023
Assignee: NATIONAL YANG MING CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Wei-Hsiang Su, Yi-Chiao Wu
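The abstract does not disclose how the emotional scores, the emotional dimension(s), and the facial actions are combined. Purely as an illustration of such a fusion, and not the patented FER model, the Python sketch below re-weights per-class scores using hypothetical valence/arousal cues and a made-up facial-action rule; the emotion list, weights, and thresholds are placeholders.

```python
# Hypothetical fusion of basic-emotion scores, a dimensional estimate, and
# facial actions into one label. Weights/thresholds are illustrative only.
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]

def fuse_emotion(emotion_scores, valence, arousal, action_units):
    """emotion_scores: per-class scores; valence/arousal in [-1, 1];
    action_units: set of detected facial action unit IDs, e.g. {6, 12}."""
    scores = np.asarray(emotion_scores, dtype=float).copy()

    # Boost classes consistent with the dimensional (valence/arousal) estimate.
    if valence > 0.3:
        scores[EMOTIONS.index("happy")] *= 1.2
    elif valence < -0.3:
        scores[EMOTIONS.index("sad")] *= 1.1
        scores[EMOTIONS.index("angry")] *= 1.1
    if arousal > 0.5:
        scores[EMOTIONS.index("surprised")] *= 1.2

    # Facial actions act as extra evidence (AU6 + AU12 ~ smile).
    if {6, 12} <= set(action_units):
        scores[EMOTIONS.index("happy")] *= 1.3

    scores /= scores.sum()
    return EMOTIONS[int(np.argmax(scores))], scores

label, fused = fuse_emotion([0.2, 0.4, 0.1, 0.1, 0.2],
                            valence=0.6, arousal=0.4, action_units={6, 12})
print(label, fused.round(3))
```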
-
Publication number: 20230320667
Abstract: A contactless physiological measurement device is disclosed. The device is an electronic device comprising a processor and a memory storing an application program. When the application program is executed, the processor controls a front camera and a rear camera of the electronic device to photograph a user so as to obtain a face image and a hand image. After a first rPPG signal and a second rPPG signal are extracted from the face image and the hand image respectively, physiological parameters are calculated by applying signal processing to the first/second rPPG signal. A signal parameter difference is also acquired by applying a signal difference calculation to the two rPPG signals. Consequently, an estimated blood pressure value is outputted by inputting user anthropometric parameters, the signal parameter difference, and at least one physiological parameter into a pre-trained blood pressure estimating model.
Type: Application
Filed: April 7, 2022
Publication date: October 12, 2023
Applicant: FACEHEART INC CORPORATION
Inventors: Chun-Hsien Lin, Yi-Chiao Wu, Meng-Liang Chung, Bing-Fei Wu
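As a rough, hypothetical illustration of the face-versus-hand rPPG idea, and not the published method, the sketch below estimates the delay between two simulated rPPG traces by cross-correlation and feeds it, with a heart-rate estimate and anthropometric values, into a stand-in linear model. The frame rate, coefficients, and feature set are assumptions.

```python
# Delay between face and hand rPPG via cross-correlation, then a placeholder
# linear model for blood pressure. All constants are illustrative only.
import numpy as np

FS = 30.0  # assumed camera frame rate (Hz)

def rppg_delay_seconds(face_rppg, hand_rppg, fs=FS):
    face = (face_rppg - np.mean(face_rppg)) / np.std(face_rppg)
    hand = (hand_rppg - np.mean(hand_rppg)) / np.std(hand_rppg)
    xcorr = np.correlate(hand, face, mode="full")
    lag = np.argmax(xcorr) - (len(face) - 1)   # positive lag: hand trails face
    return lag / fs

def heart_rate_bpm(rppg, fs=FS):
    spectrum = np.abs(np.fft.rfft(rppg - np.mean(rppg)))
    freqs = np.fft.rfftfreq(len(rppg), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)           # 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

def estimate_sbp(delay_s, hr_bpm, height_cm, weight_kg):
    # Placeholder stand-in for the pre-trained blood pressure estimating model.
    return 90.0 - 120.0 * delay_s + 0.15 * hr_bpm + 0.05 * height_cm + 0.1 * weight_kg

t = np.arange(0, 10, 1 / FS)
face = np.sin(2 * np.pi * 1.2 * t)                   # simulated 72-bpm face trace
hand = np.sin(2 * np.pi * 1.2 * (t - 2 / FS))        # hand trace delayed by two frames
delay = rppg_delay_seconds(face, hand)
print(estimate_sbp(delay, heart_rate_bpm(face), height_cm=170, weight_kg=65))
```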
-
Publication number: 20230004738
Abstract: A system of image-processing-based emotion recognition is disclosed. The system principally comprises a camera and a main processor. In particular, a plurality of function units are provided in the main processor, including a face detection unit, a feature processing module, a feature combination unit, a conversion module, a facial action judging unit, and an emotion recognition unit. According to the present invention, the emotion recognition unit is configured to utilize a facial emotion recognition (FER) model to evaluate or distinguish an emotional state of a user based on at least one facial action, at least one emotional dimension, and a plurality of emotional scores. As a result, the accuracy of the emotion recognition conducted by the emotion recognition unit is significantly enhanced, because the basis of the recognition comprises basic emotions, emotional dimension(s), and the user's facial action.
Type: Application
Filed: September 6, 2021
Publication date: January 5, 2023
Applicant: National Yang Ming Chiao Tung University
Inventors: Bing-Fei Wu, Wei-Hsiang Su, Yi-Chiao Wu
-
Publication number: 20220386886
Abstract: The present disclosure provides a non-contact heart rhythm category monitoring system, which operates as follows. Facial images are continuously captured through an image sensor; images of a continuous target area for a predetermined duration are extracted from the facial images; a non-contact physiological signal related to heartbeats is captured from the images of the continuous target area; and the non-contact physiological signal is classified as a normal heart rhythm, an atrial fibrillation, or a non-atrial-fibrillation arrhythmia.
Type: Application
Filed: October 26, 2021
Publication date: December 8, 2022
Inventors: Bing-Fei WU, Yin-Yin YANG, Po-Wei HUANG, Bing-Jhang WU, Shao-En CHENG
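Purely as an illustration of how a rhythm category could be derived from such a non-contact signal (the publication's actual classifier is not disclosed in the abstract), the sketch below measures inter-beat-interval irregularity in an rPPG-like trace and applies placeholder thresholds.

```python
# Classify an rPPG trace as normal rhythm, atrial fibrillation, or other
# arrhythmia from inter-beat-interval statistics. Thresholds are illustrative.
import numpy as np
from scipy.signal import find_peaks

FS = 30.0  # assumed frame rate (Hz)

def classify_rhythm(rppg, fs=FS):
    peaks, _ = find_peaks(rppg, distance=int(0.4 * fs))   # beats >= 0.4 s apart
    ibi = np.diff(peaks) / fs                              # inter-beat intervals (s)
    if len(ibi) < 3:
        return "insufficient signal"
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))            # beat-to-beat variability
    cv = np.std(ibi) / np.mean(ibi)                        # overall irregularity
    if rmssd > 0.12 and cv > 0.15:
        return "atrial fibrillation"                       # highly irregular intervals
    if cv > 0.15:
        return "non-atrial-fibrillation arrhythmia"
    return "normal heart rhythm"

t = np.arange(0, 30, 1 / FS)
print(classify_rhythm(np.sin(2 * np.pi * 1.1 * t)))  # regular trace -> normal heart rhythm
```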
-
Patent number: 11517253
Abstract: A device for liveness detection is disclosed. The liveness detecting device has a simple structure that principally comprises a light sensing unit and a signal processing module. In particular, the signal processing module contains a physiological feature extracting unit and a liveness detecting unit. The physiological feature extracting unit extracts a first physiological feature from a PPG signal, or a second physiological feature from the PPG signal after signal processing has been applied. Through the first and second physiological features, the liveness detecting unit is able to determine whether a subject is a living body. Because the liveness detecting device uses neither a camera unit nor iPPG technology, it has the advantages of a simple structure, low cost, and immediate completion of liveness detection.
Type: Grant
Filed: August 2, 2020
Date of Patent: December 6, 2022
Assignee: FACEHEART INC.
Inventors: Bing-Jhang Wu, Chih-Wei Liu, Po-Wei Huang, Bing-Fei Wu
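As a hypothetical sketch of the underlying idea, not the patented detector, the code below checks whether a PPG trace contains a dominant periodic pulse component in the cardiac band and treats that as evidence of a living subject; the sampling rate and SNR threshold are assumptions.

```python
# Liveness decision from a single PPG trace: a living subject should show a
# dominant periodic component in the 0.7-3 Hz band. Threshold is illustrative.
import numpy as np

FS = 100.0  # assumed light-sensor sampling rate (Hz)

def pulse_band_snr(ppg, fs=FS):
    # Strongest cardiac-band component relative to average out-of-band power.
    spectrum = np.abs(np.fft.rfft(ppg - np.mean(ppg))) ** 2
    freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    in_band = spectrum[band].max()
    out_band = spectrum[(freqs > 0.1) & ~band].mean() + 1e-12
    return 10.0 * np.log10(in_band / out_band)

def is_live(ppg, fs=FS, snr_threshold_db=10.0):
    return pulse_band_snr(ppg, fs) > snr_threshold_db

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
real = np.sin(2 * np.pi * 1.3 * t) + 0.1 * rng.standard_normal(len(t))  # pulse-like
fake = 0.5 * rng.standard_normal(len(t))                                # no pulse component
print(is_live(real), is_live(fake))  # expected: True False
```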
-
Patent number: 11315350
Abstract: A method for assessing driver fatigue is implemented by a processor and includes steps of: based on images of a driver captured by an image capturing device, obtaining an entry of physiological information that indicates a physiological state of the driver; based on one of the images of the driver, obtaining an entry of facial expression information that indicates an emotional state of the driver; based on one of the images of the driver, obtaining an entry of behavioral information that indicates driver behavior of the driver; and based on the entry of physiological information, the entry of facial expression information and the entry of behavioral information, obtaining a fatigue score that indicates a level of fatigue of the driver.
Type: Grant
Filed: September 3, 2019
Date of Patent: April 26, 2022
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Kuan-Hung Chen, Po-Wei Huang, Yin-Cheng Tsai
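The abstract does not disclose the scoring formula; the sketch below merely illustrates one way the three information entries could be fused into a single fatigue score, with made-up sub-scores and weights.

```python
# Hypothetical weighted fusion of physiological, facial-expression, and
# behavioral entries into a 0..100 fatigue score. Weights are placeholders.
from dataclasses import dataclass

@dataclass
class DriverState:
    heart_rate_drop: float      # physiological: normalized HR decrease, 0..1
    eyes_closed_ratio: float    # behavioral: PERCLOS-like ratio, 0..1
    yawn_rate: float            # behavioral: normalized yawns per minute, 0..1
    drowsy_expression: float    # facial expression: drowsiness likelihood, 0..1

def fatigue_score(state: DriverState) -> float:
    physiological = state.heart_rate_drop
    behavioral = 0.6 * state.eyes_closed_ratio + 0.4 * state.yawn_rate
    expression = state.drowsy_expression
    return 100.0 * (0.3 * physiological + 0.4 * behavioral + 0.3 * expression)

score = fatigue_score(DriverState(heart_rate_drop=0.4, eyes_closed_ratio=0.5,
                                  yawn_rate=0.2, drowsy_expression=0.6))
print(f"fatigue score: {score:.1f}/100")  # higher means more fatigued
```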
-
Patent number: 11151720
Abstract: The present invention provides a physiological information detection method for calculating a physiological value by using changes of a dynamic image. The detection method includes: acquiring detection data from a gray-scale value of the dynamic image, and transforming the detection data into frequency data. The detection method further includes: determining whether the frequency data meet a preset condition, and using a transformation model of a corresponding transformation combination accordingly to transform the frequency data into a physiological value. The present invention further provides a physiological information detection device applying the detection method.
Type: Grant
Filed: January 2, 2020
Date of Patent: October 19, 2021
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Yun-Wei Chu, Po-Wei Huang, Meng-Liang Chung, Yin-Cheng Tsai
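A minimal sketch of the described flow, under assumed frequency bands and a simple band-membership rule standing in for the "preset condition": the mean gray-scale value of each frame is taken as detection data, transformed to the frequency domain, and the dominant frequency is mapped to a physiological value. The bands and the selection rule are not from the patent.

```python
# Gray-scale detection data -> frequency data -> physiological value.
# Bands and the condition/transformation selection are illustrative.
import numpy as np

FS = 30.0  # assumed frame rate (Hz)

def detect_physiological_value(frames, fs=FS):
    data = np.array([frame.mean() for frame in frames], dtype=float)  # gray-scale means
    spectrum = np.abs(np.fft.rfft(data - data.mean()))
    freqs = np.fft.rfftfreq(len(data), d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

    # Preset conditions select the corresponding transformation.
    if 0.7 <= dominant <= 3.0:
        return "heart rate", dominant * 60.0        # beats per minute
    if 0.1 <= dominant < 0.7:
        return "respiration rate", dominant * 60.0  # breaths per minute
    return "unknown", None

t = np.arange(0, 10, 1 / FS)
frames = [np.full((4, 4), 128.0 + 5.0 * np.sin(2 * np.pi * 1.2 * ti)) for ti in t]
print(detect_physiological_value(frames))  # expected: ('heart rate', ~72)
```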
-
Publication number: 20210251567
Abstract: A device for liveness detection is disclosed. The liveness detecting device has a simple structure that principally comprises a light sensing unit and a signal processing module. In particular, the signal processing module contains a physiological feature extracting unit and a liveness detecting unit. The physiological feature extracting unit extracts a first physiological feature from a PPG signal, or a second physiological feature from the PPG signal after signal processing has been applied. Through the first and second physiological features, the liveness detecting unit is able to determine whether a subject is a living body. Because the liveness detecting device uses neither a camera unit nor iPPG technology, it has the advantages of a simple structure, low cost, and immediate completion of liveness detection.
Type: Application
Filed: August 2, 2020
Publication date: August 19, 2021
Inventors: BING-JHANG WU, CHIH-WEI LIU, PO-WEI HUANG, BING-FEI WU
-
Patent number: 11017241
Abstract: A people-flow analysis system includes an image source, a computing device, and a host. The image source captures a first image and a second image. The computing device is connected to the image source. The computing device identifies the first image according to a data set to generate a first detecting image. The first detecting image has a position box corresponding to a pedestrian in the first image. The computing device generates a tracking image according to the data set and a difference between the first detecting image and the second image. The tracking image has another position box corresponding to a pedestrian in the second image. The host is connected to the computing device and generates a people-flow list according to the first detecting image and the tracking image.
Type: Grant
Filed: July 8, 2019
Date of Patent: May 25, 2021
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Chun-Hsien Lin, Po-Wei Huang, Meng-Liang Chung
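Purely for illustration, the sketch below shows a simple IoU-based association between the position boxes of two consecutive images, producing a people-flow list of matched and unmatched boxes. The detector itself and the box coordinates are assumed stand-ins, not the patented pipeline.

```python
# Associate detection boxes across two frames by intersection-over-union.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def track(first_boxes: List[Box], second_boxes: List[Box], thr: float = 0.3):
    """Match each first-image box to the best-overlapping second-image box."""
    flow, used = [], set()
    for tid, fb in enumerate(first_boxes):
        best_j, best_iou = None, thr
        for j, sb in enumerate(second_boxes):
            overlap = iou(fb, sb)
            if j not in used and overlap > best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None:
            used.add(best_j)
        flow.append((tid, fb, second_boxes[best_j] if best_j is not None else None))
    return flow

first = [(10, 10, 50, 100), (200, 20, 240, 110)]
second = [(14, 12, 54, 102), (300, 20, 340, 110)]   # person 0 moved; person 1 left the scene
print(track(first, second))
```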
-
Publication number: 20210019881
Abstract: The present invention provides a physiological information detection method for calculating a physiological value by using changes of a dynamic image. The detection method includes: acquiring detection data from a gray-scale value of the dynamic image, and transforming the detection data into frequency data. The detection method further includes: determining whether the frequency data meet a preset condition, and using a transformation model of a corresponding transformation combination accordingly to transform the frequency data into a physiological value. The present invention further provides a physiological information detection device applying the detection method.
Type: Application
Filed: January 2, 2020
Publication date: January 21, 2021
Applicant: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei WU, Yun-Wei CHU, Po-Wei HUANG, Meng-Liang CHUNG, Yin-Cheng TSAI
-
Patent number: 10835135
Abstract: A non-contact heartbeat rate measurement system includes an image sensor, a target region selecting module, a heartbeat signal calculating module, a spectrum analyzing module, a vibration detecting module, and a heartbeat peak selecting module. When a signal quality indicator is greater than a threshold value and a first peak with the globally highest signal intensity in a heartbeat spectrum is similar to a face vibration frequency, the heartbeat peak selecting module selects a second peak with the locally highest signal intensity from part of the frequency band of the heartbeat spectrum as the output heartbeat frequency. A non-contact heartbeat rate measurement method and an apparatus are also disclosed herein.
Type: Grant
Filed: April 24, 2019
Date of Patent: November 17, 2020
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Meng-Liang Chung, Tsong-Yang Tsou, Yun-Wei Chu, Kuan-Hung Chen, Po-Wei Huang, Yin-Yin Yang
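The selection rule lends itself to a compact illustration. The sketch below is a hypothetical rendering of it with assumed tolerance and quality-threshold values: if the global spectral peak coincides with the face vibration frequency, the strongest peak outside that neighbourhood is returned instead.

```python
# Heartbeat peak selection with rejection of the face-vibration frequency.
# Tolerance and quality threshold are illustrative placeholders.
import numpy as np

def select_heartbeat_frequency(freqs, spectrum, vibration_hz, quality,
                               tol_hz=0.1, quality_threshold=0.5):
    first_peak = freqs[np.argmax(spectrum)]
    if quality > quality_threshold and abs(first_peak - vibration_hz) < tol_hz:
        # Mask the vibration neighbourhood, take the best local peak elsewhere.
        mask = np.abs(freqs - vibration_hz) >= tol_hz
        return freqs[mask][np.argmax(spectrum[mask])]
    return first_peak

freqs = np.linspace(0.7, 3.0, 231)
spectrum = (np.exp(-((freqs - 0.9) ** 2) / 0.001)          # vibration artifact at 0.9 Hz
            + 0.6 * np.exp(-((freqs - 1.3) ** 2) / 0.001))  # true pulse at 1.3 Hz
hb_hz = select_heartbeat_frequency(freqs, spectrum, vibration_hz=0.9, quality=0.8)
print(round(hb_hz * 60))  # 0.9 Hz peak rejected -> ~78 bpm
```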
-
Patent number: 10824936
Abstract: A recycling system and method based on deep learning and computer vision technology are disclosed. The system includes a trash sorting device and a trash sorting algorithm. The trash sorting device includes a trash arraying mechanism, trash sensors, a trash transfer mechanism, and a controller. The trash arraying mechanism is configured to process trash in a batch manner. The controller drives the trash arraying mechanism according to the signals of the trash sensors and controls the sorting gates of the trash sorting mechanism to rotate. The trash sorting algorithm makes use of images of the trash taken by cameras from different directions, and includes a dynamic object detection algorithm, an image pre-processing algorithm, an identification module, and a voting and selecting algorithm. The identification module is based on convolutional neural networks (CNNs) and can identify at least four kinds of trash.
Type: Grant
Filed: January 28, 2019
Date of Patent: November 3, 2020
Assignee: National Chiao Tung University
Inventors: Bing-Fei Wu, Wan-Ju Tseng, Yu-Ming Chen, Bing-Jhang Wu, Yi-Chiao Wu, Meng-Liang Chung
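As a hypothetical example of the voting-and-selecting step only (the CNN identification module and the mechanical control are outside its scope), the sketch below fuses per-camera predictions by majority vote with a confidence-based tie-break; the categories and confidences are made up.

```python
# Confidence-weighted majority voting across per-camera CNN predictions.
from collections import defaultdict

def vote(predictions):
    """predictions: list of (category, confidence) pairs, one per camera view."""
    counts = defaultdict(int)
    tally = defaultdict(float)
    for category, confidence in predictions:
        counts[category] += 1
        tally[category] += confidence
    # Most votes wins; summed confidence breaks ties.
    return max(counts, key=lambda c: (counts[c], tally[c]))

views = [("PET bottle", 0.91), ("PET bottle", 0.84), ("glass bottle", 0.95)]
print(vote(views))  # -> "PET bottle"
```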
-
Patent number: 10803654
Abstract: The invention relates to a method of three-dimensional face reconstruction in which a single face image is input to reconstruct a three-dimensional face model, so that the face can be viewed from various angles by rotating the reconstructed model.
Type: Grant
Filed: September 23, 2019
Date of Patent: October 13, 2020
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Chun-Hsien Lin, Yi-Chiao Wu, Bing-Jhang Wu, Chih-Cheng Huang, Meng-Liang Chung
-
Publication number: 20200320319
Abstract: A method for assessing driver fatigue is implemented by a processor and includes steps of: based on images of a driver captured by an image capturing device, obtaining an entry of physiological information that indicates a physiological state of the driver; based on one of the images of the driver, obtaining an entry of facial expression information that indicates an emotional state of the driver; based on one of the images of the driver, obtaining an entry of behavioral information that indicates driver behavior of the driver; and based on the entry of physiological information, the entry of facial expression information and the entry of behavioral information, obtaining a fatigue score that indicates a level of fatigue of the driver.
Type: Application
Filed: September 3, 2019
Publication date: October 8, 2020
Inventors: Bing-Fei WU, Kuan-Hung CHEN, Po-Wei HUANG, Yin-Cheng TSAI
-
Patent number: 10779739
Abstract: The present invention provides a contactless sport training monitoring method, comprising: selecting at least one image database and recognizing a plurality of expressions in the image database; pre-processing the plurality of expressions; using a convolutional neural network as a feature point extraction model; acquiring a human image; tracking a first target region and a second target region in the human image; performing chrominance-based rPPG trace extraction; using the deep-level model to compare the second target region image; calculating a post-exercise heart rate recovery achievement ratio; judging the Rating of Perceived Exertion; and judging whether the human body is in an overtraining status.
Type: Grant
Filed: July 20, 2018
Date of Patent: September 22, 2020
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Chun-Hsien Lin, Po-Wei Huang, Tzu-Min Lin, Meng-Liang Chung
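Purely as an illustration of the post-exercise heart rate recovery achievement ratio and the overtraining check (the patent's exact formula is not given in this abstract), the sketch below uses an assumed definition and threshold.

```python
# Hypothetical heart rate recovery achievement ratio and overtraining check.
def recovery_ratio(resting_hr: float, peak_hr: float, hr_after_rest: float) -> float:
    """Fraction of the exercise-induced HR rise recovered (1.0 = back to resting)."""
    rise = peak_hr - resting_hr
    return (peak_hr - hr_after_rest) / rise if rise > 0 else 1.0

def is_overtraining(resting_hr: float, peak_hr: float, hr_after_rest: float,
                    threshold: float = 0.3) -> bool:
    # A low achievement ratio after a fixed rest interval flags possible overtraining.
    return recovery_ratio(resting_hr, peak_hr, hr_after_rest) < threshold

print(recovery_ratio(60, 160, 120))    # 0.4 -> 40% of the rise recovered
print(is_overtraining(60, 160, 150))   # only 10% recovered -> True
```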
-
Patent number: 10776614
Abstract: A facial expression recognition training system includes a training module, a feature database, a capturing module, a recognition module, and an adjusting module. The training module trains a facial expression feature capturing model according to known face images. The feature database stores known facial expression features of the known face images. The capturing module continuously captures first face images, and the facial expression feature capturing model outputs facial expression features of the first face images according to the first face images. The recognition module compares the facial expression features with the known facial expression features and accordingly fits the facial expression features to first known facial expression features, which are one kind of the known facial expression features. The adjusting module adjusts the facial expression feature capturing model to reduce the differences between the facial expression features and the known facial expression features.
Type: Grant
Filed: January 23, 2019
Date of Patent: September 15, 2020
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Chun-Hsien Lin, Meng-Liang Chung
-
Publication number: 20200218884
Abstract: An identity recognition system includes a target region acquisition module, a photoplethysmography signal conversion module, a biometric characteristic conversion module, a face characteristic acquisition module, and a comparison module. The target region acquisition module is configured to acquire a plurality of target region images from a plurality of face images. The photoplethysmography signal conversion module is configured to generate a photoplethysmography signal according to the target region images. The biometric characteristic conversion module is configured to convert the photoplethysmography signal into a biometric characteristic. The face characteristic acquisition module is configured to acquire a face characteristic from the face images.
Type: Application
Filed: April 10, 2019
Publication date: July 9, 2020
Inventors: Bing-Fei WU, Po-Wei HUANG, Wen-Chung CHEN, Kuan-Hung CHEN
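The abstract stops short of describing the comparison step; as a hypothetical sketch only, the code below fuses a face characteristic and a PPG-derived biometric characteristic by weighted cosine similarity against enrolled templates. The embeddings, weights, and threshold are illustrative assumptions.

```python
# Weighted cosine-similarity fusion of a face characteristic and a
# PPG-derived biometric characteristic against enrolled templates.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_match(face_feat, ppg_feat, enrolled_face, enrolled_ppg,
                   w_face=0.7, w_ppg=0.3, threshold=0.8):
    score = (w_face * cosine(face_feat, enrolled_face)
             + w_ppg * cosine(ppg_feat, enrolled_ppg))
    return score >= threshold, score

enrolled_face, enrolled_ppg = [0.2, 0.9, 0.4], [1.0, 0.1, 0.3]
probe_face, probe_ppg = [0.25, 0.85, 0.45], [0.95, 0.15, 0.28]
print(identity_match(probe_face, probe_ppg, enrolled_face, enrolled_ppg))
```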
-
Patent number: 10694959
Abstract: The present invention provides an image-based blood pressure monitoring method, comprising: acquiring human image information of at least one human skin area; locating at least one ROI according to the human image information; extracting the human image information of the ROI and calculating its average value; filtering the average value of the human image information; monitoring the filtered signal; calculating an image pulse transit time and an inter-beat interval of the filtered signal; and employing a specific prediction model to calculate a systolic pressure value and a diastolic pressure value.
Type: Grant
Filed: July 20, 2018
Date of Patent: June 30, 2020
Assignee: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: Bing-Fei Wu, Po-Wei Huang, Chun-Hao Lin, Meng-Liang Chung, Tzu-Min Lin
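As a rough sketch of two of the listed steps under assumed parameters, and not the patented prediction model, the code below band-pass filters a simulated ROI-average signal, derives the inter-beat interval from its peaks, and feeds the features into a stand-in linear model; the filter band, coefficients, and pulse transit time value are placeholders.

```python
# Filter the ROI-average signal, derive the inter-beat interval, and apply a
# placeholder linear model for systolic/diastolic pressure.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 30.0  # assumed frame rate (Hz)

def bandpass(signal, fs=FS, low=0.7, high=3.0, order=3):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def inter_beat_interval(filtered, fs=FS):
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))
    return float(np.mean(np.diff(peaks))) / fs          # seconds per beat

def predict_pressure(ptt_s, ibi_s):
    # Placeholder stand-in for the specific prediction model.
    sbp = 150.0 - 70.0 * ptt_s - 20.0 * ibi_s
    dbp = 100.0 - 45.0 * ptt_s - 15.0 * ibi_s
    return sbp, dbp

t = np.arange(0, 20, 1 / FS)
roi_mean = 128 + 3 * np.sin(2 * np.pi * 1.1 * t) + np.random.default_rng(0).normal(0, 0.5, len(t))
ibi = inter_beat_interval(bandpass(roi_mean), FS)
print(predict_pressure(ptt_s=0.25, ibi_s=ibi))          # ptt_s here is a stand-in value
```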
-
Publication number: 20200184229
Abstract: A people-flow analysis system includes an image source, a computing device, and a host. The image source captures a first image and a second image. The computing device is connected to the image source. The computing device identifies the first image according to a data set to generate a first detecting image. The first detecting image has a position box corresponding to a pedestrian in the first image. The computing device generates a tracking image according to the data set and a difference between the first detecting image and the second image. The tracking image has another position box corresponding to a pedestrian in the second image. The host is connected to the computing device and generates a people-flow list according to the first detecting image and the tracking image.
Type: Application
Filed: July 8, 2019
Publication date: June 11, 2020
Applicant: NATIONAL CHIAO TUNG UNIVERSITY
Inventors: BING-FEI WU, CHUN-HSIEN LIN, PO-WEI HUANG, MENG-LIANG CHUNG
-
Publication number: 20200167990
Abstract: The invention relates to a method of three-dimensional face reconstruction in which a single face image is input to reconstruct a three-dimensional face model, so that the face can be viewed from various angles by rotating the reconstructed model.
Type: Application
Filed: September 23, 2019
Publication date: May 28, 2020
Inventors: BING-FEI WU, CHUN-HSIEN LIN, YI-CHIAO WU, BING-JHANG WU, CHIH-CHENG HUANG, MENG-LIANG CHUNG