Patents by Inventor Peter Chondro
Peter Chondro has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250058791
Abstract: This disclosure provides a method for determining the visual and auditory attentiveness of a vehicle driver, which comprises: determining a visual attentiveness of the driver according to an image captured by a camera installed inside the vehicle; determining a recognition attentiveness of the driver according to sounds obtained by a microphone installed inside the vehicle; deciding whether to issue a reminder to the driver based on the driver's visual attentiveness and recognition attentiveness; and, when the driver needs to be reminded, executing one or a combination of reminder steps, the reminder steps comprising: issuing a visual reminder by a display device in the vehicle; and issuing an auditory reminder by a speaker in the vehicle.
Type: Application
Filed: July 30, 2024
Publication date: February 20, 2025
Inventors: Peter Chondro, Jun-Yao Zhong, Bo-Yu Chen, Tse-Min Chen, Jui-Li Chen
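As a rough illustration of the decision step described in the abstract, the sketch below combines a visual score and a recognition (auditory) score to choose reminder channels. All names, thresholds, and the 0-to-1 scoring scale are hypothetical; the filing does not disclose its actual decision logic.

```python
# Illustrative sketch only: the abstract does not specify thresholds,
# score ranges, or APIs, so every name and value below is hypothetical.
from dataclasses import dataclass

@dataclass
class AttentivenessState:
    visual: float       # 0.0 (inattentive) .. 1.0 (fully attentive), from in-cabin camera
    recognition: float  # 0.0 .. 1.0, from in-cabin microphone audio analysis

def plan_reminders(state: AttentivenessState,
                   visual_threshold: float = 0.5,
                   recognition_threshold: float = 0.5) -> list[str]:
    """Decide which reminder channels to trigger based on both scores."""
    reminders = []
    if state.visual < visual_threshold:
        reminders.append("display_visual_reminder")   # in-vehicle display
    if state.recognition < recognition_threshold:
        reminders.append("play_auditory_reminder")    # in-vehicle speaker
    return reminders

if __name__ == "__main__":
    # Driver is looking away but still responds to audio cues.
    state = AttentivenessState(visual=0.3, recognition=0.8)
    print(plan_reminders(state))  # ['display_visual_reminder']
```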
-
Publication number: 20240177456
Abstract: According to an exemplary embodiment, the disclosure provides an object detection method that includes, but is not limited to: obtaining a set of a plurality of object-annotated images that are in a source domain and have a first image style; obtaining a minority set of a plurality of object-annotated images that are in a target domain and have a second image style; obtaining a majority set of a plurality of unannotated images that are in the target domain and have the second image style; performing an image style transfer to generate a converted set of object-annotated images having the second image style; generating object annotations for the majority set of unannotated images in the second image style, thereby converting the majority set of unannotated images into a majority set of annotated images; and performing an active domain adaptation to generate an object detection model.
Type: Application
Filed: November 24, 2022
Publication date: May 30, 2024
Applicant: Industrial Technology Research Institute
Inventor: Peter Chondro
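The abstract outlines a data-preparation flow: style-transfer the annotated source images into the target style, pseudo-annotate the large unannotated target-style set, then run active domain adaptation on the pooled data. The skeleton below only shows that ordering; the stub callables stand in for models the filing does not disclose.

```python
# Hypothetical skeleton of the data-preparation flow described in the abstract.
# The style-transfer and annotation functions are stubs supplied by the caller.
from typing import Callable

def build_target_domain_training_set(
    source_annotated: list,          # annotated images in the first (source) style
    target_annotated_minority: list, # small annotated set in the second (target) style
    target_unannotated_majority: list,
    style_transfer: Callable,        # maps a source-style image to the target style
    annotate: Callable,              # generates object annotations (pseudo-labels)
) -> list:
    # 1) Convert annotated source images into the target image style.
    converted = [style_transfer(img) for img in source_annotated]
    # 2) Pseudo-annotate the large unannotated target-style set.
    pseudo_labeled = [annotate(img) for img in target_unannotated_majority]
    # 3) Pool everything for active domain adaptation of the detector.
    return converted + target_annotated_minority + pseudo_labeled
```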
-
Patent number: 11528435
Abstract: The disclosure is directed to an image dehazing method and an image dehazing apparatus using the same method. In an aspect, the image dehazing method includes, but is not limited to: receiving an input image; dehazing the image by a dehazing module to output a dehazed RGB image; recovering image brightness of the dehazed RGB image by a high dynamic range (HDR) module to output an HDR image; and removing reflection of the HDR image by a ReflectNet inference model, wherein the ReflectNet inference model uses a deep learning architecture.
Type: Grant
Filed: December 25, 2020
Date of Patent: December 13, 2022
Assignee: Industrial Technology Research Institute
Inventors: Peter Chondro, De-Qin Gao
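The pipeline is a fixed sequence of three modules: dehazing, HDR brightness recovery, then reflection removal. The placeholder classes below sketch only that ordering; the actual dehazing, HDR, and ReflectNet models are not disclosed in the abstract.

```python
# Minimal sketch of the three-stage pipeline (dehazing -> HDR recovery ->
# reflection removal). The module classes are placeholders, not the real models.
import numpy as np

class DehazingModule:
    def __call__(self, rgb: np.ndarray) -> np.ndarray:
        return rgb  # placeholder: would return a dehazed RGB image

class HDRModule:
    def __call__(self, rgb: np.ndarray) -> np.ndarray:
        return rgb  # placeholder: would recover image brightness (HDR)

class ReflectNet:
    def __call__(self, hdr: np.ndarray) -> np.ndarray:
        return hdr  # placeholder: deep-learning reflection removal

def process(input_image: np.ndarray) -> np.ndarray:
    dehazed = DehazingModule()(input_image)
    hdr = HDRModule()(dehazed)
    return ReflectNet()(hdr)

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy input image
    print(process(frame).shape)
```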
-
Patent number: 11507776
Abstract: An image recognition method, including: obtaining an image to be recognized by an image sensor; inputting the image to be recognized to a single convolutional neural network; obtaining a first feature map of a first detection task and a second feature map of a second detection task according to an output result of the single convolutional neural network, wherein the first feature map and the second feature map have a shared feature; using an end-layer network module to generate a first recognition result corresponding to the first detection task from the image to be recognized according to the first feature map, and to generate a second recognition result corresponding to the second detection task from the image to be recognized according to the second feature map; and outputting the first recognition result corresponding to the first detection task and the second recognition result corresponding to the second detection task.
Type: Grant
Filed: November 18, 2020
Date of Patent: November 22, 2022
Assignee: Industrial Technology Research Institute
Inventors: De-Qin Gao, Peter Chondro, Mei-En Shao, Shanq-Jang Ruan
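A shared-backbone network with two task-specific heads is one common way to realize the arrangement described here. The PyTorch sketch below is an assumption on my part (framework, layer sizes, and class counts are not from the patent): a single convolutional feature extractor feeds two end-layer heads that produce the per-task recognition results.

```python
# Hypothetical PyTorch sketch of a single shared CNN backbone feeding two
# task-specific end-layer heads; all layer sizes are illustrative only.
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # shared feature map for both detection tasks

class TwoTaskRecognizer(nn.Module):
    def __init__(self, num_classes_task1=10, num_classes_task2=5):
        super().__init__()
        self.backbone = SharedBackbone()
        # End-layer modules turn the shared features into per-task results.
        self.head1 = nn.Conv2d(32, num_classes_task1, 1)
        self.head2 = nn.Conv2d(32, num_classes_task2, 1)

    def forward(self, image):
        shared = self.backbone(image)
        return self.head1(shared), self.head2(shared)

if __name__ == "__main__":
    model = TwoTaskRecognizer()
    result1, result2 = model(torch.randn(1, 3, 64, 64))
    print(result1.shape, result2.shape)
```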
-
Publication number: 20220210350
Abstract: The disclosure is directed to an image dehazing method and an image dehazing apparatus using the same method. In an aspect, the image dehazing method includes, but is not limited to: receiving an input image; dehazing the image by a dehazing module to output a dehazed RGB image; recovering image brightness of the dehazed RGB image by a high dynamic range (HDR) module to output an HDR image; and removing reflection of the HDR image by a ReflectNet inference model, wherein the ReflectNet inference model uses a deep learning architecture.
Type: Application
Filed: December 25, 2020
Publication date: June 30, 2022
Applicant: Industrial Technology Research Institute
Inventors: Peter Chondro, De-Qin Gao
-
Publication number: 20220114383
Abstract: An image recognition method, including: obtaining an image to be recognized by an image sensor; inputting the image to be recognized to a single convolutional neural network; obtaining a first feature map of a first detection task and a second feature map of a second detection task according to an output result of the single convolutional neural network, wherein the first feature map and the second feature map have a shared feature; using an end-layer network module to generate a first recognition result corresponding to the first detection task from the image to be recognized according to the first feature map, and to generate a second recognition result corresponding to the second detection task from the image to be recognized according to the second feature map; and outputting the first recognition result corresponding to the first detection task and the second recognition result corresponding to the second detection task.
Type: Application
Filed: November 18, 2020
Publication date: April 14, 2022
Applicant: Industrial Technology Research Institute
Inventors: De-Qin Gao, Peter Chondro, Mei-En Shao, Shanq-Jang Ruan
-
Patent number: 10852420
Abstract: In one of the exemplary embodiments, the disclosure is directed to an object detection system including a first type of sensor for generating a first sensor data; a second type of sensor for generating a second sensor data; and a processor coupled to the first type of sensor and the second type of sensor and configured at least for: processing the first sensor data by using a first plurality of object detection algorithms and processing the second sensor data by using a second plurality of object detection algorithms, wherein each of the first plurality of object detection algorithms and each of the second plurality of object detection algorithms include environmental parameters calculated from a plurality of parameter detection algorithms; and determining, for each detected object, a bounding box resulting from processing the first sensor data and processing the second sensor data.
Type: Grant
Filed: June 15, 2018
Date of Patent: December 1, 2020
Assignee: Industrial Technology Research Institute
Inventors: Peter Chondro, Pei-Jung Liang
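One plausible reading of the final step is that per-sensor detections of the same object are merged into a single bounding box. The sketch below uses a simple confidence-weighted box average; the weighting rule, the Detection fields, and the camera/radar naming are my assumptions, not the patented fusion method.

```python
# Illustrative sketch only: how detections from two sensor types might be
# pooled into one bounding box per object. The averaging rule is an assumption.
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple  # (x1, y1, x2, y2)
    score: float  # confidence, possibly weighted by environmental parameters

def fuse_detections(camera_dets: list, radar_dets: list) -> Detection:
    """Combine per-sensor detections of one object into a single bounding box."""
    dets = camera_dets + radar_dets
    total = sum(d.score for d in dets)
    fused = tuple(
        sum(d.box[i] * d.score for d in dets) / total for i in range(4)
    )
    return Detection(box=fused, score=max(d.score for d in dets))

if __name__ == "__main__":
    cam = [Detection((10, 10, 50, 50), 0.9)]
    rad = [Detection((12, 11, 52, 49), 0.6)]
    print(fuse_detections(cam, rad))
```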
-
Patent number: 10748033
Abstract: The disclosure is directed to an object detection method using a CNN model and an object detection apparatus thereof. In an aspect, the object detection method includes: generating sensor data; processing the sensor data by using a first object detection algorithm to generate a first object detection result; processing the first object detection result by using a plurality of stages of a sparse update mapping algorithm to generate a plurality of stages of updated first object detection results; processing a first stage of the updated first object detection results by using a plurality of stages of a spatial pooling algorithm between each of the stages of the sparse update mapping algorithm; executing a plurality of stages of a deep convolution layer algorithm to extract a plurality of feature results; and performing a detection prediction based on a last-stage feature result.
Type: Grant
Filed: December 11, 2018
Date of Patent: August 18, 2020
Assignee: Industrial Technology Research Institute
Inventors: Wei-Hao Lai, Pei-Jung Liang, Peter Chondro, Tse-Min Chen, Shanq-Jang Ruan
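The sparse-update idea, as far as the abstract describes it, is to recompute convolutional features only where a prior detection result marks a change, pooling the update mask between stages so it matches each stage's resolution. The numpy sketch below illustrates that notion with made-up mask and cache shapes; it is not the patented algorithm.

```python
# A minimal numpy sketch of the sparse-update idea: only regions flagged as
# changed are recomputed at each stage, and the update mask is spatially
# pooled between stages. Purely illustrative; shapes are made up.
import numpy as np

def spatial_pool(mask: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downsample the update mask so it matches the next stage's resolution."""
    h, w = mask.shape
    return mask[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))

def sparse_stage(features: np.ndarray, cached: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Recompute features only where the mask marks an update; reuse the cache elsewhere."""
    updated = cached.copy()
    updated[mask.astype(bool)] = features[mask.astype(bool)]
    return updated

if __name__ == "__main__":
    mask = np.zeros((8, 8))
    mask[2:4, 2:4] = 1                              # region updated by a new detection
    stage1 = sparse_stage(np.random.rand(8, 8), np.zeros((8, 8)), mask)
    stage2_mask = spatial_pool(mask)                # mask for the pooled next stage
    print(stage1.shape, stage2_mask.shape)          # (8, 8) (4, 4)
```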
-
Patent number: 10699430
Abstract: In one of the exemplary embodiments, the disclosure is directed to a depth estimation apparatus including a first type of sensor for generating a first sensor data; a second type of sensor for generating a second sensor data; and a processor coupled to the first type of sensor and the second type of sensor and configured at least for: processing the first sensor data by using two-stage segmentation algorithms to generate a first segmentation result and a second segmentation result; synchronizing parameters of the first segmentation result and parameters of the second sensor data to generate a synchronized second sensor data; and fusing the first segmentation result, the synchronized second sensor data, and the second segmentation result by using two-stage depth estimation algorithms to generate a first depth result and a second depth result.
Type: Grant
Filed: October 9, 2018
Date of Patent: June 30, 2020
Assignee: Industrial Technology Research Institute
Inventors: Peter Chondro, Wei-Hao Lai, Pei-Jung Liang
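Read as a data flow, the abstract chains two-stage segmentation, parameter synchronization, and two-stage depth estimation. The skeleton below encodes that chaining with stub callables; the camera/LiDAR examples and the particular way each depth stage consumes its inputs are my assumptions, not the disclosed design.

```python
# Hypothetical data-flow skeleton of the two-stage depth estimation pipeline;
# every callable is a stub standing in for an undisclosed algorithm.
from typing import Any, Callable, Tuple

def estimate_depth(first_sensor_data: Any, second_sensor_data: Any,
                   segment_stage1: Callable, segment_stage2: Callable,
                   synchronize: Callable,
                   depth_stage1: Callable, depth_stage2: Callable
                   ) -> Tuple[Any, Any]:
    # Two-stage segmentation of the first sensor's data (e.g. camera frames).
    seg1 = segment_stage1(first_sensor_data)
    seg2 = segment_stage2(seg1)
    # Align the second sensor's data (e.g. LiDAR points) with the segmentation.
    synced = synchronize(seg1, second_sensor_data)
    # Two-stage fusion yields a coarse and a refined depth result (assumed split).
    depth1 = depth_stage1(seg1, synced)
    depth2 = depth_stage2(depth1, seg2)
    return depth1, depth2
```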
-
Publication number: 20200184260
Abstract: The disclosure is directed to an object detection method using a CNN model and an object detection apparatus thereof. In an aspect, the object detection method includes: generating sensor data; processing the sensor data by using a first object detection algorithm to generate a first object detection result; processing the first object detection result by using a plurality of stages of a sparse update mapping algorithm to generate a plurality of stages of updated first object detection results; processing a first stage of the updated first object detection results by using a plurality of stages of a spatial pooling algorithm between each of the stages of the sparse update mapping algorithm; executing a plurality of stages of a deep convolution layer algorithm to extract a plurality of feature results; and performing a detection prediction based on a last-stage feature result.
Type: Application
Filed: December 11, 2018
Publication date: June 11, 2020
Applicant: Industrial Technology Research Institute
Inventors: Wei-Hao Lai, Pei-Jung Liang, Peter Chondro, Tse-Min Chen, Shanq-Jang Ruan
-
Publication number: 20200111225
Abstract: In one of the exemplary embodiments, the disclosure is directed to a depth estimation apparatus including a first type of sensor for generating a first sensor data; a second type of sensor for generating a second sensor data; and a processor coupled to the first type of sensor and the second type of sensor and configured at least for: processing the first sensor data by using two-stage segmentation algorithms to generate a first segmentation result and a second segmentation result; synchronizing parameters of the first segmentation result and parameters of the second sensor data to generate a synchronized second sensor data; and fusing the first segmentation result, the synchronized second sensor data, and the second segmentation result by using two-stage depth estimation algorithms to generate a first depth result and a second depth result.
Type: Application
Filed: October 9, 2018
Publication date: April 9, 2020
Applicant: Industrial Technology Research Institute
Inventors: Peter Chondro, Wei-Hao Lai, Pei-Jung Liang
-
Publication number: 20190353774
Abstract: In one of the exemplary embodiments, the disclosure is directed to an object detection system including a first type of sensor for generating a first sensor data; a second type of sensor for generating a second sensor data; and a processor coupled to the first type of sensor and the second type of sensor and configured at least for: processing the first sensor data by using a first plurality of object detection algorithms and processing the second sensor data by using a second plurality of object detection algorithms, wherein each of the first plurality of object detection algorithms and each of the second plurality of object detection algorithms include environmental parameters calculated from a plurality of parameter detection algorithms; and determining, for each detected object, a bounding box resulting from processing the first sensor data and processing the second sensor data.
Type: Application
Filed: June 15, 2018
Publication date: November 21, 2019
Applicant: Industrial Technology Research Institute
Inventors: Peter Chondro, Pei-Jung Liang