Patents by Inventor Jiun-In Guo

Jiun-In Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230410481
    Abstract: A machine learning method for continual learning is provided, and the method includes the following steps. Capturing an input image. Performing feature extraction on the input image by a plurality of sub-models to obtain a plurality of feature maps, where the sub-models correspond to a plurality of tasks, and the sub-models are determined by a neural network model and a plurality of channel-wise masks. Converting the feature maps into a plurality of energy scores. Selecting a target sub-model corresponding to a target task of the tasks from the sub-models according to the energy scores. Outputting a prediction result corresponding to the target task by the target sub-model.
    Type: Application
    Filed: September 22, 2022
    Publication date: December 21, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, Cheng-Fu Liou
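The energy-score selection step in the abstract above can be illustrated with a minimal sketch. The `energy_score` formula (a negative log-sum-exp over logits, lower meaning more in-distribution), the temperature parameter, and the toy logits are assumptions for illustration, not the patent's actual formulation:

```python
import math

def energy_score(logits, temperature=1.0):
    # Energy score: -T * logsumexp(logits / T). A confident (in-distribution)
    # sub-model tends to produce a lower (more negative) energy.
    m = max(l / temperature for l in logits)
    return -temperature * (m + math.log(sum(math.exp(l / temperature - m)
                                            for l in logits)))

def select_sub_model(per_task_logits):
    # Pick the task whose sub-model yields the lowest energy on the input.
    scores = {task: energy_score(logits)
              for task, logits in per_task_logits.items()}
    return min(scores, key=scores.get)

# Example: the "task_a" sub-model is confident (one dominant logit),
# while "task_b" is not, so "task_a" is selected as the target task.
choice = select_sub_model({"task_a": [8.0, 0.1, 0.2],
                           "task_b": [0.3, 0.2, 0.1]})
```

In the patent's setting the logits would come from sub-models carved out of one shared network by channel-wise masks; here plain lists stand in for those outputs.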
  • Publication number: 20230385600
    Abstract: An optimizing method and a computing apparatus for a deep learning network and a computer-readable storage medium are provided. In the method, a value distribution is obtained from a pre-trained model. One or more breaking points in a range of the value distribution are determined. Quantization is performed on a part of values of a parameter type in a first section among multiple sections using a first quantization parameter and the other part of values of the parameter type in a second section among the sections using a second quantization parameter. The value distribution is a statistical distribution of values of the parameter type in the deep learning network. The range is divided into the sections by one or more breaking points. The first quantization parameter is different from the second quantization parameter. Accordingly, accuracy drop can be reduced.
    Type: Application
    Filed: September 22, 2022
    Publication date: November 30, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, Po-Yuan Chen
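The section-wise quantization described above can be sketched as follows. The symmetric 8-bit scheme, the single breaking point on absolute value, and the per-section scales are assumptions; the patent only requires that the two sections use different quantization parameters:

```python
def quantize(values, breaking_point, bits=8):
    # Split the value range at the breaking point and derive a separate
    # scale (quantization parameter) per section, so a long tail of large
    # values does not inflate the step size used for the dense section.
    qmax = 2 ** (bits - 1) - 1
    high = [v for v in values if abs(v) > breaking_point]
    scale_low = breaking_point / qmax if breaking_point else 1.0
    scale_high = (max(abs(v) for v in high) / qmax) if high else scale_low

    def q(v, scale):
        # Round to the grid of the section's scale, clamped to the int range.
        return max(-qmax, min(qmax, round(v / scale))) * scale

    return [q(v, scale_low) if abs(v) <= breaking_point else q(v, scale_high)
            for v in values]

out = quantize([0.01, -0.02, 0.03, 1.5], breaking_point=0.1)
```

Small values near zero keep a fine step size from `scale_low`, while the outlier 1.5 is handled by `scale_high`, which is how the accuracy drop is reduced.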
  • Publication number: 20230375694
    Abstract: A dual sensing method of an object and a computing apparatus for object sensing are provided. In the method, a first clustering is performed on radar information including a plurality of sensing points and is for determining a first part of the sensing points to be an object. A second clustering is performed on a result of the first clustering and is for determining that the sensing points determined to be the object in the result of the first clustering are located in a region of a first density. A result of the second clustering is taken as a region of interest. According to the region of interest, object detection and/or object tracking is performed on combined information formed by combining the radar information and an image, whose respective detection region and photographing region are overlapped.
    Type: Application
    Filed: September 23, 2022
    Publication date: November 23, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, Jia-Jheng Lin
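The two-stage clustering of radar sensing points can be sketched with a simple density test. The `eps`/`min_pts` thresholds, the 2D point representation, and the bounding-box region of interest are illustrative assumptions, not the patent's parameters:

```python
def neighbors(points, idx, eps):
    # Indices of points within eps of points[idx] (Euclidean, 2D).
    px, py = points[idx]
    return [j for j, (x, y) in enumerate(points)
            if j != idx and (x - px) ** 2 + (y - py) ** 2 <= eps ** 2]

def dense_core(points, eps, min_pts):
    # Keep points with at least min_pts neighbours within eps: a stand-in
    # for "located in a region of a first density".
    return [i for i in range(len(points))
            if len(neighbors(points, i, eps)) >= min_pts]

def two_stage_roi(points):
    # Stage 1: loose density test to pick out candidate object points.
    stage1 = [points[i] for i in dense_core(points, eps=3.0, min_pts=2)]
    # Stage 2: stricter density test on the stage-1 result.
    stage2 = [stage1[i] for i in dense_core(stage1, eps=1.5, min_pts=2)]
    if not stage2:
        return None
    xs, ys = zip(*stage2)
    return (min(xs), min(ys), max(xs), max(ys))  # region of interest box
```

The resulting box would then gate object detection and tracking on the fused radar-plus-image information.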
  • Publication number: 20230351185
    Abstract: Embodiments of the disclosure provide an optimizing method and a computer system for a neural network, and a computer-readable storage medium. In the method, the neural network is pruned sequentially using two different pruning algorithms. The pruned neural network is retrained in response to each pruning algorithm pruning the neural network. Thereby, the computation amount and the parameter amount of the neural network are reduced.
    Type: Application
    Filed: August 3, 2022
    Publication date: November 2, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, En-Chih Chang
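The prune-then-retrain loop described above can be sketched generically. The toy weight-list model, the magnitude and top-k pruners, and the identity `retrain` are all assumptions standing in for real pruning algorithms and training:

```python
def optimize(model, pruners, retrain):
    # Apply each pruning algorithm in sequence, retraining after every
    # pruning pass to recover the accuracy lost to that pass.
    for prune in pruners:
        model = prune(model)
        model = retrain(model)
    return model

def magnitude_prune(threshold):
    # Drop weights whose magnitude falls below the threshold.
    return lambda weights: [w for w in weights if abs(w) >= threshold]

def topk_prune(k):
    # Keep only the k largest-magnitude weights.
    return lambda weights: sorted(weights, key=abs, reverse=True)[:k]

pruned = optimize([0.9, -0.05, 0.4, 0.01, -0.7],
                  [magnitude_prune(0.1), topk_prune(2)],
                  retrain=lambda w: w)  # identity retrain for the sketch
```

Chaining two different pruning criteria, each followed by retraining, is what reduces both the computation amount and the parameter amount.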
  • Publication number: 20230334675
    Abstract: An object tracking integration method and an integrating apparatus are provided. In the method, one or more first images and one or more second images are obtained. The first image is captured from a first capturing apparatus, and the second image is captured from a second capturing apparatus. One or more target objects in the first image and in the second image are detected. A detection result of the target object in the first image and a detection result of the target object in the second image are matched. The detection result of the target object is updated according to a matching result between the detection results of the first image and the second image. Accordingly, the accuracy of the association and the monitoring range may be improved.
    Type: Application
    Filed: June 29, 2022
    Publication date: October 19, 2023
    Applicant: Wistron Corporation
    Inventors: Jiun-In Guo, Jhih-Sheng Lai
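The matching of detections across the two capturing apparatuses can be sketched as a greedy nearest-neighbour association. The common ground-plane coordinates and the `max_dist` gate are assumptions; the patent does not specify the matching metric:

```python
def match_detections(dets_a, dets_b, max_dist=1.0):
    # Greedily pair each detection from camera A with the nearest unused
    # detection from camera B, assuming both sets are already mapped into
    # a shared coordinate frame; pairs farther than max_dist are rejected.
    pairs, used = [], set()
    for i, (ax, ay) in enumerate(dets_a):
        best, best_d = None, max_dist
        for j, (bx, by) in enumerate(dets_b):
            if j in used:
                continue
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

Matched pairs would then be merged to update each target object's detection result, extending the monitoring range beyond a single camera's view.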
  • Publication number: 20230206654
    Abstract: The present invention provides a deep learning object detection method that locates the distant region in the image in real time and concentrates on distant objects from a front dash-cam perspective, aiming to solve a common problem in advanced driver assistance system (ADAS) applications: the detectable range of such systems is often not far enough.
    Type: Application
    Filed: March 16, 2022
    Publication date: June 29, 2023
    Inventors: Jiun-In GUO, Bo-Xun WU
  • Publication number: 20230168361
    Abstract: A method for recognizing a motion state of an object by using a millimeter wave radar having at least one antenna is disclosed. The method includes the following steps. A region is set to select an object in the region, wherein the object has M ranges and M azimuths between the object and the at least one antenna during a first motion time. The M ranges and M azimuths are projected on a two-dimensional (2D) plane to form M frames. The M frames are sequentially arranged into a first set of consecutive candidate frames having a time sequence. The first consecutive candidate frames are inputted into an artificial intelligence model to determine a motion state type of the first consecutive candidate frames.
    Type: Application
    Filed: April 6, 2022
    Publication date: June 1, 2023
    Inventors: Jiun-In Guo, Hung-Yu Liu
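The projection of the M range/azimuth pairs onto a 2D plane can be sketched directly. The convention used here (azimuth measured from the antenna boresight, `x = r·sin(a)`, `y = r·cos(a)`) is an assumption; the patent only requires that each pair becomes one frame in time order:

```python
import math

def project_frames(ranges, azimuths):
    # Project each (range, azimuth) measurement onto a 2D plane, one
    # point per frame, preserving the time order of the M measurements.
    return [(r * math.sin(a), r * math.cos(a))
            for r, a in zip(ranges, azimuths)]

# Two measurements: straight ahead at 1 m, then 90 degrees right at 2 m.
frames = project_frames([1.0, 2.0], [0.0, math.pi / 2])
```

The resulting sequence of frames is what would be arranged into consecutive candidate frames and fed to the AI model.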
  • Publication number: 20230154203
    Abstract: A path planning system includes an image-capturing device, a point-cloud map-retrieving device, and a processing device. The image-capturing device captures a first and a second camera road image. The point-cloud map-retrieving device retrieves distance data points to create a road distance point-cloud map. The processing device receives the road distance point-cloud map and the first and second camera road images, calibrates and fuses them to generate a road camera point-cloud fusion map, and then determines the road-line information of the second camera road image to generate a road-segmented map. The road-segmented map and the road camera point-cloud fusion map are fused. The distance data of the road-segmented map are obtained according to the distance data points. A front driving path for the target road is planned according to the distance data.
    Type: Application
    Filed: January 26, 2022
    Publication date: May 18, 2023
    Inventors: JIUN-IN GUO, JEN-SHUO CHANG
  • Publication number: 20210350705
    Abstract: The invention relates to a deep-learning-based driving assistance system and method thereof. The system adopts a one-stage object detection neural network, and is applied to an embedded device for quickly calculating and determining a driving object information. The system comprises an image capture module, a feature extraction module, a semantic segmentation module, and a lane processing module, wherein the lane processing module further comprises a lane line binary sub-module, a lane line clustering sub-module, and a lane line fitting sub-module.
    Type: Application
    Filed: October 7, 2020
    Publication date: November 11, 2021
    Inventors: Jiun-In Guo, Chun-Yu Lai
  • Patent number: 11102406
    Abstract: A method of processing a panoramic map based on an equirectangular projection is provided. The method comprises the following steps. First and second equirectangular projected panoramic images are captured through at least one lens at different times, respectively. The first and the second equirectangular projected panoramic images are perspectively transformed based on at least one horizontal angle, respectively. A plurality of first and second feature points are extracted from the first and the second equirectangular projected panoramic images, respectively. A plurality of identical feature points in the first and the second feature points are tracked. A camera pose is obtained based on the identical feature points. A plurality of 3D sparse point maps, in binary format, are established based on the at least one horizontal angle. The camera pose and the 3D sparse point maps are exported to an external system through an export channel.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: August 24, 2021
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Po-Hung Lin
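The perspective transform of an equirectangular panorama at a given horizontal angle can be sketched as an inverse pixel mapping. The pinhole model, the yaw-only rotation, and all parameter names here are illustrative assumptions, not the patent's implementation:

```python
import math

def persp_to_equirect(u, v, fov_deg, yaw_deg, out_w, out_h, pano_w, pano_h):
    # Map pixel (u, v) of a virtual perspective view, rendered at
    # horizontal angle yaw_deg with the given field of view, back to its
    # source coordinates in the equirectangular panorama.
    f = (out_w / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length
    x = u - out_w / 2
    y = v - out_h / 2
    lon = math.atan2(x, f) + math.radians(yaw_deg)   # longitude of the ray
    lat = math.atan2(-y, math.hypot(x, f))           # latitude of the ray
    px = (lon / (2 * math.pi) + 0.5) * pano_w        # wrap longitude to width
    py = (0.5 - lat / math.pi) * pano_h              # latitude to height
    return px % pano_w, py
```

Sampling the panorama at `(px, py)` for every output pixel yields the perspective view from which the feature points are then extracted and tracked.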
  • Publication number: 20210203843
    Abstract: A method of processing a panoramic map based on an equirectangular projection is provided. The method comprises the following steps. First and second equirectangular projected panoramic images are captured through at least one lens at different times, respectively. The first and the second equirectangular projected panoramic images are perspectively transformed based on at least one horizontal angle, respectively. A plurality of first and second feature points are extracted from the first and the second equirectangular projected panoramic images, respectively. A plurality of identical feature points in the first and the second feature points are tracked. A camera pose is obtained based on the identical feature points. A plurality of 3D sparse point maps, in binary format, are established based on the at least one horizontal angle. The camera pose and the 3D sparse point maps are exported to an external system through an export channel.
    Type: Application
    Filed: November 23, 2020
    Publication date: July 1, 2021
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In GUO, Po-Hung LIN
  • Patent number: 11049259
    Abstract: An image tracking method of the present invention includes the following steps: (A) obtaining a plurality of original images by using an image capturing device; (B) transmitting the plurality of original images to a computing device, and generating a position box based on a preset image set; (C) obtaining an initial foreground image including a target object, and determining an identified foreground image based on a pixel ratio and a first threshold; (D) obtaining a feature of the identified foreground image and obtaining a first feature score based on that feature; and (E) generating a target object matching result based on the first feature score and a second threshold, and recording a moving trajectory of the target object based on the target object matching result.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: June 29, 2021
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Ssu-Yuan Chang
  • Publication number: 20210004635
    Abstract: A method and system for identifying a pedestrian are disclosed. The method comprises: capturing an original image and detecting a pedestrian in the original image so as to obtain a 2D pedestrian feature image; obtaining 3D information and identifying the 3D information so as to obtain a 3D pedestrian feature map; projecting the 3D pedestrian feature map to a 2D pedestrian feature plane image; and matching the 2D pedestrian feature image and the 2D pedestrian feature plane image to obtain a matched image; wherein the original image and the 3D information are obtained simultaneously.
    Type: Application
    Filed: June 5, 2020
    Publication date: January 7, 2021
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In GUO, Tai-En WU
  • Patent number: 10810705
    Abstract: A video dehazing method includes: capturing a hazy image including multiple inputted pixels by an image capture module; calculating an atmospheric light value according to the inputted pixels by an atmospheric light estimation unit; determining a sky image area according to the inputted pixels via the intermediate calculation results of a guided filter by a sky detection unit; calculating a dark channel image according to the inputted pixels based on dark channel prior (DCP) by a dark channel prior unit; calculating a fine transmission image according to the inputted pixels, the atmospheric light value, the sky image area and the dark channel image via a guided filter by a transmission estimation unit; generating a dehazed image according to the inputted pixels, the atmospheric light value and the fine transmission image by an image dehazing unit; and outputting the dehazed image by a video outputting module.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: October 20, 2020
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Cheng-Yen Lin
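The dark-channel-prior step named in the abstract can be sketched for a small grayscale-range image. The nested-list image representation and the square patch are illustrative; the patent's pipeline computes this on video frames alongside the guided-filter stages:

```python
def dark_channel(image, patch=3):
    # Dark channel prior: for each pixel, take the minimum over the RGB
    # channels of every pixel in a local patch. In haze-free regions this
    # value is close to zero; haze lifts it, which drives the
    # transmission estimate used for dehazing.
    h, w = len(image), len(image[0])
    r = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = min(
                min(image[y][x])  # channel-wise minimum of one pixel
                for y in range(max(0, i - r), min(h, i + r + 1))
                for x in range(max(0, j - r), min(w, j + r + 1)))
    return out
```

From the dark channel, a coarse transmission map can be estimated (for instance `t = 1 - omega * dark / A` with atmospheric light `A`), which the abstract's transmission estimation unit then refines with a guided filter.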
  • Patent number: 10726277
    Abstract: A lane line detection method including the following steps: acquiring an original image by an image capture device, in which the original image includes a ground area and a sky area; setting a separating line between the sky area and the ground area in the original image; measuring an average intensity of a central area above the separating line, and deciding a weather condition according to the average intensity; setting a threshold according to the weather condition, and executing a binarization process according to the threshold on an area below the separating line to obtain a binary image; and using a line detection method to detect a plurality of approximate lane lines in the binary image.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: July 28, 2020
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Yi-Ting Lai
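The weather-adaptive thresholding steps above can be sketched on a grayscale image stored as nested lists. The central-area bounds, the 128 intensity cutoff, and the two threshold values are assumptions; the patent only ties the threshold to the weather condition inferred from the sky:

```python
def pick_threshold(image, separating_row):
    # Average intensity of a central area above the separating line
    # decides the weather condition, which in turn sets the threshold.
    sky = image[:separating_row]
    center = [row[len(row) // 4: 3 * len(row) // 4] for row in sky]
    avg = sum(sum(row) for row in center) / sum(len(row) for row in center)
    # Brighter sky (clear day) -> higher binarization threshold.
    return 200 if avg > 128 else 120

def binarize_below(image, separating_row, threshold):
    # Binarize only the ground area below the separating line.
    return [[255 if px >= threshold else 0 for px in row]
            for row in image[separating_row:]]
```

A line detector (for example a Hough transform) would then run on the binary ground-area image to find the approximate lane lines.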
  • Publication number: 20200058129
    Abstract: An image tracking method of the present invention includes the following steps: (A) obtaining a plurality of original images by using an image capturing device; (B) transmitting the plurality of original images to a computing device, and generating a position box based on a preset image set; (C) obtaining an initial foreground image including a target object, and determining an identified foreground image based on a pixel ratio and a first threshold; (D) obtaining a feature of the identified foreground image and obtaining a first feature score based on that feature; and (E) generating a target object matching result based on the first feature score and a second threshold, and recording a moving trajectory of the target object based on the target object matching result.
    Type: Application
    Filed: July 26, 2019
    Publication date: February 20, 2020
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Ssu-Yuan Chang
  • Publication number: 20190287219
    Abstract: A video dehazing method includes: capturing a hazy image including multiple inputted pixels by an image capture module; calculating an atmospheric light value according to the inputted pixels by an atmospheric light estimation unit; determining a sky image area according to the inputted pixels via the intermediate calculation results of a guided filter by a sky detection unit; calculating a dark channel image according to the inputted pixels based on dark channel prior (DCP) by a dark channel prior unit; calculating a fine transmission image according to the inputted pixels, the atmospheric light value, the sky image area and the dark channel image via a guided filter by a transmission estimation unit; generating a dehazed image according to the inputted pixels, the atmospheric light value and the fine transmission image by an image dehazing unit; and outputting the dehazed image by a video outputting module.
    Type: Application
    Filed: June 11, 2018
    Publication date: September 19, 2019
    Inventors: Jiun-In GUO, Cheng-Yen LIN
  • Publication number: 20190279003
    Abstract: A lane line detection method including the following steps: acquiring an original image by an image capture device, in which the original image includes a ground area and a sky area; setting a separating line between the sky area and the ground area in the original image; measuring an average intensity of a central area above the separating line, and deciding a weather condition according to the average intensity; setting a threshold according to the weather condition, and executing a binarization process according to the threshold on an area below the separating line to obtain a binary image; and using a line detection method to detect a plurality of approximate lane lines in the binary image.
    Type: Application
    Filed: August 23, 2018
    Publication date: September 12, 2019
    Inventors: Jiun-In GUO, Yi-Ting LAI
  • Patent number: 10187585
    Abstract: A method for adjusting exposure of a camera device is provided to include steps of: using first to Mth camera modules to continuously capture images in sync; reducing an exposure value of the first camera module according to a first current image captured at a latest time point by the first camera module until a first condition associated with the first current image is met; adjusting an exposure value of an ith camera module according to an ith current image, which is captured at the latest time point by the ith camera module, and an (i-1)th previous image, which is captured at a time point immediately previous to the latest time point by an (i-1)th camera module.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: January 22, 2019
    Assignee: National Chiao Tung University
    Inventors: Jiun-In Guo, Po-Hsiang Huang
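The chained exposure update described above can be sketched with mean image brightness as the control signal. The target level, step size, and the use of brightness means are assumptions; the patent only specifies which images drive which camera's exposure value:

```python
def adjust_exposures(evs, current_means, prev_means, target=118, step=0.1):
    # Camera 1 is driven toward the target brightness from its own
    # current image; camera i (i > 1) is steered using its own current
    # image and camera (i-1)'s previous image, so exposure settings
    # propagate down the chain of synchronised camera modules.
    new = list(evs)
    if current_means[0] > target:
        new[0] -= step  # reduce EV until the first condition is met
    for i in range(1, len(evs)):
        diff = prev_means[i - 1] - current_means[i]
        new[i] += step if diff > 0 else (-step if diff < 0 else 0)
    return new

# One iteration: camera 1 is too bright; camera 2 is darker than
# camera 1's previous image, so its exposure value is raised.
new_evs = adjust_exposures([5.0, 5.0], [150, 100], [140, 90])
```

Running this once per captured frame converges the modules toward mutually consistent exposure, which is what keeps the synchronized captures comparable.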
  • Publication number: 20170374257
    Abstract: A method for adjusting exposure of a camera device is provided to include steps of: using first to Mth camera modules to continuously capture images in sync; reducing an exposure value of the first camera module according to a first current image captured at a latest time point by the first camera module until a first condition associated with the first current image is met; adjusting an exposure value of an ith camera module according to an ith current image, which is captured at the latest time point by the ith camera module, and an (i-1)th previous image, which is captured at a time point immediately previous to the latest time point by an (i-1)th camera module.
    Type: Application
    Filed: March 16, 2017
    Publication date: December 28, 2017
    Inventors: Jiun-In GUO, Po-Hsiang HUANG