Patents by Inventor Jiun-In Guo

Jiun-In Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11102406
    Abstract: A method of processing a panoramic map based on an equirectangular projection is provided. The method comprises the following steps. A first and a second equirectangular projected panoramic image are captured through at least one lens at different times, respectively. The first and the second equirectangular projected panoramic images are perspectively transformed based on at least one horizontal angle, respectively. A plurality of first and second feature points are extracted from the first and the second equirectangular projected panoramic images, respectively. A plurality of identical feature points in the first and the second feature points are tracked. A camera pose is obtained based on the identical feature points. A plurality of 3D sparse point maps, in binary format, are established based on the at least one horizontal angle. The camera pose and the 3D sparse point maps are exported to an external system through an export channel.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: August 24, 2021
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Po-Hung Lin
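    The perspective transform described above is the key preprocessing step before feature extraction. Below is a minimal Python sketch of a generic equirectangular-to-perspective reprojection at a given horizontal angle, assuming OpenCV (cv2) and a simple pinhole model; the patent's actual transform, map format and export channel are not specified in the abstract.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV is available

    def equirect_to_perspective(equi, yaw_deg, fov_deg=90.0, out_size=(640, 640)):
        """Reproject an equirectangular panorama to a pinhole view at one horizontal angle.

        Generic sketch, not the transform claimed in the patent.
        """
        w_out, h_out = out_size
        H, W = equi.shape[:2]
        f = 0.5 * w_out / np.tan(np.radians(fov_deg) / 2.0)  # focal length in pixels

        # Ray direction for every output pixel (camera looks along +z).
        xs, ys = np.meshgrid(np.arange(w_out) - w_out / 2.0,
                             np.arange(h_out) - h_out / 2.0)
        zs = np.full_like(xs, f)
        norm = np.sqrt(xs ** 2 + ys ** 2 + zs ** 2)
        x, y, z = xs / norm, ys / norm, zs / norm

        # Rotate the rays by the requested horizontal angle (yaw about the vertical axis).
        yaw = np.radians(yaw_deg)
        x_r = x * np.cos(yaw) + z * np.sin(yaw)
        z_r = -x * np.sin(yaw) + z * np.cos(yaw)

        # Convert rays to longitude/latitude, then to equirectangular pixel coordinates.
        lon = np.arctan2(x_r, z_r)               # [-pi, pi]
        lat = np.arcsin(np.clip(y, -1.0, 1.0))   # [-pi/2, pi/2]
        map_x = ((lon / np.pi) + 1.0) * 0.5 * (W - 1)
        map_y = ((lat / (np.pi / 2)) + 1.0) * 0.5 * (H - 1)

        return cv2.remap(equi, map_x.astype(np.float32), map_y.astype(np.float32),
                         interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
    ```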
  • Publication number: 20210203843
    Abstract: A method of processing a panoramic map based on an equirectangular projection is provided. The method comprises the following steps. A first and a second equirectangular projected panoramic image are captured through at least one lens at different times, respectively. The first and the second equirectangular projected panoramic images are perspectively transformed based on at least one horizontal angle, respectively. A plurality of first and second feature points are extracted from the first and the second equirectangular projected panoramic images, respectively. A plurality of identical feature points in the first and the second feature points are tracked. A camera pose is obtained based on the identical feature points. A plurality of 3D sparse point maps, in binary format, are established based on the at least one horizontal angle. The camera pose and the 3D sparse point maps are exported to an external system through an export channel.
    Type: Application
    Filed: November 23, 2020
    Publication date: July 1, 2021
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In GUO, Po-Hung LIN
  • Patent number: 11049259
    Abstract: An image tracking method of the present invention includes the following steps: (A) obtaining a plurality of original images by using an image capturing device; (B) transmitting the plurality of original images to a computing device, and generating a position box based on a preset image set; (C) obtaining an initial foreground image including a target object, and determining an identified foreground image based on a pixel ratio and a first threshold; (D) obtaining a feature and obtaining a first feature score based on the feature of the identified foreground image; and (E) generating a target object matching result based on the first feature score and a second threshold, and recording a moving trajectory of the target object based on the target object matching result.
    Type: Grant
    Filed: July 26, 2019
    Date of Patent: June 29, 2021
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Ssu-Yuan Chang
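    As a rough illustration of steps (C) through (E), the sketch below uses OpenCV background subtraction for the foreground test and a hue histogram as the feature; both are assumptions, since the abstract does not name the feature or the threshold values.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV supplies the background subtractor and histogram feature

    def track_step(frame, box, bg_model, prev_hist, ratio_thr=0.3, score_thr=0.6):
        """One tracking step in the spirit of steps (C)-(E); bg_model is e.g.
        cv2.createBackgroundSubtractorMOG2()."""
        x, y, w, h = box
        roi = frame[y:y + h, x:x + w]

        # (C) foreground mask inside the position box, kept only if enough pixels are foreground.
        fg_mask = bg_model.apply(roi)
        pixel_ratio = np.count_nonzero(fg_mask) / float(fg_mask.size)
        if pixel_ratio < ratio_thr:
            return None, prev_hist            # not an identified foreground image

        # (D) feature of the identified foreground image: normalised hue histogram.
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], fg_mask, [32], [0, 180])
        cv2.normalize(hist, hist)
        score = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)

        # (E) matching decision against the second threshold; the caller appends the
        # matched box to the target object's moving trajectory.
        return (box if score >= score_thr else None), hist
    ```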
  • Publication number: 20210004635
    Abstract: A method and system for identifying a pedestrian is disclosed. The method comprises: capturing an original image and detecting a pedestrian in the original image so as to obtain a 2D pedestrian feature image; obtaining 3D information and identifying the 3D information so as to obtain a 3D pedestrian feature map; projecting the 3D pedestrian feature map to a 2D pedestrian feature plane image; and matching the 2D pedestrian feature image and the 2D pedestrian feature plane image to obtain a matched image; wherein the original image and the 3D information are obtained simultaneously.
    Type: Application
    Filed: June 5, 2020
    Publication date: January 7, 2021
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In GUO, Tai-En WU
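    The matching step relies on projecting the 3D pedestrian feature map onto the image plane and comparing it with the 2D detection. Below is a minimal sketch with an assumed pinhole intrinsic matrix and an IoU-based match test; the abstract does not state the actual matching criterion.

    ```python
    import numpy as np

    def project_points(points_3d, K):
        """Project Nx3 camera-frame points to pixel coordinates with 3x3 intrinsics K."""
        pts = points_3d @ K.T
        return pts[:, :2] / pts[:, 2:3]

    def box_iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def match_pedestrian(box_2d, points_3d, K, iou_thr=0.5):
        """Project the 3D pedestrian feature points and match them against the 2D detection."""
        uv = project_points(points_3d, K)
        proj_box = (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())
        return box_iou(box_2d, proj_box) >= iou_thr, proj_box
    ```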
  • Patent number: 10810705
    Abstract: A video dehazing method includes: capturing a hazy image including multiple inputted pixels by an image capture module; calculating an atmospheric light value according to the inputted pixels by an atmospheric light estimation unit; determining a sky image area according to the inputted pixels via the intermediate calculation results of a guided filter by a sky detection unit; calculating a dark channel image according to the inputted pixels based on dark channel prior (DCP) by a dark channel prior unit; calculating a fine transmission image according to the inputted pixels, the atmospheric light value, the sky image area and the dark channel image via a guided filter by a transmission estimation unit; generating a dehazed image according to the inputted pixels, the atmospheric light value and the fine transmission image by an image dehazing unit; and outputting the dehazed image by a video outputting module.
    Type: Grant
    Filed: June 11, 2018
    Date of Patent: October 20, 2020
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Cheng-Yen Lin
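    The pipeline follows the classic dark-channel-prior formulation. The sketch below is the textbook single-image version with guided-filter refinement, assuming opencv-contrib is installed (cv2.ximgproc); it omits the patent's sky-detection step and uses illustrative parameter values.

    ```python
    import numpy as np
    import cv2  # assumption: opencv-contrib provides cv2.ximgproc.guidedFilter

    def dehaze(img, patch=15, omega=0.95, t0=0.1):
        """Textbook dark-channel-prior dehazing sketch, not the patented hardware pipeline."""
        I = img.astype(np.float32) / 255.0

        # Dark channel: per-pixel channel minimum eroded over a local patch.
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        dark = cv2.erode(I.min(axis=2), kernel)

        # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
        n = max(1, dark.size // 1000)
        idx = np.argpartition(dark.ravel(), -n)[-n:]
        A = I.reshape(-1, 3)[idx].mean(axis=0)

        # Coarse transmission from the dark channel of the normalised image,
        # refined with a guided filter (guided by the grey input).
        norm = (I / A).min(axis=2)
        t = 1.0 - omega * cv2.erode(norm, kernel)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
        t = cv2.ximgproc.guidedFilter(gray, t.astype(np.float32), radius=40, eps=1e-3)

        # Recover the scene radiance J = (I - A) / max(t, t0) + A.
        t = np.clip(t, t0, 1.0)[..., None]
        J = (I - A) / t + A
        return np.clip(J * 255.0, 0, 255).astype(np.uint8)
    ```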
  • Patent number: 10726277
    Abstract: A lane line detection method including the following steps: acquiring an original image by an image capture device, in which the original image includes a ground area and a sky area; setting a separating line between the sky area and the ground area in the original image; measuring an average intensity of a central area above the separating line, and deciding a weather condition according to the average intensity; setting a threshold according to the weather condition, and executing a binarization process according to the threshold on an area below the separating line to obtain a binary image; and using a line detection method to detect a plurality of approximate lane lines in the binary image.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: July 28, 2020
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Yi-Ting Lai
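    A compact sketch of the described flow (horizon split, intensity-based weather decision, weather-dependent binarization, then line detection) using OpenCV's probabilistic Hough transform; the threshold values and the fixed separating-line ratio are illustrative, not the patent's.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV for thresholding and Hough line detection

    def detect_lane_lines(img, separating_ratio=0.5, bright_thr=140):
        """Lane line sketch: split at a separating line, decide weather, binarize, detect lines."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        sep = int(h * separating_ratio)                 # separating line between sky and ground

        # Decide the weather condition from the average intensity of a central sky patch.
        sky_patch = gray[:sep, w // 4: 3 * w // 4]
        sunny = sky_patch.mean() > bright_thr

        # Binarize only the area below the separating line, with a weather-dependent threshold.
        thr = 180 if sunny else 120
        _, binary = cv2.threshold(gray[sep:, :], thr, 255, cv2.THRESH_BINARY)

        # Detect approximate lane lines in the binary image.
        lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=10)
        if lines is None:
            return []
        # Shift line endpoints back into full-image coordinates.
        return [(x1, y1 + sep, x2, y2 + sep) for x1, y1, x2, y2 in lines[:, 0]]
    ```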
  • Publication number: 20200058129
    Abstract: An image tracking method of the present invention includes the following steps: (A) obtaining a plurality of original images by using an image capturing device; (B) transmitting the plurality of original images to a computing device, and generating a position box based on a preset image set; (C) obtaining an initial foreground image including a target object, and determining an identified foreground image based on a pixel ratio and a first threshold; (D) obtaining a feature and obtaining a first feature score based on the feature of the identified foreground image; and (E) generating a target object matching result based on the first feature score and a second threshold, and recording a moving trajectory of the target object based on the target object matching result.
    Type: Application
    Filed: July 26, 2019
    Publication date: February 20, 2020
    Applicant: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Ssu-Yuan Chang
  • Publication number: 20190287219
    Abstract: A video dehazing method includes: capturing a hazy image including multiple inputted pixels by an image capture module; calculating an atmospheric light value according to the inputted pixels by an atmospheric light estimation unit; determining a sky image area according to the inputted pixels via the intermediate calculation results of a guided filter by a sky detection unit; calculating a dark channel image according to the inputted pixels based on dark channel prior (DCP) by a dark channel prior unit; calculating a fine transmission image according to the inputted pixels, the atmospheric light value, the sky image area and the dark channel image via a guided filter by a transmission estimation unit; generating a dehazed image according to the inputted pixels, the atmospheric light value and the fine transmission image by an image dehazing unit; and outputting the dehazed image by a video outputting module.
    Type: Application
    Filed: June 11, 2018
    Publication date: September 19, 2019
    Inventors: Jiun-In GUO, Cheng-Yen LIN
  • Publication number: 20190279003
    Abstract: A lane line detection method including the following steps: acquiring an original image by an image capture device, in which the original image includes a ground area and a sky area; setting a separating line between the sky area and the ground area in the original image; measuring an average intensity of a central area above the separating line, and deciding a weather condition according to the average intensity; setting a threshold according to the weather condition, and executing a binarization process according to the threshold on an area below the separating line to obtain a binary image; and using a line detection method to detect a plurality of approximate lane lines in the binary image.
    Type: Application
    Filed: August 23, 2018
    Publication date: September 12, 2019
    Inventors: Jiun-In GUO, Yi-Ting LAI
  • Patent number: 10187585
    Abstract: A method for adjusting exposure of a camera device is provided to include steps of: using first to Mth camera modules to continuously capture images in sync; reducing an exposure value of the first camera module according to a first current image captured at a latest time point by the first camera module until a first condition associated with the first current image is met; adjusting an exposure value of an ith camera module according to an ith current image, which is captured at the latest time point by the ith camera module, and an (i-1)th previous image, which is captured at a time point immediately previous to the latest time point by an (i-1)th camera module.
    Type: Grant
    Filed: March 16, 2017
    Date of Patent: January 22, 2019
    Assignee: National Chiao Tung University
    Inventors: Jiun-In Guo, Po-Hsiang Huang
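    One way to read the chained adjustment rule is sketched below: the first camera lowers its exposure until saturation clears, and each later camera steers toward the brightness of the previous camera's image from the immediately preceding time point. The camera interface (get_image, get_previous_image, get_ev, set_ev), the statistics and the step size are hypothetical, not the patent's.

    ```python
    import numpy as np

    def adjust_exposures(cameras, bright_ratio=0.01, ev_step=0.125):
        """One synchronised adjustment pass over cameras[0..M-1] (illustrative sketch)."""
        # Camera 1: lower exposure until its current image is no longer saturated.
        img0 = np.asarray(cameras[0].get_image(), dtype=np.float32)
        if (img0 >= 250).mean() > bright_ratio:          # first condition: too many bright pixels
            cameras[0].set_ev(cameras[0].get_ev() - ev_step)

        # Cameras 2..M: steer each one toward the brightness of the previous
        # camera's image from the immediately preceding time point.
        for i in range(1, len(cameras)):
            cur = np.asarray(cameras[i].get_image(), dtype=np.float32)
            ref = np.asarray(cameras[i - 1].get_previous_image(), dtype=np.float32)
            diff = ref.mean() - cur.mean()
            if abs(diff) > 2.0:                          # dead band to avoid oscillation
                cameras[i].set_ev(cameras[i].get_ev() + ev_step * np.sign(diff))
    ```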
  • Publication number: 20170374257
    Abstract: A method for adjusting exposure of a camera device is provided to include steps of: using first to Mth camera modules to continuously capture images in sync; reducing an exposure value of the first camera module according to a first current image captured at a latest time point by the first camera module until a first condition associated with the first current image is met; adjusting an exposure value of an ith camera module according to an ith current image, which is captured at the latest time point by the ith camera module, and an (i-1)th previous image, which is captured at a time point immediately previous to the latest time point by an (i-1)th camera module.
    Type: Application
    Filed: March 16, 2017
    Publication date: December 28, 2017
    Inventors: Jiun-In GUO, Po-Hsiang HUANG
  • Patent number: 9824263
    Abstract: The present invention proposes a method for processing an image with depth information and a computer program product thereof, wherein a filtering template is used to extract a gesture region and filter the image, and wherein the hue values of the pixels of the current gesture region are used to modify the self-adaptive thresholds of the filtering template, and wherein the size of the filtering template at the next time point is modified according to the depth at the current time point and the depth at the former time point.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: November 21, 2017
    Assignee: National Chiao Tung University
    Inventors: Jiun-In Guo, Po-Yu Chien
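    A simplified sketch of one update of such a depth- and hue-adaptive filtering template; the hue band (mean plus/minus one standard deviation) and the depth-to-size scale constant are assumptions rather than the patent's self-adaptive rule.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV for colour conversion

    def update_gesture_filter(frame_bgr, depth_map, center, template_size, prev_depth,
                              size_scale=25000.0):
        """One update of a depth/hue filtering template around a tracked hand (sketch only)."""
        cx, cy = center
        half = template_size // 2
        y0, y1 = max(0, cy - half), cy + half
        x0, x1 = max(0, cx - half), cx + half

        # Extract the gesture region inside the current template and measure its hue statistics.
        hsv = cv2.cvtColor(frame_bgr[y0:y1, x0:x1], cv2.COLOR_BGR2HSV)
        hue = hsv[..., 0].astype(np.float32)
        hue_lo, hue_hi = hue.mean() - hue.std(), hue.mean() + hue.std()
        mask = ((hue >= hue_lo) & (hue <= hue_hi)).astype(np.uint8) * 255

        # Adapt the template size for the next frame from the current and former depths:
        # a nearer hand occupies more pixels, so the template grows as depth decreases.
        cur_depth = float(np.median(depth_map[y0:y1, x0:x1]))
        blended = 0.5 * (cur_depth + prev_depth)
        next_size = int(np.clip(size_scale / max(blended, 1.0), 32, 256))

        return mask, (hue_lo, hue_hi), next_size, cur_depth
    ```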
  • Publication number: 20170249503
    Abstract: The present invention proposes a method for processing an image with depth information and a computer program product thereof, wherein a filtering template is used to extract a gesture region and filter the image, and wherein the hue values of the pixels of the current gesture region are used to modify the self-adaptive thresholds of the filtering template, and wherein the size of the filtering template at the next time point is modified according to the depth at the current time point and the depth at the former time point.
    Type: Application
    Filed: June 20, 2016
    Publication date: August 31, 2017
    Inventors: Jiun-In GUO, Po-Yu CHIEN
  • Publication number: 20160127731
    Abstract: A macroblock skip mode judgement method for an encoder applies to implementation of H.264 encoder hardware compressing a plurality of successive frames and comprises steps: undertaking IME (Integer Motion Estimation) for two frames appearing at neighboring time points to calculate the MV (Motion Vector), and calculating the PMV (Predicted Motion Vector) of the frame appearing at the preceding time point; if the MV equals the PMV, modifying the cost function of the MV to be a negative value; undertaking block encoding and comparing the costs; if the cost of the Inter mode is smaller than the cost of the Intra mode, setting the mb (macroblock) skip flag to be 1; using entropy coding to compress the mb skip flag, and omitting compressing the macroblock represented by the mb skip flag.
    Type: Application
    Filed: November 3, 2014
    Publication date: May 5, 2016
    Inventors: Jiun-In GUO, Jin-Ming CHEN, Hsiu-Cheng CHANG, Cheng-An CHIEN
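    The decision logic can be summarized in a few lines. The sketch below takes hypothetical precomputed mode and motion-vector costs as inputs and only reproduces the skip-flag judgement described in the abstract, not the H.264 hardware implementation.

    ```python
    def judge_mb_skip(mv, pmv, inter_cost, intra_cost, mv_cost):
        """Skip-flag decision sketch: if the IME motion vector equals the predicted MV,
        force its rate cost negative so the Inter mode is favoured; flag the macroblock
        as skipped when Inter still beats Intra."""
        if mv == pmv:
            mv_cost = -abs(mv_cost)          # modify the MV cost function to a negative value
        total_inter = inter_cost + mv_cost
        mb_skip_flag = 1 if total_inter < intra_cost else 0
        return mb_skip_flag

    # Macroblocks with mb_skip_flag == 1 are then only signalled through the
    # entropy-coded skip flag; their block data is not compressed.
    ```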
  • Patent number: 9251612
    Abstract: An optimal dynamic seam adjustment system for image stitching includes an image obtaining module, a feature difference calculation module and a dynamic seam module. The image obtaining module obtains at least two images to be stitched, which have at least one overlapping area divided into a plurality of pixels arranged in a plurality of pixel columns and a plurality of pixel rows. The feature difference calculation module calculates a feature difference value for each pixel within the overlapping area. The dynamic seam module establishes a plurality of dynamic seam routes starting from a top end of the overlapping area, so as to find an optimal dynamic seam route based on the feature difference value, wherein each dynamic seam route is composed of a plurality of pixels.
    Type: Grant
    Filed: March 19, 2014
    Date of Patent: February 2, 2016
    Assignee: NATIONAL CHIAO TUNG UNIVERSITY
    Inventors: Jiun-In Guo, Hsiu-Cheng Chang, Cheng-An Chien, Kai-Chen Huang
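    The seam search is naturally expressed as dynamic programming over the feature-difference map of the overlapping area. A minimal sketch follows; the patent does not specify the feature-difference metric, so the map `diff` is taken as given.

    ```python
    import numpy as np

    def find_optimal_seam(diff):
        """Dynamic-programming seam through an overlap region.

        diff is the per-pixel feature-difference map (rows x cols). The seam starts
        at the top row and moves down one row at a time to a neighbouring column,
        minimising the accumulated feature difference along the route.
        """
        rows, cols = diff.shape
        cost = diff.astype(np.float64).copy()
        back = np.zeros((rows, cols), dtype=np.int64)

        for r in range(1, rows):
            for c in range(cols):
                lo, hi = max(0, c - 1), min(cols, c + 2)
                prev = cost[r - 1, lo:hi]
                k = int(np.argmin(prev))
                back[r, c] = lo + k
                cost[r, c] += prev[k]

        # Trace the optimal route back from the cheapest end point in the bottom row.
        seam = np.empty(rows, dtype=np.int64)
        seam[-1] = int(np.argmin(cost[-1]))
        for r in range(rows - 2, -1, -1):
            seam[r] = back[r + 1, seam[r + 1]]
        return seam   # seam[r] = column of the seam route in row r
    ```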
  • Patent number: 9105130
    Abstract: A view synthesis method of depth mismatch checking and depth error compensation is provided, wherein input left and right maps are warped in a processor to perform view synthesis, comprising the following steps: when a present pixel moves to the position of a target pixel after warping, respectively computing the pixel displacement amounts for said present pixel to move to said target pixel in said left and right maps, so as to obtain the coordinate of said target pixel; determining whether the depth value of the present pixel is greater than that of said target pixel, and if so, determining whether the depth value of said present pixel matches that of the coordinates adjacent to said target pixel; if it does not, setting the depth value of said target pixel to a hole; otherwise, covering said target pixel with the pixel value of said present pixel, thereby refining even minute errors.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: August 11, 2015
    Assignee: NATIONAL CHUNG CHENG UNIVERSITY
    Inventors: Jiun-In Guo, Kuan-Hung Chen, Chih-Hao Chang
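    A 1-D sketch of the depth-compare rule: on a warping collision the nearer pixel wins, and a pixel whose depth disagrees with the already-filled neighbourhood of its target is left as a hole. The horizontal-disparity setup and the tolerance value are assumptions, not the patent's exact procedure.

    ```python
    import numpy as np

    def warp_with_depth_check(pixels, depths, disparities, width, depth_tol=2.0):
        """Forward-warp one scanline; keep nearer pixels on collisions and mark mismatches as holes."""
        out = np.zeros_like(pixels)
        out_depth = np.full(width, -np.inf)
        hole = np.ones(width, dtype=bool)

        for x in range(width):
            tx = x + int(round(disparities[x]))          # coordinate of the target pixel
            if not (0 <= tx < width):
                continue
            if depths[x] > out_depth[tx]:
                # The present pixel is nearer; check it agrees with its target's neighbourhood.
                lo, hi = max(0, tx - 1), min(width, tx + 2)
                neighbours = out_depth[lo:hi]
                filled = neighbours[np.isfinite(neighbours)]
                if filled.size and np.all(np.abs(filled - depths[x]) > depth_tol):
                    hole[tx] = True                       # depth mismatch: leave a hole
                else:
                    out[tx] = pixels[x]                   # cover the target with the present pixel
                    out_depth[tx] = depths[x]
                    hole[tx] = False
        return out, hole
    ```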
  • Publication number: 20150172620
    Abstract: An optimal dynamic seam adjustment system for image stitching includes an image obtaining module, a feature difference calculation module and a dynamic seam module. The image obtaining module obtains at least two images to be stitched, which have at least one overlapping area divided into a plurality of pixels arranged in a plurality of pixel columns and a plurality of pixel rows. The feature difference calculation module calculates a feature difference value for each pixel within the overlapping area. The dynamic seam module establishes a plurality of dynamic seam routes starting from a top end of the overlapping area, so as to find an optimal dynamic seam route based on the feature difference value, wherein each dynamic seam route is composed of a plurality of pixels.
    Type: Application
    Filed: March 19, 2014
    Publication date: June 18, 2015
    Applicant: National Chiao Tung University
    Inventors: Jiun-In GUO, Hsiu-Cheng CHANG, Cheng-An CHIEN, Kai-Chen HUANG
  • Patent number: 8897548
    Abstract: A low-complexity method of converting 2D images/videos into 3D ones includes the steps of: identifying whether each pixel in one of the frames is an edge feature point; locating at least two vanishing lines in the frame according to the edge feature points; categorizing the frame as having one of a close-up photographic feature, a landscape feature, and a vanishing-area feature; if the frame is identified to have the vanishing-area feature or the landscape feature, generating a GDM and applying a modificatory procedure to the GDM to generate a final depth information map; and if the frame is identified to have the close-up photographic feature, distinguishing between a foreground object and background information in the frame and defining the depth of field to generate the final depth information map.
    Type: Grant
    Filed: April 2, 2013
    Date of Patent: November 25, 2014
    Assignee: National Chung Cheng University
    Inventors: Jiun-In Guo, Jui-Sheng Lee, Cheng-An Chien, Jia-Hou Chang, Cheng-Yen Chang
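    The abstract does not define how the GDM is constructed, so the sketch below only illustrates one common gradient-style depth map: depth assigned linearly from an assumed vanishing row toward the bottom of the frame. It is an illustrative stand-in, not the patented procedure.

    ```python
    import numpy as np

    def gradient_depth_map(height, width, vanish_row):
        """Hypothetical gradient-style depth map: 0 above the vanishing row,
        increasing linearly to 1 at the bottom of the frame."""
        rows = np.arange(height, dtype=np.float32)
        depth_col = np.clip((rows - vanish_row) / max(height - vanish_row, 1), 0.0, 1.0)
        return np.tile(depth_col[:, None], (1, width))
    ```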
  • Publication number: 20140294287
    Abstract: A low-complexity method of converting 2D images/videos into 3D ones includes the steps of: identifying whether each pixel in one of the frames is an edge feature point; locating at least two vanishing lines in the frame according to the edge feature points; categorizing the frame as having one of a close-up photographic feature, a landscape feature, and a vanishing-area feature; if the frame is identified to have the vanishing-area feature or the landscape feature, generating a GDM and applying a modificatory procedure to the GDM to generate a final depth information map; and if the frame is identified to have the close-up photographic feature, distinguishing between a foreground object and background information in the frame and defining the depth of field to generate the final depth information map.
    Type: Application
    Filed: April 2, 2013
    Publication date: October 2, 2014
    Applicant: National Chung Cheng University
    Inventors: Jiun-In GUO, Jui-Sheng LEE, Cheng-An CHIEN, Jia-Hou CHANG, Cheng-Yen CHANG
  • Publication number: 20140111605
    Abstract: A low-complexity panoramic image and video stitching method includes the steps of (1) providing a first image/video and a second image/video; (2) carrying out an image/video alignment to locate a plurality of common features from the first and second images/videos and to align the first and second images/videos pursuant to the common features; (3) carrying out an image/video projection and warping to make the first and second coordinates of the common features in the first and second images/videos correspond to each other and to stitch the first and second images/videos according to the mutually corresponding first and second coordinates; (4) carrying out image/video repairing and blending to compensate for chromatic aberrations of at least one seam between the first and second images/videos; and outputting the stitched first and second images/videos.
    Type: Application
    Filed: January 15, 2013
    Publication date: April 24, 2014
    Applicant: NATIONAL CHUNG CHENG UNIVERSITY
    Inventors: Jiun-In GUO, Jia-Hou CHANG, Cheng-An CHIEN
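    Steps (2) through (4) map onto a standard feature-based stitching pipeline. The sketch below uses OpenCV ORB matching, a RANSAC homography and a crude overwrite blend; these are stand-ins, since the abstract does not disclose the alignment or seam-repair details.

    ```python
    import numpy as np
    import cv2  # assumption: OpenCV feature matching, not the patent's own aligner

    def stitch_pair(img1, img2, min_matches=10):
        """Align, warp and blend two images along the lines of steps (2)-(4)."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        # (2) alignment: match common features between the two images.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
        if len(matches) < min_matches:
            raise ValueError("not enough common features to stitch")
        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

        # (3) projection and warping: map img2 into img1's coordinate frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h1, w1 = img1.shape[:2]
        canvas = cv2.warpPerspective(img2, H, (w1 + img2.shape[1], h1))

        # (4) a crude blend: img1 overwrites the overlap (the patent repairs the seam instead).
        canvas[:h1, :w1] = img1
        return canvas
    ```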