Patents by Inventor CHIN-PIN KUO

CHIN-PIN KUO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230401737
    Abstract: A method acquires a first image and a second image of a target object, which are input into a depth estimation network that outputs a depth image. A pixel posture conversion relationship between the first image and the second image is obtained. The pixel posture conversion relationship includes a position relationship between each first pixel in the first image and the second pixel in the second image that correspond to the same part of the target object. A restored image is generated based on the depth image, the pixel posture conversion relationship, and pre-obtained camera parameters. A loss of the depth estimation network is determined based on differences among the first image, the depth image, the restored image, and the second image, and is used for adjusting the parameters of the depth estimation network. A training apparatus and an electronic device applying the method are also disclosed. (An illustrative warping sketch follows this entry.)
    Type: Application
    Filed: May 4, 2023
    Publication date: December 14, 2023
    Inventors: JUNG-HAO YANG, CHIN-PIN KUO, CHIH-TE LU
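    Illustrative sketch (not part of the patent abstract): a minimal NumPy view-synthesis step of the kind the abstract describes, reprojecting the first image's pixels into the second view with the predicted depth, a relative pose, and camera intrinsics, then sampling the second image to form the restored image. The function name, nearest-neighbour sampling, and the warping direction are assumptions for illustration.
      import numpy as np

      def restore_image(second_img, depth, K, R, t):
          # Warp the second image into the first view using the first
          # image's predicted depth and the first-to-second relative pose.
          h, w = depth.shape
          ys, xs = np.mgrid[0:h, 0:w]
          pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
          # Back-project pixels to 3-D, move them into the second camera's
          # frame, and project them with the intrinsic matrix K.
          cam_pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
          proj = K @ (R @ cam_pts + t.reshape(3, 1))
          z = np.clip(proj[2], 1e-6, None)
          u = np.round(proj[0] / z).astype(int)
          v = np.round(proj[1] / z).astype(int)
          valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
          restored = np.zeros_like(second_img)
          flat = restored.reshape(-1, *second_img.shape[2:])
          flat[valid] = second_img[v[valid], u[valid]]
          return restored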
  • Publication number: 20230401670
    Abstract: A multi-scale autoencoder generation method applied to an electronic device is provided. The method includes acquiring product images and an annotation of each product image. Latent spaces of a plurality of scales are constructed. Autoencoders are obtained according to the latent spaces and the image size of the product images. Learners are obtained by training each autoencoder on non-defective images. Reconstructed images are obtained by inputting the product images into the learners. Detection results are obtained by detecting whether each product image has defects according to the reconstructed images. Similar images for each learner are determined based on a comparison between each detection result and the corresponding annotation. Once the correct rate of each learner is obtained from the similar images, one learner from the plurality of learners is selected as the multi-scale autoencoder according to its correct rate. (A sketch of the correct-rate step follows this entry.)
    Type: Application
    Filed: August 26, 2022
    Publication date: December 14, 2023
    Inventors: WAN-JHEN LEE, CHIN-PIN KUO
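    Illustrative sketch (not part of the patent abstract): a minimal way to score each learner by comparing its per-image defect verdicts against the annotations; the boolean encoding of "defective" is an assumption.
      import numpy as np

      def correct_rate(detections, annotations):
          # Fraction of product images whose defect/no-defect verdict from
          # one learner matches the annotation; the learner with the
          # highest rate would be kept as the multi-scale autoencoder.
          d = np.asarray(detections, dtype=bool)
          a = np.asarray(annotations, dtype=bool)
          return float(np.mean(d == a))

      # Example: 3 of 4 verdicts agree with the annotations -> 0.75
      print(correct_rate([True, False, True, True], [True, False, False, True]))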
  • Publication number: 20230401733
    Abstract: A method for training an autoencoder implemented in an electronic device includes obtaining a stereoscopic image while a vehicle is in motion, the stereoscopic image including a left image and a right image; generating a stereo disparity map according to the left image; generating a predicted right image according to the left image and the stereo disparity map; and calculating a first mean square error between the predicted right image and the right image. (A sketch of the prediction and error step follows this entry.)
    Type: Application
    Filed: November 30, 2022
    Publication date: December 14, 2023
    Inventors: CHIN-PIN KUO, CHIH-TE LU, TZU-CHEN LIN, JUNG-HAO YANG
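    Illustrative sketch (not part of the patent abstract): one way to form the predicted right image by shifting left-image pixels by their disparities and then compute the first mean square error; the shift direction and nearest-pixel sampling are assumptions.
      import numpy as np

      def predict_right_and_mse(left, right, disparity):
          # Shift each left-image pixel horizontally by its disparity to
          # synthesise a right view, then score it against the real right
          # image with a mean square error.
          h, w = disparity.shape
          ys, xs = np.mgrid[0:h, 0:w]
          xs_shifted = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
          predicted_right = left[ys, xs_shifted]
          mse = np.mean((predicted_right.astype(float) - right.astype(float)) ** 2)
          return predicted_right, mse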
  • Publication number: 20230394634
    Abstract: A method for processing images to remove distortions of various types, without requiring a separately retrained model for each distortion type, is applied in an electronic device. The method establishes a distortion removal model and inputs distorted images into it. General distortions are removed by the distortion removal model, and the restored images are input into a deep learning model for final image processing. The deep learning model of the present disclosure then corrects remaining distortion in the images so that distortion-free images are obtained.
    Type: Application
    Filed: January 9, 2023
    Publication date: December 7, 2023
    Inventors: TSUNG-WEI LIU, CHIN-PIN KUO
  • Publication number: 20230391372
    Abstract: A method of detecting moving objects is provided. The method obtains a point cloud data set of a target scene and an image of the target scene. The method detects one or more stationary object areas from the image. The method records the point cloud data corresponding to the stationary object areas in the point cloud data set as first point cloud data. The method determines a velocity range of the first point cloud data according to the first point cloud data. The method further determines whether one or more moving objects are present in the target scene according to second point cloud data and the velocity range of the first point cloud data, the second point cloud data being the point cloud data of the point cloud data set excluding the first point cloud data. A related electronic device and a non-transitory storage medium are also provided. (A sketch of the velocity-range test follows this entry.)
    Type: Application
    Filed: May 5, 2023
    Publication date: December 7, 2023
    Inventors: YU-HSUAN CHIEN, CHIN-PIN KUO
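    Illustrative sketch (not part of the patent abstract): a minimal version of the velocity-range test, assuming each point carries a measured radial velocity (e.g. from radar or FMCW lidar) and a boolean mask marks returns from the stationary-object areas.
      import numpy as np

      def has_moving_objects(point_velocities, stationary_mask, margin=0.0):
          # First point cloud data: returns inside stationary-object areas.
          # Their velocity range reflects ego-motion; second point cloud
          # data falling outside that range suggests a moving object.
          v_stationary = point_velocities[stationary_mask]
          lo, hi = v_stationary.min() - margin, v_stationary.max() + margin
          v_rest = point_velocities[~stationary_mask]
          return bool(np.any((v_rest < lo) | (v_rest > hi)))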
  • Publication number: 20230394620
    Abstract: A method for generating distorted images, applied in an electronic device, obtains first pixel coordinates of undistorted images and a first pixel value at each first pixel coordinate, and selects an arbitrary distortion center coordinate. The distance between the distortion center coordinate and each first pixel coordinate is calculated, and second pixel coordinates corresponding to the first pixel coordinates are calculated according to a distortion coefficient, the first pixel coordinates, and the distances. The first pixel value at each first pixel coordinate is taken as the second pixel value at the corresponding second pixel coordinate, and distorted images are generated from the undistorted images for training purposes according to the second pixel coordinates and the second pixel values. (A sketch of this mapping follows this entry.)
    Type: Application
    Filed: August 26, 2022
    Publication date: December 7, 2023
    Inventors: TSUNG-WEI LIU, CHIN-PIN KUO
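    Illustrative sketch (not part of the patent abstract): a forward mapping using a single radial coefficient and an arbitrary distortion centre; the one-term radial model and nearest-pixel rounding are assumptions.
      import numpy as np

      def distort_image(image, cx, cy, k):
          # Map each undistorted pixel to a second coordinate based on its
          # distance from the distortion centre (cx, cy) and coefficient k,
          # then copy the first pixel value to the second coordinate.
          h, w = image.shape[:2]
          distorted = np.zeros_like(image)
          ys, xs = np.mgrid[0:h, 0:w]
          dx, dy = xs - cx, ys - cy
          r2 = dx ** 2 + dy ** 2             # squared distance to the centre
          scale = 1.0 + k * r2               # simple one-term radial model
          xd = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
          yd = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
          distorted[yd, xd] = image[ys, xs]  # forward-map pixel values
          return distorted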
  • Publication number: 20230394843
    Abstract: A method for distinguishing road vehicles or other objects which are in motion from those which are not, applied in an in-vehicle device of an assisted vehicle being driven, captures a first image of a target vehicle and a second, later image of the target vehicle, determines a first mask area of the target vehicle from the first image, and determines a second mask area of the target vehicle from the second image based on an instance segmentation algorithm. An Intersection over Union (IoU) is calculated between the first mask area and the second mask area, and a determination is made according to the IoU as to whether a dynamic class object mask area of the target vehicle should be generated. A dynamic class object mask area of the target vehicle is generated when the target vehicle is found to be a moving vehicle. (A sketch of the IoU test follows this entry.)
    Type: Application
    Filed: January 11, 2023
    Publication date: December 7, 2023
    Inventors: CHIEH LEE, CHIN-PIN KUO
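    Illustrative sketch (not part of the patent abstract): mask IoU between the two frames and a threshold test; the threshold value and the "low overlap means motion" reading are assumptions.
      import numpy as np

      def mask_iou(mask_a, mask_b):
          # Intersection over Union of two boolean instance masks.
          inter = np.logical_and(mask_a, mask_b).sum()
          union = np.logical_or(mask_a, mask_b).sum()
          return inter / union if union > 0 else 0.0

      def is_moving(mask_first, mask_second, iou_threshold=0.8):
          # Low overlap between the two frames' masks is read as motion;
          # the 0.8 threshold is an assumed, not patented, value.
          return mask_iou(mask_first, mask_second) < iou_threshold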
  • Publication number: 20230394690
    Abstract: A method for obtaining depth images for improved driving safety, applied in an electronic device, processes a captured first image and an immediately following second image to obtain two sets of predicted depth maps. The electronic device determines a transformation matrix of the camera between the first and second images, converts the first predicted depth maps into first point cloud maps and the second predicted depth maps into second point cloud maps, then converts the first point cloud maps into third point cloud maps and the second point cloud maps into fourth point cloud maps. The first and fourth point cloud maps are matched, and first and second error values are calculated, thereby obtaining a target deep learning network model. Images to be detected are input into the target deep learning network model, and depth images are obtained.
    Type: Application
    Filed: August 26, 2022
    Publication date: December 7, 2023
    Inventors: CHIH-TE LU, CHIEH LEE, CHIN-PIN KUO
  • Publication number: 20230394692
    Abstract: A method for estimating depth implemented in an electronic device includes obtaining a first image; inputting the first image into a depth estimation model and obtaining a first depth image; obtaining a depth ratio factor, the depth ratio factor indicating a relationship between the relative depth and the depth of each pixel in the first depth image; and obtaining depth information of the first depth image according to the first depth image and the depth ratio factor. (A sketch of the scaling step follows this entry.)
    Type: Application
    Filed: September 29, 2022
    Publication date: December 7, 2023
    Inventors: YU-HSUAN CHIEN, CHIN-PIN KUO
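    Illustrative sketch (not part of the patent abstract): scaling the model's relative depth by a single ratio factor; estimating that factor from a few known reference depths is an assumed option, not taken from the abstract.
      import numpy as np

      def to_depth(relative_depth, depth_ratio_factor):
          # Recover depth information by scaling the relative depth map.
          return relative_depth * depth_ratio_factor

      def estimate_ratio_factor(relative_depth, known_depths, pixel_coords):
          # One possible way to obtain the factor: the median ratio between
          # known depths (e.g. sparse range measurements) and the model's
          # relative depths at the same (u, v) pixels.
          rel = np.array([relative_depth[v, u] for (u, v) in pixel_coords])
          return float(np.median(np.asarray(known_depths, dtype=float) / rel))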
  • Publication number: 20230394693
    Abstract: A method for training a depth estimation model acquires a first image and a second image, which are input into the depth estimation model. The depth estimation model outputs a first depth image. A posture conversion relationship between the first image and the second image is extracted by a posture estimation model. A restored image is generated based on the first depth image, the posture conversion relationship, and pre-obtained camera parameters. A similarity between the restored image and the first image is calculated to obtain a two-dimensional loss image. A first similarity of the pixel points of each weak texture region is determined based on the two-dimensional loss image. The ratio of the first similarity used for adjusting the parameters of the depth estimation model is decreased, and a first loss value is obtained. A training apparatus and an electronic device applying the method are also disclosed. (A sketch of the down-weighting step follows this entry.)
    Type: Application
    Filed: June 2, 2023
    Publication date: December 7, 2023
    Applicant: HON HAI PRECISION INDUSTRY CO., LTD.
    Inventors: CHIN-PIN KUO, TSUNG-WEI LIU
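    Illustrative sketch (not part of the patent abstract): down-weighting the per-pixel loss inside weak-texture regions before averaging; the 0.5 weight and the mask representation are assumptions.
      import numpy as np

      def weighted_loss(loss_image, weak_texture_mask, weight=0.5):
          # Pixels in weak-texture regions contribute less to the first
          # loss value used for adjusting the model parameters.
          w = np.where(weak_texture_mask, weight, 1.0)
          return float(np.sum(loss_image * w) / np.sum(w))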
  • Publication number: 20230386062
    Abstract: A method for training a depth estimation model implemented in an electronic device includes obtaining a first image pair (a first left image and a first right image) from a training data set; inputting the first left image into the depth estimation model and obtaining a disparity map; adding the first left image and the disparity map to obtain a second right image; calculating the mean square error and the cosine similarity of the pixel values of all corresponding pixels in the first right image and the second right image; calculating the mean values of the mean square error and the cosine similarity to obtain a first mean value of the mean square error and a second mean value of the cosine similarity; adding the first mean value and the second mean value to obtain a loss value of the depth estimation model; and iteratively training the depth estimation model according to the loss value. (A sketch of the loss computation follows this entry.)
    Type: Application
    Filed: September 28, 2022
    Publication date: November 30, 2023
    Inventors: YU-HSUAN CHIEN, CHIN-PIN KUO
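    Illustrative sketch (not part of the patent abstract): combining the mean squared error with a mean cosine similarity of the two right images, as the abstract describes; treating each image row as a vector for the cosine term is an assumption.
      import numpy as np

      def depth_loss(first_right, second_right):
          a = first_right.astype(float).reshape(first_right.shape[0], -1)
          b = second_right.astype(float).reshape(second_right.shape[0], -1)
          # First mean value: mean of the per-pixel squared errors.
          mse_mean = np.mean((a - b) ** 2)
          # Second mean value: mean cosine similarity, computed row-wise.
          cos = np.sum(a * b, axis=1) / (
              np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8)
          return mse_mean + np.mean(cos)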
  • Publication number: 20230386222
    Abstract: A method for detecting three-dimensional (3D) objects in a roadway is applied in an electronic device. The device inputs training images into a semantic segmentation model, performs convolution operations and pooling operations on the training images, and obtains feature maps. The electronic device performs up-sampling operations on the feature maps to obtain first images, classifies pixels of the first images, calculates and optimizes a classification loss, and obtains a trained semantic segmentation model. The device inputs the detection images into the trained semantic segmentation model, determines object models of the objects, point cloud data, and distances from the depth camera to the object models, determines rotation angles of the object models according to the point cloud data and the object models, and determines positions of the object models in 3D space according to the distances, the rotation angles, and the positions of the objects.
    Type: Application
    Filed: August 25, 2022
    Publication date: November 30, 2023
    Inventors: CHIEH LEE, CHIN-PIN KUO
  • Publication number: 20230386221
    Abstract: A method for detecting road conditions applied in an electronic device obtains images of the scene in front of a vehicle and inputs the images into a trained semantic segmentation model. The electronic device inputs the images into a backbone network for feature extraction and obtains a plurality of feature maps, inputs the feature maps into the head network, processes the feature maps with a first segmentation network of the head network, and outputs a first recognition result. The electronic device further processes the feature maps with a second segmentation network of the head network, outputs a second recognition result, and determines whether the vehicle can continue to drive safely according to the first recognition result and the second recognition result.
    Type: Application
    Filed: June 22, 2022
    Publication date: November 30, 2023
    Inventors: SHIH-CHAO CHIEN, CHIN-PIN KUO
  • Publication number: 20230386230
    Abstract: A method for detection of three-dimensional (3D) objects on or around a roadway by machine learning, applied in an electronic device, obtains images of the road, inputs the images into a trained object detection model, and determines categories of the objects in the images, two-dimensional (2D) bounding boxes of the objects, and parallax (rotation) angles of the objects. The electronic device determines object models and 3D bounding boxes of the object models and determines the distance from the camera to the object models according to the size of the 2D bounding boxes, image information of the detection images, and the focal length of the camera. The positions of the object models in 3D space can be determined according to the rotation angles, the distance, and the 3D bounding boxes, and the positions of the object models are taken as the positions of the objects in 3D space. (A sketch of the distance step follows this entry.)
    Type: Application
    Filed: June 30, 2022
    Publication date: November 30, 2023
    Inventors: CHIH-TE LU, CHIEH LEE, CHIN-PIN KUO
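    Illustrative sketch (not part of the patent abstract): a pinhole-camera estimate of the camera-to-object distance from the 2D bounding-box size and the focal length; using the object model's height rather than its width is an assumption.
      def object_distance(focal_length_px, model_height_m, bbox_height_px):
          # Similar triangles: distance = focal length x real height / pixel height.
          return focal_length_px * model_height_m / bbox_height_px

      # Example: a 1.5 m tall object spanning 120 px under a 1000 px focal
      # length sits roughly 12.5 m from the camera.
      print(object_distance(1000.0, 1.5, 120.0))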
  • Publication number: 20230386178
    Abstract: An image recognition method applied to an electronic device is provided. The method includes constructing a first semantic segmentation network. In response to the initial labeled result of one of a plurality of initial labeled images not matching a preset labeled result, a target image corresponding to that initial labeled image and a target labeled result of the target image are obtained. A second semantic segmentation network is obtained by training the first semantic segmentation network based on a plurality of the target images and the target labeled result of each target image, and a labeled result of an image to be recognized is obtained by inputting the image to be recognized into the second semantic segmentation network.
    Type: Application
    Filed: June 30, 2022
    Publication date: November 30, 2023
    Inventors: CHIEH LEE, CHIN-PIN KUO
  • Publication number: 20230386231
    Abstract: A method for detecting three-dimensional (3D) objects in relation to autonomous driving is applied in an electronic device. The device obtains detection images and depth images, and inputs the detection images into a trained object detection model to determine categories of objects in the detection images and two-dimensional (2D) bounding boxes of the objects. The device determines object models of the objects and 3D bounding boxes of the object models according to the object categories, and calculates point cloud data of the selected objects and distances from the depth camera to the object models. The device determines angles of rotation of the object models according to the object models of the objects and the point cloud data, and can determine the respective positions of the objects in 3D space according to the distances from the depth camera to the object models, the rotation angles, and the 3D bounding boxes.
    Type: Application
    Filed: August 25, 2022
    Publication date: November 30, 2023
    Inventors: CHIEH LEE, CHIH-TE LU, CHIN-PIN KUO
  • Publication number: 20230384235
    Abstract: A method for detecting defects in a product, implemented in an electronic device, obtains an image of the product to be detected, obtains a reconstructed image by inputting the image into a pre-trained autoencoder, generates a difference image from the image and the reconstructed image, obtains a number of feature absolute values by performing clustering processing on the difference image, generates a target image according to the feature absolute values, the difference image, and a preset value, and determines a defect detection result by detecting the target image for defects. (A sketch of the thresholding step follows this entry.)
    Type: Application
    Filed: March 27, 2023
    Publication date: November 30, 2023
    Inventors: CHUNG-YU WU, CHIN-PIN KUO
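    Illustrative sketch (not part of the patent abstract): localising defects by clustering the absolute reconstruction errors and thresholding them; the tiny 1-D k-means, the default preset value, and the way the threshold is derived from the cluster centres are all assumptions.
      import numpy as np

      def defect_map(image, reconstructed, preset_value=0.5, n_clusters=2, iters=20):
          # Difference image of absolute reconstruction errors.
          diff = np.abs(image.astype(float) - reconstructed.astype(float))
          values = diff.ravel()
          # Minimal 1-D k-means over the absolute difference values.
          centers = np.linspace(values.min(), values.max(), n_clusters)
          for _ in range(iters):
              labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
              for k in range(n_clusters):
                  if np.any(labels == k):
                      centers[k] = values[labels == k].mean()
          # Flag pixels whose error exceeds a preset fraction of the
          # highest cluster centre as candidate defects (target image).
          return diff > preset_value * centers.max()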
  • Publication number: 20230386063
    Abstract: A method and system for generating depth from monocular images acquires multiple sets of binocular images to build a dataset containing instance segmentation labels; trains an autoencoder network using the dataset with the instance segmentation labels to obtain a trained autoencoder network; and acquires a monocular image, which is input into the trained autoencoder network to obtain a first disparity map, the first disparity map being converted to obtain a depth image corresponding to the monocular image. The method combines binocular images with instance segmentation images as training data for the autoencoder network, so that monocular images can simply be input into the autoencoder network to output a disparity map. Depth estimation for monocular images is achieved by converting the disparity map to a depth image corresponding to the monocular image. An electronic device and a non-transitory storage medium are also disclosed. (A sketch of the conversion follows this entry.)
    Type: Application
    Filed: January 13, 2023
    Publication date: November 30, 2023
    Inventors: JUNG-HAO YANG, CHIH-TE LU, CHIN-PIN KUO
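    Illustrative sketch (not part of the patent abstract): the standard stereo relation used to convert a disparity map into a depth image; the focal length and baseline are camera-dependent inputs, not values from the patent.
      import numpy as np

      def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
          # depth = focal length x baseline / disparity, clamped to avoid
          # division by zero where the disparity vanishes.
          return focal_length_px * baseline_m / np.clip(disparity, eps, None)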
  • Publication number: 20230386055
    Abstract: An image feature matching method is provided by the present disclosure. The method includes determining a first weak texture area of a first image and a second weak texture area of a second image based on an edge detection algorithm. First feature points of the first weak texture area and second feature points of the second weak texture area are extracted. The first feature points and the second feature points are matched by determining a target point for each of the first feature points from among the second feature points. Once a position difference value between each first feature point and the corresponding target point is determined, a matching point for each first feature point is determined according to that position difference value.
    Type: Application
    Filed: July 4, 2022
    Publication date: November 30, 2023
    Inventors: WAN-JHEN LEE, CHIN-PIN KUO
  • Publication number: 20230322216
    Abstract: A method for preventing vehicle collision applied in an electronic device obtains a first image in the driving direction of a vehicle using the method (the method vehicle), detects the driving routes of other vehicles in the first image, and takes one of the other vehicles in the first image as a target vehicle when it satisfies a first condition. The first condition comprises a vehicle being in the same lane as the method vehicle and having an opposing driving direction. The electronic device further detects whether a detectable distance between the method vehicle and the target vehicle is less than a preset distance, and generates a warning or a control command when the detectable distance is less than the preset distance. (A sketch of the selection and warning logic follows this entry.)
    Type: Application
    Filed: November 16, 2022
    Publication date: October 12, 2023
    Inventors: WAN-JHEN LEE, CHIN-PIN KUO
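    Illustrative sketch (not part of the patent abstract): selecting the target vehicle by the first condition and raising a warning on the distance test; the dictionary fields and the 30 m preset distance are assumptions.
      def select_target_vehicle(detected_vehicles, own_lane_id):
          # First condition: same lane as the method vehicle and an
          # opposing driving direction.
          for vehicle in detected_vehicles:
              if vehicle["lane_id"] == own_lane_id and vehicle["heading"] == "oncoming":
                  return vehicle
          return None

      def needs_warning(target_vehicle, preset_distance_m=30.0):
          # Warn (or issue a control command) when the detectable distance
          # to the target vehicle falls below the preset distance.
          return target_vehicle is not None and target_vehicle["distance_m"] < preset_distance_m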