Patents by Inventor Po-Chuan CHO
Po-Chuan CHO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Scene reconstructing system, scene reconstructing method and non-transitory computer-readable medium
Patent number: 10915781
Abstract: A scene reconstructing system, a scene reconstructing method, and a non-transitory computer-readable medium are provided in this disclosure. The scene reconstructing system includes a first electronic device and a second electronic device. The first electronic device includes a first camera unit, a first processor, and a first communication unit. The first processor is configured to recognize at least a first object from a first image in order to construct a first map. The second electronic device includes a second camera unit, a second processor, and a second communication unit. The second processor is configured to recognize at least a second object from a second image in order to construct a second map, and to calculate a plurality of confidence values corresponding to the second map. The second communication unit is configured to transmit location information to the first communication unit according to the plurality of confidence values.
Type: Grant
Filed: February 27, 2019
Date of Patent: February 9, 2021
Assignee: HTC Corporation
Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Yen-Jung Lee, Hung-Yi Yang
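The abstract does not specify how the confidence values gate the transmission. As a minimal illustration only (function names, data shapes, and the threshold are hypothetical, not the patented implementation), the second device might share only the map points whose confidence clears a threshold:

```python
import numpy as np

def share_landmarks(landmarks, confidences, threshold=0.8):
    """Select map landmarks whose confidence exceeds a threshold.

    landmarks  : (N, 3) array of 3D positions in the second device's map.
    confidences: (N,) array of per-landmark confidence values.
    Returns only the high-confidence entries, i.e. the location
    information worth transmitting to the first device.
    """
    keep = confidences >= threshold
    return landmarks[keep], confidences[keep]

# Toy map: four landmarks with varying confidence.
pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [3.0, 1.5, 1.5]])
conf = np.array([0.95, 0.40, 0.85, 0.10])

shared, shared_conf = share_landmarks(pts, conf)
print(shared.shape)  # (2, 3): only the two confident landmarks are sent
```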
Patent number: 10706547
Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network (CNN) unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time point. A current image of the video data corresponding to a current time point next to the previous time point is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
Type: Grant
Filed: May 9, 2018
Date of Patent: July 7, 2020
Assignee: HTC Corporation
Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
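The core idea here is a recurrence over video frames: the unit processing the current frame receives weight data produced while processing the previous frame. A minimal numpy sketch, assuming a single-channel convolution and a trivial stand-in weight-update rule (the real weight-generation mechanism is not given in the abstract):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def segment_video(frames, initial_kernel):
    """Each frame's CNN unit receives weight data produced while
    processing the previous frame (here: a trivial update rule)."""
    kernel = initial_kernel
    results = []
    for frame in frames:
        features = conv2d(frame, kernel)
        # Crude segmentation: threshold features at their mean.
        mask = (features > features.mean()).astype(np.uint8)
        results.append(mask)
        # Hand updated weight data to the next frame's unit.
        kernel = kernel + 0.01 * features.mean()
    return results

rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) for _ in range(3)]
masks = segment_video(frames, np.ones((3, 3)) / 9.0)
print(len(masks), masks[0].shape)  # 3 (6, 6)
```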
Patent number: 10657415
Abstract: An image correspondence determining method is provided. The method includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps, and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region, and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
Type: Grant
Filed: May 10, 2018
Date of Patent: May 19, 2020
Assignee: HTC Corporation
Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
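Structurally, the pipeline above is: concatenate the images, extract feature maps, split them into two halves, pull a patch per region from each half, and score the patch pair. A toy numpy sketch of that flow, with random linear filters standing in for the learned feature extractor and cosine similarity standing in for the learned metric (both substitutions are assumptions, not the patented networks):

```python
import numpy as np

def patch_similarity(img_a, img_b, region_a, region_b, n_maps=4, seed=0):
    """Concatenate two images, derive feature maps, split them, and
    compare per-region patches with a cosine similarity score."""
    # Channel-wise concatenation gives the features global context.
    stacked = np.stack([img_a, img_b])                  # (2, H, W)
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((2 * n_maps, 2))      # stand-in for learned filters
    # Per-pixel linear features over both channels: (2*n_maps, H, W).
    feats = np.tensordot(filters, stacked, axes=([1], [0]))
    maps_a, maps_b = feats[:n_maps], feats[n_maps:]     # first/second feature maps
    r0, r1, c0, c1 = region_a
    s0, s1, t0, t1 = region_b
    patch_a = maps_a[:, r0:r1, c0:c1].ravel()
    patch_b = maps_b[:, s0:s1, t0:t1].ravel()
    # Cosine score in [-1, 1] between the concatenated patch vectors.
    return float(np.dot(patch_a, patch_b) /
                 (np.linalg.norm(patch_a) * np.linalg.norm(patch_b)))

rng_img = np.random.default_rng(1)
img = rng_img.random((16, 16))
s = patch_similarity(img, img.copy(), (2, 6, 2, 6), (2, 6, 2, 6))
print(-1.0 <= s <= 1.0)  # True
```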
Patent number: 10628919
Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
Type: Grant
Filed: May 17, 2018
Date of Patent: April 21, 2020
Assignee: HTC Corporation
Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
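The resolution bookkeeping in this coarse-to-fine scheme is easy to get wrong, so here is a minimal numpy sketch of one stage, assuming average-pool down-sampling, nearest-neighbour up-sampling, and a 1x1 mix as a stand-in for the actual convolution (all simplifications of the abstract, not the patented network):

```python
import numpy as np

def downsample(img, factor):
    """Average-pool down-sampling by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour up-sampling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

rng = np.random.default_rng(0)
image = rng.random((16, 16))
prev_cnn = rng.random((4, 4))             # previous CNN data, first resolution: 4x4

pyramid = [downsample(image, f) for f in (4, 2, 1)]   # 4x4, 8x8, 16x16
current = pyramid[1]                      # current down-sampled image, 8x8
prev_up = upsample(prev_cnn, 2)           # previous data up-sampled to 8x8

# "Convolution" stand-in: a 1x1 mix of the two same-resolution inputs.
segmentation = (0.5 * current + 0.5 * prev_up > 0.5).astype(np.uint8)
print(segmentation.shape)  # (8, 8)
```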
SCENE RECONSTRUCTING SYSTEM, SCENE RECONSTRUCTING METHOD AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Publication number: 20190325251
Abstract: A scene reconstructing system, a scene reconstructing method, and a non-transitory computer-readable medium are provided in this disclosure. The scene reconstructing system includes a first electronic device and a second electronic device. The first electronic device includes a first camera unit, a first processor, and a first communication unit. The first processor is configured to recognize at least a first object from a first image in order to construct a first map. The second electronic device includes a second camera unit, a second processor, and a second communication unit. The second processor is configured to recognize at least a second object from a second image in order to construct a second map, and to calculate a plurality of confidence values corresponding to the second map. The second communication unit is configured to transmit location information to the first communication unit according to the plurality of confidence values.
Type: Application
Filed: February 27, 2019
Publication date: October 24, 2019
Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Yen-Jung LEE, Hung-Yi YANG
Patent number: 10402943
Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module, and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of the corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
Type: Grant
Filed: September 26, 2017
Date of Patent: September 3, 2019
Assignee: HTC Corporation
Inventors: Hung-Yi Yang, Cheng-Hsien Lin, Po-Chuan Cho
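The module chain above (down-sample, correct per branch, up-sample to a common resolution, concatenate) can be sketched in a few lines of numpy. Per-branch gamma correction stands in for the unspecified correction model, and the scale factors and gamma values are hypothetical:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool down-sampling by an integer factor."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour up-sampling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def enhance(image, gammas=(0.8, 1.0, 1.2)):
    """Correct the image at several resolutions, then up-sample the
    corrected images to a common resolution and concatenate them."""
    factors = (1, 2, 4)
    corrected = [downsample(image, f) ** g          # per-branch gamma correction
                 for f, g in zip(factors, gammas)]
    upsampled = [upsample(c, f) for c, f in zip(corrected, factors)]
    return np.stack(upsampled)                      # concatenated output channels

rng = np.random.default_rng(0)
out = enhance(rng.random((16, 16)))
print(out.shape)  # (3, 16, 16)
```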
Publication number: 20190066265
Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
Type: Application
Filed: May 17, 2018
Publication date: February 28, 2019
Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
Publication number: 20180350077
Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network (CNN) unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time point. A current image of the video data corresponding to a current time point next to the previous time point is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
Type: Application
Filed: May 9, 2018
Publication date: December 6, 2018
Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
Publication number: 20180349737
Abstract: An image correspondence determining method is provided. The method includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps, and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region, and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
Type: Application
Filed: May 10, 2018
Publication date: December 6, 2018
Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
Publication number: 20180114294
Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module, and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of the corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
Type: Application
Filed: September 26, 2017
Publication date: April 26, 2018
Inventors: Hung-Yi YANG, Cheng-Hsien LIN, Po-Chuan CHO