Patents by Inventor Hung-Yi Yang

Hung-Yi Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10482616
    Abstract: A 3D model reconstruction method includes: providing, by one or more laser emitters, laser beams; capturing, by a depth camera on an electronic device, depth data of a target object as the electronic device moves around the target object; detecting, by one or more light sensors on the electronic device, the laser beams emitted by the one or more laser emitters to obtain a camera pose initial value of the depth camera accordingly; and performing, by a processing circuit, a 3D reconstruction of the target object using the depth data based on the camera pose initial value to output a 3D model of the target object.
    Type: Grant
    Filed: April 2, 2018
    Date of Patent: November 19, 2019
    Assignee: HTC Corporation
    Inventors: Jui-Hsuan Chang, Yu-Cheng Hsu, Cheng-Yuan Shih, Sheng-Yen Lo, Hung-Yi Yang
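    Illustrative sketch (not the patented implementation): one way to use a camera pose initial value, such as one obtained from laser-based tracking, is to back-project each depth frame into world space before fusing it into the model. The intrinsics and array names below are assumptions made for the example.

        import numpy as np

        def depth_to_world_points(depth, fx, fy, cx, cy, pose):
            """depth: (H, W) array in metres; pose: 4x4 camera-to-world transform (initial value)."""
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = depth
            x = (u - cx) * z / fx                      # back-project through a pinhole model
            y = (v - cy) * z / fy
            pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
            pts_world = (pose @ pts_cam.T).T[:, :3]    # apply the camera pose initial value
            return pts_world[depth.reshape(-1) > 0]    # keep valid depth samples only

        # A full pipeline would refine this pose (e.g. by ICP) and fuse the points from
        # successive frames into a single 3D model of the target object.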
  • Publication number: 20190325251
    Abstract: A scene reconstructing system, a scene reconstructing method and a non-transitory computer-readable medium are provided in this disclosure. The scene reconstructing system includes a first electronic device and a second electronic device. The first electronic device includes a first camera unit, a first processor, and a first communication unit. The first processor is configured for recognizing at least a first object from a first image to construct a first map. The second electronic device includes a second camera unit, a second processor, and a second communication unit. The second processor is configured for recognizing at least a second object from a second image to construct a second map and for calculating a plurality of confidence values corresponding to the second map. The second communication unit is configured for transmitting location information to the first communication unit according to the plurality of confidence values.
    Type: Application
    Filed: February 27, 2019
    Publication date: October 24, 2019
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Yen-Jung LEE, Hung-Yi YANG
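    Illustrative sketch (not the patented implementation): one plausible reading of the confidence-based transmission is that the second device forwards only map entries whose recognition confidence exceeds a threshold. The field names and threshold below are assumptions.

        from dataclasses import dataclass

        @dataclass
        class MapEntry:
            label: str            # recognized object label
            position: tuple       # (x, y, z) in the second device's map
            confidence: float     # recognition confidence in [0, 1]

        def select_locations_to_send(second_map, threshold=0.8):
            """Return the location information worth transmitting to the first device."""
            return [(e.label, e.position) for e in second_map if e.confidence >= threshold]

        second_map = [MapEntry("chair", (1.0, 0.0, 2.5), 0.93),
                      MapEntry("table", (0.2, 0.0, 3.1), 0.41)]
        print(select_locations_to_send(second_map))   # only the confident "chair" entry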
  • Patent number: 10402943
    Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of the corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: September 3, 2019
    Assignee: HTC Corporation
    Inventors: Hung-Yi Yang, Cheng-Hsien Lin, Po-Chuan Cho
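    Illustrative sketch (not the patented implementation): a toy multi-scale pipeline in the spirit of the abstract, using average pooling for down-sampling, a per-scale gamma correction as a stand-in for the correction model, and nearest-neighbour up-sampling before concatenation. The number of scales and the gamma values are assumptions.

        import numpy as np

        def downsample2x(img):
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
            return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

        def upsample_to(img, shape):
            return np.kron(img, np.ones((shape[0] // img.shape[0], shape[1] // img.shape[1])))

        def enhance(img, gammas=(1.0, 0.8, 0.6)):
            scales = [img]
            for _ in gammas[1:]:
                scales.append(downsample2x(scales[-1]))               # down-sampling module
            corrected = [s ** g for s, g in zip(scales, gammas)]       # correction modules
            upsampled = [upsample_to(c, img.shape) for c in corrected] # up-sampling module
            return np.stack(upsampled, axis=-1)                        # concatenating module

        print(enhance(np.random.rand(64, 64)).shape)   # (64, 64, 3): one channel per scale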
  • Publication number: 20190197770
    Abstract: A 3D (three-dimensional) model reconstruction method that includes the steps outlined below is provided. Depth data of a target object corresponding to a current time spot is received. Camera pose data of a depth camera corresponding to the current time spot is received. Posed 3D point clouds corresponding to the current time spot are generated according to the depth data and the camera pose data. Posed estimated point clouds corresponding to the current time spot are generated according to the camera pose data corresponding to the current time spot and a previous 3D model corresponding to a previous time spot. A current 3D model of the target object is generated according to the posed 3D point clouds based on a difference between the posed 3D point clouds and the posed estimated point clouds.
    Type: Application
    Filed: December 24, 2018
    Publication date: June 27, 2019
    Inventors: Jui-Hsuan CHANG, Cheng-Yuan SHIH, Hung-Yi YANG
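    Illustrative sketch (not the patented implementation): one simple way to use the difference between the posed point clouds and the posed estimated point clouds is to keep only the observed points that deviate from the prediction and add them to the model. The nearest-neighbour test and tolerance below are assumptions.

        import numpy as np

        def update_model(prev_model, posed_points, estimated_points, tol=0.01):
            """All inputs are (N, 3) arrays of points in world coordinates."""
            if estimated_points.size == 0:
                return posed_points
            # Distance from each observed point to its nearest estimated point.
            d = np.linalg.norm(posed_points[:, None, :] - estimated_points[None, :, :], axis=-1)
            new_points = posed_points[d.min(axis=1) > tol]   # points that differ from the prediction
            return np.vstack([prev_model, new_points]) if prev_model.size else new_points

        model = update_model(np.empty((0, 3)), np.random.rand(50, 3), np.random.rand(50, 3))
        print(model.shape)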
  • Publication number: 20190197767
    Abstract: A surface extraction method that includes the steps outlined below is provided. (A) Raw input depth data of 3D child nodes is received, including positions and distances of first 3D child nodes and positions of second 3D child nodes. (B) The neighboring 3D child nodes are grouped into 3D parent nodes. (C) For each 3D parent node, propagated distances of the second 3D child nodes are generated. (D) The 3D parent nodes are treated as the 3D child nodes to perform steps (B)-(D) to generate a tree including a plurality of levels of nodes, such that a surface extraction processing is performed on the tree to extract at least one surface of the scene.
    Type: Application
    Filed: December 25, 2018
    Publication date: June 27, 2019
    Inventors: Yi-Chen LIN, Hung-Yi YANG
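    Illustrative sketch (not the patented implementation): a single level of distance propagation on an integer grid, where sibling child nodes are grouped under a parent cell and nodes without a measured distance inherit one from their siblings. The 2x2x2 grouping rule and mean-based propagation are assumptions; repeating the step on the parent level would build the multi-level tree used for surface extraction.

        import numpy as np

        def propagate_level(positions, distances, cell=2):
            """positions: (N, 3) integer grid coordinates; distances: (N,) with np.nan meaning unknown."""
            parents = positions // cell                       # group sibling nodes by parent cell
            out = distances.copy()
            for key in {tuple(p) for p in parents}:
                mask = np.all(parents == key, axis=1)
                known = distances[mask]
                known = known[~np.isnan(known)]
                if known.size:                                # propagate to unknown siblings
                    out[mask] = np.where(np.isnan(out[mask]), known.mean(), out[mask])
            return parents, out

        pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]])
        dist = np.array([0.5, np.nan, np.nan, 0.3])
        print(propagate_level(pos, dist)[1])   # the unknown nodes inherit the mean, 0.4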
  • Publication number: 20190066265
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received, and the previous CNN data is up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
    Type: Application
    Filed: May 17, 2018
    Publication date: February 28, 2019
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
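    Illustrative sketch (not the patented implementation): the coarse-to-fine data flow described in the abstract above, with a toy blend standing in for the convolution at each resolution. The number of scales and the mixing rule are assumptions.

        import numpy as np

        def downsample2x(img):
            return img[::2, ::2]

        def upsample2x(x):
            return np.kron(x, np.ones((2, 2)))

        def toy_cnn(image, prev):                  # stand-in for one CNN stage
            return 0.5 * image + 0.5 * prev        # "convolution" over the two inputs

        def coarse_to_fine_segmentation(image, scales=3):
            pyramid = [image]
            for _ in range(scales - 1):
                pyramid.append(downsample2x(pyramid[-1]))    # down-sampled images
            prev = np.zeros_like(pyramid[-1])                # coarsest stage has no prior
            for img in reversed(pyramid):                    # finer and finer resolutions
                if prev.shape != img.shape:
                    prev = upsample2x(prev)                  # up-sample previous CNN data
                prev = toy_cnn(img, prev)                    # current segmentation result
            return prev

        print(coarse_to_fine_segmentation(np.random.rand(32, 32)).shape)   # (32, 32)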
  • Publication number: 20180350077
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time spot. A current image of the video data corresponding to a current time spot next to the previous time spot is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
    Type: Application
    Filed: May 9, 2018
    Publication date: December 6, 2018
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
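    Illustrative sketch (not the patented implementation): passing per-frame CNN weight data along a video sequence, with a single adaptive threshold standing in for the full weight tensor and a toy update rule. All numbers below are assumptions.

        import numpy as np

        def segment_frame(frame, prev_weight):
            """Return (segmentation mask, weight data handed to the next frame's unit)."""
            weight = 0.9 * prev_weight + 0.1 * frame.mean()   # adapt the weights to this frame
            mask = frame > weight                             # stand-in for the convolution
            return mask, weight

        frames = [np.random.rand(8, 8) for _ in range(4)]     # a short video clip
        weight = 0.5                                          # initial weight data
        for t, frame in enumerate(frames):
            mask, weight = segment_frame(frame, weight)
            print(f"frame {t}: {mask.sum()} foreground pixels")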
  • Publication number: 20180349737
    Abstract: An image correspondence determining method is provided that includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
    Type: Application
    Filed: May 10, 2018
    Publication date: December 6, 2018
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
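    Illustrative sketch (not the patented implementation): comparing two image regions through feature maps derived from the concatenated images, with simple gradients standing in for the learned features and cosine similarity standing in for the learned metric over the concatenated patches. The region format and the way the maps are split are assumptions.

        import numpy as np

        def extract_features(concat):                   # toy stand-in for the feature extractor
            gy, gx = np.gradient(concat, axis=(0, 1))   # both maps see the concatenated input
            return np.concatenate([gx, gy], axis=-1)

        def patch(fmap, region):                        # region = (row, col, size)
            r, c, s = region
            return fmap[r:r + s, c:c + s].ravel()

        def region_similarity(img1, img2, region1, region2):
            concat = np.stack([img1, img2], axis=-1)    # concatenated image with global information
            feats = extract_features(concat)            # (H, W, 4) feature maps
            first_maps, second_maps = feats[..., :2], feats[..., 2:]
            p1 = patch(first_maps, region1)             # first image patches
            p2 = patch(second_maps, region2)            # second image patches
            # Cosine similarity as a stand-in for the metric computed on the concatenated patches.
            return float(p1 @ p2 / (np.linalg.norm(p1) * np.linalg.norm(p2) + 1e-8))

        img1, img2 = np.random.rand(16, 16), np.random.rand(16, 16)
        print(region_similarity(img1, img2, (4, 4, 4), (4, 4, 4)))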
  • Publication number: 20180300531
    Abstract: A computer-implemented 3D model analysis method includes: projecting, by a processing circuit, multiple sample points in a 3-dimensional model of a scene to one or more 2-dimensional planes in order to obtain one or more 2-dimensional images corresponding to the 3-dimensional model of the scene; performing, by the processing circuit, an object segmentation and classification based on the one or more 2-dimensional images to obtain 2-dimensional semantic information on the one or more 2-dimensional planes; and projecting, by the processing circuit, the 2-dimensional semantic information to the 3-dimensional model of the scene to identify one or more target objects in the scene.
    Type: Application
    Filed: April 12, 2018
    Publication date: October 18, 2018
    Inventors: Jui-Hsuan CHANG, Hung-Yi YANG
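    Illustrative sketch (not the patented implementation): round-tripping semantic labels between a 3D point set and a single orthographic top-down view. The projection, grid resolution, and the toy 2D classifier are assumptions made for the example.

        import numpy as np

        def project_to_plane(points, res=0.1):
            """Orthographic top-down projection of (N, 3) points onto an x-y pixel grid."""
            pix = np.floor(points[:, :2] / res).astype(int)
            return pix - pix.min(axis=0)               # shift into non-negative pixel coordinates

        def segment_2d(pix, points):
            """Toy 2D segmentation: label a pixel 1 if any projected point lies above 1 m."""
            h, w = pix.max(axis=0) + 1
            seg = np.zeros((h, w), dtype=int)
            for (u, v), p in zip(pix, points):
                seg[u, v] = max(seg[u, v], int(p[2] > 1.0))
            return seg

        points = np.random.rand(100, 3) * 2.0          # sample points from a 3D model
        pix = project_to_plane(points)
        seg = segment_2d(pix, points)                  # 2D semantic information
        labels_3d = seg[pix[:, 0], pix[:, 1]]          # project the semantics back to 3D
        print(labels_3d[:10])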
  • Publication number: 20180300950
    Abstract: A 3D model reconstruction method includes: providing, by one or more laser emitters, laser beams; capturing, by a depth camera on an electronic device, depth data of a target object as the electronic device moves around the target object; detecting, by one or more light sensors on the electronic device, the laser beams emitted by the one or more laser emitters to obtain a camera pose initial value of the depth camera accordingly; and performing, by a processing circuit, a 3D reconstruction of the target object using the depth data based on the camera pose initial value to output a 3D model of the target object.
    Type: Application
    Filed: April 2, 2018
    Publication date: October 18, 2018
    Inventors: Jui-Hsuan CHANG, Yu-Cheng HSU, Cheng-Yuan SHIH, Sheng-Yen LO, Hung-Yi YANG
  • Publication number: 20180114294
    Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of the corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
    Type: Application
    Filed: September 26, 2017
    Publication date: April 26, 2018
    Inventors: Hung-Yi YANG, Cheng-Hsien LIN, Po-Chuan CHO
  • Publication number: 20180101990
    Abstract: The present disclosure relates to a system for providing a simulated environment and a method thereof. The system comprises a first wearable device, a second wearable device and a computing device. The first wearable device is configured to output a scenario of the simulated environment and to output a first audio. The second wearable device is configured to collect an environmental sound around the second wearable device and to send out the sound. The computing device is configured to merge the sound into the first audio according to an index and to send the merged first audio to the first wearable device, wherein the index is determined by a relative distance and a relative direction between coordinates assigned to the first wearable device and the second wearable device in the simulated environment.
    Type: Application
    Filed: September 27, 2017
    Publication date: April 12, 2018
    Inventors: Hung-Yi YANG, Iok-Kan CHOI
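    Illustrative sketch (not the patented implementation): merging a captured environmental sound into a headset's audio with a gain driven by the relative distance and a stereo pan driven by the relative direction between the two devices' assigned coordinates. The attenuation law and pan rule are assumptions.

        import numpy as np

        def mix_audio(first_audio, env_sound, pos_first, pos_second):
            """first_audio, env_sound: (N, 2) stereo buffers; positions: (x, y) coordinates in metres."""
            offset = np.asarray(pos_second, float) - np.asarray(pos_first, float)
            distance = np.linalg.norm(offset)
            gain = 1.0 / (1.0 + distance)                    # farther sources are quieter
            pan = (np.sin(np.arctan2(offset[1], offset[0])) + 1.0) / 2.0   # 0 = left, 1 = right
            weights = np.array([1.0 - pan, pan])
            return first_audio + gain * env_sound * weights  # merged first audio

        print(mix_audio(np.zeros((4, 2)), np.ones((4, 2)), (0.0, 0.0), (2.0, 1.0)))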
  • Patent number: 8644788
    Abstract: A signal receiver used in a satellite communication system is provided. The signal receiver comprises a first type code generator, a second type code generator, a composite code generator, a correlation module and a determining module. The first type code generator generates a first type code corresponding to a first type signal. The second type code generator generates a second type code corresponding to a second type signal and having a code length N times longer than that of the first type code. The composite code generator generates a composite code by superimposing N successive first type codes on the second type code. The correlation module correlates the composite code with a cell of a received signal to generate correlation results. The determining module determines a type of the received signal according to the correlation results of the composite code with the received signal.
    Type: Grant
    Filed: May 2, 2012
    Date of Patent: February 4, 2014
    Assignee: Skytraq Technology, Inc.
    Inventor: Hung-Yi Yang
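    Illustrative sketch (not the patented implementation): a toy composite code built by superimposing N repetitions of a short code on a longer code, with one correlation used for detection and per-segment correlations used to guess the signal type. The code lengths, the additive superimposition, and the decision rule are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        L, N = 32, 4
        first_code = rng.choice([-1, 1], size=L)          # first type code, length L
        second_code = rng.choice([-1, 1], size=N * L)     # second type code, length N*L
        composite = np.tile(first_code, N) + second_code  # superimposed composite code

        def classify(received):
            detect = received @ composite                 # single correlation for detection
            segs = received.reshape(N, L) @ first_code    # per-segment correlation results
            is_first = np.all(np.sign(segs) == np.sign(segs[0])) and abs(segs).min() > L / 2
            return detect, "first type" if is_first else "second type"

        print(classify(np.tile(first_code, N) + 0.1 * rng.normal(size=N * L)))
        print(classify(second_code + 0.1 * rng.normal(size=N * L)))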
  • Publication number: 20130295871
    Abstract: A signal receiver used in a satellite communication system is provided. The signal receiver comprises a first type code generator, a second type code generator, a composite code generator, a correlation module and a determining module. The first type code generator generates a first type code corresponding to a first type signal. The second type code generator generates a second type code corresponding to a second type signal and having a code length N times longer than that of the first type code. The composite code generator generates a composite code by superimposing N successive first type codes on the second type code. The correlation module correlates the composite code with a cell of a received signal to generate correlation results. The determining module determines a type of the received signal according to the correlation results of the composite code with the received signal.
    Type: Application
    Filed: May 2, 2012
    Publication date: November 7, 2013
    Applicant: SKYTRAQ TECHNOLOGY, INC.
    Inventor: Hung-Yi YANG
  • Publication number: 20050251477
    Abstract: An on-line billing system and method capable of integrating the electronic billing and payment process comprises the steps of: forwarding the file of check sheet details to the HUBWEB interface, generating the check sheet file, issuing the statement of account, inquiring/deleting/printing the corresponding statement of account/inquiry invoice, restoring the statement of account back to the ERP, restoring confirmed check sheet disputes by batch to the HUBWEB interface, generating the file of statement of account details and generating the invoice file. By consolidating the check sheet details from the ERP and forwarding them to HUBWEB, the system further activates HUBWEB to generate a check sheet file for procedures such as inquiring the check sheet and marking, confirming and deleting check sheet disputes. Meanwhile, in the process of marking, confirming and deleting a check sheet dispute, the system will initiate electronic mails to notify the relevant personnel.
    Type: Application
    Filed: May 7, 2004
    Publication date: November 10, 2005
    Inventors: Chia-Youn Chang, Li-Hua Lin, Tien-Chieh Wu, Hsiao-Huei Chao, Le-Yun Chen, Hung-Yi Yang