Patents by Inventor Hung-Yi Yang

Hung-Yi Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11620509
    Abstract: A model constructing method for a neural network model applicable to image recognition processing is disclosed. The model constructing method includes the following operation: updating, by a processor, a plurality of connection variables between a plurality of layers of the neural network model, according to a plurality of inputs and a plurality of outputs of the neural network model. The plurality of outputs represent a plurality of image recognition results. The plurality of connection variables represent a plurality of connection intensities between each pair of the plurality of layers.
    Type: Grant
    Filed: May 24, 2019
    Date of Patent: April 4, 2023
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Tung-Ting Yang, Hung-Yi Yang
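The abstract above boils down to iteratively adjusting the inter-layer connection variables (weights) from input/output pairs. Below is a minimal Python sketch of that idea with a toy two-layer network, a squared-error loss, and plain gradient descent; the data shapes, loss, and update rule are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "a plurality of inputs" (flattened images) and
# "a plurality of outputs" (one-hot image recognition results).
X = rng.normal(size=(32, 64))            # 32 images, 64 pixels each
Y = np.eye(4)[rng.integers(0, 4, 32)]    # 4 recognition classes

# "Connection variables": weight matrices between consecutive layers.
W1 = rng.normal(scale=0.1, size=(64, 16))
W2 = rng.normal(scale=0.1, size=(16, 4))

lr = 0.1
for _ in range(200):
    # Forward pass through the layers.
    H = np.tanh(X @ W1)
    P = H @ W2
    # Squared-error gradient between outputs and recognition targets.
    G = (P - Y) / len(X)
    # Update each connection variable according to the inputs and
    # outputs, i.e. plain gradient descent on the loss.
    dW2 = H.T @ G
    dW1 = X.T @ ((G @ W2.T) * (1 - H**2))
    W2 -= lr * dW2
    W1 -= lr * dW1
```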
  • Patent number: 11615549
    Abstract: An image processing method includes the following steps: dividing an object block from a two-dimensional image; identifying at least one view hotspot in a viewing field corresponding to a pupil gaze direction; receiving the view hotspot and an indicator signal, wherein the indicator signal is used to mark the object block; and generating a mask block that corresponds to the object block according to the view hotspot, wherein the indicator signal determines the label of the mask block.
    Type: Grant
    Filed: October 31, 2019
    Date of Patent: March 28, 2023
    Assignee: HTC Corporation
    Inventors: Tung-Ting Yang, Chun-Li Wang, Cheng-Hsien Lin, Hung-Yi Yang
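As a rough illustration of the claimed flow, the sketch below picks the object block under a gaze hotspot, builds its mask block, and attaches the indicator signal's label. The array-of-block-ids representation and the function name are assumptions made for the example only.

```python
import numpy as np

def label_mask_from_gaze(blocks, hotspot, indicator_label):
    """blocks: HxW int array of object-block ids in the 2-D image;
    hotspot: (row, col) gaze point; indicator_label: the mask's label."""
    block_id = blocks[hotspot]        # the block the user is looking at
    mask = (blocks == block_id)       # mask block for that object block
    return mask, indicator_label

blocks = np.zeros((4, 6), dtype=int)
blocks[1:3, 2:5] = 7                  # one object block with id 7
mask, label = label_mask_from_gaze(blocks, (2, 3), "cat")
print(label, mask.sum())              # "cat", 6 masked pixels
```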
  • Patent number: 11321891
    Abstract: The disclosure provides a method for generating an action according to an audio signal, and an electronic device. The method includes: receiving an audio signal and extracting a high-level audio feature therefrom; extracting a latent audio feature from the high-level audio feature; in response to determining that the audio signal corresponds to a beat, obtaining a joint angle distribution matrix based on the latent audio feature; in response to determining that the audio signal corresponds to music, obtaining a plurality of designated joint angles corresponding to a plurality of joint points based on the joint angle distribution matrix; and adjusting a joint angle of each of the joint points of an avatar according to the designated joint angles.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: May 3, 2022
    Assignee: HTC Corporation
    Inventors: Tung-Ting Yang, Chun-Li Wang, Yao-Chen Kuo, Hung-Yi Yang
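The sketch below mimics the claimed pipeline end to end: a high-level feature (per-frame RMS energy), a latent feature (a fixed random projection standing in for a learned encoder), a crude beat test, and joint angles drawn from a per-joint (mean, std) distribution matrix. Every modeling choice here is an assumption; the abstract does not specify these components.

```python
import numpy as np

rng = np.random.default_rng(1)
N_JOINTS = 12

def generate_pose(audio):
    # High-level audio feature: per-frame RMS energy.
    frames = audio[: len(audio) // 512 * 512].reshape(-1, 512)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    # Latent audio feature: a fixed random projection stands in for a
    # learned encoder here.
    proj = rng.normal(size=(len(rms), 2 * N_JOINTS))
    latent = rms @ proj
    # Crude beat test: a sharp rise in energy anywhere in the clip.
    if not (np.diff(rms) > rms.mean()).any():
        return None                   # no beat: leave the avatar as-is
    # Joint angle distribution matrix: one (mean, std) row per joint point.
    dist = latent.reshape(N_JOINTS, 2)
    means, stds = dist[:, 0], np.abs(dist[:, 1])
    # Designated joint angles drawn from the distribution.
    return means + stds * rng.normal(size=N_JOINTS)

quiet = 0.01 * rng.normal(size=8000)
loud = rng.normal(size=8000)
angles = generate_pose(np.concatenate([quiet, loud]))  # 12 angles to apply
```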
  • Publication number: 20210343058
    Abstract: The disclosure provides a method for generating an action according to an audio signal, and an electronic device. The method includes: receiving an audio signal and extracting a high-level audio feature therefrom; extracting a latent audio feature from the high-level audio feature; in response to determining that the audio signal corresponds to a beat, obtaining a joint angle distribution matrix based on the latent audio feature; in response to determining that the audio signal corresponds to music, obtaining a plurality of designated joint angles corresponding to a plurality of joint points based on the joint angle distribution matrix; and adjusting a joint angle of each of the joint points of an avatar according to the designated joint angles.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 4, 2021
    Applicant: HTC Corporation
    Inventors: Tung-Ting Yang, Chun-Li Wang, Yao-Chen Kuo, Hung-Yi Yang
  • Patent number: 11080822
    Abstract: A method, a system and a recording medium for building an environment map are provided. In the method, a plurality of images are captured using a head-mounted display (HMD). Then, a viewing direction and a translation direction of the HMD are calculated, and an importance map corresponding to the captured images is determined according to the viewing direction and the translation direction of the HMD. Finally, the captured images are stitched to generate a panoramic image according to the importance map.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: August 3, 2021
    Assignee: HTC Corporation
    Inventors: Yi-Chen Lin, Hung-Yi Yang
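A toy version of the importance-weighted stitch is sketched below. The scoring rule, which favors captures whose viewing direction aligns with the translation direction, is an assumption about what the importance map rewards; the abstract does not commit to a formula.

```python
import numpy as np

def importance(view_dir, trans_dir):
    """Assumed scoring rule: an image matters more when the HMD looks
    along its direction of travel; score lands in [0, 1]."""
    v = view_dir / np.linalg.norm(view_dir)
    t = trans_dir / np.linalg.norm(trans_dir)
    return 0.5 * (1.0 + float(v @ t))

def stitch(images, weights):
    """Importance-weighted blend of pre-aligned captures."""
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (np.stack(images) * w).sum(axis=0) / w.sum()

imgs = [np.full((2, 3), 10.0), np.full((2, 3), 20.0)]
ws = [importance(np.array([0, 0, 1.0]), np.array([0, 0, 1.0])),
      importance(np.array([1.0, 0, 0]), np.array([0, 0, 1.0]))]
pano = stitch(imgs, ws)               # weighted panoramic blend
```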
  • Patent number: 10962355
    Abstract: A 3D (three-dimensional) model reconstruction method is provided that includes the steps outlined below. Depth data of a target object corresponding to a current time spot is received. Camera pose data of a depth camera corresponding to the current time spot is received. Posed 3D point clouds corresponding to the current time spot are generated according to the depth data and the camera pose data. Posed estimated point clouds corresponding to the current time spot are generated according to the camera pose data corresponding to the current time spot and a previous 3D model corresponding to a previous time spot. A current 3D model of the target object is generated according to the posed 3D point clouds, based on a difference between the posed 3D point clouds and the posed estimated point clouds.
    Type: Grant
    Filed: December 24, 2018
    Date of Patent: March 30, 2021
    Assignee: HTC Corporation
    Inventors: Jui-Hsuan Chang, Cheng-Yuan Shih, Hung-Yi Yang
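The sketch below shows the two point-cloud constructions the abstract names: back-projecting a depth map through a camera pose into a posed 3D point cloud, and updating the model only where the posed points differ from the pose-predicted (estimated) points. Per-pixel correspondence between the two clouds and the distance threshold are simplifying assumptions.

```python
import numpy as np

def depth_to_points(depth, K, pose):
    """Back-project a depth map (HxW) into world space with camera pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z, np.ones_like(z)])
    return (pose @ pts)[:3].T          # posed 3D point cloud

def update_model(posed_pts, estimated_pts, prev_model, tol=0.05):
    """Keep the previous model; add posed points that differ from the
    pose-predicted (estimated) points by more than `tol`."""
    diff = np.linalg.norm(posed_pts - estimated_pts, axis=1)
    return np.vstack([prev_model, posed_pts[diff > tol]])

K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
depth = np.ones((48, 64))
posed = depth_to_points(depth, K, np.eye(4))
estimated = posed.copy()               # perfect prediction in this toy case
model = update_model(posed, estimated, prev_model=np.empty((0, 3)))
```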
  • Patent number: 10957048
    Abstract: An image segmentation method is provided that includes the steps outlined below. A first image corresponding to a first time spot and a second image corresponding to a second time spot are received from a video stream, wherein the second time spot is after the first time spot. Segmentation is performed on the second image by a segmentation neural network to generate a label probability set. Similarity determination is performed on the first image and the second image by a similarity calculation neural network to generate a similarity probability set. The label probability set and the similarity probability set are concatenated by a concatenating unit to generate a concatenated result. Further inference is performed on the concatenated result by a strategic neural network to generate a label mask.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: March 23, 2021
    Assignee: HTC Corporation
    Inventors: Tung-Ting Yang, Chun-Li Wang, Cheng-Hsien Lin, Hung-Yi Yang
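The three-network pipeline is concrete enough to mock up directly; below, single convolution layers stand in for the segmentation, similarity, and strategic networks. The layer shapes and the softmax/sigmoid choices are assumptions.

```python
import torch
import torch.nn as nn

C = 3   # image channels
K = 5   # label classes

seg_net = nn.Conv2d(C, K, 3, padding=1)        # segmentation network
sim_net = nn.Conv2d(2 * C, 1, 3, padding=1)    # similarity calculation network
strategy = nn.Conv2d(K + 1, K, 1)              # strategic network

def label_mask(img_first, img_second):
    # Label probability set from the second (later) frame.
    label_p = seg_net(img_second).softmax(dim=1)
    # Similarity probability set between the two frames.
    sim_p = torch.sigmoid(sim_net(torch.cat([img_first, img_second], dim=1)))
    # Concatenate both probability sets and infer the final label mask.
    fused = torch.cat([label_p, sim_p], dim=1)
    return strategy(fused).argmax(dim=1)

first = torch.rand(1, C, 64, 64)
second = torch.rand(1, C, 64, 64)
mask = label_mask(first, second)               # (1, 64, 64) label ids
```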
  • Patent number: 10915781
    Abstract: A scene reconstructing system, a scene reconstructing method and a non-transitory computer-readable medium are provided in this disclosure. The scene reconstructing system includes a first electronic device and a second electronic device. The first electronic device includes a first camera unit, a first processor, and a first communication unit. The first processor is configured to recognize at least a first object from a first image to construct a first map. The second electronic device includes a second camera unit, a second processor, and a second communication unit. The second processor is configured to recognize at least a second object from a second image to construct a second map, and to calculate a plurality of confidence values corresponding to the second map. The second communication unit is configured to transmit location information to the first communication unit according to the plurality of confidence values.
    Type: Grant
    Filed: February 27, 2019
    Date of Patent: February 9, 2021
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Yen-Jung Lee, Hung-Yi Yang
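Loosely sketched below: the second device scores its map entries with confidence values and transmits location information only for entries that clear a threshold. The re-observation-count confidence and the threshold rule are invented for the example; the abstract only says transmission depends on the confidence values.

```python
import numpy as np

def map_confidence(observation_counts):
    """Assumed confidence per recognized object: how often it was
    re-observed, squashed into [0, 1)."""
    counts = np.asarray(observation_counts, dtype=float)
    return counts / (counts + 1.0)

def maybe_send_location(second_map, observation_counts, threshold=0.6):
    """Share object locations only when the second map is trustworthy
    enough for the first device to merge them into its own map."""
    conf = map_confidence(observation_counts)
    return [loc for loc, c in zip(second_map, conf) if c >= threshold]

second_map = [(1.0, 2.0), (3.5, 0.2), (4.1, 4.4)]
payload = maybe_send_location(second_map, observation_counts=[5, 1, 9])
```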
  • Patent number: 10896544
    Abstract: The present disclosure relates to a system for providing a simulated environment and a method thereof. The system comprises a first wearable device, a second wearable device and a computing device. The first wearable device is configured to output a scenario of the simulated environment and to output a first audio. The second wearable device is configured to collect an environmental sound around the second wearable device and to send out the sound. The computing device is configured to merge the sound into the first audio according to an index and to send the merged first audio to the first wearable device, wherein the index is determined by a relative distance and a relative direction between the coordinates assigned to the first and second wearable devices in the simulated environment.
    Type: Grant
    Filed: September 27, 2017
    Date of Patent: January 19, 2021
    Assignee: HTC Corporation
    Inventors: Hung-Yi Yang, Iok-Kan Choi
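The "index" in the abstract reduces to mixing parameters derived from relative distance and direction. The sketch below uses inverse-distance attenuation and a simple left/right pan; both formulas are assumptions, since the abstract only says the index depends on distance and direction.

```python
import numpy as np

def merge_audio(first_audio, env_sound, pos_a, pos_b):
    """Mix the second wearable's environmental sound into the first
    wearable's stereo audio; gain and pan play the role of the index."""
    offset = np.asarray(pos_b, dtype=float) - np.asarray(pos_a, dtype=float)
    dist = np.linalg.norm(offset)
    gain = 1.0 / (1.0 + dist)                       # farther -> quieter
    pan = 0.5 * (1.0 + offset[0] / (dist + 1e-9))   # 0 = left, 1 = right
    left = first_audio[:, 0] + (1.0 - pan) * gain * env_sound
    right = first_audio[:, 1] + pan * gain * env_sound
    return np.stack([left, right], axis=1)

first = np.zeros((16000, 2))                        # silent first audio
env = np.sin(np.linspace(0, 440 * 2 * np.pi, 16000))
mixed = merge_audio(first, env, pos_a=(0.0, 0.0), pos_b=(2.0, 1.0))
```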
  • Patent number: 10885702
    Abstract: A facial expression modeling method used in a facial expression modeling apparatus is provided that includes the steps outlined below. Two two-dimensional images of a facial expression, retrieved respectively by two image retrieving modules, are received. A deep learning process is performed on the two two-dimensional images to generate a disparity map. The two two-dimensional images and the disparity map are concatenated to generate a three-channel feature map. The three-channel feature map is processed by a weighting calculation neural network to generate a plurality of blend-shape weightings. A three-dimensional facial expression is modeled according to the blend-shape weightings.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: January 5, 2021
    Assignee: HTC Corporation
    Inventors: Shih-Hao Wang, Hsin-Ching Sun, Cheng-Hsien Lin, Hung-Yi Yang
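A compact mock-up of the stereo-to-blend-shape pipeline follows: a stand-in disparity network, the three-channel concatenation, a small weighting calculation network, and a linear blend-shape combination. The network architectures, blend-shape count, and mesh size are all assumed.

```python
import torch
import torch.nn as nn

N_BLENDSHAPES = 8
disparity_net = nn.Conv2d(2, 1, 3, padding=1)      # stand-in for the deep model
weight_net = nn.Sequential(                         # weighting calculation net
    nn.Conv2d(3, 4, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(4, N_BLENDSHAPES), nn.Softmax(dim=1))

def model_expression(img_left, img_right, neutral, blendshapes):
    # Disparity map generated from the stereo pair.
    disp = disparity_net(torch.cat([img_left, img_right], dim=1))
    # Concatenate both images and the disparity into a 3-channel map.
    feat = torch.cat([img_left, img_right, disp], dim=1)
    # Blend-shape weightings, then the weighted 3-D facial expression.
    w = weight_net(feat)[0]                         # (N_BLENDSHAPES,)
    return neutral + (w[:, None, None] * blendshapes).sum(dim=0)

left = torch.rand(1, 1, 32, 32)                     # grayscale stereo pair
right = torch.rand(1, 1, 32, 32)
neutral = torch.zeros(100, 3)                       # 100 face vertices
shapes = torch.rand(N_BLENDSHAPES, 100, 3) * 0.01   # per-shape vertex offsets
face = model_expression(left, right, neutral, shapes)
```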
  • Patent number: 10719982
    Abstract: A surface extraction method that includes the steps outlined below is provided. (A) Raw input depth data of 3D child nodes is received, which includes the positions and distances of first 3D child nodes and the positions of second 3D child nodes. (B) The neighboring 3D child nodes are grouped into 3D parent nodes. (C) For each of the 3D parent nodes, propagated distances of the second 3D child nodes are generated. (D) The 3D parent nodes are treated as the 3D child nodes and steps (B)-(D) are repeated to generate a tree including a plurality of levels of nodes, such that surface extraction processing is performed on the tree to extract at least one surface of a scene.
    Type: Grant
    Filed: December 25, 2018
    Date of Patent: July 21, 2020
    Assignee: HTC Corporation
    Inventors: Yi-Chen Lin, Hung-Yi Yang
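The grouping/propagation loop is sketched below on a dictionary-based grid: children at integer positions collapse into parents at halved positions, and parents whose children carried no distance inherit a propagated one. Averaging as the propagation rule is an assumption; the abstract leaves the rule unspecified.

```python
import numpy as np

def build_level(nodes):
    """One grouping step: nodes maps an integer 3-D grid position to a
    signed distance, or to None for 'second' children with no distance."""
    groups = {}
    for pos, dist in nodes.items():
        groups.setdefault(tuple(p // 2 for p in pos), []).append(dist)
    level = {}
    for pos, dists in groups.items():
        known = [d for d in dists if d is not None]
        # Propagate a distance where some children had none.
        level[pos] = float(np.mean(known)) if known else None
    return level

# First child nodes carry distances; second child nodes do not.
children = {(0, 0, 0): 0.2, (1, 0, 0): None, (2, 2, 2): -0.1, (3, 3, 3): None}
tree = [children]
while len(tree[-1]) > 1:              # repeat (B)-(D) to build the tree
    tree.append(build_level(tree[-1]))
```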
  • Patent number: 10706547
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time spot. A current image of the video data corresponding to a current time spot next to the previous time spot is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: July 7, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
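Below, the frame-to-frame recurrence is mocked up by carrying the previous frame's CNN activations forward and convolving them together with the current frame's features. Reading "previous CNN weight data" as the previous unit's feature maps is an interpretation of the abstract's terminology, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

C, K, HID = 3, 5, 8

encode = nn.Conv2d(C, HID, 3, padding=1)       # per-frame features
fuse = nn.Conv2d(2 * HID, HID, 3, padding=1)   # mixes in previous CNN data
head = nn.Conv2d(HID, K, 1)                    # segmentation labels

def segment_video(frames):
    prev_data = torch.zeros(1, HID, *frames[0].shape[-2:])
    results = []
    for frame in frames:                       # time spots in order
        cur = encode(frame)
        # Convolve the previous CNN data together with the current
        # image's features to get the current segmentation result.
        prev_data = torch.relu(fuse(torch.cat([prev_data, cur], dim=1)))
        results.append(head(prev_data).argmax(dim=1))
    return results

video = [torch.rand(1, C, 32, 32) for _ in range(4)]
masks = segment_video(video)                   # one (1, 32, 32) mask per frame
```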
  • Publication number: 20200202491
    Abstract: A method, a system and a recording medium for building an environment map are provided. In the method, a plurality of images are captured using a head-mounted display (HMD). Then, a viewing direction and a translation direction of the HMD are calculated, and an importance map corresponding to the captured images is determined according to the viewing direction and the translation direction of the HMD. Finally, the captured images are stitched to generate a panoramic image according to the importance map.
    Type: Application
    Filed: December 9, 2019
    Publication date: June 25, 2020
    Applicant: HTC Corporation
    Inventors: Yi-Chen Lin, Hung-Yi Yang
  • Publication number: 20200186776
    Abstract: An image processing method includes the following steps: generating a current depth map and a current confidence map, wherein the current confidence map comprises a confidence value for each pixel; receiving a previous camera pose corresponding to a previous position, wherein the previous position corresponds to a first depth map and a first confidence map; mapping at least one pixel position of the first depth map to at least one pixel position of the current depth map according to the previous camera pose and the current camera pose of the current position; comparing the confidence value of at least one pixel of the first confidence map with the corresponding confidence value in the current confidence map and selecting the one with the higher confidence value; and generating an optimized depth map of the current position according to the pixels with the highest confidence values.
    Type: Application
    Filed: November 14, 2019
    Publication date: June 11, 2020
    Applicant: HTC Corporation
    Inventors: Hsiao-Tsung Wang, Cheng-Yuan Shih, Hung-Yi Yang
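The selection step here is specified precisely enough to transcribe: per pixel, keep whichever depth has the higher confidence. The sketch below assumes the first (previous) depth map has already been mapped into the current view, so the pose-based reprojection step is elided.

```python
import numpy as np

def optimize_depth(prev_depth, prev_conf, cur_depth, cur_conf):
    """Per pixel, keep the depth whose confidence is higher. Assumes the
    previous maps are already mapped into the current view using the
    previous and current camera poses."""
    take_prev = prev_conf > cur_conf
    depth = np.where(take_prev, prev_depth, cur_depth)
    conf = np.maximum(prev_conf, cur_conf)
    return depth, conf

prev_d = np.array([[1.0, 2.0], [3.0, 4.0]])
prev_c = np.array([[0.9, 0.1], [0.5, 0.8]])
cur_d = np.array([[1.1, 1.9], [2.7, 4.2]])
cur_c = np.array([[0.4, 0.7], [0.6, 0.3]])
opt_d, opt_c = optimize_depth(prev_d, prev_c, cur_d, cur_c)
```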
  • Publication number: 20200184671
    Abstract: An image processing method includes the following steps: dividing an object block from a two-dimensional image; identifying at least one view hotspot in a viewing field corresponding to a pupil gaze direction; receiving the view hotspot and an indicator signal, wherein the indicator signal is used to mark the object block; and generating a mask block that corresponds to the object block according to the view hotspot, wherein the indicator signal determines the label of the mask block.
    Type: Application
    Filed: October 31, 2019
    Publication date: June 11, 2020
    Applicant: HTC Corporation
    Inventors: Tung-Ting Yang, Chun-Li Wang, Cheng-Hsien Lin, Hung-Yi Yang
  • Patent number: 10657415
    Abstract: An image correspondence determining method is provided that includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: May 19, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
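Sketched below: the two images are concatenated before feature extraction (so the features carry global information), the feature maps are split back per image, patches are pooled from each region, and a learned metric scores the concatenated patches. Mean-pooling the patches and the sigmoid metric are assumptions.

```python
import torch
import torch.nn as nn

C, F = 3, 8
extract = nn.Conv2d(2 * C, 2 * F, 3, padding=1)   # features w/ global info
metric = nn.Linear(2 * F, 1)                      # similarity metric

def region_similarity(img1, img2, r1, r2):
    # Concatenate the two images, extract features, then split the
    # feature maps back into first and second sets.
    feats = extract(torch.cat([img1, img2], dim=1))
    f1, f2 = feats[:, :F], feats[:, F:]
    # Pool an image patch from each region and concatenate the patches.
    p1 = f1[..., r1[0]:r1[1], r1[2]:r1[3]].mean(dim=(2, 3))
    p2 = f2[..., r2[0]:r2[1], r2[2]:r2[3]].mean(dim=(2, 3))
    pair = torch.cat([p1, p2], dim=1)
    return torch.sigmoid(metric(pair))            # similarity in (0, 1)

a, b = torch.rand(1, C, 32, 32), torch.rand(1, C, 32, 32)
s = region_similarity(a, b, r1=(0, 8, 0, 8), r2=(4, 12, 4, 12))
```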
  • Patent number: 10628919
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: April 21, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
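The coarse-to-fine recurrence is sketched below: the image is down-sampled into a pyramid, and at each level the previous CNN data is up-sampled to the current resolution and convolved with the matching down-sampled image. The shared single-layer stage network and the pyramid depth are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, K = 3, 5
conv = nn.Conv2d(C + K, K, 3, padding=1)        # shared per-level network

def segment_pyramid(image, levels=3):
    # Down-sample the input image into a resolution pyramid, coarse first.
    pyramid = [F.avg_pool2d(image, 2 ** i) for i in range(levels - 1, -1, -1)]
    prev = torch.zeros(1, K, *pyramid[0].shape[-2:])
    for img in pyramid:
        # Up-sample the previous CNN data to this level's resolution,
        # then convolve it together with the down-sampled image.
        prev = F.interpolate(prev, size=img.shape[-2:])
        prev = conv(torch.cat([img, prev], dim=1))
    return prev.argmax(dim=1)                   # final segmentation result

image = torch.rand(1, C, 64, 64)
mask = segment_pyramid(image)                   # (1, 64, 64) label ids
```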
  • Publication number: 20200074638
    Abstract: An image segmentation method is provided that includes the steps outlined below. A first image corresponding to a first time spot and a second image corresponding to a second time spot are received from a video stream, wherein the second time spot is after the first time spot. Segmentation is performed on the second image by a segmentation neural network to generate a label probability set. Similarity determination is performed on the first image and the second image by a similarity calculation neural network to generate a similarity probability set. The label probability set and the similarity probability set are concatenated by a concatenating unit to generate a concatenated result. Further inference is performed on the concatenated result by a strategic neural network to generate a label mask.
    Type: Application
    Filed: July 31, 2019
    Publication date: March 5, 2020
    Inventors: Tung-Ting Yang, Chun-Li Wang, Cheng-Hsien Lin, Hung-Yi Yang
  • Publication number: 20200051326
    Abstract: A facial expression modeling method used in a facial expression modeling apparatus is provided that includes the steps outlined below. Two two-dimensional images of a facial expression, retrieved respectively by two image retrieving modules, are received. A deep learning process is performed on the two two-dimensional images to generate a disparity map. The two two-dimensional images and the disparity map are concatenated to generate a three-channel feature map. The three-channel feature map is processed by a weighting calculation neural network to generate a plurality of blend-shape weightings. A three-dimensional facial expression is modeled according to the blend-shape weightings.
    Type: Application
    Filed: July 22, 2019
    Publication date: February 13, 2020
    Inventors: Shih-Hao Wang, Hsin-Ching Sun, Cheng-Hsien Lin, Hung-Yi Yang
  • Publication number: 20190362230
    Abstract: A model constructing method for a neural network model applicable to image recognition processing is disclosed. The model constructing method includes the following operation: updating, by a processor, a plurality of connection variables between a plurality of layers of the neural network model, according to a plurality of inputs and a plurality of outputs of the neural network model. The plurality of outputs represent a plurality of image recognition results. The plurality of connection variables represent a plurality of connection intensities between each pair of the plurality of layers.
    Type: Application
    Filed: May 24, 2019
    Publication date: November 28, 2019
    Inventors: Cheng-Hsien Lin, Tung-Ting Yang, Hung-Yi Yang