Patents by Inventor Cheng-Hsien Lin

Cheng-Hsien Lin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200225551
    Abstract: A reflective display device includes a thin-film transistor (TFT) array substrate, a front panel laminate (FPL), a front protection sheet, a back protection sheet, a light blocking layer, and a light source. The front panel laminate is located on the TFT array substrate, and has a transparent conductive layer and a display medium layer. The display medium layer is located between the transparent conductive layer and the TFT array substrate. The front protection sheet is located on the front panel laminate. The back protection sheet is located below the TFT array substrate. The light blocking layer at least covers a lateral surface of the back protection sheet. The light source faces toward a lateral surface of the front panel laminate, a lateral surface of the TFT array substrate, and the lateral surface of the back protection sheet.
    Type: Application
    Filed: July 24, 2019
    Publication date: July 16, 2020
    Inventors: Chia-Chi CHANG, Chih-Chun CHEN, Chi-Ming WU, Yi-Ching WANG, Jia-Hung CHEN, Cheng-Hsien LIN
  • Patent number: 10706547
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time spot. A current image of the video data corresponding to a current time spot next to the previous time spot is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
    Type: Grant
    Filed: May 9, 2018
    Date of Patent: July 7, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
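The weight hand-off between per-frame network units described above can be illustrated with a toy NumPy sketch. The single 3×3 kernel, the thresholded response standing in for a segmentation mask, and the moving-average adaptation rule are all illustrative placeholders, not the patent's learned CNN units:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation (a minimal stand-in for a CNN layer)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def segment_video(frames, initial_weights, adapt_rate=0.1):
    """Each per-frame unit starts from the weights handed over by the
    previous unit, convolves the current frame, and thresholds the
    response into a binary mask."""
    weights = initial_weights
    masks = []
    for frame in frames:
        response = conv2d(frame, weights)
        masks.append((response > response.mean()).astype(np.uint8))
        # hand slightly adapted weights to the next unit in the chain
        kh, kw = weights.shape
        weights = weights + adapt_rate * (frame[:kh, :kw] - weights)
    return masks
```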
  • Publication number: 20200184671
    Abstract: An image processing method includes the following steps: dividing an object block from a two-dimensional image; identifying at least one view hotspot in a viewing field corresponding to a pupil gaze direction; receiving the view hotspot and an indicator signal, wherein the indicator signal is used to mark the object block; and generating a mask block that corresponds to the object block according to the view hotspot, wherein the indicator signal determines the label of the mask block.
    Type: Application
    Filed: October 31, 2019
    Publication date: June 11, 2020
    Applicant: HTC Corporation
    Inventors: Tung-Ting YANG, Chun-Li WANG, Cheng-Hsien LIN, Hung-Yi YANG
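The hotspot-to-mask step can be sketched as follows, assuming the object blocks are already divided into boolean region masks and the gaze reduces to a single (row, col) point; the function name and containment test are illustrative, not from the patent:

```python
import numpy as np

def label_mask_from_hotspot(shape, blocks, hotspot, indicator_label):
    """blocks maps a block id to a boolean region mask; hotspot is the
    (row, col) position the pupil gaze lands on.  The block containing
    the hotspot receives the label carried by the indicator signal;
    everything else stays 0."""
    mask = np.zeros(shape, dtype=int)
    r, c = hotspot
    for block_id, region in blocks.items():
        if region[r, c]:
            mask[region] = indicator_label
            break
    return mask
```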
  • Publication number: 20200177537
    Abstract: A control system and a control method for a social network are provided. The control method includes following steps. Obtaining detection information. Analyzing status information of at least one social member according to the detection information. Condensing the status information according to a time interval to obtain condensed information. Summarizing the condensed information according to a summary priority score to obtain summary information. Displaying the summary information.
    Type: Application
    Filed: August 30, 2019
    Publication date: June 4, 2020
    Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
    Inventors: Chieh-Chih CHANG, Chih-Chung KUO, Shih-Chieh CHIEN, Jian-Yung HUNG, Cheng-Hsien LIN
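The condense-then-summarize pipeline above can be sketched in plain Python, assuming status events arrive as (timestamp, member, status) tuples and the summary priority score is a caller-supplied function (both assumptions for illustration):

```python
from collections import defaultdict

def condense(events, interval):
    """Bucket (timestamp, member, status) events by time interval."""
    buckets = defaultdict(list)
    for t, member, status in events:
        buckets[t // interval].append((member, status))
    return buckets

def summarize(buckets, score, top_k=1):
    """Keep only the top_k highest-priority items in each bucket."""
    return {b: sorted(items, key=score, reverse=True)[:top_k]
            for b, items in buckets.items()}
```

A longer-status-is-more-important score is used below purely as an example of a priority function.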
  • Patent number: 10657415
    Abstract: An image correspondence determining method is provided that includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
    Type: Grant
    Filed: May 10, 2018
    Date of Patent: May 19, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
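The patch-extraction and similarity step can be sketched with NumPy, assuming single-channel feature maps, equal-sized regions, and cosine similarity standing in for the learned metric:

```python
import numpy as np

def patch_similarity(feat_a, feat_b, region_a, region_b):
    """Extract one patch from each feature map, concatenate them, and
    score the pair with a cosine metric (a stand-in for the learned
    similarity computation described in the abstract)."""
    r0, r1, c0, c1 = region_a
    s0, s1, d0, d1 = region_b
    concat = np.concatenate([feat_a[r0:r1, c0:c1].ravel(),
                             feat_b[s0:s1, d0:d1].ravel()])
    half = concat.size // 2
    a, b = concat[:half], concat[half:]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```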
  • Patent number: 10628919
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: April 21, 2020
    Assignee: HTC Corporation
    Inventors: Cheng-Hsien Lin, Po-Chuan Cho, Hung-Yi Yang
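The coarse-to-fine refinement loop in this abstract can be sketched with NumPy, assuming strided down-sampling, nearest-neighbour up-sampling, and a simple average as a placeholder for the convolution that fuses the up-sampled previous result with the current down-sampled image:

```python
import numpy as np

def downsample(img, factor):
    """Strided down-sampling (placeholder for a proper resampling filter)."""
    return img[::factor, ::factor]

def upsample(img, factor):
    """Nearest-neighbour up-sampling via a Kronecker product."""
    return np.kron(img, np.ones((factor, factor)))

def refine(prev_coarse_result, current_image):
    """Up-sample the previous coarse CNN output to the current image's
    resolution and fuse the two; a real system runs a convolution here,
    the average is just a placeholder."""
    factor = current_image.shape[0] // prev_coarse_result.shape[0]
    return (upsample(prev_coarse_result, factor) + current_image) / 2.0
```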
  • Publication number: 20200074638
    Abstract: An image segmentation method is provided that includes the steps outlined below. A first image corresponding to a first time spot and a second image corresponding to a second time spot are received from a video stream, wherein the second time spot is after the first time spot. Segmentation is performed on the second image by a segmentation neural network to generate a label probability set. Similarity determination is performed on the first image and the second image by a similarity calculation neural network to generate a similarity probability set. The label probability set and the similarity probability set are concatenated by a concatenating unit to generate a concatenated result. Further inference is performed on the concatenated result by a strategic neural network to generate a label mask.
    Type: Application
    Filed: July 31, 2019
    Publication date: March 5, 2020
    Inventors: Tung-Ting YANG, Chun-Li WANG, Cheng-Hsien LIN, Hung-Yi YANG
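The concatenate-and-infer step can be sketched with NumPy, assuming per-pixel probability tensors and an averaging head followed by argmax as a crude stand-in for the strategic neural network:

```python
import numpy as np

def fuse_label_mask(label_probs, similarity_probs):
    """label_probs: (H, W, C) per-pixel class probabilities from the
    segmentation network; similarity_probs: (H, W, C) evidence carried
    over from the previous frame.  Concatenating along the channel axis
    and averaging the two halves stands in for the strategic network's
    inference; argmax yields the final label mask."""
    concat = np.concatenate([label_probs, similarity_probs], axis=-1)
    c = label_probs.shape[-1]
    fused = (concat[..., :c] + concat[..., c:]) / 2.0
    return fused.argmax(axis=-1)
```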
  • Publication number: 20200051326
    Abstract: A facial expression modeling method used in a facial expression modeling apparatus is provided that includes the steps outlined below. Two two-dimensional images of a facial expression retrieved by two image retrieving modules respectively are received. A deep learning process is performed on the two two-dimensional images to generate a disparity map. The two two-dimensional images and the disparity map are concatenated to generate a three-channel feature map. The three-channel feature map is processed by a weighting calculation neural network to generate a plurality of blend-shape weightings. A three-dimensional facial expression is modeled according to the blend-shape weightings.
    Type: Application
    Filed: July 22, 2019
    Publication date: February 13, 2020
    Inventors: Shih-Hao WANG, Hsin-Ching SUN, Cheng-Hsien LIN, Hung-Yi YANG
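The final modeling step above, combining blend-shape weightings into a 3-D expression, is a standard weighted sum over displacement bases. A minimal NumPy sketch, assuming the weightings have already been produced by the weighting-calculation network:

```python
import numpy as np

def apply_blend_shapes(neutral, deltas, weights):
    """Model a 3-D expression as the neutral mesh plus a weighted sum of
    blend-shape displacement arrays (one weight per blend shape)."""
    mesh = neutral.copy()
    for w, delta in zip(weights, deltas):
        mesh += w * delta
    return mesh
```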
  • Publication number: 20190362230
    Abstract: A model constructing method for a neural network model applicable for image recognition processing is disclosed. The model constructing method includes the following operation: updating, by a processor, a plurality of connection variables between a plurality of layers of the neural network model, according to a plurality of inputs and a plurality of outputs of the neural network model. The plurality of outputs represent a plurality of image recognition results. The plurality of connection variables represent a plurality of connection intensities between each two of the plurality of layers.
    Type: Application
    Filed: May 24, 2019
    Publication date: November 28, 2019
    Inventors: Cheng-Hsien LIN, Tung-Ting YANG, Hung-Yi YANG
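Updating connection variables from inputs and outputs can be illustrated with a single linear layer and one least-squares gradient step; the loss, learning rate, and single-layer setup are assumptions for illustration, not the patent's construction:

```python
import numpy as np

def update_connections(weights, x, y_true, lr=0.01):
    """One gradient step on the connection matrix of a linear layer:
    the connection variables move toward mapping inputs x to the
    recognition targets y_true under a mean-squared-error loss."""
    y_pred = x @ weights
    grad = x.T @ (y_pred - y_true) / len(x)
    return weights - lr * grad
```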
  • Publication number: 20190325251
    Abstract: A scene reconstructing system, a scene reconstructing method, and a non-transitory computer-readable medium are provided in this disclosure. The scene reconstructing system includes a first electronic device and a second electronic device. The first electronic device includes a first camera unit, a first processor, and a first communication unit. The first processor is configured for recognizing at least a first object from a first image to construct a first map. The second electronic device includes a second camera unit, a second processor, and a second communication unit. The second processor is configured for recognizing at least a second object from a second image to construct a second map, and for calculating a plurality of confidence values corresponding to the second map. The second communication unit is configured for transmitting location information to the first communication unit according to the plurality of confidence values.
    Type: Application
    Filed: February 27, 2019
    Publication date: October 24, 2019
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Yen-Jung LEE, Hung-Yi YANG
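The confidence-driven transmission step can be sketched in a few lines; the fixed threshold and the "share only low-confidence locations" policy are assumptions, since the abstract does not specify how the confidence values gate the transfer:

```python
def share_low_confidence_locations(confidences, locations, threshold=0.5):
    """Return the locations whose reconstruction confidence falls below
    the threshold, i.e. the ones the second device would ask the first
    device's map to fill in."""
    return [loc for conf, loc in zip(confidences, locations) if conf < threshold]
```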
  • Publication number: 20190302900
    Abstract: A display device includes a reflective display panel, a surface layer and an optically encoded pattern. The surface layer is disposed over the reflective display panel. The optically encoded pattern has a plurality of optical codes respectively corresponding to a plurality of positions of the reflective display panel, and the optically encoded pattern is displayed by the reflective display panel or disposed between the reflective display panel and the surface layer.
    Type: Application
    Filed: March 21, 2019
    Publication date: October 3, 2019
    Inventors: Cheng-Hsien LIN, Hsin-Tao HUANG
  • Patent number: 10402943
    Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module, and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: September 3, 2019
    Assignee: HTC Corporation
    Inventors: Hung-Yi Yang, Cheng-Hsien Lin, Po-Chuan Cho
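The multi-resolution correction pipeline can be sketched with NumPy, using strided down-sampling, a gamma curve as a placeholder for the learned correction model, nearest-neighbour up-sampling, and channel-wise stacking for the concatenation:

```python
import numpy as np

def enhance(image, factors, gamma=0.8):
    """Down-sample at several factors, apply a per-scale correction
    (gamma here, standing in for the correction model), up-sample each
    result back to full resolution, and concatenate channel-wise."""
    h, w = image.shape
    corrected = []
    for f in factors:
        small = image[::f, ::f] ** gamma              # down-sample + correct
        up = np.kron(small, np.ones((f, f)))[:h, :w]  # back to full size
        corrected.append(up)
    return np.stack(corrected, axis=-1)               # concatenated output
```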
  • Patent number: 10330860
    Abstract: A cover plate structure for a display device is provided. The cover plate structure includes a light-transmitting substrate, a light-shielding layer, and a light-transmitting covering layer. The light-transmitting substrate has a flat upper surface and a lower surface. The light-shielding layer is disposed on an edge of the flat upper surface of the light-transmitting substrate and in direct contact with the flat upper surface. The light-transmitting covering layer is located on the light-shielding layer and the light-transmitting substrate.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: June 25, 2019
    Assignee: E Ink Holdings Inc.
    Inventors: Tsai-Wei Shei, Cheng-Hsien Lin, Hsin-Tao Huang
  • Publication number: 20190066265
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. An input image is down-sampled to generate down-sampled images. Previous convolution neural network (CNN) data having a first resolution is received and up-sampled to generate up-sampled previous CNN data having a second resolution. A current down-sampled image of the down-sampled images having the second resolution and the up-sampled previous CNN data are received. Convolution is performed according to the up-sampled previous CNN data and the current down-sampled image to generate a current image segmentation result.
    Type: Application
    Filed: May 17, 2018
    Publication date: February 28, 2019
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
  • Publication number: 20180349737
    Abstract: An image correspondence determining method is provided that includes the steps outlined below. A first image and a second image are concatenated to generate a concatenated image having global information. Features are extracted from the concatenated image to generate a plurality of feature maps and the feature maps are divided into first feature maps and second feature maps. First image patches are extracted from the first feature maps corresponding to a first region and second image patches are extracted from the second feature maps corresponding to a second region. The first and the second image patches are concatenated to generate concatenated image patches. A similarity metric is calculated according to the concatenated image patches to determine a similarity between the first region and the second region.
    Type: Application
    Filed: May 10, 2018
    Publication date: December 6, 2018
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
  • Publication number: 20180350077
    Abstract: An image segmentation method for performing image segmentation through a neural network implemented by an image segmentation apparatus is provided. The image segmentation method includes the steps outlined below. Previous CNN weight data is received by a current convolution neural network unit of the neural network, wherein the previous CNN weight data is generated by a previous convolution neural network unit of the neural network based on a previous image of video data corresponding to a previous time spot. A current image of the video data corresponding to a current time spot next to the previous time spot is received by the current convolution neural network unit. Convolution is performed according to the previous CNN weight data and the current image to generate a current image segmentation result by the current convolution neural network unit.
    Type: Application
    Filed: May 9, 2018
    Publication date: December 6, 2018
    Inventors: Cheng-Hsien LIN, Po-Chuan CHO, Hung-Yi YANG
  • Publication number: 20180252862
    Abstract: A cover plate structure for a display device is provided. The cover plate structure includes a light-transmitting substrate, a light-shielding layer, and a light-transmitting covering layer. The light-transmitting substrate has a flat upper surface and a lower surface. The light-shielding layer is disposed on an edge of the flat upper surface of the light-transmitting substrate and in direct contact with the flat upper surface. The light-transmitting covering layer is located on the light-shielding layer and the light-transmitting substrate.
    Type: Application
    Filed: January 11, 2018
    Publication date: September 6, 2018
    Inventors: Tsai-Wei SHEI, Cheng-Hsien LIN, Hsin-Tao HUANG
  • Publication number: 20180114294
    Abstract: An image enhancement device that includes a down-sampling module, correction modules, an up-sampling module, and a concatenating module is provided. The down-sampling module down-samples an input image to generate down-sampled images having different down-sampled resolutions. Each of the correction modules performs correction on one of the down-sampled images according to a correction model based on at least one correction parameter to generate one of corrected images. The up-sampling module up-samples the corrected images to generate up-sampled images, wherein each of the up-sampled images is of a same up-sampled resolution. The concatenating module concatenates the up-sampled images into an output image.
    Type: Application
    Filed: September 26, 2017
    Publication date: April 26, 2018
    Inventors: Hung-Yi YANG, Cheng-Hsien LIN, Po-Chuan CHO
  • Patent number: 9612475
    Abstract: The front light module includes a light guide plate, a light source, a first light transmissive substrate, a second light transmissive substrate, and a printing ink layer. The light guide plate has a first light emitting surface, a second light emitting surface, and a light incident surface. The light source faces the light incident surface. The first light transmissive substrate is located on the first light emitting surface. The second light transmissive substrate is located on the surface of the first light transmissive substrate facing away from the light guide plate, and the thickness of the second light transmissive substrate is smaller than that of the first light transmissive substrate. The printing ink layer is located on the surface of the second light transmissive substrate facing the first light transmissive substrate, and on an edge of the second light transmissive substrate.
    Type: Grant
    Filed: September 28, 2014
    Date of Patent: April 4, 2017
    Assignee: E Ink Holdings Inc.
    Inventors: Yun-Nan Hsieh, Cheng-Hsien Lin, Lin-An Chen
  • Patent number: 9561465
    Abstract: An ecosystem operated in a plant having a drying unit is provided. The ecosystem includes: a regenerative thermal oxidization unit for processing a waste gas to produce a hot gas; a first hot gas pipeline connected to the regenerative thermal oxidization unit and the drying unit, wherein the hot gas is transferred from the regenerative thermal oxidization unit to the drying unit via the first hot gas pipeline; a heat recovery unit disposed at the first hot gas pipeline to absorb heat from the first hot gas pipeline; an absorption refrigeration unit connected to a target to be cooled; and a hot liquid pipeline connected to the heat recovery unit and the absorption refrigeration unit, wherein the heat recovery unit transfers heat from the first hot gas pipeline to the absorption refrigeration unit via the hot liquid pipeline to actuate the absorption refrigeration unit to cool the target.
    Type: Grant
    Filed: November 29, 2013
    Date of Patent: February 7, 2017
    Assignee: TSRC Corporation
    Inventors: Jung Chang Lu, Cheng Hsien Lin, Sheng-Te Yang