Patents by Inventor Yu-Lin Chang

Yu-Lin Chang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240409180
    Abstract: Provided is a bicycle handlebar grip assembly for mounting on a handlebar of a bicycle, the bicycle handlebar grip assembly including at least one handgrip body for arranging on the handlebar, such as at the end thereof, for a rider to hold the handlebar; fixing means for fixing the bicycle handlebar grip assembly to the handlebar, such as for securing or clamping it; and a coupling connection for coupling the handgrip body to the fixing means.
    Type: Application
    Filed: October 7, 2022
    Publication date: December 12, 2024
    Inventors: Yi-Fang Chen, Zhao-Bo Zhan, Alexandre Phaneuf, Chun-Hsun Kao, Chien-I Chen, Yu-Lin Chang, Job Hendrik Stehmann
  • Publication number: 20240400144
    Abstract: The invention relates to a bicycle (200), in particular an electric bicycle (200). The invention also relates to a light module (100) for use in a bicycle (200), in particular an electric bicycle (200) according to the invention. The invention further relates to a bicycle control unit (8) programmed to independently control a plurality of light sources (104) of at least one light module (100) for use in a bicycle (200), in particular an electric bicycle (200) according to the invention. The invention moreover relates to an assembly of at least one light module (100) according to the invention and at least one bicycle control unit (8) according to the invention.
    Type: Application
    Filed: October 7, 2022
    Publication date: December 5, 2024
    Inventors: Marjolein Deun, Alexandre Phaneuf, Olivier Hébert, Wei-Ting Yu, Tzu-Jung Huang, Yu-Lin Chang, Job Hendrik Stehmann
  • Patent number: 10803003
    Abstract: A data recording system includes a host terminal and a data recorder. The host terminal defines a first module card as corresponding to a first data channel and a first module card slot of the data recorder. The first module card is inserted into the first module card slot, and the data recorder stores a first type of data captured from the first data channel to the first module card. The host terminal has the data recorder stop capturing the first type of data, and defines a second module card as corresponding to a second data channel and the first module card slot of the data recorder. The data recorder is shut down, and the first module card is dismounted from the first module card slot. The second module card is inserted into the first module card slot, and the data recorder is rebooted.
    Type: Grant
    Filed: December 8, 2019
    Date of Patent: October 13, 2020
    Assignees: Inventec (Pudong) Technology Corp., Inventec Corporation
    Inventors: Yu-Lin Chang, Kai-Yang Tung
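The card-swap sequence described in the abstract above can be sketched in a few lines. This is an illustrative simulation only, not code from the patent; the class, slot/channel names, and state flags are all invented for the example.

```python
# Minimal sketch of the host-side flow: bind a card to a channel and a slot,
# stop capture, then shut down / dismount / insert the new card / reboot.
class DataRecorder:
    def __init__(self):
        self.slot_bindings = {}   # slot -> (card_id, channel)
        self.capturing = set()    # channels currently being captured
        self.running = True

    def bind(self, slot, card_id, channel):
        """Insert a card into a slot and start storing that channel's data."""
        self.slot_bindings[slot] = (card_id, channel)
        self.capturing.add(channel)

    def stop_capture(self, channel):
        self.capturing.discard(channel)

    def swap_card(self, slot, new_card_id, new_channel):
        """Shut down, dismount the old card, insert the new one, reboot."""
        self.running = False                        # shut down
        self.slot_bindings.pop(slot, None)          # dismount first card
        self.slot_bindings[slot] = (new_card_id, new_channel)
        self.running = True                         # reboot
        self.capturing.add(new_channel)

recorder = DataRecorder()
recorder.bind(slot=1, card_id="cardA", channel="ch1")
recorder.stop_capture("ch1")
recorder.swap_card(slot=1, new_card_id="cardB", new_channel="ch2")
```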
  • Patent number: 10520546
    Abstract: An automatic power supply system is electrically coupled to a component to be tested. The automatic power supply system includes a power array and a controller. The power array includes a plurality of power channels, and provides power supplies through the plurality of power channels. The component to be tested is electrically coupled to a first power channel of the plurality of power channels and receives a power supply through the first power channel. The controller is electrically coupled to the power array, and calculates a power of the power supply received by the component to be tested. The controller adjusts a power specification of the power supply provided through the first power channel according to the power.
    Type: Grant
    Filed: December 13, 2018
    Date of Patent: December 31, 2019
    Assignees: Inventec (Pudong) Technology Corp., Inventec Corporation
    Inventors: Yu-Lin Chang, Kai-Yang Tung, Mao-Ching Lin
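The controller behavior described above (compute the delivered power, then adjust the channel's power specification accordingly) can be illustrated with a toy rule. The function name, headroom threshold, and 20% step are assumptions for the sketch, not values from the patent.

```python
# Hedged sketch: measure P = V * I on a channel and raise the channel's
# power specification when the device under test nears the limit.
def adjust_power_spec(voltage, current, spec_watts, headroom=0.9):
    """Return a new power spec if measured power exceeds `headroom` of spec."""
    measured = voltage * current
    if measured > headroom * spec_watts:
        return spec_watts * 1.2   # raise the specification by 20%
    return spec_watts

raised = adjust_power_spec(12.0, 4.0, 50.0)   # 48 W measured on a 50 W spec
kept = adjust_power_spec(12.0, 3.0, 50.0)     # 36 W measured, spec unchanged
```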
  • Patent number: 10482626
    Abstract: Calibration methods for calibrating image capture devices of an around view monitoring (AVM) system mounted on vehicle are provided, the calibration method including: extracting local patterns from images captured by each image capture device, wherein each local pattern is respectively disposed at a position within the image capturing range of one of the image capture devices; acquiring an overhead-view (OHV) image from OHV point above vehicle, wherein the OHV image includes first patterns relative to the local patterns for the image capture devices; generating global patterns from the OHV image using the first patterns, each global pattern corresponding to one of the local patterns; matching the local patterns with the corresponding global patterns to determine camera parameters and transformation information corresponding thereto for each image capture device; and calibrating each image capture device using determined camera parameters and transformation information corresponding thereto so as to generate A
    Type: Grant
    Filed: January 8, 2018
    Date of Patent: November 19, 2019
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Yu-Pao Tsai
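The matching step above (aligning each camera's local patterns with the corresponding global patterns from the overhead view) can be illustrated with a much-simplified fit. Real AVM calibration estimates full camera parameters; this toy version, with invented point data, fits only a uniform scale and translation by least squares.

```python
# Toy illustration (not the patented method): fit g = s * l + t for matched
# 2-D point lists, mapping local-pattern coordinates into the overhead frame.
def fit_scale_translation(local_pts, global_pts):
    n = len(local_pts)
    mean_l = [sum(p[i] for p in local_pts) / n for i in (0, 1)]
    mean_g = [sum(p[i] for p in global_pts) / n for i in (0, 1)]
    num = sum((l[0] - mean_l[0]) * (g[0] - mean_g[0]) +
              (l[1] - mean_l[1]) * (g[1] - mean_g[1])
              for l, g in zip(local_pts, global_pts))
    den = sum((l[0] - mean_l[0]) ** 2 + (l[1] - mean_l[1]) ** 2
              for l in local_pts)
    s = num / den
    t = (mean_g[0] - s * mean_l[0], mean_g[1] - s * mean_l[1])
    return s, t

# Synthetic patterns: the overhead view is the local view scaled by 2,
# shifted by (2, 3).
s, t = fit_scale_translation([(0, 0), (1, 0), (0, 1)],
                             [(2, 3), (4, 3), (2, 5)])
```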
  • Patent number: 10477183
    Abstract: A method of three-dimensional video encoding and decoding that adaptively incorporates camera parameters in the video bitstream according to a control flag is disclosed. The control flag is derived based on a combination of individual control flags associated with multiple depth-oriented coding tools. Another control flag can be incorporated in the video bitstream to indicate whether there is a need for the camera parameters for the current layer. In another embodiment, a first flag and a second flag are used to adaptively control the presence and location of camera parameters for each layer or each view in the video bitstream. The first flag indicates whether camera parameters for each layer or view are present in the video bitstream. The second flag indicates camera parameter location for each layer or view in the video bitstream.
    Type: Grant
    Filed: July 18, 2014
    Date of Patent: November 12, 2019
    Assignee: HFI INNOVATION INC.
    Inventors: Yu-Lin Chang, Yi-Wen Chen, Jian-Liang Lin
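The two-flag control in the abstract above (a per-layer presence flag plus a location indicator) can be sketched as a small parser. The dictionary layout and field names are invented for illustration; the real syntax lives in the bitstream.

```python
# Sketch: per layer/view, a presence flag says whether camera parameters are
# in the bitstream, and a location code says where they are carried.
def parse_camera_param_flags(layers):
    """`layers` maps layer id -> (present_flag, location_code or None)."""
    result = {}
    for layer, (present, location) in layers.items():
        if present:
            result[layer] = {"camera_params": True, "location": location}
        else:
            result[layer] = {"camera_params": False}
    return result

info = parse_camera_param_flags({0: (1, "slice_header"), 1: (0, None)})
```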
  • Publication number: 20190301661
    Abstract: The proposed vacuum jacketed tube may deliver high/low-temperature fluid with less heat transfer, and in particular may deliver such fluid through a flexible structure. The vacuum jacketed tube includes a tubular structure surrounding a pipe through which the fluid is delivered, and the space between the tubular structure and the pipe may be evacuated. The heat transferred into and/or out of the fluid is thereby minimized, especially if the tubular structure and the pipe are separated by at least one thermal insulator or are mutually separated. Moreover, the vacuum jacketed tube may be mechanically connected to the source/destination of the delivered fluid, or even to another vacuum jacketed tube, through bellows and/or a rotary joint. In addition, the pipe may be surrounded by a Teflon bellows and the tubular structure by a steel bellows, so as to further reduce the heat transferred into/out of the fluid delivered inside the pipe.
    Type: Application
    Filed: March 29, 2019
    Publication date: October 3, 2019
    Inventors: Yu-Lin Chang, Chien-Cheng Kuo, Yu-Ho Ni, Chun-Chieh Lin
  • Publication number: 20190213756
    Abstract: Calibration methods for calibrating image capture devices of an around view monitoring (AVM) system mounted on vehicle are provided, the calibration method including: extracting local patterns from images captured by each image capture device, wherein each local pattern is respectively disposed at a position within the image capturing range of one of the image capture devices; acquiring an overhead-view (OHV) image from OHV point above vehicle, wherein the OHV image includes first patterns relative to the local patterns for the image capture devices; generating global patterns from the OHV image using the first patterns, each global pattern corresponding to one of the local patterns; matching the local patterns with the corresponding global patterns to determine camera parameters and transformation information corresponding thereto for each image capture device; and calibrating each image capture device using determined camera parameters and transformation information corresponding thereto so as to generate A
    Type: Application
    Filed: January 8, 2018
    Publication date: July 11, 2019
    Inventors: Yu-Lin CHANG, Yu-Pao TSAI
  • Patent number: 10230937
    Abstract: A method and apparatus for a three-dimensional or multi-view video encoding or decoding system utilizing unified disparity vector derivation is disclosed. When a three-dimensional coding tool using a derived disparity vector (DV) is selected, embodiments according to the present invention will first obtain the derived DV from one or more neighboring blocks. If the derived DV is available, the selected three-dimensional coding tool is applied to the current block using the derived DV. If the derived DV is not available, the selected three-dimensional coding tool is applied to the current block using a default DV, where the default DV is set to point to an inter-view reference picture in a reference picture list of the current block.
    Type: Grant
    Filed: August 13, 2014
    Date of Patent: March 12, 2019
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Na Zhang, Yi-Wen Chen, Jicheng An, Yu-Lin Chang
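The fallback rule in the abstract above has a simple shape: use the DV derived from neighboring blocks when one is available, otherwise fall back to a default DV pointing to an inter-view reference picture. The sketch below is illustrative only; the function name and the (0, 0) default are assumptions.

```python
# Minimal sketch of unified DV derivation with a default-DV fallback.
def select_disparity_vector(neighbor_dvs, default_dv=(0, 0)):
    """Return the first available neighboring DV, else the default DV
    (which would point to an inter-view reference picture)."""
    for dv in neighbor_dvs:
        if dv is not None:
            return dv
    return default_dv

derived = select_disparity_vector([None, (3, 0), None])   # neighbor DV found
fallback = select_disparity_vector([None, None])          # default DV used
```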
  • Patent number: 10110925
    Abstract: A method of video coding utilizing ARP (advanced residual prediction) by explicitly signaling the temporal reference picture or deriving the temporal reference picture at the encoder and the decoder using an identical process is disclosed. To encode or decode a current block in a current picture from a dependent view, a corresponding block in a reference view corresponding to the current block is determined based on a DV (disparity vector). For the encoder side, the temporal reference picture in the reference view of the corresponding block is explicitly signaled using syntax element(s) in the slice header or derived using an identical process as the decoder. For the decoder side, the temporal reference picture in the reference view of the corresponding block is determined according to the syntax element(s) in the slice header or derived using an identical process as the decoder. The temporal reference picture is then used for ARP.
    Type: Grant
    Filed: September 15, 2014
    Date of Patent: October 23, 2018
    Assignee: HFI Innovation Inc.
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Yu-Lin Chang
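The two signaling modes above can be sketched as one selection function: either an explicit index from the slice header is used, or encoder and decoder run the same derivation rule. The record layout and the "first temporal reference" derivation rule below are invented for the example.

```python
# Sketch: pick the ARP temporal reference picture either from explicit
# slice-header syntax or by an identical derivation at both ends.
def arp_ref_picture(ref_list, explicit_idx=None):
    if explicit_idx is not None:
        # explicitly signaled in the slice header
        return ref_list[explicit_idx]
    # identical derivation at encoder and decoder (illustrative rule:
    # take the first temporal, i.e. non-inter-view, reference)
    return next(p for p in ref_list if p["type"] == "temporal")

refs = [{"poc": 8, "type": "inter_view"}, {"poc": 4, "type": "temporal"}]
derived = arp_ref_picture(refs)
signaled = arp_ref_picture(refs, explicit_idx=0)
```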
  • Patent number: 10085039
    Abstract: A method and apparatus for three-dimensional video coding using the virtual depth information are disclosed. For a current texture block in the dependent view, the method incorporating the present invention first derives an estimated disparity vector to locate a corresponding texture block in a coded view. A collocated depth block in the coded view collocated with the corresponding texture block in the coded view is identified and used to derive the virtual depth information. One aspect of the present invention addresses derivation process for the estimated disparity vector. Another aspect of the present invention addresses the usage of the derived virtual depth information.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: September 25, 2018
    Assignee: HFI INNOVATION INC.
    Inventors: Yu-Lin Chang, Yu-Pao Tsai
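The lookup chain described above (estimated DV locates a corresponding texture block in the coded view; the collocated depth block there supplies the virtual depth) can be illustrated with synthetic arrays. The depth map, block size, and function name are invented for the sketch.

```python
# Toy illustration: shift the current block's position by the estimated DV
# into the coded view and read out the collocated depth block there.
def virtual_depth(depth_map, block_xy, est_dv, block=2):
    x = block_xy[0] + est_dv[0]
    y = block_xy[1] + est_dv[1]
    return [row[x:x + block] for row in depth_map[y:y + block]]

depth = [[0, 0, 5, 5],
         [0, 0, 5, 5],
         [9, 9, 1, 1],
         [9, 9, 1, 1]]
vd = virtual_depth(depth, block_xy=(0, 0), est_dv=(2, 0))
```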
  • Patent number: 9918068
    Abstract: A method and apparatus for texture image compression in a 3D video coding system are disclosed. Embodiments according to the present invention derive depth information related to a depth map associated with a texture image and then process the texture image based on the depth information derived. The invention can be applied to the encoder side as well as the decoder side. The encoding order or decoding order for the depth maps and the texture images can be based on block-wise interleaving or picture-wise interleaving. One aspect of the present invention is related to partitioning of the texture image based on depth information of the depth map. Another aspect of the present invention is related to motion vector or motion vector predictor processing based on the depth information.
    Type: Grant
    Filed: June 15, 2012
    Date of Patent: March 13, 2018
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Shih-Ta Hsiang, Chi-Ling Wu, Chih-Ming Fu, Chia-Ping Chen, Yu-Pao Tsai, Yu-Wen Huang, Shaw-Min Lei
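One aspect above is partitioning the texture image based on depth information. A common intuition (used here purely as an illustration, with an invented threshold, not as the patented rule) is that high depth variance inside a block suggests an object boundary and hence a split.

```python
# Illustrative-only sketch: decide whether to split a texture block from the
# variance of its associated depth block.
def should_split(depth_block, threshold=4.0):
    vals = [v for row in depth_block for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return var > threshold

flat = [[10, 10], [10, 10]]   # uniform depth: keep the block whole
edge = [[0, 0], [20, 20]]     # depth discontinuity: split
```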
  • Patent number: 9571810
    Abstract: A method for generating a target perspective model referenced for depth map generation includes at least following steps: receiving a first input image; utilizing a region-based analysis unit for analyzing a plurality of regions in the first input image to extract image characteristics of the regions; and determining the target perspective model according to at least the image characteristics. Another method for generating a target perspective model referenced for depth map generation includes: receiving a first input image; determining a first perspective model in response to the first input image; and utilizing a perspective model generation unit for generating the target perspective model by a weighted sum of the first perspective model and at least one second perspective model.
    Type: Grant
    Filed: August 16, 2012
    Date of Patent: February 14, 2017
    Assignee: MEDIATEK INC.
    Inventors: Yu-Lin Chang, Chao-Chung Cheng, Te-Hao Chang
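The second method above forms the target perspective model as a weighted sum of a first model and at least one second model. A one-line blend captures the idea; the parameter names ("near"/"far" depth gradients) and the 0.7 weight are assumptions for the sketch.

```python
# Sketch: target perspective model as a weighted sum of two models,
# each given as a dict of model parameters.
def blend_perspective(model_a, model_b, weight=0.7):
    return {k: weight * model_a[k] + (1 - weight) * model_b[k]
            for k in model_a}

target = blend_perspective({"near": 1.0, "far": 0.2},
                           {"near": 0.8, "far": 0.4})
```

Blending the current frame's model with a prior one in this way smooths the generated depth maps over time.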
  • Publication number: 20160205403
    Abstract: In one implementation, a method codes video pictures, in which each of the video pictures is partitioned into LCUs (largest coding units). The method operates by receiving a current LCU, partitioning the current LCU adaptively to result in multiple leaf CUs, determining whether a current leaf CU has at least one nonzero quantized transform coefficient according to both Prediction Mode (PredMode) and Coded Block Flag (CBF), and incorporating quantization parameter information for the current leaf CU in a video bitstream, if the current leaf CU has at least one nonzero quantized transform coefficient. If the current leaf CU has no nonzero quantized transform coefficient, the method excludes the quantization parameter information for the current leaf CU in the video bitstream.
    Type: Application
    Filed: March 18, 2016
    Publication date: July 14, 2016
    Inventors: Yu-Wen HUANG, Ching-Yeh CHEN, Chih-Ming FU, Chih-Wei HSU, Yu-Lin CHANG, Tzu-Der CHUANG, Shaw-Min LEI
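The signaling condition above (code QP information for a leaf CU only when it has at least one nonzero quantized transform coefficient, judged from PredMode and CBF) reduces to a small predicate. This is a simplified sketch; real HEVC-style semantics carry more cases.

```python
# Minimal sketch: should delta-QP info be coded for this leaf CU?
def signal_qp(pred_mode, cbf):
    # A skipped CU carries no residual, so no nonzero coefficients exist;
    # otherwise the coded block flag tells whether any coefficient is nonzero.
    if pred_mode == "SKIP":
        return False
    return bool(cbf)

examples = [signal_qp("INTER", 1), signal_qp("INTER", 0), signal_qp("SKIP", 1)]
```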
  • Publication number: 20160182884
    Abstract: A method and apparatus for a three-dimensional or multi-view video encoding or decoding system utilizing unified disparity vector derivation is disclosed. When a three-dimensional coding tool using a derived disparity vector (DV) is selected, embodiments according to the present invention will first obtain the derived DV from one or more neighboring blocks. If the derived DV is available, the selected three-dimensional coding tool is applied to the current block using the derived DV. If the derived DV is not available, the selected three-dimensional coding tool is applied to the current block using a default DV, where the default DV is set to point to an inter-view reference picture in a reference picture list of the current block.
    Type: Application
    Filed: August 13, 2014
    Publication date: June 23, 2016
    Inventors: Jian-Liang LIN, Na ZHANG, Yi-Wen CHEN, Jicheng AN, Yu-Lin CHANG
  • Publication number: 20160057453
    Abstract: A method of three-dimensional video encoding and decoding that adaptively incorporates camera parameters in the video bitstream according to a control flag is disclosed. The control flag is derived based on a combination of individual control flags associated with multiple depth-oriented coding tools. Another control flag can be incorporated in the video bitstream to indicate whether there is a need for the camera parameters for the current layer. In another embodiment, a first flag and a second flag are used to adaptively control the presence and location of camera parameters for each layer or each view in the video bitstream. The first flag indicates whether camera parameters for each layer or view are present in the video bitstream. The second flag indicates camera parameter location for each layer or view in the video bitstream.
    Type: Application
    Filed: July 18, 2014
    Publication date: February 25, 2016
    Inventors: Yu-Lin CHANG, Yi-Wen CHEN, Jian-Liang LIN
  • Publication number: 20150249838
    Abstract: A method and apparatus for three-dimensional video coding using the virtual depth information are disclosed. For a current texture block in the dependent view, the method incorporating the present invention first derives an estimated disparity vector to locate a corresponding texture block in a coded view. A collocated depth block in the coded view collocated with the corresponding texture block in the coded view is identified and used to derive the virtual depth information. One aspect of the present invention addresses derivation process for the estimated disparity vector. Another aspect of the present invention addresses the usage of the derived virtual depth information.
    Type: Application
    Filed: September 17, 2013
    Publication date: September 3, 2015
    Inventors: Yu-Lin Chang, Yu-Pao Tsai
  • Publication number: 20150195506
    Abstract: A method of video coding utilizing ARP (advanced residual prediction) by explicitly signaling the temporal reference picture or deriving the temporal reference picture at the encoder and the decoder using an identical process is disclosed. To encode or decode a current block in a current picture from a dependent view, a corresponding block in a reference view corresponding to the current block is determined based on a DV (disparity vector). For the encoder side, the temporal reference picture in the reference view of the corresponding block is explicitly signaled using syntax element(s) in the slice header or derived using an identical process as the decoder. For the decoder side, the temporal reference picture in the reference view of the corresponding block is determined according to the syntax element(s) in the slice header or derived using an identical process as the decoder. The temporal reference picture is then used for ARP.
    Type: Application
    Filed: September 15, 2014
    Publication date: July 9, 2015
    Inventors: Jian-Liang Lin, Yi-Wen Chen, Yu-Wen Huang, Yu-Lin Chang
  • Publication number: 20150189321
    Abstract: A method for improved binarization and entropy coding process of syntax related to depth coding is disclosed. In one embodiment, a first value associated with the current depth block is bypass coded, where the first value corresponds to the residual magnitude of a block coded by an Intra or Inter SDC mode, the delta DC magnitude of a block coded by a DMM mode, or a residual sign of a block coded by the Inter SDC mode. In another embodiment, a first bin of a binary codeword is coded using arithmetic coding and the remaining bins of the binary codeword are coded using bypass coding. The codeword corresponds to the residual magnitude of a block coded by the Intra or Inter SDC mode, or the delta DC magnitude of a block coded by the DMM mode.
    Type: Application
    Filed: November 11, 2014
    Publication date: July 2, 2015
    Inventors: Yi-Wen Chen, Jian-Liang Lin, Tzu-Der Chuang, Yu-Lin Chang
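The second embodiment above (first bin context-coded, remaining bins bypass-coded) can be shown schematically. Real CABAC maintains context state; here the "coders" are just labels on each bin, purely for illustration.

```python
# Toy model: tag the first bin of a codeword for context-adaptive arithmetic
# coding and every remaining bin for bypass coding.
def code_bins(bins):
    return [("arith", b) if i == 0 else ("bypass", b)
            for i, b in enumerate(bins)]

coded = code_bins([1, 0, 1, 1])
```

Bypass bins skip context modeling, so the tail of the codeword is cheaper to code at a small compression cost.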
  • Publication number: 20150172714
    Abstract: A method and apparatus for three-dimensional video encoding or decoding using sub-block based inter-view prediction are disclosed. The method partitions a texture block into texture sub-blocks and determines disparity vectors of the texture sub-blocks. The inter-view reference data is derived based on the disparity vectors of the texture sub-blocks and a reference texture frame in a different view. The inter-view reference data is then used as prediction of the current block for encoding or decoding. One aspect of the present invention addresses partitioning the current texture block. Another aspect of the present invention addresses derivation of disparity vectors for the current texture sub-blocks.
    Type: Application
    Filed: June 28, 2013
    Publication date: June 18, 2015
    Inventors: Chi-Ling Wu, Yu-Lin Chang, Yu-Pao Tsai, Shaw-Min Lei
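The partition-and-predict flow above (split the texture block into sub-blocks, give each its own disparity vector, fetch inter-view reference samples per sub-block) can be sketched in one dimension. All data and the function name are synthetic.

```python
# 1-D illustration: predict each sub-block from a reference-view sample row
# shifted by that sub-block's own disparity vector.
def subblock_prediction(ref_row, block_start, sub_size, dvs):
    pred = []
    for i, dv in enumerate(dvs):
        start = block_start + i * sub_size + dv
        pred.extend(ref_row[start:start + sub_size])
    return pred

ref = list(range(20))            # reference-view samples (synthetic)
p = subblock_prediction(ref, block_start=4, sub_size=2, dvs=[1, 2])
```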