Patents by Inventor Yao-An Hsieh

Yao-An Hsieh has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200372622
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Application
    Filed: August 4, 2020
    Publication date: November 26, 2020
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
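
The abstract above outlines a learning-based technique rather than a concrete implementation. As a rough illustration only, the following PyTorch sketch shows how a multi-term objective combining a pixel-wise reconstruction term with an adversarial term against long-exposure ground-truth images could look; the `generator` and `discriminator` modules, the L1 choice, and the `lambda_adv` weight are assumptions, not details disclosed in the filing.

```python
# Hypothetical multi-term training objective for a short-to-long-exposure generator.
import torch
import torch.nn as nn

l1_loss = nn.L1Loss()
adv_loss = nn.BCEWithLogitsLoss()

def generator_loss(generator, discriminator, short_exp, long_exp_gt, lambda_adv=0.01):
    """Compute the combined loss for one batch of paired images (N, C, H, W)."""
    fake_long = generator(short_exp)                  # predicted long-exposure image
    loss_rec = l1_loss(fake_long, long_exp_gt)        # reconstruction term vs. ground truth
    pred = discriminator(fake_long)
    loss_adv = adv_loss(pred, torch.ones_like(pred))  # adversarial term: fool the discriminator
    return loss_rec + lambda_adv * loss_adv
```
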
  • Patent number: 10796421
    Abstract: Embodiments of the present invention are directed to facilitating creation of images with selective application of the long-exposure effect. In accordance with some embodiments of the present invention, a virtual long-exposure image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual long-exposure image is received. The virtual long-exposure image is combined with one of the frames forming the virtual long-exposure image to create a selective virtual long-exposure image. The selective virtual long-exposure image comprises a visible portion of the original virtual long-exposure image and a visible portion of the individual frame that corresponds to the selected region of pixels. Additional frames may be combined with the virtual long-exposure image to create a plurality of selective virtual long-exposure image options, and the user may select one for continued use or for saving.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: October 6, 2020
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Gregg Darryl Wilensky, Chih-Yao Hsieh
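
As a rough sketch of the compositing step described in the abstract above, the snippet below combines a virtual long-exposure image with one of its aligned source frames inside a user-selected pixel region; the NumPy array shapes and boolean-mask convention are assumptions, not the claimed implementation.

```python
import numpy as np

def selective_long_exposure(long_exposure, frame, region_mask):
    """Show `frame` inside the selected region and the long-exposure result elsewhere.

    long_exposure, frame: aligned float arrays of shape (H, W, 3)
    region_mask:          boolean array of shape (H, W), True inside the selected region
    """
    mask = region_mask[..., None].astype(long_exposure.dtype)
    return mask * frame + (1.0 - mask) * long_exposure
```
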
  • Patent number: 10783649
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: September 22, 2020
    Assignee: ADOBE INC.
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
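
The selection logic in the abstract above can be pictured as a simple fallback scheme: attempt feature-based alignment and revert to a gyroscope-derived transform when feature points are deficient. The OpenCV-based sketch below is illustrative only; the `MIN_GOOD_MATCHES` threshold and the precomputed `gyro_homography` input are assumptions rather than the patented pixel-adjusted-gyroscope-alignment model.

```python
import cv2
import numpy as np

MIN_GOOD_MATCHES = 30  # hypothetical feature-point-deficiency threshold

def select_alignment(reference_gray, input_gray, gyro_homography):
    """Return a 3x3 homography mapping `input_gray` onto `reference_gray`."""
    orb = cv2.ORB_create()
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_in, des_in = orb.detectAndCompute(input_gray, None)
    if des_ref is None or des_in is None:
        return gyro_homography  # feature-point deficiency: fall back to gyroscope data
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_in, des_ref)
    if len(matches) < MIN_GOOD_MATCHES:
        return gyro_homography
    src = np.float32([kp_in[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H if H is not None else gyro_homography
```
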
  • Patent number: 10783622
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Grant
    Filed: April 25, 2018
    Date of Patent: September 22, 2020
    Assignee: ADOBE INC.
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
  • Patent number: 10768474
    Abstract: A display panel includes an active device substrate, an opposite substrate, a liquid crystal layer, a color filter layer, and a first polarized pattern layer. The color filter layer includes a first filter pattern, a second filter pattern, and a third filter pattern. The first polarized pattern layer includes a first upper polarized pattern, a second upper polarized pattern, and a third upper polarized pattern. The first upper polarized pattern is disposed in correspondence to the first filter pattern and includes a plurality of metal wires arranged along a first direction. The second upper polarized pattern is disposed in correspondence to the second filter pattern and includes a plurality of metal wires arranged along a second direction. The third upper polarized pattern is disposed in correspondence to the third filter pattern and includes a plurality of metal wires arranged along a third direction.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: September 8, 2020
    Assignee: Au Optronics Corporation
    Inventors: Szu-Yen Lin, Yao-An Hsieh, Hsin-Chun Huang, Wen-Rei Guo
  • Publication number: 20200280670
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: May 18, 2020
    Publication date: September 3, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
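
To make the motion-path blending in the abstract above concrete, the sketch below estimates dense optical flow between consecutive aligned frames, samples each pixel at several points along its flow vector, and averages the per-pair results into a virtual long exposure. Farneback flow, eight samples per path, and uint8 BGR inputs are illustrative assumptions, not the disclosed system.

```python
import cv2
import numpy as np

def blend_pair(img1, img2, steps=8):
    """Blend pixels of img1 along the motion path toward img2 (8-bit BGR inputs)."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g1.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    img1f = img1.astype(np.float32)
    samples = []
    for t in np.linspace(0.0, 1.0, steps):
        map_x = grid_x + t * flow[..., 0]  # sample partway along each pixel's motion vector
        map_y = grid_y + t * flow[..., 1]
        samples.append(cv2.remap(img1f, map_x, map_y, cv2.INTER_LINEAR))
    return np.mean(samples, axis=0)

def virtual_long_exposure(frames):
    """Average the blended images produced from each consecutive pair of aligned frames."""
    blended = [blend_pair(a, b) for a, b in zip(frames, frames[1:])]
    return np.mean(blended, axis=0).astype(np.uint8)
```
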
  • Patent number: 10706512
    Abstract: Methods and systems are provided for adjusting the brightness of images. In some implementations, an exposure bracketed set of input images produced by a camera is received. A brightness adjustment is determined for at least one input image from the set of input images. The determined brightness adjustment is applied to the input image. An output image is produced by exposure fusion from the set of input images, using the input image having the determined brightness adjustment. The output image is transmitted, where the transmitting causes display of the output image on a user device.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: July 7, 2020
    Assignee: ADOBE INC.
    Inventors: Yinglan Ma, Sylvain Philippe Paris, Chih-Yao Hsieh
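
A minimal sketch of the workflow described above: brighten one image of an exposure-bracketed set, then fuse the set. OpenCV's Mertens exposure fusion stands in for the fusion step, and the simple gain multiplication is an illustrative brightness adjustment, not the method claimed in the patent.

```python
import cv2
import numpy as np

def fuse_with_adjustment(bracket, index_to_adjust, gain=1.3):
    """bracket: list of 8-bit BGR images of the same scene at different exposures."""
    adjusted = list(bracket)
    brightened = adjusted[index_to_adjust].astype(np.float32) * gain
    adjusted[index_to_adjust] = np.clip(brightened, 0, 255).astype(np.uint8)
    fused = cv2.createMergeMertens().process(adjusted)  # float result roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)
```
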
  • Patent number: 10701279
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: June 30, 2020
    Assignee: ADOBE INC.
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Publication number: 20200175975
    Abstract: This application relates generally to modifying visual data based on audio commands and, more specifically, to performing complex operations that modify visual data based on one or more audio commands. In some embodiments, a computer system may receive an audio input and identify an audio command based on the audio input. The audio command may be mapped to one or more operations capable of being performed by a multimedia editing application. The computer system may perform the one or more operations to edit the received multimedia data.
    Type: Application
    Filed: November 29, 2018
    Publication date: June 4, 2020
    Inventors: Sarah Kong, Yinglan Ma, Hyunghwan Byun, Chih-Yao Hsieh
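
The core idea above is a mapping from recognized audio commands to editing operations. The sketch below shows one plausible shape for such a mapping; the command phrases and the `editor` methods are hypothetical placeholders, since the abstract does not name specific commands or operations.

```python
from typing import Callable, Dict

def make_command_map(editor) -> Dict[str, Callable[[], None]]:
    """Map transcribed command phrases to operations of a hypothetical editor object."""
    return {
        "increase brightness": lambda: editor.adjust_brightness(+0.1),
        "decrease brightness": lambda: editor.adjust_brightness(-0.1),
        "crop to square": lambda: editor.crop(aspect_ratio=1.0),
    }

def handle_audio_command(transcript: str, commands: Dict[str, Callable[[], None]]) -> bool:
    """Run the operation mapped to the transcribed command; return False if unrecognized."""
    action = commands.get(transcript.strip().lower())
    if action is None:
        return False
    action()
    return True
```
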
  • Patent number: 10670906
    Abstract: A display panel including a scan line, a first data line, a second data line, a first switching element, a second switching element, a first electrode, two second electrodes, a third electrode, a black matrix and a plurality of color filter layers is provided. The first switching element is electrically connected with the scan line and the first data line. The second switching element is electrically connected with the second data line. The first electrode includes at least two first openings. The two second electrodes are electrically connected with the first switching element and the second switching element respectively through the at least two first openings. The third electrode includes two first body portions, a second body portion and at least two branch portions. The black matrix includes a plurality of second openings. Two neighboring color filter layers are disposed corresponding to one of the second openings and form an overlapped structure.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: June 2, 2020
    Assignee: Au Optronics Corporation
    Inventors: Sih-Yan Lin, Yao-An Hsieh, Hsin-Chun Huang
  • Publication number: 20200151860
    Abstract: The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the disclosed systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.
    Type: Application
    Filed: November 13, 2018
    Publication date: May 14, 2020
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
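
As a rough illustration of the final compositing step described above, the snippet below pastes the aligned target instance into the reference image inside a replacement region; the feathered mask is an illustrative way to soften seams and is not the patented region-selection logic.

```python
import cv2
import numpy as np

def replace_region(reference, aligned_target, region_mask, feather=15):
    """reference, aligned_target: 8-bit BGR images; region_mask: uint8 mask (0 or 255)."""
    soft = cv2.GaussianBlur(region_mask, (0, 0), feather).astype(np.float32) / 255.0
    soft = soft[..., None]
    out = soft * aligned_target.astype(np.float32) + (1.0 - soft) * reference.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```
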
  • Publication number: 20200106945
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate a virtual long exposure image from a sequence of short exposure images portraying a moving object. In various embodiments, the image transformation system aligns two digital images in the sequence of short exposure images. The image transformation system can determine a motion vector path for the moving object between the first digital image and the second digital image. The image transformation system can also blend pixels along the motion vector path to generate a blended image representative of the motion of the moving object between the first digital image and the second digital image. The image transformation system can generate additional blended images based on consecutive pairs of images in the sequence of digital images and generate a virtual long exposure image by combining the first blended image with the additional blended images.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 2, 2020
    Inventors: Chih-Yao Hsieh, Sylvain Paris, Seyed Morteza Safdarnejad, Gregg Wilensky
  • Patent number: 10598997
    Abstract: A pixel array includes pixel unit sets each including a substrate having first and second pixel regions, a scan line, first and second data lines extending along a second direction, first and second active devices respectively in the first and second pixel regions, and first and second pixel electrodes respectively located in the first and second pixel regions and electrically connected to the first and second active devices, respectively. The scan line includes a main scan line and first and second branch scan lines (connected to the main scan line) extending along a first direction. The first active device is electrically connected to the first branch scan line and the first data line. The second active device is electrically connected to the second branch scan line and the second data line. At least one of the first and second data lines is overlapped with the first and second pixel electrodes.
    Type: Grant
    Filed: August 6, 2018
    Date of Patent: March 24, 2020
    Assignee: Au Optronics Corporation
    Inventors: Chun-Feng Lin, Yao-An Hsieh, Yu-Ping Kuo, Ching-Sheng Cheng
  • Publication number: 20200090351
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that analyze feature points of digital images to selectively apply a pixel-adjusted-gyroscope-alignment model and a feature-based-alignment model to align the digital images. For instance, the disclosed systems can select an appropriate alignment model based on feature-point-deficiency metrics specific to an input image and reference-input image. Moreover, in certain implementations, the pixel-adjusted-gyroscope-alignment model utilizes parameters from pixel-based alignment and gyroscope-based alignment to align such digital images. Together with the feature-based-alignment model, the disclosed methods, non-transitory computer readable media, and systems can use a selective image-alignment algorithm that improves computational efficiency, accuracy, and flexibility in generating enhanced digital images from a set of input images.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 19, 2020
    Inventors: Seyed Morteza Safdarnejad, Chih-Yao Hsieh
  • Patent number: 10573052
    Abstract: Embodiments of the present invention are directed to facilitating the creation of cinemagraphs from virtual long-exposure images. In accordance with some embodiments of the present invention, a virtual long-exposure (LE) image comprising a plurality of aligned frames is provided and a selection of a region of pixels in the virtual long-exposure image is received. Based on the selected region of pixels, a set of frames for animation is identified from the plurality of frames. The set of frames may be identified by automatically detecting a sequence of frames or by receiving a user selection of frames. The virtual LE image is combined with the set of frames to create a cinemagraph having a visible non-animated portion formed by the virtual LE image and a visible animated portion formed by the set of frames.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: February 25, 2020
    Assignee: ADOBE INC.
    Inventors: Seyed Morteza Safdarnejad, Sarah Aye Kong, Chih-Yao Hsieh
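
A minimal sketch of the cinemagraph assembly described above: the selected region cycles through a set of aligned frames while everything else stays fixed at the virtual long-exposure image. The boolean-mask convention and array shapes are assumptions.

```python
import numpy as np

def cinemagraph_frames(long_exposure, animated_frames, region_mask):
    """Yield frames that animate only inside `region_mask` (boolean, shape (H, W))."""
    mask = region_mask[..., None].astype(long_exposure.dtype)
    for frame in animated_frames:
        yield mask * frame + (1.0 - mask) * long_exposure
```
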
  • Publication number: 20200037837
    Abstract: The present invention provides a dust collector apparatus, comprising: a housing, an airflow generation module and a filter. The housing has one side formed with at least one intake vent-hole and another side formed with at least one exhaust vent-hole. An inner side of the housing has an airflow channel formed between the intake vent-hole and the exhaust vent-hole. The airflow generation module comprises an airflow drawing unit for drawing an airflow in order to form the airflow channel between the intake vent-hole and the exhaust vent-hole, a control module for controlling the airflow drawing unit, and a power module. The filter is detachably arranged at the intake end and used for filtering dust in the air sucked by the airflow drawing unit. The dust collector apparatus of the present invention allows the dust particles to be properly collected by the filter without secondary scattering, and guides the airflow to be exhausted smoothly.
    Type: Application
    Filed: August 6, 2019
    Publication date: February 6, 2020
    Inventors: Wan Chieh HSIEH, Pai Yao HSIEH, Chien Chieh TUNG
  • Publication number: 20200004080
    Abstract: A display panel including a scan line, a first data line, a second data line, a first switching element, a second switching element, a first electrode, two second electrodes, a third electrode, a black matrix and a plurality of color filter layers is provided. The first switching element is electrically connected with the scan line and the first data line. The second switching element is electrically connected with the second data line. The first electrode includes at least two first openings. The two second electrodes are electrically connected with the first switching element and the second switching element respectively through the at least two first openings. The third electrode includes two first body portions, a second body portion and at least two branch portions. The black matrix includes a plurality of second openings. Two neighboring color filter layers are disposed corresponding to one of the second openings and form an overlapped structure.
    Type: Application
    Filed: November 6, 2018
    Publication date: January 2, 2020
    Applicant: Au Optronics Corporation
    Inventors: Sih-Yan Lin, Yao-An Hsieh, Hsin-Chun Huang
  • Publication number: 20190333198
    Abstract: The present disclosure relates to training and utilizing an image exposure transformation network to generate a long-exposure image from a single short-exposure image (e.g., still image). In various embodiments, the image exposure transformation network is trained using adversarial learning, long-exposure ground truth images, and a multi-term loss function. In some embodiments, the image exposure transformation network includes an optical flow prediction network and/or an appearance guided attention network. Trained embodiments of the image exposure transformation network generate realistic long-exposure images from single short-exposure images without additional information.
    Type: Application
    Filed: April 25, 2018
    Publication date: October 31, 2019
    Inventors: Yilin Wang, Zhe Lin, Zhaowen Wang, Xin Lu, Xiaohui Shen, Chih-Yao Hsieh
  • Patent number: D858882
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: September 3, 2019
    Assignee: COSMEX CO., LTD.
    Inventors: Wan Chieh Hsieh, Pai Yao Hsieh, Hung Wei Chang, Kuan Chen Liao
  • Patent number: D890431
    Type: Grant
    Filed: July 27, 2018
    Date of Patent: July 14, 2020
    Assignee: COSMEX CO. LTD.
    Inventors: Wan Chieh Hsieh, Pai Yao Hsieh, Hung Wei Chang, Hao-Hong Ciou