Patents by Inventor Shao Yi Chien

Shao Yi Chien has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10915169
    Abstract: A correction method and device for an eye-tracker are provided. A non-predetermined scene frame is analyzed to obtain salient feature information, which is in turn used to correct an eye-tracking operation. The correction can be performed when the eye-tracker is first put on or at any time while it is being worn. (A sketch of the correction step follows this entry.)
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: February 9, 2021
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Chia-Yang Chang, Shih-Yi Wu
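    Illustration: a minimal sketch of the correction idea, assuming the most salient point of the scene frame serves as the likely gaze target during a fixation. All names here (salient_point, corrected_gaze, alpha) are hypothetical, not taken from the patent.

    ```python
    import numpy as np

    def salient_point(scene_frame: np.ndarray) -> np.ndarray:
        """Toy saliency: the pixel deviating most from the mean intensity."""
        gray = scene_frame.mean(axis=2) if scene_frame.ndim == 3 else scene_frame
        contrast = np.abs(gray - gray.mean())
        return np.array(np.unravel_index(contrast.argmax(), gray.shape))

    def corrected_gaze(tracked_gaze: np.ndarray, scene_frame: np.ndarray,
                       alpha: float = 0.1) -> np.ndarray:
        """Nudge the tracked gaze toward the salient point of the
        non-predetermined scene frame (the correction step)."""
        offset = salient_point(scene_frame) - tracked_gaze
        return tracked_gaze + alpha * offset  # partial correction per frame
    ```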
  • Patent number: 10895911
    Abstract: An image operation method and system for obtaining an eye's gaze direction are provided. The method and system employ multiple extraction stages for extracting eye-tracking features. An eye frame is divided into sub-frames, which are temporarily stored in a storage unit one at a time. Launch features are sequentially extracted from the sub-frames by a first feature extraction stage, such that the data of an earlier sub-frame is extracted before the data of a later sub-frame needs to be stored. The remaining feature extraction stages then apply a superposition operation to the launch features to obtain terminal features, which are computed to obtain the eye's gaze direction. (A sketch of the pipelined extraction follows this entry.)
    Type: Grant
    Filed: March 12, 2019
    Date of Patent: January 19, 2021
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Yu-Sheng Lin, Po-Jung Chiu
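    Illustration: a rough sketch of the pipelined sub-frame scheme, with a toy column-sum standing in for the first feature extraction stage and elementwise addition as the superposition; the patent does not specify the actual stages.

    ```python
    import numpy as np

    def stage1_launch(sub_frame: np.ndarray) -> np.ndarray:
        """Toy stage-1 'launch feature': per-column intensity sums."""
        return sub_frame.sum(axis=0)

    def terminal_features(eye_frame: np.ndarray, n_sub: int = 4) -> np.ndarray:
        """Only one sub-frame is buffered at a time: each sub-frame's launch
        features are extracted before the next sub-frame must be stored,
        and superposed into the running terminal features."""
        terminal = None
        for sub in np.array_split(eye_frame, n_sub, axis=0):
            launch = stage1_launch(sub)
            terminal = launch if terminal is None else terminal + launch
        return terminal
    ```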
  • Patent number: 10895910
    Abstract: An adaptive eye-tracking calibration method includes generating an eye model that acts as the current eye model; calibrating the eye model; comparing a real-time pupil data set with the pupil data set of the current eye model to obtain a pupil data difference when an event happens; and generating a new eye model if the pupil data difference is greater than a predetermined threshold. (A sketch of the threshold check follows this entry.)
    Type: Grant
    Filed: March 11, 2019
    Date of Patent: January 19, 2021
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Liang Fang
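    Illustration: a minimal sketch of the threshold test, treating the pupil data set as a numeric vector; maybe_recalibrate and build_eye_model are invented names, not from the patent.

    ```python
    import numpy as np

    def maybe_recalibrate(model_pupil: np.ndarray, live_pupil: np.ndarray,
                          threshold: float, build_eye_model):
        """On an event, compare the real-time pupil data with the current
        eye model's pupil data; generate a new model only when the
        difference exceeds the predetermined threshold."""
        difference = np.linalg.norm(live_pupil - model_pupil)
        if difference > threshold:
            return build_eye_model(live_pupil)  # new current eye model
        return None                             # keep the current model
    ```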
  • Publication number: 20200382717
    Abstract: An eye-tracking module includes a power management module, a sensing module and a processor. The power management module is configured to supply power according to a power control signal, so as to operate the eye-tracking module in a corresponding mode, and to provide a power status signal associated with its power-supply status. The sensing module includes at least one image sensor configured to capture an eye image of a user at a sampling rate. The processor is configured to acquire eye characteristics of the user, eye-movement information of the user, or vision information including user gaze coordinates according to the eye image. The processor is also configured to adjust the sampling rate, adjust its output frequency and/or switch the operational mode of the eye-tracking module according to the power status signal. (A sketch of the power-status handling follows this entry.)
    Type: Application
    Filed: May 28, 2020
    Publication date: December 3, 2020
    Inventors: Po-Jung Chiu, Kuei-An Li, Kuan-Ling Liu, Ming-Yi Tai, Chia-Ming Chang, Shao-Yi Chien
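    Illustration: a minimal sketch of reacting to the power status signal; the mode names and rate values are invented placeholders.

    ```python
    from enum import Enum, auto

    class Mode(Enum):
        ACTIVE = auto()
        POWER_SAVING = auto()

    def on_power_status(power_ok: bool, sampling_rate_hz: int):
        """Switch the operational mode and lower the image-sensor sampling
        rate when the power status signal reports low power."""
        if power_ok:
            return Mode.ACTIVE, sampling_rate_hz
        return Mode.POWER_SAVING, max(30, sampling_rate_hz // 4)
    ```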
  • Publication number: 20200218345
    Abstract: In a calibration process for an eye-tracking application, a calibration mark is displayed on an instrument, and the user is instructed to keep the gaze focused on the calibration mark. Next, a dynamic image is displayed on the instrument, and the user is instructed to move their head or the instrument as indicated by the dynamic image while keeping the gaze focused on the calibration mark. The ocular information of the user is recorded during the head movement or the instrument movement for calibrating the eye-tracking application. (A sketch of this procedure follows this entry.)
    Type: Application
    Filed: January 1, 2020
    Publication date: July 9, 2020
    Inventors: Kuan-Ling Liu, Liang Fang, Po-Jung Chiu, Yi-Heng Wu, Ming-Yi Tai, Yi-Hsiang Chen, Chia-Ming Chang, Shao-Yi Chien
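    Illustration: a sketch of the recording loop, assuming hypothetical display and tracker interfaces (show_calibration_mark, show_movement_cue and read_ocular_data are invented names).

    ```python
    def run_guided_calibration(display, tracker, n_samples: int = 200):
        """Show a fixed calibration mark and a dynamic movement cue, then
        record ocular data while the user moves as instructed but keeps
        the gaze on the mark."""
        display.show_calibration_mark()
        display.show_movement_cue()              # e.g., an arrow to follow
        samples = [tracker.read_ocular_data() for _ in range(n_samples)]
        return samples                           # fed to the calibration fit
    ```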
  • Publication number: 20200205657
    Abstract: A method of monitoring eye strain includes detecting the blink status, the vergence status and the pupil status of a user, and then determining whether the user is experiencing eye strain according to at least one of those statuses. The method further includes prompting the user to blink or informing the user of the eye strain. (A minimal rule-based sketch follows this entry.)
    Type: Application
    Filed: September 11, 2019
    Publication date: July 2, 2020
    Inventors: Kuei-An Li, Su-Ling Yeh, Shao-Yi Chien
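    Illustration: a minimal rule-based check in the spirit of the abstract; the thresholds are invented placeholders, not values from the application.

    ```python
    def eye_strain_detected(blinks_per_minute: float,
                            vergence_error_deg: float,
                            pupil_variation: float) -> bool:
        """Flag eye strain when any monitored status crosses a threshold:
        too few blinks, unstable vergence, or pupil fatigue."""
        return (blinks_per_minute < 10
                or vergence_error_deg > 1.5
                or pupil_variation > 0.3)
    ```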
  • Publication number: 20200192471
    Abstract: Eye-tracking methods and devices for reducing operating workload are provided. A single sub-frame of an eye frame and/or a scene frame is used to estimate the eyeball position, which reduces operating workload and power consumption. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: March 11, 2019
    Publication date: June 18, 2020
    Inventors: Shao-Yi Chien, Yu-Sheng Lin, Shih-Yi Wu
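    Illustration: a sketch of estimating the eyeball position from a single sub-frame around the previous estimate, with a darkest-pixel stand-in for the real detector.

    ```python
    import numpy as np

    def estimate_eyeball(eye_frame: np.ndarray, last_yx: tuple, half: int = 32):
        """Process only one sub-frame centered on the previous eyeball
        position, so most of the frame is never touched."""
        y0 = max(0, last_yx[0] - half)
        x0 = max(0, last_yx[1] - half)
        sub = eye_frame[y0:last_yx[0] + half, x0:last_yx[1] + half]
        dy, dx = np.unravel_index(sub.argmin(), sub.shape)  # darkest = pupil
        return y0 + dy, x0 + dx
    ```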
  • Publication number: 20200183489
    Abstract: An adaptive eye-tracking calibration method includes generating an eye model that acts as the current eye model; calibrating the eye model; comparing a real-time pupil data set with the pupil data set of the current eye model to obtain a pupil data difference when an event happens; and generating a new eye model if the pupil data difference is greater than a predetermined threshold.
    Type: Application
    Filed: March 11, 2019
    Publication date: June 11, 2020
    Inventors: Shao-Yi Chien, Liang Fang
  • Publication number: 20200166995
    Abstract: An image operation method and system for obtaining an eye's gaze direction are provided. The method and system employ multiple extraction stages for extracting eye-tracking features. An eye frame is divided into sub-frames, which are temporarily stored in a storage unit one at a time. Launch features are sequentially extracted from the sub-frames by a first feature extraction stage, such that the data of an earlier sub-frame is extracted before the data of a later sub-frame needs to be stored. The remaining feature extraction stages then apply a superposition operation to the launch features to obtain terminal features, which are computed to obtain the eye's gaze direction.
    Type: Application
    Filed: March 12, 2019
    Publication date: May 28, 2020
    Inventors: Shao-Yi Chien, Yu-Sheng Lin, Po-Jung Chiu
  • Publication number: 20200167957
    Abstract: A correction method and device for an eye-tracker are provided. A non-predetermined scene frame is analyzed to obtain salient feature information, which is in turn used to correct an eye-tracking operation. The correction can be performed when the eye-tracker is first put on or at any time while it is being worn.
    Type: Application
    Filed: March 11, 2019
    Publication date: May 28, 2020
    Inventors: Shao-Yi Chien, Chia-Yang Chang, Shih-Yi Wu
  • Patent number: 10606352
    Abstract: In a dual-mode eye-tracking method and system, an infrared (IR) ray is emitted to perform an IR tracking mode and visible-light (VL) calibration. When a first error between a VL-associated gaze position and an IR-associated gaze position is less than a first threshold, the IR ray is turned off to perform a VL tracking mode. In a VL checking period, when a second error between a VL-associated gaze position and an IR-associated gaze position is less than a second threshold, the IR ray is turned off to continue the VL tracking mode. (A sketch of the mode switch follows this entry.)
    Type: Grant
    Filed: October 16, 2018
    Date of Patent: March 31, 2020
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Yi-Heng Wu, Po-Jung Chiu, Liang Fang
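    Illustration: the switching rule reduced to one function; gaze positions are modeled as (x, y) tuples, and the threshold semantics follow the abstract.

    ```python
    def ir_should_stay_on(vl_gaze, ir_gaze, threshold: float) -> bool:
        """Keep the IR ray on until the VL- and IR-associated gaze
        positions agree within the threshold; the same comparison is
        repeated during each VL checking period."""
        error = ((vl_gaze[0] - ir_gaze[0]) ** 2 +
                 (vl_gaze[1] - ir_gaze[1]) ** 2) ** 0.5
        return error >= threshold
    ```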
  • Publication number: 20190227622
    Abstract: In a dual-mode eye-tracking method and system, an infrared (IR) ray is emitted to perform an IR tracking mode and visible-light (VL) calibration. When a first error between a VL-associated gaze position and an IR-associated gaze position is less than a first threshold, the IR ray is turned off to perform a VL tracking mode. In a VL checking period, when a second error between a VL-associated gaze position and an IR-associated gaze position is less than a second threshold, the IR ray is turned off to continue the VL tracking mode.
    Type: Application
    Filed: October 16, 2018
    Publication date: July 25, 2019
    Inventors: Shao-Yi Chien, Yi-Heng Wu, Po-Jung Chiu, Liang Fang
  • Patent number: 10275937
    Abstract: The present invention provides a method of indirect illumination for a 3D graphics processing device, including obtaining a scene and performing voxelization on the scene; performing a lighting computation on the voxelized scene from a plurality of light sources, and storing a potential lighting-driven voxel (pLDV) list according to the lighting computation; sorting the pLDV list to generate a sorted pLDV list; and performing a compaction process on the sorted pLDV list; wherein each voxel in the pLDV list stores a reflective radiance and a Morton code corresponding to that voxel. (A sketch of the sort-and-compact step follows this entry.)
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: April 30, 2019
    Assignees: National Taiwan University, MEDIATEK INC.
    Inventors: Shao-Yi Chien, Yen-Yu Chen
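    Illustration: a sketch of the sort-and-compact stage, using a standard 30-bit Morton encoding; modeling the pLDV entries as (voxel coordinate, radiance) pairs is an assumption about the data layout, not taken from the patent.

    ```python
    def part1by2(n: int) -> int:
        """Spread the bits of a 10-bit integer two apart."""
        n &= 0x000003FF
        n = (n ^ (n << 16)) & 0xFF0000FF
        n = (n ^ (n << 8)) & 0x0300F00F
        n = (n ^ (n << 4)) & 0x030C30C3
        n = (n ^ (n << 2)) & 0x09249249
        return n

    def morton3(x: int, y: int, z: int) -> int:
        """Interleave voxel coordinates into a 30-bit Morton code."""
        return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

    def sort_and_compact(pldv):
        """Sort pLDV entries by Morton code, then merge entries that share
        a voxel (the compaction pass), accumulating their radiance."""
        keyed = sorted(((morton3(*v), v, r) for v, r in pldv), key=lambda t: t[0])
        out, last = [], None
        for key, voxel, radiance in keyed:
            if key == last:
                out[-1] = (voxel, out[-1][1] + radiance)
            else:
                out.append((voxel, radiance))
                last = key
        return out
    ```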
  • Publication number: 20180300937
    Abstract: A system and method of restoring an occluded background region includes detecting surfaces of a point cloud, thereby resulting in a surface map; substantially enhancing edges between detected surfaces according to a gradient map and the surface map, thereby generating an edge map; inpainting a depth image, thereby generating an inpainted depth image; and inpainting a color image, thereby generating an inpainted color image. (An illustrative sketch follows this entry.)
    Type: Application
    Filed: April 13, 2017
    Publication date: October 18, 2018
    Inventors: Shao-Yi Chien, Yung-Lin Huang, Po-Jen Lai, Yi-Nung Liu
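    Illustration: a sketch of the two inpainting steps using OpenCV's generic inpainting as a stand-in for the surface- and edge-guided inpainting described in the application; hole_mask marks the occluded region.

    ```python
    import cv2
    import numpy as np

    def restore_background(depth: np.ndarray, color: np.ndarray,
                           hole_mask: np.ndarray):
        """Fill the occluded region (hole_mask != 0) in the depth image,
        then in the color image. Depth is inpainted on an 8-bit
        normalization and rescaled back afterwards."""
        lo, hi = float(depth.min()), float(depth.max())
        depth8 = np.uint8(255 * (depth - lo) / max(hi - lo, 1e-6))
        depth8 = cv2.inpaint(depth8, hole_mask, 5, cv2.INPAINT_TELEA)
        depth_filled = depth8.astype(np.float32) / 255 * (hi - lo) + lo
        color_filled = cv2.inpaint(color, hole_mask, 5, cv2.INPAINT_TELEA)
        return depth_filled, color_filled
    ```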
  • Publication number: 20170323474
    Abstract: The present invention provides a method of indirect illumination for a 3D graphics processing device, including obtaining a scene and performing voxelization on the scene; performing a lighting computation on the voxelized scene from a plurality of light sources, and storing a potential lighting-driven voxel (pLDV) list according to the lighting computation; sorting the pLDV list to generate a sorted pLDV list; and performing a compaction process on the sorted pLDV list; wherein each voxel in the pLDV list stores a reflective radiance and a Morton code corresponding to that voxel.
    Type: Application
    Filed: May 8, 2017
    Publication date: November 9, 2017
    Inventors: Shao-Yi Chien, Yen-Yu Chen
  • Publication number: 20170323471
    Abstract: The present invention provides a method of 3D rendering processing, which includes obtaining a scene with a plurality of geometries and performing a first voxelization process according to the scene to obtain a first voxel scene; and performing a second voxelization process according to the scene and the first voxel scene to obtain a second voxel scene; wherein the first voxel scene comprises a plurality of first voxels and the second voxel scene comprises a plurality of second voxels. (A sketch of such two-pass voxelization follows this entry.)
    Type: Application
    Filed: May 8, 2017
    Publication date: November 9, 2017
    Inventors: Shao-Yi Chien, Yen-Yu Chen
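    Illustration: a point-based sketch of the two-pass idea: a coarse occupancy pass over the whole scene, then a fine pass that allocates voxels only inside occupied coarse cells. The patent operates on scene geometries; points stand in for them here for brevity.

    ```python
    import numpy as np

    def two_pass_voxelize(points: np.ndarray, coarse: float, factor: int = 4):
        """First voxel scene: coarse occupancy. Second voxel scene: fine
        voxels grouped under their occupied coarse cells, so empty space
        never receives fine voxels."""
        coarse_idx = np.floor(points / coarse).astype(int)
        fine_idx = np.floor(points / (coarse / factor)).astype(int)
        first_scene = set(map(tuple, coarse_idx))
        second_scene = {}
        for c, f in zip(map(tuple, coarse_idx), map(tuple, fine_idx)):
            second_scene.setdefault(c, set()).add(f)
        return first_scene, second_scene
    ```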
  • Patent number: 9639923
    Abstract: A bilateral filtering method includes decomposing an image patch by pixel intensity to form a stack of patches; computing the spatial filtering response of each intensity; multiplying the spatial filtering response of each intensity by the corresponding intensity, thereby producing a multiplied spatial filtering response for each intensity; and summing up the multiplied spatial filtering responses of the different intensities, weighted with the corresponding range weights. (A sketch of this decomposition follows this entry.)
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: May 2, 2017
    Assignees: National Taiwan University, Himax Technologies Limited
    Inventors: Shao-Yi Chien, Wei-Chi Tu, Yi-Nung Liu
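    Illustration: the layered reconstruction written out in NumPy/SciPy, with a Gaussian as both the spatial filter and the range weight; the patent does not fix the kernels, and levels, sigma_s and sigma_r are illustrative parameters.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def layered_bilateral(image: np.ndarray, sigma_s: float, sigma_r: float,
                          levels: int = 16) -> np.ndarray:
        """Decompose by intensity, spatially filter each layer, scale by
        its intensity, and recombine with range weights."""
        img = image.astype(np.float64)
        bins = np.linspace(img.min(), img.max(), levels)
        half = (bins[1] - bins[0]) / 2
        num = np.zeros_like(img)
        den = np.zeros_like(img)
        for k in bins:
            layer = (np.abs(img - k) <= half).astype(float)   # stack slice
            response = gaussian_filter(layer, sigma_s)        # spatial filter
            weight = np.exp(-((img - k) ** 2) / (2 * sigma_r ** 2))  # range
            num += weight * k * response
            den += weight * response
        return num / np.maximum(den, 1e-12)
    ```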
  • Publication number: 20170064279
    Abstract: A multi-view 3D video method and system are disclosed. 3D points of an initial 3D model are projected back into image space as back-projected points. The back-projected points are compared with pixel points in the next frames from multiple viewpoints, and the pixel points in the next frames are updated according to the comparison. The depth of each back-projected point is compared point-wise with the depth of the corresponding pixel point: when the two are not similar in depth, the farther of the two points is preserved and used to update the pixel point in the next frame. (A sketch of the depth comparison follows this entry.)
    Type: Application
    Filed: September 1, 2015
    Publication date: March 2, 2017
    Inventors: Shao-Yi Chien, Yung-Lin Huang, Yi-Nung Liu
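    Illustration: the point-wise depth comparison, assuming the model points are already in the camera frame of the next view and K is the 3x3 intrinsic matrix; tol is an invented similarity tolerance.

    ```python
    import numpy as np

    def update_depth(model_points: np.ndarray, depth_map: np.ndarray,
                     K: np.ndarray, tol: float = 0.05) -> np.ndarray:
        """Back-project model points into image space; where a point and
        its pixel disagree in depth, preserve the farther of the two."""
        updated = depth_map.copy()
        uvw = (K @ model_points.T).T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        z = uvw[:, 2]
        h, w = depth_map.shape
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
        for ui, vi, zi in zip(u[ok], v[ok], z[ok]):
            if abs(depth_map[vi, ui] - zi) > tol:     # not similar in depth
                updated[vi, ui] = max(depth_map[vi, ui], zi)  # farther point
        return updated
    ```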
  • Publication number: 20170011497
    Abstract: A bilateral filtering method includes decomposing an image patch by pixel intensity to form a stack of patches; computing the spatial filtering response of each intensity; multiplying the spatial filtering response of each intensity by the corresponding intensity, thereby producing a multiplied spatial filtering response for each intensity; and summing up the multiplied spatial filtering responses of the different intensities, weighted with the corresponding range weights.
    Type: Application
    Filed: July 8, 2015
    Publication date: January 12, 2017
    Inventors: Shao-Yi Chien, Wei-Chi Tu, Yi-Nung Liu
  • Patent number: 9329677
    Abstract: The present invention relates to a social system and process for bringing a virtual social network into real life. Social messages of at least one interlocutor are gathered from the virtual social network and analyzed to generate at least one recommended topic, allowing a user to converse with the interlocutor using the recommended topic. The interlocutor's speech, behavior, and/or physiological responses during the conversation are then captured and analyzed to derive the interlocutor's emotional state, from which the user can determine whether the interlocutor is interested in the recommended topic. The system thereby brings social messages from the virtual network into real life, increasing conversation topics between people.
    Type: Grant
    Filed: March 15, 2012
    Date of Patent: May 3, 2016
    Assignee: National Taiwan University
    Inventors: Shao-Yi Chien, Jui-Hsin Lai, Jhe-Yi Lin, Min-Yian Su, Po-Chen Wu, Chieh-Chi Kao