Patents by Inventor Ying-Ko Lu

Ying-Ko Lu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9224248
    Abstract: A method of applying virtual makeup and producing makeover effects on a 3D face model driven by facial tracking in real time includes the steps of: capturing static or live facial images of a user; performing facial tracking of the facial image and obtaining tracking points on the captured facial image; and producing makeover effects according to the tracking points in real time. Virtual makeup can be applied using a virtual makeup input tool such as a user's finger sliding over a touch panel screen, a mouse cursor, or an object passing through the makeup-allowed area. The makeup-allowed area for producing makeover effects is defined by extracting feature points from the facial tracking points, dividing the makeup-allowed area into segments and layers, and defining and storing parameters of the makeup-allowed area. Virtual visual effects including color series, alpha blending, and/or superposition can be applied.
    Type: Grant
    Filed: August 8, 2013
    Date of Patent: December 29, 2015
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Ying-Ko Lu, Yi-Chia Hsu, Sheng-Wen Jeng, Hsin-Wei Hsiao
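    Illustrative sketch: one minimal reading of the alpha-blending effect described in the abstract above, assuming NumPy images with values in [0, 1] and a mask standing in for the makeup-allowed area; the function and parameter names are illustrative, not taken from the patent.
      import numpy as np

      def apply_makeup(face_rgb, region_mask, makeup_rgb, alpha=0.4):
          """Alpha-blend a flat makeup color into the makeup-allowed area.

          face_rgb:    HxWx3 float array in [0, 1]
          region_mask: HxW float array in [0, 1], 1 inside the makeup-allowed area
          makeup_rgb:  length-3 color to blend in
          alpha:       blend strength inside the area
          """
          makeup = np.asarray(makeup_rgb, dtype=float).reshape(1, 1, 3)
          weight = (alpha * region_mask)[..., None]           # per-pixel blend weight
          return (1.0 - weight) * face_rgb + weight * makeup  # standard alpha blend

      # Toy usage: blend a reddish tint into a square "lip" region of a flat grey face.
      face = np.full((240, 320, 3), 0.6)
      mask = np.zeros((240, 320))
      mask[150:180, 130:190] = 1.0
      out = apply_makeup(face, mask, makeup_rgb=(0.8, 0.2, 0.3), alpha=0.5)
      print(out.shape, out[160, 160])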
  • Patent number: 9182813
    Abstract: An image-based object tracking system includes at least a controller with two or more color clusters, an input button, a processing unit with a camera, an object tracking algorithm, and a display. The camera is configured to capture images of the controller, the processing unit is connected to the display to show processed image content, and the controller interacts directly with the displayed processed image content. The controller can have two or three color clusters located on a side surface thereof and two color clusters having concentric circular areas located on a top surface thereof; the color of the first color cluster can be the same as or different from the color of the third color cluster. An object tracking method with or without scale calibration is also provided, which includes color learning and color relearning, image capturing, separating and splitting the controller from the background, and an object pairing procedure on the controller.
    Type: Grant
    Filed: November 28, 2013
    Date of Patent: November 10, 2015
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Sheng-Wen Jeng, Chih-Ming Chang, Hsin-Wei Hsiao, Yi-Chia Hsu, Ying-Ko Lu
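    Illustrative sketch: the color-learning and controller/background separation steps can be read as per-cluster color thresholding; the sketch below models a learned cluster as a per-channel mean and spread in RGB (the patent does not specify a color space, thresholds, or these function names).
      import numpy as np

      def learn_color(patch):
          """Color learning: model a color cluster as a per-channel mean and spread."""
          pixels = patch.reshape(-1, 3).astype(float)
          return pixels.mean(axis=0), pixels.std(axis=0) + 1e-6

      def segment(frame, mean, std, k=2.5):
          """Mark pixels within k spreads of the learned color (controller vs. background)."""
          dist = np.abs(frame.astype(float) - mean) / std
          return (dist < k).all(axis=2)

      def centroid(mask):
          """Estimate the cluster position as the centroid of the segmented pixels."""
          ys, xs = np.nonzero(mask)
          return None if xs.size == 0 else (xs.mean(), ys.mean())

      # Toy frame: dark background with a bright green blob standing in for one color cluster.
      frame = np.zeros((120, 160, 3), dtype=np.uint8)
      frame[40:60, 70:90] = (30, 200, 40)
      mean, std = learn_color(frame[45:55, 75:85])      # "color learning" on a sample patch
      print(centroid(segment(frame, mean, std)))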
  • Publication number: 20150146169
    Abstract: A method for automatically measuring pupillary distance includes extracting facial features from a face image; displaying a head current center indicator based on the facial feature extraction; displaying an elliptical frame and a target center indicator; and calculating a first distance between the head current center indicator and the target center indicator to check whether it falls below a threshold range, after which the head current center indicator, the elliptical frame, and the target center indicator are allowed to disappear. A card window based on the facial tracking result is then displayed, credit card band detection is performed to check whether the card is located within the card window, and the card window then disappears. An elliptical frame of the moving head and a target elliptical frame are displayed, and the elliptical frame of the moving head is aligned with the target elliptical frame while a correct head posture is maintained. Once the elliptical frame of the moving head is aligned with the target elliptical frame, both are allowed to disappear from view and a pupillary distance measurement is performed.
    Type: Application
    Filed: November 25, 2014
    Publication date: May 28, 2015
    Inventors: Zhou Ye, Sheng-Wen Jeng, Ying-Ko Lu, Shih Wei Liu
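    Illustrative sketch: the credit card band serves as a physical scale reference, so once the card and both pupils are located in the image, the measurement reduces to a pixel-to-millimetre conversion. The sketch assumes the standard ISO/IEC 7810 ID-1 card width of 85.60 mm; the pixel values and names are illustrative.
      CARD_WIDTH_MM = 85.60   # ISO/IEC 7810 ID-1 card width used as the scale reference

      def pupillary_distance_mm(pupil_px, card_px):
          """Convert the pupil-to-pupil pixel distance to millimetres.

          pupil_px: distance between the two detected pupil centres, in pixels
          card_px:  detected width of the credit card band, in pixels
          """
          mm_per_px = CARD_WIDTH_MM / card_px
          return pupil_px * mm_per_px

      # Illustrative numbers: the card spans 428 px and the pupils are 310 px apart.
      print(round(pupillary_distance_mm(310, 428), 1))   # -> 62.0 mm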
  • Patent number: 8983203
    Abstract: A face-tracking method with high accuracy is provided. The face-tracking method includes generating an initial face shape according to the detected face region of an input image and a learned database, wherein the initial face shape comprises an initial inner shape and an initial outer shape; generating a refined inner shape by refining the initial inner shape according to the input image and the learned database; and generating a refined outer shape by searching for an edge from the initial outer shape toward the limit of the outer shape.
    Type: Grant
    Filed: October 11, 2012
    Date of Patent: March 17, 2015
    Assignee: ULSee Inc.
    Inventors: Zhou Ye, Ying-Ko Lu, Sheng-Wen Jeng
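    Illustrative sketch: the outer-shape step searches for an edge from the initial outer shape toward the outer limit. Below is one simple stand-in that walks along a ray and stops at the strongest intensity jump; the patent itself relies on a learned database and its own criterion, and all names here are illustrative.
      import numpy as np

      def search_edge_along_ray(gray, start, direction, max_steps=20):
          """Walk from an initial outer-shape point along a ray and return the point
          with the strongest intensity jump, taken here as the refined edge."""
          x, y = map(float, start)
          dx, dy = direction
          best_step, best_jump, prev = 0, 0.0, gray[int(y), int(x)]
          for step in range(1, max_steps):
              x, y = x + dx, y + dy
              if not (0 <= int(y) < gray.shape[0] and 0 <= int(x) < gray.shape[1]):
                  break
              cur = gray[int(y), int(x)]
              jump = abs(float(cur) - float(prev))
              if jump > best_jump:
                  best_jump, best_step = jump, step
              prev = cur
          sx, sy = start
          return (sx + best_step * dx, sy + best_step * dy)

      # Toy image: bright face region on the left, dark background on the right.
      img = np.full((100, 100), 200, dtype=np.uint8)
      img[:, 60:] = 30
      print(search_edge_along_ray(img, start=(50, 50), direction=(1, 0)))   # -> (60, 50)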
  • Patent number: 8971574
    Abstract: A method of performing facial recognition and tracking of an image captured by an electronic device includes: utilizing a camera of the electronic device to capture an image including at least a face; displaying the image on a display screen of the electronic device; determining a degree of orientation of the electronic device; and adjusting an orientation of scanning lines used to scan the image for performing face detection so that the orientation of the scanning lines corresponds to the orientation of the electronic device.
    Type: Grant
    Filed: November 22, 2012
    Date of Patent: March 3, 2015
    Assignee: Ulsee Inc.
    Inventors: Zhou Ye, Ying-Ko Lu, Sheng-Wen Jeng
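    Illustrative sketch: one equivalent way to make the scan orientation follow the device orientation is to counter-rotate the captured frame before running an upright face detector. The quantisation to quarter turns and the function names are illustrative, not taken from the patent.
      import numpy as np

      def device_orientation_quadrant(ax, ay):
          """Quantise accelerometer gravity components into a 0/90/180/270-degree device orientation."""
          angle = np.degrees(np.arctan2(ax, ay)) % 360
          return int(((angle + 45) // 90) % 4)             # number of quarter turns

      def upright_view_for_detection(image, quadrant):
          """Counter-rotate the frame so an upright detector effectively scans along the
          device's current orientation (the patent instead re-orients the scanning lines)."""
          return np.rot90(image, k=quadrant)

      frame = np.zeros((480, 640, 3), dtype=np.uint8)
      q = device_orientation_quadrant(ax=9.8, ay=0.1)       # device held sideways
      print(q, upright_view_for_detection(frame, q).shape)  # -> 1 (640, 480, 3)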
  • Patent number: 8917241
    Abstract: An operating method of a display device includes controlling a shift of a cursor in a user interface reference frame according to a shift of a pointing device with reference to an initial point in a 3D spatial reference frame; and updating a position of the initial point in the 3D spatial reference frame according to an updating signal. An advantage of the present invention is that when the operating range is changed, the reference coordinates utilized by the pointing device are appropriately adjusted so as to reduce the effect of offset, allowing the pointing device to be used in different areas/directions without the cursor displayed on the display device incorrectly reflecting the shift of the pointing device.
    Type: Grant
    Filed: January 9, 2013
    Date of Patent: December 23, 2014
    Assignee: Cywee Group Limited
    Inventors: Ching-Lin Hsieh, Chin-Lung Lee, Shun-Nan Liou, Ying-Ko Lu, Zhou Ye
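    Illustrative sketch: the core of the operating method is relative pointing against a stored initial point that can be re-anchored by the updating signal. The class and the gain value below are illustrative, not part of the patent.
      class RelativePointer:
          """Cursor shift follows the pointing device's shift relative to an initial point
          in the 3D spatial reference frame; the initial point can be re-anchored."""

          def __init__(self, yaw0, pitch0, gain=20.0):
              self.origin = (yaw0, pitch0)   # initial point (degrees)
              self.gain = gain               # degrees-to-pixels scaling

          def cursor_shift(self, yaw, pitch):
              d_yaw, d_pitch = yaw - self.origin[0], pitch - self.origin[1]
              return (self.gain * d_yaw, -self.gain * d_pitch)   # (dx, dy) in pixels

          def update_origin(self, yaw, pitch):
              """Handle the updating signal: discard accumulated offset by re-anchoring."""
              self.origin = (yaw, pitch)

      p = RelativePointer(yaw0=0.0, pitch0=0.0)
      print(p.cursor_shift(2.0, -1.0))   # (40.0, 20.0)
      p.update_origin(2.0, -1.0)         # operating range changed; re-anchor
      print(p.cursor_shift(2.0, -1.0))   # (0.0, -0.0)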
  • Patent number: 8847880
    Abstract: A method and an apparatus for providing a motion library are provided, adapted to a service end device to provide a customized motion library supporting recognition of at least one motion pattern for a user end device. At least one sensing component disposed on the user end device is determined. At least one motion group is determined according to the determined sensing components, wherein each motion group comprises at least one motion pattern. The at least one motion pattern is selected, a motion database is queried to display a list of the motion groups corresponding to the selected motion patterns, and the motion groups are selected from the list. The motion patterns belonging to the selected motion groups are used to re-compile the customized motion library, which is provided to the user end device so as to enable it to recognize the selected motion patterns.
    Type: Grant
    Filed: June 21, 2011
    Date of Patent: September 30, 2014
    Assignee: Cywee Group Ltd.
    Inventors: Ying-Ko Lu, Shun-Nan Liou, Zhou Ye, Chin-Lung Li
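    Illustrative sketch: the compilation step can be read as filtering a motion database by the selected motion groups and the sensing components available on the user end device. The toy database and names below are illustrative, not from the patent.
      from dataclasses import dataclass

      @dataclass
      class MotionPattern:
          name: str
          group: str
          sensors: frozenset            # sensing components the pattern requires

      MOTION_DB = [
          MotionPattern("shake", "gesture", frozenset({"accelerometer"})),
          MotionPattern("twist", "gesture", frozenset({"gyroscope"})),
          MotionPattern("swing", "sport",   frozenset({"accelerometer", "gyroscope"})),
      ]

      def compile_library(device_sensors, selected_groups):
          """Keep only patterns whose group was selected and whose sensor needs the
          user end device can satisfy, i.e. the 'customized motion library'."""
          device_sensors = frozenset(device_sensors)
          return [p for p in MOTION_DB
                  if p.group in selected_groups and p.sensors <= device_sensors]

      print([p.name for p in compile_library({"accelerometer"}, {"gesture", "sport"})])   # ['shake']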
  • Patent number: 8793393
    Abstract: A video processing device providing multi-channel encoding with low latency is provided. The video processing device can be applied to a video server to perform video compression on game graphics for cloud gaming. With multi-channel, low-latency encoding, the video server can provide compressed video streams to a variety of client devices with low latency. As a result, users can obtain highly interactive gaming and an enjoyable entertainment experience in cloud gaming.
    Type: Grant
    Filed: November 23, 2011
    Date of Patent: July 29, 2014
    Assignee: Bluespace Corporation
    Inventors: Zhou Ye, Wenxiang Dai, Ying-Ko Lu, Yi-Hong Hsu
  • Publication number: 20140191951
    Abstract: An image-based object tracking system and an image-based object tracking method are provided. The image-based object tracking system includes an object, a camera, a computing device, and a display device. A color stripe is disposed on the surface of the object and divides the surface into a first section and a second section. The camera is configured to capture real-time images of the object, and an object tracking algorithm is stored in the computing device. The display device includes a display screen, is electrically connected to the computing device, and is configured to display the real-time images of the object. By using the image-based object tracking system provided in the invention, the object can be tracked accurately and efficiently without interference from the background image.
    Type: Application
    Filed: January 10, 2014
    Publication date: July 10, 2014
    Applicant: Cywee Group Limited
    Inventors: Zhou Ye, Sheng-Weng Jeng, Chih-Ming Chang, Hsin-Wei Hsiao, Yi-Chia Hsu, Ying-Ko Lu
  • Publication number: 20140085194
    Abstract: An image-based object tracking system includes at least a controller with two or more color clusters, an input button, a processing unit with a camera, an object tracking algorithm, and a display. The camera is configured to capture images of the controller, the processing unit is connected to the display to show processed image content, and the controller interacts directly with the displayed processed image content. The controller can have two or three color clusters located on a side surface thereof and two color clusters having concentric circular areas located on a top surface thereof; the color of the first color cluster can be the same as or different from the color of the third color cluster. An object tracking method with or without scale calibration is also provided, which includes color learning and color relearning, image capturing, separating and splitting the controller from the background, and an object pairing procedure on the controller.
    Type: Application
    Filed: November 28, 2013
    Publication date: March 27, 2014
    Applicant: Cywee Group Limited
    Inventors: Zhou Ye, Sheng-Wen Jeng, Chih-ming Chang, Hsin-Wei Hsiao, Yi-Chia Hsu, Ying-Ko Lu
  • Publication number: 20140055401
    Abstract: A method for performing an actual operation on a second device having a second screen or a touch panel from a first device having a first screen or a touch panel in real time is provided. The first screen is configured to display mirrored content of the second screen. The method includes detecting a first touch signal on the first screen of the first device, converting the first touch signal into first touch data associated with first position information, and transmitting the first touch data to the second device so that the actual operation is performed on the second device. For first and second devices having touch panels, the method includes receiving first touch data associated with first position information of the first touch panel, calculating second position information of the second touch panel based on the first position information, and performing the actual operation on the second device based on the second position information. The second device is coupled to the first device by network communication.
    Type: Application
    Filed: November 1, 2013
    Publication date: February 27, 2014
    Inventors: Zhou Ye, Pei-Chuan Liu, San-Yuan Huang, Yun-Fei Wei, Ying-Ko Lu
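    Illustrative sketch: converting the first position information into the second position information amounts to rescaling coordinates between the two screens' coordinate systems. The resolutions and function name below are illustrative.
      def map_touch(x1, y1, first_size, second_size):
          """Map a touch on the first (mirroring) screen to the corresponding position
          on the second screen by normalising against each screen's resolution."""
          w1, h1 = first_size
          w2, h2 = second_size
          return (x1 / w1 * w2, y1 / h1 * h2)

      # A touch at (640, 360) on a 1280x720 first screen maps to a 1920x1080 second screen.
      print(map_touch(640, 360, (1280, 720), (1920, 1080)))   # (960.0, 540.0)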
  • Publication number: 20140022249
    Abstract: A method of 3D morphing driven by facial tracking is provided. First, a 3D model is loaded. After that, facial feature control points and boundary control points are picked respectively. A configuration file "A.config", including the facial feature control point and boundary control point data picked for the corresponding 3D avatar, is saved. The facial tracking algorithm is started, and then "A.config" is loaded. After that, controlled morphing of a 3D avatar by facial tracking based on "A.config" is performed in real time by a deformation method with control points. Meanwhile, teeth and tongue tracking of the real-time face image, and scaling, translation, and rotation of the real-time 3D avatar image, are also provided. In addition, a control point reassignment and reconfiguration method and a pupil movement detection method are also provided in the method of 3D morphing driven by facial tracking.
    Type: Application
    Filed: July 12, 2013
    Publication date: January 23, 2014
    Inventors: Zhou Ye, Ying-Ko Lu, Yi-Chia Hsu, Sheng-Wen Jeng
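    Illustrative sketch: a deformation method with control points can be approximated by moving every mesh vertex by an inverse-distance-weighted blend of the control-point displacements; the patent does not specify this particular weighting, and all names are illustrative.
      import numpy as np

      def deform(vertices, control_pts, control_disp, power=2.0, eps=1e-6):
          """Displace vertices by an inverse-distance-weighted blend of control-point displacements."""
          d = np.linalg.norm(vertices[:, None, :] - control_pts[None, :, :], axis=2) + eps
          w = 1.0 / d**power
          w /= w.sum(axis=1, keepdims=True)
          return vertices + w @ control_disp

      verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # toy 2D avatar vertices
      ctrl  = np.array([[0.5, 1.0]])                           # one facial-feature control point
      disp  = np.array([[0.0, 0.2]])                           # displacement tracked on the face
      print(deform(verts, ctrl, disp))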
  • Publication number: 20140016823
    Abstract: A method of applying virtual makeup and producing makeover effects on a 3D face model driven by facial tracking in real time includes the steps of: capturing static or live facial images of a user; performing facial tracking of the facial image and obtaining tracking points on the captured facial image; and producing makeover effects according to the tracking points in real time. Virtual makeup can be applied using a virtual makeup input tool such as a user's finger sliding over a touch panel screen, a mouse cursor, or an object passing through the makeup-allowed area. The makeup-allowed area for producing makeover effects is defined by extracting feature points from the facial tracking points, dividing the makeup-allowed area into segments and layers, and defining and storing parameters of the makeup-allowed area. Virtual visual effects including color series, alpha blending, and/or superposition can be applied.
    Type: Application
    Filed: August 8, 2013
    Publication date: January 16, 2014
    Applicant: CYWEE GROUP LIMITED
    Inventors: Zhou Ye, Ying-Ko Lu, Yi-Chia Hsu, Sheng-Wen Jeng, Hsin-Wei Hsiao
  • Publication number: 20140008496
    Abstract: A method and system for remote control of a drone helicopter and an RC plane using a handheld device are disclosed. Piloting commands and actions are performed using the handheld device, which includes a motion sensor module with a gyro-sensor and a g-sensor for controlling roll, yaw, and pitch of the flying object under a relative or absolute coordinate system. The gyro-sensor controls both the heading and in-place rotation of the flying object around its yaw axis by rotating the handheld device around its yaw axis; the g-sensor controls pitch and roll by rotating the handheld device around its pitch and roll axes. Upon determining that the flying object is in free fall, the throttle is adjusted so as to land it safely. The flying object further has a camera; video images are transferred wirelessly to be displayed on a touch screen, and image zoom-in and zoom-out are provided via multi-touch of the touch screen. RF and IR capability is included for wireless communication.
    Type: Application
    Filed: July 5, 2012
    Publication date: January 9, 2014
    Inventors: Zhou Ye, Ying-Ko Lu
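    Illustrative sketch: the gyro-sensor's yaw rate can drive heading/rotation while tilt angles derived from the g-sensor drive pitch and roll, with a free-fall check raising the throttle. Scaling factors, thresholds, and names below are illustrative.
      import math

      def attitude_commands(gyro_yaw_rate, accel):
          """Map handheld motion to yaw, pitch, and roll commands for the flying object."""
          ax, ay, az = accel
          pitch_cmd = math.degrees(math.atan2(-ax, math.hypot(ay, az)))  # tilt about pitch axis
          roll_cmd  = math.degrees(math.atan2(ay, az))                   # tilt about roll axis
          yaw_cmd   = 0.5 * gyro_yaw_rate                                # deg/s -> command
          return yaw_cmd, pitch_cmd, roll_cmd

      def throttle_guard(accel, hover_throttle=0.5):
          """Free-fall check: near-zero measured acceleration means the object is falling,
          so raise the throttle toward a safe landing value."""
          ax, ay, az = accel
          g = math.sqrt(ax * ax + ay * ay + az * az)
          return 0.8 if g < 2.0 else hover_throttle

      print(attitude_commands(30.0, (0.0, 1.7, 9.66)))
      print(throttle_guard((0.1, 0.0, 0.3)))   # free fall detected -> 0.8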
  • Patent number: 8605048
    Abstract: The present invention discloses a method for controlling, in real time, multimedia contents on a second screen at a TX side from a first screen at an RX side. The method comprises: detecting at least one touch signal via the first screen, and converting the at least one touch signal into touch data associated with first position information defining a virtual operation on the first screen corresponding to an actual operation on the second screen, the first position information being with respect to a first coordinate system of the first screen; transmitting the touch data of the RX side to the TX side via network communication, and calculating at the TX side second position information with respect to a second coordinate system of the second screen based on the first position information; and performing the actual operation at the second screen based on the second position information.
    Type: Grant
    Filed: July 18, 2011
    Date of Patent: December 10, 2013
    Assignee: Bluespace Corporation
    Inventors: Zhou Ye, Pei-Chuan Liu, San-Yuan Huang, Yun-Fei Wei, Ying-Ko Lu
  • Publication number: 20130287294
    Abstract: A method for generating a personalized 3D model using a plurality of 2D images and a generic 3D model is provided. The method includes the following steps: extracting a plurality of feature points from the plurality of 2D images; extracting a plurality of landmark points from the generic 3D model; mapping the plurality of feature points extracted from the plurality of 2D images to the plurality of landmark points extracted from the generic 3D model so as to generate relationship parameters for a mapping algorithm; morphing the generic 3D model into a personalized 3D model with the plurality of landmark points, the relationship parameters, and the mapping algorithm; and iteratively refining the personalized 3D model with the plurality of feature points extracted from the plurality of 2D images. When a convergence condition is met, the iterative refinement of the personalized 3D model is complete and the personalized 3D model is saved to a 3D model database.
    Type: Application
    Filed: April 30, 2013
    Publication date: October 31, 2013
    Applicant: Cywee Group Limited
    Inventors: Zhou Ye, Ying-Ko Lu, Sheng-Wen Jen
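    Illustrative sketch: the mapping step, which relates the generic model's landmark points to the feature points extracted from the 2D images, can be stood in for by a least-squares similarity transform (scale, rotation, translation); the patent's actual mapping algorithm and relationship parameters are not specified here, and all names are illustrative.
      import numpy as np

      def similarity_fit(model_pts, image_pts):
          """Least-squares similarity transform mapping model landmarks onto image feature points."""
          mu_m, mu_i = model_pts.mean(axis=0), image_pts.mean(axis=0)
          M, I = model_pts - mu_m, image_pts - mu_i
          U, S, Vt = np.linalg.svd(M.T @ I)
          d = np.ones(len(S))
          if np.linalg.det(U @ Vt) < 0:            # avoid reflections
              d[-1] = -1.0
          Q = U @ np.diag(d) @ Vt                  # rotation (row-vector convention)
          s = (S * d).sum() / (M ** 2).sum()       # scale
          t = mu_i - s * mu_m @ Q                  # translation
          return s, Q, t                           # image_pts ~= s * model_pts @ Q + t

      model = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])   # toy landmark points
      image = 2.0 * model + np.array([3.0, -1.0])                  # scaled, translated copy
      s, Q, t = similarity_fit(model, image)
      print(round(s, 3), t.round(3))   # 2.0 [ 3. -1.]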
  • Patent number: 8555205
    Abstract: The present invention discloses a system for human and machine interface. The system includes a 3-dimensional (3D) image capture device, for capturing a gesture of a motion object in a period of time; a hand-held inertial device (HHID), for transmitting a control signal; and a computing device. The computing device includes a system integration and GUI module, for compensating the control signal according to an image signal corresponding to the motion object, to generate a compensated control signal.
    Type: Grant
    Filed: May 10, 2011
    Date of Patent: October 8, 2013
    Assignee: Cywee Group Limited
    Inventors: Zhou Ye, Sheng-Wen Jeng, Shun-Nan Liou, Ying-Ko Lu
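    Illustrative sketch: one simple reading of the compensation step is blending the drift-prone estimate from the hand-held inertial device toward the gesture estimate from the 3D image capture device. The blend weight and names are illustrative, not the patent's actual integration scheme.
      def compensate(inertial_pos, camera_pos, weight=0.2):
          """Blend the inertial control signal toward the camera-derived gesture estimate."""
          return tuple((1 - weight) * i + weight * c
                       for i, c in zip(inertial_pos, camera_pos))

      # The inertial estimate has drifted; the camera places the hand slightly elsewhere.
      print(compensate((0.30, 0.10, 1.00), (0.25, 0.12, 1.05)))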
  • Publication number: 20130234940
    Abstract: An operating method of a display device includes controlling a shift of a cursor in a user interface reference frame according to a shift of a pointing device with reference to an initial point in a 3D spatial reference frame; and updating a position of the initial point in the 3D spatial reference frame according to an updating signal. An advantage of the present invention is that when the operating range is changed, the reference coordinates utilized by the pointing device are appropriately adjusted so as to reduce the effect of offset, allowing the pointing device to be used in different areas/directions without the cursor displayed on the display device incorrectly reflecting the shift of the pointing device.
    Type: Application
    Filed: January 9, 2013
    Publication date: September 12, 2013
    Applicant: Cywee Group Limited
    Inventors: Ching-Lin Hsieh, Chin-Lung Lee, Shun-Nan Liou, Ying-Ko Lu, Zhou Ye
  • Publication number: 20130132510
    Abstract: A video processing device providing multi-channel encoding with low latency is provided. The video processing device can be applied to a video server to perform video compression on game graphics for cloud gaming. With multi-channel, low-latency encoding, the video server can provide compressed video streams to a variety of client devices with low latency. As a result, users can obtain highly interactive gaming and an enjoyable entertainment experience in cloud gaming.
    Type: Application
    Filed: November 23, 2011
    Publication date: May 23, 2013
    Applicant: BLUESPACE CORPORATION
    Inventors: Zhou Ye, Wenxiang Dai, Ying-Ko Lu, Yi-Hong Hsu
  • Publication number: 20130054130
    Abstract: A hybrid-computing navigation system worn by a user includes a modified motion sensor group with built-in 9-axis or 10-axis motion sensors, and a host device configured to provide navigation information, in which the modified motion sensor group is worn on the user so that the moving direction of the user is the same as the heading direction calculated from the modified motion sensor group. The modified motion sensor group provides step counting and absolute orientation in yaw, roll, and pitch using a sensor fusion technique. The navigation system further includes at least one wireless sensor at a Wi-Fi hotspot to perform sensor fusion for obtaining an absolute position from the estimated position of the user. Sensor fusion combined with a location map is used to perform location map matching and fingerprinting. A method of position estimation of a user using the navigation system is also disclosed.
    Type: Application
    Filed: October 29, 2012
    Publication date: February 28, 2013
    Applicant: CYWEE GROUP LIMITED
    Inventors: Zhou Ye, Shun-Nan Liou, Ying-Ko Lu, Chin-Lung Lee
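    Illustrative sketch: step counting plus a fused heading gives a dead-reckoned position, which the Wi-Fi fingerprint fix then pulls back toward an absolute location. Step length, weights, and names below are illustrative, not taken from the patent.
      import math

      def dead_reckoning_step(pos, heading_deg, step_length_m=0.7):
          """Advance the estimated position by one counted step along the fused heading."""
          h = math.radians(heading_deg)
          return (pos[0] + step_length_m * math.sin(h),    # east
                  pos[1] + step_length_m * math.cos(h))    # north

      def fuse_with_wifi(pdr_pos, wifi_pos, wifi_weight=0.3):
          """Pull the dead-reckoned estimate toward a Wi-Fi fingerprint fix to bound drift."""
          return tuple((1 - wifi_weight) * p + wifi_weight * w
                       for p, w in zip(pdr_pos, wifi_pos))

      pos = (0.0, 0.0)
      for _ in range(10):                                   # ten counted steps heading east
          pos = dead_reckoning_step(pos, heading_deg=90.0)
      print(fuse_with_wifi(pos, wifi_pos=(6.5, 0.4)))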