Patents Assigned to MING CHUAN UNIVERSITY
  • Patent number: 11692936
    Abstract: A biological sensing apparatus includes an optical waveguide substrate, a surface plasmon resonance (SPR) layer, and a lossy mode resonance (LMR) layer. The optical waveguide substrate includes a light input end and a light output end opposite to each other, and a biological sensing area is formed on one surface of the optical waveguide substrate between the light input end and the light output end. The SPR layer includes a metal layer and a plurality of biological probes. The metal layer is arranged on part of the biological sensing area, and the plurality of biological probes are evenly arranged on the metal layer. The LMR layer is arranged on part of the biological sensing area, and the LMR layer and the SPR layer do not overlap. The present disclosure further includes a biological sensing system and a method of using the same.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: July 4, 2023
    Assignee: MING CHUAN UNIVERSITY
    Inventor: Yu-Cheng Lin
  • Patent number: 11630105
    Abstract: A self-heating biosensor based on lossy mode resonance (LMR) includes a waveguide unit and a lossy mode resonance layer. The waveguide unit is a flat plate, including two planes and at least two sets of opposite sides. One set of the opposite sides of the waveguide unit has a light input end and a light output end. The lossy mode resonance layer is disposed on one of the planes of the waveguide unit. Two heating electrodes are formed at two positions of the lossy mode resonance layer, and the two positions correspond to one set of the opposite sides of the waveguide unit. A biomaterial sensing region having bioprobes is formed between the two heating electrodes. The present disclosure further includes a method of using the self-heating biosensor based on lossy mode resonance.
    Type: Grant
    Filed: May 4, 2022
    Date of Patent: April 18, 2023
    Assignee: MING CHUAN UNIVERSITY
    Inventor: Yu-Cheng Lin
  • Patent number: 11467093
    Abstract: An electrical polarity adjustable biosensor based on lossy mode resonance includes a first polarity module, a second polarity module, and a plurality of spacers disposed between the first polarity module and the second polarity module. A biomaterial sensing region for injecting an object to be tested is formed between a bioprobe layer of the first polarity module and a second electrode layer of the second polarity module. An electric field is formed between a lossy mode resonance layer of the first polarity module and the second electrode layer, and the electric field acts on a plurality of bioprobes of the bioprobe layer and the object to be tested. The present disclosure further includes a method of using the electrical polarity adjustable biosensor based on lossy mode resonance.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: October 11, 2022
    Assignee: MING CHUAN UNIVERSITY
    Inventor: Yu-Cheng Lin
  • Patent number: 10338695
    Abstract: An augmented reality edugaming interaction method includes the steps of: creating at least one database in a processing device and at least one target value in the database, and linking the target value with different data values; defining plural interactive object images and at least one controllable object image by the processing device; setting plural interaction statuses for the interactive object image and at least one interactive instruction for the controllable object image, and selecting one of the target values so that its data value depends on the interactive object image; setting at least one color recognition value for the processing device; if the image captured by the image capturing device has a color block corresponding to the color recognition value, defining the range of the color block in the image as a characteristic area; and letting the controllable object image depend on and be controlled within the characteristic area.
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: July 2, 2019
    Assignee: MING CHUAN UNIVERSITY
    Inventor: Kuei-Fang Hsiao
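The color-recognition step shared by the two augmented-reality patents above can be sketched as follows. This is a hypothetical minimal version (the function name, tolerance, and data layout are assumptions, not from the patents): scan an image for pixels matching a preset color recognition value and return the bounding box of the matching block as the characteristic area.

```python
TOLERANCE = 30  # assumed per-channel matching tolerance

def find_characteristic_area(image, color_value, tol=TOLERANCE):
    """image: 2D list of (r, g, b) tuples; color_value: target (r, g, b).
    Returns (top, left, bottom, right) of the matching color block,
    or None if no pixel matches."""
    matches = [(y, x)
               for y, row in enumerate(image)
               for x, px in enumerate(row)
               if all(abs(c - t) <= tol for c, t in zip(px, color_value))]
    if not matches:
        return None
    ys = [y for y, _ in matches]
    xs = [x for _, x in matches]
    return (min(ys), min(xs), max(ys), max(xs))
```

A controllable object image would then be anchored to the returned rectangle, so a cheap colored label object can drive the interaction without heavyweight image recognition.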
  • Patent number: 10296081
    Abstract: An augmented reality man-machine interactive system includes a processing device for defining an interactive object image and a controllable object image and setting a color identification value, and an image capturing device for capturing an image for the processing device. The processing device defines the range of the image having the color block as a characteristic region when the image has a color block with the color identification value, and makes the controllable object image dependent on and controllable by the characteristic region. Therefore, the present invention uses a label object of the color identification value to define a characteristic region without requiring any expensive image identification and computing device, so as to operate and control the controllable object image and interact with the interactive object image. The system is applicable for augmented reality of daily life or classroom teaching.
    Type: Grant
    Filed: August 28, 2017
    Date of Patent: May 21, 2019
    Assignee: Ming Chuan University
    Inventor: Kuei-Fang Hsiao
  • Patent number: 10277926
    Abstract: A block-based error measure method for object segmentation includes the steps of dividing a reference image having an object into plural non-overlapping blocks, superimposing the reference image with a segmented image to obtain an error ratio for each block, and defining an enhancement equation and a modification equation to suppress scattered errors and enhance the contribution of region errors, so as to calculate the error amount of the segmented image and evaluate the performance of image segmentation. Compared with the conventional pixel-based error measure method, the present invention provides a more accurate high-level semantic evaluation.
    Type: Grant
    Filed: August 25, 2017
    Date of Patent: April 30, 2019
    Assignee: Ming Chuan University
    Inventors: Chung-Lin Chia, Chaur-Heh Hsieh
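The block-based idea above can be sketched in a few lines. This is a hedged illustration, not the patented formula: the block size, the power-law weighting standing in for the enhancement/modification equations, and all names are assumptions. The key property it reproduces is that the same number of error pixels costs more when clustered in one block (a region error) than when scattered across blocks.

```python
BLOCK = 4  # assumed block size

def block_error(reference, segmented, block=BLOCK, gamma=2.0):
    """reference, segmented: equal-sized 2D lists of 0/1 pixels.
    Returns a weighted error in [0, 1]; gamma > 1 enhances blocks
    with a high error ratio (region errors) and suppresses
    scattered single-pixel errors."""
    h, w = len(reference), len(reference[0])
    total = 0.0
    n_blocks = 0
    for by in range(0, h, block):
        for bx in range(0, w, block):
            errs = size = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    size += 1
                    errs += reference[y][x] != segmented[y][x]
            ratio = errs / size
            total += ratio ** gamma  # non-linear weighting per block
            n_blocks += 1
    return total / n_blocks
```

With gamma = 2, eight error pixels concentrated in one block score higher than the same eight pixels spread two-per-block, which is the behavior a pixel-based measure cannot express.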
  • Patent number: 9934599
    Abstract: A method of extracting and reconstructing court lines includes the steps of binarizing a court image of a court including court lines to form a binary image; performing horizontal projection for the binary image; searching for plural corners in the binary image and defining a court line range by the corners; forming plural linear segments from images within the court line range by linear transformation; defining at least one first cluster and at least one second cluster according to the characteristics of the linear segments and categorizing the linear segments into plural groups; taking an average of each group as a standard court line and creating a linear equation of the standard court line to locate the point of intersection of the standard court lines; and reconstructing the court lines according to the point of intersection.
    Type: Grant
    Filed: June 28, 2016
    Date of Patent: April 3, 2018
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Hsing-Yu Yeh
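The final step above, locating the intersections of the standard court lines from their linear equations, can be sketched as solving each pair of equations a·x + b·y = c. This is a minimal hypothetical fragment; the binarization, projection, and clustering steps of the patent are omitted.

```python
def line_intersection(l1, l2):
    """l1, l2: (a, b, c) coefficients of a line a*x + b*y = c.
    Returns the intersection point (x, y), or None if the lines
    are parallel (no intersection to reconstruct from)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1  # Cramer's rule determinant
    if abs(det) < 1e-12:
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)
```

Each intersection of a horizontal-cluster line with a vertical-cluster line yields one court corner, and the court lines are redrawn through those corners.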
  • Patent number: 9898835
    Abstract: A method for creating a face replacement database includes steps of creating a face database for storing a plurality of replaced images with a face image rotation angle by using a method for estimating a 3D vector angle from a 2D face image, and defining a region to be replaced in the replaced image. The method for estimating a 3D vector angle from a 2D face image includes the steps of creating a feature vector template; detecting the corners of the eyes and mouth in a face image; defining a sharp point in a vertical direction of the quadrilateral plane, and converting the vertices into 3D coordinates; computing the four vectors from the sharp point to the four vertices to obtain a vector set, and matching the vector set with the feature vector template to obtain an angle which is defined as a rotation angle of the input face image.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: February 20, 2018
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Hsiu-Chien Hsu
  • Patent number: 9898836
    Abstract: A method for automatic video face replacement includes steps of capturing a face image, detecting a rotation angle of the face image, defining a region to be replaced in the face image, and pasting a region to be replaced of one of the replaced images having the corresponding rotation angle of the face image into a target replacing region. Therefore, the region to be replaced of a static or dynamic face image can be replaced by a replaced image quickly by a single camera without requiring a manual setting of the feature points of a target image. These methods support face replacement at different angles and compensate for the color difference to provide a natural look of the replaced image.
    Type: Grant
    Filed: July 27, 2016
    Date of Patent: February 20, 2018
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Hsiu-Chien Hsu
  • Patent number: 9639738
    Abstract: A method for estimating a 3D vector angle from a 2D face image, a method for creating a face replacement database and a method for replacing a face image include steps of capturing a face image, detecting a rotation angle of the face image, defining a region to be replaced in the face image, creating a face database for storing replaced images corresponding to the region to be replaced, and pasting one of the replaced images having the corresponding rotation angle of the face image into a target replacing region. Therefore, the region to be replaced of a static or dynamic face image can be replaced by a replaced image quickly by a single camera without requiring a manual setting of the feature points of a target image. These methods support face replacement at different angles and compensate for the color difference to provide a natural look of the replaced image.
    Type: Grant
    Filed: February 6, 2015
    Date of Patent: May 2, 2017
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Hsiu-Chien Hsu
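The vector-matching step shared by the three face-replacement patents above can be sketched as follows. This is a hypothetical illustration (function names, the squared-difference distance, and the template layout are assumptions): compute the four 3D vectors from the sharp point to the quadrilateral vertices, then pick the template angle whose stored vector set is closest.

```python
def vector_set(apex, vertices):
    """apex: (x, y, z) sharp point; vertices: four (x, y, z) corners
    of the eye/mouth quadrilateral. Returns the four apex-to-vertex
    vectors."""
    ax, ay, az = apex
    return [(x - ax, y - ay, z - az) for x, y, z in vertices]

def match_angle(vset, templates):
    """templates: dict mapping rotation angle -> reference vector set.
    Returns the angle minimizing the summed squared component-wise
    difference between vector sets."""
    def dist(vs1, vs2):
        return sum((a - b) ** 2
                   for v1, v2 in zip(vs1, vs2)
                   for a, b in zip(v1, v2))
    return min(templates, key=lambda ang: dist(vset, templates[ang]))
```

The matched angle then selects which pre-stored replaced image is pasted into the target replacing region.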
  • Patent number: 9429527
    Abstract: An automatic optical inspection method for periodic patterns includes defining regular control points in a periodic pattern, forming aligned images surrounded by the control points, obtaining a median image and a deviation image from the aligned images and defining upper- and lower-limit images to form an adaptive model, using the adaptive model to compare each point of all aligned images, and defining any point of an aligned image having a gray-scale value greater than that of the upper-limit image or smaller than that of the lower-limit image as a defect area. The optical inspection method is applicable to defect detection of various periodic patterns, and users simply need to manually select a first reference point to a fifth reference point from the control points and further select a rectangular range of one of the control points to create an edge image to detect a defect of the periodic pattern.
    Type: Grant
    Filed: July 1, 2015
    Date of Patent: August 30, 2016
    Assignee: Ming Chuan University
    Inventor: Mao-Hsiung Hung
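The adaptive model above can be sketched in miniature. This is a hedged approximation (the deviation multiplier, the floor on the deviation, and all names are assumptions): take a per-pixel median and deviation across the stack of aligned images, derive upper- and lower-limit images, and flag pixels of a test image falling outside those limits.

```python
from statistics import median

K = 3  # assumed multiplier on the deviation image

def adaptive_defects(aligned, test, k=K):
    """aligned: list of equal-sized 2D gray-scale images (lists of
    lists); test: one aligned image to inspect.
    Returns a 2D 0/1 defect mask."""
    h, w = len(test), len(test[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x] for img in aligned]
            med = median(vals)
            # median absolute deviation; floor at 1 to avoid a
            # zero-width acceptance band on perfectly uniform pixels
            dev = median(abs(v - med) for v in vals) or 1
            upper, lower = med + k * dev, med - k * dev
            if test[y][x] > upper or test[y][x] < lower:
                mask[y][x] = 1
    return mask
```

Because the limits are learned per pixel from the aligned repetitions, the same model adapts to different periodic patterns without a hand-tuned global threshold.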
  • Patent number: 9171229
    Abstract: A visual object tracking method includes the steps of: setting an object window having a target in a video image; defining a search window greater than the object window; analyzing an image pixel of the object window to generate a color histogram for defining a color filter which includes a dominant color characteristic of the target; using the color filter to generate an object template and a dominant color map in the object window and the search window respectively, the object template including a shape characteristic of the target, the dominant color map including at least one candidate block; comparing the similarity between the object template and the candidate block to obtain a probability distribution map, and using the probability distribution map to compute the mass center of the target. The method generates the probability map by the color and shape characteristics to compute the mass center.
    Type: Grant
    Filed: February 20, 2014
    Date of Patent: October 27, 2015
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Shu-Wei Chou
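The dominant-color steps of the tracking method above can be sketched as follows. This is a simplified illustration (the full method also matches an object template against candidate blocks and builds a probability distribution map, omitted here; names are assumptions): build a color histogram of the object window, keep its dominant color as the filter, and compute the mass center of matching pixels.

```python
from collections import Counter

def dominant_color(window):
    """window: 2D list of hashable color values. Returns the most
    frequent color, used as the dominant color filter."""
    return Counter(px for row in window for px in row).most_common(1)[0][0]

def mass_center(window, color):
    """Mass center (y, x) of the pixels matching the dominant color,
    a stand-in for the probability-weighted mass center of the
    patent's distribution map."""
    pts = [(y, x) for y, row in enumerate(window)
           for x, px in enumerate(row) if px == color]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```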
  • Publication number: 20150161858
    Abstract: A speed display device includes a wearing unit to which a display unit is connected. The display unit has a lighting module. A satellite positioning unit is connected to the wearing unit and the display unit. The satellite positioning unit has a processing module which is wirelessly connected with a global satellite system and receives at least one position-time signal of the wearing unit from the global satellite system. The processing module calculates a movement speed of the wearing unit according to the at least one position-time signal, and the movement speed is displayed by the lighting module. Drivers behind the wearer thus receive clear and precise speed information, helping them avoid making a wrong judgment.
    Type: Application
    Filed: February 21, 2014
    Publication date: June 11, 2015
    Applicant: MING CHUAN UNIVERSITY
    Inventors: TA-YU FU, YUN-CHENG HSU, JUI-MING CHANG, KAI-CHIN CHOU
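The speed computation the processing module performs can be sketched from two position-time fixes: great-circle distance divided by the time difference. A minimal illustration, assuming fixes of the form (latitude, longitude, time); the patent does not specify the distance formula, so the standard haversine formula is used here.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def speed_mps(fix1, fix2):
    """fix = (lat_deg, lon_deg, t_seconds).
    Movement speed between the two fixes, in m/s."""
    d = haversine_m(fix1[0], fix1[1], fix2[0], fix2[1])
    dt = fix2[2] - fix1[2]
    return d / dt
```

The resulting speed would then be mapped to the lighting module's display for the drivers behind.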
  • Publication number: 20150117706
    Abstract: A visual object tracking method includes the steps of: setting an object window having a target in a video image; defining a search window greater than the object window; analyzing an image pixel of the object window to generate a color histogram for defining a color filter which includes a dominant color characteristic of the target; using the color filter to generate an object template and a dominant color map in the object window and the search window respectively, the object template including a shape characteristic of the target, the dominant color map including at least one candidate block; comparing the similarity between the object template and the candidate block to obtain a probability distribution map, and using the probability distribution map to compute the mass center of the target. The method generates the probability map by the color and shape characteristics to compute the mass center.
    Type: Application
    Filed: February 20, 2014
    Publication date: April 30, 2015
    Applicant: MING CHUAN UNIVERSITY
    Inventors: CHAUR-HEH HSIEH, SHU-WEI CHOU
  • Patent number: 8919955
    Abstract: Disclosed are a virtual glasses try-on method and apparatus. The apparatus includes an image capturing unit for capturing a user's image and a processing device for detecting a face image from the user's image, storing a glasses model, defining a first feature point of a lens of the glasses model and a second feature point at the center of a frame, and obtaining vertical vectors of the first and second feature points to find a third feature point. Two eye images are searched and binarized into a binarized picture that is divided into an eye area and a non-eye area. A center point between first and second extreme values is found, and vertical vectors of the first extreme value and the center point are obtained to find an example point. An affine transformation of the feature points is performed and attached to the face image to form a try-on image.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: December 30, 2014
    Assignee: Ming Chuan University
    Inventors: Chaur-Heh Hsieh, Wan-Yu Huang, Jeng-Sheng Yeh
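The last step above, the affine transformation that maps the glasses feature points onto the face image, can be sketched as follows. A minimal hypothetical fragment: the 2x3 matrix values in the test are illustrative, not from the patent, which derives them from the matched feature points.

```python
def affine(points, m):
    """points: list of (x, y); m: 2x3 affine matrix
    [[a, b, tx], [c, d, ty]]. Returns the transformed points
    (a*x + b*y + tx, c*x + d*y + ty)."""
    return [(m[0][0] * x + m[0][1] * y + m[0][2],
             m[1][0] * x + m[1][1] * y + m[1][2]) for x, y in points]
```

An affine map covers the translation, rotation, and scaling needed to align the glasses model with the detected eye feature points in one matrix multiply per point.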
  • Publication number: 20140354947
    Abstract: Disclosed are a virtual glasses try-on method and apparatus. The apparatus includes an image capturing unit for capturing a user's image and a processing device for detecting a face image from the user's image, storing a glasses model, defining a first feature point of a lens of the glasses model and a second feature point at the center of a frame, and obtaining vertical vectors of the first and second feature points to find a third feature point. Two eye images are searched and binarized into a binarized picture that is divided into an eye area and a non-eye area. A center point between first and second extreme values is found, and vertical vectors of the first extreme value and the center point are obtained to find an example point. An affine transformation of the feature points is performed and attached to the face image to form a try-on image.
    Type: Application
    Filed: June 20, 2013
    Publication date: December 4, 2014
    Applicant: MING CHUAN UNIVERSITY
    Inventors: CHAUR-HEH HSIEH, WAN-YU HUANG, JENG-SHENG YEH
  • Patent number: 8762300
    Abstract: The present invention provides a method for document classification, especially an adaptive learning method for document classification. The document includes a plurality of feature words. The method includes steps of calculating a plurality of similarities between the document and a categorical basic knowledge; calculating a first ratio of a first largest similarity to a second largest similarity of the plurality of similarities; storing the feature words of the document as an extensive categorical knowledge when the first ratio is larger than a first threshold value; and updating the categorical basic knowledge by using the extensive categorical knowledge.
    Type: Grant
    Filed: October 18, 2011
    Date of Patent: June 24, 2014
    Assignee: Ming Chuan University
    Inventors: Yang-Cheng Lu, Jen-Nan Chen, Yu-Chen Wei
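The adaptive-learning decision above can be sketched as follows. This is a hedged illustration: the patent does not fix a similarity measure or threshold, so cosine similarity and an example ratio threshold are assumed here. The document is scored against each category; if the largest similarity sufficiently dominates the second largest, the document's feature words may be stored as extensive categorical knowledge for that category.

```python
from math import sqrt

THRESHOLD = 1.5  # assumed first-ratio threshold

def cosine(a, b):
    """a, b: dicts mapping feature word -> weight."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_and_learn(doc, categories, threshold=THRESHOLD):
    """categories: dict of name -> feature-weight dict.
    Returns the winning category name when the ratio of the first
    largest to the second largest similarity exceeds the threshold
    (i.e. the classification is confident enough to learn from),
    else None."""
    sims = sorted(((cosine(doc, v), k) for k, v in categories.items()),
                  reverse=True)
    first, second = sims[0], sims[1]
    if second[0] == 0 or first[0] / second[0] > threshold:
        return first[1]
    return None
```

A returned category name signals that the categorical basic knowledge can be updated with the document's feature words; None signals an ambiguous document that should not pollute the knowledge base.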
  • Publication number: 20130097104
    Abstract: The present invention provides a method for document classification, especially an adaptive learning method for document classification. The document includes a plurality of feature words. The method includes steps of calculating a plurality of similarities between the document and a categorical basic knowledge; calculating a first ratio of a first largest similarity to a second largest similarity of the plurality of similarities; storing the feature words of the document as an extensive categorical knowledge when the first ratio is larger than a first threshold value; and updating the categorical basic knowledge by using the extensive categorical knowledge.
    Type: Application
    Filed: October 18, 2011
    Publication date: April 18, 2013
    Applicant: MING CHUAN UNIVERSITY
    Inventors: Yang-Cheng Lu, Jen-Nan Chen, Yu-Chen Wei