Patents by Inventor Ming-Jiun Wang
Ming-Jiun Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9030468
Abstract: A method for depth map generation is disclosed, capable of generating a depth map corresponding to an image signal for use in a 2D-to-3D image transformation system. In the depth map generated by the disclosed method, each of the plural image regions of the image signal is assigned a depth value. In addition, by comparing the depth map with another depth map from an earlier time point, the disclosed method can generate a modulated depth map that assigns a depth value to each of the plural image regions of the image signal more precisely. Thus, the transformation performance and efficiency of the 2D-to-3D image transformation system are improved.
Type: Grant
Filed: April 26, 2012
Date of Patent: May 12, 2015
Assignee: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang, Chun-Fu Chen
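The two stages the abstract describes, region-wise depth assignment and temporal modulation against the earlier depth map, could be sketched as follows. The blending weight `alpha` and the nested-list image representation are illustrative assumptions, not details taken from the patent:

```python
def assign_region_depths(labels, region_depths):
    # labels: 2D list of region ids for each pixel;
    # region_depths: region id -> depth value for that region.
    # Every pixel of a region receives that region's depth value.
    return [[region_depths[r] for r in row] for row in labels]

def modulate_depth_map(current, previous, alpha=0.7):
    # Blend the current depth map with the one from the earlier time
    # point to stabilize the per-region depth values over time
    # (alpha is a hypothetical smoothing weight).
    return [[alpha * c + (1.0 - alpha) * p
             for c, p in zip(crow, prow)]
            for crow, prow in zip(current, previous)]
```

A usage pass would assign depths from a segmentation, then modulate against the previous frame's map before handing the result to the 3D rendering stage.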
-
Patent number: 8774503
Abstract: A method for color feature extraction extracts a color feature vector representative of the color of each image pixel contained in an image signal. The method comprises: receiving the image signal; mapping the image signal to a color space model, where the color of each of the plural image pixels is represented by a first parameter, a second parameter, and a third parameter; obtaining an adjusted second parameter; clustering the plural image pixels into plural color regions or plural fuzzy regions of a color plane of the color space model; and designating the color feature vector to each of the plural image pixels based on the clustering result.
Type: Grant
Filed: April 26, 2012
Date of Patent: July 8, 2014
Assignee: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang, Chun-Fu Chen
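A minimal sketch of the abstract's flow, using HSV as a stand-in for the color space model (hue, saturation, value as the first, second, and third parameters). The saturation adjustment, the fixed hue-sector clustering, and the one-hot feature vector are simplifying assumptions, not the patent's actual definitions:

```python
import colorsys

def color_feature_vector(r, g, b, n_regions=6):
    # Map the pixel to an HSV-style color space model.
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # "Adjusted second parameter": here saturation scaled by value,
    # a hypothetical stand-in for the patent's adjustment step.
    s_adj = s * v
    # Cluster the pixel into one of n_regions hue sectors of the
    # color plane (hard regions; the patent also allows fuzzy regions).
    region = int(h * n_regions) % n_regions
    # Designate a feature vector: one-hot region membership weighted
    # by the adjusted saturation.
    vec = [0.0] * n_regions
    vec[region] = s_adj
    return vec
```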
-
Patent number: 8761501
Abstract: A method for 3D video content generation is disclosed, capable of transforming a 2D image into a 3D video through an appropriate sequence of operations. The method comprises the following steps: (A) receiving a 2D image and generating an ROI distribution map from the 2D image; (B) executing a color feature capture process to form a plurality of color feature regions; (C) executing an image segmentation process based on the texture features of the plurality of color feature regions to form an image region distribution map; (D) executing a depth map generation process to generate a depth map based on the ROI distribution map and the image region distribution map; (E) executing a 3D image generation process to form the 3D image based on the image region distribution map and the depth map; and (F) chaining a plurality of the 3D images to form the 3D video based on a frame rate.
Type: Grant
Filed: April 26, 2012
Date of Patent: June 24, 2014
Assignee: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang
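The per-frame pipeline (A)-(E) and the chaining step (F) could be wired together as below. The step implementations are passed in as callables because the patent defines the stages, not their internals; the function names and the video dictionary are illustrative:

```python
def make_3d_frame(image_2d, roi_fn, color_fn, segment_fn, depth_fn, render_fn):
    roi = roi_fn(image_2d)               # (A) ROI distribution map
    color_regions = color_fn(image_2d)   # (B) color feature regions
    region_map = segment_fn(color_regions)  # (C) image region distribution map
    depth = depth_fn(roi, region_map)    # (D) depth map
    return render_fn(region_map, depth)  # (E) 3D image

def chain_frames(frames_3d, frame_rate):
    # (F) chain the per-frame 3D images into a video at the frame rate
    return {"frame_rate": frame_rate, "frames": list(frames_3d)}
```

With real stage functions substituted in, a 2D clip becomes a sequence of `make_3d_frame` calls folded into one `chain_frames` result.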
-
Patent number: 8621414
Abstract: A method of determining a design framework is implemented by an algorithm analyzer. The method includes configuring the algorithm analyzer to perform intrinsic complexity analysis of an algorithm for a predetermined application to obtain a set of parameters representing intrinsic characteristics of the algorithm. The method also includes configuring the algorithm analyzer to establish candidate design frameworks based on the parameters. Each candidate design framework includes a set of design constraints corresponding to the algorithm and which are used when designing a hardware and/or software configuration for implementing the predetermined application. The method also includes configuring the algorithm analyzer to analyze the suitability of the set of design constraints of each candidate design framework based on given specification restrictions of the predetermined application to determine which candidate design framework(s) is suited for the predetermined application.
Type: Grant
Filed: April 21, 2010
Date of Patent: December 31, 2013
Assignee: National Cheng Kung University
Inventors: Gwo-Giun Lee, Ming-Jiun Wang, He-Yuan Lin
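The final selection step, checking each candidate framework's design constraints against the application's specification restrictions, might look like the following. The constraint names and the "every constraint within its limit" rule are hypothetical simplifications of the suitability analysis the abstract describes:

```python
def suitable_frameworks(candidates, spec_restrictions):
    # candidates: framework name -> design constraints, e.g.
    #   {"power_w": 5, "latency_ms": 2}  (illustrative metrics).
    # spec_restrictions: the same metrics with the application's limits.
    # A framework is suited when every constraint fits its restriction.
    return [
        name
        for name, constraints in candidates.items()
        if all(constraints.get(metric, 0) <= limit
               for metric, limit in spec_restrictions.items())
    ]
```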
-
Patent number: 8391365
Abstract: A motion estimation method includes: (A) defining one pixel in a reference image as a center of search (CS) corresponding to a target pixel set in a current image; (B) determining a center error (CE) signal; (C) defining another pixel in the reference image as a target of search (TS) with reference to the CS, one candidate search vector available for selection from a vector set, and a step size; (D) determining a target error (TE) signal; (E) determining whether to update the CS and the CE signal; (F) if so, updating the CS, the CE signal and the vector set; (G) repeating steps (C) to (F) using a candidate search vector selected from the vector set and the same step size until there is no candidate search vector available for selection in the vector set; (H) repeating steps (C) to (G) using a smaller step size until a predetermined value is reached; and (I) computing a motion vector based on the target pixel set and one pixel set that includes the CS.
Type: Grant
Filed: March 20, 2009
Date of Patent: March 5, 2013
Assignee: National Cheng Kung University
Inventors: Gwo-Giun Lee, He-Yuan Lin, Ming-Jiun Wang
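Steps (A)-(I) amount to a step-halving local search. The sketch below uses sum of absolute differences (SAD) for the error signals and a four-neighbor candidate vector set; both choices are assumptions for illustration, since the patent does not fix a particular error metric or vector set here:

```python
def sad(cur_block, ref, cx, cy):
    # Error between the target pixel set and the reference pixels
    # whose top-left corner is (cx, cy): sum of absolute differences.
    h, w = len(cur_block), len(cur_block[0])
    return sum(abs(cur_block[y][x] - ref[cy + y][cx + x])
               for y in range(h) for x in range(w))

def estimate_motion(cur_block, ref, start, step=4):
    h, w = len(cur_block), len(cur_block[0])
    rh, rw = len(ref), len(ref[0])
    cx, cy = start                       # (A) center of search (CS)
    ce = sad(cur_block, ref, cx, cy)     # (B) center error (CE)
    while step >= 1:                     # (H) repeat with smaller steps
        moved = True
        while moved:                     # (G) retry vectors at this step
            moved = False
            for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # vector set
                tx, ty = cx + dx * step, cy + dy * step  # (C) target of search
                if tx < 0 or ty < 0 or tx + w > rw or ty + h > rh:
                    continue
                te = sad(cur_block, ref, tx, ty)         # (D) target error (TE)
                if te < ce:              # (E)/(F) update CS and CE
                    cx, cy, ce = tx, ty, te
                    moved = True
        step //= 2
    return (cx - start[0], cy - start[1])  # (I) motion vector
```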
-
Publication number: 20120301019
Abstract: A method for color feature extraction extracts a color feature vector representative of the color of each image pixel contained in an image signal. The method comprises: receiving the image signal; mapping the image signal to a color space model, where the color of each of the plural image pixels is represented by a first parameter, a second parameter, and a third parameter; obtaining an adjusted second parameter; clustering the plural image pixels into plural color regions or plural fuzzy regions of a color plane of the color space model; and designating the color feature vector to each of the plural image pixels based on the clustering result.
Type: Application
Filed: April 26, 2012
Publication date: November 29, 2012
Applicant: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang, Chun-Fu Chen
-
Publication number: 20120294521
Abstract: A method for 3D video content generation is disclosed, capable of transforming a 2D image into a 3D video through an appropriate sequence of operations. The method comprises the following steps: (A) receiving a 2D image and generating an ROI distribution map from the 2D image; (B) executing a color feature capture process to form a plurality of color feature regions; (C) executing an image segmentation process based on the texture features of the plurality of color feature regions to form an image region distribution map; (D) executing a depth map generation process to generate a depth map based on the ROI distribution map and the image region distribution map; (E) executing a 3D image generation process to form the 3D image based on the image region distribution map and the depth map; and (F) chaining a plurality of the 3D images to form the 3D video based on a frame rate.
Type: Application
Filed: April 26, 2012
Publication date: November 22, 2012
Applicant: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang
-
Publication number: 20120293499
Abstract: A method for depth map generation is disclosed, capable of generating a depth map corresponding to an image signal for use in a 2D-to-3D image transformation system. In the depth map generated by the disclosed method, each of the plural image regions of the image signal is assigned a depth value. In addition, by comparing the depth map with another depth map from an earlier time point, the disclosed method can generate a modulated depth map that assigns a depth value to each of the plural image regions of the image signal more precisely. Thus, the transformation performance and efficiency of the 2D-to-3D image transformation system are improved.
Type: Application
Filed: April 26, 2012
Publication date: November 22, 2012
Applicant: National Cheng Kung University
Inventors: Gwo Giun (Chris) Lee, He-Yuan Lin, Ming-Jiun Wang, Chun-Fu Chen
-
Patent number: 8150150
Abstract: A method and system of extracting a perceptual feature set for image/video segmentation are disclosed. An input image is converted to obtain a hue component and a saturation component, where the hue component is quantized into a number of quantum values. After weighting the quantized hue component with the saturation component, the weighted quantized hue component and the saturation component are subjected to a statistical operation in order to extract feature vectors. Accordingly, the method and system provide overall segmentation results that are very close to human interpretation.
Type: Grant
Filed: March 2, 2009
Date of Patent: April 3, 2012
Assignees: Himax Technologies Limited, NCKU Research and Development Foundation
Inventors: Gwo Giun Lee, Ming-Jiun Wang, Ling-Hsiu Huang
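The quantize-then-weight idea could be sketched as a saturation-weighted hue histogram. Using a normalized histogram as the "statistical operation", and the bin count of 8, are illustrative choices; the patent's actual statistic and quantization are not specified here:

```python
import colorsys

def perceptual_features(pixels, n_bins=8):
    # pixels: iterable of (r, g, b) tuples in 0..255.
    # Quantize hue into n_bins quantum values, weight each bin by the
    # pixel's saturation, and normalize the result into a feature vector.
    hist = [0.0] * n_bins
    total = 0.0
    for r, g, b in pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        bin_idx = min(int(h * n_bins), n_bins - 1)
        hist[bin_idx] += s   # saturation-weighted quantized hue
        total += s
    return [w / total for w in hist] if total else hist
```

Weighting by saturation down-weights near-gray pixels, whose hue is perceptually unreliable, which is the intuition behind the segmentation results tracking human interpretation.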
-
Publication number: 20110265057
Abstract: A method of determining a design framework is implemented by an algorithm analyzer. The method includes configuring the algorithm analyzer to perform intrinsic complexity analysis of an algorithm for a predetermined application to obtain a set of parameters representing intrinsic characteristics of the algorithm. The method also includes configuring the algorithm analyzer to establish candidate design frameworks based on the parameters. Each candidate design framework includes a set of design constraints corresponding to the algorithm and which are used when designing a hardware and/or software configuration for implementing the predetermined application. The method also includes configuring the algorithm analyzer to analyze the suitability of the set of design constraints of each candidate design framework based on given specification restrictions of the predetermined application to determine which candidate design framework(s) is suited for the predetermined application.
Type: Application
Filed: April 21, 2010
Publication date: October 27, 2011
Applicant: National Cheng Kung University
Inventors: Gwo-Giun Lee, Ming-Jiun Wang, He-Yuan Lin
-
Publication number: 20100239017
Abstract: A motion estimation method includes: (A) defining one pixel in a reference image as a center of search (CS) corresponding to a target pixel set in a current image; (B) determining a center error (CE) signal; (C) defining another pixel in the reference image as a target of search (TS) with reference to the CS, one candidate search vector available for selection from a vector set, and a step size; (D) determining a target error (TE) signal; (E) determining whether to update the CS and the CE signal; (F) if so, updating the CS, the CE signal and the vector set; (G) repeating steps (C) to (F) using a candidate search vector selected from the vector set and the same step size until there is no candidate search vector available for selection in the vector set; (H) repeating steps (C) to (G) using a smaller step size until a predetermined value is reached; and (I) computing a motion vector based on the target pixel set and one pixel set that includes the CS.
Type: Application
Filed: March 20, 2009
Publication date: September 23, 2010
Applicant: National Cheng Kung University
Inventors: Gwo-Giun Lee, He-Yuan Lin, Ming-Jiun Wang
-
Publication number: 20100220924
Abstract: A method and system of extracting a perceptual feature set for image/video segmentation are disclosed. An input image is converted to obtain a hue component and a saturation component, where the hue component is quantized into a number of quantum values. After weighting the quantized hue component with the saturation component, the weighted quantized hue component and the saturation component are subjected to a statistical operation in order to extract feature vectors. Accordingly, the method and system provide overall segmentation results that are very close to human interpretation.
Type: Application
Filed: March 2, 2009
Publication date: September 2, 2010
Inventors: Gwo Giun Lee, Ming-Jiun Wang, Ling-Hsiu Huang
-
Publication number: 20100220893
Abstract: A method and system of mono-view depth estimation are disclosed. A two-dimensional (2D) image is first segmented into a number of objects. A depth diffusion region (DDR), such as the ground or a floor, is then detected among the objects. The DDR generally includes a horizontal plane. The DDR is assigned depth values, and each object connected to the DDR is assigned depth according to the depth of the DDR at the connected site.
Type: Application
Filed: March 2, 2009
Publication date: September 2, 2010
Inventors: Gwo Giun Lee, Ming-Jiun Wang, Ling-Hsiu Huang
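The core assignment rule, giving each object the DDR's depth at the row where it touches the ground plane, could be sketched as below. The linear bottom-near/top-far ramp and the object-to-contact-row mapping are illustrative assumptions; the patent does not commit to a specific depth model here:

```python
def ddr_depth(row, height):
    # Depth of the ground plane (DDR) at a given image row: a simple
    # linear ramp where the bottom row (largest index) is nearest
    # (depth 0.0) and the top row is farthest (depth 1.0).
    return 1.0 - row / (height - 1)

def assign_object_depths(objects, height):
    # objects: name -> image row where the object connects to the DDR.
    # Each object inherits the DDR's depth at its connected site.
    return {name: ddr_depth(row, height) for name, row in objects.items()}
```

An object standing on the floor near the bottom of the frame thus lands close to the camera, matching the monocular cue the abstract relies on.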