Patents by Inventor Chia-Te Chou
Chia-Te Chou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20140230635
  Abstract: A blended yarn comprises a plurality of first fibers and a plurality of second fibers. A coefficient of friction of the second fibers is greater than a coefficient of friction of the first fibers. Abrasion resistance characteristics of the second fibers are greater than abrasion resistance properties of the first fibers. A gripping ability of the second fibers is greater than a gripping ability of the first fibers. The plurality of second fibers are combined with the plurality of first fibers such that the first fibers extend along the length of the blended yarn and the second fibers do not extend along the length of the blended yarn; at least a portion of the second fibers are engaged with and extend from the plurality of first fibers to effectively define surface characteristics of the blended yarn.
  Type: Application
  Filed: April 25, 2014
  Publication date: August 21, 2014
  Applicant: Samson Rope Technologies
  Inventors: Justin Gilmore, David E. O'Neal, Danielle D. Stenvers, Chia-te Chou, Ronald L. Bryant, Eric W. McCorkle
- Patent number: 8797446
  Abstract: An optical imaging device includes a display panel whereon a coordinate detecting area is formed, at least one first reflective optical unit installed on an outside corner of the display panel for reflecting light transmitted from an object moving within the coordinate detecting area, at least one second reflective optical unit installed outside the display panel for reflecting the light reflected from the at least one first reflective optical unit, an image capturing module for capturing the light reflected from the at least one second reflective optical unit so as to capture an image of the object, and a control module coupled to the image capturing module for receiving the image captured by the image capturing module and for calculating a coordinate value of the object within the coordinate detecting area according to the image.
  Type: Grant
  Filed: December 30, 2011
  Date of Patent: August 5, 2014
  Assignee: Wistron Corporation
  Inventor: Chia-Te Chou
- Publication number: 20140210704
  Abstract: A gesture recognizing and controlling method and device thereof are provided. The gesture recognizing and controlling method includes the following steps. First, a pending image having depth information is captured, in which the pending image includes a human form image. The human form image is analyzed so as to obtain hand skeleton information having a first skeleton and a second skeleton. It is determined whether the first skeleton and the second skeleton have an intersection point. If yes, it is determined whether an included angle formed by the first skeleton and the second skeleton is within a predetermined angle range. When the included angle is within the predetermined angle range, a controlling signal is output accordingly.
  Type: Application
  Filed: October 11, 2013
  Publication date: July 31, 2014
  Applicant: Wistron Corporation
  Inventors: Chia-Te Chou, Shou-Te Wei, Wei-Jei Chiu, Chih-Hao Huang
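The intersection-and-angle test described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, assuming 2-D skeleton endpoints and a hypothetical 60–120 degree trigger range (the abstract does not specify the range):

```python
import math

def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None  # parallel segments never cross
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def included_angle_deg(p1, p2, p3, p4):
    """Angle between the direction vectors of the two skeletons, in degrees."""
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p4[0] - p3[0], p4[1] - p3[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1); n2 = math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def recognize(p1, p2, p3, p4, angle_range=(60.0, 120.0)):
    """Emit a control signal only when the skeletons cross at an in-range angle."""
    if segment_intersection(p1, p2, p3, p4) is None:
        return None
    lo, hi = angle_range
    return "control" if lo <= included_angle_deg(p1, p2, p3, p4) <= hi else None
```

Two segments crossing at roughly 90 degrees trigger the signal; parallel or shallow-angle pairs do not.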
- Publication number: 20140204015
  Abstract: A gesture recognition module, for recognizing a gesture of a user, includes a detecting unit, for capturing at least one hand image of a hand of the user, so as to sequentially acquire a first coordinate and a second coordinate; a computing unit, coupled to the detecting unit, for defining a first zone and a second zone according to the first coordinate and the second coordinate, respectively, and calculating a first area and a second area according to the first zone and the second zone; and a determining unit, coupled to the detecting unit and the computing unit, for recognizing the gesture according to the first coordinate, the second coordinate, the first area and the second area.
  Type: Application
  Filed: May 13, 2013
  Publication date: July 24, 2014
  Inventors: Chung-Wei Lee, Chih-Hsuan Lee, Chia-Te Chou, Shou-Te Wei
- Publication number: 20140185868
  Abstract: A gesture recognition module for recognizing a gesture of a user includes a detecting unit, including at least one image capture device, for capturing at least one image of a hand of the user, to obtain a first position and a second position of the hand sequentially; a computing unit, electrically coupled to the detecting unit, for determining a first angle between a first virtual straight line connected between a fixed reference point and the first position and a reference plane passing through the fixed reference point, and determining a second angle between a second virtual straight line connected between the fixed reference point and the second position and the reference plane; and a determining unit, electrically coupled to the computing unit, for determining a relation between the first angle and the second angle, to decide whether a gesture of the hand is a back-and-forth gesture.
  Type: Application
  Filed: August 28, 2013
  Publication date: July 3, 2014
  Applicant: Wistron Corporation
  Inventors: Chih-Hsuan Lee, Chia-Te Chou
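A minimal sketch of the angle comparison above, assuming the fixed reference point and hand positions are 2-D coordinates and using a hypothetical 5-degree tolerance: a push or pull toward the sensor changes the distance to the reference point while leaving the elevation angle nearly unchanged, so two nearly equal angles suggest a back-and-forth gesture.

```python
import math

def elevation_angle_deg(ref, pos):
    """Angle between the ref->pos line and a horizontal reference plane
    passing through ref (2-D coordinates, y measured as height)."""
    dx = pos[0] - ref[0]
    dy = pos[1] - ref[1]
    return math.degrees(math.atan2(dy, abs(dx)))

def is_back_and_forth(ref, first_pos, second_pos, tol_deg=5.0):
    """The two sampled hand positions belong to a back-and-forth gesture
    when their elevation angles relative to ref are nearly equal."""
    a1 = elevation_angle_deg(ref, first_pos)
    a2 = elevation_angle_deg(ref, second_pos)
    return abs(a1 - a2) <= tol_deg
```

A hand moving along the viewing ray keeps the angle constant; a hand moving up or down does not.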
- Publication number: 20140184569
  Abstract: A coordinate transformation method for a user and an interactive system including a detection module is disclosed. The coordinate transformation method includes determining a face information and a command object of the user via the detection module to obtain a face coordinate and a command object coordinate, transforming the face coordinate into a transformed face coordinate according to a coordinate of the detection module, obtaining an angle between an optical-axis ray and a line formed via connecting the transformed face coordinate and the coordinate of the detection module, obtaining a transformed command object coordinate according to the angle and the command object coordinate, and determining a depth change of the command object according to the transformed command object coordinate to set up an interactive operation between the interactive system and the user.
  Type: Application
  Filed: October 7, 2013
  Publication date: July 3, 2014
  Applicant: Wistron Corporation
  Inventors: Chih-Hsuan Lee, Shou-Te Wei, Chia-Te Chou
- Publication number: 20140125607
  Abstract: A method for inputting an instruction, a portable electronic device and a computer readable recording medium are provided. The method includes detecting taps applied on a touch screen, and determining whether the tap positions of the taps belong to the same group. The method also includes dividing the tap positions of the taps into groups if the tap positions of the taps do not belong to the same group, generating group flags according to the groups, and sorting the group flags according to a tap order of the taps, so as to generate a group flag sequence. In addition, the method further includes generating an operating instruction according to the group flag sequence.
  Type: Application
  Filed: May 28, 2013
  Publication date: May 8, 2014
  Applicant: Wistron Corporation
  Inventors: Shou-Te Wei, Chia-Te Chou, Chih-Hsuan Lee, Chung-Wei Lee
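The grouping-and-flag idea can be illustrated as follows. The distance threshold for "same group" and the letter encoding of flags are assumptions for this sketch; the abstract does not specify either.

```python
def group_taps(taps, radius=50):
    """Assign each tap (x, y) to a group: taps within `radius` of an existing
    group's first tap share that group's flag ('A', 'B', ...). Flags are
    emitted in tap order, yielding the group flag sequence."""
    groups = []   # representative (first) point of each group
    flags = []
    for x, y in taps:
        for i, (gx, gy) in enumerate(groups):
            if (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2:
                flags.append(chr(ord('A') + i))
                break
        else:
            groups.append((x, y))
            flags.append(chr(ord('A') + len(groups) - 1))
    return ''.join(flags)

# The sequence can then be mapped to an operating instruction, e.g.:
COMMANDS = {'ABAB': 'switch_app', 'AA': 'zoom_in'}   # hypothetical mapping
```

Alternating taps on two distant spots produce 'ABAB'; two taps on the same spot produce 'AA'.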
- Patent number: 8707668
  Abstract: A blended yarn comprises a plurality of first fibers and a plurality of second fibers. A coefficient of friction of the second fibers is greater than a coefficient of friction of the first fibers. Abrasion resistance characteristics of the second fibers are greater than abrasion resistance properties of the first fibers. A gripping ability of the second fibers is greater than a gripping ability of the first fibers. The plurality of second fibers are combined with the plurality of first fibers such that the first fibers extend along the length of the blended yarn and the second fibers do not extend along the length of the blended yarn; at least a portion of the second fibers are engaged with and extend from the plurality of first fibers to effectively define surface characteristics of the blended yarn.
  Type: Grant
  Filed: May 8, 2012
  Date of Patent: April 29, 2014
  Assignee: Samson Rope Technologies
  Inventors: Justin Gilmore, David E. O'Neal, Danielle D. Stenvers, Chia-Te Chou, Ronald L. Bryant, Eric Wayne McCorkle
- Publication number: 20140098292
  Abstract: The present invention discloses a display system. The display system includes a display device, an image recognition device and a communication device. The display device is arranged to display a plurality of frames of a video stream. The image recognition device is arranged to compare the current frame with the previous frame displayed by the display device to define at least one stationary block and determine whether the stationary block has a phone number or an Internet address, wherein the current frame is the frame displayed by the display device currently, and the current frame is displayed next to the previous frame. The communication device is arranged to make a phone call to the phone number or connect to the web address via the Internet.
  Type: Application
  Filed: October 1, 2013
  Publication date: April 10, 2014
  Applicant: WISTRON CORP.
  Inventors: Wei-Jei Chiu, Hsi-chun Hsiao, Chia-Te Chou
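Once text has been recognized inside a stationary block (the frame comparison and OCR steps are outside this sketch), picking out a phone number or Internet address is a pattern-matching step. The regular expressions below are illustrative assumptions, not the patent's method:

```python
import re

# Hypothetical patterns: digits with separators for phone numbers,
# scheme- or www-prefixed tokens for addresses.
PHONE_RE = re.compile(r'\+?\d[\d\- ]{6,}\d')
URL_RE = re.compile(r'(?:https?://|www\.)\S+')

def classify_block_text(text):
    """Decide how the communication device should act on a stationary block:
    dial a detected phone number, or browse to a detected address."""
    m = PHONE_RE.search(text)
    if m:
        return ('call', m.group())
    m = URL_RE.search(text)
    if m:
        return ('browse', m.group())
    return (None, None)
```

A hotline caption yields a dial action; a banner with an address yields a browse action; other text is ignored.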
- Patent number: 8689534
  Abstract: A rope structure comprises a plurality of link structures each defining first and second ends. Each link structure is formed of synthetic fibers. Each first end comprises at least first and second bend portions. Each second end comprises at least third and fourth bend portions. The first end of a first one of the plurality of link structures engages the second end of a second one of the plurality of link structures such that the first and second bend portions of the first end of the first one of the plurality of link structures are substantially parallel to each other and substantially perpendicular to the third and fourth bend portions of the second end of the second one of the plurality of link structures.
  Type: Grant
  Filed: March 6, 2013
  Date of Patent: April 8, 2014
  Assignee: Samson Rope Technologies
  Inventor: Chia-Te Chou
- Publication number: 20140086449
  Abstract: A motion detection method applied in an interaction system is provided. The method has the following steps of: retrieving a plurality of images; recognizing a target object from the retrieved images; calculating a first integral value of a position offset value of the target object along a first direction from the retrieved images; determining whether the calculated first integral value is larger than a first predetermined threshold value; and determining the target object as moving when the calculated first integral value is larger than the first predetermined threshold value.
  Type: Application
  Filed: January 25, 2013
  Publication date: March 27, 2014
  Applicant: WISTRON CORP.
  Inventors: Chih-Hsuan Lee, Shou-Te Wei, Chia-Te Chou
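The integral test can be sketched as a running sum of frame-to-frame offsets along one axis, with an assumed threshold value. Summing signed offsets lets small back-and-forth jitter cancel out instead of accumulating, so only sustained motion in one direction trips the detector:

```python
def detect_motion(positions, threshold=30.0):
    """Integrate signed frame-to-frame offsets of the target object along one
    direction; report motion once the integral's magnitude exceeds the
    threshold (both the axis and the threshold are assumptions here)."""
    integral = 0.0
    for prev, curr in zip(positions, positions[1:]):
        integral += curr - prev
        if abs(integral) > threshold:
            return True
    return False
```

A steady drift of 35 pixels registers as motion; oscillation of a few pixels around a fixed point does not.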
- Publication number: 20140057103
  Abstract: A chafe jacket, used with a line extending around a structure, comprises a tube structure defining an inner surface and a jacket axis. The tube structure comprises fibers each defining a fiber axis. The fiber axes defined by portions of the fibers defining the interior surface of the tube structure extend at an interior fiber angle of less than approximately 50 degrees relative to the jacket axis. The chafe jacket extends around at least a portion of the line adjacent to the structure to reduce wear on the line.
  Type: Application
  Filed: August 24, 2012
  Publication date: February 27, 2014
  Applicant: SAMSON ROPE TECHNOLOGIES
  Inventors: Greg Z. Mozsgai, Francis W. Choltco-Devlin, Chia-Te Chou
- Publication number: 20140008516
  Abstract: A foldable frame assembly is adapted for an optical touch device. The foldable frame assembly includes a first frame, a second frame and a third frame. The second frame is pivotally connected to a first end of the first frame and the third frame is pivotally connected to a second end of the first frame, wherein the first end is opposite to the second end. The second frame and the third frame can rotate with respect to the first frame so as to be folded or expanded with respect to the first frame.
  Type: Application
  Filed: October 23, 2012
  Publication date: January 9, 2014
  Applicant: WISTRON CORPORATION
  Inventors: You-Xin Liu, Jr-Shiung Jang, Chia-Te Chou, Shou-Te Wei, Shih-Che Chien, Po-Liang Huang
- Publication number: 20140009382
  Abstract: A method for recognizing an object from two original images includes the steps of accessing the two original images, reducing resolutions of the two original images so as to generate two resolution-reduced images, respectively, calculating a plurality of shift amounts, each of which is between two corresponding pixels in pixel blocks that have similar content and that are respectively in the two resolution-reduced images, generating a low-level depth image based on the shift amounts, determining an object area of the low-level depth image containing the object therein, and obtaining a sub-image, from one of the original images, corresponding to the object area of the low-level depth image, thereby recognizing the object based on the sub-image.
  Type: Application
  Filed: March 10, 2013
  Publication date: January 9, 2014
  Applicant: WISTRON CORPORATION
  Inventors: Chia-Te Chou, Shou-Te Wei, Chih-Hsuan Lee
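The shift-amount step is essentially block matching between the two reduced-resolution images. Here is a toy sum-of-absolute-differences version, assuming grayscale images stored as 2-D lists and small, arbitrarily chosen block and search sizes; larger shifts correspond to nearer objects in the resulting low-level depth image:

```python
def block_match_disparity(left, right, block=2, max_shift=4):
    """For each pixel, find the horizontal shift that minimizes the sum of
    absolute differences (SAD) between a small block in `left` and the
    correspondingly shifted block in `right`. The per-pixel shifts form a
    coarse (low-level) depth image."""
    h, w = len(left), len(left[0])
    depth = [[0] * w for _ in range(h)]
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            best_shift, best_sad = 0, float('inf')
            for s in range(min(max_shift, x) + 1):  # candidate shifts
                sad = sum(abs(left[y + j][x + i] - right[y + j][x + i - s])
                          for j in range(block) for i in range(block))
                if sad < best_sad:
                    best_sad, best_shift = sad, s
            depth[y][x] = best_shift
    return depth
```

A bright feature displaced by two columns between the two images yields a shift of 2 at its location.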
- Publication number: 20140000233
  Abstract: A rope structure adapted to engage an intermediate structure while loads are applied to ends of the rope structure comprises a primary strength component and a coating. The primary strength component comprises a plurality of fibers adapted to bear the loads applied to the ends of the rope structure. The coating comprises a mixture of a lubricant portion and a binder portion. The lubricant portion comprises particles having an average size of within approximately 0.01 microns to 2.00 microns. The binder portion is applied to the primary strength component as a liquid and dries to form a matrix that supports the lubricant portion relative to at least some of the fibers. The matrix supports the lubricant portion such that the lubricant portion reduces friction between at least some of the plurality of fibers and between at least some of the plurality of fibers and the intermediate structure.
  Type: Application
  Filed: December 31, 2012
  Publication date: January 2, 2014
  Applicant: SAMSON ROPE TECHNOLOGIES
  Inventors: Chia-Te Chou, Danielle D. Stenvers, Jonathan D. Miller
- Publication number: 20130333346
  Abstract: A rope structure of the present invention comprises a plurality of first yarns and a plurality of second yarns. The first yarns are formed of at least one material selected from the group of materials comprising HMPE, LCP, Aramids, and PBO, have a breaking elongation of approximately 2%-5%, and have a tenacity of approximately 25-45 gpd. The second yarns are formed of at least one material selected from the group of materials comprising polyolefin, polyethylene, polypropylene, and blends or copolymers of the two, have a breaking elongation of approximately 2%-12%, and have a tenacity of approximately 6-22 gpd. The first and second yarns are combined to form rope sub-components. The rope sub-components comprise approximately 20-80% by weight of the first yarns.
  Type: Application
  Filed: August 19, 2013
  Publication date: December 19, 2013
  Applicant: Samson Rope Technologies
  Inventors: Chia-Te Chou, Danielle Dawn Stenvers, Howard Philbrook Wright, Jr., Liangfeng Sun
- Publication number: 20130328833
  Abstract: A dual-mode input apparatus is disclosed and includes a panel, a processing module, and an image-capturing system capable of capturing images in front of the panel. The image-capturing system can capture a touch input image and a gesture input image by an image-sensing component and a light-modulating component. The processing module processes the sensed images to determine input information. Alternatively, the image-capturing system can include an image-capturing device capable of rotating relative to the panel. The processing module is therefore capable of controlling rotation of the image-capturing device so as to capture a touch input image and a gesture input image selectively for the processing module to determine input information. Therefore, the dual-mode input apparatus of the invention can provide two input modes by use of the same image device, which overcomes the limitation of a conventional single image-capturing system being applied only to a single input architecture.
  Type: Application
  Filed: July 20, 2012
  Publication date: December 12, 2013
  Inventors: Sheng-Hsien Hsieh, Shou-Te Wei, Chia-Te Chou, You-Xin Liu, Chun-Chao Chang
- Publication number: 20130321404
  Abstract: An operating area determination method and system are provided. In the operating area determination method, a plurality of depth maps of a target scene is generated at several time points. At least two specific depth maps among the depth maps are selected and compared to identify a moving object in the target scene, and a position of the moving object in the target scene is defined as a reference point. A standard point in the target scene is obtained according to the reference point and a specific depth corresponding to the reference point. An effective operating area in the target scene is determined according to the reference point and the standard point for controlling an electronic apparatus.
  Type: Application
  Filed: February 20, 2013
  Publication date: December 5, 2013
  Applicant: WISTRON CORPORATION
  Inventors: Chia-Te Chou, Shou-Te Wei, Hsun-Chih Tsao, Chih-Hsuan Lee
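A rough sketch of the reference-point step: compare two depth maps, take the pixel with the largest depth change as the moving object's reference point, then size an operating rectangle from the depth at that point. The change threshold and the depth-based scaling rule below are invented for illustration; the abstract does not give them:

```python
def find_reference_point(depth_a, depth_b, min_change=10):
    """Pixel with the largest depth change between the two depth maps
    (2-D lists); returns None when nothing changed by at least min_change."""
    best, best_delta = None, min_change
    for y, (row_a, row_b) in enumerate(zip(depth_a, depth_b)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            delta = abs(a - b)
            if delta >= best_delta:
                best, best_delta = (x, y), delta
    return best

def operating_area(ref, depth, base_size=100):
    """Effective operating area around the reference point; scales inversely
    with depth so a closer user gets a larger working rectangle
    (hypothetical scaling rule)."""
    half = base_size * 100 // max(depth, 1)
    x, y = ref
    return (x - half, y - half, x + half, y + half)
```

A hand appearing at one pixel of an otherwise static scene becomes the reference point for the area.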
- Publication number: 20130321580
  Abstract: A 3-dimensional depth image generating system and method thereof are provided. The 3-dimensional depth image generating system includes a first and a second camera device and an image processing device. The first and the second camera devices are apart by a predetermined distance, and respectively capture an object to obtain a first image and a second image. The image processing device is connected with the first and the second camera devices and respectively obtains a first and a second partial image, wherein the first and the second partial images both include a first predetermined portion and a second predetermined portion of the object, and sizes of the first partial image and the second partial image are respectively smaller than those of the first image and the second image. The image processing device combines the first and the second partial images to generate a 3-dimensional depth image of the object.
  Type: Application
  Filed: October 16, 2012
  Publication date: December 5, 2013
  Applicant: WISTRON CORPORATION
  Inventors: Chia-Te Chou, Shou-Te Wei, Chih-Hsuan Lee
- Publication number: 20130278493
  Abstract: A gesture control method includes steps of capturing at least one image; detecting whether there is a face in the at least one image; if there is a face in the at least one image, detecting whether there is a hand in the at least one image; if there is a hand in the at least one image, identifying a gesture performed by the hand and identifying a relative distance or a relative moving speed between the hand and the face; and executing a predetermined function in a display screen according to the gesture and the relative distance or according to the gesture and the relative moving speed.
  Type: Application
  Filed: September 5, 2012
  Publication date: October 24, 2013
  Inventors: Shou-Te Wei, Chia-Te Chou, Hsun-Chih Tsao, Chih-Pin Liao
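One way to make the hand-face distance usable as a control input is to normalize it by the detected face width, so the same physical gesture reads roughly the same at different camera distances. The sketch below assumes face and hand positions from an upstream detector; the near/far threshold and the gesture-to-function mapping are hypothetical:

```python
import math

def relative_distance(hand_center, face_center, face_width):
    """Hand-to-face distance normalized by face width, making the measure
    roughly invariant to how far the user stands from the camera."""
    d = math.hypot(hand_center[0] - face_center[0],
                   hand_center[1] - face_center[1])
    return d / face_width

def dispatch(gesture, hand_center, face_center, face_width, near=1.5):
    """Execute a different predetermined function for the same gesture
    depending on whether the hand is near or far from the face."""
    zone = 'near' if relative_distance(hand_center, face_center,
                                       face_width) < near else 'far'
    actions = {('swipe', 'near'): 'next_page',
               ('swipe', 'far'): 'next_chapter'}   # hypothetical mapping
    return actions.get((gesture, zone))
```

A swipe close to the face and the same swipe at arm's length can thus trigger different functions.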