Patents by Inventor Hung-Chun CHOU
Hung-Chun CHOU has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12374134
Abstract: An automatic objects labeling method includes: capturing M consecutive image frames at one station of an assembly line; performing an object detection step, which includes selecting, from the M consecutive image frames, a detection image frame that displays an operation using a work piece against a target object, and calibrating the position range of the target object in the detection image frame; retracing from the detection image frame to select an Nth retraced image frame from the M consecutive image frames; obtaining a labeled image of the target object from the Nth retraced image frame according to the position range; comparing the labeled image with images of the M consecutive image frames to find at least one other labeled image similar to the target object; and storing both the labeled image and the at least one other labeled image as the same labeled data set.
Type: Grant
Filed: December 22, 2022
Date of Patent: July 29, 2025
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Chih-Neng Liu, Hung-Chun Chou, Tsann-Tay Tang
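As a rough illustration of the retrace-and-compare flow in this abstract, the sketch below assumes the detection step has already produced the detection-frame index and the calibrated position range, and it uses a plain mean-absolute-difference similarity test. Function and parameter names are hypothetical and are not taken from the patent.

```python
# Illustrative sketch only, not the patented implementation. Assumes the
# detection step already yielded det_idx and position_range; similarity is a
# simple mean absolute difference, chosen only to keep the example runnable.
import numpy as np

def auto_label_frames(frames, det_idx, position_range, n_retrace, sim_threshold=0.9):
    """Collect crops of the target object from M consecutive frames."""
    x, y, w, h = position_range

    # Retrace N frames back from the detection image frame.
    retraced_idx = max(det_idx - n_retrace, 0)

    # Labeled image of the target object, cropped from the retraced frame.
    labeled = frames[retraced_idx][y:y + h, x:x + w].astype(np.float32)

    # Compare the labeled image with the same region of every frame and keep
    # similar-looking crops; store them together as one labeled data set.
    dataset = [labeled]
    for i, frame in enumerate(frames):
        if i == retraced_idx:
            continue
        crop = frame[y:y + h, x:x + w].astype(np.float32)
        similarity = 1.0 - np.mean(np.abs(crop - labeled)) / 255.0
        if similarity >= sim_threshold:
            dataset.append(crop)
    return dataset
```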
-
Patent number: 12290917
Abstract: An object pose estimation system, an execution method thereof and a graphic user interface are provided. The execution method of the object pose estimation system includes the following steps. A feature extraction strategy of a pose estimation unit is determined by a feature extraction strategy neural network model according to a scene point cloud. According to the feature extraction strategy, a model feature is extracted from a 3D model of an object and a scene feature is extracted from the scene point cloud by the pose estimation unit. The model feature is compared with the scene feature by the pose estimation unit to obtain an estimated pose of the object.
Type: Grant
Filed: October 19, 2021
Date of Patent: May 6, 2025
Assignee: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Dong-Chen Tsai, Ping-Chang Shih, Yu-Ru Huang, Hung-Chun Chou
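A minimal sketch of the strategy-then-match flow described above: the feature extraction strategy neural network is replaced by a point-density heuristic, the "features" are voxel-downsampled points, and the pose is solved with a standard Kabsch alignment. All of these are assumptions made only to keep the example self-contained; none are taken from the patent.

```python
# Conceptual sketch, not the patented system.
import numpy as np

def choose_strategy(scene_points):
    # Stand-in for the feature extraction strategy neural network:
    # denser scenes get a coarser voxel size in this toy heuristic.
    return {"voxel": 0.05 if len(scene_points) > 10_000 else 0.01}

def extract_feature(points, voxel):
    # Toy "feature": voxel-grid downsampled points.
    keys = np.round(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def estimate_pose(model_points, scene_points):
    strategy = choose_strategy(scene_points)                         # feature extraction strategy
    model_feat = extract_feature(model_points, strategy["voxel"])    # model feature
    scene_feat = extract_feature(scene_points, strategy["voxel"])    # scene feature

    # Compare model features with scene features: nearest-neighbour matching,
    # then a rigid transform via the Kabsch algorithm gives the estimated pose.
    dists = np.linalg.norm(model_feat[:, None, :] - scene_feat[None, :, :], axis=2)
    matched = scene_feat[dists.argmin(axis=1)]
    mc, sc = model_feat.mean(axis=0), matched.mean(axis=0)
    H = (model_feat - mc).T @ (matched - sc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t                       # estimated pose: rotation and translation
```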
-
Publication number: 20240404259
Abstract: A system and a method for presenting three-dimensional content and a three-dimensional content calculation apparatus are provided. In the method, the calculation apparatus receives a request for presentation content including one or more images from a client device, receives the presentation content from a content delivery network according to the request, processes the images using a first machine-learning model to generate a first predicted result, processes the images using multiple machine-learning models to generate at least a second predicted result and a third predicted result, selects a second machine-learning model from the machine-learning models based on a comparison of the first predicted result with the at least the second predicted result and the third predicted result, processes the images using the second machine-learning model, and sends a processing result to the client device. Accordingly, the client device generates a three-dimensional presentation of the presentation content.
Type: Application
Filed: August 22, 2023
Publication date: December 5, 2024
Applicant: Acer Incorporated
Inventors: Kai-Hsiang Lin, Hung-Chun Chou, Tung-Chan Tsai, Chieh-Sheng Wang, Shih-Hao Lin, Wen-Cheng Hsu
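The model-selection step above can be illustrated with a small sketch. The model objects, the difference metric, and the "closest prediction wins" rule are assumptions used only to show the comparison idea; the publication does not specify these details here.

```python
# Illustrative sketch of selecting a second model by comparing predictions
# against a first model's result; not Acer's implementation.
import numpy as np

def select_model(images, reference_model, candidate_models):
    """Pick the candidate whose prediction is closest to the reference model's."""
    reference = reference_model(images)                       # first predicted result
    differences = [np.mean(np.abs(m(images) - reference)) for m in candidate_models]
    best = candidate_models[int(np.argmin(differences))]      # selected second model
    return best, best(images)                                 # processing result for the client

# Toy usage with callables standing in for machine-learning models.
reference = lambda imgs: imgs * 0.45
candidates = [lambda imgs: imgs * 0.5,
              lambda imgs: np.clip(imgs - imgs.mean(), 0.0, None)]
images = np.random.rand(2, 64, 64)
model, result = select_model(images, reference, candidates)
```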
-
Publication number: 20240404260
Abstract: A distributed data processing system and a distributed data processing method are provided. The distributed data processing system includes a computing device and at least one additional computing device.
Type: Application
Filed: October 4, 2023
Publication date: December 5, 2024
Applicant: Acer Incorporated
Inventors: Kai-Hsiang Lin, Hung-Chun Chou, Tung-Chan Tsai, Chieh-Sheng Wang, Shih-Hao Lin, Wen-Cheng Hsu
-
Publication number: 20240212372
Abstract: An automatic objects labeling method includes: capturing M consecutive image frames at one station of an assembly line; performing an object detection step, which includes selecting, from the M consecutive image frames, a detection image frame that displays an operation using a work piece against a target object, and calibrating the position range of the target object in the detection image frame; retracing from the detection image frame to select an Nth retraced image frame from the M consecutive image frames; obtaining a labeled image of the target object from the Nth retraced image frame according to the position range; comparing the labeled image with images of the M consecutive image frames to find at least one other labeled image similar to the target object; and storing both the labeled image and the at least one other labeled image as the same labeled data set.
Type: Application
Filed: December 22, 2022
Publication date: June 27, 2024
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Chih-Neng Liu, Hung-Chun Chou, Tsann-Tay Tang
-
Publication number: 20240121373
Abstract: Disclosed are an image display method and a 3D display system. The method is adapted to the 3D display system including a 3D display device and includes the following steps. A first image and a second image are obtained by splitting an input image according to a 3D image format. Whether the input image is a 3D format image complying with the 3D image format is determined through a stereo matching process performed on the first image and the second image. In response to determining that the input image is the 3D format image complying with the 3D image format, an image interweaving process is performed on the input image to generate an interweaving image, and the interweaving image is displayed via the 3D display device.
Type: Application
Filed: May 10, 2023
Publication date: April 11, 2024
Applicant: Acer Incorporated
Inventors: Kai-Hsiang Lin, Hung-Chun Chou, Wen-Cheng Hsu, Shih-Hao Lin, Chih-Haw Tan
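A sketch of the split-then-interweave steps named in this abstract, assuming a side-by-side 3D input and column interleaving; the stereo-matching validity check is reduced to a simple correlation threshold. None of these choices are taken verbatim from the filing.

```python
# Minimal sketch: split, check, interweave. Not the published implementation.
import numpy as np

def interweave_if_3d(input_image, corr_threshold=0.5):
    # Split the input image according to a side-by-side 3D image format.
    h, w = input_image.shape[:2]
    left, right = input_image[:, : w // 2], input_image[:, w // 2 : 2 * (w // 2)]

    # Cheap stand-in for stereo matching: if the two halves are well
    # correlated, treat the input as a 3D format image.
    corr = np.corrcoef(left.ravel(), right.ravel())[0, 1]
    if corr < corr_threshold:
        return None   # not a 3D format image; skip interweaving

    # Column-interleave the two views for a lenticular/barrier 3D panel.
    interwoven = np.empty_like(input_image[:, : 2 * (w // 2)])
    interwoven[:, 0::2] = left
    interwoven[:, 1::2] = right
    return interwoven
```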
-
Publication number: 20240046608
Abstract: A 3D format image detection method and an electronic apparatus using the same are provided. The 3D format image detection method includes the following steps. A first image and a second image are obtained by splitting an input image according to a 3D image format. A 3D matching process is performed on the first image and the second image to generate a disparity map of the first image and the second image. The matching number of a plurality of first pixels in the first image matched with a plurality of second pixels in the second image is calculated according to the disparity map. Whether the input image is a 3D format image conforming to the 3D image format is determined according to the matching number.
Type: Application
Filed: December 2, 2022
Publication date: February 8, 2024
Applicant: Acer Incorporated
Inventors: Kai-Hsiang Lin, Hung-Chun Chou, Wen-Cheng Hsu, Shih-Hao Lin, Chih-Haw Tan
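A rough sketch of the detection idea, assuming a side-by-side format and a very small block-matching search; the cost threshold and matching ratio are illustrative numbers, not values from the publication.

```python
# Toy sketch: count matched pixels from a coarse disparity search and decide
# whether the input conforms to the 3D image format. Not the published method.
import numpy as np

def is_3d_format(image, max_disp=16, cost_thresh=10.0, ratio_thresh=0.6):
    h, w = image.shape[:2]
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    left, right = gray[:, : w // 2], gray[:, w // 2 : 2 * (w // 2)]
    half_w = left.shape[1]

    matched, total = 0, 0
    # Sample pixels on a coarse grid and search along the row for a match.
    for y in range(0, h, 8):
        for x in range(max_disp + 2, half_w - 2, 8):
            costs = [np.mean(np.abs(left[y, x - 2:x + 3] - right[y, x - d - 2:x - d + 3]))
                     for d in range(max_disp)]
            total += 1
            if min(costs) < cost_thresh:   # this first-image pixel matched a second-image pixel
                matched += 1

    # The input is taken to conform to the 3D image format if the matching
    # number is a large enough fraction of the sampled pixels.
    return total > 0 and matched / total >= ratio_thresh
```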
-
Publication number: 20230150141
Abstract: A training data generation device includes a virtual scene generation unit, an orthographic virtual camera, an object-occlusion determination unit, a training data generation unit and a perspective virtual camera. The virtual scene generation unit is configured for generating a virtual scene, wherein the virtual scene comprises a plurality of objects. The orthographic virtual camera is configured for capturing a vertical projection image of the virtual scene. The object-occlusion determination unit is configured for labeling an occluded-state of each object according to the vertical projection image. The perspective virtual camera is configured for capturing a perspective projection image of the virtual scene. The training data generation unit is configured for generating training data of the virtual scene according to the perspective projection image and the occluded-state of each object.
Type: Application
Filed: October 7, 2022
Publication date: May 18, 2023
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Hung-Chun CHOU, Yu-Ru HUANG, Dong-Chen TSAI
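A conceptual sketch of the occlusion-labeling idea above: objects are reduced to axis-aligned footprints with heights, the orthographic virtual camera becomes a top-down depth raster, and the perspective image is left to an external renderer. These simplifications are assumptions made so the idea fits in a few runnable lines; they are not the patented device.

```python
# Toy top-down occlusion labeling; not the published implementation.
import numpy as np

def label_occlusion_top_down(objects, grid=(256, 256)):
    """objects: list of dicts with 'name', 'footprint' (x0, y0, x1, y1), 'height'."""
    depth = np.full(grid, -np.inf)          # tallest surface seen so far per cell
    owner = np.full(grid, -1, dtype=int)    # which object is visible in each cell

    # Vertical (orthographic) projection: taller objects hide shorter ones.
    for idx, obj in enumerate(objects):
        x0, y0, x1, y1 = obj["footprint"]
        region = depth[y0:y1, x0:x1]
        visible = obj["height"] > region
        region[visible] = obj["height"]
        owner[y0:y1, x0:x1][visible] = idx

    # Label the occluded-state of each object from the projection image.
    labels = []
    for idx, obj in enumerate(objects):
        x0, y0, x1, y1 = obj["footprint"]
        area = (y1 - y0) * (x1 - x0)
        seen = np.count_nonzero(owner[y0:y1, x0:x1] == idx)
        labels.append({"name": obj["name"], "occluded": seen < area})

    # Training data would pair these occluded-states with the perspective
    # projection image, which is rendered elsewhere in this sketch.
    return labels
```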
-
Publication number: 20220362945
Abstract: An object pose estimation system, an execution method thereof and a graphic user interface are provided. The execution method of the object pose estimation system includes the following steps. A feature extraction strategy of a pose estimation unit is determined by a feature extraction strategy neural network model according to a scene point cloud. According to the feature extraction strategy, a model feature is extracted from a 3D model of an object and a scene feature is extracted from the scene point cloud by the pose estimation unit. The model feature is compared with the scene feature by the pose estimation unit to obtain an estimated pose of the object.
Type: Application
Filed: October 19, 2021
Publication date: November 17, 2022
Applicant: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE
Inventors: Dong-Chen TSAI, Ping-Chang SHIH, Yu-Ru HUANG, Hung-Chun CHOU