Patents by Inventor Te Tang
Te Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12243214
Abstract: A method for identifying inaccurately depicted boxes in an image, such as mis-detected boxes and partially detected boxes. The method obtains a 2D RGB image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes using a neural network by performing an image segmentation process that extracts features from the RGB image and segments the boxes by assigning a label to pixels in the RGB image so that each box in the segmentation image has the same label and different boxes in the segmentation image have different labels. The method analyzes the segmentation image to determine if the image segmentation process has failed to accurately segment the boxes in the segmentation image.
Type: Grant
Filed: February 3, 2022
Date of Patent: March 4, 2025
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
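The consistency check in the abstract above can be illustrated with a small heuristic: flag masks whose area is far below the median (possible partial detections) and count foreground depth pixels the segmentation left unlabeled (possible missed boxes). The function name, thresholds, and empty-bin reference depth below are illustrative assumptions, not the patented method:

```python
import numpy as np

def find_suspect_masks(seg, depth, bg_depth, area_ratio=0.5):
    """Flag labels whose mask area is far below the median (possible partial
    detections) and count foreground pixels left unlabeled (possible missed
    boxes). `seg` is an HxW integer label image (0 = background); `depth` and
    `bg_depth` are HxW depth maps in the same units. Heuristic sketch only."""
    labels = [l for l in np.unique(seg) if l != 0]
    areas = {l: int((seg == l).sum()) for l in labels}
    median = np.median(list(areas.values())) if areas else 0
    partial = [l for l, a in areas.items() if median and a < area_ratio * median]
    # Pixels clearly above the empty-bin depth but with no label hint that a
    # whole box was missed by the segmentation network.
    foreground = depth < bg_depth - 1e-3
    missed_pixels = int((foreground & (seg == 0)).sum())
    return partial, missed_pixels
```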
-
Patent number: 12112499
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified mask R-CNN executable by a central processing unit (CPU) to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label and together define a mask for the box. The method then identifies a location for picking up the box using the segmentation image.
Type: Grant
Filed: November 30, 2021
Date of Patent: October 8, 2024
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
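Given the per-box masks the abstract describes, a pick location can be chosen by a minimal heuristic: take the mask closest to the camera (i.e. on top of the stack) and return its centroid. The helper below is a hypothetical sketch, not the patented implementation:

```python
import numpy as np

def pick_location(masks, depth):
    """Choose the box whose mask has the smallest mean depth (closest to the
    camera, i.e. on top of the stack) and return its centroid as the pick
    point. `masks` is a list of HxW boolean arrays from the segmentation
    network; `depth` is the aligned HxW depth map. Illustrative sketch."""
    best = min(masks, key=lambda m: depth[m].mean())
    ys, xs = np.nonzero(best)
    return float(xs.mean()), float(ys.mean())  # (u, v) pixel coordinates
```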
-
Publication number: 20240257505
Abstract: A system and method for adapting a feature extraction neural network so that it identifies only environment-independent features in an image. The method includes modifying weights within a dataset classifier neural network to improve its accuracy in identifying that training feature images are from training images and test feature images are from test images. The method also includes modifying the weights within a feature extraction neural network to reduce the accuracy of the dataset classifier neural network at making that same identification.
Type: Application
Filed: January 30, 2023
Publication date: August 1, 2024
Inventors: Te Tang, Tetsuaki Kato
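The two opposing weight updates described above are the core of domain-adversarial training: the classifier steps down the domain loss while the feature extractor steps up it (gradient reversal). A toy NumPy sketch with linear stand-ins for both networks; all names and the learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_step(W, c, x, d, lr=0.1):
    """One toy adversarial update. Features f = x @ W, domain score
    s = sigmoid(f @ c), with d = 0 for training images and 1 for test images.
    The classifier weights c step DOWN the domain loss while the extractor
    weights W step UP it (gradient reversal), driving W toward features that
    carry no dataset-specific signal. Hypothetical linear stand-ins for the
    patent's two neural networks."""
    f = x @ W
    g = (sigmoid(f @ c) - d) / len(d)          # d(loss)/d(score)
    grad_c = f.T @ g                           # descent direction for c
    grad_W = x.T @ (g[:, None] * c[None, :])   # same gradient, applied as ascent to W
    return W + lr * grad_W, c - lr * grad_c
```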
-
Patent number: 12036678
Abstract: A system and method for identifying an object, such as a transparent object, to be picked up by a robot from a bin of objects. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning mask R-CNN (convolutional neural network) that performs an image segmentation process that extracts features from the RGB image and assigns a label to the pixels so that objects in the segmentation image have the same label. The method then identifies a location for picking up the object using the segmentation image and the depth map image.
Type: Grant
Filed: May 25, 2021
Date of Patent: July 16, 2024
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
-
Patent number: 12017368
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes by performing an image segmentation process that extracts features from the RGB image and the depth map image, combines the extracted features in the images and assigns a label to the pixels in a features image so that each box in the segmentation image has the same label. The method then identifies a location for picking up the box using the segmentation image.
Type: Grant
Filed: September 9, 2020
Date of Patent: June 25, 2024
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
-
Patent number: 11875528
Abstract: A system and method for identifying an object to be picked up by a robot. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning convolutional neural network that performs an image segmentation process that extracts features from the RGB image, assigns a label to the pixels so that objects in the segmentation image have the same label and rotates the object using the orientation of the object in the segmented image. The method then identifies a location for picking up the object using the segmentation image and the depth map image and rotates the object when it is picked up.
Type: Grant
Filed: May 25, 2021
Date of Patent: January 16, 2024
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
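The in-plane orientation the abstract uses to rotate the picked object can be illustrated by taking the principal axis of the object's mask pixels. A hypothetical PCA-based sketch, not the patented method:

```python
import numpy as np

def mask_orientation(mask):
    """Estimate the in-plane orientation of a segmented object as the angle
    of the principal axis of its mask pixels (PCA), which a robot could use
    to rotate the part after picking it. `mask`: HxW boolean. The angle is
    ambiguous by pi, as any axis direction is. Sketch only."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    cov = pts @ pts.T / pts.shape[1]
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]
    return float(np.arctan2(major[1], major[0]))
```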
-
Publication number: 20230245293
Abstract: A method for identifying inaccurately depicted boxes in an image, such as mis-detected boxes and partially detected boxes. The method obtains a 2D RGB image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes using a neural network by performing an image segmentation process that extracts features from the RGB image and segments the boxes by assigning a label to pixels in the RGB image so that each box in the segmentation image has the same label and different boxes in the segmentation image have different labels. The method analyzes the segmentation image to determine if the image segmentation process has failed to accurately segment the boxes in the segmentation image.
Type: Application
Filed: February 3, 2022
Publication date: August 3, 2023
Inventors: Te Tang, Tetsuaki Kato
-
Patent number: 11701777
Abstract: An adaptive robot grasp planning technique for bin picking. Workpieces in a bin having random positions and poses are to be grasped by a robot and placed in a goal position and pose. The workpiece shape is analyzed to identify a plurality of robust grasp options, each grasp option having a position and orientation. The workpiece shape is also analyzed to determine a plurality of stable intermediate poses. Each individual workpiece in the bin is evaluated to identify a set of feasible grasps, and the workpiece is moved to the goal pose if such direct movement is possible. If direct movement is not possible, a search problem is formulated, where each stable intermediate pose is a node. The search problem is solved by evaluating the feasibility and optimality of each link between nodes. Feasibility of each link is evaluated in terms of collision avoidance constraints and robot joint motion constraints.
Type: Grant
Filed: April 3, 2020
Date of Patent: July 18, 2023
Assignee: FANUC CORPORATION
Inventors: Xinghao Zhu, Te Tang, Tetsuaki Kato
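The search problem the abstract formulates, with stable intermediate poses as nodes and feasible regrasps as links, can be sketched as a shortest-path search. The graph encoding, pose names, and costs below are illustrative assumptions; feasibility (collision and joint-motion) checks are assumed to have been done upstream when building the graph:

```python
import heapq

def plan_regrasps(graph, start, goal):
    """Dijkstra search over stable intermediate poses. `graph` maps a pose
    name to {neighbor: cost} for every feasible, collision-free regrasp.
    Returns the cheapest pose sequence from `start` to `goal`, or None if no
    chain of regrasps reaches the goal pose. Illustrative sketch only."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, pose, path = heapq.heappop(frontier)
        if pose == goal:
            return path
        if pose in seen:
            continue
        seen.add(pose)
        for nxt, c in graph.get(pose, {}).items():
            if nxt not in seen:
                heapq.heappush(frontier, (cost + c, nxt, path + [nxt]))
    return None
```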
-
Publication number: 20230169675
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera. The method employs an image segmentation process that uses a simplified mask R-CNN executable by a central processing unit (CPU) to predict which pixels in the RGB image are associated with each box, where the pixels associated with each box are assigned a unique label and together define a mask for the box. The method then identifies a location for picking up the box using the segmentation image.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Te Tang, Tetsuaki Kato
-
Publication number: 20230169324
Abstract: A system and method for training a neural network. The method includes modelling a plurality of different-sized objects to generate virtual images of the objects using computer graphics software and generating a placement virtual image by randomly and sequentially selecting the modelled objects and placing the selected modelled objects within a predetermined boundary in a predetermined pattern using the software. The method also includes rendering a virtual image of the placement virtual image based on predetermined data and information using the computer graphics software and generating an annotated virtual image by independently labeling the objects in the rendered virtual image using the software. The method repeats generating a placement virtual image, rendering a virtual image and generating an annotated virtual image for a plurality of randomly and sequentially selected modelled objects, and then trains the neural network using the plurality of rendered virtual images and the annotated virtual images.
Type: Application
Filed: November 30, 2021
Publication date: June 1, 2023
Inventors: Te Tang, Tetsuaki Kato
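The placement-and-annotation loop described above (rendering aside, which requires the graphics software) can be sketched as follows. The model list, boundary format, and label scheme are illustrative assumptions:

```python
import random

def place_objects(models, bounds, n, seed=0):
    """Randomly and sequentially place modelled objects inside a boundary and
    record a per-object label, mimicking the patent's placement/annotation
    loop without the rendering step. `models`: list of (name, width, height)
    footprints; `bounds`: (W, H) of the placement area. Sketch only."""
    rng = random.Random(seed)
    placements = []
    for i in range(n):
        name, w, h = rng.choice(models)
        # Keep the whole footprint inside the predetermined boundary.
        x = rng.uniform(0, bounds[0] - w)
        y = rng.uniform(0, bounds[1] - h)
        placements.append({"label": i, "model": name, "box": (x, y, w, h)})
    return placements
```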
-
Patent number: 11644811
Abstract: A method and system for adapting a CNC machine tool path from a nominal workpiece shape to an actual workpiece shape. The method includes defining a grid of feature points on a nominal workpiece shape, where the feature points encompass an area around the machine tool path but do not necessarily include points on the machine tool path. A probe is used to detect locations of the feature points on an actual workpiece. A space mapping function is computed as a transformation from the nominal feature points to the actual feature points, and the function is applied to the nominal tool path to compute a new tool path. The new tool path is used by the CNC machine to operate on the actual workpiece. The feature points are used to characterize the three-dimensional shape of the working surface of the actual workpiece, not just a curve or outline.
Type: Grant
Filed: October 30, 2019
Date of Patent: May 9, 2023
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
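A radial-basis-function displacement field is one concrete way to realize a space-mapping function like the one the abstract describes: fit it from the nominal feature points to the probed actual points, then apply it to the nominal tool-path points. The Gaussian kernel and unit length scale below are assumptions, not the patented transformation:

```python
import numpy as np

def fit_space_map(nominal, actual, eps=1e-9):
    """Fit a radial-basis-function displacement field that maps nominal
    feature points to their probed actual locations; the returned `warp`
    then moves any tool-path points the same way. `nominal`, `actual`:
    (n, 3) arrays of corresponding feature points. Sketch only."""
    d2 = ((nominal[:, None, :] - nominal[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2)  # Gaussian kernel, unit length scale for simplicity
    weights = np.linalg.solve(K + eps * np.eye(len(nominal)), actual - nominal)
    def warp(points):
        d2p = ((points[:, None, :] - nominal[None, :, :]) ** 2).sum(-1)
        return points + np.exp(-d2p) @ weights
    return warp
```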
-
Patent number: 11554496
Abstract: A system and method for extracting features from a 2D image of an object using a deep learning neural network and a vector field estimation process. The method includes extracting a plurality of possible feature points, generating a mask image that defines pixels in the 2D image where the object is located, and generating a vector field image for each extracted feature point that includes an arrow directed towards the extracted feature point. The method also includes generating a vector intersection image by identifying an intersection point where the arrows for every combination of two pixels in the 2D image intersect. The method assigns a score to each intersection point based on the distance between the intersection point and each pixel in the combination of two pixels, and generates a point voting image that identifies a feature location from a number of clustered points.
Type: Grant
Filed: April 3, 2020
Date of Patent: January 17, 2023
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
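The pairwise-intersection voting step can be sketched directly: intersect the rays cast by every pair of pixels and aggregate the resulting votes. This simplified version averages the votes instead of scoring and clustering them as the abstract describes; all names are hypothetical:

```python
import numpy as np

def vote_feature_point(pixels, dirs):
    """For every pair of mask pixels, intersect the two rays defined by the
    predicted unit vectors and collect the intersection points; the feature
    location is taken as the mean of the votes. (The patent scores votes by
    distance and clusters them; this averages for brevity.) `pixels` and
    `dirs` are equal-length lists of 2D arrays. Sketch only."""
    votes = []
    n = len(pixels)
    for i in range(n):
        for j in range(i + 1, n):
            p, r = pixels[i], dirs[i]
            q, s = pixels[j], dirs[j]
            denom = r[0] * s[1] - r[1] * s[0]  # 2D cross product of directions
            if abs(denom) < 1e-9:
                continue  # parallel rays cast no vote
            t = ((q[0] - p[0]) * s[1] - (q[1] - p[1]) * s[0]) / denom
            votes.append(p + t * r)
    return np.mean(votes, axis=0)
```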
-
Publication number: 20220379475
Abstract: A system and method for identifying an object, such as a transparent object, to be picked up by a robot from a bin of objects. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning mask R-CNN (convolutional neural network) that performs an image segmentation process that extracts features from the RGB image and assigns a label to the pixels so that objects in the segmentation image have the same label. The method then identifies a location for picking up the object using the segmentation image and the depth map image.
Type: Application
Filed: May 25, 2021
Publication date: December 1, 2022
Inventors: Te Tang, Tetsuaki Kato
-
Publication number: 20220383538
Abstract: A system and method for identifying an object to be picked up by a robot. The method includes obtaining a 2D red-green-blue (RGB) color image and a 2D depth map image of the objects using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the objects. The method generates a segmentation image of the objects using a deep learning convolutional neural network that performs an image segmentation process that extracts features from the RGB image, assigns a label to the pixels so that objects in the segmentation image have the same label and rotates the object using the orientation of the object in the segmented image. The method then identifies a location for picking up the object using the segmentation image and the depth map image and rotates the object when it is picked up.
Type: Application
Filed: May 25, 2021
Publication date: December 1, 2022
Inventors: Te Tang, Tetsuaki Kato
-
Patent number: 11475589
Abstract: A system and method for obtaining a 3D pose of an object using 2D images from a 2D camera and a learning-based neural network. The neural network extracts a plurality of features on the object from the 2D images and generates a heatmap for each of the extracted features that identifies the probability of a location of a feature point on the object by a color representation. The method provides a feature point image that includes the feature points from the heatmaps on the 2D images, and estimates the 3D pose of the object by comparing the feature point image and a 3D virtual CAD model of the object.
Type: Grant
Filed: April 3, 2020
Date of Patent: October 18, 2022
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
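Reducing each heatmap to a single feature point can be sketched with a plain argmax over a 2D probability array (standing in here for the patent's color-coded heatmaps); the downstream CAD-model comparison is omitted:

```python
import numpy as np

def heatmap_peaks(heatmaps):
    """Turn per-feature heatmaps into 2D feature points by taking the argmax
    of each channel. `heatmaps`: (k, H, W) array of per-feature probability
    maps; returns a list of k (u, v) pixel coordinates. Sketch only."""
    points = []
    for h in heatmaps:
        v, u = np.unravel_index(np.argmax(h), h.shape)
        points.append((int(u), int(v)))
    return points
```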
-
Patent number: 11470291
Abstract: A projector including a casing, a control system, an image assembly and at least one electric thermal heater is provided. The image assembly is coupled to and controlled by the control system. The electric thermal heater is coupled to and controlled by the control system. The control system is configured to activate the electric thermal heater to preheat the image assembly, such that the image assembly is warmed up to a cut-off temperature. Then, the control system is configured to switch off the electric thermal heater. Then the focal length of the image assembly is adjusted. A focal length adjusting method for the projector is further provided. The projector and the method may be used to avoid the thermal expansion of various elements, so as to avoid focal length shift.
Type: Grant
Filed: January 8, 2020
Date of Patent: October 11, 2022
Assignee: Coretronic Corporation
Inventors: Te-Tang Chen, Chun-Lung Yen, Wen-Yen Chung, Tung-Chou Hu
-
Patent number: 11350078
Abstract: A system and method for obtaining a 3D pose of an object using 2D images from multiple 2D cameras. The method includes positioning a first 2D camera so that it is directed towards the object along a first optical axis, obtaining 2D images of the object by the first 2D camera, and extracting feature points from the 2D images from the first 2D camera using a first feature extraction process. The method also includes positioning a second 2D camera so that it is directed towards the object along a second optical axis, obtaining 2D images of the object by the second 2D camera, and extracting feature points from the 2D images from the second 2D camera using a second feature extraction process. The method then estimates the 3D pose of the object using the extracted feature points from both the first and second feature extraction processes.
Type: Grant
Filed: April 3, 2020
Date of Patent: May 31, 2022
Assignee: FANUC CORPORATION
Inventors: Te Tang, Tetsuaki Kato
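Once feature points are extracted from both cameras, each observation defines a ray from its camera, and the 3D point closest to all rays solves a small linear system. A least-squares triangulation sketch; the ray representation and names are assumptions, not the patented pose estimator:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of camera rays: minimize the summed squared
    distance from a 3D point to every ray by accumulating, for each ray, the
    projector orthogonal to its direction. `origins`, `directions`: sequences
    of 3-vectors with unit directions. Sketch only."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```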
-
Patent number: 11340519
Abstract: A light source heat-dissipating device is disposed in a projection apparatus having a casing with an air inlet. The light source heat-dissipating device has a heat-dissipating fin assembly disposed in the casing and having an air intake side, first and second heat-dissipating fin assemblies, a first air duct, and a fan adjacent to the first and second heat-dissipating fin assemblies. The second heat-dissipating fin assembly is stacked on the first one. The first heat-dissipating fin assembly is between the second one and the air inlet. The first air duct is disposed adjacent to the air intake side and has a first entrance end connected with the air inlet and a first exit end opposite to the first entrance end. The light source heat-dissipating device and projection apparatus improve temperature and life consistency between light source modules. The heat dissipation performances of the first and second heat-dissipating fin assemblies tend to be consistent.
Type: Grant
Filed: February 21, 2020
Date of Patent: May 24, 2022
Assignee: Coretronic Corporation
Inventors: Chun-Lung Yen, Te-Tang Chen, Wen-Jui Huang, Tsung-Ching Lin
-
Publication number: 20220084238
Abstract: A system and method for obtaining a 3D pose of objects, such as transparent objects, in a group of objects to allow a robot to pick up the objects. The method includes obtaining a 2D red-green-blue (RGB) color image of the objects using a camera, and generating a segmentation image of the RGB image by performing an image segmentation process using a deep learning convolutional neural network that extracts features from the RGB image and assigns a label to pixels in the segmentation image so that objects in the segmentation image have the same label. The method also includes separating the segmentation image into a plurality of cropped images where each cropped image includes one of the objects, estimating the 3D pose of each object in each cropped image, and combining the 3D poses into a single pose image.
Type: Application
Filed: September 11, 2020
Publication date: March 17, 2022
Inventors: Te Tang, Tetsuaki Kato
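The step of separating the segmentation image into one cropped image per object reduces to taking the bounding box of each label; the per-crop pose estimation is omitted. A minimal sketch with hypothetical names:

```python
import numpy as np

def crop_per_object(image, seg):
    """Split an image into one crop per segmented object using the bounding
    box of each label's mask pixels. `image`: HxW (or HxWxC) array; `seg`:
    HxW integer label image with 0 = background. Returns {label: crop}.
    Sketch only."""
    crops = {}
    for label in np.unique(seg):
        if label == 0:
            continue
        ys, xs = np.nonzero(seg == label)
        crops[int(label)] = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return crops
```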
-
Publication number: 20220072712
Abstract: A system and method for identifying a box to be picked up by a robot from a stack of boxes. The method includes obtaining a 2D red-green-blue (RGB) color image of the boxes and a 2D depth map image of the boxes using a 3D camera, where pixels in the depth map image are assigned a value identifying the distance from the camera to the boxes. The method generates a segmentation image of the boxes by performing an image segmentation process that extracts features from the RGB image and the depth map image, combines the extracted features in the images and assigns a label to the pixels in a features image so that each box in the segmentation image has the same label. The method then identifies a location for picking up the box using the segmentation image.
Type: Application
Filed: September 9, 2020
Publication date: March 10, 2022
Inventors: Te Tang, Tetsuaki Kato