Patents by Inventor Tim WAEGEMAN

Tim WAEGEMAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240165807
    Abstract: The present invention relates to a method for computing a pose for a robot head for handling an object by means of a handle connected to said object, comprising the steps of: (a) obtaining, by means of a vision sensor, an image of a scene comprising said object and said handle, said image comprising 3D information and preferably color information; (b) segmenting, by means of a trained segmentation NN, said image, according to a plurality of semantic components comprising at least a first semantic component relating to said object and a second semantic component relating to said handle; (c) determining, based on said plurality of semantic components, handling data for handling said object, said handling data comprising a handling position being on said handle; and (d) computing, based on said handling data, a pose for said robot head, said pose comprising at least a robot head position for approaching said handle.
    Type: Application
    Filed: March 15, 2022
    Publication date: May 23, 2024
    Inventors: Andrew WAGNER, Tim WAEGEMAN, Rob GIELEN, Lidewei VERGEYNST, Matthias VERSTRAETE, Bert MORTIER
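Steps (c) and (d) of the abstract above can be illustrated with a minimal sketch: given 3D points already labelled by a segmentation network, take the centroid of the handle points as the handling position and offset it along an approach axis to get a robot-head position. The label constants, the centroid choice, and the fixed approach offset are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch of steps (c)-(d): from semantically labelled 3D points,
# derive a handling position on the handle and a robot-head approach position.
OBJECT, HANDLE = 1, 2  # illustrative semantic component labels

def centroid(points):
    """Mean of a list of (x, y, z) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def compute_pose(labelled_points, approach_offset=(0.0, 0.0, 0.1)):
    """Handling position = centroid of handle points; head position =
    handling position shifted along an assumed approach axis."""
    handle_pts = [p for p, lab in labelled_points if lab == HANDLE]
    handling_position = centroid(handle_pts)
    head_position = tuple(h + o for h, o in zip(handling_position, approach_offset))
    return {"handling_position": handling_position, "head_position": head_position}

# Toy scene: two object points and two handle points.
scene = [((0, 0, 0), OBJECT), ((1, 0, 0), OBJECT),
         ((2, 0, 0), HANDLE), ((2, 2, 0), HANDLE)]
pose = compute_pose(scene)
```

A real system would replace the fixed offset with an approach direction derived from the handle's geometry.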
  • Publication number: 20240161325
    Abstract: A method for generating a technical instruction for handling a 3D physical object present within a reference volume and comprising a 3D surface, the method comprising: obtaining at least two images of the object from a plurality of cameras positioned at different respective angles with respect to the object; generating, with respect to the 3D surface, a voxel representation segmented based on the at least two images, said segmenting comprising identifying a first segment component corresponding to a plurality of first voxels and a second segment component corresponding to a plurality of second voxels different from the plurality of first voxels; performing a measurement with respect to the plurality of first voxels; and computing the technical instruction for the handling of the object based on the segmented voxel representation and the measurement, wherein said segmenting relates to at least one trained NN being trained with respect to the 3D surface.
    Type: Application
    Filed: March 15, 2022
    Publication date: May 16, 2024
    Inventors: Matthias VERSTRAETE, Ruben VAN PARYS, Stanislav RUSNAK, Tim WAEGEMAN
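The pipeline claimed above (segmented voxel representation → measurement over the first segment component → technical instruction) can be sketched as follows. The segment names, the count-based measurement, and the thresholding rule are illustrative assumptions only.

```python
# Illustrative sketch: a voxel grid segmented into two components, a
# measurement over the first component, and an instruction derived from both.
FIRST, SECOND = "first", "second"

def measure_extent(voxels, segment):
    """Measurement step: number of voxels belonging to one segment component."""
    return sum(1 for seg in voxels.values() if seg == segment)

def technical_instruction(voxels, min_first=3):
    """Compute an instruction from the segmented representation + measurement."""
    extent = measure_extent(voxels, FIRST)
    return "handle" if extent >= min_first else "reject"

# Toy segmented voxel representation: (x, y, z) -> segment component.
grid = {(0, 0, 0): FIRST, (1, 0, 0): FIRST, (2, 0, 0): FIRST, (0, 1, 0): SECOND}
instruction = technical_instruction(grid)
```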
  • Publication number: 20240144525
    Abstract: Improved orientation detection based on deep learning. A method for generating a robot command for handling a 3D physical object present within a reference volume, the object comprising a main direction and a 3D surface, the method comprising: obtaining at least two images of the object from a plurality of cameras positioned at different respective angles with respect to the object; generating, with respect to the 3D surface of the object, a voxel representation segmented based on the at least two images; determining a main direction based on the segmented voxel representation; and computing the robot command for the handling of the object based on the segmented voxel representation and the determined main direction, wherein the robot command is computed based on the determined main direction of the object relative to the reference volume, and wherein the robot command is executable by means of a device comprising a robot element configured for handling the object.
    Type: Application
    Filed: March 15, 2022
    Publication date: May 2, 2024
    Inventors: Lidewei VERGEYNST, Ruben VAN PARYS, Andrew WAGNER, Tim WAEGEMAN
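The "determine a main direction from the segmented voxel representation" step can be illustrated with a simplified stand-in: here the main direction is taken as the coordinate axis of largest variance, a crude proxy for a principal-axis fit. The command format is a hypothetical.

```python
# Simplified main-direction detection: pick the coordinate axis along which
# the object's voxels spread most (a stand-in for a principal-axis fit).
def main_direction(voxels):
    """Return the unit axis (as a 3-tuple) of largest variance."""
    n = len(voxels)
    means = [sum(v[i] for v in voxels) / n for i in range(3)]
    variances = [sum((v[i] - means[i]) ** 2 for v in voxels) / n for i in range(3)]
    axis = variances.index(max(variances))
    return tuple(1.0 if i == axis else 0.0 for i in range(3))

def robot_command(voxels):
    """Compute a robot command aligned with the object's main direction."""
    return {"action": "grip_along", "direction": main_direction(voxels)}

# Toy object elongated along the y axis.
object_voxels = [(0, 0, 0), (0, 3, 0), (0, 6, 0), (1, 3, 0)]
cmd = robot_command(object_voxels)
```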
  • Publication number: 20230009292
    Abstract: The present invention relates to a computer-implemented method for labelling a training set, preferably for training a neural network, with respect to a 3D physical object by means of a GUI, the method comprising the steps of: obtaining a training set relating to a plurality of training objects, each of the training objects comprising a 3D surface similar to the 3D surface of said object, the training set comprising at least two images for each training object; generating, for each training object, a respective 3D voxel representation based on the respective at least two images; receiving, via said GUI, manual annotations with respect to a plurality of segment classes from a user of said GUI for labelling each of the training objects; and preferably training, based on said manual annotations, at least one NN, for obtaining said at least one trained NN.
    Type: Application
    Filed: November 30, 2020
    Publication date: January 12, 2023
    Inventors: Andrew WAGNER, Ruben VAN PARYS, Matthias VERSTRAETE, Tim WAEGEMAN
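The labelling flow above can be sketched as pairing each training object's voxel representation with the per-voxel annotations a GUI would collect. The segment-class names and record layout are illustrative assumptions.

```python
# Hedged sketch of the labelling step: attach manual per-voxel annotations
# (as a GUI would collect them) to voxel representations to form a training set.
SEGMENT_CLASSES = ("background", "object", "handle")  # illustrative classes

def label_training_object(voxel_repr, annotations):
    """Pair one training object's voxel representation with its manual labels,
    rejecting any annotation outside the known segment classes."""
    unknown = set(annotations.values()) - set(SEGMENT_CLASSES)
    if unknown:
        raise ValueError(f"unknown segment classes: {unknown}")
    return {"voxels": voxel_repr, "labels": annotations}

# Two toy training objects, annotated voxel-by-voxel via a (simulated) GUI.
training_set = [
    label_training_object([(0, 0, 0), (1, 0, 0)],
                          {(0, 0, 0): "object", (1, 0, 0): "handle"}),
    label_training_object([(0, 0, 0)], {(0, 0, 0): "background"}),
]
```

The resulting records would then feed the NN-training step mentioned in the abstract.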
  • Publication number: 20220297291
    Abstract: The present invention relates to a method for generating a robot command for handling a three-dimensional, 3D, physical object present within a reference volume and comprising a 3D surface, comprising: obtaining at least two images of said physical object from a plurality of cameras positioned at different respective angles with respect to said object; generating, with respect to the 3D surface of said object, a voxel representation segmented based on said at least two images; and computing the robot command for said handling of said object based on said segmented voxel representation.
    Type: Application
    Filed: November 30, 2020
    Publication date: September 22, 2022
    Inventors: Ruben VAN PARYS, Andrew WAGNER, Matthias VERSTRAETE, Tim WAEGEMAN
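The base claim above (segmented voxel representation → robot command) admits a minimal sketch: target one segment component and command a grasp at its centroid. The "graspable" label and the command format are assumptions for demonstration.

```python
# Minimal sketch: from a voxel representation segmented into components,
# compute a robot command targeting one component's centroid.
def segment_centroid(voxels, labels, target):
    """Centroid of the voxels whose label matches the target component."""
    pts = [v for v in voxels if labels[v] == target]
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def compute_robot_command(voxels, labels, target="graspable"):
    """Derive a grasp command from the segmented voxel representation."""
    return {"command": "grasp", "at": segment_centroid(voxels, labels, target)}

voxels = [(0, 0, 0), (2, 0, 0), (0, 4, 0)]
labels = {(0, 0, 0): "graspable", (2, 0, 0): "graspable", (0, 4, 0): "other"}
cmd = compute_robot_command(voxels, labels)
```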