Patents by Inventor Wim Abbeloos

Wim Abbeloos has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144638
    Abstract: A method for adjusting an information system of a mobile machine, the information system being configured to calculate 3D information relative to a scene in which the mobile machine is moving, the method including: acquiring at least a first image of the scene at a first time and a second image of the scene at a second time; detecting one or more scene features in the first image and the second image; matching the one or more scene features across the first image and the second image based upon detection of the one or more scene features; estimating an egomotion of the mobile machine based upon the matching of the one or more scene features across the first image and the second image; and adjusting the information system by taking into account the estimation of the egomotion of the mobile machine.
    Type: Application
    Filed: August 17, 2023
    Publication date: May 2, 2024
    Applicants: TOYOTA JIDOSHA KABUSHIKI KAISHA, KATHOLIEKE UNIVERSITEIT LEUVEN
    Inventors: Wim ABBELOOS, Frank VERBIEST, Bruno DAWAGNE, Wim LEMKENS, Marc PROESMANS, Luc VAN GOOL
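    Illustrative sketch: the two-frame egomotion step described in this entry can be approximated with OpenCV's generic feature pipeline (detect scene features, match them across the two images, estimate the essential matrix, recover the relative pose). This is a minimal sketch under assumed pinhole intrinsics K and placeholder file names, not the patented method.
    ```python
    import cv2
    import numpy as np

    def estimate_egomotion(img1, img2, K):
        """Return the relative rotation R and (up-to-scale) translation t between two frames."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)   # detect scene features in the first image
        kp2, des2 = orb.detectAndCompute(img2, None)   # ... and in the second image

        # Match features across the two images by descriptor appearance.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Robustly estimate the essential matrix and recover the camera motion.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    if __name__ == "__main__":
        K = np.array([[700.0, 0.0, 320.0],    # assumed pinhole intrinsics
                      [0.0, 700.0, 240.0],
                      [0.0, 0.0, 1.0]])
        img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
        img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)
        R, t = estimate_egomotion(img1, img2, K)
        print(R, t.ravel())
    ```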
  • Publication number: 20240144487
    Abstract: A method for tracking a position of an object in a scene surrounding a mobile machine based upon information acquired from monocular images, includes: acquiring at least a first image at a first time and a second image at a second time, the first image and the second image each including image data corresponding to the object and a scene feature present in the scene surrounding the mobile machine; detecting the object in the first image and the second image; matching the scene feature across the first image and the second image; performing an estimation of an egomotion of the mobile machine based upon the scene feature matched across the first image and the second image; and predicting a position of the object taking into account the estimation of the egomotion of the mobile machine.
    Type: Application
    Filed: September 25, 2023
    Publication date: May 2, 2024
    Inventors: Wim ABBELOOS, Gabriel OTHMEZOURI, Frank VERBIEST, Bruno DAWAGNE, Wim LEMKENS, Marc PROESMANS, Luc VAN GOOL
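    Illustrative sketch: once an egomotion (R, t) has been estimated from matched scene features, the object's last known position can be re-expressed in the current camera frame to predict where to look for it. The coordinate convention X_curr = R @ X_prev + t and the example numbers below are assumptions, not values from the application.
    ```python
    import numpy as np

    def predict_object_position(X_prev, R, t):
        """Re-express a 3D point from the previous camera frame in the current camera frame."""
        return R @ X_prev + t

    # Example: object 10 m ahead, camera moved ~1 m forward with a slight yaw.
    yaw = np.deg2rad(2.0)
    R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                  [0.0,         1.0, 0.0],
                  [-np.sin(yaw), 0.0, np.cos(yaw)]])
    t = np.array([0.0, 0.0, -1.0])          # assumed sign convention for forward motion
    X_prev = np.array([0.0, 0.0, 10.0])
    print(predict_object_position(X_prev, R, t))   # roughly 9 m ahead, shifted slightly sideways
    ```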
  • Patent number: 11967131
    Abstract: The disclosure relates to a system for processing an image of at least one camera. The camera has predetermined camera parameters including a lens distortion and a camera pose with respect to a predefined reference frame. The system comprises: a trained neural network with a predefined architecture, the neural network being configured to receive the image of the camera as input and to predict in response at least one characteristic, wherein the neural network architecture comprises at least one static feature map configured to encode the predetermined camera parameters including the lens distortion and/or the camera pose.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: April 23, 2024
    Assignee: TOYOTA JIDOSHA KABUSHIKI KAISHA
    Inventor: Wim Abbeloos
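    Illustrative sketch: the "static feature map" idea can be pictured, in PyTorch, as constant per-pixel channels derived from the camera intrinsics that are concatenated with the input before a convolution. The map construction ((u - cx)/fx, (v - cy)/fy) and the layer sizes are illustrative assumptions, not the patented architecture.
    ```python
    import torch
    import torch.nn as nn

    class CameraAwareConv(nn.Module):
        def __init__(self, in_channels, out_channels, fx, fy, cx, cy):
            super().__init__()
            self.fx, self.fy, self.cx, self.cy = fx, fy, cx, cy
            # Two extra input channels for the static camera-coordinate maps.
            self.conv = nn.Conv2d(in_channels + 2, out_channels, kernel_size=3, padding=1)

        def forward(self, x):
            n, _, h, w = x.shape
            ys, xs = torch.meshgrid(torch.arange(h, dtype=x.dtype, device=x.device),
                                    torch.arange(w, dtype=x.dtype, device=x.device),
                                    indexing="ij")
            # Static maps encoding the camera parameters: they stay constant for a
            # given camera and image size, whatever the image content is.
            u_map = ((xs - self.cx) / self.fx).expand(n, 1, h, w)
            v_map = ((ys - self.cy) / self.fy).expand(n, 1, h, w)
            return self.conv(torch.cat([x, u_map, v_map], dim=1))

    # Usage with an assumed 640x480 camera:
    layer = CameraAwareConv(in_channels=3, out_channels=16, fx=700.0, fy=700.0, cx=320.0, cy=240.0)
    print(layer(torch.randn(2, 3, 480, 640)).shape)   # torch.Size([2, 16, 480, 640])
    ```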
  • Publication number: 20230047017
    Abstract: A neural network, a system using this neural network, and a method for training the neural network to output a description of the environment in the vicinity of at least one sound acquisition device on the basis of an audio signal acquired by the sound acquisition device, the method including: obtaining audio and image training signals of a scene showing an environment with objects generating sounds, obtaining a target description of the environment seen in the image training signal, inputting the audio training signal to the neural network so that the neural network outputs a training description of the environment, and comparing the target description of the environment with the training description of the environment.
    Type: Application
    Filed: January 10, 2020
    Publication date: February 16, 2023
    Applicants: TOYOTA MOTOR EUROPE, ETH ZURICH
    Inventors: Wim ABBELOOS, Arun BALAJEE VASUDEVAN, Dengxin DAI, Luc VAN GOOL
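    Illustrative sketch: the training scheme amounts to using the image branch as a frozen teacher whose output is the target description, and optimizing the audio network so that its description of the same scene matches it. The tiny models, tensor shapes, and MSE loss below are assumptions for illustration; the application does not fix a particular architecture or loss here.
    ```python
    import torch
    import torch.nn as nn

    audio_net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
    image_net = nn.Sequential(nn.Linear(512, 64))    # stand-in for a pretrained image model
    optimizer = torch.optim.Adam(audio_net.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    for step in range(100):
        audio_feat = torch.randn(8, 128)   # placeholder audio training signals
        image_feat = torch.randn(8, 512)   # placeholder image training signals of the same scenes

        with torch.no_grad():
            target_description = image_net(image_feat)    # target description from the image branch

        training_description = audio_net(audio_feat)      # description predicted from audio alone
        loss = criterion(training_description, target_description)   # compare the two descriptions

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    ```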
  • Publication number: 20220284696
    Abstract: A system and a method for training a semantic segmentation model include obtaining a plurality of sets of images, each having an index z for visibility, and iteratively training the model. Iteratively training the model includes (a) for each z above 1, obtaining preliminary semantic segmentation labels for each image of the set of images of index z−1 by applying the model to each image of the set of images of index z−1, (b) processing the preliminary semantic segmentation labels using semantic segmentation labels obtained using the model on a selected image of index 1, thereby obtaining processed semantic segmentation labels, (c) training the model using the set of images of index z−1 and the associated processed semantic segmentation labels, and (d) performing steps (a) to (c) for z+1.
    Type: Application
    Filed: July 10, 2019
    Publication date: September 8, 2022
    Applicants: TOYOTA MOTOR EUROPE, ETH ZURICH THE SWISS FEDERAL INSTITUTE OF TECHNOLOGY ZURICH
    Inventors: Wim ABBELOOS, Christos SAKARIDIS, Luc VAN GOOL, Dengxin DAI
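    Illustrative sketch: a toy version of steps (a)-(d) with a miniature PyTorch segmentation model and random images grouped by visibility index z (index 1 being the clearest set, as assumed here). The confidence-based rule that merges the preliminary labels with the index-1 labels is an illustrative choice, not the rule claimed in the application.
    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_CLASSES, NUM_LEVELS = 5, 3
    model = nn.Conv2d(3, NUM_CLASSES, kernel_size=3, padding=1)   # stand-in segmentation model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One small batch of images per visibility index z = 1 .. NUM_LEVELS (same scenes assumed).
    images_by_z = {z: torch.randn(4, 3, 32, 32) for z in range(1, NUM_LEVELS + 1)}

    def refine(preliminary_logits, reference_logits, threshold=0.5):
        """Keep confident preliminary labels; fall back to the index-1 labels elsewhere."""
        prelim_prob, prelim_lab = F.softmax(preliminary_logits, dim=1).max(dim=1)
        ref_lab = reference_logits.argmax(dim=1)
        return torch.where(prelim_prob > threshold, prelim_lab, ref_lab)

    for z in range(2, NUM_LEVELS + 1):
        with torch.no_grad():
            preliminary = model(images_by_z[z - 1])   # (a) preliminary labels on index z-1 images
            reference = model(images_by_z[1])         #     labels from the model on index-1 images
            labels = refine(preliminary, reference)   # (b) processed semantic segmentation labels
        for _ in range(10):                           # (c) train on index z-1 images + processed labels
            loss = F.cross_entropy(model(images_by_z[z - 1]), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    ```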
  • Publication number: 20220092746
    Abstract: A system for image completion is disclosed. The system comprises a coordinate generation module configured to receive past frames and a present frame having a first field-of-view and to generate a set of coordinate maps, one for each of the received past frames; and a frame aggregation module configured to receive as input the past frames, the present frame, and the coordinate maps and to synthesize, based on said input, a present frame having a second field-of-view.
    Type: Application
    Filed: September 17, 2021
    Publication date: March 24, 2022
    Inventors: Wim ABBELOOS, Gabriel OTHMEZOURI, Liqian MA, Stamatios GEORGOULIS, Luc VAN GOOL
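    Illustrative sketch: a coordinate map tells, for each pixel of the wider target view, where to sample in a given past frame; the aggregation step then combines the warped past frames with the (zero-padded) narrow present frame. grid_sample-style warping and plain averaging are assumptions for illustration; in the disclosed system both modules are learned.
    ```python
    import torch
    import torch.nn.functional as F

    def aggregate(past_frames, coord_maps, present_frame_padded):
        """past_frames: list of (1,3,H,W); coord_maps: list of (1,H2,W2,2) in [-1,1];
        present_frame_padded: (1,3,H2,W2) with zeros outside the narrow field of view."""
        warped = [F.grid_sample(f, g, mode="bilinear", align_corners=False)
                  for f, g in zip(past_frames, coord_maps)]
        stack = torch.stack(warped + [present_frame_padded], dim=0)   # (N+1, 1, 3, H2, W2)
        valid = (stack.abs().sum(dim=2, keepdim=True) > 0).float()    # crude validity mask
        return (stack * valid).sum(dim=0) / valid.sum(dim=0).clamp(min=1.0)

    # Toy usage: two past frames, identity-like coordinate maps, zero-padded present frame.
    H, W, H2, W2 = 64, 64, 64, 96
    past = [torch.rand(1, 3, H, W) for _ in range(2)]
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H2), torch.linspace(-1, 1, W2), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)                 # (1, H2, W2, 2), x before y
    present = torch.zeros(1, 3, H2, W2)
    present[:, :, :, 16:80] = torch.rand(1, 3, H2, 64)                # narrow first field-of-view
    print(aggregate(past, [grid, grid], present).shape)               # torch.Size([1, 3, 64, 96])
    ```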
  • Publication number: 20220092869
    Abstract: A system for generating a mask for object instances in an image is provided. The system includes a first module comprising a trained neural network and configured to input the image to the neural network, wherein the neural network is configured to generate: pixel offset vectors for the pixels of an object instance, configured to point towards a unique center of that object instance, the pixel offset vectors thereby forming a cluster with a cluster distribution, and, for each object instance, an estimate of said cluster distribution defining a margin for determining which pixels belong to the object instance. A method for training a neural network to be used for generating a mask for object instances in an image is also provided.
    Type: Application
    Filed: January 17, 2019
    Publication date: March 24, 2022
    Applicants: Toyota Motor Europe, Katholieke Universiteit Leuven, K.U. Leuven R&D
    Inventors: Wim ABBELOOS, Davy NEVEN, Bert DE BRABANDERE, Marc PROESMANS, Luc VAN GOOL
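    Illustrative sketch: each pixel predicts an offset vector pointing at the center of its instance, and the per-instance estimate of the cluster distribution (modelled here as a Gaussian of width sigma) acts as the margin deciding which pixels belong to the instance. The Gaussian form and the 0.5 threshold are assumptions about how the margin is applied.
    ```python
    import numpy as np

    def instance_mask(offsets, sigma, center, threshold=0.5):
        """offsets: (2,H,W) predicted offset vectors; sigma: scalar margin estimate;
        center: (2,) instance center in pixel coordinates (row, col)."""
        h, w = offsets.shape[1:]
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys, xs]).astype(np.float64)
        embeddings = coords + offsets                   # where each pixel "votes" the center is
        dist2 = ((embeddings - center.reshape(2, 1, 1)) ** 2).sum(axis=0)
        score = np.exp(-dist2 / (2.0 * sigma ** 2))     # soft membership from the cluster margin
        return score > threshold

    # Toy usage: with zero offsets every pixel votes for its own location, so only
    # pixels within the margin around the chosen center end up in the mask.
    offsets = np.zeros((2, 32, 32))
    mask = instance_mask(offsets, sigma=3.0, center=np.array([16.0, 16.0]))
    print(mask.sum(), "pixels assigned to the instance")
    ```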
  • Publication number: 20210365735
    Abstract: A computer-implemented method for training a classifier, including: S10) training a pretext model to learn a pretext task, so as to minimize a distance between the output of a source sample via the pretext model and the output of a corresponding transformed sample via the pretext model, the transformed sample being a sample obtained by applying a transformation (T) to the source sample; S20) determining a neighborhood (NXi) of samples (Xi) of a dataset (SD) in the embedding space; S30) training the classifier to predict, for a sample (Xi), respective estimated probabilities of belonging to each of the clusters (Cj), j=1 . . . C, by using a second training criterion which tends to: maximize the likelihood for a sample and the neighbors (Xj) of its neighborhood (NXi) to belong to the same cluster; and force the samples to be distributed over several clusters.
    Type: Application
    Filed: May 21, 2021
    Publication date: November 25, 2021
    Applicants: Toyota Jidosha Kabushiki Kaisha, Katholieke Universiteit Leuven
    Inventors: Wim Abbeloos, Gabriel Othmezouri, Wouter Van Gansbeke, Simon Vandenhende, Marc Proesmans, Stamatios Georgoulis, Luc Van Gool
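    Illustrative sketch: the second training criterion can be pictured as a consistency term that makes a sample and its pretext-space neighbors likely to fall in the same cluster, plus an entropy term that forces samples to spread over several clusters. The dot-product form and the entropy weight below are assumptions in the spirit of the abstract, not the exact claimed losses.
    ```python
    import torch
    import torch.nn.functional as F

    def clustering_loss(logits, neighbor_logits, entropy_weight=5.0):
        """logits, neighbor_logits: (B, C) classifier outputs for B samples and one
        sampled neighbor each; returns consistency loss minus weighted entropy."""
        p = F.softmax(logits, dim=1)
        q = F.softmax(neighbor_logits, dim=1)
        # Maximize the likelihood that a sample and its neighbor share a cluster.
        consistency = -torch.log((p * q).sum(dim=1) + 1e-8).mean()
        # Entropy of the mean assignment: maximizing it spreads samples over clusters.
        mean_p = p.mean(dim=0)
        entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum()
        return consistency - entropy_weight * entropy

    # Toy usage with a linear classifier on pretext-model embeddings (both assumed).
    classifier = torch.nn.Linear(64, 10)
    x = torch.randn(32, 64)                 # embeddings of samples X_i
    x_nb = x + 0.05 * torch.randn(32, 64)   # embeddings of one neighbor per sample
    loss = clustering_loss(classifier(x), classifier(x_nb))
    loss.backward()
    print(float(loss))
    ```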
  • Publication number: 20210295561
    Abstract: The disclosure relates to a system for processing an image of at least one camera. The camera has predetermined camera parameters including a lens distortion and a camera pose with respect to a predefined reference frame. The system comprises: a trained neural network with a predefined architecture, the neural network being configured to receive the image of the camera as input and to predict in response at least one characteristic, wherein the neural network architecture comprises at least one static feature map configured to encode the predetermined camera parameters including the lens distortion and/or the camera pose.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 23, 2021
    Inventor: Wim ABBELOOS
  • Publication number: 20210264196
    Abstract: The present disclosure provides a method for processing at least one image, comprising inputting the image to at least one neural network, the at least one network being configured to deliver, for each pixel of a group of pixels belonging to an object of a given type visible on the image, an estimation of object parameters that are parameters of the object. The method further comprises processing the estimations of the object parameters using an instance segmentation mask identifying instances of objects having the given type.
    Type: Application
    Filed: February 17, 2021
    Publication date: August 26, 2021
    Applicants: TOYOTA JIDOSHA KABUSHIKI KAISHA, KATHOLIEKE UNIVERSITEIT LEUVEN
    Inventors: Wim ABBELOOS, Daniel OLMEDA REINO, Hazem ABDELKAWY, Jonas HEYLEN, Mark DE WOLF, Bruno DAWAGNE, Michael BARNES, Wim LEMKENS, Marc PROESMANS, Luc VAN GOOL
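    Illustrative sketch: the per-pixel parameter estimates are pooled into one estimate per object by aggregating (here, taking the median) over the pixels that the instance segmentation mask assigns to that object. Median pooling and the three example parameters are illustrative choices, not the claimed processing rule.
    ```python
    import numpy as np

    def aggregate_object_parameters(param_map, instance_mask):
        """param_map: (H, W, P) per-pixel object-parameter estimates;
        instance_mask: (H, W) integer map, 0 = background, k > 0 = instance id.
        Returns {instance_id: (P,) aggregated parameters}."""
        estimates = {}
        for inst_id in np.unique(instance_mask):
            if inst_id == 0:
                continue
            pixels = param_map[instance_mask == inst_id]    # (N_pixels, P)
            estimates[inst_id] = np.median(pixels, axis=0)  # robust per-instance estimate
        return estimates

    # Toy usage: 3 parameters per pixel (e.g. an assumed distance, width, height).
    param_map = np.random.rand(8, 8, 3)
    instance_mask = np.zeros((8, 8), dtype=int)
    instance_mask[2:5, 2:5] = 1
    instance_mask[5:8, 5:8] = 2
    print(aggregate_object_parameters(param_map, instance_mask))
    ```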
  • Patent number: 10909369
    Abstract: A method and system detects and localizes multiple instances of an object by first acquiring a frame of a three-dimensional (3D) scene with a sensor, and extracting features from the frame. The features are matched according to appearance similarity and triplets are formed among matching features. Based on 3D locations of the corresponding points in the matching triplets, a geometric transformation is computed. Matching triplets are clustered according to the computed geometric transformations. Since the set of features coming from two different object instances should have a single geometric transform, the output of clustering provides the features and poses of each object instance in the image.
    Type: Grant
    Filed: October 23, 2017
    Date of Patent: February 2, 2021
    Assignees: Mitsubishi Electric Research Laboratories, Inc., Mitsubishi Electric Corporation
    Inventors: Esra Cansizoglu, Wim Abbeloos, Sergio Salvatore Caccamo, Yuichi Taguchi, Yukiyasu Domae
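    Illustrative sketch: each appearance-matched triplet yields a 3D-3D correspondence from which a rigid transform can be computed (Kabsch/SVD below), and the transforms are then clustered so that each cluster corresponds to one object instance. The greedy clustering rule and its thresholds are illustrative assumptions, not the claimed clustering method.
    ```python
    import numpy as np

    def rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping the 3 src points onto the 3 dst points."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, dc - R @ sc

    def cluster_transforms(transforms, t_thresh=0.05, r_thresh=0.1):
        """Greedily group transforms whose translations and rotations are close;
        each resulting cluster is taken as one object instance."""
        clusters = []
        for R, t in transforms:
            for c in clusters:
                R0, t0 = c[0]
                if np.linalg.norm(t - t0) < t_thresh and np.linalg.norm(R - R0) < r_thresh:
                    c.append((R, t))
                    break
            else:
                clusters.append([(R, t)])
        return clusters

    # Toy usage: two triplets from one instance and one triplet from a shifted copy.
    model_triplet = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
    scene_triplets = [model_triplet + [0.5, 0.0, 1.0],
                      model_triplet + [0.5, 0.0, 1.0],
                      model_triplet + [0.9, 0.2, 1.0]]
    transforms = [rigid_transform(model_triplet, s) for s in scene_triplets]
    print(len(cluster_transforms(transforms)), "object instances found")   # expected: 2
    ```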
  • Publication number: 20190019030
    Abstract: A method and system detects and localizes multiple instances of an object by first acquiring a frame of a three-dimensional (3D) scene with a sensor, and extracting features from the frame. The features are matched according to appearance similarity and triplets are formed among matching features. Based on 3D locations of the corresponding points in the matching triplets, a geometric transformation is computed. Matching triplets are clustered according to the computed geometric transformations. Since the set of features coming from two different object instances should have a single geometric transform, the output of clustering provides the features and poses of each object instance in the image.
    Type: Application
    Filed: October 23, 2017
    Publication date: January 17, 2019
    Applicants: Mitsubishi Electric Research Laboratories, Inc., Mitsubishi Electric Corporation
    Inventors: Esra Cansizoglu, Wim Abbeloos, Sergio Salvatore Caccamo, Yuichi Taguchi, Yukiyasu Domae