Patents by Inventor Philippe Weinzaepfel

Philippe Weinzaepfel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220114444
    Abstract: A computer-implemented method for training a neural network to perform a data processing task includes: for each data sample of a set of labeled data samples: by a first loss function for the data processing task, computing a first loss for that data sample; and by a second loss function, automatically computing a weight value for the data sample based on the first loss, the weight value indicative of a reliability of a label of the data sample predicted by the neural network for the data sample and dictating the extent to which that data sample impacts training of the neural network; and training the neural network with the set of labeled data samples according to their respective weight values.
    Type: Application
    Filed: July 23, 2021
    Publication date: April 14, 2022
    Applicant: NAVER CORPORATION
    Inventors: Philippe WEINZAEPFEL, Jérôme REVAUD, Thibault CASTELLS
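A minimal PyTorch-style sketch of the weighting idea in the abstract above, assuming a classification task with cross-entropy as the first (per-sample) loss and a hypothetical exponential down-weighting as the second loss function; the function name and the temperature `tau` are illustrative, and the application does not prescribe these exact choices.

```python
import torch
import torch.nn.functional as F

def weighted_training_step(model, optimizer, images, labels, tau=1.0):
    """One step in which each sample's contribution is scaled by a weight
    derived from its own loss (a high loss suggests an unreliable label)."""
    optimizer.zero_grad()
    logits = model(images)
    # First loss: per-sample task loss, no reduction.
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    # Second loss function (illustrative): map each loss to a weight in (0, 1].
    with torch.no_grad():
        weights = torch.exp(-per_sample_loss / tau)
    # Train with the weighted objective.
    loss = (weights * per_sample_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```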
  • Publication number: 20220107645
    Abstract: A training system includes: an encoder module configured to receive a query image and to generate a first vector representative of one or more features in the query image using an encoder; a mixing module configured to generate a second vector by mixing a third vector, representative of one or more features in a second image that is classified as a negative relative to the query image, with a fourth vector; and an adjustment module configured to train the encoder by selectively adjusting one or more parameters of the encoder based on the first vector and the second vector.
    Type: Application
    Filed: April 23, 2021
    Publication date: April 7, 2022
    Applicant: NAVER CORPORATION
    Inventors: Ioannis KALANTIDIS, Diane LARLUS, Philippe WEINZAEPFEL, Mert Bulent SARIYILDIZ, Noé PION
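A rough PyTorch sketch of the mixing step from the abstract above: a synthetic negative is built as a convex combination of two feature vectors and then used alongside ordinary negatives in an InfoNCE-style contrastive loss. The mixing coefficient, the re-normalization, and the exact loss form are assumptions, not the filed claims.

```python
import torch
import torch.nn.functional as F

def mix_negative(neg_a, neg_b, alpha=0.5):
    """Synthesize a harder negative as a convex combination of two negative
    feature vectors, re-normalized to the unit sphere."""
    return F.normalize(alpha * neg_a + (1.0 - alpha) * neg_b, dim=-1)

def contrastive_loss(query, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: query (B, D), positive (B, D), negatives (N, D)."""
    q = F.normalize(query, dim=-1)
    pos = F.normalize(positive, dim=-1)
    negs = F.normalize(negatives, dim=-1)
    logits = torch.cat([(q * pos).sum(-1, keepdim=True), q @ negs.t()], dim=-1)
    target = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits / temperature, target)
```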
  • Publication number: 20210374989
    Abstract: A system for generating whole body poses includes: a body regression module configured to generate a first pose of a body of an animal in an input image by regressing from a stored body anchor pose; a face regression module configured to generate a second pose of a face of the animal in the input image by regressing from a stored face anchor pose; an extremity regression module configured to generate a third pose of an extremity of the animal in the input image by regressing from a stored extremity anchor pose; and a pose module configured to generate a whole body pose of the animal in the input image based on the first pose, the second pose, and the third pose.
    Type: Application
    Filed: June 2, 2020
    Publication date: December 2, 2021
    Applicants: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Philippe WEINZAEPFEL, Romain BREGIER, Hadrien COMBALUZIER, Vincent LEROY, Gregory ROGEZ
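A toy NumPy sketch of the regress-from-anchor idea in the abstract above: each part pose is obtained by refining the best-scoring stored anchor pose with a regressed offset, and the part poses are then assembled into a whole-body pose. The array shapes, the choice of hands as the extremities, and the simple argmax selection are illustrative assumptions.

```python
import numpy as np

def part_pose(anchor_poses, anchor_scores, regressed_offsets):
    """Regress a part pose from stored anchors: pick the best-scoring anchor
    and refine it with the offset the network regressed for that anchor.
    anchor_poses: (A, K, 2), anchor_scores: (A,), regressed_offsets: (A, K, 2)."""
    best = int(np.argmax(anchor_scores))
    return anchor_poses[best] + regressed_offsets[best]

def whole_body_pose(body, face, left_hand, right_hand):
    """Assemble the whole-body pose from the per-part poses; a full system
    would also reconcile joints shared between parts (e.g., the wrists)."""
    return {"body": body, "face": face,
            "left_hand": left_hand, "right_hand": right_hand}
```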
  • Patent number: 11182620
    Abstract: A method for training a convolutional recurrent neural network for semantic segmentation in videos, includes (a) training, using a set of semantically segmented training images, a first convolutional neural network; (b) training, using a set of semantically segmented training videos, a convolutional recurrent neural network, corresponding to the first convolutional neural network, wherein a convolutional layer has been replaced by a recurrent module having a hidden state. The training of the convolutional recurrent neural network, for each pair of successive frames (t−1, t) ∈ [1; T]² of a video of the set of semantically segmented training videos includes warping an internal state of a recurrent layer according to an estimated optical flow between the frames of the pair of successive frames, so as to adapt the internal state to the motion of pixels between the frames of the pair and learning parameters of at least the recurrent module.
    Type: Grant
    Filed: July 22, 2019
    Date of Patent: November 23, 2021
    Inventor: Philippe Weinzaepfel
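The core of the claim above is warping the recurrent hidden state with optical flow before updating it on the next frame. A minimal PyTorch sketch of such a warp follows; the flow convention (pixel displacements mapping current-frame positions back into the previous state) and the bilinear sampling choice are assumptions, and `conv_gru` in the usage comment is a hypothetical recurrent module.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(state, flow):
    """Warp a recurrent hidden state (B, C, H, W) toward the current frame
    using a pixel-displacement flow field (B, 2, H, W), via bilinear sampling."""
    _, _, h, w = state.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(state.device)  # (2, H, W)
    coords = base.unsqueeze(0) + flow                             # (B, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] as grid_sample expects.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                          # (B, H, W, 2)
    return F.grid_sample(state, grid, align_corners=True)

# Hypothetical per-frame usage: warp the previous hidden state, then let the
# recurrent module (e.g., a ConvGRU, not shown) update it on frame t:
#   hidden = warp_with_flow(hidden, flow_t)
#   hidden = conv_gru(frame_features_t, hidden)
```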
  • Patent number: 11176425
    Abstract: A system for detecting and describing keypoints in images is described. A camera is configured to capture an image including a plurality of pixels. A fully convolutional network is configured to jointly and concurrently: generate descriptors for each of the pixels, respectively; generate reliability scores for each of the pixels, respectively; and generate repeatability scores for each of the pixels, respectively. A scoring module is configured to generate scores for the pixels, respectively, based on the reliability scores and the repeatability scores of the pixels, respectively. A keypoint list module is configured to: select X of the pixels having the X highest scores, where X is an integer greater than 1; and generate a keypoint list including: locations of the selected X pixels; and the descriptors of the selected X pixels.
    Type: Grant
    Filed: December 11, 2019
    Date of Patent: November 16, 2021
    Assignees: NAVER CORPORATION, NAVER LABS CORPORATION
    Inventors: Jérôme Revaud, Cesar De Souza, Martin Humenberger, Philippe Weinzaepfel
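A small NumPy sketch of the selection step described above: per-pixel reliability and repeatability maps are combined into a single score (their product here, an illustrative choice) and the X best pixels are emitted with their locations and descriptors.

```python
import numpy as np

def select_keypoints(descriptors, reliability, repeatability, x=500):
    """Score every pixel (here: reliability * repeatability) and keep the X
    best as keypoints. descriptors: (H, W, D); reliability, repeatability: (H, W)."""
    h, w, _ = descriptors.shape
    scores = (reliability * repeatability).reshape(-1)
    top = np.argsort(scores)[::-1][:x]            # indices of the X highest scores
    rows, cols = np.unravel_index(top, (h, w))
    return [{"location": (int(r), int(c)),
             "descriptor": descriptors[r, c],
             "score": float(scores[i])}
            for r, c, i in zip(rows, cols, top)]
```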
  • Publication number: 20210182626
    Abstract: A system for detecting and describing keypoints in images is described. A camera is configured to capture an image including a plurality of pixels. A fully convolutional network is configured to jointly and concurrently: generate descriptors for each of the pixels, respectively; generate reliability scores for each of the pixels, respectively; and generate repeatability scores for each of the pixels, respectively. A scoring module is configured to generate scores for the pixels, respectively, based on the reliability scores and the repeatability scores of the pixels, respectively. A keypoint list module is configured to: select X of the pixels having the X highest scores, where X is an integer greater than 1; and generate a keypoint list including: locations of the selected X pixels; and the descriptors of the selected X pixels.
    Type: Application
    Filed: December 11, 2019
    Publication date: June 17, 2021
    Applicants: Naver Corporation, Naver Labs Corporation
    Inventors: Jérôme Revaud, Cesar De Souza, Martin Humenberger, Philippe Weinzaepfel
  • Patent number: 11003956
    Abstract: A method for training, using a plurality of training images with corresponding six degrees of freedom camera pose for a given environment and a plurality of reference images, each reference image depicting an object-of-interest in the given environment and having a corresponding two-dimensional to three-dimensional correspondence for the given environment, a neural network to provide visual localization by: for each training image, detecting and segmenting objects-of-interest in the training image; generating a set of two-dimensional to two-dimensional matches between the detected and segmented objects-of-interest and corresponding reference images; generating a set of two-dimensional to three-dimensional matches from the generated set of two-dimensional to two-dimensional matches and the two-dimensional to three-dimensional correspondences corresponding to the reference images; and determining localization, for each training image, by solving a perspective-n-point problem using the generated set of two-dimensional to three-dimensional matches.
    Type: Grant
    Filed: May 16, 2019
    Date of Patent: May 11, 2021
    Inventors: Philippe Weinzaepfel, Gabriela Csurka, Yohann Cabon, Martin Humenberger
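The final localization step above reduces to a perspective-n-point problem over 2D-3D matches. A minimal sketch using OpenCV's RANSAC PnP solver follows; the camera intrinsics are supplied by the caller, distortion-free images are assumed, and the upstream construction of the 2D-3D matches (via 2D-2D matching against the reference images) is not shown.

```python
import numpy as np
import cv2

def localize(points_2d, points_3d, camera_matrix):
    """Recover a 6-DoF camera pose from 2D-3D matches by solving a
    perspective-n-point problem with RANSAC.
    points_2d: (N, 2) image points, points_3d: (N, 3) scene points."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix,
        None)  # assume undistorted (or pre-undistorted) images
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent matches")
    return rvec, tvec, inliers
```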
  • Publication number: 20210073525
    Abstract: A computer-implemented method of recognition of actions performed by individuals includes: by one or more processors, obtaining images including at least a portion of an individual; by the one or more processors, based on the images, generating implicit representations of poses of the individual in the images; and by the one or more processors, determining an action performed by the individual and captured in the images by classifying the implicit representations of the poses of the individual.
    Type: Application
    Filed: June 17, 2020
    Publication date: March 11, 2021
    Applicant: NAVER CORPORATION
    Inventors: Philippe WEINZAEPFEL, Gregory ROGEZ
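An illustrative PyTorch sketch of the classification stage described above: per-frame implicit pose features (e.g., intermediate features of a pose-estimation backbone) are embedded, averaged over time, and classified into actions. The feature dimension, the average-pooling choice, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PoseFeatureActionClassifier(nn.Module):
    """Classify an action from per-frame implicit pose features
    (shape (B, T, feature_dim)), averaged over time before classification."""
    def __init__(self, feature_dim=256, num_actions=60):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feature_dim, feature_dim), nn.ReLU())
        self.classifier = nn.Linear(feature_dim, num_actions)

    def forward(self, pose_features):
        frame_embeddings = self.embed(pose_features)     # (B, T, feature_dim)
        clip_embedding = frame_embeddings.mean(dim=1)    # temporal average pooling
        return self.classifier(clip_embedding)           # (B, num_actions) logits
```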
  • Patent number: 10867184
    Abstract: A method for training a convolutional neural network for classification of actions performed by subjects in a video is realized by (a) for each frame of the video, for each key point of the subject, generating a heat map of the key point representing a position estimation of the key point within the frame; (b) colorizing each heat map as a function of the relative time of the corresponding frame in the video; (c) for each key point, aggregating all the colorized heat maps of the key point into at least one image representing the evolution of the position estimation of the key point during the video; and training the convolutional neural network using as input the sets of images associated with each training video, representing the evolution of the position estimation of each key point during the video.
    Type: Grant
    Filed: March 20, 2019
    Date of Patent: December 15, 2020
    Inventors: Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, Cordelia Schmid
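A NumPy sketch of the colorize-and-aggregate encoding described above: each frame's per-keypoint heat map is weighted by its relative time into an "early" and a "late" channel, then summed over the video, yielding one motion-encoding image per keypoint. Using exactly two color channels is an illustrative simplification; the method admits more channels.

```python
import numpy as np

def colorized_heatmap_encoding(heatmaps):
    """Aggregate per-frame keypoint heat maps (T, K, H, W) into a per-keypoint
    image: each frame is weighted by its relative time into an 'early' and a
    'late' channel, then summed over the video."""
    t_len, k, h, w = heatmaps.shape
    out = np.zeros((k, 2, h, w), dtype=np.float32)
    for t in range(t_len):
        rel = t / max(t_len - 1, 1)              # relative time in [0, 1]
        out[:, 0] += (1.0 - rel) * heatmaps[t]   # early-in-the-video channel
        out[:, 1] += rel * heatmaps[t]           # late-in-the-video channel
    return out / max(t_len, 1)                   # normalize w.r.t. video length
```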
  • Publication number: 20200364509
    Abstract: A method for training, using a plurality of training images with corresponding six degrees of freedom camera pose for a given environment and a plurality of reference images, each reference image depicting an object-of-interest in the given environment and having a corresponding two-dimensional to three-dimensional correspondence for the given environment, a neural network to provide visual localization by: for each training image, detecting and segmenting objects-of-interest in the training image; generating a set of two-dimensional to two-dimensional matches between the detected and segmented objects-of-interest and corresponding reference images; generating a set of two-dimensional to three-dimensional matches from the generated set of two-dimensional to two-dimensional matches and the two-dimensional to three-dimensional correspondences corresponding to the reference images; and determining localization, for each training image, by solving a perspective-n-point problem using the generated set of two-dimensional to three-dimensional matches.
    Type: Application
    Filed: May 16, 2019
    Publication date: November 19, 2020
    Applicant: Naver Corporation
    Inventors: Philippe Weinzaepfel, Gabriela Csurka, Yohann Cabon, Martin Humenberger
  • Publication number: 20200160065
    Abstract: A method for training a convolutional recurrent neural network for semantic segmentation in videos, includes (a) training, using a set of semantically segmented training images, a first convolutional neural network; (b) training, using a set of semantically segmented training videos, a convolutional recurrent neural network, corresponding to the first convolutional neural network, wherein a convolutional layer has been replaced by a recurrent module having a hidden state. The training of the convolutional recurrent neural network, for each pair of successive frames (t−1, t) ∈ [1; T]² of a video of the set of semantically segmented training videos includes warping an internal state of a recurrent layer according to an estimated optical flow between the frames of the pair of successive frames, so as to adapt the internal state to the motion of pixels between the frames of the pair and learning parameters of at least the recurrent module.
    Type: Application
    Filed: July 22, 2019
    Publication date: May 21, 2020
    Applicant: Naver Corporation
    Inventor: Philippe Weinzaepfel
  • Publication number: 20190303677
    Abstract: A method for training a convolutional neural network for classification of actions performed by subjects in a video is realized by (a) for each frame of the video, for each key point of the subject, generating a heat map of the key point representing a position estimation of the key point within the frame; (b) colorizing each heat map as a function of the relative time of the corresponding frame in the video; (c) for each key point, aggregating all the colorized heat maps of the key point into at least one image representing the evolution of the position estimation of the key point during the video; and training the convolutional neural network using as input the sets of images associated with each training video, representing the evolution of the position estimation of each key point during the video.
    Type: Application
    Filed: March 20, 2019
    Publication date: October 3, 2019
    Applicant: Naver Corporation
    Inventors: Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, Cordelia Schmid