Abstract: A method for selecting frames used in face processing includes capturing video data featuring a face of an individual and determining, with a processing unit, at least one image quality indicator for at least some frames in the video data. The quality indicator is used for selecting a subset of the frames, and a sequence of frames is determined corresponding to a movement of a body portion detected in the video data and/or to a response window during which a response to a challenge should be given. At least one second frame is added to the subset within a predefined interval before or after the sequence, and the selected frames are stored in a memory.
Type:
Grant
Filed:
December 16, 2016
Date of Patent:
June 22, 2021
Assignee:
Keylemon SA
Inventors:
Yann Rodriguez, François Moulin, Sébastien Piccand, Sara Sedlar
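The selection logic described in the abstract above can be sketched roughly as follows. This is an illustrative reading, not the patented implementation: the names (`Frame`, `select_frames`, `quality`), the threshold-based quality test, and the integer-index motion range are all assumptions made for clarity.

```python
# Hypothetical sketch: select frames by an image-quality indicator, then
# add frames falling within a predefined interval before or after a
# detected motion/response sequence. Names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    quality: float  # image quality indicator, e.g. a sharpness score

def select_frames(frames, motion_range, interval=2, quality_threshold=0.5):
    """Return sorted indices of the selected subset.

    motion_range: (start, end) frame indices of the detected sequence.
    interval: how many frames before/after the sequence to also keep.
    """
    # Step 1: quality-based subset.
    selected = {f.index for f in frames if f.quality >= quality_threshold}
    # Step 2: add frames within `interval` before or after the sequence.
    start, end = motion_range
    for f in frames:
        if start - interval <= f.index < start or end < f.index <= end + interval:
            selected.add(f.index)
    return sorted(selected)

frames = [Frame(i, q) for i, q in enumerate([0.9, 0.2, 0.8, 0.1, 0.3, 0.7])]
print(select_frames(frames, motion_range=(4, 4), interval=1))  # → [0, 2, 3, 5]
```

Here frame 3 is low-quality but is kept because it falls immediately before the detected sequence, matching the abstract's "second frame" addition.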
Abstract: A method for selecting frames used in face processing includes capturing video data featuring a face of an individual and determining, with a processing unit, at least one image quality indicator for at least some frames in the video data. The quality indicator is used for selecting a subset of the frames, and a sequence of frames is determined corresponding to a movement of a body portion detected in the video data and/or to a response window during which a response to a challenge should be given. At least one second frame is added to the subset within a predefined interval before or after the sequence, and the selected frames are stored in a memory.
Type:
Application
Filed:
December 16, 2016
Publication date:
December 19, 2019
Applicant:
Keylemon SA
Inventors:
Yann Rodriguez, François Moulin, Sébastien Piccand, Sara Sedlar
Abstract: A pose rectification method for rectifying a pose in data representing face images, comprising the steps of: A—acquiring at least one test frame including 2D near-infrared image data, 2D visible-light image data, and a depth map; C—estimating the pose of a face in said test frame by aligning said depth map with a 3D model of a head of known orientation; D—mapping at least one of said 2D images onto the depth map, so as to generate textured image data; E—projecting the textured image data in 2D, so as to generate data representing a pose-rectified 2D projected image.
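The pose-estimation and re-projection steps above can be sketched in NumPy. This is a simplified illustration, not the patented method: it stands in for the depth-map alignment with a rigid Kabsch fit between matched point sets, and for the final projection with a plain orthographic drop of the Z coordinate; all function names are assumptions.

```python
# Illustrative sketch of steps C-E: estimate the head pose by rigidly
# aligning depth-map points to a 3D head model of known orientation
# (Kabsch algorithm), then rotate the textured points to that
# orientation and project orthographically to a pose-rectified 2D view.
import numpy as np

def estimate_pose(depth_points, model_points):
    """Rotation R such that R @ depth_point ~= model_point (Kabsch)."""
    P = depth_points - depth_points.mean(axis=0)
    Q = model_points - model_points.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def rectify(depth_points, texture, R):
    """Rotate textured 3D points to the model orientation and project
    onto the XY plane: per-point 2D coordinates plus their texture."""
    rotated = depth_points @ R.T
    return rotated[:, :2], texture

# Toy example: the observed head is the model rotated 30° about Y.
theta = np.deg2rad(30)
Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
model = np.random.default_rng(0).normal(size=(50, 3))
depth = model @ Ry.T                       # rotated-pose observation
R = estimate_pose(depth, model)
coords, tex = rectify(depth, np.ones(50), R)
assert np.allclose(depth @ R.T, model, atol=1e-6)  # pose recovered
```

In the actual method the texture would come from the near-infrared or visible-light image mapped onto the depth map (step D); here a constant texture stands in for it.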