Patents by Inventor Symeon Nikitidis
Symeon Nikitidis has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11941918
Abstract: An image processing component is trained to process 2D images of human body parts, in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each 3D training image being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
Type: Grant
Filed: April 14, 2023
Date of Patent: March 26, 2024
Assignee: Yoti Holding Limited
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
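The training loop the abstract describes can be sketched in a few lines. This is a minimal illustration only, assuming a single linear layer as the "image processing parameters" and synthetic RGB-D-style data; the patent does not specify an architecture, loss, or optimizer.

```python
import numpy as np

# Illustrative sketch: parameters are adapted so that processing each 2D
# image yields an output matching that image's captured depth data.
rng = np.random.default_rng(0)

N, PIXELS = 64, 16                       # 64 3D training images, 16 pixels each
images = rng.normal(size=(N, PIXELS))    # 2D image data from each 3D capture
true_w = rng.normal(size=(PIXELS, PIXELS))
depths = images @ true_w                 # corresponding depth data (synthetic)

W = np.zeros((PIXELS, PIXELS))           # the image processing parameters
lr = 0.05
for _ in range(3000):
    pred = images @ W                    # image processing output
    grad = images.T @ (pred - depths) / N    # gradient of mean squared error
    W -= lr * grad                       # adapt parameters toward the depth data

mse = float(np.mean((images @ W - depths) ** 2))
```

After training, `mse` is near zero: the component has learned to recover depth from the 2D data alone, which is the behaviour the claim describes for unseen 2D images.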
-
Publication number: 20230252662
Abstract: An image processing component is trained to process 2D images of human body parts, in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each 3D training image being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
Type: Application
Filed: April 14, 2023
Publication date: August 10, 2023
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
-
Patent number: 11657525
Abstract: An image processing component is trained to process 2D images of human body parts, in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each 3D training image being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
Type: Grant
Filed: November 30, 2020
Date of Patent: May 23, 2023
Assignee: Yoti Holding Limited
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
-
Patent number: 11625464
Abstract: One aspect provides a method of authenticating a user of a user device, the method comprising: receiving motion data captured using a motion sensor of the user device during an interval of motion of the user device induced by the user; processing the motion data to generate a device motion feature vector; inputting the device motion feature vector to a neural network, the neural network having been trained to distinguish between device motion feature vectors captured from different users; and authenticating the user of the user device, by using a resulting vector output of the neural network to determine whether the user-induced device motion matches an expected device motion pattern uniquely associated with an authorized user, the neural network having been trained based on device motion feature vectors captured from a group of training users, which does not include the authorized user.
Type: Grant
Filed: June 19, 2020
Date of Patent: April 11, 2023
Assignee: Yoti Holding Limited
Inventors: Symeon Nikitidis, Jan Kurcius, Francisco Angel Garcia Rodriguez
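The matching step can be illustrated as follows. Everything here is an assumption for the sketch: a single random-projection layer stands in for the trained neural network, and cosine similarity with a fixed threshold stands in for the match decision; the patent specifies neither.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(32, 64))           # stand-in for trained network weights

def embed(feature_vec: np.ndarray) -> np.ndarray:
    """Map a device-motion feature vector to a unit-norm embedding."""
    h = np.tanh(W1 @ feature_vec)        # one nonlinear layer, for illustration
    return h / np.linalg.norm(h)

def authenticate(candidate: np.ndarray, enrolled: np.ndarray,
                 threshold: float = 0.9) -> bool:
    """Accept if the candidate's embedding matches the enrolled motion pattern."""
    return float(embed(candidate) @ enrolled) >= threshold

enrolled_motion = rng.normal(size=64)    # motion captured at enrolment
template = embed(enrolled_motion)        # expected device motion pattern

# The authorized user's motion (with small sensor noise) matches the template;
# an independent user's motion does not.
same_user = authenticate(enrolled_motion + 0.01 * rng.normal(size=64), template)
other_user = authenticate(rng.normal(size=64), template)
```

Note the key point from the claim: the embedding network itself can be trained on other users entirely, because authentication only compares the authorized user's enrolled embedding against new embeddings.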
-
Patent number: 11281921
Abstract: A method of configuring an anti-spoofing system to detect if a spoofing attack has been attempted, in which an image processing component of the anti-spoofing system is trained to process 2D verification images according to a set of image processing parameters, in order to extract depth information from the 2D verification images. The configured anti-spoofing system comprises an anti-spoofing component which uses an output from the processing of a 2D verification image by the image processing component to determine whether an entity captured in that image corresponds to an actual human or a spoofing entity.
Type: Grant
Filed: December 4, 2019
Date of Patent: March 22, 2022
Assignee: Yoti Holding Limited
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
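A hedged sketch of the anti-spoofing decision: the image processing component yields a depth map for the 2D verification image, and the anti-spoofing component checks whether that map shows the relief of a real 3D face or the flatness of a printed photo or screen replay. The variance test and threshold below are illustrative assumptions, not the patent's actual decision rule.

```python
import numpy as np

def is_live(depth_map: np.ndarray, min_depth_var: float = 1e-3) -> bool:
    """Return True if the recovered depth varies like a real 3D surface."""
    return float(np.var(depth_map)) > min_depth_var

# Printed photo held up to the camera: essentially constant depth.
flat_depth = np.full((8, 8), 0.5)

# Real face: smoothly varying relief (synthetic curved surface here).
face_depth = 0.5 + 0.1 * np.fromfunction(
    lambda i, j: np.sin(i / 3) * np.cos(j / 3), (8, 8))
```

The design point is that the anti-spoofing component never needs true 3D capture at verification time: it relies on the depth information the trained component extracts from a single 2D image.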
-
Publication number: 20210209387
Abstract: A method of configuring an anti-spoofing system to detect if a spoofing attack has been attempted, in which an image processing component of the anti-spoofing system is trained to process 2D verification images according to a set of image processing parameters, in order to extract depth information from the 2D verification images. The configured anti-spoofing system comprises an anti-spoofing component which uses an output from the processing of a 2D verification image by the image processing component to determine whether an entity captured in that image corresponds to an actual human or a spoofing entity.
Type: Application
Filed: December 4, 2019
Publication date: July 8, 2021
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
-
Publication number: 20210082136
Abstract: An image processing component is trained to process 2D images of human body parts, in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during the training from a training set of captured 3D training images, each 3D training image being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data, by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters in order to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
Type: Application
Filed: November 30, 2020
Publication date: March 18, 2021
Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
-
Publication number: 20200320184
Abstract: One aspect provides a method of authenticating a user of a user device, the method comprising: receiving motion data captured using a motion sensor of the user device during an interval of motion of the user device induced by the user; processing the motion data to generate a device motion feature vector; inputting the device motion feature vector to a neural network, the neural network having been trained to distinguish between device motion feature vectors captured from different users; and authenticating the user of the user device, by using a resulting vector output of the neural network to determine whether the user-induced device motion matches an expected device motion pattern uniquely associated with an authorized user, the neural network having been trained based on device motion feature vectors captured from a group of training users, which does not include the authorized user.
Type: Application
Filed: June 19, 2020
Publication date: October 8, 2020
Inventors: Symeon Nikitidis, Jan Kurcius, Francisco Angel Garcia Rodriguez
-
Patent number: 10546183
Abstract: A liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module. The controller is configured to control an output device to provide randomized outputs to an entity over an interval of time. The video input is configured to receive a moving image of the entity captured by a camera over the interval of time. The feature recognition module is configured to process the moving image to detect at least one human feature of the entity. The liveness detection module is configured to compare with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
Type: Grant
Filed: February 9, 2018
Date of Patent: January 28, 2020
Assignee: Yoti Holding Limited
Inventors: Francisco Angel Garcia Rodriguez, Benjamin Robert Tremoulheac, Symeon Nikitidis, Thomas Bastiani, Miguel Jimenez
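The challenge-response flow described above can be sketched as follows. The prompt names, matching rule, and threshold are illustrative assumptions: the controller issues a randomized sequence of outputs, and the liveness detection module compares the observed behaviour against that sequence.

```python
import random

PROMPTS = ["look_left", "look_right", "look_up", "blink"]

def issue_challenges(n: int, seed: int) -> list:
    """Controller: pick a randomized sequence of outputs for the entity."""
    rng = random.Random(seed)
    return [rng.choice(PROMPTS) for _ in range(n)]

def is_living(challenges: list, observed: list, min_match: float = 0.8) -> bool:
    """Liveness module: does the observed behaviour track the random outputs?"""
    hits = sum(c == o for c, o in zip(challenges, observed))
    return hits / len(challenges) >= min_match

challenges = issue_challenges(5, seed=42)

# A living person reacts to each prompt as it appears.
live_entity = is_living(challenges, observed=list(challenges))

# A pre-recorded replay cannot anticipate the random sequence; here we model
# it as behaviour that answers every prompt incorrectly.
wrong = [PROMPTS[(PROMPTS.index(c) + 1) % len(PROMPTS)] for c in challenges]
replay = is_living(challenges, observed=wrong)
```

Randomizing the outputs is what defeats replay attacks: a recorded video can mimic human features, but not an unpredictable sequence chosen at verification time.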
-
Publication number: 20180239955
Abstract: A liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module. The controller is configured to control an output device to provide randomized outputs to an entity over an interval of time. The video input is configured to receive a moving image of the entity captured by a camera over the interval of time. The feature recognition module is configured to process the moving image to detect at least one human feature of the entity. The liveness detection module is configured to compare with the randomized outputs a behaviour exhibited by the detected human feature over the interval of time to determine whether the behaviour is an expected reaction to the randomized outputs, thereby determining whether the entity is a living being.
Type: Application
Filed: February 9, 2018
Publication date: August 23, 2018
Inventors: Francisco Angel Garcia Rodriguez, Benjamin Robert Tremoulheac, Symeon Nikitidis, Thomas Bastiani, Miguel Jimenez