Patents by Inventor Symeon Nikitidis

Symeon Nikitidis has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11941918
    Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during training from a set of captured 3D training images, each being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data. Training proceeds by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: March 26, 2024
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
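The training procedure this abstract describes, computing an output from the 2D image data of each training sample, comparing it with the corresponding captured depth data, and adapting the parameters to reduce the mismatch, can be sketched as a plain gradient-descent loop. Below is a minimal illustration in numpy; the synthetic arrays standing in for captured 3D training images and the single linear layer standing in for the image processing component are assumptions for the sketch, not details taken from the patent.

```python
import numpy as np

# Synthetic stand-in for a captured 3D training set: each sample pairs
# 2D image data (a row of X) with corresponding depth data (a row of Y).
rng = np.random.default_rng(0)
n_samples, n_pixels = 200, 64
X = rng.normal(size=(n_samples, n_pixels))                       # 2D image data
true_W = 0.1 * rng.normal(size=(n_pixels, n_pixels))
Y = X @ true_W + 0.01 * rng.normal(size=(n_samples, n_pixels))   # depth data

# "Image processing parameters": here a single linear layer, initialised to zero.
W = np.zeros((n_pixels, n_pixels))

lr = 0.1
for _ in range(1000):
    out = X @ W                             # image processing output
    grad = X.T @ (out - Y) / n_samples      # gradient of mean squared error
    W -= lr * grad                          # adapt parameters toward the depth data

mse = float(np.mean((X @ W - Y) ** 2))      # small once training has converged
```

After training, the learned parameters map the 2D image data close to the captured depth, which is the sense in which the component has "learned" to extract depth from 2D inputs alone.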
  • Publication number: 20230252662
    Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during training from a set of captured 3D training images, each being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data. Training proceeds by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Application
    Filed: April 14, 2023
    Publication date: August 10, 2023
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
  • Patent number: 11657525
    Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during training from a set of captured 3D training images, each being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data. Training proceeds by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Grant
    Filed: November 30, 2020
    Date of Patent: May 23, 2023
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
  • Patent number: 11625464
    Abstract: One aspect provides a method of authenticating a user of a user device, the method comprising: receiving motion data captured using a motion sensor of the user device during an interval of motion of the user device induced by the user; processing the motion data to generate a device motion feature vector; inputting the device motion feature vector to a neural network, the neural network having been trained to distinguish between device motion feature vectors captured from different users; and authenticating the user of the user device by using a resulting vector output of the neural network to determine whether the user-induced device motion matches an expected device motion pattern uniquely associated with an authorized user, the neural network having been trained on device motion feature vectors captured from a group of training users that does not include the authorized user.
    Type: Grant
    Filed: June 19, 2020
    Date of Patent: April 11, 2023
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Jan Kurcius, Francisco Angel Garcia Rodriguez
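The authentication flow above, embed a device motion feature vector with a network trained on other users, then compare the resulting vector output against a pattern enrolled for the authorized user, can be sketched as follows. The fixed random projection standing in for the trained network, the vector dimensions, and the similarity threshold are all illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the trained neural network: a fixed nonlinear projection from a
# 32-dimensional device motion feature vector to a 16-dimensional embedding.
# (In the patent this network is trained to separate motion from different
# training users; here its weights are simply random.)
PROJ = rng.normal(size=(32, 16))

def embed(motion_features: np.ndarray) -> np.ndarray:
    """Map a device motion feature vector to a unit-length vector output."""
    v = np.tanh(motion_features @ PROJ)
    return v / np.linalg.norm(v)

# Enrolment: store the embedding of the authorized user's characteristic motion.
enrolled_motion = rng.normal(size=32)
template = embed(enrolled_motion)

def authenticate(motion_features: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept when the embedding of the observed motion matches the enrolled
    pattern (a dot product of unit vectors, i.e. cosine similarity)."""
    return float(embed(motion_features) @ template) >= threshold
```

The key design point the abstract stresses is that the network never sees the authorized user during training; it only learns an embedding in which different users' motion separates, and enrolment then pins down one user's region of that space.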
  • Patent number: 11281921
    Abstract: A method of configuring an anti-spoofing system to detect whether a spoofing attack has been attempted, in which an image processing component of the anti-spoofing system is trained to process 2D verification images according to a set of image processing parameters in order to extract depth information from the 2D verification images. The configured anti-spoofing system comprises an anti-spoofing component which uses an output from the image processing component's processing of a 2D verification image to determine whether an entity captured in that image is an actual human or a spoofing entity.
    Type: Grant
    Filed: December 4, 2019
    Date of Patent: March 22, 2022
    Assignee: Yoti Holding Limited
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
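One simple way an anti-spoofing component could use extracted depth, sketched below, is to flag a verification image whose recovered depth map is nearly flat, since a printed photo or a screen replay has essentially no relief. The flatness test and the millimetre threshold are assumptions made for illustration; the patent does not specify this particular decision rule.

```python
import numpy as np

def is_spoof(depth_map: np.ndarray, min_relief: float = 5.0) -> bool:
    """Flag a 2D verification image as a spoof when the depth extracted from
    it is nearly flat, as it would be for a printed photo or screen replay.

    min_relief (millimetres) is an illustrative threshold, not a value from
    the patent.
    """
    relief = float(depth_map.max() - depth_map.min())
    return relief < min_relief

# A real face shows tens of millimetres of relief; a photo of a face shows ~none.
face_depth = np.array([[400.0, 430.0], [415.0, 450.0]])   # genuine subject
photo_depth = np.full((2, 2), 420.0)                      # flat spoofing entity
```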
  • Publication number: 20210209387
    Abstract: A method of configuring an anti-spoofing system to detect whether a spoofing attack has been attempted, in which an image processing component of the anti-spoofing system is trained to process 2D verification images according to a set of image processing parameters in order to extract depth information from the 2D verification images. The configured anti-spoofing system comprises an anti-spoofing component which uses an output from the image processing component's processing of a 2D verification image to determine whether an entity captured in that image is an actual human or a spoofing entity.
    Type: Application
    Filed: December 4, 2019
    Publication date: July 8, 2021
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
  • Publication number: 20210082136
    Abstract: An image processing component is trained to process 2D images of human body parts in order to extract depth information about the human body parts captured therein. Image processing parameters are learned during training from a set of captured 3D training images, each being of a human body part, captured using 3D image capture equipment, and comprising 2D image data and corresponding depth data. Training proceeds by: processing the 2D image data of each 3D training image according to the image processing parameters, so as to compute an image processing output for comparison with the corresponding depth data of that 3D image; and adapting the image processing parameters to match the image processing outputs to the corresponding depth data, thereby training the image processing component to extract depth information from 2D images of human body parts.
    Type: Application
    Filed: November 30, 2020
    Publication date: March 18, 2021
    Inventors: Symeon Nikitidis, Francisco Angel Garcia Rodriguez, Erlend Davidson, Samuel Neugber
  • Publication number: 20200320184
    Abstract: One aspect provides a method of authenticating a user of a user device, the method comprising: receiving motion data captured using a motion sensor of the user device during an interval of motion of the user device induced by the user; processing the motion data to generate a device motion feature vector; inputting the device motion feature vector to a neural network, the neural network having been trained to distinguish between device motion feature vectors captured from different users; and authenticating the user of the user device by using a resulting vector output of the neural network to determine whether the user-induced device motion matches an expected device motion pattern uniquely associated with an authorized user, the neural network having been trained on device motion feature vectors captured from a group of training users that does not include the authorized user.
    Type: Application
    Filed: June 19, 2020
    Publication date: October 8, 2020
    Inventors: Symeon Nikitidis, Jan Kurcius, Francisco Angel Garcia Rodriguez
  • Patent number: 10546183
    Abstract: A liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module. The controller is configured to control an output device to provide randomized outputs to an entity over an interval of time. The video input is configured to receive a moving image of the entity captured by a camera over the interval of time. The feature recognition module is configured to process the moving image to detect at least one human feature of the entity. The liveness detection module is configured to compare a behaviour exhibited by the detected human feature over the interval of time with the randomized outputs, to determine whether the behaviour is an expected reaction to the randomized outputs and thereby whether the entity is a living being.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: January 28, 2020
    Assignee: Yoti Holding Limited
    Inventors: Francisco Angel Garcia Rodriguez, Benjamin Robert Tremoulheac, Symeon Nikitidis, Thomas Bastiani, Miguel Jimenez
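The challenge-response logic in this abstract, issue randomized outputs, then check that the observed behaviour is the expected reaction, can be sketched as below. The specific challenge vocabulary ("look left", etc.) and the match threshold are illustrative assumptions; the patent covers randomized outputs generally.

```python
import random

def issue_challenge(n_steps: int = 4) -> list[str]:
    """Randomized outputs shown to the entity over the interval of time,
    e.g. a sequence of gaze-direction prompts."""
    return [random.choice(["left", "right", "up", "down"]) for _ in range(n_steps)]

def is_live(challenge: list[str], observed: list[str], min_match: float = 0.75) -> bool:
    """Compare the behaviour exhibited by the detected human feature against
    the randomized outputs: a live subject reacts as prompted, whereas a
    pre-recorded replay cannot anticipate the random sequence."""
    if len(observed) != len(challenge):
        return False
    matches = sum(c == o for c, o in zip(challenge, observed))
    return matches / len(challenge) >= min_match

challenge = issue_challenge()
compliant = list(challenge)                    # reactions that follow the prompts
```

Because the challenge is drawn at random per session, a replayed video that happened to match one session fails the next, which is what makes the comparison a liveness test rather than a simple face check.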
  • Publication number: 20180239955
    Abstract: A liveness detection system comprises a controller, a video input, a feature recognition module, and a liveness detection module. The controller is configured to control an output device to provide randomized outputs to an entity over an interval of time. The video input is configured to receive a moving image of the entity captured by a camera over the interval of time. The feature recognition module is configured to process the moving image to detect at least one human feature of the entity. The liveness detection module is configured to compare a behaviour exhibited by the detected human feature over the interval of time with the randomized outputs, to determine whether the behaviour is an expected reaction to the randomized outputs and thereby whether the entity is a living being.
    Type: Application
    Filed: February 9, 2018
    Publication date: August 23, 2018
    Inventors: Francisco Angel Garcia Rodriguez, Benjamin Robert Tremoulheac, Symeon Nikitidis, Thomas Bastiani, Miguel Jimenez