Abstract: In some embodiments, an exemplary inventive computer-implemented method may include steps, performed by a processor, of: obtaining training real representations of a real subject; obtaining a training synthetic representation having a visual effect applied to a synthetic subject; training a first neural network and a second neural network by: presenting the first neural network with the training real representations and candidate meta-parameters of latent variables for the visual effect to generate a training photorealistic-imitating synthetic representation of the real subject with the visual effect; presenting the second neural network with the training photorealistic-imitating synthetic representation and the training synthetic representation to determine actual meta-parameters of the latent variables of the visual effect, where the actual meta-parameters are meta-parameters at which the second neural network has identified that the training photorealistic-imitating synthetic representation is realistic, and pr
Type:
Grant
Filed:
July 12, 2018
Date of Patent:
July 21, 2020
Assignee:
Banuba Limited
Inventors:
Viktor Prokopenya, Yury Hushchyn, Alexander Lemeza
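The abstract above describes a generator/estimator pair: one network produces a photorealistic-imitating synthetic image of the real subject with the effect from candidate meta-parameters, and a second network compares that output with a reference synthetic image to recover the meta-parameters. Below is a minimal, hypothetical PyTorch sketch of that two-network arrangement; the class names (`EffectGenerator`, `MetaEstimator`), tensor sizes, tiny architectures, and the single loss term are all illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of the two-network arrangement described above.
# All class names, sizes, and the loss choice are illustrative only.
import torch
import torch.nn as nn

class EffectGenerator(nn.Module):
    """Maps a real image plus candidate meta-parameters to a synthetic image with the effect."""
    def __init__(self, n_meta=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * 64 * 64 + n_meta, 256), nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64), nn.Tanh())
    def forward(self, real_img, meta):
        x = torch.cat([real_img.flatten(1), meta], dim=1)
        return self.net(x).view(-1, 3, 64, 64)

class MetaEstimator(nn.Module):
    """Compares the generated image with a reference synthetic image and predicts meta-parameters."""
    def __init__(self, n_meta=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * 3 * 64 * 64, 256), nn.ReLU(),
            nn.Linear(256, n_meta))
    def forward(self, generated, reference):
        x = torch.cat([generated.flatten(1), reference.flatten(1)], dim=1)
        return self.net(x)

gen, est = EffectGenerator(), MetaEstimator()
opt = torch.optim.Adam(list(gen.parameters()) + list(est.parameters()), lr=1e-4)

real_img = torch.rand(4, 3, 64, 64)        # training real representations
reference = torch.rand(4, 3, 64, 64)       # training synthetic representation with the effect
candidate_meta = torch.rand(4, 8)          # candidate meta-parameters of latent variables

generated = gen(real_img, candidate_meta)
actual_meta = est(generated, reference)
# One illustrative objective: push the estimator's output toward the candidates used to generate.
loss = nn.functional.mse_loss(actual_meta, candidate_meta)
opt.zero_grad(); loss.backward(); opt.step()
```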
Abstract: The exemplary inventive instant messaging system may include a sending client that accesses encryption data associated with a receiving client on a distributed mesh network where the encryption data is signed by a receiver public key of the receiving client, forms a non-interactive message exchange session on the distributed mesh network, generates a first session key based on the encryption data and a sender secret key, encrypts a message using the first session key, encrypts session information using the receiver public key, produces a session state including the encrypted message and the encrypted session information and stores the session state in the non-interactive message exchange session. The receiving client accesses the session state, decrypts the encrypted session information with a receiver secret key, generates a second session key using the session information and a sender public key, and decrypts the message using the second session key.
Type:
Grant
Filed:
July 9, 2019
Date of Patent:
March 24, 2020
Assignee:
Banuba Limited
Inventors:
Viktor Prokopenya, Yury Hushchyn, Nikolay Voronetskiy, Kanstantsin Zakharchanka
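The key-agreement flow in the abstract is Diffie-Hellman shaped: the sender derives a session key from its secret key and the receiver's public key, and the receiver re-derives the same key from its secret key and the sender's public key. The sketch below illustrates only that flow with a generic X25519 + HKDF + AES-GCM construction from the `cryptography` package; the distributed mesh network, the signed encryption data, the encrypted session information, and the stored session state from the abstract are deliberately omitted, and this is not the patented protocol itself.

```python
# Hedged sketch: a generic X25519 + HKDF + AES-GCM flow standing in for the
# non-interactive exchange; mesh-network session state and the signed/encrypted
# session information described in the abstract are omitted.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_session_key(my_secret, peer_public):
    shared = my_secret.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"session-key").derive(shared)

# Long-term key pairs for the two clients (illustrative only).
sender_secret = X25519PrivateKey.generate()
receiver_secret = X25519PrivateKey.generate()
sender_public, receiver_public = sender_secret.public_key(), receiver_secret.public_key()

# Sending client: first session key from its secret key and the receiver's public key.
k1 = derive_session_key(sender_secret, receiver_public)
nonce = os.urandom(12)
ciphertext = AESGCM(k1).encrypt(nonce, b"hello over the mesh", None)

# Receiving client: second session key from its secret key and the sender's public key.
k2 = derive_session_key(receiver_secret, sender_public)
assert AESGCM(k2).decrypt(nonce, ciphertext, None) == b"hello over the mesh"
```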
Abstract: Embodiments of the present disclosure include receiving a sequence of images of a face of a user. A three-dimensional (3D) model of the face is generated and 3D facial points associated with flat facial surfaces are determined. The 3D facial points are projected onto a screen coordinate plane to produce two-dimensional (2D) facial points. A hue is determined for each pixel associated with each of the 2D facial points in each image. A mean hue value is determined for each image. A spectral representation of a variation in each mean hue value across the sequence of images is determined. A frequency of a main hue is determined based on a largest weight of the variation in each mean hue value. A heart rate of the user is determined based on facial blood circulation according to the frequency of the main hue, and an activity recommendation is displayed.
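The spectral step of this abstract, taking the per-frame mean hue series, finding its dominant frequency, and converting that frequency to beats per minute, can be sketched in a few lines of NumPy. The sketch below assumes a fixed camera frame rate and a simulated `mean_hues` series standing in for the values the 3D-model projection and per-pixel hue extraction would produce; it is an illustration of the frequency analysis only.

```python
# Hedged sketch of the spectral step only: 3D modelling, projection to 2D facial
# points, and per-pixel hue extraction are assumed to have produced mean_hues.
import numpy as np

fps = 30.0                                   # assumed camera frame rate
rng = np.random.default_rng(0)
t = np.arange(300) / fps                     # 10 s of frames
# Stand-in for the per-frame mean hue over the flat facial surfaces (72 bpm pulse).
mean_hues = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.002 * rng.standard_normal(t.size)

# Spectral representation of the variation of the mean hue across the sequence.
centered = mean_hues - mean_hues.mean()
spectrum = np.abs(np.fft.rfft(centered))
freqs = np.fft.rfftfreq(centered.size, d=1.0 / fps)

# Frequency of the main hue = largest spectral weight inside a plausible heart-rate band.
band = (freqs >= 0.7) & (freqs <= 4.0)       # roughly 42-240 bpm
main_freq = freqs[band][np.argmax(spectrum[band])]
print("estimated heart rate: %.0f bpm" % (main_freq * 60))
```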
Abstract: In some embodiments, the present invention provides for an exemplary system that may include at least the following components: a camera component, where the camera component is configured to acquire a visual input, where the visual input includes a face of a person; a processor configured to: obtain the visual input; apply a face detection algorithm to detect a presence of the face within the visual input; extract a vector of at least one feature of the face; match the vector to a stored profile of the person to identify the person; fit, based on person-specific meta-parameters, a three-dimensional morphable face model (3DMFM) to obtain a person-specific 3DMFM of the person; apply a facial expression detection algorithm to the person-specific 3DMFM to determine a person-specific facial expression; and cause to perform at least one activity associated with the person based at least in part on the person-specific facial expression of the person.
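The "match the vector to a stored profile" step in this abstract is, in general, a nearest-neighbour lookup over stored feature vectors. The NumPy sketch below illustrates one common way to do that (cosine similarity with a threshold); the face detector, feature extractor, 3DMFM fitting, and expression detection are assumed to happen elsewhere, and the profile names, threshold, and 128-dimensional vectors are hypothetical.

```python
# Hedged sketch of only the profile-matching step; the face detector, feature
# extractor, 3DMFM fitting, and expression detection are assumed elsewhere.
import numpy as np

def match_profile(feature_vec, profiles, threshold=0.8):
    """Return the name of the closest stored profile by cosine similarity, or None."""
    best_name, best_score = None, threshold
    v = feature_vec / np.linalg.norm(feature_vec)
    for name, stored in profiles.items():
        score = float(v @ (stored / np.linalg.norm(stored)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

rng = np.random.default_rng(1)
profiles = {"alice": rng.standard_normal(128), "bob": rng.standard_normal(128)}
query = profiles["alice"] + 0.05 * rng.standard_normal(128)   # noisy probe vector
print(match_profile(query, profiles))                          # -> "alice"
```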
Abstract: Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and the exemplary illustrative systems of the present invention are specifically configured to process image data to identify and align the presence of a face in a particular frame.
Type:
Grant
Filed:
August 10, 2018
Date of Patent:
April 9, 2019
Assignee:
Banuba Limited
Inventors:
Yury Hushchyn, Aliaksei Sakolski, Alexander Poplavsky
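The abstract above does not disclose the tracker itself, only that a face is identified in each frame of a video stream. As a generic stand-in, the sketch below runs OpenCV's stock Haar-cascade detector per frame of a capture source; this is not the patented method, and the video source index and cascade choice are assumptions.

```python
# Hedged stand-in: a stock OpenCV Haar cascade used purely to illustrate
# per-frame face detection in a video stream, not the patented tracker.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # any camera index or video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:               # one box per face detected in this frame
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```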
Abstract: In some embodiments, the present invention provides an exemplary computing device, including at least: a scheduler processor; a CPU; a GPU; where the scheduler processor is configured to: obtain a computing task; divide the computing task into: a first set of subtasks and a second set of subtasks; submit the first set to the CPU; submit the second set to the GPU; determine, for a first subtask of the first set, a first execution time, a first execution speed, or both; determine, for a second subtask of the second set, a second execution time, a second execution speed, or both; dynamically rebalance an allocation of remaining non-executed subtasks of the computing task to be submitted to the CPU and the GPU, based, at least in part, on at least one of: a first comparison of the first execution time to the second execution time, and a second comparison of the first execution speed to the second execution speed.
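The rebalancing idea in this abstract, timing one subtask on each device and then reallocating the remaining subtasks in proportion to observed speed, can be sketched in plain Python. In the sketch below, `run_on_cpu` and `run_on_gpu` are placeholders (simple sleeps) for whatever actually executes a subtask on each device, and the proportional split rule is one plausible reading of the comparison step, not the patented scheduler.

```python
# Hedged sketch of the rebalancing idea; run_on_cpu / run_on_gpu are placeholders
# for whatever actually executes a subtask on each device.
import time

def timed(run, subtask):
    start = time.perf_counter()
    run(subtask)
    return time.perf_counter() - start

def rebalance(remaining, cpu_time, gpu_time):
    """Split remaining subtasks in proportion to each device's measured speed."""
    cpu_speed, gpu_speed = 1.0 / cpu_time, 1.0 / gpu_time
    cpu_share = cpu_speed / (cpu_speed + gpu_speed)
    n_cpu = round(len(remaining) * cpu_share)
    return remaining[:n_cpu], remaining[n_cpu:]

run_on_cpu = lambda task: time.sleep(0.002)   # stand-in: CPU takes ~2 ms per subtask
run_on_gpu = lambda task: time.sleep(0.001)   # stand-in: GPU takes ~1 ms per subtask

subtasks = list(range(100))
cpu_time = timed(run_on_cpu, subtasks[0])     # first subtask of the CPU set
gpu_time = timed(run_on_gpu, subtasks[1])     # second subtask, from the GPU set

cpu_queue, gpu_queue = rebalance(subtasks[2:], cpu_time, gpu_time)
print(len(cpu_queue), "subtasks to CPU,", len(gpu_queue), "to GPU")
```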
Abstract: In some embodiments, the present invention provides for a computer system that may include a camera component configured to acquire a visual content, where the visual content includes a plurality of frames having a visual representation of a person's face; and a processor configured to: train a face detection regressor with a synthetic face model database to obtain a face detection trained regressor; apply, for each frame, the face detection trained regressor to detect or to track the face based on facial features, local features, and a pre-defined hyperparameter; construct an intermediate multi-dimensional face model; apply machine learning to determine features of an intermediate multi-dimensional head model; construct a multi-dimensional avatar; and utilize the multi-dimensional avatar to perform an activity associated with the person.
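The "train a face detection regressor with a synthetic face model database" step above amounts to fitting a regressor from image features to face geometry on synthetically generated training pairs. The sketch below shows a closed-form ridge regression from simulated feature vectors to 2D landmark coordinates; the feature dimensionality, landmark count, and the simulated data are all assumptions, and the patent's actual regressor, hyperparameters, and 3D/avatar construction steps are not reproduced here.

```python
# Hedged sketch: a closed-form ridge regressor trained on a synthetic "database"
# of feature vectors paired with 2D landmark coordinates (all data simulated).
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features, n_landmarks = 500, 64, 68

X = rng.standard_normal((n_samples, n_features))        # synthetic image features
true_W = rng.standard_normal((n_features, 2 * n_landmarks))
Y = X @ true_W + 0.1 * rng.standard_normal((n_samples, 2 * n_landmarks))  # landmark coords

# Ridge regression, closed form: W = (X^T X + lambda I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# "Apply the trained regressor" to a new frame's features to predict its landmarks.
new_features = rng.standard_normal(n_features)
landmarks = (new_features @ W).reshape(n_landmarks, 2)
print(landmarks.shape)                                    # (68, 2)
```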
Abstract: In some embodiments, the present invention provides for an exemplary inventive system, including: a communication pipeline, including: at a first end of the communication pipeline: a first processor configured to: obtain a plurality of original content data units having a representative content associated with a subject; apply a trained artificial intelligence algorithm to identify: the representative content of the subject and original background content that is not associated with the subject; remove the original background content to reduce a volume of data being transmitted resulting in an increased capacity of the communication channel; encode and transmit each respective modified content data unit from the first end of the communication pipeline to a second end; a second processor configured to: receive and decode each respective modified content data unit; generate a respective artificial background content; and combine the representative content associated with the subject and the respective artificial background content.
Type:
Grant
Filed:
May 22, 2018
Date of Patent:
November 27, 2018
Assignee:
Banuba Limited
Inventors:
Viktor Prokopenya, Yury Hushchyn, Alexander Lemeza
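The two ends of the pipeline in this abstract reduce to a masking step on the sender side and a compositing step on the receiver side. The NumPy sketch below illustrates only those two steps; the trained segmentation model, the codec, and the actual transport are assumed and replaced by a hard-coded mask, and the frame sizes and background colour are arbitrary.

```python
# Hedged sketch of both pipeline ends; the trained segmentation model, codec,
# and transport are assumed and replaced here by simple stand-ins.
import numpy as np

def sender_side(frame, subject_mask):
    """Keep only subject pixels; the zeroed background shrinks the encoded payload."""
    return np.where(subject_mask[..., None], frame, 0)

def receiver_side(subject_only, subject_mask, artificial_bg):
    """Composite the received subject content over a generated artificial background."""
    return np.where(subject_mask[..., None], subject_only, artificial_bg)

h, w = 120, 160
frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
mask = np.zeros((h, w), dtype=bool)
mask[30:90, 50:110] = True                         # stand-in for the AI subject mask

transmitted = sender_side(frame, mask)             # what would be encoded and sent
artificial_bg = np.full((h, w, 3), (20, 120, 200), dtype=np.uint8)
reconstructed = receiver_side(transmitted, mask, artificial_bg)
print(reconstructed.shape)
```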
Abstract: In some embodiments, the present invention provides for an exemplary computer system that may include: a camera component configured to acquire a visual content, wherein the visual content includes a plurality of frames with a visual representation of a face of a person; a processor configured to: apply, for each frame, a multi-dimensional face detection regressor for fitting at least one meta-parameter to detect or to track a plurality of multi-dimensional landmarks representative of a face; apply a face movement detection algorithm to identify each displacement of each respective multi-dimensional landmark between frames; and apply a face movement compensation algorithm to generate a face movement compensated output that stabilizes the visual representation of the face.
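The compensation step in this abstract, measuring landmark displacement between frames and removing it, can be sketched with a smoothed offset applied to each frame. The NumPy example below assumes the landmark regressor has already produced per-frame landmark arrays; the exponential smoothing factor and the whole-frame shift are one simple interpretation of "compensation," not the patented algorithm.

```python
# Hedged sketch of the compensation step only; the landmark regressor is assumed
# to have produced per-frame landmark arrays already.
import numpy as np

def compensate(frames, landmarks, alpha=0.8):
    """Shift each frame so the smoothed mean landmark displacement is cancelled."""
    stabilized, offset = [frames[0]], np.zeros(2)
    for i in range(1, len(frames)):
        displacement = (landmarks[i] - landmarks[i - 1]).mean(axis=0)   # (dx, dy)
        offset = alpha * offset + (1 - alpha) * displacement            # smooth the jitter
        dx, dy = np.round(offset).astype(int)
        shifted = np.roll(frames[i], shift=(-dy, -dx), axis=(0, 1))     # undo the motion
        stabilized.append(shifted)
    return stabilized

rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(5)]
landmarks = [rng.random((68, 2)) * [160, 120] for _ in range(5)]        # (x, y) points
out = compensate(frames, landmarks)
print(len(out), out[0].shape)
```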
Abstract: Embodiments directed towards systems and methods for tracking a human face present within a video stream are described herein. In some embodiments, the exemplary illustrative methods and the exemplary illustrative systems of the present invention are specifically configured to process image data to identify and align the presence of a face in a particular frame.
Type:
Grant
Filed:
January 26, 2018
Date of Patent:
August 14, 2018
Assignee:
Banuba Limited
Inventors:
Yury Hushchyn, Aliaksei Sakolski, Alexander Poplavsky
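Beyond per-frame detection (illustrated after the earlier entry with the same abstract), this entry also mentions aligning the face. A standard way to align is a least-squares similarity transform from detected landmarks onto a canonical template, Umeyama-style; the NumPy sketch below shows that step with illustrative three-point landmarks and is not taken from the patent itself.

```python
# Hedged sketch of an alignment step: a least-squares similarity transform
# (Umeyama-style) from detected landmarks onto an illustrative canonical template.
import numpy as np

def similarity_transform(src, dst):
    """Return scale s, rotation R, translation t minimizing ||s*R*src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = 1.0 if np.linalg.det(U @ Vt) > 0 else -1.0   # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Illustrative canonical template (eyes and nose tip) and "detected" landmarks.
template = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 65.0]])
detected = np.array([[112.0, 95.0], [151.0, 88.0], [135.0, 113.0]])

s, R, t = similarity_transform(detected, template)
aligned = (s * (R @ detected.T)).T + t        # detected points mapped onto the template
print(np.round(aligned, 1))
```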