Patents Assigned to Spree3D Corporation
-
Patent number: 11854579
Abstract: Apparati, methods, and computer readable media for inserting identity information from a source image (static image or video) (301) into a destination video (302), while mimicking motion of the destination video (302). In an apparatus embodiment, an identity encoder (304) is configured to encode identity information of the source image (301). When source image (301) is a multi-frame static image or a video, an identity code aggregator (307) is positioned at an output of the identity encoder (304), and produces an identity vector (314). A driver encoder (313) is coupled to the destination (driver) video (302), and has two components: a pose encoder (305) configured to encode pose information of the destination video (302), and a motion encoder (315) configured to separately encode motion information of the destination video (302). The driver encoder (313) produces two vectors: a pose vector (308) and a motion vector (316).
Type: Grant
Filed: July 12, 2021
Date of Patent: December 26, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
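The encoder/aggregator pipeline described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration only: the dimensions, the linear "encoders", the mean aggregator, and the last-two-frames motion representation are all hypothetical stand-ins, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the patent does not specify any sizes.
FRAME_DIM, CODE_DIM = 64, 16

W_id = rng.standard_normal((FRAME_DIM, CODE_DIM))
W_pose = rng.standard_normal((FRAME_DIM, CODE_DIM))
W_motion = rng.standard_normal((2 * FRAME_DIM, CODE_DIM))

def identity_encoder(frames):
    """Encode identity information of the source frames (301)."""
    return frames @ W_id

def identity_code_aggregator(codes):
    """Aggregate per-frame identity codes into one identity vector (314)."""
    return codes.mean(axis=0)

def driver_encoder(driver_frames):
    """Driver encoder (313): pose encoder (305) plus a separate
    motion encoder (315) over the destination video (302)."""
    pose_vec = driver_frames[-1] @ W_pose  # pose of the current frame
    # Crude motion stand-in: current frame plus frame-to-frame difference.
    diffs = np.concatenate([driver_frames[-1],
                            driver_frames[-1] - driver_frames[-2]])
    motion_vec = diffs @ W_motion
    return pose_vec, motion_vec

source = rng.standard_normal((5, FRAME_DIM))  # multi-frame source image (301)
driver = rng.standard_normal((8, FRAME_DIM))  # destination (driver) video (302)

identity_vec = identity_code_aggregator(identity_encoder(source))
pose_vec, motion_vec = driver_encoder(driver)
print(identity_vec.shape, pose_vec.shape, motion_vec.shape)  # (16,) (16,) (16,)
```

The point of the sketch is the data flow: a multi-frame source collapses through the aggregator into a single identity vector, while the driver video yields two separate vectors for pose and motion.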
-
Patent number: 11836905
Abstract: Apparati, methods, and computer readable media for inserting identity information from a source image (1) into a destination image (2), while mimicking illumination of the destination image (2). In an apparatus embodiment, an identity encoder (4) is configured to encode just identity information of the source image (1) and to produce an identity vector (7), where the identity encoder (4) does not encode any pose information or illumination information of the source image (1). A driver encoder (12) has two components: a pose encoder (5) configured to encode pose information of the destination image (2) and an illumination encoder (6) configured to separately encode illumination information of the destination image (2), and to produce two vectors: a pose vector (8) and an illumination vector (9). A neural network generator (10) is coupled to the identity encoder (4) and to the driver encoder (12), and has three inputs: the identity vector (7), the pose vector (8), and the illumination vector (9).
Type: Grant
Filed: June 3, 2021
Date of Patent: December 5, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
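The three-input generator arrangement in this abstract can be sketched as follows. Again, everything here is a hypothetical stand-in (linear projections for the encoders, a one-layer tanh "generator", arbitrary sizes); it only illustrates how the three vectors feed one generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the patent does not specify any sizes.
IMG_DIM, VEC_DIM, OUT_DIM = 64, 16, 64

W_id = rng.standard_normal((IMG_DIM, VEC_DIM))
W_pose = rng.standard_normal((IMG_DIM, VEC_DIM))
W_illum = rng.standard_normal((IMG_DIM, VEC_DIM))
W_gen = rng.standard_normal((3 * VEC_DIM, OUT_DIM))

def identity_encoder(source):
    """Encode only identity of the source image (1) -> identity vector (7)."""
    return source @ W_id

def driver_encoder(destination):
    """Driver encoder (12): pose encoder (5) and illumination encoder (6)
    applied separately to the destination image (2)."""
    return destination @ W_pose, destination @ W_illum  # vectors (8) and (9)

def generator(identity_vec, pose_vec, illum_vec):
    """Neural network generator (10) with three vector inputs."""
    z = np.concatenate([identity_vec, pose_vec, illum_vec])
    return np.tanh(z @ W_gen)

source = rng.standard_normal(IMG_DIM)       # source image (1)
destination = rng.standard_normal(IMG_DIM)  # destination image (2)

out = generator(identity_encoder(source), *driver_encoder(destination))
print(out.shape)  # (64,)
```

The key structural claim mirrored here is the disentanglement: identity comes only from the source path, while pose and illumination come only from the driver path.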
-
Patent number: 11769346
Abstract: Methods and apparati for inserting face and hair information from a source video (401) into a destination (driver) video (402) while mimicking pose, illumination, and hair motion of the destination video (402). An apparatus embodiment comprises an identity encoder (404) configured to encode face and hair information of the source video (401) and to produce as an output an identity vector; a pose encoder (405) configured to encode pose information of the destination video (402) and to produce as an output a pose vector; an illumination encoder (406) configured to encode head and hair illumination of the destination video (402) and to produce as an output an illumination vector; and a hair motion encoder (414) configured to encode hair motion of the destination video (402) and to produce as an output a hair motion vector. The identity vector, pose vector, illumination vector, and hair motion vector are fed as inputs to a neural network generator (410).
Type: Grant
Filed: December 22, 2021
Date of Patent: September 26, 2023
Assignee: Spree3D Corporation
Inventors: Mohamed N. Moustafa, Ahmed A. Ewais, Amr A. Ali
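This abstract extends the previous pattern to four input vectors. A minimal sketch of the fan-in, with all names, sizes, and the one-layer "generator" being hypothetical placeholders rather than the patented networks:

```python
import numpy as np

rng = np.random.default_rng(0)
VEC_DIM, OUT_DIM = 16, 64  # hypothetical sizes

# Stand-ins for the outputs of the four encoders on some source/driver frames.
identity_vec = rng.standard_normal(VEC_DIM)     # identity encoder (404)
pose_vec = rng.standard_normal(VEC_DIM)         # pose encoder (405)
illum_vec = rng.standard_normal(VEC_DIM)        # illumination encoder (406)
hair_motion_vec = rng.standard_normal(VEC_DIM)  # hair motion encoder (414)

W_gen = rng.standard_normal((4 * VEC_DIM, OUT_DIM))

def generator(vectors):
    """Neural network generator (410): consumes all four vectors at once."""
    return np.tanh(np.concatenate(vectors) @ W_gen)

frame = generator([identity_vec, pose_vec, illum_vec, hair_motion_vec])
print(frame.shape)  # (64,)
```

Relative to the single-image patent above, the added hair motion vector is what lets the generated output track the driver video's hair dynamics rather than just its per-frame pose and lighting.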
-
Patent number: 11663764
Abstract: Provided are methods and systems for automatic creation of a customized avatar animation of a user. An example method commences with receiving production parameters and creating, based on the production parameters, a multidimensional array of a plurality of blank avatar animations. Each blank avatar animation has a predetermined number of frames and a plurality of features associated with each frame. The method further includes receiving user parameters including body dimensions, hair, and images of a face of a user. The method continues with selecting, from the plurality of blank avatar animations, two blank avatar animations closest to the user based on the body dimensions. The method further includes interpolating corresponding frames of the two blank avatar animations to produce an interpolated avatar animation. The method continues with compositing the face and the hair with the interpolated avatar animation using a machine learning technique to render the customized avatar animation.
Type: Grant
Filed: April 15, 2021
Date of Patent: May 30, 2023
Assignee: Spree3D Corporation
Inventors: Gil Spencer, Dmitriy Vladlenovich Pinskiy, Evan Smyth
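The select-two-and-interpolate step of this method can be sketched concretely. As an assumption for illustration, the body dimensions are reduced to a single height parameter and blending is linear by inverse distance; the actual method, array layout, and compositing network are not specified at this level of detail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 blank avatar animations, each with 10 frames of
# 6 features, keyed by one body dimension (say, height in cm).
heights = np.array([150.0, 165.0, 180.0, 195.0])
blank_animations = rng.standard_normal((4, 10, 6))

def interpolate_animation(user_height):
    """Select the two blank animations closest to the user's body dimension
    and linearly interpolate their corresponding frames."""
    order = np.argsort(np.abs(heights - user_height))
    i, j = order[0], order[1]
    t = (user_height - heights[i]) / (heights[j] - heights[i])
    return (1.0 - t) * blank_animations[i] + t * blank_animations[j]

anim = interpolate_animation(172.0)
print(anim.shape)  # (10, 6) -> same frame count and features as the blanks
```

Because the two blanks share a frame count, frame k of one animation blends directly with frame k of the other; the face/hair compositing step described in the abstract would then run on top of this interpolated animation.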