Patents by Inventor Duygu Ceylan Aksit
Duygu Ceylan Aksit has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240046566
  Abstract: Systems and methods for mesh generation are described. One aspect of the systems and methods includes receiving an image depicting a visible portion of a body; generating an intermediate mesh representing the body based on the image; generating visibility features indicating whether parts of the body are visible based on the image; generating parameters for a morphable model of the body based on the intermediate mesh and the visibility features; and generating an output mesh representing the body based on the parameters for the morphable model, wherein the output mesh includes a non-visible portion of the body that is not depicted by the image.
  Type: Application
  Filed: August 2, 2022
  Publication date: February 8, 2024
  Inventors: Jimei Yang, Chun-han Yao, Duygu Ceylan Aksit, Yi Zhou
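The parameter-generation step above can be illustrated with a minimal sketch. This is not the patented method: it assumes a simple linear morphable model (vertices = mean + basis @ params) and replaces the learned regressor with a weighted least-squares fit in which occluded vertices are down-weighted so the model prior fills them in.

```python
import numpy as np

# Hypothetical linear morphable model: vertices = mean + basis @ params.
def fit_morphable_params(intermediate, visibility, mean, basis, w_hidden=0.1):
    # intermediate: (V, 3) intermediate mesh estimated from the image
    # visibility:   (V,) 1.0 where the body is visible, 0.0 where occluded
    # mean: (V, 3) model mean; basis: (3V, K) linear shape basis
    w = np.where(visibility > 0.5, 1.0, w_hidden)   # per-vertex weights
    W = np.repeat(w, 3)                             # per-coordinate weights
    A = basis * W[:, None]
    b = (intermediate - mean).reshape(-1) * W
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

def reconstruct(mean, basis, params):
    # Output mesh, including the non-visible portion implied by the model.
    return mean + (basis @ params).reshape(mean.shape)
```

The down-weighting (rather than discarding) of occluded vertices mirrors the abstract's idea that visibility features inform, but do not wholly determine, the fitted parameters.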
- Publication number: 20240037827
  Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
  Type: Application
  Filed: July 27, 2022
  Publication date: February 1, 2024
  Applicant: Adobe Inc.
  Inventors: Yi Zhou, Yangtuanfeng Wang, Xin Sun, Qingyang Tan, Duygu Ceylan Aksit
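A geometric sketch of the final collision-handling stage may help: vertices detected inside the body are pushed back to its surface. The sphere proxy for the character body is purely an illustrative assumption; the patent uses neural networks over the actual body geometry.

```python
import numpy as np

def resolve_penetrations(garment_verts, body_center, body_radius, margin=1e-3):
    """Push garment vertices that penetrate a spherical body proxy back outside."""
    offsets = garment_verts - body_center
    dist = np.linalg.norm(offsets, axis=1)
    inside = dist < body_radius                      # the penetrating "second set"
    dirs = offsets / np.maximum(dist, 1e-12)[:, None]
    fixed = garment_verts.copy()
    # Project penetrating vertices radially to just outside the surface.
    fixed[inside] = body_center + dirs[inside] * (body_radius + margin)
    return fixed, inside
```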
- Patent number: 11861779
  Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
  Type: Grant
  Filed: December 14, 2021
  Date of Patent: January 2, 2024
  Assignee: Adobe Inc.
  Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
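The global scale factor mentioned above can be sketched as follows: 2D keypoints are normalized by the observed length of a reference bone, so a subject moving toward or away from the camera (changing z-position) yields a consistent pose scale. The bone indices and reference length here are illustrative assumptions, not the patent's specifics.

```python
import numpy as np

def normalize_pose(keypoints, ref_bone=(0, 1), ref_length=1.0):
    """Normalize keypoints by a global scale factor derived from a
    reference bone whose true length is assumed known."""
    a, b = ref_bone
    observed = np.linalg.norm(keypoints[a] - keypoints[b])
    scale = ref_length / observed        # global scale factor
    return (keypoints - keypoints[a]) * scale
```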
- Patent number: 11830138
  Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of: primary motion data, one or more material properties, and constraint data associated with the particular patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, resolves the inaccuracies, excessive computer resource consumption, and poor user experience of existing technologies.
  Type: Grant
  Filed: March 19, 2021
  Date of Patent: November 28, 2023
  Assignee: ADOBE INC.
  Inventors: Duygu Ceylan Aksit, Mianlun Zheng, Yi Zhou
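As a toy illustration of secondary motion (not the patented estimator): a vertex lags behind and overshoots its primary-motion target like a damped spring, with stiffness and damping standing in for the learned material properties of a patch.

```python
def secondary_motion(targets, stiffness=10.0, damping=2.0, dt=0.01):
    """Integrate a damped spring pulling a secondary vertex toward its
    per-frame primary-motion target; returns the vertex trajectory."""
    pos, vel = targets[0], 0.0
    trajectory = []
    for target in targets:
        acc = stiffness * (target - pos) - damping * vel   # spring + damper
        vel += dt * acc
        pos += dt * vel
        trajectory.append(pos)
    return trajectory
```

The lag and eventual convergence of the trajectory are the qualitative behaviors (jiggle, follow-through) that secondary-motion prediction aims to capture.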
- Publication number: 20230360320
  Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
  Type: Application
  Filed: July 18, 2023
  Publication date: November 9, 2023
  Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
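One entry of a self-occlusion map can be sketched as: from a visible point, cast a fixed set of sample directions and count how many intersect the object. Approximating the object as a union of spheres is an assumption made here for brevity; the patent tests rays against the 3D object itself.

```python
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Ray-sphere intersection test (unit direction, hit only if t > 0)."""
    oc = origin - center
    b = np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    return disc >= 0 and (-b - np.sqrt(disc)) > 1e-6

def self_occlusion(point, directions, spheres):
    """Fraction of fixed sample directions blocked by sphere proxies."""
    hits = sum(
        any(ray_hits_sphere(point, d, c, r) for c, r in spheres)
        for d in directions
    )
    return hits / len(directions)
```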
- Publication number: 20230326137
  Abstract: Systems and methods are described for rendering garments. The system includes a first machine learning model trained to generate coarse garment templates of a garment and a second machine learning model trained to render garment images. The first machine learning model generates a coarse garment template based on position data. The system produces a neural texture for the garment, the neural texture comprising a multi-dimensional feature map characterizing detail of the garment. The system provides the coarse garment template and the neural texture to the second machine learning model trained to render garment images. The second machine learning model generates a rendered garment image of the garment based on the coarse garment template of the garment and the neural texture.
  Type: Application
  Filed: April 7, 2022
  Publication date: October 12, 2023
  Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy J. Mitra, Meng Zhang
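The neural texture is, concretely, a feature map sampled at the coarse template's surface coordinates. A minimal sketch of that sampling step, assuming a (H, W, C) feature map and UV coordinates in [0, 1]:

```python
import numpy as np

def sample_neural_texture(tex, u, v):
    """Bilinearly sample a (H, W, C) neural texture at (u, v) in [0, 1]^2."""
    H, W, _ = tex.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0, x0] * (1 - fx) + tex[y0, x1] * fx
    bot = tex[y1, x0] * (1 - fx) + tex[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

The sampled C-dimensional features (rather than RGB values) are what the second model decodes into the rendered garment image.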
- Patent number: 11769279
  Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as "handles," and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
  Type: Grant
  Filed: May 11, 2021
  Date of Patent: September 26, 2023
  Assignee: Adobe Inc.
  Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
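The handle-to-SDF conversion can be sketched under a simplifying assumption that each handle is a sphere (center, radius): the shape's signed distance at a query point is the minimum over handles, negative inside.

```python
import numpy as np

def handles_to_sdf(points, handles):
    """Signed distance from query points to a union of sphere handles.
    handles: list of (center, radius); negative values lie inside."""
    sdf = np.full(len(points), np.inf)
    for center, radius in handles:
        sdf = np.minimum(sdf, np.linalg.norm(points - center, axis=1) - radius)
    return sdf
```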
- Patent number: 11704865
  Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
  Type: Grant
  Filed: July 22, 2021
  Date of Patent: July 18, 2023
  Assignee: Adobe Inc.
  Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
- Patent number: 11694416
  Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
  Type: Grant
  Filed: March 22, 2021
  Date of Patent: July 4, 2023
  Assignee: Adobe, Inc.
  Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
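A toy sketch of grouping features by a shared attribute and editing the group through one handle. The dict-of-normal-and-position representation and the parallel-normals grouping rule are illustrative assumptions, not the patent's feature attributes.

```python
import numpy as np

def group_features(features, angle_tol=1e-3):
    """Group features whose normals are parallel so one editing handle
    can drive each set. features: dicts with 'normal' and 'position'."""
    groups = []
    for f in features:
        n = f["normal"] / np.linalg.norm(f["normal"])
        for g in groups:
            if abs(abs(np.dot(n, g["normal"])) - 1.0) < angle_tol:
                g["members"].append(f)
                break
        else:
            groups.append({"normal": n, "members": [f]})
    return groups

def drag_handle(group, delta):
    """Apply one handle's translation to every feature in its set."""
    for f in group["members"]:
        f["position"] = f["position"] + delta
```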
- Patent number: 11682166
  Abstract: Embodiments provide systems, methods, and computer storage media for fitting 3D primitives to a 3D point cloud. In an example embodiment, 3D primitives are fit to a 3D point cloud using a global primitive fitting network that evaluates the entire 3D point cloud and a local primitive fitting network that evaluates local patches of the 3D point cloud. The global primitive fitting network regresses a representation of larger (global) primitives that fit the global structure. To identify smaller 3D primitives for regions with fine detail, local patches are constructed by sampling from a pool of points likely to contain fine detail, and the local primitive fitting network regresses a representation of smaller (local) primitives that fit the local structure of each of the local patches. The global and local primitives are merged into a combined, multi-scale set of fitted primitives, and representative primitive parameters are computed for each fitted primitive.
  Type: Grant
  Filed: March 15, 2021
  Date of Patent: June 20, 2023
  Assignee: Adobe Inc.
  Inventors: Eric-Tuan Le, Duygu Ceylan Aksit, Tamy Boubekeur, Radomir Mech, Niloy Mitra, Minhyuk Sung
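Fitting one local primitive can be sketched with the classical, non-learned version of the regression: a plane fit to a local patch via SVD (total least squares). Planes stand in here for whatever primitive families the networks actually regress.

```python
import numpy as np

def fit_plane(patch):
    """Least-squares plane through a local patch of points.
    Returns (centroid, unit normal)."""
    centroid = patch.mean(axis=0)
    _, _, vt = np.linalg.svd(patch - centroid)
    normal = vt[-1]                # direction of least variance
    return centroid, normal
```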
- Publication number: 20230186544
  Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
  Type: Application
  Filed: December 14, 2021
  Publication date: June 15, 2023
  Applicant: Adobe Inc.
  Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
- Publication number: 20230123820
  Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that utilize a character animation neural network informed by motion and pose signatures to generate a digital video through person-specific appearance modeling and motion retargeting. In particular embodiments, the disclosed systems implement a character animation neural network that includes a pose embedding model to encode a pose signature into spatial pose features. The character animation neural network further includes a motion embedding model to encode a motion signature into motion features. In some embodiments, the disclosed systems utilize the motion features to refine per-frame pose features and improve temporal coherency. In certain implementations, the disclosed systems also utilize the motion features to demodulate neural network weights used to generate an image frame of a character in motion based on the refined pose features.
  Type: Application
  Filed: October 15, 2021
  Publication date: April 20, 2023
  Inventors: Yangtuanfeng Wang, Duygu Ceylan Aksit, Krishna Kumar Singh, Niloy J Mitra
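Weight demodulation can be sketched in the StyleGAN2 style, which the abstract's mechanism resembles (an assumption on our part): weights are modulated per input channel by the motion feature, then rescaled so each output channel has unit norm.

```python
import numpy as np

def demodulate(weights, motion_feature, eps=1e-8):
    """Modulate (out_ch, in_ch) weights by a per-input-channel motion
    feature, then demodulate so each output channel has unit norm."""
    w = weights * motion_feature[None, :]           # modulate
    norm = np.sqrt((w ** 2).sum(axis=1, keepdims=True) + eps)
    return w / norm                                 # demodulate
```

Demodulation keeps the generator's activation statistics stable while still letting the motion features steer the synthesized frame.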
- Patent number: 11625881
  Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
  Type: Grant
  Filed: September 27, 2021
  Date of Patent: April 11, 2023
  Assignee: Adobe Inc.
  Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
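The iterative optimization can be illustrated with a deliberately simplified example: gradient steps that keep a joint's height trajectory close to the source motion while a soft penalty enforces a ground-contact (no floor penetration) constraint. The penalty weight and step size are arbitrary illustrative choices.

```python
import numpy as np

def enforce_ground_contact(heights, floor=0.0, iters=200, lr=0.1, w_pen=10.0):
    """Iteratively fine-tune a height trajectory toward the floor
    constraint while staying near the source motion."""
    y = heights.astype(float).copy()
    for _ in range(iters):
        grad = y - heights                           # stay near source motion
        grad += w_pen * np.minimum(y - floor, 0.0)   # soft floor penalty
        y -= lr * grad
    return y
```

Being a soft penalty, the result approaches but does not exactly reach the floor; the patent's optimization targets full conformance to the identified constraints.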
- Publication number: 20230037339
  Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact having a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
  Type: Application
  Filed: July 26, 2021
  Publication date: February 9, 2023
  Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
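Constraint (i) above can be sketched as a projection step (the self-penetration term (ii) is omitted for brevity): each self-contact vertex pair on the retargeted body is iteratively pulled back into contact.

```python
import numpy as np

def preserve_contacts(verts, contact_pairs, iters=100, lr=0.5):
    """Nudge retargeted vertices so each self-contact pair meets at its
    shared midpoint, preserving the contact on the new body."""
    v = verts.astype(float).copy()
    for _ in range(iters):
        for i, j in contact_pairs:
            d = v[i] - v[j]
            v[i] -= lr * 0.5 * d    # move both endpoints toward
            v[j] += lr * 0.5 * d    # their shared midpoint
    return v
```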
- Publication number: 20230037591
  Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
  Type: Application
  Filed: July 22, 2021
  Publication date: February 9, 2023
  Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
- Patent number: 11531697
  Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
  Type: Grant
  Filed: November 3, 2020
  Date of Patent: December 20, 2022
  Assignee: Adobe Inc.
  Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
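The retrieval step implied above can be sketched as nearest-neighbor search in a pose feature space: the query pose (from an image or a posed virtual mannequin) and the repository images are embedded as feature vectors, and the most similar embeddings are returned. Cosine similarity over precomputed vectors is an illustrative stand-in for the actual embedding and index.

```python
import numpy as np

def search_poses(query_feat, repo_feats, k=3):
    """Indices of the k repository poses nearest the query (cosine)."""
    q = query_feat / np.linalg.norm(query_feat)
    R = repo_feats / np.linalg.norm(repo_feats, axis=1, keepdims=True)
    return np.argsort(-(R @ q))[:k].tolist()
```

Clustering the same feature vectors (e.g. with k-means) would yield the digital pose image groups the abstract mentions.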
- Publication number: 20220301262
  Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of: primary motion data, one or more material properties, and constraint data associated with the particular patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, resolves the inaccuracies, excessive computer resource consumption, and poor user experience of existing technologies.
  Type: Application
  Filed: March 19, 2021
  Publication date: September 22, 2022
  Inventors: Duygu Ceylan Aksit, Mianlun Zheng, Yi Zhou
- Publication number: 20220292765
  Abstract: Embodiments provide systems, methods, and computer storage media for fitting 3D primitives to a 3D point cloud. In an example embodiment, 3D primitives are fit to a 3D point cloud using a global primitive fitting network that evaluates the entire 3D point cloud and a local primitive fitting network that evaluates local patches of the 3D point cloud. The global primitive fitting network regresses a representation of larger (global) primitives that fit the global structure. To identify smaller 3D primitives for regions with fine detail, local patches are constructed by sampling from a pool of points likely to contain fine detail, and the local primitive fitting network regresses a representation of smaller (local) primitives that fit the local structure of each of the local patches. The global and local primitives are merged into a combined, multi-scale set of fitted primitives, and representative primitive parameters are computed for each fitted primitive.
  Type: Application
  Filed: March 15, 2021
  Publication date: September 15, 2022
  Inventors: Eric-Tuan Le, Duygu Ceylan Aksit, Tamy Boubekeur, Radomir Mech, Niloy Mitra, Minhyuk Sung
- Publication number: 20220138249
  Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
  Type: Application
  Filed: November 3, 2020
  Publication date: May 5, 2022
  Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
- Publication number: 20220020199
  Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
  Type: Application
  Filed: September 27, 2021
  Publication date: January 20, 2022
  Applicant: Adobe Inc.
  Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit