Patents by Inventor Duygu Ceylan Aksit

Duygu Ceylan Aksit has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230037591
    Abstract: This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map.
    Type: Application
    Filed: July 22, 2021
    Publication date: February 9, 2023
    Inventors: Ruben Villegas, Yunseok Jang, Duygu Ceylan Aksit, Jimei Yang, Xin Sun
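    Illustrative sketch: the patent publishes no code, so the following Python sketch shows one plausible reading of the self-occlusion step. For each visible point, a fixed set of uniformly sampled ray directions is tested against the object's own triangle mesh, and the hit fraction becomes the occlusion value. All function names are hypothetical.

        import numpy as np

        def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
            # Moller-Trumbore ray/triangle intersection test.
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(direction, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:
                return False              # ray parallel to triangle plane
            t_vec = origin - v0
            u = np.dot(t_vec, p) / det
            if u < 0.0 or u > 1.0:
                return False
            q = np.cross(t_vec, e1)
            v = np.dot(direction, q) / det
            if v < 0.0 or u + v > 1.0:
                return False
            return np.dot(e2, q) / det > eps   # hit in front of the origin

        def self_occlusion_map(points, triangles, n_rays=32, seed=0):
            # For each visible surface point, cast a fixed, uniformly sampled
            # set of directions and record the fraction that re-hit the mesh.
            rng = np.random.default_rng(seed)
            dirs = rng.normal(size=(n_rays, 3))
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
            occ = np.zeros(len(points))
            for i, p in enumerate(points):
                hits = 0
                for d in dirs:
                    o = p + 1e-4 * d      # nudge off the surface first
                    if any(ray_hits_triangle(o, d, *tri) for tri in triangles):
                        hits += 1
                occ[i] = hits / n_rays
            return occ                    # one value in [0, 1] per point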
  • Publication number: 20230037339
    Abstract: One example method involves a processing device that performs operations that include receiving a request to retarget a source motion into a target object. Operations further include providing the target object to a contact-aware motion retargeting neural network trained to retarget the source motion into the target object. The contact-aware motion retargeting neural network is trained by accessing training data that includes a source object performing the source motion. The contact-aware motion retargeting neural network generates retargeted motion for the target object, based on a self-contact between a pair of input vertices. The retargeted motion is subject to motion constraints that: (i) preserve a relative location of the self-contact and (ii) prevent self-penetration of the target object.
    Type: Application
    Filed: July 26, 2021
    Publication date: February 9, 2023
    Inventors: Ruben Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit, Aaron Hertzmann
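    Illustrative sketch: the two motion constraints named in the abstract can be written as penalty terms. This Python sketch uses hypothetical names, and pairwise vertex distances stand in for a true surface-penetration test.

        import numpy as np

        def contact_losses(verts, contact_pairs, min_gap=0.01):
            # contact_pairs: (i, j) vertex indices that touch in the source
            # motion and should keep touching on the retargeted character.
            i, j = map(list, zip(*contact_pairs))
            # (i) preserve self-contacts: paired vertices stay coincident.
            contact = np.sum(np.linalg.norm(verts[i] - verts[j], axis=1) ** 2)
            # (ii) prevent self-penetration: penalize any other vertex pair
            # closer than min_gap (a crude O(n^2) proxy for a surface test).
            d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
            mask = np.triu(np.ones_like(d, dtype=bool), k=1)
            for a, b in contact_pairs:
                mask[min(a, b), max(a, b)] = False
            penetration = np.sum(np.maximum(min_gap - d[mask], 0.0) ** 2)
            return contact, penetration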
  • Patent number: 11531697
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 20, 2022
    Assignee: Adobe Inc.
    Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
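    Illustrative sketch: a minimal Python version of the query path, assuming 2D joint coordinates as the pose representation (names hypothetical). A pose feature is a normalized, flattened joint array; querying, whether from an image or from the virtual mannequin, is a nearest-neighbor lookup in that feature space, and the same features could be clustered (e.g., with k-means) to form the pose image groups.

        import numpy as np

        def pose_feature(joints_2d):
            # Normalize joints for translation and scale, then flatten into
            # a vector; distances in this space compare poses, not people.
            j = np.asarray(joints_2d, dtype=float)
            j -= j.mean(axis=0)
            j /= np.linalg.norm(j) + 1e-8
            return j.ravel()

        def query_by_pose(query_joints, gallery_features, k=5):
            # gallery_features: (N, D) array, one pose_feature per image.
            # query_joints may come from a detected image pose or from the
            # posed virtual mannequin.
            d = np.linalg.norm(gallery_features - pose_feature(query_joints), axis=1)
            return np.argsort(d)[:k]   # indices of the k closest pose images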
  • Publication number: 20220301262
    Abstract: Various disclosed embodiments are directed to estimating that a first vertex of a patch will change from a first position to a second position (the second position being at least partially indicative of secondary motion) based at least in part on one or more features of: primary motion data, one or more material properties, and constraint data associated with the particular patch. Such estimation can be made for some or all of the patches of an entire volumetric mesh in order to accurately predict the overall secondary motion of an object. This, among other functionality described herein, addresses the inaccuracies, excessive computing-resource consumption, and degraded user experience of existing technologies.
    Type: Application
    Filed: March 19, 2021
    Publication date: September 22, 2022
    Inventors: Duygu Ceylan Aksit, Mianlun Zheng, Yi Zhou
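    Illustrative sketch: a per-patch regressor of the kind the abstract describes, written in PyTorch with hypothetical dimensions. Primary-motion data, material properties, and constraint flags for a patch go in; a secondary-motion offset for a patch vertex comes out. The weights would be learned from ground-truth simulated secondary motion.

        import torch
        import torch.nn as nn

        class PatchSecondaryMotion(nn.Module):
            def __init__(self, n_patch_verts=8, n_material=2):
                super().__init__()
                in_dim = n_patch_verts * 3 + n_material + n_patch_verts
                self.mlp = nn.Sequential(
                    nn.Linear(in_dim, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3),          # (dx, dy, dz) for the vertex
                )

            def forward(self, primary_motion, material, constraints):
                # primary_motion: (B, n_patch_verts, 3); material: (B, n_material);
                # constraints: (B, n_patch_verts) flags for pinned vertices.
                x = torch.cat([primary_motion.flatten(1), material, constraints], dim=1)
                return self.mlp(x)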
  • Publication number: 20220292765
    Abstract: Embodiments provide systems, methods, and computer storage media for fitting 3D primitives to a 3D point cloud. In an example embodiment, 3D primitives are fit to a 3D point cloud using a global primitive fitting network that evaluates the entire 3D point cloud and a local primitive fitting network that evaluates local patches of the 3D point cloud. The global primitive fitting network regresses a representation of larger (global) primitives that fit the global structure. To identify smaller 3D primitives for regions with fine detail, local patches are constructed by sampling from a pool of points likely to contain fine detail, and the local primitive fitting network regresses a representation of smaller (local) primitives that fit the local structure of each of the local patches. The global and local primitives are merged into a combined, multi-scale set of fitted primitives, and representative primitive parameters are computed for each fitted primitive.
    Type: Application
    Filed: March 15, 2021
    Publication date: September 15, 2022
    Inventors: Eric-Tuan Le, Duygu Ceylan Aksit, Tamy Boubekeur, Radomir Mech, Niloy Mitra, Minhyuk Sung
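    Illustrative sketch: the local-patch construction and the merge step, in Python with hypothetical names; planes stand in for the general primitive types. Points the global fit explains poorly (high residual) seed k-nearest-neighbor patches, a primitive is fit to each patch, and the global and local primitive lists are concatenated.

        import numpy as np

        def fit_plane(pts):
            # Least-squares plane through a point set: centroid plus the
            # smallest-variance direction from an SVD.
            c = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - c)
            return c, vt[-1]            # plane: dot(n, x - c) = 0

        def local_patches(points, residuals, n_patches=10, patch_size=64, seed=0):
            # Seed patches from the pool of points the global fit explains
            # worst (top residual quartile), then take k nearest neighbors.
            rng = np.random.default_rng(seed)
            pool = np.argsort(residuals)[-len(points) // 4:]
            seeds = rng.choice(pool, size=n_patches, replace=False)
            patches = []
            for s in seeds:
                d = np.linalg.norm(points - points[s], axis=1)
                patches.append(points[np.argsort(d)[:patch_size]])
            return patches

        # Merge step: global primitives plus one primitive per local patch.
        # merged = global_planes + [fit_plane(p) for p in local_patches(points, residuals)]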
  • Publication number: 20220138249
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
  • Publication number: 20220020199
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 20, 2022
    Applicant: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
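    Illustrative sketch: the iterative fine-tuning step for one common kinematic constraint, foot contact with a floor plane, in Python with hypothetical names. Gradient steps trade staying close to the initial retargeted motion against pinning the foot joint to the floor in flagged contact frames.

        import numpy as np

        def enforce_floor_contact(motion, contact_frames, foot_idx, floor_y=0.0,
                                  iters=200, lr=0.05, w_contact=10.0):
            # motion: (frames, joints, 3) initial retargeted joint positions.
            x0 = motion.copy()
            x = motion.copy()
            for _ in range(iters):
                grad = 2.0 * (x - x0)                  # stay near the retarget
                err = x[contact_frames, foot_idx, 1] - floor_y
                grad[contact_frames, foot_idx, 1] += 2.0 * w_contact * err
                x -= lr * grad
            return x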
  • Patent number: 11170551
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: November 9, 2021
    Assignee: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Publication number: 20210343059
    Abstract: Motion retargeting with kinematic constraints is implemented in a digital medium environment. Generally, the described techniques provide for retargeting motion data from a source motion sequence to a target visual object. Accordingly, the described techniques position a target visual object in a defined visual environment to identify kinematic constraints of the target object relative to the visual environment. Further, the described techniques utilize an iterative optimization process that fine-tunes the conformance of retargeted motion of a target object to the identified kinematic constraints.
    Type: Application
    Filed: May 1, 2020
    Publication date: November 4, 2021
    Applicant: Adobe Inc.
    Inventors: Ruben Eduardo Villegas, Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Patent number: 11115645
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 7, 2021
    Assignee: Adobe Inc.
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
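    Illustrative sketch: the intermediate-image step in Python, with warp_fn as a hypothetical stand-in for the source-to-target reprojection (no z-buffering, for brevity). Pixels that receive no warped sample form the disoccluded region, which the trained image-completion model would then fill.

        import numpy as np

        def warp_and_mark_disocclusions(src_img, src_depth, warp_fn):
            # warp_fn(xs, ys, depth) -> target-view coordinates for every
            # source pixel (the camera reprojection; hypothetical signature).
            h, w = src_depth.shape
            out = np.zeros_like(src_img)
            hit = np.zeros((h, w), dtype=bool)
            ys, xs = np.mgrid[0:h, 0:w]
            tx, ty = warp_fn(xs, ys, src_depth)
            tx, ty = np.round(tx).astype(int), np.round(ty).astype(int)
            valid = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
            out[ty[valid], tx[valid]] = src_img[ys[valid], xs[valid]]
            hit[ty[valid], tx[valid]] = True
            return out, ~hit      # intermediate image + disoccluded region

        # The masked region is then replaced by the completion model's
        # prediction: out[mask] = completion_model(out, mask)[mask]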
  • Publication number: 20210264649
    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
    Type: Application
    Filed: May 11, 2021
    Publication date: August 26, 2021
    Applicant: Adobe Inc.
    Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
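    Illustrative sketch: converting a handle set into a signed distance field, in Python with hypothetical names; spheres stand in for the handle type. The SDF of the union is the pointwise minimum of per-handle distances, sampled on a regular grid.

        import numpy as np

        def handles_to_sdf(handles, grid_res=32, bounds=1.0):
            # handles: [(center, radius), ...] sphere handles; the result is
            # negative inside the shape and positive outside.
            lin = np.linspace(-bounds, bounds, grid_res)
            gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
            grid = np.stack([gx, gy, gz], axis=-1)
            sdf = np.full(grid.shape[:3], np.inf)
            for center, radius in handles:
                d = np.linalg.norm(grid - np.asarray(center), axis=-1) - radius
                sdf = np.minimum(sdf, d)      # union of handles
            return sdf

        # sdf = handles_to_sdf([((0.0, 0.0, 0.0), 0.4), ((0.5, 0.0, 0.0), 0.3)])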
  • Publication number: 20210256775
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Application
    Filed: March 22, 2021
    Publication date: August 19, 2021
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
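    Illustrative sketch: grouping salient features into a feature set by attribute similarity and applying one handle edit to the whole set, in Python with hypothetical names (unit feature normals assumed). Near-parallel normals plus similar size is one plausible instance of the attribute comparison the abstract describes.

        import numpy as np

        def group_features(features, angle_tol_deg=5.0, size_tol=0.1):
            # Greedy grouping: features with near-parallel normals and
            # similar sizes share a feature set.
            groups = []
            for f in features:          # f = {"position", "normal", "size"}
                for g in groups:
                    ref = g[0]
                    parallel = abs(np.dot(f["normal"], ref["normal"])) \
                               > np.cos(np.radians(angle_tol_deg))
                    if parallel and abs(f["size"] - ref["size"]) < size_tol:
                        g.append(f)
                        break
                else:
                    groups.append([f])
            return groups

        def apply_handle_edit(group, translation):
            # One manipulation of the editing handle moves every salient
            # feature in its set together.
            for f in group:
                f["position"] = np.asarray(f["position"]) + np.asarray(translation)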
  • Patent number: 11037341
    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
    Type: Grant
    Filed: January 15, 2020
    Date of Patent: June 15, 2021
    Assignee: Adobe Inc.
    Inventors: Giorgio Gori, Tamy Boubekeur, Radomir Mech, Nathan Aaron Carr, Matheus Abrantes Gadelha, Duygu Ceylan Aksit
  • Patent number: 10957117
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
  • Patent number: 10950038
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured while a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Grant
    Filed: February 25, 2020
    Date of Patent: March 16, 2021
    Assignee: Adobe Inc.
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
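    Illustrative sketch: recovering the illumination-weighting vector by least squares, in Python with hypothetical names. Each basis image corresponds to one direct light source, and the runtime frame is modeled as their weighted superposition; this sketch simplifies by letting the residual absorb indirect illumination, which the patent handles explicitly.

        import numpy as np

        def solve_illumination_weights(basis_images, runtime_image):
            # Stack one flattened basis image per direct light source and
            # solve for the weights that best reproduce the runtime frame.
            B = np.stack([b.ravel() for b in basis_images], axis=1)
            w, *_ = np.linalg.lstsq(B, runtime_image.ravel(), rcond=None)
            return np.clip(w, 0.0, None)   # light intensities are nonnegative

        def relight_virtual_object(vo_basis_renders, weights):
            # Render the virtual object once per basis light, then blend with
            # the recovered weights to match the current environment.
            return sum(w * r for w, r in zip(weights, vo_basis_renders))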
  • Patent number: 10916054
    Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ deep neural networks.
    Type: Grant
    Filed: November 8, 2018
    Date of Patent: February 9, 2021
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Weiyue Wang, Radomir Mech
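    Illustrative sketch: the described pipeline in PyTorch with hypothetical dimensions. A shared per-point MLP followed by max pooling makes the source encoding independent of point ordering; source and target codes are concatenated and decoded into per-vertex offsets.

        import torch
        import torch.nn as nn

        class DeformNet(nn.Module):
            def __init__(self, feat=128):
                super().__init__()
                self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                               nn.Linear(64, feat))
                self.target_enc = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                                nn.Linear(64, feat))
                self.decoder = nn.Sequential(nn.Linear(2 * feat + 3, 128), nn.ReLU(),
                                             nn.Linear(128, 3))   # offset per vertex

            def forward(self, src_pts, tgt_pts):
                # src_pts, tgt_pts: (B, N, 3). Max pooling over points makes
                # each code invariant to the ordering of the input points.
                src_code = self.point_mlp(src_pts).max(dim=1).values
                tgt_code = self.target_enc(tgt_pts).max(dim=1).values
                code = torch.cat([src_code, tgt_code], dim=1)
                per_vert = code[:, None, :].expand(-1, src_pts.shape[1], -1)
                offsets = self.decoder(torch.cat([per_vert, src_pts], dim=-1))
                return src_pts + offsets   # deformed mesh vertices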
  • Patent number: 10825253
    Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
    Type: Grant
    Filed: April 4, 2019
    Date of Patent: November 3, 2020
    Assignee: Adobe Inc.
    Inventors: Tenell Rhodes, Gavin S. P. Miller, Duygu Ceylan Aksit, Daichi Ito
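    Illustrative sketch: turning tracked writing-device samples into strokes, in Python with hypothetical names and a hypothetical pressure-to-width mapping. Tip pressure gates surface contact, so each lift-off closes a stroke that the AR renderer would then draw on the real-world surface.

        import numpy as np

        def strokes_from_samples(tip_samples, pressure_threshold=0.05):
            # Samples above the threshold extend the current stroke; width
            # follows pressure (millimeter mapping is illustrative only).
            points, widths = [], []
            for s in tip_samples:       # s = {"pos": (x, y, z), "pressure": p}
                if s["pressure"] > pressure_threshold:
                    points.append(s["pos"])
                    widths.append(0.5 + 2.0 * s["pressure"])
                elif points:
                    yield np.asarray(points), np.asarray(widths)
                    points, widths = [], []
            if points:
                yield np.asarray(points), np.asarray(widths)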
  • Patent number: 10755459
    Abstract: Techniques and systems are described herein that support improved object painting in digital images through use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., the face. The object, as fit to the three-dimensional model, is used to support output of a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint as specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This may also support transferring digital paint between objects via their respective two-dimensional texture maps.
    Type: Grant
    Filed: October 19, 2016
    Date of Patent: August 25, 2020
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
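    Illustrative sketch: stamping a paint dab directly into the object's 2D texture map, in Python with hypothetical names. The UV coordinate would come from raycasting the brush through the fitted 3D model; because the paint lives in UV space, transfer to another object is a resampling through that object's UV layout.

        import numpy as np

        def stamp_paint(texture, uv, color, radius=3):
            # Write a circular paint dab into the texture at the UV
            # coordinate hit by the brush.
            h, w, _ = texture.shape
            cx, cy = int(uv[0] * (w - 1)), int(uv[1] * (h - 1))
            ys, xs = np.mgrid[max(cy - radius, 0):min(cy + radius + 1, h),
                              max(cx - radius, 0):min(cx + radius + 1, w)]
            mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
            texture[ys[mask], xs[mask]] = color
            return texture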
  • Publication number: 20200193696
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured while a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Application
    Filed: February 25, 2020
    Publication date: June 18, 2020
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
  • Publication number: 20200151952
    Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ deep neural networks.
    Type: Application
    Filed: November 8, 2018
    Publication date: May 14, 2020
    Applicant: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Weiyue Wang, Radomir Mech