Patents by Inventor Duygu Ceylan Aksit

Duygu Ceylan Aksit has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200118347
    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
    Type: Application
    Filed: November 29, 2018
    Publication date: April 16, 2020
    Inventors: Duygu Ceylan Aksit, Vladimir Kim, Siddhartha Chaudhuri, Radomir Mech, Noam Aigerman, Kevin Wampler, Jonathan Eisenmann, Giorgio Gori, Emiliano Gambaretto
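
To make the grouping idea above concrete, here is a minimal Python sketch of feature-set construction and a shared editing handle. It assumes point-like features with a single radius attribute and a uniform-scale handle; all names are illustrative, not taken from the patent:

```python
import numpy as np

def group_features(features, attr_tol=0.1, dist_tol=2.0):
    """Greedily group salient features whose attributes (here: radius)
    and pairwise distances are similar enough to edit together."""
    groups = []
    for f in features:
        placed = False
        for g in groups:
            ref = g[0]
            if (abs(f["radius"] - ref["radius"]) <= attr_tol and
                    np.linalg.norm(f["center"] - ref["center"]) <= dist_tol):
                g.append(f)
                placed = True
                break
        if not placed:
            groups.append([f])
    return groups

def apply_handle_scale(group, scale):
    """One editing handle drives every feature in the set."""
    for f in group:
        f["radius"] *= scale

features = [
    {"center": np.array([0.0, 0.0, 0.0]), "radius": 1.0},
    {"center": np.array([1.0, 0.0, 0.0]), "radius": 1.05},
    {"center": np.array([9.0, 0.0, 0.0]), "radius": 3.0},
]
sets = group_features(features)
apply_handle_scale(sets[0], 1.5)  # manipulating the handle edits both related features
print([round(f["radius"], 3) for f in features])  # [1.5, 1.575, 3.0]
```
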
  • Patent number: 10607065
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Grant
    Filed: May 3, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
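
A toy sketch of the parameterization described above, with a stand-in for the trained model (the real system is a machine-learning model over photographs; the library, attribute names, and random "logits" here are hypothetical):

```python
import numpy as np

# Hypothetical cartoon-feature library: per facial attribute, a list of styles.
LIBRARY = {
    "eyes":  ["round", "almond", "droopy", "wide"],
    "nose":  ["button", "aquiline", "broad"],
    "mouth": ["thin", "full", "smirk"],
}

def fake_model_logits(photo, n_options, rng):
    """Stand-in for the trained classifier head; the real system is a CNN."""
    return rng.normal(size=n_options)

def parameterize_avatar(photo, rng=np.random.default_rng(0)):
    """Return the avatar as a dict of library indices: a compact,
    animatable parameterization rather than a rendered image."""
    return {attr: int(np.argmax(fake_model_logits(photo, len(opts), rng)))
            for attr, opts in LIBRARY.items()}

def assemble(params):
    """Look the chosen cartoon features up in the library."""
    return {attr: LIBRARY[attr][idx] for attr, idx in params.items()}

photo = np.zeros((256, 256, 3))      # placeholder input photograph
params = parameterize_avatar(photo)  # e.g. {"eyes": 2, "nose": 0, "mouth": 1}
print(params, assemble(params))
```
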
  • Patent number: 10600239
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Grant
    Filed: January 22, 2018
    Date of Patent: March 24, 2020
    Assignee: Adobe Inc.
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
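
The weighting-vector recovery above can be sketched as a least-squares fit over per-source basis images. A minimal version, assuming the bases are captured photographs (so each source's indirect bounce is baked into its basis) and using clipping as a crude nonnegativity constraint:

```python
import numpy as np

def solve_illumination_weights(basis_images, runtime_image):
    """Find w >= 0 with sum_i w[i] * basis[i] ~= runtime image.

    basis_images: (n_sources, H, W), each captured with one direct
    source lit; indirect reflections from that source are baked in.
    """
    n = basis_images.shape[0]
    A = basis_images.reshape(n, -1).T            # pixels x sources
    b = runtime_image.reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, None)

def relight_virtual_object(vo_bases, w):
    """Apply the same weights to the VO's per-source renders."""
    return np.tensordot(w, vo_bases, axes=1)

rng = np.random.default_rng(1)
bases = rng.random((3, 4, 4))                    # 3 direct sources, tiny 4x4 "images"
true_w = np.array([0.2, 0.9, 0.4])               # second intensity combination
runtime = np.tensordot(true_w, bases, axes=1)
w = solve_illumination_weights(bases, runtime)
vo_bases = rng.random((3, 4, 4))                 # virtual object rendered per source
vo_lit = relight_virtual_object(vo_bases, w)     # VO now matches scene lighting
print(np.round(w, 3))                            # ~[0.2, 0.9, 0.4]
```
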
  • Patent number: 10515456
    Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
    Type: Grant
    Filed: March 22, 2018
    Date of Patent: December 24, 2019
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Zhili Chen, Jose Ignacio Echevarria Vallespi, Kyle Olszewski
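
A very rough sketch of the orientation-map pipeline above: rasterize guidance strokes into a coarse orientation map, match it against higher-resolution exemplar orientation maps, and colorize the match with an appearance exemplar. The whole-map matching here stands in for the patent's matching step, which this listing does not detail:

```python
import numpy as np

def strokes_to_orientation(strokes, size):
    """Rasterize guidance strokes into a coarse per-pixel orientation
    map (radians); untouched pixels default to the mean stroke angle."""
    omap = np.full((size, size), np.mean([s["angle"] for s in strokes]))
    for s in strokes:
        r0, c0, r1, c1 = s["bbox"]
        omap[r0:r1, c0:c1] = s["angle"]
    return omap

def match_exemplar(input_omap, exemplar_omaps):
    """Pick the high-res exemplar orientation map closest to the
    upsampled input map (wraparound-aware angular difference)."""
    scale = exemplar_omaps[0].shape[0] // input_omap.shape[0]
    up = np.kron(input_omap, np.ones((scale, scale)))  # nearest-neighbor upsample
    errs = [np.abs(np.angle(np.exp(1j * (up - e)))).mean() for e in exemplar_omaps]
    return int(np.argmin(errs))

def colorize(exemplar_omap, appearance_rgb):
    """Shade the matched orientation map with the exemplar's color."""
    shading = 0.5 + 0.5 * np.cos(exemplar_omap)[..., None]
    return shading * np.asarray(appearance_rgb)

strokes = [{"bbox": (0, 0, 4, 8), "angle": 0.3}]
omap = strokes_to_orientation(strokes, 8)               # coarse input map
exemplars = [np.full((32, 32), a) for a in (0.3, 1.2)]  # higher-res exemplars
idx = match_exemplar(omap, exemplars)
print(idx, colorize(exemplars[idx], (0.4, 0.25, 0.1)).shape)  # 0 (32, 32, 3)
```
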
  • Publication number: 20190340419
    Abstract: Generation of parameterized avatars is described. An avatar generation system uses a trained machine-learning model to generate a parameterized avatar, from which digital visual content (e.g., images, videos, augmented and/or virtual reality (AR/VR) content) can be generated. The machine-learning model is trained to identify cartoon features of a particular style—from a library of these cartoon features—that correspond to features of a person depicted in a digital photograph. The parameterized avatar is data (e.g., a feature vector) that indicates the cartoon features identified from the library by the trained machine-learning model for the depicted person. This parameterization enables the avatar to be animated. The parameterization also enables the avatar generation system to generate avatars in non-photorealistic (relatively cartoony) styles such that, despite the style, the avatars preserve identities and expressions of persons depicted in input digital photographs.
    Type: Application
    Filed: May 3, 2018
    Publication date: November 7, 2019
    Applicant: Adobe Inc.
    Inventors: Rebecca Ilene Milman, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David P. Simons
  • Patent number: 10467822
    Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: November 5, 2019
    Assignee: Adobe Inc.
    Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David Simons
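
One plausible reading of the modified-stylization step above is to weaken the stylization until no new collision remains. A toy sketch under that assumption (the trajectory format, offsets, and thresholds are invented for illustration):

```python
import numpy as np

def collides(traj_a, traj_b, min_dist=0.5):
    """A 'collision' here: two tracked landmarks come closer than min_dist."""
    return bool((np.linalg.norm(traj_a - traj_b, axis=1) < min_dist).any())

def stylize(traj, offset, strength):
    """Toy exaggeration: push the trajectory along a fixed offset."""
    return traj + strength * offset

def stylize_without_new_collisions(a, b, off_a, off_b, strength=1.0, steps=8):
    """Back off the stylization strength until no *new* collision remains."""
    had_collision = collides(a, b)   # pre-existing collisions are not our problem
    for _ in range(steps):
        sa, sb = stylize(a, off_a, strength), stylize(b, off_b, strength)
        if had_collision or not collides(sa, sb):
            return sa, sb, strength
        strength *= 0.5              # modified (weaker) stylization
    return a, b, 0.0                 # fall back to unstylized motion

t = np.linspace(0, 1, 20)[:, None]
upper_lip = np.hstack([t, 1.0 + 0 * t])   # two facial landmark trajectories
lower_lip = np.hstack([t, 0.0 + 0 * t])
sa, sb, s = stylize_without_new_collisions(
    upper_lip, lower_lip,
    off_a=np.array([0.0, -0.6]), off_b=np.array([0.0, 0.6]))
print("strength kept:", s)                # 0.25: first strength with no collision
```
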
  • Patent number: 10430978
    Abstract: The present disclosure includes methods and systems for generating modified digital images utilizing a neural network that includes a rendering layer. In particular, the disclosed systems and methods can train a neural network to decompose an input digital image into intrinsic physical properties (e.g., such as material, illumination, and shape). Moreover, the systems and methods can substitute one of the intrinsic physical properties for a target property (e.g., a modified material, illumination, or shape). The systems and methods can utilize a rendering layer trained to synthesize a digital image to generate a modified digital image based on the target property and the remaining (unsubstituted) intrinsic physical properties. Systems and methods can increase the accuracy of modified digital images by generating modified digital images that realistically reflect a confluence of intrinsic physical properties of an input digital image and target (i.e., modified) properties.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: October 1, 2019
    Assignee: Adobe Inc.
    Inventors: Mehmet Yumer, Jimei Yang, Guilin Liu, Duygu Ceylan Aksit
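
A minimal sketch of the substitute-and-re-render loop above, using a fixed Lambertian rendering layer in place of the patent's trained rendering layer, and taking the intrinsic properties as given rather than predicted by the neural network:

```python
import numpy as np

def render_layer(albedo, normals, light_dir):
    """Lambertian rendering layer: pixel = albedo * max(0, n . l)."""
    l = light_dir / np.linalg.norm(light_dir)
    shading = np.clip(normals @ l, 0.0, None)[..., None]
    return albedo * shading

def edit_image(albedo, normals, light_dir, target_albedo=None,
               target_normals=None, target_light=None):
    """Substitute one intrinsic property, keep the rest, re-render."""
    return render_layer(
        target_albedo if target_albedo is not None else albedo,
        target_normals if target_normals is not None else normals,
        target_light if target_light is not None else light_dir)

H = W = 4
albedo = np.full((H, W, 3), 0.6)               # material
normals = np.tile([0.0, 0.0, 1.0], (H, W, 1))  # shape (flat, facing camera)
light = np.array([0.0, 0.0, 1.0])              # illumination
relit = edit_image(albedo, normals, light,
                   target_light=np.array([1.0, 0.0, 1.0]))
print(relit[0, 0])  # ~0.424: dimmer, light now hits the surface at an angle
```
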
  • Publication number: 20190295272
    Abstract: Certain embodiments involve synthesizing image content depicting facial hair or other hair features based on orientation data obtained using guidance inputs or other user-provided guidance data. For instance, a graphic manipulation application accesses guidance data identifying a desired hair feature and an appearance exemplar having image data with color information for the desired hair feature. The graphic manipulation application transforms the guidance data into an input orientation map. The graphic manipulation application matches the input orientation map to an exemplar orientation map having a higher resolution than the input orientation map. The graphic manipulation application generates the desired hair feature by applying the color information from the appearance exemplar to the exemplar orientation map. The graphic manipulation application outputs the desired hair feature at a presentation device.
    Type: Application
    Filed: March 22, 2018
    Publication date: September 26, 2019
    Inventors: Duygu Ceylan Aksit, Zhili Chen, Jose Ignacio Echevarria Vallespi, Kyle Olszewski
  • Publication number: 20190279414
    Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
    Type: Application
    Filed: March 8, 2018
    Publication date: September 12, 2019
    Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic
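
The fold-aware compositing above can be sketched as alpha compositing that zeroes the overlay wherever it lands on a hidden-region mask. The mask here is supplied by hand; in the described system it would be derived from the folded-object image:

```python
import numpy as np

def composite_on_folded(folded_img, overlay_rgba, hidden_mask, top, left):
    """Paste an overlay onto a folded-object image, dropping overlay
    pixels that land on hidden_mask (pixels tucked under a fold)."""
    out = folded_img.copy()
    h, w = overlay_rgba.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = overlay_rgba[..., 3:4].copy()
    alpha[hidden_mask[top:top + h, left:left + w]] = 0.0  # remove folded-over parts
    region[:] = alpha * overlay_rgba[..., :3] + (1 - alpha) * region
    return out

folded = np.full((8, 8, 3), 0.9)               # light "paper" image
hidden = np.zeros((8, 8), dtype=bool)
hidden[:, 4:] = True                           # right half lies under a fold
overlay = np.zeros((4, 6, 4))
overlay[..., 0] = 1.0                          # red sticker
overlay[..., 3] = 1.0                          # fully opaque
result = composite_on_folded(folded, overlay, hidden, top=2, left=1)
print(result[3, 2], result[3, 6])              # red where visible, paper where hidden
```
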
  • Patent number: 10410400
    Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: September 10, 2019
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic
  • Publication number: 20190259214
    Abstract: Embodiments involve reducing collision-based defects in motion-stylizations. For example, a device obtains facial landmark data from video data. The facial landmark data includes a first trajectory traveled by a first point tracking one or more facial features, and a second trajectory traveled by a second point tracking one or more facial features. The device applies a motion-stylization to the facial landmark data that causes a first change to one or more of the first trajectory and the second trajectory. The device also identifies a new collision between the first and second points that is introduced by the first change. The device applies a modified stylization to the facial landmark data that causes a second change to one or more of the first trajectory and the second trajectory. If the new collision is removed by the second change, the device outputs the facial landmark data with the modified stylization applied.
    Type: Application
    Filed: February 20, 2018
    Publication date: August 22, 2019
    Inventors: Rinat Abdrashitov, Jose Ignacio Echevarria Vallespi, Jingwan Lu, Elya Shechtman, Duygu Ceylan Aksit, David Simons
  • Publication number: 20190236845
    Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
    Type: Application
    Filed: April 4, 2019
    Publication date: August 1, 2019
    Inventors: Tenell Rhodes, Gavin S.P. Miller, Duygu Ceylan Aksit, Daichi Ito
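
A simplified sketch of the stroke-capture logic above: tracked pen samples become AR stroke points while tip pressure stays above a contact threshold, with pressure also driving stroke width (the threshold and sample format are invented for illustration):

```python
import numpy as np

PRESSURE_ON = 0.15  # hypothetical contact threshold for the pen tip

def samples_to_strokes(samples):
    """Turn tracked pen samples into AR strokes: a new stroke starts when
    tip pressure rises above threshold and ends when it drops below."""
    strokes, current = [], None
    for s in samples:
        if s["pressure"] >= PRESSURE_ON:
            if current is None:
                current = {"points": [], "widths": []}
            current["points"].append(s["pos"])             # surface-anchored point
            current["widths"].append(0.5 + 2.0 * s["pressure"])
        elif current is not None:
            strokes.append(current)                        # pen lifted: close stroke
            current = None
    if current is not None:
        strokes.append(current)
    return strokes

samples = [
    {"pos": np.array([0.0, 0.0]), "pressure": 0.0},   # hover
    {"pos": np.array([0.0, 0.1]), "pressure": 0.4},   # touch down
    {"pos": np.array([0.1, 0.2]), "pressure": 0.6},
    {"pos": np.array([0.2, 0.3]), "pressure": 0.0},   # lift
]
strokes = samples_to_strokes(samples)
print(len(strokes), len(strokes[0]["points"]))        # 1 stroke, 2 points
```
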
  • Patent number: 10368047
    Abstract: A stereoscopic six-degree of freedom viewing experience with a monoscopic 360-degree video is provided. A monoscopic 360-degree video of a subject scene can be processed by analyzing each frame to recover a three-dimensional geometric representation, and recover a camera motion path. Utilizing the recovered three-dimensional geometric representation and camera motion path, a dense three-dimensional geometric representation of the subject scene is generated. The processed video can be provided for stereoscopic display via a device. As motion of the device is detected, novel viewpoints can be stereoscopically synthesized for presentation in real time, so as to provide an immersive virtual reality experience based on the original monoscopic 360-degree video and the detected motion of the device.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: July 30, 2019
    Assignee: Adobe Inc.
    Inventors: Zhili Chen, Duygu Ceylan Aksit, Jingwei Huang, Hailin Jin
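
The runtime synthesis step above can be sketched as projecting the recovered dense geometry into two virtual cameras offset by the interpupillary distance. This toy version omits rotation and rendering, and uses a random point cloud in place of recovered geometry:

```python
import numpy as np

def project(points, cam_pos, f=1.0):
    """Pinhole projection of world points into a camera at cam_pos
    looking down +z (rotation omitted for brevity)."""
    p = points - cam_pos
    return f * p[:, :2] / p[:, 2:3]

def stereo_views(dense_points, head_pos, ipd=0.064):
    """Synthesize a stereo pair for the current device pose: two
    virtual cameras offset by half the interpupillary distance."""
    offset = np.array([ipd / 2, 0.0, 0.0])
    return (project(dense_points, head_pos - offset),
            project(dense_points, head_pos + offset))

# Stand-in for the dense geometry recovered from the 360-degree video.
rng = np.random.default_rng(2)
cloud = rng.uniform([-1, -1, 2], [1, 1, 5], size=(1000, 3))
left, right = stereo_views(cloud, head_pos=np.array([0.0, 0.0, 0.0]))
disparity = left[:, 0] - right[:, 0]
print(bool((disparity > 0).all()))  # True: nearer points shift more between eyes
```
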
  • Publication number: 20190228567
    Abstract: Matching an illumination of an embedded virtual object (VO) with current environment illumination conditions provides an enhanced immersive experience to a user. To match the VO and environment illuminations, illumination basis functions are determined based on preprocessing image data, captured as a first combination of intensities of direct illumination sources illuminates the environment. Each basis function corresponds to one of the direct illumination sources. During the capture of runtime image data, a second combination of intensities illuminates the environment. An illumination-weighting vector is determined based on the runtime image data. The determination of the weighting vector accounts for indirect illumination sources, such as surface reflections. The weighting vector encodes a superposition of the basis functions that corresponds to the second combination of intensities. The method illuminates the VO based on the weighting vector.
    Type: Application
    Filed: January 22, 2018
    Publication date: July 25, 2019
    Inventors: Jeong Joon Park, Zhili Chen, Xin Sun, Vladimir Kim, Kalyan Krishna Sunkavalli, Duygu Ceylan Aksit
  • Patent number: 10334222
    Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop.
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: June 25, 2019
    Assignee: Adobe Inc.
    Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
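
A minimal sketch of the loop-selection test above: search (start, end) frame pairs whose crops inside the region of interest differ by less than a threshold, then play that portion as the loop (the mean-absolute-difference similarity here is an assumption, not necessarily the patent's measure):

```python
import numpy as np

def find_loop(frames, roi, min_len=10, threshold=0.05):
    """Search for (start, end) whose ROI crops match within threshold,
    so playback can loop seamlessly inside the region of interest."""
    r0, r1, c0, c1 = roi
    crops = frames[:, r0:r1, c0:c1]
    best = None
    for s in range(len(frames) - min_len):
        for e in range(s + min_len, len(frames)):
            diff = np.abs(crops[s] - crops[e]).mean()
            if diff <= threshold and (best is None or diff < best[2]):
                best = (s, e, diff)
    return best

# Stand-in video: the ROI content oscillates with period 20 frames.
t = np.arange(30)
frames = 0.5 + 0.5 * np.sin(2 * np.pi * t / 20)[:, None, None] * np.ones((30, 16, 16))
loop = find_loop(frames, roi=(4, 12, 4, 12))
print(loop[:2])  # (0, 10): a near-seamless loop within the ROI
```
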
  • Publication number: 20190158800
    Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 23, 2019
    Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
  • Patent number: 10297088
    Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
    Type: Grant
    Filed: September 26, 2017
    Date of Patent: May 21, 2019
    Assignee: Adobe Inc.
    Inventors: Tenell Rhodes, Gavin S. P. Miller, Duygu Ceylan Aksit, Daichi Ito
  • Publication number: 20190124322
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Application
    Filed: December 21, 2018
    Publication date: April 25, 2019
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
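
A toy sketch of the warp-then-complete pipeline above, reduced to a single horizontal shift: warping the source view toward the target viewpoint leaves NaN-marked disoccluded holes, and a stand-in mean-fill plays the role of the trained image-completion model:

```python
import numpy as np

def warp_to_target(src, shift):
    """Warp the source view toward the target viewpoint by a horizontal
    shift; pixels with no source correspondence become disoccluded holes."""
    h, w = src.shape
    out = np.full((h, w), np.nan)     # NaN marks the disoccluded region
    out[:, shift:] = src[:, :w - shift]
    return out

def complete_disocclusion(intermediate):
    """Stand-in for the trained image-completion model: fill holes with
    the mean of the visible (common) region."""
    hole = np.isnan(intermediate)
    filled = intermediate.copy()
    filled[hole] = intermediate[~hole].mean()
    return filled, hole

src = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))  # source view (gradient)
intermediate = warp_to_target(src, shift=2)      # common region + holes
target, hole = complete_disocclusion(intermediate)
print(int(hole.sum()), round(float(target[0, 0]), 3))  # 16 filled pixels, ~0.357
```
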
  • Publication number: 20190096129
    Abstract: The present disclosure includes systems, methods, computer readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device, and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, disclosed systems and methods can generate an augmented reality object based on pressure detected at a tip of a writing device, based on orientation of the writing device, based on motion detector elements of the writing device (e.g., reflective materials, emitters, or object tracking shapes), and/or optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Inventors: Tenell Rhodes, Gavin S.P. Miller, Duygu Ceylan Aksit, Daichi Ito
  • Patent number: 10165259
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park