Patents by Inventor Duygu Ceylan Aksit

Duygu Ceylan Aksit has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190096129
    Abstract: The present disclosure includes systems, methods, computer-readable media, and devices that can generate accurate augmented reality objects based on tracking a writing device in relation to a real-world surface. In particular, the systems and methods described herein can detect an initial location of a writing device and further track movement of the writing device on a real-world surface based on one or more sensory inputs. For example, the disclosed systems and methods can generate an augmented reality object based on pressure detected at the tip of the writing device, on the orientation of the writing device, on motion-detector elements of the writing device (e.g., reflective materials, emitters, or object-tracking shapes), and/or on optical sensors. The systems and methods further render augmented reality objects within an augmented reality environment that appear on the real-world surface based on tracking the movement of the writing device. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Inventors: Tenell Rhodes, Gavin S.P. Miller, Duygu Ceylan Aksit, Daichi Ito
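    A short sketch of how the sensory inputs named in the abstract might drive a stroke: tip pressure gates and widens the stroke, and tilt modulates it. The PenSample fields, width mapping, and pressure threshold below are illustrative assumptions, not the patent's specified method.

    ```python
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class PenSample:
        x: float          # tip position on the detected surface plane (metres)
        y: float
        pressure: float   # normalised tip pressure, 0.0 to 1.0
        tilt_deg: float   # stylus tilt away from the surface normal, in degrees

    def stroke_width(s: PenSample, base: float = 0.002, peak: float = 0.010) -> float:
        """Map tip pressure to a stroke width, widened slightly with tilt."""
        width = base + s.pressure * (peak - base)
        return width * (1.0 + 0.25 * s.tilt_deg / 90.0)

    def build_stroke(samples: List[PenSample],
                     min_pressure: float = 0.05) -> List[Tuple[float, float, float]]:
        """Keep samples where the tip actually touches the surface and emit
        (x, y, width) vertices ready to render as an AR stroke mesh."""
        return [(s.x, s.y, stroke_width(s))
                for s in samples if s.pressure >= min_pressure]
    ```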
  • Patent number: 10165259
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image. (A brief illustrative sketch follows this entry.)
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
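    At a high level the claimed pipeline is: warp the source image into the target viewpoint, identify the disoccluded region, inpaint that region with the trained completion model, and composite. A minimal sketch, where `warp` and `completion_model` are assumed callables standing in for components the abstract does not specify as code:

    ```python
    import numpy as np

    def synthesize_target_view(source_img, warp, completion_model):
        """Sketch of the view-synthesis pipeline described in the abstract.

        warp(source_img) -> (intermediate, disoccluded): the intermediate
        image rendered from the target viewpoint plus a boolean mask of
        pixels not visible in the source.  completion_model is any trained
        inpainting network.  Both interfaces are assumptions."""
        intermediate, disoccluded = warp(source_img)
        predicted = completion_model(intermediate, disoccluded)
        # The common region keeps the warped source pixels; the disoccluded
        # region takes the completion model's prediction.
        return np.where(disoccluded[..., None], predicted, intermediate)
    ```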
  • Publication number: 20180253869
    Abstract: The present disclosure includes methods and systems for generating modified digital images utilizing a neural network that includes a rendering layer. In particular, the disclosed systems and methods can train a neural network to decompose an input digital image into intrinsic physical properties (e.g., material, illumination, and shape). Moreover, the systems and methods can substitute one of the intrinsic physical properties for a target property (e.g., a modified material, illumination, or shape). The systems and methods can utilize a rendering layer trained to synthesize a digital image to generate a modified digital image based on the target property and the remaining (unsubstituted) intrinsic physical properties. This increases the accuracy of modified digital images, which realistically reflect a confluence of the intrinsic physical properties of the input digital image and the target (i.e., modified) properties. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: March 2, 2017
    Publication date: September 6, 2018
    Inventors: Mehmet Yumer, Jimei Yang, Guilin Liu, Duygu Ceylan Aksit
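    The edit-by-substitution idea (decompose, swap one intrinsic property, re-render) can be pictured with a toy Lambertian shader standing in for the trained rendering layer. The `decompose` network is an assumed interface, and the patent's rendering layer is learned and more general than this:

    ```python
    import numpy as np

    def render_lambertian(albedo, normals, light_dir):
        """Toy stand-in for the trained rendering layer: Lambertian shading
        from per-pixel albedo (H, W, 3), unit normals (H, W, 3), and a unit
        light direction (3,)."""
        shading = np.clip(normals @ light_dir, 0.0, None)   # (H, W)
        return albedo * shading[..., None]

    def relight(decompose, image, target_light):
        """Decompose an image into intrinsic properties, substitute the
        illumination with a target one, and re-render using the remaining
        (unsubstituted) properties, as the abstract describes."""
        albedo, normals, _source_light = decompose(image)
        return render_lambertian(albedo, normals, target_light)
    ```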
  • Publication number: 20180234671
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image. (A sketch of the complementary forward-warping step follows this entry.)
    Type: Application
    Filed: February 15, 2017
    Publication date: August 16, 2018
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
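    This application shares its abstract with granted patent 10165259 above, so rather than repeat that sketch, here is the step it treated as given: a toy forward warp showing how the disoccluded mask arises (target pixels on which no source pixel lands). The per-pixel `flow` field is an assumed input for illustration, not the patent's encoding:

    ```python
    import numpy as np

    def splat(source_img, flow):
        """Forward-splat each source pixel along a per-pixel (dx, dy) flow
        into the target view with nearest-pixel rounding.  Target pixels
        that receive no source pixel form the disoccluded region."""
        h, w = source_img.shape[:2]
        target = np.zeros_like(source_img)
        hit = np.zeros((h, w), dtype=bool)
        ys, xs = np.mgrid[0:h, 0:w]
        tx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
        ty = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
        target[ty, tx] = source_img          # last write wins on collisions
        hit[ty, tx] = True
        return target, ~hit                  # intermediate image + disoccluded mask
    ```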
  • Publication number: 20180234669
    Abstract: Systems and methods are provided for delivering a stereoscopic, six-degree-of-freedom viewing experience from a monoscopic 360-degree video. A monoscopic 360-degree video of a subject scene can be preprocessed by analyzing each frame to recover a three-dimensional geometric representation of the subject scene, and further recover a camera motion path that includes various parameters associated with the camera (e.g., orientation and translational movement) as evidenced by the recording. Using the recovered geometric representation and camera motion path, a dense three-dimensional geometric representation of the subject scene is generated via random-assignment and propagation operations. Once preprocessing is complete, the processed video can be provided for stereoscopic display via a device such as a head-mounted display. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: February 15, 2017
    Publication date: August 16, 2018
    Inventors: Zhili Chen, Duygu Ceylan Aksit, Jingwei Huang, Hailin Jin
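    Once dense per-frame geometry is recovered, producing a stereo pair amounts to lifting each pixel to 3D along its viewing ray and re-expressing the points for two eye positions. A minimal sketch assuming an equirectangular frame, a recovered per-pixel `depth` map, and a typical 64 mm baseline (all illustration choices, not taken from the abstract):

    ```python
    import numpy as np

    def equirect_rays(h, w):
        """Unit viewing rays for an h-by-w equirectangular (360-degree) frame."""
        lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi   # -pi .. pi
        lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi   # +pi/2 .. -pi/2
        lon, lat = np.meshgrid(lon, lat)
        return np.stack([np.cos(lat) * np.sin(lon),             # x
                         np.sin(lat),                           # y (up)
                         np.cos(lat) * np.cos(lon)], axis=-1)   # z

    def stereo_point_clouds(depth, baseline=0.064):
        """Lift pixels to 3D using the recovered dense depth, then express
        the points relative to left and right eyes separated by the
        baseline along the x axis."""
        points = equirect_rays(*depth.shape) * depth[..., None]
        half = np.array([baseline / 2.0, 0.0, 0.0])
        # The left eye sits at -x, so points shift by +x in its frame.
        return points + half, points - half   # (left-eye, right-eye) coordinates
    ```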
  • Publication number: 20180108160
    Abstract: Techniques and systems are described herein that support improved object painting in digital images through the use of perspectives and transfers in a digital medium environment. In one example, a user interacts with a two-dimensional digital image in a user interface output by a computing device to apply digital paint. The computing device fits a three-dimensional model to an object within the image, e.g., a face. The object, as fit to the three-dimensional model, is used to output a plurality of perspectives of a view of the object with which a user may interact to digitally paint the object. As part of this, digital paint specified through the user inputs is applied directly by the computing device to a two-dimensional texture map of the object. This also supports transferring digital paint between objects via their respective two-dimensional texture maps. (A brief illustrative sketch follows this entry.)
    Type: Application
    Filed: October 19, 2016
    Publication date: April 19, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Zhili Chen, Srinivasa Madhava Phaneendra Angara, Duygu Ceylan Aksit, Byungmoon Kim, Gahye Park
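    The direct-to-texture-map painting that the abstract describes can be sketched with a per-pixel UV lookup derived from the fitted 3D model; transfer between objects then reduces to copying texels between textures that share a parameterisation. The `uv_map` lookup table and mask-based transfer are assumed representations, not the patent's specified interfaces:

    ```python
    import numpy as np

    def paint_to_texture(texture, uv_map, screen_x, screen_y, color):
        """Write paint applied at a 2D screen position straight into the
        object's texture map.  uv_map[y, x] gives the (u, v) coordinate,
        in [0, 1], that the fitted 3D model assigns to this pixel."""
        u, v = uv_map[screen_y, screen_x]
        th, tw = texture.shape[:2]
        texture[int(v * (th - 1)), int(u * (tw - 1))] = color
        return texture

    def transfer_paint(src_texture, dst_texture, painted):
        """Transfer digital paint between two objects that share a UV
        parameterisation (e.g., two faces fit to the same 3D model) by
        copying the painted texels."""
        out = dst_texture.copy()
        out[painted] = src_texture[painted]
        return out
    ```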