Patents by Inventor Mehmet Ersin Yumer

Mehmet Ersin Yumer has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190259216
    Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional (“2D”) images to three-dimensional (“3D”) models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization.
    Type: Application
    Filed: February 21, 2018
    Publication date: August 22, 2019
    Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
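    The abstract above outlines an initialize-score-refine loop. Below is a minimal numpy sketch of that general pattern, not the patented method: UVs are initialized by projecting the mesh region onto its PCA best-fit plane, scored with an assumed edge-length distortion metric, and relaxed toward isometry. The function names and the specific metric are illustrative assumptions.
    ```python
    import numpy as np

    def init_uv_planar(V):
        # Parameterization initialization (one assumed strategy): project the
        # region's vertices onto their PCA best-fit plane.
        C = V - V.mean(axis=0)
        _, _, Vt = np.linalg.svd(C, full_matrices=False)
        return C @ Vt[:2].T

    def edge_distortion(uv, V, edges):
        # Quality metric (assumed): variance of 2D/3D edge-length ratios;
        # 0 means the parameterization is perfectly isometric.
        r = (np.linalg.norm(uv[edges[:, 0]] - uv[edges[:, 1]], axis=1) /
             (np.linalg.norm(V[edges[:, 0]] - V[edges[:, 1]], axis=1) + 1e-12))
        return r.var()

    def refine_uv(uv, V, edges, steps=200, lr=0.05):
        # Improvement loop: nudge each 2D edge toward its 3D rest length.
        uv = uv.copy()
        for _ in range(steps):
            d = uv[edges[:, 1]] - uv[edges[:, 0]]
            cur = np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
            rest = np.linalg.norm(V[edges[:, 1]] - V[edges[:, 0]], axis=1,
                                  keepdims=True)
            corr = lr * (rest - cur) / cur * d
            np.add.at(uv, edges[:, 1], 0.5 * corr)
            np.add.at(uv, edges[:, 0], -0.5 * corr)
        return uv
    ```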
  • Patent number: 10339679
    Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: July 2, 2019
    Assignee: Adobe Inc.
    Inventor: Mehmet Ersin Yumer
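    As one hedged reading of the flow described above, the sketch below fits a cubic Bézier curve to un-parameterized stroke samples by least squares (with chord-length parameterization and fixed endpoints) and uses a simple proximity heuristic to decide between extending and modifying an existing path. The fitting method and tolerance-based test are assumptions, not the claimed technique.
    ```python
    import numpy as np

    def fit_cubic_bezier(pts):
        # Least-squares cubic Bezier fit with fixed endpoints and
        # chord-length parameterization of the input samples (pts: Mx2).
        t = np.concatenate([[0.0], np.cumsum(
            np.linalg.norm(np.diff(pts, axis=0), axis=1))])
        t /= t[-1]
        B = np.stack([(1-t)**3, 3*(1-t)**2*t, 3*(1-t)*t**2, t**3], axis=1)
        P0, P3 = pts[0], pts[-1]
        R = pts - np.outer(B[:, 0], P0) - np.outer(B[:, 3], P3)
        ctrl, *_ = np.linalg.lstsq(B[:, 1:3], R, rcond=None)
        return np.array([P0, ctrl[0], ctrl[1], P3])  # 4 control points

    def extends_existing(stroke, existing_end, tol=10.0):
        # Heuristic (assumed): treat the stroke as an extension if it starts
        # near the existing path's endpoint, otherwise as a modification.
        return np.linalg.norm(stroke[0] - existing_end) < tol
    ```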
  • Publication number: 20190164261
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Application
    Filed: November 28, 2017
    Publication date: May 30, 2019
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
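    A minimal PyTorch sketch of the encoder/intensity-decoder arrangement and the two training phases described above. The layer sizes, losses, and the phase-two fine-tuning target are assumptions; the patent's actual architecture is not specified here.
    ```python
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        # Encodes an input image into an intermediate representation.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU())
        def forward(self, x):
            return self.net(x)

    class IntensityDecoder(nn.Module):
        # Decodes the intermediate representation into a light intensity map.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, 2, 1))
        def forward(self, z):
            return self.net(z)

    enc, dec = Encoder(), IntensityDecoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                           lr=1e-4)

    # Phase 1 (assumed loss): train as a light-mask decoder on LDR crops.
    ldr = torch.rand(4, 3, 64, 64)                       # stand-in LDR batch
    mask_gt = (torch.rand(4, 1, 64, 64) > 0.95).float()  # stand-in masks
    opt.zero_grad()
    nn.BCEWithLogitsLoss()(dec(enc(ldr)), mask_gt).backward()
    opt.step()

    # Phase 2 (assumed loss): fine-tune the same weights on HDR data so the
    # output becomes a continuous intensity map (e.g., regress log intensity).
    hdr = torch.rand(4, 3, 64, 64)
    log_intensity_gt = torch.rand(4, 1, 64, 64)
    opt.zero_grad()
    nn.MSELoss()(dec(enc(hdr)), log_intensity_gt).backward()
    opt.step()
    ```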
  • Publication number: 20190124322
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Application
    Filed: December 21, 2018
    Publication date: April 25, 2019
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
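    The sketch below illustrates the warp-then-complete pipeline the abstract describes, under assumed inputs: a target-to-source sampling flow, a visibility mask marking disoccluded pixels, and a pre-trained completion model passed in as a callable. None of these interfaces come from the patent itself.
    ```python
    import numpy as np

    def synthesize_target_view(source, flow, visible, completion_model):
        # source:  HxWx3 source image (assumed input)
        # flow:    HxWx2 target->source sampling coordinates, pixel units
        # visible: HxW bool, False where the target pixel is disoccluded
        # completion_model: callable(image, mask) -> inpainted image
        H, W = visible.shape
        xs = np.clip(flow[..., 0].round().astype(int), 0, W - 1)
        ys = np.clip(flow[..., 1].round().astype(int), 0, H - 1)
        intermediate = source[ys, xs]        # common region, resampled
        intermediate[~visible] = 0.0         # blank the disoccluded region
        completed = completion_model(intermediate, ~visible)
        # Target view: common region from the warp, prediction elsewhere.
        return np.where(visible[..., None], intermediate, completed)
    ```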
  • Patent number: 10204423
    Abstract: Disclosed are techniques for more accurately estimating the pose of a camera used to capture a three-dimensional scene. Accuracy is enhanced by leveraging three-dimensional object priors extracted from a large-scale three-dimensional shape database. This allows existing feature matching techniques to be augmented by generic three-dimensional object priors, thereby providing robust information about object orientations across multiple images or frames. More specifically, the three-dimensional object priors provide a unit that is more easily and reliably tracked between images than a single feature point. By adding object pose estimates across images, drift is reduced and the resulting visual odometry techniques are more robust and accurate. This eliminates the need for three-dimensional object templates that are specifically generated for the imaged object, training data obtained for a specific environment, and other tedious preprocessing steps.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: February 12, 2019
    Assignee: Adobe Inc.
    Inventors: Vladimir Kim, Oliver Wang, Minhyuk Sung, Mehmet Ersin Yumer
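    One way to read "adding object pose estimates" into a standard pose optimization is an extra residual that penalizes disagreement between the camera-predicted object orientation and a per-frame estimate derived from the shape prior. The scipy-based sketch below is an illustrative formulation only, not the patented method.
    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation as Rot

    def pose_residuals(x, pts3d, pts2d, K, obj_world, obj_cam, w_obj=5.0):
        # x = camera pose: rotation vector (3) + translation (3). All inputs
        # (matched points, intrinsics K, object orientations) are assumed.
        R_c, t = Rot.from_rotvec(x[:3]), x[3:]
        cam = R_c.apply(pts3d) + t
        proj = (K @ cam.T).T
        feat = (proj[:, :2] / proj[:, 2:3] - pts2d).ravel()  # reprojection
        # Object-prior term: the object's orientation predicted in the camera
        # frame (R_c composed with its world orientation from the shape
        # database) should agree with the per-frame detection obj_cam.
        obj = w_obj * (obj_cam.inv() * R_c * obj_world).as_rotvec()
        return np.concatenate([feat, obj])

    # Usage (with suitable data): refined = least_squares(
    #     pose_residuals, x0, args=(pts3d, pts2d, K, obj_world, obj_cam)).x
    ```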
  • Publication number: 20190026550
    Abstract: Disclosed systems and methods categorize text regions of an electronic document into document object types based on a combination of semantic information and appearance information from the electronic document. A page segmentation application executing on a computing device accesses textual feature representations that represent text portions in a vector space, where a set of pixels from the page is mapped to a textual feature representation. The page segmentation application generates a visual feature representation, which corresponds to an appearance of a document portion including the set of pixels, by applying a neural network to the page of the electronic document. The page segmentation application generates an output page segmentation of the electronic document by applying the neural network to the textual feature representation and the visual feature representation.
    Type: Application
    Filed: July 21, 2017
    Publication date: January 24, 2019
    Inventors: Xiao Yang, Paul Asente, Mehmet Ersin Yumer
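    A minimal PyTorch sketch of the fusion idea: per-pixel text-embedding maps are concatenated with the image channels and passed through a small fully convolutional network that emits per-pixel document-object classes. Channel counts and network depth are assumptions.
    ```python
    import torch
    import torch.nn as nn

    class PageSegmenter(nn.Module):
        # Fuses per-pixel text embeddings with image pixels; all sizes are
        # illustrative assumptions, not the patent's architecture.
        def __init__(self, text_dim=16, n_classes=5):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3 + text_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(64, n_classes, 1)  # per-pixel classes

        def forward(self, image, text_map):
            # image: Bx3xHxW; text_map: Bx(text_dim)xHxW, where each pixel
            # holds the embedding of the text it belongs to (zeros elsewhere).
            return self.head(self.body(torch.cat([image, text_map], dim=1)))

    # Usage with stand-in tensors:
    logits = PageSegmenter()(torch.rand(1, 3, 64, 64),
                             torch.rand(1, 16, 64, 64))
    ```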
  • Patent number: 10165259
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: February 15, 2017
    Date of Patent: December 25, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
  • Publication number: 20180300912
    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
    Type: Application
    Filed: April 12, 2017
    Publication date: October 18, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
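    The three stages named above (vertex position discovery, path discovery, curve construction) suggest a pipeline like the following sketch: anchor vertices, assumed to come from snapping the user's stroke to the mesh, are connected by shortest paths over the mesh edge graph, and the result is smoothed by Chaikin corner-cutting. The graph search and smoothing choices are assumptions, not the patented method.
    ```python
    import numpy as np
    import networkx as nx

    def curve_on_mesh(V, edges, anchors, smooth_iters=3):
        # V: Nx3 vertex positions; edges: Ex2 vertex-index pairs; anchors:
        # vertex indices assumed to come from the vertex-discovery step
        # (e.g., snapping the user's 2-D stroke to visible mesh vertices).
        G = nx.Graph()
        for i, j in edges:
            G.add_edge(int(i), int(j),
                       weight=float(np.linalg.norm(V[i] - V[j])))
        # Path discovery: chain shortest paths between consecutive anchors.
        path = []
        for a, b in zip(anchors, anchors[1:]):
            seg = nx.shortest_path(G, a, b, weight='weight')
            path.extend(seg if not path else seg[1:])
        pts = V[path].astype(float)
        # Final curve construction: Chaikin corner-cutting, endpoints pinned.
        for _ in range(smooth_iters):
            q = 0.75 * pts[:-1] + 0.25 * pts[1:]
            r = 0.25 * pts[:-1] + 0.75 * pts[1:]
            mid = np.empty((2 * len(q), 3))
            mid[0::2], mid[1::2] = q, r
            pts = np.vstack([pts[:1], mid, pts[-1:]])
        return pts
    ```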
  • Publication number: 20180260975
    Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
    Type: Application
    Filed: March 13, 2017
    Publication date: September 13, 2018
    Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
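    The training inputs described above are extracted from known panoramas. As one illustration of how such crops can be produced, the sketch below samples a limited-field-of-view perspective crop from an equirectangular panorama; the full panorama can then supervise the predicted light mask. The projection math is standard, but its use here is an assumption about the pipeline.
    ```python
    import numpy as np

    def crop_from_panorama(pano, yaw, pitch, fov_deg=60, out_hw=(128, 128)):
        # pano: HxWx3 equirectangular panorama; returns a perspective crop.
        H, W = out_hw
        f = 0.5 * W / np.tan(0.5 * np.radians(fov_deg))
        ys, xs = np.mgrid[0:H, 0:W]
        d = np.stack([xs - W / 2, ys - H / 2, np.full((H, W), f)], axis=-1)
        d /= np.linalg.norm(d, axis=-1, keepdims=True)
        x, y, z = d[..., 0], d[..., 1], d[..., 2]
        cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
        y, z = cp * y - sp * z, sp * y + cp * z      # pitch about the x-axis
        x, z = cy * x + sy * z, -sy * x + cy * z     # yaw about the y-axis
        lon, lat = np.arctan2(x, z), np.arcsin(np.clip(y, -1, 1))
        ph, pw = pano.shape[:2]
        u = ((lon / (2 * np.pi) + 0.5) * (pw - 1)).astype(int)
        v = ((lat / np.pi + 0.5) * (ph - 1)).astype(int)
        return pano[v, u]

    # e.g. crop = crop_from_panorama(np.random.rand(256, 512, 3), 0.8, 0.1)
    ```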
  • Patent number: 10062215
    Abstract: Methods and systems are directed to improving the convenience of drawing applications. Some examples include generating 3D drawing objects using a drawing application and selecting one based on a 2D design (in some cases a hand-drawn sketch) provided by a user. The user-provided 2D design is separated into an outline perimeter and interior design, and corresponding vectors are then generated. These vectors are then used with analogous vectors generated for drawing objects. The selection of a drawing object to correspond to the 2D design is based on finding a drawing object having a minimum difference between its vectors and the vectors of the 2D design. The selected drawing object is then used to generate a drawing object configured to receive edits from the user. This reduces the effort required to manually reproduce the 2D design in the drawing application.
    Type: Grant
    Filed: February 3, 2016
    Date of Patent: August 28, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Radomir Mech, Mehmet Ersin Yumer, Haibin Huang
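    The matching step reduces to nearest-neighbor search over descriptor vectors. The sketch below uses assumed stand-ins for the outline and interior vectors (radial contour samples and a coarse occupancy histogram) and the minimum-difference selection; the patent does not specify these particular descriptors.
    ```python
    import numpy as np

    def outline_vector(outline_pts, n=32):
        # Assumed outline descriptor: scale-normalized radial distances from
        # the centroid at n evenly spaced boundary samples (outline_pts: Mx2).
        c = outline_pts.mean(axis=0)
        idx = np.linspace(0, len(outline_pts) - 1, n).astype(int)
        d = np.linalg.norm(outline_pts[idx] - c, axis=1)
        return d / (d.max() + 1e-12)

    def interior_vector(mask, grid=8):
        # Assumed interior descriptor: coarse occupancy histogram of the
        # interior design (mask: HxW boolean raster).
        H, W = mask.shape
        return np.array([mask[i*H//grid:(i+1)*H//grid,
                              j*W//grid:(j+1)*W//grid].mean()
                         for i in range(grid) for j in range(grid)])

    def best_match(design_vec, object_vecs):
        # Minimum vector difference selects the drawing object; in practice
        # outline and interior vectors would be concatenated before comparing.
        return int(np.argmin([np.linalg.norm(design_vec - v)
                              for v in object_vecs]))
    ```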
  • Publication number: 20180232906
    Abstract: Disclosed are techniques for more accurately estimating the pose of a camera used to capture a three-dimensional scene. Accuracy is enhanced by leveraging three-dimensional object priors extracted from a large-scale three-dimensional shape database. This allows existing feature matching techniques to be augmented by generic three-dimensional object priors, thereby providing robust information about object orientations across multiple images or frames. More specifically, the three-dimensional object priors provide a unit that is more easily and reliably tracked between images than a single feature point. By adding object pose estimates across images, drift is reduced and the resulting visual odometry techniques are more robust and accurate. This eliminates the need for three-dimensional object templates that are specifically generated for the imaged object, training data obtained for a specific environment, and other tedious preprocessing steps.
    Type: Application
    Filed: February 13, 2017
    Publication date: August 16, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Vladimir Kim, Oliver Wang, Minhyuk Sung, Mehmet Ersin Yumer
  • Publication number: 20180234671
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Application
    Filed: February 15, 2017
    Publication date: August 16, 2018
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
  • Patent number: 9792734
    Abstract: Methods of generating one or more abstractions of a three-dimensional (3D) input model by performing volumetric manipulations on one or more volumetric abstractions of the 3D input model. In some embodiments, volumetric manipulations are made to a volumetric shell abstraction of a 3D input model in a successive and iterative manner to generate an abstraction hierarchy composed of a set of volumetric abstractions having differing levels of abstraction based on containing differing amounts of geometric detail from the 3D input model. In one example of geometric manipulation, one or more fitted subvolumes corresponding to geometric detail of the 3D input model are identified based on a current level of abstraction and the 3D input model, and each fitted subvolume is added to or subtracted from the current level of abstraction to generate a next, finer level of abstraction. In some embodiments, the disclosed methods are embodied in suitable software.
    Type: Grant
    Filed: September 12, 2014
    Date of Patent: October 17, 2017
    Assignee: Carnegie Mellon University
    Inventors: Levent Burak Kara, Mehmet Ersin Yumer
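    On a voxel grid, the add/subtract refinement reads naturally as the sketch below: at each level, find the largest region where the current abstraction disagrees with the input model, fit an axis-aligned box (a fitted subvolume) to it, and union or subtract that box. This is an illustrative reading, not the patented algorithm.
    ```python
    import numpy as np
    from scipy import ndimage

    def refine_abstraction(shell, target, levels=3):
        # shell, target: boolean voxel grids; shell is the coarse starting
        # abstraction (e.g., the model's filled bounding box).
        hierarchy = [shell.copy()]
        for _ in range(levels):
            cur = hierarchy[-1].copy()
            for add in (True, False):
                # Voxels the abstraction is missing (add) or has in excess
                # of the input model (subtract).
                mismatch = (target & ~cur) if add else (cur & ~target)
                lbl, n = ndimage.label(mismatch)
                if n == 0:
                    continue
                # Fit a subvolume (axis-aligned box) to the largest
                # disagreeing component, then add or subtract it.
                sizes = ndimage.sum(mismatch, lbl, range(1, n + 1))
                box = ndimage.find_objects(
                    (lbl == np.argmax(sizes) + 1).astype(int))[0]
                cur[box] = add
            hierarchy.append(cur)  # next, finer level of abstraction
        return hierarchy
    ```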
  • Publication number: 20170256098
    Abstract: A digital medium environment is described to generate a three dimensional facial expression from a blend shape and a facial expression source. A semantic type is detected that defines a facial expression of the blend shape. Transfer intensities are assigned based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression. The three dimensional facial expression is generated from the blend shape and the facial expression source based on the assigned transfer intensities.
    Type: Application
    Filed: March 2, 2016
    Publication date: September 7, 2017
    Inventor: Mehmet Ersin Yumer
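    A hedged sketch of the weighted transfer: per-region transfer intensities, looked up by the detected semantic type, blend per-vertex displacements from the blend shape and the expression source. The region partition and the numeric weights are invented for illustration.
    ```python
    import numpy as np

    # Made-up transfer intensities (blend-shape weight, source weight) per
    # face region for one detected semantic type; the real values come from
    # the method itself, not from the abstract.
    TRANSFER = {'smile': {'mouth': (0.8, 0.2),
                          'eyes': (0.3, 0.7),
                          'brows': (0.2, 0.8)}}

    def blend_expression(neutral, blend_offsets, source_offsets,
                         regions, semantic):
        # neutral: Nx3 vertices; *_offsets: Nx3 per-vertex displacements;
        # regions: dict mapping region name -> array of vertex indices.
        out = neutral.copy()
        for region, idx in regions.items():
            w_blend, w_src = TRANSFER[semantic][region]
            out[idx] += (w_blend * blend_offsets[idx] +
                         w_src * source_offsets[idx])
        return out
    ```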
  • Publication number: 20170249761
    Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
    Type: Application
    Filed: February 26, 2016
    Publication date: August 31, 2017
    Inventor: Mehmet Ersin Yumer
  • Publication number: 20170221257
    Abstract: Methods and systems are directed to improving the convenience of drawing applications. Some examples include generating 3D drawing objects using a drawing application and selecting one based on a 2D design (in some cases a hand-drawn sketch) provided by a user. The user-provided 2D design is separated into an outline perimeter and interior design, and corresponding vectors are then generated. These vectors are then used with analogous vectors generated for drawing objects. The selection of a drawing object to correspond to the 2D design is based on finding a drawing object having a minimum difference between its vectors and the vectors of the 2D design. The selected drawing object is then used to generate a drawing object configured to receive edits from the user. This reduces the effort required to manually reproduce the 2D design in the drawing application.
    Type: Application
    Filed: February 3, 2016
    Publication date: August 3, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Radomir Mech, Mehmet Ersin Yumer, Haibin Huang
  • Publication number: 20170004397
    Abstract: An intuitive object-generation experience is provided by employing an autoencoder neural network to reduce the dimensionality of a procedural model. A set of sample objects are generated using the procedural model. In embodiments, the sample objects may be selected according to visual features such that the sample objects are uniformly distributed in visual appearance. Both procedural model parameters and visual features from the sample objects are used to train an autoencoder neural network, which maps a small number of new parameters to the larger number of procedural model parameters of the original procedural model. A user interface may be provided that allows users to generate new objects by adjusting the new parameters of the trained autoencoder neural network, which outputs procedural model parameters. The output procedural model parameters may be provided to the procedural model to generate the new objects.
    Type: Application
    Filed: June 30, 2015
    Publication date: January 5, 2017
    Inventors: Mehmet Ersin Yumer, Radomir Mech, Paul John Asente, Gavin Stuart Peter Miller
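    A minimal PyTorch sketch of the dimensionality-reduction idea: an autoencoder trained on concatenated procedural parameters and visual features, whose few latent units become the new user-facing controls and whose decoder emits full procedural parameters. All sizes are assumptions.
    ```python
    import torch
    import torch.nn as nn

    class ProceduralAutoencoder(nn.Module):
        # Compresses many procedural parameters (plus visual features of the
        # sampled objects) into a few controls; all sizes are assumptions.
        def __init__(self, n_params=100, n_visual=64, n_latent=4):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_params + n_visual, 128), nn.ReLU(),
                nn.Linear(128, n_latent))
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 128), nn.ReLU(),
                nn.Linear(128, n_params))  # emits full procedural parameters

        def forward(self, params, visual):
            z = self.encoder(torch.cat([params, visual], dim=-1))
            return self.decoder(z), z

    # After training, a UI can expose just the n_latent values as sliders;
    # decoding a slider setting yields parameters for the procedural model.
    ae = ProceduralAutoencoder()
    recon_params, controls = ae(torch.rand(8, 100), torch.rand(8, 64))
    ```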
  • Publication number: 20150077417
    Abstract: Methods of generating one or more abstractions of a three-dimensional (3D) input model by performing volumetric manipulations on one or more volumetric abstractions of the 3D input model. In some embodiments, volumetric manipulations are made to a volumetric shell abstraction of a 3D input model in a successive and iterative manner to generate an abstraction hierarchy composed of a set of volumetric abstractions having differing levels of abstraction based on containing differing amounts of geometric detail from the 3D input model. In one example of geometric manipulation, one or more fitted subvolumes corresponding to geometric detail of the 3D input model are identified based on a current level of abstraction and the 3D input model, and each fitted subvolume is added to or subtracted from the current level of abstraction to generate a next, finer level of abstraction. In some embodiments, the disclosed methods are embodied in suitable software.
    Type: Application
    Filed: September 12, 2014
    Publication date: March 19, 2015
    Inventors: Levent Burak Kara, Mehmet Ersin Yumer