Patents by Inventor Mehmet Ersin Yumer

Mehmet Ersin Yumer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11858536
    Abstract: Example aspects of the present disclosure describe determining, using a machine-learned model framework, a motion trajectory for an autonomous platform. The motion trajectory can be determined based at least in part on a plurality of costs based at least in part on a distribution of probabilities determined conditioned on the motion trajectory.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: January 2, 2024
    Assignee: UATC, LLC
    Inventors: Jerry Junkai Liu, Wenyuan Zeng, Raquel Urtasun, Mehmet Ersin Yumer
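The idea of costing a motion trajectory against a probability distribution over other agents' futures can be sketched with a toy expected-cost planner. Everything below (the collision cost, the sampled agent futures, the candidate set) is a hypothetical stand-in for the machine-learned framework in the abstract, not the patented system:

```python
import numpy as np

def expected_cost(trajectory, agent_futures, probs, collision_radius=2.0):
    """Cost of an ego trajectory under a distribution over another agent's futures.

    trajectory:    (T, 2) ego positions
    agent_futures: (K, T, 2) sampled futures for the other agent
    probs:         (K,) probability of each sampled future (sums to 1)
    """
    # Distance from ego to the agent at each timestep, for each future.
    d = np.linalg.norm(agent_futures - trajectory[None], axis=-1)   # (K, T)
    # Simple cost: 1 if any timestep comes closer than the collision radius.
    per_future = (d < collision_radius).any(axis=1).astype(float)   # (K,)
    # Expected cost under the distribution conditioned on this trajectory.
    return float(np.dot(probs, per_future))

def select_trajectory(candidates, agent_futures, probs):
    """Pick the candidate trajectory with the lowest expected cost."""
    costs = [expected_cost(tr, agent_futures, probs) for tr in candidates]
    return int(np.argmin(costs))
```

A real planner would combine many cost terms (progress, comfort, rules of the road); this sketch keeps only the probability-weighted collision term to show the structure.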
  • Publication number: 20230365143
    Abstract: A system comprises an autonomous vehicle and a control device. The control device detects an event trigger that impacts the autonomous vehicle. In response to detecting the event trigger, the control device enters the autonomous vehicle into a first degraded autonomy mode. In the first degraded autonomy mode, the control device communicates sensor data to an oversight server. The control device receives one or more high-level commands from the oversight server. The one or more high-level commands indicate minimal risk maneuvers for the autonomous vehicle. The control device receives a maximum traveling speed for the autonomous vehicle from the oversight server. The control device navigates the autonomous vehicle using an adaptive cruise control according to the one or more high-level commands and the maximum traveling speed.
    Type: Application
    Filed: March 29, 2023
    Publication date: November 16, 2023
    Inventors: Mehmet Ersin Yumer, Xiaodi Hou
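The degraded-autonomy flow in this abstract (event trigger, mode entry, oversight commands, speed cap) can be sketched as a small state machine. The class, mode names, and method signatures below are invented for illustration and are not the claimed control device:

```python
from dataclasses import dataclass, field

@dataclass
class ControlDevice:
    """Sketch of the degraded-autonomy flow: on an event trigger the device
    enters a degraded mode, defers to oversight commands, and caps speed."""
    mode: str = "full_autonomy"
    max_speed: float = float("inf")
    commands: list = field(default_factory=list)

    def on_event_trigger(self, event):
        # Any impacting event moves the vehicle into the first degraded mode.
        self.mode = "degraded_1"

    def receive_oversight(self, high_level_commands, max_speed):
        # Oversight supplies minimal-risk maneuvers and a traveling-speed cap.
        self.commands = list(high_level_commands)
        self.max_speed = max_speed

    def cruise_speed(self, requested):
        # Adaptive cruise control never exceeds the oversight-issued cap.
        return min(requested, self.max_speed)
```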
  • Patent number: 11443412
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
    Type: Grant
    Filed: November 8, 2019
    Date of Patent: September 13, 2022
    Assignee: ADOBE INC.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
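The multi-phase schedule in this abstract (pretrain a light-mask decoder on plentiful LDR data, then fine-tune the same parameters on scarce HDR data into an intensity decoder) can be illustrated with a toy linear model. The data, dimensions, and gradient-descent fitter are invented stand-ins, not the patented networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, Y, W_init=None, lr=0.1, steps=500):
    """Gradient-descent fit of Y ~ X @ W, optionally warm-started from W_init."""
    W = np.zeros((X.shape[1], Y.shape[1])) if W_init is None else W_init.copy()
    for _ in range(steps):
        W -= lr * X.T @ (X @ W - Y) / len(X)
    return W

# Phase 1: train a "light mask decoder" on plentiful LDR-derived binary masks.
codes = rng.normal(size=(64, 8))             # stand-in for encoder outputs
masks = (codes @ rng.normal(size=(8, 4)) > 0).astype(float)
W_mask = fit_linear(codes, masks)

# Phase 2: adjust the same parameters on scarce HDR intensity targets,
# turning the mask decoder into an intensity decoder.
hdr_codes = codes[:16]
hdr_intensity = np.exp(hdr_codes @ rng.normal(size=(8, 4)))
W_intensity = fit_linear(hdr_codes, hdr_intensity, W_init=W_mask, steps=200)
```

Warm-starting phase 2 from the phase-1 weights is the point of the schedule: the second, harder regression starts from a decoder that already localizes light sources.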
  • Patent number: 11115645
    Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
    Type: Grant
    Filed: December 21, 2018
    Date of Patent: September 7, 2021
    Assignee: ADOBE INC.
    Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
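The final step of this pipeline (update the disoccluded region of the intermediate view with a predicted fill, keep the common region) can be sketched as a masked completion. The trivial mean-color predictor below is a hypothetical stand-in for the trained image-completion model:

```python
import numpy as np

def complete_disoccluded(intermediate, disoccluded_mask, predictor=None):
    """Fill the disoccluded region of an intermediate view.

    intermediate:    2-D image array containing the common (visible) region
    disoccluded_mask: boolean array marking pixels not visible in the source
    predictor:       optional model; defaults to the mean of the visible region
    """
    out = intermediate.astype(float).copy()
    known = out[~disoccluded_mask]
    fill = predictor(out, disoccluded_mask) if predictor else known.mean(axis=0)
    # The common region is kept; only the disoccluded region is updated.
    out[disoccluded_mask] = fill
    return out
```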
  • Patent number: 11106902
    Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: August 31, 2021
    Assignee: ADOBE INC.
    Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
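The joint-object contact pairing described here can be sketched by taking the peak of each heat map and pairing joints with nearby contact points. The map names and distance threshold below are illustrative; the abstract's interaction-detection subnet learns this relationship rather than applying a fixed rule:

```python
import numpy as np

def peak(heatmap):
    """Location (row, col) of a heat map's maximum response."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def pair_joints_with_contacts(joint_maps, contact_maps, max_dist=3.0):
    """Pair each joint-location peak with any contact-point peak within max_dist.

    joint_maps / contact_maps: dicts mapping a name to a 2-D heat map.
    Returns a list of (joint_name, contact_name) joint-object contact pairs.
    """
    joints = {name: np.array(peak(h)) for name, h in joint_maps.items()}
    contacts = {name: np.array(peak(h)) for name, h in contact_maps.items()}
    pairs = []
    for jname, jloc in joints.items():
        for cname, cloc in contacts.items():
            if np.linalg.norm(jloc - cloc) <= max_dist:
                pairs.append((jname, cname))
    return pairs
```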
  • Patent number: 11069099
    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
    Type: Grant
    Filed: April 22, 2020
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
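The "path discovery" step named in this abstract, finding how a stroke flows between anchor vertices on the surface, can be sketched as a shortest path over the mesh's vertex graph. The graph representation below is a simplification; the patent's vertex-position discovery and final curve construction are not shown:

```python
import heapq

def shortest_vertex_path(adjacency, start, goal):
    """Dijkstra over a mesh's vertex graph: a stand-in for path discovery.

    adjacency: dict mapping vertex -> list of (neighbor, edge_length) pairs.
    Returns the list of vertices from start to goal.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    seen = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        if v == goal:
            break
        for u, w in adjacency.get(v, []):
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                prev[u] = v
                heapq.heappush(heap, (nd, u))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

A final curve-construction pass would then smooth this vertex chain into a continuous curve hugging the surface.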
  • Patent number: 11049296
    Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical ordering representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventor: Mehmet Ersin Yumer
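The two steps here, fitting a parameterized path to un-parameterized input samples and then deciding extend-versus-modify, can be sketched with a polynomial fit and an endpoint-proximity test. Both the polynomial representation and the distance rule are invented simplifications of the described determination:

```python
import numpy as np

def fit_parameterized_path(points, degree=3):
    """Fit polynomials x(t), y(t) over t in [0, 1] to freehand input samples:
    a stand-in for the parameterized (mathematically ordered) representation."""
    t = np.linspace(0.0, 1.0, len(points))
    pts = np.asarray(points, dtype=float)
    return np.polyfit(t, pts[:, 0], degree), np.polyfit(t, pts[:, 1], degree)

def extends_existing(new_start, existing_end, tolerance=1.0):
    """Decide extend vs. modify: a new stroke starting near the existing
    path's endpoint is treated as an extension of that path."""
    gap = np.linalg.norm(np.asarray(new_start, float) - np.asarray(existing_end, float))
    return float(gap) <= tolerance
```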
  • Patent number: 10783716
    Abstract: A digital medium environment is described to generate a three dimensional facial expression from a blend shape and a facial expression source. A semantic type is detected that defines a facial expression of the blend shape. Transfer intensities are assigned based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression. The three dimensional facial expression is generated from the blend shape and the facial expression source based on the assigned transfer intensities.
    Type: Grant
    Filed: March 2, 2016
    Date of Patent: September 22, 2020
    Assignee: Adobe Inc.
    Inventor: Mehmet Ersin Yumer
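The per-region transfer intensities described in this abstract amount to a weighted combination of the blend shape and the expression source, with different weights for different parts of the face. The region encoding below is a hypothetical simplification (the patent assigns weights from a detected semantic type):

```python
import numpy as np

def blend_expression(blend_shape, source, region_weights):
    """Per-region weighted combination of a blend shape and an expression source.

    blend_shape, source: (V, 3) vertex arrays
    region_weights: dict mapping region name -> (vertex_indices, source_weight),
                    where source_weight is the transfer intensity given to the
                    source; the blend shape receives the complement.
    """
    out = blend_shape.astype(float).copy()
    for idx, w in region_weights.values():
        out[idx] = (1.0 - w) * blend_shape[idx] + w * source[idx]
    return out
```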
  • Publication number: 20200250865
    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
    Type: Application
    Filed: April 22, 2020
    Publication date: August 6, 2020
    Applicant: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
  • Patent number: 10657682
    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: May 19, 2020
    Assignee: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
  • Patent number: 10607329
    Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: March 31, 2020
    Assignee: ADOBE INC.
    Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
  • Patent number: 10599924
    Abstract: Disclosed systems and methods categorize text regions of an electronic document into document object types based on a combination of semantic information and appearance information from the electronic document. A page segmentation application executing on a computing device accesses textual feature representations that represent text portions in a vector space, where a set of pixels from the page is mapped to a textual feature representation. The page segmentation application generates a visual feature representation, which corresponds to an appearance of a document portion including the set of pixels, by applying a neural network to the page of the electronic document. The page segmentation application generates an output page segmentation of the electronic document by applying the neural network to the textual feature representation and the visual feature representation.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: March 24, 2020
    Assignee: Adobe Inc.
    Inventors: Xiao Yang, Paul Asente, Mehmet Ersin Yumer
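The fusion step in this abstract, combining a textual feature representation with a visual one to label page regions, can be sketched by concatenating the two vectors and applying a scoring layer. The linear classifier, weight matrix, and label set below are invented stand-ins for the neural network:

```python
import numpy as np

def segment_regions(textual_feats, visual_feats, W, labels):
    """Label each page region from its combined textual + visual features.

    textual_feats: (R, Dt) textual feature representations
    visual_feats:  (R, Dv) visual feature representations
    W:             (Dt + Dv, C) scoring weights (stand-in for the network)
    labels:        list of C document object type names
    """
    feats = np.concatenate([textual_feats, visual_feats], axis=1)
    scores = feats @ W                         # (R, C) per-class scores
    return [labels[i] for i in scores.argmax(axis=1)]
```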
  • Publication number: 20200074600
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range image to generate the intensity decoder.
    Type: Application
    Filed: November 8, 2019
    Publication date: March 5, 2020
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Patent number: 10552730
    Abstract: An intuitive object-generation experience is provided by employing an autoencoder neural network to reduce the dimensionality of a procedural model. A set of sample objects are generated using the procedural model. In embodiments, the sample objects may be selected according to visual features such that the sample objects are uniformly distributed in visual appearance. Both procedural model parameters and visual features from the sample objects are used to train an autoencoder neural network, which maps a small number of new parameters to the larger number of procedural model parameters of the original procedural model. A user interface may be provided that allows users to generate new objects by adjusting the new parameters of the trained autoencoder neural network, which outputs procedural model parameters. The output procedural model parameters may be provided to the procedural model to generate the new objects.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 4, 2020
    Assignee: ADOBE INC.
    Inventors: Mehmet Ersin Yumer, Radomir Mech, Paul John Asente, Gavin Stuart Peter Miller
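The core move in this abstract, mapping a small number of new parameters to the full set of procedural model parameters, can be illustrated with a linear autoencoder computed in closed form via SVD (i.e., PCA). The patent trains a nonlinear autoencoder on both parameters and visual features; this sketch keeps only the dimensionality-reduction idea:

```python
import numpy as np

def train_linear_autoencoder(samples, n_latent):
    """Closed-form linear autoencoder over procedural parameter samples.

    samples: (N, P) matrix of procedural model parameter vectors
    Returns (encode, decode): encode maps P params -> n_latent controls,
    decode maps n_latent controls -> P params for the procedural model.
    """
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    basis = vt[:n_latent]                      # (n_latent, P) principal axes

    def encode(params):
        return (params - mean) @ basis.T

    def decode(latent):
        return mean + latent @ basis

    return encode, decode
```

A UI slider per latent dimension then drives `decode`, whose output feeds the original procedural model.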
  • Patent number: 10521970
    Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional (“2D”) images to three-dimensional (“3D”) models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: December 31, 2019
    Assignee: Adobe Inc.
    Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
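One possible quality metric for a local parameterization, of the kind this abstract computes and then improves, is edge-length distortion: how far the 2D (UV) edge lengths of a triangle deviate from its 3D edge lengths. This particular metric is an illustrative choice, not necessarily the one claimed:

```python
import numpy as np

def stretch_metric(tri3d, tri2d):
    """Edge-length distortion of one triangle's parameterization.

    tri3d: (3, 3) triangle vertices on the mesh
    tri2d: (3, 2) corresponding UV coordinates
    Returns >= 1.0, where 1.0 means the mapping preserves edge-length ratios.
    """
    e3 = [np.linalg.norm(tri3d[i] - tri3d[(i + 1) % 3]) for i in range(3)]
    e2 = [np.linalg.norm(tri2d[i] - tri2d[(i + 1) % 3]) for i in range(3)]
    ratios = np.array(e2) / np.array(e3)
    return float(ratios.max() / ratios.min())
```

A refinement loop would perturb the UV coordinates and keep changes that lower this metric toward 1.0.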
  • Patent number: 10475169
    Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range image to generate the intensity decoder.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: November 12, 2019
    Assignee: Adobe Inc.
    Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
  • Publication number: 20190287279
    Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical ordering representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
    Type: Application
    Filed: May 31, 2019
    Publication date: September 19, 2019
    Applicant: Adobe Inc.
    Inventor: Mehmet Ersin Yumer
  • Publication number: 20190286892
    Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
    Type: Application
    Filed: March 13, 2018
    Publication date: September 19, 2019
    Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
  • Publication number: 20190279414
    Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
    Type: Application
    Filed: March 8, 2018
    Publication date: September 12, 2019
    Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic
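The compositing described here, placing a superimposed image over a folded-object image and removing the portions hidden behind folds, can be sketched as a masked paste. The mask is supplied directly below; in the described system it would be derived from the detected folds:

```python
import numpy as np

def composite_on_folded(folded, superimposed, position, hidden_mask):
    """Place a superimposed image onto a folded-object image, removing the
    portions that hidden_mask marks as tucked behind a fold.

    folded:       (H, W) base image of the folded object
    superimposed: (h, w) image to place on top
    position:     (row, col) of the superimposed image's top-left corner
    hidden_mask:  (h, w) bool array; True where a fold obscures the overlay
    """
    r, c = position
    h, w = superimposed.shape[:2]
    out = folded.astype(float).copy()
    visible = ~hidden_mask
    region = out[r:r + h, c:c + w]           # view into the output image
    region[visible] = superimposed[visible]  # paste only the visible pixels
    return out
```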
  • Patent number: 10410400
    Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
    Type: Grant
    Filed: March 8, 2018
    Date of Patent: September 10, 2019
    Assignee: Adobe Inc.
    Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic