Patents by Inventor Mehmet Ersin Yumer
Mehmet Ersin Yumer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11858536
Abstract: Example aspects of the present disclosure describe determining, using a machine-learned model framework, a motion trajectory for an autonomous platform. The motion trajectory can be determined based at least in part on a plurality of costs, which in turn are based at least in part on a distribution of probabilities conditioned on the motion trajectory.
Type: Grant
Filed: November 1, 2021
Date of Patent: January 2, 2024
Assignee: UATC, LLC
Inventors: Jerry Junkai Liu, Wenyuan Zeng, Raquel Urtasun, Mehmet Ersin Yumer
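The abstract above describes scoring candidate motion trajectories by costs derived from a probability distribution conditioned on each trajectory. A minimal sketch of that idea, assuming a toy grid of predicted occupancy probabilities standing in for the patent's learned distribution (the grid, cell size, and candidate trajectories are illustrative, not from the patent):

```python
import numpy as np

def expected_cost(trajectory, occupancy_prob, cell_size=1.0):
    """Sum the predicted occupancy probability over the grid cells a
    trajectory visits -- a stand-in for the plurality of costs."""
    cost = 0.0
    for x, y in trajectory:
        i, j = int(x // cell_size), int(y // cell_size)
        cost += occupancy_prob[i, j]
    return cost

def select_trajectory(candidates, occupancy_prob):
    """Pick the candidate motion trajectory with the lowest expected cost."""
    costs = [expected_cost(t, occupancy_prob) for t in candidates]
    return int(np.argmin(costs)), costs

# Toy 4x4 occupancy map: cell (1, 1) is likely occupied.
occ = np.zeros((4, 4))
occ[1, 1] = 0.9

straight = [(0.5, 0.5), (1.5, 1.5), (2.5, 2.5)]   # passes through cell (1, 1)
swerve = [(0.5, 0.5), (1.5, 0.5), (2.5, 1.5)]     # avoids it

best, costs = select_trajectory([straight, swerve], occ)   # best -> 1 (swerve)
```

The real system conditions a learned probabilistic model on each trajectory; the fixed grid here only illustrates the cost-weighting-and-argmin selection step.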
-
Publication number: 20230365143
Abstract: A system comprises an autonomous vehicle and a control device. The control device detects an event trigger that impacts the autonomous vehicle. In response to detecting the event trigger, the control device enters the autonomous vehicle into a first degraded autonomy mode. In the first degraded autonomy mode, the control device communicates sensor data to an oversight server. The control device receives one or more high-level commands from the oversight server; the high-level commands indicate minimal risk maneuvers for the autonomous vehicle. The control device also receives a maximum traveling speed for the autonomous vehicle from the oversight server. The control device navigates the autonomous vehicle using an adaptive cruise control according to the one or more high-level commands and the maximum traveling speed.
Type: Application
Filed: March 29, 2023
Publication date: November 16, 2023
Inventors: Mehmet Ersin Yumer, Xiaodi Hou
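The key constraint in the abstract above is that adaptive cruise control in the degraded mode must never exceed the maximum traveling speed supplied by the oversight server. A hedged sketch of that clamping logic (the gap-feedback controller and its gain are illustrative assumptions, not the patented control law):

```python
def acc_target_speed(current_gap_m, desired_gap_m, lead_speed_mps,
                     max_speed_mps, gain=0.5):
    """Adaptive-cruise-control style target speed: track the lead vehicle,
    correct toward the desired following gap, and never exceed the maximum
    traveling speed received from the oversight server."""
    correction = gain * (current_gap_m - desired_gap_m)
    target = lead_speed_mps + correction
    return max(0.0, min(target, max_speed_mps))

# Gap too small: slow down below the lead vehicle's speed.
slow = acc_target_speed(current_gap_m=20, desired_gap_m=30,
                        lead_speed_mps=20, max_speed_mps=25)   # -> 15.0
# Gap large: speed up, but stay capped at the oversight server's limit.
capped = acc_target_speed(current_gap_m=60, desired_gap_m=30,
                          lead_speed_mps=20, max_speed_mps=25)  # -> 25
```

The outer `min` is the piece the abstract mandates: whatever the gap controller requests, the server-imposed maximum wins.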
-
Patent number: 11443412
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 8, 2019
Date of Patent: September 13, 2022
Assignee: ADOBE INC.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 11115645
Abstract: Embodiments are directed towards providing a target view, from a target viewpoint, of a 3D object. A source image, from a source viewpoint and including a common portion of the object, is encoded in 2D data. An intermediate image that includes an intermediate view of the object is generated based on the data. The intermediate view is from the target viewpoint and includes the common portion of the object and a disoccluded portion of the object not visible in the source image. The intermediate image includes a common region and a disoccluded region corresponding to the disoccluded portion of the object. The disoccluded region is updated to include a visual representation of a prediction of the disoccluded portion of the object. The prediction is based on a trained image completion model. The target view is based on the common region and the updated disoccluded region of the intermediate image.
Type: Grant
Filed: December 21, 2018
Date of Patent: September 7, 2021
Assignee: ADOBE INC.
Inventors: Jimei Yang, Duygu Ceylan Aksit, Mehmet Ersin Yumer, Eunbyung Park
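The final compositing step in the abstract above — keep the common region, fill the disoccluded region with the completion model's prediction — can be sketched as a masked combination. The toy images and the constant "prediction" are illustrative; the patent's prediction comes from a trained image completion model:

```python
import numpy as np

def compose_target_view(intermediate, disoccluded_mask, inpainted):
    """Keep the common region from the intermediate image and fill the
    disoccluded region with the completion model's prediction."""
    out = intermediate.copy()
    out[disoccluded_mask] = inpainted[disoccluded_mask]
    return out

# 4x4 grayscale toy: the right half is disoccluded (not visible from the
# source viewpoint), so the intermediate image holds zeros there.
intermediate = np.array([[10] * 2 + [0] * 2] * 4, dtype=float)
mask = np.array([[False] * 2 + [True] * 2] * 4)
prediction = np.full((4, 4), 7.0)   # stand-in for the image-completion model

target = compose_target_view(intermediate, mask, prediction)
```

The interesting work in the patent is producing `prediction`; this sketch only shows how the two regions are merged into the target view.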
-
Patent number: 11106902
Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
Type: Grant
Filed: March 13, 2018
Date of Patent: August 31, 2021
Assignee: ADOBE INC.
Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
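The abstract above pairs human joint locations with object contact points extracted from two heat maps. A minimal numpy sketch, assuming each heat map's peak is the detected location and using simple proximity pairing (the distance threshold is an illustrative substitute for the trained interaction-detection subnet):

```python
import numpy as np

def peak(heatmap):
    """2-D argmax of a heat map -> (row, col) of its strongest response."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

def joint_object_contact_pairs(joint_maps, contact_maps, max_dist=2.0):
    """Pair each joint peak with each contact-point peak within max_dist,
    mimicking the joint-object contact pairs fed to the detection subnet."""
    pairs = []
    for j, jm in enumerate(joint_maps):
        jy, jx = peak(jm)
        for c, cm in enumerate(contact_maps):
            cy, cx = peak(cm)
            if np.hypot(jy - cy, jx - cx) <= max_dist:
                pairs.append((j, c))
    return pairs

# One joint (a hand) near one contact point (a cup handle); a second
# contact point is too far away to pair.
hand = np.zeros((8, 8)); hand[3, 3] = 1.0
handle = np.zeros((8, 8)); handle[4, 4] = 1.0
far = np.zeros((8, 8)); far[7, 0] = 1.0

pairs = joint_object_contact_pairs([hand], [handle, far])   # -> [(0, 0)]
```

In the patent the pairing decision is learned rather than thresholded; the sketch only shows the heat-map-to-pair data flow.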
-
Patent number: 11069099
Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
Type: Grant
Filed: April 22, 2020
Date of Patent: July 20, 2021
Assignee: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
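The three stages named above — vertex position discovery, path discovery, and final curve construction — can be sketched over a toy mesh. Snapping stroke points to vertices and routing with Dijkstra over mesh edges are illustrative stand-ins for the patented processes:

```python
import heapq
import numpy as np

def nearest_vertex(point, vertices):
    """Vertex-position discovery: snap a stroke point to the closest mesh vertex."""
    return int(np.argmin(np.linalg.norm(vertices - point, axis=1)))

def shortest_path(adj, start, goal):
    """Path discovery: Dijkstra over the mesh edge graph."""
    dist = {start: 0.0}; prev = {}; heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd; prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Square of 4 vertices; edges run along the sides only, so a curve from
# vertex 0 to vertex 2 must flow around the outside, not cut across.
verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
adj = {0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (3, 1.0)], 3: [(0, 1.0), (2, 1.0)]}

start = nearest_vertex(np.array([0.1, -0.1]), verts)   # snaps to vertex 0
goal = nearest_vertex(np.array([1.1, 1.1]), verts)     # snaps to vertex 2
curve = shortest_path(adj, start, goal)
```

Final curve construction in the patent smooths the discovered path into a clean curve; here the vertex sequence itself stands in for that result.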
-
Patent number: 11049296
Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical ordering representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
Type: Grant
Filed: May 31, 2019
Date of Patent: June 29, 2021
Assignee: Adobe Inc.
Inventor: Mehmet Ersin Yumer
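The two steps in the abstract above — fit a parameterized path to raw input, then decide whether it extends or modifies the existing path — can be sketched simply. A polynomial fit and an endpoint-proximity test are illustrative assumptions; the patent does not specify either:

```python
import numpy as np

def fit_parameterized_path(points, degree=2):
    """Fit a polynomial y(x) to raw (un-parameterized) input points --
    a simple stand-in for the parameterized mathematical representation."""
    xs, ys = zip(*points)
    return np.polynomial.Polynomial.fit(xs, ys, degree)

def extends_existing(existing_end, new_start, tol=0.5):
    """Decide the extend-vs-modify question: here, simply whether the new
    path begins near the existing path's endpoint."""
    return float(np.hypot(existing_end[0] - new_start[0],
                          existing_end[1] - new_start[1])) <= tol

stroke = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]   # roughly y = x^2
path = fit_parameterized_path(stroke)
is_extension = extends_existing(existing_end=(0.1, 0.0), new_start=(0.0, 0.0))
```

Once the decision is made, the fitted `path` would either be appended to the existing path (extend) or replace a portion of it (modify).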
-
Patent number: 10783716
Abstract: A digital medium environment is described to generate a three dimensional facial expression from a blend shape and a facial expression source. A semantic type is detected that defines a facial expression of the blend shape. Transfer intensities are assigned based on the detected semantic type to the blend shape and the facial expression source, respectively, for individual portions of the three dimensional facial expression, the transfer intensities specifying weights given to the blend shape and the facial expression source, respectively, for the individual portions of the three dimensional facial expression. The three dimensional facial expression is generated from the blend shape and the facial expression source based on the assigned transfer intensities.
Type: Grant
Filed: March 2, 2016
Date of Patent: September 22, 2020
Assignee: Adobe Inc.
Inventor: Mehmet Ersin Yumer
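The per-region weighted mixing described above can be sketched as a per-vertex blend, where each facial region gets its own transfer intensity. The two-region toy face and the weight values are illustrative; in the patent the weights follow from the detected semantic type:

```python
import numpy as np

def transfer_expression(blend_shape, source, region_weights, regions):
    """Blend per-vertex values: each facial region mixes the blend shape and
    the expression source by its assigned transfer intensity."""
    out = np.empty_like(blend_shape)
    for region, w in region_weights.items():
        idx = regions == region
        out[idx] = w * blend_shape[idx] + (1.0 - w) * source[idx]
    return out

# Two-region toy face: vertices 0-1 are "mouth", 2-3 are "brow".
regions = np.array(["mouth", "mouth", "brow", "brow"])
blend = np.array([1.0, 1.0, 1.0, 1.0])      # e.g. a smile blend shape
source = np.array([0.0, 0.0, 0.0, 0.0])     # neutral expression source
weights = {"mouth": 1.0, "brow": 0.25}      # semantic type favors the mouth

face = transfer_expression(blend, source, weights, regions)
```

Real blend shapes carry 3-D vertex offsets rather than scalars, but the per-region convex combination is the same.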
-
Publication number: 20200250865
Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
Type: Application
Filed: April 22, 2020
Publication date: August 6, 2020
Applicant: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
-
Patent number: 10657682
Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
Type: Grant
Filed: April 12, 2017
Date of Patent: May 19, 2020
Assignee: Adobe Inc.
Inventors: Vojtech Krs, Radomir Mech, Nathan Aaron Carr, Mehmet Ersin Yumer
-
Patent number: 10607329
Abstract: Methods and systems are provided for using a single image of an indoor scene to estimate illumination of an environment that includes the portion captured in the image. A neural network system may be trained to estimate illumination by generating recovery light masks indicating a probability of each pixel within the larger environment being a light source. Additionally, low-frequency RGB images may be generated that indicate low-frequency information for the environment. The neural network system may be trained using training input images that are extracted from known panoramic images. Once trained, the neural network system infers plausible illumination information from a single image to realistically illuminate images and objects being manipulated in graphics applications, such as with image compositing, modeling, and reconstruction.
Type: Grant
Filed: March 13, 2017
Date of Patent: March 31, 2020
Assignee: ADOBE INC.
Inventors: Kalyan K. Sunkavalli, Xiaohui Shen, Mehmet Ersin Yumer, Marc-André Gardner, Emiliano Gambaretto
-
Patent number: 10599924
Abstract: Disclosed systems and methods categorize text regions of an electronic document into document object types based on a combination of semantic information and appearance information from the electronic document. A page segmentation application executing on a computing device accesses textual feature representations that represent text portions in a vector space, where a set of pixels from the page is mapped to a textual feature representation. The page segmentation application generates a visual feature representation, which corresponds to an appearance of a document portion including the set of pixels, by applying a neural network to the page of the electronic document. The page segmentation application generates an output page segmentation of the electronic document by applying the neural network to the textual feature representation and the visual feature representation.
Type: Grant
Filed: July 21, 2017
Date of Patent: March 24, 2020
Assignee: Adobe Inc.
Inventors: Xiao Yang, Paul Asente, Mehmet Ersin Yumer
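The core move in the abstract above is classifying pixels from *concatenated* textual and visual feature representations. A minimal sketch with a linear scoring head standing in for the neural network (the features, weights, and class names are all illustrative):

```python
import numpy as np

def segment_pixels(textual_feats, visual_feats, weights, bias):
    """Score document-object classes from concatenated textual and visual
    feature representations (a linear head standing in for the network)."""
    feats = np.concatenate([textual_feats, visual_feats], axis=-1)
    scores = feats @ weights + bias
    return scores.argmax(axis=-1)

# Two pixels with similar appearance but different text embeddings:
# one reads like a "caption" (class 0), the other like "body" text (class 1).
textual = np.array([[1.0, 0.0], [0.0, 1.0]])
visual = np.array([[0.2, 0.1], [0.2, 0.1]])
W = np.array([[2.0, 0.0],    # text dim 0 -> class 0
              [0.0, 2.0],    # text dim 1 -> class 1
              [0.1, 0.1],    # visual dims contribute weakly to both
              [0.1, 0.1]])
labels = segment_pixels(textual, visual, W, bias=np.zeros(2))   # -> [0, 1]
```

The example shows why fusing both modalities matters: the visual features alone cannot separate these two pixels, but the textual features can.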
-
Publication number: 20200074600
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Application
Filed: November 8, 2019
Publication date: March 5, 2020
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Patent number: 10552730
Abstract: An intuitive object-generation experience is provided by employing an autoencoder neural network to reduce the dimensionality of a procedural model. A set of sample objects are generated using the procedural model. In embodiments, the sample objects may be selected according to visual features such that the sample objects are uniformly distributed in visual appearance. Both procedural model parameters and visual features from the sample objects are used to train an autoencoder neural network, which maps a small number of new parameters to the larger number of procedural model parameters of the original procedural model. A user interface may be provided that allows users to generate new objects by adjusting the new parameters of the trained autoencoder neural network, which outputs procedural model parameters. The output procedural model parameters may be provided to the procedural model to generate the new objects.
Type: Grant
Filed: June 30, 2015
Date of Patent: February 4, 2020
Assignee: ADOBE INC.
Inventors: Mehmet Ersin Yumer, Radomir Mech, Paul John Asente, Gavin Stuart Peter Miller
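The dimensionality-reduction idea above — a small number of new parameters decoded back into the full set of procedural-model parameters — can be illustrated with PCA as a linear stand-in for the autoencoder (the patent uses a neural network and also incorporates visual features; this sketch uses parameters only):

```python
import numpy as np

def fit_linear_decoder(samples, n_components=1):
    """PCA as a linear stand-in for the autoencoder: map a few new
    parameters back to the full set of procedural-model parameters."""
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    basis = vt[:n_components]              # rows: principal directions
    encode = lambda p: (p - mean) @ basis.T   # full params -> new params
    decode = lambda z: mean + z @ basis       # new params -> full params
    return encode, decode

# Toy procedural model with 3 correlated parameters (all scale together),
# so one "new parameter" captures the whole family.
samples = np.array([[1.0, 2.0, 3.0],
                    [2.0, 4.0, 6.0],
                    [3.0, 6.0, 9.0]])
encode, decode = fit_linear_decoder(samples)
z = encode(samples[0])        # the single slider value for sample 0
reconstructed = decode(z)     # full procedural parameters again
```

In the described UI, the user drags `z` and the decoder emits procedural parameters, which the procedural model turns into a new object.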
-
Patent number: 10521970
Abstract: Certain embodiments involve refining local parameterizations that apply two-dimensional ("2D") images to three-dimensional ("3D") models. For instance, a particular parameterization-initialization process is selected based on one or more features of a target mesh region. An initial local parameterization for a 2D image is generated from this parameterization-initialization process. A quality metric for the initial local parameterization is computed, and the local parameterization is modified to improve the quality metric. The 3D model is modified by applying image points from the 2D image to the target mesh region in accordance with the modified local parameterization.
Type: Grant
Filed: February 21, 2018
Date of Patent: December 31, 2019
Assignee: Adobe Inc.
Inventors: Emiliano Gambaretto, Vladimir Kim, Qingnan Zhou, Mehmet Ersin Yumer
-
Patent number: 10475169
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
Type: Grant
Filed: November 28, 2017
Date of Patent: November 12, 2019
Assignee: Adobe Inc.
Inventors: Kalyan Sunkavalli, Mehmet Ersin Yumer, Marc-Andre Gardner, Xiaohui Shen, Jonathan Eisenmann, Emiliano Gambaretto
-
Publication number: 20190287279
Abstract: A digital medium environment is described to dynamically modify or extend an existing path in a user interface. An un-parameterized input is received that is originated by user interaction with a user interface to specify a path to be drawn. A parameterized path is fit as a mathematical ordering representation of the path to be drawn as specified by the un-parameterized input. A determination is made as to whether the parameterized path is to extend or modify the existing path in the user interface. The existing path is modified or extended in the user interface using the parameterized path in response to determining that the parameterized path is to modify or extend the existing path.
Type: Application
Filed: May 31, 2019
Publication date: September 19, 2019
Applicant: Adobe Inc.
Inventor: Mehmet Ersin Yumer
-
Publication number: 20190286892
Abstract: Certain embodiments detect human-object interactions in image content. For example, human-object interaction metadata is applied to an input image, thereby identifying contact between a part of a depicted human and a part of a depicted object. Applying the human-object interaction metadata involves computing a joint-location heat map by applying a pose estimation subnet to the input image and a contact-point heat map by applying an object contact subnet to the input image. The human-object interaction metadata is generated by applying an interaction-detection subnet to the joint-location heat map and the contact-point heat map. The interaction-detection subnet is trained to identify an interaction based on joint-object contact pairs, where a joint-object contact pair includes a relationship between a human joint location and a contact point. An image search system or other computing system is provided with access to the input image having the human-object interaction metadata.
Type: Application
Filed: March 13, 2018
Publication date: September 19, 2019
Inventors: Zimo Li, Vladimir Kim, Mehmet Ersin Yumer
-
Publication number: 20190279414
Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
Type: Application
Filed: March 8, 2018
Publication date: September 12, 2019
Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic
-
Patent number: 10410400
Abstract: Systems and techniques provide a user interface within an application to enable users to designate a folded object image of a folded object, as well as a superimposed image of a superimposed object to be added to the folded object image. Within the user interface, the user may simply place the superimposed image over the folded object image to obtain the desired modified image. If the user places the superimposed image over one or more folds of the folded object image, portions of the superimposed image will be removed to create the illusion in the modified image that the removed portions are obscured by one or more folds.
Type: Grant
Filed: March 8, 2018
Date of Patent: September 10, 2019
Assignee: Adobe Inc.
Inventors: Duygu Ceylan Aksit, Yangtuanfeng Wang, Niloy Jyoti Mitra, Mehmet Ersin Yumer, Jovan Popovic
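The fold-occlusion effect described above — removing overlay pixels that land on a fold so they appear tucked behind it — can be sketched as alpha masking. Representing folds as vertical columns of the image is an illustrative simplification; the patent handles arbitrary fold geometry:

```python
import numpy as np

def apply_over_folds(folded, overlay, overlay_alpha, fold_columns, fold_width=1):
    """Composite an overlay onto a folded-object image, zeroing the overlay's
    alpha wherever it crosses a fold so those pixels appear obscured."""
    alpha = overlay_alpha.copy()
    for col in fold_columns:
        alpha[:, max(0, col - fold_width):col + fold_width + 1] = 0.0
    return alpha * overlay + (1.0 - alpha) * folded

# 1x7 toy strip: a fold runs down column 3.
folded = np.zeros((1, 7))          # the folded-object image (dark)
overlay = np.full((1, 7), 9.0)     # the superimposed image (bright)
alpha = np.ones((1, 7))            # overlay fully opaque before masking
out = apply_over_folds(folded, overlay, alpha, fold_columns=[3])
```

Columns around the fold show the folded image through, while the rest of the strip shows the overlay, producing the tucked-behind-the-fold illusion.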