Patents by Inventor Ahmet Cengiz Öztireli

Ahmet Cengiz Öztireli has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11615555
    Abstract: A method of generating a training data set for training an image matting machine learning model includes receiving a plurality of foreground images, generating a plurality of composited foreground images by compositing randomly selected foreground images from the plurality of foreground images, and generating a plurality of training images by compositing each composited foreground image with a randomly selected background image. The training data set includes the plurality of training images.
    Type: Grant
    Filed: April 9, 2021
    Date of Patent: March 28, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Tunc Ozan Aydin, Ahmet Cengiz Öztireli, Jingwei Tang, Yagiz Aksoy
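    Illustrative sketch: the abstract above describes building matting training data by compositing randomly chosen foregrounds with each other and then pasting the result onto random backgrounds. Below is a minimal NumPy sketch of that kind of pipeline, assuming float RGBA foregrounds and RGB backgrounds of equal size; the function names and the two-foreground "over" merge are illustrative assumptions, not the patented method.

        import numpy as np

        def composite_over(fg_rgba, dst_rgb):
            """Alpha-composite an RGBA foreground over an RGB destination ("over" operator)."""
            alpha = fg_rgba[..., 3:4]
            return fg_rgba[..., :3] * alpha + dst_rgb * (1.0 - alpha)

        def composite_foregrounds(fg_a, fg_b):
            """Merge two RGBA foregrounds into a single composited RGBA foreground layer."""
            a_a, a_b = fg_a[..., 3:4], fg_b[..., 3:4]
            out_alpha = a_a + a_b * (1.0 - a_a)
            out_rgb = fg_a[..., :3] * a_a + fg_b[..., :3] * a_b * (1.0 - a_a)
            out_rgb = np.where(out_alpha > 0, out_rgb / np.maximum(out_alpha, 1e-8), 0.0)
            return np.concatenate([out_rgb, out_alpha], axis=-1)

        def make_training_set(foregrounds, backgrounds, n_samples, rng=None):
            """Randomly pair foregrounds, composite them, then paste each onto a random background."""
            rng = rng or np.random.default_rng()
            training_images = []
            for _ in range(n_samples):
                i, j = rng.choice(len(foregrounds), size=2, replace=False)
                merged = composite_foregrounds(foregrounds[i], foregrounds[j])
                bg = backgrounds[rng.integers(len(backgrounds))]
                training_images.append(composite_over(merged, bg))
            return training_images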
  • Patent number: 11568212
    Abstract: In various embodiments, a relevance application quantifies how a trained neural network operates. In operation, the relevance application generates a set of input distributions based on a set of input points associated with the trained neural network. Each input distribution is characterized by a mean and a variance associated with a different neuron included in the trained neural network. The relevance application propagates the set of input distributions through a probabilistic neural network to generate at least a first output distribution. The probabilistic neural network is derived from at least a portion of the trained neural network. Based on the first output distribution, the relevance application computes a contribution of a first input point included in the set of input points to a difference between a first output point associated with a first output of the trained neural network and an estimated mean prediction associated with the first output.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: January 31, 2023
    Assignees: DISNEY ENTERPRISES, INC., ETH ZÜRICH (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Markus Gross, Marco Ancona
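    Illustrative sketch: the abstract above propagates per-neuron input distributions (means and variances) through a probabilistic version of a trained network to obtain an output distribution. The snippet below shows one standard way such moment propagation can be done for linear and ReLU layers under an independence assumption; the toy two-layer network and all parameters are made up for illustration and do not reproduce the patented relevance computation.

        import numpy as np
        from scipy.special import erf

        def gaussian_cdf(x):
            return 0.5 * (1.0 + erf(x / np.sqrt(2.0)))

        def gaussian_pdf(x):
            return np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)

        def propagate_linear(mu, var, W, b):
            """Mean and variance of y = W x + b, assuming independent inputs."""
            return W @ mu + b, (W ** 2) @ var

        def propagate_relu(mu, var):
            """First two moments of ReLU(X) for X ~ N(mu, var), computed per neuron."""
            sigma = np.sqrt(np.maximum(var, 1e-12))
            z = mu / sigma
            mean = mu * gaussian_cdf(z) + sigma * gaussian_pdf(z)
            second = (mu ** 2 + var) * gaussian_cdf(z) + mu * sigma * gaussian_pdf(z)
            return mean, np.maximum(second - mean ** 2, 0.0)

        # Toy two-layer network: input distributions in, output mean and variance out.
        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
        W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)
        mu, var = rng.normal(size=4), np.full(4, 0.5)          # one distribution per input neuron
        mu, var = propagate_relu(*propagate_linear(mu, var, W1, b1))
        mu, var = propagate_linear(mu, var, W2, b2)
        print("output mean:", mu, "output variance:", var)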
  • Patent number: 11074743
    Abstract: In various embodiments, a differentiable rendering application enables an inverse rendering application to infer attributes associated with a 3D scene. In operation, the differentiable rendering application renders an image based on a first set of points associated with the 3D scene. The differentiable rendering application then generates an artificial gradient that approximates a change in a value of a first pixel included in the image with respect to a change in an attribute of a first point included in the first set of points. Subsequently, the inverse rendering application performs optimization operation(s) on the first point based on the artificial gradient to generate a second set of points. Notably, an error associated with the second set of points is less than an error associated with the first set of points.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: July 27, 2021
    Assignee: Disney Enterprises, Inc.
    Inventors: Ahmet Cengiz Öztireli, Olga Sorkine-Hornung, Shihao Wu, Yifan Wang
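    Illustrative sketch: the key idea in the abstract above is that the renderer itself need not be differentiable; an artificial gradient that approximates how a pixel changes with a point attribute is enough to drive optimization. The 1D toy below uses a hard nearest-pixel "renderer" and borrows its gradient from a smooth Gaussian-splat surrogate; the surrogate, the learning rate, and the pixel grid are illustrative assumptions, not the patented formulation.

        import numpy as np

        PIXELS = 64
        GRID = np.linspace(0.0, 1.0, PIXELS)

        def rasterize_hard(x):
            """Non-differentiable forward renderer: the pixel nearest to point x is fully lit."""
            img = np.zeros(PIXELS)
            img[np.argmin(np.abs(GRID - x))] = 1.0
            return img

        def artificial_pixel_gradient(x, sigma=0.3):
            """Artificial gradient d(pixel i)/dx, taken from a smooth Gaussian-splat surrogate."""
            splat = np.exp(-0.5 * ((GRID - x) / sigma) ** 2)
            return splat * (GRID - x) / sigma ** 2

        target = rasterize_hard(0.8)                    # reference image to reproduce
        x = 0.2                                         # initial point attribute (its 1D position)
        for _ in range(500):
            residual = rasterize_hard(x) - target       # pixel-wise error of the hard render
            grad_x = np.sum(residual * artificial_pixel_gradient(x))  # chain rule via artificial gradient
            x -= 0.05 * grad_x
        print("recovered point position:", round(x, 3)) # approaches 0.8 (to within one pixel)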
  • Publication number: 20210225037
    Abstract: A method of generating a training data set for training an image matting machine learning model includes receiving a plurality of foreground images, generating a plurality of composited foreground images by compositing randomly selected foreground images from the plurality of foreground images, and generating a plurality of training images by compositing each composited foreground image with a randomly selected background image. The training data set includes the plurality of training images.
    Type: Application
    Filed: April 9, 2021
    Publication date: July 22, 2021
    Applicant: Disney Enterprises, Inc.
    Inventors: Tunc Ozan AYDIN, Ahmet Cengiz ÖZTIRELI, Jingwei TANG, Yagiz AKSOY
  • Patent number: 10984558
    Abstract: Techniques are disclosed for image matting. In particular, embodiments decompose the matting problem of estimating foreground opacity into the targeted subproblems of estimating a background using a first trained neural network, estimating a foreground using a second neural network and the estimated background as one of the inputs into the second neural network, and estimating an alpha matte using a third neural network and the estimated background and foreground as two of the inputs into the third neural network. Such a decomposition is in contrast to traditional sampling-based matting approaches that estimated foreground and background color pairs together directly for each pixel. By decomposing the matting problem into subproblems that are easier for a neural network to learn compared to traditional data-driven techniques for image matting, embodiments disclosed herein can produce better opacity estimates than such data-driven techniques as well as sampling-based and affinity-based matting approaches.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: April 20, 2021
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Tunc Ozan Aydin, Ahmet Cengiz Öztireli, Jingwei Tang, Yagiz Aksoy
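    Illustrative sketch: the abstract above splits matting into three chained estimators, each consuming the earlier estimates as extra inputs. The PyTorch snippet below only illustrates that dataflow with tiny stand-in networks; the trimap input, the channel counts, and the layer sizes are assumptions for the sketch and are not taken from the patent.

        import torch
        import torch.nn as nn

        def small_cnn(in_channels, out_channels):
            """Stand-in for a trained estimator; the real networks would be far larger."""
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, out_channels, 3, padding=1),
            )

        # One network per subproblem, each fed the earlier estimates as extra channels.
        bg_net = small_cnn(3 + 1, 3)              # image + trimap               -> background
        fg_net = small_cnn(3 + 1 + 3, 3)          # image + trimap + background  -> foreground
        alpha_net = small_cnn(3 + 1 + 3 + 3, 1)   # image + trimap + bg + fg     -> alpha matte

        def estimate_matte(image, trimap):
            """Decomposed matting: background first, then foreground, then alpha."""
            bg = bg_net(torch.cat([image, trimap], dim=1))
            fg = fg_net(torch.cat([image, trimap, bg], dim=1))
            alpha = torch.sigmoid(alpha_net(torch.cat([image, trimap, bg, fg], dim=1)))
            return fg, bg, alpha

        image = torch.rand(1, 3, 64, 64)    # dummy RGB image
        trimap = torch.rand(1, 1, 64, 64)   # dummy trimap channel
        fg, bg, alpha = estimate_matte(image, trimap)
        print(alpha.shape)                  # torch.Size([1, 1, 64, 64])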
  • Patent number: 10970849
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 6, 2021
    Assignees: Disney Enterprises, Inc., ETH Zürich (EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH)
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross
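    Illustrative sketch: the abstract above determines a 3D pose from 2D landmarks and inferred joint angles subject to a hierarchical skeleton model. The snippet below shows the forward-kinematics half of that relationship, i.e. how per-joint angles plus a skeleton hierarchy fix 3D joint positions; the single-axis rotations and the four-joint chain are simplifying assumptions, not the patented tracking module.

        import numpy as np

        def rot_z(theta):
            """Rotation about the z-axis; one angle per joint keeps the sketch short."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        def forward_kinematics(parents, offsets, angles):
            """3D joint positions from a hierarchical skeleton and per-joint angles."""
            n = len(parents)
            positions = np.zeros((n, 3))
            rotations = [np.eye(3)] * n
            for j in range(n):                          # parents are listed before children
                local = rot_z(angles[j])
                if parents[j] < 0:                      # root joint
                    rotations[j] = local
                    positions[j] = offsets[j]
                else:
                    rotations[j] = rotations[parents[j]] @ local
                    positions[j] = positions[parents[j]] + rotations[parents[j]] @ offsets[j]
            return positions

        # Toy four-joint chain: root -> spine -> upper arm -> lower arm.
        parents = [-1, 0, 1, 2]
        offsets = np.array([[0, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=float)
        angles = np.radians([0, 10, 30, 45])            # one joint angle per joint
        print(forward_kinematics(parents, offsets, angles))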
  • Publication number: 20210065434
    Abstract: In various embodiments, a differentiable rendering application enables an inverse rendering application to infer attributes associated with a 3D scene. In operation, the differentiable rendering application renders an image based on a first set of points associated with the 3D scene. The differentiable rendering application then generates an artificial gradient that approximates a change in a value of a first pixel included in the image with respect to a change in an attribute of a first point included in the first set of points. Subsequently, the inverse rendering application performs optimization operation(s) on the first point based on the artificial gradient to generate a second set of points. Notably, an error associated with the second set of points is less than an error associated with the first set of points.
    Type: Application
    Filed: September 27, 2019
    Publication date: March 4, 2021
    Inventors: Ahmet Cengiz ÖZTIRELI, Olga SORKINE-HORNUNG, Shihao WU, Yifan WANG
  • Patent number: 10916046
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: February 9, 2021
    Assignees: Disney Enterprises, Inc., ETH Zurich (Eidgenoessische Technische Hochschule Zurich)
    Inventors: Martin Guay, Dominik Tobias Borer, Ahmet Cengiz Öztireli, Robert W. Sumner, Jakob Joachim Buhmann
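    Illustrative sketch: the last sentence of the abstract above mentions inferring 3D poses from estimated joint locations via inverse kinematics. The snippet below shows the classic analytic inverse-kinematics solution for a single two-bone limb in a plane, purely as an illustration of that step; the planar restriction, link lengths, and joint names are assumptions of the sketch, not the patented system.

        import numpy as np

        def two_link_ik(target, l1, l2):
            """Planar two-bone IK via the law of cosines.

            Assumes the target is reachable, i.e. |l1 - l2| <= |target| <= l1 + l2.
            """
            x, y = target
            d2 = x * x + y * y
            cos_elbow = np.clip((d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2), -1.0, 1.0)
            elbow = np.arccos(cos_elbow)                # inner ("elbow") joint angle
            shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
            return shoulder, elbow

        def forward(shoulder, elbow, l1, l2):
            """Forward check: end-effector position for the recovered angles."""
            x = l1 * np.cos(shoulder) + l2 * np.cos(shoulder + elbow)
            y = l1 * np.sin(shoulder) + l2 * np.sin(shoulder + elbow)
            return np.array([x, y])

        target = np.array([1.2, 0.7])                   # e.g. a detected paw location, shoulder-relative
        shoulder, elbow = two_link_ik(target, l1=1.0, l2=1.0)
        print(forward(shoulder, elbow, 1.0, 1.0))       # ~[1.2, 0.7]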
  • Publication number: 20200357142
    Abstract: Techniques are disclosed for image matting. In particular, embodiments decompose the matting problem of estimating foreground opacity into the targeted subproblems of estimating a background using a first trained neural network, estimating a foreground using a second neural network and the estimated background as one of the inputs into the second neural network, and estimating an alpha matte using a third neural network and the estimated background and foreground as two of the inputs into the third neural network. Such a decomposition is in contrast to traditional sampling-based matting approaches that estimated foreground and background color pairs together directly for each pixel. By decomposing the matting problem into subproblems that are easier for a neural network to learn compared to traditional data-driven techniques for image matting, embodiments disclosed herein can produce better opacity estimates than such data-driven techniques as well as sampling-based and affinity-based matting approaches.
    Type: Application
    Filed: May 9, 2019
    Publication date: November 12, 2020
    Inventors: Tunc Ozan AYDIN, Ahmet Cengiz ÖZTIRELI, Jingwei TANG, Yagiz AKSOY
  • Publication number: 20200334828
    Abstract: According to one implementation, a pose estimation and body tracking system includes a computing platform having a hardware processor and a system memory storing a software code including a tracking module trained to track motions. The software code receives a series of images of motion by a subject, and for each image, uses the tracking module to determine locations corresponding respectively to two-dimensional (2D) skeletal landmarks of the subject based on constraints imposed by features of a hierarchical skeleton model intersecting at each 2D skeletal landmark. The software code further uses the tracking module to infer joint angles of the subject based on the locations and determine a three-dimensional (3D) pose of the subject based on the locations and the joint angles, resulting in a series of 3D poses. The software code outputs a tracking image corresponding to the motion by the subject based on the series of 3D poses.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Ahmet Cengiz Öztireli, Prashanth Chandran, Markus Gross
  • Publication number: 20200279428
    Abstract: Techniques are disclosed for estimating poses from images. In one embodiment, a machine learning model, referred to herein as the “detector,” is trained to estimate animal poses from images in a bottom-up fashion. In particular, the detector may be trained using rendered images depicting animal body parts scattered over realistic backgrounds, as opposed to renderings of full animal bodies. In order to make appearances of the rendered body parts more realistic so that the detector can be trained to estimate poses from images of real animals, the body parts may be rendered using textures that are determined from a translation of rendered images of the animal into corresponding images with more realistic textures via adversarial learning. Three-dimensional poses may also be inferred from estimated joint locations using, e.g., inverse kinematics.
    Type: Application
    Filed: February 28, 2019
    Publication date: September 3, 2020
    Inventors: Martin GUAY, Dominik Tobias BORER, Ahmet Cengiz ÖZTIRELI, Robert W. SUMNER, Jakob Joachim BUHMANN
  • Patent number: 10325346
    Abstract: An image processor inputs a first image and outputs a downscaled second image by upscaling the second image to a third image, wherein the third image is substantially the same size as the first image and has a third resolution, associating pixels in the second image with a corresponding group of pixels from the third set of pixels, sampling a first image area at a first location of the first set of pixels to generate a first image sample, sampling a second image area of the third set of pixels to generate a second image sample, measuring similarity between the image areas, generating a perceptual image value, recursively adjusting values of the third set of pixels until the perceptual image value matches a perceptual standard value, and adjusting pixel values in the second image to a representative pixel value of each corresponding group of pixels.
    Type: Grant
    Filed: July 25, 2016
    Date of Patent: June 18, 2019
    Assignee: ETH-Zurich
    Inventors: Ahmet Cengiz Öztireli, Markus Gross
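    Illustrative sketch: one way to read the abstract above is as an optimization that adjusts an upscaled copy of the downscaled image until local image statistics match those of the original, and then reads the downscaled pixels back off that adjusted copy. The NumPy sketch below follows that reading with several stand-in choices: box downscaling, bilinear upscaling, per-block mean and standard-deviation matching as the perceptual measure, a single closed-form adjustment in place of the recursive one, and the block centre as the representative pixel value. None of these choices are taken from the patent itself.

        import numpy as np
        from scipy.ndimage import zoom

        def blocks(img, s):
            """View an image as a grid of s-by-s pixel groups, shape (H/s, s, W/s, s)."""
            h, w = img.shape
            return img.reshape(h // s, s, w // s, s)

        def perceptual_downscale(img, s=4, eps=1e-6):
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            first = img[:h, :w].astype(float)            # first image, cropped to a multiple of s
            second = blocks(first, s).mean(axis=(1, 3))  # second image: plain box downscale
            third = zoom(second, s, order=1)             # third image: upscaled back to first's size

            # Adjust the third image so every pixel group reproduces the mean and standard
            # deviation of the matching area of the first image (the perceptual criterion here).
            tb, fb = blocks(third, s), blocks(first, s)
            t_mean, t_std = tb.mean(axis=(1, 3), keepdims=True), tb.std(axis=(1, 3), keepdims=True)
            f_mean, f_std = fb.mean(axis=(1, 3), keepdims=True), fb.std(axis=(1, 3), keepdims=True)
            adjusted = (tb - t_mean) / (t_std + eps) * f_std + f_mean

            # Set each second-image pixel to a representative value of its adjusted pixel group.
            return adjusted[:, s // 2, :, s // 2]

        img = np.random.rand(64, 64)                     # dummy grayscale input
        print(perceptual_downscale(img).shape)           # (16, 16)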
  • Publication number: 20170024852
    Abstract: An image processor inputs a first image and outputs a downscaled second image by upscaling the second image to a third image, wherein the third image is substantially the same size as the first image and has a third resolution, associating pixels in the second image with a corresponding group of pixels from the third set of pixels, sampling a first image area at a first location of the first set of pixels to generate a first image sample, sampling a second image area of the third set of pixels to generate a second image sample, measuring similarity between the image areas, generating a perceptual image value, recursively adjusting values of the third set of pixels until the perceptual image value matches a perceptual standard value, and adjusting pixel values in the second image to a representative pixel value of each corresponding group of pixels.
    Type: Application
    Filed: July 25, 2016
    Publication date: January 26, 2017
    Inventors: Ahmet Cengiz Öztireli, Markus Gross