Patents by Inventor Elya Schechtman

Elya Schechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240087265
    Abstract: Various disclosed embodiments are directed to changing parameters of an input image or multidimensional representation of the input image based on a user request to change such parameters. An input image is first received. A multidimensional image that represents the input image in multiple dimensions is generated via a model. A request to change at least a first parameter to a second parameter is received via user input at a user device. Such a request asks to edit or generate the multidimensional image in some way. For instance, the request may be to change the light source position or camera position from a first set of coordinates to a second set of coordinates.
    Type: Application
    Filed: September 9, 2022
    Publication date: March 14, 2024
    Inventors: Taesung Park, Richard Zhang, Elya Schechtman
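The abstract above describes receiving a request to change a first parameter (such as a light source or camera position) to a second parameter in a multidimensional representation. A minimal, hypothetical sketch of that edit flow is below; the names `MultidimensionalImage` and `apply_edit` are illustrative, not from the patent, and the real representation is produced by a learned model rather than a plain dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class MultidimensionalImage:
    """Stand-in for the model-generated multidimensional representation."""
    parameters: dict = field(default_factory=dict)

def apply_edit(image: MultidimensionalImage, name: str, new_value) -> MultidimensionalImage:
    """Replace a first parameter value with a second, per the user request."""
    if name not in image.parameters:
        raise KeyError(f"unknown parameter: {name}")
    edited = MultidimensionalImage(parameters=dict(image.parameters))
    edited.parameters[name] = new_value
    return edited

# A request to move the light source from one set of coordinates to another.
scene = MultidimensionalImage(parameters={"light_position": (0.0, 1.0, 2.0),
                                          "camera_position": (0.0, 0.0, 5.0)})
edited = apply_edit(scene, "light_position", (3.0, 1.0, 2.0))
```

The original representation is left untouched so the edit can be previewed or undone.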
  • Publication number: 20230360299
    Abstract: Face anonymization techniques are described that overcome conventional challenges to generate an anonymized face. In one example, a digital object editing system is configured to generate an anonymized face based on a target face and a reference face. As part of this, the digital object editing system employs an encoder as part of machine learning to extract a target encoding of the target face image and a reference encoding of the reference face. The digital object editing system then generates a mixed encoding from the target and reference encodings. The mixed encoding is employed by a machine-learning model of the digital object editing system to generate a mixed face. An object replacement module is used by the digital object editing system to replace the target face in the target digital image with the mixed face.
    Type: Application
    Filed: July 21, 2023
    Publication date: November 9, 2023
    Applicant: Adobe Inc.
    Inventors: Yang Yang, Zhixin Shu, Shabnam Ghadar, Jingwan Lu, Jakub Fiser, Elya Schechtman, Cameron Y. Smith, Baldo Antonio Faieta, Alex Charles Filipkowski
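The face-anonymization abstract above describes an encode-mix-generate pipeline: an encoder extracts a target encoding and a reference encoding, the two are mixed, and a model generates a mixed face from the result. The sketch below illustrates only the mixing step under stated assumptions; `encode` and `mix_encodings` are hypothetical stand-ins for the learned encoder and mixer, not the patented implementation.

```python
def encode(face_pixels):
    # Stand-in "encoder": flatten an image (list of pixel rows) into a
    # latent-style vector. The real system uses a machine-learning encoder.
    return [float(px) for row in face_pixels for px in row]

def mix_encodings(target_enc, reference_enc, alpha=0.5):
    # Blend the target and reference codes element-wise; alpha controls
    # how much of the reference identity replaces the target identity.
    return [(1 - alpha) * t + alpha * r for t, r in zip(target_enc, reference_enc)]

target_code = encode([[0, 0], [0, 0]])
reference_code = encode([[2, 4], [6, 8]])
mixed_code = mix_encodings(target_code, reference_code, alpha=0.5)
```

In the described system, the mixed encoding would then be decoded into a mixed face that replaces the target face in the target digital image.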
  • Patent number: 11748928
    Abstract: Face anonymization techniques are described that overcome conventional challenges to generate an anonymized face. In one example, a digital object editing system is configured to generate an anonymized face based on a target face and a reference face. As part of this, the digital object editing system employs an encoder as part of machine learning to extract a target encoding of the target face image and a reference encoding of the reference face. The digital object editing system then generates a mixed encoding from the target and reference encodings. The mixed encoding is employed by a machine-learning model of the digital object editing system to generate a mixed face. An object replacement module is used by the digital object editing system to replace the target face in the target digital image with the mixed face.
    Type: Grant
    Filed: November 10, 2020
    Date of Patent: September 5, 2023
    Assignee: Adobe Inc.
    Inventors: Yang Yang, Zhixin Shu, Shabnam Ghadar, Jingwan Lu, Jakub Fiser, Elya Schechtman, Cameron Y. Smith, Baldo Antonio Faieta, Alex Charles Filipkowski
  • Publication number: 20220156893
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
    Type: Application
    Filed: November 13, 2020
    Publication date: May 19, 2022
    Inventors: Yuqian Zhou, Elya Schechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
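The inpainting abstract above describes merging (or selecting between) portions of a warped image and portions of an inpainting candidate. A toy per-pixel version of that merge is sketched below, assuming a boolean validity mask decides which source to trust; in the patent the selection is made by a learning model, not a fixed mask.

```python
def merge_inpainting(warped, candidate, valid_mask):
    # Where the warped pixel is marked valid (e.g. the warp was reliable),
    # keep it; otherwise fall back to the inpainting candidate's pixel.
    return [w if v else c for w, c, v in zip(warped, candidate, valid_mask)]

merged = merge_inpainting(warped=[10, 20, 30],
                          candidate=[11, 99, 33],
                          valid_mask=[True, False, True])
```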
  • Publication number: 20220148243
    Abstract: Face anonymization techniques are described that overcome conventional challenges to generate an anonymized face. In one example, a digital object editing system is configured to generate an anonymized face based on a target face and a reference face. As part of this, the digital object editing system employs an encoder as part of machine learning to extract a target encoding of the target face image and a reference encoding of the reference face. The digital object editing system then generates a mixed encoding from the target and reference encodings. The mixed encoding is employed by a machine-learning model of the digital object editing system to generate a mixed face. An object replacement module is used by the digital object editing system to replace the target face in the target digital image with the mixed face.
    Type: Application
    Filed: November 10, 2020
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Yang Yang, Zhixin Shu, Shabnam Ghadar, Jingwan Lu, Jakub Fiser, Elya Schechtman, Cameron Y. Smith, Baldo Antonio Faieta, Alex Charles Filipkowski
  • Publication number: 20220076374
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Schechtman
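The abstract above describes weighting each target-style parameter by its rate of change and applying the weighted parameters to the source domain. A hypothetical numeric sketch of that weighting step follows; `weighted_style_parameters` and the normalization choice are illustrative assumptions, since the abstract does not specify how the rates are turned into weights.

```python
def weighted_style_parameters(params_before, params_after):
    # Rate of change of each parameter while adapting toward the target style.
    rates = [abs(a - b) for a, b in zip(params_after, params_before)]
    total = sum(rates) or 1.0
    # Normalize the rates into weights (an assumed scheme, for illustration).
    weights = [r / total for r in rates]
    # Emphasize parameters that moved most during adaptation.
    return [w * a for w, a in zip(weights, params_after)]

adapted = weighted_style_parameters([0.0, 1.0, 2.0], [2.0, 1.0, 3.0])
```

A parameter that did not change (rate 0) contributes nothing, so the adaptation concentrates on the parameters that carry the target style.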
  • Patent number: 10019817
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by a computing device to be applied to the set of pixels of the target shape mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 10, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
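The texture-painting abstract above maps edges of the target shape mask to edges of the source shape mask and aligns the target direction field with the source direction field. One simplified way to realize such a correspondence is nearest-neighbor matching on per-pixel features; the sketch below assumes each pixel is summarized by an (edge distance, direction angle) pair, and `synthesize_pixel` is an illustrative name, not the patented synthesis algorithm.

```python
import math

def synthesize_pixel(target_feature, source_features, source_colors):
    # Pick the source pixel whose (edge distance, direction) feature is
    # closest to the target pixel's: edges of the target mask then draw
    # from edges of the source mask, and the painted stroke follows the
    # source direction field.
    best = min(range(len(source_features)),
               key=lambda i: math.dist(target_feature, source_features[i]))
    return source_colors[best]

# Each feature is (distance to nearest mask edge, local direction angle).
src_feats = [(0.0, 0.0), (3.0, 1.57)]
src_colors = ["edge_texel", "interior_texel"]
picked = synthesize_pixel((0.2, 0.1), src_feats, src_colors)
```

A target pixel near an edge thus receives texture taken from near a source edge, which is the edge-aware behavior the abstract describes.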
  • Publication number: 20170109900
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by a computing device to be applied to the set of pixels of the target shape mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 20, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
  • Patent number: 9536327
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by a computing device to be applied to the set of pixels of the target shape mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: January 3, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
  • Publication number: 20160350942
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by a computing device to be applied to the set of pixels of the target shape mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: May 28, 2015
    Publication date: December 1, 2016
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman