Patents by Inventor Elya Shechtman

Elya Shechtman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190295227
    Abstract: Techniques for using deep learning to facilitate patch-based image inpainting are described. In an example, a computer system hosts a neural network trained to generate, from an image, code vectors including features learned by the neural network and descriptive of patches. The image is received and contains a region of interest (e.g., a hole missing content). The computer system inputs it to the network and, in response, receives the code vectors. Each code vector is associated with a pixel in the image. Rather than comparing RGB values between patches, the computer system compares the code vector of a pixel inside the region to code vectors of pixels outside the region to find the best match based on a feature similarity measure (e.g., a cosine similarity). The pixel value of the pixel inside the region is set based on the pixel value of the matched pixel outside this region.
    Type: Application
    Filed: March 26, 2018
    Publication date: September 26, 2019
    Inventors: Oliver Wang, Michal Lukac, Elya Shechtman, Mahyar Najibikohnehshahri
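The matching step in the abstract above can be sketched as follows. This is an illustrative approximation, not the patented implementation: the per-pixel code vectors are assumed to already be produced by the trained network as a NumPy array, and a brute-force cosine-similarity search stands in for whatever search strategy the application actually claims.

```python
import numpy as np

def fill_hole_by_feature_match(image, codes, hole_mask):
    """Patch-based fill sketch: for each hole pixel, copy the value of the
    non-hole pixel whose code vector is most similar (cosine similarity).

    image:     (H, W, 3) float array of pixel values
    codes:     (H, W, D) per-pixel code vectors from a trained network
    hole_mask: (H, W) bool array, True inside the region to fill
    """
    H, W, D = codes.shape
    flat_codes = codes.reshape(-1, D)
    # Normalize so a dot product equals cosine similarity.
    norms = np.linalg.norm(flat_codes, axis=1, keepdims=True)
    unit = flat_codes / np.maximum(norms, 1e-8)

    hole_idx = np.flatnonzero(hole_mask.ravel())
    src_idx = np.flatnonzero(~hole_mask.ravel())

    result = image.reshape(-1, 3).copy()
    # Cosine similarity of every hole pixel against every source pixel.
    sims = unit[hole_idx] @ unit[src_idx].T      # (n_hole, n_src)
    best = src_idx[np.argmax(sims, axis=1)]      # best match per hole pixel
    result[hole_idx] = result[best]
    return result.reshape(H, W, 3)
```

In practice the comparison would be restricted to patch neighborhoods and accelerated; the exhaustive pairwise search here is only for clarity.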
  • Publication number: 20190287224
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for automatically synthesizing a content-aware fill using similarity transformed patches. A user interface receives a user-specified hole and a user-specified sampling region, both of which may be stored in a constraint mask. A brush tool can be used to interactively brush the sampling region and modify the constraint mask. The mask is passed to a patch-based synthesizer configured to synthesize the fill using similarity transformed patches sampled from the sampling region. Fill properties such as similarity transform parameters can be set to control the manner in which the fill is synthesized. A live preview can be provided with gradual updates of the synthesized fill prior to completion. Once a fill has been synthesized, the user interface presents the original image, replacing the hole with the synthesized fill.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 19, 2019
    Inventors: Sohrab Amirghodsi, Sarah Jane Stuckey, Elya Shechtman
  • Publication number: 20190287225
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for improved patch validity testing for patch-based synthesis applications using similarity transforms. The improved patch validity tests are used to validate (or invalidate) candidate patches as valid patches falling within a sampling region of a source image. The improved patch validity tests include a hole dilation test for patch validity, a no-dilation test for patch invalidity, and a comprehensive pixel test for patch invalidity. A fringe test for range invalidity can be used to identify pixels with an invalid range and invalidate corresponding candidate patches. The fringe test for range invalidity can be performed as a precursor to any or all of the improved patch validity tests. In this manner, validated candidate patches are used to automatically reconstruct a target image.
    Type: Application
    Filed: March 14, 2018
    Publication date: September 19, 2019
    Inventors: Sohrab Amirghodsi, Kevin Wampler, Elya Shechtman, Aliakbar Darabi
  • Patent number: 10417833
    Abstract: Embodiments disclosed herein provide systems, methods, and computer storage media for automatically aligning a 3D camera with a 2D background image. An automated image analysis can be performed on the 2D background image, and a classifier can predict whether the automated image analysis is accurate within a selected confidence level. As such, a feature can be enabled that allows a user to automatically align the 3D camera with the 2D background image. For example, where the automated analysis detects a horizon and one or more vanishing points from the background image, the 3D camera can be automatically transformed to align with the detected horizon and to point at a detected horizon-located vanishing point. In some embodiments, 3D objects in a 3D scene can be pivoted and the 3D camera dollied forward or backwards to reduce changes to the framing of the 3D composition resulting from the 3D camera transformation.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: September 17, 2019
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Geoffrey Alan Oxholm, Elya Shechtman, Bryan Russell
  • Patent number: 10402948
    Abstract: Embodiments described herein are directed to methods and systems for facilitating control of smoothness of transitions between images. In embodiments, differences in color values of pixels between a foreground image and a background image are identified along a boundary associated with a location at which to paste the foreground image relative to the background image. Thereafter, recursive down sampling of a region of pixels within the boundary by a sampling factor is performed to produce a plurality of down sampled images having color difference indicators associated with each pixel of the down sampled images. Such color difference indicators indicate whether a difference of color value exists for the corresponding pixel. To effectuate a seamless transition, the color difference indicators are normalized in association with each recursively down sampled image.
    Type: Grant
    Filed: June 15, 2018
    Date of Patent: September 3, 2019
    Assignee: Adobe Inc.
    Inventors: Sylvain Paris, Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
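The recursive down sampling with normalized color difference indicators can be illustrated with a minimal sketch. The array shapes, the box-filter averaging, and the binary indicator below are assumptions for illustration; the patent's actual sampling and normalization scheme may differ.

```python
import numpy as np

def downsample_boundary_diffs(diff, indicator, levels, factor=2):
    """Recursively downsample a boundary color-difference field.

    diff:      (H, W, 3) color differences, nonzero only on the paste boundary
    indicator: (H, W) 1.0 where a difference is defined, 0.0 elsewhere
    Returns one (diff, indicator) pair per pyramid level. Each level's
    averaged differences are renormalized by the averaged indicator so
    that undefined pixels do not dilute the boundary values.
    """
    pyramid = [(diff, indicator)]
    for _ in range(levels):
        H, W = indicator.shape
        h, w = H // factor, W // factor
        # Box-average factor x factor blocks of differences and indicators.
        d = diff[: h * factor, : w * factor].reshape(h, factor, w, factor, 3).mean(axis=(1, 3))
        m = indicator[: h * factor, : w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))
        # Normalize: averaged difference / averaged indicator equals the
        # mean over only the defined pixels in each block.
        d = np.where(m[..., None] > 0, d / np.maximum(m[..., None], 1e-8), 0.0)
        indicator = (m > 0).astype(float)
        diff = d
        pyramid.append((diff, indicator))
    return pyramid
```

The key point the sketch shows is the normalization step: dividing the block-averaged differences by the block-averaged indicator keeps boundary values at full strength as the pyramid coarsens.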
  • Publication number: 20190251401
    Abstract: The present disclosure relates to an image composite system that employs a generative adversarial network to generate realistic composite images. For example, in one or more embodiments, the image composite system trains a geometric prediction neural network using an adversarial discrimination neural network to learn warp parameters that provide correct geometric alignment of foreground objects with respect to a background image. Once trained, the determined warp parameters provide realistic geometric corrections to foreground objects such that the warped foreground objects appear to blend into background images naturally when composited together.
    Type: Application
    Filed: February 15, 2018
    Publication date: August 15, 2019
    Inventors: Elya Shechtman, Oliver Wang, Mehmet Yumer, Chen-Hsuan Lin
  • Publication number: 20190236753
    Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content-aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image and the scaled digital image is inserted within the canvas, thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content-aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
    Type: Application
    Filed: April 9, 2019
    Publication date: August 1, 2019
    Applicant: Adobe Inc.
    Inventors: Xue Bai, Elya Shechtman, Sylvain Philippe Paris
  • Patent number: 10332291
    Abstract: An image is displayed using a computer system. The image includes contents that have a visible feature therein at a first location. A first input is received that includes a user movement of at least the visible feature from the first location. During the user movement, the first location is synthesized with content from where the visible feature is currently located. A second input is received that specifies an end of the user movement at a second location. A source area in the image is identified. The method further includes identifying additional contents within the source area. The additional contents are identified using a patch-based optimization algorithm on the image. The method further includes updating the image to have the additional contents at least in the first location.
    Type: Grant
    Filed: January 4, 2017
    Date of Patent: June 25, 2019
    Assignee: Adobe Inc.
    Inventors: Elya Shechtman, Dan Goldman
  • Publication number: 20190189158
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Application
    Filed: February 12, 2019
    Publication date: June 20, 2019
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
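The transition-point search described above can be sketched as follows, assuming grayscale frames and a crude gradient-magnitude edge detector; the actual edge detection and similarity measure used in the publication may differ.

```python
import numpy as np

def find_loop_points(frames, min_gap=10):
    """Pick (start, end) frame indices whose edge maps are most similar.

    frames: (N, H, W) grayscale frames of a low-resolution clip.
    Frames with similar edge structure make a loop transition at which
    objects are less likely to appear to teleport.
    """
    # Crude edge maps: gradient magnitude of each frame.
    gy = np.abs(np.diff(frames, axis=1))[:, :, :-1]
    gx = np.abs(np.diff(frames, axis=2))[:, :-1, :]
    edges = gx + gy                               # (N, H-1, W-1)

    n = len(frames)
    best, best_cost = (0, n - 1), np.inf
    # Compare every sufficiently separated frame pair.
    for s in range(n):
        for e in range(s + min_gap, n):
            cost = np.mean(np.abs(edges[s] - edges[e]))
            if cost < best_cost:
                best, best_cost = (s, e), cost
    return best
```

The `min_gap` parameter (a hypothetical knob, not from the publication) enforces a minimum loop length so the search does not degenerate to adjacent frames.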
  • Publication number: 20190139319
    Abstract: Embodiments disclosed herein provide systems, methods, and computer storage media for automatically aligning a 3D camera with a 2D background image. An automated image analysis can be performed on the 2D background image, and a classifier can predict whether the automated image analysis is accurate within a selected confidence level. As such, a feature can be enabled that allows a user to automatically align the 3D camera with the 2D background image. For example, where the automated analysis detects a horizon and one or more vanishing points from the background image, the 3D camera can be automatically transformed to align with the detected horizon and to point at a detected horizon-located vanishing point. In some embodiments, 3D objects in a 3D scene can be pivoted and the 3D camera dollied forward or backwards to reduce changes to the framing of the 3D composition resulting from the 3D camera transformation.
    Type: Application
    Filed: November 6, 2017
    Publication date: May 9, 2019
    Inventors: Jonathan Eisenmann, Geoffrey Alan Oxholm, Elya Shechtman, Bryan Russell
  • Patent number: 10282815
    Abstract: Environmental map generation techniques and systems are described. A digital image is scaled to achieve a target aspect ratio using a content-aware scaling technique. A canvas is generated that is dimensionally larger than the scaled digital image and the scaled digital image is inserted within the canvas, thereby resulting in an unfilled portion of the canvas. An initially filled canvas is then generated by filling the unfilled portion using a content-aware fill technique based on the inserted digital image. A plurality of polar coordinate canvases is formed by transforming original coordinates of the canvas into polar coordinates. The unfilled portions of the polar coordinate canvases are filled using a content-aware fill technique that is initialized based on the initially filled canvas. An environmental map of the digital image is generated by combining a plurality of original coordinate canvas portions formed from the polar coordinate canvases.
    Type: Grant
    Filed: December 13, 2016
    Date of Patent: May 7, 2019
    Assignee: Adobe Inc.
    Inventors: Xue Bai, Elya Shechtman, Sylvain Philippe Paris
  • Publication number: 20190050961
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at image synthesis utilizing sampling of patch correspondence information between iterations at different scales. A patch synthesis technique can be performed to synthesize a target region at a first image scale based on portions of a source region that are identified by the patch synthesis technique. The image can then be sampled to generate an image at a second image scale. The sampling can include generating patch correspondence information for the image at the second image scale. Invalid patch assignments in the patch correspondence information at the second image scale can then be identified, and valid patches can be assigned to the pixels having invalid patch assignments. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 14, 2019
    Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman
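The cross-scale resampling of patch correspondences, with reassignment of invalid patches, might look roughly like this. The nearest-neighbor-field representation, the factor-of-two coordinate scaling, and the random reassignment of invalid entries are illustrative assumptions, not the claimed method.

```python
import numpy as np

def upsample_nnf(nnf, new_shape, hole_mask):
    """Upsample a nearest-neighbor field (patch correspondences) to a
    finer scale, then repair invalid assignments.

    nnf:       (h, w, 2) integer source coordinates assigned per target pixel
    new_shape: (H, W) of the finer scale (assumed H ~= 2h, W ~= 2w)
    hole_mask: (H, W) bool, True where pixels may NOT be sampled from
    Returns an (H, W, 2) field in which every assignment points at a
    valid (non-hole, in-bounds) source pixel.
    """
    H, W = new_shape
    h, w = nnf.shape[:2]
    # Nearest-neighbor upsample of the coarse field, scaling coordinates.
    ys = np.minimum((np.arange(H) * h) // H, h - 1)
    xs = np.minimum((np.arange(W) * w) // W, w - 1)
    up = nnf[ys][:, xs] * 2
    up[..., 0] = np.clip(up[..., 0], 0, H - 1)
    up[..., 1] = np.clip(up[..., 1], 0, W - 1)

    # Identify assignments that now point into the hole and reassign
    # them to random valid source pixels.
    invalid = hole_mask[up[..., 0], up[..., 1]]
    valid_src = np.argwhere(~hole_mask)
    rng = np.random.default_rng(0)
    picks = valid_src[rng.integers(0, len(valid_src), invalid.sum())]
    up[invalid] = picks
    return up
```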
  • Patent number: 10204656
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: February 12, 2019
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
  • Publication number: 20190042875
    Abstract: The present disclosure is directed toward systems and methods for image patch matching. In particular, the systems and methods described herein sample image patches to identify those image patches that match a target image patch. The systems and methods described herein probabilistically accept image patch proposals as potential matches based on an oracle. The oracle is computationally inexpensive to evaluate but more approximate than similarity heuristics. The systems and methods use the oracle to quickly guide the search to areas of the search space more likely to have a match. Once areas are identified that likely include a match, the systems and methods use a more accurate similarity function to identify patch matches.
    Type: Application
    Filed: October 1, 2018
    Publication date: February 7, 2019
    Inventors: Nathan Carr, Kalyan Sunkavalli, Michal Lukac, Elya Shechtman
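The oracle-guided search can be sketched as a two-stage filter. The mean-color oracle, the exponential acceptance probability, and SSD as the accurate similarity function are all assumptions chosen for illustration; the disclosure does not specify these particular choices.

```python
import numpy as np

def oracle_guided_match(target, candidates, rng=None, temperature=0.1):
    """Two-stage patch match: a cheap 'oracle' (mean-color distance)
    probabilistically screens candidate patches; survivors are then
    re-ranked with a more accurate (and more expensive) SSD similarity.

    target:     (p, p, 3) patch
    candidates: (n, p, p, 3) candidate patches
    Returns the index of the best-matching candidate.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Oracle: distance between per-patch mean colors (cheap, approximate).
    oracle = np.linalg.norm(
        candidates.mean(axis=(1, 2)) - target.mean(axis=(0, 1)), axis=1)
    # Accept candidates with probability decaying in oracle distance,
    # steering effort toward likelier regions of the search space.
    accept = rng.random(len(candidates)) < np.exp(-oracle / temperature)
    if not accept.any():
        accept[np.argmin(oracle)] = True   # always keep the oracle's best
    survivors = np.flatnonzero(accept)
    # Accurate similarity: sum of squared differences over full patches.
    ssd = ((candidates[survivors] - target) ** 2).sum(axis=(1, 2, 3))
    return survivors[np.argmin(ssd)]
```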
  • Publication number: 20190035428
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Application
    Filed: July 27, 2017
    Publication date: January 31, 2019
    Applicant: Adobe Systems Incorporated
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
  • Patent number: 10176624
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 8, 2019
    Inventors: Jakub Fiser, Ondřej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20180365874
    Abstract: Techniques are disclosed for performing manipulation of facial images using an artificial neural network. A facial rendering and generation network and method learns one or more compact, meaningful manifolds of facial appearance, by disentanglement of a facial image into intrinsic facial properties, and enables facial edits by traversing paths of such manifold(s). The facial rendering and generation network is able to handle a much wider range of manipulations including changes to, for example, viewpoint, lighting, expression, and even higher-level attributes like facial hair and age—aspects that cannot be represented using previous models.
    Type: Application
    Filed: June 14, 2017
    Publication date: December 20, 2018
    Applicant: Adobe Systems Incorporated
    Inventors: Sunil Hadap, Elya Shechtman, Zhixin Shu, Kalyan Sunkavalli, Mehmet Yumer
  • Publication number: 20180350030
    Abstract: Certain embodiments involve generating an appearance guide, a segmentation guide, and a positional guide and using one or more of the guides to synthesize a stylized image or animation. For example, a system obtains data indicating a target and a style exemplar image and generates a segmentation guide for segmenting the target image and the style exemplar image and identifying a feature of the target image and a corresponding feature of the style exemplar image. The system generates a positional guide for determining positions of the target feature and style feature relative to a common grid system. The system generates an appearance guide for modifying intensity levels and contrast values in the target image based on the style exemplar image. The system uses one or more of the guides to transfer a texture of the style feature to the corresponding target feature.
    Type: Application
    Filed: October 16, 2017
    Publication date: December 6, 2018
    Inventors: David Simons, Michal Lukac, Daniel Sykora, Elya Shechtman, Paul Asente, Jingwan Lu, Jakub Fiser, Ondrej Jamriska
  • Patent number: 10134165
    Abstract: Image distractor detection and processing techniques are described. In one or more implementations, a digital medium environment is configured for image distractor detection that includes detecting, automatically and without user intervention by the one or more computing devices, one or more locations within the image that include one or more distractors that are likely to be considered by a user as distracting from content within the image. The detection includes forming a plurality of segments from the image by the one or more computing devices and calculating a score for each of the plurality of segments that is indicative of a relative likelihood that a respective said segment is considered a distractor within the image. The calculation is performed using a distractor model trained using machine learning as applied to a plurality of images having ground truth distractor locations.
    Type: Grant
    Filed: May 17, 2017
    Date of Patent: November 20, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Ohad I. Fried, Elya Shechtman, Daniel R. Goldman
  • Patent number: 10134108
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed at image synthesis utilizing sampling of patch correspondence information between iterations at different scales. A patch synthesis technique can be performed to synthesize a target region at a first image scale based on portions of a source region that are identified by the patch synthesis technique. The image can then be sampled to generate an image at a second image scale. The sampling can include generating patch correspondence information for the image at the second image scale. Invalid patch assignments in the patch correspondence information at the second image scale can then be identified, and valid patches can be assigned to the pixels having invalid patch assignments. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: November 20, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Sohrab Amirghodsi, Aliakbar Darabi, Elya Shechtman