Patents by Inventor Paul J. Asente

Paul J. Asente has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11875462
    Abstract: In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Zeyu Wang, Paul J. Asente, Cuong D. Nguyen
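
The frame-selection step in the abstract above reduces to an orientation match. Below is a minimal sketch of that idea, assuming each recorded digital video frame is stored with the camera's view direction at capture time; it is an illustration, not Adobe's implementation, and the names (`select_frame`, `frame_dirs`) are invented.

```python
# Hypothetical sketch of orientation-based frame selection: pick the
# recorded frame whose camera view direction is closest in angle to the
# device's current view direction.
import numpy as np

def select_frame(frame_dirs: np.ndarray, current_dir: np.ndarray) -> int:
    """frame_dirs: (N, 3) view directions, one per recorded frame.
    current_dir: (3,) current view direction of the capture device."""
    current_dir = current_dir / np.linalg.norm(current_dir)
    dirs = frame_dirs / np.linalg.norm(frame_dirs, axis=1, keepdims=True)
    # Larger dot product = smaller angle between view directions.
    return int(np.argmax(dirs @ current_dir))

# Three frames looking along +X, +Y, +Z; the device looks mostly along +Z.
frames = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(select_frame(frames, np.array([0.1, 0.0, 0.9])))  # -> 2
```

A production system would presumably match full camera poses against the three-dimensional mesh rather than bare view directions; the dot-product test only illustrates the orientation-relative selection the abstract describes.
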
  • Publication number: 20220157024
    Abstract: In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device.
    Type: Application
    Filed: November 18, 2020
    Publication date: May 19, 2022
    Applicant: Adobe Inc.
    Inventors: Zeyu Wang, Paul J. Asente, Cuong D. Nguyen
  • Patent number: 11238657
    Abstract: In implementations of augmented video prototyping, a mobile device records augmented video data as a captured video of a recorded scene in an environment, the augmented video data including augmented reality tracking data as 3D spatial information relative to objects in the recorded scene. A video prototyping module localizes the mobile device with reference to the objects in the recorded scene using the 3D spatial information for the mobile device being within boundaries of the recorded scene in the environment. The video prototyping module can generate an avatar for display that represents the mobile device at a current location from a perspective of the recorded scene, and create a spatial layer over a video frame at the current location of the avatar that represents the mobile device. The spatial layer is an interactive interface on which to create an augmented reality feature that displays during playback of the captured video.
    Type: Grant
    Filed: March 2, 2020
    Date of Patent: February 1, 2022
    Assignee: Adobe Inc.
    Inventors: Cuong D. Nguyen, Paul J. Asente, Germán Ariel Leiva, Rubaiat Habib Kazi
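
As a rough illustration of the localization and spatial-layer ideas in the abstract above, the sketch below stores a capture position per video frame, localizes the device to the nearest recorded frame, and attaches an annotation layer to it. The data model (`FramePose`, `SpatialLayer`) is hypothetical, not the patented system's.

```python
# Toy model: per-frame device poses recorded with the video, a nearest-
# frame lookup standing in for localization, and a layer object that holds
# AR features created at that point in the captured video.
from dataclasses import dataclass, field
import math

@dataclass
class FramePose:
    frame_index: int
    position: tuple          # (x, y, z) of the device at capture time

@dataclass
class SpatialLayer:
    frame_index: int
    annotations: list = field(default_factory=list)

def nearest_frame(poses: list, device_pos: tuple) -> FramePose:
    """Pick the recorded frame captured closest to the device's position."""
    return min(poses, key=lambda p: math.dist(p.position, device_pos))

poses = [FramePose(0, (0, 0, 0)), FramePose(1, (1, 0, 0)), FramePose(2, (2, 0, 0))]
layer = SpatialLayer(nearest_frame(poses, (0.9, 0.1, 0.0)).frame_index)
layer.annotations.append("arrow pointing at the recorded object")
print(layer.frame_index)  # -> 1
```
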
  • Publication number: 20210272363
    Abstract: In implementations of augmented video prototyping, a mobile device records augmented video data as a captured video of a recorded scene in an environment, the augmented video data including augmented reality tracking data as 3D spatial information relative to objects in the recorded scene. A video prototyping module localizes the mobile device with reference to the objects in the recorded scene using the 3D spatial information for the mobile device being within boundaries of the recorded scene in the environment. The video prototyping module can generate an avatar for display that represents the mobile device at a current location from a perspective of the recorded scene, and create a spatial layer over a video frame at the current location of the avatar that represents the mobile device. The spatial layer is an interactive interface on which to create an augmented reality feature that displays during playback of the captured video.
    Type: Application
    Filed: March 2, 2020
    Publication date: September 2, 2021
    Applicant: Adobe Inc.
    Inventors: Cuong D. Nguyen, Paul J. Asente, Germán Ariel Leiva, Rubaiat Habib Kazi
  • Patent number: 10997754
    Abstract: Freeform drawing beautification techniques are described. An input is received by a computing device describing a freeform path drawn by a user as part of a drawing, the freeform path not formed solely as a circular arc or a circle (e.g., a fixed distance from a point) and including one or more curved elements. The drawing is examined by the computing device to locate another curved element in the drawing. One or more suggestions are constructed to adjust the freeform path by the computing device based on the located curved element in the drawing. The constructed one or more suggestions are output to adjust the freeform path by the computing device.
    Type: Grant
    Filed: May 27, 2015
    Date of Patent: May 4, 2021
    Assignee: Adobe Inc.
    Inventors: Paul J. Asente, Jakub Fiser
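
One way to make the "locate another curved element" step above concrete is to compare strokes by their turning-angle profiles and suggest the closest existing curve. The sketch below is a toy stand-in that assumes strokes are 2-D polylines; the patent does not specify this representation.

```python
# Toy curve matching: resample each stroke uniformly by arc length and
# compare sequences of turning angles, which are invariant to position,
# rotation (up to a constant), and scale.
import numpy as np

def turning_profile(points: np.ndarray, samples: int = 32) -> np.ndarray:
    """Resample a polyline and return its sequence of turning angles."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0, t[-1], samples)
    resampled = np.stack([np.interp(u, t, points[:, i]) for i in range(2)], axis=1)
    d = np.diff(resampled, axis=0)
    return np.diff(np.arctan2(d[:, 1], d[:, 0]))

def suggest_match(new_stroke, existing_strokes) -> int:
    """Index of the existing stroke whose turning profile is closest."""
    target = turning_profile(new_stroke)
    dists = [np.linalg.norm(turning_profile(s) - target) for s in existing_strokes]
    return int(np.argmin(dists))

square_ish = np.array([[0, 0], [1, 0.05], [1.1, 1], [0, 1.05], [0, 0]], float)
arc = np.array([[np.cos(a), np.sin(a)] for a in np.linspace(0, 2, 20)])
print(suggest_match(arc, [square_ish, arc * 1.5]))  # -> 1 (the scaled arc)
```
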
  • Patent number: 10176624
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: December 22, 2017
    Date of Patent: January 8, 2019
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
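
The guided-synthesis idea in the abstract above can be illustrated with a deliberately simplified stand-in: per-pixel nearest-neighbour lookup in the light path expression (LPE) guidance channels, in place of the patch-based synthesis with coherence terms a real system would use. The array layouts are assumptions made for the sketch.

```python
# Simplified guidance-driven transfer: each target pixel takes its style
# colour from the source pixel whose LPE guidance channels match best.
import numpy as np

def stylize(src_lpe, src_style, tgt_lpe):
    """src_lpe: (N, C) guidance per source pixel; src_style: (N, 3) colours;
    tgt_lpe: (M, C) guidance per target pixel. Returns (M, 3) colours."""
    out = np.empty((tgt_lpe.shape[0], src_style.shape[1]))
    for i, g in enumerate(tgt_lpe):
        # Nearest source pixel in guidance (LPE) space.
        j = np.argmin(np.sum((src_lpe - g) ** 2, axis=1))
        out[i] = src_style[j]
    return out

src_g = np.array([[0.0], [1.0]])          # one LPE channel: shadowed vs lit
src_c = np.array([[0.1, 0.1, 0.3], [0.9, 0.8, 0.2]])
tgt_g = np.array([[0.9], [0.2], [1.0]])
print(stylize(src_g, src_c, tgt_g))       # lit colour where g is near 1
```
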
  • Patent number: 10019817
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: July 10, 2018
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Shechtman
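
To make the edge-aware, directional guidance in the entry above concrete, the sketch below builds per-pixel guidance channels from a shape mask (distance to the mask edge) and a direction field (the local paint direction as a unit vector). This is an assumed feature set, not the patent's; synthesis would then match these channels between source and target, much as in the stylization sketch earlier.

```python
# Hypothetical guidance construction: edge distance supplies the
# edge-awareness, and (cos, sin) of the direction field supplies the
# directionality, giving three guidance channels per pixel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def guidance_channels(shape_mask: np.ndarray, direction_field: np.ndarray) -> np.ndarray:
    """Stack per-pixel edge distance and paint direction into guidance."""
    edge_distance = distance_transform_edt(shape_mask)  # distance to mask edge
    return np.stack([edge_distance,
                     np.cos(direction_field),
                     np.sin(direction_field)], axis=-1)

mask = np.zeros((5, 5)); mask[1:4, 1:4] = 1
direction = np.full((5, 5), np.pi / 4)    # paint everywhere at 45 degrees
print(guidance_channels(mask, direction).shape)  # -> (5, 5, 3)
```
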
  • Publication number: 20180122131
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: December 22, 2017
    Publication date: May 3, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Patent number: 9905054
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: February 27, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
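
A toy stand-in for the error-budget step described above: where the abstract fits a curve to the sorted matching errors, the sketch below locates the knee of the sorted-error sequence with discrete second differences and treats errors past the knee as infeasible. The actual curve model and thresholds are not disclosed in this listing.

```python
# Toy error budget: sort the patch-matching errors and take the error
# value at the knee, i.e. where the sorted curve bends upward most sharply.
import numpy as np

def error_budget(errors: np.ndarray) -> float:
    e = np.sort(errors)
    knee = np.argmax(np.diff(e, 2)) + 1   # index of max second difference
    return float(e[knee])

errs = np.array([0.1, 0.11, 0.12, 0.13, 0.5, 1.9, 4.0])
budget = error_budget(errs)
feasible = errs[errs <= budget]
print(budget, feasible)   # 0.5, the gently rising head of the curve
```
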
  • Patent number: 9881413
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Grant
    Filed: June 9, 2016
    Date of Patent: January 30, 2018
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Patent number: 9870638
    Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one, one to another. In another example, transfer of an appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: January 16, 2018
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
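
The patch usage counter described above, which keeps per-patch usage from varying by more than one, can be sketched as a greedy assignment that only draws from source patches at the current minimum usage level. The cost matrix stands in for real patch-matching errors; this is an illustration, not the patented search-and-vote process.

```python
# Uniform-usage assignment: a source patch becomes eligible for reuse only
# once every other patch has been used as often, so usage counts never
# differ by more than one.
import numpy as np

def assign_patches(cost: np.ndarray) -> list:
    """cost[t, s] = matching error of source patch s at target position t."""
    n_targets, n_sources = cost.shape
    usage = np.zeros(n_sources, dtype=int)
    assignment = []
    for t in range(n_targets):
        # Restricting choices to the minimum usage level keeps
        # max(usage) - min(usage) <= 1 at all times.
        eligible = np.flatnonzero(usage == usage.min())
        s = eligible[np.argmin(cost[t, eligible])]
        usage[s] += 1
        assignment.append(int(s))
    return assignment

rng = np.random.default_rng(0)
print(assign_patches(rng.random((6, 3))))  # each source used exactly twice
```
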
  • Patent number: 9852523
    Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video is warped that occurs in the sequence of the target video before a particular frame being synthesized. Color of the particular frame is transferred from an appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce an influence of the emitter portion of the target region in the selection of patches.
    Type: Grant
    Filed: February 24, 2016
    Date of Patent: December 26, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
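
A bare-bones sketch of the temporal step this abstract describes: backward-warp the previously synthesized frame along a given motion field so it lines up with the current frame, then blend in colour from the corresponding exemplar frame. Flow estimation, patch synthesis, and the abstract's emitter-region handling are all omitted, and `temporal_weight` is an invented parameter.

```python
# Temporal coherence sketch: warp the previous result forward, then blend
# it with the exemplar frame's appearance.
import numpy as np

def warp(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp `frame` by a per-pixel flow (dy, dx)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(ys - flow[..., 0], 0, h - 1).astype(int)
    sx = np.clip(xs - flow[..., 1], 0, w - 1).astype(int)
    return frame[sy, sx]

def next_frame(prev_synth, flow, exemplar_frame, temporal_weight=0.5):
    """Blend the warped previous result with the exemplar frame's colours."""
    return (temporal_weight * warp(prev_synth, flow)
            + (1 - temporal_weight) * exemplar_frame)
```

The emitter handling in the abstract would further down-weight warped pixels in regions identified as emitters; that bookkeeping is left out here.
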
  • Publication number: 20170358143
    Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170358128
    Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
    Type: Application
    Filed: June 9, 2016
    Publication date: December 14, 2017
    Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
  • Publication number: 20170243376
    Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video is warped that occurs in the sequence of the target video before a particular frame being synthesized. Color of the particular frame is transferred from an appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce an influence of the emitter portion of the target region in the selection of patches.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170243388
    Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one, one to another. In another example, transfer of an appearance of boundary and interior regions from the image exemplar to a target image is preserved.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 24, 2017
    Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
  • Publication number: 20170109900
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: December 29, 2016
    Publication date: April 20, 2017
    Applicant: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Shechtman
  • Patent number: 9536327
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Grant
    Filed: May 28, 2015
    Date of Patent: January 3, 2017
    Assignee: Adobe Systems Incorporated
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Shechtman
  • Publication number: 20160351170
    Abstract: Freeform drawing beautification techniques are described. An input is received by a computing device describing a freeform path drawn by a user as part of a drawing, the freeform path not formed solely as a circular arc or a circle (e.g., a fixed distance from a point) and including one or more curved elements. The drawing is examined by the computing device to locate another curved element in the drawing. One or more suggestions are constructed to adjust the freeform path by the computing device based on the located curved element in the drawing. The constructed one or more suggestions are output to adjust the freeform path by the computing device.
    Type: Application
    Filed: May 27, 2015
    Publication date: December 1, 2016
    Inventors: Paul J. Asente, Jakub Fiser
  • Publication number: 20160350942
    Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
    Type: Application
    Filed: May 28, 2015
    Publication date: December 1, 2016
    Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Shechtman