Patents by Inventor Paul J. Asente
Paul J. Asente has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11875462
Abstract: In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device.
Type: Grant
Filed: November 18, 2020
Date of Patent: January 16, 2024
Assignee: Adobe Inc.
Inventors: Zeyu Wang, Paul J. Asente, Cuong D. Nguyen
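The frame-selection step this abstract describes, choosing which recorded video frame to show from the current orientation of the image capture device, can be sketched as a nearest-orientation lookup. This is only an illustration: `select_frame`, its arguments, and the cosine-similarity criterion are assumptions, not the patented method.

```python
import numpy as np

def select_frame(current_dir, frame_dirs):
    """Return the index of the recorded frame whose capture direction best
    matches the current device orientation (largest cosine similarity)."""
    current = np.asarray(current_dir, dtype=float)
    current /= np.linalg.norm(current)
    dirs = np.asarray(frame_dirs, dtype=float)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Dot product of unit vectors = cosine of the angle between them.
    return int(np.argmax(dirs @ current))
```

In a full system the directions would come from the tracked pose of the device relative to the three-dimensional mesh; here they are plain vectors for clarity.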
-
Publication number: 20220157024
Abstract: In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device.
Type: Application
Filed: November 18, 2020
Publication date: May 19, 2022
Applicant: Adobe Inc.
Inventors: Zeyu Wang, Paul J. Asente, Cuong D. Nguyen
-
Patent number: 11238657
Abstract: In implementations of augmented video prototyping, a mobile device records augmented video data as a captured video of a recorded scene in an environment, the augmented video data including augmented reality tracking data as 3D spatial information relative to objects in the recorded scene. A video prototyping module localizes the mobile device with reference to the objects in the recorded scene using the 3D spatial information for the mobile device being within boundaries of the recorded scene in the environment. The video prototyping module can generate an avatar for display that represents the mobile device at a current location from a perspective of the recorded scene, and create a spatial layer over a video frame at the current location of the avatar that represents the mobile device. The spatial layer is an interactive interface on which to create an augmented reality feature that displays during playback of the captured video.
Type: Grant
Filed: March 2, 2020
Date of Patent: February 1, 2022
Assignee: Adobe Inc.
Inventors: Cuong D. Nguyen, Paul J. Asente, Germán Ariel Leiva, Rubaiat Habib Kazi
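Displaying an avatar for the device "at a current location from a perspective of the recorded scene," as the abstract puts it, amounts to projecting the device's tracked 3D position into a recorded frame. A minimal sketch under a simplified pinhole camera model; the function name, pose representation, and focal-length parameter are all hypothetical:

```python
import numpy as np

def project_device(position, camera_pose, f=1.0):
    """Project the mobile device's world position into the image plane of a
    recorded frame (pinhole model) to place an avatar overlay there."""
    R, t = camera_pose  # 3x3 world-to-camera rotation, 3-vector camera center
    p_cam = R @ (np.asarray(position, dtype=float) - t)
    # Perspective divide by depth gives normalized image coordinates.
    return f * p_cam[0] / p_cam[2], f * p_cam[1] / p_cam[2]
```

A real implementation would take poses from the augmented reality tracking data recorded with the video rather than from explicit matrices.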
-
Publication number: 20210272363
Abstract: In implementations of augmented video prototyping, a mobile device records augmented video data as a captured video of a recorded scene in an environment, the augmented video data including augmented reality tracking data as 3D spatial information relative to objects in the recorded scene. A video prototyping module localizes the mobile device with reference to the objects in the recorded scene using the 3D spatial information for the mobile device being within boundaries of the recorded scene in the environment. The video prototyping module can generate an avatar for display that represents the mobile device at a current location from a perspective of the recorded scene, and create a spatial layer over a video frame at the current location of the avatar that represents the mobile device. The spatial layer is an interactive interface on which to create an augmented reality feature that displays during playback of the captured video.
Type: Application
Filed: March 2, 2020
Publication date: September 2, 2021
Applicant: Adobe Inc.
Inventors: Cuong D. Nguyen, Paul J. Asente, Germán Ariel Leiva, Rubaiat Habib Kazi
-
Patent number: 10997754
Abstract: Freeform drawing beautification techniques are described. An input is received by a computing device describing a freeform path drawn by a user as part of a drawing, the freeform path not formed solely as a circular arc or a circle (e.g., a fixed distance from a point) and including one or more curved elements. The drawing is examined by the computing device to locate another curved element in the drawing. One or more suggestions are constructed to adjust the freeform path by the computing device based on the located curved element in the drawing. The constructed one or more suggestions are output to adjust the freeform path by the computing device.
Type: Grant
Filed: May 27, 2015
Date of Patent: May 4, 2021
Assignee: Adobe Inc.
Inventors: Paul J. Asente, Jakub Fiser
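One concrete way to "locate another curved element in the drawing" and base a suggestion on it is to compare curvature statistics between the freeform stroke and the curves already drawn. This is only a sketch of that idea; the function names and the turning-angle curvature estimate are assumptions, not the patented algorithm.

```python
import numpy as np

def mean_curvature(points):
    """Approximate mean absolute curvature of a polyline: total turning
    angle divided by total arc length (exactly 1/r for a circular arc)."""
    p = np.asarray(points, dtype=float)
    v = np.diff(p, axis=0)                       # segment vectors
    ang = np.unwrap(np.arctan2(v[:, 1], v[:, 0]))  # segment headings
    turn = np.abs(np.diff(ang)).sum()            # total turning
    length = np.linalg.norm(v, axis=1).sum()     # total arc length
    return turn / length

def suggest_match(stroke, existing_curves):
    """Return the index of the existing curve whose curvature is closest to
    the freeform stroke's -- one candidate beautification suggestion."""
    k = mean_curvature(stroke)
    return int(np.argmin([abs(mean_curvature(c) - k) for c in existing_curves]))
```

A production system would compare richer descriptors (position, tangency, arc extent) and rank several suggestions rather than returning a single match.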
-
Patent number: 10176624
Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
Type: Grant
Filed: December 22, 2017
Date of Patent: January 8, 2019
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
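The guided transfer this abstract describes, where LPE channels steer which source style pixels land where in the target, can be illustrated in a much-simplified form as a per-pixel nearest-neighbour lookup on the guidance channels. The actual method is patch-based; this per-pixel version and every name in it are assumptions for illustration only.

```python
import numpy as np

def stylize(source_guide, source_style, target_guide):
    """For each target pixel, find the source pixel with the most similar
    guidance channels and copy its style value."""
    sg = source_guide.reshape(-1, source_guide.shape[-1])
    ss = source_style.reshape(-1)
    tg = target_guide.reshape(-1, target_guide.shape[-1])
    # Squared distance between every target and source guidance vector.
    d2 = ((tg[:, None, :] - sg[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)
    return ss[nn].reshape(target_guide.shape[:-1])
```

The guidance arrays stand in for the LPE channels (full, direct diffuse, direct specular illumination, and so on); matching them pixel-for-pixel is what makes the transfer "illumination-guided."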
-
Patent number: 10019817
Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
Type: Grant
Filed: December 29, 2016
Date of Patent: July 10, 2018
Assignee: Adobe Systems Incorporated
Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
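Making synthesis "edge-aware" as described here requires knowing, for each pixel, how far it sits from the shape mask's edges, so that edge texture maps onto edges and interior texture onto interiors. A small sketch of one such guidance channel, a two-pass chamfer distance to the mask boundary; the function name and the Manhattan metric are illustrative assumptions.

```python
import numpy as np

def edge_distance_channel(mask):
    """Distance of each inside-mask pixel to the nearest outside pixel
    (Manhattan metric, two-pass chamfer sweep); usable as a guidance
    channel so that texture near edges stays near edges."""
    h, w = mask.shape
    inf = float(h + w)                      # larger than any real distance
    d = np.where(mask, inf, 0.0)            # outside pixels seed distance 0
    for y in range(h):                      # forward sweep
        for x in range(w):
            if y > 0:
                d[y, x] = min(d[y, x], d[y - 1, x] + 1)
            if x > 0:
                d[y, x] = min(d[y, x], d[y, x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward sweep
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y, x] = min(d[y, x], d[y + 1, x] + 1)
            if x < w - 1:
                d[y, x] = min(d[y, x], d[y, x + 1] + 1)
    return d
```

The same channel computed on both source and target masks lets the synthesizer match pixels at comparable edge distances; the direction field would be a second, vector-valued guidance channel.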
-
Publication number: 20180122131
Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
Type: Application
Filed: December 22, 2017
Publication date: May 3, 2018
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
-
Patent number: 9905054
Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
Type: Grant
Filed: June 9, 2016
Date of Patent: February 27, 2018
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
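The error-budget step here, fitting a curve to the sorted matching errors and reading off a cutoff below which assignments are feasible, can be sketched with a generic knee-point detector. The patent describes fitting a specific curve; the maximum-distance-from-chord rule below merely stands in for that fit, and the function name is an assumption.

```python
import numpy as np

def error_budget(matching_errors):
    """Sort candidate patch-assignment errors and return a budget at the
    knee of the sorted-error curve (point farthest from the chord joining
    its endpoints). Assignments above the budget are deemed infeasible."""
    e = np.sort(np.asarray(matching_errors, dtype=float))
    n = len(e)
    x = np.linspace(0.0, 1.0, n)                 # normalized rank
    y = (e - e[0]) / (e[-1] - e[0] + 1e-12)      # normalized error
    dist = np.abs(y - x) / np.sqrt(2.0)          # distance to the chord y = x
    return e[int(np.argmax(dist))]
```

Intuitively, errors below the knee belong to the flat part of the curve (good matches); the steep tail beyond it contains the outlier assignments the budget is meant to exclude.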
-
Patent number: 9881413
Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
Type: Grant
Filed: June 9, 2016
Date of Patent: January 30, 2018
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
-
Patent number: 9870638
Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one, one to another. In another example, transfer of an appearance of a boundary and interior regions from the image exemplar to a target image is preserved.
Type: Grant
Filed: February 24, 2016
Date of Patent: January 16, 2018
Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
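The patch usage counter described here, keeping each exemplar patch's usage within one of every other's, can be sketched as a constrained greedy assignment. The cost matrix, function name, and greedy order are illustrative assumptions; the patented search-and-vote process is more involved.

```python
def assign_patches(cost):
    """Assign each target patch (row of `cost`) the cheapest source patch
    (column) whose usage count is still at the current minimum, so usage
    counts never differ by more than one."""
    n_targets, n_sources = len(cost), len(cost[0])
    usage = [0] * n_sources
    assignment = []
    for t in range(n_targets):
        floor = min(usage)  # only least-used patches are eligible
        best = min((s for s in range(n_sources) if usage[s] == floor),
                   key=lambda s: cost[t][s])
        usage[best] += 1
        assignment.append(best)
    return assignment
```

Even when one source patch is cheapest everywhere, the counter forces the assignment to cycle through the exemplar, which is what prevents the "wash-out" of repeating a single patch.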
-
Patent number: 9852523
Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video is warped that occurs in the sequence of the target video before a particular frame being synthesized. Color of the particular frame is transferred from an appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce an influence of the emitter portion of the target region in the selection of patches.
Type: Grant
Filed: February 24, 2016
Date of Patent: December 26, 2017
Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
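The warping step this abstract relies on, carrying the previously synthesized frame forward so the next frame can be synthesized coherently against it, can be sketched with an integer motion field. Real systems use sub-pixel optical flow with interpolation; the function name and the `(dy, dx)` flow convention are assumptions.

```python
import numpy as np

def warp_previous(prev, flow):
    """Warp the previously synthesized frame by a per-pixel integer motion
    field (flow[y, x] = (dy, dx)); the result guides synthesis of the next
    frame so it stays temporally coherent. Disoccluded pixels stay zero."""
    h, w = prev.shape
    out = np.zeros_like(prev)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            sy, sx = y - dy, x - dx  # where this pixel came from
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = prev[sy, sx]
    return out
```

Pixels with no valid source (newly revealed content) are exactly where per-frame synthesis must fill in fresh detail from the exemplar.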
-
Publication number: 20170358143
Abstract: Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
Type: Application
Filed: June 9, 2016
Publication date: December 14, 2017
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
-
Publication number: 20170358128
Abstract: Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
Type: Application
Filed: June 9, 2016
Publication date: December 14, 2017
Inventors: Jakub Fiser, Ondrej Jamriška, Michal Lukáč, Elya Shechtman, Paul J. Asente, Jingwan Lu, Daniel Sýkora
-
Publication number: 20170243376
Abstract: Appearance transfer techniques are described in the following that maintain temporal coherence between frames. In one example, a previous frame of a target video is warped that occurs in the sequence of the target video before a particular frame being synthesized. Color of the particular frame is transferred from an appearance of a corresponding frame of a video exemplar. In a further example, emitter portions are identified and addressed to preserve temporal coherence. This is performed to reduce an influence of the emitter portion of the target region in the selection of patches.
Type: Application
Filed: February 24, 2016
Publication date: August 24, 2017
Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
-
Publication number: 20170243388
Abstract: Appearance transfer techniques are described in the following. In one example, a search and vote process is configured to select patches from the image exemplar and then search for a location in the target image that is a best fit for the patches. As part of this selection, a patch usage counter may also be employed in an example to ensure that selection of each of the patches from the image exemplar does not vary by more than one, one to another. In another example, transfer of an appearance of a boundary and interior regions from the image exemplar to a target image is preserved.
Type: Application
Filed: February 24, 2016
Publication date: August 24, 2017
Inventors: Ondrej Jamriška, Jakub Fiser, Paul J. Asente, Jingwan Lu, Elya Shechtman, Daniel Sýkora
-
Publication number: 20170109900
Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
Type: Application
Filed: December 29, 2016
Publication date: April 20, 2017
Applicant: Adobe Systems Incorporated
Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
-
Patent number: 9536327
Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
Type: Grant
Filed: May 28, 2015
Date of Patent: January 3, 2017
Assignee: Adobe Systems Incorporated
Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman
-
Publication number: 20160351170
Abstract: Freeform drawing beautification techniques are described. An input is received by a computing device describing a freeform path drawn by a user as part of a drawing, the freeform path not formed solely as a circular arc or a circle (e.g., a fixed distance from a point) and including one or more curved elements. The drawing is examined by the computing device to locate another curved element in the drawing. One or more suggestions are constructed to adjust the freeform path by the computing device based on the located curved element in the drawing. The constructed one or more suggestions are output to adjust the freeform path by the computing device.
Type: Application
Filed: May 27, 2015
Publication date: December 1, 2016
Inventors: Paul J. Asente, Jakub Fiser
-
Publication number: 20160350942
Abstract: Example-based edge-aware directional texture painting techniques are described. Inputs are received that define a target direction field and a plurality of edges as part of a target shape mask. A texture is synthesized from a source image by the computing device to be applied to the set of pixels of the target mask using a source shape mask and a source direction field. The source shape mask defines a plurality of edges of the source mask such that the synthesized texture applied to the plurality of edges of the target shape mask corresponds to respective ones of the plurality of edges of the source shape mask. The source direction field is taken from the source image such that the synthesized texture applied to the target direction field corresponds to the source direction field. The pixels in the user interface are painted by the computing device using the synthesized texture.
Type: Application
Filed: May 28, 2015
Publication date: December 1, 2016
Inventors: Paul J. Asente, Jingwan Lu, Michal Lukáč, Elya Schechtman