Patents by Inventor Geoffrey Oxholm
Geoffrey Oxholm has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240046430
Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints.
Type: Application
Filed: September 29, 2023
Publication date: February 8, 2024
Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
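The four-cardinal-neighbor gradient constraint described in this abstract is closely related to classic Poisson-style image editing, where masked pixels are iteratively updated so that their finite-difference gradients match a target gradient field while unmasked pixels act as fixed boundary conditions. The following is a minimal generic sketch of that idea, not the patented implementation; all names and the Jacobi-style solver are illustrative assumptions:

```python
import numpy as np

def gradient_constrained_update(color, mask, gx, gy, iters=200):
    """Jacobi-style update of masked pixels so their finite-difference
    gradients match the target gradients (gx, gy). Pixels outside the
    mask are treated as fixed boundary conditions; the mask must not
    touch the image border in this simplified sketch."""
    out = color.astype(float).copy()
    ys, xs = np.nonzero(mask)
    for _ in range(iters):
        new = out.copy()
        for y, x in zip(ys, xs):
            # divergence of the target gradient field at this pixel,
            # built from the four cardinal neighbors
            div = gx[y, x - 1] - gx[y, x] + gy[y - 1, x] - gy[y, x]
            new[y, x] = (out[y - 1, x] + out[y + 1, x]
                         + out[y, x - 1] + out[y, x + 1] + div) / 4.0
        out = new
    return out
```

With a zero target gradient field, the masked pixel simply relaxes toward its neighbors' values, which is the intuition behind seamless fills.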
-
Patent number: 11823357
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
Type: Grant
Filed: March 9, 2021
Date of Patent: November 21, 2023
Assignee: Adobe Inc.
Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
-
Patent number: 11810326
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
Type: Grant
Filed: July 28, 2021
Date of Patent: November 7, 2023
Assignee: Adobe Inc.
Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
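The "geometric model" step can be illustrated with a standard result from projective geometry: for the vanishing points v1 and v2 of two orthogonal scene directions and principal point p, the relation (v1 - p) . (v2 - p) = -f^2 recovers the focal length f. The sketch below uses that textbook relation under the stated assumptions; it is not taken from the patent, and the function name is hypothetical:

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    """Estimate focal length (in pixels) from the vanishing points of
    two orthogonal scene directions, using (v1 - p).(v2 - p) = -f^2."""
    px, py = principal_point
    d = (v1[0] - px) * (v2[0] - px) + (v1[1] - py) * (v2[1] - py)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-d)
```

For example, a camera with f = 500 px looking at directions rotated 45 degrees left and right of the optical axis produces vanishing points at x = +500 and x = -500, from which the formula recovers f = 500.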
-
Patent number: 11756210
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
Type: Grant
Filed: March 12, 2020
Date of Patent: September 12, 2023
Assignee: Adobe Inc.
Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
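Reduced to its plumbing, the color-propagation step amounts to following each masked pixel's interpolated motion back to the user-edited reference frame and sampling a color there. The toy version below uses nearest-neighbor sampling and clamping at the borders; names are hypothetical, and a real system would also blend colors and respect gradients:

```python
import numpy as np

def propagate_reference(frame, ref_frame, mask, flow):
    """Fill masked pixels of `frame` by following a per-pixel motion
    vector (flow[..., 0] = dx, flow[..., 1] = dy) into the reference
    frame and copying the nearest-neighbor color found there."""
    h, w = mask.shape
    out = frame.astype(float).copy()
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                sy = min(max(int(round(y + flow[y, x, 1])), 0), h - 1)
                sx = min(max(int(round(x + flow[y, x, 0])), 0), w - 1)
                out[y, x] = ref_frame[sy, sx]
    return out
```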
-
Publication number: 20230222762
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss.
Type: Application
Filed: January 11, 2022
Publication date: July 13, 2023
Inventors: Maksym Andriushchenko, John Collomosse, Xiaoyang Li, Geoffrey Oxholm
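The two objectives can be sketched as a standard InfoNCE contrastive loss plus an FGSM-style inner step that perturbs an input within a bounded L-infinity ball in the direction that increases that loss; the outer training loop would then minimize the loss on the perturbed inputs. The sketch below operates directly on embedding vectors and uses finite-difference gradients so it stays self-contained; it is a generic illustration, not the patented model:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive (InfoNCE) loss for one anchor on L2-normalized
    embeddings: pull the positive close, push negatives away."""
    def unit(v):
        return v / np.linalg.norm(v)
    a = unit(anchor)
    sims = [a @ unit(positive)] + [a @ unit(n) for n in negatives]
    logits = np.array(sims) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

def adversarial_step(anchor, positive, negatives, eps=0.01):
    """Inner (first) objective: nudge the positive inside an L-inf ball
    of radius eps in the direction that increases the contrastive loss
    (sign of a finite-difference gradient, FGSM-style)."""
    base = info_nce(anchor, positive, negatives)
    grad = np.zeros_like(positive)
    h = 1e-5
    for i in range(len(positive)):
        p = positive.copy()
        p[i] += h
        grad[i] = (info_nce(anchor, p, negatives) - base) / h
    return positive + eps * np.sign(grad)
```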
-
Publication number: 20220292649
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method includes one or more processing devices that perform operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
Type: Application
Filed: March 9, 2021
Publication date: September 15, 2022
Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
-
Publication number: 20210358170
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
Type: Application
Filed: July 28, 2021
Publication date: November 18, 2021
Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
-
Publication number: 20210287007
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
Type: Application
Filed: March 12, 2020
Publication date: September 16, 2021
Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
-
Patent number: 11094083
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
Type: Grant
Filed: January 25, 2019
Date of Patent: August 17, 2021
Assignee: Adobe Inc.
Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
-
Patent number: 11081139
Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
Type: Grant
Filed: April 9, 2019
Date of Patent: August 3, 2021
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
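The forward/reverse motion difference mentioned in this abstract is commonly computed as a round-trip (forward-backward) flow error: follow the forward flow to the next frame, sample the backward flow there, and check whether the chain returns to the starting pixel. A toy version of that confidence signal (the texture term is omitted, and the Gaussian weighting and names are illustrative assumptions):

```python
import numpy as np

def motion_confidence(fwd, bwd, sigma=1.0):
    """Per-pixel confidence from forward/backward flow disagreement.
    fwd[y, x] is the (dx, dy) motion into the next frame; bwd is the
    motion back from it. A large round-trip error means low confidence."""
    h, w, _ = fwd.shape
    conf = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # nearest-neighbor location reached by the forward flow
            ty = min(max(int(round(y + fwd[y, x, 1])), 0), h - 1)
            tx = min(max(int(round(x + fwd[y, x, 0])), 0), w - 1)
            err = np.linalg.norm(fwd[y, x] + bwd[ty, tx])
            conf[y, x] = np.exp(-(err ** 2) / (2 * sigma ** 2))
    return conf
```

Perfectly consistent flows yield confidence 1; any disagreement decays the weight toward 0, so unreliable boundary pixels contribute less to the interpolated motion.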
-
Patent number: 10872637
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
Type: Grant
Filed: September 27, 2019
Date of Patent: December 22, 2020
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
-
Publication number: 20200242804
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
Type: Application
Filed: January 25, 2019
Publication date: July 30, 2020
Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
-
Publication number: 20200118254
Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
Type: Application
Filed: April 9, 2019
Publication date: April 16, 2020
Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
-
Publication number: 20200118594
Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
Type: Application
Filed: September 27, 2019
Publication date: April 16, 2020
Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
-
Patent number: 10565757
Abstract: A computing system transforms an input image into a stylized output image by applying first and second style features from a style exemplar. The input image is provided to a multimodal style-transfer network having a low-resolution-based stylization subnet and a high-resolution stylization subnet. The low-resolution-based stylization subnet is trained with low-resolution style exemplars to apply the first style feature. The high-resolution stylization subnet is trained with high-resolution style exemplars to apply the second style feature. The low-resolution-based stylization subnet generates an intermediate image by applying the first style feature from a low-resolution version of the style exemplar to first image data obtained from the input image. Second image data from the intermediate image is provided to the high-resolution stylization subnet.
Type: Grant
Filed: June 9, 2017
Date of Patent: February 18, 2020
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Xin Wang
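The coarse-to-fine data flow described here (stylize a downsampled copy, upsample the intermediate result, refine at full resolution) can be sketched with placeholder subnets. Both callables below are hypothetical stand-ins for the trained low- and high-resolution networks; only the pipeline plumbing is illustrated:

```python
import numpy as np

def multimodal_stylize(image, coarse_subnet, fine_subnet, low_size=64):
    """Two-stage coarse-to-fine pipeline for a grayscale image:
    run the low-resolution subnet on a strided downsample, upsample
    its output by pixel replication, then refine at full resolution."""
    h, w = image.shape
    step_y = max(h // low_size, 1)
    step_x = max(w // low_size, 1)
    low = image[::step_y, ::step_x]               # crude downsample
    intermediate = coarse_subnet(low)              # apply first style feature
    upsampled = np.kron(intermediate, np.ones((step_y, step_x)))[:h, :w]
    return fine_subnet(upsampled)                  # apply second style feature
```

A production system would use learned up/downsampling rather than striding and pixel replication; the point is only that the second subnet consumes the first subnet's output.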
-
Patent number: 10453491
Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
Type: Grant
Filed: February 12, 2019
Date of Patent: October 22, 2019
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
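The transition-point search reads as: downsample each frame, extract an edge map, and pick the pair of sufficiently separated frames whose edge maps agree most. The sketch below uses a crude gradient-magnitude edge proxy; the threshold, stride, and scoring are illustrative assumptions, not details from the patent:

```python
import numpy as np

def best_loop_pair(frames, min_gap=10):
    """Pick (start, end) indices for a seamless loop: binarize a cheap
    gradient-magnitude 'edge map' of a strided downsample of each frame,
    then choose the frame pair whose edge maps agree on the most pixels."""
    def edges(frame):
        small = frame[::4, ::4].astype(float)   # lower-resolution proxy
        gy, gx = np.gradient(small)
        return np.hypot(gx, gy) > 0.5
    maps = [edges(f) for f in frames]
    best_score, best_pair = -1.0, (0, len(frames) - 1)
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            score = (maps[i] == maps[j]).mean()  # fraction of matching pixels
            if score > best_score:
                best_score, best_pair = score, (i, j)
    return best_pair
```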
-
Patent number: 10334222
Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop.
Type: Grant
Filed: November 20, 2017
Date of Patent: June 25, 2019
Assignee: Adobe Inc.
Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
-
Publication number: 20190189158
Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
Type: Application
Filed: February 12, 2019
Publication date: June 20, 2019
Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
-
Publication number: 20190158800
Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop.
Type: Application
Filed: November 20, 2017
Publication date: May 23, 2019
Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
-
Patent number: 10204656
Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
Type: Grant
Filed: July 27, 2017
Date of Patent: February 12, 2019
Assignee: Adobe Inc.
Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang