Patents by Inventor Geoffrey Oxholm

Geoffrey Oxholm has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046430
    Abstract: One or more processing devices access a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The one or more processing devices determine that a target pixel corresponds to a sub-region within the target region that includes hallucinated content. The one or more processing devices determine gradient constraints using gradient values of neighboring pixels in the hallucinated content, the neighboring pixels being adjacent to the target pixel and corresponding to four cardinal directions. The one or more processing devices update color data of the target pixel subject to the determined gradient constraints. (See the illustrative code sketch following this entry.)
    Type: Application
    Filed: September 29, 2023
    Publication date: February 8, 2024
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
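
The gradient-constrained update described in the entry above is closely related to gradient-domain (Poisson) compositing. Below is a minimal sketch of that idea, assuming a discrete Poisson solve with Jacobi iterations; the solver choice and every name here are illustrative assumptions, not the claimed method.

```python
# Gradient-domain sketch: solve for colors inside a target region so that
# their finite differences match gradients taken from hallucinated content,
# using the four cardinal neighbors of each pixel. Illustration only.
import numpy as np

def update_target_region(image, mask, grad_x, grad_y, iters=500):
    """Jacobi iterations of the discrete Poisson equation inside `mask`.

    image:          (H, W) float array; pixels outside `mask` act as boundary values.
    mask:           (H, W) bool array marking the target region (assumed not to
                    touch the image border).
    grad_x, grad_y: desired gradients from the hallucinated content, where
                    grad_x[y, x] ~ image[y, x + 1] - image[y, x].
    """
    out = image.astype(float).copy()
    ys, xs = np.nonzero(mask)
    for _ in range(iters):
        nxt = out.copy()
        for y, x in zip(ys, xs):
            # Color data from the four cardinal neighbors...
            neighbor_sum = (out[y - 1, x] + out[y + 1, x]
                            + out[y, x - 1] + out[y, x + 1])
            # ...and the divergence of the desired gradient field at (y, x).
            div = ((grad_x[y, x] - grad_x[y, x - 1])
                   + (grad_y[y, x] - grad_y[y - 1, x]))
            # Discrete Poisson equation: 4 * I[y, x] = neighbor_sum - div.
            nxt[y, x] = (neighbor_sum - div) / 4.0
        out = nxt
    return out
```
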
  • Patent number: 11823357
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method involves one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: March 9, 2021
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
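
The motion constraint in the patent above comes from a three-dimensional model of the reference object. A minimal sketch of that geometry, assuming a standard pinhole camera with known intrinsics and world-to-camera poses; none of these names or signatures come from the patent itself.

```python
# Hypothetical sketch: a target pixel's motion is constrained to the projected
# displacement of a 3D point on the reference object between two frames, and
# color is then pulled along that motion.
import numpy as np

def project(K, pose, X):
    """Project a world point X (3,) through a 4x4 world-to-camera pose and 3x3 K."""
    Xc = pose[:3, :3] @ X + pose[:3, 3]
    uv = K @ Xc
    return uv[:2] / uv[2]

def constrained_motion(K, pose_t, pose_t1, X):
    """2D motion of the pixel observing X, from frame t to frame t+1."""
    return project(K, pose_t1, X) - project(K, pose_t, X)

def propagate_color(frame_t1, uv_t, motion):
    """Nearest-neighbor color pull from frame t+1 along the constrained motion."""
    u, v = np.round(uv_t + motion).astype(int)
    return frame_t1[v, u]
```
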
  • Patent number: 11810326
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: November 7, 2023
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
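
The geometric-model half of the invention above maps detected vanishing lines to camera parameters. As a minimal sketch, assuming a zero-skew pinhole camera with the principal point at the image center, two orthogonal vanishing points determine the focal length; the neural edge detector itself is not reproduced here.

```python
# Classic single-image calibration step: intersect vanishing lines to get
# vanishing points, then recover focal length from two orthogonal vanishing
# points. All names are illustrative.
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of 2D lines given as (a, b, c) with ax + by + c = 0."""
    A = np.array([[a, b] for a, b, _ in lines], dtype=float)
    rhs = np.array([-c for _, _, c in lines], dtype=float)
    vp, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return vp

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """f^2 = -(v1 - p) . (v2 - p) for orthogonal directions under a centered
    pinhole model; returns None if the configuration is degenerate."""
    f_sq = -np.dot(v1 - principal_point, v2 - principal_point)
    return np.sqrt(f_sq) if f_sq > 0 else None
```
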
  • Patent number: 11756210
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: September 12, 2023
    Assignee: Adobe Inc.
    Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
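
One step the abstract above describes is interpolating pixel motion inside the target region from the motion of its boundary. A minimal sketch follows, using inverse-distance weighting as a stand-in for whatever interpolation scheme the patent actually claims.

```python
# Hypothetical interior-motion interpolation: an interior pixel's motion is a
# distance-weighted blend of boundary-pixel motions, and reference-frame color
# is then pulled along that motion.
import numpy as np

def interpolate_motion(target_px, boundary_px, boundary_motion, eps=1e-6):
    """Inverse-distance-weighted motion for one interior pixel.

    target_px:       (2,) pixel coordinate inside the target region, as (y, x).
    boundary_px:     (N, 2) coordinates of boundary pixels.
    boundary_motion: (N, 2) per-boundary-pixel motion vectors.
    """
    d = np.linalg.norm(boundary_px - target_px, axis=1)
    w = 1.0 / (d + eps)
    return (w[:, None] * boundary_motion).sum(axis=0) / w.sum()

def color_from_reference(reference_frame, target_px, motion):
    """Pull reference color along the interpolated motion (nearest neighbor)."""
    y, x = np.round(target_px + motion).astype(int)
    return reference_frame[y, x]
```
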
  • Publication number: 20230222762
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase the contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss. (See the illustrative code sketch following this entry.)
    Type: Application
    Filed: January 11, 2022
    Publication date: July 13, 2023
    Inventors: Maksym Andriushchenko, John Collomosse, Xiaoyang Li, Geoffrey Oxholm
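
The two objective functions described above form a min-max (adversarial) training loop. Here is a minimal PyTorch-style sketch, assuming an NT-Xent-style contrastive loss and PGD-style bounded perturbations; `model`, the hyperparameters, and all names are assumptions, not Adobe's implementation.

```python
# Inner loop (first objective): perturb inputs within a bound to INCREASE the
# contrastive loss. Outer step (second objective): update model parameters to
# REDUCE it. Illustrative sketch only.
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, tau=0.1):
    """NT-Xent-style loss; matching row pairs across the two views are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def robust_step(model, opt, x1, x2, eps=8 / 255, alpha=2 / 255, pgd_steps=3):
    delta = torch.zeros_like(x1, requires_grad=True)
    for _ in range(pgd_steps):  # first objective: bounded, loss-increasing augmentation
        loss = contrastive_loss(model(x1 + delta), model(x2))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    opt.zero_grad()  # second objective: learn parameters that reduce the loss
    contrastive_loss(model(x1 + delta.detach()), model(x2)).backward()
    opt.step()
```
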
  • Publication number: 20220292649
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference video frame to other video frames depicting a scene. One example method involves one or more processing devices performing operations that include accessing a scene depicting a reference object that includes an annotation identifying a target region to be modified in one or more video frames. The operations also include computing a target motion of a target pixel that is subject to a motion constraint. The motion constraint is based on a three-dimensional model of the reference object. Further, the operations include determining color data of the target pixel to correspond to the target motion. The color data includes a color value and a gradient. The operations also include determining gradient constraints using gradient values of neighbor pixels. Additionally, the processing devices update the color data of the target pixel subject to the gradient constraints.
    Type: Application
    Filed: March 9, 2021
    Publication date: September 15, 2022
    Inventors: Oliver Wang, John Nelson, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20210358170
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Application
    Filed: July 28, 2021
    Publication date: November 18, 2021
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20210287007
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 16, 2021
    Inventors: Oliver Wang, Matthew Fisher, John Nelson, Geoffrey Oxholm, Elya Shechtman, Wenqi Xian
  • Patent number: 11094083
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: August 17, 2021
    Assignee: Adobe Inc.
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Patent number: 11081139
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: April 9, 2019
    Date of Patent: August 3, 2021
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
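
A minimal sketch of the confidence weighting described above, assuming confidence comes from forward/backward motion consistency (the texture term the abstract also mentions is omitted) and then scales each boundary pixel's contribution to the interpolated motion.

```python
# Hypothetical confidence-weighted motion interpolation. A boundary pixel whose
# forward and reverse motions disagree gets low confidence and contributes less.
import numpy as np

def boundary_confidence(forward, backward, sigma=1.0):
    """forward, backward: (N, 2) motion vectors per boundary pixel.

    A perfectly consistent pixel has backward = -forward, so the residual
    ||forward + backward|| measures inconsistency."""
    residual = np.linalg.norm(forward + backward, axis=1)
    return np.exp(-(residual ** 2) / (2 * sigma ** 2))

def weighted_motion(target_px, boundary_px, boundary_motion, conf, eps=1e-6):
    """Confidence- and distance-weighted motion estimate for one interior pixel."""
    d = np.linalg.norm(boundary_px - target_px, axis=1)
    w = conf / (d + eps)
    return (w[:, None] * boundary_motion).sum(axis=0) / w.sum()
```
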
  • Patent number: 10872637
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: December 22, 2020
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
  • Publication number: 20200242804
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
    Type: Application
    Filed: January 25, 2019
    Publication date: July 30, 2020
    Inventors: Jonathan Eisenmann, Wenqi Xian, Matthew Fisher, Geoffrey Oxholm, Elya Shechtman
  • Publication number: 20200118254
    Abstract: Certain aspects involve video inpainting via confidence-weighted motion estimation. For instance, a video editor accesses video content having a target region to be modified in one or more video frames. The video editor computes a motion for a boundary of the target region. The video editor interpolates, from the boundary motion, a target motion of a target pixel within the target region. In the interpolation, confidence values assigned to boundary pixels control how the motion of these pixels contributes to the interpolated target motion. A confidence value is computed based on a difference between forward and reverse motion with respect to a particular boundary pixel, a texture in a region that includes the particular boundary pixel, or a combination thereof. The video editor modifies the target region in the video by updating color data of the target pixel to correspond to the target motion interpolated from the boundary motion.
    Type: Application
    Filed: April 9, 2019
    Publication date: April 16, 2020
    Inventors: Geoffrey Oxholm, Oliver Wang, Elya Shechtman, Michal Lukac, Ramiz Sheikh
  • Publication number: 20200118594
    Abstract: Certain aspects involve video inpainting in which content is propagated from a user-provided reference frame to other video frames depicting a scene. For example, a computing system accesses a set of video frames with annotations identifying a target region to be modified. The computing system determines a motion of the target region's boundary across the set of video frames, and also interpolates pixel motion within the target region across the set of video frames. The computing system also inserts, responsive to user input, a reference frame into the set of video frames. The reference frame can include reference color data from a user-specified modification to the target region. The computing system can use the reference color data and the interpolated motion to update color data in the target region across the set of video frames.
    Type: Application
    Filed: September 27, 2019
    Publication date: April 16, 2020
    Inventors: Geoffrey Oxholm, Seth Walker, Ramiz Sheikh, Oliver Wang, John Nelson
  • Patent number: 10565757
    Abstract: A computing system transforms an input image into a stylized output image by applying first and second style features from a style exemplar. The input image is provided to a multimodal style-transfer network having a low-resolution-based stylization subnet and a high-resolution stylization subnet. The low-resolution-based stylization subnet is trained with low-resolution style exemplars to apply the first style feature. The high-resolution stylization subnet is trained with high-resolution style exemplars to apply the second style feature. The low-resolution-based stylization subnet generates an intermediate image by applying the first style feature from a low-resolution version of the style exemplar to first image data obtained from the input image. Second image data from the intermediate image is provided to the high-resolution stylization subnet. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: February 18, 2020
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Xin Wang
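
The abstract above describes a two-stage, multi-resolution pipeline. A minimal PyTorch-style sketch of that shape follows, with trivial convolutional stacks standing in for the trained low- and high-resolution subnets; the real architecture and training are not reproduced here.

```python
# Hypothetical multimodal style-transfer skeleton: a low-resolution subnet
# applies coarse style, its output is upsampled, and a high-resolution subnet
# adds fine style detail on top.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StylizeSubnet(nn.Module):
    """Placeholder subnet; the patented subnets would be far deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def multimodal_stylize(image, low_subnet, high_subnet, low_size=256):
    low = F.interpolate(image, size=(low_size, low_size),
                        mode="bilinear", align_corners=False)
    intermediate = low_subnet(low)          # first (coarse) style feature
    up = F.interpolate(intermediate, size=image.shape[-2:],
                       mode="bilinear", align_corners=False)
    return high_subnet(up)                  # second (fine) style feature
```
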
  • Patent number: 10453491
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: February 12, 2019
    Date of Patent: October 22, 2019
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
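
A minimal sketch of the loop-point search described above: frames are reduced to edge maps, and the start/end pair whose edge maps agree best becomes the transition point, which is what mitigates visible "teleporting" at the loop seam. The edge detector and scoring below are simple stand-ins, not the patented technique.

```python
# Hypothetical loop-point selection over low-resolution grayscale frames.
import numpy as np

def edge_map(gray):
    """Crude gradient-magnitude edges on a 2D grayscale frame."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def best_loop_points(frames, min_gap=10):
    """frames: list of low-res 2D grayscale arrays. Returns (start, end) indices."""
    edges = [edge_map(f) for f in frames]
    best, best_score = None, np.inf
    for i in range(len(frames)):
        for j in range(i + min_gap, len(frames)):
            score = np.mean((edges[i] - edges[j]) ** 2)  # edge-map disagreement
            if score < best_score:
                best, best_score = (i, j), score
    return best
```
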
  • Patent number: 10334222
    Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop. (See the illustrative code sketch following this entry.)
    Type: Grant
    Filed: November 20, 2017
    Date of Patent: June 25, 2019
    Assignee: Adobe Inc.
    Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
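
A minimal sketch of the focus-driven loop selection above, assuming the "threshold similarity" test is a mean-squared-error comparison of the candidate start and end frames restricted to the region of interest; the patent does not specify this particular metric.

```python
# Hypothetical region-of-interest loop check for a video playback system.
import numpy as np

def roi_similarity(frame_a, frame_b, roi):
    """Mean-squared difference inside roi = (y0, y1, x0, x1); lower is more similar."""
    y0, y1, x0, x1 = roi
    a = frame_a[y0:y1, x0:x1].astype(float)
    b = frame_b[y0:y1, x0:x1].astype(float)
    return np.mean((a - b) ** 2)

def select_loop(frames, start, end, roi, threshold=25.0):
    """Accept (start, end) as the loop for this region of interest only if the
    start and end frames are similar enough inside it."""
    if roi_similarity(frames[start], frames[end], roi) <= threshold:
        return (start, end)
    return None
```
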
  • Publication number: 20190189158
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Application
    Filed: February 12, 2019
    Publication date: June 20, 2019
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang
  • Publication number: 20190158800
    Abstract: Certain embodiments involve switching to a particular video loop based on a user's focus while displaying video content to a user. For example, a video playback system identifies a current frame of the video content in a current region of interest being presented to a user on a presentation device. The video playback system also identifies an updated region of interest in the video content. The video playback system determines that a start frame and an end frame of a video portion within the region of interest have a threshold similarity. The video playback system selects, based on the threshold similarity, the video portion as a loop to be displayed to the user in the updated region of interest. The video playback system causes the presentation device to display the video portion as the loop.
    Type: Application
    Filed: November 20, 2017
    Publication date: May 23, 2019
    Inventors: Pranjali Kokare, Geoffrey Oxholm, Zhili Chen, Duygu Ceylan Aksit
  • Patent number: 10204656
    Abstract: Provided are video processing architectures and techniques configured to generate looping video. The video processing architectures and techniques automatically produce a looping video from a fixed-length video clip. Embodiments of the video processing architectures and techniques determine a lower-resolution version of the fixed-length video clip, and detect a presence of edges within image frames in the lower-resolution version. A pair of image frames having similar edges is identified as a pair of candidates for a transition point (i.e., a start frame and an end frame) at which the looping video can repeat. Using start and end frames having similar edges mitigates teleporting of objects displayed in the looping video. In some cases, teleporting during repeating is eliminated.
    Type: Grant
    Filed: July 27, 2017
    Date of Patent: February 12, 2019
    Assignee: Adobe Inc.
    Inventors: Geoffrey Oxholm, Elya Shechtman, Oliver Wang