Patents by Inventor Stephen DiVerdi

Stephen DiVerdi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230368452
    Abstract: A computing system captures a first image, comprising an object in a first position, using a camera. The object has indicators indicating points of interest on the object. The computing system receives first user input linking at least a subset of the indicators and establishing relationships between the points of interest on the object, and second user input comprising a graphic element and a mapping between the graphic element and the object. The computing system captures second images, comprising the object in one or more modified positions, using the camera. The computing system tracks the modified positions of the object across the second images using the indicators and the relationships between the points of interest. The computing system generates a virtual graphic based on the one or more modified positions, the graphic element, and the mapping between the graphic element and the object.
    Type: Application
    Filed: May 10, 2022
    Publication date: November 16, 2023
    Inventors: Jiahao Li, Li-Yi Wei, Stephen DiVerdi, Kazi Rubaiat Habib
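    The tracking-and-mapping step above is essentially a pose fit between point correspondences. Below is a minimal, hypothetical Python sketch of that idea: a least-squares similarity fit (Umeyama-style) between tracked points of interest, used to re-pose a graphic element's anchor. The 2D setting, point values, and estimate_similarity helper are illustrative assumptions, not the patent's method.

    ```python
    # Minimal sketch, assuming 2D points of interest: fit a similarity
    # transform (Umeyama-style least squares) from first-image points to
    # their tracked positions, then re-pose the mapped graphic element.
    import numpy as np

    def estimate_similarity(src, dst):
        """Scale, rotation, translation mapping src (N, 2) onto dst (N, 2)."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
        R = Vt.T @ D @ U.T
        scale = (S * np.diag(D)).sum() / (src_c ** 2).sum()
        t = dst.mean(axis=0) - scale * R @ src.mean(axis=0)
        return scale, R, t

    # Illustrative points of interest: first image vs. a later frame.
    first = np.array([[100.0, 100.0], [200.0, 100.0], [150.0, 200.0]])
    moved = np.array([[120.0, 110.0], [220.0, 112.0], [172.0, 212.0]])

    s, R, t = estimate_similarity(first, moved)
    anchor = np.array([150.0, 130.0])        # graphic element's anchor point
    print("re-posed anchor:", s * R @ anchor + t)
    ```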
  • Patent number: 11776232
    Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
    Type: Grant
    Filed: February 8, 2022
    Date of Patent: October 3, 2023
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Tianyi Wang, Stephen DiVerdi, Li-Yi Wei
  • Publication number: 20230252746
    Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
    Type: Application
    Filed: February 8, 2022
    Publication date: August 10, 2023
    Inventors: Kazi Rubaiat Habib, Tianyi Wang, Stephen DiVerdi, Li-Yi Wei
  • Patent number: 11562169
    Abstract: The present disclosure is directed towards methods and systems for determining multimodal image edits for a digital image. The systems and methods receive a digital image and analyze the digital image. The systems and methods further generate a feature vector of the digital image, wherein each value of the feature vector represents a respective feature of the digital image. Additionally, based on the feature vector and determined latent variables, the systems and methods generate a plurality of determined image edits for the digital image, which includes determining a plurality of sets of potential image attribute values and selecting a plurality of sets of determined image attribute values from the plurality of sets of potential image attribute values, wherein each set of determined image attribute values comprises a determined image edit of the plurality of determined image edits.
    Type: Grant
    Filed: February 7, 2020
    Date of Patent: January 24, 2023
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Matthew Douglas Hoffman, Ardavan Saeedi
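    As a rough illustration of how sampled latent variables can yield a plurality of determined edits for one feature vector, here is a hedged Python sketch; the propose_edits helper, the random stand-in decoder weights, and the three attribute names are assumptions, not the patented model.

    ```python
    # Hypothetical sketch of producing multiple candidate edits from an
    # image feature vector by sampling latent variables; the linear
    # decoder is a random stand-in for a trained network.
    import numpy as np

    rng = np.random.default_rng(0)
    feature = rng.normal(size=128)            # stand-in image feature vector
    W_f = rng.normal(size=(3, 128)) * 0.05    # stand-in decoder weights
    W_z = rng.normal(size=(3, 8)) * 0.5

    def propose_edits(feature, n_candidates=5):
        """Sample latent codes and decode each into one set of image
        attribute values (exposure, contrast, saturation deltas)."""
        edits = []
        for _ in range(n_candidates):
            z = rng.normal(size=8)            # one latent variable sample
            attrs = W_f @ feature + W_z @ z   # one determined image edit
            edits.append(dict(zip(["exposure", "contrast", "saturation"], attrs)))
        return edits

    for edit in propose_edits(feature):
        print({k: round(v, 2) for k, v in edit.items()})
    ```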
  • Patent number: 11551384
    Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, a target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: January 10, 2023
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
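    A minimal sketch of the palette-flow idea: map each target palette color to a source palette color by minimizing total color distance. For equal-size, equal-weight palettes the earth-mover problem reduces to an assignment problem, solved here with scipy; the RGB palette values are illustrative, not from the patent.

    ```python
    # Minimal sketch: recolor a target palette from a source palette by
    # solving the equal-weight special case of the earth-mover problem
    # as an assignment problem.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    source = np.array([[0.9, 0.2, 0.1], [0.1, 0.6, 0.9], [0.95, 0.8, 0.2]])
    target = np.array([[0.2, 0.3, 0.8], [0.8, 0.1, 0.2], [0.9, 0.9, 0.4]])

    # Pairwise squared distances between target and source palette colors.
    cost = ((target[:, None, :] - source[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)

    for t, s in zip(rows, cols):
        print(f"target color {target[t]} -> recolor with source {source[s]}")
    ```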
  • Patent number: 11539932
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Grant
    Filed: November 4, 2021
    Date of Patent: December 27, 2022
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
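    A hedged sketch of the interpolation step: blend filter parameters from spatial keyframes by angular proximity of the current field of view. The keyframe directions, the single 'saturation' parameter, and inverse-angular-distance weighting are assumptions; the patent does not specify this exact scheme.

    ```python
    # Hedged sketch: interpolate view-specific filter parameters from
    # spatial keyframes, weighting each keyframe by the inverse of its
    # angular distance from the current view direction.
    import numpy as np

    keyframes = [                        # (unit view direction, filter params)
        (np.array([1.0, 0.0, 0.0]), {"saturation": 1.4}),
        (np.array([0.0, 0.0, 1.0]), {"saturation": 0.7}),
    ]

    def view_params(view_dir, eps=1e-6):
        """Blend keyframe filter parameters by inverse angular distance."""
        view_dir = view_dir / np.linalg.norm(view_dir)
        angles = [np.arccos(np.clip(view_dir @ d, -1, 1)) for d, _ in keyframes]
        weights = np.array([1.0 / (a + eps) for a in angles])
        weights /= weights.sum()
        return {"saturation": sum(w * p["saturation"]
                                  for w, (_, p) in zip(weights, keyframes))}

    print(view_params(np.array([1.0, 0.0, 1.0])))  # view between keyframes
    ```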
  • Patent number: 11328458
    Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point and an end point of a color map path. The computing system computes a color map path between the start point and the end point, constrained to traverse colors having uniform transitions in one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path, a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: May 10, 2022
    Assignee: Adobe Inc.
    Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
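    A minimal sketch of sampling a color map path with uniform transitions: linear interpolation in LCh (lightness, chroma, hue) coordinates with the hue taken along the shorter arc. The endpoint values are illustrative, and conversion of the sampled LCh colors to sRGB for display is assumed to happen elsewhere.

    ```python
    # Minimal sketch: sample a color map path with uniform steps in
    # lightness, chroma, and hue; the hue channel wraps, so interpolate
    # along the shorter arc.
    import numpy as np

    def lch_path(start, end, n=7):
        """Uniformly interpolate (L, C, h) colors, taking the shorter hue arc."""
        L0, C0, h0 = start
        L1, C1, h1 = end
        dh = (h1 - h0 + 180) % 360 - 180      # shortest signed hue difference
        ts = np.linspace(0, 1, n)
        return [(L0 + t * (L1 - L0), C0 + t * (C1 - C0), (h0 + t * dh) % 360)
                for t in ts]

    for L, C, h in lch_path((30, 60, 250), (90, 20, 90)):
        print(f"L={L:5.1f}  C={C:5.1f}  h={h:6.1f}")
    ```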
  • Patent number: 11288771
    Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: March 29, 2022
    Assignee: Adobe Inc.
    Inventors: Yulun Zhang, Zhifei Zhang, Jose Ignacio Echevarria Vallespi, Zhaowen Wang, Stephen DiVerdi
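    To make the two named training losses concrete, here is a hedged PyTorch sketch: a texture loss computed on a high-frequency band and a degradation loss that downscales the reconstruction back to the input resolution. The Laplacian filter, L1 distances, and tensor shapes are assumptions, not the patent's exact formulation.

    ```python
    # Hedged sketch of the two losses: texture loss on a high-frequency
    # band, degradation loss on a downscaled reconstruction.
    import torch
    import torch.nn.functional as F

    def high_freq(img):
        """Extract a high-frequency band with a Laplacian filter."""
        k = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        k = k.view(1, 1, 3, 3).repeat(img.shape[1], 1, 1, 1)
        return F.conv2d(img, k, padding=1, groups=img.shape[1])

    def losses(reconstructed, reference_patch, low_res_input):
        texture = F.l1_loss(high_freq(reconstructed), high_freq(reference_patch))
        downscaled = F.interpolate(reconstructed, size=low_res_input.shape[-2:],
                                   mode="bilinear", align_corners=False)
        degradation = F.l1_loss(downscaled, low_res_input)
        return texture, degradation

    rec = torch.rand(1, 3, 64, 64)      # 8x upscaled reconstruction
    ref = torch.rand(1, 3, 64, 64)      # matched reference patch
    lr = torch.rand(1, 3, 8, 8)         # original low-resolution input
    print(losses(rec, ref, lr))
    ```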
  • Patent number: 11281351
    Abstract: Techniques for interacting with virtual environments. For example, a virtual reality application outputs a three-dimensional virtual reality scene. The application receives a creation of a slicing volume that is positioned within the three-dimensional virtual reality scene. The slicing volume includes virtual elements of an object within the scene. The application projects the slicing volume onto a two-dimensional view. The application displays the two-dimensional view within the three-dimensional virtual reality scene. The application associates a surface of a physical object with the two-dimensional view. The application receives an interaction with the surface of the physical object and, based on the interaction, selects one or more virtual elements.
    Type: Grant
    Filed: November 15, 2019
    Date of Patent: March 22, 2022
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen DiVerdi, Kazi Rubaiat Habib, Roberto Montano Murillo
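    A minimal sketch of the slicing-volume step under simplifying assumptions: virtual elements inside an axis-aligned box are kept and projected orthographically onto a two-dimensional view. The element positions, box bounds, and drop-depth projection are illustrative choices, not the patent's geometry.

    ```python
    # Minimal sketch: keep virtual elements inside an axis-aligned
    # slicing volume and project them orthographically to a 2D view.
    import numpy as np

    elements = np.array([[0.2, 0.5, 0.1],   # virtual element positions
                         [0.9, 0.4, 0.7],
                         [1.8, 0.2, 0.3]])
    box_min, box_max = np.array([0., 0., 0.]), np.array([1., 1., 1.])

    inside = np.all((elements >= box_min) & (elements <= box_max), axis=1)
    sliced = elements[inside]
    view_2d = sliced[:, :2]               # orthographic projection: drop depth

    print("elements in slicing volume:\n", sliced)
    print("2D view coordinates:\n", view_2d)
    ```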
  • Publication number: 20220060671
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Application
    Filed: November 4, 2021
    Publication date: February 24, 2022
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
  • Patent number: 11216170
    Abstract: The present disclosure is directed toward systems and methods that enable simultaneous viewing and editing of audio-visual content within a virtual-reality environment (i.e., while wearing a virtual-reality device). For example, the virtual-reality editing system allows for editing of audio-visual content while viewing the audio-visual content via a virtual-reality device. In particular, the virtual-reality editing system provides an editing interface over a display of audio-visual content provided via a virtual-reality device (e.g., a virtual-reality headset) that allows for editing of the audio-visual content.
    Type: Grant
    Filed: July 31, 2020
    Date of Patent: January 4, 2022
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Aaron Hertzmann, Cuong Nguyen
  • Patent number: 11178374
    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: November 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
  • Publication number: 20210342974
    Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
    Type: Application
    Filed: April 29, 2020
    Publication date: November 4, 2021
    Inventors: Yulun Zhang, Zhifei Zhang, Jose Ignacio Echevarria Vallespi, Zhaowen Wang, Stephen DiVerdi
  • Publication number: 20210272331
    Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, a target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
    Type: Application
    Filed: May 18, 2021
    Publication date: September 2, 2021
    Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
  • Patent number: 11107257
    Abstract: Disclosed herein are embodiments of systems and computer-implemented methods for extracting a set of discrete colors from an input image. A playful palette may be automatically generated from the set of discrete colors, where the playful palette contains a gamut limited to a blend of the set of discrete colors. A representation of the playful palette may be displayed on a graphical user interface of an electronic device. In a first method, an optimization may be performed using a bidirectional objective function comparing the color gamut of the input image with a rendering of a candidate playful palette. Initial blobs may be generated by clustering. In a second method, color subsampling may be performed from the image, and a self-organizing map (SOM) may be generated. The SOM colors may then be clustered, and each pixel of the SOM may be replaced with an average color value to generate a cluster map.
    Type: Grant
    Filed: August 1, 2018
    Date of Patent: August 31, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Jose Ignacio Echevarria Vallespi, Jingwan Lu
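    As a rough stand-in for the clustering stages described above, here is a hedged Python sketch that subsamples colors from an image and clusters them into a small discrete palette. Plain k-means is used in place of the patent's SOM and bidirectional-objective methods, and the image is synthetic.

    ```python
    # Hedged sketch: extract a set of discrete colors by subsampling an
    # image and clustering the samples (plain k-means as a stand-in).
    import numpy as np

    rng = np.random.default_rng(1)
    image = rng.random((240, 320, 3))            # stand-in RGB image
    pixels = image.reshape(-1, 3)
    sample = pixels[rng.choice(len(pixels), 2000, replace=False)]

    def kmeans(points, k=5, iters=20):
        """Lloyd's algorithm: alternate nearest-center assignment and
        center updates until iters is exhausted."""
        centers = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            d = ((points[:, None] - centers[None]) ** 2).sum(-1)
            labels = d.argmin(1)
            for i in range(k):
                if (labels == i).any():
                    centers[i] = points[labels == i].mean(0)
        return centers

    print("extracted palette:\n", kmeans(sample).round(3))
    ```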
  • Patent number: 11043012
    Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, a target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
    Type: Grant
    Filed: August 6, 2019
    Date of Patent: June 22, 2021
    Assignee: Adobe Inc.
    Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
  • Publication number: 20210149543
    Abstract: Techniques for interacting with virtual environments. For example, a virtual reality application outputs a three-dimensional virtual reality scene. The application receives a creation of a slicing volume that is positioned within the three-dimensional virtual reality scene. The slicing volume includes virtual elements of an object within the scene. The application projects the slicing volume onto a two-dimensional view. The application displays the two-dimensional view within the three-dimensional virtual reality scene. The application associates a surface of a physical object with the two-dimensional view. The application receives an interaction with the surface of the physical object and, based on the interaction, selects one or more virtual elements.
    Type: Application
    Filed: November 15, 2019
    Publication date: May 20, 2021
    Inventors: Cuong Nguyen, Stephen DiVerdi, Kazi Rubaiat Habib, Roberto Montano Murillo
  • Publication number: 20210134025
    Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point and an end point of a color map path. The computing system computes a color map path between the start point and the end point, constrained to traverse colors having uniform transitions in one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path, a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values.
    Type: Application
    Filed: January 15, 2021
    Publication date: May 6, 2021
    Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
  • Patent number: 10957063
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: March 23, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Cuong Nguyen, Aaron Hertzmann, Feng Liu
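    A minimal sketch of the depth-conflict check under stated assumptions: if any feature point in the current view is closer to the camera than a user interface element, the element is pulled forward by a margin. The depth values and margin are illustrative, and the patent's blurring fallback is omitted.

    ```python
    # Minimal sketch: detect a depth conflict between a UI element and
    # tracked video feature points, then move the element in front.
    import numpy as np

    feature_depths = np.array([2.3, 1.1, 3.7])   # depths of video feature points
    ui_depth = 1.5                               # depth of the UI element
    margin = 0.2

    nearest = feature_depths.min()
    if ui_depth >= nearest:                      # UI would intersect the scene
        ui_depth = max(nearest - margin, 0.1)    # pull it forward, clamped
    print("adjusted UI depth:", ui_depth)
    ```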
  • Patent number: 10949057
    Abstract: Techniques are described for modifying a virtual reality environment to include or remove contextual information describing a virtual object within the virtual reality environment. The virtual object includes a user interface object associated with a development user interface of the virtual reality environment. In some cases, the contextual information includes information describing functions of controls included on the user interface object. In some cases, the virtual reality environment is modified based on a distance between the location of the user interface object and a location of a viewpoint within the virtual reality environment. Additionally or alternatively, the virtual reality environment is modified based on an elapsed time of the location of the user interface object remaining in a location.
    Type: Grant
    Filed: April 14, 2020
    Date of Patent: March 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen DiVerdi, Seth Walker, Brian Williams
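    A hedged sketch of the two modification triggers named above: contextual information is shown when the user interface object is within a distance threshold of the viewpoint, or when it has remained in place past a dwell time. The show_context helper and its thresholds are illustrative assumptions, not the patent's parameters.

    ```python
    # Hedged sketch: decide whether to show contextual information based
    # on viewpoint distance or on how long the UI object has stayed put.
    import numpy as np

    def show_context(ui_pos, view_pos, stationary_secs,
                     near_dist=1.0, dwell_secs=2.0):
        """True if the UI object is near the viewpoint or has dwelled."""
        dist = np.linalg.norm(np.asarray(ui_pos) - np.asarray(view_pos))
        return dist <= near_dist or stationary_secs >= dwell_secs

    print(show_context((0.4, 1.2, 0.3), (0.0, 1.5, 0.0), stationary_secs=0.5))
    print(show_context((3.0, 1.2, 0.3), (0.0, 1.5, 0.0), stationary_secs=2.5))
    ```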