Patents by Inventor Stephen DiVerdi
Stephen DiVerdi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240348763
Abstract: Methods and systems disclosed herein relate generally to body-driven interactions with three-dimensional (3D) layered graphics. The system includes a video capture module that can receive a video stream. The video stream may depict a subject and a 3D layered image, in which the 3D layered image has an associated viewpoint. The system may also include a video processing module that can identify one or more actions performed by the subject. The video processing module can determine a transform operation to be applied to the viewpoint. The transform operation may include at least one of changing the zoom level of the viewpoint, moving the location of the viewpoint, and changing the direction of the viewpoint. The video processing module may apply the transform operation to the 3D layered image and then render the transformed 3D layered image on the video stream.
Type: Application
Filed: April 14, 2023
Publication date: October 17, 2024
Inventors: Ana Maria Cardenas Gasca, Stephen DiVerdi
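The three transform operations named in the abstract (changing zoom, moving the viewpoint, changing its direction) can be sketched as pure functions over a viewpoint record. The `Viewpoint` fields and the dictionary encoding of an operation below are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Viewpoint:
    # Hypothetical viewpoint state for a 3D layered image.
    x: float
    y: float
    z: float
    yaw_deg: float   # viewing direction about the vertical axis, in degrees
    zoom: float      # 1.0 = no magnification

def apply_transform(vp: Viewpoint, op: dict) -> Viewpoint:
    """Apply one of the three transform kinds from the abstract:
    changing zoom, moving the viewpoint, or changing its direction."""
    kind = op["kind"]
    if kind == "zoom":
        return replace(vp, zoom=vp.zoom * op["factor"])
    if kind == "move":
        return replace(vp, x=vp.x + op["dx"], y=vp.y + op["dy"], z=vp.z + op["dz"])
    if kind == "rotate":
        return replace(vp, yaw_deg=(vp.yaw_deg + op["delta_deg"]) % 360.0)
    raise ValueError(f"unknown transform kind: {kind}")
```

Keeping the viewpoint immutable (`frozen=True`) makes each rendered frame a function of a single viewpoint value, which simplifies applying a recognized action to the layered image before compositing it onto the video stream.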
-
Publication number: 20230368452
Abstract: A computing system captures a first image, comprising an object in a first position, using a camera. The object has indicators indicating points of interest on the object. The computing system receives first user input linking at least a subset of the indicators and establishing relationships between the points of interest on the object, and second user input comprising a graphic element and a mapping between the graphic element and the object. The computing system captures second images, comprising the object in one or more modified positions, using the camera. The computing system tracks the modified positions of the object across the second images using the indicators and the relationships between the points of interest. The computing system generates a virtual graphic based on the one or more modified positions, the graphic element, and the mapping between the graphic element and the object.
Type: Application
Filed: May 10, 2022
Publication date: November 16, 2023
Inventors: Jiahao Li, Li-Yi Wei, Stephen DiVerdi, Kazi Rubaiat Habib
-
Patent number: 11776232
Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices, and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
Type: Grant
Filed: February 8, 2022
Date of Patent: October 3, 2023
Assignee: Adobe Inc.
Inventors: Kazi Rubaiat Habib, Tianyi Wang, Stephen DiVerdi, Li-Yi Wei
-
Publication number: 20230252746
Abstract: Certain aspects and features of this disclosure relate to virtual 3D pointing and manipulation. For example, video communication is established between a presenter client device and a viewer client device. A presenter video image is captured. A 3D image of a 3D object is rendered on the client devices, and a presenter avatar is rendered on at least the viewer client device. The presenter avatar includes at least a portion of the presenter video image. When a positional input is detected at the presenter client device, the system renders, on the viewer client device, an articulated virtual appurtenance associated with the positional input, the 3D image, and the presenter avatar. A virtual interaction between the articulated virtual appurtenance and the 3D image appears to a viewer as naturally positioned for the interaction with respect to the viewer.
Type: Application
Filed: February 8, 2022
Publication date: August 10, 2023
Inventors: Kazi Rubaiat Habib, Tianyi Wang, Stephen DiVerdi, Li-Yi Wei
-
Patent number: 11562169
Abstract: The present disclosure is directed towards methods and systems for determining multimodal image edits for a digital image. The systems and methods receive a digital image and analyze the digital image. The systems and methods further generate a feature vector of the digital image, wherein each value of the feature vector represents a respective feature of the digital image. Additionally, based on the feature vector and determined latent variables, the systems and methods generate a plurality of determined image edits for the digital image, which includes determining a plurality of sets of potential image attribute values and selecting a plurality of sets of determined image attribute values from the plurality of sets of potential image attribute values, wherein each set of determined image attribute values comprises a determined image edit of the plurality of determined image edits.
Type: Grant
Filed: February 7, 2020
Date of Patent: January 24, 2023
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Matthew Douglas Hoffman, Ardavan Saeedi
-
Patent number: 11551384
Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, the target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
Type: Grant
Filed: May 18, 2021
Date of Patent: January 10, 2023
Assignee: Adobe Inc.
Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
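As a rough illustration of the palette-flow idea: when both palettes are treated as equal-weight, minimizing an earth-mover distance between them reduces to a minimum-cost one-to-one assignment, which brute force can solve for small palettes. The function names and the squared-RGB distance below are assumptions for the sketch, not the patent's exact formulation:

```python
from itertools import permutations

def palette_flow(target, source):
    """Map each target-palette color to a source-palette color by
    minimizing total squared RGB distance over all one-to-one
    assignments. With equal-weight palettes, this is the assignment
    special case of the earth-mover distance; brute force suffices
    for small palettes."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(
        permutations(source, len(target)),
        key=lambda perm: sum(d2(t, s) for t, s in zip(target, perm)),
    )
    return dict(zip(target, best))

def recolor(objects, flow):
    """Recolor each (name, color) object via the palette flow,
    leaving colors outside the target palette unchanged."""
    return [(name, flow.get(color, color)) for name, color in objects]
```

A recoloring command would then apply `recolor` to every filled object in the target graphic, replacing each target-palette color with its mapped source color.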
-
Patent number: 11539932
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
Type: Grant
Filed: November 4, 2021
Date of Patent: December 27, 2022
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
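The interpolation step can be illustrated with a minimal sketch: weight each spatial keyframe's filter parameter by the inverse of its angular distance from the current view direction. The keyframe representation and the single scalar parameter (e.g., a color-grading gain) are hypothetical stand-ins for the patent's filter parameters:

```python
import math

def angular_dist(a, b):
    """Great-circle angle between two unit view-direction vectors."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot)

def view_filter_param(view_dir, keyframes, eps=1e-9):
    """Inverse-angular-distance weighting of per-keyframe filter
    parameters. `keyframes` is a list of (unit_direction, param)
    pairs, a simplified stand-in for spatial keyframes."""
    weights = []
    for direction, param in keyframes:
        d = angular_dist(view_dir, direction)
        if d < eps:              # looking exactly at a keyframe
            return param
        weights.append((1.0 / d, param))
    total = sum(w for w, _ in weights)
    return sum(w * p for w, p in weights) / total
```

As the device orientation changes, re-evaluating `view_filter_param` per frame yields smoothly varying, view-specific filter parameters between keyframes.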
-
Patent number: 11328458
Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point of a color map path and an end point of a color map path. The computing system computes a color map path between the start point and the end point constrained to traverse colors having uniform transitions between one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path and a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values.
Type: Grant
Filed: January 15, 2021
Date of Patent: May 10, 2022
Assignee: Adobe Inc.
Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
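A gradient with uniform transitions in lightness, chroma, and hue can be sketched by interpolating linearly in LCH coordinates, taking the shorter arc around the hue circle. The `(L, C, H)` tuple representation (hue in degrees) is an assumption for illustration, not the patent's data model; `steps` must be at least 2:

```python
def lch_gradient(start, end, steps):
    """Sample `steps` colors along a path from `start` to `end` in
    LCH space, with uniform per-step changes in lightness, chroma,
    and hue (shortest way around the hue circle)."""
    l0, c0, h0 = start
    l1, c1, h1 = end
    dh = ((h1 - h0 + 180.0) % 360.0) - 180.0   # shortest signed hue arc
    out = []
    for i in range(steps):
        t = i / (steps - 1)
        out.append((l0 + t * (l1 - l0),
                    c0 + t * (c1 - c0),
                    (h0 + t * dh) % 360.0))
    return out
```

Sampling a color map path in a perceptually motivated space like LCH is what keeps successive swatches of the resulting color map evenly spaced to the eye across the represented data range.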
-
Patent number: 11288771
Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
Type: Grant
Filed: April 29, 2020
Date of Patent: March 29, 2022
Assignee: Adobe Inc.
Inventors: Yulun Zhang, Zhifei Zhang, Jose Ignacio Echevarria Vallespi, Zhaowen Wang, Stephen DiVerdi
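The degradation loss at the end of the abstract can be illustrated in miniature: downscale the reconstructed image and compare it against the low-resolution input, penalizing reconstructions that are inconsistent with what was actually observed. The box-filter downscaling and mean-absolute-difference below are simplifying assumptions, not the patent's exact loss:

```python
def box_downscale(img, f):
    """Average-pool a 2D grayscale image (list of rows) by integer factor f."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * f + i][x * f + j] for i in range(f) for j in range(f)) / (f * f)
             for x in range(w // f)]
            for y in range(h // f)]

def degradation_loss(recon, low_res, f):
    """Mean absolute difference between the downscaled reconstruction
    and the low-resolution input image."""
    down = box_downscale(recon, f)
    n = len(down) * len(down[0])
    return sum(abs(a - b)
               for row_d, row_l in zip(down, low_res)
               for a, b in zip(row_d, row_l)) / n
```

During training, this term is combined with the texture loss so the model hallucinates high-frequency detail from the reference while staying faithful to the input at low resolution.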
-
Patent number: 11281351
Abstract: Techniques for interacting with virtual environments. For example, a virtual reality application outputs a three-dimensional virtual reality scene. The application receives a creation of a slicing volume that is positioned within the three-dimensional virtual space. The slicing volume includes virtual elements of an object within the scene. The application projects the slicing volume onto a two-dimensional view. The application displays the two-dimensional view within the three-dimensional virtual reality scene. The application associates a surface of a physical object with the two-dimensional view. The application receives an interaction with the surface of the physical object, and based on the interaction, selects one or more virtual elements.
Type: Grant
Filed: November 15, 2019
Date of Patent: March 22, 2022
Assignee: Adobe Inc.
Inventors: Cuong Nguyen, Stephen DiVerdi, Kazi Rubaiat Habib, Roberto Montano Murillo
-
Publication number: 20220060671
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
Type: Application
Filed: November 4, 2021
Publication date: February 24, 2022
Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
-
Patent number: 11216170
Abstract: The present disclosure is directed toward systems and methods that enable simultaneous viewing and editing of audio-visual content within a virtual-reality environment (i.e., while wearing a virtual-reality device). For example, the virtual-reality editing system allows for editing of audio-visual content while viewing the audio-visual content via a virtual-reality device. In particular, the virtual-reality editing system provides an editing interface over a display of audio-visual content provided via a virtual-reality device (e.g., a virtual-reality headset) that allows for editing of the audio-visual content.
Type: Grant
Filed: July 31, 2020
Date of Patent: January 4, 2022
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Aaron Hertzmann, Cuong Nguyen
-
Patent number: 11178374
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that generate and dynamically change filter parameters for a frame of a 360-degree video based on detecting a field of view from a computing device. As a computing device rotates or otherwise changes orientation, for instance, the disclosed systems can detect a field of view and interpolate one or more filter parameters corresponding to nearby spatial keyframes of the 360-degree video to generate view-specific filter parameters. By generating and storing filter parameters for spatial keyframes corresponding to different times and different view directions, the disclosed systems can dynamically adjust color grading or other visual effects using interpolated, view-specific filter parameters to render a filtered version of the 360-degree video.
Type: Grant
Filed: May 31, 2019
Date of Patent: November 16, 2021
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Seth Walker, Oliver Wang, Cuong Nguyen
-
Publication number: 20210342974
Abstract: Systems and methods for texture hallucination with a large upscaling factor are described. Embodiments of the systems and methods may receive an input image and a reference image, extract an upscaled feature map from the input image, match the input image to a portion of the reference image, wherein a resolution of the reference image is higher than a resolution of the input image, concatenate the upscaled feature map with a reference feature map corresponding to the portion of the reference image to produce a concatenated feature map, and generate a reconstructed image based on the concatenated feature map using a machine learning model trained with a texture loss and a degradation loss, wherein the texture loss is based on a high frequency band filter, and the degradation loss is based on a downscaled version of the reconstructed image.
Type: Application
Filed: April 29, 2020
Publication date: November 4, 2021
Inventors: Yulun Zhang, Zhifei Zhang, Jose Ignacio Echevarria Vallespi, Zhaowen Wang, Stephen DiVerdi
-
Publication number: 20210272331
Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, the target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
Type: Application
Filed: May 18, 2021
Publication date: September 2, 2021
Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
-
Patent number: 11107257
Abstract: Disclosed herein are embodiments of systems and computer-implemented methods for extracting a set of discrete colors from an input image. A playful palette may be automatically generated from the set of discrete colors, where the playful palette contains a gamut limited to a blend of the set of discrete colors. A representation of the playful palette may be displayed on a graphical user interface of an electronic device. In a first method, an optimization may be performed using a bidirectional objective function comparing the color gamut of the input image and a rendering of a candidate playful palette. Initial blobs may be generated by clustering. In a second method, color subsampling may be performed from the image, and a self-organizing map (SOM) may be generated. Clustering the SOM colors may be performed, and each pixel of the SOM may be replaced with an average color value to generate a cluster map.
Type: Grant
Filed: August 1, 2018
Date of Patent: August 31, 2021
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Jose Ignacio Echevarria Vallespi, Jingwan Lu
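The clustering step common to both methods can be sketched with plain k-means over subsampled colors, with the cluster means serving as the extracted discrete palette. The naive initialization (first `k` samples) and fixed iteration count are simplifications, not the patent's SOM-based procedure:

```python
def extract_palette(pixels, k, iters=20):
    """Cluster RGB pixels with plain k-means and return the k cluster
    means as a discrete color palette. Naive initialization: the
    first k samples."""
    centers = [tuple(float(c) for c in p) for p in pixels[:k]]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            buckets[nearest].append(p)
        centers = [tuple(sum(ch) / len(b) for ch in zip(*b)) if b else centers[i]
                   for i, b in enumerate(buckets)]
    return centers
```

The resulting discrete colors would then seed the playful palette, whose gamut is limited to blends of these extracted colors.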
-
Patent number: 11043012
Abstract: Certain embodiments involve flow-based color transfers from a source graphic to a target graphic. For instance, a palette flow is computed that maps colors of a target color palette to colors of the source color palette (e.g., by minimizing an earth-mover distance with respect to the source and target color palettes). In some embodiments, such color palettes are extracted from vector graphics using path and shape data. To modify the target graphic, the target color from the target graphic is mapped, via the palette flow, to a modified target color using color information of the source color palette. A modification to the target graphic is performed (e.g., responsive to a preview function or recoloring command) by recoloring an object in the target color with the modified target color.
Type: Grant
Filed: August 6, 2019
Date of Patent: June 22, 2021
Assignee: Adobe Inc.
Inventors: Ankit Phogat, Vineet Batra, Sayan Ghosh, Stephen DiVerdi, Scott Cohen
-
Publication number: 20210149543
Abstract: Techniques for interacting with virtual environments. For example, a virtual reality application outputs a three-dimensional virtual reality scene. The application receives a creation of a slicing volume that is positioned within the three-dimensional virtual space. The slicing volume includes virtual elements of an object within the scene. The application projects the slicing volume onto a two-dimensional view. The application displays the two-dimensional view within the three-dimensional virtual reality scene. The application associates a surface of a physical object with the two-dimensional view. The application receives an interaction with the surface of the physical object, and based on the interaction, selects one or more virtual elements.
Type: Application
Filed: November 15, 2019
Publication date: May 20, 2021
Inventors: Cuong Nguyen, Stephen DiVerdi, Kazi Rubaiat Habib, Roberto Montano Murillo
-
Publication number: 20210134025
Abstract: In some embodiments, a computing system generates a color gradient for data visualizations by displaying a color selection design interface. The computing system receives a user input identifying a start point of a color map path and an end point of a color map path. The computing system computes a color map path between the start point and the end point constrained to traverse colors having uniform transitions between one or more of lightness, chroma, and hue. The computing system selects a color gradient having a first color corresponding to the start point of the color map path and a second color corresponding to the end point of the color map path, and additional colors corresponding to additional points along the color map path. The computing system generates a color map for visually representing a range of data values.
Type: Application
Filed: January 15, 2021
Publication date: May 6, 2021
Inventors: Jose Ignacio Echevarria Vallespi, Stephen DiVerdi, Hema Susmita Padala, Bernard Kerr, Dmitry Baranovskiy
-
Patent number: 10957063
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating modified video content to reduce depth conflicts between user interface elements and video objects. For example, the disclosed systems can analyze an input video to identify feature points that designate objects within the input video and to determine the depths of the identified feature points. In addition, the disclosed systems can compare the depths of the feature points with a depth of a user interface element to determine whether there are any depth conflicts. In response to detecting a depth conflict, the disclosed systems can modify the depth of the user interface element to reduce or avoid the depth conflict. Furthermore, the disclosed systems can apply a blurring effect to an area around a user interface element to reduce the effect of depth conflicts.
Type: Grant
Filed: March 26, 2018
Date of Patent: March 23, 2021
Assignee: Adobe Inc.
Inventors: Stephen DiVerdi, Cuong Nguyen, Aaron Hertzmann, Feng Liu
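The conflict-detection and adjustment steps can be sketched as a depth comparison followed by moving the UI element in front of the nearest conflicting feature point. Depth units, the safety margin, and the function name are hypothetical choices for this sketch:

```python
def resolve_ui_depth(ui_depth, feature_depths, margin=0.1):
    """Detect a depth conflict between a UI element and scene feature
    points, and if found, move the UI element just in front of the
    nearest conflicting point. Depths are distances from the viewer,
    so smaller means closer."""
    nearest = min(feature_depths)
    if ui_depth >= nearest:      # UI would intersect or sit behind scene content
        return max(nearest - margin, 0.0)
    return ui_depth
```

In practice only feature points whose projections overlap the UI element's screen region would be considered, and a blurring effect around the element could soften any residual conflict.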