Patents by Inventor Stephen Joseph DiVerdi

Stephen Joseph DiVerdi has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11783534
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Publication number: 20230267696
    Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Applicant: Adobe Inc.
    Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
  • Patent number: 11574450
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
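The abstract above describes projecting tracked 2D user-interface coordinates into a 3D scene representation. A minimal sketch of that step, assuming an illustrative pinhole camera looking down -z and a horizontal ground plane (the intrinsics, pose, and plane are assumptions, not the patent's actual model):

```python
def unproject_to_plane(u, v, fx, fy, cx, cy, plane_y=0.0, cam_pos=(0.0, 1.5, 0.0)):
    """Cast a ray through pixel (u, v) of a pinhole camera and intersect
    it with a horizontal plane at height `plane_y`, yielding a 3D point
    for the tracked 2D coordinate. Camera looks along -z; v grows downward."""
    # Ray direction in camera space.
    dx = (u - cx) / fx
    dy = -(v - cy) / fy
    dz = -1.0
    if dy == 0:
        return None  # ray parallel to the plane, no intersection
    # Solve cam_pos.y + t * dy == plane_y for the ray parameter t.
    t = (plane_y - cam_pos[1]) / dy
    if t <= 0:
        return None  # intersection behind the camera
    return (cam_pos[0] + t * dx, plane_y, cam_pos[2] + t * dz)
```

A pixel below the image center (larger v) maps to a ground point in front of and below the camera, which is how a 2D touch on the video can anchor a graphical element to scene geometry.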
  • Patent number: 11532106
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: December 20, 2022
    Assignee: Adobe Inc.
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
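The gradient method above can be sketched in a few lines: sample colors along the gradient path, pick a number of pivot colors driven by the smoothness input, build a linear gradient between each consecutive pivot pair, and concatenate the segments. The pivot-count mapping and segment sampling here are illustrative assumptions, not the patented formula:

```python
def piecewise_gradient(path_colors, smoothness, samples_per_segment=16):
    """Build a color gradient from colors sampled along a gradient path.
    A smoothness input controls how many pivots are kept (assumed here:
    more smoothness -> fewer pivots -> broader blends); each consecutive
    pivot pair yields one linear RGB gradient segment."""
    n_pivots = max(2, round(len(path_colors) / max(1, smoothness)))
    step = (len(path_colors) - 1) / (n_pivots - 1)
    pivots = [path_colors[round(i * step)] for i in range(n_pivots)]

    gradient = []
    for (r0, g0, b0), (r1, g1, b1) in zip(pivots, pivots[1:]):
        for t in (i / samples_per_segment for i in range(samples_per_segment)):
            gradient.append((
                r0 + (r1 - r0) * t,
                g0 + (g1 - g0) * t,
                b0 + (b1 - b0) * t,
            ))
    gradient.append(pivots[-1])
    return gradient
```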
  • Publication number: 20220148267
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
    Type: Application
    Filed: October 26, 2021
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
  • Publication number: 20220058841
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Application
    Filed: November 5, 2021
    Publication date: February 24, 2022
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 11182932
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: November 23, 2021
    Assignee: Adobe Inc.
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 11158130
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
  • Publication number: 20210272353
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Patent number: 11050994
    Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
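Of the correction techniques the abstract lists, depth filtering is the most self-contained. A sketch of one plausible form, a median filter over a per-pixel depth map that suppresses outliers which would otherwise smear when the viewpoint moves off the capture origin (the kernel choice is an assumption, not the patent's method):

```python
def depth_filter(depth, k=1):
    """Median-filter a depth map (list of rows) with a (2k+1)^2 window,
    clamped at image borders. Isolated depth outliers are replaced by
    the local median, stabilizing reprojection to a new viewpoint."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for i in range(h):
        for j in range(w):
            window = [
                depth[ii][jj]
                for ii in range(max(0, i - k), min(h, i + k + 1))
                for jj in range(max(0, j - k), min(w, j + k + 1))
            ]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out
```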
  • Patent number: 11030796
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: June 8, 2021
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Publication number: 20210150776
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Application
    Filed: November 18, 2019
    Publication date: May 20, 2021
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 10924633
    Abstract: Techniques are disclosed for parametric color mixing in a digital painting application. A methodology implementing the techniques according to an embodiment includes generating a Bezier curve extending from a first point to a second point in a 3-Dimensional space. The first and second points are specified by coordinates based on red-green-blue (RGB) values of first and second mixing colors, respectively. The Bezier curve is defined by a selected curvature parameter which can be related to the paint medium, such as oil colors, water colors, pastels, etc., and which further specifies additive or subtractive mixing. The method also includes locating a point on the Bezier curve, the point determined by a selected mixing ratio parameter specifying a ratio of the first mixing color to the second mixing color. The method further includes generating a color mix based on RGB values specified by coordinates of the located point on the Bezier curve.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Sarah Garanganao Almeda, Jose Ignacio Echevarria Vallespi
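The parametric mixing above is concrete enough to sketch: a quadratic Bezier curve between the two colors in RGB space, with the mixing ratio selecting a point on the curve. The control-point construction below (bending toward black for subtractive-like media, toward white for additive) is an illustrative assumption for the curvature parameter:

```python
def bezier_mix(c0, c1, ratio, curvature=0.0):
    """Mix two RGB colors along a quadratic Bezier curve in RGB space.
    `ratio` in [0, 1] picks the point on the curve; `curvature` bends
    the curve toward black (< 0, e.g. paint-like subtractive mixing)
    or white (> 0, additive mixing). curvature=0 is linear interpolation."""
    mid = tuple((a + b) / 2 for a, b in zip(c0, c1))
    target = (255.0,) * 3 if curvature > 0 else (0.0,) * 3
    k = abs(curvature)
    # Control point: midpoint pulled toward black or white by |curvature|.
    ctrl = tuple(m + (t - m) * k for m, t in zip(mid, target))
    t = ratio
    return tuple(
        (1 - t) ** 2 * a + 2 * (1 - t) * t * c + t ** 2 * b
        for a, b, c in zip(c0, c1, ctrl)
    )
```

At `ratio=0` or `ratio=1` the mix returns the pure endpoint colors regardless of curvature, matching the Bezier endpoints described in the abstract.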
  • Patent number: 10897609
    Abstract: The present disclosure relates to methods and systems that may improve and/or modify images captured using multiscopic image capture systems. In an example embodiment, burst image data is captured via a multiscopic image capture system. The burst image data may include at least one image pair. The at least one image pair is aligned based on at least one rectifying homography function. The at least one aligned image pair is warped based on a stereo disparity between the respective images of the image pair. The warped and aligned images are then stacked and a denoising algorithm is applied. Optionally, a high dynamic range algorithm may be applied to at least one output image of the aligned, warped, and denoised images.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: January 19, 2021
    Assignee: Google LLC
    Inventors: Jonathan Tilton Barron, Stephen Joseph DiVerdi, Ryan Geiss
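The align-warp-stack-denoise pipeline above ends with a denoising pass over the stacked frames. As a minimal stand-in for the unspecified denoising algorithm, a per-pixel mean over the aligned stack already reduces zero-mean noise by roughly the square root of the burst size:

```python
def stack_denoise(frames):
    """Average a stack of aligned, warped burst frames per pixel.
    `frames` is a list of images (rows of pixel values); a simple mean
    stands in here for the patent's unspecified denoising algorithm."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [
        [sum(f[i][j] for f in frames) / n for j in range(w)]
        for i in range(h)
    ]
```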
  • Patent number: 10803642
    Abstract: Techniques and systems to support collaborative interaction as part of virtual reality video are described. In one example, a viewport is generated such that a reviewing user of a reviewing user device may view VR video viewed by a source user of a source user device. The viewport, for instance, may be configured as a border at least partially surrounding a portion of the VR video output by the reviewing VR device. In another instance, the viewport is configured to support output of thumbnails within an output of VR video by the reviewing VR device. Techniques and systems are also described to support communication of annotations between the source and reviewing VR devices. Techniques and systems are also described to support efficient distribution of VR video within a context of a content editing application.
    Type: Grant
    Filed: August 18, 2017
    Date of Patent: October 13, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Aaron Phillip Hertzmann, Brian David Williams
  • Patent number: 10791412
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Grant
    Filed: February 13, 2020
    Date of Patent: September 29, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
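The per-segment properties named above (position, intensity, focus) can be estimated from the audio itself. A sketch assuming first-order ambisonic (B-format) input with W/X/Y/Z channels, which is one common spatial-audio representation, not necessarily the one the patent uses:

```python
import math

def analyze_segment(w, x, y, z):
    """Estimate (azimuth, elevation, intensity, focus) for one time
    segment of first-order ambisonic audio. Direction comes from the
    correlation of each axis channel with W, intensity from W's mean
    energy, and focus from how directional (vs. diffuse) the segment is."""
    n = len(w)
    # Direction vector: average product of each axis channel with W.
    dx = sum(xi * wi for xi, wi in zip(x, w)) / n
    dy = sum(yi * wi for yi, wi in zip(y, w)) / n
    dz = sum(zi * wi for zi, wi in zip(z, w)) / n
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    intensity = sum(wi * wi for wi in w) / n           # mean energy of W
    mag = math.sqrt(dx * dx + dy * dy + dz * dz)
    focus = mag / intensity if intensity > 0 else 0.0  # 1 ~= one point source
    return azimuth, elevation, intensity, focus
```

These four numbers map naturally onto a rendered visualization: azimuth/elevation place the glyph, intensity scales it, and focus can drive its sharpness or color.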
  • Publication number: 20200296348
    Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
    Type: Application
    Filed: June 3, 2020
    Publication date: September 17, 2020
    Applicant: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
  • Patent number: 10701334
    Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
    Type: Grant
    Filed: October 11, 2017
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
  • Patent number: 10701431
    Abstract: Embodiments disclosed herein facilitate virtual reality (VR) video playback using handheld controller gestures. More specifically, jog and shuttle gestures are associated with controller rotations that can be tracked once a triggering event is detected (e.g., pressing and holding a controller play button). A corresponding jog or shuttle command can be initialized when the VR controller rotates more than a defined angular threshold in an associated rotational direction (e.g., yaw, pitch, roll). For example, the jog gesture can be associated with changes in controller yaw, and the shuttle gesture can be associated with changes in controller pitch. Subsequent controller rotations can be mapped to playback adjustments for a VR video, such as a frame adjustment for a jog gesture and a playback speed adjustment for the shuttle gesture. Corresponding visualizations of available gestures and progress bars can be generated or otherwise triggered to facilitate efficient VR video playback control.
    Type: Grant
    Filed: November 16, 2017
    Date of Patent: June 30, 2020
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Seth John Walker, Brian David Williams
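The gesture scheme above is essentially a threshold classifier on controller rotation after a press-and-hold trigger. A sketch with illustrative thresholds and mappings (1 frame per degree of extra yaw, 0.1x playback speed per degree of extra pitch are assumptions):

```python
import math

JOG_THRESHOLD_DEG = 15.0      # illustrative angular thresholds
SHUTTLE_THRESHOLD_DEG = 15.0

def classify_gesture(start, current):
    """Map controller rotation since the trigger press to a playback
    command: yaw past a threshold starts a jog (frame stepping), pitch
    past a threshold starts a shuttle (playback-speed scrubbing).
    `start` and `current` are (yaw, pitch) orientations in degrees."""
    d_yaw = current[0] - start[0]
    d_pitch = current[1] - start[1]
    if abs(d_yaw) >= JOG_THRESHOLD_DEG and abs(d_yaw) >= abs(d_pitch):
        # Rotation beyond the threshold maps to a frame offset.
        return ("jog", round(d_yaw - math.copysign(JOG_THRESHOLD_DEG, d_yaw)))
    if abs(d_pitch) >= SHUTTLE_THRESHOLD_DEG:
        # Rotation beyond the threshold maps to a playback-speed factor.
        extra = d_pitch - math.copysign(SHUTTLE_THRESHOLD_DEG, d_pitch)
        return ("shuttle", 1.0 + 0.1 * extra)
    return ("none", 0)
```

Keeping a dead zone below the threshold is what lets small, unintentional controller motion leave playback untouched until the user commits to a gesture.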
  • Publication number: 20200186957
    Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
    Type: Application
    Filed: February 13, 2020
    Publication date: June 11, 2020
    Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder