Patents by Inventor Stephen Joseph DiVerdi

Stephen Joseph DiVerdi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250168442
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media offering mechanisms for multimedia effect addition and editing support in text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Application
    Filed: January 21, 2025
    Publication date: May 22, 2025
    Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
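The transcript-driven editing described above hinges on mapping a selected text segment to a video time range. The sketch below illustrates that mapping only; it is not the patented implementation, and the word-timestamp format and the `segment_time_range` helper are assumptions.

```python
def segment_time_range(words, start_idx, end_idx):
    """Return (start_sec, end_sec) covering words[start_idx:end_idx + 1].

    `words` is a list of (token, start_sec, end_sec) tuples, as produced
    by a speech-to-text engine with word-level alignment.
    """
    selected = words[start_idx:end_idx + 1]
    return selected[0][1], selected[-1][2]

# The selected transcript text maps to the video segment to stylize.
words = [("hello", 0.0, 0.4), ("world", 0.5, 0.9), ("again", 1.0, 1.3)]
print(segment_time_range(words, 1, 2))  # (0.5, 1.3)
```

With a range in hand, a video effect and the matching text stylization can both be keyed off the same word indices.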
  • Publication number: 20250111695
    Abstract: In implementation of techniques for template-based behaviors in machine learning, a computing device implements a template system to receive a digital video and data executable to generate animated content. The template system determines a location within a frame of the digital video to place the animated content using a machine learning model. The template system then renders the animated content within the frame of the digital video at the location determined by the machine learning model. The template system then displays the rendered animated content within the frame of the digital video in a user interface.
    Type: Application
    Filed: December 18, 2023
    Publication date: April 3, 2025
    Applicant: Adobe Inc.
    Inventors: Wilmot Wei-Mau Li, Li-Yi Wei, Cuong D. Nguyen, Jakub Fiser, Hijung Shin, Stephen Joseph DiVerdi, Seth John Walker, Kazi Rubaiat Habib, Deepali Aneja, David Gilliaert Werner, Erica K. Schisler
  • Patent number: 12206930
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media offering mechanisms for multimedia effect addition and editing support in text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Grant
    Filed: January 13, 2023
    Date of Patent: January 21, 2025
    Assignee: Adobe Inc.
    Inventors: Kim Pascal Pimmel, Stephen Joseph DiVerdi, Jiaju Ma, Rubaiat Habib, Li-Yi Wei, Hijung Shin, Deepali Aneja, John G. Nelson, Wilmot Li, Dingzeyu Li, Lubomira Assenova Dontcheva, Joel Richard Brandt
  • Publication number: 20250006226
    Abstract: In various examples, a video effect is displayed in a live video stream in response to determining that a portion of the stream's audio corresponds to a text segment of a script associated with the video effect. For example, during presentation of the script, the audio stream is monitored to determine whether a portion of it corresponds to the text segment.
    Type: Application
    Filed: June 30, 2023
    Publication date: January 2, 2025
    Inventors: Deepali ANEJA, Rubaiat HABIB, Li-Yi WEI, Wilmot Wei-Mau LI, Stephen Joseph DIVERDI
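One simple way to realize the script-matching step described above is to compare the tail of the live transcription against the script segment. This is a hedged sketch under stated assumptions; `matches_segment` and its word-overlap heuristic are illustrative, not the claimed method.

```python
def matches_segment(transcribed, script_segment, threshold=0.8):
    """Return True when enough of the script segment's words appear,
    in order, at the tail of the transcribed audio so far."""
    t = transcribed.lower().split()
    s = script_segment.lower().split()
    tail = t[-len(s):]
    hits = sum(1 for a, b in zip(tail, s) if a == b)
    return hits / len(s) >= threshold

# When the presenter reaches the scripted phrase, the effect fires.
print(matches_segment("okay now let's add a rainbow", "add a rainbow"))  # True
```

A production system would use timestamped alignment rather than exact word matches, but the trigger logic has the same shape.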
  • Patent number: 12106443
    Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: October 1, 2024
    Assignee: Adobe Inc.
    Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
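The "visual transform" idea above, warping a flat 2D object into the image-space footprint of a detected scene plane, can be illustrated with a plain planar homography. The matrix `H` below is hypothetical (not derived from real scene data), numpy is assumed, and this is a sketch rather than the patented technique.

```python
import numpy as np

def apply_visual_transform(H, points_2d):
    """Apply a 3x3 homography H to an (N, 2) array of 2D points."""
    pts = np.hstack([points_2d, np.ones((len(points_2d), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective divide

# Unit square of a 2D sticker, warped toward a receding plane.
H = np.array([[1.0, 0.2, 0.0],
              [0.0, 0.8, 0.0],
              [0.0, 0.001, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
print(apply_visual_transform(H, corners))
```

Associating one such matrix with each detected plane lets a 2D edit inherit the plane's depth and orientation without the user leaving the 2D design environment.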
  • Publication number: 20240244287
    Abstract: Embodiments of the present disclosure provide a method, a system, and computer storage media offering mechanisms for multimedia effect addition and editing support in text-based video editing tools. The method includes generating a user interface (UI) displaying a transcript of an audio track of a video and receiving, via the UI, input identifying selection of a text segment from the transcript. The method also includes receiving, via the UI, input identifying selection of a particular type of text stylization or layout for application to the text segment. The method further includes identifying a video effect corresponding to the particular type of text stylization or layout, applying the video effect to a video segment corresponding to the text segment, and applying the particular type of text stylization or layout to the text segment to visually represent the video effect in the transcript.
    Type: Application
    Filed: January 13, 2023
    Publication date: July 18, 2024
    Inventors: Kim Pascal PIMMEL, Stephen Joseph DIVERDI, Jiaju MA, Rubaiat HABIB, Li-Yi WEI, Hijung SHIN, Deepali ANEJA, John G. NELSON, Wilmot LI, Dingzeyu LI, Lubomira Assenova DONTCHEVA, Joel Richard BRANDT
  • Patent number: 11783534
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media that retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: October 10, 2023
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Publication number: 20230267696
    Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
    Type: Application
    Filed: February 23, 2022
    Publication date: August 24, 2023
    Applicant: Adobe Inc.
    Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
  • Patent number: 11574450
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays, in a user interface, a digital video that depicts the physical environment and the physical object, and tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system then displays that portion of the graphical element moving in the user interface in correspondence with the movements of the physical object depicted in the digital video.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: February 7, 2023
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
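The projection of 2D user-interface coordinates into the 3D scene representation reduces, in the simplest pinhole-camera case, to back-projecting a pixel at a known depth. A minimal sketch assuming known camera intrinsics; `unproject` is an illustrative name, not the patent's terminology.

```python
def unproject(px, py, fx, fy, cx, cy, depth):
    """Back-project pixel (px, py) to a 3D point at the given depth,
    for a pinhole camera looking along +z with intrinsics (fx, fy, cx, cy)."""
    x = (px - cx) / fx * depth
    y = (py - cy) / fy * depth
    return (x, y, depth)

# The principal point maps straight onto the optical axis.
print(unproject(320, 240, 500.0, 500.0, 320.0, 240.0, 2.0))  # (0.0, 0.0, 2.0)
```

In an AR sketching system, the depth would come from the tracked physical object's position in the reconstructed scene, so 2D motion in the video drives motion of 3D-anchored sketch geometry.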
  • Patent number: 11532106
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: December 20, 2022
    Assignee: Adobe Inc.
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
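The pivot-based construction this abstract describes can be sketched as: sample colors along the gradient path, keep a pivot count driven by the smoothness input, and build piecewise-linear gradients between consecutive pivots. The function names and the even-spacing pivot heuristic below are assumptions, not the patented method.

```python
def pick_pivots(colors, n_pivots):
    """Choose n_pivots colors evenly spaced along the sampled path."""
    if n_pivots < 2:
        raise ValueError("need at least two pivots")
    step = (len(colors) - 1) / (n_pivots - 1)
    return [colors[round(i * step)] for i in range(n_pivots)]

def gradient(colors, n_pivots, samples_per_span=4):
    """Concatenate linear RGB gradients between consecutive pivot colors."""
    pivots = pick_pivots(colors, n_pivots)
    out = []
    for (r0, g0, b0), (r1, g1, b1) in zip(pivots, pivots[1:]):
        for s in range(samples_per_span):
            t = s / samples_per_span
            out.append((r0 + (r1 - r0) * t,
                        g0 + (g1 - g0) * t,
                        b0 + (b1 - b0) * t))
    out.append(pivots[-1])
    return out

# Colors sampled along a path in an image; fewer pivots = smoother result.
path_colors = [(0, 0, 0), (50, 0, 0), (100, 0, 0), (150, 0, 0), (200, 0, 0)]
print(pick_pivots(path_colors, 3))
```

Raising the pivot count preserves more of the sampled variation; lowering it smooths the gradient, matching the smoothness input in the abstract.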
  • Publication number: 20220148267
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays, in a user interface, a digital video that depicts the physical environment and the physical object, and tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system then displays that portion of the graphical element moving in the user interface in correspondence with the movements of the physical object depicted in the digital video.
    Type: Application
    Filed: October 26, 2021
    Publication date: May 12, 2022
    Applicant: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
  • Publication number: 20220058841
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Application
    Filed: November 5, 2021
    Publication date: February 24, 2022
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 11182932
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: November 23, 2021
    Assignee: Adobe Inc.
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 11158130
    Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays, in a user interface, a digital video that depicts the physical environment and the physical object, and tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system then displays that portion of the graphical element moving in the user interface in correspondence with the movements of the physical object depicted in the digital video.
    Type: Grant
    Filed: August 3, 2020
    Date of Patent: October 26, 2021
    Assignee: Adobe Inc.
    Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
  • Publication number: 20210272353
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media that retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Application
    Filed: May 17, 2021
    Publication date: September 2, 2021
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Patent number: 11050994
    Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
    Type: Grant
    Filed: June 3, 2020
    Date of Patent: June 29, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
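The core reprojection underlying parallax correction — lifting a pixel with known depth to 3D, shifting into the new viewpoint's frame, and projecting again — can be sketched as below. This assumes a pinhole model and a translation-only viewpoint change; the hole-filling, smear-detection, and error-diffusion steps the abstract lists are omitted.

```python
def reproject(px, py, depth, fx, fy, cx, cy, view_offset):
    """Reproject pixel (px, py) with known depth into a camera translated
    by view_offset = (ox, oy, oz), same orientation, pinhole intrinsics."""
    ox, oy, oz = view_offset
    # Lift the pixel to a 3D point in the original camera frame.
    x = (px - cx) / fx * depth
    y = (py - cy) / fy * depth
    # Express the point in the new camera frame and project it.
    xn, yn, zn = x - ox, y - oy, depth - oz
    return (fx * xn / zn + cx, fy * yn / zn + cy)

# A small sideways head movement shifts near pixels more than far ones.
print(reproject(320, 240, 2.0, 500.0, 500.0, 320.0, 240.0, (0.1, 0.0, 0.0)))
```

Because nearby pixels shift farther than distant ones, gaps open where background was occluded; those gaps are what the correction techniques in the abstract target.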
  • Patent number: 11030796
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: June 8, 2021
    Assignee: Adobe Inc.
    Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
  • Publication number: 20210150776
    Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
    Type: Application
    Filed: November 18, 2019
    Publication date: May 20, 2021
    Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
  • Patent number: 10924633
    Abstract: Techniques are disclosed for parametric color mixing in a digital painting application. A methodology implementing the techniques according to an embodiment includes generating a Bezier curve extending from a first point to a second point in a 3-Dimensional space. The first and second points are specified by coordinates based on red-green-blue (RGB) values of first and second mixing colors, respectively. The Bezier curve is defined by a selected curvature parameter which can be related to the paint medium, such as oil colors, water colors, pastels, etc., and which further specifies additive or subtractive mixing. The method also includes locating a point on the Bezier curve, the point determined by a selected mixing ratio parameter specifying a ratio of the first mixing color to the second mixing color. The method further includes generating a color mix based on RGB values specified by coordinates of the located point on the Bezier curve.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: February 16, 2021
    Assignee: Adobe Inc.
    Inventors: Stephen Joseph DiVerdi, Sarah Garanganao Almeda, Jose Ignacio Echevarria Vallespi
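The Bezier-based mixing can be illustrated per channel: a quadratic Bezier from color A to color B whose control point is pulled toward black (subtractive, paint-like) or white (additive, light-like) by a curvature parameter. The exact parameterization below is an assumption for illustration, not the patent's formulation.

```python
def mix(rgb_a, rgb_b, t, curvature=0.0):
    """Mix two RGB colors (0-255 floats) at ratio t along a quadratic Bezier.
    curvature < 0 bends the curve toward black (subtractive media such as
    paint); curvature > 0 toward white (additive); 0 gives linear mixing."""
    def channel(a, b):
        midpoint = (a + b) / 2
        target = 0.0 if curvature < 0 else 255.0
        ctrl = midpoint + abs(curvature) * (target - midpoint)
        # Quadratic Bezier: (1-t)^2 * a + 2(1-t)t * ctrl + t^2 * b
        return (1 - t) ** 2 * a + 2 * (1 - t) * t * ctrl + t ** 2 * b
    return tuple(channel(a, b) for a, b in zip(rgb_a, rgb_b))

# Linear mix of red and blue lands at the midpoint purple.
print(mix((255, 0, 0), (0, 0, 255), 0.5))  # (127.5, 0.0, 127.5)
```

Sweeping t from 0 to 1 traces the whole mixing curve, so one curvature setting yields a family of plausible in-between colors for a given paint medium.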
  • Patent number: 10897609
    Abstract: The present disclosure relates to methods and systems that may improve and/or modify images captured using multiscopic image capture systems. In an example embodiment, burst image data is captured via a multiscopic image capture system. The burst image data may include at least one image pair. The at least one image pair is aligned based on at least one rectifying homography function. The at least one aligned image pair is warped based on a stereo disparity between the respective images of the image pair. The warped and aligned images are then stacked and a denoising algorithm is applied. Optionally, a high dynamic range algorithm may be applied to at least one output image of the aligned, warped, and denoised images.
    Type: Grant
    Filed: November 11, 2019
    Date of Patent: January 19, 2021
    Assignee: Google LLC
    Inventors: Jonathan Tilton Barron, Stephen Joseph DiVerdi, Ryan Geiss
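After alignment and disparity warping (assumed already done here), the stack-and-denoise step can be as simple as a per-pixel mean, which suppresses zero-mean noise by roughly the square root of the burst size. A sketch assuming numpy; real burst pipelines use more robust merging.

```python
import numpy as np

def denoise_stack(aligned_frames):
    """Average a list of aligned HxW (or HxWx3) frames along the burst axis."""
    stack = np.stack(aligned_frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Synthetic burst: one clean image plus independent Gaussian noise per frame.
rng = np.random.default_rng(0)
clean = np.full((4, 4), 128.0)
burst = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(8)]
print(np.abs(denoise_stack(burst) - clean).mean())  # mean residual after averaging
```

The averaged result's residual is well below any single frame's, which is the gain that stacking buys before an HDR pass is optionally applied.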