Patents by Inventor Stephen Joseph DiVerdi
Stephen Joseph DiVerdi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11783534
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
Type: Grant
Filed: May 17, 2021
Date of Patent: October 10, 2023
Assignee: Adobe Inc.
Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
-
Publication number: 20230267696
Abstract: Techniques for responsive video canvas generation are described to impart three-dimensional effects based on scene geometry to two-dimensional digital objects in a two-dimensional design environment. A responsive video canvas, for instance, is generated from input data including a digital video and scene data. The scene data describes a three-dimensional representation of an environment and includes a plurality of planes. A visual transform is generated and associated with each plane to enable digital objects to interact with the underlying scene geometry. In the responsive video canvas, an edit positioning a two-dimensional digital object with respect to a particular plane of the responsive video canvas is received. A visual transform associated with the particular plane is applied to the digital object and is operable to align the digital object to the depth and orientation of the particular plane. Accordingly, the digital object includes visual features based on the three-dimensional representation.
Type: Application
Filed: February 23, 2022
Publication date: August 24, 2023
Applicant: Adobe Inc.
Inventors: Cuong D. Nguyen, Valerie Lina Head, Talin Chris Wadsworth, Stephen Joseph DiVerdi, Paul John Asente
-
Patent number: 11574450
Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
Type: Grant
Filed: October 26, 2021
Date of Patent: February 7, 2023
Assignee: Adobe Inc.
Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
-
Patent number: 11532106
Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
Type: Grant
Filed: November 5, 2021
Date of Patent: December 20, 2022
Assignee: Adobe Inc.
Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
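The piecewise construction this abstract describes, a gradient assembled from independent color gradients between adjacent pivot pairs, can be illustrated with a minimal sketch. The pivot positions, colors, and sampling density below are made-up illustration values, not figures from the patent, and the patent does not specify linear interpolation; this is only the general idea.

```python
# Sketch of a piecewise gradient: colors sampled at "pivot" positions along a
# path, with each adjacent pivot pair defining its own gradient segment.
# More pivots (a lower smoothness setting) preserve more detail from the path.

def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors (0-255 channels)."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def piecewise_gradient(pivots, samples_per_segment=4):
    """Build a gradient from a list of (position, rgb) pivots.

    Each adjacent pivot pair contributes one linear segment; the segments
    are concatenated into the full gradient.
    """
    colors = []
    for (_, c0), (_, c1) in zip(pivots, pivots[1:]):
        for i in range(samples_per_segment):
            colors.append(lerp_color(c0, c1, i / samples_per_segment))
    colors.append(pivots[-1][1])  # include the final pivot color exactly
    return colors

# Three pivots (red -> yellow -> blue) yield two independent segments:
pivots = [(0.0, (255, 0, 0)), (0.5, (255, 255, 0)), (1.0, (0, 0, 255))]
gradient = piecewise_gradient(pivots)
print(len(gradient))  # 2 segments * 4 samples + final color = 9
```

Raising `samples_per_segment` or adding pivots densifies the gradient; the key property is that each segment interpolates only between its own pivot pair, matching the "set of individual color gradients" in the abstract.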
-
Publication number: 20220148267
Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
Type: Application
Filed: October 26, 2021
Publication date: May 12, 2022
Applicant: Adobe Inc.
Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
-
Publication number: 20220058841
Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
Type: Application
Filed: November 5, 2021
Publication date: February 24, 2022
Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
-
Patent number: 11182932
Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
Type: Grant
Filed: November 18, 2019
Date of Patent: November 23, 2021
Assignee: Adobe Inc.
Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
-
Patent number: 11158130
Abstract: In implementations of systems for augmented reality sketching, a computing device implements a sketch system to generate three-dimensional scene data describing a three-dimensional representation of a physical environment including a physical object. The sketch system displays a digital video in a user interface that depicts the physical environment and the physical object and the sketch system tracks movements of the physical object depicted in the digital video using two-dimensional coordinates of the user interface. These two-dimensional coordinates are projected into the three-dimensional representation of the physical environment. The sketch system receives a user input connecting a portion of a graphical element in the user interface to the physical object depicted in the digital video. The sketch system displays the portion of the graphical element as moving in the user interface corresponding to the movements of the physical object depicted in the digital video.
Type: Grant
Filed: August 3, 2020
Date of Patent: October 26, 2021
Assignee: Adobe Inc.
Inventors: Kazi Rubaiat Habib, Stephen Joseph DiVerdi, Ryo Suzuki, Li-Yi Wei, Wilmot Wei-Mau Li
-
Publication number: 20210272353
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
Type: Application
Filed: May 17, 2021
Publication date: September 2, 2021
Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
-
Patent number: 11050994
Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
Type: Grant
Filed: June 3, 2020
Date of Patent: June 29, 2021
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
-
Patent number: 11030796
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media which retarget 2D screencast video tutorials into an active VR host application. VR-embedded widgets can render on top of a VR host application environment while the VR host application is active. Thus, VR-embedded widgets can provide various interactive tutorial interfaces directly inside the environment of the VR host application. For example, VR-embedded widgets can present external video content, related information, and corresponding interfaces directly in a VR painting environment, so a user can simultaneously access external video (e.g., screencast video tutorials) and a VR painting. Possible VR-embedded widgets include a VR-embedded video player overlay widget, a perspective thumbnail overlay widget (e.g., a user-view thumbnail overlay, an instructor-view thumbnail overlay, etc.), an awareness overlay widget, a tutorial steps overlay widget, and/or a controller overlay widget, among others.
Type: Grant
Filed: October 17, 2018
Date of Patent: June 8, 2021
Assignee: Adobe Inc.
Inventors: Cuong Nguyen, Stephen Joseph DiVerdi, Balasaravanan Thoravi Kumaravel
-
Publication number: 20210150776
Abstract: A method for generating a color gradient includes receiving an input indicating a smoothness of the color gradient and detecting a gradient path defined from an image. The method also includes identifying a set of colors from the gradient path. The method includes detecting a set of color pivots associated with the set of colors. A number of the color pivots in the set of color pivots is based on the input indicating the smoothness of the color gradient. The method includes generating a set of individual color gradients along the gradient path including a color gradient between a first pair of colors located at a first pair of the color pivots and a different color gradient between a second pair of colors located at a second pair of the color pivots. Additionally, the method includes generating the color gradient of the image from the set of individual color gradients.
Type: Application
Filed: November 18, 2019
Publication date: May 20, 2021
Inventors: Mainak Biswas, Stephen Joseph DiVerdi, Jose Ignacio Echevarria Vallespi
-
Patent number: 10924633
Abstract: Techniques are disclosed for parametric color mixing in a digital painting application. A methodology implementing the techniques according to an embodiment includes generating a Bezier curve extending from a first point to a second point in a 3-Dimensional space. The first and second points are specified by coordinates based on red-green-blue (RGB) values of first and second mixing colors, respectively. The Bezier curve is defined by a selected curvature parameter which can be related to the paint medium, such as oil colors, water colors, pastels, etc., and which further specifies additive or subtractive mixing. The method also includes locating a point on the Bezier curve, the point determined by a selected mixing ratio parameter specifying a ratio of the first mixing color to the second mixing color. The method further includes generating a color mix based on RGB values specified by coordinates of the located point on the Bezier curve.
Type: Grant
Filed: February 3, 2020
Date of Patent: February 16, 2021
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Sarah Garanganao Almeda, Jose Ignacio Echevarria Vallespi
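The mechanics of this abstract, two RGB endpoints joined by a Bezier curve whose shape is set by a curvature parameter, and a mixing ratio that selects a point along the curve, can be sketched with a quadratic Bezier. The control-point construction below (pulling the midpoint toward black to mimic subtractive, paint-like mixing) is an illustrative assumption, not the patented formulation.

```python
# Sketch of parametric color mixing along a Bezier curve in RGB space.
# Endpoints are the two mixing colors; a curvature parameter bends the path.

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    return tuple(
        (1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
        for a, b, c in zip(p0, p1, p2)
    )

def mix_colors(rgb_a, rgb_b, ratio, curvature=0.0):
    """Mix rgb_a and rgb_b at the given ratio (0 = all A, 1 = all B).

    curvature = 0 gives a straight-line (additive-style) blend; positive
    curvature darkens the curve's control point, approximating subtractive
    media such as paint.
    """
    midpoint = tuple((a + b) / 2 for a, b in zip(rgb_a, rgb_b))
    control = tuple(ch * (1 - curvature) for ch in midpoint)  # pull toward black
    mixed = quadratic_bezier(rgb_a, control, rgb_b, ratio)
    return tuple(round(min(max(ch, 0), 255)) for ch in mixed)

# Straight 50/50 blend of red and blue:
print(mix_colors((255, 0, 0), (0, 0, 255), 0.5))       # (128, 0, 128)
# Subtractive-style blend is darker at the same ratio:
print(mix_colors((255, 0, 0), (0, 0, 255), 0.5, 0.8))  # (76, 0, 76)
```

The design point the abstract hinges on is that a single curvature parameter per paint medium changes the whole mixing behavior without changing the endpoints or the ratio control.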
-
Patent number: 10897609
Abstract: The present disclosure relates to methods and systems that may improve and/or modify images captured using multiscopic image capture systems. In an example embodiment, burst image data is captured via a multiscopic image capture system. The burst image data may include at least one image pair. The at least one image pair is aligned based on at least one rectifying homography function. The at least one aligned image pair is warped based on a stereo disparity between the respective images of the image pair. The warped and aligned images are then stacked and a denoising algorithm is applied. Optionally, a high dynamic range algorithm may be applied to at least one output image of the aligned, warped, and denoised images.
Type: Grant
Filed: November 11, 2019
Date of Patent: January 19, 2021
Assignee: Google LLC
Inventors: Jonathan Tilton Barron, Stephen Joseph DiVerdi, Ryan Geiss
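The final stack-and-denoise step of the pipeline above rests on a standard property: averaging aligned frames suppresses independent sensor noise. The sketch below assumes alignment and warping are already done and shows only a naive per-pixel averaging merge; the patent's actual denoising algorithm is not specified here, and the image sizes and noise levels are made up for illustration.

```python
# Sketch of merging an aligned burst: stacking frames and averaging per pixel
# reduces zero-mean noise roughly by the square root of the frame count.
import numpy as np

def merge_aligned_burst(frames):
    """Average a stack of aligned frames (H x W arrays) to reduce noise."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
truth = np.full((8, 8), 100.0)                 # ideal noise-free image
burst = [truth + rng.normal(0, 10, truth.shape) for _ in range(16)]

merged = merge_aligned_burst(burst)
single_err = float(np.abs(burst[0] - truth).mean())
merged_err = float(np.abs(merged - truth).mean())
print(merged_err < single_err)  # merged frame is closer to the ideal image
```

In the patent's pipeline this merge happens only after rectification and disparity-based warping, since averaging misaligned pixels would blur rather than denoise.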
-
Patent number: 10803642
Abstract: Techniques and systems to support collaborative interaction as part of virtual reality video are described. In one example, a viewport is generated such that a reviewing user of a reviewing user device may view VR video viewed by a source user of a source user device. The viewport, for instance, may be configured as a border at least partially surrounding a portion of the VR video output by the reviewing VR device. In another instance, the viewport is configured to support output of thumbnails within an output of VR video by the reviewing VR device. Techniques and systems are also described to support communication of annotations between the source and reviewing VR devices. Techniques and systems are also described to support efficient distribution of VR video within a context of a content editing application.
Type: Grant
Filed: August 18, 2017
Date of Patent: October 13, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Aaron Phillip Hertzmann, Brian David Williams
-
Patent number: 10791412
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Grant
Filed: February 13, 2020
Date of Patent: September 29, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder
-
Publication number: 20200296348
Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
Type: Application
Filed: June 3, 2020
Publication date: September 17, 2020
Applicant: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
-
Patent number: 10701334
Abstract: Virtual reality parallax correction techniques and systems are described that are configured to correct parallax for VR digital content captured from a single point of origin. In one example, a parallax correction module is employed to correct artifacts caused in a change from a point of origin that corresponds to the VR digital content to a new viewpoint with respect to an output of the VR digital content. A variety of techniques may be employed by the parallax correction module to correct parallax. Examples of these techniques include depth filtering, boundary identification, smear detection, mesh cutting, confidence estimation, blurring, and error diffusion as further described in the following sections.
Type: Grant
Filed: October 11, 2017
Date of Patent: June 30, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Ana Belén Serrano Pacheu, Aaron Phillip Hertzmann
-
Patent number: 10701431
Abstract: Embodiments disclosed herein facilitate virtual reality (VR) video playback using handheld controller gestures. More specifically, jog and shuttle gestures are associated with controller rotations that can be tracked once a triggering event is detected (e.g., pressing and holding a controller play button). A corresponding jog or shuttle command can be initialized when the VR controller rotates more than a defined angular threshold in an associated rotational direction (e.g., yaw, pitch, roll). For example, the jog gesture can be associated with changes in controller yaw, and the shuttle gesture can be associated with changes in controller pitch. Subsequent controller rotations can be mapped to playback adjustments for a VR video, such as a frame adjustment for a jog gesture and a playback speed adjustment for the shuttle gesture. Corresponding visualizations of available gestures and progress bars can be generated or otherwise triggered to facilitate efficient VR video playback control.
Type: Grant
Filed: November 16, 2017
Date of Patent: June 30, 2020
Assignee: Adobe Inc.
Inventors: Stephen Joseph DiVerdi, Seth John Walker, Brian David Williams
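The gesture logic in this abstract, yaw past an angular threshold driving a jog (frame-step) command and pitch past the threshold driving a shuttle (playback-speed) command, can be sketched as a small classifier. The threshold and the two sensitivity constants below are made-up illustration values; the patent defines the mapping concept, not these numbers.

```python
# Sketch of jog/shuttle gesture classification from controller rotation deltas,
# measured since a triggering event (e.g., holding the play button).

ANGULAR_THRESHOLD_DEG = 10.0   # rotation required before a gesture engages
FRAMES_PER_DEGREE = 0.5        # jog sensitivity (illustrative assumption)
SPEED_PER_DEGREE = 0.05        # shuttle sensitivity (illustrative assumption)

def classify_gesture(delta_yaw_deg, delta_pitch_deg):
    """Return (command, amount) for a controller rotation since triggering.

    Yaw maps to 'jog' (a signed frame adjustment); pitch maps to 'shuttle'
    (a signed playback-speed adjustment). Yaw is checked first here; which
    axis wins a tie is a design choice left open by the abstract.
    """
    if abs(delta_yaw_deg) >= ANGULAR_THRESHOLD_DEG:
        frames = (abs(delta_yaw_deg) - ANGULAR_THRESHOLD_DEG) * FRAMES_PER_DEGREE
        return ("jog", int(frames) * (1 if delta_yaw_deg > 0 else -1))
    if abs(delta_pitch_deg) >= ANGULAR_THRESHOLD_DEG:
        speed = (abs(delta_pitch_deg) - ANGULAR_THRESHOLD_DEG) * SPEED_PER_DEGREE
        return ("shuttle", speed if delta_pitch_deg > 0 else -speed)
    return ("none", 0)  # below threshold: no playback adjustment

print(classify_gesture(30.0, 0.0))   # ('jog', 10): 20 deg past threshold * 0.5
print(classify_gesture(0.0, -50.0))  # ('shuttle', -2.0)
```

Measuring rotation relative to the trigger pose, rather than absolute controller orientation, is what lets the user start the gesture from any comfortable position.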
-
Publication number: 20200186957
Abstract: Methods and systems are provided for visualizing spatial audio using determined properties for time segments of the spatial audio. Such properties include the position sound is coming from, intensity of the sound, focus of the sound, and color of the sound at a time segment of the spatial audio. These properties can be determined by analyzing the time segment of the spatial audio. Upon determining these properties, the properties are used in rendering a visualization of the sound with attributes based on the properties of the sound(s) at the time segment of the spatial audio.
Type: Application
Filed: February 13, 2020
Publication date: June 11, 2020
Inventors: Stephen Joseph DiVerdi, Yaniv De Ridder