Patents Assigned to Adobe Inc.
-
Patent number: 12283060
Abstract: Digital image synthesis techniques are described that leverage splatting, i.e., forward warping. In one example, a first digital image and a first optical flow are received by a digital image synthesis system. A first splat metric and a first merge metric are constructed by the digital image synthesis system that define a weighted map of respective pixels. From this, the digital image synthesis system produces a first warped optical flow and a first warp merge metric corresponding to an interpolation instant by forward warping the first optical flow based on the splat metric and the merge metric. A first warped digital image corresponding to the interpolation instant is formed by the digital image synthesis system by backward warping the first digital image based on the first warped optical flow.
Type: Grant
Filed: April 6, 2022
Date of Patent: April 22, 2025
Assignee: Adobe Inc.
Inventors: Simon Niklaus, Jiawen Chen
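The splatting step described above can be illustrated with a minimal sketch (a hypothetical illustration, not the patented implementation): each pixel's value is scattered to its flow-displaced location, weighted by a splat metric, and contributions landing on the same destination pixel are merged by their accumulated weight.

```python
import numpy as np

def splat_forward(values, flow, weights, t):
    """Forward-warp (splat) per-pixel values toward interpolation instant t.

    values:  (H, W) array to warp (e.g., one optical-flow component)
    flow:    (H, W, 2) optical flow as (dy, dx) per pixel
    weights: (H, W) splat metric giving each pixel's importance
    t:       interpolation instant in [0, 1]
    """
    H, W = values.shape
    acc = np.zeros((H, W))   # weighted-value accumulator
    wsum = np.zeros((H, W))  # accumulated weights (merge metric)
    for y in range(H):
        for x in range(W):
            ty = int(round(y + t * flow[y, x, 0]))  # destination row
            tx = int(round(x + t * flow[y, x, 1]))  # destination column
            if 0 <= ty < H and 0 <= tx < W:
                acc[ty, tx] += weights[y, x] * values[y, x]
                wsum[ty, tx] += weights[y, x]
    # Normalize wherever any contribution landed; holes stay zero.
    warped = np.where(wsum > 0, acc / np.maximum(wsum, 1e-8), 0.0)
    return warped, wsum
```

Splatting the flow itself with this routine, then backward-warping the image by the result, mirrors the two-stage pipeline the abstract outlines.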
-
Patent number: 12282992
Abstract: Systems and methods for machine learning based controllable animation of still images are provided. In one embodiment, a still image including a fluid element is obtained. Using a flow refinement machine learning model, a refined dense optical flow is generated for the still image based on a selection mask that includes the fluid element and a dense optical flow generated from a motion hint that indicates a direction of animation. The refined dense optical flow indicates a pattern of apparent motion for the fluid element. Thereafter, a plurality of video frames is generated by projecting a plurality of pixels of the still image using the refined dense optical flow.
Type: Grant
Filed: July 1, 2022
Date of Patent: April 22, 2025
Assignee: Adobe Inc.
Inventors: Kuldeep Kulkarni, Aniruddha Mahapatra
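As a rough illustration of the final step (a sketch under assumed semantics, not the claimed method), frames can be generated by advecting each pixel's position along a dense flow field and scattering the pixel's value to its new location each frame:

```python
import numpy as np

def animate_still(image, flow, num_frames):
    """Synthesize frames from a still image by projecting pixels along
    a dense optical flow (one displacement step per frame).

    image:      (H, W) grayscale still image
    flow:       (H, W, 2) per-pixel displacement as (dy, dx) per frame
    num_frames: number of frames to synthesize
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)  # current pixel positions
    frames = []
    for _ in range(num_frames):
        iy = np.clip(np.round(ys).astype(int), 0, H - 1)
        ix = np.clip(np.round(xs).astype(int), 0, W - 1)
        frame = np.zeros_like(image)
        frame[iy, ix] = image      # scatter each source pixel to its position
        frames.append(frame)
        ys = ys + flow[iy, ix, 0]  # advect positions along the flow
        xs = xs + flow[iy, ix, 1]
    return frames
```

A production system would additionally fill the holes that scattering leaves behind; this sketch only shows the projection itself.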
-
Patent number: 12282987
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating image mattes for detected objects in digital images without trimap segmentation via a multi-branch neural network. The disclosed system utilizes a first neural network branch of a generative neural network to extract a coarse semantic mask from a digital image. The disclosed system utilizes a second neural network branch of the generative neural network to extract a detail mask based on the coarse semantic mask. Additionally, the disclosed system utilizes a third neural network branch of the generative neural network to fuse the coarse semantic mask and the detail mask to generate an image matte. In one or more embodiments, the disclosed system also utilizes a refinement neural network to generate a final image matte by refining selected portions of the image matte generated by the generative neural network.
Type: Grant
Filed: November 8, 2022
Date of Patent: April 22, 2025
Assignee: Adobe Inc.
Inventors: Zichuan Liu, Xin Lu, Ke Wang
-
Patent number: 12282948
Abstract: A computer readable medium for sizing a product includes instructions that, when executed by at least one processor, cause a computing device to: retrieve from a webpage information on a product including product dimensions; present on a display of a client device a graphical button that upon access by a user activates a camera for capturing an image of an object positioned at a focal distance from the camera, the object having a surface; prompt the user to enter boundary information of an imaginary housing to be placed on the surface; generate the imaginary housing dimensions in two dimensions (2D) based on the boundary information and the focal distance; and determine whether the product fits within the imaginary housing by comparing the product dimensions against the imaginary housing dimensions.
Type: Grant
Filed: October 17, 2022
Date of Patent: April 22, 2025
Assignee: Adobe Inc.
Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
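The final comparison reduces to checking one set of 2D dimensions against another. A minimal sketch (hypothetical function names, and a simple pinhole-camera model assumed for converting pixel measurements at a known focal distance) might look like:

```python
def pixel_to_real(size_px, focal_distance, focal_length_px):
    """Pinhole-camera estimate: real-world size grows linearly with the
    object's distance from the camera."""
    return size_px * focal_distance / focal_length_px

def fits_within(product_dims, housing_dims):
    """True if the product's 2D dimensions fit inside the imaginary
    housing, allowing a 90-degree rotation (compare sorted sides)."""
    return all(p <= h for p, h in zip(sorted(product_dims), sorted(housing_dims)))
```

Sorting both dimension pairs before comparing lets a 3x5 product fit a 5x4 housing even though the sides are listed in a different order.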
-
Publication number: 20250124548
Abstract: Embodiments are disclosed for generating a rendering output using a hybrid stochastic layered alpha blending technique. In particular, in one or more embodiments, the method may include receiving a plurality of fragments for rendering into a composited output. The method may further include storing a set of fragments of the plurality of fragments for each pixel in a fragment buffer up to a per pixel fragment buffer limit. For each received fragment of the plurality of fragments in excess of the fragment buffer limit for the fragment buffer, a modified set of fragments is generated by probabilistically replacing a selected fragment in the fragment buffer with the received fragment. The resulting fragments in the fragment buffer after processing the plurality of fragments are a blending set of fragments. A composited output for the pixel is then rendered by blending the blending set of fragments.
Type: Application
Filed: October 13, 2023
Publication date: April 17, 2025
Applicant: Adobe Inc.
Inventor: Michael Seth MONGER
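The probabilistic replacement described above resembles reservoir sampling. A minimal per-pixel sketch (an illustrative analogy, not the disclosed implementation, with scalar grayscale fragments assumed) is:

```python
import random

def stochastic_fragment_buffer(fragments, limit, rng=None):
    """Keep at most `limit` fragments by probabilistically replacing a
    buffered fragment once the buffer is full (reservoir-sampling style),
    so every incoming fragment has an equal chance of being retained."""
    rng = rng or random.Random()
    buffer = []
    for i, frag in enumerate(fragments):
        if len(buffer) < limit:
            buffer.append(frag)
        else:
            j = rng.randrange(i + 1)  # uniform over all fragments seen so far
            if j < limit:
                buffer[j] = frag      # replace a randomly chosen slot
    return buffer

def blend_over(fragments):
    """Back-to-front 'over' compositing of (color, alpha) fragments."""
    color = 0.0
    for c, a in fragments:
        color = a * c + (1.0 - a) * color
    return color
```

The buffer returned after processing all fragments plays the role of the "blending set," which is then composited into the pixel's output color.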
-
Publication number: 20250124212
Abstract: In implementation of techniques for vector font generation based on cascaded diffusion, a computing device implements a glyph generation system to receive a sample glyph in a target font and a target glyph identifier. The glyph generation system generates a rasterized glyph in the target font using a raster diffusion model based on the sample glyph and the target glyph identifier, the rasterized glyph having a first level of resolution. The glyph generation system then generates a vector glyph using a vector diffusion model by vectorizing the rasterized glyph, the vector glyph having a second level of resolution different than the first level of resolution. The glyph generation system then displays the vector glyph in a user interface.
Type: Application
Filed: November 13, 2023
Publication date: April 17, 2025
Applicant: Adobe Inc.
Inventors: Difan Liu, Matthew David Fisher, Michaƫl Yanis Gharbi, Oliver Wang, Alec Stefan Jacobson, Vikas Thamizharasan, Evangelos Kalogerakis
-
Patent number: 12277652
Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
Type: Grant
Filed: November 15, 2022
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
-
Patent number: 12278764
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for flexibly and efficiently managing network traffic for a web flow utilizing network access tokens. For example, the disclosed systems facilitate smooth, uninterrupted navigation through various webpages of a web flow (e.g., from an entry point to an exit point) by assigning network access tokens to client devices according to network capacity of servers hosting the web flow. In some embodiments, the disclosed systems permit access to, and navigation within various webpages within, the web flow for the client devices with network access tokens while preventing other client devices from accessing the web flow.
Type: Grant
Filed: November 4, 2021
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Thomas Kiencke, Arne Franken
-
Patent number: 12277624
Abstract: In implementations of systems for generating blend objects from objects with pattern fills, a computing device implements a blend system to generate a source master texture using a first pattern fill of a source object and a destination master texture using a second pattern fill of a destination object. First colors are sampled from the source master texture and second colors are sampled from the destination master texture. The blend system determines a blended pattern fill for the first pattern fill and the second pattern fill by combining the first colors and the second colors. The blend system generates an intermediate blend object for the source object and the destination object for display in a user interface based on the blended pattern fill.
Type: Grant
Filed: August 15, 2022
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Apurva Kumar, Paranjay Sharma
-
Patent number: 12277671
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure include an image processing apparatus configured to efficiently perform texture synthesis (e.g., increase the size of, or extend, texture in an input image while preserving a natural appearance of the synthesized texture pattern in the modified output image). In some aspects, the image processing apparatus implements an attention mechanism with a multi-stage attention model where different stages (e.g., different transformer blocks) progressively refine image feature patch mapping at different scales, while utilizing repetitive patterns in texture images to enable network generalization. One or more embodiments of the disclosure include skip connections and convolutional layers (e.g., between transformer block stages) that combine high-frequency and low-frequency features from different transformer stages and unify attention to micro-structures, meso-structures and macro-structures.
Type: Grant
Filed: November 10, 2021
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Shouchang Guo, Arthur Jules Martin Roullier, Tamy Boubekeur, Valentin Deschaintre, Jerome Derel, Paul Parneix
-
Patent number: 12277767
Abstract: Systems and methods for video segmentation and summarization are described. Embodiments of the present disclosure receive a video and a transcript of the video; generate visual features representing frames of the video using an image encoder; generate language features representing the transcript using a text encoder, wherein the image encoder and the text encoder are trained based on a correlation between training visual features and training language features; and segment the video into a plurality of video segments based on the visual features and the language features.
Type: Grant
Filed: May 31, 2022
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Hailin Jin, Jielin Qiu, Zhaowen Wang, Trung Huu Bui, Franck Dernoncourt
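Once per-frame features are available, one simple segmentation heuristic (a sketch only; the abstract describes a trained encoder pair, not this rule) is to start a new segment wherever consecutive frame embeddings diverge:

```python
import numpy as np

def segment_boundaries(features, threshold):
    """Return frame indices that start a new segment, cutting wherever
    cosine similarity between consecutive feature vectors drops below
    the threshold."""
    boundaries = [0]  # the first frame always starts a segment
    for i in range(1, len(features)):
        a, b = np.asarray(features[i - 1]), np.asarray(features[i])
        sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim < threshold:
            boundaries.append(i)
    return boundaries
```

The same comparison could be run on concatenated visual and language features so that both modalities influence where a segment begins.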
-
Patent number: 12277630
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure identify target style attributes and target structure attributes for a composite image; generate a matrix of composite feature tokens based on the target style attributes and the target structure attributes, wherein subsequent feature tokens of the matrix of composite feature tokens are sequentially generated based on previous feature tokens of the matrix of composite feature tokens according to a linear ordering of the matrix of composite feature tokens; and generate the composite image based on the matrix of composite feature tokens, wherein the composite image includes the target style attributes and the target structure attributes.
Type: Grant
Filed: May 9, 2022
Date of Patent: April 15, 2025
Assignee: Adobe Inc.
Inventors: Pranav Vineet Aggarwal, Midhun Harikumar, Ajinkya Gorakhnath Kale
-
Publication number: 20250118040
Abstract: Three-dimensional object edit and visualization techniques and systems are described. In a first example, a content navigation control is implemented by a content editing system to aid navigation through a history of how a three-dimensional environment and a three-dimensional object included in the environment are created. In a second example, the content editing system is configured to streamline placement of a three-dimensional object within a three-dimensional environment. The content editing system, for instance, generates a manipulation visualization in support of corresponding editing operations to act as a guide, e.g., as an alignment guide or an option guide. In a third example, the content editing system implements a shadow control that is usable as part of an editing and as a visualization to control rendering of illumination within a three-dimensional environment.
Type: Application
Filed: December 4, 2023
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: David McKinley Cardwell, Kowsheek Mahmood, Christophe Darphin, Salvador German Soto Gutierrez
-
Publication number: 20250117994
Abstract: In implementation of techniques for removing image overlays, a computing device implements a reflection removal system to receive an input RAW digital image, the input RAW digital image including both a base image and an overlay image. Using a machine learning model, the reflection removal system segments the base image from the overlay image. The reflection removal system generates an output RAW digital image that includes the base image and displays the output RAW digital image in a user interface.
Type: Application
Filed: January 30, 2024
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: Eric Randall Kee, Adam Ahmed Pikielny, Marc Stewart Levoy
-
Publication number: 20250117993
Abstract: A high dynamic range editing system is configured to generate visualizations to aid digital image editing in both high dynamic ranges and standard dynamic ranges. In a first example, the visualization is generated as a histogram. In a second example, the visualization is generated to indicate high dynamic range capabilities. In a third example, the visualization is generated to indicate ranges of luminance values within a digital image. In a fourth example, the visualization is generated as a point curve that defines a mapping between detected luminance values from a digital image and output luminance values over both a standard dynamic range and a high dynamic range. In a fifth example, the visualization is generated as a preview to convert pixels from the digital image in a high dynamic range into a standard dynamic range.
Type: Application
Filed: October 5, 2023
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: Eric Chan, Thomas Frederick Knoll, Gregory Paul Zulkie
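A point curve of the kind described in the fourth example can be sketched as a piecewise-linear mapping from input luminance to output luminance (a hypothetical illustration; the publication does not specify the curve model):

```python
def point_curve(points):
    """Build a piecewise-linear luminance mapping from (input, output)
    control points sorted by input value; inputs outside the control
    range are clamped to the end points."""
    def curve(x):
        if x <= points[0][0]:
            return points[0][1]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x <= x1:
                # Linear interpolation within this segment.
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        return points[-1][1]
    return curve
```

Control points above an input of 1.0 would extend the mapping into the high-dynamic-range portion of the output, while points at or below 1.0 cover the standard range.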
-
Publication number: 20250117989
Abstract: An example vector path trajectory imitation system is configured to create a new vector path or to extend an existing vector path based on a reference. In this manner, a user (e.g., artist, illustrator, or designer) does not need to tweak individual anchor points to align a trajectory of the new vector path with the trajectory of the reference. Instead, the user moves a position indicator (e.g., a mouse cursor) on a digital canvas in a freehand fashion while the vector path trajectory imitation system provides visual feedback to show the user how a resultant curve will look. When the user reaches a position on the digital canvas where a new vector path is to be drawn, the user can perform an action (e.g., releasing a mouse button) and the new vector path, which follows the trajectory of the reference, is created.
Type: Application
Filed: October 10, 2023
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: Gagan Singhal, Shikhar Tayal, Nilesh Mishra
-
Publication number: 20250117977
Abstract: A high dynamic range editing system is configured to generate visualizations to aid digital image editing in both high dynamic ranges and standard dynamic ranges. In a first example, the visualization is generated as a histogram. In a second example, the visualization is generated to indicate high dynamic range capabilities. In a third example, the visualization is generated to indicate ranges of luminance values within a digital image. In a fourth example, the visualization is generated as a point curve that defines a mapping between detected luminance values from a digital image and output luminance values over both a standard dynamic range and a high dynamic range. In a fifth example, the visualization is generated as a preview to convert pixels from the digital image in a high dynamic range into a standard dynamic range.
Type: Application
Filed: October 5, 2023
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: Eric Chan, Thomas Frederick Knoll, Gregory Paul Zulkie
-
Publication number: 20250118026
Abstract: Three-dimensional object edit and visualization techniques and systems are described. In a first example, a content navigation control is implemented by a content editing system to aid navigation through a history of how a three-dimensional environment and a three-dimensional object included in the environment are created. In a second example, the content editing system is configured to streamline placement of a three-dimensional object within a three-dimensional environment. The content editing system, for instance, generates a manipulation visualization in support of corresponding editing operations to act as a guide, e.g., as an alignment guide or an option guide. In a third example, the content editing system implements a shadow control that is usable as part of an editing and as a visualization to control rendering of illumination within a three-dimensional environment.
Type: Application
Filed: December 4, 2023
Publication date: April 10, 2025
Applicant: Adobe Inc.
Inventors: David McKinley Cardwell, Kowsheek Mahmood, Christophe Darphin, Salvador German Soto Gutierrez
-
Patent number: 12272127
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate object masks for digital objects portrayed in digital images utilizing a detection-masking neural network pipeline. In particular, in one or more embodiments, the disclosed systems utilize detection heads of a neural network to detect digital objects portrayed within a digital image. In some cases, each detection head is associated with one or more digital object classes that are not associated with the other detection heads. Further, in some cases, the detection heads implement multi-scale synchronized batch normalization to normalize feature maps across various feature levels. The disclosed systems further utilize a masking head of the neural network to generate one or more object masks for the detected digital objects. In some cases, the disclosed systems utilize post-processing techniques to filter out low-quality masks.
Type: Grant
Filed: January 31, 2022
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Jason Wen Yong Kuen, Su Chen, Scott Cohen, Zhe Lin, Zijun Wei, Jianming Zhang
-
Patent number: 12271976
Abstract: Digital representation techniques of intertwined vector objects are described. These techniques support a non-destructive representation of intertwined digital objects. Additionally, these techniques support editing of overlaps to change a visual ordering in an intuitive and efficient manner. Optimization operations are also implemented that remove redundancy, combine overlaps into a single representation, address visual artifacts at borders between the intertwined objects, and so forth.
Type: Grant
Filed: January 27, 2023
Date of Patent: April 8, 2025
Assignee: Adobe Inc.
Inventors: Harish Kumar, Praveen Kumar Dhanuka, Apurva Kumar