Patents Assigned to Adobe Inc.
  • Patent number: 12367625
    Abstract: Disclosed herein are various techniques for more precisely and reliably (a) positioning top and bottom border edges relative to textual content, (b) positioning left and right border edges relative to textual content, (c) positioning mixed edge borders relative to textual content, (d) positioning boundaries of a region of background shading that fall within borders of textual content, (e) positioning borders relative to textual content that spans columns, (f) positioning respective borders relative to discrete portions of textual content, (g) positioning collective borders relative to discrete, abutting portions of textual content, (h) applying stylized corner boundaries to a region of background shading, and (i) applying stylized corners to borders.
    Type: Grant
    Filed: September 26, 2022
    Date of Patent: July 22, 2025
    Assignee: ADOBE INC.
    Inventors: Varun Aggarwal, Souvik Sinha Deb, Sanyam Jain, Monica Singh, Mohammad Javed Ali, Gaurav Anand, Deepanjana Chakravarti, Aman Arora, Abhay Sibal
  • Patent number: 12367561
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
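The panoptic-guided flow in the entry above lends itself to a short illustration. Below is a minimal sketch, not Adobe's implementation: a toy generator conditioned on an image with a hole mask and a one-hot panoptic segmentation map, re-run whenever the user edits the map. The architecture, label count, and tensor shapes are assumptions made only for this example.

```python
# Minimal sketch (placeholder network, not the patented one): a generator
# conditioned on an image with holes plus a one-hot panoptic segmentation map.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_PANOPTIC_LABELS = 16  # assumption for the sketch

class PanopticInpaintingGenerator(nn.Module):
    def __init__(self, num_labels=NUM_PANOPTIC_LABELS):
        super().__init__()
        self.num_labels = num_labels
        in_ch = 3 + 1 + num_labels  # RGB + hole mask + one-hot panoptic map
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, hole_mask, panoptic_map):
        onehot = F.one_hot(panoptic_map, self.num_labels).permute(0, 3, 1, 2).float()
        x = torch.cat([image * (1 - hole_mask), hole_mask, onehot], dim=1)
        filled = self.net(x)
        # keep known pixels, fill only the hole region
        return image * (1 - hole_mask) + filled * hole_mask

# Iterative use: whenever the panoptic map changes, re-run the generator.
gen = PanopticInpaintingGenerator()
image = torch.rand(1, 3, 64, 64)
hole = torch.zeros(1, 1, 64, 64); hole[..., 16:48, 16:48] = 1.0
pan_map = torch.zeros(1, 64, 64, dtype=torch.long)
result = gen(image, hole, pan_map)          # initial inpainting
pan_map[..., 16:48, 16:48] = 3              # user relabels the hole region
result = gen(image, hole, pan_map)          # updated inpainting
```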
  • Patent number: 12367585
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing machine learning models to generate refined depth maps of digital images utilizing digital segmentation masks. In particular, in one or more embodiments, the disclosed systems generate a depth map for a digital image utilizing a depth estimation machine learning model, determine a digital segmentation mask for the digital image, and generate a refined depth map from the depth map and the digital segmentation mask utilizing a depth refinement machine learning model. In some embodiments, the disclosed systems generate first and second intermediate depth maps using the digital segmentation mask and an inverse digital segmentation mask and merge the first and second intermediate depth maps to generate the refined depth map.
    Type: Grant
    Filed: April 12, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Jianming Zhang, Soo Ye Kim, Simon Niklaus, Yifei Fan, Su Chen, Zhe Lin
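The merge step described in the abstract above, where intermediate depth maps produced with a segmentation mask and with its inverse are combined into a refined depth map, can be illustrated with a minimal numpy sketch. The refine function is a trivial stand-in for the depth refinement machine learning model; all data is synthetic.

```python
# Minimal numpy sketch of the merge step: two intermediate depth maps,
# predicted with the segmentation mask and with its inverse, are merged.
import numpy as np

def refine(depth, image, mask):
    """Placeholder for the depth refinement model: smooths depth values
    inside the masked region toward the regional mean."""
    out = depth.copy()
    if mask.any():
        out[mask] = 0.5 * depth[mask] + 0.5 * depth[mask].mean()
    return out

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))          # input digital image
depth = rng.random((128, 128))             # depth map from a depth estimator
mask = np.zeros((128, 128), dtype=bool)    # digital segmentation mask (subject)
mask[32:96, 32:96] = True

inter_fg = refine(depth, image, mask)      # intermediate map for the mask
inter_bg = refine(depth, image, ~mask)     # intermediate map for the inverse mask
refined = np.where(mask, inter_fg, inter_bg)  # merged, refined depth map
```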
  • Patent number: 12367626
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input.
    Type: Grant
    Filed: November 15, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Radomir Mech, Nathan Carr, Matheus Gadelha
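A minimal sketch of the density-guided sampling and tessellation idea from the entry above, assuming a toy disparity map in place of the first neural network and scipy's Delaunay triangulation as the tessellation step; the camera-parameter network is omitted entirely.

```python
# Sample more 2D points where (stand-in) disparity-derived density is high,
# then build a triangulation over the samples.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
h, w = 64, 64
disparity = (np.linspace(0, 1, h)[:, None] ** 2) * np.ones((h, w))  # toy disparity
density = np.abs(np.gradient(disparity, axis=0)) + 1e-6             # toy density values
prob = (density / density.sum()).ravel()

# Sample pixel locations according to the density values.
idx = rng.choice(h * w, size=500, replace=False, p=prob)
ys, xs = np.unravel_index(idx, (h, w))
points = np.stack([xs, ys], axis=1).astype(float)

tri = Delaunay(points)                                    # 2D tessellation of samples
vertices = np.column_stack([points, disparity[ys, xs]])  # lift by toy disparity
faces = tri.simplices                                     # triangle indices of the mesh
```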
  • Patent number: 12367238
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for a visual and text search interface used to navigate a video transcript. In an example embodiment, a freeform text query triggers a visual search for frames of a loaded video that match the freeform text query (e.g., frame embeddings that match a corresponding embedding of the freeform query), and triggers a text search for matching words from a corresponding transcript or from tags of detected features from the loaded video. Visual search results are displayed (e.g., in a row of tiles that can be scrolled to the left and right), and textual search results are displayed (e.g., in a row of tiles that can be scrolled up and down). Selecting (e.g., clicking or tapping on) a search result tile navigates a transcript interface to a corresponding portion of the transcript.
    Type: Grant
    Filed: October 17, 2022
    Date of Patent: July 22, 2025
    Assignee: ADOBE INC.
    Inventors: Lubomira Assenova Dontcheva, Dingzeyu Li, Kim Pascal Pimmel, Hijung Shin, Hanieh Deilamsalehy, Aseem Omprakash Agarwala, Joy Oakyung Kim, Joel Richard Brandt, Cristin Ailidh Fraser
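The dual search described above, a visual match against frame embeddings plus a text match against the transcript, can be sketched as follows. The embed function, frame timestamps, and transcript lines are illustrative placeholders, not the patented encoders or interface.

```python
# Freeform query matched against frame embeddings (visual search) and
# against transcript words (text search).
import numpy as np

def embed(text, dim=64):
    """Toy embedding, stable within a single run; a real system would use a
    learned text/image encoder."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Pretend per-frame embeddings and a time-coded transcript for a loaded video.
frames = {t: embed(f"frame-{t}") for t in range(0, 60, 5)}      # seconds -> embedding
transcript = [(0, "we open on a beach at sunrise"),
              (12, "the surfer paddles out"),
              (30, "waves crash against the rocks")]

query = "surfer riding waves"
q = embed(query)

# Visual search: rank frames by similarity of their embeddings to the query.
visual_hits = sorted(frames, key=lambda t: -float(frames[t] @ q))[:3]

# Text search: transcript segments containing any query word.
words = set(query.lower().split())
text_hits = [(t, line) for t, line in transcript if words & set(line.lower().split())]

print("visual result tiles at seconds:", visual_hits)
print("text result tiles:", text_hits)
```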
  • Patent number: 12367586
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Patent number: 12367562
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for panoptically guiding digital image inpainting utilizing a panoptic inpainting neural network. In some embodiments, the disclosed systems utilize a panoptic inpainting neural network to generate an inpainted digital image according to a panoptic segmentation map that defines pixel regions corresponding to different panoptic labels. In some cases, the disclosed systems train a neural network utilizing a semantic discriminator that facilitates generation of digital images that are realistic while also conforming to a semantic segmentation. The disclosed systems generate and provide a panoptic inpainting interface to facilitate user interaction for inpainting digital images. In certain embodiments, the disclosed systems iteratively update an inpainted digital image based on changes to a panoptic segmentation map.
    Type: Grant
    Filed: October 3, 2022
    Date of Patent: July 22, 2025
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Haitian Zheng, Elya Shechtman, Jianming Zhang, Jingwan Lu, Ning Xu, Qing Liu, Scott Cohen, Sohrab Amirghodsi
  • Patent number: 12361300
    Abstract: Certain embodiments involve using machine-learning methods to generate a recommendation for sequential content items. A method involves accessing a content item associated with an interaction stage in an online environment. A stage graph of the content item, which includes a ratio of interactions, is generated. An additional content item that includes additional stage-transition content is identified. A sequencing function outcome indicating a portion of the ratio of interactions is determined. A transition probability of receiving an interaction with stage-transition content and an additional interaction with the additional stage-transition content is calculated. A content provider system is caused to provide a recipient device with interactive content that includes the additional content item.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: July 15, 2025
    Assignee: ADOBE INC.
    Inventors: Niyati Himanshu Chhaya, Niranjan Kumbi, Balaji Vasan Srinivasan, Akangsha Bedmutha, Ajay Awatramani, Sreekanth Reddy
  • Patent number: 12361512
    Abstract: This disclosure describes one or more implementations of a digital image semantic layout manipulation system that generates refined digital images resembling the style of one or more input images while following the structure of an edited semantic layout. For example, in various implementations, the digital image semantic layout manipulation system builds and utilizes a sparse attention warped image neural network to generate high-resolution warped images and a digital image layout neural network to enhance and refine the high-resolution warped digital image into a realistic and accurate refined digital image.
    Type: Grant
    Filed: April 11, 2023
    Date of Patent: July 15, 2025
    Assignee: Adobe Inc.
    Inventors: Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Jianming Zhang, Ning Xu
  • Publication number: 20250225697
    Abstract: Techniques for prompt-based image relighting and editing are described that support automatic generation of an edited digital image with high-fidelity and realistic lighting effects and background features. A processing device, for instance, receives as input a digital image that depicts a digital object, a lighting prompt, and a background prompt. The processing device generates a relit digital object that has a lighting condition specified by the lighting prompt applied to the digital object. The processing device further generates a background that includes a feature specified by the background prompt and the lighting condition. The processing device generates an edited digital image for output that includes the relit digital object and the background. The processing device further leverages a shadow synthesis model to edit shadows in the edited digital image. In this way, the techniques described herein preserve content details of the digital object when applying background and lighting effects.
    Type: Application
    Filed: January 5, 2024
    Publication date: July 10, 2025
    Applicant: Adobe Inc.
    Inventors: Ambareesh Revanur, Shradha Agrawal, Deepak Pai
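A minimal pipeline sketch of the flow in the publication above: relight the object per a lighting prompt, generate a background per a background prompt under the same lighting, composite, then synthesize shadows. Every function below is a hand-written placeholder standing in for a generative model, not the described implementation.

```python
import numpy as np

def relight_object(obj_rgba, lighting_prompt):
    # Placeholder: brighten or darken depending on the prompt text.
    gain = 1.2 if "warm" in lighting_prompt or "sunset" in lighting_prompt else 0.9
    out = obj_rgba.copy()
    out[..., :3] = np.clip(out[..., :3] * gain, 0, 1)
    return out

def generate_background(background_prompt, lighting_prompt, shape):
    # Placeholder: a flat gradient standing in for a text-to-image model output.
    ramp = np.linspace(0.2, 0.8, shape[1])
    return np.broadcast_to(ramp[None, :, None], shape).copy()

def synthesize_shadows(composite, obj_rgba):
    # Placeholder: darken pixels just below the object's alpha footprint.
    shadow = np.roll(obj_rgba[..., 3:] > 0.5, 8, axis=0)
    return np.where(shadow, composite * 0.7, composite)

h, w = 128, 128
obj_rgba = np.zeros((h, w, 4)); obj_rgba[40:90, 50:80] = [0.8, 0.3, 0.3, 1.0]
lighting_prompt, background_prompt = "warm sunset light", "a quiet beach"

relit = relight_object(obj_rgba, lighting_prompt)
bg = generate_background(background_prompt, lighting_prompt, (h, w, 3))
alpha = relit[..., 3:]
composite = relit[..., :3] * alpha + bg * (1 - alpha)   # edited digital image
edited = synthesize_shadows(composite, obj_rgba)
```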
  • Patent number: 12354149
    Abstract: Navigation and reward techniques involving physical goods and services are described. In one example, digital content is configured to aid navigation of a user between different physical goods or services. This navigation includes user-specified goods or services as well as recommended goods or services that are not specified by the user. In another example, digital content is provided as part of a reward system. In return for permitting access to user data, the user is provided with rewards that are based on the monitored interaction. In this way, an owner of a store may gain detailed knowledge that may be used to increase the likelihood of offering goods or services of interest to the user. In return, the user is provided with rewards for permitting access to this detailed knowledge.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: July 8, 2025
    Assignee: Adobe Inc.
    Inventors: Peter Raymond Fransen, Matthew William Rozen, Brian David Williams, Cory Lynn Edwards
  • Patent number: 12347080
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that modify digital images via scene-based editing using image understanding facilitated by artificial intelligence. For example, in one or more embodiments the disclosed systems utilize generative machine learning models to create modified digital images portraying human subjects. In particular, the disclosed systems generate modified digital images by performing infill modifications to complete a digital image or human inpainting for portions of a digital image that portrays a human. Moreover, in some embodiments, the disclosed systems perform reposing of subjects portrayed within a digital image to generate modified digital images. In addition, the disclosed systems in some embodiments perform facial expression transfer and facial expression animations to generate modified digital images or animations.
    Type: Grant
    Filed: March 27, 2023
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Krishna Kumar Singh, Yijun Li, Jingwan Lu, Duygu Ceylan Aksit, Yangtuanfeng Wang, Jimei Yang, Tobias Hinz, Qing Liu, Jianming Zhang, Zhe Lin
  • Patent number: 12347034
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating digital chain pull paintings in digital images. The disclosed system generates, utilizing a neural network, a plurality of matrices over an ambient space for a plurality of polygons of a three-dimensional mesh based on a plurality of features of the plurality of polygons associated with the three-dimensional mesh. The disclosed system determines a gradient field based on the plurality of matrices of the plurality of polygons. The disclosed system generates a mapping for the three-dimensional mesh based on the gradient field and a differential operator corresponding to the three-dimensional mesh.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Noam Aigerman, Kunal Gupta, Jun Saito, Thibault Groueix, Vladimir Kim, Siddhartha Chaudhuri
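The final step of the entry above, turning a gradient field and a differential operator into a mapping, can be illustrated in a simplified scalar setting. The per-polygon matrices predicted by the neural network are replaced with hand-picked edge gradients here, so this is only an analogy to the patented formulation, not an implementation of it.

```python
# Recover a per-vertex mapping from a target gradient field by solving a
# least-squares (Poisson-style) system with the mesh's differential operator.
import numpy as np

# A tiny "mesh": 4 vertices, edges encoded as a differential (incidence) operator D.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n_vertices = 4
D = np.zeros((len(edges), n_vertices))
for row, (i, j) in enumerate(edges):
    D[row, i], D[row, j] = -1.0, 1.0       # finite difference along each edge

# Target gradient field (one value per edge), standing in for the field the
# network-predicted matrices would induce.
g = np.array([1.0, 2.0, 1.0, 3.0])

# Solve min_x ||D x - g||^2 with one vertex pinned to remove the constant shift.
A = np.vstack([D, np.eye(n_vertices)[:1]])  # pin vertex 0 to 0
b = np.concatenate([g, [0.0]])
mapping, *_ = np.linalg.lstsq(A, b, rcond=None)
print("recovered per-vertex mapping:", mapping)
```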
  • Patent number: 12346827
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating semantic scene graphs for digital images using an external knowledgebase for feature refinement. For example, the disclosed system can determine object proposals and subgraph proposals for a digital image to indicate candidate relationships between objects in the digital image. The disclosed system can then extract relationships from an external knowledgebase for refining features of the object proposals and the subgraph proposals. Additionally, the disclosed system can generate a semantic scene graph for the digital image based on the refined features of the object/subgraph proposals. Furthermore, the disclosed system can update/train a semantic scene graph generation network based on the generated semantic scene graph. The disclosed system can also reconstruct the image using object labels based on the refined features to further update/train the semantic scene graph generation network.
    Type: Grant
    Filed: June 3, 2022
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Handong Zhao, Zhe Lin, Sheng Li, Mingyang Ling, Jiuxiang Gu
  • Patent number: 12346655
    Abstract: Systems and methods for performing Document Visual Question Answering tasks are described. A document and query are received. The document encodes document tokens and the query encodes query tokens. The document is segmented into nested document sections, lines, and tokens. A nested structure of tokens is generated based on the segmented document. A feature vector for each token is generated. A graph structure is generated based on the nested structure of tokens. Each graph node corresponds to the query, a document section, a line, or a token. The node connections correspond to the nested structure. Each node is associated with the feature vector for the corresponding object. A graph attention network is employed to generate another embedding for each node. These embeddings are employed to identify a portion of the document that includes a response to the query. An indication of the identified portion of the document is provided.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: July 1, 2025
    Assignee: Adobe Inc.
    Inventors: Shijie Geng, Christopher Tensmeyer, Curtis Michael Wigington, Jiuxiang Gu
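A minimal numpy sketch of the graph step in the abstract above: nodes for the query, sections, lines, and tokens, edges that follow the nesting, and one masked attention pass that scores which line best answers the query. The feature vectors and the tiny document are toy stand-ins, not the patented graph attention network.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Node 0: query; nodes 1-2: sections; nodes 3-5: lines; nodes 6-9: tokens.
labels = ["query", "sec0", "sec1", "line0", "line1", "line2", "t0", "t1", "t2", "t3"]
feats = rng.standard_normal((len(labels), dim))

# Edges encode the nested structure (query-section, section-line, line-token).
edges = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (3, 6), (4, 7), (4, 8), (5, 9)]
adj = np.eye(len(labels), dtype=bool)
for i, j in edges:
    adj[i, j] = adj[j, i] = True

def graph_attention(x, adj):
    """One masked scaled-dot-product attention layer over graph neighbors."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    scores = np.where(adj, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

h = graph_attention(feats, adj)             # updated node embeddings
line_nodes = [3, 4, 5]
sims = [float(h[0] @ h[i]) for i in line_nodes]
best = line_nodes[int(np.argmax(sims))]
print("line most likely to contain the answer:", labels[best])
```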
  • Patent number: 12340544
    Abstract: Embodiments are disclosed for user-guided variable-rate compression. A method of user-guided variable-rate compression includes receiving a request to compress an image, the request including the image, corresponding importance data, and a target bitrate, providing the image, the corresponding importance data, and the target bitrate to a compression network, generating, by the compression network, a learned importance map and a representation of the image, and generating, by the compression network, a compressed representation of the image based on the learned importance map and the representation of the image.
    Type: Grant
    Filed: April 6, 2022
    Date of Patent: June 24, 2025
    Assignee: Adobe Inc.
    Inventors: Suryateja Bv, Sharmila Reddy Nangi, Rushil Gupta, Rajat Jaiswal, Nikhil Kapoor, Kuldeep Kulkarni
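The bit-allocation idea in the entry above can be sketched with plain numpy: user-supplied importance, blended with image detail as a stand-in for the learned importance map, decides how coarsely each region of the representation is quantized under a target bitrate. No learned compression network is involved in this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))
user_importance = np.zeros((64, 64)); user_importance[16:48, 16:48] = 1.0
target_bpp = 2.0  # target bits per pixel

# "Learned" importance map stand-in: a blend of user importance and image detail.
gy, gx = np.gradient(image)
detail = np.abs(gy) + np.abs(gx)
importance = 0.7 * user_importance + 0.3 * detail / (detail.max() + 1e-8)

# Allocate per-pixel bit depths that roughly average to the target bitrate.
bits = np.clip(np.round(target_bpp * importance / importance.mean()), 1, 8)

# Quantize the "representation" (here, the image itself) at the allotted depth.
levels = 2.0 ** bits
compressed = np.round(image * (levels - 1)) / (levels - 1)

print("average allotted bits per pixel:", bits.mean())
```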
  • Patent number: 12340166
    Abstract: Techniques for document decomposition based on determined logical visual layering of document content. The techniques include iteratively identifying a plurality of logical visual layers of a document resulting in each logical visual layer being associated with one or more document content objects of the document. The one or more document content objects associated with each logical visual layer are annotated to be indicative of the associated logical visual layer. The document is then displayed with an indication of one or more of the annotated document objects.
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: June 24, 2025
    Assignee: Adobe Inc.
    Inventors: Punit Singh, Jayant Vaibhav Srivastava, Ankit Bal
  • Patent number: 12340441
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure receive a raster image depicting a radial color gradient; compute an origin point of the radial color gradient based on an orthogonality measure between a color gradient vector at a point in the raster image and a relative position vector between the point and the origin point; construct a vector graphics representation of the radial color gradient based on the origin point; and generate a vector graphics image depicting the radial color gradient based on the vector graphics representation.
    Type: Grant
    Filed: August 31, 2022
    Date of Patent: June 24, 2025
    Assignee: ADOBE INC.
    Inventors: Michal Lukac, Souymodip Chakraborty, Matthew David Fisher, Vineet Batra, Ankit Phogat
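The origin-point computation described above has a direct least-squares reading: at the true origin of a radial gradient, the color gradient at every pixel is parallel to the vector from the origin to that pixel, so their 2D cross product (an orthogonality measure) vanishes. The sketch below solves for that point on a synthetic radial ramp; it is an illustration, not the patented vectorization pipeline.

```python
import numpy as np

h, w, true_center = 128, 128, (80.0, 40.0)    # (x, y)
ys, xs = np.mgrid[0:h, 0:w]
raster = np.hypot(xs - true_center[0], ys - true_center[1])  # radial ramp

gy, gx = np.gradient(raster)                  # per-pixel color gradient vector
px, py = xs.ravel().astype(float), ys.ravel().astype(float)
gx, gy = gx.ravel(), gy.ravel()

# cross(g, p - c) = 0  =>  gy * cx - gx * cy = gy * px - gx * py  (linear in c)
A = np.stack([gy, -gx], axis=1)
b = gy * px - gx * py
origin, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated origin point:", origin, "true:", true_center)
```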
  • Patent number: 12340606
    Abstract: Embodiments are disclosed for providing customizable, visually aesthetic, color-diverse template recommendations derived from a source image. A method may include receiving a source image and determining a source image background by separating a foreground of the source image from a background of the source image. The method separates the foreground from the background by identifying portions of the image that belong to the background and stripping out the rest of the image. The method includes identifying a text region of the source image using a machine learning model and identifying a font type using the identified text region. The method includes generating an editable template image using the source image background, the text region, and the font type.
    Type: Grant
    Filed: November 9, 2022
    Date of Patent: June 24, 2025
    Assignee: Adobe Inc.
    Inventors: Prasenjit Mondal, Sachin Soni, Anshul Malik
  • Patent number: 12340571
    Abstract: Various embodiments classify one or more portions of an image based on deriving an "intrinsic" modality. Such an intrinsic modality acts as a substitute for a "text" modality in a multi-modal network. A text modality in image processing is typically a natural language text that describes one or more portions of an image. However, explicit natural language text may not be available across one or more domains for training a multi-modal network. Accordingly, various embodiments described herein generate an intrinsic modality, which is also a description of one or more portions of an image, except that such description is not an explicit natural language description, but rather a machine learning model representation. Some embodiments additionally leverage a visual modality obtained from a vision-only model or branch, which may learn domain characteristics that are not present in the multi-modal network.
    Type: Grant
    Filed: October 28, 2022
    Date of Patent: June 24, 2025
    Assignee: Adobe Inc.
    Inventors: Puneet Mangla, Milan Aggarwal, Balaji Krishnamurthy
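A minimal sketch of the substitution idea in the last entry: when no natural language text is available, an embedding derived from the image itself (the "intrinsic" modality) stands in for the text embedding in a two-branch similarity model. All modules and dimensions below are placeholders, not the patented networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchModel(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.text_encoder = nn.Sequential(nn.Linear(100, dim))       # bag-of-words input
        self.intrinsic_encoder = nn.Sequential(nn.Linear(dim, dim))  # image-derived

    def forward(self, image, text_features=None):
        img = F.normalize(self.image_encoder(image), dim=-1)
        if text_features is not None:
            other = self.text_encoder(text_features)      # explicit text modality
        else:
            other = self.intrinsic_encoder(img.detach())  # intrinsic modality instead
        other = F.normalize(other, dim=-1)
        return (img * other).sum(dim=-1)                  # cross-modal similarity

model = TwoBranchModel()
image = torch.rand(4, 3, 32, 32)
score_with_text = model(image, text_features=torch.rand(4, 100))
score_without_text = model(image)            # intrinsic modality fills in for text
```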