Patents Assigned to Adobe Inc.
  • Patent number: 12217395
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode a content image and a style image using a machine learning model to obtain content features and style features, wherein the content image includes a first object having a first appearance attribute and the style image includes a second object having a second appearance attribute; align the content features and the style features to obtain a sparse correspondence map that indicates a correspondence between a sparse set of pixels of the content image and corresponding pixels of the style image; and generate a hybrid image based on the sparse correspondence map, wherein the hybrid image depicts the first object having the second appearance attribute.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: February 4, 2025
    Assignee: ADOBE INC.
    Inventors: Sangryul Jeon, Zhifei Zhang, Zhe Lin, Scott Cohen, Zhihong Ding
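The alignment step in this abstract can be sketched with a toy nearest-neighbour matcher: for a sparse set of content pixels, find the most similar style pixel by cosine similarity over feature vectors. This is an illustrative stand-in, not the patented learned alignment; all names and the similarity choice are assumptions.

```python
import numpy as np

def sparse_correspondence(content_feats, style_feats, sparse_idx):
    """For each selected content pixel, find the most similar style pixel
    by cosine similarity over feature vectors (a stand-in for the learned
    alignment described in the abstract)."""
    # Normalize feature vectors so dot products become cosine similarities.
    c = content_feats / np.linalg.norm(content_feats, axis=1, keepdims=True)
    s = style_feats / np.linalg.norm(style_feats, axis=1, keepdims=True)
    sims = c[sparse_idx] @ s.T              # (n_sparse, n_style_pixels)
    matches = sims.argmax(axis=1)           # best style pixel per sparse pixel
    return dict(zip(sparse_idx, matches.tolist()))

# Toy features: 6 content pixels, 6 style pixels, 4-dim features.
rng = np.random.default_rng(0)
content = rng.normal(size=(6, 4))
style = content[::-1].copy()                # style pixels are reversed content
corr = sparse_correspondence(content, style, sparse_idx=[0, 2, 5])
```

With the style features being the reversed content features, each sparse content pixel maps to its mirror-image style pixel.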
  • Patent number: 12217742
    Abstract: Embodiments are disclosed for generating full-band audio from narrowband audio using a GAN-based audio super resolution model. A method of generating full-band audio may include receiving narrow-band input audio data, upsampling the narrow-band input audio data to generate upsampled audio data, providing the upsampled audio data to an audio super resolution model, the audio super resolution model trained to perform bandwidth expansion from narrow-band to wide-band, and returning wide-band output audio data corresponding to the narrow-band input audio data.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: February 4, 2025
    Assignees: Adobe Inc., The Trustees of Princeton University
    Inventors: Zeyu Jin, Jiaqi Su, Adam Finkelstein
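The preprocessing step described here (upsampling narrow-band audio before it reaches the super-resolution model) can be sketched with linear interpolation; the GAN-based model that fills in high-frequency content is not shown, and the function names are illustrative.

```python
import numpy as np

def upsample_audio(narrow, in_rate, out_rate):
    """Linear-interpolation upsampling of narrow-band samples to a higher
    sample rate -- the preprocessing step before the (not shown) GAN-based
    super-resolution model refines the missing high-frequency content."""
    duration = len(narrow) / in_rate
    t_in = np.arange(len(narrow)) / in_rate
    t_out = np.arange(int(duration * out_rate)) / out_rate
    return np.interp(t_out, t_in, narrow)

# 8 kHz narrow-band tone upsampled to 16 kHz.
narrow = np.sin(2 * np.pi * 440 * np.arange(800) / 8000)
wide = upsample_audio(narrow, in_rate=8000, out_rate=16000)
```

Every second output sample lands exactly on an input sample, so the original narrow-band content is preserved while the sample rate doubles.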
  • Patent number: 12216677
Abstract: Systems and methods for data analysis are described. Embodiments of the present disclosure include displaying, via a data analysis interface, a data visualization in a first region of the data analysis interface; and displaying, via the data analysis interface, an analysis thread visualization in a second region of the data analysis interface. The analysis thread visualization depicts an analysis thread graph including a first node corresponding to the data visualization and an edge corresponding to an analysis path between the first node and a second node.
    Type: Grant
    Filed: June 5, 2023
    Date of Patent: February 4, 2025
    Assignee: ADOBE INC.
    Inventors: Chen Chen, Jane Elizabeth Hoffswell, Shunan Guo, Fan Du, Nathan Carl Ross, Ryan A. Rossi, Yeuk Yin Chan, Eunyee Koh
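The analysis-thread graph in this abstract is essentially nodes (visualizations) connected by edges (analysis paths). A minimal sketch, with all class and field names assumed for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisThread:
    """Minimal stand-in for the analysis-thread graph: nodes are data
    visualizations, edges record the analysis path between them."""
    nodes: dict = field(default_factory=dict)   # node_id -> visualization label
    edges: list = field(default_factory=list)   # (from_id, to_id) pairs

    def add_step(self, from_id, to_id, label):
        """Branch from an existing visualization to a new one."""
        self.nodes[to_id] = label
        if from_id is not None:
            self.edges.append((from_id, to_id))

thread = AnalysisThread()
thread.add_step(None, "v1", "sales by region")
thread.add_step("v1", "v2", "sales by region, filtered to Q4")
thread.add_step("v1", "v3", "sales by product")
```

Branching two steps off the same node models an analyst exploring two directions from one visualization.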
  • Patent number: 12219180
Abstract: Embodiments described herein provide methods and systems for facilitating actively-learned context modeling. In one embodiment, a subset of data is selected from a training dataset corresponding with an image to be compressed, the subset of data corresponding with a subset of the pixels of the image. A context model is generated using the selected subset of data. The context model is generally in the form of a decision tree having a set of leaf nodes. Entropy values corresponding with each leaf node of the set of leaf nodes are determined. Each entropy value indicates an extent of diversity of context associated with the corresponding leaf node. Additional data from the training dataset is selected based on the entropy values corresponding with the leaf nodes. The updated subset of data is used to generate an updated context model for use in performing compression of the image.
    Type: Grant
    Filed: May 20, 2022
    Date of Patent: February 4, 2025
    Assignee: Adobe Inc.
    Inventors: Gang Wu, Yang Li, Stefano Petrangeli, Viswanathan Swaminathan, Haoliang Wang, Ryan A. Rossi, Zhao Song
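The entropy-driven selection step can be illustrated with Shannon entropy over the context labels in each leaf: leaves with diverse context score high and get more training data. The leaf names and labels below are hypothetical.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of the context labels that fall in one leaf;
    higher values mean more diverse context, so the leaf needs more data."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical leaves of a context-model decision tree: pixel-context labels.
leaves = {
    "leaf_a": ["edge", "edge", "edge", "edge"],     # uniform -> entropy 0
    "leaf_b": ["edge", "flat", "texture", "flat"],  # mixed -> high entropy
    "leaf_c": ["flat", "flat", "texture", "flat"],
}
scores = {name: entropy(labels) for name, labels in leaves.items()}
# Actively select the leaf whose context is most diverse.
most_uncertain = max(scores, key=scores.get)
```

`leaf_b` wins (1.5 bits) because its labels are the most mixed, so under this sketch it is the leaf whose region of the image would receive additional training data.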
  • Publication number: 20250037325
Abstract: Environment map upscaling techniques are described for digital image generation. A digital object and an environment map are received; the environment map defines lighting conditions within a panoramic view of an environment. A viewpoint is detected with respect to the panoramic view in the environment map. A map fragment is identified from the environment map based on the detected viewpoint and an upscaled map fragment is formed by upscaling the map fragment. A digital image is then generated based on the upscaled map fragment and the digital object as having the lighting conditions applied based on the environment map.
    Type: Application
    Filed: July 24, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventor: Vincent Guillaume Gault
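The fragment-then-upscale flow can be sketched as a yaw/field-of-view crop of an equirectangular panorama followed by nearest-neighbour upscaling. The real system would use a learned upscaler; everything here is a toy stand-in.

```python
import numpy as np

def fragment_for_viewpoint(env_map, yaw_deg, fov_deg):
    """Crop the slice of an equirectangular environment map visible from a
    viewpoint given by yaw and horizontal field of view (toy version)."""
    h, w = env_map.shape[:2]
    center = int((yaw_deg % 360) / 360 * w)
    half = int(fov_deg / 360 * w) // 2
    cols = [(center + dx) % w for dx in range(-half, half)]  # wraps at 360
    return env_map[:, cols]

def upscale_nearest(frag, factor):
    """Nearest-neighbour upscaling of the fragment (the abstract's upscaler
    would be a learned model; this just shows the data flow)."""
    return frag.repeat(factor, axis=0).repeat(factor, axis=1)

env = np.arange(4 * 8).reshape(4, 8)        # tiny 4x8 "panorama"
frag = fragment_for_viewpoint(env, yaw_deg=90, fov_deg=90)
up = upscale_nearest(frag, factor=2)
```

Only the visible fragment is upscaled, which is the efficiency point of the abstract: the rest of the panorama stays at its original resolution.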
  • Publication number: 20250036874
    Abstract: Techniques are disclosed for prompt-based few-shot entity extraction. The techniques include obtaining an annotated natural language document set for an arbitrary new entity type. A prompt sequence set is generated based on the annotated document set. A pre-trained entity extraction model is trained based on the prompt sequence set to yield a few-shot trained entity extraction model trained to extract at least the arbitrary new entity type. In response to obtaining a test document set, one or more entities of the arbitrary new entity type are extracted from the test document set using the few-shot trained entity extraction model.
    Type: Application
    Filed: July 27, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventors: Inderjeet NAIR, Vikas BALANI, Pritika RAMU, Kumud LAKARA, Akshay SINGHAL, Anandhavelu N
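The prompt-sequence generation step can be sketched as turning annotated documents into prompt/target pairs for an arbitrary new entity type. The prompt template and the "none" convention below are illustrative assumptions, not the patented format.

```python
def build_prompt_sequences(annotated_docs, entity_type):
    """Turn annotated documents into prompt/target pairs for few-shot tuning
    of an entity extractor (prompt template is illustrative only)."""
    sequences = []
    for text, entities in annotated_docs:
        prompt = f"Extract all {entity_type} entities from: {text}"
        target = "; ".join(entities) if entities else "none"
        sequences.append((prompt, target))
    return sequences

docs = [
    ("Invoice 991 is due on March 3.", ["Invoice 991"]),
    ("No invoices were issued this week.", []),
]
prompts = build_prompt_sequences(docs, entity_type="invoice-id")
```

A pre-trained extraction model fine-tuned on such pairs can then be queried with the same prompt shape on unseen test documents.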
  • Publication number: 20250037330
    Abstract: Digital object fusion techniques and systems are described. In one or more examples, a base object and an adornment object are received and anchor points of the base object and the adornment object are detected by a digital object fusion system. The digital object fusion system then identifies linked anchor points from the anchor points as supporting a link between the base object and the adornment object. A path is fused by the digital object fusion system that defines the base object and the adornment object based at least in part on the linked anchor points. From this, a fused object is generated by the digital object fusion system by propagating visual style data to the path from the base object or the adornment object.
    Type: Application
    Filed: July 29, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Shivi Pal, Arushi Jain
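The anchor-linking step can be sketched as pairing each adornment anchor with its nearest base anchor within a distance threshold; the path fusing and style propagation are not shown, and the threshold and coordinates are hypothetical.

```python
import math

def link_anchor_points(base_anchors, adorn_anchors, max_dist=1.0):
    """Pair each adornment anchor with the nearest base anchor within
    max_dist; these linked anchors support the fused path (toy version)."""
    links = []
    for ax, ay in adorn_anchors:
        nearest = min(base_anchors, key=lambda p: math.dist(p, (ax, ay)))
        if math.dist(nearest, (ax, ay)) <= max_dist:
            links.append((nearest, (ax, ay)))
    return links

base = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)]   # e.g. anchor points on a base path
adorn = [(4.2, 0.3), (20.0, 20.0)]            # one anchor nearby, one far away
links = link_anchor_points(base, adorn, max_dist=1.0)
```

Only the nearby adornment anchor produces a link; the distant one is ignored, so it would not support a fused path.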
  • Publication number: 20250036858
Abstract: Techniques discussed herein generally relate to applying machine-learning techniques to design documents to determine relationships among the different style elements within the document. In one example, a hypergraph model is trained on a corpus of hypertext markup language (HTML) documents. The trained model is utilized to identify one or more candidate style elements for a candidate fragment and/or one or more candidate fragments. Each of the candidates is scored, and at least a portion of the scored candidates are presented as design options for generating a new document.
    Type: Application
    Filed: July 25, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventors: Ryan Rossi, Ryan Aponte, Shunan Guo, Nedim Lipka, Jane Hoffswell, Chang Xiao, Eunyee Koh, Yeuk-yin Chan
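The candidate-scoring step can be illustrated with a crude co-occurrence counter in place of the trained hypergraph model: style elements that frequently appear together with a fragment's existing elements score higher. The corpus and element names are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_scores(documents, candidate):
    """Score candidate style elements by how often they co-occur with the
    candidate fragment's existing elements across a corpus (a crude
    stand-in for the trained hypergraph model)."""
    pair_counts = Counter()
    for styles in documents:
        for a, b in combinations(sorted(set(styles)), 2):
            pair_counts[(a, b)] += 1
    scores = Counter()
    for elem in candidate:
        for (a, b), n in pair_counts.items():
            other = b if a == elem else a if b == elem else None
            if other is not None and other not in candidate:
                scores[other] += n
    return scores

corpus = [
    ["serif-font", "boxed-header", "blue-accent"],
    ["serif-font", "boxed-header"],
    ["sans-font", "flat-header"],
]
ranked = cooccurrence_scores(corpus, candidate=["serif-font"])
```

The top-scored candidates would then be surfaced as design options for the new document.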
  • Publication number: 20250037461
    Abstract: Embodiments are disclosed for a method including obtaining a region of interest of a current frame of a video sequence depicting an object. The method may further include determining, by a mask propagation model, a likelihood of each pixel of the current frame being associated with the object in the region of interest of the current frame based on the region of interest of the current frame and a fixed number of previous frames of the video sequence including the object. The method may further include replacing a previous frame of the fixed number of previous frames with the current frame. The method may further include displaying the current frame of the video sequence including a masked object in the region of interest of the current frame based on the likelihood of one or more pixels of the current frame being associated with the object.
    Type: Application
    Filed: July 28, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventors: Joon-Young LEE, Seoung Wug OH, John G. NELSON, Wujun WANG
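The "fixed number of previous frames" bookkeeping in this abstract is a sliding window: once the memory is full, pushing the current frame evicts the oldest one. A minimal sketch using a bounded deque (the class and field names are illustrative):

```python
from collections import deque

class FrameMemory:
    """Fixed-size memory of previous frames for mask propagation: once full,
    adding the current frame evicts the oldest one, as in the abstract's
    'replacing a previous frame ... with the current frame'."""
    def __init__(self, size):
        self.frames = deque(maxlen=size)

    def push(self, frame_id):
        self.frames.append(frame_id)

    def context(self):
        return list(self.frames)

memory = FrameMemory(size=3)
for frame_id in range(5):         # frames 0..4 of the video sequence
    memory.push(frame_id)
```

After five frames only the three most recent remain, which is the bounded context the mask propagation model would condition on.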
  • Publication number: 20250036678
    Abstract: In implementations of systems for searching for images using generated images, a computing device implements a search system to receive a natural language search query for digital images included in a digital image repository. The search system generates a set of digital images using a machine learning model based on the natural language search query. The machine learning model is trained on training data to generate digital images based on natural language inputs. The search system performs an image-based search for digital images included in the digital image repository using the set of digital images. An indication of the search result is generated for display in a user interface based on performing the image-based search.
    Type: Application
    Filed: July 29, 2023
    Publication date: January 30, 2025
    Applicant: Adobe Inc.
    Inventors: Saikat Chakrabarty, Shikhar Garg
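The overall pipeline (text query, then generated images, then an image-based search of the repository) can be sketched with stub models and cosine similarity over embeddings. The generator and embedder below are toys standing in for the trained models.

```python
import numpy as np

def image_search_via_generation(query, repository, generate, embed, top_k=1):
    """Generate images from the text query, embed them, and rank repository
    images by best cosine similarity to any generated image. `generate` and
    `embed` stand in for the trained models in the abstract."""
    gen_vecs = np.stack([embed(img) for img in generate(query)])
    gen_vecs /= np.linalg.norm(gen_vecs, axis=1, keepdims=True)
    scores = {}
    for name, img in repository.items():
        v = embed(img)
        v = v / np.linalg.norm(v)
        scores[name] = float((gen_vecs @ v).max())
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy "images" are already feature vectors; the generator maps a query to one.
fake_generate = lambda q: [np.array([1.0, 0.0])] if "sunset" in q else [np.array([0.0, 1.0])]
fake_embed = lambda img: img
repo = {"beach_sunset": np.array([0.9, 0.1]), "city_night": np.array([0.1, 0.9])}
result = image_search_via_generation("a sunset over water", repo, fake_generate, fake_embed)
```

The indirection through generated images lets a text query drive a purely image-based similarity search.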
  • Patent number: 12211132
    Abstract: The present disclosure is directed toward systems and methods for retargeting a user's input digital design based on a selected template digital design. For example, in response to the user's selection of a template digital design, one or more embodiments described herein change various design features of the user's input digital design to match corresponding design features in the selected template digital design. One or more embodiments described herein also provide template digital designs to the user for use in retargeting after a two-step selection process that ensures the provided template digital designs are compatible with the user's input digital design.
    Type: Grant
    Filed: September 11, 2023
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Peter O'Donovan, Adam Portilla, Satish Shankar
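The two-step compatibility selection mentioned in this abstract can be sketched as two successive filters over candidate templates; the field names (`format`, `elements`, `slots`) are assumptions made for illustration, not the patented criteria.

```python
def compatible_templates(input_design, templates):
    """Two-step template selection: first keep templates with the same
    format, then keep those that can host every content element of the
    input design (field names are illustrative)."""
    step1 = [t for t in templates if t["format"] == input_design["format"]]
    step2 = [t for t in step1
             if set(input_design["elements"]) <= set(t["slots"])]
    return [t["name"] for t in step2]

design = {"format": "poster", "elements": ["title", "image"]}
templates = [
    {"name": "bold-poster", "format": "poster", "slots": ["title", "image", "footer"]},
    {"name": "flyer", "format": "flyer", "slots": ["title", "image"]},
    {"name": "minimal-poster", "format": "poster", "slots": ["title"]},
]
matches = compatible_templates(design, templates)
```

Only templates passing both filters would be offered to the user for retargeting.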
  • Patent number: 12210813
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate a multi-modal vector and identify a recommended font corresponding to the source font based on the multi-modal vector. For instance, in one or more embodiments, the disclosed systems receive an indication of a source font and determine font embeddings and a glyph metrics embedding. Furthermore, the disclosed system generates, utilizing a multi-modal font machine-learning model, a multi-modal vector representing the source font based on the font embeddings and the glyph metrics embedding.
    Type: Grant
    Filed: November 1, 2022
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Pranay Kumar, Nipun Jindal
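The multi-modal fusion can be sketched as concatenating an appearance embedding with normalized glyph metrics, then recommending the nearest font in that joint space. Simple concatenation stands in for the learned fusion model; all values are invented.

```python
import numpy as np

def multi_modal_vector(font_embedding, glyph_metrics):
    """Fuse a font-appearance embedding with normalized glyph metrics into a
    single multi-modal vector (concatenation stands in for the learned
    multi-modal font model)."""
    metrics = np.asarray(glyph_metrics, dtype=float)
    metrics = metrics / (np.linalg.norm(metrics) or 1.0)
    return np.concatenate([font_embedding, metrics])

def nearest_font(source_vec, library):
    """Recommend the library font whose multi-modal vector is closest."""
    return min(library, key=lambda name: np.linalg.norm(library[name] - source_vec))

emb = np.array([0.2, 0.8])
vec = multi_modal_vector(emb, glyph_metrics=[600.0, 800.0])  # e.g. x-height, cap height
library = {
    "Serif A": multi_modal_vector(np.array([0.25, 0.75]), [590.0, 810.0]),
    "Mono B": multi_modal_vector(np.array([0.9, 0.1]), [500.0, 500.0]),
}
best = nearest_font(vec, library)
```

Including glyph metrics alongside the appearance embedding lets two fonts that look different but share proportions still land near each other.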
  • Patent number: 12210814
    Abstract: Techniques for content-aware font recommendations include obtaining an electronic document comprising an image and text. The image is processed using one or more convolutional neural networks to determine one or more image tags. The image tags are mapped to one or more font tags using a user map, a designer map, or one or more contextual synonyms of the image tags. A font to recommend for the electronic document is then determined using the one or more font tags.
    Type: Grant
    Filed: April 6, 2023
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Neel Kadia, Shikhar Garg, Saikat Chakrabarty
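The mapping step (image tags to font tags via a user map, with contextual synonyms as a fallback) can be sketched with plain dictionaries; the maps and tags below are invented stand-ins for the authored/learned maps in the abstract.

```python
def recommend_font_tags(image_tags, user_map, synonyms):
    """Map image tags to font tags via a user map, falling back to
    contextual synonyms when a tag has no direct mapping (maps here are
    illustrative stand-ins for those described in the abstract)."""
    font_tags = []
    for tag in image_tags:
        mapped = user_map.get(tag)
        if mapped is None:
            for syn in synonyms.get(tag, []):
                mapped = user_map.get(syn)
                if mapped:
                    break
        if mapped:
            font_tags.append(mapped)
    return font_tags

user_map = {"ocean": "light-airy", "vintage-car": "retro-slab"}
synonyms = {"sea": ["ocean", "water"]}
tags = recommend_font_tags(["sea", "vintage-car", "spreadsheet"], user_map, synonyms)
```

"sea" resolves through its synonym "ocean", "vintage-car" maps directly, and the unmapped "spreadsheet" tag is simply dropped.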
  • Patent number: 12211178
    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for combining digital images. In particular, in one or more embodiments, the disclosed systems combine latent codes of a source digital image and a target digital image utilizing a blending network to determine a combined latent encoding and generate a combined digital image from the combined latent encoding utilizing a generative neural network. In some embodiments, the disclosed systems determine an intersection face mask between the source digital image and the combined digital image utilizing a face segmentation network and combine the source digital image and the combined digital image utilizing the intersection face mask to generate a blended digital image.
    Type: Grant
    Filed: April 21, 2022
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Tobias Hinz, Shabnam Ghadar, Richard Zhang, Ratheesh Kalarot, Jingwan Lu, Elya Shechtman
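The two combination steps (blending latent codes, then intersecting face masks) can be sketched with numpy; per-layer interpolation with given weights stands in for the blending network, and boolean AND stands in for the face segmentation intersection.

```python
import numpy as np

def blend_latents(source, target, weights):
    """Per-layer interpolation between source and target latent codes; the
    abstract's blending network learns these weights, here they are given."""
    w = np.asarray(weights)[:, None]
    return (1 - w) * source + w * target

def intersection_mask(mask_a, mask_b):
    """Face-region pixels present in both segmentation masks."""
    return np.logical_and(mask_a, mask_b)

source = np.zeros((3, 4))           # 3 latent "layers" of width 4
target = np.ones((3, 4))
combined = blend_latents(source, target, weights=[0.0, 0.5, 1.0])
mask = intersection_mask(np.array([[1, 1], [0, 1]], bool),
                         np.array([[1, 0], [0, 1]], bool))
```

Layer-wise weights let coarse layers stay close to the source while fine layers take on the target, which is the usual motivation for blending in latent space.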
  • Patent number: 12210825
    Abstract: Systems and methods for image captioning are described. One or more aspects of the systems and methods include generating a training caption for a training image using an image captioning network; encoding the training caption using a multi-modal encoder to obtain an encoded training caption; encoding the training image using the multi-modal encoder to obtain an encoded training image; computing a reward function based on the encoded training caption and the encoded training image; and updating parameters of the image captioning network based on the reward function.
    Type: Grant
    Filed: November 18, 2021
    Date of Patent: January 28, 2025
    Assignee: ADOBE INC.
    Inventors: Jaemin Cho, Seunghyun Yoon, Ajinkya Gorakhnath Kale, Trung Huu Bui, Franck Dernoncourt
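The reward computation can be sketched as cosine similarity between the multi-modal encodings of the caption and the image: captions whose encoding sits near the image encoding earn a higher reward. The encoders themselves are not shown; the vectors are invented.

```python
import numpy as np

def caption_reward(caption_vec, image_vec):
    """Reward for a generated caption: cosine similarity between the
    multi-modal encodings of the caption and the image (the encoders
    themselves are not shown)."""
    c = caption_vec / np.linalg.norm(caption_vec)
    i = image_vec / np.linalg.norm(image_vec)
    return float(c @ i)

image = np.array([0.6, 0.8, 0.0])
good_caption = np.array([0.6, 0.8, 0.1])       # encoding close to the image
bad_caption = np.array([0.0, 0.1, 1.0])        # encoding far from the image
r_good = caption_reward(good_caption, image)
r_bad = caption_reward(bad_caption, image)
```

In the abstract this scalar drives a policy-gradient-style update of the captioning network's parameters.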
  • Patent number: 12211225
    Abstract: A scene reconstruction system renders images of a scene with high-quality geometry and appearance and supports view synthesis, relighting, and scene editing. Given a set of input images of a scene, the scene reconstruction system trains a network to learn a volume representation of the scene that includes separate geometry and reflectance parameters. Using the volume representation, the scene reconstruction system can render images of the scene under arbitrary viewing (view synthesis) and lighting (relighting) locations. Additionally, the scene reconstruction system can render images that change the reflectance of objects in the scene (scene editing).
    Type: Grant
    Filed: April 15, 2021
    Date of Patent: January 28, 2025
    Assignee: ADOBE INC.
    Inventors: Sai Bi, Zexiang Xu, Kalyan Krishna Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Jay Kriegman, Ravi Ramamoorthi
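The geometry/appearance separation in this abstract rests on volume rendering: per-sample density gives compositing weights, and per-sample colour (which relighting or editing would change) is accumulated under those weights. A minimal alpha-compositing sketch along one ray, with all values invented:

```python
import numpy as np

def composite_ray(densities, colors, step):
    """Alpha-composite radiance samples along one ray from per-sample volume
    density (geometry) and colour (appearance) -- the separation that lets
    the abstract's system relight or edit reflectance independently."""
    alphas = 1.0 - np.exp(-np.asarray(densities) * step)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = transmittance * alphas
    return (weights[:, None] * np.asarray(colors)).sum(axis=0), weights

# One ray, three samples: empty space, then a nearly opaque red surface.
color, weights = composite_ray(
    densities=[0.0, 0.0, 50.0],
    colors=[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
    step=1.0,
)
```

Because the weights depend only on density, swapping the colour inputs (e.g. after relighting) re-renders the scene without touching its geometry.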
  • Patent number: 12210800
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that edit digital images using combinations of speech input and gesture interactions. For instance, in some embodiments, the disclosed systems receive speech input from a client device displaying a digital image within a graphical user interface, the digital image portraying an object. Additionally, the disclosed systems detect, via the graphical user interface, one or more gesture interactions with respect to the object of the digital image. Based on the speech input, the disclosed systems determine an edit for the object of the digital image indicated by the one or more gesture interactions. Further, the disclosed systems modify the object within the digital image using the edit indicated by the one or more gesture interactions.
    Type: Grant
    Filed: May 3, 2023
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Nikita Soni, Trung Bui, Kevin Gary Smith
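The combination of modalities described here can be sketched as: the speech names the edit, the gesture picks the object it applies to. The keyword parsing and edit table below are illustrative assumptions, not the patented interpretation pipeline.

```python
def resolve_edit(speech, gestures):
    """Combine a speech command with gesture interactions: the speech names
    the edit, the gesture picks which object it applies to (keyword parsing
    is illustrative)."""
    commands = {"brighter": ("brightness", 20), "darker": ("brightness", -20)}
    edit = next((commands[w] for w in speech.lower().split() if w in commands), None)
    target = gestures[-1]["object"] if gestures else None
    if edit is None or target is None:
        return None
    return {"object": target, "property": edit[0], "delta": edit[1]}

edit = resolve_edit("make this brighter",
                    gestures=[{"object": "sky", "kind": "tap"}])
```

Neither modality alone is enough: the speech supplies the property and direction, the gesture resolves the ambiguous "this".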
  • Patent number: 12211129
    Abstract: Embodiments are disclosed for identifying and modifying overlapping glyphs in a text layout. A method of identifying and modifying overlapping glyphs includes detecting a plurality of overlapping glyphs in a text layout, modifying a geometry of one or more of the overlapping glyphs based on an aesthetic score, updating a rendering tree based on the modified geometry of the one or more overlapping glyphs, and rendering the text layout using the rendering tree.
    Type: Grant
    Filed: May 13, 2022
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Nirmal Kumawat, Arushi Jain
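The detection step can be illustrated with axis-aligned bounding-box intersection over glyph boxes; the aesthetic scoring and geometry modification are not shown, and the box coordinates are invented.

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box test for two glyph boxes (x0, y0, x1, y1)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def find_overlapping_glyphs(glyph_boxes):
    """Return index pairs of glyphs whose bounding boxes intersect -- the
    detection step before the geometry of one glyph is modified."""
    pairs = []
    for i in range(len(glyph_boxes)):
        for j in range(i + 1, len(glyph_boxes)):
            if boxes_overlap(glyph_boxes[i], glyph_boxes[j]):
                pairs.append((i, j))
    return pairs

# Boxes for a swash glyph that spills into its neighbour, plus a distant glyph.
boxes = [(0, 0, 10, 12), (9, 0, 18, 12), (30, 0, 38, 12)]
overlaps = find_overlapping_glyphs(boxes)
```

Real glyph outlines would need outline-level intersection rather than boxes, but the box test is the usual cheap first pass.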
  • Patent number: 12211138
Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for generating editable synthesized views of scenes by inputting image rays into neural networks using neural basis decomposition. In embodiments, a set of input images of a scene depicting at least one object are collected and used to generate a plurality of rays of the scene. The rays each correspond to three-dimensional coordinates and viewing angles taken from the images. A volume density of the scene is determined by inputting the three-dimensional coordinates of the rays into a first neural network to generate a 3D geometric representation of the object. An appearance decomposition is produced by inputting the three-dimensional coordinates and the viewing angles of the rays into a second neural network.
    Type: Grant
    Filed: December 13, 2022
    Date of Patent: January 28, 2025
    Assignee: Adobe Inc.
    Inventors: Zhengfei Kuang, Fujun Luan, Sai Bi, Zhixin Shu, Kalyan K. Sunkavalli
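The ray-generation step (3D coordinates plus viewing angles per pixel) can be sketched for a pinhole camera at the origin looking down -z. The camera model and focal value are assumptions for illustration.

```python
import numpy as np

def generate_rays(height, width, focal):
    """Generate per-pixel ray directions for a pinhole camera at the origin
    looking down -z: the (coordinates, viewing angle) inputs the abstract
    feeds to its two networks."""
    j, i = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    dirs = np.stack([(i - width / 2) / focal,
                     -(j - height / 2) / focal,
                     -np.ones_like(i, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)  # unit viewing angles
    origins = np.zeros_like(dirs)
    return origins, dirs

origins, dirs = generate_rays(height=4, width=6, focal=10.0)
```

Sampled points along each ray supply the 3D coordinates for the density network, while the ray direction supplies the viewing angle for the appearance network.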
  • Publication number: 20250028751
    Abstract: Dialogue skeleton assisted prompt transfer for dialogue summarization techniques are described that support training of a language model to perform dialogue summarization in a few-shot scenario. A processing device, for instance, receives a training dataset that includes training dialogues. The processing device then generates dialogue skeletons based on the training dialogues using one or more perturbation-based probes. The processing device trains a language model using prompt transfer between a source task, e.g., dialogue state tracking, and a target task, e.g., dialogue summarization, using the dialogue skeletons as supervision. The processing device then receives an input dialogue and uses the trained language model to generate a summary of the input dialogue.
    Type: Application
    Filed: July 20, 2023
    Publication date: January 23, 2025
    Applicant: Adobe Inc.
    Inventors: Tong Yu, Kaige Xie, Haoliang Wang, Junda Wu, Handong Zhao, Ruiyi Zhang, Kanak Vivek Mahadik, Ani Nenkova
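The skeleton-extraction idea can be illustrated crudely: keep only the dialogue turns that carry task-relevant content, producing a compressed skeleton to supervise prompt transfer. The cue-word filter below is a toy stand-in for the perturbation-based probes in the abstract.

```python
def dialogue_skeleton(turns, cue_words):
    """Crude stand-in for a perturbation-based probe: keep only turns that
    contain task-relevant cue words, producing a skeleton that can supervise
    prompt transfer for summarization."""
    skeleton = []
    for speaker, text in turns:
        if any(cue in text.lower() for cue in cue_words):
            skeleton.append(f"{speaker}: {text}")
    return skeleton

dialogue = [
    ("Agent", "How can I help you today?"),
    ("Customer", "I want to cancel my subscription."),
    ("Agent", "Sure, one moment."),
    ("Customer", "Thanks, the cancellation is confirmed?"),
]
skeleton = dialogue_skeleton(dialogue, cue_words=["cancel", "refund"])
```

The skeleton drops chit-chat and keeps the turns a summary would need, which is the supervision signal the abstract describes.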