Patents Assigned to Adobe Inc.
  • Publication number: 20240153177
    Abstract: Techniques for vector object blending are described to generate a transformed vector object based on a first vector object and a second vector object. A transformation module, for instance, receives a first vector object that includes a plurality of first paths and a second vector object that includes a plurality of second paths. The transformation module computes morphing costs based on a correspondence within candidate path pairs that include one of the first paths and one of the second paths. Based on the morphing costs, the transformation module generates a low-cost mapping of paths between the first paths and the second paths. To generate the transformed vector object, the transformation module adjusts one or more properties of at least one of the first paths based on the mapping, such as geometry, appearance, and z-order.
    Type: Application
    Filed: November 4, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Tarun Beri, Matthew David Fisher
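The abstract does not disclose the actual cost function or matching algorithm used by the transformation module. As a purely illustrative sketch, the following treats each path as a list of sample points, uses mean squared point distance as a stand-in morphing cost, and finds the low-cost mapping by brute-force assignment (all names are hypothetical):

```python
from itertools import permutations

def morph_cost(path_a, path_b):
    """Mean squared distance between corresponding sample points.

    Both paths are assumed to be resampled to the same number of points.
    """
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(path_a, path_b)) / len(path_a)

def low_cost_mapping(first_paths, second_paths):
    """Brute-force minimum-total-cost pairing of first paths to second paths."""
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(len(second_paths))):
        cost = sum(morph_cost(first_paths[i], second_paths[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return {i: j for i, j in enumerate(best_perm)}
```

A production system would use a polynomial-time assignment solver (e.g. the Hungarian algorithm) rather than enumerating permutations.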
  • Publication number: 20240153294
    Abstract: Embodiments are disclosed for providing customizable, visually aesthetic, color-diverse template recommendations derived from a source image. A method may include receiving a source image and determining a source image background by separating a foreground of the source image from a background of the source image. The foreground is separated from the background by identifying the portions of the image that belong to the background and stripping out the rest of the image. The method includes identifying a text region of the source image using a machine learning model and identifying a font type using the identified text region. The method includes generating an editable template image using the source image background, the text region, and the font type.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Prasenjit Mondal, Sachin Soni, Anshul Malik
  • Publication number: 20240152680
    Abstract: Embodiments are disclosed for real-time copyfitting using a shape of a content area and input text. A content area and an input text for performing copyfitting using a trained classifier are received. A number of remaining characters in the content area is computed in real-time using the input text, the computing performed in response to receiving additional input text. Computing, in real-time, the number of remaining characters in the content area using the input text includes generating, by the trained classifier, a set of weights including a first set of one or more weights for the input text and a second set of one or more weights for the content area. A copyfitting parameter indicating a number of additional characters to be fitted into the content area is then determined based on the first set of one or more weights, the second set of one or more weights, the input text, and the additional input text.
    Type: Application
    Filed: November 9, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Rishav AGARWAL, Vidisha Rama HEGDE, Vasu GUPTA, Sanyam JAIN
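The classifier architecture and its learned weights are not disclosed in the abstract. The sketch below replaces both sets of weights with fixed constants (`text_weight`, `area_weight`, and `content_area_px` are hypothetical names) just to show the shape of a real-time remaining-character computation:

```python
def remaining_characters(input_text, text_weight=10.0, area_weight=0.8,
                         content_area_px=1000):
    """Estimate how many more characters fit in the content area.

    text_weight: average pixel footprint per character (a classifier output
                 in the described method; a fixed constant here).
    area_weight: fraction of the content-area shape usable for glyphs
                 (likewise a stand-in for a learned weight).
    """
    usable = content_area_px * area_weight
    used = len(input_text) * text_weight
    return max(0, int((usable - used) / text_weight))
```

Recomputing on each keystroke (i.e. whenever additional input text arrives) gives the real-time behavior the abstract describes.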
  • Publication number: 20240153171
    Abstract: Embodiments are disclosed for generating a data visualization. In some embodiments, a method of generating a data visualization includes generating a first graphic object on a digital canvas. A data set including data associated with a plurality of variables is added to a data panel of the digital canvas. A selection of a variable from the plurality of variables on the data panel is received and a second graphic object connecting the variable and a cursor position on the digital canvas is generated. A selection of a visual property of the first graphic object is received using the cursor. Upon selection of the visual property, the first graphic object is linked to the data panel via the second graphic object. A chart is then generated comprising the first graphic object and one or more new graphic objects, based on the variable and the visual property of the first graphic object.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Bernard KERR, Dmytro BARANOVSKIY, Benjamin FARRELL
  • Publication number: 20240153172
    Abstract: Embodiments are disclosed for generating a data-bound axis for a data visualization. In some embodiments, a method of generating a data-bound axis for a data visualization includes receiving a data set and generating a chart including a plurality of graphic objects based on the data set and a visual property of the plurality of graphic objects. A scale associated with the chart is determined based on the data set and the plurality of graphic objects. At least one axis graphic object is generated based on the scale. The at least one axis graphic object is added to the plurality of graphic objects of the chart.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Bernard KERR, Dmytro BARANOVSKIY, Corey LUCIER
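The abstract does not say how the scale or the axis graphic objects are derived from the data set. One common way to turn a data range into axis tick positions is a "nice numbers" heuristic, sketched below (the step-rounding constants are illustrative, not taken from the patent):

```python
import math

def nice_ticks(lo, hi, max_ticks=6):
    """Generate round-number tick positions spanning [lo, hi]."""
    span = hi - lo
    # Start from the power of ten just below the raw step size.
    step = 10 ** math.floor(math.log10(span / max_ticks))
    # Widen the step to 1x, 2x, 5x, or 10x until the tick count fits.
    for mult in (1, 2, 5, 10):
        if span / (step * mult) <= max_ticks:
            step *= mult
            break
    start = math.ceil(lo / step) * step
    ticks, t = [], start
    while t <= hi + 1e-9:
        ticks.append(round(t, 10))
        t += step
    return ticks
```

Each tick position would then back an axis graphic object (tick mark plus label) added to the chart's set of graphic objects.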
  • Publication number: 20240153169
    Abstract: Embodiments are disclosed for changing coordinate systems for data bound objects. In some embodiments, a method of changing coordinate systems for data bound objects includes receiving a selection of at least one graphic object associated with a data visualization on a canvas of a graphic design application, wherein the data visualization includes a plurality of graphic objects. Additionally, a request is received to convert the data visualization from a first coordinate space to a second coordinate space. A subset of the plurality of graphic objects to convert to the second coordinate space is identified, the subset of the plurality of graphic objects having a same object type. A view of the plurality of graphic objects is generated by converting the subset of the plurality of graphic objects to the second coordinate space.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Bernard KERR, Dmytro BARANOVSKIY
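The abstract does not name the two coordinate spaces; assuming a Cartesian-to-polar conversion as a concrete example, the type-filtered conversion it describes might look like this sketch (all function names are hypothetical):

```python
import math

def to_polar(x, y):
    """Convert one (x, y) position to (radius, angle) space."""
    return (math.hypot(x, y), math.atan2(y, x))

def convert_subset(objects, object_types, target_type):
    """Convert only the graphic objects of one type; leave the rest as-is."""
    return [to_polar(*pt) if t == target_type else pt
            for pt, t in zip(objects, object_types)]
```

Filtering by object type mirrors the abstract's step of identifying a subset of the graphic objects having a same object type before converting them.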
  • Publication number: 20240153170
    Abstract: Embodiments are disclosed for managing multiple data visualizations on a digital canvas. In some embodiments, a method of managing multiple data visualizations includes generating a first graphic object on a digital canvas. A first dataset is received and used to generate a first chart based on the first dataset and a visual property of the first graphic object. The first chart comprises a first plurality of graphic objects including the first graphic object. A second dataset is then received and used to generate a second chart on the digital canvas based on the second dataset. The second chart includes a second plurality of graphic objects. An axis of the first chart and an axis of the second chart are merged such that the axis of the first chart and the axis of the second chart share a scale attribute.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Bernard KERR, Dmytro BARANOVSKIY, Benjamin FARRELL
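The merge step can be pictured as both axes adopting one shared scale whose domain covers both datasets. A minimal sketch, assuming each axis is a dict with a `domain` (data min/max) field; the representation is illustrative, not the patent's:

```python
def merge_axes(axis_a, axis_b):
    """Merge two chart axes so both share a single scale attribute."""
    lo = min(axis_a["domain"][0], axis_b["domain"][0])
    hi = max(axis_a["domain"][1], axis_b["domain"][1])
    shared = {"domain": (lo, hi)}
    # Both charts now reference the same scale object, so a change to one
    # is reflected in the other.
    axis_a["scale"] = axis_b["scale"] = shared
    return shared
```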
  • Publication number: 20240152771
    Abstract: Tabular data machine-learning model techniques and systems are described. In one example, common-sense knowledge is infused into training data through use of a knowledge graph to provide external knowledge to supplement a tabular data corpus. In another example, a dual-path architecture is employed to configure an adapter module. In an implementation, the adapter module is added as part of a pre-trained machine-learning model for general purpose tabular models. Specifically, dual-path adapters are trained using the knowledge graphs and semantically augmented training data. A path-wise attention layer is applied to fuse a cross-modality representation of the two paths for a final result.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Can Qin, Sungchul Kim, Tong Yu, Ryan A. Rossi, Handong Zhao
  • Publication number: 20240153155
    Abstract: Embodiments are disclosed for binding colors to data visualizations on a digital canvas. In some embodiments, a method of binding colors to data visualizations includes receiving a data set including data associated with a variable. A chart, including a plurality of graphic objects, is generated based on the variable of the data set and a visual property of the plurality of graphic objects. A data type associated with the variable is determined, and first colors are assigned to the plurality of graphic objects based on the data type using a color binding. A selection of second colors to be assigned to the plurality of graphic objects is received and the chart is updated using the second colors.
    Type: Application
    Filed: November 3, 2022
    Publication date: May 9, 2024
    Applicant: Adobe Inc.
    Inventors: Bernard KERR, Dmytro BARANOVSKIY, Benjamin FARRELL
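Assigning first colors "based on the data type" typically means categorical variables get a discrete palette while numeric variables get a continuous ramp. A rough sketch of that branching (the palette and the gray-level ramp are placeholder choices, not the patent's):

```python
def bind_colors(values):
    """Assign colors to graphic objects based on the variable's data type.

    Categorical values cycle through a discrete palette; numeric values map
    onto a continuous ramp, encoded here as a gray level 0-255.
    """
    palette = ["#d62728", "#2ca02c", "#1f77b4", "#ff7f0e"]
    if all(isinstance(v, (int, float)) for v in values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1
        return [int(255 * (v - lo) / span) for v in values]
    index = {c: i for i, c in enumerate(sorted(set(values)))}
    return [palette[index[v] % len(palette)] for v in values]
```

Replacing the first colors with user-selected second colors would amount to swapping out the palette or ramp and re-running the binding.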
  • Patent number: 11978272
    Abstract: Adapting a machine learning model to process data that differs from training data used to configure the model for a specified objective is described. A domain adaptation system trains the model to process new domain data that differs from a training data domain by using the model to generate a feature representation for the new domain data, which describes different content types included in the new domain data. The domain adaptation system then generates a probability distribution for each discrete region of the new domain data, which describes a likelihood of the region including different content described by the feature representation. The probability distribution is compared to ground truth information for the new domain data to determine a loss function, which is used to refine model parameters. After determining that model outputs achieve a threshold similarity to the ground truth information, the model is output as a domain-agnostic model.
    Type: Grant
    Filed: August 9, 2022
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Kai Li, Christopher Alan Tensmeyer, Curtis Michael Wigington, Handong Zhao, Nikolaos Barmpalios, Tong Sun, Varun Manjunatha, Vlad Ion Morariu
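The loss comparing per-region probability distributions to ground truth is unspecified in the abstract; a standard choice would be cross-entropy averaged over regions, as in this minimal sketch (function names are hypothetical):

```python
import math

def region_loss(prob_dist, ground_truth_index):
    """Cross-entropy for one discrete region: negative log-probability the
    model assigns to the ground-truth content type."""
    return -math.log(prob_dist[ground_truth_index])

def domain_loss(region_dists, ground_truth):
    """Mean per-region loss used to refine the model parameters."""
    return sum(region_loss(d, g)
               for d, g in zip(region_dists, ground_truth)) / len(ground_truth)
```

Training would iterate parameter updates against this loss until the model's outputs reach the threshold similarity to ground truth described in the abstract.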
  • Patent number: 11977829
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating scalable and semantically editable font representations utilizing a machine learning approach. For example, the disclosed systems generate a font representation code from a glyph utilizing a particular neural network architecture. In particular, the disclosed systems utilize a glyph appearance propagation model and perform an iterative process to generate a font representation code from an initial glyph. Additionally, using a glyph appearance propagation model, the disclosed systems automatically propagate the appearance of the initial glyph from the font representation code to generate additional glyphs corresponding to respective glyph labels. In some embodiments, the disclosed systems propagate edits or other changes in appearance of a glyph to other glyphs within a glyph set (e.g., to match the appearance of the edited glyph).
    Type: Grant
    Filed: June 29, 2021
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Zhifei Zhang, Zhaowen Wang, Hailin Jin, Matthew Fisher
  • Patent number: 11978067
    Abstract: Techniques are provided for analyzing user actions that have occurred over a time period. The user actions can be, for example, with respect to the user's navigation of content or interaction with an application. Such user data is provided in an action string, which is converted into a highly searchable format. As such, the presence and frequency of particular user actions and patterns of user actions within an action string of a particular user, as well as among multiple action strings of multiple users, are determinable. Subsequences of one or more action strings are identified and both the number of action strings that include a particular subsequence and the frequency that a particular subsequence is present in a given action string are determinable. The conversion involves breaking that string into a sorted list of locations for the actions within that string. Queries can be readily applied against the sorted list.
    Type: Grant
    Filed: November 12, 2020
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Tung Mai, Iftikhar Ahamath Burhanuddin, Georgios Theocharous, Anup Rao
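The core conversion the abstract describes, breaking an action string into a sorted list of locations per action, can be sketched directly; the subsequence-counting query below is one illustrative way such an index supports frequency queries (implementation details are assumptions, not the patent's):

```python
from collections import defaultdict

def index_actions(action_string):
    """Break an action string into a sorted list of locations per action."""
    positions = defaultdict(list)
    for i, action in enumerate(action_string):
        positions[action].append(i)  # appended in order, so already sorted
    return dict(positions)

def count_subsequences(index, pattern):
    """Number of ways `pattern` occurs as an ordered (gapped) subsequence."""
    prev = index.get(pattern[0], [])
    ways = [1] * len(prev)
    for sym in pattern[1:]:
        positions = index.get(sym, [])
        # For each location of the next symbol, count partial matches that
        # end strictly before it.
        new_ways = [sum(w for q, w in zip(prev, ways) if q < p)
                    for p in positions]
        prev, ways = positions, new_ways
    return sum(ways)
```

Because the index stores sorted locations, both "is this pattern present?" and "how often?" reduce to position comparisons rather than rescanning the raw string.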
  • Patent number: 11978144
    Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
    Type: Grant
    Filed: July 27, 2022
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventors: Yi Zhou, Yangtuanfeng Wang, Xin Sun, Qingyang Tan, Duygu Ceylan Aksit
  • Patent number: 11978216
    Abstract: Methods and systems are provided for generating mattes for input images. A neural network system is trained to generate a matte for an input image utilizing contextual information within the image. Patches from the image and a corresponding trimap are extracted, and alpha values for each individual image patch are predicted based on correlations of features in different regions within the image patch. Predicting alpha values for an image patch may also be based on contextual information from other patches extracted from the same image. This contextual information may be determined by determining correlations between features in the query patch and context patches. The predicted alpha values for an image patch form a matte patch, and the matte patches generated for all of the image patches are stitched together to form an overall matte for the input image.
    Type: Grant
    Filed: March 16, 2022
    Date of Patent: May 7, 2024
    Assignee: Adobe Inc.
    Inventor: Ning Xu
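The final stitching step, assembling per-patch alpha mattes into one full matte, is straightforward for non-overlapping patches laid out on a grid. A minimal sketch (real systems typically blend overlapping patch borders, which is omitted here):

```python
def stitch_patches(patches, grid_rows, grid_cols):
    """Stitch non-overlapping matte patches (2-D alpha grids, row-major
    order) into one overall matte."""
    matte = []
    for r in range(grid_rows):
        row_patches = patches[r * grid_cols:(r + 1) * grid_cols]
        for i in range(len(row_patches[0])):
            # Concatenate row i of every patch in this grid row.
            matte.append([alpha for p in row_patches for alpha in p[i]])
    return matte
```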
  • Publication number: 20240144574
    Abstract: Digital object animation techniques are described. In a first example, translation-based animation of the digital object operates using control points of the digital object. In another example, the animation system is configured to minimize the number of feature positions that are used to generate the animation. In a further example, an input pose is normalized through use of a global scale factor to address changes in a z-position of a subject in different digital images. Yet further, a body tracking module is used to compute initial feature positions. The initial feature positions are then used to initialize a face tracker module to generate feature positions of the face. The animation system also supports a plurality of modes used to generate the digital object, techniques to define a base of the digital object, and a friction term limiting movement of feature positions based on contact with a ground plane.
    Type: Application
    Filed: December 27, 2023
    Publication date: May 2, 2024
    Applicant: Adobe Inc.
    Inventors: Jun Saito, Jimei Yang, Duygu Ceylan Aksit
  • Publication number: 20240147048
    Abstract: Techniques for updating zoom properties of corresponding salient objects are described that support parallel zooming for image comparison. In an implementation, a zoom input is received involving a salient object in a digital image in a user interface. An identification module identifies the salient object in the digital image and zoom properties for the salient object. A detection module identifies a corresponding salient object in at least one additional digital image and zoom properties for the corresponding salient object in the at least one additional digital image. An adjustment module then updates the zoom properties for the corresponding salient object in the at least one additional digital image based on the zoom properties for the salient object in the digital image.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 2, 2024
    Applicant: Adobe Inc.
    Inventor: Ankur Murarka
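One plausible reading of updating the corresponding object's zoom properties is scaling the second image so both salient objects render at the same on-screen size. A one-line sketch under that assumption (the parameter names are hypothetical):

```python
def parallel_zoom(src_scale, src_obj_width, tgt_obj_width):
    """Zoom scale for the second image so that its corresponding salient
    object renders at the same width as the zoomed object in the first."""
    return src_scale * src_obj_width / tgt_obj_width
```

For example, if the source object is zoomed to 2x and the corresponding object is half its width, the second image would be zoomed to 4x.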
  • Patent number: 11972534
    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for utilizing a visual neural network to replace materials in a three-dimensional scene with visually similar materials from a source dataset. Specifically, the disclosed system utilizes the visual neural network to generate source deep visual features representing source texture maps from materials in a plurality of source materials. Additionally, the disclosed system utilizes the visual neural network to generate deep visual features representing texture maps from materials in a digital scene. The disclosed system then determines source texture maps that are visually similar to the texture maps of the digital scene based on visual similarity metrics that compare the source deep visual features and the deep visual features. Additionally, the disclosed system modifies the digital scene by replacing one or more of the texture maps in the digital scene with the visually similar source texture maps.
    Type: Grant
    Filed: November 5, 2021
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Maxine Perroni-Scharf, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Jonathan Eisenmann
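The visual similarity metric comparing deep features is not specified in the abstract; cosine similarity is a common choice, and the nearest-source lookup might be sketched as follows (feature vectors would come from the visual neural network; here they are plain lists):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_similar_material(scene_feature, source_features):
    """Name of the source material whose deep visual feature is closest to
    the scene texture's feature under cosine similarity."""
    return max(source_features,
               key=lambda name: cosine_similarity(scene_feature,
                                                  source_features[name]))
```

Each texture map in the digital scene would then be replaced by the maps of its most similar source material.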
  • Patent number: 11972466
    Abstract: A search system provides search results with images of products based on associations of primary products and secondary products from product image sets. The search system analyzes a product image set containing multiple images to determine a primary product and secondary products. Information associating the primary and secondary products is stored in a search index. When the search system receives a query image containing a search product, the search index is queried using the search product to identify search result images based on associations of products in the search index, and the result images are provided as a response to the query image.
    Type: Grant
    Filed: May 20, 2019
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Jonas Dahl, Mausoom Sarkar, Hiresh Gupta, Balaji Krishnamurthy, Ayush Chopra, Abhishek Sinha
  • Patent number: 11971884
    Abstract: An interactive search session is implemented using an artificial intelligence model. For example, when the artificial intelligence model receives a search query from a user, the model selects an action from a plurality of actions based on the search query. The selected action queries the user for more contextual cues about the search query (e.g., may enquire about use of the search results, may request to refine the search query, or otherwise engage the user in conversation to better understand the intent of the search). The interactive search session may be in the form, for example, of a chat session between the user and the system, and the chat session may be displayed along with the search results (e.g., in a separate section of display). The interactive search session may enable the system to better understand the user's search needs, and accordingly may help provide more focused search results.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Milan Aggarwal, Balaji Krishnamurthy
  • Patent number: 11972569
    Abstract: The present disclosure relates to a multi-model object segmentation system that provides a multi-model object segmentation framework for automatically segmenting objects in digital images. In one or more implementations, the multi-model object segmentation system utilizes different types of object segmentation models to determine a comprehensive set of object masks for a digital image. In various implementations, the multi-model object segmentation system further improves and refines object masks in the set of object masks utilizing specialized object segmentation models, which results in improved accuracy and precision with respect to object selection within the digital image. Further, in some implementations, the multi-model object segmentation system generates object masks for portions of a digital image otherwise not captured by various object segmentation models.
    Type: Grant
    Filed: January 26, 2021
    Date of Patent: April 30, 2024
    Assignee: Adobe Inc.
    Inventors: Brian Price, David Hart, Zhihong Ding, Scott Cohen