Patents Assigned to Adobe Inc.
  • Patent number: 11893345
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
  • Patent number: 11893792
    Abstract: Techniques are disclosed for identifying and presenting video content that demonstrates features of a target product. The video content can be accessed, for example, from a media database of user-generated videos that demonstrate one or more features of the target product so that a user can see and hear the product in operation via a product webpage before making a purchasing decision. The product functioning videos supplement any static images of the target product and the textual product description, providing the user with additional context for each of the product's features. The user can quickly and easily interact with the product webpage to access and play back a product functioning video to see and/or hear the product in operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
  • Publication number: 20240037845
    Abstract: In implementations of systems for efficiently generating blend objects, a computing device implements a blending system to assign unique shape identifiers to objects included in an input render tree. The blending system generates a shape mask based on the unique shape identifiers. A color of a pixel of a blend object is computed based on particular objects of the objects that contribute to the blend object using the shape mask. The blending system generates the blend object for display in a user interface based on the color of the pixel.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Harish Kumar, Apurva Kumar
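The shape-mask idea in this abstract can be illustrated with a toy sketch. The details below are assumptions for illustration, not Adobe's implementation: each render-tree object gets a power-of-two identifier so a per-pixel mask is a bitmask of the objects covering that pixel, and "blending" is a simple color average over only the contributing objects.

```python
def assign_shape_ids(objects):
    """Give each object in the render tree a unique power-of-two identifier."""
    return {name: 1 << i for i, name in enumerate(objects)}

def shape_mask(coverage, shape_ids, x, y):
    """Bitwise-OR the identifiers of every object covering pixel (x, y).
    `coverage` maps object name -> predicate(x, y)."""
    mask = 0
    for name, covers in coverage.items():
        if covers(x, y):
            mask |= shape_ids[name]
    return mask

def blend_pixel(mask, shape_ids, colors):
    """Average the colors of only the objects present in the mask."""
    contributing = [colors[n] for n, sid in shape_ids.items() if mask & sid]
    if not contributing:
        return (0, 0, 0)
    r = sum(c[0] for c in contributing) // len(contributing)
    g = sum(c[1] for c in contributing) // len(contributing)
    b = sum(c[2] for c in contributing) // len(contributing)
    return (r, g, b)
```

The bitmask makes "which objects contribute to this blend pixel" a constant-time test per object, which is the efficiency angle the abstract hints at.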
  • Publication number: 20240037827
    Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Yi Zhou, Yangtuanfeng Wang, Xin Sun, Qingyang Tan, Duygu Ceylan Aksit
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Patent number: 11887217
    Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
  • Patent number: 11887371
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
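The shortest-path formulation in the abstract above can be sketched as a small dynamic program over candidate locations. The cost function here is an assumption for illustration only (a large penalty for consecutive picks closer than the minimum separation, plus a squared deviation from a target spacing), not the patent's actual objective.

```python
def pick_thumbnail_locations(candidates, min_sep, target):
    """Pick timeline positions for thumbnails as a shortest path over a DAG:
    the first and last candidates are fixed endpoints; each edge pays a hard
    penalty when consecutive picks are closer than min_sep, plus a squared
    deviation from the target spacing."""
    def edge_cost(gap):
        return 1e6 if gap < min_sep else (gap - target) ** 2

    n = len(candidates)
    best = [float("inf")] * n   # best[i]: cheapest path ending at candidate i
    prev = [-1] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            cost = best[i] + edge_cost(candidates[j] - candidates[i])
            if cost < best[j]:
                best[j], prev[j] = cost, i
    # Recover the path by walking predecessor links back from the last candidate.
    path, k = [], n - 1
    while k != -1:
        path.append(candidates[k])
        k = prev[k]
    return path[::-1]
```

With candidates at timeline positions `[0, 2, 5, 9, 10]`, a minimum separation of 3, and a target spacing of 5, the program keeps `[0, 5, 10]`: the candidate at 2 is too close to 0, and 9 is too close to 10.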
  • Patent number: 11886480
    Abstract: Certain embodiments involve using a gated convolutional encoder-decoder framework for applying affective characteristic labels to input text. For example, a method for identifying an affect label of text with a gated convolutional encoder-decoder model includes receiving, at a supervised classification engine, extracted linguistic features of an input text and a latent representation of an input text. The method also includes predicting, by the supervised classification engine, an affect characterization of the input text using the extracted linguistic features and the latent representation. Predicting the affect characterization includes normalizing and concatenating a linguistic feature representation generated from the extracted linguistic features with the latent representation to generate an appended latent representation. The method also includes identifying, by a gated convolutional encoder-decoder model, an affect label of the input text using the predicted affect characterization.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Kushal Chawla, Niyati Himanshu Chhaya, Sopan Khosla
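The "normalizing and concatenating" step in the abstract above can be shown in miniature. The choice of L2 normalization and the plain list concatenation are assumptions for illustration; the patented model learns these representations with neural networks.

```python
import math

def normalize(v):
    """Scale a feature vector to unit L2 norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n else list(v)

def appended_latent(linguistic_features, latent):
    """Normalize the linguistic-feature representation, then concatenate it
    with the latent representation to form the appended latent vector that
    the classifier consumes."""
    return normalize(linguistic_features) + list(latent)
```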
  • Patent number: 11887241
    Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
  • Patent number: 11886809
    Abstract: In implementations of systems for identifying templates based on fonts, a computing device implements an identification system to receive input data describing a selection of a font included in a collection of fonts. The identification system generates an embedding that represents the font in a latent space using a machine learning model trained on training data to generate embeddings for digital templates in the latent space based on intent phrases associated with the digital templates and embeddings for fonts in the latent space based on intent phrases associated with the fonts. A digital template included in a collection of digital templates is identified based on the embedding that represents the font and an embedding that represents the digital template in the latent space. The identification system generates an indication of the digital template for display in a user interface.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Nipun Jindal, Anand Khanna, Oliver Brdiczka
  • Patent number: 11887216
    Abstract: The present disclosure describes systems and methods for image processing. Embodiments of the present disclosure include an image processing apparatus configured to generate modified images (e.g., synthetic faces) by conditionally changing attributes or landmarks of an input image. A machine learning model of the image processing apparatus encodes the input image to obtain a joint conditional vector that represents attributes and landmarks of the input image in a vector space. The joint conditional vector is then modified, according to the techniques described herein, to form a latent vector used to generate a modified image. In some cases, the machine learning model is trained using a generative adversarial network (GAN) with a normalization technique, followed by joint training of a landmark embedding and attribute embedding (e.g., to reduce inference time).
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Ratheesh Kalarot, Timothy M. Converse, Shabnam Ghadar, John Thomas Nack, Jingwan Lu, Elya Shechtman, Baldo Faieta, Akhilesh Kumar
  • Patent number: 11886803
    Abstract: In implementations of systems for assistive digital form authoring, a computing device implements an authoring system to receive input data describing a search input associated with a digital form. The authoring system generates an input embedding vector that represents the search input in a latent space using a machine learning model trained on training data to generate embedding vectors in the latent space. A candidate embedding vector included in a group of candidate embedding vectors is identified based on a distance between the input embedding vector and the candidate embedding vector in the latent space. The authoring system generates an indication of a search output associated with the digital form for display in a user interface based on the candidate embedding vector.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Arneh Jain, Salil Taneja, Puneet Mangla, Gaurav Ahuja
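The candidate-retrieval step described above (identifying the candidate embedding closest to the input embedding in the latent space) reduces to a nearest-neighbor lookup. The Euclidean metric and the dictionary of named candidates below are assumptions for illustration; the patent does not specify the distance function.

```python
import math

def nearest_candidate(input_vec, candidates):
    """Return the key of the candidate embedding closest to the input
    embedding under Euclidean distance. `candidates` maps name -> vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(candidates, key=lambda k: dist(input_vec, candidates[k]))
```

The returned candidate then drives the search output shown in the authoring interface.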
  • Patent number: 11886768
    Abstract: Embodiments are disclosed for real time generative audio for brush and canvas interaction in digital drawing. The method may include receiving a user input and a selection of a tool for generating audio for a digital drawing interaction. The method may further include generating intermediary audio data based on the user input and the tool selection, wherein the intermediary audio data includes a pitch and a frequency. The method may further include processing, by a trained audio transformation model and through a series of one or more layers of the trained audio transformation model, the intermediary audio data. The method may further include adjusting the series of one or more layers of the trained audio transformation model to include one or more additional layers to produce an adjusted audio transformation model. The method may further include generating, by the adjusted audio transformation model, an audio sample based on the intermediary audio data.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Pranay Kumar, Nipun Jindal
  • Patent number: 11886964
    Abstract: Methods and systems disclosed herein relate generally to systems and methods for using a machine-learning model to predict user-engagement levels of users in response to presentation of future interactive content. A content provider system accesses a machine-learning model, which was trained using a training dataset including previous user-device actions performed by a plurality of users in response to previous interactive content. The content provider system receives user-activity data of a particular user and applies the machine-learning model to the user-activity data, in which the user-activity data includes user-device actions performed by the particular user in response to interactive content. The machine-learning model generates an output including a categorical value that represents a predicted user-engagement level of the particular user in response to a presentation of the future interactive content.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Atanu R. Sinha, Xiang Chen, Sungchul Kim, Omar Rahman, Jean Bernard Hishamunda, Goutham Srivatsav Arra, Shiv Kumar Saini
  • Patent number: 11886494
    Abstract: The present disclosure relates to an object selection system that automatically detects and selects objects in a digital image based on natural language-based inputs. For instance, the object selection system can utilize natural language processing tools to detect objects and their corresponding relationships within natural language object selection queries. For example, the object selection system can determine alternative object terms for unrecognized objects in a natural language object selection query. As another example, the object selection system can determine multiple types of relationships between objects in a natural language object selection query and utilize different object relationship models to select the requested query object.
    Type: Grant
    Filed: September 1, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Walter Wei Tuh Chang, Khoi Pham, Scott Cohen, Zhe Lin, Zhihong Ding
  • Patent number: 11886815
    Abstract: One example method involves operations for a processing device that include receiving, by a machine learning model trained to generate a search result, a search query for a text input. The machine learning model is trained by receiving pre-training data that includes multiple documents, then pre-training the model by generating, using an encoder, feature embeddings for each of the documents included in the pre-training data. The feature embeddings are generated by applying a masking function to visual and textual features in the documents. Training the machine learning model also includes generating, using the feature embeddings, output features for the documents by concatenating the feature embeddings and applying a non-linear mapping to them. Training the machine learning model further includes applying a linear classifier to the output features. Additionally, the operations include generating, for display, a search result using the machine learning model based on the text input.
    Type: Grant
    Filed: May 28, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Jiuxiang Gu, Vlad Morariu, Varun Manjunatha, Tong Sun, Rajiv Jain, Peizhao Li, Jason Kuen, Handong Zhao
  • Patent number: 11886793
    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (combination of color, font, and size) for the best designs. The top designs may be output to the user.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
  • Patent number: 11886825
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure generate a word embedding for each word of an input phrase, wherein the input phrase indicates a sentiment toward an aspect term, compute a gate vector based on the aspect term, identify a dependency tree representing relations between words of the input phrase, generate a representation vector based on the dependency tree and the word embedding using a graph convolution network, wherein the gate vector is applied to a layer of the graph convolution network, and generate a probability distribution over a plurality of sentiments based on the representation vector.
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
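The gated graph convolution described above can be sketched in a toy form. The details below are assumptions for illustration, not the patent's model: the layer weights are omitted (identity), the adjacency matrix encodes the dependency tree with self-loops, and the aspect-derived gate is a fixed vector applied element-wise after a ReLU; the real model learns all of these.

```python
def gated_gcn_layer(adjacency, node_vecs, gate):
    """One toy graph-convolution layer: sum each node's neighbor vectors
    (per the dependency-tree adjacency), apply ReLU, then apply the
    aspect gate element-wise to the aggregated representation."""
    out = []
    for i in range(len(node_vecs)):
        agg = [0.0] * len(node_vecs[0])
        for j, v in enumerate(node_vecs):
            if adjacency[i][j]:
                agg = [a + x for a, x in zip(agg, v)]
        out.append([g * max(a, 0.0) for g, a in zip(gate, agg)])
    return out
```

The gate lets information relevant to the aspect term pass through while suppressing the remaining dimensions of each node representation.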
  • Patent number: 11887277
    Abstract: The present disclosure relates to an image artifact removal system that improves digital images by removing complex artifacts caused by image compression. For example, in various implementations, the image artifact removal system builds a generative adversarial network that includes a generator neural network and a discriminator neural network. In addition, the image artifact removal system trains the generator neural network to reduce and eliminate compression artifacts from the image by synthesizing or retouching the compressed digital image. Further, in various implementations, the image artifact removal system utilizes dilated attention residual layers in the generator neural network to accurately remove compression artifacts from digital images of different sizes and/or having different compression ratios.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventor: Ionut Mironica
  • Patent number: 11886795
    Abstract: A method for marking text in digital typography includes identifying one or more glyphs that intersect or overlap with a text marking bounding box, drawing a modified text marking to avoid intersecting with the one or more glyphs, and causing a display device to display the modified text marking with the text. The text marking is associated with a line of text including the glyphs or adjacent to a waxline of text including the glyphs. For each of the glyphs, the glyph whose bounding box intersects with the text marking is indicated. The modified text marking is drawn based on outlines of the glyphs, intersections between the text marking bounding box and the glyph outlines, and a user-specified glyph offset, text marking weight, and/or text marking offset, so as to avoid intersecting with the glyphs. The shape of the modified text marking avoids intersecting with or overlapping the glyphs.
    Type: Grant
    Filed: March 7, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Aman Arora, Rohit Kumar Dubey
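The glyph-avoidance idea above (e.g., an underline skipping descenders) can be reduced to one-dimensional interval arithmetic for illustration. This is a hypothetical sketch, not the patented method: glyph intersections are simplified to x-axis intervals, and the user-specified glyph offset pads each interval.

```python
def split_marking(start, end, glyph_intervals, offset=0.0):
    """Return sub-segments of the marking span [start, end] that avoid each
    glyph's x-interval, padded by `offset` on both sides of every glyph."""
    pieces = []
    cursor = start
    for g0, g1 in sorted(glyph_intervals):
        g0, g1 = g0 - offset, g1 + offset
        if g1 <= cursor or g0 >= end:
            continue  # this glyph does not touch the remaining segment
        if g0 > cursor:
            pieces.append((cursor, g0))  # keep the stretch before the glyph
        cursor = max(cursor, g1)         # resume after the glyph
    if cursor < end:
        pieces.append((cursor, end))
    return pieces
```

For an underline from x = 0 to x = 10 with descender intervals at (2, 3) and (5, 6) and a 0.5 offset, the marking is drawn in three pieces: (0, 1.5), (3.5, 4.5), and (6.5, 10).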