Patents Assigned to Adobe Inc.
  • Patent number: 11893794
Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
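The boundary-merging and clustering steps described in the abstract can be sketched as follows. This is a toy illustration under my own assumptions (the helper names and the merge rule are not from the patent): speech and scene boundaries are combined into one sorted cut list that defines the clip atoms, and one hierarchy level is formed by merging adjacent segments.

```python
# Toy sketch of clip-atom construction (hypothetical helper names; the patent
# does not specify this exact procedure): merge detected speech and scene
# boundaries into one sorted set, then cut the video timeline into "clip atoms".

def make_clip_atoms(speech_bounds, scene_bounds, duration):
    """Combine boundary sets into disjoint segments covering [0, duration]."""
    cuts = sorted({0.0, duration, *speech_bounds, *scene_bounds})
    return [(a, b) for a, b in zip(cuts, cuts[1:])]

def cluster_level(segments):
    """One hierarchy level up: merge the adjacent pair with smallest span."""
    if len(segments) <= 1:
        return segments
    i = min(range(len(segments) - 1),
            key=lambda k: segments[k + 1][1] - segments[k][0])
    merged = (segments[i][0], segments[i + 1][1])
    return segments[:i] + [merged] + segments[i + 2:]

atoms = make_clip_atoms([2.5, 7.0], [4.0, 7.0], 10.0)
level_up = cluster_level(atoms)
```

Note that every level remains a complete, disjoint cover of the timeline, matching the "complete set of disjoint segments" property the abstract describes.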
  • Patent number: 11893007
    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for optimizing computing resources generally associated with cloud-based media services. Instead of decoding digital assets on-premises to stream to a remote client device, an encoded asset can be streamed to the remote client device. A codebook employable for decoding the encoded asset can be embedded into the stream transmitted to the remote client device, so that the remote client device can extract the embedded codebook, and employ the extracted codebook to decode the encoded asset locally. In this way, not only are processing resources associated with on-premises decoding eliminated, but on-premises storage of codebooks can be significantly reduced, while expensive bandwidth is freed up by virtue of transmitting a smaller quantity of data from the cloud to the remote client device.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: February 6, 2024
Assignee: Adobe Inc.
    Inventors: Viswanathan Swaminathan, Saayan Mitra
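The idea of embedding the codebook in the stream itself can be shown with a minimal sketch. All names here are illustrative, not Adobe's API: the "server" encodes symbols against a codebook and prepends that codebook to the transmitted message, and the "client" extracts it to decode locally.

```python
import json

# Minimal sketch (illustrative names, not Adobe's API): encode an asset with a
# codebook, embed the codebook in the stream, and let the client extract it
# to decode the asset locally.

def encode_with_codebook(symbols):
    codebook = {s: i for i, s in enumerate(dict.fromkeys(symbols))}
    payload = [codebook[s] for s in symbols]
    # Embed the codebook in the transmitted stream itself.
    return json.dumps({"codebook": list(codebook), "payload": payload})

def client_decode(stream):
    msg = json.loads(stream)
    inverse = msg["codebook"]          # index -> symbol
    return [inverse[i] for i in msg["payload"]]

stream = encode_with_codebook(["a", "b", "a", "c"])
decoded = client_decode(stream)
```

Because the codebook travels with the payload, no on-premises decode step or server-side codebook store is needed for this round trip.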
  • Patent number: 11893338
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that merge separate digital point text objects into a single merged digital text object while preserving the properties and original visual appearance associated with the digital text included therein. For example, the disclosed systems can determine point text character properties associated with the separate digital point text objects (e.g., rotations, baseline shifts, etc.). The disclosed systems can merge the separate digital point text objects into a single merged digital point text object and modify associated font character properties to reflect the determined point text character properties. Further, the disclosed systems can generate an area text object based on the merged digital point text object where the area text object includes the digital text and the font character properties.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Arushi Jain, Matthew Fisher
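The property-preserving merge the abstract describes can be sketched in miniature. The field names and data layout below are my own, not the patent's: object-level rotation and baseline shift are re-expressed as per-character properties so the merged object renders like the originals.

```python
# Hypothetical sketch of merging separate point-text objects into one while
# preserving per-character properties (rotation, baseline shift); the field
# names are illustrative, not from the patent.

def merge_point_text(objects):
    merged = {"text": "", "char_props": []}
    for obj in objects:
        # Re-express object-level rotation/baseline as per-character properties
        # so the merged object keeps the original visual appearance.
        for ch in obj["text"]:
            merged["char_props"].append(
                {"rotation": obj["rotation"], "baseline_shift": obj["baseline"]})
        merged["text"] += obj["text"]
    return merged

merged = merge_point_text([
    {"text": "Hi", "rotation": 0, "baseline": 0},
    {"text": "!", "rotation": 15, "baseline": 2},
])
```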
  • Patent number: 11893352
    Abstract: The present disclosure provides systems and methods for relationship extraction. Embodiments of the present disclosure provide a relationship extraction network trained to identify relationships among entities in an input text. The relationship extraction network is used to generate a dependency path between entities in an input phrase. The dependency path includes a set of words that connect the entities, and is used to predict a relationship between the entities. In some cases, the dependency path is related to a syntax tree, but it may include additional words, and omit some words from a path extracted based on a syntax tree.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: February 6, 2024
Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
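A dependency path between two entities can be illustrated with a plain graph search. This is a generic sketch, not the patent's learned method: the dependency parse is treated as an undirected graph and the shortest word path connecting the entities is returned; the tiny parse below is hand-written for illustration.

```python
from collections import deque

# Sketch of extracting a dependency path between two entities: treat the
# dependency parse as an undirected graph and take the shortest word path.
# The edge list below is hand-written, not produced by a real parser.

def dependency_path(edges, source, target):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# "Ada founded the company" -- the head "founded" links the two entities.
edges = [("Ada", "founded"), ("company", "founded"), ("the", "company")]
path = dependency_path(edges, "Ada", "company")
```

The abstract notes the real dependency path may add or omit words relative to a raw syntax-tree path; the search above only shows the tree-path baseline.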
  • Patent number: 11893717
    Abstract: This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that can learn or identify a learned-initialization-latent vector for an initialization digital image and reconstruct a target digital image using an image-generating-neural network based on a modified version of the learned-initialization-latent vector. For example, the disclosed systems learn a learned-initialization-latent vector from an initialization image utilizing a high number (e.g., thousands) of learning iterations on an image-generating-neural network (e.g., a GAN). Then, the disclosed systems can modify the learned-initialization-latent vector (of the initialization image) to generate modified or reconstructed versions of target images using the image-generating-neural network.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Christopher Tensmeyer, Vlad Morariu, Michael Brodie
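The learn-then-reuse pattern in the abstract can be shown with a scalar stand-in. Everything here is a toy under stated assumptions: `g` is a stand-in "generator", not a GAN, and the latent is a single number optimized by gradient descent, first at length for an initialization target and then briefly, warm-started, for a new target.

```python
# Toy sketch of "learning an initialization latent vector": gradient-descend a
# latent z so a fixed generator g(z) matches a target, then reuse z as a warm
# start for a new target. g is a stand-in scalar function, not a GAN.

def g(z):
    return 2.0 * z + 1.0                # stand-in "generator"

def invert(target, z=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        err = g(z) - target
        z -= lr * 2.0 * err * 2.0       # gradient of (g(z) - target)**2 wrt z
    return z

z_init = invert(5.0)                    # long optimization on the init image
z_new = invert(7.0, z=z_init, steps=20) # short warm-started run on a target
```

The warm-started run needs far fewer steps, which mirrors the abstract's motivation for computing the learned-initialization latent once and then modifying it per target image.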
  • Patent number: 11893763
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A Efros
  • Patent number: 11893345
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: February 6, 2024
Assignee: Adobe Inc.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
  • Patent number: 11893792
Abstract: Techniques are disclosed for identifying and presenting video content that demonstrates features of a target product. The video content can be accessed, for example, from a media database of user-generated videos that demonstrate one or more features of the target product so that a user can see and hear the product in operation via a product webpage before making a purchasing decision. The product functioning videos supplement any static images of the target product and the textual product description, providing the user with additional context for each of the product's features described in the textual product description. The user can quickly and easily interact with the product webpage to access and play back the product functioning video to see and/or hear the product in operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
  • Publication number: 20240037845
    Abstract: In implementations of systems for efficiently generating blend objects, a computing device implements a blending system to assign unique shape identifiers to objects included in an input render tree. The blending system generates a shape mask based on the unique shape identifiers. A color of a pixel of a blend object is computed based on particular objects of the objects that contribute to the blend object using the shape mask. The blending system generates the blend object for display in a user interface based on the color of the pixel.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Harish Kumar, Apurva Kumar
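The shape-mask idea can be illustrated with a toy per-pixel computation. The data layout and the averaging blend rule are my own assumptions, not the patent's: each object carries a unique shape id and a coverage test, and a pixel's color is computed only from the objects that actually contribute to it.

```python
# Toy sketch of the shape-mask idea: each object has a unique shape id and a
# coverage predicate; the blend color of a pixel is computed only from the
# contributing objects. The average blend rule is illustrative.

objects = [
    {"id": 1, "color": (255, 0, 0), "covers": lambda x, y: x < 2},
    {"id": 2, "color": (0, 0, 255), "covers": lambda x, y: y < 2},
]

def pixel_color(x, y):
    contributors = [o for o in objects if o["covers"](x, y)]
    if not contributors:
        return (0, 0, 0)
    # Simple average blend over contributing objects only.
    n = len(contributors)
    return tuple(sum(c["color"][k] for c in contributors) // n for k in range(3))

blend = pixel_color(1, 1)   # both objects cover (1, 1)
```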
  • Publication number: 20240037827
    Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Yi ZHOU, Yangtuanfeng WANG, Xin SUN, Qingyang TAN, Duygu CEYLAN AKSIT
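The collision-handling step (the abstract's third network) can be illustrated geometrically. This is not the learned method: here the body is modeled as a unit sphere, and any garment vertex found inside it is pushed just outside along its radial direction.

```python
import math

# Illustrative stand-in for the abstract's collision-handling step: find
# garment vertices inside the body (modeled here as a sphere) and push them
# just outside. The real method is a learned network; this is pure geometry.

BODY_CENTER, BODY_RADIUS, EPS = (0.0, 0.0, 0.0), 1.0, 0.01

def resolve_collisions(vertices):
    fixed = []
    for v in vertices:
        d = math.dist(v, BODY_CENTER)
        if d < BODY_RADIUS:  # vertex penetrates the body
            scale = (BODY_RADIUS + EPS) / d
            v = tuple(c * scale for c in v)
        fixed.append(v)
    return fixed

out = resolve_collisions([(0.5, 0.0, 0.0), (2.0, 0.0, 0.0)])
```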
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
  • Patent number: 11887217
Abstract: Digital image text editing techniques as implemented by an image processing system are described that support increased user interaction in the creation and editing of digital images through understanding a content creator's intent as expressed using text. In one example, a text user input is received by a text input module. The text user input describes a visual object and a visual attribute, in which the visual object specifies a visual context of the visual attribute. A feature representation is generated by a text-to-feature system using a machine-learning module based on the text user input. The feature representation is passed to an image editing system to edit a digital object in a digital image, e.g., by applying a texture to an outline of the digital object within the digital image.
    Type: Grant
    Filed: October 26, 2020
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Paridhi Maheshwari, Vishwa Vinay, Shraiysh Vaishay, Praneetha Vaddamanu, Nihal Jain, Dhananjay Bhausaheb Raut
  • Patent number: 11887371
    Abstract: Embodiments are directed to a thumbnail segmentation that defines the locations on a video timeline where thumbnails are displayed. Candidate thumbnail locations are determined from boundaries of feature ranges of the video indicating when instances of detected features are present in the video. In some embodiments, candidate thumbnail separations are penalized for being separated by less than a minimum duration corresponding to a minimum pixel separation (e.g., the width of a thumbnail) between consecutive thumbnail locations on a video timeline. The thumbnail segmentation is computed by solving a shortest path problem through a graph that models different thumbnail locations and separations. As such, a video timeline is displayed with thumbnails at locations on the timeline defined by the thumbnail segmentation, with each thumbnail depicting a portion of the video associated with the thumbnail location.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li
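The minimum-separation constraint on thumbnail locations can be sketched as a small dynamic program over candidate boundaries, which is one simple way to realize the shortest-path formulation the abstract mentions. The candidates and objective (maximize the number of kept locations) are made up for illustration.

```python
# Sketch of choosing thumbnail locations from candidate boundaries: a DP over
# sorted candidates that forbids two chosen locations closer than a minimum
# separation, keeping as many candidates as possible. Inputs are illustrative.

def pick_thumbnails(candidates, min_sep):
    """Longest chain of candidates with consecutive picks >= min_sep apart."""
    candidates = sorted(candidates)
    best = []                      # best[i]: longest valid chain ending at i
    for i, c in enumerate(candidates):
        chains = [best[j] for j in range(i) if c - candidates[j] >= min_sep]
        best.append(max(chains, key=len, default=[]) + [c])
    return max(best, key=len, default=[])

picked = pick_thumbnails([0, 1, 3, 4, 8], min_sep=3)
```

In the patent's setting, `min_sep` would correspond to the timeline duration spanned by one thumbnail's pixel width, so displayed thumbnails never overlap.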
  • Patent number: 11886480
    Abstract: Certain embodiments involve using a gated convolutional encoder-decoder framework for applying affective characteristic labels to input text. For example, a method for identifying an affect label of text with a gated convolutional encoder-decoder model includes receiving, at a supervised classification engine, extracted linguistic features of an input text and a latent representation of an input text. The method also includes predicting, by the supervised classification engine, an affect characterization of the input text using the extracted linguistic features and the latent representation. Predicting the affect characterization includes normalizing and concatenating a linguistic feature representation generated from the extracted linguistic features with the latent representation to generate an appended latent representation. The method also includes identifying, by a gated convolutional encoder-decoder model, an affect label of the input text using the predicted affect characterization.
    Type: Grant
    Filed: August 29, 2022
    Date of Patent: January 30, 2024
Assignee: Adobe Inc.
    Inventors: Kushal Chawla, Niyati Himanshu Chhaya, Sopan Khosla
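The "normalize and concatenate" step from the abstract can be shown directly. This uses plain Python lists in place of real linguistic-feature and latent vectors, and L2 normalization as an assumed choice of norm:

```python
# Sketch of the abstract's normalize-and-concatenate step, using plain lists
# in place of real linguistic-feature and latent vectors. L2 normalization is
# an assumed choice; the patent does not specify the norm here.

def l2_normalize(vec):
    norm = sum(x * x for x in vec) ** 0.5 or 1.0
    return [x / norm for x in vec]

def append_latent(linguistic_features, latent):
    # Normalized linguistic features concatenated with the latent
    # representation form the "appended latent representation".
    return l2_normalize(linguistic_features) + list(latent)

appended = append_latent([3.0, 4.0], [0.5, -0.5])
```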
  • Patent number: 11886809
    Abstract: In implementations of systems for identifying templates based on fonts, a computing device implements an identification system to receive input data describing a selection of a font included in a collection of fonts. The identification system generates an embedding that represents the font in a latent space using a machine learning model trained on training data to generate embeddings for digital templates in the latent space based on intent phrases associated with the digital templates and embeddings for fonts in the latent space based on intent phrases associated with the fonts. A digital template included in a collection of digital templates is identified based on the embedding that represents the font and an embedding that represents the digital template in the latent space. The identification system generates an indication of the digital template for display in a user interface.
    Type: Grant
    Filed: October 31, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Nipun Jindal, Anand Khanna, Oliver Brdiczka
  • Patent number: 11886803
    Abstract: In implementations of systems for assistive digital form authoring, a computing device implements an authoring system to receive input data describing a search input associated with a digital form. The authoring system generates an input embedding vector that represents the search input in a latent space using a machine learning model trained on training data to generate embedding vectors in the latent space. A candidate embedding vector included in a group of candidate embedding vectors is identified based on a distance between the input embedding vector and the candidate embedding vector in the latent space. The authoring system generates an indication of a search output associated with the digital form for display in a user interface based on the candidate embedding vector.
    Type: Grant
    Filed: January 12, 2023
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Arneh Jain, Salil Taneja, Puneet Mangla, Gaurav Ahuja
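The candidate-retrieval step, picking the candidate embedding closest to the input embedding in the latent space, can be sketched with toy vectors. The embeddings and names below are invented; a real system would produce them with the trained model the abstract describes.

```python
# Sketch of the retrieval step: pick the candidate embedding nearest to the
# query embedding. Vectors and candidate names are toy values, not outputs of
# a trained model.

def nearest_candidate(query, candidates):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(candidates, key=lambda name: sq_dist(query, candidates[name]))

candidates = {
    "email field": [0.9, 0.1],
    "date picker": [0.1, 0.9],
}
match = nearest_candidate([0.8, 0.2], candidates)
```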
  • Patent number: 11887241
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using the second neural network to generate a 3D appearance representation of the object.
    Type: Grant
    Filed: December 22, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zexiang Xu, Yannick Hold-Geoffroy, Milos Hasan, Kalyan Sunkavalli, Fanbo Xiang
  • Patent number: 11886964
    Abstract: Methods and systems disclosed herein relate generally to systems and methods for using a machine-learning model to predict user-engagement levels of users in response to presentation of future interactive content. A content provider system accesses a machine-learning model, which was trained using a training dataset including previous user-device actions performed by a plurality of users in response to previous interactive content. The content provider system receives user-activity data of a particular user and applies the machine-learning model to the user-activity data, in which the user-activity data includes user-device actions performed by the particular user in response to interactive content. The machine-learning model generates an output including a categorical value that represents a predicted user-engagement level of the particular user in response to a presentation of the future interactive content.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: January 30, 2024
Assignee: Adobe Inc.
    Inventors: Atanu R. Sinha, Xiang Chen, Sungchul Kim, Omar Rahman, Jean Bernard Hishamunda, Goutham Srivatsav Arra, Shiv Kumar Saini
  • Patent number: 11887216
    Abstract: The present disclosure describes systems and methods for image processing. Embodiments of the present disclosure include an image processing apparatus configured to generate modified images (e.g., synthetic faces) by conditionally changing attributes or landmarks of an input image. A machine learning model of the image processing apparatus encodes the input image to obtain a joint conditional vector that represents attributes and landmarks of the input image in a vector space. The joint conditional vector is then modified, according to the techniques described herein, to form a latent vector used to generate a modified image. In some cases, the machine learning model is trained using a generative adversarial network (GAN) with a normalization technique, followed by joint training of a landmark embedding and attribute embedding (e.g., to reduce inference time).
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: January 30, 2024
Assignee: Adobe Inc.
    Inventors: Ratheesh Kalarot, Timothy M. Converse, Shabnam Ghadar, John Thomas Nack, Jingwan Lu, Elya Shechtman, Baldo Faieta, Akhilesh Kumar
  • Patent number: 11886768
    Abstract: Embodiments are disclosed for real time generative audio for brush and canvas interaction in digital drawing. The method may include receiving a user input and a selection of a tool for generating audio for a digital drawing interaction. The method may further include generating intermediary audio data based on the user input and the tool selection, wherein the intermediary audio data includes a pitch and a frequency. The method may further include processing, by a trained audio transformation model and through a series of one or more layers of the trained audio transformation model, the intermediary audio data. The method may further include adjusting the series of one or more layers of the trained audio transformation model to include one or more additional layers to produce an adjusted audio transformation model. The method may further include generating, by the adjusted audio transformation model, an audio sample based on the intermediary audio data.
    Type: Grant
    Filed: April 29, 2022
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Pranay Kumar, Nipun Jindal
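The pitch-carrying "intermediary audio data" stage can be illustrated with a minimal synthesizer. This is a stand-in under my own assumptions (sample rate and sine synthesis are illustrative; the patent's audio transformation model is learned):

```python
import math

# Toy stand-in for the intermediary-audio stage: synthesize a short sample at
# a given pitch. The patent's transformation model is a learned network; this
# only shows pitch -> waveform with illustrative constants.

SAMPLE_RATE = 8000

def synth(pitch_hz, n_samples):
    return [math.sin(2 * math.pi * pitch_hz * t / SAMPLE_RATE)
            for t in range(n_samples)]

sample = synth(440.0, 16)
```

In the described system, an intermediary signal like this would then be passed through the (adjusted) audio transformation model to produce the final brush-interaction sound.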