Patents Assigned to Adobe Inc.
  • Patent number: 11899693
    Abstract: A cluster generation system identifies data elements, from a first binary record, that each have a particular value and correspond to respective binary traits. A candidate description function describing the binary traits is generated, the candidate description function including a model factor that describes the data elements. Responsive to determining that a second record has additional data elements having the particular value and corresponding to the respective binary traits, the candidate description function is modified to indicate that the model factor describes the additional elements. The candidate description function is also modified to include a correction factor describing an additional binary trait excluded from the respective binary traits. Based on the modified candidate description function, the cluster generation system generates a data summary cluster, which includes a compact representation of the binary traits of the data elements and additional data elements.
    Type: Grant
    Filed: February 22, 2022
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Yeuk-yin Chan, Tung Mai, Ryan Rossi, Moumita Sinha, Matvey Kapilevich, Margarita Savova, Fan Du, Charles Menguy, Anup Rao
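    A minimal sketch of the description-function idea summarized above (11899693), assuming a simple data model: a model factor holds the binary traits shared by the records it covers, and correction factors record traits outside that shared set. Class and method names are illustrative, not taken from the patent.

```python
# Illustrative sketch only; DescriptionFunction, model_factor, and corrections
# are hypothetical names standing in for the patent's concepts.
class DescriptionFunction:
    def __init__(self, traits):
        self.model_factor = set(traits)  # binary traits shared by covered records
        self.records = []                # records described by the model factor
        self.corrections = {}            # record id -> traits outside the model factor

    def try_add(self, record_id, record_traits):
        """Cover a record if it has every trait in the model factor; extra traits
        become a correction factor."""
        record_traits = set(record_traits)
        if not self.model_factor <= record_traits:
            return False
        self.records.append(record_id)
        extra = record_traits - self.model_factor
        if extra:
            self.corrections[record_id] = extra
        return True

    def summary(self):
        """Compact cluster summary: shared traits plus per-record corrections."""
        return {"shared_traits": sorted(self.model_factor),
                "records": self.records,
                "corrections": {r: sorted(t) for r, t in self.corrections.items()}}

# Two binary records sharing traits A and B; trait C becomes a correction factor.
f = DescriptionFunction(["A", "B"])
f.try_add("rec1", ["A", "B"])
f.try_add("rec2", ["A", "B", "C"])
print(f.summary())
```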
  • Patent number: 11899917
    Abstract: Embodiments are directed to techniques for interacting with a hierarchical video segmentation using a video timeline. In some embodiments, the finest level of a hierarchical segmentation identifies the smallest interaction unit of a video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. A presented video timeline is segmented based on one of the levels, and one or more segments are selected through interactions with the video timeline. For example, a click or tap on a video segment or a drag operation dragging along the timeline snaps selection boundaries to corresponding segment boundaries defined by the level. Navigating to a different level of the hierarchy transforms the selection into coarser or finer video segments defined by the level. Any operation can be performed on selected video segments, including playing back, trimming, or editing.
    Type: Grant
    Filed: October 19, 2022
    Date of Patent: February 13, 2024
    Assignee: ADOBE INC.
    Inventors: Seth Walker, Joy O Kim, Aseem Agarwala, Joel Richard Brandt, Jovan Popovic, Lubomira Dontcheva, Dingzeyu Li, Hijung Shin, Xue Bai
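    The boundary-snapping interaction described above (11899917) can be pictured with a short sketch: a dragged time range is expanded outward to the nearest segment boundaries of the active hierarchy level. The data layout (a sorted list of boundary times per level) is an assumption made for illustration.

```python
import bisect

def snap_selection(drag_start, drag_end, boundaries):
    """Snap a dragged time range to the segment boundaries of the active level.

    `boundaries` is a sorted list of boundary times for that hierarchy level
    (an illustrative layout; the patent's data model is not specified here).
    """
    lo, hi = sorted((drag_start, drag_end))
    # Expand outward so the selection covers whole segments at this level.
    start = boundaries[max(bisect.bisect_right(boundaries, lo) - 1, 0)]
    end = boundaries[min(bisect.bisect_left(boundaries, hi), len(boundaries) - 1)]
    return start, end

# Clip-atom boundaries (seconds) at the finest level vs. a coarser level.
fine = [0.0, 1.2, 3.5, 4.1, 7.8, 10.0]
coarse = [0.0, 3.5, 7.8, 10.0]
print(snap_selection(1.5, 4.0, fine))    # -> (1.2, 4.1)
print(snap_selection(1.5, 4.0, coarse))  # -> (0.0, 7.8): same drag, coarser segments
```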
  • Patent number: 11900510
    Abstract: Glyph sizing control techniques are described for digital content that provide insight regarding the true size of glyphs when rendered using a respective font and also leverage this insight to control font sizing and alignment. In one example, a glyph sizing system outputs a plurality of options to specify a unit-of-measure to control an actual size of a glyph as rendered in a user interface. Examples of units of measure include a capital height, x-height, ICF-height, dynamic height, object height, width, and other spans along a dimension, e.g., based on ascent, descent, or other metrics. These units of measure are leveraged by the glyph sizing system to surface information regarding an actual size of respective glyphs for that unit-of-measure and to control glyph sizing and arrangement.
    Type: Grant
    Filed: April 16, 2021
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Arushi Jain, Neeraj Nandkeolyar, Shivi Pal
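    A toy worked example of the unit-of-measure idea in 11900510: given font metrics, compute the nominal point size that makes the chosen unit (for example, cap height) render at an exact target size. The metric names and values below are hypothetical, not Adobe's font API.

```python
def point_size_for_unit(target_size, unit, metrics, units_per_em=1000):
    """Nominal font size that makes the chosen unit-of-measure (cap height,
    x-height, etc.) render at `target_size`. Metrics are in font design units."""
    span = metrics[unit]                      # e.g. cap height in design units
    return target_size * units_per_em / span

metrics = {"cap_height": 700, "x_height": 500, "icf_height": 880}  # hypothetical values
# Make capital letters exactly 14 pt tall regardless of the font's internal metrics:
print(round(point_size_for_unit(14, "cap_height", metrics), 2))  # -> 20.0
```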
  • Patent number: 11900056
    Abstract: Rewriting text in the writing style of a target author is described. A stylistic rewriting system receives input text and an indication of the target author. The system trains a language model to understand the target author's writing style using a corpus of text associated with the target author. The language model may be transformer-based, and is first trained on a different corpus of text associated with a range of different authors to understand linguistic nuances of a particular language. Copies of the language model are then cascaded into an encoder-decoder framework, which is further trained using a masked language modeling objective and a noisy version of the target author corpus. After training, the encoder-decoder framework of the trained language model automatically rewrites input text in the writing style of the target author and outputs the rewritten text as stylized text.
    Type: Grant
    Filed: February 21, 2023
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Balaji Vasan Srinivasan, Gaurav Verma, Bakhtiyar Hussain Syed, Anandhavelu Natarajan
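    One way to picture the "noisy version of the target author corpus" used during training in 11900056 is a simple masking noiser that produces (noisy, original) pairs for a denoising objective; this is a sketch under that assumption, not Adobe's exact training recipe.

```python
import random

def noise_sentence(words, mask_prob=0.3, mask_token="<mask>"):
    """Randomly mask words to create a noisy input; the encoder-decoder is then
    trained to reconstruct the author's original phrasing (illustrative only)."""
    return [mask_token if random.random() < mask_prob else w for w in words]

target_author_sentence = "it was the best of times it was the worst of times".split()
noisy = noise_sentence(target_author_sentence)
# Training pair: (noisy sentence, original sentence) -- the model learns to fill
# masked spans in the target author's style.
print(" ".join(noisy))
```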
  • Patent number: 11899927
    Abstract: Techniques are provided for generating a digital image of simulated handwriting using an encoder-decoder neural network trained on images of natural handwriting samples. The simulated handwriting image can be generated based on a style of a handwriting sample and a variable length coded text input. The style represents visually distinctive characteristics of the handwriting sample, such as the shape, size, slope, and spacing of the letters, characters, or other markings in the handwriting sample. The resulting simulated handwriting image can include the text input rendered in the style of the handwriting sample. The distinctive visual appearance of the letters or words in the simulated handwriting image mimics the visual appearance of the letters or words in the handwriting sample image, whether the letters or words in the simulated handwriting image are the same as in the handwriting sample image or different from those in the handwriting sample image.
    Type: Grant
    Filed: January 24, 2022
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Christopher Alan Tensmeyer, Rajiv Jain, Curtis Michael Wigington, Brian Lynn Price, Brian Lafayette Davis
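    A rough sketch of the conditioning idea in 11899927: the decoder consumes the coded text together with a style vector extracted from the handwriting sample. The arrays below are random stand-ins for trained encoder outputs, included only to show the shape of the combination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained encoder-decoder: a "style encoder" output and
# per-character embeddings. All shapes and values are illustrative.
style_vector = rng.normal(size=32)                 # encodes slope, spacing, letter shapes
char_embeddings = {c: rng.normal(size=32) for c in "abcdefghijklmnopqrstuvwxyz "}

def conditioned_features(text, style):
    """Pair each character embedding with the style vector, as a decoder
    conditioned on (text, style) might consume them to draw glyph strokes."""
    return np.stack([np.concatenate([char_embeddings[c], style]) for c in text])

features = conditioned_features("hello world", style_vector)
print(features.shape)  # (11, 64): one conditioned feature row per character
```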
  • Patent number: 11900519
    Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode features of a source image to obtain a source appearance encoding that represents inherent attributes of a face in the source image; encode features of a target image to obtain a target non-appearance encoding that represents contextual attributes of the target image; combine the source appearance encoding and the target non-appearance encoding to obtain combined image features; and generate a modified target image based on the combined image features, wherein the modified target image includes the inherent attributes of the face in the source image together with the contextual attributes of the target image.
    Type: Grant
    Filed: November 17, 2021
    Date of Patent: February 13, 2024
    Assignee: ADOBE INC.
    Inventors: Kevin Duarte, Wei-An Lin, Ratheesh Kalarot, Shabnam Ghadar, Jingwan Lu, Elya Shechtman, John Thomas Nack
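    The combination step described above (11900519) amounts to pairing an identity code from the source face with a context code from the target image before decoding; the sketch below uses random placeholders for the two trained encoders and simple concatenation as one plausible way to combine them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random placeholders for the two trained encoders; the real system learns these
# codes and may combine them differently than plain concatenation.
source_appearance = rng.normal(size=256)   # inherent attributes of the source face
target_context = rng.normal(size=256)      # pose, lighting, background of the target

combined = np.concatenate([source_appearance, target_context])
# A trained generator would decode `combined` into the modified target image.
print(combined.shape)  # (512,)
```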
  • Patent number: 11900514
    Abstract: Procedural model digital content editing techniques are described that overcome the limitations of conventional techniques by making procedural models available for interaction by a wide range of users without requiring specialized knowledge, and by doing so without “breaking” the underlying model. In the techniques described herein, an inverse procedural model system receives a user input that specifies an edit to digital content generated by a procedural model. The system then selects input parameters from a set of candidate input parameters such that the digital content generated by the procedural model incorporates the edit.
    Type: Grant
    Filed: July 18, 2022
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Vojtech Krs, Radomir Mech, Mathieu Gaillard, Giorgio Gori
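    The parameter-selection step in 11900514 can be illustrated as a search over candidate input parameters for the ones whose procedural output best matches the user's edit. The toy procedural model and cost function below are hypothetical.

```python
import itertools

def procedural_model(width, height):
    """Toy procedural model: returns properties of the generated content.
    Stands in for the much richer procedural graph in the patent."""
    return {"area": width * height, "aspect": width / height}

def select_parameters(edited, candidates):
    """Pick the candidate input parameters whose output best matches the user's
    edit, so the edit is realized without breaking the underlying model."""
    def cost(params):
        out = procedural_model(*params)
        return sum((out[k] - edited[k]) ** 2 for k in edited)
    return min(candidates, key=cost)

# The user's edit implies a larger, square result:
edit_target = {"area": 9.0, "aspect": 1.0}
candidates = list(itertools.product([1.0, 2.0, 3.0, 4.0], repeat=2))
print(select_parameters(edit_target, candidates))  # -> (3.0, 3.0)
```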
  • Patent number: 11900902
    Abstract: Embodiments are disclosed for applying audio signal processing effects to an audio sequence using a deep encoder. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an unprocessed audio sequence and a request to perform an audio signal processing effect on the unprocessed audio sequence. The one or more embodiments further include analyzing, by a deep encoder, the unprocessed audio sequence to determine parameters for processing the unprocessed audio sequence. The one or more embodiments further include sending the unprocessed audio sequence and the parameters to one or more audio signal processing effects plugins to perform the requested audio signal processing effect using the parameters, and outputting a processed audio sequence after the unprocessed audio sequence is processed using the parameters by the one or more audio signal processing effects plugins.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: February 13, 2024
    Assignee: Adobe Inc.
    Inventors: Marco Antonio Martinez Ramirez, Nicholas J. Bryan, Oliver Wang, Paris Smaragdis
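    A compact sketch of the pipeline in 11900902: a stand-in "deep encoder" maps the unprocessed audio to effect parameters, which a toy gain "plugin" then applies. The parameterization (a single gain targeting a fixed RMS) is an assumption made for illustration.

```python
import numpy as np

def deep_encoder(audio):
    """Stand-in for the trained deep encoder: maps unprocessed audio to parameters
    for an effects plugin (here, a single illustrative gain value)."""
    rms = float(np.sqrt(np.mean(audio ** 2)))
    return {"gain": 0.1 / max(rms, 1e-8)}

def gain_plugin(audio, params):
    """Toy audio signal processing effects plugin that applies the parameters."""
    return audio * params["gain"]

unprocessed = 0.02 * np.sin(np.linspace(0, 2 * np.pi * 440, 48000))
params = deep_encoder(unprocessed)
processed = gain_plugin(unprocessed, params)
print(round(float(np.sqrt(np.mean(processed ** 2))), 3))  # ~0.1 RMS after processing
```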
  • Publication number: 20240046399
    Abstract: Systems and methods use machine learning models with content editing tools to prevent or mitigate inadvertent disclosure and dissemination of sensitive data. Entities associated with private information are identified by applying a trained machine learning model to a set of unstructured text data received via an input field of an interface. A privacy score is computed for the text data by identifying connections between the entities, the connections between the entities contributing to the privacy score according to a cumulative privacy risk, the privacy score indicating potential exposure of the private information. The interface is updated to include an indicator distinguishing a target portion of the set of unstructured text data within the input field from other portions of the set of unstructured text data within the input field, wherein a modification to the target portion changes the potential exposure of the private information indicated by the privacy score.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Applicant: Adobe Inc.
    Inventors: Irgelkha Mejia, Ronald Oribio, Robert Burke, Michele Saad
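    The cumulative privacy risk described in 20240046399 can be sketched as a score that grows with each detected entity and with each connection between entities; the weights and entity types below are illustrative assumptions, not the trained model's output.

```python
from itertools import combinations

def privacy_score(entities, connections, base_risk):
    """Cumulative privacy risk: each detected entity contributes a base risk, and
    each connection between entities compounds it, since linked facts are more
    identifying together. Weights are illustrative only."""
    score = sum(base_risk.get(kind, 0.1) for _, kind in entities)
    score += 0.2 * len(connections)        # linked entities raise exposure
    return min(score, 1.0)

entities = [("Jane Doe", "PERSON"), ("415-555-0100", "PHONE"), ("Oak St", "ADDRESS")]
connections = list(combinations([name for name, _ in entities], 2))  # all pairs co-occur
base_risk = {"PERSON": 0.2, "PHONE": 0.3, "ADDRESS": 0.25}
print(privacy_score(entities, connections, base_risk))  # 0.75 + 0.6, capped at 1.0
```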
  • Patent number: 11893794
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms, and higher levels cluster the clip atoms into coarser sets of video segments. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
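    A small sketch of the boundary-merging step behind the clip atoms in 11893794: speech and scene boundaries are unioned, near-duplicates are merged, and the remaining boundaries define the atoms. The merge tolerance is an illustrative choice, not a value from the patent.

```python
def clip_atoms(speech_boundaries, scene_boundaries, duration, tol=0.1):
    """Union speech and scene boundaries, merge boundaries closer than `tol`
    seconds, and return the resulting clip atoms as (start, end) pairs."""
    raw = sorted({0.0, duration, *speech_boundaries, *scene_boundaries})
    merged = [raw[0]]
    for b in raw[1:]:
        if b - merged[-1] >= tol:
            merged.append(b)
    return list(zip(merged[:-1], merged[1:]))

speech = [1.4, 3.2, 7.05]
scenes = [3.25, 7.0, 9.5]
print(clip_atoms(speech, scenes, duration=12.0))
# [(0.0, 1.4), (1.4, 3.2), (3.2, 7.0), (7.0, 9.5), (9.5, 12.0)]
# Higher hierarchy levels would then cluster these atoms into coarser segments.
```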
  • Patent number: 11893007
    Abstract: Embodiments of the present disclosure provide systems, methods, and computer storage media for optimizing computing resources generally associated with cloud-based media services. Instead of decoding digital assets on-premises to stream to a remote client device, an encoded asset can be streamed to the remote client device. A codebook employable for decoding the encoded asset can be embedded into the stream transmitted to the remote client device, so that the remote client device can extract the embedded codebook, and employ the extracted codebook to decode the encoded asset locally. In this way, not only are processing resources associated with on-premises decoding eliminated, but on-premises storage of codebooks can be significantly reduced, while expensive bandwidth is freed up by virtue of transmitting a smaller quantity of data from the cloud to the remote client device.
    Type: Grant
    Filed: July 7, 2021
    Date of Patent: February 6, 2024
    Assignee: ADOBE INC.
    Inventors: Viswanathan Swaminathan, Saayan Mitra
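    The embedded-codebook idea in 11893007 is easy to sketch end to end: the server packs the codebook into the stream next to the encoded payload, and the client extracts it to decode locally. The JSON container and word-level codebook below are illustrative stand-ins for the real encoding.

```python
import json

def make_stream(message):
    """Server side: encode with a codebook and embed that codebook in the stream
    itself, so no on-premises decoding is needed (illustrative container format)."""
    codebook = {word: i for i, word in enumerate(dict.fromkeys(message.split()))}
    encoded = [codebook[w] for w in message.split()]
    return json.dumps({"codebook": codebook, "payload": encoded})

def client_decode(stream):
    """Client side: extract the embedded codebook and decode the payload locally."""
    packet = json.loads(stream)
    inverse = {i: w for w, i in packet["codebook"].items()}
    return " ".join(inverse[i] for i in packet["payload"])

stream = make_stream("to be or not to be")
print(client_decode(stream))  # -> "to be or not to be"
```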
  • Patent number: 11893338
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that merge separate digital point text objects into a single merged digital text object while preserving the properties and original visual appearance associated with the digital text included therein. For example, the disclosed systems can determine point text character properties associated with the separate digital point text objects (e.g., rotations, baseline shifts, etc.). The disclosed systems can merge the separate digital point text objects into a single merged digital point text object and modify associated font character properties to reflect the determined point text character properties. Further, the disclosed systems can generate an area text object based on the merged digital point text object where the area text object includes the digital text and the font character properties.
    Type: Grant
    Filed: July 29, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Praveen Kumar Dhanuka, Arushi Jain, Matthew Fisher
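    A hypothetical data model for the merge in 11893338: each point text object carries its own rotation, baseline shift, and font size, and merging folds those into per-run character properties so the original visual appearance survives in a single object. Field names are assumptions made for illustration.

```python
# Hypothetical fields; not Adobe's internal text object model.
point_texts = [
    {"text": "Sale", "rotation": 0.0, "baseline_shift": 0.0, "font_size": 24},
    {"text": "50%", "rotation": 12.0, "baseline_shift": 4.0, "font_size": 24},
    {"text": "off", "rotation": 0.0, "baseline_shift": -2.0, "font_size": 18},
]

def merge_point_texts(objects):
    """Merge point text objects into one object with per-run character properties."""
    merged = {"text": "", "runs": []}
    for obj in objects:
        start = len(merged["text"])
        merged["text"] += obj["text"]
        merged["runs"].append({
            "range": (start, len(merged["text"])),   # characters covered by this run
            "rotation": obj["rotation"],
            "baseline_shift": obj["baseline_shift"],
            "font_size": obj["font_size"],
        })
    return merged

merged = merge_point_texts(point_texts)
print(merged["text"])      # "Sale50%off"
print(merged["runs"][1])   # preserved properties of the "50%" run
```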
  • Patent number: 11893352
    Abstract: The present disclosure provides systems and methods for relationship extraction. Embodiments of the present disclosure provide a relationship extraction network trained to identify relationships among entities in an input text. The relationship extraction network is used to generate a dependency path between entities in an input phrase. The dependency path includes a set of words that connect the entities, and is used to predict a relationship between the entities. In some cases, the dependency path is related to a syntax tree, but it may include additional words, and omit some words from a path extracted based on a syntax tree.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: February 6, 2024
    Assignee: ADOBE INC.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt
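    The dependency paths used by the relation extraction network in 11893352 can be illustrated with a toy parse: walk from each entity to the root, find the lowest shared ancestor, and join the two walks. The sentence and head map below are a made-up example.

```python
def dependency_path(heads, a, b):
    """Words on the path between tokens a and b in a dependency tree, where
    `heads` maps each token to its head (the root maps to None)."""
    def to_root(t):
        path = [t]
        while heads[t] is not None:
            t = heads[t]
            path.append(t)
        return path
    pa, pb = to_root(a), to_root(b)
    common = next(t for t in pa if t in pb)            # lowest shared ancestor
    return pa[:pa.index(common) + 1] + pb[:pb.index(common)][::-1]

# Toy parse of "The cat sat on the mat": dependents point to their heads.
heads = {"The": "cat", "cat": "sat", "sat": None, "on": "sat", "the": "mat", "mat": "on"}
print(dependency_path(heads, "cat", "mat"))  # ['cat', 'sat', 'on', 'mat']
```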
  • Patent number: 11893717
    Abstract: This disclosure describes one or more embodiments of systems, non-transitory computer-readable media, and methods that can learn or identify a learned-initialization-latent vector for an initialization digital image and reconstruct a target digital image using an image-generating-neural network based on a modified version of the learned-initialization-latent vector. For example, the disclosed systems learn a learned-initialization-latent vector from an initialization image utilizing a high number (e.g., thousands) of learning iterations on an image-generating-neural network (e.g., a GAN). Then, the disclosed systems can modify the learned-initialization-latent vector (of the initialization image) to generate modified or reconstructed versions of target images using the image-generating-neural network.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Christopher Tensmeyer, Vlad Morariu, Michael Brodie
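    A toy version of the two-stage inversion in 11893717, with a linear map standing in for the image-generating neural network: many gradient steps recover a learned-initialization latent from the initialization image, and far fewer steps adapt that latent to a related target image. Learning rates, step counts, and the linear "generator" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 4))          # toy linear "generator": latent (4,) -> image (8,)

def invert(image, z0, steps, lr=0.02):
    """Gradient descent on the latent so that G @ z reconstructs `image`."""
    z = z0.copy()
    for _ in range(steps):
        z -= lr * 2 * G.T @ (G @ z - image)
    return z

# Many iterations on the initialization image yield the learned-initialization latent...
init_image = G @ rng.normal(size=4)
z_init = invert(init_image, np.zeros(4), steps=3000)
# ...which is then refined with far fewer iterations to reconstruct a related target.
target_image = init_image + 0.1 * G @ rng.normal(size=4)
z_target = invert(target_image, z_init, steps=100)
print(round(float(np.linalg.norm(G @ z_target - target_image)), 4))  # small residual
```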
  • Patent number: 11893763
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
    Type: Grant
    Filed: November 22, 2022
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Taesung Park, Richard Zhang, Oliver Wang, Junyan Zhu, Jingwan Lu, Elya Shechtman, Alexei A Efros
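    The style swapping and blending applications in 11893763 reduce to recombining the two codes; the sketch below uses random placeholders for the trained global-and-spatial autoencoder and shows only the recombination step.

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(image):
    """Placeholder for the global-and-spatial autoencoder: returns a global code
    (overall appearance) and a spatial code (per-location structure). The real
    codes come from a trained network; shapes here are illustrative."""
    return {"global": rng.normal(size=64), "spatial": rng.normal(size=(16, 16, 8))}

content_codes = encode("content.png")
style_codes = encode("style.png")

# Style swap: keep the content image's spatial code, take the style image's global code.
swapped = {"spatial": content_codes["spatial"], "global": style_codes["global"]}
# Style blend: interpolate the global codes while keeping the spatial code fixed.
alpha = 0.5
blended_global = (1 - alpha) * content_codes["global"] + alpha * style_codes["global"]
print(swapped["spatial"].shape, blended_global.shape)  # (16, 16, 8) (64,)
```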
  • Patent number: 11893345
    Abstract: Systems and methods for natural language processing are described. One or more embodiments of the present disclosure receive a document comprising a plurality of words organized into a plurality of sentences, the words comprising an event trigger word and an argument candidate word, generate word representation vectors for the words, generate a plurality of document structures including a semantic structure for the document based on the word representation vectors, a syntax structure representing dependency relationships between the words, and a discourse structure representing discourse information of the document based on the plurality of sentences, generate a relationship representation vector based on the document structures, and predict a relationship between the event trigger word and the argument candidate word based on the relationship representation vector.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: February 6, 2024
    Assignee: ADOBE, INC.
    Inventors: Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Tran, Varun Manjunatha, Lidan Wang, Rajiv Jain, Doo Soon Kim, Walter Chang
  • Patent number: 11893792
    Abstract: Techniques are disclosed for identifying and presenting video content that demonstrates features of a target product. The video content can be accessed, for example, from a media database of user-generated videos that demonstrate one or more features of the target product so that a user can see and hear the product in operation via a product webpage before making a purchasing decision. The product functioning videos supplement any static images of the target product and the textual product description to provide the user with additional context for each of the product's features, depending on the textual product description. The user can quickly and easily interact with the product webpage to access and playback the product functioning video to see and/or hear the product in operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: February 6, 2024
    Assignee: Adobe Inc.
    Inventors: Gourav Singhal, Sourabh Gupta, Mrinal Kumar Sharma
  • Publication number: 20240037845
    Abstract: In implementations of systems for efficiently generating blend objects, a computing device implements a blending system to assign unique shape identifiers to objects included in an input render tree. The blending system generates a shape mask based on the unique shape identifiers. A color of a pixel of a blend object is computed based on particular objects of the objects that contribute to the blend object using the shape mask. The blending system generates the blend object for display in a user interface based on the color of the pixel.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Harish Kumar, Apurva Kumar
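    A compact sketch of the shape-mask idea in 20240037845: each object has a unique shape identifier, the mask records which identifiers cover a pixel, and the blend color is computed from only those contributing objects. The multiply blend mode and coverage tests below are illustrative.

```python
# Two objects with unique shape identifiers and simple coverage predicates.
objects = {
    1: {"covers": lambda x, y: x < 4, "color": (1.0, 0.8, 0.2)},
    2: {"covers": lambda x, y: y < 4, "color": (0.2, 0.4, 1.0)},
}

def shape_mask(x, y):
    """Set of unique shape identifiers contributing at pixel (x, y)."""
    return {sid for sid, obj in objects.items() if obj["covers"](x, y)}

def blend_pixel(x, y, backdrop=(1.0, 1.0, 1.0)):
    """Blend color from contributing objects only (multiply mode, for illustration)."""
    color = backdrop
    for sid in sorted(shape_mask(x, y)):
        color = tuple(c * o for c, o in zip(color, objects[sid]["color"]))
    return tuple(round(c, 2) for c in color)

print(blend_pixel(2, 2))  # both objects contribute -> (0.2, 0.32, 0.2)
print(blend_pixel(6, 2))  # only object 2 contributes -> (0.2, 0.4, 1.0)
```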
  • Publication number: 20240037827
    Abstract: Embodiments are disclosed for using machine learning models to perform three-dimensional garment deformation due to character body motion with collision handling. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input, the input including character body shape parameters and character body pose parameters defining a character body, and garment parameters. The disclosed systems and methods further comprise generating, by a first neural network, a first set of garment vertices defining deformations of a garment with the character body based on the input. The disclosed systems and methods further comprise determining, by a second neural network, that the first set of garment vertices includes a second set of garment vertices penetrating the character body. The disclosed systems and methods further comprise modifying, by a third neural network, each garment vertex in the second set of garment vertices to positions outside the character body.
    Type: Application
    Filed: July 27, 2022
    Publication date: February 1, 2024
    Applicant: Adobe Inc.
    Inventors: Yi ZHOU, Yangtuanfeng WANG, Xin SUN, Qingyang TAN, Duygu CEYLAN AKSIT
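    The collision-handling step in 20240037827 can be pictured as detecting garment vertices that ended up inside the body and moving them just outside; the sketch below uses a spherical body and a geometric push-out in place of the patent's third neural network.

```python
import numpy as np

def fix_penetrations(garment_vertices, body_center, body_radius, margin=0.01):
    """Detect garment vertices inside the body (a sphere, for illustration) and
    push them just outside along the radial direction. The real method modifies
    the penetrating vertices with a learned network rather than geometrically."""
    fixed = garment_vertices.copy()
    offsets = fixed - body_center
    dist = np.linalg.norm(offsets, axis=1, keepdims=True)
    inside = (dist < body_radius).flatten()
    fixed[inside] = body_center + offsets[inside] / dist[inside] * (body_radius + margin)
    return fixed

verts = np.array([[0.0, 0.9, 0.0],    # penetrates the unit-sphere body
                  [0.0, 1.2, 0.0]])   # already outside
print(fix_penetrations(verts, body_center=np.zeros(3), body_radius=1.0))
```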
  • Patent number: 11887629
    Abstract: Embodiments are directed to interactive tiles that represent video segments of a segmentation of a video. In some embodiments, each interactive tile represents a different video segment from a particular video segmentation (e.g., a default video segmentation). Each interactive tile includes a thumbnail (e.g., the first frame of the video segment represented by the tile), some transcript from the beginning of the video segment, a visualization of detected faces in the video segment, and one or more faceted timelines that visualize a category of detected features (e.g., a visualization of detected visual scenes, audio classifications, visual artifacts). In some embodiments, interacting with a particular interactive tile navigates to a corresponding portion of the video, adds a corresponding video segment to a selection, and/or scrubs through tile thumbnails.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Seth Walker, Hijung Shin, Cristin Ailidh Fraser, Aseem Agarwala, Lubomira Dontcheva, Joel Richard Brandt, Jovan Popović, Joy Oakyung Kim, Justin Salamon, Jui-hsien Wang, Timothy Jeewun Ganter, Xue Bai, Dingzeyu Li