Patents Assigned to Adobe Inc.
  • Patent number: 11880766
    Abstract: An improved system architecture uses a pipeline including a Generative Adversarial Network (GAN) including a generator neural network and a discriminator neural network to generate an image. An input image in a first domain and information about a target domain are obtained. The domains correspond to image styles. An initial latent space representation of the input image is produced by encoding the input image. An initial output image is generated by processing the initial latent space representation with the generator neural network. Using the discriminator neural network, a score is computed indicating whether the initial output image is in the target domain. A loss is computed based on the computed score. The loss is minimized to compute an updated latent space representation. The updated latent space representation is processed with the generator neural network to generate an output image in the target domain.
    Type: Grant
    Filed: July 23, 2021
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Cameron Smith, Ratheesh Kalarot, Wei-An Lin, Richard Zhang, Niloy Mitra, Elya Shechtman, Shabnam Ghadar, Zhixin Shu, Yannick Hold-Geoffroy, Nathan Carr, Jingwan Lu, Oliver Wang, Jun-Yan Zhu
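    A minimal sketch of the kind of latent-optimization loop the abstract above describes, assuming hypothetical PyTorch modules `encoder`, `generator`, and `discriminator` (the discriminator is assumed to be conditioned on the target domain); this is an illustration, not the patented pipeline.

```python
import torch

def optimize_latent(input_image, target_domain, encoder, generator, discriminator,
                    steps=100, lr=0.05):
    """Refine a latent code until the generated image scores as in-domain."""
    # Encode the input image into an initial latent space representation.
    w = encoder(input_image).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        output = generator(w)                               # candidate output image
        score = discriminator(output, target_domain)        # "is this in the target domain?"
        loss = torch.nn.functional.softplus(-score).mean()  # non-saturating GAN-style loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(w.detach())                            # output image in the target domain
```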
  • Patent number: 11880957
    Abstract: One example method involves operations for receiving a request to transform an input image into a target image. Operations further include providing the input image to a machine learning model trained to adapt images. Training the machine learning model includes accessing training data having a source domain of images and a target domain of images with a target style. Training further includes using a pre-trained generative model to generate an adapted source domain of adapted images having the target style. The adapted source domain is generated by determining a rate of change for parameters of the target style, generating weighted parameters by applying a weight to each of the parameters based on their respective rate of change, and applying the weighted parameters to the source domain. Additionally, operations include using the machine learning model to generate the target image by modifying parameters of the input image using the target style.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: January 23, 2024
    Assignee: Adobe Inc.
    Inventors: Yijun Li, Richard Zhang, Jingwan Lu, Elya Shechtman
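    One way to read the "rate of change" weighting in the abstract above is as a per-parameter blend between a source-domain generator and its target-style fine-tune, emphasizing the parameters that changed fastest. A rough, assumption-laden sketch (the parameter dictionaries and sigmoid weighting are illustrative choices, not the patented procedure):

```python
import torch

def blend_by_rate_of_change(source_params, target_params, sharpness=10.0):
    """Blend source-domain weights toward target-style weights, weighting each
    parameter by how strongly it changed during target-style fine-tuning."""
    blended = {}
    for name, p_src in source_params.items():
        p_tgt = target_params[name]
        rate = (p_tgt - p_src).abs()  # per-parameter change magnitude
        # Parameters that changed more than average get pushed toward the target style.
        weight = torch.sigmoid(sharpness * (rate / (rate.mean() + 1e-8) - 1.0))
        blended[name] = (1 - weight) * p_src + weight * p_tgt
    return blended
```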
  • Publication number: 20240020916
    Abstract: Embodiments are disclosed for optimizing a material graph for replicating a material of a target image. Embodiments include receiving a target image and a material graph to be optimized for replicating a material of the target image. Embodiments include identifying a non-differentiable node of the material graph, the non-differentiable node including a set of input parameters. Embodiments include selecting a differentiable proxy from a library of differentiable proxies, where the selected differentiable proxy is trained to replicate an output of the identified non-differentiable node. Embodiments include generating optimized input parameters for the identified non-differentiable node using the corresponding trained neural network and the target image. Embodiments include replacing the set of input parameters of the non-differentiable node of the material graph with the optimized input parameters.
    Type: Application
    Filed: July 14, 2022
    Publication date: January 18, 2024
    Applicant: Adobe Inc.
    Inventors: Valentin DESCHAINTRE, Yiwei HU, Paul GUERRERO, Milos HASAN
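    The core trick in the abstract above is to route gradients through a learned stand-in for a non-differentiable node. A minimal sketch under assumed components (`proxy` is the trained neural network mimicking the node, `rest_of_graph` is the differentiable remainder of the material graph evaluation):

```python
import torch

def optimize_node_parameters(proxy, params_init, rest_of_graph, target_image,
                             steps=200, lr=0.01):
    """Optimize the input parameters of a non-differentiable material-graph node by
    backpropagating an image loss through its differentiable proxy."""
    params = params_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        node_output = proxy(params)            # differentiable stand-in for the node
        rendered = rest_of_graph(node_output)  # evaluate the downstream graph
        loss = torch.nn.functional.mse_loss(rendered, target_image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # The optimized parameters are plugged back into the original non-differentiable node.
    return params.detach()
```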
  • Publication number: 20240020891
    Abstract: Jitter application techniques are described for vector objects as implemented by a vector object jitter application system. In an implementation, the vector object jitter application system receives an input defining a stroke to be drawn on a user interface. A vector object is then generated representing the stroke and having a variable width determined by applying jitter to the stroke. The vector object having the variable width is displayed in the user interface as the input is received.
    Type: Application
    Filed: July 12, 2022
    Publication date: January 18, 2024
    Applicant: Adobe Inc.
    Inventors: Reena Agrawal, William C. Eisley, JR., Rohit Kumar Guglani, Paul A George, Gourav Tayal, Deep Sinha
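    A toy illustration of width jitter applied to a stroke as it is drawn; the point format, uniform noise model, and width floor are all assumptions, not the claimed implementation:

```python
import random

def apply_width_jitter(stroke_points, base_width=4.0, jitter=0.5, seed=None):
    """Return (x, y, width) samples for a stroke, varying the width at each point.
    `jitter` is the fraction of base_width that may be added or removed per point."""
    rng = random.Random(seed)
    jittered = []
    for x, y in stroke_points:
        offset = rng.uniform(-jitter, jitter) * base_width
        jittered.append((x, y, max(0.1, base_width + offset)))  # keep width positive
    return jittered
```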
  • Patent number: 11875585
    Abstract: Enhanced techniques and circuitry are presented herein for providing responses to user questions from among digital documentation sources spanning various documentation formats, versions, and types. One example includes a method comprising receiving a user question directed to a subject having a documentation corpus, determining a set of passages of the documentation corpus related to the user question, ranking the set of passages according to relevance to the user question, forming semantic clusters comprising sentences extracted from ranked ones of the set of passages according to sentence similarity, and providing a response to the user question based at least on a selected semantic cluster.
    Type: Grant
    Filed: December 15, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Balaji Vasan Srinivasan, Sujith Sai Venna, Kuldeep Kulkarni, Durga Prasad Maram, Dasireddy Sai Shritishma Reddy
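    A compact sketch of the rank-then-cluster flow in the abstract above, assuming a hypothetical `embed` function that returns unit-norm sentence embeddings; the real system's passage ranking, sentence extraction, and cluster selection are more involved:

```python
import numpy as np

def answer_from_corpus(question, passages, embed, top_k=5, cluster_threshold=0.8):
    """Rank passages against the question, split the best ones into sentences,
    group similar sentences into semantic clusters, and return the top cluster."""
    q = embed(question)
    ranked = sorted(passages, key=lambda p: float(np.dot(embed(p), q)), reverse=True)[:top_k]
    sentences = [s.strip() for p in ranked for s in p.split(".") if s.strip()]
    clusters = []
    for s in sentences:
        v = embed(s)
        for cluster in clusters:
            if np.dot(v, cluster["centroid"]) > cluster_threshold:
                cluster["sentences"].append(s)
                cluster["centroid"] = np.mean([embed(t) for t in cluster["sentences"]], axis=0)
                break
        else:
            clusters.append({"sentences": [s], "centroid": v})
    best = max(clusters, key=lambda c: float(np.dot(c["centroid"], q)))
    return ". ".join(best["sentences"]) + "."
```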
  • Patent number: 11875462
    Abstract: In implementations of systems for augmented reality authoring of remote environments, a computing device implements an augmented reality authoring system to display a three-dimensional representation of a remote physical environment on a display device based on orientations of an image capture device. The three-dimensional representation of the remote physical environment is generated from a three-dimensional mesh representing a geometry of the remote physical environment and digital video frames depicting portions of the remote physical environment. The augmented reality authoring system receives input data describing a request to display a digital video frame of the digital video frames. A particular digital video frame of the digital video frames is determined based on an orientation of the image capture device relative to the three-dimensional mesh. The augmented reality authoring system displays the particular digital video frame on the display device.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Zeyu Wang, Paul J. Asente, Cuong D. Nguyen
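    Reduced to its simplest form, the frame-selection step in the abstract above picks the captured frame whose camera orientation best matches the current one. A sketch using view directions as a stand-in for the full orientation-versus-mesh computation (the (frame_id, view_dir) format is assumed):

```python
import numpy as np

def select_frame(current_view_dir, frames):
    """Pick the captured frame whose camera view direction is most aligned with the
    current device orientation. `frames` is a list of (frame_id, view_dir) pairs."""
    current = np.asarray(current_view_dir, dtype=float)
    current = current / np.linalg.norm(current)
    best_id, _ = max(frames, key=lambda f: float(np.dot(np.asarray(f[1], dtype=float), current)))
    return best_id
```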
  • Patent number: 11875781
    Abstract: A media edit point selection process can include a media editing software application programmatically converting speech to text and storing a timestamp-to-text map. The map correlates text corresponding to speech extracted from an audio track for the media clip to timestamps for the media clip. The timestamps correspond to words and some gaps in the speech from the audio track. The probability of identified gaps corresponding to a grammatical pause by the speaker is determined using the timestamp-to-text map and a semantic model. Potential edit points corresponding to grammatical pauses in the speech are stored for display or for additional use by the media editing software application. Text can optionally be displayed to a user during media editing.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Somya Jain, Ajay Bedi
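    The gap-detection step lends itself to a short sketch: given word-level timestamps from speech-to-text, flag gaps long enough to plausibly be grammatical pauses. The duration threshold below stands in for the semantic-model probability estimate described in the abstract:

```python
def candidate_edit_points(word_timings, min_gap=0.35):
    """Return candidate edit points from (word, start_s, end_s) tuples by finding
    gaps between consecutive words that exceed min_gap seconds."""
    points = []
    for (w1, s1, e1), (w2, s2, e2) in zip(word_timings, word_timings[1:]):
        gap = s2 - e1
        if gap >= min_gap:
            points.append({"time": e1 + gap / 2, "gap": gap, "after_word": w1})
    return points
```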
  • Patent number: 11875512
    Abstract: Embodiments are disclosed for training a neural network classifier to learn to more closely align an input image with its attribution map. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a training image comprising a representation of one or more objects, the training image associated with at least one label for the representation of the one or more objects, generating a perturbed training image based on the training image using a neural network, and training the neural network using the perturbed training image by minimizing a combination of classification loss and attribution loss to learn to align an image with its corresponding attribution map.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Mayank Singh, Balaji Krishnamurthy, Nupur Kumari, Puneet Mangla
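    One plausible instantiation of "a combination of classification loss and attribution loss" is shown below, using input gradients as the attribution map and penalizing the gap between the clean and perturbed images' saliency; this is an interpretation for illustration, not the patented training objective:

```python
import torch
import torch.nn.functional as F

def training_step(model, image, perturbed, label, optimizer, lam=0.5):
    """One step: classification loss on the perturbed image plus an attribution-alignment
    term keeping its input-gradient saliency close to that of the clean image."""
    def saliency(x):
        x = x.clone().requires_grad_(True)
        score = model(x).gather(1, label.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(score, x, create_graph=True)
        return grad.abs()

    cls_loss = F.cross_entropy(model(perturbed), label)
    attr_loss = F.l1_loss(saliency(perturbed), saliency(image))
    loss = cls_loss + lam * attr_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```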
  • Patent number: 11875260
    Abstract: The architectural complexity of a neural network is reduced by selectively pruning channels. A cost metric for a convolution layer is determined. The cost metric indicates a resource cost per channel for the channels of the layer. Training the neural network includes, for channels of the layer, updating a channel-scaling coefficient based on the cost metric. The channel-scaling coefficient linearly scales the output of the channel. A constant channel is identified based on the channel-scaling coefficients. The neural network is updated by pruning the constant channel. Model weights are updated via a stochastic gradient descent of a training loss function evaluated on training data. The channel-scaling coefficients are updated via an iterative-thresholding algorithm that penalizes a batch normalization loss function based on the cost metric for the layer and a norm of the channel-scaling coefficients.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zhe Lin, Jianbo Ye
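    A sketch of the channel-scaling update, assuming the batch-norm scale factors play the role of the channel-scaling coefficients (a common choice in pruning work, and consistent with the batch-normalization loss mentioned above); the exact iterative-thresholding schedule in the patent is not reproduced here:

```python
import torch

def ista_update_channel_scales(bn_layers, layer_costs, step_size=1e-3):
    """Soft-threshold the per-channel scale factors, with a larger threshold for
    channels in layers that have a higher resource cost per channel."""
    with torch.no_grad():
        for bn, cost in zip(bn_layers, layer_costs):
            threshold = step_size * cost
            gamma = bn.weight
            gamma.copy_(torch.sign(gamma) * torch.clamp(gamma.abs() - threshold, min=0.0))

def prunable_channels(bn_layers, tol=1e-4):
    """Channels whose scale collapsed to (near) zero produce a constant output and
    are candidates for pruning."""
    return [(i, torch.nonzero(bn.weight.abs() < tol).flatten().tolist())
            for i, bn in enumerate(bn_layers)]
```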
  • Patent number: 11875442
    Abstract: Embodiments are disclosed for articulated part extraction using images of animated characters from sprite sheets by a digital design system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a plurality of images depicting an animated character in different poses. The disclosed systems and methods further comprise, for each pair of images in the plurality of images, determining, by a first machine learning model, pixel correspondences between pixels of the pair of images, and determining, by a second machine learning model, pixel clusters representing the animated character, each pixel cluster corresponding to a different structural segment of the animated character. The disclosed systems and methods further comprise selecting a subset of clusters that reconstructs the different poses of the animated character. The disclosed systems and methods further comprise creating a rigged animated character based on the selected subset of clusters.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Zhan Xu, Yang Zhou, Deepali Aneja, Evangelos Kalogerakis
  • Patent number: 11874902
    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
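    A stripped-down sketch of the compose-and-retrieve step: decompose the image and text into content and style vectors, combine them, and return the best-matching gallery item. The additive composition and the `encoders`/`gallery` structures are assumptions for illustration:

```python
import numpy as np

def retrieve_target(image, text, encoders, gallery):
    """Compose global content and style vectors from image and text decompositions,
    then return the id of the gallery image that best matches both."""
    img_content, img_style = encoders["image"](image)  # decompose source image
    txt_content, txt_style = encoders["text"](text)    # decompose text query
    global_content = img_content + txt_content         # simple additive composition
    global_style = img_style + txt_style

    def score(entry):
        return float(np.dot(entry["content"], global_content) +
                     np.dot(entry["style"], global_style))

    return max(gallery, key=score)["id"]
```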
  • Patent number: 11875510
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilizes a neural network having a hierarchy of hierarchical point-wise refining blocks to generate refined segmentation masks for high-resolution digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network having an encoder and a recursive decoder to generate the refined segmentation masks. The recursive decoder includes a deconvolution branch for generating feature maps and a refinement branch for generating and refining segmentation masks. In particular, in some cases, the refinement branch includes a hierarchy of hierarchical point-wise refining blocks that recursively refine a segmentation mask generated for a digital visual media item.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Yilin Wang, Chenglin Yang, Jianming Zhang, He Zhang, Zhe Lin
  • Patent number: 11875446
    Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
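    The data structure at the center of the abstract above is easy to picture: nodes carry an operator type and parameters, and directed edges wire them together. A minimal sketch of assembling the three generated sequences into such a graph (the node representation is an assumption):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OperatorNode:
    """One node of a procedural media generator: an operator type, its parameters,
    and the indices of the upstream nodes feeding it."""
    op_type: str
    params: List[float]
    inputs: List[int] = field(default_factory=list)

def assemble_generator(op_types, op_params, edges):
    """Combine the generated operator types, per-operator parameters, and directed
    edges into a node graph."""
    nodes = [OperatorNode(t, list(p)) for t, p in zip(op_types, op_params)]
    for src, dst in edges:
        nodes[dst].inputs.append(src)
    return nodes
```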
  • Patent number: 11875435
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately and flexibly generating scalable fonts utilizing multi-implicit neural font representations. For instance, the disclosed systems combine deep learning with differentiable rasterization to generate a multi-implicit neural font representation of a glyph. For example, the disclosed systems utilize an implicit differentiable font neural network to determine a font style code for an input glyph as well as distance values for locations of the glyph to be rendered based on a glyph label and the font style code. Further, the disclosed systems rasterize the distance values utilizing a differentiable rasterization model and combine the rasterized distance values to generate a permutation-invariant version of the glyph corresponding to the glyph set.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Chinthala Pradyumna Reddy, Zhifei Zhang, Matthew Fisher, Hailin Jin, Zhaowen Wang, Niloy J Mitra
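    The differentiable-rasterization step can be sketched in one line: map each sampled signed distance to a soft coverage value so gradients flow back into the implicit font network. The sigmoid form and sharpness constant are illustrative assumptions:

```python
import torch

def rasterize_sdf(distances, sharpness=50.0):
    """Soft rasterization of per-pixel signed distances (negative = inside the glyph):
    a smooth step keeps the operation differentiable end to end."""
    return torch.sigmoid(-sharpness * distances)
```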
  • Patent number: 11875568
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e., non-overlapping) video segments.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
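    A toy version of the two stages in the abstract above: cut the timeline into clip atoms wherever a speech or scene boundary falls, then build a coarser level by merging adjacent atoms. The duration-based merge rule below is a stand-in for the semantic clustering used in practice:

```python
def clip_atoms(speech_boundaries, scene_boundaries, duration):
    """Finest level: disjoint segments covering the whole video, cut at every boundary."""
    cuts = sorted(set([0.0, duration] + list(speech_boundaries) + list(scene_boundaries)))
    return list(zip(cuts, cuts[1:]))

def cluster_up(segments, max_span=2.0):
    """One coarser level: greedily absorb the next atom while the current segment is
    shorter than max_span seconds, keeping a complete, disjoint cover of the video."""
    if not segments:
        return []
    merged = [list(segments[0])]
    for start, end in segments[1:]:
        if merged[-1][1] - merged[-1][0] < max_span:
            merged[-1][1] = end
        else:
            merged.append([start, end])
    return [tuple(seg) for seg in merged]
```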
  • Patent number: 11875221
    Abstract: Systems and methods generate a filtering function for editing an image with reduced attribute correlation. An image editing system groups training data into bins according to a distribution of a target attribute. For each bin, the system samples a subset of the training data based on a pre-determined target distribution of a set of additional attributes in the training data. The system identifies a direction in the sampled training data corresponding to the distribution of the target attribute to generate a filtering vector for modifying the target attribute in an input image, obtains a latent space representation of an input image, applies the filtering vector to the latent space representation of the input image to generate a filtered latent space representation of the input image, and provides the filtered latent space representation as input to a neural network to generate an output image with a modification to the target attribute.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
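    A rough sketch of deriving and applying a filtering vector, assuming the decorrelating resampling has already produced `latents` (one row per training image) and `target_attr` (the attribute value per image); the least-squares direction is an illustrative choice, not the patented estimator:

```python
import numpy as np

def filtering_vector(latents, target_attr):
    """Fit a unit-length direction in latent space that explains the target attribute."""
    X = latents - latents.mean(axis=0)
    y = target_attr - target_attr.mean()
    direction, *_ = np.linalg.lstsq(X, y, rcond=None)
    return direction / np.linalg.norm(direction)

def apply_filter(latent, direction, strength=2.0):
    """Shift an image's latent code along the filtering vector; decoding the result
    with the generator yields the edited output image."""
    return latent + strength * direction
```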
  • Publication number: 20240013494
    Abstract: In implementations of systems for generating spacing guides for objects in perspective views, a computing device implements a guide system to determine groups of line segments of perspective bounding boxes of objects displayed in a user interface of a digital content editing application. Interaction data is received describing a user interaction with a particular object of the objects displayed in the user interface. The guide system identifies a particular group of the groups of line segments based on a line segment of a perspective bounding box of the particular object. An indication of a guide is generated for display in the user interface based on the line segment and a first line segment included in the particular group.
    Type: Application
    Filed: July 6, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Ashish Jain, Arushi Jain
  • Publication number: 20240012849
    Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Praneetha VADDAMANU, Nihal JAIN, Paridhi MAHESHWARI, Kuldeep KULKARNI, Vishwa VINAY, Balaji Vasan SRINIVASAN, Niyati CHHAYA, Harshit AGRAWAL, Prabhat MAHAPATRA, Rizurekh SAHA
  • Patent number: 11869123
    Abstract: Techniques for rendering two-dimensional vector graphics are described. The techniques include using a central processing unit to generate tessellated triangles along a vector path in which each of the tessellated triangles is represented by a set of vertices. From the tessellated triangles, an index buffer and a compressed vertex buffer are generated. The index buffer includes a vertex index for each vertex of each of the tessellated triangles. The compressed vertex buffer includes a vertex buffer entry for each unique vertex that maps to one or more vertex indices of the index buffer. The index buffer and the compressed vertex buffer are provided to a graphics processing unit to render the vector path with anti-aliasing.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Harish Agarwal, Saurabh Gupta, Himanshu Verma
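    The two GPU-side structures named in the abstract are straightforward to build on the CPU: deduplicate the tessellation vertices into a compressed vertex buffer and record each triangle corner as an index into it. A small sketch (the triangle and vertex representation is assumed):

```python
def build_buffers(triangles):
    """Build a compressed vertex buffer (unique vertices) and an index buffer
    (three indices per triangle) from ((x, y), (x, y), (x, y)) triangle tuples."""
    vertex_buffer = []     # unique vertices, in first-seen order
    vertex_to_index = {}   # vertex -> position in vertex_buffer
    index_buffer = []
    for triangle in triangles:
        for vertex in triangle:
            if vertex not in vertex_to_index:
                vertex_to_index[vertex] = len(vertex_buffer)
                vertex_buffer.append(vertex)
            index_buffer.append(vertex_to_index[vertex])
    return vertex_buffer, index_buffer
```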
  • Patent number: 11871145
    Abstract: Embodiments are disclosed for video image interpolation. In some embodiments, video image interpolation includes receiving a pair of input images from a digital video, determining, using a neural network, a plurality of spatially varying kernels each corresponding to a pixel of an output image, convolving a first set of spatially varying kernels with a first input image from the pair of input images and a second set of spatially varying kernels with a second input image from the pair of input images to generate filtered images, and generating the output image by performing kernel normalization on the filtered images.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Simon Niklaus, Oliver Wang, Long Mai
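    A sketch of the filtering and kernel-normalization steps, assuming the kernel-prediction network has already produced one K×K kernel per output pixel for each input frame (tensors of shape (N, K*K, H, W)); the network itself is outside the scope of this sketch:

```python
import torch
import torch.nn.functional as F

def interpolate_frame(frame1, frame2, kernels1, kernels2):
    """Blend two (N, C, H, W) frames with per-pixel kernels, then renormalize so the
    weights applied at each output pixel sum to one (kernel normalization)."""
    n, c, h, w = frame1.shape
    k = int(kernels1.shape[1] ** 0.5)  # kernel side length (assumed odd)
    pad = k // 2

    def local_filter(frame, kernels):
        # Gather each pixel's KxK neighborhood and take the kernel-weighted sum.
        patches = F.unfold(frame, kernel_size=k, padding=pad).view(n, c, k * k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)

    filtered = local_filter(frame1, kernels1) + local_filter(frame2, kernels2)
    weight_sum = (kernels1 + kernels2).sum(dim=1, keepdim=True)
    return filtered / weight_sum.clamp(min=1e-8)
```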