Patents Assigned to Adobe Inc.
  • Patent number: 11875781
    Abstract: A media edit point selection process can include a media editing software application programmatically converting speech to text and storing a timestamp-to-text map. The map correlates text corresponding to speech extracted from an audio track for the media clip to timestamps for the media clip. The timestamps correspond to words and some gaps in the speech from the audio track. The probability of identified gaps corresponding to a grammatical pause by the speaker is determined using the timestamp-to-text map and a semantic model. Potential edit points corresponding to grammatical pauses in the speech are stored for display or for additional use by the media editing software application. Text can optionally be displayed to a user during media editing.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Amol Jindal, Somya Jain, Ajay Bedi
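    Illustrative sketch (hypothetical, not from the patent): a minimal Python version of the gap-scoring idea, where word-level timestamps stand in for the timestamp-to-text map and a punctuation heuristic stands in for the semantic model; all names and thresholds are assumptions.
      # Find candidate edit points between spoken words (toy heuristic, not Adobe's model).
      from dataclasses import dataclass

      @dataclass
      class Word:
          text: str
          start: float  # seconds
          end: float

      def candidate_edit_points(words, min_gap=0.25):
          """Return (timestamp, score) pairs for gaps likely to be grammatical pauses."""
          points = []
          for prev, nxt in zip(words, words[1:]):
              gap = nxt.start - prev.end
              if gap < min_gap:
                  continue
              score = min(1.0, gap)  # longer silences look more like deliberate pauses
              if prev.text.rstrip().endswith(('.', '?', '!', ',')):
                  score = min(1.0, score + 0.4)  # sentence-final punctuation boosts the score
              points.append((prev.end, score))
          return points

      words = [Word("Hello.", 0.0, 0.4), Word("Today", 1.1, 1.4), Word("we", 1.45, 1.5)]
      print(candidate_edit_points(words))  # only the gap after "Hello." qualifies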
  • Patent number: 11875512
    Abstract: Embodiments are disclosed for training a neural network classifier to learn to more closely align an input image with its attribution map. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a training image comprising a representation of one or more objects, the training image associated with at least one label for the representation of the one or more objects, generating a perturbed training image based on the training image using a neural network, and training the neural network using the perturbed training image by minimizing a combination of classification loss and attribution loss to learn to align an image with its corresponding attribution map.
    Type: Grant
    Filed: December 29, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Mayank Singh, Balaji Krishnamurthy, Nupur Kumari, Puneet Mangla
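    Illustrative sketch (hypothetical, not from the patent): a PyTorch-style combined loss, assuming gradient saliency as the attribution map and an L1 alignment term; the patent does not specify these particular choices.
      import torch
      import torch.nn.functional as F

      def attribution_map(model, image, label):
          """Gradient-based saliency as a stand-in attribution map."""
          image = image.clone().requires_grad_(True)
          loss = F.cross_entropy(model(image), label)
          grads, = torch.autograd.grad(loss, image, create_graph=True)
          return grads.abs().sum(dim=1)  # (N, H, W)

      def combined_loss(model, image, perturbed_image, label, lam=0.5):
          """Classification loss on the perturbed image plus an attribution-alignment term."""
          cls_loss = F.cross_entropy(model(perturbed_image), label)
          attr_loss = F.l1_loss(attribution_map(model, image, label),
                                attribution_map(model, perturbed_image, label))
          return cls_loss + lam * attr_loss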
  • Patent number: 11875260
    Abstract: The architectural complexity of a neural network is reduced by selectively pruning channels. A cost metric for a convolution layer is determined. The cost metric indicates a resource cost per channel for the channels of the layer. Training the neural network includes, for channels of the layer, updating a channel-scaling coefficient based on the cost metric. The channel-scaling coefficient linearly scales the output of the channel. A constant channel is identified based on the channel-scaling coefficients. The neural network is updated by pruning the constant channel. Model weights are updated via a stochastic gradient descent of a training loss function evaluated on training data. The channel-scaling coefficients are updated via an iterative-thresholding algorithm that penalizes a batch normalization loss function based on the cost metric for the layer and a norm of the channel-scaling coefficients.
    Type: Grant
    Filed: February 13, 2018
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Xin Lu, Zhe Lin, Jianbo Ye
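    Illustrative sketch (hypothetical, not from the patent): one cost-weighted soft-thresholding step on a layer's channel-scaling coefficients; the learning rate, penalty, and tolerance are made-up parameters.
      import numpy as np

      def threshold_update(gamma, grad, cost_per_channel, lr=0.01, penalty=1e-3):
          """One iterative-thresholding step on channel-scaling coefficients.
          gamma: per-channel scaling coefficients of one convolution layer
          grad:  gradient of the training / batch-normalization loss w.r.t. gamma
          cost_per_channel: resource cost of keeping each channel (e.g., FLOPs)"""
          gamma = gamma - lr * grad                    # gradient step
          thresh = lr * penalty * cost_per_channel     # cost-weighted threshold
          return np.sign(gamma) * np.maximum(np.abs(gamma) - thresh, 0.0)

      def prunable_channels(gamma, tol=1e-8):
          """Channels whose scaling coefficient has collapsed to zero output a constant
          and can be pruned from the network."""
          return np.flatnonzero(np.abs(gamma) < tol)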
  • Patent number: 11875442
    Abstract: Embodiments are disclosed for articulated part extraction using images of animated characters from sprite sheets by a digital design system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a plurality of images depicting an animated character in different poses. The disclosed systems and methods further comprise, for each pair of images in the plurality of images, determining, by a first machine learning model, pixel correspondences between pixels of the pair of images, and determining, by a second machine learning model, pixel clusters representing the animated character, each pixel cluster corresponding to a different structural segment of the animated character. The disclosed systems and methods further comprise selecting a subset of clusters that reconstructs the different poses of the animated character. The disclosed systems and methods further comprise creating a rigged animated character based on the selected subset of clusters.
    Type: Grant
    Filed: May 31, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Matthew David Fisher, Zhan Xu, Yang Zhou, Deepali Aneja, Evangelos Kalogerakis
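    Illustrative sketch (hypothetical, not from the patent): a greedy stand-in for the cluster-subset selection step, choosing pixel clusters until they jointly reconstruct most of the character across poses; the data layout and coverage threshold are assumptions.
      def select_clusters(clusters, character_pixels, coverage=0.95):
          """clusters: {cluster_id: set of (pose, pixel) pairs it explains}
          character_pixels: set of all (pose, pixel) pairs belonging to the character."""
          chosen, covered = [], set()
          target = coverage * len(character_pixels)
          while len(covered) < target:
              best = max(clusters, key=lambda c: len(clusters[c] - covered))
              gain = clusters[best] - covered
              if not gain:
                  break  # nothing left to add
              chosen.append(best)
              covered |= gain
          return chosen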
  • Patent number: 11874902
    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Pinkesh Badjatiya, Surgan Jandial, Pranit Chawla, Mausoom Sarkar, Ayush Chopra
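    Illustrative sketch (hypothetical, not from the patent): composing global content and style vectors by simple blending and retrieving the best gallery match by cosine similarity; the blending weight and similarity choice are assumptions.
      import numpy as np

      def compose(image_content, image_style, text_content, text_style, alpha=0.5):
          """Blend image and text features into global content/style vectors."""
          global_content = alpha * image_content + (1 - alpha) * text_content
          global_style = alpha * image_style + (1 - alpha) * text_style
          return global_content, global_style

      def retrieve(global_content, global_style, gallery):
          """gallery: list of (name, content_vec, style_vec); return the best match."""
          def cos(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
          return max(gallery, key=lambda g: cos(global_content, g[1]) + cos(global_style, g[2]))[0]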
  • Patent number: 11875510
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize a neural network having a hierarchy of hierarchical point-wise refining blocks to generate refined segmentation masks for high-resolution digital visual media items. For example, in one or more embodiments, the disclosed systems utilize a segmentation refinement neural network having an encoder and a recursive decoder to generate the refined segmentation masks. The recursive decoder includes a deconvolution branch for generating feature maps and a refinement branch for generating and refining segmentation masks. In particular, in some cases, the refinement branch includes a hierarchy of hierarchical point-wise refining blocks that recursively refine a segmentation mask generated for a digital visual media item.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Yilin Wang, Chenglin Yang, Jianming Zhang, He Zhang, Zhe Lin
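    Illustrative sketch (hypothetical, not from the patent): a single point-wise refinement pass that re-predicts only the least-confident pixels of a coarse mask, a simplified stand-in for one hierarchical point-wise refining block.
      import numpy as np

      def pointwise_refine(coarse_mask, features, predict_point, k=1000):
          """coarse_mask: (H, W) probabilities; features: (H, W, C) per-pixel features;
          predict_point: callable mapping one feature vector to a refined probability."""
          h, w = coarse_mask.shape
          uncertainty = -np.abs(coarse_mask - 0.5)          # closest to 0.5 = least confident
          idx = np.argsort(uncertainty.ravel())[-k:]        # k most uncertain pixels
          refined = coarse_mask.copy().ravel()
          flat_feats = features.reshape(-1, features.shape[-1])
          for i in idx:
              refined[i] = predict_point(flat_feats[i])
          return refined.reshape(h, w)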
  • Patent number: 11875446
    Abstract: Aspects of a system and method for procedural media generation include generating a sequence of operator types using a node generation network; generating a sequence of operator parameters for each operator type of the sequence of operator types using a parameter generation network; generating a sequence of directed edges based on the sequence of operator types using an edge generation network; combining the sequence of operator types, the sequence of operator parameters, and the sequence of directed edges to obtain a procedural media generator, wherein each node of the procedural media generator comprises an operator that includes an operator type from the sequence of operator types, a corresponding sequence of operator parameters, and an input connection or an output connection from the sequence of directed edges that connects the node to another node of the procedural media generator; and generating a media asset using the procedural media generator.
    Type: Grant
    Filed: May 6, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Paul Augusto Guerrero, Milos Hasan, Kalyan K. Sunkavalli, Radomir Mech, Tamy Boubekeur, Niloy Jyoti Mitra
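    Illustrative sketch (hypothetical, not from the patent): a toy data structure for a generated node graph and a topological-order evaluator; the operator names and parameters are invented for the example.
      from dataclasses import dataclass, field

      @dataclass
      class Node:
          op_type: str
          params: dict
          inputs: list = field(default_factory=list)  # indices of upstream nodes

      def evaluate(nodes, operators):
          """Evaluate nodes in index (topological) order; operators maps op_type -> callable."""
          outputs = []
          for node in nodes:
              args = [outputs[i] for i in node.inputs]
              outputs.append(operators[node.op_type](*args, **node.params))
          return outputs[-1]  # the final media asset

      ops = {"noise": lambda scale=1.0: scale * 0.5,
             "blend": lambda a, b, w=0.5: w * a + (1 - w) * b}
      graph = [Node("noise", {"scale": 2.0}), Node("noise", {"scale": 1.0}),
               Node("blend", {"w": 0.25}, inputs=[0, 1])]
      print(evaluate(graph, ops))  # 0.25 * 1.0 + 0.75 * 0.5 = 0.625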
  • Patent number: 11875435
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately and flexibly generating scalable fonts utilizing multi-implicit neural font representations. For instance, the disclosed systems combine deep learning with differentiable rasterization to generate a multi-implicit neural font representation of a glyph. For example, the disclosed systems utilize an implicit differentiable font neural network to determine a font style code for an input glyph as well as distance values for locations of the glyph to be rendered based on a glyph label and the font style code. Further, the disclosed systems rasterize the distance values utilizing a differentiable rasterization model and combine the rasterized distance values to generate a permutation-invariant version of the glyph for the corresponding glyph set.
    Type: Grant
    Filed: October 12, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Chinthala Pradyumna Reddy, Zhifei Zhang, Matthew Fisher, Hailin Jin, Zhaowen Wang, Niloy J Mitra
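    Illustrative sketch (hypothetical, not from the patent): smooth rasterization of a signed-distance field into anti-aliased coverage, the rough idea behind differentiable rasterization of predicted distance values; the sigmoid sharpness and toy circle glyph are assumptions.
      import numpy as np

      def rasterize_sdf(sdf, sharpness=20.0):
          """Map signed distances (negative inside the glyph) to coverage in [0, 1]
          with a smooth sigmoid instead of a hard threshold, so gradients can flow."""
          return 1.0 / (1.0 + np.exp(sharpness * sdf))

      ys, xs = np.mgrid[-1:1:64j, -1:1:64j]
      sdf = np.sqrt(xs**2 + ys**2) - 0.5   # toy "glyph": a filled circle of radius 0.5
      image = rasterize_sdf(sdf)
      print(image.shape, float(image.min()), float(image.max()))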
  • Patent number: 11875568
    Abstract: Embodiments are directed to segmentation and hierarchical clustering of video. In an example implementation, a video is ingested to generate a multi-level hierarchical segmentation of the video. In some embodiments, the finest level identifies a smallest interaction unit of the video—semantically defined video segments of unequal duration called clip atoms. Clip atom boundaries are detected in various ways. For example, speech boundaries are detected from audio of the video, and scene boundaries are detected from video frames of the video. The detected boundaries are used to define the clip atoms, which are hierarchically clustered to form a multi-level hierarchical representation of the video. In some cases, the hierarchical segmentation identifies a static, pre-computed, hierarchical set of video segments, where each level of the hierarchical segmentation identifies a complete set (i.e., covering the entire range of the video) of disjoint (i.e.
    Type: Grant
    Filed: June 2, 2022
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Hijung Shin, Xue Bai, Aseem Agarwala, Joel R. Brandt, Jovan Popović, Lubomira Dontcheva, Dingzeyu Li, Joy Oakyung Kim, Seth Walker
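    Illustrative sketch (hypothetical, not from the patent): the union of detected speech and scene boundaries defines the finest-level clip atoms, which a simple greedy merge then coarsens into a higher hierarchy level; boundary values and the minimum length are invented.
      def clip_atoms(speech_bounds, scene_bounds, duration):
          """Finest level: disjoint segments covering the whole video."""
          cuts = sorted(set([0.0, duration] + speech_bounds + scene_bounds))
          return list(zip(cuts, cuts[1:]))

      def coarsen(segments, min_len=5.0):
          """One level up: greedily merge neighbours until segments reach min_len."""
          merged = [list(segments[0])]
          for start, end in segments[1:]:
              if merged[-1][1] - merged[-1][0] < min_len:
                  merged[-1][1] = end
              else:
                  merged.append([start, end])
          return [tuple(s) for s in merged]

      atoms = clip_atoms([2.0, 7.5], [4.0], 12.0)
      print(atoms)           # [(0.0, 2.0), (2.0, 4.0), (4.0, 7.5), (7.5, 12.0)]
      print(coarsen(atoms))  # coarser segments, still disjoint and covering the full range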
  • Patent number: 11875221
    Abstract: Systems and methods generate a filtering function for editing an image with reduced attribute correlation. An image editing system groups training data into bins according to a distribution of a target attribute. For each bin, the system samples a subset of the training data based on a pre-determined target distribution of a set of additional attributes in the training data. The system identifies a direction in the sampled training data corresponding to the distribution of the target attribute to generate a filtering vector for modifying the target attribute in an input image, obtains a latent space representation of an input image, applies the filtering vector to the latent space representation of the input image to generate a filtered latent space representation of the input image, and provides the filtered latent space representation as input to a neural network to generate an output image with a modification to the target attribute.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: January 16, 2024
    Assignee: Adobe Inc.
    Inventors: Wei-An Lin, Baldo Faieta, Cameron Smith, Elya Shechtman, Jingwan Lu, Jun-Yan Zhu, Niloy Mitra, Ratheesh Kalarot, Richard Zhang, Shabnam Ghadar, Zhixin Shu
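    Illustrative sketch (hypothetical, not from the patent): fitting a latent-space direction correlated with the target attribute and applying it as a filtering vector; a least-squares fit stands in for the patent's procedure, and the bin-wise resampling against correlated attributes is omitted.
      import numpy as np

      def filtering_vector(latents, target_attr):
          """latents: (n, d) latent codes; target_attr: (n,) attribute values."""
          X = latents - latents.mean(axis=0)
          y = target_attr - target_attr.mean()
          direction, *_ = np.linalg.lstsq(X, y, rcond=None)
          return direction / (np.linalg.norm(direction) + 1e-8)

      def apply_filter(latent, direction, strength=2.0):
          """Shift an input image's latent code along the direction before decoding."""
          return latent + strength * direction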
  • Publication number: 20240013494
    Abstract: In implementations of systems for generating spacing guides for objects in perspective views, a computing device implements a guide system to determine groups of line segments of perspective bounding boxes of objects displayed in a user interface of a digital content editing application. Interaction data is received describing a user interaction with a particular object of the objects displayed in the user interface. The guide system identifies a particular group of the groups of line segments based on a line segment of a perspective bounding box of the particular object. An indication of a guide is generated for display in the user interface based on the line segment and a first line segment included in the particular group.
    Type: Application
    Filed: July 6, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Ashish Jain, Arushi Jain
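    Illustrative sketch (hypothetical, not from the patent): grouping bounding-box line segments whose directions agree within a tolerance, a simple stand-in for grouping edges that share a perspective direction; the tolerance is an assumption.
      import math

      def segment_angle(seg):
          (x1, y1), (x2, y2) = seg
          return math.atan2(y2 - y1, x2 - x1) % math.pi  # direction, ignoring orientation

      def group_segments(segments, tol_deg=3.0):
          """Group line segments of perspective bounding boxes by similar direction."""
          groups = []
          for seg in segments:
              a = segment_angle(seg)
              for group in groups:
                  if abs(a - segment_angle(group[0])) < math.radians(tol_deg):
                      group.append(seg)
                      break
              else:
                  groups.append([seg])
          return groups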
  • Publication number: 20240012849
    Abstract: Embodiments are disclosed for multichannel content recommendation. The method may include receiving an input collection comprising a plurality of images. The method may include extracting a set of feature channels from each of the images. The method may include generating, by a trained machine learning model, an intent channel of the input collection from the set of feature channels. The method may include retrieving, from a content library, a plurality of search result images that include a channel that matches the intent channel. The method may include generating a recommended set of images based on the intent channel and the set of feature channels.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Applicant: Adobe Inc.
    Inventors: Praneetha VADDAMANU, Nihal JAIN, Paridhi MAHESHWARI, Kuldeep KULKARNI, Vishwa VINAY, Balaji Vasan SRINIVASAN, Niyati CHHAYA, Harshit AGRAWAL, Prabhat MAHAPATRA, Rizurekh SAHA
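    Illustrative sketch (hypothetical, not from the patent): a mean over the collection's per-image channel features stands in for the trained intent model, and library assets are ranked by cosine similarity on the matching channel; array shapes are assumptions.
      import numpy as np

      def intent_channel(collection_channels):
          """collection_channels: (num_images, num_channels, dim) feature channels."""
          return collection_channels.mean(axis=0)  # (num_channels, dim) intent estimate

      def recommend(intent, library, channel, top_k=5):
          """library: (num_assets, num_channels, dim); rank assets by similarity between
          the chosen channel of each asset and the corresponding intent channel."""
          a = library[:, channel]
          b = intent[channel]
          sims = a @ b / (np.linalg.norm(a, axis=1) * np.linalg.norm(b) + 1e-8)
          return np.argsort(-sims)[:top_k]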
  • Patent number: 11869123
    Abstract: Techniques for rendering two-dimensional vector graphics are described. The techniques include using a central processing unit to generate tessellate triangles along a vector path in which each of the tessellate triangles is represented by a set of vertices. From the tessellate triangles, an index buffer and a compressed vertex buffer are generated. The index buffer includes a vertex index for each vertex of each of the tessellate triangles. The compressed vertex buffer includes a vertex buffer entry for each unique vertex that maps to one or more vertex indices of the index buffer. The index buffer and the compressed vertex buffer are provided to a graphics processing unit to render the vector path with anti-aliasing.
    Type: Grant
    Filed: August 3, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Harish Agarwal, Saurabh Gupta, Himanshu Verma
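    Illustrative sketch (hypothetical, not from the patent): building an index buffer and a compressed (deduplicated) vertex buffer from tessellated triangles, the kind of data handed to a GPU; the tiny two-triangle quad is invented for the example.
      def compress_vertices(triangles):
          """triangles: list of ((x, y), (x, y), (x, y)) vertex triples."""
          vertex_buffer, index_buffer, seen = [], [], {}
          for tri in triangles:
              for v in tri:
                  if v not in seen:
                      seen[v] = len(vertex_buffer)   # each unique vertex stored once
                      vertex_buffer.append(v)
                  index_buffer.append(seen[v])       # triangles reference vertices by index
          return vertex_buffer, index_buffer

      tris = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
      vb, ib = compress_vertices(tris)
      print(len(vb), ib)  # 4 unique vertices, indices [0, 1, 2, 1, 3, 2]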
  • Patent number: 11871145
    Abstract: Embodiments are disclosed for video image interpolation. In some embodiments, video image interpolation includes receiving a pair of input images from a digital video, determining, using a neural network, a plurality of spatially varying kernels each corresponding to a pixel of an output image, convolving a first set of spatially varying kernels with a first input image from the pair of input images and a second set of spatially varying kernels with a second input image from the pair of input images to generate filtered images, and generating the output image by performing kernel normalization on the filtered images.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Simon Niklaus, Oliver Wang, Long Mai
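    Illustrative sketch (hypothetical, not from the patent): applying per-pixel (spatially varying) kernels to two grayscale frames and normalizing by the total kernel mass, the essence of kernel normalization; a real implementation would be vectorized and handle color.
      import numpy as np

      def adaptive_interpolate(frame1, frame2, kernels1, kernels2):
          """frame*: (H, W) grayscale images; kernels*: (H, W, k, k) per-pixel kernels."""
          h, w = frame1.shape
          k = kernels1.shape[-1]
          pad = k // 2
          f1 = np.pad(frame1, pad, mode='edge')
          f2 = np.pad(frame2, pad, mode='edge')
          out = np.zeros((h, w))
          for y in range(h):
              for x in range(w):
                  p1 = f1[y:y + k, x:x + k]
                  p2 = f2[y:y + k, x:x + k]
                  acc = (kernels1[y, x] * p1).sum() + (kernels2[y, x] * p2).sum()
                  norm = kernels1[y, x].sum() + kernels2[y, x].sum()
                  out[y, x] = acc / (norm + 1e-8)   # kernel normalization
          return out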
  • Patent number: 11869173
    Abstract: Various disclosed embodiments are directed to inpainting one or more portions of a target image based on merging (or selecting) one or more portions of a warped image with (or from) one or more portions of an inpainting candidate (e.g., via a learning model). This, among other functionality described herein, resolves the inaccuracies of existing image inpainting technologies.
    Type: Grant
    Filed: December 27, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Yuqian Zhou, Elya Shechtman, Connelly Stuart Barnes, Sohrab Amirghodsi
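    Illustrative sketch (hypothetical, not from the patent): filling a hole with a per-pixel blend of a warped source image and an inpainting candidate, weighted by a confidence map; in the patent this merge/selection is learned, here it is a fixed formula on grayscale arrays.
      import numpy as np

      def merge_inpaint(target, hole_mask, warped, candidate, confidence):
          """hole_mask: boolean (H, W); confidence in [0, 1] favours the warped pixels."""
          blend = confidence * warped + (1.0 - confidence) * candidate
          return np.where(hole_mask, blend, target)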
  • Patent number: 11869021
    Abstract: Segment valuation techniques usable in a digital medium environment are described. A segment valuation system first identifies the attributes that are significant in achieving a desired metric (e.g., conversion) and then values segments based on those significant attributes. Attributes are selected from a trained model based on the significance of those attributes towards achieving the desired metric. A valuation of a segment may then be calculated based on the valuations of these attributes. For example, inclusion of the selected attributes within a segment, and the valuations of those selected attributes, are then used by the segment valuation system to generate data describing a value of the segment towards achieving the metric.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Kourosh Modarresi, Jamie Mark Diner, Elizabeth T. Chin, Aran Nayebi
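    Illustrative sketch (hypothetical, not from the patent): valuing a segment by summing the valuations of the significant attributes it contains; the attribute names and valuation numbers are invented.
      def segment_value(segment_attributes, attribute_valuations, significant):
          """Sum valuations of attributes that are both in the segment and significant."""
          return sum(attribute_valuations[a]
                     for a in segment_attributes
                     if a in significant)

      valuations = {"mobile": 0.4, "returning": 0.3, "email_subscriber": 0.2}
      significant = {"mobile", "returning"}
      print(segment_value({"mobile", "returning", "weekend"}, valuations, significant))  # ~0.7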
  • Patent number: 11869125
    Abstract: Methods, systems, and non-transitory computer readable media are disclosed for generating a composite image comprising objects in positions from two or more different digital images. In one or more embodiments, the disclosed system receives a sequence of images and identifies objects within the sequence of images. In one example, the disclosed system determines a target position for a first object based on detecting user selection of the first object in the target position from a first image. The disclosed system can generate a fixed object image comprising the first object in the target position. The disclosed system can generate preview images comprising the fixed object image with the second object sequencing through a plurality of positions as seen in the sequence of images. Based on a second user selection of a desired preview image, the disclosed system can generate the composite image.
    Type: Grant
    Filed: September 30, 2020
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Ajay Bedi, Ajay Jain, Jingwan Lu, Anugrah Prakash, Prasenjit Mondal, Sachin Soni, Sanjeev Tagra
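    Illustrative sketch (hypothetical, not from the patent): pasting the user-fixed object once and then generating preview composites as the second object cycles through the positions seen in the sequence; the array-based paste is a simplification.
      import numpy as np

      def paste(canvas, obj_pixels, mask, position):
          """Paste masked object pixels onto a copy of the canvas at (row, col)."""
          out = canvas.copy()
          r, c = position
          h, w = mask.shape
          region = out[r:r + h, c:c + w]
          out[r:r + h, c:c + w] = np.where(mask, obj_pixels, region)
          return out

      def previews(background, fixed_obj, fixed_pos, moving_obj, positions):
          """fixed_obj / moving_obj are (pixels, mask) pairs; one preview per position."""
          base = paste(background, *fixed_obj, fixed_pos)
          return [paste(base, *moving_obj, p) for p in positions]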
  • Patent number: 11868733
    Abstract: In some embodiments, a knowledge graph generation system extracts noun-phrases from sentences of a knowledge corpora and determines the relations between the noun-phrases based on a relation classifier that is configured to predict a relation between a pair of entities without restricting the entities to a set of named entities. The knowledge graph generation system further generates a sub-graph for each of the sentences based on the noun-phrases and the determined relations. Nodes or entities of the sub-graph represent the noun-phrases in the sentence and edges represent the relations between the noun-phrases connected by the respective edges. The knowledge graph generation system merges the sub-graphs to generate the knowledge graph for the knowledge corpora.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Somak Aditya, Atanu Sinha
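    Illustrative sketch (hypothetical, not from the patent): building per-sentence sub-graphs from noun-phrase pairs with any relation classifier, then merging them so identical noun-phrases collapse into single nodes; noun-phrase extraction and the classifier itself are assumed to exist elsewhere.
      def sentence_subgraph(noun_phrases, classify_relation):
          """Edges between every noun-phrase pair in one sentence; classify_relation is
          any callable returning a relation label or None."""
          edges = set()
          for i, head in enumerate(noun_phrases):
              for tail in noun_phrases[i + 1:]:
                  rel = classify_relation(head, tail)
                  if rel:
                      edges.add((head, rel, tail))
          return edges

      def merge_subgraphs(subgraphs):
          """Knowledge graph for the corpus: the union of per-sentence sub-graphs."""
          graph = set()
          for edges in subgraphs:
              graph |= edges
          return graph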
  • Patent number: 11869132
    Abstract: Certain aspects and features of this disclosure relate to neural network based 3D object surface mapping. In one example, a first representation of a first surface of a first 3D object and a second representation of a second surface of a second 3D object are produced. A surface mapping function is generated for mapping the first surface to the second surface. The surface mapping function is defined by the representations and by a neural network model configured to map a first 2D representation of the first surface to a second 2D representation of the second surface. One or more features of a first 3D mesh on the first surface can be applied to a second 3D mesh on the second surface using the surface mapping function to produce a modified second surface, which can be rendered through a user interface.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: January 9, 2024
    Assignees: Adobe Inc., UCL Business Ltd.
    Inventors: Vladimir Kim, Noam Aigerman, Niloy J. Mitra, Luca Morreale
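    Illustrative sketch (hypothetical, not from the patent): composing the surface mapping as 3D point on the first surface -> its 2D parameterization -> a learned 2D-to-2D map -> a 3D point on the second surface, with each stage supplied as a callable; the trained networks themselves are not shown.
      def surface_map(uv_of_first, neural_2d_map, point_of_second):
          """Return a function mapping points on surface 1 to points on surface 2."""
          def mapper(p_first):
              uv1 = uv_of_first(p_first)     # first 2D representation
              uv2 = neural_2d_map(uv1)       # neural map between the 2D domains
              return point_of_second(uv2)    # corresponding point on the second surface
          return mapper

      def transfer_features(mesh1_vertices, feature_of, mapper):
          """Apply per-vertex features of the first mesh onto the second surface."""
          return [(mapper(v), feature_of(v)) for v in mesh1_vertices]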
  • Patent number: 11868889
    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
    Type: Grant
    Filed: January 31, 2022
    Date of Patent: January 9, 2024
    Assignee: Adobe Inc.
    Inventors: Zhe Lin, Xiaohui Shen, Mingyang Ling, Jianming Zhang, Jason Wen Yong Kuen
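    Illustrative sketch (hypothetical, not from the patent): conditioning region scores on a target concept by combining each region's feature similarity to the concept's word embedding with the attention-map value at that region; the trained conditional detection network is replaced by a dot product and a gate.
      import numpy as np

      def conditional_scores(region_feats, region_attention, concept_embedding):
          """region_feats: (R, D); region_attention: (R,) attention sampled per region."""
          sims = region_feats @ concept_embedding
          return sims * region_attention  # the concept influences scoring via both inputs

      feats = np.random.rand(5, 8)                  # 5 candidate regions, 8-dim features
      attn = np.array([0.9, 0.2, 0.5, 0.1, 0.7])
      concept = np.random.rand(8)                   # embedding of a (possibly unseen) class name
      print(np.argsort(-conditional_scores(feats, attn, concept))[:3])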