Patents Assigned to Adobe Inc.
-
Patent number: 12248796
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that perform language-guided digital image editing utilizing a cycle-augmentation generative-adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism. For example, the disclosed systems generate an editing description network that generates language embeddings which represent image transformations applied between a digital image and a modified digital image. The disclosed systems can further train a GAN to generate modified images by providing an input image and natural language embeddings generated by the editing description network (representing various modifications to the digital image from a ground truth modified image). In some instances, the disclosed systems also utilize an image request attention approach with the GAN to generate images that include adaptive edits in different spatial locations of the image.
Type: Grant
Filed: July 23, 2021
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Ning Xu, Zhe Lin
-
Patent number: 12249116
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure identify a plurality of candidate concepts in a knowledge graph (KG) that correspond to an image tag of an image; generate an image embedding of the image using a multi-modal encoder; generate a concept embedding for each of the plurality of candidate concepts using the multi-modal encoder; select a matching concept from the plurality of candidate concepts based on the image embedding and the concept embedding; and generate association data between the image and the matching concept.
Type: Grant
Filed: March 23, 2022
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Venkata Naveen Kumar Yadav Marri, Ajinkya Gorakhnath Kale
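The selection step in this abstract — picking, from the candidate KG concepts, the one whose embedding best agrees with the image embedding — is at heart a nearest-neighbor comparison in a shared embedding space. A minimal sketch using cosine similarity; the toy vectors and candidate concepts below are invented for illustration, and the patent's multi-modal encoder and knowledge graph are not modeled:

```python
import numpy as np

def select_matching_concept(image_emb, concept_embs):
    """Pick the candidate concept whose embedding is most similar
    to the image embedding (cosine similarity)."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    concept_embs = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    scores = concept_embs @ image_emb  # cosine similarity per candidate
    return int(np.argmax(scores)), scores

# Toy example: three candidate concepts for the ambiguous tag "jaguar".
image_emb = np.array([0.9, 0.1, 0.2])  # hypothetical image embedding
concept_embs = np.array([
    [0.8, 0.2, 0.1],   # "jaguar (animal)"
    [0.1, 0.9, 0.3],   # "Jaguar (car brand)"
    [0.2, 0.1, 0.9],   # "Jaguar (aircraft)"
])
best, scores = select_matching_concept(image_emb, concept_embs)
```

Here the first candidate wins, so the association data would link the image to "jaguar (animal)".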
-
Patent number: 12248056
Abstract: In implementations of systems for estimating three-dimensional trajectories of physical objects, a computing device implements a three-dimensional trajectory system to receive radar data describing millimeter wavelength radio waves directed within a physical environment using beamforming and reflected from physical objects in the physical environment. The three-dimensional trajectory system generates a cloud of three-dimensional points based on the radar data; each of the three-dimensional points corresponds to a reflected millimeter wavelength radio wave within a sliding temporal window. The three-dimensional points are grouped into at least one group based on Euclidean distances between the three-dimensional points within the cloud. The three-dimensional trajectory system generates an indication of a three-dimensional trajectory of a physical object corresponding to the at least one group using a Kalman filter to track a position and a velocity of a centroid of the at least one group in three dimensions.
Type: Grant
Filed: March 10, 2023
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Jennifer Anne Healey, Haoliang Wang, Ding Zhang
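The final step of the abstract — a Kalman filter tracking the position and velocity of a group centroid in three dimensions — can be sketched with a standard constant-velocity filter. This is a generic textbook filter run over made-up centroid measurements, not Adobe's implementation; the radar clustering stage is not shown:

```python
import numpy as np

def make_constant_velocity_kf(dt=0.1):
    """State: [x, y, z, vx, vy, vz]; only the centroid position is observed."""
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measurement picks out position
    return F, H

def kf_step(x, P, z, F, H, Q, R):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centroid z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

F, H = make_constant_velocity_kf(dt=0.1)
Q = 1e-4 * np.eye(6)   # process noise (assumed)
R = 1e-2 * np.eye(3)   # measurement noise (assumed)
x, P = np.zeros(6), np.eye(6)
# Feed centroids moving at 1 m/s along +x; the filter converges on the motion.
for t in range(50):
    z = np.array([0.1 * t, 0.0, 0.0])
    x, P = kf_step(x, P, z, F, H, Q, R)
```

After 50 steps the state estimate carries both the centroid's position (near 4.9 m) and its velocity (near 1 m/s along x), which is exactly the trajectory information the abstract describes.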
-
Patent number: 12249051
Abstract: Techniques are disclosed for filling or otherwise replacing a target region of a primary image with a corresponding region of an auxiliary image. The filling or replacing can be done with an overlay (no subtractive process need be run on the primary image). Because the primary and auxiliary images may not be aligned, both geometric and photometric transformations are applied to the primary and/or auxiliary images. For instance, a geometric transformation of the auxiliary image is performed, to better align features of the auxiliary image with corresponding features of the primary image. Also, a photometric transformation of the auxiliary image is performed, to better match color of one or more pixels of the auxiliary image with color of corresponding one or more pixels of the primary image. The corresponding region of the transformed auxiliary image is then copied and overlaid on the target region of the primary image.
Type: Grant
Filed: February 17, 2022
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Connelly Barnes, Sohrab Amirghodsi, Elya Shechtman
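The photometric transformation described here — matching the color of auxiliary pixels to the corresponding primary pixels — can be approximated, at its simplest, by a per-channel gain and offset that aligns channel means and standard deviations. This is a hedged sketch on synthetic patches; the abstract does not specify the patent's actual transformation model:

```python
import numpy as np

def photometric_match(aux, primary):
    """Shift and scale each channel of `aux` so its mean and standard
    deviation match those of `primary`. A simple global gain-and-offset
    model, used here as a stand-in for the patented transformation."""
    out = np.empty_like(aux, dtype=np.float64)
    for c in range(aux.shape[-1]):
        a = aux[..., c].astype(np.float64)
        p = primary[..., c].astype(np.float64)
        gain = p.std() / (a.std() + 1e-8)
        out[..., c] = (a - a.mean()) * gain + p.mean()
    return np.clip(out, 0, 255)

rng = np.random.default_rng(0)
aux = rng.uniform(0, 100, size=(8, 8, 3))        # darker auxiliary patch
primary = rng.uniform(100, 200, size=(8, 8, 3))  # brighter primary patch
matched = photometric_match(aux, primary)
```

After matching, the auxiliary patch's per-channel statistics agree with the primary patch, so an overlay of the matched region blends in without a visible brightness seam.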
-
Patent number: 12248949
Abstract: Various disclosed embodiments are directed to using one or more algorithms or models to select a suitable or optimal variation, among multiple variations, of a given content item based on feedback. Such feedback guides the algorithm or model to arrive at a suitable variation result such that the variation result is produced as the output for consumption by users. Further, various embodiments resolve tedious manual user input requirements and reduce computing resource consumption, among other things.
Type: Grant
Filed: November 4, 2021
Date of Patent: March 11, 2025
Assignee: Adobe Inc.
Inventors: Trisha Mittal, Viswanathan Swaminathan, Ritwik Sinha, Saayan Mitra, David Arbour, Somdeb Sarkhel
-
Publication number: 20250078386
Abstract: Retexturing items depicted in digital image data is described. An image retexturing system receives image data that depicts an item featuring a pattern. The image retexturing system identifies coarse correspondences between regions in the image data and a two-dimensional image of the pattern. Using the coarse correspondences, the image retexturing system establishes, for each pixel in the image data depicting the item, a pair of coordinates for a surface of the item featuring the pattern. The coordinate pairs are then used to generate a mesh that represents the surface of the item. The image retexturing system then applies a new texture to a surface of the item by mapping the new texture to a surface of the mesh. A shading layer and item mask are generated for the image data, which are combined with the retextured mask to generate a synthesized image that depicts the retextured item.
Type: Application
Filed: August 28, 2023
Publication date: March 6, 2025
Applicant: Adobe Inc.
Inventor: Yangtuanfeng Wang
-
Publication number: 20250078350
Abstract: Embodiments are disclosed for reflowing documents to display semantically related content. The method may include receiving a request to view a document that includes body text and one or more images. A trimodal document relationship model identifies relationships between segments of the body text and the one or more images. A linearized view of the document is generated based on the relationships and the linearized view is caused to be displayed on a user device.
Type: Application
Filed: September 1, 2023
Publication date: March 6, 2025
Applicant: Adobe Inc.
Inventors: Christopher Tensmeyer, Fuxiao Liu, Hao Tan, Ani Nenkova
-
Publication number: 20250078408
Abstract: Implementations of systems and methods for determining viewpoints suitable for performing one or more digital operations on a three-dimensional object are disclosed. Accordingly, a set of candidate viewpoints is established. The set of candidate viewpoints provides views of an outer surface of a three-dimensional object, and those views provide overlapping surface data. A subset of activated viewpoints is determined from the set of candidate viewpoints, the subset of activated viewpoints providing less of the overlapping surface data. The subset of activated viewpoints is used to perform one or more digital operations on the three-dimensional object.
Type: Application
Filed: August 29, 2023
Publication date: March 6, 2025
Applicant: Adobe Inc.
Inventors: Valentin Mathieu Deschaintre, Vladimir Kim, Thibault Groueix, Julien Philip
-
Publication number: 20250078220
Abstract: In implementations of techniques for generating salient regions based on multi-resolution partitioning, a computing device implements a salient object system to receive a digital image including a salient object. The salient object system generates a first mask for the salient object by partitioning the digital image into salient and non-salient regions. The salient object system also generates a second mask for the salient object that has a resolution that is different than the first mask by partitioning a resampled version of the digital image into salient and non-salient regions. Based on the first mask and the second mask, the salient object system generates an indication of a salient region of the digital image using a machine learning model. The salient object system then displays the indication of the salient region in a user interface.
Type: Application
Filed: August 30, 2023
Publication date: March 6, 2025
Applicant: Adobe Inc.
Inventors: Sriram Ravindran, Debraj Debashish Basu
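The two-mask scheme in this abstract — one mask at native resolution, one from a resampled copy, then a fused salient region — can be illustrated with a toy brightness-threshold "saliency" partition. Everything below (the threshold rule, the 2x average pooling, the agreement-based fusion) is an invented stand-in for the machine learning model the publication describes:

```python
import numpy as np

def threshold_mask(img, thresh=0.5):
    """Toy saliency partition: bright pixels count as 'salient'."""
    return (img > thresh).astype(np.float64)

def combine_multires_masks(img):
    # First mask at full resolution.
    m1 = threshold_mask(img)
    # Second mask from a 2x-downsampled copy (average pooling),
    # upsampled back to full resolution with nearest-neighbor repetition.
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    m2 = np.kron(threshold_mask(small), np.ones((2, 2)))
    # Fuse: a pixel is salient only if both resolutions agree.
    return (m1 + m2) / 2 >= 1.0

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0          # a bright 4x4 "salient object"
region = combine_multires_masks(img)
```

Requiring agreement between resolutions suppresses isolated single-pixel responses that survive only at one scale, which is one plausible motivation for combining masks of different resolutions.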
-
Patent number: 12243121
Abstract: In implementations of systems for generating and propagating personal masking edits, a computing device implements a mask system to detect a face of a person depicted in a digital image displayed in a user interface of an application for editing digital content. The mask system determines an identifier for the person based on an identifier for the face. Edit data is received describing properties of an editing operation and a type of mask used to modify a particular portion of the person depicted in the digital image. The mask system edits an additional digital image identified based on the identifier of the person using the type of mask and the properties of the editing operation to modify the particular portion of the person as depicted in the additional digital image.
Type: Grant
Filed: August 18, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Subham Gupta, Arnab Sil, Anuradha
-
Patent number: 12243132
Abstract: Embodiments are disclosed for interlacing vector objects. A method of interlacing vector objects may include receiving a selection of a first vector object of an image. The method may further include detecting a second vector object of the image, wherein the second vector object is different than the first vector object. The method may further include determining a first depth position for the first vector object and a second depth position for the second vector object. The method may further include interlacing the second vector object and the first vector object, wherein interlacing comprises drawing the first vector object based on the first depth position and the second vector object based on the second depth position.
Type: Grant
Filed: April 13, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Praveen Kumar Dhanuka, Harish Kumar, Arushi Jain
-
Patent number: 12242820
Abstract: Techniques for training a language model for code-switching content are disclosed. Such techniques include, in some embodiments, generating a dataset, which includes identifying one or more portions within textual content in a first language, the identified one or more portions each including one or more of offensive content or non-offensive content; translating the identified one or more portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
Type: Grant
Filed: February 17, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Cesa Salaam, Seunghyun Yoon, Trung Huu Bui, Franck Dernoncourt
-
Patent number: 12243288
Abstract: Certain aspects and features of this disclosure relate to chromatic undertone detection. For example, a method involves receiving an image file and producing, using a color warmth classifier, an image warmth profile from the image file. The method further involves applying a surface-image-trained machine-learning model to the image warmth profile to produce an inferred undertone value for the image file. The method further involves comparing, using a recommendation module and the inferred undertone value, an image color value to a plurality of pre-existing color values corresponding to a database of production images, and causing, in response to the comparing, interactive content including at least one production image selection from the database of production images to be provided on a recipient device.
Type: Grant
Filed: March 25, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Michele Saad, Ronald Oribio, Robert W. Burke, Jr., Irgelkha Mejia
-
Patent number: 12243135
Abstract: Techniques for vector object blending are described to generate a transformed vector object based on a first vector object and a second vector object. A transformation module, for instance, receives a first vector object that includes a plurality of first paths and a second vector object that includes a plurality of second paths. The transformation module computes morphing costs based on a correspondence within candidate path pairs that include one of the first paths and one of the second paths. Based on the morphing costs, the transformation module generates a low-cost mapping of paths between the first paths and the second paths. To generate the transformed vector object, the transformation module adjusts one or more properties of at least one of the first paths based on the mapping, such as geometry, appearance, and z-order.
Type: Grant
Filed: November 4, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Tarun Beri, Matthew David Fisher
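The "low-cost mapping of paths" step — assigning each path of the first vector object to a path of the second so that total morphing cost is minimized — is an instance of the assignment problem. A brute-force sketch over a made-up 3x3 cost matrix; a production system would use the Hungarian algorithm or similar, and the patent's actual morphing-cost function is not modeled here:

```python
from itertools import permutations

def lowest_cost_mapping(cost):
    """Brute-force minimum-cost one-to-one mapping between first paths
    (rows) and second paths (columns). Exponential, but fine for the
    handful of paths in a small vector object."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Hypothetical morphing costs between 3 paths of object A and 3 of object B.
cost = [
    [1.0, 9.0, 7.0],
    [8.0, 2.0, 6.0],
    [5.0, 4.0, 1.5],
]
mapping, total = lowest_cost_mapping(cost)
```

With this matrix the diagonal pairing is cheapest, so each path of the first object would morph toward its same-index path in the second object.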
-
Patent number: 12243146
Abstract: Animated display characteristic control for digital images is described. In an implementation, a control is animated in a user interface as progressing through a plurality of values of a display characteristic. A digital image is displayed in the user interface as progressing through the plurality of values of the display characteristic as specified by the animating of the control. An input is received via the user interface and a particular value of the plurality of values is detected as indicated by the animating of the control. The digital image is displayed as having the particular value of the display characteristic as set by the input.
Type: Grant
Filed: December 12, 2022
Date of Patent: March 4, 2025
Assignee: Adobe Inc.
Inventors: Christopher James Gammon, Brandon Kroupa
-
Publication number: 20250068829
Abstract: Techniques for creation and personalization of composite fonts are described. In one embodiment, a method includes receiving an input font sequence comprising font embeddings for a first font and sequence information for the first font, the font embeddings comprising numerical vectors, predicting a second font based on the font embeddings of the first font and the sequence information for the first font using a transformer-based machine learning model, selecting a character from the second font, and adding the character of the second font to a character of the first font to generate a set of characters for a composite font. Other embodiments are described and claimed.
Type: Application
Filed: August 25, 2023
Publication date: February 27, 2025
Applicant: Adobe Inc.
Inventors: Oliver Brdiczka, Nipun Jindal
-
Patent number: 12236975
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
Type: Grant
Filed: November 15, 2021
Date of Patent: February 25, 2025
Assignee: Adobe Inc.
Inventors: Trung Bui, Subhadeep Dey, Seunghyun Yoon
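The neural attention mechanism described here — blending audio features with a textual feature vector — can be sketched as scaled dot-product attention in which the text embedding queries a sequence of audio frames. The dimensions and random features below are purely illustrative; the patent's trained encoders and classifier head are not modeled:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def blend_modalities(audio_vecs, text_vec):
    """Attend over per-frame audio features using the text vector as the
    query, then concatenate the attended audio summary with the text
    features to form a blended (hidden) representation."""
    scores = audio_vecs @ text_vec / np.sqrt(text_vec.size)  # scaled dot product
    weights = softmax(scores)                # attention distribution over frames
    attended_audio = weights @ audio_vecs    # weighted audio summary
    return np.concatenate([attended_audio, text_vec]), weights

rng = np.random.default_rng(1)
audio_vecs = rng.normal(size=(5, 4))  # 5 audio frames, 4-dim features (toy)
text_vec = rng.normal(size=4)         # utterance-level text embedding (toy)
hidden, weights = blend_modalities(audio_vecs, text_vec)
```

In a full system, the blended vector would feed a classifier that produces the probability distribution over candidate emotions mentioned in the abstract.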
-
Patent number: 12238451
Abstract: Embodiments are disclosed for predicting, using neural networks, editing operations for application to a video sequence based on processing conversational messages by a video editing system. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including a video sequence and text sentences, the text sentences describing a modification to the video sequence; mapping, by a first neural network, content of the text sentences describing the modification to the video sequence to a candidate editing operation; processing, by a second neural network, the video sequence to predict parameter values for the candidate editing operation; and generating a modified video sequence by applying the candidate editing operation with the predicted parameter values to the video sequence.
Type: Grant
Filed: November 14, 2022
Date of Patent: February 25, 2025
Assignee: Adobe Inc.
Inventors: Uttaran Bhattacharya, Gang Wu, Viswanathan Swaminathan, Stefano Petrangeli
-
Patent number: 12236610
Abstract: This disclosure describes one or more implementations of an alpha matting system that utilizes a deep learning model to generate alpha mattes for digital images utilizing an alpha-range classifier function. More specifically, in various implementations, the alpha matting system builds and utilizes an object mask neural network having a decoder that includes an alpha-range classifier to determine classification probabilities for pixels of a digital image with respect to multiple alpha-range classifications. In addition, the alpha matting system can utilize a refinement model to generate the alpha matte from the pixel classification probabilities with respect to the multiple alpha-range classifications.
Type: Grant
Filed: October 13, 2021
Date of Patent: February 25, 2025
Assignee: Adobe Inc.
Inventors: Brian Price, Yutong Dai, He Zhang
-
Patent number: 12236640
Abstract: Systems and methods for image dense field based view calibration are provided. In one embodiment, an input image is applied to a dense field machine learning model that generates a vertical vector dense field (VVF) and a latitude dense field (LDF) from the input image. The VVF comprises a vertical vector of a projected vanishing point direction for each of the pixels of the input image. The LDF comprises a projected latitude value for the pixels of the input image. A dense field map for the input image comprising the VVF and the LDF can be directly or indirectly used for a variety of image processing manipulations. The VVF and LDF can be optionally used to derive traditional camera calibration parameters from uncontrolled images that have undergone undocumented or unknown manipulations.
Type: Grant
Filed: March 28, 2022
Date of Patent: February 25, 2025
Assignee: Adobe Inc.
Inventors: Jianming Zhang, Linyi Jin, Kevin Matzen, Oliver Wang, Yannick Hold-Geoffroy