Patents by Inventor John Philip Collomosse
John Philip Collomosse has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250104288
Abstract: Techniques for latent space based steganographic image generation are described. A processing device, for instance, receives a digital image and a secret that includes a bit string. A pretrained encoder of an autoencoder generates an embedding of the digital image that includes latent code. A secret encoder is trained and utilized to generate an embedding of the secret to act as a latent offset to the latent code. The processing device leverages a pretrained decoder of the autoencoder to generate a steganographic image based on the embedding of the secret and the embedding of the digital image. The steganographic image includes the secret and is visually indiscernible from the digital image. Further, the processing device is configured to recover the secret from the steganographic image, such as by training and leveraging a secret decoder to extract the secret.
Type: Application
Filed: September 21, 2023
Publication date: March 27, 2025
Applicant: Adobe Inc.
Inventors: Shruti Agarwal, John Philip Collomosse
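The latent-offset idea above can be illustrated with a toy numeric sketch. This is not the patented model: the "latent code" is a short list of floats, and `encode_secret`, `embed`, and `decode_secret` are invented stand-ins for the trained secret encoder/decoder. For simplicity the toy decoder compares against the original latent, whereas the patented scheme trains a secret decoder that needs only the steganographic image.

```python
# Toy latent-offset steganography sketch (all names and scales invented).
EPS = 0.01  # offset magnitude; kept small so the decoded image barely changes

def encode_secret(bits):
    """Map each secret bit to a signed latent offset (hypothetical encoder)."""
    return [EPS if b else -EPS for b in bits]

def embed(latent, bits):
    """Add the secret's embedding to the image's latent code as an offset."""
    offset = encode_secret(bits)
    return [z + o for z, o in zip(latent, offset)]

def decode_secret(stego_latent, latent):
    """Recover bits from the sign of the latent offset (hypothetical decoder)."""
    return [1 if s - z > 0 else 0 for s, z in zip(stego_latent, latent)]

latent = [0.5, -1.2, 0.3, 0.9]   # stand-in for the autoencoder's latent code
bits = [1, 0, 1, 1]              # the secret bit string
stego = embed(latent, bits)
recovered = decode_secret(stego, latent)
```

The sketch shows why the steganographic image stays visually indiscernible: the secret only ever perturbs the latent code by a small, bounded offset before decoding.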
-
Publication number: 20250086849
Abstract: Embodiments of the present disclosure include obtaining a text prompt describing an element, layout information indicating a target region for the element, and a precision level corresponding to the element. Some embodiments generate a text feature pyramid based on the text prompt, the layout information, and the precision level, wherein the text feature pyramid comprises a plurality of text feature maps at a plurality of scales, respectively. Then, an image is generated based on the text feature pyramid. In some cases, the image includes an object corresponding to the element of the text prompt at the target region. Additionally, a shape of the object corresponds to a shape of the target region based on the precision level.
Type: Application
Filed: September 8, 2023
Publication date: March 13, 2025
Inventors: Yu Zeng, Zhe Lin, Jianming Zhang, Qing Liu, Jason Wen Yong Kuen, John Philip Collomosse
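The "feature maps at a plurality of scales" structure can be sketched without any learned components: the same layout-conditioned map rendered at successively coarser resolutions. The map contents, the pooling choice, and the scales below are invented for illustration; the filing's maps carry learned text features rather than plain scalars.

```python
# Toy "text feature pyramid": one map average-pooled to several scales.
def downsample(fmap):
    """2x average pooling of a square grid."""
    n = len(fmap) // 2
    return [[(fmap[2 * r][2 * c] + fmap[2 * r][2 * c + 1] +
              fmap[2 * r + 1][2 * c] + fmap[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(n)] for r in range(n)]

def pyramid(fmap, levels=3):
    """Stack the map at `levels` successively coarser scales."""
    maps = [fmap]
    for _ in range(levels - 1):
        maps.append(downsample(maps[-1]))
    return maps

# element active in the lower-left quadrant of a 4x4 layout
base = [[1.0 if r >= 2 and c < 2 else 0.0 for c in range(4)] for r in range(4)]
pyr = pyramid(base, levels=3)  # 4x4, 2x2, and 1x1 maps
```

At coarse scales the target region blurs into its surroundings, which is one plausible way a precision level could trade exact region shape against looser placement.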
-
Patent number: 12198224
Abstract: Systems and methods for image generation are described. Embodiments of the present disclosure receive a text phrase that describes a target image to be generated; generate text features based on the text phrase; retrieve a search image based on the text phrase; and generate the target image using an image generation network based on the text features and the search image.
Type: Grant
Filed: February 15, 2022
Date of Patent: January 14, 2025
Assignee: ADOBE INC.
Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, John Philip Collomosse
-
Publication number: 20240419750
Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (using machine learning automatically and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is configured to model layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
Type: Application
Filed: September 2, 2024
Publication date: December 19, 2024
Applicant: Adobe Inc.
Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
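A minimal sketch of the two-pathway idea, with invented hand-crafted features in place of the learned encoders: one pathway summarizes element geometry, the other summarizes the element tree, and the two vectors are fused into a single layout representation.

```python
# Toy two-pathway layout encoding (features and fusion rule are invented).
def spatial_pathway(boxes):
    """Encode element geometry: mean centre and mean area of (x, y, w, h) boxes."""
    cx = sum(x + w / 2 for x, y, w, h in boxes) / len(boxes)
    cy = sum(y + h / 2 for x, y, w, h in boxes) / len(boxes)
    area = sum(w * h for x, y, w, h in boxes) / len(boxes)
    return [cx, cy, area]

def structural_pathway(tree):
    """Encode element structure: node count and maximum nesting depth."""
    def walk(node, depth):
        count, deepest = 1, depth
        for child in node.get("children", []):
            c, d = walk(child, depth + 1)
            count += c
            deepest = max(deepest, d)
        return count, deepest
    n, d = walk(tree, 1)
    return [float(n), float(d)]

def fuse(spatial, structural):
    """Fuse the two pathways by concatenation (one simple choice)."""
    return spatial + structural

boxes = [(0, 0, 100, 20), (0, 30, 100, 60)]  # header box + body box
tree = {"children": [{"children": []}, {"children": [{"children": []}]}]}
rep = fuse(spatial_pathway(boxes), structural_pathway(tree))
```

The point of keeping the pathways separate is that two documents can then match on spatial arrangement, on structure, or on both, depending on how the fused representation is compared at search time.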
-
Patent number: 12147495
Abstract: A visual search system facilitates retrieval of provenance information using a machine learning model to generate content fingerprints that are invariant to benign transformations while being sensitive to manipulations. The machine learning model is trained on a training image dataset that includes original images, benign transformed variants of the original images, and manipulated variants of the original images. A loss function is used to train the machine learning model to minimize distances in an embedding space between benign transformed variants and their corresponding original images and increase distances between the manipulated variants and their corresponding original images.
Type: Grant
Filed: January 5, 2021
Date of Patent: November 19, 2024
Assignee: ADOBE INC.
Inventors: Viswanathan Swaminathan, John Philip Collomosse, Eric Nguyen
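The pull/push training objective described above can be sketched as a simple margin loss. The 2-D embeddings, the margin value, and the hinge form below are illustrative assumptions; the filing states only that distances to benign variants are minimized and distances to manipulated variants increased.

```python
# Toy fingerprint-training loss: pull benign variants in, push manipulations out.
def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fingerprint_loss(original, benign, manipulated, margin=1.0):
    """Reward small distance to the benign variant; penalise (hinge) any
    manipulated variant that embeds closer than `margin` to the original."""
    pull = dist(original, benign)
    push = max(0.0, margin - dist(original, manipulated))
    return pull + push

orig = [0.0, 0.0]
benign = [0.1, 0.0]  # e.g. a re-compressed copy: should embed nearby
manip = [2.0, 0.0]   # e.g. a spliced copy: should embed far away
loss = fingerprint_loss(orig, benign, manip)
```

Trained this way, nearest-neighbour search over fingerprints retrieves the original (and its provenance record) for benign copies while keeping manipulated copies distinguishable.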
-
Patent number: 12105767
Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (using machine learning automatically and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is configured to model layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
Type: Grant
Filed: May 3, 2022
Date of Patent: October 1, 2024
Assignee: Adobe Inc.
Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
-
Publication number: 20240169623
Abstract: Systems and methods for multi-modal image generation are provided. One or more aspects of the systems and methods include obtaining a text prompt and layout information indicating a target location for an element of the text prompt within an image to be generated and computing a text feature map including a plurality of values corresponding to the element of the text prompt at pixel locations corresponding to the target location. Then the image is generated based on the text feature map using a diffusion model. The generated image includes the element of the text prompt at the target location.
Type: Application
Filed: November 22, 2022
Publication date: May 23, 2024
Inventors: Yu Zeng, Zhe Lin, Jianming Zhang, Qing Liu, Jason Wen Yong Kuen, John Philip Collomosse
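The text feature map construction can be sketched with a plain scalar grid instead of learned text features (the grid size, coordinate convention, and constant value below are invented): pixels inside the target region carry the element's feature value, everything else is zero.

```python
# Toy text feature map: element features placed at the target pixel locations.
def text_feature_map(h, w, target, value=1.0):
    """Build an h x w grid; `target` = (top, left, bottom, right) in pixels.
    In the real system `value` would be a learned feature for the element."""
    top, left, bottom, right = target
    return [[value if top <= r < bottom and left <= c < right else 0.0
             for c in range(w)] for r in range(h)]

# "a dog" should appear in the lower-left quadrant of an 8x8 image
fmap = text_feature_map(8, 8, (4, 0, 8, 4))
```

Conditioning a diffusion model on such a map ties the element's features to specific pixel locations, which is what steers the generated object to the target region.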
-
Publication number: 20230410505
Abstract: Techniques for video manipulation detection are described to detect one or more manipulations present in digital content such as a digital video. A detection system, for instance, receives a frame of a digital video that depicts at least one entity. Coordinates of the frame that correspond to a gaze location of the entity are determined, and the detection system determines whether the coordinates correspond to a portion of an object depicted in the frame to calculate a gaze confidence score. A manipulation score is generated that indicates whether the digital video has been manipulated based on the gaze confidence score. In some examples, the manipulation score is based on at least one additional confidence score.
Type: Application
Filed: June 21, 2022
Publication date: December 21, 2023
Applicant: Adobe Inc.
Inventors: Ritwik Sinha, Viswanathan Swaminathan, Trisha Mittal, John Philip Collomosse
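One plausible way to combine the gaze confidence with additional confidence scores is a weighted average mapped to a manipulation score; the weights, the averaging rule, and the inversion below are all assumptions, since the filing only states that the score is based on the gaze confidence and optionally other confidences.

```python
# Toy manipulation score from per-cue confidences (combination rule invented).
def manipulation_score(gaze_conf, extra_confs=(), weights=None):
    """Average the cue confidences that the frame is consistent; a low
    combined confidence maps to a high manipulation score."""
    confs = [gaze_conf, *extra_confs]
    weights = weights or [1.0 / len(confs)] * len(confs)
    combined = sum(c * w for c, w in zip(confs, weights))
    return 1.0 - combined

# gaze lands on a depicted object: high confidence the frame is genuine
low = manipulation_score(0.9, extra_confs=[0.8])
# gaze points at empty space: low confidence, suggesting manipulation
high = manipulation_score(0.2, extra_confs=[0.3])
```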
-
Publication number: 20230359682
Abstract: Digital content layout encoding techniques for search are described. In these techniques, a layout representation is generated (using machine learning automatically and without user intervention) that describes a layout of elements included within the digital content. In an implementation, the layout representation includes a description of both spatial and structural aspects of the elements in relation to each other. To do so, a two-pathway pipeline is configured to model layout from both spatial and structural aspects using a spatial pathway and a structural pathway, respectively. In one example, this is also performed through use of multi-level encoding and fusion to generate a layout representation.
Type: Application
Filed: May 3, 2022
Publication date: November 9, 2023
Applicant: Adobe Inc.
Inventors: Zhaowen Wang, Yue Bai, John Philip Collomosse
-
Publication number: 20230260164
Abstract: Systems and methods for image generation are described. Embodiments of the present disclosure receive a text phrase that describes a target image to be generated; generate text features based on the text phrase; retrieve a search image based on the text phrase; and generate the target image using an image generation network based on the text features and the search image.
Type: Application
Filed: February 15, 2022
Publication date: August 17, 2023
Inventors: Xin Yuan, Zhe Lin, Jason Wen Yong Kuen, Jianming Zhang, John Philip Collomosse
-
Publication number: 20220215205
Abstract: A visual search system facilitates retrieval of provenance information using a machine learning model to generate content fingerprints that are invariant to benign transformations while being sensitive to manipulations. The machine learning model is trained on a training image dataset that includes original images, benign transformed variants of the original images, and manipulated variants of the original images. A loss function is used to train the machine learning model to minimize distances in an embedding space between benign transformed variants and their corresponding original images and increase distances between the manipulated variants and their corresponding original images.
Type: Application
Filed: January 5, 2021
Publication date: July 7, 2022
Inventors: Viswanathan Swaminathan, John Philip Collomosse, Eric Nguyen
-
Patent number: 11055828
Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames and each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
Type: Grant
Filed: May 9, 2019
Date of Patent: July 6, 2021
Assignee: ADOBE INC.
Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
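The consistency component of the cost function can be sketched on scalar stand-ins for frames. This is a deliberate simplification: the real loss operates on full frames warped by estimated optical flow and is paired with a flow generation term, whereas the toy below uses single pixel values and an identity warp.

```python
# Toy temporal-consistency loss over a sequence of inpainted "frames"
# (scalar pixel values, identity warp between consecutive frames).
def consistency_loss(frames):
    """Penalise disagreement between corresponding pixels of consecutive
    output frames; flicker in the inpainted region raises the loss."""
    return sum((a - b) ** 2 for a, b in zip(frames, frames[1:]))

stable = [0.5, 0.5, 0.5]   # temporally consistent inpainting
flicker = [0.5, 0.9, 0.4]  # flickering inpainting is penalised
```

Minimising such a term during the per-video optimisation is what keeps the hallucinated content from changing frame to frame.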
-
Publication number: 20200357099
Abstract: Techniques of inpainting video content include training a neural network to perform an inpainting operation on a video using only content from that video. For example, upon receiving video content including a sequence of initial frames, a computer generates a sequence of inputs corresponding to at least some of the sequence of initial frames and each input including, for example, a uniform noise map. The computer then generates a convolutional neural network (CNN) using the sequence of inputs as the initial layer. The parameters of the CNN are adjusted according to a cost function, which has components including a flow generation loss component and a consistency loss component. The CNN then outputs, on a final layer, estimated image values in a sequence of final frames.
Type: Application
Filed: May 9, 2019
Publication date: November 12, 2020
Inventors: Mai Long, Zhaowen Wang, Ning Xu, John Philip Collomosse, Haotian Zhang, Hailin Jin
-
Patent number: 10733228
Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
Type: Grant
Filed: June 5, 2019
Date of Patent: August 4, 2020
Assignee: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse
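The independent-recognition idea can be sketched as two separate distance scores that are summed at ranking time. The 2-D "object" and "style" embeddings below are hand-made stand-ins for the network's learned representations, and the additive combination is one simple assumption.

```python
# Toy sketch+style retrieval: score object match and style match independently.
def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def rank(sketch_obj, style_ref, images):
    """images: list of (name, object_embedding, style_embedding).
    Each image is scored on object match and style match separately,
    then the two distances are summed to rank results."""
    scored = [(dist(sketch_obj, o) + dist(style_ref, s), name)
              for name, o, s in images]
    return [name for _, name in sorted(scored)]

images = [
    ("watercolor_cat", [1.0, 0.0], [0.0, 1.0]),
    ("photo_cat",      [1.0, 0.0], [1.0, 0.0]),
    ("watercolor_dog", [0.0, 1.0], [0.0, 1.0]),
]
# sketch of a cat + a watercolor style reference (of any subject)
results = rank([1.0, 0.0], [0.0, 1.0], images)
```

Because the style reference contributes only through the style distance, a watercolor image of a dog can still specify the desired styling for a cat search, exactly the decoupling the abstract describes.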
-
Publication number: 20200242822
Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion to be filled of a digital image, indicating a style of an area surrounding the portion. The digital image creation system also generates content data for the portion indicating content of the digital image of the area surrounding the portion. The digital image creation system selects a source digital image based on similarity of both style and content of the source digital image at a location of the patch to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled of the digital image.
Type: Application
Filed: April 6, 2020
Publication date: July 30, 2020
Applicant: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse, Brian Lynn Price
-
Patent number: 10699453
Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion to be filled of a digital image, indicating a style of an area surrounding the portion. The digital image creation system also generates content data for the portion indicating content of the digital image of the area surrounding the portion. The digital image creation system selects a source digital image based on similarity of both style and content of the source digital image at a location of the patch to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled of the digital image.
Type: Grant
Filed: August 17, 2017
Date of Patent: June 30, 2020
Assignee: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse, Brian L. Price
-
Patent number: 10430455
Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object (e.g., with a stylus) to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
Type: Grant
Filed: June 9, 2017
Date of Patent: October 1, 2019
Assignee: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse
-
Publication number: 20190286647
Abstract: Sketch and style based image retrieval in a digital medium environment is described. Initially, a user sketches an object to be searched in connection with an image search. Styled images are selected to indicate a desired style of images to be returned by the search. A search request is generated based on the sketch and selected images. Responsive to the request, an image repository is searched to identify images having the desired object and styling. To search the image repository, a neural network is utilized that is capable of recognizing the desired object in images based on visual characteristics of the sketch and independently recognizing the desired styling in images based on visual characteristics of the selected images. This independent recognition allows desired styling to be specified by selecting images having the style but not the desired object. Images having the desired object and styling are returned.
Type: Application
Filed: June 5, 2019
Publication date: September 19, 2019
Applicant: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse
-
Patent number: 10268928
Abstract: A combined structure and style network is described. Initially, a large set of training images, having a variety of different styles, is obtained. Each of these training images is associated with one of multiple different predetermined style categories indicating the image's style and one of multiple different predetermined semantic categories indicating objects depicted in the image. Groups of these images are formed, such that each group includes an anchor image having one of the styles, a positive-style example image having the same style as the anchor image, and a negative-style example image having a different style. Based on those groups, an image style network is generated to identify images having desired styling by recognizing visual characteristics of the different styles. The image style network is further combined, according to a unifying training technique, with an image structure network configured to recognize desired objects in images irrespective of image style.Type: Grant
Filed: June 7, 2017
Date of Patent: April 23, 2019
Assignee: Adobe Inc.
Inventors: Hailin Jin, John Philip Collomosse
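The anchor/positive/negative grouping above is the standard setup for a triplet loss, sketched below on 2-D stand-in embeddings with an invented margin; the filing describes the grouping, not this exact formula.

```python
# Toy triplet loss over anchor / positive-style / negative-style embeddings.
def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Zero once the same-style image is closer to the anchor than the
    different-style image by at least `margin`; positive otherwise."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

anchor = [0.0, 0.0]    # e.g. a watercolor image
positive = [0.1, 0.0]  # another watercolor (same style category)
negative = [2.0, 0.0]  # a photograph (different style category)
loss = triplet_loss(anchor, positive, negative)
```

Minimising this over many groups pushes images of the same style category together in the embedding space, which is what lets the style network later recognise desired styling regardless of the objects depicted.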
-
Publication number: 20190057527
Abstract: Techniques and systems are described for style-aware patching of a digital image in a digital medium environment. For example, a digital image creation system generates style data for a portion to be filled of a digital image, indicating a style of an area surrounding the portion. The digital image creation system also generates content data for the portion indicating content of the digital image of the area surrounding the portion. The digital image creation system selects a source digital image based on similarity of both style and content of the source digital image at a location of the patch to the style data and content data. The digital image creation system transforms the style of the source digital image based on the style data and generates the patch from the source digital image in the transformed style for incorporation into the portion to be filled of the digital image.
Type: Application
Filed: August 17, 2017
Publication date: February 21, 2019
Applicant: Adobe Systems Incorporated
Inventors: Hailin Jin, John Philip Collomosse, Brian L. Price