Patents by Inventor John Collomosse

John Collomosse has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11966849
    Abstract: Techniques and systems are provided for configuring neural networks to perform certain image manipulation operations. For instance, in response to obtaining an image for manipulation, an image manipulation system determines the fitness scores for a set of neural networks resulting from the processing of a noise map. Based on these fitness scores, the image manipulation system selects a subset of the set of neural networks for cross-breeding into a new generation of neural networks. The image manipulation system evaluates the performance of this new generation of neural networks and continues cross-breeding these neural networks until a fitness threshold is satisfied. From the final generation of neural networks, the image manipulation system selects a neural network that provides a desired output and uses the neural network to generate the manipulated image.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: April 23, 2024
    Assignee: Adobe Inc.
    Inventors: John Collomosse, Hailin Jin
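The abstract above describes an evolutionary loop: score a population of networks, cross-breed the fittest subset into a new generation, and repeat until a fitness threshold is met. As a loose illustration only (the patent publishes no code; the genome encoding, fitness function, and all parameters below are invented for this sketch), a minimal genetic loop might look like:

```python
import random

def fitness(genome, target):
    # Higher is better: negative squared error against a target vector
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    # Single-point crossover of two parent genomes plus a small mutation
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    return [g + random.uniform(-0.1, 0.1) for g in child]

def evolve(target, pop_size=20, genome_len=4, threshold=-0.05, max_gens=200):
    random.seed(0)
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(max_gens):
        pop.sort(key=lambda g: fitness(g, target), reverse=True)
        if fitness(pop[0], target) >= threshold:   # fitness threshold satisfied
            break
        parents = pop[: pop_size // 4]             # fittest subset, kept (elitism)
        pop = parents + [crossover(random.choice(parents), random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return pop[0]                                  # best of the final generation

best = evolve([0.5, -0.2, 0.1, 0.8])
```

Here a genome is a flat parameter vector standing in for a network's weights; the patented system applies the same select/cross-breed/evaluate cycle to whole neural networks.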
  • Publication number: 20240073478
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize deep learning to map query videos to known videos so as to identify a provenance of the query video or identify editorial manipulations of the query video relative to a known video. For example, the video comparison system includes a deep video comparator model that generates and compares visual and audio descriptors utilizing codewords and an inverse index. The deep video comparator model is robust and ignores discrepancies due to benign transformations that commonly occur during electronic video distribution.
    Type: Application
    Filed: August 26, 2022
    Publication date: February 29, 2024
    Inventors: Alexander Black, Van Tu Bui, John Collomosse, Simon Jenni, Viswanathan Swaminathan
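The abstract above mentions comparing videos via descriptors quantized into codewords and looked up through an inverse index. As a rough sketch of that retrieval pattern only (the quantizer, descriptors, and video names are invented here, and the real system uses learned deep descriptors), an inverted codeword index might look like:

```python
from collections import defaultdict

def to_codewords(descriptor, num_bins=4):
    # Coarsely quantise each descriptor dimension into a discrete codeword,
    # so benign perturbations that stay within a bin map to the same code
    return tuple(min(int(v * num_bins), num_bins - 1) for v in descriptor)

class InvertedIndex:
    def __init__(self):
        self.index = defaultdict(set)

    def add(self, video_id, descriptors):
        # Index each frame descriptor of a known video under its codeword
        for d in descriptors:
            self.index[to_codewords(d)].add(video_id)

    def query(self, descriptors):
        # Vote for known videos that share codewords with the query video
        votes = defaultdict(int)
        for d in descriptors:
            for vid in self.index[to_codewords(d)]:
                votes[vid] += 1
        return max(votes, key=votes.get) if votes else None

index = InvertedIndex()
index.add("original_clip", [[0.10, 0.20], [0.60, 0.70]])
index.add("other_clip", [[0.90, 0.90]])
# A re-encoded copy: slightly perturbed descriptors map to the same codewords
match = index.query([[0.12, 0.18], [0.62, 0.68]])
```

The coarse quantization is what makes the lookup ignore small discrepancies from benign transformations: as long as a perturbed descriptor stays in the same bin, it hits the same codeword.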
  • Publication number: 20230386054
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize deep learning to identify regions of an image that have been editorially modified. For example, the image comparison system includes a deep image comparator model that compares a pair of images and localizes regions that have been editorially manipulated relative to an original or trusted image. More specifically, the deep image comparator model generates and surfaces visual indications of the location of such editorial changes on the modified image. The deep image comparator model is robust and ignores discrepancies due to benign image transformations that commonly occur during electronic image distribution. The image comparison system optionally includes an image retrieval model that utilizes a visual search embedding that is robust to minor manipulations or benign modifications of images. The image retrieval model utilizes a visual search embedding for an image to robustly identify near duplicate images.
    Type: Application
    Filed: May 27, 2022
    Publication date: November 30, 2023
    Inventors: John Collomosse, Alexander Black, Van Tu Bui, Hailin Jin, Viswanathan Swaminathan
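The abstract above describes localizing editorial changes while ignoring benign transformations. A deliberately simple stand-in for the deep comparator (images as 2D grey-value lists, a global brightness shift as the only "benign" transform modeled, and the threshold chosen arbitrarily) could look like:

```python
def localize_edits(original, modified, threshold=0.2):
    """Return (row, col) cells of `modified` that differ from `original`
    beyond benign variation. A benign global brightness shift is discounted
    by subtracting the mean per-pixel difference before thresholding."""
    h, w = len(original), len(original[0])
    diffs = [[modified[r][c] - original[r][c] for c in range(w)]
             for r in range(h)]
    mean_shift = sum(sum(row) for row in diffs) / (h * w)
    return [(r, c) for r in range(h) for c in range(w)
            if abs(diffs[r][c] - mean_shift) > threshold]

original = [[0.2] * 3 for _ in range(3)]
# Benign global brightening (+0.05) plus one genuine edit at (1, 1)
modified = [[v + 0.05 for v in row] for row in original]
modified[1][1] = 0.2 + 0.55
flagged = localize_edits(original, modified)
```

The learned model generalizes this idea: it is trained to discount a whole family of distribution-time transformations rather than a single brightness offset.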
  • Patent number: 11823322
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for utilizing an encoder-decoder architecture to learn a volumetric 3D representation of an object using digital images of the object from multiple viewpoints to render novel views of the object. For instance, the disclosed systems can utilize patch-based image feature extraction to extract lifted feature representations from images corresponding to different viewpoints of an object. Furthermore, the disclosed systems can model view-dependent transformed feature representations using learned transformation kernels. In addition, the disclosed systems can recurrently and concurrently aggregate the transformed feature representations to generate a 3D voxel representation of the object. Furthermore, the disclosed systems can sample frustum features using the 3D voxel representation and transformation kernels.
    Type: Grant
    Filed: June 16, 2022
    Date of Patent: November 21, 2023
    Assignee: Adobe Inc.
    Inventors: Tong He, John Collomosse, Hailin Jin
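The abstract above describes recurrently aggregating per-view transformed features into a single 3D voxel representation. As a bare-bones illustration of the aggregation step alone (the feature extraction, learned transformation kernels, and frustum sampling are omitted; a running mean over tiny 2D grids stands in for the recurrent aggregator), one might write:

```python
def aggregate_views(view_features):
    """Fold per-view feature grids into one grid with a running mean,
    a stand-in for the learned recurrent aggregation over viewpoints."""
    voxels = None
    for n, view in enumerate(view_features, start=1):
        if voxels is None:
            voxels = [row[:] for row in view]     # copy the first view
        else:
            for r in range(len(view)):
                for c in range(len(view[0])):
                    # Incremental mean update: v += (x - v) / n
                    voxels[r][c] += (view[r][c] - voxels[r][c]) / n
    return voxels

view_a = [[0.0, 2.0], [4.0, 6.0]]
view_b = [[2.0, 0.0], [0.0, 2.0]]
voxels = aggregate_views([view_a, view_b])
```

The incremental form matters for the patented setting: views can be folded in one at a time ("recurrently") without holding all viewpoints in memory at once.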
  • Patent number: 11709885
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: July 25, 2023
    Assignee: Adobe Inc.
    Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
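The abstract above describes combining complementary style embeddings from different extractors and ranking a repository by style similarity. As an invented toy (hand-written 2D vectors stand in for the neural embeddings; concatenation is one plausible combination, and cosine similarity is an assumed metric), the search step could be:

```python
import math

def combine(emb_a, emb_b):
    # Combined style embedding: concatenate complementary embeddings
    return emb_a + emb_b

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_emb, repository):
    # Rank repository images by style similarity to the query embedding
    return sorted(repository, key=lambda item: cosine(query_emb, item[1]),
                  reverse=True)

query = combine([1.0, 0.0], [0.5, 0.5])
repo = [("watercolor_landscape", [1.0, 0.0, 0.5, 0.4]),
        ("ink_sketch", [0.0, 1.0, 0.1, 0.9])]
results = search(query, repo)
```

In the disclosed system the two embeddings come from differently trained style extraction networks (e.g. the two-branch autoencoder and the weakly supervised discriminative network), which is why combining them adds information.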
  • Patent number: 11704559
    Abstract: Embodiments are disclosed for learning structural similarity of user experience (UX) designs using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise generating a representation of a layout of a graphical user interface (GUI), the layout including a plurality of control components, each control component including a control type, geometric features, and relationship features to at least one other control component, generating a search embedding for the representation of the layout using a neural network, and querying a repository of layouts in embedding space using the search embedding to obtain a plurality of layouts based on similarity to the layout of the GUI in the embedding space.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: July 18, 2023
    Assignee: Adobe Inc.
    Inventor: John Collomosse
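The abstract above describes embedding a GUI layout (control types plus geometric features) and querying a repository of layouts in embedding space. The embedding below is hand-rolled, not the patented neural one: a histogram of control types concatenated with the mean component centre, with squared Euclidean distance as an assumed similarity measure:

```python
def layout_embedding(components, types=("button", "text", "image")):
    # Histogram of control types + mean centre of all components;
    # a crude stand-in for the learned structural search embedding
    hist = [sum(1 for c in components if c["type"] == t) for t in types]
    cx = sum(c["x"] for c in components) / len(components)
    cy = sum(c["y"] for c in components) / len(components)
    return hist + [cx, cy]

def nearest(query_emb, repo):
    # Nearest neighbour in embedding space by squared Euclidean distance
    return min(repo, key=lambda item: sum((a - b) ** 2
                                          for a, b in zip(query_emb, item[1])))[0]

query = [{"type": "button", "x": 0.2, "y": 0.9},
         {"type": "button", "x": 0.8, "y": 0.9},
         {"type": "text",   "x": 0.5, "y": 0.1}]
repo = [("login_form", [2, 1, 0, 0.5, 0.6]),
        ("photo_grid", [0, 0, 6, 0.5, 0.5])]
best = nearest(layout_embedding(query), repo)
```

The learned embedding additionally captures pairwise relationship features between components, which a flat histogram cannot.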
  • Publication number: 20230222762
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss.
    Type: Application
    Filed: January 11, 2022
    Publication date: July 13, 2023
    Inventors: Maksym Andriushchenko, John Collomosse, Xiaoyang Li, Geoffrey Oxholm
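The abstract above describes a min-max training scheme: an inner objective finds bounded augmentations that increase the contrastive loss, and an outer objective updates the model to reduce that loss. The sketch below is only a one-parameter caricature of that structure (the surrogate loss, the candidate perturbation set, and every constant are invented; the real system trains a deep fingerprinting network):

```python
def contrastive_loss(w, delta):
    # Toy surrogate: loss grows when the perturbed embedding w * (1 + delta)
    # drifts from the clean embedding w, plus a small regulariser on w
    return (w * (1 + delta) - w) ** 2 + 0.1 * w ** 2

def train(w=2.0, steps=50, lr=0.1, deltas=(-0.2, 0.0, 0.2)):
    for _ in range(steps):
        # Objective 1: pick the bounded augmentation maximising the loss
        worst = max(deltas, key=lambda d: contrastive_loss(w, d))
        # Objective 2: descend the loss under that worst-case augmentation
        # (central finite difference stands in for backpropagation)
        eps = 1e-4
        grad = (contrastive_loss(w + eps, worst)
                - contrastive_loss(w - eps, worst)) / (2 * eps)
        w -= lr * grad
    return w

w_final = train()
```

Training against the worst augmentation in the bound is what makes the resulting fingerprint robust to adversarial images at a small overhead: only the inner maximisation is added to an otherwise standard contrastive loop.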
  • Publication number: 20220327767
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for utilizing an encoder-decoder architecture to learn a volumetric 3D representation of an object using digital images of the object from multiple viewpoints to render novel views of the object. For instance, the disclosed systems can utilize patch-based image feature extraction to extract lifted feature representations from images corresponding to different viewpoints of an object. Furthermore, the disclosed systems can model view-dependent transformed feature representations using learned transformation kernels. In addition, the disclosed systems can recurrently and concurrently aggregate the transformed feature representations to generate a 3D voxel representation of the object. Furthermore, the disclosed systems can sample frustum features using the 3D voxel representation and transformation kernels.
    Type: Application
    Filed: June 16, 2022
    Publication date: October 13, 2022
    Inventors: Tong He, John Collomosse, Hailin Jin
  • Patent number: 11393158
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for utilizing an encoder-decoder architecture to learn a volumetric 3D representation of an object using digital images of the object from multiple viewpoints to render novel views of the object. For instance, the disclosed systems can utilize patch-based image feature extraction to extract lifted feature representations from images corresponding to different viewpoints of an object. Furthermore, the disclosed systems can model view-dependent transformed feature representations using learned transformation kernels. In addition, the disclosed systems can recurrently and concurrently aggregate the transformed feature representations to generate a 3D voxel representation of the object. Furthermore, the disclosed systems can sample frustum features using the 3D voxel representation and transformation kernels.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: July 19, 2022
    Assignee: Adobe Inc.
    Inventors: Tong He, John Collomosse, Hailin Jin
  • Publication number: 20220092108
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
  • Publication number: 20210397942
    Abstract: Embodiments are disclosed for learning structural similarity of user experience (UX) designs using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise generating a representation of a layout of a graphical user interface (GUI), the layout including a plurality of control components, each control component including a control type, geometric features, and relationship features to at least one other control component, generating a search embedding for the representation of the layout using a neural network, and querying a repository of layouts in embedding space using the search embedding to obtain a plurality of layouts based on similarity to the layout of the GUI in the embedding space.
    Type: Application
    Filed: June 17, 2020
    Publication date: December 23, 2021
    Inventor: John Collomosse
  • Publication number: 20210312698
    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for utilizing an encoder-decoder architecture to learn a volumetric 3D representation of an object using digital images of the object from multiple viewpoints to render novel views of the object. For instance, the disclosed systems can utilize patch-based image feature extraction to extract lifted feature representations from images corresponding to different viewpoints of an object. Furthermore, the disclosed systems can model view-dependent transformed feature representations using learned transformation kernels. In addition, the disclosed systems can recurrently and concurrently aggregate the transformed feature representations to generate a 3D voxel representation of the object. Furthermore, the disclosed systems can sample frustum features using the 3D voxel representation and transformation kernels.
    Type: Application
    Filed: April 2, 2020
    Publication date: October 7, 2021
    Inventors: Tong He, John Collomosse, Hailin Jin
  • Publication number: 20210311936
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for guided visual search. A visual search query can be represented as a sketch sequence that includes ordering information of the constituent strokes in the sketch. The visual search query can be encoded into a structural search encoding in a common search space by a structural neural network. Indexed visual search results can be identified in the common search space and clustered in an auxiliary semantic space. Sketch suggestions can be identified from a plurality of indexed sketches in the common search space. A sketch suggestion can be identified for each semantic cluster of visual search results and presented with the cluster to guide a user towards relevant content through an iterative search process. Selecting a sketch suggestion as a target sketch can automatically transform the visual search query to the target sketch via adversarial images.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Inventors: Hailin Jin, John Collomosse
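The abstract above describes clustering visual search results in an auxiliary semantic space and surfacing one sketch suggestion per cluster to guide the user. A skeletal illustration of just that grouping-and-suggestion step (labels stand in for the semantic space, and "first sketch with a matching label" stands in for the nearest indexed sketch) might be:

```python
from collections import defaultdict

def cluster_results(results):
    # Group visual search results by semantic label (the auxiliary space)
    clusters = defaultdict(list)
    for item_id, label in results:
        clusters[label].append(item_id)
    return clusters

def suggest_sketches(clusters, indexed_sketches):
    # One sketch suggestion per semantic cluster: the first indexed sketch
    # sharing the cluster's label stands in for the nearest neighbour
    return {label: next(s for s, lbl in indexed_sketches if lbl == label)
            for label in clusters}

results = [("img1", "cat"), ("img2", "cat"), ("img3", "dog")]
indexed_sketches = [("sketch_cat", "cat"), ("sketch_dog", "dog")]
suggestions = suggest_sketches(cluster_results(results), indexed_sketches)
```

In the disclosed system both the query and the indexed sketches live in a common search space produced by a structural neural network over stroke sequences; the sketch suggestion a user selects then becomes the next query in the iterative loop.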
  • Publication number: 20210264282
    Abstract: Techniques and systems are provided for configuring neural networks to perform certain image manipulation operations. For instance, in response to obtaining an image for manipulation, an image manipulation system determines the fitness scores for a set of neural networks resulting from the processing of a noise map. Based on these fitness scores, the image manipulation system selects a subset of the set of neural networks for cross-breeding into a new generation of neural networks. The image manipulation system evaluates the performance of this new generation of neural networks and continues cross-breeding these neural networks until a fitness threshold is satisfied. From the final generation of neural networks, the image manipulation system selects a neural network that provides a desired output and uses the neural network to generate the manipulated image.
    Type: Application
    Filed: February 20, 2020
    Publication date: August 26, 2021
    Inventors: John Collomosse, Hailin Jin
  • Patent number: 11068493
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for guided visual search. A visual search query can be represented as a sketch sequence that includes ordering information of the constituent strokes in the sketch. The visual search query can be encoded into a structural search encoding in a common search space by a structural neural network. Indexed visual search results can be identified in the common search space and clustered in an auxiliary semantic space. Sketch suggestions can be identified from a plurality of indexed sketches in the common search space. A sketch suggestion can be identified for each semantic cluster of visual search results and presented with the cluster to guide a user towards relevant content through an iterative search process. Selecting a sketch suggestion as a target sketch can automatically transform the visual search query to the target sketch via adversarial images.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: July 20, 2021
    Assignee: Adobe Inc.
    Inventors: Hailin Jin, John Collomosse
  • Publication number: 20200142994
    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for guided visual search. A visual search query can be represented as a sketch sequence that includes ordering information of the constituent strokes in the sketch. The visual search query can be encoded into a structural search encoding in a common search space by a structural neural network. Indexed visual search results can be identified in the common search space and clustered in an auxiliary semantic space. Sketch suggestions can be identified from a plurality of indexed sketches in the common search space. A sketch suggestion can be identified for each semantic cluster of visual search results and presented with the cluster to guide a user towards relevant content through an iterative search process. Selecting a sketch suggestion as a target sketch can automatically transform the visual search query to the target sketch via adversarial images.
    Type: Application
    Filed: November 7, 2018
    Publication date: May 7, 2020
    Inventors: Hailin Jin, John Collomosse
  • Patent number: 9270846
    Abstract: A content encoder for encoding content into a source image for display on a display device includes inputs for receiving data representing content to be encoded into the source image; a processor arranged to encode the content into a sequence of display frames each including the source image, the content encoded as a time varying two-dimensional pattern of luminosity modulations of portions of the source image to form a sequence of encoded images of the source image; and outputs arranged to output the sequence of encoded images to the display device.
    Type: Grant
    Filed: July 25, 2008
    Date of Patent: February 23, 2016
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: John Collomosse, Timothy Paul James Gerard Kindberg
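The abstract above describes encoding content as a time-varying pattern of luminosity modulations over a source image. As a minimal round-trip sketch (one bit per frame, a whole-image modulation rather than the patented two-dimensional pattern, and an arbitrary amplitude), encode and decode could be:

```python
def encode_frames(source, bits, amplitude=0.02):
    """Encode one bit per display frame as a luminosity modulation of the
    source image: +amplitude for a 1 bit, -amplitude for a 0 bit."""
    frames = []
    for bit in bits:
        delta = amplitude if bit else -amplitude
        frames.append([[min(1.0, max(0.0, p + delta)) for p in row]
                       for row in source])
    return frames

def decode_frames(frames, source):
    # Recover bits by comparing each frame's mean luminosity to the source's
    def mean(img):
        return sum(sum(row) for row in img) / (len(img) * len(img[0]))
    base = mean(source)
    return [1 if mean(f) > base else 0 for f in frames]

source = [[0.5, 0.5], [0.5, 0.5]]
bits = [1, 0, 1, 1]
recovered = decode_frames(encode_frames(source, bits), source)
```

Keeping the modulation amplitude small is the point of the scheme: the encoded sequence still looks like the source image to a viewer while a camera-equipped receiver can read the data.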
  • Patent number: 9203439
    Abstract: A method of generating a sequence of display frames for display on a display device, wherein the sequence of display frames are derived from a data string which is encoded to include error correction in order to enable recreation of the data string at a receiving device, includes dividing the data string to be encoded into a plurality of source segments; encoding the plurality of source segments to generate a plurality of codewords, each codeword comprising a plurality of codeword bits; and positioning codeword bits in the sequence of frames.
    Type: Grant
    Filed: July 25, 2008
    Date of Patent: December 1, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: John Collomosse, Timothy Paul James Gerard Kindberg
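The abstract above describes splitting a data string into source segments, expanding each into an error-correcting codeword, and positioning the codeword bits across the frame sequence. The sketch below substitutes a trivial 3x repetition code for whatever code the patent actually uses, with one codeword bit placed per frame:

```python
def encode(data_bits, repeat=3, frames=3):
    # Each source segment (one bit here) becomes a repetition codeword;
    # codeword bits are spread one-per-frame across the sequence
    codewords = [[b] * repeat for b in data_bits]
    frame_seq = [[] for _ in range(frames)]
    for cw in codewords:
        for i, bit in enumerate(cw):
            frame_seq[i % frames].append(bit)
    return frame_seq

def decode(frame_seq, repeat=3):
    # Re-assemble each codeword from its per-frame positions, then
    # majority-vote to correct isolated frame errors
    bits = []
    for j in range(len(frame_seq[0])):
        votes = [frame_seq[i][j] for i in range(repeat)]
        bits.append(1 if sum(votes) >= (repeat + 1) // 2 else 0)
    return bits

frames = encode([1, 0, 1])
frames[0][1] = 1            # corrupt one codeword bit in the first frame
recovered = decode(frames)  # the repetition code corrects the error
```

Spreading a codeword's bits across frames means a single corrupted or missed frame damages at most one bit of each codeword, which is exactly what the error correction can absorb.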
  • Patent number: 8180163
    Abstract: The present disclosure describes encoding sequence information into a sequence of display frames for display on a display device. An example of encoding sequence information includes generating the sequence of display frames, inserting monitor flags within each display frame, each monitor flag being capable of moving between a first state and a second state, setting the state of monitor flags within each display frame to a predetermined configuration, and encoding sequence information in the sequence of display frames such that neighboring display frames in the sequence have different predetermined configurations.
    Type: Grant
    Filed: July 25, 2008
    Date of Patent: May 15, 2012
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: John Collomosse, Timothy Paul James Gerard Kindberg
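The abstract above describes inserting monitor flags whose predetermined configurations differ between neighboring frames, so a receiver can detect frame transitions. A toy version (two alternating fixed patterns, invented here; the patent allows richer configurations) might be:

```python
def add_monitor_flags(frames, num_flags=4):
    """Attach a monitor-flag configuration to each frame; neighbouring
    frames alternate between two predetermined patterns so a receiver can
    tell where one frame ends and the next begins."""
    pattern_a = [0, 1] * (num_flags // 2)
    pattern_b = [1, 0] * (num_flags // 2)
    return [(frame, pattern_a if i % 2 == 0 else pattern_b)
            for i, frame in enumerate(frames)]

def count_transitions(flagged):
    # A change in the flag configuration marks a new frame in the sequence
    return sum(1 for a, b in zip(flagged, flagged[1:]) if a[1] != b[1])

flagged = add_monitor_flags(["frame0", "frame1", "frame2"])
transitions = count_transitions(flagged)
```

Without such flags a receiver sampling a display cannot reliably tell whether it has captured the same frame twice or two consecutive frames; the alternating configurations disambiguate this.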
  • Publication number: 20090028453
    Abstract: A content encoder for encoding content in a source image for display on a display device, the content encoder comprising: inputs for receiving data representing content to be encoded in the source image; a processor arranged to encode content as a time varying two-dimensional pattern of luminosity modulations within the source image to form an encoded image; outputs arranged to output the encoded image to the display device.
    Type: Application
    Filed: July 25, 2008
    Publication date: January 29, 2009
    Applicant: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
    Inventors: John Collomosse, Timothy Paul James Gerard Kindberg