Patents by Inventor Alex Filipkowski

Alex Filipkowski has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11886793
    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (a combination of location, color, font, and size) to identify the best designs. The top designs may be output to the user.
    Type: Grant
    Filed: September 3, 2021
    Date of Patent: January 30, 2024
    Assignee: Adobe Inc.
    Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
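The draft-generation step described in this abstract can be illustrated with a toy sketch: four stand-in recommendation models each propose candidates, the candidates are combined, and each whole design is scored. The candidate lists, model stubs, and scoring function below are illustrative assumptions, not Adobe's actual models.

```python
import itertools

# Stand-ins for the four recommendation models named in the abstract.
def recommend_locations(background):
    return [(10, 20), (50, 80)]          # candidate (x, y) placements

def recommend_fonts(background):
    return ["Serif", "Sans"]

def recommend_sizes(background):
    return [12, 24]

def recommend_colors(background):
    return ["#000000", "#FFFFFF"]

def score_design(design):
    # Placeholder whole-design score; a real system would rate the
    # legibility and aesthetics of the combined choices.
    location, font, size, color = design
    return size + (5 if font == "Sans" else 0) + (3 if color == "#FFFFFF" else 0)

def top_designs(background, k=3):
    # Combine the outputs of all four models, score each draft design
    # as a whole, and return the k best.
    candidates = itertools.product(
        recommend_locations(background),
        recommend_fonts(background),
        recommend_sizes(background),
        recommend_colors(background),
    )
    return sorted(candidates, key=score_design, reverse=True)[:k]

best = top_designs(background="beach.jpg")
```

The key point of the design is that candidates are cheap to enumerate per model, while scoring happens only on complete combinations.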
  • Patent number: 11709885
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
    Type: Grant
    Filed: September 18, 2020
    Date of Patent: July 25, 2023
    Assignee: Adobe Inc.
    Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
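The retrieval idea in this abstract — combine complementary style embeddings from different extraction networks, then search by similarity — can be sketched with plain vectors. The embeddings and the concatenation-based combination below are assumptions for illustration; the patent's networks learn these representations from images.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def combine(emb_a, emb_b):
    # Concatenate complementary embeddings from two style extraction
    # networks (e.g. an autoencoder branch and a discriminative branch).
    return emb_a + emb_b

def search(query_combined, repository):
    # repository: {image_id: combined style embedding}
    return max(repository, key=lambda img: cosine(query_combined, repository[img]))

repo = {
    "watercolor.png": combine([0.9, 0.1], [0.8, 0.2]),
    "lineart.png":    combine([0.1, 0.9], [0.2, 0.8]),
}
query = combine([0.85, 0.15], [0.75, 0.25])
match = search(query, repo)
```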
  • Publication number: 20230137774
    Abstract: Systems and methods for image retrieval are described. Embodiments of the present disclosure receive a search query from a user; extract an entity and a color phrase describing the entity from the search query; generate an entity color embedding in a color embedding space from the color phrase using a multi-modal color encoder; identify an image in a database based on metadata for the image including an object label corresponding to the extracted entity and an object color embedding in the color embedding space corresponding to the object label; and provide image information for the image to the user based on the metadata.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Inventors: Baldo Faieta, Ajinkya Gorakhnath Kale, Pranav Vineet Aggarwal, Naveen Marri, Saeid Motiian, Tracy Holloway King, Alex Filipkowski, Shabnam Ghadar
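The query-parsing and color-matching flow in this abstract can be sketched as follows. The hypothetical color vocabulary stands in for the multi-modal color encoder, and the naive prefix parser stands in for entity/color-phrase extraction; neither reflects the actual implementation.

```python
# Hypothetical color phrases mapped to points in a shared color space.
COLOR_EMBEDDINGS = {
    "red": (1.0, 0.0, 0.0),
    "navy blue": (0.0, 0.0, 0.5),
    "green": (0.0, 1.0, 0.0),
}

def parse_query(query):
    # Naive extraction: longest known color phrase prefix; the rest is
    # the entity it describes.
    for phrase in sorted(COLOR_EMBEDDINGS, key=len, reverse=True):
        if query.startswith(phrase + " "):
            return phrase, query[len(phrase) + 1:]
    raise ValueError("no color phrase found")

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def retrieve(query, index):
    # index: image_id -> metadata with an object label and an object
    # color embedding in the same space as the query's color phrase.
    color_phrase, entity = parse_query(query)
    target = COLOR_EMBEDDINGS[color_phrase]
    candidates = [i for i, m in index.items() if m["label"] == entity]
    return min(candidates, key=lambda i: dist(index[i]["color"], target))

index = {
    "img1": {"label": "car", "color": (0.9, 0.1, 0.1)},   # reddish car
    "img2": {"label": "car", "color": (0.1, 0.1, 0.6)},   # bluish car
    "img3": {"label": "tree", "color": (0.1, 0.8, 0.1)},
}
hit = retrieve("red car", index)
```

Filtering on the object label first, then ranking by color-embedding distance, mirrors the abstract's two-part match on metadata.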
  • Patent number: 11605168
    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: March 14, 2023
    Assignee: Adobe Inc.
    Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
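The two-stage pipeline in this abstract — a regression stage predicting copy-space presence and properties, conditioned on a segmentation-style mask — can be caricatured on a toy pixel grid. Low row variance stands in for the learned CNN features; this is purely an assumption for illustration.

```python
def segment_copy_space(image, tolerance=1):
    # Emit a pixel-level mask: mark a pixel as copy space if its row is
    # nearly uniform (a crude proxy for the segmentation CNN's output).
    mask = []
    for row in image:
        uniform = max(row) - min(row) <= tolerance
        mask.append([1 if uniform else 0] * len(row))
    return mask

def predict_copy_space(image):
    # Crude proxy for the regression stage: report whether copy space is
    # present and how large a fraction of the image it covers.
    mask = segment_copy_space(image)
    area = sum(sum(row) for row in mask)
    total = sum(len(row) for row in mask)
    return {"present": area / total >= 0.25, "fraction": area / total}

image = [
    [200, 200, 201, 200],   # flat sky: good copy space
    [200, 201, 200, 200],
    [40, 180, 90, 220],     # busy foreground
    [60, 10, 250, 120],
]
report = predict_copy_space(image)
```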
  • Publication number: 20230070390
    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (a combination of location, color, font, and size) to identify the best designs. The top designs may be output to the user.
    Type: Application
    Filed: September 3, 2021
    Publication date: March 9, 2023
    Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
  • Patent number: 11531697
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: December 20, 2022
    Assignee: Adobe Inc.
    Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
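Both retrieval paths in this abstract — nearest-pose search from a query (image or posed virtual mannequin) and grouping images by pose — can be sketched over plain feature vectors. The joint-angle-style vectors and the greedy grouping rule below are illustrative assumptions; the patent learns its pose feature space from images.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_pose(query, repository):
    # query: a pose vector from a query image or a posed virtual mannequin.
    return min(repository, key=lambda img: dist(query, repository[img]))

def group_poses(repository, radius=0.5):
    # Greedy clustering in pose space: each image joins the first group
    # whose representative pose lies within `radius`.
    groups = []   # list of (representative pose, [image ids])
    for img, pose in repository.items():
        for rep, members in groups:
            if dist(pose, rep) <= radius:
                members.append(img)
                break
        else:
            groups.append((pose, [img]))
    return [members for _, members in groups]

repo = {
    "standing1.jpg": (0.0, 0.1, 0.0),
    "standing2.jpg": (0.1, 0.0, 0.1),
    "sitting1.jpg":  (1.5, 1.4, 1.6),
}
mannequin_pose = (0.05, 0.05, 0.05)   # defined by posing the mannequin's joints
match = nearest_pose(mannequin_pose, repo)
groups = group_poses(repo)
```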
  • Patent number: 11380033
    Abstract: Based on a received digital image and text, a neural network trained to identify candidate text placement areas within images may be used to generate a mask for the digital image that includes a candidate text placement area. A bounding box for the digital image may be defined for the text and based on the candidate text placement area, and the text may be superimposed onto the digital image within the bounding box.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: July 5, 2022
    Assignee: Adobe Inc.
    Inventors: Kate Sousa, Zhe Lin, Saeid Motiian, Pramod Srinivasan, Baldo Faieta, Alex Filipkowski
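The placement step this abstract describes — mask, then bounding box, then superimposed text — can be sketched on a character grid. Here the mask is hand-written; in the patent it comes from a neural network trained to identify candidate text placement areas.

```python
def bounding_box(mask):
    # Tightest box around the mask's marked pixels: (top, left, bottom, right).
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return min(rows), min(cols), max(rows), max(cols)

def place_text(image, mask, text):
    # Fit a bounding box to the candidate area and draw the text inside it.
    top, left, bottom, right = bounding_box(mask)
    if right - left + 1 < len(text):
        raise ValueError("text wider than candidate placement area")
    image = [row[:] for row in image]          # copy before drawing
    for i, ch in enumerate(text):
        image[top][left + i] = ch
    return image

image = [["." for _ in range(8)] for _ in range(4)]
mask = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
result = place_text(image, mask, "HELLO")
```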
  • Publication number: 20220138249
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
    Type: Application
    Filed: November 3, 2020
    Publication date: May 5, 2022
    Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
  • Publication number: 20220092108
    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
    Type: Application
    Filed: September 18, 2020
    Publication date: March 24, 2022
    Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
  • Publication number: 20210217215
    Abstract: Based on a received digital image and text, a neural network trained to identify candidate text placement areas within images may be used to generate a mask for the digital image that includes a candidate text placement area. A bounding box for the digital image may be defined for the text and based on the candidate text placement area, and the text may be superimposed onto the digital image within the bounding box.
    Type: Application
    Filed: January 9, 2020
    Publication date: July 15, 2021
    Inventors: Kate Sousa, Zhe Lin, Saeid Motiian, Pramod Srinivasan, Baldo Faieta, Alex Filipkowski
  • Publication number: 20210216824
    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
    Type: Application
    Filed: March 29, 2021
    Publication date: July 15, 2021
    Applicant: Adobe Inc.
    Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
  • Patent number: 10970599
    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: April 6, 2021
    Assignee: Adobe Inc.
    Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
  • Publication number: 20200160111
    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Applicant: Adobe Inc.
    Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati