Patents by Inventor Alex Filipkowski
Alex Filipkowski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11886793
Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (combination of color, font, and size) for the best designs. The top designs may be output to the user.
Type: Grant
Filed: September 3, 2021
Date of Patent: January 30, 2024
Assignee: Adobe Inc.
Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
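The abstract describes combining the outputs of four per-attribute recommendation models into whole draft designs that are scored jointly. A minimal sketch of that combination step, assuming each model simply emits a confidence per candidate value (the function name and score-multiplication rule are illustrative choices, not the patented method):

```python
import itertools

def rank_draft_designs(location_scores, font_scores, size_scores,
                       color_scores, top_k=3):
    """Combine per-attribute recommendation scores into ranked drafts.

    Each argument maps a candidate value to a model confidence in [0, 1].
    A draft design is one choice per attribute; drafts are scored as a
    whole by multiplying the individual confidences.
    """
    drafts = []
    for loc, font, size, color in itertools.product(
            location_scores, font_scores, size_scores, color_scores):
        score = (location_scores[loc] * font_scores[font]
                 * size_scores[size] * color_scores[color])
        drafts.append(((loc, font, size, color), score))
    # highest joint score first
    drafts.sort(key=lambda d: d[1], reverse=True)
    return drafts[:top_k]

top = rank_draft_designs(
    {"top-left": 0.9, "center": 0.4},
    {"serif": 0.7, "sans": 0.8},
    {"24pt": 0.6},
    {"white": 0.9, "black": 0.5},
)
# best draft: ("top-left", "sans", "24pt", "white")
```

Multiplying confidences is just one plausible joint score; the patent leaves the evaluation function open.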
-
Patent number: 11709885
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
Type: Grant
Filed: September 18, 2020
Date of Patent: July 25, 2023
Assignee: Adobe Inc.
Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
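The abstract's "combined style embedding" and repository search can be sketched with plain vector math, assuming the two networks have already produced per-image embeddings (the normalise-and-concatenate combination rule and function names here are illustrative assumptions, not the patent's specification):

```python
import numpy as np

def combine_style_embeddings(emb_a, emb_b):
    """Combine complementary style embeddings by L2-normalising each
    branch and concatenating, so neither branch dominates distances."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return np.concatenate([a, b])

def search_by_style(query_emb, repo_embs, top_k=5):
    """Return indices of repository images ranked by cosine similarity
    of their combined style embeddings to the query embedding."""
    repo = np.stack(repo_embs)
    repo = repo / np.linalg.norm(repo, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    sims = repo @ q            # cosine similarity per repository image
    return np.argsort(-sims)[:top_k]
```

In practice the repository embeddings would be precomputed and indexed; exhaustive dot products are shown only for clarity.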
-
Publication number: 20230137774
Abstract: Systems and methods for image retrieval are described. Embodiments of the present disclosure receive a search query from a user; extract an entity and a color phrase describing the entity from the search query; generate an entity color embedding in a color embedding space from the color phrase using a multi-modal color encoder; identify an image in a database based on metadata for the image including an object label corresponding to the extracted entity and an object color embedding in the color embedding space corresponding to the object label; and provide image information for the image to the user based on the metadata.
Type: Application
Filed: November 4, 2021
Publication date: May 4, 2023
Inventors: Baldo Faieta, Ajinkya Gorakhnath Kale, Pranav Vineet Aggarwal, Naveen Marri, Saeid Motiian, Tracy Holloway King, Alex Filipkowski, Shabnam Ghadar
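The retrieval flow in the abstract (extract entity and color phrase, embed the color, filter by object label, rank by distance in the color embedding space) can be sketched end to end. Everything here is a stand-in: the toy `COLOR_SPACE` table replaces the multi-modal color encoder, and the naive phrase split replaces real query parsing:

```python
import numpy as np

# Toy color embedding space: a stand-in for the multi-modal color
# encoder described in the abstract.
COLOR_SPACE = {
    "red": np.array([1.0, 0.0, 0.0]),
    "green": np.array([0.0, 1.0, 0.0]),
    "blue": np.array([0.0, 0.0, 1.0]),
}

def extract_entity_and_color(query):
    """Naive extraction: assume '<color> <entity>' queries."""
    color, entity = query.split(maxsplit=1)
    return entity, color

def retrieve(query, metadata):
    """metadata: list of dicts with 'image', 'object_label', and
    'object_color_embedding'. Filter by label, rank by color distance."""
    entity, color = extract_entity_and_color(query)
    target = COLOR_SPACE[color]
    candidates = [m for m in metadata if m["object_label"] == entity]
    candidates.sort(key=lambda m: np.linalg.norm(
        m["object_color_embedding"] - target))
    return [m["image"] for m in candidates]
```

The point of the metadata layout is that color is attached to the detected object, not the whole image, so "red car" does not match an image whose red region is a stop sign.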
-
Patent number: 11605168
Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
Type: Grant
Filed: March 29, 2021
Date of Patent: March 14, 2023
Assignee: Adobe Inc.
Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
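The segmentation CNN's output is a pixel-level copy-space mask. As a rough intuition for what such a mask captures, here is a deliberately simple classical heuristic (not the patented CNN): flat, low-variance regions such as sky or a blank wall tend to accept overlaid text well. The function name and thresholds are illustrative assumptions:

```python
import numpy as np

def copy_space_mask(image, window=4, var_thresh=10.0):
    """Heuristic stand-in for the segmentation CNN: mark pixels as copy
    space when their local window has low intensity variance.

    image: 2-D grayscale array. Returns a boolean mask, same shape.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - window + 1):
        for x in range(0, w - window + 1):
            patch = image[y:y + window, x:x + window]
            if patch.var() < var_thresh:
                # every pixel in a flat window counts as copy space
                mask[y:y + window, x:x + window] = True
    return mask
```

A learned model outperforms this kind of heuristic precisely because copy space is semantic (a calm lake is copy space; a flat-colored face is not), which is the motivation for the CNN approach in the patent.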
-
Publication number: 20230070390
Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (combination of color, font, and size) for the best designs. The top designs may be output to the user.
Type: Application
Filed: September 3, 2021
Publication date: March 9, 2023
Inventors: Zhaowen Wang, Saeid Motiian, Baldo Faieta, Zegi Gu, Peter Evan O'Donovan, Alex Filipkowski, Jose Ignacio Echevarria Vallespi
-
Patent number: 11531697
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
Type: Grant
Filed: November 3, 2020
Date of Patent: December 20, 2022
Assignee: Adobe Inc.
Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
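The abstract's "clustering digital images together according to poses of human figures within a pose feature space" can be sketched by representing each pose as an array of normalised keypoints and grouping poses that are close under a simple distance. The keypoint representation, distance, and greedy grouping rule are illustrative assumptions, not the patent's clustering algorithm:

```python
import numpy as np

def pose_distance(p, q):
    """Mean Euclidean distance between corresponding keypoints of two
    poses, each an (n_keypoints, 2) array in normalised coordinates."""
    return float(np.linalg.norm(p - q, axis=1).mean())

def group_by_pose(poses, threshold=0.1):
    """Greedy clustering: each pose joins the first group whose
    representative (first member) is within `threshold`, otherwise it
    starts a new group. Returns a list of index lists."""
    groups = []
    for i, pose in enumerate(poses):
        for g in groups:
            if pose_distance(poses[g[0]], pose) < threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups
```

A virtual-mannequin query fits the same machinery: the mannequin's joint positions form a query pose, and nearest neighbours under `pose_distance` are the search results.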
-
Patent number: 11380033
Abstract: Based on a received digital image and text, a neural network trained to identify candidate text placement areas within images may be used to generate a mask for the digital image that includes a candidate text placement area. A bounding box for the digital image may be defined for the text and based on the candidate text placement area, and the text may be superimposed onto the digital image within the bounding box.
Type: Grant
Filed: January 9, 2020
Date of Patent: July 5, 2022
Assignee: Adobe Inc.
Inventors: Kate Sousa, Zhe Lin, Saeid Motiian, Pramod Srinivasan, Baldo Faieta, Alex Filipkowski
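The step from the network's mask to a text bounding box can be sketched directly: given a boolean candidate-area mask and the text's pixel dimensions, fit a box inside the mask's extent. Centring the box and the function name are illustrative choices; the patent only requires that the box be defined from the candidate area:

```python
import numpy as np

def text_bounding_box(mask, text_w, text_h):
    """Given a boolean mask of a candidate text-placement area, return an
    (x, y, w, h) bounding box for text of size (text_w, text_h) centred
    in the mask's extent, or None if the text cannot fit."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no candidate area in the mask
    x0, x1 = xs.min(), xs.max() + 1
    y0, y1 = ys.min(), ys.max() + 1
    if x1 - x0 < text_w or y1 - y0 < text_h:
        return None  # candidate area too small for this text
    x = x0 + ((x1 - x0) - text_w) // 2
    y = y0 + ((y1 - y0) - text_h) // 2
    return (int(x), int(y), text_w, text_h)
```

Superimposing the text is then a matter of rendering it into the returned box with any raster/vector text library.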
-
Publication number: 20220138249
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately, efficiently, and flexibly identifying and providing digital images of human figures in poses corresponding to a query pose. In particular, the disclosed systems can provide multiple approaches to searching for and providing pose images, including identifying a digital image depicting a human figure in a particular pose based on a query digital image that depicts the pose or identifying a digital image depicting a human figure in a particular pose based on a virtual mannequin. Indeed, the disclosed systems can provide a manipulable virtual mannequin that defines a query pose for searching a repository of digital images. Additionally, the disclosed systems can generate and provide digital pose image groups by clustering digital images together according to poses of human figures within a pose feature space.
Type: Application
Filed: November 3, 2020
Publication date: May 5, 2022
Inventors: Jinrong Xie, Shabnam Ghadar, Jun Saito, Jimei Yang, Elnaz Morad, Duygu Ceylan Aksit, Baldo Faieta, Alex Filipkowski
-
Publication number: 20220092108
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image.
Type: Application
Filed: September 18, 2020
Publication date: March 24, 2022
Inventors: John Collomosse, Zhe Lin, Saeid Motiian, Hailin Jin, Baldo Faieta, Alex Filipkowski
-
Publication number: 20210217215
Abstract: Based on a received digital image and text, a neural network trained to identify candidate text placement areas within images may be used to generate a mask for the digital image that includes a candidate text placement area. A bounding box for the digital image may be defined for the text and based on the candidate text placement area, and the text may be superimposed onto the digital image within the bounding box.
Type: Application
Filed: January 9, 2020
Publication date: July 15, 2021
Inventors: Kate Sousa, Zhe Lin, Saeid Motiian, Pramod Srinivasan, Baldo Faieta, Alex Filipkowski
-
Publication number: 20210216824
Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
Type: Application
Filed: March 29, 2021
Publication date: July 15, 2021
Applicant: Adobe Inc.
Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
-
Patent number: 10970599
Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
Type: Grant
Filed: November 15, 2018
Date of Patent: April 6, 2021
Assignee: Adobe Inc.
Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati
-
Publication number: 20200160111
Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
Type: Application
Filed: November 15, 2018
Publication date: May 21, 2020
Applicant: Adobe Inc.
Inventors: Mingyang Ling, Alex Filipkowski, Zhe Lin, Jianming Zhang, Samarth Gulati