Patents by Inventor Kfir Aberman
Kfir Aberman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250086760
Abstract: Systems and methods for augmenting data can leverage one or more machine-learned models and contextual attention data to provide more realistic and efficient data augmentation. For example, systems and methods for inpainting can leverage a machine-learned model to generate predicted contextual attention data and blend the predicted contextual attention data with obtained contextual attention data to determine replacement data for augmenting an image to replace one or more occlusions. The obtained contextual attention data can include user-guided contextual attention.
Type: Application
Filed: July 19, 2021
Publication date: March 13, 2025
Inventors: Noritsugu Kanazawa, Neal Wadhwa, Yael Pritch Knaan, Kfir Aberman
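The blending of predicted and user-guided contextual attention described above can be sketched as follows. This is a minimal numpy illustration, not the patented implementation: `blend_attention` and `inpaint_with_attention` are hypothetical names, and the attention maps are assumed to give, for each masked pixel, a weight distribution over the known (unmasked) pixels.

```python
import numpy as np

def blend_attention(predicted, guided, alpha=0.5):
    """Blend model-predicted and user-guided contextual attention maps.
    Each row is one masked pixel's weight distribution over source pixels."""
    blended = alpha * predicted + (1.0 - alpha) * guided
    # Renormalize so each masked pixel's weights still sum to 1.
    return blended / blended.sum(axis=-1, keepdims=True)

def inpaint_with_attention(image, mask, attention):
    """Fill masked pixels as attention-weighted combinations of known pixels."""
    h, w, c = image.shape
    flat = image.reshape(-1, c)
    known = flat[~mask.ravel()]   # source pixels outside the hole
    filled = attention @ known    # (num_masked, num_known) @ (num_known, c)
    out = flat.copy()
    out[mask.ravel()] = filled
    return out.reshape(h, w, c)
```

With uniform attention, each masked pixel simply becomes the mean of the known pixels; a learned or user-guided map concentrates weight on the most relevant background patches.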
-
Publication number: 20250069194
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: November 13, 2024
Publication date: February 27, 2025
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
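One simple way to picture a "personalized prior" is as a low-dimensional region of latent space fit to a subject's own images, onto which any edited latent is projected. The sketch below uses PCA as an illustrative stand-in for the learned prior; the function names and the choice of a linear subspace are assumptions, not the patented method.

```python
import numpy as np

def personalized_prior(subject_latents, n_components=2):
    """Fit a low-dimensional affine subspace (a 'personalized prior')
    to the latent codes of several images of one subject, via PCA."""
    mean = subject_latents.mean(axis=0)
    centered = subject_latents - mean
    # Principal directions spanning the subject's region of latent space.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def confine_to_prior(latent, mean, basis):
    """Project an arbitrary latent onto the subject's subspace, so that
    whatever edit the generator performs stays close to the subject."""
    coords = (latent - mean) @ basis.T
    return mean + coords @ basis
```

Because `confine_to_prior` is a projection, applying it twice gives the same result as applying it once: edits can wander, but the result always lands back in the subject's region.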
-
Publication number: 20250037251
Abstract: A method includes obtaining an input image having a region to be inpainted, an indication of the region to be inpainted, and a guide image. The method also includes determining, by an encoder model, a first latent representation of the input image and a second latent representation of the guide image, and generating a combined latent representation based on the first latent representation and the second latent representation. The method additionally includes generating, by a style generative adversarial network model and based on the combined latent representation, an intermediate output image that includes inpainted image content for the region to be inpainted in the input image. The method further includes generating, based on the input image, the indication of the region, and the intermediate output image, an output image representing the input image with the region to be inpainted including the inpainted image content from the intermediate output image.
Type: Application
Filed: January 13, 2022
Publication date: January 30, 2025
Inventors: Orly Liba, Kfir Aberman, Wei Xiong, David Futschik, Yael Pritch Knaan, Daniel Sýkora, Tianfan Xue
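The data flow above — combine two latents, generate an intermediate image, then composite it back into the hole — can be sketched in a few lines. A convex blend stands in for the learned latent combination and a mask-based composite stands in for the final generation step; both functions are illustrative, not the claimed method.

```python
import numpy as np

def combine_latents(z_input, z_guide, guide_weight=0.5):
    """Combine the encoder latents of the input and guide images.
    A simple convex blend stands in for the learned combination."""
    return (1.0 - guide_weight) * z_input + guide_weight * z_guide

def composite(input_img, mask, intermediate):
    """Build the output image: keep original pixels outside the region
    to be inpainted, take generated content inside it."""
    m = mask[..., None].astype(float)
    return (1.0 - m) * input_img + m * intermediate
```

In the full pipeline the combined latent would be fed to the StyleGAN-style generator to produce `intermediate`; the composite then guarantees that pixels outside the indicated region are untouched.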
-
Patent number: 12169911
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Grant
Filed: June 14, 2023
Date of Patent: December 17, 2024
Assignee: GOOGLE LLC
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20240320912
Abstract: A fractional training process can be performed, using a plurality of training images, on an instance of a machine-learned generative image model to obtain a partially trained instance of the model. A fractional optimization process can be performed, with the partially trained instance, on an instance of a machine-learned three-dimensional (3D) implicit representation model to obtain a partially optimized instance of the model. Based on the plurality of training images, pseudo multi-view subject images can be generated with the partially optimized instance of the 3D implicit representation model and a fully trained instance of the generative image model. The partially trained instance of the model can be trained with a set of training data. The partially optimized instance of the machine-learned 3D implicit representation model can be trained with the machine-learned multi-view image model.
Type: Application
Filed: March 20, 2024
Publication date: September 26, 2024
Inventors: Yuanzhen Li, Amit Raj, Varun Jampani, Benjamin Joseph Mildenhall, Benjamin Michael Poole, Jonathan Tilton Barron, Kfir Aberman, Michael Niemeyer, Michael Rubinstein, Nataniel Ruiz Gutierrez, Shiran Elyahu Zada, Srinivas Kaza
-
Publication number: 20240296596
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a text-to-image model so that the text-to-image model generates images that each depict a variable instance of an object class when the object class without the unique identifier is provided as a text input, and generates images that each depict a same subject instance of the object class when the unique identifier is provided as the text input.
Type: Application
Filed: August 23, 2023
Publication date: September 5, 2024
Inventors: Kfir Aberman, Nataniel Ruiz Gutierrez, Michael Rubinstein, Yuanzhen Li, Yael Pritch Knaan, Varun Jampani
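The unique-identifier scheme above can be illustrated with how fine-tuning data might be assembled: subject photos are paired with prompts containing a rare identifier token, while generic class images are paired with identifier-free prompts so the model's prior for the class is preserved. The token `"sks"`, the prompt templates, and the loss weighting below are common illustrative choices, not details taken from the patent text.

```python
RARE_TOKEN = "sks"  # placeholder unique identifier (assumption)

def build_prompts(object_class, subject_images, prior_images):
    """Pair subject photos with identifier-bearing prompts, and generic
    class images with class-only prompts to preserve the class prior."""
    subject = [(img, f"a photo of {RARE_TOKEN} {object_class}")
               for img in subject_images]
    prior = [(img, f"a photo of a {object_class}")
             for img in prior_images]
    return subject + prior

def prior_preservation_loss(subject_err, prior_err, lam=1.0):
    """Total fine-tuning loss: reconstruct the subject while penalizing
    drift on the generic class; lam weights the prior term."""
    return subject_err + lam * prior_err
```

After fine-tuning, prompting with `"a photo of sks dog"` targets the specific subject, while `"a photo of a dog"` still yields varied instances of the class.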
-
Publication number: 20240046532
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Kfir Aberman, Yael Pritch Knaan, Orly Liba, David Edward Jacobs
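The palette-transform idea — compare the original and inpainted colors inside the region of interest, fit a color mapping, then recolorize the original rather than pasting the inpainted pixels — can be sketched with a per-channel affine fit. The gain/bias parameterization is an assumption for illustration; the patent does not specify this particular transform.

```python
import numpy as np

def fit_palette_transform(original, inpainted, mask):
    """Fit a per-channel affine map (gain, bias) that moves the masked
    region's original colors toward the inpainted colors."""
    src = original[mask].astype(float)
    dst = inpainted[mask].astype(float)
    gain = dst.std(axis=0) / (src.std(axis=0) + 1e-8)
    bias = dst.mean(axis=0) - gain * src.mean(axis=0)
    return gain, bias

def recolorize(original, mask, gain, bias):
    """Apply the palette transform only inside the region of interest,
    keeping the original image's structure and detail."""
    out = original.astype(float).copy()
    out[mask] = gain * out[mask] + bias
    return out
```

Recolorizing the original (instead of using the inpainted pixels directly) keeps texture and edges intact while still muting the distractor's attention-grabbing colors.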
-
Publication number: 20240037822
Abstract: Some implementations are directed to editing a source image, where the source image is one generated based on processing a source natural language (NL) prompt using a Large-scale language-image (LLI) model. Those implementations edit the source image based on user interface input that indicates an edit to the source NL prompt, and optionally independent of any user interface input that specifies a mask in the source image and/or independent of any other user interface input. Some implementations of the present disclosure are additionally or alternatively directed to applying prompt-to-prompt editing techniques to editing a source image that is one generated based on a real image, and that approximates the real image.
Type: Application
Filed: July 31, 2023
Publication date: February 1, 2024
Inventors: Kfir Aberman, Amir Hertz, Yael Pritch Knaan, Ron Mokady, Jay Tenenbaum, Daniel Cohen-Or
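Prompt-to-prompt editing works by reusing the source generation's cross-attention maps for the words the two prompts share, so the edited image keeps the source layout while new words take effect. The sketch below, with hypothetical names and a simple step threshold, illustrates that injection rule; the real technique operates inside a diffusion model's attention layers.

```python
import numpy as np

def prompt_to_prompt_attention(source_attn, edited_attn,
                               shared_tokens, inject_steps, step):
    """During early diffusion steps, reuse the source image's cross-attention
    maps for tokens shared between the two prompts; edited/new tokens keep
    their own maps. source_attn/edited_attn: (num_tokens, H*W)."""
    if step >= inject_steps:
        return edited_attn          # later steps: unconstrained generation
    out = edited_attn.copy()
    for t in shared_tokens:
        out[t] = source_attn[t]    # preserve the layout of unchanged words
    return out
```

Because only the attention maps are swapped, no mask or other spatial input is needed from the user: the edit is specified entirely by changing the prompt.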
-
Patent number: 11854120
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Grant
Filed: September 28, 2021
Date of Patent: December 26, 2023
Assignee: GOOGLE LLC
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Publication number: 20230325985
Abstract: A method includes receiving an input image. The input image corresponds to one or more masked regions to be inpainted. The method includes providing the input image to a first neural network. The first neural network outputs a first inpainted image at a first resolution, and the one or more masked regions are inpainted in the first inpainted image. The method includes creating a second inpainted image by increasing a resolution of the first inpainted image from the first resolution to a second resolution. The second resolution is greater than the first resolution such that the one or more inpainted masked regions have an increased resolution. The method includes providing the second inpainted image to a second neural network. The second neural network outputs a first refined inpainted image at the second resolution, and the first refined inpainted image is a refined version of the second inpainted image.
Type: Application
Filed: October 14, 2021
Publication date: October 12, 2023
Inventors: Soo Ye Kim, Orly Liba, Rahul Garg, Nori Kanazawa, Neal Wadhwa, Kfir Aberman, Huiwen Chang
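The coarse-to-fine cascade described above has a simple shape: inpaint at low resolution, upsample the result, then refine at the higher resolution. The sketch below uses nearest-neighbor upsampling and takes the two networks as callables; the function names and the 2x factor are illustrative assumptions.

```python
import numpy as np

def upsample2x(img):
    """Nearest-neighbor 2x upsampling (stands in for a learned upsampler)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def cascade_inpaint(image_lo, mask_lo, coarse_net, refine_net):
    """Two-stage pipeline: inpaint at low resolution, increase the
    resolution of the result, then refine at the higher resolution."""
    first = coarse_net(image_lo, mask_lo)   # first inpainted image (low res)
    second = upsample2x(first)              # second inpainted image (high res)
    return refine_net(second)               # refined inpainted image
```

Splitting the problem this way lets the first network reason globally over a small image while the second only has to add high-frequency detail.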
-
Publication number: 20230325998
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Patent number: 11721007
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Grant
Filed: November 8, 2022
Date of Patent: August 8, 2023
Assignee: Google LLC
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230222636
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: November 8, 2022
Publication date: July 13, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230094723
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: September 28, 2021
Publication date: March 30, 2023
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Publication number: 20230015117
Abstract: Techniques for tuning an image editing operator for reducing a distractor in raw image data are presented herein. The image editing operator can access the raw image data and a mask. The mask can indicate a region of interest associated with the raw image data. The image editing operator can process the raw image data and the mask to generate processed image data. Additionally, a trained saliency model can process at least the processed image data within the region of interest to generate a saliency map that provides saliency values. Moreover, a saliency loss function can compare the saliency values provided by the saliency map for the processed image data within the region of interest to one or more target saliency values. Subsequently, the one or more parameter values of the image editing operator can be modified based at least in part on the saliency loss function.
Type: Application
Filed: July 1, 2022
Publication date: January 19, 2023
Inventors: Kfir Aberman, David Edward Jacobs, Kai Jochen Kohlhoff, Michael Rubinstein, Yossi Gandelsman, Junfeng He, Inbar Mosseri, Yael Pritch Knaan
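The tuning loop described above — edit, score with a saliency model, compare to a target, update the operator's parameters — can be sketched with a toy one-parameter "dimming" operator and a brightness-based saliency stand-in. Everything here (the operator, the saliency model, finite-difference gradients) is an assumption chosen so the loop is runnable; the patent covers trained models and arbitrary editing operators.

```python
import numpy as np

def saliency_loss(saliency, mask, target=0.0):
    """Compare saliency values inside the region of interest to a target;
    minimizing this drives the edited region to attract less attention."""
    roi = saliency[mask]
    return float(np.mean((roi - target) ** 2))

def tune_operator(image, mask, saliency_model, steps=50, lr=0.5):
    """Tune a single 'dimming' gain of a toy editing operator so the
    saliency model's response inside the mask approaches the target."""
    gain = 1.0
    eps = 1e-3
    for _ in range(steps):
        edited = image * np.where(mask[..., None], gain, 1.0)
        loss = saliency_loss(saliency_model(edited), mask)
        # Finite-difference gradient of the loss w.r.t. the gain.
        edited2 = image * np.where(mask[..., None], gain + eps, 1.0)
        g = (saliency_loss(saliency_model(edited2), mask) - loss) / eps
        gain -= lr * g
    return gain
```

With per-pixel brightness as the saliency stand-in and a target of zero, the loop drives the gain toward zero, i.e., it learns to darken the distractor region until it no longer stands out.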