Patents by Inventor Yael Pritch Knaan
Yael Pritch Knaan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240046532
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Kfir Aberman, Yael Pritch Knaan, Orly Liba, David Edward Jacobs
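The recolorization idea in this abstract can be sketched in a few lines: compare the original and inpainted chroma inside the mask, derive a transform (simplified here to a mean chroma shift rather than a full palette transform), and apply it to the chroma channels only, leaving luma intact. The opponent-style color layout and the mean-shift simplification are assumptions for illustration, not the patented method:

```python
import numpy as np

def recolorize(image, inpainted, mask):
    """Recolor distractor pixels toward the inpainted chroma.

    image, inpainted: float arrays of shape (H, W, 3) in an
    opponent-style space (channel 0 = luma, channels 1-2 = chroma).
    mask: boolean (H, W) array marking the distractor region.
    """
    out = image.copy()
    # "Palette transform", simplified to a mean chroma shift derived
    # by comparing the original and inpainted chroma inside the mask.
    shift = (inpainted[mask][:, 1:] - image[mask][:, 1:]).mean(axis=0)
    # Apply the transform to the chroma channels only; luma is kept.
    out[mask, 1:] = image[mask, 1:] + shift
    return out
```

A real palette transform would be spatially varying; the mean shift is only the simplest instance of "recolorize based on a comparison of the two images."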
-
Publication number: 20240037822
Abstract: Some implementations are directed to editing a source image, where the source image is one generated based on processing a source natural language (NL) prompt using a large-scale language-image (LLI) model. Those implementations edit the source image based on user interface input that indicates an edit to the source NL prompt, and optionally independent of any user interface input that specifies a mask in the source image and/or independent of any other user interface input. Some implementations of the present disclosure are additionally or alternatively directed to applying prompt-to-prompt editing techniques to editing a source image that is one generated based on a real image, and that approximates the real image.
Type: Application
Filed: July 31, 2023
Publication date: February 1, 2024
Inventors: Kfir Aberman, Amir Hertz, Yael Pritch Knaan, Ron Mokady, Jay Tenenbaum, Daniel Cohen-Or
-
Patent number: 11854120
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Grant
Filed: September 28, 2021
Date of Patent: December 26, 2023
Assignee: Google LLC
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Publication number: 20230342890
Abstract: Systems and methods for augmenting images can utilize one or more image augmentation models and one or more texture transfer blocks. The image augmentation model can process input images and one or more segmentation masks to generate first output data. The first output data and the one or more segmentation masks can be processed with the texture transfer block to generate an augmented image. The input image can depict a scene with one or more occlusions, and the augmented image can depict the scene with the one or more occlusions replaced with predicted pixel data.
Type: Application
Filed: April 22, 2022
Publication date: October 26, 2023
Inventors: Noritsugu Kanazawa, Neal Wadhwa, Yael Pritch Knaan
-
Patent number: 11792553
Abstract: The present disclosure provides systems and methods that leverage neural networks for high resolution image segmentation. A computing system can include a processor, a machine-learned image segmentation model comprising a semantic segmentation neural network and an edge refinement neural network, and at least one tangible, non-transitory computer readable medium that stores instructions that cause the processor to perform operations. The operations can include obtaining an image, inputting the image into the semantic segmentation neural network, receiving, as an output of the semantic segmentation neural network, a semantic segmentation mask, inputting at least a portion of the image and at least a portion of the semantic segmentation mask into the edge refinement neural network, and receiving, as an output of the edge refinement neural network, the refined semantic segmentation mask.
Type: Grant
Filed: November 13, 2020
Date of Patent: October 17, 2023
Assignee: Google LLC
Inventors: Noritsugu Kanazawa, Yael Pritch Knaan
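The two-stage pipeline this abstract describes, a coarse semantic network followed by an edge-refinement network that sees both the image and the coarse mask, can be sketched as plain function composition. Both networks are stand-in callables below, not the patented architectures:

```python
import numpy as np

def segment_with_refinement(image, coarse_net, edge_net):
    """Two-stage segmentation sketch.

    coarse_net: callable mapping an (H, W, C) image to an (H, W)
    soft mask in [0, 1].
    edge_net: callable mapping the image stacked with the coarse
    mask, shape (H, W, C + 1), to a refined (H, W) mask.
    """
    coarse = coarse_net(image)            # coarse semantic mask
    stacked = np.dstack([image, coarse])  # image channels + mask channel
    return edge_net(stacked)              # boundary-refined mask
```

Feeding the coarse mask back in alongside the image is what lets the second network sharpen edges without re-solving the whole segmentation problem.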
-
Publication number: 20230325998
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
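One way to picture "confining inputs to a latent vector space associated with the subject" is a linear projection onto the span of that subject's latents. The patent's personalized prior is learned, not a projection, so the sketch below is only an intuition aid; the SVD-based span and the `tol` threshold are assumptions:

```python
import numpy as np

def personalize_latent(z, subject_latents, tol=1e-6):
    """Project a latent z onto the affine subspace spanned by a
    subject's latent codes -- a simple linear stand-in for a
    learned personalized prior."""
    A = np.asarray(subject_latents, dtype=float).T  # (dim, n_images)
    mean = A.mean(axis=1)
    # Orthonormal basis for the directions the subject's codes span.
    U, s, _ = np.linalg.svd(A - mean[:, None], full_matrices=False)
    U = U[:, s > tol * s.max()]  # drop numerically-zero directions
    centered = np.asarray(z, dtype=float) - mean
    return mean + U @ (U.T @ centered)
```

Latents already inside the subject's span pass through unchanged; components outside it (which would drift away from the subject's identity) are removed.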
-
Patent number: 11721007
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Grant
Filed: November 8, 2022
Date of Patent: August 8, 2023
Assignee: Google LLC
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230222636
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: November 8, 2022
Publication date: July 13, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230118361
Abstract: A media application receives user input that indicates one or more objects to be erased from a media item. The media application translates the user input to a bounding box. The media application provides a crop of the media item based on the bounding box to a segmentation machine-learning model. The segmentation machine-learning model outputs a segmentation mask for one or more segmented objects in the crop of the media item and a corresponding segmentation score that indicates a quality of the segmentation mask.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly Liba, Navin Sarma, Yael Pritch Knaan, Alexander Schiffhauer, Longqi Cai, David Jacobs, Huizhong Chen, Siyang Li, Bryan Feldman
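The gesture-to-mask flow in this abstract can be sketched as: pad and clip a bounding box around the user's touch points, crop the image, and hand the crop to a segmentation model that returns a mask plus a quality score. The (row, col) point format, padding value, and stub model are assumptions:

```python
import numpy as np

def bounding_box(points, pad, shape):
    """Padded bounding box around user touch points, clipped to the
    image bounds. Returns (y0, x0, y1, x1) half-open coordinates."""
    ys, xs = zip(*points)
    y0, x0 = max(min(ys) - pad, 0), max(min(xs) - pad, 0)
    y1 = min(max(ys) + pad + 1, shape[0])
    x1 = min(max(xs) + pad + 1, shape[1])
    return y0, x0, y1, x1

def erase_candidates(image, points, seg_model, pad=8):
    """Crop around the gesture and run a segmentation model that
    returns (mask, score); seg_model is a stub callable here."""
    y0, x0, y1, x1 = bounding_box(points, pad, image.shape[:2])
    mask, score = seg_model(image[y0:y1, x0:x1])
    return (y0, x0, y1, x1), mask, score
```

The score lets the application decide whether the mask is good enough to offer as an erase suggestion or should be discarded.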
-
Publication number: 20230118460
Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly Liba, Nikhil Karnad, Nori Kanazawa, Yael Pritch Knaan, Huizhong Chen, Longqi Cai
-
Publication number: 20230094723
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: September 28, 2021
Publication date: March 30, 2023
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Patent number: 11599747
Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
Type: Grant
Filed: November 6, 2020
Date of Patent: March 7, 2023
Assignee: Google LLC
Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen
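The left-side and right-side sub-views described here differ by a tiny, defocus-dependent horizontal shift. A brute-force version of that classical disparity cue, which the patented system replaces with a learned depth network, looks like this (the exhaustive search and wrap-around `np.roll` are simplifications for illustration):

```python
import numpy as np

def dp_disparity(left, right, max_shift=4):
    """Crude global disparity between dual-pixel sub-views: pick the
    integer horizontal shift of `right` that best matches `left` by
    mean squared error. Real systems estimate this per-pixel, at
    sub-pixel precision, and with a trained network."""
    shifts = list(range(-max_shift, max_shift + 1))
    errors = [((left - np.roll(right, s, axis=1)) ** 2).mean()
              for s in shifts]
    return shifts[int(np.argmin(errors))]
```

Because the dual-pixel baseline is sub-millimeter, these shifts are tiny and noisy, which is exactly why the patent trains a machine learning system on dual-pixel data instead of relying on the raw cue.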
-
Publication number: 20230015117
Abstract: Techniques for tuning an image editing operator for reducing a distractor in raw image data are presented herein. The image editing operator can access the raw image data and a mask. The mask can indicate a region of interest associated with the raw image data. The image editing operator can process the raw image data and the mask to generate processed image data. Additionally, a trained saliency model can process at least the processed image data within the region of interest to generate a saliency map that provides saliency values. Moreover, a saliency loss function can compare the saliency values provided by the saliency map for the processed image data within the region of interest to one or more target saliency values. Subsequently, the one or more parameter values of the image editing operator can be modified based at least in part on the saliency loss function.
Type: Application
Filed: July 1, 2022
Publication date: January 19, 2023
Inventors: Kfir Aberman, David Edward Jacobs, Kai Jochen Kohlhoff, Michael Rubinstein, Yossi Gandelsman, Junfeng He, Inbar Mosseri, Yael Pritch Knaan
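The core of the tuning loop is a loss that pushes saliency inside the region of interest toward a target value (low, if the goal is to make the distractor less eye-catching); the operator's parameters are then updated to reduce this loss. A minimal version of the loss, with the saliency model itself out of scope:

```python
import numpy as np

def saliency_loss(saliency_map, mask, target=0.0):
    """Mean squared gap between saliency values inside the region
    of interest and a target saliency value.

    saliency_map: (H, W) float array from a saliency model.
    mask: boolean (H, W) region-of-interest mask.
    target: desired saliency (0.0 to de-emphasize a distractor).
    """
    region = saliency_map[mask]
    return float(((region - target) ** 2).mean())
```

In the full system this scalar would be differentiated through the saliency model back to the editing operator's parameters; the squared-error form above is an assumption, as the abstract only says the loss "compares" saliency to targets.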
-
Publication number: 20220343525
Abstract: Example implementations relate to joint depth prediction from dual cameras and dual pixels. An example method may involve obtaining a first set of depth information representing a scene from a first source and a second set of depth information representing the scene from a second source. The method may further involve determining, using a neural network, a joint depth map that conveys respective depths for elements in the scene. The neural network may determine the joint depth map based on a combination of the first set of depth information and the second set of depth information. In addition, the method may involve modifying an image representing the scene based on the joint depth map. For example, background portions of the image may be partially blurred based on the joint depth map.
Type: Application
Filed: April 27, 2020
Publication date: October 27, 2022
Inventors: Rahul Garg, Neal Wadhwa, Sean Fanello, Christian Haene, Yinda Zhang, Sergio Orts Escolano, Yael Pritch Knaan, Marc Levoy, Shahram Izadi
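A hand-rolled stand-in for the joint depth network is a per-pixel confidence-weighted average of the two depth sources (dual-camera stereo and dual-pixel defocus). The patent learns the combination with a neural network, so the weighted average below is purely illustrative, and the confidence maps are an assumption:

```python
import numpy as np

def fuse_depths(depth_a, conf_a, depth_b, conf_b, eps=1e-8):
    """Confidence-weighted fusion of two (H, W) depth maps.

    Each source contributes in proportion to its per-pixel
    confidence; eps guards against division by zero where both
    sources are unconfident.
    """
    weight = conf_a + conf_b + eps
    return (depth_a * conf_a + depth_b * conf_b) / weight
```

Dual-camera stereo tends to be confident at long range while dual-pixel defocus helps near the focal plane, which is why combining them (learned or not) beats either alone.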
-
Publication number: 20220230323
Abstract: A device automatically segments an image into different regions and automatically adjusts perceived exposure-levels or other characteristics associated with each of the different regions, to produce pictures that exceed expectations for the type of optics and camera equipment being used and in some cases, the pictures even resemble other high-quality photography created using professional equipment and photo editing software. A machine-learned model is trained to automatically segment an image into distinct regions. The model outputs one or more masks that define the distinct regions. The mask(s) are refined using a guided filter or other technique to ensure that edges of the mask(s) conform to edges of objects depicted in the image. By applying the mask(s) to the image, the device can individually adjust respective characteristics of each of the different regions to produce a higher-quality picture of a scene.
Type: Application
Filed: July 15, 2019
Publication date: July 21, 2022
Applicant: Google LLC
Inventors: Orly Liba, Florian Kainz, Longqi Cai, Yael Pritch Knaan
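Once soft region masks are available, the per-region adjustment step is straightforward: weight each region's adjusted pixels by its mask and sum. Scalar per-region exposure gains and masks that sum to one at each pixel are assumptions in this sketch; the patent covers other per-region characteristics too:

```python
import numpy as np

def adjust_regions(image, masks, gains):
    """Apply a per-region exposure gain using soft masks.

    image: (H, W, 3) float array in [0, 1].
    masks: list of (H, W) soft masks assumed to sum to 1 per pixel
           (e.g. sky vs. foreground).
    gains: one scalar exposure gain per region.
    """
    out = np.zeros_like(image, dtype=float)
    for mask, gain in zip(masks, gains):
        # Each region contributes its gained pixels, feathered by
        # the soft mask so region boundaries blend smoothly.
        out += mask[..., None] * image * gain
    return np.clip(out, 0.0, 1.0)
```

Soft (rather than binary) masks are what make the guided-filter refinement mentioned in the abstract matter: feathered edges avoid visible seams between differently-exposed regions.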
-
Patent number: 11210799
Abstract: A camera may capture an image of a scene and use the image to generate a first and a second subpixel image of the scene. The pair of subpixel images may be represented by a first set of subpixels and a second set of subpixels from the image respectively. Each pixel of the image may include two green subpixels that are respectively represented in the first and second subpixel images. The camera may determine a disparity between a portion of the scene as represented by the pair of subpixel images and may estimate a depth map of the scene that indicates a depth of the portion relative to other portions of the scene based on the disparity and a baseline distance between the two green subpixels. A new version of the image may be generated with a focus upon the portion and with the other portions of the scene blurred.
Type: Grant
Filed: December 5, 2017
Date of Patent: December 28, 2021
Assignee: Google LLC
Inventors: David Jacobs, Rahul Garg, Yael Pritch Knaan, Neal Wadhwa, Marc Levoy
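The depth estimate here follows classic triangulation: depth is inversely proportional to the disparity measured between the two green-subpixel views, scaled by their baseline and the focal length. A minimal sketch, assuming consistent units and with `eps` added only to avoid division by zero at zero disparity:

```python
def depth_from_disparity(disparity, baseline, focal_length, eps=1e-8):
    """Triangulated depth from subpixel disparity.

    disparity: measured shift between the two subpixel views.
    baseline: distance between the two green subpixels (tiny).
    focal_length: in the same length/pixel units as the others.
    """
    return baseline * focal_length / (disparity + eps)
```

Because the subpixel baseline is so small, disparities shrink quickly with distance, which is why this technique resolves depth well for near subjects and is paired with background blur (synthetic bokeh) rather than long-range ranging.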
-
Publication number: 20210067848
Abstract: The present disclosure provides systems and methods that leverage neural networks for high resolution image segmentation. A computing system can include a processor, a machine-learned image segmentation model comprising a semantic segmentation neural network and an edge refinement neural network, and at least one tangible, non-transitory computer readable medium that stores instructions that cause the processor to perform operations. The operations can include obtaining an image, inputting the image into the semantic segmentation neural network, receiving, as an output of the semantic segmentation neural network, a semantic segmentation mask, inputting at least a portion of the image and at least a portion of the semantic segmentation mask into the edge refinement neural network, and receiving, as an output of the edge refinement neural network, the refined semantic segmentation mask.
Type: Application
Filed: November 13, 2020
Publication date: March 4, 2021
Inventors: Noritsugu Kanazawa, Yael Pritch Knaan
-
Publication number: 20210056349
Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
Type: Application
Filed: November 6, 2020
Publication date: February 25, 2021
Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen
-
Patent number: 10860919
Abstract: The present disclosure provides systems and methods that leverage neural networks for high resolution image segmentation. A computing system can include a processor, a machine-learned image segmentation model comprising a semantic segmentation neural network and an edge refinement neural network, and at least one tangible, non-transitory computer readable medium that stores instructions that cause the processor to perform operations. The operations can include obtaining an image, inputting the image into the semantic segmentation neural network, receiving, as an output of the semantic segmentation neural network, a semantic segmentation mask, inputting at least a portion of the image and at least a portion of the semantic segmentation mask into the edge refinement neural network, and receiving, as an output of the edge refinement neural network, the refined semantic segmentation mask.
Type: Grant
Filed: September 27, 2017
Date of Patent: December 8, 2020
Assignee: Google LLC
Inventors: Noritsugu Kanazawa, Yael Pritch Knaan
-
Patent number: 10860889
Abstract: Apparatus and methods related to using machine learning to determine depth maps for dual pixel images of objects are provided. A computing device can receive a dual pixel image of at least a foreground object. The dual pixel image can include a plurality of dual pixels. A dual pixel of the plurality of dual pixels can include a left-side pixel and a right-side pixel that both represent light incident on a single dual pixel element used to capture the dual pixel image. The computing device can be used to train a machine learning system to determine a depth map associated with the dual pixel image. The computing device can provide the trained machine learning system.
Type: Grant
Filed: January 11, 2019
Date of Patent: December 8, 2020
Assignee: Google LLC
Inventors: Yael Pritch Knaan, Marc Levoy, Neal Wadhwa, Rahul Garg, Sameer Ansari, Jiawen Chen