Patents by Inventor Orly Liba
Orly Liba has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12266113
Abstract: A device automatically segments an image into different regions and automatically adjusts perceived exposure levels or other characteristics associated with each of the different regions, to produce pictures that exceed expectations for the type of optics and camera equipment being used; in some cases, the pictures even resemble high-quality photography created using professional equipment and photo editing software. A machine-learned model is trained to automatically segment an image into distinct regions. The model outputs one or more masks that define the distinct regions. The mask(s) are refined using a guided filter or other technique to ensure that edges of the mask(s) conform to edges of objects depicted in the image. By applying the mask(s) to the image, the device can individually adjust respective characteristics of each of the different regions to produce a higher-quality picture of a scene.
Type: Grant
Filed: July 15, 2019
Date of Patent: April 1, 2025
Assignee: Google LLC
Inventors: Orly Liba, Florian Kainz, Longqi Cai, Yael Pritch Knaan
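The per-region adjustment this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the guided filter is replaced by a simple box blur that feathers mask edges, and `adjust_regions` and its gain parameters are hypothetical names.

```python
import numpy as np

def box_blur(mask, k=3):
    # Crude stand-in for the guided filter mentioned in the abstract:
    # feathers mask edges so adjustments blend smoothly at region borders.
    pad = k // 2
    padded = np.pad(mask, pad, mode="edge")
    out = np.zeros_like(mask, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out / (k * k)

def adjust_regions(image, masks, gains):
    # Apply a per-region exposure gain through a feathered (soft) mask.
    out = image.astype(float)
    for mask, gain in zip(masks, gains):
        soft = box_blur(mask.astype(float))
        out = out * (1.0 + soft * (gain - 1.0))
    return np.clip(out, 0, 255)

image = np.full((4, 4), 100.0)
sky = np.zeros((4, 4)); sky[:2, :] = 1.0     # hypothetical "sky" region mask
result = adjust_regions(image, [sky], gains=[1.5])
```

Pixels well inside the mask get the full gain, pixels outside are untouched, and the feathered border transitions between the two.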
-
Publication number: 20250069194
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: November 13, 2024
Publication date: February 27, 2025
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
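One way to picture "confining inputs to a latent vector space associated with the subject" is projection onto a low-dimensional subspace fitted to the subject's latents. The sketch below uses PCA via SVD; the function names and the choice of an affine subspace are assumptions for illustration, not the claimed method.

```python
import numpy as np

def personalized_prior(subject_latents, k=2):
    # Fit a low-dimensional affine subspace to latents derived from a
    # subject's photos; this subspace plays the role of the personalized prior.
    mean = subject_latents.mean(axis=0)
    _, _, vt = np.linalg.svd(subject_latents - mean, full_matrices=False)
    return mean, vt[:k]                 # anchor point and basis directions

def confine(latent, mean, basis):
    # Project an arbitrary latent onto the subject's subspace so the
    # generator's input stays consistent with the subject's identity.
    return mean + (latent - mean) @ basis.T @ basis

rng = np.random.default_rng(0)
subject = rng.normal(size=(8, 4))       # toy latents from 8 subject images
mean, basis = personalized_prior(subject, k=2)
z = rng.normal(size=4)                  # latent proposed by an editing task
z_confined = confine(z, mean, basis)
```

Because the basis rows are orthonormal, confining an already-confined latent is a no-op, which is the defining property of a projection.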
-
Patent number: 12217472
Abstract: A media application generates training data that includes a first set of visual media items and a second set of visual media items, where the first set of visual media items correspond to the second set of visual media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a visual media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
Type: Grant
Filed: October 18, 2022
Date of Patent: February 4, 2025
Assignee: Google LLC
Inventors: Orly Liba, Nikhil Karnad, Nori Kanazawa, Yael Pritch Knaan, Huizhong Chen, Longqi Cai
-
Publication number: 20250037251
Abstract: A method includes obtaining an input image having a region to be inpainted, an indication of the region to be inpainted, and a guide image. The method also includes determining, by an encoder model, a first latent representation of the input image and a second latent representation of the guide image, and generating a combined latent representation based on the first latent representation and the second latent representation. The method additionally includes generating, by a style generative adversarial network model and based on the combined latent representation, an intermediate output image that includes inpainted image content for the region to be inpainted in the input image. The method further includes generating, based on the input image, the indication of the region, and the intermediate output image, an output image representing the input image with the region to be inpainted including the inpainted image content from the intermediate output image.
Type: Application
Filed: January 13, 2022
Publication date: January 30, 2025
Inventors: Orly Liba, Kfir Aberman, Wei Xiong, David Futschik, Yael Pritch Knaan, Daniel Sýkora, Tianfan Xue
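The pipeline's two data-flow steps can be sketched with arrays. The linear blend in `combine_latents` and both function names are assumptions; the abstract does not specify how the two latents are combined, only that a combined representation is formed and the final image is composited from the input and the intermediate output.

```python
import numpy as np

def combine_latents(z_input, z_guide, alpha=0.5):
    # Hypothetical stand-in for combining the two encoder outputs.
    return alpha * z_input + (1.0 - alpha) * z_guide

def composite(input_image, region_mask, intermediate):
    # Final step from the abstract: keep the input image outside the
    # region to be inpainted, take the inpainted content inside it.
    return np.where(region_mask, intermediate, input_image)

inp = np.zeros((3, 3))                         # toy input image
mask = np.zeros((3, 3), dtype=bool); mask[1, 1] = True
inter = np.full((3, 3), 9.0)                   # toy generator output
out = composite(inp, mask, inter)
```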
-
Patent number: 12169911
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Grant
Filed: June 14, 2023
Date of Patent: December 17, 2024
Assignee: Google LLC
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20240394852
Abstract: Implementations described herein relate to methods, computing devices, and non-transitory computer-readable media to generate an output image. In some implementations, a method includes estimating depth for an image to obtain a depth map. The method further includes generating a focal table for the image that includes parameters that indicate a focal range and at least one of a front slope or a back slope. The method further includes determining whether one or more faces are detected in the image. If one or more faces are detected in the image, the method identifies a respective face bounding box for each face and adjusts the focal table to include the face bounding boxes. If no faces are detected in the image, the method scales the focal table. The method further includes applying blur to the image using the focal table and the depth map to generate an output image.
Type: Application
Filed: August 1, 2022
Publication date: November 28, 2024
Applicant: Google LLC
Inventors: Orly LIBA, Lucy YU, Yael Pritch KNAAN
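A focal table as described (a focal range plus front/back slopes) maps depth to blur strength. The sketch below is a plausible reading of those parameters, with hypothetical names: depths inside the range stay sharp, and blur grows linearly outside it at a rate set by each slope.

```python
import numpy as np

def blur_radius(depth, focal_near, focal_far, front_slope, back_slope):
    # Depths inside [focal_near, focal_far] stay sharp; blur grows
    # linearly with distance outside the range, per the two slopes.
    r = np.zeros_like(depth, dtype=float)
    front = depth < focal_near
    back = depth > focal_far
    r[front] = front_slope * (focal_near - depth[front])
    r[back] = back_slope * (depth[back] - focal_far)
    return r

depth = np.array([0.5, 1.0, 2.0, 4.0])      # toy per-pixel depth values
radii = blur_radius(depth, focal_near=1.0, focal_far=2.0,
                    front_slope=2.0, back_slope=1.0)
```

A steeper front slope blurs foreground objects faster than background ones at equal distance from the focal range, which matches how shallow depth-of-field optics behave.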
-
Publication number: 20240378844
Abstract: A media application derives a bystander mask from an image by analyzing the image with a bystander segmentation model, wherein the image depicts a bystander and the bystander mask identifies a plurality of first pixels in the image that are associated with the bystander. The media application derives a shadow mask for the bystander by analyzing the image with a shadow segmentation model, wherein the image and the bystander mask are provided as input to the shadow segmentation model, and wherein the shadow mask identifies a plurality of second pixels in the image that are associated with a shadow of the bystander. The media application modifies the image to update pixel values of the plurality of first pixels and the plurality of second pixels such that the bystander and the shadow are erased from the image.
Type: Application
Filed: May 10, 2023
Publication date: November 14, 2024
Applicant: Google LLC
Inventors: Lucy YU, Andrew LIU, Orly LIBA
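The final modification step amounts to updating the union of the two pixel sets. A minimal sketch, with an assumed constant background fill standing in for whatever replacement the application actually computes (a real system would inpaint):

```python
import numpy as np

def erase(image, bystander_mask, shadow_mask, background_value):
    # The union of the two masks covers both the person and their shadow;
    # a constant fill stands in for proper inpainting here.
    erased = image.copy()
    erased[bystander_mask | shadow_mask] = background_value
    return erased

img = np.arange(9.0).reshape(3, 3)
person = np.zeros((3, 3), dtype=bool); person[0, 0] = True
shadow = np.zeros((3, 3), dtype=bool); shadow[0, 1] = True
out = erase(img, person, shadow, background_value=-1.0)
```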
-
Publication number: 20240355107
Abstract: A method includes receiving training data comprising a plurality of images, one or more identified objects in each of the plurality of images, and a detection score associated with each of the one or more identified objects, wherein the detection score for an object is indicative of a degree to which a portion of an image corresponds to the object. The method also includes training a neural network based on the training data to predict a distractor score for at least one object of the one or more identified objects in an input image, wherein the at least one object is selected based on an associated detection score, and wherein the distractor score for the at least one object is indicative of a perceived visual distraction caused by a presence of the at least one object in the input image. The method additionally includes outputting the trained neural network.
Type: Application
Filed: August 23, 2021
Publication date: October 24, 2024
Inventors: Orly Liba, Michael Garth Milne, Navin Padman Sarma, Doron Kukliansky, Huizhong Chen, Yael Pritch Knaan
-
Publication number: 20240346631
Abstract: A media application detects a bystander in an initial image. The media application generates a bystander box that includes the bystander, wherein all pixels for the bystander are within the bystander box. The media application generates localizer boxes that encompass the bystander and one or more objects that are attached to the bystander. The media application aggregates the bystander box and one or more of the localizer boxes to form an aggregated box. The media application applies a segmenter to the initial image, based on the aggregated box, to segment the bystander and the one or more objects from the initial image to generate a bystander mask, wherein the bystander mask includes a subset of pixels within the aggregated box. The media application generates an inpainted image that replaces all pixels within the bystander mask with pixels that match a background in the initial image.
Type: Application
Filed: June 30, 2022
Publication date: October 17, 2024
Applicant: Google LLC
Inventors: Orly LIBA, Pedro VELEZ, Siyang LI, Huizhong CHEN, Marcel PUYAT, Yanan BAO
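The box-aggregation step is a straightforward bounding-box union. A minimal sketch, assuming `(x0, y0, x1, y1)` box coordinates and hypothetical example boxes:

```python
def aggregate_boxes(boxes):
    # Boxes are (x0, y0, x1, y1); the aggregated box is the smallest box
    # containing the bystander box and every localizer box.
    x0 = min(b[0] for b in boxes)
    y0 = min(b[1] for b in boxes)
    x1 = max(b[2] for b in boxes)
    y1 = max(b[3] for b in boxes)
    return (x0, y0, x1, y1)

bystander = (10, 10, 50, 90)
umbrella = (5, 0, 40, 20)       # hypothetical object attached to the bystander
agg = aggregate_boxes([bystander, umbrella])
```

Running the segmenter on the aggregated box, rather than on the bystander box alone, is what lets the mask capture attached objects that extend beyond the person's own pixels.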
-
Publication number: 20240046532
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: October 18, 2023
Publication date: February 8, 2024
Inventors: Kfir Aberman, Yael Pritch Knaan, Orly Liba, David Edward Jacobs
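The compare-then-recolor flow can be sketched in a few lines. This is a deliberately reduced reading: a real palette transform would be richer than the mean chromaticity shift used here, and both function names are hypothetical.

```python
import numpy as np

def palette_transform(original, inpainted, mask):
    # Compare original and inpainted colors inside the mask; a global
    # mean shift stands in for the full palette transform.
    return (inpainted[mask] - original[mask]).mean(axis=0)

def recolorize(original, mask, shift):
    out = original.astype(float).copy()
    out[mask] += shift                  # recolor only the distractor pixels
    return out

orig = np.zeros((2, 2, 2))              # toy image, 2 chromaticity channels
inp = np.zeros((2, 2, 2)); inp[0, 0] = [4.0, -2.0]
m = np.zeros((2, 2), dtype=bool); m[0, 0] = True
shift = palette_transform(orig, inp, m)
recolored = recolorize(orig, m, shift)
```

Recoloring the original image with the transform, rather than pasting in the inpainted pixels, preserves the distractor's texture while muting its color, which is the point of the technique.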
-
Patent number: 11854120
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Grant
Filed: September 28, 2021
Date of Patent: December 26, 2023
Assignee: Google LLC
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Publication number: 20230325985
Abstract: A method includes receiving an input image. The input image corresponds to one or more masked regions to be inpainted. The method includes providing the input image to a first neural network. The first neural network outputs a first inpainted image at a first resolution, and the one or more masked regions are inpainted in the first inpainted image. The method includes creating a second inpainted image by increasing a resolution of the first inpainted image from the first resolution to a second resolution. The second resolution is greater than the first resolution such that the one or more inpainted masked regions have an increased resolution. The method includes providing the second inpainted image to a second neural network. The second neural network outputs a first refined inpainted image at the second resolution, and the first refined inpainted image is a refined version of the second inpainted image.
Type: Application
Filed: October 14, 2021
Publication date: October 12, 2023
Inventors: Soo Ye KIM, Orly LIBA, Rahul GARG, Nori KANAZAWA, Neal WADHWA, Kfir ABERMAN, Huiwen CHANG
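The coarse-then-refine structure can be sketched with trivial stand-ins for the two networks: a mean fill plays the role of the first (coarse) network, and nearest-neighbor upsampling bridges the two resolutions. Everything here is an illustrative assumption except the overall two-stage shape.

```python
import numpy as np

def coarse_inpaint(image, mask):
    # Stage-1 stand-in: fill masked pixels with the mean of known pixels.
    out = image.astype(float).copy()
    out[mask] = image[~mask].mean()
    return out

def upsample(image, factor=2):
    # Raise resolution between the two stages (nearest neighbor here).
    return np.kron(image, np.ones((factor, factor)))

img = np.array([[1.0, 1.0], [1.0, 0.0]])
hole = np.array([[False, False], [False, True]])
stage1 = coarse_inpaint(img, hole)      # low-resolution inpainted image
stage2_in = upsample(stage1)            # higher-resolution input to the refiner
```

Splitting the work this way lets the first network reason globally at low cost while the second only has to sharpen already-plausible content.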
-
Publication number: 20230325998
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Patent number: 11721007
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Grant
Filed: November 8, 2022
Date of Patent: August 8, 2023
Assignee: Google LLC
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230222636
Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
Type: Application
Filed: November 8, 2022
Publication date: July 13, 2023
Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
-
Publication number: 20230118361
Abstract: A media application receives user input that indicates one or more objects to be erased from a media item. The media application translates the user input to a bounding box. The media application provides a crop of the media item based on the bounding box to a segmentation machine-learning model. The segmentation machine-learning model outputs a segmentation mask for one or more segmented objects in the crop of the media item and a corresponding segmentation score that indicates a quality of the segmentation mask.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly LIBA, Navin SARMA, Yael Pritch KNAAN, Alexander SCHIFFHAUER, Longqi CAI, David JACOBS, Huizhong CHEN, Siyang LI, Bryan FELDMAN
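Translating user input to a bounding box can be as simple as expanding a tap into a clipped square crop. The fixed-radius expansion below is an assumption for illustration; real input might be a scribble or lasso, and the radius would be chosen differently.

```python
def tap_to_box(x, y, radius, width, height):
    # Expand a user's tap into a square crop box, clipped to the image.
    x0 = max(0, x - radius)
    y0 = max(0, y - radius)
    x1 = min(width, x + radius)
    y1 = min(height, y + radius)
    return (x0, y0, x1, y1)

# Tap near the top-left corner of a 100x80 image.
box = tap_to_box(x=5, y=3, radius=10, width=100, height=80)
```

Cropping to this box before segmentation keeps the model's input small and centered on the object the user indicated.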
-
Publication number: 20230118460
Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly LIBA, Nikhil KARNAD, Nori KANAZAWA, Yael Pritch KNAAN, Huizhong CHEN, Longqi CAI
-
Publication number: 20230094723
Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The transform attribute can be different from the inpainted attribute. Furthermore, the system can process the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
Type: Application
Filed: September 28, 2021
Publication date: March 30, 2023
Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
-
Publication number: 20230037958
Abstract: A system includes a computing device. The computing device is configured to perform a set of functions. The set of functions includes receiving an image, wherein the image comprises a two-dimensional array of data. The set of functions includes extracting, by a two-dimensional neural network, a plurality of two-dimensional features from the two-dimensional array of data. The set of functions includes generating a linear combination of the plurality of two-dimensional features to form a single three-dimensional input feature. The set of functions includes extracting, by a three-dimensional neural network, a plurality of three-dimensional features from the single three-dimensional input feature. The set of functions includes determining a two-dimensional depth map. The two-dimensional depth map contains depth information corresponding to the plurality of three-dimensional features.
Type: Application
Filed: December 24, 2020
Publication date: February 9, 2023
Inventors: Orly Liba, Rahul Garg, Neal Wadhwa, Jon Barron, Hayato Ikoma
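The bridging step, forming a single 3D input feature as a linear combination of 2D feature maps, can be written as one tensor contraction. The shapes and weight layout below are assumptions for illustration: each depth slice of the 3D feature is its own weighted combination of the channel maps.

```python
import numpy as np

def to_3d_feature(features_2d, weights):
    # features_2d: (C, H, W) maps from the 2D network; weights: (D, C).
    # Each of the D depth slices is a weighted sum of the C channel maps,
    # yielding a (D, H, W) volume for the 3D network to consume.
    return np.einsum("dc,chw->dhw", weights, features_2d)

c, d, h, w = 3, 4, 2, 2
feats = np.ones((c, h, w))              # toy 2D feature maps
weights = np.full((d, c), 0.5)          # toy combination weights
vol = to_3d_feature(feats, weights)
```

The 3D network then convolves over this volume, and its features are finally collapsed back into a 2D depth map.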
-
Publication number: 20220230323
Abstract: A device automatically segments an image into different regions and automatically adjusts perceived exposure levels or other characteristics associated with each of the different regions, to produce pictures that exceed expectations for the type of optics and camera equipment being used; in some cases, the pictures even resemble high-quality photography created using professional equipment and photo editing software. A machine-learned model is trained to automatically segment an image into distinct regions. The model outputs one or more masks that define the distinct regions. The mask(s) are refined using a guided filter or other technique to ensure that edges of the mask(s) conform to edges of objects depicted in the image. By applying the mask(s) to the image, the device can individually adjust respective characteristics of each of the different regions to produce a higher-quality picture of a scene.
Type: Application
Filed: July 15, 2019
Publication date: July 21, 2022
Applicant: Google LLC
Inventors: Orly Liba, Florian Kainz, Longqi Cai, Yael Pritch Knaan