Patents by Inventor Orly Liba

Orly Liba has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240046532
    Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The palette transform can include a transform attribute for the pixel, which can be different from the inpainted attribute. Furthermore, the system can process, using the palette transform, the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: Kfir Aberman, Yael Pritch Knaan, Orly Liba, David Edward Jacobs
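
The entry above (publication 20240046532) pairs inpainting with a palette transform so that a distractor is recolored rather than fully replaced. As a loose illustration of that idea, the Python sketch below shifts only the chromaticity of the masked pixels toward the inpainted result while keeping the original luminance and texture; the YCbCr color space, the single global chroma offset standing in for the palette transform, and every function name are assumptions made for this sketch, not the method claimed in the filing.

    import numpy as np

    def rgb_to_ycbcr(img):
        """Convert float RGB in [0, 1] to YCbCr (BT.601)."""
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return np.stack([y, cb, cr], axis=-1)

    def ycbcr_to_rgb(img):
        y, cb, cr = img[..., 0], img[..., 1] - 0.5, img[..., 2] - 0.5
        r = y + 1.402 * cr
        g = y - 0.344136 * cb - 0.714136 * cr
        b = y + 1.772 * cb
        return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

    def recolorize(original, inpainted, mask):
        """original, inpainted: float (H, W, 3) in [0, 1]; mask: bool (H, W), True on the distractor."""
        orig_ycc = rgb_to_ycbcr(original)
        inp_ycc = rgb_to_ycbcr(inpainted)
        # Stand-in for the palette transform: mean chroma offset between the
        # inpainted and the original pixels inside the masked region.
        delta = (inp_ycc[mask, 1:] - orig_ycc[mask, 1:]).mean(axis=0)
        out = orig_ycc.copy()
        out[mask, 1:] += delta  # recolor chroma only; luminance, and thus texture, is preserved
        return ycbcr_to_rgb(out)
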
  • Patent number: 11854120
    Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The palette transform can include a transform attribute for the pixel, which can be different from the inpainted attribute. Furthermore, the system can process, using the palette transform, the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: December 26, 2023
    Assignee: Google LLC
    Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
  • Publication number: 20230325985
    Abstract: A method includes receiving an input image. The input image corresponds to one or more masked regions to be inpainted. The method includes providing the input image to a first neural network. The first neural network outputs a first inpainted image at a first resolution, and the one or more masked regions are inpainted in the first inpainted image. The method includes creating a second inpainted image by increasing a resolution of the first inpainted image from the first resolution to a second resolution. The second resolution is greater than the first resolution such that the one or more inpainted masked regions have an increased resolution. The method includes providing the second inpainted image to a second neural network. The second neural network outputs a first refined inpainted image at the second resolution, and the first refined inpainted image is a refined version of the second inpainted image.
    Type: Application
    Filed: October 14, 2021
    Publication date: October 12, 2023
    Inventors: Soo Ye Kim, Orly Liba, Rahul Garg, Nori Kanazawa, Neal Wadhwa, Kfir Aberman, Huiwen Chang
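
Publication 20230325985 describes a two-network flow: inpaint at a reduced resolution, upsample the result, then refine it at full resolution with a second network. The hedged PyTorch sketch below shows only that coarse-to-fine data flow; the networks are placeholder callables, and the image/mask concatenation, the 256-pixel working resolution, and the function name are assumptions rather than the claimed architecture.

    import torch
    import torch.nn.functional as F

    def two_stage_inpaint(image_hi, mask_hi, coarse_net, refine_net, low_res=256):
        """image_hi: (1, 3, H, W) float tensor; mask_hi: (1, 1, H, W), 1 = region to inpaint."""
        h, w = image_hi.shape[-2:]
        # Stage 1: inpaint at a reduced resolution.
        image_lo = F.interpolate(image_hi, size=(low_res, low_res), mode="bilinear",
                                 align_corners=False)
        mask_lo = F.interpolate(mask_hi, size=(low_res, low_res), mode="nearest")
        coarse = coarse_net(torch.cat([image_lo * (1 - mask_lo), mask_lo], dim=1))
        # Stage 2: upsample the coarse result to the full resolution ...
        coarse_hi = F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)
        # ... and refine it with the second network, again conditioned on the mask.
        refined = refine_net(torch.cat([coarse_hi, mask_hi], dim=1))
        # Keep known pixels from the input; use the refined prediction inside the holes.
        return image_hi * (1 - mask_hi) + refined * mask_hi
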
  • Publication number: 20230325998
    Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
    Type: Application
    Filed: June 14, 2023
    Publication date: October 12, 2023
    Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
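
The personalized-prior filings above describe confining a generative model's latent codes to a region of latent space associated with one subject. A minimal sketch of that idea, assuming a pretrained inversion encoder and generator are available as plain callables (both names are hypothetical and not taken from the filing), is to fit a simple per-dimension Gaussian over the subject's inverted latents and clamp any candidate latent toward it before decoding.

    import numpy as np

    def fit_personal_prior(subject_images, encoder):
        """Fit a per-dimension Gaussian over latents of one subject's images."""
        latents = np.stack([encoder(img) for img in subject_images])  # (N, D)
        return latents.mean(axis=0), latents.std(axis=0) + 1e-6

    def project_to_prior(latent, mean, std, max_sigma=2.0):
        """Clamp a candidate latent to stay within max_sigma of the subject's prior."""
        z = (latent - mean) / std
        return mean + np.clip(z, -max_sigma, max_sigma) * std

    def personalized_edit(degraded_image, encoder, generator, mean, std):
        """Invert a degraded photo, pull its latent toward the subject's prior,
        and decode, so the output keeps the subject's identifying features."""
        return generator(project_to_prior(encoder(degraded_image), mean, std))

The clamp here is just one way to express such a constraint; it illustrates the role of the prior, not the specific mechanism claimed.
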
  • Patent number: 11721007
    Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: August 8, 2023
    Assignee: Google LLC
    Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
  • Publication number: 20230222636
    Abstract: Systems and methods for identifying a personalized prior within a generative model's latent vector space based on a set of images of a given subject. In some examples, the present technology may further include using the personalized prior to confine the inputs of a generative model to a latent vector space associated with the given subject, such that when the model is tasked with editing an image of the subject (e.g., to perform inpainting to fill in masked areas, improve resolution, or deblur the image), the subject's identifying features will be reflected in the images the model produces.
    Type: Application
    Filed: November 8, 2022
    Publication date: July 13, 2023
    Inventors: Kfir Aberman, Yotam Nitzan, Orly Liba, Yael Pritch Knaan, Qiurui He, Inbar Mosseri, Yossi Gandelsman, Michal Yarom
  • Publication number: 20230118361
    Abstract: A media application receives user input that indicates one or more objects to be erased from a media item. The media application translates the user input to a bounding box. The media application provides a crop of the media item based on the bounding box to a segmentation machine-learning model. The segmentation machine-learning model outputs a segmentation mask for one or more segmented objects in the crop of the media item and a corresponding segmentation score that indicates a quality of the segmentation mask.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 20, 2023
    Applicant: Google LLC
    Inventors: Orly Liba, Navin Sarma, Yael Pritch Knaan, Alexander Schiffhauer, Longqi Cai, David Jacobs, Huizhong Chen, Siyang Li, Bryan Feldman
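
Publication 20230118361 translates user input into a bounding box, crops the media item, and runs a segmentation model that returns a mask plus a quality score. The sketch below shows only that plumbing; the padding factor, the seg_model callable and its (mask, score) return convention, and the function names are assumptions, not the product's API.

    import numpy as np

    def strokes_to_bbox(stroke_points, image_shape, pad=0.1):
        """Turn user stroke coordinates into a padded bounding box (x0, y0, x1, y1)."""
        h, w = image_shape[:2]
        xs, ys = zip(*stroke_points)
        x0, x1 = min(xs), max(xs)
        y0, y1 = min(ys), max(ys)
        px, py = int((x1 - x0) * pad), int((y1 - y0) * pad)
        return (max(0, x0 - px), max(0, y0 - py), min(w, x1 + px), min(h, y1 + py))

    def segment_selected_objects(image, stroke_points, seg_model):
        """Crop around the user's strokes and run the segmentation model.

        seg_model is assumed to return (mask, score): a binary mask for the crop
        and a score indicating the quality of that mask.
        """
        x0, y0, x1, y1 = strokes_to_bbox(stroke_points, image.shape)
        crop = image[y0:y1, x0:x1]
        mask_crop, score = seg_model(crop)
        # Paste the crop-level mask back into a full-resolution mask.
        full_mask = np.zeros(image.shape[:2], dtype=bool)
        full_mask[y0:y1, x0:x1] = mask_crop
        return full_mask, score
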
  • Publication number: 20230118460
    Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 20, 2023
    Applicant: Google LLC
    Inventors: Orly Liba, Nikhil Karnad, Nori Kanazawa, Yael Pritch Knaan, Huizhong Chen, Longqi Cai
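
Publication 20230118460 covers training a segmentation model on pairs of media items in which distracting objects were manually segmented. A hedged PyTorch training-loop sketch under that assumption is shown below; the dataset yielding (image, mask) tensor pairs, the binary cross-entropy loss, and the optimizer are illustrative choices, not the method in the filing.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader

    def train_segmenter(model, paired_dataset, epochs=10, lr=1e-4, device="cpu"):
        """paired_dataset yields (image, mask) tensors of shapes (3, H, W) and (1, H, W)."""
        loader = DataLoader(paired_dataset, batch_size=8, shuffle=True)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        bce = nn.BCEWithLogitsLoss()
        model.to(device).train()
        for _ in range(epochs):
            for images, masks in loader:
                images, masks = images.to(device), masks.to(device)
                logits = model(images)          # predicted distractor-mask logits
                loss = bce(logits, masks)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model
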
  • Publication number: 20230094723
    Abstract: Techniques for reducing a distractor object in a first image are presented herein. A system can access a mask and the first image. A distractor object in the first image can be inside a region of interest and can have a pixel with an original attribute. Additionally, the system can process, using a machine-learned inpainting model, the first image and the mask to generate an inpainted image. The pixel of the distractor object in the inpainted image can have an inpainted attribute in chromaticity channels. Moreover, the system can determine a palette transform based on a comparison of the first image and the inpainted image. The palette transform can include a transform attribute for the pixel, which can be different from the inpainted attribute. Furthermore, the system can process, using the palette transform, the first image to generate a recolorized image. The pixel in the recolorized image can have a recolorized attribute based on the transform attribute of the palette transform.
    Type: Application
    Filed: September 28, 2021
    Publication date: March 30, 2023
    Inventors: Kfir Aberman, Yael Pritch Knaan, David Edward Jacobs, Orly Liba
  • Publication number: 20230037958
    Abstract: A system includes a computing device. The computing device is configured to perform a set of functions. The set of functions includes receiving an image, wherein the image comprises a two-dimensional array of data. The set of functions includes extracting, by a two-dimensional neural network, a plurality of two-dimensional features from the two-dimensional array of data. The set of functions includes generating a linear combination of the plurality of two-dimensional features to form a single three-dimensional input feature. The set of functions includes extracting, by a three-dimensional neural network, a plurality of three-dimensional features from the single three-dimensional input feature. The set of functions includes determining a two-dimensional depth map. The two-dimensional depth map contains depth information corresponding to the plurality of three-dimensional features.
    Type: Application
    Filed: December 24, 2020
    Publication date: February 9, 2023
    Inventors: Orly Liba, Rahul Garg, Neal Wadhwa, Jon Barron, Hayato Ikoma
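
Publication 20230037958 extracts 2D features, forms a linear combination of them into a single 3D input feature, runs a 3D network, and produces a depth map. The toy PyTorch module below only illustrates that reshaping of a learned combination of 2D feature maps into a volume ahead of 3D convolutions; the layer sizes, the grayscale input, and the class name are invented for the sketch.

    import torch
    from torch import nn

    class DepthFrom2DTo3D(nn.Module):
        def __init__(self, n_2d_features=32, depth_planes=16):
            super().__init__()
            self.backbone_2d = nn.Conv2d(1, n_2d_features, kernel_size=3, padding=1)
            # Learned linear combination of the 2D feature maps into depth_planes slices.
            self.combine = nn.Conv2d(n_2d_features, depth_planes, kernel_size=1)
            self.backbone_3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)
            self.to_depth = nn.Conv2d(8 * depth_planes, 1, kernel_size=1)

        def forward(self, image):                           # image: (B, 1, H, W)
            feats_2d = torch.relu(self.backbone_2d(image))  # (B, C, H, W)
            volume = self.combine(feats_2d)                 # (B, D, H, W)
            volume = volume.unsqueeze(1)                    # (B, 1, D, H, W) single 3D feature
            feats_3d = torch.relu(self.backbone_3d(volume)) # (B, 8, D, H, W)
            b, c, d, h, w = feats_3d.shape
            # Collapse the 3D features back to a per-pixel depth prediction.
            return self.to_depth(feats_3d.reshape(b, c * d, h, w))  # (B, 1, H, W)
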
  • Publication number: 20220230323
    Abstract: A device automatically segments an image into different regions and automatically adjusts perceived exposure-levels or other characteristics associated with each of the different regions, to produce pictures that exceed expectations for the type of optics and camera equipment being used and in some cases, the pictures even resemble other high-quality photography created using professional equipment and photo editing software. A machine-learned model is trained to automatically segment an image into distinct regions. The model outputs one or more masks that define the distinct regions. The mask(s) are refined using a guided filter or other technique to ensure that edges of the mask(s) conform to edges of objects depicted in the image. By applying the mask(s) to the image, the device can individually adjust respective characteristics of each of the different regions to produce a higher-quality picture of a scene.
    Type: Application
    Filed: July 15, 2019
    Publication date: July 21, 2022
    Applicant: Google LLC
    Inventors: Orly Liba, Florian Kainz, Longqi Cai, Yael Pritch Knaan
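
Publication 20220230323 segments an image into regions (for example sky and foreground), refines the mask edges, and adjusts the perceived exposure of each region separately. Assuming the soft mask has already been produced and edge-refined (for instance with a guided filter), the per-region adjustment reduces to a mask-weighted blend of differently exposed renditions, as in the small sketch below; the gains and names are illustrative only.

    import numpy as np

    def adjust_regions(image, sky_mask, sky_gain=0.8, foreground_gain=1.2):
        """image: float (H, W, 3) in [0, 1]; sky_mask: float (H, W) in [0, 1], 1 = sky.

        Blends two differently exposed renditions using the soft mask so the sky
        and the foreground each receive an appropriate perceived exposure.
        """
        mask = sky_mask[..., None]              # broadcast the mask over color channels
        sky_adjusted = np.clip(image * sky_gain, 0.0, 1.0)
        fg_adjusted = np.clip(image * foreground_gain, 0.0, 1.0)
        return mask * sky_adjusted + (1.0 - mask) * fg_adjusted
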
  • Patent number: 10716867
    Abstract: A composition includes a plurality of gold nanoparticles each having at least one surface. The gold nanoparticles have an average length of at least about 90 nm and an average width of at least about 25 nm.
    Type: Grant
    Filed: February 5, 2016
    Date of Patent: July 21, 2020
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Adam De La Zerda, Orly Liba, Elliott Sorelle, Bryan Knysh
  • Publication number: 20180299251
    Abstract: An apparatus includes a light splitter to receive a light beam and direct a first portion of the light beam to a reference arm and a second portion of the light beam to a sample arm. The sample arm includes a phase scrambler, in a path of the second portion of the light beam, to cause local-random-time varying phase modulation to the second portion of the light beam. The sample arm also includes a controller to change the local phase of the second portion of the light. The apparatus further includes a detector, in optical communication with the reference arm and the sample arm, to detect an interference pattern produced by the first portion of the light beam propagated through the reference arm and the second portion of the light beam scattered from the sample via the sample arm.
    Type: Application
    Filed: October 19, 2016
    Publication date: October 18, 2018
    Applicant: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Orly Liba, Matthew D. Lew, Elliott D. Sorelle, Adam De La Zerda
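
Publication 20180299251 is an optical apparatus (a phase scrambler in the sample arm of an interferometer), so there is no software to reproduce, but the effect it exploits can be illustrated numerically: randomizing the local phase of the sample light between acquisitions decorrelates speckle, so averaging the detected interference intensity over acquisitions suppresses it. The toy simulation below uses a crude scatterer model with made-up numbers and is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n_scatterers, n_acquisitions = 200, 64
    scatterer_phases = rng.uniform(0, 2 * np.pi, n_scatterers)   # fixed "sample"
    reference = 1.0                                              # reference-arm field

    def intensity(extra_phase):
        """Detected interference intensity for one acquisition."""
        field = np.exp(1j * (scatterer_phases + extra_phase)).sum() / np.sqrt(n_scatterers)
        return np.abs(reference + field) ** 2

    static = intensity(0.0)     # no scrambling: a single speckle realization
    scrambled = np.mean([intensity(rng.uniform(0, 2 * np.pi, n_scatterers))
                         for _ in range(n_acquisitions)])
    print(f"single realization: {static:.2f}, phase-scrambled average: {scrambled:.2f}")
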
  • Publication number: 20180264144
    Abstract: A composition includes a plurality of gold nanoparticles each having at least one surface. The gold nanoparticles have an average length of at least about 90 nm and an average width of at least about 25 nm.
    Type: Application
    Filed: February 5, 2016
    Publication date: September 20, 2018
    Applicant: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Adam De La Zerda, Orly Liba, Elliott Sorelle, Bryan Knysh
  • Patent number: 8879841
    Abstract: In accordance with an embodiment of the invention, an anisotropic denoising method is provided that removes sensor noise from a digital image while retaining edges, lines, and details in the image. In one embodiment, the method removes noise from a pixel of interest based on the detected type of image environment in which the pixel is situated. If the pixel is situated in an edge/line image environment, then denoising of the pixel is increased such that relatively stronger denoising of the pixel occurs along the edge or line feature. If the pixel is situated in a detail image environment, then denoising of the pixel is decreased such that relatively less denoising of the pixel occurs so as to preserve the details in the image. In one embodiment, detection of the type of image environment is accomplished by performing simple arithmetic operations using only pixels in a 9 pixel by 9 pixel matrix of pixels in which the pixel of interest is situated.
    Type: Grant
    Filed: March 1, 2011
    Date of Patent: November 4, 2014
    Assignee: Fotonation Limited
    Inventors: Noy Cohen, Jeffrey Danowitz, Orly Liba
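
Patent 8879841 adapts denoising strength to the local image environment computed from a 9 pixel by 9 pixel neighborhood. The toy sketch below classifies a patch as edge/line-like or detail-like from its horizontal and vertical gradient energy and then either smooths strongly along the feature or only lightly; the threshold and blend weights are assumptions, not the patented values.

    import numpy as np

    def denoise_pixel(window):
        """window: 9x9 grayscale patch centered on the pixel of interest."""
        gy, gx = np.gradient(window.astype(float))
        ex, ey = np.abs(gx).mean(), np.abs(gy).mean()
        anisotropy = abs(ex - ey) / (ex + ey + 1e-6)
        center = window[4, 4]
        if anisotropy > 0.5:                      # edge/line environment
            # Smooth along the feature: average along the low-gradient direction.
            line = window[4, :] if ex < ey else window[:, 4]
            return 0.5 * center + 0.5 * line.mean()
        # Detail environment: denoise only weakly to preserve fine structure.
        return 0.9 * center + 0.1 * window.mean()
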
  • Patent number: 8687894
    Abstract: In an embodiment, a device comprises a plurality of elements configured to apply a filter to multiple groups of pixels in a neighborhood of pixels surrounding a particular pixel to generate a matrix of filtered values; compute, from the matrix of filtered values, a first set of gradients along a first direction and a second set of gradients along a second and different direction; determine how many directional changes are experienced by the gradients in the first set of gradients and the gradients in the second set of gradients; compute a first weighted value for a first direction and a second weighted value for a second direction; and based, at least in part, upon the first and second weighted values, compute an overall texture characterization value for the particular pixel, wherein the overall texture characterization value indicates a type of image environment in which the particular pixel is located.
    Type: Grant
    Filed: October 15, 2010
    Date of Patent: April 1, 2014
    Assignee: DigitalOptics Corporation Europe Limited
    Inventors: Orly Liba, Noy Cohen, Jeffrey Danowitz
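
Patent 8687894 builds a per-pixel texture characterization value from filtered pixel groups, directional gradients, and the number of directional changes those gradients exhibit. The sketch below follows that outline with an invented weighting; it is not the claimed computation.

    import numpy as np

    def texture_value(window):
        """window: small grayscale neighborhood around the pixel of interest."""
        # Low-pass each 2x2 group of pixels to suppress noise before differencing.
        filtered = (window[:-1, :-1] + window[1:, :-1] +
                    window[:-1, 1:] + window[1:, 1:]) / 4.0
        gx = np.diff(filtered, axis=1)            # gradients along one direction
        gy = np.diff(filtered, axis=0)            # gradients along the other direction
        # Count directional (sign) changes along each direction.
        changes_x = np.count_nonzero(np.diff(np.sign(gx), axis=1))
        changes_y = np.count_nonzero(np.diff(np.sign(gy), axis=0))
        # Weight gradient energy by how often its direction flips: many flips
        # suggest texture or detail, few flips suggest a clean edge or a flat area.
        wx = np.abs(gx).mean() * (1 + changes_x)
        wy = np.abs(gy).mean() * (1 + changes_y)
        return wx + wy
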
  • Patent number: 8582890
    Abstract: In an embodiment, a device comprises a plurality of elements, including logical elements, wherein the elements are configured to perform the operations of: in a neighborhood of pixels surrounding and including a particular pixel, applying a filter to multiple groups of pixels in the neighborhood to generate a set of filtered values; generating, based at least in part upon the set of filtered values, one or more sets of gradient values; based at least in part upon the one or more sets of gradient values, computing a first metric for an image environment in which the particular pixel is situated; determining a second metric for the image environment in which the particular pixel is situated, wherein the second metric distinguishes between a detail environment and an edge/line environment; and based at least in part upon the first metric and the second metric, computing a gradient improvement (GI) metric for the particular pixel.
    Type: Grant
    Filed: January 10, 2011
    Date of Patent: November 12, 2013
    Assignee: DigitalOptics Corporation Europe Limited
    Inventor: Orly Liba
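
Patent 8582890 combines a gradient-based first metric with a second metric that discriminates detail environments into a single gradient improvement (GI) value per pixel. A minimal, assumed combination rule (not the patented formula) could look like the following:

    import numpy as np

    def gradient_improvement(edge_metric, detail_metric, detail_threshold=0.3):
        """edge_metric, detail_metric: per-pixel float arrays scaled to [0, 1]."""
        # Attenuate the edge-driven response wherever the detail metric indicates a
        # fine-detail environment, so processing driven by the GI value keeps texture.
        detail_weight = np.clip(detail_metric / detail_threshold, 0.0, 1.0)
        return edge_metric * (1.0 - 0.5 * detail_weight)
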
  • Patent number: 8488031
    Abstract: A chromatic noise reduction method is provided for removing chromatic noise from the pixels of a mosaic image. In one implementation, an actual chroma value and a de-noised chroma value are derived for the central pixel of a matrix of pixels. Based at least in part upon these chroma values, a final chroma value is derived for the central pixel. The final chroma value is then used, along with the actual luminance of the central pixel, to derive a final de-noised pixel value for the central pixel. By de-noising the central pixel based on its chroma (which takes into account more than one color) rather than on just the color channel of the central pixel, this method allows the central pixel to be de-noised in a more color-coordinated fashion. As a result, improved chromatic noise reduction is achieved.
    Type: Grant
    Filed: January 14, 2011
    Date of Patent: July 16, 2013
    Assignee: DigitalOptics Corporation Europe Limited
    Inventors: Tomer Schwartz, Eyal Ben-Eliezer, Orly Liba, Noy Cohen
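
Patent 8488031 (and the corresponding publication 20120182454 below) denoises the central pixel of a mosaic image through its chroma rather than its single color channel: an actual and a de-noised chroma value are blended into a final chroma, which is then recombined with the pixel's actual luminance. The sketch below shows that recombination on a simple luma/chroma split; the fixed blend weight and the assumption that the neighborhood is already demosaiced to RGB are simplifications, not the patented processing.

    import numpy as np

    def denoise_pixel_chroma(center_rgb, neighborhood_rgb, blend=0.5):
        """center_rgb: (3,) float RGB; neighborhood_rgb: (N, 3) float RGB pixels around it."""
        center_rgb = np.asarray(center_rgb, dtype=float)
        neighborhood_rgb = np.asarray(neighborhood_rgb, dtype=float)
        weights = np.array([0.299, 0.587, 0.114])
        luma_center = float(center_rgb @ weights)
        chroma_actual = center_rgb - luma_center                 # actual chroma of the pixel
        luma_nb = neighborhood_rgb @ weights
        chroma_denoised = (neighborhood_rgb - luma_nb[:, None]).mean(axis=0)
        # Final chroma: a blend of the actual and the de-noised chroma values.
        chroma_final = blend * chroma_actual + (1.0 - blend) * chroma_denoised
        # Recombine with the pixel's actual luminance to get the de-noised pixel value.
        return luma_center + chroma_final
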
  • Publication number: 20120224784
    Abstract: In accordance with an embodiment of the invention, an anisotropic denoising method is provided that removes sensor noise from a digital image while retaining edges, lines, and details in the image. In one embodiment, the method removes noise from a pixel of interest based on the detected type of image environment in which the pixel is situated. If the pixel is situated in an edge/line image environment, then denoising of the pixel is increased such that relatively stronger denoising of the pixel occurs along the edge or line feature. If the pixel is situated in a detail image environment, then denoising of the pixel is decreased such that relatively less denoising of the pixel occurs so as to preserve the details in the image. In one embodiment, detection of the type of image environment is accomplished by performing simple arithmetic operations using only pixels in a 9 pixel by 9 pixel matrix of pixels in which the pixel of interest is situated.
    Type: Application
    Filed: March 1, 2011
    Publication date: September 6, 2012
    Applicant: Tessera Technologies Ireland Limited
    Inventors: Noy Cohen, Jeffrey Danowitz, Orly Liba
  • Publication number: 20120182454
    Abstract: A chromatic noise reduction method is provided for removing chromatic noise from the pixels of a mosaic image. In one implementation, an actual chroma value and a de-noised chroma value are derived for the central pixel of a matrix of pixels. Based at least in part upon these chroma values, a final chroma value is derived for the central pixel. The final chroma value is then used, along with the actual luminance of the central pixel, to derive a final de-noised pixel value for the central pixel. By de-noising the central pixel based on its chroma (which takes into account more than one color) rather than on just the color channel of the central pixel, this method allows the central pixel to be de-noised in a more color-coordinated fashion. As a result, improved chromatic noise reduction is achieved.
    Type: Application
    Filed: January 14, 2011
    Publication date: July 19, 2012
    Applicant: Tessera Technologies Ireland, Ltd.
    Inventors: Tomer Schwartz, Eyal Ben-Eliezer, Orly Liba, Noy Cohen