Patents by Inventor Nikhil Karnad

Nikhil Karnad has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230118460
    Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items corresponds to the second set of media items and includes distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
    Type: Application
    Filed: October 18, 2022
    Publication date: April 20, 2023
    Applicant: Google LLC
    Inventors: Orly Liba, Nikhil Karnad, Nori Kanazawa, Yael Pritch Knaan, Huizhong Chen, Longqi Cai
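The training-data construction described in the abstract above can be illustrated with a toy sketch. The helper names and the difference-based mask derivation here are assumptions for illustration only, not the patent's claimed method:

```python
# Illustrative sketch: deriving segmentation targets from paired media items,
# where the first set contains distracting objects and the second set holds
# the corresponding manually cleaned versions.

def derive_mask(distracted, clean):
    """Return a binary mask marking pixels where the two images differ,
    i.e. where a distracting object was removed during manual cleanup."""
    return [[1 if d != c else 0 for d, c in zip(dr, cr)]
            for dr, cr in zip(distracted, clean)]

def build_training_data(first_set, second_set):
    """Pair each distracted media item with a segmentation mask derived from
    its clean counterpart; the (item, mask) pairs feed the model training."""
    return [(d, derive_mask(d, c)) for d, c in zip(first_set, second_set)]

# Toy 2x3 grayscale images: a "distracting object" occupies the right column.
distracted = [[0, 0, 9],
              [0, 0, 9]]
clean      = [[0, 0, 0],
              [0, 0, 0]]

pairs = build_training_data([distracted], [clean])
print(pairs[0][1])  # mask highlighting the distracting object
```

A real pipeline would of course train on images and human-drawn masks rather than pixel differences; the sketch only shows the pairing of the two media-item sets into supervised examples.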
  • Patent number: 10679361
    Abstract: A video stream may be captured, and may have a plurality of frames including at least a first frame and a second frame. Each of the frames may have a plurality of views obtained from viewpoints that are offset from each other. A source contour, associated with a source view of the first frame, may be retrieved. Camera parameters, associated with an image capture device used to capture the video stream, may also be retrieved. The camera parameters may include a first offset between the source view and a destination view of the first frame. At least the first offset may be used to project the source contour to the destination view to generate a destination contour associated with the destination view.
    Type: Grant
    Filed: December 5, 2017
    Date of Patent: June 9, 2020
    Assignee: Google LLC
    Inventor: Nikhil Karnad
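The offset-based contour projection above can be sketched with a standard pinhole/rectified-stereo disparity model. The function name, per-point depths, and the disparity formula are assumptions for illustration, not the patent's text:

```python
# Hypothetical sketch: projecting a 2D contour from a source view to a
# destination view using the horizontal camera offset between the views.

def project_contour(contour, depths, focal_px, offset_x):
    """Shift each contour point horizontally by its stereo disparity,
    disparity = focal * offset / depth (rectified pinhole views)."""
    projected = []
    for (x, y), z in zip(contour, depths):
        disparity = focal_px * offset_x / z
        projected.append((x - disparity, y))
    return projected

source_contour = [(100.0, 50.0), (120.0, 60.0)]
depths = [2.0, 4.0]        # metres, one depth per contour point
focal_px = 800.0           # focal length in pixels
offset_x = 0.05            # 5 cm horizontal offset between the two views

print(project_contour(source_contour, depths, focal_px, offset_x))
# nearer point (2 m) shifts by 800*0.05/2 = 20 px; farther (4 m) by 10 px
```

The key property, reflected in the abstract, is that only the inter-view offset and camera parameters are needed to carry a contour from one view to another, rather than re-detecting it per view.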
  • Patent number: 10552947
    Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
    Type: Grant
    Filed: November 28, 2017
    Date of Patent: February 4, 2020
    Assignee: Google LLC
    Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides
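The two-threshold classification in the abstract above (blur what lies in front of the first focus depth or behind the second) can be shown with a 1-D toy. The 3-tap box blur and the function name are illustrative assumptions:

```python
# Hypothetical sketch: depth-based blurring with two user-chosen focus depths.
# Pixels nearer than near_focus (foreground) or farther than far_focus
# (background) are blurred; pixels between the two depths stay sharp.

def depth_based_blur(pixels, depths, near_focus, far_focus):
    """Apply a simple 3-tap box average to out-of-focus pixels of a 1-D
    scanline; in-focus pixels are passed through unchanged."""
    out = []
    for i, (p, z) in enumerate(zip(pixels, depths)):
        if near_focus <= z <= far_focus:
            out.append(p)                       # in focus: keep sharp
        else:
            window = pixels[max(0, i - 1):i + 2]
            out.append(sum(window) / len(window))
    return out

scanline = [10, 20, 30, 40]
depths   = [1, 5, 5, 9]     # per-pixel depth; focus range is [2, 8]
print(depth_based_blur(scanline, depths, near_focus=2, far_focus=8))
```

A production implementation on a light-field image would vary blur radius with distance from the focus range; the sketch only demonstrates the foreground/background partition driven by the two depths.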
  • Patent number: 10440407
    Abstract: A combined video of a scene may be generated for applications such as virtual reality or augmented reality. In one method, a data store may store video data with a first portion having a first importance metric, and a second portion having a second importance metric, denoting that viewing of the first portion is more likely and/or preferential to viewing of the second portion. A subset of the video data may be retrieved and used to generate viewpoint video from a virtual viewpoint corresponding to a viewer's viewpoint. The viewpoint video may be displayed on a display device. One of storing the video data, retrieving the subset, and using the subset to generate the viewpoint video may include, based on the difference between the first and second importance metrics, expediting and/or enhancing performance of the step for the first portion, relative to the second portion.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: October 8, 2019
    Assignee: Google LLC
    Inventors: Alex Song, Derek Pang, Mike Ma, Nikhil Karnad
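The "expediting and/or enhancing" step driven by the importance metrics can be illustrated with a simple proportional budget split. The field names and the proportional-allocation rule are assumptions for illustration:

```python
# Hypothetical sketch: using per-portion importance metrics to decide which
# portion of the video data receives enhanced treatment (e.g. more bitrate
# or compute), relative to the less-important portion.

def allocate_processing(portions, total_budget):
    """Split a processing budget across portions in proportion to their
    importance metrics, so the portion a viewer is more likely to watch
    is enhanced relative to the other."""
    total = sum(p["importance"] for p in portions)
    return {p["name"]: total_budget * p["importance"] / total
            for p in portions}

portions = [{"name": "likely_view",   "importance": 3.0},
            {"name": "unlikely_view", "importance": 1.0}]
print(allocate_processing(portions, total_budget=100.0))
```

Because the allocation depends only on the difference in metrics, the same rule could be applied at storage time, retrieval time, or render time, matching the abstract's "one of storing, retrieving, and using" phrasing.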
  • Publication number: 20180332317
    Abstract: A combined video of a scene may be generated for applications such as virtual reality or augmented reality. In one method, a data store may store video data with a first portion having a first importance metric, and a second portion having a second importance metric, denoting that viewing of the first portion is more likely and/or preferential to viewing of the second portion. A subset of the video data may be retrieved and used to generate viewpoint video from a virtual viewpoint corresponding to a viewer's viewpoint. The viewpoint video may be displayed on a display device. One of storing the video data, retrieving the subset, and using the subset to generate the viewpoint video may include, based on the difference between the first and second importance metrics, expediting and/or enhancing performance of the step for the first portion, relative to the second portion.
    Type: Application
    Filed: May 9, 2017
    Publication date: November 15, 2018
    Inventors: Alex Song, Derek Pang, Mike Ma, Nikhil Karnad
  • Publication number: 20180158198
    Abstract: A video stream may be captured, and may have a plurality of frames including at least a first frame and a second frame. Each of the frames may have a plurality of views obtained from viewpoints that are offset from each other. A source contour, associated with a source view of the first frame, may be retrieved. Camera parameters, associated with an image capture device used to capture the video stream, may also be retrieved. The camera parameters may include a first offset between the source view and a destination view of the first frame. At least the first offset may be used to project the source contour to the destination view to generate a destination contour associated with the destination view.
    Type: Application
    Filed: December 5, 2017
    Publication date: June 7, 2018
    Inventor: Nikhil Karnad
  • Publication number: 20180082405
    Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
    Type: Application
    Filed: November 28, 2017
    Publication date: March 22, 2018
    Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
  • Patent number: 9858649
    Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: January 2, 2018
    Assignee: Lytro, Inc.
    Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
  • Publication number: 20170091906
    Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
    Type: Application
    Filed: September 30, 2015
    Publication date: March 30, 2017
    Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
  • Publication number: 20160307368
    Abstract: A compressed format provides more efficient storage for light-field pictures. A specialized player is configured to project virtual views from the compressed format. According to various embodiments, the compressed format and player are designed so that implementations using readily available computing equipment are able to project new virtual views from the compressed data at rates suitable for interactivity. Virtual-camera parameters, including but not limited to focus distance, depth of field, and center of perspective, may be varied arbitrarily within the range supported by the light-field picture, with each virtual view expressing the parameter values specified at its computation time. In at least one embodiment, compressed light-field pictures containing multiple light-field images may be projected to a single virtual view, also at interactive or near-interactive rates.
    Type: Application
    Filed: April 12, 2016
    Publication date: October 20, 2016
    Inventors: Kurt Akeley, Nikhil Karnad, Keith Leonard, Colvin Pitts
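The variable focus-distance parameter in the abstract above can be illustrated with the classic shift-and-add light-field refocusing scheme, in a toy 1-D form. The helper names and the integer-shift model are illustrative assumptions; the compressed format itself is not described here:

```python
# Hypothetical sketch: projecting a virtual view from multiple sub-aperture
# views of a light field by shift-and-add refocusing. The `slope` parameter
# plays the role of the virtual camera's focus distance.

def shift(signal, s):
    """Integer shift of a 1-D signal with zero padding."""
    n = len(signal)
    out = [0] * n
    for i in range(n):
        j = i - s
        if 0 <= j < n:
            out[i] = signal[j]
    return out

def refocus(views, offsets, slope):
    """Shift each sub-aperture view against its aperture offset (scaled by
    slope) and average; objects whose parallax matches slope come into focus."""
    n = len(views[0])
    acc = [0.0] * n
    for v, o in zip(views, offsets):
        shifted = shift(v, -int(round(slope * o)))
        acc = [a + x for a, x in zip(acc, shifted)]
    return [a / len(views) for a in acc]

# A point at x=2 seen with unit parallax: at x=1 in the offset=-1 view,
# at x=3 in the offset=+1 view. Refocusing at slope=1 realigns both.
views = [[0, 1, 0, 0, 0], [0, 0, 0, 1, 0]]
print(refocus(views, offsets=[-1, 1], slope=1.0))
```

Varying `slope` per rendered frame is what makes interactive-rate virtual views possible: each virtual view is just a cheap re-projection of the stored views under new virtual-camera parameters.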