Patents by Inventor Nikhil Karnad
Nikhil Karnad has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230118460
Abstract: A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
Type: Application
Filed: October 18, 2022
Publication date: April 20, 2023
Applicant: Google LLC
Inventors: Orly Liba, Nikhil Karnad, Nori Kanazawa, Yael Pritch Knaan, Huizhong Chen, Longqi Cai
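The training setup described above — pairs of media items where distracting objects carry manual segmentation masks, used to fit a model that predicts such masks on new inputs — can be illustrated with a deliberately tiny stand-in. This is a minimal sketch, not the patent's actual model: it replaces the machine-learning model with a per-pixel logistic classifier on intensity, and all names (`train_pixel_segmenter`, `segment`) and the synthetic "bright distractor" data are illustrative assumptions.

```python
import numpy as np

def train_pixel_segmenter(images, masks, lr=1.0, steps=3000):
    """Fit a per-pixel logistic classifier p(distractor | intensity).

    `images`: list of float arrays in [0, 1]
    `masks` : matching binary masks (1 = manually segmented distracting object)
    Returns (w, b) for the model sigmoid(w * x + b).
    """
    x = np.concatenate([im.ravel() for im in images])
    y = np.concatenate([m.ravel() for m in masks]).astype(float)
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)   # gradient of the logistic loss in w
        b -= lr * np.mean(p - y)         # gradient in b
    return w, b

def segment(image, w, b, thresh=0.5):
    """Output a segmentation mask for the (assumed bright) distracting objects."""
    p = 1.0 / (1.0 + np.exp(-(w * image + b)))
    return (p > thresh).astype(np.uint8)

# Synthetic training pairs: dim scenes with one bright "distractor" patch.
rng = np.random.default_rng(0)
imgs, msks = [], []
for _ in range(4):
    im = rng.uniform(0.0, 0.4, (16, 16))
    m = np.zeros((16, 16), np.uint8)
    m[4:8, 4:8] = 1
    im[m == 1] = rng.uniform(0.7, 1.0, 16)
    imgs.append(im)
    msks.append(m)

w, b = train_pixel_segmenter(imgs, msks)
pred = segment(imgs[0], w, b)
```

On this toy data the learned threshold separates the two intensity ranges, so the predicted mask closely matches the manual one; a real system would of course use a far richer model than per-pixel intensity.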
-
Patent number: 10679361
Abstract: A video stream may be captured, and may have a plurality of frames including at least a first frame and a second frame. Each of the frames may have a plurality of views obtained from viewpoints that are offset from each other. A source contour, associated with a source view of the first frame, may be retrieved. Camera parameters, associated with the image capture device used to capture the video stream, may also be retrieved. The camera parameters may include a first offset between the source view and a destination view of the first frame. At least the first offset may be used to project the source contour to the destination view to generate a destination contour associated with the destination view.
Type: Grant
Filed: December 5, 2017
Date of Patent: June 9, 2020
Assignee: GOOGLE LLC
Inventor: Nikhil Karnad
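The core operation in this abstract — using the offset between a source view and a destination view to project a contour from one to the other — can be sketched under a strong simplifying assumption: rectified views separated by a purely horizontal baseline, where each contour point shifts by its stereo disparity d = focal · offset / depth. The function name and parameters are illustrative, not from the patent.

```python
import numpy as np

def project_contour(contour_xy, depths, focal_px, baseline_offset):
    """Project a source-view contour into a destination view.

    Assumes rectified views separated by a horizontal offset (the
    'first offset' between the two views). Each (x, y) contour point
    shifts horizontally by its disparity focal * offset / depth.
    """
    contour = np.asarray(contour_xy, dtype=float)
    disparity = focal_px * baseline_offset / np.asarray(depths, dtype=float)
    dest = contour.copy()
    dest[:, 0] -= disparity  # horizontal shift only, for a horizontal baseline
    return dest

# One contour point at x=50 and depth 2.0; focal 100 px, baseline 0.1
# gives a disparity of 5 px, so the destination contour sits at x=45.
dest = project_contour([[50.0, 10.0]], [2.0], focal_px=100.0, baseline_offset=0.1)
```

A general camera model would apply a full reprojection (unproject with source intrinsics, transform by the relative pose, reproject); the one-axis shift above is the minimal case the abstract's "first offset" suggests.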
-
Patent number: 10552947
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Grant
Filed: November 28, 2017
Date of Patent: February 4, 2020
Assignee: GOOGLE LLC
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides
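The selection logic in this abstract — blur pixels whose depth is less than the first focus depth (foreground) or greater than the second focus depth (background), leaving the band in between sharp — can be sketched directly. This is a minimal illustration, assuming a per-pixel depth map and a crude wrap-around box blur; a real renderer would vary blur radius with distance from the focus band.

```python
import numpy as np

def box_blur(img, radius=1):
    """Crude box blur built from shifted copies (edges wrap, for brevity)."""
    acc = np.zeros_like(img, dtype=float)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return acc / n

def depth_based_blur(img, depth, near_focus, far_focus, radius=1):
    """Blur foreground (depth < near_focus) and background (depth > far_focus);
    pixels inside [near_focus, far_focus] stay sharp."""
    blurred = box_blur(img, radius)
    out_of_focus = (depth < near_focus) | (depth > far_focus)
    return np.where(out_of_focus, blurred, img.astype(float))

# A sharp vertical edge; the top half of the scene is foreground (depth 1.0),
# the bottom half lies inside the focus band (depth 5.0, band [2, 10]).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
depth = np.full((8, 8), 5.0)
depth[:4, :] = 1.0
out = depth_based_blur(img, depth, near_focus=2.0, far_focus=10.0)
```

After processing, the in-focus bottom half is untouched while the foreground edge is softened, which matches the two-threshold behavior the abstract describes.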
-
Patent number: 10440407
Abstract: A combined video of a scene may be generated for applications such as virtual reality or augmented reality. In one method, a data store may store video data with a first portion having a first importance metric, and a second portion having a second importance metric, denoting that viewing of the first portion is more likely and/or preferential to viewing of the second portion. The subset may be retrieved and used to generate viewpoint video from a virtual viewpoint corresponding to a viewer's viewpoint. The viewpoint video may be displayed on a display device. One of storing the video data, retrieving the subset, and using the subset to generate the viewpoint video may include, based on the difference between the first and second importance metrics, expediting and/or enhancing performance of the step for the first portion, relative to the second portion.
Type: Grant
Filed: May 9, 2017
Date of Patent: October 8, 2019
Assignee: GOOGLE LLC
Inventors: Alex Song, Derek Pang, Mike Ma, Nikhil Karnad
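One concrete way to "enhance performance for the first portion relative to the second" based on importance metrics is to split a fixed storage or bandwidth budget in proportion to those metrics. The sketch below is an assumption of how such an allocator might look, not the patent's method; the function name, field names, and proportional rule are all illustrative.

```python
def allocate_budget(portions, total_bytes):
    """Split a storage/bandwidth budget across video portions in proportion
    to their importance metrics, so the portion a viewer is more likely to
    watch is stored or streamed at higher fidelity."""
    total_importance = sum(p["importance"] for p in portions)
    return {
        p["name"]: int(total_bytes * p["importance"] / total_importance)
        for p in portions
    }

# A likely-viewed front region gets 3x the budget of a rarely-viewed rear one.
budget = allocate_budget(
    [{"name": "front", "importance": 3.0}, {"name": "rear", "importance": 1.0}],
    total_bytes=4000,
)
```

The same proportional idea could drive any of the three steps the abstract names (storing, retrieving, or rendering), since each consumes a budgetable resource.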
-
Publication number: 20180332317
Abstract: A combined video of a scene may be generated for applications such as virtual reality or augmented reality. In one method, a data store may store video data with a first portion having a first importance metric, and a second portion having a second importance metric, denoting that viewing of the first portion is more likely and/or preferential to viewing of the second portion. The subset may be retrieved and used to generate viewpoint video from a virtual viewpoint corresponding to a viewer's viewpoint. The viewpoint video may be displayed on a display device. One of storing the video data, retrieving the subset, and using the subset to generate the viewpoint video may include, based on the difference between the first and second importance metrics, expediting and/or enhancing performance of the step for the first portion, relative to the second portion.
Type: Application
Filed: May 9, 2017
Publication date: November 15, 2018
Inventors: Alex Song, Derek Pang, Mike Ma, Nikhil Karnad
-
Publication number: 20180158198
Abstract: A video stream may be captured, and may have a plurality of frames including at least a first frame and a second frame. Each of the frames may have a plurality of views obtained from viewpoints that are offset from each other. A source contour, associated with a source view of the first frame, may be retrieved. Camera parameters, associated with the image capture device used to capture the video stream, may also be retrieved. The camera parameters may include a first offset between the source view and a destination view of the first frame. At least the first offset may be used to project the source contour to the destination view to generate a destination contour associated with the destination view.
Type: Application
Filed: December 5, 2017
Publication date: June 7, 2018
Inventor: Nikhil Karnad
-
Publication number: 20180082405
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Application
Filed: November 28, 2017
Publication date: March 22, 2018
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
-
Patent number: 9858649
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Grant
Filed: September 30, 2015
Date of Patent: January 2, 2018
Assignee: Lytro, Inc.
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
-
Publication number: 20170091906
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Application
Filed: September 30, 2015
Publication date: March 30, 2017
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
-
Publication number: 20160307368
Abstract: A compressed format provides more efficient storage for light-field pictures. A specialized player is configured to project virtual views from the compressed format. According to various embodiments, the compressed format and player are designed so that implementations using readily available computing equipment are able to project new virtual views from the compressed data at rates suitable for interactivity. Virtual-camera parameters, including but not limited to focus distance, depth of field, and center of perspective, may be varied arbitrarily within the range supported by the light-field picture, with each virtual view expressing the parameter values specified at its computation time. In at least one embodiment, compressed light-field pictures containing multiple light-field images may be projected to a single virtual view, also at interactive or near-interactive rates.
Type: Application
Filed: April 12, 2016
Publication date: October 20, 2016
Inventors: Kurt Akeley, Nikhil Karnad, Keith Leonard, Colvin Pitts
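Varying a virtual-camera parameter such as focus distance when projecting a light-field picture is classically done by shift-and-add refocusing: each sub-aperture view is shifted in proportion to its aperture offset, then the views are averaged. The sketch below illustrates that general technique only; it is not the patent's compressed format or player, and the function name, the integer-pixel shifts, and the `alpha` parameterization are simplifying assumptions.

```python
import numpy as np

def refocus(subviews, uv_offsets, alpha):
    """Shift-and-add refocus over light-field sub-aperture views.

    Each view is shifted by alpha times its (u, v) aperture offset and the
    results are averaged; varying `alpha` sweeps the virtual focus distance
    through the scene (shifts are rounded to whole pixels here).
    """
    acc = np.zeros_like(subviews[0], dtype=float)
    for view, (u, v) in zip(subviews, uv_offsets):
        shifted = np.roll(np.roll(view, int(round(alpha * v)), axis=0),
                          int(round(alpha * u)), axis=1)
        acc += shifted
    return acc / len(subviews)

# Two views of a single scene point whose image position differs by one
# pixel between apertures; alpha = 1.0 brings it back into alignment.
v0 = np.zeros((4, 4))
v0[0, 2] = 1.0
v1 = np.zeros((4, 4))
v1[0, 1] = 1.0
focused = refocus([v0, v1], [(0, 0), (1, 0)], alpha=1.0)
```

At the aligning `alpha` the point sums coherently to full strength; at other values it smears across pixels, which is exactly the depth-of-field behavior a virtual focus control exposes.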