Patents by Inventor Jon Karafin

Jon Karafin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11328446
    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the depth sensor data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: May 10, 2022
    Assignee: Google LLC
    Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
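    Code sketch (illustrative): The abstract above describes combining light-field depth estimates with active depth sensor readings into a single depth map. As a rough illustration of that fusion idea only, not the patented method, the sketch below blends two depth maps with a confidence-weighted average; the function name and the confidence inputs are hypothetical.

        import numpy as np

        def fuse_depth_maps(lf_depth, lf_conf, sensor_depth, sensor_conf):
            """Confidence-weighted blend of a light-field depth map with active
            depth sensor data (hypothetical helper, not the patented method)."""
            lf_conf = np.clip(lf_conf, 0.0, 1.0)
            sensor_conf = np.clip(sensor_conf, 0.0, 1.0)
            weight_sum = lf_conf + sensor_conf
            # Avoid division by zero where neither source is confident.
            weight_sum = np.where(weight_sum == 0, 1.0, weight_sum)
            return (lf_conf * lf_depth + sensor_conf * sensor_depth) / weight_sum

        # Toy example: 4x4 depth maps in meters, sensor reading trusted more.
        lf_depth = np.full((4, 4), 2.0)
        sensor_depth = np.full((4, 4), 2.4)
        fused = fuse_depth_maps(lf_depth, np.full((4, 4), 0.3),
                                sensor_depth, np.full((4, 4), 0.7))
        print(fused)  # values biased toward the more confident sensor reading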
  • Patent number: 10565734
    Abstract: An image capture system includes a plurality of image sensors arranged in a pattern such that gaps exist between adjacent image sensors of the plurality of image sensors. Each of the image sensors may be configured to capture sensor image data. The image capture system may also have a main lens configured to direct incoming light along an optical path, a microlens array positioned within the optical path, and a plurality of tapered fiber optic bundles. Each tapered fiber optic bundle may have a leading end positioned within the optical path, and a trailing end positioned proximate one of the image sensors. The leading end may have a larger cross-sectional area than the trailing end. Sensor data from the image sensors may be combined to generate a single light-field image that is substantially unaffected by the gaps.
    Type: Grant
    Filed: March 7, 2017
    Date of Patent: February 18, 2020
    Assignee: Google LLC
    Inventors: Brendan Bevensee, Tingfang Du, Jon Karafin, Joel Merritt, Duane Petrovich, Gareth Spor
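    Code sketch (illustrative): To make the final step described above concrete, combining per-sensor data into one gap-free light-field image, here is a minimal sketch that simply places same-sized sensor tiles edge to edge on a grid. It assumes the tapered fiber optics have already compensated for the physical gaps between sensors; all names are hypothetical.

        import numpy as np

        def assemble_light_field(tiles, grid_shape):
            """Stitch per-sensor tiles (all the same size) into one image array.
            Assumes the taper optics have already removed the physical gaps, so
            tiles can be placed edge to edge (illustrative only)."""
            rows, cols = grid_shape
            tile_h, tile_w = tiles[0].shape[:2]
            out = np.zeros((rows * tile_h, cols * tile_w), dtype=tiles[0].dtype)
            for idx, tile in enumerate(tiles):
                r, c = divmod(idx, cols)
                out[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
            return out

        # Four 2x3 sensor tiles arranged in a 2x2 grid -> one 4x6 image.
        tiles = [np.full((2, 3), i, dtype=np.uint16) for i in range(4)]
        print(assemble_light_field(tiles, (2, 2)).shape)  # (4, 6)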
  • Patent number: 10545215
    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
    Type: Grant
    Filed: September 13, 2017
    Date of Patent: January 28, 2020
    Assignee: Google LLC
    Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
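    Code sketch (illustrative): As a simplified picture of adjusting a camera pathway for stabilization, not the patented technique, the sketch below smooths a per-frame camera translation path with a moving average and reports the per-frame correction that a re-projection step could apply. The helper name and the window size are assumptions.

        import numpy as np

        def smooth_camera_path(path, window=9):
            """Moving-average smoothing of a per-frame camera translation path.
            `path` is (num_frames, 3); returns the smoothed path and the
            per-frame correction (simplified illustration of stabilization)."""
            kernel = np.ones(window) / window
            padded = np.pad(path, ((window // 2, window // 2), (0, 0)), mode="edge")
            smoothed = np.stack(
                [np.convolve(padded[:, d], kernel, mode="valid")
                 for d in range(path.shape[1])],
                axis=1,
            )
            return smoothed, smoothed - path

        # Jittery x-translation over 30 frames.
        frames = np.arange(30)
        path = np.stack([frames * 0.1 + np.random.normal(0, 0.05, 30),
                         np.zeros(30), np.zeros(30)], axis=1)
        smoothed, correction = smooth_camera_path(path)
        print(correction[:3])  # small offsets that cancel the jitter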
  • Publication number: 20190079158
    Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
    Type: Application
    Filed: September 13, 2017
    Publication date: March 14, 2019
    Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
  • Patent number: 10009597
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Grant
    Filed: January 27, 2017
    Date of Patent: June 26, 2018
    Assignee: Light Field Lab, Inc.
    Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
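    Code sketch (illustrative): A minimal sketch of the general idea of synthesizing intermediate views within a bounded disparity range: it forward-warps one captured view by a fraction of a per-pixel disparity map clamped to Td. This is an illustrative nearest-pixel warp, not the disclosed method, and every name in it is hypothetical.

        import numpy as np

        def synthesize_view(view, disparity, alpha, td):
            """Forward-warp one captured view to a virtual viewpoint at fractional
            baseline position `alpha`, with per-pixel horizontal disparity clamped
            to [-td, td]. Nearest-pixel warp; occluded holes are left at zero."""
            h, w = view.shape
            out = np.zeros_like(view)
            disp = np.clip(disparity, -td, td) * alpha
            xs = np.arange(w)
            for y in range(h):
                targets = np.clip(np.round(xs + disp[y]).astype(int), 0, w - 1)
                out[y, targets] = view[y, xs]
            return out

        # Halfway view from a constant-disparity toy scene, clamped to td=2.
        view = np.tile(np.arange(8, dtype=float), (4, 1))
        disparity = np.full((4, 8), 3.0)
        print(synthesize_view(view, disparity, alpha=0.5, td=2.0))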
  • Patent number: 9900510
    Abstract: Motion blur may be applied to a light-field image. The light-field image may be captured with a light-field camera having a main lens, an image sensor, and a plurality of microlenses positioned between the main lens and the image sensor. The light-field image may have a plurality of lenslet images, each of which corresponds to one microlens of the microlens array. The light-field image may be used to generate a mosaic of subaperture images, each of which has pixels from the same location on each of the lenslet images. Motion vectors may be computed to indicate motion occurring within at least a primary subaperture image of the mosaic. The motion vectors may be used to carry out shutter reconstruction of the mosaic to generate a mosaic of blurred subaperture images, which may then be used to generate a motion-blurred light-field image.
    Type: Grant
    Filed: December 8, 2016
    Date of Patent: February 20, 2018
    Assignee: Lytro, Inc.
    Inventors: Jon Karafin, Thomas Nonn, Gang Pan, Zejing Wang
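    Code sketch (illustrative): To make the subaperture-mosaic idea concrete, the sketch below extracts one subaperture image from a lenslet-format light-field image and applies a crude global directional blur. Real shutter reconstruction would use per-pixel motion vectors, so treat this as an assumption-laden illustration only; all names are hypothetical.

        import numpy as np

        def extract_subaperture(lf_image, lenslet_size, u, v):
            """Pull one subaperture image out of a lenslet-format light-field image
            by sampling pixel (u, v) under every microlens (simplified: square
            lenslets on an axis-aligned grid)."""
            return lf_image[u::lenslet_size, v::lenslet_size]

        def directional_blur(image, motion, taps=5):
            """Crude motion blur: average copies of the image shifted along one
            global motion vector (dy, dx); a stand-in for shutter reconstruction."""
            dy, dx = motion
            acc = np.zeros_like(image, dtype=float)
            for t in np.linspace(0.0, 1.0, taps):
                acc += np.roll(image, (int(round(t * dy)), int(round(t * dx))),
                               axis=(0, 1))
            return acc / taps

        lf = np.random.rand(64, 64)          # 8x8 grid of 8x8 lenslet images
        sub = extract_subaperture(lf, lenslet_size=8, u=4, v=4)
        blurred = directional_blur(sub, motion=(0, 2))
        print(sub.shape, blurred.shape)      # (8, 8) (8, 8)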
  • Publication number: 20170365068
    Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the depth sensor data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
    Type: Application
    Filed: June 28, 2017
    Publication date: December 21, 2017
    Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
  • Publication number: 20170332000
    Abstract: A high dynamic range light-field image may be captured through the use of a light-field imaging system. In a first sensor of the light-field imaging system, first image data may be captured at a first exposure level. In the first sensor or in a second sensor of the light-field imaging system, second image data may be captured at a second exposure level greater than the first exposure level. In a data store, the first image data and the second image data may be received. In a processor, the first image data and the second image data may be combined to generate a light-field image with high dynamic range.
    Type: Application
    Filed: May 10, 2016
    Publication date: November 16, 2017
    Inventors: Zejing Wang, Kurt Akeley, Colvin Pitts, Jon Karafin
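    Code sketch (illustrative): As a simplified picture of a two-exposure merge, not the patented pipeline, the sketch below keeps the longer exposure, rescaled by its gain, wherever it is unsaturated, and falls back to the shorter exposure elsewhere. The gain and saturation threshold are assumptions.

        import numpy as np

        def merge_exposures(low_exp, high_exp, gain, saturation=0.95):
            """Merge two linear images of the same scene captured at different
            exposure levels: use the longer exposure (scaled down by `gain`)
            where it is not saturated, the shorter exposure elsewhere."""
            scaled_high = high_exp / gain
            return np.where(high_exp < saturation, scaled_high, low_exp)

        low = np.array([[0.02, 0.10, 0.80]])    # short exposure, noisy shadows
        high = np.array([[0.08, 0.40, 1.00]])   # 4x exposure, highlight clipped
        print(merge_exposures(low, high, gain=4.0))  # [[0.02 0.10 0.80]]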
  • Publication number: 20170243373
    Abstract: An image capture system includes a plurality of image sensors arranged in a pattern such that gaps exist between adjacent image sensors of the plurality of image sensors. Each of the image sensors may be configured to capture sensor image data. The image capture system may also have a main lens configured to direct incoming light along an optical path, a microlens array positioned within the optical path, and a plurality of tapered fiber optic bundles. Each tapered fiber optic bundle may have a leading end positioned within the optical path, and a trailing end positioned proximate one of the image sensors. The leading end may have a larger cross-sectional area than the trailing end. Sensor data from the image sensors may be combined to generate a single light-field image that is substantially unaffected by the gaps.
    Type: Application
    Filed: March 7, 2017
    Publication date: August 24, 2017
    Inventors: Brendan Bevensee, Tingfang Du, Jon Karafin, Joel Merritt, Duane Petrovich, Gareth Spor
  • Publication number: 20170237970
    Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
    Type: Application
    Filed: January 27, 2017
    Publication date: August 17, 2017
    Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
  • Publication number: 20170139131
    Abstract: A camera may have two or more image sensors, including a first image sensor and a second image sensor. The camera may have a main lens that directs incoming light along an optical path, and a microlens array positioned within the optical path. The camera may also have two or more fiber optic bundles, including first and second fiber optic bundles with first and second leading ends, respectively. A first trailing end of the first fiber optic bundle may be positioned proximate the first image sensor, and a second trailing end of the second fiber optic bundle may be positioned proximate the second image sensor, displaced from the first trailing end by a gap. The leading ends may be positioned adjacent to each other within the optical path such that image data captured by the image sensors can be combined to define a single light-field image substantially unaffected by the gap.
    Type: Application
    Filed: February 1, 2017
    Publication date: May 18, 2017
    Inventors: Jon Karafin, Colvin Pitts, Yuriy Romanenko
  • Patent number: 9558421
    Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
    Type: Grant
    Filed: September 26, 2014
    Date of Patent: January 31, 2017
    Assignee: RealD Inc.
    Inventors: Jon Karafin, Mrityunjay Kumar
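    Code sketch (illustrative): A toy illustration of using temporal redundancy to raise signal-to-noise ratio, offered as a stand-in for the disclosed spatial-temporal analysis rather than a reproduction of it: each pixel is averaged only over frames that agree with its temporal median, so static detail is denoised without smearing moving content. All names and thresholds are hypothetical.

        import numpy as np

        def temporal_denoise(frames, threshold=0.1):
            """Average each pixel across frames, but only over frames whose value
            is close to the temporal median, so moving content is not smeared."""
            stack = np.stack(frames, axis=0)
            median = np.median(stack, axis=0)
            mask = np.abs(stack - median) <= threshold
            counts = np.maximum(mask.sum(axis=0), 1)
            return (stack * mask).sum(axis=0) / counts

        rng = np.random.default_rng(0)
        clean = np.full((4, 4), 0.5)
        frames = [clean + rng.normal(0, 0.02, clean.shape) for _ in range(8)]
        denoised = temporal_denoise(frames)
        print(float(np.abs(denoised - clean).mean()))  # residual noise well below 0.02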
  • Publication number: 20160309065
    Abstract: A camera may have two or more image sensors, including a first image sensor and a second image sensor. The camera may have a main lens that directs incoming light along an optical path, and a microlens array positioned within the optical path. The camera may also have two or more fiber optic bundles, including first and second fiber optic bundles with first and second leading ends, respectively. A first trailing end of the first fiber optic bundle may be positioned proximate the first image sensor, and a second trailing end of the second fiber optic bundle may be positioned proximate the second image sensor, displaced from the first trailing end by a gap. The leading ends may be positioned adjacent to each other within the optical path such that image data captured by the image sensors can be combined to define a single light-field image substantially unaffected by the gap.
    Type: Application
    Filed: April 14, 2016
    Publication date: October 20, 2016
    Inventors: Jon Karafin, Colvin Pitts, Andreas Nowatzyk, Yuriy Romanenko, Adina Roth, Tom Czepowicz, Matt Helms, Gareth Spor
  • Publication number: 20150178585
    Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
    Type: Application
    Filed: September 26, 2014
    Publication date: June 25, 2015
    Inventors: Jon Karafin, Mrityunjay Kumar