Patents by Inventor Jon Karafin
Jon Karafin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11328446
Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
Type: Grant
Filed: June 28, 2017
Date of Patent: May 10, 2022
Assignee: Google LLC
Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
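As a rough illustration of the fusion step this abstract describes, the following is a minimal sketch of blending a light-field-derived depth map with active depth sensor measurements using per-pixel confidences. The function and variable names are hypothetical and the weighting scheme is an assumption, not the patented method.

```python
# Hypothetical confidence-weighted fusion of light-field depth estimates with
# active depth sensor measurements; names and weighting are illustrative only.
import numpy as np

def fuse_depth(lf_depth, lf_confidence, sensor_depth, sensor_confidence):
    """Blend two depth maps of the same scene into one fused depth map.

    lf_depth / sensor_depth: (H, W) arrays of depth in meters.
    lf_confidence / sensor_confidence: (H, W) arrays in [0, 1]; 0 marks pixels
    where that source has no reliable measurement.
    """
    total = lf_confidence + sensor_confidence
    safe_total = np.where(total > 0, total, 1.0)  # avoid division by zero
    fused = (lf_depth * lf_confidence + sensor_depth * sensor_confidence) / safe_total
    # Mark pixels with no data as NaN so downstream code can ignore them.
    return np.where(total > 0, fused, np.nan)

# Toy example: a 4x4 scene where the active sensor only covers the left half.
lf = np.full((4, 4), 2.0)                      # light-field depth estimate
lf_conf = np.full((4, 4), 0.4)                 # low confidence everywhere
sensor = np.full((4, 4), 2.2)
sensor_conf = np.zeros((4, 4)); sensor_conf[:, :2] = 0.9
print(fuse_depth(lf, lf_conf, sensor, sensor_conf))
```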
-
Patent number: 10565734
Abstract: An image capture system includes a plurality of image sensors arranged in a pattern such that gaps exist between adjacent image sensors of the plurality of image sensors. Each of the image sensors may be configured to capture sensor image data. The image capture system may also have a main lens configured to direct incoming light along an optical path, a microlens array positioned within the optical path, and a plurality of tapered fiber optic bundles. Each tapered fiber optic bundle may have a leading end positioned within the optical path, and a trailing end positioned proximate one of the image sensors. The leading end may have a larger cross-sectional area than the trailing end. Sensor data from the image sensors may be combined to generate a single light-field image that is substantially unaffected by the gaps.
Type: Grant
Filed: March 7, 2017
Date of Patent: February 18, 2020
Assignee: Google LLC
Inventors: Brendan Bevensee, Tingfang Du, Jon Karafin, Joel Merritt, Duane Petrovich, Gareth Spor
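The abstract's final step, combining per-sensor data into a single light-field image, can be pictured with the sketch below. It assumes (this is not stated in the abstract) that each tapered fiber bundle relays one rectangular tile of the image plane onto its sensor, so the tiles simply concatenate; names are illustrative.

```python
# Minimal sketch of assembling per-sensor tiles into one image with no gaps,
# under the assumption that each bundle maps a rectangular tile of the image
# plane onto its sensor.
import numpy as np

def assemble_tiles(tiles, grid_shape):
    """tiles: list of (H, W) arrays in row-major order; grid_shape: (rows, cols)."""
    rows, cols = grid_shape
    tile_rows = [np.hstack(tiles[r * cols:(r + 1) * cols]) for r in range(rows)]
    return np.vstack(tile_rows)

# Four 8x8 sensor tiles assembled into one 16x16 image.
tiles = [np.full((8, 8), i, dtype=np.uint8) for i in range(4)]
combined = assemble_tiles(tiles, (2, 2))
print(combined.shape)  # (16, 16)
```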
-
Patent number: 10545215
Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
Type: Grant
Filed: September 13, 2017
Date of Patent: January 28, 2020
Assignee: Google LLC
Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
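The stabilization idea in this abstract, adjusting an estimated camera pathway before re-projection, can be sketched as follows. A simple moving average stands in for whatever adjustment the patent actually uses; all names are hypothetical.

```python
# Illustrative sketch: smooth a jittery camera pathway and compute the
# per-frame correction a projection step would apply. The smoothing choice
# (moving average) is an assumption, not the patented method.
import numpy as np

def smooth_pathway(pathway, window=5):
    """pathway: (N, 2) array of per-frame camera offsets (x, y)."""
    kernel = np.ones(window) / window
    padded = np.pad(pathway, ((window // 2, window // 2), (0, 0)), mode='edge')
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode='valid') for d in range(pathway.shape[1])],
        axis=1)

rng = np.random.default_rng(0)
shaky = np.cumsum(rng.normal(0, 0.5, size=(60, 2)), axis=0)  # jittery pathway
stable = smooth_pathway(shaky)
correction = stable - shaky   # offset to apply when re-projecting each frame
print(correction[:3])
```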
-
Publication number: 20190079158
Abstract: A light-field video stream may be processed to modify the camera pathway from which the light-field video stream is projected. A plurality of target pixels may be selected, in a plurality of key frames of the light-field video stream. The target pixels may be used to generate a camera pathway indicative of motion of the camera during generation of the light-field video stream. The camera pathway may be adjusted to generate an adjusted camera pathway. This may be done, for example, to carry out image stabilization. The light-field video stream may be projected to a viewpoint defined by the adjusted camera pathway to generate a projected video stream with the image stabilization.
Type: Application
Filed: September 13, 2017
Publication date: March 14, 2019
Inventors: Jon Karafin, Gang Pan, Thomas Nonn, Jie Tan
-
Patent number: 10009597
Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
Type: Grant
Filed: January 27, 2017
Date of Patent: June 26, 2018
Assignee: Light Field Lab, Inc.
Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
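The abstract is terse, but the idea of generating intermediate views within a pixel disparity range can be illustrated with a simple disparity-based warp. The integer forward-warp below is only a stand-in under assumed inputs (a captured view and a per-pixel disparity map); it is not the patented extrapolation.

```python
# Hedged sketch: synthesize an intermediate view by shifting pixels of a
# captured view by t * disparity, with disparity clipped to the range ±Td.
import numpy as np

def synthesize_view(view, disparity, t, Td):
    """Shift each pixel of `view` horizontally by t * disparity (clipped to ±Td)."""
    h, w = view.shape
    out = np.zeros_like(view)
    shift = np.rint(t * np.clip(disparity, -Td, Td)).astype(int)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols + shift[y], 0, w - 1)
        out[y, target] = view[y, cols]
    return out

view = np.tile(np.arange(16, dtype=float), (4, 1))    # toy captured view
disp = np.full((4, 16), 3.0)                          # uniform disparity
print(synthesize_view(view, disp, t=0.5, Td=2.0)[0])  # shifted by 1 px after clipping
```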
-
Patent number: 9900510
Abstract: Motion blur may be applied to a light-field image. The light-field image may be captured with a light-field camera having a main lens, an image sensor, and a plurality of microlenses positioned between the main lens and the image sensor. The light-field image may have a plurality of lenslet images, each of which corresponds to one microlens of the microlens array. The light-field image may be used to generate a mosaic of subaperture images, each of which has pixels from the same location on each of the lenslet images. Motion vectors may be computed to indicate motion occurring within at least a primary subaperture image of the mosaic. The motion vectors may be used to carry out shutter reconstruction of the mosaic to generate a mosaic of blurred subaperture images, which may then be used to generate a motion-blurred light-field image.
Type: Grant
Filed: December 8, 2016
Date of Patent: February 20, 2018
Assignee: Lytro, Inc.
Inventors: Jon Karafin, Thomas Nonn, Gang Pan, Zejing Wang
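The subaperture-extraction step mentioned in this abstract (each subaperture image gathers the pixel at the same location under every microlens) can be sketched as an array reshuffle. The array layout and names are assumptions; the motion-vector and shutter-reconstruction steps are not shown.

```python
# Sketch of extracting subaperture images from a raw lenslet image, assuming
# the sensor is tiled into square lenslet images of size s x s.
import numpy as np

def extract_subapertures(lenslet_image, lenslet_size):
    """lenslet_image: (H, W) raw image tiled into s x s lenslet images.
    Returns an array of shape (s, s, H//s, W//s): one subaperture image per
    (v, u) pixel position under each microlens."""
    s = lenslet_size
    h, w = lenslet_image.shape
    tiled = lenslet_image.reshape(h // s, s, w // s, s)
    # Axes: (lenslet row, v, lenslet col, u) -> (v, u, lenslet row, lenslet col)
    return tiled.transpose(1, 3, 0, 2)

raw = np.arange(36.0).reshape(6, 6)        # toy 6x6 sensor, 3x3-pixel lenslets
subs = extract_subapertures(raw, lenslet_size=3)
print(subs.shape)   # (3, 3, 2, 2): nine subaperture images of 2x2 pixels
print(subs[0, 0])   # the (v=0, u=0) subaperture image
```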
-
Publication number: 20170365068
Abstract: Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the sensor depth data, to generate a depth map indicative of distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected off of the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
Type: Application
Filed: June 28, 2017
Publication date: December 21, 2017
Inventors: Jie Tan, Gang Pan, Jon Karafin, Thomas Nonn, Julio C. Hernandez Zaragoza
-
Publication number: 20170332000
Abstract: A high dynamic range light-field image may be captured through the use of a light-field imaging system. In a first sensor of the light-field imaging system, first image data may be captured at a first exposure level. In the first sensor or in a second sensor of the light-field imaging system, second image data may be captured at a second exposure level greater than the first exposure level. In a data store, the first image data and the second image data may be received. In a processor, the first image data and the second image data may be combined to generate a light-field image with high dynamic range.
Type: Application
Filed: May 10, 2016
Publication date: November 16, 2017
Inventors: Zejing Wang, Kurt Akeley, Colvin Pitts, Jon Karafin
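One common way to picture the combination step in this abstract is an exposure merge: prefer the longer exposure except where it saturates, and fill those pixels with the shorter exposure scaled by the exposure ratio. This is an assumption about the combination, not the application's method; names are illustrative.

```python
# Illustrative two-exposure merge producing a higher-dynamic-range image.
import numpy as np

def merge_exposures(low_exp, high_exp, exposure_ratio, saturation=0.95):
    """low_exp, high_exp: (H, W) images normalized to [0, 1].
    exposure_ratio: how much longer the high exposure was (e.g. 4.0)."""
    low_scaled = low_exp * exposure_ratio        # bring onto the same radiance scale
    # Use the long exposure where it is valid, the scaled short one where it clips.
    return np.where(high_exp < saturation, high_exp, low_scaled)

low = np.array([[0.02, 0.10, 0.24]])
high = np.array([[0.08, 0.40, 1.00]])            # last pixel is clipped
print(merge_exposures(low, high, exposure_ratio=4.0))  # [[0.08 0.4  0.96]]
```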
-
Publication number: 20170243373
Abstract: An image capture system includes a plurality of image sensors arranged in a pattern such that gaps exist between adjacent image sensors of the plurality of image sensors. Each of the image sensors may be configured to capture sensor image data. The image capture system may also have a main lens configured to direct incoming light along an optical path, a microlens array positioned within the optical path, and a plurality of tapered fiber optic bundles. Each tapered fiber optic bundle may have a leading end positioned within the optical path, and a trailing end positioned proximate one of the image sensors. The leading end may have a larger cross-sectional area than the trailing end. Sensor data from the image sensors may be combined to generate a single light-field image that is substantially unaffected by the gaps.
Type: Application
Filed: March 7, 2017
Publication date: August 24, 2017
Inventors: Brendan Bevensee, Tingfang Du, Jon Karafin, Joel Merritt, Duane Petrovich, Gareth Spor
-
Publication number: 20170237970
Abstract: Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, Td, the plurality of intermediate views being extrapolated from the captured views.
Type: Application
Filed: January 27, 2017
Publication date: August 17, 2017
Inventors: Jon Karafin, Miller H. Schuck, Douglas J. McKnight, Mrityunjay Kumar, Wilhelm Taylor
-
Publication number: 20170139131
Abstract: A camera may have two or more image sensors, including a first image sensor and a second image sensor. The camera may have a main lens that directs incoming light along an optical path, and a microlens array positioned within the optical path. The camera may also have two or more fiber optic bundles, including first and second fiber optic bundles with first and second leading ends, respectively. A first trailing end of the first fiber optic bundle may be positioned proximate the first image sensor, and a second trailing end of the second fiber optic bundle may be positioned proximate the second image sensor, displaced from the first trailing end by a gap. The leading ends may be positioned adjacent to each other within the optical path such that image data captured by the image sensors can be combined to define a single light-field image substantially unaffected by the gap.
Type: Application
Filed: February 1, 2017
Publication date: May 18, 2017
Inventors: Jon Karafin, Colvin Pitts, Yuriy Romanenko
-
Patent number: 9558421
Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
Type: Grant
Filed: September 26, 2014
Date of Patent: January 31, 2017
Assignee: RealD Inc.
Inventors: Jon Karafin, Mrityunjay Kumar
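The signal-to-noise gain this abstract refers to can be shown with a minimal sketch: averaging corresponding pixels across aligned temporal (or perspective) views reduces noise roughly by the square root of the number of views. The alignment and pixel-assessment steps described in the abstract are assumed to have been done already; this is not the patented analysis.

```python
# Minimal sketch: averaging aligned noisy views increases SNR by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
clean = np.full((64, 64), 100.0)                          # true signal
frames = clean + rng.normal(0, 10.0, size=(16, 64, 64))   # 16 noisy aligned views

denoised = frames.mean(axis=0)
print(frames[0].std(), denoised.std())  # noise drops by roughly sqrt(16) = 4x
```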
-
Publication number: 20160309065
Abstract: A camera may have two or more image sensors, including a first image sensor and a second image sensor. The camera may have a main lens that directs incoming light along an optical path, and a microlens array positioned within the optical path. The camera may also have two or more fiber optic bundles, including first and second fiber optic bundles with first and second leading ends, respectively. A first trailing end of the first fiber optic bundle may be positioned proximate the first image sensor, and a second trailing end of the second fiber optic bundle may be positioned proximate the second image sensor, displaced from the first trailing end by a gap. The leading ends may be positioned adjacent to each other within the optical path such that image data captured by the image sensors can be combined to define a single light-field image substantially unaffected by the gap.
Type: Application
Filed: April 14, 2016
Publication date: October 20, 2016
Inventors: Jon Karafin, Colvin Pitts, Andreas Nowatzyk, Yuriy Romanenko, Adina Roth, Tom Czepowicz, Matt Helms, Gareth Spor
-
Publication number: 20150178585
Abstract: Systems, devices, and methods disclosed herein may apply a computational spatial-temporal analysis to assess pixels between temporal and/or perspective view imagery to determine imaging details that may be used to generate image data with increased signal-to-noise ratio.
Type: Application
Filed: September 26, 2014
Publication date: June 25, 2015
Inventors: Jon Karafin, Mrityunjay Kumar