Patents by Inventor Bennett Wilburn

Bennett Wilburn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10388069
    Abstract: Methods and apparatus are described that enable augmented or virtual reality based on a light field. A geometric proxy of a mobile device such as a smart phone is used during the process of inserting a virtual object from the light field into the real world images being acquired. For example, a mobile device includes a processor and a camera coupled to the processor. The processor is configured to define a view-dependent geometric proxy, record images with the camera to produce recorded frames and, based on the view-dependent geometric proxy, render the recorded frames with an inserted light field virtual object.
    Type: Grant
    Filed: September 9, 2015
    Date of Patent: August 20, 2019
    Assignee: Futurewei Technologies, Inc.
    Inventors: Jinwei Gu, Bennett Wilburn, Wei Jiang
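
    The following Python sketch illustrates the general idea of view-dependent rendering from a light field as described in the entry above: pick the recorded view closest to the device's current pose and composite the virtual object over the live camera frame. All names are hypothetical and this is not the patented method.

    ```python
    # Hypothetical sketch of view-dependent compositing from a light field.
    # Names (nearest_view, composite_frame) are illustrative, not from the patent.
    import numpy as np

    def nearest_view(views, device_position):
        """Pick the recorded light-field view captured closest to the device pose."""
        dists = [np.linalg.norm(v["position"] - device_position) for v in views]
        return views[int(np.argmin(dists))]

    def composite_frame(camera_frame, views, device_position):
        """Alpha-composite the virtual object from the chosen view over the frame."""
        view = nearest_view(views, device_position)
        rgb, alpha = view["rgb"], view["alpha"][..., None]
        return (alpha * rgb + (1.0 - alpha) * camera_frame).astype(camera_frame.dtype)

    # Toy usage: two pre-rendered views of a virtual object and one camera frame.
    h, w = 4, 4
    views = [
        {"position": np.array([0.0, 0.0, 1.0]),
         "rgb": np.full((h, w, 3), 200.0),
         "alpha": np.full((h, w), 0.5)},
        {"position": np.array([0.5, 0.0, 1.0]),
         "rgb": np.full((h, w, 3), 80.0),
         "alpha": np.full((h, w), 0.5)},
    ]
    frame = np.zeros((h, w, 3))
    out = composite_frame(frame, views, device_position=np.array([0.4, 0.0, 1.0]))
    print(out[0, 0])  # blended pixel drawn from the nearer recorded view
    ```
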
  • Publication number: 20170069133
    Abstract: Methods and apparatus are described that enable augmented or virtual reality based on a light field. A geometric proxy of a mobile device such as a smart phone is used during the process of inserting a virtual object from the light field into the real world images being acquired. For example, a mobile device includes a processor and a camera coupled to the processor. The processor is configured to define a view-dependent geometric proxy, record images with the camera to produce recorded frames and, based on the view-dependent geometric proxy, render the recorded frames with an inserted light field virtual object.
    Type: Application
    Filed: September 9, 2015
    Publication date: March 9, 2017
    Inventors: Jinwei Gu, Bennett Wilburn, Wei Jiang
  • Patent number: 9386288
    Abstract: According to various embodiments, the system and method of the present invention process light-field image data so as to reduce color artifacts, reduce projection artifacts, and/or increase dynamic range. These techniques operate, for example, on image data affected by sensor saturation and/or microlens modulation. Flat-field images are captured and converted to modulation images, and then applied on a per-pixel basis, according to techniques described herein.
    Type: Grant
    Filed: December 15, 2014
    Date of Patent: July 5, 2016
    Assignee: Lytro, Inc.
    Inventors: Kurt Barton Akeley, Brian Cabral, Colvin Pitts, Chia-Kai Liang, Bennett Wilburn, Timothy James Knight, Yi-Ren Ng
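
    As a rough illustration of the flat-field/modulation idea in the abstract above (not Lytro's implementation), the sketch below normalizes a flat-field capture into a per-pixel modulation image and divides it out of a raw image, clipping at saturation:

    ```python
    # Illustrative sketch only: converting a flat-field capture into a per-pixel
    # modulation image and applying it to an image on a per-pixel basis.
    import numpy as np

    def modulation_image(flat_field, eps=1e-6):
        """Normalize the flat-field so that an unmodulated pixel has value 1.0."""
        return flat_field / max(float(flat_field.max()), eps)

    def demodulate(image, modulation, saturation=1.0, eps=1e-6):
        """Divide out per-pixel microlens modulation; keep saturated pixels clipped."""
        corrected = image / np.maximum(modulation, eps)
        return np.minimum(corrected, saturation)

    flat = np.array([[0.9, 0.5], [0.5, 0.9]])   # brighter under microlens centers
    raw  = np.array([[0.45, 0.25], [0.25, 0.45]])
    print(demodulate(raw, modulation_image(flat)))  # roughly uniform after correction
    ```
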
  • Publication number: 20150312553
    Abstract: A system and method are provided for coordinating image capture using multiple devices, including for example multiple image capture devices (cameras), multiple lighting devices (flash), and/or the like. In at least one embodiment, the system of the present invention is configured to collect image information from multiple image capture devices, such as cameras, and/or to collect multiple images having different lighting configurations. The collected image data can be processed to generate various effects, such as relighting, parallax, refocusing, and/or three-dimensional effects, and/or to introduce interactivity into the image presentation. In at least one embodiment, the system of the present invention is implemented using any combination of any number of image capture device(s) and/or flash (lighting) device(s), which may be equipped to communicate with one another via any suitable means, such as wirelessly.
    Type: Application
    Filed: February 27, 2015
    Publication date: October 29, 2015
    Inventors: Yi-Ren Ng, Chia-Kai Liang, Kurt Barton Akeley, Bennett Wilburn
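
    A minimal sketch of how such multi-device coordination could be structured, assuming a simple controller that staggers trigger times and lighting configurations; the class and function names are invented for illustration:

    ```python
    # Hypothetical coordination sketch: a controller assigns each capture device a
    # trigger offset and a lighting configuration, then gathers the captured frames.
    from dataclasses import dataclass, field

    @dataclass
    class CaptureDevice:
        name: str
        frames: list = field(default_factory=list)

        def capture(self, trigger_ms: int, flash_on: bool) -> dict:
            # A real device would expose the shutter at trigger_ms with the flash set.
            frame = {"device": self.name, "trigger_ms": trigger_ms, "flash": flash_on}
            self.frames.append(frame)
            return frame

    def coordinate_capture(devices, flash_plan, spacing_ms=10):
        """Stagger triggers so each device captures under a different lighting setup."""
        return [dev.capture(i * spacing_ms, flash_plan[i % len(flash_plan)])
                for i, dev in enumerate(devices)]

    shots = coordinate_capture([CaptureDevice("cam0"), CaptureDevice("cam1"),
                                CaptureDevice("cam2")], flash_plan=[True, False])
    for s in shots:
        print(s)
    ```
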
  • Publication number: 20150097985
    Abstract: According to various embodiments, the system and method of the present invention process light-field image data so as to reduce color artifacts, reduce projection artifacts, and/or increase dynamic range. These techniques operate, for example, on image data affected by sensor saturation and/or microlens modulation. Flat-field images are captured and converted to modulation images, and then applied on a per-pixel basis, according to techniques described herein.
    Type: Application
    Filed: December 15, 2014
    Publication date: April 9, 2015
    Inventors: Kurt Barton Akeley, Brian Cabral, Colvin Pitts, Chia-Kai Liang, Bennett Wilburn, Timothy James Knight, Yi-Ren Ng
  • Patent number: 9001226
    Abstract: A system and method are provided for coordinating image capture using multiple devices, including for example multiple image capture devices (cameras), multiple lighting devices (flash), and/or the like. In at least one embodiment, the system of the present invention is configured to collect image information from multiple image capture devices, such as cameras, and/or to collect multiple images having different lighting configurations. The collected image data can be processed to generate various effects, such as relighting, parallax, refocusing, and/or three-dimensional effects, and/or to introduce interactivity into the image presentation. In at least one embodiment, the system of the present invention is implemented using any combination of any number of image capture device(s) and/or flash (lighting) device(s), which may be equipped to communicate with one another via any suitable means, such as wirelessly.
    Type: Grant
    Filed: December 4, 2012
    Date of Patent: April 7, 2015
    Assignee: Lytro, Inc.
    Inventors: Yi-Ren Ng, Chia-Kai Liang, Kurt Barton Akeley, Bennett Wilburn
  • Patent number: 8948545
    Abstract: According to various embodiments, the system and method of the present invention process light-field image data so as to reduce color artifacts, reduce projection artifacts, and/or increase dynamic range. These techniques operate, for example, on image data affected by sensor saturation and/or microlens modulation. Flat-field images are captured and converted to modulation images, and then applied on a per-pixel basis, according to techniques described herein.
    Type: Grant
    Filed: February 22, 2013
    Date of Patent: February 3, 2015
    Assignee: Lytro, Inc.
    Inventors: Kurt Barton Akeley, Brian Cabral, Colvin Pitts, Chia-Kai Liang, Bennett Wilburn, Timothy James Knight, Yi-Ren Ng
  • Publication number: 20140176592
    Abstract: According to various embodiments, the present invention may be used to apply a wide variety of processes to a two-dimensional image generated from light-field data. One or more parameters, such as light-field parameters and/or device capture parameters, may be included in metadata of the two-dimensional image, and may be retrieved and processed to determine the appropriate value(s) of a first setting of the process. The process may be applied uniformly, or with variation across subsets of the two-dimensional image, down to individual pixels. The process may be a noise filtering process, an image sharpening process, a color adjustment process, a tone curve process, a contrast adjustment process, a saturation adjustment process, a gamma adjustment process, a combination thereof, or any other known process that may be desirable for enhancing two-dimensional images.
    Type: Application
    Filed: October 10, 2013
    Publication date: June 26, 2014
    Applicant: Lytro, Inc.
    Inventors: Bennett Wilburn, Tony Yip Pang Poon, Colvin Pitts, Chia-Kai Liang, Timothy Knight, Robert Carroll
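
    The sketch below illustrates the general pattern from the entry above, under the assumption that a capture parameter (here, ISO) is read from the 2-D image's metadata and used to set a noise-filtering strength that can also vary per pixel; the metadata keys and the mapping are invented:

    ```python
    # Sketch of the general idea: read capture parameters embedded in a 2-D image's
    # metadata and use them to pick per-region settings for a post-process.
    import numpy as np

    def noise_filter_strength(metadata):
        """Map a capture parameter (here, ISO) to a base denoising strength."""
        iso = metadata.get("iso", 100)
        return min(1.0, iso / 3200.0)

    def denoise(image, metadata, lambda_map=None):
        """Blend each pixel toward a 3x3 box blur, more strongly where lambda is high."""
        base = noise_filter_strength(metadata)
        lam = base if lambda_map is None else base * lambda_map
        padded = np.pad(image, 1, mode="edge")
        blurred = sum(padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
                      for dy in range(3) for dx in range(3)) / 9.0
        return (1.0 - lam) * image + lam * blurred

    img = np.random.default_rng(0).random((6, 6))
    meta = {"iso": 1600}                        # e.g. parsed from the image's metadata
    weight = np.linspace(0.2, 1.0, 36).reshape(6, 6)  # per-pixel variation
    print(denoise(img, meta, weight).shape)
    ```
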
  • Patent number: 8369399
    Abstract: Methods and systems for combining multiple video streams are provided. Video feeds are received from multiple optical sensors, and homography information and/or corner metadata is calculated for each frame from each video stream. This data is used to mosaic the separate frames into a single video frame. Local translation of each image may also be used to synchronize the video frames. The optical sensors can be provided by an airborne platform, such as a manned or unmanned surveillance vehicle. Image data can be requested from a ground operator, and transmitted from the airborne platform to the user in real time or at a later time. Various data arrangements may be used by an aggregation system to serialize and/or multiplex image data received from multiple sensor modules. Fixed-size record arrangement and variable-size record arrangement systems are provided.
    Type: Grant
    Filed: February 13, 2007
    Date of Patent: February 5, 2013
    Assignee: Sony Corporation
    Inventors: Geoffrey Egnal, Rodney Feldman, Kyungnam Kim, Bennett Wilburn
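
    A minimal sketch of the homography step described above, assuming per-frame homographies into a shared reference are already known; it only computes where each frame lands in the mosaic and is not the patented pipeline (real mosaicking would also resample pixels):

    ```python
    # Minimal sketch: warp each frame's corner points through its homography to find
    # the frame's placement and the overall extent of the mosaic canvas.
    import numpy as np

    def apply_homography(H, points):
        """Apply a 3x3 homography to Nx2 points and return Nx2 projected points."""
        pts = np.hstack([points, np.ones((len(points), 1))])
        proj = pts @ H.T
        return proj[:, :2] / proj[:, 2:3]

    def mosaic_extent(frames):
        """Bounding box of all frames after warping their corners into the mosaic."""
        corners = []
        for h, w, H in frames:
            c = np.array([[0, 0], [w, 0], [w, h], [0, h]], dtype=float)
            corners.append(apply_homography(H, c))
        allc = np.vstack(corners)
        return allc.min(axis=0), allc.max(axis=0)

    identity = np.eye(3)
    shift_right = np.array([[1, 0, 600.0], [0, 1, 0], [0, 0, 1]])  # 600 px to the right
    print(mosaic_extent([(480, 640, identity), (480, 640, shift_right)]))
    ```
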
  • Patent number: 8027531
    Abstract: This invention relates to an apparatus and a method for video capture of a three-dimensional region of interest in a scene using an array of video cameras. The video cameras of the array are positioned for viewing the three-dimensional region of interest in the scene from their respective viewpoints. A triggering mechanism is provided for staggering the capture of a set of frames by the video cameras of the array. The apparatus has a processing unit for combining and operating on the set of frames captured by the array of cameras to generate a new visual output, such as high-speed video or spatio-temporal structure and motion models, that has a synthetic viewpoint of the three-dimensional region of interest. The processing involves spatio-temporal interpolation for determining the synthetic viewpoint space-time trajectory. In some embodiments, the apparatus computes a multibaseline spatio-temporal optical flow.
    Type: Grant
    Filed: July 21, 2005
    Date of Patent: September 27, 2011
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Bennett Wilburn, Neel Joshi, Marc C. Levoy, Mark Horowitz
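
    The sketch below shows the staggered-trigger arithmetic for such a camera array, plus a placeholder linear blend between the two nearest captured frames; the patent's spatio-temporal (optical-flow) interpolation is not reproduced here, and all names are illustrative:

    ```python
    # Sketch of staggered triggering: N cameras at frame rate f, offset by 1/(N*f),
    # sample the scene at an effective rate of N*f.
    import numpy as np

    def trigger_offsets(num_cameras, frame_rate_hz):
        """Per-camera start delays (seconds) that evenly interleave the exposures."""
        period = 1.0 / frame_rate_hz
        return [i * period / num_cameras for i in range(num_cameras)]

    def capture_times(num_cameras, frame_rate_hz, num_frames):
        """(time, camera) pairs for all frames, sorted by capture time."""
        period = 1.0 / frame_rate_hz
        offs = trigger_offsets(num_cameras, frame_rate_hz)
        return sorted((offs[c] + k * period, c)
                      for c in range(num_cameras) for k in range(num_frames))

    def synthesize(t, samples):
        """Blend the two frames bracketing time t (stand-in for flow-based warping)."""
        times = np.array([s[0] for s in samples])
        i = max(1, min(int(np.searchsorted(times, t)), len(samples) - 1))
        (t0, a), (t1, b) = samples[i - 1], samples[i]
        w = (t - t0) / (t1 - t0)
        return (1 - w) * a + w * b

    samples = [(t, np.full((2, 2), c, dtype=float)) for t, c in capture_times(4, 30, 3)]
    print(synthesize(0.02, samples))
    ```
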
  • Patent number: 7974498
    Abstract: A super-resolution algorithm that explicitly and exactly models the detector pixel shape, size, location, and gaps for periodic and aperiodic tilings. The algorithm projects the low-resolution input image into high-resolution space to model the actual shapes and/or gaps of the detector pixels. By using an aperiodic pixel layout such as a Penrose tiling, significant improvements in super-resolution results can be obtained. An error back-projection super-resolution algorithm makes use of the exact detector model in its back projection operator for better accuracy. Theoretically, the aperiodic detector can be based on CCD (charge-coupled device) technology, and/or more practically, CMOS (complementary metal oxide semiconductor) technology, for example.
    Type: Grant
    Filed: August 8, 2007
    Date of Patent: July 5, 2011
    Assignee: Microsoft Corporation
    Inventors: Moshe Ben-Ezra, Zhouchen Lin, Bennett Wilburn
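
    A simplified sketch of the error back-projection loop referenced above, using a regular 2x2 detector footprint as a stand-in for the exact (possibly aperiodic, e.g. Penrose) pixel model; the loop structure is the point, not the detector geometry:

    ```python
    # Simplified error back-projection super-resolution with a uniform 2x2 footprint.
    import numpy as np

    def downsample(hr, s=2):
        """Forward model: each detector pixel averages an s x s block of the HR grid."""
        h, w = hr.shape
        return hr.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

    def upsample(lr, s=2):
        """Back-project a low-res residual by spreading it evenly over each footprint."""
        return np.kron(lr, np.ones((s, s)))

    def super_resolve(lr, s=2, iters=20, step=1.0):
        hr = upsample(lr, s)                     # initial high-resolution estimate
        for _ in range(iters):
            err = lr - downsample(hr, s)         # residual in detector space
            hr = hr + step * upsample(err, s)    # push the residual back to HR space
        return hr

    lr = np.array([[0.2, 0.8], [0.8, 0.2]])
    print(super_resolve(lr).round(3))
    ```
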
  • Publication number: 20090046952
    Abstract: A super-resolution algorithm that explicitly and exactly models the detector pixel shape, size, location, and gaps for periodic and aperiodic tilings. The algorithm projects the low-resolution input image into high-resolution space to model the actual shapes and/or gaps of the detector pixels. By using an aperiodic pixel layout such as a Penrose tiling, significant improvements in super-resolution results can be obtained. An error back-projection super-resolution algorithm makes use of the exact detector model in its back projection operator for better accuracy. Theoretically, the aperiodic detector can be based on CCD (charge-coupled device) technology, and/or more practically, CMOS (complementary metal oxide semiconductor) technology, for example.
    Type: Application
    Filed: August 8, 2007
    Publication date: February 19, 2009
    Applicant: Microsoft Corporation
    Inventors: Moshe Ben-Ezra, Zhouchen Lin, Bennett Wilburn
  • Publication number: 20080060034
    Abstract: Methods and systems for combining multiple video streams are provided. Video feeds are received from multiple optical sensors, and homography information and/or corner metadata is calculated for each frame from each video stream. This data is used to mosaic the separate frames into a single video frame. Local translation of each image may also be used to synchronize the video frames. The optical sensors can be provided by an airborne platform, such as a manned or unmanned surveillance vehicle. Image data can be requested from a ground operator, and transmitted from the airborne platform to the user in real time or at a later time. Various data arrangements may be used by an aggregation system to serialize and/or multiplex image data received from multiple sensor modules. Fixed-size record arrangement and variable-size record arrangement systems are provided.
    Type: Application
    Filed: February 13, 2007
    Publication date: March 6, 2008
    Inventors: Geoffrey Egnal, Rodney Feldman, Bobby Gintz, Kyungnam Kim, Robert Palmer, Bennett Wilburn
  • Publication number: 20070188653
    Abstract: An image capture system comprises a plurality of cameras and a camera mount. The camera mount has a curved portion, and the plurality of cameras are secured to the curved portion. The cameras are oriented radially inward relative to the curved portion of the camera mount, such that the lines of sight of the cameras intersect. Images are captured substantially simultaneously with the plurality of cameras. The captured images are stitched together to form a collective image. The image capture system may be positioned in an aerial vehicle to provide overhead views of an environment, and captured images may be transmitted to a remote location for viewing in substantially real time. A remote user may indicate a region of interest within the collective image, and the image capture system may render a portion of the collective image in accordance with the user's indications.
    Type: Application
    Filed: July 11, 2006
    Publication date: August 16, 2007
    Inventors: David Pollock, Patrick Reardon, Theodore Rogers, Christopher Underwood, Geoffrey Egnal, Bennett Wilburn, Stephen Pitalo
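
    A toy sketch of the region-of-interest rendering step mentioned in the entry above; the function and argument names are invented, and the stitched "collective image" is simulated with an array:

    ```python
    # Toy sketch: render the portion of a stitched collective image that a remote
    # user indicates as a region of interest.
    import numpy as np

    def render_roi(collective_image, roi):
        """Return the crop described by roi = (x, y, width, height), clipped to bounds."""
        h, w = collective_image.shape[:2]
        x, y, rw, rh = roi
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w, x + rw), min(h, y + rh)
        return collective_image[y0:y1, x0:x1]

    collective = np.arange(100 * 200).reshape(100, 200)     # stand-in for stitched views
    print(render_roi(collective, (150, 40, 80, 30)).shape)  # clipped to (30, 50)
    ```
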
  • Publication number: 20070030342
    Abstract: This invention relates to an apparatus and a method for video capture of a three-dimensional region of interest in a scene using an array of video cameras. The video cameras of the array are positioned for viewing the three-dimensional region of interest in the scene from their respective viewpoints. A triggering mechanism is provided for staggering the capture of a set of frames by the video cameras of the array. The apparatus has a processing unit for combining and operating on the set of frames captured by the array of cameras to generate a new visual output, such as high-speed video or spatio-temporal structure and motion models, that has a synthetic viewpoint of the three-dimensional region of interest. The processing involves spatio-temporal interpolation for determining the synthetic viewpoint space-time trajectory. In some embodiments, the apparatus computes a multibaseline spatio-temporal optical flow.
    Type: Application
    Filed: July 21, 2005
    Publication date: February 8, 2007
    Inventors: Bennett Wilburn, Neel Joshi, Marc Levoy, Mark Horowitz