Patents by Inventor Colvin Pitts

Colvin Pitts has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10412373
    Abstract: A light-field camera system such as a tiled camera array may be used to capture a light-field of an environment. The tiled camera array may be a tiered camera array with a first plurality of cameras and a second plurality of cameras that are arranged more densely, but have lower resolution, than the cameras of the first plurality. The first plurality of cameras may be interspersed among the second plurality of cameras. The first and second pluralities may cooperate to capture the light-field. According to one method, a subview may be captured by each camera of the first and second pluralities. Estimated world properties of the environment may be computed for each subview. A confidence map may be generated to indicate a level of confidence in the estimated world properties for each subview. The confidence maps and subviews may be used to generate a virtual view of the environment.
    Type: Grant
    Filed: April 28, 2017
    Date of Patent: September 10, 2019
    Assignee: Google LLC
    Inventor: Colvin Pitts
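    A minimal sketch of the final blending step described in this abstract, assuming the subviews have already been reprojected into the virtual view and that per-pixel confidence maps are available; the data layout and function name are illustrative, not taken from the patent:
```python
import numpy as np

def blend_subviews(subviews, confidence_maps, eps=1e-8):
    """Blend reprojected subviews into one virtual view, weighting each
    pixel by its per-subview confidence. Hypothetical layout:
    subviews is N x H x W x 3, confidence_maps is N x H x W."""
    subviews = np.asarray(subviews, dtype=np.float64)
    weights = np.asarray(confidence_maps, dtype=np.float64)[..., np.newaxis]
    weighted_sum = (subviews * weights).sum(axis=0)
    total_weight = weights.sum(axis=0)
    return weighted_sum / np.maximum(total_weight, eps)

# Toy usage: two 4x4 subviews with uniform confidence.
views = np.random.rand(2, 4, 4, 3)
confs = np.ones((2, 4, 4))
print(blend_subviews(views, confs).shape)  # (4, 4, 3)
```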
  • Patent number: 10341632
    Abstract: An environment may be displayed from a viewpoint. According to one method, volumetric video data may be acquired depicting the environment, for example, using a tiled camera array. A plurality of vantages may be distributed throughout a viewing volume from which the environment is to be viewed. The volumetric video data may be used to generate video data for each vantage, representing the view of the environment from that vantage. User input may be received designating a viewpoint within the viewing volume. From among the plurality of vantages, a subset nearest to the viewpoint may be identified. The video data from the subset may be retrieved and combined to generate viewpoint video data depicting the environment from the viewpoint. The viewpoint video data may be displayed for the viewer, presenting a view of the environment from the viewpoint selected by the user.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: July 2, 2019
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
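    A minimal sketch of the vantage selection and combination described in this abstract, assuming per-vantage frames have already been generated; the inverse-distance weighting and all names are illustrative assumptions:
```python
import numpy as np

def nearest_vantages(vantage_positions, viewpoint, k=4):
    """Return indices of the k vantages closest to the requested viewpoint.
    vantage_positions: M x 3 array of vantage locations in the viewing volume."""
    d = np.linalg.norm(vantage_positions - viewpoint, axis=1)
    return np.argsort(d)[:k], d

def blend_vantage_frames(frames, distances, indices, eps=1e-6):
    """Combine the selected vantages' frames with inverse-distance weights
    (a stand-in for whatever combination the method actually uses)."""
    w = 1.0 / (distances[indices] + eps)
    w /= w.sum()
    return np.tensordot(w, frames[indices], axes=1)

positions = np.random.rand(27, 3)        # toy set of vantage locations
frames = np.random.rand(27, 4, 4, 3)     # one tiny frame per vantage
viewpoint = np.array([0.5, 0.5, 0.5])
idx, dist = nearest_vantages(positions, viewpoint, k=4)
print(blend_vantage_frames(frames, dist, idx).shape)  # (4, 4, 3)
```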
  • Patent number: 10298834
    Abstract: A video refocusing system operates in connection with refocusable video data, information, images and/or frames, which may be light field video data, information, images and/or frames, that may be focused and/or refocused after acquisition or recording. A video acquisition device acquires first refocusable light field video data of a scene, stores first refocusable video data representative of the first refocusable light field video data, acquires second refocusable light field video data of the scene after acquiring the first refocusable light field video data, determines a first virtual focus parameter (such as a virtual focus depth) using the second refocusable light field video data, generates first video data using the stored first refocusable video data and the first virtual focus parameter, wherein the first video data includes a focus depth that is different from an optical focus depth of the first refocusable light field video data, and outputs the first video data.
    Type: Grant
    Filed: July 11, 2016
    Date of Patent: May 21, 2019
    Assignee: Google LLC
    Inventors: Colvin Pitts, Yi-Ren Ng
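    The abstract does not spell out how a virtual focus parameter is applied, so as an illustration of after-capture refocusing, here is the classic shift-and-sum synthetic refocus over light-field subaperture views; the names and the shift model are assumptions, not the patent's method:
```python
import numpy as np

def shift_and_sum_refocus(subaperture_views, uv_coords, alpha):
    """Synthetically refocus a light field by shifting each subaperture
    view in proportion to its (u, v) position and averaging; 'alpha'
    plays the role of a virtual focus parameter.
    subaperture_views: N x H x W array, uv_coords: N x 2."""
    acc = np.zeros_like(subaperture_views[0], dtype=np.float64)
    for view, (u, v) in zip(subaperture_views, uv_coords):
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        acc += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return acc / len(subaperture_views)

views = np.random.rand(9, 16, 16)
uv = np.array([(u, v) for v in (-1, 0, 1) for u in (-1, 0, 1)], dtype=float)
print(shift_and_sum_refocus(views, uv, alpha=2.0).shape)  # (16, 16)
```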
  • Patent number: 10275898
    Abstract: A combined video of a scene may be generated for applications such as virtual reality or augmented reality. In one method, a camera system may be oriented at a first orientation and used to capture first video of a first portion of the scene. The camera system may then be rotated to a second orientation and used to capture second video of a second portion of the scene that is offset from the first portion such that the first video and the second video each have an overlapping video portion depicting an overlapping portion of the scene in which the first portion and the second portion of the scene overlap with each other. The first video and the second video may be combined to generate the combined video, which may depict the first and second portions substantially without duplicative inclusion of the overlapping video portion.
    Type: Grant
    Filed: May 9, 2017
    Date of Patent: April 30, 2019
    Assignee: Google LLC
    Inventors: Alex Song, Jonathan Frank, Julio C. Hernandez Zaragoza, Orin Green, Steve Cooper, Ariel Braunstein, Tim Milliron, Colvin Pitts, Yusuke Yasui, Saeid Shahhosseini, Bipeng Zhang
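    A minimal sketch of combining two captured portions without duplicating the overlap, using a simple horizontal crossfade as an illustrative stand-in for the patent's combination step:
```python
import numpy as np

def stitch_with_overlap(frame_a, frame_b, overlap_px):
    """Join two frames that share overlap_px overlapping columns,
    crossfading across the overlap so it is not included twice.
    Frames: H x W x 3; frame_b continues to the right of frame_a."""
    ramp = np.linspace(0.0, 1.0, overlap_px)[None, :, None]
    blended = (1.0 - ramp) * frame_a[:, -overlap_px:] + ramp * frame_b[:, :overlap_px]
    return np.concatenate(
        [frame_a[:, :-overlap_px], blended, frame_b[:, overlap_px:]], axis=1)

a = np.random.rand(8, 32, 3)
b = np.random.rand(8, 32, 3)
print(stitch_with_overlap(a, b, overlap_px=8).shape)  # (8, 56, 3)
```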
  • Publication number: 20190124318
    Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
    Type: Application
    Filed: July 11, 2018
    Publication date: April 25, 2019
    Inventors: Colvin Pitts, Chia-Kai Liang, Kurt Akeley
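    A small illustration of four-dimensional light-field data with two spatial and two angular dimensions, where the first angular dimension is sampled more finely than the second; the axis order and sizes are assumptions:
```python
import numpy as np

# Illustrative container for light-field samples L(y, x, v, u), with the
# u (first angular) axis sampled more finely than the v axis, mirroring
# the asymmetric angular resolutions described in the abstract above.
N_Y, N_X = 64, 64      # spatial resolution (illustrative)
N_V, N_U = 3, 9        # angular resolutions: u finer than v
light_field = np.zeros((N_Y, N_X, N_V, N_U), dtype=np.float32)

def angular_slice(lf, v_idx, u_idx):
    """Extract the 2D subview seen through angular sample (v_idx, u_idx)."""
    return lf[:, :, v_idx, u_idx]

print(angular_slice(light_field, 1, 4).shape)  # (64, 64)
```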
  • Patent number: 10154197
    Abstract: An image capture device, such as a camera, has multiple modes including a light field image capture mode, a conventional 2D image capture mode, and at least one intermediate image capture mode. By changing the position and/or properties of the microlens array (MLA) in front of the image sensor, changes in 2D spatial resolution and angular resolution can be attained. In at least one embodiment, such changes can be performed in a continuous manner, allowing a continuum of intermediate modes to be attained.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: December 11, 2018
    Assignee: Google LLC
    Inventors: Jerome Chandra Bhat, Brandon Elliott Merle Clarke, Graham Butler Myhre, Ravi Kiran Nalla, Steven David Oliver, Tony Yip Pang Poon, William D. Houck, II, Colvin Pitts, Yi-Ren Ng, Kurt Akeley
  • Patent number: 10085005
    Abstract: A capture system may capture light-field data representative of an environment for use in virtual reality, augmented reality, and the like. The system may have a plurality of light-field cameras arranged to capture a light-field volume within the environment, and a processor. The processor may use the light-field volume to generate a first virtual view depicting the environment from a first virtual viewpoint. The light-field cameras may be arranged in a tiled array to define a capture surface with a ring-shaped, spherical, or other arrangement. The processor may map the pixels captured by the image sensors to light rays received in the light-field volume, and store data descriptive of the light rays in a coordinate system representative of the light-field volume.
    Type: Grant
    Filed: March 29, 2016
    Date of Patent: September 25, 2018
    Assignee: Lytro, Inc.
    Inventors: Colvin Pitts, Yonggang Ha, William Houck
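    A minimal sketch of mapping a sensor pixel to a light ray for a camera on a ring-shaped capture surface, using a plain pinhole model; the coordinate conventions and parameters are illustrative, not the patent's:
```python
import numpy as np

def pixel_to_ray(cam_position, cam_forward, cam_up, focal_px, cx, cy, px, py):
    """Map one sensor pixel (px, py) to a world-space ray (origin, direction)
    using a simple pinhole model."""
    right = np.cross(cam_forward, cam_up)
    d_cam = np.array([(px - cx) / focal_px, (py - cy) / focal_px, 1.0])
    d_world = d_cam[0] * right + d_cam[1] * cam_up + d_cam[2] * cam_forward
    return cam_position, d_world / np.linalg.norm(d_world)

# A camera on a ring of radius 0.5 m, facing outward.
theta = np.deg2rad(30.0)
pos = 0.5 * np.array([np.cos(theta), np.sin(theta), 0.0])
fwd = np.array([np.cos(theta), np.sin(theta), 0.0])
up = np.array([0.0, 0.0, 1.0])
origin, direction = pixel_to_ray(pos, fwd, up, focal_px=800.0, cx=320, cy=240, px=100, py=120)
print(origin, direction)
```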
  • Patent number: 10038909
    Abstract: RAW images and/or light field images may be compressed through the use of specialized techniques. The color depth of a light field image may be reduced through the use of a bit reduction algorithm such as a K-means algorithm. The image may then be retiled to group pixels of similar intensities and/or colors. The retiled image may be padded with extra pixel rows and/or pixel columns as needed, and compressed through the use of an image compression algorithm. The compressed image may be assembled with metadata pertinent to the manner in which compression was done to form a compressed image file. The compressed image file may be decompressed by following the compression method in reverse.
    Type: Grant
    Filed: July 6, 2016
    Date of Patent: July 31, 2018
    Assignee: Google LLC
    Inventors: Kurt Akeley, Brendan Bevensee, Colvin Pitts, Timothy James Knight, Carl Warren Craddock, Chia-Kai Liang
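    A rough sketch of the pipeline this abstract outlines: quantize intensities with a small k-means, retile so similar pixels are grouped, pad, compress, and bundle metadata. zlib stands in for the unspecified image compression algorithm, and all names are illustrative:
```python
import json
import zlib
import numpy as np

def kmeans_1d(values, k, iters=10):
    """Tiny 1-D k-means used to quantize intensities to k levels."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def compress_raw(image, k=16, tile=8):
    """Quantize with k-means, reorder tiles by mean intensity so similar
    pixels sit together, pad, compress, and bundle metadata."""
    h, w = image.shape
    labels, centers = kmeans_1d(image.ravel().astype(np.float64), k)
    quantized = labels.reshape(h, w).astype(np.uint8)

    # Pad so both dimensions are multiples of the tile size.
    ph, pw = (-h) % tile, (-w) % tile
    padded = np.pad(quantized, ((0, ph), (0, pw)), mode="edge")
    th, tw = padded.shape[0] // tile, padded.shape[1] // tile
    tiles = padded.reshape(th, tile, tw, tile).transpose(0, 2, 1, 3).reshape(-1, tile, tile)

    # Retile: order tiles by mean level so neighbouring data is similar.
    order = np.argsort(tiles.mean(axis=(1, 2)))
    payload = zlib.compress(tiles[order].tobytes())
    metadata = {"shape": [h, w], "tile": tile, "order": order.tolist(),
                "centers": centers.tolist()}
    return payload, json.dumps(metadata)

payload, meta = compress_raw(np.random.randint(0, 4096, (64, 64)).astype(np.float64))
print(len(payload), len(meta))
```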
  • Patent number: 10033986
    Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
    Type: Grant
    Filed: April 29, 2016
    Date of Patent: July 24, 2018
    Assignee: Google LLC
    Inventors: Colvin Pitts, Chia-Kai Liang, Kurt Akeley
  • Publication number: 20180097867
    Abstract: A video stream of a scene for a virtual reality or augmented reality experience may be captured by one or more image capture devices. Data from the video stream may be retrieved, including base vantage data with base vantage color data depicting the scene from a base vantage location, and target vantage data with target vantage color data depicting the scene from a target vantage location. The base vantage data may be reprojected to the target vantage location to obtain reprojected target vantage data. The reprojected target vantage data may be compared with the target vantage data to obtain residual data. The residual data may be compressed by removing a subset of the residual data that is likely to be less viewer-discernable than a remainder of the residual data. A compressed video stream may be stored, including the base vantage data and the compressed residual data.
    Type: Application
    Filed: December 5, 2017
    Publication date: April 5, 2018
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
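    A minimal sketch of the residual step, assuming the base-vantage frame has already been reprojected to the target vantage; thresholding small residual magnitudes is a crude stand-in for removing the subset that is "less viewer-discernable", and zlib stands in for the actual entropy coding:
```python
import zlib
import numpy as np

def compress_residual(target_frame, reprojected_frame, keep_fraction=0.25):
    """Form the difference between the true target-vantage frame and the
    frame reprojected from the base vantage, zero out the smallest-magnitude
    residuals, and compress what remains."""
    residual = target_frame.astype(np.float32) - reprojected_frame.astype(np.float32)
    magnitude = np.abs(residual)
    threshold = np.quantile(magnitude, 1.0 - keep_fraction)
    residual[magnitude < threshold] = 0.0
    return zlib.compress(residual.astype(np.float16).tobytes())

target = np.random.rand(32, 32, 3)
reprojected = target + 0.01 * np.random.randn(32, 32, 3)
blob = compress_residual(target, reprojected)
print(len(blob), "bytes of compressed residual")
```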
  • Publication number: 20180089903
    Abstract: A virtual reality or augmented reality experience of a scene may be presented to a viewer using layered data retrieval and/or processing. A first layer of a video stream may be retrieved, and a first viewer position and/or orientation may be received. The first layer may be processed to generate first viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The first viewpoint video may be displayed for the viewer. Then, a second layer of the video stream may be retrieved, and a second viewer position and/or orientation may be received. The second layer may be processed to generate second viewpoint video of the scene from a second virtual viewpoint corresponding to the second viewer position and/or orientation, with higher quality than the first viewpoint video. The second viewpoint video may be displayed for the viewer.
    Type: Application
    Filed: October 11, 2017
    Publication date: March 29, 2018
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
  • Publication number: 20180070067
    Abstract: According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.
    Type: Application
    Filed: November 13, 2017
    Publication date: March 8, 2018
    Inventors: Timothy Knight, Colvin Pitts, Kurt Akeley, Yuriy Romanenko, Carl (Warren) Craddock
  • Publication number: 20180070066
    Abstract: According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.
    Type: Application
    Filed: November 13, 2017
    Publication date: March 8, 2018
    Inventors: Timothy Knight, Colvin Pitts, Kurt Akeley, Yuriy Romanenko, Carl (Warren) Craddock
  • Publication number: 20180035134
    Abstract: A virtual reality or augmented reality experience of a scene may be decoded for playback for a viewer through a combination of CPU and GPU processing. A video stream may be retrieved from a data store. A first viewer position and/or orientation may be received from an input device, such as the sensor package on a head-mounted display (HMD). At a processor, the video stream may be partially decoded to generate a partially-decoded bitstream. At a graphics processor, the partially-decoded bitstream may be further decoded to generate viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The viewpoint video may be displayed on a display device, such as the screen of the HMD.
    Type: Application
    Filed: October 11, 2017
    Publication date: February 1, 2018
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley, Zeyar Htet
  • Publication number: 20180020204
    Abstract: A video stream of a scene for a virtual reality or augmented reality experience may be stored and delivered to a viewer. The video stream may be divided into a plurality of units based on time segmentation, viewpoint segmentation, and/or view orientation segmentation. Each of the units may be divided into a plurality of sub-units based on a different segmentation from the units, via time segmentation, viewpoint segmentation, and/or view orientation segmentation. At least a portion of the video stream may be stored in a file that includes a plurality of the units. Each unit may be a group of pictures that is a sequence of successive frames in time. Each sub-unit may be a vantage defining a viewpoint from which the scene is viewable. Each vantage may be further divided into tiles, each of which is part of the vantage, limited to one or more particular view orientations.
    Type: Application
    Filed: September 15, 2017
    Publication date: January 18, 2018
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
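    An illustrative data layout for the segmentation this abstract describes, with units as groups of pictures, vantages as sub-units, and tiles limited to particular view orientations; the field names are assumptions, not the patent's terminology:
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tile:
    yaw_range: tuple       # view orientations this tile covers, in degrees
    pitch_range: tuple
    payload: bytes = b""

@dataclass
class Vantage:
    position: tuple        # viewpoint location within the viewing volume
    tiles: List[Tile] = field(default_factory=list)

@dataclass
class Unit:
    start_frame: int       # first frame of this group of pictures
    frame_count: int
    vantages: List[Vantage] = field(default_factory=list)

unit = Unit(start_frame=0, frame_count=30, vantages=[
    Vantage(position=(0.0, 0.0, 0.0),
            tiles=[Tile(yaw_range=(-45, 45), pitch_range=(-30, 30))]),
])
print(len(unit.vantages), len(unit.vantages[0].tiles))
```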
  • Patent number: 9866810
    Abstract: According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.
    Type: Grant
    Filed: February 22, 2016
    Date of Patent: January 9, 2018
    Assignee: Lytro, Inc.
    Inventors: Timothy Knight, Colvin Pitts, Kurt Akeley, Yuriy Romanenko, Carl (Warren) Craddock
  • Publication number: 20170332000
    Abstract: A high dynamic range light-field image may be captured through the use of a light-field imaging system. In a first sensor of the light-field imaging system, first image data may be captured at a first exposure level. In the first sensor or in a second sensor of the light-field imaging system, second image data may be captured at a second exposure level greater than the first exposure level. In a data store, the first image data and the second image data may be received. In a processor, the first image data and the second image data may be combined to generate a light-field image with high dynamic range.
    Type: Application
    Filed: May 10, 2016
    Publication date: November 16, 2017
    Inventors: Zejing Wang, Kurt Akeley, Colvin Pitts, Jon Karafin
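    A minimal sketch of combining two exposure levels into a high-dynamic-range result, using a simple clipped-pixel fallback rule as an illustrative combination; the abstract does not specify the weighting:
```python
import numpy as np

def merge_exposures(low_exposure, high_exposure, exposure_ratio, saturation=0.95):
    """Combine a short and a long exposure of the same scene: use the long
    exposure where it is not clipped and fall back to the scaled short
    exposure elsewhere."""
    low = low_exposure.astype(np.float64) * exposure_ratio   # bring to common scale
    high = high_exposure.astype(np.float64)
    use_high = high < saturation
    return np.where(use_high, high, low)

short_exp = np.random.rand(16, 16) * 0.25
long_exp = np.clip(short_exp * 4.0, 0.0, 1.0)                # 4x exposure, clipped
hdr = merge_exposures(short_exp, long_exp, exposure_ratio=4.0)
print(hdr.min(), hdr.max())
```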
  • Publication number: 20170244948
    Abstract: An environment may be displayed from a viewpoint. According to one method, volumetric video data may be acquired depicting the environment, for example, using a tiled camera array. A plurality of vantages may be distributed throughout a viewing volume from which the environment is to be viewed. The volumetric video data may be used to generate video data for each vantage, representing the view of the environment from that vantage. User input may be received designating a viewpoint within the viewing volume. From among the plurality of vantages, a subset nearest to the viewpoint may be identified. The video data from the subset may be retrieved and combined to generate viewpoint video data depicting the environment from the viewpoint. The viewpoint video data may be displayed for the viewer, presenting a view of the environment from the viewpoint selected by the user.
    Type: Application
    Filed: May 9, 2017
    Publication date: August 24, 2017
    Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
  • Publication number: 20170237971
    Abstract: A light-field camera system such as a tiled camera array may be used to capture a light-field of an environment. The tiled camera array may be a tiered camera array with a first plurality of cameras and a second plurality of cameras that are arranged more densely, but have lower resolution, than the cameras of the first plurality. The first plurality of cameras may be interspersed among the second plurality of cameras. The first and second pluralities may cooperate to capture the light-field. According to one method, a subview may be captured by each camera of the first and second pluralities. Estimated world properties of the environment may be computed for each subview. A confidence map may be generated to indicate a level of confidence in the estimated world properties for each subview. The confidence maps and subviews may be used to generate a virtual view of the environment.
    Type: Application
    Filed: April 28, 2017
    Publication date: August 17, 2017
    Inventor: Colvin Pitts
  • Publication number: 20170139131
    Abstract: A camera may have two or more image sensors, including a first image sensor and a second image sensor. The camera may have a main lens that directs incoming light along an optical path, and a microlens array positioned within the optical path. The camera may also have two or more fiber optic bundles, including first and second fiber optic bundles with first and second leading ends, respectively. A first trailing end of the first fiber optic bundle may be positioned proximate the first image sensor, and a second trailing end of the second fiber optic bundle may be positioned proximate the second image sensor, displaced from the first trailing end by a gap. The leading ends may be positioned adjacent to each other within the optical path such that image data captured by the image sensors can be combined to define a single light-field image substantially unaffected by the gap.
    Type: Application
    Filed: February 1, 2017
    Publication date: May 18, 2017
    Inventors: Jon Karafin, Colvin Pitts, Yuriy Romanenko