Patents by Inventor Kurt Akeley
Kurt Akeley has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10897608
Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
Type: Grant
Filed: July 11, 2018
Date of Patent: January 19, 2021
Assignee: Google LLC
Inventors: Colvin Pitts, Chia-Kai Liang, Kurt Akeley
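To make the data layout concrete, here is a minimal numpy sketch of four-dimensional light-field data with asymmetric angular resolution, as the abstract describes. All dimension sizes (480x640 spatial, 14x6 angular) are arbitrary placeholders, not values from the patent.

```python
import numpy as np

# Hypothetical dimensions: the abstract only states that one angular
# dimension has higher resolution than the other.
SPATIAL_Y, SPATIAL_X = 480, 640   # first and second spatial dimensions
ANGULAR_U, ANGULAR_V = 14, 6      # first angular dimension at higher resolution

# Four-dimensional light-field data: two spatial plus two angular dimensions.
light_field = np.zeros((SPATIAL_Y, SPATIAL_X, ANGULAR_U, ANGULAR_V),
                       dtype=np.float32)

def render_2d(lf: np.ndarray) -> np.ndarray:
    """Collapse the angular dimensions to synthesize a conventional 2D image."""
    return lf.mean(axis=(2, 3))

image = render_2d(light_field)
print(image.shape)  # (480, 640)
```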
-
Patent number: 10567464
Abstract: A video stream of a scene for a virtual reality or augmented reality experience may be captured by one or more image capture devices. Data from the video stream may be retrieved, including base vantage data with base vantage color data depicting the scene from a base vantage location, and target vantage data with target vantage color data depicting the scene from a target vantage location. The base vantage data may be reprojected to the target vantage location to obtain reprojected target vantage data. The reprojected target vantage data may be compared with the target vantage data to obtain residual data. The residual data may be compressed by removing a subset of the residual data that is likely to be less viewer-discernable than a remainder of the residual data. A compressed video stream may be stored, including the base vantage data and the compressed residual data.
Type: Grant
Filed: December 5, 2017
Date of Patent: February 18, 2020
Assignee: Google LLC
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
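A rough sketch of the residual-compression pipeline the abstract outlines, in Python with numpy. The reprojection is stubbed out as an identity, and magnitude thresholding stands in for the patent's viewer-discernibility criterion; both are assumptions made for illustration.

```python
import numpy as np

def reproject(base: np.ndarray, base_loc, target_loc) -> np.ndarray:
    # Placeholder: a real reprojection would warp base-vantage pixels using
    # depth and the offset between vantage locations. Identity here.
    return base

def compress_residual(residual: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    # Keep only the largest-magnitude residuals -- a stand-in for removing
    # the subset "likely to be less viewer-discernable".
    threshold = np.quantile(np.abs(residual), 1.0 - keep_fraction)
    return np.where(np.abs(residual) >= threshold, residual, 0.0)

base = np.random.rand(64, 64, 3).astype(np.float32)
target = base + 0.05 * np.random.randn(64, 64, 3).astype(np.float32)

residual = target - reproject(base, base_loc=(0, 0, 0), target_loc=(0.1, 0, 0))
stored = (base, compress_residual(residual))  # base vantage + compressed residual
```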
-
Patent number: 10552947
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Grant
Filed: November 28, 2017
Date of Patent: February 4, 2020
Assignee: Google LLC
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides
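The depth-based blurring described here maps naturally onto a mask-and-composite operation. A minimal sketch using numpy and scipy, assuming a per-pixel depth map is available alongside the image; the focus depths and blur radius below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_based_blur(image, depth, focus_near, focus_far, sigma=3.0):
    """Blur pixels closer than focus_near (foreground) and farther than
    focus_far (background); leave the band in between sharp."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    out_of_focus = (depth < focus_near) | (depth > focus_far)
    result = image.copy()
    result[out_of_focus] = blurred[out_of_focus]
    return result

image = np.random.rand(128, 128, 3)          # stand-in for the input image
depth = np.random.rand(128, 128) * 10.0      # synthetic per-pixel depth map
processed = depth_based_blur(image, depth, focus_near=3.0, focus_far=6.0)
```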
-
Patent number: 10546424
Abstract: A virtual reality or augmented reality experience of a scene may be presented to a viewer using layered data retrieval and/or processing. A first layer of a video stream may be retrieved, and a first viewer position and/or orientation may be received. The first layer may be processed to generate first viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The first viewpoint video may be displayed for the viewer. Then, a second layer of the video stream may be retrieved, and a second viewer position and/or orientation may be received. The second layer may be processed to generate second viewpoint video of the scene from a second virtual viewpoint corresponding to the second viewer position and/or orientation, with higher quality than the first viewpoint video. The second viewpoint video may be displayed for the viewer.
Type: Grant
Filed: October 11, 2017
Date of Patent: January 28, 2020
Assignee: Google LLC
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
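A toy illustration of the layered retrieval order, with numpy arrays standing in for decoded layers: the coarse layer is displayed as soon as it arrives, then the detail layer refines the view at an updated pose. The additive layer model and the render_viewpoint stub are assumptions, not the patent's actual codec.

```python
import numpy as np

def render_viewpoint(layers, pose):
    """Stand-in renderer: sum whatever layers have arrived so far.
    A real implementation would reproject the scene to the given pose."""
    return np.sum(layers, axis=0)

coarse = np.random.rand(90, 160, 3) * 0.8    # layer 0: retrieved first
detail = np.random.rand(90, 160, 3) * 0.2    # layer 1: retrieved later

pose_1 = {"position": (0.0, 0.0, 0.0), "orientation": (0.0, 0.0, 0.0)}
frame_1 = render_viewpoint([coarse], pose_1)           # fast, lower quality

pose_2 = {"position": (0.05, 0.0, 0.0), "orientation": (0.0, 5.0, 0.0)}
frame_2 = render_viewpoint([coarse, detail], pose_2)   # refined, higher quality
```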
-
Patent number: 10540818
Abstract: Video data of an environment may be prepared for stereoscopic presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate stereoscopic viewpoint video of the scene, as viewed from at least two virtual viewpoints corresponding to viewpoints of an actual viewer's eyes within the viewing volume.
Type: Grant
Filed: October 11, 2017
Date of Patent: January 21, 2020
Assignee: Google LLC
Inventor: Kurt Akeley
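A skeletal version of the vantage-building loop, assuming a spherical projection surface and simple averaging as the combiner; the reprojection itself is stubbed out, since it depends on depth and camera calibration the abstract does not specify.

```python
import numpy as np

def reproject_to_sphere(image, viewpoint, vantage_location, radius=10.0):
    # Placeholder: a real version would cast rays from the vantage location
    # onto the sphere and resample the source image accordingly.
    return image

def build_vantage(images, viewpoints, location):
    """Combine several nearby captures into one vantage texture."""
    reprojected = [reproject_to_sphere(img, vp, location)
                   for img, vp in zip(images, viewpoints)]
    combined = np.mean(reprojected, axis=0)   # simple average as the combiner
    return {"location": location, "texture": combined}

captures = [np.random.rand(256, 512, 3) for _ in range(4)]
viewpoints = [(0.1 * i, 0.0, 0.0) for i in range(4)]
vantage = build_vantage(captures, viewpoints, location=(0.15, 0.0, 0.0))
```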
-
Patent number: 10474227
Abstract: A virtual reality or augmented reality experience may be presented for a viewer through the use of input including only three degrees of freedom. The input may include orientation data indicative of a viewer orientation at which a head of the viewer is oriented. The viewer orientation may be mapped to an estimated viewer location. Viewpoint video may be generated of a scene as viewed from a virtual viewpoint with a virtual location corresponding to the estimated viewer location, from along the viewer orientation. The viewpoint video may be displayed for the viewer. In some embodiments, mapping may be carried out by defining a ray at the viewer orientation, locating an intersection of the ray with a three-dimensional shape, and, based on a location of the intersection, generating the estimated viewer location. The shape may be generated via calibration with a device that receives input including six degrees of freedom.
Type: Grant
Filed: February 15, 2018
Date of Patent: November 12, 2019
Assignee: Google LLC
Inventors: Trevor Carothers, Kurt Akeley
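The ray-shape intersection step can be sketched directly. Below, a sphere models where the eyes travel as the head pivots about the neck; the pivot point, shell center, and radius are made-up calibration values, since the patent derives the shape from a six-degree-of-freedom device.

```python
import numpy as np

def orientation_to_ray(yaw, pitch):
    """Unit ray direction from yaw/pitch in radians (y up, -z forward)."""
    cp = np.cos(pitch)
    return np.array([np.sin(yaw) * cp, np.sin(pitch), -np.cos(yaw) * cp])

def intersect_sphere(origin, direction, center, radius):
    """Farthest positive ray-sphere intersection, so a ray cast from inside
    the sphere still lands on the shell; None if there is no hit."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b + np.sqrt(disc)
    return origin + t * direction if t > 0 else None

# Hypothetical calibration shape: a sphere approximating eye positions
# as the head pivots about the neck.
pivot = np.array([0.0, 0.0, 0.0])
eye_shell_center = np.array([0.0, -0.07, 0.0])
eye_shell_radius = 0.12

ray = orientation_to_ray(yaw=0.4, pitch=-0.1)
estimated_location = intersect_sphere(pivot, ray, eye_shell_center, eye_shell_radius)
```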
-
Patent number: 10469873
Abstract: A virtual reality or augmented reality experience of a scene may be decoded for playback for a viewer through a combination of CPU and GPU processing. A video stream may be retrieved from a data store. A first viewer position and/or orientation may be received from an input device, such as the sensor package on a head-mounted display (HMD). At a processor, the video stream may be partially decoded to generate a partially-decoded bitstream. At a graphics processor, the partially-decoded bitstream may be further decoded to generate viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The viewpoint video may be displayed on a display device, such as the screen of the HMD.
Type: Grant
Filed: October 11, 2017
Date of Patent: November 5, 2019
Assignee: Google LLC
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley, Zeyar Htet
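A schematic of the two-stage decode split, with both stages simulated here on the CPU: the first stage stands in for the processor's entropy decoding, the second for the GPU's dequantization and reprojection. All function bodies are placeholders, not the patent's codec.

```python
import numpy as np

def cpu_partial_decode(bitstream: bytes) -> np.ndarray:
    """CPU stage: entropy-decode the stream into coefficients.
    (Stand-in: reinterpret bytes as a coefficient array.)"""
    return np.frombuffer(bitstream, dtype=np.uint8).astype(np.float32)

def gpu_finish_decode(coefficients: np.ndarray, pose) -> np.ndarray:
    """GPU stage (simulated): dequantize and reproject the coefficients
    into viewpoint video for the given viewer pose."""
    pixels = coefficients / 255.0    # dequantization stand-in
    return pixels                    # reprojection to the pose omitted

bitstream = bytes(range(256)) * 4
partially_decoded = cpu_partial_decode(bitstream)
frame = gpu_finish_decode(partially_decoded, pose={"yaw": 0.0, "pitch": 0.0})
```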
-
Patent number: 10444931
Abstract: Video data of an environment may be prepared for presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate viewpoint video of the scene, as viewed from a virtual viewpoint corresponding to an actual viewer's viewpoint within the viewing volume.
Type: Grant
Filed: May 9, 2017
Date of Patent: October 15, 2019
Assignee: Google LLC
Inventor: Kurt Akeley
-
Patent number: 10419737
Abstract: A video stream for a scene for a virtual reality or augmented reality experience may be stored and delivered to a viewer. The video stream may be divided into a plurality of units based on time segmentation, viewpoint segmentation, and/or view orientation segmentation. Each of the units may be divided into a plurality of sub-units based on a different segmentation from the units, via time segmentation, viewpoint segmentation, and/or view orientation segmentation. At least a portion of the video stream may be stored in a file that includes a plurality of the units. Each unit may be a group of pictures that is a sequence of successive frames in time. Each sub-unit may be a vantage defining a viewpoint from which the scene is viewable. Each vantage may be further divided into tiles, each of which is part of the vantage, limited to one or more particular view orientations.
Type: Grant
Filed: September 15, 2017
Date of Patent: September 17, 2019
Assignee: Google LLC
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
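The unit/sub-unit/tile hierarchy suggests a straightforward data structure. A sketch using Python dataclasses, with field names and types chosen for illustration rather than taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Tile:
    """Part of a vantage, limited to particular view orientations."""
    orientation_range: Tuple[float, float]   # e.g. (yaw_min, yaw_max) degrees
    payload: bytes = b""

@dataclass
class Vantage:
    """Sub-unit: a viewpoint from which the scene is viewable."""
    viewpoint: Tuple[float, float, float]    # (x, y, z) location
    tiles: List[Tile] = field(default_factory=list)

@dataclass
class GroupOfPictures:
    """Unit: a sequence of successive frames in time."""
    start_frame: int
    frame_count: int
    vantages: List[Vantage] = field(default_factory=list)

gop = GroupOfPictures(start_frame=0, frame_count=30, vantages=[
    Vantage(viewpoint=(0.0, 0.0, 0.0),
            tiles=[Tile(orientation_range=(-45.0, 45.0))]),
])
```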
-
Patent number: 10341632
Abstract: An environment may be displayed from a viewpoint. According to one method, volumetric video data may be acquired depicting the environment, for example, using a tiled camera array. A plurality of vantages may be distributed throughout a viewing volume from which the environment is to be viewed. The volumetric video data may be used to generate video data for each vantage, representing the view of the environment from that vantage. User input may be received designating a viewpoint within the viewing volume. From among the plurality of vantages, a subset nearest to the viewpoint may be identified. The video data from the subset may be retrieved and combined to generate viewpoint video data depicting the environment from the viewpoint. The viewpoint video data may be displayed for the viewer to display a view of the environment from the viewpoint selected by the user.
Type: Grant
Filed: May 9, 2017
Date of Patent: July 2, 2019
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
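A compact sketch of the nearest-vantage lookup and blend. Inverse-distance weighting is used as the combiner; the patent does not specify the weighting, so that choice is an assumption.

```python
import numpy as np

def nearest_vantages(vantage_positions, viewpoint, k=4):
    """Return the indices and distances of the k vantages nearest the viewpoint."""
    d = np.linalg.norm(vantage_positions - viewpoint, axis=1)
    idx = np.argsort(d)[:k]
    return idx, d[idx]

def blend(frames, distances, eps=1e-6):
    """Combine per-vantage frames with inverse-distance weights."""
    w = 1.0 / (distances + eps)
    w /= w.sum()
    return np.tensordot(w, frames, axes=1)

positions = np.random.rand(50, 3) * 2.0   # vantages spread over the volume
frames = np.random.rand(50, 90, 160, 3)   # one frame per vantage, for brevity
viewpoint = np.array([1.0, 0.5, 1.0])     # user-designated viewpoint

idx, dist = nearest_vantages(positions, viewpoint)
viewpoint_frame = blend(frames[idx], dist)
```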
-
Publication number: 20190124318
Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
Type: Application
Filed: July 11, 2018
Publication date: April 25, 2019
Inventors: Colvin Pitts, Chia-Kai Liang, Kurt Akeley
-
Patent number: 10154197
Abstract: An image capture device, such as a camera, has multiple modes including a light field image capture mode, a conventional 2D image capture mode, and at least one intermediate image capture mode. By changing position and/or properties of the microlens array (MLA) in front of the image sensor, changes in 2D spatial resolution and angular resolution can be attained. In at least one embodiment, such changes can be performed in a continuous manner, allowing a continuum of intermediate modes to be attained.
Type: Grant
Filed: July 6, 2016
Date of Patent: December 11, 2018
Assignee: Google LLC
Inventors: Jerome Chandra Bhat, Brandon Elliott Merle Clarke, Graham Butler Myhre, Ravi Kiran Nalla, Steven David Oliver, Tony Yip Pang Poon, William D. Houck, II, Colvin Pitts, Yi-Ren Ng, Kurt Akeley
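A toy model of the continuum between modes: as a mode parameter moves from conventional 2D capture toward full light-field capture, angular samples per microlens grow and effective spatial resolution shrinks. The linear interpolation below is purely illustrative; the real trade-off depends on the MLA optics.

```python
def resolution_tradeoff(sensor_pixels_per_axis, microlens_pitch_px, mode):
    """mode = 0.0 -> conventional 2D capture; mode = 1.0 -> full light field.
    Intermediate values trade spatial resolution for angular resolution."""
    angular = 1.0 + mode * (microlens_pitch_px - 1)      # samples per microlens axis
    spatial = sensor_pixels_per_axis / angular           # effective spatial samples
    return spatial, angular

for mode in (0.0, 0.5, 1.0):
    spatial, angular = resolution_tradeoff(4000, 10, mode)
    print(f"mode={mode}: ~{spatial:.0f} spatial x {angular:.1f} angular samples")
```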
-
Publication number: 20180329485
Abstract: A virtual reality or augmented reality experience may be presented for a viewer through the use of input including only three degrees of freedom. The input may include orientation data indicative of a viewer orientation at which a head of the viewer is oriented. The viewer orientation may be mapped to an estimated viewer location. Viewpoint video may be generated of a scene as viewed from a virtual viewpoint with a virtual location corresponding to the estimated viewer location, from along the viewer orientation. The viewpoint video may be displayed for the viewer. In some embodiments, mapping may be carried out by defining a ray at the viewer orientation, locating an intersection of the ray with a three-dimensional shape, and, based on a location of the intersection, generating the estimated viewer location. The shape may be generated via calibration with a device that receives input including six degrees of freedom.
Type: Application
Filed: February 15, 2018
Publication date: November 15, 2018
Inventors: Trevor Carothers, Kurt Akeley
-
Publication number: 20180329602
Abstract: Video data of an environment may be prepared for presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate viewpoint video of the scene, as viewed from a virtual viewpoint corresponding to an actual viewer's viewpoint within the viewing volume.
Type: Application
Filed: May 9, 2017
Publication date: November 15, 2018
Inventor: Kurt Akeley
-
Patent number: 10038909
Abstract: RAW images and/or light field images may be compressed through the use of specialized techniques. The color depth of a light field image may be reduced through the use of a bit reduction algorithm such as a K-means algorithm. The image may then be retiled to group pixels of similar intensities and/or colors. The retiled image may be padded with extra pixel rows and/or pixel columns as needed, and compressed through the use of an image compression algorithm. The compressed image may be assembled with metadata pertinent to the manner in which compression was done to form a compressed image file. The compressed image file may be decompressed by following the compression method in reverse.
Type: Grant
Filed: July 6, 2016
Date of Patent: July 31, 2018
Assignee: Google LLC
Inventors: Kurt Akeley, Brendan Bevensee, Colvin Pitts, Timothy James Knight, Carl Warren Craddock, Chia-Kai Liang
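The first two stages of this pipeline, bit reduction via a scalar k-means and intensity-based retiling, are easy to sketch in numpy; parameters such as the cluster count and tile size below are arbitrary, and the final padding and image-compression stages are omitted.

```python
import numpy as np

def kmeans_1d(values, k=16, iters=10):
    """Tiny k-means over scalar pixel values: the bit-reduction step."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centers[j] = members.mean()
    return labels, centers

def retile_by_intensity(image, tile=16):
    """Reorder tiles so those with similar mean intensity are adjacent;
    the permutation must be kept as metadata to reverse the process."""
    h, w = image.shape
    tiles = [image[y:y + tile, x:x + tile]
             for y in range(0, h, tile) for x in range(0, w, tile)]
    order = np.argsort([t.mean() for t in tiles])
    return [tiles[i] for i in order], order

raw = np.random.randint(0, 1024, size=(128, 128)).astype(np.float32)  # 10-bit RAW
labels, centers = kmeans_1d(raw.ravel(), k=16)      # 10 bits -> 4 bits per pixel
reduced = labels.reshape(raw.shape).astype(np.uint8)
tiles, order = retile_by_intensity(reduced)         # 'order' goes in the metadata
```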
-
Patent number: 10033986
Abstract: A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
Type: Grant
Filed: April 29, 2016
Date of Patent: July 24, 2018
Assignee: Google LLC
Inventors: Colvin Pitts, Chia-Kai Liang, Kurt Akeley
-
Publication number: 20180097867
Abstract: A video stream of a scene for a virtual reality or augmented reality experience may be captured by one or more image capture devices. Data from the video stream may be retrieved, including base vantage data with base vantage color data depicting the scene from a base vantage location, and target vantage data with target vantage color data depicting the scene from a target vantage location. The base vantage data may be reprojected to the target vantage location to obtain reprojected target vantage data. The reprojected target vantage data may be compared with the target vantage data to obtain residual data. The residual data may be compressed by removing a subset of the residual data that is likely to be less viewer-discernable than a remainder of the residual data. A compressed video stream may be stored, including the base vantage data and the compressed residual data.
Type: Application
Filed: December 5, 2017
Publication date: April 5, 2018
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
-
Publication number: 20180089903
Abstract: A virtual reality or augmented reality experience of a scene may be presented to a viewer using layered data retrieval and/or processing. A first layer of a video stream may be retrieved, and a first viewer position and/or orientation may be received. The first layer may be processed to generate first viewpoint video of the scene from a first virtual viewpoint corresponding to the first viewer position and/or orientation. The first viewpoint video may be displayed for the viewer. Then, a second layer of the video stream may be retrieved, and a second viewer position and/or orientation may be received. The second layer may be processed to generate second viewpoint video of the scene from a second virtual viewpoint corresponding to the second viewer position and/or orientation, with higher quality than the first viewpoint video. The second viewpoint video may be displayed for the viewer.
Type: Application
Filed: October 11, 2017
Publication date: March 29, 2018
Inventors: Derek Pang, Colvin Pitts, Kurt Akeley
-
Publication number: 20180082405
Abstract: An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
Type: Application
Filed: November 28, 2017
Publication date: March 22, 2018
Inventors: Chia-Kai Liang, Kent Oberheu, Kurt Akeley, Garrett Girod, Nikhil Karnad, Francis A. Benevides, Jr.
-
Publication number: 20180070066
Abstract: According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.
Type: Application
Filed: November 13, 2017
Publication date: March 8, 2018
Inventors: Timothy Knight, Colvin Pitts, Kurt Akeley, Yuriy Romanenko, Carl (Warren) Craddock