Patents by Inventor Steven M. Seitz

Steven M. Seitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10880582
    Abstract: An example telepresence terminal includes a display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate a first display image using the lenticular display based on the remote image data that is viewable from a first viewing location and generate a second display image using the lenticular display based on the remote image data and the remote depth data that is viewable from a second viewing location.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: December 29, 2020
    Assignee: Google LLC
    Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
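
A minimal Python sketch of the data flow described in the abstract above: image data from visible light, depth data from infrared light, and two depth-warped views interleaved column-by-column for a lenticular display. The sensor objects, the fixed disparity scale, and all function names are illustrative assumptions, not the patented implementation.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Frame:
        image: np.ndarray   # H x W x 3 visible-light image from the image sensor
        depth: np.ndarray   # H x W depth map from the infrared depth sensor

    def capture_local_frame(rgb_sensor, ir_depth_sensor) -> Frame:
        # Image data comes from visible light; depth data comes from infrared light
        # emitted by the infrared emitter and read back by the infrared depth sensor.
        return Frame(image=rgb_sensor.read(), depth=ir_depth_sensor.read())

    def warp_with_depth(image, depth, baseline):
        # Toy depth-based reprojection: shift pixels horizontally by a disparity
        # inversely proportional to depth (a stand-in for a real reprojection).
        h, w, _ = image.shape
        disparity = (baseline * 500.0 / np.maximum(depth, 1e-3)).astype(int)
        cols = np.clip(np.arange(w)[None, :] - disparity, 0, w - 1)
        return image[np.arange(h)[:, None], cols]

    def render_for_lenticular(remote: Frame) -> np.ndarray:
        # Two display images for two viewing locations, interleaved column-by-column
        # so the lenticular sheet shows each one only from its own viewing zone.
        view_a = remote.image                                                # first view: remote image data
        view_b = warp_with_depth(remote.image, remote.depth, baseline=0.06)  # second view: image + depth
        out = np.empty_like(view_a)
        out[:, 0::2] = view_a[:, 0::2]
        out[:, 1::2] = view_b[:, 1::2]
        return out
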
  • Publication number: 20200344500
    Abstract: An example telepresence terminal includes a display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate a first display image using the lenticular display based on the remote image data that is viewable from a first viewing location and generate a second display image using the lenticular display based on the remote image data and the remote depth data that is viewable from a second viewing location.
    Type: Application
    Filed: July 8, 2020
    Publication date: October 29, 2020
    Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
  • Patent number: 10750210
    Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate a first display image using the lenticular display based on the remote image data that is viewable from a first viewing location and generate a second display image using the lenticular display based on the remote image data and the remote depth data that is viewable from a second viewing location.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: August 18, 2020
    Assignee: Google LLC
    Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
  • Publication number: 20190306541
    Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate a first display image using the lenticular display based on the remote image data that is viewable from a first viewing location and generate a second display image using the lenticular display based on the remote image data and the remote depth data that is viewable from a second viewing location.
    Type: Application
    Filed: June 17, 2019
    Publication date: October 3, 2019
    Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
  • Patent number: 10327014
    Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate a first display image using the lenticular display based on the remote image data that is viewable from a first viewing location and generate a second display image using the lenticular display based on the remote image data and the remote depth data that is viewable from a second viewing location.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: June 18, 2019
    Assignee: Google LLC
    Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
  • Patent number: 7813538
    Abstract: In connection with imaging an inner surface of a body lumen, a mosaiced image is created from discrete images or a video produced with a small camera, as the camera is moved through the lumen. In one embodiment, a tethered capsule with a scanning optical fiber provides the images, although other types of endoscopic cameras can instead be used. A surface model of the lumen and camera pose estimates for each image or frame are required for this task. Camera pose parameters, which define camera alignment, are determined for six degrees of freedom. The size of each frame projected as a strip on the surface model depends on the longitudinal movement of the camera. The projected frames are concatenated, and the cylinder is unrolled to produce the mosaic image. Further processing, such as applying surface domain blending, improves the quality of the mosaic image.
    Type: Grant
    Filed: May 17, 2007
    Date of Patent: October 12, 2010
    Assignee: University of Washington
    Inventors: Robert E. Carroll, Eric J. Seibel, Steven M. Seitz
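
A minimal sketch, under simplifying assumptions, of the mosaicing pipeline the abstract above describes: each frame is resampled onto a strip of a cylindrical surface model using its estimated six-degree-of-freedom camera pose, the strips are concatenated, and the cylinder is read out already unrolled as a (z, theta) image. Surface-domain blending is omitted, and the pose dictionary, lumen radius, and strip height used here are illustrative placeholders rather than the patented method.

    import numpy as np

    def project(points, R, t, K):
        # Standard pinhole projection of N x 3 world points into pixel coordinates.
        cam = points @ R.T + t
        uvw = cam @ K.T
        return uvw[:, :2] / uvw[:, 2:3]

    def unrolled_strip(frame, pose, radius, strip_px, theta_px=720):
        # Resample one video frame into a thin strip of the unrolled cylinder.
        # `pose` carries the estimated 6-DOF camera pose (rotation R, translation t),
        # intrinsics K, the current longitudinal position z, and the advance per row.
        thetas = np.linspace(0.0, 2.0 * np.pi, theta_px, endpoint=False)
        strip = np.zeros((strip_px, theta_px, 3), dtype=frame.dtype)
        h, w, _ = frame.shape
        for i in range(strip_px):
            z = pose["z"] + i * pose["dz_per_row"]           # longitudinal camera movement
            ring = np.stack([radius * np.cos(thetas),
                             radius * np.sin(thetas),
                             np.full(theta_px, z)], axis=1)  # points on the cylindrical surface model
            uv = project(ring, pose["R"], pose["t"], pose["K"])
            u = np.clip(uv[:, 0].astype(int), 0, w - 1)
            v = np.clip(uv[:, 1].astype(int), 0, h - 1)
            strip[i] = frame[v, u]                           # sample the frame along the ring
        return strip

    def mosaic(frames, poses, radius=1.0, strip_px=8):
        # Concatenate the per-frame strips along the longitudinal axis; in practice the
        # strip height would depend on how far the camera advanced between frames.
        return np.concatenate([unrolled_strip(f, p, radius, strip_px)
                               for f, p in zip(frames, poses)], axis=0)
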
  • Publication number: 20080262312
    Abstract: In connection with imaging an inner surface of a body lumen, a mosaiced image is created from discrete images or a video produced with a small camera, as the camera is moved through the lumen. In one embodiment, a tethered capsule with a scanning optical fiber provides the images, although other types of endoscopic cameras can instead be used. A surface model of the lumen and camera pose estimates for each image or frame are required for this task. Camera pose parameters, which define camera alignment, are determined for six degrees of freedom. The size of each frame projected as a strip on the surface model depends on the longitudinal movement of the camera. The projected frames are concatenated, and the cylinder is unrolled to produce the mosaic image. Further processing, such as applying surface domain blending, improves the quality of the mosaic image.
    Type: Application
    Filed: May 17, 2007
    Publication date: October 23, 2008
    Applicant: University of Washington
    Inventors: Robert E. Carroll, Eric J. Seibel, Steven M. Seitz
  • Patent number: 6642924
    Abstract: A method and a system for obtaining visual information from an image sequence using a visual tunnel analysis are described. The present invention determines the position and orientation of rays captured in the image sequence and uses these rays to provide visual information. This visual information includes, for example, visual prediction information (whereby the extent of visibility and appearance of a virtual camera at arbitrary locations are determined) and visual planning (whereby a minimum number of images that need to be captured to visualize a desired region is determined). Generally, the visual tunnel analysis uses a subset of the plenoptic function to determine the position and orientation of every light ray passing through each point of images in the sequence. A visual tunnel associated with an input image sequence is a volume in visibility space that represents that portion of the visibility space occupied by the input image sequence.
    Type: Grant
    Filed: November 2, 2000
    Date of Patent: November 4, 2003
    Assignee: Microsoft Corporation
    Inventors: Sing Bing Kang, Peter-Pike J. Sloan, Steven M. Seitz
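
An illustrative sketch (not the patented method itself) of the ray-coverage reasoning behind the visual tunnel analysis described above: every captured pixel contributes a ray with a position and orientation, the set of occupied ray bins stands in for the visual tunnel volume in visibility space, and visual prediction asks what fraction of a virtual camera's rays fall inside that set. The binning resolution and function names are assumptions.

    import numpy as np

    def ray_bins(centers, directions, cell=0.25, ang=0.2):
        # Discretize rays by camera position and direction; the set of occupied
        # bins is a coarse stand-in for the visual tunnel volume in visibility space.
        pos_idx = np.floor(centers / cell).astype(int)       # N x 3 ray origins
        dir_idx = np.floor(directions / ang).astype(int)     # N x 3 unit directions
        return {tuple(np.concatenate([p, d])) for p, d in zip(pos_idx, dir_idx)}

    def coverage(tunnel_bins, query_centers, query_directions, cell=0.25, ang=0.2):
        # Visual prediction: fraction of a virtual camera's rays already captured by
        # the input image sequence (1.0 suggests its view can be synthesized).
        q = ray_bins(query_centers, query_directions, cell, ang)
        return len(q & tunnel_bins) / max(len(q), 1)

The same coverage test could drive visual planning in the opposite direction: candidate capture positions are ranked by how many currently empty bins of a desired region they would fill.
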
  • Patent number: 6363170
    Abstract: A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images.
    Type: Grant
    Filed: April 29, 1999
    Date of Patent: March 26, 2002
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Steven M. Seitz, Charles R. Dyer
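
A minimal sketch of the voxel-based reconstruction idea summarized in the abstract above: voxels are swept in a fixed near-to-far visibility ordering, each voxel is projected into the images that can still see it, and a voxel is retained as photo-consistent when its sampled colors agree within a threshold, after which it occludes voxels behind it. The projection helper, single-pixel footprint, and color threshold are simplifying assumptions, not the patented algorithm as claimed.

    import numpy as np

    def project_voxel(voxel, cam):
        # Pinhole projection of a voxel center into integer pixel coordinates.
        x = cam["K"] @ (cam["R"] @ voxel + cam["t"])
        return int(round(x[0] / x[2])), int(round(x[1] / x[2]))

    def voxel_coloring(voxel_layers, images, cameras, color_threshold=15.0):
        # voxel_layers: list of arrays of voxel centers, ordered so every voxel is
        # visited before any voxel it could occlude (the fixed visibility ordering).
        occluded = [np.zeros(img.shape[:2], dtype=bool) for img in images]
        kept = []
        for layer in voxel_layers:                       # near-to-far sweep
            for voxel in layer:
                samples, hits = [], []
                for img, cam, occ in zip(images, cameras, occluded):
                    u, v = project_voxel(voxel, cam)
                    if 0 <= v < img.shape[0] and 0 <= u < img.shape[1] and not occ[v, u]:
                        samples.append(img[v, u].astype(float))
                        hits.append((occ, v, u))
                # Keep the voxel if the unoccluded views agree on its color.
                if len(samples) >= 2 and np.std(np.stack(samples), axis=0).max() < color_threshold:
                    kept.append((voxel, np.mean(samples, axis=0)))
                    for occ, v, u in hits:
                        occ[v, u] = True                 # this voxel now occludes farther ones
        return kept
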