Patents by Inventor Steven M. Seitz
Steven M. Seitz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10880582
Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate, using the lenticular display, a first display image based on the remote image data that is viewable from a first viewing location, and a second display image based on the remote image data and the remote depth data that is viewable from a second viewing location.
Type: Grant
Filed: July 8, 2020
Date of Patent: December 29, 2020
Assignee: Google LLC
Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
-
Publication number: 20200344500
Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate, using the lenticular display, a first display image based on the remote image data that is viewable from a first viewing location, and a second display image based on the remote image data and the remote depth data that is viewable from a second viewing location.
Type: Application
Filed: July 8, 2020
Publication date: October 29, 2020
Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
-
Patent number: 10750210
Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate, using the lenticular display, a first display image based on the remote image data that is viewable from a first viewing location, and a second display image based on the remote image data and the remote depth data that is viewable from a second viewing location.
Type: Grant
Filed: June 17, 2019
Date of Patent: August 18, 2020
Assignee: GOOGLE LLC
Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
-
Publication number: 20190306541
Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate, using the lenticular display, a first display image based on the remote image data that is viewable from a first viewing location, and a second display image based on the remote image data and the remote depth data that is viewable from a second viewing location.
Type: Application
Filed: June 17, 2019
Publication date: October 3, 2019
Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
-
Patent number: 10327014
Abstract: An example telepresence terminal includes a lenticular display, an image sensor, an infrared emitter, and an infrared depth sensor. The terminal may determine image data using visible light captured by the image sensor and determine depth data using infrared light emitted by the infrared emitter and captured by the infrared depth sensor. The terminal may also communicate the depth data and the image data to a remote telepresence terminal and receive remote image data and remote depth data. The terminal may also generate, using the lenticular display, a first display image based on the remote image data that is viewable from a first viewing location, and a second display image based on the remote image data and the remote depth data that is viewable from a second viewing location.
Type: Grant
Filed: September 8, 2017
Date of Patent: June 18, 2019
Assignee: GOOGLE LLC
Inventors: Daniel Goldman, Jason Lawrence, Andrew Huibers, Andrew Ian Russell, Steven M. Seitz
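The image-plus-depth pipeline these abstracts describe can be illustrated with a small depth-image-based-rendering sketch: shift the remote image's pixels by their disparity to approximate a second viewpoint, then interleave pixel columns the way a lenticular display driver might route one image to each viewing zone. The function names, the pinhole parameters `focal` and `baseline`, and the column-interleaving scheme are all simplifying assumptions for illustration, not the patented method.

```python
import numpy as np

def synthesize_view(image, depth, baseline=0.06, focal=500.0):
    """Shift each pixel horizontally by its disparity (focal * baseline / depth)
    to approximate the scene seen from a second, horizontally offset viewpoint."""
    h, w = depth.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        disparity = (focal * baseline / depth[y]).astype(int)
        tx = np.clip(xs + disparity, 0, w - 1)   # forward-warp target columns
        out[y, tx] = image[y]
    return out

def interleave_for_lenticular(left, right):
    """Alternate pixel columns so each lenticule can direct one image
    toward the first viewing location and the other toward the second."""
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]
    return out
```

A depth map with holes would leave black gaps in the warped view; real view synthesis fills those from the depth data, which is one reason the second display image above depends on both the remote image and the remote depth.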
-
Patent number: 7813538
Abstract: In connection with imaging an inner surface of a body lumen, a mosaiced image is created from discrete images or a video produced with a small camera, as the camera is moved through the lumen. In one embodiment, a tethered capsule with a scanning optical fiber provides the images, although other types of endoscopic cameras can instead be used. A surface model of the lumen and camera pose estimates for each image or frame are required for this task. Camera pose parameters, which define camera alignment, are determined for six degrees of freedom. The size of each frame projected as a strip on the surface model depends on the longitudinal movement of the camera. The projected frames are concatenated, and the cylinder is unrolled to produce the mosaic image. Further processing, such as applying surface domain blending, improves the quality of the mosaic image.
Type: Grant
Filed: May 17, 2007
Date of Patent: October 12, 2010
Assignee: University of Washington
Inventors: Robert E. Carroll, Eric J. Seibel, Steven M. Seitz
-
Publication number: 20080262312
Abstract: In connection with imaging an inner surface of a body lumen, a mosaiced image is created from discrete images or a video produced with a small camera, as the camera is moved through the lumen. In one embodiment, a tethered capsule with a scanning optical fiber provides the images, although other types of endoscopic cameras can instead be used. A surface model of the lumen and camera pose estimates for each image or frame are required for this task. Camera pose parameters, which define camera alignment, are determined for six degrees of freedom. The size of each frame projected as a strip on the surface model depends on the longitudinal movement of the camera. The projected frames are concatenated, and the cylinder is unrolled to produce the mosaic image. Further processing, such as applying surface domain blending, improves the quality of the mosaic image.
Type: Application
Filed: May 17, 2007
Publication date: October 23, 2008
Applicant: University of Washington
Inventors: Robert E. Carroll, Eric J. Seibel, Steven M. Seitz
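The project-concatenate-unroll pipeline in this abstract can be sketched in miniature: each frame contributes a strip of the unrolled cylindrical mosaic, and the strip's height follows the camera's longitudinal motion between consecutive frames. Sampling only the centre scanline of each frame and mapping camera depth `z` linearly to mosaic rows are simplifying assumptions of this sketch, not the patent's surface-model projection.

```python
import numpy as np

def mosaic_from_strips(frames, z_positions, mosaic_height=100, mosaic_width=64):
    """Paste one strip per frame into an unrolled (row = longitudinal position,
    column = angle) mosaic; strip thickness tracks motion between frames."""
    mosaic = np.zeros((mosaic_height, mosaic_width))
    z0, z1 = min(z_positions), max(z_positions)
    scale = (mosaic_height - 1) / max(z1 - z0, 1e-9)
    rows = [int((z - z0) * scale) for z in z_positions]   # row for each camera pose
    for i, frame in enumerate(frames):
        r0 = rows[i]
        r1 = rows[i + 1] if i + 1 < len(rows) else r0 + 1
        strip = frame[frame.shape[0] // 2]                # centre scanline of the frame
        mosaic[min(r0, r1):max(r0, r1) + 1, :] = strip[:mosaic_width]
    return mosaic
```

With full six-degree-of-freedom pose estimates, the projection would also account for camera tilt and off-axis position before unrolling, and surface-domain blending would smooth the seams between adjacent strips.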
-
Patent number: 6642924
Abstract: A method and a system for obtaining visual information from an image sequence using a visual tunnel analysis are described. The present invention determines the position and orientation of rays captured in the image sequence and uses these rays to provide visual information. This visual information includes, for example, visual prediction information (whereby the extent of visibility and appearance of a virtual camera at arbitrary locations are determined) and visual planning (whereby a minimum number of images that need to be captured to visualize a desired region is determined). Generally, the visual tunnel analysis uses a subset of the plenoptic function to determine the position and orientation of every light ray passing through each point of images in the sequence. A visual tunnel associated with an input image sequence is a volume in visibility space that represents that portion of the visibility space occupied by the input image sequence.
Type: Grant
Filed: November 2, 2000
Date of Patent: November 4, 2003
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Peter-Pike J. Sloan, Steven M. Seitz
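One way to make the ray-membership idea concrete: treat every captured pixel as a ray (origin, unit direction), and for visual prediction ask what fraction of the rays a virtual camera would need already pass close to its position with roughly the right direction. This tolerance test on individual rays is only a crude stand-in for the patent's visual-tunnel volume in visibility space; the function names and tolerance values are invented for illustration.

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def ray_covers(origin, direction, point, wanted_dir, pos_tol=0.1, ang_tol=0.02):
    """True if this captured ray passes within pos_tol of `point` while
    travelling within ang_tol of the direction the virtual camera needs."""
    v = np.asarray(point, float) - np.asarray(origin, float)
    t = max(np.dot(v, direction), 0.0)                 # closest point along the ray
    near = np.linalg.norm(v - t * direction) <= pos_tol
    aligned = np.dot(direction, wanted_dir) >= 1.0 - ang_tol
    return near and aligned

def predicted_visibility(captured, point, wanted_dirs, **tol):
    """Fraction of a virtual camera's rays already present in the capture --
    a rough membership test against the set the captured sequence spans."""
    covered = sum(any(ray_covers(o, d, point, w, **tol) for o, d in captured)
                  for w in map(unit, wanted_dirs))
    return covered / len(wanted_dirs)
```

Visual planning would run the same test in reverse: greedily propose new camera poses until the requested region's rays are all covered, approximating the minimum number of images to capture.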
-
Patent number: 6363170
Abstract: A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images.
Type: Grant
Filed: April 29, 1999
Date of Patent: March 26, 2002
Assignee: Wisconsin Alumni Research Foundation
Inventors: Steven M. Seitz, Charles R. Dyer
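The invariant-voxel test at the heart of this approach (published by Seitz and Dyer as voxel coloring) can be sketched as follows: sweep voxel layers in the fixed near-to-far visibility ordering, and keep a voxel when the image pixels it projects to agree in color, marking kept voxels as occluders for the layers behind them. The `sample_fn` callback (which would project a voxel into each image where it is unoccluded) and the standard-deviation consistency threshold are illustrative assumptions of this sketch.

```python
import numpy as np

def photo_consistent(samples, threshold=10.0):
    """A voxel is kept if the colors it projects to across the input images
    agree: per-channel standard deviation below a threshold."""
    samples = np.asarray(samples, float)
    return bool(np.all(samples.std(axis=0) <= threshold))

def color_voxels(layers, sample_fn, threshold=10.0):
    """Sweep layers in the fixed near-to-far visibility ordering; keep each
    voxel that is photo-consistent in the images where it is unoccluded."""
    kept = []
    occluded = set()
    for layer in layers:                      # near-to-far ordering
        for voxel in layer:
            pixels = sample_fn(voxel, occluded)   # colors from unoccluded views
            if pixels and photo_consistent(pixels, threshold):
                kept.append(voxel)
                occluded.add(voxel)           # blocks voxels behind it
    return kept
```

The fixed traversal order is what lets occlusion be resolved in a single pass: by the time a voxel is tested, every voxel that could occlude it has already been decided, which is why the cameras may be widely distributed as long as the ordering constraint holds.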