Patents by Inventor Gordon Wetzstein

Gordon Wetzstein has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11961431
    Abstract: The disclosure describes aspects of display processing circuitry. In an aspect, one or more displays that support multiple views include one or more arrays of pixels, one or more backplanes, and processing circuitry configured to receive one or more data streams and control processing of the data streams based on policies from which to select a mode of operation, each mode of operation defining which rays of light the arrays of pixels in the displays are to contribute to generate a particular view or views and the tasks to be performed by the processing circuitry to modify the data streams accordingly. The processing circuitry further provides signaling representative of the modified data streams to the arrays of pixels through a circuit configuration of the backplanes, for the arrays of pixels to contribute the rays that will generate the particular view or views. A corresponding method is also described.
    Type: Grant
    Filed: March 12, 2021
    Date of Patent: April 16, 2024
    Assignee: Google LLC
    Inventors: Gordon Wetzstein, Andrew Victor Jones, Tomi Petteri Maila, Kari Pulli, Ryan Phillip Spicer
  • Publication number: 20240094811
    Abstract: A differential camera system for object tracking. The system includes a co-aligned light source camera assembly (LSCA) and a controller. The co-aligned LSCA includes a light source and a differential camera sensor. The light source is configured to emit light along an optical path that is directed towards an eye box including an eye of a user. The differential camera sensor is configured to detect a change in brightness of the eye caused in part by the emitted light and asynchronously output data samples corresponding to the detected change in brightness, wherein the optical path of the emitted light is substantially co-aligned with an optical path of the differential camera sensor. The controller is configured to identify a pupil of the eye based on data samples output from the differential camera sensor resulting from the emitted light, and determine a gaze location of the user based in part on the identified pupil.
    Type: Application
    Filed: September 15, 2023
    Publication date: March 21, 2024
    Inventors: Robert Konrad Konrad, Kevin Conlon Boyle, Gordon Wetzstein, Nitish Padmanaban, John Gabriel Buckmaster
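The differential camera sensors in the two entries above report asynchronous events wherever local brightness changes, rather than full frames. A minimal sketch of that event-generation model (frame-pair simulation, threshold value, and function name are illustrative assumptions, not from the patent text):

```python
import numpy as np

def brightness_change_events(prev_frame, curr_frame, threshold=0.2):
    """Emit (row, col, polarity) events wherever the log-brightness change
    between two frames exceeds a contrast threshold, mimicking how a
    differential (event) sensor asynchronously reports changes."""
    eps = 1e-6  # avoid log(0)
    delta = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    events = []
    for r, c in zip(*np.nonzero(np.abs(delta) > threshold)):
        events.append((int(r), int(c), 1 if delta[r, c] > 0 else -1))
    return events
```

In a pupil-tracking context, pulsing the co-aligned light source produces a burst of such events concentrated at the pupil, which the controller can then localize.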
  • Publication number: 20240098360
    Abstract: A tracking system for object detection and tracking. The system may include a plurality of light sources, a differential camera, and a controller. The plurality of light sources is positioned at different locations on a device and are configured to emit pulses of light that illuminate an object. The differential camera has an optical axis, and at least some of the plurality of light sources are off-axis relative to the optical axis. The differential camera is configured to detect a change in brightness of the object caused in part by one or more of the pulses of light, and asynchronously output data samples corresponding to the detected change in brightness. The controller is configured to track the object based in part on the data samples output by the differential camera.
    Type: Application
    Filed: September 15, 2023
    Publication date: March 21, 2024
    Inventors: Robert Konrad Konrad, Kevin Conlon Boyle, Gordon Wetzstein
  • Patent number: 11922562
    Abstract: Disclosed herein are methods and systems for providing different views to a viewer. One particular embodiment includes a method including providing, to a neural network, a plurality of 2D images of a 3D object. The neural network may include a signed distance function based sinusoidal representation network. The method may further include obtaining a neural model of a shape of the object by obtaining a zero-level set of the signed distance function; and modeling an appearance of the object using a spatially varying emission function. In some embodiments, the neural model may be converted into a triangular mesh representing the object which may be used to render multiple view-dependent images representative of the 3D object.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: March 5, 2024
    Assignee: Google LLC
    Inventors: Gordon Wetzstein, Andrew Jones, Petr Kellnhofer, Lars Jebe, Ryan Spicer, Kari Pulli
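The abstract above describes a sinusoidal representation network (SIREN) whose signed distance function's zero-level set gives the object's shape. A minimal NumPy sketch of such a sine-activated MLP follows; the layer sizes, random weights, and the frequency factor omega=30 are illustrative assumptions (the latter is the value commonly used in SIREN literature), not details taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(x, w, b, omega=30.0):
    """One sinusoidal (SIREN) layer: sin(omega * (x @ w + b))."""
    return np.sin(omega * (x @ w + b))

def siren_sdf(points, weights, biases):
    """Tiny sine-activated MLP mapping 3D points to signed distances.
    The final layer is linear so the output can take either sign;
    the surface is the set of points where the output is zero."""
    h = points
    for w, b in zip(weights[:-1], biases[:-1]):
        h = siren_layer(h, w, b)
    return h @ weights[-1] + biases[-1]
```

After training such a network, the zero-level set can be meshed (e.g. with marching cubes) to obtain the triangular mesh the abstract mentions.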
  • Patent number: 11921271
    Abstract: Provided herein is a macroscope comprising an objective apparatus comprising multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. Also provided herein are methods for analyzing a three-dimensional specimen, the method comprising obtaining, via a macroscope, synchronous multifocal optical images of a plurality of planes of the three-dimensional specimen, wherein the macroscope comprises an objective apparatus comprising multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. The three-dimensional specimen can be a biological specimen, such as a brain.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: March 5, 2024
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Gordon Wetzstein, Tim Machado, Karl A. Deisseroth, Isaac Kauvar
  • Publication number: 20240045215
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Application
    Filed: October 19, 2023
    Publication date: February 8, 2024
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
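This abstract (shared by several related filings below) ties the accommodation cue to the amount of parallax disparity between intra-pupil images. Under a simple small-angle model, the angular disparity between two images entering the pupil at points separated by d, for a virtual point at depth z, is roughly d / z. A sketch of that relation (the geometric model and function name are illustrative assumptions, not the patent's formulation):

```python
def parallax_disparity_rad(entry_separation_mm, virtual_depth_m):
    """Approximate angular parallax between two intra-pupil images whose
    light enters the pupil at points separated by `entry_separation_mm`,
    for a virtual point at `virtual_depth_m`.
    Small-angle approximation: theta ~= d / z."""
    return (entry_separation_mm / 1000.0) / virtual_depth_m
```

Larger disparity corresponds to a nearer virtual depth, which is how spatially offset light-source locations can drive the selectable accommodation cues the abstract describes.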
  • Patent number: 11835724
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: December 5, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
  • Publication number: 20230194879
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Application
    Filed: February 13, 2023
    Publication date: June 22, 2023
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
  • Patent number: 11662574
    Abstract: A device includes a camera assembly and a controller. The camera assembly is configured to capture images of both eyes of a user. Using the captured images, the controller determines a location for each pupil of each eye of the user. The determined pupil locations and captured images are used to determine eye tracking parameters which are used to compute values of eye tracking functions. With the computed values and a model that maps the eye tracking functions to gaze depths, a gaze depth of the user is determined. An action is performed based on the determined gaze depth.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: May 30, 2023
    Assignee: Zinn Labs, Inc.
    Inventors: Kevin Boyle, Robert Konrad, Nitish Padmanaban, Gordon Wetzstein
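The entry above maps eye tracking functions to gaze depth. One classical ingredient of such a mapping is vergence: the inward rotation of the two eyes triangulates the fixation depth. A minimal sketch under a symmetric-fixation assumption (the geometry and names here are illustrative, not the patent's model):

```python
import math

def gaze_depth(ipd_mm, left_angle_rad, right_angle_rad):
    """Estimate fixation depth (mm) from the vergence of the two eyes.
    Angles are inward rotations from straight ahead; for a symmetric
    fixation at depth d, tan(angle) = (ipd / 2) / d."""
    vergence = left_angle_rad + right_angle_rad
    if vergence <= 0:
        return float('inf')  # parallel or diverging gaze: effectively infinity
    # symmetric approximation: depth = (ipd / 2) / tan(vergence / 2)
    return (ipd_mm / 2) / math.tan(vergence / 2)
```

In practice a calibrated model (as the abstract describes) would replace this closed form, but the monotonic vergence-to-depth relation is the same.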
  • Publication number: 20230120519
    Abstract: Systems and methods for event-based gaze in accordance with embodiments of the invention are illustrated. One embodiment includes an event-based gaze tracking system, including a camera positioned to observe an eye, where the camera is configured to asynchronously sample a plurality of pixels to obtain event data indicating changes in local contrast at each pixel in the plurality of pixels, and a processor communicatively coupled to the camera, and a memory communicatively coupled to processor, where the memory contains a gaze tracking application, where the gaze tracking application directs the processor to, receive the event data from the camera, fit an eye model to the eye using the event data, map eye model parameters from the eye model to a gaze vector, and provide the gaze vector.
    Type: Application
    Filed: February 26, 2021
    Publication date: April 20, 2023
    Applicant: The Board of Trustees of the Leland Stanford Junior University
    Inventor: Gordon Wetzstein
  • Patent number: 11625095
    Abstract: Embodiments are related to a plurality of gaze sensors embedded into a frame of a headset for detection of a gaze vector of a user wearing the headset and for user control of the headset. The gaze vector for an eye of the user can be within a threshold distance from one of the gaze sensors. By monitoring signals detected by the gaze sensors, it can be determined that the gaze vector is within the threshold distance from the gaze sensor. Based on this determination, at least one action associated with the headset is initiated.
    Type: Grant
    Filed: January 20, 2022
    Date of Patent: April 11, 2023
    Assignee: Zinn Labs, Inc.
    Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
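The control scheme in the abstract above triggers an action when the gaze vector lands within a threshold distance of a frame-embedded sensor. A minimal 2D sketch of that proximity test (sensor layout, coordinates, and function name are illustrative assumptions):

```python
import numpy as np

def triggered_sensor(gaze_point, sensor_positions, threshold):
    """Return the index of the sensor the gaze lands within `threshold`
    of, or None if no sensor is close enough. Mirrors the idea of
    frame-embedded gaze sensors that initiate an action when the gaze
    vector passes near them."""
    gaze = np.asarray(gaze_point, dtype=float)
    dists = np.linalg.norm(np.asarray(sensor_positions, float) - gaze, axis=1)
    idx = int(np.argmin(dists))
    return idx if dists[idx] <= threshold else None
```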
  • Patent number: 11614628
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: March 28, 2023
    Assignee: Magic Leap, Inc.
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
  • Patent number: 11474597
    Abstract: A multiview autostereoscopic display includes a display area including an array of angular pixels, an eye tracker, and a processing system. Each angular pixel emits color that varies across a field of view of that angular pixel. The array of angular pixels displays different views in different viewing zones across the field of view of the display. The eye tracker detects the presence of the eyes of at least one viewer within specific viewing zones and produces eye tracking information including locations of the detected eyes within the specific viewing zones. The processing system renders a specific view for each detected eye based upon the location of the detected eye within the viewing zone with detected eyes, and generates control information for the array of angular pixels to cause the specific view for each detected eye to be displayed in the viewing zone in which that eye was detected.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: October 18, 2022
    Assignee: Google LLC
    Inventors: Kari Pulli, Gordon Wetzstein, Ryan Spicer, Andrew Jones, Tomi Maila, Zisimos Economou
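The multiview display above renders a specific view per detected eye based on which viewing zone the eye occupies. A minimal sketch of mapping a tracked eye's horizontal angle to a zone index (field of view, zone count, and function name are illustrative assumptions, not from the patent):

```python
def viewing_zone(eye_x_deg, fov_deg=60.0, num_zones=8):
    """Map a tracked eye's horizontal angle (degrees from the display
    normal) to the index of the viewing zone it occupies, or None if
    the eye is outside the display's field of view."""
    half = fov_deg / 2
    if not -half <= eye_x_deg < half:
        return None
    zone_width = fov_deg / num_zones
    return int((eye_x_deg + half) // zone_width)
```

The processing system would then render the eye-specific view into the returned zone, so each detected eye sees its own image.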
  • Publication number: 20220236796
    Abstract: Embodiments are related to a plurality of gaze sensors embedded into a frame of a headset for detection of a gaze vector of a user wearing the headset and for user control of the headset. The gaze vector for an eye of the user can be within a threshold distance from one of the gaze sensors. By monitoring signals detected by the gaze sensors, it can be determined that the gaze vector is within the threshold distance from the gaze sensor. Based on this determination, at least one action associated with the headset is initiated.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 28, 2022
    Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
  • Publication number: 20220238220
    Abstract: Embodiments are related to a headset integrated into a healthcare platform. The headset comprises one or more sensors embedded into a frame of the headset, a controller coupled to the one or more sensors, and a transceiver coupled to the controller. The one or more sensors capture health information data for a user wearing the headset. The controller pre-processes at least a portion of the captured health information data to generate a pre-processed portion of the health information data. The transceiver communicates the health information data and the pre-processed portion of health information data to an intermediate device communicatively coupled to the headset. The intermediate device processes at least one of the health information data and the pre-processed portion of health information data to generate processed health information data for a health-related diagnostic of the user.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 28, 2022
    Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
  • Publication number: 20220189104
    Abstract: Disclosed herein are methods and systems for providing different views to a viewer. One particular embodiment includes a method including providing, to a neural network, a plurality of 2D images of a 3D object. The neural network may include a signed distance function based sinusoidal representation network. The method may further include obtaining a neural model of a shape of the object by obtaining a zero-level set of the signed distance function; and modeling an appearance of the object using a spatially varying emission function. In some embodiments, the neural model may be converted into a triangular mesh representing the object which may be used to render multiple view-dependent images representative of the 3D object.
    Type: Application
    Filed: December 14, 2021
    Publication date: June 16, 2022
    Applicant: Raxium, Inc.
    Inventors: Gordon Wetzstein, Andrew Jones, Petr Kellnhofer, Lars Jebe, Ryan Spicer, Kari Pulli
  • Publication number: 20220146819
    Abstract: A device includes a camera assembly and a controller. The camera assembly is configured to capture images of both eyes of a user. Using the captured images, the controller determines a location for each pupil of each eye of the user. The determined pupil locations and captured images are used to determine eye tracking parameters which are used to compute values of eye tracking functions. With the computed values and a model that maps the eye tracking functions to gaze depths, a gaze depth of the user is determined. An action is performed based on the determined gaze depth.
    Type: Application
    Filed: November 9, 2021
    Publication date: May 12, 2022
    Inventors: Kevin Boyle, Robert Konrad, Nitish Padmanaban, Gordon Wetzstein
  • Publication number: 20220146834
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Application
    Filed: January 21, 2022
    Publication date: May 12, 2022
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
  • Patent number: 11231584
    Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: January 25, 2022
    Assignee: Magic Leap, Inc.
    Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
  • Publication number: 20210364771
    Abstract: Provided herein is a macroscope comprising an objective apparatus comprising multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. Also provided herein are methods for analyzing a three-dimensional specimen, the method comprising obtaining, via a macroscope, synchronous multifocal optical images of a plurality of planes of the three-dimensional specimen, wherein the macroscope comprises an objective apparatus comprising multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. The three-dimensional specimen can be a biological specimen, such as a brain.
    Type: Application
    Filed: May 20, 2021
    Publication date: November 25, 2021
    Inventors: Gordon Wetzstein, Tim Machado, Karl A. Deisseroth, Isaac Kauvar