Patents by Inventor Gordon Wetzstein
Gordon Wetzstein has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240419721
Abstract: A system enables a user to query based on the user's gaze by receiving a query from the user and capturing, via an eye tracking system on a headset, the user's gaze location near an object in a local area. The system captures one or more images of the local area with the object and formats the images based in part on a region of interest in the one or more images that includes the object. The system generates a formatted query based in part on the query. The formatted query is provided to a search engine. Information describing the object, determined from the one or more formatted images, and information describing the query are used by the search engine to determine an answer to the query about the object. The system presents the answer to the query about the object.
Type: Application
Filed: January 29, 2024
Publication date: December 19, 2024
Inventors: Robert Konrad Konrad, Gordon Wetzstein, Kevin Conlon Boyle, John Gabriel Buckmaster, Nitish Padmanaban
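The flow this abstract describes — crop a region of interest around the tracked gaze location, then package it with the user's query for a search engine — can be sketched as below. This is an illustrative sketch only, not the patented implementation; the `crop_roi` helper and the query field names are hypothetical.

```python
def crop_roi(image, gaze_xy, size=64):
    """Crop a square region of interest around the gaze location,
    clamped to the image bounds (gaze_xy in pixel coordinates)."""
    h, w = len(image), len(image[0])
    x, y = gaze_xy
    x0 = max(0, min(w - size, x - size // 2))
    y0 = max(0, min(h - size, y - size // 2))
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]

def format_query(text, roi):
    """Package the user's query with the gaze-selected image region for a
    downstream search engine (field names are illustrative)."""
    return {"query": text, "image_roi": roi,
            "roi_size": (len(roi), len(roi[0]))}

# A 150x200 placeholder image; the user gazes near pixel (120, 80).
image = [[0] * 200 for _ in range(150)]
q = format_query("what plant is this?", crop_roi(image, (120, 80)))
print(q["roi_size"])  # → (64, 64)
```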
-
Patent number: 12066625
Abstract: Systems and methods for event-based gaze tracking in accordance with embodiments of the invention are illustrated. One embodiment includes an event-based gaze tracking system including a camera positioned to observe an eye, where the camera is configured to asynchronously sample a plurality of pixels to obtain event data indicating changes in local contrast at each pixel in the plurality of pixels; a processor communicatively coupled to the camera; and a memory communicatively coupled to the processor, where the memory contains a gaze tracking application that directs the processor to receive the event data from the camera, fit an eye model to the eye using the event data, map eye model parameters from the eye model to a gaze vector, and provide the gaze vector.
Type: Grant
Filed: February 26, 2021
Date of Patent: August 20, 2024
Assignee: The Board of Trustees of The Leland Stanford Junior University
Inventors: Gordon Wetzstein, Anastasios Nikolas Angelopoulos, Julien Martel
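The claimed pipeline (events in, eye model fit, parameters mapped to a gaze vector) can be sketched with a much simpler stand-in than the patented method: fit a circle to pupil-contour events by algebraic least squares, then map the pupil center to a gaze direction through a hypothetical affine calibration. Everything here, including the calibration matrix, is illustrative.

```python
import numpy as np

def fit_pupil_circle(events):
    """Algebraic (Kasa) least-squares circle fit to event pixel coordinates.
    events: (N, 2) array of (x, y) pixels where local contrast changed.
    Returns (cx, cy, r)."""
    x, y = events[:, 0], events[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

def pupil_to_gaze(cx, cy, calib):
    """Map pupil-center parameters to a unit gaze vector via an affine
    calibration (a hypothetical 2x3 matrix from a calibration step)."""
    gx, gy = calib @ np.array([cx, cy, 1.0])
    v = np.array([gx, gy, 1.0])
    return v / np.linalg.norm(v)

# Synthetic events on a pupil contour of radius 12 centered at (64, 48).
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
events = np.column_stack([64 + 12 * np.cos(theta), 48 + 12 * np.sin(theta)])
cx, cy, r = fit_pupil_circle(events)
calib = np.array([[0.01, 0.0, -0.64], [0.0, 0.01, -0.48]])
gaze = pupil_to_gaze(cx, cy, calib)
print(round(cx), round(cy), round(r))  # → 64 48 12
```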
-
Publication number: 20240144584
Abstract: A method of training a neural network model to generate a three-dimensional (3D) model of a scene includes: generating the 3D model based on a latent code; based on the 3D model, sampling a camera view including a camera position and a camera angle corresponding to the 3D model of the scene; generating a two-dimensional (2D) image based on the 3D model and the sampled camera view; and training the neural network model to, using the 3D model, generate a scene corresponding to the sampled camera view based on the generated 2D image and a real 2D image.
Type: Application
Filed: July 24, 2023
Publication date: May 2, 2024
Applicants: Samsung Electronics Co., Ltd.; The Board of Trustees of the Leland Stanford Junior University
Inventors: Minjung Son, Jeong Joon Park, Gordon Wetzstein
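The forward pass described by the claim (latent code → 3D model → sampled camera view → 2D image) can be sketched with toy stand-ins: an occupancy grid in place of the neural 3D representation and an axis-aligned projection in place of a real renderer. The adversarial training step against real 2D images is omitted, and all components are illustrative, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_voxels(z, size=16):
    """Toy 'generator': map a scalar latent code to a 3D occupancy grid
    (a stand-in for a neural 3D scene representation)."""
    cx = size / 2 + 2 * np.tanh(z)  # latent code shifts the object
    i, j, k = np.indices((size, size, size))
    d = np.sqrt((i - cx) ** 2 + (j - size / 2) ** 2 + (k - size / 2) ** 2)
    return (d < size / 4).astype(float)

def sample_camera(rng):
    """Sample a camera view; here just an axis to project along, a crude
    stand-in for sampling a camera position and angle."""
    return int(rng.integers(0, 3))

def render(voxels, axis):
    """'Render' a 2D image from the 3D model for the sampled view."""
    return voxels.max(axis=axis)

# One forward pass: latent -> 3D model -> sampled view -> 2D image.
img = render(generate_voxels(0.3), sample_camera(rng))
print(img.shape)  # → (16, 16)
```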
-
Patent number: 11961431
Abstract: The disclosure describes aspects of display processing circuitry. In an aspect, one or more displays that support multiple views include one or more arrays of pixels, one or more backplanes, and processing circuitry configured to receive one or more data streams and control processing of the data streams based on policies from which to select a mode of operation, each mode of operation defining which rays of light the arrays of pixels in the displays are to contribute to generate a particular view or views, and the tasks to be performed by the processing circuitry to modify the data streams accordingly. The processing circuitry further provides signaling representative of the modified data streams to the arrays of pixels through a circuit configuration of the backplanes, for the arrays of pixels to contribute the rays that generate the particular view or views. A corresponding method is also described.
Type: Grant
Filed: March 12, 2021
Date of Patent: April 16, 2024
Assignee: Google LLC
Inventors: Gordon Wetzstein, Andrew Victor Jones, Tomi Petteri Maila, Kari Pulli, Ryan Phillip Spicer
-
Publication number: 20240098360
Abstract: A tracking system for object detection and tracking. The system may include a plurality of light sources, a differential camera, and a controller. The plurality of light sources is positioned at different locations on a device and is configured to emit pulses of light that illuminate an object. The differential camera has an optical axis, and at least some of the plurality of light sources are off-axis relative to the optical axis. The differential camera is configured to detect a change in brightness of the object caused in part by one or more of the pulses of light, and asynchronously output data samples corresponding to the detected change in brightness. The controller is configured to track the object based in part on the data samples output by the differential camera.
Type: Application
Filed: September 15, 2023
Publication date: March 21, 2024
Inventors: Robert Konrad Konrad, Kevin Conlon Boyle, Gordon Wetzstein
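A differential (event) camera of the kind described here emits a sample only where brightness changes past a contrast threshold, so a light pulse makes the illuminated object "light up" in the event stream. A minimal sketch, under assumed values and a crude centroid tracker (not the patented controller logic):

```python
import numpy as np

def events_from_pulse(frame_before, frame_after, threshold=0.2):
    """Differential-camera model: emit an event at each pixel whose log
    brightness changes by more than a contrast threshold."""
    d = np.log1p(frame_after) - np.log1p(frame_before)
    ys, xs = np.nonzero(np.abs(d) > threshold)
    return np.column_stack([xs, ys])

def track_centroid(events):
    """Estimate the illuminated object's position as the event centroid."""
    return events.mean(axis=0)

# Simulate a pulse illuminating a 5x5 spot centered at (20, 12) on a
# 32x32 sensor (rows 10-14 are y, columns 18-22 are x).
dark = np.zeros((32, 32))
lit = dark.copy()
lit[10:15, 18:23] = 1.0
ev = events_from_pulse(dark, lit)
cx, cy = track_centroid(ev)
print(round(cx), round(cy))  # → 20 12
```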
-
Publication number: 20240094811
Abstract: A differential camera system for object tracking. The system includes a co-aligned light source camera assembly (LSCA) and a controller. The co-aligned LSCA includes a light source and a differential camera sensor. The light source is configured to emit light along an optical path directed towards an eye box including an eye of a user; this optical path is substantially co-aligned with an optical path of the differential camera sensor. The differential camera sensor is configured to detect a change in brightness of the eye caused in part by the emitted light and asynchronously output data samples corresponding to the detected change in brightness. The controller is configured to identify a pupil of the eye based on data samples output from the differential camera sensor resulting from the emitted light, and determine a gaze location of the user based in part on the identified pupil.
Type: Application
Filed: September 15, 2023
Publication date: March 21, 2024
Inventors: Robert Konrad Konrad, Kevin Conlon Boyle, Gordon Wetzstein, Nitish Padmanaban, John Gabriel Buckmaster
-
Patent number: 11921271
Abstract: Provided herein is a macroscope comprising an objective apparatus with multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. Also provided herein are methods for analyzing a three-dimensional specimen, the method comprising obtaining, via a macroscope, synchronous multifocal optical images of a plurality of planes of the three-dimensional specimen, where the macroscope comprises an objective apparatus with multifocal widefield optics comprising a plurality of optical components configured to focus on a plurality of planes. The three-dimensional specimen can be a biological specimen, such as the brain.
Type: Grant
Filed: May 20, 2021
Date of Patent: March 5, 2024
Assignee: The Board of Trustees of the Leland Stanford Junior University
Inventors: Gordon Wetzstein, Tim Machado, Karl A. Deisseroth, Isaac Kauvar
-
Patent number: 11922562
Abstract: Disclosed herein are methods and systems for providing different views to a viewer. One particular embodiment includes a method comprising providing, to a neural network, a plurality of 2D images of a 3D object. The neural network may include a signed-distance-function-based sinusoidal representation network. The method may further include obtaining a neural model of the shape of the object by obtaining the zero-level set of the signed distance function, and modeling the appearance of the object using a spatially varying emission function. In some embodiments, the neural model may be converted into a triangular mesh representing the object, which may be used to render multiple view-dependent images representative of the 3D object.
Type: Grant
Filed: December 14, 2021
Date of Patent: March 5, 2024
Assignee: Google LLC
Inventors: Gordon Wetzstein, Andrew Jones, Petr Kellnhofer, Lars Jebe, Ryan Spicer, Kari Pulli
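The key idea of a signed distance function (SDF) representation is that the surface is exactly the zero-level set. A minimal sketch using an analytic sphere SDF as a stand-in for the trained sinusoidal network, and a grid-based zero-crossing search as a crude stand-in for the marching-cubes style mesh extraction the claim alludes to:

```python
import numpy as np

def sdf_sphere(p, radius=0.5):
    """Signed distance to a sphere; the zero-level set is the surface.
    (Analytic stand-in for a trained sinusoidal-network SDF.)"""
    return np.linalg.norm(p, axis=-1) - radius

def surface_points(sdf, n=32, lo=-1.0, hi=1.0):
    """Sample the SDF on a grid and keep points within one cell of the
    zero-level set; a real pipeline would run marching cubes here to get
    the triangular mesh."""
    xs = np.linspace(lo, hi, n)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    d = sdf(grid)
    h = (hi - lo) / (n - 1)  # grid spacing
    return grid[np.abs(d) < h]

pts = surface_points(sdf_sphere)
radii = np.linalg.norm(pts, axis=1)
print(len(pts) > 0, np.allclose(radii, 0.5, atol=0.07))  # → True True
```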
-
Publication number: 20240045215
Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
Type: Application
Filed: October 19, 2023
Publication date: February 8, 2024
Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
-
Patent number: 11835724
Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
Type: Grant
Filed: February 13, 2023
Date of Patent: December 5, 2023
Assignee: Magic Leap, Inc.
Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
-
Publication number: 20230194879
Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
Type: Application
Filed: February 13, 2023
Publication date: June 22, 2023
Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
-
Patent number: 11662574
Abstract: A device includes a camera assembly and a controller. The camera assembly is configured to capture images of both eyes of a user. Using the captured images, the controller determines a location for each pupil of each eye of the user. The determined pupil locations and captured images are used to determine eye tracking parameters, which are used to compute values of eye tracking functions. With the computed values and a model that maps the eye tracking functions to gaze depths, a gaze depth of the user is determined. An action is performed based on the determined gaze depth.
Type: Grant
Filed: November 9, 2021
Date of Patent: May 30, 2023
Assignee: Zinn Labs, Inc.
Inventors: Kevin Boyle, Robert Konrad, Nitish Padmanaban, Gordon Wetzstein
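One way binocular pupil tracking can yield a gaze depth is through vergence: the angle between the two eyes' gaze rays shrinks as the fixation point moves away. A minimal sketch of that geometric relationship, assuming symmetric fixation straight ahead and a 63 mm interpupillary distance; this is a textbook vergence model, not the patented mapping.

```python
import math

def gaze_depth(vergence_deg, ipd_m=0.063):
    """Map the vergence angle between the two gaze rays to fixation depth
    in meters, assuming symmetric fixation straight ahead:
    depth = (ipd / 2) / tan(vergence / 2)."""
    half = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half)

# About 3.6 degrees of vergence with a 63 mm IPD corresponds to a
# fixation depth of roughly 1 meter; doubling the vergence halves it.
print(round(gaze_depth(3.6), 2))  # → 1.0
```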
-
Publication number: 20230120519
Abstract: Systems and methods for event-based gaze tracking in accordance with embodiments of the invention are illustrated. One embodiment includes an event-based gaze tracking system including a camera positioned to observe an eye, where the camera is configured to asynchronously sample a plurality of pixels to obtain event data indicating changes in local contrast at each pixel in the plurality of pixels; a processor communicatively coupled to the camera; and a memory communicatively coupled to the processor, where the memory contains a gaze tracking application that directs the processor to receive the event data from the camera, fit an eye model to the eye using the event data, map eye model parameters from the eye model to a gaze vector, and provide the gaze vector.
Type: Application
Filed: February 26, 2021
Publication date: April 20, 2023
Applicant: The Board of Trustees of the Leland Stanford Junior University
Inventor: Gordon Wetzstein
-
Patent number: 11625095
Abstract: Embodiments are related to a plurality of gaze sensors embedded into a frame of a headset for detecting a gaze vector of a user wearing the headset and for the user's control of the headset. The gaze vector for an eye of the user can be within a threshold distance from one of the gaze sensors. By monitoring signals detected by the gaze sensors, it can be determined that the gaze vector is within the threshold distance from the gaze sensor. Based on this determination, at least one action associated with the headset is initiated.
Type: Grant
Filed: January 20, 2022
Date of Patent: April 11, 2023
Assignee: Zinn Labs, Inc.
Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
-
Patent number: 11614628
Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
Type: Grant
Filed: January 21, 2022
Date of Patent: March 28, 2023
Assignee: Magic Leap, Inc.
Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn
-
Patent number: 11474597
Abstract: A multiview autostereoscopic display includes a display area including an array of angular pixels, an eye tracker, and a processing system. Each angular pixel emits color that varies across the field of view of that angular pixel. The array of angular pixels displays different views in different viewing zones across the field of view of the display. The eye tracker detects the presence of the eyes of at least one viewer within specific viewing zones and produces eye tracking information including locations of the detected eyes within the specific viewing zones. The processing system renders a specific view for each detected eye based upon the location of that eye within its viewing zone, and generates control information for the array of angular pixels to cause the specific view for each detected eye to be displayed in the viewing zone in which that eye was detected.
Type: Grant
Filed: November 2, 2020
Date of Patent: October 18, 2022
Assignee: Google LLC
Inventors: Kari Pulli, Gordon Wetzstein, Ryan Spicer, Andrew Jones, Tomi Maila, Zisimos Economou
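The control logic described here — tracked eye locations are binned into viewing zones, and each occupied zone gets a viewer-specific view — can be sketched as follows. The zone count, field of view, and the idea of a per-zone "plan" are all illustrative assumptions, not the patented control scheme.

```python
def zone_for_eye(eye_x_deg, num_zones=9, fov_deg=45.0):
    """Map a tracked eye's angular position across the display's field of
    view to a viewing-zone index (zones evenly divide the FOV)."""
    half = fov_deg / 2.0
    frac = (eye_x_deg + half) / fov_deg
    return min(num_zones - 1, max(0, int(frac * num_zones)))

def control_plan(tracked_eyes, num_zones=9):
    """Build per-zone control info: occupied zones are assigned the eye
    whose specific rendered view they should display; empty zones get
    None (e.g. a default view in a real system)."""
    plan = {z: None for z in range(num_zones)}
    for eye_id, x_deg in tracked_eyes.items():
        plan[zone_for_eye(x_deg, num_zones)] = eye_id
    return plan

# One viewer whose eyes sit a few degrees either side of display center.
plan = control_plan({"viewer1_left": -3.0, "viewer1_right": 3.0})
print(plan[3], plan[5])  # the two eyes land in distinct central zones
```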
-
Publication number: 20220236796
Abstract: Embodiments are related to a plurality of gaze sensors embedded into a frame of a headset for detecting a gaze vector of a user wearing the headset and for the user's control of the headset. The gaze vector for an eye of the user can be within a threshold distance from one of the gaze sensors. By monitoring signals detected by the gaze sensors, it can be determined that the gaze vector is within the threshold distance from the gaze sensor. Based on this determination, at least one action associated with the headset is initiated.
Type: Application
Filed: January 20, 2022
Publication date: July 28, 2022
Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
-
Publication number: 20220238220
Abstract: Embodiments are related to a headset integrated into a healthcare platform. The headset comprises one or more sensors embedded into a frame of the headset, a controller coupled to the one or more sensors, and a transceiver coupled to the controller. The one or more sensors capture health information data for a user wearing the headset. The controller pre-processes at least a portion of the captured health information data to generate a pre-processed portion of the health information data. The transceiver communicates the health information data and the pre-processed portion of health information data to an intermediate device communicatively coupled to the headset. The intermediate device processes at least one of the health information data and the pre-processed portion of health information data to generate processed health information data for a health-related diagnostic of the user.
Type: Application
Filed: January 20, 2022
Publication date: July 28, 2022
Inventors: Robert Konrad, Kevin Boyle, Nitish Padmanaban, Gordon Wetzstein
-
Publication number: 20220189104
Abstract: Disclosed herein are methods and systems for providing different views to a viewer. One particular embodiment includes a method comprising providing, to a neural network, a plurality of 2D images of a 3D object. The neural network may include a signed-distance-function-based sinusoidal representation network. The method may further include obtaining a neural model of the shape of the object by obtaining the zero-level set of the signed distance function, and modeling the appearance of the object using a spatially varying emission function. In some embodiments, the neural model may be converted into a triangular mesh representing the object, which may be used to render multiple view-dependent images representative of the 3D object.
Type: Application
Filed: December 14, 2021
Publication date: June 16, 2022
Applicant: Raxium, Inc.
Inventors: Gordon Wetzstein, Andrew Jones, Petr Kellnhofer, Lars Jebe, Ryan Spicer, Kari Pulli
-
Publication number: 20220146834
Abstract: An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity.
Type: Application
Filed: January 21, 2022
Publication date: May 12, 2022
Inventors: Michael Anthony Klug, Robert Konrad, Gordon Wetzstein, Brian T. Schowengerdt, Michal Beau Dennison Vaughn