Patents by Inventor David Patrick Luebke
David Patrick Luebke has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11892629
Abstract: Virtual reality (VR) displays are computer displays that present images or video in a manner that simulates a real experience for the viewer. In many cases, VR displays are implemented as head-mounted displays (HMDs), which provide a display in the line of sight of the user. Because current HMDs are composed of a display panel and a magnifying lens with a gap therebetween, proper functioning of the HMDs limits their design to a box-like form factor, thereby negatively impacting both comfort and aesthetics. The present disclosure provides a different configuration for a VR display which allows for improved comfort and aesthetics, including specifically at least one coherent light source, at least one pupil-replicating waveguide coupled to the at least one coherent light source to receive light therefrom, and at least one spatial light modulator coupled to the at least one pupil-replicating waveguide to modulate the light.
Type: Grant
Filed: February 11, 2022
Date of Patent: February 6, 2024
Assignee: NVIDIA Corporation
Inventors: Jonghyun Kim, Ward Lopes, David Patrick Luebke, Manu Gopakumar
-
Patent number: 11775829
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: December 12, 2022
Date of Patent: October 3, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
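The mapping/synthesis split this abstract describes can be illustrated with a toy example: a small MLP maps a latent code z from the input space to an intermediate latent code w (the "appearance vector"). This is only a minimal sketch with made-up dimensions and random weights, not the patented network:

```python
import numpy as np

rng = np.random.default_rng(0)

def mapping_network(z, weights):
    """Map a latent code z to an intermediate latent code w via a small MLP."""
    h = z
    for W in weights[:-1]:
        h = np.maximum(h @ W, 0.0)  # ReLU hidden layers
    return h @ weights[-1]          # intermediate latent code w

# Hypothetical dimensions: 8-D input latent space, 8-D intermediate space.
dims = [8, 32, 32, 8]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims[:-1], dims[1:])]

z = rng.standard_normal(8)       # latent code in the input space
w = mapping_network(z, weights)  # compact "appearance vector" per the abstract
print(w.shape)                   # (8,)
```

In the transmission scenario the abstract sketches, only w (a handful of floats) would cross the network, while a synthesis network on the receiving side turns it back into an image.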
-
Publication number: 20230110206
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: December 12, 2022
Publication date: April 13, 2023
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11625613
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: January 7, 2021
Date of Patent: April 11, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11610435
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: October 13, 2020
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11610122
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: January 7, 2021
Date of Patent: March 21, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 11580395
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Grant
Filed: October 13, 2020
Date of Patent: February 14, 2023
Assignee: NVIDIA Corporation
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Publication number: 20220334395
Abstract: Virtual reality (VR) displays are computer displays that present images or video in a manner that simulates a real experience for the viewer. In many cases, VR displays are implemented as head-mounted displays (HMDs), which provide a display in the line of sight of the user. Because current HMDs are composed of a display panel and a magnifying lens with a gap therebetween, proper functioning of the HMDs limits their design to a box-like form factor, thereby negatively impacting both comfort and aesthetics. The present disclosure provides a different configuration for a VR display which allows for improved comfort and aesthetics, including specifically at least one coherent light source, at least one pupil-replicating waveguide coupled to the at least one coherent light source to receive light therefrom, and at least one spatial light modulator coupled to the at least one pupil-replicating waveguide to modulate the light.
Type: Application
Filed: February 11, 2022
Publication date: October 20, 2022
Inventors: Jonghyun Kim, Ward Lopes, David Patrick Luebke, Manu Gopakumar
-
Publication number: 20220191638
Abstract: Apparatuses, systems, and techniques to determine head poses of users and provide audio for the users. In at least one embodiment, a head pose is determined based, at least in part, on camera frame information, and an audio signal is generated based, at least in part, on the determined head pose.
Type: Application
Filed: March 3, 2021
Publication date: June 16, 2022
Inventors: Michael Stengel, Jan Kautz, David Patrick Luebke, Morgan Samuel McGuire
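Head-pose-driven audio of the kind this abstract describes typically re-spatializes sound once the head orientation is known. One standard ingredient is the interaural time difference (ITD), sketched below with the classic Woodworth approximation; the head radius is a hypothetical average, and this is an illustration of the general idea, not the method claimed in the application:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, at room temperature
HEAD_RADIUS = 0.0875     # m, a hypothetical average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation of the ITD (seconds) for a source at the
    given azimuth (0 = straight ahead, 90 = directly to one side)."""
    a = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (a + math.sin(a))

# A source straight ahead produces no delay between the ears; a source at
# 90 degrees produces the maximum delay (roughly 0.66 ms for these numbers).
print(interaural_time_difference(0.0))   # 0.0
print(interaural_time_difference(90.0))  # seconds
```

As the tracked head pose changes, the source azimuth relative to the head changes, and the per-ear delays (and level differences, filtering, etc.) are updated accordingly.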
-
Publication number: 20210329306
Abstract: Apparatuses, systems, and techniques to perform compression of video data using neural networks to facilitate video streaming, such as video conferencing. In at least one embodiment, a sender transmits to a receiver a key frame from video data and one or more keypoints identified by a neural network from said video data, and the receiver reconstructs the video data using said key frame and the one or more received keypoints.
Type: Application
Filed: October 13, 2020
Publication date: October 21, 2021
Inventors: Ming-Yu Liu, Ting-Chun Wang, Arun Mohanray Mallya, Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko Lehtinen, Miika Samuli Aittala, Timo Oskari Aila
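The key-frame-plus-keypoints exchange described above can be sketched as a toy protocol: the full frame is sent once, and each subsequent frame is represented only by its keypoints. Everything here (the frame representation, the keypoint "detector", the reconstruction rule) is a deliberately trivial stand-in for the neural components:

```python
def sender(frames, detect_keypoints):
    """Transmit the first frame in full, then only keypoints per frame."""
    yield ("key_frame", frames[0])
    for frame in frames[1:]:
        yield ("keypoints", detect_keypoints(frame))

def receiver(stream, reconstruct):
    """Rebuild every frame from the single key frame plus the keypoints."""
    key_frame = None
    for kind, payload in stream:
        if kind == "key_frame":
            key_frame = payload
            yield key_frame
        else:
            yield reconstruct(key_frame, payload)

# Toy stand-ins: a "frame" is a list of pixels; "keypoints" are its min/max,
# and "reconstruction" just clamps the key frame into that range.
frames = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
detect = lambda f: (min(f), max(f))
recon = lambda kf, kp: [min(max(p, kp[0]), kp[1]) for p in kf]

out = list(receiver(sender(frames, detect), recon))
print(len(out))  # 3
```

The bandwidth win comes from the payload sizes: after the key frame, each transmitted frame is a handful of keypoint values rather than a full pixel array.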
-
Publication number: 20210150354
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: January 7, 2021
Publication date: May 20, 2021
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Publication number: 20210150187
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: January 7, 2021
Publication date: May 20, 2021
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 10948985
Abstract: Perceived clarity of an image presented by a display can be improved using an image stabilization technique to stabilize the image relative to a user's retina. During an illumination period, stabilization actuators are controlled to move a display panel or adjust optical components in the path of light associated with the image to shift the location of the image on the user's retina in response to head or eye movement detected by the system. In some embodiments, a display is configured to illuminate an image, and at least one stabilization actuator is configured to stabilize the image in a retina space associated with a user. Changes in the retina space can be detected by one or more sensors configured to detect a head position of the user and/or an orientation of the user's retina. The image is stabilized in retina space using the stabilization actuators.
Type: Grant
Filed: March 27, 2019
Date of Patent: March 16, 2021
Assignee: NVIDIA Corporation
Inventors: Thomas Hastings Greer, Josef Bo Spjut, David Patrick Luebke
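The core idea of counter-shifting the image against detected head motion can be sketched in a few lines. The scale factor and sign convention below are illustrative assumptions, not values from the patent:

```python
def stabilization_offset(head_delta_deg, pixels_per_degree):
    """Compute the panel/optics shift (in pixels) that counteracts head
    rotation detected during the illumination period, so the image stays
    fixed in retina space rather than in display space."""
    # Shift the image opposite to the detected head rotation.
    return -head_delta_deg * pixels_per_degree

# Hypothetical numbers: 0.5 degrees of head rotation during one illumination
# period, on a display subtending 40 pixels per degree of visual angle.
print(stabilization_offset(0.5, 40))  # -20.0
```

In a real system this offset would drive the stabilization actuators (or shift optical elements) each illumination period, with the head and eye deltas supplied by the tracking sensors the abstract mentions.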
-
Publication number: 20210049468
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: October 13, 2020
Publication date: February 18, 2021
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Patent number: 10922876
Abstract: A method, computer readable medium, and system are disclosed for redirecting a user's movement through a physical space while the user views a virtual environment. A temporary visual suppression event is detected when a user's eyes move relative to the user's head while viewing a virtual scene displayed on a display device, an orientation of the virtual scene relative to the user is modified to direct the user to physically move along a planned path through a virtual environment corresponding to the virtual scene, and the virtual scene is displayed on the display device according to the modified orientation.
Type: Grant
Filed: January 2, 2020
Date of Patent: February 16, 2021
Assignee: NVIDIA Corporation
Inventors: Qi Sun, Anjul Patney, Omer Shapira, Morgan McGuire, Aaron Eliot Lefohn, David Patrick Luebke
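The redirection described above can be sketched as a small controller that rotates the virtual scene toward the planned path only while a temporary visual suppression event (such as a saccade or blink) is active, so the user does not perceive the adjustment. The gain limit below is a hypothetical value, not one from the patent:

```python
def redirect_scene(scene_yaw_deg, error_to_path_deg, in_suppression,
                   max_step_deg=2.0):
    """Rotate the virtual scene toward the planned physical path, but only
    while a visual suppression event is detected; otherwise leave it alone."""
    if not in_suppression:
        return scene_yaw_deg
    # Clamp the per-event correction so it stays below perceptual thresholds.
    step = max(-max_step_deg, min(max_step_deg, error_to_path_deg))
    return scene_yaw_deg + step

yaw = 0.0
yaw = redirect_scene(yaw, 5.0, in_suppression=True)   # clamped to +2.0
yaw = redirect_scene(yaw, 5.0, in_suppression=False)  # no change while visible
print(yaw)  # 2.0
```

Accumulated over many saccades, these small reorientations steer the user's physical walking path (for example, keeping them inside a small tracked room) without a perceptible jump in the displayed scene.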
-
Publication number: 20210042503
Abstract: A latent code defined in an input space is processed by a mapping neural network to produce an intermediate latent code defined in an intermediate latent space. The intermediate latent code may be used as an appearance vector that is processed by a synthesis neural network to generate an image. The appearance vector is a compressed encoding of data, such as video frames including a person's face, audio, and other data. Captured images may be converted into appearance vectors at a local device and transmitted to a remote device using much less bandwidth compared with transmitting the captured images. A synthesis neural network at the remote device reconstructs the images for display.
Type: Application
Filed: October 13, 2020
Publication date: February 11, 2021
Inventors: Tero Tapani Karras, Samuli Matias Laine, David Patrick Luebke, Jaakko T. Lehtinen, Miika Samuli Aittala, Timo Oskari Aila, Ming-Yu Liu, Arun Mohanray Mallya, Ting-Chun Wang
-
Publication number: 20210012562
Abstract: Global illumination in computer graphics refers to the modeling of how light is bounced off of one or more surfaces in a computer-generated image onto other surfaces in the image (i.e., indirect light), rather than simply determining the light that hits a surface in an image directly from a light source (i.e., direct light). Rendering accurate global illumination effects in such images makes them more believable. However, simulating physically based global illumination with offline numerical solvers has traditionally been time-consuming and/or noisy and has not adapted well to dynamic scenes. The present disclosure provides a probe-based dynamic global illumination technique for computer-generated scenes.
Type: Application
Filed: July 10, 2020
Publication date: January 14, 2021
Inventors: Morgan McGuire, Alexander Majercik, David Patrick Luebke
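Probe-based global illumination generally precomputes or continuously updates irradiance at a grid of light probes and then interpolates between them at each shading point. The sketch below shows only that interpolation step, on a hypothetical 1-D probe grid; real systems use 3-D grids with directional encoding and visibility weighting:

```python
def sample_irradiance(probes, x):
    """Linearly interpolate irradiance at position x from a 1-D grid of
    light probes stored at integer grid positions 0..len(probes)-1."""
    i = min(int(x), len(probes) - 2)  # index of the probe to the left of x
    t = x - i                         # fractional position between probes
    return (1.0 - t) * probes[i] + t * probes[i + 1]

probes = [0.25, 0.75, 0.5]            # hypothetical per-probe irradiance
print(sample_irradiance(probes, 0.5))  # 0.5, halfway between 0.25 and 0.75
```

Because shading only reads and blends probe values, the probes themselves can be re-rendered incrementally each frame, which is what makes the technique suitable for dynamic scenes.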
-
Patent number: 10664049
Abstract: A method, computer readable medium, and system are disclosed for gaze tracking. The method includes the steps of receiving reflected light rays at an optical sensor, where all of the reflected light rays converge towards a rotational center of an eye, and generating pattern data based on intersections of the reflected light rays at a surface of the optical sensor. A processor computes an estimated gaze direction of the eye based on the pattern data.
Type: Grant
Filed: November 10, 2017
Date of Patent: May 26, 2020
Assignee: NVIDIA Corporation
Inventors: Joohwan Kim, Ward Lopes, David Patrick Luebke, Chengyuan Lin
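The geometry this abstract relies on (reflected rays converging toward the eye's rotational center) suggests the gaze estimate reduces to a direction anchored at that center. A minimal, hypothetical sketch of that final step, with the pattern centroid standing in for the processed pattern data:

```python
import math

def estimate_gaze(rotational_center, pattern_centroid):
    """Estimate gaze direction as the unit vector from the eye's rotational
    center toward the centroid of the reflected-ray pattern on the sensor."""
    dx = pattern_centroid[0] - rotational_center[0]
    dy = pattern_centroid[1] - rotational_center[1]
    dz = pattern_centroid[2] - rotational_center[2]
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)

# Hypothetical coordinates: centroid directly "ahead" of the center on +z.
g = estimate_gaze((0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
print(g)  # (0.0, 0.0, 1.0)
```

The heavy lifting in the claimed method is producing the pattern data from the ray intersections; this sketch only shows how a direction falls out once that pattern is summarized as a point.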
-
Publication number: 20200160590
Abstract: A method, computer readable medium, and system are disclosed for redirecting a user's movement through a physical space while the user views a virtual environment. A temporary visual suppression event is detected when a user's eyes move relative to the user's head while viewing a virtual scene displayed on a display device, an orientation of the virtual scene relative to the user is modified to direct the user to physically move along a planned path through a virtual environment corresponding to the virtual scene, and the virtual scene is displayed on the display device according to the modified orientation.
Type: Application
Filed: January 2, 2020
Publication date: May 21, 2020
Inventors: Qi Sun, Anjul Patney, Omer Shapira, Morgan McGuire, Aaron Eliot Lefohn, David Patrick Luebke
-
Patent number: RE48876
Abstract: In embodiments of the invention, an apparatus may include a display comprising a plurality of pixels and a computer system coupled with the display and operable to instruct the display to display images. The apparatus may further include an SLM array located adjacent to the display and comprising a plurality of SLMs, wherein the SLM array is operable to produce a light field by altering light emitted by the display to simulate an object that is in focus to an observer while the display and the SLM array are located within a near-eye range of the observer.
Type: Grant
Filed: March 27, 2017
Date of Patent: January 4, 2022
Assignee: NVIDIA Corporation
Inventors: David Patrick Luebke, Douglas Lanman, Thomas F. Fox, Gerrit Slavenburg