Patents by Inventor Morgan McGuire

Morgan McGuire has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119664
    Abstract: A remote device utilizes ray tracing to compute a light field for a scene to be rendered, where the light field includes information about light reflected off surfaces within the scene. This light field is then compressed utilizing one or more video compression techniques that implement temporal reuse, such that only differences between the light field for the scene and a light field for a previous scene are compressed. The compressed light field data is then sent to a client device that decompresses the light field data and uses such data to obtain the light field for the scene at the client device. This light field is then used by the client device to compute global illumination for the scene. The global illumination may be used to accurately render the scene at the client device, resulting in a realistic scene that is presented by the client device.
    Type: Application
    Filed: December 19, 2023
    Publication date: April 11, 2024
    Inventors: Michael Stengel, Alexander Majercik, Ben Boudaoud, Morgan McGuire
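
The temporal-reuse step described in the abstract above, encoding only what changed in the light field between frames, can be illustrated with a short Python sketch. It assumes the light field is stored as a flat array of probe texels; the function names and the change threshold are hypothetical stand-ins for the video-compression techniques the filing actually covers.

    import numpy as np

    def encode_light_field_delta(current: np.ndarray, previous: np.ndarray,
                                 threshold: float = 1e-3):
        """Encode only the probe texels that changed since the previous frame.

        `current` and `previous` are flattened light-field probe atlases
        (radiance per texel). Returns the indices and new values of texels
        whose change exceeds `threshold`.
        """
        delta = current - previous
        changed = np.nonzero(np.abs(delta) > threshold)[0]
        return changed, current[changed]

    def decode_light_field_delta(previous: np.ndarray, changed: np.ndarray,
                                 values: np.ndarray) -> np.ndarray:
        """Reconstruct the current light field from the previous one plus deltas."""
        current = previous.copy()
        current[changed] = values
        return current

    if __name__ == "__main__":
        prev = np.random.rand(1024).astype(np.float32)
        curr = prev.copy()
        curr[::37] += 0.05          # a few texels change between frames
        idx, vals = encode_light_field_delta(curr, prev)
        assert np.allclose(decode_light_field_delta(prev, idx, vals), curr)
        print(f"sent {idx.size} of {curr.size} texels")
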
  • Publication number: 20240112689
    Abstract: A computer-implemented method includes receiving, at a server, a first audio stream of a performance associated with a first client device. The method further includes receiving, at the server, a second audio stream of the performance associated with a second client device. The method further includes during a time window of the performance, where the time window is less than a total time of the performance: generating a synthesized first audio stream that predicts a future of the performance based on audio features of the first audio stream and mixing the synthesized first audio stream and the second audio stream to form a combined audio stream that synchronizes the synthesized first audio stream and the second audio stream, where the time window is advanced and the generating and the mixing are repeated until the performance is complete. The method further includes transmitting the combined audio stream to the second client device.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 4, 2024
    Applicant: Roblox Corporation
    Inventors: Mahesh Kumar Nandwana, Kiran Bhat, Morgan McGuire
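
A rough Python sketch of the per-window flow described in the entry above: a predicted continuation of the first client's stream is synthesized and mixed with the second client's stream. The predictor below simply repeats the most recent window, whereas the filing describes predicting the performance's future from audio features, so every name, the window size, and the equal mixing weights are illustrative assumptions.

    import numpy as np

    def predict_future(stream: np.ndarray, window: int) -> np.ndarray:
        """Stand-in predictor: return a guess at the next `window` samples.

        A trained model over audio features would go here; this placeholder
        just repeats the latest window of the received stream.
        """
        return stream[-window:].copy()

    def mix_synchronized(stream_a: np.ndarray, stream_b: np.ndarray,
                         window: int) -> np.ndarray:
        """Mix a synthesized continuation of stream A with live stream B."""
        synthesized_a = predict_future(stream_a, window)
        live_b = stream_b[-window:]
        return 0.5 * (synthesized_a + live_b)   # combined, synchronized window

    if __name__ == "__main__":
        t = np.linspace(0.0, 1.0, 48000)
        vocals = np.sin(2 * np.pi * 440 * t)    # stream from the first client
        guitar = np.sin(2 * np.pi * 220 * t)    # stream from the second client
        print(mix_synchronized(vocals, guitar, window=4800).shape)
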
  • Publication number: 20240112691
    Abstract: A computer-implemented method includes receiving a first audio stream of a performance associated with a first client device. The method further includes during a time window of the performance, wherein the time window is less than a total time of the performance: generating a synthesized first audio stream that predicts a future of the performance based on audio features of the first audio stream and mixing the synthesized first audio stream and a second audio stream associated with a second client device to form a combined audio stream that synchronizes the synthesized first audio stream and the second audio stream, where the time window is advanced and the generating and the mixing are repeated until the performance is complete.
    Type: Application
    Filed: October 4, 2022
    Publication date: April 4, 2024
    Applicant: Roblox Corporation
    Inventors: Mahesh Kumar Nandwana, Kiran Bhat, Morgan McGuire
  • Patent number: 11941752
    Abstract: A remote device utilizes ray tracing to compute a light field for a scene to be rendered, where the light field includes information about light reflected off surfaces within the scene. This light field is then compressed utilizing one or more video compression techniques that implement temporal reuse, such that only differences between the light field for the scene and a light field for a previous scene are compressed. The compressed light field data is then sent to a client device that decompresses the light field data and uses such data to obtain the light field for the scene at the client device. This light field is then used by the client device to compute global illumination for the scene. The global illumination may be used to accurately render the scene at the client device, resulting in a realistic scene that is presented by the client device.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: March 26, 2024
    Assignee: NVIDIA CORPORATION
    Inventors: Michael Stengel, Alexander Majercik, Ben Boudaoud, Morgan McGuire
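
The client-side half of this pipeline, decompressing the light field and using it to compute global illumination, could look roughly like the nearest-probe lookup below. Real probe-based shading filters over several probes and directions; the single-probe lookup, parameter names, and data layout here are assumptions made only for illustration.

    import numpy as np

    def shade_with_light_field(point: np.ndarray,
                               probe_positions: np.ndarray,
                               probe_irradiance: np.ndarray,
                               albedo: np.ndarray) -> np.ndarray:
        """Approximate diffuse global illumination at a surface point by
        looking up irradiance from the nearest decompressed light-field probe."""
        nearest = np.argmin(np.linalg.norm(probe_positions - point, axis=1))
        return albedo * probe_irradiance[nearest]

    if __name__ == "__main__":
        probes = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
        irradiance = np.array([[0.2, 0.2, 0.2], [0.9, 0.8, 0.7]])
        shading_point = np.array([0.9, 0.1, 0.0])
        print(shade_with_light_field(shading_point, probes, irradiance,
                                     albedo=np.array([0.5, 0.5, 0.5])))
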
  • Publication number: 20240087596
    Abstract: A computer-implemented method to determine whether to introduce latency into an audio stream from a particular speaker includes receiving an audio stream from a sender device. The method further includes providing, as input to a trained machine-learning model, the audio stream and a speech analysis score, information about one or more voice emotion parameters, and one or more voice emotion scores for a first user associated with the sender device, wherein the trained machine-learning model is iteratively applied to the audio stream and wherein each iteration corresponds to a respective portion of the audio stream. The method further includes generating as output, with the trained machine-learning model, a level of toxicity in the audio stream. The method further includes transmitting the audio stream to a recipient device, wherein the transmitting is performed to introduce a time delay in the audio stream based on the level of toxicity.
    Type: Application
    Filed: September 8, 2022
    Publication date: March 14, 2024
    Applicant: Roblox Corporation
    Inventors: Mahesh Kumar Nandwana, Philippe Clavel, Morgan McGuire
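
A minimal sketch of the delay mechanism described above: a per-chunk toxicity score drives how long each portion of the audio is held before it is forwarded to the recipient. The `score_toxicity` callback stands in for the trained machine-learning model in the filing, and the linear delay rule and parameter names are assumptions.

    import time

    def forward_with_toxicity_delay(audio_chunks, score_toxicity, send,
                                    max_delay_s: float = 2.0) -> None:
        """Forward audio chunks, inserting a delay proportional to the
        predicted toxicity of each chunk (a stand-in for the patented logic)."""
        for chunk in audio_chunks:
            toxicity = score_toxicity(chunk)    # model inference per portion
            time.sleep(max_delay_s * toxicity)  # higher toxicity => more latency
            send(chunk)

    if __name__ == "__main__":
        forward_with_toxicity_delay(["chunk-0", "chunk-1"],
                                    score_toxicity=lambda chunk: 0.1,
                                    send=print,
                                    max_delay_s=0.2)
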
  • Patent number: 11922567
    Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). The first system and second system can both be in a cloud, with a video transmitted to a local system.
    Type: Grant
    Filed: April 4, 2022
    Date of Patent: March 5, 2024
    Assignee: NVIDIA Corporation
    Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
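
The split this patent describes, indirect lighting computed by a remote system and direct lighting computed locally, then combined into one frame, can be sketched as below. The constant ambient term and the scaled direct term are placeholders; only the structure (two partial lighting passes composited by the first system) mirrors the abstract.

    import numpy as np

    def server_indirect_lighting(scene_buffer: np.ndarray) -> np.ndarray:
        """Stand-in for the remote system's indirect-lighting pass."""
        return 0.2 * np.ones_like(scene_buffer)

    def client_direct_lighting(scene_buffer: np.ndarray) -> np.ndarray:
        """Stand-in for the local system's latency-sensitive direct-lighting pass."""
        return 0.8 * scene_buffer

    def composite_frame(scene_buffer: np.ndarray) -> np.ndarray:
        """Combine the two partial lighting results into the final image."""
        return client_direct_lighting(scene_buffer) + server_indirect_lighting(scene_buffer)

    if __name__ == "__main__":
        print(composite_frame(np.random.rand(4, 4)))
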
  • Patent number: 11875449
    Abstract: Systems and methods are described for rendering complex surfaces or geometry. In at least one embodiment, neural signed distance functions (SDFs) can be used that efficiently capture multiple levels of detail (LODs), and that can be used to reconstruct multi-dimensional geometry or surfaces with high image quality. An example architecture can represent complex shapes in a compressed format with high visual fidelity, and can generalize across different geometries from a single learned example. Extremely small multi-layer perceptrons (MLPs) can be used with an octree-based feature representation for the learned neural SDFs.
    Type: Grant
    Filed: May 16, 2022
    Date of Patent: January 16, 2024
    Assignee: Nvidia Corporation
    Inventors: Towaki Alan Takikawa, Joey Litalien, Kangxue Yin, Karsten Julian Kreis, Charles Loop, Morgan McGuire, Sanja Fidler
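
A toy version of the decode step implied by the abstract above: features assumed to have been gathered from several levels of detail of the octree are summed and fed, together with the query point, through an extremely small MLP that returns a signed distance. The feature dimensions, layer sizes, and random weights are illustrative assumptions, not the patented architecture.

    import numpy as np

    def tiny_mlp(x: np.ndarray, w1, b1, w2, b2) -> float:
        """Deliberately small decoder: one ReLU hidden layer, scalar SDF output."""
        h = np.maximum(w1 @ x + b1, 0.0)
        return float(w2 @ h + b2)

    def query_sdf(point: np.ndarray, level_features: np.ndarray, weights) -> float:
        """Decode a signed distance at `point` from per-LOD feature vectors.

        `level_features` stands in for features interpolated from the octree
        nodes containing the point at each level of detail.
        """
        feature = np.sum(level_features, axis=0)    # aggregate coarse-to-fine
        return tiny_mlp(np.concatenate([point, feature]), *weights)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        point = rng.random(3)
        level_features = rng.random((4, 8))         # 4 LODs, 8-dim features
        w1, b1 = rng.standard_normal((16, 11)), np.zeros(16)
        w2, b2 = rng.standard_normal(16), 0.0
        print(query_sdf(point, level_features, (w1, b1, w2, b2)))
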
  • Patent number: 11660535
    Abstract: The disclosure provides features or schemes that improve a user's experience with an interactive computer product by reducing latency through late latching and late warping. The late warping can be applied by imaging hardware based on late latch inputs and is applicable for both local and cloud computing environments. In one aspect, the disclosure provides a method of operating an imaging system employing late latching and late warping. In one example the method of operating an imaging system includes: (1) rendering a rendered image based on a user input from an input device and scene data from an application engine, (2) obtaining a late latch input from the input device, (3) rendering, employing imaging hardware, a warped image by late warping at least a portion of the rendered image based on the late latch input, and (4) updating state information in the application engine with late latch and warp information.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: May 30, 2023
    Assignee: NVIDIA Corporation
    Inventors: Pyarelal Knowles, Ben Boudaoud, Josef Spjut, Morgan McGuire, Kamran Binaee, Joohwan Kim, Harish Vutukuru
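
Late warping can be caricatured as re-projecting an already-rendered frame using input that arrived after rendering began. The one-dimensional pixel shift below only shows where the late-latched input enters; in the patent the warp is applied by imaging hardware just before the frame is presented, and the parameter names here are assumptions.

    import numpy as np

    def late_warp(rendered: np.ndarray, early_input_px: int,
                  late_input_px: int) -> np.ndarray:
        """Shift the rendered frame by the motion that arrived after rendering
        started, approximating a late warp applied just before presentation."""
        shift = late_input_px - early_input_px
        return np.roll(rendered, shift, axis=1)

    if __name__ == "__main__":
        frame = np.arange(12).reshape(3, 4)
        print(late_warp(frame, early_input_px=0, late_input_px=1))
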
  • Patent number: 11630312
    Abstract: An augmented reality display system includes a first beam path for a foveal inset image on a holographic optical element, a second beam path for a peripheral display image on the holographic optical element, and pupil position tracking logic that generates control signals to set a position of the foveal inset as perceived through the holographic optical element, to determine the peripheral display image, and to control a moveable stage.
    Type: Grant
    Filed: July 6, 2021
    Date of Patent: April 18, 2023
    Assignee: NVIDIA Corp.
    Inventors: Jonghyun Kim, Youngmo Jeong, Michael Stengel, Morgan McGuire, David Luebke
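
A rough sketch of the control logic implied above: a tracked pupil position is clamped to the range the moveable stage can reach, yielding both a stage command and the region where the peripheral image should yield to the foveal inset. All ranges, units, and names below are assumptions for illustration; the patent covers the optical arrangement, not any particular code.

    def update_foveal_inset(gaze_x: float, gaze_y: float,
                            stage_range: float = 1.0):
        """Map a tracked gaze position to a target foveal-inset position and a
        command for the moveable stage (hypothetical interface)."""
        # Clamp the gaze estimate to the region the stage can physically reach.
        target_x = max(-stage_range, min(stage_range, gaze_x))
        target_y = max(-stage_range, min(stage_range, gaze_y))
        stage_command = {"x": target_x, "y": target_y}
        peripheral_cutout = (target_x, target_y)   # where the inset overlays the wide image
        return stage_command, peripheral_cutout

    if __name__ == "__main__":
        print(update_foveal_inset(0.3, -1.4))
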
  • Patent number: 11501467
    Abstract: A remote device utilizes ray tracing to compute a light field for a scene to be rendered, where the light field includes information about light reflected off surfaces within the scene. This light field is then compressed utilizing lossless or lossy compression and one or more video compression techniques that implement temporal reuse, such that only differences between the light field for the scene and a light field for a previous scene are compressed. The compressed light field data is then sent to a client device that decompresses the light field data and uses such data to obtain the light field for the scene at the client device. This light field is then used by the client device to compute global illumination for the scene. The global illumination may be used to accurately render the scene at the client device, resulting in a realistic scene that is presented by the client device.
    Type: Grant
    Filed: May 5, 2021
    Date of Patent: November 15, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Michael Stengel, Alexander Majercik, Ben Boudaoud, Morgan McGuire, Dawid Stanislaw Pajak
  • Publication number: 20220284621
    Abstract: One embodiment of a method includes calculating one or more activation values of one or more neural networks trained to infer eye gaze information based, at least in part, on eye position of one or more images of one or more faces indicated by an infrared light reflection from the one or more images.
    Type: Application
    Filed: May 2, 2022
    Publication date: September 8, 2022
    Inventors: Joohwan Kim, Michael Stengel, Zander Majercik, Shalini De Mello, Samuli Laine, Morgan McGuire, David Luebke
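
The claim above concerns computing activation values of a trained network that infers gaze from infrared-lit eye images. The stand-in below only shows the interface, an eye image in and gaze angles out, using a single linear layer; a real system would use a convolutional network, and every shape and weight here is an assumption.

    import numpy as np

    def infer_gaze(eye_image: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
        """Map a cropped IR eye image to (yaw, pitch) gaze angles.

        The `w @ image + b` term plays the role of the network's activation
        values; tanh keeps the toy output bounded.
        """
        activations = w @ eye_image.ravel() + b
        return np.tanh(activations)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        eye = rng.random((32, 32))                 # cropped IR eye image
        w = 0.01 * rng.standard_normal((2, 32 * 32))
        b = np.zeros(2)
        print(infer_gaze(eye, w, b))
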
  • Publication number: 20220284659
    Abstract: Systems and methods are described for rendering complex surfaces or geometry. In at least one embodiment, neural signed distance functions (SDFs) can be used that efficiently capture multiple levels of detail (LODs), and that can be used to reconstruct multi-dimensional geometry or surfaces with high image quality. An example architecture can represent complex shapes in a compressed format with high visual fidelity, and can generalize across different geometries from a single learned example. Extremely small multi-layer perceptrons (MLPs) can be used with an octree-based feature representation for the learned neural SDFs.
    Type: Application
    Filed: May 16, 2022
    Publication date: September 8, 2022
    Inventors: Towaki Alan Takikawa, Joey Litalien, Kangxue Yin, Karsten Julian Kreis, Charles Loop, Morgan McGuire, Sanja Fidler
  • Publication number: 20220230386
    Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). The first system and second system can both be in a cloud, with a video transmitted to a local system.
    Type: Application
    Filed: April 4, 2022
    Publication date: July 21, 2022
    Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
  • Publication number: 20220198746
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene.
    Type: Application
    Filed: March 11, 2022
    Publication date: June 23, 2022
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
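
The reservoir at the heart of reservoir-based spatiotemporal importance resampling can be sketched as a single-sample weighted reservoir that streams over candidate lights, as below. Spatial and temporal reuse, which the illumination gathering in the abstract relies on, would merge reservoirs across neighboring pixels and previous frames; that part is omitted, and the `target_pdf` callback is a placeholder.

    import random

    class Reservoir:
        """Single-sample weighted reservoir used for resampled importance sampling."""
        def __init__(self):
            self.sample = None
            self.weight_sum = 0.0
            self.count = 0

        def update(self, candidate, weight: float) -> None:
            """Stream in one light candidate with its resampling weight."""
            self.weight_sum += weight
            self.count += 1
            if self.weight_sum > 0 and random.random() < weight / self.weight_sum:
                self.sample = candidate

    def pick_light(lights, target_pdf) -> Reservoir:
        """Resample one light for a shading point from many candidates."""
        reservoir = Reservoir()
        for light in lights:
            reservoir.update(light, target_pdf(light))
        return reservoir

    if __name__ == "__main__":
        lights = [{"intensity": i + 1} for i in range(32)]
        chosen = pick_light(lights, target_pdf=lambda light: light["intensity"])
        print(chosen.sample, chosen.count)
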
  • Publication number: 20220172423
    Abstract: Systems and methods are described for rendering complex surfaces or geometry. In at least one embodiment, neural signed distance functions (SDFs) can be used that efficiently capture multiple levels of detail (LODs), and that can be used to reconstruct multi-dimensional geometry or surfaces with high image quality. An example architecture can represent complex shapes in a compressed format with high visual fidelity, and can generalize across different geometries from a single learned example. Extremely small multi-layer perceptrons (MLPs) can be used with an octree-based feature representation for the learned neural SDFs.
    Type: Application
    Filed: May 7, 2021
    Publication date: June 2, 2022
    Inventors: Towaki Alan Takikawa, Joey Litalien, Kangxue Yin, Karsten Julian Kreis, Charles Loop, Morgan McGuire, Sanja Fidler
  • Patent number: 11335056
    Abstract: Systems and methods are described for rendering complex surfaces or geometry. In at least one embodiment, neural signed distance functions (SDFs) can be used that efficiently capture multiple levels of detail (LODs), and that can be used to reconstruct multi-dimensional geometry or surfaces with high image quality. An example architecture can represent complex shapes in a compressed format with high visual fidelity, and can generalize across different geometries from a single learned example. Extremely small multi-layer perceptrons (MLPs) can be used with an octree-based feature representation for the learned neural SDFs.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: May 17, 2022
    Assignee: Nvidia Corporation
    Inventors: Towaki Alan Takikawa, Joey Litalien, Kangxue Yin, Karsten Julian Kreis, Charles Loop, Morgan McGuire, Sanja Fidler
  • Publication number: 20220138988
    Abstract: A remote device utilizes ray tracing to compute a light field for a scene to be rendered, where the light field includes information about light reflected off surfaces within the scene. This light field is then compressed utilizing lossless or lossy compression and one or more video compression techniques that implement temporal reuse, such that only differences between the light field for the scene and a light field for a previous scene are compressed. The compressed light field data is then sent to a client device that decompresses the light field data and uses such data to obtain the light field for the scene at the client device. This light field is then used by the client device to compute global illumination for the scene. The global illumination may be used to accurately render the scene at the client device, resulting in a realistic scene that is presented by the client device.
    Type: Application
    Filed: May 5, 2021
    Publication date: May 5, 2022
    Inventors: Michael Stengel, Alexander Majercik, Ben Boudaoud, Morgan McGuire, Dawid Stanislaw Pajak
  • Patent number: 11321865
    Abstract: One embodiment of a method includes calculating one or more activation values of one or more neural networks trained to infer eye gaze information based, at least in part, on eye position of one or more images of one or more faces indicated by an infrared light reflection from the one or more images.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: May 3, 2022
    Assignee: Nvidia Corporation
    Inventors: Joohwan Kim, Michael Stengel, Zander Majercik, Shalini De Mello, Samuli Laine, Morgan McGuire, David Luebke
  • Patent number: 11315310
    Abstract: A global illumination data structure (e.g., a data structure created to store global illumination information for geometry within a scene to be rendered) is computed for the scene. Additionally, reservoir-based spatiotemporal importance resampling (RESTIR) is used to perform illumination gathering, utilizing the global illumination data structure. The illumination gathering includes identifying light values for points within the scene, where one or more points are selected within the scene based on the light values in order to perform ray tracing during the rendering of the scene.
    Type: Grant
    Filed: January 19, 2021
    Date of Patent: April 26, 2022
    Assignee: NVIDIA CORPORATION
    Inventors: Christopher Ryan Wyman, Morgan McGuire, Peter Schuyler Shirley, Aaron Eliot Lefohn
  • Patent number: 11295515
    Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). The first system and second system can both be in a cloud, with a video transmitted to a local system.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: April 5, 2022
    Assignee: NVIDIA Corporation
    Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent L. Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman