Patents by Inventor Michael MARA
Michael MARA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240079611
Abstract: Presented are systems for manufacturing membrane electrode assemblies for fuel cells, control logic for operating such systems, methods for making such MEAs, and fuel cell systems employing such MEAs. A method of manufacturing a membrane electrode assembly (MEA) for a fuel cell system includes receiving a standalone membrane (SAM) with a semipermeable proton-exchange membrane having opposing first and second faces and a backing layer attached to the first face. A SAM may be characterized by a lack of cathode and anode electrodes upon receipt of the membrane. The second face of the SAM is placed across a vacuum plate; the vacuum plate applies a predefined vacuum pressure to the SAM. While vacuum pressure is being applied to the SAM by the vacuum plate, the backing layer is removed from the SAM. A subgasket is then attached to the first face of the SAM after the backing layer is removed.
Type: Application
Filed: September 2, 2022
Publication date: March 7, 2024
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventors: Ruichun Jiang, Michael Sweet, Kathryn L. Stevick, Burl B. Keel, Jackie Mara
-
Patent number: 11922567
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Grant
Filed: April 4, 2022
Date of Patent: March 5, 2024
Assignee: NVIDIA Corporation
Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
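The client/server lighting split described in this abstract can be sketched in Python. The function names, placeholder radiance values, and simple additive per-pixel combination are illustrative assumptions, not the claimed implementation:

```python
# Sketch of the two-system lighting split: the second (remote) system
# computes its portion and forwards the results; the first (local) system
# computes its own portion and combines the two. Placeholder values and
# the additive combination are assumptions for illustration.

def second_system_portion(pixels):
    """Remote server: computes the second portion of the lighting
    (direct lighting, per the abstract's example mapping)."""
    return [0.5 for _ in pixels]  # placeholder radiance values

def first_system_portion(pixels):
    """Local client: computes the first portion of the lighting
    (indirect lighting, per the abstract's example mapping)."""
    return [0.25 for _ in pixels]

def render_frame(pixels):
    forwarded = second_system_portion(pixels)  # forwarded over the network
    local = first_system_portion(pixels)
    # The first system combines its own results with the forwarded results.
    return [a + b for a, b in zip(local, forwarded)]
```

In practice the forwarded portion would arrive compressed over a network link, and the combination would run per frame on the client's GPU.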
-
Publication number: 20220230386
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Application
Filed: April 4, 2022
Publication date: July 21, 2022
Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
-
Patent number: 11295515
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Grant
Filed: June 5, 2020
Date of Patent: April 5, 2022
Assignee: NVIDIA Corporation
Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent L. Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
-
Patent number: 11138782
Abstract: In one embodiment, a computing system may determine an orientation in a three-dimensional (3D) space and generate a plurality of coordinates in the 3D space based on the determined orientation. The system may access pre-determined ray trajectory definitions associated with the plurality of coordinates. The system may determine visibility information of one or more objects defined within the 3D space by projecting rays through the plurality of coordinates, wherein trajectories of the rays from the plurality of coordinates are determined based on the pre-determined ray trajectory definitions. The system may then generate an image of the one or more objects based on the determined visibility information of the one or more objects.
Type: Grant
Filed: October 7, 2019
Date of Patent: October 5, 2021
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
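The pre-determined ray trajectory lookup described in this abstract can be sketched as follows. The dictionary-based trajectory table, the yaw-only rotation, and the `intersect` callback are illustrative assumptions, not the patented data structures:

```python
import math

# Hypothetical pre-determined trajectory table: for each coordinate, a
# fixed ray direction in the viewer's local frame (an assumption for
# illustration; the patent does not specify the storage format).
RAY_TRAJECTORIES = {
    (0, 0): (0.0, 0.0, -1.0),
    (1, 0): (0.1, 0.0, -1.0),
}

def rotate_yaw(direction, yaw):
    """Rotate a local-frame direction about the vertical axis by the
    viewer's yaw, producing a world-space ray direction."""
    x, y, z = direction
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x + s * z, y, -s * x + c * z)

def visible_objects(orientation_yaw, coords, intersect):
    """Project a ray through each coordinate using its pre-determined
    trajectory, oriented into world space by the current orientation."""
    hits = {}
    for coord in coords:
        world_dir = rotate_yaw(RAY_TRAJECTORIES[coord], orientation_yaw)
        hits[coord] = intersect(coord, world_dir)  # scene intersection callback
    return hits
```

The key point is that only the rotation depends on the live orientation; the per-coordinate trajectories themselves are looked up, not recomputed.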
-
Patent number: 11069124
Abstract: In one embodiment, a computing system may determine a first orientation of a viewer in a three-dimensional (3D) space based on first sensor data associated with a first time. The system may render one or more first lines of pixels based on the first orientation of the viewer and display the one or more first lines. The system may determine a second orientation of the viewer in the 3D space based on second sensor data associated with a second time that is subsequent to the first time. The system may render one or more second lines of pixels based on the second orientation of the viewer and display the one or more second lines of pixels. The one or more second lines of pixels associated with the second orientation are displayed concurrently with the one or more first lines of pixels associated with the first orientation.
Type: Grant
Filed: January 21, 2020
Date of Patent: July 20, 2021
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
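A minimal sketch of this line-by-line rendering scheme, assuming a simple timestamped sensor-sample list and a placeholder per-line renderer (both assumptions for illustration):

```python
def latest_orientation(sensor_samples, t):
    """Return the most recent sensor sample at or before time t
    (hypothetical helper; sample format is assumed)."""
    return max((s for s in sensor_samples if s["t"] <= t), key=lambda s: s["t"])

def render_line(orientation, line_index):
    # Placeholder: a real renderer would ray-cast this line of pixels
    # using the given head orientation.
    return {"line": line_index, "yaw": orientation["yaw"]}

def rolling_render(sensor_samples, num_lines, line_time):
    """Render each line of pixels with the freshest orientation available
    at that line's scan-out time, so lines rendered from different
    orientations end up displayed concurrently."""
    display = []
    for i in range(num_lines):
        o = latest_orientation(sensor_samples, t=i * line_time)
        display.append(render_line(o, i))
    return display
```

The design choice here is latency reduction: instead of freezing one orientation for a whole frame, each line (or group of lines) picks up the newest sensor reading before it is rendered.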
-
Publication number: 20200312018
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Application
Filed: June 5, 2020
Publication date: October 1, 2020
Inventors: Morgan McGuire, Cyril Crassin, David Luebke, Michael Mara, Brent L. Oster, Peter Shirley, Peter-Pike Sloan, Christopher Wyman
-
Patent number: 10713838
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Grant
Filed: May 5, 2014
Date of Patent: July 14, 2020
Assignee: NVIDIA Corporation
Inventors: Morgan McGuire, David Luebke, Cyril Crassin, Peter-Pike Sloan, Peter Shirley, Brent Oster, Christopher Wyman, Michael Mara
-
Patent number: 10699467
Abstract: In one embodiment, a method for determining visibility may perform intersection tests using block beams, tile beams, and rays. First, a computing system may project a block beam to test for intersection with a first bounding volume (BV) in a bounding volume hierarchy. If the beam fully contains the first BV, the system may test for more granular intersections with the first BV by projecting smaller tile beams contained within the block beam. Upon determining that the first BV partially intersects a tile beam, the system may project the tile beam against a second BV contained within the first BV. If the tile beam fully contains the second BV, the system may test for intersection using rays contained within the tile beam. The system may project procedurally-generated rays to test whether they intersect with objects contained within the second BV. Information associated with intersections may be used to render a computer-generated scene.
Type: Grant
Filed: April 16, 2018
Date of Patent: June 30, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
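The coarse-to-fine block-beam/tile-beam/ray hierarchy can be sketched with 1D intervals standing in for beams and bounding volumes. The interval representation and the `rays_in_tile` sample generator are assumptions for illustration, not the patented geometry:

```python
def beam_contains(beam, bv):
    """True if the beam's interval fully contains the bounding volume."""
    return beam[0] <= bv[0] and bv[1] <= beam[1]

def beam_overlaps(beam, bv):
    """True if the beam's interval partially intersects the bounding volume."""
    return beam[0] < bv[1] and bv[0] < beam[1]

def rays_in_tile(tile, n=4):
    """Procedurally generate n ray positions spanning the tile (assumed)."""
    lo, hi = tile
    return [lo + (hi - lo) * (i + 0.5) / n for i in range(n)]

def trace_block(block_beam, tile_beams, bvh_node):
    """Coarse-to-fine visibility: block beam -> tile beams -> rays."""
    bv, child_bvs = bvh_node
    if not beam_contains(block_beam, bv):
        return []  # coarse test rejects or defers; refine only on containment
    hits = []
    for tile in tile_beams:
        if not beam_overlaps(tile, bv):
            continue  # tile beam misses the first BV entirely
        for child in child_bvs:
            if beam_contains(tile, child):
                # Tile beam fully contains the child BV: switch to rays.
                for ray in rays_in_tile(tile):
                    if child[0] <= ray <= child[1]:
                        hits.append((ray, child))
    return hits
```

The point of the hierarchy is amortization: one cheap block-beam test can stand in for thousands of ray tests, and rays are only generated where the beam tests cannot resolve visibility.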
-
Publication number: 20200160587
Abstract: In one embodiment, a computing system may determine a first orientation of a viewer in a three-dimensional (3D) space based on first sensor data associated with a first time. The system may render one or more first lines of pixels based on the first orientation of the viewer and display the one or more first lines. The system may determine a second orientation of the viewer in the 3D space based on second sensor data associated with a second time that is subsequent to the first time. The system may render one or more second lines of pixels based on the second orientation of the viewer and display the one or more second lines of pixels. The one or more second lines of pixels associated with the second orientation are displayed concurrently with the one or more first lines of pixels associated with the first orientation.
Type: Application
Filed: January 21, 2020
Publication date: May 21, 2020
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20200043219
Abstract: In one embodiment, a computing system may determine an orientation in a three-dimensional (3D) space and generate a plurality of coordinates in the 3D space based on the determined orientation. The system may access pre-determined ray trajectory definitions associated with the plurality of coordinates. The system may determine visibility information of one or more objects defined within the 3D space by projecting rays through the plurality of coordinates, wherein trajectories of the rays from the plurality of coordinates are determined based on the pre-determined ray trajectory definitions. The system may then generate an image of the one or more objects based on the determined visibility information of the one or more objects.
Type: Application
Filed: October 7, 2019
Publication date: February 6, 2020
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Patent number: 10553012
Abstract: In one embodiment, a computer system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate ray footprints in the 3D space based on the determined orientation. For at least one of the ray footprints, the system may identify a corresponding number of subsamples to generate for that ray footprint and generate one or more coordinates in the ray footprint based on the corresponding number of subsamples. The system may determine visibility of one or more objects defined within the 3D space by projecting a ray from each of the one or more coordinates to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Grant
Filed: April 16, 2018
Date of Patent: February 4, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
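A sketch of per-footprint subsample counts follows. The gaze-distance density function is an assumption for illustration; the abstract only says that each footprint gets its own subsample count:

```python
import math

def subsample_count(footprint_center, gaze=(0.0, 0.0), max_samples=8):
    """Hypothetical density function: more subsamples near an assumed
    gaze point, fewer in the periphery."""
    d = math.dist(footprint_center, gaze)
    return max(1, int(max_samples / (1.0 + d)))

def footprint_coordinates(center, n):
    """Generate n subsample coordinates inside the footprint (assumed layout)."""
    cx, cy = center
    return [(cx + 0.1 * i, cy) for i in range(n)]

def render(footprints, intersect):
    """Project a ray from each subsample coordinate and average the
    resulting visibility per footprint."""
    image = {}
    for fp in footprints:
        n = subsample_count(fp)
        coords = footprint_coordinates(fp, n)
        image[fp] = sum(intersect(c) for c in coords) / n
    return image
```

Varying the subsample count per footprint is what lets sampling effort follow image importance rather than being uniform across the frame.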
-
Patent number: 10553013
Abstract: In one embodiment, a computing system may determine a first orientation in a 3D space based on first sensor data generated at a first time. The system may determine a first visibility of an object in the 3D space by projecting rays based on the first orientation to test for intersection. The system may generate first lines of pixels based on the determined first visibility and output the first lines of pixels for display. The system may determine a second orientation based on second sensor data generated at a second time. The system may determine a second visibility of the object by projecting rays based on the second orientation to test for intersection. The system may generate second lines of pixels based on the determined second visibility and output the second lines of pixels for display. The second lines of pixels are displayed concurrently with the first lines of pixels.
Type: Grant
Filed: April 16, 2018
Date of Patent: February 4, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Patent number: 10529117
Abstract: In one embodiment, a computing system may receive a focal surface map, which may be specified by an application. The system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate first coordinates in the 3D space based on the determined orientation and generate second coordinates using the first coordinates and the focal surface map. Each of the first coordinates is associated with one of the second coordinates. For each of the first coordinates, the system may determine visibility of one or more objects defined within the 3D space by projecting a ray from the first coordinate through the associated second coordinate to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Grant
Filed: April 16, 2018
Date of Patent: January 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
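The focal-surface-map ray casting can be sketched as follows, assuming the map is a dictionary associating each first coordinate with its second coordinate and `intersect` is a scene-intersection callback (both assumptions for illustration):

```python
def ray_through(first, second):
    """Direction from a first coordinate through its associated second
    coordinate on the focal surface."""
    return tuple(s - f for f, s in zip(first, second))

def render_with_focal_surface(first_coords, focal_surface_map, intersect):
    """For each first coordinate, look up the associated second
    coordinate in the focal surface map and cast a ray through it to
    determine visibility."""
    visibility = {}
    for fc in first_coords:
        sc = focal_surface_map[fc]
        visibility[fc] = intersect(origin=fc, direction=ray_through(fc, sc))
    return visibility
```

Because the application supplies the focal surface map, it can bend the ray pattern per region of the display without changing the renderer itself.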
-
Publication number: 20190318530
Abstract: In one embodiment, a computing system may determine a first orientation in a 3D space based on first sensor data generated at a first time. The system may determine a first visibility of an object in the 3D space by projecting rays based on the first orientation to test for intersection. The system may generate first lines of pixels based on the determined first visibility and output the first lines of pixels for display. The system may determine a second orientation based on second sensor data generated at a second time. The system may determine a second visibility of the object by projecting rays based on the second orientation to test for intersection. The system may generate second lines of pixels based on the determined second visibility and output the second lines of pixels for display. The second lines of pixels are displayed concurrently with the first lines of pixels.
Type: Application
Filed: April 16, 2018
Publication date: October 17, 2019
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20190318529
Abstract: In one embodiment, a computer system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate ray footprints in the 3D space based on the determined orientation. For at least one of the ray footprints, the system may identify a corresponding number of subsamples to generate for that ray footprint and generate one or more coordinates in the ray footprint based on the corresponding number of subsamples. The system may determine visibility of one or more objects defined within the 3D space by projecting a ray from each of the one or more coordinates to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Application
Filed: April 16, 2018
Publication date: October 17, 2019
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20190318526
Abstract: In one embodiment, a computing system may receive a focal surface map, which may be specified by an application. The system may determine an orientation in a 3D space based on sensor data generated by a virtual reality device. The system may generate first coordinates in the 3D space based on the determined orientation and generate second coordinates using the first coordinates and the focal surface map. Each of the first coordinates is associated with one of the second coordinates. For each of the first coordinates, the system may determine visibility of one or more objects defined within the 3D space by projecting a ray from the first coordinate through the associated second coordinate to test for intersection with the one or more objects. The system may generate an image of the one or more objects based on the determined visibility of the one or more objects.
Type: Application
Filed: April 16, 2018
Publication date: October 17, 2019
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20190318528
Abstract: In one embodiment, a method for determining visibility may perform intersection tests using block beams, tile beams, and rays. First, a computing system may project a block beam to test for intersection with a first bounding volume (BV) in a bounding volume hierarchy. If the beam fully contains the first BV, the system may test for more granular intersections with the first BV by projecting smaller tile beams contained within the block beam. Upon determining that the first BV partially intersects a tile beam, the system may project the tile beam against a second BV contained within the first BV. If the tile beam fully contains the second BV, the system may test for intersection using rays contained within the tile beam. The system may project procedurally-generated rays to test whether they intersect with objects contained within the second BV. Information associated with intersections may be used to render a computer-generated scene.
Type: Application
Filed: April 16, 2018
Publication date: October 17, 2019
Inventors: Warren Andrew Hunt, Anton S. Kaplanyan, Michael Mara, Alexander Nankervis
-
Publication number: 20140375659
Abstract: The present invention facilitates efficient and effective image processing. A network can comprise: a first system configured to perform a first portion of lighting calculations for an image and combine the results of the first portion of lighting calculations for the image with results of a second portion of lighting calculations; and a second system configured to perform the second portion of lighting calculations and forward the results of the second portion of the lighting calculations to the first system. The first and second portions of lighting calculations can be associated with indirect lighting calculations and direct lighting calculations, respectively. The first system can be a client in a local location and the second system can be a server in a remote location (e.g., a cloud computing environment). Alternatively, the first system and second system can both be in a cloud, with a video transmitted to a local system.
Type: Application
Filed: May 5, 2014
Publication date: December 25, 2014
Applicant: NVIDIA Corporation
Inventors: Morgan McGuire, David Luebke, Cyril Crassin, Peter-Pike Sloan, Peter Shirley, Brent Oster, Christopher Wyman, Michael Mara