Patents by Inventor James Michael O'Connor
James Michael O'Connor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12223201
Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (byte) to floating-point operation (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
Type: Grant
Filed: February 9, 2024
Date of Patent: February 11, 2025
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Publication number: 20250037186
Abstract: One embodiment sets forth a technique for performing matrix operations. The technique includes traversing a tree structure to access one or more non-empty regions within a matrix. The tree structure includes a first plurality of nodes and a second plurality of nodes corresponding to non-empty regions in the matrix. The first plurality of nodes includes a first node representing a first region and one or more second nodes that are children of the first node and represent second region(s) with an equal size formed within the first region. The second plurality of nodes includes a third node representing a third region and one or more fourth nodes that are children of the third node and represent fourth region(s) with substantially equal numbers of non-zero matrix values formed within the third region. The technique also includes performing matrix operation(s) based on the non-empty region(s) to generate a matrix operation result.
Type: Application
Filed: October 15, 2024
Publication date: January 30, 2025
Inventors: Hanrui Wang, James Michael O'Connor, Donghyuk Lee
-
Patent number: 12211080
Abstract: One embodiment sets forth a technique for performing matrix operations. The technique includes traversing a tree structure to access one or more non-empty regions within a matrix. The tree structure includes a first plurality of nodes and a second plurality of nodes corresponding to non-empty regions in the matrix. The first plurality of nodes includes a first node representing a first region and one or more second nodes that are children of the first node and represent second region(s) with an equal size formed within the first region. The second plurality of nodes includes a third node representing a third region and one or more fourth nodes that are children of the third node and represent fourth region(s) with substantially equal numbers of non-zero matrix values formed within the third region. The technique also includes performing matrix operation(s) based on the non-empty region(s) to generate a matrix operation result.
Type: Grant
Filed: May 19, 2021
Date of Patent: January 28, 2025
Assignee: NVIDIA Corporation
Inventors: Hanrui Wang, James Michael O'Connor, Donghyuk Lee
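The two-level tree in this abstract can be pictured with a short sketch. This is an illustrative reading, not the patented implementation: the upper nodes split a region into equal-size quadrants (quadtree-style), and only non-empty children are kept; the names `Region`, `build_tree`, and `nonempty_leaves` are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    r0: int; r1: int; c0: int; c1: int          # half-open row/col bounds
    children: list = field(default_factory=list)

def nnz_in(matrix, reg):
    """Count non-zero values inside a region."""
    return sum(1 for r in range(reg.r0, reg.r1)
                 for c in range(reg.c0, reg.c1) if matrix[r][c] != 0)

def build_tree(matrix, reg, min_size=2):
    """Equal-size (quadtree) split down to min_size; empty children are pruned.
    In the abstract, leaves would then feed a second, count-balanced stage."""
    if nnz_in(matrix, reg) == 0:
        return None
    if reg.r1 - reg.r0 <= min_size:
        return reg
    rm, cm = (reg.r0 + reg.r1) // 2, (reg.c0 + reg.c1) // 2
    for r0, r1, c0, c1 in [(reg.r0, rm, reg.c0, cm), (reg.r0, rm, cm, reg.c1),
                           (rm, reg.r1, reg.c0, cm), (rm, reg.r1, cm, reg.c1)]:
        child = build_tree(matrix, Region(r0, r1, c0, c1), min_size)
        if child is not None:
            reg.children.append(child)
    return reg

def nonempty_leaves(node):
    """Traverse the tree, yielding only non-empty leaf regions."""
    if not node.children:
        yield node
    else:
        for ch in node.children:
            yield from nonempty_leaves(ch)
```

Traversal then visits only regions that actually hold non-zero values, which is what lets a matrix operation skip the empty bulk of a sparse matrix.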
-
Publication number: 20240411709
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Application
Filed: August 21, 2024
Publication date: December 12, 2024
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Patent number: 12141229
Abstract: One embodiment sets forth a technique for performing one or more matrix multiplication operations based on a first matrix and a second matrix. The technique includes receiving data associated with the first matrix from a first traversal engine that accesses nonzero elements included in the first matrix via a first tree structure. The technique also includes performing one or more computations on the data associated with the first matrix and the data associated with the second matrix to produce a plurality of partial results. The technique further includes combining the plurality of partial results into one or more intermediate results and storing the one or more intermediate results in a first buffer memory.
Type: Grant
Filed: May 19, 2021
Date of Patent: November 12, 2024
Assignee: NVIDIA Corporation
Inventors: Hanrui Wang, James Michael O'Connor, Donghyuk Lee
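The partial-result flow described here can be sketched in a few lines. This is a minimal software analogy, not the claimed hardware: nonzero elements of the first matrix arrive as `(row, col, value)` triples, as if streamed from a traversal engine, and partial products against the second matrix are combined into an intermediate-result buffer keyed by output position. The function name and argument layout are assumptions.

```python
def sparse_dense_matmul(a_nonzeros, b, n_cols_b):
    """Multiply a sparse first matrix (streamed nonzeros) by a dense second
    matrix, accumulating partial results in a buffer keyed by (row, col)."""
    buffer = {}                                  # intermediate-result buffer
    for i, k, a_val in a_nonzeros:               # streamed nonzero of matrix A
        for j in range(n_cols_b):
            partial = a_val * b[k][j]            # one partial result
            if partial:
                buffer[(i, j)] = buffer.get((i, j), 0) + partial
    return buffer
```

Because only nonzeros of the first matrix are visited, work scales with sparsity rather than with the full matrix dimensions, which is the point of pairing the multiply unit with a traversal engine.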
-
Patent number: 12141451
Abstract: Embodiments of the present disclosure relate to memory page access instrumentation for generating a memory access profile. The memory access profile may be used to co-locate data near the processing unit that accesses the data, reducing memory access energy by minimizing distances to access data that is co-located with a different processing unit (i.e., remote data). Execution thread arrays and memory pages for execution of a program are partitioned across multiple processing units. The partitions are then each mapped to a specific processing unit to minimize inter-partition traffic given the processing unit physical topology.
Type: Grant
Filed: February 1, 2023
Date of Patent: November 12, 2024
Assignee: NVIDIA Corporation
Inventors: Niladrish Chatterjee, Zachary Joseph Susskind, Donghyuk Lee, James Michael O'Connor
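The profiling half of this idea is easy to sketch; the mapping step in the abstract is a topology-aware partitioning problem, so the placement rule below is a deliberate simplification. Everything here (the trace format, `build_access_profile`, `place_pages`) is an assumption for illustration: record which thread partition touches each memory page, then place each page with the partition that touches it most, so remote accesses are minimized.

```python
from collections import Counter, defaultdict

def build_access_profile(trace):
    """trace: iterable of (partition_id, page_id) access events.
    Returns page_id -> Counter of accesses per partition."""
    profile = defaultdict(Counter)
    for part, page in trace:
        profile[page][part] += 1
    return profile

def place_pages(profile):
    """Co-locate each page with the partition that accesses it most often."""
    return {page: counts.most_common(1)[0][0]
            for page, counts in profile.items()}
```

The real scheme additionally accounts for the physical topology of the processing units when mapping partitions; a greedy per-page rule like this ignores inter-partition traffic entirely.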
-
Patent number: 12099453
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Grant
Filed: March 30, 2022
Date of Patent: September 24, 2024
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
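The partitioning step itself, splitting a dense 2-D array into equal sub-arrays so each program tile's data fits its processing tile's local memory block, can be shown in a few lines. The function name and the square tiling are assumptions; the patent covers the system, not this particular code.

```python
def partition(matrix, tiles_per_dim):
    """Split a dense 2-D array into tiles_per_dim x tiles_per_dim sub-arrays,
    one per program tile, keyed by (tile_row, tile_col)."""
    rows, cols = len(matrix), len(matrix[0])
    th, tw = rows // tiles_per_dim, cols // tiles_per_dim
    sub_arrays = {}
    for ti in range(tiles_per_dim):
        for tj in range(tiles_per_dim):
            sub_arrays[(ti, tj)] = [row[tj * tw:(tj + 1) * tw]
                                    for row in matrix[ti * th:(ti + 1) * th]]
    return sub_arrays
```

Each `(ti, tj)` sub-array would be stored in the local memory block of the processing tile that executes the corresponding program tile, so the bulk of its accesses stay vertical within the stack.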
-
Publication number: 20240308794
Abstract: A gangway comprises a fixed platform and a support structure connected to the fixed platform in a manner that allows the support structure to rotate with respect to the fixed platform between a raised stowed position and a lowered deployed position. A raising assembly is operative to rotate the support structure from the deployed position to the stowed position. The raising assembly includes at least one fluid actuated cylinder connected between the fixed platform and a distal end of the support structure. A raising actuator is usable by an operator to cause operation of the cylinder in a manner that rotates the support structure toward the stowed position.
Type: Application
Filed: May 24, 2024
Publication date: September 19, 2024
Inventors: John Rutledge Lawson, James Michael O'Keefe, Jeff W. Reichert, Jeffrey David Scott
-
Publication number: 20240281300
Abstract: An initiating processing tile generates an offload request that may include a processing tile ID, source data needed for the computation, program counter, and destination location where the computation result is stored. The offload processing tile may execute the offloaded computation. Alternatively, the offload processing tile may deny the offload request based on congestion criteria. The congestion criteria may include a processing workload measure, whether a resource needed to perform the computation is available, and an offload request buffer fullness. In an embodiment, the denial message that is returned to the initiating processing tile may include the data needed to perform the computation (read from the local memory of the offload processing tile). Returning the data with the denial message results in the same inter-processing tile traffic that would occur if no attempt to offload the computation were initiated.
Type: Application
Filed: December 4, 2023
Publication date: August 22, 2024
Inventors: Donghyuk Lee, Leul Wuletaw Belayneh, Niladrish Chatterjee, James Michael O'Connor
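The accept/deny protocol above can be sketched directly from the abstract. The class and field names, thresholds, and the specific congestion test are illustrative assumptions; the key behavior kept from the abstract is that a denial carries the requested data back, so total inter-tile traffic matches the no-offload case.

```python
from dataclasses import dataclass

@dataclass
class OffloadRequest:
    initiator_id: int   # ID of the initiating processing tile
    addr: int           # location of source data in the offload tile's memory
    pc: int             # program counter of the offloaded computation
    dest: int           # destination location for the result

class OffloadTile:
    def __init__(self, local_memory, buffer_capacity=4,
                 workload_limit=0.9, resource_free=True):
        self.local_memory = local_memory
        self.buffer = []                         # pending offload requests
        self.buffer_capacity = buffer_capacity
        self.workload = 0.0                      # processing workload measure
        self.workload_limit = workload_limit
        self.resource_free = resource_free       # needed resource available?

    def handle(self, req):
        congested = (self.workload > self.workload_limit
                     or not self.resource_free
                     or len(self.buffer) >= self.buffer_capacity)
        if congested:
            # Denial carries the data the initiator needs, read from local
            # memory, so traffic equals the case where no offload was tried.
            return ("deny", self.local_memory[req.addr])
        self.buffer.append(req)                  # accept: queue for execution
        return ("accept", None)
```

On a denial the initiating tile simply performs the computation itself with the returned data, so attempting the offload never costs extra traffic.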
-
Publication number: 20240256153
Abstract: Embodiments of the present disclosure relate to memory page access instrumentation for generating a memory access profile. The memory access profile may be used to co-locate data near the processing unit that accesses the data, reducing memory access energy by minimizing distances to access data that is co-located with a different processing unit (i.e., remote data). Execution thread arrays and memory pages for execution of a program are partitioned across multiple processing units. The partitions are then each mapped to a specific processing unit to minimize inter-partition traffic given the processing unit physical topology.
Type: Application
Filed: February 1, 2023
Publication date: August 1, 2024
Inventors: Niladrish Chatterjee, Zachary Joseph Susskind, Donghyuk Lee, James Michael O'Connor
-
Publication number: 20240211166
Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (byte) to floating-point operation (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
Type: Application
Filed: February 9, 2024
Publication date: June 27, 2024
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Patent number: 12001725
Abstract: A combined on-package and off-package memory system uses a custom base-layer within which are fabricated one or more dedicated interfaces to off-package memories. An on-package processor and on-package memories are also directly coupled to the custom base-layer. The custom base-layer includes memory management logic between the processor and memories (both off and on package) to steer requests. The memories are exposed as a combined memory space having greater bandwidth and capacity compared with either the off-package memories or the on-package memories alone. The memory management logic services requests while maintaining quality of service (QoS) to satisfy bandwidth requirements for each allocation. An allocation may include any combination of the on and/or off package memories. The memory management logic also manages data migration between the on and off package memories.
Type: Grant
Filed: August 23, 2023
Date of Patent: June 4, 2024
Assignee: NVIDIA Corporation
Inventors: Niladrish Chatterjee, James Michael O'Connor, Donghyuk Lee, Gaurav Uttreja, Wishwesh Anil Gandhi
-
Patent number: 11993471
Abstract: A raising assembly for use with a gangway connected for rotation between a raised stowed position and a lowered deployed position. The raising assembly comprises at least one fluid actuated cylinder connected to the gangway. A raising actuator is operative to cause operation of the cylinder in a manner that rotates the gangway toward the stowed position while the raising actuator is continuously activated by an operator. A lowering actuator is operative to cause operation of the cylinder in a manner that rotates the gangway toward the deployed position due to gravitational forces while the lowering actuator is continuously activated by the operator. The raising assembly is configured to maintain the gangway at a current position between the stowed position and the deployed position if the operator does one of ceasing to activate the raising actuator while raising the gangway or ceasing to activate the lowering actuator while lowering the gangway.
Type: Grant
Filed: May 26, 2023
Date of Patent: May 28, 2024
Assignee: SAFE RACK LLC
Inventors: John Rutledge Lawson, James Michael O'Keefe, Jeff W. Reichert, Jeffrey David Scott
-
Patent number: 11977766
Abstract: A hierarchical network enables access for a stacked memory system including one or more memory dies that each include multiple memory tiles. The processor die includes multiple processing tiles that are stacked with the one or more memory dies. The memory tiles that are vertically aligned with a processing tile are directly coupled to the processing tile and comprise the local memory block for the processing tile. The hierarchical network provides access paths for each processing tile to access the processing tile's local memory block, the local memory block coupled to a different processing tile within the same processing die, memory tiles in a different die stack, and memory tiles in a different device. The ratio of memory bandwidth (byte) to floating-point operation (B:F) may improve 50× for accessing the local memory block compared with conventional memory. Additionally, the energy consumed to transfer each bit may be reduced by 10×.
Type: Grant
Filed: February 28, 2022
Date of Patent: May 7, 2024
Assignee: NVIDIA Corporation
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor
-
Patent number: 11966348
Abstract: Methods of operating a serial data bus divide a series of data bits into sequences of one or more bits and encode the sequences as N-level symbols, which are then transmitted at multiple discrete voltage levels. These methods may be utilized to communicate over serial data lines to improve bandwidth and reduce crosstalk and other sources of noise.
Type: Grant
Filed: January 28, 2019
Date of Patent: April 23, 2024
Assignee: NVIDIA Corp.
Inventors: Donghyuk Lee, James Michael O'Connor
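Multi-level signaling of this kind can be illustrated with the common 4-level (PAM-4) case: each pair of bits maps to one of four voltage levels, doubling bits per symbol over binary signaling. The specific levels and the Gray-style mapping below are textbook conventions chosen for illustration, not the patented encoding.

```python
# Gray-coded bit-pair -> voltage level mapping (adjacent levels differ by
# one bit, so a one-level sampling error corrupts only one bit).
LEVELS = {(0, 0): 0.0, (0, 1): 1.0, (1, 1): 2.0, (1, 0): 3.0}
SYMBOLS = {v: bits for bits, v in LEVELS.items()}   # inverse mapping

def encode(bits):
    """Split a bit list into pairs and emit one voltage level per pair."""
    assert len(bits) % 2 == 0, "PAM-4 encodes two bits per symbol"
    return [LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def decode(levels):
    """Recover the original bit stream from received symbol levels."""
    return [b for v in levels for b in SYMBOLS[v]]
```

Transmitting N-level symbols halves (for N=4) the symbol rate needed for a given bit rate, which is where the bandwidth and noise benefits cited in the abstract come from.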
-
Patent number: 11954036
Abstract: Embodiments include methods, systems and non-transitory computer-readable media including instructions for executing a prefetch kernel that includes memory accesses for prefetching data for a processing kernel into a memory, and, subsequent to executing at least a portion of the prefetch kernel, executing the processing kernel where the processing kernel includes accesses to data that is stored into the memory resulting from execution of the prefetch kernel.
Type: Grant
Filed: November 11, 2022
Date of Patent: April 9, 2024
Assignee: Advanced Micro Devices, Inc.
Inventors: Nuwan S. Jayasena, James Michael O'Connor, Michael Mantor
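The prefetch-kernel/processing-kernel split can be modeled with a toy two-level memory. This is a conceptual sketch, not the claimed implementation: the prefetch kernel walks the same addresses the processing kernel will use and stages them in a fast memory, so the processing kernel hits there instead of stalling on the slow path. All names here are illustrative.

```python
def prefetch_kernel(addresses, slow_memory, fast_memory):
    """Issue the processing kernel's future accesses ahead of time,
    staging the data into fast memory."""
    for addr in addresses:
        fast_memory[addr] = slow_memory[addr]

def processing_kernel(addresses, slow_memory, fast_memory):
    """Consume the data; count how many accesses were satisfied by the
    prefetched copies."""
    total, hits = 0, 0
    for addr in addresses:
        if addr in fast_memory:                  # prefetched: cheap access
            total += fast_memory[addr]
            hits += 1
        else:                                    # miss: fall back to slow path
            total += slow_memory[addr]
    return total, hits
```

Running the prefetch kernel first (or overlapping it with earlier work) converts what would be demand misses in the processing kernel into hits.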
-
Patent number: 11851303
Abstract: An elevating cage apparatus includes a support structure and a carriage that is operable to move vertically with respect to the support structure. The carriage is configured to be alternatively raised and lowered via a first driver and a second driver. The first driver may comprise a powered (electric, hydraulic, or pneumatic) motor, and the second driver may comprise a manual actuation mechanism including a hand crank. Both input drivers may be operable to move the carriage through a dual input worm drive gear box.
Type: Grant
Filed: July 10, 2018
Date of Patent: December 26, 2023
Assignee: SAFE RACK LLC
Inventors: James Michael O'Keefe, Jeffrey David Scott
-
Publication number: 20230393788
Abstract: A combined on-package and off-package memory system uses a custom base-layer within which are fabricated one or more dedicated interfaces to off-package memories. An on-package processor and on-package memories are also directly coupled to the custom base-layer. The custom base-layer includes memory management logic between the processor and memories (both off and on package) to steer requests. The memories are exposed as a combined memory space having greater bandwidth and capacity compared with either the off-package memories or the on-package memories alone. The memory management logic services requests while maintaining quality of service (QoS) to satisfy bandwidth requirements for each allocation. An allocation may include any combination of the on and/or off package memories. The memory management logic also manages data migration between the on and off package memories.
Type: Application
Filed: August 23, 2023
Publication date: December 7, 2023
Inventors: Niladrish Chatterjee, James Michael O'Connor, Donghyuk Lee, Gaurav Uttreja, Wishwesh Anil Gandhi
-
Patent number: 11789649
Abstract: A combined on-package and off-package memory system uses a custom base-layer within which are fabricated one or more dedicated interfaces to off-package memories. An on-package processor and on-package memories are also directly coupled to the custom base-layer. The custom base-layer includes memory management logic between the processor and memories (both off and on package) to steer requests. The memories are exposed as a combined memory space having greater bandwidth and capacity compared with either the off-package memories or the on-package memories alone. The memory management logic services requests while maintaining quality of service (QoS) to satisfy bandwidth requirements for each allocation. An allocation may include any combination of the on and/or off package memories. The memory management logic also manages data migration between the on and off package memories.
Type: Grant
Filed: April 22, 2021
Date of Patent: October 17, 2023
Assignee: NVIDIA Corporation
Inventors: Niladrish Chatterjee, James Michael O'Connor, Donghyuk Lee, Gaurav Uttreja, Wishwesh Anil Gandhi
-
Publication number: 20230315651
Abstract: Embodiments of the present disclosure relate to application partitioning for locality in a stacked memory system. In an embodiment, one or more memory dies are stacked on the processor die. The processor die includes multiple processing tiles and each memory die includes multiple memory tiles. Vertically aligned memory tiles are directly coupled to and comprise the local memory block for a corresponding processing tile. An application program that operates on dense multi-dimensional arrays (matrices) may partition the dense arrays into sub-arrays associated with program tiles. Each program tile is executed by a processing tile using the processing tile's local memory block to process the associated sub-array. Data associated with each sub-array is stored in a local memory block and the processing tile corresponding to the local memory block executes the program tile to process the sub-array data.
Type: Application
Filed: March 30, 2022
Publication date: October 5, 2023
Inventors: William James Dally, Carl Thomas Gray, Stephen W. Keckler, James Michael O'Connor