Patents by Inventor Alexander L. Minkin
Alexander L. Minkin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250077615
Abstract: Systems and methods for efficient convolution based on matrix multiply and add (MMA) are described. An example processor having a plurality of processing lanes is configured to perform convolution of a matrix of activation elements and a filter matrix in accordance with a configurable series of instructions including a plurality of MMA instructions and shift instructions while reusing activation elements already loaded to the processor or associated memory over a plurality of MMA operations. The filter elements are held stationary at inputs to the processor for multiple cycles for multiple MMA operations while activations are streamed in. Associated methods are also described.
Type: Application
Filed: August 30, 2023
Publication date: March 6, 2025
Inventors: Manan PATEL, Jack CHOQUETTE, Alexander L. MINKIN, Maciej Tyrlik
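A rough feel for the filter-stationary pattern can be had with the public CUDA wmma API: the sketch below loads the R*S filter-tap fragments into registers once and reuses them while activation tiles stream in, one MMA per tap. This is a minimal sketch assuming an im2col-style activation layout; the placeholder offset arithmetic and tile sizes are illustrative, not the patented instruction sequence (which uses dedicated MMA and shift instructions inside the processor datapath).

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes a 16x16 output tile of an implicit-GEMM convolution.
// The R*S filter-tap fragments are loaded once and held "stationary" in
// registers while activation tiles are streamed through load_matrix_sync.
template <int R, int S>
__global__ void conv_mma_sketch(const half* acts, const half* filt,
                                float* out, int lda, int ldf, int ldo) {
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> w[R * S];
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;
    wmma::fill_fragment(acc, 0.0f);

    for (int t = 0; t < R * S; ++t)          // filter stays resident
        wmma::load_matrix_sync(w[t], filt + t * 16 * ldf, ldf);

    for (int t = 0; t < R * S; ++t) {        // activations stream past it
        const half* a_ptr = acts + t * 16;   // placeholder tap-offset math
        wmma::load_matrix_sync(a, a_ptr, lda);
        wmma::mma_sync(acc, a, w[t], acc);   // one MMA per filter tap
    }
    wmma::store_matrix_sync(out, acc, ldo, wmma::mem_row_major);
}
```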
-
Publication number: 20250060938
Abstract: Systems and methods for efficient convolution based on matrix multiply and add (MMA) are described. An example processor having a plurality of processing lanes is configured to perform convolution of a matrix of activation elements and a filter matrix in accordance with a configurable series of instructions including a plurality of MMA instructions and shift instructions while reusing activation elements already loaded to the datapath or associated memory over a plurality of MMA operations. Associated methods are also described.
Type: Application
Filed: August 14, 2023
Publication date: February 20, 2025
Inventors: Jack CHOQUETTE, Po-An TSAI, Alexander L. MINKIN, Manan PATEL, Neal Clayton CRAGO, Daniel STIFFLER, Kefeng DUAN, Yu-Jung CHEN, Jing LI, Qian WANG, Ronny KRASHINSKY, Jun YANG, Feng XIE
-
Publication number: 20240393951
Abstract: Various embodiments include techniques for performing self-synchronizing remote memory operations in a multiprocessor computing system. During a remote memory operation in the multiprocessor computing system, a source processing unit transmits multiple segments of data to a destination processing unit. For each segment of data, the source processing unit transmits a remote memory operation to the destination processing unit that includes associated metadata that identifies the memory location of a corresponding synchronization object. The remote memory operation along with the metadata is transmitted as a single unit to the destination processing unit. The destination processing unit splits the operation into the remote memory operation and the memory synchronization operation. As a result, the source processing unit avoids the need to perform a separate memory synchronization operation, thereby reducing inter-processor communications and increasing performance of remote memory operations.
Type: Application
Filed: July 10, 2024
Publication date: November 28, 2024
Inventors: Srinivas Santosh Kumar MADUGULA, Olivier GIROUX, Wishwesh Anil GANDHI, Michael Allen PARKER, Raghuram L, Ivan TANASIC, Manan PATEL, Mark HUMMEL, Alexander L. MINKIN
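The fused write-plus-synchronization can be approximated in software with a release-ordered atomic on the synchronization object that the metadata names. A minimal sketch, with invented names (Segment, push_segment) and a counter-style sync object standing in for the hardware's single fused unit:

```cuda
#include <cuda/atomic>

struct Segment { float data[256]; };
using SyncObj = cuda::atomic<unsigned, cuda::thread_scope_system>;

// Producer side: write one segment, then bump the counter the metadata
// points at. Release ordering makes the payload visible before the count,
// approximating the hardware's fused write+sync operation.
__device__ void push_segment(Segment* dst, const Segment& src, SyncObj* sync_obj) {
    *dst = src;                                   // the remote memory write
    sync_obj->fetch_add(1, cuda::memory_order_release);
}

// Consumer side: wait until all expected segments have landed.
__device__ void wait_segments(SyncObj* sync_obj, unsigned expected) {
    while (sync_obj->load(cuda::memory_order_acquire) < expected) { /* spin */ }
}
```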
-
Patent number: 12141082
Abstract: A parallel processing unit comprises a plurality of processors, each coupled to memory access hardware circuitry. Each memory access hardware circuitry is configured to receive, from the coupled processor, a memory access request specifying a coordinate of a multidimensional data structure, wherein the memory access hardware circuit is one of a plurality of memory access circuitry each coupled to a respective one of the processors; and, in response to the memory access request, translate the coordinate of the multidimensional data structure into plural memory addresses for the multidimensional data structure and, using the plural memory addresses, asynchronously transfer at least a portion of the multidimensional data structure for processing by at least the coupled processor. The memory locations may be in the shared memory of the coupled processor and/or an external memory.
Type: Grant
Filed: March 10, 2022
Date of Patent: November 12, 2024
Assignee: NVIDIA CORPORATION
Inventors: Alexander L. Minkin, Alan Kaatz, Olivier Giroux, Jack Choquette, Shirish Gadre, Manan Patel, John Tran, Ronny Krashinsky, Jeff Schottmiller
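In software terms, the coordinate-to-address translation plus asynchronous transfer looks roughly like the sketch below, which fetches one (tileRow, tileCol) tile of a 2-D array into shared memory with cooperative_groups::memcpy_async. This is a portable stand-in only; the patented circuit (surfaced in CUDA as the bulk tensor-copy path) performs both the translation and the transfer itself.

```cuda
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
namespace cg = cooperative_groups;

__global__ void fetch_tile(const float* matrix, int ld,
                           int tileRow, int tileCol) {
    constexpr int T = 32;                     // tile is T x T elements
    __shared__ float tile[T][T];
    cg::thread_block block = cg::this_thread_block();

    for (int r = 0; r < T; ++r) {
        // Coordinate -> address translation for one row of the tile.
        const float* src = matrix + (tileRow * T + r) * ld + tileCol * T;
        cg::memcpy_async(block, tile[r], src, sizeof(float) * T);
    }
    cg::wait(block);                          // join the asynchronous copies
    // ... compute on tile[][] while other tiles could be in flight ...
}
```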
-
Publication number: 20240354106
Abstract: Various embodiments include techniques for performing self-synchronizing remote memory operations in a data center or multiprocessor computing system. During a remote memory operation, a source processor transmits multiple data segments to a destination processor. For each data segment, the source processor transmits a remote memory operation to the destination processor that includes associated metadata that identifies the memory location of a corresponding synchronization object representing a count of data segments to be stored or a flag for each data segment to be stored. The remote memory operation along with the metadata is transmitted as a single unit to the destination processor. The destination processor splits the operation into the remote memory operation and the memory synchronization operation.
Type: Application
Filed: June 26, 2024
Publication date: October 24, 2024
Inventors: Srinivas Santosh Kumar MADUGULA, Olivier GIROUX, Wishwesh Anil GANDHI, Michael Allen PARKER, Raghuram L, Ivan TANASIC, Manan PATEL, Mark HUMMEL, Alexander L. MINKIN, Gregory Michael THORSON
-
Patent number: 12105960
Abstract: Various embodiments include techniques for performing self-synchronizing remote memory operations in a multiprocessor computing system. During a remote memory operation in the multiprocessor computing system, a source processing unit transmits multiple segments of data to a destination processing unit. For each segment of data, the source processing unit transmits a remote memory operation to the destination processing unit that includes associated metadata that identifies the memory location of a corresponding synchronization object. The remote memory operation along with the metadata is transmitted as a single unit to the destination processing unit. The destination processing unit splits the operation into the remote memory operation and the memory synchronization operation. As a result, the source processing unit avoids the need to perform a separate memory synchronization operation, thereby reducing inter-processor communications and increasing performance of remote memory operations.
Type: Grant
Filed: August 31, 2022
Date of Patent: October 1, 2024
Assignee: NVIDIA CORPORATION
Inventors: Srinivas Santosh Kumar Madugula, Olivier Giroux, Wishwesh Anil Gandhi, Michael Allen Parker, Raghuram L, Ivan Tanasic, Manan Patel, Mark Hummel, Alexander L. Minkin
-
Publication number: 20240289132
Abstract: This specification describes a programmatic multicast technique enabling one thread (for example, in a cooperative group array (CGA) on a GPU) to request data on behalf of one or more other threads (for example, executing on respective processor cores of the GPU). The multicast is supported by tracking circuitry that interfaces between multicast requests received from processor cores and the available memory. The multicast is designed to reduce cache (for example, layer 2 cache) bandwidth utilization enabling strong scaling and smaller tile sizes.
Type: Application
Filed: May 10, 2024
Publication date: August 29, 2024
Inventors: Apoorv PARLE, Ronny KRASHINSKY, John EDMONDSON, Jack CHOQUETTE, Shirish GADRE, Steve HEINRICH, Manan PATEL, Prakash Bangalore PRABHAKAR, JR., Ravi MANYAM, Wish GANDHI, Lacky SHAH, Alexander L. Minkin
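The bandwidth-saving effect can be imitated with the CUDA 12 thread-block-cluster API: one block fetches a tile and its peers read that single copy over distributed shared memory instead of re-fetching through L2. A minimal sketch, assuming a cluster of four blocks on sm_90-class hardware; a true multicast load would be issued by the tracking circuitry, not by this explicit sharing:

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

__global__ void __cluster_dims__(4, 1, 1) multicast_sketch(const float* src) {
    __shared__ float tile[1024];
    cg::cluster_group cluster = cg::this_cluster();

    if (cluster.block_rank() == 0)            // one block requests the data
        for (int i = threadIdx.x; i < 1024; i += blockDim.x)
            tile[i] = src[i];
    cluster.sync();                           // tile now valid cluster-wide

    // Peer blocks read block 0's copy via distributed shared memory,
    // so the L2 cache services the fetch only once.
    float* shared0 = cluster.map_shared_rank(tile, 0);
    float v = shared0[threadIdx.x % 1024];
    (void)v;                                  // stand-in for real work
    cluster.sync();                           // keep tile alive while read
}
```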
-
Patent number: 12020035
Abstract: This specification describes a programmatic multicast technique enabling one thread (for example, in a cooperative group array (CGA) on a GPU) to request data on behalf of one or more other threads (for example, executing on respective processor cores of the GPU). The multicast is supported by tracking circuitry that interfaces between multicast requests received from processor cores and the available memory. The multicast is designed to reduce cache (for example, layer 2 cache) bandwidth utilization enabling strong scaling and smaller tile sizes.
Type: Grant
Filed: March 10, 2022
Date of Patent: June 25, 2024
Assignee: NVIDIA CORPORATION
Inventors: Apoorv Parle, Ronny Krashinsky, John Edmondson, Jack Choquette, Shirish Gadre, Steve Heinrich, Manan Patel, Prakash Bangalore Prabhakar, Jr., Ravi Manyam, Wish Gandhi, Lacky Shah, Alexander L. Minkin
-
Publication number: 20240069736
Abstract: Various embodiments include techniques for performing self-synchronizing remote memory operations in a multiprocessor computing system. During a remote memory operation in the multiprocessor computing system, a source processing unit transmits multiple segments of data to a destination processing unit. For each segment of data, the source processing unit transmits a remote memory operation to the destination processing unit that includes associated metadata that identifies the memory location of a corresponding synchronization object. The remote memory operation along with the metadata is transmitted as a single unit to the destination processing unit. The destination processing unit splits the operation into the remote memory operation and the memory synchronization operation. As a result, the source processing unit avoids the need to perform a separate memory synchronization operation, thereby reducing inter-processor communications and increasing performance of remote memory operations.
Type: Application
Filed: August 31, 2022
Publication date: February 29, 2024
Inventors: Srinivas Santosh Kumar MADUGULA, Olivier GIROUX, Wishwesh Anil GANDHI, Michael Allen PARKER, Raghuram L, Ivan TANASIC, Manan PATEL, Mark HUMMEL, Alexander L. MINKIN
-
Publication number: 20230289292
Abstract: A parallel processing unit comprises a plurality of processors, each coupled to memory access hardware circuitry. Each memory access hardware circuitry is configured to receive, from the coupled processor, a memory access request specifying a coordinate of a multidimensional data structure, wherein the memory access hardware circuit is one of a plurality of memory access circuitry each coupled to a respective one of the processors; and, in response to the memory access request, translate the coordinate of the multidimensional data structure into plural memory addresses for the multidimensional data structure and, using the plural memory addresses, asynchronously transfer at least a portion of the multidimensional data structure for processing by at least the coupled processor. The memory locations may be in the shared memory of the coupled processor and/or an external memory.
Type: Application
Filed: March 10, 2022
Publication date: September 14, 2023
Inventors: Alexander L. Minkin, Alan Kaatz, Olivier Giroux, Jack Choquette, Shirish Gadre, Manan Patel, John Tran, Ronny Krashinsky, Jeff Schottmiller
-
Publication number: 20230289190
Abstract: This specification describes a programmatic multicast technique enabling one thread (for example, in a cooperative group array (CGA) on a GPU) to request data on behalf of one or more other threads (for example, executing on respective processor cores of the GPU). The multicast is supported by tracking circuitry that interfaces between multicast requests received from processor cores and the available memory. The multicast is designed to reduce cache (for example, layer 2 cache) bandwidth utilization enabling strong scaling and smaller tile sizes.
Type: Application
Filed: March 10, 2022
Publication date: September 14, 2023
Inventors: Apoorv PARLE, Ronny KRASHINSKY, John EDMONDSON, Jack CHOQUETTE, Shirish GADRE, Steve HEINRICH, Manan PATEL, Prakash Bangalore PRABHAKAR, JR., Ravi MANYAM, Wish GANDHI, Lacky SHAH, Alexander L. Minkin
-
Publication number: 20230289304
Abstract: A parallel processing unit comprises a plurality of processors, each coupled to memory access hardware circuitry. Each memory access hardware circuitry is configured to receive, from the coupled processor, a memory access request specifying a coordinate of a multidimensional data structure, wherein the memory access hardware circuit is one of a plurality of memory access circuitry each coupled to a respective one of the processors; and, in response to the memory access request, translate the coordinate of the multidimensional data structure into plural memory addresses for the multidimensional data structure and, using the plural memory addresses, asynchronously transfer at least a portion of the multidimensional data structure for processing by at least the coupled processor. The memory locations may be in the shared memory of the coupled processor and/or an external memory.
Type: Application
Filed: March 10, 2022
Publication date: September 14, 2023
Inventors: Alexander L. Minkin, Alan Kaatz, Olivier Giroux, Jack Choquette, Shirish Gadre, Manan Patel, John Tran, Ronny Krashinsky, Jeff Schottmiller
-
Patent number: 9952977
Abstract: A method for managing a parallel cache hierarchy in a processing unit. The method includes receiving an instruction that includes a cache operations modifier identifying a level of the parallel cache hierarchy in which to cache data associated with the instruction, and implementing a cache replacement policy based on the cache operations modifier.
Type: Grant
Filed: September 24, 2010
Date of Patent: April 24, 2018
Assignee: NVIDIA CORPORATION
Inventors: Steven James Heinrich, Alexander L. Minkin, Brett W. Coon, Rajeshwaran Selvanesan, Robert Steven Glanville, Charles McCarver, Anjana Rajendran, Stewart Glenn Carlton, John R. Nickolls, Brian Fahs
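CUDA exposes a software analogue of such per-instruction cache control through its cache-hint load/store intrinsics, which choose the cache level (and eviction behavior) for individual memory operations. A short illustration; how each hint maps to hardware behavior is architecture-dependent:

```cuda
__global__ void cache_modifier_demo(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float a = __ldca(in + i);     // .ca: cache at all levels (L1 and L2)
    float b = __ldcg(in + i);     // .cg: cache at L2 only, bypass L1
    float c = __ldcs(in + i);     // .cs: streaming, mark line evict-first
    __stcs(out + i, a + b + c);   // streaming store, avoid cache pollution
}
```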
-
Patent number: 9595075
Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.
Type: Grant
Filed: September 26, 2013
Date of Patent: March 14, 2017
Assignee: NVIDIA Corporation
Inventors: Steven J. Heinrich, Eric T. Anderson, Jeffrey A. Bolz, Jonathan Dunaisky, Ramesh Jandhyala, Joel McCormack, Alexander L. Minkin, Bryon S. Nordquist, Poornachandra Rao
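The software-visible face of this sharing is CUDA's read-only data path: __ldg() issues an ordinary (non-texture) global load through the same read-only cache hardware the texture unit uses. A one-line illustration:

```cuda
__global__ void readonly_path(const float* __restrict__ in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __ldg(in + i);   // global load serviced by the texture cache
}
```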
-
Patent number: 9286256
Abstract: The invention sets forth an L1 cache architecture that includes a crossbar unit configured to transmit data associated with both read data requests and write data requests. Data associated with read data requests is retrieved from a cache memory and transmitted to the client subsystems. Similarly, data associated with write data requests is transmitted from the client subsystems to the cache memory. To allow for the transmission of both read and write data on the crossbar unit, an arbiter is configured to schedule the crossbar unit transmissions as well as to arbitrate between data requests received from the client subsystems.
Type: Grant
Filed: September 28, 2010
Date of Patent: March 15, 2016
Assignee: NVIDIA Corporation
Inventors: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Stewart Glenn Carlton, John R. Nickolls
-
Patent number: 9013498
Abstract: A system and method for tracking and reporting texture map levels of detail that are computed during graphics processing allows for efficient management of texture map storage. Minimum and/or maximum pre-clamped texture map level of detail values are tracked by a graphics processor, and an array stored in memory is updated to report the minimum and/or maximum values for use by an application program. The minimum and/or maximum values may be used to determine the active set of texture map levels of detail that is loaded into graphics memory.
Type: Grant
Filed: December 19, 2008
Date of Patent: April 21, 2015
Assignee: NVIDIA Corporation
Inventors: John S. Montrym, Andrew J. Tao, Henry P. Moreton, Emmett M. Kilgariff, Cass W. Everitt, Alexander L. Minkin, Eric Anderson, Yan Yan Tang, Jerome F. Duluk, Jr.
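A software rendition of the tracking step might fold each thread's pre-clamp LOD into a min/max report array with atomics, as sketched below. The 8.8 fixed-point encoding and the report layout are assumptions for illustration; the patent performs this bookkeeping in the texture hardware.

```cuda
__global__ void track_lod(const float2* duv_dx, const float2* duv_dy,
                          int n, int* lod_min_q8, int* lod_max_q8) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float2 dx = duv_dx[i], dy = duv_dy[i];
    // Classic isotropic LOD: log2 of the longer screen-space footprint edge.
    float rho = fmaxf(sqrtf(dx.x * dx.x + dx.y * dx.y),
                      sqrtf(dy.x * dy.x + dy.y * dy.y));
    float lod = log2f(fmaxf(rho, 1e-8f));
    int q = __float2int_rn(lod * 256.0f);     // 8.8 fixed-point encoding
    atomicMin(lod_min_q8, q);                 // pre-clamp minimum seen
    atomicMax(lod_max_q8, q);                 // pre-clamp maximum seen
}
```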
-
Publication number: 20150084975
Abstract: Approaches are disclosed for performing memory access operations in a texture processing pipeline having a first portion configured to process texture memory access operations and a second portion configured to process non-texture memory access operations. A texture unit receives a memory access request. The texture unit determines whether the memory access request includes a texture memory access operation. If the memory access request includes a texture memory access operation, then the texture unit processes the memory access request via at least the first portion of the texture processing pipeline; otherwise, the texture unit processes the memory access request via at least the second portion of the texture processing pipeline. One advantage of the disclosed approach is that the same processing and cache memory may be used for both texture operations and load/store operations to various other address spaces, leading to reduced surface area and power consumption.
Type: Application
Filed: September 26, 2013
Publication date: March 26, 2015
Applicant: NVIDIA CORPORATION
Inventors: Steven J. HEINRICH, Eric T. ANDERSON, Jeffrey A. BOLZ, Jonathan DUNAISKY, Ramesh JANDHYALA, Joel MCCORMACK, Alexander L. MINKIN, Bryon S. NORDQUIST, Poornachandra RAO
-
Patent number: 8817035
Abstract: Circuits, methods, and apparatus that perform a context switch quickly while not wasting a significant amount of in-progress work. A texture pipeline includes a cutoff point or stage. After receipt of a context switch instruction, texture requests and state updates above the cutoff point are stored in a memory, while those below the cutoff point are processed before the context switch is completed. After this processing is complete, global states in the texture pipeline are stored in the memory. A previous context may then be restored by reading its texture requests and global states from the memory and loading them into the texture pipeline. The location of the cutoff point can be a point in the pipeline where a texture request can no longer result in a page fault in the memory.
Type: Grant
Filed: February 25, 2013
Date of Patent: August 26, 2014
Assignee: NVIDIA Corporation
Inventor: Alexander L. Minkin
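The cutoff-point behavior can be modeled on the host in a few lines: on a switch, requests past the cutoff (which can no longer fault) drain to completion, while the rest are saved for replay when the context is restored. A toy C++ model with invented structure names:

```cuda
#include <cstdio>
#include <deque>
#include <vector>

struct TexRequest { int id; bool past_cutoff; };

int main() {
    std::deque<TexRequest> pipeline = {
        {0, true}, {1, true}, {2, false}, {3, false}};
    std::vector<TexRequest> saved;               // context image in memory

    // Context switch: drain below-cutoff work, save everything above it.
    for (const TexRequest& r : pipeline)
        if (r.past_cutoff) std::printf("drain request %d\n", r.id);
        else               saved.push_back(r);
    pipeline.clear();

    // Restore: reload the saved requests into the now-idle pipeline.
    for (const TexRequest& r : saved) pipeline.push_back(r);
    std::printf("restored %zu requests\n", saved.size());
    return 0;
}
```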
-
Patent number: 8595425
Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements, and may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace the dedicated buffers, caches, and FIFOs of previous architectures. A "direct mapped" storage region configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A "local and global cache" storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces, including global, local, and call-return stack (CRS) memory.
Type: Grant
Filed: September 25, 2009
Date of Patent: November 26, 2013
Assignee: NVIDIA Corporation
Inventors: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
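This reconfigurability survives into the public CUDA runtime as the L1/shared-memory carveout controls, which give a rough, software-level approximation of the central-storage idea described above:

```cuda
#include <cuda_runtime.h>

__global__ void kernel_wants_shared() { /* ... */ }

void configure() {
    // Ask for most of the unified L1/shared array to act as shared memory...
    cudaFuncSetAttribute(kernel_wants_shared,
                         cudaFuncAttributePreferredSharedMemoryCarveout, 75);
    // ...or instead prefer that it behave as hardware-managed L1 cache.
    cudaFuncSetCacheConfig(kernel_wants_shared, cudaFuncCachePreferL1);
}
```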
-
Patent number: 8335892
Abstract: One embodiment of the present invention sets forth a technique for arbitrating requests received by an L1 cache from multiple clients. The L1 cache outputs bubble requests to a first one of the multiple clients that cause the first one of the multiple clients to insert bubbles into the request stream, where a bubble is the absence of a request. The bubbles allow the L1 cache to grant access to another one of the multiple clients without stalling the first one of the multiple clients. The L1 cache services multiple clients with diverse latency and bandwidth requirements and may be reconfigured to provide memory spaces for clients executing multiple parallel threads, where the memory spaces each have a different scope.
Type: Grant
Filed: December 30, 2009
Date of Patent: December 18, 2012
Assignee: NVIDIA Corporation
Inventors: Alexander L. Minkin, Steven J. Heinrich, Rajeshwaran Selvanesan, Charles McCarver, Stewart Glenn Carlton, Anjana Rajendran
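The bubble mechanism is easy to model: rather than stalling the streaming client, the arbiter asks it to emit an empty slot and grants that slot to the waiting client. A toy host-side C++ model with invented names; the patent implements this in the L1 cache's request path:

```cuda
#include <cstdio>
#include <queue>

struct Request { int client; int addr; };

int main() {
    std::queue<Request> streaming;   // client 0: continuous request stream
    std::queue<Request> waiting;     // client 1: occasional requests
    for (int i = 0; i < 4; ++i) streaming.push({0, i * 64});
    waiting.push({1, 4096});

    while (!streaming.empty() || !waiting.empty()) {
        if (!waiting.empty()) {
            // Bubble request: client 0 inserts an empty slot (its own
            // request simply waits a cycle) and client 1 takes the slot.
            Request r = waiting.front(); waiting.pop();
            std::printf("grant client %d addr %d (bubble slot)\n", r.client, r.addr);
        } else {
            Request r = streaming.front(); streaming.pop();
            std::printf("grant client %d addr %d\n", r.client, r.addr);
        }
    }
    return 0;
}
```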