Patents by Inventor Ajit PUNJ
Ajit PUNJ has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12236220
Abstract: The technology disclosed relates to storing a dataflow graph with a plurality of compute nodes that transmit data along data connections, and controlling data transmission between compute nodes in the plurality of compute nodes along the data connections by using control connections to control writing of data.
Type: Grant
Filed: June 7, 2023
Date of Patent: February 25, 2025
Assignee: SambaNova Systems, Inc.
Inventors: Weiwei Chen, Raghu Prabhakar, David Alan Koeplinger, Sitanshu Gupta, Ruddhi Chaphekar, Ajit Punj, Sumti Jairath
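The control-connection idea in the abstract above can be pictured with a small software model: a node may write along a data connection only after it has received a control token. This is a minimal sketch under assumed names (ComputeNode, grant_write, send); it is not the patented hardware design.

```python
# Hypothetical sketch: writes along data connections are gated by control connections.
# Class and method names are invented for illustration only.

class ComputeNode:
    def __init__(self, name):
        self.name = name
        self.write_enabled = False   # set when a control token arrives
        self.inbox = []              # data received over data connections

    def grant_write(self):
        """Control connection: permit this node's next write."""
        self.write_enabled = True

    def send(self, downstream, value):
        """Data connection: write only if the control token has been received."""
        if not self.write_enabled:
            raise RuntimeError(f"{self.name}: write blocked until a control token arrives")
        downstream.inbox.append(value)
        self.write_enabled = False   # consume the token

# Minimal usage: A is granted permission before writing to B along the data connection.
a, b = ComputeNode("A"), ComputeNode("B")
a.grant_write()
a.send(b, 42)
print(b.inbox)   # [42]
```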
-
Publication number: 20250045241
Abstract: A method for optimizing the placement and routing of compute units and memory units within a reconfigurable computing grid is disclosed. The method involves obtaining a hardware description of the computing grid, receiving a placement graph for a computing task, forming subgraphs for unplaced memory units with primary connections, assigning subgraphs as clusters on the grid, and generating a configuration file for a physical reconfigurable processor. By strategically placing clusters on the grid and configuring the processor based on the logical representation, efficient execution of reconfigurable computing tasks is achieved. This method enhances the performance and flexibility of reconfigurable computing systems by streamlining the allocation of compute and memory resources.
Type: Application
Filed: October 22, 2024
Publication date: February 6, 2025
Applicant: SambaNova Systems, Inc.
Inventors: Kin Hing LEUNG, Feng SHENG, Ajit PUNJ
-
Patent number: 12147381
Abstract: A method for placing, routing and using compute units and memory units in a reconfigurable computing grid includes receiving a placement graph for a computing task that defines a set of unplaced memory units, a set of unplaced compute units and data connections between the unplaced memory units and the unplaced compute units, the data connections comprising primary connections corresponding to the primary ports of the unplaced compute units and secondary connections corresponding to the secondary ports of the unplaced compute units. The method also includes forming a subgraph for each unplaced memory unit having a primary connection, each subgraph comprising the unplaced memory unit and each unplaced compute unit connected to the unplaced memory unit via a primary connection. The method also includes placing each formed subgraph as a cluster on the reconfigurable computing grid. A corresponding computer program product and system are also disclosed herein.
Type: Grant
Filed: December 16, 2022
Date of Patent: November 19, 2024
Assignee: SambaNova Systems, Inc.
Inventors: Kin Hing Leung, Feng Sheng, Ajit Punj
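The subgraph-and-cluster step described in this abstract (and in the related application above) can be approximated in a few lines: group every memory unit with the compute units it reaches over a primary connection, then assign each resulting cluster to a region of the grid. The graph encoding, function names, and naive slot assignment below are assumptions made for illustration, not the patented placement algorithm.

```python
# Hypothetical sketch of subgraph formation and cluster placement.
from collections import defaultdict

def form_subgraphs(memory_units, primary_edges):
    """Group each memory unit with the compute units it reaches via a primary connection."""
    by_mem = defaultdict(list)
    for mem, comp in primary_edges:          # (memory unit, compute unit) pairs
        by_mem[mem].append(comp)
    return [{"memory": mem, "compute": by_mem[mem]}
            for mem in memory_units if mem in by_mem]

def place_clusters(subgraphs, grid_slots):
    """Naive placement: assign each cluster to the next free region of the grid."""
    return {slot: cluster for cluster, slot in zip(subgraphs, grid_slots)}

# Toy input: two memory units, three compute units, primary connections between them.
subgraphs = form_subgraphs(
    memory_units=["M0", "M1"],
    primary_edges=[("M0", "C0"), ("M0", "C1"), ("M1", "C2")],
)
print(place_clusters(subgraphs, grid_slots=["slot0", "slot1"]))
# {'slot0': {'memory': 'M0', 'compute': ['C0', 'C1']},
#  'slot1': {'memory': 'M1', 'compute': ['C2']}}
```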
-
Publication number: 20240054099
Abstract: A method for placing, routing and using compute units and memory units in a reconfigurable computing grid includes receiving a placement graph for a computing task that defines a set of unplaced memory units, a set of unplaced compute units and data connections between the unplaced memory units and the unplaced compute units, the data connections comprising primary connections corresponding to the primary ports of the unplaced compute units and secondary connections corresponding to the secondary ports of the unplaced compute units. The method also includes forming a subgraph for each unplaced memory unit having a primary connection, each subgraph comprising the unplaced memory unit and each unplaced compute unit connected to the unplaced memory unit via a primary connection. The method also includes placing each formed subgraph as a cluster on the reconfigurable computing grid. A corresponding computer program product and system are also disclosed herein.
Type: Application
Filed: December 16, 2022
Publication date: February 15, 2024
Applicant: SambaNova Systems, Inc.
Inventors: Kin Hing LEUNG, Feng SHENG, Ajit PUNJ
-
Publication number: 20230325163
Abstract: The technology disclosed relates to storing a dataflow graph with a plurality of compute nodes that transmit data along data connections, and controlling data transmission between compute nodes in the plurality of compute nodes along the data connections by using control connections to control writing of data.
Type: Application
Filed: June 7, 2023
Publication date: October 12, 2023
Applicant: SambaNova Systems, Inc.
Inventors: Weiwei CHEN, Raghu PRABHAKAR, David Alan KOEPLINGER, Sitanshu GUPTA, Ruddhi CHAPHEKAR, Ajit PUNJ, Sumti JAIRATH
-
Publication number: 20230244748
Abstract: A method for multiplying matrices in a coarse-grained computing grid includes assigning each compute unit c of C compute units to a unique submatrix Rc of a result matrix R, wherein the C compute units are arranged in a 2D computing grid, configuring one or more source memory units to provide relevant matrix A data and matrix B data to the C compute units via a plurality of packets, and configuring each compute unit c to produce the unique submatrix Rc and send the unique submatrix Rc to one or more desired memory units. The method also includes initiating data flow in the computing grid to produce the result matrix R within the desired memory units. To reduce packet traffic, matrix B data corresponding to a column of compute units may be narrow-casted to each column of compute units. A corresponding system and computer-readable medium are also disclosed herein.
Type: Application
Filed: May 25, 2022
Publication date: August 3, 2023
Applicant: SambaNova Systems, Inc.
Inventors: Pramod Nataraja, Sitanshu Gupta, Ram Sivaramakrishnan, Ajit Punj
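The decomposition in this abstract is essentially a blocked matrix multiply: each compute unit at grid position (i, j) owns one submatrix Rc of the result and only needs the matching rows of A and columns of B. The sketch below shows that decomposition in plain NumPy; the grid dimensions and slicing scheme are illustrative assumptions, not the patented packet or narrow-cast mechanics.

```python
# Hypothetical sketch of mapping a blocked matrix multiply onto a 2-D grid of compute units.
import numpy as np

def blocked_matmul(A, B, grid_rows, grid_cols):
    """Each (i, j) 'compute unit' produces one submatrix Rc of the result R = A @ B."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    rb, cb = m // grid_rows, n // grid_cols      # result-block dimensions
    R = np.zeros((m, n))
    for i in range(grid_rows):                   # row of compute units
        for j in range(grid_cols):               # column of compute units
            # Source memory units would stream these slices as packets;
            # B[:, j*cb:(j+1)*cb] is the data shared by every unit in column j.
            A_rows = A[i*rb:(i+1)*rb, :]
            B_cols = B[:, j*cb:(j+1)*cb]
            R[i*rb:(i+1)*rb, j*cb:(j+1)*cb] = A_rows @ B_cols
    return R

A = np.random.rand(4, 6)
B = np.random.rand(6, 8)
assert np.allclose(blocked_matmul(A, B, grid_rows=2, grid_cols=2), A @ B)
```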
-
Patent number: 11709664
Abstract: A compiler configured to configure memory nodes with a ready-to-read credit counter and a write credit counter. The ready-to-read credit counter of a particular upstream memory node initialized with as many read credits as a buffer depth of a corresponding downstream memory node. The ready-to-read credit counter configured to decrement when a buffer data unit is written by the particular upstream memory node into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a read ready token. The write credit counter of the particular upstream memory node initialized with one or more write credits and configured to decrement when the particular upstream memory node begins writing the buffer data unit into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a write done token.
Type: Grant
Filed: June 2, 2020
Date of Patent: July 25, 2023
Assignee: SambaNova Systems, Inc.
Inventors: Weiwei Chen, Raghu Prabhakar, David Alan Koeplinger, Sitanshu Gupta, Ruddhi Arun Chaphekar, Ajit Punj, Sumti Jairath
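The two counters in this abstract behave like a small credit-based flow-control protocol, which the following sketch imitates in software. The class and token-handler names are assumptions, and the sketch simplifies by decrementing both counters at the start of a write, whereas the patent distinguishes the moment a write begins from the moment the buffer data unit has been written.

```python
# Hypothetical sketch of an upstream node's ready-to-read and write credit counters.

class UpstreamNode:
    def __init__(self, downstream_buffer_depth, write_credits=1):
        self.read_credits = downstream_buffer_depth   # "ready-to-read" counter
        self.write_credits = write_credits             # "write" counter

    def can_write(self):
        return self.read_credits > 0 and self.write_credits > 0

    def begin_write(self):
        """Writing one buffer data unit consumes one credit from each counter."""
        assert self.can_write()
        self.read_credits -= 1
        self.write_credits -= 1

    def on_read_ready_token(self):
        """Downstream read a unit, freeing a buffer slot for the next write."""
        self.read_credits += 1

    def on_write_done_token(self):
        """Downstream acknowledged that the previous write completed."""
        self.write_credits += 1

node = UpstreamNode(downstream_buffer_depth=2)
node.begin_write()          # first slot of the downstream buffer
node.on_write_done_token()  # downstream acknowledges the write
node.begin_write()          # second slot
print(node.can_write())     # False until a read ready token frees a slot
```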
-
Patent number: 11561925
Abstract: A method of processing partitions of a tensor in a target order includes receiving, by a reorder unit and from two or more producer units, a plurality of partitions of a tensor in a first order that is different from the target order, storing the plurality of partitions in the reorder unit, and providing, from the reorder unit, the plurality of partitions in the target order to one or more consumer units. In an example, the one or more consumer units process the plurality of partitions in the target order.
Type: Grant
Filed: September 16, 2021
Date of Patent: January 24, 2023
Assignee: SambaNova Systems, Inc.
Inventors: Raghu Prabhakar, Nathan Francis Sheeley, Matheen Musaddiq, Scott Layson Burson, Sitanshu Gupta, Sumti Jairath, Pramod Nataraja, Ajit Punj
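Functionally, the reorder unit in this abstract acts like a reorder buffer: partitions arrive from several producers in whatever order they finish, are held, and are released to consumers in the target order. The sketch below models that behavior with invented names (ReorderUnit, receive, drain); it is not the actual hardware interface.

```python
# Hypothetical sketch of a reorder unit for tensor partitions.

class ReorderUnit:
    def __init__(self, target_order):
        self.target_order = list(target_order)   # partition ids in the order consumers need
        self.buffer = {}                          # partition id -> data, arrival order irrelevant
        self.next_idx = 0

    def receive(self, partition_id, data):
        """Store a partition regardless of the first (arrival) order."""
        self.buffer[partition_id] = data

    def drain(self):
        """Yield buffered partitions in the target order, as far as they are available."""
        while self.next_idx < len(self.target_order):
            pid = self.target_order[self.next_idx]
            if pid not in self.buffer:
                break                             # next needed partition has not arrived yet
            yield pid, self.buffer.pop(pid)
            self.next_idx += 1

# Partitions arrive in first order (2, 0, 1) but are consumed in target order (0, 1, 2).
ru = ReorderUnit(target_order=[0, 1, 2])
for pid, data in [(2, "p2"), (0, "p0"), (1, "p1")]:
    ru.receive(pid, data)
print(list(ru.drain()))   # [(0, 'p0'), (1, 'p1'), (2, 'p2')]
```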
-
Publication number: 20220309029
Abstract: A method of processing partitions of a tensor in a target order includes receiving, by a reorder unit and from two or more producer units, a plurality of partitions of a tensor in a first order that is different from the target order, storing the plurality of partitions in the reorder unit, and providing, from the reorder unit, the plurality of partitions in the target order to one or more consumer units. In an example, the one or more consumer units process the plurality of partitions in the target order.
Type: Application
Filed: September 16, 2021
Publication date: September 29, 2022
Applicant: SambaNova Systems, Inc.
Inventors: Raghu PRABHAKAR, Nathan Francis SHEELEY, Matheen MUSADDIQ, Scott Layson BURSON, Sitanshu GUPTA, Sumti JAIRATH, Pramod NATARAJA, Ajit PUNJ
-
Patent number: 11204889
Abstract: A method of processing partitions of a tensor in a target order includes receiving, by a reorder unit and from two or more producer units, a plurality of partitions of a tensor in a first order that is different from the target order, storing the plurality of partitions in the reorder unit, and providing, from the reorder unit, the plurality of partitions in the target order to one or more consumer units. In an example, the one or more consumer units process the plurality of partitions in the target order.
Type: Grant
Filed: March 29, 2021
Date of Patent: December 21, 2021
Assignee: SambaNova Systems, Inc.
Inventors: Raghu Prabhakar, Nathan Francis Sheeley, Matheen Musaddiq, Scott Layson Burson, Sitanshu Gupta, Sumti Jairath, Pramod Nataraja, Ajit Punj
-
Publication number: 20210373867
Abstract: A compiler configured to configure memory nodes with a ready-to-read credit counter and a write credit counter. The ready-to-read credit counter of a particular upstream memory node initialized with as many read credits as a buffer depth of a corresponding downstream memory node. The ready-to-read credit counter configured to decrement when a buffer data unit is written by the particular upstream memory node into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a read ready token. The write credit counter of the particular upstream memory node initialized with one or more write credits and configured to decrement when the particular upstream memory node begins writing the buffer data unit into the corresponding downstream memory node, and to increment when the particular upstream memory node receives from the corresponding downstream memory node a write done token.
Type: Application
Filed: June 2, 2020
Publication date: December 2, 2021
Applicant: SambaNova Systems, Inc.
Inventors: Weiwei CHEN, Raghu PRABHAKAR, David Alan KOEPLINGER, Sitanshu GUPTA, Ruddhi Arun CHAPHEKAR, Ajit PUNJ, Sumti JAIRATH