Patents by Inventor Arnav GOEL

Arnav GOEL has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11936559
    Abstract: One technique includes receiving, in a first network, a multi-destination packet from a second network, and determining, based on the multi-destination packet, a first multi-destination tree in the first network for forwarding the multi-destination packet. In response to determining that the first multi-destination tree is not rooted on the network device, a second multi-destination tree in the first network is determined, and the multi-destination packet is transmitted using the second multi-destination tree. Another technique includes, upon detecting a first network device joining a network, sending a first indication to a second network device that the first network device is in a state for an amount of time. After the amount of time has elapsed, a second indication that the first network device has exited the state is sent to the second network device. A topology of the network is updated after the first network device has exited the state.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: March 19, 2024
    Assignee: Cisco Technology, Inc.
    Inventors: Hrishikesh Narasimhan, Sundher Narayanaswamy, Biju M. Mammen, Balaji Muthuvarathan, Arnav Goel
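    A minimal sketch of the fallback idea in this abstract, assuming a simple in-memory model of multi-destination trees; the names MultiDestTree, select_tree, and the device labels are illustrative assumptions, not taken from the patent.
      # Illustrative fallback: use a second multi-destination tree when the tree
      # determined from the packet is not rooted on the local network device.
      from dataclasses import dataclass

      @dataclass
      class MultiDestTree:
          tree_id: int
          root_device: str   # device on which this tree is rooted

      def select_tree(packet_hash: int, trees: list, local_device: str) -> MultiDestTree:
          first = trees[packet_hash % len(trees)]      # first tree, determined from the packet
          if first.root_device == local_device:
              return first
          # Not rooted on this device: fall back to a second tree that is.
          for tree in trees:
              if tree.root_device == local_device:
                  return tree
          return first                                 # no locally rooted tree exists

      trees = [MultiDestTree(1, "spine-1"), MultiDestTree(2, "leaf-3")]
      print(select_tree(packet_hash=7, trees=trees, local_device="leaf-3").tree_id)  # 2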
  • Patent number: 11886931
    Abstract: The technology disclosed relates to inter-node execution of configuration files on reconfigurable processors using network interface controller (NIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and application data for applications using a first reconfigurable processor connected to a first host, and a second reconfigurable processor connected to a second host. The first reconfigurable processor is configured to push input data for the applications in a first plurality of buffers. The first host is configured to cause a first network interface controller (NIC) to stream the input data to a second plurality of buffers from the first plurality of buffers. The second host is configured to cause a second NIC to stream the input data to the second reconfigurable processor from the second plurality of buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: January 30, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
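    A simplified sketch of the buffer-relay pattern in this abstract, with ordinary Python queues standing in for the two pluralities of NIC buffers; producer, nic_relay, and consumer are hypothetical names, and the real data path is hardware rather than threads.
      # Illustrative relay of input data from a first set of buffers to a second set.
      import queue
      import threading

      first_buffers = queue.Queue(maxsize=4)    # filled by the "first reconfigurable processor"
      second_buffers = queue.Queue(maxsize=4)   # drained by the "second reconfigurable processor"

      def producer(batches):
          for batch in batches:
              first_buffers.put(batch)          # push input data into the first buffers
          first_buffers.put(None)               # end-of-stream marker

      def nic_relay():
          while (batch := first_buffers.get()) is not None:
              second_buffers.put(batch)         # stream data between the two buffer sets
          second_buffers.put(None)

      def consumer(results):
          while (batch := second_buffers.get()) is not None:
              results.append(sum(batch))        # stand-in for processing the streamed batch

      results = []
      workers = [threading.Thread(target=producer, args=([[1, 2], [3, 4]],)),
                 threading.Thread(target=nic_relay),
                 threading.Thread(target=consumer, args=(results,))]
      for w in workers:
          w.start()
      for w in workers:
          w.join()
      print(results)  # [3, 7]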
  • Patent number: 11886930
    Abstract: The technology disclosed relates to runtime execution of functions across reconfigurable processors. In particular, the technology disclosed relates to a runtime logic that is configured to execute a first set of functions in a plurality of functions and/or data therefor on a first reconfigurable processor, and a second set of functions in the plurality of functions and/or data therefor on additional reconfigurable processors. Functions in the second set of functions and/or the data therefor are transmitted to the additional reconfigurable processors using one or more of a first reconfigurable processor-to-additional reconfigurable processors buffers, and results of executing the functions and/or the data therefor on the additional reconfigurable processors are transmitted to the first reconfigurable processor using one or more of additional reconfigurable processors-to-first reconfigurable processor buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: January 30, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
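    A small sketch of splitting a plurality of functions into a locally executed set and a remotely executed set whose work and results travel through buffers; run_partitioned and the deque-based buffers are assumptions made for illustration only.
      # Illustrative partitioning of functions between a first and an additional processor.
      from collections import deque

      def run_partitioned(functions, data, run_locally):
          to_remote = deque()        # stands in for processor-to-processor buffers
          from_remote = deque()      # stands in for the return buffers
          results = {}
          for name, fn in functions.items():
              if run_locally(name):
                  results[name] = fn(data)         # first set: executed on the first processor
              else:
                  to_remote.append((name, fn))     # second set: shipped out via a buffer
          while to_remote:                         # the additional processor drains its buffer
              name, fn = to_remote.popleft()
              from_remote.append((name, fn(data)))
          while from_remote:                       # results come back over the return buffer
              name, value = from_remote.popleft()
              results[name] = value
          return results

      funcs = {"scale": lambda xs: [2 * x for x in xs], "total": sum}
      print(run_partitioned(funcs, [1, 2, 3], run_locally=lambda name: name == "scale"))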
  • Publication number: 20230409395
    Abstract: A data processing system comprises a pool of reconfigurable data flow resources and a runtime processor. The pool of reconfigurable data flow resources includes arrays of physical configurable units and memory. The runtime processor includes logic to receive a plurality of configuration files for user applications. The configuration files include configurations of virtual data flow resources required to execute the user applications. The runtime processor also includes logic to allocate physical configurable units and memory in the pool of reconfigurable data flow resources to the virtual data flow resources and load the configuration files to the allocated physical configurable units. The runtime processor further includes logic to execute the user applications using the allocated physical configurable units and memory.
    Type: Application
    Filed: June 20, 2023
    Publication date: December 21, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Ravinder KUMAR, Conrad Alexander TURLIK, Arnav GOEL, Qi ZHENG, Raghunath SHENBAGAM, Anand MISRA, Ananda Reddy VAYYALA
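    A minimal sketch of mapping requested virtual data flow resources onto free physical configurable units from a pool; the allocate helper, application names, and unit counts are illustrative assumptions, not the runtime processor's actual logic.
      # Illustrative allocation of virtual data flow resources onto a physical pool.
      def allocate(requests, pool_size):
          free_units = list(range(pool_size))
          placements = {}
          for app, needed in requests.items():     # needed = virtual units in the config file
              if needed > len(free_units):
                  raise RuntimeError(f"not enough configurable units for {app}")
              placements[app] = [free_units.pop() for _ in range(needed)]
          return placements

      # Two hypothetical user applications asking for 3 and 2 virtual units from a pool of 8.
      print(allocate({"app_a": 3, "app_b": 2}, pool_size=8))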
  • Publication number: 20230385103
    Abstract: In a method, an Intelligent Data Conversion (IDC) engine of a dataflow system detects a stage transition of a dataflow application executing on the dataflow system. In response, the IDC engine determines that data among stage data of the application has a first Stage Data Format (SDF). The IDC engine determines that a first processing unit of the dataflow system can process data having a second SDF and determines a data conversion to convert data among the stage data to have the second SDF. The IDC engine also determines a second processing unit of the dataflow system to perform the data conversion and dispatches the second processing unit to perform the data conversion. The dataflow computing system can include a runtime processor, and the IDC engine can interact with the runtime processor to detect the stage transition and/or dispatch the first processing unit.
    Type: Application
    Filed: May 22, 2023
    Publication date: November 30, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Qi ZHENG, Ravinder KUMAR, Arnav GOEL, Po-Yu WU, Arjun SABNIS, Joshua Earle POLZIN
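    A compact sketch of the conversion decision described here, assuming stage data formats are plain strings and available conversions live in a lookup table; CONVERTERS, on_stage_transition, and the NCHW/NHWC example are illustrative assumptions rather than the IDC engine's interface.
      # Illustrative stage-transition hook that dispatches a conversion only when formats differ.
      CONVERTERS = {("NCHW", "NHWC"): lambda data: f"transpose({data})"}   # hypothetical table

      def on_stage_transition(stage_data, current_format, next_unit_format, dispatch):
          if current_format == next_unit_format:
              return stage_data                    # no conversion required
          convert = CONVERTERS[(current_format, next_unit_format)]
          return dispatch(convert, stage_data)     # a second processing unit does the work

      result = on_stage_transition("batch0", "NCHW", "NHWC", dispatch=lambda fn, d: fn(d))
      print(result)  # transpose(batch0)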
  • Publication number: 20230388373
    Abstract: A data processing system is presented in a client-server configuration for executing first and second applications that a client in the client-server configuration can offload for execution onto the data processing system. The data processing system includes a server and a pool of reconfigurable data flow resources that is configured to execute the first application in a first runtime context and the second application in a second runtime context. The server is configured to establish a session with the client, receive first and second execution requests for executing the first application and the second application from the client, start respective first and second execution of the first and second applications in the respective first and second runtime contexts in response to receiving the first and second execution requests, and balance a first load from the first execution with a second load from the second execution.
    Type: Application
    Filed: May 22, 2023
    Publication date: November 30, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Milad SHARIF, Ravinder KUMAR, Qi ZHENG, Neal SANGHVI, Jiayu BAI, Arnav GOEL
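    A toy sketch of the load-balancing idea in this abstract: one session submits two execution requests and the server places each in the runtime context carrying the smaller load. The Server class, context names, and numeric loads are assumptions for illustration, not the system's actual policy.
      # Illustrative placement of two execution requests across two runtime contexts.
      class Server:
          def __init__(self):
              self.contexts = {"ctx-1": [], "ctx-2": []}    # runtime contexts and their loads

          def submit(self, session, app, load):
              target = min(self.contexts, key=lambda c: sum(self.contexts[c]))
              self.contexts[target].append(load)            # balance the new load
              return f"session {session}: {app} -> {target}"

      server = Server()
      print(server.submit("s1", "first_app", load=7))   # -> ctx-1
      print(server.submit("s1", "second_app", load=3))  # -> ctx-2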
  • Patent number: 11809908
    Abstract: A data processing system comprises a pool of reconfigurable data flow resources and a runtime processor. The pool of reconfigurable data flow resources includes arrays of physical configurable units and memory. The runtime processor includes logic to receive a plurality of configuration files for user applications. The configuration files include configurations of virtual data flow resources required to execute the user applications. The runtime processor also includes logic to allocate physical configurable units and memory in the pool of reconfigurable data flow resources to the virtual data flow resources and load the configuration files to the allocated physical configurable units. The runtime processor further includes logic to execute the user applications using the allocated physical configurable units and memory.
    Type: Grant
    Filed: July 7, 2020
    Date of Patent: November 7, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ravinder Kumar, Conrad Alexander Turlik, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Anand Misra, Ananda Reddy Vayyala
  • Publication number: 20230333879
    Abstract: A data processing system is presented that is configured as a server in a client-server configuration for executing applications that a client in the client-server configuration can offload as execution tasks for execution on the server. The data processing system includes a reconfigurable processor, a storage device that stores configuration files for the applications, and a host processor that is coupled to the storage device and to the reconfigurable processor. The host processor is configured to receive an execution task of the execution tasks with an identifier of an application from the client, retrieve a configuration file that is associated with the application from the storage device using the identifier of the application, configure the reconfigurable processor with the configuration file, and start execution of the application on the reconfigurable processor, whereby the reconfigurable processor provides output data of the execution of the application to the client.
    Type: Application
    Filed: April 12, 2023
    Publication date: October 19, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Arnav GOEL, Ravinder KUMAR, Qi ZHENG, Milad SHARIF, Jiayu BAI, Neal SANGHVI
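    A minimal sketch of the offload path this abstract describes: an execution task arrives with an application identifier, the matching configuration file is retrieved, and output is returned to the client. CONFIG_STORE and handle_execution_task are invented names, and the "processor" here is just a dictionary standing in for the reconfigurable hardware.
      # Illustrative lookup of a stored configuration file by application identifier.
      CONFIG_STORE = {"app-42": {"config_file": "matmul.cfg", "entry": "main"}}

      def handle_execution_task(task):
          config = CONFIG_STORE[task["app_id"]]          # retrieve by the application identifier
          processor = {"loaded": config["config_file"]}  # stand-in for configuring the processor
          output = f"ran {config['entry']} ({processor['loaded']}) on {task['inputs']}"
          return {"client": task["client"], "output": output}

      print(handle_execution_task({"client": "c1", "app_id": "app-42", "inputs": [1, 2, 3]}))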
  • Patent number: 11782760
    Abstract: A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: October 10, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Anand Misra, Arnav Goel, Qi Zheng, Raghunath Shenbagam, Ravinder Kumar
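    A rough sketch of priority-weighted interleaving for two applications, assuming the priority indicators are integers and each time slot runs one application on the reconfigurable hardware while the other uses the general hardware; the credit-based scheduler below is an assumption, not the patented scheduling policy.
      # Illustrative priority-weighted interleaving of two applications on shared hardware.
      def schedule(slots, priorities):
          total = sum(priorities.values())
          credit = {name: 0.0 for name in priorities}
          timeline = []
          for _ in range(slots):
              for name, prio in priorities.items():
                  credit[name] += prio / total
              chosen = max(credit, key=credit.get)     # most "owed" application gets the slot
              credit[chosen] -= 1.0
              other = next(n for n in priorities if n != chosen)
              timeline.append((chosen, other))         # (reconfigurable hw, general hw)
          return timeline

      print(schedule(4, {"training_job": 3, "preprocess_job": 1}))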
  • Publication number: 20230297527
    Abstract: A system is presented that includes two data processing systems that are coupled via a network, each data processing system including a reconfigurable processor with a reconfigurable processor memory, a host that is coupled to the reconfigurable processor and that includes a host processor and a host memory that is coupled to the host processor, and a network interface controller (NIC) that is operatively coupled to the reconfigurable processor and to the host processor. The reconfigurable processor of one of the data processing systems is configured to implement a virtual function that uses a virtual address for a memory access operation. An application programming interface (API) in the host processor translates the virtual address into a physical address, and the NIC uses the physical address to initiate a direct memory access operation at the reconfigurable processor memory or the host memory of the other data processing system.
    Type: Application
    Filed: March 14, 2023
    Publication date: September 21, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Conrad Alexander TURLIK, Sudhakar DINDUKURTI, Anand MISRA, Arjun SABNIS, Milad SHARIF, Ravinder KUMAR, Joshua Earle POLZIN, Arnav GOEL, Steven DAI
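    A tiny sketch of the translation step described here: a virtual address used by the virtual function is split into page and offset, mapped through a page table, and the resulting physical address is handed to a simulated DMA read. The page table contents and the nic_dma helper are illustrative assumptions.
      # Illustrative virtual-to-physical translation ahead of a (simulated) DMA read.
      PAGE_SIZE = 4096
      page_table = {0: 0x1F, 1: 0x2A}           # virtual page -> physical frame (hypothetical)

      def translate(virtual_addr):
          page, offset = divmod(virtual_addr, PAGE_SIZE)
          return page_table[page] * PAGE_SIZE + offset

      def nic_dma(physical_addr, length, remote_memory):
          return remote_memory[physical_addr:physical_addr + length]   # stand-in for DMA

      remote_memory = bytes(range(256)) * 1024   # stand-in for the other system's memory
      phys = translate(0x1008)                   # virtual page 1, offset 8
      print(hex(phys), nic_dma(phys, 4, remote_memory))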
  • Publication number: 20230259823
    Abstract: In a method, an orchestrator of a computing system determines that results of Machine Learning model computations are available and dispatches a worker to perform model computations that include computing gradients of the results. The orchestrator determines that a set of gradients of the results is available and dispatches a gradient worker to compute a sum of the gradients. The orchestrator determines that a second set of gradients of the results is available and dispatches a second gradient worker to compute a sum of the second set of gradients. The orchestrator determines that the sums of the first and second sets of gradients are available and dispatches a third gradient worker to compute synchronized gradients. The gradient workers compute the sums and synchronized gradients concurrently with training workers computing additional model computation results and/or gradients. A computer program product can include the method, and a computing system can include the orchestrator.
    Type: Application
    Filed: February 13, 2023
    Publication date: August 17, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Greg DYKEMA, Fansheng CHENG, Kuan ZHOU, Arnav GOEL, Subhra MAZUMDAR, Milad SHARIF, Po-Yu WU, Bowen YANG, Qi ZHENG
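    A compact sketch of the orchestration flow: two gradient workers each sum one set of gradients, and a third worker combines the sums into synchronized (here, averaged) gradients. A thread pool stands in for the orchestrator's workers, and the averaging step is an assumption about what "synchronized gradients" means for the example.
      # Illustrative dispatch of gradient-summing workers followed by a synchronization worker.
      from concurrent.futures import ThreadPoolExecutor

      def sum_gradients(gradients):
          return [sum(g) for g in zip(*gradients)]       # element-wise sum over one set

      def synchronize(sum_a, sum_b, num_workers):
          return [(a + b) / num_workers for a, b in zip(sum_a, sum_b)]

      first_set = [[0.1, 0.2], [0.3, 0.4]]               # gradients from two training workers
      second_set = [[0.5, 0.6], [0.7, 0.8]]
      with ThreadPoolExecutor() as orchestrator:
          first_sum = orchestrator.submit(sum_gradients, first_set)     # first gradient worker
          second_sum = orchestrator.submit(sum_gradients, second_set)   # second gradient worker
          synced = orchestrator.submit(synchronize, first_sum.result(),
                                       second_sum.result(), num_workers=4)
      print(synced.result())  # approximately [0.4, 0.5]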
  • Publication number: 20230237012
    Abstract: A system for executing an application on a pool of reconfigurable processors with first and second reconfigurable processors having first and second architectures that are different from each other is presented. The system comprises an archive of configuration files with first and second configuration files for executing the application on the first and second reconfigurable processors, respectively, and a host system that is operatively coupled to the first and second reconfigurable processors. The host system comprises a runtime processor that allocates reconfigurable processors for executing the application and an auto-discovery module that is configured to perform discovery of whether the reconfigurable processors include at least one of the first reconfigurable processors and whether the reconfigurable processors include at least one of the second reconfigurable processors.
    Type: Application
    Filed: September 9, 2022
    Publication date: July 27, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Greg Dykema, Maran Wilson, Guoyao Feng, Kuan Zhou, Tianyu Sun, Taylor Lee, Kin Hing LEUNG, Arnav Goel, Conrad Turlik, Milad Sharif
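    A small sketch of the auto-discovery step: inspect which architectures are present among the allocated processors and pick the matching configuration file from the archive for each one. The archive contents and the discover_and_assign helper are illustrative assumptions.
      # Illustrative auto-discovery: match each processor's architecture to a configuration file.
      archive = {"arch_a": "model.arch_a.cfg", "arch_b": "model.arch_b.cfg"}   # hypothetical

      def discover_and_assign(allocated):
          """allocated: {processor_id: architecture} as reported by discovery."""
          missing = set(allocated.values()) - archive.keys()
          if missing:
              raise RuntimeError(f"no configuration file for architectures: {missing}")
          return {proc: archive[arch] for proc, arch in allocated.items()}

      print(discover_and_assign({"rp0": "arch_a", "rp1": "arch_b", "rp2": "arch_a"}))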
  • Publication number: 20230237013
    Abstract: A system for a data-parallel execution of at least two implementations of an application on reconfigurable processors with different layouts is presented. The system comprises a pool of reconfigurable data flow resources with data transfer resources that interconnect first and second reconfigurable processors having first and second layouts that impose respective first and second constraints for the data-parallel execution of the application. The system further comprises an archive of configuration files and a host system that is operatively coupled to the first and second reconfigurable processors. The host system comprises first and second compilers that generate for the application, based on the respective first and second constraints, first and second configuration files that are stored in the archive of configuration files and adapted to be executed in a data-parallel compatible manner on the respective first and second reconfigurable processors.
    Type: Application
    Filed: September 9, 2022
    Publication date: July 27, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Greg Dykema, Maran Wilson, Guoyao Feng, Kuan Zhou, Tianyu Sun, Taylor Lee, Kin Hing LEUNG, Arnav Goel, Conrad Turlik, Milad Sharif
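    A toy sketch of compiling the same application twice under two different layout constraints so the resulting configuration files can run data-parallel; compile_for_layout and the constraint fields are assumptions made up for the example, not the compilers' real inputs.
      # Illustrative per-layout compilation producing two entries for the configuration archive.
      def compile_for_layout(app, constraints):
          tiles = constraints["compute_tiles"]
          return {"app": app, "tiles": tiles,
                  "per_tile_batch": constraints["batch"] // tiles}

      layout_1 = {"compute_tiles": 8, "batch": 64}    # first layout's constraints (hypothetical)
      layout_2 = {"compute_tiles": 4, "batch": 64}    # second layout's constraints
      config_archive = [compile_for_layout("my_model", layout_1),
                        compile_for_layout("my_model", layout_2)]
      print(config_archive)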
  • Publication number: 20230205614
    Abstract: A method of pipelining execution stages of a pipelined application comprises an application execution program (AEP) utilizing a Pipeline Programming Interface (PPI) of a Buffer Pipelined Application computing System (BPAS). In the method, the AEP uses one interface of the PPI to determine buffers, among a set of pipeline buffers stored in physical memories of the BPAS, for the BPAS to execute operations of a computing application using batches of application data. The AEP uses a second interface of the PPI to load data batches into pipeline buffers, and a third interface of the PPI to input the buffers to the BPAS for executing operations of the application. The AEP can use another interface of the PPI to allocate the buffers in particular physical memories of the BPAS. A computing system can comprise the AEP and BPAS, and can perform the method.
    Type: Application
    Filed: December 22, 2022
    Publication date: June 29, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Joshua POLZIN, Conrad Alexander TURLIK, Arnav GOEL, Qi ZHENG, Maran WILSON, Neal SANGHVI
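    A minimal sketch of the application-side flow, covering three of the interfaces mentioned above (determine a buffer, load a batch, submit the buffer); the buffer-placement interface is omitted, and PipelineAPI with its methods is an invented stand-in, not the PPI itself.
      # Illustrative application-side use of a pipeline programming interface.
      class PipelineAPI:
          def __init__(self, num_buffers):
              self.buffers = [{"data": None} for _ in range(num_buffers)]

          def get_free_buffer(self):            # interface 1: determine a buffer to use
              return next(b for b in self.buffers if b["data"] is None)

          def load(self, buffer, batch):        # interface 2: load a data batch
              buffer["data"] = batch

          def submit(self, buffer):             # interface 3: hand the buffer to the system
              result = [x * x for x in buffer["data"]]   # stand-in for pipelined execution
              buffer["data"] = None                      # buffer becomes free again
              return result

      ppi = PipelineAPI(num_buffers=2)
      for batch in ([1, 2], [3, 4]):
          buf = ppi.get_free_buffer()
          ppi.load(buf, batch)
          print(ppi.submit(buf))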
  • Publication number: 20230205613
    Abstract: A method of pipelining execution stages of a pipelined application can comprise a Buffer Pipeline Manager (BPM) of a Buffer Pipelined Application computing System (BPAS) allocating pipeline buffers, configuring access to the pipeline buffers by stage processors of the system, transferring buffers from one stage processor to a successor stage processor, and transferring data from a buffer in one memory to a buffer in an alternative memory. The BPM can allocate the buffers based on execution parameters associated with the pipelined application and/or stage processors. The BPM can transfer data to a buffer in an alternative memory based on performance, capacity, and/or topological attributes of the memories and/or processors utilizing the memories. The BPM can perform operations of the method responsive to interfaces of a Pipeline Programming Interface (PPI). A BPAS can comprise hardware processors, physical memories, stage processors, an application execution program, the PPI, and the BPM.
    Type: Application
    Filed: December 22, 2022
    Publication date: June 29, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Joshua POLZIN, Conrad Alexander TURLIK, Arnav GOEL, Qi ZHENG, Maran WILSON, Neal SANGHVI
  • Publication number: 20230205585
    Abstract: A data processing system includes a runtime processor and a pool of reconfigurable data flow resources with memory units, busses, and arrays of physical configurable units. The runtime processor is operatively coupled to the pool of reconfigurable data flow resources and configured to load first and second configuration files for executing first and second user applications on first and second subsets of the arrays of physical configurable units and to assign first and second subsets of the memory units to the first and second user applications. The runtime processor starts execution of the first and second user applications on the first and second subsets of the arrays of physical configurable units, prevents the first user application from accessing the resources allocated to the second user application, and prevents the second user application from accessing resources allocated to the first user application.
    Type: Application
    Filed: December 19, 2022
    Publication date: June 29, 2023
    Applicant: SambaNova Systems, Inc.
    Inventors: Ranen CHATTERJEE, Ravinder KUMAR, Raghunath SHENBAGAM, Maran WILSON, Conrad Alexander TURLIK, Arnav GOEL, Arjun SABNIS, Yannan CHEN
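    A brief sketch of the isolation property: each application is assigned disjoint subsets of configurable units and memory, and any access to a resource owned by the other application is rejected. The assignments table and check_access helper are illustrative assumptions about how such a check might look.
      # Illustrative enforcement of disjoint resource assignments between two applications.
      assignments = {"app_1": {"units": {0, 1, 2}, "memory": {"mem0"}},
                     "app_2": {"units": {3, 4}, "memory": {"mem1"}}}

      def check_access(app, unit=None, memory=None):
          owned = assignments[app]
          if unit is not None and unit not in owned["units"]:
              raise PermissionError(f"{app} may not access configurable unit {unit}")
          if memory is not None and memory not in owned["memory"]:
              raise PermissionError(f"{app} may not access memory {memory}")
          return True

      print(check_access("app_1", unit=2))          # allowed
      try:
          check_access("app_1", memory="mem1")      # resource allocated to app_2
      except PermissionError as err:
          print(err)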
  • Patent number: 11625284
    Abstract: The technology disclosed relates to inter-node execution of configuration files on reconfigurable processors using smart network interface controller (SmartNIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and process application data for applications using a first reconfigurable processor on a first node, and a second host processor on a second node. The execution includes streaming configuration data in the configuration files and the application data between the first reconfigurable processor and the second host processor using one or more SmartNIC buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: April 11, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11625283
    Abstract: The technology disclosed relates to inter-processor execution of configuration files on reconfigurable processors using smart network interface controller (SmartNIC) buffers. In particular, the technology disclosed relates to a runtime logic that is configured to execute configuration files that define applications and process application data for applications using a first reconfigurable processor and a second reconfigurable processor. The execution includes streaming configuration data in the configuration files and the application data between the first reconfigurable processor and the second reconfigurable processor using one or more SmartNIC buffers.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: April 11, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Patent number: 11609798
    Abstract: The technology disclosed relates to runtime execution of configuration files on reconfigurable processors with varying configuration granularity. In particular, the technology disclosed relates to a runtime logic that is configured to receive a set of configuration files for an application, and load and execute a first subset of configuration files in the set of configuration files and associated application data on a first reconfigurable processor. The first reconfigurable processor has a first level of configurable granularity. The runtime logic is further configured to load and execute a second subset of configuration files in the set of configuration files and associated application data on a second reconfigurable processor. The second reconfigurable processor has a second level of configurable granularity that is different from the first level of configurable granularity.
    Type: Grant
    Filed: November 9, 2021
    Date of Patent: March 21, 2023
    Assignee: SambaNova Systems, Inc.
    Inventors: Ram Sivaramakrishnan, Sumti Jairath, Emre Ali Burhan, Manish K. Shah, Raghu Prabhakar, Ravinder Kumar, Arnav Goel, Ranen Chatterjee, Gregory Frederick Grohoski, Kin Hing Leung, Dawei Huang, Manoj Unnikrishnan, Martin Russell Raumann, Bandish B. Shah
  • Publication number: 20220269534
    Abstract: A method for executing applications in a system comprising general hardware and reconfigurable hardware includes accessing a first execution file comprising metadata storing a first priority indicator associated with a first application, and a second execution file comprising metadata storing a second priority indicator associated with a second application. In an example, use of the reconfigurable hardware is interleaved between the first application and the second application, and the interleaving is scheduled to take into account (i) workload of the reconfigurable hardware and (ii) the first priority indicator and the second priority indicator associated with the first application and the second application, respectively. In an example, when the reconfigurable hardware is used by one of the first and second applications, the general hardware is used by another of the first and second applications.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 25, 2022
    Applicant: SambaNova Systems, Inc.
    Inventors: Anand MISRA, Arnav GOEL, Qi ZHENG, Raghunath SHENBAGAM, Ravinder KUMAR