Patents by Inventor Sayantan Sur

Sayantan Sur is a named inventor on the patent filings listed below. The listing covers both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12229069
    Abstract: Methods and apparatus for an accelerator controller hub (ACH). The ACH may be a stand-alone component or integrated on-die or on-package in an accelerator such as a GPU. The ACH may include a host device link (HDL) interface, one or more Peripheral Component Interconnect Express (PCIe) interfaces, one or more high performance accelerator link (HPAL) interfaces, and a router, operatively coupled to each of the HDL interface, the one or more PCIe interfaces, and the one or more HPAL interfaces. The HDL interface is configured to be coupled to a host CPU via an HDL link and the one or more HPAL interfaces are configured to be coupled to one or more HPALs that are used to access high performance accelerator fabrics (HPAFs) such as NVLink fabrics and CCIX (Cache Coherent Interconnect for Accelerators) fabrics. Platforms including ACHs or accelerators with integrated ACHs support RDMA transfers using RDMA semantics to enable transfers between accelerator memory on initiators and targets without CPU involvement.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: February 18, 2025
    Assignee: Intel Corporation
    Inventors: Pratik Marolia, Andrew Herdrich, Rajesh Sankaran, Rahul Pal, David Puffer, Sayantan Sur, Ajaya Durg
  • Patent number: 12190405
    Abstract: Examples described herein relate to a first graphics processing unit (GPU) with at least one integrated communications system, wherein the at least one integrated communications system is to apply a reliability protocol to communicate with a second at least one integrated communications system associated with a second GPU to copy data from a first memory region to a second memory region and wherein the first memory region is associated with the first GPU and the second memory region is associated with the second GPU.
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: January 7, 2025
    Assignee: Intel Corporation
    Inventors: Todd Rimmer, Mark Debbage, Bruce G. Warren, Sayantan Sur, Nayan Amrutlal Suthar, Ajaya Durg
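The reliability protocol described in this abstract (an integrated communications system on each GPU that keeps retransmitting region chunks until the peer acknowledges them) can be sketched in ordinary Python. This is a minimal illustration under assumed semantics, not the patented design; the `ReliableCopy` class and its loss simulation are invented for the example.

```python
class ReliableCopy:
    """Sketch of a sequence/ack reliability protocol for copying one
    memory region to another over a lossy link."""

    def __init__(self, loss_pattern=()):
        self.loss_pattern = set(loss_pattern)  # chunk seqs dropped on first try
        self.attempts = 0                      # total transmissions, incl. retries

    def send(self, src, dst):
        # Every chunk stays pending until the (simulated) receiver acks it.
        pending = {seq: chunk for seq, chunk in enumerate(src)}
        while pending:
            for seq in sorted(pending):        # sorted() copies, so pop() is safe
                self.attempts += 1
                if seq in self.loss_pattern:   # simulate a dropped packet
                    self.loss_pattern.discard(seq)
                    continue                   # no ack: chunk remains pending
                dst[seq] = pending.pop(seq)    # delivered and acknowledged
```

In hardware the retransmission decision would be driven by acknowledgements exchanged between the two GPUs' communications systems; the loop above just keeps every unacknowledged chunk pending until delivery succeeds.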
  • Patent number: 12170625
    Abstract: Examples described herein relate to receiving, at a network interface, an allocation of a first group of one or more buffers to store data to be processed by a Message Passing Interface (MPI) and based on a received packet including an indicator that permits the network interface to select a buffer for the received packet and store the received packet in the selected buffer, the network interface storing a portion of the received packet in a buffer of the first group of the one or more buffers. The indicator can permit the network interface to select a buffer for the received packet and store the received packet in the selected buffer irrespective of a tag and sender associated with the received packet.
    Type: Grant
    Filed: July 30, 2020
    Date of Patent: December 17, 2024
    Assignee: Intel Corporation
    Inventors: Todd Rimmer, Sayantan Sur, Michael William Heinz
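The buffer-selection indicator described above (a flag that lets the network interface place a packet in any host-granted buffer, bypassing the usual tag-and-sender matching) can be sketched as follows. The `NicReceiver` class and its field names are hypothetical; real MPI implementations layer this on hardware receive queues.

```python
class NicReceiver:
    """Sketch: a NIC that may pick any free buffer when the packet
    carries the 'any buffer' indicator, else matches (tag, sender)."""

    def __init__(self, buffers):
        self.free = list(buffers)        # buffers granted by the host for MPI data
        self.posted = {}                 # (tag, sender) -> specific posted buffer
        self.delivered = {}              # buffer id -> stored payload

    def post(self, tag, sender, buf):
        self.posted[(tag, sender)] = buf

    def receive(self, payload, tag, sender, any_buffer=False):
        if any_buffer and self.free:     # indicator set: NIC chooses freely,
            buf = self.free.pop(0)       # irrespective of tag and sender
        else:                            # classic tag matching
            buf = self.posted.pop((tag, sender))
        self.delivered[buf] = payload
        return buf
```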
  • Publication number: 20240231888
    Abstract: Techniques described herein include managing scheduling of interrupts by receiving a data packet comprising an indication of an interrupt to be delivered, determining an availability status of a processing thread, and managing an interrupt status indicator in response to determining the availability status. A value of the interrupt status indicator corresponds to a quantity of pending interrupts. An event handling circuit processes the interrupt or one or more pending interrupts using the processing thread.
    Type: Application
    Filed: October 24, 2022
    Publication date: July 11, 2024
    Inventors: Sayantan Sur, Shahaf Shuler, Doron Haim, Netanel Moshe Gonen, Stephen Anthony Bernard Jones
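The interrupt-status indicator described above, whose value tracks the quantity of pending interrupts, amounts to a coalescing counter: deliver immediately when a thread is free, otherwise count and drain later. A minimal sketch, with all names invented for illustration:

```python
class InterruptScheduler:
    """Sketch: coalesce interrupts behind a pending counter while the
    processing thread is busy; drain them when it becomes available."""

    def __init__(self):
        self.pending = 0        # interrupt status indicator (count of pending)
        self.handled = 0

    def on_packet(self, thread_available):
        if thread_available and self.pending == 0:
            self.handled += 1   # thread free: process the interrupt now
        else:
            self.pending += 1   # thread busy: record for later delivery

    def on_thread_free(self):
        self.handled += self.pending   # process all coalesced interrupts
        self.pending = 0
```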
  • Publication number: 20240233066
    Abstract: A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
    Type: Application
    Filed: February 22, 2024
    Publication date: July 11, 2024
    Inventors: Sayantan Sur, Stephen Anthony Bernard Jones, Shahaf Shuler
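The mechanism in this abstract, submitting a kernel now but updating its parameters from later-arriving data before it actually runs, can be sketched with a deferred launch. `DeferredKernel` and its methods are illustrative names, not an API from the patent:

```python
class DeferredKernel:
    """Sketch: a kernel submitted for later execution whose parameters
    may still be reconfigured after submission."""

    def __init__(self, fn, **params):
        self.fn = fn
        self.params = params        # dynamically configurable parameters

    def update(self, **new):        # called after submit, before launch
        self.params.update(new)

    def launch(self):
        return self.fn(**self.params)

queue = []
k = DeferredKernel(lambda scale, data: [scale * x for x in data],
                   scale=1, data=[1, 2, 3])
queue.append(k)                     # submitted; will execute at a later time
k.update(scale=10)                  # data arrives: update the parameter
result = queue.pop().launch()       # executes with the updated parameter
```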
  • Publication number: 20240134681
    Abstract: Techniques described herein include managing scheduling of interrupts by receiving a data packet comprising an indication of an interrupt to be delivered, determining an availability status of a processing thread, and managing an interrupt status indicator in response to determining the availability status. A value of the interrupt status indicator corresponds to a quantity of pending interrupts. An event handling circuit processes the interrupt or one or more pending interrupts using the processing thread.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Inventors: Sayantan Sur, Shahaf Shuler, Doron Haim, Netanel Moshe Gonen, Stephen Anthony Bernard Jones
  • Patent number: 11941722
    Abstract: A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: March 26, 2024
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Sayantan Sur, Stephen Anthony Bernard Jones, Shahaf Shuler
  • Publication number: 20230418746
    Abstract: A method includes receiving a network packet into a hardware pipeline of a network device; parsing and retrieving information of the network packet; determining, by the hardware pipeline, a packet-processing action to be performed by matching the information to a data structure of a set of flow data structures; sending, by the hardware pipeline, an action request to a programmable core, the action request being populated with data to trigger the programmable core to execute a hardware thread to perform a job that is associated with the packet-processing action and that generates contextual data; retrieving the contextual data updated by the programmable core; and integrating the contextual data into performing the packet-processing action.
    Type: Application
    Filed: October 3, 2022
    Publication date: December 28, 2023
    Inventors: Omri Kahalon, Avi Urman, Ilan Pardo, Omer Cohen, Sayantan Sur, Barak Biber, Saar Tarnopolsky, Ariel Shahar
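The flow above, a hardware match on parsed header fields, an optional job offloaded to a programmable core, and the core's contextual data folded back into the action, is essentially a match-action table with a compute hook. A minimal sketch with invented names (`Pipeline`, `core_jobs`):

```python
class Pipeline:
    """Sketch: match a parsed packet to a flow entry; optionally ask a
    programmable core to run a job whose context joins the action."""

    def __init__(self, core_jobs):
        self.flows = {}              # match key -> (action, job name or None)
        self.core_jobs = core_jobs   # jobs the programmable core can execute

    def add_flow(self, key, action, job=None):
        self.flows[key] = (action, job)

    def process(self, pkt):
        key = (pkt["proto"], pkt["dport"])             # parsed header fields
        action, job = self.flows[key]                  # hardware match step
        ctx = self.core_jobs[job](pkt) if job else {}  # offload to the core
        return {"action": action, **ctx}               # integrate the context
```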
  • Publication number: 20230276301
    Abstract: A computer-based system and method for sending data packets over a data network may include: preparing data packets and packet descriptors on one or more graphical processing units (GPUs); associating packets with a packet descriptor, which may determine a desired transmission time of the packets associated with that descriptor; receiving an indication of a clock time; and physically transmitting packets via an output interface, at a clock time corresponding to the desired transmission time. A computer-based system and method for GPU-initiated communication over a 5G data network may include allocating one or more memory buffers in GPU memory; performing at least one 5G signal processing procedure by a GPU; preparing descriptors for a plurality of packets, where each packet includes allocated memory buffers, and where the descriptors provide scheduling instructions for the packets; and triggering the sending of packets over the network based on prepared descriptors.
    Type: Application
    Filed: June 30, 2022
    Publication date: August 31, 2023
    Applicant: NVIDIA CORPORATION
    Inventors: Sreeram POTLURI, Davide ROSSETTI, Elena AGOSTINI, Pak MARKTHUB, Daniel MARCOVITCH, Sayantan SUR
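The descriptor-driven pacing described above, where each descriptor carries a desired transmission time and the output interface releases packets once the clock reaches it, can be sketched with a time-ordered queue. `TimedTxQueue` and its method names are invented for the example:

```python
import heapq

class TimedTxQueue:
    """Sketch: packets leave the output interface no earlier than the
    desired transmission time recorded in their descriptor."""

    def __init__(self):
        self.q = []             # min-heap ordered by transmission time
        self._seq = 0           # tie-breaker for packets with equal times

    def post(self, tx_time, packet):
        # A descriptor here is just (desired time, packet).
        heapq.heappush(self.q, (tx_time, self._seq, packet))
        self._seq += 1

    def tick(self, now):
        """Transmit every packet whose scheduled time has arrived."""
        sent = []
        while self.q and self.q[0][0] <= now:
            sent.append(heapq.heappop(self.q)[2])
        return sent
```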
  • Patent number: 11645534
    Abstract: An embodiment of a semiconductor package apparatus may include technology to embed one or more trigger operations in one or more messages related to collective operations for a neural network, and issue the one or more messages related to the collective operations to a hardware-based message scheduler in a desired order of execution. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: May 9, 2023
    Assignee: Intel Corporation
    Inventors: Sayantan Sur, James Dinan, Maria Garzaran, Anupama Kurpad, Andrew Friedley, Nusrat Islam, Robert Zak
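A triggered operation of the kind this abstract embeds in collective messages is, at its core, an action armed against a counter threshold: when completions advance the counter past the threshold, the scheduler fires the operation, preserving the issue order. A hypothetical sketch (the class names are not from the patent):

```python
class TriggeredOp:
    def __init__(self, threshold, action):
        self.threshold = threshold   # counter value at which the op fires
        self.action = action
        self.fired = False

class HwScheduler:
    """Sketch: ops are issued in the desired order of execution and fire
    as an event counter crosses their thresholds."""

    def __init__(self):
        self.counter = 0
        self.ops = []

    def issue(self, op):
        self.ops.append(op)

    def event(self, log):
        self.counter += 1            # e.g. a message completion event
        for op in self.ops:          # scan in issue order
            if not op.fired and self.counter >= op.threshold:
                op.fired = True
                log.append(op.action)
```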
  • Publication number: 20230112420
    Abstract: A kernel comprising at least one dynamically configurable parameter is submitted by a processor. The kernel is to be executed at a later time. Data is received after the kernel has been submitted. The at least one dynamically configurable parameter of the kernel is updated based on the data. The kernel having the at least one updated dynamically configurable parameter is executed after the at least one dynamically configurable parameter has been updated.
    Type: Application
    Filed: October 13, 2021
    Publication date: April 13, 2023
    Inventors: Sayantan Sur, Stephen Anthony Bernard Jones, Shahaf Shuler
  • Publication number: 20220351326
    Abstract: Examples described herein relate to a first graphics processing unit (GPU) with at least one integrated communications system, wherein the at least one integrated communications system is to apply a reliability protocol to communicate with a second at least one integrated communications system associated with a second GPU to copy data from a first memory region to a second memory region and wherein the first memory region is associated with the first GPU and the second memory region is associated with the second GPU.
    Type: Application
    Filed: June 29, 2022
    Publication date: November 3, 2022
    Inventors: Todd RIMMER, Mark DEBBAGE, Bruce G. WARREN, Sayantan SUR, Nayan Amrutlal SUTHAR, Ajaya Durg
  • Patent number: 11409673
    Abstract: Examples include a method of managing storage for triggered operations. The method includes receiving a request to allocate a triggered operation; if there is a free triggered operation, allocating the free triggered operation; if there is no free triggered operation, recovering one or more fired triggered operations, freeing one or more of the recovered triggered operations, and allocating one of the freed triggered operations; configuring the allocated triggered operation; and storing the configured triggered operation in a cache on an input/output (I/O) device for subsequent asynchronous execution of the configured triggered operation.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Andrew Friedley, Sayantan Sur, Ravindra Babu Ganapathi, Travis Hamilton, Keith D. Underwood
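The allocation policy in this abstract, use a free triggered operation if one exists, otherwise recover already-fired operations, free them, and allocate from those, maps naturally onto a slot cache with a fired list. A minimal sketch with invented names:

```python
class TriggerCache:
    """Sketch: storage management for triggered operations cached on an
    I/O device, recovering fired slots when the free list is empty."""

    def __init__(self, capacity):
        self.free = list(range(capacity))  # free slot ids in the device cache
        self.fired = []                    # slots whose ops already executed
        self.active = {}                   # slot -> configured operation

    def allocate(self, op):
        if not self.free:                  # no free slot available
            if not self.fired:
                raise RuntimeError("cache full; nothing to recover")
            for slot in self.fired:        # recover and free fired operations
                self.active.pop(slot, None)
            self.free, self.fired = self.fired, []
        slot = self.free.pop(0)
        self.active[slot] = op             # configure and store the operation
        return slot

    def fire(self, slot):
        self.fired.append(slot)            # op executed; slot is recoverable
```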
  • Publication number: 20220210639
    Abstract: In an embodiment, at least one interface mechanism may be provided. The mechanism may permit, at least in part, at least one process to allocate, at least in part, and/or configure, at least in part, at least one network-associated object. Such allocation and/or configuration, at least in part, may be in accordance with at least one parameter set that may correspond, at least in part, to at least one query issued by the at least one process via the mechanism. Many modifications are possible without departing from this embodiment.
    Type: Application
    Filed: December 10, 2021
    Publication date: June 30, 2022
    Applicant: Intel Corporation
    Inventors: William R. Magro, Todd M. Rimmer, Robert J. Woodruff, Mark S. Hefty, Sayantan Sur
  • Patent number: 11246027
    Abstract: In an embodiment, at least one interface mechanism may be provided. The mechanism may permit, at least in part, at least one process to allocate, at least in part, and/or configure, at least in part, at least one network-associated object. Such allocation and/or configuration, at least in part, may be in accordance with at least one parameter set that may correspond, at least in part, to at least one query issued by the at least one process via the mechanism. Many modifications are possible without departing from this embodiment.
    Type: Grant
    Filed: September 29, 2016
    Date of Patent: February 8, 2022
    Assignee: Intel Corporation
    Inventors: William R. Magro, Todd M. Rimmer, Robert J. Woodruff, Mark S. Hefty, Sayantan Sur
  • Patent number: 11150967
    Abstract: Methods, software, and systems for improved data transfer operations using overlapped rendezvous memory registration. Techniques are disclosed for transferring data between a first process operating as a sender and a second process operating as a receiver. The sender sends a PUT request message to the receiver including payload data stored in a send buffer and first and second match indicia. The first match indicia is used to determine whether the PUT request is expected or unexpected. If the PUT request is unexpected, an RMA GET operation is performed using the second match indicia to pull data from the send buffer and write the data to a memory region in the user space of the process associated with the receiver. If the PUT request message is expected, the data payload of the PUT request is written to a receive buffer on the receiver determined using the first match indicia.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: October 19, 2021
    Assignee: Intel Corporation
    Inventors: Sayantan Sur, Keith Underwood, Ravindra Babu Ganapathi, Andrew Friedley
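The expected/unexpected split in this rendezvous scheme can be sketched at the receiver: an expected PUT lands eagerly in the posted buffer matched by the first indicia, while an unexpected PUT is resolved later by pulling the sender's buffer with an RMA GET keyed by the second indicia. All names below (`RdvReceiver`, the dict-based "RMA") are invented for illustration:

```python
class RdvReceiver:
    """Sketch of the receiver side of an overlapped rendezvous protocol."""

    def __init__(self):
        self.posted = {}    # first match indicia -> posted receive buffer
        self.memory = {}    # user-space region filled by RMA GET pulls

    def post_recv(self, match1):
        self.posted[match1] = []

    def on_put(self, match1, match2, payload, sender_buffers):
        if match1 in self.posted:               # expected: eager delivery
            self.posted[match1].extend(payload)
            return "expected"
        # Unexpected: pull the data with an RMA GET using the second
        # match indicia (modeled here as a dict lookup on the sender).
        self.memory[match2] = list(sender_buffers[match2])
        return "unexpected"
```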
  • Publication number: 20210271536
    Abstract: Algorithms for optimizing small message collectives with hardware supported triggered operations and associated methods, apparatus, and systems. The algorithms are implemented in a distributed compute environment comprising a plurality of ranks including a root, a plurality of intermediate nodes, and a plurality of leaf nodes, where each of the plurality of ranks comprising a compute platform having a communication interface including embedded logic for implementing the algorithms. Collectives are employed to transfer data between parent ranks and child ranks. In connection with the collectives, control messages are sent from children of a collective to the parent of the collective informing the parent that the children of the collective have free buffers ready to receive data. The parent employs a counter to determine that a control message has been received from each of its children indicating each child has a free buffer prior to sending data to the children in the collective.
    Type: Application
    Filed: December 23, 2020
    Publication date: September 2, 2021
    Inventors: Maria Garzaran, Nusrat Islam, Gengbin Zheng, Sayantan Sur
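The flow-control step described above, a parent counting buffer-ready control messages from its children and releasing the collective's data only once every child has reported a free buffer, can be sketched directly. The `Parent` class is an invented illustration of that counter:

```python
class Parent:
    """Sketch: a parent rank that sends collective data to its children
    only after each child signals a free receive buffer."""

    def __init__(self, children):
        self.children = set(children)
        self.ready = 0              # counter of control messages received

    def on_control(self, child, data):
        self.ready += 1             # child reports a free buffer
        if self.ready == len(self.children):
            # All children ready: safe to send without overrunning buffers.
            return [(c, data) for c in sorted(self.children)]
        return []                   # still waiting on some children
```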
  • Patent number: 10963183
    Abstract: Technologies for fine-grained completion tracking of memory buffer accesses include a compute device. The compute device is to establish multiple counter pairs for a memory buffer. Each counter pair includes a locally managed offset and a completion counter. The compute device is also to receive a request from a remote compute device to access the memory buffer, assign one of the counter pairs to the request, advance the locally managed offset of the assigned counter pair by the amount of data to be read or written, and advance the completion counter of the assigned counter pair as the data is read from or written to the memory buffer. Other embodiments are also described and claimed.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: March 30, 2021
    Assignee: Intel Corporation
    Inventors: James Dinan, Keith D. Underwood, Sayantan Sur, Charles A. Giefer, Mario Flajslik
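The counter pairs in this abstract work as a reservation/completion scheme: the locally managed offset advances by the request size when an access begins, and the completion counter catches up as bytes actually land; the two being equal means that stream of accesses is complete. A minimal sketch with invented names:

```python
class BufferTracker:
    """Sketch: one (offset, completion) counter pair per access stream
    for fine-grained completion tracking of a memory buffer."""

    def __init__(self, pairs=2):
        self.offset = [0] * pairs   # locally managed offsets (reserved bytes)
        self.done = [0] * pairs     # completion counters (landed bytes)
        self.next_pair = 0

    def begin(self, nbytes):
        p = self.next_pair          # assign a counter pair to the request
        self.next_pair = (p + 1) % len(self.offset)
        self.offset[p] += nbytes    # advance offset by data to be transferred
        return p

    def complete(self, pair, nbytes):
        self.done[pair] += nbytes   # advance as data is read or written

    def quiescent(self, pair):
        return self.done[pair] == self.offset[pair]
```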
  • Patent number: 10958589
    Abstract: Technologies for offloaded management of communication are disclosed. In order to manage communication with information that may be available to applications in a compute device, the compute device may offload communication management to a host fabric interface using a credit management system. A credit limit is established, and each message to be sent is added to a queue with a corresponding number of credits required to send the message. The host fabric interface of the compute device may send out messages as credits become available and decrease the number of available credits based on the number of credits required to send a particular message. When an acknowledgement of receipt of a message is received, the number of credits required to send the corresponding message may be added back to an available credit pool.
    Type: Grant
    Filed: March 29, 2017
    Date of Patent: March 23, 2021
    Assignee: Intel Corporation
    Inventors: James Dinan, Sayantan Sur, Mario Flajslik, Keith D. Underwood
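The credit system above, each queued message carries a credit cost, the interface sends only while credits remain, and acknowledgements return credits to the pool, can be sketched as follows. `CreditSender` and its fields are illustrative, not the patent's interface:

```python
from collections import deque

class CreditSender:
    """Sketch: credit-managed message sending offloaded to an interface."""

    def __init__(self, credit_limit):
        self.credits = credit_limit   # available credit pool
        self.queue = deque()          # (message, credits required to send it)
        self.wire = []                # messages actually transmitted

    def submit(self, msg, cost):
        self.queue.append((msg, cost))
        self._drain()

    def on_ack(self, cost):
        self.credits += cost          # receipt acknowledged: credits returned
        self._drain()

    def _drain(self):
        # Send in order while the head message's cost fits the pool.
        while self.queue and self.queue[0][1] <= self.credits:
            msg, cost = self.queue.popleft()
            self.credits -= cost
            self.wire.append(msg)
```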
  • Publication number: 20210042254
    Abstract: Methods and apparatus for an accelerator controller hub (ACH). The ACH may be a stand-alone component or integrated on-die or on-package in an accelerator such as a GPU. The ACH may include a host device link (HDL) interface, one or more Peripheral Component Interconnect Express (PCIe) interfaces, one or more high performance accelerator link (HPAL) interfaces, and a router, operatively coupled to each of the HDL interface, the one or more PCIe interfaces, and the one or more HPAL interfaces. The HDL interface is configured to be coupled to a host CPU via an HDL link and the one or more HPAL interfaces are configured to be coupled to one or more HPALs that are used to access high performance accelerator fabrics (HPAFs) such as NVLink fabrics and CCIX (Cache Coherent Interconnect for Accelerators) fabrics. Platforms including ACHs or accelerators with integrated ACHs support RDMA transfers using RDMA semantics to enable transfers between accelerator memory on initiators and targets without CPU involvement.
    Type: Application
    Filed: October 28, 2020
    Publication date: February 11, 2021
    Inventors: Pratik Marolia, Andrew Herdrich, Rajesh Sankaran, Rahul Pal, David Puffer, Sayantan Sur, Ajaya Durg