Patents by Inventor Sujoy Sen

Sujoy Sen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10853296
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
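    The path-selection behavior this abstract describes can be pictured with a short sketch. The following is a conceptual illustration only, not the patented implementation; the KernelAvailability fields, the 0.8 utilization threshold, and the choice of the first advertised physical path are assumptions made for the example.

    ```python
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class KernelAvailability:
        """Availability data for one remote accelerator kernel (hypothetical fields)."""
        kernel_id: str
        physical_paths: List[str]   # e.g. ["pcie0", "fabric1"]
        utilization: float          # 0.0 (idle) .. 1.0 (saturated)

    def should_establish_path(target: KernelAvailability, max_utilization: float = 0.8) -> bool:
        # Establish a logical path only if the remote kernel is reachable and not saturated.
        return bool(target.physical_paths) and target.utilization < max_utilization

    def establish_logical_path(local_kernel: str, target: KernelAvailability) -> Optional[Tuple[str, str, str]]:
        # Bind the logical communication path to the first advertised physical path.
        if not should_establish_path(target):
            return None
        return (local_kernel, target.kernel_id, target.physical_paths[0])

    if __name__ == "__main__":
        availability = [
            KernelAvailability("fpga1/kernelA", ["pcie0", "fabric1"], 0.35),
            KernelAvailability("fpga2/kernelB", ["fabric1"], 0.95),
        ]
        for entry in availability:
            print(establish_logical_path("fpga0/kernel0", entry))
    ```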
  • Publication number: 20200371689
    Abstract: Technologies for quality of service (QoS) management include a computing device having a physical storage volume and multiple processor cores. A management thread reads I/O counters that are each associated with a logical volume and a processor core. The logical volumes are backed by the physical storage volume. The management thread configures stop bits as a function of the I/O counters and multiple QoS parameters. Each stop bit is associated with a logical volume and a processor core. The QoS parameters include minimum guaranteed bandwidth and optional maximum allowed bandwidth for each logical volume. A worker thread reads the stop bit associated with a logical volume and a processor core, accesses the logical volume if the stop bit is not set, and updates the I/O counter associated with the logical volume and the processor core in response to accessing the logical volume. Other embodiments are described and claimed.
    Type: Application
    Filed: May 20, 2019
    Publication date: November 26, 2020
    Inventors: Sujoy Sen, Siddhartha Kumar Panda, Jayaraj Puthenpurackal Rajappan, Kunal Sablok, Ramkumar Venkatachalam
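    The stop-bit mechanism summarized above lends itself to a brief sketch. This is a simplified, single-threaded model rather than the claimed design; the class and function names, the fixed core count, and the byte-based counters are assumptions, and only the optional maximum-bandwidth throttle is modeled.

    ```python
    NUM_CORES = 4

    class LogicalVolumeQoS:
        def __init__(self, name, min_bw, max_bw=None):
            self.name = name
            self.min_bw = min_bw            # minimum guaranteed bytes/interval (not enforced in this sketch)
            self.max_bw = max_bw            # optional maximum allowed bytes/interval
            self.io_counters = [0] * NUM_CORES    # bytes transferred per core this interval
            self.stop_bits = [False] * NUM_CORES  # one stop bit per (volume, core)

        def total_io(self):
            return sum(self.io_counters)

    def management_pass(volumes):
        """Set a volume's stop bits when it exceeds its optional maximum bandwidth."""
        for vol in volumes:
            over_limit = vol.max_bw is not None and vol.total_io() >= vol.max_bw
            vol.stop_bits = [over_limit] * NUM_CORES

    def worker_io(vol, core, nbytes):
        """Access the volume only if the stop bit for this core is clear."""
        if vol.stop_bits[core]:
            return False                      # throttled for this interval
        vol.io_counters[core] += nbytes       # update the per-core I/O counter
        return True

    if __name__ == "__main__":
        vols = [LogicalVolumeQoS("vol0", min_bw=1000, max_bw=4000),
                LogicalVolumeQoS("vol1", min_bw=1000)]
        for _ in range(10):
            worker_io(vols[0], core=0, nbytes=1024)
            management_pass(vols)
        print(vols[0].total_io(), vols[0].stop_bits[0])
    ```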
  • Publication number: 20200356294
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Application
    Filed: July 30, 2020
    Publication date: November 12, 2020
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
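    The logical-to-physical translation described in the abstract can be sketched as a simple range lookup. The address ranges and device names below are invented for illustration and are not taken from the patent.

    ```python
    ADDRESS_MAP = [
        # (logical_base, length, device, physical_base)
        (0x0000_0000, 0x1000_0000, "dimm0", 0x8000_0000),
        (0x1000_0000, 0x1000_0000, "dimm1", 0x0000_0000),
    ]

    def route_access(logical_addr):
        """Translate a logical address and return (memory device, physical address)."""
        for logical_base, length, device, physical_base in ADDRESS_MAP:
            if logical_base <= logical_addr < logical_base + length:
                offset = logical_addr - logical_base
                return device, physical_base + offset
        raise ValueError(f"unmapped logical address {logical_addr:#x}")

    if __name__ == "__main__":
        print(route_access(0x0000_1000))   # -> ('dimm0', 0x80001000)
        print(route_access(0x1000_2000))   # -> ('dimm1', 0x2000)
    ```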
  • Publication number: 20200341810
    Abstract: Technologies for providing an accelerator device discovery service include a device having circuitry configured to obtain, from a discovery service, availability data indicative of a set of accelerator devices available to assist in the execution of a workload. The circuitry is also configured to select, as a function of the availability data, one or more target accelerator devices to assist in the execution of the workload, and execute the workload with the one or more target accelerator devices.
    Type: Application
    Filed: April 24, 2019
    Publication date: October 29, 2020
    Inventors: Narayan Ranganathan, Sujoy Sen, Joseph Grecco, Slawomir Putyrski
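    One plausible reading of the selection step is shown below; the availability-data fields and the least-loaded ordering are assumptions for the example, not the claimed method.

    ```python
    def select_accelerators(availability, required_kernel, count=1):
        """Pick up to `count` devices that expose the required kernel, least-loaded first."""
        candidates = [d for d in availability if required_kernel in d["kernels"]]
        candidates.sort(key=lambda d: d["load"])
        return [d["id"] for d in candidates[:count]]

    if __name__ == "__main__":
        availability = [
            {"id": "sled1/fpga0", "kernels": ["crypto", "compress"], "load": 0.7},
            {"id": "sled2/fpga1", "kernels": ["compress"], "load": 0.2},
            {"id": "sled3/gpu0", "kernels": ["inference"], "load": 0.1},
        ]
        print(select_accelerators(availability, "compress", count=1))  # ['sled2/fpga1']
    ```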
  • Publication number: 20200328973
    Abstract: In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each of the multiple ingress Internet Protocol packets having an Internet Protocol header and a Transmission Control Protocol segment having a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to the same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment having a single Transmission Control Protocol header and a single payload formed by a combination of the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the Internet Protocol packet.
    Type: Application
    Filed: May 10, 2020
    Publication date: October 15, 2020
    Applicant: Intel Corporation
    Inventors: Srihari Makineni, Ravi Iyer, Dave Minturn, Sujoy Sen, Donald Newell, Li Zhao
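    The receive-coalescing idea in this abstract is broadly analogous to large/generic receive offload. The sketch below is a toy model using dictionaries in place of real IP and TCP headers; the flow-matching fields and the tcp_len bookkeeping are assumptions for illustration.

    ```python
    def same_flow(a, b):
        keys = ("src_ip", "dst_ip", "src_port", "dst_port")
        return all(a[k] == b[k] for k in keys)

    def coalesce(packets):
        """Merge in-order packets of one flow into a single packet with one header set."""
        merged = dict(packets[0])            # keep the first packet's headers
        for pkt in packets[1:]:
            if not same_flow(merged, pkt):
                raise ValueError("packets belong to different flows")
            merged["payload"] += pkt["payload"]
        merged["tcp_len"] = len(merged["payload"])
        return merged

    if __name__ == "__main__":
        flow = dict(src_ip="10.0.0.1", dst_ip="10.0.0.2", src_port=5000, dst_port=80)
        packets = [dict(flow, seq=1, payload=b"GET /"),
                   dict(flow, seq=6, payload=b" HTTP/1.1\r\n")]
        big = coalesce(packets)
        print(big["payload"], big["tcp_len"])
    ```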
  • Publication number: 20200319915
    Abstract: A method is described. The method includes performing the following with a storage-end transaction agent within a storage sled of a rack-mounted computing system: receiving a request to perform storage operations with one or more storage devices of the storage sled, the request specifying an all-or-nothing semantic for the storage operations; recognizing that all of the storage operations have successfully completed; and, after all of the storage operations have successfully completed, reporting to a CPU-side transaction agent that sent the request that all of the storage operations have successfully completed.
    Type: Application
    Filed: June 22, 2020
    Publication date: October 8, 2020
    Inventors: Arun Raghunath, Yi Zou, Tushar Sudhakar Gohad, Anjaneya R. Chagam Reddy, Sujoy Sen
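    A minimal sketch of the all-or-nothing semantic follows; the operation format and the returned status dictionary are invented for the example and are not the claimed interface.

    ```python
    def storage_end_agent(operations):
        """Execute all requested operations; report success only if every one completes."""
        completed = []
        for op in operations:
            try:
                op()                 # e.g. a write to one storage device on the sled
                completed.append(op)
            except Exception:
                # One operation failed: the all-or-nothing request fails as a whole.
                return {"status": "failed", "completed": len(completed), "total": len(operations)}
        # Only now is completion reported back to the CPU-side transaction agent.
        return {"status": "success", "completed": len(completed), "total": len(operations)}

    if __name__ == "__main__":
        log = []
        ops = [lambda: log.append("write lba 0"), lambda: log.append("write lba 8")]
        print(storage_end_agent(ops))   # {'status': 'success', 'completed': 2, 'total': 2}
    ```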
  • Patent number: 10768842
    Abstract: Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: September 8, 2020
    Assignee: Intel Corporation
    Inventors: Henry Mitchel, Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer
  • Patent number: 10712963
    Abstract: Technologies for encrypted data access by field-programmable gate array (FPGA) user kernels include a computing device having an FPGA and an external memory device accessible by the FPGA. The FPGA includes a secure key store, a micro-encryption engine, and multiple slots for user kernels that are each identifiable with an index. A user kernel is programmed at an index and a symmetric encryption key is provisioned to the secure key store at the index. The micro-encryption engine may read encrypted data from the external memory device, decrypt the encrypted data with the key associated with the index of the user kernel, and forward plain text data to the user kernel. The micro-encryption engine may also receive plain text data from the user kernel, encrypt the plain text data with the key, and write the encrypted data to the external memory device. Other embodiments are described and claimed.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: July 14, 2020
    Assignee: Intel Corporation
    Inventors: Rahul Khanna, Susanne M. Balle, Francesc Guim Bernat, Sujoy Sen, Paul Dormitzer
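    The per-slot key store can be pictured with a short sketch. XOR stands in for the real encryption engine purely to keep the example self-contained; the class name and method signatures are assumptions, not the patented hardware interface.

    ```python
    class MicroEncryptionEngine:
        def __init__(self, num_slots):
            self.key_store = [None] * num_slots       # secure key store, one key per slot index
            self.external_memory = {}                 # address -> encrypted bytes

        def provision_key(self, slot, key: bytes):
            self.key_store[slot] = key

        def _xform(self, slot, data: bytes) -> bytes:
            # Toy symmetric transform keyed by the slot's key (stand-in for real encryption).
            key = self.key_store[slot]
            return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

        def kernel_write(self, slot, addr, plaintext: bytes):
            # Encrypt with the slot's key before writing to external memory.
            self.external_memory[addr] = self._xform(slot, plaintext)

        def kernel_read(self, slot, addr) -> bytes:
            # Decrypt with the slot's key and forward plain text to the user kernel.
            return self._xform(slot, self.external_memory[addr])

    if __name__ == "__main__":
        engine = MicroEncryptionEngine(num_slots=4)
        engine.provision_key(slot=2, key=b"\x5a\xa5")
        engine.kernel_write(slot=2, addr=0x1000, plaintext=b"sensitive data")
        print(engine.external_memory[0x1000])          # ciphertext as stored externally
        print(engine.kernel_read(slot=2, addr=0x1000)) # b'sensitive data'
    ```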
  • Publication number: 20200218684
    Abstract: Technologies for accessing pooled accelerator resources over a network fabric are disclosed. In disclosed embodiments, an application hosted by a computing platform accesses remote accelerator resources over a network fabric using protocol multipathing mechanisms. A communication session is established with the remote accelerator resources. The communication session comprises at least two connections, including a first connection having or utilizing a first transport layer and a second connection having or utilizing a second transport layer that is different than the first transport layer. Other embodiments may be disclosed and/or claimed.
    Type: Application
    Filed: January 8, 2019
    Publication date: July 9, 2020
    Inventors: Sujoy Sen, Narayan Ranganathan
  • Publication number: 20200210114
    Abstract: Apparatuses for computing are disclosed herein. An apparatus may include a set of data reduction modules to perform data reduction operations on sets of (key, value) data pairs to reduce the number of values associated with a shared key, wherein the (key, value) data pairs are stored in a plurality of queues located in a plurality of solid state drives remote from the apparatus. The apparatus may further include a memory access module, communicably coupled to the set of data reduction modules, to directly transfer individual ones of the sets of queued (key, value) data pairs from the plurality of remote solid state drives through remote random access of the solid state drives, via a network, without using intermediate staging storage. Other embodiments may be disclosed or claimed.
    Type: Application
    Filed: August 16, 2017
    Publication date: July 2, 2020
    Inventors: Xiao Hu, Huan Zhou, Sujoy Sen, Anjaneya R. Chagam Reddy, Mohan J. Kumar, Chong Han
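    A compact sketch of the reduction step follows; in-memory lists stand in for the remote per-drive queues, and the additive reduce function is an arbitrary choice for illustration.

    ```python
    def reduce_pairs(remote_queues, reduce_fn=lambda a, b: a + b):
        """Combine values that share a key across all queues into one value per key."""
        reduced = {}
        for queue in remote_queues:          # each queue would live on a remote SSD
            for key, value in queue:
                reduced[key] = reduce_fn(reduced[key], value) if key in reduced else value
        return reduced

    if __name__ == "__main__":
        queues = [
            [("apple", 3), ("pear", 1)],     # queue on SSD 0
            [("apple", 2), ("plum", 5)],     # queue on SSD 1
        ]
        print(reduce_pairs(queues))          # {'apple': 5, 'pear': 1, 'plum': 5}
    ```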
  • Patent number: 10664396
    Abstract: A method and apparatus for performing a data transfer include a selection of a data transfer operation mode, based on telemetry data, from a first operation mode where a first type of data is transferred from a memory of a computing system to one or more shared storage devices, and a second operation mode where a second type of data is transferred from the memory to the one or more shared storage devices, the first type of data being associated with a first range of address space of the one or more shared storage devices, the second type of data being associated with a second range of address space of the one or more shared storage devices different from the first range of address space. Furthermore, a data transfer from the memory to the one or more shared storage devices in the selected data transfer operation mode may be included.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: May 26, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Kshitij Doshi, Sujoy Sen
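    One way to picture the mode selection is shown below; the latency-based telemetry check, the threshold, and the address ranges are all invented for the example.

    ```python
    FIRST_RANGE = (0x0000_0000, 0x4000_0000)    # address space for the first data type
    SECOND_RANGE = (0x4000_0000, 0x8000_0000)   # address space for the second data type

    def select_mode(telemetry):
        """Pick the operation mode from telemetry (here: observed write latency)."""
        return "first" if telemetry["write_latency_us"] < 100 else "second"

    def transfer(data, telemetry):
        mode = select_mode(telemetry)
        base, _ = FIRST_RANGE if mode == "first" else SECOND_RANGE
        # A real implementation would issue the transfer to the shared storage here.
        return {"mode": mode, "target_base": hex(base), "bytes": len(data)}

    if __name__ == "__main__":
        print(transfer(b"x" * 4096, {"write_latency_us": 40}))
        print(transfer(b"x" * 4096, {"write_latency_us": 250}))
    ```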
  • Patent number: 10652147
    Abstract: In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each of the multiple ingress Internet Protocol packets having an Internet Protocol header and a Transmission Control Protocol segment having a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to the same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment having a single Transmission Control Protocol header and a single payload formed by a combination of the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the Internet Protocol packet.
    Type: Grant
    Filed: December 29, 2017
    Date of Patent: May 12, 2020
    Assignee: Intel Corporation
    Inventors: Srihari Makineni, Ravi Iyer, Dave Minturn, Sujoy Sen, Donald Newell, Li Zhao
  • Publication number: 20200136996
    Abstract: Examples described herein relate to a network interface that includes an initiator device to determine a storage node associated with an access command based on an association between an address in the command and a storage node. The network interface can include a redirector to update the association based on messages from one or more remote storage nodes. The association can be based on a look-up table associating a namespace identifier with a prefix string and an object size. In some examples, the access command is compatible with NVMe over Fabrics. The initiator device can determine a remote direct memory access (RDMA) queue-pair (QP) lookup for use in performing the access command.
    Type: Application
    Filed: December 27, 2019
    Publication date: April 30, 2020
    Inventors: Yadong Li, Scott D. Peterson, Sujoy Sen, David B. Minturn
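    The prefix-based lookup suggested by the abstract can be sketched as a small table scan; the table contents and the fallback behavior are assumptions for illustration.

    ```python
    LOOKUP_TABLE = [
        # (namespace_id, prefix, object_size, storage_node)
        (1, "bucket-a/", 4096, "node-17"),
        (1, "bucket-b/", 4096, "node-08"),
    ]

    def resolve_storage_node(namespace_id, object_name):
        """Find the storage node whose prefix matches the object being accessed."""
        for ns, prefix, _size, node in LOOKUP_TABLE:
            if ns == namespace_id and object_name.startswith(prefix):
                return node
        return None   # unknown: fall back to a default target or ask the redirector

    if __name__ == "__main__":
        print(resolve_storage_node(1, "bucket-a/object-42"))   # node-17
        print(resolve_storage_node(1, "bucket-c/object-1"))    # None
    ```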
  • Publication number: 20200104275
    Abstract: Some examples provide a manner for a memory transaction requester to configure a target to recognize a memory address as a non-local or non-shared address. An intermediary between the requester and the target configures a control plane layer of the target to recognize that a memory transaction involving the memory address is to be performed using a direct memory access operation. The intermediary is connected to the requester as a local device or process. After configuration, a memory transaction provided to the target with the configured memory address causes the target to invoke the associated direct memory access operation to retrieve content associated with the memory address or write content using a direct memory access operation.
    Type: Application
    Filed: December 2, 2019
    Publication date: April 2, 2020
    Inventors: Sujoy Sen, Susanne M. Balle, Narayan Ranganathan, Bradley A. Burres
  • Patent number: 10592162
    Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
    Type: Grant
    Filed: August 22, 2018
    Date of Patent: March 17, 2020
    Assignee: Intel Corporation
    Inventors: Scott D. Peterson, Sujoy Sen, Anjaneya R. Chagam Reddy, Murugasamy K. Nachimuthu, Mohan J. Kumar
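    The "most specific hint" rule can be illustrated with a short sketch in which the matching hint with the narrowest LBA range wins; the hint fields and destinations are invented for the example.

    ```python
    def most_specific_hint(hints, lba):
        """Return the matching location hint with the narrowest LBA range, or None."""
        matches = [h for h in hints if h["start_lba"] <= lba < h["start_lba"] + h["length"]]
        return min(matches, key=lambda h: h["length"]) if matches else None

    if __name__ == "__main__":
        hints = [
            {"start_lba": 0,    "length": 1_000_000, "destination": "subsys-default"},
            {"start_lba": 4096, "length": 8_192,     "destination": "subsys-fast"},
        ]
        print(most_specific_hint(hints, 5000)["destination"])     # subsys-fast (narrower range)
        print(most_specific_hint(hints, 900_000)["destination"])  # subsys-default
    ```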
  • Publication number: 20200073849
    Abstract: Technologies for network interface controllers (NICs) include a computing device having a NIC coupled to a root FPGA via an I/O link. The root FPGA is further coupled to multiple worker FPGAs by a serial link with each worker FPGA. The NIC may receive a remote direct memory access (RDMA) message from a remote host and send the RDMA message to the root FPGA via the I/O link. The root FPGA determines a target FPGA based on a memory address of the RDMA message. Each FPGA is associated with a part of a unified address space. If the target FPGA is a worker FPGA, the root FPGA sends the RDMA message to the worker FPGA via the corresponding serial link, and the worker FPGA processes the RDMA message. If the root FPGA is the target, the root FPGA may process the RDMA message. Other embodiments are described and claimed.
    Type: Application
    Filed: May 3, 2019
    Publication date: March 5, 2020
    Inventors: Paul H. Dormitzer, Susanne M. Balle, Sujoy Sen, Evan Custodio
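    The address-based routing can be pictured as a slice lookup over the unified address space; the slice size and FPGA names below are assumptions, not values from the patent.

    ```python
    SLICE = 0x1000_0000
    FPGAS = ["root", "worker1", "worker2", "worker3"]   # slice i of the address space belongs to FPGAS[i]

    def route_rdma(address):
        """Return which FPGA should process an RDMA message targeting this address."""
        index = address // SLICE
        if index >= len(FPGAS):
            raise ValueError(f"address {address:#x} outside the unified address space")
        return FPGAS[index]

    if __name__ == "__main__":
        print(route_rdma(0x0000_2000))   # root handles it directly
        print(route_rdma(0x2000_4000))   # forwarded to worker2 over its serial link
    ```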
  • Publication number: 20200073464
    Abstract: Technologies for providing adaptive power management in an accelerator sled include an accelerator sled having circuitry to determine, based on (i) a total power budget for the accelerator sled, (ii) service level agreement (SLA) data indicative of a target performance of a kernel, and (iii) profile data indicative of a performance of the kernel as a function of a power utilization of the kernel, a power utilization limit for the kernel to be executed by an accelerator device on the accelerator sled. Additionally, the circuitry is to allocate the determined power utilization limit to the kernel and execute the kernel under the allocated power utilization limit.
    Type: Application
    Filed: April 25, 2019
    Publication date: March 5, 2020
    Inventors: Francesc Guim Bernat, Susanne M. Balle, Sujoy Sen, Evan Custodio, Paul H. Dormitzer
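    A rough sketch of the power-limit calculation follows; the profile points, the relative-performance units, and the capping against the remaining sled budget are assumptions made for the example.

    ```python
    def power_limit_for_kernel(profile, sla_target, remaining_budget):
        """profile: list of (watts, performance) points sorted by watts."""
        for watts, perf in profile:
            if perf >= sla_target:
                # Smallest power step that meets the SLA, capped by the remaining budget.
                return min(watts, remaining_budget)
        # SLA cannot be met even at the profile's maximum power; cap at the budget.
        return min(profile[-1][0], remaining_budget)

    if __name__ == "__main__":
        profile = [(10, 0.4), (20, 0.7), (30, 0.9), (40, 1.0)]   # watts -> relative performance
        print(power_limit_for_kernel(profile, sla_target=0.85, remaining_budget=35))  # 30
        print(power_limit_for_kernel(profile, sla_target=0.95, remaining_budget=25))  # 25
    ```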
  • Publication number: 20200004712
    Abstract: Technologies for providing I/O channel abstraction for accelerator device kernels include an accelerator device comprising circuitry to obtain availability data indicative of an availability of one or more accelerator device kernels in a system, including one or more physical communication paths to each accelerator device kernel. The circuitry is also configured to determine whether to establish a logical communication path between a kernel of the present accelerator device and another accelerator device kernel and establish, in response to a determination to establish the logical communication path as a function of the obtained availability data, the logical communication path between the kernel of the present accelerator device and the other accelerator device kernel.
    Type: Application
    Filed: December 28, 2018
    Publication date: January 2, 2020
    Inventors: Susanne M. Balle, Evan Custodio, Francesc Guim Bernat, Sujoy Sen, Slawomir Putyrski, Paul Dormitzer, Joseph Grecco
  • Patent number: 10503587
    Abstract: Apparatuses, systems, and methods are disclosed herein that generally relate to distributed network storage and filesystems, such as Ceph, Hadoop®, or other big data storage environments utilizing resources and/or storage that may be remotely located across a communication link such as a network. More particularly, disclosed are techniques for one or more machines or devices to scrub data on remote resources and/or storage without requiring all or substantially all of the remote data to be read across the communication link in order to scrub it. Some disclosed embodiments discuss keeping validation relatively local to the storage being scrubbed, and some embodiments discuss providing only selected results of that relatively local scrubbing to the one or more machines over the communication link.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: December 10, 2019
    Assignee: Intel Corporation
    Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Tushar Gohad
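    The idea of scrubbing close to the storage can be pictured with a short sketch in which each node verifies its own objects and only a list of mismatches crosses the link; the object and checksum layout is invented for illustration.

    ```python
    import zlib

    def local_scrub(objects):
        """Run on the storage node: return only the names of objects that fail verification."""
        return [name for name, (data, stored_crc) in objects.items()
                if zlib.crc32(data) != stored_crc]

    if __name__ == "__main__":
        node_objects = {
            "obj-1": (b"hello", zlib.crc32(b"hello")),
            "obj-2": (b"corrupted", zlib.crc32(b"original")),   # simulated bit rot
        }
        # Only this short list travels over the communication link, not the object data.
        print(local_scrub(node_objects))   # ['obj-2']
    ```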
  • Publication number: 20190372914
    Abstract: Technologies for network interface controllers (NICs) include a compute sled and an accelerator sled in communication over a network. The accelerator sled configures a virtual switch endpoint associated with a remote direct memory access (RDMA) server instance that is associated with a field-programmable gate array (FPGA) of the accelerator sled. The accelerator sled updates local software defined networking (SDN) tables with a virtual tunnel associated with the virtual switch endpoint and a remote compute sled. A virtual switch of the accelerator sled switches virtual tunnel traffic from the remote compute sled to the RDMA server instance, which transfers data to or from the FPGA. The compute sled also updates a local SDN table with the virtual tunnel, and a virtual switch of the compute sled switches virtual tunnel traffic to or from the accelerator sled. Other embodiments are described and claimed.
    Type: Application
    Filed: August 14, 2019
    Publication date: December 5, 2019
    Inventors: Mrittika Ganguli, Sugesh Chandran, Parthasarathy Sarangam, Sujoy Sen, Susanne M. Balle, Rajesh Sankaran