Patents by Inventor Sujoy Sen

Sujoy Sen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190354406
    Abstract: Technologies for remote direct memory access (RDMA) queue pair quality of service (QoS) management are disclosed. In the illustrative embodiment, several queue pairs associated with a virtual machine on a compute sled may be created in a network interface controller of the compute sled. A QoS parameter such as a class of service identifier or a weighting may be assigned to each queue pair such that each queue pair has a different available bandwidth. The compute sled may also predict future RDMA queue pair bandwidth usage and adjust RDMA queue pair bandwidth allocation based on the prediction.
    Type: Application
    Filed: July 29, 2019
    Publication date: November 21, 2019
    Inventors: Mrittika Ganguli, Neerav Parikh, Robert Sharp, Sujoy Sen
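    Illustrative sketch (Python, not from the patent): one way per-queue-pair QoS weights could translate into bandwidth shares. The queue-pair names, the 4:2:1 weighting, and the proportional-share rule are assumptions for illustration only.
      # Split available link bandwidth across RDMA queue pairs in proportion
      # to an assigned QoS weight (higher weight = larger share).
      def allocate_bandwidth(link_gbps, qp_weights):
          total = sum(qp_weights.values())
          return {qp: round(link_gbps * w / total, 2) for qp, w in qp_weights.items()}

      # Three queue pairs belonging to one virtual machine, weighted 4:2:1.
      weights = {"qp0": 4, "qp1": 2, "qp2": 1}
      print(allocate_bandwidth(25.0, weights))
      # {'qp0': 14.29, 'qp1': 7.14, 'qp2': 3.57}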
  • Publication number: 20190278676
    Abstract: Technologies for fast distributed storage recovery include a distributed storage system that includes multiple controller nodes and multiple target nodes. Each controller node is coupled to a corresponding target node via a storage fabric. Each target node stores replica data. The system identifies a failed node and a corresponding node that was coupled to the failed node. If the failed node is a controller node, the corresponding node is a target node. If the failed node is a target node, the corresponding node is a controller node. The system instantiates a replacement node, adds the replacement node to the system, and couples the replacement node to the corresponding node. The system may direct a backup target node to copy replica data to the replacement target node via the storage fabric. Other embodiments are described and claimed.
    Type: Application
    Filed: May 29, 2019
    Publication date: September 12, 2019
    Inventors: Yi Zou, Arun Raghunath, Tushar Gohad, Anjaneya Reddy Chagam Reddy, Sujoy Sen
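    Illustrative sketch (Python, not from the patent): the controller/target recovery flow the abstract describes, with invented node names and a toy in-memory cluster model.
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Node:
          name: str
          kind: str                 # "controller" or "target"
          peer: Optional[str]       # node coupled to this one over the storage fabric

      def recover(cluster, failed, backup_target=None):
          """Instantiate a replacement for a failed node and re-couple it to the surviving peer."""
          peer = cluster[failed].peer
          replacement = Node(name=failed + "-replacement", kind=cluster[failed].kind, peer=peer)
          cluster[replacement.name] = replacement
          cluster[peer].peer = replacement.name
          del cluster[failed]
          # When a target node is replaced, a backup target copies its replica
          # data to the replacement over the storage fabric.
          if replacement.kind == "target" and backup_target:
              print(backup_target, "copies replica data to", replacement.name)
          return replacement

      cluster = {"ctrl-0": Node("ctrl-0", "controller", "tgt-0"),
                 "tgt-0": Node("tgt-0", "target", "ctrl-0"),
                 "tgt-1": Node("tgt-1", "target", None)}
      recover(cluster, "tgt-0", backup_target="tgt-1")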
  • Publication number: 20190253518
    Abstract: Technologies for providing resource health based node composition and management include a compute device having circuitry configured to receive status data from each of multiple resources in a system. The status data is indicative of an ability of the corresponding resource to be utilized in the execution of a workload. The circuitry is also configured to determine, as a function of the received status data, a responsive action to be performed to manage execution of the workload. Further, the circuitry is configured to perform the responsive action to manage execution of the workload.
    Type: Application
    Filed: April 26, 2019
    Publication date: August 15, 2019
    Inventors: Murugasamy K. Nachimuthu, Sujoy Sen
  • Patent number: 10291739
    Abstract: In accordance with embodiments of the present disclosure, an information handling system may include a processor and a memory communicatively coupled to the processor, the memory for storing a portion of a cache. The memory may be configured to receive a request to write data to the portion of the cache, write the data to the portion of the cache, and update a map corresponding to the cache and stored within the memory.
    Type: Grant
    Filed: November 19, 2015
    Date of Patent: May 14, 2019
    Assignee: Dell Products L.P.
    Inventors: Scott David Peterson, Sujoy Sen
  • Publication number: 20190108095
    Abstract: To reduce the cost of ensuring the integrity of data stored in distributed data storage systems, a storage-side system provides data integrity services without the involvement of the host-side data storage system. Processes for storage-side data integrity include maintaining a block ownership map and performing data integrity checking and repair functions in storage target subsystems. The storage target subsystems are configured to efficiently manage data stored remotely using a storage fabric protocol such as NVMe-oF. The storage target subsystems can be implemented in a disaggregated storage computing system on behalf of a host-side distributed data storage system, such as a software-defined storage (SDS) system.
    Type: Application
    Filed: December 7, 2018
    Publication date: April 11, 2019
    Inventors: Yi Zou, Arun Raghunath, Anjaneya R. Chagam Reddy, Sujoy Sen, Tushar Sudhakar Gohad
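    Illustrative sketch (Python, not from the patent): a target-side integrity check driven by a block ownership map, with CRC32 standing in for whatever checksum a real subsystem would keep; all names and data are invented.
      import zlib

      ownership_map = {0: "target-a", 1: "target-a", 2: "target-b"}   # block -> owning subsystem
      stored_blocks = {0: b"alpha", 1: b"beta", 2: b"gamma"}
      stored_checksums = {0: zlib.crc32(b"alpha"), 1: zlib.crc32(b"beta"), 2: zlib.crc32(b"XXXX")}

      def scrub(subsystem):
          """Verify only the blocks this target subsystem owns; report blocks needing repair."""
          damaged = []
          for block, owner in ownership_map.items():
              if owner != subsystem:
                  continue
              if zlib.crc32(stored_blocks[block]) != stored_checksums[block]:
                  damaged.append(block)
          return damaged

      print(scrub("target-a"))   # [] -- both owned blocks verify
      print(scrub("target-b"))   # [2] -- checksum mismatch, repair from a replica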
  • Publication number: 20190102299
    Abstract: A method and apparatus for performing a data transfer, which include selecting a data transfer operation mode, based on telemetry data, from a first operation mode where a first type of data is transferred from a memory of a computing system to one or more shared storage devices, and a second operation mode where a second type of data is transferred from the memory to the one or more shared storage devices, the first type of data being associated with a first range of address space of the one or more shared storage devices, the second type of data being associated with a second range of address space of the one or more shared storage devices different from the first range of address space. Furthermore, a data transfer from the memory to the one or more shared storage devices in the selected data transfer operation mode may be included.
    Type: Application
    Filed: October 4, 2017
    Publication date: April 4, 2019
    Inventors: Francesc Guim Bernat, Kshitij Doshi, Sujoy Sen
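    Illustrative sketch (Python, not from the patent): selecting between two transfer modes from telemetry and directing data into the address range tied to the selected mode. The utilization threshold and address ranges are assumptions.
      RANGES = {"first": (0x0000, 0x8000), "second": (0x8000, 0x10000)}

      def select_mode(telemetry):
          # Assumed policy: prefer the first mode unless the shared device is busy.
          return "first" if telemetry.get("device_utilization", 0.0) < 0.8 else "second"

      def transfer(payload, offset, telemetry):
          mode = select_mode(telemetry)
          base, limit = RANGES[mode]
          address = base + offset
          assert address < limit, "offset exceeds the address range for this mode"
          print(f"{mode} mode: {len(payload)} bytes -> shared storage address {address:#x}")

      transfer(b"checkpoint", 0x40, {"device_utilization": 0.35})
      transfer(b"checkpoint", 0x40, {"device_utilization": 0.95})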
  • Publication number: 20190065083
    Abstract: Technologies for providing efficient access to pooled accelerator devices include an accelerator sled. The accelerator sled includes an accelerator device and a controller connected to the accelerator device. The controller is to provide, to a compute sled, accelerator abstraction data. The accelerator abstraction data represents the accelerator device as one or more logical devices, each logical device having one or more memory regions accessible by the compute sled, and defines an access mode usable to access each corresponding memory region. The controller is further to receive, from the compute sled, a request to perform an operation on an identified memory region of the accelerator device with a corresponding access mode. Additionally, the controller is to convert the request from a first format to a second format that is different from the first format and is usable by the accelerator device to perform the operation.
    Type: Application
    Filed: December 29, 2017
    Publication date: February 28, 2019
    Inventors: Sujoy Sen, Susanne M. Balle, Narayan Ranganathan, Evan Custodio, Paul H. Dormitzer, Francesc Guim Bernat
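    Illustrative sketch (Python, not from the patent): a controller advertising an accelerator as a logical device with memory regions and access modes, and converting a sled-facing request into an assumed device-native command tuple.
      # Advertise one accelerator as a logical device with two memory regions,
      # each with its own access mode.
      abstraction = {
          "logical-dev-0": {
              "regions": {"input":  {"size": 4096, "access": "mmio"},
                          "output": {"size": 4096, "access": "dma"}}}}

      def convert(request):
          """Translate a sled-facing request into an assumed device-native command tuple."""
          region = abstraction[request["device"]]["regions"][request["region"]]
          if request["access"] != region["access"]:
              raise PermissionError("access mode not allowed for this region")
          # Assumed native format: (opcode, region, offset, length).
          return (request["op"], request["region"], request["offset"], request["length"])

      print(convert({"device": "logical-dev-0", "region": "input", "access": "mmio",
                     "op": "write", "offset": 0, "length": 512}))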
  • Publication number: 20190068444
    Abstract: Technologies for providing efficient transfer of results from remote accelerator devices include a compute sled. The compute sled is to send a request to utilize an accelerator device on an accelerator sled. The request includes a data object to be processed by the accelerator device to increase the speed of execution of a workload associated with the data object. The compute sled is also to receive a modification map from the accelerator sled indicative of a modification to the data object. Further, the compute sled is to determine the modification to the data object based on the modification map and apply the modification to the data object in a memory device of the compute sled.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Joe Grecco, Sujoy Sen, Francesc Guim Bernat, Susanne M. Balle, Evan Custodio, Paul Dormitzer, Henry Mitchel
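    Illustrative sketch (Python, not from the patent): applying a modification map, here modeled as a block-index-to-bytes mapping, to the compute sled's local copy of a data object so only changed blocks need to cross the network. The block size and map layout are assumptions.
      BLOCK = 4  # tiny block size to keep the example readable

      def apply_modifications(local, mod_map):
          """mod_map gives block index -> new block contents for modified blocks only."""
          for index, data in mod_map.items():
              local[index * BLOCK:(index + 1) * BLOCK] = data
          return local

      obj = bytearray(b"aaaabbbbccccdddd")          # object in the compute sled's memory
      mod_map = {1: b"BBBB", 3: b"DDDD"}            # accelerator reports two modified blocks
      print(apply_modifications(obj, mod_map))      # bytearray(b'aaaaBBBBccccDDDD')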
  • Publication number: 20190068696
    Abstract: Technologies for composing a managed node based on telemetry data include communication circuitry and a compute device. The compute device is to receive resource-level telemetry data for each resource of a plurality of resources, rack-level telemetry data from each rack of a plurality of racks, and a managed node composition request, which identifies at least one metric to be achieved by a managed node. In response to a receipt of the managed node composition request, the compute device is further to determine a present utilization of each resource of the plurality of resources and a present performance level of each rack of the plurality of racks, and determine a set of resources from the plurality of resources that satisfies the managed node composition request based on the resource-level and rack-level telemetry data.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Sujoy Sen, Mohan J. Kumar
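    Illustrative sketch (Python, not from the patent): filtering resources by resource-level and rack-level telemetry to satisfy a composition request. The metric names and thresholds are invented.
      resources = [
          {"id": "cpu-r1-s3", "utilization": 0.20, "rack_perf": 0.9},
          {"id": "cpu-r1-s4", "utilization": 0.85, "rack_perf": 0.9},
          {"id": "cpu-r2-s1", "utilization": 0.30, "rack_perf": 0.5},
      ]

      def compose(request):
          """Return resource ids with enough headroom on racks meeting the requested metric."""
          return [r["id"] for r in resources
                  if r["utilization"] <= request["max_utilization"]
                  and r["rack_perf"] >= request["min_rack_performance"]]

      print(compose({"max_utilization": 0.5, "min_rack_performance": 0.8}))  # ['cpu-r1-s3']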
  • Publication number: 20190042091
    Abstract: Technologies for providing efficient distributed data storage in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to receive, through a network and with the network interface controller, a data access request from a compute device. The data access request includes a data payload indicative of an object to be stored. The circuitry is also to map the object to a set of multiple data storage sleds for distributed storage of the object. Additionally, the circuitry is to send a write request with the object and an object identifier to the mapped data storage sleds to store the object in one or more data storage devices located on each data storage sled and concurrently send metadata without the object to one or more other compute sleds associated with the mapped data storage sleds. Other embodiments are also described and claimed.
    Type: Application
    Filed: March 15, 2018
    Publication date: February 7, 2019
    Inventors: Arun Raghunath, Anjaneya Reddy Chagam Reddy, Sujoy Sen, Yi Zou
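    Illustrative sketch (Python, not from the patent): hashing an object identifier to a set of data storage sleds, then fanning out the write and sending metadata-only messages to other compute sleds. The sled names and replica count are assumptions.
      import hashlib

      STORAGE_SLEDS = ["sled-0", "sled-1", "sled-2", "sled-3"]
      REPLICAS = 2

      def map_object(object_id):
          """Deterministically choose REPLICAS sleds for an object id."""
          start = int(hashlib.sha256(object_id.encode()).hexdigest(), 16) % len(STORAGE_SLEDS)
          return [STORAGE_SLEDS[(start + i) % len(STORAGE_SLEDS)] for i in range(REPLICAS)]

      def store(object_id, payload, metadata_peers):
          sleds = map_object(object_id)
          for sled in sleds:
              print(f"write {len(payload)} bytes of {object_id} -> {sled}")
          for peer in metadata_peers:                      # metadata only, no payload
              print(f"metadata for {object_id} (sleds={sleds}) -> {peer}")

      store("volume7/block42", b"\x00" * 4096, ["compute-sled-b"])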
  • Publication number: 20190042089
    Abstract: Examples include techniques for determining a storage policy for storing data in a computing system having one or more storage nodes, each storage node including one or more storage devices. One technique includes getting rating information from a storage device of a storage node; assigning the storage device to a storage pool based at least in part on the rating information; and automatically determining a storage policy for the computing system based at least in part on the assigned storage pool and the rating information.
    Type: Application
    Filed: March 2, 2018
    Publication date: February 7, 2019
    Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Murugasamy K. Nachimuthu, Gamil Cain
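    Illustrative sketch (Python, not from the patent): assigning devices to storage pools from rating information and deriving a policy from the resulting pools. The rating fields, pool names, and policy rule are invented.
      def assign_pool(rating):
          """Place a device in a pool based on its rating information."""
          if rating["media"] == "nvme" and rating["endurance_dwpd"] >= 3:
              return "performance"
          return "capacity"

      def storage_policy(devices):
          pools = {}
          for dev in devices:
              pools.setdefault(assign_pool(dev), []).append(dev["id"])
          # Assumed policy rule: hot data on the performance pool, 3x replication elsewhere.
          return {"hot_tier": pools.get("performance", []),
                  "cold_tier": pools.get("capacity", []),
                  "cold_replication": 3}

      devices = [{"id": "nvme0", "media": "nvme", "endurance_dwpd": 5},
                 {"id": "hdd0", "media": "hdd", "endurance_dwpd": 0}]
      print(storage_policy(devices))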
  • Publication number: 20190042090
    Abstract: Technologies for separating control plane management from data plane management for distributed storage in a disaggregated architecture include a compute sled. The compute sled includes a network interface controller and circuitry to receive, through a network and with the network interface controller, a data access request from a compute device. The data access request includes a data payload indicative of an object to be stored. The circuitry is also to map the object to a set of multiple data storage sleds for distributed storage of the object. Additionally, the circuitry is to send, through the network and with a local data bus protocol mapped onto a network protocol, a write request to the mapped data storage sleds to store the object in one or more data storage devices located on each data storage sled. Other embodiments are also described and claimed.
    Type: Application
    Filed: March 15, 2018
    Publication date: February 7, 2019
    Inventors: Arun Raghunath, Anjaneya Reddy Chagam Reddy, Sujoy Sen, Yi Zou
  • Publication number: 20190042133
    Abstract: Technologies for providing adaptive data access request routing in a distributed storage system include a compute device. The compute device includes a redirector device to receive, from an initiator device, a request that identifies a data set to be accessed. The redirector device is also to determine, from a set of routing rules indicative of target devices associated with data sets, whether the identified data set is available in a storage server associated with the present redirector device, forward, in response to a determination that the identified data set is not available in a storage server associated with the present redirector device, the request to a target device associated with the data set in the routing rules, and send, to the initiator device, an identification of the target device associated with the data set in the routing rules. Other embodiments are also described and claimed.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Scott Peterson, Sujoy Sen
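    Illustrative sketch (Python, not from the patent): a redirector consulting routing rules, forwarding a request when the data set is not local, and returning a target hint to the initiator. Names are invented.
      routing_rules = {"dataset-a": "storage-server-1",   # data set -> target device
                       "dataset-b": "storage-server-2"}
      LOCAL_TARGET = "storage-server-1"

      def handle(initiator, data_set):
          target = routing_rules[data_set]
          if target == LOCAL_TARGET:
              print(f"serving {data_set} locally for {initiator}")
          else:
              print(f"forwarding {data_set} request to {target}")
          # Either way, let the initiator learn the right target for next time.
          print(f"hint to {initiator}: {data_set} -> {target}")

      handle("host-17", "dataset-b")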
  • Publication number: 20190042126
    Abstract: Technologies for storage discovery and reallocation include a compute device. The compute device is to receive, from a data storage sled, storage device data from a storage device located on the data storage sled. The storage device data includes storage device self-test data that defines a result of a self-test performed by the storage device. The compute device is also to determine, in response to the storage device self-test data, whether the storage device fails to satisfy a performance threshold. Further, the compute device is to generate, in response to a determination that the storage device fails to satisfy the performance threshold, an adjustment message for the storage device. The adjustment message instructs the storage device to adjust a performance parameter of the storage device. The compute device is also to send the adjustment message to the storage device.
    Type: Application
    Filed: December 29, 2017
    Publication date: February 7, 2019
    Inventors: Sujoy Sen, Gamil Cain, Teddy Greer, Anjaneya Reddy Chagam Reddy
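    Illustrative sketch (Python, not from the patent): turning storage device self-test data into an adjustment message when a performance threshold is missed. The self-test fields, threshold, and adjusted parameter are assumptions.
      PERF_THRESHOLD_IOPS = 50_000

      def evaluate(self_test):
          """Return an adjustment message if the device misses the performance threshold."""
          if self_test["random_read_iops"] >= PERF_THRESHOLD_IOPS:
              return None
          return {"device": self_test["device"],
                  "action": "adjust_performance_parameter",
                  "parameter": "over_provisioning_percent",
                  "value": 28}

      msg = evaluate({"device": "nvme-sled3-bay7", "random_read_iops": 41_200})
      print(msg or "device within threshold")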
  • Publication number: 20190042144
    Abstract: Examples include methods for obtaining one or more location hints applicable to a range of logical block addresses of a received input/output (I/O) request for a storage subsystem coupled with a host system over a non-volatile memory express over fabric (NVMe-oF) interconnect. The following steps are performed for each logical block address in the I/O request. A most specific location hint of the one or more location hints that matches that logical block address is applied to identify a destination in the storage subsystem for the I/O request. When the most specific location hint is a consistent hash hint, the consistent hash hint is processed. The I/O request is forwarded to the destination and a completion status for the I/O request is returned. When a location hint log page has changed, the location hint log page is processed. When any location hint refers to NVMe-oF qualified names not included in the immediately preceding query by the discovery service, the immediately preceding query is processed again.
    Type: Application
    Filed: August 22, 2018
    Publication date: February 7, 2019
    Inventors: Scott D. Peterson, Sujoy Sen, Anjaneya R. Chagam Reddy, Murugasamy K. Nachimuthu, Mohan J. Kumar
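    Illustrative sketch (Python, not from the patent): picking the most specific location hint for a logical block address, interpreting "most specific" as the matching hint that covers the fewest blocks; the hint structure and names are assumptions.
      hints = [
          {"start": 0,    "length": 1_000_000, "target": "nqn.general"},   # broad hint
          {"start": 4096, "length": 1024,      "target": "nqn.specific"},  # narrower hint
      ]

      def most_specific_hint(lba):
          """Pick the matching hint covering the fewest blocks (the most specific one)."""
          matches = [h for h in hints if h["start"] <= lba < h["start"] + h["length"]]
          return min(matches, key=lambda h: h["length"]) if matches else None

      for lba in (100, 4200):
          hint = most_specific_hint(lba)
          print(f"LBA {lba}: forward to {hint['target']}")
      # LBA 100 -> nqn.general, LBA 4200 -> nqn.specific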
  • Publication number: 20190004894
    Abstract: Apparatuses, systems and methods are disclosed herein that generally relate to distributed network storage and filesystems, such as Ceph, Hadoop®, or other big data storage environments utilizing resources and/or storage that may be remotely located across a communication link such as a network. More particularly, disclosed are techniques for one or more machines or devices to scrub data on remote resources and/or storage without requiring all or substantially all of the remote data to be read across the communication link in order to scrub it. Some disclosed embodiments discuss performing validation relatively local to the storage being scrubbed, and some embodiments discuss sending only selected results of that relatively local scrubbing to the one or more scrubbing machines over the communication link.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Anjaneya R. Chagam Reddy, Mohan J. Kumar, Sujoy Sen, Tushar Gohad
  • Patent number: 10089228
    Abstract: A cache storage method includes providing a storage cache cluster, comprising a plurality of cache storage elements, for caching I/O operations from a plurality of virtual machines associated with a corresponding plurality of virtual hard disks mapped to a logical storage area network volume or LUN. Responsive to a cache flush signal, flush write back operations are performed to flush modified cache blocks to achieve or preserve coherency. The flush write back operations may include accessing current time data indicative of a current time, determining a current time window in accordance with the current time, determining a duration of the current time window, and identifying a current cache storage element corresponding to the current time window. For a duration of the current time window, only those write back blocks stored in the current cache storage element are flushed.
    Type: Grant
    Filed: May 9, 2016
    Date of Patent: October 2, 2018
    Assignee: Dell Products L.P.
    Inventors: Scott David Peterson, Sujoy Sen
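    Illustrative sketch (Python, not from the patent): mapping the current time window onto one cache storage element and flushing only that element's write-back blocks. The window length and element names are invented.
      import time

      CACHE_ELEMENTS = ["cache-ssd-0", "cache-ssd-1", "cache-ssd-2"]
      WINDOW_SECONDS = 10

      def current_element(now=None):
          """Map the current time window onto one cache storage element."""
          now = time.time() if now is None else now
          window_index = int(now // WINDOW_SECONDS)
          return CACHE_ELEMENTS[window_index % len(CACHE_ELEMENTS)]

      def flush(dirty_blocks, now=None):
          element = current_element(now)
          blocks = dirty_blocks.get(element, [])
          print(f"window for {element}: flushing {len(blocks)} write-back blocks")

      flush({"cache-ssd-0": [1, 5, 9], "cache-ssd-1": [2]}, now=7)    # window 0 -> cache-ssd-0
      flush({"cache-ssd-0": [1, 5, 9], "cache-ssd-1": [2]}, now=17)   # window 1 -> cache-ssd-1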
  • Publication number: 20180219797
    Abstract: Technologies for pooling accelerators over fabric are disclosed. In the illustrative embodiment, an application may access an accelerator device over an application programming interface (API) and the API can access an accelerator device that is either local or a remote accelerator device that is located on a remote accelerator sled over a network fabric. The API may employ a send queue and a receive queue to send and receive command capsules to and from the accelerator sled.
    Type: Application
    Filed: June 12, 2017
    Publication date: August 2, 2018
    Inventors: Sujoy Sen, Mohan J. Kumar, Donald L. Faw, Susanne M. Balle, Narayan Ranganathan
  • Publication number: 20180198709
    Abstract: In general, in one aspect, the disclosure describes a method that includes receiving multiple ingress Internet Protocol packets, each of the multiple ingress Internet Protocol packets having an Internet Protocol header and a Transmission Control Protocol segment having a Transmission Control Protocol header and a Transmission Control Protocol payload, where the multiple packets belong to the same Transmission Control Protocol/Internet Protocol flow. The method also includes preparing an Internet Protocol packet having a single Internet Protocol header and a single Transmission Control Protocol segment having a single Transmission Control Protocol header and a single payload formed by a combination of the Transmission Control Protocol segment payloads of the multiple Internet Protocol packets. The method further includes generating a signal that causes receive processing of the Internet Protocol packet.
    Type: Application
    Filed: December 29, 2017
    Publication date: July 12, 2018
    Inventors: Srihari Makineni, Ravi Iyer, Dave Minturn, Sujoy Sen, Donald Newell, Li Zhao
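    Illustrative sketch (Python, not from the patent): merging in-order packets of one TCP/IP flow into a single packet with one header and a combined payload, the receive-side coalescing idea the abstract describes in simplified form.
      from dataclasses import dataclass

      @dataclass
      class Packet:
          src: str
          dst: str
          sport: int
          dport: int
          seq: int
          payload: bytes

      def coalesce(packets):
          """Merge in-order packets of one flow into a single larger packet."""
          flow = (packets[0].src, packets[0].dst, packets[0].sport, packets[0].dport)
          merged = packets[0]
          for p in packets[1:]:
              assert (p.src, p.dst, p.sport, p.dport) == flow, "different flow"
              assert p.seq == merged.seq + len(merged.payload), "out of order"
              merged = Packet(*flow, merged.seq, merged.payload + p.payload)
          return merged

      pkts = [Packet("10.0.0.1", "10.0.0.2", 5000, 80, 100, b"abcd"),
              Packet("10.0.0.1", "10.0.0.2", 5000, 80, 104, b"efgh")]
      print(coalesce(pkts))   # one packet, seq=100, payload b'abcdefgh'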
  • Patent number: 10015117
    Abstract: In one embodiment, a method is provided. The method of this embodiment provides storing a packet header at a set of at least one page of memory allocated to storing packet headers, and storing the packet header and a packet payload at a location not in the set of at least one page of memory allocated to storing packet headers.
    Type: Grant
    Filed: July 17, 2015
    Date of Patent: July 3, 2018
    Assignee: Intel Corporation
    Inventors: Linden Cornett, David B. Minturn, Sujoy Sen, Hemal V. Shah, Anshuman Thakur, Gary Tsao, Anil Vasudevan
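    Illustrative sketch (Python, not from the patent): the header-splitting idea in the claim, with a small pool of pages reserved for headers and separate buffers holding header plus payload. The page size and buffer handling are assumptions.
      PAGE_SIZE = 64          # tiny pages to keep the example readable
      header_pages = [bytearray(PAGE_SIZE)]      # set of pages reserved for headers
      packet_buffers = []                        # locations outside the header pages
      header_cursor = 0

      def receive(header, payload):
          """Copy the header into the header-page set and the whole packet elsewhere."""
          global header_cursor
          if header_cursor + len(header) > PAGE_SIZE:      # allocate another header page
              header_pages.append(bytearray(PAGE_SIZE))
              header_cursor = 0
          page = header_pages[-1]
          page[header_cursor:header_cursor + len(header)] = header
          header_cursor += len(header)
          packet_buffers.append(header + payload)          # header and payload together

      receive(b"HDR0" * 5, b"payload-bytes")
      print(len(header_pages), len(packet_buffers))        # 1 1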