Patents by Inventor Pradeep Thomas

Pradeep Thomas has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12231353
    Abstract: A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity such that any of the servers may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets for a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
    Type: Grant
    Filed: January 28, 2020
    Date of Patent: February 18, 2025
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Deepak Goel, Narendra Jayawant Gathoo, Philip A Thomas, Srihari Raju Vegesna, Pradeep Sindhu, Wael Noureddine, Robert William Bowdidge, Ayaskant Pani, Gopesh Goyal
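    A minimal Python sketch of the endpoint-controlled request/grant exchange and per-packet spraying described in the abstract above; the class names, path labels, and fixed grant size are illustrative assumptions, not the patented protocol.
    ```python
    # Toy model of request/grant flow control plus spraying one flow's packets
    # across parallel fabric paths; names and sizes are assumptions for the sketch.
    import itertools

    class Destination:
        """Grants bandwidth on request, then reorders sprayed packets on arrival."""
        def __init__(self):
            self.buffer = {}

        def grant(self, requested_bytes, available_bytes=64 * 1024):
            # Endpoint-controlled fairness: never grant more than it can absorb.
            return min(requested_bytes, available_bytes)

        def receive(self, seq, payload):
            self.buffer[seq] = payload

        def reassemble(self):
            # Optional reordering for in-order delivery to the application.
            return [self.buffer[s] for s in sorted(self.buffer)]

    def spray(packets, paths):
        """Spray individual packets of a single flow across all parallel paths."""
        path_cycle = itertools.cycle(paths)
        return [(next(path_cycle), seq, pkt) for seq, pkt in enumerate(packets)]

    if __name__ == "__main__":
        dst = Destination()
        packets = [f"pkt-{i}".encode() for i in range(8)]
        granted = dst.grant(requested_bytes=sum(len(p) for p in packets))
        assert granted > 0                      # the source sprays only after a grant
        for path, seq, pkt in spray(packets, paths=["path-A", "path-B", "path-C"]):
            dst.receive(seq, pkt)               # packets may arrive out of order
        print(dst.reassemble())
    ```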
  • Patent number: 12204755
    Abstract: An elastic request handling technique limits a number of threads used to service input/output (I/O) requests of a low-latency I/O workload received by a file system server executing on a cluster having a plurality of nodes deployed in a virtualization environment. The limited number of threads (server threads) is constantly maintained as “active” and running on virtual central processing units (vCPUs) of a node. The file system server spawns and organizes the active server threads as one or more pools of threads. The server prioritizes the low-latency I/O requests by loading them onto the active threads and allowing the requests to run on those active threads to completion, thereby obviating overhead associated with lock contention and vCPU migration after a context switch (i.e., to avoid rescheduling a thread on a different vCPU after execution of the thread was suspended).
    Type: Grant
    Filed: June 29, 2022
    Date of Patent: January 21, 2025
    Assignee: Nutanix, Inc.
    Inventors: Daniel Chilton, Gaurav Gangalwar, Manoj Premanand Naik, Pradeep Thomas, Will Strickland
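    A minimal sketch, assuming a fixed pool size and Python's standard threading primitives, of the core idea above: keep a bounded set of server threads active and let low-latency I/O requests run on them to completion.
    ```python
    # Minimal sketch: cap the number of "active" server threads and let
    # low-latency requests run on them to completion. The pool size and queue
    # are assumptions, not the patented implementation.
    import queue
    import threading

    NUM_ACTIVE_THREADS = 4              # assumed cap; the technique keeps this bounded
    requests = queue.Queue()

    def worker():
        while True:
            io_request = requests.get()
            if io_request is None:      # shutdown sentinel
                break
            io_request()                # run to completion on this active thread,
            requests.task_done()        # avoiding rescheduling after a context switch

    pool = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_ACTIVE_THREADS)]
    for t in pool:
        t.start()

    # Usage: prioritize low-latency I/O by loading it directly onto the active pool.
    for i in range(10):
        requests.put(lambda i=i: print(f"served low-latency request {i}"))
    requests.join()
    for _ in pool:
        requests.put(None)
    ```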
  • Patent number: 12182264
    Abstract: Examples of file analytics systems are described that may obtain metadata and events data from a virtualized file server. The file analytics systems may detect one or more events from the events data matching criteria indicating malicious activity. The file analytics systems may validate the detection of malicious activity. The validation may be performed by comparing the file type, such as the MIME type, of sample files before and after the suspected malicious activity. The systems may recover a share of the distributed file server including the one or more affected files by replacing the one or more affected files with stored versions of the one or more affected files from a snapshot of the share taken prior to the detected malicious activity.
    Type: Grant
    Filed: March 11, 2022
    Date of Patent: December 31, 2024
    Assignee: Nutanix, Inc.
    Inventors: Pankaj Kumar Sinha, Pradeep Thomas
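    A hedged sketch of the validate-then-recover flow described above: compare the MIME type of sample files before and after the suspected activity, then restore affected files from an earlier snapshot. The helper names and snapshot layout are hypothetical.
    ```python
    # Hedged sketch of validation and recovery: compare MIME types of sample files
    # before and after suspicious events, then restore affected files from a
    # pre-event snapshot. Helper names and the snapshot layout are hypothetical.
    import mimetypes
    import shutil
    from pathlib import Path

    def mime_changed(sample_before: Path, sample_after: Path) -> bool:
        """Treat a changed (or now-unrecognized) MIME type as evidence of tampering."""
        before, _ = mimetypes.guess_type(str(sample_before))
        after, _ = mimetypes.guess_type(str(sample_after))
        return before != after

    def recover_share(affected: list, snapshot_root: Path, share_root: Path) -> None:
        """Replace each affected file with its stored version from the snapshot."""
        for live_path in affected:
            snap_path = snapshot_root / live_path.relative_to(share_root)
            if snap_path.exists():
                shutil.copy2(snap_path, live_path)   # roll the file back

    # Example: a .docx renamed to .locked after the event suggests encryption.
    print(mime_changed(Path("report.docx"), Path("report.docx.locked")))   # -> True
    ```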
  • Patent number: 12164383
    Abstract: An example file server manager updates a selected share of a destination distributed file server based on a snapshot of at least a portion of a selected share of a source distributed file server. The selected share of the destination distributed file server is updated while the source distributed file server serves client requests for storage items of the selected share of the source distributed file server. The file server manager receives a request to failover from the source distributed file server to the destination distributed file server and configures the destination distributed file server to service read and write requests for storage items of the selected share of the destination distributed file server. The file server manager further redirects client requests for storage items of the selected share of the source distributed file server to the destination distributed file server by updating active directory information.
    Type: Grant
    Filed: January 21, 2022
    Date of Patent: December 10, 2024
    Assignee: Nutanix, Inc.
    Inventors: Shyamsunder Prayagchand Rathi, Hemanth Thummala, Lakshmana Reddy, Pradeep Thomas, Kalpesh Ashok Bafna, Manoj Premanand Naik
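    A minimal sketch of the replicate-then-failover sequence in the abstract above, using a plain dictionary to stand in for Active Directory; the classes and calls are illustrative assumptions, not a real file server API.
    ```python
    # Assumed sketch of the failover sequence: replicate a share from a snapshot
    # while the source keeps serving clients, then flip clients to the destination
    # by updating the directory entry. Classes and calls are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class FileServer:
        name: str
        shares: dict = field(default_factory=dict)
        read_write: bool = False

    @dataclass
    class FileServerManager:
        directory: dict = field(default_factory=dict)   # stands in for Active Directory

        def replicate(self, src: "FileServer", dst: "FileServer", share: str) -> None:
            snapshot = dict(src.shares.get(share, {}))  # point-in-time copy
            dst.shares[share] = snapshot                # source keeps serving clients

        def failover(self, dst: "FileServer", share: str) -> None:
            dst.read_write = True                       # destination now services I/O
            self.directory[share] = dst.name            # redirect clients via directory

    if __name__ == "__main__":
        mgr = FileServerManager()
        src = FileServer("site-a", shares={"projects": {"a.txt": b"v1"}}, read_write=True)
        dst = FileServer("site-b")
        mgr.replicate(src, dst, "projects")
        mgr.failover(dst, "projects")
        print(mgr.directory["projects"])                # -> site-b
    ```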
  • Publication number: 20240193128
    Abstract: A technique extends a file system infrastructure of a storage system to provide a custom namespace within a pathname of a logical construct configured to invoke semantically interpretative context as a command embedded in a data access protocol request issued by a client and directed to the logical construct served by the storage system, without alteration to the data access protocol. The extension includes a “plug-in” engine of a data access protocol server executing on a network protocol stack of the storage system. The engine operates to extract a pathname from the request to determine whether the custom namespace incorporating the command is present and directed to the logical construct. If so, the engine semantically interprets the command within a context of the custom namespace to essentially convert the command to one or more predefined operations directed to the logical construct. The storage system then performs the operations and returns the results to the client.
    Type: Application
    Filed: December 7, 2022
    Publication date: June 13, 2024
    Inventors: Manoj Premanand Naik, Pradeep Thomas
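    A small Python sketch of the plug-in idea above: detect a custom namespace embedded in a requested pathname and map the embedded command to predefined operations. The ".cmd" marker and command names are assumptions for illustration.
    ```python
    # Hedged sketch: interpret a command embedded in a pathname's custom namespace
    # and convert it to predefined operations. Marker and commands are assumed.
    CUSTOM_NS = ".cmd"   # assumed marker that introduces the custom namespace

    OPERATIONS = {
        "snapshot": lambda target: f"create snapshot of {target}",
        "restore":  lambda target: f"restore {target} from latest snapshot",
    }

    def handle_request(pathname: str) -> str:
        """Intercept a protocol request's pathname; interpret any embedded command."""
        parts = pathname.strip("/").split("/")
        if CUSTOM_NS in parts:
            i = parts.index(CUSTOM_NS)
            target = "/" + "/".join(parts[:i])          # the logical construct
            command = parts[i + 1] if i + 1 < len(parts) else ""
            if command in OPERATIONS:
                return OPERATIONS[command](target)      # run the predefined operation
        return f"ordinary data access for {pathname}"   # protocol path unchanged

    print(handle_request("/vol1/home/.cmd/snapshot"))
    print(handle_request("/vol1/home/report.txt"))
    ```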
  • Publication number: 20230359359
    Abstract: An elastic request handling technique limits a number of threads used to service input/output (I/O) requests of a low-latency I/O workload received by a file system server executing on a cluster having a plurality of nodes deployed in a virtualization environment. The limited number of threads (server threads) is constantly maintained as “active” and running on virtual central processing units (vCPUs) of a node. The file system server spawns and organizes the active server threads as one or more pools of threads. The server prioritizes the low-latency I/O requests by loading them onto the active threads and allowing the requests to run on those active threads to completion, thereby obviating overhead associated with lock contention and vCPU migration after a context switch (i.e., to avoid rescheduling a thread on a different vCPU after execution of the thread was suspended).
    Type: Application
    Filed: June 29, 2022
    Publication date: November 9, 2023
    Inventors: Daniel Chilton, Gaurav Gangalwar, Manoj Premanand Naik, Pradeep Thomas, Will Strickland
  • Publication number: 20230289443
    Abstract: Examples of file analytics systems are described that may obtain metadata and events data from a virtualized file server. The file analytics systems may detect one or more events from the events data matching criteria indicating malicious activity. The file analytics systems may validate the detection of malicious activity. The validation may be performed by comparing the file type, such as the MIME type, of sample files before and after the suspected malicious activity. The systems may recover a share of the distributed file server including the one or more affected files by replacing the one or more affected files with stored versions of the one or more affected files from a snapshot of the share taken prior to the detected malicious activity.
    Type: Application
    Filed: March 11, 2022
    Publication date: September 14, 2023
    Inventors: Pankaj Kumar Sinha, Pradeep Thomas
  • Publication number: 20230237022
    Abstract: Examples described herein are generally directed towards file share access, and more specifically towards a mechanism to connect file shares at the protocol level in a distributed file server environment. In operation, a first file server virtual machine (FSVM) hosting a first file share may receive a request by a client to access a location in a namespace. The first FSVM may determine the location is at a second file share linked to the first file share. The first FSVM may provide access to the second file share to the client. In some examples, the first file share and the second file share may be linked at the directory level.
    Type: Application
    Filed: January 24, 2022
    Publication date: July 27, 2023
    Applicant: Nutanix, Inc.
    Inventors: Pradeep Thomas, Suhrud Patankar, Srikrishan Malik, Manoj Premanand Naik
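    A minimal sketch of resolving a client's path across shares linked at the directory level, as described above; the link table and share layout are made-up assumptions.
    ```python
    # Sketch: decide which share (and FSVM) should serve a requested location,
    # following directory-level links between shares. Layout is assumed.
    LINKS = {
        # directory inside the first share  ->  (file server VM, linked share)
        "/share1/projects/archive": ("fsvm-2", "/share2"),
    }

    def resolve(path: str):
        """Return which FSVM and share should serve the requested location."""
        for link_dir, (fsvm, target_share) in LINKS.items():
            if path == link_dir or path.startswith(link_dir + "/"):
                remainder = path[len(link_dir):]
                return fsvm, target_share + remainder   # hand off to the linked share
        return "fsvm-1", path                           # served locally by share 1

    print(resolve("/share1/projects/archive/2021/report.pdf"))
    print(resolve("/share1/projects/notes.txt"))
    ```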
  • Publication number: 20230056217
    Abstract: An example file server manager updates a selected share of a destination distributed file server based on a snapshot of at least a portion of a selected share of a source distributed file server. The selected share of the destination distributed file server is updated while the source distributed file server serves client requests for storage items of the selected share of the source distributed file server. The file server manager receives a request to failover from the source distributed file server to the destination distributed file server and configures the destination distributed file server to service read and write requests for storage items of the selected share of the destination distributed file server. The file server manager further redirects client requests for storage items of the selected share of the source distributed file server to the destination distributed file server by updating active directory information.
    Type: Application
    Filed: January 21, 2022
    Publication date: February 23, 2023
    Applicant: Nutanix, Inc.
    Inventors: Shyamsunder Prayagchand Rathi, Hemanth Thummala, Lakshmana Reddy, Pradeep Thomas, Kalpesh Ashok Bafna, Manoj Premanand Naik
  • Patent number: 10346076
    Abstract: According to some embodiments, a backup storage system receives, by a first phase of a data deduplication pipeline, a request from a client for reading or writing a data segment associated with a data stream stored in or to a storage system. In response to the request, the system retrieves, by the first phase, load parameters associated with a second phase in the data deduplication pipeline. For each of the load parameters associated with the second phase, the system determines, by the first phase, whether the load parameter has exceeded a load threshold associated with the second phase. The system throttles, by the first phase, performance of a specific job in the data deduplication pipeline by the second phase in response to a determination that at least one of the load parameters associated with the second phase has exceeded the load threshold associated with the second phase.
    Type: Grant
    Filed: July 3, 2017
    Date of Patent: July 9, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Uday Kiran Jonnala, Yamini Allu, Pradeep Thomas, Abhishek Rajimwale, Balaji Subramanian
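    An illustrative sketch of the cross-phase throttling check described above: the first phase reads the second phase's load parameters and backs off if any has crossed its threshold. The parameter names and thresholds are assumptions.
    ```python
    # Sketch of upstream throttling based on a downstream phase's load parameters;
    # parameter names, values, and thresholds are made up for illustration.
    import time

    # Assumed load parameters published by the downstream (second) phase.
    SECOND_PHASE_LOAD = {"queue_depth": 900, "cpu_pct": 72, "memory_pct": 95}
    THRESHOLDS        = {"queue_depth": 1000, "cpu_pct": 85, "memory_pct": 90}

    def should_throttle(load: dict, thresholds: dict) -> bool:
        """True if any downstream load parameter has crossed its threshold."""
        return any(load[name] > thresholds[name] for name in thresholds)

    def first_phase_handle(request: str) -> None:
        while should_throttle(SECOND_PHASE_LOAD, THRESHOLDS):
            time.sleep(0.01)                        # back off instead of flooding phase two
            SECOND_PHASE_LOAD["memory_pct"] -= 1    # pretend the downstream phase drains load
        print(f"forwarding {request} to the second phase")

    first_phase_handle("segment-write-42")
    ```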
  • Patent number: 10216660
    Abstract: According to some embodiments, a backup storage system receives a plurality of input/output (IO) requests at the storage system. The IO requests include random IO requests and sequential IO requests. The storage system determines whether there is a pending random IO request from the plurality of IO requests. In response to determining that there is a pending random IO request, the storage system determines whether a total latency of the sequential IO requests exceeds a predicted latency of the pending random IO request. The storage system services the pending random IO request in response to determining that the total latency of the sequential IO requests exceeds the predicted latency of the pending random IO request.
    Type: Grant
    Filed: July 13, 2017
    Date of Patent: February 26, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Uday Kiran Jonnala, Pradeep Thomas
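    A small sketch of the scheduling rule in the abstract above: service a pending random I/O once the accumulated latency of queued sequential I/Os exceeds its predicted latency. The latency figures are made-up assumptions.
    ```python
    # Sketch of the scheduling rule: service the pending random I/O once the
    # summed latency of queued sequential I/Os exceeds its predicted latency.
    # The latency numbers below are made up for illustration.
    from __future__ import annotations
    from dataclasses import dataclass

    @dataclass
    class IORequest:
        name: str
        latency_ms: float                  # measured (sequential) or predicted (random)

    def pick_next(sequential_queue: list, pending_random: IORequest | None) -> IORequest:
        if pending_random is not None:
            total_sequential = sum(r.latency_ms for r in sequential_queue)
            if total_sequential > pending_random.latency_ms:
                return pending_random      # the random request has waited long enough
        return sequential_queue[0] if sequential_queue else pending_random

    seq = [IORequest("seq-1", 2.0), IORequest("seq-2", 2.5), IORequest("seq-3", 3.0)]
    rnd = IORequest("rand-7", 6.0)
    print(pick_next(seq, rnd).name)        # 7.5 ms > 6.0 ms -> "rand-7"
    ```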