Patents by Inventor Hema VENKATARAMANI

Hema VENKATARAMANI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11615042
    Abstract: This disclosure relates to high-performance computing, and more particularly to techniques for kernel-assisted device polling of user-space devices. A common kernel-based polling mechanism is provided for concurrently handling both kernel-based polling for kernel-space devices such as network interfaces (e.g., network NICs) and kernel-based polling for user-space devices such as remote direct memory access devices (e.g., RDMA NICs). Embodiments perform kernel-based polling on a first device that has a corresponding device driver in an operating system kernel. Using the same polling mechanism, kernel-based polling is performed on a second device, a user-space device for which polling is configured by creating a second device file descriptor that is not associated with a corresponding device driver in the operating system kernel.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: March 28, 2023
    Assignee: Nutanix, Inc.
    Inventors: Hema Venkataramani, Rohit Jain
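    The listing does not spell out the common polling mechanism described in the abstract above, so the following is only a minimal sketch of the general idea using Linux epoll and a libibverbs completion channel, whose file descriptor lets a user-space RDMA device be waited on by the same kernel facility as an ordinary socket. Function and variable names are illustrative, and setup of the socket, device context, CQ, and completion channel is assumed to happen elsewhere.

    ```c
    /* Minimal sketch (not the patented implementation): one epoll instance
     * waits on a kernel-space device (a socket with an in-kernel driver) and
     * a user-space RDMA device surfaced as a completion-channel fd. */
    #include <sys/epoll.h>
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <unistd.h>

    int poll_both(int sock_fd, struct ibv_comp_channel *chan, struct ibv_cq *cq)
    {
        int epfd = epoll_create1(0);
        if (epfd < 0)
            return -1;

        struct epoll_event ev = { .events = EPOLLIN };

        ev.data.fd = sock_fd;                    /* kernel-space device */
        epoll_ctl(epfd, EPOLL_CTL_ADD, sock_fd, &ev);

        ev.data.fd = chan->fd;                   /* user-space RDMA device as an fd */
        epoll_ctl(epfd, EPOLL_CTL_ADD, chan->fd, &ev);

        ibv_req_notify_cq(cq, 0);                /* arm completion notifications */

        struct epoll_event events[2];
        int n = epoll_wait(epfd, events, 2, -1); /* one kernel-based wait covers both */
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == chan->fd) {
                struct ibv_cq *ev_cq;
                void *ev_ctx;
                if (ibv_get_cq_event(chan, &ev_cq, &ev_ctx) == 0) {
                    ibv_ack_cq_events(ev_cq, 1);
                    ibv_req_notify_cq(ev_cq, 0); /* re-arm before draining */
                    struct ibv_wc wc;
                    while (ibv_poll_cq(ev_cq, 1, &wc) > 0)
                        printf("RDMA completion, status=%d\n", wc.status);
                }
            } else {
                printf("socket %d is readable\n", events[i].data.fd);
            }
        }
        close(epfd);
        return n;
    }
    ```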
  • Publication number: 20220318169
    Abstract: This disclosure relates to high-performance computing, and more particularly to techniques for kernel-assisted device polling of user-space devices. A common kernel-based polling mechanism is provided for concurrently handling both kernel-based polling for kernel-space devices such as network interfaces (e.g., network NICs) and kernel-based polling for user-space devices such as remote direct memory access devices (e.g., RDMA NICs). Embodiments perform kernel-based polling on a first device that has a corresponding device driver in an operating system kernel. Using the same polling mechanism, kernel-based polling is performed on a second device, a user-space device for which polling is configured by creating a second device file descriptor that is not associated with a corresponding device driver in the operating system kernel.
    Type: Application
    Filed: June 30, 2021
    Publication date: October 6, 2022
    Inventors: Hema VENKATARAMANI, Rohit JAIN
  • Patent number: 11429548
    Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: August 30, 2022
    Inventors: Hema Venkataramani, Felipe Franciosi, Gokul Kannan, Sreejith Mohanan, Alok Nemchand Kataria, Raphael Shai Norwitz
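    The abstract above notes that a contiguous unit of memory is registered with the RDMA NIC before the first RDMA I/O call. As a rough illustration of what such a registration step can look like with libibverbs (not the patented implementation), the sketch below allocates a page-aligned buffer and registers it against a protection domain `pd` that is assumed to have been created elsewhere.

    ```c
    /* Sketch: register a contiguous, page-aligned buffer with the RDMA NIC
     * before issuing the first RDMA I/O. `pd` is an already-created
     * protection domain (assumed); the returned mr->lkey / mr->rkey would
     * be carried in later work requests. */
    #include <infiniband/verbs.h>
    #include <stdlib.h>

    struct ibv_mr *register_io_buffer(struct ibv_pd *pd, size_t len, void **out_buf)
    {
        void *buf = NULL;
        if (posix_memalign(&buf, 4096, len) != 0)   /* contiguous virtual buffer */
            return NULL;

        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) {
            free(buf);
            return NULL;
        }
        *out_buf = buf;
        return mr;
    }
    ```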
  • Publication number: 20220206852
    Abstract: Methods, systems, and computer program products for lockless acquisition of memory for RDMA operations. A contiguous physical memory region is allocated. The contiguous physical memory region is divided into a plurality of preregistered chunks that are assigned to one or more process threads that are associated with an RDMA NIC. When responding to a request from a particular one of the one or more process threads, a buffer carved from the preregistered chunk of the contiguous physical memory region is assigned to the requesting process thread. Since the memory is pre-registered, and since the associations are made at the thread level, there is no need for locks when acquiring a buffer. Furthermore, since the memory is pre-registered, the threads do not incur registration latency. The contiguous physical memory region can be a contiguous HugePage region from which a plurality of individually allocatable buffers can be assigned to different threads.
    Type: Application
    Filed: December 31, 2020
    Publication date: June 30, 2022
    Inventors: Hema VENKATARAMANI, Alok Nemchand KATARIA, Rohit JAIN
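    As a loose sketch of the lockless idea summarized above, assuming a Linux HugePage mapping, libibverbs registration, and a simple per-thread bump allocator (all illustrative choices, not the claimed design): the region is registered once up front and split into per-thread chunks, so acquiring a buffer needs neither a lock nor a registration call.

    ```c
    /* Sketch only: one HugePage-backed region is allocated and registered
     * once, then carved into fixed per-thread chunks; each thread hands out
     * buffers from its own chunk with a private bump pointer, so no locks
     * and no per-buffer registration latency. */
    #define _GNU_SOURCE
    #include <infiniband/verbs.h>
    #include <sys/mman.h>
    #include <stddef.h>

    #define CHUNK_SIZE (2UL * 1024 * 1024)   /* one 2 MiB huge page per thread (illustrative) */

    struct thread_chunk {
        char   *base;        /* start of this thread's pre-registered chunk */
        size_t  used;        /* private bump offset, touched only by the owner */
        struct ibv_mr *mr;   /* region-wide registration, shared read-only */
    };

    /* Allocate and register the whole region, then assign one chunk per thread. */
    int setup_chunks(struct ibv_pd *pd, struct thread_chunk *chunks, int nthreads)
    {
        size_t total = (size_t)nthreads * CHUNK_SIZE;
        char *region = mmap(NULL, total, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (region == MAP_FAILED)
            return -1;

        /* Register once up front: threads never pay registration latency later. */
        struct ibv_mr *mr = ibv_reg_mr(pd, region, total, IBV_ACCESS_LOCAL_WRITE);
        if (!mr) {
            munmap(region, total);
            return -1;
        }
        for (int i = 0; i < nthreads; i++)
            chunks[i] = (struct thread_chunk){ region + i * CHUNK_SIZE, 0, mr };
        return 0;
    }

    /* Called only by the owning thread, so no locking is needed. */
    void *acquire_buffer(struct thread_chunk *c, size_t len)
    {
        if (c->used + len > CHUNK_SIZE)
            return NULL;             /* chunk exhausted; real code would recycle buffers */
        void *buf = c->base + c->used;
        c->used += len;
        return buf;
    }
    ```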
  • Publication number: 20220179809
    Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
    Type: Application
    Filed: January 29, 2021
    Publication date: June 9, 2022
    Applicant: Nutanix, Inc.
    Inventors: Hema VENKATARAMANI, Felipe FRANCIOSI, Gokul KANNAN, Sreejith MOHANAN, Alok Nemchand KATARIA, Raphael Shai NORWITZ
  • Publication number: 20220179675
    Abstract: Methods, systems, and computer program products for high-performance cluster computing. Multiple components are operatively interconnected to carry out operations for high-performance RDMA I/O transfers over an RDMA NIC. A virtual machine of a virtualization environment initiates a first I/O call to an HCI storage pool controller using RDMA. Responsive to the first I/O call, a second I/O call is initiated from the HCI storage pool controller to a storage device of an HCI storage pool. The first I/O call to the HCI storage pool controller is implemented through a first virtual function of an RDMA NIC that is exposed in the user space of the virtualization environment. Prior to the first RDMA I/O call, a contiguous unit of memory to use in an RDMA I/O transfer is registered with the RDMA NIC. The contiguous unit of memory comprises memory that is registered using non-RDMA paths such as TCP or iSCSI.
    Type: Application
    Filed: January 29, 2021
    Publication date: June 9, 2022
    Applicant: Nutanix, Inc.
    Inventors: Hema VENKATARAMANI, Felipe FRANCIOSI, Sreejith MOHANAN, Alok Nemchand KATARIA, Umang Sureshkumar PATEL
  • Patent number: 11216420
    Abstract: Systems and methods for iterative, high-performance, low-latency data replication. A method embodiment commences upon identifying one or more replica target nodes to receive replicas of working data. The method then composes at least one replication message. The replication message includes the location or contents of working data as well as a listing of downstream replica target nodes. The replication capacity is measured at the subject node. Based on the measured replication capacity, the subject node sends instructions in the replication message to one or more downstream replica target nodes. Any one or more of the downstream replica target nodes receives the instructions and iterates the steps of measuring its own capacity and determining the instructions, if any, to send to further downstream replica target nodes. Each replica target node replicates the working data. In some cases, the measured replication capacity is enough to perform all replications in parallel.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: January 4, 2022
    Assignee: Nutanix, Inc.
    Inventors: Hema Venkataramani, Peter Scott Wyckoff
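    A hedged sketch of the measure-and-delegate step described above follows. The message layout and the helpers (`measure_replication_capacity`, `replicate_to`, `forward_message`) are hypothetical stubs standing in for whatever the surrounding system provides, not the patented protocol.

    ```c
    /* Sketch: a node measures how many replicas it can push in parallel,
     * replicates to that many targets directly, and delegates the remaining
     * target list to the next downstream node via the replication message,
     * which then repeats the same step. */
    #include <stddef.h>

    struct replication_msg {
        const void  *data;       /* location or contents of the working data */
        size_t       len;
        const char **targets;    /* downstream replica target nodes */
        int          n_targets;
    };

    /* Assumed to exist in the surrounding system (illustrative stubs). */
    int  measure_replication_capacity(void);   /* replicas this node can run in parallel now */
    void replicate_to(const char *node, const void *data, size_t len);
    void forward_message(const char *node, const struct replication_msg *m);

    void replicate(const struct replication_msg *msg)
    {
        int capacity = measure_replication_capacity();
        if (capacity >= msg->n_targets) {
            /* Enough capacity: perform all replications in parallel from here. */
            for (int i = 0; i < msg->n_targets; i++)
                replicate_to(msg->targets[i], msg->data, msg->len);
            return;
        }

        /* Replicate to as many targets as capacity allows... */
        for (int i = 0; i < capacity; i++)
            replicate_to(msg->targets[i], msg->data, msg->len);

        /* ...and hand the rest of the target list to the next node, which
         * iterates the same measure-and-delegate step. */
        struct replication_msg rest = {
            .data = msg->data, .len = msg->len,
            .targets = msg->targets + capacity,
            .n_targets = msg->n_targets - capacity,
        };
        if (rest.n_targets > 0)
            forward_message(rest.targets[0], &rest);
    }
    ```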
  • Patent number: 10824369
    Abstract: Systems and methods for demand-based remote direct memory access buffer management. A method embodiment commences upon initially partitioning a memory pool at a computer that is to receive memory contents from a sender. The memory pool is partitioned into memory areas that comprise a plurality of different-sized buffers that serve as target buffers for one or more direct memory access data transfer operations from the data sources. An initial set of buffer apportionments is associated with each of the one or more data sources, and those initial sets are advertised to the corresponding data sources. Over time, based on messages that have been loaded into the receiver's memory, the payload sizes of the messages are observed. Based on the observed demand for buffers used for the message payloads, the constituency of the advertised buffers can grow or shrink elastically as compared to previous advertisements.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: November 3, 2020
    Assignee: Nutanix, Inc.
    Inventors: Hema Venkataramani, Peter Scott Wyckoff
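    As an illustration only, the sketch below shows one way a receiver could track observed payload sizes and elastically re-derive its advertised buffer mix; the three size classes, the proportional split, and all names are assumptions rather than the patented scheme.

    ```c
    /* Sketch: per-sender demand tracking drives the next buffer advertisement.
     * Payload sizes are binned into classes; a fixed buffer budget is then
     * split across classes in proportion to observed demand, so the mix grows
     * or shrinks relative to the previous advertisement. */
    #include <stddef.h>

    enum size_class { CLASS_SMALL, CLASS_MEDIUM, CLASS_LARGE, N_CLASSES };

    struct sender_stats {
        unsigned long observed[N_CLASSES];   /* payloads seen since the last advertisement */
        unsigned int  advertised[N_CLASSES]; /* buffers currently advertised to this sender */
    };

    static enum size_class classify(size_t payload_len)
    {
        if (payload_len <= 4096)   return CLASS_SMALL;
        if (payload_len <= 65536)  return CLASS_MEDIUM;
        return CLASS_LARGE;
    }

    /* Record the payload size of each message as it lands in receiver memory. */
    void observe_message(struct sender_stats *s, size_t payload_len)
    {
        s->observed[classify(payload_len)]++;
    }

    /* Re-derive the advertisement from observed demand, then reset the window. */
    void recompute_advertisement(struct sender_stats *s, unsigned int budget)
    {
        unsigned long total = 0;
        for (int c = 0; c < N_CLASSES; c++)
            total += s->observed[c];

        for (int c = 0; c < N_CLASSES; c++) {
            unsigned int share = total
                ? (unsigned int)(budget * s->observed[c] / total)
                : budget / N_CLASSES;        /* no data yet: split the budget evenly */
            if (share == 0)
                share = 1;                   /* always keep at least one buffer per class */
            s->advertised[c] = share;
            s->observed[c] = 0;              /* start a new observation window */
        }
    }
    ```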
  • Publication number: 20200042619
    Abstract: Systems and methods for iterative, high-performance, low-latency data replication. A method embodiment commences upon identifying one or more replica target nodes to receive replicas of working data. The method then composes at least one replication message. The replication message includes the location or contents of working data as well as a listing of downstream replica target nodes. The replication capacity is measured at the subject node. Based on the measured replication capacity, the subject node sends instructions in the replication message to one or more downstream replica target nodes. Any one or more of the downstream replica target nodes receives the instructions and iterates the steps of measuring its own capacity and determining the instructions, if any, to send to further downstream replica target nodes. Each replica target node replicates the working data. In some cases, the measured replication capacity is enough to perform all replications in parallel.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: Nutanix, Inc.
    Inventors: Hema VENKATARAMANI, Peter Scott WYCKOFF
  • Publication number: 20200042475
    Abstract: Systems and methods for demand-based remote direct memory access buffer management. A method embodiment commences upon initially partitioning a memory pool at a computer that is to receive memory contents from a sender. The memory pool is partitioned into memory areas that comprise a plurality of different-sized buffers that serve as target buffers for one or more direct memory access data transfer operations from the data sources. An initial set of buffer apportionments is associated with each of the one or more data sources, and those initial sets are advertised to the corresponding data sources. Over time, based on messages that have been loaded into the receiver's memory, the payload sizes of the messages are observed. Based on the observed demand for buffers used for the message payloads, the constituency of the advertised buffers can grow or shrink elastically as compared to previous advertisements.
    Type: Application
    Filed: July 31, 2018
    Publication date: February 6, 2020
    Applicant: Nutanix, Inc.
    Inventors: Hema VENKATARAMANI, Peter Scott WYCKOFF