Patents by Inventor Robert O. Sharp

Robert O. Sharp has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11929927
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: March 12, 2024
    Assignee: Intel Corporation
    Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
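
The abstract above (and the related entries 11025544, 20210112003, and 20190297015 below) describes a receive-path routing choice: use the accelerator fabric when an address translation exists and the secondary head is enabled, otherwise fall back to the host-to-device fabric. The following is a minimal, hypothetical C sketch of that decision; the types, the address window, and the helper names are invented for illustration and are not taken from the patent or any Intel driver.

```c
/* Hypothetical sketch of the receive-path routing choice described above:
 * if an address translation exists and the secondary head is enabled,
 * copy via the accelerator fabric; otherwise use the host-to-device
 * fabric. All names and the translation window are illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum fabric { HOST_TO_DEVICE_FABRIC, ACCELERATOR_FABRIC };

struct rx_packet {
    uint64_t dest_addr;   /* destination memory address carried by the packet */
    size_t   len;
};

/* Stand-in for the NIC's address-translation lookup. */
static bool translate_addr(uint64_t dest, uint64_t *translated)
{
    if (dest >= 0x1000 && dest < 0x2000) {   /* pretend this window maps to accelerator memory */
        *translated = dest + 0x80000000ULL;
        return true;
    }
    return false;
}

/* Decide which fabric the DMA engine should use for this packet. */
static enum fabric route_rx(const struct rx_packet *pkt,
                            bool secondary_head_enabled,
                            uint64_t *dma_dest)
{
    uint64_t translated;

    if (secondary_head_enabled && translate_addr(pkt->dest_addr, &translated)) {
        *dma_dest = translated;              /* copy through the accelerator fabric */
        return ACCELERATOR_FABRIC;
    }
    *dma_dest = pkt->dest_addr;              /* default: host-to-device fabric */
    return HOST_TO_DEVICE_FABRIC;
}

int main(void)
{
    struct rx_packet pkt = { .dest_addr = 0x1400, .len = 1500 };
    uint64_t dma_dest;
    enum fabric f = route_rx(&pkt, true, &dma_dest);

    printf("route via %s fabric to 0x%llx\n",
           f == ACCELERATOR_FABRIC ? "accelerator" : "host-to-device",
           (unsigned long long)dma_dest);
    return 0;
}
```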
  • Publication number: 20220360533
    Abstract: Methods, apparatus and software for implementing enhanced data center congestion management for non-TCP traffic. Non-congested transmit latencies are determined for transmission of packets or Ethernet frames along paths between source and destination end-nodes when congestion along the paths is not present or minimal. Transmit latencies are similarly measured along the same source-destination paths during ongoing operations, during which traffic congestion may vary. Based on whether the difference between the transmit latency for a packet or frame and the non-congested transmit latency for the path exceeds a threshold, the path is marked as congested or not congested. The rate at which non-TCP packets are transmitted along the path is then managed as a function of the rate at which the path is marked as congested.
    Type: Application
    Filed: May 24, 2022
    Publication date: November 10, 2022
    Inventors: Ygdal Naouri, Robert O. Sharp, Kenneth G. Keels, Eric W. Multanen
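
The congestion-management abstract above (shared by entries 20200220816, 10708187, and 20150341273 below) boils down to a simple control loop: compare each measured latency against a non-congested baseline plus a threshold, mark the path, and adapt the send rate from the marking rate. The C sketch below is a minimal, hypothetical illustration; the struct, the multiplicative back-off, and the additive probe are assumptions, not the patented policy.

```c
/* Hypothetical sketch of latency-based congestion marking: a path is marked
 * congested when its measured transmit latency exceeds its non-congested
 * baseline by more than a threshold, and the pacing rate for non-TCP traffic
 * is adjusted as a function of the marks. Names and policy are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct path_state {
    double baseline_us;     /* non-congested transmit latency */
    double threshold_us;    /* allowed excess before marking */
    double rate_mbps;       /* current pacing rate for non-TCP traffic */
};

/* Mark the path based on one latency sample and adapt the rate. */
static bool on_latency_sample(struct path_state *p, double measured_us)
{
    bool congested = (measured_us - p->baseline_us) > p->threshold_us;

    if (congested)
        p->rate_mbps *= 0.8;             /* back off while marks accumulate */
    else
        p->rate_mbps += 50.0;            /* probe for more bandwidth */
    return congested;
}

int main(void)
{
    struct path_state p = { .baseline_us = 12.0, .threshold_us = 5.0,
                            .rate_mbps = 1000.0 };
    double samples[] = { 13.0, 25.0, 30.0, 14.0 };

    for (int i = 0; i < 4; i++)
        printf("sample %.1f us -> %s, rate %.0f Mbps\n", samples[i],
               on_latency_sample(&p, samples[i]) ? "congested" : "ok",
               p.rate_mbps);
    return 0;
}
```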
  • Patent number: 11403137
    Abstract: Tenant support is provided in a multi-tenant data center configuration by a Physical Function driver communicating a virtual User Priority-to-virtual Traffic Class mapping to a Virtual Function driver. The Physical Function driver configures the Network Interface Controller to map virtual User Priorities to Physical User Priorities and to enforce the Virtual Function's limited access to Traffic Classes. Data Center Bridging features assigned to the physical network interface controller are hidden by virtualizing user priorities and traffic classes. A virtual Data Center Bridging configuration is enabled for a Virtual Function to give it access to the user priorities and traffic classes it may need but that are not otherwise visible to it.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: August 2, 2022
    Assignee: Intel Corporation
    Inventors: Manasi Deval, Neerav Parikh, Robert O. Sharp, Gregory J. Bowers, Ryan E. Hall, Chinh T. Cao
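
The mapping and enforcement described in the abstract above (see also entry 20200042350 below) can be pictured as a small per-VF table: virtual User Priority resolves to a physical User Priority and Traffic Class, with a mask limiting which Traffic Classes the VF may use. The C sketch below is an illustrative assumption about such a table; none of the field names or values come from the patent.

```c
/* Hypothetical sketch of virtual-to-physical User Priority mapping and
 * Traffic Class enforcement for a Virtual Function. The table layout and
 * the deny behaviour are illustrative, not Intel driver code. */
#include <stdint.h>
#include <stdio.h>

#define NUM_UP 8

struct vf_dcb_map {
    uint8_t phys_up[NUM_UP];     /* virtual UP -> physical UP, programmed by the PF driver */
    uint8_t tc_allowed_mask;     /* bit i set if the VF may use Traffic Class i */
    uint8_t up_to_tc[NUM_UP];    /* physical UP -> Traffic Class */
};

/* Resolve a VF's virtual UP to a physical UP/TC, enforcing TC access. */
static int resolve_up(const struct vf_dcb_map *m, uint8_t virt_up,
                      uint8_t *phys_up, uint8_t *tc)
{
    if (virt_up >= NUM_UP)
        return -1;
    *phys_up = m->phys_up[virt_up];
    *tc = m->up_to_tc[*phys_up];
    if (!(m->tc_allowed_mask & (1u << *tc)))
        return -1;                       /* VF is not allowed to use this TC */
    return 0;
}

int main(void)
{
    struct vf_dcb_map m = {
        .phys_up = { 0, 1, 2, 3, 4, 5, 6, 7 },
        .tc_allowed_mask = 0x03,                     /* only TC0 and TC1 */
        .up_to_tc = { 0, 0, 1, 1, 2, 2, 3, 3 },
    };
    uint8_t pup, tc;

    for (uint8_t vup = 0; vup < NUM_UP; vup += 2)
        printf("virtual UP %u -> %s\n", vup,
               resolve_up(&m, vup, &pup, &tc) == 0 ? "allowed" : "denied");
    return 0;
}
```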
  • Patent number: 11025544
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: June 1, 2021
    Assignee: Intel Corporation
    Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
  • Publication number: 20210112003
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
  • Publication number: 20210006511
    Abstract: Methods and apparatus for a software-controlled active-backup mode of link aggregation for RDMA and virtual functions. A Network Interface Controller (NIC) includes hardware implementing first and second physical functions (PFs), including transmit and receive resources, to support data transfers via first and second ports. A bonding group is created with the first and second PFs. The first PF is implemented as the active PF and used for primary data transfers, while the second PF is implemented as a backup PF. On a link or port failure of the active PF, the bonding group is reconfigured to employ the transmit and receive resources of the backup PF such that those resources are shared with the active PF. Data transfers are then performed using the shared resources of the active PF and the backup PF. Embodiments may support RDMA data transfers using PF bonding, and the solution may be implemented in virtualized environments including virtual machines (VMs) in a manner transparent to the VMs.
    Type: Application
    Filed: September 21, 2020
    Publication date: January 7, 2021
    Inventors: Piotr Uminski, Anjali Singhai Jain, Eliel Louzoun, Robert O. Sharp, Vivek Kashyap
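
The failover behaviour described in the abstract above can be illustrated with a small, hypothetical C sketch: a bonding group tracks an active and a backup PF, and on a link failure the backup PF's transmit/receive resources are put into service. The structures and the refresh function are illustrative assumptions only.

```c
/* Hypothetical sketch of software-controlled active-backup PF bonding:
 * on a link failure of the active PF, the bonding group is reconfigured so
 * the backup PF's transmit/receive resources carry the data transfers.
 * All structures and names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

struct pf {
    const char *name;
    bool link_up;
    int  tx_queues;          /* transmit resources owned by this PF */
    int  rx_queues;          /* receive resources owned by this PF */
};

struct bond_group {
    struct pf *active;
    struct pf *backup;
    int shared_tx, shared_rx;    /* resources currently used for data transfers */
};

static void bond_refresh(struct bond_group *b)
{
    if (!b->active->link_up) {
        /* Failover: share the backup PF's resources with the active PF. */
        b->shared_tx = b->backup->tx_queues;
        b->shared_rx = b->backup->rx_queues;
    } else {
        b->shared_tx = b->active->tx_queues;
        b->shared_rx = b->active->rx_queues;
    }
}

int main(void)
{
    struct pf pf0 = { "pf0", true, 16, 16 }, pf1 = { "pf1", true, 8, 8 };
    struct bond_group bond = { .active = &pf0, .backup = &pf1 };

    bond_refresh(&bond);
    printf("normal: %d tx queues in use\n", bond.shared_tx);

    pf0.link_up = false;                 /* simulate a port failure */
    bond_refresh(&bond);
    printf("failover: %d tx queues in use (backup PF)\n", bond.shared_tx);
    return 0;
}
```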
  • Publication number: 20200319812
    Abstract: Examples described herein relate to accessing an initiator as a Non-Volatile Memory Express (NVMe) device. In some examples, the initiator is configured with an address space, in kernel or user space, for access by a virtualized execution environment. In some examples, the initiator copies one or more storage access commands from the virtualized execution environment into a queue for access by a remote direct memory access (RDMA) compatible network interface. In some examples, the network interface provides Non-Volatile Memory Express over Fabrics (NVMe-oF) compatible commands, based on the one or more storage access commands, to a target storage device. In some examples, the initiator is created as a mediated device in kernel space or user space of a host system. In some examples, a physical storage pool address of the target storage device is configured for access by the virtualized execution environment through receipt of the physical storage pool address in a configuration command.
    Type: Application
    Filed: June 23, 2020
    Publication date: October 8, 2020
    Inventors: Shaopeng He, Yadong Li, Ziye Yang, Changpeng Liu, Banghao Ying, Robert O. Sharp
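
The forwarding step described in the abstract above, copying storage commands from the virtualized execution environment into a queue drained by an RDMA-capable NIC, is sketched below in hypothetical C. The queue layout, command fields, and depth are assumptions for illustration; the real NVMe-oF capsule handling is not shown.

```c
/* Hypothetical sketch of an NVMe-oF initiator's forwarding step: commands
 * placed by a guest in a shared queue are copied into a queue visible to an
 * RDMA-capable NIC, which would translate them into NVMe-oF commands for a
 * target. Queue sizes and the command layout are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define QDEPTH 8

struct storage_cmd {
    uint8_t  opcode;         /* e.g. read or write */
    uint64_t pool_addr;      /* physical storage pool address, set by configuration */
    uint32_t len;
};

struct cmd_queue {
    struct storage_cmd entries[QDEPTH];
    unsigned head, tail;
};

/* Copy one pending command from the guest-visible queue to the NIC-visible queue. */
static int initiator_forward(struct cmd_queue *guest_q, struct cmd_queue *nic_q)
{
    if (guest_q->head == guest_q->tail)
        return 0;                                   /* nothing pending */
    nic_q->entries[nic_q->tail % QDEPTH] = guest_q->entries[guest_q->head % QDEPTH];
    nic_q->tail++;
    guest_q->head++;
    return 1;
}

int main(void)
{
    struct cmd_queue guest_q = { 0 }, nic_q = { 0 };

    guest_q.entries[0] = (struct storage_cmd){ .opcode = 0x02, .pool_addr = 0x1000, .len = 4096 };
    guest_q.tail = 1;

    while (initiator_forward(&guest_q, &nic_q))
        ;                                           /* NIC would turn these into NVMe-oF commands */
    printf("forwarded %u command(s) to the RDMA NIC queue\n", nic_q.tail);
    return 0;
}
```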
  • Publication number: 20200220816
    Abstract: Methods, apparatus and software for implementing enhanced data center congestion management for non-TCP traffic. Non-congested transmit latencies are determined for transmission of packets or Ethernet frames along paths between source and destination end-nodes when congestion along the paths is not present or minimal. Transmit latencies are similarly measured along the same source-destination paths during ongoing operations, during which traffic congestion may vary. Based on whether the difference between the transmit latency for a packet or frame and the non-congested transmit latency for the path exceeds a threshold, the path is marked as congested or not congested. The rate at which non-TCP packets are transmitted along the path is then managed as a function of the rate at which the path is marked as congested.
    Type: Application
    Filed: March 13, 2020
    Publication date: July 9, 2020
    Applicant: Intel Corporation
    Inventors: Ygdal Naouri, Robert O. Sharp, Kenneth G. Keels, Eric W. Multanen
  • Patent number: 10708187
    Abstract: Methods, apparatus and software for implementing enhanced data center congestion management for non-TCP traffic. Non-congested transit latencies are determined for transmission of packets or Ethernet frames along paths between source and destination end-nodes when congestion along the paths is not present or minimal. Transit latencies are similarly measured along the same source-destination paths during ongoing operations, during which traffic congestion may vary. Based on whether the difference between the transit latency for a packet or frame and the non-congested transit latency for the path exceeds a threshold, the path is marked as congested or not congested. The rate at which non-TCP packets are transmitted along the path is then managed as a function of the rate at which the path is marked as congested.
    Type: Grant
    Filed: May 22, 2014
    Date of Patent: July 7, 2020
    Assignee: Intel Corporation
    Inventors: Ygdal Naouri, Robert O. Sharp, Kenneth G. Keels, Eric W. Multanen
  • Publication number: 20200042350
    Abstract: Tenant support is provided in a multi-tenant data center configuration by a Physical Function driver communicating a virtual User Priority-to-virtual Traffic Class mapping to a Virtual Function driver. The Physical Function driver configures the Network Interface Controller to map virtual User Priorities to Physical User Priorities and to enforce the Virtual Function's limited access to Traffic Classes. Data Center Bridging features assigned to the physical network interface controller are hidden by virtualizing user priorities and traffic classes. A virtual Data Center Bridging configuration is enabled for a Virtual Function to give it access to the user priorities and traffic classes it may need but that are not otherwise visible to it.
    Type: Application
    Filed: October 7, 2019
    Publication date: February 6, 2020
    Inventors: Manasi Deval, Neerav Parikh, Robert O. Sharp, Gregory J. Bowers, Ryan E. Hall, Chinh T. Cao
  • Patent number: 10467182
    Abstract: In an embodiment of the present invention, a method includes partitioning a plurality of remote direct memory access context objects among a plurality of virtual functions, establishing a remote direct memory access connection between a first of the plurality of virtual functions and a remote peer, and migrating the remote direct memory access connection from the first of the plurality of virtual functions to a second of the plurality of virtual functions without disconnecting from the remote peer.
    Type: Grant
    Filed: May 19, 2016
    Date of Patent: November 5, 2019
    Assignee: Intel Corporation
    Inventors: Robert O. Sharp, Kenneth G. Keels
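
The migration described in the abstract above (see also entries 20160267053 and 9354933 below) amounts to moving the per-connection context object from one virtual function's partition to another while leaving the wire-visible connection state intact. The hypothetical C sketch below illustrates that idea; the context fields and the migration step are assumptions, not the patented mechanism.

```c
/* Hypothetical sketch of migrating an RDMA connection between two virtual
 * functions without disconnecting from the remote peer: the context object
 * changes owner while the wire-visible state is preserved. Illustrative only. */
#include <stdint.h>
#include <stdio.h>

struct rdma_ctx {              /* per-connection context object */
    uint32_t qp_number;        /* wire-visible identity stays unchanged */
    uint32_t next_psn;         /* packet sequence state carried across the move */
    int owner_vf;              /* which VF's partition currently holds it */
};

/* Move the context object from one VF to another; the peer sees no change. */
static void migrate_ctx(struct rdma_ctx *ctx, int from_vf, int to_vf)
{
    if (ctx->owner_vf == from_vf)
        ctx->owner_vf = to_vf;
}

int main(void)
{
    struct rdma_ctx conn = { .qp_number = 42, .next_psn = 1000, .owner_vf = 0 };

    migrate_ctx(&conn, 0, 1);
    printf("QP %u now owned by VF %d, PSN %u preserved\n",
           conn.qp_number, conn.owner_vf, conn.next_psn);
    return 0;
}
```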
  • Publication number: 20190297015
    Abstract: A network interface controller can be programmed to direct write received data to a memory buffer via either a host-to-device fabric or an accelerator fabric. For packets received that are to be written to a memory buffer associated with an accelerator device, the network interface controller can determine an address translation of a destination memory address of the received packet and determine whether to use a secondary head. If a translated address is available and a secondary head is to be used, a direct memory access (DMA) engine is used to copy a portion of the received packet via the accelerator fabric to a destination memory buffer associated with the address translation. Accordingly, copying a portion of the received packet through the host-to-device fabric and to a destination memory can be avoided and utilization of the host-to-device fabric can be reduced for accelerator bound traffic.
    Type: Application
    Filed: June 7, 2019
    Publication date: September 26, 2019
    Inventors: Pratik M. Marolia, Rajesh M. Sankaran, Ashok Raj, Nrupal Jani, Parthasarathy Sarangam, Robert O. Sharp
  • Patent number: 10051038
    Abstract: Generally, this disclosure relates to a shared send queue in a networked system. A method, apparatus and system are configured to support a plurality of reliable communication channels using a shared send queue. The reliable communication channels are configured to carry messages from a host to a plurality of destinations and to ensure that the completion order of messages is related to their transmission order.
    Type: Grant
    Filed: December 23, 2011
    Date of Patent: August 14, 2018
    Assignee: Intel Corporation
    Inventors: Vadim Makhervaks, Robert O. Sharp, Brian Hausauer, Kenneth G. Keels, Donald E. Wood
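
The shared send queue described in the abstract above can be pictured as a single ring that several reliable channels post into, with completions delivered in posting (transmission) order. The C sketch below is a hypothetical illustration; the queue depth, work-queue-element fields, and in-order completion step are assumptions.

```c
/* Hypothetical sketch of a send queue shared by several reliable channels:
 * work requests from different channels are posted to one queue, and
 * completions are reported in an order tied to the transmission order. */
#include <stdint.h>
#include <stdio.h>

#define SQ_DEPTH 16

struct send_wqe {
    int      channel;        /* destination/channel this message belongs to */
    uint32_t msg_id;
};

struct shared_sq {
    struct send_wqe wqe[SQ_DEPTH];
    unsigned post_idx;       /* next slot to post into (transmission order) */
    unsigned done_idx;       /* completions are delivered strictly in this order */
};

static int sq_post(struct shared_sq *sq, int channel, uint32_t msg_id)
{
    if (sq->post_idx - sq->done_idx >= SQ_DEPTH)
        return -1;                                   /* queue full */
    sq->wqe[sq->post_idx % SQ_DEPTH] = (struct send_wqe){ channel, msg_id };
    sq->post_idx++;
    return 0;
}

static void sq_complete_next(struct shared_sq *sq)
{
    struct send_wqe *w = &sq->wqe[sq->done_idx % SQ_DEPTH];
    printf("completed msg %u on channel %d\n", w->msg_id, w->channel);
    sq->done_idx++;                                  /* in order with posting */
}

int main(void)
{
    struct shared_sq sq = { 0 };

    sq_post(&sq, 0, 100);     /* two channels share the same send queue */
    sq_post(&sq, 1, 200);
    sq_complete_next(&sq);
    sq_complete_next(&sq);
    return 0;
}
```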
  • Patent number: 9558146
    Abstract: Apparatus, method and system for supporting Remote Direct Memory Access (RDMA) Read V2 Request and Response messages using the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate a new RDMA Read V2 Request message and generate a new RDMA Read V2 Response message in response to a received RDMA Read V2 Request message, and send the messages to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process RDMA Read V2 Response messages received from the RDMA remote peer, and to write data contained in the messages to appropriate locations using DMA transfers from buffers on the RNIC into system memory. In addition, the new semantics remove the need for extra operations to grant and revoke remote access rights.
    Type: Grant
    Filed: July 18, 2013
    Date of Patent: January 31, 2017
    Assignee: Intel Corporation
    Inventors: Robert O. Sharp, Donald E. Wood, Kenneth G. Keels
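
The request/response exchange described in the abstract above is sketched below in hypothetical C: on receiving a Read V2 Request, the responder builds a Read V2 Response carrying the requested data. The opcode values, header layout, and memcpy stand-in for DMA are invented for illustration and are not the real iWARP encodings.

```c
/* Hypothetical sketch of an RDMA Read V2 Request being turned into a Read V2
 * Response by RNIC logic. Opcodes and header layout are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum rdmap_opcode {                  /* illustrative values only */
    RDMA_READ_V2_REQUEST  = 0xA,
    RDMA_READ_V2_RESPONSE = 0xB,
};

struct rdmap_msg {
    uint8_t  opcode;
    uint64_t offset;                 /* where to read from / where the data belongs */
    uint32_t length;
    uint8_t  payload[64];
};

/* Build a Read V2 Response for an incoming Read V2 Request. */
static int handle_msg(const struct rdmap_msg *in, struct rdmap_msg *out,
                      const uint8_t *local_buf)
{
    if (in->opcode != RDMA_READ_V2_REQUEST || in->length > sizeof out->payload)
        return -1;
    out->opcode = RDMA_READ_V2_RESPONSE;
    out->offset = in->offset;
    out->length = in->length;
    memcpy(out->payload, local_buf + in->offset, in->length);   /* DMA stand-in */
    return 0;
}

int main(void)
{
    uint8_t source[64] = "hello from the responder";
    struct rdmap_msg req = { .opcode = RDMA_READ_V2_REQUEST, .offset = 0, .length = 26 };
    struct rdmap_msg rsp;

    if (handle_msg(&req, &rsp, source) == 0)
        printf("response opcode 0x%X carries %u bytes: %s\n",
               rsp.opcode, rsp.length, (char *)rsp.payload);
    return 0;
}
```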
  • Publication number: 20160267053
    Abstract: In an embodiment of the present invention, a method includes partitioning a plurality of remote direct memory access context objects among a plurality of virtual functions, establishing a remote direct memory access connection between a first of the plurality of virtual functions and a remote peer, and migrating the remote direct memory access connection from the first of the plurality of virtual functions to a second of the plurality of virtual functions without disconnecting from the remote peer.
    Type: Application
    Filed: May 19, 2016
    Publication date: September 15, 2016
    Inventors: Robert O. Sharp, Kenneth G. Keels
  • Patent number: 9411775
    Abstract: Apparatus, methods and systems for supporting Send with Immediate Data messages using Remote Direct Memory Access (RDMA) and the Internet Wide Area RDMA Protocol (iWARP). iWARP logic in an RDMA Network Interface Controller (RNIC) is configured to generate different types of Send with Immediate Data messages, each including a header with a unique RDMA opcode identifying the type of Send with Immediate Data message, and send the message to an RDMA remote peer using iWARP implemented over an Ethernet network. The iWARP logic is further configured to process the Send with Immediate Data messages received from the RDMA remote peer. The Send with Immediate Data messages include a Send with Immediate Data message, a Send with Invalidate and Immediate Data message, a Send with Solicited Event (SE) and Immediate Data message, and a Send with Invalidate and SE and Immediate Data message.
    Type: Grant
    Filed: July 24, 2013
    Date of Patent: August 9, 2016
    Assignee: Intel Corporation
    Inventors: Robert O. Sharp, Donald E. Wood, Kenneth G. Keels
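
The abstract above distinguishes four Send with Immediate Data variants by a unique opcode in the message header. The hypothetical C sketch below shows one way a receiver could decode such headers; the opcode values and fields are invented for illustration, not the real iWARP encodings.

```c
/* Hypothetical sketch of distinguishing the four Send with Immediate Data
 * variants by opcode. Values are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum send_imm_opcode {                         /* illustrative values only */
    SEND_WITH_IMM        = 0x10,
    SEND_WITH_INV_IMM    = 0x11,
    SEND_WITH_SE_IMM     = 0x12,
    SEND_WITH_INV_SE_IMM = 0x13,
};

struct send_hdr {
    uint8_t  opcode;
    uint32_t immediate_data;                   /* delivered with the receiver's completion */
    uint32_t invalidate_stag;                  /* only meaningful for the *_INV_* variants */
};

static void process_send(const struct send_hdr *h)
{
    bool invalidate = (h->opcode == SEND_WITH_INV_IMM || h->opcode == SEND_WITH_INV_SE_IMM);
    bool solicited  = (h->opcode == SEND_WITH_SE_IMM  || h->opcode == SEND_WITH_INV_SE_IMM);

    printf("imm=0x%08x invalidate=%d solicited_event=%d\n",
           h->immediate_data, invalidate, solicited);
}

int main(void)
{
    struct send_hdr h = { .opcode = SEND_WITH_INV_SE_IMM,
                          .immediate_data = 0xdeadbeef,
                          .invalidate_stag = 7 };
    process_send(&h);
    return 0;
}
```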
  • Patent number: 9405725
    Abstract: An embodiment may include circuitry that may write a message from a system memory in a host to a memory space in an input/output (I/O) controller in the host. A host operating system may reside, at least in part, in the system memory. The message may include both data and at least one descriptor associated with the data. The data may be included in the at least one descriptor. The circuitry also may signal the I/O controller that the writing has occurred.
    Type: Grant
    Filed: September 29, 2011
    Date of Patent: August 2, 2016
    Assignee: Intel Corporation
    Inventors: Vadim Makhervaks, Robert O. Sharp, Kenneth G. Keels, Brian S. Hausauer, Steen K. Larsen
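
The abstract above describes writing a message whose descriptor embeds the data itself into the I/O controller's memory space and then signaling the controller. The hypothetical C sketch below simulates that with an ordinary buffer standing in for the controller's memory space; the descriptor layout and doorbell are illustrative assumptions.

```c
/* Hypothetical sketch of an inline descriptor: the host writes a message
 * that carries its data inside the descriptor directly into the controller's
 * memory space, then signals that the write has occurred. Illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct inline_desc {
    uint16_t flags;                  /* e.g. marks the descriptor as carrying inline data */
    uint16_t len;
    uint8_t  data[56];               /* payload embedded in the descriptor itself */
};

struct io_controller {
    uint8_t  mmio_window[4096];      /* stand-in for the controller's memory space */
    unsigned doorbell;               /* "signal that the writing has occurred" */
};

static void post_inline(struct io_controller *ctl, const void *msg, size_t len)
{
    memcpy(ctl->mmio_window, msg, len);   /* write the message into controller memory */
    ctl->doorbell++;                      /* notify the controller */
}

int main(void)
{
    struct io_controller ctl = { 0 };
    struct inline_desc d = { .flags = 0x1, .len = 5 };
    struct inline_desc seen;

    memcpy(d.data, "ping", 5);
    post_inline(&ctl, &d, sizeof d);

    memcpy(&seen, ctl.mmio_window, sizeof seen);   /* what the controller would read */
    printf("doorbell=%u, inline payload=\"%s\"\n", ctl.doorbell, (char *)seen.data);
    return 0;
}
```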
  • Patent number: 9354933
    Abstract: In an embodiment of the present invention, a method includes partitioning a plurality of remote direct memory access context objects among a plurality of virtual functions, establishing a remote direct memory access connection between a first of the plurality of virtual functions and a remote peer, and migrating the remote direct memory access connection from the first of the plurality of virtual functions to a second of the plurality of virtual functions without disconnecting from the remote peer.
    Type: Grant
    Filed: October 31, 2011
    Date of Patent: May 31, 2016
    Assignee: Intel Corporation
    Inventors: Robert O. Sharp, Kenneth G. Keels
  • Patent number: 9244881
    Abstract: An embodiment may include circuitry to facilitate, at least in part, a first network interface controller (NIC) in a client to be capable of accessing, via a second NIC in a server that is remote from the client and in a manner that is independent of the operating system environment in the server, at least one command interface of another controller of the server. The command interface may include at least one controller command queue. Such accessing may include writing at least one queue element to the at least one command queue to command the other controller to perform at least one operation associated with that controller. The other controller may perform the at least one operation in response, at least in part, to the at least one queue element. Many alternatives, variations, and modifications are possible.
    Type: Grant
    Filed: March 4, 2015
    Date of Patent: January 26, 2016
    Assignee: Intel Corporation
    Inventors: Eliezer Tamir, Ben-Zion Friedman, Theodore L. Willke, Eliel Louzoun, Matthew R. Wilcox, Donald E. Wood, Steven B. McGowan, Robert O. Sharp
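
The abstract above describes a client NIC placing queue elements directly into a command queue exposed by another controller in a remote server, without involving the server's operating system. The hypothetical C sketch below illustrates that step with a memcpy standing in for the RDMA write; the queue-element layout and names are assumptions for illustration only.

```c
/* Hypothetical sketch of remote command-queue access: the client builds a
 * queue element and writes it into the command queue exposed by another
 * controller in the server. The RDMA write is simulated with memcpy. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CQ_DEPTH 4

struct queue_element {
    uint8_t  opcode;             /* operation the remote controller should perform */
    uint64_t buffer_addr;
    uint32_t length;
};

struct remote_cmd_queue {        /* lives in the server, exposed to the client NIC */
    struct queue_element slots[CQ_DEPTH];
    unsigned tail;
};

/* Stand-in for an RDMA write from the client NIC into the server's queue. */
static void rdma_write_qe(struct remote_cmd_queue *rq, const struct queue_element *qe)
{
    memcpy(&rq->slots[rq->tail % CQ_DEPTH], qe, sizeof *qe);
    rq->tail++;                  /* a real design would also ring a doorbell */
}

int main(void)
{
    struct remote_cmd_queue server_q = { 0 };
    struct queue_element qe = { .opcode = 0x01, .buffer_addr = 0x2000, .length = 512 };

    rdma_write_qe(&server_q, &qe);
    printf("remote queue now holds %u element(s); first opcode=0x%x\n",
           server_q.tail, server_q.slots[0].opcode);
    return 0;
}
```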
  • Publication number: 20150341273
    Abstract: Methods, apparatus and software for implementing enhanced data center congestion management for non-TCP traffic. Non-congested transmit latencies are determined for transmission of packets or Ethernet frames along paths between source and destination end-nodes when congestion along the paths is not present or minimal. Transmit latencies are similarly measured along the same source-destination paths during ongoing operations, during which traffic congestion may vary. Based on whether the difference between the transmit latency for a packet or frame and the non-congested transmit latency for the path exceeds a threshold, the path is marked as congested or not congested. The rate at which non-TCP packets are transmitted along the path is then managed as a function of the rate at which the path is marked as congested.
    Type: Application
    Filed: May 22, 2014
    Publication date: November 26, 2015
    Inventors: Ygdal Naouri, Robert O. Sharp, Kenneth G. Keels, Eric W. Multanen