Patents by Inventor Jerrie L. Coffman
Jerrie L. Coffman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9749413
Abstract: Methods and apparatus to provide peer-to-peer interrupt signaling between devices coupled via one or more interconnects are described. In one embodiment, a NIC (Network Interface Card, such as a Remote Direct Memory Access (RDMA) capable NIC) transfers data directly into or out of the memory of a peer device that is coupled to the NIC via one or more interconnects, bypassing a host computing/processing unit and/or main system memory. Other embodiments are also disclosed.
Type: Grant
Filed: May 29, 2012
Date of Patent: August 29, 2017
Assignee: Intel Corporation
Inventors: Mark S. Hefty, Robert J. Woodruff, Jerrie L. Coffman, William R. Magro
-
Patent number: 9558148
Abstract: Methods, apparatus, and software for optimizing network data flows within constrained systems. The methods enable data to be transferred between PCIe cards in multi-socket server platforms, each platform including a local socket having an InfiniBand (IB) HCA and a remote socket. Data to be transmitted outbound from a platform is transferred from a PCIe card to the platform's IB HCA via a proxied datapath. Data received at a platform may employ a direct PCIe peer-to-peer (P2P) transfer if the destined PCIe card is installed in the local socket, or a proxied datapath if the destined PCIe card is installed in a remote socket. Outbound transfers from a PCIe card in a local socket to the platform's IB HCA may selectively use either a proxied datapath for larger data transfers or a direct P2P datapath for smaller data transfers.
Type: Grant
Filed: April 30, 2014
Date of Patent: January 31, 2017
Assignee: Intel Corporation
Inventors: William R. Magro, Arlin R. Davis, Jerrie L. Coffman, Robert J. Woodruff, Jianxin Xiong
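The datapath-selection rule this abstract describes can be sketched as a small decision function. This is an illustrative sketch only, not the patented implementation; the function name, socket labels, and the size threshold are all assumptions.

```python
def select_datapath(direction, card_socket, size, threshold=8192):
    """Pick a datapath per the scheme in the abstract (illustrative).

    Outbound traffic normally takes the proxied path to the IB HCA,
    but a card in the local socket may go direct P2P for small
    transfers. Inbound traffic goes direct P2P only to a local-socket
    card. The 8192-byte threshold is an assumed, tunable value.
    """
    if direction == "outbound":
        if card_socket == "local":
            # Larger transfers use the proxied path; smaller ones
            # go peer-to-peer directly to the IB HCA.
            return "proxied" if size > threshold else "p2p"
        return "proxied"  # remote-socket cards are always proxied outbound
    # inbound: direct P2P only if the destination card is local
    return "p2p" if card_socket == "local" else "proxied"

assert select_datapath("outbound", "local", 64) == "p2p"
assert select_datapath("outbound", "local", 1 << 20) == "proxied"
assert select_datapath("inbound", "remote", 64) == "proxied"
```

The size cutoff reflects the usual trade-off: a proxied path amortizes its copy overhead on large transfers, while small transfers benefit from the lower latency of a direct P2P write.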
-
Publication number: 20150317280
Abstract: Methods, apparatus, and software for optimizing network data flows within constrained systems. The methods enable data to be transferred between PCIe cards in multi-socket server platforms, each platform including a local socket having an InfiniBand (IB) HCA and a remote socket. Data to be transmitted outbound from a platform is transferred from a PCIe card to the platform's IB HCA via a proxied datapath. Data received at a platform may employ a direct PCIe peer-to-peer (P2P) transfer if the destined PCIe card is installed in the local socket, or a proxied datapath if the destined PCIe card is installed in a remote socket. Outbound transfers from a PCIe card in a local socket to the platform's IB HCA may selectively use either a proxied datapath for larger data transfers or a direct P2P datapath for smaller data transfers.
Type: Application
Filed: April 30, 2014
Publication date: November 5, 2015
Inventors: William R. Magro, Arlin R. Davis, Jerrie L. Coffman, Robert J. Woodruff, Jianxin Xiong
-
Patent number: 8914556
Abstract: Embodiments of the invention describe systems, apparatuses, and methods that enable sharing Remote Direct Memory Access (RDMA) device hardware between a host and a peripheral device including a CPU and memory complex (alternatively referred to herein as a processor add-in card). Embodiments of the invention utilize interconnect hardware such as Peripheral Component Interconnect Express (PCIe) hardware for peer-to-peer data transfers between processor add-in cards and RDMA devices. A host system may include modules or logic to map memory and registers to and/or from the RDMA device, thereby enabling I/O to be performed directly to and from user-mode applications on the processor add-in card, concurrently with host system I/O operations.
Type: Grant
Filed: September 30, 2011
Date of Patent: December 16, 2014
Assignee: Intel Corporation
Inventors: William R. Magro, Robert J. Woodruff, David M. Lee, Arlin R. Davis, Mark Sean Hefty, Jerrie L. Coffman
-
Publication number: 20140250202
Abstract: Methods and apparatus to provide peer-to-peer interrupt signaling between devices coupled via one or more interconnects are described. In one embodiment, a NIC (Network Interface Card, such as a Remote Direct Memory Access (RDMA) capable NIC) transfers data directly into or out of the memory of a peer device that is coupled to the NIC via one or more interconnects, bypassing a host computing/processing unit and/or main system memory. Other embodiments are also disclosed.
Type: Application
Filed: May 29, 2012
Publication date: September 4, 2014
Inventors: Mark S. Hefty, Robert J. Woodruff, Jerrie L. Coffman, William R. Magro
-
Patent number: 8583755
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Grant
Filed: July 17, 2012
Date of Patent: November 12, 2013
Assignee: Intel Corporation
Inventors: Mark Sean Hefty, Jerrie L. Coffman
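The size check this family of abstracts describes amounts to splitting a requested transfer into pieces no larger than the fabric's maximum transfer size. A minimal sketch of that chunking step, with assumed names and a byte-offset representation of buffer regions:

```python
def split_transfer(offset, length, max_xfer):
    """Break one RDMA transfer into chunks no larger than max_xfer.

    Illustrative sketch of the maximum-transfer-size check described
    in the abstract; the (offset, length) tuple representation is an
    assumption, not the patented data structure.
    """
    chunks = []
    while length > 0:
        n = min(length, max_xfer)  # cap each piece at the fabric limit
        chunks.append((offset, n))
        offset += n
        length -= n
    return chunks

# A 10-byte region with a 4-byte limit becomes three operations.
assert split_transfer(0, 10, 4) == [(0, 4), (4, 4), (8, 2)]
```

A real implementation would issue one work request per chunk and track their completions; the arithmetic above is the part the abstract's size determination governs.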
-
Publication number: 20130275631
Abstract: Embodiments of the invention describe systems, apparatuses, and methods that enable sharing Remote Direct Memory Access (RDMA) device hardware between a host and a peripheral device including a CPU and memory complex (alternatively referred to herein as a processor add-in card). Embodiments of the invention utilize interconnect hardware such as Peripheral Component Interconnect Express (PCIe) hardware for peer-to-peer data transfers between processor add-in cards and RDMA devices. A host system may include modules or logic to map memory and registers to and/or from the RDMA device, thereby enabling I/O to be performed directly to and from user-mode applications on the processor add-in card, concurrently with host system I/O operations.
Type: Application
Filed: September 30, 2011
Publication date: October 17, 2013
Inventors: William R. Magro, Robert J. Woodruff, David M. Lee, Arlin R. Davis, Mark Sean Hefty, Jerrie L. Coffman
-
Publication number: 20120284355
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Application
Filed: July 17, 2012
Publication date: November 8, 2012
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Patent number: 8250165
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Grant
Filed: December 12, 2011
Date of Patent: August 21, 2012
Assignee: Intel Corporation
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Publication number: 20120084380
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Application
Filed: December 12, 2011
Publication date: April 5, 2012
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Patent number: 8099471
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Grant
Filed: August 17, 2009
Date of Patent: January 17, 2012
Assignee: Intel Corporation
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Publication number: 20100146069
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Application
Filed: August 17, 2009
Publication date: June 10, 2010
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Patent number: 7624156
Abstract: A method and system are provided for transferring data in a networked system between a local memory in a local system and a remote memory in a remote system. An RDMA request is received and a first buffer region is associated with a first transfer operation. The system determines whether the size of the first buffer region exceeds the maximum transfer size of the networked system. Portions of a second buffer region may be associated with the first transfer operation based on that determination. The system then performs the first transfer operation.
Type: Grant
Filed: May 23, 2000
Date of Patent: November 24, 2009
Assignee: Intel Corporation
Inventors: Mark Sean Hefty, Jerrie L. Coffman
-
Patent number: 7143410
Abstract: A host system is provided with a shared resource (such as work queues and completion queues); multiple processors arranged to access the shared resource; and an operating system arranged to allow multiple processors to perform work on the shared resource concurrently while supporting updates of the shared resource. Such an operating system may comprise a synchronization algorithm for synchronizing multiple threads of operation with a single thread, so as to achieve mutual exclusion between multiple threads performing work on the shared resource and a single thread updating or changing the state of the shared resource, without requiring serialization of all threads.
Type: Grant
Filed: March 31, 2000
Date of Patent: November 28, 2006
Assignee: Intel Corporation
Inventors: Jerrie L. Coffman, Mark S. Hefty, Fabian S. Tillier
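The exclusion property this abstract describes, many worker threads proceeding concurrently while a single updater thread gets exclusive access, is the shape of a reader-writer lock. The sketch below is one conventional way to realize that property and is not the patented algorithm; all class and method names are assumptions.

```python
import threading

class SharedQueueState:
    """Illustrative reader-writer-style gate: many workers may use the
    shared resource concurrently; update() waits for them to drain and
    then runs exclusively. Workers are never serialized against each
    other, only against an in-progress update."""

    def __init__(self):
        self._cond = threading.Condition()
        self._workers = 0       # threads currently working on the resource
        self._updating = False  # True while the single updater holds exclusivity

    def begin_work(self):
        with self._cond:
            while self._updating:     # block only if an update is in progress
                self._cond.wait()
            self._workers += 1

    def end_work(self):
        with self._cond:
            self._workers -= 1
            self._cond.notify_all()   # a waiting updater may now proceed

    def update(self, fn):
        with self._cond:
            while self._workers or self._updating:
                self._cond.wait()
            self._updating = True
        fn()                          # exclusive: mutate the shared state
        with self._cond:
            self._updating = False
            self._cond.notify_all()   # release blocked workers

# Single-threaded smoke test of the protocol.
state = SharedQueueState()
state.begin_work(); state.end_work()
log = []
state.update(lambda: log.append("updated"))
assert log == ["updated"]
```

Note the asymmetry: workers pay only an uncontended lock acquire on the common path, which matches the abstract's goal of avoiding serialization of all threads.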
-
Patent number: 6965911
Abstract: A novel module system solution is provided for direct, transparent access to I/O storage devices connected to a host server within a system area network cluster, for efficient sharing of resources and databases among all clustered servers. An exemplary driver system comprises a host driver module, which may reside on and interface to a host operating system, and which establishes service connections with remote data processors on the system area network and provides direct access to the local storage devices while bypassing protocol stacks of the host operating system; an input/output platform (IOP) including a device driver module, which may reside on and interface to the local storage devices, for controlling an array of local storage devices; and a local bus which connects and transports messages and data between the host driver module and the input/output platform (IOP).
Type: Grant
Filed: December 21, 1998
Date of Patent: November 15, 2005
Assignee: Intel Corporation
Inventors: Jerrie L. Coffman, Brad R. Rullman
-
Patent number: 6735174
Abstract: Methods and systems for flow control over channel-based switched fabric connections between a first side and a second side. At least one posted receive buffer is stored in a receive buffer queue at the first side. A number of credits is incremented based on the at least one posted receive buffer. The second side is notified of the number of credits. A number of send credits is incremented at the second side based on the number of credits. A message is sent from the second side to the first side if the number of send credits is greater than or equal to two, or if the number of send credits is equal to one and a second number of credits is greater than or equal to one. The second number of credits is based on at least one second posted receive buffer at the second side. Communication of messages between the first side and the second side is thereby prevented from deadlocking.
Type: Grant
Filed: March 29, 2000
Date of Patent: May 11, 2004
Assignee: Intel Corporation
Inventors: Mark S. Hefty, Jerrie L. Coffman
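The send condition in this abstract is precise enough to state directly in code: a side may send when it holds at least two send credits, or exactly one send credit while it has at least one credit of its own (from its own posted receive buffers) to advertise back. A minimal sketch of that predicate, with assumed parameter names:

```python
def can_send(send_credits, own_posted_credits):
    """Deadlock-avoidance rule from the abstract (illustrative names).

    send_credits:       credits granted by the peer's posted receives.
    own_posted_credits: credits backed by this side's own posted
                        receive buffers (the abstract's 'second number
                        of credits').

    Reserving the last send credit unless a return credit exists
    guarantees each side can always eventually advertise new credits,
    so the connection cannot deadlock.
    """
    return send_credits >= 2 or (send_credits == 1 and own_posted_credits >= 1)

assert can_send(2, 0)        # two or more credits: always allowed
assert can_send(1, 1)        # last credit allowed only with a credit to return
assert not can_send(1, 0)    # last credit reserved for a credit update
assert not can_send(0, 5)    # no send credits: cannot send at all
```

The guarded last credit is what prevents the classic stall where both sides exhaust their credits and neither can send the credit-update message that would unblock the other.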
-
Methods and systems for flow control of transmissions over channel-based switched fabric connections
Publication number: 20040076116
Abstract: Methods and systems for flow control over channel-based switched fabric connections between a first side and a second side. At least one posted receive buffer is stored in a receive buffer queue at the first side. A number of credits is incremented based on the at least one posted receive buffer. The second side is notified of the number of credits. A number of send credits is incremented at the second side based on the number of credits. A message is sent from the second side to the first side if the number of send credits is greater than or equal to two, or if the number of send credits is equal to one and a second number of credits is greater than or equal to one. The second number of credits is based on at least one second posted receive buffer at the second side. Communication of messages between the first side and the second side is thereby prevented from deadlocking.
Type: Application
Filed: October 16, 2003
Publication date: April 22, 2004
Inventors: Mark S. Hefty, Jerrie L. Coffman
-
Patent number: 6718370
Abstract: A host system is provided with one or more hardware adapters; multiple work queues, each configured to send and receive message data via the one or more hardware adapters; multiple completion queues, each configured to coalesce completions from multiple work queues belonging to a single hardware adapter; and a completion queue management mechanism configured to check for completions across multiple completion queues in the context of either a single thread or multiple threads of operation.
Type: Grant
Filed: March 31, 2000
Date of Patent: April 6, 2004
Assignee: Intel Corporation
Inventors: Jerrie L. Coffman, Mark S. Hefty, Fabian S. Tillier
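The management mechanism described here, one thread sweeping several completion queues, can be sketched with plain lists standing in for hardware queues. This is an illustrative sketch under that assumption, not the patented mechanism.

```python
def poll_completion_queues(cqs):
    """Check every completion queue once, in a single thread's context,
    and collect all completions found (illustrative sketch).

    cqs: list of completion queues, each a list of completion entries
         coalesced from the work queues of one hardware adapter.
    """
    done = []
    for cq in cqs:
        while cq:                 # drain entries already posted to this CQ
            done.append(cq.pop(0))
    return done

# Two CQs with pending completions, one empty: a single sweep finds all.
assert poll_completion_queues([["wq1-send", "wq2-recv"], [], ["wq3-send"]]) \
    == ["wq1-send", "wq2-recv", "wq3-send"]
```

Because each sweep is self-contained, the same routine can run in one dedicated thread or be called from several threads, provided per-queue access is synchronized, which matches the single-thread-or-multi-thread flexibility the abstract claims.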
-
Patent number: 6675238
Abstract: An apparatus and method for efficient input/output processing without the use of interrupts is described. The apparatus includes a plurality of descriptors, where each descriptor includes a completion indicator and data associated with an input/output request. The plurality of descriptors includes a head descriptor and a tail descriptor. The apparatus further includes a plurality of address holders associated with an input/output processor, each uniquely affiliated with one of the plurality of descriptors. The apparatus further includes a polling mechanism for evaluating the completion indicator of the head descriptor and a completion processor for interfacing with the head descriptor. Finally, the apparatus includes connectors between the tail descriptor and its address holder and between the input/output processor and the head descriptor.
Type: Grant
Filed: September 3, 1999
Date of Patent: January 6, 2004
Assignee: Intel Corporation
Inventors: Jerrie L. Coffman, Arlin R. Davis
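The polling scheme in this abstract, inspect the head descriptor's completion indicator instead of taking an interrupt, can be sketched as a descriptor list drained from the head. Class and function names are assumptions for illustration, not the patented structures.

```python
class Descriptor:
    """One I/O request with the completion indicator the abstract
    describes (illustrative)."""
    def __init__(self, data):
        self.data = data
        self.complete = False  # set by the I/O processor when the request finishes

def drain_completed(ring):
    """Poll the head descriptor's completion indicator and process
    descriptors in order, stopping at the first incomplete one. No
    interrupt is involved; the caller simply polls again later.
    """
    processed = []
    while ring and ring[0].complete:
        processed.append(ring.pop(0).data)  # completion processor consumes the head
    return processed

ring = [Descriptor("read-A"), Descriptor("write-B"), Descriptor("read-C")]
ring[0].complete = True
ring[1].complete = True
assert drain_completed(ring) == ["read-A", "write-B"]
assert ring[0].data == "read-C"  # still pending; poll again later
```

Checking only the head keeps the poll to a single memory read per iteration, which is the efficiency argument for interrupt-free completion handling.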
-
Patent number: 6553438
Abstract: Methods and systems for a message resource pool with asynchronous and synchronous modes of operation. One or more buffers, descriptors, and message elements are allocated for a user. Each element is associated with one descriptor and at least one buffer. The allocation is performed by the message resource pool, which registers the buffers and descriptors with a unit management function. Control of an element, with its associated descriptor and at least one buffer, is passed from the message resource pool to the user upon request; that control is returned to the message resource pool once the user has finished with the element.
Type: Grant
Filed: April 24, 2000
Date of Patent: April 22, 2003
Assignee: Intel Corporation
Inventors: Jerrie L. Coffman, Mark S. Hefty, Fabian S. Tillier
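The get/return lifecycle in this abstract, pre-allocated elements handed to the user and reclaimed afterward, can be sketched as a simple free-list pool. This models only the synchronous mode, omits the unit-management registration step, and uses assumed names throughout.

```python
class MessagePool:
    """Illustrative message resource pool (synchronous mode only).

    Pre-allocates elements, each pairing a descriptor with one buffer,
    and transfers control of an element to the user on get(), taking
    it back on put()."""

    def __init__(self, count, bufsize):
        # Allocation happens once, up front, as the abstract describes.
        self._free = [{"descriptor": i, "buffer": bytearray(bufsize)}
                      for i in range(count)]

    def get(self):
        """Pass control of one element to the user, or None if exhausted."""
        return self._free.pop() if self._free else None

    def put(self, elem):
        """Return control of an element (and its buffer) to the pool."""
        self._free.append(elem)

pool = MessagePool(count=2, bufsize=16)
a = pool.get()
b = pool.get()
assert a is not None and b is not None
assert pool.get() is None        # pool exhausted until an element returns
pool.put(a)
assert pool.get() is a           # the returned element is reused
```

Pooling like this avoids per-message allocation and re-registration, which is the usual motivation for pre-registered buffer pools in message-passing stacks.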