Patents by Inventor Mark Debbage

Mark Debbage has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230412712
    Abstract: Described herein are optimized packet headers for Ethernet IP networks and related methods and devices. An example packet header includes a field comprising a source identifier (SID), the SID comprising a shortened representation of a complete Internet Protocol (IP) address of a source network device, a field comprising a destination identifier (DID), the DID comprising a shortened representation of a complete IP address of a destination network device, and a field having a total number of bits that is less than 8 and comprising a shortened representation of a type of encapsulation protocol for the packet. The packet header excludes fields comprising the complete IP address and a media access control (MAC) address of the source network device, fields comprising the complete IP address and the MAC address of the destination network device, a field comprising a header checksum, and a field comprising a total size of the packet. (An illustrative sketch of such a compact header layout appears in the notes following this listing.)
    Type: Application
    Filed: September 1, 2023
    Publication date: December 21, 2023
    Applicant: Intel Corporation
    Inventors: Nayan Suthar, Mark Debbage, Ramnarayanan Muthukaruppan
  • Publication number: 20220351326
    Abstract: Examples described herein relate to a first graphics processing unit (GPU) with at least one integrated communications system, wherein the at least one integrated communications system is to apply a reliability protocol to communicate with at least one integrated communications system associated with a second GPU in order to copy data from a first memory region to a second memory region, wherein the first memory region is associated with the first GPU and the second memory region is associated with the second GPU.
    Type: Application
    Filed: June 29, 2022
    Publication date: November 3, 2022
    Inventors: Todd Rimmer, Mark Debbage, Bruce G. Warren, Sayantan Sur, Nayan Amrutlal Suthar, Ajaya Durg
  • Patent number: 11467885
    Abstract: Technologies for processing network packets include a compute device with a network interface controller (NIC) that has a host interface, a packet processor, and a network interface. The host interface is configured to receive a transaction from a compute engine, wherein the transaction includes latency-sensitive data, determine a context of the latency-sensitive data, and verify the latency-sensitive data against one or more server policies as a function of the determined context. The packet processor is configured to identify a trust associated with the latency-sensitive data, determine whether to verify the latency-sensitive data against one or more network policies as a function of the identified trust, apply the one or more network policies, and encapsulate the latency-sensitive data into a network packet. The network interface is configured to transmit the network packet via an associated Ethernet port of the NIC. Other embodiments are described herein.
    Type: Grant
    Filed: December 30, 2017
    Date of Patent: October 11, 2022
    Assignee: Intel Corporation
    Inventors: Ronen Hyatt, Mark Debbage
  • Publication number: 20220138021
    Abstract: Examples described herein relate to a sender process having a capability to select from among a plurality of connections to at least one target process, wherein the plurality of connections comprise a connection for the sender process and/or one or more connections allocated per job. In some examples, the connection for the sender process comprises a datagram transport for message transfers. In some examples, the one or more connections allocated per job utilize a kernel bypass datagram transport for message transfers. In some examples, the one or more connections allocated per job comprise a connection oriented transport, and multiple remote direct memory access (RDMA) write operations for a plurality of processes are to be multiplexed using the connection oriented transport.
    Type: Application
    Filed: December 24, 2021
    Publication date: May 5, 2022
    Inventors: Todd Rimmer, Mark Debbage
  • Publication number: 20220124035
    Abstract: Examples described herein relate to a switch circuitry that includes circuitry to determine if a received packet comprises a control packet; circuitry to determine congestion metrics based on receipt of at least one control packet, wherein the at least one control packet comprises a Request To Send (RTS) or Clear To Send (CTS); and circuitry to transmit at least one of the congestion metrics in at least one packet to a sender and/or receiver network interface device.
    Type: Application
    Filed: December 24, 2021
    Publication date: April 21, 2022
    Inventors: Junggun Lee, Jeremias Blendin, Yanfang Le, Rong Pan, Mark Debbage, Robert Southworth
  • Publication number: 20220124046
    Abstract: Examples described herein relate to a network interface device performing an offloaded tag matching operation to support both one or more eager transactions and one or more rendezvous transactions using a tag-matching protocol. In some examples, the tag matching operation is offloaded from a server to the network interface device. In some examples, the network interface device is to receive messages from one or more senders, wherein the messages comprise tags, and to select one or more of the messages to write into a buffer based on matching on sender and/or tag. (A minimal host-side sketch of tag matching appears in the notes following this listing.)
    Type: Application
    Filed: December 24, 2021
    Publication date: April 21, 2022
    Inventor: Mark Debbage
  • Publication number: 20220116325
    Abstract: Examples described herein relate to a network interface device that includes circuitry to decide a packet format for a packet, including data to be transmitted, based on the network utilized to transmit the packet, and circuitry to form the packet based on the decided packet format. In some examples, the network utilized to transmit the packet is based on an egress port of the packet. In some examples, the network utilized to transmit the packet comprises one or more of: direct interconnect, small scale-up network, or large scale-out network. In some examples, to decide the packet format, the circuitry is to form the packet byte by byte to reduce overhead caused by the preamble and the number of header fields. (An illustrative sketch of format selection by network kind appears in the notes following this listing.)
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Bruce G. Warren, Robert Zak, Mark Debbage, Todd Rimmer
  • Publication number: 20220085916
    Abstract: Examples described herein relate to a network interface device that includes circuitry to track one or more gaps in received packet sequence numbers using data, and circuitry to indicate non-delivered packets to a sender of packets in order to identify a range of delivered packets. In some examples, the data identifies delivered packets and undelivered packets for one or more connections. In some examples, to indicate non-delivered packets to the sender, the circuitry is to provide a negative acknowledgement sequence range indicating a start and an end of non-delivered packets. (An illustrative sketch of gap tracking and negative-acknowledgement ranges appears in the notes following this listing.)
    Type: Application
    Filed: September 23, 2021
    Publication date: March 17, 2022
    Inventors: Mark Debbage, Bruce G. Warren
  • Publication number: 20210281618
    Abstract: In one embodiment, a system includes a device and a host. The device includes a device stream buffer. The host includes a processor to execute at least a first application and a second application, a host stream buffer, and a host scheduler. The first application is associated with a first transmit streaming channel to stream first data from the first application to the device stream buffer. The first transmit streaming channel has a first allocated amount of buffer space in the device stream buffer. The host scheduler schedules enqueue of the first data from the first application to the first transmit streaming channel based at least in part on availability of space in the first allocated amount of buffer space in the device stream buffer. Other embodiments are described and claimed.
    Type: Application
    Filed: May 6, 2021
    Publication date: September 9, 2021
    Inventors: Lokpraveen Mosur, Ilango Ganga, Robert Cone, Kshitij Arun Doshi, John J. Browne, Mark Debbage, Stephen Doyle, Patrick Fleming, Doddaballapur Jayasimha
  • Publication number: 20210119930
    Abstract: Examples described herein relate to technologies for reliable packet transmission. In some examples, a network interface includes circuitry to: receive a request to transmit a packet to a destination device, select a path for the packet, provide a path identifier identifying one of multiple paths from the network interface to a destination and a Path Sequence Number (PSN) for the packet, wherein the PSN is to identify a packet transmission order over the selected path, include the PSN in the packet, and transmit the packet. In some examples, if the packet is a re-transmit of a previously transmitted packet, the circuitry is to: select a path for the re-transmit packet, and set the PSN of the re-transmit packet to the current packet transmission number for the path selected for the re-transmit packet. (An illustrative sketch of per-path sequence numbering appears in the notes following this listing.)
    Type: Application
    Filed: October 29, 2020
    Publication date: April 22, 2021
    Inventors: Mark Debbage, Robert Southworth, Arvind Srinivasan, Cheolmin Park, Todd Rimmer, Brian S. Hausauer
  • Publication number: 20190068509
    Abstract: Technologies for processing network packets include a compute device with a network interface controller (NIC) that has a host interface, a packet processor, and a network interface. The host interface is configured to receive a transaction from a compute engine, wherein the transaction includes latency-sensitive data, determine a context of the latency-sensitive data, and verify the latency-sensitive data against one or more server policies as a function of the determined context. The packet processor is configured to identify a trust associated with the latency-sensitive data, determine whether to verify the latency-sensitive data against one or more network policies as a function of the identified trust, apply the one or more network policies, and encapsulate the latency-sensitive data into a network packet. The network interface is configured to transmit the network packet via an associated Ethernet port of the NIC. Other embodiments are described herein.
    Type: Application
    Filed: December 30, 2017
    Publication date: February 28, 2019
    Inventors: Ronen Hyatt, Mark Debbage
  • Patent number: 10073796
    Abstract: Method and apparatus for sending packets using optimized PIO write sequences without sfences. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received at a processor supporting out of order execution. The PIO write instructions are received in an original order and executed out of order, with each PIO write instruction writing a store unit of data to a store buffer or a store block of data to the store buffer. Logic is provided for the store buffer to detect when store blocks are filled, resulting in the data in those store blocks being drained via PCIe posted writes that are written to send blocks in the PIO send memory at addresses defined by the PIO write instructions. Logic is employed for detecting the fill size of packets and when a packet's send blocks have been filled, enabling the packet data to be eligible for egress.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: September 11, 2018
    Assignee: Intel Corporation
    Inventors: Mark Debbage, Yatin M. Mutha
  • Patent number: 10044626
    Abstract: In an embodiment, an out-of-order, reliable, end-to-end protocol is provided that can enable direct user-level data placement and atomic operations between nodes of a multi-node network. The protocol may be optimized for low-loss environments such as High Performance Computing (HPC) applications, and may enable loss detection and de-duplication of packets through the use of a robust window state manager at a target node. A multi-node network implementing the protocol may have increased system reliability, packet throughput, and increased tolerance for adaptively routed traffic, while still allowing atomic operations to be idempotently applied directly to a user memory location.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: August 7, 2018
    Assignee: Intel Corporation
    Inventors: Keith Underwood, Charles Giefer, Mark Debbage, Karl P. Brummel, Nathan Miller, Bruce Pirie
  • Patent number: 10015056
    Abstract: System, method, and apparatus for improving the performance of collective operations in High Performance Computing (HPC). Compute nodes in a networked HPC environment form collective groups to perform collective operations. A spanning tree is formed including the compute nodes and switches and links used to interconnect the compute nodes, wherein the spanning tree is configured such that there is only a single route between any pair of nodes in the tree. The compute nodes implement processes for performing the collective operations, which includes exchanging messages between processes executing on other compute nodes, wherein the messages contain indicia identifying collective operations they belong to. Each switch is configured to implement message forwarding operations for its portion of the spanning tree. Each of the nodes in the spanning tree implements a ratcheted cyclical state machine that is used for synchronizing collective operations, along with status messages that are exchanged between nodes.
    Type: Grant
    Filed: July 12, 2016
    Date of Patent: July 3, 2018
    Assignee: Intel Corporation
    Inventors: Michael Heinz, Todd Rimmer, James Kunz, Mark Debbage
  • Patent number: 9984020
    Abstract: Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting that the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and the lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned. (An illustrative sketch of this aggregated credit-return step appears in the notes following this listing.)
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: May 29, 2018
    Assignee: Intel Corporation
    Inventors: Mark Debbage, Yatin M. Mutha
  • Publication number: 20180039593
    Abstract: Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting that the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and the lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned.
    Type: Application
    Filed: October 16, 2017
    Publication date: February 8, 2018
    Applicant: Intel Corporation
    Inventors: Mark Debbage, Yatin M. Mutha
  • Patent number: 9792235
    Abstract: Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting that the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and the lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned.
    Type: Grant
    Filed: October 5, 2016
    Date of Patent: October 17, 2017
    Assignee: Intel Corporation
    Inventors: Mark Debbage, Yatin M. Mutha
  • Patent number: 9785359
    Abstract: Methods and apparatus for sending packets using optimized PIO write sequences without sfences and out-of-order credit returns. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received by a processor in an original order and executed out of order, resulting in the packet data being written to send blocks in the PIO send memory out of order, while the packets themselves are stored in sequential order once all of the packet data is written. The packets are egressed out of order by egressing packet data contained in the send blocks to an egress block using a non-sequential packet order that is different than the sequential packet order. In conjunction with egressing the packets, corresponding credits are returned in the non-sequential packet order. A block list comprising a linked list and a free list are used to facilitate out-of-order packet egress and corresponding out-of-order credit returns.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: October 10, 2017
    Assignee: Intel Corporation
    Inventors: Yatin Mutha, Mark Debbage
  • Publication number: 20170249079
    Abstract: Methods and apparatus for sending packets using optimized PIO write sequences without sfences and out-of-order credit returns. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received by a processor in an original order and executed out of order, resulting in the packet data being written to send blocks in the PIO send memory out of order, while the packets themselves are stored in sequential order once all of the packet data is written. The packets are egressed out of order by egressing packet data contained in the send blocks to an egress block using a non-sequential packet order that is different than the sequential packet order. In conjunction with egressing the packets, corresponding credits are returned in the non-sequential packet order. A block list comprising a linked list and a free list are used to facilitate out-of-order packet egress and corresponding out-of-order credit returns.
    Type: Application
    Filed: February 26, 2016
    Publication date: August 31, 2017
    Inventors: Yatin Mutha, Mark Debbage
  • Publication number: 20170235693
    Abstract: Method and apparatus for implementing an optimized credit return mechanism for packet sends. A Programmed Input/Output (PIO) send memory is partitioned into a plurality of send contexts, each comprising a memory buffer including a plurality of send blocks configured to store packet data. A storage scheme using FIFO semantics is implemented with each send block associated with a respective FIFO slot. In response to receiving packet data written to the send blocks and detecting that the data in those send blocks has egressed from a send context, corresponding freed FIFO slots are detected, and the lowest slot for which credit return indicia has not been returned is determined. The highest slot in a sequence of freed slots from the lowest slot is then determined, and corresponding credit return indicia is returned.
    Type: Application
    Filed: October 5, 2016
    Publication date: August 17, 2017
    Applicant: Intel Corporation
    Inventors: Mark Debbage, Yatin M. Mutha
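
Notes: illustrative sketches

The sketches below are not taken from the patent texts; they are minimal interpretations of the mechanisms summarized in the abstracts above, and every structure name, field width, and constant in them is an assumption made for illustration.

For publication 20230412712 (optimized packet headers), here is a minimal C sketch of what a compact header carrying a shortened source identifier (SID), a shortened destination identifier (DID), and a sub-8-bit encapsulation-type field could look like. The 16-bit SID/DID widths and the 4-bit type field are assumptions; only the "fewer than 8 bits" constraint comes from the abstract, and wire-level packing is not modeled.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative compact header: shortened source/destination identifiers in
 * place of full IP and MAC addresses, plus a sub-8-bit encapsulation type.
 * Field widths are assumptions made for this sketch.
 */
struct compact_hdr {
    uint16_t sid;          /* shortened source identifier (assumed 16 bits)      */
    uint16_t did;          /* shortened destination identifier (assumed 16 bits) */
    unsigned encap : 4;    /* encapsulation protocol type, fewer than 8 bits     */
    unsigned rsvd  : 4;    /* reserved/padding                                   */
    /* No full IP or MAC addresses, no header checksum, no total-length field. */
};

int main(void)
{
    struct compact_hdr h = { .sid = 0x012A, .did = 0x0457, .encap = 0x3, .rsvd = 0 };
    printf("sid=0x%04X did=0x%04X encap=%u\n", h.sid, h.did, (unsigned)h.encap);
    return 0;
}
```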
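
For publication 20220124046 (offloaded tag matching), a host-side sketch of the matching rule itself: posted receives carry an expected sender and tag (with wildcards), and an incoming message is matched on sender and/or tag. The wildcard constants and list structure are assumptions, and the offload of this matching to the network interface device is not modeled here.

```c
#include <stdint.h>
#include <stdio.h>

#define ANY_SENDER 0xFFFFFFFFu     /* assumed wildcard values */
#define ANY_TAG    0xFFFFFFFFu

struct posted_recv {
    uint32_t sender;   /* expected sender rank, or ANY_SENDER */
    uint32_t tag;      /* expected tag, or ANY_TAG            */
    void    *buffer;   /* destination buffer for the message  */
    int      used;     /* 1 once a message has matched        */
};

/* Return the first posted receive matching on sender and/or tag, or NULL. */
static struct posted_recv *tag_match(struct posted_recv *list, int n,
                                     uint32_t sender, uint32_t tag)
{
    for (int i = 0; i < n; i++) {
        if (list[i].used)
            continue;
        if (list[i].sender != ANY_SENDER && list[i].sender != sender)
            continue;
        if (list[i].tag != ANY_TAG && list[i].tag != tag)
            continue;
        list[i].used = 1;          /* consume the posted receive */
        return &list[i];
    }
    return NULL;                   /* unexpected message: would be buffered elsewhere */
}

int main(void)
{
    char buf0[64], buf1[64];
    struct posted_recv posted[] = {
        { .sender = 7,          .tag = 42,      .buffer = buf0, .used = 0 },
        { .sender = ANY_SENDER, .tag = ANY_TAG, .buffer = buf1, .used = 0 },
    };
    struct posted_recv *m = tag_match(posted, 2, 7, 42);
    printf("matched entry with tag %u\n", (unsigned)(m ? m->tag : 0));
    return 0;
}
```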
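
For publication 20220116325 (packet format selection), a sketch of choosing a header format from the kind of network behind the egress port. The enum values and the mapping from network kind to format are assumptions; the abstract only states that the format depends on whether the packet crosses a direct interconnect, a small scale-up network, or a large scale-out network.

```c
#include <stdio.h>

/* Kinds of fabric reachable through an egress port (named in the abstract). */
enum net_kind { NET_DIRECT, NET_SCALE_UP, NET_SCALE_OUT };

/* Illustrative packet formats; the abstract does not name concrete formats. */
enum pkt_format { FMT_MINIMAL, FMT_FABRIC, FMT_FULL_ETHERNET };

/* Assumed mapping: the shorter the reach, the smaller the header. */
static enum pkt_format choose_format(enum net_kind kind)
{
    switch (kind) {
    case NET_DIRECT:    return FMT_MINIMAL;        /* direct interconnect     */
    case NET_SCALE_UP:  return FMT_FABRIC;         /* small scale-up network  */
    case NET_SCALE_OUT: return FMT_FULL_ETHERNET;  /* large scale-out network */
    }
    return FMT_FULL_ETHERNET;
}

int main(void)
{
    printf("direct    -> format %d\n", choose_format(NET_DIRECT));
    printf("scale-out -> format %d\n", choose_format(NET_SCALE_OUT));
    return 0;
}
```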
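
For publication 20220085916 (sequence gap tracking), a receiver-side sketch that records which packet sequence numbers have arrived and reports each missing run as a start/end negative-acknowledgement range. The fixed window size and the bitmap representation are assumptions.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define WINDOW 64                      /* assumed tracking window size */

struct rx_tracker {
    uint32_t base;                     /* lowest sequence number still being tracked */
    uint8_t  seen[WINDOW];             /* 1 if sequence number base+i has been received */
};

static void rx_packet(struct rx_tracker *t, uint32_t psn)
{
    if (psn >= t->base && psn < t->base + WINDOW)
        t->seen[psn - t->base] = 1;
}

/* Report NACK ranges: contiguous runs of missing sequence numbers below the
 * highest delivered packet, each as a start/end pair. */
static void report_nacks(const struct rx_tracker *t)
{
    int high = -1;
    for (int i = 0; i < WINDOW; i++)
        if (t->seen[i])
            high = i;
    for (int i = 0; i <= high; i++) {
        if (t->seen[i])
            continue;
        int start = i;
        while (i <= high && !t->seen[i])
            i++;
        printf("NACK range: %u-%u\n",
               (unsigned)(t->base + start), (unsigned)(t->base + (i - 1)));
    }
}

int main(void)
{
    struct rx_tracker t = { .base = 100 };
    memset(t.seen, 0, sizeof(t.seen));
    uint32_t arrived[] = { 100, 101, 104, 105, 108 }; /* 102-103 and 106-107 lost */
    for (unsigned i = 0; i < sizeof(arrived) / sizeof(arrived[0]); i++)
        rx_packet(&t, arrived[i]);
    report_nacks(&t);                  /* prints 102-103 and 106-107 */
    return 0;
}
```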
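
For publication 20210119930 (per-path sequence numbers), a sketch in which each path keeps its own transmission counter and every transmission, including a retransmit, is stamped with the current counter of whichever path is selected for it. The number of paths and the round-robin selection policy are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PATHS 4                    /* assumed number of paths to the destination */

struct tx_state {
    uint32_t next_psn[NUM_PATHS];      /* per-path packet transmission counters          */
    uint32_t rr;                       /* round-robin cursor (assumed selection policy)  */
};

struct wire_pkt {
    uint8_t  path_id;                  /* identifies the selected path        */
    uint32_t psn;                      /* transmission order on that path     */
};

/* Stamp a packet (first transmission or retransmit alike) with the selected
 * path and that path's current transmission number. */
static struct wire_pkt send_packet(struct tx_state *s)
{
    struct wire_pkt p;
    p.path_id = (uint8_t)(s->rr++ % NUM_PATHS);     /* select a path                */
    p.psn = s->next_psn[p.path_id]++;               /* current PSN for that path    */
    return p;
}

int main(void)
{
    struct tx_state s = { { 0 }, 0 };
    for (int i = 0; i < 6; i++) {
        struct wire_pkt p = send_packet(&s);
        printf("path %u psn %u\n", (unsigned)p.path_id, (unsigned)p.psn);
    }
    /* A retransmit simply goes through send_packet() again and receives a fresh
     * PSN on whichever path is chosen, rather than reusing its original PSN. */
    return 0;
}
```

Giving retransmits a fresh PSN on the newly selected path keeps the transmission order on each path monotonic, which is what allows the receiver to reason about delivery and loss per path, as the abstract describes.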
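
For the credit-return entries (patent 9984020 and the related patents and publications sharing the same abstract), a sketch of the aggregated credit-return step: once send blocks egress, their FIFO slots are marked freed, the lowest slot whose credit has not been returned is located, and the contiguous run of freed slots starting there is returned in one update. The slot count and flag arrays stand in for whatever hardware state the patents actually use.

```c
#include <stdio.h>

#define SLOTS 16                        /* assumed number of FIFO slots in a send context */

struct send_ctx {
    int freed[SLOTS];                   /* 1 once the slot's send block has egressed       */
    int returned[SLOTS];                /* 1 once credit for the slot has been returned    */
};

/* Return credits for the contiguous run of freed-but-unreturned slots starting
 * at the lowest unreturned slot. Returns the number of credits returned. */
static int return_credits(struct send_ctx *c)
{
    int lowest = -1;
    for (int i = 0; i < SLOTS; i++) {
        if (!c->returned[i]) { lowest = i; break; }
    }
    if (lowest < 0 || !c->freed[lowest])
        return 0;                       /* lowest unreturned slot has not yet egressed */

    int highest = lowest;
    while (highest + 1 < SLOTS && c->freed[highest + 1] && !c->returned[highest + 1])
        highest++;

    for (int i = lowest; i <= highest; i++)
        c->returned[i] = 1;             /* single aggregated credit return */
    printf("returned credits for slots %d-%d\n", lowest, highest);
    return highest - lowest + 1;
}

int main(void)
{
    struct send_ctx c = { { 0 }, { 0 } };
    c.freed[0] = c.freed[1] = c.freed[2] = 1;   /* slots 0-2 have egressed                   */
    c.freed[4] = 1;                             /* slot 4 egressed out of order; slot 3 has not */
    return_credits(&c);                         /* returns credits for slots 0-2 only        */
    return 0;
}
```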