Patents by Inventor Brad A. Burres

Brad A. Burres has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240028381
    Abstract: A network interface device executes an input/output (I/O) virtualization manager to identify a virtual device defined to include resources of a particular virtual function in a plurality of virtual functions associated with a physical function of a device. An operation is identified to be performed between the virtual device and a system image hosted by a host system coupled to the network interface device. The network interface device emulates the virtual device in the operation using the I/O virtualization manager.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 25, 2024
    Applicant: Intel Corporation
    Inventors: Shaopeng He, Yadong Li, Anjali Singhai Jain, Eliel Louzoun, Israel Ben-Shahar, Brad A. Burres, Bartosz Pawlowski, Anton Nadezhdin, Rashmi Hanagal Nagabhushana, Rupin H. Vakharwala
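
    Illustrative sketch (not from the patent): a minimal Python model of the virtual-device emulation flow in the abstract above, in which a virtual device backed by one virtual function (VF) of a physical function (PF) is serviced by an I/O virtualization manager. All class names, the "post_descriptor" operation, and the queue-based resource model are assumptions made for the example.

        from dataclasses import dataclass, field

        @dataclass
        class VirtualFunction:
            vf_index: int
            queues: list = field(default_factory=list)    # toy stand-in for VF resources

        @dataclass
        class PhysicalFunction:
            name: str
            vfs: list = field(default_factory=list)       # plurality of VFs under this PF

        @dataclass
        class VirtualDevice:
            device_id: str
            backing_vf: VirtualFunction                   # the particular VF whose resources it uses

        class IOVirtualizationManager:
            """Maps virtual-device operations from a host system image onto VF resources."""

            def __init__(self):
                self._devices = {}

            def define_virtual_device(self, device_id, pf, vf_index):
                self._devices[device_id] = VirtualDevice(device_id, pf.vfs[vf_index])

            def emulate(self, device_id, operation, payload):
                # Emulation: the network interface device, not the host, services the
                # operation against the backing VF's resources and returns a completion.
                vf = self._devices[device_id].backing_vf
                if operation == "post_descriptor":
                    vf.queues.append(payload)
                    return {"status": "ok", "vf": vf.vf_index, "depth": len(vf.queues)}
                return {"status": "unsupported", "operation": operation}

        # Usage: one PF with four VFs; a virtual device is carved out of VF 2 and the
        # manager emulates a descriptor post issued by the host's system image.
        pf = PhysicalFunction("pf0", [VirtualFunction(i) for i in range(4)])
        mgr = IOVirtualizationManager()
        mgr.define_virtual_device("vdev0", pf, vf_index=2)
        print(mgr.emulate("vdev0", "post_descriptor", {"addr": 0x1000, "len": 256}))
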
  • Publication number: 20230412365
    Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
    Type: Application
    Filed: September 1, 2023
    Publication date: December 21, 2023
    Applicant: Intel Corporation
    Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
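
    Illustrative sketch (not from the patent): a toy Python rendering of the message-handoff pattern in the abstract above, where a core pulls a message from a queue, classifies the long-latency work it implies, and posts a follow-on message carrying the operation and the next step to the scheduler queue of the matching hardware unit. The unit names, message fields, and classification table are assumptions made for the example.

        from collections import deque

        # One scheduler queue per long-latency hardware unit (toy stand-ins).
        unit_schedulers = {
            "dma_engine": deque(),
            "crypto_engine": deque(),
        }

        message_queue = deque([
            {"packet_id": 1, "needs": "host_memory_read"},
            {"packet_id": 2, "needs": "decrypt"},
        ])

        def classify_long_latency(message):
            """Map a message to the hardware unit for its long-latency operation (assumed mapping)."""
            return {"host_memory_read": "dma_engine", "decrypt": "crypto_engine"}[message["needs"]]

        def process_one(message):
            unit = classify_long_latency(message)
            # The follow-on message carries the identified operation plus the step to
            # run once the hardware unit completes, so no core blocks waiting on it.
            follow_on = {
                "packet_id": message["packet_id"],
                "operation": message["needs"],
                "next_step": "resume_packet_processing",
            }
            unit_schedulers[unit].append(follow_on)

        while message_queue:
            process_one(message_queue.popleft())

        for unit, queued in unit_schedulers.items():
            print(unit, list(queued))
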
  • Patent number: 11843691
    Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
    Type: Grant
    Filed: June 10, 2021
    Date of Patent: December 12, 2023
    Assignee: Intel Corporation
    Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
  • Publication number: 20230342214
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed for a remote processing acceleration engine. Disclosed is an infrastructure processing unit (IPU) comprising an offload engine driver to access a remote procedure call (RPC) from business logic circuitry; network interface circuitry; and RPC offload circuitry to select a destination to perform an operation associated with the RPC, the destination selected based on its ability to perform the operation using remote direct memory access (RDMA), and to cause communication of the operation to the destination via the network interface circuitry.
    Type: Application
    Filed: June 30, 2023
    Publication date: October 26, 2023
    Inventors: Thomas E. Willis, Vered Bar Bracha, Dinesh Kumar, David Anderson, Dror Bohrer, Stephen Ibanez, Salma Johnson, Brad Burres
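
    Illustrative sketch (not from the patent): a minimal Python rendering of the destination-selection idea above, in which an RPC taken from application ("business") logic is forwarded to a remote node chosen for its ability to execute the operation over RDMA. The capability table, node names, and send_via_nic() stub are assumptions made for the example.

        rpc = {"method": "kv_lookup", "args": {"key": "user:42"}}

        # Hypothetical capability inventory the RPC offload circuitry might consult.
        destinations = [
            {"node": "node-a", "rdma": False, "ops": {"kv_lookup"}},
            {"node": "node-b", "rdma": True,  "ops": {"kv_lookup", "kv_store"}},
            {"node": "node-c", "rdma": True,  "ops": {"kv_store"}},
        ]

        def select_destination(call, candidates):
            """Prefer a destination that can perform the operation using RDMA."""
            for cand in candidates:
                if cand["rdma"] and call["method"] in cand["ops"]:
                    return cand
            return None

        def send_via_nic(dest, call):
            # Stand-in for the network interface circuitry communicating the operation.
            print(f"forwarding {call['method']} to {dest['node']} over RDMA")

        chosen = select_destination(rpc, destinations)
        if chosen is not None:
            send_via_nic(chosen, rpc)
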
  • Patent number: 11687264
    Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: June 27, 2023
    Assignee: Intel Corporation
    Inventors: Chih-Jen Chang, Brad Burres, Jose Niell, Dan Biederman, Robert Cone, Pat Wang, Kenneth Keels, Patrick Fleming
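
    Illustrative sketch (not from the patent): a toy Python walk-through of the encapsulation round trip described above, where an encrypted packet is wrapped in an outer packet, handed to a stand-in remote accelerator for decryption, and then delivered to the host processor. The outer-header layout, the XOR "cipher", and the accelerator address are placeholders chosen only to make the flow runnable.

        ACCEL_ADDR = "02:00:00:00:00:aa"   # hypothetical remote accelerator address
        KEY = 0x5A                         # toy key for the stand-in cipher

        def needs_preprocessing(packet):
            return packet["flags"] == "encrypted"

        def encapsulate(packet, accel_addr):
            # Wrap the original packet inside an outer packet addressed to the accelerator.
            return {"outer_dst": accel_addr, "inner": packet}

        def remote_accelerator(encapsulating_packet):
            # Remote side: unwrap, "decrypt" (toy XOR), and send the result back.
            inner = encapsulating_packet["inner"]
            clear = bytes(b ^ KEY for b in inner["payload"])
            return {"flags": "clear", "payload": clear}

        def nic_receive(packet, deliver_to_host):
            if needs_preprocessing(packet):
                packet = remote_accelerator(encapsulate(packet, ACCEL_ADDR))
            deliver_to_host(packet)

        ciphertext = bytes(b ^ KEY for b in b"hello")
        nic_receive({"flags": "encrypted", "payload": ciphertext},
                    deliver_to_host=lambda p: print("host got:", p["payload"]))
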
  • Publication number: 20210306142
    Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
    Type: Application
    Filed: June 10, 2021
    Publication date: September 30, 2021
    Applicant: Intel Corporation
    Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
  • Patent number: 10783100
    Abstract: Technologies for flexible I/O endpoint acceleration include a computing device having a root complex, a soft endpoint coupled to the root complex, and an offload complex coupled to the soft endpoint. The soft endpoint establishes an emulated endpoint hierarchy based on endpoint firmware. The computing device may program the endpoint firmware. The soft endpoint receives an I/O transaction that originates from the root complex and determines whether to process the I/O transaction. The soft endpoint may process the I/O transaction or forward the I/O transaction to the offload complex. The soft endpoint may encapsulate the I/O transaction with metadata and forward the encapsulated transaction to the offload complex. The soft endpoint may store responses from the offload complex in a history buffer and retrieve the responses in response to retried I/O transactions. The I/O transaction may be a PCI Express transaction layer packet. Other embodiments are described and claimed.
    Type: Grant
    Filed: March 27, 2019
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Matthew J. Adiletta, Brad Burres, Duane Galbi, Amit Kumar, Yadong Li, Salma Mirza, Jose Niell, Thomas E. Willis, William Duggan
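
    Illustrative sketch (not from the patent): a small Python model of the soft endpoint's process-or-forward decision and its retry history buffer, with dictionaries standing in for PCI Express transaction layer packets. The field names, the local-versus-offload rule, and the metadata format are assumptions made for the example.

        class OffloadComplex:
            def handle(self, encapsulated):
                # Pretend the offload complex services the transaction and responds.
                tlp = encapsulated["tlp"]
                return {"tag": tlp["tag"], "data": f"offload-result-for-{tlp['addr']:#x}"}

        class SoftEndpoint:
            def __init__(self, offload):
                self.offload = offload
                self.history = {}          # tag -> completed response, for retried transactions

            def handle(self, tlp):
                if tlp["tag"] in self.history:
                    # A retried transaction: replay the stored response rather than
                    # re-executing the operation.
                    return self.history[tlp["tag"]]
                if tlp["type"] == "config_read":
                    response = {"tag": tlp["tag"], "data": "local-config-value"}
                else:
                    # Encapsulate with metadata and forward to the offload complex.
                    encapsulated = {"meta": {"source": "root_complex"}, "tlp": tlp}
                    response = self.offload.handle(encapsulated)
                self.history[tlp["tag"]] = response
                return response

        endpoint = SoftEndpoint(OffloadComplex())
        first = endpoint.handle({"tag": 7, "type": "mem_read", "addr": 0x2000})
        retry = endpoint.handle({"tag": 7, "type": "mem_read", "addr": 0x2000})
        print(first == retry)   # True: the retry is served from the history buffer
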
  • Patent number: 10747457
    Abstract: Technologies for processing network packets in an agent-mesh architecture include a network interface controller (NIC) of a computing device configured to write, by a network fabric interface of a memory fabric of the NIC, a received network packet to the memory fabric in a distributed fashion. The network fabric interface is configured to send an event message indicating the received network packet to a packet processor communicatively coupled to the memory fabric. The packet processor is configured to read, in response to having received the generated event message, at least a portion of the received network packet from the memory fabric, identify an agent of the NIC for additional processing of the received network packet, generate a network packet received event message indicating the received network packet is available for processing, and transmit the network packet received event message to the identified agent. Other embodiments are described herein.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Brad Burres, Ronen Chayat, Alain Gravel, Robert Hathaway, Amit Y. Kumar, Jose Niell, Nadav Turbovich
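
    Illustrative sketch (not from the patent): a compact Python rendering of the event-message flow above, in which the network fabric interface announces a packet arrival, a packet processor reads only a portion of the packet to identify the right agent, and that agent receives its own packet-received event. The agent names and the header-based routing rule are invented for the example.

        from collections import deque

        memory_fabric = {}                         # location -> packet bytes
        agents = {"crypto_agent": deque(), "forwarding_agent": deque()}
        processor_events = deque()

        def fabric_interface_receive(location, packet):
            memory_fabric[location] = packet
            processor_events.append({"event": "packet_written", "location": location})

        def packet_processor():
            while processor_events:
                event = processor_events.popleft()
                header = memory_fabric[event["location"]][:1]    # read only a portion
                agent = "crypto_agent" if header == b"\x01" else "forwarding_agent"
                agents[agent].append({"event": "packet_received",
                                      "location": event["location"]})

        fabric_interface_receive(0x10, b"\x01payload-one")
        fabric_interface_receive(0x20, b"\x00payload-two")
        packet_processor()
        for name, queued in agents.items():
            print(name, list(queued))
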
  • Patent number: 10732879
    Abstract: Technologies for processing network packets by a network interface controller (NIC) of a computing device include a network interface, a packet processor, and a controller device of the NIC, each communicatively coupled to a memory fabric of the NIC. The packet processor is configured to receive an event message from the memory fabric and transmit a message to the controller device, wherein the message indicates that a network packet has been received and includes a memory fabric location pointer for the packet. The controller device is configured to fetch at least a portion of the received network packet from the memory fabric, write an inbound descriptor usable by one or more on-die cores of the NIC to perform an operation on the fetched portion, and restructure the network packet as a function of an outbound descriptor written by the on-die cores subsequent to performing the operation. Other embodiments are described herein.
    Type: Grant
    Filed: September 30, 2017
    Date of Patent: August 4, 2020
    Assignee: Intel Corporation
    Inventors: Jose Niell, Brad Burres, Erik McShane, Naru Dames Sundar, Alain Gravel
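
    Illustrative sketch (not from the patent): a toy Python version of the inbound/outbound descriptor handshake described above, in which the controller device writes an inbound descriptor for the on-die cores, the cores write an outbound descriptor after doing the work, and the controller restructures the packet accordingly. The descriptor fields and the strip-the-outer-header operation are illustrative choices, not the patent's descriptor format.

        memory_fabric = {0x40: b"OUTER|inner-packet-bytes"}

        def on_die_core(inbound):
            packet = memory_fabric[inbound["location"]]
            if inbound["op"] == "strip_outer":
                trim = packet.index(b"|") + 1       # drop the toy outer header
            else:
                trim = 0
            return {"trim_bytes": trim}             # outbound descriptor

        def controller_device(location):
            packet = memory_fabric[location]
            # The inbound descriptor tells the on-die cores where the work item
            # lives and what operation to perform on it.
            inbound = {"location": location, "length": len(packet), "op": "strip_outer"}
            outbound = on_die_core(inbound)
            # Restructure the packet according to what the cores wrote back.
            memory_fabric[location] = packet[outbound["trim_bytes"]:]
            return memory_fabric[location]

        print(controller_device(0x40))              # b'inner-packet-bytes'
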
  • Publication number: 20200065271
    Abstract: Technologies for flexible I/O endpoint acceleration include a computing device having a root complex, a soft endpoint coupled to the root complex, and an offload complex coupled to the soft endpoint. The soft endpoint establishes an emulated endpoint hierarchy based on endpoint firmware. The computing device may program the endpoint firmware. The soft endpoint receives an I/O transaction that originates from the root complex and determines whether to process the I/O transaction. The soft endpoint may process the I/O transaction or forward the I/O transaction to the offload complex. The soft endpoint may encapsulate the I/O transaction with metadata and forward the encapsulated transaction to the offload complex. The soft endpoint may store responses from the offload complex in a history buffer and retrieve the responses in response to retried I/O transactions. The I/O transaction may be a PCI Express transaction layer packet. Other embodiments are described and claimed.
    Type: Application
    Filed: March 27, 2019
    Publication date: February 27, 2020
    Inventors: Matthew J. Adiletta, Brad Burres, Duane Galbi, Amit Kumar, Yadong Li, Salma Mirza, Jose Niell, Thomas E. Willis, William Duggan
  • Publication number: 20190044809
    Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
    Type: Application
    Filed: December 6, 2017
    Publication date: February 7, 2019
    Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
  • Publication number: 20180152383
    Abstract: Technologies for processing network packets in an agent-mesh architecture include a network interface controller (NIC) of a computing device configured to write, by a network fabric interface of a memory fabric of the NIC, a received network packet to the memory fabric in a distributed fashion. The network fabric interface is configured to send an event message indicating the received network packet to a packet processor communicatively coupled to the memory fabric. The packet processor is configured to read, in response to having received the generated event message, at least a portion of the received network packet from the memory fabric, identify an agent of the NIC for additional processing of the received network packet, generate a network packet received event message indicating the received network packet is available for processing, and transmit the network packet received event message to the identified agent. Other embodiments are described herein.
    Type: Application
    Filed: September 29, 2017
    Publication date: May 31, 2018
    Inventors: Brad Burres, Ronen Chayat, Alain Gravel, Robert Hathaway, Amit Y. Kumar, Jose Niell, Nadav Turbovich
  • Publication number: 20180152317
    Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
    Type: Application
    Filed: September 29, 2017
    Publication date: May 31, 2018
    Inventors: Chih-Jen Chang, Brad Burres, Jose Niell, Dan Biederman, Robert Cone, Pat Wang, Kenneth Keels, Patrick Fleming
  • Publication number: 20180152540
    Abstract: Technologies for processing network packets by a network interface controller (NIC) of a computing device include a network interface, a packet processor, and a controller device of the NIC, each communicatively coupled to a memory fabric of the NIC. The packet processor is configured to receive an event message from the memory fabric and transmit a message to the controller device, wherein the message indicates that a network packet has been received and includes a memory fabric location pointer for the packet. The controller device is configured to fetch at least a portion of the received network packet from the memory fabric, write an inbound descriptor usable by one or more on-die cores of the NIC to perform an operation on the fetched portion, and restructure the network packet as a function of an outbound descriptor written by the on-die cores subsequent to performing the operation. Other embodiments are described herein.
    Type: Application
    Filed: September 30, 2017
    Publication date: May 31, 2018
    Inventors: Jose Niell, Brad Burres, Erik McShane, Naru Sundar, Alain Gravel
  • Patent number: 7827471
    Abstract: A method is described for use in determining a residue of a message. The method includes loading at least a portion of each of a set of polynomials derived from a first polynomial, g(x), and determining the residue using a set of stages. Individual ones of the stages apply a respective one of the derived set of polynomials to data output by a preceding one of the set of stages.
    Type: Grant
    Filed: October 12, 2006
    Date of Patent: November 2, 2010
    Assignee: Intel Corporation
    Inventors: William C. Hasenplaugh, Brad A. Burres, Gunnar Gaubatz
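
    Worked example (not from the patent): the residue of a message polynomial modulo g(x) over GF(2), computed once directly and once in two stages that fold the high half back in using a constant derived from g(x) (here x^k mod g(x)). The specific g(x) below is the CRC-8 polynomial, chosen only for illustration; the staged and direct results agree by construction.

        G = 0b100000111            # g(x) = x^8 + x^2 + x + 1 (CRC-8)

        def poly_mod(a, g):
            """Remainder of GF(2) polynomial a(x) divided by g(x); ints encode polynomials."""
            glen = g.bit_length()
            while a.bit_length() >= glen:
                a ^= g << (a.bit_length() - glen)
            return a

        def poly_mul(a, b):
            """Carry-less (GF(2)) polynomial multiplication."""
            out = 0
            while b:
                if b & 1:
                    out ^= a
                a <<= 1
                b >>= 1
            return out

        message = int.from_bytes(b"residue example", "big")

        # Direct, single-pass residue.
        direct = poly_mod(message, G)

        # Staged residue: reduce the high half, then fold it back in with the
        # precomputed constant x^k mod g(x), a polynomial derived from g(x).
        k = 64
        hi, lo = message >> k, message & ((1 << k) - 1)
        x_k_mod_g = poly_mod(1 << k, G)
        staged = poly_mod(poly_mul(poly_mod(hi, G), x_k_mod_g) ^ lo, G)

        print(hex(direct), hex(staged), direct == staged)    # both residues match
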
  • Publication number: 20080092020
    Abstract: A method is described for use in determining a residue of a message. The method includes loading at least a portion of each of a set of polynomials derived from a first polynomial, g(x), and determining the residue using a set of stages. Individual ones of the stages apply a respective one of the derived set of polynomials to data output by a preceding one of the set of stages.
    Type: Application
    Filed: October 12, 2006
    Publication date: April 17, 2008
    Inventors: William C. Hasenplaugh, Brad A. Burres, Gunnar Gaubatz
  • Publication number: 20060146864
    Abstract: Techniques for arbitrating and scheduling thread usage in multi-threaded compute engines. Various schemes are disclosed for allocating compute (execution) usage of compute engines supporting multiple hardware contexts. The schemes include non-pre-emptive (cooperative) round-robin, priority-based round-robin with pre-emption, time division, cooperative round-robin with time division, and priority-based round-robin with pre-emption and time division. Aspects of the foregoing schemes may also be combined to form new schemes. The schemes enable finer control of thread execution in pipeline execution environments, such as employed for performing packet-processing operations.
    Type: Application
    Filed: December 30, 2004
    Publication date: July 6, 2006
    Inventors: Mark Rosenbluth, Peter Barry, Paul Dormitzer, Brad Burres
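
    Illustrative sketch (not from the patent): a simplified Python simulation of one of the listed schemes, priority-based round-robin with time division, in which each priority level keeps its own round-robin queue and a thread runs for at most one time slice before rotating to the back of its level. Thread names, priorities, and work amounts are made up; pre-emption here happens only at time-slice boundaries.

        from collections import deque

        QUANTUM = 3                      # work units a thread may run before rotating

        # Priority 0 is highest; each priority level has its own round-robin queue
        # of (thread name, remaining work units).
        ready = {
            0: deque([("crypto", 4)]),
            1: deque([("rx_parse", 5), ("tx_build", 2)]),
        }

        def pick_next():
            for prio in sorted(ready):
                if ready[prio]:
                    return prio, ready[prio].popleft()
            return None, None

        while any(ready.values()):
            prio, (name, remaining) = pick_next()
            ran = min(QUANTUM, remaining)
            remaining -= ran
            print(f"ran {name} (prio {prio}) for {ran} unit(s), {remaining} left")
            if remaining:
                # Unfinished work rotates to the back of its priority level, so a
                # higher-priority thread is picked first on the next arbitration.
                ready[prio].append((name, remaining))
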
  • Publication number: 20050135604
    Abstract: An architecture to perform a hash algorithm. Embodiments of the invention use processor architecture logic to implement the addition of initial state information to intermediate state information required by hash algorithms, while reducing the contribution of that addition operation to the critical path of the algorithm's performance within the processor architecture.
    Type: Application
    Filed: December 22, 2003
    Publication date: June 23, 2005
    Inventors: Wajdi Feghali, Gilbert Wolrich, Matthew Adiletta, Brad Burres
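
    Illustrative sketch (not from the patent): the addition the abstract refers to is the per-block feed-forward found in hash functions such as SHA-1 and SHA-256, where the state that entered a block is added word-by-word (mod 2**32) to the state produced by the rounds. The toy_rounds() function below is a placeholder scrambler, not a real compression function; only the final addition step illustrates the point.

        MASK32 = 0xFFFFFFFF

        # SHA-256's initial hash values, used here only as a recognizable initial state.
        initial_state = [0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A,
                         0x510E527F, 0x9B05688C, 0x1F83D9AB, 0x5BE0CD19]

        def toy_rounds(state, block):
            """Stand-in for the message-dependent rounds that scramble the working state."""
            working = list(state)
            for i, byte in enumerate(block):
                rotated = (working[i % 8] << 5 | working[i % 8] >> 27) & MASK32
                working[i % 8] = rotated ^ byte
            return working

        def compress(state, block):
            intermediate = toy_rounds(state, block)
            # Feed-forward: add the initial (incoming) state to the intermediate
            # state, word by word, modulo 2**32. This is the addition the patent targets.
            return [(a + b) & MASK32 for a, b in zip(state, intermediate)]

        block = b"one 64-byte block of message ...".ljust(64, b"\0")
        print([hex(w) for w in compress(initial_state, block)])
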