Patents by Inventor Brad A. Burres
Brad A. Burres has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240028381
Abstract: A network interface device executes an input/output (I/O) virtualization manager to identify a virtual device defined to include resources of a particular virtual function in a plurality of virtual functions associated with a physical function of a device. An operation is identified to be performed between the virtual device and a system image hosted by a host system coupled to the network interface device. The network interface device emulates the virtual device in the operation using the I/O virtualization manager.
Type: Application
Filed: September 28, 2023
Publication date: January 25, 2024
Applicant: Intel Corporation
Inventors: Shaopeng He, Yadong Li, Anjali Singhai Jain, Eliel Louzoun, Israel Ben-Shahar, Brad A. Burres, Bartosz Pawlowski, Anton Nadezhdin, Rashmi Hanagal Nagabhushana, Rupin H. Vakharwala
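The mapping the abstract describes can be sketched in a few lines. This is a toy model, not the patented implementation: the class names, the resource dictionaries, and the `emulate` behavior are all invented for illustration of how a virtual device might be backed by one virtual function (VF) of a physical function (PF).

```python
# Hypothetical sketch: an I/O virtualization manager maps a "virtual device"
# onto one virtual function (VF) of a physical function (PF), then emulates
# operations between that device and a host system image. All names invented.

class PhysicalFunction:
    def __init__(self, name, num_vfs):
        self.name = name
        # Each VF is just a dict of resources in this toy model.
        self.vfs = [{"pf": name, "vf_index": i} for i in range(num_vfs)]

class IOVirtualizationManager:
    def __init__(self):
        self.virtual_devices = {}

    def define_virtual_device(self, dev_id, pf, vf_index):
        # Identify the particular VF whose resources back the virtual device.
        self.virtual_devices[dev_id] = pf.vfs[vf_index]

    def emulate(self, dev_id, operation):
        # Emulate an operation between the virtual device and a system image
        # by forwarding it to the backing VF's resources.
        vf = self.virtual_devices[dev_id]
        return f"{operation} on {vf['pf']}/vf{vf['vf_index']}"

pf = PhysicalFunction("pf0", num_vfs=4)
mgr = IOVirtualizationManager()
mgr.define_virtual_device("vdev0", pf, vf_index=2)
print(mgr.emulate("vdev0", "read"))  # read on pf0/vf2
```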
-
Publication number: 20230412365
Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
Type: Application
Filed: September 1, 2023
Publication date: December 21, 2023
Applicant: Intel Corporation
Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
-
Patent number: 11843691
Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
Type: Grant
Filed: June 10, 2021
Date of Patent: December 12, 2023
Assignee: Intel Corporation
Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
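The message flow in this abstract can be modeled as a small pipeline. The sketch below is purely illustrative, not the patented design: the scheduler names, message fields, and classification rule are invented. It shows a core pulling a message from a queue, identifying the long-latency operation it implies, and emitting a follow-up message, tagged with the next step, to the scheduler for that operation.

```python
# Illustrative sketch (invented names) of the described message flow:
# retrieve a message, classify the long-latency operation it needs, and
# route a follow-up message to the matching hardware unit scheduler.

from collections import deque

SCHEDULERS = {"dma_read": deque(), "crypto": deque()}

def classify(msg):
    # Decide which long-latency operation this packet portion needs.
    return "crypto" if msg.get("encrypted") else "dma_read"

def process(queue):
    msg = queue.popleft()                 # retrieve from the message queue
    op = classify(msg)                    # identify the long-latency op
    # Follow-up message carries the op and the next step after completion.
    follow_up = {"op": op, "next_step": "deliver", "pkt": msg["pkt"]}
    SCHEDULERS[op].append(follow_up)      # transmit to the unit scheduler
    return follow_up

q = deque([{"pkt": 1, "encrypted": True}])
print(process(q))
```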
-
Publication number: 20230342214
Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed for a remote processing acceleration engine. Disclosed is an infrastructure processing unit (IPU) comprising an offload engine driver to access a remote procedure call (RPC) from business logic circuitry, network interface circuitry, and RPC offload circuitry to select a destination to perform an operation associated with the RPC call, the destination selected based on an ability of the destination to perform the operation using remote direct memory access (RDMA), and cause communication of the operation to the destination via the network interface circuitry.
Type: Application
Filed: June 30, 2023
Publication date: October 26, 2023
Inventors: Thomas E. Willis, Vered Bar Bracha, Dinesh Kumar, David Anderson, Dror Bohrer, Stephen Ibanez, Salma Johnson, Brad Burres
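The selection criterion here — pick a destination based on its ability to perform the operation via RDMA — can be sketched as a simple policy function. The destination table, field names, and tie-breaking rule below are invented for illustration; the patent's actual selection logic is not specified in the abstract.

```python
# Hedged sketch of the selection policy described: among candidate
# destinations able to perform the RPC's operation, prefer one that can
# use remote direct memory access (RDMA). All fields are illustrative.

def select_destination(rpc, destinations):
    capable = [d for d in destinations if rpc["op"] in d["ops"]]
    rdma_capable = [d for d in capable if d["rdma"]]
    # Fall back to any capable destination if none supports RDMA.
    return (rdma_capable or capable)[0]["name"]

dests = [
    {"name": "local_cpu", "ops": {"lookup"}, "rdma": False},
    {"name": "remote_ipu", "ops": {"lookup"}, "rdma": True},
]
print(select_destination({"op": "lookup"}, dests))  # remote_ipu
```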
-
Patent number: 11687264
Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
Type: Grant
Filed: September 29, 2017
Date of Patent: June 27, 2023
Assignee: Intel Corporation
Inventors: Chih-Jen Chang, Brad Burres, Jose Niell, Dan Biederman, Robert Cone, Pat Wang, Kenneth Keels, Patrick Fleming
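The encapsulate/pre-process/return round trip can be shown with a toy model. Everything here is invented for illustration: the outer-packet format, the accelerator address, and the XOR "decryption" that stands in for a real cipher.

```python
# Toy illustration (all formats invented) of the encapsulation step: the NIC
# wraps a packet needing pre-processing in an outer packet addressed to a
# remote accelerator, which "decrypts" it and returns the inner payload.

def encapsulate(pkt, accel_addr):
    return {"dst": accel_addr, "inner": pkt}

def accelerator_preprocess(outer):
    inner = outer["inner"]
    # Stand-in for decryption: XOR each byte with a fixed key byte.
    return bytes(b ^ 0x55 for b in inner)

pkt = bytes(b ^ 0x55 for b in b"hello")      # "encrypted" payload
outer = encapsulate(pkt, accel_addr="10.0.0.9")
print(accelerator_preprocess(outer))         # b'hello'
```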
-
Publication number: 20210306142
Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
Type: Application
Filed: June 10, 2021
Publication date: September 30, 2021
Applicant: Intel Corporation
Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
-
Patent number: 10783100
Abstract: Technologies for flexible I/O endpoint acceleration include a computing device having a root complex, a soft endpoint coupled to the root complex, and an offload complex coupled to the soft endpoint. The soft endpoint establishes an emulated endpoint hierarchy based on endpoint firmware. The computing device may program the endpoint firmware. The soft endpoint receives an I/O transaction that originates from the root complex and determines whether to process the I/O transaction. The soft endpoint may process the I/O transaction or forward the I/O transaction to the offload complex. The soft endpoint may encapsulate the I/O transaction with metadata and forward the encapsulated transaction to the offload complex. The soft endpoint may store responses from the offload complex in a history buffer and retrieve the responses in response to retried I/O transactions. The I/O transaction may be a PCI Express transaction layer packet. Other embodiments are described and claimed.
Type: Grant
Filed: March 27, 2019
Date of Patent: September 22, 2020
Assignee: Intel Corporation
Inventors: Matthew J. Adiletta, Brad Burres, Duane Galbi, Amit Kumar, Yadong Li, Salma Mirza, Jose Niell, Thomas E. Willis, William Duggan
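The soft endpoint's decision path — process locally, or encapsulate and forward to the offload complex, with a history buffer answering retried transactions — can be sketched as below. The class, the tag scheme, and the transaction kinds are all invented for illustration; real PCI Express transactions carry considerably more state.

```python
# Sketch of the soft-endpoint decision path in the abstract: handle an I/O
# transaction locally or forward it (encapsulated with metadata) to the
# offload complex, keeping responses in a history buffer so a retried
# transaction is answered from the buffer. Names are illustrative.

class SoftEndpoint:
    def __init__(self, offload):
        self.offload = offload
        self.history = {}           # tag -> stored response, for retries

    def handle(self, txn):
        tag = txn["tag"]
        if tag in self.history:     # retried transaction: replay response
            return self.history[tag]
        if txn["kind"] == "config_read":
            resp = "local:0x1234"   # processed by the soft endpoint itself
        else:
            # Encapsulate with metadata and forward to the offload complex.
            resp = self.offload({"meta": {"tag": tag}, "txn": txn})
        self.history[tag] = resp
        return resp

ep = SoftEndpoint(offload=lambda enc: f"offload:{enc['txn']['kind']}")
print(ep.handle({"tag": 1, "kind": "mem_write"}))   # forwarded to offload
print(ep.handle({"tag": 1, "kind": "mem_write"}))   # replayed from history
```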
-
Patent number: 10747457
Abstract: Technologies for processing network packets in an agent-mesh architecture include a network interface controller (NIC) of a computing device configured to write, by a network fabric interface of a memory fabric of the NIC, a received network packet to the memory fabric in a distributed fashion. The network fabric interface is configured to send an event message indicating the received network packet to a packet processor communicatively coupled to the memory fabric. The packet processor is configured to read, in response to having received the generated event message, at least a portion of the received network packet from the memory fabric, identify an agent of the NIC for additional processing of the received network packet, generate a network packet received event message indicating the received network packet is available for processing, and transmit the network packet received event message to the identified agent. Other embodiments are described herein.
Type: Grant
Filed: September 29, 2017
Date of Patent: August 18, 2020
Assignee: Intel Corporation
Inventors: Brad Burres, Ronen Chayat, Alain Gravel, Robert Hathaway, Amit Y. Kumar, Jose Niell, Nadav Turbovich
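The event chain in this abstract — fabric write, event to the packet processor, agent identification, "packet received" event to that agent — can be modeled in miniature. The agent names, the in-memory fabric, and the selection rule are invented for illustration only.

```python
# Invented-name sketch of the agent-mesh event flow: the fabric interface
# writes a packet into the memory fabric and posts an event; a packet
# processor reads the packet, identifies the agent that should continue
# processing, and forwards a "packet received" event to that agent.

memory_fabric = {}
agent_inboxes = {"crypto_agent": [], "forward_agent": []}

def fabric_write(addr, pkt):
    memory_fabric[addr] = pkt
    return {"event": "pkt_written", "addr": addr}

def packet_processor(event):
    pkt = memory_fabric[event["addr"]]        # read a portion of the packet
    agent = "crypto_agent" if pkt["ipsec"] else "forward_agent"
    agent_inboxes[agent].append(
        {"event": "pkt_received", "addr": event["addr"]}
    )
    return agent

ev = fabric_write(0x100, {"ipsec": True})
print(packet_processor(ev))                    # crypto_agent
```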
-
Patent number: 10732879
Abstract: Technologies for processing network packets by a network interface controller (NIC) of a computing device include a network interface, a packet processor, and a controller device of the NIC, each communicatively coupled to a memory fabric of the NIC. The packet processor is configured to receive an event message from the memory fabric and transmit a message to the controller device, wherein the message indicates the network packet has been received and includes the memory fabric location pointer. The controller device is configured to fetch at least a portion of the received network packet from the memory fabric, write an inbound descriptor usable by one or more on-die cores of the NIC to perform an operation on the fetched portion, and restructure the network packet as a function of an outbound descriptor written by the on-die cores subsequent to performing the operation. Other embodiments are described herein.
Type: Grant
Filed: September 30, 2017
Date of Patent: August 4, 2020
Assignee: Intel Corporation
Inventors: Jose Niell, Brad Burres, Erik McShane, Naru Dames Sundar, Alain Gravel
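The inbound/outbound descriptor handshake can be sketched as a pair of functions. The descriptor fields, the VLAN-insert operation, and the byte values are invented for illustration; the abstract does not specify what operations the on-die cores perform.

```python
# Illustrative model (names invented) of the descriptor handshake: the
# controller fetches part of the packet, writes an inbound descriptor for
# the on-die cores, and then restructures the packet according to the
# outbound descriptor the cores write back after performing the operation.

def controller(pkt, core):
    inbound = {"data": pkt[:16], "op": "insert_vlan"}  # inbound descriptor
    outbound = core(inbound)                           # cores process it
    # Restructure the packet as a function of the outbound descriptor.
    return outbound["prepend"] + pkt

def on_die_core(desc):
    assert desc["op"] == "insert_vlan"
    return {"prepend": b"\x81\x00\x00\x2a"}            # outbound descriptor

pkt = b"\xaa" * 32
print(controller(pkt, on_die_core)[:4])
```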
-
Publication number: 20200065271
Abstract: Technologies for flexible I/O endpoint acceleration include a computing device having a root complex, a soft endpoint coupled to the root complex, and an offload complex coupled to the soft endpoint. The soft endpoint establishes an emulated endpoint hierarchy based on endpoint firmware. The computing device may program the endpoint firmware. The soft endpoint receives an I/O transaction that originates from the root complex and determines whether to process the I/O transaction. The soft endpoint may process the I/O transaction or forward the I/O transaction to the offload complex. The soft endpoint may encapsulate the I/O transaction with metadata and forward the encapsulated transaction to the offload complex. The soft endpoint may store responses from the offload complex in a history buffer and retrieve the responses in response to retried I/O transactions. The I/O transaction may be a PCI Express transaction layer packet. Other embodiments are described and claimed.
Type: Application
Filed: March 27, 2019
Publication date: February 27, 2020
Inventors: Matthew J. Adiletta, Brad Burres, Duane Galbi, Amit Kumar, Yadong Li, Salma Mirza, Jose Niell, Thomas E. Willis, William Duggan
-
Publication number: 20190044809
Abstract: Technologies for processing network packets by a host interface of a network interface controller (NIC) of a compute device. The host interface is configured to retrieve, by a symmetric multi-purpose (SMP) array of the host interface, a message from a message queue of the host interface and process, by a processor core of a plurality of processor cores of the SMP array, the message to identify a long-latency operation to be performed on at least a portion of a network packet associated with the message. The host interface is further configured to generate another message which includes an indication of the identified long-latency operation and a next step to be performed upon completion. Additionally, the host interface is configured to transmit the other message to a corresponding hardware unit scheduler as a function of the subsequent long-latency operation to be performed. Other embodiments are described herein.
Type: Application
Filed: December 6, 2017
Publication date: February 7, 2019
Inventors: Thomas E. Willis, Brad Burres, Amit Kumar
-
Publication number: 20180152383
Abstract: Technologies for processing network packets in an agent-mesh architecture include a network interface controller (NIC) of a computing device configured to write, by a network fabric interface of a memory fabric of the NIC, a received network packet to the memory fabric in a distributed fashion. The network fabric interface is configured to send an event message indicating the received network packet to a packet processor communicatively coupled to the memory fabric. The packet processor is configured to read, in response to having received the generated event message, at least a portion of the received network packet from the memory fabric, identify an agent of the NIC for additional processing of the received network packet, generate a network packet received event message indicating the received network packet is available for processing, and transmit the network packet received event message to the identified agent. Other embodiments are described herein.
Type: Application
Filed: September 29, 2017
Publication date: May 31, 2018
Inventors: Brad Burres, Ronen Chayat, Alain Gravel, Robert Hathaway, Amit Y. Kumar, Jose Niell, Nadav Turbovich
-
Publication number: 20180152540
Abstract: Technologies for processing network packets by a network interface controller (NIC) of a computing device include a network interface, a packet processor, and a controller device of the NIC, each communicatively coupled to a memory fabric of the NIC. The packet processor is configured to receive an event message from the memory fabric and transmit a message to the controller device, wherein the message indicates the network packet has been received and includes the memory fabric location pointer. The controller device is configured to fetch at least a portion of the received network packet from the memory fabric, write an inbound descriptor usable by one or more on-die cores of the NIC to perform an operation on the fetched portion, and restructure the network packet as a function of an outbound descriptor written by the on-die cores subsequent to performing the operation. Other embodiments are described herein.
Type: Application
Filed: September 30, 2017
Publication date: May 31, 2018
Inventors: Jose Niell, Brad Burres, Erik McShane, Naru Sundar, Alain Gravel
-
Publication number: 20180152317
Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
Type: Application
Filed: September 29, 2017
Publication date: May 31, 2018
Inventors: Chih-Jen Chang, Brad Burres, Jose Niell, Dan Biederman, Robert Cone, Pat Wang, Kenneth Keels, Patrick Fleming
-
Patent number: 7827471
Abstract: A method is described for use in determining a residue of a message. The method includes loading at least a portion of each of a set of polynomials derived from a first polynomial, g(x), and determining the residue using a set of stages. Individual ones of the stages apply a respective one of the derived set of polynomials to data output by a preceding one of the set of stages.
Type: Grant
Filed: October 12, 2006
Date of Patent: November 2, 2010
Assignee: Intel Corporation
Inventors: William C. Hasenplaugh, Brad A. Burres, Gunnar Gaubatz
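The "residue of a message" here is the CRC-style remainder of the message, viewed as a polynomial over GF(2), modulo g(x). The patent computes it in stages using polynomials precomputed from g(x); the sketch below instead does the straightforward bit-serial reduction, just to make the residue itself concrete. The CRC-3 example polynomial is chosen for illustration.

```python
# Minimal GF(2) polynomial-remainder routine: the message, shifted left by
# deg(g) per the standard CRC convention, is reduced modulo g(x) by
# conditionally XOR-ing shifted copies of g from the top bit down.

def gf2_residue(message, g, g_degree):
    r = message << g_degree
    for bit in range(r.bit_length() - 1, g_degree - 1, -1):
        if r >> bit & 1:
            r ^= g << (bit - g_degree)
    return r

# CRC-3 example: g(x) = x^3 + x + 1 -> 0b1011, message 0b11010.
print(bin(gf2_residue(0b11010, 0b1011, 3)))  # 0b10
```

Hardware like the patent's avoids this bit-at-a-time loop by folding wide chunks of the message with precomputed multiples of g(x), but the residue produced is the same.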
-
Publication number: 20080092020
Abstract: A method is described for use in determining a residue of a message. The method includes loading at least a portion of each of a set of polynomials derived from a first polynomial, g(x), and determining the residue using a set of stages. Individual ones of the stages apply a respective one of the derived set of polynomials to data output by a preceding one of the set of stages.
Type: Application
Filed: October 12, 2006
Publication date: April 17, 2008
Inventors: William C. Hasenplaugh, Brad A. Burres, Gunnar Gaubatz
-
Publication number: 20060146864
Abstract: Techniques for arbitrating and scheduling thread usage in multi-threaded compute engines. Various schemes are disclosed for allocating compute (execution) usage of compute engines supporting multiple hardware contexts. The schemes include non-pre-emptive (cooperative) round-robin, priority-based round-robin with pre-emption, time division, cooperative round-robin with time division, and priority-based round-robin with pre-emption and time division. Aspects of the foregoing schemes may also be combined to form new schemes. The schemes enable finer control of thread execution in pipeline execution environments, such as employed for performing packet-processing operations.
Type: Application
Filed: December 30, 2004
Publication date: July 6, 2006
Inventors: Mark Rosenbluth, Peter Barry, Paul Dormitzer, Brad Burres
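The simplest of the listed schemes, non-pre-emptive (cooperative) round-robin, can be sketched in a few lines: each ready hardware context runs until it yields, then rotates to the back of the queue. The context names and the fixed slot count are invented for the example.

```python
# Cooperative round-robin arbitration: contexts take turns in a fixed
# rotation, each running until it voluntarily yields (here, one slot).

from collections import deque

def round_robin(contexts, slots):
    order = deque(contexts)
    schedule = []
    for _ in range(slots):
        ctx = order.popleft()
        schedule.append(ctx)   # context runs until it cooperatively yields
        order.append(ctx)      # then rotates to the back of the queue
    return schedule

print(round_robin(["t0", "t1", "t2"], slots=5))
# ['t0', 't1', 't2', 't0', 't1']
```

The patent's richer variants layer priorities (a higher-priority context can pre-empt the rotation) and time division (each context's turn is bounded by a time slice) on top of this basic rotation.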
-
Publication number: 20050135604
Abstract: An architecture to perform a hash algorithm. Embodiments of the invention relate to the use of processor architecture logic to implement an addition operation of initial state information to intermediate state information as required by hash algorithms while reducing the contribution of the addition operation to the critical path of the algorithm's performance within the processor architecture.
Type: Application
Filed: December 22, 2003
Publication date: June 23, 2005
Inventors: Wajdi Feghali, Gilbert Wolrich, Matthew Adiletta, Brad Burres
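The "addition operation" the abstract refers to is the step found at the end of MD5/SHA-style compression: the initial chaining state is added word-wise, modulo 2^32, to the state produced by the rounds. The toy compression function below illustrates just that final addition; the round function, state width, and constants are invented stand-ins, not any real hash.

```python
# Toy compression function showing the state-addition step the patent
# accelerates: after the (stand-in) rounds, the initial state is folded
# back into the intermediate state with word-wise mod-2^32 addition.

MASK32 = 0xFFFFFFFF

def compress(initial_state, block_words):
    state = list(initial_state)
    for w in block_words:            # stand-in for the real round function
        state = [(s ^ w) & MASK32 for s in state]
    # The addition of initial state to intermediate state:
    return [(s + i) & MASK32 for s, i in zip(state, initial_state)]

iv = [0x67452301, 0xEFCDAB89]
out = compress(iv, [0xDEADBEEF])
print([hex(x) for x in out])
```

Because this addition sits after the last round, it lands on the critical path of back-to-back block compressions, which is why dedicated logic for it matters.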