Patents by Inventor Brian Hausauer
Brian Hausauer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230403149
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Brian Hausauer, Renato Recio
-
Publication number: 20230403148
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Brian Hausauer, Renato Recio
-
Publication number: 20230403260
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Brian Hausauer, Renato Recio
-
Publication number: 20230403137
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Renato Recio, Brian Hausauer
-
Publication number: 20230403136
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Brian Hausauer, Renato Recio
-
Publication number: 20230403150
Abstract: Encryption operations are securely offloaded to a network interface controller (NIC). Encryption keys are securely transferred from a virtual machine (VM) to the NIC and data is securely transferred from encrypted VM memory to secure buffers in the NIC. The NIC handles the encryption and decryption operations in hardware, greatly increasing encryption performance while not reducing security. This is especially useful in cloud server environments, so the cloud service provider does not have access to the encryption keys or the unencrypted data. The offloaded operations are performed with numerous different communication protocols, including RDMA, QUIC, IPsec underlay and WireGuard.
Type: Application
Filed: June 2, 2023
Publication date: December 14, 2023
Inventors: Renato Recio, Brian Hausauer
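The six applications above share one abstract. As a minimal sketch of the flow it describes, the toy model below keeps tenant keys and plaintext inside the NIC object and exposes only ciphertext to the host. A repeating-key XOR stands in for the NIC's real hardware cipher (e.g. AES-GCM), and every class and method name is invented for illustration:

```python
class ToyNic:
    """Toy model of a NIC that holds tenant keys and performs the
    cipher work in 'hardware'. A repeating-key XOR stands in for the
    real hardware cipher; all names are invented for illustration."""
    def __init__(self):
        self._keys = {}     # key store the host CPU cannot read

    def install_key(self, vm_id, key):
        # The key travels over a protected channel from the VM; the
        # cloud provider's software never observes it.
        self._keys[vm_id] = key

    def _cipher(self, data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def send(self, vm_id, plaintext):
        # Plaintext sits only in a secure on-NIC buffer; just the
        # ciphertext leaves on the wire.
        return self._cipher(plaintext, self._keys[vm_id])

    def receive(self, vm_id, ciphertext):
        return self._cipher(ciphertext, self._keys[vm_id])

key = bytes(range(1, 17))                  # toy 16-byte key
tx, rx = ToyNic(), ToyNic()
tx.install_key("vm1", key)
rx.install_key("vm2", key)
wire = tx.send("vm1", b"tenant secret")
assert wire != b"tenant secret"            # host sees only ciphertext
assert rx.receive("vm2", wire) == b"tenant secret"
```

The same pattern applies regardless of which transport (RDMA, QUIC, IPsec underlay, WireGuard) carries the ciphertext.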
-
Patent number: 10051038
Abstract: Generally, this disclosure relates to a shared send queue in a networked system. A method, apparatus, and system are configured to support a plurality of reliable communication channels using a shared send queue. The reliable communication channels are configured to carry messages from a host to a plurality of destinations and to ensure that the completion order of messages corresponds to the transmission order.
Type: Grant
Filed: December 23, 2011
Date of Patent: August 14, 2018
Assignee: Intel Corporation
Inventors: Vadim Makhervaks, Robert O. Sharp, Brian Hausauer, Kenneth G. Keels, Donald E. Wood
-
Publication number: 20140310369
Abstract: Generally, this disclosure relates to a shared send queue in a networked system. A method, apparatus, and system are configured to support a plurality of reliable communication channels using a shared send queue. The reliable communication channels are configured to carry messages from a host to a plurality of destinations and to ensure that the completion order of messages corresponds to the transmission order.
Type: Application
Filed: December 23, 2011
Publication date: October 16, 2014
Inventors: Vadim Makhervaks, Robert O. Sharp, Brian Hausauer, Kenneth G. Keels, Donald E. Wood
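A minimal sketch of the shared-send-queue idea from the two entries above: messages from several channels funnel through one FIFO, so each channel's completions come back in the order that channel transmitted. Class and method names here are invented for illustration:

```python
from collections import deque

class SharedSendQueue:
    """One send queue shared by many reliable channels. Because the
    queue is a strict FIFO, each channel's completion order matches
    its own transmission order. (Names invented for illustration.)"""
    def __init__(self):
        self._queue = deque()

    def post(self, channel, message):
        self._queue.append((channel, message))

    def transmit_all(self):
        # "Transmit" in FIFO order and report completions as we go.
        completions = []
        while self._queue:
            completions.append(self._queue.popleft())
        return completions

ssq = SharedSendQueue()
ssq.post("ch0", "m0")
ssq.post("ch1", "m1")
ssq.post("ch0", "m2")
done = ssq.transmit_all()
# Per-channel completion order equals per-channel post order.
assert [m for c, m in done if c == "ch0"] == ["m0", "m2"]
assert [m for c, m in done if c == "ch1"] == ["m1"]
```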
-
Publication number: 20080043750
Abstract: An apparatus is provided for performing a direct memory access (DMA) operation between a host memory in a first server and a network adapter. The apparatus includes a host frame parser and a protocol engine. The host frame parser is configured to receive data corresponding to the DMA operation from a host interface, to insert markers on the fly into the data at a prescribed interval, and to provide the marked data for transmission to a second server over a network fabric. The protocol engine is coupled to the host frame parser. The protocol engine is configured to direct the host frame parser to insert the markers and to specify a first marker value and an offset value, whereby the host frame parser is enabled to locate and insert a first marker into the data.
Type: Application
Filed: January 19, 2007
Publication date: February 21, 2008
Applicant: NetEffect, Inc.
Inventors: Kenneth Keels, Jeff Carlson, Brian Hausauer, David Maguire
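The marker insertion described above can be sketched in a few lines: a marker is placed every fixed number of bytes of the output stream so a receiver can locate record boundaries. This is a sketch only; the parameter names are invented, and a real design would also start from the offset value the abstract mentions rather than always beginning at zero:

```python
def insert_markers(payload: bytes, marker: bytes, interval: int) -> bytes:
    """Insert `marker` every `interval` bytes of the *output* stream,
    on the fly, as the payload streams through. Sketch only; names
    invented, and the starting offset from the abstract is omitted."""
    out = bytearray()
    pos = 0
    step = interval - len(marker)   # payload bytes per marked chunk
    while pos < len(payload):
        out += marker
        out += payload[pos:pos + step]
        pos += step
    return bytes(out)

# 1-byte marker "M" every 4 output bytes -> 3 payload bytes per chunk.
assert insert_markers(b"abcdef", b"M", 4) == b"MabcMdef"
```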
-
Publication number: 20070226386
Abstract: A flexible arrangement allows a single arrangement of Ethernet channel adapter (ECA) hardware functions to appear as needed to conform to various operating system deployment models. A PCI interface presents a logical model of virtual devices appropriate to the relevant operating system. Mapping parameters and values are associated with the packet streams so that the packet streams can be properly processed according to the presented logical model and the needed operations. Mapping occurs at both the host side and the network side, allowing the multiple operations of the ECA to be performed while still allowing proper delivery at each interface.
Type: Application
Filed: February 17, 2006
Publication date: September 27, 2007
Applicant: NetEffect, Inc.
Inventors: Robert Sharp, Kenneth Keels, Brian Hausauer, John LaCombe
-
Publication number: 20070226750
Abstract: A computer system such as a server pipelines RNIC interface (RI) management/control operations, such as memory registration operations, to hide from network applications the latency in performing RDMA work requests, which is caused in part by delays in processing the memory registration operations and by the time required to execute the registration operations themselves. A separate QP-like structure, called a control QP (CQP), interfaces with a control processor (CP) to form a control path pipeline, separate from the transaction pipeline, that is designated to handle all control path traffic associated with the processing of RI control operations. This includes memory registration operations (MR OPs), as well as the creation and destruction of traditional QPs for processing RDMA transactions. Once an MR OP has been queued in the control path pipeline of the adapter, a pending bit associated with the MR OP is set.
Type: Application
Filed: February 17, 2006
Publication date: September 27, 2007
Applicant: NetEffect, Inc.
Inventors: Robert Sharp, Kenneth Keels, Brian Hausauer, Eric Rose
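The control-path/pending-bit mechanism above can be sketched as two independent queues: registration ops flow through a control queue and carry a pending bit, so the transaction path only needs to test that bit instead of stalling behind the registration. All names below are invented for illustration:

```python
from collections import deque

class ControlPath:
    """Control path pipeline separate from the transaction pipeline.
    A memory-registration op (MR OP) is queued and marked pending;
    the pending bit clears when the op completes, so work requests
    needing the region can poll readiness without blocking the
    transaction pipeline. (All names invented for illustration.)"""
    def __init__(self):
        self.control_queue = deque()
        self.pending = {}            # region id -> pending bit

    def queue_mr_op(self, region_id, pages):
        self.pending[region_id] = True     # set on queueing
        self.control_queue.append((region_id, pages))

    def process_one(self):
        region_id, _pages = self.control_queue.popleft()
        self.pending[region_id] = False    # registration complete

    def ready(self, region_id):
        return not self.pending.get(region_id, True)

cp = ControlPath()
cp.queue_mr_op("mr1", ["page0", "page1"])
assert not cp.ready("mr1")   # queued, pending bit still set
cp.process_one()
assert cp.ready("mr1")       # pending bit cleared, region usable
```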
-
Publication number: 20070165672
Abstract: A mechanism is provided for performing remote direct memory access (RDMA) operations between a first server and a second server. The apparatus includes a packet parser and a protocol engine. The packet parser processes a TCP segment within an arriving network frame, performing one or more speculative CRC checks according to an upper layer protocol (ULP); the speculative CRC checks are performed concurrently with the arrival of the network frame. The protocol engine is coupled to the packet parser. It receives the results of the speculative CRC checks and selectively employs them to validate a framed protocol data unit (FPDU) according to the ULP.
Type: Application
Filed: February 17, 2006
Publication date: July 19, 2007
Applicant: NetEffect, Inc.
Inventors: Kenneth Keels, Brian Hausauer, Vadim Makhervaks, Eric Schneider
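The speculative-CRC idea above can be sketched as computing checksums over several candidate FPDU boundaries while the bytes stream in, then letting the protocol engine pick whichever precomputed result matches the boundary the ULP turns out to use. Sketch assumptions: CRC32 stands in for the ULP's CRC, and candidate boundaries are supplied explicitly rather than inferred from markers:

```python
import zlib

def speculative_crcs(segment: bytes, candidate_lengths):
    """Precompute CRC32 over each candidate FPDU length as the
    segment 'arrives'. (Sketch: CRC32 stands in for the ULP CRC, and
    the candidate boundaries are given rather than inferred.)"""
    return {n: zlib.crc32(segment[:n]) for n in candidate_lengths}

def validate(fpdu_len, expected_crc, speculative):
    # The protocol engine selectively employs the precomputed result
    # matching the boundary the ULP actually indicates.
    return speculative.get(fpdu_len) == expected_crc

seg = b"header+payload+pad"
spec = speculative_crcs(seg, [8, 12, len(seg)])
good = zlib.crc32(seg[:12])       # CRC the ULP expects for a 12-byte FPDU
assert validate(12, good, spec)
```

The payoff is that no second pass over the data is needed once the true boundary is known.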
-
Publication number: 20060236063
Abstract: An RDMA-enabled I/O adapter and device driver are disclosed. In response to a memory registration that includes a list of physical memory pages backing a virtually contiguous memory region, an entry in a table in the adapter memory is allocated. A variable-size data structure to store the physical addresses of the pages is also allocated, as follows: if the pages are physically contiguous, the physical page address of the beginning page is stored directly in the table entry and no other allocations are made; otherwise, one small page table is allocated if the addresses will fit in a small page table; otherwise, one large page table is allocated if the addresses will fit in a large page table; otherwise, a page directory and enough page tables to store the addresses are allocated. The size and number of the small and large page tables are programmable.
Type: Application
Filed: February 17, 2006
Publication date: October 19, 2006
Applicant: NetEffect, Inc.
Inventors: Brian Hausauer, Robert Sharp
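The allocation decision chain in that abstract maps directly to a cascade of size checks. The sketch below mirrors it with invented, illustrative capacities (the patent makes them programmable) and a fixed 4 KiB page size:

```python
SMALL_CAPACITY = 32    # addresses per small page table (illustrative)
LARGE_CAPACITY = 512   # addresses per large page table (illustrative)
PAGE_SIZE = 4096       # bytes per physical page (illustrative)

def choose_backing(pages):
    """Pick the smallest structure that can describe the registered
    region: inline address, small table, large table, or a page
    directory of page tables. Capacities above are invented."""
    contiguous = all(b - a == PAGE_SIZE for a, b in zip(pages, pages[1:]))
    if contiguous:
        return ("inline", pages[0])      # first address goes in the entry
    if len(pages) <= SMALL_CAPACITY:
        return ("small_page_table", pages)
    if len(pages) <= LARGE_CAPACITY:
        return ("large_page_table", pages)
    # Too many addresses for one table: directory plus enough tables.
    tables = [pages[i:i + LARGE_CAPACITY]
              for i in range(0, len(pages), LARGE_CAPACITY)]
    return ("page_directory", tables)

assert choose_backing([0x1000, 0x2000, 0x3000])[0] == "inline"
assert choose_backing([0x1000, 0x9000])[0] == "small_page_table"
assert choose_backing(list(range(0, 600 * 0x2000, 0x2000)))[0] == "page_directory"
```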
-
Publication number: 20060230119
Abstract: A mechanism is provided for performing remote direct memory access (RDMA) operations between a first server and a second server over an Ethernet fabric. The RDMA operations are initiated by execution of a verb according to a remote direct memory access protocol; the verb is executed by a CPU on the first server. The apparatus includes transaction logic that is configured to process a work queue element corresponding to the verb and to accomplish the RDMA operations over a TCP/IP interface between the first and second servers, where the work queue element resides within a first host memory corresponding to the first server. The transaction logic includes transmit history information stores and a protocol engine. The transmit history information stores maintain parameters associated with the work queue element.
Type: Application
Filed: December 22, 2005
Publication date: October 12, 2006
Applicant: NetEffect, Inc.
Inventors: Brian Hausauer, Tristan Gross, Kenneth Keels, Shaun Wandler
-
Publication number: 20040054841
Abstract: A device for providing data includes a data source, a bus interface, a data buffer, and control logic. The bus interface is coupled to a plurality of control lines of a bus and adapted to receive a read request targeting the data source. The control logic is adapted to determine, based on the control lines, whether the read request requires multiple data phases to complete, and in that case to retrieve at least two data phases of data from the data source and store them in the data buffer. A method for retrieving data includes receiving a read request on a bus having a plurality of control lines, determining based on the control lines whether the read request requires multiple data phases to complete, and, if so, retrieving at least two data phases of data from a data source and storing them in a data buffer.
Type: Application
Filed: August 14, 2003
Publication date: March 18, 2004
Inventors: Ryan Callison, Brian Hausauer
-
Patent number: 6631437
Abstract: A device for providing data includes a data source, a bus interface, a data buffer, and control logic. The bus interface is coupled to a plurality of control lines of a bus and adapted to receive a read request targeting the data source. The control logic is adapted to determine, based on the control lines, whether the read request requires multiple data phases to complete, and in that case to retrieve at least two data phases of data from the data source and store them in the data buffer. A method for retrieving data includes receiving a read request on a bus having a plurality of control lines, determining based on the control lines whether the read request requires multiple data phases to complete, and, if so, retrieving at least two data phases of data from a data source and storing them in a data buffer.
Type: Grant
Filed: April 6, 2000
Date of Patent: October 7, 2003
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Ryan Callison, Brian Hausauer
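The behavior the two entries above describe (prefetching at least two data phases when the control lines indicate a multi-phase read) can be sketched as a tiny buffer model. The representation of control lines as a single boolean, and all names, are invented for illustration:

```python
class ReadBuffer:
    """Sketch of read prefetching: when the decoded control lines say
    the read spans multiple data phases, at least two phases are
    pulled from the source up front so later phases complete without
    another trip to the source. (All names invented.)"""
    def __init__(self, source):
        self.source = source      # list of data words
        self.buffer = []

    def handle_read(self, addr, multi_phase):
        if multi_phase:
            # Retrieve at least two data phases in one access.
            self.buffer = self.source[addr:addr + 2]
        else:
            self.buffer = self.source[addr:addr + 1]
        return self.buffer[0]

dev = ReadBuffer([10, 11, 12, 13])
assert dev.handle_read(1, multi_phase=True) == 11
assert dev.buffer == [11, 12]   # second phase already buffered
```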
-
Patent number: 5930496
Abstract: An apparatus and method for determining the types of expansion cards connected to the expansion slot connectors of a computer system. Detect signals are provided to decode logic that determines the types of expansion cards connected to the computer system. If the expansion cards are compatible, the decode logic produces an output power supply signal that indicates what the voltage level should be for the power supplied to the cards. If the cards are incompatible, the decode logic may not provide power to any of the cards, or may provide power only to the cards that are compatible. For computers that allow expansion cards to be connected while the computer is powered on, hot-plug logic cooperates with the decode logic to establish power and communication with newly connected interface cards. The connectors in the computer do not include keys; thus interface cards without keys, as well as cards with different types of key arrangements, can be connected to and communicate with the computer.
Type: Grant
Filed: September 26, 1997
Date of Patent: July 27, 1999
Assignee: Compaq Computer Corporation
Inventors: John M. MacLaren, Brian Hausauer, Usha Rajagopaian
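The decode logic above can be sketched as a pure function from per-slot detect signals to a power decision. The signal encoding (a voltage per slot) and the specific incompatibility policy chosen here (power only the cards on the lowest common rail; the abstract also allows powering nothing) are invented for illustration:

```python
def decode_power(detect_signals):
    """Map per-slot detect signals (here, a requested voltage per
    slot) to a supply decision. If all cards agree, power every slot
    at that voltage; on a mix, power only the compatible subset.
    (Encoding and policy invented for illustration.)"""
    voltages = set(detect_signals)
    if len(voltages) == 1:
        # Compatible: one supply level serves every slot.
        rail = voltages.pop()
        return {"rail": rail, "powered": list(range(len(detect_signals)))}
    # Incompatible mix: one possible policy is powering only the
    # cards that match the lowest requested rail.
    rail = min(voltages)
    powered = [i for i, v in enumerate(detect_signals) if v == rail]
    return {"rail": rail, "powered": powered}

assert decode_power([3.3, 3.3]) == {"rail": 3.3, "powered": [0, 1]}
assert decode_power([3.3, 5.0])["powered"] == [0]
```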