Patents by Inventor Patrick Connor

Patrick Connor has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200244577
    Abstract: Methods, apparatus, and systems for implementing in-Network Interface Controller (NIC) flow switching. Switching operations are effected via hardware-based forwarding mechanisms in apparatus such as NICs in a manner that does not employ computer system processor resources and is transparent to operating systems hosted by such computer systems. The forwarding mechanisms are configured to move or copy Media Access Control (MAC) frame data between receive (Rx) and transmit (Tx) queues associated with different NIC ports that may be on the same NIC or separate NICs. The hardware-based switching operations effect forwarding of MAC frames between NIC ports using memory operations, thus reducing external network traffic, internal interconnect traffic, and processor workload associated with packet processing.
    Type: Application
    Filed: April 16, 2020
    Publication date: July 30, 2020
    Applicant: Intel Corporation
    Inventors: Iosif Gasparakis, Peter P. Waskiewicz, Jr., Patrick Connor
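
The abstract above describes moving MAC frames between per-port receive and transmit queues entirely inside the NIC. Below is a minimal Python sketch of that idea, assuming a MAC-keyed forwarding table; the class and field names (`FlowSwitch`, `NicPort`, `dst_mac`) are illustrative and not taken from the patent.

```python
from collections import deque

class NicPort:
    """Models one NIC port with receive (Rx) and transmit (Tx) queues."""
    def __init__(self, name):
        self.name = name
        self.rx = deque()   # frames arriving from the wire
        self.tx = deque()   # frames queued for transmission

class FlowSwitch:
    """Sketch of the in-NIC forwarding mechanism: a MAC-keyed table that
    moves frames between port queues using memory operations only."""
    def __init__(self):
        self.ports = {}
        self.mac_table = {}          # destination MAC -> egress port name

    def add_port(self, port):
        self.ports[port.name] = port

    def learn(self, mac, port_name):
        self.mac_table[mac] = port_name

    def forward(self):
        # Drain every Rx queue; frames whose destination MAC is known are
        # moved straight to the egress port's Tx queue (no host CPU work).
        for port in self.ports.values():
            while port.rx:
                frame = port.rx.popleft()
                egress = self.mac_table.get(frame["dst_mac"])
                if egress is not None:
                    self.ports[egress].tx.append(frame)

# Example: a frame received on port0 for a MAC learned on port1 is switched
# internally instead of leaving the NIC.
switch = FlowSwitch()
p0, p1 = NicPort("port0"), NicPort("port1")
switch.add_port(p0)
switch.add_port(p1)
switch.learn("aa:bb:cc:dd:ee:01", "port1")
p0.rx.append({"dst_mac": "aa:bb:cc:dd:ee:01", "payload": b"hello"})
switch.forward()
print(len(p1.tx))  # 1
```
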
  • Patent number: 10693781
    Abstract: Methods, apparatus, and systems for implementing in-Network Interface Controller (NIC) flow switching. Switching operations are effected via hardware-based forwarding mechanisms in apparatus such as NICs in a manner that does not employ computer system processor resources and is transparent to operating systems hosted by such computer systems. The forwarding mechanisms are configured to move or copy Media Access Control (MAC) frame data between receive (Rx) and transmit (Tx) queues associated with different NIC ports that may be on the same NIC or separate NICs. The hardware-based switching operations effect forwarding of MAC frames between NIC ports using memory operations, thus reducing external network traffic, internal interconnect traffic, and processor workload associated with packet processing.
    Type: Grant
    Filed: November 3, 2015
    Date of Patent: June 23, 2020
    Assignee: Intel Corporation
    Inventors: Iosif Gasparakis, Peter P. Waskiewicz, Jr., Patrick Connor
  • Patent number: 10684973
    Abstract: Methods, apparatus, and computer platforms and architectures employing many-to-many and many-to-one peripheral switches. The methods and apparatus may be implemented on computer platforms having multiple nodes, such as those employing a Non-uniform Memory Access (NUMA) architecture, wherein each node comprises a plurality of components including a processor having at least one level of memory cache and being operatively coupled to system memory and operatively coupled to a many-to-many peripheral switch that includes a plurality of downstream ports to which NICs and/or peripheral expansion slots are operatively coupled, or a many-to-one switch that enables a peripheral device to be shared by multiple nodes. During operation, packets are received at the NICs and DMA memory writes are initiated using memory write transactions identifying a destination memory address.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: June 16, 2020
    Assignee: Intel Corporation
    Inventors: Patrick Connor, Matthew A. Jared, Duke C. Hong, Elizabeth M. Kappler, Chris Pavlas, Scott P. Dubal
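
As a rough illustration of the many-to-many switch behavior described above, the sketch below routes a DMA memory write to whichever NUMA node owns the destination address. The node layout, address ranges, and names are assumptions made for the example, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One NUMA node owning a contiguous system-memory range."""
    node_id: int
    mem_base: int
    mem_size: int
    memory: dict  # address -> bytes, stands in for the node's DRAM

class ManyToManySwitch:
    """Sketch: the peripheral switch steers a DMA write from any downstream
    NIC port to whichever node owns the destination memory address."""
    def __init__(self, nodes):
        self.nodes = nodes

    def dma_write(self, dest_addr, data):
        for node in self.nodes:
            if node.mem_base <= dest_addr < node.mem_base + node.mem_size:
                node.memory[dest_addr] = data
                return node.node_id
        raise ValueError("address not claimed by any node")

nodes = [Node(0, 0x0000_0000, 0x4000_0000, {}),
         Node(1, 0x4000_0000, 0x4000_0000, {})]
switch = ManyToManySwitch(nodes)
print(switch.dma_write(0x4000_1000, b"packet payload"))  # routed to node 1
```
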
  • Publication number: 20200177660
    Abstract: Examples described herein relate to providing a streaming protocol packet segmentation offload request to a network interface. The request can specify a segment of content to transmit and metadata associated with the content. The offload request can cause the network interface to generate at least one header field value for the packet and insert at least one header field prior to transmission of the packet. In some examples, the network interface generates a validation value for a transport layer protocol based on the packet with the inserted at least one header field. Some examples provide for pre-packetized content to be stored and available to copy to the network interface. In such examples, the network interface can modify or update certain header fields prior to transmitting the packet.
    Type: Application
    Filed: February 3, 2020
    Publication date: June 4, 2020
    Inventors: Patrick Connor, James R. Hearn, Kevin Liedtke
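
The segmentation-offload flow described above can be pictured with the short sketch below: the host hands over content plus metadata, and the "NIC side" splits it into packets, builds a header per packet, and appends a validation value (a CRC32 here, standing in for whatever transport-layer check the hardware would compute). The field names and MSS value are illustrative assumptions.

```python
import zlib
from dataclasses import dataclass

@dataclass
class OffloadRequest:
    """What the host hands to the NIC: raw content plus metadata describing
    the stream (field names are illustrative, not the patent's)."""
    stream_id: int
    content: bytes
    mss: int = 1200          # maximum payload bytes per packet

def segment_and_packetize(req):
    """Sketch of the NIC-side work: split content, build a header per packet,
    and append a validation value over header + payload."""
    packets = []
    for offset in range(0, len(req.content), req.mss):
        payload = req.content[offset:offset + req.mss]
        header = f"stream={req.stream_id};offset={offset};len={len(payload)}".encode()
        checksum = zlib.crc32(header + payload)      # stand-in validation value
        packets.append({"header": header, "payload": payload, "crc": checksum})
    return packets

pkts = segment_and_packetize(OffloadRequest(stream_id=7, content=b"x" * 5000))
print(len(pkts), pkts[0]["header"])   # 5 packets for 5000 bytes at MSS 1200
```
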
  • Patent number: 10630315
    Abstract: Technologies for applying a redundancy encoding scheme to segmented portions of a data block include an endpoint computing device communicatively coupled to a destination computing device. The endpoint computing device is configured to divide a block of data into a plurality of data segments as a function of a transmit window size and a redundancy encoding scheme, and generate redundant data usable to reconstruct each of the plurality of data segments. The endpoint computing device is additionally configured to format a series of network packets that each includes a data segment of the plurality of data segments and generated redundant data for at least one other data segment of the plurality of data segments. Further, the endpoint computing device is configured to transport each of the series of network packets to a destination computing device. Other embodiments are described herein.
    Type: Grant
    Filed: September 29, 2017
    Date of Patent: April 21, 2020
    Assignee: Intel Corporation
    Inventors: Patrick Connor, Kapil Sood, Scott Dubal, Andrew Herdrich, James Hearn
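
One simple way to picture the redundancy encoding described above is single-parity XOR across segments, sketched below. The patent is not limited to this scheme, so treat the choice of XOR parity and the segment size as assumptions of the example.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_block(block, segment_size):
    """Split a block into fixed-size segments and compute one XOR parity
    segment that can rebuild any single lost segment."""
    segments = [block[i:i + segment_size].ljust(segment_size, b"\0")
                for i in range(0, len(block), segment_size)]
    parity = segments[0]
    for seg in segments[1:]:
        parity = xor_bytes(parity, seg)
    return segments, parity

def reconstruct(segments, parity, lost_index):
    """Rebuild the segment at lost_index from the survivors plus the parity."""
    rebuilt = parity
    for i, seg in enumerate(segments):
        if i != lost_index:
            rebuilt = xor_bytes(rebuilt, seg)
    return rebuilt

segments, parity = encode_block(b"redundancy encoded transmit window", 8)
assert reconstruct(segments, parity, 2) == segments[2]
print("segment 2 recovered from the other segments plus parity")
```
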
  • Patent number: 10601738
    Abstract: Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from a descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
    Type: Grant
    Filed: June 30, 2018
    Date of Patent: March 24, 2020
    Assignee: Intel Corporation
    Inventors: Bruce Richardson, Chris MacNamara, Patrick Fleming, Tomasz Kantecki, Ciara Loftus, John J. Browne, Patrick Connor
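
A minimal sketch of the buffer-selection logic in the abstract above: a receive descriptor carries both a small-buffer and a large-buffer pointer, and the NIC picks one based on the packet size. The buffer sizes and field names are illustrative, not values from the patent.

```python
from dataclasses import dataclass

SMALL_BUFFER_SIZE = 256     # illustrative sizes, not the patent's values
LARGE_BUFFER_SIZE = 2048

@dataclass
class RxDescriptor:
    """A receive descriptor carrying both candidate buffer pointers."""
    small_buffer_addr: int   # physical address of the small buffer
    large_buffer_addr: int   # physical address of the large buffer

def select_buffer(descriptor, packet_len):
    """Pick the preferred buffer for this packet: the small buffer if the
    packet fits, otherwise the large buffer."""
    if packet_len <= SMALL_BUFFER_SIZE:
        return descriptor.small_buffer_addr, SMALL_BUFFER_SIZE
    return descriptor.large_buffer_addr, LARGE_BUFFER_SIZE

desc = RxDescriptor(small_buffer_addr=0x1000, large_buffer_addr=0x8000)
print(hex(select_buffer(desc, 128)[0]))   # 0x1000 -> small buffer
print(hex(select_buffer(desc, 1500)[0]))  # 0x8000 -> large buffer
```
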
  • Publication number: 20190317802
    Abstract: Examples are described herein that can be used to offload a sequence of work events to one or more accelerators via a work scheduler. An application can issue a universal work descriptor to a work scheduler. The universal work descriptor can specify a policy for scheduling and execution of one or more work events. The universal work descriptor can refer to one or more work events for execution. The work scheduler can, in some cases, perform translation of the universal work descriptor or a work event descriptor for compatibility and execution by an accelerator. The application can receive notice of completion of the sequence of work from the work scheduler or an accelerator.
    Type: Application
    Filed: June 21, 2019
    Publication date: October 17, 2019
    Inventors: Alexander Bachmutsky, Andrew J. Herdrich, Patrick Connor, Raghu Kondapalli, Francesc Guim Bernat, Scott P. Dubal, James R. Hearn, Kapil Sood, Niall D. McDonnell, Matthew J. Adiletta
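
The universal work descriptor described above can be sketched as a policy plus a chain of work events that a scheduler translates for each target accelerator and then acknowledges. Everything below (the event kinds, the toy "accelerators", the completion callback) is assumed for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WorkEvent:
    kind: str                 # e.g. "crypto", "compress" (illustrative kinds)
    payload: bytes

@dataclass
class UniversalWorkDescriptor:
    """One descriptor submitted by the application: a policy plus the chain
    of work events it covers."""
    policy: str               # e.g. "in-order" -- names are illustrative
    events: List[WorkEvent]
    on_complete: Callable[[list], None]

class WorkScheduler:
    """Sketch of the scheduler: translate each event into a call the target
    accelerator understands, run the chain, then notify the application."""
    def __init__(self, accelerators):
        self.accelerators = accelerators     # kind -> callable

    def submit(self, uwd):
        results = [self.accelerators[ev.kind](ev.payload) for ev in uwd.events]
        uwd.on_complete(results)

sched = WorkScheduler({"compress": lambda b: b[:4],      # toy "accelerators"
                       "crypto":   lambda b: b[::-1]})
uwd = UniversalWorkDescriptor(
    policy="in-order",
    events=[WorkEvent("compress", b"abcdefgh"), WorkEvent("crypto", b"abcd")],
    on_complete=lambda res: print("done:", res))
sched.submit(uwd)   # done: [b'abcd', b'dcba']
```
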
  • Patent number: 10423783
    Abstract: Methods and apparatus to recover a processor state during a system failure or security event are disclosed. An example apparatus to recover data includes a processor including a local memory and a system monitor in communication with the processor. The system monitor is to copy processor backup data to a non-volatile memory in response to a processor backup event. The processor backup data includes contents of the local memory.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: September 24, 2019
    Assignee: Intel Corporation
    Inventors: Chris Pavlas, James R. Hearn, Scott P. Dubal, Patrick Connor
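
A toy sketch of the backup flow described above, with a file standing in for the non-volatile memory and a dictionary standing in for the processor's local memory; the names and the JSON format are assumptions of the example.

```python
import json
import os
import tempfile

class SystemMonitor:
    """Sketch: on a backup event the monitor copies the processor's local
    memory to a non-volatile store; on recovery it copies it back."""
    def __init__(self, backup_path):
        self.backup_path = backup_path     # stands in for non-volatile memory

    def on_backup_event(self, local_memory):
        with open(self.backup_path, "w") as f:
            json.dump(local_memory, f)

    def recover(self):
        with open(self.backup_path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "processor_backup.json")
monitor = SystemMonitor(path)
monitor.on_backup_event({"pc": 4096, "registers": [1, 2, 3], "cache_lines": 64})
print(monitor.recover()["pc"])   # 4096 -- state survives the failure
```
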
  • Publication number: 20190268269
    Abstract: A computing device includes an appliance status table to store at least one of reliability and performance data for one or more network functions virtualization (NFV) appliances and one or more legacy network appliances. The computing device includes a load controller to configure an Internet Protocol (IP) filter rule to select a packet for which processing of the packet is to be migrated from a selected one of the one or more legacy network appliances to a selected one of the one or more NFV appliances, and to update the appliance status table with at least one of reliability and performance data received for the one or more legacy network appliances and the one or more NFV appliances. The computing device includes a packet distributor to receive the packet, to select one of the one or more NFV appliances based at least in part on the appliance status table, and to send the packet to the selected NFV appliance. Other embodiments are described herein.
    Type: Application
    Filed: April 26, 2019
    Publication date: August 29, 2019
    Inventors: Patrick Connor, Andrey Chilikin, Brendan Ryan, Chris MacNamara, John J. Browne, Krishnamurthy Jambur Sathyanarayana, Stephen Doyle, Tomasz Kantecki, Anthony Kelly, Ciara Loftus, Fiona Trahe
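
The abstract above names three cooperating pieces: an appliance status table, an IP filter rule that marks traffic for migration, and a packet distributor. The sketch below wires them together in a few lines; the reliability scoring and appliance naming conventions are illustrative only.

```python
class PacketDistributor:
    """Sketch of the load controller + distributor behavior: packets matching
    a migration filter rule go to the healthiest NFV appliance, everything
    else stays on the legacy appliance."""
    def __init__(self):
        self.status_table = {}      # appliance name -> reliability score 0..1
        self.filter_rules = set()   # destination IPs selected for migration

    def update_status(self, appliance, reliability):
        self.status_table[appliance] = reliability

    def add_filter_rule(self, dest_ip):
        self.filter_rules.add(dest_ip)

    def distribute(self, packet):
        if packet["dst_ip"] in self.filter_rules:
            nfv = [a for a in self.status_table if a.startswith("nfv")]
            return max(nfv, key=lambda a: self.status_table[a])
        return "legacy-appliance-0"

d = PacketDistributor()
d.update_status("nfv-appliance-0", 0.95)
d.update_status("nfv-appliance-1", 0.99)
d.add_filter_rule("10.0.0.8")
print(d.distribute({"dst_ip": "10.0.0.8"}))   # nfv-appliance-1
print(d.distribute({"dst_ip": "10.0.0.9"}))   # legacy-appliance-0
```
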
  • Patent number: 10348428
    Abstract: Examples may include techniques to enable synchronized execution of a command by nodes in a network fabric. A node capable of hosting a fabric manager for the network fabric (fabric manager node) may generate one or more packets including a command to be executed by at least some nodes in the network fabric. In some examples, a time stamp is also included with at least one of the one or more packets to indicate to receiving nodes to execute the command at a synchronized time.
    Type: Grant
    Filed: December 23, 2014
    Date of Patent: July 9, 2019
    Assignee: Intel Corporation
    Inventors: Ira Weiny, Steven R. Carbonari, Alexander W. Min, Tsung-yuan C. Tai, Brian J. Skerry, Patrick Connor
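
A small sketch of the synchronized-execution idea above: the fabric manager pairs a command with an execute-at time stamp, and each receiving node holds the command until that time. Threads stand in for fabric nodes, and the timing values are arbitrary choices for the example.

```python
import threading
import time

def make_command_packet(command, delay_s=0.2):
    """Fabric-manager side: pair the command with the synchronized time."""
    return {"command": command, "execute_at": time.monotonic() + delay_s}

def node_execute(node_id, packet, results):
    """Node side: hold the command until the time stamp, then run it."""
    time.sleep(max(0.0, packet["execute_at"] - time.monotonic()))
    results[node_id] = (packet["command"], time.monotonic())

packet = make_command_packet("flush-routing-tables")
results = {}
threads = [threading.Thread(target=node_execute, args=(n, packet, results))
           for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
times = [ts for _, ts in results.values()]
print(f"4 nodes executed within {(max(times) - min(times)) * 1000:.1f} ms of each other")
```
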
  • Publication number: 20190101980
    Abstract: Systems and methods for tracking gaze information of a user include detecting, by a sensor of a head-mounted display (HMD), that a user is wearing the HMD. An encoded signal indicative of glasses being worn with the HMD, by the user, is detected by the sensor of the HMD. In response to processing the encoded signal, a gaze detection function of the HMD is disabled by the HMD. Encoded gaze data transmitted by the glasses is received by the HMD. The encoded gaze data is processed by an image frame processor and used to adjust image frames produced for rendering on a display screen of the HMD.
    Type: Application
    Filed: December 15, 2017
    Publication date: April 4, 2019
    Inventors: Jeffrey Roger Stafford, Christopher Norden, Patrick Connor
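
The hand-off described above (the HMD detects the glasses, disables its own gaze detection, and consumes the glasses' gaze data instead) is sketched below. The signal format and gaze-data encoding are invented for the example; the patent does not specify them.

```python
class HeadMountedDisplay:
    """Sketch of the hand-off: when the HMD detects that the user is wearing
    gaze-tracking glasses, it disables its own gaze detector and consumes the
    gaze data the glasses transmit instead."""
    def __init__(self):
        self.own_gaze_detection = True
        self.gaze_point = (0.5, 0.5)          # normalized screen coordinates

    def on_sensor_signal(self, signal):
        if signal.get("glasses_present"):     # the encoded "glasses worn" signal
            self.own_gaze_detection = False

    def on_glasses_gaze_data(self, encoded):
        if not self.own_gaze_detection:
            # toy "decoding": the glasses send comma-separated coordinates
            x, y = (float(v) for v in encoded.split(","))
            self.gaze_point = (x, y)

    def render_frame(self):
        # A real HMD would adjust (e.g. foveate) rendering around gaze_point;
        # here we just report where the adjusted region would be centered.
        return f"image frame adjusted around gaze point {self.gaze_point}"

hmd = HeadMountedDisplay()
hmd.on_sensor_signal({"glasses_present": True})
hmd.on_glasses_gaze_data("0.62,0.31")
print(hmd.render_frame())
```
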
  • Publication number: 20190103881
    Abstract: Technologies for applying a redundancy encoding scheme to segmented portions of a data block include an endpoint computing device communicatively coupled to a destination computing device. The endpoint computing device is configured to divide a block of data into a plurality of data segments as a function of a transmit window size and a redundancy encoding scheme, and generate redundant data usable to reconstruct each of the plurality of data segments. The endpoint computing device is additionally configured to format a series of network packets that each includes a data segment of the plurality of data segments and generated redundant data for at least one other data segment of the plurality of data segments. Further, the endpoint computing device is configured to transport each of the series of network packets to a destination computing device. Other embodiments are described herein.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Patrick Connor, Kapil Sood, Scott Dubal, Andrew Herdrich, James Hearn
  • Patent number: 10221580
    Abstract: A modular wall system for use in trade shows includes a wall or header panel formed of frame parts, and a connecting bracket for detachably connecting the end of the panel to an upstanding column. Perpendicular sections of a corner connector are received in the hollow end of different ones of the frame parts to connect the frame parts. A fabric cover defines a recess with an open end adapted to receive the frame parts. A closure device is provided for at least partially closing the open end of the fabric cover. A vertical support member extends from the frame parts to a surface to transmit the weight of an accessory to the surface. The accessory may be an adjustable garment rack, a coupling grid, a shelving unit, or a peg-board unit and a hook. The system includes a crate adapted to receive a plurality of members for transport.
    Type: Grant
    Filed: September 30, 2015
    Date of Patent: March 5, 2019
    Assignee: GLENMORE INDUSTRIES LLC
    Inventor: Patrick Connors
  • Publication number: 20190052457
    Abstract: Technologies for providing efficient sharing of encrypted data in a disaggregated architecture include a sled. The sled includes a set of memory devices and a controller connected to the set of memory devices. The controller is to receive, from a first application executed by a compute sled, a data access request to share a data set between the first application and a second application. The data set is encrypted in one or more of the memory devices. Additionally, the controller is to determine, in response to the data access request, a key identifier that uniquely identifies a key that is usable to perform cryptographic operations on the data set. Further, the controller is to send, to an encryption key manager, a request to provide the key corresponding to the key identifier to be used by the second application to decrypt the data set and send, to the second application, a handle associated with an address in the set of memory devices where the data set is located.
    Type: Application
    Filed: March 30, 2018
    Publication date: February 14, 2019
    Inventors: Patrick Connor, Scott Dubal, Andrew J. Herdrich, James R. Hearn, Kapil Sood
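
A rough sketch of the sharing flow above: the controller stores the data set encrypted, and a share request yields only a key identifier plus a handle (an address), which the second application combines with the key obtained from the key manager. The XOR cipher is a deliberately toy stand-in for the real cryptographic operations, and all names are illustrative.

```python
import secrets

def xor_cipher(data, key):
    """Toy stand-in for the real cryptographic operations."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KeyManager:
    def __init__(self):
        self._keys = {}
    def register(self, key_id, key):
        self._keys[key_id] = key
    def get(self, key_id):
        return self._keys[key_id]

class MemoryController:
    """Sketch of the sled's controller: stores encrypted data, and on a share
    request hands the second application a key identifier plus a handle
    (address) rather than the plaintext."""
    def __init__(self, key_manager):
        self.key_manager = key_manager
        self.memory = {}

    def store_encrypted(self, addr, plaintext, key_id):
        self.memory[addr] = xor_cipher(plaintext, self.key_manager.get(key_id))

    def share(self, addr, key_id):
        return {"handle": addr, "key_id": key_id}

km = KeyManager()
km.register("key-42", secrets.token_bytes(16))
ctrl = MemoryController(km)
ctrl.store_encrypted(0x9000, b"shared data set", "key-42")
grant = ctrl.share(0x9000, "key-42")            # what application B receives
ciphertext = ctrl.memory[grant["handle"]]
print(xor_cipher(ciphertext, km.get(grant["key_id"])))  # b'shared data set'
```
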
  • Publication number: 20190044893
    Abstract: Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from a descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
    Type: Application
    Filed: June 30, 2018
    Publication date: February 7, 2019
    Inventors: Bruce Richardson, Chris MacNamara, Patrick Fleming, Tomasz Kantecki, Ciara Loftus, John J. Browne, Patrick Connor
  • Publication number: 20190042297
    Abstract: Technologies for deploying virtual machines (VMs) in a virtual network function (VNF) infrastructure include a compute device configured to collect a plurality of performance metrics based on a set of key performance indicators, determine a key performance indicator value for each of the set of key performance indicators based on the collected plurality of performance metrics, and determine a service quality index for a virtual machine (VM) instance of a plurality of VM instances managed by the compute device as a function of each key performance indicator value. Additionally, the compute device is configured to determine whether the determined service quality index is acceptable and perform, in response to a determination that the determined service quality index is not acceptable, an optimization action to ensure the VM instance is deployed on an acceptable host of the compute device. Other embodiments are described herein.
    Type: Application
    Filed: September 13, 2018
    Publication date: February 7, 2019
    Inventors: Patrick Connor, Scott Dubal, Chris Pavlas, Katalin Bartfai-Walcott, Amritha Nambiar, Sharada Ashok Shiddibhavi
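
One way to picture the service quality index described above is a weighted combination of normalized KPI values compared against an acceptability threshold, as sketched below. The KPI names, weights, and threshold are assumptions of the example; the abstract only requires that the index be a function of the KPI values.

```python
def service_quality_index(kpi_values, weights):
    """Combine normalized KPI values (0..1, higher is better) into one index
    using an illustrative weighted average."""
    total = sum(weights.values())
    return sum(kpi_values[k] * w for k, w in weights.items()) / total

WEIGHTS = {"cpu_ready": 0.3, "packet_loss": 0.4, "latency": 0.3}
ACCEPTABLE_SQI = 0.8   # illustrative acceptability threshold

kpis = {"cpu_ready": 0.9, "packet_loss": 0.6, "latency": 0.85}
sqi = service_quality_index(kpis, WEIGHTS)
print(f"SQI = {sqi:.2f}")
if sqi < ACCEPTABLE_SQI:
    print("SQI unacceptable -> perform optimization action (e.g. redeploy the VM)")
```
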
  • Publication number: 20190044860
    Abstract: Technologies for providing adaptive polling of packet queues include a compute device. The compute device includes a network interface controller and a compute engine that includes a set of cores and a memory that includes a queue to store packets received by the network interface controller. The compute engine is configured to determine a predicted time period for the queue to receive packets without overflowing, execute, during the time period and with a core that is assigned to periodically poll the queue for packets, a workload, and poll, with the assigned core, the queue to remove the packets from the queue. Other embodiments are also described and claimed.
    Type: Application
    Filed: June 18, 2018
    Publication date: February 7, 2019
    Inventors: Chris MacNamara, John Browne, Tomasz Kantecki, Ciara Loftus, John Barry, Patrick Connor, Patrick Fleming
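
The adaptive-polling idea above hinges on predicting how long the queue can keep absorbing packets. A simple estimate, used in the sketch below, is free slots divided by the observed arrival rate; the specific prediction model, capacity, and rates are assumptions of the example.

```python
import time
from collections import deque

class AdaptivePoller:
    """Sketch of adaptive polling: predict how long the Rx queue can absorb
    packets without overflowing, let the core run other work for that window,
    then poll and drain the queue."""
    def __init__(self, queue_capacity):
        self.queue = deque()
        self.capacity = queue_capacity

    def predicted_safe_window(self, arrival_rate_pps):
        free_slots = self.capacity - len(self.queue)
        return free_slots / arrival_rate_pps      # seconds until overflow

    def poll_and_drain(self):
        drained = len(self.queue)
        self.queue.clear()
        return drained

poller = AdaptivePoller(queue_capacity=1024)
poller.queue.extend(range(200))                   # 200 packets already waiting
window = poller.predicted_safe_window(arrival_rate_pps=100_000)
print(f"core may run other work for ~{window * 1e3:.1f} ms")
time.sleep(min(window, 0.001))                    # stand-in for the workload
print("drained", poller.poll_and_drain(), "packets")
```
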
  • Publication number: 20190044812
    Abstract: Technologies for dynamically selecting resources for virtual switching include a network appliance configured to identify a present demand on processing resources of the network appliance that are configured to process data associated with network packets received by the network appliance. Additionally, the network appliance is configured to determine a present capacity of one or more acceleration resources of the network appliance and determine a virtual switch operation mode based on the present demand and the present capacity of the acceleration resources, wherein the virtual switch operation mode indicates which of the acceleration resources are to be enabled. The network appliance is additionally configured to configure a virtual switch of the network appliance to operate as a function of the determined virtual switch operation mode and assign acceleration resources of the network appliance as a function of the determined virtual switch operation mode. Other embodiments are described herein.
    Type: Application
    Filed: September 13, 2018
    Publication date: February 7, 2019
    Inventors: Ciara Loftus, Chris MacNamara, John J. Browne, Patrick Fleming, Tomasz Kantecki, John Barry, Patrick Connor
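
A compact sketch of the mode-selection step described above: present CPU demand and free accelerator capacity map to one of several virtual switch operation modes. The mode names and thresholds are illustrative, not taken from the patent.

```python
from enum import Enum

class VSwitchMode(Enum):
    SOFTWARE_ONLY = "software-only"
    PARTIAL_OFFLOAD = "partial-offload"
    FULL_OFFLOAD = "full-offload"

def select_mode(cpu_demand, accel_free_capacity):
    """Map present CPU demand and free accelerator capacity (both 0..1) to an
    operation mode. Thresholds are illustrative assumptions."""
    if cpu_demand < 0.5 or accel_free_capacity == 0.0:
        return VSwitchMode.SOFTWARE_ONLY        # CPUs can keep up on their own
    if accel_free_capacity >= 0.5:
        return VSwitchMode.FULL_OFFLOAD         # plenty of acceleration spare
    return VSwitchMode.PARTIAL_OFFLOAD          # split work between the two

print(select_mode(cpu_demand=0.3, accel_free_capacity=0.9))  # SOFTWARE_ONLY
print(select_mode(cpu_demand=0.9, accel_free_capacity=0.2))  # PARTIAL_OFFLOAD
print(select_mode(cpu_demand=0.9, accel_free_capacity=0.7))  # FULL_OFFLOAD
```
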
  • Publication number: 20190044879
    Abstract: Technologies for reordering network packets on egress include a network interface controller (NIC) configured to associate a received network packet with a descriptor, generate a sequence identifier for the received network packet, and insert the generated sequence identifier into the associated descriptor. The NIC is further configured to determine whether the received network packet is to be transmitted from a compute device associated with the NIC to another compute device and insert, in response to a determination that the received network packet is to be transmitted to the other compute device, the descriptor into a transmission queue of descriptors. Additionally, the NIC is configured to transmit the network packet based on position of the descriptor in the transmission queue of descriptors based on the generated sequence identifier. Other embodiments are described herein.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Bruce Richardson, Andrew Cunningham, Alexander J. Leckey, Brendan Ryan, Patrick Fleming, Patrick Connor, David Hunt, Andrey Chilikin, Chris MacNamara
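
The egress-reordering mechanism above can be sketched as a transmit queue that releases descriptors strictly in sequence-identifier order, regardless of the order in which they were enqueued; the data structures below are an illustrative stand-in for the NIC's descriptor rings.

```python
import heapq

class EgressReorderQueue:
    """Sketch: descriptors carry a sequence identifier; the transmit queue
    releases them strictly in sequence order even if they were queued out of
    order (e.g. after parallel processing)."""
    def __init__(self):
        self.heap = []
        self.next_seq = 0

    def enqueue(self, descriptor):
        heapq.heappush(self.heap, (descriptor["seq"], descriptor))

    def transmit_ready(self):
        sent = []
        # Only transmit while the lowest-numbered descriptor is the next
        # expected one; later descriptors wait for the gap to fill.
        while self.heap and self.heap[0][0] == self.next_seq:
            _, desc = heapq.heappop(self.heap)
            sent.append(desc["packet"])
            self.next_seq += 1
        return sent

q = EgressReorderQueue()
for seq, pkt in [(2, "C"), (0, "A"), (3, "D"), (1, "B")]:
    q.enqueue({"seq": seq, "packet": pkt})
print(q.transmit_ready())   # ['A', 'B', 'C', 'D']
```
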
  • Patent number: D853114
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: July 9, 2019
    Assignee: Jake's Holding Corporation
    Inventors: Patrick Connor, Gayle Nummelin, Gerry Veitch, Abram Fehr, Shawn Bontaine