Patents by Inventor Chris Macnamara

Chris Macnamara has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190364492
    Abstract: A circuit arrangement includes a preprocessing circuit configured to obtain context information related to a user location, a learning circuit configured to determine a predicted user movement based on context information related to a user location to obtain a predicted route and to determine predicted radio conditions along the predicted route, and a decision circuit configured to, based on the predicted radio conditions, identify one or more first areas expected to have a first type of radio conditions and one or more second areas expected to have a second type of radio conditions different from the first type of radio conditions and to control radio activity while traveling on the predicted route according to the one or more first areas and the one or more second areas.
    Type: Application
    Filed: June 28, 2019
    Publication date: November 28, 2019
    Inventors: Shahrnaz Azizi, Biljana Badic, John Browne, Dave Cavalcanti, Hyung-Nam Choi, Thorsten Clevorn, Ajay Gupta, Maruti Gupta Hyde, Ralph Hasholzner, Nageen Himayat, Simon Hunt, Ingolf Karls, Thomas Kenney, Yiting Liao, Chris Macnamara, Marta Martinez Tarradell, Markus Dominik Mueck, Venkatesan Nallampatti Ekambaram, Niall Power, Bernhard Raaf, Reinhold Schneider, Ashish Singh, Sarabjot Singh, Srikathyayani Srikanteswara, Shilpa Talwar, Feng Xue, Zhibin Yu, Robert Zaus, Stefan Franz, Uwe Kliemann, Christian Drewes, Juergen Kreuchauf
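    Illustrative sketch for the entry above: a minimal C example of the decision step, where predicted route segments labeled with expected radio conditions drive radio activity. The segment list, condition labels, and defer policy are assumptions for illustration, not the claimed circuit design.

```c
/* Minimal sketch, not from the filing: per-segment radio-activity decisions. */
#include <stdio.h>

enum radio_cond { COND_GOOD, COND_POOR };

struct route_segment {
    int id;
    enum radio_cond predicted;   /* output of the prediction step */
};

/* Decision step: allow bulk traffic in segments predicted to have good radio
 * conditions; defer it while crossing predicted poor-coverage areas. */
static void schedule_radio_activity(const struct route_segment *seg, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (seg[i].predicted == COND_GOOD)
            printf("segment %d: allow bulk transfers\n", seg[i].id);
        else
            printf("segment %d: defer bulk transfers, control traffic only\n",
                   seg[i].id);
    }
}

int main(void)
{
    struct route_segment route[] = {
        { 0, COND_GOOD }, { 1, COND_POOR }, { 2, COND_GOOD },
    };
    schedule_radio_activity(route, sizeof route / sizeof route[0]);
    return 0;
}
```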
  • Publication number: 20190327190
    Abstract: Technologies for scalable packet reception and transmission include a network device. The network device is to establish a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device is also to generate and assign receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device is further to determine whether the NIC has received one or more packets and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring.
    Type: Application
    Filed: July 2, 2019
    Publication date: October 24, 2019
    Inventors: John J. Browne, Tomasz Kantecki, Chris MacNamara, Pierre Laurent, Sean Harte
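    Illustrative sketch for the entry above: a minimal layout of a circular ring of receive descriptors, each pointing to a memory buffer, with a memcpy standing in for the DMA copy from the NIC. Ring size, field names, and the fake packet are assumptions, not the published mechanism.

```c
/* Minimal sketch of a descriptor ring; memcpy stands in for DMA. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define RING_SIZE 8            /* number of slots in the circular buffer */
#define BUF_SIZE  2048         /* per-descriptor packet buffer */

struct rx_descriptor {
    uint8_t *buf;              /* pointer to the memory buffer for packet data */
    uint16_t len;              /* bytes written for the received packet */
};

static uint8_t buffers[RING_SIZE][BUF_SIZE];
static struct rx_descriptor ring[RING_SIZE];

int main(void)
{
    /* Assign a receive descriptor (and its buffer) to every slot in the ring. */
    for (int i = 0; i < RING_SIZE; i++)
        ring[i] = (struct rx_descriptor){ .buf = buffers[i], .len = 0 };

    /* Stand-in for "the NIC has received one or more packets": copy packet
     * data into the buffer referenced by the descriptor at the tail slot. */
    const uint8_t fake_packet[] = { 0xDE, 0xAD, 0xBE, 0xEF };
    unsigned tail = 0;
    memcpy(ring[tail].buf, fake_packet, sizeof fake_packet);   /* DMA stand-in */
    ring[tail].len = sizeof fake_packet;
    tail = (tail + 1) % RING_SIZE;                             /* advance slot */

    printf("slot 0 holds %u bytes\n", (unsigned)ring[0].len);
    return 0;
}
```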
  • Patent number: 10445272
    Abstract: A network system includes a central processing unit and a peripheral device in electrical communication with the central processing unit. The peripheral device has at least one power input and a data input. The network system also includes an out of band controller in electrical communication with the central processing unit, the peripheral device, and an external management interface. Responsive to an identified threat, the out of band controller is configured to disable the at least one power input and the data input to the peripheral device, where the disablement indicates to the central processing unit that a hot plug event has occurred with respect to the peripheral device. The out of band controller is also configured to enable auxiliary power to the peripheral device such that the out of band controller remains in communication with the peripheral device during remediation of the identified threat.
    Type: Grant
    Filed: July 5, 2018
    Date of Patent: October 15, 2019
    Assignee: Intel Corporation
    Inventors: Kevin Devey, John Browne, Chris Macnamara, Eoin Walsh, Bruce Richardson, Andrew Cunningham, Niall Power, David Hunt, Changzheng Wei, Eliezer Tamir
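    Illustrative sketch for the entry above: a minimal sequence in which the out-of-band controller cuts a peripheral's power and data inputs (so the host sees a hot-plug removal) while keeping auxiliary power up. The struct fields and function name are assumptions, not the claimed design.

```c
/* Minimal sketch of the out-of-band isolation sequence (illustrative only). */
#include <stdbool.h>
#include <stdio.h>

struct peripheral {
    bool main_power;   /* primary power input */
    bool data_input;   /* data path to the host */
    bool aux_power;    /* auxiliary power kept up for the OOB controller */
};

/* On an identified threat: disable power and data inputs, keep auxiliary
 * power so the OOB controller stays in contact during remediation. */
static void isolate_peripheral(struct peripheral *p)
{
    p->main_power = false;
    p->data_input = false;
    p->aux_power  = true;
    printf("peripheral isolated: host observes hot-plug event, OOB link stays up\n");
}

int main(void)
{
    struct peripheral dev = { true, true, false };
    isolate_peripheral(&dev);
    return 0;
}
```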
  • Patent number: 10433035
    Abstract: An apparatus includes telemetry registers, a memory, and a virtualized telemetry controller. The memory may store a set of telemetry profiles, including a first telemetry profile specifying a collection trigger, a set of telemetry registers, and a telemetry data destination. The virtualized telemetry controller may be configured to: detect a condition satisfying the collection trigger specified in the first telemetry profile; in response to a detection of the condition, read telemetry values from the set of telemetry registers specified in the first telemetry profile; generate a telemetry container including the telemetry values; and send the telemetry container to the telemetry data destination specified in the first telemetry profile.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: October 1, 2019
    Assignee: Intel Corporation
    Inventors: Ronen Chayat, Andrey Chilikin, John J. Browne, Chris MacNamara, Tomasz Kantecki
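    Illustrative sketch for the entry above: a minimal telemetry profile (trigger, register set, destination) and a collection pass that builds a container when the trigger is satisfied. Register indices, the threshold check, and the print-out destination are assumptions, not the patented implementation.

```c
/* Minimal sketch of a telemetry profile and one collection pass. */
#include <stdint.h>
#include <stdio.h>

#define MAX_REGS 4

struct telemetry_profile {
    uint64_t trigger_threshold;     /* collection trigger (e.g. a counter value) */
    int      reg_idx[MAX_REGS];     /* which telemetry registers to read */
    int      nregs;
    const char *destination;        /* telemetry data destination */
};

static uint64_t telemetry_regs[16]; /* stand-in for hardware telemetry registers */

static void collect(const struct telemetry_profile *p, uint64_t observed)
{
    if (observed < p->trigger_threshold)
        return;                     /* collection trigger not satisfied */

    uint64_t container[MAX_REGS] = { 0 };   /* telemetry container being built */
    for (int i = 0; i < p->nregs; i++)
        container[i] = telemetry_regs[p->reg_idx[i]];

    printf("sending %d values (first=%llu) to %s\n",
           p->nregs, (unsigned long long)container[0], p->destination);
}

int main(void)
{
    struct telemetry_profile prof = { 100, { 0, 3, 7 }, 3, "collector:9000" };
    telemetry_regs[3] = 42;
    collect(&prof, 150);            /* observed condition satisfies the trigger */
    return 0;
}
```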
  • Publication number: 20190268269
    Abstract: A computing device includes an appliance status table to store at least one of reliability and performance data for one or more network functions virtualization (NFV) appliances and one or more legacy network appliances. The computing device includes a load controller to configure an Internet Protocol (IP) filter rule to select a packet for which processing of the packet is to be migrated from a selected one of the one or more legacy network appliances to a selected one of the one or more NFV appliances, and to update the appliance status table with received at least one of reliability and performance data for the one or more legacy network appliances and the one or more NFV appliances. The computing device includes a packet distributor to receive the packet, to select one of the one or more NFV appliances based at least in part on the appliance status table, and to send the packet to the selected NFV appliance. Other embodiments are described herein.
    Type: Application
    Filed: April 26, 2019
    Publication date: August 29, 2019
    Inventors: Patrick CONNOR, Andrey CHILIKIN, Brendan RYAN, Chris MACNAMARA, John J. BROWNE, Krishnamurthy JAMBUR SATHYANARAYANA, Stephen DOYLE, Tomasz KANTECKI, Anthony KELLY, Ciara LOFTUS, Fiona TRAHE
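    Illustrative sketch for the entry above: a minimal appliance status table consulted by a packet distributor to pick an NFV appliance for a packet matched by the load controller's filter rule. The scoring rule and table contents are assumptions, not the claimed method.

```c
/* Minimal sketch of selecting an NFV appliance from a status table. */
#include <stdio.h>

struct appliance_status {
    const char *name;
    int is_nfv;          /* 1 = NFV appliance, 0 = legacy appliance */
    double reliability;  /* reported reliability/performance metric */
};

/* Pick the NFV appliance with the best reported reliability. */
static const struct appliance_status *
select_nfv(const struct appliance_status *tbl, int n)
{
    const struct appliance_status *best = NULL;
    for (int i = 0; i < n; i++)
        if (tbl[i].is_nfv && (!best || tbl[i].reliability > best->reliability))
            best = &tbl[i];
    return best;
}

int main(void)
{
    struct appliance_status table[] = {
        { "legacy-fw-0", 0, 0.97 },
        { "nfv-fw-0",    1, 0.99 },
        { "nfv-fw-1",    1, 0.95 },
    };
    const struct appliance_status *dst = select_nfv(table, 3);
    printf("migrating matched packet to %s\n", dst ? dst->name : "none");
    return 0;
}
```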
  • Publication number: 20190238442
    Abstract: Technologies for performance monitoring include a computing device having multiple processor cores. The computing device performs a training workload with a processor core by continuously polling an empty input queue. The computing device determines empty polling thresholds based on the empty polling workload. The computing device performs a packet processing workload with one or more processor cores by continuously polling input queues associated with network traffic. The computing device compares a measured number of empty polls performed by the packet processing workload against the empty polling thresholds. The computing device configures power management of one or more processor cores in response to the comparison. The computing device may determine empty polling trends and compare the measured number of empty polls and the empty polling trends to the empty polling thresholds. Other embodiments are described and claimed.
    Type: Application
    Filed: April 11, 2019
    Publication date: August 1, 2019
    Inventors: Peter McCarthy, Chris MacNamara, John Browne, Liang J. Ma, Liam Day
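    Illustrative sketch for the entry above: comparing a measured count of empty polls against thresholds derived from a training workload and suggesting a power-management action. The threshold values and frequency-step policy are assumptions for illustration.

```c
/* Minimal sketch of empty-poll thresholds driving power management. */
#include <stdio.h>

struct poll_thresholds {
    unsigned long low;    /* few empty polls: core is busy with real traffic */
    unsigned long high;   /* mostly empty polls: core is spinning idle */
};

/* Return a suggested power action for one measurement interval. */
static const char *power_action(unsigned long empty_polls,
                                const struct poll_thresholds *t)
{
    if (empty_polls > t->high)
        return "scale core frequency down";
    if (empty_polls < t->low)
        return "scale core frequency up";
    return "keep current frequency";
}

int main(void)
{
    struct poll_thresholds t = { .low = 1000, .high = 90000 };  /* from training */
    printf("%s\n", power_action(120000, &t));
    printf("%s\n", power_action(500, &t));
    return 0;
}
```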
  • Patent number: 10341264
    Abstract: Technologies for scalable packet reception and transmission include a network device. The network device is to establish a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device is also to generate and assign receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device is further to determine whether the NIC has received one or more packets and copy, with direct memory access (DMA) and in response to a determination that the NIC has received one or more packets, packet data of the received one or more packets from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: July 2, 2019
    Assignee: Intel Corporation
    Inventors: John J. Browne, Tomasz Kantecki, Chris MacNamara, Pierre Laurent, Sean Harte
  • Publication number: 20190199646
    Abstract: Packets are differentiated based on their traffic class. A traffic class is allocated bandwidth for transmission. One or more core or thread can be allocated to process packets of a traffic class for transmission based on allocated bandwidth for that traffic class. If multiple traffic classes are allocated bandwidth, and a traffic class underutilizes allocated bandwidth or a traffic class is allocated insufficient bandwidth, then allocated bandwidth can be adjusted for a future transmission time slot. For example, a higher priority traffic class with excess bandwidth can share the excess bandwidth with a next highest priority traffic class for use to allocate packets for transmission for the same time slot.
    Type: Application
    Filed: February 27, 2019
    Publication date: June 27, 2019
    Inventors: Jasvinder SINGH, John J. BROWNE, Tomasz KANTECKI, Chris MACNAMARA
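    Illustrative sketch for the entry above: per-slot bandwidth accounting in which a higher-priority traffic class passes its unused allocation down to the next class. The byte-credit bookkeeping is an assumption, not the claimed scheduler.

```c
/* Minimal sketch of sharing unused high-priority bandwidth within a slot. */
#include <stdio.h>

#define NCLASSES 3

int main(void)
{
    /* Bytes allocated and bytes actually queued per traffic class, highest
     * priority first, for one transmission time slot. */
    long allocated[NCLASSES] = { 4000, 3000, 2000 };
    long demand[NCLASSES]    = { 1500, 4500, 2500 };

    long carry = 0;  /* excess bandwidth passed down from higher classes */
    for (int c = 0; c < NCLASSES; c++) {
        long budget = allocated[c] + carry;
        long sent   = demand[c] < budget ? demand[c] : budget;
        carry = budget - sent;          /* unused bytes flow to the next class */
        printf("class %d: sent %ld of %ld bytes (budget %ld)\n",
               c, sent, demand[c], budget);
    }
    return 0;
}
```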
  • Patent number: 10331590
    Abstract: Disclosed is an apparatus including a network interface controller (NIC), memory, and an accelerator. The accelerator can include a direct memory access (DMA) controller configured to receive data packets from the NIC and to provide the data packets to the memory. The accelerator can also include processing circuitry to generate processed data packets by implementing packet processing functions on the data packets received from the NIC, and to provide the processed data packets to at least one processing core. Other methods, apparatuses, articles and systems are also described.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: June 25, 2019
    Assignee: Intel Corporation
    Inventors: Chris MacNamara, Tomasz Kantecki, John J. Browne
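    Illustrative sketch for the entry above: the accelerator-style data path reduced to three steps, with stubs for the processing function and the hand-off to a core. All names and the placeholder transformation are assumptions.

```c
/* Minimal sketch of the accelerator data path (illustrative stubs only). */
#include <stdint.h>
#include <stdio.h>

struct packet { uint8_t data[64]; uint16_t len; };

/* Stand-in for the accelerator's packet processing circuitry. */
static void process_packet(struct packet *p)
{
    p->data[0] ^= 0xFF;   /* placeholder transformation */
}

/* Stand-in for queueing the processed packet toward a processing core. */
static void hand_to_core(const struct packet *p, int core)
{
    printf("processed packet (%u bytes) queued to core %d\n",
           (unsigned)p->len, core);
}

int main(void)
{
    struct packet pkt = { .data = { 0x01 }, .len = 60 };  /* as if DMA'd from the NIC */
    process_packet(&pkt);
    hand_to_core(&pkt, 2);
    return 0;
}
```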
  • Publication number: 20190190785
    Abstract: Methods, systems, and computer programs are presented for managing resources to deliver a network service in a distributed configuration. A method includes an operation for identifying resources for delivering a network service, the resources being classified by geographic area. Further, the method includes operations for selecting service agents to configure the identified resources, each service agent to manage service pools for delivering the network service across at least one geographic area, the service agents being selected to provide configurability for the service pools. The method further includes operations for sending configuration rules, to the service agents, configured to establish service pools for delivering the network service across the geographic areas. Service traffic information is collected from the service agents, and the resources are adjusted based on the collected service traffic information.
    Type: Application
    Filed: September 30, 2016
    Publication date: June 20, 2019
    Inventors: Damien Power, Alan Carey, Chris MacNamara
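    Illustrative sketch for the entry above: configuration rules pushed to per-area service agents to establish service pools. The rule fields, agent names, and pool sizes are assumptions, not the described system.

```c
/* Minimal sketch of per-area configuration rules sent to service agents. */
#include <stdio.h>

struct config_rule {
    const char *area;        /* geographic area the service agent covers */
    int pool_size;           /* resources in the service pool for that area */
};

/* Stand-in for pushing a rule to a remote service agent. */
static void send_rule(const char *agent, const struct config_rule *r)
{
    printf("agent %s: provision pool of %d for area %s\n",
           agent, r->pool_size, r->area);
}

int main(void)
{
    struct config_rule rules[] = { { "eu-west", 8 }, { "us-east", 12 } };
    send_rule("agent-a", &rules[0]);
    send_rule("agent-b", &rules[1]);
    return 0;
}
```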
  • Publication number: 20190155645
    Abstract: Packets received at an input port can be sub-divided into timeslots. A core or thread can process packets associated with a timeslot. The timeslot size can be increased or decreased based on utilization of a core that is allocated to process packets associated with a timeslot. A timeslot number can be assigned to each received packet. For transmission of the received packets, the timeslot number can be used to maintain an order of transmission to attempt to reduce out-of-order packet transmission.
    Type: Application
    Filed: January 23, 2019
    Publication date: May 23, 2019
    Inventors: John J. BROWNE, Chris MACNAMARA, Tomasz KANTECKI
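    Illustrative sketch for the entry above: assigning each received packet a timeslot number from its arrival time so the transmit side can preserve order. The slot duration and structures are assumptions.

```c
/* Minimal sketch of per-packet timeslot assignment. */
#include <stdint.h>
#include <stdio.h>

#define SLOT_NS 1000000ULL      /* assumed timeslot length: 1 ms */

struct packet {
    uint64_t rx_time_ns;        /* arrival timestamp */
    uint64_t timeslot;          /* timeslot number assigned on receive */
};

static void assign_timeslot(struct packet *p)
{
    p->timeslot = p->rx_time_ns / SLOT_NS;   /* sub-divide arrivals into slots */
}

int main(void)
{
    struct packet pkts[3] = {
        { .rx_time_ns =  900000 }, { .rx_time_ns = 1100000 }, { .rx_time_ns = 2500000 },
    };
    for (int i = 0; i < 3; i++) {
        assign_timeslot(&pkts[i]);
        /* The transmit side would drain slot N completely before slot N+1
         * to limit out-of-order transmission across cores. */
        printf("packet %d -> timeslot %llu\n", i,
               (unsigned long long)pkts[i].timeslot);
    }
    return 0;
}
```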
  • Publication number: 20190104458
    Abstract: Aspects of data re-direction are described, which can include software-defined networking (SDN) data re-direction operations. Some aspects include data re-direction operations performed by one or more virtualized network functions. In some aspects, a network router decodes an indication of a handover of a user equipment (UE) from a first end point (EP) to a second EP, based on the indication, the router can update a relocation table including the UE identifier, an identifier of the first EP, and an identifier of the second EP. The router can receive a data packet for the UE, configured for transmission to the first EP, and modify the data packet, based on the relocation table, for rerouting to the second EP. In some aspects, the router can decode handover prediction information, including an indication of a predicted future geographic location of the UE, and update the relocation table based on the handover prediction information.
    Type: Application
    Filed: September 28, 2018
    Publication date: April 4, 2019
    Inventors: Jonas Svennebring, Niall D. McDonnell, Andrey Chilikin, Andrew Cunningham, Chris MacNamara, Carl-Oscar Montelius, Eliezer Tamir, Bjorn Topel
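    Illustrative sketch for the entry above: a relocation-table lookup that rewrites the destination of a packet addressed to a UE's pre-handover end point. The table layout and identifiers are assumptions, not the claimed router design.

```c
/* Minimal sketch of a handover relocation-table lookup. */
#include <stdio.h>
#include <string.h>

struct relocation_entry {
    const char *ue_id;      /* user equipment identifier */
    const char *old_ep;     /* first end point (pre-handover) */
    const char *new_ep;     /* second end point (post-handover) */
};

/* Return the end point a packet for ue_id should be sent to. */
static const char *route_for(const struct relocation_entry *tbl, int n,
                             const char *ue_id, const char *dst_ep)
{
    for (int i = 0; i < n; i++)
        if (!strcmp(tbl[i].ue_id, ue_id) && !strcmp(tbl[i].old_ep, dst_ep))
            return tbl[i].new_ep;      /* reroute to the post-handover EP */
    return dst_ep;                      /* no relocation recorded */
}

int main(void)
{
    struct relocation_entry table[] = { { "ue-42", "ep-a", "ep-b" } };
    printf("packet for ue-42 -> %s\n", route_for(table, 1, "ue-42", "ep-a"));
    return 0;
}
```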
  • Publication number: 20190097984
    Abstract: Techniques and apparatuses for processing data units are described. In one embodiment, for example, an apparatus for networking may include at least one memory, logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to access an encrypted packet having an encrypted portion, determine at least one flow control segment of the encrypted portion, decrypt the at least one flow control segment to generate a partially-decrypted packet comprising a decrypted at least one flow control segment and an encrypted remainder portion, the remainder portion comprising a portion of the encrypted packet that does not include the decrypted at least one flow control segment, access process information in the decrypted at least one flow control segment, and process the partially-decrypted packet according to the process information. Other embodiments are described and claimed.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Applicant: INTEL CORPORATION
    Inventors: John J. Browne, Chris Macnamara, Namakkal N. Venkatesan, Tomasz Kantecki, Declan W. Doherty
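    Illustrative sketch for the entry above: decrypting only an assumed flow-control segment of a packet and steering on the recovered field while the remainder stays encrypted. The XOR "cipher", offsets, and steering rule are stand-ins, not the claimed mechanism.

```c
/* Minimal sketch of partial decryption of a flow-control segment. */
#include <stdint.h>
#include <stdio.h>

#define FC_OFFSET 0     /* assumed offset of the flow-control segment */
#define FC_LEN    2     /* assumed length of that segment */

static void xor_decrypt(uint8_t *buf, int len, uint8_t key)
{
    for (int i = 0; i < len; i++)
        buf[i] ^= key;          /* placeholder for the real cipher */
}

int main(void)
{
    uint8_t pkt[8] = { 0x5A, 0x51, 0x99, 0x88, 0x77, 0x66, 0x55, 0x44 };

    /* Decrypt only the flow-control segment; the remainder stays encrypted. */
    xor_decrypt(pkt + FC_OFFSET, FC_LEN, 0x55);

    /* Process the partially-decrypted packet using the recovered segment,
     * e.g. pick an output queue from a priority field. */
    unsigned prio = pkt[FC_OFFSET + 1] & 0x0F;
    printf("flow-control byte 0x%02X -> queue %u (payload left encrypted)\n",
           pkt[FC_OFFSET], prio);
    return 0;
}
```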
  • Publication number: 20190052530
    Abstract: Examples include techniques for monitoring a data packet transfer rate at an interface queue, and based at least in part on a comparison of the data packet transfer rate to a threshold, assigning the interface queue from a core of a first class to a core of a second class or assigning the interface queue from a core of the second class to a core of the first class.
    Type: Application
    Filed: October 15, 2018
    Publication date: February 14, 2019
    Inventors: Mohammad Abdul AWAL, Jasvinder SINGH, Reshma PATTAN, David HUNT, Declan DOHERTY, Chris MACNAMARA
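    Illustrative sketch for the entry above: moving an interface queue between two core classes when its measured packet rate crosses a threshold. Class names and the threshold value are assumptions.

```c
/* Minimal sketch of threshold-based queue-to-core-class reassignment. */
#include <stdio.h>

enum core_class { CLASS_EFFICIENT, CLASS_HIGH_PERF };

struct iface_queue {
    const char *name;
    unsigned long pps;          /* measured packets per second */
    enum core_class assigned;   /* class of the core currently servicing it */
};

static void rebalance(struct iface_queue *q, unsigned long threshold_pps)
{
    if (q->pps > threshold_pps && q->assigned == CLASS_EFFICIENT)
        q->assigned = CLASS_HIGH_PERF;       /* promote a busy queue */
    else if (q->pps <= threshold_pps && q->assigned == CLASS_HIGH_PERF)
        q->assigned = CLASS_EFFICIENT;       /* demote a quiet queue */
}

int main(void)
{
    struct iface_queue q = { "rxq0", 950000, CLASS_EFFICIENT };
    rebalance(&q, 500000);
    printf("%s -> %s cores\n", q.name,
           q.assigned == CLASS_HIGH_PERF ? "high-performance" : "efficient");
    return 0;
}
```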
  • Publication number: 20190044893
    Abstract: Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from the descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
    Type: Application
    Filed: June 30, 2018
    Publication date: February 7, 2019
    Inventors: Bruce Richardson, Chris MacNamara, Patrick Fleming, Tomasz Kantecki, Ciara Loftus, John J. Browne, Patrick Connor
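    Illustrative sketch for the entry above: a receive descriptor carrying both a small and a large buffer pointer, with the preferred buffer chosen from the packet size. Buffer sizes and the descriptor layout are assumptions.

```c
/* Minimal sketch of small-vs-large buffer selection per packet. */
#include <stdint.h>
#include <stdio.h>

#define SMALL_BUF 256
#define LARGE_BUF 2048

struct rx_descriptor {
    uint8_t *small_buf;     /* first physical address (small buffer) */
    uint8_t *large_buf;     /* second physical address (large buffer) */
};

static uint8_t small_pool[SMALL_BUF], large_pool[LARGE_BUF];

/* Choose the preferred buffer for a packet of len bytes. */
static uint8_t *select_buffer(const struct rx_descriptor *d, uint16_t len)
{
    return len <= SMALL_BUF ? d->small_buf : d->large_buf;
}

int main(void)
{
    struct rx_descriptor desc = { small_pool, large_pool };
    printf("64-byte packet -> %s buffer\n",
           select_buffer(&desc, 64) == desc.small_buf ? "small" : "large");
    printf("1500-byte packet -> %s buffer\n",
           select_buffer(&desc, 1500) == desc.small_buf ? "small" : "large");
    return 0;
}
```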
  • Publication number: 20190044873
    Abstract: Examples may include an apparatus having processing logic to receive a packet, to classify the packet based at least in part on a header of the packet, to apply one or more serial packet filter rules to the packet, and, when parallel packet filter rules are selected, to apply one or more parallel packet filter rules to the packet, wherein application of the serial packet filter rules is performed in parallel with application of the parallel packet filter rules.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John BROWNE, Chris MACNAMARA, Tomasz KANTECKI, Parthasarathy SARANGAM
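    Illustrative sketch for the entry above: two rule sets, one evaluated as a serial chain and one made up of independent rules that the apparatus could evaluate concurrently. The rule shapes are assumptions, and the actual parallel execution (threads or offload) is elided.

```c
/* Minimal sketch of serial vs. parallel packet filter rule sets. */
#include <stdbool.h>
#include <stdio.h>

struct packet { unsigned proto; unsigned dport; };

typedef bool (*filter_fn)(const struct packet *);

static bool allow_tcp(const struct packet *p)  { return p->proto == 6; }
static bool allow_http(const struct packet *p) { return p->dport == 80; }
static bool not_blocklisted(const struct packet *p) { (void)p; return true; }

int main(void)
{
    struct packet pkt = { 6, 80 };

    /* Serial chain: each rule sees the outcome of the previous one. */
    filter_fn serial[] = { allow_tcp, allow_http };
    bool pass = true;
    for (int i = 0; pass && i < 2; i++)
        pass = serial[i](&pkt);

    /* Parallel set: rules are independent, so they can be evaluated
     * concurrently with the serial chain and combined afterwards. */
    filter_fn parallel[] = { not_blocklisted };
    bool par_pass = parallel[0](&pkt);

    printf("packet %s\n", (pass && par_pass) ? "accepted" : "dropped");
    return 0;
}
```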
  • Publication number: 20190042310
    Abstract: Technologies for power-aware scheduling include a computing device that receives network packets. The computing device classifies the network packets by priority level and then assigns each network packet to a performance group bin. The packets are assigned based on priority level and other performance criteria. The computing device schedules the network packets assigned to each performance group for processing by a processing engine such as a processor core. Network packets assigned to performance groups having a high priority level are scheduled for processing by processing engines with a high performance level. The computing device may select performance levels for processing engines based on processing workload of the network packets. The computing device may control the performance level of the processing engines, for example by controlling the frequency of processor cores. The processing workload may include packet encryption. Other embodiments are described and claimed.
    Type: Application
    Filed: April 12, 2018
    Publication date: February 7, 2019
    Inventors: John Browne, Chris MacNamara, Tomasz Kantecki, Peter McCarthy, Ma Liang, Mairtin O'Loingsigh, Rory Sexton, John Griffin, Nemanja Marjanovic, David Hunt
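    Illustrative sketch for the entry above: classifying packets by priority into performance-group bins and steering each bin to a processing engine with a matching performance level. The bin boundaries and core mapping are assumptions.

```c
/* Minimal sketch of priority binning for power-aware scheduling. */
#include <stdio.h>

struct packet { int priority; };          /* e.g. from a DSCP/VLAN field */

/* Map a priority level to a performance-group bin. */
static int perf_group(int priority)
{
    if (priority >= 6) return 0;          /* bin 0: highest-performance cores */
    if (priority >= 3) return 1;          /* bin 1: mid-performance cores */
    return 2;                             /* bin 2: low-power cores */
}

int main(void)
{
    /* One representative core per bin; bin 0 cores would run at the
     * highest frequency. */
    const int bin_core[3] = { 0, 2, 4 };

    struct packet pkts[] = { { 7 }, { 4 }, { 1 } };
    for (int i = 0; i < 3; i++) {
        int g = perf_group(pkts[i].priority);
        printf("priority %d -> group %d -> core %d\n",
               pkts[i].priority, g, bin_core[g]);
    }
    return 0;
}
```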
  • Publication number: 20190042454
    Abstract: Examples include techniques to manage cache resource allocations associated with one or more cache class of service (CLOS) assignments for a processor cache. Examples include flushing portions of an allocated cache resource responsive to reassignments of CLOS.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: Tomasz KANTECKI, John BROWNE, Chris MACNAMARA, Timothy VERRALL, Marcel CORNU, Eoin WALSH, Andrew J. HERDRICH
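    Illustrative sketch for the entry above: flushing the cache ways a class of service (CLOS) gives up when its allocation is reassigned. The bitmask bookkeeping and flush stub are assumptions; real CLOS control uses platform-specific registers.

```c
/* Minimal sketch of flushing released cache ways on CLOS reassignment. */
#include <stdint.h>
#include <stdio.h>

struct clos {
    int id;
    uint32_t way_mask;   /* cache ways currently allocated to this CLOS */
};

/* Stand-in for flushing cache lines resident in the given ways. */
static void flush_ways(uint32_t ways)
{
    printf("flushing ways 0x%08X before they change owner\n", (unsigned)ways);
}

static void reassign(struct clos *c, uint32_t new_mask)
{
    uint32_t released = c->way_mask & ~new_mask;  /* ways this CLOS gives up */
    if (released)
        flush_ways(released);   /* avoid stale data crossing assignments */
    c->way_mask = new_mask;
}

int main(void)
{
    struct clos c = { 1, 0x00FF };
    reassign(&c, 0x000F);        /* shrink the allocation; upper ways flushed */
    return 0;
}
```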
  • Publication number: 20190044860
    Abstract: Technologies for providing adaptive polling of packet queues include a compute device. The compute device includes a network interface controller and a compute engine that includes a set of cores and a memory that includes a queue to store packets received by the network interface controller. The compute engine is configured to determine a predicted time period for the queue to receive packets without overflowing, execute, during the time period and with a core that is assigned to periodically poll the queue for packets, a workload, and poll, with the assigned core, the queue to remove the packets from the queue. Other embodiments are also described and claimed.
    Type: Application
    Filed: June 18, 2018
    Publication date: February 7, 2019
    Inventors: Chris MacNamara, John Browne, Tomasz Kantecki, Ciara Loftus, John Barry, Patrick Connor, Patrick Fleming
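    Illustrative sketch for the entry above: estimating how long a packet queue can absorb arrivals without overflowing, running other work for that period, then draining the queue. The arrival-rate estimate and the stub functions are assumptions.

```c
/* Minimal sketch of adaptive polling based on a predicted safe period. */
#include <stdio.h>

/* Predict how many microseconds until the queue would overflow. */
static unsigned long predict_safe_period_us(unsigned long free_slots,
                                            unsigned long arrivals_per_us)
{
    return arrivals_per_us ? free_slots / arrivals_per_us : 1000;
}

static void do_other_work(unsigned long us)
{
    printf("core runs its assigned workload for ~%lu us\n", us);
}

static void drain_queue(void)
{
    printf("core polls the queue and removes pending packets\n");
}

int main(void)
{
    unsigned long period = predict_safe_period_us(512, 4);  /* 512 free slots, ~4 pkts/us */
    do_other_work(period);
    drain_queue();
    return 0;
}
```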
  • Publication number: 20190044799
    Abstract: Technologies for hot-swapping a legacy network appliance with a network functions virtualization (NFV) appliance include a migration management compute device configured to establish a secure connection with the legacy network appliance and retrieve configuration information and operational parameters of the legacy network appliance via the established secure connection. The migration management compute device is further configured to deploy a VNF instance on the NFV appliance based on the configuration information and operational parameters, and perform a hot-swap operation to re-route network traffic from the legacy network appliance to the NFV appliance. Other embodiments are described herein.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John J. Browne, Michael McGrath, Chris MacNamara
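    Illustrative sketch for the entry above: the hot-swap sequence reduced to three stubbed steps, fetching the legacy appliance's configuration, deploying a VNF with it, and re-routing traffic. None of the function names or fields come from the filing.

```c
/* Minimal sketch of the legacy-to-NFV hot-swap sequence (stubs only). */
#include <stdio.h>

struct legacy_config { int vlan; const char *acl; };

static struct legacy_config fetch_config(const char *appliance)
{
    printf("secure connection to %s: retrieving configuration\n", appliance);
    return (struct legacy_config){ 100, "allow tcp/443" };
}

static void deploy_vnf(const struct legacy_config *cfg)
{
    printf("deploying VNF instance with vlan=%d acl=\"%s\"\n", cfg->vlan, cfg->acl);
}

static void reroute_traffic(const char *from, const char *to)
{
    printf("hot-swap: traffic moved from %s to %s\n", from, to);
}

int main(void)
{
    struct legacy_config cfg = fetch_config("legacy-fw-0");
    deploy_vnf(&cfg);
    reroute_traffic("legacy-fw-0", "nfv-fw-0");
    return 0;
}
```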