Patents by Inventor John J. Browne

John J. Browne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190102312
    Abstract: A computing apparatus, including: a processor; a pointer to a counter memory location; and a lazy increment counter engine to: receive a stimulus to update the counter; and lazy increment the counter including issuing a weakly-ordered increment directive to the pointer.
    Type: Application
    Filed: September 30, 2017
    Publication date: April 4, 2019
    Inventors: Niall D. McDonnell, Christopher MacNamara, John J. Browne, Andrew Cunningham, Brendan Ryan, Patrick Fleming, Namakkal N. Venkatesan, Bruce Richardson, Tomasz Kantecki, Sean Harte, Pierre Laurent
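The "weakly-ordered increment directive" in the abstract above maps naturally onto a relaxed atomic add. Below is a minimal C11 sketch, assuming a statistics-style counter reached through a pointer; it is an illustration of the idea, not the claimed implementation.

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical "lazy increment" engine: holds a pointer to a counter memory
 * location and bumps it with a weakly-ordered (relaxed) atomic add, avoiding
 * a fully serialized increment on the fast path. */
struct lazy_counter_engine {
    _Atomic unsigned long *counter;   /* pointer to the counter memory location */
};

/* Called on each stimulus to update the counter (e.g. a packet arrival). */
static void lazy_increment(struct lazy_counter_engine *eng)
{
    /* Relaxed ordering: the add is not ordered against surrounding memory
     * operations, which is acceptable for statistics-style counters. */
    atomic_fetch_add_explicit(eng->counter, 1, memory_order_relaxed);
}

int main(void)
{
    _Atomic unsigned long stats = 0;
    struct lazy_counter_engine eng = { .counter = &stats };

    for (int i = 0; i < 1000; i++)
        lazy_increment(&eng);          /* stimulus to update the counter */

    /* Use a stronger ordering only when the value is actually read back. */
    printf("count = %lu\n", atomic_load_explicit(&stats, memory_order_acquire));
    return 0;
}
```
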
  • Publication number: 20190097889
    Abstract: A computing apparatus, including: a hardware platform; and an interworking broker function (IBF) hosted on the hardware platform, the IBF including a translation driver (TD) associated with a legacy network appliance lacking native interoperability with an orchestrator, the IBF configured to: receive from the orchestrator a network function provisioning or configuration command for the legacy network appliance; operate the TD to translate the command to a format consumable by the legacy network appliance; and forward the command to the legacy network appliance.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: John J. Browne, Timothy Verrall, Maryam Tahhan, Michael J. McGrath, Sean Harte, Kevin Devey, Jonathan Kenny, Christopher MacNamara
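As a rough illustration of the interworking broker's translation driver, the C sketch below renders a generic orchestrator command into a legacy text command ready to be forwarded; the structure fields and the legacy syntax are invented for the example and are not taken from the application.

```c
#include <stdio.h>

/* Orchestrator-side provisioning command in a generic, appliance-neutral
 * form. Field names are illustrative only. */
struct orch_cmd {
    const char *action;      /* e.g. "add_vlan" */
    const char *interface;   /* e.g. "eth0"     */
    int         vlan_id;
};

/* Translation driver: render the generic command into the text syntax the
 * legacy network appliance understands. */
static int translate_cmd(const struct orch_cmd *c, char *out, size_t outlen)
{
    return snprintf(out, outlen, "configure interface %s vlan %d  ! %s",
                    c->interface, c->vlan_id, c->action);
}

int main(void)
{
    char legacy[128];
    struct orch_cmd cmd = { "add_vlan", "eth0", 100 };
    translate_cmd(&cmd, legacy, sizeof legacy);
    printf("legacy command: %s\n", legacy);   /* forwarded to the appliance */
    return 0;
}
```
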
  • Publication number: 20190097951
    Abstract: A network interface device, including: an ingress interface; a host platform interface to communicatively couple to a host platform; and a packet preprocessor including logic to: receive via the ingress interface a data sequence including a plurality of discrete data units; identify the data sequence as data for a parallel processing operation; reorder the discrete data units into a reordered data frame, the reordered data frame configured to order the discrete data units for consumption by the parallel operation; and send the reordered data to the host platform via the host platform interface.
    Type: Application
    Filed: September 28, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: Tomasz Kantecki, Niall Power, John J. Browne, Christopher MacNamara, Stephen Doyle
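The reordering step described above can be pictured as gathering out-of-order data units into a contiguous frame laid out for lane-wise parallel consumption. The C sketch below assumes each unit carries its logical index and that the indices are unique and dense; both are assumptions of the example, not part of the claim.

```c
#include <stdint.h>
#include <string.h>

#define UNIT_BYTES 16      /* size of one discrete data unit (assumption) */

/* A data unit as received on the ingress interface, tagged with its logical
 * position in the sequence; units may arrive interleaved or out of order. */
struct data_unit {
    uint32_t index;
    uint8_t  payload[UNIT_BYTES];
};

/* Gather n units into a contiguous frame ordered by logical index so that a
 * lane-wise parallel operation can consume unit k at offset k * UNIT_BYTES.
 * Assumes each index is unique and less than n. */
static void reorder_for_parallel(const struct data_unit *in, size_t n,
                                 uint8_t *frame)
{
    for (size_t i = 0; i < n; i++)
        memcpy(frame + (size_t)in[i].index * UNIT_BYTES,
               in[i].payload, UNIT_BYTES);
}

int main(void)
{
    struct data_unit in[3] = { { .index = 2 }, { .index = 0 }, { .index = 1 } };
    uint8_t frame[3 * UNIT_BYTES];
    reorder_for_parallel(in, 3, frame);   /* frame now holds units 0,1,2 in order */
    return 0;
}
```
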
  • Publication number: 20190097948
    Abstract: An apparatus, including: a hardware platform; logic to execute on the hardware platform, the logic configured to: receive a batch including a first plurality of packets; identify a common attribute of the batch; perform batch processing on the batch according to the common attribute; generate a hint for the batch, the hint comprising information about the batch to facilitate processing of the batch; and forward the batch to a host platform network interface with the hint.
    Type: Application
    Filed: September 28, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: John J. Browne, Christopher MacNamara, Tomasz Kantecki, Barak Hermesh, Sean Harte, Andrey Chilikin, Brendan Ryan, Bruce Richardson, Michael A. O'Hanlon, Andrew Cunningham
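A toy version of the batching-with-hints idea: scan a batch for a shared attribute (here a flow identifier, which is an assumption of the example) and emit a hint so a downstream stage can skip per-packet lookups.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical packet and hint layouts -- field names are illustrative only. */
struct pkt  { uint32_t flow_id; uint16_t l4_dst_port; };
struct hint { bool same_flow; uint32_t flow_id; uint16_t pkt_count; };

/* Inspect a batch, detect a common attribute (one shared flow id), and build
 * a hint that travels with the batch to the host platform interface. */
static struct hint make_batch_hint(const struct pkt *batch, uint16_t n)
{
    struct hint h = { .same_flow = true, .flow_id = batch[0].flow_id,
                      .pkt_count = n };
    for (uint16_t i = 1; i < n; i++)
        if (batch[i].flow_id != h.flow_id) { h.same_flow = false; break; }
    return h;
}

int main(void)
{
    struct pkt batch[4] = { {7, 80}, {7, 80}, {7, 443}, {7, 80} };
    struct hint h = make_batch_hint(batch, 4);
    printf("same_flow=%d flow=%u count=%u\n",
           h.same_flow, (unsigned)h.flow_id, (unsigned)h.pkt_count);
    return 0;
}
```
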
  • Publication number: 20190097984
    Abstract: Techniques and apparatuses for processing data units are described. In one embodiment, for example, an apparatus for networking may include at least one memory, logic, at least a portion of the logic comprised in hardware coupled to the at least one memory, the logic to access an encrypted packet having an encrypted portion, determine at least one flow control segment of the encrypted portion, decrypt the at least one flow control segment to generate a partially-decrypted packet comprising a decrypted at least one flow control segment and an encrypted remainder portion, the remainder portion comprising a portion of the encrypted packet that does not include the decrypted at least one flow control segment, access process information in the decrypted at least one flow control segment, and process the partially-decrypted packet according to the process information. Other embodiments are described and claimed.
    Type: Application
    Filed: September 26, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: John J. Browne, Chris Macnamara, Namakkal N. Venkatesan, Tomasz Kantecki, Declan W. Doherty
  • Patent number: 10225183
    Abstract: In one embodiment, a system comprises a network interface controller to determine context information associated with a data packet. The network interface controller may select a receive descriptor profile from a plurality of receive descriptor profiles based upon a first portion of the context information and build a receive descriptor for the data packet based upon a second portion of the context information and the selected receive descriptor profile.
    Type: Grant
    Filed: June 1, 2016
    Date of Patent: March 5, 2019
    Assignee: Intel Corporation
    Inventors: Ronen Chayat, Andrey Chilikin, John J. Browne
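One way to read the claim above is as a table of receive descriptor profiles indexed by part of the packet context, with the rest of the context copied into the descriptor as the profile dictates. The C sketch below uses invented context fields and profile contents purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative context extracted from an incoming packet. */
struct pkt_context {
    uint8_t  is_tunneled;   /* first portion: drives profile selection       */
    uint8_t  is_ipv6;
    uint32_t rss_hash;      /* second portion: copied into the descriptor    */
    uint16_t vlan_tag;
};

/* A profile states which context fields the built descriptor carries. */
struct rx_desc_profile { uint8_t want_rss; uint8_t want_vlan; };

static const struct rx_desc_profile profiles[4] = {
    [0] = { .want_rss = 1, .want_vlan = 1 },   /* plain IPv4     */
    [1] = { .want_rss = 1, .want_vlan = 0 },   /* plain IPv6     */
    [2] = { .want_rss = 0, .want_vlan = 1 },   /* tunneled IPv4  */
    [3] = { .want_rss = 0, .want_vlan = 0 },   /* tunneled IPv6  */
};

struct rx_desc { uint32_t rss_hash; uint16_t vlan_tag; uint8_t profile_id; };

/* Select a profile from one part of the context, then build the descriptor
 * from the rest of the context plus the selected profile. */
static struct rx_desc build_rx_desc(const struct pkt_context *ctx)
{
    uint8_t id = (uint8_t)((ctx->is_tunneled << 1) | ctx->is_ipv6);
    const struct rx_desc_profile *p = &profiles[id];
    struct rx_desc d = { .profile_id = id };
    if (p->want_rss)  d.rss_hash = ctx->rss_hash;
    if (p->want_vlan) d.vlan_tag = ctx->vlan_tag;
    return d;
}

int main(void)
{
    struct pkt_context ctx = { .is_tunneled = 0, .is_ipv6 = 0,
                               .rss_hash = 0xdeadbeef, .vlan_tag = 100 };
    struct rx_desc d = build_rx_desc(&ctx);
    printf("profile %u rss=%#x vlan=%u\n",
           (unsigned)d.profile_id, (unsigned)d.rss_hash, (unsigned)d.vlan_tag);
    return 0;
}
```
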
  • Patent number: 10212827
    Abstract: Techniques and mechanisms for controlling configurable circuitry including an antifuse. In an embodiment, the antifuse is disposed in or on a substrate, the antifuse configured to form a solder joint to facilitate interconnection of circuit components. Control circuitry to operate with the antifuse is disposed in, or at a side of, the same substrate. The antifuse is activated based on a voltage provided at an input node, where the control circuitry automatically transitions through a pre-determined sequence of states in response to the voltage. The pre-determined sequence of states coordinates activation of one or more fuses and switched coupling of one or more circuit components to the antifuse. In another embodiment, multiple antifuses, variously disposed in or on the substrate, are configured each to be activated based on the voltage provided at an input node.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventors: John J. Browne, Andrew Maclean, Shawna M. Liff
  • Publication number: 20190042739
    Abstract: Technologies for cache side channel attack detection and mitigation include an analytics server and one or more monitored computing devices. The analytics server polls each computing device for analytics counter data. The computing device generates the analytics counter data using a resource manager of a processor of the computing device. The analytics counter data may include last-level cache data or memory bandwidth data. The analytics server identifies suspicious core activity based on the analytics counter data and, if identified, deploys a detection process to the computing device. The computing device executes the detection process to identify suspicious application activity. If identified, the computing device may perform one or more corrective actions. Corrective actions include limiting resource usage by a suspicious process using the resource manager of the processor. The resource manager may limit cache occupancy or memory bandwidth used by the suspicious process.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John J. Browne, Marcel Cornu, Timothy Verrall, Tomasz Kantecki, Niall Power, Weigang Li, Eoin Walsh, Maryam Tahhan
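A very small sketch of the kind of threshold heuristic the analytics server might apply to per-core counters; the specific counters, limits, and the prime-and-probe interpretation below are assumptions of the example, not the patented analytics.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Per-core analytics counters of the kind a processor resource manager
 * (e.g. cache monitoring technology) can report; units are illustrative. */
struct core_counters {
    uint64_t llc_occupancy_kb;   /* last-level cache occupancy  */
    uint64_t mem_bw_mbps;        /* memory bandwidth in use     */
    uint64_t llc_misses;         /* misses in the sample window */
};

/* Crude heuristic: flag a core that keeps a large cache footprint while
 * generating an unusually high miss count, a pattern consistent with
 * prime-and-probe style scanning. */
static bool core_looks_suspicious(const struct core_counters *c)
{
    const uint64_t occupancy_limit = 8 * 1024;        /* 8 MB footprint */
    const uint64_t miss_limit      = 5 * 1000 * 1000;
    return c->llc_occupancy_kb > occupancy_limit && c->llc_misses > miss_limit;
}

int main(void)
{
    struct core_counters c = { .llc_occupancy_kb = 12 * 1024,
                               .mem_bw_mbps = 400,
                               .llc_misses = 9 * 1000 * 1000 };
    printf("suspicious: %d\n", core_looks_suspicious(&c));   /* prints 1 */
    return 0;
}
```
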
  • Publication number: 20190044893
    Abstract: Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from the descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
    Type: Application
    Filed: June 30, 2018
    Publication date: February 7, 2019
    Inventors: Bruce Richardson, Chris MacNamara, Patrick Fleming, Tomasz Kantecki, Ciara Loftus, John J. Browne, Patrick Connor
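The small-versus-large buffer choice reduces to a comparison of the packet size against the preferred buffer size. A minimal C sketch follows, assuming a descriptor that carries one pre-posted buffer pointer of each class; the sizes and layout are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define SMALL_BUF_SIZE 256
#define LARGE_BUF_SIZE 2048

/* Hypothetical RX descriptor carrying one buffer pointer of each class. */
struct rx_descriptor {
    uint64_t small_buf_addr;   /* physical address of a pre-posted small buffer */
    uint64_t large_buf_addr;   /* physical address of a pre-posted large buffer */
};

/* Pick the buffer the packet should land in: a small packet goes to the small
 * buffer (better cache behaviour), anything larger goes to the large buffer. */
static uint64_t select_buffer(const struct rx_descriptor *d, uint16_t pkt_len)
{
    return (pkt_len <= SMALL_BUF_SIZE) ? d->small_buf_addr : d->large_buf_addr;
}

int main(void)
{
    struct rx_descriptor d = { .small_buf_addr = 0x1000, .large_buf_addr = 0x2000 };
    printf("64B   -> %#llx\n", (unsigned long long)select_buffer(&d, 64));
    printf("1500B -> %#llx\n", (unsigned long long)select_buffer(&d, 1500));
    return 0;
}
```
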
  • Publication number: 20190042314
    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to partition a resource into a plurality of partitions and allocate a reserved portion and a corresponding burst portion in each of the plurality of partitions. Each of the allocated reserved portions and corresponding burst portions are reserved for a specific component or application, where any part of the allocated burst portion not being used by the specific component or application can be used by other components and/or applications.
    Type: Application
    Filed: January 12, 2018
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Timothy Verrall, John J. Browne, Tomasz Kantecki, Maryam Tahhan, Eoin Walsh, Andrew Duignan, Alan Carey, Wojciech Andralojc, Damien Power, Tarun Viswanathan
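The reserved-plus-burst partitioning can be sketched as a pair of allocation checks: the owner may use its reserved units plus any unlent burst units, while other components may borrow only burst units the owner is not using. Preemption of lent units is omitted from this illustration, and the unit counts are arbitrary.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One partition of a shared resource (cache ways, bandwidth credits, ...):
 * the reserved portion is guaranteed to its owner, the burst portion belongs
 * to the owner but may be lent out while the owner is not using it. */
struct partition {
    uint32_t reserved;       /* units guaranteed to the owner        */
    uint32_t burst;          /* extra units the owner may burst into */
    uint32_t owner_in_use;   /* units currently used by the owner    */
    uint32_t borrowed;       /* burst units currently lent to others */
};

/* Owner allocation: reserved units plus any burst units not currently lent. */
static bool owner_alloc(struct partition *p, uint32_t units)
{
    uint32_t ceiling = p->reserved + p->burst - p->borrowed;
    if (p->owner_in_use + units > ceiling)
        return false;
    p->owner_in_use += units;
    return true;
}

/* Another component may borrow only burst units the owner is not using. */
static bool other_alloc(struct partition *p, uint32_t units)
{
    uint32_t owner_burst_used =
        (p->owner_in_use > p->reserved) ? p->owner_in_use - p->reserved : 0;
    uint32_t free_burst = p->burst - owner_burst_used - p->borrowed;
    if (units > free_burst)
        return false;
    p->borrowed += units;
    return true;
}

int main(void)
{
    struct partition p = { .reserved = 8, .burst = 4 };
    printf("owner asks 10: %d\n", owner_alloc(&p, 10));  /* granted: within 8+4 */
    printf("other asks 3:  %d\n", other_alloc(&p, 3));   /* denied: only 2 burst units free */
    return 0;
}
```
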
  • Publication number: 20190044812
    Abstract: Technologies for dynamically selecting resources for virtual switching include a network appliance configured to identify a present demand on processing resources of the network appliance that are configured to process data associated with network packets received by the network appliance. Additionally, the network appliance is configured to determine a present capacity of one or more acceleration resources of the network appliance and determine a virtual switch operation mode based on the present demand and the present capacity of the acceleration resources, wherein the virtual switch operation mode indicates which of the acceleration resources are to be enabled. The network appliance is additionally configured to configure a virtual switch of the network appliance to operate as a function of the determined virtual switch operation mode and assign acceleration resources of the network appliance as a function of the determined virtual switch operation mode. Other embodiments are described herein.
    Type: Application
    Filed: September 13, 2018
    Publication date: February 7, 2019
    Inventors: Ciara Loftus, Chris MacNamara, John J. Browne, Patrick Fleming, Tomasz Kantecki, John Barry, Patrick Connor
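A simplified decision function for the virtual switch operation mode, keyed on present CPU demand and spare accelerator capacity; the thresholds and mode names are assumptions of the sketch, not taken from the application.

```c
#include <stdio.h>

/* Possible virtual switch operation modes, in increasing order of offload. */
enum vswitch_mode {
    VSWITCH_SOFTWARE_ONLY,     /* all switching on general-purpose cores     */
    VSWITCH_PARTIAL_OFFLOAD,   /* hot flows offloaded to an accelerator      */
    VSWITCH_FULL_OFFLOAD       /* switching handled entirely by accelerators */
};

/* Pick a mode from the present demand on CPU packet-processing resources and
 * the present spare capacity of the acceleration resources (both percent). */
static enum vswitch_mode select_mode(unsigned cpu_demand_pct,
                                     unsigned accel_free_pct)
{
    if (cpu_demand_pct > 85 && accel_free_pct > 50)
        return VSWITCH_FULL_OFFLOAD;
    if (cpu_demand_pct > 60 && accel_free_pct > 20)
        return VSWITCH_PARTIAL_OFFLOAD;
    return VSWITCH_SOFTWARE_ONLY;
}

int main(void)
{
    printf("mode = %d\n", select_mode(90, 70));   /* -> full offload */
    return 0;
}
```
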
  • Publication number: 20190044799
    Abstract: Technologies for hot-swapping a legacy network appliance with a network functions virtualization (NFV) appliance include a migration management compute device configured to establish a secure connection with the legacy network appliance and retrieve configuration information and operational parameters of the legacy network appliance via the established secure connection. The migration management compute device is further configured to deploy a VNF instance on the NFV appliance based on the configuration information and operational parameters, and perform a hot-swap operation to re-route network traffic from the legacy network appliance to the NFV appliance. Other embodiments are described herein.
    Type: Application
    Filed: June 29, 2018
    Publication date: February 7, 2019
    Inventors: John J. Browne, Michael McGrath, Chris MacNamara
  • Patent number: 10200410
    Abstract: A round-robin network security system implemented by a number of peer devices included in a plurality of networked peer devices. The round-robin security system permits the rotation of the system security controller among at least a portion of the peer devices. Each of the peer devices uses a defined trust assessment ruleset to determine whether the system security controller is trusted/trustworthy. An untrusted system security controller peer device is replaced by another of the peer devices selected by the peer devices. The current system security controller peer device transfers system threat information and security risk information collected from the peer devices to the new system security controller elected by the peer devices.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Michael Hingston McLaughlin Bursell, Stephen T. Palermo, Chris MacNamara, Pierre Laurent, John J. Browne
  • Publication number: 20190004922
    Abstract: A method for monitoring health of processes includes a compute device having a performance monitoring parameter manager and an analytics engine. The compute device accesses performance monitoring parameters associated with a monitored process of the compute device. The compute device samples one or more hardware counters associated with the monitored process and applies a performance monitor filter to the sampled one or more hardware counters to generate hardware counter values. The compute device performs a process fault check on the monitored process based on the hardware counter values and the performance monitoring parameters.
    Type: Application
    Filed: June 29, 2017
    Publication date: January 3, 2019
    Inventors: John J. Browne, Tomasz Kantecki, Wojciech Andralojc, Timothy Verrall, Maryam Tahhan, Eoin Walsh, Damien Power, Chris Macnamara
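The filter-then-check flow can be illustrated with an IPC-based health check: smooth a sampled instructions-per-cycle value and compare it against a configured floor. The EWMA filter and the choice of IPC as the metric are assumptions of this sketch, not requirements of the application.

```c
#include <stdbool.h>
#include <stdio.h>

/* Performance-monitoring parameters configured for one monitored process. */
struct pm_params {
    double ipc_floor;   /* fault if filtered instructions-per-cycle drops below this */
    double alpha;       /* smoothing factor of the performance-monitor filter        */
};

/* Filtered counter state for the monitored process. */
struct pm_state { double ipc_filtered; };

/* Apply an exponentially weighted moving-average filter to a new IPC sample
 * (derived from sampled instructions-retired and cycle counters), then run
 * the fault check against the configured floor. */
static bool process_fault_check(struct pm_state *s, const struct pm_params *p,
                                unsigned long long instructions,
                                unsigned long long cycles)
{
    double ipc = cycles ? (double)instructions / (double)cycles : 0.0;
    s->ipc_filtered = p->alpha * ipc + (1.0 - p->alpha) * s->ipc_filtered;
    return s->ipc_filtered < p->ipc_floor;   /* true => process looks faulty */
}

int main(void)
{
    struct pm_params params = { .ipc_floor = 0.5, .alpha = 0.2 };
    struct pm_state  state  = { .ipc_filtered = 1.0 };

    /* A run of samples where the process stalls (few instructions per cycle). */
    for (int i = 0; i < 20; i++)
        if (process_fault_check(&state, &params, 1000, 10000))
            printf("sample %d: fault suspected (IPC %.2f)\n", i, state.ipc_filtered);
    return 0;
}
```
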
  • Publication number: 20190007349
    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Ren Wang, Mia Primorac, Tsung-Yuan C. Tai, Saikrishna Edupuganti, John J. Browne
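A minimal control loop for the batch-size adjustment described above, assuming throughput is measured in millions of packets per second and using an illustrative double-or-halve policy; the bounds and policy are not taken from the application.

```c
#include <stdio.h>

#define BATCH_MIN 4
#define BATCH_MAX 256

/* Adjust the batch size (the threshold number of queued packets that must be
 * present before the queue is processed) from the measured throughput. */
static unsigned adjust_batch_size(unsigned batch, double throughput_mpps,
                                  double target_mpps)
{
    if (throughput_mpps < target_mpps && batch < BATCH_MAX)
        batch *= 2;                    /* amortize per-batch overhead    */
    else if (throughput_mpps >= target_mpps && batch > BATCH_MIN)
        batch /= 2;                    /* shrink again to reduce latency */

    if (batch > BATCH_MAX) batch = BATCH_MAX;
    if (batch < BATCH_MIN) batch = BATCH_MIN;
    return batch;
}

int main(void)
{
    unsigned batch = 32;
    batch = adjust_batch_size(batch, 8.0, 10.0);   /* below target -> 64 */
    printf("new batch size = %u\n", batch);
    return 0;
}
```
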
  • Publication number: 20190007330
    Abstract: Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel.
    Type: Application
    Filed: June 28, 2017
    Publication date: January 3, 2019
    Inventors: John J. Browne, Tomasz Kantecki, Chris Macnamara, Pierre Laurent, Sean Harte, Peter McCarthy, Jacqueline F. Jardim, Liang Ma
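The shared rings in production code would typically be multi-producer/multi-consumer structures such as DPDK's rte_ring; the self-contained C11 sketch below shows a deliberately simpler single-producer/single-consumer lockless ring to illustrate the idea of handing packets between the classification and scheduling stages without locks.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 1024                      /* must be a power of two */

/* Minimal single-producer/single-consumer lockless ring. */
struct spsc_ring {
    void *slots[RING_SIZE];
    _Atomic size_t head;                    /* next slot to fill  (producer) */
    _Atomic size_t tail;                    /* next slot to drain (consumer) */
};

static bool ring_enqueue(struct spsc_ring *r, void *pkt)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return false;                       /* ring full */
    r->slots[head & (RING_SIZE - 1)] = pkt;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return true;
}

static void *ring_dequeue(struct spsc_ring *r)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return NULL;                        /* ring empty */
    void *pkt = r->slots[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return pkt;
}

int main(void)
{
    static struct spsc_ring ring;
    int pkt = 42;
    ring_enqueue(&ring, &pkt);
    printf("dequeued %d\n", *(int *)ring_dequeue(&ring));
    return 0;
}
```
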
  • Publication number: 20180357099
    Abstract: Particular embodiments described herein provide for a network element that can be configured to determine a pre-execution performance test, where the pre-execution performance test is at least partially based on requirements for a process to be executed, cause the pre-execution performance test to be executed on a platform before the process is executed on the platform, where the platform is a dynamically allocated group of resources, analyze results of the pre-execution performance test, and cause the process to be executed on the platform if the results of the pre-execution performance test satisfy a condition. In an example, the process is a virtual network function.
    Type: Application
    Filed: June 8, 2017
    Publication date: December 13, 2018
    Applicant: Intel Corporation
    Inventors: John J. Browne, Tomasz Kantecki, Eoin Walsh, Maryam Tahhan, Timothy Verrall, Tarun Viswanathan, Rory Browne
  • Publication number: 20180335824
    Abstract: In an example, there is disclosed a demand scaling engine, including: a processor interface to communicatively couple to a processor; a network controller interface to communicatively couple to a network controller and to receive network demand data; a scaleup criterion; a current processor frequency scale datum; and logic, provided at least partly in hardware, to: receive the network demand data; compare the network demand data to the scaleup criterion; determine that the network demand data exceeds the scaleup criterion; and instruct the processor via the processor interface to scaleup processor frequency.
    Type: Application
    Filed: May 22, 2017
    Publication date: November 22, 2018
    Applicant: Intel Corporation
    Inventors: Christopher MacNamara, John J. Browne, William J. Bowhill, Christopher Nolan, Nemanja Marjanovic, Rory Sexton, Padraic Agnew, Colin Hanily
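The scale-up path reduces to comparing the network demand data against the scale-up criterion and stepping the frequency when the criterion is exceeded. In the C sketch below the structure layout and step policy are assumptions, and actually applying the new frequency (for example through a P-state interface) is left out.

```c
#include <stdbool.h>
#include <stdio.h>

/* Demand scaling state: scale-up criterion plus the current frequency datum. */
struct demand_scaler {
    unsigned scaleup_threshold_mpps;   /* the scale-up criterion            */
    unsigned current_freq_mhz;         /* current processor frequency datum */
    unsigned max_freq_mhz;
    unsigned step_mhz;
};

/* Compare demand data received from the network controller to the criterion
 * and, if it is exceeded, request a higher core frequency. */
static bool maybe_scale_up(struct demand_scaler *s, unsigned demand_mpps)
{
    if (demand_mpps <= s->scaleup_threshold_mpps)
        return false;                              /* criterion not exceeded */
    if (s->current_freq_mhz >= s->max_freq_mhz)
        return false;                              /* already at maximum     */
    s->current_freq_mhz += s->step_mhz;            /* instruct the processor */
    if (s->current_freq_mhz > s->max_freq_mhz)
        s->current_freq_mhz = s->max_freq_mhz;
    return true;
}

int main(void)
{
    struct demand_scaler s = { 10, 1800, 3000, 300 };
    if (maybe_scale_up(&s, 14))
        printf("scaled up to %u MHz\n", s.current_freq_mhz);
    return 0;
}
```
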
  • Publication number: 20180331960
    Abstract: A fabric interface, including: an ingress port to receive incoming network traffic; a host interface to forward the incoming network traffic to a host; and a virtualization-aware overload protection engine including: an overload detector to detect an overload condition on the incoming network traffic; a packet inspector to inspect packets of the incoming network traffic; and a prioritizer to identify low priority packets to be dropped, and high priority packets to be forwarded to the host.
    Type: Application
    Filed: May 15, 2017
    Publication date: November 15, 2018
    Inventors: John J. Browne, Chris MacNamara, Ronen Chayat
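A toy prioritizer consistent with the abstract above: once the overload detector has tripped, forward only packets whose inspected marking meets a protected class. Using the DSCP field as the priority signal, and the specific threshold, are assumptions of the example.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal packet view used by the hypothetical prioritizer. */
struct pkt_view { uint8_t dscp; uint16_t len; };

/* Under overload, keep only packets marked at or above a protected class;
 * everything else is dropped before it reaches the host. */
static bool should_forward(const struct pkt_view *p, bool overloaded)
{
    const uint8_t protected_dscp = 46;      /* e.g. expedited forwarding  */
    if (!overloaded)
        return true;                        /* no overload: forward all   */
    return p->dscp >= protected_dscp;       /* overload: high priority only */
}

int main(void)
{
    struct pkt_view voice = { .dscp = 46, .len = 200 };
    struct pkt_view bulk  = { .dscp = 0,  .len = 1500 };
    printf("voice forwarded: %d\n", should_forward(&voice, true));
    printf("bulk  forwarded: %d\n", should_forward(&bulk,  true));
    return 0;
}
```
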
  • Publication number: 20180287911
    Abstract: Particular embodiments described herein provide for a network element that can be configured to receive a link monitoring message, determine one or more resources associated with the link monitoring message, determine a status of each of the one or more resources, and send a response that provides an indication of the status of each of the one or more resources. In an example, the link monitoring message is part of a bidirectional forwarding detection packet.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Applicant: Intel Corporation
    Inventors: Mark D. Gray, John J. Browne, Maryam Tahhan