Patents by Inventor John J. Browne

John J. Browne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250110903
    Abstract: A hardware accelerator device is provided with accelerator hardware to perform dictionary compression in hardware, based on a request to compress data for an application executed by a processor device coupled to the hardware accelerator device.
    Type: Application
    Filed: December 12, 2024
    Publication date: April 3, 2025
    Applicant: Intel Corporation
    Inventors: Dongsheng Liang, Junyuan Wang, Xiaoyan Bo, Yuze Xiao, Haoxiang Sun, Weigang Li, Marian Horgan, Fei Wang, John J. Browne, Laurent Coquerel, Giovanni Cabiddu, Vijay Sundar Selvamani, Steven Linsell, Karthikeyan Gopal, Deepika Ranganatha
  • Publication number: 20250103519
    Abstract: Apparatuses, methods, and computer readable media for regulating command submission to a shared device. A processor may receive a command for an operation to be performed by another device. The processor may determine an identifier of an address space of a process associated with the command. The processor may determine whether to accept or reject the command.
    Type: Application
    Filed: June 14, 2022
    Publication date: March 27, 2025
    Applicant: Intel Corporation
    Inventors: Junyuan Wang, John J. Browne, Maksim Lukoshkov, Xin Zeng, Tomasz Kantecki, Weigang Li, Wenqian Yu
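    A rough way to picture the command-regulation idea in this publication: the processor extracts an address-space identifier from the submitting process and admits or rejects the command against a per-device table. The sketch below is illustrative only; the PASID-style integer identifier, the allow table, and all names are assumptions rather than the publication's actual design.
```c
/* Illustrative sketch only: admit or reject a command for a shared device
 * based on the address-space identifier (PASID-style) of the submitting
 * process. The allow table and identifiers are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ALLOWED 8

struct device_admission {
    unsigned allowed_pasids[MAX_ALLOWED];
    int count;
};

static bool admit_command(const struct device_admission *adm, unsigned pasid)
{
    for (int i = 0; i < adm->count; i++)
        if (adm->allowed_pasids[i] == pasid)
            return true;            /* accept: this address space may use the device */
    return false;                   /* reject: unknown address space */
}

int main(void)
{
    struct device_admission adm = { .allowed_pasids = { 0x10, 0x2a }, .count = 2 };
    printf("pasid 0x2a: %s\n", admit_command(&adm, 0x2a) ? "accepted" : "rejected");
    printf("pasid 0x99: %s\n", admit_command(&adm, 0x99) ? "accepted" : "rejected");
    return 0;
}
```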
  • Publication number: 20250045097
    Abstract: An apparatus, method, and computer-readable medium for configuring microservices in a networked system. The method comprises monitoring one or more metrics of the networked system, determining a topology of the networked system based on the one or more metrics, and configuring the microservices based on the topology.
    Type: Application
    Filed: October 25, 2024
    Publication date: February 6, 2025
    Inventors: Marcos E. CARRANZA, Francesc GUIM BERNAT, Rajesh POORNACHANDRAN, John J. BROWNE, Stephen T. PALERMO
  • Publication number: 20250013493
    Abstract: Examples described herein relate to circuitry to: monitor utilization data for a plurality of processes; determine one or more priority levels associated with at least one of the plurality of processes based on policy parameters; and adjust a frequency of operation of the interface circuitry based on the monitored utilization data and the determined priority levels of the processes. In some examples, adjusting the frequency of operation of the interface circuitry prioritizes the frequency of operation requested by a higher-priority workload over the frequency of operation requested by a lower-priority workload.
    Type: Application
    Filed: September 18, 2024
    Publication date: January 9, 2025
    Inventors: Chris MACNAMARA, John J. BROWNE, Nilanjan PALIT, Chetan HIREMATH, Rory SEXTON, Conor WALSH, Kevin LAATZ, Andriy GLUSTSOV, Peter McCARTHY, Katelyn DONNELLAN, Vishal DEEP AJMERA, David HUNT, Gordon NOONAN
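    One plausible reading of the frequency-adjustment step above is "honor the highest-priority request": the interface frequency follows whatever the highest-priority active workload asked for. The sketch below works only under that assumption; the structures, priority values, and selection rule are invented for illustration.
```c
/* Illustrative sketch: pick an interface frequency that honors the request
 * of the highest-priority workload; lower-priority requests are ignored when
 * they conflict. Structures and values are hypothetical. */
#include <stdio.h>

struct workload_req {
    int priority;        /* higher value = higher priority */
    unsigned freq_mhz;   /* requested interface frequency */
};

static unsigned select_interface_freq(const struct workload_req *reqs, int n)
{
    int best_prio = -1;
    unsigned freq = 0;
    for (int i = 0; i < n; i++) {
        if (reqs[i].priority > best_prio ||
            (reqs[i].priority == best_prio && reqs[i].freq_mhz > freq)) {
            best_prio = reqs[i].priority;
            freq = reqs[i].freq_mhz;
        }
    }
    return freq;
}

int main(void)
{
    struct workload_req reqs[] = { {1, 800}, {3, 1200}, {2, 2000} };
    printf("selected interface frequency: %u MHz\n",
           select_interface_freq(reqs, 3));   /* 1200: highest-priority request wins */
    return 0;
}
```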
  • Publication number: 20250013507
    Abstract: Techniques for computer power management are disclosed. In one embodiment, a data center includes several compute nodes and a power management node. Power telemetry data is gathered at each of the compute nodes and sent to the power management node. The power management node analyzes the telemetry data, such as by applying filtering to identify certain metrics. The power management node may use rules to analyze the telemetry data and determine whether power management actions should be performed. The power management node may instruct the compute node to, e.g., change a power state of a processor or processor core. In some embodiments, cores may be managed by an orchestrator, and the orchestrator may identify cores to be placed in high-power and low-power states, as appropriate.
    Type: Application
    Filed: September 20, 2024
    Publication date: January 9, 2025
    Applicant: Intel Corporation
    Inventors: Chris M. MacNamara, John J. Browne, Przemyslaw J. Perycz, Pawel S. Zak, Reshma Pattan
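    As a toy version of the rule-based analysis described above, the power management node can map a filtered utilization metric to a power action through simple thresholds. The thresholds, metric, and action names below are hypothetical, not taken from the publication.
```c
/* Illustrative sketch: a power-management node applies a simple rule to
 * filtered telemetry from a compute node and decides whether to request a
 * different core power state. Thresholds and names are hypothetical. */
#include <stdio.h>

struct core_telemetry {
    int core_id;
    double utilization_pct;   /* filtered utilization metric */
};

enum power_action { NO_ACTION, ENTER_LOW_POWER, ENTER_HIGH_POWER };

static enum power_action apply_rule(const struct core_telemetry *t)
{
    if (t->utilization_pct < 10.0)
        return ENTER_LOW_POWER;    /* near-idle core: park it or drop frequency */
    if (t->utilization_pct > 85.0)
        return ENTER_HIGH_POWER;   /* busy core: raise frequency */
    return NO_ACTION;
}

int main(void)
{
    struct core_telemetry t = { .core_id = 4, .utilization_pct = 6.5 };
    const char *names[] = { "no action", "enter low-power state", "enter high-power state" };
    printf("core %d: %s\n", t.core_id, names[apply_rule(&t)]);
    return 0;
}
```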
  • Publication number: 20250004495
    Abstract: A processing device includes a plurality of processing cores, a control register, associated with a first processing core of the plurality of processing cores, to store a first base clock frequency value at which the first processing core is to run, and a power management circuit to receive a base clock frequency request comprising a second base clock frequency value, store the second base clock frequency value in the control register to cause the first processing core to run at the second base clock frequency value, and expose the second base clock frequency value on a hardware interface associated with the power management circuit.
    Type: Application
    Filed: July 5, 2024
    Publication date: January 2, 2025
    Inventors: Vasudevan Srinivasan, Krishnakanth V. Sistla, Corey D. Gough, Ian M. Steiner, Nikhil Gupta, Vivek Garg, Ankush Varma, Sujal A. Vora, David P. Lerner, Joseph M. Sullivan, Nagasubramanian Gurumoorthy, William J. Bowhill, Venkatesh Ramamurthy, Chris MacNamara, John J. Browne, Ripan Das
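    The register flow in this abstract (store a requested base clock frequency in a per-core control register, then expose it for readback) can be modelled in a few lines. The struct and field below are stand-ins, not the actual control-register layout.
```c
/* Illustrative sketch: model of a per-core control register that holds the
 * base clock frequency, with a power-management routine that stores a
 * requested value and exposes it for readback. Layout is hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct core_ctrl {
    uint32_t base_clock_mhz;   /* control register: frequency the core runs at */
};

/* Handle a base-clock-frequency request: store it so the core adopts it. */
static void set_base_clock(struct core_ctrl *reg, uint32_t requested_mhz)
{
    reg->base_clock_mhz = requested_mhz;
}

/* Value exposed on the hardware interface associated with power management. */
static uint32_t read_base_clock(const struct core_ctrl *reg)
{
    return reg->base_clock_mhz;
}

int main(void)
{
    struct core_ctrl core0 = { .base_clock_mhz = 2400 };
    set_base_clock(&core0, 2800);
    printf("core 0 base clock: %u MHz\n", read_base_clock(&core0));
    return 0;
}
```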
  • Patent number: 12177277
    Abstract: In one embodiment, a system includes a device and a host. The device includes a device stream buffer. The host includes a processor to execute at least a first application and a second application, a host stream buffer, and a host scheduler. The first application is associated with a first transmit streaming channel to stream first data from the first application to the device stream buffer. The first transmit streaming channel has a first allocated amount of buffer space in the device stream buffer. The host scheduler schedules enqueue of the first data from the first application to the first transmit streaming channel based at least in part on availability of space in the first allocated amount of buffer space in the device stream buffer. Other embodiments are described and claimed.
    Type: Grant
    Filed: May 6, 2021
    Date of Patent: December 24, 2024
    Assignee: Intel Corporation
    Inventors: Lokpraveen Mosur, Ilango Ganga, Robert Cone, Kshitij Arun Doshi, John J. Browne, Mark Debbage, Stephen Doyle, Patrick Fleming, Doddaballapur Jayasimha
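    The space-availability check that gates enqueue to a transmit streaming channel behaves like simple credit accounting. The sketch below assumes byte-granular accounting and invented field names; the real channel bookkeeping is not specified in the abstract.
```c
/* Illustrative sketch: a host scheduler only enqueues data to a transmit
 * streaming channel when the channel's allocated space in the device stream
 * buffer can hold it. Sizes and names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct tx_channel {
    unsigned allocated_bytes;   /* space reserved in the device stream buffer */
    unsigned in_flight_bytes;   /* data streamed but not yet drained by the device */
};

static bool try_enqueue(struct tx_channel *ch, unsigned len)
{
    if (ch->in_flight_bytes + len > ch->allocated_bytes)
        return false;                      /* no room: scheduler defers the enqueue */
    ch->in_flight_bytes += len;            /* data handed to the streaming channel */
    return true;
}

static void on_device_drain(struct tx_channel *ch, unsigned len)
{
    ch->in_flight_bytes -= len;            /* device consumed data, space freed */
}

int main(void)
{
    struct tx_channel ch = { .allocated_bytes = 4096, .in_flight_bytes = 0 };
    printf("enqueue 3000: %d\n", try_enqueue(&ch, 3000));  /* 1: fits */
    printf("enqueue 2000: %d\n", try_enqueue(&ch, 2000));  /* 0: would overflow */
    on_device_drain(&ch, 3000);
    printf("enqueue 2000: %d\n", try_enqueue(&ch, 2000));  /* 1: space freed */
    return 0;
}
```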
  • Patent number: 12160368
    Abstract: Examples described herein relate to a device configured to allocate memory resources for packets received by the network interface based on received configuration settings. In some examples, the device is a network interface. Received configuration settings can include one or more of: latency, memory bandwidth, timing of when the content is expected to be accessed, or encryption parameters. In some examples, memory resources include one or more of: a cache, a volatile memory device, a storage device, or persistent memory. In some examples, based on a configuration setting not being available, the network interface is to perform one or more of: dropping a received packet, storing the received packet in a buffer that does not meet the configuration settings, or indicating an error. In some examples, configuration settings are conditional, where the settings are applied if one or more conditions are met.
    Type: Grant
    Filed: April 27, 2020
    Date of Patent: December 3, 2024
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Patrick Connor, Patrick G. Kutch, John J. Browne, Alexander Bachmutsky
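    A minimal sketch of configuration-driven placement with the fallback behaviour the abstract lists: try each memory resource against a latency bound and fall back (here, to a drop) when nothing qualifies. The resource latencies and the single latency setting are assumptions made for illustration.
```c
/* Illustrative sketch: place a received packet according to a configuration
 * setting (a latency bound); if no resource satisfies it, fall back, as the
 * abstract describes. The resource table and values are hypothetical. */
#include <stdio.h>

enum placement { PLACE_CACHE, PLACE_DRAM, PLACE_PMEM, FALLBACK_DROP };

struct config { unsigned max_latency_ns; };

static enum placement place_packet(const struct config *cfg)
{
    /* Assumed access latencies: cache, DRAM, persistent memory. */
    const unsigned latency_ns[] = { 40, 120, 400 };
    for (int i = 0; i < 3; i++)
        if (latency_ns[i] <= cfg->max_latency_ns)
            return (enum placement)i;        /* first resource meeting the bound */
    return FALLBACK_DROP;                    /* nothing qualifies */
}

int main(void)
{
    struct config strict = { .max_latency_ns = 50 };
    struct config loose  = { .max_latency_ns = 500 };
    struct config none   = { .max_latency_ns = 10 };
    printf("strict -> %d, loose -> %d, impossible -> %d\n",
           place_packet(&strict), place_packet(&loose), place_packet(&none));
    return 0;
}
```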
  • Patent number: 12132825
    Abstract: Technologies for accelerated key caching in an edge hierarchy include multiple edge appliance devices organized in tiers. An edge appliance device receives a request for a key, such as a private key. The edge appliance device determines whether the key is included in a local key cache and, if not, requests the key from an edge appliance device included in an inner tier of the edge hierarchy. The edge appliance device may request the key from an edge appliance device included in a peer tier of the edge hierarchy. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys in the key cache for eviction. The edge appliance device may activate per-tenant accelerated logic to identify one or more keys for pre-fetching. Those functions of the edge appliance device may be performed by an accelerator such as an FPGA. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 23, 2021
    Date of Patent: October 29, 2024
    Assignee: Intel Corporation
    Inventors: Timothy Verrall, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar, Ned M. Smith, Rajesh Poornachandran, Kapil Sood, Tarun Viswanathan, John J. Browne, Patrick Kutch
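    The tiered lookup described above (local key cache first, inner tier on a miss) can be sketched as a linked chain of caches. The tier structure, key identifiers, and depth-based return value below are illustrative; the patent's per-tenant accelerated logic is not reproduced.
```c
/* Illustrative sketch: an edge appliance checks its local key cache first and
 * only asks the next (inner) tier on a miss. The tier chain and key store are
 * hypothetical stand-ins. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 4

struct tier {
    const char *keys[CACHE_SLOTS];   /* locally cached key identifiers */
    struct tier *inner;              /* next tier toward the core, NULL at the root */
};

/* Returns the tier depth that served the key, or -1 if no tier holds it. */
static int lookup_key(const struct tier *t, const char *key_id, int depth)
{
    if (!t)
        return -1;
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (t->keys[i] && strcmp(t->keys[i], key_id) == 0)
            return depth;                            /* local hit */
    return lookup_key(t->inner, key_id, depth + 1);  /* miss: ask the inner tier */
}

int main(void)
{
    struct tier core = { .keys = { "tenant-a/priv" }, .inner = NULL };
    struct tier edge = { .keys = { "tenant-b/priv" }, .inner = &core };
    printf("tenant-b/priv served at depth %d\n", lookup_key(&edge, "tenant-b/priv", 0));
    printf("tenant-a/priv served at depth %d\n", lookup_key(&edge, "tenant-a/priv", 0));
    printf("tenant-c/priv served at depth %d\n", lookup_key(&edge, "tenant-c/priv", 0));
    return 0;
}
```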
  • Publication number: 20240353915
    Abstract: Systems, apparatus, articles of manufacture, and methods are disclosed to perform dynamic function control. An example apparatus includes interface circuitry, machine-readable instructions, and at least one processor circuit to be programmed by the machine-readable instructions to parse a packet for a function directive, activate a function associated with the function directive based on a type of the function directive being associated with an activation instruction, disable the function associated with the function directive based on the type of the function directive being associated with a deactivation instruction, and publish an active function list (AFL) and a passive function list (PFL) based on the type of the function directive.
    Type: Application
    Filed: July 3, 2024
    Publication date: October 24, 2024
    Inventors: Akhilesh S. Thyagaturu, Francesc Guim Bernat, Karthik Kumar, Stephen Thomas Palermo, John J. Browne
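    A compact illustration of the directive handling described above: a parsed directive flips a function between active and passive, and the two lists are then published. The directive encoding and function names are hypothetical.
```c
/* Illustrative sketch: apply an activation or deactivation directive to a
 * named function, then publish the active function list (AFL) and passive
 * function list (PFL). Encoding and names are hypothetical. */
#include <stdio.h>
#include <string.h>

enum directive_type { DIRECTIVE_ACTIVATE, DIRECTIVE_DEACTIVATE };

struct function_entry { const char *name; int active; };

static void apply_directive(struct function_entry *funcs, int n,
                            const char *name, enum directive_type type)
{
    for (int i = 0; i < n; i++)
        if (strcmp(funcs[i].name, name) == 0)
            funcs[i].active = (type == DIRECTIVE_ACTIVATE);
}

static void publish_lists(const struct function_entry *funcs, int n)
{
    printf("AFL:");
    for (int i = 0; i < n; i++) if (funcs[i].active)  printf(" %s", funcs[i].name);
    printf("\nPFL:");
    for (int i = 0; i < n; i++) if (!funcs[i].active) printf(" %s", funcs[i].name);
    printf("\n");
}

int main(void)
{
    struct function_entry funcs[] = { { "crypto", 1 }, { "compress", 0 } };
    apply_directive(funcs, 2, "compress", DIRECTIVE_ACTIVATE);
    apply_directive(funcs, 2, "crypto", DIRECTIVE_DEACTIVATE);
    publish_lists(funcs, 2);
    return 0;
}
```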
  • Publication number: 20240334245
    Abstract: Examples described herein relate to a network interface device that performs: offloading processing of fragments of a packet to an accelerator; processing non-fragmented packets; and prioritizing dropping of fragments of the packet over dropping of non-fragmented packets. Offloading processing of fragments of the packet to the accelerator can include: the accelerator performing: reassembling the fragments of the packet into a first reassembly packet; and based on congestion associated with at least one of the fragments of the packet of the first reassembly packet: dropping fragments of the first reassembly packet associated with one or more flows; halting reassembly of the first reassembly packet; and forwarding a second packet to a host system, wherein the second packet indicates that congestion occurred, identifies one or more impacted flows, and indicates a number of dropped packet fragments.
    Type: Application
    Filed: June 7, 2024
    Publication date: October 3, 2024
    Inventors: John J. BROWNE, Andrey CHILIKIN, Elazar COHEN, Joseph HASTING, James CLEE, Jerry PIROG, Jamison D. WHITESELL, Ambalavanar ARULAMBALAM, Anjali Singhai JAIN, Andrew CUNNINGHAM, Ruben DAHAN
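    The drop prioritisation in this abstract (shed fragments before whole packets under congestion) reduces to a small policy function. The packet representation and congestion flag below are assumptions for illustration.
```c
/* Illustrative sketch: under congestion, fragments are dropped before whole
 * (non-fragmented) packets. The packet representation is hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct pkt { int id; bool is_fragment; };

/* Returns true if the packet should be dropped given the congestion state. */
static bool should_drop(const struct pkt *p, bool congested)
{
    if (!congested)
        return false;          /* no pressure: keep everything */
    return p->is_fragment;     /* congested: shed fragments first */
}

int main(void)
{
    struct pkt pkts[] = { { 1, true }, { 2, false }, { 3, true } };
    for (int i = 0; i < 3; i++)
        printf("pkt %d (%s): %s\n", pkts[i].id,
               pkts[i].is_fragment ? "fragment" : "whole",
               should_drop(&pkts[i], true) ? "drop" : "keep");
    return 0;
}
```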
  • Publication number: 20240320043
    Abstract: Examples described herein relate to determination of per-virtualized execution environment power usage based on an identifier of a processor that executes at least two virtualized execution environments, power usage of the processor, and number of virtualized execution environments executed by the processor.
    Type: Application
    Filed: June 5, 2024
    Publication date: September 26, 2024
    Applicant: Intel Corporation
    Inventors: John J. BROWNE, Chris MACNAMARA
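    Read literally, the attribution takes a processor's measured power and the count of virtualized execution environments it runs. An even split is the simplest interpretation and is assumed in the sketch below; the publication may weight usage differently.
```c
/* Illustrative sketch: attribute a processor's measured power across the
 * virtualized execution environments it runs. The even split is an assumption. */
#include <stdio.h>

static double per_vee_power_watts(double processor_power_watts, int vee_count)
{
    return vee_count > 0 ? processor_power_watts / vee_count : 0.0;
}

int main(void)
{
    /* Processor drawing 180 W while running 6 virtualized execution environments. */
    printf("per-environment power: %.1f W\n", per_vee_power_watts(180.0, 6));
    return 0;
}
```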
  • Patent number: 12066853
    Abstract: A processing device includes a plurality of processing cores, a control register, associated with a first processing core of the plurality of processing cores, to store a first base clock frequency value at which the first processing core is to run, and a power management circuit to receive a base clock frequency request comprising a second base clock frequency value, store the second base clock frequency value in the control register to cause the first processing core to run at the second base clock frequency value, and expose the second base clock frequency value on a hardware interface associated with the power management circuit.
    Type: Grant
    Filed: June 5, 2023
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Vasudevan Srinivasan, Krishnakanth V. Sistla, Corey D. Gough, Ian M. Steiner, Nikhil Gupta, Vivek Garg, Ankush Varma, Sujal A. Vora, David P. Lerner, Joseph M. Sullivan, Nagasubramanian Gurumoorthy, William J. Bowhill, Venkatesh Ramamurthy, Chris MacNamara, John J. Browne, Ripan Das
  • Patent number: 12068928
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to schedule workloads based on secure edge-to-device telemetry by calculating a difference between first telemetry data received from a first hardware device and an operating parameter, and computing an adjustment for a second hardware device based on that difference.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: August 20, 2024
    Assignee: Intel Corporation
    Inventors: Kapil Sood, Timothy Verrall, Ned M. Smith, Tarun Viswanathan, Kshitij Doshi, Francesc Guim Bernat, John J. Browne, Katalin Bartfai-Walcott, Maryam Tahhan, Eoin Walsh, Damien Power
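    The core calculation above, a difference between received telemetry and an operating parameter driving an adjustment for a second device, can be sketched as a proportional controller. The thermal target, gain, and sign convention are all assumptions.
```c
/* Illustrative sketch: derive an adjustment for a second device from the
 * difference between first-device telemetry and an operating parameter.
 * The proportional scaling is an assumption. */
#include <stdio.h>

struct scheduler_policy {
    double target_temp_c;   /* operating parameter, e.g. a thermal target */
    double gain;            /* how strongly the second device reacts */
};

/* Positive result: second device should shed load; negative: it may take more. */
static double compute_adjustment(const struct scheduler_policy *p, double measured_temp_c)
{
    double diff = measured_temp_c - p->target_temp_c;   /* telemetry minus parameter */
    return diff * p->gain;
}

int main(void)
{
    struct scheduler_policy pol = { .target_temp_c = 70.0, .gain = 1.5 };
    printf("adjustment: %.1f\n", compute_adjustment(&pol, 78.0));   /* 12.0 */
    return 0;
}
```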
  • Publication number: 20240152460
    Abstract: An example disclosed apparatus comprises a trigger monitor to detect an event satisfying a cache scrape trigger rule during execution of a workload, and a cache scraper to scrape cache data from cache in hardware during the execution of the workload.
    Type: Application
    Filed: December 19, 2023
    Publication date: May 9, 2024
    Inventors: John J. Browne, Kshitij Arun Doshi, Thijs Metsch, Francesc Guim Bernat, Adrian Hoban
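    A toy version of the trigger-then-scrape flow above: events are checked against a scrape rule, and a matching event fires the scraper while the workload keeps running. The event kinds and scrape stub are invented; real cache access would go through whatever hardware hook the apparatus provides.
```c
/* Illustrative sketch: a trigger monitor checks events against a cache scrape
 * trigger rule; on a match, the scraper runs during workload execution.
 * Event codes and the scrape itself are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

enum event_kind { EVT_HEARTBEAT, EVT_LATENCY_SPIKE, EVT_CRASH_LOOP };

struct scrape_rule { enum event_kind trigger_on; };

static bool event_matches(const struct scrape_rule *r, enum event_kind e)
{
    return e == r->trigger_on;
}

static void scrape_cache(void)
{
    /* Placeholder for reading cache contents via a hardware hook. */
    printf("scraping cache snapshot during workload execution\n");
}

int main(void)
{
    struct scrape_rule rule = { .trigger_on = EVT_LATENCY_SPIKE };
    enum event_kind events[] = { EVT_HEARTBEAT, EVT_LATENCY_SPIKE };
    for (int i = 0; i < 2; i++)
        if (event_matches(&rule, events[i]))
            scrape_cache();
    return 0;
}
```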
  • Publication number: 20240143505
    Abstract: Methods and apparatus for dynamic selection of super queue size for CPUs with higher core counts. An apparatus includes a plurality of compute modules, each including a plurality of processor cores with integrated first-level (L1) caches and a shared second-level (L2) cache, a plurality of Last Level Caches (LLCs) or LLC blocks, and a plurality of memory interface blocks, interconnected via a mesh interconnect. A compute module is configured to arbitrate access to the shared L2 cache and enqueue L2 cache misses in a super queue (XQ). The compute module is further configured to dynamically adjust the size of the XQ during runtime operation. The compute module tracks parameters comprising an L2 miss rate or count and LLC hit latency and adjusts the XQ size as a function of these parameters. A lookup table keyed on the L2 miss rate/count and LLC hit latency may be used to dynamically select the XQ size.
    Type: Application
    Filed: December 22, 2023
    Publication date: May 2, 2024
    Inventors: Amruta MISRA, Ajay RAMJI, Rajendrakumar CHINNAIYAN, Chris MACNAMARA, Karan PUTTANNAIAH, Pushpendra KUMAR, Vrinda KHIRWADKAR, Sanjeevkumar Shankrappa ROKHADE, John J. BROWNE, Francesc GUIM BERNAT, Karthik KUMAR, Farheena Tazeen SYEDA
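    The lookup-table selection the abstract mentions can be pictured as a small two-by-two table keyed on a bucketed L2 miss rate and LLC hit latency. The bucket boundaries and queue sizes below are made up for illustration.
```c
/* Illustrative sketch: select a super queue (XQ) size from a lookup table
 * keyed on the L2 miss rate and LLC hit latency. Boundaries and sizes are
 * hypothetical. */
#include <stdio.h>

static unsigned select_xq_size(double l2_miss_rate, unsigned llc_hit_latency_ns)
{
    /* Rows: low/high miss rate. Columns: low/high LLC hit latency. */
    static const unsigned xq_table[2][2] = {
        { 16, 24 },   /* few misses  : smaller queue suffices         */
        { 32, 48 },   /* many misses : deeper queue hides the latency */
    };
    int miss_bucket    = l2_miss_rate > 0.10 ? 1 : 0;
    int latency_bucket = llc_hit_latency_ns > 60 ? 1 : 0;
    return xq_table[miss_bucket][latency_bucket];
}

int main(void)
{
    printf("XQ size: %u entries\n", select_xq_size(0.15, 80));   /* 48 */
    printf("XQ size: %u entries\n", select_xq_size(0.02, 40));   /* 16 */
    return 0;
}
```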
  • Publication number: 20240111587
    Abstract: Examples described herein relate to an accelerator that includes an interface and circuitry coupled to the interface. In some examples, the circuitry is configured to access compressed data, decompress the compressed data, and output the decompressed data based on a call to an application programming interface (API). In some examples, based on a first call to the API having first values, the circuitry is to decompress at least a subset of the data and output at least one strict subset of the decompressed data. In some examples, based on a second call to the API having second values, the circuitry is to decompress an entirety of the data and output the decompressed data.
    Type: Application
    Filed: December 1, 2023
    Publication date: April 4, 2024
    Inventors: Marian HORGAN, Mateusz POLROLA, Fei Z. WANG, John J. BROWNE, Laurent COQUEREL
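    One way to picture an API whose argument values select between full and partial output, as this abstract describes: a single entry point taking an offset and length, where length zero means the whole output. The signature is invented, not the accelerator's real interface, and the "decompression" is faked for brevity.
```c
/* Illustrative sketch: one decompression entry point whose arguments select
 * between the full output and a strict subset (window) of it. The API shape
 * is hypothetical; the input is treated as already-plain text. */
#include <stdio.h>
#include <string.h>

static size_t decompress(const char *src, char *dst, size_t dst_cap,
                         size_t offset, size_t length /* 0 = entire output */)
{
    size_t total = strlen(src);
    if (offset >= total)
        return 0;
    size_t want = (length == 0) ? total - offset : length;
    if (want > total - offset) want = total - offset;
    if (want > dst_cap - 1)    want = dst_cap - 1;
    memcpy(dst, src + offset, want);     /* emit only the requested window */
    dst[want] = '\0';
    return want;
}

int main(void)
{
    char out[64];
    decompress("hello, accelerator", out, sizeof out, 0, 0);   /* full output */
    printf("full   : %s\n", out);
    decompress("hello, accelerator", out, sizeof out, 7, 11);  /* strict subset */
    printf("window : %s\n", out);
    return 0;
}
```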
  • Publication number: 20240089206
    Abstract: A computing device includes an appliance status table to store at least one of reliability and performance data for one or more network functions virtualization (NFV) appliances and one or more legacy network appliances. The computing device includes a load controller to configure an Internet Protocol (IP) filter rule to select a packet for which processing of the packet is to be migrated from a selected one of the one or more legacy network appliances to a selected one of the one or more NFV appliances, and to update the appliance status table with received at least one of reliability and performance data for the one or more legacy network appliances and the one or more NFV appliances. The computing device includes a packet distributor to receive the packet, to select one of the one or more NFV appliances based at least in part on the appliance status table, and to send the packet to the selected NFV appliance. Other embodiments are described herein.
    Type: Application
    Filed: November 17, 2023
    Publication date: March 14, 2024
    Inventors: Patrick CONNOR, Andrey CHILIKIN, Brendan RYAN, Chris MACNAMARA, John J. BROWNE, Krishnamurthy JAMBUR SATHYANARAYANA, Stephen DOYLE, Tomasz KANTECKI, Anthony KELLY, Ciara LOFTUS, Fiona TRAHE
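    The distributor's decision above, match an IP filter rule and then pick an NFV appliance from the status table or stay on the legacy appliance, can be sketched with a simple score over reliability and load. The table fields, scoring, and addresses are hypothetical.
```c
/* Illustrative sketch: a packet matching an IP filter rule is steered to the
 * NFV appliance the status table currently rates best; otherwise it stays on
 * the legacy appliance. Fields and scoring are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct appliance_status { const char *name; double reliability; double load_pct; };

static bool matches_filter(uint32_t dst_ip, uint32_t rule_ip, uint32_t rule_mask)
{
    return (dst_ip & rule_mask) == (rule_ip & rule_mask);
}

static const char *select_appliance(const struct appliance_status *nfv, int n)
{
    const char *best = "legacy-appliance";
    double best_score = -1.0;
    for (int i = 0; i < n; i++) {
        double score = nfv[i].reliability * (100.0 - nfv[i].load_pct);
        if (score > best_score) { best_score = score; best = nfv[i].name; }
    }
    return best;
}

int main(void)
{
    struct appliance_status nfv[] = { { "nfv-0", 0.99, 70.0 }, { "nfv-1", 0.97, 20.0 } };
    uint32_t dst = 0x0A000005, rule = 0x0A000000, mask = 0xFFFFFF00;   /* 10.0.0.0/24 */
    if (matches_filter(dst, rule, mask))
        printf("migrate flow to %s\n", select_appliance(nfv, 2));
    else
        printf("keep flow on legacy appliance\n");
    return 0;
}
```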
  • Patent number: 11907704
    Abstract: Various systems and methods for enabling derivation and distribution of an attestation manifest for a software update image are described. In an example, these systems and methods include orchestration functions and communications, providing functionality and components for a software update process which also provides verification and attestation among multiple devices and operators.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: February 20, 2024
    Assignee: Intel Corporation
    Inventors: Ned M. Smith, Kshitij Arun Doshi, John J. Browne, Vincent J. Zimmer, Francesc Guim Bernat, Kapil Sood
  • Publication number: 20240031219
    Abstract: Methods, apparatus, and systems are disclosed for mapping active assurance intents to resource orchestration and life cycle management. An example apparatus disclosed herein is to reserve a probe on a compute device in a cluster of compute devices based on a request to satisfy a resource availability criterion associated with a resource of the cluster, apply a risk mitigation operation based on the resource availability criterion before deployment of a workload to the cluster, and monitor whether the criterion is satisfied based on data from the probe after deployment of the workload to the cluster.
    Type: Application
    Filed: September 29, 2023
    Publication date: January 25, 2024
    Inventors: John J. Browne, Kshitij Arun Doshi, Francesc Guim Bernat, Adrian Hoban, Mats Agerstam, Shekar Ramachandran, Thijs Metsch, Timothy Verrall, Ciara Loftus, Emma Collins, Krzysztof Kepka, Pawel Zak, Aibhne Breathnach, Ivens Zambrano, Shanshu Yang
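    The reserve-then-monitor loop in this last publication can be reduced to a probe reading checked against the availability criterion after the workload is deployed, with mitigation when the criterion is violated. The criterion, probe fields, and threshold below are assumptions.
```c
/* Illustrative sketch: reserve a probe on a cluster node, then check its
 * readings against the resource availability criterion after deployment.
 * Criterion and probe data are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct availability_criterion { double min_free_cpu_pct; };

struct probe { int node_id; double free_cpu_pct; };   /* reserved on one compute node */

static bool criterion_satisfied(const struct probe *p,
                                const struct availability_criterion *c)
{
    return p->free_cpu_pct >= c->min_free_cpu_pct;
}

int main(void)
{
    struct availability_criterion c = { .min_free_cpu_pct = 20.0 };
    struct probe p = { .node_id = 3, .free_cpu_pct = 12.5 };   /* post-deployment reading */
    if (!criterion_satisfied(&p, &c))
        printf("node %d: criterion violated, trigger mitigation\n", p.node_id);
    else
        printf("node %d: criterion satisfied\n", p.node_id);
    return 0;
}
```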