Patents by Inventor Frank R. Dropps

Frank R. Dropps has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240069742
    Abstract: One aspect of the application can provide a system and method for replacing a failing node with a spare node in a non-uniform memory access (NUMA) system. During operation, in response to determining that a node-migration condition is met, the system can initialize a node controller of the spare node such that accesses to a memory local to the spare node are to be processed by the node controller, quiesce the failing node and the spare node to allow state information of processors on the failing node to be migrated to processors on the spare node, and subsequent to unquiescing the failing node and the spare node, migrate data from the failing node to the spare node while maintaining cache coherence in the NUMA system and while the NUMA system remains in operation, thereby facilitating continuous execution of processes previously executed on the failing node.
    Type: Application
    Filed: August 29, 2022
    Publication date: February 29, 2024
    Inventors: Thomas Edward McGee, Brian J. Johnson, Frank R. Dropps, Derek S. Schumacher, Stuart C. Haden, Michael S. Woodacre
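
The node-replacement flow in the entry above can be pictured with a small toy model. The sketch below is only an illustration under assumed names (node_t, migrate_node, a per-page owner table); it is not the patented implementation, which operates on real node controllers and hardware coherence state.

```c
/* Illustrative sketch only: a toy model of replacing a failing NUMA node
 * with a spare node. All names (node_t, migrate_node, etc.) are invented
 * for this example and do not come from the patent. */
#include <stdio.h>
#include <string.h>

#define PAGES_PER_NODE 4

typedef struct {
    int  id;
    int  quiesced;                   /* 1 while processor state is frozen    */
    long cpu_state;                  /* stand-in for saved processor state   */
    int  page_owner[PAGES_PER_NODE]; /* which node currently owns each page  */
    char pages[PAGES_PER_NODE][64];  /* stand-in for local memory            */
} node_t;

/* Migrate execution from 'failing' to 'spare' while the "system" keeps running. */
static void migrate_node(node_t *failing, node_t *spare)
{
    /* 1. Initialize the spare node controller; pages are still served remotely. */
    for (int p = 0; p < PAGES_PER_NODE; p++)
        spare->page_owner[p] = failing->id;

    /* 2. Quiesce both nodes and move processor state across. */
    failing->quiesced = spare->quiesced = 1;
    spare->cpu_state = failing->cpu_state;

    /* 3. Unquiesce: execution resumes on the spare node immediately. */
    failing->quiesced = spare->quiesced = 0;

    /* 4. Migrate memory page by page; ownership flips as each page lands,
     *    which is what keeps accesses coherent during the migration. */
    for (int p = 0; p < PAGES_PER_NODE; p++) {
        memcpy(spare->pages[p], failing->pages[p], sizeof spare->pages[p]);
        spare->page_owner[p] = spare->id;
    }
}

int main(void)
{
    node_t failing = { .id = 0, .cpu_state = 42 };
    node_t spare   = { .id = 1 };
    strcpy(failing.pages[0], "process data");

    migrate_node(&failing, &spare);
    printf("spare cpu_state=%ld page0=\"%s\" owner=%d\n",
           spare.cpu_state, spare.pages[0], spare.page_owner[0]);
    return 0;
}
```
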
  • Patent number: 11888751
    Abstract: A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
    Type: Grant
    Filed: February 15, 2022
    Date of Patent: January 30, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Joseph G. Tietz, Derek Alan Sherlock
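
As a rough illustration of the credit-gated virtual-channel remap described in the entry above, the following minimal C sketch models an input port with per-VC credits and a single from/to mapping rule. All structures and names are assumptions for the example, not the patented hardware.

```c
/* Illustrative sketch only: remapping a message from one virtual channel (VC)
 * to another at an input port, gated by flow-control credits. */
#include <stdio.h>

#define NUM_VCS 4

typedef struct { int vc; int payload; } message_t;

typedef struct {
    int credits[NUM_VCS];   /* credits granted to the upstream node per VC */
    int map_from;           /* mapping rule: messages on this VC ...       */
    int map_to;             /* ... are switched to this VC in the queue    */
    message_t queue[8];
    int head, tail;
} input_port_t;

/* Accept a message if the upstream node held a credit for its VC,
 * apply the port's mapping rule, and enqueue it. */
static int receive_message(input_port_t *port, message_t msg)
{
    if (port->credits[msg.vc] == 0)
        return -1;                   /* upstream violated credit flow control */
    port->credits[msg.vc]--;

    if (msg.vc == port->map_from)    /* candidate for a VC change? */
        msg.vc = port->map_to;       /* associate with the new VC in the queue */

    port->queue[port->tail++ % 8] = msg;
    return 0;
}

int main(void)
{
    input_port_t port = { .credits = {2, 2, 2, 2}, .map_from = 0, .map_to = 2 };
    message_t m = { .vc = 0, .payload = 7 };

    if (receive_message(&port, m) == 0)
        printf("message forwarded on VC %d\n", port.queue[0].vc);  /* prints VC 2 */
    return 0;
}
```
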
  • Publication number: 20230275591
    Abstract: Methods and devices are provided for circuits. One device includes an adjustment circuit having an adjustable resistor for modifying a resistance value of a resistive device, the adjustment circuit connected to an adjustment terminal of the resistive device. The resistance value of the adjustable resistor changes when a voltage or charge on the adjustment terminal of the adjustable resistor is changed. The adjustable resistor is a phase change element with an adjusting terminal to which different voltage values are applied for adjusting a conversion device threshold value.
    Type: Application
    Filed: May 9, 2023
    Publication date: August 31, 2023
    Inventor: Frank R. Dropps
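
The entry above describes analog circuitry, but a small behavioral model can illustrate the idea of an adjustment voltage re-programming a resistance and thereby shifting a threshold. The linear model, component values, and names below are invented for illustration only; they are not the patented device.

```c
/* Illustrative behavioral model only: an "adjustable resistor" whose value
 * tracks the voltage applied to an adjustment terminal, shifting a comparator
 * threshold. The numbers and the linear model are assumptions. */
#include <stdio.h>

typedef struct {
    double r_min, r_max;  /* ohms: resistance range of the phase-change element */
    double v_adj_max;     /* volts: adjustment voltage that yields r_max        */
    double r;             /* current resistance                                 */
} adj_resistor_t;

/* Re-program the element: a higher adjustment voltage gives a higher resistance. */
static void apply_adjustment(adj_resistor_t *ar, double v_adj)
{
    if (v_adj < 0) v_adj = 0;
    if (v_adj > ar->v_adj_max) v_adj = ar->v_adj_max;
    ar->r = ar->r_min + (ar->r_max - ar->r_min) * (v_adj / ar->v_adj_max);
}

/* Threshold of a divider formed with a fixed resistor against a 3.3 V supply. */
static double threshold(const adj_resistor_t *ar)
{
    const double R_FIXED = 10000.0, VDD = 3.3;
    return VDD * ar->r / (ar->r + R_FIXED);
}

int main(void)
{
    adj_resistor_t ar = { .r_min = 1000.0, .r_max = 100000.0, .v_adj_max = 1.0 };

    apply_adjustment(&ar, 0.2);
    printf("threshold at 0.2 V adjust: %.2f V\n", threshold(&ar));
    apply_adjustment(&ar, 0.8);
    printf("threshold at 0.8 V adjust: %.2f V\n", threshold(&ar));
    return 0;
}
```
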
  • Publication number: 20230262001
    Abstract: A system for facilitating enhanced virtual channel switching in a node of a distributed computing environment is provided. During operation, the system can allocate flow control credits for a first virtual channel to an upstream node in the distributed computing environment. The system can receive, via a message path comprising the upstream node, a message on the first virtual channel based on the allocated flow control credits. The system can then store the message in a queue associated with an input port and determine whether the message is a candidate for changing the first virtual channel at the node based on a mapping rule associated with the input port. If the message is a candidate, the system can associate the message with a second virtual channel indicated in the mapping rule in the queue. Subsequently, the system can send the message from the queue on the second virtual channel.
    Type: Application
    Filed: February 15, 2022
    Publication date: August 17, 2023
    Inventors: Frank R. Dropps, Joseph G. Tietz, Derek Alan Sherlock
  • Patent number: 11716088
    Abstract: Methods and devices are provided for circuits. One device includes an adjustment circuit having an adjustable resistor for modifying a resistance value of a resistive device, the adjustment circuit connected to an adjustment terminal of the resistive device. The resistance value of the adjustable resistor changes when a voltage or charge on the adjustment terminal of the adjustable resistor is changed. The adjustable resistor is a phase change element with an adjusting terminal to which different voltage values are applied for adjusting a conversion device threshold value.
    Type: Grant
    Filed: November 2, 2021
    Date of Patent: August 1, 2023
    Inventor: Frank R. Dropps
  • Patent number: 11625326
    Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: April 11, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Frank R. Dropps
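
As an illustration of the ejection handling described in the entry above, the sketch below models a tiny directory cache in which an invalidate request that hits an entry causes that entry to be ejected, freeing the slot. The entry layout and the specific policy rule are assumptions, not the patented design.

```c
/* Illustrative sketch only: handling an invalidate request that hits a
 * coherency-directory cache and ejecting the entry per a simple policy. */
#include <stdio.h>
#include <stdint.h>

enum state { INVALID, SHARED, EXCLUSIVE };

typedef struct {
    int        valid;
    uint64_t   block_addr;
    enum state blk_state;   /* state/ownership info for the memory block */
    int        owner;
} dir_cache_entry_t;

#define DIR_CACHE_SIZE 4
static dir_cache_entry_t dir_cache[DIR_CACHE_SIZE];

/* Policy: once a block goes invalid there is nothing left to track,
 * so the entry is ejected and the slot is freed for other blocks. */
static void handle_invalidate(uint64_t addr)
{
    for (int i = 0; i < DIR_CACHE_SIZE; i++) {
        if (dir_cache[i].valid && dir_cache[i].block_addr == addr) {  /* cache hit */
            dir_cache[i].blk_state = INVALID;
            dir_cache[i].valid = 0;          /* eject per the policy rule */
            printf("ejected directory cache entry %d for block 0x%llx\n",
                   i, (unsigned long long)addr);
            return;
        }
    }
    printf("miss: block 0x%llx not cached\n", (unsigned long long)addr);
}

int main(void)
{
    dir_cache[1] = (dir_cache_entry_t){ .valid = 1, .block_addr = 0x1000,
                                        .blk_state = EXCLUSIVE, .owner = 3 };
    handle_invalidate(0x1000);
    return 0;
}
```
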
  • Patent number: 11556478
    Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
    Type: Grant
    Filed: October 30, 2020
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Frank R. Dropps
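
The counter, write-back ordering, and mapping described in the entry above can be illustrated with a small model: a counter, an ordered array of dirty lines, and a per-line position map used to remove a line from the ordering when it is written back or evicted. The names and the array-based layout are assumptions for the example, not the patented circuitry.

```c
/* Illustrative sketch only: tracking dirty cache lines with a counter, a
 * write-back ordering, and a map from line to position in that ordering. */
#include <stdio.h>

#define NUM_LINES 8

static int dirty_count;                 /* dirty cache line counter              */
static int order[NUM_LINES], order_len; /* write-back order, oldest first        */
static int pos_of[NUM_LINES];           /* line -> index in 'order', -1 if clean */

static void mark_dirty(int line)
{
    if (pos_of[line] != -1) return;     /* already dirty */
    pos_of[line] = order_len;
    order[order_len++] = line;
    dirty_count++;
}

/* Remove 'line' from the ordering (used for both write-back and eviction). */
static void remove_from_order(int line)
{
    int p = pos_of[line];
    if (p == -1) return;
    for (int i = p + 1; i < order_len; i++) {   /* shift the tail up */
        order[i - 1] = order[i];
        pos_of[order[i]] = i - 1;
    }
    order_len--;
    pos_of[line] = -1;
    dirty_count--;
}

int main(void)
{
    for (int i = 0; i < NUM_LINES; i++) pos_of[i] = -1;

    mark_dirty(3); mark_dirty(5); mark_dirty(1);
    remove_from_order(5);               /* line 5 evicted while dirty */
    printf("dirty lines: %d, next write-back: line %d\n", dirty_count, order[0]);
    return 0;
}
```
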
  • Patent number: 11556471
    Abstract: In exemplary aspects of cache coherency management, a first request is received and includes an address of a first memory block in a shared memory. The shared memory includes memory blocks of memory devices associated with respective processors. Each of the memory blocks are associated with one of a plurality of memory categories indicating a protocol for managing cache coherency for the respective memory block. A memory category associated with the first memory block is determined and a response to the first request is based on the memory category of the first memory block. The first memory block and a second memory block are included in one of the same memory devices, and the memory category of the first memory block is different than the memory category of the second memory block.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: January 17, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Michael S. Woodacre, Thomas McGee, Michael Malewicki
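
As a sketch of per-block memory categories selecting a coherency-handling path, as described in the entry above, the example below tags blocks of the same memory device with different categories and dispatches a request accordingly. The category names and block size are invented for illustration; the patent does not specify them.

```c
/* Illustrative sketch only: picking a cache-coherency handling path based on
 * a per-memory-block category. Categories and handlers are assumptions. */
#include <stdio.h>
#include <stdint.h>

typedef enum {
    CAT_HW_COHERENT,    /* hardware-managed coherency */
    CAT_SW_COHERENT,    /* software-managed coherency */
    CAT_NONCOHERENT     /* no coherency tracking      */
} mem_category_t;

#define BLOCK_SHIFT 12  /* 4 KiB blocks (assumption) */
#define NUM_BLOCKS  16

/* Blocks in the same memory device may carry different categories. */
static mem_category_t category[NUM_BLOCKS];

static void handle_request(uint64_t addr)
{
    uint64_t block = (addr >> BLOCK_SHIFT) % NUM_BLOCKS;
    switch (category[block]) {
    case CAT_HW_COHERENT:
        printf("block %llu: run directory-based coherency protocol\n",
               (unsigned long long)block);
        break;
    case CAT_SW_COHERENT:
        printf("block %llu: defer to software-managed coherency\n",
               (unsigned long long)block);
        break;
    default:
        printf("block %llu: respond without coherency tracking\n",
               (unsigned long long)block);
    }
}

int main(void)
{
    category[1] = CAT_HW_COHERENT;   /* two blocks of the same device ... */
    category[2] = CAT_NONCOHERENT;   /* ... with different categories     */
    handle_request(0x1000);
    handle_request(0x2000);
    return 0;
}
```
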
  • Publication number: 20220138110
    Abstract: A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
    Type: Application
    Filed: October 30, 2020
    Publication date: May 5, 2022
    Inventor: Frank R. Dropps
  • Patent number: 11314637
    Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until it is discarded to free up tracking space for new requests.
    Type: Grant
    Filed: May 29, 2020
    Date of Patent: April 26, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Thomas McGee, Michael Malewicki
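
The grouped, speculative fetch described in the entry above can be sketched as follows: a demand request pulls in its whole group of adjacent lines, the demanded line is returned, and the extra lines wait in a small speculative buffer until requested or overwritten. Group size, buffer layout, and names are assumptions for the example, not the patented method.

```c
/* Illustrative sketch only: issuing one grouped request for several adjacent
 * cache lines and parking the speculative responses in a small buffer. */
#include <stdio.h>
#include <stdint.h>

#define GROUP_LINES 4       /* lines fetched per grouped request (assumed) */
#define SPEC_SLOTS  8

typedef struct { int valid; uint64_t line; int data; } spec_entry_t;
static spec_entry_t spec_buf[SPEC_SLOTS];

/* Pretend remote fetch: returns fake data for a cache line address. */
static int remote_fetch(uint64_t line) { return (int)(line * 10); }

/* Demand request for 'line': fetch its whole group, return the demanded line,
 * and stash the other group members speculatively. */
static int request_line(uint64_t line)
{
    uint64_t base = line & ~(uint64_t)(GROUP_LINES - 1);
    int demanded = 0;
    for (uint64_t l = base; l < base + GROUP_LINES; l++) {
        int data = remote_fetch(l);
        if (l == line) { demanded = data; continue; }
        spec_buf[l % SPEC_SLOTS] = (spec_entry_t){ 1, l, data };
    }
    return demanded;
}

/* Later request: satisfied from the speculative buffer if present. */
static int request_line_again(uint64_t line)
{
    spec_entry_t *e = &spec_buf[line % SPEC_SLOTS];
    if (e->valid && e->line == line) { e->valid = 0; return e->data; }
    return request_line(line);
}

int main(void)
{
    printf("line 5 -> %d\n", request_line(5));        /* fetches lines 4..7 */
    printf("line 6 -> %d (from speculative buffer)\n", request_line_again(6));
    return 0;
}
```
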
  • Publication number: 20220060192
    Abstract: Methods and devices are provided for circuits. One device includes an adjustment circuit having an adjustable resistor for modifying a resistance value of a resistive device, the adjustment circuit connected to an adjustment terminal of the resistive device. The resistance value of the adjustable resistor changes when a voltage or charge on the adjustment terminal of the adjustable resistor is changed. The adjustable resistor is a phase change element with an adjusting terminal to which different voltage values are applied for adjusting a conversion device threshold value.
    Type: Application
    Filed: November 2, 2021
    Publication date: February 24, 2022
    Inventor: Frank R. Dropps
  • Patent number: 11200347
    Abstract: Systems and methods for encrypted processing are provided. For example, an apparatus for encrypted processing includes: an input interface adapted to receive input from a device; an encrypted processor connected to the input interface; a program store control connected to the encrypted processor, the program store control controlling use of and access to at least two program stores, where at least one program store acts as a primary program store and at least one program store acts as a back-up program store; and an output interface connected to the encrypted processor for outputting at least one of commands or data; where the encrypted processor is programmed to: receive and validate a request; determine whether a valid request is a program update request for a first program; and initiate a lock mechanism into a locked state.
    Type: Grant
    Filed: April 20, 2020
    Date of Patent: December 14, 2021
    Inventor: Frank R. Dropps
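
As an illustration of the request path described in the entry above, the sketch below validates a request and, when a valid program-update request targets the first program, engages a lock before the primary and back-up program stores would be touched. The request layout, the authentication check, and the lock flag are stand-ins invented for the example; they are not the patented design.

```c
/* Illustrative sketch only: a request path with two program stores (primary
 * and back-up) and a lock that engages on a valid program-update request. */
#include <stdio.h>
#include <stdint.h>

enum req_type { REQ_DATA, REQ_PROGRAM_UPDATE };

typedef struct {
    enum req_type type;
    int      target_program;   /* which program the update is for        */
    uint32_t auth_tag;         /* stand-in for cryptographic validation  */
} request_t;

typedef struct {
    uint8_t primary[256];      /* primary program store  */
    uint8_t backup[256];       /* back-up program store  */
    int     locked;            /* lock mechanism state   */
} program_store_ctrl_t;

static int validate(const request_t *r) { return r->auth_tag == 0xC0DEu; }

static void process_request(program_store_ctrl_t *ctl, const request_t *r)
{
    if (!validate(r)) { printf("request rejected\n"); return; }

    if (r->type == REQ_PROGRAM_UPDATE && r->target_program == 1) {
        ctl->locked = 1;       /* initiate the lock before touching the stores */
        printf("valid update for program 1: lock engaged, back-up retained\n");
        return;
    }
    printf("valid request processed normally\n");
}

int main(void)
{
    program_store_ctrl_t ctl = {0};
    request_t update = { REQ_PROGRAM_UPDATE, 1, 0xC0DE };
    process_request(&ctl, &update);
    printf("lock state: %d\n", ctl.locked);
    return 0;
}
```
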
  • Patent number: 11196432
    Abstract: Methods and devices are provided for circuits. One device includes an adjustment circuit having an adjustable resistor for modifying a resistance value of a resistive device, the adjustment circuit connected to an adjustment terminal of the resistive device. The resistance value of the adjustable resistor changes when a voltage or charge on the adjustment terminal of the adjustable resistor is changed. The adjustable resistor is a phase change element with an adjusting terminal to which different voltage values are applied for adjusting a conversion device threshold value.
    Type: Grant
    Filed: November 23, 2020
    Date of Patent: December 7, 2021
    Inventor: Frank R. Dropps
  • Publication number: 20210374050
    Abstract: To reduce latency and bandwidth consumption, systems and methods are provided for grouping multiple cache line request messages in a related and speculative manner. That is, multiple cache lines are likely to have the same state and ownership characteristics, and therefore requests for multiple cache lines can be grouped. Information received in response can be directed to the requesting processor socket, and information received speculatively (not actually requested, but likely to be requested) can be maintained in a queue or other memory until a request is received for that information, or until it is discarded to free up tracking space for new requests.
    Type: Application
    Filed: May 29, 2020
    Publication date: December 2, 2021
    Inventors: Frank R. Dropps, Thomas McGee, Michael Malewicki
  • Patent number: 11188480
    Abstract: Systems and methods are provided for addressing die area inefficiencies associated with the use of redundant ternary content-addressable memory (TCAM) for facilitating error detection and correction. Only a portion of redundant TCAMs (or portions of the same TCAM) are reserved for modified coherency directory cache entries, while remaining portions are available for unmodified coherency directory cache entries. The amount of space reserved for redundant, modified coherency directory cache entries can be programmable and adaptable.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: November 30, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Thomas Edward McGee
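
The partitioning idea in the entry above can be sketched as two mirrored arrays standing in for redundant TCAMs, with only a programmable front region duplicated for modified coherency-directory cache entries. Sizes, the insert policy, and all names are assumptions for the example, not the patented design.

```c
/* Illustrative sketch only: reserving just a slice of a redundant TCAM pair
 * for modified coherency-directory cache entries. */
#include <stdio.h>
#include <stdint.h>

#define TCAM_SIZE 16
static int reserved_modified = 4;   /* programmable: slots 0..3 are duplicated */

typedef struct { int valid; uint64_t tag; int modified; } tcam_entry_t;
static tcam_entry_t tcam_a[TCAM_SIZE], tcam_b[TCAM_SIZE];   /* redundant pair */

/* Insert a directory cache entry; modified entries go to the reserved,
 * duplicated region so single-TCAM errors can be detected and corrected. */
static int tcam_insert(uint64_t tag, int modified)
{
    int lo = modified ? 0 : reserved_modified;
    int hi = modified ? reserved_modified : TCAM_SIZE;
    for (int i = lo; i < hi; i++) {
        if (!tcam_a[i].valid) {
            tcam_a[i] = (tcam_entry_t){ 1, tag, modified };
            if (modified)
                tcam_b[i] = tcam_a[i];   /* redundant copy only when modified */
            return i;
        }
    }
    return -1;   /* region full: caller would evict per its policy */
}

int main(void)
{
    printf("modified entry in slot %d\n", tcam_insert(0xABC, 1));
    printf("clean entry in slot %d\n", tcam_insert(0xDEF, 0));
    return 0;
}
```
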
  • Publication number: 20210367699
    Abstract: A fiber loop includes a plurality of processors coupled to each other and a controller coupled to each of the plurality of processors. The controller is configured to: assign to each of the plurality of processors a number of wavelengths for interconnect communications between the plurality of processors; receive, from a first processor of the plurality of processors, a request for one or more additional wavelengths; determine whether an interconnect bandwidth utilization on the fiber loop is less than a threshold; and in response to determining that the interconnect bandwidth utilization on the fiber loop is less than the threshold, reassign, to the first processor, one or more wavelengths that are assigned to a second processor of the plurality of processors.
    Type: Application
    Filed: May 22, 2020
    Publication date: November 25, 2021
    Inventors: Frank R. Dropps, Mir Ashkan Seyedi
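
As a sketch of the controller behavior described in the entry above, the example below reassigns wavelengths from the least-utilized processor to a requester only when the loop-wide utilization is under a threshold. The utilization model, numbers, and names are assumptions, not the patented method.

```c
/* Illustrative sketch only: a controller that reassigns wavelengths between
 * processors on a fiber loop when overall utilization is below a threshold. */
#include <stdio.h>

#define NUM_PROCS 4

static int    wavelengths[NUM_PROCS] = { 8, 8, 8, 8 };        /* assigned per processor */
static double utilization[NUM_PROCS] = { 0.9, 0.1, 0.2, 0.3 };/* per-processor usage    */

/* Average interconnect bandwidth utilization across the loop. */
static double loop_utilization(void)
{
    double sum = 0;
    for (int i = 0; i < NUM_PROCS; i++) sum += utilization[i];
    return sum / NUM_PROCS;
}

/* Processor 'req' asks for 'n' more wavelengths; take them from the least
 * utilized other processor if the loop as a whole is under the threshold. */
static int request_wavelengths(int req, int n, double threshold)
{
    if (loop_utilization() >= threshold) return 0;

    int donor = -1;
    for (int i = 0; i < NUM_PROCS; i++)
        if (i != req && (donor == -1 || utilization[i] < utilization[donor]))
            donor = i;

    if (donor == -1 || wavelengths[donor] < n) return 0;
    wavelengths[donor] -= n;
    wavelengths[req]   += n;
    return n;
}

int main(void)
{
    int granted = request_wavelengths(0, 2, 0.5);
    printf("granted %d wavelengths; proc0=%d, donor proc1=%d\n",
           granted, wavelengths[0], wavelengths[1]);
    return 0;
}
```
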
  • Patent number: 11184103
    Abstract: A fiber loop includes a plurality of processors coupled to each other and a controller coupled to each of the plurality of processors. The controller is configured to: assign to each of the plurality of processors a number of wavelengths for interconnect communications between the plurality of processors; receive, from a first processor of the plurality of processors, a request for one or more additional wavelengths; determine whether an interconnect bandwidth utilization on the fiber loop is less than a threshold; and in response to determining that the interconnect bandwidth utilization on the fiber loop is less than the threshold, reassign, to the first processor, one or more wavelengths that are assigned to a second processor of the plurality of processors.
    Type: Grant
    Filed: May 22, 2020
    Date of Patent: November 23, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Frank R. Dropps, Mir Ashkan Seyedi
  • Publication number: 20210357334
    Abstract: Systems and methods are provided for addressing die area inefficiencies associated with the use of redundant ternary content-addressable memory (TCAM) for facilitating error detection and correction. Only a portion of redundant TCAMs (or portions of the same TCAM) are reserved for modified coherency directory cache entries, while remaining portions are available for unmodified coherency directory cache entries. The amount of space reserved for redundant, modified coherency directory cache entries can be programmable and adaptable.
    Type: Application
    Filed: May 12, 2020
    Publication date: November 18, 2021
    Inventors: Frank R. Dropps, Thomas Edward McGee
  • Patent number: 11169921
    Abstract: A system and method for cache coherency within multiprocessor environments is provided. Each node controller of a plurality of nodes within a multiprocessor system receives a cache coherency protocol request from local processor sockets and other node controller(s). A ternary content addressable memory (TCAM) accelerator in the node controller determines whether the cache coherency protocol request comprises a snoop request and, if it is determined to be a snoop request, searches the TCAM based on an address within the cache coherency protocol request. In response to detecting only one match between an entry of the TCAM and the received snoop request, a response is sent to the requesting local processor without having to access a coherency directory.
    Type: Grant
    Filed: May 9, 2019
    Date of Patent: November 9, 2021
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Frank R. Dropps
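
The TCAM-accelerated snoop path in the entry above can be sketched as a masked-match search that answers directly when exactly one entry matches and otherwise falls back to the coherency directory. The entry format, masks, and fallback shown are assumptions for the example, not the patented hardware.

```c
/* Illustrative sketch only: a TCAM-style lookup that answers a snoop directly
 * when exactly one entry matches, else falling back to the directory. */
#include <stdio.h>
#include <stdint.h>

#define TCAM_ENTRIES 8

typedef struct { int valid; uint64_t addr; uint64_t mask; } tcam_entry_t;
static tcam_entry_t tcam[TCAM_ENTRIES];

static void handle_snoop(uint64_t addr)
{
    int matches = 0, hit = -1;
    for (int i = 0; i < TCAM_ENTRIES; i++) {
        if (tcam[i].valid &&
            (addr & tcam[i].mask) == (tcam[i].addr & tcam[i].mask)) {
            matches++;
            hit = i;
        }
    }
    if (matches == 1)
        printf("snoop 0x%llx: answered from TCAM entry %d, directory not accessed\n",
               (unsigned long long)addr, hit);
    else
        printf("snoop 0x%llx: %d matches, fall back to coherency directory\n",
               (unsigned long long)addr, matches);
}

int main(void)
{
    /* One entry covering a 4 KiB region (mask ignores the low 12 bits). */
    tcam[2] = (tcam_entry_t){ 1, 0x4000, ~0xFFFULL };
    handle_snoop(0x4A80);   /* exactly one match        */
    handle_snoop(0x9000);   /* no match: use directory  */
    return 0;
}
```
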
  • Publication number: 20210240625
    Abstract: In exemplary aspects of managing the ejection of entries of a coherence directory cache, the directory cache includes directory cache entries that can store copies of respective directory entries from a coherency directory. Each of the directory cache entries is configured to include state and ownership information of respective memory blocks. Information is stored, which indicates if memory blocks are in an active state within a memory region of a memory. A request is received and includes a memory address of a first memory block. Based on the memory address in the request, a cache hit in the directory cache is detected. The request is determined to be a request to change the state of the first memory block to an invalid state. The ejection of a directory cache entry corresponding to the first memory block is managed based on ejection policy rules.
    Type: Application
    Filed: April 20, 2021
    Publication date: August 5, 2021
    Inventor: Frank R. Dropps