Patents by Inventor Ryan D. Menhusen

Ryan D. Menhusen has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11874767
    Abstract: In some examples, a system partitions a shared memory address space of a shared memory among a plurality of processing entities into a plurality of memory partitions, where a respective memory partition is associated with a respective processing entity. A first processing entity forwards, to a second processing entity, a first data operation, based on a determination by the first processing entity that the first data operation is to be applied to data for a memory partition associated with the second processing entity. The second processing entity applies the first data operation that includes writing data of the first data operation to the memory partition associated with the second processing entity using a non-atomic operation.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: January 16, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventor: Ryan D. Menhusen
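    Sketch: a minimal C++ illustration of the partition-and-forward scheme in the abstract above, under assumed names (owner_of, submit, drain, and the per-entity queue layout are hypothetical, not taken from the patent). Each processing entity is modeled as a thread that owns one partition of the shared address space; an operation aimed at another entity's partition is forwarded through that entity's queue, and the owner applies the write with an ordinary non-atomic store because it is the sole writer of its partition.

      // Minimal sketch; entity, partition, and function names are hypothetical, not from the patent.
      #include <cstddef>
      #include <cstdint>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      struct Op { std::size_t addr; std::uint64_t value; };   // a write to one shared-memory word

      constexpr std::size_t kEntities = 4;
      constexpr std::size_t kWordsPerPartition = 1024;

      std::vector<std::uint64_t> shared_mem(kEntities * kWordsPerPartition);  // shared address space
      std::mutex inbox_lock[kEntities];
      std::queue<Op> inbox[kEntities];                         // per-entity forwarding queues

      // Each partition is owned by exactly one processing entity.
      std::size_t owner_of(std::size_t addr) { return addr / kWordsPerPartition; }

      // A first entity forwards an operation when the target partition belongs to another entity.
      void submit(std::size_t self, Op op) {
          std::size_t owner = owner_of(op.addr);
          if (owner == self) {                                 // local partition: apply directly
              shared_mem[op.addr] = op.value;
              return;
          }
          std::lock_guard<std::mutex> g(inbox_lock[owner]);    // otherwise forward to the owner
          inbox[owner].push(op);
      }

      // The owning entity drains its inbox and applies writes with ordinary, non-atomic stores:
      // it is the only writer of its own partition, so no atomics are needed on the data itself.
      void drain(std::size_t self) {
          std::queue<Op> pending;
          {
              std::lock_guard<std::mutex> g(inbox_lock[self]);
              std::swap(pending, inbox[self]);
          }
          while (!pending.empty()) {
              Op op = pending.front();
              pending.pop();
              shared_mem[op.addr] = op.value;                  // non-atomic write into own partition
          }
      }

      int main() {
          std::vector<std::thread> entities;
          for (std::size_t e = 0; e < kEntities; ++e) {
              entities.emplace_back([e] {
                  // This write targets another entity's partition, so it gets forwarded.
                  submit(e, Op{(e + 1) % kEntities * kWordsPerPartition, e});
                  drain(e);                                    // apply whatever has been forwarded so far
              });
          }
          for (auto& t : entities) t.join();
      }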
  • Patent number: 11768772
    Abstract: In some examples, a system includes a processing entity and a memory to store data arranged in a plurality of bins associated with respective key values of a key. The system includes a cache to store cached data elements for respective accumulators that are updatable to represent occurrences of the respective key values of the key, where each accumulator corresponds to a different bin of the plurality of bins, and each cached data element has a range that is less than a range of a corresponding bin of the plurality of bins. Responsive to a value of a given cached data element as updated by a given accumulator satisfying a criterion, the processing entity is to cause an aggregation of the value of the given cached data element with a bin value in a respective bin.
    Type: Grant
    Filed: December 15, 2021
    Date of Patent: September 26, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ryan D. Menhusen, Darel Neal Emmot
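    Sketch: a minimal C++ reading of the caching idea in the abstract above (class and member names such as BinnedAccumulators and kFlushThreshold are hypothetical, not from the patent). Each bin keeps a full-range count in memory while updates land in a narrow per-bin cached counter; once the cached value reaches the limit of its small range, it is aggregated into the bin and reset.

      // Minimal sketch; names are hypothetical, not code from the patent. Small cached counters
      // spill into wider memory-resident bins when they approach their limited range.
      #include <cstdint>
      #include <iostream>
      #include <vector>

      class BinnedAccumulators {
      public:
          explicit BinnedAccumulators(std::size_t num_bins)
              : bins_(num_bins, 0), cached_(num_bins, 0) {}

          // Record one occurrence of a key value (here, simply the bin index).
          void record(std::size_t key) {
              ++cached_[key];                       // cheap update to the narrow cached element
              if (cached_[key] == kFlushThreshold)  // criterion: cached value at the edge of its range
                  flush(key);                       // aggregate into the wide bin and reset
          }

          std::uint64_t count(std::size_t key) {
              flush(key);                           // fold in any residual cached value
              return bins_[key];
          }

      private:
          static constexpr std::uint8_t kFlushThreshold = 255;  // cached element is only 8 bits wide

          void flush(std::size_t key) {
              bins_[key] += cached_[key];           // aggregation of cached value with the bin value
              cached_[key] = 0;
          }

          std::vector<std::uint64_t> bins_;   // full-range bins held in memory
          std::vector<std::uint8_t>  cached_; // narrow per-bin cached data elements
      };

      int main() {
          BinnedAccumulators hist(16);
          for (int i = 0; i < 100000; ++i) hist.record(i % 16);
          std::cout << hist.count(3) << "\n";   // prints 6250
      }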
  • Publication number: 20230185721
    Abstract: In some examples, a system includes a processing entity and a memory to store data arranged in a plurality of bins associated with respective key values of a key. The system includes a cache to store cached data elements for respective accumulators that are updatable to represent occurrences of the respective key values of the key, where each accumulator corresponds to a different bin of the plurality of bins, and each cached data element has a range that is less than a range of a corresponding bin of the plurality of bins. Responsive to a value of a given cached data element as updated by a given accumulator satisfying a criterion, the processing entity is to cause an aggregation of the value of the given cached data element with a bin value in a respective bin.
    Type: Application
    Filed: December 15, 2021
    Publication date: June 15, 2023
    Inventors: Ryan D. Menhusen, Darel Neal Emmot
  • Publication number: 20230185707
    Abstract: In some examples, a system partitions a shared memory address space of a shared memory among a plurality of processing entities into a plurality of memory partitions, where a respective memory partition is associated with a respective processing entity. A first processing entity forwards, to a second processing entity, a first data operation, based on a determination by the first processing entity that the first data operation is to be applied to data for a memory partition associated with the second processing entity. The second processing entity applies the first data operation that includes writing data of the first data operation to the memory partition associated with the second processing entity using a non-atomic operation.
    Type: Application
    Filed: December 15, 2021
    Publication date: June 15, 2023
    Inventor: Ryan D. Menhusen
  • Patent number: 11392495
    Abstract: Systems and methods are provided for accurately simulating memory operations of a multi-compute-engine system, such as a multi-core system. Simulation speed can be increased by consolidating location and state information associated with data stored in one or more caches of a simulated cache hierarchy. This consolidation of information can be reflected in a single cache line map or flat cache. Accordingly, searches for data (and copies of the data) in each and every cache of the simulated cache hierarchy can be performed quickly and more efficiently than in conventional simulation systems that operate using sequential, cache-by-cache searching, while maintaining data coherency.
    Type: Grant
    Filed: February 8, 2019
    Date of Patent: July 19, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ryan D. Menhusen, Todd Austin Carrington
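    Sketch: a minimal C++ illustration of a single cache line map, or flat cache, as described in the abstract above (FlatCacheMap and LineInfo are hypothetical names, not the patented simulator's code). Consolidating, per line, which simulated caches hold a copy and in what state lets the simulator answer a lookup with one map probe instead of a sequential, cache-by-cache search.

      // Minimal sketch; types are hypothetical, not the patented simulator. One flat map records,
      // per cache line, where copies live in the simulated hierarchy and their coherence state.
      #include <bitset>
      #include <cstdint>
      #include <unordered_map>

      enum class State : std::uint8_t { Invalid, Shared, Exclusive, Modified };

      struct LineInfo {
          std::bitset<64> present_in;  // which simulated caches hold a copy (one bit per cache)
          State state = State::Invalid;
      };

      class FlatCacheMap {
      public:
          // Record that a simulated cache now holds the line in the given state.
          void insert(std::uint64_t line_addr, unsigned cache_id, State s) {
              LineInfo& info = lines_[line_addr];
              info.present_in.set(cache_id);
              info.state = s;
          }

          // One probe answers "who holds this line, and in what state?" for the whole hierarchy.
          const LineInfo* lookup(std::uint64_t line_addr) const {
              auto it = lines_.find(line_addr);
              return it == lines_.end() ? nullptr : &it->second;
          }

          // Drop every copy, e.g. when a simulated write by another core demands exclusivity.
          void invalidate(std::uint64_t line_addr) { lines_.erase(line_addr); }

      private:
          std::unordered_map<std::uint64_t, LineInfo> lines_;  // the single cache line map
      };

      int main() {
          FlatCacheMap map;
          map.insert(0x1000 >> 6, /*cache_id=*/2, State::Shared);
          const LineInfo* info = map.lookup(0x1000 >> 6);
          return info && info->present_in.test(2) ? 0 : 1;
      }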
  • Publication number: 20200387438
    Abstract: Systems and methods are provided for accurately simulating a hardware computing system. Application programming interfaces (APIs) are called within the code of a process executed in a simulation of the hardware computing system to start, stop, pause, and/or end tracking of one or more hardware events correlated to data about which a user wishes to receive statistics. Defining APIs within the process code allows per-process and per-instruction-level granularity in the statistics.
    Type: Application
    Filed: June 10, 2019
    Publication date: December 10, 2020
    Inventors: Todd Austin Carrington, Ryan D. Menhusen, John L. Byrne
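    Sketch: a minimal C++ illustration of in-code tracking APIs of the kind the abstract above describes (start_tracking, pause_tracking, stop_tracking, and on_event are hypothetical names, not the published interface). Because the calls sit inside the process's own code, statistics accrue only for the bracketed region, which is what gives per-process and per-instruction-level granularity.

      // Minimal sketch; API and event names are hypothetical, not the published interface.
      #include <cstdint>
      #include <iostream>
      #include <string>
      #include <unordered_map>

      namespace simtrack {

      std::unordered_map<std::string, std::uint64_t> stats;  // event name -> count for this process
      bool tracking = false;

      void start_tracking() { tracking = true; }
      void pause_tracking() { tracking = false; }
      void stop_tracking()  { tracking = false; }

      // The simulator would call this on each hardware event; counts accrue only while tracking.
      void on_event(const std::string& name) {
          if (tracking) ++stats[name];
      }

      }  // namespace simtrack

      int main() {
          using namespace simtrack;
          start_tracking();                       // begin collecting inside the region of interest
          for (int i = 0; i < 3; ++i) on_event("cache_miss");
          pause_tracking();
          on_event("cache_miss");                 // not counted: tracking is paused
          stop_tracking();
          std::cout << "cache_miss=" << stats["cache_miss"] << "\n";  // prints cache_miss=3
      }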
  • Publication number: 20200257622
    Abstract: Systems and methods are provided for accurately simulating memory operations of a multi-compute-engine system, such as a multi-core system. Simulation speed can be increased by consolidating location and state information associated with data stored in one or more caches of a simulated cache hierarchy. This consolidation of information can be reflected in a single cache line map or flat cache. Accordingly, searches for data (and copies of the data) in each and every cache of the simulated cache hierarchy can be performed quickly and more efficiently than in conventional simulation systems that operate using sequential, cache-by-cache searching, while maintaining data coherency.
    Type: Application
    Filed: February 8, 2019
    Publication date: August 13, 2020
    Inventors: Ryan D. Menhusen, Todd Austin Carrington