Patents by Inventor Kimberly Keeton
Kimberly Keeton has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Publication number: 20240020155
  Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache-miss is identified (e.g., by restarting or rebooting the resource). The warm instances or accelerators associated with the identified allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
  Type: Application
  Filed: September 28, 2023
  Publication date: January 18, 2024
  Inventors: Dejan S. Milojicic, Kimberly Keeton, Paolo Faraboschi, Cullen E. Bash
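A minimal Python sketch of the dispatch policy this abstract describes: route requests to "warm", non-overloaded resources, and fall back to a fault process (restart) when everything is overloaded or a usable cached copy is unavailable. The class names, the capacity field, and the restart behavior are illustrative assumptions, not the patented implementation.

```python
# Hypothetical dispatcher sketch; names and policy details are assumptions.

class Instance:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # max in-flight requests before "overloaded"
        self.in_flight = 0
        self.warm = True           # a warm instance can serve immediately
        self.cache = set()         # keys this instance has cached

    def restart(self):
        # fault process: reboot the instance, clearing its state
        self.in_flight = 0
        self.cache.clear()
        self.warm = True

class Dispatcher:
    def __init__(self, instances):
        self.instances = instances

    def dispatch(self, key):
        # prefer a warm instance that already caches the key
        for inst in self.instances:
            if inst.warm and key in inst.cache and inst.in_flight < inst.capacity:
                inst.in_flight += 1
                return inst
        # otherwise pick any warm, non-overloaded instance
        for inst in self.instances:
            if inst.warm and inst.in_flight < inst.capacity:
                inst.cache.add(key)
                inst.in_flight += 1
                return inst
        # every instance overloaded: restart one and route the request there
        victim = self.instances[0]
        victim.restart()
        victim.cache.add(key)
        victim.in_flight += 1
        return victim

d = Dispatcher([Instance("a", 1), Instance("b", 1)])
first = d.dispatch("f")   # lands on warm instance "a"
second = d.dispatch("f")  # "a" is now full, so "b" serves it
print(first.name, second.name)   # prints: a b
```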
- Patent number: 11809218
  Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache-miss is identified (e.g., by restarting or rebooting the resource). The warm instances or accelerators associated with the identified allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
  Type: Grant
  Filed: March 11, 2021
  Date of Patent: November 7, 2023
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Dejan S. Milojicic, Kimberly Keeton, Paolo Faraboschi, Cullen E. Bash
- Patent number: 11561607
  Abstract: Encoding of domain logic rules in an analog content addressable memory (aCAM) is disclosed. By encoding domain logic in an aCAM, rapid and flexible search capabilities are enabled, including the capability to search ranges of analog values, fuzzy match capabilities, and optimized parameter search capabilities. This is achieved with low latency by using only a small number of clock cycles at low power. A domain logic ruleset may be represented using various data structures such as decision trees, directed graphs, or the like. These representations can be converted to a table of values, where each table column can be directly mapped to a corresponding row of the aCAM.
  Type: Grant
  Filed: October 30, 2020
  Date of Patent: January 24, 2023
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Catherine Graves, Can Li, John Paul Strachan, Dejan S. Milojicic, Kimberly Keeton
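A software sketch of the table encoding the abstract describes: decision-tree rules become rows of per-feature analog ranges, and a search matches an input vector against every row in parallel (here, a simple loop stands in for the aCAM's parallel match). The rule set, feature names, and interval convention are illustrative assumptions, not the patented circuit.

```python
# Hypothetical model of rules-as-range-rows; data and names are assumptions.

ANY = (float("-inf"), float("inf"))   # "don't care" interval for a feature

# each row: one (low, high) interval per feature, then the rule's output
rules = [
    ([(0.0, 30.0), ANY],          "normal"),
    ([(30.0, 60.0), (0.0, 5.0)],  "warn"),
    ([(30.0, 60.0), (5.0, 99.0)], "alarm"),
]

def acam_search(rows, inputs):
    """Return labels of all rows whose every interval contains the input,
    as an aCAM would flag matching rows in parallel."""
    matches = []
    for intervals, label in rows:
        if all(lo <= x <= hi for (lo, hi), x in zip(intervals, inputs)):
            matches.append(label)
    return matches

print(acam_search(rules, [25.0, 7.0]))   # ['normal']
print(acam_search(rules, [45.0, 2.0]))   # ['warn']
```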
- Publication number: 20220291952
  Abstract: Systems and methods are provided for incorporating an optimized dispatcher with an FaaS infrastructure to permit and restrict access to resources. For example, the dispatcher may assign requests to “warm” resources and initiate a fault process if the resource is overloaded or a cache-miss is identified (e.g., by restarting or rebooting the resource). The warm instances or accelerators associated with the identified allocation size may be commensurate with the demand and help dynamically route requests to faster accelerators.
  Type: Application
  Filed: March 11, 2021
  Publication date: September 15, 2022
  Inventors: Dejan S. Milojicic, Kimberly Keeton, Paolo Faraboschi, Cullen E. Bash
- Publication number: 20220138204
  Abstract: Encoding of domain logic rules in an analog content addressable memory (aCAM) is disclosed. By encoding domain logic in an aCAM, rapid and flexible search capabilities are enabled, including the capability to search ranges of analog values, fuzzy match capabilities, and optimized parameter search capabilities. This is achieved with low latency by using only a small number of clock cycles at low power. A domain logic ruleset may be represented using various data structures such as decision trees, directed graphs, or the like. These representations can be converted to a table of values, where each table column can be directly mapped to a corresponding row of the aCAM.
  Type: Application
  Filed: October 30, 2020
  Publication date: May 5, 2022
  Inventors: Catherine Graves, Can Li, John Paul Strachan, Dejan S. Milojicic, Kimberly Keeton
- Patent number: 11144237
  Abstract: Systems and methods for concurrent reading and writing in shared, persistent byte-addressable non-volatile memory are described herein. One method includes, in response to initiating a write sequence to one or more memory elements, checking an identifier memory element to determine whether a write sequence is in progress. In addition, the method includes updating an ingress counter. The method also includes adding process identification associated with a writer node to the identifier memory element. Next, a write operation is performed. After the write operation, an egress counter is incremented and the identifier memory element is reset to an expected value.
  Type: Grant
  Filed: August 1, 2019
  Date of Patent: October 12, 2021
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Milind M. Chabbi, Yupu Zhang, Haris Volos, Kimberly Keeton
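A single-process sketch of the write protocol the abstract walks through: check the identifier element, bump the ingress counter, record the writer's id, perform the write, bump the egress counter, and reset the identifier to its expected value. Readers use the counters seqlock-style to detect in-flight writes. All names are illustrative; the patent targets shared, persistent non-volatile memory with real concurrency.

```python
# Hypothetical in-memory model of the counter/identifier protocol.

EXPECTED = 0   # value of the identifier slot when no write is in progress

class Region:
    def __init__(self):
        self.ingress = 0
        self.egress = 0
        self.writer_id = EXPECTED
        self.data = None

def write(region, pid, value):
    if region.writer_id != EXPECTED:
        raise RuntimeError("another write is in progress")
    region.ingress += 1          # announce the write
    region.writer_id = pid       # record the writer's process id
    region.data = value          # the actual write operation
    region.egress += 1           # announce completion
    region.writer_id = EXPECTED  # reset to the expected value

def read(region):
    # a reader retries until no write began during its read
    while True:
        start = region.egress
        value = region.data
        if region.ingress == start:
            return value

region = Region()
write(region, pid=42, value="hello")
print(read(region))   # prints: hello
```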
- Patent number: 10997064
  Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
  Type: Grant
  Filed: June 26, 2019
  Date of Patent: May 4, 2021
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Sanketh Nalli, Haris Volos, Kimberly Keeton
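A minimal sketch of the buffering scheme described above: updates arrive tagged with an epoch, each gets its own buffer entry, and eviction to (simulated) non-volatile memory happens strictly in epoch order even when updates arrive out of order. The class and method names are assumptions for illustration.

```python
# Hypothetical write-ordering-buffer model; a dict stands in for NVM.
from collections import defaultdict

class WriteOrderingBuffer:
    def __init__(self, nvm):
        self.entries = defaultdict(list)  # epoch -> [(addr, value), ...]
        self.nvm = nvm

    def accept(self, epoch, addr, value):
        # one buffer entry is allocated per propagated update
        self.entries[epoch].append((addr, value))

    def evict_all(self):
        # flush buffer entries to non-volatile memory in epoch order
        for epoch in sorted(self.entries):
            for addr, value in self.entries[epoch]:
                self.nvm[addr] = value
        self.entries.clear()

nvm = {}
buf = WriteOrderingBuffer(nvm)
buf.accept(2, "log", "commit")    # the later epoch arrives first
buf.accept(1, "data", "payload")
buf.evict_all()
print(nvm)
```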
- Patent number: 10942824
  Abstract: Exemplary embodiments herein describe programming models and frameworks for providing parallel and resilient tasks. Tasks are created in accordance with predetermined structures. Defined tasks are stored as data objects in a shared pool of memory that is made up of disaggregated memory communicatively coupled via a high-performance interconnect that supports atomic operations as described herein. Heterogeneous compute nodes are configured to execute tasks stored in the shared memory. When compute nodes fail, they do not impact the shared memory, the tasks or other data stored in the shared memory, or the other non-failing compute nodes. The non-failing compute nodes can take on the responsibility of executing tasks owned by other compute nodes, including tasks of a compute node that fails, without needing a centralized manager or scheduler to re-assign those tasks. Task processing can therefore be performed in parallel and without impact from node failures.
  Type: Grant
  Filed: October 8, 2018
  Date of Patent: March 9, 2021
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Haris Volos, Kimberly Keeton, Sharad Singhal, Yupu Zhang
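A toy sketch of the failure-adoption idea: tasks live as objects in a shared pool, each owned by a node, and when a node fails a survivor adopts its pending tasks without a central scheduler. Everything here (the pool layout, `submit`/`adopt_tasks`, the in-place state change) is an illustrative assumption; the real system relies on atomic operations over disaggregated, fabric-attached memory.

```python
# Hypothetical shared task pool; names and structure are assumptions.

shared_pool = {}   # task id -> {"owner": node, "state": ..., "fn": ...}

def submit(task_id, owner, fn):
    shared_pool[task_id] = {"owner": owner, "state": "pending", "fn": fn}

def run_owned(node, results):
    # a node executes the pending tasks it owns
    for tid, task in shared_pool.items():
        if task["owner"] == node and task["state"] == "pending":
            task["state"] = "done"        # would be an atomic CAS in practice
            results[tid] = task["fn"]()

def adopt_tasks(of_failed, by):
    # survivors take over a failed node's tasks; the shared pool itself
    # is unaffected by the node failure
    for task in shared_pool.values():
        if task["owner"] == of_failed and task["state"] == "pending":
            task["owner"] = by

results = {}
submit("t1", "node-a", lambda: 1 + 1)
submit("t2", "node-b", lambda: 2 * 3)
# node-b fails before running t2; node-a adopts and runs everything
adopt_tasks(of_failed="node-b", by="node-a")
run_owned("node-a", results)
print(results)   # prints: {'t1': 2, 't2': 6}
```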
- Publication number: 20210034281
  Abstract: Systems and methods for concurrent reading and writing in shared, persistent byte-addressable non-volatile memory are described herein. One method includes, in response to initiating a write sequence to one or more memory elements, checking an identifier memory element to determine whether a write sequence is in progress. In addition, the method includes updating an ingress counter. The method also includes adding process identification associated with a writer node to the identifier memory element. Next, a write operation is performed. After the write operation, an egress counter is incremented and the identifier memory element is reset to an expected value.
  Type: Application
  Filed: August 1, 2019
  Publication date: February 4, 2021
  Inventors: Milind M. Chabbi, Yupu Zhang, Haris Volos, Kimberly Keeton
- Patent number: 10854331
  Abstract: A transformation on raw data is applied to produce transformed data, where the transformation includes at least one selected from among a summary of the raw data or a transform of the raw data between different domains. In response to a query to access data, the query is processed using the transformed data.
  Type: Grant
  Filed: October 26, 2014
  Date of Patent: December 1, 2020
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Henggang Cui, Kimberly Keeton, Indrajit Roy, Krishnamurthy Viswanathan, Haris Volos
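A small sketch of the summary case described above: precompute a per-key summary of the raw records once, then answer queries from the summary instead of rescanning the raw data. The data, key names, and functions are illustrative assumptions.

```python
# Hypothetical summary transformation; the dataset is made up.

raw = [("web", 120), ("db", 45), ("web", 80), ("cache", 5)]

def summarize(records):
    # the transformation: a per-key sum summary of the raw data
    summary = {}
    for key, value in records:
        summary[key] = summary.get(key, 0) + value
    return summary

def query_total(summary, key):
    # the query is processed against the transformed data
    return summary.get(key, 0)

summary = summarize(raw)
print(query_total(summary, "web"))   # prints: 200
```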
- Patent number: 10698878
  Abstract: In some examples, a graph processing server is communicatively linked to a shared memory. The shared memory may also be accessible to a different graph processing server. The graph processing server may compute an updated vertex value for a graph portion handled by the graph processing server and flush the updated vertex value to the shared memory, for retrieval by the different graph processing server. The graph processing server may also notify the different graph processing server indicating that the updated vertex value has been flushed to the shared memory.
  Type: Grant
  Filed: March 6, 2015
  Date of Patent: June 30, 2020
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Stanko Novakovic, Kimberly Keeton, Paolo Faraboschi, Robert Schreiber
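A sketch of the flush-and-notify flow in the abstract: one server computes an updated vertex value (here, an assumed neighbor-averaging update), flushes it to a shared region, and notifies the peer, which then retrieves the value. The update rule, the dict-as-shared-memory, and the queue-as-notification-channel are all illustrative assumptions.

```python
# Hypothetical two-server graph update; names and update rule are assumptions.
from queue import Queue

shared_memory = {}        # vertex id -> value, visible to both servers
notifications = Queue()   # stand-in for the notification channel

def compute_and_flush(vertex, neighbors):
    # assumed update rule: average of the neighbors' current values
    updated = sum(shared_memory.get(n, 0.0) for n in neighbors) / len(neighbors)
    shared_memory[vertex] = updated   # flush to shared memory
    notifications.put(vertex)         # notify the other server

def peer_receive():
    vertex = notifications.get()
    return vertex, shared_memory[vertex]   # retrieve the flushed value

shared_memory.update({"b": 0.25, "c": 0.75})
compute_and_flush("a", ["b", "c"])
vertex, value = peer_receive()
print(vertex, value)   # prints: a 0.5
```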
- Publication number: 20200110676
  Abstract: Exemplary embodiments herein describe programming models and frameworks for providing parallel and resilient tasks. Tasks are created in accordance with predetermined structures. Defined tasks are stored as data objects in a shared pool of memory that is made up of disaggregated memory communicatively coupled via a high-performance interconnect that supports atomic operations as described herein. Heterogeneous compute nodes are configured to execute tasks stored in the shared memory. When compute nodes fail, they do not impact the shared memory, the tasks or other data stored in the shared memory, or the other non-failing compute nodes. The non-failing compute nodes can take on the responsibility of executing tasks owned by other compute nodes, including tasks of a compute node that fails, without needing a centralized manager or scheduler to re-assign those tasks. Task processing can therefore be performed in parallel and without impact from node failures.
  Type: Application
  Filed: October 8, 2018
  Publication date: April 9, 2020
  Inventors: Haris Volos, Kimberly Keeton, Sharad Singhal, Yupu Zhang
- Patent number: 10489310
  Abstract: Determining cache value currency using persistent markers is disclosed herein. In one example, a cache entry is retrieved from a local cache memory device. The cache entry includes a key, a value to be used by the computing device, and a marker flag to determine whether the cache entry is current. The local cache memory device also includes a marker location that indicates a location of a marker in a shared persistent fabric-attached memory (FAM). Using a marker location, the marker is retrieved from the shared persistent FAM. From the marker and the marker flag, it is determined whether the cache entry is current. The shared FAM pool is connected to the local cache memory devices of multiple computing devices.
  Type: Grant
  Filed: October 20, 2017
  Date of Patent: November 26, 2019
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Kimberly Keeton, Yupu Zhang, Haris Volos, Ram Swaminathan, Evan R. Kirshenbaum
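A minimal sketch of the currency check the abstract outlines: a cache entry records the marker it saw when cached, and it is current only while that flag still matches the marker stored in the shared fabric-attached memory (modeled here as a dict). Field names and the invalidation step are illustrative assumptions.

```python
# Hypothetical persistent-marker currency check; names are assumptions.

shared_fam = {"marker@0": 7}   # marker location -> marker value

class CacheEntry:
    def __init__(self, key, value, marker_flag, marker_location):
        self.key = key
        self.value = value
        self.marker_flag = marker_flag          # marker seen when cached
        self.marker_location = marker_location  # where the live marker sits

def is_current(entry):
    marker = shared_fam[entry.marker_location]  # fetch marker from shared FAM
    return entry.marker_flag == marker

entry = CacheEntry("cfg", "v1", marker_flag=7, marker_location="marker@0")
print(is_current(entry))    # prints: True  (markers match)

shared_fam["marker@0"] = 8  # another computing device invalidates the data
print(is_current(entry))    # prints: False (the cached value is stale)
```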
- Publication number: 20190317891
  Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
  Type: Application
  Filed: June 26, 2019
  Publication date: October 17, 2019
  Inventors: Sanketh Nalli, Haris Volos, Kimberly Keeton
- Patent number: 10417215
  Abstract: A system includes processing nodes and shared memory. Each processing node includes a processor and local memory. The local memory of each processing node stores at least a partial copy of the immutable data stage of a dataset. The shared memory is accessible by each processing node and stores a sole copy of the mutable data stage of the dataset and a master copy of the immutable data stage of the dataset.
  Type: Grant
  Filed: September 29, 2017
  Date of Patent: September 17, 2019
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Huanchen Zhang, Kimberly Keeton
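A toy layout of the two-stage split described above: every node keeps a local copy of the immutable stage for fast reads, while shared memory holds the sole mutable stage plus the master immutable copy. The dict structure, field names, and operations are illustrative assumptions.

```python
# Hypothetical immutable/mutable stage layout; names are assumptions.

shared = {
    "immutable_master": {"schema": "v1", "base": [1, 2, 3]},
    "mutable": {"appended": []},   # sole copy, shared by all nodes
}

class Node:
    def __init__(self):
        # each processing node caches (at least part of) the immutable stage
        self.local_immutable = dict(shared["immutable_master"])

    def read_base(self):
        return self.local_immutable["base"]       # served from local memory

    def append(self, x):
        shared["mutable"]["appended"].append(x)   # single shared mutable copy

a, b = Node(), Node()
a.append(4)
# b reads immutable data locally, yet sees a's write via shared memory
print(b.read_base(), shared["mutable"]["appended"])
```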
- Patent number: 10372602
  Abstract: Examples relate to ordering updates for nonvolatile memory accesses. In some examples, a first update that is propagated from a write-through processor cache of a processor is received by a write ordering buffer, where the first update is associated with a first epoch. The first update is stored in a first buffer entry of the write ordering buffer. At this stage, a second update that is propagated from the write-through processor cache is received, where the second update is associated with a second epoch. A second buffer entry of the write ordering buffer is allocated to store the second update. The first buffer entry and the second buffer entry can then be evicted to non-volatile memory in epoch order.
  Type: Grant
  Filed: January 30, 2015
  Date of Patent: August 6, 2019
  Assignee: Hewlett Packard Enterprise Development LP
  Inventors: Sanketh Nalli, Haris Volos, Kimberly Keeton
- Publication number: 20190121750
  Abstract: Determining cache value currency using persistent markers is disclosed herein. In one example, a cache entry is retrieved from a local cache memory device. The cache entry includes a key, a value to be used by the computing device, and a marker flag to determine whether the cache entry is current. The local cache memory device also includes a marker location that indicates a location of a marker in a shared persistent fabric-attached memory (FAM). Using a marker location, the marker is retrieved from the shared persistent FAM. From the marker and the marker flag, it is determined whether the cache entry is current. The shared FAM pool is connected to the local cache memory devices of multiple computing devices.
  Type: Application
  Filed: October 20, 2017
  Publication date: April 25, 2019
  Applicant: Hewlett Packard Enterprise Development LP
  Inventors: Kimberly Keeton, Yupu Zhang, Haris Volos, Ram Swaminathan, Evan R. Kirshenbaum
- Publication number: 20190102416
  Abstract: A system includes processing nodes and shared memory. Each processing node includes a processor and local memory. The local memory of each processing node stores at least a partial copy of the immutable data stage of a dataset. The shared memory is accessible by each processing node and stores a sole copy of the mutable data stage of the dataset and a master copy of the immutable data stage of the dataset.
  Type: Application
  Filed: September 29, 2017
  Publication date: April 4, 2019
  Inventors: Huanchen Zhang, Kimberly Keeton
- Publication number: 20180322158
  Abstract: Example implementations relate to changing concurrency control modes. An example implementation includes controlling a concurrency control mode of a data slot that stores a data value. A concurrency control mode of a data slot may be changed from an optimistic concurrency control mode to a multi-version concurrency control mode responsive to detecting a read-write conflict for the data slot. A concurrency control mode of a data slot may be changed from a multi-version concurrency control mode to an optimistic concurrency control mode responsive to detecting that the data slot satisfies a low contention criterion.
  Type: Application
  Filed: May 2, 2017
  Publication date: November 8, 2018
  Inventors: Huanchen Zhang, Kimberly Keeton
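A sketch of the per-slot mode switching described above: a slot starts in optimistic mode, switches to multi-version mode on a detected read-write conflict, and reverts once a low-contention criterion holds. The threshold, window mechanics, and names are illustrative assumptions; real OCC/MVCC machinery is omitted.

```python
# Hypothetical mode-switching slot; threshold and names are assumptions.

LOW_CONTENTION_THRESHOLD = 1   # conflicts per window counted as "low"

class DataSlot:
    def __init__(self, value):
        self.value = value
        self.mode = "occ"      # optimistic concurrency control
        self.conflicts = 0     # conflicts seen in the current window

    def report_conflict(self):
        self.conflicts += 1
        if self.mode == "occ":
            # a read-write conflict: switch to multi-version mode
            self.mode = "mvcc"

    def end_window(self):
        # low-contention criterion satisfied: revert to the cheaper
        # optimistic mode; otherwise stay in multi-version mode
        if self.mode == "mvcc" and self.conflicts <= LOW_CONTENTION_THRESHOLD:
            self.mode = "occ"
        self.conflicts = 0

slot = DataSlot(10)
slot.report_conflict()
print(slot.mode)    # prints: mvcc
slot.end_window()
print(slot.mode)    # prints: occ (contention was low this window)
```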
- Publication number: 20180025043
  Abstract: In some examples, a graph processing server is communicatively linked to a shared memory. The shared memory may also be accessible to a different graph processing server. The graph processing server may compute an updated vertex value for a graph portion handled by the graph processing server and flush the updated vertex value to the shared memory, for retrieval by the different graph processing server. The graph processing server may also notify the different graph processing server indicating that the updated vertex value has been flushed to the shared memory.
  Type: Application
  Filed: March 6, 2015
  Publication date: January 25, 2018
  Inventors: Stanko Novakovic, Kimberly Keeton, Paolo Faraboschi, Robert Schreiber