Patents by Inventor Thomas Willhalm

Thomas Willhalm has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210064531
    Abstract: Methods and apparatus for software-defined coherent caching of pooled memory. The pooled memory is implemented in an environment having a disaggregated architecture where compute resources such as compute platforms are connected to disaggregated memory via a network or fabric. Software-defined caching policies are implemented in hardware in a processor SoC or discrete device such as a Network Interface Controller (NIC) by programming logic in an FPGA or accelerator on the SoC or discrete device. The programmed logic is configured to implement software-defined caching policies in hardware for effecting disaggregated memory (DM) caching in an associated DM cache of at least a portion of an address space allocated for a software application in the disaggregated memory.
    Type: Application
    Filed: November 9, 2020
    Publication date: March 4, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Alexander Bachmutsky, Zhongyan Lu, Thomas Willhalm
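The entry above describes caching policies defined in software but enforced by programmed hardware logic. The following minimal Python sketch illustrates the idea only; the class, the admission-policy callable, and all names are hypothetical and not the patented implementation.

```python
# Illustrative sketch: a software-supplied admission policy "programmed"
# into a cache that fronts a disaggregated memory (DM) address range.

class DMCache:
    def __init__(self, capacity, admit_policy):
        self.capacity = capacity
        self.admit_policy = admit_policy  # software-defined: addr -> bool
        self.lines = {}                   # addr -> data (the local DM cache)

    def read(self, addr, remote_memory):
        if addr in self.lines:            # hit in the local DM cache
            return self.lines[addr], True
        data = remote_memory[addr]        # miss: fetch over the fabric
        if self.admit_policy(addr) and len(self.lines) < self.capacity:
            self.lines[addr] = data       # cache only what the policy admits
        return data, False

# Example policy: only cache the application's hot address range below 256.
cache = DMCache(capacity=4, admit_policy=lambda addr: addr < 256)
pooled = {100: "hot", 1000: "cold"}

hot, hit1 = cache.read(100, pooled)    # miss, admitted by the policy
hot, hit2 = cache.read(100, pooled)    # now a hit
cold, hit3 = cache.read(1000, pooled)  # miss, policy rejects admission
cold, hit4 = cache.read(1000, pooled)  # still a miss: never cached
```

Swapping in a different `admit_policy` changes caching behavior without changing the "hardware" datapath, which is the software-defined aspect the abstract emphasizes.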
  • Publication number: 20210051096
    Abstract: Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
    Type: Application
    Filed: October 30, 2020
    Publication date: February 18, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Raj Ramanujan, Brian Slechta
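The QoS-throttling flow above (monitor resource QoS, detect a throttling condition, exchange throttling messages between nodes) can be sketched as follows. This is an illustrative model only; the class and field names are assumptions, not the patented HFI design.

```python
class HostFabricInterface:
    def __init__(self, node_id, qos_threshold):
        self.node_id = node_id
        self.qos_threshold = qos_threshold
        self.resource_load = {}     # resource name -> monitored utilization
        self.outbox = []            # throttling messages for peer nodes
        self.throttled = set()      # locally throttled resources

    def monitor(self, resource, load):
        self.resource_load[resource] = load
        if load > self.qos_threshold:   # throttling condition detected
            self.outbox.append({"from": self.node_id, "resource": resource})

    def receive(self, message):
        # Throttling action: mark the named resource as rate-limited here.
        self.throttled.add(message["resource"])

hfi_a = HostFabricInterface("node-a", qos_threshold=0.8)
hfi_b = HostFabricInterface("node-b", qos_threshold=0.8)
hfi_a.monitor("memory-bw", 0.95)    # exceeds the QoS level -> message generated
for msg in hfi_a.outbox:
    hfi_b.receive(msg)              # peer performs the throttling action
```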
  • Patent number: 10922265
    Abstract: Various embodiments are generally directed to an apparatus, method and other techniques to receive a transaction request to perform a transaction with the memory, the transaction request including a synchronization indication to indicate utilization of transaction synchronization to perform the transaction. Embodiments may include sending a request to a caching agent to perform the transaction, receiving a response from the caching agent, the response to indicate whether the transaction conflicts or does not conflict with another transaction, and performing the transaction if the response indicates the transaction does not conflict with the other transaction, or delaying the transaction for a period of time if the response indicates the transaction does conflict with the other transaction.
    Type: Grant
    Filed: June 27, 2017
    Date of Patent: February 16, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Nicolae Popovici, Thomas Willhalm
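The transaction-synchronization protocol above (ask a caching agent whether a transaction conflicts, then perform or delay it) can be sketched as below. All names here are illustrative assumptions, not the claimed hardware.

```python
class CachingAgent:
    def __init__(self):
        self.active = set()   # addresses with in-flight synchronized transactions

    def request(self, addr):
        # Respond whether a transaction on addr conflicts with another.
        if addr in self.active:
            return "conflict"
        self.active.add(addr)
        return "ok"

    def complete(self, addr):
        self.active.discard(addr)

def try_transaction(agent, addr, memory, value):
    if agent.request(addr) == "conflict":
        return "delayed"             # delay for a period, retry later
    memory[addr] = value             # no conflict: perform the transaction
    agent.complete(addr)
    return "done"

agent, memory = CachingAgent(), {}
first = try_transaction(agent, 0x40, memory, "a")   # no conflict -> performed
agent.active.add(0x40)                              # simulate an in-flight txn
second = try_transaction(agent, 0x40, memory, "b")  # conflicts -> delayed
```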
  • Patent number: 10915791
    Abstract: Technology for a memory controller is described. The memory controller can receive a request to store training data. The request can include a model identifier (ID) that identifies a model that is associated with the training data. The memory controller can send a write request to store the training data associated with the model ID in a memory region in a pooled memory that is allocated for the model ID. The training data that is stored in the memory region in the pooled memory can be addressable based on the model ID.
    Type: Grant
    Filed: December 27, 2017
    Date of Patent: February 9, 2021
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm
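The key idea above is that pooled memory regions become addressable by model ID rather than by raw address. A minimal sketch, with all names hypothetical:

```python
class PooledMemoryController:
    def __init__(self):
        self.regions = {}   # model ID -> region holding training-data writes

    def store(self, model_id, training_data):
        # Allocate a region for the model ID on first use, then append to it.
        self.regions.setdefault(model_id, []).append(training_data)

    def load(self, model_id):
        # Training data is addressable by model ID, not by physical address.
        return self.regions.get(model_id, [])

ctrl = PooledMemoryController()
ctrl.store("model-42", [0.1, 0.2])   # "model-42" is an illustrative model ID
ctrl.store("model-42", [0.3])
```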
  • Publication number: 20210011864
    Abstract: In one embodiment, an apparatus includes: a table to store a plurality of entries, each entry to identify a memory domain of a system and a coherency status of the memory domain; and a control circuit coupled to the table. The control circuit may be configured to receive a request to change a coherency status of a first memory domain of the system, and dynamically update a first entry of the table for the first memory domain to change the coherency status between a coherent memory domain and a non-coherent memory domain, in response to the request. Other embodiments are described and claimed.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Alexander Bachmutsky
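The table-plus-control-circuit structure above maps directly onto a small sketch: entries pairing a memory domain with a coherency status, dynamically updated on request. Names are illustrative only.

```python
class CoherencyTable:
    def __init__(self):
        self.entries = {}   # domain ID -> "coherent" | "non-coherent"

    def add_domain(self, domain, status="coherent"):
        self.entries[domain] = status

    def request_change(self, domain, status):
        # Control circuit: dynamically update the entry for this domain
        # in response to a coherency-status change request.
        if domain not in self.entries:
            raise KeyError(domain)
        self.entries[domain] = status

table = CoherencyTable()
table.add_domain("domain0", "coherent")
table.request_change("domain0", "non-coherent")
```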
  • Patent number: 10885004
    Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
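The one-sentence abstract above describes deferring a flush until every write in a cache-line group has completed. A minimal sketch of that ordering rule, with hypothetical names:

```python
class FlushGroup:
    def __init__(self, cache_lines):
        self.pending = set(cache_lines)   # lines whose writes are incomplete
        self.flushed = []

    def complete_write(self, line, persistent_memory, cache):
        self.pending.discard(line)
        if not self.pending:              # all writes done: flush the group
            for l in cache:
                persistent_memory[l] = cache[l]
                self.flushed.append(l)

cache = {"L1": "x", "L2": "y"}
pmem = {}
group = FlushGroup(["L1", "L2"])
group.complete_write("L1", pmem, cache)   # group not flushed yet
not_yet = dict(pmem)
group.complete_write("L2", pmem, cache)   # last write completes -> flush all
```

Grouping the flush this way keeps persistent memory from ever observing a partially written group.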
  • Publication number: 20200387310
    Abstract: A memory controller method and apparatus includes a modification of at least one of a first timing scheme or a second timing scheme based on information about one or more data requests to be included in at least one of a first queue scheduler or a second queue scheduler, the first timing scheme indicating when one or more requests in the first queue scheduler are to be issued to the first memory set via a first memory set interface and over a channel, the second timing scheme indicating when one or more requests in the second queue scheduler are to be issued to the second memory set via a second memory set interface and over the channel.
    Type: Application
    Filed: June 19, 2020
    Publication date: December 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark Schmisseur
  • Patent number: 10846014
    Abstract: Examples relate to a method for a memory module, a method for a memory controller, a method for a processor, to a memory module controller device or apparatus, to a memory controller device or apparatus, to a processor device or apparatus, a memory module, a memory controller, a processor, a computer system and a computer program. The method for the memory module comprises obtaining one or more memory write instructions of a group memory write instruction. The group memory write instruction comprises a plurality of memory write instructions to be executed atomically. The one or more memory write instructions relate to one or more memory addresses associated with memory of the memory module. The method comprises executing the one or more memory write instructions using previously unallocated memory of the memory module. The method comprises obtaining a commit instruction for the group memory write instruction.
    Type: Grant
    Filed: December 17, 2018
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Ginger Gilsdorf, Karthik Kumar, Thomas Willhalm, Mark Schmisseur, Francesc Guim Bernat
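The atomic group-write scheme above (execute the group's writes into previously unallocated memory, then commit) resembles a shadow-write protocol. The sketch below is an assumption-laden illustration, not the claimed method.

```python
class MemoryModule:
    def __init__(self, contents):
        self.contents = dict(contents)   # committed, visible memory
        self.shadow = {}                 # previously unallocated staging area

    def group_write(self, writes):
        # Execute each write of the group into unallocated memory first,
        # leaving the committed contents untouched.
        for addr, value in writes:
            self.shadow[addr] = value

    def commit(self):
        # Commit instruction: make the whole group visible atomically.
        self.contents.update(self.shadow)
        self.shadow = {}

mod = MemoryModule({0: "old0", 1: "old1"})
mod.group_write([(0, "new0"), (1, "new1")])
before = dict(mod.contents)      # group not yet visible
mod.commit()
```

Because readers only ever see `contents`, the group appears either entirely applied or not at all.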
  • Patent number: 10846230
    Abstract: Embodiments of the invention include a machine-readable medium having stored thereon at least one instruction, which if performed by a machine causes the machine to perform a method that includes decoding, with a node, an invalidate instruction; and executing, with the node, the invalidate instruction for invalidating a memory range specified across a fabric interconnect.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat, Brian J. Slechta
  • Patent number: 10838647
    Abstract: Devices and systems for distributing data across disaggregated memory resources are disclosed and described. An acceleration controller device can include an adaptive data migration engine (ADME) configured to communicatively couple to a fabric interconnect, and is further configured to monitor application data performance metrics at the plurality of disaggregated memory pools for a plurality of applications executing on the plurality of compute resources, select a current application having a current application data performance metric, determine an alternate memory pool from the plurality of disaggregated memory pools estimated to increase application data performance relative to the current application data performance metric, and migrate the data from the current memory pool to the alternate memory pool.
    Type: Grant
    Filed: March 14, 2018
    Date of Patent: November 17, 2020
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm, Mark A. Schmisseur
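The ADME decision loop above (compare the current pool's metric against alternatives, migrate if an alternative is expected to do better) can be sketched as below; pool names and latency figures are purely illustrative.

```python
def pick_alternate_pool(current_pool, pools, metrics):
    # metrics[pool] = estimated access latency (lower is better). The ADME
    # picks the pool expected to improve on the current application's metric.
    best = min(pools, key=lambda p: metrics[p])
    return best if metrics[best] < metrics[current_pool] else current_pool

def migrate(data_by_pool, src, dst, key):
    # Move the application's data only when an alternate pool was chosen.
    if src != dst:
        data_by_pool[dst][key] = data_by_pool[src].pop(key)

pools = {"near-pool": {}, "far-pool": {"app1": b"data"}}
latency = {"near-pool": 90, "far-pool": 400}   # ns, illustrative figures
target = pick_alternate_pool("far-pool", list(pools), latency)
migrate(pools, "far-pool", target, "app1")
```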
  • Patent number: 10824673
    Abstract: A system includes a non-volatile random access memory storing a column store main fragment of a column of a database table, and a processing unit to read the column store main fragment from the non-volatile random access memory. A volatile random access memory storing a column store delta fragment of the column of the database table may also be included, in which the processing unit is to write to the column store delta fragment. According to some systems, the stored column store main fragment is byte-addressable, and is copied from the volatile random access memory to the non-volatile random access memory without using a filesystem cache.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 3, 2020
    Assignee: SAP SE
    Inventors: Oliver Rebholz, Ivan Schreter, Abdelkader Sellami, Daniel Booss, Gunter Radestock, Peter Bumbulis, Alexander Boehm, Frank Renkes, Werner Thesing, Thomas Willhalm
  • Publication number: 20200319696
    Abstract: Methods and apparatus for platform ambient data management schemes for tiered architectures. A platform including one or more CPUs coupled to multiple tiers of memory comprising various types of DIMMs (e.g., DRAM, hybrid, DCPMM) is powered by a battery subsystem receiving input energy harvested from one or more green energy sources. Energy threshold conditions are detected, and associated memory reconfiguration is performed. The memory reconfiguration may include, but is not limited to, copying data between DIMMs (or memory ranks on the DIMMs) in the same tier, copying data between a first type of memory and a second type of memory on a hybrid DIMM, and flushing dirty lines in a DIMM in a first memory tier being used as a cache for a second memory tier. Following data copy and flushing operations, the DIMMs and/or their memory devices are powered down and/or deactivated.
    Type: Application
    Filed: June 21, 2020
    Publication date: October 8, 2020
    Inventors: Karthik Kumar, Thomas Willhalm, Francesc Guim Bernat
  • Patent number: 10782969
    Abstract: A processor of an aspect includes a plurality of packed data registers, and a decode unit to decode a vector cache line write back instruction. The vector cache line write back instruction is to indicate a source packed memory indices operand that is to include a plurality of memory indices. The processor also includes a cache coherency system coupled with the packed data registers and the decode unit. The cache coherency system, in response to the vector cache line write back instruction, is to cause any dirty cache lines, in any caches in a coherency domain, which are to have stored therein data for any of a plurality of memory addresses that are to be indicated by any of the memory indices of the source packed memory indices operand, to be written back toward one or more memories. Other processors, methods, and systems are also disclosed.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: September 22, 2020
    Assignee: Intel Corporation
    Inventors: Kshitij A. Doshi, Thomas Willhalm
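The semantics above (for every memory index in the source operand, write back any matching dirty line in any cache of the coherency domain) can be modeled in a few lines. This is a behavioral sketch with invented names, not the instruction's microarchitecture.

```python
def vector_cache_line_write_back(indices, caches, memory):
    # For each memory index in the packed source operand, write back any
    # dirty line holding that address in any cache of the coherency domain.
    for idx in indices:
        for cache in caches:
            line = cache.get(idx)
            if line and line["dirty"]:
                memory[idx] = line["data"]
                line["dirty"] = False     # written back, not invalidated

l1 = {0x10: {"data": "a", "dirty": True}}
l2 = {0x20: {"data": "b", "dirty": False}}
mem = {}
vector_cache_line_write_back([0x10, 0x20, 0x30], [l1, l2], mem)
```

Note that clean lines and unmapped indices cause no memory traffic, and written-back lines stay resident in the cache.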
  • Publication number: 20200285420
    Abstract: In one embodiment, an apparatus includes: a first queue to store requests that are guaranteed to be delivered to a persistent memory; a second queue to store requests that are not guaranteed to be delivered to the persistent memory; a control circuit to receive the requests and to direct the requests to the first queue or the second queue; and an egress circuit coupled to the first queue to deliver the requests stored in the first queue to the persistent memory even when a power failure occurs. Other embodiments are described and claimed.
    Type: Application
    Filed: May 26, 2020
    Publication date: September 10, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Donald Faw, Thomas Willhalm
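The two-queue structure above (guaranteed-delivery requests in one queue, best-effort requests in another, with only the first drained on power failure) can be sketched as follows; all names are illustrative assumptions.

```python
class PersistQueues:
    def __init__(self):
        self.guaranteed = []      # first queue: delivered even on power failure
        self.best_effort = []     # second queue: no delivery guarantee

    def submit(self, request, guaranteed):
        # Control circuit: direct each request to the first or second queue.
        (self.guaranteed if guaranteed else self.best_effort).append(request)

    def power_fail_drain(self, pmem):
        # Egress circuit: on power failure only the first queue reaches
        # persistent memory; best-effort requests are dropped.
        for addr, val in self.guaranteed:
            pmem[addr] = val
        self.guaranteed, self.best_effort = [], []

q, pmem = PersistQueues(), {}
q.submit((1, "must-land"), guaranteed=True)
q.submit((2, "may-drop"), guaranteed=False)
q.power_fail_drain(pmem)
```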
  • Publication number: 20200278804
    Abstract: A memory request manager in a memory system registers a tenant for access to a plurality of memory devices, registers one or more service level agreement (SLA) requirements for the tenant for access to the plurality of memory devices, monitors usage of the plurality of memory devices by tenants, receives a memory request from the tenant to access a selected one of the plurality of memory devices, and allows the access when usage of the plurality of memory devices meets the one or more SLA requirements for the tenant.
    Type: Application
    Filed: April 13, 2020
    Publication date: September 3, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Tushar Sudhakar Gohad, Mark A. Schmisseur, Thomas Willhalm
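The request-manager flow above (register a tenant and its SLA, monitor usage, allow an access only while usage meets the SLA) can be sketched as below. The share-based SLA model is an assumption for illustration.

```python
class MemoryRequestManager:
    def __init__(self):
        self.slas = {}     # tenant -> max bandwidth share allowed by the SLA
        self.usage = {}    # tenant -> current monitored share

    def register(self, tenant, max_share):
        self.slas[tenant] = max_share
        self.usage[tenant] = 0.0

    def request(self, tenant, share):
        # Allow the access only if usage stays within the tenant's SLA.
        if self.usage[tenant] + share > self.slas[tenant]:
            return False
        self.usage[tenant] += share
        return True

mgr = MemoryRequestManager()
mgr.register("tenant-a", max_share=0.5)
ok1 = mgr.request("tenant-a", 0.4)    # within the SLA -> allowed
ok2 = mgr.request("tenant-a", 0.2)    # would exceed the SLA -> denied
```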
  • Patent number: 10747691
    Abstract: Examples provide a memory device, a dual inline memory module, a storage device, an apparatus for storing, a method for storing, a computer program, a machine readable storage, and a machine readable medium. A memory device is configured to store data and comprises one or more interfaces configured to receive and to provide data. The memory device further comprises a memory module configured to store the data, and a memory logic component configured to control the one or more interfaces and the memory module. The memory logic component is further configured to receive information on a specific memory region with one or more model identifications, to receive information on an instruction to perform an acceleration function for one or more certain model identifications, and to perform the acceleration function on data in a specific memory region with the one or more certain model identifications.
    Type: Grant
    Filed: April 10, 2018
    Date of Patent: August 18, 2020
    Assignee: Intel Corporation
    Inventors: Mark Schmisseur, Thomas Willhalm, Francesc Guim Bernat, Karthik Kumar
  • Patent number: 10728311
    Abstract: A computing device, method and system to implement an adaptive compression scheme in a network fabric. The computing device may include a memory device and a fabric controller coupled to the memory device. The fabric controller may include processing circuitry having logic to communicate with a plurality of peer computing devices in the network fabric. The logic may be configured to implement the adaptive compression scheme to select, based on static information and on dynamic information relating to a peer computing device of the plurality of peer computing devices, a compression algorithm to compress a data payload destined for the peer computing device, and to compress the data payload based on the compression algorithm. The static information may include information on data payload decompression supported methods of the peer computing device, and the dynamic information may include information on link load at the peer computing device.
    Type: Grant
    Filed: February 16, 2017
    Date of Patent: July 28, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Nicolas A. Salhuana, Daniel Rivas Barragan
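The adaptive-compression selection above hinges on combining static information (the peer's supported decompression methods) with dynamic information (link load at the peer). A minimal sketch using zlib as a stand-in compression algorithm; the threshold and names are illustrative assumptions.

```python
import zlib

def choose_algorithm(peer_supported, link_load):
    # Static info: peer's supported decompression methods.
    # Dynamic info: current link load at the peer (0.0 .. 1.0).
    if link_load > 0.7 and "zlib" in peer_supported:
        return "zlib"          # congested link: spend CPU to shrink payload
    return "none"

def send(payload, peer_supported, link_load):
    algo = choose_algorithm(peer_supported, link_load)
    data = zlib.compress(payload) if algo == "zlib" else payload
    return algo, data

# Highly compressible payload over a congested link -> compression selected.
algo, wire = send(b"x" * 1000, {"zlib", "none"}, link_load=0.9)
```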
  • Publication number: 20200228626
    Abstract: Technologies for providing advanced resource management in a disaggregated environment include a compute device. The compute device includes circuitry to obtain a workload to be executed by a set of resources in a disaggregated system, query a sled in the disaggregated system to identify an estimated time to complete execution of a portion of the workload to be accelerated using a kernel, and assign, in response to a determination that the estimated time to complete execution of the portion of the workload satisfies a target quality of service associated with the workload, the portion of the workload to the sled for acceleration.
    Type: Application
    Filed: March 25, 2020
    Publication date: July 16, 2020
    Inventors: Francesc Guim Bernat, Slawomir Putyrski, Susanne M. Balle, Thomas Willhalm, Karthik Kumar
  • Publication number: 20200226272
    Abstract: An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to process memory operation requests from a memory controller, and provide a front end interface to remote pooled memory hosted at a near edge device. An embodiment of another electronic apparatus may include local memory and logic communicatively coupled to the local memory, the logic to allocate a range of the local memory as remote pooled memory, and provide a back end interface to the remote pooled memory for memory requests from a far edge device. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: March 26, 2020
    Publication date: July 16, 2020
    Applicant: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Mark Schmisseur, Thomas Willhalm
  • Publication number: 20200218669
    Abstract: An apparatus and/or system is described including a memory device including a memory range and a temporal data management unit (TDMU) coupled to the memory device to receive from an interface, the memory range and a temporal range corresponding to validity of data in the memory range, check the temporal range against a time and/or date value provided by a timer or clock to identify the data in the memory range as expired, and invalidate the data that is expired in the memory device. In some embodiments, the TDMU includes hardware logic that resides on a memory module with the memory device and is coupled to invalidate expired data when the memory module is decoupled from the interface. Other embodiments may be disclosed and claimed.
    Type: Application
    Filed: March 16, 2020
    Publication date: July 9, 2020
    Inventors: Ginger H. Gilsdorf, Karthik Kumar, Mark A. Schmisseur, Thomas Willhalm, Francesc Guim Bernat
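The temporal data management unit above checks registered memory ranges against a timer and invalidates data whose temporal range has expired. A minimal sketch of that sweep, with all names and the integer time base chosen for illustration:

```python
class TemporalDataManagementUnit:
    def __init__(self):
        self.ranges = []   # (start, end, expires_at) registered memory ranges

    def register(self, start, end, expires_at):
        self.ranges.append((start, end, expires_at))

    def sweep(self, memory, now):
        # Check each temporal range against the timer value and invalidate
        # the data in any range that has expired.
        for start, end, expires_at in self.ranges:
            if now >= expires_at:
                for addr in range(start, end):
                    memory.pop(addr, None)
        self.ranges = [r for r in self.ranges if now < r[2]]

mem = {addr: "sensor" for addr in range(0, 4)}
tdmu = TemporalDataManagementUnit()
tdmu.register(0, 4, expires_at=100)
tdmu.sweep(mem, now=50)     # still within the temporal range: data kept
alive = dict(mem)
tdmu.sweep(mem, now=100)    # expired: data invalidated
```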