Patents by Inventor Benjamin GRANIELLO

Benjamin GRANIELLO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240103914
    Abstract: In one embodiment, a processor includes: a plurality of cores to execute instructions; at least one monitor coupled to the plurality of cores to measure at least one of power information, temperature information, or scalability information; and a control circuit coupled to the at least one monitor. Based at least in part on the at least one of the power information, the temperature information, or the scalability information, the control circuit is to notify an operating system that one or more of the plurality of cores are to transition to a forced idle state in which non-affinitized workloads are prevented from being scheduled. Other embodiments are described and claimed.
    Type: Application
    Filed: September 28, 2022
    Publication date: March 28, 2024
    Inventors: Russell J. Fenger, Rajshree A. Chabukswar, Benjamin Graniello, Monica Gupta, Guy M. Therien, Michael W. Chynoweth
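
A minimal sketch of the forced-idle policy described in publication 20240103914 above, assuming illustrative thresholds. The struct core_telemetry fields, the threshold values, and notify_os_forced_idle are hypothetical names chosen for illustration, not elements of the claims.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-core telemetry as measured by the monitor circuit. */
struct core_telemetry {
    double power_w;       /* measured power draw */
    double temp_c;        /* measured temperature */
    double scalability;   /* 0..1, how well added work scales on this core */
};

/* Stand-in for the OS notification: the OS would stop scheduling
 * non-affinitized work on a core marked forced-idle. */
static void notify_os_forced_idle(int core_id, bool forced_idle) {
    printf("core %d -> %s\n", core_id, forced_idle ? "forced idle" : "active");
}

/* Control-circuit policy sketch: push a core into forced idle when its
 * telemetry crosses the illustrative thresholds. */
static void evaluate_core(int core_id, const struct core_telemetry *t) {
    bool force_idle = t->power_w > 15.0 || t->temp_c > 95.0 || t->scalability < 0.2;
    notify_os_forced_idle(core_id, force_idle);
}

int main(void) {
    struct core_telemetry cores[2] = {
        { .power_w = 6.0,  .temp_c = 70.0, .scalability = 0.8 },
        { .power_w = 18.0, .temp_c = 98.0, .scalability = 0.1 },
    };
    for (int i = 0; i < 2; i++)
        evaluate_core(i, &cores[i]);
    return 0;
}
```
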
  • Publication number: 20240086341
    Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 14, 2024
    Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
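
The adaptive lane allocation in publication 20240086341 above can be sketched as a proportional split of a fixed lane pool driven by predicted read and write bandwidth. link_config, reconfigure_lanes, and the 16-lane width are illustrative assumptions, not the patented mechanism.

```c
#include <stdio.h>

/* Hypothetical reconfigurable link: a fixed pool of lanes split between
 * upstream lanes (carrying read data back to the host) and downstream
 * lanes (carrying write data toward memory). */
struct link_config {
    int total_lanes;
    int upstream_lanes;
    int downstream_lanes;
};

/* Split the lane pool in proportion to the predicted read and write
 * bandwidth, keeping at least one lane in each direction. */
static void reconfigure_lanes(struct link_config *link,
                              double predicted_read_bw,
                              double predicted_write_bw) {
    double total_bw = predicted_read_bw + predicted_write_bw;
    if (total_bw <= 0.0)
        return;
    int up = (int)((link->total_lanes * predicted_read_bw) / total_bw + 0.5);
    if (up < 1) up = 1;
    if (up > link->total_lanes - 1) up = link->total_lanes - 1;
    link->upstream_lanes = up;
    link->downstream_lanes = link->total_lanes - up;
}

int main(void) {
    struct link_config cxl_link = { .total_lanes = 16 };
    /* The predictions would come from monitored traffic or telemetry. */
    reconfigure_lanes(&cxl_link, 12.0 /* GB/s reads */, 4.0 /* GB/s writes */);
    printf("upstream %d, downstream %d lanes\n",
           cxl_link.upstream_lanes, cxl_link.downstream_lanes);
    return 0;
}
```
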
  • Patent number: 11789878
    Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
    Type: Grant
    Filed: December 19, 2019
    Date of Patent: October 17, 2023
    Assignee: Intel Corporation
    Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
  • Patent number: 11451435
    Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
    Type: Grant
    Filed: March 28, 2019
    Date of Patent: September 20, 2022
    Assignee: Intel Corporation
    Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
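
The edge-channel handling in patent 11451435 above amounts to resolving message metadata to a channel with a reserved share of resource capacity. A rough sketch follows, with edge_channel, the tenant tag, and the capacity figures all hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical edge channel: a tenant-facing channel with a predefined
 * share of resource capacity reserved for its messages. */
struct edge_channel {
    int id;
    const char *tenant;        /* metadata key used to identify the channel */
    unsigned capacity_mbps;    /* capacity reserved for this channel */
};

static struct edge_channel channels[] = {
    { 1, "tenant-a", 400 },
    { 2, "tenant-b", 100 },
};

/* Identify the channel from message metadata (here, a tenant tag). */
static const struct edge_channel *lookup_channel(const char *tenant_tag) {
    for (size_t i = 0; i < sizeof(channels) / sizeof(channels[0]); i++)
        if (strcmp(channels[i].tenant, tenant_tag) == 0)
            return &channels[i];
    return NULL;
}

int main(void) {
    /* A message arrives carrying tenant metadata; it is processed with
     * the capacity reserved for the matching channel. */
    const struct edge_channel *ch = lookup_channel("tenant-a");
    if (ch)
        printf("processing on channel %d with %u Mbps reserved\n",
               ch->id, ch->capacity_mbps);
    return 0;
}
```
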
  • Patent number: 11404105
    Abstract: A write history buffer can prevent write disturb in memory, enabling a reduction in write disturb refresh rate and improvement in performance. A memory device can include circuitry to cause consecutive write commands to the same address to be spaced by an amount of time to reduce incidences of write disturb, and therefore reduce the required write disturb refresh rate and improve performance. In one example, a memory device receives multiple write commands to an address. In response to receipt of the multiple write commands, the first write command is sent to the memory and a timer is started. Subsequent write commands that are received after the first write command and before expiration of the timer are held in a buffer. After expiration of the timer, only the most recent of the subsequent write commands to the address is sent to the memory array.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: August 2, 2022
    Assignee: Intel Corporation
    Inventors: Akanksha Mehta, Benjamin Graniello, Rakan Maddah, Philip Hillier, Richard P. Mangold, Prashant S. Damle, Kunal A. Khochare
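
Patent 11404105 above describes timer-spaced write coalescing per address. Below is a behavioral sketch under the assumption of a single tracked address; write_tracker, WRITE_SPACING, and the time units are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Per-address coalescing state: a timer plus the most recent pending write. */
struct write_tracker {
    bool timer_running;
    uint64_t timer_expires;   /* in arbitrary time units */
    bool pending;
    uint32_t pending_data;    /* most recent buffered write for this address */
};

#define WRITE_SPACING 100      /* illustrative minimum spacing between writes */

static void issue_write(uint64_t addr, uint32_t data) {
    printf("write addr=0x%llx data=0x%x\n", (unsigned long long)addr, data);
}

/* Handle a write command for one tracked address. */
static void on_write(struct write_tracker *t, uint64_t addr,
                     uint32_t data, uint64_t now) {
    if (!t->timer_running) {
        issue_write(addr, data);          /* first write goes straight to media */
        t->timer_running = true;
        t->timer_expires = now + WRITE_SPACING;
    } else {
        t->pending = true;                /* later writes are held; only the */
        t->pending_data = data;           /* most recent survives */
    }
}

/* On timer expiry, issue only the most recent buffered write, if any. */
static void on_tick(struct write_tracker *t, uint64_t addr, uint64_t now) {
    if (t->timer_running && now >= t->timer_expires) {
        t->timer_running = false;
        if (t->pending) {
            t->pending = false;
            on_write(t, addr, t->pending_data, now);  /* restarts the timer */
        }
    }
}

int main(void) {
    struct write_tracker t = {0};
    on_write(&t, 0x1000, 0xA, 0);
    on_write(&t, 0x1000, 0xB, 10);    /* buffered */
    on_write(&t, 0x1000, 0xC, 20);    /* replaces 0xB */
    on_tick(&t, 0x1000, 100);         /* issues only 0xC */
    return 0;
}
```

Restarting the timer when a coalesced write is finally issued keeps back-to-back bursts spaced apart, which is the property that lets the write disturb refresh rate be reduced.
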
  • Patent number: 11188264
    Abstract: A memory system includes a nonvolatile (NV) memory device with asymmetry between intrinsic read operation delay and intrinsic write operation delay. The system can select to perform memory access operations with the NV memory device with the asymmetry, in which case write operations have a lower delay than read operations. The system can alternatively select to perform memory access operations with the NV memory device with a configured write operation delay that matches the read operation delay.
    Type: Grant
    Filed: February 3, 2020
    Date of Patent: November 30, 2021
    Assignee: Intel Corporation
    Inventors: Shekoufeh Qawami, Philip Hillier, Benjamin Graniello, Rajesh Sundaram
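
The delay selection in patent 11188264 above reduces to choosing between the intrinsic write delay and a write delay padded to match the read delay. A sketch with made-up delay values, not the device's real timings:

```c
#include <stdio.h>

/* Illustrative intrinsic delays of the NV media, in nanoseconds.  In this
 * asymmetry, writes complete faster than reads. */
#define INTRINSIC_READ_NS   300
#define INTRINSIC_WRITE_NS  100

enum delay_mode {
    MODE_ASYMMETRIC,   /* use the intrinsic delays as-is */
    MODE_MATCHED       /* pad the write delay so it matches the read delay */
};

static int effective_write_delay(enum delay_mode mode) {
    return mode == MODE_MATCHED ? INTRINSIC_READ_NS : INTRINSIC_WRITE_NS;
}

int main(void) {
    printf("asymmetric: read %d ns, write %d ns\n",
           INTRINSIC_READ_NS, effective_write_delay(MODE_ASYMMETRIC));
    printf("matched:    read %d ns, write %d ns\n",
           INTRINSIC_READ_NS, effective_write_delay(MODE_MATCHED));
    return 0;
}
```
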
  • Publication number: 20210110862
    Abstract: A write history buffer can prevent write disturb in memory, enabling a reduction in write disturb refresh rate and improvement in performance. A memory device can include circuitry to cause consecutive write commands to the same address to be spaced by an amount of time to reduce incidences of write disturb, and therefore reduce the required write disturb refresh rate and improve performance. In one example, a memory device receives multiple write commands to an address. In response to receipt of the multiple write commands, the first write command is sent to the memory and a timer is started. Subsequent write commands that are received after the first write command and before expiration of the timer are held in a buffer. After expiration of the timer, only the most recent of the subsequent write commands to the address is sent to the memory array.
    Type: Application
    Filed: December 21, 2020
    Publication date: April 15, 2021
    Inventors: Akanksha MEHTA, Benjamin GRANIELLO, Rakan MADDAH, Philip HILLIER, Richard P. MANGOLD, Prashant S. DAMLE, Kunal A. KHOCHARE
  • Patent number: 10885004
    Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
    Type: Grant
    Filed: June 19, 2018
    Date of Patent: January 5, 2021
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
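
Patent 10885004 above defers flushing a group of cache lines until every line in the group has been written. A simple counter-based sketch; cacheline_group and the flush hook are illustrative stand-ins for the hardware mechanism.

```c
#include <stdio.h>

/* Track a group of cache lines that must not be flushed to persistent
 * memory until every line in the group has been written. */
struct cacheline_group {
    int total_lines;
    int completed_writes;
};

static void flush_group_to_persistent_memory(void) {
    printf("flushing whole group to persistent memory\n");
}

/* Record completion of one cache line write; flush only when the group
 * is complete, so partial updates never reach persistence. */
static void on_line_written(struct cacheline_group *g) {
    g->completed_writes++;
    if (g->completed_writes == g->total_lines)
        flush_group_to_persistent_memory();
}

int main(void) {
    struct cacheline_group g = { .total_lines = 4, .completed_writes = 0 };
    for (int i = 0; i < 4; i++)
        on_line_written(&g);   /* the flush happens only after the 4th write */
    return 0;
}
```
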
  • Publication number: 20200174705
    Abstract: A memory system includes a nonvolatile (NV) memory device with asymmetry between intrinsic read operation delay and intrinsic write operation delay. The system can select to perform memory access operations with the NV memory device with the asymmetry, in which case write operations have a lower delay than read operations. The system can alternatively select to perform memory access operations with the NV memory device with a configured write operation delay that matches the read operation delay.
    Type: Application
    Filed: February 3, 2020
    Publication date: June 4, 2020
    Inventors: Shekoufeh QAWAMI, Philip HILLIER, Benjamin GRANIELLO, Rajesh SUNDARAM
  • Publication number: 20200125503
    Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
    Type: Application
    Filed: December 19, 2019
    Publication date: April 23, 2020
    Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
  • Patent number: 10599579
    Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: March 24, 2020
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Francesc Guim Bernat, Benjamin Graniello, Thomas Willhalm, Mustafa Hajeer
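
The prefetch versus write-back split in patent 10599579 above can be approximated by repartitioning cache ways from observed read/write counts. cache_allocation, the way counts, and the rebalance heuristic are assumptions for illustration, not the claimed allocator.

```c
#include <stdio.h>

/* Hypothetical split of the module cache between a prefetch region (helps
 * reads) and a write-back region (absorbs writes), measured in cache ways. */
struct cache_allocation {
    int total_ways;
    int prefetch_ways;
    int writeback_ways;
};

/* Re-balance the split from observed traffic: read-heavy phases get more
 * prefetch capacity, write-heavy phases more write-back capacity. */
static void rebalance(struct cache_allocation *c,
                      unsigned long reads, unsigned long writes) {
    unsigned long total = reads + writes;
    if (total == 0)
        return;
    int pf = (int)((c->total_ways * reads) / total);
    if (pf < 1) pf = 1;
    if (pf > c->total_ways - 1) pf = c->total_ways - 1;
    c->prefetch_ways = pf;
    c->writeback_ways = c->total_ways - pf;
}

int main(void) {
    struct cache_allocation alloc = { .total_ways = 16 };
    rebalance(&alloc, 900 /* reads */, 100 /* writes */);
    printf("prefetch %d ways, write-back %d ways\n",
           alloc.prefetch_ways, alloc.writeback_ways);
    return 0;
}
```

A user-selected allocation, also mentioned in the abstract, would simply override the monitored split.
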
  • Publication number: 20200076682
    Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
    Type: Application
    Filed: March 28, 2019
    Publication date: March 5, 2020
    Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
  • Publication number: 20190384837
    Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
    Type: Application
    Filed: June 19, 2018
    Publication date: December 19, 2019
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Thomas WILLHALM, Mark A. SCHMISSEUR, Benjamin GRANIELLO
  • Patent number: 10402330
    Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
    Type: Grant
    Filed: April 3, 2018
    Date of Patent: September 3, 2019
    Assignee: Intel Corporation
    Inventors: Karthik Kumar, Mustafa Hajeer, Thomas Willhalm, Francesc Guim Bernat, Benjamin Graniello
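
The bandwidth-driven protocol switch in patent 10402330 above is essentially a threshold comparison made by the caching agent. In the sketch below, the 80 GB/s threshold is an invented placeholder, not a value from the patent.

```c
#include <stdio.h>

enum coherency_mode {
    MODE_DIRECTORY,   /* directory-based: fewer snoops, extra directory lookups */
    MODE_SNOOP        /* snoop-based: avoids directory lookups under high load */
};

#define BW_THRESHOLD_GBPS 80.0   /* illustrative switch-over point */

/* Caching-agent policy sketch: pick the protocol from the observed
 * memory bandwidth, as described in the abstract above. */
static enum coherency_mode select_mode(double observed_bw_gbps) {
    return observed_bw_gbps > BW_THRESHOLD_GBPS ? MODE_SNOOP : MODE_DIRECTORY;
}

static const char *mode_name(enum coherency_mode m) {
    return m == MODE_SNOOP ? "snoop-based" : "directory-based";
}

int main(void) {
    printf("40 GB/s  -> %s\n", mode_name(select_mode(40.0)));
    printf("120 GB/s -> %s\n", mode_name(select_mode(120.0)));
    return 0;
}
```
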
  • Publication number: 20190171387
    Abstract: Techniques and mechanisms for wear leveling across dual inline memory modules (DIMMs) by migrating data using direct memory accesses. In an embodiment, a direct memory access (DMA) controller detects that a metric of accesses to a first page of a first DIMM is outside of some range. Based on the detecting, the DMA controller disables an access to the first page by a processor core. While the access is disabled, the DMA controller performs DMA operations to migrate data from the first page to a second page of a second DIMM. The first page and the second page correspond, respectively, to a first physical address and a second physical address. In another embodiment, an update to address mapping information replaces a first correspondence of a virtual address to the first physical address with a second correspondence of the virtual address to the second physical address.
    Type: Application
    Filed: January 31, 2019
    Publication date: June 6, 2019
    Inventors: Thomas WILLHALM, Francesc GUIM BERNAT, Karthik KUMAR, Benjamin GRANIELLO, Mustafa HAJEER
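
The DMA-based migration in publication 20190171387 above follows a block, copy, remap sequence. Below is a host-side sketch in which memcpy stands in for the DMA engine and the wear threshold, page size, and mapping structure are all hypothetical.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define HOT_WRITE_THRESHOLD 1000000UL  /* illustrative wear threshold */

/* One entry of the address mapping: a virtual page and the physical page
 * (on some DIMM) it currently resolves to, plus an access metric. */
struct mapping {
    uint64_t virt_page;
    uint8_t *phys_page;
    unsigned long write_count;
};

/* Sketch of the migration flow: when a page is worn, block CPU access,
 * copy the data (the DMA engine's job in the filing), then remap. */
static void maybe_migrate(struct mapping *m, uint8_t *spare_page_on_other_dimm) {
    if (m->write_count < HOT_WRITE_THRESHOLD)
        return;
    /* 1. Disable core access to the page (modelled here as a comment). */
    /* 2. DMA copy from the worn page to the spare page on another DIMM. */
    memcpy(spare_page_on_other_dimm, m->phys_page, PAGE_SIZE);
    /* 3. Update the mapping so the virtual address now points at the copy. */
    m->phys_page = spare_page_on_other_dimm;
    m->write_count = 0;
    /* 4. Re-enable core access. */
    printf("migrated virtual page 0x%llx\n", (unsigned long long)m->virt_page);
}

int main(void) {
    static uint8_t dimm0_page[PAGE_SIZE], dimm1_page[PAGE_SIZE];
    struct mapping m = { 0x7f0000, dimm0_page, 2000000UL };
    maybe_migrate(&m, dimm1_page);
    return 0;
}
```
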
  • Publication number: 20190095329
    Abstract: Technology for a system operable to allocate physical pages of memory is described. The system can include a memory side cache, a memory side cache monitoring unit coupled to the memory side cache, and an operating system (OS) page allocator. The OS page allocator can receive feedback from the memory side cache monitoring unit. The OS page allocator can adjust a page allocation policy that defines the physical pages allocated by the OS page allocator based on the feedback received from the memory side cache monitoring unit.
    Type: Application
    Filed: September 27, 2017
    Publication date: March 28, 2019
    Applicant: Intel Corporation
    Inventors: Karthik Kumar, Benjamin A. Graniello
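
The feedback loop in publication 20190095329 above can be reduced to a policy choice driven by the monitored miss rate of the memory-side cache. The policy names and the 30% threshold below are illustrative, not from the filing.

```c
#include <stdio.h>

/* Hypothetical allocation policies the OS page allocator can switch
 * between based on memory-side-cache feedback. */
enum page_policy {
    POLICY_DEFAULT,        /* no special placement */
    POLICY_SPREAD_COLORS   /* spread pages across cache sets to cut conflicts */
};

/* Feedback reported by the memory-side-cache monitoring unit. */
struct msc_feedback {
    double miss_rate;      /* fraction of accesses missing the memory-side cache */
};

/* Adjust the page allocation policy from the monitor's feedback. */
static enum page_policy adjust_policy(const struct msc_feedback *fb) {
    return fb->miss_rate > 0.30 ? POLICY_SPREAD_COLORS : POLICY_DEFAULT;
}

int main(void) {
    struct msc_feedback fb = { .miss_rate = 0.45 };
    printf("policy = %s\n",
           adjust_policy(&fb) == POLICY_SPREAD_COLORS ? "spread" : "default");
    return 0;
}
```
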
  • Publication number: 20190042458
    Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
    Type: Application
    Filed: June 25, 2018
    Publication date: February 7, 2019
    Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Benjamin GRANIELLO, Thomas WILLHALM, Mustafa HAJEER
  • Publication number: 20190042423
    Abstract: A method is described. The method includes configuring different software programs that are to execute on a computer with customized hardware caching service levels. The available set of hardware caching levels comprises at least the L1, L2 and L3 caching levels, and at least one of the following hardware caching levels is available for customized support of a software program: L2, L3 and L4.
    Type: Application
    Filed: April 19, 2018
    Publication date: February 7, 2019
    Inventors: Karthik KUMAR, Benjamin GRANIELLO, Mark A. SCHMISSEUR, Thomas WILLHALM, Francesc GUIM BERNAT
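
The per-program caching service levels in publication 20190042423 above can be modelled as a lookup from program to an enabled-level mask. The table contents and program names below are invented examples.

```c
#include <stdio.h>
#include <string.h>

/* Bit flags for the hardware caching levels a program is configured to use. */
enum { CACHE_L1 = 1 << 0, CACHE_L2 = 1 << 1, CACHE_L3 = 1 << 2, CACHE_L4 = 1 << 3 };

/* Hypothetical per-program caching service level configuration. */
struct caching_service_level {
    const char *program;
    unsigned levels;   /* which levels are enabled/reserved for this program */
};

static const struct caching_service_level table[] = {
    { "latency_sensitive_app", CACHE_L1 | CACHE_L2 | CACHE_L3 | CACHE_L4 },
    { "batch_job",             CACHE_L1 | CACHE_L2 },
};

static unsigned levels_for(const char *program) {
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(table[i].program, program) == 0)
            return table[i].levels;
    return CACHE_L1 | CACHE_L2 | CACHE_L3;   /* default service level */
}

int main(void) {
    printf("batch_job levels mask: 0x%x\n", levels_for("batch_job"));
    return 0;
}
```
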
  • Publication number: 20190042429
    Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
    Type: Application
    Filed: April 3, 2018
    Publication date: February 7, 2019
    Inventors: Karthik KUMAR, Mustafa HAJEER, Thomas WILLHALM, Francesc GUIM BERNAT, Benjamin GRANIELLO
  • Patent number: 10031699
    Abstract: Technology for a system operable to write and read data from memory is described. The system can include memory and a memory controller. The memory controller can send an instruction to write data to a NVM address in the memory at a time of last write (TOLW). The memory controller can determine to read the data from the NVM address in the memory at read time. The memory controller can determine a read voltage to read the data from the NVM address in the memory at the read time. The read voltage can be determined based on a difference between the TOLW and the read time, and a modeled voltage drift for the NVM address over a period of time.
    Type: Grant
    Filed: October 18, 2017
    Date of Patent: July 24, 2018
    Assignee: Intel Corporation
    Inventors: Benjamin A. Graniello, Karthik Kumar
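
Patent 10031699 above selects a read voltage from the elapsed time between the time of last write (TOLW) and the read time using a modeled voltage drift. The sketch below assumes a logarithmic drift with invented coefficients; the real model and values are not disclosed here.

```c
#include <math.h>
#include <stdio.h>

#define BASE_READ_MV        450.0   /* illustrative read voltage right after a write */
#define DRIFT_MV_PER_DECADE  12.0   /* illustrative drift coefficient */

/* Compute the read voltage for an address from the elapsed time between
 * its time of last write and the current read time. */
static double read_voltage_mv(double tolw_s, double read_time_s) {
    double elapsed = read_time_s - tolw_s;
    if (elapsed < 1.0)
        elapsed = 1.0;              /* clamp so the model is defined near zero */
    return BASE_READ_MV + DRIFT_MV_PER_DECADE * log10(elapsed);
}

int main(void) {
    printf("after 10 s:  %.1f mV\n", read_voltage_mv(0.0, 10.0));
    printf("after 1 day: %.1f mV\n", read_voltage_mv(0.0, 86400.0));
    return 0;
}
```
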