Patents by Inventor Benjamin GRANIELLO
Benjamin GRANIELLO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240103914
Abstract: In one embodiment, a processor includes: a plurality of cores to execute instructions; at least one monitor coupled to the plurality of cores to measure at least one of power information, temperature information, or scalability information; and a control circuit coupled to the at least one monitor. Based at least in part on the at least one of the power information, the temperature information, or the scalability information, the control circuit is to notify an operating system that one or more of the plurality of cores are to transition to a forced idle state in which non-affinitized workloads are prevented from being scheduled. Other embodiments are described and claimed.
Type: Application
Filed: September 28, 2022
Publication date: March 28, 2024
Inventors: Russell J. Fenger, Rajshree A. Chabukswar, Benjamin Graniello, Monica Gupta, Guy M. Therien, Michael W. Chynoweth
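The forced-idle decision described in this abstract can be sketched as a small policy function: telemetry per core goes in, and the set of cores the OS should be told to force-idle comes out. All names, units, and thresholds below are illustrative assumptions, not details from the filing.

```python
def cores_to_force_idle(telemetry, power_limit_w, temp_limit_c):
    """Return the ids of cores whose monitored power or temperature exceeds
    the given limits; the control circuit would report these to the OS so
    that non-affinitized work is no longer scheduled on them."""
    idle = set()
    for core_id, sample in telemetry.items():
        if sample["power_w"] > power_limit_w or sample["temp_c"] > temp_limit_c:
            idle.add(core_id)
    return idle

# Hypothetical per-core monitor readings.
telemetry = {
    0: {"power_w": 4.1, "temp_c": 71.0},
    1: {"power_w": 9.8, "temp_c": 93.5},  # hot, over both limits
    2: {"power_w": 3.2, "temp_c": 65.0},
}
print(cores_to_force_idle(telemetry, power_limit_w=8.0, temp_limit_c=90.0))  # -> {1}
```

In the patent's scheme the decision is made in hardware and only the notification reaches the OS; this sketch merely shows the shape of the threshold test.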
-
Publication number: 20240086341
Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
Type: Application
Filed: September 22, 2023
Publication date: March 14, 2024
Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
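The dynamic lane reconfiguration this abstract describes can be sketched as a proportional split of a link's reconfigurable lanes between the read and write directions, driven by the predicted bandwidths. Lane counts, units, and the rounding/clamping policy below are illustrative assumptions:

```python
def split_lanes(total_lanes, predicted_read_bw, predicted_write_bw, min_lanes=1):
    """Split a reconfigurable interconnect segment's lanes between the read
    and write directions in proportion to predicted bandwidths, keeping at
    least min_lanes in each direction so neither side is starved."""
    total_bw = predicted_read_bw + predicted_write_bw
    read_lanes = round(total_lanes * predicted_read_bw / total_bw)
    read_lanes = max(min_lanes, min(total_lanes - min_lanes, read_lanes))
    return read_lanes, total_lanes - read_lanes

# Read-heavy prediction on an 8-lane flex bus: 6 lanes for reads, 2 for writes.
print(split_lanes(8, predicted_read_bw=30.0, predicted_write_bw=10.0))  # -> (6, 2)
```

A real implementation would also rate-limit reconfigurations, since retraining lanes has a cost the prediction horizon must amortize.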
-
Patent number: 11789878
Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
Type: Grant
Filed: December 19, 2019
Date of Patent: October 17, 2023
Assignee: Intel Corporation
Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
-
Patent number: 11451435
Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
Type: Grant
Filed: March 28, 2019
Date of Patent: September 20, 2022
Assignee: INTEL CORPORATION
Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
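The channel-lookup step in this abstract reduces to: read metadata from the message, map it to an edge channel, and return that channel's predefined capacity. The metadata key, capacity units, and channel table below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EdgeChannel:
    channel_id: str
    capacity_units: int  # predefined resource capacity reserved for this channel

# Hypothetical per-tenant channel table with pre-allocated capacities.
CHANNELS = {
    "tenant-a": EdgeChannel("tenant-a", capacity_units=4),
    "tenant-b": EdgeChannel("tenant-b", capacity_units=2),
}

def route_message(message):
    """Identify the edge channel from message metadata and return the
    channel id plus the capacity the message may be processed with."""
    channel = CHANNELS[message["metadata"]["tenant"]]
    return channel.channel_id, channel.capacity_units

print(route_message({"metadata": {"tenant": "tenant-b"}, "payload": b"req"}))  # -> ('tenant-b', 2)
```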
-
Patent number: 11404105
Abstract: A write history buffer can prevent write disturb in memory, enabling a reduction in write disturb refresh rate and improvement in performance. A memory device can include circuitry to cause consecutive write commands to the same address to be spaced by an amount of time to reduce incidences of write disturb, and therefore reduce the required write disturb refresh rate and improve performance. In one example, a memory device receives multiple write commands to an address. In response to receipt of the multiple write commands, the first write command is sent to the memory and a timer is started. Subsequent write commands that are received after the first write command and before expiration of the timer are held in a buffer. After expiration of the timer, only the most recent of the subsequent write commands to the address is sent to the memory array.
Type: Grant
Filed: December 21, 2020
Date of Patent: August 2, 2022
Assignee: Intel Corporation
Inventors: Akanksha Mehta, Benjamin Graniello, Rakan Maddah, Philip Hillier, Richard P. Mangold, Prashant S. Damle, Kunal A. Khochare
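The timer-based coalescing in this abstract can be simulated for a single address: the first write passes through and arms a timer; writes arriving before the timer expires are held, and only the newest held write is issued at expiry. This is a simplified single-address model (in particular, the write issued at the deadline does not arm a new timer here); timestamps and the window value are illustrative:

```python
def coalesce_writes(writes, window):
    """writes: time-sorted list of (time, data) write commands to one address.
    Returns the (time, data) writes actually issued to the memory array."""
    issued = []
    i = 0
    while i < len(writes):
        t0, data = writes[i]
        issued.append((t0, data))  # first write goes straight to the array, timer starts
        deadline = t0 + window
        held = None
        i += 1
        while i < len(writes) and writes[i][0] < deadline:
            held = writes[i]       # a newer held write supersedes the older one
            i += 1
        if held is not None:
            issued.append((deadline, held[1]))  # flush only the most recent held write
    return issued

# Writes B and C land inside A's window; only C is issued, at the deadline.
print(coalesce_writes([(0, "A"), (1, "B"), (2, "C"), (10, "D")], window=5))
# -> [(0, 'A'), (5, 'C'), (10, 'D')]
```

Because superseded writes are never sent, back-to-back writes to one row are spaced by at least the window, which is what lets the device relax its write-disturb refresh rate.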
-
Patent number: 11188264
Abstract: A memory system includes a nonvolatile (NV) memory device with asymmetry between intrinsic read operation delay and intrinsic write operation delay. The system can select to perform memory access operations with the NV memory device with the asymmetry, in which case write operations have a lower delay than read operations. The system can alternatively select to perform memory access operations with the NV memory device with a configured write operation delay that matches the read operation delay.
Type: Grant
Filed: February 3, 2020
Date of Patent: November 30, 2021
Assignee: Intel Corporation
Inventors: Shekoufeh Qawami, Philip Hillier, Benjamin Graniello, Rajesh Sundaram
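The two operating modes in this abstract amount to a one-line selection: either expose the lower intrinsic write delay, or pad writes to a configured delay equal to the read delay. The nanosecond values below are illustrative placeholders, not figures from the patent:

```python
READ_DELAY_NS = 100            # intrinsic read delay (illustrative)
INTRINSIC_WRITE_DELAY_NS = 40  # intrinsic write delay, lower than the read delay

def write_delay_ns(matched_mode):
    """Asymmetric mode exposes the faster intrinsic write delay; matched mode
    uses a configured write delay equal to the read delay (simpler timing
    for a controller that assumes symmetric accesses)."""
    return READ_DELAY_NS if matched_mode else INTRINSIC_WRITE_DELAY_NS

print(write_delay_ns(matched_mode=False))  # -> 40
print(write_delay_ns(matched_mode=True))   # -> 100
```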
-
Publication number: 20210110862
Abstract: A write history buffer can prevent write disturb in memory, enabling a reduction in write disturb refresh rate and improvement in performance. A memory device can include circuitry to cause consecutive write commands to the same address to be spaced by an amount of time to reduce incidences of write disturb, and therefore reduce the required write disturb refresh rate and improve performance. In one example, a memory device receives multiple write commands to an address. In response to receipt of the multiple write commands, the first write command is sent to the memory and a timer is started. Subsequent write commands that are received after the first write command and before expiration of the timer are held in a buffer. After expiration of the timer, only the most recent of the subsequent write commands to the address is sent to the memory array.
Type: Application
Filed: December 21, 2020
Publication date: April 15, 2021
Inventors: Akanksha MEHTA, Benjamin GRANIELLO, Rakan MADDAH, Philip HILLIER, Richard P. MANGOLD, Prashant S. DAMLE, Kunal A. KHOCHARE
-
Patent number: 10885004
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
Type: Grant
Filed: June 19, 2018
Date of Patent: January 5, 2021
Assignee: Intel Corporation
Inventors: Karthik Kumar, Francesc Guim Bernat, Thomas Willhalm, Mark A. Schmisseur, Benjamin Graniello
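The gating condition in this abstract is simple to model: a flush group tracks which of its cache lines still have pending writes, and flushing to persistent memory is allowed only once the pending set is empty. Class and method names are hypothetical:

```python
class FlushGroup:
    """Track a group of cache lines that must not be flushed to persistent
    memory until every line in the group has been written."""

    def __init__(self, lines):
        self.pending = set(lines)  # lines whose writes have not yet completed

    def complete_write(self, line):
        self.pending.discard(line)

    def may_flush(self):
        return not self.pending    # flush only once all group writes are done

group = FlushGroup({0x10, 0x20})
print(group.may_flush())   # -> False
group.complete_write(0x10)
group.complete_write(0x20)
print(group.may_flush())   # -> True
```

Holding the flush until the whole group is written keeps persistent memory from observing a partially updated data structure after a power failure.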
-
Publication number: 20200174705
Abstract: A memory system includes a nonvolatile (NV) memory device with asymmetry between intrinsic read operation delay and intrinsic write operation delay. The system can select to perform memory access operations with the NV memory device with the asymmetry, in which case write operations have a lower delay than read operations. The system can alternatively select to perform memory access operations with the NV memory device with a configured write operation delay that matches the read operation delay.
Type: Application
Filed: February 3, 2020
Publication date: June 4, 2020
Inventors: Shekoufeh QAWAMI, Philip HILLIER, Benjamin GRANIELLO, Rajesh SUNDARAM
-
Publication number: 20200125503
Abstract: Methods, apparatus and systems for adaptive fabric allocation for local and remote emerging memories-based prediction schemes. In conjunction with performing memory transfers between a compute host and memory device connected via one or more interconnect segments, memory read and write traffic is monitored for at least one interconnect segment having reconfigurable upstream lanes and downstream lanes. Predictions of expected read and write bandwidths for the at least one interconnect segment are then made. Based on the expected read and write bandwidths, the upstream lanes and downstream lanes are dynamically reconfigured. The interconnect segments include interconnect links such as Compute Express Link (CXL) flex buses and memory channels for local memory implementations, and fabric links for remote memory implementations. For local memory, management messages may be used to provide telemetry information containing the expected read and write bandwidths.
Type: Application
Filed: December 19, 2019
Publication date: April 23, 2020
Inventors: Benjamin Graniello, Francesc Guim Bernat, Karthik Kumar, Thomas Willhalm
-
Patent number: 10599579
Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
Type: Grant
Filed: June 25, 2018
Date of Patent: March 24, 2020
Assignee: Intel Corporation
Inventors: Karthik Kumar, Francesc Guim Bernat, Benjamin Graniello, Thomas Willhalm, Mustafa Hajeer
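The allocation policy in this abstract has two inputs: observed read/write access counts, and an optional user-selected split that overrides them. A minimal sketch, where "ways" as the allocation unit and all names are assumptions:

```python
def allocate_cache(total_ways, reads, writes, user_split=None):
    """Split a persistent-memory module's cache between prefetch (read-side)
    and write-back ways. If user_split (fraction given to prefetch) is set,
    honor it; otherwise size prefetch in proportion to observed reads."""
    if user_split is not None:
        prefetch = round(total_ways * user_split)
    else:
        prefetch = round(total_ways * reads / (reads + writes))
    prefetch = max(0, min(total_ways, prefetch))
    return prefetch, total_ways - prefetch

# Read-heavy monitoring window: most ways go to the prefetch cache.
print(allocate_cache(16, reads=300, writes=100))           # -> (12, 4)
# User pins an even split regardless of the monitored mix.
print(allocate_cache(16, reads=300, writes=100, user_split=0.5))  # -> (8, 8)
```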
-
Publication number: 20200076682
Abstract: Technologies for providing multi-tenant support in edge resources using edge channels include a device that includes circuitry to obtain a message associated with a service provided at the edge of a network. Additionally, the circuitry is to identify an edge channel based on metadata associated with the message. The edge channel has a predefined amount of resource capacity allocated to the edge channel to process the message. Further, the circuitry is to determine the predefined amount of resource capacity allocated to the edge channel and process the message using the allocated resource capacity for the identified edge channel.
Type: Application
Filed: March 28, 2019
Publication date: March 5, 2020
Inventors: Francesc Guim Bernat, Karthik Kumar, Benjamin Graniello, Timothy Verrall, Andrew J. Herdrich, Rashmin Patel, Monica Kenguva, Brinda Ganesh, Alexander Vul, Ned M. Smith, Suraj Prabhakaran
-
Publication number: 20190384837
Abstract: A group of cache lines in cache may be identified as cache lines not to be flushed to persistent memory until all cache line writes for the group of cache lines have been completed.
Type: Application
Filed: June 19, 2018
Publication date: December 19, 2019
Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Thomas WILLHALM, Mark A. SCHMISSEUR, Benjamin GRANIELLO
-
Patent number: 10402330
Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
Type: Grant
Filed: April 3, 2018
Date of Patent: September 3, 2019
Assignee: Intel Corporation
Inventors: Karthik Kumar, Mustafa Hajeer, Thomas Willhalm, Francesc Guim Bernat, Benjamin Graniello
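The mode selection in this abstract is a single threshold test on monitored memory bandwidth. A minimal sketch with assumed units and mode labels (the rationale: directory lookups and updates consume memory bandwidth, so under heavy load snooping is cheaper):

```python
DIRECTORY, SNOOP = "directory", "snoop"

def coherency_mode(bandwidth_gbps, threshold_gbps):
    """Select snoop-based coherency when monitored memory bandwidth exceeds
    the threshold, and directory-based coherency otherwise."""
    return SNOOP if bandwidth_gbps > threshold_gbps else DIRECTORY

print(coherency_mode(bandwidth_gbps=95.0, threshold_gbps=80.0))  # -> snoop
print(coherency_mode(bandwidth_gbps=40.0, threshold_gbps=80.0))  # -> directory
```

A production policy would add hysteresis (separate up/down thresholds) so the caching agent does not flap between protocols near the boundary.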
-
Publication number: 20190171387
Abstract: Techniques and mechanisms for wear leveling across dual inline memory modules (DIMMs) by migrating data using direct memory accesses. In an embodiment, a direct memory access (DMA) controller detects that a metric of accesses to a first page of a first DIMM is outside of some range. Based on the detecting, the DMA controller disables an access to the first page by a processor core. While the access is disabled, the DMA controller performs DMA operations to migrate data from the first page to a second page of a second DIMM. The first page and the second page correspond, respectively, to a first physical address and a second physical address. In another embodiment, an update to address mapping information replaces a first correspondence of a virtual address to the first physical address with a second correspondence of the virtual address to the second physical address.
Type: Application
Filed: January 31, 2019
Publication date: June 6, 2019
Inventors: Thomas WILLHALM, Francesc GUIM BERNAT, Karthik KUMAR, Benjamin GRANIELLO, Mustafa HAJEER
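The migrate-then-remap step in this abstract can be modeled with a page table as a dictionary and a copy callback standing in for the DMA transfer. Frame names and the helper signature are hypothetical; the real mechanism also blocks processor access during the copy, which this sketch omits:

```python
def migrate_page(page_table, virt, new_phys, copy):
    """Copy a page's data from its current physical frame to new_phys (the
    DMA transfer), then update the virtual->physical mapping so later
    accesses land on the new DIMM. Returns the old physical frame."""
    old_phys = page_table[virt]
    copy(old_phys, new_phys)        # stands in for the DMA operations
    page_table[virt] = new_phys     # remap: virt now points at the new frame
    return old_phys

# Toy physical memory: frame name -> contents.
frames = {"dimm0:5": b"hot-page-data", "dimm1:9": None}
page_table = {"va1": "dimm0:5"}
migrate_page(page_table, "va1", "dimm1:9",
             copy=lambda src, dst: frames.__setitem__(dst, frames[src]))
print(page_table["va1"], frames["dimm1:9"])  # -> dimm1:9 b'hot-page-data'
```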
-
Publication number: 20190095329
Abstract: Technology for a system operable to allocate physical pages of memory is described. The system can include a memory side cache, a memory side cache monitoring unit coupled to the memory side cache, and an operating system (OS) page allocator. The OS page allocator can receive feedback from the memory side cache monitoring unit. The OS page allocator can adjust a page allocation policy that defines the physical pages allocated by the OS page allocator based on the feedback received from the memory side cache monitoring unit.
Type: Application
Filed: September 27, 2017
Publication date: March 28, 2019
Applicant: Intel Corporation
Inventors: Karthik Kumar, Benjamin A. Graniello
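The feedback loop in this abstract can be sketched as: the monitoring unit reports a miss rate, and the allocator switches policy when it crosses assumed high/low watermarks. Policy names and thresholds are illustrative, not from the filing:

```python
def adjust_policy(miss_rate, current_policy, high=0.30, low=0.05):
    """Adjust the OS page allocation policy from memory-side cache feedback:
    a high miss rate suggests allocated pages are colliding in the cache, so
    switch to a policy that spreads pages across cache sets; a low miss rate
    lets the allocator fall back to its default policy."""
    if miss_rate > high:
        return "spread"
    if miss_rate < low:
        return "default"
    return current_policy  # in the dead band, keep the current policy

print(adjust_policy(0.40, "default"))  # -> spread
print(adjust_policy(0.01, "spread"))   # -> default
print(adjust_policy(0.10, "spread"))   # -> spread (unchanged)
```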
-
Publication number: 20190042458
Abstract: Cache on a persistent memory module is dynamically allocated as a prefetch cache or a write back cache to prioritize read and write operations to a persistent memory on the persistent memory module based on monitoring read/write accesses and/or user-selected allocation.
Type: Application
Filed: June 25, 2018
Publication date: February 7, 2019
Inventors: Karthik KUMAR, Francesc GUIM BERNAT, Benjamin GRANIELLO, Thomas WILLHALM, Mustafa HAJEER
-
Publication number: 20190042423
Abstract: A method is described. The method includes configuring different software programs that are to execute on a computer with customized hardware caching service levels. The available set of hardware caching levels comprises at least the L1, L2 and L3 caching levels, and at least one of the following hardware caching levels is available for customized support of a software program: L2, L3 and L4.
Type: Application
Filed: April 19, 2018
Publication date: February 7, 2019
Inventors: Karthik KUMAR, Benjamin GRANIELLO, Mark A. SCHMISSEUR, Thomas WILLHALM, Francesc GUIM BERNAT
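Per-program caching service levels can be pictured as a configuration table mapping a program class to the cache levels customized for it. The class names and level assignments below are invented for illustration; the patent only requires that at least one of L2, L3, L4 be customizable per program:

```python
# Hypothetical service-level table: program class -> customized cache levels.
SERVICE_LEVELS = {
    "latency-critical": ("L2",),        # reserve near-core cache capacity
    "bandwidth-bound":  ("L2", "L3"),
    "capacity-bound":   ("L3", "L4"),   # large-footprint workloads lean on far caches
}

def caching_levels(program_class):
    """Return the cache levels customized for a program class, falling back
    to no customization (default policy at every level) if unknown."""
    return SERVICE_LEVELS.get(program_class, ())

print(caching_levels("capacity-bound"))  # -> ('L3', 'L4')
print(caching_levels("unclassified"))    # -> ()
```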
-
Publication number: 20190042429
Abstract: Examples include a processor including a coherency mode indicating one of a directory-based cache coherency protocol and a snoop-based cache coherency protocol, and a caching agent to monitor a bandwidth of reading from and/or writing data to a memory coupled to the processor, to set the coherency mode to the snoop-based cache coherency protocol when the bandwidth exceeds a threshold, and to set the coherency mode to the directory-based cache coherency protocol when the bandwidth does not exceed the threshold.
Type: Application
Filed: April 3, 2018
Publication date: February 7, 2019
Inventors: Karthik KUMAR, Mustafa HAJEER, Thomas WILLHALM, Francesc GUIM BERNAT, Benjamin GRANIELLO
-
Patent number: 10031699
Abstract: Technology for a system operable to write and read data from memory is described. The system can include memory and a memory controller. The memory controller can send an instruction to write data to a NVM address in the memory at a time of last write (TOLW). The memory controller can determine to read the data from the NVM address in the memory at read time. The memory controller can determine a read voltage to read the data from the NVM address in the memory at the read time. The read voltage can be determined based on a difference between the TOLW and the read time, and a modeled voltage drift for the NVM address over a period of time.
Type: Grant
Filed: October 18, 2017
Date of Patent: July 24, 2018
Assignee: Intel Corporation
Inventors: Benjamin A. Graniello, Karthik Kumar
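The read-voltage computation in this abstract takes the elapsed time since the time of last write (TOLW) and feeds it through a drift model. The logarithmic form and all coefficients below are illustrative assumptions (cell-voltage drift in NVM is commonly modeled as roughly logarithmic in time), not the patent's model:

```python
import math

def read_voltage(base_mv, drift_mv_per_decade, tolw_s, read_time_s):
    """Compute the read voltage for an NVM address from a modeled drift:
    base voltage plus drift proportional to log10 of the seconds elapsed
    since the time of last write (clamped to >= 1 s to keep the log safe)."""
    elapsed = max(read_time_s - tolw_s, 1.0)
    return base_mv + drift_mv_per_decade * math.log10(elapsed)

# 1000 s after the last write: three decades of drift at 10 mV/decade.
print(read_voltage(base_mv=500.0, drift_mv_per_decade=10.0,
                   tolw_s=0.0, read_time_s=1000.0))  # -> 530.0
```

Tracking TOLW per address lets the controller compensate for drift instead of applying a worst-case read voltage to every cell.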