Patents by Inventor ZVIKA GREENFIELD

ZVIKA GREENFIELD has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230086149
    Abstract: Some embodiments include apparatuses and electrical models associated with the apparatus. One of the apparatuses includes a power control unit to monitor a power state of the apparatus for entry into a standby mode. The apparatus can include a two-level memory (2LM) hardware accelerator to, responsive to a notification from the power control unit of entry into the standby mode, flush dynamic random access memory (DRAM) content from a first memory part to a second memory part. The apparatus can include processing circuitry to determine memory utilization and move memory from a first memory portion to a second memory portion responsive to memory utilization exceeding a threshold. Other methods, systems, and apparatuses are described.
    Type: Application
    Filed: September 23, 2021
    Publication date: March 23, 2023
    Inventors: Chia-Hung S. Kuo, Deepak Gandiga Shivakumar, Anoop Mukker, Arik Gihon, Zvika Greenfield, Asaf Rubinstein, Leo Aqrabawi
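As a rough illustration of the flow described in the abstract above (publication 20230086149), the following Python sketch models a 2LM flush on standby entry and a utilization-driven migration between memory portions. The class, threshold, and function names are assumptions for illustration, not details from the application.

```python
# Hypothetical sketch only: the data structures and threshold are assumed.
STANDBY_UTILIZATION_THRESHOLD = 0.75  # assumed trigger for migrating pages

class TwoLevelMemory:
    def __init__(self, near_pages=None, far_pages=None):
        self.near = dict(near_pages or {})   # DRAM-backed first memory part
        self.far = dict(far_pages or {})     # second (e.g. persistent) memory part

    def flush_near_to_far(self):
        """Flush DRAM content to the second memory part (standby entry)."""
        self.far.update(self.near)
        self.near.clear()

    def utilization(self):
        total = len(self.near) + len(self.far)
        return len(self.near) / total if total else 0.0

    def rebalance(self, threshold=STANDBY_UTILIZATION_THRESHOLD):
        """Move pages from the first portion to the second while utilization is high."""
        while self.near and self.utilization() > threshold:
            addr, data = self.near.popitem()
            self.far[addr] = data

def on_standby_notification(mem: TwoLevelMemory):
    # The power control unit signals standby entry; the 2LM accelerator flushes DRAM.
    mem.flush_near_to_far()
```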
  • Patent number: 11188467
    Abstract: A method is described. The method includes receiving a read or write request for a cache line. The method includes directing the request to a set of logical super lines based on the cache line's system memory address. The method includes associating the request with a cache line of the set of logical super lines. The method includes, if the request is a write request: compressing the cache line to form a compressed cache line, breaking the cache line down into smaller data units and storing the smaller data units into a memory side cache. The method includes, if the request is a read request: reading smaller data units of the compressed cache line from the memory side cache and decompressing the cache line.
    Type: Grant
    Filed: September 28, 2017
    Date of Patent: November 30, 2021
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Alaa R. Alameldeen, Sreenivas Subramoney, Supratik Majumder, Srinivas Santosh Kumar Madugula, Jayesh Gaur, Zvika Greenfield, Anant V. Nori
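A minimal Python sketch of the write and read paths described in the abstract of patent 11188467: compress a cache line, break it into smaller data units for a memory-side cache, then reassemble and decompress on read. The unit size and data structures are assumptions, not taken from the patent.

```python
import zlib

UNIT_SIZE = 16  # assumed size of a "smaller data unit" in bytes

def write_cache_line(memory_side_cache, super_line_index, address, line: bytes):
    """Compress the line, split it into units, and store the units."""
    compressed = zlib.compress(line)
    units = [compressed[i:i + UNIT_SIZE] for i in range(0, len(compressed), UNIT_SIZE)]
    super_line_index[address] = len(units)        # remember how many units were stored
    for idx, unit in enumerate(units):
        memory_side_cache[(address, idx)] = unit  # store each smaller data unit

def read_cache_line(memory_side_cache, super_line_index, address) -> bytes:
    """Read the units back, reassemble, and decompress."""
    units = [memory_side_cache[(address, i)] for i in range(super_line_index[address])]
    return zlib.decompress(b"".join(units))
```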
  • Patent number: 11036412
    Abstract: A multilevel memory subsystem includes a persistent memory device that can access data chunks sequentially or randomly to improve read latency, or can prefetch data blocks to improve read bandwidth. A media controller dynamically switches between a first read mode of accessing data chunks sequentially or randomly and a second read mode of prefetching data blocks. The media controller switches between the first and second read modes based on a number of read commands pending in a command queue.
    Type: Grant
    Filed: September 27, 2019
    Date of Patent: June 15, 2021
    Assignee: Intel Corporation
    Inventors: Sahar Khalili, Zvika Greenfield, Sowmiya Jayachandran, Robert J. Royer, Jr., Dimpesh Patel
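As a toy illustration of the mode switching described in patent 11036412's abstract, the sketch below picks a read mode from the number of read commands pending in the command queue. The threshold value and mode names are assumptions.

```python
PREFETCH_QUEUE_DEPTH = 8  # assumed switch point

def select_read_mode(pending_read_commands: int) -> str:
    """Choose between latency-oriented chunk access and bandwidth-oriented prefetch."""
    if pending_read_commands >= PREFETCH_QUEUE_DEPTH:
        return "prefetch_blocks"   # deep queue: prefetch data blocks for read bandwidth
    return "access_chunks"         # shallow queue: sequential/random chunks for read latency
```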
  • Patent number: 10949356
    Abstract: A method is described. The method includes receiving notice of a page fault. A page targeted by a memory access instruction that resulted in the page fault resides in persistent memory without system memory status. In response to the page fault, page table information is updated to include a translation that points to the page in persistent memory, such that the page changes to system memory status without moving the page and system memory expands to include the page in persistent memory.
    Type: Grant
    Filed: June 14, 2019
    Date of Patent: March 16, 2021
    Assignee: Intel Corporation
    Inventors: James A. Boyd, Robert J. Royer, Jr., Lily P. Looi, Gary C. Chow, Zvika Greenfield, Chia-Hung S. Kuo, Dale J. Juenemann
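The following sketch, with assumed data structures, illustrates the page-fault handling summarized in the abstract of patent 10949356: rather than copying the faulting page, a translation is installed that points at the page where it already sits in persistent memory.

```python
page_table = {}         # virtual page number -> (memory kind, physical address)
persistent_memory = {}  # physical address -> page data, initially without system memory status

def handle_page_fault(virtual_page, persistent_address):
    """Give the persistent-memory page system memory status without moving it."""
    assert persistent_address in persistent_memory    # the page already resides there
    # Installing the translation expands system memory to include this page in place.
    page_table[virtual_page] = ("persistent", persistent_address)
```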
  • Publication number: 20210056030
    Abstract: A method is described. The method includes receiving a read or write request for a cache line. The method includes directing the request to a set of logical super lines based on the cache line's system memory address. The method includes associating the request with a cache line of the set of logical super lines. The method includes, if the request is a write request: compressing the cache line to form a compressed cache line, breaking the cache line down into smaller data units and storing the smaller data units into a memory side cache. The method includes, if the request is a read request: reading smaller data units of the compressed cache line from the memory side cache and decompressing the cache line.
    Type: Application
    Filed: November 6, 2020
    Publication date: February 25, 2021
    Inventors: Israel DIAMAND, Alaa R. ALAMELDEEN, Sreenivas SUBRAMONEY, Supratik MAJUMDER, Srinivas Santosh Kumar MADUGULA, Jayesh GAUR, Zvika GREENFIELD, Anant V. NORI
  • Patent number: 10915453
    Abstract: An apparatus is described. The apparatus includes a memory controller to interface to a multi-level system memory having first and second different cache structures. The memory controller has circuitry to service a read request by concurrently performing a look-up into the first and second different cache structures for a cache line that is targeted by the read request.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: February 9, 2021
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Zvika Greenfield, Julius Mandelblat, Asaf Rubinstein
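A small sketch of the concurrent look-up described in patent 10915453's abstract, using Python threads to stand in for parallel hardware look-up paths into the two cache structures. The structures here are plain dictionaries; all names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def service_read(address, first_cache: dict, second_cache: dict):
    """Look up both cache structures concurrently and return whichever one hit."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        first_hit = pool.submit(first_cache.get, address)
        second_hit = pool.submit(second_cache.get, address)
    result = first_hit.result()
    return result if result is not None else second_hit.result()
```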
  • Publication number: 20200226066
    Abstract: An apparatus is described. The apparatus includes a memory controller to interface with a multi-level memory having a near memory and a far memory. The memory controller to maintain first and second caches. The first cache to cache pages recently accessed from the far memory. The second cache to cache addresses of pages recently accessed from the far memory. The second cache having a first level and a second level. The first level to cache addresses of pages that are more recently accessed than pages whose respective addresses are cached in the second level. The memory controller comprising logic circuitry to inform system software that: a) a first page in the first cache that is accessed less than other pages in the first cache is a candidate for migration from the far memory to the near memory; and/or, b) a second page whose address travels a threshold number of round trips between the first and second levels of the second cache is a candidate for migration from the far memory to the near memory.
    Type: Application
    Filed: March 27, 2020
    Publication date: July 16, 2020
    Inventors: Eran SHIFER, Zeshan A. CHISHTI, Sanjay K. KUMAR, Zvika GREENFIELD, Philip LANTZ, Eshel SERLIN, Asaf RUBINSTEIN, Robert J. ROYER, JR.
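To make the migration hints in publication 20200226066's abstract concrete, here is a simplified sketch: the least-accessed page in the page cache, and any page whose address has made a threshold number of round trips between the two levels of the address cache, are reported as candidates for migration from far to near memory. The threshold and data shapes are assumptions.

```python
ROUND_TRIP_THRESHOLD = 4  # assumed number of level-1 <-> level-2 round trips

def migration_candidates(page_cache_access_counts: dict, address_round_trips: dict):
    """Return far-memory pages that system software should consider moving to near memory."""
    candidates = set()
    if page_cache_access_counts:
        # Least-accessed page currently held in the first cache.
        candidates.add(min(page_cache_access_counts, key=page_cache_access_counts.get))
    for address, trips in address_round_trips.items():
        if trips >= ROUND_TRIP_THRESHOLD:   # bounced between the address-cache levels
            candidates.add(address)
    return candidates
```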
  • Patent number: 10678706
    Abstract: Embodiments of the present disclosure are directed towards a computing device having a cache memory device with scrubber logic. In some embodiments, the scrubber logic controller may be coupled with the cache device and may select, for eviction from the cache device, a portion of the data stored in the cache device, based at least in part on one or more selection criteria, at a dynamically adjusted level of aggressiveness. The scrubber logic controller may adjust the level of aggressiveness of the selection based at least in part on a determined time left to complete the selection at the current level of aggressiveness. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 13, 2018
    Date of Patent: June 9, 2020
    Assignee: Intel Corporation
    Inventors: Zvika Greenfield, Eshel Serlin, Asaf Rubinstein, Eli Abadi
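A minimal sketch of the feedback loop described in the abstract of patent 10678706: the scrubber raises or lowers its aggressiveness level depending on the time left to finish selection at the current level. The levels, ratios, and units are assumed for illustration.

```python
def adjust_aggressiveness(level: int, time_left: float, estimated_time_needed: float,
                          max_level: int = 10) -> int:
    """Return the new aggressiveness level for the scrubber."""
    if estimated_time_needed > time_left:
        return min(max_level, level + 1)   # falling behind: select/evict more aggressively
    if estimated_time_needed < 0.5 * time_left:
        return max(0, level - 1)           # comfortably ahead: back off
    return level
```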
  • Patent number: 10657058
    Abstract: Interleaved cache controllers with shared metadata are disclosed and described. A memory system may comprise a plurality of cache controllers and a metadata store interconnected by a metadata store fabric. The metadata store receives information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata. The metadata store provides the plurality of cache controllers with shared access to the hosted shared distributed metadata.
    Type: Grant
    Filed: June 26, 2018
    Date of Patent: May 19, 2020
    Assignee: Intel Corporation
    Inventors: Daniel Greenspan, Zvika Greenfield
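Below is a rough Python analogue of the arrangement in patent 10657058's abstract: several cache controllers share one metadata store rather than keeping private copies, with a lock standing in for arbitration on the metadata store fabric. All names are assumptions.

```python
import threading

class SharedMetadataStore:
    """Shared distributed metadata accessible to every cache controller."""
    def __init__(self):
        self._metadata = {}
        self._lock = threading.Lock()   # stands in for metadata store fabric arbitration

    def update(self, line_address, info):
        with self._lock:
            self._metadata[line_address] = info

    def lookup(self, line_address):
        with self._lock:
            return self._metadata.get(line_address)

class CacheController:
    def __init__(self, controller_id, store: SharedMetadataStore):
        self.controller_id = controller_id
        self.store = store   # every interleaved controller uses the same shared store
```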
  • Patent number: 10558570
    Abstract: Described herein are embodiments of asymmetric memory management to enable high bandwidth accesses. In embodiments, a high bandwidth cache or high bandwidth region can be synthesized using the bandwidth capabilities of more than one memory source. In one embodiment, memory management circuitry includes input/output (I/O) circuitry coupled with a first memory and a second memory. The I/O circuitry is to receive memory access requests. The memory management circuitry also includes logic to determine if the memory access requests are for data in a first region of system memory or a second region of system memory, and in response to a determination that one of the memory access requests is to the first region and a second of the memory access requests is to the second region, access data in the first region from the cache of the first memory and concurrently access data in the second region from the second memory.
    Type: Grant
    Filed: February 24, 2017
    Date of Patent: February 11, 2020
    Assignee: Intel Corporation
    Inventors: Nadav Bonen, Zvika Greenfield, Randy Osborne
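As a simplified illustration of the asymmetric routing in patent 10558570's abstract, the sketch below sends requests in the first region to the first memory's cache and requests in the second region to the second memory, so the two can be serviced concurrently. The region boundary is an assumed value.

```python
FIRST_REGION_LIMIT = 1 << 30  # assumed 1 GiB boundary of the high-bandwidth region

def route_request(address: int) -> str:
    """Decide which memory source services a request, enabling concurrent access."""
    if address < FIRST_REGION_LIMIT:
        return "first_memory_cache"   # synthesized high-bandwidth region
    return "second_memory"            # remainder of system memory
```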
  • Publication number: 20200034061
    Abstract: A multilevel memory subsystem includes a persistent memory device that can access data chunks sequentially or randomly to improve read latency, or can prefetch data blocks to improve read bandwidth. A media controller dynamically switches between a first read mode of accessing data chunks sequentially or randomly and a second read mode of prefetching data blocks. The media controller switches between the first and second read modes based on a number of read commands pending in a command queue.
    Type: Application
    Filed: September 27, 2019
    Publication date: January 30, 2020
    Inventors: Sahar KHALILI, Zvika GREENFIELD, Sowmiya JAYACHANDRAN, Robert J. ROYER, JR., Dimpesh PATEL
  • Publication number: 20190303300
    Abstract: A method is described. The method includes receiving notice of a page fault. A page targeted by a memory access instruction that resulted in the page fault resides in persistent memory without system memory status. In response to the page fault, page table information is updated to include a translation that points to the page in persistent memory, such that the page changes to system memory status without moving the page and system memory expands to include the page in persistent memory.
    Type: Application
    Filed: June 14, 2019
    Publication date: October 3, 2019
    Inventors: James A. BOYD, Robert J. ROYER, JR., Lily P. LOOI, Gary C. CHOW, Zvika GREENFIELD, Chia-Hung S. KUO, Dale J. JUENEMANN
  • Patent number: 10304418
    Abstract: An electronic processing system may include a processor and a multi-level memory coupled to the processor, the multi-level memory including at least a main memory and a fast memory, the fast memory having relatively faster performance as compared to the main memory. The system may further include a fast memory controller coupled to the fast memory and a graphics controller coupled to the fast memory controller. The fast memory may include a cache portion allocated to a cache region to allow a corresponding mapping of elements of the main memory in the cache region, and a graphics portion allocated to a graphics region for the graphics controller with no corresponding mapping of the graphics region with the main memory.
    Type: Grant
    Filed: September 27, 2016
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventors: Daniel Greenspan, Randy Osborne, Zvika Greenfield, Israel Diamand, Asaf Rubinstein
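A toy partitioning sketch for patent 10304418's abstract: fast memory is split into a graphics portion owned by the graphics controller (with no main-memory mapping) and a cache portion that mirrors elements of main memory. The sizes are assumptions.

```python
FAST_MEMORY_SIZE = 4 << 30      # assumed 4 GiB of fast memory
GRAPHICS_PORTION = 1 << 30      # assumed 1 GiB allocated to the graphics region

def classify_fast_memory_offset(offset: int) -> str:
    """Report which region of the fast memory an offset falls in."""
    assert 0 <= offset < FAST_MEMORY_SIZE
    if offset < GRAPHICS_PORTION:
        return "graphics_region"   # used directly by the graphics controller
    return "cache_region"          # caches corresponding elements of main memory
```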
  • Publication number: 20190102314
    Abstract: An embodiment of a semiconductor package apparatus may include technology to determine a workload characteristic for a tag cache, and adjust a power parameter for the tag cache based on the workload characteristic. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: September 29, 2017
    Publication date: April 4, 2019
    Inventors: Zhe Wang, Zeshan Chishti, Nagi Aboulenein, Zvika Greenfield
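A minimal sketch of the idea in publication 20190102314's abstract, using hit rate as an assumed workload characteristic and a coarse power setting as the adjusted power parameter.

```python
def tag_cache_power_setting(hit_rate: float) -> str:
    """Map an observed tag-cache workload characteristic to a power setting."""
    if hit_rate > 0.9:
        return "low_power"     # well-behaved workload: throttle the tag cache
    if hit_rate > 0.5:
        return "balanced"
    return "full_power"        # miss-heavy workload: keep the tag cache fully powered
```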
  • Publication number: 20190095331
    Abstract: A method is described. The method includes receiving a read or write request for a cache line. The method includes directing the request to a set of logical super lines based on the cache line's system memory address. The method includes associating the request with a cache line of the set of logical super lines. The method includes, if the request is a write request: compressing the cache line to form a compressed cache line, breaking the cache line down into smaller data units and storing the smaller data units into a memory side cache. The method includes, if the request is a read request: reading smaller data units of the compressed cache line from the memory side cache and decompressing the cache line.
    Type: Application
    Filed: September 28, 2017
    Publication date: March 28, 2019
    Inventors: Israel DIAMAND, Alaa R. ALAMELDEEN, Sreenivas SUBRAMONEY, Supratik MAJUMDER, Srinivas Santosh Kumar MADUGULA, Jayesh GAUR, Zvika GREENFIELD, Anant V. NORI
  • Patent number: 10241916
    Abstract: Provided are an apparatus, system, and method for sparse superline removal. In response to occupancy of a replacement tracker (RT) exceeding an RT eviction watermark, an eviction process is triggered for evicting a superline from a sectored cache storing at least one superline. An eviction candidate is selected from superlines that have: 1) a sector usage below or equal to a superline low watermark and 2) an RT timestamp that is greater than a superline age watermark.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Zvika Greenfield, Zeshan A. Chishti, Israel Diamand
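The watermark logic in patent 10241916's abstract maps naturally to a short sketch: once replacement-tracker occupancy exceeds its eviction watermark, pick a superline whose sector usage is at or below the low watermark and whose RT timestamp exceeds the age watermark. Watermark values and object fields are assumed.

```python
RT_EVICTION_WATERMARK = 48      # assumed RT occupancy that triggers eviction
SUPERLINE_LOW_WATERMARK = 2     # assumed maximum sector usage of a "sparse" superline
SUPERLINE_AGE_WATERMARK = 1000  # assumed minimum RT timestamp (age)

def select_eviction_candidate(rt_occupancy: int, superlines: list):
    """superlines: objects with .sector_usage and .rt_timestamp attributes."""
    if rt_occupancy <= RT_EVICTION_WATERMARK:
        return None                      # no eviction needed yet
    candidates = [s for s in superlines
                  if s.sector_usage <= SUPERLINE_LOW_WATERMARK
                  and s.rt_timestamp > SUPERLINE_AGE_WATERMARK]
    return candidates[0] if candidates else None
```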
  • Patent number: 10210925
    Abstract: A memory controller issues a targeted refresh command. A specific row of a memory device can be the target of repeated accesses. When the row is accessed repeatedly within a time threshold (also referred to as "hammered" or a "row hammer event"), a physically adjacent row (a "victim" row) may experience data corruption. The memory controller receives an indication of a row hammer event, identifies the row associated with the row hammer event, and sends one or more commands to the memory device to cause the memory device to perform a targeted refresh that will refresh the victim row.
    Type: Grant
    Filed: December 7, 2017
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventors: Kuljit S. Bains, John B. Halbert, Christopher P. Mozak, Theodore Z. Schoenborn, Zvika Greenfield
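A hedged sketch of the row-hammer handling summarized in patent 10210925's abstract: when a row's activation count crosses an assumed threshold, the controller issues targeted refreshes toward the physically adjacent victim rows. The threshold and callback are illustrative, not the patented implementation.

```python
HAMMER_THRESHOLD = 10000   # assumed activations within the refresh window

activation_counts = {}     # row address -> activations in the current window

def on_row_activate(row: int, issue_targeted_refresh) -> None:
    """Track activations and trigger a targeted refresh of the victim rows."""
    activation_counts[row] = activation_counts.get(row, 0) + 1
    if activation_counts[row] >= HAMMER_THRESHOLD:
        issue_targeted_refresh(row - 1)   # physically adjacent victim row
        issue_targeted_refresh(row + 1)   # physically adjacent victim row
        activation_counts[row] = 0
```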
  • Publication number: 20190050346
    Abstract: Embodiments of the present disclosure are directed towards a computing device having a cache memory device with scrubber logic. In some embodiments, the scrubber logic controller may be coupled with the cache device and may select, for eviction from the cache device, a portion of the data stored in the cache device, based at least in part on one or more selection criteria, at a dynamically adjusted level of aggressiveness. The scrubber logic controller may adjust the level of aggressiveness of the selection based at least in part on a determined time left to complete the selection at the current level of aggressiveness. Other embodiments may be described and/or claimed.
    Type: Application
    Filed: March 13, 2018
    Publication date: February 14, 2019
    Inventors: Zvika Greenfield, Eshel Serlin, Asaf Rubinstein, Eli Abadi
  • Patent number: 10204047
    Abstract: An apparatus is described that includes a memory controller having an interface to couple to a multi-level system memory. The memory controller also includes a coherency buffer and coherency services logic circuitry. The coherency buffer is to keep cache lines for which read and/or write requests have been received. The coherency services logic circuitry is coupled to the interface and the coherency buffer. The coherency services logic circuitry is to merge a cache line that has been evicted from a level of the multi-level system memory with another version of the cache line within the coherency buffer before writing the cache line back to a deeper level of the multi-level system memory if at least one of the following is true: the other version of the cache line is in a modified state; the memory controller has a pending write request for the cache line.
    Type: Grant
    Filed: March 27, 2015
    Date of Patent: February 12, 2019
    Assignee: Intel Corporation
    Inventors: Israel Diamand, Nir Misgav, Aravindh Anantaraman, Zvika Greenfield
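A simplified sketch of the merge rule in patent 10204047's abstract: an evicted cache line is merged with the coherency-buffer copy before being written to the deeper memory level, but only if that copy is modified or a write request for the line is pending. Here a line is modeled as a dict of offset-to-byte values; all structures are assumptions.

```python
def write_back_evicted_line(line_addr, evicted_data: dict, coherency_buffer: dict,
                            pending_writes: set, deeper_level: dict) -> None:
    """Merge with the coherency buffer copy when required, then write back."""
    buffered = coherency_buffer.get(line_addr)
    if buffered and (buffered["modified"] or line_addr in pending_writes):
        # The buffered copy's bytes take precedence over the evicted copy's bytes.
        evicted_data = {**evicted_data, **buffered["data"]}
    deeper_level[line_addr] = evicted_data
```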
  • Patent number: 10176099
    Abstract: An apparatus includes a cache controller, the cache controller to receive, from a requestor, a memory access request referencing a memory address of a memory. The cache controller may identify a cache entry associated with the memory address, and responsive to determining that a first data item stored in the cache entry matches a data pattern indicating cache entry invalidity, read a second data item from a memory location identified by the memory address. The cache controller may then return, to the requestor, a response comprising the second data item.
    Type: Grant
    Filed: July 11, 2016
    Date of Patent: January 8, 2019
    Assignee: Intel Corporation
    Inventors: Jayesh Gaur, Supratik Majumder, Zvika Greenfield, Israel Diamand
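Finally, a small sketch of the pattern-based invalidity check in patent 10176099's abstract: if the data held in the cache entry matches a reserved pattern indicating the entry is invalid, the controller reads the backing memory instead and returns that data. The reserved pattern is an assumed placeholder.

```python
INVALID_PATTERN = b"\xde\xad\xbe\xef" * 16   # assumed 64-byte reserved "invalid" pattern

def serve_memory_access(address, cache: dict, memory: dict) -> bytes:
    """Return cached data unless it matches the invalidity pattern (or is absent)."""
    cached = cache.get(address)
    if cached is not None and cached != INVALID_PATTERN:
        return cached            # valid cache hit
    return memory[address]       # invalidity pattern or miss: read from memory
```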