Patents by Inventor Mariusz Barczak

Mariusz Barczak has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10599585
    Abstract: A method and apparatus for caching data accessed in a storage device, which include a selection of a list from a plurality of lists based on a cache block accessed from a cache memory, the cache memory being partitioned into a plurality of cache portions, each of the plurality of lists being assigned to a respective cache portion of the plurality of cache portions, each of the plurality of lists indicating an order in which cache blocks of the respective cache portion were accessed. Furthermore, a determination as to whether the accessed cache block meets a list update criterion, and an update of the order in which cache blocks assigned to the selected list were accessed from the cache memory, based on determining that the accessed cache block meets the list update criterion, may also be included.
    Type: Grant
    Filed: March 23, 2017
    Date of Patent: March 24, 2020
    Assignee: Intel Corporation
    Inventors: Michal Wysoczanski, Mariusz Barczak
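The entry above describes selecting a per-portion access-order list and reordering it only when an update criterion is met. Below is a minimal Python sketch of that idea, not the patented implementation; the `PartitionedCache` class, the portion-mapping rule, and the "not already most recent" criterion are assumptions made for illustration.

```python
# Sketch: per-partition access-order lists. Each cache portion keeps its own
# LRU-style ordering, and an update criterion decides whether an access
# actually reorders the list.
from collections import OrderedDict

class PartitionedCache:
    def __init__(self, num_portions, blocks_per_portion):
        self.blocks_per_portion = blocks_per_portion
        # One ordered list per cache portion; most recently accessed block last.
        self.lists = [OrderedDict() for _ in range(num_portions)]

    def _portion_of(self, block_id):
        # Hypothetical mapping of a cache block to its portion.
        return (block_id // self.blocks_per_portion) % len(self.lists)

    def _meets_update_criterion(self, lst, block_id):
        # Hypothetical criterion: only reorder if the block is not already
        # the most recently used entry of its list.
        return not lst or next(reversed(lst)) != block_id

    def access(self, block_id):
        lst = self.lists[self._portion_of(block_id)]   # select the list
        if self._meets_update_criterion(lst, block_id):
            lst.pop(block_id, None)                    # drop the old position
            lst[block_id] = True                       # record as most recent

cache = PartitionedCache(num_portions=4, blocks_per_portion=256)
for blk in (0, 1, 0, 300):
    cache.access(blk)
print(list(cache.lists[0]))   # access order within portion 0: [1, 0]
```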
  • Patent number: 10452546
    Abstract: Examples may include techniques to monitor processing of I/O requests of an application being executed by a computing platform by collecting a trace of the I/O requests, the trace including an I/O class of each I/O request; replay the trace and automatically analyze possible cache configuration policies for using a cache during execution of the application by the computing platform; and determine an optimal cache configuration policy for the cache from the possible cache configuration policies. The optimal cache configuration policy may then be applied to use of the cache during subsequent execution of the application by the computing platform.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 22, 2019
    Assignee: Intel Corporation
    Inventors: Michael Mesnier, Arun Raghunath, Mariusz Barczak, John Keys
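A rough Python sketch of the trace-and-replay idea in the entry above: collect a trace tagged with I/O classes, replay it against candidate cache configuration policies, and keep the policy with the best hit rate. The trace contents, class names, LRU simulation, and candidate policies are hypothetical simplifications, not the patented analysis.

```python
# Sketch: replay a collected I/O trace against candidate cache policies and
# choose the policy with the highest simulated hit rate.
from collections import OrderedDict

trace = [  # (lba, io_class) pairs collected while the application ran
    (10, "metadata"), (50, "data"), (60, "data"),
    (10, "metadata"), (70, "data"), (10, "metadata"),
]

def replay(trace, cache_size, cacheable_classes):
    """Simulate a small LRU cache that only admits the given I/O classes."""
    cache, hits = OrderedDict(), 0
    for lba, io_class in trace:
        if lba in cache:
            hits += 1
            cache.move_to_end(lba)
        elif io_class in cacheable_classes:
            cache[lba] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)      # evict least recently used
    return hits / len(trace)

policies = {
    "cache-everything": {"metadata", "data"},
    "metadata-only": {"metadata"},
}
best = max(policies, key=lambda name: replay(trace, cache_size=2,
                                             cacheable_classes=policies[name]))
print("chosen policy:", best)   # metadata-only wins on this trace
```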
  • Patent number: 10318450
    Abstract: Technology for an apparatus is described. The apparatus can include a memory controller with circuitry configured to define a caching and processing priority policy for one or more input/output (I/O) request class types. The memory controller can monitor one or more I/O contexts of one or more I/O requests. The memory controller can associate the one or more I/O contexts with one or more I/O class types using an I/O context association table. The memory controller can execute the one or more I/O requests according to the caching and processing priority policy of the one or more I/O class types. The apparatus can include an interface to the memory controller.
    Type: Grant
    Filed: July 1, 2016
    Date of Patent: June 11, 2019
    Assignee: Intel Corporation
    Inventors: Maciej Kaminski, Piotr Wysocki, Mariusz Barczak
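The entry above centers on an I/O context association table and a per-class caching and processing priority policy. The sketch below is a minimal Python illustration under assumptions: the context keys, class names, and policy fields are invented for the example and do not come from the patent.

```python
# Sketch: map an I/O's context (here, the issuing process name) to an I/O
# class, then apply a per-class caching and processing priority policy.
context_to_class = {          # hypothetical I/O context association table
    "database.exe": "journal",
    "backup.exe": "bulk",
}
priority_policy = {           # hypothetical caching/processing priority policy
    "journal": {"priority": 0, "cache": True},
    "bulk":    {"priority": 2, "cache": False},
    "default": {"priority": 1, "cache": True},
}

def classify(io_request):
    return context_to_class.get(io_request["context"], "default")

def execute(io_requests):
    # Process requests in priority order; decide caching from the policy.
    for req in sorted(io_requests, key=lambda r: priority_policy[classify(r)]["priority"]):
        rule = priority_policy[classify(req)]
        print(f"lba={req['lba']} class={classify(req)} cached={rule['cache']}")

execute([
    {"context": "backup.exe", "lba": 900},
    {"context": "database.exe", "lba": 10},
])
```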
  • Publication number: 20190095336
    Abstract: A host computing arrangement is provided, which may include a host processor having a host operating system and host kernel associated therewith. The host processor may be configured to host a guest operating system, mirror a filesystem of the guest operating system via the host kernel, and generate caching criteria by scanning the mirrored filesystem. The host computing arrangement may further include a cache engine. The cache engine may be configured to process an I/O request from the guest operating system based on the caching criteria generated by the host processor.
    Type: Application
    Filed: September 28, 2017
    Publication date: March 28, 2019
    Inventor: Mariusz Barczak
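To illustrate the entry above, here is a heavily simplified Python sketch: the "mirrored filesystem" is reduced to a mapping of file paths to block ranges, the caching criteria come from scanning that mapping, and the cache engine checks each guest I/O against the criteria. All names, thresholds, and data are hypothetical.

```python
# Sketch: the host scans a mirror of the guest filesystem to generate caching
# criteria, and the cache engine consults those criteria per guest I/O.
mirrored_fs = {                      # path -> (first_block, block_count)
    "/var/lib/db/index.ibd": (1000, 64),
    "/var/log/archive.tar":  (5000, 4096),
}

def generate_caching_criteria(fs, max_cacheable_blocks=256):
    """Mark blocks belonging to small files as cache-worthy."""
    criteria = set()
    for path, (start, count) in fs.items():
        if count <= max_cacheable_blocks:
            criteria.update(range(start, start + count))
    return criteria

criteria = generate_caching_criteria(mirrored_fs)

def handle_guest_io(block):
    # Cache engine: decide per request based on host-generated criteria.
    return "cache" if block in criteria else "pass-through"

print(handle_guest_io(1010))   # block of the small index file -> "cache"
print(handle_guest_io(6000))   # block of the large archive    -> "pass-through"
```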
  • Patent number: 10228874
    Abstract: An embodiment of a storage apparatus may include persistent storage media, a namespace having backend storage, and a virtual function controller communicatively coupled to the persistent storage media and the namespace to assign the namespace to a virtual storage function and to control access to the namespace by the virtual storage function. The virtual function controller may be further configured to cache access to the namespace on the persistent storage media. Other embodiments are disclosed and claimed.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: March 12, 2019
    Assignee: Intel Corporation
    Inventors: Piotr Wysocki, Mariusz Barczak
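A minimal Python sketch of the entry above, under assumptions: a virtual function controller assigns a namespace to a virtual storage function, rejects access by other functions, and caches reads on persistent media. The dictionaries standing in for backend storage and cache media, and the class itself, are invented for the example.

```python
# Sketch: namespace assignment, access control, and cached namespace access.
class VirtualFunctionController:
    def __init__(self):
        self.assignments = {}     # namespace id -> virtual function id
        self.backend = {}         # (namespace, lba) -> data (backend storage)
        self.cache = {}           # (namespace, lba) -> data (persistent cache media)

    def assign(self, namespace, vf):
        self.assignments[namespace] = vf

    def read(self, vf, namespace, lba):
        if self.assignments.get(namespace) != vf:
            raise PermissionError("namespace not assigned to this virtual function")
        key = (namespace, lba)
        if key not in self.cache:                 # cache access to the namespace
            self.cache[key] = self.backend.get(key, b"\0")
        return self.cache[key]

ctrl = VirtualFunctionController()
ctrl.assign(namespace=1, vf="vf0")
ctrl.backend[(1, 42)] = b"hello"
print(ctrl.read("vf0", 1, 42))    # served from backend, then cached
print(ctrl.read("vf0", 1, 42))    # served from the cache
```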
  • Patent number: 10223271
    Abstract: Provided are an apparatus, computer program product, and method to perform cache operations in a solid state drive. A cache memory determines whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme. Multiple storage addresses in the primary storage namespace map to one address in the cache memory namespace. The cache memory returns to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.
    Type: Grant
    Filed: March 31, 2017
    Date of Patent: March 5, 2019
    Assignee: Intel Corporation
    Inventors: Mariusz Barczak, Piotr Wysocki
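The many-to-one mapping described in the entry above can be pictured as a direct-mapped cache with a tag per slot. The Python sketch below is only an illustration of that mapping scheme, not the drive's actual implementation; the modulo mapping, the slot count, and the data values are assumptions.

```python
# Sketch: many primary-storage addresses map to one cache address; a per-slot
# tag records which primary address currently occupies the slot.
CACHE_SLOTS = 8

tags = [None] * CACHE_SLOTS        # which primary address occupies each slot
cache_data = [None] * CACHE_SLOTS  # data stored in the cache namespace

def cache_slot(primary_addr):
    # Many primary addresses map to the same cache address (modulo mapping).
    return primary_addr % CACHE_SLOTS

def read(primary_addr, backend):
    slot = cache_slot(primary_addr)
    if tags[slot] == primary_addr:          # hit: cached data is for this address
        return cache_data[slot], "hit"
    data = backend[primary_addr]            # miss: fetch and install
    tags[slot], cache_data[slot] = primary_addr, data
    return data, "miss"

backend = {3: b"A", 11: b"B"}               # 3 and 11 both map to slot 3
print(read(3, backend))                     # miss, installs address 3
print(read(3, backend))                     # hit
print(read(11, backend))                    # miss, replaces address 3 in slot 3
```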
  • Publication number: 20190042470
    Abstract: Examples may include techniques to improve cache performance in a computing system. An eviction service may be used to manage a dirty list and a clean list, set a cache line to hot, set a cache line to clean, set a cache line to dirty, and evict a cache line from the cache. A cache engine may be used to write data into the cache at a cache line, request the eviction service to set the cache line to dirty, and manage a dirty cache lines counter for each chunk of the primary memory. A cleaning thread may be used to determine a dirtiest chunk of a primary memory, get a cache line of the dirtiest chunk, and when the cache line of the dirtiest chunk is dirty, read the cache line to get data from the cache, write the data to primary memory, request the eviction service to set the cache line to clean, and manage the dirty cache lines counters.
    Type: Application
    Filed: March 2, 2018
    Publication date: February 7, 2019
    Inventors: Mariusz Barczak, Igor Konopko, Adam Rutkowski
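A compact Python sketch of the mechanism described in the entry above: dirty and clean lists, a per-chunk dirty-line counter over primary memory, and a cleaning pass that flushes the dirtiest chunk first. The data structures and chunk size are hypothetical stand-ins, not the patented design.

```python
# Sketch: dirty/clean bookkeeping plus per-chunk dirty counters that steer cleaning.
LINES_PER_CHUNK = 4

cache = {}            # lba -> data held in the cache
dirty, clean = set(), set()
dirty_per_chunk = {}  # chunk id -> number of dirty cache lines
primary = {}          # stand-in for primary memory

def chunk_of(lba):
    return lba // LINES_PER_CHUNK

def write(lba, data):
    # Cache engine: write into the cache, mark the line dirty, bump the counter.
    cache[lba] = data
    if lba not in dirty:
        dirty.add(lba)
        clean.discard(lba)
        dirty_per_chunk[chunk_of(lba)] = dirty_per_chunk.get(chunk_of(lba), 0) + 1

def clean_pass():
    # Cleaning thread: flush lines belonging to the chunk with the most dirty lines.
    if not dirty_per_chunk:
        return
    dirtiest = max(dirty_per_chunk, key=dirty_per_chunk.get)
    for lba in [l for l in dirty if chunk_of(l) == dirtiest]:
        primary[lba] = cache[lba]          # write the data back to primary memory
        dirty.remove(lba)
        clean.add(lba)                     # eviction service: mark the line clean
        dirty_per_chunk[dirtiest] -= 1
    if dirty_per_chunk[dirtiest] == 0:
        del dirty_per_chunk[dirtiest]

write(0, "a"); write(1, "b"); write(9, "c")
clean_pass()                               # flushes chunk 0 (two dirty lines)
print(sorted(primary))                     # [0, 1]
```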
  • Publication number: 20190042386
    Abstract: A technology is described for a logical storage driver. An example method can include using the logical storage driver to: forward I/O requests to a first storage stack for processing of an I/O workload associated with those requests; initiate generation of trace data for the I/O workload, for collection and analysis to determine a second storage stack that improves performance of the I/O workload; receive storage processing logic for processing the I/O workload using the storage configuration for the I/O workload, where the storage processing logic interfaces with the storage configuration; intercept the I/O requests that correspond to the I/O workload; and process the I/O workload using the storage processing logic that interfaces with the storage configuration.
    Type: Application
    Filed: December 27, 2017
    Publication date: February 7, 2019
    Applicant: Intel Corporation
    Inventors: Mariusz Barczak, Michal Wysoczanski, Andrzej Jakowski
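The entry above describes a driver that forwards requests, emits trace data, and later routes intercepted requests through newly received processing logic. The Python sketch below illustrates that flow under assumptions; the `LogicalStorageDriver` class, the stand-in stack, and the lambda used as processing logic are invented for the example.

```python
# Sketch: a logical storage driver that traces, forwards, and later intercepts I/O.
import time

class LogicalStorageDriver:
    def __init__(self, storage_stack):
        self.stack = storage_stack        # first storage stack
        self.trace = []                   # trace data for the I/O workload
        self.processing_logic = None      # installed later, after analysis

    def submit(self, request):
        # Record trace data, then intercept or forward the request.
        self.trace.append({"ts": time.time(), **request})
        if self.processing_logic is not None:
            return self.processing_logic(request, self.stack)
        return self.stack.handle(request)

class SimpleStack:
    def handle(self, request):
        return f"handled {request['op']} lba={request['lba']}"

driver = LogicalStorageDriver(SimpleStack())
driver.submit({"op": "read", "lba": 7})              # forwarded and traced
driver.processing_logic = lambda req, stack: (       # logic received after analysis
    "cached read" if req["op"] == "read" else stack.handle(req))
print(driver.submit({"op": "read", "lba": 7}))       # intercepted by the new logic
print(len(driver.trace), "trace entries collected")
```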
  • Publication number: 20190034120
    Abstract: An embodiment of a semiconductor package apparatus may include technology to determine a stream classification for an access request to a persistent storage media, and assign the access request to a stream based on the stream classification. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: December 29, 2017
    Publication date: January 31, 2019
    Inventors: Mariusz Barczak, Dhruvil Shah, Kapil Karkra, Andrzej Jakowski, Piotr Wysocki
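A short Python sketch of the stream-classification idea in the entry above. The classification rules, stream IDs, and request fields are hypothetical; the point is only to show classifying an access request and assigning it to a stream.

```python
# Sketch: determine a stream classification for each access request and assign
# the request to a stream ID, so writes with similar lifetimes land together.
def classify(request):
    # Hypothetical rules: classify by origin and write size.
    if request.get("origin") == "journal":
        return "short-lived"
    if request["length"] >= 1024 * 1024:
        return "sequential-bulk"
    return "general"

stream_ids = {"short-lived": 1, "sequential-bulk": 2, "general": 3}

def assign_stream(request):
    classification = classify(request)          # determine stream classification
    return stream_ids[classification]           # assign the request to a stream

print(assign_stream({"origin": "journal", "length": 4096}))        # -> 1
print(assign_stream({"origin": "app", "length": 4 * 1024 * 1024})) # -> 2
```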
  • Publication number: 20190034339
    Abstract: Examples may include techniques to monitor processing of I/O requests of an application being executed by a computing platform by collecting a trace of the I/O requests, the trace including an I/O class of each I/O request; replay the trace and automatically analyze possible cache configuration policies for using a cache during execution of the application by the computing platform; and determine an optimal cache configuration policy for the cache from the possible cache configuration policies. The optimal cache configuration policy may then be applied to use of the cache during subsequent execution of the application by the computing platform, thereby improving performance.
    Type: Application
    Filed: December 21, 2017
    Publication date: January 31, 2019
    Inventors: Michael Mesnier, Arun Raghunath, Mariusz Barczak, John Keys
  • Publication number: 20180285275
    Abstract: Provided are an apparatus, computer program product, and method to perform cache operations in a solid state drive. A cache memory determines whether data for a requested storage address in a primary storage namespace received from a host system is stored at an address in the cache memory namespace to which the requested storage address maps according to a cache mapping scheme. Multiple storage addresses in the primary storage namespace map to one address in the cache memory namespace. The cache memory returns to the host system the data at the requested address stored in the cache memory namespace in response to determining that the data for the requested storage address is stored in the cache memory namespace.
    Type: Application
    Filed: March 31, 2017
    Publication date: October 4, 2018
    Inventors: Mariusz Barczak, Piotr Wysocki
  • Publication number: 20180276139
    Abstract: A method and apparatus for caching data accessed in a storage device, which include a selection of a list from a plurality of lists based on a cache block accessed from a cache memory, the cache memory being partitioned into a plurality of cache portions, each of the plurality of lists being assigned to a respective cache portion of the plurality of cache portions, each of the plurality of lists indicating an order in which cache blocks of the respective cache portion were accessed. Furthermore, a determination as to whether the accessed cache block meets a list update criterion, and an update of the order in which cache blocks assigned to the selected list were accessed from the cache memory, based on determining that the accessed cache block meets the list update criterion, may also be included.
    Type: Application
    Filed: March 23, 2017
    Publication date: September 27, 2018
    Inventors: Michal Wysoczanski, Mariusz Barczak
  • Publication number: 20180188985
    Abstract: An embodiment of a storage apparatus may include persistent storage media, a namespace having backend storage, and a virtual function controller communicatively coupled to the persistent storage media and the namespace to assign the namespace to a virtual storage function and to control access to the namespace by the virtual storage function. The virtual function controller may be further configured to cache access to the namespace on the persistent storage media. Other embodiments are disclosed and claimed.
    Type: Application
    Filed: December 29, 2016
    Publication date: July 5, 2018
    Inventors: Piotr Wysocki, Mariusz Barczak
  • Publication number: 20180067854
    Abstract: Methods and apparatus related to an aggressive write-back cache cleaning policy optimized for Non-Volatile Memory (NVM) are described. In one embodiment, dirty cache lines are sorted by their LBA (Logical Block Address) on backend storage and an attempt is made to first flush (or remove) the largest sequential portions (each including one or more cache lines). Other embodiments are also disclosed and claimed.
    Type: Application
    Filed: September 7, 2016
    Publication date: March 8, 2018
    Applicant: Intel Corporation
    Inventors: Maciej Kaminski, Mariusz Barczak
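A minimal Python sketch of the LBA-sorted cleaning idea in the entry above: sort dirty cache lines by backend LBA, group them into sequential runs, and flush the largest run first. The run-grouping helper and example LBAs are assumptions for illustration, not the patented policy.

```python
# Sketch: choose a write-back cleaning order that favors large sequential runs.
def sequential_runs(dirty_lbas):
    """Group sorted LBAs into maximal sequential runs."""
    runs, run = [], []
    for lba in sorted(dirty_lbas):
        if run and lba != run[-1] + 1:
            runs.append(run)
            run = []
        run.append(lba)
    if run:
        runs.append(run)
    return runs

def cleaning_order(dirty_lbas):
    # Largest sequential portion first.
    return sorted(sequential_runs(dirty_lbas), key=len, reverse=True)

dirty = {7, 100, 101, 102, 103, 8, 55}
print(cleaning_order(dirty))   # [[100, 101, 102, 103], [7, 8], [55]]
```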
  • Publication number: 20180004690
    Abstract: Technology for an apparatus is described. The apparatus can include a memory controller with circuitry configured to define a caching and processing priority policy for one or more input/output (I/O) request class types. The memory controller can monitor one or more I/O contexts of one or more I/O requests. The memory controller can associate the one or more I/O contexts with one or more I/O class types using an I/O context association table. The memory controller can execute the one or more I/O requests according to the caching and processing priority policy of the one or more I/O class types. The apparatus can include an interface to the memory controller.
    Type: Application
    Filed: July 1, 2016
    Publication date: January 4, 2018
    Applicant: Intel Corporation
    Inventors: Maciej Kaminski, Piotr Wysocki, Mariusz Barczak