Patents by Inventor Luca Bert

Luca Bert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210124499
    Abstract: A processing device, operatively coupled with the memory device, is configured to provide a plurality of functions for accessing the memory device, wherein a function of the plurality of functions receives input/output (I/O) operations from a host computing system. The processing device further determines a quality of service level of each function of the plurality of functions, and assigns to each function of the plurality of functions a corresponding function weight based on a corresponding quality of service level. The processing device also selects, for execution, a subset of the I/O operations, the subset comprising a number of I/O operations received at each function of the plurality of functions, wherein the number of I/O operations is determined according to the corresponding function weight of each function. The processing device then executes the subset of I/O operations at the memory device.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Inventor: Luca Bert
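    Illustration: a minimal Python sketch, not taken from the filing, of the weighted selection the abstract describes: each function's quality-of-service level maps to a weight, and each scheduling round drains up to that many I/Os per function. The QoS-to-weight table, class names, and round size are assumptions.

```python
# Hypothetical sketch of weighted I/O selection across functions: a function's
# QoS level determines how many of its queued I/Os are picked each round.
from collections import deque

QOS_WEIGHTS = {"high": 4, "medium": 2, "low": 1}  # assumed mapping, not from the filing

class Function:
    def __init__(self, name, qos_level):
        self.name = name
        self.weight = QOS_WEIGHTS[qos_level]
        self.queue = deque()          # I/O operations received from the host

    def submit(self, io):
        self.queue.append(io)

def select_round(functions):
    """Pick a subset of queued I/Os: up to `weight` operations per function."""
    subset = []
    for fn in functions:
        for _ in range(min(fn.weight, len(fn.queue))):
            subset.append((fn.name, fn.queue.popleft()))
    return subset

if __name__ == "__main__":
    fns = [Function("pf0", "high"), Function("pf1", "low")]
    for i in range(6):
        fns[0].submit(f"io-a{i}")
        fns[1].submit(f"io-b{i}")
    print(select_round(fns))   # 4 ops from pf0, 1 from pf1 in this round
```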
  • Publication number: 20210124498
    Abstract: A processing device in a memory system receives a request to execute a first operation of a first input/output (I/O) operation type at a memory device. The processing device further determines whether a second operation of a second I/O operation type is being executed at the memory device. Responsive to determining that the second operation is being executed, the processing device suspends the second operation after a delay time period, where the delay time period corresponds to a first operation weight of the first operation and a second operation weight of the second operation, executes the first operation at the memory device, and, responsive to determining that executing the first operation is complete, resumes execution of the second operation at the memory device.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Inventor: Luca Bert
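    Illustration: a minimal Python sketch, not taken from the filing, of suspending an in-flight operation only after a delay derived from the two operation weights. The weight table, delay formula, and timing constants are assumptions.

```python
# Hypothetical sketch of suspend-after-delay: an in-flight operation of one type
# is suspended only after a delay computed from its weight and the weight of the
# incoming operation. Weight values and the formula are illustrative.
import time

WEIGHTS = {"read": 4, "write": 2, "erase": 1}   # illustrative values
BASE_DELAY_S = 0.010                             # illustrative unit delay

def delay_before_suspend(incoming, in_flight):
    """Heavier incoming operations preempt sooner; heavier in-flight ones hold on longer."""
    return BASE_DELAY_S * WEIGHTS[in_flight] / WEIGHTS[incoming]

def serve(incoming, in_flight=None):
    if in_flight is None:
        return f"execute {incoming} immediately"
    d = delay_before_suspend(incoming, in_flight)
    time.sleep(d)                                # wait, then suspend the in-flight op
    return (f"suspended {in_flight} after {d*1000:.1f} ms, "
            f"executed {incoming}, resumed {in_flight}")

if __name__ == "__main__":
    print(serve("read"))                         # nothing in flight
    print(serve("read", in_flight="erase"))      # erase yields quickly to a read
    print(serve("write", in_flight="read"))      # read holds on longer against a write
```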
  • Publication number: 20210124497
    Abstract: A processing device, operatively coupled with the memory device, is configured to provide a plurality of functions for accessing the memory device, wherein a function of the plurality of functions receives input/output (I/O) operations from a host computing system. The processing device further selects a first function of the plurality of functions to service and assigns a first operation weight to a first I/O operation type of I/O operations received at the first function and a second operation weight to a second I/O operation type of I/O operations received at the first function. The processing device also selects, for execution, a first number of operations of the first I/O operation type of the I/O operations received at the first function according to the first operation weight and a second number of operations of the second I/O operation type of the I/O operations received at the first function according to the second operation weight.
    Type: Application
    Filed: October 24, 2019
    Publication date: April 29, 2021
    Inventor: Luca Bert
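    Illustration: a minimal Python sketch, not taken from the filing, of per-type weighting within one selected function: reads and writes queued at that function are drained in a ratio set by two operation weights. The weight values and queue layout are assumptions.

```python
# Hypothetical sketch of per-operation-type weighting within a selected function.
from collections import deque

def select_for_function(read_q, write_q, read_weight=3, write_weight=1):
    """Return up to read_weight reads and write_weight writes from the function's queues."""
    picked = []
    for _ in range(min(read_weight, len(read_q))):
        picked.append(read_q.popleft())
    for _ in range(min(write_weight, len(write_q))):
        picked.append(write_q.popleft())
    return picked

if __name__ == "__main__":
    reads = deque(f"R{i}" for i in range(5))
    writes = deque(f"W{i}" for i in range(5))
    print(select_for_function(reads, writes))   # ['R0', 'R1', 'R2', 'W0']
```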
  • Publication number: 20200356396
    Abstract: A processing device, operatively coupled with a memory component, is configured to provide a plurality of virtual memory controllers and to provide a plurality of physical functions, wherein each of the plurality of physical functions corresponds to a different one of the plurality of virtual memory controllers. The processing device further presents the plurality of physical functions to a host computing system over a peripheral component interconnect express (PCIe) interface, wherein each of the plurality of physical functions corresponds to a different virtual machine running on the host computing system, and manages input/output (I/O) operations received from the host computing system and directed to the plurality of physical functions, as well as background operations performed on the memory component, in view of class of service parameters associated with the plurality of physical functions.
    Type: Application
    Filed: August 22, 2019
    Publication date: November 12, 2020
    Inventor: Luca Bert
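    Illustration: a minimal Python sketch, not taken from the filing, of the mapping the abstract describes: each PCIe physical function is backed by its own virtual memory controller, is bound to one virtual machine, and carries class-of-service parameters used to admit I/O. The class names, CoS fields, and admission rule are assumptions.

```python
# Hypothetical sketch: one virtual memory controller per physical function, with
# class-of-service parameters governing how much I/O each function may admit.
from dataclasses import dataclass

@dataclass
class ClassOfService:
    max_iops: int
    background_share: float     # fraction of background bandwidth allowed

@dataclass
class VirtualMemoryController:
    pf_id: int                  # physical function presented over PCIe
    vm_name: str                # virtual machine bound to this function
    cos: ClassOfService
    inflight: int = 0

    def admit(self, iops_requested: int) -> int:
        """Admit I/O up to the function's class-of-service limit."""
        allowed = max(0, self.cos.max_iops - self.inflight)
        granted = min(iops_requested, allowed)
        self.inflight += granted
        return granted

if __name__ == "__main__":
    vmc0 = VirtualMemoryController(0, "vm-db",  ClassOfService(max_iops=1000, background_share=0.2))
    vmc1 = VirtualMemoryController(1, "vm-log", ClassOfService(max_iops=200,  background_share=0.1))
    print(vmc0.admit(1500), vmc1.admit(1500))   # 1000 and 200 admitted respectively
```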
  • Patent number: 10268592
    Abstract: Applications that use non-volatile random access memory (NVRAM), such as those that perform file system journal writes and database log writes, where write operations write data sequentially over the NVRAM, map the available capacity of the NVRAM in a virtual address space without compromising performance. The NVRAM is segmented into regions, with multiple such regions fitting within a volatile RAM element accessible to the application and the NVRAM. One or more regions are loaded in the volatile RAM and reflected in page tables that reference the regions. The page tables are managed on a host computer executing the application. One region space in the volatile RAM is kept unused and available for transferred information. Mechanisms are provided for dynamically transferring regions and interfacing with the host computer. As the application sequentially accesses information in the stored regions, older regions are removed and new regions are loaded from NVRAM to the volatile RAM.
    Type: Grant
    Filed: February 27, 2014
    Date of Patent: April 23, 2019
    Assignee: Avago Technologies International Sales Pte. Limited
    Inventors: Saugata Das Purkayastha, Luca Bert, Philip K. Wong, Anant Baderdinni
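    Illustration: a minimal Python sketch, not taken from the filing, of the region windowing the abstract describes: NVRAM is split into fixed-size regions, a few of which are resident in volatile RAM at a time, with one slot kept free while the oldest region is evicted as sequential access moves forward. The region size, slot count, and class names are assumptions.

```python
# Hypothetical sketch of NVRAM region windowing with one spare RAM slot.
from collections import OrderedDict

REGION_SIZE = 4 * 1024 * 1024          # assumed 4 MiB regions
RAM_SLOTS = 4                          # resident regions; one slot stays free

class RegionWindow:
    def __init__(self):
        self.resident = OrderedDict()  # region index -> bytes, oldest first

    def access(self, nvram_offset):
        region = nvram_offset // REGION_SIZE
        if region not in self.resident:
            if len(self.resident) >= RAM_SLOTS - 1:   # keep one slot free
                evicted, _ = self.resident.popitem(last=False)
                print(f"evict region {evicted}")
            self.resident[region] = self._load_from_nvram(region)
            print(f"load region {region}")
        return self.resident[region]

    def _load_from_nvram(self, region):
        return bytearray(REGION_SIZE)  # stand-in for the real NVRAM read

if __name__ == "__main__":
    w = RegionWindow()
    for off in range(0, 6 * REGION_SIZE, REGION_SIZE):   # sequential log-style access
        w.access(off)
```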
  • Patent number: 10013344
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the plurality of cache-lines may be associated with meta-data indicating one or more of a dirty state and an invalid state. The controller may be connected to the memory and configured to detect an input/output (I/O) operation directed to a file system. The controller may perform a read-fill based on a hint value when there is a read miss in the cache. The hint value may be based on the application access pattern. The hint value may be passed to a caching layer with a corresponding I/O.
    Type: Grant
    Filed: January 27, 2014
    Date of Patent: July 3, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Anant Baderdinni, Saugata Das Purkayastha, Philip K. Wong
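    Illustration: a minimal Python sketch, not taken from the filing, of hint-driven read-fill on a cache read miss: the fill size is chosen from a hint passed down with the I/O that reflects the application's access pattern. The hint names, line size, and fill sizes are assumptions.

```python
# Hypothetical sketch: on a read miss, fill an amount of data chosen by the hint
# that accompanied the I/O.
CACHE_LINE = 64 * 1024                       # assumed 64 KiB cache line

FILL_BY_HINT = {
    "sequential": 8 * CACHE_LINE,            # prefetch aggressively
    "random":     1 * CACHE_LINE,            # fill only what was asked for
    "metadata":   2 * CACHE_LINE,
}

cache = {}                                   # line index -> cached block (toy store)

def read(line_index, hint):
    if line_index in cache:
        return cache[line_index]             # read hit
    fill_bytes = FILL_BY_HINT.get(hint, CACHE_LINE)
    for i in range(fill_bytes // CACHE_LINE):         # read-fill sized by the hint
        cache[line_index + i] = f"data@{line_index + i}"
    return cache[line_index]

if __name__ == "__main__":
    read(100, "sequential")
    print(len(cache))                        # 8 lines filled for a sequential hint
```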
  • Patent number: 9921753
    Abstract: Embodiments herein provide for redundant data storage. One storage system includes first and second host systems each comprising a memory and a persistent storage device. The storage system also includes first and second storage controllers each comprising a memory (e.g., DRAM). The memory of the first storage controller is mapped to the memory of the first host system and the memory of the second storage controller is mapped to the memory of the second host system. The first storage controller is operable to DMA data from the persistent storage device of the first host system to the memory of the first storage controller, and to direct the second storage controller to DMA the data to the persistent storage device of the second host system via the memory of the second storage controller.
    Type: Grant
    Filed: March 23, 2015
    Date of Patent: March 20, 2018
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Sumanesh Samanta, Luca Bert, Naveen Krishnamurthy
  • Patent number: 9734062
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
    Type: Grant
    Filed: December 18, 2013
    Date of Patent: August 15, 2017
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
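    Illustration: a minimal Python sketch, not taken from the filing, of per-sub-cache-line dirty/valid bitmaps and of rounding request offsets and lengths to sub-cache-line multiples so partial fills are minimized. The sub-line and line sizes are assumptions.

```python
# Hypothetical sketch of sub-cache-line metadata and request alignment.
SUB_LINE = 4 * 1024                 # assumed 4 KiB sub-cache line
SUBS_PER_LINE = 16                  # assumed 64 KiB cache line

class CacheLine:
    def __init__(self):
        self.dirty = 0              # bitmaps, one bit per sub-cache line
        self.valid = 0

    def write(self, sub_index):
        bit = 1 << sub_index
        self.dirty |= bit
        self.valid |= bit

def align_to_sub_lines(offset, length):
    """Round a request down/up to sub-cache-line boundaries."""
    start = (offset // SUB_LINE) * SUB_LINE
    end = ((offset + length + SUB_LINE - 1) // SUB_LINE) * SUB_LINE
    return start, end - start

if __name__ == "__main__":
    line = CacheLine()
    line.write(3)
    print(bin(line.dirty))                      # 0b1000
    print(align_to_sub_lines(5000, 3000))       # (4096, 4096)
```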
  • Patent number: 9542320
    Abstract: Systems and methods maintain cache coherency between storage controllers using input/output virtualization. In one embodiment, a primary storage controller receives write commands over a virtualized interface, stores the write commands in cache memory, tracks a status of the write commands processed from the cache memory, and stores the status in a portion of the cache memory. A backup storage controller includes a backup cache that receives replications of the write commands via direct memory access operations, and stores the replications of the write commands. The primary storage controller makes the status available to a host system. In response to a failure of the primary storage controller, the backup storage controller synchronizes with the status from the host system, and resumes I/O operations for the logical volume.
    Type: Grant
    Filed: January 12, 2015
    Date of Patent: January 10, 2017
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Sumanesh Samanta, Philip K. Wong
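    Illustration: a minimal Python sketch, not taken from the filing, of the primary/backup flow the abstract describes: the primary caches writes, replicates them to the backup's cache, and exposes a per-command status to the host; after a primary failure the backup synchronizes from that host-held status and resumes. The structures and field names are assumptions.

```python
# Hypothetical sketch of write replication plus host-visible status for failover.
class Controller:
    def __init__(self, name):
        self.name = name
        self.cache = {}             # command id -> write payload
        self.status = {}            # command id -> "cached" | "flushed"

class Host:
    def __init__(self):
        self.visible_status = {}    # status the primary exposes to the host

def primary_write(primary, backup, host, cmd_id, data):
    primary.cache[cmd_id] = data
    backup.cache[cmd_id] = data              # replication (DMA in the real system)
    primary.status[cmd_id] = "cached"
    host.visible_status[cmd_id] = "cached"

def failover(backup, host):
    backup.status = dict(host.visible_status)   # synchronize from host-held status
    return f"{backup.name} resumed with {len(backup.status)} tracked commands"

if __name__ == "__main__":
    p, b, h = Controller("primary"), Controller("backup"), Host()
    primary_write(p, b, h, 1, b"block-1")
    print(failover(b, h))
```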
  • Patent number: 9542101
    Abstract: A data storage system and methods for managing data to be transferred between a host and a data volume distributed across solid state storage modules are disclosed. A storage controller couples the host to the data volume and manages data transfers to and from the logical volume. The storage controller receives a set of parameters that define how an array of blocks and chunks of buffered data will be distributed across solid state storage modules. The storage controller receives and buffers data to be stored and transfers the same when the capacity of the buffered data will fill a set of arranged stripes in the defined array in a single write operation.
    Type: Grant
    Filed: September 22, 2013
    Date of Patent: January 10, 2017
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Horia Simionescu, Anant Baderdinni, Luca Bert
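    Illustration: a minimal Python sketch, not taken from the filing, of the buffering rule the abstract describes: incoming data accumulates and is flushed to the solid-state modules only once it fills a whole set of stripes, so the flush is a single aligned write. The chunk size, module count, and stripe-set size are assumptions.

```python
# Hypothetical sketch of full-stripe-set write buffering.
CHUNK_BYTES = 128 * 1024            # assumed chunk size per module
MODULES = 4                         # assumed modules in the array
STRIPE_BYTES = CHUNK_BYTES * MODULES

class StripeBuffer:
    def __init__(self, stripes_per_flush=2):
        self.flush_bytes = STRIPE_BYTES * stripes_per_flush
        self.pending = bytearray()

    def write(self, data: bytes):
        self.pending += data
        flushed = 0
        while len(self.pending) >= self.flush_bytes:
            self._flush(self.pending[:self.flush_bytes])   # one full-stripe-set write
            del self.pending[:self.flush_bytes]
            flushed += 1
        return flushed

    def _flush(self, payload):
        print(f"single write of {len(payload)} bytes across {MODULES} modules")

if __name__ == "__main__":
    buf = StripeBuffer()
    print(buf.write(bytes(3 * STRIPE_BYTES)))   # flushes once, keeps the remainder buffered
```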
  • Publication number: 20160283134
    Abstract: Embodiments herein provide for redundant data storage. One storage system includes first and second host systems each comprising a memory and a persistent storage device. The storage system also includes first and second storage controllers each comprising a memory (e.g., DRAM). The memory of the first storage controller is mapped to the memory of the first host system and the memory of the second storage controller is mapped to the memory of the second host system. The first storage controller is operable to DMA data from the persistent storage device of the first host system to the memory of the first storage controller, and to direct the second storage controller to DMA the data to the persistent storage device of the second host system via the memory of the second storage controller.
    Type: Application
    Filed: March 23, 2015
    Publication date: September 29, 2016
    Inventors: Sumanesh Samanta, Luca Bert, Naveen Krishnamurthy
  • Publication number: 20160203080
    Abstract: Systems and methods maintain cache coherency between storage controllers using input/output virtualization. In one embodiment, a primary storage controller receives write commands over a virtualized interface, stores the write commands in cache memory, tracks a status of the write commands processed from the cache memory, and stores the status in a portion of the cache memory. A backup storage controller includes a backup cache that receives replications of the write commands via direct memory access operations, and stores the replications of the write commands. The primary storage controller makes the status available to a host system. In response to a failure of the primary storage controller, the backup storage controller synchronizes with the status from the host system, and resumes I/O operations for the logical volume.
    Type: Application
    Filed: January 12, 2015
    Publication date: July 14, 2016
    Inventors: Luca Bert, Sumanesh Samanta, Philip Wong
  • Patent number: 9292228
    Abstract: A RAID controller includes a cache memory in which write cache blocks (WCBs) are protected by a RAID-5 (striping plus parity) scheme while read cache blocks (RCBs) are not protected in such a manner. If a received cache block is an RCB, the RAID controller stores it in the cache memory without storing any corresponding parity information. When a sufficient number of WCBs to constitute a full stripe have been received but not yet stored in the cache memory, the RAID controller computes a corresponding parity block and stores the WCBs and parity block in the cache memory as a single stripe.
    Type: Grant
    Filed: February 6, 2013
    Date of Patent: March 22, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Anant Baderdinni, Horia Simionescu, Luca Bert
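    Illustration: a minimal Python sketch, not taken from the filing, of the cache-protection split the abstract describes: read cache blocks are stored without parity, while write cache blocks are gathered until a full stripe exists and then stored together with an XOR parity block. The block size and stripe width are assumptions.

```python
# Hypothetical sketch: parity protection for write cache blocks only.
from functools import reduce

BLOCK = 16                 # tiny block size for illustration
STRIPE_WIDTH = 4           # data blocks per parity block

cache = []                 # list of cached entries
pending_wcbs = []          # write cache blocks waiting for a full stripe

def xor_parity(blocks):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

def insert(kind, payload):
    if kind == "RCB":
        cache.append(("RCB", payload))              # no parity for read cache blocks
        return
    pending_wcbs.append(payload)
    if len(pending_wcbs) == STRIPE_WIDTH:           # full stripe of WCBs
        parity = xor_parity(pending_wcbs)
        cache.append(("WCB-stripe", list(pending_wcbs), parity))
        pending_wcbs.clear()

if __name__ == "__main__":
    insert("RCB", bytes(BLOCK))
    for i in range(STRIPE_WIDTH):
        insert("WCB", bytes([i]) * BLOCK)
    print([entry[0] for entry in cache])            # ['RCB', 'WCB-stripe']
```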
  • Patent number: 9292204
    Abstract: A system and method for managing cache memory of at least one node of a multiple-node storage cluster. A first cache data and a first cache metadata are stored for data transfers between a respective node and regions of a storage cluster receiving at least a first selected number of data transfer requests. When the node is rebooted, a second (new) cache data is stored to replace the first (old) cache data. The second cache data is compiled utilizing the first cache metadata to identify previously cached regions of the storage cluster receiving at least a second selected number of data transfer requests after the node is rebooted. The second selected number of data transfer requests is less than the first selected number of data transfer requests to enable a rapid build of the second cache data.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: March 22, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
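    Illustration: a minimal Python sketch, not taken from the filing, of the warm-rebuild rule the abstract describes: after a reboot, regions that the old cache metadata remembers are re-admitted at a lower request-count threshold than new regions, so the replacement cache fills quickly. The threshold values are assumptions.

```python
# Hypothetical sketch of a lower caching threshold for previously cached regions.
NORMAL_THRESHOLD = 8        # requests needed to cache a region in steady state
WARM_THRESHOLD = 2          # lower bar for regions the old metadata remembers

def should_cache(region, request_count, old_metadata):
    threshold = WARM_THRESHOLD if region in old_metadata else NORMAL_THRESHOLD
    return request_count >= threshold

if __name__ == "__main__":
    previously_cached = {"region-17", "region-42"}
    print(should_cache("region-17", 2, previously_cached))   # True: warm region
    print(should_cache("region-99", 2, previously_cached))   # False: needs 8 requests
```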
  • Patent number: 9286175
    Abstract: The disclosure is directed to preserving data consistency in a multiple-node data storage system. According to various embodiments, a write log is maintained including log entries for data transfer requests being served by a respective node of the multiple-node data storage system. Rather than maintaining a full write journal of data and parity associated with each data transfer request, the log entries only need to identify portions of the virtual volume being updated according to the data transfer requests served by each node. When a first node fails, a second node takes over administration of a virtual volume for the failed node. Upon taking over for the first (failed) node, the second node resolves any inconsistencies between data and parity in the portions of the virtual volume identified in the respective log entries. Accordingly, write holes are prevented without substantially increasing memory usage or system complexity.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: March 15, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Sumanesh Samanta, Horia Cristian Simionescu, Luca Bert, Debal Kr. Mridha, Mohana Rao Goli
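    Illustration: a minimal Python sketch, not taken from the filing, of the write-hole protection the abstract describes: each node logs only which stripes it is updating, and on failover the surviving node resynchronizes parity for just those logged stripes. The log structure and stripe identifiers are assumptions.

```python
# Hypothetical sketch: a small per-node write log of in-flight stripes, replayed on takeover.
class Node:
    def __init__(self, name):
        self.name = name
        self.write_log = set()      # stripe numbers with updates in flight

    def begin_write(self, stripe):
        self.write_log.add(stripe)  # log entry; no copy of data or parity is kept

    def complete_write(self, stripe):
        self.write_log.discard(stripe)

def take_over(survivor, failed):
    for stripe in sorted(failed.write_log):
        # Recompute parity from the on-disk data of the stripe so data and
        # parity agree again; only the logged stripes need this work.
        print(f"{survivor.name}: resynchronizing parity for stripe {stripe}")

if __name__ == "__main__":
    a, b = Node("node-a"), Node("node-b")
    a.begin_write(7); a.begin_write(9); a.complete_write(7)
    take_over(b, a)                 # only stripe 9 is resynchronized
```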
  • Patent number: 9256384
    Abstract: A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device. The host system completes each I/O without the need for intervention from the memory controller, thereby obviating the need for synchronization, or handshaking, between the host system and the memory controller. For write commands, the memory controller does not need to issue a completion interrupt to the host system upon completion of the command because the host system considers the write command completed at the time that the write command is pushed into the queue of the memory controller. The combination of all of these features results in a large reduction in overall latency.
    Type: Grant
    Filed: February 4, 2013
    Date of Patent: February 9, 2016
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Anant Baderdinni, Horia Simionescu, Mark Ish
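    Illustration: a minimal Python sketch, not taken from the filing, of the command-push idea: the host writes commands directly into a queue backed by the controller's NVM and treats a write as complete as soon as it is queued, with no completion interrupt. The queue layout and names are assumptions.

```python
# Hypothetical sketch of a host-pushed command queue with no write-completion handshake.
from collections import deque

class ControllerNVMQueue:
    """Stands in for the NVM-backed command queue the host can write directly."""
    def __init__(self):
        self.slots = deque()

    def push(self, command):
        self.slots.append(command)      # persistent once written in the real device

class Host:
    def __init__(self, queue):
        self.queue = queue

    def write(self, lba, data):
        self.queue.push({"op": "write", "lba": lba, "data": data})
        return "complete"               # completed at push time; no interrupt expected

    def read(self, lba):
        self.queue.push({"op": "read", "lba": lba})
        return "pending"                # reads still need data back from the controller

if __name__ == "__main__":
    host = Host(ControllerNVMQueue())
    print(host.write(0x1000, b"payload"))   # 'complete' without controller handshaking
```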
  • Publication number: 20160026579
    Abstract: A cache controller having a cache supported by a non-volatile memory element manages metadata operations by defining a mathematical relationship between a cache line in a data store exposed to a host system and a location identifier associated with an instance of the cache line in the non-volatile memory. The cache controller maintains most recently used bit maps identifying data in the cache, as well as a data characteristic bit map identifying data that has changed since it was added to the cache. The cache controller maintains a most recently used bit map to replace the recently used map at an appropriate time, and a fresh bitmap tracks the most recently used bit map. The cache controller uses a collision bitmap, an imposter index and a quotient to modify cache lines stored in the non-volatile memory element.
    Type: Application
    Filed: July 22, 2014
    Publication date: January 28, 2016
    Inventors: Sumanesh Samanta, Saugata Das Purkayastha, Mark Ish, Horia Simionescu, Luca Bert
  • Patent number: 9201681
    Abstract: A method and controller device for sharing computing resources in a virtualized environment having a plurality of virtual machines. The method includes assigning a portion of the computing resources to the plurality of virtual machines. The method also includes leasing by a first virtual machine at least a portion of the assigned computing resources of at least one second virtual machine. The first virtual machine leases computing resources from the at least one second virtual machine when the first virtual machine needs additional computing resources and at least a portion of the assigned computing resources of the at least one second virtual machine are not being used by the at least one second virtual machine.
    Type: Grant
    Filed: February 14, 2013
    Date of Patent: December 1, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Parag R. Maharana
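    Illustration: a minimal Python sketch, not taken from the filing, of the leasing scheme the abstract describes: each virtual machine has an assigned share of a computing resource, and a VM that runs short can lease whatever part of another VM's share is currently idle. The quantities and accounting are assumptions.

```python
# Hypothetical sketch of leasing idle resource units between virtual machines.
class VirtualMachine:
    def __init__(self, name, assigned):
        self.name = name
        self.assigned = assigned    # resource units assigned to this VM
        self.used = 0
        self.leased_out = 0
        self.borrowed = 0

    def idle(self):
        return self.assigned - self.used - self.leased_out

def lease(borrower, lender, amount):
    """Borrower takes up to `amount` of the lender's currently idle share."""
    granted = min(amount, lender.idle())
    lender.leased_out += granted
    borrower.borrowed += granted
    return granted

if __name__ == "__main__":
    vm1 = VirtualMachine("vm1", assigned=8)
    vm2 = VirtualMachine("vm2", assigned=8)
    vm2.used = 2                    # vm2 is mostly idle
    extra = lease(vm1, vm2, amount=4)
    print(f"vm1 leased {extra} units from vm2; vm2 now has {vm2.idle()} idle units")
```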
  • Patent number: 9182912
    Abstract: The present invention is directed to a method for providing storage acceleration in a data storage system. In the data storage system described herein, multiple independent controllers may be utilized, such that a first storage controller may be connected to a first storage tier (e.g., a fast tier) which includes a solid-state drive, while a second storage controller may be connected to a second storage tier (e.g., a slower tier) which includes a hard disk drive. The accelerator functionality may be split between the host of the system and the first storage controller of the system (e.g., some of the accelerator functionality may be offloaded to the first storage controller) for promoting improved storage acceleration performance within the system.
    Type: Grant
    Filed: August 3, 2011
    Date of Patent: November 10, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Mark Ish
  • Patent number: 9158695
    Abstract: The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space being according to a difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: October 13, 2015
    Assignee: Seagate Technology LLC
    Inventors: Horia Simionescu, Mark Ish, Luca Bert, Robert Quinn, Earl T. Cohen, Timothy Canepa
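    Illustration: a minimal Python sketch, not taken from the filing, of the capacity accounting the abstract describes: the space reclaimed by compressing each host write (original size minus compressed size) is folded into a variable logical capacity presented back to the host. zlib stands in for whatever compression the device actually applies; the class and field names are assumptions.

```python
# Hypothetical sketch of reclaimed-space accounting for dynamically adaptive caching.
import zlib

class AdaptiveCacheDevice:
    def __init__(self, physical_capacity):
        self.physical_capacity = physical_capacity
        self.stored_compressed = 0
        self.reclaimed = 0

    def write(self, data: bytes):
        compressed = zlib.compress(data)
        self.stored_compressed += len(compressed)
        self.reclaimed += len(data) - len(compressed)   # space given back by compression

    def logical_capacity(self):
        """Variable capacity: physical space plus what compression has reclaimed so far."""
        return self.physical_capacity + self.reclaimed

if __name__ == "__main__":
    dev = AdaptiveCacheDevice(physical_capacity=1 << 20)
    dev.write(b"A" * 65536)                              # highly compressible host data
    print(dev.logical_capacity() > dev.physical_capacity)   # True
```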