Patents by Inventor Luca Bert

Luca Bert has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150220452
    Abstract: Applications that use non-volatile random access memory (NVRAM), such as those that apply file system journal writes and database log writes where write operations apply data sequentially over the NVRAM, map the available capacity of the NVRAM in a virtual address space without compromising performance. The NVRAM is segmented into regions with multiple such regions fitting within a volatile RAM element accessible to the application and the NVRAM. One or more regions are loaded in the volatile RAM and reflected in page tables that reference the regions. The page tables are managed on a host computer executing the application. One region space in the volatile RAM is unused and available for transferred information. Mechanisms are provided for dynamically transferring regions and interfacing with the host computer. As the application sequentially accesses information in the stored regions, older regions are removed and new regions loaded from NVRAM to the volatile RAM.
    Type: Application
    Filed: February 27, 2014
    Publication date: August 6, 2015
    Applicant: LSI Corporation
    Inventors: Saugata Das Purkayastha, Luca Bert, Philip K. Wong, Anant Baderdinni
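
    A rough C sketch of the region-paging scheme described in the abstract above. All names, sizes, and the FIFO eviction policy are illustrative assumptions (the abstract keeps one region slot free for incoming transfers; the sketch simply evicts the oldest resident region), not the patented implementation.

    ```c
    /* Illustrative only: models NVRAM split into fixed-size regions, a small
     * volatile-RAM window holding a few regions, and a tiny page table that
     * records which NVRAM region currently backs each window slot. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define REGION_SIZE   4096               /* bytes per NVRAM region           */
    #define NVRAM_REGIONS 64                 /* regions in the simulated NVRAM   */
    #define RAM_SLOTS     4                  /* region slots in volatile RAM     */

    static uint8_t nvram[NVRAM_REGIONS][REGION_SIZE]; /* stand-in for real NVRAM */
    static uint8_t ram[RAM_SLOTS][REGION_SIZE];       /* volatile RAM window     */
    static int     slot_region[RAM_SLOTS];   /* page table: slot -> region (-1 = free) */
    static int     next_victim;              /* oldest slot, evicted first (FIFO) */

    static void init(void)
    {
        for (int i = 0; i < RAM_SLOTS; i++)
            slot_region[i] = -1;
    }

    /* Return a pointer to the requested region, loading it from NVRAM and
     * evicting (writing back) the oldest resident region if necessary. */
    static uint8_t *map_region(int region)
    {
        for (int i = 0; i < RAM_SLOTS; i++)
            if (slot_region[i] == region)
                return ram[i];                             /* already resident */

        int slot = next_victim;
        if (slot_region[slot] != -1)                       /* write back old region */
            memcpy(nvram[slot_region[slot]], ram[slot], REGION_SIZE);

        memcpy(ram[slot], nvram[region], REGION_SIZE);     /* load new region */
        slot_region[slot] = region;
        next_victim = (next_victim + 1) % RAM_SLOTS;
        return ram[slot];
    }

    int main(void)
    {
        init();
        /* Sequential access typical of journal/log writes: regions are touched
         * in order, so older regions age out of the RAM window. */
        for (int r = 0; r < 10; r++)
            map_region(r)[0] = (uint8_t)r;
        printf("region 9 resident, first byte = %d\n", map_region(9)[0]);
        return 0;
    }
    ```
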
  • Publication number: 20150199269
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the plurality of cache-lines may be associated with meta-data indicating one or more of a dirty state and an invalid state. The controller may be connected to the memory and configured to detect an input/output (I/O) operation directed to a file system. The controller may perform a read-fill based on a hint value when there is a read miss in the cache. The hint value may be based on the application access pattern. The hint value may be passed to a caching layer with a corresponding I/O.
    Type: Application
    Filed: January 27, 2014
    Publication date: July 16, 2015
    Applicant: LSI Corporation
    Inventors: Luca Bert, Anant Baderdinni, Saugata Das Purkayastha, Philip K. Wong
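
    A minimal C sketch of the hinted read-fill idea from the abstract above; the hint encoding, window size, and fill policy are assumptions chosen for illustration.

    ```c
    /* Sketch only: a hint derived from the application's access pattern travels
     * with each I/O; on a read miss the caching layer uses it to size the
     * read-fill (whole remaining window for sequential access, a single
     * cache-line for random access). */
    #include <stdbool.h>
    #include <stdio.h>

    #define LINES_PER_WINDOW 16

    enum access_hint { HINT_RANDOM, HINT_SEQUENTIAL };

    struct cache_window {
        bool valid[LINES_PER_WINDOW];        /* per cache-line meta-data */
        bool dirty[LINES_PER_WINDOW];
    };

    /* Decide how many cache-lines to fill after a miss on `line`. */
    static int read_fill_count(const struct cache_window *w, int line,
                               enum access_hint hint)
    {
        if (hint == HINT_RANDOM)
            return 1;                        /* fill only the missed line      */
        int n = 0;                           /* sequential: fill to window end */
        for (int i = line; i < LINES_PER_WINDOW && !w->valid[i]; i++)
            n++;
        return n;
    }

    int main(void)
    {
        struct cache_window w = { .valid = { false } };
        printf("random hint fills %d line(s)\n",
               read_fill_count(&w, 3, HINT_RANDOM));
        printf("sequential hint fills %d line(s)\n",
               read_fill_count(&w, 3, HINT_SEQUENTIAL));
        return 0;
    }
    ```
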
  • Patent number: 9079562
    Abstract: Providing active-active failover capability to non-failover-capable direct-attached storage (DAS) servers includes connecting first and second non-failover-capable DAS servers to a shared storage pool via an expander that supports storage zoning, configuring a first storage zone including the first DAS server and a first portion of the shared storage pool, configuring a second storage zone including the second DAS server and a second portion of the shared storage pool, detecting that the second DAS server has failed, zoning out the second portion of the shared storage pool, and mapping the second portion of the shared storage pool to the first storage zone.
    Type: Grant
    Filed: November 13, 2008
    Date of Patent: July 14, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Luca Bert
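
    The failover flow in the abstract above can be pictured with a short C sketch; the zone table and function names are assumptions, and a real expander would be reprogrammed through its zoning interface rather than an in-memory array.

    ```c
    /* Sketch: two DAS servers each own a zone of the shared pool; when one
     * server fails, its drives are re-zoned into the surviving server's zone. */
    #include <stdio.h>

    #define POOL_DRIVES 8

    static int drive_zone[POOL_DRIVES];       /* zone id owning each drive */

    static void configure_zones(void)
    {
        for (int d = 0; d < POOL_DRIVES; d++)
            drive_zone[d] = (d < POOL_DRIVES / 2) ? 1 : 2;   /* zone 1 / zone 2 */
    }

    /* Called when the server owning `failed_zone` is detected as down: move its
     * drives into `takeover_zone` so the surviving server can serve their data. */
    static void fail_over(int failed_zone, int takeover_zone)
    {
        for (int d = 0; d < POOL_DRIVES; d++)
            if (drive_zone[d] == failed_zone)
                drive_zone[d] = takeover_zone;
    }

    int main(void)
    {
        configure_zones();
        fail_over(2, 1);                      /* second DAS server has failed */
        for (int d = 0; d < POOL_DRIVES; d++)
            printf("drive %d -> zone %d\n", d, drive_zone[d]);
        return 0;
    }
    ```
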
  • Publication number: 20150169458
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
    Type: Application
    Filed: December 18, 2013
    Publication date: June 18, 2015
    Applicant: LSI Corporation
    Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
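
    A small C sketch of the per-sub-cache-line meta-data described above; the sub-cache-line size, bitmap layout, and names are illustrative assumptions.

    ```c
    /* Sketch: each cache-line carries per-sub-cache-line valid/dirty bits, so an
     * I/O only fills the sub-cache-lines it actually touches instead of the
     * whole cache-line. */
    #include <stdint.h>
    #include <stdio.h>

    #define SUBLINE_SIZE      512             /* bytes per sub-cache line */
    #define SUBLINES_PER_LINE 16              /* 8 KiB cache-line         */

    struct cache_line {
        uint16_t valid;                       /* one bit per sub-cache line */
        uint16_t dirty;
    };

    /* Round an I/O (byte offset/length within the cache-line) to sub-cache-line
     * boundaries and return a bitmask of the sub-cache-lines it covers. */
    static uint16_t subline_mask(uint32_t offset, uint32_t length)
    {
        uint32_t first = offset / SUBLINE_SIZE;
        uint32_t last  = (offset + length - 1) / SUBLINE_SIZE;
        uint16_t mask = 0;
        for (uint32_t i = first; i <= last && i < SUBLINES_PER_LINE; i++)
            mask |= (uint16_t)(1u << i);
        return mask;
    }

    int main(void)
    {
        struct cache_line cl = { 0, 0 };
        uint16_t m = subline_mask(600, 1000); /* touches sub-lines 1..3 only */
        cl.valid |= m;                        /* cache-fill just those        */
        cl.dirty |= m;                        /* and mark them dirty          */
        printf("filled sub-line mask = 0x%04x\n", (unsigned)m);
        return 0;
    }
    ```
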
  • Patent number: 9058274
    Abstract: The disclosure is directed to a system and method for managing READ cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a cache data and a cache metadata are stored for data transfers between a respective node (hereinafter “first node”) and regions of a storage cluster. When the first node is disabled, data transfers are tracked between one or more active nodes of the plurality of nodes and cached regions of the storage cluster. When the first node is rebooted, at least a portion of valid cache data is retained based upon the tracked data transfers. Accordingly, local cache memory does not need to be entirely rebuilt each time a respective node is rebooted.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: June 16, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
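
    A minimal C sketch of the reboot-tracking idea in the abstract above, assuming region-granular tracking with simple bitmaps; it is not the patented implementation.

    ```c
    /* Sketch: while the first node is down, the active nodes record which cached
     * regions were written; when the node reboots, only those regions are
     * invalidated and the rest of its local READ cache stays valid. */
    #include <stdbool.h>
    #include <stdio.h>

    #define REGIONS 32

    static bool cached[REGIONS];              /* first node's local cache map    */
    static bool written_while_down[REGIONS];  /* tracked by the active nodes     */

    static void peer_write(int region)        /* an active node serves a write   */
    {
        written_while_down[region] = true;
    }

    static int reboot_first_node(void)        /* returns regions that survived   */
    {
        int kept = 0;
        for (int r = 0; r < REGIONS; r++) {
            if (cached[r] && written_while_down[r])
                cached[r] = false;            /* stale: drop from cache          */
            else if (cached[r])
                kept++;                       /* still valid: no rebuild needed  */
            written_while_down[r] = false;
        }
        return kept;
    }

    int main(void)
    {
        for (int r = 0; r < 8; r++)
            cached[r] = true;                 /* node had 8 regions cached       */
        peer_write(2);
        peer_write(5);
        printf("regions retained after reboot: %d\n", reboot_first_node());
        return 0;
    }
    ```
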
  • Publication number: 20150135006
    Abstract: The disclosure is directed to preserving data consistency in a multiple-node data storage system. According to various embodiments, a write log is maintained including log entries for data transfer requests being served by a respective node of the multiple-node data storage system. Rather than maintaining a full write journal of data and parity associated with each data transfer request, the log entries only need to identify the portions of the virtual volume being updated by the data transfer requests served by each node. When a first node fails, a second node takes over administration of a virtual volume for the failed node. Upon taking over for the first (failed) node, the second node resolves any inconsistencies between data and parity in the portions of the virtual volume identified in the respective log entries. Accordingly, write holes are prevented without substantially increasing memory usage or system complexity.
    Type: Application
    Filed: November 27, 2013
    Publication date: May 14, 2015
    Applicant: LSI Corporation
    Inventors: Sumanesh Samanta, Horia Cristian Simionescu, Luca Bert, Debal Kr. Mridha, Mohana Rao Goli
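
    A sketch in C of the lightweight write log described above, assuming stripe-granular log entries; the data structures are illustrative, not the patent's.

    ```c
    /* Sketch: each node logs only which stripes have writes in flight (no data
     * or parity is journaled). On takeover, the surviving node re-checks parity
     * just for the logged stripes, closing any write hole. */
    #include <stdio.h>

    #define LOG_ENTRIES 16

    static int write_log[LOG_ENTRIES];        /* stripes with writes in flight */
    static int log_count;

    static void log_open(int stripe)
    {
        if (log_count < LOG_ENTRIES)
            write_log[log_count++] = stripe;  /* record before issuing the write */
    }

    static void log_close(int stripe)         /* write completed: retire entry   */
    {
        for (int i = 0; i < log_count; i++)
            if (write_log[i] == stripe) {
                write_log[i] = write_log[--log_count];
                break;
            }
    }

    /* Second node takes over the failed node's virtual volume. */
    static void takeover_resolve(void)
    {
        for (int i = 0; i < log_count; i++)
            printf("re-checking parity for stripe %d\n", write_log[i]);
        log_count = 0;
    }

    int main(void)
    {
        log_open(7);
        log_open(12);
        log_close(12);                        /* stripe 12 completed before the crash */
        takeover_resolve();                   /* only stripe 7 needs resolving        */
        return 0;
    }
    ```
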
  • Patent number: 9015418
    Abstract: A method and system for self-sizing dynamic cache for virtualized environments is disclosed. The preferred embodiment self-sizes unequal portions of the total amount of cache and allocates them to a plurality of active virtualized machines (VMs) according to VM requirements and administrative standards. When a new VM emerges and requests an amount of cache, the cache controller reclaims currently used cache from the active VMs and reallocates the unequal portions of cache required by each VM. To ensure cache availability, a quick-reclamation amount of cache is immediately available to each new VM as it makes the request and begins operation. After reallocation, the newly created VM may rely on a guaranteed minimum quota of cache to ensure performance.
    Type: Grant
    Filed: November 20, 2012
    Date of Patent: April 21, 2015
    Assignee: LSI Corporation
    Inventor: Luca Bert
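
    A C sketch of one possible sizing pass for the scheme above. The proportional-share policy, the guaranteed minimum, and the quick-reclamation reserve sizes are illustrative assumptions, not the patented algorithm.

    ```c
    /* Sketch: split the cache (minus a small quick-reclamation reserve) among
     * active VMs in proportion to their demands, never dropping a VM below the
     * guaranteed minimum quota. */
    #include <stdio.h>

    #define TOTAL_CACHE_MB 1024
    #define RESERVE_MB     64                 /* quick-reclamation pool for new VMs */
    #define MIN_QUOTA_MB   32                 /* guaranteed minimum per VM          */

    static void size_cache(const int demand_mb[], int alloc_mb[], int nvms)
    {
        int pool = TOTAL_CACHE_MB - RESERVE_MB;
        int total_demand = 0;
        for (int i = 0; i < nvms; i++)
            total_demand += demand_mb[i];

        for (int i = 0; i < nvms; i++) {
            int share = (total_demand > 0) ? pool * demand_mb[i] / total_demand : 0;
            alloc_mb[i] = (share < MIN_QUOTA_MB) ? MIN_QUOTA_MB : share;
        }
    }

    int main(void)
    {
        int demand[3] = { 600, 300, 20 };     /* a third VM has just been created */
        int alloc[3];
        size_cache(demand, alloc, 3);
        for (int i = 0; i < 3; i++)
            printf("VM%d: %d MB of cache\n", i, alloc[i]);
        return 0;
    }
    ```
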
  • Patent number: 8984234
    Abstract: A method and system for managing a cache for a host machine is disclosed. The method includes: indicating each cache line in the cache as being in a transitional meta-state when any virtual machine hosted on the host machine moves out of the host machine; each time a particular cache line is accessed, indicating that particular cache line as no longer in the transitional meta-state; and marking the cache lines still in the transitional meta-state as invalid when a virtual machine moves back to the host machine.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: March 17, 2015
    Assignee: LSI Corporation
    Inventors: Parag R. Maharana, Luca Bert, Earl Cohen
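
    The transitional meta-state logic above is simple enough to capture in a few lines of C; the flag names and structures are assumptions for illustration.

    ```c
    /* Sketch: when a VM migrates off the host, every cache line is tagged
     * "transitional"; any line accessed while the VM is away loses the tag;
     * when the VM returns, lines still tagged are marked invalid. */
    #include <stdbool.h>
    #include <stdio.h>

    #define LINES 8

    struct line { bool valid; bool transitional; };

    static struct line cache[LINES];

    static void vm_moved_out(void)
    {
        for (int i = 0; i < LINES; i++)
            if (cache[i].valid)
                cache[i].transitional = true;
    }

    static void line_accessed(int i)          /* any access clears the meta-state */
    {
        cache[i].transitional = false;
    }

    static void vm_moved_back(void)
    {
        for (int i = 0; i < LINES; i++)
            if (cache[i].transitional) {      /* untouched since the move-out */
                cache[i].valid = false;
                cache[i].transitional = false;
            }
    }

    int main(void)
    {
        for (int i = 0; i < LINES; i++)
            cache[i].valid = true;
        vm_moved_out();
        line_accessed(3);                     /* line 3 stays valid */
        vm_moved_back();
        for (int i = 0; i < LINES; i++)
            printf("line %d valid=%d\n", i, cache[i].valid);
        return 0;
    }
    ```
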
  • Patent number: 8977893
    Abstract: A RAID data storage system incorporates permanently empty blocks into each stripe, distributed among all the data storage devices, to accelerate rebuild time by reducing the number of blocks that need to be rebuilt in the event of a failure.
    Type: Grant
    Filed: February 17, 2012
    Date of Patent: March 10, 2015
    Assignee: LSI Corporation
    Inventors: Sumanesh Samanta, Luca Bert, Satadal Bhattacharjee
  • Patent number: 8977799
    Abstract: A multi-tiered system of data storage includes a plurality of data storage solutions. The data storage solutions are organized such that each progressively faster, more expensive solution serves as a cache for the previous solution, and each solution includes a dedicated data block to store individual data sets, newly written in a plurality of write operations, for later migration to slower data storage solutions in a single write operation.
    Type: Grant
    Filed: September 26, 2011
    Date of Patent: March 10, 2015
    Assignee: LSI Corporation
    Inventor: Luca Bert
  • Patent number: 8918576
    Abstract: A method for selectively placing cache data, comprising the steps of (A) determining a line temperature for a plurality of devices, (B) determining a device temperature for the plurality of devices, (C) calculating an entry temperature for the plurality of devices in response to the line temperature and the device temperature, and (D) distributing a plurality of write operations across the plurality of devices such that thermal energy is distributed evenly over the plurality of devices.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: December 23, 2014
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Mark Ish, Rajiv Ganth Rajaram
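
    A C sketch of the placement step described above. The equal weighting used to combine the two temperatures is an illustrative assumption.

    ```c
    /* Sketch: a per-device "entry temperature" combines the line temperature
     * (how hot the data is) with the device temperature (how much thermal load
     * the device already carries); each write is steered to the device with the
     * lowest entry temperature so thermal energy evens out across devices. */
    #include <stdio.h>

    #define DEVICES 4

    static double entry_temperature(double line_temp, double device_temp)
    {
        return 0.5 * line_temp + 0.5 * device_temp;   /* assumed equal weighting */
    }

    /* Pick the device whose entry temperature is lowest for this write. */
    static int place_write(double line_temp, const double device_temp[])
    {
        int best = 0;
        double best_t = entry_temperature(line_temp, device_temp[0]);
        for (int d = 1; d < DEVICES; d++) {
            double t = entry_temperature(line_temp, device_temp[d]);
            if (t < best_t) { best_t = t; best = d; }
        }
        return best;
    }

    int main(void)
    {
        double device_temp[DEVICES] = { 41.0, 55.5, 38.2, 47.3 };
        int d = place_write(90.0, device_temp);       /* hot cache line */
        printf("hot line  -> device %d\n", d);
        device_temp[d] += 6.0;                        /* that write warms the device */
        printf("next line -> device %d\n", place_write(20.0, device_temp));
        return 0;
    }
    ```
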
  • Publication number: 20140351523
    Abstract: The disclosure is directed to a system and method for managing cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a first cache data and a first cache metadata are stored for data transfers between a respective node and regions of a storage cluster receiving at least a first selected number of data transfer requests. When the node is rebooted, a second (new) cache data is stored to replace the first (old) cache data. The second cache data is compiled utilizing the first cache metadata to identify previously cached regions of the storage cluster receiving at least a second selected number of data transfer requests after the node is rebooted. The second selected number of data transfer requests is less than the first selected number of data transfer requests to enable a rapid build of the second cache data.
    Type: Application
    Filed: June 25, 2013
    Publication date: November 27, 2014
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
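
    A short C sketch of the two-threshold admission idea above, assuming request-count thresholds and region granularity; the numbers and names are illustrative.

    ```c
    /* Sketch: a region normally enters the cache after THRESH_NORMAL requests;
     * after a reboot, regions that the retained metadata says were cached
     * before are re-admitted after the smaller THRESH_REBOOT, so the new cache
     * data builds quickly. */
    #include <stdbool.h>
    #include <stdio.h>

    #define REGIONS       16
    #define THRESH_NORMAL 3
    #define THRESH_REBOOT 1

    static bool was_cached[REGIONS];  /* first cache metadata, kept across reboot */
    static bool cached[REGIONS];      /* second (new) cache data                  */
    static int  hits[REGIONS];        /* data-transfer requests seen per region   */

    static void on_request(int region)
    {
        hits[region]++;
        int threshold = was_cached[region] ? THRESH_REBOOT : THRESH_NORMAL;
        if (!cached[region] && hits[region] >= threshold)
            cached[region] = true;    /* admit region into the new cache */
    }

    int main(void)
    {
        was_cached[4] = true;         /* metadata survived the reboot */
        on_request(4);                /* previously hot: cached after one request */
        on_request(9);                /* new region: needs THRESH_NORMAL requests */
        printf("region 4 cached: %d, region 9 cached: %d\n", cached[4], cached[9]);
        return 0;
    }
    ```
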
  • Publication number: 20140344523
    Abstract: The disclosure is directed to a system and method for managing READ cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a cache data and a cache metadata are stored for data transfers between a respective node (hereinafter “first node”) and regions of a storage cluster. When the first node is disabled, data transfers are tracked between one or more active nodes of the plurality of nodes and cached regions of the storage cluster. When the first node is rebooted, at least a portion of valid cache data is retained based upon the tracked data transfers. Accordingly, local cache memory does not need to be entirely rebuilt each time a respective node is rebooted.
    Type: Application
    Filed: June 24, 2013
    Publication date: November 20, 2014
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
  • Publication number: 20140337578
    Abstract: A RAID system is provided in which, in the event that a rebuild is to be performed for one of the PDs, a filter driver of the operating system of the computer of the RAID system informs the RAID controller of the RAID system of addresses in the virtual memory that are unused. Unused virtual memory addresses are those which have never been written by the OS as well as those which have been written by the OS and subsequently freed by the OS. The RAID controller translates the unused virtual memory addresses into unused physical addresses. The RAID controller then reconstructs data and parity only for the unused physical addresses in the PD for which the rebuild is being performed. This reduces the amount of data and parity that are rebuilt during a rebuild process and reduces the amount of time that is required to perform the rebuild process. In addition, the RAID system is capable of being configured to prevent or reduce data sprawl.
    Type: Application
    Filed: July 29, 2014
    Publication date: November 13, 2014
    Inventor: Luca Bert
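
    A minimal C sketch of how the unused-address information could narrow a rebuild; the block granularity and names are assumptions, and the reconstruction step itself is elided.

    ```c
    /* Sketch: the filter driver reports which blocks are unused (never written,
     * or written and since freed); the rebuild skips those blocks and
     * reconstructs data and parity only where live data can exist. */
    #include <stdbool.h>
    #include <stdio.h>

    #define BLOCKS 32

    static bool unused[BLOCKS];               /* true = OS reports block unused */

    static void filter_driver_report(int block)
    {
        unused[block] = true;
    }

    static int rebuild_drive(void)
    {
        int rebuilt = 0;
        for (int b = 0; b < BLOCKS; b++) {
            if (unused[b])
                continue;                     /* nothing to reconstruct here */
            /* ... read the peer drives and XOR to reconstruct this block ... */
            rebuilt++;
        }
        return rebuilt;
    }

    int main(void)
    {
        for (int b = 10; b < 30; b++)
            filter_driver_report(b);          /* 20 of 32 blocks are unused */
        printf("blocks rebuilt: %d of %d\n", rebuild_drive(), BLOCKS);
        return 0;
    }
    ```
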
  • Publication number: 20140304464
    Abstract: A dedupe cache solution is provided that uses an in-line signature generation algorithm on the front-end of the data storage system and an off-line dedupe algorithm on the back-end of the data storage system. The in-line signature generation algorithm is performed as data is moved from the system memory device of the host system into the DRAM device of the storage controller. Because the signature generation algorithm is an in-line process, it has very little if any detrimental impact on write latency and is scalable to storage environments that have high IOPS. The back-end deduplication algorithm looks at data that the front-end process has indicated may be a duplicate and performs deduplication as needed. Because the deduplication algorithm is performed off-line on the back-end, it also does not contribute any additional write latency.
    Type: Application
    Filed: April 3, 2013
    Publication date: October 9, 2014
    Applicant: LSI Corporation
    Inventor: Luca Bert
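
    A C sketch separating the in-line front end from the off-line back end as described above; the hash, candidate queue, and signature table are illustrative assumptions.

    ```c
    /* Sketch: the front end computes a cheap signature in-line as each write is
     * staged and only flags blocks whose signature has been seen before; the
     * byte-for-byte verification and actual deduplication run later in an
     * off-line back-end pass, keeping the write path fast. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK     64
    #define MAX_QUEUE 16
    #define MAX_SIGS  256

    struct candidate { uint32_t sig; char data[BLOCK]; };

    static struct candidate queue[MAX_QUEUE]; /* dedupe candidates for the back end */
    static int qlen;
    static uint32_t seen_sigs[MAX_SIGS];      /* signatures observed so far */
    static int nsigs;

    static uint32_t signature(const char *d)  /* cheap in-line hash (FNV-1a) */
    {
        uint32_t h = 2166136261u;
        for (int i = 0; i < BLOCK; i++) { h ^= (uint8_t)d[i]; h *= 16777619u; }
        return h;
    }

    static void front_end_write(const char *data)   /* runs on the write path */
    {
        uint32_t sig = signature(data);
        for (int i = 0; i < nsigs; i++)
            if (seen_sigs[i] == sig && qlen < MAX_QUEUE) {
                queue[qlen].sig = sig;                /* possible duplicate */
                memcpy(queue[qlen].data, data, BLOCK);
                qlen++;
                return;
            }
        if (nsigs < MAX_SIGS)
            seen_sigs[nsigs++] = sig;
    }

    static void back_end_dedupe(void)               /* runs off-line, later */
    {
        for (int i = 0; i < qlen; i++)
            printf("verifying candidate with signature 0x%08x\n",
                   (unsigned)queue[i].sig);
        qlen = 0;
    }

    int main(void)
    {
        char a[BLOCK] = "hello", b[BLOCK] = "hello", c[BLOCK] = "world";
        front_end_write(a);
        front_end_write(c);
        front_end_write(b);       /* same contents as a: queued as a candidate */
        back_end_dedupe();
        return 0;
    }
    ```
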
  • Patent number: 8838891
    Abstract: The invention provides for SSD cache expansion by assigning all excess overprovisioned space (OP) above a level of advertised SSD memory to SSD cache. As additional SSD memory is needed to provide the advertised SSD memory, an offsetting portion of the OP is reassigned from excess overprovisioned space to the SSD cache. In this manner, the advertised SSD memory is maintained while continuously allocating all available excess OP to cache. The result is that all of the available SSD memory is allocated to cache, a portion to maintain the advertised SSD memory and the balance as excess OP allocated to cache. This eliminates idle OP in the SSD allocation.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: September 16, 2014
    Assignee: LSI Corporation
    Inventor: Luca Bert
  • Publication number: 20140258595
    Abstract: A cache controller implemented at the O/S kernel, driver, and application levels within a guest virtual machine dynamically allocates a cache store to virtual machines for improved responsiveness to changing demands of virtual machines. A single cache device or a group of cache devices are provisioned as multiple logical devices and exposed to a resource allocator. A core caching algorithm executes in the guest virtual machine. As new virtual machines are added under the management of the virtual machine monitor, existing virtual machines are prompted to relinquish a portion of the cache store allocated for use by the respective existing machines. The relinquished cache is allocated to the new machine. Similarly, if a virtual machine is shut down or migrated to a new host system, the cache capacity allocated to the virtual machine is redistributed among the remaining virtual machines being managed by the virtual machine monitor.
    Type: Application
    Filed: August 15, 2013
    Publication date: September 11, 2014
    Applicant: LSI Corporation
    Inventors: Pradeep Radhakrishna Venkatesha, Siddhartha Kumar Panda, Parag R. Maharana, Luca Bert
  • Patent number: 8825950
    Abstract: A RAID system is provided in which, in the event that a rebuild is to be performed for one of the PDs, a filter driver of the operating system of the computer of the RAID system informs the RAID controller of the RAID system of addresses in the virtual memory that are unused. Unused virtual memory addresses are those which have never been written by the OS as well as those which have been written by the OS and subsequently freed by the OS. The RAID controller translates the unused virtual memory addresses into unused physical addresses. The RAID controller then reconstructs data and parity only for the unused physical addresses in the PD for which the rebuild is being performed. This reduces the amount of data and parity that are rebuilt during a rebuild process and reduces the amount of time that is required to perform the rebuild process. In addition, the RAID system is capable of being configured to prevent or reduce data sprawl.
    Type: Grant
    Filed: March 1, 2011
    Date of Patent: September 2, 2014
    Assignee: LSI Corporation
    Inventor: Luca Bert
  • Publication number: 20140229941
    Abstract: A method and controller device for sharing computing resources in a virtualized environment having a plurality of virtual machines. The method includes assigning a portion of the computing resources to the plurality of virtual machines. The method also includes leasing by a first virtual machine at least a portion of the assigned computing resources of at least one second virtual machine. The first virtual machine leases computing resources from the at least one second virtual machine when the first virtual machine needs additional computing resources and at least a portion of the assigned computing resources of the at least one second virtual machine are not being used by the at least one second virtual machine.
    Type: Application
    Filed: February 14, 2013
    Publication date: August 14, 2014
    Applicant: LSI CORPORATION
    Inventors: Luca Bert, Parag R. Maharana
  • Publication number: 20140223071
    Abstract: A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device. The host system completes each IO without the need for intervention from the memory controller, thereby obviating the need for synchronization, or handshaking, between the host system and the memory controller. For write commands, the memory controller does not need to issue a completion interrupt to the host system upon completion of the command because the host system considers the write command completed at the time that the write command is pushed into the queue of the memory controller. The combination of all of these features results in a large reduction in overall latency.
    Type: Application
    Filed: February 4, 2013
    Publication date: August 7, 2014
    Applicant: LSI CORPORATION
    Inventors: Luca Bert, Anant Baderdinni, Horia Simionescu, Mark Ish
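
    A C sketch of the command-push flow described above; the queue layout, field names, and single-producer/single-consumer simplification are assumptions, not the patented interface.

    ```c
    /* Sketch: the controller exposes a command queue in its NVM directly to the
     * host; the host pushes a command descriptor and, for writes, treats the
     * I/O as complete at push time, so no handshake or completion interrupt is
     * needed. */
    #include <stdint.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 8

    struct command { uint8_t opcode; uint64_t lba; uint32_t len; };

    /* Stands in for the NVM-resident queue mapped into the host address space. */
    static struct command nvm_queue[QUEUE_DEPTH];
    static unsigned head, tail;   /* host advances tail, controller advances head */

    /* Host side: push the command and return immediately. */
    static int host_push(struct command c)
    {
        if (tail - head == QUEUE_DEPTH)
            return -1;                            /* queue full */
        nvm_queue[tail % QUEUE_DEPTH] = c;
        tail++;                                   /* the write is now "complete"
                                                     from the host's point of view */
        return 0;
    }

    /* Controller side: drain and execute pushed commands at its own pace. */
    static void controller_drain(void)
    {
        while (head != tail) {
            struct command c = nvm_queue[head % QUEUE_DEPTH];
            printf("executing opcode %u at LBA %llu (%u blocks)\n",
                   (unsigned)c.opcode, (unsigned long long)c.lba, c.len);
            head++;
        }
    }

    int main(void)
    {
        host_push((struct command){ .opcode = 1 /* write */, .lba = 4096, .len = 8 });
        host_push((struct command){ .opcode = 1, .lba = 8192, .len = 16 });
        controller_drain();
        return 0;
    }
    ```
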