Patents by Inventor Horia Simionescu
Horia Simionescu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180113634
Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving an Input/Output (I/O) command from a host system at a storage controller, parsing the I/O command at the storage controller with a host I/O manager to extract command instructions therefrom. The host I/O manager is able to generate at least one local message that includes the command instructions extracted from the I/O command and transmit the at least one local message to a cache manager. The cache manager is enabled to work in local memory to execute the command instructions contained in the at least one message. The cache manager is also configured to chain multiple buffer segments together on-demand to support multiple stripe sizes that are specific to the I/O command received from the host system.
Type: Application
Filed: October 26, 2016
Publication date: April 26, 2018
Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
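The on-demand chaining of buffer segments described in this abstract can be pictured with a small sketch. The Python below is illustrative only, not the patented implementation; the segment size, the free-pool handling, and the class names are assumptions.

```python
# Illustrative sketch: fixed-size buffer segments are linked together until the
# chain covers the stripe size implied by the host I/O, so different stripe
# sizes reuse the same pool of segments. All sizes and names are assumptions.

SEGMENT_SIZE = 16 * 1024       # bytes per buffer segment (assumed)

class BufferSegment:
    def __init__(self, segment_id):
        self.segment_id = segment_id
        self.next = None           # link to the next segment in the chain

class CacheManager:
    def __init__(self, num_segments):
        self.free = [BufferSegment(i) for i in range(num_segments)]

    def chain_for_stripe(self, stripe_size):
        """Chain just enough free segments to cover one stripe."""
        needed = -(-stripe_size // SEGMENT_SIZE)     # ceiling division
        if needed > len(self.free):
            raise MemoryError("not enough free buffer segments")
        head = prev = self.free.pop()
        for _ in range(needed - 1):
            seg = self.free.pop()
            prev.next = seg
            prev = seg
        return head

mgr = CacheManager(num_segments=32)
head = mgr.chain_for_stripe(stripe_size=64 * 1024)   # needs four 16 KiB segments
count, node = 0, head
while node:
    count += 1
    node = node.next
print(count, "segments chained")
```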
-
Publication number: 20180113639
Abstract: A system and method for efficient variable length memory frame allocation are described. The method includes receiving a frame allocation request from a host system, allocating a super frame from a stack of free super frames for the frame allocation request, the super frame comprising a set of consecutively numbered frames, updating entries in a super frame bitmap to indicate that the super frame has been allocated from the stack of free super frames, determining a super frame identifier for the allocated super frame, and enabling the super frame or the set of consecutively numbered frames to be used for storing data in connection with the frame allocation request or subsequent frame allocation requests from the host system.
Type: Application
Filed: October 26, 2016
Publication date: April 26, 2018
Inventors: Horia Simionescu, Eugene Saghi, Sridhar Rao Veerla, Panthini Pandit, Timothy Hoglund, Gowrisankar Radhakrishnan
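A minimal sketch of the allocation flow in this abstract, under stated assumptions: the super-frame size, the stack representation, and the one-bit-per-super-frame bitmap are illustrative, not the patented layout.

```python
# Sketch: allocate a "super frame" (a run of consecutively numbered frames)
# from a stack of free super frames and mark it in a bitmap.

FRAMES_PER_SUPER_FRAME = 16        # assumed super-frame width

class SuperFrameAllocator:
    def __init__(self, total_super_frames):
        # Stack of free super-frame identifiers.
        self.free_stack = list(range(total_super_frames - 1, -1, -1))
        # One bit per super frame: 0 = free, 1 = allocated.
        self.bitmap = [0] * total_super_frames

    def allocate(self):
        if not self.free_stack:
            raise MemoryError("no free super frames")
        sf_id = self.free_stack.pop()          # take from the free stack
        self.bitmap[sf_id] = 1                 # record the allocation
        first_frame = sf_id * FRAMES_PER_SUPER_FRAME
        frames = list(range(first_frame, first_frame + FRAMES_PER_SUPER_FRAME))
        return sf_id, frames                   # identifier plus its frames

    def release(self, sf_id):
        self.bitmap[sf_id] = 0
        self.free_stack.append(sf_id)

alloc = SuperFrameAllocator(total_super_frames=4)
sf_id, frames = alloc.allocate()
print(sf_id, frames[:3], "...")
```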
-
Publication number: 20180113810
Abstract: A system and method for data caching are provided. The disclosed method includes organizing a plurality of hash slots into a plurality of hash slot buckets such that each hash slot bucket in the plurality of hash slot buckets contains a plurality of hash slots having Logical Block Addressing (LBA) and Cache Segment ID (CSID) pairs, receiving an Input/Output (I/O) request from a host system, determining that cache memory is needed to fulfill the I/O request, and performing a cache lookup in connection with fulfilling the I/O request, where the cache lookup includes analyzing the plurality of hash slots for unoccupied hash slots by comparing a hash against hash values assigned to the hash slot buckets instead of individual hash values assigned to the hash slots.
Type: Application
Filed: October 26, 2016
Publication date: April 26, 2018
Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
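The bucket-level lookup idea can be sketched as follows. This is a hedged illustration, not the patented structure: the bucket size, hash function, and eviction behaviour are assumptions.

```python
# Sketch: hash slots holding (LBA, CSID) pairs are grouped into buckets, and a
# lookup first selects a bucket by hash rather than probing every slot.

SLOTS_PER_BUCKET = 4               # assumed bucket width

class HashSlotBuckets:
    def __init__(self, num_buckets):
        self.num_buckets = num_buckets
        # Each bucket holds up to SLOTS_PER_BUCKET (lba, csid) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket_hash(self, lba):
        # One hash value per bucket; all LBAs in a bucket share it.
        return hash(lba) % self.num_buckets

    def insert(self, lba, csid):
        b = self._bucket_hash(lba)
        if len(self.buckets[b]) >= SLOTS_PER_BUCKET:
            raise RuntimeError("bucket full; an eviction would be needed")
        self.buckets[b].append((lba, csid))

    def lookup(self, lba):
        # Compare against the bucket's hash first, then scan only that bucket.
        b = self._bucket_hash(lba)
        for slot_lba, csid in self.buckets[b]:
            if slot_lba == lba:
                return csid
        return None

cache = HashSlotBuckets(num_buckets=8)
cache.insert(lba=0x1000, csid=42)
print(cache.lookup(0x1000))   # -> 42
print(cache.lookup(0x2000))   # -> None (cache miss)
```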
-
Publication number: 20180113635
Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, identifying a particular memory module from among a plurality of memory modules to execute the I/O action, generating an accelerated I/O message for transmission to the particular memory module, the accelerated I/O message comprising at least one Internal Scatter Gather List (ISGL) having a plurality of Scatter Gather Extents (SGEs) that enable the particular memory module to execute the I/O action solely based on the at least one ISGL, and transmitting the accelerated I/O message to the particular memory module.
Type: Application
Filed: October 26, 2016
Publication date: April 26, 2018
Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
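A short sketch of what an accelerated I/O message built around an internal scatter gather list might look like. The field names and message layout below are assumptions for illustration, not the patent's actual format.

```python
# Sketch: an ISGL is a sequence of scatter gather extents (SGEs), each naming a
# buffer, an offset, and a length, packaged into one message so the receiving
# memory module has everything it needs to execute the I/O.

from dataclasses import dataclass
from typing import List

@dataclass
class SGE:
    buffer_id: int     # which cache buffer segment (assumed identifier)
    offset: int        # byte offset inside that buffer
    length: int        # number of bytes covered by this extent

@dataclass
class AcceleratedIOMessage:
    opcode: str        # e.g. "WRITE" or "READ"
    isgl: List[SGE]    # the extents that fully describe the transfer

def build_message(opcode, extents):
    sges = [SGE(buf, off, ln) for buf, off, ln in extents]
    return AcceleratedIOMessage(opcode=opcode, isgl=sges)

msg = build_message("WRITE", [(7, 0, 4096), (9, 0, 4096), (9, 4096, 512)])
total = sum(sge.length for sge in msg.isgl)
print(msg.opcode, len(msg.isgl), "extents covering", total, "bytes")
```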
-
Publication number: 20180113633
Abstract: A system and method for efficient cache buffering are provided. The disclosed method includes receiving a host command from a host, extracting command information from the host command, determining an Input/Output (I/O) action to be taken in connection with the host command, determining that the I/O action spans more than one strip, and based on the I/O action spanning more than one strip, allocating a cache frame anchor for a row on-demand along with a cache frame anchor for a strip to accommodate the I/O action.
Type: Application
Filed: October 26, 2016
Publication date: April 26, 2018
Inventors: Horia Simionescu, Timothy Hoglund, Sridhar Rao Veerla, Panthini Pandit, Gowrisankar Radhakrishnan
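The on-demand row anchor can be illustrated with a rough sketch. The strip size, row geometry, and anchor contents below are assumptions; only the trigger condition (the I/O spanning more than one strip) comes from the abstract.

```python
# Sketch: a cache frame anchor normally covers a single strip; only when an I/O
# spans more than one strip is an additional anchor allocated for the row.

STRIP_SIZE = 64 * 1024      # assumed
STRIPS_PER_ROW = 4          # assumed

class CacheFrameAnchors:
    def __init__(self):
        self.strip_anchors = {}    # strip number -> anchor metadata
        self.row_anchors = {}      # row number   -> anchor metadata

    def allocate_for_io(self, offset, length):
        first_strip = offset // STRIP_SIZE
        last_strip = (offset + length - 1) // STRIP_SIZE
        for strip in range(first_strip, last_strip + 1):
            self.strip_anchors.setdefault(strip, {"strip": strip})
        if last_strip > first_strip:                        # spans > 1 strip
            row = first_strip // STRIPS_PER_ROW
            self.row_anchors.setdefault(row, {"row": row})  # row anchor on demand
        return first_strip, last_strip

anchors = CacheFrameAnchors()
anchors.allocate_for_io(offset=32 * 1024, length=16 * 1024)   # single strip
anchors.allocate_for_io(offset=48 * 1024, length=64 * 1024)   # spans two strips
print(len(anchors.strip_anchors), "strip anchors,", len(anchors.row_anchors), "row anchor")
```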
-
Patent number: 9734062
Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
Type: Grant
Filed: December 18, 2013
Date of Patent: August 15, 2017
Assignee: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
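The sub-cache-line bookkeeping can be sketched briefly. The sizes below and the exact metadata representation are assumptions; the point illustrated is that only the sub-cache lines an I/O actually touches need a cache fill.

```python
# Sketch: each cache line is divided into sub-cache lines, each with its own
# dirty/invalid metadata, and I/O is aligned to sub-cache-line boundaries.

SUB_LINE_SIZE = 4 * 1024          # assumed 4 KiB sub-cache line
SUB_LINES_PER_LINE = 16           # assumed 64 KiB cache line

def sub_line_span(offset, length):
    """Return the range of sub-cache-line indices an I/O touches."""
    first = offset // SUB_LINE_SIZE
    last = (offset + length - 1) // SUB_LINE_SIZE
    return first, last

class CacheLine:
    def __init__(self):
        self.valid = [False] * SUB_LINES_PER_LINE
        self.dirty = [False] * SUB_LINES_PER_LINE

    def write(self, offset, length):
        first, last = sub_line_span(offset, length)
        for i in range(first, last + 1):
            self.valid[i] = True
            self.dirty[i] = True      # only touched sub-lines become dirty

    def sub_lines_to_fill(self, offset, length):
        """For a read, only invalid sub-lines in the span need a cache fill."""
        first, last = sub_line_span(offset, length)
        return [i for i in range(first, last + 1) if not self.valid[i]]

line = CacheLine()
line.write(offset=6 * 1024, length=10 * 1024)               # touches sub-lines 1..3
print(line.sub_lines_to_fill(offset=0, length=32 * 1024))   # fill only 0 and 4..7
```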
-
Patent number: 9542101
Abstract: A data storage system and methods for managing data to be transferred between a host and a data volume distributed across solid state storage modules are disclosed. A storage controller couples the host to the data volume and manages data transfers to and from the logical volume. The storage controller receives a set of parameters that define how an array of blocks and chunks of buffered data will be distributed across solid state storage modules. The storage controller receives and buffers data to be stored and transfers the same when the capacity of the buffered data will fill a set of arranged stripes in the defined array in a single write operation.
Type: Grant
Filed: September 22, 2013
Date of Patent: January 10, 2017
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Horia Simionescu, Anant Baderdinni, Luca Bert
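The buffering policy can be illustrated with a short sketch. The chunk size, module count, and flush threshold below are assumed geometry parameters, not values from the patent; the idea shown is accumulating host data until it fills a complete set of stripes, then writing it in one operation.

```python
# Sketch: stage incoming host data and flush only when a full set of stripes
# across the solid state modules can be written in a single operation.

CHUNK_SIZE = 64 * 1024        # bytes per chunk on one module (assumed)
NUM_MODULES = 4               # stripes span four modules (assumed)
STRIPES_PER_FLUSH = 2         # flush when two full stripes are buffered (assumed)

FLUSH_THRESHOLD = CHUNK_SIZE * NUM_MODULES * STRIPES_PER_FLUSH

class StripingBuffer:
    def __init__(self, write_fn):
        self.pending = bytearray()
        self.write_fn = write_fn          # performs the single write operation

    def buffer_host_data(self, data: bytes):
        self.pending.extend(data)
        while len(self.pending) >= FLUSH_THRESHOLD:
            full_stripes = bytes(self.pending[:FLUSH_THRESHOLD])
            del self.pending[:FLUSH_THRESHOLD]
            self.write_fn(full_stripes)   # one write covering the stripe set

writes = []
buf = StripingBuffer(write_fn=lambda blob: writes.append(len(blob)))
for _ in range(5):
    buf.buffer_host_data(b"\x00" * (128 * 1024))   # 128 KiB per host write
print(writes, "bytes still buffered:", len(buf.pending))
```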
-
Patent number: 9292228
Abstract: A RAID controller includes a cache memory in which write cache blocks (WCBs) are protected by a RAID-5 (striping plus parity) scheme while read cache blocks (RCBs) are not protected in such a manner. If a received cache block is an RCB, the RAID controller stores it in the cache memory without storing any corresponding parity information. When a sufficient number of WCBs to constitute a full stripe have been received but not yet stored in the cache memory, the RAID controller computes a corresponding parity block and stores the WCBs and parity block in the cache memory as a single stripe.
Type: Grant
Filed: February 6, 2013
Date of Patent: March 22, 2016
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Anant Baderdinni, Horia Simionescu, Luca Bert
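A minimal sketch of the asymmetric protection described above: read cache blocks go into cache memory unprotected, while write cache blocks are held back until a full stripe's worth is available, at which point an XOR parity block is computed and the WCBs plus parity are stored as one stripe. Stripe width and block size are assumptions for the example.

```python
# Sketch: RAID-5 style parity for write cache blocks only.

BLOCK_SIZE = 4096
DATA_BLOCKS_PER_STRIPE = 3     # assumed: N data blocks + 1 parity block

def xor_parity(blocks):
    parity = bytearray(BLOCK_SIZE)
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

class CacheMemory:
    def __init__(self):
        self.read_blocks = []        # RCBs stored without parity
        self.stripes = []            # each stripe: (list of WCBs, parity)
        self._pending_wcbs = []

    def store_rcb(self, block):
        self.read_blocks.append(block)          # no parity for read cache

    def store_wcb(self, block):
        self._pending_wcbs.append(block)
        if len(self._pending_wcbs) == DATA_BLOCKS_PER_STRIPE:
            parity = xor_parity(self._pending_wcbs)
            self.stripes.append((self._pending_wcbs, parity))
            self._pending_wcbs = []             # stripe stored as a unit

cache = CacheMemory()
cache.store_rcb(b"\x11" * BLOCK_SIZE)
for fill in (b"\xaa", b"\x55", b"\x0f"):
    cache.store_wcb(fill * BLOCK_SIZE)
wcbs, parity = cache.stripes[0]
print(len(cache.read_blocks), len(cache.stripes), hex(parity[0]))  # 0xaa ^ 0x55 ^ 0x0f
```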
-
Patent number: 9274713
Abstract: A storage controller coupled to a host computer is dynamically configured by a device driver executing in the host computer. The storage controller manages a logical volume for the host using a set of flash-based storage devices arranged as a redundant array of inexpensive disks (RAID). The device driver identifies a RAID type for the logical volume and a queue depth from a stream of I/O commands. For a logical volume in RAID 0, the device driver compares the queue depth to a threshold value and configures the storage controller to process the stream of I/O commands with a first path or an alternative path based on a result of the comparison. For a logical volume in RAID 5, the device driver performs a similar comparison and uses the result to direct the storage controller to use a write back or a write through mode of operation.
Type: Grant
Filed: May 8, 2014
Date of Patent: March 1, 2016
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Horia Simionescu, Siddhartha Kumar Panda, Kunal Sablok, Kapil Sundrani
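The driver-side decision can be sketched roughly. Note the hedging: the threshold values, and which configuration corresponds to which side of each comparison, are assumptions made for illustration; the abstract only states that a comparison against a threshold drives the choice.

```python
# Sketch: the device driver observes queue depth and, depending on RAID level,
# chooses either an I/O path (RAID 0) or a cache mode (RAID 5).

RAID0_QD_THRESHOLD = 32        # assumed
RAID5_QD_THRESHOLD = 16        # assumed

def configure_controller(raid_level, queue_depth):
    """Return the configuration the driver would push to the controller."""
    if raid_level == 0:
        # Assumed mapping: deep queues take the first path, shallow the alternate.
        path = "first_path" if queue_depth >= RAID0_QD_THRESHOLD else "alternate_path"
        return {"io_path": path}
    if raid_level == 5:
        # Assumed mapping: deep queues favour write back, shallow write through.
        mode = "write_back" if queue_depth >= RAID5_QD_THRESHOLD else "write_through"
        return {"cache_mode": mode}
    return {}

print(configure_controller(raid_level=0, queue_depth=64))   # {'io_path': 'first_path'}
print(configure_controller(raid_level=5, queue_depth=4))    # {'cache_mode': 'write_through'}
```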
-
Patent number: 9256384
Abstract: A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device. The host system completes each IO without the need for intervention from the memory controller, thereby obviating the need for synchronization, or handshaking, between the host system and the memory controller. For write commands, the memory controller does not need to issue a completion interrupt to the host system upon completion of the command because the host system considers the write command completed at the time that the write command is pushed into the queue of the memory controller. The combination of all of these features results in a large reduction in overall latency.
Type: Grant
Filed: February 4, 2013
Date of Patent: February 9, 2016
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Luca Bert, Anant Baderdinni, Horia Simionescu, Mark Ish
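An illustrative sketch of the command-push idea: the host writes commands directly into a queue kept in controller-resident memory, and a write is treated as complete from the host's point of view as soon as it has been pushed, so no completion interrupt is needed. The queue layout below is an assumption, not the patented NVM data structure.

```python
# Sketch: host-side command push with no handshake and no write-completion interrupt.

from collections import deque

class ControllerQueue:
    """Stands in for the command queue kept in the controller's NVM."""
    def __init__(self):
        self.commands = deque()

    def push(self, command):
        self.commands.append(command)   # host-side push, no synchronization

class Host:
    def __init__(self, controller_queue):
        self.queue = controller_queue

    def submit_write(self, lba, data):
        self.queue.push({"op": "WRITE", "lba": lba, "data": data})
        return "complete"               # complete at push time; no interrupt

    def submit_read(self, lba, on_data):
        # Reads still need data back, so a completion callback is registered.
        self.queue.push({"op": "READ", "lba": lba, "callback": on_data})
        return "pending"

q = ControllerQueue()
host = Host(q)
print(host.submit_write(lba=100, data=b"abc"))   # -> complete
print(host.submit_read(lba=100, on_data=print))  # -> pending
print(len(q.commands), "commands queued")
```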
-
Publication number: 20160034185
Abstract: Methods and structure for splitting Input/Output (I/O) for Redundant Array of Independent Disks (RAID) systems. One embodiment is a system that includes a processor of a host, and a memory of the host. The processor and the memory implement a device driver for communicating with a physically distinct RAID controller. The device driver is able to receive an I/O request, from an Operating System of the host, that is directed to a RAID volume. The device driver is further able to determine that the controller includes dedicated circuitry for handling I/O requests directed to a single RAID strip. Responsive to determining that the controller includes such dedicated circuitry, the device driver is able to identify RAID strip boundaries within the received request, and to generate multiple child I/O requests that are each directed to a single strip of the volume and correspond to the identified strip boundaries.
Type: Application
Filed: July 30, 2014
Publication date: February 4, 2016
Inventors: Horia Simionescu, Kunal Sablok, Siddharth Kumar Panda, Durga Prasad Bhattarai
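The strip-boundary splitting lends itself to a compact sketch: given a request that may cross RAID strip boundaries, generate one child request per strip so a controller fast path that only handles single-strip I/O can be used. The strip size is an illustrative assumption.

```python
# Sketch: split a host I/O request into single-strip child requests.

STRIP_SIZE = 64 * 1024   # bytes per RAID strip (assumed)

def split_io(offset, length):
    """Return (offset, length) child requests, each confined to one strip."""
    children = []
    end = offset + length
    while offset < end:
        strip_end = (offset // STRIP_SIZE + 1) * STRIP_SIZE
        chunk = min(end, strip_end) - offset
        children.append((offset, chunk))
        offset += chunk
    return children

# A 150 KiB request starting 16 KiB into a strip crosses two strip boundaries,
# so it becomes three single-strip child requests.
for child in split_io(offset=16 * 1024, length=150 * 1024):
    print(child)
```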
-
Publication number: 20160026579
Abstract: A cache controller having a cache supported by a non-volatile memory element manages metadata operations by defining a mathematical relationship between a cache line in a data store exposed to a host system and a location identifier associated with an instance of the cache line in the non-volatile memory. The cache controller maintains most recently used bit maps identifying data in the cache, as well as a data characteristic bit map identifying data that has changed since it was added to the cache. The cache controller maintains a most recently used bit map to replace the current map at an appropriate time, and a fresh bitmap tracks the most recently used bit map. The cache controller uses a collision bitmap, an imposter index and a quotient to modify cache lines stored in the non-volatile memory element.
Type: Application
Filed: July 22, 2014
Publication date: January 28, 2016
Inventors: Sumanesh Samanta, Saugata Das Purkayastha, Mark Ish, Horia Simionescu, Luca Bert
-
Patent number: 9244868
Abstract: A method and system for IO processing in a storage system is disclosed. In accordance with the present disclosure, a controller may take long term "lease" of a portion (e.g., an LBA range) of a virtual disk of a RAID system and then utilize local locks for IOs directed to the leased portion. The method and system in accordance with the present disclosure eliminates inter-controller communication for the majority of IOs and improves the overall performance for a High Availability Active-Active DAS RAID system.
Type: Grant
Filed: September 21, 2012
Date of Patent: January 26, 2016
Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
Inventors: Sumanesh Samanta, Sujan Biswas, Horia Simionescu
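A hedged sketch of the lease idea: one controller takes a long-term lease on an LBA range of the virtual disk, after which I/O inside that range only needs a local lock, and inter-controller messaging is reserved for ranges the controller does not hold. The lease bookkeeping below is an illustrative assumption, not the patented protocol.

```python
# Sketch: LBA-range leases with a local fast path.

import threading

class LbaLeaseManager:
    def __init__(self, controller_id):
        self.controller_id = controller_id
        self.leases = []                      # list of (start_lba, end_lba)
        self.local_lock = threading.Lock()

    def acquire_lease(self, start_lba, end_lba):
        # In a real cluster this would involve a one-time peer negotiation.
        self.leases.append((start_lba, end_lba))

    def _holds(self, lba, count):
        return any(s <= lba and lba + count <= e for s, e in self.leases)

    def handle_io(self, lba, count):
        if self._holds(lba, count):
            with self.local_lock:             # fast path: local lock only
                return "served locally"
        return "forwarded to peer controller" # slow path: inter-controller

ctrl = LbaLeaseManager(controller_id=0)
ctrl.acquire_lease(start_lba=0, end_lba=1_000_000)
print(ctrl.handle_io(lba=4096, count=8))        # served locally
print(ctrl.handle_io(lba=2_000_000, count=8))   # forwarded to peer controller
```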
-
Publication number: 20150347310
Abstract: A cache controller coupled to a cache store supported by a solid-state memory element uses a metadata update process that reduces write amplification caused by writing both cache data and metadata to the solid-state memory element. The cache controller partitions the solid-state memory element to include a metadata portion, a host data or cache portion and a log portion. Host write requests that include "hot" data are processed and recorded by the cache controller. The cache controller maintains first and second maps. A log thread combines multiple metadata updates in a single log entry block. Pending metadata updates are checked to determine when a commit threshold is reached. Thereafter, the pending metadata updates are written to the solid-state memory element and the maps are updated.
Type: Application
Filed: May 30, 2014
Publication date: December 3, 2015
Applicant: LSI Corporation
Inventors: Mark Ish, Sumanesh Samanta, Horia Simionescu, Saugata Das Purkayastha
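The batching step can be sketched in a few lines: instead of writing a metadata update to flash for every cached host write, updates are queued, packed into a single log-entry block, and flushed only when a commit threshold is reached, which reduces write amplification. The threshold and record format are illustrative assumptions.

```python
# Sketch: combine many metadata updates into one log-block write.

COMMIT_THRESHOLD = 8            # assumed: pending updates per log block

class MetadataLogger:
    def __init__(self, flash_write_fn):
        self.pending = []                   # metadata updates not yet on flash
        self.flash_write = flash_write_fn   # writes one log block to the SSD

    def record_host_write(self, cache_line, ssd_location):
        self.pending.append({"line": cache_line, "loc": ssd_location})
        if len(self.pending) >= COMMIT_THRESHOLD:
            self._commit()

    def _commit(self):
        log_block = {"entries": list(self.pending)}   # many updates, one write
        self.flash_write(log_block)
        self.pending.clear()

flash_writes = []
log = MetadataLogger(flash_write_fn=flash_writes.append)
for i in range(20):
    log.record_host_write(cache_line=i, ssd_location=1000 + i)
print(len(flash_writes), "log writes for 20 metadata updates;",
      len(log.pending), "still pending")
```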
-
Patent number: 9158695
Abstract: The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space being determined according to a difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space.
Type: Grant
Filed: August 3, 2012
Date of Patent: October 13, 2015
Assignee: Seagate Technology LLC
Inventors: Horia Simionescu, Mark Ish, Luca Bert, Robert Quinn, Earl T. Cohen, Timothy Canepa
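The reclaimed-space accounting can be illustrated with a small sketch: host data is compressed before it is stored, the difference between the original and compressed sizes is treated as reclaimed space, and the logical capacity presented to the host grows by at least part of that reclaimed space. zlib stands in for whatever compressor the device actually uses; all names are assumptions.

```python
# Sketch: compression-driven reclaimed space and a variable logical capacity.

import zlib

class AdaptiveCacheDevice:
    def __init__(self, physical_capacity):
        self.physical_capacity = physical_capacity
        self.used = 0
        self.reclaimed = 0

    def write(self, data: bytes):
        compressed = zlib.compress(data)
        self.used += len(compressed)
        self.reclaimed += len(data) - len(compressed)   # space saved by compression

    def logical_capacity(self):
        # Present a variable logical capacity that includes reclaimed space.
        return self.physical_capacity + self.reclaimed

    def available(self):
        return self.physical_capacity - self.used

dev = AdaptiveCacheDevice(physical_capacity=1 << 20)      # 1 MiB physical (assumed)
dev.write(b"A" * 65536)                                   # highly compressible data
print(dev.available(), dev.logical_capacity())
```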
-
Publication number: 20150286438
Abstract: A storage controller coupled to a host computer is dynamically configured by a device driver executing in the host computer. The storage controller manages a logical volume for the host using a set of flash-based storage devices arranged as a redundant array of inexpensive disks (RAID). The device driver identifies a RAID type for the logical volume and a queue depth from a stream of I/O commands. For a logical volume in RAID 0, the device driver compares the queue depth to a threshold value and configures the storage controller to process the stream of I/O commands with a first path or an alternative path based on a result of the comparison. For a logical volume in RAID 5, the device driver performs a similar comparison and uses the result to direct the storage controller to use a write back or a write through mode of operation.
Type: Application
Filed: May 8, 2014
Publication date: October 8, 2015
Applicant: LSI Corporation
Inventors: Horia Simionescu, Siddhartha Kumar Panda, Kunal Sablok, Kapil Sundrani
-
Publication number: 20150169458
Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
Type: Application
Filed: December 18, 2013
Publication date: June 18, 2015
Applicant: LSI Corporation
Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
-
Patent number: 9015525
Abstract: A high availability DAS system uses a solid state cache to provide near active-active performance in a DAS cluster, while retaining the implementation simplicity of active-passive or dual active system. Each node in the cluster may include a solid state cache that stores hot I/O in an active-active mode, which allows the data to be read from or written to the underlying dual-active or active/passive DAS RAID system only when access to the "hot region" cools down or in the case of a cache miss. The hot I/O data includes hot read data that accumulated dynamically regardless of ownership of the drives where the hot read data is permanently stored. The hot I/O data also includes hot write data that is mirrored across the solid state cache memories to avoid potential dirty write data conflicts and also to provide High Availability in case of server failures.
Type: Grant
Filed: June 19, 2012
Date of Patent: April 21, 2015
Assignee: LSI Corporation
Inventors: Sumanesh Samanta, Sujan Biswas, Horia Simionescu
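An illustrative sketch of the hot-data caching described above: reads that hit a hot region are served from the local solid state cache regardless of which node owns the underlying drives, while hot write data is mirrored into the peer node's cache so a server failure does not lose dirty data. The two-node coupling and data structures shown are assumptions for the example.

```python
# Sketch: per-node SSD caches with mirrored hot writes in a two-node DAS cluster.

class NodeCache:
    def __init__(self, name):
        self.name = name
        self.hot = {}          # lba -> data held in the SSD cache
        self.peer = None       # the other node in the two-node cluster

    def read(self, lba, backend_read_fn):
        if lba in self.hot:                    # hot read: no ownership check
            return self.hot[lba]
        data = backend_read_fn(lba)            # cache miss: go to the DAS RAID
        self.hot[lba] = data
        return data

    def write(self, lba, data):
        self.hot[lba] = data                   # local SSD cache copy
        self.peer.hot[lba] = data              # mirrored copy on the peer node
        return "acknowledged"

backend = {10: b"cold data"}
a, b = NodeCache("node-a"), NodeCache("node-b")
a.peer, b.peer = b, a
print(a.read(10, backend_read_fn=backend.get))    # miss, then cached locally
print(a.write(10, b"hot data"), 10 in b.hot)      # write mirrored to node-b
```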
-
Publication number: 20140244902
Abstract: An apparatus having a cache and a circuit is disclosed. The cache includes a plurality of cache lines. The cache is configured to (i) store a plurality of data items in the cache lines and (ii) generate a map that indicates a dirty state or a clean state of each of the cache lines. The cache also has a write-back policy to a memory. The circuit is configured to (i) check a location in the map corresponding to a read address of a read request and (ii) obtain read data directly from the memory by bypassing the cache in response to the location having the clean state.
Type: Application
Filed: March 15, 2013
Publication date: August 28, 2014
Applicant: LSI CORPORATION
Inventors: Horia Simionescu, Siddartha Kumar Panda, Kunal Sablok, Veera Kumar Reddy Oleti
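The read-bypass behaviour can be sketched compactly: a per-cache-line map records whether each line is dirty or clean, and a read whose address maps to a clean line is served directly from backing memory, bypassing the cache. The mapping from address to cache line is an illustrative assumption.

```python
# Sketch: write-back cache that bypasses itself for reads of clean lines.

LINE_SIZE = 4096   # assumed cache-line size

class WriteBackCache:
    def __init__(self, backing_read_fn):
        self.dirty = {}                  # line index -> True when dirty
        self.lines = {}                  # line index -> cached data
        self.backing_read = backing_read_fn

    def write(self, address, data):
        line = address // LINE_SIZE
        self.lines[line] = data
        self.dirty[line] = True          # write-back: memory is stale for now

    def read(self, address):
        line = address // LINE_SIZE
        if self.dirty.get(line):         # dirty: cache holds the newest copy
            return self.lines[line]
        return self.backing_read(address)   # clean: bypass the cache entirely

backing = {0: b"old", 8192: b"memory copy"}
cache = WriteBackCache(backing_read_fn=lambda addr: backing[addr])
cache.write(0, b"new")
print(cache.read(0))        # b'new'         (dirty line, served from cache)
print(cache.read(8192))     # b'memory copy' (clean line, cache bypassed)
```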
-
Publication number: 20140223071
Abstract: A data storage system is provided that implements a command-push model that reduces latencies. The host system has access to a nonvolatile memory (NVM) device of the memory controller to allow the host system to push commands into a command queue located in the NVM device. The host system completes each IO without the need for intervention from the memory controller, thereby obviating the need for synchronization, or handshaking, between the host system and the memory controller. For write commands, the memory controller does not need to issue a completion interrupt to the host system upon completion of the command because the host system considers the write command completed at the time that the write command is pushed into the queue of the memory controller. The combination of all of these features results in a large reduction in overall latency.
Type: Application
Filed: February 4, 2013
Publication date: August 7, 2014
Applicant: LSI CORPORATION
Inventors: Luca Bert, Anant Baderdinni, Horia Simionescu, Mark Ish