Patents by Inventor Mark Ish

Mark Ish has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20160034186
    Abstract: Methods and structure for host-side device drivers for Redundant Array of Independent Disks (RAID) systems. One system includes a processor and memory of a host, which implement a device driver. The device driver receives an Input/Output (I/O) request from an Operating System (OS) of the host, translates Logical Block Addresses (LBAs) from the received request into physical addresses at multiple storage devices, generates child I/O requests directed to the physical addresses based on the received request, and accesses an address lock system at a RAID controller to determine whether the physical addresses are accessible. If the physical addresses are accessible, the device driver reserves the physical addresses by updating the address lock system, and directs the child I/O requests to a hardware path at the RAID controller for handling single-strip I/O requests. If the physical addresses are not accessible, the device driver delays processing of the child I/O requests.
    Type: Application
    Filed: July 30, 2014
    Publication date: February 4, 2016
    Applicant: LSI CORPORATION
    Inventors: Adam Weiner, James A Rizzo, Mark Ish, Robert L Sheffield, Horia Cristian Simionescu
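
    A minimal Python sketch of the address-lock check described in the abstract above. The class, function names, and the set-based lock are illustrative assumptions, not the patent's actual interfaces.

    ```python
    # Sketch only: AddressLock and the callbacks below are assumed names.
    class AddressLock:
        """Tracks physical address ranges currently reserved at the RAID controller."""
        def __init__(self):
            self.reserved = set()

        def accessible(self, ranges):
            return all(r not in self.reserved for r in ranges)

        def reserve(self, ranges):
            self.reserved.update(ranges)

    def dispatch_child_ios(lock, child_ios, hw_path_submit, defer):
        """Submit child I/Os to the hardware path only if their physical
        addresses are free; otherwise delay the whole request."""
        ranges = [io["phys_addr"] for io in child_ios]
        if not lock.accessible(ranges):
            defer(child_ios)              # addresses busy: delay processing
            return
        lock.reserve(ranges)              # update the address lock system
        for io in child_ios:
            hw_path_submit(io)            # single-strip fast path at the controller

    # Example usage with stand-in callbacks:
    lock = AddressLock()
    dispatch_child_ios(lock,
                       [{"phys_addr": (0, 4096)}, {"phys_addr": (1, 8192)}],
                       hw_path_submit=lambda io: print("submit", io),
                       defer=lambda ios: print("deferred", len(ios)))
    ```
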
  • Publication number: 20160026579
    Abstract: A cache controller having a cache supported by a non-volatile memory element manages metadata operations by defining a mathematical relationship between a cache line in a data store exposed to a host system and a location identifier associated with an instance of the cache line in the non-volatile memory. The cache controller maintains most recently used bit maps identifying data in the cache, as well as a data characteristic bit map identifying data that has changed since it was added to the cache. The cache controller also maintains a replacement most recently used bit map to take over from the current map at an appropriate time, and a fresh bitmap tracks the most recently used bit map. The cache controller uses a collision bitmap, an imposter index and a quotient to modify cache lines stored in the non-volatile memory element.
    Type: Application
    Filed: July 22, 2014
    Publication date: January 28, 2016
    Inventors: Sumanesh Samanta, Saugata Das Purkayastha, Mark Ish, Horia Simionescu, Luca Bert
  • Publication number: 20160004644
    Abstract: A storage controller maintaining a cache manages modified data flush operations. A set-associative map or relationship between individual cache lines in the cache and a corresponding portion of the host managed or source data store is generated in such a way that a quotient can be used to identify modified data in the cache in the order of the source data's logical block addresses. The storage controller uses a collision bitmap, a dirty bit map and a flush table when flushing data from the cache. The storage controller selects a quotient and identifies modified cache lines in the cache identified by the quotient. As long as the quotient remains the same, the storage controller flushes or transfers the modified cache lines to the data store. Otherwise, when the quotient is not the same, the data in the cache is skipped. A linked list is used to traverse skipped cache lines.
    Type: Application
    Filed: July 2, 2014
    Publication date: January 7, 2016
    Inventors: Sumanesh Samanta, Mark Ish, Saugata Das Purkayastha
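
    A simplified sketch of quotient-ordered flushing along the lines of the abstract above. The set count, the dirty-map representation, and returning skipped lines in place of the patent's linked list are assumptions for illustration.

    ```python
    NUM_SETS = 8  # assumed number of cache sets

    def quotient(lba):
        # With a set-associative map, lines in one set share lba % NUM_SETS;
        # the quotient lba // NUM_SETS orders them by source logical block address.
        return lba // NUM_SETS

    def flush_by_quotient(cache_lines, dirty, flush_to_store):
        """Flush dirty lines that share the currently selected quotient, in LBA
        order; lines with a different quotient are skipped and returned so a
        later pass can revisit them."""
        skipped, current_q = [], None
        for idx, lba in sorted(cache_lines.items(), key=lambda kv: kv[1]):
            if not dirty.get(idx):
                continue
            q = quotient(lba)
            if current_q is None:
                current_q = q
            if q == current_q:
                flush_to_store(lba)       # transfer modified line to the data store
                dirty[idx] = False
            else:
                skipped.append(idx)       # different quotient: skip for now
        return skipped

    # Example: cache index -> source LBA, with lines 0 and 2 dirty.
    lines = {0: 16, 1: 17, 2: 40}
    dirty = {0: True, 1: False, 2: True}
    print(flush_by_quotient(lines, dirty, flush_to_store=lambda lba: print("flush", lba)))
    ```
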
  • Publication number: 20150347310
    Abstract: A cache controller coupled to a cache store supported by a solid-state memory element uses a metadata update process that reduces write amplification caused by writing both cache data and metadata to the solid-state memory element. The cache controller partitions the solid-state memory element to include a metadata portion, a host data or cache portion and a log portion. Host write requests that include “hot” data are processed and recorded by the cache controller. The cache controller maintains first and second maps. A log thread combines multiple metadata updates in a single log entry block. Pending metadata updates are checked to determine when a commit threshold is reached. Thereafter, the pending metadata updates are written to the solid-state memory element and the maps are updated.
    Type: Application
    Filed: May 30, 2014
    Publication date: December 3, 2015
    Applicant: LSI Corporation
    Inventors: Mark Ish, Sumanesh Samanta, Horia Simionescu, Saugata Das Purkayastha
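
    A hedged sketch of batching metadata updates behind a commit threshold, as the abstract above outlines. The threshold value and the write interface are assumptions.

    ```python
    COMMIT_THRESHOLD = 4  # assumed number of pending updates per log entry block

    class MetadataLogger:
        def __init__(self, write_log_block, update_maps):
            self.pending = []
            self.write_log_block = write_log_block   # writes one block to the SSD log area
            self.update_maps = update_maps           # refreshes the in-memory maps

        def record(self, update):
            """Queue one metadata update; commit the batch once the threshold is
            reached, so several updates cost a single flash write (reducing
            write amplification)."""
            self.pending.append(update)
            if len(self.pending) >= COMMIT_THRESHOLD:
                self.write_log_block(self.pending)   # one log entry block, many updates
                self.update_maps(self.pending)
                self.pending.clear()

    log = MetadataLogger(write_log_block=lambda b: print("write log block:", b),
                         update_maps=lambda b: None)
    for i in range(5):
        log.record({"cache_line": i, "state": "hot"})
    ```
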
  • Patent number: 9195596
    Abstract: An apparatus comprising a controller and a memory. The controller may be configured to generate (i) an index signal and (ii) an information signal in response to (i) one or more address signals and (ii) a data signal. The memory may be configured to store said information signal in one of a plurality of cache lines. Each of the plurality of cache lines has an associated one of a plurality of cache headers. Each of the plurality of cache headers includes (i) a first bit configured to indicate whether the associated cache line has all valid entries and (ii) a second bit configured to indicate whether the associated cache line has at least one dirty entry.
    Type: Grant
    Filed: November 2, 2011
    Date of Patent: November 24, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventor: Mark Ish
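
    A minimal sketch of the two per-line header bits described above. The flag values and the update rule shown are illustrative assumptions.

    ```python
    ALL_VALID = 0x1   # every entry in the cache line holds valid data
    ANY_DIRTY = 0x2   # at least one entry differs from backing storage

    def update_header(header, entries):
        """Recompute the two header bits from a cache line's entries, where each
        entry is a (valid, dirty) pair."""
        if entries and all(valid for valid, _ in entries):
            header |= ALL_VALID
        else:
            header &= ~ALL_VALID
        if any(dirty for _, dirty in entries):
            header |= ANY_DIRTY
        else:
            header &= ~ANY_DIRTY
        return header

    line = [(True, False), (True, True), (True, False)]
    print(bin(update_header(0, line)))   # 0b11: all entries valid, one dirty
    ```
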
  • Patent number: 9182912
    Abstract: The present invention is directed to a method for providing storage acceleration in a data storage system. In the data storage system described herein, multiple independent controllers may be utilized, such that a first storage controller may be connected to a first storage tier (e.g., a fast tier) which includes a solid-state drive, while a second storage controller may be connected to a second storage tier (e.g., a slower tier) which includes a hard disk drive. The accelerator functionality may be split between the host of the system and the first storage controller of the system (e.g., some of the accelerator functionality may be offloaded to the first storage controller) for promoting improved storage acceleration performance within the system.
    Type: Grant
    Filed: August 3, 2011
    Date of Patent: November 10, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Mark Ish
  • Patent number: 9158695
    Abstract: The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space corresponding to the difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: October 13, 2015
    Assignee: Seagate Technology LLC
    Inventors: Horia Simionescu, Mark Ish, Luca Bert, Robert Quinn, Earl T. Cohen, Timothy Canepa
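
    Illustrative arithmetic for the reclaimed-space idea in the abstract above. The function names and the per-write tuples are assumptions; a real controller tracks this at finer granularity.

    ```python
    def reclaimed_space(writes):
        """Sum of (original size - compressed size) over cached writes."""
        return sum(orig - comp for orig, comp in writes)

    def logical_capacity(physical_capacity, writes, expose_fraction=1.0):
        """Logical capacity presented to the host grows by (a portion of) the
        space reclaimed through compression, giving it a variable size."""
        return physical_capacity + expose_fraction * reclaimed_space(writes)

    writes = [(4096, 1024), (4096, 2048)]           # (host size, compressed size)
    print(reclaimed_space(writes))                  # 5120 bytes reclaimed
    print(logical_capacity(1 << 30, writes))        # variable logical capacity
    ```
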
  • Publication number: 20150169458
    Abstract: An apparatus comprising a memory and a controller. The memory may be configured to (i) implement a cache and (ii) store meta-data. The cache comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the cache-lines comprises a plurality of sub-cache lines. Each of the plurality of cache-lines and each of the plurality of sub-cache lines is associated with meta-data indicating one or more of a dirty state and an invalid state. The controller is connected to the memory and configured to (i) recognize sub-cache line boundaries and (ii) process the I/O requests in multiples of a size of said sub-cache lines to minimize cache-fills.
    Type: Application
    Filed: December 18, 2013
    Publication date: June 18, 2015
    Applicant: LSI Corporation
    Inventors: Saugata Das Purkayastha, Luca Bert, Horia Simionescu, Kishore Kaniyar Sampathkumar, Mark Ish
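
    A sketch of rounding an I/O to sub-cache-line boundaries, in the spirit of the abstract above. The line and sub-line sizes and the helper name are assumptions.

    ```python
    CACHE_LINE = 64 * 1024       # assumed cache-line size
    SUB_LINE   = 4 * 1024        # assumed sub-cache-line size

    def align_to_sub_lines(offset, length):
        """Expand an I/O so it starts and ends on sub-cache-line boundaries,
        letting the controller fill only the sub-lines it actually needs."""
        start = (offset // SUB_LINE) * SUB_LINE
        end   = -(-(offset + length) // SUB_LINE) * SUB_LINE   # round up
        return start, end - start

    print(align_to_sub_lines(5000, 3000))    # (4096, 4096): one sub-line touched
    print(align_to_sub_lines(5000, 9000))    # (4096, 12288): three sub-lines
    ```
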
  • Patent number: 9058274
    Abstract: The disclosure is directed to a system and method for managing READ cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a cache data and a cache metadata are stored for data transfers between a respective node (hereinafter “first node”) and regions of a storage cluster. When the first node is disabled, data transfers are tracked between one or more active nodes of the plurality of nodes and cached regions of the storage cluster. When the first node is rebooted, at least a portion of valid cache data is retained based upon the tracked data transfers. Accordingly, local cache memory does not need to be entirely rebuilt each time a respective node is rebooted.
    Type: Grant
    Filed: June 24, 2013
    Date of Patent: June 16, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
  • Patent number: 9021141
    Abstract: A data storage controller exposes information stored in a locally managed volatile memory store to a host system. The locally managed volatile memory store is mapped to a corresponding portion of a peripheral component interconnect express (PCIe) compliant memory space managed by the host system. Backup logic in the data storage controller responds to a power event detected at the interface between the data storage controller and the host system by copying the contents of the volatile memory store to a non-volatile memory store on the data storage controller. Restore logic restores a data storage controller state by copying the contents of the non-volatile memory store to the locally managed volatile memory store upon the application of power such that the data in the volatile memory store is persistent even in the event of a loss of power to the host system and/or the data storage controller.
    Type: Grant
    Filed: December 11, 2013
    Date of Patent: April 28, 2015
    Assignee: LSI Corporation
    Inventors: Mohamad El-Batal, Anant Baderdinni, Mark Ish, Jason M. Stuhlsatz
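
    A hedged sketch of the backup/restore flow described above. The event handler names and the byte-copy model are assumptions, not the controller's actual firmware behavior.

    ```python
    class ControllerMemory:
        def __init__(self, size):
            self.volatile = bytearray(size)      # host-mapped (PCIe) volatile store
            self.nonvolatile = bytearray(size)   # on-controller non-volatile store

        def on_power_event(self):
            """Backup logic: copy volatile contents to non-volatile memory when a
            power event is detected at the host interface."""
            self.nonvolatile[:] = self.volatile

        def on_power_restore(self):
            """Restore logic: repopulate the volatile store so its contents appear
            persistent to the host across the power loss."""
            self.volatile[:] = self.nonvolatile

    mem = ControllerMemory(8)
    mem.volatile[:4] = b"data"
    mem.on_power_event()
    mem.volatile[:] = bytes(8)        # power loss wipes volatile memory
    mem.on_power_restore()
    print(mem.volatile[:4])           # b'data'
    ```
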
  • Publication number: 20150058533
    Abstract: A data storage controller exposes information stored in a locally managed volatile memory store to a host system. The locally managed volatile memory store is mapped to a corresponding portion of a peripheral component interconnect express (PCIe) compliant memory space managed by the host system. Backup logic in the data storage controller responds to a power event detected at the interface between the data storage controller and the host system by copying the contents of the volatile memory store to a non-volatile memory store on the data storage controller. Restore logic restores a data storage controller state by copying the contents of the non-volatile memory store to the locally managed volatile memory store upon the application of power such that the data in the volatile memory store is persistent even in the event of a loss of power to the host system and/or the data storage controller.
    Type: Application
    Filed: December 11, 2013
    Publication date: February 26, 2015
    Applicant: LSI Corporation
    Inventors: Mohamad El-Batal, Anant Baderdinni, Mark Ish, Jason M. Stuhlsatz
  • Patent number: 8966170
    Abstract: An apparatus for elastic caching of redundant cache data. The apparatus may have a plurality of buffers and a circuit. The circuit may be configured to (i) receive a write request from a host to store write data in a storage volume, (ii) allocate a number of extents in the buffers based upon a redundant organization associated with the write request and (iii) store the write data in the number of extents, where (a) each of the number of extents is located in a different one of the buffers and (b) the number of extents are dynamically linked together in response to the write request.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: February 24, 2015
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Mark Ish, Anant Baderdinni, Gary J. Smerdon
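
    A simplified sketch of placing each extent of one write in a different buffer, as the abstract above describes. The buffer count, the least-loaded selection, and the linking representation are assumptions.

    ```python
    def allocate_extents(buffers, num_extents):
        """Pick `num_extents` distinct buffers (one extent per buffer) and link
        the allocations together for the originating write request."""
        if num_extents > len(buffers):
            raise ValueError("redundant organization needs more buffers than exist")
        chosen = sorted(range(len(buffers)), key=lambda i: len(buffers[i]))[:num_extents]
        linked = []
        for b in chosen:
            extent = {"buffer": b, "slot": len(buffers[b])}
            buffers[b].append(extent)    # write data lands in this buffer
            linked.append(extent)        # dynamic link between the write's extents
        return linked

    buffers = [[], [], [], []]
    print(allocate_extents(buffers, num_extents=3))   # e.g. a 3-extent redundant write
    ```
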
  • Publication number: 20150026403
    Abstract: An apparatus having a cache and a controller is disclosed. The controller is configured to (i) gather a plurality of statistics corresponding to a plurality of requests made from one or more hosts to access a memory during an interval, (ii) store data of the requests selectively in the cache in response to a plurality of headers and (iii) adjust one or more parameters in the headers in response to the statistics. The requests and the parameters are recorded in the headers.
    Type: Application
    Filed: July 23, 2013
    Publication date: January 22, 2015
    Applicant: LSI Corporation
    Inventors: Mark Ish, Sumanesh Samanta
  • Patent number: 8918576
    Abstract: A method for selectively placing cache data, comprising the steps of (A) determining a line temperature for a plurality of devices, (B) determining a device temperature for the plurality of devices, (C) calculating an entry temperature for the plurality of devices in response to the line temperature and the device temperature, and (D) distributing a plurality of write operations across the plurality of devices such that thermal energy is distributed evenly over the plurality of devices.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: December 23, 2014
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Luca Bert, Mark Ish, Rajiv Ganth Rajaram
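
    An illustrative sketch of the entry-temperature idea above. The equal weighting and the pick-the-coolest-device policy are assumptions, not the patented method.

    ```python
    def entry_temperature(line_temp, device_temp, w_line=0.5, w_device=0.5):
        """Combine a line temperature (access heat) with a device temperature
        into one placement score."""
        return w_line * line_temp + w_device * device_temp

    def place_write(line_temp, device_temps):
        """Send the write to the device with the lowest entry temperature so
        thermal load stays roughly even across devices."""
        scores = [entry_temperature(line_temp, t) for t in device_temps]
        return scores.index(min(scores))

    print(place_write(line_temp=0.8, device_temps=[0.9, 0.4, 0.7]))   # device 1
    ```
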
  • Publication number: 20140351523
    Abstract: The disclosure is directed to a system and method for managing cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a first cache data and a first cache metadata are stored for data transfers between a respective node and regions of a storage cluster receiving at least a first selected number of data transfer requests. When the node is rebooted, a second (new) cache data is stored to replace the first (old) cache data. The second cache data is compiled utilizing the first cache metadata to identify previously cached regions of the storage cluster receiving at least a second selected number of data transfer requests after the node is rebooted. The second selected number of data transfer requests is less than the first selected number of data transfer requests to enable a rapid build of the second cache data.
    Type: Application
    Filed: June 25, 2013
    Publication date: November 27, 2014
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
  • Publication number: 20140344523
    Abstract: The disclosure is directed to a system and method for managing READ cache memory of at least one node of a multiple-node storage cluster. According to various embodiments, a cache data and a cache metadata are stored for data transfers between a respective node (hereinafter “first node”) and regions of a storage cluster. When the first node is disabled, data transfers are tracked between one or more active nodes of the plurality of nodes and cached regions of the storage cluster. When the first node is rebooted, at least a portion of valid cache data is retained based upon the tracked data transfers. Accordingly, local cache memory does not need to be entirely rebuilt each time a respective node is rebooted.
    Type: Application
    Filed: June 24, 2013
    Publication date: November 20, 2014
    Inventors: Sumanesh Samanta, Sujan Biswas, Horia Cristian Simionescu, Luca Bert, Mark Ish
  • Patent number: 8892811
    Abstract: An apparatus having a memory circuit and a manager is disclosed. The memory circuit generally has (i) one or more Flash memories and (ii) a memory space that spans a plurality of memory addresses. The manager may be configured to (i) receive data items in a random order from one or more applications, (ii) write the data items in an active one of a plurality of regions in a memory circuit and (iii) mark the memory addresses in the active region that store the data items as used. Each data item generally has a respective host address. The applications may be executed in one or more computers. The memory addresses in the active region may be accessed in a sequential order while writing the data items to minimize a write amplification. The random order is generally preserved between the data items while writing in the active region.
    Type: Grant
    Filed: March 1, 2012
    Date of Patent: November 18, 2014
    Assignee: Avago Technologies General IP (Singapore) Pte. Ltd.
    Inventors: Mark Ish, Siddhartha K. Panda
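
    A sketch of writing randomly ordered items sequentially into an active flash region, as the abstract above describes. The region size and class structure are assumptions kept deliberately small for the demo.

    ```python
    REGION_SIZE = 4  # assumed number of addresses per region, tiny for the demo

    class SequentialWriter:
        def __init__(self, num_regions):
            self.regions = [[None] * REGION_SIZE for _ in range(num_regions)]
            self.used = [[False] * REGION_SIZE for _ in range(num_regions)]
            self.active, self.next_addr = 0, 0
            self.map = {}                         # host address -> (region, offset)

        def write(self, host_addr, data):
            """Append the item at the next sequential address in the active region,
            preserving arrival order and marking the address used."""
            if self.next_addr == REGION_SIZE:     # active region full: advance
                self.active, self.next_addr = self.active + 1, 0
            r, o = self.active, self.next_addr
            self.regions[r][o], self.used[r][o] = data, True
            self.map[host_addr] = (r, o)
            self.next_addr += 1

    w = SequentialWriter(num_regions=2)
    for host_addr in (907, 12, 450):              # random host order
        w.write(host_addr, f"item@{host_addr}")
    print(w.map)                                  # sequential placements in region 0
    ```
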
  • Publication number: 20140258628
    Abstract: A cache controller having a cache store and associated with a storage system maintains information stored in the cache store across a reboot of the cache controller. The cache controller communicates with a host computer system and a data storage system. The cache controller partitions the cache memory to include a metadata portion and log portion. A separate portion is used for cached data elements. The cache controller maintains a copy of the metadata in a separate memory accessible to the host computer system. Data is written to the cache store when the metadata log reaches its capacity. Upon a reboot, metadata is copied back to the host computer system and the metadata log is traversed to copy additional changes in the cache that have not been saved to the data storage system.
    Type: Application
    Filed: August 15, 2013
    Publication date: September 11, 2014
    Applicant: LSI Corporation
    Inventors: Vinay Bangalore Shivashankaraiah, Subramanian Parameswaran, Mark Ish
  • Publication number: 20140244901
    Abstract: An apparatus having one or more memories and a controller is disclosed. The memories are divided into a plurality of regions. Each region is divided into a plurality of blocks. The blocks correspond to a plurality of memory addresses respectively. The controller is configured to (i) receive data from a host, (ii) generate metadata that maps a plurality of host addresses of the data to the memory addresses of the memories and (iii) write sequentially into a given one of the regions both (a) a portion of the data and (b) a corresponding portion of the metadata.
    Type: Application
    Filed: March 13, 2013
    Publication date: August 28, 2014
    Applicant: LSI CORPORATION
    Inventors: Siddharth Kumar Panda, Thanu Anna Skariah, Kunal Sablok, Mark Ish
  • Patent number: 8812788
    Abstract: A method of virtual cache window headers for long term access history is disclosed. The method may include steps (A) to (C). Step (A) may receive a request at a circuit from a host to access an address in a memory. The circuit generally controls the memory and a cache. Step (B) may update the access history in a first of the headers in response to the request. The headers may divide an address space of the memory into a plurality of windows. Each window generally includes a plurality of subwindows. Each subwindow may be sized to match one of a plurality of cache lines in the cache. A first of the subwindows in a first of the windows may correspond to the address. Step (C) may copy data from the memory to the cache in response to the access history.
    Type: Grant
    Filed: August 23, 2011
    Date of Patent: August 19, 2014
    Assignee: LSI Corporation
    Inventors: David H. Solina, II, Mark Ish
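
    A minimal sketch of window/subwindow access tracking driving data promotion, following the abstract above. The window geometry and the promotion threshold are assumptions.

    ```python
    SUBWINDOW = 64 * 1024                 # assumed: sized to match one cache line
    SUBS_PER_WINDOW = 16                  # assumed subwindows per window
    PROMOTE_AFTER = 3                     # assumed hits before data is cached

    headers = {}                          # window index -> per-subwindow hit counts

    def on_request(address, copy_to_cache):
        """Update the access history for the subwindow covering `address`; once a
        window shows enough hits, copy its data from memory into the cache."""
        sub = address // SUBWINDOW
        window, slot = divmod(sub, SUBS_PER_WINDOW)
        counts = headers.setdefault(window, [0] * SUBS_PER_WINDOW)
        counts[slot] += 1
        if sum(counts) >= PROMOTE_AFTER:
            copy_to_cache(window)

    for _ in range(3):
        on_request(5 * SUBWINDOW + 100,
                   copy_to_cache=lambda w: print("promote window", w))
    ```
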