Patents by Inventor Jeffrey S. Kimmel

Jeffrey S. Kimmel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9639278
    Abstract: The embodiments described herein are directed to the use of hashing in a file system metadata arrangement that reduces an amount of metadata stored in a memory of a node in a cluster and that reduces the amount of metadata needed to process an input/output (I/O) request at the node. Illustratively, the embodiments are directed to cuckoo hashing and, in particular, to a manner in which cuckoo hashing may be modified and applied to construct the file system metadata arrangement. In an embodiment, the file system metadata arrangement may illustratively include a hash collision technique that employs a hash collision computation to determine a unique candidate extent key (having a candidate hash table index) in the event of a collision, i.e., a hash table index collides with a slot of a hash table without matching a key found in the slot.
    Type: Grant
    Filed: October 29, 2015
    Date of Patent: May 2, 2017
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, T. Byron Rakitzis
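
The collision handling in this patent lends itself to a small illustration. The sketch below is not the patented cuckoo-hash arrangement itself; the table size, probe bound, and perturbation constant are assumptions. It only shows the core idea: when an extent key's table index lands on a slot holding a different key, a deterministic hash collision computation derives a unique candidate key with a new candidate index.

```python
import hashlib

TABLE_SLOTS = 1024          # assumed table size, not from the patent
MAX_PROBES = 8              # assumed bound on collision-resolution attempts


def extent_hash(data: bytes) -> int:
    """Hash an extent's contents into a 64-bit value used as its extent key."""
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")


class ExtentStore:
    """Toy hash table keyed by extent key; the slot index is derived from the key."""

    def __init__(self):
        self.slots = {}  # table index -> (extent_key, data)

    def insert(self, data: bytes) -> int:
        key = extent_hash(data)
        for probe in range(MAX_PROBES):
            index = key % TABLE_SLOTS
            slot = self.slots.get(index)
            if slot is None or slot[0] == key:
                self.slots[index] = (key, data)   # empty slot or same extent
                return key
            # Collision: the index maps to a slot holding a different key.
            # Apply a hash collision computation to derive a unique candidate
            # extent key (and therefore a candidate table index).
            key = (key + (probe + 1) * 0x9E3779B97F4A7C15) & (2**64 - 1)
        raise RuntimeError("could not place extent after bounded probing")


store = ExtentStore()
k1 = store.insert(b"block A")
k2 = store.insert(b"block B")
```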
  • Patent number: 9632731
    Abstract: Various systems and methods are described for configuring a data storage system. In one embodiment, a plurality of actual capacities of a plurality of storage devices of the data storage system are identified and divided into a plurality of capacity slices. The plurality of capacity slices are combined into a plurality of chunks of capacity slices, each having a combination of characteristics of the underlying physical storage devices. The chunks of capacity slices are then mapped to a plurality of logical storage devices. A group of the plurality of logical storage devices is then organized into a redundant array of logical storage devices.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: April 25, 2017
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Tim Emami
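
As a rough sketch of the capacity-slicing idea in this patent, the snippet below divides device capacities into slices, combines slices into chunks that carry the characteristics of their underlying devices, maps chunks to logical storage devices, and groups those logical devices. The slice size, chunk width, and data shapes are assumptions for illustration.

```python
from dataclasses import dataclass

SLICE_GB = 100  # assumed slice granularity; the patent does not fix a size


@dataclass
class Device:
    name: str
    capacity_gb: int
    media: str          # e.g. "ssd" or "hdd"


def carve_slices(devices):
    """Divide each device's actual capacity into fixed-size capacity slices."""
    return [(d, i) for d in devices for i in range(d.capacity_gb // SLICE_GB)]


def build_chunks(slices, slices_per_chunk=4):
    """Combine capacity slices into chunks; each chunk records the
    characteristics (here just the media type) of its underlying devices."""
    chunks = []
    for i in range(0, len(slices) - slices_per_chunk + 1, slices_per_chunk):
        group = slices[i:i + slices_per_chunk]
        chunks.append({"slices": group,
                       "media": {dev.media for dev, _ in group}})
    return chunks


devices = [Device("d0", 400, "ssd"), Device("d1", 400, "ssd")]
chunks = build_chunks(carve_slices(devices))
logical_devices = {f"ld{i}": c for i, c in enumerate(chunks)}   # chunk -> logical device
raid_group = list(logical_devices)[:2]                          # group of logical devices
```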
  • Patent number: 9571575
    Abstract: Data consistency and availability can be provided at the granularity of logical storage objects in storage solutions that use storage virtualization in clustered storage environments. To ensure consistency of data across different storage elements, synchronization is performed across the different storage elements. Changes to data are synchronized across storage elements in different clusters by propagating the changes from a primary logical storage object to a secondary logical storage object. To satisfy the strictest recovery point objectives (RPOs) while maintaining performance, change requests are intercepted prior to being sent to a filesystem that hosts the primary logical storage object and propagated to a different managing storage element associated with the secondary logical storage object.
    Type: Grant
    Filed: August 29, 2014
    Date of Patent: February 14, 2017
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Susan M. Coatney, Yuedong Mu, Santosh Rao
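
A minimal sketch of the interception idea in this patent: a change request is caught before it reaches the filesystem hosting the primary object and is propagated to the storage element managing the secondary object before being acknowledged. The class names and the exact ordering of propagation versus local apply are assumptions, not the patented implementation.

```python
class SecondaryPeer:
    """Stand-in for the managing storage element of the secondary object."""
    def __init__(self):
        self.log = []

    def apply(self, change):
        self.log.append(change)


class PrimaryFilesystem:
    """Stand-in for the filesystem hosting the primary logical storage object."""
    def __init__(self):
        self.data = {}

    def write(self, offset, payload):
        self.data[offset] = payload


class SyncInterceptor:
    """Intercepts a change request before it reaches the primary filesystem
    and propagates it to the secondary before acknowledging."""

    def __init__(self, primary: PrimaryFilesystem, secondary: SecondaryPeer):
        self.primary = primary
        self.secondary = secondary

    def write(self, offset, payload):
        change = {"offset": offset, "payload": payload}
        self.secondary.apply(change)         # propagate (ordering assumed)
        self.primary.write(offset, payload)  # then apply locally
        return "ack"


fs = PrimaryFilesystem()
peer = SecondaryPeer()
SyncInterceptor(fs, peer).write(0, b"hello")
```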
  • Publication number: 20170031769
    Abstract: A technique efficiently creates a snapshot for a logical unit (LUN) served by a storage input/output (I/O) stack executing on a node of a cluster that organizes data as extents referenced by keys. In addition, the technique efficiently creates one or more snapshots for a group of LUNs organized as a consistency group (CG) and served by storage I/O stacks executing on a plurality of nodes of the cluster. To that end, the technique involves a plurality of indivisible operations (i.e., transactions) of a snapshot creation workflow administered by a Storage Area Network (SAN) administration layer (SAL) of the storage I/O stack in response to a snapshot create request issued by a host.
    Type: Application
    Filed: September 29, 2015
    Publication date: February 2, 2017
    Inventors: Ling Zheng, Long Yang, Kayuri H. Patel, Suhas Prakash, Jeffrey S. Kimmel, Anshul Pundir, Arun Rokade
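
To make the all-or-nothing flavor of this consistency-group snapshot workflow concrete, here is a toy sketch in which each per-LUN snapshot is one step of a group operation that is rolled back if any step fails. The data structures and rollback strategy are assumptions; the actual workflow is administered by the SAL as a series of indivisible transactions.

```python
class SnapshotError(Exception):
    pass


def create_cg_snapshot(luns, snap_name):
    """Create a snapshot of every LUN in a consistency group; if any step
    fails, undo the completed steps so the group snapshot is all-or-nothing."""
    done = []
    try:
        for lun in luns:
            lun.setdefault("snapshots", {})[snap_name] = dict(lun["blocks"])
            done.append(lun)
    except Exception as exc:
        for lun in done:                      # roll back partial work
            lun["snapshots"].pop(snap_name, None)
        raise SnapshotError("group snapshot aborted") from exc
    return [lun["snapshots"][snap_name] for lun in luns]


cg = [{"blocks": {0: b"a"}}, {"blocks": {0: b"b"}}]
create_cg_snapshot(cg, "snap1")
```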
  • Patent number: 9529546
    Abstract: In one embodiment, a layered file system includes a volume layer and an extent store layer configured to provide sequential log-structured layout of data and metadata on solid state drives (SSDs) of one or more storage arrays. The data is organized as variable-length extents of one or more logical units (LUNs). The metadata includes volume metadata mappings from offset ranges of a LUN to extent keys and extent metadata mappings of the extent keys to storage locations of the extents on the SSDs. The extent store layer maintaining the extent metadata mappings determines whether an extent is stored on a storage array, and, in response to a determination that the extent is stored on the storage array, returns an extent key for the stored extent to the volume layer to enable global inline de-duplication that obviates writing a duplicate copy of the extent on the storage array.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: December 27, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Jeffrey S. Kimmel, Blake H. Lewis
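
The two mappings described in this patent, volume-layer offsets to extent keys and extent keys to storage locations, are what make global inline de-duplication possible: if a key already exists, the extent is not written again. The sketch below uses SHA-256 keys and in-memory dictionaries purely as stand-ins for the real layers.

```python
import hashlib


class ExtentStoreLayer:
    """Maps extent keys to storage locations; detects duplicates by key."""

    def __init__(self):
        self.key_to_location = {}
        self.storage = []          # stand-in for the SSD array

    def put(self, extent: bytes):
        key = hashlib.sha256(extent).hexdigest()
        if key in self.key_to_location:        # duplicate: no new write
            return key, False
        self.storage.append(extent)            # write the extent once
        self.key_to_location[key] = len(self.storage) - 1
        return key, True


class VolumeLayer:
    """Maps LUN offset ranges to extent keys returned by the extent store."""

    def __init__(self, extent_store: ExtentStoreLayer):
        self.extent_store = extent_store
        self.offset_to_key = {}

    def write(self, lun, offset, data):
        key, newly_written = self.extent_store.put(data)
        self.offset_to_key[(lun, offset)] = key
        return newly_written


es = ExtentStoreLayer()
vol = VolumeLayer(es)
vol.write("lun0", 0, b"same bytes")
wrote_again = vol.write("lun0", 4096, b"same bytes")   # False: deduplicated
```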
  • Publication number: 20160357743
    Abstract: A technique reduces an amount of metadata stored in a memory of a node in a cluster. An extent store layer of a storage input/output (I/O) stack executing on the node stores key-value pairs in a plurality of data structures, e.g., cuckoo hash tables, resident in the memory. The cuckoo hash table embodies metadata that describes an extent and, as such, may be organized to associate a location on disk with a value that identifies the location on disk. The value may be embodied as a locator that includes a reference count used to support deduplication functionality of the extent store layer with respect to the extent. The reference count is divided into two portions: a delta count portion stored in memory for each slot of the hash table and an overflow count portion stored on disk in a header of each extent. One bit of the delta count portion is reserved as an overflow bit that indicates whether the in-memory reference count has overflowed.
    Type: Application
    Filed: June 2, 2015
    Publication date: December 8, 2016
    Inventors: Manish Swaminathan, Dhaval Patel, Edward D. McClanahan, Jeffrey S. Kimmel
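
A compact sketch of the split reference count described in this publication: a small in-memory delta count per hash-table slot, an on-disk overflow count in the extent header, and one reserved bit marking that the in-memory count has spilled over. The 8-bit width and the spill policy are assumptions chosen to keep the example short.

```python
DELTA_BITS = 8                      # assumed width of the in-memory delta count
OVERFLOW_BIT = 1 << (DELTA_BITS - 1)
DELTA_MASK = OVERFLOW_BIT - 1       # low bits hold the in-memory delta


class RefCount:
    """Reference count split across an in-memory delta count and an on-disk
    overflow count, with one in-memory bit marking that overflow occurred."""

    def __init__(self):
        self.delta = 0              # stored in the hash-table slot (memory)
        self.overflow = 0           # stored in the extent header (on disk)

    def increment(self):
        if (self.delta & DELTA_MASK) == DELTA_MASK:
            # In-memory count is full: spill it to the on-disk overflow count
            # and set the overflow bit so readers know to consult the header.
            self.overflow += DELTA_MASK
            self.delta = OVERFLOW_BIT
        self.delta += 1

    def total(self):
        return (self.delta & DELTA_MASK) + self.overflow


rc = RefCount()
for _ in range(300):
    rc.increment()
assert rc.total() == 300
```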
  • Publication number: 20160357776
    Abstract: A flash-optimized, log-structured layer of a file system of a storage input/output (I/O) stack executes on one or more nodes of a cluster. The log-structured layer of the file system provides sequential storage of data and metadata (i.e., a log-structured layout) on solid state drives (SSDs) of storage arrays in the cluster to reduce write amplification, while leveraging variable compression and variable length data features of the storage I/O stack. The data may be organized as an arbitrary number of variable-length extents of one or more host-visible logical units (LUNs) served by the nodes. The metadata may include mappings from host-visible logical block address ranges (i.e., offset ranges) of a LUN to extent keys, as well as mappings of the extent keys to SSD storage locations of the extents. The storage location of an extent on SSD is effectively “virtualized” by its mapped extent key (i.e.
    Type: Application
    Filed: August 17, 2016
    Publication date: December 8, 2016
    Inventors: Rajesh Sundaram, Stephen Daniel, Jeffrey S. Kimmel, Blake H. Lewis
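
The abstract above ends on the claim that an extent's SSD location is "virtualized" by its mapped extent key. One consequence of that indirection (an inference for illustration, not a quote from the truncated abstract) is that relocating an extent only updates the extent-store mapping, not the volume-layer metadata. A toy illustration with assumed dictionary names:

```python
# The volume layer maps offsets to extent keys; the extent store maps keys
# to SSD locations (plain dictionaries stand in for both layers).
offset_to_key = {("lun0", 0): "key-abc"}
key_to_location = {"key-abc": ("ssd1", 4096)}


def relocate_extent(key, new_location):
    """Moving an extent (e.g. during log cleaning) only touches the
    extent-store mapping; the volume-layer offset-to-key mapping is unchanged
    because the SSD location is hidden behind the extent key."""
    key_to_location[key] = new_location


relocate_extent("key-abc", ("ssd2", 8192))
assert offset_to_key[("lun0", 0)] == "key-abc"   # volume metadata untouched
```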
  • Patent number: 9489293
    Abstract: Techniques for opportunistic data storage are described. In one embodiment, for example, an apparatus may comprise a data storage device and a storage management module, and the storage management module may be operative to receive a request to store a set of data in the data storage device, the request indicating that the set of data is to be stored with opportunistic retention, the storage management module to select, based on allocation information, storage locations of the data storage device for opportunistic storage of the set of data and write the set of data to the selected storage locations. Other embodiments are described and claimed.
    Type: Grant
    Filed: August 19, 2013
    Date of Patent: November 8, 2016
    Assignee: NetApp, Inc.
    Inventor: Jeffrey S. Kimmel
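
This patent describes selecting storage locations from allocation information for data flagged for opportunistic retention. The toy function below only shows that idea: opportunistically retained blocks are recorded as reclaimable so the device may reuse them later. All names and the reclamation policy are assumptions.

```python
def opportunistic_write(free_locations, reclaimable, data, retention="opportunistic"):
    """Place data in a free location chosen from allocation information;
    blocks written with opportunistic retention are remembered as
    reclaimable, so the device may later reuse them for durable writes."""
    if not free_locations:
        raise RuntimeError("no free storage locations")
    location = free_locations.pop()          # selection based on allocation info
    if retention == "opportunistic":
        reclaimable.add(location)            # may be overwritten under pressure
    return location, data


free = [("ssd0", block) for block in range(8)]
reclaimable = set()
loc, _ = opportunistic_write(free, reclaimable, b"cache-like data")
```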
  • Patent number: 9483349
    Abstract: In one embodiment, a node of a cluster having a plurality of nodes executes a storage input/output (I/O) stack having a redundant array of independent disks (RAID) layer. The RAID layer organizes solid state drives (SSDs) within one or more storage arrays as a plurality of RAID groups associated with one or more extent stores. The RAID groups are formed from slices of storage spaces of the SSDs instead of entire storage spaces of the SSDs. This allows multiple RAID groups to co-exist on the same set of SSDs.
    Type: Grant
    Filed: January 17, 2014
    Date of Patent: November 1, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Bharat Baddepudi, Jeffrey S. Kimmel
  • Patent number: 9454434
    Abstract: In one embodiment, one or more storage arrays of solid state drives (SSDs) that include a plurality of segments are organized as one or more redundant array of independent disks (RAID) groups, where the RAID groups provide data redundancy for the segments. A node executing a layered file system of a storage input/output (I/O) stack performs segment cleaning to clean the segments. It further initiates rebuild of a RAID configuration of the SSDs on a segment-by-segment basis in response to the segment cleaning. In such a configuration, each segment includes one or more RAID stripes that provide a level of data redundancy as well as RAID organization for the segment.
    Type: Grant
    Filed: January 17, 2014
    Date of Patent: September 27, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Bharat Baddepudi, Jeffrey S. Kimmel, T. Byron Rakitzis
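
A schematic of the segment-driven rebuild in this patent: cleaning a segment is also the trigger for rebuilding that segment's RAID stripes, so reconstruction proceeds segment by segment rather than drive by drive. The stripe representation and spare-replacement step are assumptions for illustration.

```python
def clean_and_rebuild(segments, failed_ssd, spare_ssd):
    """Walk the segments; cleaning a segment drives rebuild of that segment's
    RAID stripes, so reconstruction proceeds segment-by-segment."""
    for segment in segments:
        # Segment cleaning: keep only stripes that still hold valid data.
        segment["stripes"] = [s for s in segment["stripes"] if s["valid"]]
        # Rebuild: re-target any stripe member that lived on the failed SSD.
        for stripe in segment["stripes"]:
            stripe["members"] = [spare_ssd if m == failed_ssd else m
                                 for m in stripe["members"]]
        yield segment


segments = [{"stripes": [{"valid": True, "members": ["ssd0", "ssd1", "ssd2"]},
                         {"valid": False, "members": ["ssd0", "ssd1", "ssd2"]}]}]
rebuilt = list(clean_and_rebuild(segments, failed_ssd="ssd1", spare_ssd="ssd3"))
```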
  • Publication number: 20160274973
    Abstract: Embodiments described herein are directed to a file system driven RAID rebuild technique. A layered file system may organize storage of data as segments spanning one or more sets of storage devices, such as solid state drives (SSDs), of a storage array, wherein each set of SSDs may form a RAID group configured to provide data redundancy for a segment. The file system may then drive (i.e., initiate) rebuild of a RAID configuration of the SSDs on a segment-by-segment basis in response to cleaning of the segment (i.e., segment cleaning). Each segment may include one or more RAID stripes that provide a level of data redundancy (e.g., single parity RAID 5 or double parity RAID 6) as well as RAID organization (i.e., distribution of data and parity) for the segment. Notably, the level of data redundancy and RAID organization may differ among the segments of the array.
    Type: Application
    Filed: May 27, 2016
    Publication date: September 22, 2016
    Inventors: Rajesh Sundaram, Bharat Baddepudi, Jeffrey S. Kimmel, T. Byron Rakitzis
  • Patent number: 9448924
    Abstract: In one embodiment, storage arrays of solid state drives (SSDs) coupled to a node are organized as redundant array of independent disks (RAID) groups. Each storage array includes one or more segments. Each segment has contiguous free space on the SSDs. Data and metadata are organized on the SSDs with a sequential log-structured layout, with the data organized as variable-length extents of one or more logical units (LUNs). Segment cleaning is performed to clean a selected segment by moving the extents of the selected segment that contain valid data to one or more different segments so as to free the selected segment. Additional extents are written as a sequence of contiguous range write operations to the entire free segment with temporal locality to reduce data relocation within the SSDs as a result of the write operations.
    Type: Grant
    Filed: January 8, 2014
    Date of Patent: September 20, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Stephen Daniel, Jeffrey S. Kimmel, Blake H. Lewis
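
The segment cleaning described here can be pictured as relocating only the valid extents of a selected segment into a free segment as one contiguous run of writes, leaving the selected segment free. The sketch below is an assumption-laden simplification (no parity and no temporal-locality bookkeeping).

```python
def clean_segment(selected, free_segment):
    """Relocate the valid extents of the selected segment into a free segment
    as one run of contiguous writes, then report the selected segment free."""
    valid = [e for e in selected["extents"] if e["valid"]]
    offset = 0
    for extent in valid:                      # contiguous, in write order
        free_segment["extents"].append({**extent, "offset": offset})
        offset += len(extent["data"])
    selected["extents"] = []                  # selected segment is now free
    return selected, free_segment


old = {"extents": [{"valid": True, "data": b"keep", "offset": 0},
                   {"valid": False, "data": b"stale", "offset": 4}]}
new = {"extents": []}
clean_segment(old, new)
```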
  • Publication number: 20160246522
    Abstract: An exactly once semantics (EOS) system of a storage input/output (I/O) stack implements a technique ensuring that non-idempotent operations occur exactly once in a storage system embodied as a node of a cluster. Illustratively, a first layer of the storage I/O stack may act as a client issuing a non-idempotent operation to a second layer of the stack, which may act as a server. According to the technique, the EOS system may wrap (i.e., encapsulate) the non-idempotent operation within a transaction embodied as an EOS transaction data structure having a transaction identifier that uniquely identifies the transaction. The server may complete the transaction and reply with a result to the client, which may acknowledge receipt of the reply. In response to a crash and subsequent recovery of the node, the EOS system may determine whether the transaction had completed prior to the crash. If so, the EOS system ensures that the transaction is not re-played (re-executed).
    Type: Application
    Filed: February 25, 2015
    Publication date: August 25, 2016
    Inventors: Srinath Krishnamachari, Kayuri H. Patel, Jeffrey S. Kimmel, Edward D. McClanahan
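
The exactly-once behavior in this publication can be illustrated with a tiny transaction wrapper: the non-idempotent operation is executed under a unique transaction identifier, its result is recorded, and a replay of the same transaction after recovery returns the recorded result instead of re-executing. The in-memory completion table below is an assumed stand-in for durable transaction state.

```python
import itertools

_next_txn = itertools.count(1)
_completed = {}        # transaction id -> saved result (durable in a real system)


def run_transaction(txn_id, operation, *args):
    """Execute a non-idempotent operation exactly once under a transaction id:
    replaying a completed transaction returns the saved result instead of
    re-executing the operation."""
    if txn_id in _completed:
        return _completed[txn_id]
    result = operation(*args)          # the non-idempotent work happens once
    _completed[txn_id] = result
    return result


def eos_execute(operation, *args):
    """Wrap an operation in a new transaction with a unique identifier."""
    txn_id = next(_next_txn)
    return txn_id, run_transaction(txn_id, operation, *args)


counter = {"n": 0}

def bump(c):
    c["n"] += 1
    return c["n"]

txn_id, first = eos_execute(bump, counter)
replayed = run_transaction(txn_id, bump, counter)   # recovery replays the txn
assert first == replayed == counter["n"] == 1
```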
  • Publication number: 20160248583
    Abstract: A technique perturbs an extent key to compute a candidate extent key in the event of a collision with metadata (i.e., two extents having different data that yield identical hash values) stored in a memory of a node in a cluster. The perturbing technique may be used to compute a candidate extent key that is not previously stored in an extent store instance. The candidate extent key may be computed from a hash value of an extent using a perturbing algorithm, i.e., a hash collision computation, which illustratively adds a perturb value to the hash value. The perturb value is illustratively sufficient to ensure that the candidate extent key resolves to a same hash bucket and node (extent store instance) as the original extent key. In essence, the technique ensures that the original extent key is perturbed in a deterministic manner to generate the candidate extent key, so that the original extent key and candidate extent key "decode" to the same hash bucket and extent store instance.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 25, 2016
    Inventors: Edward D. McClanahan, Jeffrey S. Kimmel
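
A minimal sketch of the perturbation this publication describes: a perturb value is added to the hash value to produce a candidate extent key, chosen so that the candidate still resolves to the same hash bucket (and thus the same extent store instance). Treating the low-order bits as the bucket selector, and the specific perturb values, are assumptions for illustration.

```python
BUCKET_BITS = 16                      # assumed: low-order bits select the bucket
BUCKET_MASK = (1 << BUCKET_BITS) - 1


def perturb_key(hash_value: int, attempt: int) -> int:
    """Deterministically derive a candidate extent key from a hash value by
    adding a perturb value chosen so the candidate still resolves to the same
    hash bucket (and hence the same extent store instance)."""
    perturb = (attempt + 1) << BUCKET_BITS        # only touches high-order bits
    candidate = (hash_value + perturb) & (2**64 - 1)
    assert candidate & BUCKET_MASK == hash_value & BUCKET_MASK
    return candidate


original = 0xDEADBEEF12345678
candidates = [perturb_key(original, i) for i in range(3)]
```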
  • Publication number: 20160246742
    Abstract: A hybrid message-based scheduling technique efficiently load balances a storage I/O stack partitioned into one or more non-blocking (i.e., free-running) messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services) and one or more operating system kernel blocking threads that execute blocking services. The technique combines the blocking and non-blocking services within a single coherent extended programming environment. The messaging kernel (MK) operates on processors apart from the operating system kernel; a predetermined number of logical processors (i.e., hyper-threads) is allocated for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, while the remaining logical processors are allocated for use by the blocking services. In addition, the technique provides a variation on a synchronization primitive that allows signaling between the two types of services (i.e.
    Type: Application
    Filed: February 23, 2016
    Publication date: August 25, 2016
    Inventor: Jeffrey S. Kimmel
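
One way to picture the processor split in this publication is a simple partition of the machine's logical processors between the messaging-kernel scheduler (non-blocking services) and the operating system scheduler (blocking services). The function below is an assumed illustration, not the patented scheduler.

```python
import os


def partition_logical_processors(mk_count):
    """Split the machine's logical processors between the messaging-kernel
    (MK) scheduler, which runs free-running non-blocking services, and the
    operating system scheduler, which runs blocking services."""
    total = os.cpu_count() or 1
    mk_count = min(mk_count, total - 1) if total > 1 else 0
    mk_cpus = list(range(mk_count))              # dedicated to MK threads
    blocking_cpus = list(range(mk_count, total)) # left to the OS kernel
    return mk_cpus, blocking_cpus


mk_cpus, blocking_cpus = partition_logical_processors(mk_count=4)
```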
  • Publication number: 20160246655
    Abstract: A cache affinity and processor utilization technique efficiently load balances work in a storage input/output (I/O) stack among a plurality of processors and associated processor cores of a node. The storage I/O stack employs one or more non-blocking messaging kernel (MK) threads that execute non-blocking message handlers (i.e., non-blocking services). The technique load balances work between the processor cores sharing a last level cache (LLC) (i.e., intra-LLC processor load balancing), and load balances work between the processors having separate LLCs (i.e., inter-LLC processor load balancing). The technique may allocate a predetermined number of logical processors for use by an MK scheduler to schedule the non-blocking services within the storage I/O stack, as well as allocate a remaining number of logical processors for use by blocking services, e.g., scheduled by an operating system kernel scheduler.
    Type: Application
    Filed: February 24, 2016
    Publication date: August 25, 2016
    Inventors: Jeffrey S. Kimmel, Christopher Joseph Corsi, Venkatesh Babu Chitlur Srinivasa
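
The intra-LLC versus inter-LLC balancing in this publication can be sketched as a core-selection policy: prefer the least-loaded core sharing the current last-level cache, and cross to another LLC only when the whole cache domain is clearly busier than another. The load metric and threshold below are assumptions.

```python
def pick_core(llcs, current_llc, imbalance_threshold=2):
    """Prefer the least-loaded core that shares the current last-level cache
    (intra-LLC balancing); move to another LLC only when the current cache
    domain is noticeably busier than the least-loaded one (inter-LLC)."""
    home = llcs[current_llc]
    target_llc = current_llc
    if min(sum(l) for l in llcs) + imbalance_threshold < sum(home):
        target_llc = min(range(len(llcs)), key=lambda i: sum(llcs[i]))
    cores = llcs[target_llc]
    core = min(range(len(cores)), key=lambda c: cores[c])
    return target_llc, core


# Per-core run-queue lengths, grouped by LLC (two sockets, four cores each).
llcs = [[5, 6, 7, 8], [1, 0, 2, 1]]
print(pick_core(llcs, current_llc=0))   # -> (1, 1): spill to the idle LLC
```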
  • Patent number: 9418015
    Abstract: Among other things, one or more techniques and/or systems are provided for storing data within a hybrid storage aggregate comprising a lower-latency storage tier and a higher-latency storage tier. In particular, frequently accessed data, randomly accessed data, and/or short lived data may be stored (e.g., read caching and/or write caching) within the lower-latency storage tier. Infrequently accessed data and/or sequentially accessed data may be stored within the higher-latency storage tier. Because the hybrid storage aggregate may comprise a single logical container derived from the higher-latency storage tier and the lower-latency storage tier, additional storage and/or file system functionality may be implemented across the storage tiers. For example, deduplication functionality, caching functionality, backup/restore functionality, and/or other functionality may be provided through a single file system (or other type of arrangement) and/or a cache map implemented within the hybrid storage aggregate.
    Type: Grant
    Filed: April 24, 2015
    Date of Patent: August 16, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Douglas Paul Doucette, David Grunwald, Jeffrey S. Kimmel, Ashish Prakash
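
The tier-placement policy in this patent reduces to a simple decision: frequently accessed, randomly accessed, or short-lived data goes to the lower-latency tier, while infrequently or sequentially accessed data goes to the higher-latency tier. The thresholds in this sketch are invented for illustration.

```python
def choose_tier(access_count, is_random, expected_lifetime_s,
                hot_threshold=100, short_lived_s=300):
    """Place frequently accessed, randomly accessed, or short-lived data on
    the lower-latency tier (e.g. SSD) and infrequently or sequentially
    accessed data on the higher-latency tier (e.g. HDD). The thresholds are
    illustrative assumptions, not values from the patent."""
    if (access_count >= hot_threshold or is_random
            or expected_lifetime_s <= short_lived_s):
        return "lower-latency tier"
    return "higher-latency tier"


assert choose_tier(5, is_random=False, expected_lifetime_s=86400) == "higher-latency tier"
assert choose_tier(500, is_random=False, expected_lifetime_s=86400) == "lower-latency tier"
```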
  • Patent number: 9405783
    Abstract: In one embodiment, a technique is provided for distributing data and associated metadata within a distributed storage architecture. A set of hash tables that embody mappings of cluster-wide identifiers associated with storage locations is stored for write data of write requests organized into extents. A hash value is generated from a hash function applied to each extent. The hash value is overloaded and used for multiple purposes within the distributed storage architecture, including (i) a remainder computation on the hash value to select a bucket of a plurality of buckets representative of the extents, (ii) a hash table selector of the hash value to select a hash table from the set of hash tables, and (iii) a hash table index computed from the hash value to select an entry from a plurality of entries of the selected hash table having a cluster-wide identifier identifying a storage location for the extent.
    Type: Grant
    Filed: October 2, 2013
    Date of Patent: August 2, 2016
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Blake H. Lewis
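
This patent overloads a single hash value for three purposes: a remainder computation selects the bucket, another portion selects a hash table from the set, and a third portion indexes into that table. The bit positions, table counts, and sizes in the sketch below are assumptions; only the overloading pattern comes from the abstract.

```python
import hashlib

NUM_BUCKETS = 64            # assumed cluster-wide bucket count
NUM_TABLES = 8              # assumed number of hash tables in the set
TABLE_SIZE = 4096           # assumed entries per table


def overload_hash(extent: bytes):
    """Derive three values from one hash of the extent: the bucket (remainder
    computation), the hash table selector, and the index into that table."""
    h = int.from_bytes(hashlib.sha256(extent).digest()[:8], "big")
    bucket = h % NUM_BUCKETS                    # which bucket owns the extent
    table = (h >> 6) % NUM_TABLES               # which hash table in the set
    index = (h >> 9) % TABLE_SIZE               # which entry in that table
    return bucket, table, index


bucket, table, index = overload_hash(b"some write data")
```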
  • Patent number: 9389958
    Abstract: In one embodiment, a file system driven RAID rebuild technique is provided. A layered file system may organize storage of data as segments spanning one or more sets of storage devices, such as solid state drives (SSDs), of a storage array, wherein each set of SSDs may form a RAID group configured to provide data redundancy for a segment. The file system may then drive (i.e., initiate) rebuild of a RAID configuration of the SSDs on a segment-by-segment basis in response to cleaning of the segment (i.e., segment cleaning). Each segment may include one or more RAID stripes that provide a level of data redundancy (e.g., single parity RAID 5 or double parity RAID 6) as well as RAID organization (i.e., distribution of data and parity) for the segment. Notably, the level of data redundancy and RAID organization may differ among the segments of the array.
    Type: Grant
    Filed: January 22, 2014
    Date of Patent: July 12, 2016
    Assignee: NetApp, Inc.
    Inventors: Rajesh Sundaram, Bharat Baddepudi, Jeffrey S. Kimmel, T. Byron Rakitzis
  • Publication number: 20160132396
    Abstract: In one embodiment, an extent store layer of a storage input/output (I/O) stack executing on one or more nodes of a cluster manages efficient logging and checkpointing of metadata. The metadata managed by the extent store layer, i.e., the extent store metadata, resides in a memory (in-core) of each node and is illustratively organized as a key-value extent store embodied as one or more data structures, e.g., a set of hash tables. Changes to the set of hash tables are recorded as a continuous stream of changes to SSD embodied as an extent store layer log. A separate log stream structure (e.g., an in-core buffer) may be associated respectively with each hash table such that changed (i.e., dirtied) slots of the hash table are recorded as entries in the log stream structure. The hash tables are written to SSD using a fuzzy checkpointing technique.
    Type: Application
    Filed: January 20, 2016
    Publication date: May 12, 2016
    Inventors: Jeffrey S. Kimmel, T. Byron Rakitzis
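
A toy version of the logging scheme in this publication: each hash table has its own log stream buffer in which dirtied slots are recorded, and a checkpoint writes the table out so the log can be truncated. The checkpoint here is a plain copy, not the fuzzy checkpointing technique named in the abstract.

```python
class LoggedHashTable:
    """Hash table whose dirtied slots are recorded in an in-core log stream;
    a (simplified stand-in for a fuzzy) checkpoint writes the table out and
    truncates the stream."""

    def __init__(self):
        self.slots = {}
        self.log_stream = []        # in-core buffer of changed slots
        self.checkpoints = []       # stand-in for copies written to SSD

    def put(self, slot, value):
        self.slots[slot] = value
        self.log_stream.append((slot, value))   # record the dirtied slot

    def checkpoint(self):
        self.checkpoints.append(dict(self.slots))
        self.log_stream.clear()     # logged changes are no longer needed


tables = [LoggedHashTable() for _ in range(4)]   # one log stream per table
tables[0].put("k1", "loc1")
tables[0].checkpoint()
```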