Patents by Inventor Jeffrey S. Kimmel

Jeffrey S. Kimmel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8627331
Abstract: A technique is described for improving throughput in a processing system, such as a network storage server. The technique provides multiple levels (e.g., a hierarchy) of parallelism of process execution within a single mutual exclusion domain, in a manner which allows certain operations on metadata to be parallelized as well as certain operations on user data. The specific parallelization scheme used in any given embodiment is based at least partly on the underlying metadata structures used by the processing system. Consequently, a high degree of parallelization is possible, which improves the throughput of the processing system.
    Type: Grant
    Filed: April 30, 2010
    Date of Patent: January 7, 2014
    Assignee: NetApp, Inc.
    Inventors: David Grunwald, Jeffrey S. Kimmel
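The hierarchy of parallelism described in the abstract above is stated abstractly; the following is a minimal Python sketch of the general idea of parallel execution inside a single mutual exclusion domain. The class name, the partition count, and the hash-based partitioning are illustrative assumptions, not the patented data layout.

```python
import threading

class HierarchicalDomain:
    """Toy model: one mutual-exclusion domain subdivided into partitions so
    that independent metadata/user-data operations can run in parallel.
    (Illustrative only; the partitioning scheme below is an assumption.)"""

    def __init__(self, num_partitions=4):
        # One lock per partition; holding all of them is equivalent to
        # holding the whole domain exclusively.
        self._partition_locks = [threading.Lock() for _ in range(num_partitions)]

    def run_parallel(self, partition_key, fn, *args):
        # Operations on disjoint partitions take only "their" lock and can
        # therefore execute concurrently with one another.
        lock = self._partition_locks[hash(partition_key) % len(self._partition_locks)]
        with lock:
            return fn(*args)

    def run_exclusive(self, fn, *args):
        # Domain-wide operations acquire every partition lock in a fixed
        # order (avoiding deadlock), excluding all parallel work.
        for lock in self._partition_locks:
            lock.acquire()
        try:
            return fn(*args)
        finally:
            for lock in reversed(self._partition_locks):
                lock.release()
```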
  • Patent number: 8621029
Abstract: A system and method provides a remote direct memory access over a transport medium that does not natively support remote direct memory access operations. An emulated VI module of a storage operating system emulates RDMA operations over such a medium, e.g., conventional Ethernet, thereby enabling storage appliances in a cluster configuration to utilize the non-RDMA compatible transport medium as a cluster interconnect.
    Type: Grant
    Filed: April 28, 2004
    Date of Patent: December 31, 2013
    Assignee: NetApp, Inc.
    Inventors: James R. Grier, Abhijeet Gole, David W. Mitchell, Jeffrey S. Kimmel, Arthur F. Lent
  • Patent number: 8621145
    Abstract: Described is a technique for managing the content of a nonvolatile solid-state memory data cache to improve cache performance while at the same time, and in a complementary manner, providing for automatic wear leveling. A modified circular first-in first-out (FIFO) log/algorithm is generally used to determine cache content replacement. The algorithm is used as the default mechanism for determining cache content to be replaced when the cache is full but is subject to modification in some instances. In particular, data are categorized according to different data classes prior to being written to the cache, based on usage. Once cached, data belonging to certain classes are treated differently than the circular FIFO replacement algorithm would dictate. Further, data belonging to each class are localized to designated regions within the cache.
    Type: Grant
    Filed: January 29, 2010
    Date of Patent: December 31, 2013
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Randy Pafford, Rajesh Sundaram
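A minimal sketch of the cache-replacement idea in the abstract above, assuming each data class gets its own region managed as a circular FIFO ring, so replacement order doubles as wear leveling. The class names, region sizes, and in-memory structures are illustrative assumptions.

```python
class ClassedFifoCache:
    """Illustrative sketch: the cache is split into per-class regions and each
    region is written as a circular FIFO log; wrapping the write cursor spreads
    writes evenly, which is the wear-leveling side effect."""

    def __init__(self, region_slots):
        # region_slots: e.g. {"metadata": 4, "user": 16} (assumed classes/sizes)
        self.regions = {
            cls: {"slots": [None] * n, "next": 0, "index": {}}
            for cls, n in region_slots.items()
        }

    def insert(self, data_class, key, value):
        region = self.regions[data_class]
        slot = region["next"]
        # Evict whatever the circular cursor points at (default FIFO policy).
        old = region["slots"][slot]
        if old is not None and region["index"].get(old[0]) == slot:
            del region["index"][old[0]]
        region["slots"][slot] = (key, value)
        region["index"][key] = slot
        # Advance the circular cursor; it wraps around the region.
        region["next"] = (slot + 1) % len(region["slots"])

    def lookup(self, data_class, key):
        region = self.regions[data_class]
        slot = region["index"].get(key)
        return None if slot is None else region["slots"][slot][1]
```

For example, `ClassedFifoCache({"metadata": 4, "user": 16})` localizes metadata to its own small region while user data cycles through a larger one.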
  • Patent number: 8621142
    Abstract: A technique for achieving consistent read latency from an array of non-volatile solid-state memories involves an external entity determining the “busy” or “not busy” status of non-volatile solid-state memory elements in a RAID group. An external data layout engine then uses parity based RAID data reconstruction to avoid having to read from any memory element that is busy in a RAID group, along with careful scheduling of writes and erasures.
    Type: Grant
    Filed: April 18, 2011
    Date of Patent: December 31, 2013
    Assignee: NetApp, Inc.
    Inventors: Steve C. Miller, Jeffrey S. Kimmel
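A minimal sketch of the read-around-busy idea in the abstract above, assuming a single-parity (XOR) RAID group and device objects that expose a `busy` flag and a `read()` method; both are illustrative stand-ins for the patented external data layout engine.

```python
def read_stripe_block(devices, want_index):
    """Read one block of a RAID stripe, avoiding any device reporting 'busy'.
    `devices` is a list of objects with .busy and .read() -> bytes; all blocks
    in the stripe are assumed to be the same length."""
    target = devices[want_index]
    if not target.busy:
        return target.read()
    # Target is busy: rebuild its block by XOR-ing every other block in the
    # stripe (data + parity), which is the RAID reconstruction path.
    others = [d for i, d in enumerate(devices) if i != want_index]
    if any(d.busy for d in others):
        # Cannot avoid a busy device entirely; fall back to the direct read.
        return target.read()
    blocks = [d.read() for d in others]
    rebuilt = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            rebuilt[i] ^= b
    return bytes(rebuilt)
```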
  • Patent number: 8621146
    Abstract: A network storage system includes “raw” flash memory, and storage of data in that flash memory is controlled by an external, log structured, write out-of-place data layout engine of a storage server. By avoiding a separate, onboard data layout engine on the flash devices, the latency associated with operation of such a data layout engine is also avoided. The flash memory can be used as the main persistent storage of a storage server and/or as buffer cache of a storage server, or both. The flash memory can be accessible to multiple storage servers in a storage cluster. To reduce variability in read latency, each flash device provides its status (“busy” or not) to the data layout engine. The data layout engine uses RAID data reconstruction to avoid having to read from a busy flash device.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: December 31, 2013
    Assignee: NetApp, Inc.
    Inventors: Steve C. Miller, Jeffrey S. Kimmel
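A minimal sketch of an external, write-out-of-place (log-structured) layout over raw flash, as described in the abstract above. A flat page array stands in for the flash device, which is an illustrative assumption; garbage collection and the busy-status/RAID path are omitted.

```python
class LogStructuredLayout:
    """Minimal sketch: every write appends to the next free flash location and
    a block map is updated, so the flash device itself needs no onboard
    remapping logic."""

    def __init__(self, num_pages):
        self.flash = [None] * num_pages     # stand-in for raw flash pages
        self.head = 0                       # next append position in the log
        self.block_map = {}                 # logical block -> flash page

    def write(self, logical_block, data):
        if self.head >= len(self.flash):
            raise RuntimeError("log full; a real layout engine would garbage-collect")
        # Out-of-place: never overwrite; append and repoint the map.
        self.flash[self.head] = data
        self.block_map[logical_block] = self.head
        self.head += 1

    def read(self, logical_block):
        page = self.block_map.get(logical_block)
        return None if page is None else self.flash[page]
```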
  • Publication number: 20130346810
    Abstract: A storage system, such as a file server, receives a request to perform a write operation that affects a data block. In response, the storage system writes to a storage device the data block together with context information which uniquely identifies the write operation with respect to the data block. When the data block is subsequently read from the storage device together with the context information, the context information that was read with the data block is used to determine whether a previous write of the data block was lost.
    Type: Application
    Filed: June 14, 2013
    Publication date: December 26, 2013
    Inventors: Jeffrey S. Kimmel, Sunitha S. Sankar, Rajesh Sundaram, Nitin Muppalaneni, Emily W. Eng, Eric C. Hamilton
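A minimal sketch of the lost-write detection idea in the abstract above, assuming the per-write context is a (block id, write generation) pair; the actual context format used by the storage system is not specified here.

```python
class ContextCheckedStore:
    """Sketch: each data block is written together with context that uniquely
    identifies that write; on read, the stored context is compared with what
    the file system expects, and a mismatch indicates a lost earlier write."""

    def __init__(self):
        self.disk = {}        # block_id -> (data, context) as written to media
        self.expected = {}    # block_id -> context the file system last issued

    def write(self, block_id, data):
        generation = self.expected.get(block_id, {}).get("generation", 0) + 1
        ctx = {"block_id": block_id, "generation": generation}
        self.disk[block_id] = (data, ctx)   # a lost write would skip this line
        self.expected[block_id] = ctx

    def read(self, block_id):
        data, stored_ctx = self.disk[block_id]
        if stored_ctx != self.expected.get(block_id):
            raise IOError(f"lost write detected for block {block_id}")
        return data
```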
  • Patent number: 8549222
    Abstract: A cache-based storage architecture has primary and secondary storage subsystems that are controlled by first and second data layout engines to provide a high-performance storage system. The primary storage subsystem illustratively comprises non-volatile electronic storage media configured as a cache, while the secondary storage subsystem comprises magnetic storage media configured as a disk array. The data layout engines illustratively implement data layout techniques that improve read and write performance to the primary and secondary storage subsystems. To that end, the data layout engines cooperate to optimize the use of the non-volatile cache as a primary storage stage that efficiently serves random data access operations prior to substantially transposing them into sequential data access operations for permanent (or archival) storage on the disk array.
    Type: Grant
    Filed: February 11, 2009
    Date of Patent: October 1, 2013
    Assignee: NetApp, Inc.
    Inventors: Steven R. Kleiman, Steven C. Miller, Jeffrey S. Kimmel
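A minimal sketch of the two-stage layout idea in the abstract above: random writes are absorbed by a non-volatile cache and later transposed into sequential writes to the disk array. The dictionary/list stand-ins, the flush threshold, and the sort-by-offset policy are illustrative assumptions.

```python
class HybridStore:
    """Sketch: stage 1 (nv_cache) absorbs random-offset writes; stage 2
    (disk_log) is only ever appended to in large sequential runs."""

    def __init__(self, flush_threshold=8):
        self.nv_cache = {}          # stand-in for the non-volatile cache
        self.disk_log = []          # stand-in for the disk array
        self.flush_threshold = flush_threshold

    def write(self, offset, data):
        self.nv_cache[offset] = data          # random access, low latency
        if len(self.nv_cache) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Transpose the accumulated random writes into one sorted,
        # sequential pass over the disk array.
        for offset in sorted(self.nv_cache):
            self.disk_log.append((offset, self.nv_cache[offset]))
        self.nv_cache.clear()

    def read(self, offset):
        if offset in self.nv_cache:           # serve recent data from the cache
            return self.nv_cache[offset]
        for off, data in reversed(self.disk_log):   # newest flushed copy wins
            if off == offset:
                return data
        return None
```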
  • Patent number: 8549253
    Abstract: A system integrates an intelligent storage switch with a flexible virtualization system to enable the intelligent storage switch to provide efficient service of file and block protocol data access requests for information stored on the system. A storage operating system executing on a storage system coupled to the switch implements the virtualization system to provide a unified view of storage to clients by logically organizing the information as named files, directories and logical unit numbers. The virtualization system may be embodied as a file system having a write allocator configured to provide a flexible block numbering policy to the storage switch that addresses volume management capabilities, such as storage virtualization.
    Type: Grant
    Filed: April 30, 2010
    Date of Patent: October 1, 2013
    Assignee: NetApp, Inc.
    Inventors: Vijayan Rajan, Brian Pawlowski, Jeffrey S. Kimmel, Gary Ross
  • Patent number: 8478835
    Abstract: The data path in a network storage system is streamlined by sharing a memory among multiple functional modules (e.g., N-module and D-module) of a storage server that facilitates symmetric access to data from multiple clients. The shared memory stores data from clients or storage devices to facilitate communication of data between clients and storage devices and/or between functional modules, and reduces redundant copies necessary for data transport. It reduces latency and improves throughput efficiencies by minimizing data copies and using hardware assisted mechanisms such as DMA directly from host bus adapters over an interconnection, e.g. switched PCI-e “network”. This scheme is well suited for a “SAN array” architecture, but also can be applied to NAS protocols or in a unified protocol-agnostic storage system. The storage system can provide a range of configurations ranging from dual module to many modules with redundant switched fabrics for I/O, CPU, memory, and disk connectivity.
    Type: Grant
    Filed: July 17, 2008
    Date of Patent: July 2, 2013
Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Steve C. Miller, Ashish Prakash
  • Patent number: 8321645
    Abstract: At least certain embodiments include a method, system and apparatus for relocating data between tiers of storage media in a hybrid storage aggregate encompassing multiple tiers of heterogeneous physical storage media including a file system to automatically relocate the data between tiers.
    Type: Grant
    Filed: April 29, 2009
    Date of Patent: November 27, 2012
    Assignee: NetApp, Inc.
    Inventors: Faramarz Rabii, John Strunk, Jeffrey S. Kimmel
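A minimal sketch of automatic relocation between heterogeneous tiers of a hybrid aggregate, as described in the abstract above. The read-count promotion threshold is an illustrative assumption, not the patented relocation policy.

```python
class HybridAggregate:
    """Sketch: data starts on the slow tier (e.g., HDD) and is relocated to
    the fast tier (e.g., SSD) once it has been read often enough."""

    def __init__(self, promote_after=3):
        self.fast, self.slow = {}, {}
        self.hits = {}
        self.promote_after = promote_after

    def write(self, block_id, data):
        if block_id in self.fast:
            self.fast[block_id] = data        # overwrite in place on the fast tier
        else:
            self.slow[block_id] = data        # new data starts on the slow tier

    def read(self, block_id):
        self.hits[block_id] = self.hits.get(block_id, 0) + 1
        if block_id in self.fast:
            return self.fast[block_id]
        data = self.slow[block_id]
        if self.hits[block_id] >= self.promote_after:
            # Relocate frequently read data to the fast tier.
            self.fast[block_id] = self.slow.pop(block_id)
        return data
```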
  • Patent number: 8224777
    Abstract: A system and method efficiently generates a set of parallel persistent consistency point images (PCPIs) of volumes configured as a SVS and served by a plurality of nodes interconnected as a cluster. A volume operations daemon (VOD) executing on a node of the cluster is configured to manage generation of the volume PCPIs. Notably, the set of PCPIs is generated substantially in parallel to thereby obtain a consistent and accurate point in time reference of the entire SVS.
    Type: Grant
    Filed: April 28, 2006
    Date of Patent: July 17, 2012
    Assignee: NetApp, Inc.
    Inventor: Jeffrey S. Kimmel
  • Patent number: 8171480
    Abstract: In a processing system which includes a physical processor that includes multiple logical processors, multiple domains are defined for multiple processes that can execute on the physical processor. Each of the processes is assigned to one of the domains. Processor utilization associated with the logical processors is measured, and each of the domains is allocated to a subset of the logical processors according to the processor utilization.
    Type: Grant
    Filed: April 21, 2004
    Date of Patent: May 1, 2012
    Assignee: Network Appliance, Inc.
    Inventors: Alexander D. Petruncola, Nareshkumar M. Patel, Grace Ho, Jeffrey S. Kimmel
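A minimal sketch of allocating domains to subsets of logical processors according to measured utilization, as the abstract above describes. The proportional-share policy, the input format, and the leftover handling are illustrative assumptions.

```python
def allocate_domains(domain_load, logical_cpus):
    """Give each domain a subset of logical processors roughly proportional
    to its measured utilization.

    domain_load:  e.g. {"network": 0.5, "storage": 0.3, "raid": 0.2}
    logical_cpus: e.g. ["cpu0", "cpu1", "cpu2", "cpu3"]
    """
    total = sum(domain_load.values()) or 1.0
    allocation, cursor = {}, 0
    # Hand out processors to the busiest domains first.
    for domain, load in sorted(domain_load.items(), key=lambda kv: -kv[1]):
        share = round(len(logical_cpus) * load / total)
        share = min(share, len(logical_cpus) - cursor)
        allocation[domain] = logical_cpus[cursor:cursor + share]
        cursor += share
    # Any leftover processors go to the busiest domain.
    if cursor < len(logical_cpus):
        busiest = max(domain_load, key=domain_load.get)
        allocation[busiest] += logical_cpus[cursor:]
    return allocation
```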
  • Patent number: 8112585
    Abstract: A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data.
    Type: Grant
    Filed: April 30, 2009
    Date of Patent: February 7, 2012
    Assignee: NetApp, Inc.
    Inventors: Naresh Patel, Jeffrey S. Kimmel, Garth Goodson
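A minimal sketch of cache-policy switching by emulation, as in the abstract above: the live cache runs one configuration while shadow caches emulate alternatives on the same access stream, and the best performer is adopted. The LRU-with-different-sizes alternatives are an illustrative assumption; the real configurations could differ in policy, not just capacity.

```python
from collections import OrderedDict

class CachePolicySwitcher:
    """Sketch: feed every access to the live cache and all shadow caches,
    track hit rates, and switch the live configuration to the best one."""

    def __init__(self, configs, live):
        # configs: e.g. {"small": 64, "large": 256}; live: name of live config
        self.caches = {name: OrderedDict() for name in configs}
        self.sizes = dict(configs)
        self.stats = {name: [0, 0] for name in configs}   # [hits, accesses]
        self.live = live

    def access(self, key):
        live_hit = False
        for name, cache in self.caches.items():
            self.stats[name][1] += 1
            if key in cache:
                self.stats[name][0] += 1
                cache.move_to_end(key)                     # LRU update
                live_hit = live_hit or (name == self.live)
            else:
                cache[key] = True
                if len(cache) > self.sizes[name]:
                    cache.popitem(last=False)              # evict LRU entry
        self._maybe_switch()
        return live_hit

    def _maybe_switch(self):
        def hit_rate(name):
            hits, total = self.stats[name]
            return hits / total if total else 0.0
        best = max(self.caches, key=hit_rate)
        if best != self.live and hit_rate(best) > hit_rate(self.live):
            self.live = best           # apply the better-performing configuration
```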
  • Patent number: 8086914
    Abstract: Described herein are method and apparatus for storing data to a low-latency random read memory (LLRRM) device using non-aligned data striping, the LLRRM device being implemented on a storage system. The LLRRM device may comprise a bank comprising a plurality of memory chips, each chip being simultaneously accessible for storing data on a plurality of erase-units (EUs). A storage operating system may maintain, for each chip, a reserve data structure listing reserve EUs and a remapping data structure for tracking remappings between defective EUs to reserve EUs in the chip. A defective EU in a chip may be mapped to a reserve EU from the reserve data structure. Upon receiving a data block to be stored to the LLRRM device at the defective EU, the storage operating system may stripe the received data block across a plurality of chips in a non-aligned manner using the remapped reserve EU.
    Type: Grant
    Filed: April 15, 2011
    Date of Patent: December 27, 2011
Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Rajesh Sundaram, George Totolos, Jr., Michael W. J. Hordijk
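A minimal sketch of the reserve/remapping bookkeeping and non-aligned striping described in the abstract above. The chip count, erase-unit count, reserve size, and data structures are illustrative assumptions.

```python
class LLRRMBank:
    """Sketch: each chip keeps a list of reserve erase-units (EUs) and a map
    from defective EUs to reserve EUs; a block is striped across chips using
    the (possibly remapped) EUs, so stripe members need not sit at the same
    EU offset on every chip ('non-aligned' striping)."""

    def __init__(self, num_chips=4, eus_per_chip=16, reserve_per_chip=2):
        self.reserve = {c: list(range(eus_per_chip - reserve_per_chip, eus_per_chip))
                        for c in range(num_chips)}
        self.remap = {c: {} for c in range(num_chips)}   # defective EU -> reserve EU
        self.num_chips = num_chips

    def mark_defective(self, chip, eu):
        if eu not in self.remap[chip] and self.reserve[chip]:
            self.remap[chip][eu] = self.reserve[chip].pop(0)

    def resolve(self, chip, eu):
        # Follow the remapping if this EU was retired to a reserve EU.
        return self.remap[chip].get(eu, eu)

    def stripe(self, block, target_eu):
        """Split `block` across all chips; each chunk lands at that chip's
        resolved EU, which may differ from chip to chip."""
        chunk = (len(block) + self.num_chips - 1) // self.num_chips
        return [(c, self.resolve(c, target_eu), block[c * chunk:(c + 1) * chunk])
                for c in range(self.num_chips)]
```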
  • Patent number: 8074021
    Abstract: A network storage system includes “raw” flash memory, and storage of data in that flash memory is controlled by an external, log structured, write out-of-place data layout engine of a storage server. By avoiding a separate, onboard data layout engine on the flash devices, the latency associated with operation of such a data layout engine is also avoided. The flash memory can be used as the main persistent storage of a storage server and/or as buffer cache of a storage server, or both. The flash memory can be accessible to multiple storage servers in a storage cluster. To reduce variability in read latency, each flash device provides its status (“busy” or not) to the data layout engine. The data layout engine uses RAID data reconstruction to avoid having to read from a busy flash device.
    Type: Grant
    Filed: March 27, 2008
    Date of Patent: December 6, 2011
    Assignee: NetApp, Inc.
    Inventors: Steve C. Miller, Jeffrey S. Kimmel
  • Patent number: 8032781
    Abstract: A system and method for allowing more rapid takeover of a failed filer by a clustered takeover partner filer in the presence of a coredump procedure (e.g. a transfer of the failed filer's working memory) is provided. To save time, the coredump is allowed to occur contemporaneously with the takeover of the failed filer's regular, active file service disks by the partner so that the takeover need not await completion of the coredump to begin. This is accomplished, briefly stated, by the following techniques. The coredump is written to a single disk that is not involved in regular file service, so that takeover of regular file services can proceed without interference from coredump. A reliable means for both filers in a cluster to identify the coredump disk is provided, which removes takeover dependence upon unreliable communications mechanisms.
    Type: Grant
    Filed: October 7, 2010
    Date of Patent: October 4, 2011
    Assignee: NetApp, Inc.
    Inventors: Susan M. Coatney, John Lloyd, Jeffrey S. Kimmel, Brian Parkison, David Brittain Bolen
  • Publication number: 20110196905
    Abstract: Described herein are method and apparatus for storing data to a low-latency random read memory (LLRRM) device using non-aligned data striping, the LLRRM device being implemented on a storage system. The LLRRM device may comprise a bank comprising a plurality of memory chips, each chip being simultaneously accessible for storing data on a plurality of erase-units (EUs). A storage operating system may maintain, for each chip, a reserve data structure listing reserve EUs and a remapping data structure for tracking remappings between defective EUs to reserve EUs in the chip. A defective EU in a chip may be mapped to a reserve EU from the reserve data structure. Upon receiving a data block to be stored to the LLRRM device at the defective EU, the storage operating system may stripe the received data block across a plurality of chips in a non-aligned manner using the remapped reserve EU.
    Type: Application
    Filed: April 15, 2011
    Publication date: August 11, 2011
Inventors: Jeffrey S. Kimmel, Rajesh Sundaram, George Totolos, Jr., Michael W. J. Hordijk
  • Patent number: 7979402
    Abstract: A system and method for managing data during consistency points in a storage system is provided. A buffer data control structure is modified to include a flags array that tracks various status flags for both a current and a next consistency point (CP). By utilizing multiple pointers within a buffer control structure, the storage system may permit write operations to continue to a data container undergoing write allocation. Received writes during a write allocation procedure are stored in raw data buffers and the buffer control structure is marked as being dirty for a next CP.
    Type: Grant
    Filed: April 30, 2010
    Date of Patent: July 12, 2011
    Assignee: NetApp, Inc.
    Inventors: Eric Hamilton, Jeffrey S. Kimmel, Robert L. Fair, Ashish Prakash
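A minimal sketch of the current/next consistency-point (CP) bookkeeping described in the abstract above, assuming a two-slot flags array per buffer; the field names and the promotion step are illustrative, not the patented buffer control structure.

```python
class BufferControl:
    """Sketch: one data pointer and one dirty flag per CP generation, so a
    write that arrives while the current CP is being flushed is marked dirty
    for the next CP instead of being blocked."""

    CURRENT, NEXT = 0, 1

    def __init__(self):
        self.data = [None, None]        # data pointer for each CP generation
        self.dirty = [False, False]     # flags array: dirty in current / next CP

    def write(self, payload, cp_in_progress):
        # While a CP is in progress, incoming writes target the next generation.
        gen = self.NEXT if cp_in_progress else self.CURRENT
        self.data[gen] = payload        # raw data buffer for that generation
        self.dirty[gen] = True

    def finish_cp(self):
        # The current generation has been flushed to disk; promote any
        # "next CP" state to be the new current state.
        if self.dirty[self.NEXT]:
            self.data[self.CURRENT] = self.data[self.NEXT]
        self.dirty[self.CURRENT] = self.dirty[self.NEXT]
        self.data[self.NEXT] = None
        self.dirty[self.NEXT] = False
```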
  • Patent number: 7945752
    Abstract: A technique for achieving consistent read latency from an array of non-volatile solid-state memories involves an external entity determining the “busy” or “not busy” status of non-volatile solid-state memory elements in a RAID group. An external data layout engine then uses parity based RAID data reconstruction to avoid having to read from any memory element that is busy in a RAID group, along with careful scheduling of writes and erasures.
    Type: Grant
    Filed: March 27, 2008
    Date of Patent: May 17, 2011
    Assignee: NetApp, Inc.
    Inventors: Steve C. Miller, Jeffrey S. Kimmel
  • Patent number: 7945822
    Abstract: Described herein are method and apparatus for storing data to a low-latency random read memory (LLRRM) device using non-aligned data striping, the LLRRM device being implemented on a storage system. The LLRRM device may comprise a bank comprising a plurality of memory chips, each chip being simultaneously accessible for storing data on a plurality of erase-units (EUs). A storage operating system may maintain, for each chip, a reserve data structure listing reserve EUs and a remapping data structure for tracking remappings between defective EUs to reserve EUs in the chip. A defective EU in a chip may be mapped to a reserve EU from the reserve data structure. Upon receiving a data block to be stored to the LLRRM device at the defective EU, the storage operating system may stripe the received data block across a plurality of chips in a non-aligned manner using the remapped reserve EU.
    Type: Grant
    Filed: April 27, 2009
    Date of Patent: May 17, 2011
    Assignee: NetApp, Inc.
    Inventors: Jeffrey S. Kimmel, Rajesh Sundaram, George Totolos, Jr., Michael W. J. Hordijk