Patents by Inventor Kevin J. Ash

Kevin J. Ash has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210049109
    Abstract: Provided are a computer program product, system, and method for managing the adding of accessed tracks to a cache list based on accesses to different regions of the cache list. A cache list for the cache has a least recently used (LRU) end and a most recently used (MRU) end. A determination is made of a high access region of tracks from the MRU end of the cache list based on a number of accesses to the tracks in the high access region. In response to determining that an accessed track is in the high access region, a flag is set for the track indicating that it is to be indicated at the MRU end when it is processed at the LRU end. After the flag is set, the accessed track remains at its current position in the cache list until it is processed.
    Type: Application
    Filed: August 16, 2019
    Publication date: February 18, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
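
The lazy re-MRU mechanism described in the abstract above can be illustrated with a short sketch. The Python below is a minimal model under stated assumptions: the class name, the fixed `region_fraction` used to mark the high access region (the filing derives the region from access counts instead), and the demotion routine are all illustrative, not taken from the patent.

```python
from collections import OrderedDict

class CacheList:
    """Minimal sketch: tracks accessed inside a high access region near the MRU end
    are only flagged; the flag is honored later, when the track reaches the LRU end."""

    def __init__(self, region_fraction=0.25):
        self.tracks = OrderedDict()      # first item = LRU end, last item = MRU end
        self.region_fraction = region_fraction

    def _in_high_access_region(self, track_id):
        ids = list(self.tracks)
        cutoff = int(len(ids) * (1 - self.region_fraction))
        return ids.index(track_id) >= cutoff

    def access(self, track_id):
        if track_id not in self.tracks:
            self.tracks[track_id] = {"re_mru": False}   # new tracks start at the MRU end
        elif self._in_high_access_region(track_id):
            self.tracks[track_id]["re_mru"] = True      # defer the move; stay in place
        else:
            self.tracks.move_to_end(track_id)           # outside the region: move now
            self.tracks[track_id]["re_mru"] = False

    def demote_one(self):
        """Process the track at the LRU end, honoring a pending re-MRU flag."""
        if not self.tracks:
            return None
        track_id, state = next(iter(self.tracks.items()))
        if state["re_mru"]:
            state["re_mru"] = False
            self.tracks.move_to_end(track_id)           # indicate at the MRU end instead
            return None
        del self.tracks[track_id]                       # otherwise the track is demoted
        return track_id
```
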
  • Publication number: 20210042231
    Abstract: Provided are a computer program product, system, and method for adjusting insertion points used to determine locations in a cache list at which to indicate tracks, based on a number of tracks added at the insertion points. There are a plurality of insertion points to a cache list for the cache having a least recently used (LRU) end and a most recently used (MRU) end. Each of the insertion points identifies a track in the cache list. A plurality of tracks are indicated at positions in the cache list with respect to the insertion points. For each track indicated at one of the insertion points, at least one insertion point counter is incremented for at least one insertion point relative to the insertion point at which the track is indicated. A plurality of the insertion points are adjusted to point to different tracks in the cache list based on the insertion point counters for the insertion points.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
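
The next several entries all build on the same insertion-point structure, so a single sketch is given here. This Python is a rough model under assumptions: the proportional rebalancing rule and all names are illustrative; the filings do not specify this exact policy.

```python
class InsertionPointList:
    """Sketch of a cache list with several insertion points (indexes into the list),
    per-point counters of how many tracks each point has absorbed, and a periodic
    adjustment of the points based on those counters."""

    def __init__(self, num_points=4):
        self.tracks = []                 # index 0 = LRU end, last index = MRU end
        self.points = [0] * num_points   # each insertion point identifies a position
        self.counters = [0] * num_points

    def indicate(self, track_id, point_idx):
        """Indicate a track in the list at the position of insertion point point_idx."""
        pos = self.points[point_idx]
        self.tracks.insert(pos, track_id)
        self.counters[point_idx] += 1
        # Inserting shifts everything above pos, so later insertion points shift too.
        self.points = [p + 1 if p > pos else p for p in self.points]

    def adjust(self):
        """Move the insertion points to different tracks based on the counters,
        giving busier points a proportionally larger share of the list."""
        total = sum(self.counters) or 1
        running = 0
        for i, count in enumerate(self.counters):
            running += count
            self.points[i] = min(len(self.tracks),
                                 round(len(self.tracks) * running / total))
        self.counters = [0] * len(self.counters)
```
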
  • Publication number: 20210042241
    Abstract: Provided are a computer program product, system, and method for using insertion points to determine locations in a cache list at which to indicate tracks in a shared cache accessed by a plurality of processors. A plurality of insertion points to a cache list for the shared cache, having a least recently used (LRU) end and a most recently used (MRU) end, identify tracks in the cache list. For each processor, of a plurality of processors, for which an indication of accessed tracks is received, a determination is made of which of the insertion points to use to indicate those tracks. The tracks are then indicated at positions in the cache list with respect to the determined insertion points.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
  • Publication number: 20210042230
    Abstract: Provided are a computer program product, system, and method for maintaining cache hit ratios for insertion points into a cache list to optimize memory allocation to a cache. A plurality of insertion points to a cache list for the cache each identify a track in the cache list. The insertion points are used to determine locations in the cache list at which to indicate tracks that are to be indicated at the MRU end of the cache list. Cache hits are recorded for each of the insertion points used to indicate locations in the cache list for tracks accessed while indicated in the cache list. The cache hits recorded for the insertion points are used to determine whether to increase or decrease a size of the cache.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
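
Publication 20210042230 above ties per-insertion-point hit counts to cache sizing. A tiny, hypothetical decision rule is sketched below; the threshold values and the "hits near the LRU end mean the cache is too small" heuristic are assumptions, not the patent's actual criteria.

```python
def cache_size_recommendation(hits_per_insertion_point, lru_share_threshold=0.10):
    """hits_per_insertion_point[0] counts hits in the region nearest the LRU end,
    the last element counts hits nearest the MRU end."""
    total = sum(hits_per_insertion_point)
    if total == 0:
        return "keep"
    lru_share = hits_per_insertion_point[0] / total
    if lru_share > lru_share_threshold:
        return "grow"        # tracks are still being hit just before eviction
    if lru_share < lru_share_threshold / 10:
        return "shrink"      # the coldest region of the cache is barely used
    return "keep"

print(cache_size_recommendation([200, 300, 400, 500]))   # -> "grow"
print(cache_size_recommendation([2, 40, 300, 900]))      # -> "shrink"
```
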
  • Publication number: 20210042242
    Abstract: Provided are a computer program product, system, and method for using insertion points to determine locations in a cache list at which to move processed tracks. There are a plurality of insertion points to a cache list for the cache having a least recently used (LRU) end and a most recently used (MRU) end, wherein each insertion point of the insertion points identifies a track in the cache list. In response to determining that a processed track is indicated to move to the MRU end, one of the insertion points is determined at which to move the processed track. The processed track is indicated at a position in the cache list with respect to the determined insertion point.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
  • Publication number: 20210042229
    Abstract: Provided are a computer program product, system, and method for adjusting a number of insertion points used to determine locations in a cache list at which to indicate tracks. Tracks added to the cache are indicated in a cache list. The cache list has a least recently used (LRU) end and a most recently used (MRU) end. In response to indicating an insertion point interval number of tracks in the cache list, an insertion point is set to indicate one of the tracks of that insertion point interval number of tracks. Insertion points to tracks in the cache list are used to determine locations in the cache list at which to indicate tracks in the cache.
    Type: Application
    Filed: August 7, 2019
    Publication date: February 11, 2021
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
  • Patent number: 10915462
    Abstract: Provided are techniques for destaging pinned retryable data in cache. A ranks scan structure is created with an indicator for each rank of multiple ranks that indicates whether pinned retryable data in a cache for that rank is destageable. A cache directory is partitioned into chunks, wherein each of the chunks includes one or more tracks from the cache. A number of tasks is determined for the scan of the cache. The tasks are executed to scan the cache and destage pinned retryable data that is indicated as ready to be destaged by the ranks scan structure, wherein each of the tasks selects an unprocessed chunk of the cache directory for processing until all chunks of the cache directory have been processed.
    Type: Grant
    Filed: June 25, 2019
    Date of Patent: February 9, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
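
The task/chunk structure in patent 10915462 above lends itself to a short concurrency sketch. The Python below is illustrative only: `cache_directory` is modeled as a list of (track, rank) pairs, the destage step is passed in as a callable, and the real implementation's locking and rank bookkeeping are omitted.

```python
import queue
import threading

def destage_pinned_retryable(cache_directory, destageable_ranks, destage,
                             num_tasks=4, chunk_size=1024):
    """Partition the cache directory into chunks and let num_tasks workers each
    claim unprocessed chunks until every chunk has been processed."""
    chunks = queue.Queue()
    for start in range(0, len(cache_directory), chunk_size):
        chunks.put(cache_directory[start:start + chunk_size])

    def worker():
        while True:
            try:
                chunk = chunks.get_nowait()        # select the next unprocessed chunk
            except queue.Empty:
                return                             # all chunks have been processed
            for track, rank in chunk:
                if rank in destageable_ranks:      # per-rank indicator from the scan structure
                    destage(track)
            chunks.task_done()

    workers = [threading.Thread(target=worker) for _ in range(num_tasks)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()

# Example: 5,000 tracks spread over two ranks, only rank 0 marked destageable.
directory = [(f"track{i}", i % 2) for i in range(5000)]
destage_pinned_retryable(directory, destageable_ranks={0}, destage=lambda track: None)
```
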
  • Patent number: 10901916
    Abstract: Provided are a computer program product, system, and method for managing the adding of accessed tracks to a cache list based on accesses to different regions of the cache list. A cache list for the cache has a least recently used (LRU) end and a most recently used (MRU) end. A determination is made of a high access region of tracks from the MRU end of the cache list based on a number of accesses to the tracks in the high access region. In response to determining that an accessed track is in the high access region, a flag is set for the track indicating that it is to be indicated at the MRU end when it is processed at the LRU end. After the flag is set, the accessed track remains at its current position in the cache list until it is processed.
    Type: Grant
    Filed: August 16, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
  • Patent number: 10901793
    Abstract: Provided are a computer program product, system, and method for determining whether to process a host request using a machine learning module. Information that relates to at least one of running tasks, mail queue messages related to host requests, Input/Output (I/O) request processing, and a host request received from the host system is provided to a machine learning module. An output representing a processing load in a system is received from the machine learning module. The output is used to determine whether to process the host request.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Matthew R. Craig, Beth A. Peterson, Lokesh M. Gupta, Kevin J. Ash
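
As a rough illustration of how the output of the machine learning module in patent 10901793 might gate a host request, the wrapper below assumes an sklearn-style `model.predict()` that returns a load estimate in [0, 1]; the feature list and the threshold are invented for the example.

```python
def should_process_host_request(model, running_tasks, mail_queue_depth,
                                inflight_ios, request_size, load_threshold=0.8):
    """Feed system/request features to the model and process the request only if
    the predicted processing load is below the threshold."""
    features = [[running_tasks, mail_queue_depth, inflight_ios, request_size]]
    load = float(model.predict(features)[0])
    return load < load_threshold
```
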
  • Patent number: 10901904
    Abstract: In response to an end of track access for a track in a cache, a determination is made as to whether the track has modified data and whether the track has one or more holes. In response to determining that the track has modified data and the track has one or more holes, an input on a plurality of attributes of a computing environment in which the track is processed is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates whether one or more holes are to be filled in the track. In response to determining that the output value indicates that one or more holes are to be filled in the track, the track is staged to the cache from a storage drive.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
  • Patent number: 10891227
    Abstract: Provided are a computer program product, system, and method for determining modified tracks to destage during a cache scan. A cache scan is initiated at a time interval to determine modified tracks to destage from a cache to the first or second storage. A modified track is processed during the cache scan. The modified track is destaged to the first storage in response to the modified track being stored in the first storage. A determination is made as to whether there was a host write to the second storage since a previous cache scan, in response to the modified track being stored in the second storage. The modified track is destaged to the second storage in response to determining that there was a host write to the second storage since the previous cache scan.
    Type: Grant
    Filed: November 29, 2018
    Date of Patent: January 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Warren K. Stanley, Edward H. Lin, Kevin J. Ash, Matthew G. Borlick, Kyler A. Anderson
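
The destage rule in patent 10891227 above reduces to a small conditional per modified track; the sketch below models the first and second storage as tags on each track and takes the destage routine as a callable, all of which is illustrative rather than taken from the filing.

```python
FIRST, SECOND = 1, 2

def scan_modified_tracks(modified_tracks, host_wrote_to_second_since_last_scan, destage):
    """modified_tracks is assumed to be an iterable of (track, storage) pairs."""
    for track, storage in modified_tracks:
        if storage == FIRST:
            destage(track, storage)    # tracks for the first storage are always destaged
        elif host_wrote_to_second_since_last_scan:
            destage(track, storage)    # second-storage tracks only after a host write
```
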
  • Patent number: 10884849
    Abstract: Provided are a computer program product, system, and method for mirroring information on modified data from a primary storage controller to a secondary storage controller for the secondary storage controller to use to calculate parity data. New primary parity data is calculated from modified data for a primary group of tracks in the primary storage and from difference data computed from the modified data and a pre-modified version of the modified data. The difference data and one of the modified data and the new primary parity data are sent to the secondary storage controller to cause the secondary storage controller to write new secondary parity data and the modified data to a secondary group of tracks at the secondary storage. The modified data and the new primary parity data are written to the primary group of tracks in the primary storage.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, John C. Elliott
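
Patent 10884849 above relies on standard XOR parity arithmetic: the difference between old and new data is enough to update parity without reading the untouched strips, which is also what makes the difference data sufficient for the secondary controller. A small worked example (the values are arbitrary):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length buffers."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data strips and their XOR parity.
data0, data1 = bytes([0b1010, 0b1100]), bytes([0b0101, 0b0011])
parity = xor_bytes(data0, data1)

# A write modifies data0.  The difference data lets either controller update
# parity without reading the untouched strip (data1).
new_data0  = bytes([0b0110, 0b1100])
diff       = xor_bytes(data0, new_data0)      # difference data
new_parity = xor_bytes(parity, diff)          # parity updated from the difference

# Same result as recomputing parity from the full stripe.
assert new_parity == xor_bytes(new_data0, data1)
```
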
  • Patent number: 10884936
    Abstract: Provided are a computer program product, system, and method for updating a track format table used to provide track format codes for cache control blocks with more frequently accessed track format metadata. A track format table associates track format codes with track format metadata. Each instance of the track format metadata indicates a layout of data in a track. Cache control blocks for tracks in the cache include track format codes associated with the track format metadata of the tracks in the cache. Track format access information indicates accesses of track format metadata not included in the track format table. Track format metadata indicated in the track format access information that is not in the track format table is added to the track format table and associated with a track format code, based on a number of accesses of the track format metadata indicated in the track format access information.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Beth A. Peterson
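
A compact way to picture the track format table in patent 10884936 is a small code-to-metadata map plus access counts for formats that did not fit. The Python below is a sketch under assumptions: the capacity, the promotion threshold, the least-accessed eviction rule, and the requirement that metadata be hashable are all illustrative choices, not the patent's stated policy.

```python
class TrackFormatTable:
    """Map short track format codes to track format metadata and promote metadata
    that is accessed often enough while not in the table."""

    def __init__(self, capacity=8, promote_after=3):
        self.capacity = capacity
        self.promote_after = promote_after
        self.code_to_metadata = {}    # code -> track format metadata (hashable)
        self.metadata_to_code = {}
        self.miss_counts = {}         # track format access information (not in table)
        self.table_hits = {}          # accesses per code while in the table

    def code_for(self, metadata):
        """Return the code for this format, promoting it once accessed often enough."""
        if metadata in self.metadata_to_code:
            code = self.metadata_to_code[metadata]
            self.table_hits[code] = self.table_hits.get(code, 0) + 1
            return code
        self.miss_counts[metadata] = self.miss_counts.get(metadata, 0) + 1
        if self.miss_counts[metadata] >= self.promote_after:
            self._promote(metadata)
            return self.metadata_to_code[metadata]
        return None                   # caller keeps the full metadata itself

    def _promote(self, metadata):
        if len(self.code_to_metadata) >= self.capacity:
            # Evict the least-accessed entry to make room (one plausible policy).
            victim = min(self.table_hits, key=self.table_hits.get)
            old_metadata = self.code_to_metadata.pop(victim)
            del self.metadata_to_code[old_metadata]
            del self.table_hits[victim]
            code = victim
        else:
            code = len(self.code_to_metadata)
        self.code_to_metadata[code] = metadata
        self.metadata_to_code[metadata] = code
        self.table_hits[code] = self.miss_counts.pop(metadata)
```
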
  • Patent number: 10866901
    Abstract: A method for invalidating a track of data on a storage drive in preparation to unpin the track is disclosed. In one embodiment, such a method includes invalidating certain metadata associated with a track of data residing on a storage drive of a storage system. The method further creates, in cache of the storage system, an invalid track image associated with the track. The method destages, from the cache, the invalid track image to the storage drive. Once the invalid track image is destaged, the method may unpin the track in cache, thereby enabling destages of the track from the cache to the storage drive going forward. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: June 2, 2018
    Date of Patent: December 15, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Kyler A. Anderson
  • Patent number: 10866752
    Abstract: A method for reclaiming storage space in RAID arrays made up of heterogeneous storage drives is disclosed. In one embodiment, such a method includes determining a most common storage capacity for a set of storage drives utilized in a storage system. The method further identifies physical storage drives from the set that contain unused storage space. The method pools the unused storage space of the physical storage drives to create virtual storage drives with storage capacities substantially equal to the most common storage capacity. The method then utilizes the virtual storage drives in existing or new RAID arrays. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: December 15, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, Karl A. Nielsen
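
The space-reclaiming arithmetic in patent 10866752 is easy to show with numbers. The sketch below assumes each physical drive contributes one RAID member of the most common capacity and that everything above that is poolable unused space; real extent mapping and RAID placement are omitted.

```python
from collections import Counter

def plan_virtual_drives(drive_capacities_tb):
    """Return the most common drive capacity and how many virtual drives of that
    capacity can be built from the pooled leftover space of the larger drives."""
    common = Counter(drive_capacities_tb).most_common(1)[0][0]
    spare = sum(c - common for c in drive_capacities_tb if c > common)
    return common, spare // common

# Example: six 8 TB drives and four 16 TB drives.
common, n_virtual = plan_virtual_drives([8] * 6 + [16] * 4)
print(common, n_virtual)   # -> 8 4  (32 TB of leftover space yields four 8 TB virtual drives)
```
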
  • Publication number: 20200371950
    Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues using a machine learning module. Cache segment management information, related to management of segments in the local queues and accesses to the global queue to transfer cache segments between the local queues and the global queue, is provided to a machine learning module to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to transfer between a local queue and the global queue. The optimum number parameter and the transfer number parameter are sent to a processing unit having a local queue to cause the processing unit to transfer the transfer number parameter of cache segments between the local queue and the global queue.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
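
The local/global queue interplay in this filing (and the training-focused companion filing that follows) can be sketched briefly. In the Python below, `optimum` and `transfer_count` stand in for the two parameters the machine learning module is described as producing; locking, the training loop, and all names are illustrative.

```python
from collections import deque

class LocalQueue:
    """Per-processor queue of free cache segments balanced against a shared global queue."""

    def __init__(self, global_queue, optimum=64, transfer_count=16):
        self.segments = deque()
        self.global_queue = global_queue       # shared deque of free cache segments
        self.optimum = optimum
        self.transfer_count = transfer_count

    def retune(self, optimum, transfer_count):
        """Apply newly predicted parameters (e.g., after the ML module is re-run)."""
        self.optimum, self.transfer_count = optimum, transfer_count

    def allocate(self):
        """Take a segment, refilling in bulk from the global queue when empty."""
        if not self.segments:
            for _ in range(self.transfer_count):
                if not self.global_queue:
                    break
                self.segments.append(self.global_queue.popleft())
        return self.segments.popleft() if self.segments else None

    def free(self, segment):
        """Return a segment, draining surplus segments to the global queue in bulk."""
        self.segments.append(segment)
        if len(self.segments) > self.optimum:
            for _ in range(self.transfer_count):
                if not self.segments:
                    break
                self.global_queue.append(self.segments.popleft())
```
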
  • Publication number: 20200371959
    Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues by training a machine learning module. A machine learning module is provided input comprising cache segment management information, related to management of segments in the local queues by the processing units and accesses of the global queue to transfer cache segments between the local queues and the global queue, in order to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to move between a local queue and the global queue. The machine learning module is retrained based on the cache segment management information to output an adjusted transfer number parameter and an adjusted optimum number parameter for the processing units.
    Type: Application
    Filed: May 21, 2019
    Publication date: November 26, 2020
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
  • Patent number: 10841395
    Abstract: Provided are a computer program product, system, and method for populating a secondary cache with unmodified tracks in a primary cache when redirecting host access from a primary server to a secondary server. Host access to tracks is redirected from the primary server to the secondary server. Prior to the redirecting, updates to tracks in the primary storage were replicated to the secondary server. After the redirecting, host access is directed to the secondary server and the secondary storage. A secondary cache at the secondary server is populated with the unmodified tracks that were in a primary cache at the primary server when host access was redirected, to make them available to the host access redirected to the secondary server.
    Type: Grant
    Filed: March 25, 2019
    Date of Patent: November 17, 2020
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos, Brian A. Rinaldi
  • Publication number: 20200356489
    Abstract: A method for improving cache hit ratios for selected volumes within a storage system is disclosed. In one embodiment, such a method includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a “non-favored” LRU list that contains entries associated with non-favored storage elements and designates an order in which the non-favored storage elements are evicted from the cache. The method also maintains one or more “favored” LRU lists that contain entries associated with favored storage elements and designate an order in which the favored storage elements are evicted from the cache. Each “favored” LRU list is associated with favored storage elements that have a different preferred residency time in the cache. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: May 12, 2019
    Publication date: November 12, 2020
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Kyler A. Anderson
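
Both of the favored-volume filings (this entry and the one that follows) revolve around keeping favored storage elements resident in cache longer than non-favored ones. The sketch below models that with one non-favored LRU list plus named favored lists, each carrying a preferred residency time; the specific eviction order is an assumption made for illustration.

```python
import time
from collections import OrderedDict

class FavoredCache:
    """One non-favored LRU list plus favored LRU lists, each with its own
    preferred residency time in seconds."""

    def __init__(self, favored_residency=None):
        favored_residency = favored_residency or {"favored": 300.0}
        self.lists = {"non-favored": OrderedDict()}
        self.residency = {"non-favored": 0.0}
        for name, seconds in favored_residency.items():
            self.lists[name] = OrderedDict()
            self.residency[name] = seconds

    def insert(self, key, value, tier="non-favored"):
        self.lists[tier][key] = (value, time.monotonic())

    def evict_one(self):
        """Evict a favored element only once it has outlived its preferred residency;
        otherwise evict the oldest non-favored element."""
        now = time.monotonic()
        for name, lru in self.lists.items():
            if name == "non-favored" or not lru:
                continue
            _, (_, inserted) = next(iter(lru.items()))
            if now - inserted >= self.residency[name]:
                return lru.popitem(last=False)
        non_favored = self.lists["non-favored"]
        return non_favored.popitem(last=False) if non_favored else None
```
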
  • Publication number: 20200356494
    Abstract: A method for improving cache hit ratios for selected volumes when using synchronous I/O is disclosed. In one embodiment, such a method includes establishing, in cache, a first set of non-favored storage elements from non-favored storage areas. The method further establishes, in the cache, a second set of favored storage elements from favored storage areas. The method calculates a life expectancy for the non-favored storage elements to reside in the cache prior to eviction. The method further executes an eviction policy for the cache wherein the favored storage elements are maintained in the cache for longer than the life expectancy of the non-favored storage elements. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: May 12, 2019
    Publication date: November 12, 2020
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Beth A. Peterson, Kevin J. Ash, Kyler A. Anderson