Patents by Inventor Beth A. Peterson

Beth A. Peterson has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11157418
    Abstract: A method for improving cache hit ratios dedicates, within a cache, a portion of the cache to prefetched data elements. The method maintains a high priority LRU list designating an order in which high priority prefetched data elements are demoted, and a low priority LRU list designating an order in which low priority prefetched data elements are demoted. The method calculates, for the high priority LRU list, a first score based on a first priority and a first cache hit metric. The method calculates, for the low priority LRU list, a second score based on a second priority and a second cache hit metric. The method demotes, from the cache, a prefetched data element from the high priority LRU list or the low priority LRU list depending on which of the first score and the second score is lower. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 9, 2020
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Beth A. Peterson, Kyler A. Anderson
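The scored LRU demotion described in the abstract above (patent 11157418) can be illustrated with a minimal Python sketch. The class name `PrefetchCache`, the use of ordered dictionaries as LRU lists, and the particular scoring formula (priority weighted by hit ratio) are assumptions made for illustration, not details taken from the patent.

```python
from collections import OrderedDict

class PrefetchCache:
    """Minimal sketch: demote prefetched data elements from whichever of two
    LRU lists has the lower score (a priority combined with a hit metric)."""

    def __init__(self, high_priority=2.0, low_priority=1.0):
        self.high_lru = OrderedDict()   # oldest (first) entry is demoted next
        self.low_lru = OrderedDict()
        self.high_priority, self.low_priority = high_priority, low_priority
        # Hit/access counters would be updated by the cache's hit/miss path.
        self.high_hits = self.high_accesses = 0
        self.low_hits = self.low_accesses = 0

    def insert_prefetched(self, key, high_priority):
        (self.high_lru if high_priority else self.low_lru)[key] = True

    def _score(self, priority, hits, accesses):
        # One possible scoring (an assumption): hit ratio weighted by priority.
        return priority * (hits / accesses if accesses else 0.0)

    def demote_one(self):
        """Demote from the LRU list whose score is currently lower."""
        high = self._score(self.high_priority, self.high_hits, self.high_accesses)
        low = self._score(self.low_priority, self.low_hits, self.low_accesses)
        victim = self.high_lru if high < low else self.low_lru
        if not victim:                           # fall back to the other list
            victim = self.low_lru if victim is self.high_lru else self.high_lru
        if victim:
            key, _ = victim.popitem(last=False)  # least recently used entry
            return key
        return None
```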
  • Patent number: 11150840
    Abstract: A method for pinning selected volumes within a heterogeneous cache is disclosed. The method maintains a heterogeneous cache made up of a higher performance portion and a lower performance portion. A list of pinned volumes is received that are provided higher priority than other volumes within the heterogeneous cache. The method dedicates, within the lower performance portion, a storage area to accommodate the pinned volumes and prestages the pinned volumes within the storage area. In certain embodiments, an LRU list is maintained that indicates an order in which storage elements of the pinned volumes are demoted from the storage area. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 9, 2020
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, Beth A. Peterson
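As a rough illustration of the pinned-volume handling in the abstract above (patent 11150840), the sketch below reserves a storage area in the lower performance portion of the cache and prestages the pinned volumes into it. The class name, the sizing of the dedicated area, and the demotion policy are illustrative assumptions, not taken from the patent.

```python
class HeterogeneousCache:
    """Minimal sketch: dedicate an area of the lower performance portion to
    pinned volumes, prestage them, and track demotion order with an LRU list."""

    def __init__(self, lower_capacity_bytes, pinned_volumes):
        self.pinned_volumes = set(pinned_volumes)
        # Sizing the dedicated area to half of the lower performance portion
        # is purely an illustrative assumption.
        self.pinned_area_capacity = lower_capacity_bytes // 2
        self.pinned_area = {}   # volume -> prestaged storage elements
        self.pinned_lru = []    # order in which pinned elements are demoted

    def prestage(self, volume, elements):
        if volume in self.pinned_volumes:
            self.pinned_area[volume] = list(elements)
            self.pinned_lru.extend((volume, e) for e in elements)

    def demote_one(self):
        """Demote the least recently used storage element of a pinned volume."""
        if not self.pinned_lru:
            return None
        volume, element = self.pinned_lru.pop(0)
        self.pinned_area[volume].remove(element)
        return volume, element
```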
  • Patent number: 11151035
    Abstract: A method for improving cache hit ratios for selected storage elements within a storage system is disclosed. In one embodiment, such a method includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a first LRU list containing entries associated with non-favored storage elements and designating an order in which the non-favored storage elements are evicted from the cache, and a second LRU list containing entries associated with favored storage elements and designating an order in which the favored storage elements are evicted from the cache. The method moves entries between the first LRU list and the second LRU list as favored storage elements are changed to non-favored storage elements and vice versa. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Beth A. Peterson, Kevin J. Ash, Kyler A. Anderson
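A minimal Python sketch of the two-list scheme in the abstract above (patent 11151035) follows; the class name, the eviction policy, and the use of ordered dictionaries as LRU lists are illustrative assumptions rather than details from the patent.

```python
from collections import OrderedDict

class FavoredCache:
    """Minimal sketch: separate LRU lists for favored and non-favored storage
    elements, with entries moved between lists when a designation changes."""

    def __init__(self):
        self.non_favored_lru = OrderedDict()  # evicted more aggressively
        self.favored_lru = OrderedDict()      # retained in cache longer

    def access(self, element, favored):
        lru = self.favored_lru if favored else self.non_favored_lru
        lru.pop(element, None)
        lru[element] = True                   # most recently used at the end

    def change_designation(self, element, now_favored):
        """Move an entry between the LRU lists when its status changes."""
        src = self.non_favored_lru if now_favored else self.favored_lru
        dst = self.favored_lru if now_favored else self.non_favored_lru
        if element in src:
            src.pop(element)
            dst[element] = True

    def evict_one(self):
        # Illustrative policy: evict a non-favored element first and fall
        # back to a favored element only when none remain.
        for lru in (self.non_favored_lru, self.favored_lru):
            if lru:
                element, _ = lru.popitem(last=False)
                return element
        return None
```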
  • Patent number: 11144213
    Abstract: A metadata track stores metadata corresponding to both a first customer data track and a second customer data track. In response to receiving a first request to perform a write on the first customer data track from a two track write process, exclusive access to the first customer data track is provided to the first request, and shared access to the metadata track is provided to the first request. In response to receiving a second request to perform a write on the second customer data track from the two track write process, exclusive access to the second customer data track is provided to the second request, and shared access to the metadata track is provided to the second request prior to providing exclusive access to the metadata track to at least one process that is waiting for exclusive access to the metadata track.
    Type: Grant
    Filed: January 13, 2020
    Date of Patent: October 12, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Jared M. Minch, Beth A. Peterson
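The access pattern in the abstract above (patent 11144213) resembles a reader-writer lock in which a two-track write process obtains shared access to the metadata track even while other processes wait for exclusive access. The sketch below is a simplified illustration using Python threading primitives, not the patented mechanism; the per-customer-data-track exclusive locks are omitted.

```python
import threading

class MetadataTrackLock:
    """Simplified sketch: grant shared access to new requesters whenever no
    exclusive holder exists, even if exclusive requests are already waiting."""

    def __init__(self):
        self._cond = threading.Condition()
        self._shared_holders = 0
        self._exclusive_held = False

    def acquire_shared(self):
        with self._cond:
            while self._exclusive_held:
                self._cond.wait()
            self._shared_holders += 1

    def release_shared(self):
        with self._cond:
            self._shared_holders -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._exclusive_held or self._shared_holders:
                self._cond.wait()
            self._exclusive_held = True

    def release_exclusive(self):
        with self._cond:
            self._exclusive_held = False
            self._cond.notify_all()
```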
  • Patent number: 11132306
    Abstract: A method for removing stale messages from a storage controller is disclosed. In one embodiment, such a method includes querying, by a host system, a storage controller to determine ownership of a lock on the storage controller. The host system receives, in response to the query, information indicating that the lock has been granted to the host system. This allows the host system to treat the lock as being granted even though the host system has not received a “lock granted” message from the storage controller. Before using the lock, the host system sends, to the storage controller, an instruction to clear any stale messages on the storage controller related to the lock. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: September 28, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth A. Peterson, Christopher D. Filachek, Mark A. Lehrer
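A minimal sketch of the host-side flow in the abstract above (patent 11132306) is shown below. The method names on the storage controller object (`query_lock_owner`, `clear_stale_messages`) are hypothetical stand-ins invented for illustration.

```python
def use_lock(host_id, storage_controller, lock_id):
    """Minimal sketch: treat the lock as granted based on a query result,
    then clear stale messages related to the lock before using it."""
    owner = storage_controller.query_lock_owner(lock_id)
    if owner != host_id:
        return False                               # lock is held elsewhere
    # The host may never have received the "lock granted" message, so it
    # asks the controller to clear any stale messages for this lock first.
    storage_controller.clear_stale_messages(lock_id)
    return True                                    # safe to use the lock
```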
  • Publication number: 20210247930
    Abstract: A method for pinning selected volumes within a heterogeneous cache is disclosed. The method maintains a heterogeneous cache made up of a higher performance portion and a lower performance portion. A list of pinned volumes is received that are provided higher priority than other volumes within the heterogeneous cache. The method dedicates, within the lower performance portion, a storage area to accommodate the pinned volumes and prestages the pinned volumes within the storage area. In certain embodiments, an LRU list is maintained that indicates an order in which storage elements of the pinned volumes are demoted from the storage area. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: February 9, 2020
    Publication date: August 12, 2021
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, Beth A. Peterson
  • Publication number: 20210248079
    Abstract: A method to prevent starvation of non-favored volumes in cache is disclosed. In one embodiment, such a method includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. A cache demotion algorithm is used to retain the favored storage elements in the cache longer than the non-favored storage elements. The method designates a maximum amount of storage space that the favored storage elements are permitted to consume in the cache. In preparation to free storage space in the cache, the method determines whether an amount of storage space consumed by the favored storage elements in the cache has reached the maximum amount. If so, the method frees storage space in the cache by demoting favored storage elements. If not, the method frees storage space in the cache in accordance with the cache demotion algorithm. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: February 9, 2020
    Publication date: August 12, 2021
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Beth A. Peterson
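The space-freeing decision in the abstract above (publication 20210248079) can be sketched as below; the data layout, the plain-LRU fallback, and the function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CacheElement:
    name: str
    size: int
    favored: bool

def select_victim(lru_order, max_favored_bytes):
    """Minimal sketch: choose the element to demote when freeing space.
    `lru_order` lists elements from least to most recently used."""
    favored_bytes = sum(e.size for e in lru_order if e.favored)
    if favored_bytes >= max_favored_bytes:
        # Favored elements have reached their allowance, so demote the least
        # recently used favored element to avoid starving non-favored ones.
        return next((e for e in lru_order if e.favored), lru_order[0])
    # Otherwise follow the ordinary cache demotion algorithm (plain LRU
    # here, purely as an illustration).
    return lru_order[0]
```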
  • Publication number: 20210248086
    Abstract: A method for improving cache hit ratios dedicates, within a cache, a portion of the cache to prefetched data elements. The method maintains a high priority LRU list designating an order in which high priority prefetched data elements are demoted, and a low priority LRU list designating an order in which low priority prefetched data elements are demoted. The method calculates, for the high priority LRU list, a first score based on a first priority and a first cache hit metric. The method calculates, for the low priority LRU list, a second score based on a second priority and a second cache hit metric. The method demotes, from the cache, a prefetched data element from the high priority LRU list or the low priority LRU list depending on which of the first score and the second score is lower. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: February 9, 2020
    Publication date: August 12, 2021
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Beth A. Peterson, Kyler A. Anderson
  • Publication number: 20210248080
    Abstract: A method for depopulating data from cache includes receiving a command to depopulate the cache of selected data. The command has an application identifier as a parameter. The application identifier is associated with an application that previously accessed the data. The method searches the cache for data elements that are marked with the application identifier and removes the data elements from the cache. In certain embodiments, the data elements are marked with a first application identifier associated with an application that staged the data elements into the cache, and a second application identifier associated with an application that last accessed the data elements. In certain embodiments, removing the data elements from the cache comprises only removing the data elements from the cache if the application identifier matches one or more of the first application identifier and the second application identifier. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: February 9, 2020
    Publication date: August 12, 2021
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kyler A. Anderson, Beth A. Peterson
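The depopulate command in the abstract above (publication 20210248080) might be sketched as follows; the two-marker layout and the dictionary standing in for the cache are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CachedElement:
    staged_by: str          # application that staged the element into cache
    last_accessed_by: str   # application that last accessed the element

def depopulate(cache, app_id):
    """Minimal sketch: remove cached elements whose markers match the
    application identifier passed with the depopulate command."""
    for key, element in list(cache.items()):
        if app_id in (element.staged_by, element.last_accessed_by):
            del cache[key]

# Example: only "t1" is removed, because "appA" staged it.
cache = {
    "t1": CachedElement(staged_by="appA", last_accessed_by="appB"),
    "t2": CachedElement(staged_by="appC", last_accessed_by="appC"),
}
depopulate(cache, "appA")
```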
  • Patent number: 11086559
    Abstract: Provided are techniques for cloud based store and restore with copy services. A store command to transfer data from one or more tracks of a volume to cloud storage is received. With track services, data for the one or more tracks in the volume is retrieved by emulating a host read. With a cloud data movement engine, the data for the one or more tracks is converted to data for one or more objects. With the cloud data movement engine, the one or more objects are stored in the cloud storage.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: August 10, 2021
    Assignee: International Business Machines Corporation
    Inventors: Matthew R. Craig, Edward H. Lin, Beth A. Peterson, Qiang Xie
  • Patent number: 11080197
    Abstract: Provided are a computer program product, system, and method for managing access requests from a host to tracks in storage. A cursor is set to point to a track in a range of tracks established for sequential accesses. Cache resources are accessed for the cache for tracks in the range of tracks in advance of processing access requests to the range of tracks. Indication is received of a subset of tracks in the range of tracks for subsequent access transactions and a determination is made whether the cursor points to a track in the subset of tracks. The cursor is set to point to a track in the subset of tracks and cache resources are accessed for tracks in the subset of tracks for anticipation of access transactions to tracks in the subset of tracks.
    Type: Grant
    Filed: October 15, 2019
    Date of Patent: August 3, 2021
    Assignee: International Business Machines Corporation
    Inventors: Ronald E. Bretschneider, Susan K. Candelaria, Beth A. Peterson, Dale F. Riedy, Peter G. Sutton, Harry M. Yudenfriend
  • Publication number: 20210224199
    Abstract: A method for improving cache hit ratios for selected storage elements within a storage system includes storing, in a cache of a storage system, non-favored storage elements and favored storage elements. The favored storage elements are retained in the cache longer than the non-favored storage elements. The method maintains a first LRU list containing entries associated with non-favored storage elements and designating an order in which the non-favored storage elements are evicted from the cache, and a second LRU list containing entries associated with favored storage elements and designating an order in which the favored storage elements are evicted from the cache. The method periodically scans the first LRU list for non-favored storage elements that have changed to favored storage elements, and the second LRU list for favored storage elements that have changed to non-favored storage elements. A corresponding system and computer program product are also disclosed.
    Type: Application
    Filed: January 20, 2020
    Publication date: July 22, 2021
    Applicant: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Matthew G. Borlick, Beth A. Peterson
  • Patent number: 11055234
    Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues by training a machine learning module. A machine learning module is provided input comprising cache segment management information related to management of segments in the local queues by the processing units and accesses of the global queue to transfer cache segments between the local queues and the global queue to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to move between a local queue and the global queue. The machine learning module is retrained based on the cache segment management information to output an adjusted transfer number parameter and an adjusted optimum number parameter for the processing units.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: July 6, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
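A minimal sketch of how the two output parameters described in the abstract above (patent 11055234) could drive rebalancing between a local queue and the global queue appears below; the machine learning module itself is omitted and its outputs are passed as plain arguments, which is an assumption for illustration.

```python
def rebalance(local_queue, global_queue, optimum_number, transfer_number):
    """Minimal sketch: keep roughly `optimum_number` cache segments in the
    local queue, moving `transfer_number` segments at a time to or from the
    global queue (both values would come from the machine learning module)."""
    if len(local_queue) > optimum_number:
        # Too many segments locally: return a batch to the global queue.
        for _ in range(min(transfer_number, len(local_queue))):
            global_queue.append(local_queue.pop())
    elif len(local_queue) < optimum_number:
        # Running low locally: take a batch from the global queue.
        for _ in range(min(transfer_number, len(global_queue))):
            local_queue.append(global_queue.pop())
```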
  • Patent number: 11048641
    Abstract: Provided are a computer program product, system, and method for managing cache segments between a global queue and a plurality of local queues using a machine learning module. Cache segment management information related to management of segments in the local queues and accesses to the global queue to transfer cache segments between the local queues and the global queue, are provided to a machine learning module to output an optimum number parameter comprising an optimum number of segments to maintain in a local queue and a transfer number parameter comprising a number of cache segments to transfer between a local queue and the global queue. The optimum number parameter and the transfer number parameter are sent to a processing unit having a local queue to cause the processing unit to transfer the transfer number parameter of cache segments between the local queue to the global queue.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: June 29, 2021
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kevin J. Ash, Beth A. Peterson, Matthew R. Craig
  • Patent number: 11042491
    Abstract: A request is received to perform a point in time copy operation from a source volume to a space efficient target volume. A controller copies data stored in a group of data storage units, from the source volume to a non-volatile storage, to preserve the point in time copy operation. A background process asynchronously copies the data from the non-volatile storage to the space efficient target volume to commit a physical point in time copy of the data from the source volume to the target volume.
    Type: Grant
    Filed: November 4, 2013
    Date of Patent: June 22, 2021
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Suguang Li, Beth A. Peterson
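The preserve-then-commit flow in the abstract above (patent 11042491) might be sketched as below, with dictionaries standing in for the source volume, non-volatile storage, and space efficient target, and a thread standing in for the background process; all of these stand-ins are assumptions.

```python
import threading

def point_in_time_copy(source, target, nvs, storage_units):
    """Minimal sketch: copy the group of storage units to non-volatile
    storage to preserve the point in time copy, then commit them to the
    space efficient target volume with a background copy."""
    for unit in storage_units:
        nvs[unit] = source[unit]              # synchronous preserve step

    def drain():
        for unit in storage_units:
            target[unit] = nvs.pop(unit)      # asynchronous commit step

    worker = threading.Thread(target=drain, daemon=True)
    worker.start()
    return worker
```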
  • Patent number: 11029849
    Abstract: Provided are techniques for handling cache and Non-Volatile Storage (NVS) out of sync writes. At an end of a write for a cache track of a cache node, a cache node uses cache write statistics for the cache track of the cache node and Non-Volatile Storage (NVS) write statistics for a corresponding NVS track of an NVS node to determine that writes to the cache track and to the corresponding NVS track are out of sync. The cache node sets an out of sync indicator in a cache data control block for the cache track. The cache node sends a message to the NVS node to set an out of sync indicator in an NVS data control block for the corresponding NVS track. The cache node sets the cache track as pinned non-retryable due to the write being out of sync and reports possible data loss to error logs.
    Type: Grant
    Filed: November 21, 2019
    Date of Patent: June 8, 2021
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Beth A. Peterson
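The end-of-write check in the abstract above (patent 11029849) is sketched below with dictionaries standing in for the data control blocks; the field names and the comparison of write statistics as plain values are assumptions for illustration.

```python
def end_of_write_check(cache_cb, nvs_cb, cache_write_stats, nvs_write_stats, error_log):
    """Minimal sketch: if cache and NVS write statistics disagree at the end
    of a write, mark both data control blocks out of sync, pin the cache
    track non-retryable, and report possible data loss."""
    if cache_write_stats != nvs_write_stats:
        cache_cb["out_of_sync"] = True            # set in the cache data control block
        nvs_cb["out_of_sync"] = True              # set via a message to the NVS node
        cache_cb["pinned_non_retryable"] = True   # write is out of sync
        error_log.append("possible data loss on track %s" % cache_cb.get("track_id"))
```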
  • Patent number: 11030125
    Abstract: A request is received to perform a point in time copy operation from a source volume to a space efficient target volume. A controller copies data stored in a group of data storage units, from the source volume to a non-volatile storage, to preserve the point in time copy operation. A background process asynchronously copies the data from the non-volatile storage to the space efficient target volume to commit a physical point in time copy of the data from the source volume to the target volume.
    Type: Grant
    Filed: February 5, 2013
    Date of Patent: June 8, 2021
    Assignee: International Business Machines Corporation
    Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Suguang Li, Beth A. Peterson
  • Patent number: 11010248
    Abstract: Provided are a method, system, and computer program product in which a storage controller receives a first write command with a first token over a first interface from a host computational device. In response to a failure of the first write command in the storage controller, the storage controller retains selected resources for reuse for a retry of the first write command, wherein the retry of the first write command is expected from the host computational device over a second interface that is a slower communication link than the first interface. In response to receiving, by the storage controller, a second write command with a second token over the second interface, wherein the second token is identical to the first token, the storage controller determines that the second write command is a retry of the first write command and reuses the retained selected resources for executing the second write command.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: May 18, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth A. Peterson, Kevin J. Ash, Lokesh M. Gupta, Chung M. Fung
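The controller-side token handling in the abstract above (patent 11010248) can be sketched as below; the class shape, the use of `IOError` as the failure signal, and what counts as "retained selected resources" are all illustrative assumptions.

```python
class TokenRetryController:
    """Minimal sketch: on a failed write, retain selected resources keyed by
    the write's token so a retry over the slower interface can reuse them."""

    def __init__(self):
        self.retained = {}            # token -> resources kept for a retry

    def handle_write(self, token, data):
        try:
            self._do_write(data)
        except IOError:
            # Keep the selected resources; a retry carrying the same token
            # is expected over the second (slower) interface.
            self.retained[token] = {"buffers": data}
            raise

    def handle_retry(self, token, data):
        resources = self.retained.pop(token, None)
        if resources is not None:
            # Same token as the failed write: reuse the retained resources.
            data = resources["buffers"]
        self._do_write(data)

    def _do_write(self, data):
        pass                          # placeholder for the actual write path
```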
  • Patent number: 10996891
    Abstract: A host computational device transmits a first write command with a first token over a first interface to a storage controller. In response to receiving an indication by the host computational device that the first write command has failed in the storage controller, the host computational device transmits a second write command with a second token over a second interface to the storage controller, wherein the second write command is a retry of the first write command that failed, wherein the second token is identical to the first token, and wherein the second interface is a slower communication link than the first interface.
    Type: Grant
    Filed: February 28, 2019
    Date of Patent: May 4, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth A. Peterson, Kevin J. Ash, Lokesh M. Gupta, Chung M. Fung
  • Patent number: 10990315
    Abstract: Write transfer resource management in a data storage system in accordance with the present description includes overdue write transfer management logic which detects whether or not an established write set has become stale. In one embodiment, a determination is made as a function of whether a write transfer from a host and associated with an established write transfer set is overdue as measured by a time-out period of time. Upon determination that an established write transfer set has become stale, the stale write set is removed and the resources associated with the removed write set are freed for use by other write sets, significantly improving system performance. Other features and aspects may be realized, depending upon the particular application.
    Type: Grant
    Filed: December 3, 2018
    Date of Patent: April 27, 2021
    Assignee: International Business Machines Corporation
    Inventors: Beth A. Peterson, Dale F. Riedy, John R. Paveza, Ronald E. Bretschneider, Brian Lee, Chung M. Fung, Susan K. Candelaria
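The staleness check in the abstract above (patent 10990315) is sketched below; the dictionary layout of a write set and the use of a monotonic clock are assumptions made for illustration.

```python
import time

def remove_stale_write_sets(write_sets, timeout_seconds):
    """Minimal sketch: an established write set whose expected transfer has
    not arrived within the time-out period is stale; remove it and free its
    resources for use by other write sets."""
    now = time.monotonic()
    for write_set in list(write_sets):
        if now - write_set["last_transfer_time"] > timeout_seconds:
            write_set["resources"].clear()   # free resources held by the stale set
            write_sets.remove(write_set)
```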