Patents by Inventor Lokesh M. Gupta

Lokesh M. Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11314649
    Abstract: In response to an end of track access for a track in a cache, a determination is made as to whether the track has modified data and whether the track has one or more holes. In response to determining that the track has modified data and the track has one or more holes, input on a plurality of attributes of a computing environment in which the track is processed is provided to a machine learning module to produce an output value. A determination is made as to whether the output value indicates that one or more holes are to be filled in the track. In response to determining that the output value indicates that one or more holes are to be filled in the track, the track is staged to the cache from a storage drive.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
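    Example sketch: The abstract above gates hole filling on a machine learning module's output at end-of-track-access. Below is a minimal Python sketch of that control flow; the scoring rule, threshold, and field names are assumptions for illustration, not from the patent.
      FILL_THRESHOLD = 0.5  # assumed cutoff for the module's output value

      def score_attributes(attrs):
          """Hypothetical stand-in for the machine learning module: maps
          computing-environment attributes to an output value in [0, 1]."""
          # Toy heuristic: spare task capacity and few holes favor filling.
          return 0.7 * attrs["free_task_ratio"] + 0.3 * (1.0 - attrs["hole_ratio"])

      def end_of_track_access(track, attrs, stage_track):
          """Apply the decision described in the abstract."""
          if not (track["modified"] and track["holes"]):
              return False                    # nothing to do for this track
          output_value = score_attributes(attrs)
          if output_value >= FILL_THRESHOLD:  # output indicates holes should be filled
              stage_track(track)              # stage the track from storage into the cache
              return True
          return False

      # Example use with a fake staging callback.
      track = {"id": 42, "modified": True, "holes": [(3, 5)]}
      attrs = {"free_task_ratio": 0.8, "hole_ratio": 0.1}
      end_of_track_access(track, attrs, stage_track=lambda t: print("staging track", t["id"]))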
  • Patent number: 11314691
    Abstract: A method for improving asynchronous data replication between a primary storage system and a secondary storage system maintains a cache in the primary storage system. The cache includes a higher performance portion and a lower performance portion. The method monitors, in the cache, unmirrored data elements needing to be mirrored, but that have not yet been mirrored, from the primary storage system to the secondary storage system. The method maintains a regular LRU list designating an order in which data elements are demoted from the cache. The method determines whether a data element at an LRU end of the regular LRU list is an unmirrored data element. In the event the data element at the LRU end is an unmirrored data element, the method moves the data element from the higher performance portion to the lower performance portion. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kevin J. Ash, Kyler A. Anderson
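    Example sketch: The abstract above keeps unmirrored data elements cached by moving them from the higher performance portion to the lower performance portion when they reach the LRU end. A rough Python sketch of that demotion step follows; the two portions, the LRU list, and the flag names are modeled with plain containers and are assumptions for illustration.
      from collections import OrderedDict

      class TwoTierCache:
          """Toy model: higher and lower performance portions plus a regular LRU list."""
          def __init__(self):
              self.higher = {}          # element id -> data (e.g. DRAM)
              self.lower = {}           # element id -> data (e.g. a slower memory tier)
              self.lru = OrderedDict()  # element id -> is_unmirrored, LRU end first

          def demote_one(self):
              """Demote from the LRU end, keeping unmirrored elements in the lower portion."""
              if not self.lru:
                  return None
              elem_id, is_unmirrored = next(iter(self.lru.items()))
              self.lru.pop(elem_id)
              data = self.higher.pop(elem_id, None)
              if is_unmirrored and data is not None:
                  # The not-yet-mirrored element stays cached so replication
                  # can still read it without going to the storage drives.
                  self.lower[elem_id] = data
              return elem_id

      cache = TwoTierCache()
      cache.higher["t1"] = b"dirty"; cache.lru["t1"] = True    # unmirrored
      cache.higher["t2"] = b"clean"; cache.lru["t2"] = False   # already mirrored
      print(cache.demote_one(), list(cache.lower))             # t1 is retained in the lower portion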
  • Patent number: 11314659
    Abstract: Provided are techniques for using real segments and alternate segments in Non-Volatile Storage (NVS). One or more write requests for a track are executed by alternating between storing data in one or more sectors of real segments and one or more sectors of alternate segments for each of the write requests, while setting indicators in a real sector structure and an alternate sector structure. In response to determining that the one or more write requests for the track have completed, the data stored in the one or more sectors of the real segments and in the one or more sectors of the alternate segments is merged to form newly written data. In response to determining that hardened, previously written data for the track exists in the NVS, the newly written data is merged with the hardened, previously written data in the NVS. The merged data is committed.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: April 26, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos
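    Example sketch: The abstract above alternates writes between sectors of real and alternate segments and then merges them into the newly written data, which is in turn merged with any hardened data for the track. A small Python sketch of the sector merge; the track layout, bitmaps, and names are assumptions for illustration.
      SECTORS_PER_TRACK = 8

      def merge_segments(real, alt, real_map, alt_map):
          """Combine sectors written to real and alternate segments into newly written data."""
          merged = [None] * SECTORS_PER_TRACK
          for i in range(SECTORS_PER_TRACK):
              if alt_map[i]:        # sector written during the most recent write
                  merged[i] = alt[i]
              elif real_map[i]:     # sector written during an earlier write in the sequence
                  merged[i] = real[i]
          return merged

      def commit(new_data, hardened):
          """Merge newly written sectors over hardened, previously written data."""
          return [n if n is not None else h for n, h in zip(new_data, hardened)]

      real = ["r0", None, "r2", None, None, None, None, None]
      alt  = [None, "a1", None, None, None, None, None, None]
      new_data = merge_segments(real, alt, [1, 0, 1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0])
      hardened = [f"h{i}" for i in range(SECTORS_PER_TRACK)]
      print(commit(new_data, hardened))   # ['r0', 'a1', 'r2', 'h3', ...]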
  • Patent number: 11301394
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to select one of multiple cache eviction algorithms to use to evict a track from the cache. A first cache eviction algorithm determines tracks to evict from the cache. A second cache eviction algorithm determines tracks to evict from the cache, wherein the first and second cache eviction algorithms use different eviction schemes. At least one machine learning module is executed to produce output indicating one of the first cache eviction algorithm and the second cache eviction algorithm to use to select a track to evict from the cache. A track is evicted that is selected by one of the first and second cache eviction algorithms indicated in the output from the at least one machine learning module.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 12, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew G. Borlick, Kyler A. Anderson, Kevin J. Ash
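    Example sketch: The abstract above lets a machine learning module pick which of two eviction algorithms supplies the next track to evict. A compact Python sketch using LRU and LFU as the two example schemes; the scoring rule standing in for the module is an assumption for illustration.
      from collections import OrderedDict, Counter

      class DualPolicyCache:
          def __init__(self):
              self.lru = OrderedDict()   # track id -> None, least recently used first
              self.freq = Counter()      # track id -> access count

          def access(self, track_id):
              self.lru.pop(track_id, None)
              self.lru[track_id] = None
              self.freq[track_id] += 1

          def pick_policy(self, workload_attrs):
              """Stand-in for the machine learning module: returns 'lru' or 'lfu'."""
              # Toy rule: sequential-looking workloads favor LRU, reuse-heavy ones LFU.
              return "lru" if workload_attrs["sequential_ratio"] > 0.5 else "lfu"

          def evict(self, workload_attrs):
              candidate_lru = next(iter(self.lru))               # least recently used track
              candidate_lfu = min(self.freq, key=self.freq.get)  # least frequently used track
              chosen = candidate_lru if self.pick_policy(workload_attrs) == "lru" else candidate_lfu
              self.lru.pop(chosen); self.freq.pop(chosen)
              return chosen

      cache = DualPolicyCache()
      for t in ["a", "b", "a", "c", "a"]:
          cache.access(t)
      print(cache.evict({"sequential_ratio": 0.2}))   # LFU evicts a track with count 1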
  • Patent number: 11294886
    Abstract: Provided are a computer program product, system, and method for fixing anomalies in a preserved data structure used to generate a temporary data structure during system initialization. A preserved data structure in persistent storage is used to build a temporary data structure in a memory of the computing system during initialization of the computing system. The temporary data structure represents computational resources in the computing system and is rebuilt from the preserved data structure during the initialization. The preserved data structure and the temporary data structure are processed to determine whether the preserved data structure includes at least one anomaly that would result in rebuilding the temporary data structure with an error. Information on the preserved data structure and the temporary data structure having the anomaly is processed to determine modifications to correct the preserved data structure. The determined modifications are processed to correct the preserved data structure.
    Type: Grant
    Filed: February 11, 2020
    Date of Patent: April 5, 2022
    Assignee: International Business Machines Corporation
    Inventors: Matthew G. Borlick, Lokesh M. Gupta, Clint A. Hardy
  • Patent number: 11288600
    Abstract: Provided are a computer program product, system, and method for determining sectors of a track to stage into cache using a machine learning module. Performance attributes of system components affected by staging tracks from the storage to the cache are provided to a machine learning module. An output is received from the machine learning module, which has processed the provided performance attributes, indicating a staging strategy, comprising one of a plurality of staging strategies, that specifies the sectors of a track to stage into the cache. Sectors of an accessed track that is not in the cache are staged into the cache according to the staging strategy indicated in the output.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: March 29, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
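    Example sketch: The abstract above stages the sectors of a cache-miss track according to a staging strategy chosen by a machine learning module. The Python sketch below dispatches on the chosen strategy; the three strategies shown (full track, requested sector to end of track, requested sectors only) and the selection rule are illustrative assumptions.
      def choose_strategy(perf_attrs):
          """Hypothetical stand-in for the machine learning module."""
          # Toy rule: a lightly loaded adapter favors staging more data ahead of future hits.
          if perf_attrs["adapter_busy"] < 0.3:
              return "full_track"
          if perf_attrs["adapter_busy"] < 0.7:
              return "partial_track"          # requested sector through end of track
          return "sector_only"                # just the requested sectors

      def sectors_to_stage(strategy, requested, sectors_per_track=16):
          first = min(requested)
          if strategy == "full_track":
              return list(range(sectors_per_track))
          if strategy == "partial_track":
              return list(range(first, sectors_per_track))
          return sorted(requested)

      strategy = choose_strategy({"adapter_busy": 0.5})
      print(strategy, sectors_to_stage(strategy, requested={4, 5, 9}))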
  • Patent number: 11281593
    Abstract: Provided are a computer program product, system, and method for using insertion points to determine locations in a cache list at which to indicate tracks in a shared cache accessed by a plurality of processors. A plurality of insertion points into a cache list for the shared cache, which has a least recently used (LRU) end and a most recently used (MRU) end, identify tracks in the cache list. For each processor, of a plurality of processors, for which indication of tracks accessed by the processor is received, a determination is made of which of the provided insertion points to use to indicate those tracks. The tracks are indicated at positions in the cache list with respect to the determined insertion points.
    Type: Grant
    Filed: August 7, 2019
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew J. Kalos
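    Example sketch: The abstract above spreads insertion points along the cache list so that batches of tracks accessed by different processors can be linked in near an appropriate insertion point rather than all contending for the MRU end. A simplified Python sketch with a list standing in for the cache list; the even spacing and the way a point is chosen are assumptions for illustration.
      def insertion_points(cache_list, count=4):
          """Evenly spaced indexes into the cache list, with the LRU end at index 0."""
          n = len(cache_list)
          return [min(n, (i + 1) * n // count) for i in range(count)]

      def add_batch(cache_list, tracks, point_index, count=4):
          """Indicate a processor's batch of accessed tracks at a chosen insertion point."""
          points = insertion_points(cache_list, count)
          pos = points[point_index]
          # Splice the batch in at the insertion point instead of always at the MRU end,
          # so per-processor batches do not all serialize on the MRU end of the list.
          cache_list[pos:pos] = tracks
          return cache_list

      cache_list = [f"t{i}" for i in range(8)]       # t0 at the LRU end, t7 at the MRU end
      print(add_batch(cache_list, ["x1", "x2"], point_index=2))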
  • Patent number: 11281502
    Abstract: A method for dispatching tasks on processor cores based on memory access efficiency is disclosed. The method identifies a task and a memory area to be accessed by the task. The method may use one or more of a compiler, code knowledge, and run-time statistics to identify the memory area that is accessed by the task. The method identifies multiple processor cores that are candidates to execute the task and identifies a particular processor core from the multiple processor cores that provides most efficient access to the memory area. The method dispatches the task to execute on the particular processor core that is deemed most efficient. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 22, 2020
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Matthew J. Kalos, Kevin J. Ash, Trung N. Nguyen
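    Example sketch: The abstract above dispatches a task to whichever candidate core gives the most efficient access to the memory area the task will touch. A minimal Python sketch that scores candidate cores with a hypothetical NUMA-distance table; the table values, core-to-node mapping, and helper names are assumptions for illustration.
      # Hypothetical NUMA distance: NUMA_DISTANCE[core_node][memory_node], lower is better.
      NUMA_DISTANCE = [
          [10, 20],
          [20, 10],
      ]

      def core_node(core_id, cores_per_node=4):
          return core_id // cores_per_node

      def dispatch(task, candidate_cores, memory_node, run_on):
          """Pick the candidate core with the cheapest access to the task's memory area."""
          best = min(candidate_cores,
                     key=lambda c: NUMA_DISTANCE[core_node(c)][memory_node])
          run_on(best, task)
          return best

      # A task known (e.g. from compiler hints or run-time statistics) to touch node 1 memory.
      dispatch("checksum_task", candidate_cores=[1, 2, 5, 6], memory_node=1,
               run_on=lambda core, task: print(f"run {task} on core {core}"))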
  • Patent number: 11281594
    Abstract: A method for maintaining statistics for data elements in a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method maintains updates to the statistics in an update area within the higher performance portion. The method determines whether the updates have reached a specified threshold and, in the event the updates have reached the specified threshold, flushes the updates from the update area to the ghost cache to update the statistics. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: February 22, 2020
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
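    Example sketch: The abstract above batches statistic updates in an update area in the higher performance portion and flushes them into a ghost cache in the lower performance portion once a threshold is reached. A compact Python sketch of that batching; the threshold value and structure names are assumptions for illustration.
      class GhostCacheStats:
          """Toy model: batched statistic updates flushed into a ghost cache."""
          FLUSH_THRESHOLD = 3   # assumed batch size before flushing to the ghost cache

          def __init__(self):
              self.ghost = {}     # lower performance portion: element id -> access count
              self.updates = []   # higher performance portion: pending (id, delta) updates

          def record_access(self, elem_id):
              self.updates.append((elem_id, 1))
              if len(self.updates) >= self.FLUSH_THRESHOLD:
                  self.flush()

          def flush(self):
              # Apply the batched updates to the ghost cache in one pass, keeping
              # per-access writes to the slower portion off the hot path.
              for elem_id, delta in self.updates:
                  self.ghost[elem_id] = self.ghost.get(elem_id, 0) + delta
              self.updates.clear()

      stats = GhostCacheStats()
      for e in ["a", "b", "a", "c"]:
          stats.record_access(e)
      print(stats.ghost, stats.updates)   # {'a': 2, 'b': 1} with ('c', 1) still pending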
  • Patent number: 11281497
    Abstract: Provided are a computer program product, system, and method for using a machine learning module to determine an allocation of stage and destage tasks. Storage performance information related to processing of Input/Output (I/O) requests with respect to a storage unit is provided to a machine learning module. A computed number of stage tasks and a computed number of destage tasks are received from the machine learning module. A current number of stage tasks allocated to stage tracks from the storage unit to the cache is adjusted based on the computed number of stage tasks. A current number of destage tasks allocated to destage tracks from the cache to the storage unit is adjusted based on the computed number of destage tasks.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: March 22, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Matthew G. Borlick, Kevin J. Ash
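    Example sketch: The abstract above adjusts the current numbers of stage and destage tasks toward numbers computed by a machine learning module from storage performance information. A small Python sketch of the adjustment step; the linear stand-in for the module, its coefficients, and the step limit are assumptions for illustration.
      def compute_task_counts(perf):
          """Hypothetical stand-in for the machine learning module."""
          # Toy rule: more read misses -> more stage tasks; more dirty data -> more destage tasks.
          stage = round(4 + 20 * perf["read_miss_ratio"])
          destage = round(4 + 20 * perf["dirty_ratio"])
          return stage, destage

      def adjust(current, computed, max_step=2):
          """Move the current allocation toward the computed value a few tasks at a time."""
          delta = computed - current
          return current + max(-max_step, min(max_step, delta))

      current_stage, current_destage = 8, 12
      target_stage, target_destage = compute_task_counts({"read_miss_ratio": 0.4, "dirty_ratio": 0.1})
      current_stage = adjust(current_stage, target_stage)
      current_destage = adjust(current_destage, target_destage)
      print(current_stage, current_destage)   # 10 10: stepped toward targets of 12 and 6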
  • Publication number: 20220075704
    Abstract: A machine learning module is trained by receiving inputs comprising attributes of a computing environment, where the attributes affect a likelihood of failure in the computing environment. In response to an event occurring in the computing environment, a risk score that indicates a predicted likelihood of failure in the computing environment is generated via forward propagation through a plurality of layers of the machine learning module. A margin of error is calculated based on comparing the generated risk score to an expected risk score, where the expected risk score indicates an expected likelihood of failure in the computing environment corresponding to the event. Weights of links that interconnect nodes of the plurality of layers are adjusted via back propagation to reduce the margin of error and improve the prediction of the likelihood of failure in the computing environment.
    Type: Application
    Filed: November 15, 2021
    Publication date: March 10, 2022
    Inventors: James E. Olson, Micah Robison, Matthew G. Borlick, Lokesh M. Gupta, Richard P. Oubre, Jr., Usman Ahmed, Richard H. Hopkins
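    Example sketch: The abstract above trains the module by forward-propagating environment attributes to a risk score, computing a margin of error against an expected risk score, and back-propagating to adjust link weights. The minimal Python sketch below does this for a single sigmoid unit with plain gradient descent; the one-unit shape, learning rate, and sample data are assumptions, whereas the publication describes a multi-layer module.
      import math, random

      random.seed(0)
      weights = [random.uniform(-0.1, 0.1) for _ in range(3)]   # one link weight per input attribute
      bias, lr = 0.0, 0.5

      def forward(attrs):
          """Forward propagation: attributes -> risk score in (0, 1)."""
          z = sum(w * a for w, a in zip(weights, attrs)) + bias
          return 1.0 / (1.0 + math.exp(-z))                     # sigmoid risk score

      def train_step(attrs, expected_risk):
          """Back propagation for a single sigmoid unit on squared error."""
          global bias
          risk = forward(attrs)
          error = risk - expected_risk                          # margin of error
          grad = error * risk * (1.0 - risk)                    # d(loss)/d(z)
          for i, a in enumerate(attrs):
              weights[i] -= lr * grad * a
          bias -= lr * grad
          return error

      # Attributes might be normalized values such as temperature, error counts, firmware age.
      sample = ([0.9, 0.8, 0.7], 1.0)                           # event whose expected risk is high
      for _ in range(200):
          train_step(*sample)
      print(round(forward(sample[0]), 2))                       # risk score moves toward 1.0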
  • Publication number: 20220075676
    Abstract: Input on a plurality of attributes of a computing environment is provided to a machine learning module to produce an output value that comprises a risk score indicating the likelihood of a potential malfunction occurring within the computing environment. A determination is made as to whether the risk score exceeds a predetermined threshold. In response to determining that the risk score exceeds the predetermined threshold, an indication is transmitted that a malfunction is likely to occur within the computing environment. A modification is then made to the computing environment to prevent the potential malfunction from occurring.
    Type: Application
    Filed: November 15, 2021
    Publication date: March 10, 2022
    Inventors: James E. Olson, Micah Robison, Matthew G. Borlick, Lokesh M. Gupta, Richard P. Oubre, Jr., Usman Ahmed, Richard H. Hopkins
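    Example sketch: The abstract above compares the risk score against a predetermined threshold and, when it is exceeded, transmits an indication and modifies the computing environment. A tiny Python sketch of that check; the threshold value, alert channel, and mitigation hook are all hypothetical.
      RISK_THRESHOLD = 0.8    # assumed predetermined threshold

      def handle_risk(risk_score, notify, mitigate):
          """Alert and mitigate when the predicted risk of a malfunction is too high."""
          if risk_score <= RISK_THRESHOLD:
              return False
          notify(f"risk score {risk_score:.2f} exceeds {RISK_THRESHOLD}")   # transmit the indication
          mitigate()                                                        # modify the environment
          return True

      handle_risk(0.91,
                  notify=lambda msg: print("ALERT:", msg),
                  mitigate=lambda: print("throttling workload on the suspect adapter"))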
  • Patent number: 11271967
    Abstract: Methods and systems for cyber-hacking detection are provided. One method includes generating, by a processor, one or more artificial accounts for a type of actual account, learning one or more hacking behaviors for the type of actual account, and detecting cyber-hacks in activity in the one or more artificial accounts based on the one or more hacking behaviors. Systems and computer program products for performing the above method are also provided.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: March 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Matthew G. Borlick, Lokesh M. Gupta
  • Patent number: 11263097
    Abstract: Provided are a computer program product, system, and method for using a track format code in a cache control block for a track in a cache to process read and write requests to the track in the cache. A track format table associates track format codes with track format metadata. A determination is made as to whether the track format table has track format metadata matching the track format metadata of a track staged into the cache. When there is a match, the track format code associated in the track format table with that matching track format metadata is determined. A cache control block for the track being added to the cache is generated, including the determined track format code, when the track format table has the matching track format metadata.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Lokesh M. Gupta, Matthew J. Kalos, Beth A. Peterson
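    Example sketch: The abstract above compresses per-track format metadata into a small track format code via a track format table and stores that code in the cache control block when the staged track's metadata matches a table entry. A brief Python sketch of the lookup; the table contents and field names are assumptions for illustration.
      # Track format table: track format metadata -> compact track format code.
      TRACK_FORMAT_TABLE = {
          ("CKD", 4096, "standard"): 0x01,
          ("CKD", 512,  "standard"): 0x02,
          ("FBA", 512,  "standard"): 0x03,
      }

      def build_cache_control_block(track_id, track_format_metadata):
          """Create a cache control block, embedding the format code when one exists."""
          code = TRACK_FORMAT_TABLE.get(track_format_metadata)
          return {
              "track_id": track_id,
              "track_format_code": code,          # None means the metadata has no table entry
              "format_known": code is not None,   # fast-path reads/writes can skip a metadata access
          }

      print(build_cache_control_block(7, ("CKD", 4096, "standard")))
      print(build_cache_control_block(8, ("CKD", 4096, "custom")))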
  • Patent number: 11243708
    Abstract: Provided are a computer program product, system, and method for providing track format information when mirroring updated tracks from a primary storage system to a secondary storage system. The primary storage system determines a track to mirror to the secondary storage system and determines whether there is track format information for the track to mirror. The track format information indicates the format and layout of data in the track, as indicated in track metadata for the track. In response to determining that the track format information exists, the primary storage system sends the track format information to the secondary storage system and mirrors the track to the secondary storage system. The secondary storage system uses the track format information for the track in the secondary cache when processing a read or write request to the mirrored track.
    Type: Grant
    Filed: January 8, 2020
    Date of Patent: February 8, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta
  • Patent number: 11237730
    Abstract: A method for improving cache hit ratios for selected volumes within a storage system is disclosed. In one embodiment, such a method includes monitoring I/O to multiple volumes residing on a storage system. The storage system includes a cache to store data associated with the volumes. The method determines, from the I/O, which particular volumes of the multiple volumes would benefit the most if provided favored status in the cache. The favored status provides increased residency time in the cache to the particular volumes compared to volumes not having the favored status. The method generates a list of the particular volumes and transmits the list to the storage system. The storage system, in turn, provides increased residency time to the particular volumes in accordance with their favored status. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: May 12, 2019
    Date of Patent: February 1, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Beth A. Peterson, Kevin J. Ash, Kyler A. Anderson
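    Example sketch: The abstract above gives listed volumes favored status so their data stays in cache longer than data from other volumes. One simple way to realize the increased residency time is to let demotion skip a favored track a limited number of times, as in the Python sketch below; the skip-count mechanism and its value are illustrative assumptions, since the abstract only says residency time is increased.
      from collections import deque

      FAVORED_VOLUMES = {"vol_db"}     # the list of favored volumes received by the storage system
      MAX_SKIPS = 2                    # assumed number of extra passes a favored track gets

      def demote_one(lru):
          """Pop from the LRU end, giving tracks of favored volumes extra passes."""
          while lru:
              track = lru.popleft()                 # LRU end
              if track["volume"] in FAVORED_VOLUMES and track["skips"] < MAX_SKIPS:
                  track["skips"] += 1
                  lru.append(track)                 # re-insert at the MRU end
                  continue
              return track
          return None

      lru = deque([{"id": 1, "volume": "vol_db",  "skips": 0},
                   {"id": 2, "volume": "vol_tmp", "skips": 0}])
      print(demote_one(lru)["id"])                  # 2: the unfavored track is demoted first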
  • Patent number: 11226899
    Abstract: Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node managing access to a second group of tracks in the storage. After the transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node.
    Type: Grant
    Filed: July 24, 2019
    Date of Patent: January 18, 2022
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Matthew G. Borlick, Lokesh M. Gupta, Matthew J. Kalos, Brian A. Rinaldi
  • Patent number: 11221954
    Abstract: A method for storing metadata in a cache comprising heterogeneous memory types is disclosed. The method stages data elements containing metadata into a lower performance portion of a cache. The cache includes the lower performance portion and a higher performance portion. In response to determining that the data elements are updated in the higher performance portion, the method records when the data elements were updated and invalidates the data elements in the lower performance portion. The method scans the lower performance portion for the data elements that are invalidated and re-stages, in the lower performance portion, the data elements that are invalidated and have not been updated in the higher performance portion in a last specified period of time. A corresponding system and computer program product are also disclosed.
    Type: Grant
    Filed: November 18, 2019
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
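    Example sketch: The abstract above invalidates the lower performance copy of a metadata element whenever the higher performance copy is updated, then periodically re-stages only the invalidated elements that have been quiet for a while. A rough Python sketch of that cycle; the quiet-period length and structure names are assumptions for illustration.
      import time

      QUIET_PERIOD = 0.5    # assumed "last specified period of time", in seconds

      class MetadataCache:
          def __init__(self):
              self.higher = {}        # element id -> metadata (higher performance portion)
              self.lower = {}         # element id -> (metadata, valid flag) (lower performance portion)
              self.last_update = {}   # element id -> time of last update in the higher portion

          def stage(self, elem_id, metadata):
              self.lower[elem_id] = (metadata, True)

          def update(self, elem_id, metadata):
              self.higher[elem_id] = metadata
              self.last_update[elem_id] = time.monotonic()
              if elem_id in self.lower:                         # invalidate the slower copy
                  self.lower[elem_id] = (self.lower[elem_id][0], False)

          def rescan(self):
              """Re-stage invalidated elements that have not been updated again recently."""
              now = time.monotonic()
              for elem_id, (meta, valid) in list(self.lower.items()):
                  if not valid and now - self.last_update[elem_id] >= QUIET_PERIOD:
                      self.lower[elem_id] = (self.higher[elem_id], True)

      cache = MetadataCache()
      cache.stage("m1", "v0")
      cache.update("m1", "v1")
      time.sleep(QUIET_PERIOD)
      cache.rescan()
      print(cache.lower["m1"])        # ('v1', True): refreshed from the higher performance portion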
  • Patent number: 11222265
    Abstract: A machine learning module receives inputs comprising attributes of a storage controller, where the attributes affect performance parameters for performing stages and destages in the storage controller. In response to an event, the machine learning module generates, via forward propagation, an output value that indicates whether to fill holes in a track of a cache by staging data to the cache prior to destage of the track. A margin of error is calculated based on comparing the generated output value to an expected output value, where the expected output value is generated from an indication of whether it is correct to fill holes in a track of the cache by staging data to the cache prior to destage of the track. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error.
    Type: Grant
    Filed: October 25, 2018
    Date of Patent: January 11, 2022
    Assignee: International Business Machines Corporation
    Inventors: Lokesh M. Gupta, Kyler A. Anderson, Kevin J. Ash, Matthew G. Borlick
  • Publication number: 20220004628
    Abstract: Provided are a computer program product, system, and method for detecting a security breach in a system managing access to a storage. Process Input/Output (I/O) activity by a process accessing data in a storage is monitored. A determination is made of a characteristic of the data subject to the I/O activity from the process. A determination is made as to whether a characteristic of the process I/O activity as compared to the characteristic of the data satisfies a condition. The process initiating the I/O activity is characterized as a suspicious process in response to determining that the condition is satisfied. A security breach is indicated in response to characterizing the process as the suspicious process.
    Type: Application
    Filed: September 17, 2021
    Publication date: January 6, 2022
    Inventors: Matthew G. Borlick, Lokesh M. Gupta
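    Example sketch: The abstract above flags a process as suspicious when a characteristic of its I/O activity, compared with a characteristic of the data it accesses, satisfies a condition. The Python sketch below uses one illustrative condition, a write rate far above what the data normally sees; the condition, rates, and names are assumptions for illustration.
      def data_characteristics(dataset):
          """Hypothetical profile of the data being accessed."""
          return {"typical_write_rate_mb_s": dataset["typical_write_rate_mb_s"],
                  "mostly_read_only": dataset["mostly_read_only"]}

      def is_suspicious(io_activity, data_chars):
          """Condition: the process writes far faster than this data is normally written."""
          return (data_chars["mostly_read_only"]
                  and io_activity["write_rate_mb_s"] > 10 * data_chars["typical_write_rate_mb_s"])

      def check_process(proc_name, io_activity, dataset, raise_alert):
          if is_suspicious(io_activity, data_characteristics(dataset)):
              raise_alert(f"security breach suspected: {proc_name}")   # indicate a security breach
              return True
          return False

      check_process("proc_917",
                    io_activity={"write_rate_mb_s": 400.0},
                    dataset={"typical_write_rate_mb_s": 2.0, "mostly_read_only": True},
                    raise_alert=print)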