Patents by Inventor John T. Olson
John T. Olson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11593309
Abstract: Embodiments include a method for fault tolerance in the delivery of event information within a file system cluster. One or more processors determine event information associated with file system activity performed by a node of the cluster. The one or more processors add the event information to an event log buffer in memory. The one or more processors receive a first log sequence number (LSN) associated with flushing of recovery information from a recovery log buffer. The one or more processors determine the event information in the event log buffer having a log sequence number less than or equal to the first LSN and, upon determining that such event information exists, flush the corresponding event information from the event log buffer to disk storage.
Type: Grant
Filed: November 5, 2020
Date of Patent: February 28, 2023
Assignee: International Business Machines Corporation
Inventors: John T. Olson, Deepavali M. Bhagwat, Frank Schmuck, Shekhar Amlekar, Luis Teran, Jacob Morris Tick, April Brown
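The LSN-gated flush described in this abstract can be sketched as a small model: events are buffered in memory and written to disk only once the recovery log has been flushed through a matching LSN. This is an illustrative toy, not the patented implementation; all names (`EventLogBuffer`, `on_recovery_log_flushed`) are assumptions.

```python
class EventLogBuffer:
    """Toy model of an in-memory event log gated by recovery-log LSNs."""

    def __init__(self):
        self._entries = []   # list of (lsn, event_info), kept in LSN order
        self._disk = []      # stands in for durable disk storage

    def add(self, lsn, event_info):
        """Record event information with the LSN of its file system activity."""
        self._entries.append((lsn, event_info))

    def on_recovery_log_flushed(self, flushed_lsn):
        """Called when the recovery log buffer has been flushed through
        flushed_lsn: flush all buffered events with LSN <= flushed_lsn."""
        ready = [e for e in self._entries if e[0] <= flushed_lsn]
        self._entries = [e for e in self._entries if e[0] > flushed_lsn]
        self._disk.extend(ready)
        return [info for _, info in ready]

buf = EventLogBuffer()
buf.add(10, "create /a")
buf.add(11, "write /a")
buf.add(12, "rename /a /b")
# Recovery log is now durable through LSN 11, so only the first two
# events become eligible for flushing; LSN 12 stays buffered.
flushed = buf.on_recovery_log_flushed(11)
```

Gating on the recovery-log LSN ensures an event never reaches disk before the recovery information that would make it replayable, which is the fault-tolerance property the abstract describes.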
-
Publication number: 20230055511
Abstract: In an approach for optimizing clustered file system lock ordering in a multi-gateway-supported hybrid cloud environment, a processor identifies a wide area network (WAN) caching gateway topology between a set of nodes distributed in a hybrid cloud environment. A processor identifies a token request made for accessing a file targeted for WAN caching across sites within the hybrid cloud environment. A processor analyzes an importance of block ranges of the token request. A processor assigns a weight to the token request in relative comparison with weights allocated to token requests made by other applications. A processor dynamically modifies a read-ahead algorithm used during token generation to generate tokens for the block ranges of the token request based on the WAN caching gateway topology.
Type: Application
Filed: August 20, 2021
Publication date: February 23, 2023
Inventors: Abhishek Satyanarayan Dave, John T. Olson, Sasikanth Eda
-
Publication number: 20220171657
Abstract: Techniques are provided for dynamic workload tuning of a data pipeline that includes a plurality of stages, each associated with a respective storage element, a storage element monitor, and a resource manager. In one embodiment, the techniques involve the storage element monitor determining a utilization of a storage element associated with a first stage of the plurality of stages, comparing the utilization of the storage element to a first threshold, generating a signal based on the comparison, and outputting the signal; and the resource manager receiving the signal, determining that the signal indicates an increase or decrease of resources for the first stage, and adjusting compute resources for the first stage based on the signal in order to effect a change in the utilization of the storage element.
Type: Application
Filed: December 1, 2020
Publication date: June 2, 2022
Inventors: Christof Schmitt, John T. Olson
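The monitor/manager loop in this abstract can be illustrated with a minimal sketch. The threshold values and the one-worker-at-a-time scaling policy are assumptions for illustration, not details from the publication.

```python
def monitor(queue_depth, capacity, high=0.8, low=0.2):
    """Storage element monitor: compare utilization to thresholds and
    return a signal: +1 to grow, -1 to shrink, 0 to hold."""
    utilization = queue_depth / capacity
    if utilization > high:
        return +1
    if utilization < low:
        return -1
    return 0

def resource_manager(current_workers, signal, min_workers=1, max_workers=16):
    """Resource manager: adjust the stage's compute resources based on the
    monitor's signal, clamped to a configured range."""
    return max(min_workers, min(max_workers, current_workers + signal))

workers = 4
workers = resource_manager(workers, monitor(queue_depth=90, capacity=100))  # overloaded: scale up
workers = resource_manager(workers, monitor(queue_depth=10, capacity=100))  # drained: scale back down
```

Separating the monitor (which only observes and signals) from the manager (which only acts on signals) mirrors the two roles the abstract assigns to each stage.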
-
Publication number: 20220138158
Abstract: Embodiments include a method for fault tolerance in the delivery of event information within a file system cluster. One or more processors determine event information associated with file system activity performed by a node of the cluster. The one or more processors add the event information to an event log buffer in memory. The one or more processors receive a first log sequence number (LSN) associated with flushing of recovery information from a recovery log buffer. The one or more processors determine the event information in the event log buffer having a log sequence number less than or equal to the first LSN and, upon determining that such event information exists, flush the corresponding event information from the event log buffer to disk storage.
Type: Application
Filed: November 5, 2020
Publication date: May 5, 2022
Inventors: John T. Olson, Deepavali M. Bhagwat, Frank Schmuck, Shekhar Amlekar, Luis Teran, Jacob Morris Tick, April Brown
-
Patent number: 11281629
Abstract: Provided are a computer program product, system, and method for using and training a machine learning module to determine actions to be taken in response to file system events in a file system. A file system event is detected. An action to be performed corresponding to the file system event is selected from an action list. A determination is made as to whether an outcome in the computing system resulting from the performed action satisfies an outcome threshold. A machine learning module is trained to increase a likelihood of selecting the performed action corresponding to the file system event when the outcome satisfies the outcome threshold. The machine learning module is trained to decrease a likelihood of selecting the performed action corresponding to the file system event when the outcome does not satisfy the outcome threshold.
Type: Grant
Filed: March 15, 2019
Date of Patent: March 22, 2022
Assignee: International Business Machines Corporation
Inventors: Subashini Balachandran, John T. Olson
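The train-on-outcome loop in this abstract can be sketched with a simple per-(event, action) weight table standing in for a real machine learning module. The class, the reward scheme, and the example event/action names are all assumptions for illustration.

```python
class ActionSelector:
    """Toy stand-in for the ML module: prefers actions whose past outcomes
    met the outcome threshold for a given file system event."""

    def __init__(self, actions, lr=0.1):
        self.actions = actions
        self.lr = lr
        self.weights = {}        # (event, action) -> preference weight

    def select(self, event):
        """Pick the action with the highest learned weight for this event."""
        return max(self.actions, key=lambda a: self.weights.get((event, a), 0.0))

    def train(self, event, action, outcome_ok):
        """Increase the action's likelihood when the outcome satisfied the
        threshold; decrease it otherwise."""
        w = self.weights.get((event, action), 0.0)
        self.weights[(event, action)] = w + (self.lr if outcome_ok else -self.lr)

sel = ActionSelector(["migrate", "compress", "ignore"])
sel.train("disk_nearly_full", "compress", outcome_ok=True)   # good outcome
sel.train("disk_nearly_full", "ignore", outcome_ok=False)    # bad outcome
```

After this feedback, `sel.select("disk_nearly_full")` prefers `"compress"`, the action whose outcome satisfied the threshold.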
-
Patent number: 11157457
Abstract: A computing environment includes a home site and a cache site with nonhomogeneous thin-provisioned storage configurations. A file management system updates files stored at the cache site. Each updated file has an updated file size and a pre-update file size. When a resynchronization is needed between the cache site and the home site, for example due to an extended communication failure, the storage requirement changes for the updated files are calculated and a notification is sent to the home site. The notification identifies the updated files and the storage requirement changes. The home site sends a reply to the cache site. The reply identifies which files are approved for immediate processing. The cache site transfers resynchronization data for the approved files to the home site, and delays transferring resynchronization data for the unapproved files until subsequent replies from the home site indicate that the previously unapproved files are now approved.
Type: Grant
Filed: November 27, 2019
Date of Patent: October 26, 2021
Assignee: International Business Machines Corporation
Inventors: Shah M. R. Islam, John T. Olson, Sandeep R. Patil, Riyazahamad M. Shiraguppi
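The approval handshake in this abstract can be sketched from the home site's side: files whose storage requirement change fits the currently free space are approved now, and the rest are deferred to later replies. The first-fit policy and all names here are assumptions for illustration.

```python
def approve(changes, free_bytes):
    """Split updated files into (approved, deferred) given home-site space.
    `changes` maps filename -> storage requirement change in bytes, as
    reported in the cache site's notification."""
    approved, deferred = [], []
    for name, delta in changes.items():
        if delta <= free_bytes:
            approved.append(name)
            free_bytes -= max(delta, 0)   # growth consumes space; shrink frees none here
        else:
            deferred.append(name)         # wait for a subsequent reply
    return approved, deferred

changes = {"a.dat": 400, "b.dat": 900, "c.dat": 100}
approved, deferred = approve(changes, free_bytes=1000)
```

With 1000 bytes free, `a.dat` and `c.dat` fit and are approved for immediate resynchronization, while `b.dat` is deferred until the home site frees enough space and sends a later approval.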
-
Patent number: 11119655
Abstract: An embodiment of the invention may include a method, computer program product and system for optimizing data defragmentation. The embodiment may include collecting details related to contiguous storage space available on a disk drive. The embodiment may include identifying a type of object storage implementation utilized on the disk drive. The type of object storage implementation is based on how an object is stored within the disk drive. The embodiment may include identifying an important component of the object, as determined by its high frequency of access. The embodiment may include identifying a non-important component of the object, as determined by its low frequency of access. The embodiment may include moving the important component to an outer sector of the disk drive. The embodiment may include moving the non-important component to an inner sector of the disk drive.
Type: Grant
Filed: August 21, 2019
Date of Patent: September 14, 2021
Assignee: International Business Machines Corporation
Inventors: Duane Baldwin, Abhishek Dave, Sasikanth Eda, Nataraj Nagaratnam, John T. Olson, Sandeep R. Patil
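The placement decision in this abstract reduces to classifying components by access frequency and mapping hot components to outer (faster) sectors and cold ones to inner sectors. A minimal sketch follows; the threshold parameter and the component names are assumed for illustration.

```python
def plan_placement(components, access_counts, hot_threshold=100):
    """Map each object component to 'outer' or 'inner' disk sectors by
    access frequency: frequently accessed components go to outer sectors."""
    placement = {}
    for name in components:
        hot = access_counts.get(name, 0) >= hot_threshold
        placement[name] = "outer" if hot else "inner"
    return placement

plan = plan_placement(
    ["manifest", "index", "payload"],
    {"manifest": 500, "index": 350, "payload": 12},
)
```

Here the frequently read `manifest` and `index` land on outer sectors while the rarely touched `payload` is relegated to inner sectors.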
-
Patent number: 11093532
Abstract: A computer-implemented method according to one embodiment includes identifying at a pre-allocation module a size of object data to be stored within a storage node, identifying at the pre-allocation module file system parameters associated with the storage node, calculating at the pre-allocation module pre-allocated details needed for storing the object data within the storage node, utilizing the size of the object data and the file system parameters associated with the storage node, and sending the object data and the pre-allocated details from the pre-allocation module to the storage node.
Type: Grant
Filed: May 25, 2017
Date of Patent: August 17, 2021
Assignee: International Business Machines Corporation
Inventors: Sasikanth Eda, John T. Olson, Sandeep R. Patil, Sachin C. Punadikar
-
Patent number: 11076020
Abstract: A system and method dynamically transitions the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments of the invention overcome prior art problems of a storlet in a distributed storage system with a storlet engine having a dynamic role module which dynamically assigns or changes the file system role served by a node to a role better suited to a computation operation in the storlet. The role assignment is made based on a classification of the computation operation and the appropriate file system role that matches the computation operation. For example, a role could be assigned which helps reduce storage needs, communication resources, etc.
Type: Grant
Filed: February 25, 2020
Date of Patent: July 27, 2021
Assignee: International Business Machines Corporation
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
-
Patent number: 10956214
Abstract: A present invention embodiment manages resources of a distributed system to perform computational tasks within a specified time interval. A received object is classified into a type of computational processing, and a quantity of objects is maintained for each type. An execution time for processing a single object is estimated based on a corresponding computation resource template. A total execution time for the quantity of objects of a type of computational processing is determined based on the estimated execution time. In response to the total execution time exceeding a user-specified time interval, an amount of resources of the distributed system is determined to process the quantity of objects of the type within the user-specified time interval. Nodes of the distributed system with objects classified in the type use the determined amount of resources to process the quantity of objects for the type within the user-specified time interval.
Type: Grant
Filed: December 2, 2019
Date of Patent: March 23, 2021
Assignee: International Business Machines Corporation
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
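The core estimate in this abstract is back-of-the-envelope arithmetic: total time for a type is the object count times the per-object time from its resource template, and the resources needed to meet the deadline follow by division. The sketch below assumes perfectly linear parallel scaling, which is a simplification for illustration.

```python
import math

def workers_needed(object_count, per_object_seconds, deadline_seconds):
    """Smallest number of parallel workers that processes all objects of a
    type within the user-specified interval, assuming ideal parallelism."""
    total = object_count * per_object_seconds   # total execution time, serial
    return max(1, math.ceil(total / deadline_seconds))

# Example: 1,000 objects at 2 s each, to finish within a 5-minute interval.
needed = workers_needed(1000, 2.0, 300)
```

Here 1,000 objects x 2 s = 2,000 s of serial work, so seven workers are needed to finish within the 300-second interval; a single worker suffices whenever the serial total already fits the interval.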
-
Patent number: 10929347
Abstract: Provided are a computer program product, system, and method for defragmenting files having file blocks in multiple point-in-time copies. Multiple point-in-time copies are maintained for a file having file blocks. Each point-in-time copy of the file has at least one different block in the storage for at least one of the file blocks in the file. For each of a plurality of the point-in-time copies for the file, the blocks for the file blocks in the point-in-time copy are moved to contiguous locations on the storage.
Type: Grant
Filed: September 4, 2018
Date of Patent: February 23, 2021
Assignee: International Business Machines Corporation
Inventors: Duane M. Baldwin, John T. Olson, Sandeep R. Patil, Riyazahamad M. Shiraguppi
-
Patent number: 10878014
Abstract: An embodiment of the invention may include a method, computer program product, and system for data management. The embodiment may include receiving a login token and instruction from a user device. The login token is associated with a user of the user device. The instruction may be reading or writing profile data belonging to the user. The embodiment may include determining whether a user profile container exists for the user based on the received login token. The embodiment may include creating the user profile container for the user based on determining that the user profile container does not exist. Creating the user profile container for the user may include identifying profile data belonging to the user located in a plurality of locations within a file system and storing identified profile data belonging to the user within a single binary large object. The embodiment may include executing the received instruction.
Type: Grant
Filed: March 29, 2017
Date of Patent: December 29, 2020
Assignee: International Business Machines Corporation
Inventors: John T. Olson, Erik Rueger, Christof Schmitt, Michael L. Taylor
-
Publication number: 20200372114
Abstract: Methods, systems, and computer program products for media language translation and synchronization are provided. Aspects include receiving, by a processor, audio data associated with a speaker, wherein the audio data is in a first language, determining speaker characteristics associated with the speaker from the audio data, converting the audio data to a source text in the first language, converting the source text to a target text, wherein the target text is in a second language, and generating an output audio in the second language for the target text based on the speaker characteristics.
Type: Application
Filed: May 21, 2019
Publication date: November 26, 2020
Inventors: John J. Auvenshine, Anthony Ciaravella, John T. Olson, Richard A. Welp
-
Publication number: 20200293495
Abstract: Provided are a computer program product, system, and method for using and training a machine learning module to determine actions to be taken in response to file system events in a file system. A file system event is detected. An action to be performed corresponding to the file system event is selected from an action list. A determination is made as to whether an outcome in the computing system resulting from the performed action satisfies an outcome threshold. A machine learning module is trained to increase a likelihood of selecting the performed action corresponding to the file system event when the outcome satisfies the outcome threshold. The machine learning module is trained to decrease a likelihood of selecting the performed action corresponding to the file system event when the outcome does not satisfy the outcome threshold.
Type: Application
Filed: March 15, 2019
Publication date: September 17, 2020
Inventors: Subashini Balachandran, John T. Olson
-
Patent number: 10740288
Abstract: Accessing objects in an erasure code supported object storage environment including: receiving, from a requesting entity, a read request for an object stored in the object storage environment; identifying, using a placement data structure, an object fragment location of a first object fragment of the object; calculating, based on a filesystem root inode number and the object fragment location, a first inode address for the first object fragment of the object, wherein the first inode address identifies a location on a first storage node; reading, using the first inode address, the first object fragment and an inode structure, wherein the inode structure for the first inode address comprises a second inode address for a second object fragment of the object; reading the second object fragment using the second inode address, wherein the second inode address identifies a location on a second storage node; and providing, to the requesting entity, a reconstructed object comprising the first object fragment and the second object fragment.
Type: Grant
Filed: December 2, 2016
Date of Patent: August 11, 2020
Assignee: International Business Machines Corporation
Inventors: Sasikanth Eda, Rezaul S. Islam, John T. Olson, Sandeep R. Patil
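The chained-inode read path in this abstract can be modeled simply: each fragment's inode records the inode address of the next fragment, so the object is rebuilt by following the chain from the first inode. The address arithmetic and all names below are placeholders, not the patent's filesystem-specific calculation.

```python
def first_inode_address(root_inode, fragment_location):
    """Derive the first fragment's inode address from the filesystem root
    inode number and the placement-derived fragment location. The simple
    addition here is a stand-in for the real calculation."""
    return root_inode + fragment_location

def read_object(inodes, first_address):
    """Follow next-inode pointers, concatenating fragment data until the
    chain ends, to produce the reconstructed object."""
    data, addr = b"", first_address
    while addr is not None:
        fragment = inodes[addr]           # read inode structure + fragment
        data += fragment["data"]
        addr = fragment["next_inode"]     # address, possibly on another node
    return data

# Two fragments: inode 105 on one node links to inode 230 on another.
inodes = {
    105: {"data": b"hello ", "next_inode": 230},
    230: {"data": b"world", "next_inode": None},
}
obj = read_object(inodes, first_inode_address(100, 5))
```

Embedding the next fragment's inode address in the current fragment's inode structure lets the reader locate every fragment without a second lookup in the placement data structure.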
-
Publication number: 20200236195
Abstract: A system and method dynamically transitions the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments of the invention overcome prior art problems of a storlet in a distributed storage system with a storlet engine having a dynamic role module which dynamically assigns or changes the file system role served by a node to a role better suited to a computation operation in the storlet. The role assignment is made based on a classification of the computation operation and the appropriate file system role that matches the computation operation. For example, a role could be assigned which helps reduce storage needs, communication resources, etc.
Type: Application
Filed: February 25, 2020
Publication date: July 23, 2020
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
-
Patent number: 10681180
Abstract: A system and method dynamically transitions the file system role of compute nodes in a distributed clustered file system for an object that includes an embedded compute engine (a storlet). Embodiments of the invention overcome prior art problems of a storlet in a distributed storage system with a storlet engine having a dynamic role module which dynamically assigns or changes the file system role served by a node to a role better suited to a computation operation in the storlet. The role assignment is made based on a classification of the computation operation and the appropriate file system role that matches the computation operation. For example, a role could be assigned which helps reduce storage needs, communication resources, etc.
Type: Grant
Filed: March 16, 2019
Date of Patent: June 9, 2020
Assignee: International Business Machines Corporation
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
-
Publication number: 20200104179
Abstract: A present invention embodiment manages resources of a distributed system to perform computational tasks within a specified time interval. A received object is classified into a type of computational processing, and a quantity of objects is maintained for each type. An execution time for processing a single object is estimated based on a corresponding computation resource template. A total execution time for the quantity of objects of a type of computational processing is determined based on the estimated execution time. In response to the total execution time exceeding a user-specified time interval, an amount of resources of the distributed system is determined to process the quantity of objects of the type within the user-specified time interval. Nodes of the distributed system with objects classified in the type use the determined amount of resources to process the quantity of objects for the type within the user-specified time interval.
Type: Application
Filed: December 2, 2019
Publication date: April 2, 2020
Inventors: Duane M. Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
-
Publication number: 20200097449
Abstract: A computing environment includes a home site and a cache site with nonhomogeneous thin-provisioned storage configurations. A file management system updates files stored at the cache site. Each updated file has an updated file size and a pre-update file size. When a resynchronization is needed between the cache site and the home site, for example due to an extended communication failure, the storage requirement changes for the updated files are calculated and a notification is sent to the home site. The notification identifies the updated files and the storage requirement changes. The home site sends a reply to the cache site. The reply identifies which files are approved for immediate processing. The cache site transfers resynchronization data for the approved files to the home site, and delays transferring resynchronization data for the unapproved files until subsequent replies from the home site indicate that the previously unapproved files are now approved.
Type: Application
Filed: November 27, 2019
Publication date: March 26, 2020
Inventors: Shah M. R. Islam, John T. Olson, Sandeep R. Patil, Riyazahamad M. Shiraguppi
-
Patent number: 10592415
Abstract: An embodiment of the invention may include a method, computer program product and system for optimizing a wide area network caching infrastructure in a file-based object storage architecture. The embodiment may include creating, by a parent partition, a heat map. The embodiment may include prioritizing prefetching by multiple dependent partitions based on the heat map. In response to prioritized prefetching by the multiple dependent partitions, the embodiment may include allocating wide area network caching threads. The embodiment may include providing, by the parent partition, objects for prefetching by the multiple dependent partitions utilizing the allocated wide area network caching threads.
Type: Grant
Filed: December 1, 2017
Date of Patent: March 17, 2020
Assignee: International Business Machines Corporation
Inventors: Duane Baldwin, Sasikanth Eda, John T. Olson, Sandeep R. Patil
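The heat-map-driven prioritization in this abstract can be sketched as ranking objects by access heat and assigning the fixed pool of WAN caching threads to the hottest objects first. The data shapes and names below are assumptions for illustration only.

```python
def allocate_prefetch(heat_map, thread_pool_size):
    """Return the object ids selected for prefetch, hottest first, limited
    by the number of available WAN caching threads."""
    ranked = sorted(heat_map, key=heat_map.get, reverse=True)
    return ranked[:thread_pool_size]

# Parent partition's heat map: object id -> recent access count.
heat = {"obj-a": 3, "obj-b": 42, "obj-c": 17, "obj-d": 1}
chosen = allocate_prefetch(heat, thread_pool_size=2)
```

With two caching threads available, only the two hottest objects are prefetched; colder objects wait until threads free up or their heat rises.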