Patents by Inventor Bret R. Olszewski
Bret R. Olszewski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9038084
Abstract: Systems, methods and computer program products may provide for managing utilization of one or more physical processors in a shared processor pool. A method of managing utilization of one or more physical processors in a shared processor pool may include determining a current amount of utilization of the one or more physical processors and generating an instruction message. The instruction message may be at least partially determined by the current amount of utilization. The method may further include sending the instruction message to a guest operating system, the guest operating system having a number of enabled virtual processors.
Type: Grant
Filed: February 23, 2012
Date of Patent: May 19, 2015
Assignee: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
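For illustration only, a minimal C sketch of the idea described above, in which the current pool utilization determines an instruction message telling the guest operating system how many virtual processors to keep enabled. The 80%/40% thresholds, the structure name, and the message format are assumptions rather than anything specified by the patent.

    #include <stdio.h>

    /* Hypothetical instruction message: tells a guest OS how many of its
     * virtual processors it should keep enabled. */
    struct vp_instruction {
        int target_enabled_vps;
    };

    /* Derive an instruction from the current utilization of the shared
     * processor pool.  The 80%/40% thresholds are illustrative only. */
    struct vp_instruction build_instruction(double pool_utilization,
                                            int enabled_vps, int max_vps)
    {
        struct vp_instruction msg = { .target_enabled_vps = enabled_vps };

        if (pool_utilization > 0.80 && enabled_vps > 1)
            msg.target_enabled_vps = enabled_vps - 1;   /* pool is busy: fold a VP */
        else if (pool_utilization < 0.40 && enabled_vps < max_vps)
            msg.target_enabled_vps = enabled_vps + 1;   /* pool is idle: unfold a VP */

        return msg;
    }

    int main(void)
    {
        struct vp_instruction msg = build_instruction(0.85, 4, 8);
        printf("guest should run %d virtual processor(s)\n", msg.target_enabled_vps);
        return 0;
    }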
-
Publication number: 20150113226
Abstract: A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval.
Type: Application
Filed: December 5, 2013
Publication date: April 23, 2015
Applicant: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
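A minimal user-space C sketch of the timer-driven behavior described above: a cache manager carries a specified interval and an elimination routine, and the routine runs whenever the timer reaches the interval. The structure, function names, and one-second tick loop are illustrative assumptions, not the claimed kernel implementation.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical cache manager state installed at mount time. */
    struct fs_cache_manager {
        unsigned interval_secs;          /* specified time interval             */
        void (*eliminate)(void);         /* first cache elimination instruction */
        time_t timer_start;              /* first timer                         */
    };

    static void drop_cold_pages(void)    /* placeholder elimination policy */
    {
        puts("releasing least-recently-used file pages");
    }

    int main(void)
    {
        struct fs_cache_manager mgr = { 2, drop_cold_pages, time(NULL) };

        for (int tick = 0; tick < 3; tick++) {        /* stand-in for periodic kernel work */
            sleep(1);
            if (time(NULL) - mgr.timer_start >= mgr.interval_secs) {
                mgr.eliminate();                      /* run the elimination instruction */
                mgr.timer_start = time(NULL);         /* restart the timer               */
            }
        }
        return 0;
    }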
-
Publication number: 20150113225
Abstract: A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval.
Type: Application
Filed: October 18, 2013
Publication date: April 23, 2015
Applicant: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
-
Patent number: 8972669
Abstract: An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory.
Type: Grant
Filed: February 15, 2013
Date of Patent: March 3, 2015
Assignee: International Business Machines Corporation
Inventors: Vaijayanthimala K. Anand, David Navarro, Bret R. Olszewski, Sergio Reyes
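A loose C sketch of the paging path described above, assuming the hypervisor keeps evicted shared-memory pages in a reserved section of the same volatile memory instead of writing them to paging storage. The array sizes and function name are invented for illustration.

    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define SHARED_PAGES   8   /* memory shared across the two virtual machines */
    #define RESERVED_PAGES 8   /* reserved section of the same volatile memory  */

    static char shared_mem[SHARED_PAGES][PAGE_SIZE];
    static char reserved_mem[RESERVED_PAGES][PAGE_SIZE];
    static int  reserved_used;

    /* Hypothetical hypervisor path: instead of writing an evicted shared page
     * to paging storage on disk, keep it in the reserved section of RAM. */
    int page_out_to_reserved(int shared_idx)
    {
        if (reserved_used == RESERVED_PAGES)
            return -1;                                  /* reserved section full    */
        memcpy(reserved_mem[reserved_used], shared_mem[shared_idx], PAGE_SIZE);
        reserved_used++;
        return reserved_used - 1;                       /* slot the page landed in  */
    }

    int main(void)
    {
        memset(shared_mem[3], 0xAB, PAGE_SIZE);         /* pretend VM1 dirtied page 3 */
        int slot = page_out_to_reserved(3);
        printf("shared page 3 paged out to reserved slot %d\n", slot);
        return 0;
    }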
-
Patent number: 8973007
Abstract: According to one aspect of the present disclosure, a method and technique for adaptive lock list searching of waiting threads includes determining an average service time for a lock associated with a shared computing resource; determining an average search time for selecting a thread to next receive the lock from a plurality of threads waiting for the lock; summing the average service time and the average search time; applying a search factor to the summed average service time and average search time to obtain a target search time for searching the waiting threads for selecting the next thread for obtaining the lock; determining a quantity of waiting threads to consider for next obtaining the lock based on the target search time and the average search time, the quantity being less than a total quantity of waiting threads; and identifying the next thread to obtain the lock from the quantity.
Type: Grant
Filed: December 9, 2013
Date of Patent: March 3, 2015
Assignee: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
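One plausible reading of the computation described above, sketched in C: the search factor is applied to the sum of the average service and search times to get a target search time, which then bounds how many waiting threads are examined. The numeric values and the final division are illustrative assumptions, not the claimed formula.

    #include <stdio.h>

    /* Compute how many waiting threads to examine when handing off a lock.
     * Times are in the same unit (e.g. microseconds); search_factor is the
     * tunable described in the abstract.  Values below are made up. */
    unsigned threads_to_consider(double avg_service_time, double avg_search_time,
                                 double search_factor, unsigned total_waiting)
    {
        double target_search_time = search_factor *
                                    (avg_service_time + avg_search_time);
        unsigned quantity = (unsigned)(target_search_time / avg_search_time);

        if (quantity < 1)
            quantity = 1;
        if (quantity >= total_waiting)      /* abstract: quantity < total waiters */
            quantity = total_waiting ? total_waiting - 1 : 0;
        return quantity;
    }

    int main(void)
    {
        /* 50us average hold time, 2us per search, factor 0.25, 100 waiters */
        printf("examine %u of 100 waiting threads\n",
               threads_to_consider(50.0, 2.0, 0.25, 100));
        return 0;
    }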
-
Patent number: 8954974
Abstract: A system and technique for adaptive lock list searching of waiting threads includes logic executable by a processor to: determine an average service time for a lock associated with a shared computing resource; determine an average search time for selecting a thread to next receive the lock from a plurality of threads waiting for the lock; sum the average service time and the average search time; apply a search factor to the summed average service time and average search time to obtain a target search time for searching the waiting threads for selecting the next thread for obtaining the lock; determine a quantity of waiting threads to consider for next obtaining the lock based on the target search time and the average search time, the quantity being less than a total quantity of waiting threads; and identify the next thread to obtain the lock from the quantity.
Type: Grant
Filed: November 10, 2013
Date of Patent: February 10, 2015
Assignee: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
-
Patent number: 8935700
Abstract: Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor.
Type: Grant
Filed: December 13, 2013
Date of Patent: January 13, 2015
Assignee: International Business Machines Corporation
Inventors: Dirk Michel, Bret R. Olszewski, Basu Vaidyanathan
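A rough user-space approximation of the two-level spinning described above, written with pthreads. A plain busy spin stands in for spinning at low hardware-thread priority, and sched_yield() stands in for ceding cycles to the hypervisor; both substitutions are assumptions made only so the sketch can run anywhere.

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* First-level lock in memory visible to all processors. */
    static atomic_int global_lock;          /* 0 = free, 1 = held           */
    /* Second-level lock that the next-in-line waiter spins on locally.     */
    static atomic_int local_lock = 1;       /* 1 = keep waiting locally     */

    /* Thread A: spins directly on the global lock.  On POWER this spin would
     * run at low hardware-thread priority so the core is not ceded to the
     * hypervisor; plain busy-waiting stands in for that here. */
    static void *head_waiter(void *arg)
    {
        (void)arg;
        int expected = 0;
        while (!atomic_compare_exchange_weak(&global_lock, &expected, 1))
            expected = 0;                   /* low-priority spin, no yield  */
        puts("head waiter acquired the resource");
        atomic_store(&local_lock, 0);       /* promote the second waiter    */
        atomic_store(&global_lock, 0);      /* release after using resource */
        return NULL;
    }

    /* Thread B: spins on its processor-local lock and may yield cycles
     * (sched_yield stands in for ceding to the hypervisor) until it is
     * promoted to spin on the global lock. */
    static void *second_waiter(void *arg)
    {
        (void)arg;
        while (atomic_load(&local_lock))
            sched_yield();                  /* may give cycles back         */
        int expected = 0;
        while (!atomic_compare_exchange_weak(&global_lock, &expected, 1))
            expected = 0;                   /* now spin without yielding    */
        puts("second waiter acquired the resource");
        atomic_store(&global_lock, 0);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&b, NULL, second_waiter, NULL);
        pthread_create(&a, NULL, head_waiter, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }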
-
Patent number: 8930670
Abstract: Illustrated embodiments provide a computer implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment the computer implemented method comprises finding an effective page address to real page address mapping for a page address miss in response to determining the page address miss in a page addressing buffer, and saving the mapping as an entry in an array. The computer implemented method creates a preserved array from the array in response to determining the dispatched partition to be an undispatched partition. The computer implemented method further analyzes the preserved array for a compressed page in response to determining the undispatched partition is now redispatched, and decompresses the compressed page prior to the partition being redispatched.
Type: Grant
Filed: November 7, 2007
Date of Patent: January 6, 2015
Assignee: International Business Machines Corporation
Inventors: Vaijayanthimala K. Anand, Bret R. Olszewski, Mysore Sathyanarayana Srinivas
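A small C sketch of the bookkeeping described above: effective-to-real mappings recorded on page-address-buffer misses are preserved when the partition is undispatched, and the preserved array is scanned for compressed pages at redispatch. Array sizes, names, and the decompression stub are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define TRACKED 16

    /* One effective-to-real mapping recorded when a page-address-buffer miss
     * was resolved while the partition was dispatched. */
    struct ea_ra_map {
        unsigned long effective;
        unsigned long real;
        bool          compressed;   /* page was compressed while undispatched */
    };

    static struct ea_ra_map working[TRACKED], preserved[TRACKED];
    static int working_n, preserved_n;

    void record_miss(unsigned long ea, unsigned long ra)
    {
        if (working_n < TRACKED)
            working[working_n++] = (struct ea_ra_map){ ea, ra, false };
    }

    void on_undispatch(void)                       /* keep the footprint list */
    {
        memcpy(preserved, working, sizeof working);
        preserved_n = working_n;
        working_n = 0;
    }

    void on_redispatch(void)
    {
        for (int i = 0; i < preserved_n; i++)
            if (preserved[i].compressed)
                printf("decompress page at real 0x%lx before redispatch\n",
                       preserved[i].real);
    }

    int main(void)
    {
        record_miss(0x1000, 0x9000);
        on_undispatch();
        preserved[0].compressed = true;            /* pretend it was compressed */
        on_redispatch();
        return 0;
    }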
-
Patent number: 8930952
Abstract: Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor.
Type: Grant
Filed: March 21, 2012
Date of Patent: January 6, 2015
Assignee: International Business Machines Corporation
Inventors: Dirk Michel, Bret R. Olszewski, Basu Vaidyanathan
-
Patent number: 8832383
Abstract: A cache entry replacement unit can delay replacement of more valuable entries by replacing less valuable entries. When a miss occurs, the cache entry replacement unit can determine a cache entry for replacement (“a replacement entry”) based on a generic replacement technique. If the replacement entry is an entry that should be protected from replacement (e.g., a large page entry), the cache entry replacement unit can determine a second replacement entry. The cache entry replacement unit can “skip” the first replacement entry by replacing the second replacement entry with a new entry, if the second replacement entry is an entry that should not be protected (e.g., a small page entry). The first replacement entry can be skipped a predefined number of times before the first replacement entry is replaced with a new entry.
Type: Grant
Filed: May 20, 2013
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Bret R. Olszewski, Basu Vaidyanathan, Steven W. White
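A compact C sketch of the skip policy described above, using a round-robin victim choice as the generic replacement technique. The entry count, MAX_SKIPS value, and large-page flag are assumptions made for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define ENTRIES   4
    #define MAX_SKIPS 2      /* times a protected entry may be passed over */

    struct tlb_entry {
        bool large_page;     /* "more valuable" entry to protect from replacement */
        int  skips;          /* times this entry has been skipped as the victim   */
    };

    static struct tlb_entry set[ENTRIES];
    static int next_victim;  /* stand-in for the generic (round-robin) policy     */

    int pick_victim(void)
    {
        for (int tries = 0; tries < ENTRIES; tries++) {
            int v = (next_victim + tries) % ENTRIES;
            if (!set[v].large_page || set[v].skips >= MAX_SKIPS) {
                set[v].skips = 0;
                next_victim = (v + 1) % ENTRIES;
                return v;                 /* small page, or skipped often enough   */
            }
            set[v].skips++;               /* protect the large-page entry, move on */
        }
        return next_victim;               /* every entry protected: fall back      */
    }

    int main(void)
    {
        set[0].large_page = true;         /* entry 0 holds a large-page translation */
        for (int miss = 0; miss < 4; miss++)
            printf("miss %d replaces entry %d\n", miss, pick_victim());
        return 0;
    }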
-
Publication number: 20140223108
Abstract: This disclosure includes a method for managing hardware prefetch policy of a partition in a partitioned environment which includes dispatching a virtual processor on a physical processor of a first node, assigning a home memory partition of a memory of a second node to the virtual processor, determining whether the first node and the second node are different nodes, disabling hardware prefetch for the virtual processor when the first node and the second node are different nodes, and enabling hardware prefetch when the first node and the second node are the same physical node.
Type: Application
Filed: February 7, 2013
Publication date: August 7, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Peter J. Heyrman, Bret R. Olszewski
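The dispatch-time decision described above reduces to a node comparison; a tiny C sketch (function and variable names invented) follows.

    #include <stdbool.h>
    #include <stdio.h>

    /* Decide the hardware prefetch setting for a virtual processor at dispatch
     * time: prefetch helps when the streamed memory is local, and tends to
     * waste interconnect bandwidth when it is on a remote node. */
    bool prefetch_enabled(int dispatch_node, int home_memory_node)
    {
        return dispatch_node == home_memory_node;
    }

    int main(void)
    {
        /* vCPU dispatched on node 1 while its partition's home memory is node 0 */
        printf("prefetch %s\n", prefetch_enabled(1, 0) ? "enabled" : "disabled");
        printf("prefetch %s\n", prefetch_enabled(0, 0) ? "enabled" : "disabled");
        return 0;
    }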
-
Publication number: 20140223109
Abstract: This disclosure includes a method for managing hardware prefetch policy of a partition in a partitioned environment which includes dispatching a virtual processor on a physical processor of a first node, assigning a home memory partition of a memory of a second node to the virtual processor, determining whether the first node and the second node are different nodes, disabling hardware prefetch for the virtual processor when the first node and the second node are different nodes, and enabling hardware prefetch when the first node and the second node are the same physical node.
Type: Application
Filed: January 9, 2014
Publication date: August 7, 2014
Applicant: International Business Machines Corporation
Inventors: Peter J. Heyrman, Bret R. Olszewski
-
Patent number: 8775749
Abstract: Management of UNIX-style storage pools is enhanced by specially managing one or more memory management inodes associated with pinned and allocated pages of data storage by providing indirect access to the pinned and allocated pages by one or more user processes via a handle, while preventing direct access of the pinned and allocated pages by the user processes without use of the handles; scanning periodically hardware status bits in the inodes to determine which of the pinned and allocated pages have been recently accessed within a pre-determined period of time; requesting via a callback communication to each user process to determine which of the least-recently accessed pinned and allocated pages can be either deallocated or defragmented and compacted; and responsive to receiving one or more page indicators of pages unpinned by the user processes, compacting or deallocating one or more pages corresponding to the page indicators.
Type: Grant
Filed: June 26, 2013
Date of Patent: July 8, 2014
Assignee: International Business Machines Corporation
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
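A simplified C sketch of the scan-and-callback cycle described above: pages whose hardware reference bit stayed clear since the last scan are offered back to the owning process, and those it releases are deallocated. The page-pool layout and callback signature are assumptions, not the patented interfaces.

    #include <stdbool.h>
    #include <stdio.h>

    #define POOL_PAGES 8

    /* Per-page state kept in the hypothetical memory-management inode. */
    struct mm_page {
        bool pinned;
        bool referenced;      /* hardware status bit, cleared on each scan */
        bool allocated;
    };

    static struct mm_page pool[POOL_PAGES];

    /* Periodic scan: note which pinned pages were not touched since the last
     * scan and ask the owning process (via callback) whether they may go. */
    typedef bool (*unpin_cb)(int page);

    void scan_and_reclaim(unpin_cb may_unpin)
    {
        for (int i = 0; i < POOL_PAGES; i++) {
            if (pool[i].allocated && pool[i].pinned && !pool[i].referenced &&
                may_unpin(i)) {
                pool[i].pinned = pool[i].allocated = false;   /* deallocate */
                printf("page %d reclaimed\n", i);
            }
            pool[i].referenced = false;       /* reset the hardware bit */
        }
    }

    static bool app_callback(int page)        /* the user process's answer */
    {
        return page != 2;                     /* pretend page 2 is still needed */
    }

    int main(void)
    {
        for (int i = 0; i < 4; i++)
            pool[i] = (struct mm_page){ .pinned = true, .allocated = true };
        pool[0].referenced = true;            /* page 0 was recently accessed */
        scan_and_reclaim(app_callback);
        return 0;
    }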
-
Publication number: 20140149672
Abstract: An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, then the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page from the file cache.
Type: Application
Filed: November 26, 2012
Publication date: May 29, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
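A small C sketch of the history-buffer check described above: a page that has already caused a repage fault is kept in the file cache, while other candidate previous pages are discarded. Buffer size and function names are illustrative assumptions.

    #include <stdbool.h>
    #include <stdio.h>

    #define HISTORY 8

    /* One history-buffer entry: a page that faulted, plus whether it later
     * faulted again after being released (a repage fault). */
    struct hist_entry {
        long page;
        bool repaged;
    };

    static struct hist_entry history[HISTORY];
    static int hist_n;

    static struct hist_entry *lookup(long page)
    {
        for (int i = 0; i < hist_n; i++)
            if (history[i].page == page)
                return &history[i];
        return NULL;
    }

    void note_fault(long page)
    {
        struct hist_entry *e = lookup(page);
        if (e)
            e->repaged = true;               /* seen before: this is a repage fault */
        else if (hist_n < HISTORY)
            history[hist_n++] = (struct hist_entry){ page, false };
    }

    /* Release-behind decision for the candidate previous page. */
    bool should_discard(long candidate_page)
    {
        struct hist_entry *e = lookup(candidate_page);
        return !(e && e->repaged);           /* keep pages that keep coming back */
    }

    int main(void)
    {
        note_fault(42); note_fault(42);      /* page 42 faulted again after release */
        note_fault(7);
        printf("discard 42? %s\n", should_discard(42) ? "yes" : "no");
        printf("discard 7?  %s\n", should_discard(7)  ? "yes" : "no");
        return 0;
    }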
-
Publication number: 20140149675
Abstract: An information handling system (IHS) includes an operating system with a release-behind component that determines which file pages to release from a file cache in system memory. The release-behind component employs a history buffer to determine which file pages to release from the file cache to create room for a current page access. The history buffer stores entries that identify respective pages for which a page fault occurred. For each identified page, the history buffer stores respective repage information that indicates if a repage fault occurred for such page. The release-behind component identifies a candidate previous page for release from the file cache. The release-behind component checks the history buffer to determine if a repage fault occurred for that entry. If so, then the release-behind component does not discard the candidate previous page from the cache. Otherwise, the release-behind component discards the candidate previous page from the file cache.
Type: Application
Filed: December 11, 2013
Publication date: May 29, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Matthew Accapadi, Grover C. Davidson, Dirk Michel, Bret R. Olszewski
-
Publication number: 20140101662
Abstract: Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor.
Type: Application
Filed: December 13, 2013
Publication date: April 10, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dirk Michel, Bret R. Olszewski, Basu Vaidyanathan
-
Patent number: 8639909
Abstract: A virtual memory management unit can implement various techniques for managing paging space. The virtual memory management unit can monitor a number of unallocated large sized pages and can determine when the number of unallocated large sized pages drops below a page threshold. Unallocated contiguous smaller-sized pages can be aggregated to obtain unallocated larger-sized pages, which can then be allocated to processes as required to improve efficiency of disk I/O operations. Allocated smaller-sized pages can also be reorganized to obtain the unallocated contiguous smaller-sized pages that can then be aggregated to yield the larger-sized pages. Furthermore, content can also be compressed before being written to the paging space to reduce the number of pages that are to be allocated to processes. This can enable efficient management of the paging space without terminating processes.
Type: Grant
Filed: February 22, 2013
Date of Patent: January 28, 2014
Assignee: International Business Machines Corporation
Inventors: Bret R. Olszewski, Basu Vaidyanathan
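A minimal C sketch of the threshold check described above: free large pages are counted as runs of contiguous unallocated small pages, and falling below the threshold would trigger reorganization or compression. The page counts and the 16:1 ratio are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define SMALL_PAGES     64
    #define SMALL_PER_LARGE 16     /* e.g. 4 KB pages aggregated into 64 KB pages */
    #define LARGE_THRESHOLD 2

    static bool allocated[SMALL_PAGES];   /* small-page allocation map */

    /* Count large-page-sized runs of contiguous free small pages. */
    int free_large_pages(void)
    {
        int count = 0;
        for (int i = 0; i < SMALL_PAGES; i += SMALL_PER_LARGE) {
            bool run_free = true;
            for (int j = 0; j < SMALL_PER_LARGE; j++)
                run_free = run_free && !allocated[i + j];
            count += run_free;
        }
        return count;
    }

    int main(void)
    {
        allocated[0] = allocated[17] = allocated[33] = true;   /* fragmentation */
        if (free_large_pages() < LARGE_THRESHOLD)
            puts("below threshold: migrate small pages or compress to free a run");
        else
            puts("enough unallocated large pages");
        return 0;
    }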
-
Publication number: 20140007124
Abstract: Automated techniques ensure that system central processing unit (“CPU”) power is not a bottleneck when migrating logical partitions from one system to another system or systems (e.g., in the event of a system evacuation). CPU resources needed to fully drive available bandwidth during the migration are computed. CPU resources of the system are then adjusted for the migration, which may comprise scaling down the CPU resources that are guaranteed for the executing partitions and/or adjusting relative partition variable weights to limit the amount of excess capacity that can be allocated to a partition.
Type: Application
Filed: June 27, 2012
Publication date: January 2, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rohith K. Ashok, Agustin Mena, III, Thuy B. Nguyen, Bret R. Olszewski
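An arithmetic sketch in C of the adjustment described above: the CPU needed to drive the migration bandwidth is computed, and if spare capacity cannot cover it, the partitions' guaranteed entitlements are scaled down proportionally. The per-Gbit/s cost and partition figures are made-up examples.

    #include <stdio.h>

    #define NPART 3

    /* Reserve enough CPU to drive the migration bandwidth by proportionally
     * scaling down the CPU guaranteed to the running partitions. */
    int main(void)
    {
        double entitled[NPART] = { 4.0, 3.0, 1.0 };   /* guaranteed processor units */
        double total_cpu       = 10.0;
        double bandwidth_gbps  = 20.0;
        double cpu_per_gbps    = 0.15;                /* CPU cost of moving 1 Gbit/s */

        double needed = bandwidth_gbps * cpu_per_gbps;                 /* 3.0 units */
        double sum    = entitled[0] + entitled[1] + entitled[2];
        double spare  = total_cpu - sum;

        if (spare < needed) {
            double scale = (total_cpu - needed) / sum;
            for (int i = 0; i < NPART; i++)
                entitled[i] *= scale;                  /* scale down guarantees */
        }
        for (int i = 0; i < NPART; i++)
            printf("partition %d entitlement: %.2f units\n", i, entitled[i]);
        return 0;
    }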
-
Publication number: 20140006741
Abstract: Automated techniques ensure that system central processing unit (“CPU”) power is not a bottleneck when migrating logical partitions from one system to another system or systems (e.g., in the event of a system evacuation). CPU resources needed to fully drive available bandwidth during the migration are computed. CPU resources of the system are then adjusted for the migration, which may comprise scaling down the CPU resources that are guaranteed for the executing partitions and/or adjusting relative partition variable weights to limit the amount of excess capacity that can be allocated to a partition.
Type: Application
Filed: February 14, 2013
Publication date: January 2, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rohith K. Ashok, Agustin Mena, III, Thuy B. Nguyen, Bret R. Olszewski
-
Publication number: 20130346967
Abstract: A technique for determining placement fitness for partitions under a hypervisor in a host computing system having non-uniform memory access (NUMA) nodes. In an embodiment, a partition resource specification is received from a partition score requester. The partition resource specification identifies a set of computing resources needed for a virtual machine partition to be created by a hypervisor in the host computing system. Resource availability within the NUMA nodes of the host computing system is assessed to determine possible partition placement options. A partition fitness score of a most suitable one of the partition placement options is calculated. The partition fitness score is reported to the partition score requester.
Type: Application
Filed: June 21, 2012
Publication date: December 26, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Vaijayanthimala K. Anand, Richard Mankowski, Bret R. Olszewski, Sergio Reyes
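A toy C sketch of a fitness scoring pass over the NUMA nodes described above; the scoring weights (100 for a single-node fit, partial credit otherwise) are invented and stand in for whatever scoring the actual hypervisor query uses.

    #include <stdio.h>

    #define NODES 4

    struct numa_node { int free_cpus; long free_mem_gb; };

    /* Score the placement options for a requested partition: a request that
     * fits entirely inside one node scores highest, and the best option's
     * score is reported back to the requester. */
    int fitness_score(const struct numa_node nodes[], int want_cpus, long want_mem_gb)
    {
        int best = 0;
        for (int i = 0; i < NODES; i++) {
            int score = 0;
            if (nodes[i].free_cpus >= want_cpus && nodes[i].free_mem_gb >= want_mem_gb)
                score = 100;                                  /* fits in one node       */
            else if (nodes[i].free_cpus > 0)
                score = 50 * nodes[i].free_cpus / want_cpus;  /* partial, spans nodes   */
            if (score > best)
                best = score;                                 /* most suitable option   */
        }
        return best;
    }

    int main(void)
    {
        struct numa_node nodes[NODES] = {
            { 2, 16 }, { 8, 128 }, { 4, 32 }, { 0, 0 }
        };
        printf("partition fitness score: %d\n", fitness_score(nodes, 6, 64));
        return 0;
    }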