Patents by Inventor Bret R. Olszewski

Bret R. Olszewski has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11620154
    Abstract: In a computing system, an application thread is executed on a hardware thread. Based on a configuration of the computing system, a first threshold is determined comprising a threshold percentage of execution time spent servicing a set of interrupts to the application thread relative to a total execution time for the hardware thread. For the hardware thread, a length of a first time period spent servicing an interrupt in the set of interrupts and a length of a second time period spent executing the application thread are measured. A cumulative percentage of execution time spent in the first time period relative to execution time spent in the first time period and the second time period is calculated. Responsive to the cumulative percentage being above the threshold percentage, interrupt servicing on the hardware thread is disabled.
    Type: Grant
    Filed: January 2, 2020
    Date of Patent: April 4, 2023
    Assignee: International Business Machines Corporation
    Inventors: Dirk Michel, Bret R. Olszewski, Matthew R. Ochs
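    A minimal C sketch of the threshold check described in this abstract: accumulate interrupt-servicing time versus application time on a hardware thread and disable interrupt servicing once the interrupt share crosses a threshold. The struct fields, the 10% threshold, and the way "disabling" is represented are illustrative assumptions, not the patented implementation.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    struct hw_thread_stats {
        double interrupt_time;    /* time spent servicing interrupts (seconds) */
        double app_time;          /* time spent executing the application thread */
        bool   interrupts_enabled;
    };

    /* Disable interrupt servicing when interrupts consume too large a share
     * of the hardware thread's execution time. */
    static void check_interrupt_load(struct hw_thread_stats *hw, double threshold_pct)
    {
        double total = hw->interrupt_time + hw->app_time;
        if (total <= 0.0)
            return;
        double pct = 100.0 * hw->interrupt_time / total;
        if (pct > threshold_pct && hw->interrupts_enabled) {
            hw->interrupts_enabled = false;   /* stand-in for rerouting/disabling */
            printf("interrupt load %.1f%% > %.1f%%: disabling interrupt servicing\n",
                   pct, threshold_pct);
        }
    }

    int main(void)
    {
        struct hw_thread_stats hw = { .interrupt_time = 0.3, .app_time = 1.7,
                                      .interrupts_enabled = true };
        check_interrupt_load(&hw, 10.0);   /* 15% > 10%, so servicing is disabled */
        return 0;
    }
    ```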
  • Patent number: 11561899
    Abstract: Disclosed is a computer implemented method to manage a cache, the method comprising determining that a primary application opens a first file, wherein opening the first file includes reading the first file into a file cache from a storage. The method also includes setting a first monitoring variable in the primary application's proc structure, wherein the first monitoring variable is set in response to the primary application opening the first file and records a set of operations completed on the first file by the primary application. The method further comprises a first read of the first file occurring at a beginning of the first file. The method includes identifying that the first file is read according to a pattern that includes reading the first file sequentially and reading the first file entirely, and removing the first file from the file cache.
    Type: Grant
    Filed: May 29, 2019
    Date of Patent: January 24, 2023
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Bret R. Olszewski, Grover Cleveland Davidson, II, Chad Collie
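    A user-space C sketch of the read-pattern check in this abstract, using posix_fadvise(POSIX_FADV_DONTNEED) as a stand-in for removing the file from the file cache; the struct, field names, and the choice of /etc/hosts as a sample file are assumptions for illustration.

    ```c
    #include <fcntl.h>
    #include <stdbool.h>
    #include <sys/stat.h>
    #include <unistd.h>

    struct file_monitor {          /* per-open-file monitoring variable */
        off_t next_expected;       /* next offset if reads stay sequential */
        off_t bytes_read;          /* total bytes read so far */
        bool  sequential;          /* true while every read continued the last */
    };

    /* Record one read; clears 'sequential' if the read was not contiguous. */
    static void monitor_read(struct file_monitor *m, off_t offset, size_t len)
    {
        if (offset != m->next_expected)
            m->sequential = false;
        m->next_expected = offset + (off_t)len;
        m->bytes_read  += (off_t)len;
    }

    /* If the file was read sequentially from the beginning and in its
       entirety, its cached pages are unlikely to be reused: drop them. */
    static void maybe_drop_cache(int fd, const struct file_monitor *m)
    {
        struct stat st;
        if (fstat(fd, &st) == 0 && m->sequential && m->bytes_read >= st.st_size)
            posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }

    int main(void)
    {
        int fd = open("/etc/hosts", O_RDONLY);   /* any readable file will do */
        if (fd < 0)
            return 1;
        char buf[4096];
        struct file_monitor m = { .sequential = true };
        ssize_t n;
        off_t off = 0;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            monitor_read(&m, off, (size_t)n);
            off += n;
        }
        maybe_drop_cache(fd, &m);
        close(fd);
        return 0;
    }
    ```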
  • Publication number: 20220413902
    Abstract: An embodiment includes issuing an indication that a thread is a time-critical thread. The embodiment initiates an active partition migration, from a source server to a destination server, of a source partition on which the program containing the time-critical thread is stored. The embodiment stores, during the migration, records of locations of pages in memory referenced by the time-critical thread. The embodiment detects that a migration threshold has been reached, indicative of a threshold portion of the migration being complete. Responsive to detecting the migration threshold, the embodiment performs a priority migration of the time-critical thread. The priority migration includes suspending execution of the time-critical thread at the source server, retrieving the records of the locations of the pages in memory referenced by the time-critical thread, and issuing a command to transfer content from the pages to the destination server. The embodiment also includes issuing a migration command to complete the migration.
    Type: Application
    Filed: June 23, 2021
    Publication date: December 29, 2022
    Applicant: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Arnold Flores, Peter J. Heyrman, Tommy Tse
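    An illustrative C sketch of the priority-migration ordering in this abstract: record the pages the time-critical thread touches, then, once a migration threshold is reached, suspend the thread and push its pages ahead of the rest. The page list, the 90% threshold, and the transfer/suspend hooks are assumptions, not IBM's implementation.

    ```c
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_TRACKED_PAGES 1024

    struct critical_thread {
        unsigned long pages[MAX_TRACKED_PAGES]; /* page frames it referenced */
        size_t        npages;
        int           suspended;
    };

    /* Record a page referenced by the time-critical thread during migration. */
    static void track_page(struct critical_thread *t, unsigned long pfn)
    {
        if (t->npages < MAX_TRACKED_PAGES)
            t->pages[t->npages++] = pfn;
    }

    /* Once enough of the partition has migrated, suspend the thread and push
       its working set to the destination ahead of the remaining pages. */
    static void priority_migrate(struct critical_thread *t, double done_fraction,
                                 double threshold)
    {
        if (done_fraction < threshold || t->suspended)
            return;
        t->suspended = 1;                               /* suspend at the source */
        for (size_t i = 0; i < t->npages; i++)
            printf("transfer page %lu to destination\n", t->pages[i]);
        printf("issue final migration command\n");      /* complete the migration */
    }

    int main(void)
    {
        struct critical_thread t = { .npages = 0, .suspended = 0 };
        track_page(&t, 0x1000);
        track_page(&t, 0x2040);
        priority_migrate(&t, 0.95, 0.90);   /* 95% migrated >= 90% threshold */
        return 0;
    }
    ```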
  • Patent number: 11157061
    Abstract: A process for processor management includes activating a delay thread running on a processor. A determination is made whether a wait event for a first thread running on the processor is in a queue. Responsive to determining that the wait event for the first thread is in the queue, a determination is made whether a wait time associated with the wait event has expired. Responsive to determining that the wait time has not expired, a determination is made whether the wait time exceeds a threshold. Responsive to determining that the wait time exceeds the threshold, a timer is set and a low power mode is initiated for the processor.
    Type: Grant
    Filed: February 4, 2018
    Date of Patent: October 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bernard A. King-Smith, Bret R. Olszewski, Stephen Rees, Basu Vaidyanathan
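    A minimal C sketch of the delay-thread decision in this abstract: if a queued wait event has not expired and its remaining wait exceeds a threshold, arm a wakeup timer and enter a low-power mode. The struct, the microsecond threshold, and the low-power hooks are illustrative assumptions.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    struct wait_event {
        bool     present;        /* a wait event is queued for the first thread */
        bool     expired;        /* its wait time has already elapsed */
        unsigned remaining_us;   /* time left before the event fires */
    };

    /* Decision made by the delay thread running on an otherwise idle processor. */
    static void delay_thread_step(const struct wait_event *ev, unsigned threshold_us)
    {
        if (!ev->present || ev->expired)
            return;                            /* nothing worth sleeping for */
        if (ev->remaining_us > threshold_us) {
            printf("arming wakeup timer for %u us\n", ev->remaining_us);
            printf("entering processor low-power mode\n");
        }
    }

    int main(void)
    {
        struct wait_event ev = { .present = true, .expired = false,
                                 .remaining_us = 5000 };
        delay_thread_step(&ev, 1000);   /* 5000 us > 1000 us, so nap the core */
        return 0;
    }
    ```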
  • Publication number: 20210208926
    Abstract: In a computing system, an application thread is executed on a hardware thread. Based on a configuration of the computing system, a first threshold is determined comprising a threshold percentage of execution time spent servicing a set of interrupts to the application thread relative to a total execution time for the hardware thread. For the hardware thread, a length of a first time period spent servicing an interrupt in the set of interrupts and a length of a second time period spent executing the application thread are measured. A cumulative percentage of execution time spent in the first time period relative to execution time spent in the first time period and the second time period is calculated. Responsive to the cumulative percentage being above the threshold percentage, interrupt servicing on the hardware thread is disabled.
    Type: Application
    Filed: January 2, 2020
    Publication date: July 8, 2021
    Applicant: International Business Machines Corporation
    Inventors: Dirk Michel, Bret R. Olszewski, Matthew R. Ochs
  • Patent number: 10929062
    Abstract: Embodiments of the present invention facilitate gracefully degrading performance while gradually throttling memory due to dynamic thermal conditions. An example method includes receiving, by pre-fetch throttling logic, a pre-fetch command requesting data from a memory and a priority level of the pre-fetch command. The priority level of the pre-fetch command indicates a likelihood that data requested by the pre-fetch command will be utilized by a processor. Thermal condition data from one or more sensors is received by the pre-fetch throttling logic. It is determined whether the pre-fetch command should be issued to the memory. The determining is based at least in part on the priority level of the pre-fetch command and the thermal condition data. The pre-fetch command is issued to the memory or prevented from being issued to the memory based at least in part on the determining.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: February 23, 2021
    Assignee: International Business Machines Corporation
    Inventors: Hoa C. Nguyen, Bret R. Olszewski, Ram Raghavan
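    A hedged C sketch of the prefetch-throttling decision in this abstract: whether to issue a prefetch given its priority and the current thermal reading, with the degradation applied gradually in tiers. The enum values and temperature cutoffs are assumptions chosen for illustration.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    enum prefetch_priority { PF_LOW, PF_MEDIUM, PF_HIGH };

    /* Issue the prefetch only if its likely usefulness justifies the extra
       memory activity at the current temperature. */
    static bool should_issue_prefetch(enum prefetch_priority prio, double temp_c)
    {
        if (temp_c < 70.0) return true;              /* cool: issue everything    */
        if (temp_c < 85.0) return prio >= PF_MEDIUM; /* warm: drop low-value ones  */
        if (temp_c < 95.0) return prio == PF_HIGH;   /* hot: only likely hits      */
        return false;                                /* critical: throttle fully   */
    }

    int main(void)
    {
        printf("low-priority prefetch at 80C issued? %s\n",
               should_issue_prefetch(PF_LOW, 80.0) ? "yes" : "no");
        printf("high-priority prefetch at 90C issued? %s\n",
               should_issue_prefetch(PF_HIGH, 90.0) ? "yes" : "no");
        return 0;
    }
    ```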
  • Publication number: 20200379906
    Abstract: Disclosed is a computer implemented method to manage a cache, the method comprising determining that a primary application opens a first file, wherein opening the first file includes reading the first file into a file cache from a storage. The method also includes setting a first monitoring variable in the primary application's proc structure, wherein the first monitoring variable is set in response to the primary application opening the first file and records a set of operations completed on the first file by the primary application. The method further comprises a first read of the first file occurring at a beginning of the first file. The method includes identifying that the first file is read according to a pattern that includes reading the first file sequentially and reading the first file entirely, and removing the first file from the file cache.
    Type: Application
    Filed: May 29, 2019
    Publication date: December 3, 2020
    Inventors: Mathew Accapadi, Bret R. Olszewski, Grover Cleveland Davidson, II, Chad Collie
  • Patent number: 10831539
    Abstract: Examples of techniques for hardware thread switching for scheduling policy in a processor are described herein. An aspect includes, based on receiving a request from a first software thread to dispatch to a first hardware thread, determining that the first hardware thread is occupied by a second software thread that has a higher priority than the first software thread. Another aspect includes issuing an interrupt to switch the second software thread from the first hardware thread to a second hardware thread. Another aspect includes, based on switching of the second software thread from the first hardware thread to the second hardware thread, dispatching the first software thread to the first hardware thread.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: November 10, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Chad Collie, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
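    An illustrative C sketch of the dispatch decision in this abstract: if the requested hardware thread is occupied by a higher-priority software thread, that thread is switched to another hardware thread before the requester is dispatched. The structures, priority convention, and move/dispatch hooks are assumptions, not the patented code.

    ```c
    #include <stdio.h>

    struct sw_thread { int id; int priority; };          /* larger = higher priority */
    struct hw_thread { int id; struct sw_thread *owner; };

    static void dispatch(struct sw_thread *req, struct hw_thread *target,
                         struct hw_thread *spare)
    {
        struct sw_thread *occupant = target->owner;
        if (occupant && occupant->priority > req->priority) {
            /* "Interrupt" the occupant and switch it to the spare hardware thread. */
            printf("moving thread %d (prio %d) from hw%d to hw%d\n",
                   occupant->id, occupant->priority, target->id, spare->id);
            spare->owner = occupant;
        }
        target->owner = req;                              /* now dispatch the requester */
        printf("dispatched thread %d (prio %d) on hw%d\n",
               req->id, req->priority, target->id);
    }

    int main(void)
    {
        struct sw_thread high = { .id = 1, .priority = 20 };
        struct sw_thread low  = { .id = 2, .priority = 10 };
        struct hw_thread hw0  = { .id = 0, .owner = &high };
        struct hw_thread hw1  = { .id = 1, .owner = NULL };
        dispatch(&low, &hw0, &hw1);   /* high-priority occupant is switched to hw1 */
        return 0;
    }
    ```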
  • Publication number: 20200301735
    Abstract: Examples of techniques for hardware thread switching for scheduling policy in a processor are described herein. An aspect includes, based on receiving a request from a first software thread to dispatch to a first hardware thread, determining that the first hardware thread is occupied by a second software thread that has a higher priority than the first software thread. Another aspect includes issuing an interrupt to switch the second software thread from the first hardware thread to a second hardware thread. Another aspect includes, based on switching of the second software thread from the first hardware thread to the second hardware thread, dispatching the first software thread to the first hardware thread.
    Type: Application
    Filed: March 18, 2019
    Publication date: September 24, 2020
    Inventors: Mathew Accapadi, Chad Collie, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
  • Patent number: 10740234
    Abstract: An approach is provided in which a first core broadcasts a cache line request in response to detecting a cache miss corresponding to a first virtual central processing unit (VCPU) executing on the first core. Next, the first core receives, from a second core responding to the cache line request, a cache line response that includes tag extension data. The first core determines a cache miss type of the cache miss based on the tag extension data and, in turn, sends the cache miss type to a hypervisor that utilizes the cache miss type during a future VCPU dispatch selection.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Ram Raghavan, Maria Lorena Pesantez, Gayathri Mohan
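    A simplified C sketch of classifying a cache miss from tag extension data carried in another core's cache line response, as this abstract describes, and reporting the type for use in a later VCPU dispatch decision. The bit layout, miss categories, and hypervisor hook are assumptions for illustration.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    enum miss_type { MISS_LOCAL_CHIP, MISS_REMOTE_CHIP, MISS_MEMORY };

    /* Assumed tag-extension layout: bit 0 = line came from another chip,
       bit 1 = line had to be fetched from memory rather than a cache. */
    static enum miss_type classify_miss(uint8_t tag_ext)
    {
        if (tag_ext & 0x2) return MISS_MEMORY;
        if (tag_ext & 0x1) return MISS_REMOTE_CHIP;
        return MISS_LOCAL_CHIP;
    }

    /* Stand-in for reporting the miss type to the hypervisor, which can favor
       placing the VCPU near its data on a future dispatch. */
    static void report_to_hypervisor(int vcpu, enum miss_type t)
    {
        static const char *names[] = { "local-chip", "remote-chip", "memory" };
        printf("vcpu %d: cache miss type %s\n", vcpu, names[t]);
    }

    int main(void)
    {
        report_to_hypervisor(3, classify_miss(0x1));   /* remote-chip miss */
        return 0;
    }
    ```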
  • Publication number: 20200142635
    Abstract: Embodiments of the present invention facilitate gracefully degrading performance while gradually throttling memory due to dynamic thermal conditions. An example method includes receiving, by pre-fetch throttling logic, a pre-fetch command requesting data from a memory and a priority level of the pre-fetch command. The priority level of the pre-fetch command indicates a likelihood that data requested by the pre-fetch command will be utilized by a processor. Thermal condition data from one or more sensors is received by the pre-fetch throttling logic. It is determined whether the pre-fetch command should be issued to the memory. The determining is based at least in part on the priority level of the pre-fetch command and the thermal condition data. The pre-fetch command is issued to the memory or prevented from being issued to the memory based at least in part on the determining.
    Type: Application
    Filed: November 7, 2018
    Publication date: May 7, 2020
    Inventors: Hoa C. Nguyen, Bret R. Olszewski, Ram Raghavan
  • Publication number: 20200073803
    Abstract: An approach is provided in which a first core broadcasts a cache line request in response to detecting a cache miss corresponding to a first virtual central processing unit (VCPU) executing on the first core. Next, the first core receives, from a second core responding to the cache line request, a cache line response that includes tag extension data. The first core determines a cache miss type of the cache miss based on the tag extension data and, in turn, sends the cache miss type to a hypervisor that utilizes the cache miss type during a future VCPU dispatch selection.
    Type: Application
    Filed: September 4, 2018
    Publication date: March 5, 2020
    Inventors: Bret R. Olszewski, Ram Raghavan, Maria Lorena Pesantez, Gayathri Mohan
  • Patent number: 10572411
    Abstract: According to one exemplary embodiment, a method for preventing a software thread from being blocked due to processing an external device interrupt is provided. The method may include receiving the software thread, whereby the software thread has an associated interrupt avoidance variable. The method may also include determining a processor to receive the software thread. The method may then include sending the software thread to the determined processor. The method may further include setting an interrupt mask bit associated with the processor based on the interrupt avoidance variable. The method may also include receiving the external device interrupt. The method may then include redirecting the received external device interrupt to a second processor, whereby the redirecting is based on the interrupt mask bit.
    Type: Grant
    Filed: June 25, 2018
    Date of Patent: February 25, 2020
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
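    A minimal C sketch of the interrupt-avoidance routing in this abstract: when a thread carrying the avoidance variable is dispatched, its processor is masked for external device interrupts, and a later interrupt is redirected to another processor. The names, types, and two-CPU setup are assumptions.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    struct cpu { int id; bool intr_masked; };

    /* Called when a thread with the interrupt-avoidance variable set is
       dispatched: mark its processor as masked for external interrupts. */
    static void dispatch_thread(struct cpu *target, bool avoid_interrupts)
    {
        target->intr_masked = avoid_interrupts;
        printf("thread dispatched to cpu %d (interrupts %s)\n",
               target->id, avoid_interrupts ? "masked" : "allowed");
    }

    /* Deliver an external device interrupt, redirecting it if the preferred
       processor is masked. */
    static void deliver_interrupt(struct cpu *preferred, struct cpu *fallback)
    {
        struct cpu *target = preferred->intr_masked ? fallback : preferred;
        printf("external interrupt delivered to cpu %d\n", target->id);
    }

    int main(void)
    {
        struct cpu cpu0 = { .id = 0 }, cpu1 = { .id = 1 };
        dispatch_thread(&cpu0, true);      /* thread wants to run uninterrupted */
        deliver_interrupt(&cpu0, &cpu1);   /* interrupt is redirected to cpu 1 */
        return 0;
    }
    ```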
  • Patent number: 10540206
    Abstract: A method, program product, and system are provided for dynamic virtual processor management in a computer having a plurality of concurrent multi-threaded physical processors. An active logical partition is assigned to one of a plurality of shared processor pools, each shared processor pool having a virtual processor manager mode. A target performance metric for a workload in the active logical partition is compared to a calculated CPU utilization ratio or a calculated response time ratio. The workload in the active logical partition is dynamically moved from the assigned shared processor pool to a logical partition in another of the plurality of shared processor pools based on the target performance metric not being met in the active logical partition, wherein the logical partition in the other shared processor pool is configured to meet the target performance metric.
    Type: Grant
    Filed: May 2, 2018
    Date of Patent: January 21, 2020
    Assignee: International Business Machines Corporation
    Inventors: Dean J. Burdick, Bruce Mealey, Bret R. Olszewski, Basu Vaidyanathan
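    A rough C sketch of the pool-selection check in this abstract: if the workload misses its target performance metric in its current shared processor pool, move it to a pool configured to meet the target. The ratio convention, field names, and numbers are illustrative assumptions.

    ```c
    #include <stdio.h>

    struct workload {
        const char *name;
        int         pool;              /* shared processor pool it runs in */
        double      response_ratio;    /* measured / target response time  */
    };

    /* Move the workload when the measured ratio exceeds 1.0, i.e. the target
       performance metric is not being met in the active logical partition. */
    static void rebalance(struct workload *w, int faster_pool)
    {
        if (w->response_ratio > 1.0) {
            printf("%s: ratio %.2f misses target, moving pool %d -> %d\n",
                   w->name, w->response_ratio, w->pool, faster_pool);
            w->pool = faster_pool;
        }
    }

    int main(void)
    {
        struct workload w = { .name = "db", .pool = 0, .response_ratio = 1.35 };
        rebalance(&w, 1);
        return 0;
    }
    ```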
  • Patent number: 10394607
    Abstract: Virtual machines with low active thread counts are prioritized during periods of high system load in a virtualized computing environment to improve the performance of such virtual machines.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Qunying Gao, Peter J. Heyrman, Bret R. Olszewski
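    A small C sketch of the prioritization idea in this abstract: under high system load, virtual machines with low active thread counts receive a dispatch boost. The load cutoff, the "low" thread-count cutoff, and the boost amount are assumptions.

    ```c
    #include <stdio.h>

    struct vm { const char *name; int active_threads; int priority; };

    static void adjust_priorities(struct vm *vms, int n, double system_load)
    {
        if (system_load < 0.9)
            return;                         /* only boost under high load */
        for (int i = 0; i < n; i++)
            if (vms[i].active_threads <= 2) /* "low" active thread count */
                vms[i].priority += 10;      /* favor it at dispatch time  */
    }

    int main(void)
    {
        struct vm vms[] = { { "small-vm", 1, 50 }, { "big-vm", 32, 50 } };
        adjust_priorities(vms, 2, 0.95);
        for (int i = 0; i < 2; i++)
            printf("%s: priority %d\n", vms[i].name, vms[i].priority);
        return 0;
    }
    ```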
  • Patent number: 10394608
    Abstract: Virtual machines with low active thread counts are prioritized during periods of high system load in a virtualized computing environment to improve the performance of such virtual machines.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: August 27, 2019
    Assignee: International Business Machines Corporation
    Inventors: Qunying Gao, Peter J. Heyrman, Bret R. Olszewski
  • Publication number: 20180307637
    Abstract: According to one exemplary embodiment, a method for preventing a software thread from being blocked due to processing an external device interrupt is provided. The method may include receiving the software thread, whereby the software thread has an associated interrupt avoidance variable. The method may also include determining a processor to receive the software thread. The method may then include sending the software thread to the determined processor. The method may further include setting an interrupt mask bit associated with the processor based on the interrupt avoidance variable. The method may also include receiving the external device interrupt. The method may then include redirecting the received external device interrupt to a second processor, whereby the redirecting is based on the interrupt mask bit.
    Type: Application
    Filed: June 25, 2018
    Publication date: October 25, 2018
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
  • Patent number: 10108453
    Abstract: Techniques are disclosed for managing lock contention in a multithreaded processing system. In one embodiment, a method includes tracking a current total amount of time that one or more software threads are prevented from execution due to a lock, a current utilization of one or more hardware threads in the processor, and a current number of dispatchable software threads. If the current total amount of time exceeds a predetermined threshold, the method includes performing a comparison of the current total amount of time, the current utilization, and the current number of dispatchable software threads to one or more past measurements. Based on the comparison, the method includes determining if reducing a number of active hardware threads will reduce a wait time. If reducing the number of active hardware threads will reduce the wait time, reducing the number of active hardware threads.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 23, 2018
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
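    A schematic C sketch of the decision in this abstract: compare the current lock-wait time, hardware-thread utilization, and dispatchable-thread count against a past measurement, and reduce the number of active hardware threads if that previously reduced waiting. The sample fields, comparison rule, and threshold are assumptions.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    struct sample {
        double lock_wait_ms;      /* total time software threads waited on locks */
        double hw_utilization;    /* 0..1 utilization of the hardware threads    */
        int    dispatchable;      /* runnable software threads                   */
        int    active_hw_threads;
    };

    static bool should_fold_hw_thread(const struct sample *now,
                                      const struct sample *past,
                                      double wait_threshold_ms)
    {
        if (now->lock_wait_ms <= wait_threshold_ms)
            return false;
        /* The past sample ran with fewer hardware threads under a similar load
           and waited less on locks, so folding is expected to help again. */
        return past->active_hw_threads < now->active_hw_threads &&
               past->dispatchable     >= now->dispatchable &&
               past->lock_wait_ms     <  now->lock_wait_ms;
    }

    int main(void)
    {
        struct sample now  = { 120.0, 0.85, 8, 8 };
        struct sample past = {  40.0, 0.80, 8, 4 };
        printf("fold a hardware thread? %s\n",
               should_fold_hw_thread(&now, &past, 50.0) ? "yes" : "no");
        return 0;
    }
    ```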
  • Patent number: 10102037
    Abstract: Techniques are disclosed for managing lock contention in a multithreaded processing system. In one embodiment, a method includes tracking an amount of time that a lock on a first thread prevents a second thread from execution. The method also includes, if the amount of time is greater than a first threshold, storing the amount of time and an address associated with the lock. The method includes dispatching a third thread that utilizes the address associated with the lock. The method also includes increasing the hardware priority of the third thread during a lock operation.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: October 16, 2018
    Assignee: International Business Machines Corporation
    Inventors: Mathew Accapadi, Grover C. Davidson, II, Dirk Michel, Bret R. Olszewski
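    A compact C sketch of the hot-lock boost in this abstract: remember lock addresses whose wait time exceeded a threshold, and flag a later thread for a hardware-priority boost while it operates on such a lock. The table size, threshold, and the boolean stand-in for the priority change are assumptions.

    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HOT_LOCKS 8
    static uintptr_t hot_lock[HOT_LOCKS];
    static int       hot_count;

    /* Record a lock whose wait time exceeded the threshold. */
    static void note_contended_lock(uintptr_t addr, double wait_ms, double threshold_ms)
    {
        if (wait_ms > threshold_ms && hot_count < HOT_LOCKS)
            hot_lock[hot_count++] = addr;
    }

    /* When a dispatched thread touches a remembered lock, boost it for the
       duration of the lock operation so it releases the lock sooner. */
    static bool boost_during_lock(uintptr_t addr)
    {
        for (int i = 0; i < hot_count; i++)
            if (hot_lock[i] == addr)
                return true;
        return false;
    }

    int main(void)
    {
        int lock_word = 0;
        note_contended_lock((uintptr_t)&lock_word, 12.5, 5.0);
        printf("boost hardware priority for this lock? %s\n",
               boost_during_lock((uintptr_t)&lock_word) ? "yes" : "no");
        return 0;
    }
    ```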
  • Publication number: 20180246761
    Abstract: A method, program product, and system are provided for dynamic virtual processor management in a computer having a plurality of concurrent multi-threaded physical processors. An active logical partition is assigned to one of a plurality of shared processor pools, each shared processor pool having a virtual processor manager mode. A target performance metric for a workload in the active logical partition is compared to a calculated CPU utilization ratio or a calculated response time ratio. The workload in the active logical partition is dynamically moved from the assigned shared processor pool to a logical partition in another of the plurality of shared processor pools based on the target performance metric not being met in the active logical partition, wherein the logical partition in the other shared processor pool is configured to meet the target performance metric.
    Type: Application
    Filed: May 2, 2018
    Publication date: August 30, 2018
    Inventors: Dean J. Burdick, Bruce Mealey, Bret R. Olszewski, Basu Vaidyanathan