Patents by Inventor Richard L. Arndt

Richard L. Arndt has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170139860
    Abstract: A technique for handling interrupts in a data processing system includes maintaining, at an interrupt presentation controller (IPC), an interrupt acknowledge count (IAC). The IAC provides an indication of a number of times a virtual processor thread implemented at a first software stack level has been interrupted in response to receipt of event notification messages (ENMs) from an interrupt source controller (ISC). In response to the IAC reaching a threshold level, the IPC transmits an escalate message to the ISC. The escalate message includes an escalate event number that is used by the ISC to generate a new ENM that targets a second software stack level that is different than the first software stack level and is associated with another virtual processor thread. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 31, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER, BRUCE MEALEY
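The escalation flow above lends itself to a short illustration. The following is a minimal Python sketch, not the patented implementation: the class and message names (InterruptPresentationController, EscalateMessage, send_to_isc) are hypothetical, and the real IPC/ISC message formats are not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class EscalateMessage:
    # The ISC would use this number to build a new ENM aimed at a different
    # software stack level (and a different virtual processor thread).
    escalate_event_number: int

class InterruptPresentationController:
    """Hypothetical IPC model: count interrupt acknowledgements for a virtual
    processor thread and escalate to the ISC once a threshold is reached."""

    def __init__(self, threshold, escalate_event_number, send_to_isc):
        self.threshold = threshold
        self.escalate_event_number = escalate_event_number
        self.send_to_isc = send_to_isc   # callback that delivers a message to the ISC
        self.iac = 0                     # interrupt acknowledge count (IAC)

    def on_event_notification(self):
        """Called each time an ENM interrupts the first-level VP thread."""
        self.iac += 1
        if self.iac >= self.threshold:
            self.send_to_isc(EscalateMessage(self.escalate_event_number))
            self.iac = 0                 # start counting again for the new target

# Usage: escalate after every fourth interrupt of the first-level thread.
ipc = InterruptPresentationController(4, escalate_event_number=0x2A,
                                      send_to_isc=lambda m: print("escalate ->", m))
for _ in range(4):
    ipc.on_event_notification()
```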
  • Publication number: 20170139853
    Abstract: A technique for handling interrupts in a data processing system includes receiving, at an interrupt presentation controller (IPC), an event notification message (ENM). The ENM specifies a level, an event target number, and a number of bits to ignore. The IPC determines a group of virtual processor threads that may be potentially interrupted based on the event target number, the number of bits to ignore, and a process identifier (ID) when the level specified in the ENM corresponds to a user level. The event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining a group of virtual processor threads that may be potentially interrupted. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 26, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
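As an illustration of how an event target number plus a number of bits to ignore can select a group of virtual processor threads, here is a small Python sketch. It is an assumption-laden reading of the abstract: potential_targets is a hypothetical helper, and the process-ID filtering applied at user level is omitted.

```python
def potential_targets(event_target_number, bits_to_ignore, all_vp_threads):
    """Return the VP thread numbers in the interrupt group.

    Ignoring the low-order `bits_to_ignore` bits of the event target number
    selects a naturally aligned, power-of-two-sized block of thread numbers.
    (Hypothetical helper; the abstract also narrows the group by process ID
    when the level is a user level, which is omitted here.)
    """
    mask = ~((1 << bits_to_ignore) - 1)
    group_base = event_target_number & mask
    return [t for t in all_vp_threads if t & mask == group_base]

# Target thread 0b10110 with the 2 low-order bits ignored
# yields the group 0b10100..0b10111, i.e. threads 20 through 23.
print(potential_targets(0b10110, 2, range(32)))   # [20, 21, 22, 23]
```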
  • Publication number: 20170139862
    Abstract: A technique for handling queued interrupts includes accumulating, by an interrupt routing controller (IRC), respective backlog counts for respective event paths. The backlog counts track a number of events received but not delivered as interrupts to associated virtual processor (VP) threads upon which respective target interrupt handlers execute. An increment backlog (IB) message is received by the IRC. In response to receiving the IB message, the IRC determines an associated saturate value for an event path specified in the IB message. The IRC increments an associated backlog count for the event path specified in the IB message as long as the associated backlog count does not exceed the associated saturate value. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 1, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
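A minimal Python sketch of the saturating backlog counter described above; InterruptRoutingController and on_increment_backlog are hypothetical names, and the per-path saturate values are assumed to be configured up front.

```python
class InterruptRoutingController:
    """Hypothetical IRC model: one saturating backlog counter per event path."""

    def __init__(self, saturate_values):
        # saturate_values: event_path -> largest backlog that may be recorded
        self.saturate = dict(saturate_values)
        self.backlog = {path: 0 for path in saturate_values}

    def on_increment_backlog(self, event_path):
        """Handle an increment backlog (IB) message for one event path."""
        if self.backlog[event_path] < self.saturate[event_path]:
            self.backlog[event_path] += 1   # one more event received but not delivered
        # otherwise the counter has saturated and the increment is dropped

irc = InterruptRoutingController({"path0": 3})
for _ in range(5):
    irc.on_increment_backlog("path0")
print(irc.backlog["path0"])   # 3 -- capped at the saturate value
```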
  • Publication number: 20170139858
    Abstract: A technique for handling queued interrupts includes accumulating respective backlog counts for respective event paths. The backlog counts track a number of events received but not delivered as interrupts to associated virtual processor (VP) threads. In response to a lowering of an operating priority (OP) of a VP thread (VPT), a scan backlog (SB) message is received that identifies the VPT and specifies a current operating priority for the VPT. In response to receiving the SB message, a linked list of event paths associated with the VPT is scanned to search for backlog events that have a higher priority than the current OP for the VPT. In response to a backlog event being located that has a higher priority than the current OP of the VPT, an interrupt to the VPT is initiated starting with a highest priority event path and the backlog count for the VPT is decremented. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 26, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
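The scan-backlog behaviour above can be sketched in a few lines of Python. The priority convention (lower number means higher priority) and the function and parameter names are assumptions; the linked-list walk is reduced to a sorted iteration.

```python
def scan_backlog(event_paths, current_op, backlog, raise_interrupt):
    """Hypothetical scan backlog (SB) handler for one VP thread.

    event_paths: (priority, path_id) pairs for the thread's event paths.
    current_op:  the thread's new, lowered operating priority.
    backlog:     path_id -> events received but not yet delivered.
    raise_interrupt(path_id): presents one interrupt to the VP thread.
    Returns True if a backlogged event was delivered.
    """
    for priority, path_id in sorted(event_paths):            # highest priority first
        if priority < current_op and backlog.get(path_id, 0) > 0:
            raise_interrupt(path_id)    # deliver the highest-priority backlog event
            backlog[path_id] -= 1       # one fewer undelivered event for this path
            return True
    return False

backlog = {"timer": 2, "net": 0}
scan_backlog([(1, "timer"), (5, "net")], current_op=4, backlog=backlog,
             raise_interrupt=lambda p: print("interrupt from", p))
```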
  • Publication number: 20170139854
    Abstract: A technique for handling interrupts includes receiving an event notification message (ENM) that specifies an event target number (ETN) and a number of bits to ignore (NBI). The ETN identifies a specific virtual processor thread (VPT) and the NBI identifies the number of lower-order bits of the specific VPT to ignore when determining a group of VPTs that may be potentially interrupted. In response to two or more VPTs within the group of VPTs being dispatched and operating on an associated physical processor, whether multiple of the two or more VPTs do not have a pending interrupt is determined. In response to determining that multiple of the two or more VPTs do not have a pending interrupt, one of the two or more VPTs is selected to service an interrupt associated with the ENM based, at least in part, on respective preferred bits for the two or more VPTs. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 26, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER, STUART Z. JACOBS, WADE B. OUREN
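A small Python sketch of the target-selection step, under assumed data structures: select_target is a hypothetical function, and pending and preferred stand in for the per-thread pending-interrupt state and preferred bits the abstract mentions.

```python
def select_target(dispatched_vpts, pending, preferred):
    """Pick one dispatched VP thread from the group to service the interrupt.

    Prefer a thread with no pending interrupt; among those, prefer one whose
    'preferred' bit is set (the tie-break the abstract hints at), then the
    lowest thread number as a final, arbitrary tie-break.
    """
    free = [t for t in dispatched_vpts if not pending.get(t, False)]
    candidates = free or list(dispatched_vpts)   # fall back if all have pending work
    return min(candidates, key=lambda t: (not preferred.get(t, False), t))

# Threads 5 and 9 have no pending interrupt; 9 carries the preferred bit.
print(select_target([3, 5, 9], pending={3: True}, preferred={9: True}))   # 9
```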
  • Publication number: 20170139859
    Abstract: A method of handling interrupts includes receiving an event notification message (ENM) that specifies a level, an event target number (ETN), and a number of bits to ignore. A group of virtual processor threads that may be potentially interrupted are determined based on the ETN, the number of bits to ignore, and a process identifier when the level specified in the ENM corresponds to a user level. The ETN identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore when determining a group of virtual processor threads that may be potentially interrupted. In response to no virtual processor thread within the group of virtual processor threads being dispatched and operating on an associated physical processor, an escalate message that includes an escalate event number is transmitted. The escalate event number is used to generate a subsequent ENM.
    Type: Application
    Filed: October 31, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
  • Publication number: 20170139855
    Abstract: A method of handling interrupts in a data processing system includes maintaining a first interrupt destination buffer (IDB) for a first interrupt handler routine (IHR) and a second IDB for a second IHR. Whether a received interrupt is associated with the first IHR or the second IHR is determined. In response to the received interrupt being associated with the first IHR, event information associated with the received interrupt is stored in the first IDB. In response to the received interrupt being associated with the second IHR, the event information associated with the received interrupt is stored in the second IDB. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: October 26, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
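To make the per-handler buffering concrete, here is a minimal Python sketch. The queue-per-handler layout and the names (InterruptDestinationBuffers, enqueue, dequeue) are assumptions; the abstract only requires that event information land in the IDB of the handler the interrupt maps to.

```python
from collections import deque

class InterruptDestinationBuffers:
    """Hypothetical model: one event queue (IDB) per interrupt handler routine."""

    def __init__(self, handler_ids):
        self.idbs = {h: deque() for h in handler_ids}

    def enqueue(self, handler_id, event_info):
        """Store the event information in the IDB of the matching handler."""
        self.idbs[handler_id].append(event_info)

    def dequeue(self, handler_id):
        """A handler drains its own buffer without touching the other IDBs."""
        return self.idbs[handler_id].popleft() if self.idbs[handler_id] else None

buffers = InterruptDestinationBuffers(["ihr0", "ihr1"])
buffers.enqueue("ihr0", {"source": 12, "data": 0xBEEF})
buffers.enqueue("ihr1", {"source": 7})
print(buffers.dequeue("ihr0"))   # {'source': 12, 'data': 48879}
```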
  • Publication number: 20170139856
    Abstract: A technique for handling queued interrupts includes determining, by an interrupt presentation controller (IPC), whether a received memory mapped input/output (MMIO) store is associated with preempting a virtual processor (VP) thread. In response to determining the MMIO store is associated with preempting the VP thread, the IPC writes interrupt context information of the VP thread to a specified location in memory. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: November 1, 2016
    Publication date: May 18, 2017
    Inventors: RICHARD L. ARNDT, FLORIAN A. AUERNHAMMER
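A compact Python sketch of the preemption path described above. The register address, the shape of the interrupt context, and the save location are all invented for illustration; only the decision itself (a preempting MMIO store causes the VP thread's interrupt context to be spilled to memory) comes from the abstract.

```python
def handle_mmio_store(address, value, preempt_address, vp_context, memory, save_location):
    """Hypothetical IPC behaviour for an MMIO store that may preempt a VP thread."""
    if address == preempt_address:
        # Persist the thread's interrupt context so software can inspect or resume it.
        memory[save_location] = dict(vp_context)
        return "preempted"
    memory[address] = value        # ordinary MMIO side effect (placeholder)
    return "stored"

memory = {}
context = {"pending": 1, "priority": 3, "cppr": 5}
print(handle_mmio_store(0x100, 0, preempt_address=0x100,
                        vp_context=context, memory=memory, save_location=0x2000))
print(memory[0x2000])
```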
  • Patent number: 9575913
    Abstract: A technique for handling cache-inhibited operations in a data processing system includes receiving, at a topology specific replicated bus unit, a cache-inhibited (CI) operation that is scope limited. The replicated bus unit determines whether an address associated with the CI operation matches an address for the replicated bus unit. In response to the address associated with the CI operation matching the address for the replicated bus unit, the replicated bus unit processes the CI operation based on the scope being limited to that of the replicated bus unit. In response to the address associated with the CI operation not matching the address for the replicated bus unit, the replicated bus unit ignores the CI operation. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: February 21, 2017
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian Auernhammer, Hugh Shen, Derek E. Williams
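The address-match filter above reduces to a one-line decision; the Python sketch below states it explicitly. The function name and return values are hypothetical.

```python
def handle_scope_limited_ci(unit_address, op_address):
    """Hypothetical filter in a topology-specific replicated bus unit.

    A scope-limited cache-inhibited (CI) operation is processed only by the
    replica whose address matches the operation's address; every other
    replica ignores it, so the operation stays within its limited scope.
    """
    return "processed" if op_address == unit_address else "ignored"

print(handle_scope_limited_ci(unit_address=0x3F0, op_address=0x3F0))   # processed
print(handle_scope_limited_ci(unit_address=0x3F0, op_address=0x400))   # ignored
```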
  • Patent number: 9529760
    Abstract: A technique for handling cache-inhibited operations in a data processing system includes receiving, at a replicated bus unit, a cache-inhibited (CI) operation. The replicated bus unit determines whether an address associated with the CI operation matches an address for the replicated bus unit and whether a source indicated by the CI operation is associated with the replicated bus unit. In response to the address associated with the CI operation matching the address for the replicated bus unit and the source indicated by the CI operation being associated with the replicated bus unit, the replicated bus unit processes the CI operation. In response to the address associated with the CI operation not matching the address for the replicated bus unit or the source indicated by the CI operation not being associated with the replicated bus unit, the replicated bus unit ignores the CI operation. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: December 27, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian Auernhammer, Hugh Shen, Derek E. Williams
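This variant adds a source check to the address match; a short Python sketch of the combined decision follows. The set-of-sources representation and all names are assumptions.

```python
def handle_ci_operation(unit_address, unit_sources, op_address, op_source):
    """Hypothetical replicated-bus-unit filter that checks address and source.

    The CI operation is processed only when the target address matches this
    replica and the issuing source is one this replica is responsible for;
    on any mismatch the operation is ignored and left to another replica.
    """
    if op_address == unit_address and op_source in unit_sources:
        return "processed"
    return "ignored"

print(handle_ci_operation(0x3F0, {"core0", "core1"}, 0x3F0, "core1"))   # processed
print(handle_ci_operation(0x3F0, {"core0", "core1"}, 0x3F0, "core7"))   # ignored
```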
  • Patent number: 9514083
    Abstract: A technique for handling cache-inhibited operations in a data processing system includes receiving, at a replicated bus unit, a cache-inhibited (CI) operation. The replicated bus unit determines whether an address associated with the CI operation matches an address for the replicated bus unit and whether a source indicated by the CI operation is associated with the replicated bus unit. In response to the address associated with the CI operation matching the address for the replicated bus unit and the source indicated by the CI operation being associated with the replicated bus unit, the replicated bus unit processes the CI operation. In response to the address associated with the CI operation not matching the address for the replicated bus unit or the source indicated by the CI operation not being associated with the replicated bus unit, the replicated bus unit ignores the CI operation.
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: December 6, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Florian Auernhammer, Hugh Shen, Derek E. Williams
  • Patent number: 9494991
    Abstract: A method for managing energy. A processor unit identifies a plurality of groups of virtual machines in a computer system. The processor unit allocates the energy in the computer system to the plurality of groups of virtual machines based on a policy.
    Type: Grant
    Filed: April 11, 2012
    Date of Patent: November 15, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Freeman L. Rawson, III
  • Patent number: 9477286
    Abstract: An embodiment of a system for managing energy identifies a plurality of groups of virtual machines in a computer system and allocates the energy in the computer system for a next time interval to a plurality of groups of virtual machines based on an energy budget and a policy selected from a set of policies in conjunction with a minimum energy, a group priority and a virtual machine priority.
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: October 25, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Freeman L. Rawson, III
  • Patent number: 9454405
    Abstract: Disclosed is a computer implemented method, computer program product, and apparatus to establish at least one paging partition in a data processing system. The virtualization control point (VCP) reserves up to the subset of physical memory for use in the shared memory pool. The VCP configures at least one logical partition as a shared memory partition. The VCP assigns a paging partition to the shared memory pool. The VCP determines whether a user requests a redundant assignment of the paging partition to the shared memory pool. The VCP assigns a redundant paging partition to the shared memory pool, responsive to a determination that the user requests a redundant assignment. The VCP assigns a paging device to the shared memory pool. The hypervisor may transmit at least one paging request to a virtual asynchronous services interface configured to support a paging device stream. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: March 30, 2009
    Date of Patent: September 27, 2016
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Carol B. Hernandez, Kyle A. Lucke, Timothy R. Marchini, Naresh Nayar, James A. Pafumi
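The configuration steps above can be summarised as a small Python sketch. The pool layout and every name in it (configure_shared_memory_pool, the dictionary keys, the partition and device labels) are illustrative assumptions, not the VCP's actual interfaces.

```python
def configure_shared_memory_pool(pool, paging_partition, paging_device,
                                 want_redundancy, redundant_partition=None):
    """Hypothetical sketch of the VCP steps: assign a paging partition to the
    shared memory pool, optionally a redundant one, and then a paging device."""
    pool["paging_partitions"] = [paging_partition]
    if want_redundancy:
        # A second paging partition is assigned only when the user asks for it.
        pool["paging_partitions"].append(redundant_partition)
    pool["paging_devices"] = [paging_device]
    return pool

pool = {"reserved_memory_gb": 64, "shared_memory_partitions": ["lpar1", "lpar2"]}
print(configure_shared_memory_pool(pool, "vios1", "hdisk4",
                                   want_redundancy=True, redundant_partition="vios2"))
```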
  • Patent number: 9027021
    Abstract: A mechanism is provided in a logically partitioned data processing system for controlling depth and latency of exit of a virtual processor's idle state. A virtualization layer generates a cede latency setting information (CLSI) data. Responsive to booting a logical partition, the virtualization layer communicates the CLSI data to an operating system (OS) of the logical partition. The OS determines, based on the CLSI data, a particular idle state of a virtual processor under a control of the OS. Responsive to the OS calling the virtualization layer, the OS communicates the particular idle state of the virtual processor to the virtualization layer for assigning the particular idle state and wake-up characteristics to the virtual processor. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: May 5, 2015
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Naresh Nayar, Christopher Francois, Karthick Rajamani, Freeman L. Rawson, III, Randal C. Swanberg
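One plausible OS-side use of the CLSI data is sketched below in Python. The list-of-(state, exit-latency) shape of the CLSI data and the "deepest state that still meets the wake-up requirement" policy are assumptions layered on the abstract.

```python
def choose_idle_state(clsi, max_wakeup_latency_us):
    """Hypothetical OS policy using cede latency setting information (CLSI).

    clsi: (idle_state_id, exit_latency_us) pairs supplied by the
    virtualization layer when the logical partition boots.  The OS picks the
    deepest state whose exit latency still meets its wake-up requirement and
    reports that state back to the virtualization layer when it cedes.
    """
    eligible = [(latency, state) for state, latency in clsi
                if latency <= max_wakeup_latency_us]
    if not eligible:
        return min(state for state, _ in clsi)   # fall back to the shallowest state
    return max(eligible)[1]                      # deepest state within the latency bound

clsi = [(0, 1), (1, 50), (2, 400)]               # state id, exit latency in microseconds
print(choose_idle_state(clsi, max_wakeup_latency_us=100))   # -> state 1
```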
  • Publication number: 20140129797
    Abstract: In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased. (An illustrative code sketch follows this entry.)
    Type: Application
    Filed: December 3, 2013
    Publication date: May 8, 2014
    Inventors: RICHARD L. ARNDT, BENJAMIN HERRENSCHMIDT, ERIC N. LAIS, STEVEN M. THURBER
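The grow-in-place-or-add-a-level decision reads naturally as a short function; the Python sketch below is a loose model, with a toy allocator standing in for real-address-range management and all names invented for illustration.

```python
def grow_tce_table(allocator, table_base, table_size, extra_size, levels):
    """Hypothetical sketch of the TCE-table growth decision.

    If the real-address range directly after the existing table is free, the
    table grows in place and keeps its number of levels; otherwise a
    discontiguous range is used and one more level of indirection is added.
    """
    if allocator.try_reserve(table_base + table_size, extra_size):
        return table_base + table_size, levels           # contiguous growth, same depth
    new_base = allocator.reserve_anywhere(extra_size)    # discontiguous growth
    return new_base, levels + 1                          # extra level links the new piece

class ToyAllocator:
    def __init__(self, busy_ranges): self.busy = busy_ranges
    def try_reserve(self, base, size):
        return all(base + size <= lo or base >= hi for lo, hi in self.busy)
    def reserve_anywhere(self, size): return 0x80000000

alloc = ToyAllocator(busy_ranges=[(0x2000, 0x3000)])     # something sits right after the table
print(grow_tce_table(alloc, table_base=0x1000, table_size=0x1000,
                     extra_size=0x1000, levels=2))       # -> (2147483648, 3)
```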
  • Publication number: 20140129795
    Abstract: In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased.
    Type: Application
    Filed: November 6, 2012
    Publication date: May 8, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: RICHARD L. ARNDT, BENJAMIN HERRENSCHMIDT, ERIC N. LAIS, STEVEN M. THURBER
  • Patent number: 8645661
    Abstract: A computer implemented method to establish at least one paging partition in a data processing system. The virtualization control point (VCP) reserves up to the subset of physical memory for use in the shared memory pool. The VCP configures at least one logical partition as a shared memory partition. The VCP assigns a paging partition to the shared memory pool. The VCP determines whether a user requests a redundant assignment of the paging partition to the shared memory pool. The VCP assigns a redundant paging partition to the shared memory pool, responsive to a determination that the user requests a redundant assignment. The VCP assigns a paging device to the shared memory pool. The hypervisor may transmit at least one paging request to a virtual asynchronous services interface configured to support a paging device stream.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: February 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Carol B. Hernandez, Kyle A. Lucke, Timothy R. Marchini, Naresh Nayar, James A. Pafumi
  • Patent number: 8635381
    Abstract: According to one aspect of the present disclosure a method and technique for monitoring memory access is disclosed. The method includes monitoring, by a plurality of memory controllers, access to a memory unit, wherein each memory controller is associated with a different range of memory addresses of the memory unit, and wherein each memory controller monitors access for its associated range of memory addresses. The method also includes updating an incrementor with access data corresponding to accesses to the memory unit, wherein each memory controller updates the access data based on access of its associated range of memory addresses. The method further includes storing, by each respective memory controller, the updated access data in a cache corresponding to the respective range of memory addresses and, responsive to the updated access data for a respective range of memory addresses exceeding a threshold, storing the access data for the respective range of memory addresses in the memory unit. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: August 26, 2010
    Date of Patent: January 21, 2014
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Karthick Rajamani, Jeffrey A. Stuecheli
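A per-controller counter with a spill threshold is easy to picture in Python; the sketch below uses a dictionary as the monitored memory unit and invented names (MemoryAccessMonitor, record_access) that are not from the patent.

```python
class MemoryAccessMonitor:
    """Hypothetical per-controller access counter matching the abstract's flow.

    Each memory controller owns one address range, keeps its access count in a
    small cache, and spills the count into the memory unit itself once the
    count crosses a threshold.
    """

    def __init__(self, start, end, threshold, backing_store):
        self.start, self.end = start, end          # address range this controller owns
        self.threshold = threshold
        self.cached_count = 0                      # access data held in the controller's cache
        self.backing_store = backing_store         # stands in for the monitored memory unit

    def record_access(self, address):
        if not (self.start <= address < self.end):
            return                                 # another controller owns this address
        self.cached_count += 1
        if self.cached_count > self.threshold:
            # Spill the accumulated access data into the memory unit.
            self.backing_store[(self.start, self.end)] = self.cached_count

store = {}
monitor = MemoryAccessMonitor(0x0000, 0x1000, threshold=3, backing_store=store)
for addr in (0x10, 0x20, 0x30, 0x40):
    monitor.record_access(addr)
print(store)    # {(0, 4096): 4} once the threshold is exceeded
```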
  • Patent number: 8589556
    Abstract: A mechanism is provided for allocating energy budgets to a plurality of logical partitions. An overall energy budget for the data processing system and a total of a set of requested initial energy budgets for the plurality of partitions are determined. A determination is made as to whether the total of the set of requested initial energy budgets for the plurality of partitions is greater than the overall energy budget for the data processing system. Responsive to the total of the set of requested initial energy budgets exceeding the overall energy budget, an initial energy budget is allocated to each partition in the plurality of partitions based on at least one of priority or proportionality of each partition in the plurality of partitions such that a total of the initial energy budgets for the plurality of partitions does not exceed the overall energy budget of the data processing system. (An illustrative code sketch follows this entry.)
    Type: Grant
    Filed: November 5, 2010
    Date of Patent: November 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Richard L. Arndt, Heather L. Hanson, Charles R. Lefurgy, Karthick Rajamani, Freeman L. Rawson, III, Malcolm S. Ware
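The over-commit case above can be illustrated with one of the schemes the abstract allows, a priority-weighted proportional split. The Python sketch below uses invented names and a simple weighting that is only one possible reading, not the patented method.

```python
def allocate_energy_budgets(requested, priority, overall_budget):
    """Hypothetical allocation when requested budgets may exceed the overall budget.

    requested: partition -> requested initial energy budget
    priority:  partition -> weight (higher means more important)

    If the total request fits, every partition gets what it asked for.
    Otherwise the overall budget is split in proportion to priority-weighted
    requests, so the total never exceeds the system budget.
    """
    total = sum(requested.values())
    if total <= overall_budget:
        return dict(requested)
    weighted = {p: requested[p] * priority.get(p, 1) for p in requested}
    scale = overall_budget / sum(weighted.values())
    return {p: weighted[p] * scale for p in requested}

budgets = allocate_energy_budgets({"lpar1": 60, "lpar2": 60},
                                  {"lpar1": 2, "lpar2": 1}, overall_budget=90)
print(budgets)    # lpar1 gets twice lpar2's share; the total is exactly 90
```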