Patents by Inventor Jody B. Joyner

Jody B. Joyner has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10884943
    Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes setting a threshold number of free Effective to Real Address Translation (ERAT) cache entries in an ERAT cache; determining whether a total number of free ERAT cache entries is less than or equal to the threshold number of free ERAT cache entries; allocating, in response to determining that the total number of free entries is less than or equal to the threshold number, one or more active ERAT cache entries to be speculatively checked in to a memory management unit (MMU); and speculatively checking in the one or more active ERAT cache entries to the MMU.
    Type: Grant
    Filed: August 30, 2018
    Date of Patent: January 5, 2021
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner, Jeffrey A. Stuecheli
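A minimal behavioral sketch of the threshold-driven speculative check-in described in the abstract above. The `EratCache`/`Mmu` class names, the LRU victim choice, and the capacity and threshold values are illustrative assumptions, not structures taken from the patent.

```python
class Mmu:
    def __init__(self):
        self.checked_in = []          # translations returned to the MMU

    def check_in(self, entry):
        self.checked_in.append(entry)

class EratCache:
    def __init__(self, capacity, free_threshold, mmu):
        self.capacity = capacity
        self.free_threshold = free_threshold   # threshold number of free entries
        self.mmu = mmu
        self.active = []                       # active entries, oldest first (LRU proxy)

    @property
    def free(self):
        return self.capacity - len(self.active)

    def allocate(self, translation):
        """Install a translation, then speculatively check in an active entry
        if the number of free slots has fallen to or below the threshold."""
        self.active.append(translation)
        if self.free <= self.free_threshold:
            victim = self.active.pop(0)        # pick an active entry (LRU assumption)
            self.mmu.check_in(victim)          # speculative check-in to the MMU

# Usage: with capacity 4 and threshold 1, check-ins start before the cache fills.
mmu = Mmu()
erat = EratCache(capacity=4, free_threshold=1, mmu=mmu)
for ea in ("0x1000", "0x2000", "0x3000", "0x4000"):
    erat.allocate(ea)
print(mmu.checked_in)   # ['0x1000', '0x2000']: entries freed ahead of demand
```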
  • Patent number: 10671537
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: June 2, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
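A minimal behavioral sketch of the source-flag-driven lateral cast out described in the abstract above. The `LocalCache`/`Mmu` classes, the string source flags, and the dictionary-backed memory are illustrative assumptions about the bookkeeping, not the patent's actual structures.

```python
class LocalCache:
    """Stands in for a high-level cache on the same node as the MMU."""
    def __init__(self):
        self.lines = {}

    def install(self, addr, data):
        self.lines[addr] = data

    def lookup(self, addr):
        return self.lines.get(addr)

class Mmu:
    def __init__(self, local_cache):
        self.local_cache = local_cache
        self.redirect = set()        # PTE addresses now served by the local cache

    def fetch_pte(self, addr, memory):
        # Request the PTE data and coherent ownership; the reply carries a flag
        # identifying where the data was sourced from.
        pte, source_flag = memory[addr]
        if source_flag == "off_node":
            # Source is external to the node: lateral cast out into the local
            # high-level cache so later table walks stay on-node.
            self.local_cache.install(addr, pte)
            self.redirect.add(addr)
        return pte

    def refetch_pte(self, addr, memory):
        # Subsequent requests for the same PTE go to the local cache first.
        if addr in self.redirect:
            return self.local_cache.lookup(addr)
        return self.fetch_pte(addr, memory)

# Usage: the first walk pulls the PTE from off-node memory and casts it out
# locally; the second walk is served by the local high-level cache.
memory = {0x80: ("pte:RA=0x5000", "off_node")}
mmu = Mmu(LocalCache())
mmu.fetch_pte(0x80, memory)
print(mmu.refetch_pte(0x80, memory))
```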
  • Patent number: 10649902
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Grant
    Filed: November 21, 2017
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
  • Patent number: 10599569
    Abstract: A technique for operating a memory management unit (MMU) of a processor includes the MMU detecting that one or more address translation invalidation requests are indicated for an accelerator unit (AU). In response to detecting that the invalidation requests are indicated, the MMU issues a raise barrier request for the AU. In response to detecting a raise barrier response from the AU to the raise barrier request the MMU issues the invalidation requests to the AU. In response to detecting an address translation invalidation response from the AU to each of the invalidation requests, the MMU issues a lower barrier request to the AU. In response to detecting a lower barrier response from the AU to the lower barrier request, the MMU resumes handling address translation check-in and check-out requests received from the AU.
    Type: Grant
    Filed: June 23, 2016
    Date of Patent: March 24, 2020
    Assignee: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner
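A minimal sketch of the barrier handshake ordering described in the abstract above. The `AcceleratorUnit` class, its method names, and the response strings are illustrative assumptions; only the sequence (raise barrier, invalidate, lower barrier, resume) follows the text.

```python
class AcceleratorUnit:
    def __init__(self):
        self.erat = {"0x1000": "0xA000", "0x2000": "0xB000"}

    def raise_barrier(self):
        return "raise_barrier_response"

    def invalidate(self, ea):
        self.erat.pop(ea, None)          # drop the cached translation
        return "invalidation_response"

    def lower_barrier(self):
        return "lower_barrier_response"

def flush_translations(pending_invalidations, au):
    """Run the MMU side of the protocol for the pending invalidations."""
    assert au.raise_barrier() == "raise_barrier_response"
    for ea in pending_invalidations:
        assert au.invalidate(ea) == "invalidation_response"
    assert au.lower_barrier() == "lower_barrier_response"
    return "resume_checkin_checkout"     # MMU resumes normal request handling

au = AcceleratorUnit()
print(flush_translations(["0x1000"], au), au.erat)   # resumes with 0x1000 removed
```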
  • Publication number: 20200073816
    Abstract: A method, computer program product, and a computer system are disclosed for processing information in a processor that in one or more embodiments includes setting a threshold number of free Effective to Real Address Translation (ERAT) cache entries in an ERAT cache; determining whether a total number of free ERAT cache entries is less than or equal to the threshold number of free ERAT cache entries; allocating, in response to determining that the total number of free entries is less than or equal to the threshold number, one or more active ERAT cache entries to be speculatively checked in to a memory management unit (MMU); and speculatively checking in the one or more active ERAT cache entries to the MMU.
    Type: Application
    Filed: August 30, 2018
    Publication date: March 5, 2020
    Applicant: International Business Machines Corporation
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner, Jeffrey A. Stuecheli
  • Patent number: 10380031
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: August 13, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
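A minimal sketch of the scoreboard collision check and the multi-thread to single-thread fallback described in the abstract above. The request tuples and the "oldest request per contended class wins" priority choice are illustrative assumptions.

```python
from collections import Counter

def detect_collision(requests):
    """Scoreboard check: a collision exists if two nested translation requests
    need the same congruence class lock."""
    locks = Counter(cc for _, cc in requests)
    return any(count > 1 for count in locks.values())

def service(requests):
    # requests: list of (request_id, congruence_class) in arrival order
    if not detect_collision(requests):
        return {"mode": "multi_thread", "order": [rid for rid, _ in requests]}
    # Quiesce: fall back to single-thread mode and mark a subset (here, the
    # first request seen for each contended class) as high priority.
    seen, high_priority, rest = set(), [], []
    for rid, cc in requests:
        (high_priority if cc not in seen else rest).append(rid)
        seen.add(cc)
    return {"mode": "single_thread", "order": high_priority + rest}

reqs = [("A", 3), ("B", 3), ("C", 7)]      # A and B collide on congruence class 3
print(service(reqs))   # single_thread, with A and C serviced ahead of B
```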
  • Patent number: 10318435
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Grant
    Filed: August 22, 2017
    Date of Patent: June 11, 2019
    Assignee: International Business Machines Corporation
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
  • Publication number: 20190065398
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Application
    Filed: August 22, 2017
    Publication date: February 28, 2019
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
  • Publication number: 20190065380
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Application
    Filed: November 21, 2017
    Publication date: February 28, 2019
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
  • Publication number: 20190065399
    Abstract: Ensuring forward progress for nested translations in a memory management unit (MMU) including receiving a plurality of nested translation requests, wherein each of the plurality of nested translation requests requires at least one congruence class lock; detecting, using a congruence class scoreboard, a collision of the plurality of nested translation requests based on the required congruence class locks; quiescing, in response to detecting the collision of the plurality of nested translation requests, a translation pipeline in the MMU including switching operation of the translation pipeline from a multi-thread mode to a single-thread mode and marking a first subset of the plurality of nested translation requests as high-priority nested translation requests; and servicing the high-priority nested translation requests through the translation pipeline in the single-thread mode.
    Type: Application
    Filed: November 27, 2017
    Publication date: February 28, 2019
    Inventors: Guy L. Guthrie, Jody B. Joyner, Jon K. Kriegel, Bradley Nelson, Charles D. Wait
  • Publication number: 20190065379
    Abstract: Reducing translation latency within a memory management unit (MMU) using external caching structures including requesting, by the MMU on a node, page table entry (PTE) data and coherent ownership of the PTE data from a page table in memory; receiving, by the MMU, the PTE data, a source flag, and an indication that the MMU has coherent ownership of the PTE data, wherein the source flag identifies a source location of the PTE data; performing a lateral cast out to a local high-level cache on the node in response to determining that the source flag indicates that the source location of the PTE data is external to the node; and directing at least one subsequent request for the PTE data to the local high-level cache.
    Type: Application
    Filed: August 22, 2017
    Publication date: February 28, 2019
    Inventors: Guy L. Guthrie, Jody B. Joyner, Ronald N. Kalla, Michael S. Siegel, Jeffrey A. Stuecheli, Charles D. Wait, Frederick J. Ziegler
  • Publication number: 20180300255
    Abstract: Maintaining agent inclusivity within a distributed memory management unit (MMU) including receiving, from an agent, a translation checkout request comprising an effective address (EA) and an effective address to real address table (ERAT) index; translating the EA to a real address (RA) using an entry in a translation lookaside buffer (TLB) within the MMU, wherein the TLB within the MMU comprises an RA for each EA in the ERAT within the agent; flagging the entry in the TLB as in use by the agent, wherein the entry in the TLB comprises a TLB index; storing the EA index and the TLB index in an in-use scoreboard (IUSB), wherein the IUSB maps TLB indexes identifying entries in the TLB within the MMU to ERAT indexes identifying entries in the ERAT within the agent; and sending, to the agent, a translation checkout response comprising the RA.
    Type: Application
    Filed: April 13, 2017
    Publication date: October 18, 2018
    Inventors: Bartholomew Blaner, Jody B. Joyner, Ronald N. Kalla, Jon K. Kriegel, Charles D. Wait
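A minimal sketch of the translation checkout flow and in-use scoreboard (IUSB) described in the abstract above. The dictionary-backed TLB and the simple index pairing are illustrative assumptions about the bookkeeping, not the patent's hardware structures.

```python
class DistributedMmu:
    def __init__(self, tlb):
        self.tlb = tlb                 # tlb_index -> (EA, RA)
        self.in_use = set()            # TLB entries currently checked out
        self.iusb = {}                 # tlb_index -> (agent_id, erat_index)

    def checkout(self, agent_id, ea, erat_index):
        # Translate the EA using the MMU-side TLB (inclusive of agent ERATs).
        tlb_index, ra = next((i, ra) for i, (tea, ra) in self.tlb.items() if tea == ea)
        # Flag the entry as in use and record which agent ERAT slot holds it.
        self.in_use.add(tlb_index)
        self.iusb[tlb_index] = (agent_id, erat_index)
        return {"ra": ra, "tlb_index": tlb_index}   # translation checkout response

mmu = DistributedMmu(tlb={0: ("0x1000", "0x9000"), 1: ("0x2000", "0xA000")})
print(mmu.checkout(agent_id="au0", ea="0x2000", erat_index=5))
# The IUSB now maps TLB entry 1 back to ERAT slot 5 in agent au0, so a later
# TLB invalidation can be forwarded to exactly the right agent entry.
```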
  • Publication number: 20180300256
    Abstract: Maintaining agent inclusivity within a distributed memory management unit (MMU) including receiving, from an agent, a translation checkout request comprising an effective address (EA) and an effective address to real address table (ERAT) index; translating the EA to a real address (RA) using an entry in a translation lookaside buffer (TLB) within the MMU, wherein the TLB within the MMU comprises an RA for each EA in the ERAT within the agent; flagging the entry in the TLB as in use by the agent, wherein the entry in the TLB comprises a TLB index; storing the EA index and the TLB index in an in-use scoreboard (IUSB), wherein the IUSB maps TLB indexes identifying entries in the TLB within the MMU to ERAT indexes identifying entries in the ERAT within the agent; and sending, to the agent, a translation checkout response comprising the RA.
    Type: Application
    Filed: November 20, 2017
    Publication date: October 18, 2018
    Inventors: Bartholomew Blaner, Jody B. Joyner, Ronald N. Kalla, Jon K. Kriegel, Charles D. Wait
  • Publication number: 20170371789
    Abstract: A technique for operating a memory management unit (MMU) of a processor includes the MMU detecting that one or more address translation invalidation requests are indicated for an accelerator unit (AU). In response to detecting that the invalidation requests are indicated, the MMU issues a raise barrier request for the AU. In response to detecting a raise barrier response from the AU to the raise barrier request the MMU issues the invalidation requests to the AU. In response to detecting an address translation invalidation response from the AU to each of the invalidation requests, the MMU issues a lower barrier request to the AU. In response to detecting a lower barrier response from the AU to the lower barrier request, the MMU resumes handling address translation check-in and check-out requests received from the AU.
    Type: Application
    Filed: June 23, 2016
    Publication date: December 28, 2017
    Inventors: Bartholomew Blaner, Jay G. Heaslip, Robert D. Herzl, Jody B. Joyner
  • Patent number: 9384136
    Abstract: A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of the data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory. In response to the indication, the memory controller temporarily increases a prefetch depth of the prefetch stream with respect to the system memory and issues, to the system memory, a plurality of prefetch requests in accordance with the temporarily increased prefetch depth in advance of the upcoming high latency event.
    Type: Grant
    Filed: April 12, 2013
    Date of Patent: July 5, 2016
    Assignee: International Business Machines Corporation
    Inventors: John S. Dodson, Miles R. Dooley, Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Eric E. Retter, Jeffrey A. Stuecheli
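A minimal sketch of temporarily deepening a prefetch stream ahead of a high-latency event, as the abstract above describes. The stride-1 stream, the line size, and the specific normal/boosted depth values are illustrative assumptions.

```python
class PrefetchStream:
    def __init__(self, start_addr, line_size=128, normal_depth=2, boosted_depth=8):
        self.next_addr = start_addr
        self.line_size = line_size
        self.normal_depth = normal_depth
        self.boosted_depth = boosted_depth
        self.depth = normal_depth

    def issue(self):
        """Issue prefetch requests up to the current depth and advance the stream."""
        reqs = [self.next_addr + i * self.line_size for i in range(self.depth)]
        self.next_addr += self.depth * self.line_size
        return reqs

    def on_high_latency_event_hint(self):
        # Indication of an upcoming high-latency event: temporarily go deeper
        # so data is already in hand while the memory is unavailable.
        self.depth = self.boosted_depth

    def on_event_complete(self):
        self.depth = self.normal_depth

stream = PrefetchStream(start_addr=0x10000)
print(len(stream.issue()))              # 2 requests at the normal depth
stream.on_high_latency_event_hint()
print(len(stream.issue()))              # 8 requests issued ahead of the event
stream.on_event_complete()
```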
  • Patent number: 9378144
    Abstract: A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of the data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory. In response to the indication, the memory controller temporarily increases a prefetch depth of the prefetch stream with respect to the system memory and issues, to the system memory, a plurality of prefetch requests in accordance with the temporarily increased prefetch depth in advance of the upcoming high latency event.
    Type: Grant
    Filed: September 25, 2013
    Date of Patent: June 28, 2016
    Assignee: International Business Machines Corporation
    Inventors: John S. Dodson, Miles R. Dooley, Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Eric E. Retter, Jeffrey A. Stuecheli
  • Patent number: 9355035
    Abstract: A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets.
    Type: Grant
    Filed: November 18, 2013
    Date of Patent: May 31, 2016
    Assignee: GLOBALFOUNDRIES Inc.
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, William J. Starke, Jeffrey A. Stuecheli
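A minimal sketch of the high/low water-mark hysteresis that elevates writeback priority, as the abstract above describes. The dirty vector is modelled as one boolean per congruence class; the set count and water-mark values are illustrative assumptions.

```python
class WritebackScheduler:
    def __init__(self, num_sets, high_water, low_water):
        self.dirty_vector = [False] * num_sets   # does the set hold a modified line?
        self.high_water = high_water
        self.low_water = low_water
        self.writes_prioritized = False

    def mark_dirty(self, set_index, dirty=True):
        self.dirty_vector[set_index] = dirty
        self._update_priority()

    def _update_priority(self):
        dirty_sets = sum(self.dirty_vector)
        if dirty_sets > self.high_water:
            self.writes_prioritized = True        # writebacks ahead of reads
        elif dirty_sets < self.low_water:
            self.writes_prioritized = False       # back to normal read priority

sched = WritebackScheduler(num_sets=8, high_water=4, low_water=2)
for s in range(5):
    sched.mark_dirty(s)
print(sched.writes_prioritized)    # True: 5 dirty sets exceed the high water mark
for s in range(4):
    sched.mark_dirty(s, dirty=False)
print(sched.writes_prioritized)    # False: 1 dirty set is below the low water mark
```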
  • Patent number: 9336145
    Abstract: A technique for performing cache injection includes monitoring, at a host fabric interface, snoop responses to an address on a bus. When the snoop responses indicate a data block associated with the address is in a shared state, input/output data associated with the address on the bus is directed to a cache that includes the data block in the shared state and is located physically closer to the host fabric interface than one or more other caches that include the data block associated with the address in the shared state.
    Type: Grant
    Filed: April 9, 2009
    Date of Patent: May 10, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana Baba Arimilli, Ravi K. Arimilli, Jody B. Joyner, William J. Starke
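A minimal sketch of choosing a cache-injection target from snoop responses, as the abstract above describes: incoming I/O data is steered to the nearest cache that already holds the block in the shared state. The cache names, distances, and the response dictionary are illustrative assumptions.

```python
def pick_injection_target(snoop_responses, distance_from_hfi):
    """snoop_responses: cache_id -> coherence state reported for the address.
    Returns the closest cache holding the block shared, or None (fall back to
    memory) if no sharer exists."""
    sharers = [c for c, state in snoop_responses.items() if state == "shared"]
    if not sharers:
        return None
    return min(sharers, key=lambda c: distance_from_hfi[c])

responses = {"L2_core0": "invalid", "L3_chip0": "shared", "L3_chip1": "shared"}
distance  = {"L2_core0": 1, "L3_chip0": 2, "L3_chip1": 5}
print(pick_injection_target(responses, distance))   # L3_chip0: the nearest sharer
```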
  • Patent number: 9268703
    Abstract: A technique for performing cache injection in a processor system includes monitoring, by a cache, addresses on a bus. Input/output data associated with an address of a data block stored in the cache is then requested from a remote node, via a network controller. Ownership of the input/output data is acquired by the cache when an address on the bus that is associated with the input/output data corresponds to the address of the data block stored in the cache.
    Type: Grant
    Filed: April 15, 2009
    Date of Patent: February 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana Baba Arimilli, Ravi K. Arimilli, Jody B. Joyner, William J. Starke
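A minimal sketch of the complementary injection scheme in the abstract above: a cache snoops bus addresses and claims ownership of inbound I/O data when the address matches a block it already holds. The class name, the state strings, and the bus-transaction arguments are illustrative assumptions.

```python
class SnoopingCache:
    def __init__(self, resident_addrs):
        self.lines = {a: {"state": "shared", "data": None} for a in resident_addrs}

    def snoop(self, addr, io_data):
        """Called for each bus transaction carrying inbound I/O data."""
        line = self.lines.get(addr)
        if line is None:
            return False                       # not ours; memory takes the data
        line["state"] = "modified"             # acquire ownership of the I/O data
        line["data"] = io_data                 # inject directly into the cache
        return True

cache = SnoopingCache(resident_addrs=[0x4000])
print(cache.snoop(0x4000, b"network payload"))   # True: injected and owned
print(cache.snoop(0x8000, b"other payload"))     # False: address not resident
```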
  • Patent number: 9218292
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Grant
    Filed: June 18, 2013
    Date of Patent: December 22, 2015
    Assignee: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
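A minimal sketch of computing the "clean distance" for one congruence class and scheduling an early write-back when it falls below a target, as the abstract above describes. The line tuples, the refcount-based LRU proxy, and the target value are illustrative assumptions.

```python
def clean_distance(lines, lru_dirty_refcount):
    """lines: list of (dirty, reference_count) for one congruence class.
    Counts clean lines whose reference count is <= that of the LRU dirty line."""
    return sum(1 for dirty, ref in lines if not dirty and ref <= lru_dirty_refcount)

def schedule_cleaning(lines, target_distance):
    dirty_refs = [ref for dirty, ref in lines if dirty]
    if not dirty_refs:
        return False                              # nothing to clean
    lru_dirty = min(dirty_refs)                   # LRU dirty line (lowest refcount)
    # Clean early if too few clean lines would be evicted before the dirty one,
    # so a later cache miss never has to wait on a write-back.
    return clean_distance(lines, lru_dirty) < target_distance

congruence_class = [(False, 1), (True, 2), (False, 5), (False, 6)]
print(schedule_cleaning(congruence_class, target_distance=2))   # True: distance is 1
```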