Patents by Inventor Jody B. Joyner

Jody B. Joyner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9213647
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line. (An illustrative sketch of this technique follows the listing.)
    Type: Grant
    Filed: September 23, 2013
    Date of Patent: December 15, 2015
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20150143059
    Abstract: A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets. (An illustrative sketch of this technique follows the listing.)
    Type: Application
    Filed: December 6, 2013
    Publication date: May 21, 2015
    Applicant: International Business Machines Corporation
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, William J. Starke, Jeffrey A. Stuecheli
  • Publication number: 20150143056
    Abstract: A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets.
    Type: Application
    Filed: November 18, 2013
    Publication date: May 21, 2015
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, William J. Starke, Jeffrey A. Stuecheli
  • Publication number: 20140372704
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Application
    Filed: June 18, 2013
    Publication date: December 18, 2014
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20140372705
    Abstract: A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line.
    Type: Application
    Filed: September 23, 2013
    Publication date: December 18, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Aaron C. Sawdey, Jeffrey A. Stuecheli
  • Publication number: 20140310478
    Abstract: A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of the data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory.
    Type: Application
    Filed: September 25, 2013
    Publication date: October 16, 2014
    Inventors: John S. Dodson, Miles R. Dooley, Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Eric E. Retter, Jeffrey A. Stuecheli
  • Publication number: 20140310477
    Abstract: A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of the data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John S. Dodson, Miles R. Dooley, Benjiman L. Goodman, Jody B. Joyner, Stephen J. Powell, Eric E. Retter, Jeffrey A. Stuecheli
  • Patent number: 8417778
    Abstract: A mechanism is provided for collective acceleration unit tree flow control that forms a logical tree (sub-network) among processors and transfers “collective” packets on this tree. The system supports many collective trees, and each collective acceleration unit (CAU) includes resources to support a subset of the trees. Each CAU has limited buffer space, and the connection between two CAUs is not completely reliable. Therefore, to ensure that collective packets traverse the tree without colliding with each other over buffer space, and to guarantee end-to-end packet delivery, each CAU in the system flow controls the packets, detects packet loss, and retransmits lost packets. (An illustrative sketch of this technique follows the listing.)
    Type: Grant
    Filed: December 17, 2009
    Date of Patent: April 9, 2013
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Bernard C. Drerup, Jody B. Joyner, Paul F. Lecocq, Hanhong Xue
  • Patent number: 8302109
    Abstract: A synchronization-optimized queuing method and device minimize software/hardware interaction in network interface hardware during an end-of-initiative process, including network adapter queue implementations for optimized communication in a computer system. An end-of-initiative procedure to ensure that the network interface hardware has received an interrupt enable and to recheck the interrupt queue is unnecessary in the present invention.
    Type: Grant
    Filed: February 24, 2009
    Date of Patent: October 30, 2012
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana Arimilli, Claude Basso, Piyush Chaudhary, Bernard C. Drerup, Jody B. Joyner, Jan-Bernd Themann, Christoph Raisch, Colin B. Verrilli
  • Patent number: 8077602
    Abstract: Mechanisms for performing dynamic request routing based on broadcast queue depth information are provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide queue depth information to each of the other processor chips in the system. The queue depth information identifies a number of requests or amount of data in each of the queues of a processor chip that originated the heartbeat signal. The queue depth information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data. (An illustrative sketch of this technique follows the listing.)
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: December 13, 2011
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Publication number: 20110173258
    Abstract: A mechanism is provided for collective acceleration unit tree flow control that forms a logical tree (sub-network) among processors and transfers “collective” packets on this tree. The system supports many collective trees, and each collective acceleration unit (CAU) includes resources to support a subset of the trees. Each CAU has limited buffer space, and the connection between two CAUs is not completely reliable. Therefore, to ensure that collective packets traverse the tree without colliding with each other over buffer space, and to guarantee end-to-end packet delivery, each CAU in the system flow controls the packets, detects packet loss, and retransmits lost packets.
    Type: Application
    Filed: December 17, 2009
    Publication date: July 14, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lakshminarayana B. Arimilli, Bernard C. Drerup, Jody B. Joyner, Paul F. Lecocq, Hanhong Xue
  • Patent number: 7921316
    Abstract: Mechanisms for providing a cluster-wide system clock in a multi-tiered full graph (MTFG) interconnect architecture are provided. Heartbeat signals transmitted by each of the processor chips in the computing cluster are synchronized. Internal system clock signals are generated in each of the processor chips based on the synchronized heartbeat signals. As a result, the internal system clock signals of each of the processor chips are synchronized since the heartbeat signals, which are the basis for the internal system clock signals, are synchronized. Mechanisms are provided for performing such synchronization using direct couplings of processor chips within the same processor book, different processor books in the same supernode, and different processor books in different supernodes of the MTFG interconnect architecture. (An illustrative sketch of this technique follows the listing.)
    Type: Grant
    Filed: September 11, 2007
    Date of Patent: April 5, 2011
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Patent number: 7827428
    Abstract: A system for providing a cluster-wide system clock in a multi-tiered full graph (MTFG) interconnect architecture is provided. Heartbeat signals transmitted by each of the processor chips in the computing cluster are synchronized. Internal system clock signals are generated in each of the processor chips based on the synchronized heartbeat signals. As a result, the internal system clock signals of each of the processor chips are synchronized since the heartbeat signals, which are the basis for the internal system clock signals, are synchronized. Mechanisms are provided for performing such synchronization using direct couplings of processor chips within the same processor book, different processor books in the same supernode, and different processor books in different supernodes of the MTFG interconnect architecture.
    Type: Grant
    Filed: August 31, 2007
    Date of Patent: November 2, 2010
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Publication number: 20100268896
    Abstract: A technique for performing cache injection in a processor system includes monitoring, by a cache, addresses on a bus. Input/output data associated with an address of a data block stored in the cache is then requested from a remote node, via a network controller. Ownership of the input/output data is acquired by the cache when an address on the bus that is associated with the input/output data corresponds to the address of the data block stored in the cache.
    Type: Application
    Filed: April 15, 2009
    Publication date: October 21, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lakshminarayana Baba Arimilli, Ravi K. Arimilli, Jody B. Joyner, William J. Starke
  • Publication number: 20100262787
    Abstract: A technique for performing cache injection includes monitoring, at a host fabric interface, snoop responses to an address on a bus. When the snoop responses indicate a data block associated with the address is in a shared state, input/output data associated with the address on the bus is directed to a cache that includes the data block in the shared state and is located physically closer to the host fabric interface than one or more other caches that include the data block associated with the address in the shared state. (An illustrative sketch of this technique follows the listing.)
    Type: Application
    Filed: April 9, 2009
    Publication date: October 14, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lakshminarayana Baba Arimilli, Ravi K. Arimilli, Jody B. Joyner, William J. Starke
  • Publication number: 20100217905
    Abstract: A synchronization-optimized queuing method and device minimize software/hardware interaction in network interface hardware during an end-of-initiative process, including network adapter queue implementations for optimized communication in a computer system. An end-of-initiative procedure to ensure that the network interface hardware has received an interrupt enable and to recheck the interrupt queue is unnecessary in the present invention.
    Type: Application
    Filed: February 24, 2009
    Publication date: August 26, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lakshminarayana Arimilli, Claude Basso, Piyush Chaudhary, Bernard C. Drerup, Jody B. Joyner, Jan-Bernd Themann, Christoph Raisch, Colin B. Verrilli
  • Patent number: 7779148
    Abstract: A mechanism for performing dynamic request routing based on broadcast source request information is provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide source request information to each of the other processor chips in the system. The source request information identifies the number of active source requests sent by the processor chip that originated the heartbeat signal. The source request information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: August 17, 2010
    Assignee: International Business Machines Corporation
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Publication number: 20090198957
    Abstract: A system and method for performing dynamic request routing based on broadcast queue depth information are provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide queue depth information to each of the other processor chips in the system. The queue depth information identifies a number of requests or amount of data in each of the queues of a processor chip that originated the heartbeat signal. The queue depth information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data.
    Type: Application
    Filed: February 1, 2008
    Publication date: August 6, 2009
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Publication number: 20090198958
    Abstract: A system and method for performing dynamic request routing based on broadcast source request information are provided. Each processor chip in the system may use a synchronized heartbeat signal it generates to provide source request information to each of the other processor chips in the system. The source request information identifies the number of active source requests sent by the processor chip that originated the heartbeat signal. The source request information from each of the processor chips in the system may be used by the processor chips in determining optimal routing paths for data from a source processor chip to a destination processor chip. As a result, the congestion of data for processing at each of the processor chips along each possible routing path may be taken into account when selecting to which processor chip to forward data.
    Type: Application
    Filed: February 1, 2008
    Publication date: August 6, 2009
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
  • Publication number: 20090070617
    Abstract: A method for providing a cluster-wide system clock in a multi-tiered full graph (MTFG) interconnect architecture is provided. Heartbeat signals transmitted by each of the processor chips in the computing cluster are synchronized. Internal system clock signals are generated in each of the processor chips based on the synchronized heartbeat signals. As a result, the internal system clock signals of each of the processor chips are synchronized since the heartbeat signals, which are the basis for the internal system clock signals, are synchronized. Mechanisms are provided for performing such synchronization using direct couplings of processor chips within the same processor book, different processor books in the same supernode, and different processor books in different supernodes of the MTFG interconnect architecture.
    Type: Application
    Filed: September 11, 2007
    Publication date: March 12, 2009
    Inventors: Lakshminarayana B. Arimilli, Ravi K. Arimilli, Bernard C. Drerup, Jody B. Joyner, Jerry D. Lewis
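
The clean-distance computation described in patent 9213647 above (and in publications 20140372704 and 20140372705) can be illustrated with a short Python sketch. Only the counting rule comes from the abstract: clean lines whose reference count is less than or equal to that of the least-recently-used dirty line. The CacheLine structure, field names, and the write-back trigger below are illustrative assumptions, not taken from the patent.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class CacheLine:
        dirty: bool            # line holds modified data
        reference_count: int   # lower value = closer to least-recently-used (LRU)

    def clean_distance(congruence_class: List[CacheLine]) -> Optional[int]:
        """Count clean lines at least as close to LRU as the LRU dirty line.

        Per the abstract, the clean distance is the number of clean lines in
        the congruence class whose reference count is less than or equal to
        the reference count of the LRU dirty line.  Returns None if the class
        holds no dirty line (nothing to write back).
        """
        dirty_lines = [line for line in congruence_class if line.dirty]
        if not dirty_lines:
            return None
        lru_dirty_ref = min(line.reference_count for line in dirty_lines)
        return sum(1 for line in congruence_class
                   if not line.dirty and line.reference_count <= lru_dirty_ref)

    def should_schedule_cleaning(congruence_class: List[CacheLine],
                                 min_clean_distance: int) -> bool:
        """Hypothetical trigger: write back early when the clean distance is
        small, so a later cache miss does not have to wait for a write-back."""
        distance = clean_distance(congruence_class)
        return distance is not None and distance < min_clean_distance

    # Example: one dirty line near LRU, one clean line at or below it.
    victim_class = [CacheLine(dirty=False, reference_count=0),
                    CacheLine(dirty=True, reference_count=1),
                    CacheLine(dirty=False, reference_count=5),
                    CacheLine(dirty=False, reference_count=7)]
    print(clean_distance(victim_class))               # -> 1
    print(should_schedule_cleaning(victim_class, 2))  # -> True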
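
Publications 20150143059 and 20150143056 above describe elevating writeback priority using high and low water marks over the number of sets that hold modified lines. The Python sketch below models that hysteresis rule; the WritebackScheduler class, its dirty-vector bookkeeping, and the priority flag are assumed names for illustration, and the rank-group refinement mentioned in the abstract is not modelled.

    class WritebackScheduler:
        """Schematic model of the high/low water mark policy from the abstract.

        A bit per set (the "dirty vector") records whether that set currently
        holds a modified cache line.  Writeback priority is elevated above
        read operations when the count of dirty sets exceeds the high water
        mark and returns to normal once it falls below the low water mark.
        """

        def __init__(self, num_sets: int, high_water: int, low_water: int):
            assert 0 <= low_water < high_water <= num_sets
            self.dirty_vector = [False] * num_sets
            self.high_water = high_water
            self.low_water = low_water
            self.writes_prioritized = False

        def mark_set_dirty(self, set_index: int) -> None:
            self.dirty_vector[set_index] = True
            self._update_priority()

        def mark_set_clean(self, set_index: int) -> None:
            # Called when the set's last modified line has been written back.
            self.dirty_vector[set_index] = False
            self._update_priority()

        def _update_priority(self) -> None:
            dirty_sets = sum(self.dirty_vector)
            if dirty_sets > self.high_water:
                self.writes_prioritized = True      # writebacks go ahead of reads
            elif dirty_sets < self.low_water:
                self.writes_prioritized = False     # back to normal ordering

    # Example: priority rises once more than 3 sets are dirty, falls below 2.
    sched = WritebackScheduler(num_sets=8, high_water=3, low_water=2)
    for s in range(4):
        sched.mark_set_dirty(s)
    print(sched.writes_prioritized)   # -> True
    for s in range(3):
        sched.mark_set_clean(s)
    print(sched.writes_prioritized)   # -> False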
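
Patent 8417778 (publication 20110173258) above concerns flow control, loss detection, and retransmission of collective packets between collective acceleration units (CAUs) that have limited buffer space and an unreliable connection. The Python sketch below is one plausible credit-and-retransmit loop consistent with that description; the sequence numbers, credit accounting, and timeout value are assumptions, not details from the patent.

    import time
    from collections import deque

    class CauLink:
        """Toy model of one CAU-to-CAU link with credit-based flow control.

        A packet is sent only while the neighbour has advertised free buffer
        slots (credits); each sent packet is kept until acknowledged so it can
        be retransmitted if the unreliable link drops it.
        """

        def __init__(self, neighbour_credits: int, retransmit_timeout: float = 0.05):
            self.credits = neighbour_credits   # free buffer slots at the neighbour
            self.retransmit_timeout = retransmit_timeout
            self.pending = deque()             # packets waiting for a credit
            self.unacked = {}                  # sequence number -> (packet, send time)
            self.next_seq = 0

        def submit(self, packet: bytes) -> None:
            self.pending.append(packet)
            self._try_send()

        def on_ack(self, seq: int) -> None:
            if self.unacked.pop(seq, None) is not None:
                self.credits += 1              # the neighbour freed a buffer slot
                self._try_send()

        def poll_retransmits(self) -> None:
            # Re-send any packet whose acknowledgement is overdue (assumed lost).
            now = time.monotonic()
            for seq, (packet, sent_at) in list(self.unacked.items()):
                if now - sent_at > self.retransmit_timeout:
                    self._transmit(seq, packet)
                    self.unacked[seq] = (packet, now)

        def _try_send(self) -> None:
            while self.credits > 0 and self.pending:
                packet = self.pending.popleft()
                seq, self.next_seq = self.next_seq, self.next_seq + 1
                self.unacked[seq] = (packet, time.monotonic())
                self.credits -= 1
                self._transmit(seq, packet)

        def _transmit(self, seq: int, packet: bytes) -> None:
            pass   # placeholder: a real CAU would drive the physical link here

    # Example: with one credit, the second packet waits until the first is acknowledged.
    link = CauLink(neighbour_credits=1)
    link.submit(b"reduce-op")
    link.submit(b"broadcast")
    print(len(link.pending))   # -> 1 (waiting for a credit)
    link.on_ack(0)
    print(len(link.pending))   # -> 0

In this toy model, acknowledging a packet both frees a buffer credit and drains the pending queue, which is what keeps packets from competing for the neighbour's limited buffer space.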
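
Patent 8077602 (publication 20090198957) above has each processor chip advertise its queue depths on a synchronized heartbeat so that peers can route around congestion. The Python sketch below picks among candidate paths by summing the most recently advertised depths along each path; the dictionary-based representation of chips and paths is an assumption for illustration.

    from typing import Dict, List, Sequence

    ChipId = str

    def update_depths(table: Dict[ChipId, int], chip: ChipId, queue_depth: int) -> None:
        """Record the queue depth carried by the latest heartbeat from `chip`."""
        table[chip] = queue_depth

    def pick_route(table: Dict[ChipId, int],
                   candidate_paths: Sequence[List[ChipId]]) -> List[ChipId]:
        """Choose the path whose chips report the least total queued work,
        so congestion along every hop is taken into account."""
        def path_cost(path: List[ChipId]) -> int:
            # Chips that have not yet advertised a depth are treated as unloaded.
            return sum(table.get(chip, 0) for chip in path)
        return min(candidate_paths, key=path_cost)

    # Example: two routes from a source chip to destination chip "D".
    depths: Dict[ChipId, int] = {}
    for chip, depth in [("A", 12), ("B", 3), ("C", 4)]:
        update_depths(depths, chip, depth)
    print(pick_route(depths, [["A", "D"], ["B", "C", "D"]]))  # -> ['B', 'C', 'D']

Patent 7779148 and publication 20090198958 describe the companion scheme in which the advertised metric is the number of active source requests rather than queue depth; the same selection loop applies with that metric substituted.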
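
Patents 7921316 and 7827428 (publication 20090070617) above derive synchronized internal system clocks from synchronized heartbeat signals. The Python sketch below shows one way a chip could compose a clock value from the shared heartbeat count plus a local tick offset; the numeric scheme is an assumption, while the idea of the heartbeat as the common time base comes from the abstracts.

    class HeartbeatClock:
        """Derive an internal system clock from a cluster-synchronized heartbeat.

        Every chip observes the same heartbeat count (the heartbeats themselves
        are synchronized), so clock values computed this way agree across chips
        up to the local ticks accumulated since the last heartbeat.
        """

        def __init__(self, ticks_per_heartbeat: int):
            self.ticks_per_heartbeat = ticks_per_heartbeat
            self.heartbeat_count = 0
            self.local_ticks = 0

        def on_heartbeat(self) -> None:
            # The synchronized heartbeat is the common time base: advance the
            # coarse count and restart the fine-grained local tick counter.
            self.heartbeat_count += 1
            self.local_ticks = 0

        def on_local_tick(self) -> None:
            self.local_ticks = min(self.local_ticks + 1, self.ticks_per_heartbeat - 1)

        def now(self) -> int:
            return self.heartbeat_count * self.ticks_per_heartbeat + self.local_ticks

    # Two chips driven by the same heartbeat report the same coarse clock value.
    chip_a, chip_b = HeartbeatClock(1000), HeartbeatClock(1000)
    for clock in (chip_a, chip_b):
        clock.on_heartbeat()
    print(chip_a.now() == chip_b.now())   # -> True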
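
Publication 20100262787 above directs incoming input/output data to a cache that already holds the target block in the shared state and is physically closest to the host fabric interface. The selection rule in the Python sketch below follows that description; the cache identifiers and distance table are invented for illustration.

    from typing import Dict, Iterable, Optional

    CacheId = str

    def choose_injection_target(sharers: Iterable[CacheId],
                                distance_from_hfi: Dict[CacheId, int]) -> Optional[CacheId]:
        """Pick the cache that should receive the injected input/output data.

        `sharers` lists the caches whose snoop responses indicate they hold the
        data block in the shared state; the one physically closest to the host
        fabric interface (HFI) is chosen.
        """
        candidates = list(sharers)
        if not candidates:
            return None   # no sharer observed: fall back to memory (not modelled here)
        return min(candidates, key=lambda cache: distance_from_hfi[cache])

    # Example: cache slice "cache2" sits nearer to the HFI than "cache7".
    print(choose_injection_target(["cache7", "cache2"],
                                  {"cache2": 1, "cache7": 4}))   # -> cache2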