Patents by Inventor Francis Patrick O'Connell

Francis Patrick O'Connell has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9465744
    Abstract: A technique for data prefetching for a multi-core chip includes determining memory utilization of the multi-core chip. In response to the memory utilization of the multi-core chip exceeding a first level, data prefetching for the multi-core chip is modified from a first data prefetching arrangement to a second data prefetching arrangement to minimize unused prefetched cache lines. In response to the memory utilization of the multi-core chip not exceeding the first level, the first data prefetching arrangement is maintained. The first and second data prefetching arrangements are different.
    Type: Grant
    Filed: July 29, 2014
    Date of Patent: October 11, 2016
    Assignee: International Business Machines Corporation
    Inventors: Jason Nathaniel Dale, Miles R. Dooley, Richard J. Eickemeyer, Jr., John Barry Griswell, Jr., Francis Patrick O'Connell, Jeffrey A. Stuecheli
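    A minimal C sketch of the utilization-driven switch described in the abstract above follows; the arrangement names, the 0.75 threshold, and the sample loop are illustrative assumptions rather than details taken from the patent.

      #include <stdio.h>

      /* Two hypothetical prefetch arrangements: an aggressive default and a
       * conservative one that issues fewer speculative lines when the memory
       * subsystem is already heavily utilized. */
      typedef enum { ARRANGEMENT_DEFAULT, ARRANGEMENT_CONSERVATIVE } prefetch_arrangement;

      #define FIRST_LEVEL 0.75   /* illustrative utilization threshold */

      static prefetch_arrangement select_arrangement(double memory_utilization)
      {
          /* Above the first level, switch to the second arrangement to cut
           * unused prefetched cache lines; otherwise keep the first. */
          return memory_utilization > FIRST_LEVEL ? ARRANGEMENT_CONSERVATIVE
                                                  : ARRANGEMENT_DEFAULT;
      }

      int main(void)
      {
          double samples[] = { 0.40, 0.80, 0.60 };
          for (int i = 0; i < 3; i++)
              printf("utilization %.2f -> arrangement %d\n", samples[i],
                     (int)select_arrangement(samples[i]));
          return 0;
      }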
  • Publication number: 20160034400
    Abstract: A technique for data prefetching for a multi-core chip includes determining memory utilization of the multi-core chip. In response to the memory utilization of the multi-core chip exceeding a first level, data prefetching for the multi-core chip is modified from a first data prefetching arrangement to a second data prefetching arrangement to minimize unused prefetched cache lines. In response to the memory utilization of the multi-core chip not exceeding the first level, the first data prefetching arrangement is maintained. The first and second data prefetching arrangements are different.
    Type: Application
    Filed: July 29, 2014
    Publication date: February 4, 2016
    Applicant: International Business Machines Corporation
    Inventors: Jason Nathaniel Dale, Miles R. Dooley, Richard J. Eickemeyer, Jr., John Barry Griswell, Jr., Francis Patrick O'Connell, Jeffrey A. Stuecheli
  • Patent number: 8490065
    Abstract: The present invention provides a computer implemented method, apparatus, and computer usable program code for compiling instructions to manage a cache system. Loop constructs are analyzed to identify data usage characteristics for cache and prefetching conditions in instructions to form identified prefetch conditions. A set of control instructions is inserted into the instructions based on the data usage characteristics and the identified prefetch conditions to form multiple modified instructions. The modified instructions are compiled to generate code for execution to form compiled instructions. The set of control instructions in the compiled instructions forms a cache management policy to control movement of data in a memory system during execution of the compiled instructions.
    Type: Grant
    Filed: October 13, 2005
    Date of Patent: July 16, 2013
    Assignee: International Business Machines Corporation
    Inventors: Roch Archambault, Yaoqing Gao, Francis Patrick O'Connell, Robert Brett Tremaine, Michael Edward Wazlowski, Steven Wayne White, Lixin Zhang
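    The abstract above describes a compiler pass; a full compiler is out of scope here, but the effect of the inserted control instructions can be approximated by hand with GCC/Clang's __builtin_prefetch. The loop, array names, and prefetch distance below are assumptions for illustration.

      #include <stddef.h>

      #define PREFETCH_DISTANCE 16   /* illustrative tuning constant */

      /* Hand-written approximation of what such a pass might emit after
       * analyzing this loop's data usage: the streamed source b[] is
       * prefetched ahead of use, while a[] is only written and gets no
       * read prefetch. */
      void scale(double *a, const double *b, double s, size_t n)
      {
          for (size_t i = 0; i < n; i++) {
              if (i + PREFETCH_DISTANCE < n)
                  __builtin_prefetch(&b[i + PREFETCH_DISTANCE], 0 /* read */, 1 /* low reuse */);
              a[i] = s * b[i];
          }
      }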
  • Patent number: 8413127
    Abstract: A mechanism for minimizing effective memory latency without unnecessary cost through fine-grained software-directed data prefetching using integrated high-level and low-level code analysis and optimizations is provided. The mechanism identifies and classifies streams, identifies data that is most likely to incur a cache miss, exploits effective hardware prefetching to determine the proper number of streams to be prefetched, exploits effective data prefetching on different types of streams in order to eliminate redundant prefetching and avoid cache pollution, and uses high-level transformations with integrated lower level cost analysis in the instruction scheduler to schedule prefetch instructions effectively.
    Type: Grant
    Filed: December 22, 2009
    Date of Patent: April 2, 2013
    Assignee: International Business Machines Corporation
    Inventors: Roch G. Archambault, Robert J. Blainey, Yaoqing Gao, Allan R. Martin, James L. McInnes, Francis Patrick O'Connell
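    The abstract above covers an integrated compiler framework; the fragment below sketches only one piece of it, choosing which references to prefetch, under the assumption that the long strided stream b[] is classified as likely to miss while the small, reused table t[] is not. The distance and array shapes are illustrative.

      #include <stddef.h>

      #define AHEAD 8   /* illustrative prefetch distance in iterations */

      /* Only the long strided stream is prefetched; the 256-entry lookup
       * table stays cache resident, so prefetching it would be redundant
       * and would risk cache pollution. */
      double gather(const double *b, const double *t, const int *idx,
                    size_t n, size_t stride)
      {
          double sum = 0.0;
          for (size_t i = 0; i < n; i++) {
              if (i + AHEAD < n)
                  __builtin_prefetch(&b[(i + AHEAD) * stride], 0, 0);
              sum += b[i * stride] * t[idx[i] & 255];
          }
          return sum;
      }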
  • Patent number: 7904661
    Abstract: A method of prefetching data in a microprocessor includes identifying a data stream associated with a process and determining a depth associated with the data stream based upon prefetch factors including the number of currently concurrent data streams and data consumption rates associated with the concurrent data streams. Data prefetch requests are allocated with the data stream to reflect the determined depth of the data stream. Allocating data prefetch requests may include allocating prefetch requests for a number of cache lines away from the cache line currently being referenced, wherein the number of cache lines is equal to the determined depth. The method may include, responsive to determining the depth associated with a data stream, configuring prefetch hardware to reflect the determined depth for the identified data stream. Prefetch control bits in an instruction executed by the processor control the prefetch hardware configuration.
    Type: Grant
    Filed: December 10, 2007
    Date of Patent: March 8, 2011
    Assignee: International Business Machines Corporation
    Inventors: Eric Jason Fluhr, Bradly George Frey, John Barry Griswell, Jr., Hung Qui Le, Cathy May, Francis Patrick O'Connell, Edward John Silha, Albert Thomas Williams
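    A minimal sketch of the depth calculation described above, assuming a fixed prefetch budget shared across concurrent streams; the budget, rate units, and clamping bounds are invented for illustration.

      /* Depth: how many cache lines ahead of the current reference to
       * prefetch for one stream.  More concurrent streams, or a slower
       * consumption rate, argue for a shallower depth. */
      #define TOTAL_PREFETCH_BUDGET 32   /* illustrative: lines in flight chip-wide */
      #define MAX_DEPTH 8
      #define MIN_DEPTH 1

      static int stream_depth(int concurrent_streams, double lines_per_1k_cycles)
      {
          int depth = TOTAL_PREFETCH_BUDGET /
                      (concurrent_streams > 0 ? concurrent_streams : 1);
          depth = (int)(depth * (lines_per_1k_cycles / 4.0));   /* scale by consumption rate */
          if (depth > MAX_DEPTH) depth = MAX_DEPTH;
          if (depth < MIN_DEPTH) depth = MIN_DEPTH;
          return depth;
      }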
  • Patent number: 7890738
    Abstract: A method and logical apparatus for managing processing system resource use for speculative execution reduces the power and performance burden associated with inefficient speculative execution of program instructions. A measure of the efficiency of speculative execution is used to reduce resources allocated to a thread while the speculation efficiency is low. The resource control applied may be the number of instruction fetches allocated to the thread or the number of execution time slices. Alternatively, or in combination, the size of a prefetch instruction storage allocated to the thread may be limited. The control condition may be comparison of the number of correct or incorrect speculations to a threshold, comparison of the number of correct to incorrect speculations, or a more complex evaluator such as the size of a ratio of incorrect to total speculations.
    Type: Grant
    Filed: January 20, 2005
    Date of Patent: February 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Lee Evan Eisen, David Stephen Levitan, Francis Patrick O'Connell, Wolfram M. Sauer
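    One of the control conditions mentioned above, comparing incorrect speculations against total speculations, can be sketched with simple counters; the thresholds, slot counts, and structure names below are hypothetical.

      #include <stdbool.h>

      struct thread_spec_state {
          unsigned correct;      /* speculations later confirmed correct      */
          unsigned incorrect;    /* speculations later found to be wrong      */
          unsigned fetch_slots;  /* instruction-fetch slots granted per cycle */
      };

      #define NORMAL_SLOTS    4
      #define THROTTLED_SLOTS 1
      #define BAD_RATIO_PCT   40   /* illustrative: >40% wrong => throttle */

      static void update_fetch_allocation(struct thread_spec_state *t)
      {
          unsigned total = t->correct + t->incorrect;
          bool inefficient = total >= 64 &&   /* wait for a meaningful sample */
                             t->incorrect * 100 > total * BAD_RATIO_PCT;
          t->fetch_slots = inefficient ? THROTTLED_SLOTS : NORMAL_SLOTS;
      }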
  • Patent number: 7716427
    Abstract: In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. The prefetch engine compares entries in the prefetch queue to a window of 2^M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue is within the address window. The prefetch engine further suppresses allocation of a new entry when the data address of the store instruction is equal to an address in a border area of the address window.
    Type: Grant
    Filed: January 4, 2008
    Date of Patent: May 11, 2010
    Assignee: International Business Machines Corporation
    Inventors: John Barry Griswell, Jr., Hung Qui Le, Francis Patrick O'Connell, William J. Starke, Jeffrey Adam Stuecheli, Albert Thomas Williams
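    The allocation filter described above can be sketched as a window comparison over cache-block addresses; the queue size, the value of M, the block size, and the border width are assumptions, with 2^M giving the number of contiguous cache blocks in the window.

      #include <stdbool.h>
      #include <stdint.h>

      #define LINE_SHIFT 7   /* 128-byte cache blocks (illustrative)      */
      #define M          3   /* window spans 2^M = 8 contiguous blocks    */
      #define BORDER     1   /* blocks at each edge form the border area  */
      #define QUEUE_SIZE 8

      static uint64_t queue[QUEUE_SIZE];   /* block addresses of allocated streams */
      static int queue_len;

      static bool should_allocate(uint64_t store_addr)
      {
          uint64_t blk = store_addr >> LINE_SHIFT;
          uint64_t lo  = blk & ~(uint64_t)((1u << M) - 1);   /* aligned window start */
          uint64_t hi  = lo + (1u << M) - 1;

          /* Suppress when the store falls in the border area of its window. */
          if (blk < lo + BORDER || blk > hi - BORDER)
              return false;

          /* Suppress when any existing entry already lies inside the window. */
          for (int i = 0; i < queue_len; i++)
              if (queue[i] >= lo && queue[i] <= hi)
                  return false;
          return true;
      }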
  • Publication number: 20100095271
    Abstract: A mechanism for minimizing effective memory latency without unnecessary cost through fine-grained software-directed data prefetching using integrated high-level and low-level code analysis and optimizations is provided. The mechanism identifies and classifies streams, identifies data that is most likely to incur a cache miss, exploits effective hardware prefetching to determine the proper number of streams to be prefetched, exploits effective data prefetching on different types of streams in order to eliminate redundant prefetching and avoid cache pollution, and uses high-level transformations with integrated lower level cost analysis in the instruction scheduler to schedule prefetch instructions effectively.
    Type: Application
    Filed: December 22, 2009
    Publication date: April 15, 2010
    Applicant: International Business Machines Corporation
    Inventors: Roch Georges Archambault, Robert James Blainey, Yaoqing Gao, Allan Russell Martin, James Lawrence McInnes, Francis Patrick O'Connell
  • Patent number: 7689774
    Abstract: A system and method for improving the page crossing performance of a data prefetcher is presented. A prefetch engine tracks times at which a data stream terminates due to a page boundary. When a certain percentage of data streams terminate at page boundaries, the prefetch engine sets an aggressive profile flag. In turn, when the data prefetch engine receives a real address that corresponds to the beginning/end of a new page, and the aggressive profile flag is set, the prefetch engine uses an aggressive startup profile to generate and schedule prefetches on the assumption that the real address is highly likely to be the continuation of a long data stream. As a result, the system and method minimize latency when crossing real page boundaries when a program is predominately accessing long streams.
    Type: Grant
    Filed: April 6, 2007
    Date of Patent: March 30, 2010
    Assignee: International Business Machines Corporation
    Inventors: Francis Patrick O'Connell, Jeffrey A. Stuecheli
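    A sketch of the page-crossing heuristic described above, assuming simple termination counters and a fixed percentage threshold; the names, 4 KiB page size, minimum sample size, and ramp depths are illustrative.

      #include <stdbool.h>
      #include <stdint.h>

      #define PAGE_SHIFT     12   /* 4 KiB pages (illustrative)               */
      #define AGGRESSIVE_PCT 50   /* % of terminations at page edges required */

      static unsigned terminations_total, terminations_at_page_boundary;
      static bool aggressive_profile;

      static void note_stream_termination(bool ended_at_page_boundary)
      {
          terminations_total++;
          if (ended_at_page_boundary)
              terminations_at_page_boundary++;
          aggressive_profile = terminations_total >= 16 &&
              terminations_at_page_boundary * 100 >=
                  terminations_total * AGGRESSIVE_PCT;
      }

      /* Startup depth for a newly seen real address: ramp deeply when the
       * address sits at a page boundary and the aggressive flag is set,
       * on the assumption it continues a long stream from the prior page. */
      static int startup_depth(uint64_t real_addr)
      {
          bool at_page_start = (real_addr & ((1u << PAGE_SHIFT) - 1)) == 0;
          return (aggressive_profile && at_page_start) ? 6 : 1;
      }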
  • Patent number: 7689775
    Abstract: Computer implemented method, system and computer program product for prefetching data in a data processing system. A computer implemented method for prefetching data in a data processing system includes generating attribute information of prior data streams by associating attributes of each prior data stream with a storage access instruction which caused allocation of the data stream, and then recording the generated attribute information. The recorded attribute information is accessed, and a behavior of a new data stream is modified using the accessed recorded attribute information.
    Type: Grant
    Filed: March 9, 2009
    Date of Patent: March 30, 2010
    Assignee: International Business Machines Corporation
    Inventors: John Barry Griswell, Jr., Francis Patrick O'Connell
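    A sketch of the attribute recording described above: prior stream attributes are filed under (a hash of) the storage access instruction's address, and a later stream allocated by the same instruction inherits them. The table size, the attributes chosen, and the hash are assumptions.

      #include <stdint.h>

      #define TABLE_SIZE 64

      /* Attributes of a prior stream, recorded when it terminates. */
      struct stream_attr {
          uint64_t inst_addr;    /* storage access instruction that allocated it */
          uint8_t  length;       /* cache lines the stream ultimately covered    */
          int8_t   direction;    /* +1 ascending, -1 descending                  */
      };

      static struct stream_attr table[TABLE_SIZE];

      static unsigned slot(uint64_t inst_addr) { return (inst_addr >> 2) % TABLE_SIZE; }

      static void record_stream(uint64_t inst_addr, uint8_t length, int8_t direction)
      {
          struct stream_attr *e = &table[slot(inst_addr)];
          e->inst_addr = inst_addr;
          e->length    = length;
          e->direction = direction;
      }

      /* When the same instruction triggers a new stream, seed its behavior
       * (here, the initial prefetch depth) from what was recorded last time. */
      static int initial_depth(uint64_t inst_addr)
      {
          const struct stream_attr *e = &table[slot(inst_addr)];
          if (e->inst_addr == inst_addr && e->length > 8)
              return 4;   /* the prior stream was long: ramp up quickly */
          return 1;
      }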
  • Patent number: 7669194
    Abstract: A mechanism for minimizing effective memory latency without unnecessary cost through fine-grained software-directed data prefetching using integrated high-level and low-level code analysis and optimizations is provided. The mechanism identifies and classifies streams, identifies data that is most likely to incur a cache miss, exploits effective hardware prefetching to determine the proper number of streams to be prefetched, exploits effective data prefetching on different types of streams in order to eliminate redundant prefetching and avoid cache pollution, and uses high-level transformations with integrated lower level cost analysis in the instruction scheduler to schedule prefetch instructions effectively.
    Type: Grant
    Filed: August 26, 2004
    Date of Patent: February 23, 2010
    Assignee: International Business Machines Corporation
    Inventors: Roch Georges Archambault, Robert James Blainey, Yaoqing Gao, Allan Russell Martin, James Lawrence McInnes, Francis Patrick O'Connell
  • Publication number: 20090164509
    Abstract: Computer implemented method, system and computer program product for prefetching data in a data processing system. A computer implemented method for prefetching data in a data processing system includes generating attribute information of prior data streams by associating attributes of each prior data stream with a storage access instruction which caused allocation of the data stream, and then recording the generated attribute information. The recorded attribute information is accessed, and a behavior of a new data stream is modified using the accessed recorded attribute information.
    Type: Application
    Filed: March 9, 2009
    Publication date: June 25, 2009
    Applicant: International Business Machines Corporation
    Inventors: John Barry Griswell, Jr., Francis Patrick O'Connell
  • Patent number: 7530063
    Abstract: A method and system of modifying instructions forming a loop is provided. The method includes: determining static and dynamic characteristics for the instructions; selecting a modification factor for the instructions based on a number of separate equivalent sections forming a cache in a processor which is processing the instructions; and modifying the instructions to interleave the instructions in the loop according to the modification factor and the static and dynamic characteristics when the instructions satisfy modification criteria based on the static and dynamic characteristics.
    Type: Grant
    Filed: May 27, 2004
    Date of Patent: May 5, 2009
    Assignee: International Business Machines Corporation
    Inventors: Roch Georges Archambault, Robert James Blainey, Yaoqing Gao, John David McCalpin, Francis Patrick O'Connell, Pascal Vezolle, Steven Wayne White
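    A rough before/after illustration of the interleaving transformation described above, assuming a modification factor of 4 corresponding to four equivalent cache sections; the reduction loop and the factor are invented for illustration, and the real decision depends on the static and dynamic characteristics the abstract mentions.

      #include <stddef.h>

      /* Original loop: a single sequential stream. */
      double sum_plain(const double *a, size_t n)
      {
          double s = 0.0;
          for (size_t i = 0; i < n; i++)
              s += a[i];
          return s;
      }

      /* Interleaved form, modification factor 4: the traversal is split into
       * four concurrent sub-streams so consecutive accesses are spread across
       * cache sections.  Assumes n is a multiple of 4 to keep the sketch short. */
      double sum_interleaved(const double *a, size_t n)
      {
          size_t q = n / 4;
          double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
          for (size_t i = 0; i < q; i++) {
              s0 += a[i];
              s1 += a[i + q];
              s2 += a[i + 2 * q];
              s3 += a[i + 3 * q];
          }
          return s0 + s1 + s2 + s3;
      }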
  • Patent number: 7516279
    Abstract: Computer implemented method, system and computer program product for prefetching data in a data processing system. A computer implemented method for prefetching data in a data processing system includes generating attribute information of prior data streams by associating attributes of each prior data stream with a storage access instruction which caused allocation of the data stream, and then recording the generated attribute information. The recorded attribute information is accessed, and a behavior of a new data stream is modified using the accessed recorded attribute information.
    Type: Grant
    Filed: February 28, 2006
    Date of Patent: April 7, 2009
    Assignee: International Business Machines Corporation
    Inventors: John Barry Griswell, Jr., Francis Patrick O'Connell
  • Publication number: 20090070556
    Abstract: In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. The prefetch engine compares entries in the prefetch queue to a window of 2^M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue is within the address window. The prefetch engine further suppresses allocation of a new entry when the data address of the store instruction is equal to an address in a border area of the address window.
    Type: Application
    Filed: January 4, 2008
    Publication date: March 12, 2009
    Inventors: John Barry Griswell, Jr., Hung Qui Le, Francis Patrick O'Connell, William J. Starke, Jeffrey Adam Stuecheli, Albert Thomas Williams
  • Publication number: 20080250208
    Abstract: A system and method for improving the page crossing performance of a data prefetcher is presented. A prefetch engine tracks times at which a data stream terminates due to a page boundary. When a certain percentage of data streams terminate at page boundaries, the prefetch engine sets an aggressive profile flag. In turn, when the data prefetch engine receives a real address that corresponds to the beginning/end of a new page, and the aggressive profile flag is set, the prefetch engine uses an aggressive startup profile to generate and schedule prefetches on the assumption that the real address is highly likely to be the continuation of a long data stream. As a result, the system and method minimize latency when crossing real page boundaries when a program is predominately accessing long streams.
    Type: Application
    Filed: April 6, 2007
    Publication date: October 9, 2008
    Inventors: Francis Patrick O'Connell, Jeffrey A. Stuecheli
  • Patent number: 7380066
    Abstract: In a microprocessor having a load/store unit and prefetch hardware, the prefetch hardware includes a prefetch queue containing entries indicative of allocated data streams. A prefetch engine receives an address associated with a store instruction executed by the load/store unit. The prefetch engine determines whether to allocate an entry in the prefetch queue corresponding to the store instruction by comparing entries in the queue to a window of addresses encompassing multiple cache blocks, where the window of addresses is derived from the received address. The prefetch engine compares entries in the prefetch queue to a window of 2^M contiguous cache blocks. The prefetch engine suppresses allocation of a new entry when any entry in the prefetch queue is within the address window. The prefetch engine further suppresses allocation of a new entry when the data address of the store instruction is equal to an address in a border area of the address window.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: May 27, 2008
    Assignee: International Business Machines Corporation
    Inventors: John Barry Griswell, Jr., Hung Qui Le, Francis Patrick O'Connell, William J. Starke, Jeffrey Adam Stuecheli, Albert Thomas Williams
  • Patent number: 7350029
    Abstract: A method of prefetching data in a microprocessor includes identifying a data stream associated with a process and determining a depth associated with the data stream based upon prefetch factors including the number of currently concurrent data streams and data consumption rates associated with the concurrent data streams. Data prefetch requests are allocated with the data stream to reflect the determined depth of the data stream. Allocating data prefetch requests may include allocating prefetch requests for a number of cache lines away from the cache line currently being referenced, wherein the number of cache lines is equal to the determined depth. The method may include, responsive to determining the depth associated with a data stream, configuring prefetch hardware to reflect the determined depth for the identified data stream. Prefetch control bits in an instruction executed by the processor control the prefetch hardware configuration.
    Type: Grant
    Filed: February 10, 2005
    Date of Patent: March 25, 2008
    Assignee: International Business Machines Corporation
    Inventors: Eric Jason Fluhr, Bradly George Frey, John Barry Griswell, Jr., Hung Qui Le, Cathy May, Francis Patrick O'Connell, Edward John Silha, Albert Thomas Williams
  • Patent number: 7254693
    Abstract: A method, apparatus, and computer program product are disclosed for selectively prohibiting speculative conditional branch execution. A particular type of conditional branch instruction is selected. An indication is stored within each instruction that is the particular type of conditional branch instruction. A processor then fetches a first instruction from code that is to be executed. A determination is made regarding whether the first instruction includes the indication. In response to determining that the instruction includes the indication: speculative execution of the first instruction is prohibited, an actual location to which the first instruction will branch is resolved, and execution of the code is branched to the actual location. In response to determining that the instruction does not include the indication, the first instruction is speculatively executed.
    Type: Grant
    Filed: December 2, 2004
    Date of Patent: August 7, 2007
    Assignee: International Business Machines Corporation
    Inventors: Lee Evan Eisen, Francis Patrick O'Connell
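    A toy model of the fetch-side decision described above: each decoded instruction carries a flag recording whether it is the selected type of conditional branch, and a flagged branch is held until it resolves instead of being predicted. The instruction encoding and pipeline interface are hypothetical.

      #include <stdbool.h>
      #include <stdint.h>

      struct decoded_inst {
          bool     is_cond_branch;
          bool     no_speculate;      /* indication stored in the instruction    */
          uint64_t resolved_target;   /* valid only once the branch has resolved */
      };

      /* Decide whether the front end may fetch past this instruction now.
       * Returns false (stall) for a flagged branch that has not resolved. */
      static bool may_fetch_next(const struct decoded_inst *inst,
                                 bool resolved, bool taken,
                                 uint64_t fall_through, uint64_t predicted_target,
                                 uint64_t *next_pc)
      {
          if (inst->is_cond_branch && inst->no_speculate && !resolved)
              return false;                    /* speculation prohibited: wait */
          if (inst->is_cond_branch && inst->no_speculate)
              *next_pc = taken ? inst->resolved_target : fall_through;
          else if (inst->is_cond_branch)
              *next_pc = predicted_target;     /* ordinary speculative fetch   */
          else
              *next_pc = fall_through;
          return true;
      }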
  • Patent number: 7039760
    Abstract: A method and apparatus for managing cache lines in a data processing system. A special purpose register is employed that may be manipulated by user code and operating system code to set preferences, such as a level 2 cache management policy preference for an application thread. These preferences may be dynamically set, and an arbitration mechanism is employed to best satisfy the preferences of multiple threads with a single aggregate preference. Members are represented using a least recently used tree. The least recently used tree has a set of nodes forming a path to member cache lines in a hierarchical structure. A state of a selected node is selectively biased within the set of nodes in the least recently used tree. At least one node on a level below the selected node is eliminated from being selected in managing the cache lines. In this manner, members can be biased against or for selection as victims when replacing cache lines in a cache memory.
    Type: Grant
    Filed: April 28, 2003
    Date of Patent: May 2, 2006
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, John David McCalpin, Francis Patrick O'Connell, William John Starke
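    A sketch of the biasing idea above for an 8-way set managed by a binary least-recently-used tree of 7 node bits: pinning the root node removes the ways under one subtree from victim selection. The bit layout and the meaning of the bias are assumptions, and the patent's arbitration of multiple threads' preferences into a single aggregate preference is not modeled.

      #include <stdint.h>

      /* Pseudo-LRU tree for one 8-way cache set: node 0 is the root, nodes
       * 1-2 are its children, and nodes 3-6 sit above the eight ways.  A
       * node bit of 0 steers the victim walk left, 1 steers it right. */
      struct plru_set {
          uint8_t node[7];
          uint8_t bias_root_left;   /* when set, the root is treated as 0 */
      };

      /* Bias the set so ways 4-7 (the right subtree) are never chosen as
       * victims, e.g., to keep lines a preferred thread wants retained. */
      static void set_bias(struct plru_set *s) { s->bias_root_left = 1; }

      static int select_victim(const struct plru_set *s)
      {
          int b0 = s->bias_root_left ? 0 : s->node[0];
          int n  = b0 ? 2 : 1;               /* child of the root           */
          int b1 = s->node[n];
          int b2 = s->node[2 * n + 1 + b1];  /* node above the chosen pair  */
          return (b0 << 2) | (b1 << 1) | b2; /* way 0-7; 0-3 while biased   */
      }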