Patents by Inventor Timothy C. Bronson

Timothy C. Bronson has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130339608
    Abstract: Embodiments relate to accessing a cache line on a multi-level cache system having a plurality of nodes and a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, requests are concurrently sent by the local node to the system memory and to remote nodes of the plurality of nodes for the specific cache line. The specific cache line is found in a specific remote node, which is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific node having the specific cache line in the ghost state, any subsequent fetch request initiated for the specific cache line from the specific node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
    Type: Application
    Filed: June 13, 2012
    Publication date: December 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Michael A. Blake, Craig R. Walters, Pak-Kin Mak
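The ghost-state handling described in the entry above can be sketched in a few lines of C. This is a hypothetical simplification, not the patented logic: the directory structure, the state names, and the decision to print rather than route requests are all illustrative assumptions.

```c
/* Hypothetical sketch of the "ghost state" idea: when a line is pulled from
 * a remote node for exclusive ownership, the losing node keeps a ghost entry
 * so that later fetches skip the concurrent system-memory request. */
#include <stdbool.h>
#include <stdio.h>

enum line_state { INVALID, SHARED, EXCLUSIVE, GHOST };

struct dir_entry {
    unsigned long addr;
    enum line_state state;
};

/* Another node takes the line for exclusive ownership; the old owner is left
 * with a ghost entry instead of a plain invalidate. */
static void take_exclusive(struct dir_entry *victim)
{
    victim->state = GHOST;
}

/* A later fetch that encounters the ghost state is directed only to the other
 * nodes, not broadcast to system memory as well. */
static bool fetch(const struct dir_entry *e)
{
    if (e->state == GHOST) {
        printf("ghost hit: query nodes only, skip system memory\n");
        return true;
    }
    printf("no ghost: query nodes and system memory concurrently\n");
    return false;
}

int main(void)
{
    struct dir_entry line = { 0x1000, EXCLUSIVE };
    take_exclusive(&line);
    fetch(&line);
    return 0;
}
```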
  • Publication number: 20130339785
    Abstract: A technique is provided for a cache. A cache controller accesses a set in a congruence class and determines that the set contains corrupted data based on an error being found. The cache controller determines that a delete parameter for taking the set offline is met and determines that a number of currently offline sets in the congruence class is higher than an allowable offline number threshold. The cache controller determines not to take the set in which the error was found offline based on determining that the number of currently offline sets in the congruence class is higher than the allowable offline number threshold.
    Type: Application
    Filed: June 13, 2012
    Publication date: December 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh
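The offline-set policy in the entry above amounts to a bounded count per congruence class. The sketch below is a minimal illustration under assumed sizes (SETS_PER_CLASS, MAX_OFFLINE are invented values); it is not the patented controller logic.

```c
/* Hypothetical sketch: a set with corrupted data is taken offline only if the
 * congruence class has not already reached its allowed number of offline sets. */
#include <stdbool.h>
#include <stdio.h>

#define SETS_PER_CLASS 16
#define MAX_OFFLINE    2   /* allowable offline number threshold (assumed) */

struct congruence_class {
    bool offline[SETS_PER_CLASS];
};

static int offline_count(const struct congruence_class *c)
{
    int n = 0;
    for (int i = 0; i < SETS_PER_CLASS; i++)
        if (c->offline[i])
            n++;
    return n;
}

/* Returns true if the erroring set was actually taken offline. */
static bool maybe_delete_set(struct congruence_class *c, int set)
{
    if (offline_count(c) >= MAX_OFFLINE)
        return false;          /* too many sets already offline: keep the set */
    c->offline[set] = true;    /* delete parameter met: take the set offline */
    return true;
}

int main(void)
{
    struct congruence_class c = { { false } };
    c.offline[0] = c.offline[5] = true;                 /* two sets already offline */
    printf("set 9 taken offline: %d\n", maybe_delete_set(&c, 9));   /* prints 0 */
    return 0;
}
```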
  • Publication number: 20130339623
    Abstract: A technique for cache coherency is provided. A cache controller selects a first set from multiple sets in a congruence class based on a cache miss for a first transaction, and places a lock on the entire congruence class in which the lock prevents other transactions from accessing the congruence class. The cache controller designates in a cache directory the first set with a marked bit indicating that the first transaction is working on the first set, and the marked bit for the first set prevents the other transactions from accessing the first set within the congruence class. The cache controller removes the lock on the congruence class based on the marked bit being designated for the first set, and resets the marked bit for the first set to an unmarked bit based on the first transaction completing work on the first set in the congruence class.
    Type: Application
    Filed: January 22, 2013
    Publication date: December 19, 2013
    Applicant: International Business Machines Corporation
    Inventors: Ekaterina M. Ambroladze, Michael A. Blake, Timothy C. Bronson, Garrett M. Drapala, Pak-kin Mak, Arthur J. O'Neill
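A rough C sketch of the marked-bit scheme in the entry above: a miss briefly locks the whole congruence class, marks the chosen set, then releases the class lock so other transactions can use the remaining sets. The structure and function names are invented for illustration.

```c
/* Hypothetical sketch of congruence-class locking with per-set marked bits. */
#include <stdbool.h>
#include <stdio.h>

#define SETS_PER_CLASS 12

struct congruence_class {
    bool class_lock;              /* blocks all access while set */
    bool marked[SETS_PER_CLASS];  /* per-set "transaction working here" bits */
};

static int handle_miss(struct congruence_class *c, int victim_set)
{
    c->class_lock = true;          /* lock the entire congruence class */
    c->marked[victim_set] = true;  /* designate the set in the directory */
    c->class_lock = false;         /* lock removed once the set is marked */
    return victim_set;
}

static bool can_access(const struct congruence_class *c, int set)
{
    return !c->class_lock && !c->marked[set];
}

static void complete_transaction(struct congruence_class *c, int set)
{
    c->marked[set] = false;        /* reset the marked bit when work is done */
}

int main(void)
{
    struct congruence_class c = { false, { false } };
    int set = handle_miss(&c, 5);
    printf("other txns may use set 5: %d\n", can_access(&c, set));  /* 0 */
    complete_transaction(&c, set);
    printf("after completion:         %d\n", can_access(&c, set));  /* 1 */
    return 0;
}
```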
  • Publication number: 20130339609
    Abstract: Embodiments relate to accessing a cache line on a multi-level cache system having a plurality of nodes and a system memory. Based on a request for exclusive ownership of a specific cache line at the local node, requests are concurrently sent by the local node to the system memory and to remote nodes of the plurality of nodes for the specific cache line. The specific cache line is found in a specific remote node, which is one of the remote nodes. The specific cache line is removed from the specific remote node for exclusive ownership by another node. Based on the specific node having the specific cache line in the ghost state, any subsequent fetch request initiated for the specific cache line from the specific node encounters the ghost state. When the ghost state is encountered, the subsequent fetch request is directed only to nodes of the plurality of nodes.
    Type: Application
    Filed: March 11, 2013
    Publication date: December 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Michael A. Blake, Craig R. Walters, Pak-Kin Mak
  • Patent number: 8560891
    Abstract: A computer implemented method of embedded dynamic random access memory (EDRAM) macro disablement. The method includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows. Each line of the EDRAM macro is iteratively tested, the testing including attempting at least one write operation at each line of the EDRAM macro. It is determined that an error occurred during the testing. Write operations for an entire row of EDRAM macros associated with the EDRAM macro are disabled based on the determining.
    Type: Grant
    Filed: October 18, 2012
    Date of Patent: October 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Pak-kin Mak
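The test-and-disable flow in the entry above can be outlined as below. The row count, lines per macro, and the simulated failing line are assumptions; the sketch only shows the control flow of write-testing an isolated macro and disabling its whole row on error.

```c
/* Hypothetical outline of EDRAM macro disablement: write-test each line of an
 * isolated macro and, on any error, disable writes to the row it belongs to. */
#include <stdbool.h>
#include <stdio.h>

#define ROWS            3
#define LINES_PER_MACRO 256

static bool row_write_disabled[ROWS];

/* Stand-in for the hardware write test of a single EDRAM line. */
static bool write_test_line(int row, int macro, int line)
{
    (void)row; (void)macro;
    return line != 37;              /* pretend line 37 fails */
}

static void test_macro(int row, int macro)
{
    for (int line = 0; line < LINES_PER_MACRO; line++) {
        if (!write_test_line(row, macro, line)) {
            row_write_disabled[row] = true;   /* disable the entire row */
            return;
        }
    }
}

int main(void)
{
    test_macro(1, 4);
    printf("row 1 write-disabled: %d\n", row_write_disabled[1]);
    return 0;
}
```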
  • Patent number: 8560767
    Abstract: Embodiments relate to embedded Dynamic Random Access Memory (eDRAM) refresh rates in a high performance cache architecture. An aspect includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate which includes an interval, including a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from a refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate. The refresh request is transmitted based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Grant
    Filed: July 11, 2012
    Date of Patent: October 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
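The rate-adjustment behavior in the entry above follows a simple pattern: start at the maximum refresh rate and drop to a slower rate once the refreshes counted in a window exceed a threshold. The sketch below is a minimal illustration; the signal names, rates, and threshold are assumed values, not taken from the patent.

```c
/* Hypothetical sketch of switching from a maximum refresh rate to a reduced
 * rate based on a counted number of refreshes exceeding a threshold. */
#include <stdio.h>

struct refresh_requestor {
    int rate;            /* refresh requests per window */
    int counter;         /* refreshes issued since the last reset */
};

#define MAX_RATE      64    /* first (maximum) refresh rate */
#define REDUCED_RATE  32    /* second, slower refresh rate */
#define THRESHOLD     48    /* refresh threshold (assumed) */

static void on_second_signal(struct refresh_requestor *r) { r->counter = 0; }
static void on_refresh_issued(struct refresh_requestor *r) { r->counter++; }

/* Third signal: report the current count and adjust the rate. */
static void on_third_signal(struct refresh_requestor *r)
{
    if (r->counter > THRESHOLD)
        r->rate = REDUCED_RATE;   /* enough refreshes observed: slow down */
    r->counter = 0;
}

int main(void)
{
    struct refresh_requestor r = { MAX_RATE, 0 };
    on_second_signal(&r);
    for (int i = 0; i < 50; i++)
        on_refresh_issued(&r);
    on_third_signal(&r);
    printf("refresh rate is now %d requests per window\n", r.rate);
    return 0;
}
```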
  • Patent number: 8458405
    Abstract: Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors is generated based on a current set of cache access requests currently operating on a set of cache banks and on at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicates the availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set of cache access requests is selected to access a given cache bank based on the cache bank availability vectors associated with the given cache bank and the set of access request parameters associated with the at least one cache access request that has been selected.
    Type: Grant
    Filed: June 23, 2010
    Date of Patent: June 4, 2013
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Hieu T. Huynh, Kenneth D. Klapproth
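The availability-vector arbitration in the entry above can be pictured as each bank advertising which future cycles it will be free, with a request granted only when its target bank is available. The bit layout, bank count, and busy times below are invented for illustration.

```c
/* Hypothetical sketch of bank arbitration using per-bank availability vectors. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define BANKS 8

/* One availability vector per bank: bit i set means the bank is free
 * i cycles from now (the busy time varies per request). */
static uint8_t availability[BANKS];

static void mark_busy(int bank, int busy_cycles)
{
    availability[bank] = (uint8_t)(0xFFu << busy_cycles);
}

/* Grant a request only if its bank is available at the wanted cycle. */
static bool grant_request(int bank, int wanted_cycle)
{
    return (availability[bank] >> wanted_cycle) & 1u;
}

int main(void)
{
    mark_busy(3, 4);                        /* bank 3 busy for 4 cycles */
    printf("grant now?  %d\n", grant_request(3, 0));   /* 0 */
    printf("grant in 5? %d\n", grant_request(3, 5));   /* 1 */
    return 0;
}
```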
  • Patent number: 8381019
    Abstract: Embedded dynamic random access memory (EDRAM) macro disablement in a cache memory includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows, iteratively testing each line of the EDRAM macro, the testing including attempting at least one write operation at each line of the EDRAM macro, determining if an error occurred during the testing, and disabling write operations for an entire row of EDRAM macros associated with the EDRAM macro based on the determining.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: February 19, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Pak-kin Mak
  • Publication number: 20120278548
    Abstract: Optimizing EDRAM refresh rates in a high performance cache architecture. An aspect of the invention includes receiving a plurality of first signals. A refresh request is transmitted via a refresh requestor to a cache memory at a first refresh rate which includes an interval, including a subset of the first signals. The first refresh rate corresponds to a maximum refresh rate. A refresh counter is reset based on receiving a second signal. The refresh counter is incremented after receiving each of a number of refresh requests. A current count is transmitted from a refresh counter to the refresh requestor based on receiving a third signal. The refresh request is transmitted at a second refresh rate, which is less than the first refresh rate. The refresh request is transmitted based on receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Application
    Filed: July 11, 2012
    Publication date: November 1, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
  • Patent number: 8291157
    Abstract: Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller, the centralized refresh controller being common to all cache memory banks of the cache memory, transmitting a starting time of the refresh time interval to a bank controller, the bank controller being local to, and associated with, only one cache memory bank of the cache memory, sampling a continuous refresh status indicative of a number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller, requesting a gap in a processing pipeline of the cache memory to facilitate the number of refreshes necessary, receiving a refresh grant in response to the requesting, and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating a number of refresh operations granted to the cache memory bank associated with the bank controller.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Hieu T. Huynh, Charlie C. Hwang, Kenneth D. Klapproth
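The centralized/local split described in the entry above can be outlined as a central controller that picks the interval, samples how many refreshes a bank needs, asks the pipeline for a gap of that size, and sends the bank controller an encoded grant. All names and numbers in the sketch are illustrative.

```c
/* Hypothetical sketch of concurrent refresh with a centralized controller
 * and a per-bank controller. */
#include <stdio.h>

struct bank_controller {
    int pending_refreshes;   /* continuous refresh status */
    int start_time;          /* start of the refresh time interval */
    int granted_refreshes;   /* decoded from the encoded refresh command */
};

/* Stand-in for pipeline arbitration: grant at most 4 refresh slots. */
static int request_pipeline_gap(int refreshes_needed)
{
    return refreshes_needed < 4 ? refreshes_needed : 4;
}

static void central_refresh_cycle(struct bank_controller *bank, int interval_start)
{
    bank->start_time = interval_start;          /* transmit the starting time */
    int needed = bank->pending_refreshes;       /* sample the refresh status  */
    int grant = request_pipeline_gap(needed);   /* request a pipeline gap     */
    bank->granted_refreshes = grant;            /* encoded refresh command    */
    bank->pending_refreshes -= grant;
}

int main(void)
{
    struct bank_controller b = { 6, 0, 0 };
    central_refresh_cycle(&b, 1000);
    printf("granted %d refreshes, %d still pending\n",
           b.granted_refreshes, b.pending_refreshes);
    return 0;
}
```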
  • Publication number: 20120210070
    Abstract: A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is stored from the one of the buffer regions of the cache to the memory.
    Type: Application
    Filed: April 19, 2012
    Publication date: August 16, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Craig R. Walters
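The buffer-region idea in the entry above can be illustrated by carving a cache array into a few buffer regions plus a RAM portion, writing a block into a processor's assigned region, and later draining it to memory. The region sizes and function names are assumptions.

```c
/* Hypothetical sketch of using part of a cache as processor buffer regions. */
#include <stdio.h>
#include <string.h>

#define CACHE_BYTES   4096
#define REGION_BYTES  512
#define NUM_REGIONS   4          /* first regions are buffers, the rest is RAM */

static unsigned char cache[CACHE_BYTES];
static unsigned char memory[1 << 16];

static unsigned char *buffer_region(int region)
{
    return &cache[region * REGION_BYTES];
}

/* Store a block into the processor's assigned buffer region. */
static void store_to_region(int region, const void *data, size_t len)
{
    memcpy(buffer_region(region), data, len);
}

/* Drain the buffer region from the cache to memory. */
static void drain_region(int region, size_t mem_offset, size_t len)
{
    memcpy(&memory[mem_offset], buffer_region(region), len);
}

int main(void)
{
    const char block[] = "data block from processor 0";
    store_to_region(0, block, sizeof block);
    drain_region(0, 0x100, sizeof block);
    printf("%s\n", (const char *)&memory[0x100]);
    return 0;
}
```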
  • Patent number: 8244972
    Abstract: Controlling refresh request transmission rates in a cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Grant
    Filed: June 24, 2010
    Date of Patent: August 14, 2012
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
  • Publication number: 20110320778
    Abstract: Serializing instructions in a multiprocessor system includes receiving a plurality of processor requests at a central point in the multiprocessor system. Each of the plurality of processor requests includes a needs register having a requestor needs switch and a resource needs switch. The method also includes establishing a tail switch indicating the presence of the plurality of processor requests at the central point, establishing a sequential order of the plurality of processor requests, and processing the plurality of processor requests at the central point in the sequential order.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Garrett M. Drapala, Michael A. Blake, Timothy C. Bronson, Lawrence D. Curley
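The serialization scheme in the entry above can be sketched as a central queue: each request carries a needs register, a tail switch records that requests are present, and requests are processed in arrival order. The field names mirror the abstract; the queue itself and its size are invented for illustration.

```c
/* Hypothetical sketch of serializing processor requests at a central point. */
#include <stdbool.h>
#include <stdio.h>

struct needs_register {
    bool requestor_needs;
    bool resource_needs;
};

struct request {
    int cpu;
    struct needs_register needs;
};

#define MAX_REQS 8

static struct request queue[MAX_REQS];
static int head, tail_idx;
static bool tail_switch;           /* set while requests are present */

static void enqueue(struct request r)
{
    queue[tail_idx++] = r;         /* establishes the sequential order */
    tail_switch = true;            /* requests present at the central point */
}

static void process_in_order(void)
{
    while (head < tail_idx) {
        struct request r = queue[head++];
        printf("cpu %d: requestor=%d resource=%d\n",
               r.cpu, r.needs.requestor_needs, r.needs.resource_needs);
    }
    tail_switch = false;
}

int main(void)
{
    enqueue((struct request){ 0, { true,  false } });
    enqueue((struct request){ 3, { false, true  } });
    process_in_order();
    return 0;
}
```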
  • Publication number: 20110320729
    Abstract: Various embodiments of the present invention manage access to a cache memory. In one embodiment, a set of cache bank availability vectors is generated based on a current set of cache access requests currently operating on a set of cache banks and on at least a variable busy time of a cache memory that includes the set of cache banks. The set of cache bank availability vectors indicates the availability of the set of cache banks. A set of cache access requests for accessing a set of given cache banks within the set of cache banks is received. At least one cache access request in the set of cache access requests is selected to access a given cache bank based on the cache bank availability vectors associated with the given cache bank and the set of access request parameters associated with the at least one cache access request that has been selected.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Garrett M. Drapala, Hieu T. Huynh, Kenneth D. Klapproth
  • Publication number: 20110320730
    Abstract: A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the one of the buffer regions of the cache according to an instruction of the processor. The data block is stored from the one of the buffer regions of the cache to the memory.
    Type: Application
    Filed: June 23, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael A. Blake, Timothy C. Bronson, Pak-kin Mak, Craig R. Walters
  • Publication number: 20110320700
    Abstract: Concurrent refresh in a cache memory includes calculating a refresh time interval at a centralized refresh controller, the centralized refresh controller being common to all cache memory banks of the cache memory, transmitting a starting time of the refresh time interval to a bank controller, the bank controller being local to, and associated with, only one cache memory bank of the cache memory, sampling a continuous refresh status indicative of a number of refreshes necessary to maintain data within the cache memory bank associated with the bank controller, requesting a gap in a processing pipeline of the cache memory to facilitate the number of refreshes necessary, receiving a refresh grant in response to the requesting, and transmitting an encoded refresh command to the bank controller, the encoded refresh command indicating a number of refresh operations granted to the cache memory bank associated with the bank controller.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Hieu T. Huynh, Charlie C. Hwang, Kenneth D. Klapproth
  • Publication number: 20110320701
    Abstract: Optimizing refresh request transmission rates in a high performance cache comprising: a refresh requestor configured to transmit a refresh request to a cache memory at a first refresh rate, the first refresh rate comprising an interval, the interval comprising receiving a plurality of first signals, the first refresh rate corresponding to a maximum refresh rate, and a refresh counter operatively coupled to the refresh requestor and configured to reset in response to receiving a second signal, increment in response to receiving each of a plurality of refresh requests from the refresh requestor, and reset and transmit a current count to the refresh requestor in response to receiving a third signal, wherein the refresh requestor is configured to transmit a refresh request at a second refresh rate, in response to receiving the current count from the refresh counter and determining that the current count is greater than a refresh threshold.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Timothy C. Bronson, Michael Fee, Arthur J. O'Neill, Jr., Scott B. Swaney
  • Publication number: 20110320862
    Abstract: Embedded dynamic random access memory (EDRAM) macro disablement in a cache memory includes isolating an EDRAM macro of a cache memory bank, the cache memory bank being divided into at least three rows of a plurality of EDRAM macros, the EDRAM macro being associated with one of the at least three rows, iteratively testing each line of the EDRAM macro, the testing including attempting at least one write operation at each line of the EDRAM macro, determining if an error occurred during the testing, and disabling write operations for an entire row of EDRAM macros associated with the EDRAM macro based on the determining.
    Type: Application
    Filed: June 24, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Blake, Timothy C. Bronson, Hieu T. Huynh, Pak-kin Mak
  • Patent number: 6985990
    Abstract: Private devices are implemented on the secondary interface of PCI bridge by re-routing the activation of device select signals (IDSEL) during the address phase of a Type 0 configuration operation on the secondary bus in response to a Type 1 configuration operation on its primary bus. Under control of a mask register and device select reroute circuit, if a configuration command on the primary interface attempts to activate the IDSEL line associated with one of the private, or reroute, devices on the secondary interface, a different IDSEL is activated to select a monitoring device on the secondary interface.
    Type: Grant
    Filed: March 29, 2002
    Date of Patent: January 10, 2006
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, John M. Sheplock, Phillip G. Williams
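The IDSEL rerouting in the entry above reduces to a table lookup: a mask register marks which secondary-bus devices are private, and a Type 0 configuration cycle aimed at one of them has its IDSEL redirected to a monitoring device. The bit assignments and device numbers below are illustrative assumptions.

```c
/* Hypothetical sketch of rerouting IDSEL activation for private PCI devices. */
#include <stdint.h>
#include <stdio.h>

#define MONITOR_IDSEL 15           /* IDSEL line of the monitoring device (assumed) */

/* Mask register: devices 3 and 7 are private/reroute devices (assumed). */
static uint32_t private_mask = (1u << 3) | (1u << 7);

/* Given the IDSEL a primary-bus Type 1 config op targets, return the IDSEL
 * actually driven on the secondary bus during the Type 0 address phase. */
static int route_idsel(int requested_idsel)
{
    if (private_mask & (1u << requested_idsel))
        return MONITOR_IDSEL;      /* reroute: select the monitoring device */
    return requested_idsel;        /* normal device select */
}

int main(void)
{
    printf("IDSEL 3 -> %d\n", route_idsel(3));   /* rerouted */
    printf("IDSEL 5 -> %d\n", route_idsel(5));   /* unchanged */
    return 0;
}
```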
  • Patent number: 6973528
    Abstract: To prevent data performance impacts when dealing with target devices that can only transfer data for a limited number of bytes before disconnecting, the invention implements a short term data cache on the bridge. Using this feature, the bridge will cache additional data beyond a predetermined quantity of data following a disconnect with the requesting device. As such, the bridge may continue to prefetch additional data up to an amount specified by a prefetch read byte count and return the additional data should the requesting device request additional data resuming at the point of disconnect. However, the bridge will discard the additional data when at least one of the following occurs: a) the requesting device disconnects data transfer, and b) a further READ request that resumes at the point of disconnect is not received within a predetermined time.
    Type: Grant
    Filed: May 22, 2002
    Date of Patent: December 6, 2005
    Assignee: International Business Machines Corporation
    Inventors: Timothy C. Bronson, Glenn D. Gilda, John M. Sheplock, Phillip G. Williams
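The short-term bridge cache in the final entry above can be sketched as follows: after a disconnect, prefetched data is held briefly and returned only if the next READ resumes at the disconnect address before a timeout; otherwise it is discarded. The byte count, timeout, and structure names are assumptions, not values from the patent.

```c
/* Hypothetical sketch of a bridge-side short-term prefetch cache. */
#include <stdbool.h>
#include <stdio.h>

struct prefetch_cache {
    bool valid;
    unsigned resume_addr;        /* point of disconnect */
    unsigned bytes_cached;       /* up to the prefetch read byte count */
    unsigned expires_at;         /* discard deadline */
};

#define PREFETCH_READ_BYTE_COUNT 512
#define CACHE_LIFETIME           100   /* arbitrary timeout, in ticks */

static void on_disconnect(struct prefetch_cache *c, unsigned addr, unsigned now)
{
    c->valid = true;
    c->resume_addr = addr;
    c->bytes_cached = PREFETCH_READ_BYTE_COUNT;
    c->expires_at = now + CACHE_LIFETIME;
}

/* Returns true if the cached data can satisfy a resumed READ. */
static bool on_read(struct prefetch_cache *c, unsigned addr, unsigned now)
{
    if (c->valid && addr == c->resume_addr && now < c->expires_at)
        return true;
    c->valid = false;            /* wrong address or too late: discard */
    return false;
}

int main(void)
{
    struct prefetch_cache c = { 0 };
    on_disconnect(&c, 0x4000, 10);
    printf("resume hit: %d\n", on_read(&c, 0x4000, 50));    /* 1 */
    printf("late read:  %d\n", on_read(&c, 0x4000, 500));   /* 0 */
    return 0;
}
```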