Patents by Inventor Arnold S. Tran

Arnold S. Tran has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9201654
    Abstract: Disclosed are a processor and a processing method incorporating an instruction pipeline with direction prediction (i.e., taken or not taken) for conditional branch instructions. In the embodiments, reading of a branch instruction history table (BHT) and a branch instruction target address cache (BTAC) for branch direction prediction occurs in parallel with the current instruction fetch in order to minimize delay in the next instruction fetch. Additionally, direction prediction is performed in the very next clock cycle based either on an initial direction prediction for the specific instruction, as stored in the BHT, or, if applicable, on a prior entry for the specific instruction in the BTAC. An override bit associated with each entry in the BTAC is the determining factor for whether the BTAC or the BHT is controlling. Override bits in the BTAC can be pre-established based on the branch instruction type in order to ensure prediction accuracy.
    Type: Grant
    Filed: June 28, 2011
    Date of Patent: December 1, 2015
    Assignee: International Business Machines Corporation
    Inventors: Jason F. Cantin, Jack R. Smith, Arnold S. Tran, Kenichi Tsuchiya
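
A short behavioral model may help illustrate the selection step in the abstract above, where the override bit attached to a BTAC entry decides whether the BTAC or the BHT supplies the direction prediction. This is a minimal sketch under assumed details (2-bit BHT counters, a dictionary-based BTAC, and illustrative class and method names), not the patented pipeline logic.

    # Behavioral sketch of BHT/BTAC direction selection (illustrative only).
    # Assumptions: 2-bit saturating counters in the BHT, a small associative
    # BTAC keyed by branch address, and a per-entry override bit.

    class BTACEntry:
        def __init__(self, target, taken, override):
            self.target = target        # predicted target address
            self.taken = taken          # direction recorded with this entry
            self.override = override    # 1 -> BTAC direction wins over the BHT

    class BranchPredictor:
        def __init__(self, bht_entries=1024):
            self.bht = [1] * bht_entries   # 2-bit counters, weakly not-taken
            self.btac = {}                 # branch address -> BTACEntry

        def predict(self, branch_addr):
            """Return (taken, target); the BTAC overrides the BHT when its
            override bit is set, otherwise the BHT counter decides."""
            bht_taken = self.bht[branch_addr % len(self.bht)] >= 2
            entry = self.btac.get(branch_addr)
            if entry is not None and entry.override:
                return entry.taken, entry.target
            target = entry.target if entry is not None else None
            return bht_taken, target

        def update(self, branch_addr, taken, target, override=0):
            # Train the BHT counter and (re)install the BTAC entry.
            i = branch_addr % len(self.bht)
            self.bht[i] = min(3, self.bht[i] + 1) if taken else max(0, self.bht[i] - 1)
            self.btac[branch_addr] = BTACEntry(target, taken, override)

    if __name__ == "__main__":
        bp = BranchPredictor()
        # A branch type that should ignore the BHT might be installed with
        # override=1 so the BTAC direction is used regardless of the counter.
        bp.update(0x400, taken=True, target=0x480, override=1)
        print(bp.predict(0x400))   # -> (True, 1152), i.e. taken, target 0x480
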
  • Patent number: 8667223
    Abstract: A cache for use in a central processing unit (CPU) of a computer includes a data array; a tag array configured to hold a list of addresses corresponding to each data entry held in the data array; a least recently used (LRU) array configured to hold data indicating least recently used data entries in the data array; a line fill buffer configured to receive data from an address in main memory that is located external to the cache in the event of a cache miss; and a shadow register associated with the line fill buffer, wherein the shadow register is configured to hold LRU data indicating a current state of the LRU array.
    Type: Grant
    Filed: August 11, 2011
    Date of Patent: March 4, 2014
    Assignee: International Business Machines Corporation
    Inventors: Thomas B. Chadwick, Jr., Robert D. Herzl, Kenneth A. Lauricella, Arnold S. Tran
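
The shadow-register idea in the abstract above can be sketched behaviorally: when a miss allocates the line fill buffer, the current LRU state of the indexed set is captured alongside it and used when the fill completes. The two-way organization and all names below are assumptions for illustration, not details from the patent.

    # Sketch of a line fill buffer that snapshots LRU state on allocation
    # (illustrative; names and the 2-way organization are assumptions).

    class CacheSet:
        def __init__(self):
            self.tags = [None, None]   # 2-way set: one tag per way
            self.lru = 0               # index of the least recently used way

    class LineFillBuffer:
        def __init__(self):
            self.busy = False
            self.addr = None
            self.shadow_lru = None     # shadow register: LRU state at miss time

    def handle_miss(cache_set, lfb, addr):
        """On a miss, allocate the fill buffer and capture the set's
        current LRU state in the shadow register."""
        lfb.busy = True
        lfb.addr = addr
        lfb.shadow_lru = cache_set.lru

    def fill_complete(cache_set, lfb, tag):
        """When data returns from memory, place it in the way named by the
        shadow register and make that way most recently used."""
        victim = lfb.shadow_lru
        cache_set.tags[victim] = tag
        cache_set.lru = 1 - victim     # the other way becomes LRU
        lfb.busy = False

    if __name__ == "__main__":
        s, lfb = CacheSet(), LineFillBuffer()
        handle_miss(s, lfb, addr=0x1000)
        fill_complete(s, lfb, tag=0x1)
        print(s.tags, s.lru)           # [1, None] 1: way 0 filled, way 1 now LRU
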
  • Publication number: 20130042068
    Abstract: A cache for use in a central processing unit (CPU) of a computer includes a data array; a tag array configured to hold a list of addresses corresponding to each data entry held in the data array; a least recently used (LRU) array configured to hold data indicating least recently used data entries in the data array; a line fill buffer configured to receive data from an address in main memory that is located external to the cache in the event of a cache miss; and a shadow register associated with the line fill buffer, wherein the shadow register is configured to hold LRU data indicating a current state of the LRU array.
    Type: Application
    Filed: August 11, 2011
    Publication date: February 14, 2013
    Applicant: International Business Machines Corporation
    Inventors: Thomas B. Chadwick, Jr., Robert D. Herzl, Kenneth A. Lauricella, Arnold S. Tran
  • Publication number: 20130007425
    Abstract: Disclosed are a processor and a processing method incorporating an instruction pipeline with direction prediction (i.e., taken or not taken) for conditional branch instructions. In the embodiments, reading of a branch instruction history table (BHT) and a branch instruction target address cache (BTAC) for branch direction prediction occurs in parallel with the current instruction fetch in order to minimize delay in the next instruction fetch. Additionally, direction prediction is performed in the very next clock cycle based either on an initial direction prediction for the specific instruction, as stored in the BHT, or, if applicable, on a prior entry for the specific instruction in the BTAC. An override bit associated with each entry in the BTAC is the determining factor for whether the BTAC or the BHT is controlling. Override bits in the BTAC can be pre-established based on the branch instruction type in order to ensure prediction accuracy.
    Type: Application
    Filed: June 28, 2011
    Publication date: January 3, 2013
    Applicant: International Business Machines Corporation
    Inventors: Jason F. Cantin, Jack R. Smith, Arnold S. Tran, Kenichi Tsuchiya
  • Patent number: 8108609
    Abstract: A hardware description language (HDL) design structure embodied on a machine-readable data storage medium includes elements that when processed in a computer aided design system generates a machine executable representation of a device for implementing dynamic refresh protocols for DRAM based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion; and a cache controller configured to assign incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines; wherein cache lines corresponding to data having a usage history below a defined frequency are assigned by the controller to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
    Type: Grant
    Filed: May 23, 2008
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: John E. Barth, Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
  • Patent number: 8024513
    Abstract: A method for implementing dynamic refresh protocols for DRAM based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines. Cache lines corresponding to data having a usage history below a defined frequency are assigned to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
    Type: Grant
    Filed: December 4, 2007
    Date of Patent: September 20, 2011
    Assignee: International Business Machines Corporation
    Inventors: John E. Barth, Jr., Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
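
The assignment policy shared by the two entries above (cache lines whose usage history falls below a defined frequency go to the refreshable portion, lines at or above it go to the non-refreshable portion) can be sketched roughly as follows. The threshold value, counter scheme, and names are assumptions for illustration, not details from the patents.

    # Sketch of usage-based assignment to refreshable vs. non-refreshable
    # DRAM cache portions (illustrative; threshold and names are assumed).

    ACCESS_THRESHOLD = 4   # assumed cut-off between "cold" and "hot" lines

    refreshable = {}       # lines accessed less often: kept alive by refresh
    non_refreshable = {}   # lines accessed often enough to be rewritten anyway
    usage = {}             # per-line access counters (the "usage history")

    def access_line(addr, data):
        """Record an access and (re)assign the line to a cache portion
        based on its usage history."""
        usage[addr] = usage.get(addr, 0) + 1
        if usage[addr] >= ACCESS_THRESHOLD:
            refreshable.pop(addr, None)
            non_refreshable[addr] = data
        else:
            non_refreshable.pop(addr, None)
            refreshable[addr] = data

    def refresh_sweep():
        """Only the refreshable portion is walked by the refresh engine;
        frequently used lines are assumed to be rewritten by normal traffic."""
        for addr in refreshable:
            pass  # issue a refresh for this line (modeled as a no-op here)

    if __name__ == "__main__":
        for _ in range(5):
            access_line(0x100, data="hot")   # crosses the threshold
        access_line(0x200, data="cold")
        print(sorted(non_refreshable), sorted(refreshable))  # [256] [512]
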
  • Patent number: 7962695
    Abstract: A method of integrating a hybrid architecture in a set associative cache having a first type of memory structure for one or more ways in each congruence class, and a second type of memory structure for the remaining ways of the congruence class, includes determining whether a memory access request results in a cache hit or a cache miss; in the event of a cache miss, determining whether the LRU way of the first type memory structure is also the LRU way of the entire congruence class, and if not, then copying the contents of the LRU way of the first type memory structure into the LRU way of the entire congruence class, and filling the LRU way of the first type memory structure with a new cache line in the event of a cache miss; and updating LRU bits, depending upon the results of the memory access request.
    Type: Grant
    Filed: December 4, 2007
    Date of Patent: June 14, 2011
    Assignee: International Business Machines Corporation
    Inventors: Marc R. Faucher, Hillery C. Hunter, William R. Reohr, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
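
The miss handling in the abstract above can be sketched behaviorally: a new line always fills the LRU way of the first memory type, with the displaced line demoted to the class-wide LRU way only when that way lies in the second memory type. The 2+2 way split and all names below are illustrative assumptions, not the patented design.

    # Sketch of hybrid-cache miss handling: new lines land in the first
    # (e.g., faster) memory type; a line is demoted into the second type
    # only when the class-wide LRU way sits there. Illustrative only.

    FAST_WAYS = [0, 1]     # ways built from the first type of memory
    SLOW_WAYS = [2, 3]     # ways built from the second type of memory

    class CongruenceClass:
        def __init__(self):
            self.lines = [None] * 4
            self.order = [0, 1, 2, 3]   # LRU order: front = LRU, back = MRU

        def lru_of(self, ways):
            return next(w for w in self.order if w in ways)

        def touch(self, way):
            self.order.remove(way)
            self.order.append(way)      # way becomes most recently used

        def miss_fill(self, new_line):
            fast_lru = self.lru_of(FAST_WAYS)
            class_lru = self.order[0]
            if fast_lru != class_lru:
                # Copy the fast LRU line into the class-wide LRU way
                # (in the second memory type) before refilling.
                self.lines[class_lru] = self.lines[fast_lru]
                self.touch(class_lru)
            self.lines[fast_lru] = new_line
            self.touch(fast_lru)

    if __name__ == "__main__":
        cc = CongruenceClass()
        for line in ("A", "B", "C"):
            cc.miss_fill(line)
        print(cc.lines, cc.order)   # ['C', 'B', 'A', None]: 'A' demoted to way 2
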
  • Patent number: 7882302
    Abstract: A method for implementing prioritized refresh of a multiple way, set associative DRAM based cache includes identifying, for each of a plurality of sets of the cache, the existence of a most recently used way that has not been accessed during a current assessment period; and for each set, refreshing only the identified most recently used way of the set not accessed during the current assessment period, while ignoring the remaining ways of the set; wherein a complete examination of each set for most recently used ways therein during the current assessment period constitutes a sweep of the cache.
    Type: Grant
    Filed: December 4, 2007
    Date of Patent: February 1, 2011
    Assignee: International Business Machines Corporation
    Inventors: Marc R. Faucher, Peter A. Sandon, Arnold S. Tran
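
The per-set selection in the abstract above can be modeled in a few lines: each sweep refreshes, in every set, only the most recently used way that was not accessed during the current assessment period. The data layout and names are assumptions for illustration.

    # Sketch of prioritized refresh: per set, refresh only the most recently
    # used way that has not been touched in the current assessment period.

    def sweep(sets):
        """One sweep of the cache. Each set is a dict with an MRU-to-LRU way
        ordering and the set of ways accessed during the current period."""
        refreshed = []
        for idx, s in enumerate(sets):
            for way in s["mru_order"]:            # most recent first
                if way not in s["accessed"]:
                    refreshed.append((idx, way))  # refresh only this way
                    break                         # ignore the remaining ways
            s["accessed"].clear()                 # start a new assessment period
        return refreshed

    if __name__ == "__main__":
        sets = [
            {"mru_order": [2, 0, 1, 3], "accessed": {2}},     # way 0 refreshed
            {"mru_order": [1, 3, 0, 2], "accessed": {1, 3}},  # way 0 refreshed
        ]
        print(sweep(sets))   # [(0, 0), (1, 0)]
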
  • Publication number: 20090193187
    Abstract: A design structure for an embedded DRAM (eDRAM) having multi-use refresh cycles is described. In one embodiment, there is a multi-level cache memory system that comprises a pending write queue configured to receive pending prefetch operations from at least one of the levels of cache. A prefetch queue is configured to receive prefetch operations for at least one of the levels of cache. A refresh controller is configured to determine addresses within each level of cache that are due for a refresh. The refresh controller is configured to assert a refresh write-in signal to write data supplied from the pending write queue specified for an address due for a refresh rather than refresh existing data. The refresh controller asserts the refresh write-in signal in response to a determination that there is pending data to supply to the address specified to have the refresh.
    Type: Application
    Filed: April 15, 2008
    Publication date: July 30, 2009
    Applicant: International Business Machines Corporation
    Inventors: John E. Barth, Jr., Philip G. Emma, Hillery C. Hunter, Vijayalakshmi Srinivasan, Arnold S. Tran
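
The multi-use refresh cycle described above can be sketched as follows: when an address due for refresh also has a pending write queued, the refresh slot writes the new data instead of rewriting the old contents. The queue layout and names are illustrative assumptions, not the patented controller.

    # Sketch of a multi-use refresh cycle (illustrative only).
    from collections import deque

    def refresh_cycle(array, due_addrs, pending_writes):
        """pending_writes is a deque of (addr, data) pairs. For each address
        due for refresh, either consume a matching pending write (refresh
        write-in) or rewrite the existing data (ordinary refresh)."""
        pending = dict(pending_writes)
        for addr in due_addrs:
            if addr in pending:
                array[addr] = pending.pop(addr)   # refresh write-in asserted
            else:
                array[addr] = array[addr]         # plain refresh of old data
        # Return the writes that still need a normal write cycle.
        return deque(pending.items())

    if __name__ == "__main__":
        edram = {0x10: "old", 0x20: "old"}
        remaining = refresh_cycle(edram, due_addrs=[0x10, 0x20],
                                  pending_writes=deque([(0x10, "new")]))
        print(edram, list(remaining))   # {16: 'new', 32: 'old'} []
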
  • Publication number: 20090193186
    Abstract: An embedded DRAM (eDRAM) having multi-use refresh cycles is described. In one embodiment, there is a multi-level cache memory system that comprises a pending write queue configured to receive pending prefetch operations from at least one of the levels of cache. A prefetch queue is configured to receive prefetch operations for at least one of the levels of cache. A refresh controller is configured to determine addresses within each level of cache that are due for a refresh. The refresh controller is configured to assert a refresh write-in signal to write data supplied from the pending write queue specified for an address due for a refresh rather than refresh existing data. The refresh controller asserts the refresh write-in signal in response to a determination that there is pending data to supply to the address specified to have the refresh.
    Type: Application
    Filed: January 25, 2008
    Publication date: July 30, 2009
    Inventors: John E. Barth, Jr., Philip G. Emma, Hillery C. Hunter, Vijayalakshmi Srinivasan, Arnold S. Tran
  • Publication number: 20090144503
    Abstract: A method of integrating a hybrid architecture in a set associative cache having a first type of memory structure for one or more ways in each congruence class, and a second type of memory structure for the remaining ways of the congruence class, includes determining whether a memory access request results in a cache hit or a cache miss; in the event of a cache miss, determining whether the LRU way of the first type memory structure is also the LRU way of the entire congruence class, and if not, then copying the contents of the LRU way of the first type memory structure into the LRU way of the entire congruence class, and filling the LRU way of the first type memory structure with a new cache line in the event of a cache miss; and updating LRU bits, depending upon the results of the memory access request.
    Type: Application
    Filed: December 4, 2007
    Publication date: June 4, 2009
    Inventors: Marc R. Faucher, Hillery C. Hunter, William R. Reohr, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
  • Publication number: 20090144491
    Abstract: A method for implementing prioritized refresh of a multiple way, set associative DRAM based cache includes identifying, for each of a plurality of sets of the cache, the existence of a most recently used way that has not been accessed during a current assessment period; and for each set, refreshing only the identified most recently used way of the set not accessed during the current assessment period, while ignoring the remaining ways of the set; wherein a complete examination of each set for most recently used ways therein during the current assessment period constitutes a sweep of the cache.
    Type: Application
    Filed: December 4, 2007
    Publication date: June 4, 2009
    Inventors: Marc R. Faucher, Peter A. Sandon, Arnold S. Tran
  • Publication number: 20090144492
    Abstract: A hardware description language (HDL) design structure embodied on a machine-readable data storage medium includes elements that when processed in a computer aided design system generates a machine executable representation of a device for implementing dynamic refresh protocols for DRAM based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion; and a cache controller configured to assign incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines; wherein cache lines corresponding to data having a usage history below a defined frequency are assigned by the controller to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
    Type: Application
    Filed: May 23, 2008
    Publication date: June 4, 2009
    Applicant: International Business Machines Corporation
    Inventors: John E. Barth, Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
  • Publication number: 20090144506
    Abstract: A method for implementing dynamic refresh protocols for DRAM based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines. Cache lines corresponding to data having a usage history below a defined frequency are assigned to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache.
    Type: Application
    Filed: December 4, 2007
    Publication date: June 4, 2009
    Inventors: John E. Barth, Jr., Philip G. Emma, Erik L. Hedberg, Hillery C. Hunter, Peter A. Sandon, Vijayalakshmi Srinivasan, Arnold S. Tran
  • Patent number: 5319761
    Abstract: An improved DLAT structure distinguishes between address spaces and data spaces. The DLAT structure classifies data spaces by one or more space identifications which control assignment of virtual page addresses to DLAT rows. In one embodiment, a "private space bit" is used to select different DLAT addressing algorithms. In another embodiment, data spaces are sub-classified using space identification bits, and for each sub-class, a unique algorithm is selected based on the page address bits. An Exclusive OR function is used to generate the DLAT selection bits. This approach minimizes private space synonyms while maximizing common space synonyms. The result is improved performance since the former minimizes thrashing and the latter maximizes the value of the DLAT common bit.
    Type: Grant
    Filed: August 12, 1991
    Date of Patent: June 7, 1994
    Assignee: International Business Machines Corporation
    Inventors: Kevin A. Chiarot, Richard J. Schmalz, Theodore J. Schmitt, Arnold S. Tran, Shih-Hsiung S. Tung
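
The row-selection idea in the abstract above can be illustrated loosely: a private space bit chooses between indexing on the page address alone and folding the space identification into the index with an exclusive OR. The row count, bit widths, and hashing details below are assumptions for illustration only, not the patented algorithms.

    # Sketch of XOR-based DLAT row selection steered by a private-space bit
    # (illustrative; row count and bit choices are assumptions).

    DLAT_ROWS = 64   # assumed power-of-two row count

    def dlat_row(page_addr, space_id, private_space):
        """Pick a DLAT row from the virtual page address. For private spaces,
        fold the space identification into the index with XOR so pages from
        different private spaces spread across rows (fewer private-space
        synonyms); for common spaces, index on the page address alone so
        shared pages land in the same row (more common-space synonyms)."""
        index = page_addr & (DLAT_ROWS - 1)
        if private_space:
            index ^= space_id & (DLAT_ROWS - 1)   # XOR-generated selection bits
        return index

    if __name__ == "__main__":
        # The same virtual page in two private spaces maps to different rows,
        # while a common page maps to one row regardless of space.
        print(dlat_row(0x123, space_id=5, private_space=True),
              dlat_row(0x123, space_id=9, private_space=True),
              dlat_row(0x123, space_id=5, private_space=False))   # 38 42 35
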