Patents by Inventor Hans-Werner Tast

Hans-Werner Tast has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150032964
    Abstract: Handling virtual memory address synonyms in a multi-level cache hierarchy structure. The multi-level cache hierarchy structure has a first level (L1) cache, the L1 cache being operatively connected to a second level (L2) cache split into an L2 data cache directory and an L2 instruction cache. The L2 data cache directory includes directory entries holding information about data currently stored in the L1 cache, and the L2 cache is operatively connected to a third level (L3) cache. The first level cache is virtually indexed while the second and third levels are physically indexed. Counter bits are allocated in a directory entry of the L2 data cache directory for storing a counter number; the directory entry corresponds to at least one first L1 cache line. A first search is performed in the L1 cache for a requested virtual memory address, wherein the virtual memory address corresponds to a physical memory address tag at a second L1 cache line.
    Type: Application
    Filed: July 18, 2014
    Publication date: January 29, 2015
    Inventors: Christian Habermann, Christian Jacobi, Gerrit Koch, Martin Recktenwald, Hans-Werner Tast
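
The mechanism described in the abstract above (publication 20150032964) can be pictured in software, though only loosely: below is a minimal C++ sketch, not the patent's implementation, in which an L2 data-cache directory entry keeps a few counter bits recording how many virtually indexed L1 lines (synonyms) currently hold the same physical line. The names `L2DirEntry`, `L2DataDirectory`, and `synonym_count` are assumptions made for the illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Toy model: one L2 data-cache directory entry per physical line, with a
// few counter bits recording how many virtually indexed L1 lines
// (synonyms) currently hold a copy of that physical line.
struct L2DirEntry {
    uint32_t physical_tag = 0;
    uint8_t  synonym_count = 0;   // the "counter bits" of the abstract
};

class L2DataDirectory {
public:
    // Called when an L1 line for this physical tag is installed.
    void l1LineInstalled(uint32_t physical_tag) {
        entries_[physical_tag].physical_tag = physical_tag;
        ++entries_[physical_tag].synonym_count;
    }
    // Called when an L1 line for this physical tag is evicted.
    void l1LineEvicted(uint32_t physical_tag) {
        auto it = entries_.find(physical_tag);
        if (it != entries_.end() && it->second.synonym_count > 0)
            --it->second.synonym_count;
    }
    // True if another (synonym) L1 line already holds this physical line,
    // i.e. a virtual-address miss may still hit the same physical data.
    bool synonymPresent(uint32_t physical_tag) const {
        auto it = entries_.find(physical_tag);
        return it != entries_.end() && it->second.synonym_count > 0;
    }
private:
    std::unordered_map<uint32_t, L2DirEntry> entries_;
};

int main() {
    L2DataDirectory dir;
    dir.l1LineInstalled(0x1234);                      // first alias installed
    std::cout << dir.synonymPresent(0x1234) << "\n";  // 1: a synonym exists
    dir.l1LineEvicted(0x1234);
    std::cout << dir.synonymPresent(0x1234) << "\n";  // 0: no copies left
}
```
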
  • Patent number: 8891279
    Abstract: A mechanism is provided in a data processing system for enhancing wiring structure for a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path.
    Type: Grant
    Filed: September 17, 2012
    Date of Patent: November 18, 2014
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
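
As a rough software analogue of the wiring described in patent 8891279 above (not the actual circuit), the sketch below splits the cache data into two portions, lets either portion drive either half of the primary output via the cross-fed data paths, and taps the secondary output off the second path. The names `SplitCacheOutputs` and `routeOutputs` are invented for the example.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

// Toy combinational model of a data cache split into two portions. Each
// portion supplies half of the data elements; the two data paths are
// cross-fed so either portion can drive either half of the primary
// output, and the secondary output taps the second data path only.
struct SplitCacheOutputs {
    std::array<uint64_t, 2> primary;   // primary (full-width) output
    uint64_t secondary;                // auxiliary output
};

SplitCacheOutputs routeOutputs(uint64_t first_portion,
                               uint64_t second_portion,
                               bool swap_halves) {
    SplitCacheOutputs out{};
    if (!swap_halves) {
        out.primary = {first_portion, second_portion};
    } else {
        // Cross-feed: the first path drives the second half and vice versa.
        out.primary = {second_portion, first_portion};
    }
    out.secondary = second_portion;    // secondary output hangs off path 2
    return out;
}

int main() {
    SplitCacheOutputs o = routeOutputs(0xAAAA, 0xBBBB, /*swap_halves=*/true);
    std::cout << std::hex << o.primary[0] << " " << o.primary[1]
              << " " << o.secondary << "\n";   // bbbb aaaa bbbb
}
```
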
  • Patent number: 8856444
    Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request and then determines the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, that state is changed such that a following cache access is processed at a higher speed than an access processed if the state remained unchanged.
    Type: Grant
    Filed: April 28, 2012
    Date of Patent: October 7, 2014
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
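
A minimal sketch of the behaviour summarized in patent 8856444 above: when a fetch request arrives and the entry next in line for replacement is held exclusive or modified, its state is demoted ahead of time so that the later replacement, and thus the following cache access, is faster. The MESI-style enum and the `prepareNextVictim` helper are assumptions for illustration, not the patented logic.

```cpp
#include <iostream>
#include <vector>

// Toy MESI-style states for higher-level cache directory entries.
enum class State { Invalid, Shared, Exclusive, Modified };

struct Entry { State state = State::Invalid; };

// On a fetch request, look at the entry that would be replaced next.
// If it is Exclusive or Modified, demote it now (conceptually: write it
// back / give up ownership) so the eventual replacement is fast.
void prepareNextVictim(std::vector<Entry>& set, size_t victim_index) {
    Entry& victim = set[victim_index];
    if (victim.state == State::Exclusive || victim.state == State::Modified) {
        // In hardware this would trigger a castout/downgrade; here we
        // simply record the cheaper-to-replace state.
        victim.state = State::Shared;
    }
}

int main() {
    std::vector<Entry> set(4);
    set[2].state = State::Modified;        // next victim holds dirty data
    prepareNextVictim(set, 2);             // demote it ahead of time
    std::cout << (set[2].state == State::Shared) << "\n";   // 1
}
```
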
  • Publication number: 20140281238
    Abstract: Systems and methods for providing data from a cache memory to requestors include a number of cache memory levels arranged in a hierarchy. The method includes receiving a request for fetching data from the cache memory and, using one or more prediction algorithms, determining one or more addresses in the cache memory level that is one level higher than the current cache memory level. Further, the method includes pre-fetching the one or more addresses from that higher cache memory level and determining whether the data is available at those addresses. If the data is available at the one or more addresses, the data is fetched from that higher cache level; otherwise, addresses of the next level above that cache memory level are determined and pre-fetched. Furthermore, the method includes providing the fetched data to the requestor.
    Type: Application
    Filed: March 15, 2013
    Publication date: September 18, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Christian Jacobi, Sascha Junghans, Martin Recktenwald, Hans-Werner Tast
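
The fetch flow of publication 20140281238 above can be outlined as a loop over cache levels: predict addresses, pre-fetch them from the next higher level, and climb until the requested data is found. The sketch below assumes a simple linked list of levels and a placeholder next-line predictor (`predictAddresses`); both are inventions of the example, not details from the publication.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_set>
#include <vector>

struct CacheLevel {
    std::unordered_set<uint64_t> lines;  // addresses currently present
    CacheLevel* higher = nullptr;        // next (larger/slower) level
};

// Placeholder prediction: assume simple next-line prefetching.
std::vector<uint64_t> predictAddresses(uint64_t addr) {
    return {addr, addr + 64};
}

// Walk up the hierarchy: for each level, pre-fetch the predicted
// addresses that the next higher level holds, then check whether the
// requested address is now available; if not, continue climbing.
bool fetch(CacheLevel* level, uint64_t addr) {
    for (CacheLevel* cur = level; cur != nullptr; cur = cur->higher) {
        if (cur->higher) {
            for (uint64_t predicted : predictAddresses(addr))
                if (cur->higher->lines.count(predicted))
                    cur->lines.insert(predicted);   // toy "pre-fetch"
        }
        if (cur->lines.count(addr))
            return true;                            // data available here
    }
    return false;                                   // not in any level
}

int main() {
    CacheLevel l1, l2, l3;
    l1.higher = &l2;  l2.higher = &l3;
    l3.lines.insert(0x1000);
    std::cout << fetch(&l1, 0x1000) << "\n";        // 1: found via L3
}
```
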
  • Patent number: 8751749
    Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request. It is then determined by the higher cache memory the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, the state of the entry to be replaced next is changed such that a following cache access is processed at a higher speed compared to an access processed if the state would stay unchanged.
    Type: Grant
    Filed: June 14, 2011
    Date of Patent: June 10, 2014
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
  • Publication number: 20140129773
    Abstract: A hierarchical cache structure comprises at least one higher level cache comprising a unified cache array for data and instructions and at least two lower level caches, each split into an instruction cache and a data cache. An instruction cache and a data cache of a split second level cache are connected to a third level cache; and an instruction cache of a split first level cache is connected to the instruction cache of the split second level cache, and a data cache of the split first level cache is connected to the instruction cache and the data cache of the split second level cache.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 8, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
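
The connection topology described in publication 20140129773 above (a unified L3, a split L2, and a split L1 whose data cache is wired to both halves of the L2) can be pictured as a small data structure. The sketch below is purely structural; the `Cache` struct and its fields are assumptions.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Structural sketch of the hierarchy in the abstract: a unified L3,
// a split L2 (instruction + data), and a split L1 whose data cache is
// connected to both the L2 instruction cache and the L2 data cache.
struct Cache {
    std::string name;
    std::vector<Cache*> connected_to;
};

int main() {
    Cache l3{"L3 unified", {}};
    Cache l2i{"L2 instruction", {&l3}};
    Cache l2d{"L2 data", {&l3}};
    Cache l1i{"L1 instruction", {&l2i}};
    Cache l1d{"L1 data", {&l2i, &l2d}};   // L1D talks to both L2I and L2D

    for (const Cache* c : {&l1i, &l1d, &l2i, &l2d})
        for (const Cache* up : c->connected_to)
            std::cout << c->name << " -> " << up->name << "\n";
}
```
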
  • Publication number: 20140129774
    Abstract: A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split into an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 8, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
  • Publication number: 20140082290
    Abstract: A mechanism is provided in a data processing system for enhancing wiring structure for a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path.
    Type: Application
    Filed: September 17, 2012
    Publication date: March 20, 2014
    Applicant: International Business Machines Corporation
    Inventors: Christian Habermann, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
  • Publication number: 20140082293
    Abstract: Provided are techniques for handling a store buffer in conjunction with a processor. The store buffer comprises a free list, a merge window, and an evict list, together with logic that, upon receipt of a T_STORE operation, compares a first address associated with the T_STORE operation with a plurality of addresses associated with previous T_STORE operations, where the previous T_STORE operations are part of the same transaction as the T_STORE operation and the entries corresponding to the previous T_STORE operations are stored in the merge window. In response to a match between the first address and a second address, associated with a second T_STORE operation, of the plurality of addresses, the logic merges a first entry corresponding to the first T_STORE operation with a second entry corresponding to the second T_STORE operation, and consolidates results associated with the first T_STORE operation with results associated with the second T_STORE operation.
    Type: Application
    Filed: September 16, 2012
    Publication date: March 20, 2014
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khary J. Alexander, Christian Jacobi, Gerrit Koch, Martin Recktenwald, Timothy J. Slegel, Hans-Werner Tast
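
A simplified sketch of the merge-window behaviour described in publication 20140082293 above: when a new T_STORE hits the same address as an earlier T_STORE of the same transaction already sitting in the merge window, the two entries are merged and their store data consolidated. The `StoreBuffer::tStore` interface and the byte-mask merging are assumptions made for the example, and the free list and evict list are omitted.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>

// Toy merge window keyed by (doubleword-aligned) store address.
struct Entry {
    uint64_t data = 0;   // consolidated store data
    uint8_t  mask = 0;   // which bytes of the doubleword have been written
};

class StoreBuffer {
public:
    // Handle a transactional store: if an entry for the same address
    // from the same transaction is already in the merge window, merge
    // the new bytes into it; otherwise allocate a new entry.
    void tStore(uint64_t addr, uint64_t data, uint8_t byte_mask) {
        Entry& e = merge_window_[addr & ~7ull];
        for (int b = 0; b < 8; ++b) {
            if (byte_mask & (1u << b)) {
                uint64_t byte_bits = 0xffull << (8 * b);
                e.data = (e.data & ~byte_bits) | (data & byte_bits);
            }
        }
        e.mask |= byte_mask;   // consolidate results of both T_STOREs
    }
    size_t entries() const { return merge_window_.size(); }
private:
    std::unordered_map<uint64_t, Entry> merge_window_;
};

int main() {
    StoreBuffer sb;
    sb.tStore(0x100, 0x00000000000000AA, 0x01);  // write low byte
    sb.tStore(0x100, 0x000000000000BB00, 0x02);  // same address: merged
    std::cout << sb.entries() << "\n";           // 1 entry, not 2
}
```
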
  • Publication number: 20130339616
    Abstract: Embodiments relate to controlling observability of transactional and non-transactional stores. An aspect includes receiving one or more store instructions. The one or more store instructions are initiated within an active transaction and include store data. The active transaction effectively delays committing stores to memory until successful completion of the active transaction. The store data is stored in a local storage buffer, causing alterations to the local storage buffer from a first state to a second state. A signal is received that the active transaction has terminated. If the active transaction has terminated abnormally, the local storage buffer is reverted to the first state if the store data was stored by a transactional store instruction, and the store data is propagated to a shared cache if the store instruction is non-transactional.
    Type: Application
    Filed: March 7, 2013
    Publication date: December 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khary J. Alexander, Christian Jacobi, Hans-Werner Tast, Patrick M. West
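
A rough software model of the abort behaviour described in publication 20130339616 above: on abnormal termination of the transaction, transactional store data is rolled back (the buffer returns to its first state), while non-transactional store data is still propagated to the shared cache. All class and method names below are invented for the illustration.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct PendingStore {
    uint64_t addr;
    uint64_t data;
    bool transactional;   // set by a transactional store instruction?
};

class LocalStoreBuffer {
public:
    void store(uint64_t addr, uint64_t data, bool transactional) {
        pending_.push_back({addr, data, transactional});
    }
    // Transaction ended abnormally: drop (revert) transactional stores,
    // but still propagate non-transactional stores to the shared cache.
    void onAbort(std::map<uint64_t, uint64_t>& shared_cache) {
        for (const PendingStore& s : pending_)
            if (!s.transactional)
                shared_cache[s.addr] = s.data;
        pending_.clear();   // buffer is back in its first state
    }
private:
    std::vector<PendingStore> pending_;
};

int main() {
    std::map<uint64_t, uint64_t> shared_cache;
    LocalStoreBuffer buf;
    buf.store(0x10, 1, /*transactional=*/true);    // reverted on abort
    buf.store(0x20, 2, /*transactional=*/false);   // survives the abort
    buf.onAbort(shared_cache);
    std::cout << shared_cache.count(0x10) << " "
              << shared_cache.count(0x20) << "\n"; // 0 1
}
```
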
  • Publication number: 20130339615
    Abstract: Embodiments relate to controlling observability of transactional and non-transactional stores. An aspect includes receiving one or more store instructions. The one or more store instructions are initiated within an active transaction and include store data. The active transaction effectively delays committing stores to memory until successful completion of the active transaction. The store data is stored in a local storage buffer, causing alterations to the local storage buffer from a first state to a second state. A signal is received that the active transaction has terminated. If the active transaction has terminated abnormally, the local storage buffer is reverted to the first state if the store data was stored by a transactional store instruction, and the store data is propagated to a shared cache if the store instruction is non-transactional.
    Type: Application
    Filed: June 15, 2012
    Publication date: December 19, 2013
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Khary J. Alexander, Christian Jacobi, Hans-Werner Tast, Patrick M. West
  • Patent number: 8516200
    Abstract: A mechanism is provided for avoiding cross-interrogates for a streaming data optimized level one cache. The mechanism adds a set of dedicated registers, referred to as “copex registers,” to track ownership of the cache lines that the co-processor's L1 cache holds exclusive. The mechanism extends the cache directory of the L2 cache by a bit that identifies exclusive ownership of a cache line in the co-processor cache. The co-processor continuously provides an indication of which copex registers are valid. On any action that requires a directory lookup in the L2 cache, the mechanism compares the valid copex registers against the lookup address in parallel to the directory lookup. The mechanism considers the “exclusive ownership in co-processor” bit in the directory valid only if the cache line is also currently in a valid copex register.
    Type: Grant
    Filed: September 7, 2010
    Date of Patent: August 20, 2013
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
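
A small sketch of the lookup-time check described in patent 8516200 above: the valid copex registers are compared against the lookup address alongside the L2 directory lookup, and the directory's "exclusive ownership in co-processor" bit is honoured only when a matching valid copex register exists. The register count and field names are assumptions, not taken from the patent.

```cpp
#include <array>
#include <cstdint>
#include <iostream>

struct CopexRegister {
    bool     valid = false;
    uint64_t line_addr = 0;   // cache line the co-processor holds exclusive
};

struct L2DirectoryEntry {
    bool coproc_exclusive_bit = false;   // "exclusive in co-processor" bit
};

// Assumed small register file; the abstract does not state a size.
using CopexFile = std::array<CopexRegister, 4>;

// The directory bit is only trusted if the line is also currently held
// in a valid copex register (checked "in parallel" with the lookup).
bool coprocHoldsExclusive(const L2DirectoryEntry& entry,
                          const CopexFile& copex, uint64_t lookup_addr) {
    bool copex_hit = false;
    for (const CopexRegister& r : copex)
        copex_hit |= (r.valid && r.line_addr == lookup_addr);
    return entry.coproc_exclusive_bit && copex_hit;
}

int main() {
    CopexFile copex{};
    copex[0] = {true, 0x4000};
    L2DirectoryEntry e{true};
    std::cout << coprocHoldsExclusive(e, copex, 0x4000) << " "   // 1
              << coprocHoldsExclusive(e, copex, 0x8000) << "\n"; // 0
}
```
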
  • Patent number: 8495452
    Abstract: Handling corrupted background data in an out of order processing environment. Modified data is stored in a byte of a word having at least one byte of background data. A byte valid vector and a byte store bit are added to the word. Parity checking is done on the word. If the word does not contain corrupted background data, the word is propagated to the next level of cache. If the word contains corrupted background data, a copy of the word is fetched from a next level of cache that is ECC protected, and the byte having the modified data is extracted from the word and swapped for the corresponding byte in the word copy. The word copy is then written into the next level of cache that is ECC protected.
    Type: Grant
    Filed: February 10, 2011
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Michael Fee, Christian Habermann, Christian Jacobi, Diana L. Orf, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
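
A byte-level sketch of the repair path described in patent 8495452 above, assuming a 4-byte word, a byte-valid vector marking which bytes hold modified data, and a helper that grafts those bytes onto the ECC-protected copy fetched from the next cache level. The function and type names are illustrative only.

```cpp
#include <array>
#include <bitset>
#include <cstdint>
#include <iostream>

// A 4-byte word plus a byte-valid vector telling which bytes carry
// freshly modified (and therefore trustworthy) data; the remaining
// "background" bytes may be parity-corrupted.
struct Word {
    std::array<uint8_t, 4> bytes{};
    std::bitset<4>         byte_valid;   // 1 = byte holds modified data
};

// Repair path: take the ECC-protected copy fetched from the next cache
// level, extract the modified byte(s) from the corrupted word, and swap
// them into the copy, which is then written back to the ECC-protected level.
Word repairWithEccCopy(const Word& corrupted,
                       const std::array<uint8_t, 4>& ecc_copy) {
    Word repaired;
    repaired.bytes = ecc_copy;                        // clean background data
    for (size_t i = 0; i < corrupted.bytes.size(); ++i) {
        if (corrupted.byte_valid.test(i)) {
            repaired.bytes[i] = corrupted.bytes[i];   // keep modified byte
            repaired.byte_valid.set(i);
        }
    }
    return repaired;
}

int main() {
    Word w;
    w.bytes = {0xAA, 0x00, 0x00, 0x00};   // byte 0 modified, rest corrupted
    w.byte_valid.set(0);
    std::array<uint8_t, 4> ecc_copy = {0x11, 0x22, 0x33, 0x44};
    Word fixed = repairWithEccCopy(w, ecc_copy);
    std::cout << std::hex << int(fixed.bytes[0]) << " "
              << int(fixed.bytes[1]) << "\n";         // aa 22
}
```
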
  • Patent number: 8302043
    Abstract: A method and system for verifying a logic circuit design using dynamic clock gating is disclosed. The method comprises choosing at least one master seed to determine initial values as initialization for said logic circuit and/or stimuli data for at least one interface of said logic circuit, choosing at least two different dynamic clock gating configurations for every chosen master seed, executing a functional simulation with said logic circuit for every chosen dynamic clock gating configuration by using said determined initialization and/or stimuli data based on a corresponding master seed, comparing against each other the simulation results of functional simulations executed with said logic circuit for at least two different chosen dynamic clock gating configurations, and reporting an error if said at least two simulation results are not identical.
    Type: Grant
    Filed: September 7, 2010
    Date of Patent: October 30, 2012
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Christian Jacobi, Matthias Pflanz, Hans-Werner Tast, Ralf Winkelmann
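
The verification flow of patent 8302043 above boils down to: pick a master seed, run the same seeded simulation under at least two clock-gating configurations, and report an error if the results differ. Below is a minimal sketch in which `simulate` merely stands in for the actual functional simulator (which the abstract does not specify); the injected XOR term models a hypothetical gating bug so the check has something to catch.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Placeholder for a functional simulation of the logic circuit: the same
// master seed yields the same initialization/stimuli, and the result must
// not depend on the clock-gating configuration. The XOR term models a
// hypothetical clock-gating bug for demonstration.
uint64_t simulate(uint64_t master_seed, int gating_config, bool inject_bug) {
    uint64_t result = master_seed * 6364136223846793005ull
                      + 1442695040888963407ull;
    if (inject_bug) result ^= static_cast<uint64_t>(gating_config);
    return result;
}

// Run every chosen gating configuration with the same seed and report
// an error if any two simulation results are not identical.
bool verify(uint64_t master_seed, const std::vector<int>& configs, bool bug) {
    uint64_t reference = simulate(master_seed, configs.front(), bug);
    for (int cfg : configs)
        if (simulate(master_seed, cfg, bug) != reference)
            return false;   // mismatch: clock gating changed behaviour
    return true;
}

int main() {
    std::vector<int> configs = {0, 1, 2};   // >= 2 gating configurations
    std::cout << verify(42, configs, /*bug=*/false) << " "   // 1: pass
              << verify(42, configs, /*bug=*/true)  << "\n"; // 0: error
}
```
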
  • Publication number: 20120215983
    Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request and then determines the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, that state is changed such that a following cache access is processed at a higher speed than an access processed if the state remained unchanged.
    Type: Application
    Filed: April 28, 2012
    Publication date: August 23, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
  • Publication number: 20120210188
    Abstract: Handling corrupted background data in an out of order processing environment. Modified data is stored in a byte of a word having at least one byte of background data. A byte valid vector and a byte store bit are added to the word. Parity checking is done on the word. If the word does not contain corrupted background data, the word is propagated to the next level of cache. If the word contains corrupted background data, a copy of the word is fetched from a next level of cache that is ECC protected, and the byte having the modified data is extracted from the word and swapped for the corresponding byte in the word copy. The word copy is then written into the next level of cache that is ECC protected.
    Type: Application
    Filed: February 10, 2011
    Publication date: August 16, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michael Fee, Christian Habermann, Christian Jacobi, Diana L. Orf, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
  • Publication number: 20120059996
    Abstract: A mechanism is provided for avoiding cross-interrogates for a streaming data optimized level one cache. The mechanism adds a set of dedicated registers, referred to as “copex registers,” to track ownership of the cache lines that the co-processor's L1 cache holds exclusive. The mechanism extends the cache directory of the L2 cache by a bit that identifies exclusive ownership of a cache line in the co-processor cache. The co-processor continuously provides an indication of which copex registers are valid. On any action that requires a directory lookup in the L2 cache, the mechanism compares the valid copex registers against the lookup address in parallel to the directory lookup. The mechanism considers the “exclusive ownership in co-processor” bit in the directory valid only if the cache line is also currently in a valid copex register.
    Type: Application
    Filed: September 7, 2010
    Publication date: March 8, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
  • Patent number: 8108197
    Abstract: A coherency algorithm for a multiprocessor environment to run on a single processor model is verified by: generating a reference model reflecting a private cache hierarchy of a single processor within the multiprocessor environment, stimulating the private cache hierarchy with simulated requests and/or cross invalidations from a core side and/or from a nest side, and augmenting all data available in the private cache hierarchy with two construction dates and two expiration dates, which are set based on interface events. Multiprocessor coherency is not observed if the cache hierarchy ever returns data to the processor with an expiration date that is older than the latest construction date of all data used before. Further, a single processor model and a computer program product can be employed to execute the method.
    Type: Grant
    Filed: December 4, 2008
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: Christian Habermann, Ralf Winkelmann, Hans-Werner Tast, Christian Jacobi
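
A sketch of the checking rule stated in patent 8108197 above, assuming each returned datum is tagged with construction and expiration "dates" (here plain integer timestamps): coherency is violated whenever the hierarchy returns data whose expiration date is older than the latest construction date of all data used before. The `CoherencyChecker` class is an assumption of the example.

```cpp
#include <cstdint>
#include <iostream>

// Each datum returned by the modelled private cache hierarchy carries
// construction and expiration timestamps set from interface events.
struct ReturnedData {
    uint64_t construction_date;
    uint64_t expiration_date;
};

class CoherencyChecker {
public:
    // Returns false (violation) if the new datum expired before the
    // latest construction date of all data the core has used so far.
    bool observe(const ReturnedData& d) {
        bool ok = d.expiration_date >= latest_construction_;
        if (d.construction_date > latest_construction_)
            latest_construction_ = d.construction_date;
        return ok;
    }
private:
    uint64_t latest_construction_ = 0;
};

int main() {
    CoherencyChecker chk;
    std::cout << chk.observe({10, 20}) << " ";   // 1: fine
    std::cout << chk.observe({15, 30}) << " ";   // 1: fine
    std::cout << chk.observe({5, 12}) << "\n";   // 0: expired before 15
}
```
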
  • Patent number: 8082399
    Abstract: Cache bounded reference counting for computer languages having automated memory management in which, for example, a reference to an object “Z” initially stored in an object “O” is fetched and the cache hardware is queried whether the reference to the object “Z” is a valid reference, is in a cache, and has a continuity flag set to “on”. If the object “Z” is a valid reference, is in the cache, and has a continuity flag set to “on”, the object “O” is locked for an update, a reference counter is decremented for the object “Z” if the object “Z” resides in the cache, and a return code is set to zero to indicate that the object “Z” is de-referenced and that its storage memory can be released and re-used if the reference counter for the object “Z” reaches zero. Thereafter, the cache hardware is similarly queried regarding an object “N” that will become a new reference of object “O”.
    Type: Grant
    Filed: July 31, 2008
    Date of Patent: December 20, 2011
    Assignee: International Business Machines Corporation
    Inventors: Eberhard Pasch, Hans-Werner Tast, Achim Haessler, Markus Nosse, Elmar Zipp
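
A loose software sketch of the idea in patent 8082399 above: the reference counter of an object is only adjusted on the fast path when the simulated cache query reports the object as a valid, cached reference with its continuity flag on; otherwise the update falls back to a slower path. The `cacheBoundedDecRef` function and the simulated cache state are assumptions, not the patent's hardware interface.

```cpp
#include <iostream>

// Simulated per-object cache state as the abstract's hardware query
// would report it: present in the cache and continuity flag set to "on".
struct CacheState {
    bool in_cache = false;
    bool continuity = false;
};

struct Object {
    int refcount = 0;
    CacheState cache;   // stands in for the cache hardware query
};

// Cache-bounded decrement: only touch the counter if the reference is
// valid, the object is in the cache, and its continuity flag is on.
// Returns 0 when the object became unreferenced and may be reclaimed,
// 1 when it is still referenced, -1 when the fast path does not apply.
int cacheBoundedDecRef(Object* z) {
    if (z == nullptr || !z->cache.in_cache || !z->cache.continuity)
        return -1;                       // fall back to the slow path
    if (--z->refcount == 0)
        return 0;                        // storage can be released/reused
    return 1;
}

int main() {
    Object z;
    z.refcount = 1;
    z.cache = {true, true};
    std::cout << cacheBoundedDecRef(&z) << "\n";   // 0: de-referenced
    Object n;                                      // not cached
    std::cout << cacheBoundedDecRef(&n) << "\n";   // -1: slow path
}
```
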
  • Publication number: 20110307666
    Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request and then determines the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, that state is changed such that a following cache access is processed at a higher speed than an access processed if the state remained unchanged.
    Type: Application
    Filed: June 14, 2011
    Publication date: December 15, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann