Patents by Inventor Christian Habermann
Christian Habermann has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230269253
Abstract: Embodiments of the present invention provide computer-implemented methods, computer program products and computer systems for enabling modifying a behavior of a consuming component. The method comprises requesting a smart client by a consuming component. The smart client is adapted to modify a control logic of the consuming component and to establish a network connection. Next, the consuming component executes the smart client and receives a modification for the smart client of the consuming component. Further, a computer-implemented method for enabling modifying a behavior of a consuming component by a serving component is disclosed. This method comprises, upon receiving a request for a smart client by the serving component, providing a smart client, transmitting the smart client, receiving service requests comprising procedure calls to be executed by the serving component, and, upon detecting a performance change by the serving component, sending a modification for the smart client.
Type: Application
Filed: February 24, 2022
Publication date: August 24, 2023
Inventors: Daniel Pittner, Martin Smolny, Christian Habermann, Silke Wastl, Michael Haide
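As a rough illustration of the flow this abstract describes, the sketch below shows a serving component handing out a "smart client" and later pushing a behavior modification to it when a performance change is detected. All names (SmartClient, ServingComponent, throttle_factor) and the push-based delivery are assumptions for the sketch, not details from the filing.

```python
from dataclasses import dataclass

@dataclass
class SmartClient:
    """Hypothetical smart client: wraps procedure calls and carries control
    logic (here, a simple throttle factor) that the serving component may
    modify later without redeploying the consuming component."""
    throttle_factor: float = 1.0

    def call(self, serving, procedure, *args):
        # The control logic lives in the client; the consuming component
        # simply executes whatever behavior the client currently has.
        if self.throttle_factor < 1.0:
            print(f"throttling request to {procedure}")
        return serving.execute(procedure, *args)

    def apply_modification(self, modification: dict):
        # A modification received from the serving component changes the
        # client's behavior in place.
        self.throttle_factor = modification.get("throttle_factor",
                                                self.throttle_factor)

class ServingComponent:
    def __init__(self):
        self.issued_clients = []

    def request_smart_client(self) -> SmartClient:
        client = SmartClient()
        self.issued_clients.append(client)
        return client

    def execute(self, procedure, *args):
        return f"result of {procedure}{args}"

    def on_performance_change(self, overloaded: bool):
        # Upon detecting a performance change, send a modification for
        # every smart client that was handed out earlier.
        modification = {"throttle_factor": 0.5 if overloaded else 1.0}
        for client in self.issued_clients:
            client.apply_modification(modification)

# Consuming-component side: request the smart client, execute it, and keep
# using it while the serving component adjusts its behavior remotely.
serving = ServingComponent()
client = serving.request_smart_client()
print(client.call(serving, "lookup", 42))
serving.on_performance_change(overloaded=True)
print(client.call(serving, "lookup", 43))
```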
-
Patent number: 11316947
Abstract: A method, computer system, and a computer program product for execution of a stateless service on a node in a workload execution environment is provided. The present invention may include defining for each node a workload container including a cache component of a cache-mesh. The present invention may include, upon receiving a state request from a stateless requesting service from one of the cache components of the cache-mesh in an execution container, determining whether a requested state is present in the cache component of a related execution container. The present invention may include, upon a cache miss, broadcasting the state request to other cache components of the cache-mesh, determining, by the other cache components, whether the requested state is present in respective caches, and upon any cache component identifying the requested state, sending the requested state to the requesting service using a protocol for communication.
Type: Grant
Filed: March 30, 2020
Date of Patent: April 26, 2022
Assignee: International Business Machines Corporation
Inventors: Sven Sterbling, Christian Habermann, Sachin Lingadahalli Vittal
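A minimal sketch of the cache-mesh idea in this abstract: each node's workload container holds a cache component, and a miss in the local component is resolved by broadcasting the state request to the other components of the mesh. Class and method names are illustrative assumptions, and plain in-process method calls stand in for the communication protocol.

```python
class CacheComponent:
    """Hypothetical cache component of a cache-mesh; one instance would
    live in each node's workload container."""
    def __init__(self, name):
        self.name = name
        self.local = {}      # locally cached states
        self.peers = []      # the other cache components of the mesh

    def put(self, key, state):
        self.local[key] = state

    def get(self, key):
        # Serve from the local cache when the requested state is present.
        if key in self.local:
            return self.local[key]
        # Cache miss: broadcast the state request to the other components.
        for peer in self.peers:
            if key in peer.local:
                # A peer identified the requested state; hand it to the
                # requester (a real mesh would use a network protocol here).
                self.local[key] = peer.local[key]
                return peer.local[key]
        return None

def build_mesh(components):
    # Wire every cache component to all the others.
    for c in components:
        c.peers = [p for p in components if p is not c]

node_a, node_b = CacheComponent("a"), CacheComponent("b")
build_mesh([node_a, node_b])
node_b.put("session-123", {"user": "alice"})
# The stateless service on node a asks its local cache component; the miss
# is resolved by the broadcast to node b.
print(node_a.get("session-123"))
```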
-
Publication number: 20210306438
Abstract: A method, computer system, and a computer program product for execution of a stateless service on a node in a workload execution environment is provided. The present invention may include defining for each node a workload container including a cache component of a cache-mesh. The present invention may include, upon receiving a state request from a stateless requesting service from one of the cache components of the cache-mesh in an execution container, determining whether a requested state is present in the cache component of a related execution container. The present invention may include, upon a cache miss, broadcasting the state request to other cache components of the cache-mesh, determining, by the other cache components, whether the requested state is present in respective caches, and upon any cache component identifying the requested state, sending the requested state to the requesting service using a protocol for communication.
Type: Application
Filed: March 30, 2020
Publication date: September 30, 2021
Inventors: Sven Sterbling, Christian Habermann, Sachin Lingadahalli Vittal
-
Patent number: 11099919
Abstract: Methods testing a data coherency algorithm via a simulated multi-processor environment are provided, which include implementing: (i) a transactional footprint keeping the address of each cache line used by the processor core, (ii) a reference model operating on and keeping a set of timestamps for a cache line, the set including a construction date representing a global timestamp when new data arrives at a private cache hierarchy and an expiration date representing another global timestamp when a cross-invalidation hits the private cache hierarchy, (iii) a core observed timestamp representing a global timestamp of an oldest construction date of data used before, and (iv) interface events monitoring instruction sequences guaranteed by transactional execution to ensure atomicity of a transaction. Upon detecting a transaction end event and finding a cache line of the transactional footprint having an expiration date older than or equal to a core observed time, an error is reported.
Type: Grant
Filed: January 6, 2020
Date of Patent: August 24, 2021
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
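The check described in this abstract can be pictured as a small reference model: each cache line carries a construction date and an expiration date in global time, the core observed timestamp is derived from the construction dates of data the core has used, and at transaction end any footprint line whose expiration date is older than or equal to the core observed time is reported as an error. The sketch below is a simplified illustration under stated assumptions (in particular, it updates the core observed timestamp to the newest construction date seen, which is an assumption of this sketch rather than the patented rule), with invented names throughout.

```python
class CoherencyChecker:
    """Rough sketch of the transaction-end coherency check; names and the
    exact timestamp update rule are simplifying assumptions."""
    def __init__(self):
        self.global_time = 0          # incremented every simulated cycle
        self.construction = {}        # addr -> time data arrived in the private hierarchy
        self.expiration = {}          # addr -> time a cross-invalidation hit
        self.footprint = set()        # addresses used inside the transaction
        self.core_observed_time = 0   # derived from construction dates of used data

    def tick(self):
        self.global_time += 1

    def data_arrives(self, addr):
        self.construction[addr] = self.global_time
        self.expiration.pop(addr, None)

    def cross_invalidation(self, addr):
        self.expiration[addr] = self.global_time

    def core_uses(self, addr):
        self.footprint.add(addr)
        # Assumption for this sketch: track the newest construction date
        # among the data the core has consumed so far.
        self.core_observed_time = max(self.core_observed_time,
                                      self.construction[addr])

    def transaction_end(self):
        # Report an error for any line of the transactional footprint whose
        # expiration date is older than or equal to the core observed time.
        errors = [a for a in self.footprint
                  if a in self.expiration
                  and self.expiration[a] <= self.core_observed_time]
        self.footprint.clear()
        return errors

# Example: line B is used, then cross-invalidated, and then line A whose
# data only arrived after that invalidation is used in the same transaction.
chk = CoherencyChecker()
chk.data_arrives("B"); chk.core_uses("B"); chk.tick()
chk.cross_invalidation("B"); chk.tick()
chk.data_arrives("A"); chk.core_uses("A")
print(chk.transaction_end())   # -> ['B'], the reported atomicity violation
```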
-
Publication number: 20200142767
Abstract: Methods testing a data coherency algorithm via a simulated multi-processor environment are provided, which include implementing: (i) a transactional footprint keeping the address of each cache line used by the processor core, (ii) a reference model operating on and keeping a set of timestamps for a cache line, the set including a construction date representing a global timestamp when new data arrives at a private cache hierarchy and an expiration date representing another global timestamp when a cross-invalidation hits the private cache hierarchy, (iii) a core observed timestamp representing a global timestamp of an oldest construction date of data used before, and (iv) interface events monitoring instruction sequences guaranteed by transactional execution to ensure atomicity of a transaction. Upon detecting a transaction end event and finding a cache line of the transactional footprint having an expiration date older than or equal to a core observed time, an error is reported.
Type: Application
Filed: January 6, 2020
Publication date: May 7, 2020
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Patent number: 10558510
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Grant
Filed: January 15, 2018
Date of Patent: February 11, 2020
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Publication number: 20180136998
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Application
Filed: January 15, 2018
Publication date: May 17, 2018
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Patent number: 9959155
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Grant
Filed: June 29, 2016
Date of Patent: May 1, 2018
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Patent number: 9928127
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Grant
Filed: January 29, 2016
Date of Patent: March 27, 2018
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Publication number: 20170220437
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Application
Filed: January 29, 2016
Publication date: August 3, 2017
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Publication number: 20170220439
Abstract: Testing a data coherency algorithm of a multi-processor environment. The testing includes implementing a global time incremented every processor cycle and used for timestamping; implementing a transactional execution flag representing a processor core guaranteeing the atomicity and coherency of the currently executed instructions; implementing a transactional footprint, which keeps the address of each cache line that was used by the processor core; implementing a reference model, which operates on every cache line and keeps a set of timestamps for every cache line; implementing a core observed timestamp representing a global timestamp, which is the oldest construction date of data used before; implementing interface events; and reporting an error whenever a transaction end event is detected and any cache line is found in the transactional footprint with an expiration date that is older than or equal to the core observed time.
Type: Application
Filed: June 29, 2016
Publication date: August 3, 2017
Inventors: Christian Habermann, Gerrit Koch, Martin Recktenwald, Ralf Winkelmann
-
Patent number: 9665486
Abstract: A hierarchical cache structure comprises at least one higher level cache comprising a unified cache array for data and instructions and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache and a data cache of a split second level cache are connected to a third level cache; and an instruction cache of a split first level cache is connected to the instruction cache of the split second level cache, and a data cache of the split first level cache is connected to the instruction cache and the data cache of the split second level cache.
Type: Grant
Filed: April 7, 2016
Date of Patent: May 30, 2017
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
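This abstract is essentially a wiring description of the cache hierarchy: a unified third level, a split second level with both halves attached to it, and a split first level whose data cache is attached to both halves of the second level. A small sketch of that topology, with purely illustrative class and attribute names:

```python
class Cache:
    def __init__(self, name, sources=()):
        self.name = name
        # 'sources' are the next-level caches this cache fetches from.
        self.sources = list(sources)

# Unified higher level (third level) cache for data and instructions.
l3 = Cache("L3 unified")

# Split second level: both the instruction and the data cache are
# connected to the third level cache.
l2_instr = Cache("L2 instruction", sources=[l3])
l2_data  = Cache("L2 data", sources=[l3])

# Split first level: the instruction cache is connected to the L2
# instruction cache, while the data cache is connected to both the L2
# instruction cache and the L2 data cache.
l1_instr = Cache("L1 instruction", sources=[l2_instr])
l1_data  = Cache("L1 data", sources=[l2_instr, l2_data])

for cache in (l1_instr, l1_data, l2_instr, l2_data):
    print(cache.name, "->", [s.name for s in cache.sources])
```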
-
Patent number: 9563568
Abstract: A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory.
Type: Grant
Filed: November 9, 2015
Date of Patent: February 7, 2017
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
-
Publication number: 20160224467
Abstract: A hierarchical cache structure comprises at least one higher level cache comprising a unified cache array for data and instructions and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache and a data cache of a split second level cache are connected to a third level cache; and an instruction cache of a split first level cache is connected to the instruction cache of the split second level cache, and a data cache of the split first level cache is connected to the instruction cache and the data cache of the split second level cache.
Type: Application
Filed: April 7, 2016
Publication date: August 4, 2016
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
-
Patent number: 9384131
Abstract: Systems and methods for providing data from a cache memory to requestors includes a number of cache memory levels arranged in a hierarchy. The method includes receiving a request for fetching data from the cache memory and determining one or more addresses in a cache memory level which is one level higher than a current cache memory level using one or more prediction algorithms. Further, the method includes pre-fetching the one or more addresses from the high cache memory level and determining if the data is available in the addresses. If data is available in the one or more addresses then data is fetched from the high cache level, else addresses of a next level which is higher than the high cache memory level are determined and pre-fetched. Furthermore, the method includes providing the fetched data to the requestor.
Type: Grant
Filed: March 15, 2013
Date of Patent: July 5, 2016
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Christian Jacobi, Sascha Junghans, Martin Recktenwald, Hans-Werner Tast
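A compressed sketch of the lookup-with-prediction loop this abstract outlines: starting at the level above the requestor, candidate addresses are predicted and pre-fetched, and if the requested data is not among them the search moves to the next higher level. The prediction algorithms themselves are not specified in the abstract, so a caller-supplied function stands in for them; all names are assumptions of this sketch.

```python
def fetch(addr, levels, predict):
    """Illustrative only. 'levels' is a list of dicts ordered from the
    level just above the requestor upward; 'predict' stands in for the
    (unspecified) prediction algorithms and returns candidate addresses
    for a given level."""
    for level in levels:
        # Prediction proposes which addresses in this level may hold the
        # requested data (and data likely to be needed next).
        candidates = predict(addr, level)
        # Pre-fetch the predicted addresses that are present in this level.
        prefetched = {a: level[a] for a in candidates if a in level}
        if addr in prefetched:
            # Data available at this level: provide it to the requestor.
            return prefetched[addr]
        # Otherwise continue with the next higher cache memory level.
    return None

# Minimal usage: the line is absent from L2 but present in L3.
l2 = {0x100: "a"}
l3 = {0x100: "a", 0x140: "b"}
print(fetch(0x140, [l2, l3], lambda a, lvl: [a, a + 0x40]))   # -> b
```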
-
Patent number: 9323673
Abstract: A hierarchical cache structure comprises at least one higher level cache comprising a unified cache array for data and instructions and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache and a data cache of a split second level cache are connected to a third level cache; and an instruction cache of a split first level cache is connected to the instruction cache of the split second level cache, and a data cache of the split first level cache is connected to the instruction cache and the data cache of the split second level cache.
Type: Grant
Filed: November 4, 2013
Date of Patent: April 26, 2016
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
-
Publication number: 20160062905
Abstract: A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory.
Type: Application
Filed: November 9, 2015
Publication date: March 3, 2016
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
-
Patent number: 9274959
Abstract: Handling virtual memory address synonyms in a multi-level cache hierarchy structure. The multi-level cache hierarchy structure having a first level, L1 cache, the L1 cache being operatively connected to a second level, L2 cache split into a L2 data cache directory and a L2 instruction cache. The L2 data cache directory including directory entries having information of data currently stored in the L1 cache, the L2 cache being operatively connected to a third level, L3 cache. The first level cache is virtually indexed while the second and third levels are physically indexed. Counter bits are allocated in a directory entry of the L2 data cache directory for storing a counter number. The directory entry corresponds to at least one first L1 cache line. A first search is performed in the L1 cache for a requested virtual memory address, wherein the virtual memory address corresponds to a physical memory address tag at a second L1 cache line.
Type: Grant
Filed: July 18, 2014
Date of Patent: March 1, 2016
Assignee: GLOBALFOUNDRIES Inc.
Inventors: Christian Habermann, Christian Jacobi, Gerrit Koch, Martin Recktenwald, Hans-Werner Tast
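The synonym problem the abstract addresses arises when two different virtual addresses, and thus two different virtually computed L1 indexes, map to the same physical line; the physically indexed L2 data cache directory, which knows what the L1 currently holds, can detect this and keep a small counter per entry. The sketch below illustrates only that detection step under assumed names and semantics; it does not reproduce the patented handling.

```python
class SynonymTracker:
    """Illustrative sketch: an L2 data-cache directory entry records, per
    physical line, which virtually indexed L1 line holds it plus a small
    counter, so a synonym (a second virtual address mapping to the same
    physical address) can be detected on an L1 miss."""
    def __init__(self, counter_bits=2):
        self.max_count = (1 << counter_bits) - 1
        self.entries = {}   # physical address -> {"l1_index": ..., "counter": ...}

    def l1_miss(self, physical_addr, l1_index):
        entry = self.entries.get(physical_addr)
        if entry is None:
            # Line not tracked yet: install it for this L1 index.
            self.entries[physical_addr] = {"l1_index": l1_index, "counter": 0}
            return "install"
        if entry["l1_index"] != l1_index:
            # Same physical line, different virtual index: a synonym.
            # The counter records how often this entry hits synonyms.
            entry["counter"] = min(entry["counter"] + 1, self.max_count)
            entry["l1_index"] = l1_index
            return "synonym"
        return "refill"

tracker = SynonymTracker()
print(tracker.l1_miss(0x2000, l1_index=5))   # install
print(tracker.l1_miss(0x2000, l1_index=9))   # synonym: second virtual index, same line
```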
-
Patent number: 9183146
Abstract: A hierarchical cache structure includes at least one real indexed higher level cache with a directory and a unified cache array for data and instructions, and at least two lower level caches, each split in an instruction cache and a data cache. An instruction cache of a split real indexed second level cache includes a directory and a corresponding cache array connected to the real indexed third level cache. A data cache of the split second level cache includes a directory connected to the third level cache. An instruction cache of a split virtually indexed first level cache is connected to the second level instruction cache. A cache array of a data cache of the first level cache is connected to the cache array of the second level instruction cache and to the cache array of the third level cache. A directory of the first level data cache is connected to the second level instruction cache directory and to the third level cache directory.
Type: Grant
Filed: November 4, 2013
Date of Patent: November 10, 2015
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Christian Jacobi, Martin Recktenwald, Hans-Werner Tast
-
Patent number: 9075732
Abstract: Data caching for use in a computer system including a lower cache memory and a higher cache memory. The higher cache memory receives a fetch request. It is then determined by the higher cache memory the state of the entry to be replaced next. If the state of the entry to be replaced next indicates that the entry is exclusively owned or modified, the state of the entry to be replaced next is changed such that a following cache access is processed at a higher speed compared to an access processed if the state would stay unchanged.
Type: Grant
Filed: June 14, 2011
Date of Patent: July 7, 2015
Assignee: International Business Machines Corporation
Inventors: Christian Habermann, Martin Recktenwald, Hans-Werner Tast, Ralf Winkelmann
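A minimal sketch of the idea in this abstract: when a fetch request arrives, the entry that would be replaced next is inspected, and if it is exclusively owned or modified its state is demoted early (writing dirty data out if needed) so that the later access is not slowed down by that work at replacement time. The function, the state names, and the write-back callback are assumptions of the sketch, not details of the patented design.

```python
def handle_fetch_request(cache_states, write_back, victim_addr):
    """Illustrative only: demote the next replacement victim ahead of
    time when its state is exclusive or modified."""
    state = cache_states.get(victim_addr)
    if state in ("exclusive", "modified"):
        if state == "modified":
            write_back(victim_addr)           # push the dirty data out early
        cache_states[victim_addr] = "shared"  # cheaper to replace later

# Minimal usage with a dict standing in for the cache's state directory.
states = {"0x40": "modified"}
handle_fetch_request(states, lambda addr: print("write back", addr), "0x40")
print(states["0x40"])   # -> shared
```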