Patents by Inventor Daniel J. Colglazier
Daniel J. Colglazier has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11030115
  Abstract: An apparatus for using a dataless cache entry includes a cache memory and a cache controller configured to identify a first cache entry in cache memory as a potential cache entry to be replaced according to a cache replacement algorithm, compare a data value of the first cache entry to a predefined value, and write a memory address tag and state bits of the first cache entry to a dataless cache entry in response to the data value of the first cache entry matching the predefined value, wherein the dataless cache entry in the cache memory stores a memory address tag and state bits associated with the memory address, wherein the dataless cache entry represents the predefined value, and wherein the dataless cache entry occupies fewer bits than the first cache entry.
  Type: Grant
  Filed: June 27, 2019
  Date of Patent: June 8, 2021
  Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
  Inventor: Daniel J. Colglazier
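The abstract above describes keeping a compact entry, holding only the tag and state bits, when a line's data equals a predefined value. Below is a minimal Python sketch of that idea; the class names, the all-zeros choice of predefined value, and the conversion helper are illustrative assumptions rather than the patented design.

```python
# Illustrative sketch of a "dataless" cache entry: when a victim line's data
# equals a predefined value, only its tag and state bits are retained.
PREDEFINED_VALUE = 0  # assumed predefined value (e.g., an all-zero line)

class FullEntry:
    def __init__(self, tag, state, data):
        self.tag, self.state, self.data = tag, state, data

class DatalessEntry:
    """Keeps tag and state only; the data is implied to be PREDEFINED_VALUE."""
    def __init__(self, tag, state):
        self.tag, self.state = tag, state

    @property
    def data(self):
        return PREDEFINED_VALUE  # value is reconstructed, not stored

def maybe_convert_to_dataless(victim: FullEntry):
    """If the victim selected by the replacement algorithm holds the
    predefined value, preserve it as a dataless entry instead of dropping it."""
    if victim.data == PREDEFINED_VALUE:
        return DatalessEntry(victim.tag, victim.state)
    return None  # no conversion; the victim is simply replaced

if __name__ == "__main__":
    victim = FullEntry(tag=0x1A2B, state="shared", data=0)
    compact = maybe_convert_to_dataless(victim)
    print(type(compact).__name__, hex(compact.tag), compact.state, compact.data)
```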
- Publication number: 20200409867
  Abstract: An apparatus for using a dataless cache entry includes a cache memory and a cache controller configured to identify a first cache entry in cache memory as a potential cache entry to be replaced according to a cache replacement algorithm, compare a data value of the first cache entry to a predefined value, and write a memory address tag and state bits of the first cache entry to a dataless cache entry in response to the data value of the first cache entry matching the predefined value, wherein the dataless cache entry in the cache memory stores a memory address tag and state bits associated with the memory address, wherein the dataless cache entry represents the predefined value, and wherein the dataless cache entry occupies fewer bits than the first cache entry.
  Type: Application
  Filed: June 27, 2019
  Publication date: December 31, 2020
  Inventor: Daniel J. Colglazier
- Patent number: 10853267
  Abstract: A method of managing a direct-mapped cache is provided. The method includes a direct-mapped cache receiving memory references indexed to a particular cache line, using a first cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference, and using a second cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference. The method further includes identifying, over a plurality of most recently received memory references, which one of the algorithms has selected a main memory block that matches a next memory reference a greater number of times, and storing a block of main memory in the cache line, wherein the block of main memory stored in the cache line is the main memory block selected by the identified algorithm.
  Type: Grant
  Filed: June 14, 2016
  Date of Patent: December 1, 2020
  Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
  Inventor: Daniel J. Colglazier
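As a rough illustration of the competition described above, the sketch below runs two toy replacement policies side by side on one direct-mapped line and follows whichever policy has predicted recent references correctly more often. The two example policies, the window length, and the scoring scheme are assumptions for illustration only.

```python
# Sketch: two replacement policies compete for one direct-mapped cache line.
from collections import deque

WINDOW = 8  # assumed length of the recent-reference history

def newest_block(history):
    """Policy A (assumed example): keep the block referenced most recently."""
    return history[-1]

def most_frequent_block(history):
    """Policy B (assumed example): keep the block referenced most often recently."""
    return max(set(history), key=history.count)

class CompetingDirectMappedLine:
    def __init__(self):
        self.history = deque(maxlen=WINDOW)
        self.a_correct = deque(maxlen=WINDOW)  # 1 when policy A's candidate matched the next reference
        self.b_correct = deque(maxlen=WINDOW)
        self.pending = None                    # candidates awaiting the next reference
        self.stored_block = None

    def reference(self, block):
        # Score the candidates chosen last time against this ("next") reference.
        if self.pending is not None:
            cand_a, cand_b = self.pending
            self.a_correct.append(1 if cand_a == block else 0)
            self.b_correct.append(1 if cand_b == block else 0)
        self.history.append(block)
        cand_a, cand_b = newest_block(self.history), most_frequent_block(self.history)
        self.pending = (cand_a, cand_b)
        # Store the candidate from whichever policy has matched more often recently.
        winner = cand_a if sum(self.a_correct) >= sum(self.b_correct) else cand_b
        self.stored_block = winner
        return self.stored_block

if __name__ == "__main__":
    line = CompetingDirectMappedLine()
    for blk in [1, 2, 1, 1, 3, 1, 2, 1]:
        print(blk, "->", line.reference(blk))
```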
- Patent number: 10387330
  Abstract: Apparatuses, systems, methods, and program products are disclosed for cache replacement. An apparatus includes a cache memory structure, a processor, and memory that stores code executable by the processor. The code is executable by the processor to receive a value to be stored in the cache memory structure, identify, in response to determining that the received value is not currently stored in an entry of the cache memory structure, a least recently used (“LRU”) set of entries of the cache memory structure where the received value can be stored, and select a least frequently used (“LFU”) entry of the identified LRU set of entries for storing the received value.
  Type: Grant
  Filed: April 30, 2018
  Date of Patent: August 20, 2019
  Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
  Inventor: Daniel J. Colglazier
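A small sketch of the hybrid selection described above follows: on a miss, pick the least recently used set of entries, then evict the least frequently used entry within it. The set and way counts and the timestamp-and-counter bookkeeping are assumed details, not the claimed implementation.

```python
# Sketch: LRU across sets, LFU within the chosen set.
import time

class Entry:
    def __init__(self, value=None):
        self.value = value
        self.freq = 0              # LFU bookkeeping per entry

class CacheSet:
    def __init__(self, ways):
        self.entries = [Entry() for _ in range(ways)]
        self.last_used = 0.0       # LRU bookkeeping per set

class SetLRUEntryLFUCache:
    def __init__(self, num_sets=4, ways=4):
        self.sets = [CacheSet(ways) for _ in range(num_sets)]

    def _find(self, value):
        for cache_set in self.sets:
            for entry in cache_set.entries:
                if entry.value == value:
                    return cache_set, entry
        return None, None

    def access(self, value):
        cache_set, entry = self._find(value)
        if entry is not None:                                   # hit
            entry.freq += 1
            cache_set.last_used = time.monotonic()
            return "hit"
        # Miss: choose the LRU set, then the LFU entry inside it.
        lru_set = min(self.sets, key=lambda cs: cs.last_used)
        victim = min(lru_set.entries, key=lambda e: e.freq)
        victim.value, victim.freq = value, 1
        lru_set.last_used = time.monotonic()
        return "miss"

if __name__ == "__main__":
    cache = SetLRUEntryLFUCache(num_sets=2, ways=2)
    for v in ["a", "b", "a", "c", "d", "a"]:
        print(v, cache.access(v))
```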
- Patent number: 10176118
  Abstract: A method includes storing a first block of main memory in a cache line of a direct-mapped cache, storing a first tag in a current tag field of the cache line, wherein the first tag identifies a first memory address for the first block of main memory, and storing a second tag in a previous miss tag field of the cache line in response to receiving a memory reference having a tag that does not match the tag stored in the current tag field. The second tag identifies a second memory address for a second block of main memory, and the first and second blocks are both mapped to the cache line. The method may further include storing a binary value in a last reference bit field to indicate whether the most recently received memory reference was directed to the current tag field or previous miss tag field.
  Type: Grant
  Filed: March 31, 2016
  Date of Patent: January 8, 2019
  Assignee: Lenovo Enterprise Solutions (Singapore) Pte. Ltd.
  Inventor: Daniel J. Colglazier
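The sketch below models a single direct-mapped line with the three fields the abstract mentions: a current tag, a previous-miss tag, and a last-reference bit. Installing the missing block immediately is a simplification; how a replacement policy would actually use the recorded fields is left out.

```python
class TrackingCacheLine:
    """One direct-mapped cache line with a current tag field, a previous-miss
    tag field, and a last-reference bit, loosely following the abstract above."""
    def __init__(self):
        self.current_tag = None
        self.current_block = None
        self.previous_miss_tag = None
        self.last_ref_to_current = True   # the "last reference bit"

    def reference(self, tag, load_block):
        if tag == self.current_tag:
            self.last_ref_to_current = True
            return self.current_block                   # hit on the current tag
        # The reference missed the current tag: record its tag and note that
        # the latest reference was directed at the previous-miss tag field.
        self.previous_miss_tag = tag
        self.last_ref_to_current = False
        # Simplification: install the missing block right away. A real policy
        # could instead use the recorded fields to decide whether to replace.
        self.current_tag = tag
        self.current_block = load_block(tag)
        return self.current_block

if __name__ == "__main__":
    line = TrackingCacheLine()
    load = lambda t: f"block-{t:#x}"
    for t in (0x10, 0x20, 0x10):
        line.reference(t, load)
        print(f"ref {t:#x}: current={line.current_tag:#x} "
              f"prev_miss={line.previous_miss_tag:#x} "
              f"last_ref_to_current={line.last_ref_to_current}")
```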
- Publication number: 20170357598
  Abstract: A method of managing a direct-mapped cache is provided. The method includes a direct-mapped cache receiving memory references indexed to a particular cache line, using a first cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference, and using a second cache line replacement algorithm to select a main memory block as a candidate for storage in the cache line in response to each memory reference. The method further includes identifying, over a plurality of most recently received memory references, which one of the algorithms has selected a main memory block that matches a next memory reference a greater number of times, and storing a block of main memory in the cache line, wherein the block of main memory stored in the cache line is the main memory block selected by the identified algorithm.
  Type: Application
  Filed: June 14, 2016
  Publication date: December 14, 2017
  Inventor: Daniel J. Colglazier
- Publication number: 20170286317
  Abstract: A method includes storing a first block of main memory in a cache line of a direct-mapped cache, storing a first tag in a current tag field of the cache line, wherein the first tag identifies a first memory address for the first block of main memory, and storing a second tag in a previous miss tag field of the cache line in response to receiving a memory reference having a tag that does not match the tag stored in the current tag field. The second tag identifies a second memory address for a second block of main memory, and the first and second blocks are both mapped to the cache line. The method may further include storing a binary value in a last reference bit field to indicate whether the most recently received memory reference was directed to the current tag field or previous miss tag field.
  Type: Application
  Filed: March 31, 2016
  Publication date: October 5, 2017
  Inventor: Daniel J. Colglazier
- Patent number: 8838909
  Abstract: A method, system, and computer program product for providing lines of data from shared resources to caching agents are provided. The method, system, and computer program product provide for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
  Type: Grant
  Filed: July 9, 2007
  Date of Patent: September 16, 2014
  Assignee: International Business Machines Corporation
  Inventors: Daniel J. Colglazier, Marcus L. Kornegay, Ngan N. Pham, Cristian G. Rojas
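For a concrete picture of assigning an initial coherency state when a line is handed to a caching agent, here is a small sketch using the familiar MESI state names. The selection heuristic (write intent and presence of other sharers) is an assumed example; the abstract only requires that any of the states be assignable as the initial state.

```python
from enum import Enum

class Coherency(Enum):
    MODIFIED = "M"
    EXCLUSIVE = "E"
    SHARED = "S"
    INVALID = "I"

def provide_line(data, other_sharers, intends_to_write):
    """Return the requested line together with its assigned initial state."""
    if intends_to_write:
        state = Coherency.MODIFIED     # assumed: other copies get invalidated elsewhere
    elif other_sharers:
        state = Coherency.SHARED
    else:
        state = Coherency.EXCLUSIVE
    return data, state

if __name__ == "__main__":
    print(provide_line(b"\x00" * 64, other_sharers=False, intends_to_write=False))
    print(provide_line(b"\x00" * 64, other_sharers=True, intends_to_write=False))
    print(provide_line(b"\x00" * 64, other_sharers=True, intends_to_write=True))
```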
- Publication number: 20140208038
  Abstract: A sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization.
  Type: Application
  Filed: March 26, 2014
  Publication date: July 24, 2014
  Applicant: International Business Machines Corporation
  Inventor: Daniel J. Colglazier
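The victim-selection rule in the abstract is concrete enough to sketch directly: never the most recently used sector, prefer the least modified data, break ties on the least valid data, and break remaining ties at random. The Sector fields below are illustrative assumptions about how that bookkeeping might be kept.

```python
import random
from dataclasses import dataclass

@dataclass
class Sector:
    sector_id: int
    modified_subblocks: int   # amount of modified data in the sector
    valid_subblocks: int      # amount of valid data in the sector

def select_victim(sectors, mru_id):
    candidates = [s for s in sectors if s.sector_id != mru_id]  # never the MRU sector
    least_modified = min(s.modified_subblocks for s in candidates)
    candidates = [s for s in candidates if s.modified_subblocks == least_modified]
    least_valid = min(s.valid_subblocks for s in candidates)
    candidates = [s for s in candidates if s.valid_subblocks == least_valid]
    return random.choice(candidates)   # remaining ties: random

if __name__ == "__main__":
    sectors = [Sector(0, 3, 4), Sector(1, 1, 2), Sector(2, 1, 4), Sector(3, 0, 1)]
    print("victim sector:", select_victim(sectors, mru_id=3).sector_id)
```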
- Patent number: 8745334
  Abstract: An improved sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the improved algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization.
  Type: Grant
  Filed: June 17, 2009
  Date of Patent: June 3, 2014
  Assignee: International Business Machines Corporation
  Inventor: Daniel J. Colglazier
- Publication number: 20140136785
  Abstract: Embodiments of the present invention provide a method, system and computer program product for enhanced cache coordination in a multi-level cache. In an embodiment of the invention, a method for enhanced cache coordination in a multi-level cache is provided. The method includes receiving a processor memory request to access data in a multi-level cache and servicing the processor memory request with data in either an L1 cache or an L2 cache of the multi-level cache. The method additionally includes marking a cache line in the L1 cache and also a corresponding cache line in the L2 cache as most recently used responsive to determining that the processor memory request is serviced from the cache line in the L1 cache and that the cache line in the L1 cache is not currently marked most recently used.
  Type: Application
  Filed: December 3, 2012
  Publication date: May 15, 2014
  Applicant: International Business Machines Corporation
  Inventor: Daniel J. Colglazier
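A toy version of the coordination described above is sketched below: an L1 hit on a line that is not already most recently used marks both the L1 line and the corresponding L2 line as MRU. The single-MRU-marker-per-level bookkeeping and the inclusive fill on a miss are simplifying assumptions.

```python
class Line:
    def __init__(self, addr):
        self.addr = addr
        self.mru = False

class TwoLevelCache:
    """Toy two-level cache that keeps a single MRU marker per level."""
    def __init__(self):
        self.l1 = {}   # addr -> Line
        self.l2 = {}   # addr -> Line

    def _mark_mru(self, level, addr):
        for line in level.values():
            line.mru = (line.addr == addr)

    def access(self, addr):
        if addr in self.l1:
            # Only touch the MRU state when the L1 line is not already MRU,
            # and then update both levels so they age together.
            if not self.l1[addr].mru:
                self._mark_mru(self.l1, addr)
                if addr in self.l2:
                    self._mark_mru(self.l2, addr)
            return "L1 hit"
        if addr in self.l2:
            self.l1[addr] = Line(addr)
            self._mark_mru(self.l1, addr)
            self._mark_mru(self.l2, addr)
            return "L2 hit"
        self.l1[addr] = Line(addr)          # miss: fill both levels (inclusive cache assumed)
        self.l2[addr] = Line(addr)
        self._mark_mru(self.l1, addr)
        self._mark_mru(self.l2, addr)
        return "miss"

if __name__ == "__main__":
    cache = TwoLevelCache()
    for a in (0x40, 0x80, 0x40, 0x40):
        print(hex(a), cache.access(a))
```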
- Publication number: 20140136784
  Abstract: Embodiments of the present invention provide a method, system and computer program product for enhanced cache coordination in a multi-level cache. In an embodiment of the invention, a method for enhanced cache coordination in a multi-level cache is provided. The method includes receiving a processor memory request to access data in a multi-level cache and servicing the processor memory request with data in either an L1 cache or an L2 cache of the multi-level cache. The method additionally includes marking a cache line in the L1 cache and also a corresponding cache line in the L2 cache as most recently used responsive to determining that the processor memory request is serviced from the cache line in the L1 cache and that the cache line in the L1 cache is not currently marked most recently used.
  Type: Application
  Filed: November 9, 2012
  Publication date: May 15, 2014
  Applicant: International Business Machines Corporation
  Inventor: Daniel J. Colglazier
- Patent number: 8131943
  Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and testing a system for providing lines of data from shared resources to caching agents are provided. The system provides for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
  Type: Grant
  Filed: May 4, 2008
  Date of Patent: March 6, 2012
  Assignee: International Business Machines Corporation
  Inventors: Daniel J. Colglazier, Marcus L. Kornegay, Ngan N. Pham, Cristian G. Rojas
- Publication number: 20100325365
  Abstract: An improved sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the improved algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization.
  Type: Application
  Filed: June 17, 2009
  Publication date: December 23, 2010
  Applicant: International Business Machines Corporation
  Inventor: Daniel J. Colglazier
- Publication number: 20090019230
  Abstract: A method, system, and computer program product for providing lines of data from shared resources to caching agents are provided. The method, system, and computer program product provide for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
  Type: Application
  Filed: July 9, 2007
  Publication date: January 15, 2009
  Inventors: Daniel J. Colglazier, Marcus L. Kornegay, Ngan N. Pham, Cristian G. Rojas
- Publication number: 20090019233
  Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and testing a system for providing lines of data from shared resources to caching agents are provided. The system provides for receiving a request from a caching agent for a line of data stored in a shared resource, assigning one of a plurality of coherency states as an initial coherency state for the line of data, each of the plurality of coherency states being assignable as the initial coherency state for the line of data, and providing the line of data to the caching agent in the initial coherency state assigned to the line of data.
  Type: Application
  Filed: May 4, 2008
  Publication date: January 15, 2009
  Inventors: Daniel J. Colglazier, Marcus L. Kornegay, Ngan N. Pham, Cristian G. Rojas
- Publication number: 20080104323
  Abstract: The invention is directed to the identifying, tracking, and storing of hot cache lines in an SMP environment. A method in accordance with an embodiment of the present invention includes: accessing, by a first processor, a cache line from main memory; modifying and storing the cache line in the L2 cache of the first processor; requesting, by a second processor, the cache line; identifying, by the first processor, that the cache line stored in the L2 cache of the first processor has previously been modified; marking, by the first processor, the cache line as a hot cache line; forwarding the hot cache line to the second processor; modifying, by the second processor, the hot cache line; and storing the hot cache line in the hot cache of the second processor.
  Type: Application
  Filed: October 26, 2006
  Publication date: May 1, 2008
  Inventors: Daniel J. Colglazier, Marcus L. Kornegay, Ngan N. Pham, Jorge R. Rodriguez
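The flow in this abstract can be sketched as a short exchange between two processors: a previously modified line is marked hot before it is forwarded, and the receiver keeps hot lines in a separate hot cache. All class and attribute names here are illustrative assumptions, not the published design.

```python
class CacheLine:
    def __init__(self, addr, data):
        self.addr, self.data = addr, data
        self.modified = False
        self.hot = False

class Processor:
    def __init__(self, name):
        self.name = name
        self.l2 = {}         # addr -> CacheLine
        self.hot_cache = {}  # addr -> CacheLine (assumed separate structure)

    def load_and_modify(self, addr, data):
        line = CacheLine(addr, data)
        line.modified = True
        self.l2[addr] = line
        return line

    def respond_to_request(self, addr):
        line = self.l2[addr]
        if line.modified:    # previously modified -> mark as hot before forwarding
            line.hot = True
        return line

    def receive(self, line, new_data):
        line.data = new_data
        line.modified = True
        target = self.hot_cache if line.hot else self.l2
        target[line.addr] = line

if __name__ == "__main__":
    p1, p2 = Processor("P1"), Processor("P2")
    p1.load_and_modify(0x100, "v0")
    forwarded = p1.respond_to_request(0x100)   # P2 requests the line
    p2.receive(forwarded, "v1")
    print("hot?", forwarded.hot, "| in P2 hot cache:", 0x100 in p2.hot_cache)
```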
- Patent number: 6715035
  Abstract: A cache for use in a memory controller, which processes data in a computer system having at least one processor, and a method for processing data utilizing a cache, are disclosed. The cache comprises a first array such as a tag array, a second array such as a data array, and a pointer for pointing to a portion of the second array that is associated with a portion of the first array, wherein the portion of the second array comprises the data to be processed, and wherein the number of times the at least one processor must undergo a first transfer latency is reduced. This is done by incorporating a prefetch mechanism within the cache. The computer system may include a plurality of processors with each data entry in the data array having an owner bit for each processor. The memory controller may also include a line preloader for prefetching data into the cache. Also, this design can be used in both single processor and multiprocessor systems.
  Type: Grant
  Filed: February 17, 2000
  Date of Patent: March 30, 2004
  Assignee: International Business Machines Corporation
  Inventors: Daniel J. Colglazier, Chris Dombrowski, Thomas B. Genduso
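Loosely following the structure described above, the sketch below pairs a tag array with a data array through a pointer and preloads the next line on a miss so a subsequent access avoids a second full transfer latency. The array sizes, the direct-mapped indexing, and the next-line prefetch choice are assumptions, not the patented design.

```python
# Sketch of a memory-controller cache: tag array, data array, a pointer tying
# them together, and a simple next-line preload on a miss.
LINE_SIZE = 64

class ControllerCache:
    def __init__(self, entries=8):
        self.tags = [None] * entries          # tag array
        self.data = [None] * entries          # data array
        self.ptr = list(range(entries))       # tag i points at data[self.ptr[i]]

    def _slot_for(self, addr):
        return (addr // LINE_SIZE) % len(self.tags)

    def _fill(self, addr, memory):
        i = self._slot_for(addr)
        self.tags[i] = addr // LINE_SIZE
        self.data[self.ptr[i]] = memory(addr)

    def read(self, addr, memory):
        i = self._slot_for(addr)
        if self.tags[i] == addr // LINE_SIZE:
            return self.data[self.ptr[i]], "hit"
        self._fill(addr, memory)              # demand fill
        self._fill(addr + LINE_SIZE, memory)  # preload the next line (assumed prefetch policy)
        return self.data[self.ptr[i]], "miss"

if __name__ == "__main__":
    memory = lambda a: f"line@{a:#x}"
    cache = ControllerCache()
    print(cache.read(0x0, memory))     # miss; also preloads 0x40
    print(cache.read(0x40, memory))    # hit thanks to the preload
```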