Patents by Inventor Ram Raghavan
Ram Raghavan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10929062
Abstract: Embodiments of the present invention facilitate gracefully degrading performance while gradually throttling memory due to dynamic thermal conditions. An example method includes receiving, by pre-fetch throttling logic, a pre-fetch command requesting data from a memory and a priority level of the pre-fetch command. The priority level of the pre-fetch command indicates a likelihood that data requested by the pre-fetch command will be utilized by a processor. Thermal condition data from one or more sensors is received by the pre-fetch throttling logic. It is determined whether the pre-fetch command should be issued to the memory. The determining is based at least in part on the priority level of the pre-fetch command and the thermal condition data. The pre-fetch command is issued to the memory or prevented from being issued to the memory based at least in part on the determining.
Type: Grant
Filed: November 7, 2018
Date of Patent: February 23, 2021
Assignee: International Business Machines Corporation
Inventors: Hoa C. Nguyen, Bret R. Olszewski, Ram Raghavan
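The throttling decision this abstract describes can be sketched in a few lines: issue a pre-fetch only if its priority clears a bar that rises with reported temperature. The thresholds, the linear ramp, and the sensor format below are illustrative assumptions, not details from the patent.

```python
def should_issue_prefetch(priority, sensor_temps_c, limit_c=85, headroom_c=10):
    """Decide whether a pre-fetch command may be issued to memory.

    priority: 0.0-1.0 likelihood the fetched data will actually be used.
    sensor_temps_c: temperatures reported by one or more thermal sensors.
    Below (limit - headroom) everything issues; at or above the limit,
    nothing issues; in between, the required priority ramps up linearly,
    so low-value pre-fetches are throttled first (graceful degradation).
    """
    hottest = max(sensor_temps_c)
    if hottest < limit_c - headroom_c:
        return True               # cool: no throttling at all
    if hottest >= limit_c:
        return False              # at the thermal limit: block all pre-fetches
    # Warm zone: demand progressively higher priority as temperature rises.
    required = (hottest - (limit_c - headroom_c)) / headroom_c
    return priority >= required
```

With the default 85 °C limit and 10 °C headroom, a low-priority pre-fetch is dropped at 80 °C while a high-priority one still issues, which matches the abstract's goal of degrading gradually rather than cutting off all memory traffic at once.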
-
Patent number: 10740234
Abstract: An approach is provided in which a first core broadcasts a cache line request in response to detecting a cache miss corresponding to a first virtual central processing unit (VCPU) executing on the first core. Next, the first core receives a cache line response from a second core responding to the cache line request that includes tag extension data. The first core determines a cache miss type of the cache miss based on the tag extension data and, in turn, sends the cache miss type to a hypervisor that utilizes the cache miss type during a future VCPU dispatch selection.
Type: Grant
Filed: September 4, 2018
Date of Patent: August 11, 2020
Assignee: International Business Machines Corporation
Inventors: Bret R. Olszewski, Ram Raghavan, Maria Lorena Pesantez, Gayathri Mohan
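The classification step can be sketched as follows. The contents of the tag extension data and the miss-type names are assumptions for illustration (here, the id of the VCPU that last touched the line); the patent does not publish the actual encoding.

```python
def classify_miss(requesting_vcpu, tag_extension):
    """Classify a cache miss from tag extension data returned with the line.

    tag_extension: dict holding the responding core's view of the line;
    here just "last_vcpu", the VCPU that last touched it (illustrative).
    """
    last_vcpu = tag_extension.get("last_vcpu")
    if last_vcpu is None:
        return "cold"           # line not attributed to any VCPU
    if last_vcpu == requesting_vcpu:
        return "migration"      # same VCPU's data, left behind on another core
    return "cross-vcpu"         # another VCPU produced the data

class Hypervisor:
    """Tallies miss types per VCPU, to bias future dispatch selection."""
    def __init__(self):
        self.miss_counts = {}

    def record(self, vcpu, miss_type):
        per_vcpu = self.miss_counts.setdefault(vcpu, {})
        per_vcpu[miss_type] = per_vcpu.get(miss_type, 0) + 1
```

A hypervisor that sees many "migration" misses for a VCPU might prefer to redispatch it on the core it last ran on, which is the kind of dispatch-selection feedback the abstract describes.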
-
Publication number: 20200142635
Abstract: Embodiments of the present invention facilitate gracefully degrading performance while gradually throttling memory due to dynamic thermal conditions. An example method includes receiving, by pre-fetch throttling logic, a pre-fetch command requesting data from a memory and a priority level of the pre-fetch command. The priority level of the pre-fetch command indicates a likelihood that data requested by the pre-fetch command will be utilized by a processor. Thermal condition data from one or more sensors is received by the pre-fetch throttling logic. It is determined whether the pre-fetch command should be issued to the memory. The determining is based at least in part on the priority level of the pre-fetch command and the thermal condition data. The pre-fetch command is issued to the memory or prevented from being issued to the memory based at least in part on the determining.
Type: Application
Filed: November 7, 2018
Publication date: May 7, 2020
Inventors: Hoa C. Nguyen, Bret R. Olszewski, Ram Raghavan
-
Publication number: 20200073803
Abstract: An approach is provided in which a first core broadcasts a cache line request in response to detecting a cache miss corresponding to a first virtual central processing unit (VCPU) executing on the first core. Next, the first core receives a cache line response from a second core responding to the cache line request that includes tag extension data. The first core determines a cache miss type of the cache miss based on the tag extension data and, in turn, sends the cache miss type to a hypervisor that utilizes the cache miss type during a future VCPU dispatch selection.
Type: Application
Filed: September 4, 2018
Publication date: March 5, 2020
Inventors: Bret R. Olszewski, Ram Raghavan, Maria Lorena Pesantez, Gayathri Mohan
-
Publication number: 20180101478
Abstract: In one embodiment, a set-associative cache memory has a plurality of congruence classes each including multiple entries for storing cache lines of data. The cache memory includes a bank of counters, which includes a respective one of a plurality of counters for each cache line stored in the plurality of congruence classes. The cache memory selects victim cache lines for eviction from the cache memory by reference to counter values of counters within the bank of counters. A dynamic distribution of counter values of counters within the bank of counters is determined. In response, an amount by which counter values of counters within the bank of counters are adjusted on a cache miss is adjusted based on the dynamic distribution of the counter values.
Type: Application
Filed: October 7, 2016
Publication date: April 12, 2018
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
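The feedback loop in this abstract, where the distribution of counter values tunes the adjustment amount, can be sketched as below. The specific policy (smaller bumps as the average counter nears saturation, to delay counter overflow) and the 4-bit counter width are illustrative assumptions.

```python
def miss_adjust_amount(counters, max_value=15):
    """Choose how much to adjust counters by on a cache miss, based on the
    dynamic distribution of current counter values.

    Illustrative policy: when counters are mostly low there is plenty of
    range, so use a large step; as the average approaches saturation
    (max_value), shrink the step to preserve ordering between lines.
    """
    avg = sum(counters) / len(counters)
    if avg < max_value / 3:
        return 4   # lots of headroom: coarse, fast-moving counters
    if avg < 2 * max_value / 3:
        return 2   # moderate headroom
    return 1       # near saturation: finest adjustment
```

The point of the technique is that a fixed adjustment either saturates quickly under heavy miss traffic or moves too slowly under light traffic; deriving the step from the observed distribution adapts the eviction counters to the workload.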
-
Publication number: 20180101476
Abstract: A set-associative cache memory includes a bank of counters including a respective one of a plurality of counters for each cache line stored in a plurality of congruence classes of the cache memory. Prior to receiving a memory access request that maps to a particular congruence class of the cache memory, the cache memory pre-selects a first victim cache line stored in a particular entry of a particular congruence class for eviction based on at least a counter value of the victim cache line. In response to receiving a memory access request that maps to the particular congruence class and that misses, the cache memory evicts the pre-selected first victim cache line from the particular entry, installs a new cache line in the particular entry, and pre-selects a second victim cache line from the particular congruence class based on at least a counter value of the second victim cache line.
Type: Application
Filed: October 7, 2016
Publication date: April 12, 2018
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
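The pre-selection protocol in this abstract (victim chosen before the miss arrives, replacement followed immediately by choosing the next victim) can be sketched as a small class. The counter policy used here, lowest counter value loses and a newly installed line starts at the current maximum, is an illustrative stand-in, not the patented policy.

```python
class CongruenceClass:
    """One congruence class whose eviction victim is pre-selected ahead
    of the miss that will need it, so the miss path does no search."""

    def __init__(self, lines, counters):
        self.lines = list(lines)            # cache line tags, one per entry
        self.counters = list(counters)      # per-line counters
        self.victim = self._pick_victim()   # pre-selected before any miss

    def _pick_victim(self):
        # Illustrative: the entry with the lowest counter is the next victim.
        return min(range(len(self.lines)), key=lambda i: self.counters[i])

    def access(self, tag):
        if tag in self.lines:
            return "hit"
        # Miss: evict the pre-selected victim, install the new line,
        # then immediately pre-select the next victim for the class.
        slot = self.victim
        self.lines[slot] = tag
        self.counters[slot] = max(self.counters)   # new line starts "hot"
        self.victim = self._pick_victim()
        return "miss"
```

Because `self.victim` is always valid before a request arrives, the miss handler only has to index into the class, which is the latency advantage the pre-selection buys.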
-
Patent number: 9940239
Abstract: A set-associative cache memory includes a bank of counters including a respective one of a plurality of counters for each cache line stored in a plurality of congruence classes of the cache memory. Prior to receiving a memory access request that maps to a particular congruence class of the cache memory, the cache memory pre-selects a first victim cache line stored in a particular entry of a particular congruence class for eviction based on at least a counter value of the victim cache line. In response to receiving a memory access request that maps to the particular congruence class and that misses, the cache memory evicts the pre-selected first victim cache line from the particular entry, installs a new cache line in the particular entry, and pre-selects a second victim cache line from the particular congruence class based on at least a counter value of the second victim cache line.
Type: Grant
Filed: October 7, 2016
Date of Patent: April 10, 2018
Assignee: International Business Machines Corporation
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
-
Patent number: 9940246
Abstract: In one embodiment, a set-associative cache memory has a plurality of congruence classes each including multiple entries for storing cache lines of data. The cache memory includes a bank of counters, which includes a respective one of a plurality of counters for each cache line stored in the plurality of congruence classes. The cache memory selects victim cache lines for eviction from the cache memory by reference to counter values of counters within the bank of counters. A dynamic distribution of counter values of counters within the bank of counters is determined. In response, an amount by which counter values of counters within the bank of counters are adjusted on a cache miss is adjusted based on the dynamic distribution of the counter values.
Type: Grant
Filed: October 7, 2016
Date of Patent: April 10, 2018
Assignee: International Business Machines Corporation
Inventors: Bernard C. Drerup, Ram Raghavan, Sahil Sabharwal, Jeffrey A. Stuecheli
-
Patent number: 9569364
Abstract: Techniques are disclosed for prefetching cache lines. One technique includes dispatching a virtual processor and recording a first set of addresses associated with one or more cache lines used by the virtual processor. The technique also includes redispatching the virtual processor and recording a second set of addresses associated with one or more cache lines used by the virtual processor. The technique further includes comparing the first set of addresses with the second set of addresses to determine one or more common addresses between the first set and the second set. The technique includes placing the one or more common addresses into a memory. Finally, the technique includes redispatching the virtual processor.
Type: Grant
Filed: February 8, 2016
Date of Patent: February 14, 2017
Assignee: International Business Machines Corporation
Inventors: Peter J. Heyrman, Bret R. Olszewski, Ram Raghavan
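The core of this technique is a set intersection: addresses touched in both of the virtual processor's last two dispatch windows form its stable working set, which is saved so it can be prefetched on the next redispatch. A minimal sketch, with function and variable names invented for illustration:

```python
def common_working_set(first_dispatch_addrs, second_dispatch_addrs):
    """Return the cache line addresses touched in both of a virtual
    processor's last two dispatch windows. These are the lines worth
    recording and prefetching on the next redispatch, since they
    recurred across dispatches rather than being one-off accesses."""
    return sorted(set(first_dispatch_addrs) & set(second_dispatch_addrs))
```

Lines seen in only one window are presumed transient and not recorded, so the prefetch list stays small and biased toward data the virtual processor reliably reuses.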
-
Patent number: 9403137
Abstract: A sintered polycrystalline diamond material (PCD) of extremely fine grain size is manufactured by sintering, under high pressure/high temperature (HP/HT) processing, a diamond powder which is blended with a pre-milled source catalyst metal compound. The PCD material has an average sintered diamond grain structure of less than about 1.0 µm.
Type: Grant
Filed: March 17, 2009
Date of Patent: August 2, 2016
Assignee: Diamond Innovations, Inc.
Inventors: William C. Russell, Susanne Sowers, Steven Webb, Ram Raghavan
-
Patent number: 9323527
Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is non-inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.
Type: Grant
Filed: October 15, 2010
Date of Patent: April 26, 2016
Assignee: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
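The cache-fill rule this abstract describes can be sketched with a toy two-level hierarchy: while the thread's SPR transient bit is set, instruction fetches fill only L1 and never the lower-level cache. The class and its simplifications (sets standing in for cache arrays, a single lower level) are illustrative assumptions.

```python
class InstructionCacheHierarchy:
    """Toy sketch of transient instruction-stream handling: fetches made
    while the SPR transient bit is set are kept out of lower-level caches,
    so rarely-reexecuted code does not pollute them."""

    def __init__(self, inclusive=False):
        self.l1 = set()
        self.lower = set()          # stands in for any lower-level cache
        self.inclusive = inclusive  # is the lower level inclusive of L1?
        self.spr_transient = False  # the per-thread SPR bit

    def fetch(self, addr):
        if addr in self.l1:
            return                  # L1 hit, nothing to fill
        self.l1.add(addr)
        # Per the abstract: a non-transient miss from memory may fill both
        # L1 and a non-inclusive lower-level cache; a transient miss fills
        # only L1, leaving the lower level untouched.
        if not self.spr_transient and not self.inclusive:
            self.lower.add(addr)
```

Marking a stream transient via an infrequently executed branch-and-link thus protects lower-level cache capacity for the instructions that actually recur.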
-
Patent number: 9298458
Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is non-inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.
Type: Grant
Filed: March 22, 2012
Date of Patent: March 29, 2016
Assignee: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
-
Patent number: 8695011
Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
Type: Grant
Filed: April 27, 2012
Date of Patent: April 8, 2014
Assignee: International Business Machines Corporation
Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas
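Stripped of patent phrasing, the scheme pairs each logical partition with the set of multi-core processing units whose operating performance mode satisfies that partition's workload criterion. A minimal sketch, with mode names and the dict-based criterion invented for illustration:

```python
def assign_partitions(partitions, unit_pools):
    """Map each logical partition to the pool of multi-core processing
    units configured in the operating performance mode its workload
    requires (mode names and matching rule are illustrative)."""
    assignment = {}
    for partition, required_mode in partitions.items():
        if required_mode not in unit_pools:
            raise ValueError(f"no units configured for mode {required_mode!r}")
        assignment[partition] = unit_pools[required_mode]
    return assignment
```

For example, a throughput-oriented database partition lands on units configured for a many-thread mode while a latency-sensitive partition lands on units configured for a single-thread mode, without reconfiguring units per dispatch.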
-
Patent number: 8688960
Abstract: A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine.
Type: Grant
Filed: October 15, 2010
Date of Patent: April 1, 2014
Assignee: International Business Machines Corporation
Inventors: Matthew Accapadi, Robert H. Bell, Jr., Hong Lam Hua, Ram Raghavan, Mysore Sathyanarayana Srinivas
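The fault-and-migrate flow reduces to: try the requesting core's engines first, and on a Prefetch Stream Dispatch Fault let the OS move the request to any other core with a free engine. The sketch below compresses the SPR bit, hypervisor interrupt, and OS call into a single function; the data layout (core id to free-engine count) is an illustrative assumption.

```python
def start_prefetch_stream(free_engines, requesting_core=0):
    """Start a prefetch stream, migrating it on engine exhaustion.

    free_engines: dict mapping core id -> number of free prefetch engines.
    Returns the core id the stream was started on, or None if no core
    anywhere has a free engine.
    """
    if free_engines[requesting_core] > 0:
        free_engines[requesting_core] -= 1
        return requesting_core
    # Dispatch fault path: the SPR bit would be set here, the hypervisor
    # would interrupt, and the OS would search other cores for a free engine.
    for core, free in free_engines.items():
        if core != requesting_core and free > 0:
            free_engines[core] -= 1
            return core
    return None   # no engine available anywhere: request cannot start
```

The payoff is that a core whose prefetch engines are saturated does not simply drop the stream; spare engine capacity elsewhere in the machine absorbs it.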
-
Patent number: 8688961
Abstract: A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine.
Type: Grant
Filed: March 22, 2012
Date of Patent: April 1, 2014
Assignee: International Business Machines Corporation
Inventors: Matthew Accapadi, Robert H. Bell, Jr., Hong Lam Hua, Ram Raghavan, Mysore Sathyanarayana Srinivas
-
Patent number: 8677371
Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
Type: Grant
Filed: December 31, 2009
Date of Patent: March 18, 2014
Assignee: International Business Machines Corporation
Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas
-
Publication number: 20140060937
Abstract: A cutting element may comprise a substrate, a first polycrystalline diamond volume, and a second diamond or diamond-like volume. The first polycrystalline diamond volume may contain a catalyst material. The first polycrystalline diamond volume may be bonded to the substrate. The second diamond or diamond-like volume may be formed predominantly from carbon atoms and free of catalyst materials. The second diamond or diamond-like volume may be adjacent to a working surface of the cutting element. The second diamond or diamond-like volume may be bonded to the first polycrystalline diamond volume.
Type: Application
Filed: August 31, 2012
Publication date: March 6, 2014
Applicant: Diamond Innovations, Inc.
Inventors: Valeriy V. Konovalov, Ram Raghavan
-
Patent number: 8438338
Abstract: An approach is provided to identify cache extension sizes that correspond to different partitions that are running on a computer system. The approach extends a first hardware cache associated with a first processing core that is included in the processor's silicon substrate with a first memory allocation from a system memory area, with the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the plurality of cache extension sizes that corresponds to one of the partitions that is running on the computer system. The approach further extends a second hardware cache associated with a second processing core also included in the processor's silicon substrate with a second memory allocation from the system memory area, with the second memory allocation corresponding to another of the cache extension sizes that corresponds to a different partition that is being executed by the second processing core.
Type: Grant
Filed: August 15, 2010
Date of Patent: May 7, 2013
Assignee: International Business Machines Corporation
Inventors: Diane Garza Flemming, William A. Maron, Ram Raghavan, Mysore Sathyanarayana Srinivas, Basu Vaidyanathan
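The allocation side of this approach, carving per-partition cache extensions out of an external system memory area, can be sketched as a simple bump allocator. The sequential layout and byte-sized extents are illustrative assumptions; the patent does not describe the allocator itself.

```python
def extend_caches(extension_sizes, system_memory_bytes):
    """Allocate each partition's cache extension from the external system
    memory area, returning (offset, size) per partition.

    extension_sizes: dict mapping partition name -> its cache extension
    size in bytes (per-partition sizes are the point of the approach).
    """
    allocations, offset = {}, 0
    for partition, size in extension_sizes.items():
        if offset + size > system_memory_bytes:
            raise MemoryError("system memory area exhausted")
        allocations[partition] = (offset, size)
        offset += size
    return allocations
```

Each core's on-die hardware cache is then logically extended by its partition's allocation, so partitions with larger working sets get larger effective caches without changing the silicon.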
-
Patent number: 8417889
Abstract: An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the inactive processing core using the extended memory map.
Type: Grant
Filed: July 24, 2009
Date of Patent: April 9, 2013
Assignee: International Business Machines Corporation
Inventors: Diane Garza Flemming, William A. Maron, Ram Raghavan, Mysore Sathyanarayana Srinivas, Basu Vaidyanathan
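The effect of the extended memory map is that an active core's usable cache capacity grows by the capacity of the disabled cores' caches. The sketch below computes that, folding all spare capacity into one active core's map; the pairing rule is an illustrative simplification (real hardware would define how addresses interleave across the two cache arrays).

```python
def extended_capacities(core_enabled, cache_capacity):
    """Per-active-core cache capacity after extending memory maps over
    disabled cores' caches.

    core_enabled: dict mapping core id -> True (active) / False (disabled).
    cache_capacity: dict mapping core id -> its cache capacity in KiB.
    """
    active = [c for c, on in core_enabled.items() if on]
    disabled = [c for c, on in core_enabled.items() if not on]
    spare = sum(cache_capacity[c] for c in disabled)
    # Illustrative policy: the first active core's map absorbs all the
    # spare capacity as a single extension.
    return {c: cache_capacity[c] + (spare if c == active[0] else 0)
            for c in active}
```

The motivation is salvage: a core disabled for yield or power reasons still has a working cache array, and the extended map lets a neighboring active core use it as extra storage for a second class of data.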
-
Publication number: 20120216214
Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
Type: Application
Filed: April 27, 2012
Publication date: August 23, 2012
Applicant: International Business Machines Corporation
Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas