Patents by Inventor Ram Raghavan

Ram Raghavan has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120180052
    Abstract: A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine.
    Type: Application
    Filed: March 22, 2012
    Publication date: July 12, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew Accapadi, Robert H. Bell, JR., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
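    Illustrative sketch: a minimal C model of the dispatch-fault-and-migration flow summarized in the abstract above, not code from the patent; the structures, the PSDF_BIT flag, and the four-engine count are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define ENGINES_PER_CORE 4
        #define PSDF_BIT (1u << 0)   /* Prefetch Stream Dispatch Fault flag in the SPR */

        typedef struct {
            int      id;
            uint32_t spr;                        /* modeled Special Purpose Register */
            bool     engine_busy[ENGINES_PER_CORE];
        } core_t;

        /* Try to start a prefetch stream on any free engine of core c. */
        static bool dispatch_prefetch(core_t *c, uint64_t stream_addr)
        {
            for (int i = 0; i < ENGINES_PER_CORE; i++) {
                if (!c->engine_busy[i]) {
                    c->engine_busy[i] = true;
                    printf("core %d: stream 0x%llx on engine %d\n",
                           c->id, (unsigned long long)stream_addr, i);
                    return true;
                }
            }
            c->spr |= PSDF_BIT;                  /* no engine free: raise the fault */
            return false;
        }

        /* Hypervisor/OS path: on a dispatch fault, migrate the request to a
         * second core that still has a free prefetch engine. */
        static void handle_dispatch_fault(core_t *first, core_t *second, uint64_t addr)
        {
            if (!(first->spr & PSDF_BIT))
                return;                          /* no fault pending */
            first->spr &= ~PSDF_BIT;
            if (!dispatch_prefetch(second, addr))
                printf("no engine available on core %d either\n", second->id);
        }

        int main(void)
        {
            core_t c0 = { .id = 0 }, c1 = { .id = 1 };
            for (int i = 0; i < ENGINES_PER_CORE; i++)
                c0.engine_busy[i] = true;        /* first core's engines are all busy */

            uint64_t addr = 0x1000;
            if (!dispatch_prefetch(&c0, addr))
                handle_dispatch_fault(&c0, &c1, addr);
            return 0;
        }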
  • Publication number: 20120179873
    Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is non-inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.
    Type: Application
    Filed: March 22, 2012
    Publication date: July 12, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert H. Bell, JR., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
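    Illustrative sketch: a minimal C model of the transient-stream cache-fill policy described in the abstract above; the toy cache structure and the fill rules shown are assumptions for illustration, not the patented hardware.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct { uint64_t lines[64]; int n; } cache_t;   /* toy cache model */

        static void cache_fill(cache_t *c, uint64_t line)
        {
            if (c->n < 64) c->lines[c->n++] = line;
        }

        /* Fill policy applied when an instruction fetch miss is serviced from memory. */
        static void fetch_miss(cache_t *l1i, cache_t *l2, uint64_t line,
                               bool transient, bool l2_inclusive_of_l1i)
        {
            cache_fill(l1i, line);        /* the demand line always goes to the L1 instruction cache */
            if (transient)
                return;                   /* transient streams are not written to lower-level caches */
            if (!l2_inclusive_of_l1i)
                cache_fill(l2, line);     /* non-inclusive lower level: fill it alongside L1I */
            /* otherwise the line may reach the lower-level cache later as an L1I cast-out */
        }

        int main(void)
        {
            cache_t l1i = {0}, l2 = {0};
            fetch_miss(&l1i, &l2, 0x4000, /*transient=*/true,  /*inclusive=*/false);
            fetch_miss(&l1i, &l2, 0x8000, /*transient=*/false, /*inclusive=*/false);
            printf("L1I lines: %d, L2 lines: %d\n", l1i.n, l2.n);   /* expect 2 and 1 */
            return 0;
        }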
  • Patent number: 8180941
    Abstract: Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
    Type: Grant
    Filed: December 4, 2009
    Date of Patent: May 15, 2012
    Assignee: International Business Machines Corporation
    Inventors: Wen-Tzer T. Chen, Charles R. Johns, Ram Raghavan, Andrew H. Wottreng
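    Illustrative sketch: a minimal C model of the per-RAG priority selection described in the abstract above, with the storage devices modeled as pending-request flags and a per-RAG priority pointer alternating between high and low priority; names and widths are illustrative assumptions, not the patented design.

        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_RAGS 4

        typedef struct {
            bool pending_high[NUM_RAGS];   /* storage devices set by unit requests */
            bool pending_low[NUM_RAGS];
            bool prefer_high[NUM_RAGS];    /* per-RAG priority pointer */
        } token_mgr_t;

        /* A unit records its request for the resource, tagged with a priority. */
        static void request(token_mgr_t *tm, int rag, bool high)
        {
            if (high) tm->pending_high[rag] = true;
            else      tm->pending_low[rag]  = true;
        }

        /* A token has been granted to `rag`; decide which pending request class
         * consumes it. Returns 1 = high, 0 = low, -1 = nothing pending. */
        static int grant_token(token_mgr_t *tm, int rag)
        {
            bool *first  = tm->prefer_high[rag] ? &tm->pending_high[rag] : &tm->pending_low[rag];
            bool *second = tm->prefer_high[rag] ? &tm->pending_low[rag]  : &tm->pending_high[rag];
            bool granted_high;

            if (*first)       { *first = false;  granted_high = tm->prefer_high[rag]; }
            else if (*second) { *second = false; granted_high = !tm->prefer_high[rag]; }
            else              return -1;

            tm->prefer_high[rag] = !granted_high;   /* round-robin: flip the pointer */
            return granted_high ? 1 : 0;
        }

        int main(void)
        {
            token_mgr_t tm = { .prefer_high = { true, true, true, true } };
            request(&tm, 0, true);
            request(&tm, 0, false);
            printf("grant 1 -> %s\n", grant_token(&tm, 0) == 1 ? "high" : "low");
            printf("grant 2 -> %s\n", grant_token(&tm, 0) == 1 ? "high" : "low");
            return 0;
        }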
  • Publication number: 20120096241
    Abstract: A method, system and computer-usable medium are disclosed for managing transient instruction streams. Transient flags are defined in Branch-and-Link (BRL) instructions that are known to be infrequently executed. A bit is likewise set in a Special Purpose Register (SPR) of the hardware (e.g., a core) that is executing an instruction request thread. Subsequent fetches or prefetches in the request thread are treated as transient and are not written to lower-level caches. If an instruction is non-transient, and if a lower-level cache is non-inclusive of the L1 instruction cache, a fetch or prefetch miss that is obtained from memory may be written in both the L1 and the lower-level cache. If it is not inclusive, a cast-out from the L1 instruction cache may be written in the lower-level cache.
    Type: Application
    Filed: October 15, 2010
    Publication date: April 19, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert H. Bell, JR., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
  • Publication number: 20120096240
    Abstract: A method, system and computer-usable medium are disclosed for managing prefetch streams in a virtual machine environment. Compiled application code in a first core, which comprises a Special Purpose Register (SPR) and a plurality of first prefetch engines, initiates a prefetch stream request. If the prefetch stream request cannot be initiated due to unavailability of a first prefetch engine, then an indicator bit indicating a Prefetch Stream Dispatch Fault is set in the SPR, causing a Hypervisor to interrupt the execution of the prefetch stream request. The Hypervisor then calls its associated operating system (OS), which determines prefetch engine availability for a second core comprising a plurality of second prefetch engines. If a second prefetch engine is available, then the OS migrates the prefetch stream request from the first core to the second core, where it is initiated on an available second prefetch engine.
    Type: Application
    Filed: October 15, 2010
    Publication date: April 19, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Matthew Accapadi, Robert H. Bell, JR., Hong L. Hua, Ram Raghavan, Mysore S. Srinivas
  • Patent number: 8131974
    Abstract: An access speculation predictor is provided that may be implemented using idle command processing resources, such as registers of idle finite state machines (FSMs) in a memory controller. The access speculation predictor may predict whether to perform speculative retrieval of data for a data request from a main memory of the data processing system based on history information stored for a memory region targeted by the data request. In particular, a first address may be extracted from the data request and compared to memory regions associated with second addresses stored in address registers of a plurality of FSMs of the memory controller. An FSM whose memory region includes the first address may be selected. History information for the memory region may be obtained from the selected FSM. The history information may be used to control whether to speculatively retrieve the data for the data request from a main memory.
    Type: Grant
    Filed: April 18, 2008
    Date of Patent: March 6, 2012
    Assignee: International Business Machines Corporation
    Inventors: Richard Nicholas, Ram Raghavan, Eric E. Retter, Jeffrey A. Stuecheli
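    Illustrative sketch: a minimal C model of the region-history speculation decision described in the abstract above; the FSM registers are modeled as a small table, and the 4 KiB region size and saturating-counter policy are assumptions made for illustration.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_FSMS    8
        #define REGION_BITS 12                      /* assume 4 KiB history regions */

        typedef struct {
            bool     valid;
            uint64_t region;                        /* address register: base of the tracked region */
            unsigned history;                       /* saturating counter of past speculation usefulness */
        } fsm_t;

        static fsm_t fsms[NUM_FSMS];

        /* Decide whether to speculatively read main memory for this request. */
        static bool should_speculate(uint64_t addr)
        {
            uint64_t region = addr >> REGION_BITS;
            for (int i = 0; i < NUM_FSMS; i++)
                if (fsms[i].valid && fsms[i].region == region)
                    return fsms[i].history >= 2;    /* speculate only if history is favorable */
            return false;                           /* unknown region: do not speculate */
        }

        /* Update history once it is known whether a memory read was actually needed. */
        static void update_history(uint64_t addr, bool memory_data_was_used)
        {
            uint64_t region = addr >> REGION_BITS;
            int free_slot = -1;
            for (int i = 0; i < NUM_FSMS; i++) {
                if (fsms[i].valid && fsms[i].region == region) {
                    if (memory_data_was_used && fsms[i].history < 3) fsms[i].history++;
                    if (!memory_data_was_used && fsms[i].history > 0) fsms[i].history--;
                    return;
                }
                if (!fsms[i].valid) free_slot = i;
            }
            if (free_slot >= 0)
                fsms[free_slot] = (fsm_t){ true, region, memory_data_was_used ? 2 : 1 };
        }

        int main(void)
        {
            update_history(0x1000, true);
            update_history(0x1040, true);
            printf("speculate on 0x1080? %s\n", should_speculate(0x1080) ? "yes" : "no");
            return 0;
        }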
  • Publication number: 20120042131
    Abstract: An approach is provided for identifying cache extension sizes that correspond to different partitions that are running on a computer system. The approach extends a first hardware cache associated with a first processing core that is included in the processor's silicon substrate with a first memory allocation from a system memory area, with the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the plurality of cache extension sizes that corresponds to one of the partitions that is running on the computer system. The approach further extends a second hardware cache associated with a second processing core also included in the processor's silicon substrate with a second memory allocation from the system memory area, with the second memory allocation corresponding to another of the cache extension sizes that corresponds to a different partition that is being executed by the second processing core.
    Type: Application
    Filed: August 15, 2010
    Publication date: February 16, 2012
    Applicant: International Business Machines Corporation
    Inventors: Diane Garza Flemming, William A. Maron, Ram Raghavan, Mysore Sathyanarayana Srinivas, Basu Vaidyanathan
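    Illustrative sketch: a minimal C model of per-partition cache extension sizing as described in the abstract above; the malloc-backed "system memory area" and the structure layout are illustrative assumptions, not the patented mechanism.

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            int    partition_id;
            size_t extension_size;      /* cache extension size chosen for this partition */
        } partition_t;

        typedef struct {
            int    core_id;
            void  *cache_extension;     /* allocation from the external system memory area */
            size_t extension_size;
        } core_t;

        /* Extend a core's hardware cache with a system-memory allocation sized
         * for the partition currently running on that core. */
        static int extend_cache(core_t *core, const partition_t *p)
        {
            core->cache_extension = malloc(p->extension_size);
            if (core->cache_extension == NULL)
                return -1;
            core->extension_size = p->extension_size;
            printf("core %d: %zu-byte extension for partition %d\n",
                   core->core_id, p->extension_size, p->partition_id);
            return 0;
        }

        int main(void)
        {
            partition_t p0 = { 0, 256 * 1024 };   /* partitions with different working sets */
            partition_t p1 = { 1, 1024 * 1024 };
            core_t c0 = { .core_id = 0 }, c1 = { .core_id = 1 };

            extend_cache(&c0, &p0);
            extend_cache(&c1, &p1);

            free(c0.cache_extension);
            free(c1.cache_extension);
            return 0;
        }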
  • Publication number: 20110161979
    Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Applicant: International Business Machines Corporation
    Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas
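    Illustrative sketch: a minimal C model of matching logical partitions to sets of multi-core processing units by operating performance mode, as described in the abstract above; the two modes and the matching rule are assumptions for illustration.

        #include <stdio.h>

        typedef enum { MODE_SINGLE_THREAD, MODE_THROUGHPUT } perf_mode_t;

        typedef struct { int id; perf_mode_t mode; } mcpu_t;            /* multi-core processing unit */
        typedef struct { int id; perf_mode_t required_mode; } lpar_t;   /* logical partition */

        /* Associate the partition with every processing unit configured in the
         * operating performance mode that satisfies its performance criterion. */
        static void place_partition(const lpar_t *lp, const mcpu_t *units, int n)
        {
            printf("LPAR %d ->", lp->id);
            for (int i = 0; i < n; i++)
                if (units[i].mode == lp->required_mode)
                    printf(" MCPU%d", units[i].id);
            printf("\n");
        }

        int main(void)
        {
            mcpu_t units[] = {
                { 0, MODE_THROUGHPUT }, { 1, MODE_THROUGHPUT },
                { 2, MODE_SINGLE_THREAD }, { 3, MODE_SINGLE_THREAD },
            };
            lpar_t web_lpar = { 0, MODE_THROUGHPUT };      /* workload favors many threads */
            lpar_t db_lpar  = { 1, MODE_SINGLE_THREAD };   /* workload favors fast single threads */

            place_partition(&web_lpar, units, 4);
            place_partition(&db_lpar, units, 4);
            return 0;
        }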
  • Publication number: 20110022803
    Abstract: An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the inactive processing core using the extended memory map.
    Type: Application
    Filed: July 24, 2009
    Publication date: January 27, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Diane Garza Flemming, William A. Maron, Ram Raghavan, Mysore Sathyanarayana Srinivas, Basu Vaidyanathan
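    Illustrative sketch: a minimal C model of the cache borrowing described in the abstract above, in which the active core's memory map is extended to cover a disabled sibling's cache; the buffer-based cache model and toy sizes are assumptions for illustration.

        #include <stdbool.h>
        #include <stdio.h>

        #define CACHE_BYTES (64 * 1024)       /* toy cache size */

        typedef struct {
            int  id;
            bool disabled;
            char cache[CACHE_BYTES];          /* cache memory assigned to this core (modeled) */
        } core_t;

        typedef struct {
            char *primary;                    /* active core's own cache */
            char *extension;                  /* borrowed cache of the disabled core, if any */
        } cache_map_t;

        /* Extend the active core's memory map to include the disabled core's cache. */
        static cache_map_t extend_map(core_t *active, core_t *disabled)
        {
            cache_map_t map = { active->cache, NULL };
            if (disabled->disabled)
                map.extension = disabled->cache;
            return map;
        }

        int main(void)
        {
            core_t c0 = { .id = 0, .disabled = false };
            core_t c1 = { .id = 1, .disabled = true };
            cache_map_t map = extend_map(&c0, &c1);

            map.primary[0] = 'A';             /* first data set lands in the active core's cache */
            if (map.extension)
                map.extension[0] = 'B';       /* second data set lands in the borrowed cache */

            printf("borrowed cache in use: %s\n", map.extension ? "yes" : "no");
            return 0;
        }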
  • Patent number: 7774563
    Abstract: A computer-implemented method, data processing system, and computer usable program code are provided for reducing memory access latency. A memory controller receives a memory access request and determines if an address associated with the memory access request falls within an address range of a plurality of paired memory address range registers. The memory controller determines if an enable bit associated with the address range is set to 1 in response to the address falling within one of the address ranges. The memory controller flags the memory access request as a high-priority request in response to the enable bit being set to 1 and places the high-priority request on a request queue. A dispatcher receives an indication that a memory bank is idle. The dispatcher determines if high-priority requests are present in the request queue and, if so, sends the earliest high-priority request to the idle memory bank.
    Type: Grant
    Filed: January 9, 2007
    Date of Patent: August 10, 2010
    Assignee: International Business Machines Corporation
    Inventor: Ram Raghavan
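    Illustrative sketch: a minimal C model of the range-based prioritization described in the abstract above, in which paired address-range registers with an enable bit flag requests as high priority and the dispatcher serves the earliest high-priority request when a bank goes idle; the register layout and queue are assumptions for illustration.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_RANGES 4
        #define QUEUE_LEN  16

        typedef struct { uint64_t lo, hi; bool enable; } range_reg_t;   /* paired range registers */
        typedef struct { uint64_t addr; bool high_priority; bool valid; } request_t;

        static range_reg_t ranges[NUM_RANGES] = {
            { 0x10000000, 0x1fffffff, true },    /* latency-sensitive region, enable bit set */
        };
        static request_t queue[QUEUE_LEN];
        static int q_tail;

        /* Memory controller: flag and enqueue an incoming memory access request. */
        static void enqueue_request(uint64_t addr)
        {
            bool high = false;
            for (int i = 0; i < NUM_RANGES; i++)
                if (ranges[i].enable && addr >= ranges[i].lo && addr <= ranges[i].hi)
                    high = true;
            if (q_tail < QUEUE_LEN)
                queue[q_tail++] = (request_t){ addr, high, true };
        }

        /* Dispatcher: a bank went idle; pick the earliest high-priority request,
         * falling back to the earliest request of any priority. */
        static int pick_request(void)
        {
            for (int pass = 0; pass < 2; pass++)
                for (int i = 0; i < q_tail; i++)
                    if (queue[i].valid && (pass == 1 || queue[i].high_priority)) {
                        queue[i].valid = false;
                        return i;
                    }
            return -1;
        }

        int main(void)
        {
            enqueue_request(0x20000000);                 /* ordinary request */
            enqueue_request(0x10000040);                 /* falls inside the enabled range */
            int i = pick_request();
            if (i >= 0)
                printf("dispatched request %d (addr 0x%llx)\n",
                       i, (unsigned long long)queue[i].addr);   /* expect request 1 */
            return 0;
        }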
  • Publication number: 20100146512
    Abstract: Mechanisms for priority control in resource allocation are provided. With these mechanisms, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
    Type: Application
    Filed: December 4, 2009
    Publication date: June 10, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wen-Tzer T. Chen, Charles R. Johns, Ram Raghavan, Andrew H. Wottreng
  • Patent number: 7631131
    Abstract: A mechanism for priority control in resource allocation for low request rate, latency-sensitive units is provided. With this mechanism, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
    Type: Grant
    Filed: October 27, 2005
    Date of Patent: December 8, 2009
    Assignee: International Business Machines Corporation
    Inventors: Wen-Tzer T. Chen, Charles R. Johns, Ram Raghavan, Andrew H. Wottreng
  • Publication number: 20090265293
    Abstract: An access speculation predictor is provided that may be implemented using idle command processing resources, such as registers of idle finite state machines (FSMs) in a memory controller. The access speculation predictor may predict whether to perform speculative retrieval of data for a data request from a main memory of the data processing system based on history information stored for a memory region targeted by the data request. In particular, a first address may be extracted from the data request and compared to memory regions associated with second addresses stored in address registers of a plurality of FSMs of the memory controller. An FSM whose memory region includes the first address may be selected. History information for the memory region may be obtained from the selected FSM. The history information may be used to control whether to speculatively retrieve the data for the data request from a main memory.
    Type: Application
    Filed: April 18, 2008
    Publication date: October 22, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Richard Nicholas, Ram Raghavan, Eric E. Retter, Jeffrey A. Stuecheli
  • Publication number: 20090178345
    Abstract: A sintered polycrystalline diamond material (PCD) of extremely fine grain size is manufactured by sintering, under high pressure/high temperature (HP/HT) processing, a diamond powder that is blended with a pre-milled source catalyst metal compound. The PCD material has an average sintered diamond grain structure of less than about 1.0 µm.
    Type: Application
    Filed: March 17, 2009
    Publication date: July 16, 2009
    Applicant: Diamond Innovations, Inc.
    Inventors: William C. Russell, Susanne Sowers, Steven Webb, Ram Raghavan
  • Publication number: 20080168241
    Abstract: A computer-implemented method, data processing system, and computer usable program code are provided for reducing memory access latency. A memory controller receives a memory access request and determines if an address associated with the memory access request falls within an address range of a plurality of paired memory address range registers. The memory controller determines if an enable bit associated with the address range is set to 1 in response to the address falling within one of the address ranges. The memory controller flags the memory access request as a high-priority request in response to the enable bit being set to 1 and places the high-priority request on a request queue. A dispatcher receives an indication that a memory bank is idle. The dispatcher determines if high-priority requests are present in the request queue and, if so, sends the earliest high-priority request to the idle memory bank.
    Type: Application
    Filed: January 9, 2007
    Publication date: July 10, 2008
    Inventor: Ram Raghavan
  • Publication number: 20070101033
    Abstract: A mechanism for priority control in resource allocation for low request rate, latency-sensitive units is provided. With this mechanism, when a unit makes a request to a token manager, the unit identifies the priority of its request as well as the resource which it desires to access and the unit's resource access group (RAG). This information is used to set a value of a storage device associated with the resource, priority, and RAG identified in the request. When the token manager generates and grants a token to the RAG, the token is in turn granted to a unit within the RAG based on a priority of the pending requests identified in the storage devices associated with the resource and RAG. Priority pointers are utilized to provide a round-robin fairness scheme between high and low priority requests within the RAG for the resource.
    Type: Application
    Filed: October 27, 2005
    Publication date: May 3, 2007
    Inventors: Wen-Tzer Chen, Charles Johns, Ram Raghavan, Andrew Wottreng
  • Publication number: 20070056778
    Abstract: A sintered polycrystalline diamond material (PCD) of extremely fine grain size is manufactured by sintering a diamond powder with pre-blended catalyst metal under high pressure/high temperature (HP/HT) processing. The PCD material has an average sintered diamond grain structure of less than 1.0 µm.
    Type: Application
    Filed: September 13, 2006
    Publication date: March 15, 2007
    Inventors: Steven Webb, Ram Raghavan
  • Patent number: 6996647
    Abstract: A method and apparatus are provided for efficiently managing hot spots in a resource managed computer system. The system utilizes a controller, a series of requestor groups, and a series of loan registers. The controller is configured to allocate and is configured to reallocate resources among the requestor groups to efficiently manage the computer system. The loan registers account for reallocated resources such that intended preallocation of use of shared resources is closely maintained. Hence, the computer system is able to operate efficiently while preventing any single requestor or group of requestors from monopolizing shared resources.
    Type: Grant
    Filed: December 17, 2003
    Date of Patent: February 7, 2006
    Assignee: International Business Machines Corporation
    Inventors: Ram Raghavan, Wen-Tzer Thomas Chen
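    Illustrative sketch: a minimal C model of the loan-register bookkeeping described in the abstract above, lending unused resource shares from one requestor group to a hot-spot group and repaying them so the intended preallocation is preserved; names and token counts are assumptions for illustration.

        #include <stdio.h>

        #define NUM_GROUPS 3

        typedef struct {
            int allocated[NUM_GROUPS];               /* intended preallocation of shared-resource tokens */
            int current[NUM_GROUPS];                 /* tokens currently held by each requestor group */
            int loaned_to[NUM_GROUPS][NUM_GROUPS];   /* loan registers: [lender][borrower] */
        } controller_t;

        /* Lend `amount` tokens from an under-using group to a hot-spot group. */
        static int lend(controller_t *c, int lender, int borrower, int amount)
        {
            if (c->current[lender] < amount)
                return -1;                             /* lender has nothing spare */
            c->current[lender] -= amount;
            c->current[borrower] += amount;
            c->loaned_to[lender][borrower] += amount;  /* record the loan */
            return 0;
        }

        /* Return an outstanding loan so the preallocation is restored. */
        static void repay(controller_t *c, int lender, int borrower)
        {
            int amount = c->loaned_to[lender][borrower];
            c->current[borrower] -= amount;
            c->current[lender] += amount;
            c->loaned_to[lender][borrower] = 0;
        }

        int main(void)
        {
            controller_t c = { .allocated = { 8, 8, 8 }, .current = { 8, 8, 8 } };
            lend(&c, 1, 0, 4);                         /* group 0 is a hot spot */
            printf("during loan: group0=%d group1=%d\n", c.current[0], c.current[1]);
            repay(&c, 1, 0);
            printf("after repay: group0=%d group1=%d\n", c.current[0], c.current[1]);
            return 0;
        }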
  • Patent number: 6986002
    Abstract: The present invention provides for a bus system having a local bus ring coupled to a remote bus ring. A processing unit is coupled to the local bus node and is employable to request data. A cache is coupled to the processing unit through a command bus. A cache investigator, coupled to the cache, is employable to determine whether the cache contains the requested data. The cache investigator is further employable to generate and broadcast cache utilization parameters, which contain information as to the degree of accessing the cache by other caches, its own associated processing unit, and so on. In one aspect, the cache is a local cache. In another aspect, the cache is a remote cache.
    Type: Grant
    Filed: December 17, 2002
    Date of Patent: January 10, 2006
    Assignee: International Business Machines Corporation
    Inventor: Ram Raghavan
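    Illustrative sketch: a minimal C model of the cache-investigator idea described in the abstract above, answering whether a cache holds requested data and publishing utilization parameters; the specific fields and counters are assumptions for illustration, not the patented bus system.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct {
            unsigned local_accesses;     /* accesses by the cache's own processing unit */
            unsigned remote_accesses;    /* accesses by other caches on the ring */
        } cache_util_params_t;

        typedef struct {
            uint64_t            lines[8];
            int                 n;
            cache_util_params_t util;
        } cache_t;

        /* Cache investigator: check for the requested line and account the access. */
        static bool investigate(cache_t *c, uint64_t line, bool from_remote)
        {
            if (from_remote) c->util.remote_accesses++;
            else             c->util.local_accesses++;
            for (int i = 0; i < c->n; i++)
                if (c->lines[i] == line)
                    return true;
            return false;
        }

        /* Broadcast the utilization parameters to the rest of the ring (here, stdout). */
        static void broadcast_util(const cache_t *c)
        {
            printf("utilization: local=%u remote=%u\n",
                   c->util.local_accesses, c->util.remote_accesses);
        }

        int main(void)
        {
            cache_t local = { .lines = { 0x100, 0x200 }, .n = 2 };
            printf("remote request for 0x200: %s\n",
                   investigate(&local, 0x200, true) ? "hit" : "miss");
            broadcast_util(&local);
            return 0;
        }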
  • Publication number: 20050138254
    Abstract: A method and apparatus are provided for efficiently managing hot spots in a resource managed computer system. The system utilizes a controller, a series of requestor groups, and a series of loan registers. The controller is configured to allocate and is configured to reallocate resources among the requestor groups to efficiently manage the computer system. The loan registers account for reallocated resources such that intended preallocation of use of shared resources is closely maintained. Hence, the computer system is able to operate efficiently while preventing any single requestor or group of requestors from monopolizing shared resources.
    Type: Application
    Filed: December 17, 2003
    Publication date: June 23, 2005
    Applicant: International Business Machines Corporation
    Inventors: Ram Raghavan, Wen-Tzer Chen