Patents by Inventor Mysore Srinivas
Mysore Srinivas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8407499
Abstract: Handling requests for power reduction by first enabling a request for an amount of power change, e.g. reduction, by any partition. In response to the request for power reduction, an equal proportion of the whole amount of power reduction is distributed among each of a set of cores providing the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change.
Type: Grant
Filed: April 20, 2010
Date of Patent: March 26, 2013
Assignee: International Business Machines Corporation
Inventors: Vaijayanthimala K. Anand, Diane Garza Flemming, William A. Maron, Mysore Srinivas
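As a rough illustration of the scheme this abstract describes, the following sketch distributes a requested power reduction equally across the cores backing partition entitlements while charging the full amount against the requesting partition. The function name and data shapes are hypothetical, not taken from the patent:

```python
def apply_power_reduction(state, requester, reduction):
    """Distribute a power reduction equally among the cores that
    provide partition entitlements, and reduce the requesting
    partition's entitlement by the whole amount of the change.
    (Hypothetical names; a sketch of the abstract's scheme.)"""
    per_core = reduction / len(state["cores"])
    for core in state["cores"]:
        core["power"] -= per_core                 # equal share per core
    state["partitions"][requester] -= reduction   # full amount charged
    return state
```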
-
Publication number: 20110258468
Abstract: Handling requests for power reduction by first enabling a request for an amount of power change, e.g. reduction, by any partition. In response to the request for power reduction, an equal proportion of the whole amount of power reduction is distributed among each of a set of cores providing the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change.
Type: Application
Filed: April 20, 2010
Publication date: October 20, 2011
Applicant: International Business Machines Corporation
Inventors: Vaijayanthimala K. Anand, William A. Maron, Mysore Srinivas, Diane Garza Flemming
-
Publication number: 20100275206
Abstract: Standalone software performance optimizer systems for hybrid systems include a hybrid system having a plurality of processors, memory operably connected to the processors, an operating system including a dispatcher loaded into the memory, a multithreaded application read into the memory, and a static performance analysis program loaded into the memory; wherein the static performance analysis program instructs at least one processor to perform static performance analysis on each of the threads, the static performance analysis program instructs at least one processor to assign each thread to a CPU class based on the static performance analysis, and the static performance analysis program instructs at least one processor to store each thread's CPU class.
Type: Application
Filed: April 22, 2009
Publication date: October 28, 2010
Applicant: International Business Machines Corporation
Inventors: Greg Mewhinney, Diane Flemming, David Whitworth, William Maron, Mysore Srinivas
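A minimal sketch of the classification step the abstract describes: each thread is assigned a CPU class from a statically derived metric. The use of CPI as the metric and the class boundaries are assumptions for illustration only:

```python
def classify_threads(threads, boundaries=(1.0, 3.0)):
    """Assign each thread a CPU class from a static performance metric.
    `threads` maps thread id -> estimated CPI (hypothetical metric);
    `boundaries` split the low/medium/high demand classes."""
    low, high = boundaries
    classes = {}
    for tid, cpi in threads.items():
        if cpi < low:
            classes[tid] = "cpu_bound"      # few stalls per instruction
        elif cpi < high:
            classes[tid] = "mixed"
        else:
            classes[tid] = "memory_bound"   # many stalls, poor CPI
    return classes
```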
-
Publication number: 20080098397
Abstract: A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poor performing threads that are being run on each of the SMT processors are identified. After being identified, the poor performing threads are moved to a different SMT processor. Data is captured regarding the performance of threads. In one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time it was moved is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance is improved following the move.
Type: Application
Filed: December 13, 2007
Publication date: April 24, 2008
Inventors: Jos Accapadi, Andrew Dunshea, Dirk Michel, Mysore Srinivas
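A sketch of the migration pass the abstract describes: poor performers (here, threads whose CPI exceeds a threshold) are planned for a move to another processor, and each move is recorded with the thread's CPI and a timestamp so a later pass can judge whether the move helped. All names and the round-robin destination choice are illustrative assumptions:

```python
def migrate_poor_threads(procs, cpi_threshold, history, now=0.0):
    """Move each thread whose CPI exceeds the threshold to the next
    SMT processor, logging CPI and a timestamp per move."""
    n = len(procs)
    planned = []
    for i, threads in enumerate(procs):
        for t in threads:
            if t["cpi"] > cpi_threshold:
                planned.append((i, (i + 1) % n, t))
    for src, dst, t in planned:       # apply after planning, so a moved
        procs[src].remove(t)          # thread is not re-moved this pass
        procs[dst].append(t)
        history.append({"tid": t["tid"], "cpi": t["cpi"],
                        "src": src, "dst": dst, "ts": now})
    return [t["tid"] for _, _, t in planned]
```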
-
Publication number: 20070101333
Abstract: A first collection of threads which represent a collection of tasks to be executed by at least one of a collection of processing units is monitored. In response to detecting a request by a first thread among the first collection of threads to access a shared resource locked by a second thread among the collection of threads, the first thread attempts to access a list associated with the shared resource. The list orders at least one thread among the collection of threads by priority of access to the shared resource. In response to determining the list is locked by a third thread among the collection of threads, the first thread is placed into a sleep state to be reawakened in a fixed period of time. In response to determining that at least one of the collection of processing units has entered into an idle state, the first thread is awakened from the sleep state before the fixed period of time has expired.
Type: Application
Filed: October 27, 2005
Publication date: May 3, 2007
Inventors: Greg Mewhinney, Mysore Srinivas
-
Publication number: 20070061810
Abstract: A method and system for providing access to a shared resource utilizing selective locking are disclosed. According to one embodiment, a method is provided comprising receiving a request to perform a resource access operation on a shared resource, invoking a first routine to perform the resource access operation, detecting a data processing system exception generated in response to invoking the first routine, and invoking a second routine to perform the resource access operation in response to such detecting. In the described embodiment, the first routine comprises a dereference instruction to dereference a pointer to memory associated with the shared resource, and the second routine comprises a lock acquisition instruction to acquire a global lock associated with the shared resource prior to a performance of the resource access operation and a lock release instruction to release the global lock once the resource access operation has been performed.
Type: Application
Filed: September 15, 2005
Publication date: March 15, 2007
Inventors: David Mehaffy, Greg Mewhinney, Mysore Srinivas
-
Publication number: 20070038809
Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
Type: Application
Filed: August 11, 2005
Publication date: February 15, 2007
Inventors: Jos Accapadi, Andrew Dunshea, Greg Mewhinney, Mysore Srinivas
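The mark-and-age rule can be sketched in a few lines: entries whose partition identifier matches the previous partition's identifier are marked and aged more slowly. The aging increments are arbitrary illustrative values:

```python
def age_cache(entries, prev_pid, fast=4, slow=1):
    """Age cache entries moved into a lower cache level: an entry whose
    last-accessor partition id matches the previous partition id (from
    the processor register) is marked and aged at a slower rate."""
    for e in entries:
        e["marked"] = (e["pid"] == prev_pid)
        e["age"] += slow if e["marked"] else fast
    return entries
```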
-
Publication number: 20070033371
Abstract: A computer implemented method, apparatus, and computer usable code for managing cache information in a logical partitioned data processing system. A determination is made as to whether a unique identifier in a tag associated with a cache entry in a cache matches a previous unique identifier for a currently executing partition in the logical partitioned data processing system when the cache entry is selected for removal from the cache, and the tag is saved in a storage device if the partition identifier in the tag matches the previous unique identifier.
Type: Application
Filed: August 4, 2005
Publication date: February 8, 2007
Inventors: Andrew Dunshea, Greg Mewhinney, Mysore Srinivas
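A sketch of the eviction-time check the abstract describes, with a plain dict standing in for the cache and a list for the side storage device (all names hypothetical):

```python
def evict(cache, victim_key, prev_pid, tag_store):
    """On eviction, save the victim's tag to a side store when its
    partition id matches the previously running partition's id, so the
    entry can be recognized when that partition runs again."""
    tag = cache.pop(victim_key)
    if tag["pid"] == prev_pid:
        tag_store.append((victim_key, tag))
    return tag
```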
-
Publication number: 20060288186
Abstract: A system and method for dynamically altering Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value, based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available so that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
Type: Application
Filed: August 8, 2006
Publication date: December 21, 2006
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jos Accapadi, Andrew Dunshea, Li Li, Grover Neuman, Mysore Srinivas, David Hepkin
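The throttling policy can be sketched as a simple three-way decision on free memory. The thresholds and the "halve maxpgahead" reduction are illustrative assumptions; the abstract only says the value is decreased:

```python
def adjust_readahead(free_pages, low_mark, crit_mark, maxpgahead):
    """Pick the sequential read-ahead setting for the current amount
    of free memory: normal above the low mark, reduced when merely
    low, disabled when critically low."""
    if free_pages <= crit_mark:
        return 0                          # turn read-ahead off entirely
    if free_pages <= low_mark:
        return max(1, maxpgahead // 2)    # shrink maxpgahead
    return maxpgahead                     # normal operation
```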
-
Publication number: 20060282620
Abstract: The present invention provides a method, system, and apparatus for communicating to an associative cache which data is least important to keep. The method, system, and apparatus determine which cache line has the least important data so that this less important data is replaced before more important data. In a preferred embodiment, the method begins by determining the weight of each cache line within the cache. Then the cache line or lines with the lowest weight are identified.
Type: Application
Filed: June 14, 2005
Publication date: December 14, 2006
Inventors: Sujatha Kashyap, Mysore Srinivas
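The victim-selection step reduces to picking the lines tied for the lowest weight, as in this sketch (data shapes are hypothetical):

```python
def choose_victims(lines):
    """Return the cache line(s) carrying the least important data,
    i.e. all lines tied for the lowest weight."""
    lowest = min(line["weight"] for line in lines)
    return [line for line in lines if line["weight"] == lowest]
```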
-
Publication number: 20060155936
Abstract: A multiprocessor data processing system (MDPS) with a weakly-ordered architecture providing processing logic for substantially eliminating the issuance of sync instructions after every store instruction of a well-behaved application. Instructions of a well-behaved application are translated and executed by a weakly-ordered processor. The processing logic includes a lock address tracking utility (LATU), which provides an algorithm and a table of lock addresses, within which each lock address is stored when the lock is acquired by the weakly-ordered processor. When a store instruction is encountered in the instruction stream, the LATU compares the target address of the store instruction against the table of lock addresses. If the target address matches one of the lock addresses, indicating that the store instruction is the corresponding unlock instruction (or lock release instruction), a sync instruction is issued ahead of the store operation.
Type: Application
Filed: December 7, 2004
Publication date: July 13, 2006
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Andrew Dunshea, Satya Sharma, Mysore Srinivas
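The LATU check can be sketched as a filter over the store stream: a store whose target address is in the tracked lock-address table is treated as an unlock, and a sync is emitted ahead of it. The tuple-based instruction encoding is purely illustrative:

```python
def emit_stores(stores, lock_addrs):
    """Expand a stream of store target addresses into the sequence a
    weakly-ordered core would issue: a store to a tracked lock address
    is a lock-release store, so a sync is inserted ahead of it."""
    out = []
    for addr in stores:
        if addr in lock_addrs:        # matches the LATU table: unlock
            out.append(("sync",))
        out.append(("store", addr))
    return out
```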
-
Publication number: 20060047911
Abstract: A method, apparatus, and computer instructions for accessing data. In response to identifying a transaction requiring data, address information is obtained for the data. The address information includes an indication of whether the data is unlikely to be located on remote caches for local nodes. The remote caches for local nodes are searched if the indication is present in the address information. The data is requested from main memory if the indication is absent.
Type: Application
Filed: September 2, 2004
Publication date: March 2, 2006
Applicant: International Business Machines Corporation
Inventors: Greg Mewhinney, Mysore Srinivas
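The abstract's polarity reads ambiguously (the indication marks data as "unlikely" to be on remote caches, yet the search is described as gated on its presence); the sketch below takes the interpretation that the hint, when set, lets the lookup skip the remote-cache search and go straight to main memory. Names and structures are hypothetical:

```python
def fetch(addr, hint_unlikely_remote, remote_caches, main_memory):
    """Honor the address-information hint: when data is flagged as
    unlikely to be on remote caches, skip the remote-cache search and
    request it from main memory directly."""
    if not hint_unlikely_remote:
        for cache in remote_caches:
            if addr in cache:
                return cache[addr], "remote_cache"
    return main_memory[addr], "main_memory"
```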
-
Publication number: 20060037020
Abstract: Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing a value associated with the highest interrupt priority in the current processor priority register.
Type: Application
Filed: August 12, 2004
Publication date: February 16, 2006
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Jos Accapadi, Mathew Accapadi, Andrew Dunshea, Mark Hack, Agustin Mena, Mysore Srinivas
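A sketch of the dispatch path: when the thread control block carries the interrupt-mask flag, the chosen processor's current processor priority register (CPPR) is written with the value for the highest interrupt priority, making it least favored for interrupts. The numeric encoding and the trivial processor-selection policy are assumptions:

```python
HIGHEST_INTERRUPT_PRIORITY = 0   # illustrative CPPR encoding

def dispatch(thread, processors):
    """Dispatch a ready thread; if its control block has the interrupt
    mask flag set, first mark the selected processor least favored for
    interrupts by writing the highest-priority value into its CPPR."""
    cpu = processors[0]              # trivial selection policy (sketch)
    if thread["interrupt_mask"]:
        cpu["cppr"] = HIGHEST_INTERRUPT_PRIORITY
    cpu["running"] = thread["tid"]
    return cpu
```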
-
Publication number: 20050283573
Abstract: A method, computer program product, and a data processing system for maintaining objects in a lookup cache is provided. A primary list is populated with a first plurality of objects. The primary list is an unordered list of the first plurality of objects. A secondary list is populated with a second plurality of objects. The secondary list is an ordered list of the second plurality of objects. Periodically, at least one object of the first plurality of objects is demoted to the secondary list, and at least one object of the second plurality of objects is promoted to the primary list.
Type: Application
Filed: June 17, 2004
Publication date: December 22, 2005
Applicant: International Business Machines Corporation
Inventors: Greg Mewhinney, Mysore Srinivas
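The periodic maintenance step can be sketched as one demotion plus one promotion between the two lists. The `rank` key used to keep the secondary list ordered is a hypothetical ordering criterion; the abstract does not specify one:

```python
def rebalance(primary, secondary):
    """Periodic maintenance on a lookup cache: demote one object from
    the unordered primary list to the ordered secondary list, and
    promote the best-ranked secondary object into the primary list."""
    demoted = primary.pop()                  # any victim: unordered list
    promoted = secondary.pop(0)              # head of the ordered list
    primary.append(promoted)
    secondary.append(demoted)
    secondary.sort(key=lambda o: o["rank"])  # keep secondary ordered
    return promoted, demoted
```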
-
Publication number: 20050257020
Abstract: A method, system, and program for dynamic memory management of unallocated memory in a logical partitioned data processing system. A logical partitioned data processing system typically includes multiple memory units, processors, I/O adapters, and other resources enabled for allocation to multiple logical partitions. A partition manager operating within the data processing system manages allocation of the resources to each logical partition. In particular, the partition manager manages allocation of a first portion of the multiple memory units to at least one logical partition. In addition, the partition manager manages a memory pool of unallocated memory from among the multiple memory units. Responsive to receiving a request for a memory loan from one of the allocated logical partitions, a second selection of memory units from the memory pool is loaned to the requesting logical partition.
Type: Application
Filed: May 13, 2004
Publication date: November 17, 2005
Applicant: International Business Machines Corporation
Inventors: Sujatha Kashyap, Mysore Srinivas
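The loan operation can be sketched as moving memory units from the unallocated pool into the requester's allocation, capped by what the pool actually holds. Names and the list-based representation are assumptions:

```python
def loan_memory(pool, partitions, requester, amount):
    """Loan unallocated memory units from the pool to the requesting
    logical partition; only what the pool can cover is granted."""
    granted = min(amount, len(pool))
    units = [pool.pop() for _ in range(granted)]
    partitions.setdefault(requester, []).extend(units)
    return units
```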
-
Publication number: 20050235125
Abstract: A system and method for dynamically altering Virtual Memory Manager (VMM) Sequential-Access Read Ahead settings based upon current system memory conditions is provided. Normal VMM operations are performed using the Sequential-Access Read Ahead values set by the user. When low memory is detected, the system either turns off Sequential-Access Read Ahead operations or decreases the maximum page ahead (maxpgahead) value, based upon whether the amount of free space is simply low or has reached a critically low level. The altered VMM Sequential-Access Read Ahead state remains in effect until enough free space is available so that normal VMM Sequential-Access Read Ahead operations can be performed (at which point the altered Sequential-Access Read Ahead values are reset to their original levels).
Type: Application
Filed: April 20, 2004
Publication date: October 20, 2005
Applicant: International Business Machines Corporation
Inventors: Jos Accapadi, Andrew Dunshea, Li Li, Grover Neuman, Mysore Srinivas, David Hepkin
-
Publication number: 20050198635
Abstract: In a multiprocessor system where each processor has the capacity to execute multiple hardware threads, a method, system, and program for monitoring the percentage usage of the total capacity of the physical processors is provided. A processor capacity monitor calculates a logical usage percentage of each of the available hardware threads. In addition, the processor capacity monitor calculates a physical usage percentage of each of the processors by each of the available threads. Then, the processor capacity monitor multiplies the logical usage percentage and physical usage percentage for each of the threads and sums the result. The summed result is divided by the number of physical processors to determine the percentage usage of the total capacity of the physical processors.
Type: Application
Filed: February 26, 2004
Publication date: September 8, 2005
Applicant: International Business Machines Corporation
Inventors: Bret Olszewski, Luc Smolders, Mysore Srinivas
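The calculation described above is a direct sum of products. In the sketch below, usage percentages are represented as fractions in [0, 1]; the field names are hypothetical:

```python
def total_capacity_usage(threads, num_processors):
    """Per the abstract: multiply each hardware thread's logical usage
    by its physical usage, sum over threads, then divide by the number
    of physical processors to get total capacity usage."""
    total = sum(t["logical_pct"] * t["physical_pct"] for t in threads)
    return total / num_processors
```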
-
Publication number: 20050086660
Abstract: A system and method for identifying compatible threads in a Simultaneous Multithreading (SMT) processor environment is provided by calculating a performance metric, such as CPI, that occurs when two threads are running on the SMT processor. The CPI that is achieved when both threads were executing on the SMT processor is determined. If the CPI that was achieved is better than the compatibility threshold, then information indicating the compatibility is recorded. When a thread is about to complete, the scheduler looks at the run queue to which the completing thread belongs to dispatch another thread. The scheduler identifies a thread that is (1) compatible with the thread that is still running on the SMT processor (i.e., the thread that is not about to complete), and (2) ready to execute. The CPI data is continually updated so that threads that are compatible with one another are continually identified.
Type: Application
Filed: September 25, 2003
Publication date: April 21, 2005
Applicant: International Business Machines Corporation
Inventors: Jos Accapadi, Andrew Dunshea, Dirk Michel, Mysore Srinivas
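The selection step (find a ready thread recorded as compatible with the one still running) can be sketched with a set of compatible pairs standing in for the recorded CPI data. The pair-set representation and fallback to the queue head are illustrative assumptions:

```python
def pick_next(run_queue, running_tid, compat_pairs):
    """When a thread is about to complete, pick from its run queue a
    thread that is both ready and recorded as compatible (good combined
    CPI) with the thread still running on the SMT processor."""
    for t in run_queue:
        if t["ready"] and (running_tid, t["tid"]) in compat_pairs:
            return t
    return run_queue[0] if run_queue else None   # fallback: queue head
```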
-
Publication number: 20050081183
Abstract: A system and method for scheduling threads in a Simultaneous Multithreading (SMT) processor environment utilizing multiple SMT processors is provided. Poor performing threads that are being run on each of the SMT processors are identified. After being identified, the poor performing threads are moved to a different SMT processor. Data is captured regarding the performance of threads. In one embodiment, this data includes each thread's CPI value. When a thread is moved, data regarding the thread and its performance at the time it was moved is recorded along with a timestamp. The data regarding previous moves is used to determine whether a thread's performance is improved following the move.
Type: Application
Filed: September 25, 2003
Publication date: April 14, 2005
Applicant: International Business Machines Corporation
Inventors: Jos Accapadi, Andrew Dunshea, Dirk Michel, Mysore Srinivas