Patents by Inventor K. Anand

K. Anand has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8327085
    Abstract: An approach is provided that uses a hypervisor to allocate a shared memory pool amongst a set of partitions (e.g., guest operating systems) being managed by the hypervisor. The hypervisor retrieves memory related metrics from shared data structures stored in a memory, with each of the shared data structures corresponding to a different one of the partitions. The memory related metrics correspond to a usage of the shared memory pool allocated to the corresponding partition. The hypervisor identifies a memory stress associated with each of the partitions with this identification based in part on the memory related metrics retrieved from the shared data structures. The hypervisor then reallocates the shared memory pool amongst the plurality of partitions based on the identified memory stress of the plurality of partitions.
    Type: Grant
    Filed: May 5, 2010
    Date of Patent: December 4, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Richard Louis Arndt, David Alan Hepkin, Sergio Reyes, Kenneth Charles Vossen
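The pool-rebalancing idea in patent 8327085 above can be illustrated with a small sketch. The stress formula, field names, and weights below are hypothetical; the patent only requires that per-partition metrics read from shared data structures drive the reallocation.

```python
# Illustrative sketch only: reallocating a shared memory pool in proportion to
# per-partition memory stress. The stress formula and all names are hypothetical.
from dataclasses import dataclass

@dataclass
class PartitionMetrics:
    name: str
    pool_pages_used: int     # pages of the shared pool currently in use
    page_fault_rate: float   # recent faults per second, read from the shared structure

def stress(m: PartitionMetrics) -> float:
    # Hypothetical stress score: weight recent faulting more heavily than raw usage.
    return 0.7 * m.page_fault_rate + 0.3 * m.pool_pages_used

def reallocate(pool_pages: int, metrics: list[PartitionMetrics]) -> dict[str, int]:
    total = sum(stress(m) for m in metrics) or 1.0
    return {m.name: int(pool_pages * stress(m) / total) for m in metrics}

if __name__ == "__main__":
    parts = [PartitionMetrics("lpar1", 4000, 120.0),
             PartitionMetrics("lpar2", 2500, 10.0),
             PartitionMetrics("lpar3", 1000, 300.0)]
    print(reallocate(pool_pages=16384, metrics=parts))
```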
  • Patent number: 8301840
    Abstract: Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: October 30, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Diane G. Flemming, William A. Maron, Mysore S. Srinivas
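A minimal sketch of the eviction gating described in patent 8301840 above, assuming a hypothetical round-robin victim search; the patent itself does not prescribe a particular replacement policy.

```python
# Sketch: a fixed fraction of cache lines is marked so that only high-priority
# virtual processors may evict them. Names and the victim policy are hypothetical.
class Cache:
    def __init__(self, num_lines: int, high_prio_fraction: float):
        self.lines = [None] * num_lines                      # cached tags
        reserved = int(num_lines * high_prio_fraction)
        self.high_prio_only = [i < reserved for i in range(num_lines)]
        self.next_victim = 0                                 # round-robin victim pointer

    def insert(self, tag, requester_is_high_prio: bool):
        for _ in range(len(self.lines)):
            i = self.next_victim
            self.next_victim = (self.next_victim + 1) % len(self.lines)
            if self.high_prio_only[i] and not requester_is_high_prio:
                continue                                     # low priority may not evict this line
            self.lines[i] = tag
            return i
        return None                                          # no evictable line for this requester

cache = Cache(num_lines=8, high_prio_fraction=0.5)
print(cache.insert("A", requester_is_high_prio=False))       # lands in an unreserved line
print(cache.insert("B", requester_is_high_prio=True))        # may use any line
```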
  • Patent number: 8302102
    Abstract: An approach is provided for improving system resource utilization in a data processing system. A determination is made as to whether there is at least one ceded virtual processor in a plurality of virtual processors in a shared resource pool. Responsive to existence of the at least one ceded virtual processor, a determination is made as to whether there is at least one dedicated logical partition configured for a hybrid mode. Responsive to identifying at least one hybrid configured dedicated logical partition, a determination is made as to whether the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles. If the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles, the at least one ceded virtual processor is deallocated from the plurality of virtual processors and allocated to a surrogate resource pool for use by the at least one hybrid configured dedicated logical partition.
    Type: Grant
    Filed: February 27, 2008
    Date of Patent: October 30, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Ananda K. Venkataraman
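The donation flow in patent 8302102 above might look roughly like the following; partition and virtual-processor names are illustrative only.

```python
# Hypothetical sketch: ceded virtual processors are moved from the shared pool into
# a surrogate pool when a dedicated partition in hybrid mode needs extra cycles.
shared_pool = [
    {"vp": "vp0", "ceded": True},
    {"vp": "vp1", "ceded": False},
    {"vp": "vp2", "ceded": True},
]
dedicated_partitions = [
    {"name": "db_lpar",  "hybrid": True,  "needs_extra_cycles": True},
    {"name": "web_lpar", "hybrid": False, "needs_extra_cycles": True},
]
surrogate_pool = []

ceded = [e for e in shared_pool if e["ceded"]]
if ceded and any(p["hybrid"] and p["needs_extra_cycles"] for p in dedicated_partitions):
    for entry in ceded:
        shared_pool.remove(entry)       # deallocate from the shared pool
        surrogate_pool.append(entry)    # make it available to hybrid dedicated partitions

print("surrogate pool:", [e["vp"] for e in surrogate_pool])
```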
  • Patent number: 8291430
    Abstract: A mechanism is provided for optimizing system performance using spare processing cores in a virtualized environment. When it is detected that a workload partition needs to run on a virtual processor in the virtualized system, a state of the virtual processor is changed to a wait state. A first node comprising memory that is local to the workload partition is determined. A determination is also made as to whether a non-spare processor core in the first node is available to run the workload partition. If no non-spare processor core is available, a free non-spare processor core in a second node is located, and the state of the free non-spare processor core in the second node is changed to an inactive state. The state of a spare processor core in the first node is changed to an active state, and the workload partition is dispatched to the spare processor core in the first node for execution.
    Type: Grant
    Filed: July 10, 2009
    Date of Patent: October 16, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Mysore Sathyanarayana Srinivas
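A sketch of the core-swapping decision in patent 8291430 above, using hypothetical node bookkeeping rather than real firmware state.

```python
# Sketch: prefer a non-spare core on the node whose memory is local to the workload
# partition; otherwise deactivate a free non-spare core elsewhere and activate a
# spare core on the local node. All names and structures are hypothetical.
def place_workload(nodes: dict, local_node: str) -> str:
    local = nodes[local_node]
    if local["free_non_spare"] > 0:
        local["free_non_spare"] -= 1
        return f"dispatched on non-spare core in {local_node}"
    for name, node in nodes.items():
        if name != local_node and node["free_non_spare"] > 0:
            node["free_non_spare"] -= 1                  # put the remote core in an inactive state
            node["inactive"] = node.get("inactive", 0) + 1
            local["spare"] -= 1                          # activate a spare core on the local node
            return f"dispatched on activated spare core in {local_node}"
    return "no core available; workload waits"

nodes = {"node0": {"free_non_spare": 0, "spare": 2},
         "node1": {"free_non_spare": 3, "spare": 1}}
print(place_workload(nodes, local_node="node0"))
```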
  • Patent number: 8291148
    Abstract: Methods and apparatus are provided for virtualizing resources including peripheral components and peripheral interfaces. Peripheral components such as hardware accelerators and peripheral interfaces such as port adapters are offloaded from individual servers onto a resource virtualization switch. Multiple servers are connected to the resource virtualization switch over an I/O bus fabric such as PCI Express or PCI-AS. The resource virtualization switch allows efficient access, sharing, management, and allocation of resources.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: October 16, 2012
    Assignee: Xsigo Systems, Inc.
    Inventors: Shreyas Shah, Subramaniam Vinod, Ramalingam K. Anand, Ashok Krishnamurthi
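Patent 8291148 above describes hardware, but the allocation role of the resource virtualization switch can be sketched in software; the class, policy, and adapter names below are purely illustrative.

```python
# Conceptual sketch only: a resource virtualization switch exposing a shared pool of
# physical adapters to multiple servers as virtual devices. The allocation policy
# and all identifiers are hypothetical.
class ResourceVirtualizationSwitch:
    def __init__(self, physical_adapters):
        self.free = list(physical_adapters)
        self.assignments = {}            # server -> list of (virtual handle, physical adapter id)

    def attach(self, server: str, kind: str):
        for adapter in self.free:
            if adapter["kind"] == kind:
                self.free.remove(adapter)
                handle = f"v{kind}-{server}"
                self.assignments.setdefault(server, []).append((handle, adapter["id"]))
                return handle
        return None                      # no adapter of this kind left to share out

switch = ResourceVirtualizationSwitch([{"id": "nic0", "kind": "nic"},
                                       {"id": "hba0", "kind": "hba"}])
print(switch.attach("server1", "nic"))   # server1 gets a virtual NIC backed by nic0
print(switch.attach("server2", "nic"))   # pool exhausted for NICs -> None
```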
  • Patent number: 8271989
    Abstract: The present invention provides a computer implemented method, data processing system, and computer program product for mapping and dispatching virtual processors in a data processing system having at least a first partition and a second partition. The data processing system runs a first partition on a virtual processor during a first timeslice. The data processing system identifies at least one physical page used by the first partition and the second partition. The data processing system maps the at least one physical page to the first partition and the second partition. The data processing system determines a fitness value based on the mapping. The data processing system dispatches the virtual processor to the second partition on a second timeslice based on the fitness value, wherein the second timeslice immediately succeeds the first timeslice, whereby the at least one physical page remains in cache during at least the first timeslice and the second timeslice.
    Type: Grant
    Filed: February 7, 2008
    Date of Patent: September 18, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Peter J. Heyrman, Bret R. Olszewski
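A toy version of the fitness-driven dispatch in patent 8271989 above; the fitness function here simply counts shared pages, which is one plausible reading rather than the patented formula.

```python
# Illustrative sketch: score candidate partitions by how many physical pages they
# share with the partition that just ran, and dispatch the best-scoring one in the
# immediately succeeding timeslice so those pages stay warm in cache.
def fitness(pages_a: set, pages_b: set) -> int:
    return len(pages_a & pages_b)        # pages mapped by both partitions

def next_partition(just_ran: str, page_maps: dict) -> str:
    candidates = [p for p in page_maps if p != just_ran]
    return max(candidates, key=lambda p: fitness(page_maps[just_ran], page_maps[p]))

page_maps = {"lpar1": {1, 2, 3, 4}, "lpar2": {3, 4, 5}, "lpar3": {9, 10}}
print(next_partition("lpar1", page_maps))   # lpar2 shares the most pages with lpar1
```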
  • Publication number: 20120173923
    Abstract: Application instructions are enabled to access mathematical functions from an accelerated function library in order to perform the instructions. In performing the instructions, a predefined test instruction is applied to a value, the value being at least one of an input argument, an intermediate result, or a final result, to determine whether the value is a general case or a predetermined special case. Responsive to a determination that the value is a special case, a predetermined set of special-case instructions is performed for the performance of the mathematical function.
    Type: Application
    Filed: December 31, 2010
    Publication date: July 5, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Robert F. Enenkel, Robert W. Hay, Martin S. Schmookler, Christopher K. Anand
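The general-case/special-case split in publication 20120173923 above, sketched for an exponential function; the choice of function and the overflow/underflow thresholds are assumptions, not taken from the publication.

```python
# Hedged sketch: a test applied to the input value routes special cases to dedicated
# handling, while ordinary inputs take the general-case (accelerated) path.
import math

def fast_exp(x: float) -> float:
    # Predefined special-case tests on the input value (thresholds are illustrative).
    if math.isnan(x):
        return float("nan")              # special-case instructions
    if x > 709.0:
        return float("inf")              # would overflow the general-case path
    if x < -745.0:
        return 0.0                       # would underflow to zero
    return math.exp(x)                   # general-case path

for v in (1.0, 1000.0, -1000.0, float("nan")):
    print(v, "->", fast_exp(v))
```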
  • Publication number: 20120124254
    Abstract: Techniques for estimating processor load by using queue depth information of a peripheral adapter provide processor loading information that can be used to adapt interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load can then be used to set a parameter that controls the frequency of an interrupt generator. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths.
    Type: Application
    Filed: December 30, 2011
    Publication date: May 17, 2012
    Inventors: Vaijayanthimala K. Anand, Janice Marie Girouard, Emily Jane Ratliff
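The queue-depth heuristic in publication 20120124254 above, sketched with made-up thresholds and hold-off values; only the ratio-versus-historical-value comparison comes from the abstract.

```python
# Sketch: compare the transmit/receive queue-depth ratio against its historical value
# and use the estimated load to pick an interrupt hold-off (coalescing) setting.
# All thresholds and hold-off times are hypothetical.
def estimate_load(tx_depth: int, rx_depth: int, historical_ratio: float) -> str:
    ratio = tx_depth / max(rx_depth, 1)
    if ratio < 0.5 * historical_ratio:
        return "high"        # transmissions lagging receives -> processor looks busy
    if ratio > 2.0 * historical_ratio:
        return "low"
    return "normal"

def interrupt_holdoff_us(load: str) -> int:
    return {"high": 200, "normal": 50, "low": 10}[load]

load = estimate_load(tx_depth=40, rx_depth=400, historical_ratio=1.0)
print(load, interrupt_holdoff_us(load))
```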
  • Patent number: 8180949
    Abstract: Methods and apparatus are provided for virtualizing resources including peripheral components and peripheral interfaces. Peripheral components such as hardware accelerators and peripheral interfaces such as port adapters are offloaded from individual servers onto a resource virtualization switch. Multiple servers are connected to the resource virtualization switch over an I/O bus fabric such as PCI Express or PCI-AS. The resource virtualization switch allows efficient access, sharing, management, and allocation of resources.
    Type: Grant
    Filed: September 9, 2011
    Date of Patent: May 15, 2012
    Assignee: Xsigo Systems, Inc.
    Inventors: Shreyas Shah, Subramaniam Vinod, Ramalingam K. Anand, Ashok Krishnamurthi
  • Patent number: 8156498
    Abstract: A mechanism is provided for biasing placement of a software thread on a currently idle and dispatched processor. The operating system starts with the last logical processor on which the software thread ran, determines whether that processor is idle and dispatched, and considers each logical processor in turn until a currently dispatched and idle logical processor is found. If a currently dispatched and idle logical processor is not found, then the operating system biases placing the software thread on an idle logical processor.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: April 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Dean J. Burdick, Bret R. Olszewski
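A compact sketch of the placement bias in patent 8156498 above; the fallback ordering is an assumption.

```python
# Sketch: start with the processor the thread last ran on, walk the logical
# processors looking for one that is both dispatched and idle, then fall back to
# any idle processor. Names are hypothetical.
def pick_processor(last_ran: int, dispatched: list[bool], idle: list[bool]) -> int:
    n = len(dispatched)
    for step in range(n):
        cpu = (last_ran + step) % n      # begin with the last processor used
        if dispatched[cpu] and idle[cpu]:
            return cpu
    for cpu in range(n):
        if idle[cpu]:                    # fallback: bias toward any idle processor
            return cpu
    return last_ran                      # nothing idle; keep prior affinity

print(pick_processor(last_ran=2,
                     dispatched=[True, False, False, True],
                     idle=[False, True, True, True]))   # -> 3
```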
  • Patent number: 8122167
    Abstract: A software thread is dispatched for causing the system to poll a device for determining whether a condition has occurred. Subsequently, the software thread is undispatched and, in response thereto, an interrupt is enabled on the device, so that the device is enabled to generate the interrupt in response to an occurrence of the condition, and so that the system ceases polling the device for determining whether the condition has occurred. Eventually, the software thread is redispatched and, in response thereto, the interrupt is disabled on the device, so that the system resumes polling the device for determining whether the condition has occurred.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: February 21, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Ronen Grosman, Michael E. Lyons, Bret R. Olszewski
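The dispatch-driven switch between polling and interrupts in patent 8122167 above, sketched with a stand-in Device class rather than a real driver API.

```python
# Sketch: while the software thread is dispatched the system polls the device; when
# the thread is undispatched the device interrupt is enabled instead.
class Device:
    def __init__(self):
        self.interrupts_enabled = False

    def enable_interrupt(self):
        self.interrupts_enabled = True    # device will raise an interrupt on the condition

    def disable_interrupt(self):
        self.interrupts_enabled = False   # condition will be discovered by polling instead

def on_thread_dispatched(dev: Device):
    dev.disable_interrupt()               # thread is running: resume polling the device

def on_thread_undispatched(dev: Device):
    dev.enable_interrupt()                # thread is off the CPU: let the device interrupt

dev = Device()
on_thread_dispatched(dev)
print("polling:", not dev.interrupts_enabled)
on_thread_undispatched(dev)
print("polling:", not dev.interrupts_enabled)
```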
  • Publication number: 20120036292
    Abstract: A software thread is dispatched for causing the system to poll a device for determining whether a condition has occurred. Subsequently, the software thread is undispatched and, in response thereto, an interrupt is enabled on the device, so that the device is enabled to generate the interrupt in response to an occurrence of the condition, and so that the system ceases polling the device for determining whether the condition has occurred. Eventually, the software thread is redispatched and, in response thereto, the interrupt is disabled on the device, so that the system resumes polling the device for determining whether the condition has occurred.
    Type: Application
    Filed: August 6, 2010
    Publication date: February 9, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vaijayanthimala K. Anand, Ronen Grosman, Michael E. Lyons, Bret R. Olszewski
  • Patent number: 8112555
    Abstract: Interrupt frequency control by estimating processor load in the peripheral adapter provides adaptive interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load is then used to set a parameter that controls the frequency of an interrupt generator, which may be controlled by setting an interrupt queue depth threshold, packet frequency threshold or interrupt hold-off time value. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths.
    Type: Grant
    Filed: August 28, 2009
    Date of Patent: February 7, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Janice Marie Girouard, Emily Jane Ratliff
  • Patent number: 8108866
    Abstract: A mechanism is provided for determining whether to use cache affinity as a criterion for software thread dispatching in a shared processor logical partitioning data processing system. The server firmware may store data about when and/or how often logical processors are dispatched. Given these data, the operating system may collect metrics. Using the logical processor metrics, the operating system may determine whether cache affinity is likely to provide a significant performance benefit relative to the cost of dispatching a particular logical processor to the operating system.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: January 31, 2012
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Dean J. Burdick, Bret R. Olszewski
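One possible reading of the affinity decision in patent 8108866 above; the specific metrics and thresholds are invented for illustration.

```python
# Hypothetical sketch: from firmware-recorded dispatch data, estimate whether a
# logical processor still holds this workload's cache footprint and whether the
# expected benefit justifies dispatching to it. Thresholds are illustrative only.
import time

def affinity_worthwhile(last_dispatch_ts: float, dispatches_per_sec: float,
                        now: float, decay_window_s: float = 0.01) -> bool:
    recently_ran = (now - last_dispatch_ts) < decay_window_s   # footprint likely still cached
    frequently_ran = dispatches_per_sec > 100.0                # processor keeps being dispatched here
    return recently_ran and frequently_ran

now = time.monotonic()
print(affinity_worthwhile(last_dispatch_ts=now - 0.002, dispatches_per_sec=500.0, now=now))
```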
  • Publication number: 20120005401
    Abstract: An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory.
    Type: Application
    Filed: June 30, 2010
    Publication date: January 5, 2012
    Applicant: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, David Navarro, Bret R. Olszewski, Sergio Reyes
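A sketch of the paging path in publication 20120005401 above, with dictionaries standing in for the shared pool and the reserved section of volatile memory.

```python
# Sketch only: when a virtual machine pages data out of (or into) the shared memory
# pool, the virtual machine monitor moves it to a reserved section of the same
# volatile memory instead of to backing storage. Structures are hypothetical.
shared_memory = {}        # page id -> data, shared across the virtual machines
reserved_section = {}     # reserved region of volatile memory used as the paging target

def vmm_page_out(page_id: str):
    # Triggered when either virtual machine pages this data out of shared memory.
    reserved_section[page_id] = shared_memory.pop(page_id)

def vmm_page_in(page_id: str):
    shared_memory[page_id] = reserved_section.pop(page_id)

shared_memory["p1"] = b"guest data"
vmm_page_out("p1")
print("p1 now in reserved section:", "p1" in reserved_section)
vmm_page_in("p1")
print("p1 back in shared memory:", "p1" in shared_memory)
```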
  • Patent number: 8076338
    Abstract: The present invention relates to compounds of the Formula (I) and (II) wherein R, R21, R25-R33, m, n, X21-X23, and Q1 are defined herein. The compounds modulate protein kinase enzymatic activity to modulate cellular activities such as proliferation, differentiation, programmed cell death, migration and chemoinvasion. Compounds of the invention inhibit, regulate and/or modulate kinases, particularly p70S6 and/or Akt kinases. Methods of using and preparing the compounds, and pharmaceutical compositions thereof, to treat kinase-dependent diseases and conditions are also an aspect of the invention.
    Type: Grant
    Filed: April 22, 2005
    Date of Patent: December 13, 2011
    Assignee: Exelixis, Inc.
    Inventors: Neel K. Anand, Charles M. Blazey, Owen Joseph Bowles, Joerg Bussenius, Lynne Canne Bannen, Diva Sze-Ming Chan, Baili Chen, Erick Wang Co, Simona Costanzo, Steven Charles Defina, Larisa Dubenko, Maurizio Franzini, Ping Huang, Vasu Jammalamadaka, Richard George Khoury, Moon Hwan Kim, Rhett Ronald Klein, Donna Tra Le, Morrison B. Mac, John M. Nuss, Jason Jevious Parks, Kenneth D. Rice, Tsze H. Tsang, Amy Lew Tsuhako, Yong Wang, Wei Xu
  • Publication number: 20110296146
    Abstract: A set of instructions for implementation in a floating-point unit or other computer processor hardware is disclosed herein. In one embodiment, an extended-range fused multiply-add operation, a first look-up operation, and a second look-up operation are each embodied in hardware instructions configured to be operably executed in a processor. These operations are accompanied by a table which provides a set of defined values in response to various function types, supporting the computation of elementary functions such as reciprocal, square, cube, fourth roots and their reciprocals, exponential, and logarithmic functions. By allowing each of these functions to be computed with a hardware instruction, branching and predicated execution may be reduced or eliminated, while also permitting the use of distributed instructions across a number of execution units.
    Type: Application
    Filed: May 27, 2010
    Publication date: December 1, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christopher K. Anand, Robert F. Enenkel, Anuroop Sharma, Daniel M. Zabawa
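The look-up-plus-FMA pattern in publication 20110296146 above, sketched in software for the reciprocal function; the table size and iteration count are assumptions.

```python
# Software sketch of the idea: a table look-up supplies a seed value, and iterations
# that would be fused multiply-adds in hardware refine it, here for the reciprocal.
def make_recip_table(bits: int = 6):
    # Seed table indexed by the leading fraction bits of x in [1, 2).
    size = 1 << bits
    return [1.0 / (1.0 + (i + 0.5) / size) for i in range(size)]

TABLE = make_recip_table()

def reciprocal(x: float, iterations: int = 2) -> float:
    assert 1.0 <= x < 2.0, "sketch handles the significand range only"
    idx = int((x - 1.0) * len(TABLE))
    y = TABLE[min(idx, len(TABLE) - 1)]          # look-up: seed value
    for _ in range(iterations):
        y = y * (2.0 - x * y)                    # Newton step; fused multiply-adds in hardware
    return y

print(reciprocal(1.5), 1 / 1.5)
```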
  • Publication number: 20110276742
    Abstract: An approach is provided that uses a hypervisor to allocate a shared memory pool amongst a set of partitions (e.g., guest operating systems) being managed by the hypervisor. The hypervisor retrieves memory related metrics from shared data structures stored in a memory, with each of the shared data structures corresponding to a different one of the partitions. The memory related metrics correspond to a usage of the shared memory pool allocated to the corresponding partition. The hypervisor identifies a memory stress associated with each of the partitions with this identification based in part on the memory related metrics retrieved from the shared data structures. The hypervisor then reallocates the shared memory pool amongst the plurality of partitions based on the identified memory stress of the plurality of partitions.
    Type: Application
    Filed: May 5, 2010
    Publication date: November 10, 2011
    Applicant: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Richard Louis Arndt, David Alan Hepkin, Sergio Reyes, Kenneth Charles Vossen
  • Publication number: 20110258468
    Abstract: Requests for power reduction are handled by first enabling any partition to request an amount of power change, e.g. a reduction. In response to the request for power reduction, an equal proportion of the whole amount of power reduction is distributed across each of a set of cores providing the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change.
    Type: Application
    Filed: April 20, 2010
    Publication date: October 20, 2011
    Applicant: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, William A. Maron, Mysore Srinivas, Diane Garza Flemming
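The accounting described in publication 20110258468 above, sketched with arbitrary units; core and partition names are hypothetical.

```python
# Illustrative sketch: the requested power reduction is spread in equal shares across
# the cores backing the partitions, while the requesting partition's entitlement is
# reduced by the whole requested amount.
def apply_power_reduction(cores: dict, entitlements: dict, requester: str, amount: float):
    share = amount / len(cores)
    for core in cores:
        cores[core] -= share                     # equal proportion per core
    entitlements[requester] -= amount            # requester gives up the whole amount
    return cores, entitlements

cores = {"core0": 100.0, "core1": 100.0, "core2": 100.0, "core3": 100.0}
entitlements = {"lparA": 200.0, "lparB": 200.0}
print(apply_power_reduction(cores, entitlements, requester="lparA", amount=40.0))
```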
  • Patent number: 8041875
    Abstract: Methods and apparatus are provided for virtualizing resources including peripheral components and peripheral interfaces. Peripheral components such as hardware accelerators and peripheral interfaces such as port adapters are offloaded from individual servers onto a resource virtualization switch. Multiple servers are connected to the resource virtualization switch over an I/O bus fabric such as PCI Express or PCI-AS. The resource virtualization switch allows efficient access, sharing, management, and allocation of resources.
    Type: Grant
    Filed: October 14, 2008
    Date of Patent: October 18, 2011
    Assignee: Xsigo Systems, Inc.
    Inventors: Shreyas Shah, Subramaniam Vinod, Ramalingam K. Anand, Ashok Krishnamurthi