Patents by Inventor Seetharami R. Seelam
Seetharami R. Seelam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20180039524
Abstract: Using metadata of a layer, a prediction factor including a level of participation of the layer in a set of container images is computed. Each container image includes a corresponding set of layers and is usable to configure a container in a container-based virtualized data processing environment. Using a set of levels of participation corresponding to a set of layers, and using a condition in a prediction algorithm, a subset of layers that have to be pre-provisioned at a node is predicted. The subset of layers is adjusted, to form an adjusted subset of layers, by looking ahead at a container requirement of a workload that is planned for processing at a future time. The adjusted subset of layers is caused to be provisioned on the node prior to the future time.
Type: Application
Filed: August 3, 2016
Publication date: February 8, 2018
Applicant: International Business Machines Corporation
Inventors: Paolo Dettori, Andrew R. Low, Aaron J. Quirk, Seetharami R. Seelam, Michael J. Spreitzer, Malgorzata Steinder, Lin Sun
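The prediction described in this abstract can be sketched in a few lines: compute each layer's participation level across the image set, keep layers that satisfy a threshold condition, then adjust the result by looking ahead at the layers a planned workload will need. Everything below (the function name, the 0.5 threshold, the data shapes) is an illustrative assumption, not detail taken from the filing.

```python
from collections import Counter

def predict_layers_to_provision(images, planned_workload_layers, threshold=0.5):
    """Pick layers worth pre-provisioning on a node.

    images: container images, each given as a set of layer IDs.
    planned_workload_layers: layer IDs a future workload on this node will need.
    threshold: minimum participation level (fraction of images containing
        the layer) for the layer to be predicted.
    """
    if not images:
        return set(planned_workload_layers)

    # Participation level of each layer across the set of container images.
    counts = Counter(layer for image in images for layer in image)
    participation = {layer: n / len(images) for layer, n in counts.items()}

    # Prediction condition: keep layers whose participation meets the threshold.
    predicted = {layer for layer, level in participation.items() if level >= threshold}

    # Look-ahead adjustment: add the layers the planned workload requires.
    return predicted | set(planned_workload_layers)


# Example: a base layer shared by all three images is predicted; the planned
# workload's extra layer is added by the look-ahead adjustment.
images = [{"base", "python"}, {"base", "java"}, {"base", "python", "app"}]
print(predict_layers_to_provision(images, {"gpu-driver"}))
```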
-
Publication number: 20170371725
Abstract: A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core.
Type: Application
Filed: August 24, 2017
Publication date: December 28, 2017
Applicant: International Business Machines Corporation
Inventors: John DIVIRGILIO, Liana L. FONG, John LEWARS, Seetharami R. SEELAM, Brian F. VEALE
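On Linux, the split between an application set and an interference set of logical processors can be approximated from user space with CPU affinity. The sketch below is a rough analogue under that assumption, not the scheduler-level mechanism the application claims; the list of logical CPUs belonging to one core is assumed to be known (for example from /sys/devices/system/cpu).

```python
import os

def split_core(core_cpus, app_count):
    """Split one core's logical processors (SMT threads) into an application
    set and a set reserved for operating system interference events."""
    app_set = set(core_cpus[:app_count])
    os_set = set(core_cpus[app_count:]) or app_set  # degenerate single-thread core
    return app_set, os_set

def pin_task(pid, cpu_set):
    """Restrict a task so it is scheduled only on the given logical CPUs
    (Linux-only; pid 0 means the calling process)."""
    os.sched_setaffinity(pid, cpu_set)

# Example: a 4-way SMT core exposing logical CPUs 0-3; CPU 3 is left for
# OS noise, and the application's work is pinned to CPUs 0-2.
app_cpus, noise_cpus = split_core([0, 1, 2, 3], app_count=3)
pin_task(0, app_cpus)
```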
-
Patent number: 9836334
Abstract: A method, information processing system, and computer program product are provided for managing operating system interference on applications in a parallel processing system. A mapping of hardware multi-threading threads to at least one processing core is determined, and first and second sets of logical processors of the at least one processing core are determined. The first set includes at least one of the logical processors of the at least one processing core, and the second set includes at least one of a remainder of the logical processors of the at least one processing core. A processor schedules application tasks only on the logical processors of the first set of logical processors of the at least one processing core. Operating system interference events are scheduled only on the logical processors of the second set of logical processors of the at least one processing core.
Type: Grant
Filed: June 11, 2013
Date of Patent: December 5, 2017
Assignee: International Business Machines Corporation
Inventors: John Divirgilio, Liana L. Fong, John Lewars, Seetharami R. Seelam, Brian F. Veale
-
Publication number: 20170134518
Abstract: A method for reducing reactivation time of services that includes examining page faults that occur during processing of a service after the service has been inactive to provide a plurality of prefetch groups, and formulating a prefetch decision tree from page fault data in the prefetch groups. Pages from an initial page table for the service following a reactivated service request are then compared with the prefetched pages in the resident memory in accordance with the prefetch decision tree. Pages in the page table that are not included in said prefetched pages are paged in. A process to provide the service is executed using the page table. Executing the process substantially avoids page faults.
Type: Application
Filed: November 6, 2015
Publication date: May 11, 2017
Inventors: Bulent Abali, Hubertus Franke, Chung-Sheng Li, Seetharami R. Seelam
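A toy model of the idea, with the prefetch decision tree collapsed into a simple intersection of previously observed fault groups: remember which pages faulted on earlier reactivations, and on the next reactivation prefetch the ones that are not already resident. The class and method names are invented for illustration and are not from the filing.

```python
from collections import defaultdict

class ReactivationPrefetcher:
    """Toy prefetcher for services that were paged out while inactive."""

    def __init__(self):
        # service name -> sets of pages that faulted on past reactivations
        self.fault_groups = defaultdict(list)

    def record_faults(self, service, faulted_pages):
        """After a reactivation, remember which pages faulted (a prefetch group)."""
        self.fault_groups[service].append(set(faulted_pages))

    def pages_to_prefetch(self, service, resident_pages):
        """Pages that faulted on every recorded reactivation and are not
        already resident; prefetching them ahead of execution avoids faults."""
        groups = self.fault_groups[service]
        if not groups:
            return set()
        return set.intersection(*groups) - set(resident_pages)

prefetcher = ReactivationPrefetcher()
prefetcher.record_faults("search-svc", {0x10, 0x11, 0x20})
prefetcher.record_faults("search-svc", {0x10, 0x11, 0x30})
print(prefetcher.pages_to_prefetch("search-svc", resident_pages={0x11}))  # {16}
```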
-
Publication number: 20170097856
Abstract: Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored.
Type: Application
Filed: December 16, 2016
Publication date: April 6, 2017
Applicant: International Business Machines Corporation
Inventors: Keun Soo YIM, Seetharami R. SEELAM, Liana L. FONG, Arun IYENGAR, John LEWARS
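A minimal sketch of the two-interval scheme: parallel threads append detected noise events to a shared buffer during one computation interval, and the buffer is drained through a filtering condition during the next. The event layout and the 50-microsecond filter are assumptions for illustration only.

```python
import threading

class NoiseMonitor:
    """Collect system-noise samples from parallel threads during one
    computation interval, then filter them during the next."""

    def __init__(self, filter_condition):
        self.filter_condition = filter_condition  # e.g. keep long interruptions
        self.shared_buffer = []                   # written by all threads
        self.lock = threading.Lock()
        self.filtered = []

    def record(self, thread_id, noise_event):
        """First interval: a thread appends detected noise data to the shared buffer."""
        with self.lock:
            self.shared_buffer.append((thread_id, noise_event))

    def filter_interval(self):
        """Second interval: apply the filtering condition and store the result."""
        with self.lock:
            data, self.shared_buffer = self.shared_buffer, []
        self.filtered.extend(e for e in data if self.filter_condition(e))
        return self.filtered

# Example filter: keep only interruptions longer than 50 microseconds.
monitor = NoiseMonitor(lambda e: e[1]["duration_us"] > 50)
monitor.record(0, {"duration_us": 120, "source": "timer-interrupt"})
monitor.record(1, {"duration_us": 8, "source": "scheduler-tick"})
print(monitor.filter_interval())
```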
-
Patent number: 9590873
Abstract: Composite service provisioning is provided. One or more processors pre-provisions a first pool of service instances of a first composite service. One or more processors pre-provisions a second pool of service instances of a sub-service of the first composite service, wherein instances of the first pool of service instances have placeholder credentials identifying the second pool of service instances.
Type: Grant
Filed: December 10, 2015
Date of Patent: March 7, 2017
Assignee: International Business Machines Corporation
Inventors: Paolo Dettori, David C. Frank, Seetharami R. Seelam
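A rough sketch of pre-provisioned pools with placeholder credentials. The binding step that later replaces a placeholder with a concrete sub-service instance is an assumption about how the placeholders are eventually resolved, and all identifiers are invented.

```python
import uuid

def pre_provision(composite_count, sub_count):
    """Pre-provision a pool of composite-service instances and a pool of
    sub-service instances; each composite holds a placeholder credential
    that identifies the sub-service pool, not a concrete instance."""
    sub_pool_id = "subservice-pool-" + uuid.uuid4().hex[:8]
    sub_pool = [{"id": f"sub-{i}", "bound_to": None} for i in range(sub_count)]
    composite_pool = [
        {"id": f"composite-{i}", "credential": {"placeholder": sub_pool_id}}
        for i in range(composite_count)
    ]
    return composite_pool, sub_pool

def bind_on_request(composite, sub_pool):
    """Assumed resolution step: swap the placeholder for a free sub-service."""
    free = next(s for s in sub_pool if s["bound_to"] is None)
    free["bound_to"] = composite["id"]
    composite["credential"] = {"sub_service": free["id"]}
    return composite

composites, subs = pre_provision(composite_count=3, sub_count=3)
print(bind_on_request(composites[0], subs)["credential"])
```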
-
Patent number: 9558095
Abstract: Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored.
Type: Grant
Filed: March 30, 2016
Date of Patent: January 31, 2017
Assignee: International Business Machines Corporation
Inventors: Keun Soo Yim, Seetharami R. Seelam, Liana L. Fong, Arun Iyengar, John Lewars
-
Patent number: 9547534
Abstract: A tool for autoscaling applications in a shared cloud resource environment. The tool registers, by one or more computer processors, one or more trigger conditions. The tool initiates, by one or more computer processors, a scaling event based, at least in part, on at least one of the one or more trigger conditions. The tool determines, by one or more computer processors, a scaling decision for the scaling event based, at least in part, on one or more scaling rules related to the one or more trigger conditions.
Type: Grant
Filed: October 10, 2014
Date of Patent: January 17, 2017
Assignee: International Business Machines Corporation
Inventors: Paolo Dettori, Xiaoqiao Meng, Seetharami R. Seelam, Peter H. Westerink
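The register/trigger/decide flow reads naturally as a small rule engine. This is only a sketch under assumed metric names and replica-delta rules; the patent does not specify what the scaling rules look like.

```python
def make_autoscaler(rules):
    """rules: list of (trigger_condition, delta) pairs, where trigger_condition
    is a predicate over current metrics and delta is a signed replica change."""
    def evaluate(metrics, current_replicas):
        for trigger, delta in rules:
            if trigger(metrics):                         # scaling event initiated
                return max(1, current_replicas + delta)  # scaling decision
        return current_replicas
    return evaluate

# Example: scale out when CPU is hot, scale in when it is mostly idle.
autoscale = make_autoscaler([
    (lambda m: m["cpu"] > 0.80, +2),
    (lambda m: m["cpu"] < 0.20, -1),
])
print(autoscale({"cpu": 0.9}, current_replicas=3))  # -> 5
```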
-
Publication number: 20160210212
Abstract: Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored.
Type: Application
Filed: March 30, 2016
Publication date: July 21, 2016
Applicant: International Business Machines Corporation
Inventors: Keun Soo YIM, Seetharami R. SEELAM, Liana L. FONG, Arun IYENGAR, John LEWARS
-
Publication number: 20160179669
Abstract: A data processing system includes a plurality of virtual machines each having associated memory pages; a shared memory page cache that is accessible by each of the plurality of virtual machines; and a global hash map that is accessible by each of the plurality of virtual machines. The data processing system is configured such that, for a particular memory page stored in the shared memory page cache that is associated with two or more of the plurality of virtual machines, there is a single key stored in the global hash map that identifies at least a storage location in the shared memory page cache of the particular memory page. The system can be embodied at least partially in a cloud computing system.
Type: Application
Filed: March 1, 2016
Publication date: June 23, 2016
Inventors: Parijat Dube, Xavier R. Guerin, Seetharami R. Seelam
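A content-addressed sketch of the shared cache and global hash map; the same keyed deduplication also underlies patent 9170950 further down this listing. Treating the key as a hash of the page contents is an assumption for illustration, as are the class and method names.

```python
import hashlib

class SharedPageCache:
    """Content-addressed page cache shared by a cluster of VMs: one copy of a
    page is stored, and a global hash map maps its key to the storage slot."""

    def __init__(self):
        self.global_hash_map = {}   # key -> slot index in the shared cache
        self.cache = []             # shared memory page cache (page contents)

    @staticmethod
    def page_key(page_bytes):
        return hashlib.sha256(page_bytes).hexdigest()

    def swap_out(self, page_bytes):
        """Store the page only if an identical page is not already cached."""
        key = self.page_key(page_bytes)
        if key not in self.global_hash_map:
            self.cache.append(page_bytes)
            self.global_hash_map[key] = len(self.cache) - 1
        return key

    def swap_in(self, key):
        return self.cache[self.global_hash_map[key]]

# Two VMs swapping out identical pages share a single cached copy and key.
cache = SharedPageCache()
k1 = cache.swap_out(b"\x00" * 4096)
k2 = cache.swap_out(b"\x00" * 4096)
print(k1 == k2, len(cache.cache))  # True 1
```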
-
Patent number: 9372086
Abstract: Managing routes to meet one or more predetermined conditions, in one aspect, may comprise receiving user information associated with a user via a user's device. Based on the user information, at least a target location to where the user is traveling may be determined. Path information associated with one or more intermediary locations leading to the target location may be received. The path information may be received automatically from one or more sensors installed at the respective intermediary locations for detecting the path information. A route strategy that meets one or more conditions may be estimated by analyzing the user information and the path information. The user information may be obtained automatically from one or more of social network profile data associated with the user, electronic calendar data associated with the user, or historical data associated with the user stored in a user profile database.
Type: Grant
Filed: September 16, 2013
Date of Patent: June 21, 2016
Assignee: International Business Machines Corporation
Inventors: Carlos Henrique Cardonha, Dimitri Kanevsky, Peter K. Malkin, Seetharami R. Seelam
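A small illustration of estimating a route strategy from sensor-reported path information and user conditions. The cost model, the "avoid" preference, and all names are invented for the example and are not drawn from the patent.

```python
def pick_route(candidate_routes, sensor_readings, user_prefs):
    """candidate_routes: {route_name: [intermediary_location, ...]}
    sensor_readings: {location: estimated_delay_minutes} from path sensors
    user_prefs: e.g. {"avoid": {...}} drawn from profile or calendar data."""
    def cost(stops):
        if user_prefs.get("avoid", set()) & set(stops):
            return float("inf")                   # violates a user condition
        return sum(sensor_readings.get(s, 0) for s in stops)
    return min(candidate_routes, key=lambda name: cost(candidate_routes[name]))

routes = {"highway": ["ramp_a", "toll_plaza"], "local": ["main_st", "bridge"]}
print(pick_route(routes,
                 {"ramp_a": 5, "toll_plaza": 2, "main_st": 9, "bridge": 4},
                 {"avoid": {"toll_plaza"}}))      # -> "local"
```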
-
Patent number: 9361202
Abstract: Various embodiments monitor system noise in a parallel computing system. In one embodiment, at least one set of system noise data is stored in a shared buffer during a first computation interval. The set of system noise data is detected during the first computation interval and is associated with at least one parallel thread in a plurality of parallel threads. Each thread in the plurality of parallel threads is a thread of a program. The set of system noise data is filtered during a second computation interval based on at least one filtering condition creating a filtered set of system noise data. The filtered set of system noise data is then stored.
Type: Grant
Filed: July 18, 2013
Date of Patent: June 7, 2016
Assignee: International Business Machines Corporation
Inventors: Keun Soo Yim, Seetharami R. Seelam, Liana L. Fong, Arun Iyengar, John Lewars
-
Patent number: 9323677
Abstract: A data processing system includes a plurality of virtual machines each having associated memory pages; a shared memory page cache that is accessible by each of the plurality of virtual machines; and a global hash map that is accessible by each of the plurality of virtual machines. The data processing system is configured such that, for a particular memory page stored in the shared memory page cache that is associated with two or more of the plurality of virtual machines, there is a single key stored in the global hash map that identifies at least a storage location in the shared memory page cache of the particular memory page. The system can be embodied at least partially in a cloud computing system.
Type: Grant
Filed: August 15, 2013
Date of Patent: April 26, 2016
Assignee: International Business Machines Corporation
Inventors: Parijat Dube, Xavier R. Guerin, Seetharami R. Seelam
-
Publication number: 20160103717
Abstract: A tool for autoscaling applications in a shared cloud resource environment. The tool registers, by one or more computer processors, one or more trigger conditions. The tool initiates, by one or more computer processors, a scaling event based, at least in part, on at least one of the one or more trigger conditions. The tool determines, by one or more computer processors, a scaling decision for the scaling event based, at least in part, on one or more scaling rules related to the one or more trigger conditions.
Type: Application
Filed: October 10, 2014
Publication date: April 14, 2016
Inventors: Paolo Dettori, Xiaoqiao Meng, Seetharami R. Seelam, Peter H. Westerink
-
Publication number: 20160094409
Abstract: Composite service provisioning is provided. One or more processors pre-provisions a first pool of service instances of a first composite service. One or more processors pre-provisions a second pool of service instances of a sub-service of the first composite service, wherein instances of the first pool of service instances have placeholder credentials identifying the second pool of service instances.
Type: Application
Filed: December 10, 2015
Publication date: March 31, 2016
Inventors: Paolo Dettori, David C. Frank, Seetharami R. Seelam
-
Patent number: 9258196
Abstract: Composite service provisioning is provided. A processor receives a first demand value of a first composite service. The processor identifies a sub-service based on the first composite service. The processor pre-provisions a first pool of service instances corresponding to the first composite service, the first pool of service instances having a quantity of service instances based, at least in part, on the first demand value. The processor determines a second demand value of the sub-service based, at least in part, on the quantity of service instances of the first pool of service instances. The processor pre-provisions a second pool of service instances corresponding to the sub-service, the second pool of service instances having a quantity of service instances based, at least in part, on the second demand value. The processor modifies each of the first pool of service instances with placeholder credentials that identify the second pool of service instances.
Type: Grant
Filed: February 5, 2014
Date of Patent: February 9, 2016
Assignee: International Business Machines Corporation
Inventors: Paolo Dettori, David C. Frank, Seetharami R. Seelam
-
Patent number: 9250857
Abstract: Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the initial buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining.
Type: Grant
Filed: August 28, 2013
Date of Patent: February 2, 2016
Assignee: International Business Machines Corporation
Inventors: Michael H. Dawson, Yuqing Gao, Megumi Ito, Graeme Johnson, Seetharami R. Seelam
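The capture/analyze/identify/determine loop can be sketched as a skeleton parameterized by the concrete buffer management methods and the statistics analysis; the callback names and the fixed interval count are assumptions, and the same loop also describes patent 9158497 at the end of this listing.

```python
def adaptive_buffer_management(methods, run_interval, analyze, pick_alternative,
                               intervals=10):
    """methods: available buffer management methods, keyed by name.
    run_interval(method) -> buffer-access statistics captured under that method.
    analyze(stats) -> summary (e.g. hit rate, reuse distance) for the decision.
    pick_alternative(summary, current, methods) -> name of a method judged
        better than the current one, or None to keep the current method.
    """
    current = next(iter(methods))                 # first buffer management method
    for _ in range(intervals):
        stats = run_interval(methods[current])    # capture statistics
        summary = analyze(stats)                  # analyze them
        candidate = pick_alternative(summary, current, methods)  # identify/determine
        if candidate is not None and candidate != current:
            current = candidate                   # invoke the better method
        # ...then repeat the capture/analyze/identify/determine cycle
    return current
```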
-
Patent number: 9229783
Abstract: Methods and apparatus are provided for evaluating potential resource capacity in a system where there is elasticity and competition between a plurality of containers. A dynamic potential capacity is determined for at least one container in a plurality of containers competing for a total capacity of a larger container. A current utilization by each of the plurality of competing containers is obtained, and an equilibrium capacity is determined for each of the competing containers. The equilibrium capacity indicates a capacity that the corresponding container is entitled to. The dynamic potential capacity is determined based on the total capacity, a comparison of one or more of the current utilizations to one or more of the corresponding equilibrium capacities and a relative resource weight of each of the plurality of competing containers. The dynamic potential capacity is optionally recalculated when the set of plurality of containers is changed or after the assignment of each work element.
Type: Grant
Filed: March 31, 2010
Date of Patent: January 5, 2016
Assignee: International Business Machines Corporation
Inventors: Fabio Benedetti, Norman Bobroff, Liana Liyow Fong, Yanbin Liu, Seetharami R. Seelam
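One plausible reading of the calculation, assuming equilibrium capacity is the weight-proportional share of the total and that a container's dynamic potential capacity is its equilibrium share plus whatever its peers currently leave unused. The patent's recalculation rules (on container-set changes or per work-element assignment) are not modeled here.

```python
def equilibrium_capacities(total, weights):
    """Capacity each container is entitled to, by relative resource weight."""
    wsum = sum(weights.values())
    return {c: total * w / wsum for c, w in weights.items()}

def dynamic_potential_capacity(container, total, weights, utilization):
    """What `container` could grow to now: its equilibrium share plus the
    capacity its peers are currently using below their own equilibrium."""
    eq = equilibrium_capacities(total, weights)
    unused_by_peers = sum(
        max(eq[c] - utilization[c], 0) for c in weights if c != container
    )
    return eq[container] + unused_by_peers

weights = {"a": 2, "b": 1, "c": 1}
util = {"a": 30, "b": 5, "c": 10}
print(dynamic_potential_capacity("a", total=100, weights=weights, utilization=util))
# a's equilibrium is 50; b and c leave 20 + 15 unused -> potential 85
```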
-
Patent number: 9170950
Abstract: An exemplary method in accordance with embodiments of this invention includes, at a virtual machine that forms a part of a cluster of virtual machines, computing a key for an instance of a memory page that is to be swapped out to a shared memory cache that is accessible by all virtual machines of the cluster of virtual machines; determining if the computed key is already present in a global hash map that is accessible by all virtual machines of the cluster of virtual machines; and only if it is determined that the computed key is not already present in the global hash map, storing the computed key in the global hash map and the instance of the memory page in the shared memory cache.
Type: Grant
Filed: January 16, 2013
Date of Patent: October 27, 2015
Assignee: International Business Machines Corporation
Inventors: Parijat Dube, Xavier R. Guerin, Seetharami R. Seelam
-
Patent number: 9158497
Abstract: Managing buffers in a hybrid system, in one aspect, may comprise selecting a first buffer management method from a plurality of buffer management methods; capturing statistics associated with access to the buffer in the hybrid system running under the initial buffer management method; analyzing the captured statistics; identifying a second buffer management method based on the analyzed captured statistics; determining whether the second buffer management method is more optimal than the first buffer management method; in response to determining that the second buffer management method is more optimal than the first buffer management method, invoking the second buffer management method; and repeating the capturing, the analyzing, the identifying and the determining.
Type: Grant
Filed: January 2, 2013
Date of Patent: October 13, 2015
Assignee: International Business Machines Corporation
Inventors: Michael H. Dawson, Yuqing Gao, Megumi Ito, Graeme Johnson, Seetharami R. Seelam