Patents by Inventor Parijat Dube

Parijat Dube has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20150312110
    Abstract: A method for scaling a cloud infrastructure, comprises receiving at least one of resource-level metrics and application-level metrics, estimating parameters of at least one application based on the received metrics, automatically and dynamically determining directives for scaling application deployment based on the estimated parameters, and providing the directives to a cloud service provider to execute the scaling.
    Type: Application
    Filed: July 7, 2015
    Publication date: October 29, 2015
    Inventors: Parijat Dube, Anshul Gandhi, Alexei Karve, Andrzej Kochut, Li Zhang
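
Illustrative sketch for the entry above (publication 20150312110): a minimal Python example that estimates application parameters from metric samples and turns them into a scaling directive. The Metrics fields, the utilization-based sizing rule, and the directive format are assumptions made for illustration, not the method claimed in the application.

```python
# Illustrative sketch only: a toy autoscaling step in the spirit of the
# abstract above. All names and the sizing rule are assumptions.
from dataclasses import dataclass
import math

@dataclass
class Metrics:
    arrival_rate: float      # application-level: requests/sec
    service_rate: float      # estimated per-instance capacity, requests/sec
    cpu_utilization: float   # resource-level: average CPU utilization (0..1)

def estimate_parameters(samples: list[Metrics]) -> Metrics:
    """Estimate application parameters by averaging recent metric samples."""
    n = len(samples)
    return Metrics(
        arrival_rate=sum(s.arrival_rate for s in samples) / n,
        service_rate=sum(s.service_rate for s in samples) / n,
        cpu_utilization=sum(s.cpu_utilization for s in samples) / n,
    )

def scaling_directive(est: Metrics, target_utilization: float = 0.7) -> dict:
    """Derive a directive (desired instance count) from estimated parameters."""
    needed = est.arrival_rate / (est.service_rate * target_utilization)
    return {"action": "set_capacity", "instances": max(1, math.ceil(needed))}

# The directive would then be handed to the cloud provider's scaling API.
print(scaling_directive(estimate_parameters([Metrics(120.0, 25.0, 0.82)])))
```
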
  • Patent number: 9170950
    Abstract: An exemplary method in accordance with embodiments of this invention includes, at a virtual machine that forms a part of a cluster of virtual machines, computing a key for an instance of a memory page that is to be swapped out to a shared memory cache that is accessible by all virtual machines of the cluster of virtual machines; determining if the computed key is already present in a global hash map that is accessible by all virtual machines of the cluster of virtual machines; and only if it is determined that the computed key is not already present in the global hash map, storing the computed key in the global hash map and the instance of the memory page in the shared memory cache.
    Type: Grant
    Filed: January 16, 2013
    Date of Patent: October 27, 2015
    Assignee: International Business Machines Corporation
    Inventors: Parijat Dube, Xavier R. Guerin, Seetharami R. Seelam
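
Illustrative sketch for the entry above (patent 9170950): deduplicating swapped-out pages across a cluster by computing a content-derived key and checking a shared map before storing. A real implementation lives in the hypervisor/virtual machine layer; the plain dict and list below merely stand in for the global hash map and shared memory cache.

```python
# Illustrative sketch only: store a swapped page in the shared cache only if
# its content key is not already present in the global hash map.
import hashlib

global_hash_map: dict[str, int] = {}   # key -> slot in the shared cache
shared_cache: list[bytes] = []         # stands in for the shared memory cache

def swap_out(page: bytes) -> str:
    """Swap a page out of a VM, storing at most one copy of identical pages."""
    key = hashlib.sha256(page).hexdigest()   # key computed from page contents
    if key not in global_hash_map:           # only store if not already present
        shared_cache.append(page)
        global_hash_map[key] = len(shared_cache) - 1
    return key                               # the VM keeps the key to swap back in

def swap_in(key: str) -> bytes:
    return shared_cache[global_hash_map[key]]

k1 = swap_out(b"\x00" * 4096)
k2 = swap_out(b"\x00" * 4096)   # identical page from another VM: no new copy
assert k1 == k2 and len(shared_cache) == 1
```
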
  • Patent number: 9164814
    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
    Type: Grant
    Filed: October 22, 2013
    Date of Patent: October 20, 2015
    Assignee: International Business Machines Corporation
    Inventors: Parijat Dube, Xiaoqiao Meng, Li Zhang
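
Illustrative sketch for the entry above (patent 9164814): a simple additive response-time model built from the quantities named in the abstract (number of cross-system calls, host overhead, accelerator overhead, communication delay). The linear form and the example numbers are assumptions for illustration, not the model claimed in the patent.

```python
# Illustrative sketch only: predicted response time as accelerated compute
# plus a per-call cost for every cross-system call.
def predict_response_time(num_cross_calls: int,
                          host_overhead_s: float,
                          accel_overhead_s: float,
                          comm_delay_s: float,
                          accel_compute_s: float) -> float:
    """Predicted time on a host+accelerator system for one request."""
    per_call_cost = host_overhead_s + accel_overhead_s + comm_delay_s
    return accel_compute_s + num_cross_calls * per_call_cost

# E.g. 200 cross-system calls, 5 us host + 3 us accelerator overhead per call,
# 20 us communication delay per call, and 1 ms of accelerated compute:
print(predict_response_time(200, 5e-6, 3e-6, 20e-6, 1e-3))  # ~6.6 ms
```
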
  • Publication number: 20150254111
    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
    Type: Application
    Filed: May 21, 2015
    Publication date: September 10, 2015
    Inventors: Parijat Dube, Xiaoqiao Meng, Li Zhang
  • Patent number: 9104505
    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
    Type: Grant
    Filed: October 3, 2013
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Parijat Dube, Xiaoqiao Meng, Li Zhang
  • Publication number: 20150186268
    Abstract: Embodiments include methods, systems and computer program products for providing an extendable job structure for executing instructions on an accelerator. The method includes creating a number of data descriptor blocks, each having a fixed number of memory location addresses and a pointer to the next data descriptor block. The method further includes creating a last data descriptor block having the fixed number of memory location addresses and a last block indicator. Based on determining that additional memory is required for executing instructions on the accelerator, the method includes modifying the last data descriptor block to become a data extender block having a pointer to one of one or more new data descriptor blocks and creating a new last data descriptor block.
    Type: Application
    Filed: December 31, 2013
    Publication date: July 2, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Donald W. Schmidt, Bharat Sukhwani, Mathew S. Thoennes
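
Illustrative sketch for the entry above (publication 20150186268): a linked chain of fixed-size descriptor blocks that is extended by turning the last block into an extender pointing at a new block. The field names and the fixed slot count are illustrative, not taken from the patent application.

```python
# Illustrative sketch only: an extendable chain of data descriptor blocks.
FIXED_SLOTS = 4

class DescriptorBlock:
    def __init__(self, last: bool = True):
        self.addresses: list[int] = []   # up to FIXED_SLOTS memory addresses
        self.next: "DescriptorBlock | None" = None
        self.last = last                 # last-block indicator

def append_address(tail: DescriptorBlock, addr: int) -> DescriptorBlock:
    """Add an address, extending the chain when the last block is full."""
    if len(tail.addresses) == FIXED_SLOTS:
        tail.last = False                # old tail becomes an extender block
        tail.next = DescriptorBlock(last=True)
        tail = tail.next                 # new last data descriptor block
    tail.addresses.append(addr)
    return tail

head = tail = DescriptorBlock()
for a in range(0x1000, 0x1000 + 6 * 8, 8):   # 6 addresses -> needs 2 blocks
    tail = append_address(tail, a)
assert head.next is tail and tail.last
```
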
  • Publication number: 20150178129
    Abstract: Identifying a resource bottleneck in multi-stage workflow processing may include identifying dependencies between logical stages and physical resources in a computing system to determine which logical stage involves what set of resources; for each of the identified dependencies, determining a functional relationship between the usage level of a physical resource and the concurrency level of a logical stage; estimating consumption of the physical resources by each of the logical stages based on the functional relationship determined for each of the logical stages; and performing predictive modeling based on the estimated consumption to determine the concurrency level at which each of the logical stages will become a bottleneck.
    Type: Application
    Filed: December 19, 2013
    Publication date: June 25, 2015
    Applicant: International Business Machines Corporation
    Inventors: Parijat Dube, Xiaoqiao Meng, Jian Tan, Li Zhang
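
Illustrative sketch for the entry above (publication 20150178129): fit a usage-versus-concurrency relationship for one (stage, resource) pair and extrapolate the concurrency level at which the resource saturates. The linear model and the saturation threshold are assumptions made for illustration; they are not the predictive model claimed in the application.

```python
# Illustrative sketch only: least-squares fit of resource usage against
# concurrency, then solve for the concurrency at which usage hits capacity.
def fit_line(concurrency: list[float], usage: list[float]) -> tuple[float, float]:
    """Least-squares fit: usage ~= slope * concurrency + intercept."""
    n = len(concurrency)
    mx, my = sum(concurrency) / n, sum(usage) / n
    sxx = sum((x - mx) ** 2 for x in concurrency)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concurrency, usage))
    slope = sxy / sxx
    return slope, my - slope * mx

def bottleneck_concurrency(concurrency, usage, capacity=1.0) -> float:
    """Concurrency level at which the fitted usage reaches the capacity."""
    slope, intercept = fit_line(concurrency, usage)
    return (capacity - intercept) / slope

# A stage that uses 8% of a core per concurrent task plus a 5% baseline:
print(bottleneck_concurrency([1, 2, 4, 8], [0.13, 0.21, 0.37, 0.69]))  # ~11.9
```
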
  • Publication number: 20150169291
    Abstract: A method for scaling a cloud infrastructure, comprises receiving at least one of resource-level metrics and application-level metrics, estimating parameters of at least one application based on the received metrics, automatically and dynamically determining directives for scaling application deployment based on the estimated parameters, and providing the directives to a cloud service provider to execute the scaling.
    Type: Application
    Filed: November 26, 2014
    Publication date: June 18, 2015
    Inventors: Parijat Dube, Anshul Gandhi, Alexei Karve, Andrzej Kochut, Li Zhang
  • Publication number: 20150100972
    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
    Type: Application
    Filed: October 22, 2013
    Publication date: April 9, 2015
    Applicant: International Business Machines Corporation
    Inventors: Parijat Dube, Xiaoqiao Meng, Li Zhang
  • Publication number: 20150100971
    Abstract: Predicting acceleration in a hybrid system may comprise determining a number of cross system calls in a first host-accelerator computer architecture running a workload. Host machine overhead and accelerator overhead in the first host-accelerator computer architecture associated with each of the cross system calls may be determined. Communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload may be determined. An application response time may be predicted for a candidate application to be run in a second host-accelerator computer architecture, based at least on the determined host machine overhead, the accelerator overhead, and the communication delay associated with each of the cross system calls in the first host-accelerator computer architecture running a workload.
    Type: Application
    Filed: October 3, 2013
    Publication date: April 9, 2015
    Applicant: International Business Machines Corporation
    Inventors: Parijat Dube, Xiaoqiao Meng, Li Zhang
  • Patent number: 8983992
    Abstract: Methods and arrangements for facilitating accelerations of database functions. A field programmable gate array is incorporated. At least one query control block is incorporated in the field programmable gate array, and database management system operations are accelerated via the field programmable gate array. The accelerating includes employing the at least one query control block to execute a query without reconfiguring the field programmable gate array.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: March 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sameh Asaad, Bernard V. Brezzo, Donna N Eng Dillenberger, Parijat Dube, Balakrishna Raghavendra Iyer, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
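
Illustrative sketch for the two entries above (patents 8983992 and 8977637): packing a query's parameters into a fixed-layout "query control block" that a pre-configured FPGA engine could consume, so that changing queries means sending new parameters rather than regenerating a bitstream. The byte layout, opcodes, and magic value below are invented purely for illustration and do not describe the patented format.

```python
# Illustrative sketch only: host-side packing of an assumed control-block layout.
import struct

OP_EQ, OP_LT, OP_GT = 0, 1, 2
QCB_MAGIC = 0x0C0B   # arbitrary marker identifying a query control block

def build_query_control_block(column_id: int, op: int, constant: int,
                              project_columns: list[int]) -> bytes:
    """One comparison predicate plus a projection list, packed little-endian."""
    header = struct.pack("<HHBxq", QCB_MAGIC, column_id, op, constant)
    proj = struct.pack("<H", len(project_columns)) + b"".join(
        struct.pack("<H", c) for c in project_columns)
    return header + proj

qcb = build_query_control_block(column_id=3, op=OP_LT, constant=1000,
                                project_columns=[0, 3, 7])
# The host would hand `qcb` to the FPGA engine, which runs the query against
# streaming rows without the device being reconfigured between queries.
```
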
  • Patent number: 8977637
    Abstract: Methods and arrangements for facilitating accelerations of database functions. A field programmable gate array is incorporated. At least one query control block is incorporated in the field programmable gate array, and database management system operations are accelerated via the field programmable gate array. The accelerating includes employing the at least one query control block to execute a query without reconfiguring the field programmable gate array.
    Type: Grant
    Filed: August 30, 2012
    Date of Patent: March 10, 2015
    Assignee: International Business Machines Corporation
    Inventors: Sameh Asaad, Bernard V. Brezzo, Donna N Eng Dillenberger, Parijat Dube, Balakrishna Raghavendra Iyer, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
  • Publication number: 20150046427
    Abstract: Embodiments include methods, systems and computer program products for offloading multiple processing operations to an accelerator. The method includes receiving, by a processing device, a database query from an application. The method also includes performing analysis on the database query and selecting an accelerator template from a plurality of accelerator templates based on the analysis of the database query. The method further includes transmitting an indication of the accelerator template to the accelerator and executing at least a portion of the database query on the accelerator.
    Type: Application
    Filed: August 7, 2013
    Publication date: February 12, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
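
Illustrative sketch for the entry above (publication 20150046427): analyzing a (much simplified) SQL query and picking one of several pre-built accelerator templates. The template names and the keyword-based "analysis" are assumptions made for illustration only.

```python
# Illustrative sketch only: match query keywords against template capabilities.
ACCELERATOR_TEMPLATES = {
    "scan_filter": {"supports": {"WHERE"}},
    "sort":        {"supports": {"ORDER BY"}},
    "join":        {"supports": {"JOIN"}},
}

def select_template(query: str) -> str | None:
    """Pick the template whose supported operations best match the query."""
    q = query.upper()
    best, best_hits = None, 0
    for name, template in ACCELERATOR_TEMPLATES.items():
        hits = sum(1 for kw in template["supports"] if kw in q)
        if hits > best_hits:
            best, best_hits = name, hits
    return best   # None means: run the query entirely on the host

chosen = select_template("SELECT id FROM trades WHERE price > 100")
print(chosen)   # -> "scan_filter"; its id would be sent to the accelerator
```
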
  • Publication number: 20150046428
    Abstract: Embodiments include methods, systems and computer program products for offloading multiple processing operations to an accelerator. Aspects include receiving a database query from an application, performing an analysis on the query, and identifying a plurality of available accelerators. Aspects further include retrieving cost information for one or more templates available on each of the plurality of available accelerators, determining a query execution plan based on the cost information and the analysis on the query, and offloading one or more query operations to at least one of the plurality of accelerators based on the query execution plan.
    Type: Application
    Filed: August 7, 2013
    Publication date: February 12, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
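
Illustrative sketch for the entry above (publication 20150046428): choosing where to run each query operation by comparing per-template cost estimates across the available accelerators. The cost table, accelerator names, and host fallback are made up for illustration and are not the costing scheme claimed in the application.

```python
# Illustrative sketch only: assign each operation to its cheapest executor.
costs = {   # (accelerator, template) -> estimated cost of the operation
    ("fpga0", "scan_filter"): 1.2,
    ("fpga1", "scan_filter"): 0.9,
    ("fpga1", "sort"): 2.5,
}
HOST_COST = {"scan_filter": 4.0, "sort": 2.0}

def plan(operations: list[str]) -> dict[str, str]:
    """Build a query execution plan: cheapest accelerator per operation, or host."""
    assignment = {}
    for op in operations:
        options = [(c, acc) for (acc, tmpl), c in costs.items() if tmpl == op]
        options.append((HOST_COST[op], "host"))
        assignment[op] = min(options)[1]
    return assignment

print(plan(["scan_filter", "sort"]))   # {'scan_filter': 'fpga1', 'sort': 'host'}
```
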
  • Publication number: 20150046486
    Abstract: Embodiments include methods, systems and computer program products for offloading multiple processing operations to an accelerator. The method includes receiving, by a processing device, a database query from an application. The method also includes performing analysis on the database query and selecting an accelerator template from a plurality of accelerator templates based on the analysis of the database query. The method further includes transmitting an indication of the accelerator template to the accelerator and executing at least a portion of the database query on the accelerator.
    Type: Application
    Filed: September 5, 2013
    Publication date: February 12, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
  • Publication number: 20150046430
    Abstract: Embodiments include methods, systems and computer program products for offloading multiple processing operations to an accelerator. Aspects include receiving a database query from an application, performing an analysis on the query, and identifying a plurality of available accelerators. Aspects further include retrieving cost information for one or more templates available on each of the plurality of available accelerators, determining a query execution plan based on the cost information and the analysis on the query, and offloading one or more query operations to at least one of the plurality of accelerators based on the query execution plan.
    Type: Application
    Filed: September 5, 2013
    Publication date: February 12, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
  • Publication number: 20150026197
    Abstract: In an exemplary embodiment of this disclosure, a computer-implemented method includes determining that a database query warrants a first projection operation to project a plurality of input rows to a plurality of projected rows, where each of the plurality of input rows has one or more variable-length columns. A first projection control block is constructed, by a computer processor, to describe the first projection operation. The first projection operation is offloaded to a hardware accelerator. The first projection control block is provided to the hardware accelerator, and the first projection control block enables the hardware accelerator to perform the first projection operation at streaming rate.
    Type: Application
    Filed: July 19, 2013
    Publication date: January 22, 2015
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
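
Illustrative sketch for the entry above (publication 20150026197): a host-side projection control block describing which columns to keep, plus a software model of the projection an accelerator would apply row by row at streaming rate. The row encoding used here is an assumption made purely for illustration.

```python
# Illustrative sketch only: describe a projection once, then apply it per row.
from dataclasses import dataclass

@dataclass
class ProjectionControlBlock:
    keep_columns: list[int]           # indices of the columns to project out

def project_row(pcb: ProjectionControlBlock, row: list[bytes]) -> list[bytes]:
    """Apply the projection described by the control block to one input row."""
    return [row[i] for i in pcb.keep_columns]

pcb = ProjectionControlBlock(keep_columns=[0, 2])
input_row = [b"42", b"a fairly long variable-length comment", b"2015-01-22"]
print(project_row(pcb, input_row))    # [b'42', b'2015-01-22']
```
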
  • Publication number: 20150026220
    Abstract: In an exemplary embodiment of this disclosure, a computer-implemented method includes determining that a database query warrants a first projection operation to project a plurality of input rows to a plurality of projected rows, where each of the plurality of input rows has one or more variable-length columns. A first projection control block is constructed, by a computer processor, to describe the first projection operation. The first projection operation is offloaded to a hardware accelerator. The first projection control block is provided to the hardware accelerator, and the first projection control block enables the hardware accelerator to perform the first projection operation at streaming rate.
    Type: Application
    Filed: August 20, 2013
    Publication date: January 22, 2015
    Applicant: International Business Machines Corporation
    Inventors: Sameh W. Asaad, Parijat Dube, Hong Min, Bharat Sukhwani, Mathew S. Thoennes
  • Patent number: 8924189
    Abstract: A system and method for workload generation include a processor for identifying a workload model by determining each of a hierarchy for workload generation, time scales for workload generation, and states and transitions at each of the time scales, and defining a parameter by determining each of fields for user specific attributes, application specific attributes, network specific attributes, content specific attributes, and a probability distribution function for each of the attributes; a user level template unit corresponding to a relatively slow time scale in signal communication with the processor; an application level template corresponding to a relatively faster time scale in signal communication with the processor; a stream level template corresponding to a relatively fastest time scale in signal communication with the processor; and a communications adapter in signal communication with the processor for defining a workload generating unit responsive to the template units.
    Type: Grant
    Filed: May 29, 2008
    Date of Patent: December 30, 2014
    Assignee: International Business Machines Corporation
    Inventors: Kay S. Anderson, Eric P. Bouillet, Parijat Dube, Zhen Liu, Dimitrios Pendarakis
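
Illustrative sketch for the entry above (patent 8924189): a three-level workload generator (user session, application request, packet stream) where each level draws from its own distribution, loosely mirroring the user/application/stream templates and their time scales. The distribution choices and parameters are assumptions for illustration, not the patented templates.

```python
# Illustrative sketch only: nested time scales, slowest (users) to fastest (packets).
import random

def generate_workload(num_users: int = 2, seed: int = 0) -> list[tuple]:
    random.seed(seed)
    events, t = [], 0.0
    for user in range(num_users):                      # slow: user level
        t += random.expovariate(1 / 30.0)              # user inter-arrival (s)
        for req in range(random.randint(1, 3)):        # faster: application level
            req_t = t + req * random.expovariate(1 / 2.0)
            for pkt in range(random.randint(1, 5)):    # fastest: stream level
                events.append((round(req_t + pkt * 0.01, 3), user, req, pkt))
    return sorted(events)

for event in generate_workload():
    print(event)    # (time, user, request, packet) tuples fed to the target
```
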
  • Patent number: 8869119
    Abstract: Affinity-based preferential call technique, in one aspect, may improve performance of distributed applications in a hybrid system having heterogeneous platforms. A segment of code in a program being executed on a processor may be intercepted or trapped in runtime. A platform is selected in the hybrid system for executing said segment of code, the platform determined to run the segment of code with best efficiency among a plurality of platforms in the hybrid system. The segment of code is dynamically executed on the selected platform determined to run the segment of code with best efficiency.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: October 21, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael H. Dawson, Parijat Dube, Liana L. Fong, Yuqing Gao, Xavier R. Guerin, Michel H. T. Hack, Megumi Ito, Graeme Johnson, Nai K. Ling, Yanbin Liu, Xiaoqiao Meng, Pramod B. Nagaraja, Seetharami R. Seelam, Wei Tan, Li Zhang
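
Illustrative sketch for the entry above (patent 8869119): a decorator that intercepts a function call and dispatches it to whichever platform is expected to run it most efficiently. The timing table, platform names, and local stand-in for offloaded execution are invented for illustration; a real system would trap code segments at runtime and ship them across the heterogeneous platforms of the hybrid system.

```python
# Illustrative sketch only: pick the platform with the best expected efficiency.
import functools

EXPECTED_RUNTIME = {            # (function name, platform) -> estimated seconds
    ("parse_records", "host"): 0.8,
    ("parse_records", "accelerator"): 0.2,
}

def run_on(platform: str, fn, *args, **kwargs):
    # Stand-in for remote/offloaded execution; here everything runs locally.
    return fn(*args, **kwargs)

def affinity_dispatch(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        candidates = {p: t for (name, p), t in EXPECTED_RUNTIME.items()
                      if name == fn.__name__}
        best = min(candidates, key=candidates.get) if candidates else "host"
        return run_on(best, fn, *args, **kwargs)
    return wrapper

@affinity_dispatch
def parse_records(lines):
    return [line.split(",") for line in lines]

print(parse_records(["1,a", "2,b"]))   # dispatched per the "accelerator" entry
```
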