Patents by Inventor Shicong MENG
Shicong MENG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9354938
Abstract: Methods and arrangements for task scheduling. A job is accepted, the job comprising a plurality of phases, each of the phases comprising at least one task. For each of a plurality of slots, a fetching cost associated with receipt of one or more of the tasks is determined. The slots are grouped into a plurality of sets. A pair of thresholds is determined for each of the sets, the thresholds being associated with the determined fetching costs and comprising upper and lower numerical bounds for guiding receipt of one or more of the tasks. Other variants and embodiments are broadly contemplated herein.
Type: Grant
Filed: April 10, 2013
Date of Patent: May 31, 2016
Assignee: International Business Machines Corporation
Inventors: Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
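The grouping-and-thresholds idea in this abstract can be sketched as follows. The bucketing rule, the 20% margin, and all names are illustrative assumptions, not the patent's actual method:

```python
from collections import defaultdict

def group_slots(slot_costs, bucket_width=10.0):
    """Group slots into sets by binning their fetching costs (assumed rule)."""
    groups = defaultdict(list)
    for slot, cost in slot_costs.items():
        groups[int(cost // bucket_width)].append(slot)
    return dict(groups)

def threshold_pair(group, slot_costs, margin=0.2):
    """Derive one set's (lower, upper) numerical bounds from its costs."""
    costs = [slot_costs[s] for s in group]
    return min(costs) * (1 - margin), max(costs) * (1 + margin)

slot_costs = {"s1": 3.0, "s2": 7.0, "s3": 12.0, "s4": 18.0}
groups = group_slots(slot_costs)
bounds = {k: threshold_pair(g, slot_costs) for k, g in groups.items()}
```

A scheduler could then fetch a task into a slot only while the slot's current cost sits between its set's bounds; the patent's actual guidance rule is not specified in the abstract.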
-
Publication number: 20160150060
Abstract: A method for selecting a consensus protocol comprises separating a consensus protocol into one or more communication steps, wherein the consensus protocol is useable to substantially maintain data consistency between nodes in a distributed computing system, and wherein a communication step comprises a message transfer, attributable to the consensus protocol, in the distributed computing system, and computing an estimated protocol-level delay based on one or more attributes associated with the separated communication steps of the consensus protocol.
Type: Application
Filed: July 10, 2015
Publication date: May 26, 2016
Inventors: Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
-
Publication number: 20160150059
Abstract: A method for selecting a consensus protocol comprises separating a consensus protocol into one or more communication steps, wherein the consensus protocol is useable to substantially maintain data consistency between nodes in a distributed computing system, and wherein a communication step comprises a message transfer, attributable to the consensus protocol, in the distributed computing system, and computing an estimated protocol-level delay based on one or more attributes associated with the separated communication steps of the consensus protocol.
Type: Application
Filed: November 21, 2014
Publication date: May 26, 2016
Inventors: Shicong Meng, Xiaoqiao Meng, Jian Tan, Xiao Yu, Li Zhang
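The step-wise delay estimation described above can be sketched with a toy model: each communication step costs one network traversal plus a per-message processing overhead, and the protocol with the lower estimate is selected. The delay model, the constants, and the two hypothetical step decompositions are all assumptions for illustration:

```python
def estimate_delay(steps, rtt=0.5, per_msg=0.01):
    """Protocol-level delay estimate: sum over communication steps of one
    network traversal plus a per-message processing cost (toy model)."""
    return sum(rtt + per_msg * n_msgs for n_msgs in steps)

# Hypothetical step decompositions: messages sent in each communication step.
protocols = {
    "two_round": [5, 5, 5, 5],   # e.g. a prepare/promise/accept/learn shape
    "one_round": [5, 5],         # e.g. a leader-based fast path
}
best = min(protocols, key=lambda name: estimate_delay(protocols[name]))
```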
-
Publication number: 20160117186
Abstract: Methods, systems, and computer programs for performing management tasks in a virtual infrastructure are presented. The method includes detecting a change, beyond a predetermined threshold, in a number of tasks waiting to be processed by a plurality of management modules executing as execution environments in the virtual infrastructure, each of the plurality of management modules being a management execution environment for the managed objects. If the detected change is a decrease, the method includes selecting one or more of the management modules to be removed and distributing managed objects handled by the selected management modules to one or more non-selected management modules. If the detected change is an increase, the method includes spawning one or more additional management modules executing as execution environments and distributing selected managed objects from the existing management modules to the additional management modules.
Type: Application
Filed: January 4, 2016
Publication date: April 28, 2016
Inventors: Vijayaraghavan Soundararajan, Shicong Meng
-
Patent number: 9229754
Abstract: Methods, systems, and computer programs for performing management tasks in a virtual infrastructure are presented. The method includes detecting a decrease, below a predetermined threshold, in a number of tasks waiting to be processed by a plurality of virtual centers (VCs) executing as virtual machines (VMs) in a virtual infrastructure, wherein each of the plurality of VCs is a management VM for the managed objects of the virtual infrastructure. The method further includes, based on the detected decrease in the number of tasks waiting to be processed, selecting one or more VCs of the plurality of VCs to be removed, distributing managed objects handled by the selected one or more VCs to one or more non-selected VCs of the plurality of VCs, and removing the selected one or more VCs.
Type: Grant
Filed: January 14, 2014
Date of Patent: January 5, 2016
Assignee: VMware, Inc.
Inventors: Vijayaraghavan Soundararajan, Shicong Meng
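The scale-down/scale-up loop described in the two entries above can be sketched as follows. The thresholds, the least-loaded victim choice, and the round-robin redistribution are illustrative assumptions; the patent does not commit to these specific policies:

```python
import itertools

def rebalance(vcs, backlog, low=10, high=100):
    """Remove a management VC when the task backlog falls below `low`,
    handing its managed objects round-robin to the surviving VCs;
    spawn a fresh VC when the backlog exceeds `high`."""
    vcs = {name: list(objs) for name, objs in vcs.items()}  # defensive copy
    if backlog < low and len(vcs) > 1:
        victim = min(vcs, key=lambda v: len(vcs[v]))   # least-loaded VC
        orphans = vcs.pop(victim)
        survivors = itertools.cycle(sorted(vcs))
        for obj in orphans:                            # round-robin hand-off
            vcs[next(survivors)].append(obj)
    elif backlog > high:
        vcs["vc%d" % len(vcs)] = []                    # new, empty VC
    return vcs
```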
-
Publication number: 20150347243
Abstract: In various embodiments a distributed computing node in a plurality of distributed computing nodes logs transactions in a distributed processing system. In one embodiment, a set of information associated with at least one transaction is recorded in a transaction log. At least a portion of memory in at least one information processing system involved in the transaction is accessed. The portion of memory is directly accessed without involving a processor of the at least one information processing system. The set of information from the transaction log is written to the portion of memory. The set of information is directly written to the portion of memory without involving a processor of the at least one information processing system.
Type: Application
Filed: May 27, 2014
Publication date: December 3, 2015
Applicant: International Business Machines Corporation
Inventors: Xavier R. Guerin, Shicong Meng
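A schematic of the log-write path above: serialize a record and place it at a fixed offset in a pre-registered buffer. In the patent this write would travel over something like RDMA directly into the remote machine's memory, bypassing its CPU; here a local bytearray stands in, and the record layout is an assumption:

```python
import struct

def append_log_record(remote_mem, offset, txn_id, payload):
    """Serialize one transaction-log record (4-byte txn id + payload) and
    write it at `offset` in a buffer standing in for registered remote
    memory; the remote CPU is never involved in the real mechanism."""
    record = struct.pack(">I", txn_id) + payload
    remote_mem[offset:offset + len(record)] = record
    return offset + len(record)              # next free offset

mem = bytearray(64)                          # stand-in for remote memory
end = append_log_record(mem, 0, 42, b"commit")
```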
-
Publication number: 20150309731
Abstract: Methods, systems, and computer program products for dynamic tuning of memory in MapReduce systems are provided herein. A method includes analyzing (i) memory usage of a first sub-set of multiple tasks associated with a MapReduce job and (ii) an amount of data utilized across the first sub-set of the multiple tasks; determining a memory size to be allocated to the first sub-set of the multiple tasks based on said analyzing, wherein said memory size minimizes a cost function related to said memory usage and said amount of data utilized; performing a task-wise performance comparison among a second sub-set of the multiple tasks associated with the MapReduce job using the determined memory size to be allocated to the first sub-set of the multiple tasks to generate a set of memory allocation results; and dynamically applying the set of memory allocation results to one or more additional tasks associated with the MapReduce job.
Type: Application
Filed: April 25, 2014
Publication date: October 29, 2015
Applicant: International Business Machines Corporation
Inventors: Nicholas C. Fuller, Min Li, Shicong Meng, Jian Tan, Liangzhao Zeng, Li Zhang
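The "memory size that minimizes a cost function" step can be sketched with an assumed cost: memory wasted beyond peak usage, plus a heavier penalty for data that spills to disk because it does not fit. The cost terms and the 4x spill weight are illustrative, not the patent's function:

```python
def pick_memory_size(candidates_gb, peak_usage_gb, data_gb):
    """Choose the candidate memory size minimizing an illustrative cost:
    wasted memory plus a 4x-weighted penalty for spilled data."""
    def cost(m):
        waste = max(0.0, m - peak_usage_gb)          # over-allocation
        spill = max(0.0, data_gb - m) * 4.0          # disk spill is pricier
        return waste + spill
    return min(candidates_gb, key=cost)
```

For tasks that peak at 2.5 GB while touching 3 GB of data, a 4 GB allocation beats both a tight 2 GB (spill-heavy) and a loose 8 GB (wasteful) under this cost.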
-
Publication number: 20150261563
Abstract: In various embodiments a distributed computing node in a plurality of distributed computing nodes executes transactions in a distributed processing system. In one embodiment, a transaction commit message is received from a client computing node for a transaction. The transaction commit message includes at least an identifier of the transaction and a transaction sequence for the transaction. The transaction sequence indicates a sequence of execution for the transaction on the plurality of distributed computing nodes. An entry within the transaction sequence associated with the distributed computing node is identified. The entry includes a sequence number for executing the transaction on the distributed computing node with respect to other transactions. The transaction is executed based on the sequence number in the entry.
Type: Application
Filed: March 17, 2014
Publication date: September 17, 2015
Applicant: International Business Machines Corporation
Inventors: Xavier R. Guerin, Shicong Meng
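The entry-lookup step above reduces to finding this node's (node, sequence-number) pair in the commit message's transaction sequence; that number then fixes where the transaction runs locally. The message shape here is a hypothetical encoding:

```python
def local_sequence_number(commit_msg, node_id):
    """Find this node's entry in the transaction sequence carried by a
    commit message; its sequence number orders local execution."""
    for entry_node, seq in commit_msg["sequence"]:
        if entry_node == node_id:
            return seq
    raise KeyError("node %s has no entry for txn %s"
                   % (node_id, commit_msg["txn"]))

# Hypothetical commit message: node n2 runs t42 as its 3rd transaction.
msg = {"txn": "t42", "sequence": [("n1", 7), ("n2", 3)]}
```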
-
Publication number: 20150227393
Abstract: Methods, systems, and articles of manufacture for dynamic resource allocation in MapReduce are provided herein. A method includes partitioning input data into one or more sized items of input data associated with a MapReduce job; determining a total number of mapper components, and a total number of reducer components for the MapReduce job based on said partitioning; dynamically determining an allocation of resources to each of the total number of mapper components and reducer components during run-time of the MapReduce job, wherein said dynamically determining the allocation of resources comprises monitoring one or more utilization parameters for each of the total number of mapper components and total number of reducer components during run-time of the MapReduce job; and dynamically determining a number of concurrently executing mapper components and reducer components from the total number of mapper components and the total number of reducer components for the MapReduce job.
Type: Application
Filed: February 10, 2014
Publication date: August 13, 2015
Applicant: International Business Machines Corporation
Inventors: Nicholas C. Fuller, Min Li, Shicong Meng, Jian Tan, Liangzhao Zeng, Li Zhang
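One way to turn the monitored utilization parameters into a concurrent task count is to scale toward a target utilization on the busiest resource. The target level and the scaling rule are assumptions for illustration:

```python
def concurrent_task_count(total, cpu_util, mem_util, target=0.8):
    """Pick how many of `total` tasks to run concurrently so the busier of
    the monitored CPU/memory utilizations moves toward `target` (toy rule)."""
    busiest = max(cpu_util, mem_util)
    if busiest <= 0:
        return total                       # fully idle: run everything
    return max(1, min(total, int(total * target / busiest)))
```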
-
Publication number: 20150227392
Abstract: Methods, systems, and articles of manufacture for enabling dynamic task-level configuration in MapReduce are provided herein. A method includes generating a first set of configurations for a currently executing MapReduce job, wherein said set of configurations comprises job-level configurations and task-level configurations; dynamically modifying configurations associated with a mapper component and/or a reducer component associated with at least one ongoing map task and/or ongoing reduce task of the MapReduce job based on the generated first set of configurations; and deploying said first set of configurations to the mapper component and/or the reducer component associated with the MapReduce job.
Type: Application
Filed: February 10, 2014
Publication date: August 13, 2015
Applicant: International Business Machines Corporation
Inventors: Nicholas C. Fuller, Minkyong Kim, Min Li, Shicong Meng, Jian Tan, Liangzhao Zeng, Li Zhang
-
Publication number: 20150227399
Abstract: Methods and arrangements for managing data segments. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task, and the at least one of the dependee set of tasks is executed. There is extracted, from the at least one of the dependee set of tasks, at least one service common to at least another of the dependee set of tasks. Other variants and embodiments are broadly contemplated herein.
Type: Application
Filed: February 7, 2014
Publication date: August 13, 2015
Applicant: International Business Machines Corporation
Inventors: Alicia Elena Chin, Yonggang Hu, Zhenhua Hu, Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
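The "service common to at least another dependee task" extraction can be read as: collect the services each dependee task uses and keep those shared by two or more tasks, which are then candidates to start once and reuse. The task/service encoding below is an illustrative assumption:

```python
from collections import Counter

def shared_services(dependee_tasks):
    """Return services used by two or more dependee tasks: candidates to
    extract and run once rather than per-task (an illustrative reading)."""
    counts = Counter(svc for t in dependee_tasks
                     for svc in set(t["services"]))
    return {svc for svc, n in counts.items() if n >= 2}

tasks = [
    {"id": "d1", "services": ["shuffle", "cache"]},
    {"id": "d2", "services": ["cache", "sort"]},
]
```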
-
Publication number: 20150227394
Abstract: Methods and arrangements for yielding resources in data processing. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task, and the at least one of the dependee set of tasks is executed. At least one resource of the at least one of the dependee set of tasks is yielded upon detection of resource underutilization in at least one other location. Other variants and embodiments are broadly contemplated herein.
Type: Application
Filed: February 7, 2014
Publication date: August 13, 2015
Applicant: International Business Machines Corporation
Inventors: Alicia Elena Chin, Yonggang Hu, Zhenhua Hu, Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
-
Publication number: 20150227389
Abstract: Methods and arrangements for assembling tasks in a progressive queue. At least one job is received, each job comprising a dependee set of tasks and a depender set of at least one task. The dependee tasks are assembled in a progressive queue for execution, and the dependee tasks are executed. Other variants and embodiments are broadly contemplated herein.
Type: Application
Filed: February 7, 2014
Publication date: August 13, 2015
Applicant: International Business Machines Corporation
Inventors: Alicia Elena Chin, Michael Feiman, Yonggang Hu, Zhenhua Hu, Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
-
Publication number: 20140310236
Abstract: A method of transaction processing includes receiving a plurality of transactions from an execution queue, acquiring a plurality of locks corresponding to data items needed for execution of the plurality of transactions, executing each transaction of the plurality of transactions upon acquiring all locks needed for execution of each transaction, and releasing the locks needed for execution of each transaction of the plurality of transactions upon committing each transaction. The plurality of transactions have a specified order within the execution queue, the plurality of locks are sequentially acquired based on the specified order of the plurality of transactions within the execution queue, and an order of execution of the plurality of transactions is different from the specified order of the plurality of transactions within the execution queue.
Type: Application
Filed: April 15, 2013
Publication date: October 16, 2014
Applicant: International Business Machines Corporation
Inventors: Shicong Meng, Li Zhang
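The key property above is that locks are granted strictly in queue order, yet a transaction runs as soon as it holds everything it needs, so a later transaction with free items can finish before a blocked earlier one. A minimal sketch of that effect, with `busy` modeling locks currently held elsewhere:

```python
def execution_order(queue, busy=frozenset()):
    """Grant locks strictly in queue order; a transaction touching a busy
    item waits, so later transactions whose items are free overtake it."""
    runnable, blocked = [], []
    for txn, items in queue:
        (blocked if items & busy else runnable).append(txn)
    return runnable + blocked     # blocked ones run once locks free up

order = execution_order(
    [("T1", {"x"}), ("T2", {"y"}), ("T3", {"z"})], busy={"y"})
# T2 waits on "y", so execution order is T1, T3, T2 although the
# specified queue order was T1, T2, T3.
```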
-
Publication number: 20140310240
Abstract: A method of transaction replication includes transmitting at least one transaction received during an epoch from a local node to remote nodes of a domain of 2N+1 nodes at the end of an epoch (N is an integer greater than or equal to 1). The remote nodes log receipt of the at least one transaction, notify the local node of the receipt of the at least one transaction, transmit the at least one transaction to all of the 2N+1 nodes, and add the at least one transaction to an execution order upon receiving at least N+1 copies of the at least one transaction.
Type: Application
Filed: September 11, 2013
Publication date: October 16, 2014
Applicant: International Business Machines Corporation
Inventors: Shicong Meng, Li Zhang
-
Publication number: 20140310239
Abstract: A method of transaction replication includes transmitting at least one transaction received during an epoch from a local node to remote nodes of a domain of 2N+1 nodes at the end of an epoch (N is an integer greater than or equal to 1). The remote nodes log receipt of the at least one transaction, notify the local node of the receipt of the at least one transaction, transmit the at least one transaction to all of the 2N+1 nodes, and add the at least one transaction to an execution order upon receiving at least N+1 copies of the at least one transaction.
Type: Application
Filed: April 15, 2013
Publication date: October 16, 2014
Applicant: International Business Machines Corporation
Inventors: Shicong Meng, Li Zhang
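The N+1-copies rule in the two entries above is a majority condition in a 2N+1-node domain. A minimal sketch of the decision step, with the message transport omitted:

```python
from collections import Counter

def ready_for_execution(received_copies, n):
    """In a domain of 2N+1 nodes, a transaction joins the execution order
    once at least N+1 copies of it have arrived (a strict majority)."""
    counts = Counter(received_copies)
    return [txn for txn, c in counts.items() if c >= n + 1]

# N = 1 -> a 3-node domain, so 2 copies suffice.
ordered = ready_for_execution(["tA", "tB", "tA"], n=1)
# tA was received twice and qualifies; tB, seen once, does not.
```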
-
Publication number: 20140310253
Abstract: A method of transaction processing includes receiving a plurality of transactions from an execution queue, acquiring a plurality of locks corresponding to data items needed for execution of the plurality of transactions, executing each transaction of the plurality of transactions upon acquiring all locks needed for execution of each transaction, and releasing the locks needed for execution of each transaction of the plurality of transactions upon committing each transaction. The plurality of transactions have a specified order within the execution queue, the plurality of locks are sequentially acquired based on the specified order of the plurality of transactions within the execution queue, and an order of execution of the plurality of transactions is different from the specified order of the plurality of transactions within the execution queue.
Type: Application
Filed: September 11, 2013
Publication date: October 16, 2014
Applicant: International Business Machines Corporation
Inventors: Shicong Meng, Li Zhang
-
Publication number: 20140310712
Abstract: Methods and arrangements for task scheduling. A job is accepted, the job comprising a plurality of phases, each of the phases comprising at least one task. For each of a plurality of slots, a fetching cost associated with receipt of one or more of the tasks is determined. The slots are grouped into a plurality of sets. A pair of thresholds is determined for each of the sets, the thresholds being associated with the determined fetching costs and comprising upper and lower numerical bounds for guiding receipt of one or more of the tasks. Other variants and embodiments are broadly contemplated herein.
Type: Application
Filed: April 10, 2013
Publication date: October 16, 2014
Applicant: International Business Machines Corporation
Inventors: Shicong Meng, Xiaoqiao Meng, Jian Tan, Li Zhang
-
Publication number: 20140130048
Abstract: Methods, systems, and computer programs for performing management tasks in a virtual infrastructure are presented. The method includes detecting a decrease, below a predetermined threshold, in a number of tasks waiting to be processed by a plurality of virtual centers (VCs) executing as virtual machines (VMs) in a virtual infrastructure, wherein each of the plurality of VCs is a management VM for the managed objects of the virtual infrastructure. The method further includes, based on the detected decrease in the number of tasks waiting to be processed, selecting one or more VCs of the plurality of VCs to be removed, distributing managed objects handled by the selected one or more VCs to one or more non-selected VCs of the plurality of VCs, and removing the selected one or more VCs.
Type: Application
Filed: January 14, 2014
Publication date: May 8, 2014
Applicant: VMware, Inc.
Inventors: Vijayaraghavan Soundararajan, Shicong Meng
-
Publication number: 20140089495
Abstract: Various embodiments predict performance of a system including a plurality of server tiers. In one embodiment, a first set of performance information is collected for a base allocation of computing resources across multiple server tiers in the plurality of server tiers for a set of workloads. A set of experimental allocations of the computing resources is generated on a tier-by-tier basis. Each of the set of experimental allocations varies the computing resources allocated by the base allocation for a single server tier of the multiple server tiers. A second set of performance information associated with the single server tier for each of the set of experimental allocations is collected for a plurality of workloads. At least one performance characteristic of at least one candidate allocation of computing resources across the multiple server tiers is predicted for a given workload based on the first and second sets of performance information.
Type: Application
Filed: January 4, 2013
Publication date: March 27, 2014
Applicant: International Business Machines Corporation
Inventors: Rahul P. Akolkar, Arun Iyengar, Shicong Meng, Isabelle Rouvellou, Ignacio Silva-Lepe
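The tier-by-tier experiment generation above can be sketched directly: each experimental allocation perturbs exactly one tier's resources while every other tier keeps its base allocation. The tier names and the delta values are illustrative assumptions:

```python
def experimental_allocations(base, deltas=(-2, 2)):
    """Generate the experiment set: each allocation varies exactly one
    tier's resources from the base allocation, holding the rest fixed."""
    experiments = []
    for tier in base:
        for d in deltas:
            alloc = dict(base)        # copy the base allocation
            alloc[tier] = base[tier] + d
            experiments.append(alloc)
    return experiments

base = {"web": 4, "app": 8, "db": 4}     # hypothetical CPU counts per tier
exps = experimental_allocations(base)
# 3 tiers x 2 deltas = 6 single-tier variations of the base allocation.
```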