Patents by Inventor Layne Lin Peng

Layne Lin Peng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240012680
    Abstract: Techniques for facilitating inter-cloud federated learning (FL) are provided. In one set of embodiments, these techniques comprise an FL lifecycle manager that enables users to centrally manage the lifecycles of FL components across different cloud platforms. The lifecycle management operations enabled by the FL lifecycle manager can include deploying/installing FL components on the cloud platforms, updating the components, and uninstalling the components. In a further set of embodiments, these techniques comprise an FL job manager that enables users to centrally manage the execution of FL training runs (i.e., FL jobs) on FL components that have been deployed via the FL lifecycle manager. For example, the FL job manager can enable users to define the parameters and configuration of an FL job, initiate the job, monitor the job's status, take actions on the running job, and collect the job's results.
    Type: Application
    Filed: July 26, 2022
    Publication date: January 11, 2024
    Inventors: Fangchi Wang, Hai Ning Zhang, Layne Lin Peng, Renming Zhao, Siyu Qiu
  • Publication number: 20230342171
    Abstract: Capacity forecasting may be performed for distributed storage resources in a virtualized computing environment. Historical data indicative of usage of the storage resources is transformed into a privacy-preserving format and preprocessed to remove outliers, fill in missing values, and perform normalization. The preprocessed historical data is input into a machine-learning model, which applies a piecewise regression to the historical data to generate a prediction output. (A brief illustrative sketch of this forecasting flow appears after this listing.)
    Type: Application
    Filed: April 21, 2022
    Publication date: October 26, 2023
    Applicant: VMware, Inc.
    Inventors: Yang Yang, Hexin Zhang, Layne Lin Peng, Jiahao Chen, Chengmao Lu, Sixuan Yang
  • Publication number: 20230229640
    Abstract: A collaborative data schema management system for federated learning, referred to as a federated data manager (FDM), is provided. Among other things, FDM enables the members of a federated learning alliance to (1) propose data schemas for use by the alliance, (2) identify and bind local datasets to proposed schemas, (3) create, based on the proposed schemas, training datasets for addressing various ML tasks, and (4) control, for each training dataset, which of the local datasets bound to that training dataset (and thus, which alliance members) will actually participate in the training of a particular ML model. FDM enables these features while ensuring that the contents of the members' local datasets remain hidden from one another, thereby preserving the privacy of that data.
    Type: Application
    Filed: January 20, 2022
    Publication date: July 20, 2023
    Inventors: Layne Lin Peng, Hai Ning Zhang, Jia Hao Chen, Fangchi Wang
  • Patent number: 11663050
    Abstract: A resource management method comprises: in response to receiving, from an application operating on a client, a resource allocation request indicating an amount of dedicated processing resources required by the application, acquiring a mapping between a group of physical dedicated processing resources provided by a group of servers and a group of logical dedicated processing resources, the group of physical dedicated processing resources being divided into the group of logical dedicated processing resources; determining allocation statuses of the group of logical dedicated processing resources; determining, based at least on the mapping and the allocation statuses, a first amount of logical dedicated processing resources to be allocated to the application from the group of logical dedicated processing resources; and indicating the first amount of logical dedicated processing resources to the application, to allow the application to utilize physical dedicated processing resources provided by at least one of the group of servers. (A brief illustrative sketch of this mapping-based allocation appears after this listing.)
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: May 30, 2023
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Junping Zhao, Wei Cui
  • Patent number: 11507540
    Abstract: In a multi-cloud computing environment comprising a plurality of cloud platforms, wherein one cloud platform is a source of a model and a data set and further wherein the model is to be executed against the data set on one or more of the other cloud platforms, the method maintains a decentralized architecture comprising a file system and a message bus, wherein the file system comprises a plurality of decentralized file system nodes corresponding to the plurality of cloud platforms, and the message bus comprises a plurality of decentralized message bus nodes corresponding to the plurality of cloud platforms. Further, the method manages sharing of the model and the data set via at least a portion of the decentralized file system nodes and manages messaging related to execution of the model against the data set via at least a portion of the decentralized message bus nodes.
    Type: Grant
    Filed: February 15, 2018
    Date of Patent: November 22, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Stephen J. Todd, Kun Wang, Layne Lin Peng, Pengfei Wu
  • Patent number: 11436050
    Abstract: Embodiments of the present disclosure provide a method, apparatus and computer program product for resource scheduling. The method comprises obtaining a processing requirement for a deep learning task, the processing requirement being specified by a user and at least including a requirement related to a completion time of the deep learning task. The method further comprises determining, based on the processing requirement, a resource required by the deep learning task such that processing of the deep learning task based on the resource satisfies the processing requirement. Through the embodiments of the present disclosure, resources can be scheduled reasonably and flexibly to satisfy the user's processing requirement for a particular deep learning task, without requiring the user to manually specify resource requirements.
    Type: Grant
    Filed: April 10, 2019
    Date of Patent: September 6, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Kun Wang, Sanping Li
  • Patent number: 11275417
    Abstract: The present disclosure provides a power management apparatus, method and system. The apparatus comprises: a client management module for configuring power management client module(s) on one or more clients, the power management client module being for power management of the client; a data collector module for collecting, via the power management client module(s), data related to the power management of one or more user accounts on one or more clients; and a repository module for storing the collected data.
    Type: Grant
    Filed: December 3, 2019
    Date of Patent: March 15, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Feng Golfen Guo, Grissom Tianqing Wang, Roby Qiyan Chen, Layne Lin Peng, Vivian Yun Zhang, Kay Kai Yan
  • Patent number: 11249811
    Abstract: Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. The method comprises: obtaining status information of multiple computing resources; in response to receiving a neural network model-based computing task, determining configuration information of multiple layers associated with the neural network model; obtaining parameter data associated with at least one part of the multiple layers on the basis of the configuration information; and based on the status information and the parameter data, selecting from the multiple computing resources a group of computing resources for processing the computing task. According to the example implementations of the present disclosure, the multiple computing resources may be utilized fully, and a load balance may be struck among them. (A brief illustrative sketch of this selection step appears after this listing.)
    Type: Grant
    Filed: April 12, 2019
    Date of Patent: February 15, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Layne Lin Peng, Zhi Ying, Kun Wang
  • Patent number: 11201836
    Abstract: Embodiments of the present disclosure relate to a method and a device for managing a stateful application on a server. The method includes, in response to receiving a first request from a client for initializing the stateful application, allocating a storage resource to the stateful application. The method further includes, in response to receiving a second request from the client for processing data, storing the data in the storage resource. The method also includes enabling the stateful application to process the stored data.
    Type: Grant
    Filed: June 28, 2018
    Date of Patent: December 14, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Jie Bao, Kun Wang, Junping Frank Zhao, Layne Lin Peng
  • Patent number: 11061731
    Abstract: A method of scheduling a dedicated processing resource includes: obtaining source code of an application to be compiled; extracting, during compiling of the source code, metadata associated with the application, the metadata indicating an amount of the dedicated processing resource required by the application; and obtaining, based on the metadata, the dedicated processing resource allocated to the application. In this manner, the performance of the dedicated processing resource scheduling system and the resource utilization are improved. (A brief illustrative sketch of this compile-time metadata extraction appears after this listing.)
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: July 13, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kun Wang, Layne Lin Peng, Fei Chen
  • Patent number: 10983828
    Abstract: Embodiments of the present disclosure relate to a method, apparatus and computer program product for scheduling dedicated processing resources. The method comprises: in response to receiving a scheduling request for a plurality of dedicated processing resources, obtaining a topology of the plurality of dedicated processing resources, the topology being determined based on connection attributes related to connections among the plurality of dedicated processing resources; and determining, based on the topology, a target dedicated processing resource satisfying the scheduling request from the plurality of dedicated processing resources. In this manner, the performance and the resource utilization rate of scheduling the dedicated processing resources are improved.
    Type: Grant
    Filed: January 11, 2019
    Date of Patent: April 20, 2021
    Assignee: Dell Products L.P.
    Inventors: Junping Zhao, Layne Lin Peng, Zhi Ying
  • Patent number: 10877807
    Abstract: Embodiments of the present disclosure relate to a method and apparatus of allocating a processing resource to an application. The method comprises receiving from the application a request for executing a task. The method further comprises determining a characteristic of the application based on the request. The method may determine stored historical data associated with the application. In addition, the method may further automatically select, based on the characteristic of the application and the historical data, the processing resource applicable to the application for allocation to the application.
    Type: Grant
    Filed: October 17, 2018
    Date of Patent: December 29, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Fei Chen, Layne Lin Peng, Kun Wang
  • Patent number: 10756979
    Abstract: Embodiments of the present invention provide a method and apparatus for performing cross-layer orchestration of resources in a data center having a multi-layer architecture. The method comprises: performing unified control of all resources in all layers of the data center; performing unified storage of all topologies and machine-generated data of all layers of the data center; and orchestrating the resources of the data center based on the unified control and the unified storage. Embodiments of the present invention provide a higher-level orchestration than prior-art methods and employ some functions provided by those methods, offering a unified way to orchestrate a layered cloud data center as demand changes so that suitable capacity can be provided immediately.
    Type: Grant
    Filed: December 28, 2015
    Date of Patent: August 25, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Jie Bao, Grissom Tianqing Wang, Vivian Yun Zhang, Roby Qiyan Chen, Feng Golfen Guo, Kay Kai Yan, Yicang Wu
  • Patent number: 10705981
    Abstract: Embodiments of the present disclosure provide a method and apparatus for providing a data storage service. The method comprises: receiving a storage service template from a user, the storage service template specifying a storage service policy for the user and a service instance to launch; and providing a storage service according to the storage service template, wherein the storage service policy defines a storage function to be performed for data of the user. With the method and apparatus according to embodiments of the present disclosure, a unified solution for overall orchestration of storage functions can be provided, enabling the user to flexibly customize the required storage functions. (A brief illustrative sketch of such a template appears after this listing.)
    Type: Grant
    Filed: March 13, 2017
    Date of Patent: July 7, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Layne Lin Peng, Accela Yilong Zhao, Junping Frank Zhao, Yu Cao, Xiaoyan Guo, Zhe Dong, Sanping Li
  • Patent number: 10678464
    Abstract: Embodiments of the present disclosure provide methods and apparatuses for data migration of storage devices, including: registering at least one executing unit for data migration, each of the at least one executing unit corresponding to a description file; extracting and storing information contained in the description file corresponding to each of the at least one executing unit; receiving a data migration request from a user; in response to the data migration request, selecting an executing unit for the user's data migration based at least on part of the stored information contained in the description file; and scheduling an instance of the selected executing unit to execute the user's data migration. The methods and apparatuses according to embodiments of the present disclosure can implement data migration in a uniform and scalable manner for various formats, performance requirements, and application scenarios. (A brief illustrative sketch of this executing-unit registry appears after this listing.)
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: June 9, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Frank Zhao, Layne Lin Peng, Yu Cao, Sanping Li, Zhe Dong
  • Publication number: 20200103948
    Abstract: The present disclosure provides a power management apparatus, method and system. The apparatus comprises: a client management module for configuring power management client module(s) on one or more clients, the power management client module being for power management of the client; a data collector module for collecting, via the power management client module(s), data related to the power management of one or more user accounts on one or more clients; and a repository module for storing the collected data.
    Type: Application
    Filed: December 3, 2019
    Publication date: April 2, 2020
    Inventors: Feng Golfen Guo, Grissom Tianqing Wang, Roby Qiyan Chen, Layne Lin Peng, Vivian Yun Zhang, Kay Kai Yan
  • Patent number: 10564938
    Abstract: Embodiments of the present disclosure relate to a method and a device for resource orchestration using an object-oriented language, and a program. Specifically, the present disclosure discloses a method of resource orchestration using an object-oriented language comprising: creating a correspondence relationship from concepts in the object-oriented language to a requirement of resource orchestration; creating, based upon the correspondence relationship, a workflow for implementing the resource orchestration; and implementing the resource orchestration based upon the correspondence relationship and the workflow. The present disclosure also discloses a device for resource orchestration using an object-oriented language, and a computer program product for performing the steps of such a method.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: February 18, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Accela Yilong Zhao, Yu Cao, Layne Lin Peng, Jie Bao
  • Patent number: 10496144
    Abstract: The present disclosure provides a power management apparatus, method and system. The apparatus comprises: a client management module for configuring power management client module(s) on one or more clients, the power management client module being for power management of the client; a data collector module for collecting, via the power management client module(s), data related to the power management of one or more user accounts on one or more clients; and a repository module for storing the collected data.
    Type: Grant
    Filed: March 26, 2015
    Date of Patent: December 3, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Feng Golfen Guo, Grissom Tianqing Wang, Roby Qiyan Chen, Layne Lin Peng, Vivian Yun Zhang, Kay Kai Yan
  • Publication number: 20190324809
    Abstract: Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. The method comprises: obtaining status information of multiple computing resources; in response to receiving a neural network model-based computing task, determining configuration information of multiple layers associated with the neural network model; obtaining parameter data associated with at least one part of the multiple layers on the basis of the configuration information; and based on the status information and the parameter data, selecting from the multiple computing resources a group of computing resources for processing the computing task. According to the example implementations of the present disclosure, the multiple computing resources may be utilized fully, and a load balance may be struck among them.
    Type: Application
    Filed: April 12, 2019
    Publication date: October 24, 2019
    Inventors: Junping Zhao, Layne Lin Peng, Zhi Ying, Kun Wang
  • Publication number: 20190324821
    Abstract: A resource management method comprises: in response to receiving, from an application operating on a client, a resource allocation request indicating an amount of dedicated processing resources required by the application, acquiring a mapping between a group of physical dedicated processing resources provided by a group of servers and a group of logical dedicated processing resources, the group of physical dedicated processing resources being divided into the group of logical dedicated processing resources; determining allocation statuses of the group of logical dedicated processing resources; determining, based at least on the mapping and the allocation statuses, a first amount of logical dedicated processing resources to be allocated to the application from the group of logical dedicated processing resources; and indicating the first amount of logical dedicated processing resources to the application, to allow the application to utilize physical dedicated processing resources provided by at least one of the group of servers.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 24, 2019
    Inventors: Layne Lin Peng, Junping Zhao, Wei Cui
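
The sketches below are rough editorial illustrations of a few of the listed inventions; none of them reproduces the claimed implementations. The first relates to publication 20230342171 (capacity forecasting for distributed storage): historical usage data is cleaned and normalized, then a piecewise regression is fitted and extrapolated. The preprocessing steps, the single-breakpoint search, and all function names are assumptions made for illustration only.

import numpy as np

def preprocess(usage: np.ndarray) -> np.ndarray:
    """Fill missing values, clip outliers, and min-max normalize the usage series."""
    usage = usage.astype(float)
    idx = np.arange(len(usage))
    missing = np.isnan(usage)
    # Fill missing samples by linear interpolation over the time index.
    usage[missing] = np.interp(idx[missing], idx[~missing], usage[~missing])
    # Clip outliers to the 1st/99th percentiles.
    lo, hi = np.percentile(usage, [1, 99])
    usage = np.clip(usage, lo, hi)
    # Min-max normalization.
    return (usage - usage.min()) / (usage.max() - usage.min() + 1e-12)

def fit_piecewise(t: np.ndarray, y: np.ndarray):
    """Fit two linear segments joined at the breakpoint with the smallest squared error."""
    best = None
    for b in range(2, len(t) - 2):                  # candidate breakpoints
        left = np.polyfit(t[:b], y[:b], 1)          # slope/intercept of left segment
        right = np.polyfit(t[b:], y[b:], 1)         # slope/intercept of right segment
        err = (np.sum((np.polyval(left, t[:b]) - y[:b]) ** 2)
               + np.sum((np.polyval(right, t[b:]) - y[b:]) ** 2))
        if best is None or err < best[0]:
            best = (err, b, left, right)
    return best[1], best[2], best[3]

def forecast(t: np.ndarray, y: np.ndarray, horizon: int) -> np.ndarray:
    """Extrapolate the right-hand segment over the requested horizon."""
    _, _, right = fit_piecewise(t, y)
    future_t = np.arange(t[-1] + 1, t[-1] + 1 + horizon)
    return np.polyval(right, future_t)

# Example: 60 days of noisy storage usage whose growth rate changes at day 30.
days = np.arange(60)
history = np.concatenate([0.20 + 0.001 * days[:30], 0.23 + 0.010 * (days[30:] - 30)])
history = history + np.random.default_rng(0).normal(0, 0.005, size=60)
print(forecast(days, preprocess(history), horizon=14))

Fitting two segments lets the forecast follow the most recent usage trend rather than the long-term average, which is the usual motivation for piecewise regression in capacity planning.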
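
For patent 11663050 (also published as application 20190324821), a minimal sketch of the mapping-based allocation follows: physical dedicated processing resources provided by a group of servers are divided into logical resources, and an application's request is served by consulting the mapping and the allocation statuses. The class names, the fixed slice counts, and the first-fit selection are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalResource:
    server: str
    device: str                        # e.g. "gpu0"

@dataclass
class LogicalResource:
    name: str
    backing: PhysicalResource
    allocated_to: str | None = None    # application id, or None while free

class ResourceManager:
    def __init__(self, mapping: dict[PhysicalResource, int]):
        # Divide each physical resource into the requested number of logical slices.
        self.logical = [
            LogicalResource(f"{phys.server}/{phys.device}/slice{i}", phys)
            for phys, slices in mapping.items() for i in range(slices)
        ]

    def allocate(self, app_id: str, amount: int) -> list[LogicalResource]:
        """Serve a request by consulting allocation statuses of the logical resources."""
        free = [r for r in self.logical if r.allocated_to is None]
        if len(free) < amount:
            raise RuntimeError("not enough free logical dedicated processing resources")
        chosen = free[:amount]
        for r in chosen:
            r.allocated_to = app_id
        # The application can reach the physical resources backing the returned slices.
        return chosen

mgr = ResourceManager({PhysicalResource("server-a", "gpu0"): 4,
                       PhysicalResource("server-b", "gpu0"): 4})
for r in mgr.allocate("trainer-1", 3):
    print(r.name, "->", r.backing.server, r.backing.device)

Separating the logical layer from the physical one is what lets several applications share a single physical device while each sees only the slices allocated to it.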
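
For patent 11249811 (also published as application 20190324809), the sketch below assumes that per-layer parameter sizes of the neural network model are combined with resource status information to pick a balanced group of resources. The greedy "least-loaded resource with enough free memory" policy is an assumption for illustration, not the claimed selection algorithm.

from dataclasses import dataclass

@dataclass
class ResourceStatus:
    name: str
    free_memory_mb: float
    load_mb: float = 0.0               # running total of assigned parameter data

def select_resources(layer_params_mb: list[float],
                     resources: list[ResourceStatus]) -> dict[str, list[int]]:
    """Assign each layer of the model to a resource, balancing load across the group."""
    assignment: dict[str, list[int]] = {r.name: [] for r in resources}
    for layer_idx, size in enumerate(layer_params_mb):
        candidates = [r for r in resources if r.free_memory_mb >= size]
        if not candidates:
            raise RuntimeError(f"no computing resource can hold layer {layer_idx}")
        target = min(candidates, key=lambda r: r.load_mb)   # least-loaded first
        target.load_mb += size
        target.free_memory_mb -= size
        assignment[target.name].append(layer_idx)
    return assignment

# Example: per-layer parameter sizes (MB) of a small model spread over two GPUs.
gpus = [ResourceStatus("gpu0", free_memory_mb=8000),
        ResourceStatus("gpu1", free_memory_mb=6000)]
print(select_resources([500, 1200, 1200, 300, 2000], gpus))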
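
For patent 11061731, the sketch below illustrates the idea of extracting resource metadata while an application's source is compiled and allocating dedicated processing resources from it. Carrying the metadata in a "# @gpu-memory:" comment and choosing the tightest-fitting device are assumptions made purely for illustration; the patent does not prescribe this format.

import re

# Illustrative metadata carrier: a "# @gpu-memory: <n> GB" comment in the source.
METADATA_PATTERN = re.compile(r"#\s*@gpu-memory:\s*(\d+)\s*GB", re.IGNORECASE)

def extract_metadata(source_code: str) -> int:
    """During the compile step, sum the dedicated GPU memory (GB) the source declares."""
    return sum(int(m.group(1)) for m in METADATA_PATTERN.finditer(source_code))

def schedule(source_code: str, free_gpu_memory_gb: dict[str, int]) -> str:
    """Allocate the device whose free memory most tightly fits the extracted requirement."""
    need = extract_metadata(source_code)
    candidates = {dev: free for dev, free in free_gpu_memory_gb.items() if free >= need}
    if not candidates:
        raise RuntimeError(f"no device has {need} GB of free dedicated memory")
    return min(candidates, key=candidates.get)

app_source = """
# @gpu-memory: 4 GB
def train():
    pass
"""
print(schedule(app_source, {"gpu0": 16, "gpu1": 6}))   # -> gpu1 (tightest fit)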
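
For patent 10705981, the sketch below shows one way a storage service template could name a service instance to launch and a policy of storage functions to perform on the user's data. The template fields and the dispatch table of storage functions are illustrative assumptions.

# Illustrative dispatch table of storage functions a policy may request.
STORAGE_FUNCTIONS = {
    "snapshot":    lambda cfg: f"snapshot every {cfg.get('interval', '24h')}",
    "replication": lambda cfg: f"replicate to {cfg.get('target', 'site-b')}",
    "encryption":  lambda cfg: f"encrypt with {cfg.get('algorithm', 'AES-256')}",
}

def provide_storage_service(template: dict) -> list[str]:
    """Launch the requested service instance and apply each policy-defined storage function."""
    actions = [f"launch service instance '{template['instance']}'"]
    for function_name, cfg in template.get("policy", {}).items():
        if function_name not in STORAGE_FUNCTIONS:
            raise ValueError(f"unknown storage function: {function_name}")
        actions.append(STORAGE_FUNCTIONS[function_name](cfg))
    return actions

# The user customizes the required storage functions through the template.
user_template = {
    "instance": "object-store-small",
    "policy": {
        "snapshot": {"interval": "6h"},
        "encryption": {"algorithm": "AES-256"},
    },
}
for action in provide_storage_service(user_template):
    print(action)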
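
For patent 10678464, the sketch below registers executing units from description-file information and selects a unit that matches a migration request's format and performance requirement before scheduling an instance. The field names and the "smallest adequate unit" choice are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ExecutingUnit:
    name: str
    formats: set[str]                  # data formats the unit can migrate
    max_throughput_mbps: int           # performance figure from the description file

class MigrationService:
    def __init__(self) -> None:
        self.registry: list[ExecutingUnit] = []

    def register(self, description: dict) -> None:
        """Extract and store the information contained in a unit's description file."""
        self.registry.append(ExecutingUnit(description["name"],
                                           set(description["formats"]),
                                           description["max_throughput_mbps"]))

    def migrate(self, data_format: str, required_mbps: int) -> str:
        """Select a suitable executing unit for the request and schedule an instance of it."""
        suitable = [u for u in self.registry
                    if data_format in u.formats and u.max_throughput_mbps >= required_mbps]
        if not suitable:
            raise RuntimeError("no registered executing unit satisfies the request")
        unit = min(suitable, key=lambda u: u.max_throughput_mbps)   # smallest adequate unit
        return f"scheduled an instance of {unit.name} for {data_format} migration"

svc = MigrationService()
svc.register({"name": "block-mover", "formats": ["block"], "max_throughput_mbps": 800})
svc.register({"name": "object-mover", "formats": ["object", "file"], "max_throughput_mbps": 400})
print(svc.migrate("object", required_mbps=200))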