Patents by Inventor Faraz AHMED

Faraz AHMED has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240126460
    Abstract: A scheduling platform for scheduling serverless application tasks in persistent memory (PMEM) is provided. A profiler receives application requests from processes of serverless applications. The profiler categorizes the processes as persistent or non-persistent based on the application requests. A read/write batcher creates batches of the persistent requests, including read requests and write requests, and assigns the batches to persistent memory banks. A scheduler creates a schedule of the batches onto the persistent memory banks in a manner that enables optimization of job completion time.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 18, 2024
    Inventors: Faraz AHMED, Lianjie CAO, Puneet SHARMA, Amit SAMANTA
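
The abstract above describes a profile-batch-schedule pipeline for persistent-memory requests. Below is a minimal Python sketch of that flow, using a greedy least-loaded-bank heuristic as a stand-in for the scheduler's optimization; the class and function names (Request, profile, batch, schedule) are illustrative assumptions, not the patented implementation.

```python
# Minimal illustrative sketch (hypothetical names); not the patented implementation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Request:
    app: str
    op: str           # "read" or "write"
    persistent: bool  # set by the profiler based on the application request
    size: int         # bytes

def profile(requests):
    """Split incoming requests into persistent and non-persistent groups."""
    persistent = [r for r in requests if r.persistent]
    other = [r for r in requests if not r.persistent]
    return persistent, other

def batch(persistent_requests, batch_size=4):
    """Group persistent reads and writes into fixed-size batches."""
    reads = [r for r in persistent_requests if r.op == "read"]
    writes = [r for r in persistent_requests if r.op == "write"]
    batches = []
    for group in (reads, writes):
        for i in range(0, len(group), batch_size):
            batches.append(group[i:i + batch_size])
    return batches

def schedule(batches, num_banks=4):
    """Greedily assign each batch to the least-loaded PMEM bank,
    a simple heuristic for keeping completion times short."""
    load = [0] * num_banks
    assignment = defaultdict(list)
    for b in sorted(batches, key=lambda b: -sum(r.size for r in b)):
        bank = min(range(num_banks), key=lambda i: load[i])
        load[bank] += sum(r.size for r in b)
        assignment[bank].append(b)
    return assignment

if __name__ == "__main__":
    reqs = [Request("fn-a", "write", True, 4096), Request("fn-a", "read", True, 1024),
            Request("fn-b", "read", False, 512), Request("fn-b", "write", True, 8192)]
    persistent, _ = profile(reqs)
    print(schedule(batch(persistent)))
```
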
  • Patent number: 11914982
    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
    Type: Grant
    Filed: June 2, 2023
    Date of Patent: February 27, 2024
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Lianjie Cao, Anu Mercian, Diman Zad Tootaghaj, Faraz Ahmed, Puneet Sharma
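
The eCaaS abstract above describes translating declarative intents, under constraints, into a deployment template that is then executed step by step. The sketch below illustrates that shape only; the toy platform-selection rule, the template fields, and the function names are assumptions, not the patented framework.

```python
# Hypothetical sketch of intent translation into a deployment template; names are illustrative.
def translate_intents(intents, constraints):
    """Pick a container orchestration platform and build a deployment template."""
    # Toy selection rule: small-footprint edge sites get a lightweight distribution.
    if intents.get("max_nodes", 0) <= constraints.get("lightweight_node_limit", 3):
        platform = "k3s"
    else:
        platform = "kubernetes"
    return {
        "platform": platform,
        "cluster": {"nodes": intents.get("max_nodes", 1)},
        "app": intents.get("application"),
        "steps": ["provision_infrastructure", "install_platform",
                  "configure_cluster", "deploy_application"],
    }

def execute(template):
    """Walk the template's steps in order (stubbed out here)."""
    for step in template["steps"]:
        print(f"[{template['platform']}] {step}")

if __name__ == "__main__":
    execute(translate_intents({"application": "video-analytics", "max_nodes": 2},
                              {"lightweight_node_limit": 3}))
```
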
  • Publication number: 20240004710
    Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time can be selected and used to execute the remaining training jobs.
    Type: Application
    Filed: September 19, 2023
    Publication date: January 4, 2024
    Inventors: Lianjie Cao, Faraz Ahmed, Puneet Sharma
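
The abstract above describes running a first batch of training jobs on random resource configurations and guiding later batches with Bayesian optimization until a stopping criterion is reached. The sketch below keeps that batch structure but substitutes a simple epsilon-greedy selection for the Bayesian optimizer so the example stays self-contained; all names, configurations, and the simulated completion-time model are assumptions.

```python
# Illustrative sketch only: the abstract describes Bayesian optimization; here a
# simple epsilon-greedy stand-in is used so the example stays self-contained.
import random

def run_job(config):
    """Stand-in for running one training job; returns a simulated completion time."""
    cpus, mem_gb = config
    return 100.0 / (cpus * 0.8 + mem_gb * 0.2) + random.uniform(0, 2)

def pick_config(candidates, observed, epsilon=0.3):
    """Exploit the best-known config most of the time, explore otherwise."""
    if not observed or random.random() < epsilon:
        return random.choice(candidates)
    return min(observed, key=observed.get)  # lowest completion time seen so far

def allocate(candidates, total_jobs=20, first_batch=5, batch_size=5):
    observed = {}
    # First batch: randomly selected configurations.
    for _ in range(first_batch):
        c = random.choice(candidates)
        observed[c] = min(observed.get(c, float("inf")), run_job(c))
    done = first_batch
    # Subsequent batches: guided selection until the stopping criterion (all jobs run).
    while done < total_jobs:
        c = pick_config(candidates, observed)
        for _ in range(min(batch_size, total_jobs - done)):
            observed[c] = min(observed.get(c, float("inf")), run_job(c))
            done += 1
    return min(observed, key=observed.get)

if __name__ == "__main__":
    configs = [(c, m) for c in (2, 4, 8) for m in (4, 8, 16)]  # (vCPUs, GB RAM)
    print("selected configuration:", allocate(configs))
```
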
  • Patent number: 11797340
    Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time can be selected and used to execute the remaining training jobs.
    Type: Grant
    Filed: May 14, 2020
    Date of Patent: October 24, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Lianjie Cao, Faraz Ahmed, Puneet Sharma
  • Publication number: 20230325166
    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
    Type: Application
    Filed: June 2, 2023
    Publication date: October 12, 2023
    Inventors: Lianjie Cao, Anu Mercian, Diman Zad Tootaghaj, Faraz Ahmed, Puneet Sharma
  • Publication number: 20230275848
    Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, a resource efficiency value of the plurality of resource containers in the distributed network is determined. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process, or when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The second distributed resource configuration may then be transmitted to update the resource allocation.
    Type: Application
    Filed: May 3, 2023
    Publication date: August 31, 2023
    Inventors: Ali Tariq, Lianjie Cao, Faraz Ahmed, Puneet Sharma
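
The abstract above describes a two-threshold decision: once the workload breaches a stabilization threshold, the efficiency of the current containers determines whether to upscale (grow existing containers) or outscale (add containers). A hedged sketch of that decision flow follows; the metric, thresholds, and names are assumptions for illustration only.

```python
# Hedged sketch of the described decision flow; thresholds and names are assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ResourceConfig:
    containers: int
    cpu_per_container: float

def resource_efficiency(work_done, cpu_seconds_used):
    """Toy efficiency metric: useful work per CPU-second consumed."""
    return work_done / cpu_seconds_used if cpu_seconds_used else 0.0

def next_config(current, workload, stabilization_threshold,
                efficiency, efficiency_threshold):
    """Return an updated configuration when the workload breaches the threshold."""
    if workload <= stabilization_threshold:
        return current  # network is stable; keep the first configuration
    if efficiency >= efficiency_threshold:
        # Upscaling: grow the resources of the existing containers.
        return replace(current, cpu_per_container=current.cpu_per_container * 2)
    # Outscaling: add more containers instead.
    return replace(current, containers=current.containers + 2)

if __name__ == "__main__":
    cfg = ResourceConfig(containers=4, cpu_per_container=1.0)
    eff = resource_efficiency(work_done=900, cpu_seconds_used=1200)
    print(next_config(cfg, workload=850, stabilization_threshold=500,
                      efficiency=eff, efficiency_threshold=0.5))
```
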
  • Publication number: 20230222034
    Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node. Each node's vote is biased by the node's vote weight. The node having a number of biased votes higher than a maximum possible number of votes biased by respective vote weights received by any other node in the cluster is selected as a leader node.
    Type: Application
    Filed: February 27, 2023
    Publication date: July 13, 2023
    Inventors: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed, Michael Zayats
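
The abstract above ties leader election to measured network conditions: latency sets leadership priority, priority sets vote weight, and weighted votes decide the leader. The sketch below simplifies the election to "highest weighted vote total wins" rather than the exact biased-vote comparison described, and every name in it is an assumption.

```python
# Illustrative sketch (not the patented protocol): latencies drive priorities,
# priorities set vote weights, and weighted votes pick the leader.
def leadership_priorities(latencies_ms):
    """Lower measured latency -> higher leadership priority."""
    return {node: 1.0 / latency for node, latency in latencies_ms.items()}

def elect_leader(votes, priorities):
    """Each vote is biased by the voter's weight; the highest weighted total wins."""
    totals = {node: 0.0 for node in priorities}
    for voter, candidate in votes.items():
        totals[candidate] += priorities[voter]
    return max(totals, key=totals.get)

if __name__ == "__main__":
    latencies = {"n1": 5.0, "n2": 40.0, "n3": 12.0}   # monitored round-trip times
    prio = leadership_priorities(latencies)
    votes = {"n1": "n1", "n2": "n1", "n3": "n3"}      # who each node voted for
    print("leader:", elect_leader(votes, prio))
```
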
  • Patent number: 11698780
    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
    Type: Grant
    Filed: April 21, 2021
    Date of Patent: July 11, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Lianjie Cao, Anu Mercian, Diman Zad Tootaghaj, Faraz Ahmed, Puneet Sharma
  • Patent number: 11665106
    Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, a resource efficiency value of the plurality of resource containers in the distributed network is determined. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process, or when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The second distributed resource configuration may then be transmitted to update the resource allocation.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: May 30, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Ali Tariq, Lianjie Cao, Faraz Ahmed, Puneet Sharma
  • Publication number: 20230123074
    Abstract: Systems, methods, and computer-readable media are described for employing a machine learning-based approach such as adaptive Bayesian optimization to learn over time the most optimized assignments of incoming network requests to service function chains (SFCs) created within network slices of a 5G network. An optimized SFC assignment may be an assignment that minimizes an unknown objective function for a given set of incoming network service requests. For example, an optimized SFC assignment may be one that minimizes request response time or one that maximizes throughput for one or more network service requests corresponding to one or more network service types. The optimized SFC for a network request of a given network service type may change over time based on the dynamic nature of network performance. The machine-learning based approaches described herein train a model to dynamically determine optimized SFC assignments based on the dynamically changing network conditions.
    Type: Application
    Filed: October 15, 2021
    Publication date: April 20, 2023
    Inventors: Faraz Ahmed, Lianjie Cao, Puneet Sharma
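
The abstract above learns, over time, which service function chain (SFC) to assign to each type of network request as conditions change. The sketch below uses a lower-confidence-bound bandit as a self-contained stand-in for the adaptive Bayesian optimization described; the class, its fields, and the SFC names are assumptions.

```python
# Sketch only: the abstract describes adaptive Bayesian optimization; a simple
# UCB-style bandit stand-in is shown here to keep the example self-contained.
import math, random
from collections import defaultdict

class SFCAssigner:
    def __init__(self, sfcs):
        self.sfcs = sfcs
        self.counts = defaultdict(lambda: defaultdict(int))
        self.mean_rt = defaultdict(lambda: defaultdict(float))
        self.total = defaultdict(int)

    def choose(self, service_type):
        """Pick the SFC expected to minimize response time, with an exploration bonus."""
        self.total[service_type] += 1
        def score(sfc):
            n = self.counts[service_type][sfc]
            if n == 0:
                return float("-inf")  # try every SFC at least once
            bonus = math.sqrt(2 * math.log(self.total[service_type]) / n)
            return self.mean_rt[service_type][sfc] - bonus
        return min(self.sfcs, key=score)

    def observe(self, service_type, sfc, response_time):
        """Update the running mean as network conditions change."""
        n = self.counts[service_type][sfc] + 1
        self.counts[service_type][sfc] = n
        m = self.mean_rt[service_type][sfc]
        self.mean_rt[service_type][sfc] = m + (response_time - m) / n

if __name__ == "__main__":
    assigner = SFCAssigner(["sfc-a", "sfc-b", "sfc-c"])
    for _ in range(50):
        sfc = assigner.choose("video")
        assigner.observe("video", sfc, random.uniform(5, 20))
    print("preferred SFC for 'video':", assigner.choose("video"))
```
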
  • Publication number: 20230071281
    Abstract: Systems and methods are provided for updating resource allocation in a distributed network. For example, the method may comprise allocating a plurality of resource containers in a distributed network in accordance with a first distributed resource configuration. Upon determining that a processing workload value exceeds a stabilization threshold of the distributed network, a resource efficiency value of the plurality of resource containers in the distributed network is determined. When the resource efficiency value is greater than or equal to a threshold resource efficiency value, the method may generate a second distributed resource configuration that includes a resource upscaling process, or when the resource efficiency value is less than the threshold resource efficiency value, the method may generate the second distributed resource configuration that includes a resource outscaling process. The second distributed resource configuration may then be transmitted to update the resource allocation.
    Type: Application
    Filed: September 7, 2021
    Publication date: March 9, 2023
    Inventors: Ali Tariq, Lianjie Cao, Faraz Ahmed, Puneet Sharma
  • Patent number: 11593210
    Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node. Each node's vote is biased by the node's vote weight. The node having a number of biased votes higher than a maximum possible number of votes biased by respective vote weights received by any other node in the cluster is selected as a leader node.
    Type: Grant
    Filed: December 29, 2020
    Date of Patent: February 28, 2023
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed, Michael Zayats
  • Patent number: 11502936
    Abstract: An example network orchestrator includes processing circuitry and a memory. The memory includes instructions that cause the network orchestrator to receive network probe information including delay times of network probes associated with a set of flows between devices. The instructions further cause the network orchestrator to generate a correlation matrix including correlations representing shared congested links between pairs of flows. The instructions further cause the network orchestrator to determine, for each flow of the set of flows, a routing solution optimized for that flow, and to select a total minimum cost solution from the determined routing solutions.
    Type: Grant
    Filed: April 18, 2019
    Date of Patent: November 15, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed
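
The abstract above builds a correlation matrix from probe delay times to infer shared congested links, then chooses routing solutions by cost. The sketch below shows one plausible reading: Pearson correlation of per-flow delay series plus a simplified per-flow minimum-cost route selection; the cost aggregation and all names are assumptions.

```python
# Illustrative sketch of correlating probe delays across flows; names are assumptions.
from statistics import mean, pstdev
from itertools import combinations

def correlation(xs, ys):
    """Pearson correlation of two equal-length delay series."""
    mx, my, sx, sy = mean(xs), mean(ys), pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

def correlation_matrix(probe_delays):
    """High delay correlation between two flows suggests a shared congested link."""
    return {(a, b): correlation(probe_delays[a], probe_delays[b])
            for a, b in combinations(sorted(probe_delays), 2)}

def pick_routing(per_flow_solutions):
    """Choose, per flow, the candidate route with minimum cost, then sum the costs."""
    chosen = {f: min(routes, key=routes.get) for f, routes in per_flow_solutions.items()}
    total = sum(per_flow_solutions[f][r] for f, r in chosen.items())
    return chosen, total

if __name__ == "__main__":
    delays = {"f1": [10, 30, 25, 40], "f2": [12, 33, 27, 41], "f3": [8, 9, 10, 9]}
    print(correlation_matrix(delays))
    print(pick_routing({"f1": {"r1": 3.0, "r2": 2.5}, "f2": {"r1": 4.0, "r3": 3.5}}))
```
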
  • Publication number: 20220342649
    Abstract: Embodiments described herein are generally directed to an edge-CaaS (eCaaS) framework for providing life-cycle management of containerized applications on the edge. According to an example, declarative intents are received indicative of a use case for which a cluster of a container orchestration platform is to be deployed within an edge site that is to be created based on infrastructure associated with a private network. A deployment template is created by performing intent translation on the declarative intents and based on a set of constraints. The deployment template identifies the container orchestration platform selected by the intent translation. The deployment template is then executed to deploy and configure the edge site, including provisioning and configuring the infrastructure, installing the container orchestration platform on the infrastructure, configuring the cluster within the container orchestration platform, and deploying a containerized application or portion thereof on the cluster.
    Type: Application
    Filed: April 21, 2021
    Publication date: October 27, 2022
    Inventors: Lianjie Cao, Anu Mercian, Diman Zad Tootaghaj, Faraz Ahmed, Puneet Sharma
  • Publication number: 20220292303
    Abstract: Systems and methods can be configured to determine a plurality of computing resource configurations used to perform machine learning model training jobs. A computing resource configuration can comprise: a first tuple including numbers of worker nodes and parameter server nodes, and a second tuple including resource allocations for the worker nodes and parameter server nodes. At least one machine learning training job can be executed using a first computing resource configuration having a first set of values associated with the first tuple. During the executing the machine learning training job: resource usage of the worker nodes and parameter server nodes caused by a second set of values associated with the second tuple can be monitored, and whether to adjust the second set of values can be determined. Whether a stopping criterion is satisfied can be determined. One of the plurality of computing resource configurations can be selected.
    Type: Application
    Filed: March 11, 2021
    Publication date: September 15, 2022
    Inventors: Lianjie Cao, Faraz Ahmed, Puneet Sharma, Ali Tariq
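
The abstract above splits a training configuration into two tuples: node counts (workers, parameter servers) and per-node resource allocations, with the second tuple adjusted while a job runs based on monitored usage. A hedged sketch of that adjustment loop follows; the utilization thresholds, scaling factors, and names are assumptions.

```python
# Hedged sketch of the two-tuple configuration idea; all names are illustrative.
from dataclasses import dataclass

@dataclass
class Config:
    workers: int            # first tuple: node counts
    parameter_servers: int
    worker_cpu: float       # second tuple: per-node resource allocations
    ps_cpu: float

def monitor_and_adjust(cfg, worker_util, ps_util, low=0.4, high=0.9):
    """Shrink allocations that sit idle, grow allocations that are saturated."""
    if worker_util < low:
        cfg.worker_cpu *= 0.8
    elif worker_util > high:
        cfg.worker_cpu *= 1.25
    if ps_util < low:
        cfg.ps_cpu *= 0.8
    elif ps_util > high:
        cfg.ps_cpu *= 1.25
    return cfg

if __name__ == "__main__":
    cfg = Config(workers=4, parameter_servers=2, worker_cpu=2.0, ps_cpu=1.0)
    # Observed utilization while one training job runs with this configuration.
    print(monitor_and_adjust(cfg, worker_util=0.95, ps_util=0.30))
```
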
  • Patent number: 11431609
    Abstract: An example client device includes processing circuitry and a memory including instructions that, when executed by the processing circuitry, cause the client device to undertake certain actions. Certain instructions cause the device to periodically measure active network performance data for a network, calculate expected rewards for a plurality of entry points, select an expected best entry point based on the expected rewards, route data to the selected entry point, measure passive network performance data for the selected entry point, and update a reinforcement learning algorithm based in part on the measured passive network performance data.
    Type: Grant
    Filed: April 30, 2020
    Date of Patent: August 30, 2022
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Faraz Ahmed, Puneet Sharma, Diman Zad Tootaghaj
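
The abstract above describes a measure-select-route-update loop over network entry points. The sketch below uses an epsilon-greedy running-average bandit as a stand-in for the reinforcement learning algorithm described; the entry-point names, reward model, and learning rate are assumptions.

```python
# Illustrative sketch (names are assumptions): a simple running-average bandit stands
# in for the reinforcement learning algorithm described in the abstract.
import random

class EntryPointSelector:
    def __init__(self, entry_points, learning_rate=0.2):
        self.expected_reward = {e: 0.0 for e in entry_points}
        self.lr = learning_rate

    def select(self, epsilon=0.2):
        """Mostly pick the highest expected reward, occasionally explore."""
        if random.random() < epsilon:
            return random.choice(list(self.expected_reward))
        return max(self.expected_reward, key=self.expected_reward.get)

    def update(self, entry_point, measured_reward):
        """Blend the new passive measurement into the expected reward."""
        old = self.expected_reward[entry_point]
        self.expected_reward[entry_point] = old + self.lr * (measured_reward - old)

def measure_passive_performance(entry_point):
    """Stand-in for passive measurements taken while data is routed."""
    base = {"gw-east": 0.7, "gw-west": 0.9}[entry_point]
    return base + random.uniform(-0.1, 0.1)

if __name__ == "__main__":
    selector = EntryPointSelector(["gw-east", "gw-west"])
    for _ in range(20):
        ep = selector.select()
        selector.update(ep, measure_passive_performance(ep))
    print("preferred entry point:", selector.select(epsilon=0.0))
```
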
  • Publication number: 20220206900
    Abstract: Example implementations relate to consensus protocols in a stretched network. According to an example, network performance and/or network latency are continuously monitored among a cluster of a plurality of nodes in a distributed computer system. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node. Each node's vote is biased by the node's vote weight. The node having a number of biased votes higher than a maximum possible number of votes biased by respective vote weights received by any other node in the cluster is selected as a leader node.
    Type: Application
    Filed: December 29, 2020
    Publication date: June 30, 2022
    Inventors: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed, Michael Zayats
  • Publication number: 20210392070
    Abstract: An example network orchestrator includes processing circuitry and a memory. The memory includes instructions that cause the network orchestrator to receive network probe information including delay times of network probes associated with a set of flows between devices. The instructions further cause the network orchestrator to generate a correlation matrix including correlations representing shared congested links between pairs of flows. The instructions further cause the network orchestrator to determine, for each flow of the set of flows, a routing solution optimized for that flow, and to select a total minimum cost solution from the determined routing solutions.
    Type: Application
    Filed: April 18, 2019
    Publication date: December 16, 2021
    Inventors: Diman Zad Tootaghaj, Puneet Sharma, Faraz Ahmed
  • Publication number: 20210357256
    Abstract: Systems and methods are provided for optimally allocating resources used to perform multiple tasks/jobs, e.g., machine learning training jobs. The possible resource configurations or candidates that can be used to perform such jobs are generated. A first batch of training jobs can be randomly selected and run using one of the possible resource configuration candidates. Subsequent batches of training jobs may be performed using other resource configuration candidates that have been selected using an optimization process, e.g., Bayesian optimization. Upon reaching a stopping criterion, the resource configuration resulting in a desired optimization metric, e.g., fastest job completion time can be selected and used to execute the remaining training jobs.
    Type: Application
    Filed: May 14, 2020
    Publication date: November 18, 2021
    Inventors: Lianjie Cao, Faraz Ahmed, Puneet Sharma
  • Publication number: 20210344587
    Abstract: An example client device includes processing circuitry and a memory including instructions that, when executed by the processing circuitry, cause the client device to undertake certain actions. Certain instructions cause the device to periodically measure active network performance data for a network, calculate expected rewards for a plurality of entry points, select an expected best entry point based on the expected rewards, route data to the selected entry point, measure passive network performance data for the selected entry point, and update a reinforcement learning algorithm based in part on the measured passive network performance data.
    Type: Application
    Filed: April 30, 2020
    Publication date: November 4, 2021
    Inventors: Faraz AHMED, Puneet SHARMA, Diman ZAD TOOTAGHAJ