Patents by Inventor Ishai Menache

Ishai Menache has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240419428
    Abstract: Updates are managed across partitions in a distributed cloud allocation system. Updates are managed in a variety of dimensions, e.g., by partition, time, or upgrade domain, to maintain a sufficient number of allocator instances to maintain service. An update service may receive, organize, schedule, and deliver updates to VM allocator instances to limit service disruptions. Updates may be aggregated based on partition scope. Updates to one or more partitions may be batched in a single update. Delivery and timing of updates may be configurable on a per partition basis. Allocator instances may receive batched updates at the same or different times. An update service may dynamically adapt to prevailing service conditions if an essential update is in progress and/or request demand is above a threshold.
    Type: Application
    Filed: June 13, 2023
    Publication date: December 19, 2024
    Inventors: Kyung Hoon SEO, Abhisek PAN, Robert Warren GRUEN, Yaswanth MALLEEDI, Ishai MENACHE, David Allen DION, Thomas MOSCIBRODA
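    A minimal sketch may help illustrate the batching behavior this entry describes. The Python below is hypothetical (class names, fields, and the demand threshold are illustrative assumptions, not the patent's implementation); it aggregates updates by partition scope and defers a batch when request demand is above a threshold, unless an essential update is waiting.

    ```python
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Update:
        partition: str           # partition scope the update targets
        payload: str             # opaque update content
        essential: bool = False  # essential updates are never deferred

    @dataclass
    class UpdateService:
        """Illustrative batching of allocator updates by partition scope."""
        pending: dict = field(default_factory=lambda: defaultdict(list))

        def receive(self, update: Update) -> None:
            # Aggregate updates that share a partition scope.
            self.pending[update.partition].append(update)

        def deliver(self, partition: str, demand: float, threshold: float = 0.8) -> list:
            # Defer delivery when request demand is above the threshold,
            # unless an essential update is part of the batch.
            batch = self.pending.get(partition, [])
            if not batch:
                return []
            if demand > threshold and not any(u.essential for u in batch):
                return []  # keep the batch pending to limit service disruption
            self.pending[partition] = []
            return batch   # one batched update covering the whole partition

    if __name__ == "__main__":
        svc = UpdateService()
        svc.receive(Update("partition-A", "allocation rules v2"))
        svc.receive(Update("partition-A", "capacity map v7"))
        print(svc.deliver("partition-A", demand=0.3))  # both updates in a single batch
    ```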
  • Publication number: 20240419472
    Abstract: A search space for allocating a virtual machine is pruned. An allocation request for allocating a virtual machine to a plurality of clusters is received. A valid set of clusters is generated. The valid set of clusters includes clusters of the plurality of clusters that satisfy the allocation request. An attribute associated with the allocation request is identified. A truncation parameter is determined, by a trained search space classification model, based on the identified attribute. The valid set of clusters is filtered based on the truncation parameter. A server is selected from the filtered valid set of clusters. The virtual machine is allocated to the selected server. In an aspect of the disclosure, a search space pruner generates an analysis summary based on an analysis of received telemetry data. The search space pruner trains the search space classification model to determine truncation parameters based on the analysis summary.
    Type: Application
    Filed: June 19, 2023
    Publication date: December 19, 2024
    Inventors: Saurabh AGARWAL, Abhisek PAN, Brendon MACHADO, David Allen DION, Ishai MENACHE, Karthikeyan SUBRAMANIAN, Luke Jonathon MARSHALL, Neha KESHARI, Thomas MOSCIBRODA, Yiran WEI
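    As a rough illustration of the pruning flow in this entry, the hypothetical Python sketch below filters a valid set of clusters using a truncation parameter derived from a request attribute and then selects a target; the lookup-table "model", scoring field, and all names are stand-ins, not the patent's trained classifier.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Cluster:
        name: str
        free_cores: int
        score: float  # e.g., packing quality; higher is better

    def truncation_parameter(request_size: str) -> int:
        # Stand-in for a trained search-space classification model:
        # map a request attribute to how many clusters to keep.
        return {"small": 3, "medium": 5, "large": 10}.get(request_size, 5)

    def allocate(clusters, needed_cores: int, request_size: str):
        # 1. Valid set: clusters that can satisfy the allocation request.
        valid = [c for c in clusters if c.free_cores >= needed_cores]
        # 2. Prune the search space to the top-k clusters by score.
        k = truncation_parameter(request_size)
        pruned = sorted(valid, key=lambda c: c.score, reverse=True)[:k]
        # 3. Select a target from the pruned set (best score here).
        return pruned[0].name if pruned else None

    if __name__ == "__main__":
        clusters = [Cluster(f"c{i}", free_cores=i * 4, score=1.0 / (i + 1)) for i in range(1, 8)]
        print(allocate(clusters, needed_cores=8, request_size="small"))
    ```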
  • Patent number: 11474945
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Grant
    Filed: June 2, 2021
    Date of Patent: October 18, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
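    The prefetching pipeline summarized here (workload analysis, future-job prediction, cache assignment, bandwidth assignment) can be sketched with a toy planner. Everything below is an illustrative assumption, not the patented method: it prefetches the inputs of predicted jobs into a bounded local cache and sizes each transfer so it completes before the job starts.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        input_dataset: str
        input_bandwidth_gbps: float  # observed read bandwidth for the dataset
        start_in_minutes: int        # predicted time until the job runs

    def plan_prefetch(predicted_jobs, cache_capacity_gb: float,
                      dataset_sizes_gb: dict, link_capacity_gbps: float):
        """Toy cache + bandwidth plan: prefetch the soonest, most bandwidth-hungry inputs."""
        plan = []
        used_cache = 0.0
        # Prioritize jobs that start soon and read their inputs at high bandwidth.
        for job in sorted(predicted_jobs,
                          key=lambda j: (j.start_in_minutes, -j.input_bandwidth_gbps)):
            size = dataset_sizes_gb[job.input_dataset]
            if used_cache + size > cache_capacity_gb:
                continue  # dataset does not fit in the local cache
            # Spread the transfer over the time remaining before the job starts.
            needed_gbps = (size * 8) / (job.start_in_minutes * 60)
            bandwidth = min(needed_gbps, link_capacity_gbps)
            plan.append((job.input_dataset, size, round(bandwidth, 3)))
            used_cache += size
        return plan

    if __name__ == "__main__":
        jobs = [Job("etl", "logs-2024", 4.0, 30), Job("train", "features-v3", 2.0, 120)]
        sizes = {"logs-2024": 200.0, "features-v3": 500.0}
        print(plan_prefetch(jobs, cache_capacity_gb=800.0,
                            dataset_sizes_gb=sizes, link_capacity_gbps=10.0))
    ```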
  • Publication number: 20210286728
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Application
    Filed: June 2, 2021
    Publication date: September 16, 2021
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Patent number: 11055225
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Grant
    Filed: October 22, 2019
    Date of Patent: July 6, 2021
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Publication number: 20210096996
    Abstract: Methods, systems, apparatuses, and computer program products are provided for prefetching data. A workload analyzer may identify job characteristics for a plurality of previously executed jobs in a workload executing on a cluster of one or more compute resources. For each job, identified job characteristics may include identification of an input dataset and an input bandwidth characteristic for the input dataset. A future workload predictor may identify future jobs expected to execute on the cluster based at least on the identified job characteristics. A cache assignment determiner may determine a cache assignment that identifies a prefetch dataset for at least one of the future jobs. A network bandwidth allocator may determine a network bandwidth assignment for the prefetch dataset. A plan instructor may instruct a compute resource of the cluster to load data to a cache local to the cluster according to the cache assignment and the network bandwidth assignment.
    Type: Application
    Filed: October 22, 2019
    Publication date: April 1, 2021
    Inventors: Virajith Jalaparti, Sriram S. Rao, Christopher W. Douglas, Ashvin Agrawal, Avrilia Floratou, Ishai Menache, Srikanth Kandula, Mainak Ghosh, Joseph Naor
  • Patent number: 10747665
    Abstract: In an embodiment, a partition cost of one or more of the plurality of partitions and a data block cost for one or more data blocks that may be subjected to a garbage collection operation are determined. The partition cost and the data block cost are combined into an overall reclaim cost by specifying both the partition cost and the data block cost in terms of a computing system latency. A byte constant multiplier that is configured to modify the overall reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation may be applied. The one or more partitions and/or one or more data blocks that have the lowest overall reclaim cost while reclaiming an acceptable amount of data block space may be determined and be included in a garbage collection schedule.
    Type: Grant
    Filed: April 1, 2019
    Date of Patent: August 18, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Shane Kumar Mainali, Rushi Srinivas Surla, Peter Bodik, Ishai Menache, Yang Lu
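    A small worked example may clarify how a partition cost and a data block cost, both expressed as latency, can be combined with a byte constant multiplier into an overall reclaim cost. The Python below is a toy sketch with made-up numbers and names, not the patent's cost model.

    ```python
    def reclaim_cost(partition_latency_ms: float, block_latency_ms: float,
                     live_bytes_to_rewrite: int, byte_constant_ms_per_mb: float) -> float:
        """Toy overall reclaim cost: partition cost + block cost, both in latency terms,
        plus a byte-constant term for live data that must be rewritten during GC."""
        rewrite_cost_ms = (live_bytes_to_rewrite / 1_000_000) * byte_constant_ms_per_mb
        return partition_latency_ms + block_latency_ms + rewrite_cost_ms

    def schedule(candidates, reclaimable_floor_bytes: int,
                 byte_constant_ms_per_mb: float = 0.5):
        # Keep only candidates that reclaim an acceptable amount of space,
        # then order them by lowest overall reclaim cost.
        eligible = [c for c in candidates if c["reclaimable_bytes"] >= reclaimable_floor_bytes]
        return sorted(eligible, key=lambda c: reclaim_cost(
            c["partition_latency_ms"], c["block_latency_ms"],
            c["live_bytes"], byte_constant_ms_per_mb))

    if __name__ == "__main__":
        candidates = [
            {"id": "block-17", "partition_latency_ms": 4.0, "block_latency_ms": 2.0,
             "live_bytes": 50_000_000, "reclaimable_bytes": 900_000_000},
            {"id": "block-42", "partition_latency_ms": 1.5, "block_latency_ms": 3.0,
             "live_bytes": 10_000_000, "reclaimable_bytes": 700_000_000},
        ]
        for c in schedule(candidates, reclaimable_floor_bytes=500_000_000):
            print(c["id"])  # block-42 first: lower overall reclaim cost
    ```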
  • Patent number: 10693812
    Abstract: Various technologies pertaining to scheduling network traffic in a network are described. A request to transfer data from a first computing device to a second computing device includes data that identifies a volume of the data to be transferred and a deadline, where the data is to be transferred prior to the deadline. A long-term schedule is computed based upon the request, wherein the long-term schedule defines flow of traffic through the network over a relatively long time horizon. A short-term schedule is computed based upon the long-term schedule, where devices in the network are configured based upon the short-term schedule.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Srikanth Kandula, Ishai Menache, Roy Schwartz
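    A toy sketch of the two-level idea in this entry: a long-term schedule reserves per-slot rates so each transfer finishes by its deadline, and a short-term schedule reads off the rates to apply right now. The slotting, even spreading, and admission rule below are simplifying assumptions, not the patented algorithm.

    ```python
    def long_term_schedule(requests, link_capacity_gbps: float, horizon_slots: int,
                           slot_seconds: int = 300):
        """Toy long-term plan: spread each transfer's volume evenly over the slots
        before its deadline, rejecting requests that would exceed link capacity."""
        slot_load = [0.0] * horizon_slots          # Gbps reserved per slot
        plan = {}
        for name, volume_gb, deadline_slot in sorted(requests, key=lambda r: r[2]):
            slots = range(deadline_slot)
            rate = (volume_gb * 8) / (deadline_slot * slot_seconds)  # Gbps per slot
            if any(slot_load[s] + rate > link_capacity_gbps for s in slots):
                plan[name] = None                  # cannot meet the deadline
                continue
            for s in slots:
                slot_load[s] += rate
            plan[name] = rate
        return plan, slot_load

    def short_term_schedule(plan):
        # Toy short-term step: the rate each admitted transfer should use right now.
        return {name: rate for name, rate in plan.items() if rate is not None}

    if __name__ == "__main__":
        requests = [("backup", 900.0, 6), ("replica-sync", 300.0, 3)]
        plan, _ = long_term_schedule(requests, link_capacity_gbps=10.0, horizon_slots=12)
        print(plan)
        print(short_term_schedule(plan))
    ```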
  • Patent number: 10644966
    Abstract: A system for managing allocation of resources based on service level agreements between application owners and cloud operators. Under some service level agreements, the cloud operator may have responsibility for managing allocation of resources to the software application and may manage the allocation such that the software application executes within an agreed performance level. Operating a cloud computing platform according to such a service level agreement may alleviate for the application owners the complexities of managing allocation of resources and may provide greater flexibility to cloud operators in managing their cloud computing platforms.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: May 5, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Navendu Jain, Ishai Menache
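    One way to picture an operator managing allocation against an agreed performance level is a simple control loop that scales instances when observed latency strays from the SLA target. The sketch below is purely illustrative (thresholds, bounds, and names are assumptions), not the system described in the patent.

    ```python
    def adjust_allocation(current_instances: int, observed_latency_ms: float,
                          sla_latency_ms: float, min_instances: int = 1,
                          max_instances: int = 64) -> int:
        """Toy operator-side loop: keep the application within its agreed
        performance level by scaling instances up or down."""
        if observed_latency_ms > sla_latency_ms:          # SLA at risk: add capacity
            return min(current_instances + 1, max_instances)
        if observed_latency_ms < 0.5 * sla_latency_ms:    # well within SLA: reclaim
            return max(current_instances - 1, min_instances)
        return current_instances

    if __name__ == "__main__":
        instances = 4
        for latency in (180.0, 210.0, 90.0, 40.0):        # observed latencies (ms)
            instances = adjust_allocation(instances, latency, sla_latency_ms=200.0)
            print(latency, "->", instances)
    ```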
  • Patent number: 10509687
    Abstract: There is provided a method and system for process migration in a data center network. The method includes selecting processes to be migrated from a number of overloaded servers within a data center network based on an overload status of each overloaded server. Additionally, the method includes selecting, for each selected process, one of a number of underloaded servers to which to migrate the selected process based on an underload status of each underloaded server, and based on a parameter of a network component by which the selected process is to be migrated. The method also includes migrating each selected process to the selected underloaded server such that a migration finishes within a specified budget.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: December 17, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Navendu Jain, Ishai Menache, F. Bruce Shepherd, Joseph (Seffi) Naor
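    The migration selection in this entry can be sketched as a greedy matching of processes on overloaded servers to underloaded targets, stopping when a migration budget is exhausted. The Python below is a simplified, hypothetical illustration; capacities, the per-process cost, and the greedy order are assumptions rather than the patented method.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        load: float           # fraction of capacity in use
        free_capacity: float  # headroom (GB) for migrated processes

    def plan_migrations(overloaded, underloaded, processes, budget_gb: float,
                        gb_per_process: float = 2.0):
        """Toy plan: move processes off the most overloaded servers to the
        least loaded targets until the migration budget is exhausted."""
        moves, spent = [], 0.0
        targets = sorted(underloaded, key=lambda s: s.load)
        for proc, source in sorted(processes, key=lambda p: -p[1].load):
            if source not in overloaded or spent + gb_per_process > budget_gb:
                continue
            for target in targets:
                if target.free_capacity >= gb_per_process:
                    moves.append((proc, source.name, target.name))
                    target.free_capacity -= gb_per_process
                    spent += gb_per_process
                    break
        return moves

    if __name__ == "__main__":
        hot = Server("hot-1", load=0.95, free_capacity=0.0)
        cool = Server("cool-1", load=0.30, free_capacity=8.0)
        print(plan_migrations([hot], [cool],
                              [("web-worker", hot), ("indexer", hot)], budget_gb=4.0))
    ```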
  • Publication number: 20190260692
    Abstract: Various technologies pertaining to scheduling network traffic in a network are described. A request to transfer data from a first computing device to a second computing device includes data that identifies a volume of the data to be transferred and a deadline, where the data is to be transferred prior to the deadline. A long-term schedule is computed based upon the request, wherein the long-term schedule defines flow of traffic through the network over a relatively long time horizon. A short-term schedule is computed based upon the long-term schedule, where devices in the network are configured based upon the short-term schedule.
    Type: Application
    Filed: January 18, 2019
    Publication date: August 22, 2019
    Inventors: Srikanth Kandula, Ishai Menache, Roy Schwartz
  • Publication number: 20190227928
    Abstract: In an embodiment, a partition cost of one or more of the plurality of partitions and a data block cost for one or more data blocks that may be subjected to a garbage collection operation are determined. The partition cost and the data block cost are combined into an overall reclaim cost by specifying both the partition cost and the data block cost in terms of a computing system latency. A byte constant multiplier that is configured to modify the overall reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation may be applied. The one or more partitions and/or one or more data blocks that have the lowest overall reclaim cost while reclaiming an acceptable amount of data block space may be determined and be included in a garbage collection schedule.
    Type: Application
    Filed: April 1, 2019
    Publication date: July 25, 2019
    Inventors: Shane Kumar MAINALI, Rushi Srinivas SURLA, Peter BODIK, Ishai MENACHE, Yang LU
  • Patent number: 10248562
    Abstract: In an embodiment, a partition cost of one or more of the plurality of partitions and a data block cost for one or more data blocks that may be subjected to a garbage collection operation are determined. The partition cost and the data block cost are combined into an overall reclaim cost by specifying both the partition cost and the data block cost in terms of a computing system latency. A byte constant multiplier that is configured to modify the overall reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation may be applied. The one or more partitions and/or one or more data blocks that have the lowest overall reclaim cost while reclaiming an acceptable amount of data block space may be determined and be included in a garbage collection schedule.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: April 2, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Shane Kumar Mainali, Rushi Srinivas Surla, Peter Bodik, Ishai Menache, Yang Lu
  • Patent number: 10218639
    Abstract: Various technologies pertaining to scheduling network traffic in a network are described. A request to transfer data from a first computing device to a second computing device includes data that identifies a volume of the data to be transferred and a deadline, where the data is to be transferred prior to the deadline. A long-term schedule is computed based upon the request, wherein the long-term schedule defines flow of traffic through the network over a relatively long time horizon. A short-term schedule is computed based upon the long-term schedule, where devices in the network are configured based upon the short-term schedule.
    Type: Grant
    Filed: March 14, 2014
    Date of Patent: February 26, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Srikanth Kandula, Ishai Menache, Roy Schwartz
  • Publication number: 20190004943
    Abstract: In an embodiment, a partition cost of one or more of the plurality of partitions and a data block cost for one or more data blocks that may be subjected to a garbage collection operation are determined. The partition cost and the data block cost are combined into an overall reclaim cost by specifying both the partition cost and the data block cost in terms of a computing system latency. A byte constant multiplier that is configured to modify the overall reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation may be applied. The one or more partitions and/or one or more data blocks that have the lowest overall reclaim cost while reclaiming an acceptable amount of data block space may be determined and be included in a garbage collection schedule.
    Type: Application
    Filed: June 30, 2017
    Publication date: January 3, 2019
    Inventors: Shane Kumar MAINALI, Rushi Srinivas SURLA, Peter BODIK, Ishai MENACHE, Yang LU
  • Publication number: 20180165618
    Abstract: Schedules are generated that satisfy the objectives of a field services provider given a set of resources and a set of work orders. More particularly, work orders are identified, and the identities of resources that are capable of fulfilling one or more of the work orders are obtained. Feasible paths are established for each resource that identify a sequence of one or more work orders that can be fulfilled by the resource over the course of the resource's work shift and which reflect one or more scheduling objectives. These feasible paths are established in a series of iterations, with each iteration identifying additional paths. After each iteration, it is determined whether a pre-selected time limit has been exceeded, and once the time limit has been exceeded, path generation ceases. Schedules are established for the resources using the generated paths and are then provided to the field services provider.
    Type: Application
    Filed: December 14, 2016
    Publication date: June 14, 2018
    Inventors: Ishai Menache, Mohit Singh, Bishara Kharoufeh, Chris Mossell, Janeth Guerrero Gomez, Konstantina Mellou, Kyle S. Young
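    The iterative path-generation loop described here, with a pre-selected time limit cutting off further iterations, can be illustrated with a toy scheduler. The enumeration strategy, objective, and time check below are illustrative assumptions rather than the patented procedure.

    ```python
    import itertools
    import time

    def feasible_paths(shift_hours: float, work_orders: dict, max_len: int = 3):
        """Yield sequences of work orders whose total duration fits in the shift."""
        names = list(work_orders)
        for length in range(1, max_len + 1):
            for path in itertools.permutations(names, length):
                if sum(work_orders[w] for w in path) <= shift_hours:
                    yield path

    def generate_schedule(resources, work_orders, time_limit_s: float = 1.0):
        deadline = time.monotonic() + time_limit_s
        schedule = {}
        for resource, shift_hours in resources.items():
            best = ()
            # Iteratively enumerate paths, stopping when the time limit is exceeded.
            for path in feasible_paths(shift_hours, work_orders):
                if time.monotonic() > deadline:
                    break
                if len(path) > len(best):   # simple objective: most orders fulfilled
                    best = path
            schedule[resource] = best
        return schedule

    if __name__ == "__main__":
        orders = {"WO-1": 2.0, "WO-2": 3.5, "WO-3": 1.5}
        print(generate_schedule({"tech-A": 8.0, "tech-B": 4.0}, orders))
    ```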
  • Patent number: 9886316
    Abstract: A data center system is described which includes multiple data centers powered by multiple power sources, including any combination of renewable power sources and on-grid utility power sources. The data center system also includes a management system for managing execution of computational tasks by moving data components associated with the computational tasks within the data center system, in lieu of, or in addition to, moving power itself. The movement of data components can involve performing pre-computation or delayed computation on data components within any data center, as well as moving data components between data centers. The management system also includes a price determination module for determining prices for performing the computational tasks based on different pricing models. The data center system also includes a “stripped down” architecture to complement its use in the above-summarized data-centric environment.
    Type: Grant
    Filed: August 20, 2014
    Date of Patent: February 6, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Christian L. Belady, James R. Larus, Danny A. Reed, Christian H. Borgs, Jennifer Tour Chayes, Ilan Lobel, Ishai Menache, Hamid Nazerzadeh, Navendu Jain
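    As a loose illustration of moving computation toward power rather than moving power itself, the hypothetical sketch below places a task at the data center where its energy cost is lowest given current renewable availability and grid prices. The cost model and all names are assumptions, not the patented management system or its pricing models.

    ```python
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        available_renewable_kw: float  # power currently available from renewables
        grid_price_per_kwh: float      # utility price if renewables fall short

    def placement_cost(dc: DataCenter, task_kw: float, hours: float) -> float:
        # Energy drawn from renewables is treated as free in this toy model;
        # any shortfall is billed at the data center's grid price.
        shortfall_kw = max(task_kw - dc.available_renewable_kw, 0.0)
        return shortfall_kw * hours * dc.grid_price_per_kwh

    def place_task(data_centers, task_kw: float, hours: float) -> str:
        # Move the task's data to the cheapest data center instead of moving power.
        return min(data_centers, key=lambda dc: placement_cost(dc, task_kw, hours)).name

    if __name__ == "__main__":
        dcs = [DataCenter("dc-windy", 120.0, 0.12), DataCenter("dc-grid", 10.0, 0.08)]
        print(place_task(dcs, task_kw=100.0, hours=6.0))  # picks dc-windy (no grid draw)
    ```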
  • Publication number: 20170300368
    Abstract: There is provided a method and system for process migration in a data center network. The method includes selecting processes to be migrated from a number of overloaded servers within a data center network based on an overload status of each overloaded server. Additionally, the method includes selecting, for each selected process, one of a number of underloaded servers to which to migrate the selected process based on an underload status of each underloaded server, and based on a parameter of a network component by which the selected process is to be migrated. The method also includes migrating each selected process to the selected underloaded server such that a migration finishes within a specified budget.
    Type: Application
    Filed: February 28, 2017
    Publication date: October 19, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Navendu Jain, Ishai Menache, F. Bruce Shepherd, Joseph (Seffi) Naor
  • Patent number: 9652288
    Abstract: A method for adaptively allocating resources to a plurality of jobs. The method comprises selecting a first policy from a plurality of policies for a first job in the plurality of jobs by using a policy selection mechanism, allocating at least one resource to the first job in accordance with the first policy, and in response to completion of the first job, updating the policy selection mechanism to obtain an updated policy selection mechanism by using at least one processor. Updating the policy selection mechanism comprises evaluating the performance of the first policy with respect to the first job by calculating a value of a metric of utility for the first policy based on conditions associated with execution of the first job and updating the policy selection mechanism based on the calculated value and a delay of execution of the first job.
    Type: Grant
    Filed: March 16, 2012
    Date of Patent: May 16, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Navendu Jain, Ishai Menache, Ohad Shamir
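    The policy selection mechanism described here, which is updated with a utility value and the job's execution delay after each completion, resembles a bandit-style selector. The epsilon-greedy stand-in below is an assumption for illustration only, not the patented mechanism.

    ```python
    import random

    class PolicySelector:
        """Toy stand-in for a policy selection mechanism: epsilon-greedy over
        per-policy average utility, updated whenever a job completes."""

        def __init__(self, policies, epsilon: float = 0.1):
            self.epsilon = epsilon
            self.stats = {p: {"total": 0.0, "count": 0} for p in policies}

        def _average_utility(self, policy: str) -> float:
            s = self.stats[policy]
            return s["total"] / s["count"] if s["count"] else float("-inf")

        def select(self) -> str:
            # Explore occasionally, or when no policy has been evaluated yet.
            if random.random() < self.epsilon or all(
                    s["count"] == 0 for s in self.stats.values()):
                return random.choice(list(self.stats))
            return max(self.stats, key=self._average_utility)

        def update(self, policy: str, utility: float, delay_s: float) -> None:
            # Discount the observed utility by the job's execution delay,
            # then fold it into the policy's running statistics.
            self.stats[policy]["total"] += utility / (1.0 + delay_s)
            self.stats[policy]["count"] += 1

    if __name__ == "__main__":
        selector = PolicySelector(["pack-tightly", "spread-out"])
        for _ in range(5):
            policy = selector.select()
            # Pretend the job finished and report its measured utility and delay.
            selector.update(policy, utility=random.uniform(0.5, 1.0),
                            delay_s=random.uniform(0.0, 10.0))
        print("preferred policy:", selector.select())
    ```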
  • Patent number: 9619297
    Abstract: There is provided a method and system for process migration in a data center network. The method includes selecting processes to be migrated from a number of overloaded servers within a data center network based on an overload status of each overloaded server. Additionally, the method includes selecting, for each selected process, one of a number of underloaded servers to which to migrate the selected process based on an underload status of each underloaded server, and based on a parameter of a network component by which the selected process is to be migrated. The method also includes migrating each selected process to the selected underloaded server such that a migration finishes within a specified budget.
    Type: Grant
    Filed: June 25, 2012
    Date of Patent: April 11, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Navendu Jain, Ishai Menache, F. Bruce Shepherd, Joseph (Seffi) Naor