Patents by Inventor Marcel Apfelbaum

Marcel Apfelbaum has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954534
    Abstract: A request to execute a workload that utilizes an amount of resources is received from a client device. Corresponding resources that are available at multiple non-uniform memory access (NUMA) nodes are received from one or more host systems. A particular NUMA node of the multiple NUMA nodes is identified in view of the particular NUMA node having available resources that are greater than the amount of resources to execute the workload. A scheduling hint is assigned to the workload that indicates that the particular NUMA node is to be used to execute the workload.
    Type: Grant
    Filed: June 21, 2021
    Date of Patent: April 9, 2024
    Assignee: Red Hat, Inc.
    Inventors: Swati Sehgal, Marcel Apfelbaum
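The NUMA-hinting flow described in this abstract can be sketched roughly as follows. This is a simplified illustrative model, not the patented implementation; all function and field names (`assign_numa_hint`, `free_cpu`, `scheduling_hint`) are assumptions:

```python
# Sketch: find a NUMA node whose free resources cover the workload's
# request, then attach a scheduling hint naming that node.

def assign_numa_hint(workload, numa_nodes):
    """workload: dict with 'cpu' and 'memory' requests.
    numa_nodes: list of dicts with 'id', 'free_cpu', 'free_memory'."""
    for node in numa_nodes:
        if (node["free_cpu"] >= workload["cpu"]
                and node["free_memory"] >= workload["memory"]):
            workload["scheduling_hint"] = {"numa_node": node["id"]}
            return workload
    return None  # no single NUMA node can satisfy the request

nodes = [{"id": 0, "free_cpu": 1, "free_memory": 8192},
         {"id": 1, "free_cpu": 4, "free_memory": 8192}]
print(assign_numa_hint({"cpu": 2, "memory": 4096}, nodes))
```

In a real container-orchestration system the hint would then steer the scheduler toward a host that can honor the NUMA placement, rather than binding the workload directly.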
  • Patent number: 11868805
    Abstract: Techniques of scheduling workload(s) on partitioned resources of host systems are described. The techniques can be used, for example, in a container-orchestration system. One technique includes retrieving information characterizing at least one schedulable partition and determining an availability and a suitability of one or more of the schedulable partition(s) for executing a workload in view of the information. Each of the schedulable partition(s) includes resources of one or more host systems. The technique also includes selecting one or more of the schedulable partition(s) to execute the workload in view of the availability and the suitability.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: January 9, 2024
    Assignee: Red Hat, Inc.
    Inventors: Marcel Apfelbaum, Swati Sehgal
  • Patent number: 11836525
    Abstract: A system includes a memory, a processor in communication with the memory, and an operating system (“OS”) executing on the processor. The processor belongs to a processor socket. The OS is configured to pin a workload of a plurality of workloads to the processor belonging to the processor socket. Each respective processor belonging to the processor socket shares a common last-level cache (“LLC”). The OS is also configured to measure an LLC occupancy for the workload, reserve the LLC occupancy for the workload thereby isolating the workload from other respective workloads of the plurality of workloads sharing the processor socket, and maintain isolation by monitoring the LLC occupancy for the workload.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: December 5, 2023
    Assignee: Red Hat, Inc.
    Inventors: Orit Wasserman, Marcel Apfelbaum
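The reservation step in this abstract (measure a workload's last-level-cache occupancy, then reserve it to keep co-located workloads from evicting it) can be modeled in a few lines. This is only an accounting sketch under assumed names; a real system would enforce the reservation with a hardware mechanism such as cache allocation:

```python
# Sketch: reserve a workload's measured LLC occupancy; reject reservations
# the cache cannot hold alongside the existing ones.

def reserve_llc(name, measured_bytes, llc_total_bytes, reservations):
    """reservations: dict mapping workload name -> reserved bytes."""
    if sum(reservations.values()) + measured_bytes > llc_total_bytes:
        return False  # not enough unreserved cache left on this socket
    reservations[name] = measured_bytes
    return True

res = {}
print(reserve_llc("db", 8 * 2**20, 32 * 2**20, res))    # fits in a 32 MiB LLC
print(reserve_llc("web", 30 * 2**20, 32 * 2**20, res))  # would overcommit
```

Maintaining isolation then amounts to re-measuring occupancy periodically and comparing it against the recorded reservation.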
  • Patent number: 11768704
    Abstract: Systems and methods for intelligently scheduling a pod in a cluster of worker nodes are described. A scheduling service may account for previous scheduling attempts by considering the time and node (scheduling data) on which a preceding attempt to schedule the pod was made, and factoring this information into the scheduling decision. Upon making a determination of a node on which to attempt to schedule the pod, the scheduling data may be updated with the time and node ID of the determined node and the pod may be scheduled on the determined node. In response to determining that the pod has been evicted from the determined node, the above process may continue iteratively until the pod has been successfully scheduled.
    Type: Grant
    Filed: April 28, 2021
    Date of Patent: September 26, 2023
    Assignee: Red Hat, Inc.
    Inventors: Swati Sehgal, Marcel Apfelbaum
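The eviction-aware retry loop this abstract describes can be sketched as a scheduler that records when each node was last tried and skips nodes tried too recently. A simplified model with assumed names (`schedule_with_history`, `cooldown`), not the patented implementation:

```python
import time

# Sketch: pick a node for the pod, skipping nodes where a scheduling
# attempt was made within the cooldown window, and record the attempt.

def schedule_with_history(pod, nodes, history, cooldown=60.0, now=None):
    """history: dict mapping node ID -> timestamp of the last attempt."""
    now = time.time() if now is None else now
    for node in nodes:
        if now - history.get(node, 0.0) >= cooldown:
            history[node] = now  # update scheduling data for this attempt
            return node          # schedule the pod on this node
    return None                  # every node was tried too recently

hist = {"node-a": 100.0}
chosen = schedule_with_history("pod-1", ["node-a", "node-b"], hist,
                               cooldown=60.0, now=120.0)
print(chosen)  # node-a was tried 20s ago, so node-b is chosen
```

On eviction, the caller would simply invoke the function again; the updated history steers subsequent attempts away from nodes that just failed.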
  • Patent number: 11755375
    Abstract: A system and method for aggregating host machines into a single cloud node for workloads requiring excessive resources. The method includes providing a plurality of computing devices in association with a cloud service system. The method includes defining an aggregated node of the cloud service system corresponding to at least two computing devices of the plurality of computing devices. The method includes exposing an application programming interface (API) that is indicative of combined resources of the at least two computing devices of the plurality of computing devices. The method includes receiving a query to perform a workload requiring a set of resources that exceed the resources provided by each of the computing devices of the cloud service system. The method includes forwarding, to the aggregated node, the query to cause the at least two computing devices to perform the workload using the combined resources of the at least two computing devices.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: September 12, 2023
    Assignee: Red Hat, Inc.
    Inventors: Swati Sehgal, Marcel Apfelbaum
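The core idea of this entry (and of the related application 20230168943 below) is that an aggregated node advertises the sum of its members' resources, so a workload too large for any single machine can still be placed. A minimal sketch under assumed names (`aggregated_capacity`, `can_place`):

```python
# Sketch: expose the combined resources of several machines as one node
# and test whether the aggregate (not any single machine) fits a workload.

def aggregated_capacity(devices):
    return {"cpu": sum(d["cpu"] for d in devices),
            "memory": sum(d["memory"] for d in devices)}

def can_place(workload, devices):
    cap = aggregated_capacity(devices)
    return (workload["cpu"] <= cap["cpu"]
            and workload["memory"] <= cap["memory"])

machines = [{"cpu": 8, "memory": 32}, {"cpu": 8, "memory": 32}]
# 12 CPUs / 48 GB exceeds each machine alone but fits the pair combined.
print(can_place({"cpu": 12, "memory": 48}, machines))
```

The exposed API would report `aggregated_capacity`, and queries that exceed any single machine would be forwarded to the aggregated node for distributed execution.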
  • Patent number: 11716378
    Abstract: A network device queue manager receives a request to execute a workload on a node of a cloud computing environment, where the cloud computing environment comprises a plurality of nodes; determines that the workload is to be executed by a dedicated processor resource; identifies a set of one or more shared processor resources associated with the node, wherein each shared processor resource of the set of shared processor resources processes device interrupts; selects a processor resource from the set of one or more shared processor resources to execute the workload on the node; bans the selected processor resource from processing device interrupts while executing the workload; and executes the workload with the selected processor resource.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: August 1, 2023
    Assignee: Red Hat, Inc.
    Inventors: Yanir Quinn, Marcel Apfelbaum
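The select-and-ban step in this abstract can be modeled as set arithmetic: choose one CPU from the shared pool and remove it from the set that services device interrupts. A toy sketch with assumed names (`dedicate_cpu`; on Linux the actual masking would go through IRQ affinity settings):

```python
# Sketch: pick a CPU for the dedicated workload and exclude it from the
# interrupt-servicing set while the workload runs.

def dedicate_cpu(shared_cpus, irq_cpus):
    """shared_cpus: CPUs available for workloads.
    irq_cpus: CPUs currently allowed to process device interrupts."""
    cpu = sorted(shared_cpus)[0]     # simple choice: lowest-numbered CPU
    irq_mask = set(irq_cpus) - {cpu} # interrupts keep flowing to the rest
    return cpu, irq_mask

cpu, irq_mask = dedicate_cpu({2, 3}, {0, 1, 2, 3})
print(cpu, sorted(irq_mask))
```

When the workload finishes, the banned CPU would be returned to the interrupt mask, restoring the original shared configuration.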
  • Publication number: 20230195650
    Abstract: Disclosed is a method of managing memory of a virtual machine (VM), including providing a physical IOMMU device on a host, and performing a memory translation using the physical IOMMU device on the host.
    Type: Application
    Filed: February 16, 2023
    Publication date: June 22, 2023
    Inventors: Gal Hammer, Marcel Apfelbaum
  • Publication number: 20230168943
    Abstract: A system and method for aggregating host machines into a single cloud node for workloads requiring excessive resources. The method includes providing a plurality of computing devices in association with a cloud service system. The method includes defining an aggregated node of the cloud service system corresponding to at least two computing devices of the plurality of computing devices. The method includes exposing an application programming interface (API) that is indicative of combined resources of the at least two computing devices of the plurality of computing devices. The method includes receiving a query to perform a workload requiring a set of resources that exceed the resources provided by each of the computing devices of the cloud service system. The method includes forwarding, to the aggregated node, the query to cause the at least two computing devices to perform the workload using the combined resources of the at least two computing devices.
    Type: Application
    Filed: November 29, 2021
    Publication date: June 1, 2023
    Inventors: Swati Sehgal, Marcel Apfelbaum
  • Patent number: 11630782
    Abstract: Disclosed is a method of managing memory of a virtual machine (VM), including receiving, at a physical input-output memory management unit (IOMMU) of a processing device operating the VM, a request from a VM IOMMU for VM memory address translation for a VM peripheral component interconnect (PCI) device created on the VM; determining, by the physical IOMMU, a corresponding VM memory address translation result based on the request as received and a memory translation table; and transmitting, by the physical IOMMU to the VM IOMMU, the corresponding VM memory address translation result for servicing the request for VM memory address translation of the VM PCI device.
    Type: Grant
    Filed: September 14, 2021
    Date of Patent: April 18, 2023
    Assignee: Red Hat, Inc.
    Inventors: Gal Hammer, Marcel Apfelbaum
  • Publication number: 20230101885
    Abstract: A device attachment request to attach a device to a container within a virtual machine is received. The virtual machine is monitored to determine whether the virtual machine is ready for a hot-plug of the device. An indication that the virtual machine is ready for the hot-plug of the device is received from the virtual machine. A device hot-plug operation is issued to cause the device to be hot-plugged to the virtual machine.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Inventors: Marcel Apfelbaum, Gal Hammer
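The monitor-then-attach sequence in this abstract (wait for the VM to signal readiness before issuing the hot-plug operation) can be sketched as a small polling loop. Purely illustrative; `hot_plug_device`, `poll`, and `attach` are assumed names, and a real hypervisor would use event notification rather than polling:

```python
# Sketch: poll the VM for hot-plug readiness, then issue the device
# hot-plug operation once it reports ready.

def hot_plug_device(vm, device, poll, attach, max_polls=10):
    """poll(vm) -> True when the VM is ready for a hot-plug;
    attach(vm, device) issues the hot-plug operation."""
    for _ in range(max_polls):
        if poll(vm):
            attach(vm, device)
            return True
    return False  # VM never became ready within the polling budget

events = iter([False, False, True])  # VM becomes ready on the third check
attached = []
ok = hot_plug_device("vm-1", "nic0", lambda vm: next(events),
                     lambda vm, dev: attached.append((vm, dev)))
print(ok, attached)
```

Gating the hot-plug on an explicit readiness signal avoids attaching the device while the guest (or the container inside it) is still initializing.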
  • Publication number: 20230093884
    Abstract: A network device queue manager receives a request to execute a workload on a node of a cloud computing environment, where the cloud computing environment comprises a plurality of nodes; determines that the workload is to be executed by a dedicated processor resource; identifies a set of one or more shared processor resources associated with the node, wherein each shared processor resource of the set of shared processor resources processes device interrupts; selects a processor resource from the set of one or more shared processor resources to execute the workload on the node; bans the selected processor resource from processing device interrupts while executing the workload; and executes the workload with the selected processor resource.
    Type: Application
    Filed: September 28, 2021
    Publication date: March 30, 2023
    Inventors: Yanir Quinn, Marcel Apfelbaum
  • Patent number: 11611619
    Abstract: Data can be placed by an edge node in a computing environment using multiple criteria in a placement policy. For example, a processing device of an edge node can receive a write request for storing a data object. The processing device can select first and second criteria from a placement policy based on a tag for the data object. The processing device can determine a set of remote components that fulfill the first criterion. The processing device can then identify, from the set, a destination component that fulfills the second criterion. The processing device can transmit the data object to the destination component.
    Type: Grant
    Filed: June 24, 2022
    Date of Patent: March 21, 2023
    Assignee: Red Hat, Inc.
    Inventors: Orit Wasserman, Marcel Apfelbaum
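The two-stage placement this abstract describes (filter candidates on the first criterion, then rank the survivors on the second) maps naturally onto a filter-then-max pattern. A simplified sketch with assumed names (`place`, `encrypted`, `free_gb`); the related grant 11405456 below characterizes the two criteria as a required and a prioritized characteristic:

```python
# Sketch: select criteria from a placement policy by tag, filter remote
# components on the first criterion, then rank on the second.

def place(tag, components, policy):
    """policy maps tag -> (required_check, priority_key)."""
    required, priority = policy[tag]
    candidates = [c for c in components if required(c)]  # first criterion
    if not candidates:
        return None
    return max(candidates, key=priority)                 # second criterion

components = [
    {"name": "edge-1", "encrypted": True,  "free_gb": 10},
    {"name": "edge-2", "encrypted": False, "free_gb": 90},
    {"name": "edge-3", "encrypted": True,  "free_gb": 50},
]
policy = {"sensitive": (lambda c: c["encrypted"], lambda c: c["free_gb"])}
print(place("sensitive", components, policy)["name"])  # edge-3
```

The edge node would then transmit the data object to the chosen destination component.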
  • Publication number: 20230082195
    Abstract: Disclosed is a method of managing memory of a virtual machine (VM), including receiving, at a physical input-output memory management unit (IOMMU) of a processing device operating the VM, a request from a VM IOMMU for VM memory address translation for a VM peripheral component interconnect (PCI) device created on the VM; determining, by the physical IOMMU, a corresponding VM memory address translation result based on the request as received and a memory translation table; and transmitting, by the physical IOMMU to the VM IOMMU, the corresponding VM memory address translation result for servicing the request for VM memory address translation of the VM PCI device.
    Type: Application
    Filed: September 14, 2021
    Publication date: March 16, 2023
    Inventors: Gal Hammer, Marcel Apfelbaum
  • Publication number: 20230063893
    Abstract: An example system includes a processor and a node agent executing on the processor. The node agent is configured to receive a message indicative of a workload, a processor policy of the workload, and a number of processor threads requested for the workload. The node agent is also configured to allow simultaneous allocation of a processor core to the workload and another workload based on the processor policy being a first policy. The node agent is also configured to prevent simultaneous allocation of the processor core to the workload and the other workload based on the processor policy being a second policy or a third policy. The node agent is also configured to allow simultaneous allocation of the processor core for two or more of the requested processor threads based on the processor policy being the second policy.
    Type: Application
    Filed: September 1, 2021
    Publication date: March 2, 2023
    Inventors: Marcel Apfelbaum, Swati Sehgal
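The three processor policies in this abstract reduce to a core-sharing decision: the first policy allows any workloads to share a core, the second allows sharing only among a single workload's own threads, and the third forbids sharing entirely. A sketch under assumed policy names (`shared`, `isolated`, `exclusive` are illustrative labels, not terms from the filing):

```python
# Sketch: decide whether two processor threads may be allocated to the
# same physical core under the workload's processor policy.

def may_share_core(policy, same_workload):
    if policy == "shared":       # first policy: any workloads may share
        return True
    if policy == "isolated":     # second policy: only the workload's own threads
        return same_workload
    return False                 # third policy ("exclusive"): no sharing

print(may_share_core("shared", same_workload=False))
print(may_share_core("isolated", same_workload=True))
print(may_share_core("isolated", same_workload=False))
```

The node agent would consult this decision when allocating the requested number of processor threads across physical cores.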
  • Publication number: 20220405135
    Abstract: A request to execute a workload that utilizes an amount of resources is received from a client device. Corresponding resources that are available at multiple non-uniform memory access (NUMA) nodes are received from one or more host systems. A particular NUMA node of the multiple NUMA nodes is identified in view of the particular NUMA node having available resources that are greater than the amount of resources to execute the workload. A scheduling hint is assigned to the workload that indicates that the particular NUMA node is to be used to execute the workload.
    Type: Application
    Filed: June 21, 2021
    Publication date: December 22, 2022
    Inventors: Swati Sehgal, Marcel Apfelbaum
  • Publication number: 20220391223
    Abstract: A query for available plugin extensions is received from a client device. A request for operator images including plugin extensions is transmitted to a repository including multiple operator images, wherein each of the operator images includes corresponding metadata identifying the plugin extensions. Identifications of the operator images including the plugin extensions and the corresponding metadata identifying the plugin extensions are received from the repository. A listing of the plugin extensions is transmitted to the client device.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Inventors: Marcel Apfelbaum, Aviel Yosef
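The discovery flow in this abstract (query operator images in a repository for metadata naming their plugin extensions, then return a flat listing to the client) can be sketched as a simple scan. All names here (`list_plugin_extensions`, `metadata`, `extensions`) are illustrative assumptions:

```python
# Sketch: collect plugin extensions advertised in operator-image metadata
# into a listing suitable for returning to a client device.

def list_plugin_extensions(repository):
    """repository: list of operator images, each a dict whose metadata
    identifies the plugin extensions that image provides."""
    listing = []
    for image in repository:
        for ext in image.get("metadata", {}).get("extensions", []):
            listing.append({"image": image["name"], "extension": ext})
    return listing

repo = [
    {"name": "operator-a", "metadata": {"extensions": ["dashboard-panel"]}},
    {"name": "operator-b", "metadata": {"extensions": []}},
]
print(list_plugin_extensions(repo))
```

Images with no advertised extensions simply contribute nothing to the listing, so the client sees only usable plugins.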
  • Publication number: 20220350656
    Abstract: Systems and methods for intelligently scheduling a pod in a cluster of worker nodes are described. A scheduling service may account for previous scheduling attempts by considering the time and node (scheduling data) on which a preceding attempt to schedule the pod was made, and factoring this information into the scheduling decision. Upon making a determination of a node on which to attempt to schedule the pod, the scheduling data may be updated with the time and node ID of the determined node and the pod may be scheduled on the determined node. In response to determining that the pod has been evicted from the determined node, the above process may continue iteratively until the pod has been successfully scheduled.
    Type: Application
    Filed: April 28, 2021
    Publication date: November 3, 2022
    Inventors: Swati Sehgal, Marcel Apfelbaum
  • Publication number: 20220326986
    Abstract: Techniques of scheduling workload(s) on partitioned resources of host systems are described. The techniques can be used, for example, in a container-orchestration system. One technique includes retrieving information characterizing at least one schedulable partition and determining an availability and a suitability of one or more of the schedulable partition(s) for executing a workload in view of the information. Each of the schedulable partition(s) includes resources of one or more host systems. The technique also includes selecting one or more of the schedulable partition(s) to execute the workload in view of the availability and the suitability.
    Type: Application
    Filed: April 13, 2021
    Publication date: October 13, 2022
    Inventors: Marcel Apfelbaum, Swati Sehgal
  • Publication number: 20220321654
    Abstract: Data can be placed by an edge node in a computing environment using multiple criteria in a placement policy. For example, a processing device of an edge node can receive a write request for storing a data object. The processing device can select first and second criteria from a placement policy based on a tag for the data object. The processing device can determine a set of remote components that fulfill the first criterion. The processing device can then identify, from the set, a destination component that fulfills the second criterion. The processing device can transmit the data object to the destination component.
    Type: Application
    Filed: June 24, 2022
    Publication date: October 6, 2022
    Inventors: Orit Wasserman, Marcel Apfelbaum
  • Patent number: 11405456
    Abstract: Data can be placed by an edge node in a computing environment using multiple criteria in a placement policy. For example, a processing device of an edge node can receive a write request for storing a data object. The processing device can select first and second criteria from a placement policy based on a tag for the data object. The first criterion may correspond to a required characteristic and the second criterion may correspond to a prioritized characteristic. The processing device can determine a set of remote components that fulfill the first criterion. The processing device can then identify, from the set, a destination component that fulfills the second criterion. The processing device can transmit the data object to the destination component.
    Type: Grant
    Filed: December 22, 2020
    Date of Patent: August 2, 2022
    Assignee: Red Hat, Inc.
    Inventors: Orit Wasserman, Marcel Apfelbaum