Patents Examined by Charlie Sun
  • Patent number: 12020053
    Abstract: A method includes receiving, by a hypervisor executing on a computing system, a request to associate an input/output (I/O) device with a virtual machine running on the computing system. The I/O device corresponds to a physical device attached to a first peripheral bus of a first bus type. The method further includes determining whether the I/O device is a trusted I/O device. The method further includes, in response to determining that the I/O device is not a trusted I/O device, exposing the I/O device to the virtual machine via a first virtual bus of a second bus type. Exposing the I/O device to the virtual machine via the first virtual bus causes the virtual machine to initiate a first security protocol associated with the first virtual bus.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: June 25, 2024
    Assignee: Red Hat, Inc.
    Inventor: Michael Tsirkin
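    A minimal Python sketch (not Red Hat's implementation) of the trust-based bus-selection logic described in the abstract above; the allowlist contents and the "virtio-authenticated" bus name are hypothetical placeholders.
```python
from dataclasses import dataclass, field

# Hypothetical allowlist of devices the hypervisor considers trusted.
TRUSTED_DEVICE_IDS = {"8086:10d3"}

@dataclass
class IODevice:
    device_id: str
    physical_bus: str  # bus type of the physical device, e.g. "pci"

@dataclass
class VirtualMachine:
    name: str
    attached: list = field(default_factory=list)

    def attach(self, device: IODevice, bus: str) -> None:
        # Exposing the device on the "secured" bus type causes the guest driver
        # stack to run that bus's security protocol before using the device.
        self.attached.append((device.device_id, bus))

def expose_device(vm: VirtualMachine, device: IODevice) -> str:
    """Pick the virtual bus type based on whether the device is trusted."""
    if device.device_id in TRUSTED_DEVICE_IDS:
        bus = device.physical_bus          # same bus type as the physical device
    else:
        bus = "virtio-authenticated"       # hypothetical second bus type
    vm.attach(device, bus)
    return bus

if __name__ == "__main__":
    vm = VirtualMachine("guest-1")
    print(expose_device(vm, IODevice("8086:10d3", "pci")))   # pci
    print(expose_device(vm, IODevice("1af4:1041", "pci")))   # virtio-authenticated
```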
  • Patent number: 12020079
    Abstract: A method of partitioning a graph for processing may include sorting two or more vertices of the graph based on incoming edges and outgoing edges, placing a first one of the vertices with fewer incoming edges in a first partition, and placing a second one of the vertices with fewer outgoing edges in a second partition. The first one of the vertices may have a lowest number of incoming edges, and the first one of the vertices may be placed in a first available partition. The second one of the vertices may have a lowest number of outgoing edges, and the second one of the vertices may be placed in a second available partition. A method for updating vertices of a graph may include storing a first update in a first buffer, storing a second update in a second buffer, and transferring the first and second updates to a memory using different threads.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: June 25, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Soheil Khadirsharbiyani, Nima Elyasi, Armin Haj Aboutalebi, Changho Choi
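    A rough Python sketch of the degree-based placement heuristic described in the abstract above; the alternating "available partition" policy is an assumption for illustration, not the claimed method, and the update-buffer threading aspect is omitted.
```python
from collections import defaultdict

def partition_by_degree(edges, num_partitions=2):
    """Place low-in-degree vertices and low-out-degree vertices into
    alternating partitions, as a toy version of the sorting-based heuristic."""
    in_deg, out_deg = defaultdict(int), defaultdict(int)
    vertices = set()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
        vertices.update((src, dst))

    partitions = [[] for _ in range(num_partitions)]
    remaining = set(vertices)
    turn = 0
    while remaining:
        if turn % 2 == 0:
            # vertex with the fewest incoming edges goes to the next available partition
            v = min(remaining, key=lambda x: in_deg[x])
        else:
            # vertex with the fewest outgoing edges goes to the following partition
            v = min(remaining, key=lambda x: out_deg[x])
        partitions[turn % num_partitions].append(v)
        remaining.remove(v)
        turn += 1
    return partitions

if __name__ == "__main__":
    g = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
    print(partition_by_degree(g))
```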
  • Patent number: 12020068
    Abstract: Methods to automatically prioritize input/output (I/O) for Network Function Virtualization (NFV) workloads at platform overload and associated apparatus and mechanisms. During lab or runtime workload operations, various platform telemetry data are collected and analyzed to determine whether a current workload is uncore-sensitive, that is, sensitive to operations involving utilization of the uncore circuitry, such as I/O-related operations, memory bandwidth utilization, LLC utilization, network traffic, core-to-core traffic, etc. For uncore-sensitive workloads, upon detection of a platform overload condition such as a thermal load approaching a TDP limit, the uncore circuitry is prioritized over the core circuitry such that the frequency of the core is reduced first. A closed-loop feedback mechanism is used to adjust the frequencies of the core and uncore under various workload conditions. The mechanism enables I/O throughput to be maintained for NFV workloads while reducing the processor thermal load.
    Type: Grant
    Filed: September 16, 2020
    Date of Patent: June 25, 2024
    Assignee: Intel Corporation
    Inventors: Chris MacNamara, Amruta Misra, John Browne
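    A hedged Python sketch of one iteration of the closed-loop prioritization described in the abstract above; the TDP, headroom, and step values are invented for illustration, and real platforms would use RAPL/TDP telemetry and P-state controls rather than plain numbers.
```python
TDP_WATTS = 150.0
HEADROOM = 0.95          # start throttling when power reaches 95% of TDP (assumption)
STEP_MHZ = 100

def adjust_frequencies(core_mhz, uncore_mhz, power_watts, uncore_sensitive):
    """One iteration of a closed-loop controller: under overload, reduce the
    frequency of whichever domain the workload is less sensitive to first."""
    if power_watts < HEADROOM * TDP_WATTS:
        return core_mhz, uncore_mhz          # no overload, leave frequencies alone
    if uncore_sensitive:
        core_mhz -= STEP_MHZ                 # protect I/O throughput: throttle cores first
    else:
        uncore_mhz -= STEP_MHZ
    return core_mhz, uncore_mhz

if __name__ == "__main__":
    core, uncore, power = 3000, 2400, 148.0
    for _ in range(3):
        core, uncore = adjust_frequencies(core, uncore, power, uncore_sensitive=True)
        power -= 1.5                         # assume each step sheds some power
    print(core, uncore)                      # 2700 2400: uncore frequency preserved
```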
  • Patent number: 12020047
    Abstract: A virtualization infrastructure control device (10) includes: a monitoring unit (13) for acquiring load information regarding a resource; a scaling control unit (11) for determining, based on the load information, whether or not to perform scaling and calculating a resource amount required for the scaling; and a resource delivery control unit (12) for converting the resource amount required for the scaling into a change in the number of allocated CPU cores (CPU cores being the resource allocated on a VM-by-VM basis), selecting the CPU cores that are to be subjected to the scaling, and transmitting, to a compute node, resource change instruction information including information regarding the selected CPU cores whose allocation is to be changed.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: June 25, 2024
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventor: Eriko Iwasa
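    A small Python sketch, under stated assumptions, of converting a required resource amount into a per-VM CPU-core change and building an instruction for a compute node; the field names, the ceiling conversion, and the naive core-selection policy are hypothetical (scale-out case only, for brevity).
```python
import math

def build_scale_instruction(required_vcpu, current_cores, free_cores, vm_id):
    """Convert a required resource amount into a change in allocated CPU cores
    and select which free cores of the compute node to add."""
    change = max(0, math.ceil(required_vcpu) - len(current_cores))
    if change > len(free_cores):
        raise RuntimeError("not enough free CPU cores on the compute node")
    selected = free_cores[:change]           # naive selection policy (assumption)
    return {
        "vm": vm_id,
        "add_cores": selected,               # cores whose allocation is to change
        "new_core_count": len(current_cores) + change,
    }

if __name__ == "__main__":
    # VM currently pinned to cores 0-1; load information says ~3.4 vCPUs are needed.
    print(build_scale_instruction(3.4, current_cores=[0, 1], free_cores=[4, 5, 6], vm_id="vm-7"))
```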
  • Patent number: 12020060
    Abstract: Techniques for managing proxy virtual machines are disclosed. In some embodiments, a computer system deploys proxy virtual machines on a data center in an intelligent way in order to optimize performance and efficiency for backing up data from and restoring data to the data center, using the topology of the data center to determine how many proxy virtual machines to deploy and on which specific hosts to deploy the proxy virtual machines. Rather than determining the number of proxy virtual machines to deploy based on maxing out all of the ports on each proxy virtual machine to handle a planned quantity of backup jobs, the computer system may calculate the number of proxy virtual machines to use based on a rule that ports be left available for unplanned on-demand restore jobs.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: June 25, 2024
    Assignee: Rubrik, Inc.
    Inventors: Samir Rishi Chaudhry, Li Ding
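    A hedged Python sketch of the port-budget idea in the abstract above; the exact formula, port counts, and reservation size are assumptions, not Rubrik's sizing rule.
```python
import math

def proxies_needed(planned_backup_jobs, ports_per_proxy, reserved_restore_ports):
    """Size the proxy VM fleet so each proxy keeps some ports free for
    unplanned on-demand restore jobs instead of maxing out every port."""
    usable = ports_per_proxy - reserved_restore_ports
    if usable <= 0:
        raise ValueError("reservation leaves no ports for planned backup jobs")
    return math.ceil(planned_backup_jobs / usable)

if __name__ == "__main__":
    # 50 planned backup jobs, 8 ports per proxy, 2 ports held back per proxy.
    print(proxies_needed(50, 8, 2))   # 9 proxies instead of ceil(50 / 8) == 7
```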
  • Patent number: 12014215
    Abstract: An active scheduling method is performed with a master processor and a plurality of slave processors. The method includes determining whether a job to be performed has a dependency by referencing a job queue; in a case in which it is determined that the job to be performed has a dependency, updating a state of the job to be performed in a table in which information of each of a plurality of jobs is recorded; analyzing a state of a job preceding the job to be performed based on the table; and in a case in which the job preceding the job to be performed is determined to have been completed, performing the job to be performed by retrieving the job to be performed from the job queue.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: June 18, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jieun Lee, Jin-Hong Kim, Jaehyung Ahn, Sungduk Cho
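    A small Python sketch of the dependency check against a job-state table described in the abstract above; the table layout and single-predecessor model are simplifying assumptions.
```python
from collections import deque

# Table recording the state of each job: "waiting", "running", or "done".
job_table = {"j1": "done", "j2": "waiting", "j3": "waiting"}
# Each job's preceding job (dependency), if any.
depends_on = {"j2": "j1", "j3": "j2"}
job_queue = deque(["j2", "j3"])

def try_dispatch():
    """Retrieve the next job from the queue only if its predecessor has completed."""
    job = job_queue[0]
    pred = depends_on.get(job)
    if pred is not None:
        job_table[job] = "waiting"                 # update the job's state in the table
        if job_table[pred] != "done":
            return None                            # predecessor not finished yet
    job_queue.popleft()
    job_table[job] = "done"                        # pretend a slave processor ran it
    return job

if __name__ == "__main__":
    print(try_dispatch())   # j2 runs: its predecessor j1 is done
    print(try_dispatch())   # j3 runs: j2 has now completed
```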
  • Patent number: 12001881
    Abstract: Event prioritization for an ordered event stream (OES) is disclosed. Unlike conventional prioritization techniques, the disclosed subject matter can be performed by an OES data storage system to provide direct, rather than indirect, control of prioritization. In an embodiment, a prioritized hashed key (PHK) can be determined from an event characteristic and an indicated event priority value based on a selectable priority-sensitive hashing function. As such, events with a same key characteristic but different indicated priorities can have different PHKs, events with different key characteristics but the same indicated priority can have different PHKs, and events with the same key characteristic and the same priority can have a same PHK. An event priority can be inherently comprised in the PHK without needing to explicitly store the priority value with a written event in the OES. Moreover, the disclosed prioritization for the OES can be compatible with OES scaling techniques.
    Type: Grant
    Filed: April 12, 2021
    Date of Patent: June 4, 2024
    Assignee: EMC IP HOLDING COMPANY LLC
    Inventors: Mikhail Danilov, Maksim Vazhenin
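    A minimal Python sketch of a priority-sensitive hashed key, assuming one possible layout (priority in the high bits, hashed key characteristic in the low bits); the bit widths and hash choice are illustrative, not the patented function.
```python
import hashlib

HASH_BITS = 32   # low bits carry the hashed key characteristic (assumption)

def prioritized_hashed_key(key_characteristic: str, priority: int) -> int:
    """Fold the indicated priority into the hashed key so the priority is
    implicit in the PHK and need not be stored with the written event."""
    h = int.from_bytes(hashlib.sha256(key_characteristic.encode()).digest()[:4], "big")
    return (priority << HASH_BITS) | h

def priority_of(phk: int) -> int:
    return phk >> HASH_BITS            # the priority is recoverable from the PHK

if __name__ == "__main__":
    a = prioritized_hashed_key("sensor-42", priority=1)
    b = prioritized_hashed_key("sensor-42", priority=3)
    # Same key characteristic, different priorities -> different PHKs.
    print(a != b, priority_of(a), priority_of(b))   # True 1 3
```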
  • Patent number: 11966773
    Abstract: Migration rules for a migration engine can be automatically generated using an automated pipeline. In one example, a system can receive first information indicating first characteristics of first software and can receive second information indicating second characteristics of second software. The system can determine a difference between the first characteristics and the second characteristics. The system can then generate a rule for a migration engine based on the difference, the rule including a conditional statement configured for use by the migration engine to detect the difference in relation to a migration process for migrating the first software to the second software. The system can provide the rule for use by the migration engine, to enable the migration engine to detect the difference and responsively generate a notification associated with the difference.
    Type: Grant
    Filed: February 9, 2021
    Date of Patent: April 23, 2024
    Assignee: RED HAT, INC.
    Inventors: Marco Rizzi, Paolo Antinori
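    A hedged Python sketch of turning a diff of software characteristics into migration rules with conditional statements; the characteristic dictionaries and rule schema are hypothetical, not the Red Hat migration engine's format.
```python
def generate_rules(first_chars: dict, second_chars: dict) -> list:
    """Turn differences between two software versions' characteristics into
    rules a migration engine can evaluate during a migration process."""
    rules = []
    for key in first_chars.keys() | second_chars.keys():
        old, new = first_chars.get(key), second_chars.get(key)
        if old != new:
            rules.append({
                "condition": f"uses('{key}') and value == {old!r}",  # conditional statement
                "notify": f"'{key}' changed from {old!r} to {new!r}; review before migrating",
            })
    return rules

if __name__ == "__main__":
    v1 = {"servlet-api": "3.1", "default-datasource": "H2"}
    v2 = {"servlet-api": "5.0", "default-datasource": "H2"}
    for rule in generate_rules(v1, v2):
        print(rule)
```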
  • Patent number: 11948000
    Abstract: Systems, apparatuses, and methods for performing command buffer gang submission are disclosed. A system includes at least first and second processors and a memory. The first processor (e.g., CPU) generates a command buffer and stores the command buffer in the memory. A mechanism is implemented where the granularity of work provided to the second processor (e.g., GPU) is increased, which, in turn, increases the opportunities for parallel work. In gang submission mode, the user-mode driver (UMD) specifies a set of multiple queues and command buffers to execute on those multiple queues, and that work is guaranteed to execute as a single unit from the GPU operating system scheduler's point of view. Using gang submission, synchronization between command buffers executing on multiple queues in the same submit is safe. This opens up optimization opportunities for application use (explicit gang submission) and for internal driver use (implicit gang submission).
    Type: Grant
    Filed: March 31, 2021
    Date of Patent: April 2, 2024
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mitchell Howard Singer, Derrick Trevor Owens
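    A data-structure sketch in Python of explicit gang submission as described above: multiple queues and their command buffers handed to the scheduler as one unit; the class and queue names are invented, and no real GPU driver API is involved.
```python
from dataclasses import dataclass, field

@dataclass
class GangSubmit:
    """A single scheduling unit: command buffers bound to multiple queues that
    the OS scheduler must start together, so cross-queue synchronization inside
    the same submit is safe."""
    buffers: dict = field(default_factory=dict)   # queue name -> list of command buffers

    def add(self, queue: str, command_buffer: str) -> None:
        self.buffers.setdefault(queue, []).append(command_buffer)

    def submit(self) -> None:
        # Model: everything in the gang is handed to the scheduler atomically.
        print("submitting as one unit:", self.buffers)

if __name__ == "__main__":
    gang = GangSubmit()
    gang.add("gfx", "draw_pass_cmdbuf")
    gang.add("compute", "culling_cmdbuf")    # may safely wait on work in the gfx queue
    gang.submit()
```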
  • Patent number: 11934853
    Abstract: Various embodiments of the present invention relate to a method for managing memory in a Java execution environment and an electronic device for performing the same. The electronic device may comprise a processor and a memory electrically connected to the processor, wherein: the memory is configured to store multiple Java application programs and stores instructions that, when executed, cause the processor to execute a virtual machine configured to execute at least one Java application stored in the memory; and when generation of an object is detected during execution of the Java application, the virtual machine executed by the processor generates a reference for the generated object, identifies an application that has generated objects at or above a threshold on the basis of the generated references, and provides information on the identified application to the processor. Other embodiments may also be possible.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: March 19, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyungseok Lee, Jingu Kang, Kihun Heo, Hyojong Kim, Hakryoul Kim, Hyunjoon Kim, Donggyu Ahn, Haewook Lee, Kwanhee Jeong, Mooyoung Kim, Minjung Kim
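    A toy Python sketch of the per-application allocation tracking described in the abstract above; the counting approach, threshold value, and class names are assumptions rather than the claimed JVM mechanism.
```python
from collections import defaultdict

ALLOCATION_THRESHOLD = 3   # hypothetical per-application object threshold

class ToyVM:
    """Stand-in for the virtual machine: records a reference for every object
    an application creates and reports apps that reach the threshold."""
    def __init__(self):
        self.references = defaultdict(list)

    def on_object_created(self, app_name: str, obj: object) -> None:
        self.references[app_name].append(id(obj))   # generated reference

    def heavy_allocators(self):
        return [app for app, refs in self.references.items()
                if len(refs) >= ALLOCATION_THRESHOLD]

if __name__ == "__main__":
    vm = ToyVM()
    for _ in range(4):
        vm.on_object_created("widget-app", object())
    vm.on_object_created("clock-app", object())
    print(vm.heavy_allocators())   # ['widget-app'] would be reported to the processor
```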
  • Patent number: 11934877
    Abstract: A workflow resource manager receives a request to execute a workflow in a cloud computing environment. The workflow resource manager determines that a first set of cloud computing resource requirements associated with the first set of operations for the workflow is satisfied by available cloud computing resources, and responsive to determining that a second set of cloud computing resource requirements associated with a subsequent set of operations for the workflow is not satisfied by the available cloud computing resources, rejects the request to execute the workflow.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: March 19, 2024
    Assignee: Red Hat, Inc.
    Inventor: Juana Nakfour
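    A short Python sketch of the admission check described in the abstract above: the workflow is rejected when the requirements of a subsequent operation set cannot be met, even if the first set fits; the resource model and field names are assumptions.
```python
def admit_workflow(available: dict, first_set: dict, subsequent_set: dict) -> bool:
    """Reject the workflow up front if resources for a later stage cannot be
    satisfied, even when the first stage would fit the available resources."""
    def satisfied(req):
        return all(available.get(k, 0) >= v for k, v in req.items())
    return satisfied(first_set) and satisfied(subsequent_set)

if __name__ == "__main__":
    available = {"cpu": 8, "gpu": 0, "memory_gb": 32}
    first = {"cpu": 2, "memory_gb": 8}              # data-prep operations
    later = {"cpu": 4, "gpu": 1, "memory_gb": 16}   # later step needs a GPU
    print(admit_workflow(available, first, later))  # False: the request is rejected
```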
  • Patent number: 11928508
    Abstract: Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, application software development productivity, through presenting to software a simple, static virtual view of the actually dynamically allocated and assigned processing hardware resources; high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead; and high resource efficiency, through adaptively optimized processing resource allocation.
    Type: Grant
    Filed: May 17, 2022
    Date of Patent: March 12, 2024
    Assignee: ThroughPuter, Inc.
    Inventor: Mark Henrik Sandstrom
  • Patent number: 11928494
    Abstract: Embodiments described herein are directed to configuring managed virtual machines. For instance, a management service (e.g., a mobile device manager) may provide configuration settings to a parent virtual machine. Upon successful application of the configuration settings, the parent virtual machine notifies a configuration service that it is in a steady state and provides the configuration settings to the configuration service. The configuration service notifies a cloud-based service (e.g., a virtual desktop service) that it is configured to instantiate virtual machines. The notification informs the cloud-based service that it is permitted to instantiate child virtual machines. Responsive to receiving the notification, the cloud-based service instantiates child virtual machine(s) as needed.
    Type: Grant
    Filed: April 15, 2022
    Date of Patent: March 12, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Peter J Kaufman, Shayak Lahiri, Yi Zhao, Go Komatsu, Pieter Willem Wigleven, Randall R. Cook
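    A hedged Python sketch of the notification flow in the abstract above (management service configures the parent VM, the configuration service learns of the steady state, and the cloud-based service is then permitted to instantiate children); the service classes and method names are invented for illustration.
```python
class CloudService:
    def __init__(self):
        self.allowed_settings = None
        self.children = []

    def permit_instantiation(self, settings: dict) -> None:
        self.allowed_settings = settings     # notification: cloning is now permitted

    def instantiate_child(self, name: str) -> dict:
        child = {"name": name, "settings": self.allowed_settings}  # child inherits config
        self.children.append(child)
        return child

class ConfigurationService:
    def __init__(self, cloud_service: CloudService):
        self.cloud_service = cloud_service

    def notify_steady_state(self, settings: dict) -> None:
        # Parent VM reports the settings applied successfully; allow child creation.
        self.cloud_service.permit_instantiation(settings)

def configure_parent(parent_settings: dict, config_service: ConfigurationService) -> None:
    # Management service pushes settings; assume the parent applies them successfully.
    config_service.notify_steady_state(parent_settings)

if __name__ == "__main__":
    cloud = CloudService()
    configure_parent({"policy": "corp-baseline"}, ConfigurationService(cloud))
    print(cloud.instantiate_child("child-vm-1"))
```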
  • Patent number: 11928512
    Abstract: A reconfigurable data processor comprises an array of configurable units configurable to allocate a plurality of sets of configurable units in the array to implement respective execution fragments of the data processing operation. Quiesce logic is coupled to configurable units in the array, configurable to respond to a quiesce control signal to quiesce the sets of configurable units in the array on quiesce boundaries of the respective execution fragments, and to forward quiesce ready signals for the respective execution fragments when the corresponding sets of processing units are ready. An array quiesce controller distributes the quiesce control signal to configurable units in the array, and receives quiesce ready signals for the respective execution fragments from the quiesce logic.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: March 12, 2024
    Assignee: SambaNova Systems, Inc.
    Inventors: Raghu Prabhakar, Manish K. Shah, Pramod Nataraja, David Brian Jackson, Kin Hing Leung, Ram Sivaramakrishnan, Sumti Jairath, Gregory Frederick Grohoski
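    A toy Python simulation of the quiesce handshake sketched in the abstract above: the controller distributes a quiesce signal, each set of configurable units stops at its fragment's boundary, and per-fragment ready signals are collected; the class structure is purely illustrative, not the hardware design.
```python
class ConfigurableUnitSet:
    """Units implementing one execution fragment; on a quiesce signal they stop
    at the fragment's quiesce boundary and report ready."""
    def __init__(self, fragment: str):
        self.fragment = fragment
        self.quiesced = False

    def on_quiesce_signal(self) -> str:
        self.quiesced = True             # in-flight work up to the boundary has drained
        return self.fragment             # quiesce-ready signal for this fragment

class ArrayQuiesceController:
    def quiesce(self, unit_sets):
        # Distribute the quiesce control signal and collect per-fragment ready signals.
        return sorted(s.on_quiesce_signal() for s in unit_sets)

if __name__ == "__main__":
    sets = [ConfigurableUnitSet("frag-A"), ConfigurableUnitSet("frag-B")]
    print(ArrayQuiesceController().quiesce(sets))   # ['frag-A', 'frag-B']
```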
  • Patent number: 11924072
    Abstract: Systems, methods, and computer-readable media for annotating process and user information for network flows. In some embodiments, a capturing agent, executing on a first device in a network, can monitor a network flow associated with the first device. The first device can be, for example, a virtual machine, a hypervisor, a server, or a network device. Next, the capturing agent can generate a control flow based on the network flow. The control flow may include metadata that describes the network flow. The capturing agent can then determine which process executing on the first device is associated with the network flow and label the control flow with this information. Finally, the capturing agent can transmit the labeled control flow to a second device, such as a collector, in the network.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: March 5, 2024
    Assignee: Cisco Technology, Inc.
    Inventors: Navindra Yadav, Abhishek Ranjan Singh, Anubhav Gupta, Shashidhar Gandham, Jackson Ngoc Ki Pang, Shih-Chun Chang, Hai Trong Vu
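    A minimal Python sketch of building a labeled control flow from an observed network flow, as described in the abstract above; the process lookup table stands in for querying the local OS, and the field names are assumptions.
```python
def build_control_flow(flow: dict, process_table: dict) -> dict:
    """Summarize a network flow as metadata and label it with the local
    process that owns the flow's source port."""
    return {
        "src": flow["src"], "dst": flow["dst"],
        "sport": flow["sport"], "dport": flow["dport"],
        "bytes": flow["bytes"],
        "process": process_table.get(flow["sport"], "unknown"),
    }

def export_to_collector(control_flow: dict, collector: list) -> None:
    collector.append(control_flow)      # stand-in for transmitting to a collector device

if __name__ == "__main__":
    collector = []
    flow = {"src": "10.0.0.5", "dst": "10.0.0.9", "sport": 44321, "dport": 443, "bytes": 5120}
    export_to_collector(build_control_flow(flow, {44321: "nginx"}), collector)
    print(collector[0]["process"])   # nginx
```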
  • Patent number: 11922197
    Abstract: Virtual machine (VM) proliferation may be reduced through the use of Virtual Server Agents (VSAs) assigned to a group of VM hosts that may determine the availability of a VM to perform a task. Tasks may be assigned to existing VMs instead of creating a new VM to perform the task. Furthermore, a VSA coordinator may determine a grouping of VMs or VM hosts based on one or more factors associated with the VMs or the VM hosts, such as VM type or geographical location of the VM hosts. The VSA coordinator may also assign one or more VSAs to facilitate managing the group of VM hosts. In some embodiments, the VSA coordinators may facilitate load balancing of VSAs during operation, such as during a backup operation, a restore operation, or any other operation between a primary storage system and a secondary storage system.
    Type: Grant
    Filed: December 10, 2020
    Date of Patent: March 5, 2024
    Assignee: Commvault Systems, Inc.
    Inventors: Rajiv Kottomtharayil, Rahul S. Pawar, Ashwin Gautamchand Sancheti, Sumer Dilip Deshpande, Sri Karthik Bhagi, Henry Wallace Dornemann, Ananda Venkatesha
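    A small Python sketch of the reuse-before-create decision described in the abstract above; the availability model and naming are assumptions, not Commvault's VSA logic.
```python
class VirtualServerAgent:
    def __init__(self, existing_vms):
        self.idle_vms = list(existing_vms)   # VMs not currently running a task
        self.created = 0

    def assign(self, task: str) -> str:
        """Prefer an existing idle VM; only create a new VM when none is free,
        which keeps VM proliferation down."""
        if self.idle_vms:
            vm = self.idle_vms.pop(0)
        else:
            self.created += 1
            vm = f"new-vm-{self.created}"
        return f"{task} -> {vm}"

if __name__ == "__main__":
    vsa = VirtualServerAgent(["vm-a", "vm-b"])
    for t in ["backup-1", "backup-2", "backup-3"]:
        print(vsa.assign(t))     # only backup-3 triggers creation of a new VM
```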
  • Patent number: 11900164
    Abstract: In accordance with some aspects of the present disclosure, an apparatus is disclosed. The apparatus includes a processor and a memory, wherein the memory includes programmed instructions that when executed by the processor, cause the apparatus to receive a request to join a plurality of entity data structures using a first join order, determine a first performance cost of the first join order, determine a second performance cost of a second join order, determine whether the second performance cost is lower than the first performance cost, in response to determining that the second performance cost is lower than or exceeds the first performance cost, select the second join order or the first join order, respectively, join the plurality of entity data structures using the selected join order, and send the joined plurality of entity data structures.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: February 13, 2024
    Assignee: Nutanix, Inc.
    Inventors: Abhinay Nagpal, Cong Liu, Himanshu Shukla, Sourav Kumar
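    A hedged Python sketch of comparing the estimated costs of two join orders before executing the cheaper one; the nested-loop cost model and the size estimates are deliberate simplifications, not the claimed cost function.
```python
def estimate_cost(order, sizes):
    """Very rough cost model: nested-loop joins evaluated left to right."""
    cost, rows = 0, sizes[order[0]]
    for name in order[1:]:
        cost += rows * sizes[name]        # comparisons for this join step
        rows = min(rows, sizes[name])     # crude estimate of the intermediate size
    return cost

def pick_join_order(first_order, second_order, sizes):
    first_cost = estimate_cost(first_order, sizes)
    second_cost = estimate_cost(second_order, sizes)
    return second_order if second_cost < first_cost else first_order

if __name__ == "__main__":
    sizes = {"vms": 10_000, "hosts": 50, "clusters": 5}
    requested = ["vms", "hosts", "clusters"]      # first join order from the request
    alternative = ["clusters", "hosts", "vms"]    # second join order to evaluate
    print(pick_join_order(requested, alternative, sizes))   # ['clusters', 'hosts', 'vms']
```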
  • Patent number: 11894996
    Abstract: Systems, methods, and computer-readable media for annotating process and user information for network flows. In some embodiments, a capturing agent, executing on a first device in a network, can monitor a network flow associated with the first device. The first device can be, for example, a virtual machine, a hypervisor, a server, or a network device. Next, the capturing agent can generate a control flow based on the network flow. The control flow may include metadata that describes the network flow. The capturing agent can then determine which process executing on the first device is associated with the network flow and label the control flow with this information. Finally, the capturing agent can transmit the labeled control flow to a second device, such as a collector, in the network.
    Type: Grant
    Filed: January 29, 2021
    Date of Patent: February 6, 2024
    Assignee: Cisco Technology, Inc.
    Inventors: Navindra Yadav, Abhishek Ranjan Singh, Anubhav Gupta, Shashidhar Gandham, Jackson Ngoc Ki Pang, Shih-Chun Chang, Hai Trong Vu
  • Patent number: 11886918
    Abstract: An apparatus and method for intelligently scheduling threads across a plurality of logical processors. For example, one embodiment of a processor comprises: a plurality of cores; one or more peripheral component interconnects to couple the plurality of cores to memory, and in response to a core configuration command to deactivate a core of the plurality of cores, a region within the memory is updated with an indication of deactivation of the core.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: January 30, 2024
    Assignee: INTEL CORPORATION
    Inventors: Ankush Varma, Nikhil Gupta, Vasudevan Srinivasan, Krishnakanth Sistla, Nilanjan Palit, Abhinav Karhu, Eugene Gorbatov, Eliezer Weissmann
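    A tiny Python sketch of recording a core deactivation in a shared status region that a scheduler can consult, as the abstract above describes; the region layout and status values are hypothetical.
```python
# Hypothetical in-memory status region: one entry per logical processor.
core_status_region = {0: "active", 1: "active", 2: "active", 3: "active"}

def handle_core_config_command(core_id: int, deactivate: bool) -> None:
    """Update the shared region so the scheduler stops placing threads on a
    core that has been deactivated."""
    core_status_region[core_id] = "deactivated" if deactivate else "active"

if __name__ == "__main__":
    handle_core_config_command(2, deactivate=True)
    schedulable = [c for c, s in core_status_region.items() if s == "active"]
    print(schedulable)   # [0, 1, 3]
```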
  • Patent number: 11880700
    Abstract: Various embodiments of the present invention relate to a method for managing memory in a Java execution environment and an electronic device for performing the same. The electronic device may comprise a processor and a memory electrically connected to the processor, wherein: the memory is configured to store multiple Java application programs and stores instructions that, when executed, cause the processor to execute a virtual machine configured to execute at least one Java application stored in the memory; and when generation of an object is detected during execution of the Java application, the virtual machine executed by the processor generates a reference for the generated object, identifies an application that has generated objects at or above a threshold on the basis of the generated references, and provides information on the identified application to the processor. Other embodiments may also be possible.
    Type: Grant
    Filed: April 25, 2019
    Date of Patent: January 23, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Kyungseok Lee, Jingu Kang, Kihun Heo, Hyojong Kim, Hakryoul Kim, Hyunjoon Kim, Donggyu Ahn, Haewook Lee, Kwanhee Jeong, Mooyoung Kim, Minjung Kim