Patents Examined by Zujia Xu
  • Patent number: 11327785
    Abstract: A computing system includes an application configured to request execution of at least one transaction including at least one command. A first coupling facility is configured to perform a first modification process to modify a first structure based on a received command associated with an ongoing transaction. A second coupling facility includes a secondary circular queue loaded with first data blocks indicating the first modification process, and is configured to output a message response block (MRB). The application determines a most recent modification process performed by the secondary coupling facility based on the MRB.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: May 10, 2022
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Peter D. Driever, Jeffrey W. Josten, Georgette L. Kurdt, David H. Surman
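    Illustrative sketch (Python; the queue depth, record fields, and message_response_block layout are assumptions, not from the patent): a secondary facility holds a fixed-depth circular queue of data blocks describing modifications replayed from the primary, and the application reads a returned response block to find the most recent one.

      from collections import deque

      class SecondaryCircularQueue:
          """Fixed-depth queue of data blocks describing modifications applied at the primary."""
          def __init__(self, depth=8):
              self.entries = deque(maxlen=depth)

          def record(self, sequence_number, command):
              self.entries.append({"seq": sequence_number, "command": command})

          def message_response_block(self):
              """Toy MRB: just enough for a caller to see what was replayed most recently."""
              return {"entries": list(self.entries)}

      def most_recent_modification(mrb):
          """Application side: pick the highest-sequence entry reported in the MRB."""
          return max(mrb["entries"], key=lambda e: e["seq"], default=None)

      if __name__ == "__main__":
          queue = SecondaryCircularQueue()
          for seq, cmd in enumerate(["write list A", "update lock B", "write list C"], start=1):
              queue.record(seq, cmd)
          print(most_recent_modification(queue.message_response_block()))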
  • Patent number: 11321185
    Abstract: A method for performing a backup operation includes receiving, by a backup storage device, a backup request, and in response to the backup request: identifying a plurality of virtual machines (VMs) associated with the backup request, identifying a VM of the plurality of VMs that is in an orphaned state, and initiating a backup for each of the plurality of VMs except the VM.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: May 3, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Sharath Talkad Srinivasan, Smitha Prakash Kalburgi
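    Illustrative sketch (Python; Vm, start_backup, and the state values are hypothetical names, not from the patent): the backup flow the abstract describes, skipping the VM found in an orphaned state and backing up the rest.

      from dataclasses import dataclass

      @dataclass
      class Vm:
          name: str
          state: str  # e.g. "running", "powered_off", "orphaned"

      def start_backup(vm: Vm) -> None:
          # Placeholder for handing the VM off to the backup storage device.
          print(f"starting backup for {vm.name}")

      def handle_backup_request(vms: list[Vm]) -> list[str]:
          """Back up every VM associated with the request except the orphaned one(s)."""
          backed_up = []
          for vm in vms:
              if vm.state == "orphaned":
                  # An orphaned VM has no valid backing configuration, so it is skipped.
                  continue
              start_backup(vm)
              backed_up.append(vm.name)
          return backed_up

      if __name__ == "__main__":
          fleet = [Vm("app-01", "running"), Vm("db-01", "orphaned"), Vm("web-01", "running")]
          print(handle_backup_request(fleet))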
  • Patent number: 11314557
    Abstract: A method for processing a computing task comprises: dividing multiple computing resources into multiple groups on the basis of topology information describing a connection relationship between the multiple computing resources; selecting at least one computing resource from at least one group of the multiple groups; determining processing performance of processing the computing task with the selected at least one computing resource; and allocating the at least one computing resource on the basis of the processing performance to process the computing task. Accordingly, the multiple computing resources can be utilized sufficiently, so that the computing task can be processed with better processing performance.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: April 26, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Junping Zhao, Kun Wang
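    Illustrative sketch (Python; the NUMA-style topology map and the 1/domain-count performance score are assumptions, not from the patent): group resources by topology, build a candidate selection per group, estimate its processing performance, and allocate the best-scoring selection.

      from collections import defaultdict

      def group_by_topology(resources, topology):
          """Group resource ids by the connection domain the topology map assigns them."""
          groups = defaultdict(list)
          for res in resources:
              groups[topology[res]].append(res)
          return groups

      def estimated_performance(selection, topology):
          """Toy model: selections spanning fewer connection domains communicate faster."""
          return 1.0 / len({topology[r] for r in selection})

      def allocate(resources, topology, needed):
          """Build one candidate selection per group (padded from other groups if short),
          estimate each candidate's performance, and allocate the best-scoring one."""
          groups = group_by_topology(resources, topology)
          best, best_score = None, float("-inf")
          for members in groups.values():
              others = [r for r in resources if r not in members]
              candidate = (members + others)[:needed]
              if len(candidate) < needed:
                  continue
              score = estimated_performance(candidate, topology)
              if score > best_score:
                  best, best_score = candidate, score
          return best

      if __name__ == "__main__":
          topo = {"gpu0": "numa0", "gpu1": "numa0", "gpu2": "numa1", "gpu3": "numa1"}
          print(allocate(list(topo), topo, needed=2))   # prefers two GPUs on one NUMA node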
  • Patent number: 11237872
    Abstract: Real-time job distribution software architectures for high bandwidth, hybrid processor computation systems for semiconductor inspection and metrology are disclosed. The image processing computer architecture can be scalable by changing the number of CPUs and GPUs to meet computing needs. The architecture is defined using a master node and one or more worker nodes to run image processing jobs in parallel for maximum throughput. The master node can receive input image data from a semiconductor wafer or reticle. Jobs based on the input image data are distributed to one of the worker nodes. Each worker node can include at least one CPU and at least one GPU. The image processing job can contain multiple tasks, and each of the tasks can be assigned to one of the CPU or GPU in the worker node using a worker job manager to process the image.
    Type: Grant
    Filed: May 14, 2018
    Date of Patent: February 1, 2022
    Assignee: KLA-TENCOR CORPORATION
    Inventors: Ajay Gupta, Sankar Venkataraman, Sashi Balasingam, Mohan Mahadevan
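    Illustrative sketch (Python; the task names and the master/worker functions are hypothetical, and threads stand in for separate nodes): a master distributes jobs to workers, and each worker's job manager assigns every task of a job to a CPU or GPU runner.

      import concurrent.futures

      def run_on_cpu(task):
          return f"{task} done on CPU"

      def run_on_gpu(task):
          return f"{task} done on GPU"

      def worker_job_manager(job):
          """Assign each task of an image-processing job to the CPU or GPU runner."""
          with concurrent.futures.ThreadPoolExecutor() as pool:
              futures = []
              for task, kind in job:
                  runner = run_on_gpu if kind == "gpu" else run_on_cpu
                  futures.append(pool.submit(runner, task))
              return [f.result() for f in futures]

      def master_node(jobs, workers):
          """Round-robin jobs from the master node onto the worker nodes."""
          results = []
          for i, job in enumerate(jobs):
              worker = workers[i % len(workers)]
              results.append(worker(job))
          return results

      if __name__ == "__main__":
          jobs = [[("align", "cpu"), ("defect_detect", "gpu")],
                  [("denoise", "gpu"), ("report", "cpu")]]
          print(master_node(jobs, workers=[worker_job_manager, worker_job_manager]))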
  • Patent number: 11216315
    Abstract: Methods and systems for allocating disk space and other limited resources (e.g., network bandwidth) for a cluster of data storage nodes using distributed semaphores with atomic updates are described. The distributed semaphores may be built on top of a distributed key-value store and used to reserve disk space, global disk streams for writing data to disks, and per node network bandwidth settings. A distributed semaphore comprising two or more semaphores that are accessed with different keys may be used to reduce contention and allow a globally accessible semaphore to scale as the number of data storage nodes within the cluster increases over time. In some cases, the number of semaphores within the distributed semaphore may be dynamically adjusted over time and may be set based on the total amount of disk space within the cluster and/or the number of contention fails that have occurred to the distributed semaphore.
    Type: Grant
    Filed: February 21, 2018
    Date of Patent: January 4, 2022
    Assignee: Rubrik, Inc.
    Inventors: Noel Moldvai, Fabiano Botelho
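    Illustrative sketch (Python; the in-process KeyValueStore and the shard-probing order are assumptions standing in for a real distributed key-value store): a semaphore split across several keys so reservations contend on different shards, each reserved with an atomic compare-and-set.

      import threading

      class KeyValueStore:
          """Stand-in for a distributed key-value store with an atomic compare-and-set."""
          def __init__(self):
              self._data, self._lock = {}, threading.Lock()

          def get(self, key):
              return self._data.get(key, 0)

          def compare_and_set(self, key, expected, new):
              with self._lock:
                  if self._data.get(key, 0) != expected:
                      return False
                  self._data[key] = new
                  return True

      class DistributedSemaphore:
          """Semaphore split across several keys so reservations contend less often."""
          def __init__(self, store, name, shards, capacity_per_shard):
              self.store, self.name, self.shards = store, name, shards
              self.capacity = capacity_per_shard

          def try_reserve(self, node_id, amount):
              # Start at a shard derived from the caller, then probe the others.
              start = hash(node_id) % self.shards
              for i in range(self.shards):
                  key = f"{self.name}/{(start + i) % self.shards}"
                  used = self.store.get(key)
                  if used + amount <= self.capacity and \
                     self.store.compare_and_set(key, used, used + amount):
                      return key  # reservation succeeded on this shard
              return None  # every shard was full or lost its compare-and-set race

      if __name__ == "__main__":
          sem = DistributedSemaphore(KeyValueStore(), "disk-space-gb", shards=4, capacity_per_shard=100)
          print(sem.try_reserve("node-a", 40), sem.try_reserve("node-b", 80))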
  • Patent number: 11210122
    Abstract: A virtualization management/orchestration apparatus is provided with: a Network Function Virtualization Orchestrator (NFVO) that reads a Network Service Descriptor (NSD) in which an entry defining dependency between a VNF and a prescribed element is provided, and creates the VNF and the prescribed element according to the dependency defined in the NSD; and/or a VNF manager (VNFM) that reads a Virtualized Network Function Descriptor (VNFD) provided with an entry defining dependency between a VM and a prescribed element, and creates the VM and the prescribed element according to the dependency defined in the VNFD.
    Type: Grant
    Filed: January 27, 2016
    Date of Patent: December 28, 2021
    Assignee: NEC CORPORATION
    Inventors: Naoya Yabushita, Ryota Mibu, Hirokazu Shinozawa, Yoshiki Kikuchi
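    Illustrative sketch (Python; the descriptor contents and element names are invented, not an actual NSD/VNFD): dependency entries read from a descriptor drive the order in which the elements are created.

      from graphlib import TopologicalSorter

      # Hypothetical, simplified descriptor: each entry lists the elements it depends on.
      descriptor = {
          "virtual_link": [],
          "vnf_firewall": ["virtual_link"],
          "vnf_router": ["virtual_link", "vnf_firewall"],
      }

      def create(element):
          print(f"creating {element}")

      def instantiate_from_descriptor(entries):
          """Create every element after the elements it depends on, per the descriptor."""
          for element in TopologicalSorter(entries).static_order():
              create(element)

      if __name__ == "__main__":
          instantiate_from_descriptor(descriptor)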
  • Patent number: 11188367
    Abstract: A method is provided for a protection module or a process to use a hypervisor to protect memory pages of a guest operating system on the hypervisor. The method includes modifying a shared memory page in a context of the process, which causes the guest operating system to allocate a private memory page to the process, copy data from the shared memory page to the private memory page, and modify the private memory page. The method further includes causing the hypervisor to protect the private memory page by monitoring the private memory page and generating an alert when the private memory page is accessed.
    Type: Grant
    Filed: January 11, 2018
    Date of Patent: November 30, 2021
    Assignee: NICIRA INC.
    Inventor: Sukrut Patil
  • Patent number: 11188379
    Abstract: Embodiments of the present invention disclose a method, computer program product, and system for processing a thread of execution on a plurality of independent processing cores. In various embodiments, a run state and a local maximum thermal power are assigned to each of at least part of the cores. A first one of the cores is set to the active state. The thread is processed on the first core in the active state. The processing of the thread on the first core is monitored for fulfilment of an interrupt condition. A second one of the cores is set to the active state. The processing of the thread on the first core is halted. The processing of the thread is transferred to the second core. The processing of the thread continues on the second core in the active state, and the first core is set to the cooling state.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: November 30, 2021
    Assignee: International Business Machines Corporation
    Inventors: Marco Kraemer, Matteo Michel, Carsten Otte, Christoph Raisch
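    Illustrative sketch (Python; the instruction-count interrupt condition and the thermal figures are placeholders, not from the patent): process on the active core, and on the interrupt condition activate the next core, hand the thread over, and set the first core to the cooling state.

      from dataclasses import dataclass

      @dataclass
      class Core:
          name: str
          run_state: str = "cooling"          # "active" or "cooling"
          max_thermal_power_w: float = 15.0   # local thermal budget, illustrative

      def interrupt_condition(instructions_done: int) -> bool:
          # Stand-in for the monitored condition: migrate after every fixed slice of work.
          return instructions_done % 1000 == 0

      def run_with_migration(cores, total_instructions):
          current = cores[0]
          current.run_state = "active"
          done = 0
          while done < total_instructions:
              done += 1                          # process the thread on the active core
              if interrupt_condition(done) and done < total_instructions:
                  nxt = cores[(cores.index(current) + 1) % len(cores)]
                  nxt.run_state = "active"       # bring the next core up
                  current.run_state = "cooling"  # halt here and let this core cool
                  print(f"migrated thread from {current.name} to {nxt.name} after {done} instructions")
                  current = nxt
          print(f"thread finished on {current.name}")

      if __name__ == "__main__":
          run_with_migration([Core("core0"), Core("core1")], total_instructions=3000)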
  • Patent number: 11175958
    Abstract: A plurality of interfaces that share a plurality of resources in a storage controller are maintained. In response to an occurrence of a predetermined number of operations associated with an interface of the plurality of interfaces, an input is provided on a plurality of attributes of the storage controller to a machine learning module. In response to receiving the input, the machine learning module generates an output value corresponding to a number of resources of the plurality of resources to allocate to the interface in the storage controller.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: November 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Matthew R. Craig, Beth Ann Peterson, Kevin John Ash
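    Illustrative sketch (Python; OPERATION_TRIGGER, the attribute names, and the placeholder ml_module are assumptions, not from the patent): after a predetermined number of operations on an interface, controller attributes are fed to the module and its output sets that interface's resource count.

      OPERATION_TRIGGER = 1000  # "predetermined number of operations", illustrative value

      def ml_module(attributes: dict) -> int:
          """Placeholder for the trained module: maps controller attributes to a resource count."""
          pending, free = attributes["ops_waiting_for_resource"], attributes["available_resources"]
          return max(1, min(free, pending // 10))

      class StorageController:
          def __init__(self, interfaces):
              self.op_counts = {name: 0 for name in interfaces}
              self.allocations = {name: 0 for name in interfaces}

          def record_operation(self, interface: str, attributes: dict) -> None:
              self.op_counts[interface] += 1
              if self.op_counts[interface] % OPERATION_TRIGGER == 0:
                  # Re-evaluate this interface's share of the shared resources.
                  self.allocations[interface] = ml_module(attributes)

      if __name__ == "__main__":
          ctl = StorageController(["fibre_channel", "zhyperlink"])
          for _ in range(2000):
              ctl.record_operation("fibre_channel",
                                   {"ops_waiting_for_resource": 250, "available_resources": 64})
          print(ctl.allocations)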
  • Patent number: 11175959
    Abstract: A machine learning module receives inputs comprising attributes of a storage controller, wherein the attributes affect allocation of a plurality of resources to a plurality of interfaces. In response to a predetermined number of I/O operations occurring in the storage controller, a generation is made via forward propagation through a plurality of layers of the machine learning module, of an output value corresponding to a number of resources to allocate to an interface. A margin of error is calculated based on comparing the generated output value to an expected output value that is generated from an indication of a predetermined function based at least on a number of I/O operations that are waiting for a resource and a number of available resources. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation, to reduce the margin of error.
    Type: Grant
    Filed: May 1, 2019
    Date of Patent: November 16, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lokesh M. Gupta, Matthew R. Craig, Beth Ann Peterson, Kevin John Ash
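    Illustrative sketch (Python with NumPy; the network size, input scaling, and the stand-in "predetermined function" are assumptions, not from the patent): a forward pass through two layers produces the output value, the margin of error against the expected value is computed, and back-propagation adjusts the link weights.

      import numpy as np

      rng = np.random.default_rng(0)
      w1, w2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))   # two layers of link weights

      def expected_resources(waiting_ops, available):
          # Stand-in for the predetermined function of waiting I/O and available resources.
          return min(available, waiting_ops / 4.0)

      def train_step(waiting_ops, available, lr=1e-3):
          """One forward pass, margin-of-error computation, and back-propagation update."""
          global w1, w2
          x = np.array([[waiting_ops / 200.0, available / 64.0]])   # normalized attributes
          hidden = np.tanh(x @ w1)
          out = hidden @ w2
          error = out[0, 0] - expected_resources(waiting_ops, available)   # margin of error
          grad_w2 = hidden.T * error                       # gradients of 0.5 * error**2
          grad_hidden = (error * w2.T) * (1 - hidden ** 2)
          grad_w1 = x.T @ grad_hidden
          w1 -= lr * grad_w1
          w2 -= lr * grad_w2
          return abs(error)

      if __name__ == "__main__":
          for _ in range(5000):
              err = train_step(waiting_ops=int(rng.integers(0, 200)), available=64)
          print("margin of error on the last sample:", round(float(err), 2))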
  • Patent number: 11150920
    Abstract: Techniques for implementing 3D API redirection for VDI desktops are provided. In one set of embodiments, a server system can intercept a call to a 3D API made by a 3D application running within a VM on the server system, where the VM hosts a desktop that is presented to a user of a client system. The server system can determine metadata associated with the call, where the metadata includes a name of the 3D API and one or more input parameter values to the call, and can transmit the metadata to the client system. In response, the client system can reconstruct the call to the 3D API using the metadata and execute the call using one or more physical GPUs residing on the client system.
    Type: Grant
    Filed: May 25, 2018
    Date of Patent: October 19, 2021
    Assignee: VMware, Inc.
    Inventors: Yuping Wei, Ke Xiao, Kejing Meng, Qiao Huang
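    Illustrative sketch (Python; the JSON wire format and the client-side call table are assumptions, and the GL calls are only examples): the server captures the API name and input parameters of an intercepted 3D call, and the client reconstructs and executes it locally.

      import json

      # Client-side table of callable 3D entry points (stand-ins for real GPU-backed calls).
      CLIENT_3D_API = {
          "glDrawArrays": lambda mode, first, count: f"drew {count} vertices ({mode})",
          "glClear": lambda mask: f"cleared {mask}",
      }

      def server_intercept(api_name, *args):
          """Server side: capture the call's metadata instead of executing it in the VM."""
          metadata = {"api": api_name, "params": list(args)}
          return json.dumps(metadata)          # transmitted to the client system

      def client_execute(wire_message):
          """Client side: reconstruct the call from metadata and run it on local GPU(s)."""
          metadata = json.loads(wire_message)
          func = CLIENT_3D_API[metadata["api"]]
          return func(*metadata["params"])

      if __name__ == "__main__":
          msg = server_intercept("glDrawArrays", "GL_TRIANGLES", 0, 36)
          print(client_execute(msg))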
  • Patent number: 11144412
    Abstract: A synchronization process for virtual-machine images (and other segmented files) provides for generating a “delta” bitmap indicating which segments (e.g., clusters) of a first virtual-machine image were changed to obtain a second (e.g., updated) virtual-machine image on a source node. The delta bitmap can be applied to the second-virtual-machine image to generate a delta file. The delta file can be sent along with the delta bitmap to a target node that already has a copy of the first virtual-machine image. The transferred delta bitmap and delta file can then be used on the target node to generate a replica of the second virtual-machine image, thereby effecting synchronization. In variations, different bitmaps and delta files can be transferred to optimize the synchronization process.
    Type: Grant
    Filed: June 5, 2017
    Date of Patent: October 12, 2021
    Assignee: VMware, Inc.
    Inventors: Oleg Zaydman, Preeti Kota
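    Illustrative sketch (Python; the cluster size and in-memory byte strings are stand-ins for virtual-machine image segments): build a delta bitmap of changed clusters on the source, ship it with a delta file of just those clusters, and rebuild the second image on the target from its copy of the first.

      CLUSTER = 4096  # segment size, illustrative

      def clusters(data: bytes):
          return [data[i:i + CLUSTER] for i in range(0, len(data), CLUSTER)]

      def make_delta(old_image: bytes, new_image: bytes):
          """Source node: bitmap of changed clusters plus a delta file of just those clusters."""
          old_c, new_c = clusters(old_image), clusters(new_image)
          bitmap = [i >= len(old_c) or old_c[i] != new_c[i] for i in range(len(new_c))]
          delta_file = b"".join(c for i, c in enumerate(new_c) if bitmap[i])
          return bitmap, delta_file

      def apply_delta(old_image: bytes, bitmap, delta_file: bytes) -> bytes:
          """Target node: rebuild the new image from its copy of the old one plus the delta."""
          old_c, out, offset = clusters(old_image), [], 0
          for i, changed in enumerate(bitmap):
              if changed:
                  out.append(delta_file[offset:offset + CLUSTER])
                  offset += CLUSTER
              else:
                  out.append(old_c[i])
          return b"".join(out)

      if __name__ == "__main__":
          v1 = bytes(3 * CLUSTER)
          v2 = v1[:CLUSTER] + b"\x01" * CLUSTER + v1[2 * CLUSTER:] + b"\x02" * CLUSTER
          bitmap, delta = make_delta(v1, v2)
          assert apply_delta(v1, bitmap, delta) == v2
          print(bitmap, len(delta))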
  • Patent number: 11099899
    Abstract: A computing device receives, from a thread of a multi-thread application, a release message. Each of the threads indicates operation(s) on a memory associated with the application. The release message indicates that a data object used by the thread is released. The device indicates that a memory slot of a data pool is unlocked permitting storage of an indication of a location of the data object in the memory. Each memory slot of the data pool is individually lockable such that a locked memory slot of the data pool indicates storing a location in the locked memory slot will not be permitted even though storing the location in an unlocked memory slot of the data pool will be permitted. The device stores, in the memory slot of the data pool, an indication of a location of the data object. The data object comprises the location of the memory slot.
    Type: Grant
    Filed: April 2, 2020
    Date of Patent: August 24, 2021
    Assignee: SAS Institute Inc.
    Inventor: Charles S. Shorb
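    Illustrative sketch (Python; the per-slot threading.Lock and the dict-shaped data object are modeling choices, not the patent's structures): each slot of the pool is individually lockable, a release stores the object's location in an unlocked slot, and the object records which slot holds it.

      import threading

      class DataPool:
          """Free-list of released data objects where every slot is individually lockable."""
          def __init__(self, size):
              self.slots = [None] * size
              self.slot_locks = [threading.Lock() for _ in range(size)]

          def release(self, obj):
              """Store the object's location in the first slot whose lock can be taken."""
              for i, lock in enumerate(self.slot_locks):
                  if lock.acquire(blocking=False):        # locked slots are simply skipped
                      try:
                          self.slots[i] = obj["location"]
                          obj["slot"] = i                 # the object records its slot
                          return i
                      finally:
                          lock.release()
              raise RuntimeError("no unlocked slot available")

          def reuse(self, slot_index):
              """Hand a released location back to a thread that needs a data object."""
              with self.slot_locks[slot_index]:
                  location, self.slots[slot_index] = self.slots[slot_index], None
                  return location

      if __name__ == "__main__":
          pool = DataPool(size=4)
          obj = {"location": 0x7F00_0000, "slot": None}
          idx = pool.release(obj)
          print(idx, hex(pool.reuse(idx)))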
  • Patent number: 11080097
    Abstract: Customers of a computing resource service provider may transmit requests to instantiate compute instances associated with a plurality of logical partitions. The compute instances may be executed by a server computer system associated with a particular logical partition of the plurality of logical partitions. For example, a compute service may determine a set of server computer systems that are capable of executing the compute instance based at least in part on placement information and/or a diversity constraint of the plurality of logical partitions.
    Type: Grant
    Filed: May 30, 2017
    Date of Patent: August 3, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Vikas Panghal, Alan Hadley Goodman, André Mostert, Stig Manning, Joshua Dawie Mentz, Gustav Karl Mauer, Marnus Freeman
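    Illustrative sketch (Python; the server/partition map and the first-fit choice are assumptions, not from the patent): place each requested compute instance on a server whose logical partition has not been used yet, enforcing a simple diversity constraint.

      def place_instances(instance_count, servers):
          """Spread instances so no two share a logical partition (a diversity constraint).

          `servers` maps server name -> logical partition id; both are illustrative.
          """
          used_partitions, placement = set(), {}
          for i in range(instance_count):
              candidates = [s for s, part in servers.items() if part not in used_partitions]
              if not candidates:
                  raise RuntimeError("not enough distinct partitions for this placement group")
              chosen = candidates[0]
              placement[f"instance-{i}"] = chosen
              used_partitions.add(servers[chosen])
          return placement

      if __name__ == "__main__":
          fleet = {"srv-a": "partition-1", "srv-b": "partition-1",
                   "srv-c": "partition-2", "srv-d": "partition-3"}
          print(place_instances(3, fleet))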
  • Patent number: 11036553
    Abstract: A priority-based resource allocation method includes accepting a resource application submitted by a job, the resource application including resource demand information and job priority information; determining, according to the resource demand information of the resource application, whether remaining resources of a system meet the resource application, and traversing, in an allocated resource application queue when the remaining resources do not meet the resource application, allocated resource applications having job priorities lower than that of the resource application; using the sum of system resources occupied by all traversed resource applications plus the remaining resources as available resources; and stopping traversing when the available resources meet the resource application, and allocating the available resources to the resource application.
    Type: Grant
    Filed: June 9, 2017
    Date of Patent: June 15, 2021
    Assignee: Alibaba Group Holding Limited
    Inventors: Yang Zhang, Yihui Feng, Jin Ouyang, Qiaohuan Han, Fang Wang
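    Illustrative sketch (Python; the queue layout and field names are assumptions, not from the patent): if the remaining resources cannot satisfy a request, traverse lower-priority allocations, count their resources as available, and stop as soon as the request can be met.

      def allocate_with_priority(request, remaining, allocated_queue):
          """Try to satisfy `request`; if free resources fall short, count resources held by
          lower-priority allocations as reclaimable until the request can be met."""
          demand, priority = request["demand"], request["priority"]
          if remaining >= demand:
              return {"granted": True, "preempted": []}

          available, preempted = remaining, []
          # Traverse lower-priority allocations, lowest priority first.
          for alloc in sorted(allocated_queue, key=lambda a: a["priority"]):
              if alloc["priority"] >= priority:
                  break
              available += alloc["demand"]
              preempted.append(alloc["job"])
              if available >= demand:          # stop traversing as soon as it fits
                  return {"granted": True, "preempted": preempted}
          return {"granted": False, "preempted": []}

      if __name__ == "__main__":
          queue = [{"job": "batch-report", "priority": 1, "demand": 30},
                   {"job": "etl", "priority": 3, "demand": 20}]
          print(allocate_with_priority({"job": "serving", "priority": 5, "demand": 45},
                                       remaining=10, allocated_queue=queue))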
  • Patent number: 10990449
    Abstract: Application relationships may be categorized and managed at a service layer, such as creating an application relationship, updating an application relationship, retrieving an application relationship, deleting an application relationship, or discovering an application relationship. Services may be based on application relationship awareness.
    Type: Grant
    Filed: October 30, 2015
    Date of Patent: April 27, 2021
    Assignee: Convida Wireless, LLC
    Inventors: Chonggang Wang, Qing Li, Hongkun Li, Zhuo Chen, Tao Han, Paul L. Russell, Jr.
  • Patent number: 10990445
    Abstract: In various embodiments, a resource allocation management circuit may allocate a plurality of different types of hardware resources (e.g., different types of registers) to a plurality of threads. The different types of hardware resources may correspond to a plurality of hardware resource allocation circuits. The resource allocation management circuit may track allocation of the hardware resources to the threads using state identification values of the threads. In response to determining that fewer than a respective requested number of one or more types of the hardware resources are available, the resource allocation management circuit may identify one or more threads for deallocation. As a result, the hardware resource allocation system may allocate hardware resources to threads more efficiently (e.g.
    Type: Grant
    Filed: August 4, 2017
    Date of Patent: April 27, 2021
    Assignee: Apple Inc.
    Inventors: Mark D. Earl, Dimitri Tan, Christopher L. Spencer, Jeffrey T. Brady, Ralph C. Taylor, Terence M. Potter
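    Illustrative sketch (Python; the register-file capacities and the smallest-holding victim choice are assumptions, not from the patent): track per-thread allocations of several resource types, grant a request when every type fits, and otherwise identify a thread for deallocation.

      REGISTER_FILES = {"scalar": 8, "vector": 4}   # capacity per resource type, illustrative

      class ResourceAllocationManager:
          def __init__(self):
              self.free = dict(REGISTER_FILES)
              self.state = {}   # thread id -> {resource type: count}, its allocation state

          def allocate(self, thread_id, request):
              """Grant the request if every type fits; otherwise name a thread to deallocate."""
              if all(self.free[t] >= n for t, n in request.items()):
                  for t, n in request.items():
                      self.free[t] -= n
                  self.state[thread_id] = request
                  return {"granted": True, "deallocate": []}
              # Not enough of some type: identify a currently allocated thread for deallocation.
              victims = sorted(self.state, key=lambda tid: sum(self.state[tid].values()))
              return {"granted": False, "deallocate": victims[:1]}

          def deallocate(self, thread_id):
              for t, n in self.state.pop(thread_id).items():
                  self.free[t] += n

      if __name__ == "__main__":
          mgr = ResourceAllocationManager()
          print(mgr.allocate("t0", {"scalar": 6, "vector": 3}))
          print(mgr.allocate("t1", {"scalar": 4, "vector": 2}))   # short on both types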
  • Patent number: 10956195
    Abstract: One or more embodiments provide techniques for migrating a virtualized computing instance between source and destination virtualized computing systems. A migration assist agent creates a content based read cache (CBRC), which generates one or more digest files. Each of the one or more digest files corresponds to a container file. The migration assist agent transmits CBRC metadata and the one or more digest files to the destination virtualized computing system. The migration assist agent transmits one or more pages belonging to the CBRC to the destination virtualized computing system. For each container file, the migration assist agent references the digest file corresponding to the container file with the CBRC to determine if a hash value is in the CBRC. Responsive to determining that the hash value in the digest file is in the CBRC, the migration assist agent marks the container file as complete.
    Type: Grant
    Filed: July 21, 2017
    Date of Patent: March 23, 2021
    Assignee: VMware, Inc.
    Inventor: Pavan Karkun
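    Illustrative sketch (Python; SHA-1 page hashes and the in-memory cache stand in for the CBRC and its digest files): build a digest per container file, compare its hashes against the destination's cache, and mark a container complete when every hash is already present.

      import hashlib

      PAGE = 4096

      def build_digest(container: bytes):
          """Per-container digest file: one hash per page, as a content based read cache keeps."""
          return [hashlib.sha1(container[i:i + PAGE]).hexdigest()
                  for i in range(0, len(container), PAGE)]

      def transfer(containers: dict, cache_pages: dict):
          """Send digests first; a container whose every page hash is already cached on the
          destination is marked complete without resending its data."""
          digests = {name: build_digest(data) for name, data in containers.items()}
          complete = {name: all(h in cache_pages for h in digest)
                      for name, digest in digests.items()}
          return complete

      if __name__ == "__main__":
          page_a, page_b = b"a" * PAGE, b"b" * PAGE
          destination_cache = {hashlib.sha1(page_a).hexdigest(): page_a}
          print(transfer({"disk.vmdk": page_a, "swap.vmdk": page_b}, destination_cache))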
  • Patent number: 10956197
    Abstract: A server includes a hardware platform, a hypervisor platform, and at least one virtual machine operating as an independent guest computing device. The hypervisor includes a memory facilitator, at least one hardware emulator, and an emulator manager. The memory facilitator provides memory for a virtual machine, with the memory having state data associated therewith at a current location within the virtual machine. The at least one hardware emulator provides at least one set of hardware resources for the virtual machine, with the at least one set of hardware resources having state data associated therewith at the current location within the virtual machine. The emulator manager coordinates transfer of the respective state data from the current location to a different location, and tracks progress of the transfer of the respective state data to the different location.
    Type: Grant
    Filed: December 13, 2017
    Date of Patent: March 23, 2021
    Assignee: CITRIX SYSTEMS, INC.
    Inventor: Jennifer Rachel Herbert
  • Patent number: 10949242
    Abstract: Disclosed by the present invention are a running method and system for an embedded virtual device, in which an embedded device is divided into a managing process, a plurality of real-time modules, and a plurality of non-real-time modules. The managing process reads a configuration file, loads the real-time and non-real-time module libraries of each processor, and completes initialization interaction by means of a virtual controller area network (CAN) bus and first in, first out (FIFO) communication. The managing process starts a real-time thread and serially schedules real-time tasks according to a task period setting relation. The managing process starts a plurality of non-real-time threads, calls the periodic tasks of the non-real-time modules, and carries out parallel communication with a plurality of debugging clients. The real-time modules exchange data with each other by means of a virtual data bus, and the real-time modules exchange data with the non-real-time modules by means of a shared memory.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: March 16, 2021
    Assignees: NR ELECTRIC CO., LTD, NR ENGINEERING CO., LTD
    Inventors: Hongjun Chen, Qiang Zhou, Jifeng Wen, Jiuhu Li, Dongfang Xu, Guanghua Li, Wei Liu, Dewen Li, Lei Zhou, Tianen Zhao
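    Illustrative sketch (Python; threads and a shared dict stand in for the managing process, the virtual buses, and the shared memory): a real-time thread serially schedules periodic tasks by their period relation while non-real-time threads run in parallel and read the shared data.

      import threading
      import time

      shared_memory = {"samples": 0}            # real-time -> non-real-time exchange area

      def realtime_task_a():
          shared_memory["samples"] += 1         # e.g. sample a virtual data bus

      def realtime_task_b():
          shared_memory["filtered"] = shared_memory["samples"] * 0.5

      def realtime_thread(stop, base_period=0.01):
          """Serially schedule real-time tasks; task B runs every second base period."""
          tick = 0
          while not stop.is_set():
              realtime_task_a()
              if tick % 2 == 0:
                  realtime_task_b()
              tick += 1
              time.sleep(base_period)

      def non_realtime_thread(stop):
          """Periodic non-real-time work, e.g. serving a debugging client."""
          while not stop.is_set():
              print("debug snapshot:", dict(shared_memory))
              time.sleep(0.05)

      if __name__ == "__main__":
          stop = threading.Event()
          threads = [threading.Thread(target=realtime_thread, args=(stop,)),
                     threading.Thread(target=non_realtime_thread, args=(stop,))]
          for t in threads:
              t.start()
          time.sleep(0.2)
          stop.set()
          for t in threads:
              t.join()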