Process Scheduling Patents (Class 718/102)
  • Patent number: 10783108
    Abstract: The present invention provides a mechanism whereby active servers are able to extend their RAM by using memory available in standby servers. This can be achieved, without having to take the servers out of their standby mode, by implementing a memory manager operating in at least one active server and configured to directly access the memory of the servers in standby mode, without requiring the processor of these servers in standby mode to be active. In these servers in standby mode, at least their memory, their network card and their communication means are active, whereas at least their processor is in standby mode.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: September 22, 2020
    Assignees: INSTITUT NATIONAL POLYTECHNIQUE DE TOULOUSE, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE
    Inventors: Daniel Hagimont, Alain Tchana
  • Patent number: 10771536
    Abstract: Systems, methods, and computer-readable media for coordinating processing of data by multiple networked computing resources include monitoring data associated with a plurality of networked computing resources, and coordinating the routing of data processing segments to the networked computing resources.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: September 8, 2020
    Assignee: ROYAL BANK OF CANADA
    Inventors: Walter Michael Pitio, Philip Iannaccone, Daniel Aisen, Bradley Katsuyama, Robert Park, John Schwall, Richard Steiner, Allen Zhang, Thomas L. Popejoy, Gregory Martin Ludvik, Thomas Matthew Clark, Xiaoran Zheng
  • Patent number: 10761992
    Abstract: A processor reduces bus bandwidth consumption by employing a shared load scheme, whereby each shared load retrieves data for multiple compute units (CUs) of a processor. Each CU in a specified group monitors a bus for load accesses directed to a cache shared by the multiple CUs. In response to identifying a load access on the bus, a CU determines if the load access is a shared load access for its share group. In response to identifying a shared load access for its share group, the CU allocates an entry of a private cache associated with the CU for data responsive to the shared load access. The CU then monitors the bus for the data targeted by the shared load. In response to identifying the targeted data on the bus, the CU stores the data at the allocated entry of the private cache.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 1, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Maxim V. Kazakov
  • Patent number: 10761765
    Abstract: A source site includes a controller, a set of source worker nodes, and a message queue connected between the controller and source worker nodes. A destination site includes a set of destination worker nodes. The controller identifies differences between a first snapshot created at the source site at a first time and a second snapshot created at a second time, after the first time. Based on the differences, a set of tasks is generated. The tasks include one or more of copying an object from the source to the destination or deleting an object from the destination. The controller places the tasks onto the message queue. A first source worker node retrieves the first task and coordinates with a first destination worker node to perform the first task. A second source worker node retrieves the second task and coordinates with a second destination worker node to perform the second task.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Abhinav Duggal, Atul Avinash Karmarkar, Philip Shilane, Kevin Xu
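A minimal sketch of the diff-and-dispatch flow this abstract describes, in Python. The snapshot representation (a dict of object name to content hash) and the task tuples are illustrative assumptions, not the patent's actual formats:

```python
from queue import Queue

def diff_tasks(snap1, snap2):
    """Compare two snapshots (name -> content hash) and emit copy/delete tasks."""
    tasks = []
    for name, digest in snap2.items():
        if snap1.get(name) != digest:          # object is new or modified
            tasks.append(("copy", name))
    for name in snap1:
        if name not in snap2:                  # object removed since the first snapshot
            tasks.append(("delete", name))
    return tasks

def enqueue_tasks(tasks):
    """Controller places tasks on a message queue for worker nodes to retrieve."""
    q = Queue()
    for task in tasks:
        q.put(task)
    return q
```

In the patent's architecture the queue would be shared across source worker nodes, each of which pops a task and coordinates with a destination worker node.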
  • Patent number: 10747729
    Abstract: Device-specific chunked hash size tuning to maximize synchronization throughput is described. A synchronization client application or similar program may employ hashing to detect changes to the content of remotely stored files and synchronize only those files (as opposed to synchronizing all files, for example). Instead of using static hash chunk sizes for all client applications of a cloud storage service, the synchronization client application may determine the size of the hash buffer by baselining hashing throughput on each synchronization device and finding the number of bytes hashed in a given amount of time. Thus, hash chunk size may be optimized on a machine-by-machine basis.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: August 18, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian D. Jones, Julian Burger
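The baselining step might look like the following sketch: time a short hashing run, derive bytes hashed per second, then size the chunk to a target latency. The target duration, floor, and cap values are illustrative assumptions:

```python
import hashlib
import time

def baseline_hash_throughput(duration=0.1, block=64 * 1024):
    """Hash dummy data for `duration` seconds; return bytes hashed per second."""
    data = b"\0" * block
    h = hashlib.sha256()
    hashed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        h.update(data)
        hashed += block
    return hashed / (time.perf_counter() - start)

def tuned_chunk_size(bytes_per_sec, target_ms=50, floor=64 * 1024, cap=8 * 1024 * 1024):
    """Pick a hash chunk size that takes roughly `target_ms` to hash on this machine."""
    size = int(bytes_per_sec * target_ms / 1000)
    return max(floor, min(cap, size))
```

A fast machine thus gets a larger chunk (fewer hashing calls), while a slow one gets a smaller chunk that keeps change detection responsive.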
  • Patent number: 10740240
    Abstract: A computer-implemented method for saving cache access power is suggested. The cache is provided with set predictor logic for providing a generated set selection for selecting a set in the cache, and with a set predictor cache for pre-caching generated set indices of the cache. The method further comprises: receiving a part of a requested memory address; checking, in the set predictor cache, whether the requested memory address has already been generated; and, in the case that it has: ensuring that the set predictor cache is switched off; issuing the pre-cached generated set index towards the cache; and ensuring that only the part of the cache associated with the pre-cached generated set index is switched on.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Johannes C. Reichart, Anthony Saporito, Siegmund Schlechter
  • Patent number: 10739996
    Abstract: Systems and methods are disclosed for enhanced garbage collection operations at a memory device. The enhanced garbage collection may include selecting data and blocks to garbage collect to improve device performance. Data may be copied and reorganized according to a data stream via which the data was received, or data and blocks may be evaluated for garbage collection based on other access efficiency metrics. Data may be selected for collection based on sequentiality of the data, host access patterns, or other factors. Processing of host commands may be throttled based on a determined amount of work to garbage collect a plurality of blocks, in order to limit variability in host command throughput over a time period.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: August 11, 2020
    Assignee: Seagate Technology LLC
    Inventors: David Scott Ebsen, Kevin A Gomez, Mark Ish, Daniel John Benjamin, Robert Wayne Moss
  • Patent number: 10742565
    Abstract: An information handling system includes a first provider module to provide a first message, a second provider module to provide a second message, a first memory structure, and a first intermediate integration module. The first intermediate integration module dequeues the first message from the first queue of the first memory structure prior to the second message in response to a determination that the first delivery time is before the second delivery time. In response to a determination that the first delivery time is substantially equal to the second delivery time, the module determines a first message identifier sequence number for the first message and a second message identifier sequence number for the second message, and dequeues the second message from the first queue prior to the first message in response to the second message identifier sequence number being lower than the first message identifier sequence number.
    Type: Grant
    Filed: January 18, 2016
    Date of Patent: August 11, 2020
    Assignee: Dell Products, L.P.
    Inventors: Shounak Chattaraj, Balasubramanian Srinivasan, Anshuman Pathak
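The ordering rule in this abstract (earlier delivery time wins; ties go to the lower sequence number) is exactly what a priority queue keyed on a (delivery time, sequence number) tuple gives you. A minimal sketch, with names of my own choosing:

```python
import heapq

class DeliveryQueue:
    """Dequeues messages by delivery time; ties go to the lower sequence number."""

    def __init__(self):
        self._heap = []

    def enqueue(self, delivery_time, seq, payload):
        # Tuple comparison orders by delivery_time first, then seq.
        heapq.heappush(self._heap, (delivery_time, seq, payload))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```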
  • Patent number: 10735341
    Abstract: An approach for dynamic provisioning of multiple RSS engines is provided. In an embodiment, a method comprises monitoring a CPU usage of hardware queues implemented in a plurality of RSS pools, and determining whether a CPU usage of any hardware queue, implemented in a particular RSS pool of the plurality of RSS pools, has increased above a threshold value. In response to determining that a CPU usage of a particular hardware queue, implemented in the particular RSS pool, has increased above the threshold value, it is determined whether the particular RSS pool includes an unused hardware queue (a queue with light CPU usage). If such an unused hardware queue is present, then an indirection table that is associated with the particular RSS pool is modified to remap one or more data flows from the particular hardware queue to the unused hardware queue.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: August 4, 2020
    Assignee: NICIRA, INC.
    Inventors: Aditya G. Holla, Rajeev Nair, Shilpi Agarwal, Subbarao Narahari, Zongyun Lai, Wenyi Jiang, Srikar Tati
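The remapping step could be sketched as below. The thresholds, the choice to move half of the overloaded queue's flows, and the dict-based indirection table are all illustrative assumptions:

```python
def rebalance(indirection_table, cpu_usage, threshold=0.8, light=0.1):
    """Remap some flows from overloaded hardware queues to unused ones.

    indirection_table: flow id -> hardware queue id (mutated in place).
    cpu_usage: hardware queue id -> CPU load in [0.0, 1.0].
    """
    hot = [q for q, u in cpu_usage.items() if u > threshold]
    idle = [q for q, u in cpu_usage.items() if u < light]
    for hot_q in hot:
        if not idle:
            break                                  # no unused queue left in the pool
        target = idle.pop()
        flows = [f for f, q in indirection_table.items() if q == hot_q]
        for f in flows[::2]:                       # move half the flows to spread load
            indirection_table[f] = target
    return indirection_table
```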
  • Patent number: 10725819
    Abstract: A system and method are disclosed for scheduling and allocating data storage. An example method comprises generating a scheduling problem based at least on states of each of the plurality of storage nodes, a received plurality of storage tasks and received constraints, wherein the scheduling problem is a constraint satisfaction problem, selecting one or more approaches to solving the scheduling problem based on metadata associated with the storage tasks and constraints, solving the scheduling problem to generate a scheduling solution based on the one or more approaches, determining whether the given constraints are satisfied by the scheduling solution, executing, by the processor, the scheduling solution by assigning storage of data to each of the plurality of storage nodes when the constraints are satisfied by the scheduling solution and determining another scheduling solution based on the one or more approaches when the constraints are not satisfied by the scheduling solution.
    Type: Grant
    Filed: May 18, 2018
    Date of Patent: July 28, 2020
    Assignee: Acronis International GmbH
    Inventors: Sergey Bykov, Eugene Aseev, Sanjeev Solanki, Serguei Beloussov, Stanislav Protasov
  • Patent number: 10726072
    Abstract: In an example, data in a non-flat format and metadata corresponding to the data are obtained from a first database. The data is flattened into flat data and augmented with the metadata. One or more pieces of the flat data are scanned to locate a first piece of flat data having a first attribute with attribute values that are a subset of attribute values of a second attribute of a second piece of flat data. A link is then created between the first attribute of the first piece of flat data and the second attribute of the second piece of flat data. A graph structure is generated, the graph structure containing a plurality of nodes, each node corresponding to a data type of the flat data and corresponding to one or more pieces of data in the flat data of the corresponding data type.
    Type: Grant
    Filed: January 25, 2018
    Date of Patent: July 28, 2020
    Assignee: SAP SE
    Inventors: Haichao Wei, Priyanka Khaitan
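The attribute-linking step described above (link attribute A to attribute B when A's values are a subset of B's values) can be sketched as a pairwise scan. For simplicity this sketch uses a strict subset so each link has an unambiguous direction; the table-of-dicts input format is an assumption:

```python
def find_links(tables):
    """Create a link from attribute A to attribute B whenever A's values are a
    proper subset of B's values (a candidate foreign-key-style relationship).

    tables: table name -> {attribute name: list of attribute values}.
    """
    cols = [(t, a, set(vals)) for t, attrs in tables.items()
            for a, vals in attrs.items()]
    links = []
    for t1, a1, v1 in cols:
        for t2, a2, v2 in cols:
            if (t1, a1) != (t2, a2) and v1 and v1 < v2:   # strict subset check
                links.append(((t1, a1), (t2, a2)))
    return links
```

In the patent's pipeline, these links would then become edges in the generated graph structure over flattened data.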
  • Patent number: 10725834
    Abstract: Aspects of the present invention disclose a method, computer program product, and system for scheduling an application. The method includes one or more processors receiving a task, where the task includes instructions indicating desired nodes to perform the task through programs. The method further includes one or more processors identifying application characteristic information and node characteristic information associated with nodes within a data center composed of nodes. The application characteristic information includes resource utilization information for applications on nodes within the data center. The method further includes one or more processors determining that the nodes reach a threshold level of power consumption. The threshold level is a pre-set maximum amount of power utilized by a node within the data center. The method further includes one or more processors determining a node consuming an amount of power that is below a threshold level of power consumption in the data center.
    Type: Grant
    Filed: November 30, 2017
    Date of Patent: July 28, 2020
    Assignee: International Business Machines Corporation
    Inventors: Eun Kyung Lee, Bilge Acun, Yoonho Park
  • Patent number: 10719429
    Abstract: A system and method for dynamic load testing on a target application are provided. The method includes receiving a request for varying load on a target application in a running load-testing environment. The running load-testing environment has a plurality of threads being executed for load-testing. The plurality of threads has a coordinator thread and one or more waiting threads. The one or more waiting threads are locked from accessing the target application, and the coordinator thread is capable of unlocking them. The coordinator thread is executed based on the request to unlock the one or more waiting threads. The unlocked threads access the target application to test the load.
    Type: Grant
    Filed: February 2, 2017
    Date of Patent: July 21, 2020
    Assignee: Tata Consultancy Services Limited
    Inventor: Lutfur Rahaman
  • Patent number: 10719466
    Abstract: A polling device driver is partitioned into a plurality of driver threads for controlling a device of a computer system. The device has a first device state of an unscouted state and a scouted state, and a second device state of an inactive state and an active state. A driver thread of the plurality of driver threads determines that the first device state of the device is in the unscouted state, and changes the first device state of the device to the scouted state. The driver thread further determines that the second device state of the device is in the inactive state and changes the second device state of the device to the active state. The driver thread executes an operation on the device during a pre-determined time slot configured for the driver thread.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: July 21, 2020
    Assignee: Rambus Inc.
    Inventors: Bart Trojanowski, Michael L. Takefman, Maher Amer
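The two-flag state machine in this abstract (unscouted/scouted, then inactive/active, then execute) can be sketched as one polling pass. The dataclass and function names are my own, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class Device:
    scouted: bool = False    # first device state: unscouted (False) / scouted (True)
    active: bool = False     # second device state: inactive (False) / active (True)

def driver_thread_step(device, operation):
    """One polling pass by a driver thread: claim, activate, then run the operation."""
    if not device.scouted:
        device.scouted = True          # unscouted -> scouted: this thread claims the device
    if device.scouted and not device.active:
        device.active = True           # inactive -> active: bring the device up
    if device.active:
        return operation(device)       # executed during this thread's time slot
    return None
```

In the real driver the scouted flag would need an atomic test-and-set so that only one of the partitioned threads claims the device.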
  • Patent number: 10719760
    Abstract: An apparatus to facilitate workload scheduling is disclosed. The apparatus includes one or more clients, one or more processing units to process workloads received from the one or more clients, including hardware resources and scheduling logic to schedule direct access of the hardware resources to the one or more clients to process the workloads.
    Type: Grant
    Filed: April 9, 2017
    Date of Patent: July 21, 2020
    Assignee: Intel Corporation
    Inventors: Liwei Ma, Nadathur Rajagopalan Satish, Jeremy Bottleson, Farshad Akhbari, Eriko Nurvitadhi, Chandrasekaran Sakthivel, Barath Lakshmanan, Jingyi Jin, Justin E. Gottschlich, Michael Strickland
  • Patent number: 10721329
    Abstract: A data scheduling method and customer premises equipment are provided. The customer premises equipment stores, in a first queue, received service data that is sent from a first network to a second network. Test data is generated that is to be sent from the first network to the second network. The test data is stored in a second queue. The customer premises equipment preferentially schedules the service data in the first queue to be in a third queue and, in response to determining that a rate at which the service data is received is lower than a preset limit rate, schedules the test data in the second queue to be in the third queue to reach the limit rate. The service data and/or the test data in the third queue is sent to the second network at the limit rate.
    Type: Grant
    Filed: December 19, 2017
    Date of Patent: July 21, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yanfei Ye, Heping Peng, Ting Xu
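One scheduling interval of the three-queue scheme above could be sketched as follows: service data is drained first, and test data only fills whatever capacity remains under the limit rate. Modeling the rate limit as "items per interval" is a simplifying assumption:

```python
from collections import deque

def schedule_interval(service_q, test_q, out_q, limit):
    """Move up to `limit` items into the output queue for one interval, taking
    service data first and topping up with test data to reach the limit rate."""
    moved = 0
    while service_q and moved < limit:
        out_q.append(service_q.popleft())      # service data is always preferred
        moved += 1
    while test_q and moved < limit:
        out_q.append(test_q.popleft())         # test data fills the remaining capacity
        moved += 1
    return moved
```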
  • Patent number: 10713300
    Abstract: Techniques are described for generating and distributing state machines that are implemented within a security zone to obtain private information from one or more resources within the security zone. In various implementations, an automated assistant client implemented by processor(s) within the security zone may receive a free-form natural language query ("FFNLQ") that is answerable using private information available from resource(s) within the security zone. Data indicative of the FFNLQ may be provided to an online semantic processor outside of the security zone, and the semantic processor may return a state machine that is implemented by processor(s) within the security zone to obtain the private information from resource(s) within the security zone. Based on the state machine and the obtained private information, natural language output may be generated and presented to convey information responsive to the FFNLQ.
    Type: Grant
    Filed: November 15, 2017
    Date of Patent: July 14, 2020
    Assignee: GOOGLE LLC
    Inventors: Adomas Paltanavicius, Andrea Ambu
  • Patent number: 10712731
    Abstract: The disclosure provides a control device including a memory storing address information and a server section communicating with the external machine serving as a destination of publishing the address information. The server section includes a determining section, a monitoring section, and an address managing section. The determining section determines priority levels of publishing variables included in a control program to the external machine in accordance with a predetermined rule. The monitoring section monitors free capacity of the memory. With respect to public variables for the external machine among the variables included in the control program, the address managing section adds logical addresses of the respective public variables to the address information in order of the publishing priority levels as long as the free capacity does not fall below a predetermined threshold.
    Type: Grant
    Filed: September 11, 2018
    Date of Patent: July 14, 2020
    Assignee: OMRON Corporation
    Inventors: Yuta Nagata, Kotaro Okamura, Jintaro Deki
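The address-publishing loop above (add variables in priority order until free memory would fall below a threshold) can be sketched as follows. The tuple format and the fixed per-entry cost are illustrative assumptions:

```python
def publish(variables, free_capacity, threshold, entry_size=4):
    """Add logical addresses to the address table in priority order, stopping
    before free memory capacity would fall below the threshold.

    variables: list of (priority, name, logical_address); lower number = higher priority.
    """
    table = {}
    for _prio, name, addr in sorted(variables):
        if free_capacity - entry_size < threshold:
            break                      # publishing more would breach the memory threshold
        table[name] = addr
        free_capacity -= entry_size
    return table
```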
  • Patent number: 10705830
    Abstract: In a computer-implemented method for managing hosts of a pre-configured hyper-converged computing device, a pre-configured hyper-converged computing device comprising a plurality of hosts is managed, where the plurality of hosts is allocable to workload domains, where unallocated hosts of the plurality of hosts are maintained within a pool of unallocated hosts, and where the plurality of hosts each have an operating system version. An unallocated host of the pool of unallocated hosts is determined as having an operating system version that is outside of a range of supported operating system versions. The operating system version of the unallocated host is updated to an operating system version within the range of supported operating system versions.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: July 7, 2020
    Assignee: VMware, Inc.
    Inventors: Arun Mahajan, Chitrank Seshadri, Atanu Panda, Sudipto Mukhopadhyay, Mao Ye, Benjamin Davini
  • Patent number: 10708323
    Abstract: Systems are described for managing content in a cloud-based service platform. A server in a cloud-based environment is interfaced with storage devices that hold one or more stored objects accessible by two or more users. The stored objects comprise folders and files as well as other objects such as workflow objects that are associated with the folders or the files. The workflow objects comprise workflow metadata that describes a workflow as a set of workflow tasks to be carried out in a progression. Processing of a workflow task and/or carrying out a portion of the progression includes modification of shared content objects. The processing or modification events are detected through workflow events, which in turn cause one or more workflow responses to be generated. Workflow responses comprise updates to the workflow metadata to record progression through the workflow and/or workflow responses comprise updates to any one or more of the stored objects.
    Type: Grant
    Filed: July 30, 2018
    Date of Patent: July 7, 2020
    Assignee: Box, Inc.
    Inventors: Anne Elizabeth Hiatt Pearl, Jenica Nash Blechschmidt, Natalia Vinnik, Robert Kyle Waldrop, Sam Michael Devlin, Steven Luis Cipolla, Sesh Jalagam
  • Patent number: 10705831
    Abstract: In a computer-implemented method for maintaining unallocated hosts of a pre-configured hyper-converged computing device at a baseline operating system version, a plurality of hosts of a pre-configured hyper-converged computing device is managed, where the plurality of hosts are allocable to workload domains, where the plurality of hosts each have an operating system version within a range of supported operating system versions, where unallocated hosts of the plurality of hosts are maintained within a pool of unallocated hosts, and where the unallocated hosts of the pool of unallocated hosts have a baseline operating system version of the range of supported operating system versions. A new unallocated host is received at the pre-configured hyper-converged computing device for inclusion to the pool of unallocated hosts. An operating system version of the new unallocated host is determined.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: July 7, 2020
    Assignee: VMware, Inc.
    Inventors: Arun Mahajan, Chitrank Seshadri, Atanu Panda, Sudipto Mukhopadhyay, Mao Ye, Benjamin Davini
  • Patent number: 10705980
    Abstract: A method for sending communication data includes: ascertaining whether a configuration of a communication channel between a data-sending application and at least one data-receiving application can activate a write lock that precludes at least one further data-sending application from writing data to a first memory area; activating the write lock, if the configuration of the communication channel provides for the activation of the write lock; writing the communication data and sender state data indicating the communication data to the first data memory area; and deactivating the write lock if the configuration of the communication channel provides for the activation of the write lock. The data-sending and data-receiving applications each have read access to the first data memory area, and the activation of the write lock does not substantially adversely affect the read access by each of the data-sending and data-receiving applications to the first data memory area.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: July 7, 2020
    Assignee: ELEKTROBIT AUTOMOTIVE GMBH
    Inventors: Moritz Neukirchner, Michael Stilkerich, Niko Böhm, Simon Dürr
  • Patent number: 10699366
    Abstract: Techniques are disclosed relating to sharing an arithmetic logic unit (ALU) between multiple threads. In some embodiments, the threads also have dedicated ALUs for other types of operations. In some embodiments, arbitration circuitry is configured to receive operations to be performed by the shared arithmetic logic unit from the set of threads and issue the received operations to the shared arithmetic logic unit. In some embodiments, the arbitration circuitry is configured to switch to a different one of the set of threads for each instruction issued to the shared arithmetic logic unit. In some embodiments, the shared ALU is configured to perform 32-bit operations and the dedicated ALUs are configured to perform the same operations using 16-bit precision. In some embodiments, the shared ALU is shared between two threads and is physically located adjacent to other datapath circuitry for the two threads.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: June 30, 2020
    Assignee: Apple Inc.
    Inventor: Robert D. Kenney
  • Patent number: 10698821
    Abstract: A dataflow execution environment is provided with dynamic placement of cache operations and action execution ordering.
    Type: Grant
    Filed: October 30, 2018
    Date of Patent: June 30, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Vinícius Michel Gottin, Fábio André Machado Porto, Yania Molina Souto
  • Patent number: 10698730
    Abstract: A processing unit for neural network processing includes: an instruction memory that stores tasks including one or more instructions; a data memory that stores data related to the tasks; a data flow processor that determines whether the data has been prepared for the tasks and notifies a control flow processor that preparations for the tasks have been finished in order of finished data preparation; the control flow processor that controls execution of the tasks in order of notification from the data flow processor; and a functional processor that performs computations resulting from the one or more instructions of the tasks controlled for execution by the control flow processor.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: June 30, 2020
    Assignee: FuriosaAI Co.
    Inventors: Hanjoon Kim, Boncheol Gu, Jeehoon Kang, Changman Lee
  • Patent number: 10691495
    Abstract: The disclosure provides techniques for scheduling a jitterless workload on a virtual machine (VM) executing on a host comprising one or more pCPUs divided into a first subset and a second subset. The techniques further include creating a jitterless zone, wherein the jitterless zone includes the first subset of the one or more pCPUs. The techniques further include determining whether a vCPU of the VM is used to execute a jitterless workload or a non-jitterless workload. The techniques further include allocating by a CPU scheduler to the vCPU at least one of the pCPUs in the jitterless zone when the vCPU of the VM is used to execute a jitterless workload. The techniques further include scheduling the jitterless workload for execution by the vCPU on the allocated at least one of the pCPUs in the jitterless zone.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: June 23, 2020
    Assignee: VMware, Inc.
    Inventors: Xunjia Lu, Haoqiang Zheng, Bi Wu
  • Patent number: 10691683
    Abstract: A system and methods relate to, inter alia, aggregating electronic information generated at a first computing environment. The system and methods further relate to receiving a message for at least a portion of the electronic information from a second computing environment. The system and methods further relate to determining whether the aggregated electronic information is available. The system and methods further relate to transforming the electronic information from a first type to a second type in response to determining that the aggregated electronic information is available, the first type comprising an electronic information type of the electronic information generated at the first computing environment and the second type comprising another electronic information type consumable by the second computing environment. The system and methods further relate to transmitting at least a portion of the transformed electronic information to the second computing environment.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: June 23, 2020
    Assignee: BLUEOWL, LLC
    Inventors: Kenneth J. Sanchez, Blake Konrardy, Micah Wind Russo, Eric Dahl
  • Patent number: 10681187
    Abstract: A system and method of intelligently scheduling actions from multiple network software stacks is disclosed. The scheduler uses information, such as requested start time, slip time, action duration and priority to schedule actions among a plurality of network stacks. In some embodiments, the scheduler attempts to maximize the radio usage by prioritizing the actions based not only on their given priority, but also based on their duration, and the ability for other actions to tolerate a delay in being performed.
    Type: Grant
    Filed: December 14, 2017
    Date of Patent: June 9, 2020
    Assignee: Silicon Laboratories, Inc.
    Inventor: Bryan Murawski
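A greedy, single-radio sketch of the scheduling idea above: place actions in priority order, let each action slip past its requested start only as far as its slip tolerance allows, and drop it otherwise. The dict keys and the priority convention (lower number = more important) are assumptions for illustration:

```python
def schedule_actions(actions):
    """Greedy single-radio schedule; returns (name, start) pairs for accepted actions.

    actions: list of dicts with keys name, start, slip, duration, priority.
    """
    placed = []      # (start, end) intervals of accepted actions
    result = []
    for a in sorted(actions, key=lambda a: (a["priority"], a["start"])):
        t = a["start"]
        # Bump the action past any conflicting interval already on the radio.
        while any(s < t + a["duration"] and t < e for s, e in placed):
            t = min(e for s, e in placed if s < t + a["duration"] and t < e)
        if t <= a["start"] + a["slip"]:
            placed.append((t, t + a["duration"]))
            result.append((a["name"], t))
        # Otherwise the action cannot tolerate the delay and is not scheduled.
    return result
```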
  • Patent number: 10678677
    Abstract: Continuous debugging is disclosed. For example, a source code repository stores a first project. A processor is configured to execute a job scheduler, which includes a debugger. The job scheduler receives a request to execute a job, which includes executing a first executable code. A guest is instantiated. The first project is copied as a second project. The second project is compiled into the first executable code. The guest is instructed to execute the job including the first executable code. The debugger intercepts an error caused by executing the first executable code. The debugger updates the second project by inserting a breakpoint into the second project based on the error. The updated second project is compiled into a second executable code. The guest is instructed to re-execute the job including executing the second executable code. Execution of the second executable code is paused at the breakpoint.
    Type: Grant
    Filed: January 10, 2019
    Date of Patent: June 9, 2020
    Assignee: Red Hat Israel, Ltd.
    Inventors: Eran Kuris, Alexander Stafeyev, Arie Bregman
  • Patent number: 10678665
    Abstract: A computer system is provided that includes a cloud platform that includes a plurality of nodes. Each node includes a processor configured to run virtual machines. The cloud platform includes a fault condition injection engine configured to generate fault conditions on selected nodes of the plurality of nodes. The computer system further includes a user interface system configured to receive user input of fault condition experimentation parameters from a user for a target virtual machine associated with the user. The cloud platform allocates a set of nodes of the plurality of nodes for a controlled sandbox environment configured to run the target virtual machine of the user. The fault condition injection engine generates fault conditions on the allocated set of nodes based on the fault condition experimentation parameters.
    Type: Grant
    Filed: May 21, 2018
    Date of Patent: June 9, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Meir Shmouely, Charles Joseph Torre, Cheng Ding, Sekhar Poornananda Chintalapati, Ritchie Nicholas Hughes
  • Patent number: 10680904
    Abstract: An information processing device includes: a memory configured to store a management program; and a processor coupled to the memory, wherein the processor, based on the management program, performs operations of: collecting operation status information stored in a storage of each of one or more information processing devices and related to an operation status of a calculation resource of each of the one or more information processing devices; determining whether the operation status information has periodicity based on the operation status information and a threshold; producing, when it is determined that the operation status information has periodicity, past activation information related to the operation status in a past duration in which the periodicity is present; and generating a prediction value of the operation status information based on the operation status information and the past activation information.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: June 9, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Shigeto Suzuki, Hiroshi Endo, Hiroyuki Fukuda
  • Patent number: 10664278
    Abstract: In a distributed computing system comprising multiple processor types, a method of provisioning includes receiving a request from a client device for execution of a function. A first data structure identifies implementations of the function and compatible processor types for each implementation. A second data structure identifies available processors in the system. Compatible processor types matching available processors are candidates for execution of the function. A provisioning instruction is created for allocating resources for execution of the function.
    Type: Grant
    Filed: August 17, 2017
    Date of Patent: May 26, 2020
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yuanxi Chen, Jack Hon Wai Ng, Craig Davies, Reza Azimi
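    The two-table matching step described above is straightforward to sketch. The structures and names below are hypothetical stand-ins for the patent's first data structure (function implementations and compatible processor types) and second data structure (available processors):

    ```python
    # First data structure: function -> [(implementation, compatible processor type)]
    IMPLEMENTATIONS = {
        "fft": [("fft_x86", "cpu"), ("fft_cuda", "gpu"), ("fft_hls", "fpga")],
    }

    def candidates(function, available):
        """Implementations whose compatible processor type matches an available processor."""
        avail_types = {p["type"] for p in available if p["free"]}
        return [(impl, ptype) for impl, ptype in IMPLEMENTATIONS.get(function, [])
                if ptype in avail_types]

    def provision(function, available):
        """Create a provisioning instruction for the first viable candidate."""
        for impl, ptype in candidates(function, available):
            return {"function": function, "implementation": impl, "processor": ptype}
        return None  # no compatible processor is currently free
    ```
    
    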
  • Patent number: 10666960
    Abstract: The present invention provides a method for decoding a video signal using a graph-based transform, the method including: receiving a generalized graph signal including a graph parameter set; obtaining a graph-based transform kernel of a transform unit based on the graph parameter set and a predetermined penalty function; and decoding the transform unit using the graph-based transform kernel.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: May 26, 2020
    Assignee: LG Electronics Inc.
    Inventors: Amir Said, Yung-Hsuan Chao, Hilmi Enes Egilmez
  • Patent number: 10664044
    Abstract: An HMD includes an image display section configured to display an image to be visually recognizable through an outside scene. The HMD includes a position detecting section configured to recognize an input and a control section configured to cause the image display section to display information and change the display in the image display section according to the input recognized by the position detecting section.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: May 26, 2020
    Assignee: SEIKO EPSON CORPORATION
    Inventors: Kazuo Nishizawa, Masahide Takano, Teruhito Kojima
  • Patent number: 10659304
    Abstract: A method of allocating a plurality of processes on a plurality of node devices coupled through a network, includes: dividing the plurality of processes into one or more process groups including at least one process among the plurality of processes, based on a bandwidth desired for data communication between processes in the plurality of processes; specifying, for each of the one or more process groups, a node device which is able to perform the entirety of the processes included in the process group among the plurality of node devices; and allocating the process group on the specified node device, for each of the one or more process groups.
    Type: Grant
    Filed: May 19, 2016
    Date of Patent: May 19, 2020
    Assignee: FUJITSU LIMITED
    Inventors: Ryoichi Funabashi, Ryuta Tanaka
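    One plausible reading of the grouping step is a union-find over the "heavy" communication edges, so that processes linked by high-bandwidth traffic land in the same group and are placed on a single node. A minimal sketch, with a hypothetical bandwidth threshold:

    ```python
    def group_processes(processes, bandwidth, threshold):
        """bandwidth maps (p, q) pairs to the bandwidth desired between them."""
        parent = {p: p for p in processes}

        def find(p):
            while parent[p] != p:
                parent[p] = parent[parent[p]]  # path compression
                p = parent[p]
            return p

        for (p, q), bw in bandwidth.items():
            if bw >= threshold:
                parent[find(p)] = find(q)  # union: keep p and q together

        groups = {}
        for p in processes:
            groups.setdefault(find(p), set()).add(p)
        return list(groups.values())
    ```
    
    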
  • Patent number: 10656970
    Abstract: An apparatus and method are provided for scheduling graph computing on heterogeneous platforms based on energy efficiency. A scheduling engine receives an edge set that represents a portion of a graph comprising vertices with at least one edge connecting two or more of the vertices. The scheduling engine obtains an operating characteristic for each processing resource of a plurality of heterogeneous processing resources. The scheduling engine computes, based on the operating characteristics and an energy parameter, a set of processing speed values for the edge set, each speed value corresponding to a combination of the edge set and a different processing resource of the plurality of heterogeneous processing resources. The scheduling engine identifies an optimal processing speed value from the set of computed speed values for the edge set.
    Type: Grant
    Filed: September 28, 2016
    Date of Patent: May 19, 2020
    Assignee: Futurewei Technologies, Inc.
    Inventors: Yinglong Xia, Hui Zang
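    The selection step can be illustrated with a toy scoring function. Here the "speed value" for an edge set on each heterogeneous resource trades raw throughput against energy cost via an energy parameter; all figures and the scoring formula are illustrative assumptions, not the patent's actual model:

    ```python
    def speed_values(edge_set_size, resources, energy_weight):
        """Score each resource: throughput penalized by energy cost; higher is better.

        resources maps a name to (edges processed per second, power draw in watts).
        """
        scores = {}
        for name, (edges_per_sec, watts) in resources.items():
            time_s = edge_set_size / edges_per_sec
            energy_j = watts * time_s
            scores[name] = edges_per_sec - energy_weight * energy_j
        return scores

    def best_resource(edge_set_size, resources, energy_weight):
        """Pick the resource with the optimal (highest) speed value."""
        scores = speed_values(edge_set_size, resources, energy_weight)
        return max(scores, key=scores.get)
    ```

    With the energy weight at zero the fastest resource always wins; raising it shifts the choice toward low-power resources.
    
    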
  • Patent number: 10649670
    Abstract: Embodiments of the present disclosure relate to data block processing in a distributed processing system. According to one embodiment of the present disclosure, a computer-implemented method is proposed. A first performance indicator for processing a data block by a first processing module is obtained, where the data block is loaded into the first processing module. Then, a second performance indicator for processing the data block by a second processing module is obtained, where the first and second processing modules are logical instances launched in a distributed processing system for processing data blocks. Next, one processing module is selected from the first and second processing modules for processing the data block based on a relationship between the first and second performance indicators.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: May 12, 2020
    Assignee: International Business Machines Corporation
    Inventors: Liang Liu, Junmei Qu, Hong Zhou Sha, Wei Zhuang
  • Patent number: 10649824
    Abstract: An enterprise system for an event management framework is described where an event subscription processor detects and/or creates computer-executable events, which are then published on the user interfaces of multiple computing devices configured to subscribe to, process, and execute the computer-executable events. The event subscription processor may enable processing and execution of one or more computer-executable events in a mode in which computer-executable event execution and management are centralized and performed in a consistent manner within an organization. The event subscription processor allows the computer-executable event execution tasks/processes to be easily created, modified, and managed in a single enterprise system.
    Type: Grant
    Filed: September 24, 2018
    Date of Patent: May 12, 2020
    Assignee: Massachusetts Mutual Life Insurance Company
    Inventor: Meng Wee Tan
  • Patent number: 10649640
    Abstract: The perceivability of user interface elements of a graphical user interface can be defined as a selection along a range. At one end of the range, a combination of settings for the graphical user interface allows for a highly-detailed user interface; at another end of the range, a combination of settings provides a graphical user interface having the highest perceivability. The high perceivability may include high contrast, but also may provide other user interface settings to address accessibility issues for an end user. The combination of settings can include attributes affecting the background, transparency, borders and text legibility. The selected combination of settings either sets, overrides or limits values for these attributes of user interface elements during rendering.
    Type: Grant
    Filed: May 2, 2017
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Robert Bryce Johnson, Rachel Shelva Nizhnikov, Brett Humphrey, Ryan Demopoulos, Kelly Marie Renner
  • Patent number: 10645150
    Abstract: Hierarchical dynamic scheduling is disclosed. A plurality of physical nodes is included in a computer system. Each node includes a plurality of processors. Each processor includes a plurality of hyperthreads. An abstraction of the nodes, processors, and hyperthreads forms a hierarchy. Upon receiving an indication that a hyperthread should be assigned, a dynamic search of the hierarchy is performed, beginning at the leaf level, for a process to assign to the hyperthread.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: May 5, 2020
    Assignee: TidalScale, Inc.
    Inventor: Isaac R. Nassi
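    The leaf-first search over the node/processor/hyperthread hierarchy can be sketched as a walk from a leaf toward the root, taking the first queued process found. The structure below is a hypothetical simplification (one run queue per level of the abstraction):

    ```python
    class Level:
        """One level of the hierarchy: a hyperthread, processor, or node."""
        def __init__(self, parent=None):
            self.parent = parent
            self.queue = []  # processes queued at this level

    def find_process(hyperthread):
        """Dynamic search beginning at the leaf: walk toward the root and
        return the first queued process encountered, or None."""
        level = hyperthread
        while level is not None:
            if level.queue:
                return level.queue.pop(0)
            level = level.parent
        return None  # nothing runnable on the path to the root
    ```
    
    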
  • Patent number: 10642794
    Abstract: A data center comprising plural computer hosts and a storage system external to said hosts is disclosed. The storage system includes storage blocks for storing tangibly encoded data blocks. Each of said hosts includes a deduplicating file system for identifying and merging identical data blocks stored in respective storage blocks into one of said storage blocks so that a first file exclusively accessed by a first host of said hosts and a second file accessed exclusively by a second host of said hosts concurrently refer to the same one of said storage blocks.
    Type: Grant
    Filed: January 21, 2009
    Date of Patent: May 5, 2020
    Assignee: VMware, Inc.
    Inventors: Austin Clements, Irfan Ahmad, Jinyuan Li, Murali Vilayannur
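    The core merging idea (identical data blocks from different files collapse into one shared storage block) is the classic content-addressed deduplication pattern. A minimal sketch, using a content hash to detect identical blocks; the class and its fields are illustrative, not the patented design:

    ```python
    import hashlib

    class BlockStore:
        def __init__(self):
            self.blocks = {}     # content hash -> stored block
            self.refcount = {}   # content hash -> number of file references

        def write(self, data: bytes) -> str:
            """Store a block, merging it with an identical existing block."""
            key = hashlib.sha256(data).hexdigest()
            if key not in self.blocks:
                self.blocks[key] = data      # first copy is actually stored
            self.refcount[key] = self.refcount.get(key, 0) + 1
            return key  # files hold this reference instead of a private copy
    ```

    Two files, even on different hosts, that write the same 4 KB block end up referring to a single stored block.
    
    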
  • Patent number: 10635825
    Abstract: Data privacy information pertaining to particular data hosted by a first workload provisioned to a first location can be received. The first workload can be monitored to determine whether the first workload is accessed by a second workload, determine whether the second workload is indicated as being authorized, in the data privacy information, to access the particular data hosted by the first workload, and determine whether the second workload has access to the particular data hosted by the first workload. If so, the first workload can be automatically provisioned to a second location to which provisioning of the first workload is allowed based on the data privacy information.
    Type: Grant
    Filed: July 11, 2018
    Date of Patent: April 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sergio Varga, Jørgen E. Borup, Thiago Cesar Rotta, Marco Aurelio Stelmar Netto, Kris Blöndal
  • Patent number: 10637735
    Abstract: Apparatus for pattern-based migration of a source workload to a target workload at a target deployment which includes a discovery engine, a decision system, a deployment manager, a pattern deployment engine and a residual migration and remediation system. The discovery engine takes the source deployment as an input and discovers metadata associated with the deployed components of the source workload and the IT topology. The deployment manager, in cooperation with the pattern deployment engine at the target, determines a closest starting-point template to be used for pattern-based target workload deployment. The decision system receives the metadata from the discovery engine and, in cooperation with the deployment manager, makes a go or no-go decision whether to trigger pattern-based target workload deployment. The residual migration and remediation system finds any undiscovered source workload components and deploys the undiscovered workload components to the target deployment by an image-based migration.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: April 28, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Giuseppe Ciano, Kapuveera R. Reddy, Hsiao-Choong Thio, Andre Tost, Sreekrishnan Venkiteswaran
  • Patent number: 10635669
    Abstract: Data engine integration and data refinement are described. The actions include receiving, by a data refinement engine, a request for data. The actions include determining a first amount of processing to be performed by the data refinement engine and a second amount of processing to be performed by one or more processors of a data source that include a plurality of data nodes. The actions include transmitting, by the data refinement engine, to the plurality of data nodes, code for instructions associated with the second amount of processing. The actions include receiving, by the data refinement engine and from the plurality of data nodes, unprocessed first data and processed second data. The actions include processing, by the data refinement engine, the unprocessed first data. The actions include, in response to the request for data, transmitting, by the data refinement engine, the processed first data and the processed second data.
    Type: Grant
    Filed: January 27, 2015
    Date of Patent: April 28, 2020
    Assignee: MicroStrategy Incorporated
    Inventor: Scott Cappiello
  • Patent number: 10635645
    Abstract: The disclosed computer-implemented method for maintaining aggregate tables in databases may include (1) maintaining a database that comprises a primary table of data, an intermediate mapping table of metadata from the data in the primary table, and an aggregate table, (2) for each new item of data received during a time period, updating the primary table with the new item of data and updating at least one row in the intermediate mapping table with metadata from the new item of data, and (3) at the end of the time period, updating the aggregate table with an aggregation of the metadata based on the metadata stored in the intermediate mapping table. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: July 31, 2014
    Date of Patent: April 28, 2020
    Assignee: Veritas Technologies LLC
    Inventor: Aeham Abushwashi
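    The three-table scheme above amounts to cheap per-item updates plus a periodic fold into the aggregate table. A toy sketch under assumed schemas (the metadata here is just a per-key count; the real patent is schema-agnostic):

    ```python
    class AggregatedStore:
        def __init__(self):
            self.primary = []        # primary table: raw items
            self.intermediate = {}   # intermediate mapping table: key -> period count
            self.aggregate = {}      # aggregate table: key -> running total

        def insert(self, key, item):
            """Per-item work stays cheap: append plus one row update."""
            self.primary.append((key, item))
            self.intermediate[key] = self.intermediate.get(key, 0) + 1

        def close_period(self):
            """At period end, fold the intermediate metadata into the aggregate table."""
            for key, count in self.intermediate.items():
                self.aggregate[key] = self.aggregate.get(key, 0) + count
            self.intermediate.clear()
    ```
    
    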
  • Patent number: 10635498
    Abstract: A management resource and method monitor management operations associated with a group of managed devices, assign significance factors to the management operations, and record instance information, identifying the managed device and the management operation, for each management operation performed. For each managed device, an interest factor is determined for each management operation. Weighting factors are determined for each managed device based on the interest factors and the significance factors. A management console display may be generated wherein the display indicates, whether in graphical or textual form, high priority managed devices, where the high priority devices are selected from the entire population of managed devices, based on the weighting factors. The IT administrator may adjust a weighting factor threshold to adjust the magnitude or extent of the filtering.
    Type: Grant
    Filed: May 5, 2017
    Date of Patent: April 28, 2020
    Assignee: Dell Products L.P.
    Inventors: Ankit Bansal, Vaideeswaran Ganesan, Krishna Kumar Gupta, Ajit Kumar Padhi
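    The weighting-and-threshold filtering can be sketched as follows. How interest and significance factors combine is not specified in the abstract, so the dot product below is an illustrative assumption:

    ```python
    def weighting_factor(interest, significance):
        """Combine per-operation interest factors with the operations'
        significance factors (simple dot product, an assumed combination)."""
        return sum(interest[op] * significance[op] for op in interest)

    def high_priority(devices, significance, threshold):
        """Devices whose weighting factor exceeds the administrator's threshold.

        devices maps a device name to its per-operation interest factors.
        """
        return [d for d, interest in devices.items()
                if weighting_factor(interest, significance) > threshold]
    ```

    Raising the threshold shrinks the console view to fewer, higher-priority devices, matching the adjustable filtering the abstract describes.
    
    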
  • Patent number: 10628424
    Abstract: An event processing system for processing events in an event stream is disclosed. The system receives information identifying an application and generates a common application runtime model of the application based on the information identifying the application. The system converts the common application runtime model of the application into a first generic representation of the application. The first generic representation of the application is executed in a first target event processing system of a plurality of target event processing systems. The first generic representation of the application comprises a runtime Directed Acyclic Graph (DAG) of components of the application. The system then transmits the first generic representation of the application to the first target event processing system for execution by the first target event processing system.
    Type: Grant
    Filed: September 11, 2017
    Date of Patent: April 21, 2020
    Assignee: Oracle International Corporation
    Inventors: Hoyong Park, Gyorgy Geiszter
  • Patent number: 10621084
    Abstract: Embodiments for efficient garbage collection in a data storage environment. In a storage system comprising multiple storage devices having respective sets of storage regions, at least one respective storage fragmentation threshold used to trigger a garbage collection operation is identified. The garbage collection operation is performed to reclaim data space in the storage system according to each of a block perspective and an area perspective. The block perspective performs the garbage collection operation on individual blocks of data and the area perspective performs the garbage collection operation on a plurality of the blocks in a respective storage region. The block perspective and the area perspective portions of the garbage collection operation are executed independently of one another.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: April 14, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Afief Halumi, Yosef Shatsky, Asaf Porat-Stoler, Reut Cohen, Sergey Marenkov
  • Patent number: 10620988
    Abstract: A distributed computing system may incorporate an implementation based on a codelet-based execution model, where a codelet is a high-level dataflow element. In addition to supporting the use of codelets, the system may further provide support for “datalets,” which are an extension of codelets providing better built-in support for static dataflow programming. Such a distributed computing system, implementing computing based on such codelets, may incorporate an implementation of an execution model, locality management schemes, scheduling schemes, a type system, and/or management of heterogeneous systems.
    Type: Grant
    Filed: December 16, 2011
    Date of Patent: April 14, 2020
    Assignee: ET International, Inc.
    Inventors: Christopher G. Lauderdale, Rishi L. Khan
  • Patent number: 10621530
    Abstract: Systems and methods deploy artifacts to a database in a self-organizing manner as a single transaction. An example method includes determining one or more root nodes in a dependency graph, the dependency graph including a node for each of the plurality of artifacts, each node having a respective dependency count, wherein the one or more root nodes have a respective dependency count of zero. The method also includes generating a work item for each of the root nodes and placing the work item in a work queue. In such a method, a plurality of workers can pop work items off the work queue in parallel and initiate deployment of the artifacts represented by the work items. Each worker of the plurality of workers can also reduce by one the dependency count of nodes in the dependency graph that are successor nodes of the root node deployed using the worker.
    Type: Grant
    Filed: July 28, 2016
    Date of Patent: April 14, 2020
    Assignee: SAP SE
    Inventors: Le-Huan Stefan Tran, Arne Harren, Jonathan Bregler, Alexander Bunte, Andreas Kellner, Daniel Kuntze, Vladislav Leonkev, Simon Lueders, Volker Sauermann, Michael Schnaubelt
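    The work-queue scheme in this abstract is essentially Kahn's topological sort: zero-dependency roots seed the queue, and each deployed artifact decrements its successors' counts, releasing new work items. A single-threaded sketch (the patent describes parallel workers; names here are illustrative):

    ```python
    from collections import deque

    def deploy_all(dependency_count, successors, deploy):
        """Deploy artifacts in dependency order.

        dependency_count: artifact -> number of unmet dependencies
        successors: artifact -> artifacts that depend on it
        deploy: callable invoked once per artifact
        """
        counts = dict(dependency_count)
        queue = deque(a for a, c in counts.items() if c == 0)  # root nodes
        deployed = []
        while queue:
            artifact = queue.popleft()
            deploy(artifact)
            deployed.append(artifact)
            for succ in successors.get(artifact, []):
                counts[succ] -= 1
                if counts[succ] == 0:   # all dependencies met: new work item
                    queue.append(succ)
        return deployed
    ```
    
    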