Process Scheduling Patents (Class 718/102)
  • Patent number: 10908961
    Abstract: In an embodiment, a method is provided that includes implementing a virtual remote direct memory access (RDMA) component in a virtualization layer on a computer system hosting a first and a second virtual machine (VM), the virtual RDMA component having an interface implementing RDMA semantics. An RDMA send request from a send queue associated with a first application running on the first VM is read using the virtual RDMA component, the RDMA send request referencing a send buffer in an application memory space for the first application. The virtual RDMA component then copies or transfers a message in the send buffer to a receive buffer in the second VM. A host computing system configured to implement the method, and instructions configured to be executed on a host computing system, are also provided.
    Type: Grant
    Filed: July 25, 2019
    Date of Patent: February 2, 2021
    Assignee: Intel Corporation
    Inventors: William R. Magro, Robert J. Woodruff, Jianxin Xiong
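The copy path this abstract describes can be sketched as a small queue-draining loop; everything below (the class name, the byte-buffer representation of send and receive buffers) is illustrative, not taken from the patent:

```python
from collections import deque

class VirtualRdmaComponent:
    """Sketch of the copy path in the abstract: the virtualization layer
    reads posted send requests and copies each referenced message into the
    receiving VM's buffer. Class and method names are illustrative."""
    def __init__(self):
        self.send_queue = deque()   # work requests posted by the sender

    def post_send(self, send_buffer):
        # The application on the first VM posts a request that references
        # its send buffer (here, the buffer contents themselves).
        self.send_queue.append(send_buffer)

    def process(self, receive_buffer):
        # Drain the send queue, copying each message into the second VM's
        # receive buffer (a memory copy in a real virtualization layer).
        while self.send_queue:
            receive_buffer.extend(self.send_queue.popleft())

rdma = VirtualRdmaComponent()
rdma.post_send(b"hello")
recv = bytearray()
rdma.process(recv)
print(bytes(recv))  # b'hello'
```

The point of the design is that the guest sees ordinary RDMA verbs (post to a send queue) while the virtualization layer performs a plain copy between VM memory spaces.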
  • Patent number: 10904111
    Abstract: A method, a computer program product, and a computer system for a lightweight framework with dynamic self-organizing coordination capacity for clustered applications are provided. The lightweight framework provides a means for managing tasks that require coordination between application nodes. A node receives a task and determines whether one of the other nodes is processing the task. The node runs as an active node to process the task, in response to determining that none of the other nodes is processing the task. The node runs as one of one or more passive nodes that monitor processing of the task, in response to determining that one of the other nodes is processing the task.
    Type: Grant
    Filed: October 2, 2014
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Anna Joffe, Howard A. Kelsey, Viktor Levine, Michael P. W. Thornton
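The active/passive election can be sketched with a shared registry standing in for whatever coordination state the framework actually uses; the names here are assumptions of this sketch, not the patent's mechanism:

```python
class ClusterNode:
    """Sketch of the active/passive decision in the abstract: a node claims
    a task and runs as active only if no other node has claimed it;
    otherwise it runs as a passive monitor."""
    def __init__(self, name, task_registry):
        self.name = name
        self.registry = task_registry   # shared coordination state (assumed)

    def receive_task(self, task_id):
        if task_id not in self.registry:
            self.registry[task_id] = self.name
            return "active"
        return "passive"

registry = {}
a = ClusterNode("node-a", registry)
b = ClusterNode("node-b", registry)
print(a.receive_task("t1"))  # active
print(b.receive_task("t1"))  # passive
```

In a real cluster the registry check-and-claim would need to be atomic (e.g. a compare-and-set in a shared store); a plain dict only illustrates the decision logic.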
  • Patent number: 10896064
    Abstract: A workload scheduling method, system, and computer program product include: analyzing a resource scheduling requirement for the processes of a workload, including the communication patterns among CPUs and accelerators; creating feasible resources based on static resource information for the processes of the workload; and selecting an available resource from the feasible resources to which to assign the workload, based on the resource scheduling requirement, such that the CPU and GPU connection topology of the selection matches the communication patterns of the workload.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: January 19, 2021
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Liana Liyow Fong, Seelam R. Seetharami, Wei Tan
  • Patent number: 10896440
    Abstract: Usage and performance data from a plurality of installed appliances is received via a network, a different corresponding subset of said appliances being associated with each of a plurality of customers. Said usage and performance data across customers is analyzed to identify capacity utilization related trends. A targeted offer is determined for a given customer, based at least in part on said analysis across customers and the given customer's own usage and performance data.
    Type: Grant
    Filed: December 23, 2015
    Date of Patent: January 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Donald Mace, Xiaoye Jiang, Gil Shneorson
  • Patent number: 10891211
    Abstract: Systems and methods for version control of pipelined enterprise software are disclosed. Exemplary implementations may: store information for executable code of software applications that are installed and executable by users; receive first user input from a first user that represents selection by the first user of a first software pipeline for execution; receive second user input from a second user that represents a second selection by the second user of a second software pipeline for execution, wherein the second software pipeline includes different versions of software applications that are included in the first software pipeline; facilitate execution of the first software pipeline for the first user; and facilitate execution of the second software pipeline for the second user at the same time as the execution of the first software pipeline for the first user.
    Type: Grant
    Filed: July 29, 2020
    Date of Patent: January 12, 2021
    Assignee: Instabase, Inc.
    Inventors: Shih Ping Chang, David Edgar Lluncor
  • Patent number: 10884992
    Abstract: A distributed file system is provided having multi-stream object-based data upload. The distributed file system comprises a plurality of client processing nodes, wherein one or more of the plurality of client processing nodes selectively operate in one or more of an object-based mode and a POSIX-style mode; and a plurality of storage nodes, wherein one or more of the plurality of client processing nodes transfer multiple portions of the same data entity (e.g., an object or a file) substantially simultaneously to one or more of the storage nodes. A uniform interface is optionally provided to access the object-based mode and the POSIX-style mode. The multiple portions of the same data entity comprise blocks, and multiple blocks can be committed substantially simultaneously in parallel. Committed data containing an error that was uploaded using the object-based mode becomes unavailable for further object-based access until the error is repaired using the POSIX-style mode.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: January 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventor: Andrey Nevolin
  • Patent number: 10887443
    Abstract: The invention enables digital music content to be downloaded to and used on a portable wireless computing device. An application running on the wireless device has been automatically adapted to parameters associated with the wireless device without end-user input (e.g., the application has been configured in dependence on the device OS and firmware, related bugs, screen size, pixel count, security models, connection handling, memory, etc.). This application enables an end-user to browse and search music content on a remote server using a wireless network, to download music content from that remote server using the wireless network, and to play back and manage that downloaded music content. The application also includes a digital rights management system that enables unlimited legal downloads of different music tracks to the device and enables any of those tracks stored on the device to be played so long as a subscription service has not terminated.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: January 5, 2021
    Assignee: TikTok Pte. Ltd.
    Inventors: Mark Stephen Knight, Michael Ian Lamb, Robert John Lewis, Stephen William Pocock, Philip Anthony Sant, Mark Peter Sullivan, Christopher John Evans
  • Patent number: 10884477
    Abstract: The described embodiments include a computing device with a plurality of clients and a shared resource for processing job items. During operation, a given client of the plurality of clients stores first job items in a queue for the given client. When the queue for the given client meets one or more conditions, the given client notifies one or more other clients that the given client is to process job items using the shared resource. The given client then processes the first job items from the queue using the shared resource. Based on being notified, at least one other client that has second job items to be processed using the shared resource, processes the second job items using the shared resource. The given client can transition the shared resource between power states to enable the processing of job items.
    Type: Grant
    Filed: October 20, 2016
    Date of Patent: January 5, 2021
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Alexander J. Branover, Benjamin Tsien
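A minimal sketch of the threshold-and-notify batching this abstract describes, assuming an in-memory queue per client and immediate processing once the shared resource is awake (the class, field names, and threshold condition are all illustrative):

```python
class SharedResourceClient:
    """Sketch of the scheme in the abstract: a client queues job items
    locally and, once its queue meets a condition (here, a size threshold),
    processes them on the shared resource and notifies peers so they can
    drain their own queues while the resource is powered up."""
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold
        self.queue = []
        self.peers = []   # other clients to notify

    def add_job(self, item, processed):
        self.queue.append(item)
        if len(self.queue) >= self.threshold:
            # Condition met: "power up" the shared resource, drain our
            # queue, then let the notified peers drain theirs too.
            processed.extend(self.queue)
            self.queue.clear()
            for peer in self.peers:
                processed.extend(peer.queue)
                peer.queue.clear()

done = []
a = SharedResourceClient("a", threshold=2)
b = SharedResourceClient("b", threshold=4)
a.peers = [b]
b.add_job("b1", done)   # stays queued on b (below b's threshold)
a.add_job("a1", done)   # below a's threshold
a.add_job("a2", done)   # threshold hit: drains a's queue, then b's
print(done)             # ['a1', 'a2', 'b1']
```

The batching amortizes the cost of transitioning the shared resource between power states: peers piggyback on a power-up another client already paid for.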
  • Patent number: 10884733
    Abstract: An apparatus causes a management unit included in an arithmetic processing unit to manage execution of a task when an executable task is included in a queue. When no executable task is included in the queue, the apparatus causes a standby unit included in the arithmetic processing unit to execute a decision process that decides, by polling, whether information from another apparatus has been received by a communication controller, until an executable task is included in the queue.
    Type: Grant
    Filed: July 6, 2018
    Date of Patent: January 5, 2021
    Assignee: FUJITSU LIMITED
    Inventors: Yuto Tamura, Kohta Nakashima
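The run-or-poll split between the management unit and the standby unit can be sketched as a simple loop; `poll_once` is a hypothetical stand-in for checking the communication controller for data from another apparatus:

```python
def run_or_poll(task_queue, poll_once, max_polls=100):
    """Sketch of the management/standby split in the abstract: if the queue
    holds an executable task, run it; otherwise poll until a task appears
    or the poll budget runs out. max_polls is an artifact of this sketch so
    the loop terminates; the real standby unit polls until work arrives."""
    polls = 0
    while not task_queue and polls < max_polls:
        poll_once()   # the standby unit's polling decision process
        polls += 1
    return task_queue.pop(0)() if task_queue else None

polls = []
tasks = []
def poll_once():
    # Simulate a message from another apparatus arriving on the third poll,
    # which makes a task executable.
    polls.append(1)
    if len(polls) == 3:
        tasks.append(lambda: "done")

print(run_or_poll(tasks, poll_once))  # done
```

Polling while idle trades CPU cycles for latency: the incoming message is noticed immediately rather than after an interrupt and context switch.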
  • Patent number: 10884809
    Abstract: A method of workflow management in a cloud computing system that includes generating a workflow graph from a workflow definition, the workflow graph including nodes representing work-elements; generating a stream matrix from the workflow graph, the stream matrix including pointers to lists of the work-elements, each of the lists representing a workstream; processing the stream matrix to place work-elements in a platform service pipeline for the cloud computing system based on resource availability of the platform service pipeline; and removing work-elements from the lists and the platform service pipeline upon completion.
    Type: Grant
    Filed: May 14, 2019
    Date of Patent: January 5, 2021
    Assignee: VMware, Inc.
    Inventors: Tissa Senevirathne, Andrew Sharpe, Harish Barkur Bhat, Francis Guillier
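The stream-matrix scheduling loop can be sketched as follows, with each workstream represented as a list of work-elements and completion simulated as immediate (both simplifications of this sketch, not the patent):

```python
def run_workstreams(stream_matrix, pipeline_capacity):
    """Sketch of the loop in the abstract: the stream matrix is a list of
    workstreams (lists of work-elements); work-elements are placed into a
    bounded platform-service pipeline as capacity allows and removed from
    the lists and the pipeline upon completion."""
    completed = []
    while any(stream_matrix):
        pipeline = []
        for stream in stream_matrix:
            if stream and len(pipeline) < pipeline_capacity:
                pipeline.append(stream.pop(0))   # place into the pipeline
        completed.extend(pipeline)               # remove upon completion
        # (a real scheduler would wait for resource availability here)
    return completed

print(run_workstreams([["a1", "a2"], ["b1"]], pipeline_capacity=2))
# ['a1', 'b1', 'a2']
```

The matrix form matters because it lets the scheduler interleave independent workstreams fairly while still respecting the pipeline's capacity.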
  • Patent number: 10884799
    Abstract: At least one processor of a storage system comprises a plurality of cores and is configured to execute a first thread in a plurality of modes of operation. When operating in a first mode of operation, the first thread polls at least one interface of the storage system for data to be processed. Responsive to detecting the data, the first thread processes the data. Responsive to having no remaining data to be processed, the first thread suspends execution on its core if another thread is executing on a second core and operating in a second mode of operation. When operating in the second mode of operation, the first thread polls at least one interface associated with a second thread executing on a second core and operating in the first mode of operation for data to be processed. Responsive to detecting the data, the first thread causes the second thread to resume execution.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: January 5, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Amitai Alkalay, Lior Kamran
  • Patent number: 10887252
    Abstract: A network interface device is connected to a host computer by an uplink, the host computer having a memory controller and a scatter-gather offload engine linked to the memory controller. The network interface device prepares a descriptor including a plurality of specified memory locations in the host computer, incorporates the descriptor in exactly one upload packet, transmits the upload packet to the scatter-gather offload engine via the uplink, invokes the scatter-gather offload engine to perform memory access operations cooperatively with the memory controller at the specified memory locations of the descriptor, and accepts results of the memory access operations.
    Type: Grant
    Filed: November 6, 2018
    Date of Patent: January 5, 2021
    Assignee: MELLANOX TECHNOLOGIES, LTD.
    Inventors: Dror Bohrer, Noam Bloch, Peter Paneah, Richard Graham
  • Patent number: 10884800
    Abstract: The present disclosure involves systems, software, and computer implemented methods for resource allocation and management. One example method includes receiving, by a first dispatcher in a dispatching layer, a first request to run a first task for a first application, the first request including a first application priority. A determination is made that the first application priority is lower than at least one higher application priority of another application. Execution of the first application is suspended based on determining that the first application priority is lower than the at least one higher application priority. An indication that an application having a higher application priority has finished is received. A determination is made that the first application priority is a highest application priority of currently-running applications. The first task for the first application is dispatched to a first application server.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: January 5, 2021
    Assignee: SAP SE
    Inventors: Alain Gauthier, Martin Parent, Edgar Lott
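A toy model of the suspend-and-dispatch behaviour, using a heap as the dispatching layer's wait list (the heap, and the rule that a request at or above the running priority dispatches immediately, are assumptions of this sketch):

```python
import heapq

class Dispatcher:
    """Sketch of the priority behaviour in the abstract: a request whose
    application priority is lower than a currently running application's is
    held (the application is suspended); when a higher-priority application
    finishes, the highest-priority waiting task is dispatched."""
    def __init__(self):
        self.waiting = []            # max-heap via negated priorities
        self.running_priority = None

    def request(self, task, priority):
        if self.running_priority is not None and priority < self.running_priority:
            heapq.heappush(self.waiting, (-priority, task))   # suspend
            return None
        self.running_priority = priority
        return task                                           # dispatch now

    def finished(self):
        # A running application finished; dispatch the highest-priority
        # waiting task, if any.
        self.running_priority = None
        if self.waiting:
            neg, task = heapq.heappop(self.waiting)
            self.running_priority = -neg
            return task
        return None

d = Dispatcher()
print(d.request("report", priority=5))   # report  (nothing else running)
print(d.request("batch", priority=1))    # None    (suspended: 1 < 5)
print(d.finished())                      # batch   (now highest priority)
```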
  • Patent number: 10877808
    Abstract: Mechanisms are provided for scheduling a task from a plurality of tasks to a processor core of a cluster of processor cores. The processor cores share caches. A method is performed by a controller. The method comprises determining group-wise task relationships between the plurality of tasks based on the duration of cache misses resulting from running groups of the plurality of tasks on processor cores sharing the same cache, and scheduling the task to one of the processor cores based on the group-wise task relationships of the task.
    Type: Grant
    Filed: October 10, 2016
    Date of Patent: December 29, 2020
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Patrik Åberg, Bengt Wikenfalk
  • Patent number: 10877795
    Abstract: At least some embodiments described herein relate to the automatic tuning of a dataflow execution graph. Such dataflow execution graphs are often used to execute some processing against a stream of data messages. A performance parameter of the dataflow execution graph is monitored, and compared against a service level objective. Based on the comparison, it is automatically decided whether a configuration of the dataflow execution graph should be changed. If a change is decided to be made, the configuration of the dataflow execution graph is altered. Thus, rather than require explicit instructions to change the configuration of a dataflow execution graph, the configuration of a dataflow execution graph is changed (or tuned) depending on compliance of performance with a service level objective.
    Type: Grant
    Filed: July 25, 2018
    Date of Patent: December 29, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Rahul Potharaju, Terry Yumin Kim
  • Patent number: 10872006
    Abstract: A remote procedure call channel for interprocess communication in a managed code environment ensures thread-affinity on both sides of an interprocess communication. Using the channel, calls from a first process to a second process are guaranteed to run on a same thread in a target process. Furthermore, calls from the second process back to the first process will also always execute on the same thread. An interprocess communication manager that allows thread affinity and reentrancy is able to correctly keep track of the logical thread of execution so calls are not blocked in unmanaged hosts. Furthermore, both unmanaged and managed hosts are able to make use of transparent remote call functionality provided by an interprocess communication manager for the managed code environment.
    Type: Grant
    Filed: September 7, 2018
    Date of Patent: December 22, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jackson M. Davis, John A. Shepard
  • Patent number: 10873412
    Abstract: A system and method for using a network with base stations to optimally or near-optimally schedule radio resources among the users are disclosed. In certain embodiments the system and method are designed to operate in real time (such as, but not limited to, 100 μs) to schedule radio resources in a 5G NR network by solving for an optimal or near-optimal solution to the scheduling problem: decomposing it into a number of small and independent sub-problems, selecting a subset of sub-problems and fitting them into a number of parallel processing cores from one or multiple many-core computing devices, and solving for an optimal or near-optimal solution through parallel processing within approximately 100 μs. In other embodiments, the sub-problems are constructed to have a similar mathematical structure. In yet other embodiments, the sub-problems are constructed to each be solved within approximately tens of μs.
    Type: Grant
    Filed: July 19, 2018
    Date of Patent: December 22, 2020
    Assignee: Virginia Polytechnic Institute and State University
    Inventors: Yan Huang, Y. Thomas Hou, Yongce Chen
  • Patent number: 10866832
    Abstract: A workflow scheduling system includes a first processor configured to schedule a plurality of workflows each including a plurality of tasks; a plurality of second processors configured to form a predetermined number of logical computation units and execute the scheduled workflows in parallel; and a memory that stores information about a plurality of task groups each of which includes one or more tasks from one or more of the workflows. The first processor is configured to, based on the stored information, instruct the second processors to execute the scheduled workflows while limiting a total number of the workflows simultaneously executed by the second processors to the predetermined number for each of the task groups.
    Type: Grant
    Filed: September 3, 2018
    Date of Patent: December 15, 2020
    Assignee: TOSHIBA MEMORY CORPORATION
    Inventor: Yoshihiro Ohba
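The per-task-group concurrency cap can be sketched as a counting admission check (the class and method names are illustrative, not from the patent):

```python
class TaskGroupLimiter:
    """Sketch of the cap in the abstract: the total number of workflows
    simultaneously executing is limited to a predetermined number for each
    task group."""
    def __init__(self, limit):
        self.limit = limit
        self.running = {}   # task group -> count of workflows executing

    def try_start(self, group):
        # Admit the workflow only if the group is below its cap.
        if self.running.get(group, 0) < self.limit:
            self.running[group] = self.running.get(group, 0) + 1
            return True
        return False

    def finish(self, group):
        self.running[group] -= 1

lim = TaskGroupLimiter(limit=2)
print(lim.try_start("g1"), lim.try_start("g1"), lim.try_start("g1"))
# True True False
lim.finish("g1")
print(lim.try_start("g1"))  # True
```

This is the same shape as a counting semaphore per task group; the patent's first processor plays the role of the admission check.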
  • Patent number: 10868751
    Abstract: A system, a method, and a computer program for generating a dynamically configurable resolution route for transmitting a request object to one or more nodes in a network, comprising receiving a trigger signal from a first node, determining one or more destination nodes based on a resolution process, schema or scenario, determining a pathway to the one or more destination nodes, generating a resolution route for transmitting the request object in the network, iteratively transmitting the request object to the one or more destination nodes based on the resolution route, receiving a request object resolution signal from a final destination node, and transmitting the request object resolution signal to the first node based on the request object resolution signal.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: December 15, 2020
    Assignee: Saudi Arabian Oil Company
    Inventors: Mohammad D. Shammari, Adnan O. Haidar, Abdullah A. Tamimi, Sami H. Buri, Hussain A. Hajjaj, Mohammad A. Qahtani
  • Patent number: 10867283
    Abstract: Mechanisms can be provided for locking a component and extending the lock to one or more additional component(s) in a visual analyzer application. Embodiments can receive a request for a first component of a document for a first thread where the document is displayed by a graphical user interface (GUI) and has components including the first component and a second component. A lock manager may lock the first component. An action handler can determine, based on code associated with an event pertaining to the request, that the second component also needs to be locked. The lock manager may lock the second component for a same thread, if the first and second components are not currently locked. Additional user actions directed to other components of the application not currently locked may still proceed, permitting asynchronous calls to be processed without interference with a previous action that has already started.
    Type: Grant
    Filed: September 20, 2018
    Date of Patent: December 15, 2020
    Assignee: Oracle International Corporation
    Inventors: Alvin Andrew Raj, Matthew Jakubiak, Bo Jonas Birger Lagerblad
  • Patent number: 10866834
    Abstract: A simultaneous multi-threading (SMT) processor core capable of thread-based biasing with respect to execution resources. The SMT processor includes priority controller circuitry to determine a thread priority value for each of a plurality of threads to be executed by the SMT processor core and to generate a priority vector comprising the thread priority value of each of the plurality of threads. The SMT processor further includes thread selector circuitry to make execution cycle assignments of a pipeline by assigning to each of the plurality of threads a portion of the pipeline's execution cycles based on each thread's priority value in the priority vector. The thread selector circuitry is further to select, from the plurality of threads, tasks to be processed by the pipeline based on the execution cycle assignments.
    Type: Grant
    Filed: March 29, 2019
    Date of Patent: December 15, 2020
    Assignee: Intel Corporation
    Inventors: Andrew Herdrich, Ian Steiner, Leeor Peled, Michael Prinke, Eylon Toledano
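The proportional biasing that the priority vector enables can be sketched as simple integer arithmetic; the real thread selector circuitry is far more involved, so this only illustrates the idea of assigning pipeline cycles in proportion to thread priority values:

```python
def assign_cycles(priority_vector, total_cycles):
    """Sketch of thread-based pipeline biasing: split an execution-cycle
    budget across threads in proportion to their priority values."""
    total = sum(priority_vector)
    shares = [total_cycles * p // total for p in priority_vector]
    # Hand any rounding remainder to the highest-priority thread.
    shares[priority_vector.index(max(priority_vector))] += total_cycles - sum(shares)
    return shares

print(assign_cycles([3, 1], 8))  # [6, 2]
```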
  • Patent number: 10860369
    Abstract: Prioritizing resource allocation to computer applications, which includes: grouping the computer applications into groups according to an initial criterion; modifying the groups according to one or more criteria used to identify active computer applications; analyzing the groups to prioritize them in order of the active time of the computer applications in the groups; analyzing the computer applications in the groups to prioritize them in order of their active time; setting the highest priority for the computer applications that either (1) have a high frequency of use, or (2) are active now; and prioritizing the computer applications according to the priority setting.
    Type: Grant
    Filed: January 11, 2017
    Date of Patent: December 8, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jun Y. Du, Luo Xu Min, Guang Shi, Rui Shi, Wei Lin C W Wu, Jian C D L Zhang
  • Patent number: 10860387
    Abstract: Dynamic distributed work allocation is disclosed. For example, a first work server (WS) stores a first plurality of tasks and a second WS stores a second plurality of tasks. A work client (WC) is configured to send a first lock request (LR) with a first priority value (PV) to the first WS and a second LR with a second PV to the second WS. The WC receives a first lock notice (LN) and a first task from the first WS, and a second LN and a second task from the second WS. Prior to a first lock duration (LD) expiring and completing processing of the first task, the WC sends a third LR to the first WS that extends the first LD. After completing the second task, the WC sends a lock release notice and a fourth LR to the second WS.
    Type: Grant
    Filed: March 15, 2019
    Date of Patent: December 8, 2020
    Assignee: Red Hat, Inc.
    Inventors: John Eric Ivancich, Casey Taylor Bodley, Matthew William Benjamin, Daniel Francis Gryniewicz
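The lock/lease protocol this abstract describes can be sketched with integer timestamps standing in for real clocks; the class and field names are illustrative, and the priority values of the real protocol are omitted for brevity:

```python
class WorkServer:
    """Sketch of the exchange in the abstract: a work client requests a
    lock, receives a lock notice plus a task, and can extend the lock by
    sending a further lock request before the lock duration expires."""
    def __init__(self, tasks):
        self.tasks = list(tasks)
        self.locks = {}            # task -> (client, lease expiry time)

    def lock_request(self, client, now, duration):
        for task in self.tasks:
            holder = self.locks.get(task)
            if holder is None or holder[1] <= now:     # free or expired
                self.locks[task] = (client, now + duration)
                return task                            # lock notice + task
        return None

    def extend(self, client, task, now, duration):
        # A further lock request from the current holder renews the lease.
        if self.locks.get(task, (None, 0))[0] == client:
            self.locks[task] = (client, now + duration)

ws = WorkServer(["t1"])
print(ws.lock_request("wc", now=0, duration=10))  # t1
ws.extend("wc", "t1", now=8, duration=10)         # lease now runs to t=18
```

Expiring leases are what make the scheme robust: if a client dies mid-task, its lock lapses and another client can claim the task.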
  • Patent number: 10860400
    Abstract: A method is used in monitoring an application in a computing environment. The method represents execution of the application on a system as a finite state machine. The finite state machine depicts at least one state of the application, where the state indicates at least one of successful application execution and unsuccessful application execution. The method identifies an error state within the finite state machine, where the error state indicates the unsuccessful application execution. The method identifies, by analyzing the finite state machine, a non-error state as a cause of the unsuccessful application execution, where the unsuccessful application execution is represented as a path comprising a plurality of states, where the path comprises the non-error state. The method maps the non-error state to a location in the application to identify the cause of the unsuccessful application execution.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: December 8, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Karun Thankachan, Prajnan Goswami, Mohammad Rafey
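The path analysis can be reduced, for illustration, to scanning the failing path backwards for the last non-error state; this collapses the patent's finite-state-machine method to its simplest useful form and the state names are hypothetical:

```python
def find_cause(path, error_states):
    """Sketch of the mapping step in the abstract: given an execution path
    through the finite state machine that ends in an error state, report
    the last non-error state on the path as the candidate cause, which can
    then be mapped back to a location in the application."""
    for state in reversed(path):
        if state not in error_states:
            return state
    return None

print(find_cause(["init", "load", "parse", "ERR"], {"ERR"}))  # parse
```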
  • Patent number: 10861126
    Abstract: An apparatus to facilitate asynchronous execution at a processing unit. The apparatus includes one or more processors to detect independent task passes that may be executed out of order in a pipeline of the processing unit, schedule a first set of processing tasks to be executed at a first set of processing elements at the processing unit, and schedule a second set of tasks to be executed at a second set of processing elements, wherein execution of the first set of tasks at the first set of processing elements is to be performed simultaneously and in parallel with execution of the second set of tasks at the second set of processing elements.
    Type: Grant
    Filed: June 21, 2019
    Date of Patent: December 8, 2020
    Assignee: Intel Corporation
    Inventors: Saurabh Sharma, Michael Apodaca, Aditya Navale, Travis Schluessler, Vamsee Vardhan Chivukula, Abhishek Venkatesh, Subramaniam Maiyuran
  • Patent number: 10853283
    Abstract: An integrated circuit includes technology for generating input/output (I/O) latency metrics. The integrated circuit includes a real-time clock (RTC), a read measurement register, and a read latency measurement module. The read latency measurement module includes control logic to perform operations comprising (a) in response to receipt of read responses that complete read requests associated with an I/O device, automatically calculating read latencies for the completed read requests, based at least in part on time measurements from the RTC for initiation and completion of the read requests; (b) automatically calculating an average read latency for the completed read requests, based at least in part on the calculated read latencies for the completed read requests; and (c) automatically updating the read measurement register to record the average read latency for the completed read requests. Other embodiments are described and claimed.
    Type: Grant
    Filed: June 19, 2019
    Date of Patent: December 1, 2020
    Assignee: Intel Corporation
    Inventors: Garrett Matthias Drown, Patrick Lu
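The running-average computation the read latency measurement module performs can be sketched in plain Python; this illustrates the arithmetic only, not the RTC or register hardware, and the names are illustrative:

```python
class ReadLatencyRegister:
    """Sketch of the measurement in the abstract: each completed read
    contributes (completion time - initiation time), and the read
    measurement register records the running average."""
    def __init__(self):
        self.total_latency = 0.0
        self.completed = 0

    def complete_read(self, start_ts, end_ts):
        # Timestamps would come from the real-time clock (RTC).
        self.total_latency += end_ts - start_ts
        self.completed += 1

    @property
    def average(self):
        # The value recorded in the read measurement register.
        return self.total_latency / self.completed if self.completed else 0.0

reg = ReadLatencyRegister()
reg.complete_read(0.0, 4.0)
reg.complete_read(1.0, 3.0)
print(reg.average)  # 3.0
```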
  • Patent number: 10853099
    Abstract: A method for rendering interface elements, including: obtaining a first set of one or more interface elements associated with a target user interface (UI) to be rendered, the first set of one or more interface elements comprising one or more interface elements that meet a pre-configured priority condition; rendering the first set of one or more interface elements at a higher priority than other interface elements associated with the target UI; and outputting a rendering result of the first set of one or more interface elements.
    Type: Grant
    Filed: July 10, 2018
    Date of Patent: December 1, 2020
    Assignee: Alibaba Group Holding Limited
    Inventors: Qinghe Xu, Xu Zeng, Zheng Liu, Yongcai Ma, Lidi Jiang, Kerong Shen, Decai Jin, Chong Zhang
  • Patent number: 10846084
    Abstract: Implementations of the disclosure implement timely and context triggered (TACT) prefetching that targets particular load IPs in a program contributing to a threshold amount of the long-latency accesses. A processing device comprising an execution unit and a prefetcher circuit communicably coupled to the execution unit is provided. The prefetcher circuit is to detect a memory request for a target instruction pointer (IP) in a program to be executed by the execution unit. A trigger IP is identified to initiate a prefetch operation of memory data for the target IP. Thereupon, an association is determined between memory addresses of the trigger IP and the target IP, the association comprising a series of offsets representing a path between the trigger IP and an instance of the target IP in memory. Based on the association, an offset from the memory address of the trigger IP from which to prefetch the memory data is produced.
    Type: Grant
    Filed: January 3, 2018
    Date of Patent: November 24, 2020
    Assignee: Intel Corporation
    Inventors: Anant Vithal Nori, Sreenivas Subramoney, Shankar Balachandran, Hong Wang
  • Patent number: 10846133
    Abstract: A control apparatus includes a memory and a processor coupled to the memory. In a system in which an occupation period of a job is allocated to each of a plurality of calculators so as to operate each of the plurality of calculators, the processor is configured to decide whether or not there is a standby job that is to occupy a number of calculators equal to or smaller than the number of one or more end jobs that end in the middle of a first period occupied by one or more calculators from among the plurality of calculators, within a second period having a time length equal to or smaller than the remaining time of the first period, and to switch, when deciding that there is no standby job, a mode of the one or more calculators to a power-saving mode.
    Type: Grant
    Filed: October 19, 2018
    Date of Patent: November 24, 2020
    Assignee: FUJITSU LIMITED
    Inventor: Takahiro Kagami
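A highly simplified model of the power-saving decision, assuming a job is characterized only by the number of calculators it needs; the function shape and field meanings are assumptions of this sketch, not the patent's method:

```python
def should_power_save(ending_jobs, standby_jobs, window):
    """Sketch of the decision in the abstract: ending_jobs is a list of
    (calculators_freed, end_time) pairs, standby_jobs a list of calculator
    counts needed by waiting jobs, and window the look-ahead period. If no
    standby job could run on the calculators freed within the window, the
    freed calculators can be switched to a power-saving mode."""
    freed = sum(n for n, end_time in ending_jobs if end_time <= window)
    runnable = any(0 < need <= freed for need in standby_jobs)
    return not runnable

print(should_power_save([(2, 5)], standby_jobs=[4], window=10))  # True
print(should_power_save([(2, 5)], standby_jobs=[2], window=10))  # False
```

The point of the check is to avoid powering down calculators that a waiting job could usefully occupy before the current occupation period ends.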
  • Patent number: 10845854
    Abstract: The disclosed computing device may include electronic components, at least one of which is a processor. The computing device may also include a heat sink thermally coupled to the electronic components, as well as a temperature sensor that determines the current temperature inside the computing device. The computing device may further include a controller. The processor may generate a load schedule for the electronic components based on the current temperature inside the computing device. This load schedule ensures that a maximum temperature for the heat sink is not exceeded even when the total system power load exceeds, for a short period of time, the maximum sustainable power level the heat sink can dissipate. The controller may then load the electronic components according to the generated load schedule. Various other methods, systems, and computer-readable media are also disclosed.
    Type: Grant
    Filed: August 28, 2018
    Date of Patent: November 24, 2020
    Assignee: Facebook, Inc.
    Inventors: Howard William Winter, ChuanKeat Kho, Peter John Richard Gilbert Bracewell
  • Patent number: 10841367
    Abstract: A new workload is assigned to a subset of a plurality of processors, the subset of processors assigned a subset of a plurality of cache devices. A determination is made that the new workload is categorized as a cache-dependent workload which would be executed more efficiently were additional data elements associated with the new workload to be held in the subset of cache devices, and pursuant to determining the new workload is the cache-dependent workload, a determination is made as to whether the subset of cache devices is meeting the memory need of the new workload. Responsive to determining the subset of cache devices is not meeting the memory need of the new workload, a cache related action is performed.
    Type: Grant
    Filed: May 17, 2018
    Date of Patent: November 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: John A. Bivens, Eugen Schenfeld, Valentina Salapura, Ruchi Mahindru, Min Li
  • Patent number: 10841369
    Abstract: Provided are a computer program product, system, and method for determining allocatable host system resources to remove from a cluster and return to a host service provider. A determination is made of unused host system resources, not currently being used by workloads, in a plurality of host systems. A determination is made of the computational resources required to complete processing unfinished workloads that have not completed. An amount of resources to remove from the cluster is determined by subtracting the required computational resources from the unused host system resources. At least one of the host systems available for the workloads, having resources that satisfy the amount of resources to remove, is selected to remove from the cluster.
    Type: Grant
    Filed: November 26, 2018
    Date of Patent: November 17, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lior Aronovich, Priya Unnikrishnan
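A minimal sketch of the arithmetic and selection step described above; the single-number resource representation and the greedy host selection are assumptions for illustration, not the patented algorithm.

```python
def resources_to_remove(unused, required):
    # Removable amount = unused host resources minus resources still
    # required to finish in-flight workloads (floored at zero).
    return max(0, unused - required)

def select_hosts(hosts, amount):
    """Greedily pick hosts whose combined resources fit within the amount
    to remove. `hosts` maps host name -> resource units (hypothetical)."""
    chosen, total = [], 0
    for name, res in sorted(hosts.items(), key=lambda kv: -kv[1]):
        if total + res <= amount:
            chosen.append(name)
            total += res
    return chosen
```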
  • Patent number: 10831542
    Abstract: In an SRCU environment, per-processor data structures each maintain a list of SRCU callbacks enqueued by SRCU updaters. An SRCU management data structure maintains a current-grace-period counter that tracks a current SRCU grace period, and a future-grace-period counter that tracks a farthest-in-the-future SRCU grace period needed by the SRCU callbacks enqueued by the SRCU updaters. A combining tree is used to mediate a plurality of grace-period-start requests concurrently vying for an opportunity to update the future-grace-period record on behalf of SRCU callbacks. The current-grace-period counter is prevented from wrapping during some or all of the grace-period-start request processing. In an embodiment, the counter wrapping is prevented by performing some or all of the grace-period start-request processing within an SRCU read-side critical section.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: November 10, 2020
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Paul E. McKenney
  • Patent number: 10817220
    Abstract: A block I/O request processing thread executes only on a processor core to which it is assigned. After it executes for a period of time, the block I/O request processing thread yields its assigned processing core to another type of thread that is runnable on the processing core, such as a file I/O request processing thread. When there are no block I/O requests for a block I/O request processing thread to process, it is suspended from being executed. A monitor thread running on another processing core detects that a newly received block I/O request is available for processing, and makes the block I/O request processing thread runnable again. The block I/O request processing thread may be assigned a higher priority than file I/O request processing threads, and preempt any runnable file I/O request processing threads when it is made runnable to process the newly received block I/O request.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: October 27, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Jean-Pierre Bono, Sudhir Srinivasan, Joon-Jack Yap
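The suspend/resume interaction between a block I/O worker and the monitor can be sketched with a condition variable. This is a single-worker sketch with hypothetical names; the patent's per-core assignment, time-sliced yielding, and priority preemption are omitted.

```python
import queue
import threading

requests = queue.Queue()
cv = threading.Condition()
stop = False

def block_io_worker(results):
    """Processes block I/O requests; with no request pending, the thread
    suspends itself instead of spinning on its assigned core."""
    while True:
        with cv:
            while requests.empty() and not stop:
                cv.wait()               # suspended until made runnable again
            if stop and requests.empty():
                return
        results.append(requests.get())  # "process" the request

def submit(request):
    """Monitor side: a newly received request makes the worker runnable."""
    requests.put(request)
    with cv:
        cv.notify()
```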
  • Patent number: 10817349
    Abstract: Systems and methods for waking up waiting processing streams in a manner that reduces the number of spurious wakeups. An example method may comprise: assigning identifiers of a sequence of identifiers to wakeup signals and to processing streams, the sequence of identifiers representing a chronological order that the wakeup signals are initiated and the processing streams begin waiting, wherein each of the identifiers is exclusively assigned to either a wakeup signal or a processing stream and the sequence of identifiers comprises a first identifier associated with a wakeup signal; and responsive to receiving the wakeup signal, avoiding waking a processing stream associated with an identifier greater than the first identifier.
    Type: Grant
    Filed: June 14, 2018
    Date of Patent: October 27, 2020
    Assignee: Red Hat, Inc.
    Inventor: Torvald Riegel
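The identifier-ordering idea above can be sketched with a single shared counter: signals and waiters draw from one sequence, and a signal wakes only a stream whose identifier is smaller than its own, i.e. one that began waiting before the signal was initiated. The bookkeeping dictionary and function names are hypothetical.

```python
import itertools

_ids = itertools.count()      # one sequence shared by signals and waiters
waiters = {}                  # waiter id -> "waiting" or "woken"

def begin_wait():
    """A processing stream starts waiting; it takes the next identifier."""
    wid = next(_ids)
    waiters[wid] = "waiting"
    return wid

def signal():
    """A wakeup signal takes the next identifier and wakes only a stream
    whose identifier is smaller than its own. Streams with a greater
    identifier are left alone, avoiding a spurious wakeup."""
    sid = next(_ids)
    for wid in sorted(waiters):
        if wid < sid and waiters[wid] == "waiting":
            waiters[wid] = "woken"
            return wid
    return None
```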
  • Patent number: 10810042
    Abstract: Methods and systems for improving the performance of a distributed job scheduler by dynamically splitting and distributing the work of a single job into parallelizable tasks that are executed among multiple nodes in a cluster are described. The distributed job scheduler may split a job into a plurality of tasks and assign the tasks to nodes within the cluster based on a time remaining to complete the job, an estimated time to complete the job, and a number of identified healthy nodes within the cluster. The distributed job scheduler may monitor job progress over time and adjust (e.g., increase) the number of nodes used to execute the plurality of tasks if the time remaining to complete the job falls below a threshold amount of time or if the time remaining to complete the job minus the estimated time to complete the job falls below the threshold amount of time.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: October 20, 2020
    Assignee: Rubrik, Inc.
    Inventors: Schuyler Merritt Smith, Patricia Ann Beekman, Nam Hyun Jo
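The adjustment rule in the abstract can be sketched as a small function; the doubling policy and the cap at the number of healthy nodes are assumptions for illustration, not the patented mechanism.

```python
def adjust_nodes(current_nodes, time_remaining, estimated_time,
                 threshold, healthy_nodes):
    """Increase parallelism when the job risks missing its deadline:
    either the time remaining, or the slack between time remaining and
    the estimated completion time, falls below the threshold."""
    if time_remaining < threshold or (time_remaining - estimated_time) < threshold:
        return min(current_nodes * 2, healthy_nodes)
    return current_nodes
```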
  • Patent number: 10812676
    Abstract: The present disclosure relates to an image processing apparatus and an information processing method, and more particularly to an image processing apparatus including an acceptance unit configured to accept an operation performed on the image processing apparatus, and a controller configured to perform control such that an operation performed on the image processing apparatus is not accepted by the acceptance unit while a service associated with remote assistance is accepted based on an access from an external apparatus.
    Type: Grant
    Filed: December 21, 2017
    Date of Patent: October 20, 2020
    Assignee: CANON KABUSHIKI KAISHA
    Inventor: Naoya Kakutani
  • Patent number: 10802831
    Abstract: A computer system is associated with a number of computers including at least one central processing unit (CPU). Managing of parallel processing on the computer system may comprise determining a scheduling limit to restrict a number of worker threads available for executing tasks on the computer system. The managing may further comprise executing a plurality of tasks on the computer system. An availability of a CPU associated with the computer system is determined based on whether a load of the CPU exceeds a first threshold. When the CPU is determined to be unavailable, the scheduling limit is reduced. A further task is scheduled for execution on one of the CPUs according to the reduced scheduling limit. The worker threads available to execute tasks on the computer system may be limited, such that the quantity of worker threads available for executing tasks does not exceed the scheduling limit.
    Type: Grant
    Filed: June 30, 2017
    Date of Patent: October 13, 2020
    Assignee: SAP SE
    Inventors: Viktor Povalyayev, David C. Hu, Marvin Baumgart, Michael Maris
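The limit-reduction step above can be sketched as follows; the per-CPU decrement and the floor of one worker thread are assumptions, not the patent's exact policy.

```python
def update_scheduling_limit(limit, cpu_loads, load_threshold, min_limit=1):
    """A CPU whose load exceeds the threshold is treated as unavailable,
    and the scheduling limit that caps the number of worker threads
    available for executing tasks is reduced accordingly."""
    for load in cpu_loads:
        if load > load_threshold:          # CPU deemed unavailable
            limit = max(min_limit, limit - 1)
    return limit
```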
  • Patent number: 10802876
    Abstract: A method of determining a multi-agent schedule includes defining a well-formed, non-preemptive task set that includes a plurality of tasks, with each task having at least one subtask. Each subtask is associated with at least one resource required for performing that subtask. In accordance with the method, an allocation, which assigns each task in the task set to an agent, is received and a determination is made, based on the task set and the allocation, as to whether a subtask in the task set is schedulable at a specific time. A system for implementing the method is also provided.
    Type: Grant
    Filed: May 22, 2013
    Date of Patent: October 13, 2020
    Assignee: Massachusetts Institute of Technology
    Inventors: Julie Ann Shah, Matthew Craig Gombolay
  • Patent number: 10797943
    Abstract: Disclosed aspects relate to configuration management in a stream computing environment to process a stream of tuples using a compiled application bundle. A set of configuration overlay parameters may be established separate from the compiled application bundle. A set of configuration overlay parameter values may be ascertained with respect to the set of configuration overlay data. A stream environment application overlay configuration may be determined based on the set of configuration overlay parameter values. The stream of tuples may be processed using the stream environment application overlay configuration.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: October 6, 2020
    Assignee: International Business Machines Corporation
    Inventor: Bradley W. Fawcett
  • Patent number: 10796225
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing tensor computations across computing devices. One of the methods includes: receiving specification data that specifies a distribution of tensor computations among a plurality of computing devices, wherein each tensor computation (i) is defined to receive, as input, one or more respective input tensors each having one or more respective input dimensions, (ii) is defined to generate, as output, one or more respective output tensors each having one or more respective output dimensions, or both, wherein the specification data specifies a respective layout for each input and output tensor that assigns each dimension of the input or output tensor to one or more of the plurality of computing devices; assigning, based on the layouts for the input and output tensors, respective device-local operations to each of the computing devices; and causing the tensor computations to be executed.
    Type: Grant
    Filed: August 5, 2019
    Date of Patent: October 6, 2020
    Assignee: Google LLC
    Inventor: Noam M. Shazeer
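The layout idea, in which each tensor dimension is either assigned to an axis of the device mesh or replicated, determines the per-device shard shape. A simplified sketch (the mapping representation is an assumption; the patent's specification data is richer):

```python
def shard_shape(tensor_shape, layout, mesh_sizes):
    """Per-device shard shape for a tensor under a layout.
    `layout` maps a tensor dimension index to a mesh axis name; a
    dimension absent from the layout is replicated on every device."""
    shape = []
    for dim, size in enumerate(tensor_shape):
        axis = layout.get(dim)
        if axis is None:
            shape.append(size)                      # replicated dimension
        else:
            shape.append(size // mesh_sizes[axis])  # split across that axis
    return shape
```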
  • Patent number: 10795540
    Abstract: Methods, apparatuses, and computer program products for visualizing migration of a resource of a distributed computing environment are provided. Embodiments include displaying, within a graphical user interface (GUI), one or more graphical resource representations. Each graphical resource representation represents a resource of a distributed computing environment. Each graphical resource representation is displayed in a particular location within the GUI according to a location of the resource within the distributed computing environment. Embodiments also include displaying, within the GUI, a first graphical migration representation. The first graphical migration representation represents a first transfer operation of a first resource of the distributed computing environment.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: October 6, 2020
    Assignee: International Business Machines Corporation
    Inventors: Lance Bragstad, Bin Cao, James E. Carey, Mathew R. Odden
  • Patent number: 10791156
    Abstract: A stable conference system that reduces the processing load placed on the system and reliably receives speech requests from a discussion unit is provided. A conference system S includes a control unit 1 and a discussion unit 2, and a running packet is repeatedly transmitted and received between the two units. The discussion unit includes a DU control portion 26 and a DU communication portion 21; the control unit includes a CU communication portion 11 and an audio slot. The DU control portion adds a request flag to the running packet, and the audio slot receives audio information from the discussion unit. When the running packet with the request flag added is received from the discussion unit and the audio slot is vacant, the CU communication portion transmits a content acquisition packet, carrying an acquisition command to acquire a content, to the discussion unit that added the request flag.
    Type: Grant
    Filed: November 10, 2016
    Date of Patent: September 29, 2020
    Assignee: Audio-Technica Corporation
    Inventors: Kazuhiro Onizuka, Yasuhito Kikuhara, Toru Aikawa
  • Patent number: 10783108
    Abstract: The present invention provides a mechanism whereby active servers are able to extend their RAM by using memory available in standby servers. This can be achieved, without having to take the servers out of their standby mode, by implementing a memory manager operating in at least one active server and configured to directly access the memory of the servers in standby mode, without requiring the processor of these servers in standby mode to be active. In these servers in standby mode, at least their memory, their network card and their communication means are active, whereas at least their processor is in standby mode.
    Type: Grant
    Filed: March 6, 2017
    Date of Patent: September 22, 2020
    Assignees: INSTITUT NATIONAL POLYTECHNIQUE DE TOULOUSE, CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE
    Inventors: Daniel Hagimont, Alain Tchana
  • Patent number: 10771536
    Abstract: Systems, methods, and computer-readable media for coordinating processing of data by multiple networked computing resources include monitoring data associated with a plurality of networked computing resources, and coordinating the routing of data processing segments to the networked computing resources.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: September 8, 2020
    Assignee: ROYAL BANK OF CANADA
    Inventors: Walter Michael Pitio, Philip Iannaccone, Daniel Aisen, Bradley Katsuyama, Robert Park, John Schwall, Richard Steiner, Allen Zhang, Thomas L. Popejoy, Gregory Martin Ludvik, Thomas Matthew Clark, Xiaoran Zheng
  • Patent number: 10761765
    Abstract: A source site includes a controller, a set of source worker nodes, and a message queue connected between the controller and source worker nodes. A destination site includes a set of destination worker nodes. The controller identifies differences between a first snapshot created at the source site at a first time and a second snapshot created at a second time, after the first time. Based on the differences, a set of tasks are generated. The tasks include one or more of copying an object from the source to destination or deleting an object from the destination. The controller places the tasks onto the message queue. A first source worker node retrieves the first task and coordinates with a first destination worker node to perform the first task. A second source worker node retrieves the second task and coordinates with a second destination worker node to perform the second task.
    Type: Grant
    Filed: February 2, 2018
    Date of Patent: September 1, 2020
    Assignee: EMC IP Holding Company LLC
    Inventors: Abhinav Duggal, Atul Avinash Karmarkar, Philip Shilane, Kevin Xu
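The diff-to-tasks step can be sketched by modeling each snapshot as a mapping of object name to content digest (a simplifying assumption; the patent does not specify this representation).

```python
def diff_to_tasks(first_snapshot, second_snapshot):
    """Turn the difference between two snapshots into the copy/delete
    tasks the controller would place on the message queue."""
    tasks = []
    for name, digest in second_snapshot.items():
        if first_snapshot.get(name) != digest:
            tasks.append(("copy", name))    # new or changed since snapshot 1
    for name in first_snapshot:
        if name not in second_snapshot:
            tasks.append(("delete", name))  # gone at source: delete at destination
    return tasks
```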
  • Patent number: 10761992
    Abstract: A processor reduces bus bandwidth consumption by employing a shared load scheme, whereby each shared load retrieves data for multiple compute units (CUs) of a processor. Each CU in a specified group monitors a bus for load accesses directed to a cache shared by the multiple CUs. In response to identifying a load access on the bus, a CU determines if the load access is a shared load access for its share group. In response to identifying a shared load access for its share group, the CU allocates an entry of a private cache associated with the CU for data responsive to the shared load access. The CU then monitors the bus for the data targeted by the shared load. In response to identifying the targeted data on the bus, the CU stores the data at the allocated entry of the private cache.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 1, 2020
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventor: Maxim V. Kazakov
  • Patent number: 10747729
    Abstract: Device-specific chunked hash size tuning to maximize synchronization throughput is described. A synchronization client application or similar program may employ hashing to detect changes to the content of remotely stored files and synchronize those files (as opposed to synchronizing all files). Instead of using static hash chunk sizes for all client applications of a cloud storage service, the synchronization client application may determine the size of the hash buffer by baselining the throughput of hashing on each synchronization device and finding the number of bytes hashed in a given amount of time. Thus, hash chunk size may be optimized on a machine-by-machine basis.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: August 18, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian D. Jones, Julian Burger
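The baselining step, hashing for a fixed time budget and counting the bytes processed, can be sketched as below. The time budget and the choice of SHA-256 are assumptions for illustration; the abstract does not name a specific hash or budget.

```python
import hashlib
import time

def tune_chunk_size(sample, budget_seconds=0.05):
    """Baseline hashing throughput on this machine: hash copies of
    `sample` until the time budget runs out and report the bytes hashed,
    yielding a per-device hash chunk size."""
    h = hashlib.sha256()
    hashed = 0
    deadline = time.perf_counter() + budget_seconds
    while time.perf_counter() < deadline:
        h.update(sample)
        hashed += len(sample)
    return max(len(sample), hashed)    # never smaller than one sample block
```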
  • Patent number: 10739996
    Abstract: Systems and methods are disclosed for enhanced garbage collection operations at a memory device. The enhanced garbage collection may include selecting data and blocks to garbage collect to improve device performance. Data may be copied and reorganized according to a data stream via which the data was received, or data and blocks may be evaluated for garbage collection based on other access efficiency metrics. Data may be selected for collection based on sequentiality of the data, host access patterns, or other factors. Processing of host commands may be throttled based on a determined amount of work to garbage collect a plurality of blocks, in order to limit variability in host command throughput over a time period.
    Type: Grant
    Filed: July 18, 2016
    Date of Patent: August 11, 2020
    Assignee: Seagate Technology LLC
    Inventors: David Scott Ebsen, Kevin A Gomez, Mark Ish, Daniel John Benjamin, Robert Wayne Moss
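Victim selection by an access-efficiency metric can be sketched as below; using the fraction of still-valid data as the metric (lower means less data to copy out) is a simplified stand-in for the selection criteria in the abstract.

```python
def pick_gc_blocks(blocks, count):
    """Choose garbage-collection victims: sort blocks by the ratio of
    still-valid data and take the cheapest `count` blocks to collect."""
    return sorted(blocks, key=lambda b: b["valid_ratio"])[:count]
```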
  • Patent number: 10740240
    Abstract: A computer-implemented method for saving cache access power is suggested. The cache is provided with set predictor logic for providing a generated set selection for selecting a set in the cache, and with a set predictor cache for pre-caching generated set indices of the cache. The method further comprises: receiving a part of a requested memory address; checking, in the set predictor cache, whether a set index for the requested memory address has already been generated; in the case that it has already been generated: securing that the set predictor cache is switched off; issuing the pre-cached generated set index towards the cache; and securing that only that part of the cache is switched on that is associated with the pre-cached generated set index.
    Type: Grant
    Filed: April 19, 2019
    Date of Patent: August 11, 2020
    Assignee: International Business Machines Corporation
    Inventors: Christian Jacobi, Markus Kaltenbach, Ulrich Mayer, Johannes C. Reichart, Anthony Saporito, Siegmund Schlechter
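The hit/miss flow of the set predictor cache can be sketched as a memoizing lookup; the dictionary representation and function names are hypothetical, and the hardware power gating is reduced here to a boolean flag reporting whether the predictor had to be switched on.

```python
def lookup_set(addr_tag, predictor_cache, predict_set):
    """If a set index for this address tag was generated before, reuse
    the pre-cached index and leave the set predictor logic off;
    otherwise run the predictor and cache its result. Returns the set
    index and whether the predictor had to be switched on."""
    if addr_tag in predictor_cache:
        return predictor_cache[addr_tag], False   # predictor stayed off
    set_index = predict_set(addr_tag)             # predictor switched on
    predictor_cache[addr_tag] = set_index
    return set_index, True
```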