Priority Scheduling Patents (Class 718/103)
  • Patent number: 8963933
    Abstract: The desire to use an Accelerated Processing Device (APD) for general computation has increased due to the APD's exemplary performance characteristics. However, current systems incur high overhead when dispatching work to the APD because a process cannot be efficiently identified or preempted. The occupying of the APD by a rogue process for arbitrary amounts of time can prevent the effective utilization of the available system capacity and can reduce the processing progress of the system. Embodiments described herein can overcome this deficiency by enabling the system software to pre-empt a process executing on the APD for any reason. The APD provides an interface for initiating such a pre-emption. This interface exposes an urgency of the request which determines whether the process being preempted is allowed a grace period to complete its issued work before being forced off the hardware.
    Type: Grant
    Filed: July 23, 2012
    Date of Patent: February 24, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Robert Scott Hartog, Ralph Clay Taylor, Michael Mantor, Kevin McGrath, Sebastien Nussbaum, Nuwan S. Jayasena, Rex Eldon McCrary, Mark Leather, Philip J. Rogers
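    A minimal Python sketch of the kind of urgency-aware preemption interface this abstract (patent 8963933) describes; the Urgency levels, the grace_period_s parameter, and the function names are illustrative assumptions, not taken from the patent.

      import time
      from enum import Enum


      class Urgency(Enum):
          """How aggressively a process must be forced off the APD."""
          GRACEFUL = 0     # allow a grace period to drain issued work
          IMMEDIATE = 1    # force the process off the hardware at once


      def preempt(process_name, urgency, drain, grace_period_s=0.05):
          """Request preemption of the named process; urgency decides whether the
          process may drain its already-issued work before being forced off."""
          if urgency is Urgency.GRACEFUL:
              drain(grace_period_s)            # bounded grace period
          return f"{process_name} preempted ({urgency.name})"


      if __name__ == "__main__":
          drain = lambda budget_s: time.sleep(min(budget_s, 0.01))   # stand-in drain
          print(preempt("rogue-kernel", Urgency.IMMEDIATE, drain))
          print(preempt("well-behaved", Urgency.GRACEFUL, drain))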
  • Patent number: 8959520
    Abstract: A data processor and method of controlling the performance of a data processor are provided. The data processor includes a memory operable to store at least two application programs that can be executed on the data processor. A performance module is operable to monitor a performance flag and output a stop command as a function of the presence of the performance flag, wherein the performance module generates a command that terminates at least one of the application programs as a function of the outputting of the stop command.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: February 17, 2015
    Assignee: Siemens Aktiengesellschaft
    Inventor: Peter Gunther
  • Patent number: 8959521
    Abstract: A computer-readable medium tangibly embodying a program of machine-readable instructions executable by a digital processor of a computer system to perform operations for controlling computer system activities. The operations include receiving a command entered with an input device of the computer system to begin opportunistic computer system activities, where the command specifies a time period available for opportunistic computer system activities, and then initiating at least one computer system activity during the time period available for opportunistic computer system activities.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Peter G. Capek, Clifford A. Pickover
  • Patent number: 8959370
    Abstract: The invention concerns scheduling an application comprised of precedence-constrained parallel tasks on a high-performance computer system. The computer system has a plurality of processors, each able to operate at different voltage supply levels. First, a priority order for the tasks is determined based on the computation and communication costs of the tasks. Next, based on the priority order of the tasks, each task is assigned both a processor and a voltage level that substantially minimize energy consumption and completion time for performing that task when compared to the energy consumption and completion time for performing that task on different combinations of processor and voltage level. It is an advantage of the invention that the scheduling takes into account not only completion time (makespan) but also energy consumption. Aspects of the invention include a method, software, a scheduling module of a computer and a schedule.
    Type: Grant
    Filed: October 1, 2009
    Date of Patent: February 17, 2015
    Assignee: University of Sydney
    Inventors: Albert Zomaya, Young Choon Lee
  • Patent number: 8959328
    Abstract: A method, apparatus and system for selecting a highest-prioritized task for using a resource from one of a first and second expired scheduling arrays, where the first and second expired scheduling arrays may prioritize tasks for using the resource, and where tasks in the first expired scheduling array may be prioritized according to a proportionality mechanism and tasks in the second expired scheduling array may be prioritized according to an importance factor determined, for example, based on user input, and executing the task. Other embodiments are described and claimed.
    Type: Grant
    Filed: November 13, 2007
    Date of Patent: February 17, 2015
    Assignee: Intel Corporation
    Inventors: Tong Li, Scott Hahn
  • Patent number: 8954968
    Abstract: In general, techniques of this disclosure relate to measuring scheduling performance of monitored threads in an operating system with improved precision. In one example, a method includes inserting, by an operating system kernel, a monitored thread into a run queue comprising one or more threads and recording an insertion time at which the monitored thread is inserted into the run queue; receiving, by the kernel, an event to remove the monitored thread from the run queue; responsive to receiving the event, determining, by the kernel, an amount of time that the monitored thread is stored on the run queue based on the insertion time and a removal time at which the monitored thread was removed from the run queue; and when the amount of time the monitored thread is stored on the run queue is greater than or equal to a specified threshold, sending a notification to a notification listener.
    Type: Grant
    Filed: August 3, 2011
    Date of Patent: February 10, 2015
    Assignee: Juniper Networks, Inc.
    Inventors: William N. Pohl, Suhas Suhas, Alon Ronen
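    A rough Python sketch of the run-queue latency measurement described in patent 8954968 above: record the insertion time, compute queue residency on removal, and notify a listener past a threshold. The class and method names are assumptions for illustration only.

      import time
      from collections import deque


      class RunQueueMonitor:
          """Tracks how long monitored threads sit on a run queue."""

          def __init__(self, threshold_s, notify):
              self.threshold_s = threshold_s
              self.notify = notify          # notification listener callback
              self.queue = deque()          # (thread_id, insertion_time)

          def insert(self, thread_id):
              """Insert a monitored thread and record its insertion time."""
              self.queue.append((thread_id, time.monotonic()))

          def remove(self):
              """Remove the next thread; notify if it waited too long."""
              thread_id, inserted_at = self.queue.popleft()
              waited = time.monotonic() - inserted_at
              if waited >= self.threshold_s:
                  self.notify(thread_id, waited)
              return thread_id


      if __name__ == "__main__":
          mon = RunQueueMonitor(threshold_s=0.01,
                                notify=lambda t, w: print(f"{t} waited {w:.3f}s"))
          mon.insert("worker-1")
          time.sleep(0.02)
          mon.remove()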
  • Patent number: 8954975
    Abstract: The present invention relates to a task scheduling method for a real-time operating system (RTOS) mounted on an embedded system, and more particularly, to a task scheduling method which allows a programmer to make a CPU reservation for a task. The task scheduling method for a real-time operating system includes: at a scheduling time point, determining whether or not the highest priority of the tasks present in a ready queue is a predetermined value K; if the highest priority is determined to be K, applying a reservation-based scheduler to perform scheduling; and if the highest priority is determined not to be K, applying a priority-based scheduler to perform scheduling. The tasks present in the ready queue whose priority is K contain idle CPU reservation allocation information received as a factor when those tasks are created.
    Type: Grant
    Filed: August 10, 2012
    Date of Patent: February 10, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Sang Cheol Kim, Duk Kyun Woo, Gyu Sang Shin, Pyeong Soo Mah, Seon Tae Kim
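    A minimal sketch of the two-branch scheduling decision described in patent 8954975 above: a reservation-based scheduler is used when the highest ready priority equals K, a priority-based scheduler otherwise. The value of K, the task fields, and the assumption that lower numeric values denote higher priority are all illustrative choices, not drawn from the patent.

      from dataclasses import dataclass, field
      from typing import Optional

      K = 0  # assumed reserved priority value for CPU-reservation tasks


      @dataclass(order=True)
      class Task:
          priority: int
          name: str = field(compare=False)
          cpu_reservation: Optional[float] = field(default=None, compare=False)


      def reservation_based_pick(ready):
          """Among priority-K tasks, pick the one with the largest idle-CPU reservation."""
          k_tasks = [t for t in ready if t.priority == K]
          return max(k_tasks, key=lambda t: t.cpu_reservation or 0.0)


      def priority_based_pick(ready):
          """Ordinary fixed-priority pick (smallest priority value wins here)."""
          return min(ready)


      def schedule(ready):
          """At a scheduling point, branch on whether the highest priority is K."""
          highest = min(t.priority for t in ready)
          if highest == K:
              return reservation_based_pick(ready)
          return priority_based_pick(ready)


      if __name__ == "__main__":
          ready = [Task(2, "logger"), Task(K, "control-loop", cpu_reservation=0.4)]
          print(schedule(ready).name)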
  • Publication number: 20150040133
    Abstract: Provided are techniques for multiple stage workload management. A staging queue and a run queue are provided. A workload is received. In response to determining that application resources are not available and that the workload has not been previously semi-started, the workload is added to the staging queue. In response to determining that the application resources are not available and that the workload has been semi-started, and, in response to determining that run resources are available, the workload is started. In response to determining that the application resources are not available and that the workload has been semi-started, and, in response to determining that the run resources are not available, adding the workload to the run queue.
    Type: Application
    Filed: August 5, 2013
    Publication date: February 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Brian K. Caufield, Ron E. Liu, Sriram K. Padmanabhan, Mi W. Shum, Chun H. Sun, DongJie Wei
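    A hedged Python sketch of the staged admission logic in publication 20150040133 above, covering only the case where application resources are unavailable: never-semi-started work is staged, semi-started work either starts or waits in the run queue. The predicate and queue names are assumptions.

      from collections import deque


      class WorkloadManager:
          """Placement of a workload when application resources are unavailable."""

          def __init__(self, run_resources_available):
              self.staging_queue = deque()
              self.run_queue = deque()
              self.run_resources_available = run_resources_available  # callable

          def place(self, workload, semi_started):
              """Stage never-semi-started work; start or run-queue semi-started work."""
              if not semi_started:
                  self.staging_queue.append(workload)
                  return "staged"
              if self.run_resources_available():
                  return "started"
              self.run_queue.append(workload)
              return "run-queued"


      if __name__ == "__main__":
          mgr = WorkloadManager(run_resources_available=lambda: True)
          print(mgr.place("etl-job", semi_started=False))   # -> staged
          print(mgr.place("etl-job", semi_started=True))    # -> started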
  • Publication number: 20150040134
    Abstract: A method includes receiving a task for execution by a plurality of distributed storage and task execution units. A priority level is determined for the task. A plurality of coordinated partial task requests are generated and sent to the plurality of distributed storage and task execution units, wherein the plurality of coordinated partial task requests indicate a plurality of coordinated partial tasks and the priority level. A plurality of partial task results are received in response to performance of the plurality of coordinated partial tasks by the plurality of distributed storage and task execution units. A task result for the task is generated based on the plurality of partial task results.
    Type: Application
    Filed: May 27, 2014
    Publication date: February 5, 2015
    Applicant: CLEVERSAFE, INC.
    Inventors: Wesley Leggette, Andrew Baptist, Greg Dhuse, Jason K. Resch, Gary W. Grube
  • Patent number: 8949843
    Abstract: A multicore processor system includes one or more clients carrying out parallel processing of tasks by means of processor cores and a server assisting the clients in carrying out the parallel processing via a communication network. Task information containing the minimum number of required cores, indicating the number of processor cores required to carry out processes of the tasks, and core information containing operation setup information, indicating operation setup content of the processor cores, are stored in the server. The server determines whether or not a task is allocated to the plurality of processor cores in accordance with the task information and the core information. The server updates the core information in accordance with the determination result and transmits the updated core information to the client. The client carries out the parallel processing by means of the processor cores in accordance with the received core information.
    Type: Grant
    Filed: September 13, 2010
    Date of Patent: February 3, 2015
    Assignee: Kabushiki Kaisha Enix
    Inventor: James Geraci
  • Patent number: 8949841
    Abstract: A streaming multiprocessor (SM) in a parallel processing subsystem schedules priority among a plurality of threads. The SM retrieves a priority descriptor associated with a thread group and determines whether the thread group and a second thread group are both operating in the same phase. If so, the SM determines whether the priority descriptor of the thread group indicates a higher priority than the priority descriptor of the second thread group. If so, the SM skews the thread group relative to the second thread group such that the thread groups operate in different phases; otherwise the SM increases the priority of the thread group. If the thread groups are not operating in the same phase, the SM increases the priority of the thread group. One advantage of the disclosed techniques is that thread groups execute with increased efficiency, resulting in improved processor performance.
    Type: Grant
    Filed: December 27, 2012
    Date of Patent: February 3, 2015
    Assignee: NVIDIA Corporation
    Inventors: Jack Hilaire Choquette, Olivier Giroux, Robert J. Stoll, Gary M. Tarolli, John Erik Lindholm
  • Patent number: 8949840
    Abstract: In a system, method and computer-readable medium for managing message delivery, message delivery jobs are dynamically prioritized into a plurality of priority queues based on a delivery timeframe for each job. A delivery manager controls delivery of the message delivery jobs through a number of delivery channels and ports. A priority manager reviews jobs pending in the queues. If the priority manager determines that a message delivery job will not be completed within its delivery timeframe, the priority manager assigns a higher priority to the message delivery job.
    Type: Grant
    Filed: December 6, 2007
    Date of Patent: February 3, 2015
    Assignee: West Corporation
    Inventors: Gary Douglas Pulford, Bruce Pollock, Ian James Juliano, James P. Breen
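    A small sketch of the deadline-driven re-prioritization outlined in patent 8949840 above: jobs sit in priority queues, and a priority manager promotes any job that looks unlikely to finish within its delivery timeframe. The finish-time estimate and all names are assumptions for illustration.

      import heapq
      import time


      class DeliveryScheduler:
          """Priority queue of delivery jobs, re-prioritized against their deadlines."""

          def __init__(self, throughput_jobs_per_s):
              self.heap = []                      # (priority, enqueue_order, job entry)
              self.throughput = throughput_jobs_per_s
              self._order = 0

          def add(self, job, priority, deadline):
              self._order += 1
              heapq.heappush(self.heap, (priority, self._order,
                                         {"job": job, "deadline": deadline}))

          def review(self, now=None):
              """Promote jobs that cannot finish by their deadline at current throughput."""
              now = time.monotonic() if now is None else now
              reviewed = []
              for position, (priority, order, entry) in enumerate(sorted(self.heap)):
                  finish_estimate = now + (position + 1) / self.throughput
                  if finish_estimate > entry["deadline"] and priority > 0:
                      reviewed.append((0, order, entry))       # boost to top priority
                  else:
                      reviewed.append((priority, order, entry))
              self.heap = reviewed
              heapq.heapify(self.heap)


      if __name__ == "__main__":
          now = time.monotonic()
          sched = DeliveryScheduler(throughput_jobs_per_s=1.0)
          sched.add("newsletter", priority=5, deadline=now + 100.0)
          sched.add("outage-alert", priority=5, deadline=now + 0.5)
          sched.review(now)
          print(heapq.heappop(sched.heap)[2]["job"])   # the at-risk job comes out first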
  • Patent number: 8943509
    Abstract: A method, apparatus, and computer program product for scheduling stream-based applications in a distributed computer system with configurable networks are provided. The method includes choosing, at a highest temporal level, jobs that will run, an optimal template alternative for the jobs that will run, network topology, and candidate processing nodes for processing elements of the optimal template alternative for each running job to maximize importance of work performed by the system. The method further includes making, at a medium temporal level, fractional allocations and re-allocations of the candidate processing elements to the processing nodes in the system to react to changing importance of the work. The method also includes revising, at a lowest temporal level, the fractional allocations and re-allocations on a continual basis to react to burstiness of the work, and to differences between projected and real progress of the work.
    Type: Grant
    Filed: March 21, 2008
    Date of Patent: January 27, 2015
    Assignee: International Business Machines Corporation
    Inventors: Nikhil Bansal, Kirsten W. Hildrum, James Giles, Deepak Rajan, Philippe L. Seto, Eugen Schenfeld, Rohit Wagle, Joel L. Wolf, Xiaolan J. Zhang
  • Publication number: 20150026694
    Abstract: A method of processing information includes receiving a notification indicating completion of a garbage collection processing; dividing the time period of the garbage collection processing into a plurality of intervals; calculating, for each of the plurality of intervals, an interval fill-rate by calculating the sum total of the processing time allocated to each of one or more threads, calculating a quotient by dividing the sum total by the smaller of the number of threads and the number of cores, and dividing the quotient by the execution time of the interval; calculating an entire fill-rate by dividing, by the execution time of the entire garbage collection processing, the sum total of the products of the interval fill-rates and the execution times of the intervals; and lowering a priority of a second process below a priority of a first process when the entire fill-rate is equal to or less than a predetermined value.
    Type: Application
    Filed: June 16, 2014
    Publication date: January 22, 2015
    Inventor: Akira AKIYAMA
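    A worked Python version of the fill-rate arithmetic in publication 20150026694 above, assuming per-interval data of thread processing times and interval execution times; the variable names, units, and threshold value are illustrative.

      def interval_fill_rate(thread_times, num_cores, interval_time):
          """Sum per-thread processing time, normalize by min(threads, cores),
          then divide by the interval's execution time."""
          total = sum(thread_times)
          quotient = total / min(len(thread_times), num_cores)
          return quotient / interval_time


      def entire_fill_rate(intervals, num_cores):
          """Time-weighted average of interval fill-rates over the whole GC run."""
          gc_time = sum(interval_time for _, interval_time in intervals)
          weighted = sum(interval_fill_rate(threads, num_cores, interval_time) * interval_time
                         for threads, interval_time in intervals)
          return weighted / gc_time


      if __name__ == "__main__":
          # Two intervals: (per-thread processing times in ms, interval execution time in ms)
          intervals = [([4.0, 4.0, 2.0], 10.0), ([1.0, 1.0], 5.0)]
          rate = entire_fill_rate(intervals, num_cores=2)
          print(f"entire fill-rate = {rate:.2f}")        # 0.40 for this data
          if rate <= 0.6:                                # assumed threshold
              print("lower the priority of the second process")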
  • Publication number: 20150026693
    Abstract: An information processing apparatus includes a storage unit and a processing unit. The storage unit is configured to store therein a first execution time and a second execution time longer than the first execution time. The first execution time is a time expected to be taken to execute a first job included in a first job group. The processing unit is configured to determine in which time period an execution start time of the first job is included. The execution start time is a time at which execution of the first job is to be started. The processing unit is configured to select, as a predicted execution time of the first job, one of the first execution time and the second execution time based on a result of the determination. The processing unit is configured to perform scheduling of the first job group based on the predicted execution time.
    Type: Application
    Filed: June 13, 2014
    Publication date: January 22, 2015
    Applicant: FUJITSU LIMITED
    Inventors: Takashi Satoh, Tamotsu Sengoku, Hiroyuki Hatsushika, Yuuji Nomoto
  • Publication number: 20150026695
    Abstract: The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device.
    Type: Application
    Filed: October 9, 2014
    Publication date: January 22, 2015
    Inventors: Christopher J. DAWSON, Vincenzo V. DI LUOFFO, Rick A. HAMILTON, II, Michael D. KENDZIERSKI
  • Publication number: 20150026692
    Abstract: A computer-implemented method for optimizing a queue of queries for database efficiency is implemented by a controller computing device coupled to a memory device. The method includes receiving a plurality of database queries at the computing device from at least one host, evaluating the plurality of database queries to determine a resource impact associated with each database query of the plurality of database queries, prioritizing the plurality of database queries based upon a set of prioritization factors and the resource impact associated with each database query, and submitting the prioritized plurality of database queries to a database system for execution. The database system executes the plurality of database queries in order of priority.
    Type: Application
    Filed: July 22, 2013
    Publication date: January 22, 2015
    Applicant: MasterCard International Incorporated
    Inventor: Debashis Ghosh
  • Patent number: 8935699
    Abstract: Architectures and techniques for substantially maintaining performance of hyperthreads within processing cores of processors. One technique can include determining that a first instruction thread is scheduled for execution on one of two or more hyperthreads, where the first instruction thread has a first priority. Such a technique also includes determining that a second instruction thread is one of executing or scheduled for execution on another of the two or more hyperthreads, where the second instruction thread has a second priority that is less than the first priority. The technique can further include preempting execution of the second instruction thread based at least in part on the second instruction thread having the second priority that is less than the first priority.
    Type: Grant
    Filed: October 28, 2011
    Date of Patent: January 13, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Pradeep Vincent, Darek J. Mihocka
  • Patent number: 8935375
    Abstract: Methods, systems, and computer-readable media for facilitating coordination between a fabric controller of a cloud-computing network and a service application running in the cloud-computing network are provided. Initially, an update domain (UD) that includes role instance(s) of the service application is selected, where the service application represents a stateful application that is targeted for receiving a tenant job executed thereon. The process of coordination involves preparing the UD for execution of the tenant job, disabling the role instance(s) of the UD to an offline condition, allowing the tenant job to execute, and restoring the role instance(s) to an online condition upon completing execution of the tenant job.
    Type: Grant
    Filed: December 12, 2011
    Date of Patent: January 13, 2015
    Assignee: Microsoft Corporation
    Inventors: Pavel Dournov, Luis Irun-Briz, Maxim Khutomenko, Corey Sanders, Gaurav Gupta, Akram Hassan, Ivan Santa Maria Filho, Ashish Shah, Todd Pfleiger, Saad Syed, Sushant Rewaskar, Umer Azad
  • Patent number: 8935700
    Abstract: Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processors; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the first lock and the corresponding resource by the first thread; and, in response to the acquiring of the first lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor.
    Type: Grant
    Filed: December 13, 2013
    Date of Patent: January 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Dirk Michel, Bret R. Olszewski, Basu Vaidyanathan
  • Patent number: 8930584
    Abstract: Described herein are systems and methods for improving concurrency of a request manager for use in an application server or other environment. A request manager receives a request, and upon receiving the request the request manager associates a token with the request. A reference to the request is enqueued in each of a plurality of queues, wherein each queue stores a local copy of the token. A first reference to the request is dequeued from a particular queue, wherein when the first reference to the request is dequeued, the token is modified to create a modified token. Thereafter the request is processed. When other references to the request are dequeued from other queues, the other references to the request are discarded.
    Type: Grant
    Filed: August 9, 2012
    Date of Patent: January 6, 2015
    Assignee: Oracle International Corporation
    Inventors: Oleksandr Otenko, Prashant Agarwal
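    An approximate Python sketch of the token scheme described in patent 8930584 above: a request reference is enqueued into several queues with a local copy of its token, the first dequeue mutates the shared token and processes the request, and later dequeues see a stale token copy and discard their reference. All names are assumptions, and the comparison is not made thread-safe here.

      import itertools
      from collections import deque


      class Request:
          _token_counter = itertools.count(1)

          def __init__(self, payload):
              self.payload = payload
              self.token = next(Request._token_counter)   # shared, mutable token


      def enqueue(request, queues):
          """Put a reference to the request in every queue, with a local token copy."""
          for q in queues:
              q.append((request, request.token))


      def dequeue(queue):
          """Return the request if our token copy is still current, else discard it."""
          request, token_copy = queue.popleft()
          if token_copy != request.token:
              return None                       # another queue already claimed it
          # A real implementation would modify the token with an atomic compare-and-swap.
          request.token = -request.token
          return request


      if __name__ == "__main__":
          q1, q2 = deque(), deque()
          enqueue(Request("GET /status"), [q1, q2])
          first = dequeue(q1)                   # claims and processes the request
          second = dequeue(q2)                  # finds a stale token copy, discards
          print(first.payload, second)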
  • Patent number: 8930952
    Abstract: Provided are techniques for providing a first lock, corresponding to a resource, in a memory that is global to a plurality of processor; spinning, by a first thread running on a first processor of the processors, at a low hardware-thread priority on the first lock such that the first processor does not yield processor cycles to a hypervisor; spinning, by a second thread running on a second processor, on a second lock in a memory local to the second processor such that the second processor is configured to yield processor cycles to the hypervisor; acquiring the lock and the corresponding resource by the first thread; and, in response to the acquiring of the lock by the first thread, spinning, by the second thread, at the low hardware-thread priority on the first lock rather than the second lock such that the second processor does not yield processor cycles to the hypervisor.
    Type: Grant
    Filed: March 21, 2012
    Date of Patent: January 6, 2015
    Assignee: International Business Machines Corporation
    Inventors: Dirk Michel, Bret R. Olszewski, Basu Vaidyanathan
  • Patent number: 8930907
    Abstract: Described is a probabilistic concurrency testing mechanism for testing a concurrent software program that provides a probabilistic guarantee of finding any concurrent software bug at or below a bug depth (that corresponds to a complexity level for finding the bug). A scheduler/algorithm inserts priority lowering points into the code and runs the highest priority thread based upon initially randomly distributed priorities. When that thread reaches a priority lowering point, its priority is lowered to a value associated (e.g., by random distribution) with that priority lowering point, whereby a different thread now has the currently highest priority. That thread is run until its priority is similarly lowered, and so on, whereby all schedules needed to find a concurrency bug are run.
    Type: Grant
    Filed: December 1, 2009
    Date of Patent: January 6, 2015
    Assignee: Microsoft Corporation
    Inventors: Sebastian Carl Burckhardt, Pravesh Kumar Kothari, Madanlal S. Musuvathi, Santosh Ganapati Nagarakatte
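    A toy Python sketch of the probabilistic scheduling idea in patent 8930907 above: threads get randomly distributed initial priorities, the highest-priority runnable thread runs, and hitting a priority-lowering point drops its priority to the value associated with that point. Step granularity and all names are assumptions.

      import random


      def pct_schedule(threads, lowering_points, seed=None):
          """threads: {name: number_of_steps}; lowering_points: {(name, step): new_priority}.
          Returns the interleaving (list of (thread, step)) produced by one run."""
          rng = random.Random(seed)
          # Randomly distributed distinct initial priorities (higher number = higher priority).
          initial = rng.sample(range(100, 100 + len(threads)), len(threads))
          priority = dict(zip(threads, initial))
          progress = {name: 0 for name in threads}
          schedule = []

          while any(progress[n] < threads[n] for n in threads):
              runnable = [n for n in threads if progress[n] < threads[n]]
              current = max(runnable, key=lambda n: priority[n])   # run highest priority
              step = progress[current]
              schedule.append((current, step))
              progress[current] += 1
              # At a priority-lowering point, drop to the value tied to that point.
              if (current, step) in lowering_points:
                  priority[current] = lowering_points[(current, step)]
          return schedule


      if __name__ == "__main__":
          threads = {"T1": 3, "T2": 3}
          lowering_points = {("T1", 1): 1, ("T2", 1): 2}
          print(pct_schedule(threads, lowering_points, seed=7))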
  • Patent number: 8924983
    Abstract: A method and device for processing inter-subframe service load balancing and processing inter-cell interference is disclosed. The method includes: when processing the inter-subframe service load balancing, determining a service load of a link in a time period; determining a resource utilization ratio threshold according to the service load; and transmitting service data in each subframe according to the utilization ratio threshold. When processing inter-cell interference, performing inter-subframe service load balancing and in combination with various inter-cell interference coordination technologies, performing an interference mitigation process in one or a combination of a frequency domain, power and a space domain by interference coordination technology.
    Type: Grant
    Filed: March 21, 2011
    Date of Patent: December 30, 2014
    Assignee: China Academy of Telecommunications Technology
    Inventors: Zhiqiu Zhu, Nan Li, Qingquan Zeng, Mingyu Xu
  • Patent number: 8924981
    Abstract: Requests to be executed in the database system are received, where a plurality of the requests are provided in a queue for later execution. Priority indicators are calculated for assignment to corresponding ones of the plurality of requests in the queue, where the priority indicators are calculated based on delay times and predefined priority levels of the requests. The requests in the queue are executed in order according to the calculated priority indicators.
    Type: Grant
    Filed: November 12, 2010
    Date of Patent: December 30, 2014
    Assignee: Teradata US, Inc.
    Inventors: Douglas P. Brown, Thomas P. Julien, Louis M. Burger, Anita Richards
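    A small sketch of the approach in patent 8924981 above: a priority indicator is computed from each queued request's delay time and predefined priority level, and the queue is drained in indicator order. The specific weighting formula is an assumption, since the abstract does not give one.

      import time


      def priority_indicator(enqueued_at, priority_level, now=None, delay_weight=0.1):
          """Higher indicator = runs sooner; combines waiting time with the
          request's predefined priority level (higher level = more important)."""
          now = time.monotonic() if now is None else now
          delay = now - enqueued_at
          return priority_level + delay_weight * delay


      def drain(queue, now=None):
          """Execute queued requests in order of their calculated indicators."""
          ranked = sorted(queue,
                          key=lambda r: priority_indicator(r["enqueued_at"], r["level"], now),
                          reverse=True)
          return [r["name"] for r in ranked]


      if __name__ == "__main__":
          now = time.monotonic()
          queue = [{"name": "report", "level": 1, "enqueued_at": now - 120},
                   {"name": "dashboard", "level": 5, "enqueued_at": now - 1}]
          print(drain(queue, now))   # the long-delayed low-level request overtakes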
  • Patent number: 8924980
    Abstract: The present invention relates to a method of scheduling for multi-function radars. Specifically, the present invention relates to an efficient urgency-based scheduling method. The present invention provides a method of scheduling tasks in a radar apparatus including the steps of: receiving one or more tasks to schedule; calculating an urgency function for each said task; and storing the said tasks using said urgency function to order each said task relative to the other said tasks; wherein when a task is to be performed, the task having the highest value of urgency function is located.
    Type: Grant
    Filed: August 29, 2008
    Date of Patent: December 30, 2014
    Assignee: BAE SYSTEMS plc
    Inventor: Derek Geoffrey Finch
  • Patent number: 8924481
    Abstract: Apparatus for routing requests from a plurality of connected clients to a plurality of connected servers comprises a processor, memory and a network interface. The processor is configured to run a plurality of identical processes, each being for receiving requests and connecting each received request to a server. For each process, the processor is configured to maintain a queue of requests in memory, determine a number of queued requests that may be connected to a server, and attempt to connect this number of queued requests. The processor then accepts further requests, and if the queue is not empty, places the further requests in the queue, and if the queue is empty, attempts to connect the further requests. The processor determines the number of queued requests that may be connected to a server in dependence upon the length of the queues of all the processes and the number of available connections.
    Type: Grant
    Filed: May 25, 2011
    Date of Patent: December 30, 2014
    Assignee: Riverbed Technology, Inc.
    Inventors: Declan Sean Conlon, Gaurav Ghildyal
  • Publication number: 20140380328
    Abstract: A computer system includes: a physical computer including plural physical processors, a peripheral device connected to the plural physical processors, and a memory connected to the plural physical processors; and a management computer connected to the physical computer. The physical computer includes plural physical processor environments on each of which a virtual computer can be built, and the management computer includes an environment table indicating correspondence between plural physical processor environments each of which has the physical processor and on each of which a virtual computer can be built and an executable software program in each of the physical processor environments. When a specific software program is executed in the physical computer, a physical processor environment corresponding to a software program to be executed is selected from the plural physical processor environments by the environment table, and a virtual computer is built on the selected physical processor environment.
    Type: Application
    Filed: June 20, 2014
    Publication date: December 25, 2014
    Applicant: HITACHI, LTD.
    Inventors: Toshiyuki UKAI, Naonobu SUKEGAWA
  • Publication number: 20140380327
    Abstract: A device and method for synchronizing tasks executed in parallel on a platform comprising several computation units. The tasks are apt to be preempted by the operating system of the platform, and the device comprises at least one register and one recording module installed in the form of circuits on said platform, said recording module being suitable for storing a relationship between a condition to be satisfied regarding the value recorded by one of said registers and one or more computation tasks. The device comprises a dynamic allocation module installed in the form of circuits on the platform and configured to choose a computation unit from among the computation units of the platform when said condition is fulfilled, and to launch the execution, on the chosen computation unit, of a software function for searching for the tasks on standby awaiting the fulfillment of the condition and for notifying said tasks.
    Type: Application
    Filed: June 25, 2012
    Publication date: December 25, 2014
    Applicant: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES
    Inventors: Farhat Thabet, Yves Lhuillier, Raphael David
  • Patent number: 8918792
    Abstract: Disclosed are a workflow monitoring and control system, method, and program, wherein, when workflows are executed by passing through processing sections, each provided with business application software, in order, the service quality of the workflows can be ensured in as many workflows as possible with limited computer resources. A workflow monitoring and control system connected to a plurality of processing sections, each of which executes a respectively assigned unit process (a unit process being one of the parts constituting business data processing) by using business application software and computer resources, comprises a workflow defining means, a service quality lower limit setting means, a service quality calculation means, a quality insufficiency judging means, and a computer resource reallocation means.
    Type: Grant
    Filed: May 13, 2010
    Date of Patent: December 23, 2014
    Assignee: NEC Corporation
    Inventors: Yoshihiro Kanna, Shinji Kikuchi, Yohsuke Isozaki
  • Patent number: 8918789
    Abstract: A method of ranking workers for an incoming task includes recording a list of completed tasks in a computer data structure, extracting first attributes from the list for the tasks that were completed during a pre-determined period, generating a first feature vector for each task and worker from the first extracted attributes, training a Support Vector Machine (SVM) based on the feature vector to output a weight vector, extracting second attributes from an incoming task, generating a second feature vector for each worker based on the second extracted attributes, and ranking the workers using the second feature vectors and the weight vector. The first attributes may be updated during a subsequent period to re-train the SVM on updated first feature vectors to generate an updated weight vector. The workers may be re-ranked based on the second feature vectors and the updated weight vector. Accordingly, the feature vectors are dynamic.
    Type: Grant
    Filed: October 26, 2011
    Date of Patent: December 23, 2014
    Assignee: International Business Machines Corporation
    Inventors: Maira Athanazio de Cerqueira Gatti, Ricardo Guimaraes Herrmann, David Loewenstern, Florian Pinel, Larisa Shwartz
  • Patent number: 8918798
    Abstract: Embodiments relate to systems and methods for a shared object lock under state machine control. An operating system or virtual machine environment can host a set of multiple executing threads, and provide those threads with mutual access to one or more objects such as storage objects, memory objects, or others. The threads can independently request that the object be locked or unlocked, and the locked or unlocked state can be shared between the threads. Rather than communicate with the object(s) directly, in embodiments the threads communicate with a state machine that in turn controls the state of the object(s). When a request to change the state of the object(s) is received, the state machine can permit the object(s) to change between locked, unlocked, or other states based on the current state of the machine and the received message. Contention between threads can be reduced or eliminated.
    Type: Grant
    Filed: August 29, 2008
    Date of Patent: December 23, 2014
    Assignee: Red Hat, Inc.
    Inventor: David Lloyd
  • Patent number: 8918788
    Abstract: The present invention provides a scheduling method for a data processing system comprising at least one physical CPU, and one or more virtual machines each assigned to one or more virtual CPUs, the method comprising: a first scheduling step in which one of said virtual machines is elected to run on said physical CPU; and a second scheduling step in which at least one of the virtual CPUs assigned to the elected virtual machine is elected to run on said physical CPU. The second scheduling step is applied to the virtual machine only. When a virtual machine instance is elected to run on a given CPU, the second level scheduling determines the virtual CPU instance to run. The second level scheduling is global and can cause a virtual CPU migration from one physical CPU to another. In order to ensure correct task scheduling at guest level, virtually equivalent (in terms of calculation power) virtual CPUs should be provided to the scheduler.
    Type: Grant
    Filed: November 16, 2010
    Date of Patent: December 23, 2014
    Assignee: Virtuallogix SA
    Inventor: Vladimir Grouzdev
  • Patent number: 8918787
    Abstract: A management entity for managing the execution priority of processes in a computing system, the management entity being configured to, in response to activation of a pre-stored profile defining execution priorities for each of a plurality of processes, cause those processes to be executed by the computing system in accordance with the respective priorities defined in the active profile.
    Type: Grant
    Filed: December 7, 2007
    Date of Patent: December 23, 2014
    Assignee: Nokia Corporation
    Inventor: Dejan Ostojic
  • Publication number: 20140373021
    Abstract: An operating system provides a pool of worker threads servicing multiple queues of requests at different priority levels. A concurrency controller limits the number of currently executing threads. The system tracks the number of currently executing threads above each priority level, and preempts operations of lower priority worker threads in favor of higher priority worker threads. A system can have multiple pools of worker threads, with each pool having its own priority queues and concurrency controller. A thread also can change its priority mid-operation. If a thread becomes lower priority and is currently active, then steps are taken to ensure priority inversion does not occur. In particular, the current thread for the now lower priority item can be preempted by a thread for a higher priority item and the preempted item is placed in the lower priority queue.
    Type: Application
    Filed: June 14, 2013
    Publication date: December 18, 2014
    Inventors: Pedro Teixeira, Arun Kishan
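    A condensed Python sketch of a worker pool servicing priority queues under a concurrency limit, in the spirit of publication 20140373021 above; per-priority-level tracking and priority-inversion handling are omitted, and all names are assumptions.

      import heapq
      import threading


      class PriorityWorkerPool:
          """Worker threads pull the highest-priority queued item, bounded by a
          concurrency controller (a semaphore limiting in-flight work)."""

          def __init__(self, max_concurrency=2, workers=4):
              self.heap = []                                  # (priority, seq, fn)
              self.lock = threading.Lock()
              self.items = threading.Semaphore(0)
              self.concurrency = threading.Semaphore(max_concurrency)
              self.seq = 0
              for _ in range(workers):
                  threading.Thread(target=self._run, daemon=True).start()

          def submit(self, priority, fn):
              with self.lock:
                  self.seq += 1
                  heapq.heappush(self.heap, (priority, self.seq, fn))
              self.items.release()

          def _run(self):
              while True:
                  self.items.acquire()            # wait for a queued item
                  with self.concurrency:          # enforce the concurrency limit
                      with self.lock:
                          _, _, fn = heapq.heappop(self.heap)   # lowest number = highest priority
                      fn()


      if __name__ == "__main__":
          done = threading.Event()
          pool = PriorityWorkerPool()
          pool.submit(10, lambda: print("low-priority item"))
          pool.submit(0, lambda: print("high-priority item"))
          pool.submit(0, done.set)
          done.wait(timeout=1)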
  • Publication number: 20140373022
    Abstract: A method for performing instruction scheduling in an out-of-order microprocessor pipeline is disclosed. The method comprises selecting a first set of instructions to dispatch from a scheduler to an execution module, wherein the execution module comprises two types of execution units. The first type of execution unit executes both a first and a second type of instruction and the second type of execution unit executes only the second type. Next, the method comprises selecting a second set of instructions to dispatch, which is a subset of the first set and comprises only instructions of the second type. Next, the method comprises determining a third set of instructions, which comprises instructions not selected as part of the second set. Finally, the method comprises dispatching the second set for execution using the second type of execution unit and dispatching the third set for execution using the first type of execution unit.
    Type: Application
    Filed: December 16, 2013
    Publication date: December 18, 2014
    Applicant: Soft Machines, Inc.
    Inventor: Nelson N. CHAN
  • Publication number: 20140373023
    Abstract: A management server specifies processes that make exclusive control requests of files in a predetermined time slot, based on an execution schedule of a plurality of processes. Then, the management server specifies files that are the subjects of exclusive control in the predetermined time slot, based on utilization file information indicating files that are used by the respective processes. Then, the management server determines a plurality of file management servers as destinations of exclusive control requests of the respective specified files such that the number of exclusive control requests to be transmitted in the predetermined time slot to each of the file management servers, which is configured to perform exclusive control of a file, is not greater than a predetermined number of exclusive control requests.
    Type: Application
    Filed: April 11, 2014
    Publication date: December 18, 2014
    Applicant: FUJITSU LIMITED
    Inventors: YUTAKA ARAKAWA, Hisashi Sawada, Hiroyoshi Okada, Yasumi Izutani
  • Patent number: 8914803
    Abstract: A mechanism for flow control-based virtual machine (VM) request queuing is disclosed. A method of the invention includes implementing a pass-through mode for handling of one or more requests sent to a hypervisor by a virtual machine (VM) managed by the hypervisor, determining that a number of outstanding requests associated with the VM has exceeded a first threshold, implementing a queued mode for handling the one or more requests sent to the hypervisor from the VM, determining that a number of outstanding requests associated with the VM has fallen below a second threshold, implementing the pass-through mode for handling the one or more requests sent to the hypervisor from the VM, and repeating the implementing and determining as long as the VM continues to send requests to the hypervisor.
    Type: Grant
    Filed: August 25, 2011
    Date of Patent: December 16, 2014
    Assignee: Red Hat Israel, Ltd.
    Inventor: Michael Tsirkin
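    A brief Python sketch of the two-threshold switch between pass-through and queued handling that patent 8914803 above describes; the threshold values, class name, and callback shape are assumptions.

      from collections import deque


      class VmRequestGate:
          """Switches a VM between pass-through and queued request handling based on
          the number of outstanding requests, with separate up/down thresholds."""

          def __init__(self, high_threshold=8, low_threshold=2):
              self.high = high_threshold
              self.low = low_threshold
              self.outstanding = 0
              self.queued_mode = False
              self.backlog = deque()

          def submit(self, request, dispatch):
              if self.queued_mode:
                  self.backlog.append(request)     # queued mode: hold the request
                  return
              self._dispatch(request, dispatch)

          def _dispatch(self, request, dispatch):
              self.outstanding += 1
              dispatch(request)                    # pass-through to the hypervisor
              if self.outstanding > self.high:
                  self.queued_mode = True          # too many in flight: start queueing

          def complete(self, dispatch):
              self.outstanding -= 1
              if self.queued_mode and self.outstanding < self.low:
                  self.queued_mode = False         # drained enough: resume pass-through
                  while self.backlog and not self.queued_mode:
                      self._dispatch(self.backlog.popleft(), dispatch)


      if __name__ == "__main__":
          gate = VmRequestGate(high_threshold=2, low_threshold=1)
          sent = []
          for i in range(4):
              gate.submit(f"io-{i}", sent.append)
          for _ in range(4):
              gate.complete(sent.append)
          print(sent)                              # all four requests eventually dispatched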
  • Patent number: 8914169
    Abstract: A communication system for controlling sharing of data across a plurality of subsystems on a locomotive, the communication system including an open defined interface unit configured so that a plurality of applications may access locomotive control system data in a common defined manner with predictable results.
    Type: Grant
    Filed: October 19, 2012
    Date of Patent: December 16, 2014
    Assignee: General Electric Company
    Inventors: Todd Goodermuth, Richard Hooker
  • Patent number: 8914806
    Abstract: A virtual storage management method that can increase the overall processing speed while preventing a processor from being overloaded. A request for acquisition of a memory area in a primary storage device is received from a process executed by a processor. It is determined whether or not the process that has made the acquisition request is a utility process executable in cooperation with another process. Control is provided so as to restrict swap-out of the utility process when it is determined that the process that has made the received acquisition request is a utility process executable in cooperation with another process, and a process cooperating with the utility process executed by the processor is a preferred process of which swap-out is restricted, and a processor utilization of the preferred process is greater than a predetermined value.
    Type: Grant
    Filed: February 24, 2010
    Date of Patent: December 16, 2014
    Assignee: Canon Kabushiki Kaisha
    Inventor: Akira Ishikawa
  • Publication number: 20140366032
    Abstract: Event processing is prioritized based on system workload. A time constraint attribute is defined in an event rule. The event rule uses one or more events. An event processing system is monitored to determine when the system is under a predefined level of stress. If the system is determined to be under the predefined level of stress, the time constraint attribute in the event rule is used to establish when the processing of a received event used in an event rule must be carried out.
    Type: Application
    Filed: June 5, 2014
    Publication date: December 11, 2014
    Inventors: David Granshaw, Samuel T. Massey, Daniel J. McGinnes, Martin A. Ross, Richard G. Schofield, Craig H. Stirling
  • Patent number: 8909764
    Abstract: There is provided a method of scheduling requests from a plurality of services to at least one data storage resource. The method comprises receiving, on a computer system, service requests from said plurality of services. The service requests comprise metadata specifying a service ID and a data size of payload data associated with said service request, and at least some of said service IDs have service throughput metadata specifying a required service throughput associated therewith. The method further includes arranging, in a computer system, said requests into FIFO throttled queues based on said service ID and then setting a deadline for processing of a request in a throttled queue. The deadline is selected in dependence upon the size of the request and the required service throughput associated therewith. Then, the deadline of each throttled queue is monitored and, if a request in a throttled queue has reached or exceeded the deadline the request is processed in a data storage resource.
    Type: Grant
    Filed: July 28, 2011
    Date of Patent: December 9, 2014
    Assignee: Xyratex Technology Limited
    Inventor: Ganesan Umanesan
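    A compact sketch of the deadline computation described in patent 8909764 above: requests are grouped into per-service FIFO throttled queues, and each request's deadline is derived from its payload size and the service's required throughput. Names, units, and the due-check loop are assumptions.

      import time
      from collections import defaultdict, deque


      class ThrottledScheduler:
          """Per-service FIFO queues; a request becomes due once its deadline passes."""

          def __init__(self, required_throughput_bps):
              self.required_bps = required_throughput_bps   # service_id -> bytes/sec
              self.queues = defaultdict(deque)

          def submit(self, service_id, payload_bytes, now=None):
              now = time.monotonic() if now is None else now
              # Deadline: the time by which this payload must move to sustain the
              # service's required throughput.
              deadline = now + payload_bytes / self.required_bps[service_id]
              self.queues[service_id].append((deadline, payload_bytes))

          def due_requests(self, now=None):
              """Return requests at the head of any queue whose deadline has passed."""
              now = time.monotonic() if now is None else now
              due = []
              for service_id, q in self.queues.items():
                  while q and q[0][0] <= now:
                      due.append((service_id, q.popleft()))
              return due


      if __name__ == "__main__":
          sched = ThrottledScheduler({"backup": 1_000_000, "metrics": 10_000_000})
          t0 = 0.0
          sched.submit("backup", payload_bytes=500_000, now=t0)    # due at t0 + 0.5s
          sched.submit("metrics", payload_bytes=50_000, now=t0)    # due at t0 + 0.005s
          print(sched.due_requests(now=t0 + 0.1))                  # only the metrics request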
  • Publication number: 20140359632
    Abstract: A priority-based scheduling and execution of threads may enable the completion of higher-priority tasks above lower-priority tasks. Occasionally, a high-priority thread may request a resource that has already been reserved by a lower-priority thread, and the higher-priority thread may be blocked until the lower-priority thread relinquishes the reservation. Such prioritization may be acceptable if the lower-priority thread is able to execute comparatively unimpeded, but in some scenarios, the lower-priority thread may execute at a lower priority than a third thread that also has a lower priority than the high-priority thread. In this scenario, the third thread is effectively but incorrectly prioritized above the high-priority thread.
    Type: Application
    Filed: May 31, 2013
    Publication date: December 4, 2014
    Inventors: Arun Upadhyaya Kishan, Neill Michael Clift, Mehmet Iyigun, Yevgeniy Bak, Syed Aunn Hasan Raza
  • Patent number: 8904451
    Abstract: This disclosure relates to methods and systems for queuing events. In one aspect, a method is disclosed that receives or creates an event and inserts the event into a queue. The method determines at least one property of the event and associates a priority with the event based on the property. The method then processes the event in accordance with its priority.
    Type: Grant
    Filed: April 13, 2012
    Date of Patent: December 2, 2014
    Assignee: Theplatform, LLC
    Inventors: Paul Meijer, Mark Hellkamp
  • Patent number: 8904399
    Abstract: A method and system for executing a plurality of threads are described. The method may include mapping a thread specified priority value associated with a dormant thread to a thread quantized priority value associated with the dormant thread if the dormant thread becomes ready to run. The method may further include adding the dormant thread to a ready to run queue and updating the thread quantized priority value. A thread quantum value associated with the dormant thread may also be updated, or a combination of the quantum value and quantized priority value may be both updated.
    Type: Grant
    Filed: December 9, 2010
    Date of Patent: December 2, 2014
    Assignee: QUALCOMM Incorporated
    Inventors: Steven S. Thomson, Paul R. Johnson, Chirag D. Shah, Ryan C. Michel
  • Patent number: 8904393
    Abstract: Provided are techniques for increasing transaction processing throughput. A transaction item with a message identifier and a session identifier is obtained. The transaction item is added to the earliest aggregated transaction in a list of aggregated transactions in which no other transaction item has the same session identifier. A first aggregated transaction in the list of aggregated transactions that has met execution criteria is executed. In response to determining that the aggregated transaction is not committing, the aggregated transaction is broken up into multiple smaller aggregated transactions, and a target size of each aggregated transaction is adjusted based on measurements of system throughput.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Michael James Beckerle, Michael John Carney
  • Patent number: 8904395
    Abstract: Systems and methods for scheduling events in a virtualized computing environment are provided. In one embodiment, the method comprises scheduling one or more events in a first event queue implemented in a computing environment, in response to determining that the number of events in the first event queue is greater than a first threshold value, wherein the first event queue comprises a first set of events received for the purpose of scheduling, wherein said first set of events remain unscheduled; mapping the one or more events in the first event queue to one or more server resources in a virtualized computing environment; and receiving a second set of events included in a second event queue, wherein one or more events in the second set of events are defined as having a higher priority than one or more events in the first event queue that have or have not yet been scheduled.
    Type: Grant
    Filed: July 26, 2010
    Date of Patent: December 2, 2014
    Assignee: International Business Machines Corporation
    Inventors: Ofer Biran, Tirtsa Hochberg, Michael Massin, Gil Rapaport, Yossi Shiloach, Segev Eliezer Wasserkrug
  • Publication number: 20140351819
    Abstract: A method of determining a multi-agent schedule includes defining a well-formed, non-preemptive task set that includes a plurality of tasks, with each task having at least one subtask. Each subtask is associated with at least one resource required for performing that subtask. In accordance with the method, an allocation, which assigns each task in the task set to an agent, is received and a determination is made, based on the task set and the allocation, as to whether a subtask in the task set is schedulable at a specific time. A system for implementing the method is also provided.
    Type: Application
    Filed: May 22, 2013
    Publication date: November 27, 2014
    Inventors: Julie Ann Shah, Matthew Craig Gombolay
  • Publication number: 20140351820
    Abstract: An apparatus and method for managing stream processing tasks are disclosed. The apparatus includes a task management unit and a task execution unit. The task management unit controls and manages the execution of assigned tasks. The task execution unit executes the tasks in response to a request from the task management unit, collects a memory load state and task execution frequency characteristics based on the execution of the tasks, detects low-frequency tasks based on the execution frequency characteristics if it is determined that a shortage of memory has occurred based on the memory load state, assigns rearrangement priorities to the low-frequency tasks, and rearranges the tasks based on the assigned rearrangement priorities.
    Type: Application
    Filed: March 26, 2014
    Publication date: November 27, 2014
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventor: Myung-Cheol LEE
  • Patent number: 8898794
    Abstract: One embodiment of a computer-implemented data structure synchronization mechanism comprises an interface for accessing a data structure and storing ownership data in a shared memory location. The method further comprises denying write operations if the thread attempting the write operation is not designated as the owner thread by said ownership data. The method further comprises denying requests to modify the ownership data if the thread making the request is not designated as the owner thread by said ownership data. The method further comprises effecting a write fence in the context of the thread making the request to modify ownership data prior to modifying the ownership data. Other embodiments are described.
    Type: Grant
    Filed: September 6, 2011
    Date of Patent: November 25, 2014
    Inventor: Andrei Teodor Borac