Priority Scheduling Patents (Class 718/103)
-
Patent number: 8850440
Abstract: Processing requests may be routed between a plurality of runtime environments, based on whether or not program(s) required for completion of the processing requests is/are loaded in a given runtime environment. Cost measures may be used to compare costs of processing a request in a local runtime environment and of processing the request at a non-local runtime environment.
Type: Grant
Filed: March 6, 2012
Date of Patent: September 30, 2014
Assignee: International Business Machines Corporation
Inventors: Paul Kettley, Daniel N. Millwood, Geoffrey S. Pirie
-
Storage subsystem device driver scheduling I/O servicing according to priority of identified process
Patent number: 8850439
Abstract: Systems, methods, and apparatus to identify and prioritize application processes in one or more subsystems. Some embodiments identify applications and processes associated with each application executing on a system, apply one or more priority rules to the identified applications and processes to generate priority information, and transmit the priority information to a subsystem. The subsystem then matches received requests with the priority information and services the processes according to the priority information.
Type: Grant
Filed: August 6, 2010
Date of Patent: September 30, 2014
Assignee: Intel Corporation
Inventors: Brian Dees, Knut Grimsrud
-
Patent number: 8850441
Abstract: A computer-implemented method executes a plurality of tasks, each task comprising threads and each task being assigned a priority from 1 to a whole number greater than 1; each thread of a task is assigned the same priority as the task, and each thread is executed by a processor. The method also provides locking and unlocking arranged to lock and unlock data stored by a storage device in response to such a request from a thread. A method of operating the system comprises maintaining a queue of threads that require access to locked data, maintaining an array comprising, for each priority, duration and/or throughput information for threads of that priority, and setting a wait flag for a priority in the array according to a predefined algorithm calculated from the duration and/or throughput information in the array.
Type: Grant
Filed: March 7, 2013
Date of Patent: September 30, 2014
Assignee: International Business Machines Corporation
Inventor: Gerald Martyn Worsfold Allen
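The mechanism this abstract describes — a queue of threads waiting on locked data plus a per-priority array of duration statistics that drives a wait flag — can be sketched as a toy single-threaded model. All names are hypothetical, and the patent's "predefined algorithm" is replaced here by a simple average-wait threshold:

```python
from collections import deque

class PriorityLockManager:
    """Toy model: threads of priority 1..n wait for locked data; a wait flag
    for a priority is set when that priority's average wait exceeds a cap."""

    def __init__(self, num_priorities, max_avg_wait):
        self.queue = deque()  # (thread_id, priority) of threads awaiting locked data
        self.stats = {p: [] for p in range(1, num_priorities + 1)}  # wait durations
        self.wait_flag = {p: False for p in range(1, num_priorities + 1)}
        self.max_avg_wait = max_avg_wait

    def record_wait(self, thread_id, priority, duration):
        self.queue.append((thread_id, priority))
        self.stats[priority].append(duration)
        # Hypothetical stand-in for the patent's "predefined algorithm":
        avg = sum(self.stats[priority]) / len(self.stats[priority])
        self.wait_flag[priority] = avg > self.max_avg_wait

mgr = PriorityLockManager(num_priorities=3, max_avg_wait=5.0)
mgr.record_wait("t1", 1, 2.0)   # average wait 2.0, flag stays clear
mgr.record_wait("t2", 1, 10.0)  # average wait 6.0, flag set for priority 1
print(mgr.wait_flag[1])
```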
-
Publication number: 20140289732
Abstract: Approaches that manage energy in a data center are provided. In one embodiment, there is an energy management tool, including an analysis component configured to determine a current energy profile of each of a plurality of systems within the data center, the current energy profile comprising an overall rating expressed as an integer value, the overall rating calculated based on a current workload usage and environmental conditions surrounding each of the plurality of systems; and a priority component configured to prioritize a routing of a workload to a set of systems from the plurality of systems within the data center having the least amount of energy present based on a comparison of the overall ratings for each of the plurality of systems within the data center.
Type: Application
Filed: June 9, 2014
Publication date: September 25, 2014
Inventors: Christopher J. Dawson, Vincenzo V. Diluoffo, Rick A. Hamilton, II, Michael D. Kendzierski
-
Patent number: 8843931
Abstract: A computer system determines a first criticality relating to frequency of execution of computer programs, a second criticality relating to frequency of execution of transactions, a third criticality relating to a number of users who execute the transactions, a fourth criticality relating to programs that modify the database tables having a large change in data, and a fifth criticality relating to the amount of time that each computer program is executed and the amount of time that each transaction is executed. The system determines intersections among the criticalities, and assigns a weighted value to each of the intersections. The system determines an overall criticality for a particular computer program or a particular transaction. The overall criticality is a function of the number of intersections in which the particular computer program or the particular transaction appears and the weighted values assigned to the intersections.
Type: Grant
Filed: June 29, 2012
Date of Patent: September 23, 2014
Assignee: SAP AG
Inventors: Bernd Sieren, Bjoern Panter, Dominik Held, Juergen Mahler, Mahadevan Venkata, Thomas Fischer
-
Publication number: 20140282576
Abstract: An apparatus for high-performance parallel computation, includes plural computation nodes, each having dispatch units, memories in communication with the dispatch units, and processors, each of which is in communication with the memories and the dispatch units. Each dispatch unit is configured to recognize, as ready for execution, one or more computational tasks that have become ready for execution as a result of counted remote writes into the memories. Each of the dispatch units is configured to receive a dispatch request from a processor and to determine whether there exist one or more computational tasks that are both ready and available for execution by the processor.
Type: Application
Filed: March 14, 2014
Publication date: September 18, 2014
Inventors: J.P. Grossman, Jeffrey S. Kuskin
-
Publication number: 20140282575
Abstract: A method for performing dynamic port remapping during instruction scheduling in an out of order microprocessor is disclosed. The method comprises selecting and dispatching a plurality of instructions from a plurality of select ports in a scheduler module in a first clock cycle. Next, it comprises determining if a first physical register file unit has capacity to support instructions dispatched in the first clock cycle. Further, it comprises supplying a response back to logic circuitry between the plurality of select ports and a plurality of execution ports, wherein the logic circuitry is operable to re-map select ports in the scheduler module to execution ports based on the response. Finally, responsive to a determination that the first physical register file unit is full, the method comprises re-mapping at least one select port connecting with an execution unit in the first physical register file unit to a second physical register file unit.
Type: Application
Filed: December 10, 2013
Publication date: September 18, 2014
Applicant: Soft Machines, Inc.
Inventor: Nelson N. CHAN
-
Publication number: 20140282574
Abstract: Systems and methods for implementing constrained data-driven parallelism may provide programmers with mechanisms for controlling the execution order and/or interleaving of tasks spawned during execution. For example, a programmer may define a task group that includes a single task, and the single task may define a direct or indirect trigger that causes another task to be spawned (e.g., in response to a modification of data specified in the trigger). Tasks spawned by a given task may be added to the same task group as the given task. A deferred keyword may control whether a spawned task is to be executed in the current execution phase or its execution is to be deferred to a subsequent execution phase for the task group. Execution of all tasks executing in the current execution phase may need to be complete before the execution of tasks in the next phase can begin.
Type: Application
Filed: October 7, 2013
Publication date: September 18, 2014
Applicant: Oracle International Corporation
Inventors: Virendra J. Marathe, Yosef Lev, Victor M. Luchangco
-
Publication number: 20140282572
Abstract: A method for assigning tasks comprises receiving a set of tasks, modifying a deadline for each task based on execution ordering relationship of the tasks, ordering the tasks in increasing order based on the modified deadlines for the tasks, partitioning the ordered tasks using one of non-preemptive scheduling and preemptive scheduling based on a type of multicore processing environment, and assigning the partitioned tasks to one or more cores of a multicore electronic device based on results of the partitioning.
Type: Application
Filed: March 14, 2013
Publication date: September 18, 2014
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jaeyeon Kang
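The abstract does not state the exact deadline-modification rule, but a common precedence-aware rule tightens a task's deadline so that it finishes early enough for its successors to meet theirs. A minimal sketch under that assumption (all names hypothetical, not the claimed method):

```python
def modify_deadlines(tasks, succ):
    """tasks: {name: (exec_time, deadline)}; succ: {name: [successor names]}.
    Tighten each task's deadline to min(d_t, d_s - c_s) over its successors,
    iterating to a fixed point over the (assumed acyclic) task graph."""
    d = {t: deadline for t, (_, deadline) in tasks.items()}
    changed = True
    while changed:
        changed = False
        for t in tasks:
            for s in succ.get(t, []):
                bound = d[s] - tasks[s][0]   # leave room for successor s
                if bound < d[t]:
                    d[t] = bound
                    changed = True
    return d

tasks = {"a": (2, 10), "b": (3, 8)}   # (exec_time, deadline); a precedes b
succ = {"a": ["b"]}
d = modify_deadlines(tasks, succ)
order = sorted(tasks, key=lambda t: d[t])   # increasing modified deadline
print(d, order)
```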
-
Publication number: 20140282573
Abstract: Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is resolved between the first application and at least a second application.
Type: Application
Filed: March 15, 2013
Publication date: September 18, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Adam T. Clark, Michael T. Kalmbach, John E. Petri, Kevin Wendzel
-
Patent number: 8839257
Abstract: Command sequencing may be provided. Upon receiving a plurality of action requests, an ordered queue comprising at least some of the plurality of actions may be created. The actions may then be performed in the queue's order.
Type: Grant
Filed: November 22, 2011
Date of Patent: September 16, 2014
Assignee: Microsoft Corporation
Inventors: Andrey Lukyanov, Rajmohan Rajagopalan, Shane Brady
-
Patent number: 8839256
Abstract: A novel and useful system and method of improving the utilization of a special purpose accelerator in a system incorporating a general purpose processor. In some embodiments, the current queue status of the special purpose accelerator is periodically monitored using a background monitoring process/thread and the current queue status is stored in a shared memory. A shim redirection layer added a priori to a library function task determines at runtime and in user space whether to execute the library function task on the special purpose accelerator or the general purpose processor. At runtime, using the shim redirection layer and based on the current queue status, it is determined whether to execute the library function task on the special purpose accelerator or on the general purpose processor.
Type: Grant
Filed: June 9, 2010
Date of Patent: September 16, 2014
Assignee: International Business Machines Corporation
Inventors: Heather D. Achilles, Giora Biran, Amit Golander, Nancy A. Greco
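The shim idea — a user-space wrapper that consults monitored queue status and routes a library call to the accelerator or to a CPU fallback, with no kernel transition — might look like this in outline. The queue limit and the shared-status dictionary are assumptions; a real implementation would read accelerator state from shared memory populated by the background monitor:

```python
import functools

ACCEL_QUEUE_LIMIT = 4                       # hypothetical threshold
shared_status = {"accel_queue_depth": 0}    # stand-in for the monitored shared memory

def shim(accel_impl, cpu_impl):
    """Wrap a library function so each call is routed at runtime, in user
    space, based on the accelerator's current queue depth."""
    @functools.wraps(cpu_impl)
    def wrapper(*args, **kwargs):
        if shared_status["accel_queue_depth"] < ACCEL_QUEUE_LIMIT:
            return accel_impl(*args, **kwargs)   # accelerator has headroom
        return cpu_impl(*args, **kwargs)         # accelerator busy: run on CPU
    return wrapper

# Hypothetical library function with two backends:
compress = shim(lambda s: ("accel", s), lambda s: ("cpu", s))
print(compress("x"))
shared_status["accel_queue_depth"] = 9
print(compress("x"))
```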
-
Patent number: 8832700
Abstract: A central manager receives tick subscription requests from subscribers, including a requested period and an allowable variance. The manager selects a group period for a group of requests, based on requested period(s) and allowable variance(s). In some cases, the group period is not a divisor of every requested period but nonetheless provides at least one tick within the allowable variance of each requested period. Ticks may be issued by invoking a callback function. Ticks may be issued in a priority order based on the subscriber's category, e.g., whether it is a user-interface process. An application platform may send a tick subscription request on behalf of an application process, e.g., a mobile device platform may submit subscription requests for processes which execute on a mobile computing device. Tick subscription requests may be sent during application execution, e.g., while the application's user interface is being built or modified.
Type: Grant
Filed: September 29, 2010
Date of Patent: September 9, 2014
Assignee: Microsoft Corporation
Inventors: Nimesh Amin, Alan Chun Tung Liu
-
Patent number: 8832694
Abstract: A system and method for the dynamic allocation of resources based on a multi-phase negotiation mechanism. A resource allocation decision can be made based on an index value computed by a selection index function. A negotiation process can be performed based on a schedule, a number of resources, and a price of resources. A user requesting a resource for a low priority task can negotiate based on the schedule, the user demanding the resource for a medium priority task can negotiate based on the schedule and/or the number of resources, and finally the user requesting the resource for a high priority job can successfully negotiate based on per unit resource price. The multi-phase negotiation mechanism motivates the users to be cooperative among themselves and improves a cooperative behavior coefficient and an overall user satisfaction rate.
Type: Grant
Filed: December 20, 2011
Date of Patent: September 9, 2014
Assignee: Xerox Corporation
Inventors: Dhanwant Singh Kang, Hua Liu, Tong Sun
-
Patent number: 8832707
Abstract: An attribute of a descriptor associated with a task informs a runtime environment of which instructions a processor is to run to schedule a plurality of resources for completion of the task in accordance with a level of quality of service in a service level agreement.
Type: Grant
Filed: December 21, 2009
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Daniel J. Henderson, Prabhakar N. Kudva, Naresh Nayar, Pia Naoko Sanda, David William Siegel, James Van Oosten, James Xenidis
-
Patent number: 8832702
Abstract: A technique for scheduling execution of threads at a processor is disclosed. The technique includes executing a thread de-emphasis instruction of a thread that de-emphasizes the thread until the number of pending memory transactions, such as cache misses, associated with the thread are at or below a threshold. While the thread is de-emphasized, other threads at the processor that have a higher priority can be executed or assigned system resources. Accordingly, the likelihood of a stall in the processor is reduced.
Type: Grant
Filed: May 10, 2007
Date of Patent: September 9, 2014
Assignee: Freescale Semiconductor, Inc.
Inventors: Klas M. Bruce, Sergio Schuler, Matt B. Smittle, Michael D. Snyder, Gary L. Whisenhunt
-
Patent number: 8832492
Abstract: A method for maintaining applications may include: (1) receiving a request to recover a first application, (2) identifying a first production topology of the first application that identifies a set of resources upon which the application depends, (3) maintaining a template for transforming the first production topology of the first application into a first recovery topology for the first application, the template comprising information for mapping the first production topology to the first recovery topology, (4) applying the template to the first production topology at a first point in time to create the first recovery topology, and (5) recovering the first application to a first computing system using the first recovery topology. Various other methods, systems, and computer-readable media are also disclosed herein.
Type: Grant
Filed: January 25, 2013
Date of Patent: September 9, 2014
Assignee: Symantec Corporation
Inventors: Joshua Kruck, Aaron Christensen, Guido Westenberg, Girish Jorapurkar
-
Patent number: 8832704
Abstract: An in-car-use multi-application execution device is provided that ensures safety while maintaining convenience by securing operation of a plurality of applications and suppressing occurrence of a termination process within a limited processing capacity without degrading a real-time feature. The in-car-use multi-application execution device dynamically predicts a processing time for each application, and schedules each application on the basis of the predicted processing time. If it is determined that an application failing to complete a process in a prescribed cycle exists as a result of the scheduling, a process is executed that terminates the application or degrades the function of the application on the basis of a preset priority order.
Type: Grant
Filed: August 12, 2010
Date of Patent: September 9, 2014
Assignee: Hitachi Automotive Systems, Ltd.
Inventors: Masayuki Takemura, Shoji Muramatsu, Isao Furusawa, Shinya Ohtsuji, Takeshi Shima
-
Patent number: 8832703
Abstract: A disclosed priority control program recorded in a computer-readable medium causes a computer to execute, in job allocation for computational resources, a first step of lowering a job allocation priority of a user based on an estimated utilization amount of a job associated with the user, the job allocation priority indicating a degree of priority of the user in obtaining an allocation of the computational resource, and the estimated utilization amount being an amount of the computational resources estimated to be used for the job and being submitted to and recorded in a memory device on a job-to-job basis; and a second step of increasing the job allocation priority over time at a restoration rate which corresponds to a user-specific amount of the computational resources available for the user per unit time, the user-specific amount being recorded in the memory device on a user-to-user basis.
Type: Grant
Filed: December 17, 2008
Date of Patent: September 9, 2014
Assignee: Fujitsu Limited
Inventors: Yoshifumi Ujibashi, Kouichi Kumon
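Read literally, the abstract describes a priority that drops by a job's estimated resource usage at submission and recovers over time at a user-specific restoration rate, capped at the base level. One illustrative formula (not the patented one; all parameters hypothetical):

```python
def priority_after(base, estimated_usage, restoration_rate, elapsed):
    """Job allocation priority for one user: lowered by the submitted job's
    estimated utilization amount, restored linearly over time at the user's
    restoration rate, and never exceeding the base priority."""
    return min(base, base - estimated_usage + restoration_rate * elapsed)

print(priority_after(100, 40, 10, 0))  # right after submission
print(priority_after(100, 40, 10, 2))  # two time units later
print(priority_after(100, 40, 10, 9))  # fully restored (capped at base)
```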
-
Publication number: 20140250438
Abstract: Provided are a scheduling method in a multiprocessor apparatus and a method of assigning priorities to tasks using pseudo-deadlines in a multiprocessor apparatus. The scheduling method includes releasing tasks (510), setting relative pseudo-deadlines for the tasks such that jobs belonging to one task τa among the tasks always have higher priorities than jobs belonging to another task τb, and determining task priorities (520), and setting absolute pseudo-deadlines for jobs belonging to the tasks, and determining job priorities (530).
Type: Application
Filed: October 29, 2013
Publication date: September 4, 2014
Applicant: Korea Advanced Institute of Science and Technology
Inventors: In-Sik SHIN, Hyeong-Boo Baek, Hoon-sung Shwa
-
Patent number: 8826286
Abstract: The present invention relates to the field of enterprise network computing. In particular, it relates to monitoring workload of a workload scheduler. Information defining a plurality of test jobs of low priority is received. The test jobs have respective launch times, and are launched for execution in a data processing system in accordance with said launch times and said low execution priority. The number of test jobs executed within a pre-defined analysis time range is determined. A performance decrease warning is issued if the number of executed test jobs is lower than a predetermined threshold number. A workload scheduler discards launching of jobs having a low priority when estimating that a volume of jobs submitted with higher priority is sufficient to keep said scheduling system busy.
Type: Grant
Filed: November 27, 2012
Date of Patent: September 2, 2014
Assignee: International Business Machines Corporation
Inventor: Sergej Boris
-
Patent number: 8826285
Abstract: The object of the invention is in particular a device for execution of applications (510) in an aircraft information-processing system (500), permitting the simultaneous execution of at least two distinct applications, the said information-processing system comprising shared calculation and storage resources. The device comprises software segregation means capable of creating at least two distinct information-processing environments (505), a partition of the said calculation and storage resources being allocated to each of the said at least two environments in such a way that the execution of one of the said at least two applications in one of the said at least two environments does not have any effect on the execution of the other of the said at least two applications executed in the other of the said at least two environments. Another object of the invention is a method for employing such a device.
Type: Grant
Filed: August 14, 2009
Date of Patent: September 2, 2014
Assignee: Airbus Operations
Inventor: Francois Beltrand
-
Patent number: 8826284
Abstract: A server system having one or more processors and memory receives, from a client, a request to perform a first task. The server system determines whether a first slot in a primary task queue having a plurality of slots is available, where the first slot was selected in accordance with a slot-selection function designed to probabilistically distribute respective target slots for a plurality of successive tasks across a plurality of different non-consecutive slots in the primary task queue. In accordance with a determination that the first slot is available, the server system inserts the first task in the first slot in the primary task queue. In accordance with a determination that the first slot is unavailable, the server system inserts the first task at an entry point of a secondary task queue.
Type: Grant
Filed: March 27, 2012
Date of Patent: September 2, 2014
Assignee: Google Inc.
Inventor: Alfred R. K. Fuller
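The primary/secondary queue logic can be sketched with a hash standing in for the patent's slot-selection function (the hash choice, slot count, and names are all assumptions for illustration):

```python
import hashlib

NUM_SLOTS = 8
primary = [None] * NUM_SLOTS   # fixed-size primary task queue
secondary = []                 # secondary task queue with a single entry point

def target_slot(task_id):
    """Hash-based stand-in for the slot-selection function: successive task
    ids scatter across non-consecutive slots of the primary queue."""
    return hashlib.sha256(task_id.encode()).digest()[0] % NUM_SLOTS

def enqueue(task_id):
    s = target_slot(task_id)
    if primary[s] is None:          # slot available: insert in primary queue
        primary[s] = task_id
        return ("primary", s)
    secondary.append(task_id)       # slot taken: insert at secondary entry point
    return ("secondary", len(secondary) - 1)

enqueue("job-1")
enqueue("job-1")   # same target slot, now occupied, so it overflows
```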
-
Publication number: 20140245304
Abstract: Techniques for implicit coscheduling of CPUs to improve corun performance of scheduled contexts are described. One technique minimizes skew by implementing corun migrations, and another technique minimizes skew by implementing a corun bonus mechanism. Skew between schedulable contexts may be calculated based on guest progress, where guest progress represents time spent executing guest operating system and guest application code. A non-linear skew catch-up algorithm is described that adjusts the progress of a context when the progress falls far behind its sibling contexts.
Type: Application
Filed: May 8, 2014
Publication date: August 28, 2014
Applicant: VMWARE, INC.
Inventors: Haoqiang ZHENG, Carl A. WALDSPURGER
-
Publication number: 20140245312
Abstract: A system and method can support cooperative concurrency in a priority queue. The priority queue, which includes a calendar ring and a fast lane, can detect one or more threads that contend to claim one or more requests in the priority queue. Then, a victim thread can place a request in the fast lane in the priority queue, and release a contending thread, which proceeds to consume the request in the fast lane.
Type: Application
Filed: February 28, 2013
Publication date: August 28, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventor: Oleksandr Otenko
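A single-threaded toy model of the calendar-ring/fast-lane split is shown below, with a heap standing in for the calendar ring. The real mechanism is concurrent and lock-sensitive, so treat this purely as an illustration of the two-lane data layout:

```python
import heapq
from collections import deque

class CooperativePQ:
    """Toy model: when a 'victim' thread detects contention, it diverts its
    request to the fast lane; consumers drain the fast lane before the ring."""

    def __init__(self):
        self.calendar = []           # (deadline, request) heap = calendar ring stand-in
        self.fast_lane = deque()

    def add(self, deadline, request, contended=False):
        if contended:
            self.fast_lane.append(request)   # victim cooperates, skips the ring
        else:
            heapq.heappush(self.calendar, (deadline, request))

    def poll(self):
        if self.fast_lane:
            return self.fast_lane.popleft()  # contending thread consumes from here
        return heapq.heappop(self.calendar)[1] if self.calendar else None

q = CooperativePQ()
q.add(5, "a")                  # normal path: calendar ring
q.add(0, "b", contended=True)  # contention detected: fast lane
print(q.poll(), q.poll())      # fast lane drains first
```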
-
Publication number: 20140245311
Abstract: An adaptive partition scheduler is a priority-based scheduler that also provides execution time guarantees (fair-share). Execution time guarantees apply to threads or groups of threads when the system is overloaded. When the system is not overloaded, threads are scheduled based strictly on priority, maintaining strict real-time behavior. When the system is overloaded, threads are scheduled based on the priority of threads that are in a ready state and on the available guaranteed processor time budget of the adaptive partition associated with each thread.
Type: Application
Filed: February 25, 2013
Publication date: August 28, 2014
Applicant: QNX SOFTWARE SYSTEMS LIMITED
Inventors: Dan Dodge, Attila Danko, Sebastien Marineau-Mes, Peter van der Veen, Colin Burgess, Thomas Fletcher, Brian Stecher
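The two regimes the abstract describes — strict priority when the system has spare capacity, priority gated by remaining partition budget when overloaded — reduce to a small selection function. This sketch uses hypothetical names and is not the QNX implementation:

```python
def pick_next(threads, partitions, overloaded):
    """threads: list of (name, priority, partition, ready).
    partitions: {partition_name: remaining_budget}.
    Not overloaded: strict priority among ready threads.
    Overloaded: strict priority among ready threads whose partition
    still has guaranteed processor time budget."""
    ready = [t for t in threads if t[3]]
    if overloaded:
        ready = [t for t in ready if partitions[t[2]] > 0]
    return max(ready, key=lambda t: t[1])[0] if ready else None

threads = [("a", 10, "p1", True), ("b", 5, "p2", True)]
print(pick_next(threads, {"p1": 0, "p2": 3}, overloaded=False))  # strict priority
print(pick_next(threads, {"p1": 0, "p2": 3}, overloaded=True))   # p1 has no budget
```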
-
Publication number: 20140245314
Abstract: The present invention provides apparatus and methods to perform thermal management in a computing environment. In one embodiment, thermal attributes are associated with operations and/or processing components, and the operations are scheduled for processing by the components so that a thermal threshold is not exceeded. In another embodiment, hot and cool queues are provided for selected operations, and the processing components can select operations from the appropriate queue so that the thermal threshold is not exceeded.
Type: Application
Filed: May 2, 2014
Publication date: August 28, 2014
Applicant: Sony Computer Entertainment Inc.
Inventor: Keisuke Inoue
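The hot/cool queue embodiment can be sketched as a dispatch function that prefers thermally cheap ("cool") work when a core approaches its threshold. The threshold, margin, and names are invented for illustration:

```python
THERMAL_THRESHOLD = 80.0   # hypothetical degrees C
MARGIN = 5.0               # start backing off this far below the threshold

def next_operation(hot_q, cool_q, core_temp):
    """Pick the next operation for a core: cool work when near the thermal
    threshold, otherwise the hot queue first (falling back to cool work)."""
    if core_temp >= THERMAL_THRESHOLD - MARGIN and cool_q:
        return cool_q.pop(0)
    if hot_q:
        return hot_q.pop(0)
    return cool_q.pop(0) if cool_q else None

print(next_operation(["h1"], ["c1"], 79.0))  # near threshold: cool work
print(next_operation(["h1"], ["c1"], 40.0))  # thermally safe: hot work
```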
-
Publication number: 20140245313
Abstract: A system and method can support a concurrent priority queue. The concurrent priority queue allows a plurality of threads to interact with the priority queue. The priority queue can use a sequencer to detect and order a plurality of threads that contend for one or more requests in the priority queue. Furthermore, the priority queue operates to reduce the contention among the plurality of threads.
Type: Application
Filed: February 28, 2013
Publication date: August 28, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventor: Oleksandr Otenko
-
Patent number: 8819687
Abstract: Some embodiments of a multiprocessor system implement a virtual-time-based quality-of-service scheduling technique. In at least one embodiment of the invention, a method includes scheduling a memory request to a memory from a memory request queue in response to expiration of a virtual finish time of the memory request. The virtual finish time is based on a share of system memory bandwidth associated with the memory request. The method includes scheduling the memory request to the memory from the memory request queue before the expiration of the virtual finish time of the memory request if a virtual finish time of each other memory request in the memory request queue has not expired and based on at least one other scheduling rule.
Type: Grant
Filed: May 7, 2010
Date of Patent: August 26, 2014
Assignee: Advanced Micro Devices, Inc.
Inventors: Jaewoong Chung, Debarshi Chatterjee
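In virtual-time fair schedulers of this general style, a request's virtual finish time is its service demand divided by its share of the resource, and requests with earlier virtual finish times go first. An illustrative calculation (a common reading of the idea, not necessarily the patented formula):

```python
def virtual_finish_times(requests, total_bw):
    """requests: list of (name, size, share), shares summing to <= 1.
    Virtual finish time = virtual start + size / (share * total bandwidth):
    a request with a larger bandwidth share 'finishes' earlier in virtual time."""
    virtual_start = 0.0
    return {name: virtual_start + size / (share * total_bw)
            for name, size, share in requests}

vft = virtual_finish_times([("a", 64, 0.5), ("b", 64, 0.25)], total_bw=32)
order = sorted(vft, key=vft.get)   # earliest (first-expiring) virtual finish first
print(vft, order)
```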
-
Patent number: 8819683
Abstract: A method, apparatus, system, article of manufacture, and computer-readable storage medium provide the ability to dynamically modify a distributed computing system workflow. A grid application dynamically receives configuration information including business rules that describe execution profiles. Channels based on the one or more execution profiles are defined. Each channel is configured to execute a work request in a distributed grid compute system (based on an execution profile). A first work request is received from a requestor and includes an identity of the requestor. The first work request is evaluated and the identity of the requestor is applied to direct the first work request to the appropriate channel.
Type: Grant
Filed: August 31, 2010
Date of Patent: August 26, 2014
Assignee: Autodesk, Inc.
Inventor: Garrick D. Evans
-
Publication number: 20140237477
Abstract: Methods and systems for scheduling jobs to manycore nodes in a cluster include selecting a job to run according to the job's wait time and the job's expected execution time; sending job requirements to all nodes in a cluster, where each node includes a manycore processor; determining at each node whether said node has sufficient resources to ever satisfy the job requirements and, if no node has sufficient resources, deleting the job; creating a list of nodes that have sufficient free resources at a present time to satisfy the job requirements; and assigning the job to a node, based on a difference between an expected execution time and associated confidence value for each node and a hypothetical fastest execution time and associated hypothetical maximum confidence value.
Type: Application
Filed: April 24, 2014
Publication date: August 21, 2014
Applicant: NEC Laboratories America, Inc.
Inventors: Srihari Cadambi, Kunal Rao, Srimat Chakradhar, Rajat Phull, Giuseppe Coviello, Murugan Sankaradass, Cheng-Hong Li
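The filter/delete/assign flow can be sketched as follows. The gap metric combining execution-time and confidence differences is an assumption about how the abstract's "difference" might be scored, not the claimed formula, and all names are hypothetical:

```python
def assign(job_req, nodes):
    """nodes: {name: (free_resources, expected_exec_time, confidence)}.
    Delete the job (return None) if no node could ever satisfy it; otherwise
    pick the node minimizing the combined gap to the hypothetical fastest
    execution time at the hypothetical maximum confidence of 1.0."""
    feasible = {n: v for n, v in nodes.items() if v[0] >= job_req}
    if not feasible:
        return None                       # no node has sufficient resources
    fastest = min(t for _, t, _ in feasible.values())
    def gap(v):
        _, exec_time, conf = v
        return (exec_time - fastest) + (1.0 - conf)
    return min(feasible, key=lambda n: gap(feasible[n]))

nodes = {"n1": (8, 10.0, 0.9), "n2": (8, 10.0, 0.6), "n3": (2, 1.0, 1.0)}
print(assign(4, nodes))   # n3 lacks resources; n1 wins on confidence
```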
-
Publication number: 20140237476
Abstract: A method and apparatus that schedules and manages a background task for a device is described. In an exemplary embodiment, the device registers the background task, where the registering includes storing execution criteria for the background task. The execution criteria indicate a criterion for launching the background task and are based on a component status of the device. The device further monitors the running state of the device for an occurrence of the execution criteria. If the execution criteria occur, the device determines the available headroom of the device in order to perform the background task and launches the background task if the background task importance is greater than the available device headroom, where the background task importance is a measure of how important it is for the device to run the background task.
Type: Application
Filed: February 6, 2014
Publication date: August 21, 2014
Applicant: Apple Inc.
Inventors: Daniel Andreas Steffen, Kevin James Van Vechten
-
Publication number: 20140237478
Abstract: Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation.
Type: Application
Filed: April 24, 2014
Publication date: August 21, 2014
Inventor: Mark Henrik Sandstrom
-
Patent number: 8813082
Abstract: A method and apparatus for managing thread priority based on object creation rates. Embodiments of the invention provide a thread monitor configured to reduce the execution priority of a thread creating a sufficient number of new objects to be disruptive of system performance. Thus, although the thread may still create a large number of objects, by monitoring object creation rates and reducing the execution priority of such a thread, overall system performance may be improved. In other words, a given thread may still "misbehave," but receive fewer opportunities to do so.
Type: Grant
Filed: June 22, 2006
Date of Patent: August 19, 2014
Assignee: International Business Machines Corporation
Inventors: Eric L. Barsness, John M. Santosuosso, John J. Stecher
-
Patent number: 8813083
Abstract: A method and system to facilitate a user level application executing in a first processing unit to enqueue work or task(s) safely for a second processing unit without performing any ring transition. For example, in one embodiment of the invention, the first processing unit executes one or more user level applications, where each user level application has a task to be offloaded to a second processing unit. The first processing unit signals the second processing unit to handle the task from each user level application without performing any ring transition in one embodiment of the invention.
Type: Grant
Filed: July 1, 2011
Date of Patent: August 19, 2014
Assignee: Intel Corporation
Inventors: Robert L. Farrell, Ali-Reza Adl-Tabatabai, Altug Koker
-
Patent number: 8813085
Abstract: An embodiment or embodiments of an information handling apparatus can use an entitlement vector to simultaneously manage and activate entitlement of objects and processes to various resources independently from one another. An information handling apparatus can comprise an entitlement vector operable to specify resources used by at least one object of a plurality of objects. The information handling apparatus can further comprise a scheduler operable to schedule a plurality of threads based at least partly on entitlement as specified by the entitlement vector.
Type: Grant
Filed: October 28, 2011
Date of Patent: August 19, 2014
Assignee: Elwha LLC
Inventors: Andrew F. Glew, Daniel A. Gerrity, Clarence T. Tegreene
-
Patent number: 8813086
Abstract: A computer implemented method, system and/or computer program product schedules execution of work requests through work plan prioritization. One or more work packets are mapped to and assigned to each work request from a group of work requests. A complexity level is derived for and assigned to each work packet, and priority levels of various work requests are determined for each entity from a group of entities. A global priority for the group of work requests is then determined. The global priority and the complexity levels combine to create a priority function, which is used to schedule execution of the work requests.
Type: Grant
Filed: August 20, 2013
Date of Patent: August 19, 2014
Assignee: International Business Machines Corporation
Inventors: Saeed Bagheri, Jarir K. Chaar, Yi-Min Chee, Krishna C. Ratakonda
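The abstract says global priority and packet complexity "combine to create a priority function" without giving the combination. A weighted sum is one plausible reading; the 50/50 weighting and all names below are assumptions for illustration only:

```python
def schedule(work_requests, weight=0.5):
    """work_requests: {name: (global_priority, [packet_complexity, ...])}.
    Score each request by a weighted combination of its global priority and
    the summed complexity of its mapped work packets; highest score first."""
    def score(req):
        global_priority, complexities = req
        return weight * global_priority + (1 - weight) * sum(complexities)
    return sorted(work_requests, key=lambda n: score(work_requests[n]), reverse=True)

print(schedule({"r1": (10, [1, 2]), "r2": (4, [9, 9])}))
```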
-
Patent number: 8813084Abstract: A broadcast receiving apparatus and scheduling method thereof are provided. The broadcast receiving apparatus includes: a communication interface which performs an input-output operation of the broadcast receiving apparatus in response to a request for an input-output event from at least one of a plurality of operating systems; and a controller which processes the requested input-output event according to a priority given to the operating system that has requested the input-output event.Type: GrantFiled: May 6, 2011Date of Patent: August 19, 2014Assignee: Samsung Electronics Co., Ltd.Inventor: Young-Ho Choi
-
Patent number: 8804173Abstract: An information processing apparatus includes a storing unit, a determining unit, and a merging unit. The storing unit stores control data including apparatus identifiers identifying communication apparatuses and requested information identifiers identifying information items specified in information acquisition requests for the communication apparatuses. The determining unit determines whether a specified communication apparatus specified in a newly-received information acquisition request is already registered in the control data. If the specified communication apparatus is already registered in the control data, the merging unit determines whether a requested information identifier specified in the newly-received information acquisition request is recorded in the control data for the specified communication apparatus.Type: GrantFiled: November 17, 2011Date of Patent: August 12, 2014Assignee: Ricoh Company, Ltd.Inventor: Akira Nagamori
-
Patent number: 8806498Abstract: A scheduling apparatus and method allocate a plurality of works to a plurality of processing cores by transferring a work having no dependency on the execution completion of another work from a dependency queue to a runnable queue, transferring the work from the runnable queue to an idle one of the processing cores for execution, transferring the work executed by the one processing core to a finish queue, where the work becomes designated a finished work, and transferring a work within the dependency queue, having a dependency upon the execution completion of the finished work, to the runnable queue.Type: GrantFiled: February 9, 2011Date of Patent: August 12, 2014Assignee: Samsung Electronics Co., Ltd.Inventors: Sung-Jong Seo, Sung-Hak Lee, Dong-Woo Im, Hyo-Jung Song, Seung-Mo Cho
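The three-queue flow in this abstract (dependency queue → runnable queue → core → finish queue) can be sketched directly. All names and the work graph below are illustrative assumptions, not the patented implementation.

```python
from collections import deque

# Sketch of the described scheme: work with unmet dependencies waits in
# dependency_queue; dependency-free work moves to runnable_queue; an idle
# core executes it; finished work lands in finish_queue and releases any
# work that depended on it.

deps = {"c": {"a"}, "d": {"a", "b"}}   # work -> unfinished prerequisites
dependency_queue = deque(["c", "d"])
runnable_queue = deque(["a", "b"])     # no dependencies, runnable at once
finish_queue = []

def run_one():
    """Execute one runnable work on an idle core, then release dependents."""
    work = runnable_queue.popleft()
    finish_queue.append(work)          # work is now a "finished work"
    for w in list(dependency_queue):
        deps[w].discard(work)
        if not deps[w]:                # all prerequisites finished
            dependency_queue.remove(w)
            runnable_queue.append(w)

while runnable_queue:
    run_one()

print(finish_queue)  # → ['a', 'b', 'c', 'd'], a dependency-respecting order
```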
-
Publication number: 20140223443Abstract: A method, system and computer program product for determining a relative priority for a job. A “policy” is selected based on the job itself and the reason that the job is being executed, where the policy includes a priority range for the job and for an application. A priority for the job that is within the priority range of the job as established by the selected policy is determined based on environmental and context considerations. This job priority is then adjusted based on the priority of the application (within the priority range as established by the policy) becoming the job's final priority. By formulating a priority that more accurately reflects the true priority or importance of the job by taking into consideration the environmental and context considerations, job managers will now be able to process these jobs in a more efficient manner.Type: ApplicationFiled: February 4, 2013Publication date: August 7, 2014Applicant: International Business Machines CorporationInventors: Rohith K. Ashok, Roy F. Brabson, Michael J. Burr, Sivaram Gottimukkala, Hugh E. Hockett, Kristin R. Whetstone
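The policy-based priority computation this application describes can be approximated as below. The combination rule (average, then re-clamp to the job range) is an assumption for illustration; the publication does not specify it here.

```python
# Hedged sketch: a selected policy carries a priority range for the job
# and for the application; the job's computed priority is clamped to its
# range, then adjusted by the application's priority to become the final
# priority. Ranges, scores, and the averaging rule are invented.

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

def final_priority(policy, job_score, app_priority):
    job_lo, job_hi = policy["job_range"]
    app_lo, app_hi = policy["app_range"]
    job_priority = clamp(job_score, job_lo, job_hi)
    app_priority = clamp(app_priority, app_lo, app_hi)
    # adjustment rule (assumption): average, kept within the job range
    return clamp((job_priority + app_priority) // 2, job_lo, job_hi)

policy = {"job_range": (10, 50), "app_range": (0, 100)}
print(final_priority(policy, job_score=70, app_priority=20))  # → 35
```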
-
Patent number: 8799690Abstract: An approach that manages energy in a data center is provided. In one embodiment, there is an energy management tool, including an analysis component configured to determine an energy profile of each of a plurality of systems within the data center. The energy management tool further comprises a priority component configured to prioritize a routing of a workload to a set of systems from the plurality of systems within the data center having the least amount of energy present based on the energy profile of each of the plurality of systems within the data center.Type: GrantFiled: June 21, 2009Date of Patent: August 5, 2014Assignee: International Business Machines CorporationInventors: Christopher J. Dawson, Vincenzo V. Diluoffo, Rick A. Hamilton, Michael D. Kendzierski
-
Patent number: 8799913Abstract: A computing system, method and computer-readable medium is provided. To prevent a starvation phenomenon from occurring in a priority-based task scheduling, a plurality of tasks may be divided into a priority-based group and other groups. The groups to which the tasks belong may be changed.Type: GrantFiled: October 19, 2010Date of Patent: August 5, 2014Assignee: Samsung Electronics Co., LtdInventors: Jeong Joon Yoo, Shi Hwa Lee, Seung Won Lee, Young Sam Shin, Min Yung Son
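The anti-starvation idea in this abstract, which moves tasks between a priority-based group and other groups, can be sketched as a promotion rule. The threshold, group policy, and workload below are illustrative assumptions.

```python
# Sketch: tasks normally live in a priority-based group; a task that has
# waited too long changes groups and is served round-robin, so strict
# priority scheduling cannot starve it indefinitely.

PROMOTE_AFTER = 3  # ticks a task may wait before its group changes (assumed)

def pick_next(priority_group, fair_group, waits):
    # move starved tasks out of the priority-based group
    for name in [n for n in priority_group if waits[n] >= PROMOTE_AFTER]:
        del priority_group[name]
        fair_group.append(name)
    if fair_group:                        # the fair group is served first
        chosen = fair_group.pop(0)
    else:                                 # otherwise highest priority wins
        chosen = max(priority_group, key=priority_group.get)
        del priority_group[chosen]
    for n in priority_group:
        waits[n] += 1                     # everyone left keeps waiting
    return chosen

prio, waits, fair, order = {"lo": 1}, {"lo": 0}, [], []
for i in range(5):
    prio[f"hi{i}"] = 9                    # high-priority work keeps arriving
    waits[f"hi{i}"] = 0
    order.append(pick_next(prio, fair, waits))
print(order)  # → ['hi0', 'hi1', 'hi2', 'lo', 'hi3']: 'lo' is not starved
```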
-
Patent number: 8798791Abstract: A cloud server and method control one or more robots. The cloud server receives location information of each robot. A robot closest to a task location, where a task is to be performed, is located according to the location information. The cloud server of the data center sends a command to the located robot to move to the task location, where the command defines a task for the located robot to perform.Type: GrantFiled: August 25, 2011Date of Patent: August 5, 2014Assignee: Hon Hai Precision Industry Co., Ltd.Inventors: Shen-Chun Li, Shou-Kuo Hsu
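The dispatch step of this abstract reduces to a nearest-robot selection, sketched below. Robot names, coordinates, and the Euclidean-distance choice are assumptions for illustration.

```python
import math

# Sketch: the cloud server receives each robot's reported location and
# commands the robot nearest the task location to move there.

def nearest_robot(locations, task_xy):
    """Return the robot id whose reported location is closest to the task."""
    return min(locations, key=lambda rid: math.dist(locations[rid], task_xy))

locations = {"r1": (0.0, 0.0), "r2": (5.0, 5.0), "r3": (2.0, 1.0)}
task = (1.5, 1.5)
print(nearest_robot(locations, task))  # → r3; the move command goes to it
```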
-
Patent number: 8799916Abstract: A job profile describes characteristics of a job. A performance parameter is calculated based on the job profile, and using a value of the performance parameter, an allocation of resources is determined to assign to the job to meet a performance goal associated with a job.Type: GrantFiled: February 2, 2011Date of Patent: August 5, 2014Assignee: Hewlett-Packard Development Company, L. P.Inventors: Ludmila Cherkasova, Abhishek Verma
-
Publication number: 20140215479Abstract: A method includes, in a program that includes a defined number of job slots for data updating processing jobs, scheduling a first job in one of the slots, and executing the first job, wherein the first job includes scanning a list of additional jobs and scheduling those additional jobs for execution, further wherein a total number of the additional jobs in the program exceeds the defined number of job slots.Type: ApplicationFiled: January 31, 2013Publication date: July 31, 2014Applicant: Red Hat, Inc.Inventor: Bill Clifford Riemers
-
Publication number: 20140215480Abstract: Provided is a computer system including a first processor disposed in a first zone, a second processor disposed in a second zone, a prioritizing unit, and a scheduling unit. The prioritizing unit prioritizes the first processor and the second processor based on the thermal conditions of the first zone and the second zone, respectively. The scheduling unit schedules a task to one of the first processor and the second processor according to the priority provided by the prioritizing unit.Type: ApplicationFiled: January 30, 2014Publication date: July 31, 2014Applicant: International Business Machines CorporationInventors: Yuan Shan Hsiao, Po-Chun BJ Ko, Alex CP Lee, YuhHung Liaw
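The thermal prioritization in this application can be sketched as ranking processors by their zone's temperature and scheduling to the highest-priority idle one. Temperatures and the cooler-first rule are illustrative assumptions.

```python
# Sketch: the prioritizing unit ranks processors by the thermal condition
# of their zone (cooler zone first); the scheduling unit assigns a task
# to the highest-priority processor that is idle.

def prioritize(zone_temps):
    """Return processor ids ordered cooler-zone-first."""
    return sorted(zone_temps, key=zone_temps.get)

def schedule_task(zone_temps, busy):
    for cpu in prioritize(zone_temps):
        if cpu not in busy:
            return cpu
    return None  # no idle processor available

temps = {"cpu0": 72.0, "cpu1": 55.0}        # zone temperatures (invented)
print(schedule_task(temps, busy=set()))      # → cpu1, the cooler zone
print(schedule_task(temps, busy={"cpu1"}))   # → cpu0, falls back when busy
```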
-
Patent number: 8793365Abstract: A system and method of allocating a job submission for a computational task to a set of distributed server farms each having at least one processing entity comprising: receiving a workload request from at least one processing entity for submission to at least one of the set of distributed server farms; using at least one or more conditions associated with the computational task for accepting or rejecting at least one of the server farms to which the job submission is to be allocated; determining a server farm that can optimize the one or more conditions; and dispatching the job submission to the server farm which optimizes the at least one of the one or more conditions associated with the computational task and used for selecting the at least one of the server farms.Type: GrantFiled: March 4, 2009Date of Patent: July 29, 2014Assignee: International Business Machines CorporationInventors: Igor Arsovski, Anthony Richard Bonaccio, Hayden C. Cranford, Jr., Alfred Degbotse, Joseph Andrew Iadanza, Todd Edwin Leonard, Pradeep Thiagarajan, Sebastian Theodore Ventrone
-
Method for simultaneous scheduling of processes and offloading computation on many-core coprocessors
Publication number: 20140208327Abstract: A method is disclosed to manage a multi-processor system with one or more manycore devices, by managing real-time bag-of-tasks applications for a cluster, wherein each task runs on a single server node, and uses the offload programming model, and wherein each task has a deadline and three specific resource requirements: total processing time, a certain number of manycore devices and peak memory on each device; when a new task arrives, querying each node scheduler to determine which node can best accept the task and each node scheduler responds with an estimated completion time and a confidence level, wherein the node schedulers use an urgency-based heuristic to schedule each task and its offloads; responding to an accept/reject query phase, wherein the cluster scheduler sends the task requirements to each node and queries if the node can accept the task with an estimated completion time and confidence level; and scheduling tasks and offloads using an aging and urgency-based heuristic, wherein the aging guarante…Type: ApplicationFiled: April 6, 2013Publication date: July 24, 2014Applicant: NEC Laboratories America, Inc.Inventors: Srihari Cadambi, Kunal Rao, Srimat T. Chakradhar, Rajat Phull, Giuseppe Coviello, Murugan Sankaradass, Cheng-Hong Li
-
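One plausible form of the urgency-based heuristic mentioned in this abstract is sketched below: tasks with less slack (deadline minus remaining processing time) are more urgent, and waiting time ("aging") boosts urgency. The formula, weights, and task values are speculative; the publication's exact heuristic is not reproduced here.

```python
# Speculative sketch of an aging + urgency heuristic consistent with the
# abstract. All parameters and task data are invented for illustration.

def urgency(task, now, age_weight=0.1):
    slack = task["deadline"] - now - task["remaining"]
    return -slack + age_weight * (now - task["arrived"])  # aging term

def next_task(tasks, now):
    """Pick the most urgent task to schedule next."""
    return max(tasks, key=lambda t: urgency(t, now))

tasks = [
    {"name": "t1", "deadline": 100, "remaining": 30, "arrived": 0},
    {"name": "t2", "deadline": 50, "remaining": 30, "arrived": 5},
]
print(next_task(tasks, now=10)["name"])  # → t2, the tighter deadline
```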
Publication number: 20140208328Abstract: A method for terminal acceleration, a terminal and a storage medium are provided. The method includes the steps of: detecting the memory resource occupied by all running application processes; determining whether the memory resource occupied by all running application processes reaches or exceeds a preset memory threshold; and, when it does, terminating at least one of the running application processes according to preset terminating conditions. In this way, the terminal is automatically accelerated according to the current utilization of its memory and its running application processes, the operating speed of the terminal is improved, and the functions of the terminal are further diversified.Type: ApplicationFiled: March 26, 2014Publication date: July 24, 2014Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITEDInventor: Qiang CHEN
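The threshold-and-terminate loop this application describes can be sketched as follows. The threshold value, the process table, and the largest-consumer-first terminating condition are all illustrative assumptions.

```python
# Sketch: when total memory used by running app processes reaches a
# preset threshold, terminate processes per a preset condition (here:
# biggest consumer first, an assumed condition) until usage drops back.

MEMORY_THRESHOLD_MB = 512  # preset threshold (invented value)

def accelerate(processes):
    """Terminate processes, largest first, until usage is below threshold."""
    killed = []
    while processes and sum(processes.values()) >= MEMORY_THRESHOLD_MB:
        victim = max(processes, key=processes.get)
        processes.pop(victim)
        killed.append(victim)
    return killed

running = {"game": 300, "browser": 200, "chat": 50}  # MB used, invented
print(accelerate(running))  # → ['game']; 550 MB drops to 250 MB
print(running)              # → {'browser': 200, 'chat': 50}
```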