Priority Scheduling Patents (Class 718/103)
  • Publication number: 20150121388
    Abstract: A task scheduling method is applied to a heterogeneous multi-core processor system. The heterogeneous multi-core processor system has at least one first processor core and at least one second processor core. The task scheduling method includes: referring to task priorities of tasks of the heterogeneous processor cores to identify at least one first task of the tasks that belongs to a first priority task group, wherein each first task belonging to the first priority task group has a task priority not lower than task priorities of other tasks not belonging to the first priority task group; and dispatching at least one of the at least one first task to at least one run queue of at least one of the at least one first processor core.
    Type: Application
    Filed: October 16, 2014
    Publication date: April 30, 2015
    Inventors: Ya-Ting Chang, Jia-Ming Chen, Yu-Ming Lin, Yin Chen, Hung-Lin Chou, Yeh-Ji Chou, Shou-Wen Ho
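A minimal Python sketch of the kind of dispatching the abstract above describes: tasks are grouped so that every task in the first priority group has a priority no lower than any task outside it, and those tasks are placed on the run queues of the first (e.g. higher-performance) cores. The Task tuple, the group size, and the round-robin placement are illustrative assumptions, not the patented method.

```python
from collections import namedtuple

Task = namedtuple("Task", ["name", "priority"])  # higher number = higher priority

def dispatch_first_priority_group(tasks, first_core_queues, group_size=2):
    """Select the tasks whose priorities are not lower than any task outside the
    group, then spread them over the run queues of the first processor cores."""
    ranked = sorted(tasks, key=lambda t: t.priority, reverse=True)
    first_group = ranked[:group_size]                  # first priority task group
    for i, task in enumerate(first_group):
        first_core_queues[i % len(first_core_queues)].append(task)
    return first_group

if __name__ == "__main__":
    tasks = [Task("ui", 90), Task("render", 80), Task("log", 10), Task("sync", 30)]
    first_cores = [[], []]                             # one run queue per first core
    dispatch_first_priority_group(tasks, first_cores)
    print(first_cores)
```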
  • Patent number: 9021495
    Abstract: Resources in a computing environment are managed, for example, by a hardware controller controlling dispatching of resources from one or more pools of resources to be used in execution of threads. The controlling includes conditionally dispatching resources from the pool(s) to one or more low-priority threads of the computing environment based on current usage of resources in the pool(s) relative to an associated resource usage threshold. The management further includes monitoring resource dispatching from the pool(s) to one or more high-priority threads of the computing environment, and based on the monitoring, dynamically adjusting the resource usage threshold used in the conditionally dispatching of resources from the pool(s) to the low-priority thread(s).
    Type: Grant
    Filed: March 3, 2013
    Date of Patent: April 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-Lung K. Shum
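A hedged sketch of the threshold-gated dispatching described in the entry above (and in the related grant 9021493 below): low-priority threads receive a pool resource only while usage is under a threshold, and the threshold is periodically adjusted from observed high-priority grants. The counters and the adjustment rule are assumptions for illustration, not IBM's implementation.

```python
class ResourcePool:
    """Pool whose entries are granted freely to high-priority threads but only
    conditionally, against a movable threshold, to low-priority threads."""

    def __init__(self, size, low_prio_threshold):
        self.size = size
        self.in_use = 0
        self.low_prio_threshold = low_prio_threshold   # usage level below which low-priority dispatch is allowed
        self.high_prio_grants = 0                      # monitored to drive threshold adjustment

    def dispatch(self, high_priority):
        if high_priority:
            if self.in_use < self.size:
                self.in_use += 1
                self.high_prio_grants += 1
                return True
            return False
        if self.in_use < self.low_prio_threshold:      # conditional dispatch for low priority
            self.in_use += 1
            return True
        return False

    def release(self):
        self.in_use = max(0, self.in_use - 1)

    def adjust_threshold(self):
        """Tighten the threshold when high-priority threads are consuming many
        resources, relax it when they are not (assumed rule)."""
        if self.high_prio_grants > self.size // 2:
            self.low_prio_threshold = max(1, self.low_prio_threshold - 1)
        else:
            self.low_prio_threshold = min(self.size, self.low_prio_threshold + 1)
        self.high_prio_grants = 0
```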
  • Patent number: 9021493
    Abstract: Resources in a computing environment are managed, for example, by a hardware controller controlling dispatching of resources from one or more pools of resources to be used in execution of threads. The controlling includes conditionally dispatching resources from the pool(s) to one or more low-priority threads of the computing environment based on current usage of resources in the pool(s) relative to an associated resource usage threshold. The management further includes monitoring resource dispatching from the pool(s) to one or more high-priority threads of the computing environment, and based on the monitoring, dynamically adjusting the resource usage threshold used in the conditionally dispatching of resources from the pool(s) to the low-priority thread(s).
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: April 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Steven R. Carlough, Christopher A. Krygowski, Brian R. Prasky, Chung-Lung K. Shum
  • Publication number: 20150113537
    Abstract: A system for providing reliable availability of a general workload and continuous availability of a priority workload over long distances may include a first computing site configured to execute a first instance associated with the priority workload, wherein the first instance is designated as an active instance, a second computing site configured to execute a second instance of the priority workload, wherein the second instance is designated as a standby instance, a third computing site configured to restart a third instance associated with the general workload, and a workload availability module configured to synchronize a portion of data associated with the third instance with a corresponding portion of data associated with the second instance.
    Type: Application
    Filed: October 22, 2013
    Publication date: April 23, 2015
    Applicant: International Business Machines Corporation
    Inventors: Serge Bourbonnais, Paul M. Cadarette, Michael G. Fitzpatrick, David B. Petersen, Gregory W. Vance
  • Patent number: 9015720
    Abstract: A system and method are described for optimizing processor performance and minimizing average thread latency by selectively loading a cache when a program state, the resources required for execution of a program, or the program itself changes. An embodiment of the invention supports a "cache priming program" that is selectively executed for a first thread/program/sub-routine of each process. Such a program is optimized for situations when instructions and other program data are not yet resident in cache(s), and/or whenever the resources required for program execution or the program itself change. By pre-loading the cache with two resources required for two instructions for only a first thread, average thread latency is reduced because the resources are already present in the cache.
    Type: Grant
    Filed: January 6, 2009
    Date of Patent: April 21, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Andrew Brown, Brian Emberling
  • Patent number: 9014206
    Abstract: In a method for improving the transmission efficiency in a communication system with a layered protocol stack, data packets are processed on an upper protocol layer. Data packets are forwarded to a lower protocol layer for transmission and the transmission is performed with variable channel access delays. The upper protocol layer is notified by the lower protocol layer when a transmission is started to allow a synchronization of timers in the upper protocol layer. If a layer performs a scheduling of data packets for the transmission, a rescheduling is performed alternatively or in addition during a channel access delay. Devices and software programs embodying the invention are also described.
    Type: Grant
    Filed: October 23, 2006
    Date of Patent: April 21, 2015
    Assignee: Optis Cellular Technology, LLC
    Inventors: Joachim Sachs, Stefan Wager, Bela Rathonyi
  • Publication number: 20150106819
    Abstract: Disclosed herein is a task scheduling method for a priority-based real-time operating system in a multicore environment, which addresses problems that arise in real-time multicore task scheduling based on a conventional decentralized scheme. In the task scheduling method, one or more scheduling algorithm candidates for sequential tasks are combined with one or more scheduling algorithm candidates for parallel tasks. The task scheduling algorithm candidates generated by this combination are simulated, and the performance of each candidate is evaluated against performance evaluation criteria. The task scheduling algorithm exhibiting the best performance is then selected from the evaluated candidates.
    Type: Application
    Filed: August 18, 2014
    Publication date: April 16, 2015
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Sangcheol KIM, Seontae KIM, Pyeongsoo MAH
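A small Python sketch of the candidate-combination step in the abstract above: every sequential-task algorithm is paired with every parallel-task algorithm, each pairing is simulated, and the best-scoring pairing is kept. The simulate and score callables stand in for the simulator and the performance evaluation criteria, which the abstract does not specify.

```python
import itertools

def pick_scheduling_algorithm(seq_candidates, par_candidates, simulate, score):
    """Combine every sequential-task candidate with every parallel-task candidate,
    simulate each combination, and keep the best-scoring one."""
    best = None
    best_score = float("-inf")
    for seq_alg, par_alg in itertools.product(seq_candidates, par_candidates):
        result = simulate(seq_alg, par_alg)        # run the candidate pairing in simulation
        s = score(result)                          # apply the performance evaluation criteria
        if s > best_score:
            best, best_score = (seq_alg, par_alg), s
    return best, best_score
```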
  • Patent number: 9009714
    Abstract: A scheduling method, medium, and apparatus are provided. A timer corresponding to the point in time at which an interrupt occurs is selected from among one or more timers, each of which represents a task, a point in time assigned to that task, and a priority assigned to that task. The task represented by the selected timer and one or more tasks requested by the interrupt are then executed in order of priority. This prevents the priority order between tasks represented by expired timers and tasks requested by the interrupt from being reversed, without degrading the performance of a real-time operating system (RTOS), even when many timers expire at or before the time the interrupt occurs.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: April 14, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-don Lee, Seung-won Lee, Jeong-joon Yoo, Young-sam Shin, Min-kyu Jeong, Keun-soo Yim
  • Patent number: 9009715
    Abstract: A method, system and computer program product for optimally allocating objects in a virtual machine environment implemented on a NUMA computer system. The method includes: obtaining a node identifier; storing the node identifier in a thread; obtaining an object identifier of a lock-target object from a lock thread; writing a lock node identifier into the lock-target object; traversing an object reference graph where the object reference graph contains an object as a graph node, a reference from a first object to a second object as an edge, and a stack allocated to a thread as the root node; determining whether a move-target object contains the lock node identifier; moving the move-target object to a subarea allocated to a lock node if it contains the lock node identifier; and moving the move-target object to the destination of the current traversal-target object if the lock node identifier is not found.
    Type: Grant
    Filed: October 6, 2010
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventor: Takeshi Ogasawara
  • Patent number: 9009716
    Abstract: Creating a thread of execution in a computer processor, including copying, as indicated by a hardware processor opcode having been specified by a user-level process, data from a first set of registers to a second set of registers, wherein the first set of registers is associated with a parent hardware thread, wherein the second set of registers is associated with a child hardware thread, wherein the child hardware thread is in a wait state, and changing, as indicated by the hardware processor opcode, the child hardware thread from the wait state to an ephemeral run state.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Patrick J. Bohrer, Ahmed Gheith, James L. Peterson
  • Patent number: 9009717
    Abstract: A mechanism dynamically modifies the base-priority of a spawned set of processes according to their actual resource utilization (CPU or I/O wait time) and to a priority class assigned to them during their startup. In this way it is possible to maximize the CPU and I/O resource usage without at the same time degrading the interactive experience of the users currently logged on the system.
    Type: Grant
    Filed: August 24, 2010
    Date of Patent: April 14, 2015
    Assignee: International Business Machines Corporation
    Inventors: Mauro Arcese, Luigi Pichetti
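A rough Python sketch of this kind of utilization-driven base-priority adjustment; the field names, thresholds, and class bounds below are invented for illustration and are not the patented policy.

```python
CLASS_BOUNDS = {"batch": (1, 5), "interactive": (5, 10)}   # assumed priority range per class

def adjust_base_priority(proc):
    """Nudge a spawned process's base priority using its measured CPU and I/O-wait
    shares, staying inside the bounds of its assigned priority class."""
    floor, ceiling = CLASS_BOUNDS[proc["priority_class"]]
    if proc["cpu_share"] > 0.8:
        proc["base_priority"] = max(floor, proc["base_priority"] - 1)    # CPU hog: lower priority
    elif proc["io_wait_share"] > 0.5:
        proc["base_priority"] = min(ceiling, proc["base_priority"] + 1)  # I/O-bound: raise priority
    return proc["base_priority"]

if __name__ == "__main__":
    p = {"priority_class": "batch", "base_priority": 4, "cpu_share": 0.9, "io_wait_share": 0.1}
    print(adjust_base_priority(p))   # 3
```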
  • Publication number: 20150100967
    Abstract: Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is resolved between the first application and at least a second application.
    Type: Application
    Filed: December 12, 2014
    Publication date: April 9, 2015
    Inventors: Adam T. Clark, Michael T. Kalmbach, John E. Petri, Kevin Wendzel
  • Publication number: 20150100966
    Abstract: A method includes a set of execution units of a dispersed storage network (DSN) receiving sets of sub-task requests from a computing device and storing the sets of sub-task requests, where each execution unit stores a request of each of the sets of sub-task requests to produce a corresponding plurality of sub-task requests. The method continues with each execution unit generating sub-task estimation data and adjusting timing, sequencing, or processing of the corresponding plurality of sub-task requests based on the estimation data to produce a plurality of partial results, where, due to one or more difference factors from a list of difference factors, the execution units process pluralities of sub-task requests at different paces, where the list of difference factors includes differences in amounts of data to be processed per sub-task request, processing capabilities, memory storage capabilities, and networking capabilities.
    Type: Application
    Filed: August 5, 2014
    Publication date: April 9, 2015
    Applicant: CLEVERSAFE, INC.
    Inventors: Andrew Baptist, Ilya Volvovski, Joseph Martin Kaczmarek, Yogesh Ramesh Vedpathak
  • Publication number: 20150100965
    Abstract: A method includes, in one implementation, receiving a first set of instructions of a first thread, receiving a second set of instructions of a second thread, and allocating queues to the instructions from the first and second sets. During a time when the first and second threads are simultaneously being processed, a changeable number of queues can be allocated to the first thread based on factors such as the first and/or second thread's requirements or priorities, while maintaining a minimum specified number of queues allocated to the first and/or second thread. When needed, one thread may be stalled so that at least the minimum number of queues remains reserved for another thread while attempting to satisfy thread-priority requests or queue-requirement requests.
    Type: Application
    Filed: October 4, 2013
    Publication date: April 9, 2015
    Inventor: Thang M. Tran
  • Patent number: 9003274
    Abstract: The illustrative embodiments provide for a system and recordable type medium for representing actions in a data processing system. A table is generated. The table comprises a plurality of rows and columns. Ones of the columns represent corresponding ones of computer applications that can start or stop in parallel with each other in a data processing system. Ones of the rows represent corresponding ones of sequences of actions within a corresponding column. Additionally, the table represents a definition of relationships among memory address spaces, wherein the table represents when each particular address space is started or stopped during one of a start-up process, a recovery process, and a shut-down process. The resulting table is stored.
    Type: Grant
    Filed: December 21, 2007
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventor: Joseph John Katnic
  • Patent number: 8997103
    Abstract: One embodiment sets forth a technique for N-way memory barrier operation coalescing. When a first memory barrier is received for a first thread group, execution of subsequent memory operations for the first thread group is suspended until the first memory barrier is executed. Subsequent memory barriers for different thread groups may be coalesced with the first memory barrier to produce a coalesced memory barrier that represents memory barrier operations for multiple thread groups. When the coalesced memory barrier is being processed, execution of subsequent memory operations for the different thread groups is also suspended. However, memory operations for other thread groups that are not affected by the coalesced memory barrier may be executed.
    Type: Grant
    Filed: April 6, 2012
    Date of Patent: March 31, 2015
    Assignee: NVIDIA Corporation
    Inventors: Shirish Gadre, Charles McCarver, Anjana Rajendran, Omkar Paranjape, Steven James Heinrich
  • Patent number: 8994999
    Abstract: An image forming apparatus has a print engine, a job issuing device, two print control devices that control the print engine, and a job assignment device that assigns the print job to one of the two print control devices that is appropriate for the print job. Each print control device notifies the print engine of print information corresponding to the print job and manages already-indicated latest information, which is the latest print information. The print information is different from the already-indicated latest information. When making a switchover between the two print control devices to select a switched-to print control device, which is a print control device to which to assign the print job, the job assignment device causes the switched-to print control device to delete the already-indicated latest information managed by the switched-to print control device.
    Type: Grant
    Filed: November 18, 2013
    Date of Patent: March 31, 2015
    Assignee: Kyocera Document Solutions Inc.
    Inventor: Kazuto Misu
  • Patent number: 8997109
    Abstract: Disclosed herein are an apparatus and method for managing a data stream distributed parallel processing service. The apparatus includes a service management unit, a Quality of Service (QoS) monitoring unit, and a scheduling unit. The service management unit registers a plurality of tasks constituting the data stream distributed parallel processing service. The QoS monitoring unit gathers information about the load of the plurality of tasks and information about the load of a plurality of nodes constituting a cluster which provides the data stream distributed parallel processing service. The scheduling unit arranges the plurality of tasks by distributing the plurality of tasks among the plurality of nodes based on the information about the load of the plurality of tasks and the information about the load of the plurality of nodes.
    Type: Grant
    Filed: August 14, 2012
    Date of Patent: March 31, 2015
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Myung-Cheol Lee, Hyun-Hwa Choi, Hun-Soon Lee, Byoung-Seob Kim, Mi-Young Lee
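The abstract above does not give the arrangement algorithm, so the sketch below uses a common greedy heuristic as a stand-in: the heaviest remaining task goes to the currently least-loaded node.

```python
import heapq

def place_tasks(task_loads, node_loads):
    """Greedy placement: repeatedly put the heaviest remaining task on the
    currently least-loaded node of the cluster."""
    heap = [(load, node) for node, load in node_loads.items()]
    heapq.heapify(heap)
    placement = {}
    for task, load in sorted(task_loads.items(), key=lambda kv: kv[1], reverse=True):
        node_load, node = heapq.heappop(heap)      # least-loaded node so far
        placement[task] = node
        heapq.heappush(heap, (node_load + load, node))
    return placement

if __name__ == "__main__":
    print(place_tasks({"t1": 5, "t2": 3, "t3": 2}, {"n1": 1, "n2": 0}))
```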
  • Patent number: 8997105
    Abstract: The present invention relates to a processor and a method for processing a data packet, the method including the steps of decreasing a value of a first credit parameter when the data packet is admitted to a processor, at least partly based on the value of the first credit parameter and a first limit of the first credit parameter, and increasing the value of the first credit parameter in dependence on a data storage level in a buffer in which the data packet is stored before being admitted to the processor, wherein the value of the first credit parameter is not increased beyond a second limit of the first credit parameter when the buffer is empty.
    Type: Grant
    Filed: August 7, 2013
    Date of Patent: March 31, 2015
    Assignee: Marvell International Ltd.
    Inventor: Jakob Carlström
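A toy Python model of the credit scheme in the abstract above, under assumed constants: admitting a packet spends credit, credit is replenished according to buffer occupancy, and credit is never raised above a second limit while the buffer is empty.

```python
class CreditGate:
    """Packet admission gated by a credit parameter with two limits (illustrative values)."""

    def __init__(self, admit_limit=0, idle_limit=10, start_credit=5):
        self.credit = start_credit
        self.admit_limit = admit_limit   # first limit: credit needed before a packet is admitted
        self.idle_limit = idle_limit     # second limit: ceiling on credit while the buffer is empty

    def try_admit(self):
        """Admit a packet only if enough credit is available; admission spends credit."""
        if self.credit > self.admit_limit:
            self.credit -= 1
            return True
        return False

    def replenish(self, buffered_packets):
        """Grow credit according to buffer occupancy; while the buffer is empty,
        never grow it past the second limit."""
        if buffered_packets == 0:
            self.credit = min(self.credit + 1, self.idle_limit)
        else:
            self.credit += 1
```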
  • Patent number: 8997104
    Abstract: A system and method for managing performance of a mobile device. A current runtime list and a measure of performance are determined. A request to execute a new application not on the current runtime list is received. A determination is made whether executing the new application will cause the performance measure to be equal to or less than a threshold level. When executing the new application will not cause the performance measure to be equal to or less than a threshold level, the request to execute the new application is granted. When executing the new application will cause the performance measure to be equal to or less than a threshold level, the request to execute the new application is declined.
    Type: Grant
    Filed: June 29, 2012
    Date of Patent: March 31, 2015
    Assignee: Time Warner Cable Enterprises LLC
    Inventors: Dharmen Udeshi, Vijay Venkateswaran, Michael Charles Roudi
  • Patent number: 8997171
    Abstract: In accordance with one or more aspects, an application that is to be suspended on a computing device is identified based on a policy. The policy indicates that applications that are not being used are to be suspended. The application is automatically suspended, and is allowed to remain in memory but not execute while suspended. Additionally, when memory is to be freed one or more suspended applications to terminate are automatically selected based on the policy, and these one or more selected applications are terminated.
    Type: Grant
    Filed: August 19, 2011
    Date of Patent: March 31, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Benjamin Salim Srour, Michael H. Krause, Haseeb Ahmed, Zinaida A. Pozen
  • Patent number: 8997102
    Abstract: It is determined that a memory pressure condition exists which limits how many active processes are allowed. A set of values is generated and stored, corresponding to parameters for each process, where the parameters are related to priority factors assigned to the associated process. A prioritization score is calculated for each process based on the corresponding set of values. A first active process with the lowest priority is determined based on the prioritization scores. The first active process is deactivated to reduce the memory pressure condition.
    Type: Grant
    Filed: June 3, 2005
    Date of Patent: March 31, 2015
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: William Pohl, Walter J. Searle, Chukwuma Valentine Akpuokwe, Bradd William Szonye
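A minimal sketch of the deactivation step described above, assuming a simple weighted-sum prioritization score (the abstract does not specify the scoring function) and an "active" flag per process.

```python
def relieve_memory_pressure(processes, weights):
    """Score each active process from its stored parameter values and deactivate
    the lowest-scoring one to relieve memory pressure."""
    def score(proc):
        return sum(weights[name] * value for name, value in proc["params"].items())
    active = [p for p in processes if p["active"]]
    victim = min(active, key=score)        # lowest prioritization score
    victim["active"] = False               # deactivate to reduce the memory pressure condition
    return victim

if __name__ == "__main__":
    procs = [
        {"name": "editor",  "active": True, "params": {"recency": 0.9, "importance": 0.8}},
        {"name": "indexer", "active": True, "params": {"recency": 0.1, "importance": 0.3}},
    ]
    print(relieve_memory_pressure(procs, {"recency": 1.0, "importance": 2.0})["name"])  # indexer
```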
  • Publication number: 20150089509
    Abstract: In accordance with one aspect of the present description, execution of a particular command by a data processor, such as a storage controller, may include obtaining priority over a resource which is also associated with execution of another command, setting a timer for the duration of a dynamically set timeout period, and detecting a potential deadlock condition as a function of expiration of the dynamically set timeout period before execution of the particular command is completed. In one embodiment, the particular command releases priority over the resource upon detection of the potential deadlock condition, and then reobtains priority over the resource in a retry of the command. It is believed that such an arrangement can relieve a potential deadlock condition, allowing execution of one or more commands, including the particular command, to proceed. Other features and aspects may be realized, depending upon the particular application.
    Type: Application
    Filed: September 26, 2013
    Publication date: March 26, 2015
    Applicant: International Business Machines Corporation
    Inventors: Theresa M. Brown, Nedlaya Y. Francisco, Suguang Li, Beth A. Peterson, Raul E. Saba
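A hedged Python sketch of the release-and-retry pattern in the abstract above, with a threading.Lock standing in for the shared storage-controller resource and a caller-chosen timeout standing in for the dynamically set timeout period.

```python
import time

def run_with_deadlock_relief(command, resource_lock, timeout_s, max_retries=3):
    """Obtain priority over the resource, run the command against a deadline,
    and on timeout release the resource and retry from scratch."""
    for _ in range(max_retries):
        if not resource_lock.acquire(timeout=timeout_s):   # could not obtain priority in time
            continue
        try:
            deadline = time.monotonic() + timeout_s
            if command(deadline):                          # command checks the deadline itself
                return True                                 # completed before the timeout expired
        finally:
            resource_lock.release()                         # relinquish priority before retrying
    return False

if __name__ == "__main__":
    import threading
    print(run_with_deadlock_relief(lambda deadline: True, threading.Lock(), timeout_s=0.5))
```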
  • Publication number: 20150089510
    Abstract: A scheduling device according to an embodiment may comprise a controller, a load calculator, and a resource calculator. The controller may be configured to obtain an execution history of one or more tasks operating on a virtual OS. The load calculator may be configured to calculate a first resource amount required by each task based on the execution history. The resource calculator may be configured to calculate a second resource amount to be assigned to the virtual OS based on the first resource amount calculated for the one or more tasks.
    Type: Application
    Filed: September 10, 2014
    Publication date: March 26, 2015
    Applicant: Kabushiki Kaisha Toshiba
    Inventor: Yasuyuki KOZAKAI
  • Patent number: 8990783
    Abstract: Embodiments can include computer-implemented methods or non-transitory computer readable media storing executable instructions. The method or instructions can perform execution scheduling for code generated from an executable graphical model, where the generated code is executed on a target. The method/instructions can perform execution scheduling for a first code portion having a first execution rate, and a second code portion having a second execution rate that is temporally related to the first execution rate. The execution scheduling can account for target environment characteristics obtained from a target, can use an execution schedule, and can account for optimizations related to the first code portion or the second code portion. The method/instructions can further schedule execution of the first code portion and the second code portion in generated executable code based on the performing.
    Type: Grant
    Filed: January 28, 2011
    Date of Patent: March 24, 2015
    Assignee: The MathWorks, Inc.
    Inventors: Biao Yu, Jim Carrick, Pieter J. Mosterman
  • Patent number: 8990537
    Abstract: Disclosed herein are systems, methods, and non-transitory computer-readable storage media for managing free chains of compute resources. A system configured to practice the method divides a free chain of compute resources into a usable part (UP) which contains resources available for immediate allocation and an unusable part (UUP) which contains resources not available for immediate allocation but which become available after a certain minimum number of allocations. The system sorts resources in the UP by block number, and maintains a last used object (LUO) vector, indexed by block number, which records a last object in the UP for each block. Each time the system frees a resource, the system adds the freed resource to a tail of the UUP and promotes an oldest resource in the UUP to the UP. This approach can manage free chains in a manner that is both flaw tolerant and has relatively high performance.
    Type: Grant
    Filed: April 22, 2013
    Date of Patent: March 24, 2015
    Assignee: Avaya Inc.
    Inventor: John H. Meiners
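A toy Python rendering of the free-chain split described above: a usable part sorted by block number, an unusable part that ages freed resources, and a last-used-object map per block. The list/deque representation and the min_age constant are illustrative assumptions.

```python
from collections import deque, namedtuple

Resource = namedtuple("Resource", ["rid", "block"])

class FreeChain:
    """Free chain split into a usable part (UP) and an unusable part (UUP)."""

    def __init__(self, resources):
        self.usable = sorted(resources, key=lambda r: r.block)   # UP, sorted by block number
        self.unusable = deque()                                  # UUP, ages freed resources
        self.last_used = {r.block: r for r in self.usable}       # LUO: last UP object per block

    def allocate(self):
        """Hand out a resource that is available for immediate allocation."""
        return self.usable.pop(0) if self.usable else None

    def free(self, resource, min_age=2):
        """Freed resources join the UUP tail; once enough have accumulated,
        the oldest UUP entry is promoted back into the UP."""
        self.unusable.append(resource)
        if len(self.unusable) > min_age:
            promoted = self.unusable.popleft()
            self.usable.append(promoted)
            self.usable.sort(key=lambda r: r.block)
            self.last_used = {r.block: r for r in self.usable}
```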
  • Patent number: 8990452
    Abstract: Techniques are described for eliminating backpressure in a distributed system by changing the rate at which data flows through a processing element. Backpressure occurs when data throughput in a processing element begins to decrease, for example, if new processing elements are added to the operator graph or if the distributed system is required to process more data. Indicators of backpressure (current or future) may be monitored. Once current backpressure or potential backpressure is identified, the operator graph or data rates may be altered to alleviate the backpressure. For example, a processing element may reduce the data rates it sends to processing elements that are downstream in the operator graph, or processing elements and/or data paths may be eliminated. In one embodiment, processing elements and associated data paths may be prioritized so that more important execution paths are maintained.
    Type: Grant
    Filed: July 26, 2011
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Michael J. Branson, Ryan K. Cradick, John M. Santosuosso
  • Publication number: 20150082316
    Abstract: The present invention relates to automation of flow assurance simulation workflow using a web-based batch simulation scheduler. Further, the present invention relates to a system for efficient utilization of simulation resources comprising a database server; a simulator; a batch simulation scheduler; a license monitor; a launcher; a wrapper; and a debugger. The batch simulation scheduler schedules simulations on computing resources with specific input files, based on user-submitted information in the database and according to user-specified priorities. The license monitor updates the database with the number of available licenses for each feature or module. The launcher, running on the computing resources, monitors the database for instructions from the scheduler to launch the simulations. The debugger parses the input file(s) and verifies their syntax without using a simulation license. The system utilizes all available licenses to execute the jobs, thereby resulting in parallel execution of cases in the jobs.
    Type: Application
    Filed: September 18, 2013
    Publication date: March 19, 2015
    Applicant: evoleap, LLC
    Inventors: Michael Zaldivar, Prasanna Venkatesh Parthasarathy
  • Patent number: 8984494
    Abstract: An embodiment can include one or more computer readable media storing executable instructions for performing execution scheduling for code generated from an executable graphical model. The media can store instructions for accessing a first code portion having a first priority, and a second code portion having a second priority, where the second priority has a relationship with the first priority. The media can store instructions for accessing target environment characteristics that indicate a performance of the target environment, and for performing execution scheduling for the first code portion and the second code portion, the execution scheduling taking into account the target environment characteristics, the execution scheduling using an execution schedule.
    Type: Grant
    Filed: October 21, 2013
    Date of Patent: March 17, 2015
    Assignee: The MathWorks, Inc.
    Inventors: James E. Carrick, Biao Yu
  • Patent number: 8984519
    Abstract: A system and method for scheduling client-server applications onto heterogeneous clusters includes storing at least one client request of at least one application in a pending request list on a computer readable storage medium. A priority metric is computed for each application, where the computed priority metric is applied to each client request belonging to that application. The priority metric is determined based on estimated performance of the client request and load on the pending request list. The at least one client request of the at least one application is scheduled based on the priority metric onto one or more heterogeneous resources.
    Type: Grant
    Filed: October 13, 2011
    Date of Patent: March 17, 2015
    Assignee: NEC Laboratories America, Inc.
    Inventors: Srihari Cadambi, Srimat Chakradhar, M. Mustafa Rafique
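The abstract above says the priority metric combines the estimated performance of a client request with the load on the pending request list, but not how; the sketch below uses one plausible formula (estimated speedup divided by queued requests) purely for illustration.

```python
def priority_metric(app, pending_requests):
    """Combine an application's estimated request performance with the load it
    currently contributes to the pending request list (assumed formula)."""
    queued = sum(1 for r in pending_requests if r["app"] == app["name"])
    return app["estimated_speedup"] / (1 + queued)

def schedule_next(apps, pending_requests):
    """Serve a request belonging to the application with the highest metric."""
    best_app = max(apps, key=lambda a: priority_metric(a, pending_requests))
    for req in pending_requests:
        if req["app"] == best_app["name"]:
            pending_requests.remove(req)
            return req
    return None
```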
  • Publication number: 20150074673
    Abstract: According to a certain embodiment, there is provided a control apparatus including a processor. The processor controls a first processing unit. The processor acquires determination information for estimating a time delay until execution of a first process is started, in response to receiving an interrupt request for the first process related to the first processing unit from a program being executed by the processor or from hardware connected via a bus to the processor, and determines whether to execute the first process based on the determination information.
    Type: Application
    Filed: March 11, 2014
    Publication date: March 12, 2015
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventors: Yuji TOHZAKA, Hiroki KUDO, Takafumi SAKAMOTO, Noritaka DEGUCHI
  • Publication number: 20150074671
    Abstract: Systems and methods are disclosed for reducing latency in processing data sets in a distributed fashion. A job-queue operable for queuing data-processing jobs run on multiple nodes in a cluster may be communicatively coupled to a job analyzer. The job analyzer may be operable to read the data-processing jobs and extract information characterizing those jobs in ways that facilitate identification of resources in the cluster serviceable to run the data-processing jobs and/or data to be processed during the running of those jobs. The job analyzer may also be coupled to a resource warmer operable to warm-up a portion of the cluster to be used to run a particular data-processing job prior to the running of the job. In some embodiments, mappers and/or reducers may be extracted from the jobs and converted into compute node identifiers and/or data units identifying blocks for processing, informing the warm-up operations of the resource warmer.
    Type: Application
    Filed: February 6, 2014
    Publication date: March 12, 2015
    Applicant: Robin Systems, Inc.
    Inventors: Krishna Satyasai Yeddanapudi, Christopher Alan Mildebrandt, Rao V. Madduri
  • Publication number: 20150074672
    Abstract: Systems and methods are disclosed for scheduling jobs processed in a distributed fashion to realize unharnessed efficiencies latent in the characteristics of the jobs and distributed processing technologies. A job store may be communicatively coupled to a job analyzer. The job analyzer may be operable to read information characterizing a job to identify multiple data blocks to be processed during the job at multiple locations in a cluster of nodes. A scheduling module may use information about the multiple data blocks, their storage locations, their status with respect to being provisioned to processing logic, data blocks to be processed by other jobs, data blocks in cache that have been pre-fetched for a prior job, quality-of-services parameters, and/or job characteristics, such as job size, to schedule the job in relation to other jobs.
    Type: Application
    Filed: February 6, 2014
    Publication date: March 12, 2015
    Applicant: Robin Systems, Inc.
    Inventors: Krishna Satyasai Yeddanapudi, Christopher Alan Mildebrandt, Rao V. Madduri
  • Publication number: 20150074674
    Abstract: An apparatus for adjusting priorities of tasks determines a task violating a real-time constraint using a profiling result of the real-time software and task details including a real-time constraint for each task, adjusts a priority of the task violating the real-time constraint or a higher candidate task close to the task violating the real-time constraint, and simulates execution of the real-time software depending on the adjusted priority.
    Type: Application
    Filed: August 1, 2014
    Publication date: March 12, 2015
    Inventors: Yu Seung Ma, Sang Cheol Kim, Duk Kyun Woo, PYEONG SOO MAH, SEON-TAE KIM
  • Publication number: 20150074676
    Abstract: A plurality of tasks are processed simultaneously in a plurality of CPUs. A task control circuit is connected to the plurality of CPUs, and when executing a system call signal instruction, each CPU transmits a system call signal to the task control circuit. Upon receipt of a system call signal from a CPU 0, the task control circuit 200 refers to a processor management register, identifies a RUN-task of the CPU 0, selects a READY-task that is to be executed next, switches process data of the RUN-task and process data of the READY-task, and updates processor management information.
    Type: Application
    Filed: November 17, 2014
    Publication date: March 12, 2015
    Inventor: Naotaka MARUYAMA
  • Publication number: 20150074310
    Abstract: Methods and systems for implementing virtual processors are disclosed. For example, in an embodiment a processing apparatus configured to act as a plurality of virtual processors includes a first virtual program space that includes a first program execution memory, the first program execution memory including code to run a non-real-time operating system capable of supporting a one or more non-real-time applications, a second virtual program space that includes a second program execution memory, the second program execution memory including code to run one or more real-time processes, and a central processing unit (CPU) configured to operate in a first operating mode and a second operating mode, the CPU being configured to perform operating system and application activities using the first virtual program space for the first operating mode without using the second virtual program space and without appreciably interfering with the one or more real-time processes that are running in the second operating mode.
    Type: Application
    Filed: November 17, 2014
    Publication date: March 12, 2015
    Applicant: Marvell World Trade Ltd.
    Inventors: Timor KARDASHOV, Maxim Kovalenko, Arie Elias, Guy Ray
  • Publication number: 20150074670
    Abstract: The current document is directed to an interface and authorization service that allows users of a cloud-director management subsystem of distributed, multi-tenant, virtual data centers to extend the services and functionalities provided by the cloud-director management subsystem. A cloud application programming interface (“API”) entrypoint represents a request/response RESTful interface to services and functionalities provided by the cloud-director management subsystem as well as to service extensions provided by users. The API entrypoint includes a service-extension interface and an authorization-service management interface. The cloud-director management subsystem provides the authorization service to service extensions that allow the service extensions to obtain, from the authorization service, an indication of whether or not a request directed to the service extension through the API entrypoint is authorized.
    Type: Application
    Filed: September 10, 2013
    Publication date: March 12, 2015
    Applicant: VMware, Inc.
    Inventor: Radoslav Gerganov
  • Publication number: 20150074675
    Abstract: Aspects of the disclosure provide a method for instruction scheduling. The method includes receiving a sequence of instructions, identifying redundant flag-register based dependency of the instructions, and re-ordering the instructions without being restricted by the redundant flag-register based dependency.
    Type: Application
    Filed: August 28, 2014
    Publication date: March 12, 2015
    Applicant: MARVELL WORLD TRADE LTD
    Inventors: Xinyu QI, Ningsheng Jian, Haitao Huang, Liping Gao
  • Publication number: 20150067693
    Abstract: A job management apparatus includes a storage device configured to store a maximum power value recorded when one or more calculation nodes executed a first job, and a controller configured to detect a first job for which at least one of the following holds: its identification information matches the identification information of a second job to be scheduled, its number of calculation nodes matches the number of calculation nodes of the second job, or its difference in the number of calculation nodes from the second job is within a prescribed range. The controller predicts, as a second maximum power value of the second job, the first maximum power value of the detected first job, and schedules the second job such that the second maximum power value does not exceed a power consumption limit value set according to a time.
    Type: Application
    Filed: August 1, 2014
    Publication date: March 5, 2015
    Inventor: Fumio YAMAZAKI
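A simplified Python sketch of the prediction-and-admission idea above; the matching tolerance and the choice to defer jobs with no comparable history are illustrative assumptions.

```python
def predict_max_power(history, job):
    """Reuse the recorded maximum power of a past job whose id matches and whose
    node count is equal to, or within a small tolerance of, the new job's."""
    for past in history:
        if past["job_id"] == job["job_id"] and abs(past["nodes"] - job["nodes"]) <= 2:
            return past["max_power_w"]
    return None

def can_schedule(history, job, power_limit_w, current_power_w):
    """Admit the job only if its predicted peak power fits under the power limit."""
    predicted = predict_max_power(history, job)
    if predicted is None:
        return False            # no comparable job found: defer (illustrative choice)
    return current_power_w + predicted <= power_limit_w

if __name__ == "__main__":
    hist = [{"job_id": "sim", "nodes": 16, "max_power_w": 3200}]
    print(can_schedule(hist, {"job_id": "sim", "nodes": 16}, power_limit_w=5000, current_power_w=1000))
```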
  • Publication number: 20150067691
    Abstract: A system, method, and computer program product are provided for providing prioritized access for multithreaded processing. The method includes the steps of allocating threads to process a workload and assigning a set of priority tokens to at least a portion of the threads. Access to a resource, by each one of the threads, is based on the priority token assigned to the thread and the threads are executed by a multithreaded processor to process the workload.
    Type: Application
    Filed: January 3, 2014
    Publication date: March 5, 2015
    Applicant: NVIDIA Corporation
    Inventors: Daniel Robert Johnson, Minsoo Rhu, James M. O'Connor, Stephen William Keckler
  • Publication number: 20150067692
    Abstract: Implementations disclosed herein relate to thermal based prioritized computing application scheduling. For example, a processor may determine a prioritized computing application. The processor may schedule the prioritized computing application to transfer execution from a first processing unit to a second processing unit based on a thermal reserve energy associated with the second processing unit.
    Type: Application
    Filed: June 29, 2012
    Publication date: March 5, 2015
    Inventors: Kimon Berlin, Tom Fisher, Raphael Gay
  • Patent number: 8973006
    Abstract: A circuit arrangement and method for a data processing system for executing a plurality of tasks with a central processing unit having a processing capacity allocated to the processing unit; the circuit arrangement being configured to allocate the processing unit to the specific tasks in a time-staggered manner for processing, so that the tasks are processed in an order to be selected and tasks not having a current processing request are skipped over in the order during the processing; the circuit arrangement including a prioritization order control unit to determine the order in which the tasks are executed; and in response to each selection of a task for processing, the order of the tasks being redetermined and the selection being controlled so that for a number N of tasks, a maximum of N time units elapse until an active task is once more allocated processing capacity by the processing unit.
    Type: Grant
    Filed: September 11, 2012
    Date of Patent: March 3, 2015
    Assignee: Robert Bosch GmbH
    Inventors: Eberhard Boehl, Ruben Bartholomae
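A minimal stand-in for the ordering behaviour described above: selection rotates past the most recently served task and skips tasks with no pending processing request, so with N tasks an active task waits at most N selections. The plain rotation is an assumption; the patented prioritization order control unit may reorder differently.

```python
def select_next(tasks, last_index):
    """Rotate through the task list starting after the most recently served task,
    skipping tasks without a current processing request."""
    n = len(tasks)
    for offset in range(1, n + 1):
        idx = (last_index + offset) % n
        if tasks[idx]["pending"]:
            return idx
    return None                            # no task currently has a processing request

if __name__ == "__main__":
    tasks = [{"pending": False}, {"pending": True}, {"pending": True}]
    print(select_next(tasks, last_index=0))   # 1
```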
  • Patent number: 8972995
    Abstract: A method, apparatus, and system in which an integrated circuit comprises an initiator Intellectual Property (IP) core, a target IP core, an interconnect, and a tag and thread logic. The target IP core may include a memory coupled to the initiator IP core. Additionally, the interconnect can allow the integrated circuit to communicate transactions between one or more initiator Intellectual Property (IP) cores and one or more target IP cores coupled to the interconnect. A tag and thread logic can be configured to concurrently perform per-thread and per-tag memory access scheduling within a thread and across multiple threads such that the tag and thread logic manages tags and threads to allow for per-tag and per-thread scheduling of memory access requests from the initiator IP core out of order from an initial issue order of the memory access requests from the initiator IP core.
    Type: Grant
    Filed: August 6, 2010
    Date of Patent: March 3, 2015
    Assignee: Sonics, Inc.
    Inventors: Krishnan Srinivasan, Ruben Khazhakyan, Harutyan Aslanyan, Drew E. Wingard, Chien-Chun Chou
  • Patent number: 8972488
    Abstract: Providing a first control process that executes in a hardware processor, providing a first server process that executes in a hardware processor, that responds to write requests by storing objects in an in-memory, non-relational data store, and that responds to read requests by providing objects from the in-memory, non-relational data store, wherein the objects each have an object size; forming a plurality of persistent connections between the first control process and the first server process; using the first control process, pipelining, using a pipeline having a pipeline size, requests that include the read requests and the write requests over at least one of the plurality of persistent connections; using the first control process, adjusting the number of persistent connections and the pipeline size based on an average of the object sizes; and using the first control process, prioritizing requests by request type based on anticipated load from the requests.
    Type: Grant
    Filed: September 28, 2011
    Date of Patent: March 3, 2015
    Assignee: Redis Labs Ltd.
    Inventors: Yiftach Shoolman, Ofer Bengal
  • Publication number: 20150058858
    Abstract: The present invention provides methods and system, including computer program products, implementing and using techniques for providing tasks of different classes with access to CPU time provided by worker threads of a database system. In particular, the invention relates to such a database-system-implemented method comprising the following steps: inserting the tasks to a queue of the database system; and executing the tasks inserted to the queue by worker threads of the database system according to their order in the queue; characterized in that the queue is a priority queue; and in that the method further comprises the following steps: assigning each class to a respective priority; and in that the step of inserting the tasks to the queue includes: associating each task with the respective priority assigned to its class.
    Type: Application
    Filed: August 20, 2014
    Publication date: February 26, 2015
    Inventors: Hasso PLATTNER, Martin GRUND, Johannes WUST
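A compact Python sketch of the class-prioritized queue described above; the class names, their priority values, and the condition-variable handoff to worker threads are illustrative assumptions, not the patented database implementation.

```python
import heapq
import itertools
import threading

CLASS_PRIORITY = {"oltp": 0, "reporting": 1, "batch": 2}   # lower value = served first (assumed classes)

class PriorityTaskQueue:
    """Tasks enter tagged with the priority assigned to their class; worker
    threads pop them in priority order."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()       # tie-breaker keeps FIFO order within a class
        self._cv = threading.Condition()

    def insert(self, task, task_class):
        with self._cv:
            heapq.heappush(self._heap, (CLASS_PRIORITY[task_class], next(self._counter), task))
            self._cv.notify()

    def take(self):
        """Called by a worker thread; blocks until a task is available."""
        with self._cv:
            while not self._heap:
                self._cv.wait()
            return heapq.heappop(self._heap)[2]
```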
  • Publication number: 20150058857
    Abstract: An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications, with capabilities for destination-task-defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, as well as for prioritizing application task instances for execution on cores of manycore processors based at least in part on which of the task instances have available to them the input data, such as ITC data, that they need for executing.
    Type: Application
    Filed: June 27, 2014
    Publication date: February 26, 2015
    Inventor: Mark Henrik Sandstrom
  • Patent number: 8966489
    Abstract: An information processing device disclosed includes a plurality of executing units for executing various processes. The information processing device and method thereof acquire setting information that indicates an operating condition with respect to each executing unit from information on an operation of a main process executed by the plurality of executing units, and set an operating state of each of the executing units based on the acquired setting information.
    Type: Grant
    Filed: February 26, 2009
    Date of Patent: February 24, 2015
    Assignee: Fujitsu Limited
    Inventors: Kouichi Yasaki, Kazuaki Nimura, Isamu Yamada
  • Patent number: 8963933
    Abstract: The desire to use an Accelerated Processing Device (APD) for general computation has increased due to the APD's exemplary performance characteristics. However, current systems incur high overhead when dispatching work to the APD because a process cannot be efficiently identified or preempted. The occupying of the APD by a rogue process for arbitrary amounts of time can prevent the effective utilization of the available system capacity and can reduce the processing progress of the system. Embodiments described herein can overcome this deficiency by enabling the system software to pre-empt a process executing on the APD for any reason. The APD provides an interface for initiating such a pre-emption. This interface exposes an urgency of the request which determines whether the process being preempted is allowed a grace period to complete its issued work before being forced off the hardware.
    Type: Grant
    Filed: July 23, 2012
    Date of Patent: February 24, 2015
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Robert Scott Hartog, Ralph Clay Taylor, Michael Mantor, Kevin McGrath, Sebastien Nussbaum, Nuwan S. Jayasena, Rex Eldon McCrary, Mark Leather, Philip J. Rogers
  • Patent number: 8959520
    Abstract: A data processor and a method of controlling the performance of a data processor are provided. The data processor includes a memory operable to store at least two application programs that can be executed on the data processor. A performance module is operable to monitor a performance flag and output a stop command as a function of the presence of the performance flag, wherein the performance module generates a command that terminates at least one of the application programs as a function of the outputting of a stop command.
    Type: Grant
    Filed: September 18, 2006
    Date of Patent: February 17, 2015
    Assignee: Siemens Aktiengesellschaft
    Inventor: Peter Gunther
  • Patent number: 8959521
    Abstract: A computer-readable medium tangibly embodying a program of machine-readable instructions executable by a digital processor of a computer system to perform operations for controlling computer system activities. The operations include receiving a command entered with an input device of the computer system to begin opportunistic computer system activities, where the command specifies a time period available for opportunistic computer system activities. At least one computer system activity is then initiated during that time period.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: February 17, 2015
    Assignee: International Business Machines Corporation
    Inventors: Peter G. Capek, Clifford A. Pickover