Patents Issued on March 31, 2016
-
Publication number: 20160092235
Abstract: An apparatus and method are described for improved thread selection. For example, one embodiment of a processor comprises: first logic to maintain a history table comprising a plurality of entries, each entry in the table associated with an instruction and including history data indicating prior hits and/or misses to a cache level and/or a translation lookaside buffer (TLB) for that instruction; and second logic to select a particular thread for execution at a particular processor pipeline stage based on the history data.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: REKAI GONZALEZ-ALBERQUILLA, TANAUSU RAMIREZ, JOSEP M. CODINA, ENRIC GIBERT CODINA
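The history-table mechanism in this abstract can be pictured with a small sketch. This is an illustrative model only, not the patent's implementation; the names `HistoryTable` and `select_thread`, and the hit-rate scoring, are assumptions.

```python
class HistoryTable:
    """Tracks prior cache/TLB hits and misses per instruction address."""

    def __init__(self):
        self.entries = {}  # instruction address -> [hits, misses]

    def record(self, addr, hit):
        entry = self.entries.setdefault(addr, [0, 0])
        entry[0 if hit else 1] += 1

    def hit_rate(self, addr):
        hits, misses = self.entries.get(addr, (0, 0))
        total = hits + misses
        return hits / total if total else 0.5  # no history -> neutral score


def select_thread(table, candidates):
    """Pick the thread whose next instruction has the best hit history.

    candidates: list of (thread_id, next_instruction_address) pairs.
    """
    return max(candidates, key=lambda c: table.hit_rate(c[1]))[0]
```

A scheduler built this way favors threads whose pending instructions have historically hit in the cache/TLB, deprioritizing threads likely to stall.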
-
Publication number: 20160092236
Abstract: A processor includes a mechanism that checks for and flushes only speculative loads and any respective dependent instructions that are younger than an executed wait for event (WEV) instruction, and which also match an address of a store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor. The mechanism may allow speculative loads that do not match the address of any store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Pradeep Kanapathipaillai, Richard F. Russo, Sandeep Gupta, Conrado Blasco
-
Publication number: 20160092237
Abstract: In an aspect, a pipelined execution resource can produce an intermediate result for use in an iterative approximation algorithm in an odd number of clock cycles. The pipelined execution resource executes SIMD requests by staggering commencement of execution of the requests from a SIMD instruction. When executing one or more operations for a SIMD iterative approximation algorithm, and an operation for another SIMD iterative approximation algorithm is ready to begin execution, control logic causes intermediate results completed by the pipelined execution resource to pass through a wait state, before being used in a subsequent computation. This wait state presents two open scheduling cycles in which both parts of the next SIMD instruction can begin execution. Although the wait state increases latency to complete an in-progress algorithm, a total throughput of execution on the pipeline increases.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Kristie Veith, Leonard Rarick, Manouk Manoukian
-
Publication number: 20160092238
Abstract: Systems and methods for implementing certain load instructions, such as vector load instructions, by cooperation of a main processor and a coprocessor. The load instructions which are identified by the main processor for offloading to the coprocessor are committed in the main processor without receiving corresponding load data. Post-commit, the load instructions are processed in the coprocessor, such that latencies incurred in fetching the load data are hidden from the main processor. By implementing an out-of-order load data buffer associated with an in-order instruction buffer, the coprocessor is also configured to avoid stalls due to long latencies which may be involved in fetching the load data from levels of memory hierarchy, such as L2, L3, L4 caches, main memory, etc.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Lucian CODRESCU, Christopher Edward KOOB, Eric Wayne MAHURIN, Suresh Kumar VENKUMAHANTI
-
Publication number: 20160092239
Abstract: An apparatus and method for SIMD unstructured branching. For example, one embodiment of a processor comprises: an execution unit having a plurality of channels to execute instructions; and a branch unit to process unstructured control flow instructions and to maintain a per channel count value for each channel, the branch unit to store instruction pointer tags for the unstructured control flow instructions in a memory and identify the instruction pointer tags using tag addresses, the branch unit to further enable and disable the channels based at least on the per channel count value.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: SUBRAMANIAM MAIYURAN, DARIN M. STARKEY
-
Publication number: 20160092240
Abstract: An apparatus and method for SIMD structured branching. For example, one embodiment of a processor comprises: an execution unit having a plurality of channels to execute instructions; and a branch unit to process control flow instructions and to maintain a per channel count for each channel and a control instruction count for the control flow instructions, the branch unit to enable and disable the channels based at least on the per channel count.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Subramaniam MAIYURAN, Darin M. STARKEY, Thomas A. PIAZZA
-
Publication number: 20160092241
Abstract: Embodiments are directed to a method of adjusting an index, wherein the index identifies a location of an element within an array. The method includes executing, by a computer, a single instruction that adjusts a first parameter of the index to match a parameter of an array address. The single instruction further adjusts a second parameter of the index to match a parameter of the array element. The adjustment of the first parameter includes a sign extension.
Type: Application
Filed: September 29, 2014
Publication date: March 31, 2016
Inventor: Michael K. Gschwind
-
Publication number: 20160092242
Abstract: Aspects of the disclosure relate to methods, systems, and apparatuses of a fast start system. A computing device may automatically restart itself based on a restart schedule from a fast start network server. The computing device may initiate a booting sequence and retrieve login credentials of a user stored in the computing device. Using the stored login credentials, the computing device can log the user in to the system. In response to successfully logging in the user, the computing device may initialize at least one startup application on the computing device. Once the user is successfully logged in, the computing device may automatically lock the computing device to the user to prevent any unauthorized use of the workstation.
Type: Application
Filed: September 29, 2014
Publication date: March 31, 2016
Inventors: Sundar Krishnamoorthy, Suresh G. Nair, Mohana K. Viswanathan
-
Publication number: 20160092243
Abstract: Trusted firmware on a host server is used for managing access to a hardware security module (HSM) connected to the host server. The HSM stores confidential information associated with an operating system. As part of access management, the firmware detects a boot device identifier associated with a boot device configured to boot the operating system on the host server. The firmware then receives a second boot device identifier from the HSM. The boot device identifier and the second boot device identifier are then compared by the firmware. Based on the comparison, the firmware determines that the boot device identifier matches with the second boot device identifier. Based on this determination, the firmware grants the operating system access to the HSM.
Type: Application
Filed: December 18, 2014
Publication date: March 31, 2016
Inventors: Volker M. M. Boenisch, Reinhard Buendgen, Franziska Geisert, Jakob C. Lang, Mareike Lattermann, Angel Nunez Mencias
-
Publication number: 20160092244
Abstract: Various exemplary embodiments relate to a method of configuring a device in a network, the method including loading one or more system configuration commands into an active memory; processing the one or more system configuration commands; loading one or more blocks of customer commands into an active memory; and processing each of the one or more blocks of customer commands, wherein each block is processed as soon as it is loaded into the active memory.
Type: Application
Filed: September 25, 2014
Publication date: March 31, 2016
Inventor: Vishnukumar S. Thumati
-
Publication number: 20160092245
Abstract: Embodiments of the present invention can comprise a data rich tooltip or other graphical or textual preview of a selected object of interest. This preview can provide the user with additional information about the object so that the user does not need to waste time opening multiple objects or records in order to find the desired one. Instead, the summary view can provide enough information that the user does not need to open the record; the tooltip has the information that they want to know about the item. According to one embodiment, the summary view can be generated for each record based on a pre-configured template. Content presented in the summary view can be defined by the template and may be text about the object (e.g., object field values), about other related objects, images, or other information that would help the user find the desired object without opening it.
Type: Application
Filed: September 25, 2014
Publication date: March 31, 2016
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: RACHEL HOGUE, DAVID MOR, FIDEL LEE, ZHENGYAN WANG
-
Publication number: 20160092246
Abstract: Embodiments are directed to utilizing reverse dependency injection for managing bootstrapping of applications in web browser and mobile environments. By using reverse dependency injection, embodiments enable a component to declare that it is a "dependency of" another component in a visual analyzer application. This ensures that the dependencies are loaded before the other component is loaded, thereby minimizing delays when a user starts up an application. In some embodiments, information identifying a plugin to be loaded can be received. Embodiments can determine configuration information for the plugin where the configuration information includes both forward and reverse dependencies. Embodiments may generate, based on the configuration information, a data structure that represents the forward and reverse dependencies. Embodiments may analyze the data structure to determine an ordered list of loadings.
Type: Application
Filed: September 25, 2015
Publication date: March 31, 2016
Inventors: Bo Jonas Birger Lagerblad, Arun Lakshminarayan Katkere
-
Publication number: 20160092247
Abstract: First and second simulated processing of a stream-based computing application using respective first and second simulation conditions may be performed. The first and second simulation conditions may specify first and second operator graph configurations. Each simulated processing may include inputting a stream of test tuples to the stream-based computing application, which may operate on one or more compute nodes. Each compute node may have one or more computer processors and a memory to store one or more processing elements. Each simulated processing may be monitored to determine one or more performance metrics. The first and second simulated processings may be sorted based on a first performance metric to identify a simulated processing having a first rank. An operator graph configuration associated with the simulated processing having the first rank may be selected if the first performance metric for the simulated processing having the first rank is within a processing constraint.
Type: Application
Filed: December 8, 2015
Publication date: March 31, 2016
Inventors: Michael J. Branson, John M. Santosuosso
-
Publication number: 20160092248
Abstract: An application process can be executed based on an initialization instruction, where the application process includes instructions associated with a hook framework. A virtual machine configured to load the hook framework on the virtual machine based on instructions included in the application process can be initiated, and the instructions associated with the hook framework can be executed upon initiation of the virtual machine to insert a hook on the virtual machine. A nascent process configured to initiate an additional virtual machine can be initiated based on a request to load an application, where the additional virtual machine is hooked via the hook inserted on the virtual machine.
Type: Application
Filed: June 28, 2013
Publication date: March 31, 2016
Inventors: Inbar Shani, Sigal Maon, Amichai Nitsan
-
Publication number: 20160092249
Abstract: A computer-implemented method, computer program product, and computing system is provided for providing a framework for logically representing the discretization of logic for a backtracking algorithm. In an implementation, a method may include defining a validation class representing a validation logic to be tested. A processable class may be defined representing a backtracking logic flow to be implemented. The processable class may be associated with the validation class. One or more candidate options may be evaluated based upon, at least in part, the validation logic and the backtracking logic flow.
Type: Application
Filed: September 25, 2014
Publication date: March 31, 2016
Inventor: Douglas C. Ewing
-
Publication number: 20160092250
Abstract: A system for providing dynamic code deployment and versioning is provided. The system may be configured to receive a first request to execute a newer program code on a virtual compute system, determine, based on the first request, that the newer program code is a newer version of an older program code loaded onto an existing container on a virtual machine instance on the virtual compute system, initiate a download of the newer program code onto a second container on the same virtual machine instance, and cause the first request to be processed with the older program code in the existing container.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Timothy Allen Wagner, Sean Philip Reque, Derek Steven Manwaring, Xin Zhao, Dylan Chandler Thomas
-
Publication number: 20160092251
Abstract: A service manages a plurality of virtual machine instances for low latency execution of user codes. The service can provide the capability to execute user code in response to events triggered on an auxiliary service to provide implicit and automatic rate matching and scaling between events being triggered on the auxiliary service and the corresponding execution of user code on various virtual machine instances. An auxiliary service may be configured as an event triggering service to detect events and generate event messages for execution of the user codes. The service can request, receive, or poll for event messages directly from the auxiliary service or via an intermediary message service. Event messages can be rapidly converted to requests to execute user code on the service. The time from processing the event message to initiating a request to begin code execution is less than a predetermined duration, for example, 100 ms.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventor: Timothy Allen Wagner
-
Publication number: 20160092252
Abstract: A service manages a plurality of virtual machine instances for low latency execution of user codes. The plurality of virtual machine instances can be configured based on a predetermined set of configurations. One or more containers may be created within the virtual machine instances. In response to a request to execute user code, the service identifies a pre-configured virtual machine instance suitable for executing the user code. The service can allocate the identified virtual machine instance to the user, create a new container within an instance already allocated to the user, or re-use a container already created for execution of the user code. When the user code has not been activated for a time-out period, the service can invalidate the allocation of the virtual machine instance and destroy the container. The time from receiving the request to beginning code execution is less than a predetermined duration, for example, 100 ms.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventor: Timothy Allen Wagner
-
Publication number: 20160092253
Abstract: A host-side overcommit value is set upon a physical node that implements virtual machines (VM Node). The overcommit value is determined by receiving a selected enablement template that includes a selected computing capacity and a selected overcommit value. A user-side normalization factor is determined that normalizes the selected computing capacity against a reference data handling system. A comparable computing capacity of the VM Node is determined. A host-side normalization factor is determined that normalizes the comparable computing capacity against the reference data handling system. The host-side overcommit value is determined from the selected overcommit value, the user-side normalization factor, and the host-side normalization factor. The host-side overcommit value may indicate the degree the comparable computing capacity is overcommitted to virtual machines deployed upon heterogeneous VM Nodes as normalized against the reference system.
Type: Application
Filed: September 25, 2014
Publication date: March 31, 2016
Inventors: Susan F. Crowell, Jason A. Nikolai, Andrew T. Thorstensen
-
Publication number: 20160092254
Abstract: Methods and systems for providing a communication path are disclosed. Information can be received via a first communication session based on a first messaging protocol. The first communication session can be terminated at a virtual machine of a group of virtual machines. A dynamically bound communication path to a resource can be selected based on a dynamically reconfigurable routing table for the group of virtual machines. A second communication session can be initiated, at the virtual machine, via the selected dynamically bound communication path. The information can be transmitted to the resource via the second communication session based on a second messaging protocol.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Sudhir Borra, Douglas Makofka
-
Publication number: 20160092255
Abstract: A method, system and computer program product for efficiently utilizing a virtual file system cache across cloud computing nodes. A determination is made as to which hypervisors will be able to share all or a portion of the memory in its cache module (look-aside cache) to become a hypervisor in a "pool of hypervisors" based on the workload of the virtual machines run by the hypervisor. All or a portion of the memory in the cache module in each hypervisor in the pool of hypervisors that is available to be utilized by other virtual machines is allocated to form a "shared cache module" to be utilized by virtual machines run by the pool of hypervisors. In this manner, the look-aside cache available to the hypervisor will be utilized more effectively since any available memory can be utilized by other virtual machines running on different hypervisors on different cloud computing nodes.
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Edward R. Bernal, Ivan M. Heninger
-
Publication number: 20160092256
Abstract: Methods and arrangements for managing a consistency group for computing sites. A plurality of computing sites are communicated with, each of the sites comprising one or more of (i) and (ii): (i) at least one virtual machine; and (ii) at least one server. Updates captured at each of the sites are received, and the captured updates are batched. The batched updates are communicated to the plurality of sites, thereby ensuring data consistency across the plurality of sites. Other variants and embodiments are broadly contemplated herein.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Praveen Jayachandran, Shripad J. Nadgowda, Akshat Verma
-
Publication number: 20160092257
Abstract: A physical computing device that operates in a network. The device includes a group of tenant virtual machines (VMs). Each VM is hosted on a host machine that includes a virtualization software. The device receives network bandwidth allocation policies for the group of VMs. The device determines a set of potential communication peers for each VM. The device sends the network bandwidth allocation policy of each VM to the virtualization software of the host machines of each potential communication peer of the VM.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Hua Wang, Jianjun Shen, Donghai Han, Caixia Jiang
-
Publication number: 20160092258
Abstract: Systems and methods for preferentially assigning virtual machines (VMs) on a particular NUMA node with network queues on the same NUMA node are described. A load balancer process on a host assigns multiple VMs to network queues. The assignment of the VMs to network queues is performed with a bias toward assigning VMs using a particular NUMA node to network queues on the same NUMA node. A scheduler on the host assigns VMs to NUMA nodes. The scheduler is biased toward assigning VMs to the same NUMA node as the PNIC and/or the same NUMA node as a network queue assigned to the VM.
Type: Application
Filed: October 31, 2014
Publication date: March 31, 2016
Inventors: Rishi Mehta, Xiaochuan Shen, Amitabha Banerjee, Ayyappan Veeraiyan
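The NUMA-affinity bias this abstract describes can be sketched as a scoring rule: when assigning a VM to a network queue, queues on the VM's own NUMA node have their load discounted so they win over equally (or even somewhat less) loaded remote queues. The function name and the multiplicative bias are illustrative assumptions, not the patent's code.

```python
def assign_queue(vm_node, queues, loads, bias=0.5):
    """Pick the queue with the lowest effective load for a VM on vm_node.

    queues: dict queue_id -> NUMA node of the queue.
    loads:  dict queue_id -> current load on the queue.
    A queue on the VM's own NUMA node has its load multiplied by `bias`,
    so local queues are preferred unless they are much busier.
    """
    def effective(qid):
        load = loads[qid]
        return load * bias if queues[qid] == vm_node else load

    return min(queues, key=effective)
```

With `bias=0.5`, a local queue is chosen as long as it carries less than twice the load of the best remote queue.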
-
Publication number: 20160092259
Abstract: Systems and methods for preferentially assigning virtual machines (VMs) on a particular NUMA node with network queues on the same NUMA node are described. A load balancer process on a host assigns multiple VMs to network queues. The assignment of the VMs to network queues is performed with a bias toward assigning VMs using a particular NUMA node to network queues on the same NUMA node. A scheduler on the host assigns VMs to NUMA nodes. The scheduler is biased toward assigning VMs to the same NUMA node as the PNIC and/or the same NUMA node as a network queue assigned to the VM.
Type: Application
Filed: October 31, 2014
Publication date: March 31, 2016
Inventors: Rishi Mehta, Xiaochuan Shen, Amitabha Banerjee, Ayyappan Veeraiyan
-
Publication number: 20160092260
Abstract: A determination method includes: receiving a request of a change from a first system configured by a first configuration to a second system configured by a second configuration, the request of the change including configuration data related to the first configuration and change data related to the change; extracting a functional requirement for a function that is realized in the first system based on the configuration data; identifying an operational requirement for realizing the first system based on the functional requirement and data about an operational process that is used for the first system; identifying a constraint condition about the second system based on configuration elements of the second configuration that are identified by the configuration data and the change data; and determining feasibility of the change to the second system based on the functional requirement, the operational requirement, and the constraint condition.
Type: Application
Filed: July 23, 2015
Publication date: March 31, 2016
Applicant: FUJITSU LIMITED
Inventors: Shinji KIKUCHI, Yasuhide MATSUMOTO
-
Publication number: 20160092261
Abstract: The present disclosure provides a physical computer virtualization method. The method includes receiving a virtualization instruction inputted by a user on a physical computer; restarting the physical computer; and loading the physical computer with a virtual machine management system mirror image file after restarting the physical computer to boot the physical computer into a virtual machine management system. The method also includes obtaining physical disks of the physical computer; and creating a virtual machine through the virtual machine management system and using the physical disks of the physical computer.
Type: Application
Filed: September 30, 2015
Publication date: March 31, 2016
Inventors: XING LI, LINCHUN HE
-
Publication number: 20160092262
Abstract: A computing device receives information describing one or more workflow components. The computing device determines whether at least one executable step can be determined for each of the one or more workflow components. The computing device provides an indication of whether at least one executable step can be determined for each of the one or more workflow components.
Type: Application
Filed: September 29, 2014
Publication date: March 31, 2016
Inventors: James E. Bostick, John M. Ganci, JR., Sarbajit K. Rakshit, Craig M. Trim
-
Publication number: 20160092263
Abstract: A system and method supports dynamic thread pool sizing suitable for use in a multi-threaded processing environment such as a distributed data grid. Dynamic thread pool resizing utilizes measurements of thread pool throughput and worker thread utilization in combination with analysis of the efficacy of prior thread pool resizing actions to determine whether to add or remove worker threads from a thread pool in a current resizing action. Furthermore, the dynamic thread pool resizing system and method can accelerate or decelerate the iterative resizing analysis and the rate of worker thread addition and removal depending on the needs of the system. Optimizations are incorporated to prevent settling on a local maximum throughput. The dynamic thread pool sizing/resizing system and method thereby provides rapid and responsive adjustment of thread pool size in response to changes in work load and processor availability.
Type: Application
Filed: September 17, 2015
Publication date: March 31, 2016
Inventors: Gene Gleyzer, Jason Howes
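The core idea of judging each resize by the throughput it produced can be sketched as a simple hill-climbing loop: keep resizing in the same direction while throughput improves, and reverse direction when the last action hurt. This is a minimal illustrative model with assumed names (`PoolResizer`, `adjust`), not the patented algorithm, and it omits the acceleration and local-maximum safeguards the abstract mentions.

```python
class PoolResizer:
    """Hill-climbing thread-pool resizer driven by measured throughput."""

    def __init__(self, size, step=1, min_size=1, max_size=64):
        self.size = size
        self.step = step  # +step grows the pool, -step shrinks it
        self.min_size, self.max_size = min_size, max_size
        self.last_throughput = None

    def adjust(self, throughput):
        """One resizing iteration; returns the new pool size.

        If the previous resize reduced throughput, the direction reverses.
        """
        if self.last_throughput is not None and throughput < self.last_throughput:
            self.step = -self.step  # prior action was ineffective: reverse
        self.last_throughput = throughput
        self.size = max(self.min_size, min(self.max_size, self.size + self.step))
        return self.size
```

Each call corresponds to one measurement interval; a production version would also scale `step` up or down and add jitter to escape local throughput maxima.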
-
Publication number: 20160092264
Abstract: A method, system, and computer program product for the prioritization of code execution. The method includes accessing a thread in a context containing a set of code instances stored in memory; identifying sections of the set of code instances that correspond to deferrable code tasks; executing the thread in the context; determining that the thread is idle; and executing at least one of the deferrable code tasks. The deferrable code task is executed within the context and in response to determining that the thread is idle.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Nathan Fontenot, Robert C. Jennings, JR., Joel H. Schopp, Michael T. Strosaker, George C. Wilson
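The run-deferrable-work-when-idle pattern in this abstract can be illustrated with a toy executor: tasks marked deferrable are held back and only executed once the regular work queue drains. The API here is invented for illustration.

```python
from collections import deque


class DeferringExecutor:
    """Runs regular tasks first; deferrable tasks run only when idle."""

    def __init__(self):
        self.work = deque()      # regular tasks
        self.deferred = deque()  # tasks identified as deferrable

    def submit(self, fn, deferrable=False):
        (self.deferred if deferrable else self.work).append(fn)

    def run(self):
        """Drain regular work, then execute deferred tasks once idle."""
        results = []
        while self.work:
            results.append(self.work.popleft()())
        while self.deferred:  # the thread is now idle
            results.append(self.deferred.popleft()())
        return results
```

A real implementation would interleave idle detection with ongoing submissions rather than draining once, but the ordering guarantee is the same: deferrable work never delays regular work.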
-
Publication number: 20160092265
Abstract: A multithreaded application that includes operations on a shared data structure may exploit futures to improve performance. For each operation that targets the shared data structure, a thread of the application may create a future and store it in a thread-local list of futures (under weak or medium futures linearizability policies) or in a shared queue of futures (under strong futures linearizability policies). Prior to a thread evaluating a future, type-specific optimizations may be performed on the list or queue of pending futures. For example, futures may be sorted temporally or by key, or multiple operations indicated in the futures may be combined or eliminated. During an evaluation of a future, a thread may compute the results of the operations indicated in one or more other futures. The order in which operations take effect and the optimization operations performed may be dependent on the futures linearizability policy.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Applicant: Oracle International Corporation
Inventors: Alex Kogan, Maurice P. Herlihy
-
Publication number: 20160092266
Abstract: Software that performs the following steps: (i) running a first customer application on a first set of virtual machine(s), with the first customer application including a first plurality of independently migratable elements, including a first independently migratable element and a second independently migratable element; (ii) dynamically checking a status of the first set of virtual machine(s) to determine whether a first migration condition exists; and (iii) on condition that the first migration condition exists, migrating the first independently migratable element to a second set of virtual machine(s) without migrating the second independently migratable element to the second set of virtual machine(s).
Type: Application
Filed: September 26, 2014
Publication date: March 31, 2016
Inventors: Pankaj S. Bavishi, Ramani R. Routray, Esha Seth, Riyazahamad M. Shiraguppi
-
Publication number: 20160092267
Abstract: In response to receipt of a process-level input request that is subject to business-level requirements, multiple sets of attributes are identified. The sets of attributes are each from one of multiple informational domains that represent processing factors associated with at least the process-level input request, contemporaneous infrastructure processing capabilities, and historical process performance of similar processes. The multiple sets of attributes from the multiple informational domains are hashed as a vector into an initial process prioritization. The attributes of the hashed vector of the multiple sets of attributes from the multiple informational domains are weighted in the initial process prioritization into a hashed-weighted resulting process prioritization. The process-level input request is assigned to a process category based upon the hashed-weighted resulting process prioritization.
Type: Application
Filed: September 27, 2014
Publication date: March 31, 2016
Inventors: Can P. Boyacigiller, Swaminathan Chandrasekaran
-
Publication number: 20160092268
Abstract: A system and method for supporting a scalable thread pool in multi-threaded processing environments such as a distributed data grid. A work distribution system utilizes a collection of association piles to hold elements communicated between a service thread and multiple worker threads. Worker threads associated with the association piles poll elements in parallel. Polled elements are not released until returned from the worker thread. First in first out ordering of operations is maintained with respect to related elements by ensuring related elements are held in the same association pile and preventing polling of related elements until any previously polled and related elements have been released. By partitioning the elements across multiple association piles while ensuring proper ordering of operations with respect to related elements, the scalable thread pool enables the use of large thread pools with reduced contention compared to a conventional single producer multiple consumer queue.
Type: Application
Filed: September 17, 2015
Publication date: March 31, 2016
Inventors: Gene Gleyzer, Jason Howes
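The two invariants the abstract describes — related elements land in the same pile, and an element is not pollable while an earlier related element is still in flight — can be shown in a small sketch. The class and method names are assumptions for illustration, not the patented data structure.

```python
class AssociationPiles:
    """Partitions elements by key across piles, preserving per-key FIFO."""

    def __init__(self, n_piles):
        self.piles = [[] for _ in range(n_piles)]
        self.in_flight = set()  # keys polled but not yet released

    def add(self, key, element):
        # Same key always hashes to the same pile, so related elements
        # can never be processed concurrently by different workers.
        self.piles[hash(key) % len(self.piles)].append((key, element))

    def poll(self, pile_idx):
        """Return the first element whose key is not in flight, else None."""
        for i, (key, elem) in enumerate(self.piles[pile_idx]):
            if key not in self.in_flight:
                self.in_flight.add(key)
                del self.piles[pile_idx][i]
                return key, elem
        return None

    def release(self, key):
        """Called when a worker finishes, making the key pollable again."""
        self.in_flight.discard(key)
```

Unrelated elements in the same pile can be polled past a blocked key, which is what lets many workers drain the piles in parallel without violating per-key ordering.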
-
Publication number: 20160092269
Abstract: A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventors: Gordon Booman, David Kalmuk, Torsten Steinbach
-
Publication number: 20160092270
Abstract: A method is implemented by a network device having a symmetric multi-processing (SMP) architecture. The method improves response time for processes implementing routing algorithms in a network. The method manages core assignments for the processes during a network convergence process. The method includes determining a number of interrupts or system events processed by a subset of cores of a set of cores of a central processing unit and identifying a core within the subset of cores with a lowest number of interrupts or system events processed. The method further includes changing an affinity mask of at least one process implementing the routing algorithms during the network convergence to target the core within the subset of cores with a lowest number of interrupts or system events processed.
Type: Application
Filed: September 30, 2014
Publication date: March 31, 2016
Inventor: Ramesh Uniyal
-
Publication number: 20160092271. Abstract: A method, system and computer program product for efficiently utilizing connections in connection pools. A period of time during which an application running on a virtual machine needs a greater number of connections to an external resource than allocated in its pool of connections is identified. The connection pool for this application, as well as the connection pools for the other applications containing connections to the same external resource, are merged to form a logical pool of connections to be shared by those applications during the identified period of time. Alternatively, in an application server cluster environment, the connection pools utilized by the application servers to access the external resource may be reconfigured based on the weight assigned to each member (or application server) of the cluster, which is based on the member's load size. In these ways, the resource connections in these pools of connections are utilized more efficiently. Type: Application. Filed: September 30, 2014. Publication date: March 31, 2016. Inventors: Rispna Jain, Anoop Gowda Malaly Ramachandra
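The pool-merging step reduces to pooling the connections of every application that targets the same external resource into one logical pool for the identified period. This minimal sketch (class and names are assumptions, not from the patent) shows only that step:

```python
class ConnectionPool:
    def __init__(self, name, connections):
        self.name = name
        self.connections = list(connections)

def merge_pools(pools):
    """Sketch: form one logical pool from per-application pools that
    target the same external resource, for a shared-demand period."""
    merged = []
    for p in pools:
        merged.extend(p.connections)
    return ConnectionPool("logical", merged)

app_a = ConnectionPool("app-a", ["c1", "c2"])
app_b = ConnectionPool("app-b", ["c3", "c4", "c5"])
logical = merge_pools([app_a, app_b])
capacity = len(logical.connections)  # app-a can now draw on all 5 connections
```

When the period of elevated demand ends, the logical pool would be split back into the per-application pools; that reverse step is omitted here.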
-
Publication number: 20160092272. Abstract: Methods, systems, and computer programs are presented for allocating CPU cycles and disk Input/Outputs (IOs) to resource-creating processes based on dynamic weights that change according to the current percentage of resource utilization in the storage device. One method includes operations for assigning a first weight to a processing task that increases resource utilization of a resource for processing incoming input/output (IO) requests, and for assigning a second weight to a generating task that decreases the resource utilization of the resource. Further, the method includes an operation for dynamically adjusting the second weight based on the current resource utilization in the storage system. Additionally, the method includes an operation for allocating the CPU cycles and disk IOs to the processing task and to the generating task based on their respective first weight and second weight. Type: Application. Filed: October 30, 2015. Publication date: March 31, 2016. Inventors: Gurunatha Karaje, Tomasz Barszczak, Vanco Buca, Ajay Gulati, Umesh Maheshwari
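The weight dynamics can be illustrated with an assumed policy: as utilization climbs, the generating (space-reclaiming) task's weight grows, so it receives a larger share of CPU cycles and disk IOs. The adjustment formula and proportional-share split below are illustrative, not the patent's.

```python
def dynamic_second_weight(base_weight, utilization, floor=0.1):
    """Assumed policy: boost the generating task's weight as the
    resource fills, so reclamation keeps pace with consumption."""
    assert 0.0 <= utilization <= 1.0
    return max(floor, base_weight * (1.0 + utilization))

def allocate(cycles, first_weight, second_weight):
    """Split available cycles proportionally to the two weights."""
    total = first_weight + second_weight
    return cycles * first_weight / total, cycles * second_weight / total

w2_low = dynamic_second_weight(1.0, 0.10)   # lightly utilized device
w2_high = dynamic_second_weight(1.0, 0.90)  # nearly full device
proc_share, gen_share = allocate(1000, first_weight=3.0, second_weight=w2_high)
```

At 90% utilization the generating task's weight nearly doubles, pulling a correspondingly larger slice of the 1000 cycles away from request processing.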
-
Publication number: 20160092273. Abstract: A memory management system for managing objects which represent memory in a multi-threaded operating system extracts the ID of the home free-list from the object header to determine whether the object is remote and adds the object to a remote object list if the object is determined to be remote. The memory management system determines whether the number of objects on the remote object list exceeds a threshold. If the threshold is exceeded, the system batch-removes the objects on the remote object list and then adds those objects to the appropriate one or more remote home free-lists. Type: Application. Filed: September 25, 2014. Publication date: March 31, 2016. Inventors: Christopher Reed, Mark Horsburgh
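The batching scheme can be sketched directly from the abstract: read the home free-list ID from the object's header, buffer remote objects locally, and flush the buffer to the home free-lists once it exceeds a threshold. The data shapes (`dict` headers, a `FreeList` class) are illustrative assumptions.

```python
class FreeList:
    def __init__(self, list_id):
        self.list_id = list_id
        self.objects = []

def release(obj_header, local_list, remote_buffer, home_lists, threshold=3):
    """Sketch: objects whose home free-list differs from the releasing
    thread's list are buffered, then batch-returned past a threshold."""
    home_id = obj_header["home"]            # home free-list ID from header
    if home_id == local_list.list_id:
        local_list.objects.append(obj_header)
        return
    remote_buffer.append(obj_header)        # object is remote
    if len(remote_buffer) > threshold:
        # Batch-remove and route each object to its home free-list.
        for o in remote_buffer:
            home_lists[o["home"]].objects.append(o)
        remote_buffer.clear()

homes = {0: FreeList(0), 1: FreeList(1)}
local = homes[0]
remote = []
for i in range(5):
    release({"home": 1, "obj": i}, local, remote, homes)
batched = len(homes[1].objects)  # four objects flushed in one batch
```

Batching amortizes the synchronization cost of touching another thread's free-list: one flush moves four objects, instead of four separately synchronized returns.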
-
Publication number: 20160092274. Abstract: Heterogeneous thread scheduling techniques are described in which a processing workload is distributed to heterogeneous processing cores of a processing system. The heterogeneous thread scheduling may be implemented based upon a combination of periodic assessments of system-wide power management considerations used to control states of the processing cores and higher-frequency thread-by-thread placement decisions that are made in accordance with thread-specific policies. In one or more implementations, a system workload context is periodically analyzed for a processing system having heterogeneous cores including power-efficient cores and performance-oriented cores. Based on the periodic analysis, core states are set for some of the heterogeneous cores to control activation of the power-efficient cores and performance-oriented cores for thread scheduling. Type: Application. Filed: September 26, 2014. Publication date: March 31, 2016. Inventors: Neeraj Kumar Singh, Tristan A. Brown, Jeremiah S. Samli, Jason S. Wohlgemuth, Youssef Maged Barakat
-
Publication number: 20160092275. Abstract: A computer-implemented method for scheduling a set of jobs executed in a computer system can include determining a workload-time parameter for a set of at least one job. The workload-time parameter can relate to execution-time parameters for the set of at least one job. The method can include determining a schedule tuning parameter for the set of at least one job, the schedule tuning parameter based on the workload-time parameter. The method can include generating a scheduling factor for each job in the set, the scheduling factor generated based on the schedule tuning parameter. The method can include scheduling the set of at least one job based on the scheduling factor. Type: Application. Filed: May 8, 2015. Publication date: March 31, 2016. Inventors: Gordon Booman, David Kalmuk, Torsten Steinbach
-
Publication number: 20160092276. Abstract: Embodiments of the present invention provide systems and methods for mapping the architected state of one or more threads to a set of distributed physical register files to enable independent execution of one or more threads in a multiple-slice processor. In one embodiment, a system is disclosed including a plurality of dispatch queues which receive instructions from one or more threads and an even number of parallel execution slices, each parallel execution slice containing a register file. A routing network directs an output from the dispatch queues to the parallel execution slices, and the parallel execution slices independently execute the one or more threads. Type: Application. Filed: September 29, 2015. Publication date: March 31, 2016. Inventors: Sam G. Chu, Markus Kaltenbach, Hung Q. Le, Jentje Leenstra, Jose E. Moreira, Dung Q. Nguyen, Brian W. Thompto
-
Publication number: 20160092277. Abstract: A host-side overcommit value is set upon a physical node that implements virtual machines (VM Node). The overcommit value is determined by receiving a selected enablement template that includes a selected computing capacity and a selected overcommit value. A user-side normalization factor is determined that normalizes the selected computing capacity against a reference data handling system. A comparable computing capacity of the VM Node is determined. A host-side normalization factor is determined that normalizes the comparable computing capacity against the reference data handling system. The host-side overcommit value is determined from the selected overcommit value, the user-side normalization factor, and the host-side normalization factor. The host-side overcommit value may indicate the degree to which the comparable computing capacity is overcommitted to virtual machines deployed upon heterogeneous VM Nodes, as normalized against the reference system. Type: Application. Filed: August 25, 2015. Publication date: March 31, 2016. Inventors: Susan F. Crowell, Jason A. Nikolai, Andrew T. Thorstensen
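The abstract names three inputs (selected overcommit value, user-side normalization factor, host-side normalization factor) without stating how they combine. One plausible reading is a ratio of the two normalization factors; the formula below is an assumption offered only to make the arithmetic concrete, and the 100-unit reference capacity is hypothetical.

```python
def normalization_factor(capacity, reference_capacity):
    """Normalize a capacity against the reference data handling system."""
    return capacity / reference_capacity

def host_side_overcommit(selected_overcommit, selected_capacity,
                         host_capacity, reference_capacity=100.0):
    # Assumed combination of the three quantities named in the abstract.
    user_norm = normalization_factor(selected_capacity, reference_capacity)
    host_norm = normalization_factor(host_capacity, reference_capacity)
    return selected_overcommit * user_norm / host_norm

# A 4:1 overcommit chosen against a 50-unit template, deployed on a
# 200-unit VM Node, works out to a 1:1 host-side overcommit.
oc = host_side_overcommit(4.0, selected_capacity=50.0, host_capacity=200.0)
```

Under this reading, a more capable host absorbs the user's chosen overcommit without actually overcommitting its own capacity.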
-
Publication number: 20160092278. Abstract: In accordance with an embodiment, described herein is a system and method for providing a partition file system in a multitenant application server environment. The system enables application server components to work with partition-specific files for a given partition, instead of or in addition to domain-wide counterpart files. The system also allows the location of some or all of a partition-specific storage to be specified by higher levels of the software stack. In accordance with an embodiment, also described herein is a system and method for resource overriding in a multitenant application server environment, which provides a means for administrators to customize, at a resource group level, resources that are defined in a resource group template referenced by a partition, and to override resource definitions for particular partitions. Type: Application. Filed: September 25, 2015. Publication date: March 31, 2016. Inventors: TIMOTHY QUINN, RAJIV MORDANI, SNJEZANA SEVO-ZENZEROVIC, JOSEPH DI POL, NAZRUL ISLAM
-
Publication number: 20160092279. Abstract: According to one general aspect, a scheduler computing device may include a computing task memory configured to store at least one computing task. The computing task may be executed by a data node of a distributed computing system, wherein the distributed computing system includes at least one data node, each data node having a central processor and an intelligent storage medium, wherein the intelligent storage medium comprises a controller processor and a memory. The scheduler computing device may include a processor configured to assign the computing task to be executed by either the central processor of a data node or the intelligent storage medium of the data node, based, at least in part, upon an amount of data associated with the computing task. Type: Application. Filed: March 19, 2015. Publication date: March 31, 2016. Inventors: Jaehwan LEE, Yang Seok KI
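The scheduling decision reduces to a data-size test: tasks that touch a lot of data are cheaper to run on the storage medium's controller processor (near the data) than to ship across the node to the central processor. The 64 MiB threshold below is an illustrative assumption; the abstract only says the decision is based "at least in part" on the amount of data.

```python
def assign_task(data_bytes, threshold=64 * 1024 * 1024):
    """Sketch: route large-data tasks to the intelligent storage
    medium's controller processor (near-data processing), and
    small-data tasks to the data node's central processor."""
    if data_bytes >= threshold:
        return "intelligent_storage"
    return "central_processor"

small = assign_task(4 * 1024)                 # 4 KiB lookup
large = assign_task(1 * 1024 * 1024 * 1024)   # 1 GiB scan
```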
-
Publication number: 20160092280. Abstract: A method for operating a lock in a computing system having plural processing units and running under multiple runtime environments is provided. When a requester thread attempts to acquire the lock while the lock is held by a holder thread, determine whether the holder thread is suspendable or non-suspendable. If the holder thread is non-suspendable, put the requester thread in a spin state regardless of whether the requester thread is suspendable or non-suspendable; otherwise, determine whether the requester thread is suspendable or non-suspendable, unless the requester thread quits acquiring the lock. If the requester thread is non-suspendable, arrange for the requester thread to attempt to acquire the lock again; otherwise, add the requester thread to a wait queue as an additional suspended thread. Suspended threads stored in the wait queue may be resumed later for lock acquisition. The method is applicable to computing systems with a multicore processor. Type: Application. Filed: September 30, 2014. Publication date: March 31, 2016. Inventors: Yi AI, Lin XU, Jianchao LU, Shaohua ZHANG
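The abstract is effectively a decision table on two booleans. The sketch below encodes just that table, with the action names (`spin`, `retry`, `enqueue`) chosen for illustration; the actual queue and resume machinery is out of scope here.

```python
def lock_action(holder_suspendable, requester_suspendable):
    """Decision table from the abstract: what a requester does when
    the lock is already held by another thread."""
    if not holder_suspendable:
        # A non-suspendable holder will finish soon and cannot be
        # preempted: spin regardless of the requester's kind.
        return "spin"
    if not requester_suspendable:
        # A non-suspendable requester may not block: retry the acquire.
        return "retry"
    # Both suspendable: park the requester on the wait queue, from
    # which it can be resumed later for lock acquisition.
    return "enqueue"

actions = {
    (h, r): lock_action(h, r)
    for h in (True, False) for r in (True, False)
}
```

Spinning behind a non-suspendable holder avoids the suspend/resume round trip for a critical section that is, by construction, short-lived.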
-
Publication number: 20160092281. Abstract: An information processing apparatus includes a generation unit configured to generate a second script for setting the specified setting value, and an execution unit configured to execute a first script using the work setting value and the plurality of setting values to be set excluding the specified setting value, wherein the execution unit executes the generated second script after executing the first script. Type: Application. Filed: September 23, 2015. Publication date: March 31, 2016. Inventor: Jun Nakawaki
-
Publication number: 20160092282. Abstract: A first feature (e.g., chart or table) includes a reference to a dynamic pointer. Independently, the pointer is defined to point to a second feature (e.g., a query). The first feature is automatically updated to reflect a current value of the second feature. The reference to the pointer and the pointer definition are recorded in a central registry, and changes to the pointer or second feature automatically cause the first feature to be updated to reflect the change. A mapping between features can be generated using the registry and can identify interrelationships to a developer. Further, changes in the registry can be tracked, such that a developer can view changes pertaining to a particular time period and/or feature of interest (e.g., corresponding to an operation problem). Type: Application. Filed: December 8, 2015. Publication date: March 31, 2016. Applicant: Splunk Inc. Inventor: Itay A. Neeman
-
Publication number: 20160092283. Abstract: An orchestrator executes an end-to-end process across applications. The executing of the end-to-end process by the orchestrator comprises executing flow logic by the orchestrator, the flow logic according to a data model defining arguments to include in interactions between the orchestrator and each of the applications. A message broker exchanges information among the orchestrator and the applications. Type: Application. Filed: December 3, 2015. Publication date: March 31, 2016. Inventors: Stephane Herman Maes, Lars Rossen, Woong Joseph Kim, Keith Kuchler, Jan Vana, Petr Fiedler, Ankit Ashok Desai, Christopher William Johnson, Michael Yang
-
Publication number: 20160092284. Abstract: A method for data storage includes reading from a memory device data that is stored in a group of memory cells as respective analog values, and classifying readout errors in the read data into at least first and second different types, depending on zones in which the analog values fall. A memory quality that emphasizes the readout errors of the second type is assigned to the group of the memory cells, based on evaluated numbers of the readout errors of the first and second types. Type: Application. Filed: September 30, 2014. Publication date: March 31, 2016. Inventors: Yael Shur, Eyal Gurgi, Moshe Neerman, Naftali Sommer
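The two-step scheme above (zone-based error classification, then a quality metric weighted toward the second error type) can be sketched as follows. The zone boundaries, the "soft"/"hard" labels, and the 4x weight are all illustrative assumptions; the abstract specifies only that the quality emphasizes second-type errors.

```python
def classify_error(analog_value, boundary_zone=(0.4, 0.6)):
    """Sketch: an erroneous read whose analog value lies near the
    decision threshold is a first-type ('soft') error; a value deep
    in the wrong zone is a second-type ('hard') error."""
    lo, hi = boundary_zone
    return "soft" if lo <= analog_value <= hi else "hard"

def memory_quality(errors, hard_weight=4.0):
    """Assign a quality metric that emphasizes second-type errors."""
    soft = sum(1 for e in errors if e == "soft")
    hard = sum(1 for e in errors if e == "hard")
    return soft + hard_weight * hard

# Hypothetical analog values of four erroneous reads from one cell group.
errs = [classify_error(v) for v in (0.45, 0.55, 0.9, 0.1)]
quality = memory_quality(errs)
```

A higher metric flags a lower-quality cell group: two hard errors here contribute four times the penalty of the two soft ones.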