Patents Issued on January 21, 2016
-
Publication number: 20160019050
Abstract: When providing a user with native access to at least a portion of device hardware, the user can be prevented from modifying firmware and other configuration information by controlling the mechanisms used to update that information. For example, a clock or a timer mechanism can be used by a network interface card to define a mutability period. During the mutability period, firmware update to a peripheral device can be allowed. Once the mutability period has expired, firmware update to a peripheral device will no longer be allowed.
Type: Application
Filed: September 25, 2015
Publication date: January 21, 2016
Inventors: Michael David Marr, Matthew T. Corddry, James R. Hamilton
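The abstract above describes gating firmware updates behind a timer-defined mutability window. A minimal sketch of that idea follows; the class, method names, and the flash_write placeholder are illustrative assumptions, not taken from the application.

```python
import time

class FirmwareUpdateGate:
    """Allows firmware updates only during a fixed mutability period."""

    def __init__(self, mutability_seconds: float):
        self.mutability_seconds = mutability_seconds
        self.opened_at = time.monotonic()  # window starts when the device is provisioned

    def update_allowed(self) -> bool:
        # Updates are permitted only while the mutability period is still running.
        return (time.monotonic() - self.opened_at) < self.mutability_seconds

    def apply_update(self, image: bytes) -> bool:
        if not self.update_allowed():
            return False          # window expired: refuse the write
        flash_write(image)        # hypothetical low-level flash programming routine
        return True

def flash_write(image: bytes) -> None:
    # Placeholder for the device-specific flash programming step.
    pass

gate = FirmwareUpdateGate(mutability_seconds=300)
print(gate.apply_update(b"\x00" * 16))  # True while the 5-minute window is still open
```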
-
Publication number: 20160019051
Abstract: One embodiment of the present invention provides a system for facilitating an upgrade of a cluster of servers in the presence of one or more inaccessible nodes in the cluster. During operation, the system upgrades a version of a distributed software program on each of a plurality of nodes in the cluster. The system may detect that one or more nodes of the cluster are inaccessible. The system continues to upgrade nodes in the cluster other than the one or more nodes that were detected to be inaccessible, in which upgrading involves installing and activating a newer version of the distributed software on the nodes being upgraded. The system then upgrades an acting version of the cluster.
Type: Application
Filed: September 25, 2015
Publication date: January 21, 2016
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Sameer Joshi, Jonathan Creighton, Suman R. Bezawada, Kannabran Viswanathan
-
Publication number: 20160019052
Abstract: An apparatus and method are provided for automatically installing an application in different terminals by storing terminal information of a user, so that when the user installs an application in at least two terminals, the installation process may be conducted automatically. Information related to an application installed in a first terminal is received from the first terminal, and a second terminal is requested to install another application corresponding to the application by using the received information related to the application.
Type: Application
Filed: September 28, 2015
Publication date: January 21, 2016
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Youl-woong SUNG, Jong-baek KIM, Il-joo KIM, Young-chul SOHN, Soo-min SHIN, Ho JIN
-
Publication number: 20160019053
Abstract: In one embodiment, a method receives a software upgrade package for a management computer and main computer. The management computer upgrades software of the management computer using the software upgrade package where the upgrading replaces an image of the software of the management computer with an image from the software upgrade package. Upon upgrade of the management computer, the management computer initiates an upgrade of the main computer. The main computer withdraws use of the services, and upon the withdrawing, the management computer reboots the main computer. Then, the main computer upgrades software of the main computer using the software upgrade package upon rebooting where the upgrading replaces an image of the software of the main computer with an image from the software upgrade package. Upon the upgrading, the main computer restores the use of the services.
Type: Application
Filed: September 28, 2015
Publication date: January 21, 2016
Applicant: OC ACQUISITION LLC
Inventors: Matthew Gambardella, Matthew Garrett, Bryan Payne, Joe Heck, Devin Carlen, Mike Szilagyi, Mark Gius, Ken Caruso, Paul McMillan, Yona Benjamin Mankin
-
Publication number: 20160019054
Abstract: A method and system for generating a ROM patch are provided. In one embodiment, a computing device obtains an original assembly code and a modified assembly code which is a modified version of the original assembly code, the original assembly code being used for an executable code which is stored in a ROM of a device. The computing device compares the original assembly code and the modified assembly code to identify difference(s) in the modified assembly code with respect to the original assembly code. The computing device then compiles the difference(s) (sometimes, after adjusting the differences) and generates a ROM patch by converting the compiled difference(s) into a replacement executable code for some of the executable code stored in the ROM of the device. In another embodiment, a method and system for using a ROM patch are disclosed.
Type: Application
Filed: July 21, 2014
Publication date: January 21, 2016
Applicant: SanDisk Technologies Inc.
Inventor: Shahar Bar-Or
-
Publication number: 20160019055
Abstract: Techniques for runtime patching of an OS without stopping execution of the OS are presented. When a patch function is needed, it is loaded into the OS code. Threads of the OS that are in kernel mode have a flag set and a jump is inserted at a location of an old function. When the old function is accessed, the jump uses a trampoline to check the flag: if the flag is set, processing returns to the old function; otherwise processing jumps to a given location of the patch. Flags are unset when exiting or entering the kernel mode.
Type: Application
Filed: September 28, 2015
Publication date: January 21, 2016
Inventors: Vojtech Pavlík, Jirí Kosina
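The abstract sketches a trampoline that routes callers of an old function either back to the original code or to the patched replacement, depending on a per-thread flag set while the thread was in kernel mode. The Python sketch below only models that routing decision; a real live-patching mechanism rewrites machine code in place, and the function names here are invented for illustration.

```python
import threading

_in_kernel_before_patch = threading.local()  # per-thread flag set on kernel entry

def old_function(x):
    return x + 1            # original behavior compiled into the kernel

def patched_function(x):
    return x + 2            # replacement loaded when the patch is applied

def trampoline(x):
    # Threads that entered kernel mode before the patch was applied keep the
    # old semantics until they leave the kernel; all other callers get the patch.
    if getattr(_in_kernel_before_patch, "flag", False):
        return old_function(x)
    return patched_function(x)

print(trampoline(1))  # 3: this thread was not mid-kernel when the patch landed
```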
-
Publication number: 20160019056
Abstract: Techniques for automatically identifying input files used to generate output files in a software build process are provided. In one embodiment, a computer system can execute one or more build commands for generating output files for a software product, where the software product is associated with a build tree comprising various input files. The computer system can further intercept system calls invoked during the execution of the one or more build commands and can collect information pertaining to at least a portion of the intercepted system calls. The computer system can then create a dependency graph based on the collected information, where the dependency graph identifies a subset of input files in the build tree that are actually used by the one or more build commands to generate the output files.
Type: Application
Filed: July 15, 2014
Publication date: January 21, 2016
Inventor: Michael Rohan
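The abstract describes intercepting system calls made by build commands and turning the observed file accesses into a dependency graph from inputs to outputs. A rough sketch of that post-processing step follows; the (command, syscall, path) trace format is an assumption made for illustration, not the application's actual data model.

```python
from collections import defaultdict

def build_dependency_graph(trace_events):
    """trace_events: iterable of (command, syscall, path) tuples collected while
    the build commands ran, e.g. ("cc -c foo.c", "open_read", "foo.c")."""
    graph = defaultdict(set)   # output path -> set of input paths
    reads = defaultdict(set)   # command -> paths it has opened for reading
    for command, syscall, path in trace_events:
        if syscall == "open_read":
            reads[command].add(path)
        elif syscall == "open_write":
            # Every file the command read so far is a dependency of this output.
            graph[path] |= reads[command]
    return dict(graph)

events = [
    ("cc -c foo.c", "open_read", "foo.c"),
    ("cc -c foo.c", "open_read", "foo.h"),
    ("cc -c foo.c", "open_write", "foo.o"),
]
print(build_dependency_graph(events))  # {'foo.o': {'foo.c', 'foo.h'}} (set order may vary)
```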
-
Publication number: 20160019057
Abstract: Managing sets of parameter values includes: receiving a plurality of sets of parameter values for a generic computer program, and processing log entries associated with executions of instances of the generic computer program, each instance associated with one or more parameter values. The processing includes: analyzing the generic computer program to classify each of one or more parameters associated with the generic computer program as a member of either a first class or a second class; processing a log entry associated with an execution of a first instance of the generic computer program to form a particular set of parameter values; and determining whether to add the particular set of parameter values to the plurality of sets of parameter values based on a comparison of a first identifier for the particular set of parameter values to identifiers for at least some of the sets of parameter values.
Type: Application
Filed: July 20, 2015
Publication date: January 21, 2016
Inventors: Edward Bach, Richard Oberdorf, Brond Larson
-
Publication number: 20160019058
Abstract: A method and apparatus for verifying code integrity on a client, the method comprising: determining a verification object on the client; generating a plurality of verification sequences, wherein each verification sequence comprises a memory access mode, and a verification algorithm; randomly selecting a verification sequence from the plurality of verification sequences, and obtaining a server verification result for the verification object in accordance with the selected verification sequence; sending the selected verification sequence to the client; receiving a client verification result for the verification object calculated by the client in accordance with the selected verification sequence; and comparing the server verification result with the client verification result to obtain a code verification result.
Type: Application
Filed: September 29, 2015
Publication date: January 21, 2016
Applicant: TENCENT TECHNOLOGY (SHENZHEN) COMPANY LIMITED
Inventors: Hejun Hu, Zhaohui Yin, Fei Cao, Zhigang Zhou
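The abstract describes a server that picks a random verification sequence (a memory access pattern plus an algorithm), computes its own result over the protected code, and compares it with the client's answer. A simplified sketch under those assumptions follows: the code region is treated as a byte string, the access modes are reduced to forward/reverse traversal, and SHA-256 stands in for the per-sequence verification algorithm.

```python
import hashlib
import random

VERIFICATION_SEQUENCES = [
    {"order": "forward", "algorithm": "sha256"},
    {"order": "reverse", "algorithm": "sha256"},
]

def compute_result(code: bytes, sequence: dict) -> str:
    data = code if sequence["order"] == "forward" else code[::-1]
    return hashlib.new(sequence["algorithm"], data).hexdigest()

def verify_client(server_copy: bytes, client_compute) -> bool:
    sequence = random.choice(VERIFICATION_SEQUENCES)   # randomly selected sequence
    expected = compute_result(server_copy, sequence)   # server-side verification result
    reported = client_compute(sequence)                # client computes over its own memory
    return expected == reported                        # mismatch suggests tampered code

# Honest client: it hashes the same bytes the server has on record.
code_region = b"\x55\x48\x89\xe5"
print(verify_client(code_region, lambda seq: compute_result(code_region, seq)))  # True
```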
-
Publication number: 20160019059
Abstract: Task implementation tools are registered to interface with a change management tool. The change management tool configures a plurality of tasks to implement a change to an information technology infrastructure's configuration, and sends a plurality of task instructions to the plurality of task implementation tools for performing the plurality of tasks. Each task instruction is directed to a targeted task implementation tool for performing a particular task and includes task-related information for the targeted implementation tool to perform the particular task. Registering the plurality of task implementation tools to interface with the change management tool includes registering each of the plurality of task implementation tools to accept the task instruction for the particular task directed to it from the change management tool, to know what to do with the task instruction and to know how to respond to the task instruction.
Type: Application
Filed: October 1, 2015
Publication date: January 21, 2016
Inventors: Kia BEHNIA, Douglas MUELLER
-
Publication number: 20160019060
Abstract: Enforcing loop-carried dependency (LCD) during dataflow execution of loop instructions by out-of-order processors (OOPs), and related circuits, methods, and computer-readable media, is disclosed. In one aspect, a reservation station circuit is provided, comprising one or more reservation station segments configured to store a consumer loop instruction. Each reservation station segment also includes an operand buffer for each operand of the consumer loop instruction, the operand buffer indicating a producer loop instruction and an LCD distance between the producer loop instruction and the consumer loop instruction. Each reservation station segment receives an execution result of the producer loop instruction, and a loop iteration indicator that indicates a current loop iteration for the producer loop instruction.
Type: Application
Filed: September 15, 2014
Publication date: January 21, 2016
Inventors: Karamvir Singh Chatha, Michael Alexander Howard, Rick Seokyong Oh, Ramesh Chandra Chauhan
-
Publication number: 20160019061
Abstract: Managing dataflow execution of loop instructions by out-of-order processors (OOPs), and related circuits, methods, and computer-readable media are disclosed. In one aspect, a reservation station circuit is provided. The reservation station circuit includes multiple reservation station segments, each storing a loop instruction of a loop of a computer program. Each reservation station segment also stores an instruction execution credit indicating whether the corresponding loop instruction may be provided for dataflow execution. The reservation station circuit further includes a dataflow monitor that distributes an initial instruction execution credit to each reservation station segment. As each loop iteration is executed, each reservation station segment determines whether the instruction execution credit indicates that the loop instruction for the reservation station segment may be provided for dataflow execution.
Type: Application
Filed: September 15, 2014
Publication date: January 21, 2016
Inventors: Karamvir Singh Chatha, Michael Alexander Howard, Rick Seokyong Oh, Ramesh Chandra Chauhan
-
Publication number: 20160019062
Abstract: A processor includes a core and an event-based sampler. The core includes logic to execute and retire an instruction. The event-based sampler includes logic to determine a subset of a plurality of execution data of the processor from a register. The register includes bits specifying a subset of execution data. The event-based sampler further includes logic to selectively collect the determined subset of execution data upon retirement of the instruction and to store the selectively collected execution data.
Type: Application
Filed: July 16, 2014
Publication date: January 21, 2016
Inventors: Ahmad Yasin, Peggy J. Irelan, Grant G. Zhou
-
Publication number: 20160019063
Abstract: A processor of an aspect includes a decode unit to decode a thread pause instruction from a first thread. A back-end portion of the processor is coupled with the decode unit. The back-end portion of the processor, in response to the thread pause instruction, is to pause processing of subsequent instructions of the first thread for execution. The subsequent instructions occur after the thread pause instruction in program order. The back-end portion, in response to the thread pause instruction, is also to keep at least a majority of the back-end portion of the processor empty of instructions of the first thread, except for the thread pause instruction, for a predetermined period of time. The majority may include a plurality of execution units and an instruction queue unit.
Type: Application
Filed: July 21, 2014
Publication date: January 21, 2016
Applicant: Intel Corporation
Inventors: Lihu Rappoport, Zeev Sperber, Michael Mishaeli, Stanislav Shwartsman, Lev Makovsky, Adi Yoaz, Ofer Levy
-
Publication number: 20160019064
Abstract: Approaches are described to improve database performance by implementing an RLE decompression function at a low level within a general-purpose processor or an external block. Specifically, embodiments of a hardware implementation of an instruction for RLE decompression are disclosed. The described approaches improve performance by supporting the RLE decompression function within a processor and/or external block. Specifically, an RLE decompression hardware implementation is disclosed that produces a 64-bit RLE decompression result, with an example embodiment performing the task in two pipelined execution stages with a throughput of one per cycle. According to embodiments, hardware organization of narrow-width shifters operating in parallel, controlled by computed shift counts, is used to perform the decompression.
Type: Application
Filed: September 28, 2015
Publication date: January 21, 2016
Inventors: JEFFREY S. BROOKS, ROBERT GOLLA, ALBERT DANYSH, SHASANK CHAVAN, PRATEEK AGRAWAL, ANDREW EWOLDT, DAVID WEAVER
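For reference, the functional behavior being accelerated is ordinary run-length-encoding expansion, shown below in a few lines of Python. The (symbol, run_length) input format is an assumption for illustration only; the application concerns the hardware bit layout and pipelined shifter organization, not this software loop.

```python
def rle_decompress(runs):
    """Expand a list of (symbol, run_length) pairs into the original sequence."""
    out = []
    for symbol, run_length in runs:
        out.extend([symbol] * run_length)   # the hardware does this with parallel shifters
    return out

print(rle_decompress([("a", 3), ("b", 1), ("c", 2)]))  # ['a', 'a', 'a', 'b', 'c', 'c']
```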
-
Publication number: 20160019065
Abstract: A data processing apparatus has prefetch circuitry for prefetching cache lines of instructions into an instruction cache. A prefetch lookup table is provided for storing prefetch entries, with each entry corresponding to a region of a memory address space and identifying at least one block of one or more cache lines within the corresponding region from which processing circuitry accessed an instruction on a previous occasion. When the processing circuitry executes an instruction from a new region, the prefetch circuitry looks up the table, and if it stores a prefetch entry for the new region, then the at least one block identified by the corresponding entry is prefetched into the cache.
Type: Application
Filed: July 17, 2014
Publication date: January 21, 2016
Inventors: Mitchell Bryan HAYENGA, Christopher Daniel EMMONS
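The abstract describes a table keyed by memory region that records which cache-line blocks were touched on a previous visit, so those blocks can be prefetched when the region is entered again. A dictionary-based sketch of that bookkeeping follows; the region and line sizes are illustrative assumptions.

```python
REGION_SIZE = 4096   # bytes per region (assumed)
LINE_SIZE = 64       # bytes per cache line (assumed)

prefetch_table = {}  # region index -> set of block indices accessed on a previous occasion

def record_access(pc: int) -> None:
    region, block = pc // REGION_SIZE, (pc % REGION_SIZE) // LINE_SIZE
    prefetch_table.setdefault(region, set()).add(block)

def on_enter_region(pc: int) -> list:
    """Return the cache-line blocks to prefetch when execution enters a region."""
    blocks = prefetch_table.get(pc // REGION_SIZE, set())
    return sorted(blocks)   # these blocks were fetched the last time this region ran

record_access(0x1040)
record_access(0x10C0)
print(on_enter_region(0x1000))  # [1, 3]
```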
-
Publication number: 20160019066
Abstract: A method, system, and computer program product for executing divergent threads using a convergence barrier are disclosed. A first instruction in a program is executed by a plurality of threads, where the first instruction, when executed by a particular thread, indicates to a scheduler unit that the thread participates in a convergence barrier. A first path through the program is executed by a first divergent portion of the participating threads and a second path through the program is executed by a second divergent portion of the participating threads. The first divergent portion of the participating threads executes a second instruction in the program and transitions to a blocked state at the convergence barrier. The scheduler unit determines that all of the participating threads are synchronized at the convergence barrier and the convergence barrier is cleared.
Type: Application
Filed: July 13, 2015
Publication date: January 21, 2016
Inventors: Gregory Frederick Diamos, Richard Craig Johnson, Vinod Grover, Olivier Giroux, Jack H. Choquette, Michael Alan Fetterman, Ajay S. Tirumala, Peter Nelson, Ronny Meir Krashinsky
-
Publication number: 20160019067
Abstract: In an embodiment, a method is provided. The method includes managing user-level threads on a first instruction sequencer in response to executing user-level instructions on a second instruction sequencer that is under control of an application level program. A first user-level thread is run on the second instruction sequencer and contains one or more user level instructions. A first user level instruction has at least 1) a field that makes reference to one or more instruction sequencers or 2) implicitly references with a pointer to code that specifically addresses one or more instruction sequencers when the code is executed.
Type: Application
Filed: September 26, 2015
Publication date: January 21, 2016
Inventors: Hong Wang, John P. Shen, Edward T. Grochowski, Richard A. Hankins, Gautham N. Chinya, Bryant E. Bigbee, Shivnandan D. Kaushik, Xiang Chris Zou, Per Hammarlund, Scott Dion Rodgers, Xinmin Tian, Anil Aggawal, Prashant Sethi, Baiju V. Patel, James P Held
-
Publication number: 20160019068
Abstract: In one embodiment, a method includes receiving a request to execute first program code that is configured to perform a step of a computation, wherein the request includes a current state of the computation, determining whether the first program code is to be invoked based on an execution condition, when the execution condition is true, executing the first program code based on the current state of the computation, and returning a response that includes a result of executing the first program code, and when the execution condition is false, returning a response indicating that the result of the executing is invalid. The execution condition may be false when an amount of time that has passed since a previous execution of the first program code is greater than a threshold time limit.
Type: Application
Filed: July 18, 2014
Publication date: January 21, 2016
Inventors: Ari Alexander Grant, Jonanthan P. Dann
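The abstract describes running a step of a computation only when an execution condition holds, where the condition can fail if too much time has elapsed since the previous execution. A small sketch of such a request handler follows; the threshold value and field names are invented for illustration.

```python
import time

THRESHOLD_SECONDS = 5.0
_last_run = None

def handle_request(step, state):
    """Run step(state) only when the execution condition is true."""
    global _last_run
    now = time.monotonic()
    stale = _last_run is not None and (now - _last_run) > THRESHOLD_SECONDS
    if stale:
        # Too long since the previous execution: report the result as invalid.
        return {"valid": False, "result": None}
    _last_run = now
    return {"valid": True, "result": step(state)}

print(handle_request(lambda s: s + 1, 41))  # {'valid': True, 'result': 42}
```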
-
Publication number: 20160019069
Abstract: A network element (NE) comprising a receiver configured to couple to a cloud network; and a multi-core central processing unit (CPU) coupled to the receiver and configured to receive a first partition configuration from an orchestration element, partition a plurality of processor cores into a plurality of processor core partitions according to the first partition configuration, and initiate a plurality of virtual basic input/output systems (vBIOSs) such that each vBIOS manages a processor core partition.
Type: Application
Filed: July 15, 2015
Publication date: January 21, 2016
Inventors: An Wei, Kangkang Shen
-
Publication number: 20160019070
Abstract: A method for configuring a connection in a storage system is provided. A configuring device determines that the configuring device cannot communicate with a first control board, and identifies route information related to the first control board in a route information table. The route information is route information between an adapter card and the first control board. The configuring device modifies the identified route information by changing an address of the first control board in the route information to an address of a second control board.
Type: Application
Filed: September 25, 2015
Publication date: January 21, 2016
Inventor: Jiaolin LUO
-
Publication number: 20160019071
Abstract: Exemplary embodiments provide methods, mediums, and systems for generating a runtime environment that is customized to a particular computer program, particularly in terms of the function definitions that support function calls made in the computer program. The customized runtime environment may therefore be smaller in size than a conventional runtime environment. To create such a customized runtime environment, an analyzer may be provided which monitors test executions of the computer program and/or performs a structural analysis of the source code of the computer program. The analyzer may determine a list of probabilistically or deterministically required function definitions, and provide the list to a component reducer. The component reducer may eliminate any function definitions not deemed to be required from a runtime environment, thereby producing a customized runtime environment that is built to support a particular computer program.
Type: Application
Filed: July 15, 2014
Publication date: January 21, 2016
Inventors: Peter Hartwell WEBB, James T. STEWART, Todd FLANAGAN
-
Publication number: 20160019072
Abstract: Embodiments of the present invention provide a method, system and computer program product for dynamic selection of a runtime classloader for a generated class file. In an embodiment of the invention, a method for dynamic selection of a runtime classloader for a generated class file is provided. The method includes extracting meta-data from a program object directed for execution in an application server and determining from the meta-data a container identity for a container in which the program object had been compiled. The method also includes selecting a container according to the meta-data. Finally, the method includes classloading the program object in the selected container.
Type: Application
Filed: September 29, 2015
Publication date: January 21, 2016
Inventors: Erik J. Burckart, Andrew Ivory, Todd E. Kaplinger, Stephen J. Kenna, Aaron K. Shook
-
Publication number: 20160019073
Abstract: A method and architecture for using dynamically loaded plugins is described herein. The dynamically loaded plugin architecture comprises a parent context and a plugin repository. The parent context may define one or more reusable software components. The plugin repository may store one or more plugins. When a plugin is loaded, a child context may be created dynamically. The child context is associated with the plugin and inherits the one or more reusable software components from the parent context.
Type: Application
Filed: May 22, 2015
Publication date: January 21, 2016
Inventors: Alan Chaney, Clay Cover, Gregory A. Bolcer, Andrey Mogilev
-
Publication number: 20160019074
Abstract: A method comprising, in a cloud computing system: receiving a new job at the cloud computing system; sampling VMs (Virtual Machines) of the cloud computing system for the load currently handled by each of the VMs; if the load currently handled by the VMs is within operational bounds, sending the new job to one of the VMs which currently handles the highest load compared to other ones of the VMs; and if the load currently handled by the VMs is beyond operational bounds, sending the new job to one of the VMs which currently handles the lowest load compared to other ones of the VMs.
Type: Application
Filed: July 14, 2015
Publication date: January 21, 2016
Inventors: Amir Nahir, Ariel Orda, Dan Raz
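The scheduling rule in this abstract is simple enough to state directly: pack the new job onto the busiest VM while all sampled loads stay within the operational bound, and fall back to the least-loaded VM once the bound is exceeded. A sketch under that reading, with illustrative load values and bound:

```python
def choose_vm(vm_loads: dict, operational_bound: float) -> str:
    """vm_loads maps VM name -> sampled load; returns the VM to receive the new job."""
    if all(load <= operational_bound for load in vm_loads.values()):
        # Within bounds: consolidate work on the most loaded VM.
        return max(vm_loads, key=vm_loads.get)
    # Beyond bounds: relieve pressure by using the least loaded VM.
    return min(vm_loads, key=vm_loads.get)

print(choose_vm({"vm1": 0.42, "vm2": 0.61, "vm3": 0.15}, operational_bound=0.8))  # vm2
print(choose_vm({"vm1": 0.42, "vm2": 0.93, "vm3": 0.15}, operational_bound=0.8))  # vm3
```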
-
Publication number: 20160019075
Abstract: Performing a checkpoint includes determining a checkpoint boundary of the checkpoint for a virtual machine, wherein the virtual machine has a first virtual processor, determining a scheduled hypervisor interrupt for the first virtual processor, and adjusting, by operation of one or more computer processors, the scheduled hypervisor interrupt to before or substantially at the checkpoint boundary.
Type: Application
Filed: September 30, 2015
Publication date: January 21, 2016
Inventor: David A. Larson
-
Publication number: 20160019076
Abstract: A method comprises pairing a virtual machine instance with a virtual agent that is registered with a registry in an execution environment. In this regard, upon instantiating the virtual machine and the corresponding virtual agent, the virtual agent monitors for transaction(s), e.g., a specific invoked method, on that execution environment. The virtual agent is also configured for generating an event in response to detecting the transaction. The virtual agent provides a unique signature associated with the event, which identifies the origin of the virtual machine instance. Still further, the virtual agent is configured for forwarding the event to the registry for collating with other events so as to produce composite end-to-end logs of processes in a manner that enables provenance.
Type: Application
Filed: July 15, 2014
Publication date: January 21, 2016
Inventor: Eamonn Lawler
-
Publication number: 20160019077
Abstract: A virtualization manager executing on a processing device adds a host to a list of hosts associated with the virtualization manager. The virtualization manager identifies a list of external VMs running on the host that are not managed by the virtualization manager. The virtualization manager obtains detailed information for each of the external VMs running on the host from an agent running on the host. The virtualization manager then manages the external VMs running on the host using the detailed information.
Type: Application
Filed: July 15, 2014
Publication date: January 21, 2016
Inventor: Oved Ourfali
-
Publication number: 20160019078
Abstract: A method, system and computer program product are provided for implementing dynamic adjustment of Input/Output bandwidth for Virtual Machines of a Single Root Input/Output Virtualization (SRIOV) adapter. The SRIOV adapter includes a plurality of virtual functions (VFs). Each individual virtual function (VF) is enabled to be explicitly assigned to a Virtual Machine (VM); and each of a plurality of VF teams is created with one or more VFs and is assigned to a VM. Each VF team is enabled to be dynamically resizable for dynamic adjustment of Input/Output bandwidth.
Type: Application
Filed: July 16, 2014
Publication date: January 21, 2016
Inventors: Narsimha R. Challa, Charles S. Graham, Swaroop Jayanthi, Sailaja R. Keshireddy, Adam T. Stallman
-
Publication number: 20160019079
Abstract: Methods and systems for I/O acceleration using an I/O accelerator device on a virtualized information handling system include pre-boot configuration of first and second device endpoints that appear as independent devices. After loading a storage virtual appliance that has exclusive access to the second device endpoint, a hypervisor may detect and load drivers for the first device endpoint. The storage virtual appliance may then initiate data transfer I/O operations using the I/O accelerator device. The data transfer operations may be read or write operations to a storage device that the storage virtual appliance provides access to. The I/O accelerator device may use direct memory access (DMA).
Type: Application
Filed: July 16, 2014
Publication date: January 21, 2016
Inventors: Gaurav CHAWLA, Robert Wayne HORMUTH, Shyamkumar T. IYER, Duk M. KIM
-
Publication number: 20160019080
Abstract: A method, system and computer program product for allocating storage for virtual machine instances. The input/output (I/O) usage of disk extents utilized by a virtual machine is saved in an I/O profile of the virtual machine. In response to deallocating the virtual machine, the I/O usage of the disk extents is extracted from its I/O profile and saved in a data structure. Upon starting a new instance of the virtual machine, new disk extents are allocated to the new virtual machine instance. The I/O usage of the disk extents for the previous incarnation of the virtual machine is applied to the disk extents allocated to the new virtual machine instance. The newly allocated disk extents can now be placed in either a solid-state drive device or a hard disk drive device based on this I/O history without requiring a twenty-four hour long cycle.
Type: Application
Filed: July 18, 2014
Publication date: January 21, 2016
Inventors: Hao T. Chang, Catherine C. Diep, Harold H. Hall, JR.
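The abstract keeps per-extent I/O usage from a VM's previous incarnation and reuses it to decide whether each newly allocated extent belongs on SSD or HDD. A sketch of that placement step follows; the threshold, field names, and the positional mapping between old and new extents are assumptions for illustration.

```python
HOT_IOPS_THRESHOLD = 100   # extents above this historical rate go to SSD (assumed)

def place_new_extents(saved_io_profile, new_extents):
    """saved_io_profile: I/O rates from the previous VM incarnation, one per extent;
    new_extents: extent ids allocated to the new VM instance, in matching order."""
    placement = {}
    for extent_id, past_iops in zip(new_extents, saved_io_profile):
        # Apply the old extent's history to the freshly allocated extent.
        placement[extent_id] = "ssd" if past_iops >= HOT_IOPS_THRESHOLD else "hdd"
    return placement

print(place_new_extents([350, 12, 95, 400], ["e0", "e1", "e2", "e3"]))
# {'e0': 'ssd', 'e1': 'hdd', 'e2': 'hdd', 'e3': 'ssd'}
```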
-
Publication number: 20160019081
Abstract: In a computer-implemented method for viewing a snapshot of a virtual machine, during operation of a virtual machine in a first console, at least one snapshot of the virtual machine is presented for selection, wherein the snapshot includes a previous state of the virtual machine. Responsive to a selection of the snapshot, a second virtual machine of the selected snapshot is deployed in a second console, wherein the second virtual machine is deployed without closing the virtual machine in the first console.
Type: Application
Filed: July 21, 2014
Publication date: January 21, 2016
Inventors: Rahul CHANDRASEKARAN, Ravi Kant CHERUKUPALLI, Uttam GUPTA
-
Publication number: 20160019082
Abstract: In a computer-implemented method for comparing states of a virtual machine, a plurality of selectable states including a current state of a virtual machine and at least one snapshot of the virtual machine are presented for selection, wherein the at least one snapshot includes a state of the virtual machine at a previous time. Responsive to a selection of at least two states of the plurality of selectable states, a comparison tool for comparing information between the at least two states of the virtual machine is presented.
Type: Application
Filed: July 21, 2014
Publication date: January 21, 2016
Inventors: Rahul CHANDRASEKARAN, Ravi Kant CHERUKUPALLI, Uttam GUPTA
-
Publication number: 20160019083
Abstract: In a computer-implemented method for modifying a state of a virtual machine, information between two states of a virtual machine is compared, wherein the two states include a current state of the virtual machine and a previous state of the virtual machine. The previous state of the virtual machine is included within a snapshot of the virtual machine at the previous state. Information that is different between the two states is identified. The information that is different between the two states is presented, wherein the information that is different is selectable for copying between the two states.
Type: Application
Filed: July 21, 2014
Publication date: January 21, 2016
Inventors: Rahul CHANDRASEKARAN, Ravi Kant CHERUKUPALLI, Uttam GUPTA
-
Publication number: 20160019084
Abstract: A method is disclosed for providing a high-level local manager in each data center of a group of data centers. The high-level local manager is configured to allocate a new virtual machine or re-allocate an already running virtual machine. The high-level local managers exchange information with each other and run the same programs or processes, so that each local manager knows where the new virtual machine is to be assigned. Once it is determined which data center will execute the virtual machine, the method provides for a low-level local manager to assign the virtual machine to one of the servers of the data center according to a local algorithm.
Type: Application
Filed: September 30, 2014
Publication date: January 21, 2016
Applicant: ECO4CLOUD S.R.L.
Inventors: Agostino Forestiero, Raffaele Giordanelli, Carlo Mastroianni, Giuseppe Papuzzo
-
Publication number: 20160019085
Abstract: A provisioning server automatically configures a virtual machine (VM) according to user specifications and then deploys the VM on a physical host. The user may either choose from a list of pre-configured, ready-to-deploy VMs, or he may select which hardware, operating system and application(s) he would like the VM to have. The provisioning server then configures the VM accordingly, if the desired configuration is available, or it applies heuristics to configure a VM that best matches the user's request if it isn't. The invention also includes mechanisms for monitoring the status of VMs and hosts, for migrating VMs between hosts, and for creating a network of VMs.
Type: Application
Filed: May 19, 2015
Publication date: January 21, 2016
Inventors: Dilip KHANDEKAR, Dragutin PETKOVIC, Pratap SUBRAHMANYAM, Bich Cau LE
-
Publication number: 20160019086
Abstract: An apparatus and method for generating a Software Defined Network (SDN)-based virtual network. The apparatus includes a network information generator and a virtual network generator, in which an SDN-based virtual network desired by a user may be generated efficiently by allocating physical resources to reflect various user demands.
Type: Application
Filed: July 16, 2015
Publication date: January 21, 2016
Inventors: Byung Yun LEE, Yong Yoon SHIN, Ji Young KWAK, Sae Hoon KANG, Sun Hee YANG
-
Publication number: 20160019087
Abstract: A method, system, and computer-readable medium for providing a secure computer network for the real time transfer of data are provided. The data is grouped and stored as per user preferences. The data being transmitted is encrypted, decrypted, and validated by the system (assuming user identifications/passwords are verified).
Type: Application
Filed: September 29, 2015
Publication date: January 21, 2016
Inventor: Eileen Chu Hing
-
Publication number: 20160019088
Abstract: According to one aspect of the present disclosure, a method and technique for mobility operation resource allocation is disclosed. The method includes: receiving a request to migrate a running application from a first machine to a second machine; displaying an adjustable resource allocation mobility setting interface indicating a plurality of mobility settings comprising at least one performance-based mobility setting and at least one concurrency-based mobility setting; receiving, via the interface, a selection of a mobility setting defining a resource allocation to utilize for the migration; and migrating the running application from the first machine to the second machine utilizing resources as set by the selected mobility setting.
Type: Application
Filed: September 30, 2015
Publication date: January 21, 2016
Inventors: Maria Garza, Neal R. Marion, Nathaniel S. Tomsic, Vasu Vallabhaneni
-
Publication number: 20160019089
Abstract: Provided is a method and system for scheduling computing so as to meet the quality of service (QoS) expected in a system by identifying the operation characteristic of an application in real time and enabling all nodes in the system to dynamically change the schedulers thereof organically between each other. The scheduling method includes: detecting an event of requesting a scheduler change; selecting a scheduler corresponding to the event among schedulers; and changing a scheduler of a node, which schedules use of the control unit, to the selected scheduler, without rebooting the node.
Type: Application
Filed: March 11, 2014
Publication date: January 21, 2016
Inventors: Hyunku Jeong, Sungmin Lee
-
Publication number: 20160019090
Abstract: A data processing control device performs a MapReduce process. When the data processing control device assigns input data to first Reduce tasks and a second Reduce task performed by using a result of Map processes, it assigns to the second Reduce task an amount of input data smaller than any of the amounts assigned to the first Reduce tasks. The data processing control device assigns the first Reduce tasks and the second Reduce task, to which input data is assigned, to a server that performs Reduce processes in the MapReduce process such that the second Reduce task is started after the assignment of all of the first Reduce tasks.
Type: Application
Filed: July 2, 2015
Publication date: January 21, 2016
Applicant: FUJITSU LIMITED
Inventors: Nobuyuki KUROMATSU, Yuichi MATSUDA, Haruyasu UEDA
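The assignment rule described above can be illustrated compactly: give the second Reduce task less input than any first Reduce task receives, and only start it after every first Reduce task has been assigned. The sketch below is one possible reading with invented partition sizes; the round-robin split is an assumption, not the application's actual partitioning scheme.

```python
def assign_reduce_tasks(partition_sizes, num_first_tasks):
    """Return (first_task_assignments, second_task_assignment) where the second
    Reduce task receives less input than any first Reduce task."""
    sizes = sorted(partition_sizes, reverse=True)
    first = [[] for _ in range(num_first_tasks)]
    # Larger partitions go round-robin to the first Reduce tasks.
    for i, size in enumerate(sizes[:-1]):
        first[i % num_first_tasks].append(size)
    second = [sizes[-1]]   # the smallest partition is reserved for the second task
    assert sum(second) < min(sum(task) for task in first)
    return first, second

first, second = assign_reduce_tasks([40, 35, 25, 20, 5], num_first_tasks=2)
print(first, second)  # [[40, 25], [35, 20]] [5]
# Scheduling order: submit every first Reduce task, then start the second one.
```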
-
Publication number: 20160019091
Abstract: A system includes a processor and a non-transitory computer-readable medium. The non-transitory computer-readable medium comprises instructions executable by the processor to cause the system to perform a method. The method comprises receiving a first job to execute and executing the first job. A plurality of data associated with the first job is determined. The plurality of data comprises data associated with (i) a second job executed immediately prior to the first job, (ii) a third job executed immediately after the first job, (iii) a determination of whether the first job failed or executed successfully and (iv) a type of data associated with the first job. The determined plurality of data is stored.
Type: Application
Filed: July 18, 2014
Publication date: January 21, 2016
Inventors: Christina Ann Leber, John A. Interrante, Kareem Sherif Aggour, Jenny Marie Weisenberg Williams
-
Publication number: 20160019092
Abstract: A method and an apparatus for closing a program, and a storage medium are provided. The method includes: opening, by a mobile terminal, a task management area of a multitasking processing queue, where a response area is provided in the task management area; detecting, by the mobile terminal, a specified operation of a user in the response area; and closing, by the mobile terminal, all programs in the multitasking processing queue in response to the specified operation detected in the response area. An operation of closing a background program is simplified, and a user can easily close a background program just by performing a specified operation in a response area; therefore, the operation is simple and convenient.
Type: Application
Filed: July 20, 2015
Publication date: January 21, 2016
Inventors: Cancai YUAN, Yingru DENG, Junwei LIU, Xinxin ZHANG, Lei LONG
-
Publication number: 20160019093
Abstract: The system and method generally relate to reducing heat dissipated within a data center, and more particularly, to a system and method for reducing heat dissipated within a data center through service level agreement analysis, and resultant reprioritization of jobs to maximize energy efficiency. A computer implemented method includes performing a service level agreement (SLA) analysis for one or more currently processing or scheduled processing jobs of a data center using a processor of a computer device. Additionally, the method includes identifying one or more candidate processing jobs for a schedule modification from amongst the one or more currently processing or scheduled processing jobs using the processor of the computer device. Further, the method includes performing the schedule modification for at least one of the one or more candidate processing jobs using the processor of the computer device.
Type: Application
Filed: September 22, 2015
Publication date: January 21, 2016
Inventors: Christopher J. DAWSON, Vincenzo V. DI LUOFFO, Rick A. HAMILTON, II, Michael D. KENDZIERSKI
-
Publication number: 20160019094
Abstract: A computer-implemented system and method facilitate dynamically allocating server resources. The system and method include determining a current queue distribution, referencing historical information associated with execution of at least one task, and predicting, based on the current queue distribution and the historical information, a total number of tasks of various task types that are to be executed during a time period in the future. Based on this prediction, a resource manager determines a number of servers that should be instantiated for use during the time period in the future.
Type: Application
Filed: July 18, 2014
Publication date: January 21, 2016
Inventors: Jozef Habdank, Tadeusz Habdank-Wojewodzki
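The abstract combines the current queue distribution with historical per-task-type information to predict upcoming work and size the server pool accordingly. A back-of-the-envelope sketch follows; the per-task durations, planning window, and the assumption that each server is fully available for the window are invented for illustration.

```python
import math

def servers_needed(queue_counts, historical_seconds_per_task, window_seconds):
    """Estimate how many servers to instantiate for the coming time window,
    assuming each server contributes window_seconds of compute capacity."""
    total_work = sum(
        queued * historical_seconds_per_task[task_type]   # expected seconds of work
        for task_type, queued in queue_counts.items()
    )
    return max(1, math.ceil(total_work / window_seconds))

print(servers_needed(
    {"render": 120, "transcode": 30},
    {"render": 12.0, "transcode": 45.0},
    window_seconds=600,
))  # (120*12 + 30*45) seconds of work / 600 seconds per server -> 5 servers
```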
-
Publication number: 20160019095
Abstract: A data processing system includes physical computing resources that include a plurality of processors. The plurality of processors include a first processor having a first processor type and a second processor having a second processor type that is different than the first processor type. The data processing system also includes a resource manager to assign portions of the physical computing resources to be used when executing logical partitions. The resource manager is configured to assign a first portion of the physical computing resources to a logical partition, to determine characteristics of the logical partition, the characteristics including a memory footprint characteristic, to assign a second portion of the physical computing resources based on the characteristics of the logical partition, and to dispatch the logical partition to execute using the second portion of the physical computing resources.
Type: Application
Filed: September 15, 2015
Publication date: January 21, 2016
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Diane G. Flemming, Octavian F. Herescu, William A. Maron, Mysore S. Srinivas
-
Publication number: 20160019096
Abstract: A deployment system enables a developer to define a logical, multi-tier application blueprint that can be used to create and manage (e.g., redeploy, upgrade, backup, patch) multiple applications in a cloud infrastructure. In the application blueprint, the developer models an overall application architecture, or topology, that includes individual and clustered nodes (e.g., VMs), logical templates, cloud providers, deployment environments, software services, application-specific code, properties, and dependencies between top-tier and second-tier components. The application can be deployed according to the application blueprint, which means any needed VMs are provisioned from the cloud infrastructure, and application components and software services are installed.
Type: Application
Filed: June 1, 2015
Publication date: January 21, 2016
Inventors: David WINTERFELDT, Komal MANGTANI, Sesh JALAGAM, Vishwas NAGARAJA
-
Publication number: 20160019097
Abstract: The purpose of the invention is to simplify the work of setting migration WWNs used in live migration of LPARs. Hypervisor management software of a management computer acquires and stores, in a storage unit, WWNs set for logical FC-HBAs of hypervisors of computers and host information including a WWN of a source capable of accessing a logical unit (LU) of a storage device. The hypervisor management software uses such information as a basis to output, on a display screen, information indicating whether or not a migration WWN, which is a WWN value of a logical FC-HBA used at migration of a virtual computer of the computer, can be used to access the LU.
Type: Application
Filed: December 7, 2012
Publication date: January 21, 2016
Applicant: Hitachi, Ltd.
Inventors: Gaku SAITO, Satoshi NAKAMICHI, Atsushi ITO
-
Publication number: 20160019098
Abstract: A cloud manager controls the deployment and management of machines for an online service. A build system creates deployment-ready virtual hard disks (VHDs) that are installed on machines that are spread across one or more networks in farms that each may include different configurations. The build system is configured to build VHDs of differing configurations that depend on a role of the virtual machine (VM) for which the VHD will be used. The build system uses the VHDs to create virtual machines (VMs) in both test and production environments for the online service. The cloud manager system automatically provisions machines with the created virtual hard disks (VHDs). Identical VHDs can be installed directly on the machines that have already been tested.
Type: Application
Filed: May 22, 2015
Publication date: January 21, 2016
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Jason M. Cahill, Alexander Hopmann, Marc Keith Windle, Erick Raymundo Lerma
-
Publication number: 20160019099
Abstract: A method of calculating a processing power available from a supervisor of a multi-programmed computing system by a first partition of a plurality of partitions, the method comprising collecting, by the first partition, state data from the supervisor, the state data including a processing capacity of the multi-programmed computing system. The method further comprises initializing a remaining capacity variable to the processing capacity of the multi-programmed computing system; initializing variables, including setting a binary variable to a first logic value for each of the plurality of partitions; iteratively computing an entitlement and amount of power to award for each of the plurality of partitions having their respective binary variables set to the first logic value; and requesting the processing power from the supervisor, based on the iterative computation.
Type: Application
Filed: July 17, 2014
Publication date: January 21, 2016
Inventor: Brian K. Wade
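The abstract outlines an iterative award computation over the partitions whose per-partition binary variable is still set. One common shape for such a loop is a capped proportional share: award each still-eligible partition its weighted slice of the remaining capacity, cap the award at the partition's demand, clear the partition's flag once its demand is met, and repeat with the leftover. That reading is an assumption; the sketch below only illustrates the iteration pattern, with invented weights and demands.

```python
def compute_awards(total_capacity, weights, demands):
    """Iteratively award capacity to partitions (capped proportional share)."""
    eligible = {p: True for p in weights}   # binary variable per partition
    awards = {p: 0.0 for p in weights}
    remaining = total_capacity              # remaining capacity variable
    while remaining > 1e-9 and any(eligible.values()):
        active_weight = sum(w for p, w in weights.items() if eligible[p])
        leftover = 0.0
        for p in weights:
            if not eligible[p]:
                continue
            share = remaining * weights[p] / active_weight   # entitlement this round
            award = min(share, demands[p] - awards[p])       # never exceed the demand
            awards[p] += award
            leftover += share - award
            if awards[p] >= demands[p] - 1e-9:
                eligible[p] = False          # demand met: drop out of later rounds
        remaining = leftover
    return awards

print(compute_awards(100, {"a": 1, "b": 1, "c": 2}, {"a": 10, "b": 60, "c": 60}))
# {'a': 10.0, 'b': 30.0, 'c': 60.0}: a's surplus is redistributed to b and c
```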