Patent Applications Published on December 8, 2016
-
Publication number: 20160357551
Abstract: A conditional fetch-and-phi operation tests a memory location to determine if the memory location stores a specified value and, if so, modifies the value at the memory location. The conditional fetch-and-phi operation can be implemented so that it can be concurrently executed by a plurality of concurrently executing threads, such as the threads of a wavefront at a GPU. To execute the conditional fetch-and-phi operation, one of the concurrently executing threads is selected to execute a compare-and-swap (CAS) operation at the memory location, while the other threads await the results. The CAS operation tests the value at the memory location and, if the CAS operation is successful, the value is passed to each of the concurrently executing threads.
Type: Application
Filed: June 2, 2015
Publication date: December 8, 2016
Inventors: David A. Wood, Steven K. Reinhardt, Bradford M. Beckmann, Marc S. Orr
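The mechanism this abstract describes can be sketched as follows: one thread of a group is selected to run the CAS, and the remaining threads wait at a barrier for the outcome. This is a minimal illustration, not the patent's implementation; the class and function names are invented, and a lock stands in for hardware atomicity.

```python
import threading

class Word:
    """Simulated memory word with an atomic compare-and-swap."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        """Atomically replace the value if it equals `expected`.
        Returns (success, value now stored)."""
        with self._lock:
            if self.value == expected:
                self.value = new
                return True, new
            return False, self.value

def wavefront_fetch_and_phi(word, expected, phi, n_threads=4):
    """All threads of a 'wavefront' participate; thread 0 is the
    selected thread that runs the CAS, the rest wait at a barrier
    and then read the shared outcome."""
    outcome = {}
    barrier = threading.Barrier(n_threads)

    def worker(tid):
        if tid == 0:                       # the selected thread
            outcome["result"] = word.cas(expected, phi(expected))
        barrier.wait()                     # others await the result

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return outcome["result"]

word = Word(10)
ok, value = wavefront_fetch_and_phi(word, expected=10, phi=lambda v: v + 1)
```

After a successful call, every thread observes the same updated value; a failed CAS instead reports the value actually found at the location.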
-
Publication number: 20160357552
Abstract: An arithmetic processing device includes an instruction decode unit, an instruction execution unit and an instruction hold unit, wherein the instruction hold unit includes: a first holder including a plurality of first entries each configured to hold a decoded instruction; a second holder including a smaller number of second entries than the number of the first entries; a first selector configured to select an instruction to be registered in the second holder from instructions held in the first entries and store identification information that identifies the selected instruction into any of the second entries; and a second selector configured to sequentially select an executable instruction from instructions registered in the second holder, input the selected executable instruction to the instruction execution unit, and detect a dependency between the instruction inputted to the instruction execution unit and the instructions registered in the second holder.
Type: Application
Filed: May 20, 2016
Publication date: December 8, 2016
Applicant: FUJITSU LIMITED
Inventors: Sota SAKASHITA, Yasunobu AKIZUKI
-
Publication number: 20160357553
Abstract: Restricted instructions are prohibited from execution within a transaction. There are classes of instructions that are restricted regardless of type of transaction: constrained or nonconstrained. There are instructions only restricted in constrained transactions, and there are instructions that are selectively restricted for given transactions based on controls specified on instructions used to initiate the transactions.
Type: Application
Filed: August 16, 2016
Publication date: December 8, 2016
Inventors: Dan F. Greiner, Christian Jacobi, Timothy J. Slegel
-
Publication number: 20160357554
Abstract: An apparatus comprises a processing pipeline comprising out-of-order execution circuitry and second execution circuitry. Control circuitry monitors at least one reordering metric indicative of an extent to which instructions are executed out of order by the out-of-order execution circuitry, and controls whether instructions are executed using the out-of-order execution circuitry or the second execution circuitry based on the reordering metric. A speculation metric indicative of a fraction of executed instructions that are flushed due to a mis-speculation can also be used to determine whether to execute instructions on first or second execution circuitry having different performance or energy consumption characteristics.
Type: Application
Filed: June 5, 2015
Publication date: December 8, 2016
Inventors: Ian Michael CAULFIELD, Peter Richard GREENHALGH, Simon John CRASKE, Max John BATLEY, Allan John SKILLMAN, Antony John PENTON
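The selection logic this abstract describes can be sketched as a pure policy function: route work to the out-of-order unit only while reordering is actually being exploited and mis-speculation flushes stay low. The threshold values and names below are illustrative assumptions, not figures from the patent.

```python
def choose_execution_circuitry(reorder_metric, speculation_metric,
                               reorder_threshold=0.25, flush_threshold=0.10):
    """Pick execution circuitry from two monitored metrics:
    - reorder_metric: fraction of instructions executed out of order
    - speculation_metric: fraction of executed instructions flushed
      due to mis-speculation
    The out-of-order circuitry is chosen only when reordering pays off
    and flushes are rare; otherwise the second (lower-energy) circuitry
    is used.  Thresholds are assumed, not taken from the patent."""
    if reorder_metric >= reorder_threshold and speculation_metric <= flush_threshold:
        return "out-of-order"
    return "second-unit"
```

A controller would re-evaluate this choice periodically as the monitored metrics drift with program phase.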
-
Publication number: 20160357555
Abstract: A method for coordinating the transfer of data between external memory and an array of data processors using address generators and local memory. The method includes loading a plurality of groups of operands into local memory, processing the plurality of groups of operands on a single processor, and then returning the processed results to external memory.
Type: Application
Filed: August 1, 2016
Publication date: December 8, 2016
Applicant: PACT XPP TECHNOLOGIES AG
Inventors: Martin Vorbach, Volker Baumgarte, Frank May, Armin Nuckel
-
Publication number: 20160357556
Abstract: Systems, methods, and apparatuses for data speculation execution (DSX) are described. In some embodiments, a hardware apparatus for performing DSX comprises a hardware decoder to decode an instruction, the instruction to include an opcode, and execution hardware to execute the decoded instruction to continue a data speculative execution (DSX) and to determine that a DSX loop iteration is to be committed, commit speculative stores associated with the DSX loop iteration, and start a new DSX loop iteration.
Type: Application
Filed: December 24, 2014
Publication date: December 8, 2016
Inventors: Elmoustapha OULD-AHMED-VALL, Christopher J. HUGHES, Robert VALENTINE, Milind B. GIRKAR
-
Publication number: 20160357557
Abstract: A Vector Floating Point Test Data Class Immediate instruction is provided that determines whether one or more elements of a vector specified in the instruction are of one or more selected classes and signs. If a vector element is of a selected class and sign, an element in an operand of the instruction corresponding to the vector element is set to a first defined value, and if the vector element is not of the selected class and sign, the operand element corresponding to the vector element is set to a second defined value.
Type: Application
Filed: August 16, 2016
Publication date: December 8, 2016
Inventors: Jonathan D. Bradbury, Eric M. Schwarz
-
Publication number: 20160357558
Abstract: A transient load instruction for a processor may include a transient or temporary load instruction that is executed in parallel with a plurality of input operands. The temporary load instruction loads a memory value into a temporary location for use within the instruction packet. According to some examples, a VLIW based microprocessor architecture may include a temporary cache for use in writing/reading a temporary memory value during a single VLIW packet cycle. The temporary cache differs from the normal register bank, which does not allow writing and then reading the just-written value during the same VLIW packet cycle.
Type: Application
Filed: June 8, 2015
Publication date: December 8, 2016
Inventors: Eric MAHURIN, Jakub Pawel GOLAB
-
Publication number: 20160357559
Abstract: Systems and methods for load canceling in a processor that is connected to an external interconnect fabric are disclosed. As a part of a method for load canceling in a processor that is connected to an external bus, and responsive to a flush request and a corresponding cancellation of pending speculative loads from a load queue, a type of one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor, is converted from load to prefetch. Data corresponding to one or more of the pending speculative loads that are positioned in the instruction pipeline external to the processor is accessed and returned to cache as prefetch data. The prefetch data is retired in a cache location of the processor.
Type: Application
Filed: August 23, 2016
Publication date: December 8, 2016
Applicant: SOFT MACHINES, INC.
Inventors: Karthikeyan Avudaiyappan, Mohammad Abdallah
-
Publication number: 20160357560
Abstract: One embodiment of the present invention sets forth a technique for performing aggregation operations across multiple threads that execute independently. Aggregation is specified as part of a barrier synchronization or barrier arrival instruction, where in addition to performing the barrier synchronization or arrival, the instruction aggregates (using reduction or scan operations) values supplied by each thread. When a thread executes the barrier aggregation instruction the thread contributes to a scan or reduction result, and waits to execute any more instructions until after all of the threads have executed the barrier aggregation instruction. A reduction result is communicated to each thread after all of the threads have executed the barrier aggregation instruction and a scan result is communicated to each thread as the barrier aggregation instruction is executed by the thread.
Type: Application
Filed: August 16, 2016
Publication date: December 8, 2016
Inventors: Brian FAHS, Ming Y. SIU, Brett W. Coon, John R. NICKOLLS, Lars NYLAND
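The reduction half of the described barrier aggregation can be sketched with ordinary software threads: each thread contributes a value, waits at the barrier, and only then computes and observes the reduction over all contributions. This is a behavioral illustration with invented names, not the hardware instruction itself.

```python
import threading

def barrier_reduce(values, op=lambda a, b: a + b):
    """Each thread contributes its value at the barrier; after every
    thread has arrived, the reduction result is visible to all of them.
    Returns the result as seen by each thread."""
    n = len(values)
    contributions = [None] * n
    results = [None] * n
    barrier = threading.Barrier(n)

    def worker(tid):
        contributions[tid] = values[tid]   # contribute at the barrier
        barrier.wait()                     # block until all have contributed
        acc = contributions[0]             # reduction over all contributions
        for v in contributions[1:]:
            acc = op(acc, v)
        results[tid] = acc                 # every thread sees the same result

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

totals = barrier_reduce([1, 2, 3, 4])
```

The barrier guarantees no thread reads the contribution array before all writes land, which is exactly the ordering the aggregation instruction enforces in hardware.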
-
Publication number: 20160357561
Abstract: A processing pipeline may have first and second execution circuits having different performance or energy consumption characteristics. Instruction supply circuitry may support different instruction supply schemes with different energy consumption or performance characteristics. This can allow a further trade-off between performance and energy efficiency. Architectural state storage can be shared between the execute units to reduce the overhead of switching between the units. In a parallel execution mode, groups of instructions can be executed on both execute units in parallel.
Type: Application
Filed: April 13, 2016
Publication date: December 8, 2016
Inventors: Peter Richard GREENHALGH, Simon John CRASKE, Ian Michael CAULFIELD, Max John BATLEY, Allan John SKILLMAN, Antony John PENTON
-
Publication number: 20160357562
Abstract: An apparatus and method for dynamically controlling functional aspects of an MCU. In one embodiment an MCU includes a central processing unit (CPU), a memory for storing instructions executable by the CPU, and a T/C channel coupled to receive control values generated by the CPU and M event signals, wherein M is an integer greater than 1. The T/C channel is configured to select one or more of the M event signals based on one or more of the control values. The T/C channel is configured to generate a control signal as a function of the selected one or more of the M event signals. A function of the T/C channel can be controlled by the control signal.
Type: Application
Filed: June 5, 2015
Publication date: December 8, 2016
Inventor: Jon Matthew Brabender
-
Publication number: 20160357563
Abstract: A processor includes a decode unit to decode a packed data alignment plus compute instruction. The instruction is to indicate a first set of one or more source packed data operands that is to include first data elements, a second set of one or more source packed data operands that is to include second data elements, at least one data element offset. An execution unit, in response to the instruction, is to store a result packed data operand that is to include result data elements that each have a value of an operation performed with a pair of a data element of the first set of source packed data operands and a data element of the second set of source packed data operands. The execution unit is to apply the at least one data element offset to at least a corresponding one of the first and second sets of source packed data operands. The at least one data element offset is to counteract any lack of correspondence between the data elements of each pair in the first and second sets of source packed data operands.
Type: Application
Filed: June 2, 2015
Publication date: December 8, 2016
Applicant: Intel Corporation
Inventors: Edwin Jan Van Dalen, Alexander Augusteijn, Martinus C. Wezelenburg, Steven Roos
-
Publication number: 20160357564
Abstract: A method and apparatus for microcontroller (MCU) memory relocation. The MCU includes a central processing unit (CPU) and memory, but lacks a memory management unit (MMU). In one embodiment of the method, a first program is selected for execution by the CPU. The first program is one of a plurality of programs stored in the memory of the MCU. Each of the programs includes position dependent instructions. The programs are compiled from source code written in position dependent code.
Type: Application
Filed: June 3, 2015
Publication date: December 8, 2016
Inventor: Jon Matthew Brabender
-
Publication number: 20160357565
Abstract: Apparatus for processing data 2 is provided with fetch circuitry 16 for fetching program instructions for execution from one or more active threads of instructions having respective program counter values. Pipeline circuitry 22, 24 has a first operating mode and a second operating mode. Mode switching circuitry 30 switches the pipeline circuitry 22, 24, between the first operating mode and the second operating mode in dependence upon a number of active threads of program instructions having program instructions available to be executed. The first operating mode has a lower average energy consumption per instruction executed than the second operating mode and the second operating mode has a higher average rate of instruction execution for a single thread than the first operating mode. The first operating mode may utilise a barrel processing pipeline 22 to perform interleaved multiple thread processing.
Type: Application
Filed: April 20, 2016
Publication date: December 8, 2016
Inventors: Peter Richard GREENHALGH, Simon John CRASKE, Ian Michael CAULFIELD, Max John BATLEY, Allan John SKILLMAN, Antony John PENTON
-
Publication number: 20160357566
Abstract: An approach is provided in which a computing system matches a writeback instruction tag (ITAG) to an entry instruction tag (ITAG) included in an issue queue entry. The writeback ITAG is provided by a first of multiple load store units. The issue queue entry includes multiple ready bits, each of which corresponds to one of the multiple load store units. In response to matching the writeback ITAG to the entry ITAG, the computer system sets a first ready bit corresponding to the first load store unit. In turn, the computing system issues an instruction corresponding to the entry ITAG based upon detecting that each of the multiple ready bits is set.
Type: Application
Filed: June 2, 2015
Publication date: December 8, 2016
Inventors: Joshua W. Bowman, Sundeep Chadha, Michael J. Genden, Dhivya Jeganathan, Dung Q. Nguyen, David R. Terry, Eula F. Tolentino
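The per-unit ready-bit scheme in this abstract can be sketched with a small data structure: a writeback whose ITAG matches the entry's ITAG sets the ready bit for the reporting load store unit, and the entry becomes issuable only once every bit is set. Class and method names are invented for illustration.

```python
class IssueQueueEntry:
    """Issue-queue entry holding an ITAG and one ready bit per
    load store unit; the instruction may issue only when all
    ready bits are set."""
    def __init__(self, itag, n_units):
        self.itag = itag
        self.ready = [False] * n_units

    def on_writeback(self, wb_itag, unit):
        """Set the reporting unit's ready bit when the writeback
        ITAG matches this entry's ITAG."""
        if wb_itag == self.itag:
            self.ready[unit] = True

    def can_issue(self):
        """Issuable only when every load store unit has reported."""
        return all(self.ready)

entry = IssueQueueEntry(itag=7, n_units=2)
entry.on_writeback(7, unit=0)
after_one = entry.can_issue()   # still waiting on unit 1
entry.on_writeback(7, unit=1)
after_both = entry.can_issue()
```

A writeback with a non-matching ITAG leaves the entry untouched, so stale writebacks from other instructions cannot mark an operand ready.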
-
Publication number: 20160357567
Abstract: An approach is provided in which a computing system matches a writeback instruction tag (ITAG) to an entry instruction tag (ITAG) included in an issue queue entry. The writeback ITAG is provided by a first of multiple load store units. The issue queue entry includes multiple ready bits, each of which corresponds to one of the multiple load store units. In response to matching the writeback ITAG to the entry ITAG, the computer system sets a first ready bit corresponding to the first load store unit. In turn, the computing system issues an instruction corresponding to the entry ITAG based upon detecting that each of the multiple ready bits is set.
Type: Application
Filed: August 15, 2015
Publication date: December 8, 2016
Inventors: Joshua W. Bowman, Sundeep Chadha, Michael J. Genden, Dhivya Jeganathan, Dung Q. Nguyen, David R. Terry, Eula F. Tolentino
-
Publication number: 20160357568
Abstract: An apparatus including first and second reservation stations. The first reservation station dispatches a load micro instruction, and indicates on a hold bus if the load micro instruction is a specified load micro instruction directed to retrieve an operand from a prescribed resource other than on-core cache memory. The second reservation station is coupled to the hold bus, and dispatches one or more younger micro instructions therein that depend on the load micro instruction for execution after a number of clock cycles following dispatch of the first load micro instruction, and if it is indicated on the hold bus that the load micro instruction is the specified load micro instruction, the second reservation station is configured to stall dispatch of the one or more younger micro instructions until the load micro instruction has retrieved the operand. The resources include a fuse array that stores configuration data.
Type: Application
Filed: December 14, 2014
Publication date: December 8, 2016
Inventors: GERARD M. COL, COLIN EDDY, G. GLENN HENRY
-
Publication number: 20160357569
Abstract: A system for pipelining signal flow graphs by a plurality of shared memory processors organized in a 3D physical arrangement with the memory overlaid on the processor nodes that reduces storage of temporary variables. A group function is formed by two or more instructions that specify two or more parts of the group function. A first instruction specifies a first part and specifies control information for a second instruction adjacent to the first instruction or at a pre-specified location relative to the first instruction. The first instruction when executed transfers the control information to a pending register and produces a result which is transferred to an operand input associated with the second instruction. The second instruction specifies a second part of the group function and when executed transfers the control information from the pending register to a second execution unit to adjust the second execution unit's operation on the received operand.
Type: Application
Filed: August 16, 2016
Publication date: December 8, 2016
Inventor: Gerald George Pechanek
-
Publication number: 20160357570
Abstract: Restricted instructions are prohibited from execution within a transaction. There are classes of instructions that are restricted regardless of type of transaction: constrained or nonconstrained. There are instructions only restricted in constrained transactions, and there are instructions that are selectively restricted for given transactions based on controls specified on instructions used to initiate the transactions.
Type: Application
Filed: August 16, 2016
Publication date: December 8, 2016
Inventors: Dan F. Greiner, Christian Jacobi, Timothy J. Slegel
-
Publication number: 20160357571
Abstract: Implementations of the present disclosure involve a system and/or method for implementing a reset controller of a microprocessor or other type of computing system by connecting the reset controller to a reset controller bus or other type of general purpose bus. Through the reset bus, the reset controller signals used to generate the reset sequence of the system may be transmitted to the components of the system through a bus, rather than utilizing a direct wire connection between the components and the reset controller. The wires that comprise the reset bus may then be run to one or more components of the microprocessor design that are restarted during the reset sequence. Each of these components may also include a reset controller circuit that is designed to receive the reset control signals from the reset controller and decode the signals to determine if the received signal applies to the component.
Type: Application
Filed: June 4, 2015
Publication date: December 8, 2016
Applicant: Oracle International Corporation
Inventor: Ali Vahidsafa
-
Publication number: 20160357572
Abstract: Techniques for memory management of a data processing system are described herein. According to one embodiment, a memory usage monitor executed by a processor of a data processing system monitors memory usages of groups of programs running within a memory of the data processing system. In response to determining that a first memory usage of a first group of the programs exceeds a first predetermined threshold, a user level reboot is performed in which one or more applications running within a user space of an operating system of the data processing system are terminated and relaunched. In response to determining that a second memory usage of a second group of the programs exceeds a second predetermined threshold, a system level reboot is performed in which one or more system components running within a kernel space of the operating system are terminated and relaunched.
Type: Application
Filed: December 17, 2015
Publication date: December 8, 2016
Inventors: Andrew D. Myrick, David M. Chan, Jonathan R. Reeves, Jeffrey D. Curless, Lionel D. Desai, James C. McIlree, Karen A. Crippes, Rasha Eqbal
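The two-tier policy this abstract describes reduces to a small decision function: exceed the kernel-space group's threshold and the whole system is rebooted; exceed only the user-space group's threshold and just the user-space applications are relaunched. The function name, threshold values, and the precedence of the system-level check are assumptions made for this sketch.

```python
def reboot_decision(user_group_usage, kernel_group_usage,
                    user_threshold, kernel_threshold):
    """Map monitored memory usages of two program groups to an action:
    - kernel-space group over its threshold -> system level reboot
      (kernel components terminated and relaunched)
    - user-space group over its threshold   -> user level reboot
      (user-space applications terminated and relaunched)
    - otherwise -> no action
    The system-level check is given precedence here as an assumption."""
    if kernel_group_usage > kernel_threshold:
        return "system-level-reboot"
    if user_group_usage > user_threshold:
        return "user-level-reboot"
    return "no-action"
```

In practice such a monitor would sample usage periodically and rate-limit reboots, details the abstract leaves open.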
-
Publication number: 20160357573
Abstract: A device management apparatus includes circuitry configured to execute steps of determining whether a model of a setting subject in which a setting value accepted at first accepting is to be set is a model in which the setting value can be set; if it is determined that the model of the setting subject is a model in which the setting value cannot be set, acquiring a setting value associated with setting value identifying information similar to setting value identifying information input at the first accepting, from a first storage device configured to store a model, a setting value that can be set in the model, and a predetermined setting value identifying information about the setting value in association with one another; and transmitting the setting value acquired at the acquiring to a device of the model in which the setting value cannot be set.
Type: Application
Filed: June 2, 2016
Publication date: December 8, 2016
Applicant: Ricoh Company, Ltd.
Inventor: Tomohiro IKEDA
-
Publication number: 20160357574
Abstract: The detection of whether a local application is managed by a management service is described. In one example, depending upon whether an installation token includes a unique token value, detection logic can determine whether an application is managed or unmanaged based on additional factors. The additional factors include whether a keychain installation token includes a unique token value, the value of the keychain installation token, and a value of a launched flag for the application. Various combinations of those factors and the identification of either a managed or unmanaged status for the application are described. Using the concepts described herein, an unmanaged application can proceed to execute with limited functionality, present a notification that it should be reinstalled by the management service, stop executing, or take other measures.
Type: Application
Filed: June 5, 2015
Publication date: December 8, 2016
Inventors: Lucas Chen, Raghuram Rajan, Jonathan Blake Brannon
-
Publication number: 20160357575
Abstract: Various technologies and techniques are disclosed for using contracts in dynamic languages. For example, a contract can be directly associated with an object. The contract can then be used to provide type safety for the object. As another example, contracts can be used with mix-ins. A declaration for a contract is provided with a mix-in. The contract is associated with a target object at runtime when applying the mix-in. Conditions can be assigned to mix-ins that must be met before the mix-in can be applied to the target object. At runtime, if the target object meets the one or more conditions, then the mix-in can be applied to the target object.
Type: Application
Filed: May 3, 2016
Publication date: December 8, 2016
Applicant: Microsoft Technology Licensing, LLC
Inventor: Bertrand Le Roy
-
Publication number: 20160357576
Abstract: Generating customized on-demand videos from automated test scripts is provided. Responsive to receiving a request for an instruction on performing a task on a computer, a database of automated test scripts may be searched to identify a set of test scripts that comprise a set of executable actions associated with the task. An automation test sequence associated with performing of the task is built based on test scripts identified in the searching. The automation test sequence is run on a machine. While the automation test sequence is running on the machine, screen activities of the running automation test sequence are recorded to generate a video, e.g., by running a video capture program.
Type: Application
Filed: August 24, 2015
Publication date: December 8, 2016
Inventors: Diane C. Chalmers, David R. Draeger, Lee A. Jacobson
-
Publication number: 20160357577
Abstract: The disclosure is related to a method and a device for displaying the execution status of an application. The method comprises receiving a non-touch instruction of the application; searching the application according to the instruction; and executing an operation with a specific display effect on the determined icon. The disclosure further discloses a device for displaying the execution status of an application. The disclosure displays the non-touch instructions sent from users for the applications, such that the user may sufficiently understand the execution status of the applications during usage.
Type: Application
Filed: December 16, 2015
Publication date: December 8, 2016
Inventors: Guowei GAO, Yang JIANG, Lulu ZHOU, Fei ZHAO
-
Publication number: 20160357578
Abstract: Makeup guide information that matches facial features of a user, and a device thereof, are provided. The device includes a display and a controller configured to display a face image of a user in real-time, and execute a makeup mirror so as to display the makeup guide information on the face image of the user, according to a makeup guide request.
Type: Application
Filed: May 31, 2016
Publication date: December 8, 2016
Inventors: Ji-yun KIM, Joo-young SON, Tae-hwa HONG
-
Publication number: 20160357579
Abstract: For customizing content according to a dynamically changing audience, a mobile device associated with a member of the audience is detected to be present within a defined area. A locale preference corresponding to the member is collected from the mobile device. The locale preference is analyzed to determine whether the locale preference is also a locale preference of a threshold number of members of the audience. When the locale preference is also the locale preference of the threshold number of members of the audience, the locale preference is weighted according to a weighting rule to form a weighted common locale preference. When a weight of the weighted common locale preference exceeds a threshold weight, the weighted common locale preference is selected and used to configure the content, forming customized content. The customized content is delivered to a public presentation device present in the defined area.
Type: Application
Filed: August 24, 2015
Publication date: December 8, 2016
Applicant: International Business Machines Corporation
Inventors: Su Liu, Eric J. Rozner, Chin Ngai Sze, Yaoguang Wei
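The selection pipeline this abstract walks through — count shared preferences, discard those below the membership threshold, weight the survivors, pick one whose weight exceeds the weight threshold — can be sketched directly. The weighting rule is left abstract in the patent, so here it is passed in as a function; all names are illustrative.

```python
from collections import Counter

def choose_locale(member_locales, min_members, weight_rule, weight_threshold):
    """Select a common locale preference for an audience:
    - member_locales: one collected preference per detected member
    - min_members: how many members must share a preference
    - weight_rule: maps (locale, member count) -> weight
    - weight_threshold: minimum weight for selection
    Returns the highest-weighted qualifying locale, or None."""
    counts = Counter(member_locales)
    best = None
    for locale, count in counts.items():
        if count < min_members:
            continue                      # not a common preference
        weight = weight_rule(locale, count)
        if weight > weight_threshold and (best is None or weight > best[1]):
            best = (locale, weight)
    return best[0] if best else None

selected = choose_locale(["en-US", "en-US", "fr-FR"],
                         min_members=2,
                         weight_rule=lambda loc, n: n,
                         weight_threshold=1)
```

With this toy weighting rule (weight = member count), a preference shared by two of three members is selected; with no preference shared widely enough, nothing is selected and the content stays in its default configuration.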
-
Publication number: 20160357580
Abstract: A method of enhancing performance of an application executing in a parallel processor and a system for executing the method are disclosed. A block size for input to the application is determined. Input is partitioned into blocks having the block size. Input within each block is sorted. The application is executed with the sorted input.
Type: Application
Filed: June 4, 2015
Publication date: December 8, 2016
Applicant: Advanced Micro Devices, Inc.
Inventor: Alexander Lyashevsky
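The preprocessing step the abstract describes — partition the input into fixed-size blocks and sort within each block before handing it to the application — is compact enough to show directly. The function name is invented; choosing the block size itself (the first step of the method) is left as a parameter here.

```python
def sorted_blocks(data, block_size):
    """Partition `data` into consecutive blocks of `block_size`
    elements (the last block may be shorter) and sort within each
    block; the application then consumes the blocks in order."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [sorted(block) for block in blocks]

prepared = sorted_blocks([5, 3, 8, 1, 9, 2], block_size=3)
```

Sorting within blocks, rather than globally, keeps the work local and parallelizable per block, which is presumably the point for a parallel processor.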
-
Publication number: 20160357581
Abstract: Handling locale information on a computing platform in a cloud computing environment. An application pushed by a cloud client is received by a computing platform, in response to receiving a request from a user to execute the application. Locale information associated with the application and the user is retrieved by the computing platform. A script is created and executed by the computing platform to configure a locale of an operating system, and that identifies and installs applications upon which the pushed application depends for execution. A runtime environment, the pushed application, and the applications upon which the pushed application depends for execution in the runtime environment are booted by the computing platform. The pushed application is then executed by the computing platform in the runtime environment.
Type: Application
Filed: February 5, 2016
Publication date: December 8, 2016
Inventors: Lin Quan Jiang, Yan Min Sheng, Lei Wang, Hai Hong Xu
-
Publication number: 20160357582
Abstract: A method to configure an information system with a plurality of interconnected users, in which users shall neither write nor modify the source code of the program, but have the possibility of forming the desired types by accessing, via appropriate interface systems, a set of concepts organized into libraries, and selecting, by way of appropriate means, the concepts necessary to form types.
Type: Application
Filed: August 4, 2014
Publication date: December 8, 2016
Inventors: Valeria NALDI, Carlo PESCIO
-
Publication number: 20160357583
Abstract: Systems and methods for disabling one or more plugins associated with a browser application are provided. In one exemplary method, a plugin is installed on an electronic device, and the device receives data from a data source, where that data is associated with the installed plugin. Whether the installed plugin meets a disabling criteria is determined. In accordance with a determination that the installed plugin meets a disabling criteria: performance of a function with the installed plugin is foregone; and it is reported to the data source that the installed plugin is not installed on the electronic device. In accordance with a determination that the installed plugin does not meet the disabling criteria, the function is performed with the installed plugin.
Type: Application
Filed: September 24, 2015
Publication date: December 8, 2016
Inventors: Kevin DECKER, Conrad SHULTZ, Steven FALKENBURG, Darin ADLER, Richard MONDELLO, Craig M. FEDERIGHI, Patrick L. COFFMAN, Jessie BERLIN
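The branch the abstract describes is simple but worth making concrete: when a plugin meets the disabling criteria, its function is skipped and the device reports the plugin as not installed. The return shape and names below are assumptions for illustration only.

```python
def handle_plugin(plugin_name, meets_disabling_criteria):
    """Apply the abstract's two branches:
    - criteria met: forgo the plugin's function and report to the
      data source that the plugin is not installed
    - criteria not met: perform the function with the plugin
    `meets_disabling_criteria` is a predicate supplied by the caller."""
    if meets_disabling_criteria(plugin_name):
        return {"function_performed": False, "reported_installed": False}
    return {"function_performed": True, "reported_installed": True}
```

Reporting "not installed" for a disabled plugin keeps the data source from serving content that depends on it, which is the practical effect the abstract is after.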
-
Publication number: 20160357584
Abstract: A simulation mechanism manages deployment of a simplified computing solution (SCS) and a corresponding simulation model that simulates a scaled multiple of the SCS to allow a deployment configuration for a large scale computing solution to be determined and tested before actually deploying the large scale computing solution.
Type: Application
Filed: June 16, 2015
Publication date: December 8, 2016
Inventors: Bin Cao, Daniel L. Hiebert, Brian R. Muras
-
Publication number: 20160357585
Abstract: Described herein are systems, methods, and software to provide virtualized computing sessions with attachable volumes to requesting users. In one implementation, a virtual computing service identifies a service login for an end user to initiate a virtual computing session. In response to the service login, the virtual computing service identifies a virtual machine to allocate to the virtual computing service, and initiates a user login process to log the end user into the virtual machine. The virtual computing service further initiates, prior to completing the user login process, a volume attach process to attach at least one storage volume to the virtual machine based on credentials associated with the service login.
Type: Application
Filed: June 4, 2015
Publication date: December 8, 2016
Inventors: Jeffrey Ulatoski, Steven Lawson, Matthew Conover
-
Publication number: 20160357586
Abstract: In one approach, a method comprises: a virtual machine receiving an invocation instruction from a caller that invokes a callee, wherein the caller represents a first set of instructions and the callee represents a second set of instructions, wherein the invocation instruction is associated with a first set of arguments; in response to receiving the invocation instruction and determining that the callee requires one or more additional parameters to be supplied by the virtual machine, the virtual machine causing the one or more additional parameters to be appended to the first set of arguments to create a second set of arguments; wherein the virtual machine prevents the caller from providing the one or more additional arguments that are to be supplied by the virtual machine; the virtual machine invoking the callee using the second set of arguments.
Type: Application
Filed: April 4, 2016
Publication date: December 8, 2016
Inventor: JOHN ROBERT ROSE
-
Publication number: 20160357587
Abstract: Systems, methods, and computer-readable media for annotating process and user information for network flows. In some embodiments, a capturing agent, executing on a first device in a network, can monitor a network flow associated with the first device. The first device can be, for example, a virtual machine, a hypervisor, a server, or a network device. Next, the capturing agent can generate a control flow based on the network flow. The control flow may include metadata that describes the network flow. The capturing agent can then determine which process executing on the first device is associated with the network flow and label the control flow with this information. Finally, the capturing agent can transmit the labeled control flow to a second device, such as a collector, in the network.
Type: Application
Filed: May 11, 2016
Publication date: December 8, 2016
Inventors: Navindra Yadav, Abhishek Ranjan Singh, Anubhav Gupta, Shashidhar Gandham, Jackson Ngoc Ki Pang, Shih-Chun Chang, Hai Trong Vu
-
Publication number: 20160357588
Abstract: A queue management method including queuing, in a specific queue of a first virtual machine, one or more messages addressed to the specific queue, generating, when transferring management of the specific queue from the first virtual machine to a second virtual machine, a first queue and a second queue in the second virtual machine, queuing, in the generated first queue, the one or more messages that have been queued in the specific queue, queuing, after the second queue has been generated, a received message addressed to the specific queue in the second queue, and performing, in response to an instruction to perform reference to or an operation for a specific message corresponding to the specific queue, reference to or an operation for the specific message in order of the first queue, the specific queue, and the second queue.
Type: Application
Filed: May 31, 2016
Publication date: December 8, 2016
Applicant: FUJITSU LIMITED
Inventors: Yuhei SHIBUKAWA, Tomohiro Kawasaki, Osamu Miyamoto
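The ordering guarantee in this abstract (drain the copied first queue, then the original specific queue, then the post-transfer second queue) can be sketched with three deques. The class and field names are illustrative assumptions, not the claimed design.

```python
from collections import deque

class MigratingQueue:
    """Sketch of queue handoff between VMs, assuming FIFO message queues."""
    def __init__(self, pending):
        self.first_q = deque(pending)   # messages copied from the specific queue
        self.specific_q = deque()       # stragglers still landing on the first VM
        self.second_q = deque()         # messages received after transfer began

    def enqueue(self, msg):
        # new messages addressed to the specific queue go to the second queue
        self.second_q.append(msg)

    def dequeue(self):
        # reference/operations proceed in order: first, specific, second
        for q in (self.first_q, self.specific_q, self.second_q):
            if q:
                return q.popleft()
        raise IndexError("all queues empty")
```

This ordering preserves overall FIFO delivery across the transfer even while messages can still arrive at either virtual machine.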
-
Publication number: 20160357589
Abstract: Methods and apparatus are disclosed to scale application deployments in cloud computing environments using virtual machine pools. An example method disclosed herein includes preparing a virtual machine pool including a virtual machine for use in a scaling operation, the virtual machine prepared in accordance with a blueprint of the application deployed in a deployment environment separate from the virtual machine pool; in response to receiving a request to scale the application, determining, by executing an instruction with a processor, whether configuration information of the virtual machine pool satisfies a scaling requirement included in the request; and, based on the determination, executing an instruction with the processor to transfer the virtual machine from the virtual machine pool to the deployment environment to perform the scaling operation in accordance with the request to scale.
Type: Application
Filed: June 30, 2016
Publication date: December 8, 2016
Inventors: Servesh Singh, Kiran Singh, Shyam Mankala
-
Publication number: 20160357590
Abstract: Systems and methods for transmitting encapsulated SNMP commands to virtual machines. An example method may comprise: receiving a Simple Network Management Protocol (SNMP) request; encapsulating, by a processing device, the SNMP request in a format in which a virtual machine is configured to communicate; and providing the encapsulated SNMP request to the virtual machine.
Type: Application
Filed: July 27, 2016
Publication date: December 8, 2016
Inventor: David Botzer
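Encapsulation of the kind this abstract describes can be sketched as wrapping the raw SNMP PDU in a carrier format the VM understands. The JSON-over-base64 envelope below is purely an assumed example format; the patent does not specify one.

```python
import base64
import json

def encapsulate_snmp(snmp_pdu: bytes) -> bytes:
    """Wrap a raw SNMP PDU in a JSON envelope (illustrative format only)."""
    envelope = {
        "type": "snmp-request",
        "payload": base64.b64encode(snmp_pdu).decode("ascii"),
    }
    return json.dumps(envelope).encode("utf-8")

def decapsulate_snmp(data: bytes) -> bytes:
    """Recover the original SNMP PDU from the envelope inside the VM."""
    envelope = json.loads(data)
    if envelope.get("type") != "snmp-request":
        raise ValueError("not an encapsulated SNMP request")
    return base64.b64decode(envelope["payload"])
```

The round trip is lossless, so the VM-side agent can hand the recovered PDU to an ordinary SNMP engine.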
-
Publication number: 20160357591
Abstract: A method includes validating, by a switch, a message including virtual machine (VM) information using a value of a virtual station interface (VSI) type identification (ID) to perform a lookup of a fetched VSI database. The VM information for the VM comprises a VSI type ID and a virtual local area network (VLAN) ID. The switch retrieves an address of the VM from a first table for multiple different VM types based on using the VSI type ID and network ID. The switch retrieves rules associated with the retrieved address of the VM and the VSI type ID from a second table including VM information. The switch applies the associated rules for the VM.
Type: Application
Filed: August 17, 2016
Publication date: December 8, 2016
Inventors: Vasmi M. Abidi, Chandramouli Radhakrishnan
-
Publication number: 20160357592
Abstract: Managing credentials for use with virtual machines includes storing a first virtual credential adapter within a hypervisor executing within a host data processing system. The first virtual credential adapter maintains a credential for a computing resource. Using a processor of the host data processing system, the first virtual credential adapter is associated with a first virtual machine. The first virtual credential adapter is associated, at most, with a single virtual machine at any time. Responsive to associating the first virtual credential adapter with the first virtual machine, the first virtual machine accesses the computing resource using the credential maintained by the first virtual credential adapter.
Type: Application
Filed: August 18, 2016
Publication date: December 8, 2016
Inventors: Christine L. Eisenmann, Louis T. Fuka, James W. Moody, Washington E. Munive
-
Publication number: 20160357593
Abstract: A hypervisor virtual server system, including a plurality of virtual servers; a plurality of virtual disks that are read from and written to by the plurality of virtual servers; a physical disk; an I/O backend coupled with the physical disk and in communication with the plurality of virtual disks, which reads from and writes to the physical disk; a tapping driver in communication with the plurality of virtual servers, which intercepts I/O requests made by any one of said plurality of virtual servers to any one of said plurality of virtual disks; and a virtual data services appliance, in communication with the tapping driver, which receives the intercepted I/O write requests from the tapping driver and provides data services based thereon.
Type: Application
Filed: August 18, 2016
Publication date: December 8, 2016
Inventor: Ziv Kedem
-
Publication number: 20160357594
Abstract: One or more techniques and/or systems are disclosed for redeploying a baseline VM (BVM) to one or more child VMs (CVMs) by merely cloning virtual drives of the BVM, instead of the entirety of the parent BVM. A temporary directory is created in a datastore that has the target CVMs that are targeted for virtual drive replacement (e.g., are to be "re-baselined"). One or more replacement virtual drives (RVDs) are created in the temporary directory, where the RVDs comprise a clone of a virtual drive of the source BVM. The one or more RVDs are moved from the temporary directory to a directory of the target CVMs, replacing existing virtual drives of the target CVMs so that the target CVMs are thus re-baselined to the state of the parent BVM.
Type: Application
Filed: August 22, 2016
Publication date: December 8, 2016
Inventors: George Costea, Eric Forgette
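The clone-into-temp-then-swap flow can be sketched with ordinary file operations. Everything here (function name, drive filenames, directory layout) is an assumption for illustration; real datastores manage virtual drives through hypervisor APIs, not `shutil`.

```python
import shutil
import tempfile
from pathlib import Path

def rebaseline(bvm_drive: Path, cvm_dirs: list, datastore: Path) -> None:
    """Clone only the BVM's virtual drive, then swap it into each CVM directory."""
    tmp = Path(tempfile.mkdtemp(dir=datastore))   # temporary directory in datastore
    try:
        for cvm in cvm_dirs:
            rvd = tmp / f"{cvm.name}-{bvm_drive.name}"
            shutil.copy2(bvm_drive, rvd)          # replacement virtual drive (clone)
            target = cvm / bvm_drive.name
            if target.exists():
                target.unlink()                   # drop the child's existing drive
            shutil.move(str(rvd), str(target))    # child VM is now re-baselined
    finally:
        shutil.rmtree(tmp, ignore_errors=True)
```

Cloning only the drive, rather than the whole parent VM, is what keeps the operation cheap when many children share one baseline.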
-
Publication number: 20160357595
Abstract: A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transactional execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged.
Type: Application
Filed: August 18, 2016
Publication date: December 8, 2016
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael Karl Gschwind, Maged M. Michael, Valentina Salapura
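The monitor-threshold-handler control flow described in this abstract can be modeled very loosely in software. The threshold value, the buffered-write model of speculative state, and the always-salvage handler below are all assumptions; the actual mechanism is hardware-level and not disclosed here.

```python
THRESHOLD = 4  # assumed: remaining speculative-buffer entries that trigger the handler

def handle_low_resource(saved_state):
    """About-to-run-out-of-resource handler: decide abort vs. salvage.
    A real handler would inspect the transaction; this sketch always salvages."""
    return ("salvage", saved_state)

def run_transaction(ops, capacity):
    """Model a hardware transaction with a finite speculative buffer."""
    speculative_state = []      # writes buffered until commit
    remaining = capacity
    for op in ops:
        if remaining <= THRESHOLD:
            # save speculative state, then pass control to the handler
            return handle_low_resource(speculative_state)
        speculative_state.append(op)
        remaining -= 1
    return ("commit", speculative_state)
```

The point of the handler is that the transaction's work so far need not be discarded: "salvage" can commit the partial state instead of aborting and retrying from scratch.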
-
Publication number: 20160357596
Abstract: A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transactional execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged.
Type: Application
Filed: August 18, 2016
Publication date: December 8, 2016
Inventors: Fadi Y. Busaba, Harold W. Cain, III, Michael Karl Gschwind, Maged M. Michael, Valentina Salapura
-
Publication number: 20160357597
Abstract: A transactional execution of a set of instructions in a transaction of a program may be initiated to collect memory operand access characteristics of a set of instructions of a transaction during the transactional execution. The memory operand access characteristics may be stored upon a termination of the transactional execution of the set of instructions. The memory operand access characteristics may include an address of an accessed storage location; a count of the number of times the storage location is accessed; a purpose value indicating whether the storage location is accessed for a fetch, store, or update operation; a count of the number of times the storage location is accessed for one or more of a fetch, store, or update operation; a translation mode in which the storage location is accessed; and an addressing mode.
Type: Application
Filed: August 23, 2016
Publication date: December 8, 2016
Inventors: Dan F. Greiner, Michael Karl Gschwind, Valentina Salapura, Timothy J. Slegel
-
Publication number: 20160357598
Abstract: A swap method, for an electronic system, includes generating an accessing signal by a calculating module of the electronic system; generating a swap signal for instructing a swap mode of the accessing signal by the calculating module; accessing data from a storage module of the electronic system according to the accessing signal by an accessing module of the electronic system; and swapping the data according to the swap signal by a swap module.
Type: Application
Filed: August 19, 2016
Publication date: December 8, 2016
Inventors: Cheok Yan Goh, Ching-Hwa Yu
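The "swap mode" signal this abstract mentions can be pictured as selecting how accessed bytes are reordered. The concrete modes below (pass-through, 16-bit, and 32-bit endian swaps) are illustrative assumptions; the patent does not enumerate its modes.

```python
def swap_data(data: bytes, mode: str) -> bytes:
    """Reorder accessed data according to a swap-mode signal (modes assumed)."""
    if mode == "none":
        return data                              # pass-through
    if mode == "swap16":
        if len(data) % 2:
            raise ValueError("swap16 needs even length")
        return b"".join(data[i:i + 2][::-1] for i in range(0, len(data), 2))
    if mode == "swap32":
        if len(data) % 4:
            raise ValueError("swap32 needs length divisible by 4")
        return b"".join(data[i:i + 4][::-1] for i in range(0, len(data), 4))
    raise ValueError(f"unknown swap mode: {mode}")
```

Hardware swap units like this are typically used to bridge endianness between a bus and a storage device without burdening the CPU.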
-
Publication number: 20160357599
Abstract: An apparatus is provided for implementation of a back-end system for providing point-of-use toolkits. The apparatus may receive an assignment of work tasks assigned to a technician for manufacture of a tangible product. In response, the apparatus may compile a point-of-use toolkit including comprehensive information regarding the work tasks, and transmit the point-of-use toolkit to a front-end system associated with the technician. The apparatus may determine an occurrence of a delay associated with the schedule that impacts the assignment of the one or more work tasks, and transmit information associated with the delay to the manufacturing scheduling system. In response to receiving an update to the assignment of the tasks from the manufacturing scheduling system, the apparatus may compile an update of the point-of-use toolkit, and transmit the update of the point-of-use toolkit to the front-end system.
Type: Application
Filed: June 5, 2015
Publication date: December 8, 2016
Inventors: John William Glatfelter, Brian Dale Laughlin, Brian A. McCarthy
-
Publication number: 20160357600
Abstract: Disclosed herein are systems, methods, and computer-readable media directed to scheduling threads in a multi-processing environment that can resolve a priority inversion. Each thread has a scheduling state and a context. A scheduling state can include attributes such as a processing priority, classification (background, fixed priority, real-time), a quantum, scheduler decay, and a list of threads that may be waiting on the thread to make progress. A thread context can include registers, stack, other variables, and one or more mutex flags. A first thread can hold a resource with a mutex, the first thread having a low priority. A second thread having a scheduling state with a high priority can be waiting on the resource and may be blocked behind the mutex held by the first thread. A scheduler can execute the context of the lower priority thread using the scheduling state of the second, higher priority thread. More than one thread can be waiting on the resource held by the first thread.
Type: Application
Filed: September 30, 2015
Publication date: December 8, 2016
Inventors: Daniel A. CHIMENE, Daniel A. STEFFEN, James M. MAGEE, Russell A. BLAINE, Shantonu SEN
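The core idea above (run the mutex holder's context with the priority of its highest-priority waiter, a form of priority donation) can be sketched as follows. The class names and the single `priority` field are illustrative assumptions; a real scheduling state carries quantum, decay, classification, and more, as the abstract notes.

```python
from dataclasses import dataclass, field

@dataclass
class SchedState:
    priority: int                 # simplified scheduling state (assumed shape)

@dataclass
class Thread:
    name: str
    sched: SchedState
    waiters: list = field(default_factory=list)  # threads blocked on this thread

def effective_priority(holder: Thread) -> int:
    """Priority at which the scheduler runs the mutex holder's context:
    the max over its own priority and every waiter's priority."""
    return max([holder.sched.priority] +
               [w.sched.priority for w in holder.waiters])
```

With this rule, a low-priority holder temporarily inherits the waiter's high priority, so it finishes the critical section quickly and the inversion resolves.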