Batch Or Transaction Processing Patents (Class 718/101)
-
Patent number: 8689218
Abstract: A method is provided for interfacing a plurality of processing components with a shared resource component. A token signal path is provided to allow propagation of a token through the processing components, wherein possession of the token by a given processing component enables the latter to conduct a transaction with the shared resource component. Token processing logic is also provided for propagating the token from one processing component to another along the token signal path, the propagating being done at a propagation rate that is related to a transaction rate associated with the shared resource component. A circuit comprising a plurality of processing components and a shared resource component is provided, wherein the plurality of processing components and the shared resource component are interfaced with one another using the proposed method.
Type: Grant
Filed: October 15, 2009
Date of Patent: April 1, 2014
Assignee: Octasic Inc.
Inventors: Tom Awad, Martin Laurence, Martin Filteau, Pascal Gervais, Douglas Morrissey
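A minimal Python sketch of the token-propagation scheme this abstract describes: a token circulates among the processing components at a rate tied to the shared resource's transaction rate, and only the current holder may transact. The names (SharedResource, TokenRing, step) are illustrative assumptions, not from the patent.

```python
import time

class SharedResource:
    """Stand-in for the shared resource component; one transaction per call."""
    def transact(self, component_id, payload):
        return f"resource handled {payload!r} from component {component_id}"

class TokenRing:
    """Propagates a single token among components at a rate derived from
    the shared resource's transaction rate (transactions per second)."""
    def __init__(self, component_ids, resource, transaction_rate_hz):
        self.component_ids = list(component_ids)
        self.resource = resource
        self.period = 1.0 / transaction_rate_hz  # propagation interval
        self.holder = 0                          # index of the token holder

    def step(self, pending):
        """Advance one propagation interval: the holder (and only the holder)
        may perform a transaction, then the token moves to the next component."""
        holder_id = self.component_ids[self.holder]
        if holder_id in pending:
            print(self.resource.transact(holder_id, pending.pop(holder_id)))
        time.sleep(self.period)                  # pace the token to the resource rate
        self.holder = (self.holder + 1) % len(self.component_ids)

ring = TokenRing(["A", "B", "C"], SharedResource(), transaction_rate_hz=100)
work = {"B": "write 0x10", "C": "read 0x20"}
for _ in range(3):
    ring.step(work)
```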
-
Patent number: 8676635
Abstract: A method and system for managing transactions. At least one resource manager (RM) for managing changes to respective system resources of a data processing system is provided. A resource manager coordinator (RMC) for coordinating commit-backout activities of the at least one resource manager is provided. The resource manager coordinator (RMC) is hosted by the data processing system. The data processing system receives a business service request from a remote computer system to perform a task. The task includes compliant processes complying with a commit/backout protocol and non-compliant processes not complying with a commit/backout protocol. The compliant processes are running on the data processing system and the non-compliant processes are running on a counterpart processing system that is coupled to the data processing system by a labile link.
Type: Grant
Filed: January 9, 2009
Date of Patent: March 18, 2014
Assignee: International Business Machines Corporation
Inventor: Mauro Antonio Giacomello
-
Patent number: 8677376
Abstract: A synchronization system is described herein that synchronizes two environments by correctly matching identity objects in a source environment with related objects in a target environment. In addition to matching identities based on primitive attributes, the system matches identities across multiple heterogeneous environments based on their relative positions in an identity graph. The system builds the identity graph by first matching some identity objects based on primitive attribute value comparisons. The system fills in the remainder of the identity graph by comparing references to/from the matched identity objects. The combination of attribute value comparisons and comparing references enables identity-aware applications to complete a single identity graph, determine the equivalency of identities in this graph, and apply policy based on this new relationship.
Type: Grant
Filed: September 29, 2010
Date of Patent: March 18, 2014
Assignee: Microsoft Corporation
Inventors: Billy Kwan, Joseph M. Schulman
-
Publication number: 20140075442
Abstract: There is provided a method to schedule execution of a plurality of batch jobs by a computer system. The method includes: reading one or more constraints that constrain the execution of the plurality of batch jobs by the computer system and a current load on the computer system; grouping the plurality of batch jobs into at least one run frequency that includes at least one batch job; setting the at least one run frequency to a first run frequency; computing a load generated by each batch job in the first run frequency on the computer system based on each batch job's start time; and determining an optimized start time for each batch job in the first run frequency that meets the one or more constraints and that distributes each batch job's load on the computer system using each batch job's computed load and the current load.
Type: Application
Filed: November 11, 2013
Publication date: March 13, 2014
Applicant: eBay Inc.
Inventor: Josep M. Ferrandiz
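A hedged sketch of the start-time optimization step, under the simplifying assumption that each job imposes a uniform load over a fixed number of time slots and that the only constraint is a per-slot load cap. The function optimize_start_times and its signature are illustrative, not from the application.

```python
def optimize_start_times(jobs, current_load, max_load):
    """jobs: list of (name, load_per_slot, duration_slots);
    current_load: load per time slot (mutated as jobs are placed)."""
    schedule = {}
    for name, load, duration in jobs:
        for start in range(len(current_load) - duration + 1):
            window = current_load[start:start + duration]
            if all(slot + load <= max_load for slot in window):   # constraint check
                for t in range(start, start + duration):          # commit the job's load
                    current_load[t] += load
                schedule[name] = start
                break
        else:
            schedule[name] = None  # no feasible start time in this run frequency
    return schedule

load = [3, 5, 2, 1, 1, 4]                        # current load per slot
jobs = [("nightly-report", 4, 2), ("purge", 2, 3)]
print(optimize_start_times(jobs, load, max_load=8))
```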
-
Publication number: 20140075441
Abstract: A processor core includes a transactional memory, a transaction failure instruction address register (TFIAR), and a transaction failure data address register (TFDAR). The transactional memory stores information of a plurality of transactions executed by the processor core. The processor core retrieves the instruction and data addresses associated with an aborted transaction from the TFIAR and TFDAR, respectively, and stores them in a profiling table. The processor core then generates profiling information based on the instruction and data addresses associated with the aborted transaction.
Type: Application
Filed: September 13, 2012
Publication date: March 13, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Robert J. Blainey, Harold W. Cain, Susan E. Eisen, Bradly G. Frey, Charles B. Hall, Hung Q. Le, Cathy May
-
Publication number: 20140068617
Abstract: Described herein are systems and methods for receiving a recommendation before submitting a work request. As described herein, an indication of a work request, a recommendation request and a set of application server properties are received at a recommendation engine. The recommendation engine processes the recommendation request, and based on the set of application server properties, determines a recommendation on whether to submit the work request and/or whether to schedule the work request for a later time. Thereafter, the recommendation engine generates a recommendation notification that indicates whether to submit/schedule the work request to provide for a proactive approach to submitting the work request.
Type: Application
Filed: August 29, 2012
Publication date: March 6, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventor: Suman Rachakonda
-
Publication number: 20140068619
Abstract: This invention relates to scheduling threads in a multicore processor. Executable transactions may be scheduled using at least one distribution queue, which lists executable transactions in order of eligibility for execution, and a multilevel scheduler, which comprises a plurality of linked individual executable transaction schedulers. Each of these schedulers includes a scheduling algorithm for determining the most eligible executable transaction for execution. The most eligible executable transaction is outputted from the multilevel scheduler to the at least one distribution queue.
Type: Application
Filed: August 12, 2013
Publication date: March 6, 2014
Applicant: Synopsys, Inc.
Inventor: Mark David Lippett
-
Publication number: 20140068618
Abstract: Described herein are techniques for automatically batching GUI-based (Graphical User Interface) tasks. The described techniques include automatically determining whether a user is performing batchable tasks in a GUI-based environment. Once detected, the described techniques include predicting the next tasks of a batch based upon those detected batchable tasks. With the described techniques, the user may be asked to verify and/or correct the predicted next tasks. Furthermore, the described techniques may include performing a batch and doing so without user interaction.
Type: Application
Filed: January 31, 2013
Publication date: March 6, 2014
Applicant: MICROSOFT CORPORATION
Inventors: Qingwei Lin, Fan Li, Jiang Li
-
Patent number: 8661442
Abstract: Automated techniques are disclosed for minimizing communication between nodes in a system comprising multiple nodes for executing requests in which a request type is associated with a particular node. For example, a technique comprises the following steps. Information is maintained about frequencies of compound requests received and individual requests comprising the compound requests. For a plurality of request types which frequently occur in a compound request, the plurality of request types is associated to a same node. As another example, a technique for minimizing communication between nodes, in a system comprising multiple nodes for executing a plurality of applications, comprises the steps of maintaining information about an amount of communication between said applications, and using said information to place said applications on said nodes to minimize communication among said nodes.
Type: Grant
Filed: May 31, 2011
Date of Patent: February 25, 2014
Assignee: International Business Machines Corporation
Inventors: Paul M. Dantzig, Arun Kwangil Iyengar, Francis Nicholas Parr, Gong Su
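The co-location idea can be sketched as a greedy placement driven by co-occurrence counts: request types that often appear in the same compound request end up on the same node. The function below and its greedy policy are illustrative assumptions, not the patented technique itself.

```python
from collections import Counter
from itertools import combinations

def assign_request_types(compound_requests, nodes):
    """Greedy sketch: co-locate request types that frequently co-occur
    in compound requests to reduce cross-node communication."""
    pair_freq = Counter()
    for request_types in compound_requests:
        for a, b in combinations(sorted(set(request_types)), 2):
            pair_freq[(a, b)] += 1

    placement = {}
    node_iter = iter(range(nodes))
    # Walk pairs from most to least frequent; place both members together.
    for (a, b), _ in pair_freq.most_common():
        if a in placement and b not in placement:
            placement[b] = placement[a]
        elif b in placement and a not in placement:
            placement[a] = placement[b]
        elif a not in placement and b not in placement:
            node = next(node_iter, None)
            placement[a] = placement[b] = node if node is not None else 0
    return placement

history = [["login", "profile"], ["login", "profile"], ["search", "checkout"]]
print(assign_request_types(history, nodes=2))
```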
-
Publication number: 20140053160
Abstract: In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue.
Type: Application
Filed: October 23, 2013
Publication date: February 20, 2014
Applicant: SALESFORCE.COM, INC.
Inventors: Gregory D. Fee, William J. Gallagher
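A small sketch of the chunk-then-queue flow, with hypothetical names (enqueue_batch_request, drain); the real mechanism operates inside a multi-tenant database's batch queue, which the in-memory deque below only stands in for.

```python
from collections import deque

def chunk(target_group, chunk_size):
    """Divide the processing target group into sub-groups of at most chunk_size."""
    for i in range(0, len(target_group), chunk_size):
        yield target_group[i:i + chunk_size]

def enqueue_batch_request(queue, record_ids, processing_logic, chunk_size=200):
    """Queue each sub-group as an independent unit of work."""
    for sub_group in chunk(record_ids, chunk_size):
        queue.append((processing_logic, sub_group))

def drain(queue):
    """Release queued sub-groups one at a time, as the batch queue allows."""
    while queue:
        logic, sub_group = queue.popleft()
        logic(sub_group)

batch_queue = deque()
enqueue_batch_request(batch_queue, list(range(1000)),
                      lambda ids: print(f"processed {len(ids)} records"),
                      chunk_size=200)
drain(batch_queue)
```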
-
Publication number: 20140053159
Abstract: Among other aspects disclosed are a method and system for processing a batch of input data in a fault tolerant manner. The method includes reading a batch of input data including a plurality of records from one or more data sources and passing the batch through a dataflow graph. The dataflow graph includes two or more nodes representing components connected by links representing flows of data between the components. At least one but fewer than all of the components includes a checkpoint process for an action performed for each of multiple units of work associated with one or more of the records. The checkpoint process includes opening a checkpoint buffer stored in non-volatile memory at the start of processing for the batch.
Type: Application
Filed: October 18, 2013
Publication date: February 20, 2014
Applicant: Ab Initio Technology LLC
Inventors: Bryan Phil Douros, Matthew Darcy Atterbury, Tim Wakeling
-
Patent number: 8656392
Abstract: Computer-implemented methods, systems, and computer-readable storage media are disclosed to coordinate a plurality of devices in performing a task. A particular computer-implemented method includes storing updated status information at a device where the updated status information reflects a change in a vote for a task state of one or more of a plurality of devices. A first updated status message is sent to one or more of the plurality of devices where the first updated status message communicates the updated status information. A task consensus at the device is updated when the updated status information indicates that at least a predetermined quantity of the plurality of devices agrees on the task status.
Type: Grant
Filed: June 10, 2009
Date of Patent: February 18, 2014
Assignee: The Boeing Company
Inventor: Charles A. Erignac
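A compact sketch of the vote-and-quorum logic, assuming the consensus rule is simply "at least a predetermined number of devices cast the same vote"; DeviceStatus and its methods are illustrative names, not from the patent.

```python
from collections import Counter

class DeviceStatus:
    """Each device tracks the latest vote it has seen from every peer and
    derives a task consensus once a predetermined quorum agrees."""
    def __init__(self, quorum):
        self.quorum = quorum
        self.votes = {}          # device_id -> voted task state
        self.consensus = None

    def update(self, device_id, task_state):
        self.votes[device_id] = task_state           # store updated status
        state, count = Counter(self.votes.values()).most_common(1)[0]
        if count >= self.quorum:                      # enough devices agree
            self.consensus = state
        return self.consensus

node = DeviceStatus(quorum=3)
for device, state in [("d1", "done"), ("d2", "done"), ("d3", "running"), ("d4", "done")]:
    print(device, "->", node.update(device, state))
```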
-
Patent number: 8656376
Abstract: A method for providing intrinsic supports for a VLIW DSP processor with distributed register files comprises the steps of: generating a program representation with cluster information on instructions of the DSP processor, wherein the cluster information is provided by a program with cluster intrinsic coding; identifying data stream operations indicating parallel instruction sequences applied on different data sets in the program representation; identifying data sharing relations indicating data shared by the data stream operations in the program representation; identifying data aggregation relations indicating results aggregated from the data stream operations in the program representation; and performing register allocation for the DSP processor according to the identified data stream operations, the data sharing relations and the data aggregation relations.
Type: Grant
Filed: September 1, 2011
Date of Patent: February 18, 2014
Assignee: National Tsing Hua University
Inventors: Jenq Kuen Lee, Chi Bang Kuan
-
Patent number: 8656394
Abstract: A method for executing an application program using streams. A device driver receives a first command within an application program and parses the first command to identify a first stream token that is associated with a first stream. The device driver checks a memory location associated with the first stream for a first semaphore, and determines whether the first semaphore has been released. Once the first semaphore has been released, a second command within the application program is executed. Advantageously, embodiments of the invention provide a technique for developers to take advantage of the parallel execution capabilities of a GPU.
Type: Grant
Filed: August 15, 2008
Date of Patent: February 18, 2014
Assignee: Nvidia Corporation
Inventors: Nicholas Patrick Wilt, Ian Buck, Philip Cuadra
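A rough analogue of the semaphore check using Python's threading module, standing in for GPU streams: the second command is only issued once the semaphore tied to the first stream command has been released. Stream, launch_async and wait_stream are invented names for illustration.

```python
import threading

class Stream:
    """Illustrative stream object: commands in a stream run in order, and a
    semaphore marks when results of earlier commands are ready."""
    def __init__(self, name):
        self.name = name
        self.ready = threading.Semaphore(0)   # released when prior work finishes

def launch_async(stream, work):
    """Run a command on a worker thread; release the stream's semaphore when done."""
    def body():
        work()
        stream.ready.release()
    threading.Thread(target=body).start()

def wait_stream(stream):
    """Block until the stream's semaphore has been released, mirroring the
    'check the semaphore, then execute the next command' step in the abstract."""
    stream.ready.acquire()

s = Stream("s0")
launch_async(s, lambda: print("first command running on", s.name))
wait_stream(s)                      # second command only issues after release
print("second command running on", s.name)
```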
-
Patent number: 8656391
Abstract: A method and computer program product for defining a plurality of tags, each of which is associated with a discrete process executable on activity content. At least one of the plurality of tags is associated with a piece of content within an activity, thus defining one or more associated tags.
Type: Grant
Filed: June 22, 2007
Date of Patent: February 18, 2014
Assignee: International Business Machines Corporation
Inventors: Scott H. Prager, Martin T. Moore, Charles R. Hill
-
Patent number: 8656408
Abstract: Guiding OS thread scheduling in multi-core and/or multi-threaded microprocessors by: determining, for each thread among the active threads, the power consumed by each instruction type associated with an instruction executed by the thread during the last context switch interval; determining, for each thread among the active threads, the power consumption expected for each instruction type associated with an instruction scheduled by said thread during the next context switch interval; generating at least one combination of N threads among the active threads (M), and for each generated combination determining if the combination of N threads satisfies a main condition related to the power consumption per instruction type expected for each thread of the thread combination during the next context switch interval and to the thread power consumption per instruction type determined for each thread of the thread combination during the last context switch interval; and selecting a combination of N threads.
Type: Grant
Filed: September 28, 2011
Date of Patent: February 18, 2014
Assignee: International Business Machines Corporation
Inventors: Hisham E. Elshishiny, Ahmed T. Sayed Gamal El Din
-
Patent number: 8656398
Abstract: A system and method for synchronization of workflows in a video file workflow system. A workflow is created that splits execution of the workflow tasks (in a single, video file workflow) across multiple Content Management Systems (CMSs). When a single workflow is split across two CMSs, which jointly perform the overall workflow, the two resulting workflows are created to essentially mirror each other so that each CMS can track the tasks being executed on the other CMS using synchronization messages. Hence, both CMSs have the same representation of the processing status of the video content at all times. This allows for dual tracking of the workflow process and for independent operations, at different CMSs, when the CMS systems require load balancing. The split-processing based synchronization can be implemented in the workflows themselves or with simple modifications to workflow templates, without requiring any modification of the software of the workflow systems.
Type: Grant
Filed: May 3, 2011
Date of Patent: February 18, 2014
Assignee: Ericsson Television Inc.
Inventor: James Alexander
-
Patent number: 8650577
Abstract: A mobile terminal and controlling method thereof are disclosed, by which a scheduling function of giving a processing order to each of a plurality of tasks is supported. The present invention includes a memory including an operating system having a scheduler configured to perform a second scheduling function on a plurality of tasks, each having a processing order first-scheduled in accordance with a first reference, and a processor performing an operation related to the operating system, the processor processing a plurality of the tasks. Moreover, if a first task among a plurality of the first-scheduled tasks meets a second reference, the scheduler performs the second scheduling function by changing the processing orders to enable the first task to be preferentially processed.
Type: Grant
Filed: September 23, 2011
Date of Patent: February 11, 2014
Assignee: LG Electronics Inc.
Inventor: Sookyoung Kim
-
Patent number: 8650272
Abstract: A distributed transaction processing system includes a plurality of resources, resource managers to manage corresponding ones of the resources, and a transaction manager to coordinate performance of a transaction with the resource managers. In response to failure of the transaction manager, the resource managers are configured to collaborate to decide whether to commit or abort the transaction.
Type: Grant
Filed: September 26, 2008
Date of Patent: February 11, 2014
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Douglas B. Myers
-
Publication number: 20140040898
Abstract: A system includes an initiator and processing nodes. The initiator distributes portions of a transaction among the processing nodes. Each processing node has at least one downstream neighbor to which the processing node sends commit messages. The commit messages include a commit status of the processing node. The downstream neighbor is also a processing node.
Type: Application
Filed: July 31, 2012
Publication date: February 6, 2014
Inventors: Alan H. Karp, Wojciech Golab, Terence P. Kelly, Dhruva Chakrabarti
-
Patent number: 8645958
Abstract: In general, this disclosure is directed to a software virtual machine that provides high-performance transactional data acceleration optimized for multi-core computing platforms. The virtual machine utilizes an underlying parallelization engine that seeks to maximize the efficiencies of multi-core computing platforms to provide a highly scalable, high performance (lowest latency), virtual machine. In some embodiments, the virtual machine may be viewed as an in-memory virtual machine with an ability in its operational state to self organize and self seek, in real time, available memory work boundaries to automatically optimize maximum available throughput for data processing acceleration and content delivery of massive amounts of data.
Type: Grant
Filed: June 15, 2012
Date of Patent: February 4, 2014
Assignee: uCIRRUS
Inventors: Raymond J. Huetter, Alka Yamarti
-
Patent number: 8645961
Abstract: An image formation apparatus that has a webpage viewing function includes a job receiver that receives a job execution instruction from a user terminal, a job analyzer that analyzes the received job execution instruction, a job executor that executes a job based on a result of the analysis, and a job registration part that, if the received job execution instruction includes URL information specifying a webpage, registers user identification information pertaining to a user who issued the job execution instruction and the URL information included therein in correspondence with each other such that the webpage can be viewed with use of the URL information.
Type: Grant
Filed: March 7, 2008
Date of Patent: February 4, 2014
Assignee: Konica Minolta Business Technologies, Inc.
Inventors: Tomonari Yoshimura, Atsushi Ohshima, Masami Yamada, Masakazu Murakami, Takahiro Ikeda
-
Publication number: 20140033210
Abstract: A technique for attesting a plurality of data processing systems includes generating a logical grouping for a data processing system. The logical grouping is associated with a rule that describes a condition that must be met in order for the data processing system to be considered trusted. A list of one or more children associated with the logical grouping is retrieved. The one or more children are attested to determine whether each of the one or more children is trusted. In response to the attesting, the rule is applied to determine whether the condition has been met in order for the data processing system to be considered trusted. A plurality of logical groupings is associated to determine whether an associated plurality of data processing systems can be considered trusted.
Type: Application
Filed: September 30, 2013
Publication date: January 30, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David Haikney, David Nigel Mackintosh, Jose Juan Palacios Perez
-
Publication number: 20140033209
Abstract: A computing system for handling barrier commands includes a memory, an interface, and a processor. The memory is configured to store a pre-barrier spreading range that identifies a target computing system associated with a barrier command. The interface is coupled to the memory and is configured to send a pre-barrier computing probe to the target computing system identified in the pre-barrier spreading range and receive barrier completion notification messages from the target computing system. The pre-barrier computing probe is configured to instruct the target computing system to monitor a status of a transaction that needs to be executed for the barrier command to be completed. The processor is coupled to the interface and is configured to determine a status of the barrier command based on the received barrier completion notification messages.
Type: Application
Filed: July 26, 2013
Publication date: January 30, 2014
Applicant: Futurewei Technologies, Inc.
Inventors: Iulin Lih, Chenghong He, Hongbo Shi, Naxin Zhang
-
Patent number: 8640137
Abstract: Embodiments of an event-driven resource management technique may enable the management of cluster resources at a sub-computer level (e.g., at the thread level) and the decomposition of jobs at an atomic (task) level. A job queue may request a resource for a job from a resource manager, which may locate a resource in a resource list and grant the resource to the job queue. After the resource is granted, the job queue sends the job to the resource, on which the job may be partitioned into tasks and from which additional resources may be requested from the resource manager. The resource manager may locate additional resources in the list and grant the resources to the resource. The resource sends the tasks to the granted resources for execution. As resources complete their tasks, the resource manager is informed so that the status of the resources in the list can be updated.
Type: Grant
Filed: August 30, 2010
Date of Patent: January 28, 2014
Assignee: Adobe Systems Incorporated
Inventors: Sandford P. Bostic, Stephen Paul Reiser, Andrey J. Bigney
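An illustrative sketch of the request/grant loop between a resource manager and a granted resource that partitions its job into tasks and asks for additional resources; the names and the fallback-to-primary policy are assumptions, not the patented protocol.

```python
class ResourceManager:
    """Tracks a list of thread-level resources and grants them on request."""
    def __init__(self, resources):
        self.free = list(resources)

    def request(self):
        return self.free.pop() if self.free else None

    def release(self, resource):
        self.free.append(resource)          # status update when a task completes

def run_job(job_tasks, manager):
    """A granted resource partitions the job into tasks and asks the manager
    for additional resources to run them; unserved tasks fall back to the
    primary resource."""
    primary = manager.request()
    if primary is None:
        raise RuntimeError("no resource available for the job")
    pending = list(job_tasks)
    while pending:
        worker = manager.request() or primary
        task = pending.pop(0)
        print(f"{worker} executing {task}")
        if worker is not primary:
            manager.release(worker)
    manager.release(primary)

manager = ResourceManager(["thread-0", "thread-1", "thread-2"])
run_job(["parse", "transform", "aggregate", "write"], manager)
```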
-
Patent number: 8638805
Abstract: Described embodiments provide for restructuring a scheduling hierarchy of a network processor having a plurality of processing modules and a shared memory. The scheduling hierarchy schedules packets for transmission. The network processor generates tasks corresponding to each received packet associated with a data flow. A traffic manager receives tasks provided by one of the processing modules and determines a queue of the scheduling hierarchy corresponding to the task. The queue has a parent scheduler at each of one or more next levels of the scheduling hierarchy up to a root scheduler, forming a branch of the hierarchy. The traffic manager determines if the queue and one or more of the parent schedulers of the branch should be restructured. If so, the traffic manager drops subsequently received tasks for the branch, drains all tasks of the branch, and removes the corresponding nodes of the branch from the scheduling hierarchy.
Type: Grant
Filed: September 30, 2011
Date of Patent: January 28, 2014
Assignee: LSI Corporation
Inventors: Balakrishnan Sundararaman, Shashank Nemawarkar, David Sonnier, Shailendra Aulakh, Allen Vestal
-
Publication number: 20140019977
Abstract: An economical system and method of migrating legacy applications running on proprietary mainframe computer systems and distributed networks to commodity hardware-based software frameworks, by offloading the batch processing from the legacy systems, and returning the resultant data to the original legacy system to be consumed by the unaltered applications. An open source code tool is used to transfer the software, and rewrite it on a faster and more economical hardware system, while leaving a seamless integration of offloaded processing with existing batch processing flow.
Type: Application
Filed: July 10, 2012
Publication date: January 16, 2014
Applicant: SEARS BRANDS, LLC
Inventors: Sunilkumar Narayan Kakade, Philip Shelley, Susan S. Hsu
-
Publication number: 20140019978
Abstract: Disclosed are a system, method and computer-readable medium relating to managing resources within a compute environment having a group of nodes or computing devices. The method comprises, for each node in the compute environment: traversing a list of jobs having a fixed time relationship, wherein for each job in the list, the following steps occur: obtaining a range list of available timeframes for each job, converting each availability timeframe to a start range, shifting the resulting start range in time by a job offset, for a first job, copying the resulting start range into a node range, and for all subsequent jobs, logically AND'ing the start range with the node range. Next, the method comprises logically OR'ing the node range with a global range, generating a list of acceptable resources on which to start and the timeframe at which to start, and creating reservations according to the list of acceptable resources for the resources in the group of computing devices and associated job offsets.
Type: Application
Filed: December 21, 2012
Publication date: January 16, 2014
Applicant: ADAPTIVE COMPUTING ENTERPRISES, INC.
Inventor: ADAPTIVE COMPUTING ENTERPRISES, INC.
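A worked example of the range arithmetic the abstract lists, assuming numeric timeframes: each availability window is converted to a start range (shrunk by the job's duration), shifted by the job's offset, and AND'ed into a per-node range; OR'ing node ranges into a global range is omitted for brevity. All names and the interval representation are illustrative.

```python
def to_start_range(avail, duration):
    """Convert an availability timeframe to the range of feasible start times."""
    lo, hi = avail
    return (lo, hi - duration) if hi - duration >= lo else None

def shift(rng, offset):
    return (rng[0] - offset, rng[1] - offset)    # align to the group's start time

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Two jobs with a fixed time relationship: job B must start 5 units after job A.
jobs = [("A", 0, 10), ("B", 5, 10)]              # (name, offset, duration)
node_avail = {"A": (0, 40), "B": (20, 60)}       # availability timeframe per job

node_range = None
for name, offset, duration in jobs:
    start = shift(to_start_range(node_avail[name], duration), offset)
    node_range = start if node_range is None else intersect(node_range, start)

print("feasible group start times on this node:", node_range)
```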
-
Publication number: 20140019979
Abstract: Embodiments of the invention relate to generating automated web task procedures from an analysis of web history logs. One aspect of the invention concerns a method that comprises identifying sequences of related web actions from a web log, grouping each set of similar web actions into an action class, and mapping the sequences of related web actions into sequences of action classes. The method further clusters each group of similar sequences of action classes into a cluster, wherein relationships among the action classes in the cluster are represented by a state machine, and generates automated web task procedures from the state machine.
Type: Application
Filed: September 16, 2013
Publication date: January 16, 2014
Applicant: International Business Machines Corporation
Inventors: Saleema A. Amershi, Tessa A. Lau, Jalal U. Mahmud, Jeffrey W. Nichols
-
Publication number: 20140019569
Abstract: Embodiments herein disclose a process to find patterns represented by closed sequences with temporal ordering in time series data by converting the time series data into transactions. A distributed transaction handling unit continuously finds closed sequences with mutual confidence and lowest possible support thresholds from the data. The transaction handling unit distributes the data to be processed on multiple slave computers and uses data structures to store the statistics of the discovered patterns, which are kept up to date in real time. The transaction handling unit partitions the work into independent tasks so that the overhead of inter process and inter thread communication is kept at minimal. The transaction handling unit creates multiple check-points at user defined time interval or on demand or at the time of shutdown and is capable of using any of the available checkpoints and to be ready to process further data in an incremental manner.
Type: Application
Filed: July 12, 2012
Publication date: January 16, 2014
Inventors: Amit Vasant Sharma, Rajesh Satchidanand Kulkarni, Mukund Babaji Neharkar
-
Patent number: 8631422
Abstract: Techniques for business event processing are presented. Methods and apparatuses disclosed herein may operate to receive a request to perform an operation on a listing previously published by an online marketplace; to identify at least one additional listing having certain characteristics in common with the listing from a plurality of previously published listings including the listing; and to automatically perform the operation on the at least one additional listing.
Type: Grant
Filed: September 4, 2012
Date of Patent: January 14, 2014
Assignee: eBay Inc.
Inventors: Kam Kasravi, Vadim Geshel, Sergiy Pereshyvaylo, Angie Ruan, Yitao Yao, Maxim Drobintsev
-
Patent number: 8626882
Abstract: A distributed embedded system that allows for the reconfiguration of tasks and messages. The system includes a system configuration manager and a plurality of electronic control units (ECU) each having an ECU configuration manager. Each ECU configuration manager stores the current configuration data for task scheduling and bus/network accessing/retrieving for the current schedule for that ECU. The system configuration manager includes a separate configuration data table for each ECU that can be reconfigured by programming signals sent on a system bus. The system configuration manager transmits the new configuration data from the data table on the bus to the ECU configuration manager if the scheduling of the tasks, message retrieval from the bus and message transmission on the bus changes for an ECU as a result of adding new tasks or new ECUs to the system.
Type: Grant
Filed: October 7, 2005
Date of Patent: January 7, 2014
Assignee: GM Global Technology Operations LLC
Inventor: Shengbing Jiang
-
Patent number: 8627451
Abstract: A sandbox tool can cooperate with components of a secure operating system to create an isolated execution environment for accessing untrusted content without exposing other processes and resources of the computing system to the untrusted content. The sandbox tool can allocate resources (storage space, memory, etc.) of the computing system, which are necessary to access the untrusted content, to the isolated execution environment, and apply security policies of the operating system to the isolated execution environment such that untrusted content running in the isolated execution environment can only access the resources allocated to the isolated execution environment.
Type: Grant
Filed: August 21, 2009
Date of Patent: January 7, 2014
Assignee: Red Hat, Inc.
Inventors: Daniel J. Walsh, Eric Lynn Paris
-
Publication number: 20140007110
Abstract: A normalized interface for heterogeneous transaction processing systems is provided. One or more disparate transaction processing systems include an interface and a set of operations. When a user attempts to access the transaction processing systems, a normalized interface is presented having a normalized set of operations that map to the set of operations and the interface of the transaction processing systems. The user interacts with the normalized interface as if directly interacting with the transaction processing systems.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Applicant: NCR Corporation
Inventor: Jonathan Daniel Cordero
-
Patent number: 8621464
Abstract: In the dynamic sampling or collection of data relative to locks for which threads attempting to acquire the lock may be spinning, so as to adaptively adjust the spinning of threads for a lock, an implementation is provided for monitoring a set of parameters relative to the sampling of data for particular locks and for selectively terminating the sampling when certain parameter values or conditions are met.
Type: Grant
Filed: January 31, 2011
Date of Patent: December 31, 2013
Assignee: International Business Machines Corporation
Inventors: Michael H. Dawson, Vijay V. Sundaresan, Alexei I. Svikine
-
Patent number: 8621474
Abstract: A computer system having a process running unit which runs processes of a plurality of programs; a user input unit through which a command of a user to select one of a plurality of performance modes is inputted; and a controller which controls the process running unit to run a process of a program of the programs, which are currently being executed, according to a priority order corresponding to the performance mode selected by the command of the user if the command of the user is inputted.
Type: Grant
Filed: July 13, 2007
Date of Patent: December 31, 2013
Assignee: Samsung Electronics Co., Ltd.
Inventors: Kyoung-youl Kim, Min-sun Park, Keon-young Cho
-
Patent number: 8614799
Abstract: A method of paged memory management for a software process executing in a memory of a computer system, the software process having a first operating mode and a second operating mode, and the software process having associated memory page use information for determining a set of pages to be maintained in the memory. The method comprises recording the memory page use information to a data store as first operating mode memory page use information in response to a determination that the software process leaves the first operating mode, and retrieving the first operating mode memory page use information in response to a determination that the software process enters the first operating mode.
Type: Grant
Filed: June 1, 2006
Date of Patent: December 24, 2013
Assignee: International Business Machines Corporation
Inventors: Gordon Douglas Hutchison, Matthew Francis Peters, Emma Louise Shepherd
-
Patent number: 8615757
Abstract: A system and method are disclosed. In one embodiment the system includes a physical resource that is capable of generating I/O data. The system also includes multiple virtual machines to utilize the physical resource. Among the virtual machines are a resource source virtual machine that is capable of owning the physical resource. The resource source virtual machine is also capable of sending a stream of one or more I/O packets generated from the I/O data that targets a resource sink virtual machine. The resource sink virtual machine is designated as a termination endpoint of the I/O data from the physical device. Also among the virtual machines are one or more resource filter virtual machines. Each of the resource filter virtual machines is capable of filtering I/O packets of a particular type from the stream prior to the stream reaching the resource sink virtual machine.
Type: Grant
Filed: December 26, 2007
Date of Patent: December 24, 2013
Assignee: Intel Corporation
Inventors: Carl G. Klotz, Jr., Steve Grobman, Vedvyas Shanbhogue
-
Patent number: 8615768
Abstract: A synchronization system is described herein that synchronizes resource objects in an order based on their dependency relationships so that a referenced object is available by the time an object that references it is synchronized. Reference attributes present in resources define the dependency relationship among resources. Using these relationships, the system builds a dependency tree and orders synchronization operations for environment reconciliation by precedence so that referential integrity is preserved while still synchronizing reference attributes. The system can deterministically create a change list that guarantees referential integrity, and perform change list processing in parallel. The synchronization system attempts to order the synchronization based on references available to ensure that the system creates and updates dependent resources before their parent resources. Thus, the synchronization system provides a fast, reliable update mechanism for synchronizing two related data environments.
Type: Grant
Filed: September 27, 2010
Date of Patent: December 24, 2013
Assignee: Microsoft Corporation
Inventors: Billy Kwan, Joseph M. Schulman
-
Publication number: 20130339962
Abstract: A transaction executing within a computing environment ends prior to completion; i.e., execution is aborted. Pursuant to aborting execution, a hardware transactional execution CPU mode is exited, and one or more of the following is performed: restoring selected registers; committing nontransactional stores on abort; branching to a transaction abort program status word specified location; setting a condition code and/or abort code; and/or preserving diagnostic information.
Type: Application
Filed: March 8, 2013
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dan F. Greiner, Christian Jacobi, Timothy J. Slegel
-
Publication number: 20130339960
Abstract: A TRANSACTION BEGIN instruction and a TRANSACTION END instruction are provided. The TRANSACTION BEGIN instruction causes either a constrained or nonconstrained transaction to be initiated, depending on a field of the instruction. The TRANSACTION END instruction ends the transaction started by the TRANSACTION BEGIN instruction.
Type: Application
Filed: March 7, 2013
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Dan F. Greiner, Christian Jacobi, Marcel Mitran, Timothy J. Slegel
-
Publication number: 20130339961
Abstract: A transaction is initiated via a transaction begin instruction. During execution of the transaction, the transaction may abort. If the transaction aborts, a determination is made as to the type of transaction. Based on the transaction being a first type of transaction, resuming execution at the transaction begin instruction, and based on the transaction being a second type, resuming execution at an instruction following the transaction begin instruction. Regardless of transaction type, resuming execution includes restoring one or more registers specified in the transaction begin instruction and discarding transactional stores. For one type of transaction, the nonconstrained transaction, the resuming includes storing information in a transaction diagnostic block.
Type: Application
Filed: March 8, 2013
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventor: INTERNATIONAL BUSINESS MACHINES CORPORATION
-
Patent number: 8612580
Abstract: Embodiments are directed to distributing processing tasks from the reduced-performance computer system to at least one other computer system, to processing, at one computer system, a distributed task received from a reduced-performance computer system, and to establishing a simulation environment for testing distributed computing framework functionality. In an embodiment, a reduced-performance computer system monitors computing tasks to determine a processing resource usage level for each task. The computing tasks are part of a software application that is running on the reduced-performance computer system. The reduced-performance computer system determines that one of the monitored tasks is using processing resources beyond a specified threshold level. The reduced-performance computer system sends the task to another computer system that receives, processes and returns the results of the tasks to the reduced-performance computer system.
Type: Grant
Filed: May 31, 2011
Date of Patent: December 17, 2013
Assignee: Microsoft Corporation
Inventors: Niraj Girishkumar Gandhi, Kenneth Van Hyning, Jinghao Liu, Kyle Allen Larsen
-
Patent number: 8612981
Abstract: A method for distributing a task to plural processors is provided. A distribution rule of distributing plural tasks to sub processors or the main processor, respectively, is previously written in a program code of an application configured to include plural tasks. At the time of executing the application, the main processor reads out the distribution rule, and distributes plural tasks to the sub processors or the main processor, respectively, in accordance with the distribution rule.
Type: Grant
Filed: September 13, 2007
Date of Patent: December 17, 2013
Assignees: Sony Corporation, Sony Computer Entertainment Inc.
Inventor: Kan Murata
-
Patent number: 8613045
Abstract: Embodiments are directed to providing access to a resource over a network. A client device may request access to a server. An application may be provided to the client device. The application may cause control of the client device to be switched from a first desktop to a secure desktop. The secure desktop may be configured to restrict applications access to within the secure desktop. An indication of the resource on the server to map to may be received at the client device. The indicated resource may be mapped onto a file system on the client device. Mapping may comprise using a remote file access protocol, using DLL injection, or adding a kernel module to an operating system on the client device. The mapped resource may be constrained to be accessed through the secure desktop.
Type: Grant
Filed: May 1, 2008
Date of Patent: December 17, 2013
Assignee: F5 Networks, Inc.
Inventor: Andrey Shigapov
-
Patent number: 8612510
Abstract: A large-scale data processing system and method for processing data in a distributed and parallel processing environment. The system includes an application-independent framework for processing data having a plurality of application-independent map modules and reduce modules. These application-independent modules use application-independent operators to automatically handle parallelization of computations across the distributed and parallel processing environment when performing user-specified data processing operations. The system also includes a plurality of user-specified, application-specific operators, for use with the application-independent framework to perform a user-specified data processing operation on a user-specified set of input files. The application-specific operators include: a map operator and a reduce operator. The map operator is applied by the application-independent map modules to input data in the user-specified set of input files to produce intermediate data values.
Type: Grant
Filed: January 12, 2010
Date of Patent: December 17, 2013
Assignee: Google Inc.
Inventors: Jeffrey Dean, Sanjay Ghemawat
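The division of labor described here can be illustrated with the classic word-count example: an application-independent skeleton handles grouping of intermediate key/value pairs, while the user supplies only the map and reduce operators. The skeleton below is a single-process sketch; the patented system runs these operators in parallel across a distributed cluster.

```python
from collections import defaultdict

def run_mapreduce(map_op, reduce_op, input_records):
    """Application-independent skeleton: it shuffles intermediate key/value
    pairs produced by the map operator and feeds them to the reduce operator."""
    intermediate = defaultdict(list)
    for record in input_records:
        for key, value in map_op(record):
            intermediate[key].append(value)
    return {key: reduce_op(key, values) for key, values in intermediate.items()}

# Application-specific operators (word count).
def map_op(line):
    for word in line.split():
        yield word, 1

def reduce_op(word, counts):
    return sum(counts)

lines = ["to be or not to be", "to do is to be"]
print(run_mapreduce(map_op, reduce_op, lines))
```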
-
Patent number: 8612753
Abstract: In one embodiment of the invention, a server may send encrypted material to a client. The client processor may decrypt and process the material, encrypt the results, and send the results back to the server. This sequence of events may occur while the execution or processing of the material is restricted to the client processor. Any material outside the client processor, such as material located in system memory, will be encrypted.
Type: Grant
Filed: December 23, 2008
Date of Patent: December 17, 2013
Assignee: Intel Corporation
Inventors: Yasser Rasheed, Steve Grobman
-
Patent number: 8607236
Abstract: An information processing system is provided to alleviate excessive load on a master node, thereby allowing the master node to efficiently perform the process of assigning jobs to nodes. A client 10 classifies a plurality of jobs constituting a large-scale arithmetic operation into several blocks, and requests a master node 20 to process the jobs block by block, such that the master node 20 always performs the process of assigning a predetermined number of jobs or less. Here, the predetermined number is preferably determined in such a manner as to allow the master node 20 to efficiently perform the process of assigning the jobs to nodes, even if the number of nodes is significant. As such, the client 10 has the function of controlling the load on the master node 20, and therefore it is possible to prevent the load on the master node 20 from increasing.
Type: Grant
Filed: August 17, 2006
Date of Patent: December 10, 2013
Assignee: NS Solutions Corporation
Inventors: Shinjiro Kawano, Makoto Tensha, Katsumi Shiraishi
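A minimal sketch of the block-by-block submission idea: the client splits its jobs into blocks no larger than a fixed size, so the master only ever assigns a bounded number of jobs at once. All function names are illustrative, and the stand-in master simply round-robins jobs across nodes.

```python
def split_into_blocks(jobs, block_size):
    """Classify the jobs of a large-scale computation into blocks so the master
    never has to assign more than block_size jobs at once."""
    return [jobs[i:i + block_size] for i in range(0, len(jobs), block_size)]

def client_submit(jobs, master_assign, block_size=100):
    """Send one block at a time; request the next block only after the master
    has finished assigning the current one, bounding the master's load."""
    results = []
    for block in split_into_blocks(jobs, block_size):
        results.extend(master_assign(block))
    return results

# Stand-in master that "assigns" each job to a node round-robin.
def master_assign(block):
    return [(job, f"node-{i % 4}") for i, job in enumerate(block)]

print(client_submit([f"job-{n}" for n in range(250)], master_assign, block_size=100)[:3])
```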
-
Patent number: 8607247
Abstract: Method, system, and computer program product embodiments for synchronizing workitems on one or more processors are disclosed. The embodiments include executing a barrier skip instruction by a first workitem from the group, and responsive to the executed barrier skip instruction, reconfiguring a barrier to synchronize other workitems from the group in a plurality of points in a sequence without requiring the first workitem to reach the barrier in any of the plurality of points.
Type: Grant
Filed: November 3, 2011
Date of Patent: December 10, 2013
Assignee: Advanced Micro Devices, Inc.
Inventors: Lee W. Howes, Benedict R. Gaster, Michael C. Houston, Michael Mantor, Mark Leather, Norman Rubin, Brian D. Emberling
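A speculative Python sketch of a "barrier skip": a workitem that executes the skip reduces the barrier's expected party count, so the remaining workitems still synchronize without waiting for it. SkippableBarrier is an invented stand-in built on OS threads; the patent targets GPU workitems.

```python
import threading

class SkippableBarrier:
    """A barrier whose expected party count can be reduced when a workitem
    executes a 'skip', so remaining workitems still synchronize correctly."""
    def __init__(self, parties):
        self.parties = parties
        self.waiting = 0
        self.generation = 0
        self.cond = threading.Condition()

    def _maybe_release(self):
        if self.waiting >= self.parties:      # everyone still expected has arrived
            self.waiting = 0
            self.generation += 1
            self.cond.notify_all()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.waiting += 1
            self._maybe_release()
            while gen == self.generation:
                self.cond.wait()

    def skip(self):
        """Reconfigure the barrier: this workitem will not arrive at any of the
        remaining synchronization points."""
        with self.cond:
            self.parties -= 1
            self._maybe_release()

barrier = SkippableBarrier(parties=3)

def worker(name, skips):
    if skips:
        barrier.skip()
        print(name, "skipped the barrier")
        return
    print(name, "reached the barrier")
    barrier.wait()
    print(name, "passed the barrier")

threads = [threading.Thread(target=worker, args=(f"w{i}", i == 0)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```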
-
Patent number: 8607031
Abstract: A hardware device for concurrently processing a fixed set of predetermined tasks associated with an algorithm which includes a number of processes, some of the processes being dependent on binary decisions, includes a plurality of task units for processing data, making decisions and/or processing data and making decisions, including source task units and destination task units. A task interconnection logic means interconnects the task units for communicating actions from a source task unit to a destination task unit. Each of the task units includes a processor for executing only a particular single task of the fixed set of predetermined tasks associated with the algorithm in response to a received request action, and a status manager for handling the actions from the source task units and building the actions to be sent to the destination task units.
Type: Grant
Filed: February 3, 2012
Date of Patent: December 10, 2013
Assignee: International Business Machines Corporation
Inventors: Alain Benayoun, Jean-Francois Le Pennec, Patrick Michel, Claude Pin