Distributed Processing System Patents (Class 712/28)
  • Patent number: 8122427
    Abstract: A Decentralized System Services (DSS) architecture defines a framework for building fault-tolerant distributed applications across decentralized and heterogeneous systems. DSS enables “complexity through composition” by defining distributed designs as compositions of limited-function, observable services that may be quickly and dynamically assembled to perform higher-level functions. DSS defines a standardized interaction between distributed services using sessionless, asynchronous communications with explicit failure semantics. Accounting for latency, failure, and state management becomes a natural part of the design process. DSS includes a runtime implementation for managing concurrent services, the Common Concurrency Runtime (CCR); a protocol for service interactions, the Web Services Application Protocol (WSAP); and a set of required service behaviors that provide for composability, location independence, and fault tolerance, termed Distributed Protocol Oriented Programming (DPOP).
    Type: Grant
    Filed: January 4, 2006
    Date of Patent: February 21, 2012
    Assignee: Microsoft Corporation
    Inventors: Georgios Chrysanthakopoulos, Henrik Frystyk Nielsen, George M. Moore
  • Patent number: 8116718
    Abstract: A communication device includes a voice data and RF integrated circuit (IC) that includes a memory module that stores a plurality of applications corresponding to a plurality of uses of the communication device. A processing module executes a selected one of the plurality of applications and selects one of a plurality of power modes based on a current one of the plurality of uses of the communication device corresponding to the selected one of the plurality of applications. The processing module generates a power mode signal based on the selected one of the plurality of power modes. An off-chip power management circuit receives the power mode signal and generates a plurality of power supply signals to the voice data and RF IC based on the power mode signal.
    Type: Grant
    Filed: January 21, 2011
    Date of Patent: February 14, 2012
    Assignee: Broadcom Corporation
    Inventors: Yossi Cohen, Nelson R. Sollenberger, Vafa James Rakshani, Ahmadreza (Reza) Rofougaran, Maryam Rofougaran, Claude G. Hayek, Frederic Christian Marc Hayem
  • Patent number: 8103891
    Abstract: Power consumption may be reduced in a media device including a first processor coupled to non-volatile memory, either directly or indirectly, allowing the first processor to generate a pointer structure. The first processor may also be coupled, either directly or indirectly, to a memory space, allowing the first processor to write the pointer structure in the memory space. The media device includes a second processor, such as a DSP/SHW or peripheral processor, which may also be coupled, either directly or indirectly, to the memory space, allowing the second processor to retrieve a block of media data from the non-volatile memory. The block of media data may be read directly from the non-volatile memory, or in some cases, the retrieved media data may be parsed. The media data may be audio file data, video file data, or both.
    Type: Grant
    Filed: March 16, 2009
    Date of Patent: January 24, 2012
    Assignee: QUALCOMM Incorporated
    Inventors: Gary D. Good, Kuntal D. Sampat, Christopher H. Bracken
  • Patent number: 8103855
    Abstract: The present disclosure provides a methodology for reducing congestion of a processing unit, preferably by configuring a plurality of functional blocks to run in parallel or in series without influence or input from the processing unit. In an embodiment, the present method chains a plurality of functional blocks together by software so that one functional block starts after the completion of another functional block. The configuration of the chain can be series, parallel, or any combination thereof, arranged to meet the circuit's objective. The chaining can be configured and re-configured, preferably by software input. The chaining can be performed at design time or at run time, and can likewise be modified, preferably at design time, but also at run time.
    Type: Grant
    Filed: June 29, 2008
    Date of Patent: January 24, 2012
    Assignee: Navosha Corporation
    Inventors: Hirak Mitra, Raj Kulkarni, Richard Wicks, Michael Moon
  • Patent number: 8095810
    Abstract: In a storage system that includes two or more file servers, each including an arbitrary number of operating virtual file servers, a management server: holds a load information table regarding the load on each virtual file server for each time period and a redundancy information table for the storage system; judges, with reference to the load information table and the redundancy information table, whether or not the loads on the virtual file servers can be handled by a smaller number of file servers than the number of currently-operating file servers; selects, if the judgment result is positive, a power-off target file server and makes another file server fail over a virtual file server in the power-off target file server; and turns off the power-off target file server.
    Type: Grant
    Filed: March 12, 2008
    Date of Patent: January 10, 2012
    Assignee: Hitachi, Ltd.
    Inventors: Keiichi Matsuzawa, Takahiro Nakano
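    A minimal C sketch of the consolidation check described in the abstract above: given per-virtual-file-server loads for one time period and a per-file-server capacity, decide whether the work fits on fewer file servers than are currently operating. The capacity model, the first-fit placement, and all numbers are assumptions for illustration, not the patented method.

      /* Illustrative sketch; loads, capacity and placement rule are hypothetical. */
      #include <stdbool.h>
      #include <stdio.h>

      #define SERVERS   4
      #define VFS_COUNT 6

      static bool fits_on(int target_servers, const double load[], int n, double cap) {
          double used[SERVERS] = {0};
          for (int i = 0; i < n; i++) {            /* first-fit each virtual file server */
              int s = 0;
              while (s < target_servers && used[s] + load[i] > cap) s++;
              if (s == target_servers) return false;
              used[s] += load[i];
          }
          return true;
      }

      int main(void) {
          /* Hypothetical loads for one time period, as fractions of one server. */
          double load[VFS_COUNT] = {0.30, 0.25, 0.20, 0.15, 0.35, 0.20};
          double capacity = 1.0;

          for (int n = SERVERS - 1; n >= 1; n--)
              if (fits_on(n, load, VFS_COUNT, capacity)) {
                  printf("loads fit on %d server(s); fail over and power off the rest\n", n);
                  break;
              }
          return 0;
      }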
  • Patent number: 8095733
    Abstract: A data processing system includes an interconnect fabric, a system memory coupled to the interconnect fabric and including a virtual barrier synchronization region allocated to storage of virtual barrier synchronization registers (VBSRs), and a plurality of processing units coupled to the interconnect fabric and operable to access the virtual barrier synchronization region. Each of the plurality of processing units includes a processor core and a cache memory including a cache controller and a cache array that caches VBSR lines from the virtual barrier synchronization region of the system memory. The cache controller of a first processing unit, responsive to a memory access request from its processor core that targets a first VBSR line, transfers responsibility for writing back to the virtual barrier synchronization region a second VBSR line contemporaneously held in the cache arrays of first, second and third processing units.
    Type: Grant
    Filed: April 7, 2009
    Date of Patent: January 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Guy L. Guthrie, Michael Siegel, William J. Starke, Derek E. Williams
  • Publication number: 20110320766
    Abstract: An apparatus and method are described herein for coupling a processor core of a first type with a co-designed core of a second type. Execution of program code on the first core is monitored and hot sections of the program code are identified. Those hot sections are optimized for execution on the co-designed core, such that upon subsequently encountering those hot sections, the optimized hot sections are executed on the co-designed core. When the co-designed core is executing optimized hot code, the first processor core may be in a low-power state to save power or executing other code in parallel. Furthermore, multiple threads of cold code may be pipelined on the first core, while multiple threads of hot code are pipelined on the co-designed core to achieve maximum performance.
    Type: Application
    Filed: June 29, 2010
    Publication date: December 29, 2011
    Inventors: Youfeng Wu, Shiliang Hu, Edson Borin, Cheng C. Wang, Mauricio Breternitz, JR., Wei Liu
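    As a rough, hypothetical illustration of the hot/cold split described above, the C sketch below counts how often each program region executes and flags regions that cross a threshold as candidates for optimization on the co-designed core. The region granularity, threshold, and counts are invented for illustration and are not taken from the publication.

      /* Illustrative sketch; regions, counts and threshold are hypothetical. */
      #include <stdio.h>

      #define REGIONS       6
      #define HOT_THRESHOLD 1000

      int main(void) {
          /* Hypothetical execution counts gathered while monitoring the program. */
          long exec_count[REGIONS] = {12, 45000, 3, 980, 270000, 51};

          for (int r = 0; r < REGIONS; r++) {
              if (exec_count[r] >= HOT_THRESHOLD)
                  printf("region %d is hot: optimize it for the co-designed core\n", r);
              else
                  printf("region %d stays as cold code on the first core\n", r);
          }
          return 0;
      }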
  • Patent number: 8086828
    Abstract: Heterogeneous processors can cooperate for distributed processing tasks in a multiprocessor computing system. Each processor is operable in a “compatible” mode, in which all processors within a family accept the same baseline command set and produce identical results upon executing any command in the baseline command set. The processors also have a “native” mode of operation in which the command set and/or results may differ in at least some respects from the baseline command set and results. Heterogeneous processors with a compatible mode defined by reference to the same baseline can be used cooperatively for distributed processing by configuring each processor to operate in the compatible mode.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: December 27, 2011
    Assignee: NVIDIA Corporation
    Inventors: Henry Packard Moreton, Abraham B. de Waal
  • Patent number: 8082359
    Abstract: The present application is directed towards systems and methods for ensuring equal distribution of packet flows among a plurality of cores in a multi-core system by identifying a rank of a matrix created from a hash key. If the rank of the matrix is equal to or greater than a divisor of a modulo operation applied to the results of the hash function, then the hash key may be used to ensure equal distribution of packet flows.
    Type: Grant
    Filed: December 23, 2009
    Date of Patent: December 20, 2011
    Assignee: Citrix Systems, Inc.
    Inventor: Abhishek Chauhan
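    The rank test described above can be sketched in C: interpret the hash key as a binary matrix, compute its rank over GF(2) by Gaussian elimination, and compare it against the divisor of the modulo operation. The matrix layout, key values, and acceptance rule shown here are assumptions for illustration, not the patented implementation.

      /* Illustrative sketch; key rows and acceptance rule are hypothetical. */
      #include <stdint.h>
      #include <stdio.h>

      /* Rank over GF(2) of an n-row matrix whose rows are 32-bit words. */
      static int gf2_rank(uint32_t rows[], int n) {
          int rank = 0;
          for (int bit = 31; bit >= 0 && rank < n; bit--) {
              int pivot = -1;
              for (int r = rank; r < n; r++)
                  if (rows[r] & (1u << bit)) { pivot = r; break; }
              if (pivot < 0) continue;
              uint32_t tmp = rows[rank]; rows[rank] = rows[pivot]; rows[pivot] = tmp;
              for (int r = 0; r < n; r++)
                  if (r != rank && (rows[r] & (1u << bit)))
                      rows[r] ^= rows[rank];      /* eliminate this bit elsewhere */
              rank++;
          }
          return rank;
      }

      int main(void) {
          /* Hypothetical hash key interpreted as eight 32-bit rows. */
          uint32_t key_rows[8] = {
              0x6d5a56da, 0xc2da65af, 0x6d5a56da ^ 0xc2da65af, 0x3193a2fa,
              0x15d455fd, 0xc65a56da, 0x3193a2fa ^ 0x15d455fd, 0x88f2c4b1
          };
          int divisor = 7;                         /* divisor of the modulo step */
          int rank = gf2_rank(key_rows, 8);
          printf("rank = %d, divisor = %d -> %s\n", rank, divisor,
                 rank >= divisor ? "hash key acceptable" : "choose a different hash key");
          return 0;
      }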
  • Patent number: 8074054
    Abstract: A processing system includes a group of processing units (“PUs”) arranged in a daisy chain configuration or a sequence capable of parallel processing. The processing system, in one embodiment, includes PUs, a demultiplexer (“demux”), and a multiplexer (“mux”). The PUs are connected or linked in a sequence or a daisy chain configuration wherein a first PU is located at the beginning of the sequence and a last PU is located at the end of the sequence. Each PU is configured to read an input data packet from a packet stream during its designated reading time frame; outside of that designated reading time frame, the PU allows the packet stream to pass through. The demux forwards a packet stream to the first PU. The mux receives a packet stream from the last PU.
    Type: Grant
    Filed: December 12, 2007
    Date of Patent: December 6, 2011
    Assignee: Tellabs San Jose, Inc.
    Inventors: Venkata Rangavajjhala, Naveen K. Jain
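    A behavioral C sketch of the daisy chain described above: each PU reads a packet only during its designated reading time frame and lets everything else pass through to the next PU in the chain. The round-robin slot assignment is an assumption made purely for illustration.

      /* Illustrative sketch; the slot assignment is hypothetical. */
      #include <stdio.h>

      #define PUS      4
      #define PACKETS 12

      int main(void) {
          for (int pkt = 0; pkt < PACKETS; pkt++) {
              /* The demux forwards every packet to PU 0; it then travels the chain. */
              for (int pu = 0; pu < PUS; pu++) {
                  if (pkt % PUS == pu) {          /* inside this PU's reading time frame */
                      printf("PU %d reads packet %d\n", pu, pkt);
                      break;                      /* consumed; the mux collects the result */
                  }
                  /* otherwise the packet simply passes through to the next PU */
              }
          }
          return 0;
      }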
  • Patent number: 8073982
    Abstract: The invention relates to a method for setting an operating parameter in a peripheral IC. In this method, the operating parameter is transmitted from a central IC via a bus connection to the peripheral IC. The method is characterized in that the operating parameter is initially buffered in a preregister in the peripheral IC, and in that the buffered operating parameter is transferred into a working register only if a transfer signal is sent from the central IC via the bus connection. This method has the advantage that, for example in the case of rapidly changing receive conditions in a send/receive unit, adjustment of the send or receive gain setting is very flexible, and it is easy to avoid an incorrect setting due to a detected signal fluctuation. The invention also relates to a device for carrying out said method.
    Type: Grant
    Filed: December 14, 2002
    Date of Patent: December 6, 2011
    Assignee: Thomson Licensing
    Inventors: Friedrich Heizmann, Thomas Schwanenberger, Patrick Lopez
  • Patent number: 8065503
    Abstract: Methods, systems and computer programs for distributing a computing operation among a plurality of processes and for gathering results of the computing operation from the plurality of processes are described.
    Type: Grant
    Filed: December 15, 2006
    Date of Patent: November 22, 2011
    Assignee: International Business Machines Corporation
    Inventor: Bin Jia
  • Patent number: 8056080
    Abstract: Execution units process commands from one or more command queues. Once a command is available on the queue, each unit participating in the execution of the command atomically decrements the command's work-groups-remaining counter by the work group reservation size and processes a corresponding number of work groups within a work group range. Once all work groups within a range are processed, an execution unit increments a work-groups-processed counter. The unit that increments the work-groups-processed counter to the value stored in a work-groups-to-be-executed counter signals completion of the command. Each execution unit that accesses a command also marks a work-group-seen counter. Once the work-groups-processed counter equals the work-groups-to-be-executed counter and the work-group-seen counter equals the number of execution units, the command may be removed or overwritten on the command queue.
    Type: Grant
    Filed: August 31, 2009
    Date of Patent: November 8, 2011
    Assignee: International Business Machines Corporation
    Inventors: Benjamin G. Alexander, Gregory H. Bellows, Joaquin Madruga, Brian D. Watt
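    The counter scheme in the abstract above can be sketched with C11 atomics, using threads as stand-ins for execution units. The reservation size, counts, and identifiers are hypothetical; this only illustrates the decrement/increment bookkeeping, not the patented implementation.

      /* Illustrative sketch; sizes and names are hypothetical. */
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      #define UNITS         4
      #define TOTAL_GROUPS 64
      #define RESERVE       8   /* work group reservation size */

      typedef struct {
          atomic_int remaining;  /* work-groups-remaining counter        */
          atomic_int processed;  /* work-groups-processed counter        */
          atomic_int seen;       /* execution units that saw the command */
      } command_t;

      static command_t cmd = { TOTAL_GROUPS, 0, 0 };

      static void *execution_unit(void *arg) {
          (void)arg;
          atomic_fetch_add(&cmd.seen, 1);                      /* mark the command as seen */
          for (;;) {
              int before = atomic_fetch_sub(&cmd.remaining, RESERVE);
              if (before <= 0) break;                          /* nothing left to reserve  */
              int count = before < RESERVE ? before : RESERVE; /* clamp the final chunk    */
              /* ... process 'count' work groups of the command here ... */
              int done = atomic_fetch_add(&cmd.processed, count) + count;
              if (done == TOTAL_GROUPS)
                  printf("this unit signals completion of the command\n");
          }
          return NULL;
      }

      int main(void) {
          pthread_t t[UNITS];
          for (int i = 0; i < UNITS; i++) pthread_create(&t[i], NULL, execution_unit, NULL);
          for (int i = 0; i < UNITS; i++) pthread_join(t[i], NULL);
          /* The command may be retired once processed == TOTAL_GROUPS and seen == UNITS. */
          printf("processed=%d seen=%d\n",
                 atomic_load(&cmd.processed), atomic_load(&cmd.seen));
          return 0;
      }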
  • Publication number: 20110271077
    Abstract: A processor has a plurality of PEs (processing elements) that operate in parallel based on operation commands, and an information collection unit that collects the data of the plurality of PEs. Each PE holds data and a condition flag, supplies the data and the condition flag to the information collection unit upon receiving an operation command, and, upon receiving an update request for updating the condition flag, updates the condition flag in accordance with the received request. Upon receiving the data and the condition flags, the information collection unit selects one PE, based on a predetermined order of priority, from among the PEs whose received condition flags are active, supplies the data of the selected PE as the collection result data, and supplies an update request for updating the condition flag of the selected PE.
    Type: Application
    Filed: January 14, 2010
    Publication date: November 3, 2011
    Inventor: Shohei Nomoto
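    A small C sketch of the collection step described above: among PEs whose condition flags are active, the collection unit picks one by a fixed priority order, emits its data as the collection result, and requests a flag update so the next collection selects a different PE. The flag-clearing semantics and data values shown are assumptions for illustration.

      /* Illustrative sketch; flags, data and update behavior are hypothetical. */
      #include <stdio.h>

      #define PES 4

      int main(void) {
          int data[PES] = {11, 22, 33, 44};
          int flag[PES] = {1, 1, 0, 1};        /* active condition flags */

          /* Repeatedly collect until no PE remains active. */
          for (;;) {
              int selected = -1;
              for (int pe = 0; pe < PES; pe++) /* lowest index = highest priority */
                  if (flag[pe]) { selected = pe; break; }
              if (selected < 0) break;

              printf("collected data %d from PE %d\n", data[selected], selected);
              flag[selected] = 0;              /* the update request clears the flag */
          }
          return 0;
      }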
  • Patent number: 8049760
    Abstract: The present disclosure describes implementations for processing instructions and data across multiple Arithmetic Logic Units (ALUs). In one implementation, a graphics processing apparatus comprises a plurality of ALUs configured to process independent instructions in parallel. Pre-processing logic is configured to receive instructions and associated data to be directed to one of the plurality of ALUs for processing from a register file, the pre-processing logic being configured to selectively format received instructions for delivery to a plurality of the ALUs. In addition, post-processing logic is configured to receive data output from the plurality of the ALUs and deliver the received data to the register file for write-back, the post-processing logic being configured to selectively format data output from a plurality of the ALUs for delivery to the register file as though the data had been output by a single ALU.
    Type: Grant
    Filed: December 13, 2006
    Date of Patent: November 1, 2011
    Assignee: Via Technologies, Inc.
    Inventors: Yang (Jeff) Jiao, Chien Te Ho
  • Patent number: 8051305
    Abstract: A motherboard device includes a first connecting interface coupled to a first graphics card, a second connecting interface coupled to a second graphics card, a power source connected electrically to the first connecting interface for supplying electric power to the first graphics card via the first connecting interface, and a switch unit interconnecting electrically the power source and the second connecting interface, and operable so as to switch between an ON-state, where the power source supplies electric power to the second graphics card via the second connecting interface, and an OFF-state, where the electric power from the power source is not supplied to the second graphics card.
    Type: Grant
    Filed: June 4, 2008
    Date of Patent: November 1, 2011
    Assignees: Micro-Star International Co., Ltd., MSI Electronic (Kun Shan) Co., Ltd.
    Inventor: Wen-Jie Zhu
  • Patent number: 8037284
    Abstract: A stream processing computer architecture includes creating a stream computer processing (SCP) system by forming a super node cluster of processors representing physical computation nodes (“nodes”), communicatively coupling the processors via a local interconnection means (“interconnect”), and communicatively coupling the cluster to an optical circuit switch (OCS), via optical external links (“links”). The OCS is communicatively coupled to another cluster of processors via the links. The method also includes generating a stream computation graph including kernels and data streams, and mapping the graph to the SCP system, which includes assigning the kernels to the clusters and respective nodes, assigning data stream traffic between the kernels to the interconnection when the data stream is between nodes in the same cluster, and assigning traffic between the kernels to the links when the data stream is between nodes in different clusters.
    Type: Grant
    Filed: November 9, 2010
    Date of Patent: October 11, 2011
    Assignee: International Business Machines Corporation
    Inventors: Eugen Schenfeld, Smith T. Basil, III
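    The mapping rule in the abstract above can be illustrated with a short C sketch: once kernels are assigned to super node clusters, streams between kernels in the same cluster are assigned to the local interconnect, while streams between clusters are assigned to optical external links through the OCS. The graph and cluster assignments are hypothetical.

      /* Illustrative sketch; the computation graph and placement are hypothetical. */
      #include <stdio.h>

      #define KERNELS 5

      int main(void) {
          /* cluster[k] = super node cluster hosting kernel k */
          int cluster[KERNELS] = {0, 0, 1, 1, 2};

          /* Data streams of the computation graph as (producer, consumer) pairs. */
          int streams[][2] = { {0,1}, {1,2}, {2,3}, {3,4}, {0,4} };
          int n = sizeof streams / sizeof streams[0];

          for (int i = 0; i < n; i++) {
              int a = streams[i][0], b = streams[i][1];
              printf("stream %d->%d: %s\n", a, b,
                     cluster[a] == cluster[b] ? "local interconnect"
                                              : "optical external link via OCS");
          }
          return 0;
      }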
  • Patent number: 8032821
    Abstract: This disclosure relates to a method and system of processing chain calculations in spreadsheet applications utilizing multiple processors, each having a separate recalculation engine. A single calculation chain may be reordered into a unified chain where supporting and dependent formulas are organized into a tree hierarchy of child chains. The unified chain is further divided into dependency levels, where entries in each dependency level may be moved to a next dependency level during reordering. If an entry within a dependency level is dependent upon another entry not found within its own child chain, the unified chain is ordered such that an entry is only dependent upon an entry in a prior dependency level. Further, dependency levels allow a control thread to perform control-thread-only operations while maintaining multi-thread processing capabilities.
    Type: Grant
    Filed: May 8, 2006
    Date of Patent: October 4, 2011
    Assignee: Microsoft Corporation
    Inventors: Jeffrey J. Duzak, Andrew Becker, Matthew J. Androski, Duane Campbell
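    A rough C sketch of the dependency-level idea described above: assign every formula a level one greater than the deepest formula it depends on, so that entries within a level depend only on earlier levels and can be recalculated in parallel. The example graph and the single-pass leveling rule are assumptions for illustration, not the patented reordering.

      /* Illustrative sketch; the dependency graph is hypothetical. */
      #include <stdio.h>

      #define N 6
      /* deps[i][j] = 1 means formula i reads the result of formula j. */
      static const int deps[N][N] = {
          /* A0 */ {0,0,0,0,0,0},
          /* A1 */ {1,0,0,0,0,0},      /* A1 = f(A0)     */
          /* A2 */ {1,0,0,0,0,0},      /* A2 = f(A0)     */
          /* A3 */ {0,1,1,0,0,0},      /* A3 = f(A1, A2) */
          /* A4 */ {0,0,1,0,0,0},      /* A4 = f(A2)     */
          /* A5 */ {0,0,0,1,1,0},      /* A5 = f(A3, A4) */
      };

      int main(void) {
          int level[N] = {0};
          /* Because rows are listed after their dependencies here, one forward
             pass suffices; a real engine would topologically sort first. */
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  if (deps[i][j] && level[j] + 1 > level[i])
                      level[i] = level[j] + 1;

          for (int i = 0; i < N; i++)
              printf("formula A%d -> dependency level %d\n", i, level[i]);
          return 0;
      }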
  • Patent number: 8027972
    Abstract: Embodiments of the invention may be used to normalize data stored in an in-memory database on a parallel computer system. The data normalization may be used to achieve memory savings, thereby reducing the number of compute nodes required to store an in-memory database. Thus, as a result, faster response times may be achieved when querying the data. In one embodiment, normalization may be performed in a manner to avoid datasets that cross physical or logical boundaries of the compute nodes of a parallel system.
    Type: Grant
    Filed: September 26, 2007
    Date of Patent: September 27, 2011
    Assignee: International Business Machines Corporation
    Inventors: Eric Lawrence Barsness, Amanda Peters, John Matthew Santosuosso
  • Publication number: 20110225392
    Abstract: Techniques for providing improved data distribution to and collection from multiple memories are described. Such memories are often associated with and local to processing elements (PEs) within an array processor. Improved data transfer control within a data processing system provides support for radix 2, 4 and 8 fast Fourier transform (FFT) algorithms through data reordering or bit-reversed addressing across multiple PEs, carried out concurrently with FFT computation on a digital signal processor (DSP) array by a DMA unit. Parallel data distribution and collection through forms of multicast and packet-gather operations are also supported.
    Type: Application
    Filed: May 23, 2011
    Publication date: September 15, 2011
    Applicant: ALTERA CORPORATION
    Inventors: Edwin F. Barry, Nikos P. Pitsianis, Kevin Coopman
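    The radix-2 data reordering mentioned above can be illustrated with a small C sketch of bit-reversed addressing. The publication describes a DMA unit performing such reordering across multiple PEs concurrently with FFT computation; the sketch below performs it on a single CPU purely to show the index mapping, and the names and sizes are hypothetical.

      /* Illustrative sketch; the PE/DMA split is not modeled here. */
      #include <stdio.h>

      /* Reverse the low 'bits' bits of index i (e.g. bits=4 maps 0b0011 -> 0b1100). */
      static unsigned bit_reverse(unsigned i, unsigned bits) {
          unsigned r = 0;
          for (unsigned b = 0; b < bits; b++) {
              r = (r << 1) | (i & 1u);
              i >>= 1;
          }
          return r;
      }

      int main(void) {
          enum { BITS = 4, N = 1 << BITS };          /* 16-point FFT */
          float x[N], y[N];
          for (unsigned i = 0; i < N; i++) x[i] = (float)i;

          /* Gather pass: element i of the reordered buffer comes from x[rev(i)]. */
          for (unsigned i = 0; i < N; i++) y[i] = x[bit_reverse(i, BITS)];

          for (unsigned i = 0; i < N; i++)
              printf("y[%2u] = %4.0f (from x[%2u])\n", i, y[i], bit_reverse(i, BITS));
          return 0;
      }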
  • Patent number: 8015396
    Abstract: In a computer system in which a server has, in addition to a disk used for booting, an operation transfer destination disk that has the same content as the boot disk, a method is realized for changing the disk used for booting, by the server or by another server in the computer system, to the operation transfer destination disk. The method changes the content of the operation transfer destination disk so that the OS and applications installed on it can be booted from that disk, and changes the setting of a boot program of the server to enable booting from the operation transfer destination disk.
    Type: Grant
    Filed: September 22, 2008
    Date of Patent: September 6, 2011
    Assignee: Hitachi, Ltd.
    Inventors: Keisuke Hatasaki, Takao Nakajima
  • Patent number: 8015368
    Abstract: Enhancements to hardware architectures (e.g., a RISC processor or a DSP processor) to accelerate spectral band replication (SBR) processing are described. In some embodiments, instruction extensions configure a reconfigurable processor to accelerate SBR and other audio processing. In addition to the instruction extensions, execution units (e.g., multiplication and accumulation units (MACs)) may operate in parallel to reduce the number of audio processing cycles. Performance may be further enhanced through the use of source and destination units which are configured to work with the execution units and quickly fetch and store source and destination operands.
    Type: Grant
    Filed: April 21, 2008
    Date of Patent: September 6, 2011
    Assignee: Siport, Inc.
    Inventors: Sridhar Sharma, Binuraj Ravindran, Jeffrey V. Hill
  • Patent number: 8015567
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Grant
    Filed: August 31, 2004
    Date of Patent: September 6, 2011
    Assignee: NetLogic Microsystems, Inc.
    Inventor: David T. Hass
  • Publication number: 20110208947
    Abstract: A method simplifies transmission in a distributed parallel computing system. The method includes: identifying at least one item in a data input to the parallel computing unit; creating a correspondence relation between the at least one item and indices thereof according to a simplification coding algorithm, where the average size of the indices is less than the average size of the at least one item; replacing the at least one item with the corresponding indices according to the correspondence relation; generating simplified intermediate results by the parallel computing unit based on the indices; and transmitting the simplified intermediate results. The invention also provides a system corresponding to the above method.
    Type: Application
    Filed: January 28, 2011
    Publication date: August 25, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Haibo Lin, Jia Jia Wen, Zhe Xiang, Yi Xin Zhao
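    A minimal C sketch of the index-substitution idea described above: long, repeated items are mapped to small integer indices before intermediate results are transmitted, and the receiver only needs the small dictionary to decode them. The table layout, item strings, and function names are assumptions for illustration, not the patented coding algorithm.

      /* Illustrative sketch; dictionary layout and items are hypothetical. */
      #include <stdio.h>
      #include <string.h>

      #define MAX_ITEMS 16

      static const char *dictionary[MAX_ITEMS];   /* index -> item */
      static int dict_size = 0;

      /* Return the index of item, inserting it if it has not been seen yet. */
      static int encode(const char *item) {
          for (int i = 0; i < dict_size; i++)
              if (strcmp(dictionary[i], item) == 0) return i;
          if (dict_size == MAX_ITEMS) return -1;  /* table full */
          dictionary[dict_size] = item;
          return dict_size++;
      }

      int main(void) {
          /* Hypothetical intermediate results keyed by a long item string. */
          const char *results[] = {
              "http://example.com/catalog/product/very-long-key",
              "http://example.com/catalog/product/another-long-key",
              "http://example.com/catalog/product/very-long-key",
          };
          int n = sizeof results / sizeof results[0];

          printf("transmitted indices:");
          for (int i = 0; i < n; i++) printf(" %d", encode(results[i]));
          printf("\n");

          /* The receiver only needs the small dictionary once to decode. */
          for (int i = 0; i < dict_size; i++)
              printf("index %d -> %s\n", i, dictionary[i]);
          return 0;
      }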
  • Patent number: 7996653
    Abstract: In one embodiment, a node comprises a plurality of processor cores and a node controller configured to receive a first read operation addressing a first register. The node controller is configured to return a first value in response to the first read operation, dependent on which processor core transmitted the first read operation. In another embodiment, the node comprises the processor cores and the node controller. The node controller comprises a queue shared by the processor cores. The processor cores are configured to transmit communications at a maximum rate of one every N clock cycles, where N is an integer equal to a number of the processor cores. In still another embodiment, a node comprises the processor cores and a plurality of fuses shared by the processor cores. In some embodiments, the node components are integrated onto a single integrated circuit chip (e.g. a chip multiprocessor).
    Type: Grant
    Filed: October 7, 2010
    Date of Patent: August 9, 2011
    Assignee: GLOBALFOUNDRIES Inc.
    Inventors: William A. Hughes, Vydhyanathan Kalyanasundharam, Kiran K. Bondalapati, Philip E. Madrid, Stephen C. Ennis
  • Patent number: 7996497
    Abstract: A method for enabling a Node Controller (NC), which claims a duplicate or invalid service processor Node Controller Identification (NCID) in a distributed service processor system, to be integrated into the system includes reading an NCID by the NC after the NC is booted, saving the NCID into a non-volatile storage and broadcasting an NC Present Message (NPM) to a System Controller (SC) repeatedly until the SC initiates communication, updating the NCID for the NC in the non-volatile storage when the NC receives an NCID change message from the SC and rating any future NPM as a new NCID, and checking a record of a new NC when the SC receives the NPM from the NC. If the SC has a record of a recorded NC with the same NCID as the new NC, then the SC checks its role as a primary SC. If the SC does not have the record of the recorded NC with the same NCID as the new NC, then the SC checks validity of the NCID.
    Type: Grant
    Filed: June 30, 2008
    Date of Patent: August 9, 2011
    Assignee: International Business Machines Corporation
    Inventors: Michael John Jones, Ajay Kumar Mahajan, Rashmi Narasimhan, Atit D. Patel
  • Patent number: 7975268
    Abstract: A method and system for performing a requested job. The system includes multiple processing servers and a management server. Each processing server executes an assigned program for performing the requested job. An execution direction, which identifies each of the multiple programs and an execution order for their execution (including identification of a first program to be executed), is generated. The management server sends to a first processing server the execution direction and input data. The first processing server: executes the first program using the input data, resulting in updating the input data; sends to the management server an inquiry for identification information that identifies a second processing server to execute a second program included in the execution direction; receives the identification information from the management server; and sends to the second processing server the execution direction and the updated input data for subsequent execution of the second program.
    Type: Grant
    Filed: February 15, 2005
    Date of Patent: July 5, 2011
    Assignee: International Business Machines Corporation
    Inventor: Akihiro Kaneko
  • Publication number: 20110161975
    Abstract: A method for efficient dispatch/completion of a work element within a multi-node data processing system. The method comprises: selecting specific processing units from among the processing nodes to complete execution of a work element that has multiple individual work items that may be independently executed by different ones of the processing units; generating an allocated processor unit (APU) bit mask that identifies at least one of the processing units that has been selected; placing the work element in a first entry of a global command queue (GCQ); associating the APU mask with the work element in the GCQ; and, responsive to receipt at the GCQ of work requests from each of the multiple processing nodes or the processing units, enabling only the selected processing nodes or processing units to retrieve work from the work element in the GCQ.
    Type: Application
    Filed: December 30, 2009
    Publication date: June 30, 2011
    Applicant: IBM CORPORATION
    Inventors: Benjamin G. Alexander, Gregory H. Bellows, Joaquin Madruga, Barry L. Minor
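    The APU bit mask gate described above can be sketched in a few lines of C: a work element in the global command queue carries a mask of the processing units selected for it, and a unit's work request succeeds only if its bit is set. The structure layout and field names are hypothetical, not taken from the publication.

      /* Illustrative sketch; structure and mask values are hypothetical. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      typedef struct {
          uint64_t apu_mask;    /* bit i set => processing unit i was selected */
          int      items_left;  /* independently executable work items         */
      } gcq_entry_t;

      static bool may_take_work(const gcq_entry_t *e, int unit_id) {
          return (e->apu_mask >> unit_id) & 1u;
      }

      int main(void) {
          /* Select units 0, 2 and 5 for this work element. */
          gcq_entry_t entry = { .apu_mask = (1u << 0) | (1u << 2) | (1u << 5),
                                .items_left = 100 };

          for (int unit = 0; unit < 8; unit++)
              printf("unit %d: %s\n", unit,
                     may_take_work(&entry, unit) ? "may retrieve work" : "skipped");
          return 0;
      }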
  • Publication number: 20110161976
    Abstract: A method efficiently dispatches/completes a work element within a multi-node, data processing system that has a global command queue (GCQ) and at least one high latency node. The method comprises: at the high latency processor node, work scheduling logic establishing a local command/work queue (LCQ) in which multiple work items for execution by local processing units can be staged prior to execution; a first local processing unit retrieving via a work request a larger chunk size of work than can be completed in a normal work completion/execution cycle by the local processing unit; storing the larger chunk size of work retrieved in a local command/work queue (LCQ); enabling the first local processing unit to locally schedule and complete portions of the work stored within the LCQ; and transmitting a next work request to the GCQ only when all the work within the LCQ has been dispatched by the local processing units.
    Type: Application
    Filed: December 30, 2009
    Publication date: June 30, 2011
    Applicant: IBM CORPORATION
    Inventors: Benjamin G. Alexander, Gregory H. Bellows, Joaquin Madruga, Barry L. Minor
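    A simple C sketch of the local command/work queue (LCQ) staging described above: the high-latency node pulls a large chunk of work items per request to the global command queue, dispatches them locally, and issues the next request only once the LCQ is drained. The chunk size, cycle size, and counters are assumptions for illustration.

      /* Illustrative sketch; all sizes are hypothetical. */
      #include <stdio.h>

      #define GLOBAL_ITEMS 100
      #define LCQ_CHUNK     32   /* larger than one normal execution cycle  */
      #define CYCLE_ITEMS    4   /* items the local units complete per cycle */

      int main(void) {
          int global_remaining = GLOBAL_ITEMS;
          int lcq = 0, requests_to_gcq = 0;

          while (global_remaining > 0 || lcq > 0) {
              if (lcq == 0 && global_remaining > 0) {
                  /* One work request fetches a whole chunk across the slow link. */
                  lcq = global_remaining < LCQ_CHUNK ? global_remaining : LCQ_CHUNK;
                  global_remaining -= lcq;
                  requests_to_gcq++;
              }
              /* Local units schedule and complete work out of the LCQ. */
              lcq -= lcq < CYCLE_ITEMS ? lcq : CYCLE_ITEMS;
          }
          printf("high-latency node issued %d requests instead of %d\n",
                 requests_to_gcq, GLOBAL_ITEMS / CYCLE_ITEMS);
          return 0;
      }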
  • Patent number: 7971029
    Abstract: A multi-core processor having a plurality of processor cores includes a barrier synchronization device for realizing barrier synchronization of at least two processor cores belonging to the same synchronization group. When two or more processor cores in the multi-core processor belong to the same synchronization group, the included barrier synchronization device is used to realize barrier synchronization among them.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: June 28, 2011
    Assignee: Fujitsu Limited
    Inventors: Hideyuki Unno, Masaki Ukai, Matthew Depetro
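    As a software analogy of the barrier synchronization described above (the patent concerns a hardware device inside the multi-core processor), the C sketch below implements a classic sense-reversing barrier with C11 atomics for the cores of one synchronization group. The group size and identifiers are hypothetical.

      /* Software analogy only; not the patented hardware barrier device. */
      #include <pthread.h>
      #include <stdatomic.h>
      #include <stdio.h>

      #define GROUP 4

      static atomic_int waiting = 0;   /* cores that have arrived         */
      static atomic_int sense   = 0;   /* flips each time the group meets */

      static void barrier_wait(int *local_sense) {
          *local_sense = !*local_sense;                     /* my target sense */
          if (atomic_fetch_add(&waiting, 1) == GROUP - 1) { /* last arrival    */
              atomic_store(&waiting, 0);
              atomic_store(&sense, *local_sense);           /* release the rest */
          } else {
              while (atomic_load(&sense) != *local_sense)
                  ;                                         /* spin until released */
          }
      }

      static void *core(void *arg) {
          int id = (int)(long)arg, local_sense = 0;
          for (int phase = 0; phase < 3; phase++) {
              printf("core %d reaches the barrier in phase %d\n", id, phase);
              barrier_wait(&local_sense);                   /* whole group syncs */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t[GROUP];
          for (long i = 0; i < GROUP; i++) pthread_create(&t[i], NULL, core, (void *)i);
          for (int i = 0; i < GROUP; i++) pthread_join(t[i], NULL);
          return 0;
      }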
  • Publication number: 20110154339
    Abstract: Disclosed herein is a system for processing large-capacity data in a distributed parallel processing manner based on MapReduce using a plurality of computing nodes. The distributed parallel processing system is configured to provide an incremental MapReduce-based distributed parallel processing function for large-capacity stream data which is being continuously collected even during the performance of the distributed parallel processing, as well as for large-capacity stored data which has been previously collected.
    Type: Application
    Filed: December 15, 2010
    Publication date: June 23, 2011
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Myung-Cheol LEE, Mi-Young Lee
  • Patent number: 7966454
    Abstract: A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally-executing task of a parallel job. The tasks perform parallel job execution, but map only a portion of the effective addresses (EAs) of the global address space to the local, real memory of the task's respective node. The HFI window tags all outgoing GSM operations (of the local task) with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations without a valid EA-to-RA local mapping.
    Type: Grant
    Filed: February 1, 2008
    Date of Patent: June 21, 2011
    Assignee: International Business Machines Corporation
    Inventors: Lakshimarayana B. Arimilli, Ravi K. Arimilli, Robert S. Blackmore, Chulho Kim, Ramakrishnan Rajamony, William J. Starke, Hanhong Xue
  • Patent number: 7962425
    Abstract: A method and system for responding to an alert pertaining to an event. A unique processor of a first micro grid apparatus of at least one micro grid apparatus detects an alert data packet that includes the alert. Each micro grid apparatus includes at least two processors that contain a unique processor. Each processor of each micro grid apparatus has its own operating system. The unique processor of each micro grid apparatus has a unique operating system. Each unique processor selects at least one processor from each micro grid apparatus. Each selected processor is designated as a macro grid processor of a respective macro grid by altering the operating system of each selected processor. An artificial intelligence is generated for each macro grid. The event is responded to and quenched by implementing the artificial intelligence of each macro grid, after which each macro grid is extinguished.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: June 14, 2011
    Assignee: International Business Machines Corporation
    Inventor: Ian Edward Oakenfull
  • Patent number: 7962720
    Abstract: A distributed processing system includes at least two processing elements (100 and 200) which are mutually connected, and each processing element having at least a processing section, a memory section, and a communication section. A first processing section (102) stores data in a predetermined area of a first memory section (101), or reads data stored in a predetermined area of the first memory section (101). A first communication section (103) of one processing element (100) transmits data read from the first memory section (101) to the other processing element (200), or stores data received from the other processing element (200) to the first memory section (101).
    Type: Grant
    Filed: June 1, 2006
    Date of Patent: June 14, 2011
    Assignee: Olympus Corporation
    Inventors: Arata Shinozaki, Mitsunori Kubo
  • Patent number: 7953684
    Abstract: A system and method that optimizes reduce operations by consolidating the operation into a limited number of participating processes and then distributing the results back to all processes, to optimize large-message global reduce operations on a non-power-of-two number of processes. The method divides a group of processes into subgroups, performs paired exchange and local reduce operations at some of the processes to obtain half vectors of partial reduce results, consolidates partial reduce results into a set of remaining processes, performs successive recursive halving and recursive doubling at the set of remaining processes until each process in the set has a half vector of the complete result, and provides the full complete result at every process.
    Type: Grant
    Filed: January 31, 2007
    Date of Patent: May 31, 2011
    Assignee: International Business Machines Corporation
    Inventor: Bin Jia
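    The power-of-two phase described above can be illustrated by its communication schedule: after paired exchanges fold the extra processes in, recursive halving scatters partial reduce results over partners at decreasing distances, and recursive doubling gathers the complete vector back at increasing distances. The C sketch below prints only the per-step partners; payload handling and the non-power-of-two fold-in are omitted, and the process count is hypothetical.

      /* Schedule illustration only; not the patented algorithm's full implementation. */
      #include <stdio.h>

      #define P 8   /* remaining processes (power of two), hypothetical */

      int main(void) {
          for (int rank = 0; rank < P; rank++) {
              printf("rank %d:\n", rank);
              /* Recursive halving: partners at distance P/2, P/4, ..., 1, each step
                 exchanging and reducing half of the currently held vector. */
              for (int dist = P / 2; dist >= 1; dist /= 2)
                  printf("  halving  step: exchange half-vector with rank %d\n",
                         rank ^ dist);
              /* Recursive doubling: distances 1, 2, ... rebuild the full result. */
              for (int dist = 1; dist < P; dist *= 2)
                  printf("  doubling step: exchange result block with rank %d\n",
                         rank ^ dist);
          }
          return 0;
      }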
  • Patent number: 7953957
    Abstract: Methods, apparatus, and products for distributing parallel algorithms of a parallel application among compute nodes of an operational group in a parallel computer are disclosed that include establishing a hardware profile, the hardware profile describing thermal characteristics of each compute node in the operational group; establishing a hardware independent application profile, the application profile describing thermal characteristics of each parallel algorithm of the parallel application; and mapping, in dependence upon the hardware profile and application profile, each parallel algorithm of the parallel application to a compute node in the operational group.
    Type: Grant
    Filed: February 11, 2008
    Date of Patent: May 31, 2011
    Assignee: International Business Machines Corporation
    Inventors: Thomas M. Gooding, Brant L. Knudson, Cory Lappi, Ruth J. Poole, Andrew T. Tauferner
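    A greedy C sketch of the mapping step described above: combine a hardware profile (per-node thermal headroom) with an application profile (per-algorithm thermal demand) and place the most thermally demanding algorithms on the nodes with the most headroom. The greedy rule and all numbers are assumptions for illustration, not the patented mapping.

      /* Illustrative sketch; profiles and placement rule are hypothetical. */
      #include <stdio.h>

      #define NODES 4

      int main(void) {
          double headroom[NODES] = {0.9, 0.6, 0.8, 0.5};  /* hardware profile       */
          double demand[NODES]   = {0.7, 0.3, 0.5, 0.2};  /* application profile,
                                                             one algorithm per node */
          int used[NODES] = {0};

          /* Place algorithms in order of decreasing thermal demand. */
          for (int placed = 0; placed < NODES; placed++) {
              int a = -1, n = -1;
              for (int i = 0; i < NODES; i++)             /* hottest unplaced algorithm */
                  if (demand[i] >= 0 && (a < 0 || demand[i] > demand[a])) a = i;
              for (int j = 0; j < NODES; j++)             /* coolest free node */
                  if (!used[j] && (n < 0 || headroom[j] > headroom[n])) n = j;
              printf("algorithm %d (demand %.1f) -> node %d (headroom %.1f)\n",
                     a, demand[a], n, headroom[n]);
              demand[a] = -1.0;                           /* mark as placed */
              used[n] = 1;
          }
          return 0;
      }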
  • Patent number: 7944932
    Abstract: A data processing system includes a plurality of processing units each having a respective point-to-point communication link with each of multiple others of the plurality of processing units but fewer than all of the plurality of processing units. Each of the plurality of processing units includes interconnect logic, coupled to each point-to-point communication link of that processing unit, that broadcasts operations received from one of the multiple others of the plurality of processing units to one or more of the plurality of processing units.
    Type: Grant
    Filed: April 1, 2008
    Date of Patent: May 17, 2011
    Assignee: International Business Machines Corporation
    Inventors: Leo J. Clark, James S. Fields, Jr., Guy L. Guthrie, William J. Starke
  • Patent number: 7941637
    Abstract: A system has a first plurality of cores in a first coherency group. Each core transfers data in packets. The cores are directly coupled serially to form a serial path. The data packets are transferred along the serial path. The serial path is coupled at one end to a packet switch. The packet switch is coupled to a memory. The first plurality of cores and the packet switch are on an integrated circuit. The memory may or may not be on the integrated circuit. In another aspect a second plurality of cores in a second coherency group is coupled to the packet switch. The cores of the first and second pluralities may be reconfigured to form or become part of coherency groups different from the first and second coherency groups.
    Type: Grant
    Filed: April 15, 2008
    Date of Patent: May 10, 2011
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Perry H. Pelley, III, George P. Hoekstra, Lucio F. C. Pessoa
  • Patent number: 7941634
    Abstract: Specialized image processing circuitry is usually implemented in hardware in a massively parallel way as a single instruction multiple data (SIMD) architecture. The invention prevents long and complicated connection paths between a processing element and the memory subsystem, and improves maximum operating frequency. An optimized architecture for image processing has processing elements that are arranged in a two-dimensional structure, and each processing element has a local storage containing a plurality of reference pixels that are not neighbors in the reference image. Instead, the reference pixels belong to different blocks of the reference image, which may vary for different encoding schemes.
    Type: Grant
    Filed: November 14, 2007
    Date of Patent: May 10, 2011
    Assignee: Thomson Licensing
    Inventors: Marco Georgi, Klaus Gaedke, Malte Borsum
  • Publication number: 20110107061
    Abstract: A hardware pipeline has a number of rows including a first row, a last row, and an intermediate row between the first row and the last row. Each row stores a number of bytes of data as the data moves through the pipeline on a row-by-row basis from the first row towards the last row. A mechanism performs a first macro on the data beginning at the first row. The mechanism performs a second macro, different from the first macro, on the data beginning at the intermediate row, where the first macro has been completely performed by the time the data reaches the intermediate row. The first and second macros each include a number of modifications of the data as the data moves through the pipeline to effect a complete transformation of the data. The complete transformation of the first macro is different from the complete transformation of the second macro.
    Type: Application
    Filed: October 30, 2009
    Publication date: May 5, 2011
    Inventor: David A. Warren
  • Patent number: 7934035
    Abstract: A system for executing applications designed to run on a single SMP computer on an easily scalable network of computers, while providing each application with computing resources, including processing power, memory and others that exceed the resources available on any single computer. A server agent program, a grid switch apparatus and a grid controller apparatus are included. Methods for creating processes and resources, and for accessing resources transparently across multiple servers are also provided.
    Type: Grant
    Filed: April 24, 2008
    Date of Patent: April 26, 2011
    Assignee: Computer Associates Think, Inc.
    Inventors: Vladimir Miloushev, Peter Nickolov, Becky L. Hester, Borislav S. Marinov
  • Patent number: 7930533
    Abstract: A system for pre-execution environment (PXE) booting a storage processor from a peer storage processor allows for the ability to reboot and/or restart the storage processor without an externally connected PXE server. In response to a reboot request of the storage processor, the peer storage processor pushes an operating system boot image and/or other information to the storage processor for PXE booting the storage processor, and vice versa. The system may also operate with multiple coupled computers.
    Type: Grant
    Filed: September 26, 2007
    Date of Patent: April 19, 2011
    Assignee: EMC Corporation
    Inventors: Ying Guo, Qing Liu, Kevin Richards
  • Publication number: 20110087723
    Abstract: Apparatus, systems, and methods may operate to maintain a repository of stored executable images including a unique instance of an executable image comprising an operating system and at least a portion of one or more applications, and to provide substantially simultaneous executable access to a plurality of virtual or physical machines to execute portions of the executable image without constructing additional instances of the executable image. Additional apparatus, systems, and methods are disclosed.
    Type: Application
    Filed: October 9, 2009
    Publication date: April 14, 2011
    Inventors: Arijit Dutta, Harpreet Singh Walia
  • Patent number: 7924858
    Abstract: A data processing apparatus and method of operation of such a data processing apparatus are disclosed. The data processing apparatus has a main processing unit operable to perform a plurality of data processing tasks, and a data engine for performing a number of those tasks on behalf of the main processing unit. At least one shared resource is allocatable to the data engine by the main processing unit for use by the data engine when performing data processing tasks on behalf of the main processing unit. The data engine comprises a data engine core for performing the tasks, and a data engine subsystem configurable by the main processing unit and arranged to manage communication between the data engine core and an allocated shared resource. The data engine core comprises a resource manager unit for acting as a master device with respect to the data engine subsystem in order to manage use of the allocated shared resource.
    Type: Grant
    Filed: April 13, 2006
    Date of Patent: April 12, 2011
    Assignee: ARM Limited
    Inventors: Martinus Cornelius Wezelenburg, Johan Matterne, Dirk Duerinckx, Sven Wambecq
  • Patent number: 7925900
    Abstract: An apparatus and method provide power to perform functions on a computing device. In one example, the apparatus contains multiple processors that may operate at different power levels to consume different amounts of power. Also, any of the multiple processors may perform different functions. For example, one processor may be a low power processor that may control or operate at least one peripheral device to perform a low capacity function. Control may also switch from the low power processor to a high capacity processor. In one example, the high capacity processor controls the low power processor and further controls the at least one peripheral device through the low power processor.
    Type: Grant
    Filed: January 26, 2007
    Date of Patent: April 12, 2011
    Assignee: Microsoft Corporation
    Inventors: Gregory H. Parks, Erik Michael Geidl, Andrew John Fuller, Troy Scott Jones
  • Patent number: 7920584
    Abstract: A data processing system is provided comprising a main processor operable to perform a plurality of data processing tasks, a data engine having a data engine core operable to perform a number of said plurality of data processing tasks on behalf of said main processor and a data stream processing unit providing a data communication path between said main processing unit and said data engine core. The data stream processing unit has a control interface operable to receive from said data engine core at least one command and a data stream controller operable to receive at least one input data stream and to perform at least one operation on said at least one input data stream to generate at least one output data stream comprising a sequence of data elements. The data stream processing unit is responsive to said at least one command from said data engine core to control said data stream controller to perform said at least one operation.
    Type: Grant
    Filed: April 12, 2006
    Date of Patent: April 5, 2011
    Assignee: ARM Limited
    Inventors: Johan Matterne, Martinus Cornelis Wezelenburg
  • Patent number: 7917729
    Abstract: A System-on-Chip (SoC) component comprising a single independent multiprocessor subsystem core including a plurality of multiple processors, each multiple processor having a local memory associated therewith forming a processor cluster; and a switch fabric means connecting each processor cluster within an SoC integrated circuit (IC). The single SoC independent multiprocessor subsystem core is capable of performing multi-threading operation processing for SoC devices when configured as a DSP, coprocessor, Hybrid ASIC, or network processing arrangements. The switch fabric means additionally interconnects a SoC local system bus device with SoC processor components with the independent multiprocessor subsystem core.
    Type: Grant
    Filed: June 1, 2007
    Date of Patent: March 29, 2011
    Assignee: International Business Machines Corporation
    Inventors: Christos J. Georgiou, Victor L. Gregurick, Valentina Salapura
  • Patent number: 7917728
    Abstract: An integrated circuit having a plurality of processing modules (I, T) is provided. At least one first processing module (I) issues at least one transaction towards at least one second processing module (T). Said integrated circuit further comprises at least one first transaction retraction unit (TRU1) for indicating an allowance to said at least one first processing module (I) to retract said at least one transaction according to the state of said second processing module (T).
    Type: Grant
    Filed: March 15, 2005
    Date of Patent: March 29, 2011
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Andrei Radulescu, Keese Gerard Willem Goossens
  • Patent number: 7908462
    Abstract: The current invention provides a virtual world simulation system capable of hosting a massive number of concurrent players by integrating commodity parallel co-processors into servers. The current invention proposes novel parallel processing algorithms that make use of commodity parallel co-processors, such as a graphics processing unit (GPU) or specialized hardware with a parallel architecture design such as a field-programmable gate array (FPGA), to accelerate virtual world simulation.
    Type: Grant
    Filed: June 9, 2010
    Date of Patent: March 15, 2011
    Assignee: Zillians Incorporated
    Inventor: Mu Chi Sung
  • Publication number: 20110055518
    Abstract: The different advantageous embodiments provide a system for partitioning a data processing system comprising a number of cores and a partitioning process. The partitioning process is configured to assign a number of partitions to the number of cores. Each partition in the number of partitions is assigned to a separate number of cores from the number of cores.
    Type: Application
    Filed: August 27, 2009
    Publication date: March 3, 2011
    Applicant: The Boeing Company
    Inventors: Jonathan N. Hotra, Kenn R. Luecke