Patents Issued on May 6, 2004
-
Publication number: 20040088475
Abstract: A memory device (10) includes an array (12) of memory cells arranged in rows and columns. Preferably, each memory cell includes a pass transistor coupled to a storage capacitor. A row decoder is coupled to rows of memory cells while a column decoder (14) is coupled to columns of the memory cells. The column decoder (14) includes an enable input. A variable delay (32) has an output coupled to the enable input of the column decoder (14). The variable delay (32) receives an indication (R/W′) of whether a current cycle is a read cycle or a write cycle. In the preferred embodiment, a signal provided at the output of the variable delay (32) is delayed if the current cycle is a read cycle compared to if the current cycle is a write cycle.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: Infineon Technologies North America Corp.
Inventors: Harald Streif, Stefan Wuensche, Mike Killian
-
Publication number: 20040088476
Abstract: A method and apparatus using a Content Addressable Memory for sorting a plurality of data items are presented. The data items to be sorted are stored in the Content Addressable Memory. A plurality of bit-by-bit burst searches are performed on the contents of the Content Addressable Memory with all other bits in the search key masked. The number of burst searches is proportional to the total number of bits in the data items to be sorted. The search is deterministic, dependent on the number of bits in each data item on which a sort is performed and on the number of data items to be sorted.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: MOSAID Technologies, Inc.
Inventor: Mourad Abdat
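At a software level, the masked bit-by-bit burst search described above behaves like a binary radix sort: each pass inspects one bit position of every stored item while all other bits are masked off. The sketch below is a hedged software analogy of that idea, not a model of the CAM hardware itself; the function name and the LSB-first pass order are illustrative assumptions.

```python
def cam_style_radix_sort(items, num_bits):
    """Sort unsigned integers with one masked 'burst search' per bit.

    Each pass partitions the items by a single bit while every other bit is
    ignored (masked), mimicking a CAM search with all other search-key bits
    masked off. LSB-first passes keep the sort stable, so after num_bits
    passes (proportional to the item width) the list is fully ordered.
    """
    for bit in range(num_bits):
        mask = 1 << bit
        zeros = [x for x in items if not (x & mask)]  # "search" for bit == 0
        ones = [x for x in items if x & mask]         # "search" for bit == 1
        items = zeros + ones
    return items

print(cam_style_radix_sort([13, 2, 7, 10, 2], num_bits=4))  # [2, 2, 7, 10, 13]
```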
-
Publication number: 20040088477
Abstract: A storage system that may include one or more memory devices, a memory interface device corresponding to one or more of the memory devices, which are organized in sections, and a section controller. In this system, a data request for the data may be received over a communications path by a section controller. The section controller determines the addresses in the memory devices storing the requested data, transfers these addresses to those memory devices storing the requested data, and transfers an identifier to the memory interface device. The memory device, in response, reads the data and transfers the data to its corresponding memory interface device. The memory interface device then adds to the data the identifier it received from the section controller and forwards the requested bits towards their destination, such that the data need not pass through the section controller.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventors: Melvin James Bullen, Steven Louis Dodd, William Thomas Lynch, David James Herbison
-
Publication number: 20040088478
Abstract: In accordance with one aspect of the present invention, a seek profile table used by a disk controller contains multiple profiles for seek operations, and is accessed by a separate index table containing, for each permutation of key parameters, an index to a corresponding profile. In operation, the estimated seek time for an enqueued data access operation is obtained by accessing the applicable index table entry, using the value of the index entry to determine the corresponding profile, and using the profile to estimate the access time. Preferably, a “time-based relocation expected access time” algorithm is used, in which a nominal seek time is established, and profile table entries express a probability that an operation with a given latency above the nominal seek time will complete within the latency period. The expected access time is the latency plus the product of this probability and the time cost of a miss, i.e., the time of a single disk revolution.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: International Business Machines Corporation
Inventor: David Robison Hall
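The expected-access-time computation reduces to a small formula: the rotational latency plus a probability-weighted cost of one extra disk revolution. A minimal sketch of that arithmetic follows; the profile-table lookup is replaced by a plain dictionary, all timing values are illustrative assumptions, and the probability term is interpreted here as the chance of missing (one minus the tabulated completion probability), which is itself an assumption about the abstract's wording.

```python
# Probability that an operation with the given extra latency (ms above the
# nominal seek time) completes within the latency period -- a stand-in for
# one row of the profile table.
PROFILE = {0: 0.55, 2: 0.80, 4: 0.93, 6: 0.99}

REVOLUTION_MS = 6.0  # assumed time cost of a miss: one full disk revolution


def expected_access_time(latency_ms, extra_latency_key):
    """Expected access time = latency + P(miss) * one revolution."""
    p_complete = PROFILE[extra_latency_key]
    return latency_ms + (1.0 - p_complete) * REVOLUTION_MS


# The scheduler would pick the enqueued operation with the lowest estimate.
print(expected_access_time(3.0, 2))  # 3.0 + 0.2 * 6.0 = 4.2 ms
```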
-
Publication number: 20040088479
Abstract: Write operations less than full block size (short block writes) are internally accumulated while being written to disk in a temporary cache location. Once written to the cache location, the disk drive signals the host that the write operation has completed. Accumulation of short block writes in the drive is transparent to the host and does not present an exposure of data loss. The accumulation of a significant number of short block write operations in the queue makes it possible to perform read/modify/write operations with a greater efficiency. In operation, the drive preferably cycles between operation in the cache location and the larger data block area to achieve efficient use of the cache and efficient selection of data access operations. In one embodiment, a portion of the disk surface is formatted at a smaller block size for use by legacy software.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: International Business Machines Corporation
Inventor: David Robison Hall
-
Publication number: 20040088480
Abstract: A method for determining a speculative data acquisition in conjunction with an execution of a first access command relative to an execution of a second access command through execution of the read look ahead routine is disclosed.
Type: Application
Filed: June 23, 2003
Publication date: May 6, 2004
Applicant: Seagate Technology LLC
Inventors: Travis D. Fox, Edwin S. Olds, Mark A. Gaertner, Abbas Ali
-
Publication number: 20040088481
Abstract: A disk cache may include a volatile memory such as a dynamic random access memory and a nonvolatile memory such as a polymer memory. When a cache line needs to be allocated on a write, the polymer memory may be allocated and when a cache line needs to be allocated on a read, the volatile memory may be allocated.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Inventor: John I. Garney
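The allocation policy here is a one-line decision: write-allocated lines go to the nonvolatile (polymer) portion of the cache, read-allocated lines to the volatile DRAM portion. Below is a minimal sketch of that policy with both memories modeled as plain dictionaries; the class and method names are illustrative assumptions.

```python
class HybridDiskCache:
    """Toy model of a disk cache split across volatile and nonvolatile parts."""

    def __init__(self):
        self.volatile = {}     # stands in for DRAM cache lines
        self.nonvolatile = {}  # stands in for polymer-memory cache lines

    def allocate(self, line_addr, data, is_write):
        # Writes are allocated in the nonvolatile memory so dirty data can
        # survive power loss; read fills go to the volatile memory.
        target = self.nonvolatile if is_write else self.volatile
        target[line_addr] = data

    def lookup(self, line_addr):
        if line_addr in self.nonvolatile:
            return self.nonvolatile[line_addr]
        return self.volatile.get(line_addr)


cache = HybridDiskCache()
cache.allocate(0x40, b"written data", is_write=True)
cache.allocate(0x80, b"read data", is_write=False)
print(cache.lookup(0x40), cache.lookup(0x80))
```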
-
Publication number: 20040088482
Abstract: Data storage systems are provided. One such data storage system includes a first data storage carrier that incorporates multiple disk drives mounted adjacent to each other. Other systems also are provided.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Inventors: Herbert J. Tanzer, Patrick S. McGoey
-
Publication number: 20040088483
Abstract: A method for providing online RAID migration without non-volatile memory employs reconstruction of the RAID drives. In this manner, the method of the present invention protects online migration of data from power failure with little or no performance loss so that data can be recovered if power fails while migration is in progress, and migration may be resumed without the use of non-volatile memory.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Inventors: Paresh Chatterjee, Parag Maharana, Sumanesh Samanta
-
Publication number: 20040088484
Abstract: A primary controller operates to transmit write data and a write time to a secondary controller in the earlier sequence of the write times after reporting a completion of a request for write to a processing unit. The secondary controller stores the write data and the write time transmitted from the primary controller in the cache memory. At a time, the secondary controller stores the write data in a disk unit in the earlier sequence of the write time. These operations make it possible to guarantee all the write data on or before the reference time.
Type: Application
Filed: July 9, 2003
Publication date: May 6, 2004
Inventors: Akira Yamamoto, Katsunori Nakamura, Shigeru Kishiro
-
Publication number: 20040088485
Abstract: A computer implemented cache memory for a RAID-5 configured disk storage system to achieve a significant enhancement of the data access and write speed of the RAID disk. A memory cache is provided between the RAID-5 controller and the RAID-5 disks to speed up RAID-5 system volume accesses. It utilizes the time and spatial locality property of parity blocks. The memory cache is central in its physical architecture for easy management, better utilization, and easy application to a generalized computer system. The cache blocks are indexed by their physical disk identifier to improve the cache hit ratio and cache utilization.
Type: Application
Filed: October 17, 2003
Publication date: May 6, 2004
Inventor: Rung-Ji Shang
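The key design point is that cache blocks are indexed by their physical disk identifier (plus block number) rather than by a logical volume address, so parity blocks belonging to the same stripe can be found again on later writes. The sketch below is a hedged illustration of such an index; the class, the LRU policy, and the key layout are assumptions, not the patent's implementation.

```python
from collections import OrderedDict


class ParityBlockCache:
    """LRU cache of RAID-5 blocks keyed by (physical disk id, block number)."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def put(self, disk_id, block_no, data):
        key = (disk_id, block_no)            # physical identifier, not logical LBA
        self.blocks[key] = data
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block

    def get(self, disk_id, block_no):
        key = (disk_id, block_no)
        if key in self.blocks:
            self.blocks.move_to_end(key)     # refresh recency on a hit
            return self.blocks[key]
        return None                          # miss: caller reads from the disk
```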
-
Publication number: 20040088486
Abstract: A technique for resynchronizing a memory system. More specifically, a technique for resynchronizing a plurality of memory segments in a redundant memory system after a hot-plug event. After a memory cartridge is hot-plugged into a system, the memory cartridge is synchronized with the operational memory cartridges such that the memory system can operate in lock step. A refresh counter in each memory cartridge is disabled to generate a first refresh request to the corresponding memory segments in the memory cartridge. After waiting a period of time to ensure that, regardless of what state each memory cartridge is in when the first refresh request is initiated, all cycles have been completely executed, each refresh counter is re-enabled, thereby generating a second refresh request. The generation of the second refresh request to each of the memory segments provides synchronous operation of each of the memory cartridges.
Type: Application
Filed: October 21, 2003
Publication date: May 6, 2004
Inventors: Gary J. Piccirillo, Jerome J. Johnson, John E. Larson
-
Publication number: 20040088487
Abstract: A chip-multiprocessing system with scalable architecture, including on a single chip: a plurality of processor cores; a two-level cache hierarchy; an intra-chip switch; one or more memory controllers; a cache coherence protocol; one or more coherence protocol engines; and an interconnect subsystem. The two-level cache hierarchy includes first level and second level caches. In particular, the first level caches include a pair of instruction and data caches for, and private to, each processor core. The second level cache has a relaxed inclusion property, the second-level cache being logically shared by the plurality of processor cores. Each of the plurality of processor cores is capable of executing an instruction set of the ALPHA™ processing core. The scalable architecture of the chip-multiprocessing system is targeted at parallel commercial workloads.
Type: Application
Filed: October 24, 2003
Publication date: May 6, 2004
Inventors: Luiz Andre Barroso, Kourosh Gharachorloo, Andreas Nowatzyk
-
Publication number: 20040088488
Abstract: A multi-threaded embedded processor that includes an on-chip deterministic (e.g., scratch or locked cache) memory that persistently stores all instructions associated with one or more pre-selected high-use threads. The processor executes general (non-selected) threads by reading instructions from an inexpensive external memory, e.g., by way of an on-chip standard cache memory, or using other potentially slow, non-deterministic operation such as direct execution from that external memory that can cause the processor to stall while waiting for instructions to arrive. When a cache miss or other blocking event occurs during execution of a general thread, the processor switches to the pre-selected thread, whose execution with zero or minimal delay is guaranteed by the deterministic memory, thereby utilizing otherwise wasted processor cycles until the blocking event is complete.
Type: Application
Filed: May 7, 2003
Publication date: May 6, 2004
Applicant: Infineon Technologies North America Corp.
Inventors: Robert E. Ober, Roger D. Arnold, Daniel Martin, Erik K. Norden
-
Publication number: 20040088489
Abstract: A multi-port instruction/data integrated cache which is provided between a parallel processor and a main memory and stores therein a part of instructions and data stored in the main memory has a plurality of banks, and a plurality of ports including an instruction port unit consisting of at least one instruction port used to access an instruction from the parallel processor and a data port unit consisting of at least one data port used to access data from the parallel processor. Further, a data width which can be specified to the bank from the instruction port is set larger than a data width which can be specified to the bank from the data port.
Type: Application
Filed: October 15, 2003
Publication date: May 6, 2004
Applicant: Semiconductor Technology Academic Research Center
Inventors: Tetsuo Hironaka, Hans Jurgen Mattausch, Tetsushi Koide, Tai Hirakawa, Koh Johguchi
-
Publication number: 20040088490
Abstract: A super predictive fetch system and method provides the benefits of a larger word line fill prefetch operation without the penalty normally associated with the larger line fill prefetch operation. Sequential memory access patterns are identified and caused to trigger a fetch of a sequential next line of data. The super predictive fetch operation includes a buffer into which the sequential next line of data is loaded. In one embodiment, the buffer is located in the memory controller. In another embodiment, the buffer is located in the cache controllers.
Type: Application
Filed: November 6, 2002
Publication date: May 6, 2004
Inventor: Subir Ghosh
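The mechanism amounts to watching the address stream and, when consecutive line addresses arrive in order, speculatively loading the next sequential line into a small buffer so that a later demand fetch hits there instead of going to memory. The sketch below is an illustrative software model of that trigger; the detection rule (two back-to-back sequential lines) and the buffer organization are assumptions.

```python
class SequentialPrefetcher:
    """Detects sequential line accesses and prefetches the next line."""

    def __init__(self, line_size, fetch_line):
        self.line_size = line_size
        self.fetch_line = fetch_line   # callback that reads a line from memory
        self.last_line = None
        self.buffer = {}               # prefetched lines, keyed by line address

    def access(self, addr):
        line = addr - (addr % self.line_size)
        if line in self.buffer:                    # demand hit in the prefetch buffer
            data = self.buffer.pop(line)
        else:
            data = self.fetch_line(line)
        if self.last_line is not None and line == self.last_line + self.line_size:
            nxt = line + self.line_size            # sequential pattern detected:
            self.buffer[nxt] = self.fetch_line(nxt)  # prefetch the next line
        self.last_line = line
        return data
```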
-
Publication number: 20040088491
Abstract: A microprocessor is configured to continue execution in a special Speculative Prefetching After Data Cache Miss (SPAM) mode after a data cache miss is encountered. The microprocessor includes additional registers and a program counter, and optionally additional cache memory for use during the special SPAM mode. By continuing execution during the SPAM mode, multiple outstanding and overlapping cache fill requests may be issued, thus improving performance of the microprocessor.
Type: Application
Filed: October 24, 2003
Publication date: May 6, 2004
Inventors: Norman Paul Jouppi, Keith Istvan Farkas
-
Publication number: 20040088492
Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. Mechanisms for reducing the number of transactions in a multiple cluster system are provided. In one example, probe filter information is used to limit the number of probe requests transmitted to request and remote clusters.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc., a Delaware Corporation
Inventor: David B. Glasco
-
Publication number: 20040088493
Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. Mechanisms for reducing the number of transactions in a multiple cluster system are provided. In one example, memory controller filter information is used to probe a request or remote cluster while bypassing a home cluster memory controller.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc., a Delaware Corporation
Inventor: David B. Glasco
-
Publication number: 20040088494
Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
Type: Application
Filed: November 5, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc., a Delaware corporation
Inventors: David B. Glasco, Rajesh Kota, Sridhar K. Valluru
-
Publication number: 20040088495
Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
Type: Application
Filed: November 5, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc., a Delaware corporation
Inventors: David B. Glasco, Rajesh Kota, Sridhar K. Valluru
-
Publication number: 20040088496
Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
Type: Application
Filed: November 5, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc., a Delaware corporation
Inventors: David Brian Glasco, Rajesh Kota, Sridhar K. Valluru
-
Publication number: 20040088497
Abstract: Methods and apparatus for exchanging cyclic redundancy check encoded (CRC-encoded) data are presented. An exemplary arrangement includes at least two blocks connected by an address bus and a data bus on which data is exchanged between the blocks. A snoop block, connected to the address and data buses, is configured to receive an address from the data bus. The snoop block includes address masking circuitry configured to mask off the address receivable from the data bus to generate at least one snoop address. A CRC block, connected to the data bus and to the snoop block, is configured to generate a CRC code from the data when a data address, carried on the address bus, matches the at least one snoop address.
Type: Application
Filed: November 6, 2002
Publication date: May 6, 2004
Inventors: Russell C. Deans, Troy S. Dahlmann
-
Publication number: 20040088498
Abstract: A system and method for freeing memory from individual pools of memory in response to a threshold being reached that corresponds with the individual memory pools is provided. The collective memory pools form a system wide memory pool that is accessible from multiple processors. When a threshold is reached for an individual memory pool, a page stealer method is performed to free memory from the corresponding memory pool. Remote memory is used to store data if the page stealer is unable to free pages fast enough to accommodate the application's data needs. Memory subsequently freed from the local memory area is once again used to satisfy the memory needs for the application. In one embodiment, memory affinity can be set on an individual application basis so that affinity is maintained between the memory pools local to the processors running the application.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: International Business Machines Corporation
Inventors: Jos Manuel Accapadi, Mathew Accapadi, Andrew Dunshea, Dirk Michel
-
Publication number: 20040088499
Abstract: A distributed computer system is disclosed that allows shared memory resources to be synchronized so that accurate and uncorrupted memory contents are shared by the computer systems within the distributed computer system. The distributed computer system includes a plurality of devices, at least one memory resource shared by the plurality of devices, and a memory controller, coupled to the plurality of devices and to the shared memory resources. The memory controller synchronizes the access of shared data stored within the memory resources by the plurality of devices and overrides synchronization among the plurality of devices upon notice that a prior synchronization event has occurred or the memory resource is not to be shared by other devices.
Type: Application
Filed: October 21, 2003
Publication date: May 6, 2004
Inventor: James Alan Woodward
-
Publication number: 20040088500
Abstract: One aspect of the invention is a method for automatically readying a medium. The method comprises monitoring a state of a storage medium using readying logic and determining a physical media type for the storage medium using the readying logic. The method also comprises determining a recorded type for the storage medium using the readying logic and selecting, using the readying logic, a file system type for the storage medium if the storage medium is formatted using a file system. The method further comprises automatically readying the storage medium in response to the determinations and selection.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventor: Allen J. Piepho
-
Publication number: 20040088501
Abstract: A method and apparatus are provided for repacking of memory data. For at least one embodiment, data for a plurality of store instructions in a source code program is loaded from memory into the appropriate sub-location of a proxy storage location. The packed data is then written with a single instruction from the proxy storage location into contiguous memory locations.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Inventors: Jean-Francois C. Collard, Kalyan Muthukumar
-
Publication number: 20040088502
Abstract: An integrated multilevel nonvolatile flash memory device has a memory array of a plurality of memory units arranged in a plurality of rows and columns. Each of the memory units has a plurality of memory cells with each memory cell for storing a multibit state. Each of the memory units stores encoded user data and overhead data. The partitioning of the encoded user data and the overhead data stored in a single memory unit may be done virtually. The result is a compact memory unit without the need for an index to overhead data for its associated user data.
Type: Application
Filed: November 1, 2002
Publication date: May 6, 2004
Inventor: Jack E. Frayer
-
Publication number: 20040088503
Abstract: An information processing method includes the steps of: obtaining, among arithmetic instructions, information on data sets referred to by memory reference, and assigning to different banks a plurality of data sets simultaneously referred to by memory reference performed in accordance with an arithmetic instruction. This allows bank assignment to be performed automatically without causing memory bank conflict.
Type: Application
Filed: October 20, 2003
Publication date: May 6, 2004
Applicant: MATSUSHITA ELECTRIC CO., LTD.
Inventors: Ryoko Miyachi, Wataru Hashiguchi
-
Publication number: 20040088504
Abstract: A data storage system and method for reorganizing data to improve the effectiveness of data prefetching and reduce the data seek distance. A data reorganization region is allocated in which data is reorganized to service future requests for data. Sequences of data units that have been repeatedly requested are determined from a request stream, preferably using a graph where each vertex of the graph represents a requested data unit and each edge represents that a destination unit is requested shortly after a source unit and the frequency of this occurrence. The most frequently requested data units are also determined from the request stream. The determined data is copied into the reorganization region and reorganized according to the determined sequences and most frequently requested units. The reorganized data might then be used to service future requests for data.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventors: Windsor Wee Sun Hsu, Honesty Cheng Young
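The request-stream analysis can be pictured as building a directed graph whose vertices are data units and whose edge weights count how often one unit is requested shortly after another; heavily weighted edges and the hottest vertices are then copied into the reorganization region. Below is a small, hedged sketch of the graph-building and selection steps; the window size, thresholds, and function names are assumptions.

```python
from collections import Counter, defaultdict


def build_access_graph(request_stream, window=1):
    """Count unit frequencies and 'dst follows src within window' edge weights."""
    freq = Counter(request_stream)
    edges = defaultdict(int)
    for i, src in enumerate(request_stream):
        for dst in request_stream[i + 1:i + 1 + window]:
            edges[(src, dst)] += 1
    return freq, edges


def pick_for_reorganization(freq, edges, top_units=4, min_edge=2):
    """Choose the hottest units and the repeatedly observed sequences."""
    hot_units = [u for u, _ in freq.most_common(top_units)]
    hot_sequences = [pair for pair, weight in edges.items() if weight >= min_edge]
    return hot_units, hot_sequences


freq, edges = build_access_graph(list("ABCABCABD"))
print(pick_for_reorganization(freq, edges))
```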
-
Publication number: 20040088505
Abstract: A method and apparatus are provided for enhancing the performance of storage systems. In the making of an initial copy to a secondary subsystem, or in the initial storage of data onto a primary storage subsystem, null data is skipped. The data may be skipped by sending the non-null data in sequence so missing addresses are identified as being null data, or a skip message may be used to designate regions where null data is to be present.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: Hitachi, Ltd.
Inventor: Naoki Watanabe
-
Publication number: 20040088506
Abstract: A data archiving controller automatically determines whether a main storage device has a usage ratio in excess of a maximum limit and if an archiving or backing storage device has sufficient directory space to accept files from the main storage devices. The data archiving controller then determines using fuzzy logic the number of files to be transferred from the main storage devices to the backing storage devices. The data archiving controller has a set allocating apparatus in communication with the main storage device and the backing storage devices to receive retention device usage parameters for classification within classification sets. A membership rules retaining device contains the classification parameter defining rules by which the retention device usage parameters are assigned to the classification sets. The archiving controller has a rule evaluation apparatus for determining a quantity of data to be archived.
Type: Application
Filed: November 1, 2002
Publication date: May 6, 2004
Applicant: Taiwan Semiconductor Manufacturing Company
Inventor: Nan-Jung Chen
-
Publication number: 20040088507
Abstract: In a computer system that includes a first computer, a second computer, a first storage apparatus storing data in a fixed-length block format used by the second computer, and a backup apparatus connected to the first computer and storing data in a variable-length block format, the present invention provides a backup method for backing up data stored in the first storage apparatus to the backup apparatus. The first computer sends the second computer a request to read data in the fixed-length block format. In response to this request, the second computer reads the fixed-length block format data from the first storage apparatus and transfers this data to the first computer. The first computer converts the transferred fixed-length block format data into variable-length block format data. The converted variable-length block format data is stored in the backup apparatus.
Type: Application
Filed: July 17, 2003
Publication date: May 6, 2004
Applicant: Hitachi, Ltd.
Inventors: Ai Satoyama, Akira Yamamoto, Takashi Oeda, Yasutomo Yamamoto, Masaya Watanabe
-
Publication number: 20040088508
Abstract: A system for backing up data includes a data-directing device configured to receive data to be backed up, a first backup storage device that is communicatively coupled to the data-directing device and that is configured to store the received data, a data-caching device that is coupled to the data-directing device and that is configured to store the received data, and a switch that is configured to communicatively couple the data-directing device to a second backup storage device responsive to a backup operation failure, wherein data stored in the data-caching device is transferred to the second backup storage device via the data-directing device responsive to the backup operation failure.
Type: Application
Filed: September 8, 2003
Publication date: May 6, 2004
Inventors: Curtis C. Ballard, William Wesley Torrey, Michael J. Chaloner
-
Microprocessor circuit for data carriers and method for organizing access to data stored in a memory
Publication number: 20040088509
Abstract: A microprocessor circuit for organizing access to data or programs stored in a memory has a microprocessor, a memory for storing an operating system, and a memory for storing individual external programs. A plurality of memory areas with respective address spaces is provided in the memory for storing the external programs. Each address space is assigned an identifier. The identifier assigned to a memory area is loaded into a first auxiliary register prior to the addressing of the memory area and the identifier of the addressed memory area is loaded into a second auxiliary register. A comparison of the contents of the first and second auxiliary registers is performed. Furthermore, each address space of a memory area is assigned at least one bit sequence defining access rights, whereby code instructions and sensitive data can be protected against write accesses from other external programs.
Type: Application
Filed: August 6, 2003
Publication date: May 6, 2004
Inventors: Franz-Josef Brucklmayr, Hans Friedinger, Holger Sedlak, Christian May
-
Publication number: 20040088510
Abstract: A log region (1415A) and a license region (1415B) are arranged in a memory of a memory card. The license region (1415B) stores licenses such as license IDs and license keys Kc as well as validity flags corresponding to entry numbers 0 to (N−1). The log region (1415A) includes a receive log (70) and a send log (80). The memory card serving as a sender of the license accepts a receive state from the memory card on a receiver side, and validates the validity flag of a region designated by the entry number in the send log (80) when the receive state is ON. Consequently, even when communication is interrupted during shifting or copying of the license, the license to be shifted or copied can be restored.
Type: Application
Filed: September 15, 2003
Publication date: May 6, 2004
Inventor: Yoshihiro Hori
-
Publication number: 20040088511
Abstract: A system is described for controlling access to non-volatile memory. The system can include logic configured to determine whether to delay access to the non-volatile memory.
Type: Application
Filed: October 30, 2002
Publication date: May 6, 2004
Inventors: Kinney C. Bacon, Lee R. Johnson
-
Publication number: 20040088512
Abstract: A double data rate memory controller is provided with a plurality of data and strobe pads, means for receiving data and strobe signals via said pads at 1× double data rate memory speed, and means for receiving data and strobe signals via said pads at M× double data rate memory speed (M≧2).
Type: Application
Filed: October 28, 2003
Publication date: May 6, 2004
Inventors: Eric M. Rentschler, Jeffrey G. Hargis, Leith L. Johnson
-
Publication number: 20040088513
Abstract: A computing system includes a processor having an operating system executing thereon, a storage system having one or more storage media, and a controller coupled between the processor and the storage system. The controller maintains partition data defining one or more partitions for the storage media in response to commands received from the operating system, and controls access to the storage media in accordance with the partition data. The controller selects a subset of the partitions as active partitions, and communicates to the operating system a portion of the partition data that defines the active partitions. The controller may, for example, select the subset based on a current authenticated user. The controller intercepts storage access requests from the processor, and rejects storage access requests that are not directed to the active partitions.
Type: Application
Filed: October 30, 2002
Publication date: May 6, 2004
Inventors: David W. Biessener, Kevin J. Tacheny, Gaston R. Biessener
-
Publication number: 20040088514
Abstract: A storage system that may include one or more memory devices, a memory interface device corresponding to one or more of the memory devices, which are organized in sections, a section controller, and a switch. The switch is capable of reading a data request including a data block identifier and routing the data request and any associated data through the switch on the basis of this data block identifier, such that a data request may be routed to a memory section. The section controller, in response, determines the addresses in the memory devices storing the requested data, and it transfers these addresses to those memory devices storing the requested data.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventors: Melvin James Bullen, Steven Louis Dodd, William Thomas Lynch, David James Herbison
-
Publication number: 20040088515
Abstract: A method of storing data includes the steps of: identifying respective lifetimes of each member of an indexed collection of data elements, each of the data elements referenceable in a data index space representing a set of valid data element indices; identifying a set of pairs of the data elements having overlapping lifetimes; and generating a mapping from the data index space to an address offset space based on the set of pairs of the data elements having the overlapping lifetimes.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventors: Robert S. Schreiber, Alain Darte
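The mapping step is essentially storage reuse: two data elements whose lifetimes never overlap may share the same address offset, so the problem becomes a graph-coloring-style assignment over the pairs that do overlap. The sketch below is a hedged greedy version of that idea; the lifetime representation (first-use, last-use index pairs) and the greedy ordering are assumptions, not the method claimed in the application.

```python
def overlapping_pairs(lifetimes):
    """lifetimes: dict element -> (first_use, last_use). Return conflicting pairs."""
    elems = list(lifetimes)
    pairs = set()
    for i, a in enumerate(elems):
        for b in elems[i + 1:]:
            a0, a1 = lifetimes[a]
            b0, b1 = lifetimes[b]
            if a0 <= b1 and b0 <= a1:          # the two lifetime intervals intersect
                pairs.add((a, b))
    return pairs


def assign_offsets(lifetimes, pairs):
    """Greedy colouring: reuse the lowest offset not taken by an overlapping element."""
    conflicts = {e: set() for e in lifetimes}
    for a, b in pairs:
        conflicts[a].add(b)
        conflicts[b].add(a)
    offsets = {}
    for e in sorted(lifetimes, key=lambda x: lifetimes[x][0]):
        taken = {offsets[n] for n in conflicts[e] if n in offsets}
        off = 0
        while off in taken:
            off += 1
        offsets[e] = off
    return offsets


lt = {"x": (0, 4), "y": (5, 9), "z": (3, 7)}
print(assign_offsets(lt, overlapping_pairs(lt)))  # x and y can share offset 0
```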
-
Publication number: 20040088516
Abstract: A method and system for virtual memory translation of data represented in a multidimensional coordinate system when the physical memory may be located in more than one physical memory location. The translation of one or more virtual addresses into one or more accesses to one or more physical memories is achieved by representing each address of each element of a memory of the one or more physical memories as a point in a Cartesian coordinate system wherein consecutive points in the Cartesian coordinate system represent virtual memory addresses corresponding to elements from different physical memories of the one or more physical memories. Points in the Cartesian coordinate system are translated into one or more corresponding physical memory addresses, and read or write operations may be performed relative to these physical memory addresses. Multiple read or write operations may be performed during a single clock cycle through the use of parallel accesses of the one or more physical memories.
Type: Application
Filed: October 30, 2002
Publication date: May 6, 2004
Inventors: Malcolm Ryle Dwyer, Nikolaos Bellas
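The translation maps consecutive virtual (Cartesian) points to different physical memories so that neighbouring elements can be accessed in parallel. The sketch below shows one such mapping for a 2-D coordinate with a round-robin interleave across memories; the row-major linearisation and the interleave rule are illustrative assumptions rather than the application's specific scheme.

```python
def translate(x, y, width, num_memories):
    """Map a 2-D virtual coordinate to (memory index, address within memory).

    Consecutive linear addresses land in different physical memories, so a
    row of neighbouring elements can be read or written in parallel.
    """
    linear = y * width + x                  # row-major virtual address
    memory = linear % num_memories          # which physical memory holds the element
    address = linear // num_memories        # offset inside that memory
    return memory, address


# Four neighbouring pixels of a 16-wide image spread across 4 memories:
for x in range(4):
    print((x, 0), "->", translate(x, 0, width=16, num_memories=4))
```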
-
Publication number: 20040088517
Abstract: A method includes storing a plurality of system status messages of a specified size, and transmitting the status messages as a combined status message of a size larger than said specified size to an external device. In one aspect, the system status messages may have sizes that are less than the width of a bus, and said transmitting the combined status message includes transmitting the combined status message having a width equal to a width of the bus.
Type: Application
Filed: November 4, 2002
Publication date: May 6, 2004
Inventor: Thomas V. Spencer
-
Publication number: 20040088518
Abstract: A memory access system is described which generates two memory addresses from a single memory access instruction which identifies a register holding at least two packed objects. In the preferred embodiment, the contents of a base register are combined respectively with each of two or more packed objects in an offset register.
Type: Application
Filed: October 29, 2003
Publication date: May 6, 2004
Applicant: Broadcom Corporation
Inventor: Sophie Wilson
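The address generation can be paraphrased as: split the offset register into its packed fields and add each field to the base register, producing two effective addresses from one instruction. A tiny hedged sketch of that arithmetic follows; the 16-bit field packing and the function name are assumptions.

```python
def dual_addresses(base, packed_offsets, field_bits=16):
    """Combine a base register with two offsets packed into one register."""
    mask = (1 << field_bits) - 1
    low = packed_offsets & mask                    # first packed object
    high = (packed_offsets >> field_bits) & mask   # second packed object
    return base + low, base + high


# Base 0x1000 with offsets 0x0010 and 0x0020 packed into one 32-bit value:
print(dual_addresses(0x1000, (0x0020 << 16) | 0x0010))  # (0x1010, 0x1020) in decimal
```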
-
Publication number: 20040088519
Abstract: A hyperprocessor includes a control processor controlling tasks executed by a plurality of processor cores, each of which may include multiple execution units, or special hardware units. The control processor schedules tasks according to control threads for the tasks created during compilation and comprising a hardware context including register files, a program counter and status bits for the respective task. The tasks are dispatched to the processor cores or special hardware units for parallel, sequential, out-of-order or speculative execution. A universal register file contains data to be operated on by the task, and an interconnect couples at least the processor cores or special hardware units to each other and to the universal register file, allowing each node to communicate with any other node.
Type: Application
Filed: October 30, 2002
Publication date: May 6, 2004
Applicant: STMicroelectronics, Inc.
Inventor: Faraydon O. Karim
-
Publication number: 20040088520
Abstract: An embodiment of the present invention includes a pipeline comprising a plurality of stages and a pipeline timing controller controlling a plurality of predetermined delays, wherein, when one of the predetermined delays has expired, the pipeline timing controller sends a control signal to initiate at least one process within associated ones of the plurality of stages.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Inventors: Shail Aditya Gupta, Mukund Sivaraman
-
Publication number: 20040088521
Abstract: A vector processing system for executing vector instructions, each instruction defining multiple value pairs, an operation to be executed and a modifier, the vector processing system comprising a plurality of parallel processing units, each arranged to receive one of said pairs of values and, when selected, to implement an operation on said value pair to generate a result, each processing unit comprising at least one flag and being selectable in dependence on a condition defined by said at least one flag, wherein the modifier defines the condition under which the parallel processing unit is individually selected.
Type: Application
Filed: October 31, 2002
Publication date: May 6, 2004
Applicant: ALPHAMOSAIC LIMITED
Inventors: Stephen Barlow, Neil Bailey, Timothy Ramsdale, David Plowman, Robert Swann
-
Publication number: 20040088522
Abstract: A multi-processor computer system is described in which transaction processing in each cluster of processors is distributed among multiple protocol engines. Each cluster includes a plurality of local nodes and an interconnection controller interconnected by a local point-to-point architecture. The interconnection controller in each cluster comprises a plurality of protocol engines for processing transactions. Transactions are distributed among the protocol engines using destination information associated with the transactions.
Type: Application
Filed: November 5, 2002
Publication date: May 6, 2004
Applicant: Newisys, Inc.
Inventors: Charles Edward Watson, Rajesh Kota, David Brian Glasco
-
Publication number: 20040088523
Abstract: A multi-processor computer system permits various types of partitions to be implemented to contain and isolate hardware failures. The various types of partitions include hard, semi-hard, firm, and soft partitions. Each partition can include one or more processors. Upon detecting a failure associated with a processor, the connection to adjacent processors in the system can be severed, thereby precluding corrupted data from contaminating the rest of the system. If an inter-processor connection is severed, message traffic in the system can become congested as messages become backed up in other processors. Accordingly, each processor includes various timers to monitor for traffic congestion that may be due to a severed connection. Rather than letting the processor continue to wait to be able to transmit its messages, the timers will expire at preprogrammed time periods and the processor will take appropriate action, such as simply dropping queued messages, to keep the system from locking up.
Type: Application
Filed: October 23, 2003
Publication date: May 6, 2004
Inventors: Richard E. Kessler, Peter J. Bannon, Kourosh Gharachorloo, Thukalan V. Verghese
-
Publication number: 20040088524
Abstract: A system includes a first processor coupled to a second processor. The first and second processors are coupled to memory. The first processor fetches and executes supported instructions until an unsupported instruction is detected. The second processor executes the unsupported instruction. If there are less than a threshold number of consecutive supported instructions before the next unsupported instruction, the second processor loads the instructions in the first processor for execution so that the first processor does not fetch the instructions. If there are more than a threshold number of consecutive supported instructions before the next unsupported instruction, the first processor fetches and executes those instructions.
Type: Application
Filed: July 31, 2003
Publication date: May 6, 2004
Applicant: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Serge Lasserre
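The dispatch rule reduces to a simple look-ahead: count how many consecutive supported instructions precede the next unsupported one; below the threshold, the second processor pushes that short run into the first processor itself, and at or above it, the first processor resumes its own fetching. The sketch below is a hedged software model of that decision loop; the instruction representation, the trace output, and the threshold value are assumptions.

```python
def run_program(instructions, is_supported, threshold=8):
    """Return a trace of (executor, fetcher) pairs for each instruction."""
    trace = []
    i = 0
    while i < len(instructions):
        if is_supported(instructions[i]):
            # Count the run of supported instructions up to the next unsupported one.
            run = 0
            while i + run < len(instructions) and is_supported(instructions[i + run]):
                run += 1
            # Long runs: the first processor fetches for itself.
            # Short runs: the second processor loads them into the first processor.
            fetcher = "first" if run >= threshold else "second"
            for _ in range(run):
                trace.append(("first", fetcher))
                i += 1
        else:
            trace.append(("second", "second"))  # unsupported: second processor executes
            i += 1
    return trace


prog = ["add", "mul", "fancy_op", "add"]
print(run_program(prog, is_supported=lambda op: op != "fancy_op", threshold=2))
```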