Patent Applications Published on May 6, 2004
  • Publication number: 20040088475
    Abstract: A memory device (10) includes an array (12) of memory cells arranged in rows and columns. Preferably, each memory cell includes a pass transistor coupled to a storage capacitor. A row decoder is coupled to rows of memory cells while a column decoder (14) is coupled to columns of the memory cells. The column decoder (14) includes an enable input. A variable delay (32) has an output coupled to the enable input of the column decoder (14). The variable delay (32) receives an indication (R/W′) of whether a current cycle is a read cycle or a write cycle. In the preferred embodiment, a signal provided at the output of the variable delay (32) is delayed when the current cycle is a read cycle relative to when the current cycle is a write cycle.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: Infineon Technologies North America Corp.
    Inventors: Harald Streif, Stefan Wuensche, Mike Killian
  • Publication number: 20040088476
    Abstract: A method and apparatus using a Content Addressable Memory for sorting a plurality of data items are presented. The data items to be sorted are stored in the Content Addressable Memory. A plurality of bit-by-bit burst searches are performed on the contents of the Content Addressable Memory with all other bits in the search key masked. The number of burst searches is proportional to the total number of bits in the data items to be sorted. The search is deterministic, depending on the number of bits in each data item on which the sort is performed and on the number of data items to be sorted.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: MOSAID Technologies, Inc.
    Inventor: Mourad Abdat
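    A minimal Python sketch of the masked, bit-serial sort described in 20040088476 above; the Cam class, the MSB-first refinement loop, and the sample values are illustrative assumptions, not the claimed hardware. Two masked burst searches are issued per bit, so the search count is proportional to the width of the data items.
    ```python
    class Cam:
        """Toy content-addressable memory: search() returns indices whose unmasked bits match."""
        def __init__(self, entries):
            self.entries = entries

        def search(self, key, mask):
            # a mask bit of 1 means "compare this bit"; all other bits are masked off
            return [i for i, e in enumerate(self.entries) if (e & mask) == (key & mask)]

    def cam_sort(values, width):
        cam = Cam(values)
        groups = [list(range(len(values)))]       # one unordered group to start
        for bit in reversed(range(width)):        # one pass per bit, MSB first
            mask = 1 << bit
            zeros = set(cam.search(0, mask))      # burst search for bit == 0
            ones = set(cam.search(mask, mask))    # burst search for bit == 1
            refined = []
            for g in groups:                      # split each group on this bit,
                for part in ([i for i in g if i in zeros],
                             [i for i in g if i in ones]):
                    if part:
                        refined.append(part)      # keeping zeros ahead of ones
            groups = refined
        return [values[i] for g in groups for i in g]

    print(cam_sort([5, 2, 7, 2], width=3))        # [2, 2, 5, 7]
    ```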
  • Publication number: 20040088477
    Abstract: A storage system that may include one or more memory devices, a memory interface device corresponding to one or more of the memory devices, which are organized in sections, and a section controller. In this system, a data request for the data may be received over a communications path by a section controller. The section controller determines the addresses in the memory devices storing the requested data, transfers these addresses to those memory devices storing the requested data, and transfers an identifier to the memory interface device. The memory device, in response, reads the data and transfers the data to its corresponding memory interface device. The memory interface device then adds to the data the identifier it received from the section controller and forwards the requested bits towards their destination, such that the data need not pass through the section controller.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventors: Melvin James Bullen, Steven Louis Dodd, William Thomas Lynch, David James Herbison
  • Publication number: 20040088478
    Abstract: In accordance with one aspect of the present invention, a seek profile table used by a disk controller contains multiple profiles for seek operations, and is accessed by a separate index table containing, for each permutation of key parameters, an index to a corresponding profile. In operation, the estimated seek time for an enqueued data access operation is obtained by accessing the applicable index table entry, using the value of the index entry to determine the corresponding profile, and using the profile to estimate the access time. Preferably, a “time-based relocation expected access time” algorithm is used, in which a nominal seek time is established, and profile table entries express a probability that an operation with a given latency above the nominal seek time will complete within the latency period. The expected access time is the latency plus the product of this probability and the time cost of a miss, i.e., the time of a single disk revolution.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: International Business Machines Corporation
    Inventor: David Robison Hall
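    A small Python sketch of the table lookup and expected-access-time arithmetic described in 20040088478 above. The table contents, the key parameters, and the reading of the tabulated value as a completion probability (so the miss probability used in the formula is its complement) are assumptions made for illustration.
    ```python
    REVOLUTION_TIME_MS = 8.3      # time cost of a miss: one disk revolution (assumes 7200 RPM)

    # index table: (cylinder band, head) -> index of the seek profile to use
    INDEX_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 2}

    # profile table: profile index -> {latency above nominal seek (ms): P(completes in time)}
    PROFILE_TABLE = [
        {0.0: 0.50, 1.0: 0.80, 2.0: 0.95, 3.0: 0.99},
        {0.0: 0.40, 1.0: 0.70, 2.0: 0.90, 3.0: 0.98},
        {0.0: 0.30, 1.0: 0.60, 2.0: 0.85, 3.0: 0.97},
    ]

    def expected_access_time(cylinder_band, head, latency_ms, nominal_seek_ms):
        profile = PROFILE_TABLE[INDEX_TABLE[(cylinder_band, head)]]
        extra = max(0.0, latency_ms - nominal_seek_ms)
        # use the largest tabulated extra-latency bucket that does not exceed 'extra'
        bucket = max((e for e in profile if e <= extra), default=0.0)
        p_complete = profile[bucket]
        # expected access time = latency + P(miss) * cost of a miss (one revolution)
        return latency_ms + (1.0 - p_complete) * REVOLUTION_TIME_MS

    # e.g. the scheduler would pick the enqueued request with the smallest value
    print(expected_access_time(cylinder_band=1, head=0, latency_ms=5.0, nominal_seek_ms=3.0))
    ```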
  • Publication number: 20040088479
    Abstract: Write operations less than full block size (short block writes) are internally accumulated while being written to disk in a temporary cache location. Once written to the cache location, the disk drive signals the host that the write operation has completed. Accumulation of short block writes in the drive is transparent to the host and does not present an exposure to data loss. The accumulation of a significant number of short block write operations in the queue makes it possible to perform read/modify/write operations with greater efficiency. In operation, the drive preferably cycles between operation in the cache location and the larger data block area to achieve efficient use of the cache and efficient selection of data access operations. In one embodiment, a portion of the disk surface is formatted at a smaller block size for use by legacy software.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: International Business Machines Corporation
    Inventor: David Robison Hall
  • Publication number: 20040088480
    Abstract: A method for determining a speculative data acquisition in conjunction with an execution of a first access command relative to an execution of a second access command through execution of the read look ahead routine is disclosed.
    Type: Application
    Filed: June 23, 2003
    Publication date: May 6, 2004
    Applicant: Seagate Technology LLC
    Inventors: Travis D. Fox, Edwin S. Olds, Mark A. Gaertner, Abbas Ali
  • Publication number: 20040088481
    Abstract: A disk cache may include a volatile memory such as a dynamic random access memory and a nonvolatile memory such as a polymer memory. When a cache line needs to be allocated on a write, the polymer memory may be allocated and when a cache line needs to be allocated on a read, the volatile memory may be allocated.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Inventor: John I. Garney
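    A toy Python model of the allocation policy in 20040088481 above: a write allocates a line in the nonvolatile part of the disk cache, while a read miss allocates in the volatile part. The two-dictionary cache, the backing-store stub, and the omission of eviction are assumptions of the sketch.
    ```python
    class HybridDiskCache:
        """Disk cache split across volatile and nonvolatile memory (eviction omitted)."""
        def __init__(self, backing_store):
            self.volatile = {}       # e.g. DRAM lines (contents lost on power failure)
            self.nonvolatile = {}    # e.g. polymer-memory lines (survive power failure)
            self.backing = backing_store

        def read(self, lba):
            for part in (self.volatile, self.nonvolatile):
                if lba in part:
                    return part[lba]                     # cache hit
            data = self.backing.get(lba, b"\x00" * 512)  # read miss: fetch from disk...
            self.volatile[lba] = data                    # ...and allocate a volatile line
            return data

        def write(self, lba, data):
            self.volatile.pop(lba, None)                 # keep each line in one place only
            self.nonvolatile[lba] = data                 # allocate on write in nonvolatile
                                                         # memory, so dirty data survives power loss

    cache = HybridDiskCache(backing_store={})
    cache.write(10, b"hello")
    print(cache.read(10))                                # b'hello', served from the nonvolatile line
    ```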
  • Publication number: 20040088482
    Abstract: Data storage systems are provided. One such data storage system includes a first data storage carrier that incorporates multiple disk drives mounted adjacent to each other. Other systems also are provided.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Inventors: Herbert J. Tanzer, Patrick S. McGoey
  • Publication number: 20040088483
    Abstract: A method for providing online RAID migration without non-volatile memory employs reconstruction of the RAID drives. In this manner, the method of the present invention protects online migration of data from power failure with little or no performance loss, so that data can be recovered if power fails while migration is in progress and migration may be resumed without the use of non-volatile memory.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Inventors: Paresh Chatterjee, Parag Maharana, Sumanesh Samanta
  • Publication number: 20040088484
    Abstract: A primary controller transmits write data and a write time to a secondary controller in order of write time, after reporting completion of the write request to a processing unit. The secondary controller stores the write data and the write time transmitted from the primary controller in its cache memory. At a given time, the secondary controller stores the write data in a disk unit in order of write time. These operations make it possible to guarantee all of the write data written on or before a reference time.
    Type: Application
    Filed: July 9, 2003
    Publication date: May 6, 2004
    Inventors: Akira Yamamoto, Katsunori Nakamura, Shigeru Kishiro
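    A simplified Python model of the ordering behaviour in 20040088484 above: the secondary controller caches (write time, data) pairs received from the primary and destages them to disk strictly in write-time order, up to a reference time. The class and method names are illustrative, not from the publication.
    ```python
    import heapq

    class SecondaryController:
        def __init__(self):
            self.cache = []                      # min-heap ordered by write time
            self.disk = {}                       # block address -> data

        def receive(self, write_time, address, data):
            heapq.heappush(self.cache, (write_time, address, data))

        def destage_up_to(self, reference_time):
            # apply cached writes in increasing write-time order, never passing
            # the reference time, so the disk always reflects a consistent point
            while self.cache and self.cache[0][0] <= reference_time:
                _, address, data = heapq.heappop(self.cache)
                self.disk[address] = data

    sec = SecondaryController()
    sec.receive(3, "blk7", "C"); sec.receive(1, "blk7", "A"); sec.receive(2, "blk9", "B")
    sec.destage_up_to(reference_time=2)
    print(sec.disk)   # {'blk7': 'A', 'blk9': 'B'}; the write at time 3 stays cached
    ```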
  • Publication number: 20040088485
    Abstract: A computer-implemented cache memory for a RAID-5 configured disk storage system to achieve a significant enhancement of the data access and write speed of the RAID disk. A memory cache is provided between the RAID-5 controller and the RAID-5 disks to speed up RAID-5 system volume accesses. It utilizes the temporal and spatial locality of parity blocks. The memory cache is central in its physical architecture for easy management, better utilization, and easy application to a generalized computer system. The cache blocks are indexed by their physical disk identifier to improve the cache hit ratio and cache utilization.
    Type: Application
    Filed: October 17, 2003
    Publication date: May 6, 2004
    Inventor: Rung-Ji Shang
  • Publication number: 20040088486
    Abstract: A technique for resynchronizing a memory system. More specifically, a technique for resynchronizing a plurality of memory segments in a redundant memory system after a hot-plug event. After a memory cartridge is hot-plugged into a system, the memory cartridge is synchronized with the operational memory cartridges such that the memory system can operate in lock step. A refresh counter in each memory cartridge is disabled to generate a first refresh request to the corresponding memory segments in the memory cartridge. After waiting a period of time to ensure that all cycles have been completely executed, regardless of what state each memory cartridge is in when the first refresh request is initiated, each refresh counter is re-enabled, thereby generating a second refresh request. The generation of the second refresh request to each of the memory segments provides synchronous operation of each of the memory cartridges.
    Type: Application
    Filed: October 21, 2003
    Publication date: May 6, 2004
    Inventors: Gary J. Piccirillo, Jerome J. Johnson, John E. Larson
  • Publication number: 20040088487
    Abstract: A chip-multiprocessing system with scalable architecture, including on a single chip: a plurality of processor cores; a two-level cache hierarchy; an intra-chip switch; one or more memory controllers; a cache coherence protocol; one or more coherence protocol engines; and an interconnect subsystem. The two-level cache hierarchy includes first level and second level caches. In particular, the first level caches include a pair of instruction and data caches for, and private to, each processor core. The second level cache has a relaxed inclusion property, the second-level cache being logically shared by the plurality of processor cores. Each of the plurality of processor cores is capable of executing an instruction set of the ALPHA™ processing core. The scalable architecture of the chip-multiprocessing system is targeted at parallel commercial workloads.
    Type: Application
    Filed: October 24, 2003
    Publication date: May 6, 2004
    Inventors: Luiz Andre Barroso, Kourosh Gharachorloo, Andreas Nowatzyk
  • Publication number: 20040088488
    Abstract: A multi-threaded embedded processor that includes an on-chip deterministic (e.g., scratch or locked cache) memory that persistently stores all instructions associated with one or more pre-selected high-use threads. The processor executes general (non-selected) threads by reading instructions from an inexpensive external memory, e.g., by way of an on-chip standard cache memory, or using other potentially slow, non-deterministic operation such as direct execution from that external memory that can cause the processor to stall while waiting for instructions to arrive. When a cache miss or other blocking event occurs during execution of a general thread, the processor switches to the pre-selected thread, whose execution with zero or minimal delay is guaranteed by the deterministic memory, thereby utilizing otherwise wasted processor cycles until the blocking event is complete.
    Type: Application
    Filed: May 7, 2003
    Publication date: May 6, 2004
    Applicant: Infineon Technologies North America Corp.
    Inventors: Robert E. Ober, Roger D. Arnold, Daniel Martin, Erik K. Norden
  • Publication number: 20040088489
    Abstract: A multi-port instruction/data integrated cache which is provided between a parallel processor and a main memory and stores therein a part of instructions and data stored in the main memory has a plurality of banks, and a plurality of ports including an instruction port unit consisting of at least one instruction port used to access an instruction from the parallel processor and a data port unit consisting of at least one data port used to access data from the parallel processor. Further, a data width which can be specified to the bank from the instruction port is set larger than a data width which can be specified to the bank from the data port.
    Type: Application
    Filed: October 15, 2003
    Publication date: May 6, 2004
    Applicant: Semiconductor Technology Academic Research Center
    Inventors: Tetsuo Hironaka, Hans Jurgen Mattausch, Tetsushi Koide, Tai Hirakawa, Koh Johguchi
  • Publication number: 20040088490
    Abstract: A super predictive fetch system and method provides the benefits of a larger word line fill prefetch operation without the penalty normally associated with the larger line fill prefetch operation. Sequential memory access patterns are identified and caused to trigger a fetch of a sequential next line of data. The super predictive fetch operation includes a buffer into which the sequential next line of data is loaded. In one embodiment, the buffer is located in the memory controller. In another embodiment, the buffer is located in the cache controllers.
    Type: Application
    Filed: November 6, 2002
    Publication date: May 6, 2004
    Inventor: Subir Ghosh
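    A minimal Python model of the predictive-fetch idea in 20040088490 above: when consecutive line addresses are observed, the next sequential line is fetched into a small buffer ahead of demand. The names, the 64-byte line size, and the two-consecutive-lines detection rule are assumptions of the sketch.
    ```python
    LINE_SIZE = 64

    class PredictiveFetcher:
        def __init__(self, memory):
            self.memory = memory       # models the slower backing memory (line addr -> data)
            self.last_line = None
            self.buffer = {}           # prefetch buffer: line address -> prefetched data

        def _fetch_line(self, line_addr):
            return self.memory.get(line_addr, bytes(LINE_SIZE))

        def read(self, addr):
            line = addr - (addr % LINE_SIZE)
            if line in self.buffer:
                data = self.buffer.pop(line)             # served from the prefetch buffer
            else:
                data = self._fetch_line(line)            # demand fetch from memory
            if self.last_line is not None and line == self.last_line + LINE_SIZE:
                nxt = line + LINE_SIZE                   # sequential pattern detected:
                self.buffer[nxt] = self._fetch_line(nxt) # speculatively fetch the next line
            self.last_line = line
            return data

    mem = {0: b"A" * 64, 64: b"B" * 64, 128: b"C" * 64}
    pf = PredictiveFetcher(mem)
    pf.read(0); pf.read(80)       # lines 0 then 64 are sequential, so line 128 is prefetched
    print(128 in pf.buffer)       # True
    ```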
  • Publication number: 20040088491
    Abstract: A microprocessor is configured to continue execution in a special Speculative Prefetching After Data Cache Miss (SPAM) mode after a data cache miss is encountered. The microprocessor includes additional registers and an additional program counter, and optionally additional cache memory for use during the special SPAM mode. By continuing execution during the SPAM mode, multiple outstanding and overlapping cache fill requests may be issued, thus improving the performance of the microprocessor.
    Type: Application
    Filed: October 24, 2003
    Publication date: May 6, 2004
    Inventors: Norman Paul Jouppi, Keith Istvan Farkas
  • Publication number: 20040088492
    Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. Mechanisms for reducing the number of transactions in a multiple cluster system are provided. In one example, probe filter information is used to limit the number of probe requests transmitted to request and remote clusters.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc., a Delaware corporation
    Inventor: David B. Glasco
  • Publication number: 20040088493
    Abstract: According to the present invention, methods and apparatus are provided for increasing the efficiency of data access in a multiple processor, multiple cluster system. Mechanisms for reducing the number of transactions in a multiple cluster system are provided. In one example, memory controller filter information is used to probe a request or remote cluster while bypassing a home cluster memory controller.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc., a Delaware corporation
    Inventor: David B. Glasco
  • Publication number: 20040088494
    Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
    Type: Application
    Filed: November 5, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc., a Delaware corporation
    Inventors: David B. Glasco, Rajesh Kota, Sridhar K. Valluru
  • Publication number: 20040088495
    Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
    Type: Application
    Filed: November 5, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc., a Delaware corporation
    Inventors: David B. Glasco, Rajesh Kota, Sridhar K. Valluru
  • Publication number: 20040088496
    Abstract: Cache coherence directory eviction mechanisms are described for use in computer systems having a plurality of multiprocessor clusters. Interaction among the clusters is facilitated by a cache coherence controller in each cluster. A cache coherence directory is associated with each cache coherence controller identifying memory lines associated with the local cluster which are cached in remote clusters. A variety of techniques for managing eviction of entries in the cache coherence directory are provided.
    Type: Application
    Filed: November 5, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc., a Delaware corporation
    Inventors: David Brian Glasco, Rajesh Kota, Sridhar K. Valluru
  • Publication number: 20040088497
    Abstract: Methods and apparatus for exchanging cyclic redundancy check encoded (CRC-encoded) data are presented. An exemplary arrangement includes at least two blocks connected by an address bus and a data bus on which data is exchanged between the blocks. A snoop block, connected to the address and data buses, is configured to receive an address from the data bus. The snoop block includes address masking circuitry configured to mask off the address receivable from the data bus to generate at least one snoop address. A CRC block, connected to the data bus and to the snoop block, is configured to generate a CRC code from the data when a data address, carried on the address bus, matches the at least one snoop address.
    Type: Application
    Filed: November 6, 2002
    Publication date: May 6, 2004
    Inventors: Russell C. Deans, Troy S. Dahlmann
  • Publication number: 20040088498
    Abstract: A system and method are provided for freeing memory from individual pools of memory in response to a threshold being reached that corresponds with the individual memory pools. The collective memory pools form a system-wide memory pool that is accessible from multiple processors. When a threshold is reached for an individual memory pool, a page stealer method is performed to free memory from the corresponding memory pool. Remote memory is used to store data if the page stealer is unable to free pages fast enough to accommodate the application's data needs. Memory subsequently freed from the local memory area is once again used to satisfy the memory needs of the application. In one embodiment, memory affinity can be set on an individual application basis so that affinity is maintained between the memory pools local to the processors running the application.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: International Business Machines Corporation
    Inventors: Jos Manuel Accapadi, Mathew Accapadi, Andrew Dunshea, Dirk Michel
  • Publication number: 20040088499
    Abstract: A distributed computer system is disclosed that allows shared memory resources to be synchronized so that accurate and uncorrupted memory contents are shared by the computer systems within the distributed computer system. The distributed computer system includes a plurality of devices, at least one memory resource shared by the plurality of devices, and a memory controller, coupled to the plurality of devices and to the shared memory resources. The memory controller synchronizes the access of shared data stored within the memory resources by the plurality of devices and overrides synchronization among the plurality of devices upon notice that a prior synchronization event has occurred or the memory resource is not to be shared by other devices.
    Type: Application
    Filed: October 21, 2003
    Publication date: May 6, 2004
    Inventor: James Alan Woodward
  • Publication number: 20040088500
    Abstract: One aspect of the invention is a method for automatically readying a medium. The method comprises monitoring a state of a storage medium using readying logic and determining a physical media type for the storage medium using the readying logic. The method also comprises determining a recorded type for the storage medium using the readying logic and selecting, using the readying logic, a file system type for the storage medium if the storage medium is formatted using a file system. The method further comprises automatically readying the storage medium in response to the determinations and selection.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventor: Allen J. Piepho
  • Publication number: 20040088501
    Abstract: A method and apparatus are provided for repacking of memory data. For at least one embodiment, data for a plurality of store instructions in a source code program is loaded from memory into the appropriate sub-location of a proxy storage location. The packed data is then written with a single instruction from the proxy storage location into contiguous memory locations.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Inventors: Jean-Francois C. Collard, Kalyan Muthukumar
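    A short Python sketch of the repacking idea in 20040088501 above: several narrow stores are staged into sub-locations of one proxy location, then written to contiguous memory with a single wide store. The field widths and names are illustrative assumptions.
    ```python
    import struct

    def packed_store(memory, base, byte_values):
        """Stage four 1-byte stores in a proxy word, then issue one 4-byte store."""
        proxy = 0
        for i, v in enumerate(byte_values):              # fill sub-locations of the proxy
            proxy |= (v & 0xFF) << (8 * i)
        memory[base:base + 4] = struct.pack("<I", proxy) # single write to contiguous memory

    memory = bytearray(16)
    packed_store(memory, base=4, byte_values=[0x11, 0x22, 0x33, 0x44])
    print(memory.hex())           # bytes 4..7 hold 11 22 33 44, written with one store
    ```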
  • Publication number: 20040088502
    Abstract: An integrated multilevel nonvolatile flash memory device has a memory array of a plurality of memory units arranged in a plurality of rows and columns. Each of the memory units has a plurality of memory cells with each memory cell for storing a multibit state. Each of the memory units stores encoded user data and overhead data. The partitioning of the encoded user data and the overhead data stored in a single memory unit may be done virtually. The result is a compact memory unit without the need for an index to overhead data for its associated user data.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Inventor: Jack E. Frayer
  • Publication number: 20040088503
    Abstract: An information processing method includes the steps of: obtaining, among arithmetic instructions, information on data sets referred to by memory reference, and assigning to different banks a plurality of data sets simultaneously referred to by memory reference performed in accordance with an arithmetic instruction. This allows bank assignment to be performed automatically without causing memory bank conflict.
    Type: Application
    Filed: October 20, 2003
    Publication date: May 6, 2004
    Applicant: MATSUSHITA ELECTRIC CO., LTD.
    Inventors: Ryoko Miyachi, Wataru Hashiguchi
  • Publication number: 20040088504
    Abstract: A data storage system and method for reorganizing data to improve the effectiveness of data prefetching and reduce the data seek distance. A data reorganization region is allocated in which data is reorganized to service future requests for data. Sequences of data units that have been repeatedly requested are determined from a request stream, preferably using a graph where each vertex represents a requested data unit and each edge represents that a destination unit is requested shortly after a source unit, together with the frequency of this occurrence. The most frequently requested data units are also determined from the request stream. The determined data is copied into the reorganization region and reorganized according to the determined sequences and most frequently requested units. The reorganized data might then be used to service future requests for data.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventors: Windsor Wee Sun Hsu, Honesty Cheng Young
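    A small Python model of the request-graph construction in 20040088504 above: each vertex is a requested unit, each edge counts how often one unit is requested shortly after another, and the hottest edges and units drive what is copied into the reorganization region. The window size and the thresholds are assumed values.
    ```python
    from collections import Counter, deque

    def analyze(request_stream, window=2):
        freq = Counter()                    # how often each unit is requested
        edges = Counter()                   # (source, destination) -> co-occurrence count
        recent = deque(maxlen=window)
        for unit in request_stream:
            freq[unit] += 1
            for prev in recent:
                if prev != unit:
                    edges[(prev, unit)] += 1
            recent.append(unit)
        return freq, edges

    def plan_reorg_region(freq, edges, top_edges=2, top_units=2):
        # lay out the most frequent "A then B" pairs contiguously, then hot singles
        layout = []
        for (a, b), _ in edges.most_common(top_edges):
            for u in (a, b):
                if u not in layout:
                    layout.append(u)
        for u, _ in freq.most_common(top_units):
            if u not in layout:
                layout.append(u)
        return layout

    stream = ["A", "B", "C", "A", "B", "D", "A", "B", "C"]
    freq, edges = analyze(stream)
    print(plan_reorg_region(freq, edges))   # ['A', 'B', 'C']
    ```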
  • Publication number: 20040088505
    Abstract: A method and apparatus for enhancing the performance of storage systems are described. In the making of an initial copy to a secondary subsystem, or in the initial storage of data onto a primary storage subsystem, null data is skipped. The data may be skipped by sending the non-null data in sequence so that missing addresses are identified as being null data, or a skip message may be used to designate regions where null data is to be present.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: Hitachi, Ltd.
    Inventor: Naoki Watanabe
  • Publication number: 20040088506
    Abstract: A data archiving controller automatically determines whether a main storage device has a usage ratio in excess of a maximum limit and whether an archiving or backing storage device has sufficient directory space to accept files from the main storage device. The data archiving controller then determines, using fuzzy logic, the number of files to be transferred from the main storage devices to the backing storage devices. The data archiving controller has a set allocating apparatus in communication with the main storage device and the backing storage devices to receive retention device usage parameters for classification within classification sets. A membership rules retaining device contains the classification parameter defining rules by which the retention device usage parameters are assigned to the classification sets. The archiving controller has a rule evaluation apparatus for determining a quantity of data to be archived.
    Type: Application
    Filed: November 1, 2002
    Publication date: May 6, 2004
    Applicant: Taiwan Semiconductor Manufacturing Company
    Inventor: Nan-Jung Chen
  • Publication number: 20040088507
    Abstract: In a computer system that includes a first computer, a second computer, a first storage apparatus storing data in a fixed-length block format used by the second computer, and a backup apparatus connected to the first computer and storing data in a variable-length block format, the present invention provides a backup method for backing up data stored in the first storage apparatus to the backup apparatus. The first computer sends the second computer a request to read data in the fixed-length block format. In response to this request, the second computer reads the fixed-length block format data from the first storage apparatus and transfers this data to the first computer. The first computer converts the transferred fixed-length block format data into variable-length block format data. The converted variable-length block format data is stored in the backup apparatus.
    Type: Application
    Filed: July 17, 2003
    Publication date: May 6, 2004
    Applicant: Hitachi, Ltd.
    Inventors: Ai Satoyama, Akira Yamamoto, Takashi Oeda, Yasutomo Yamamoto, Masaya Watanabe
  • Publication number: 20040088508
    Abstract: A system for backing up data includes a data-directing device configured to receive data to be backed up, a first backup storage device that is communicatively coupled to the data-directing device and that is configured to store the received data, a data-caching device that is coupled to the data-directing device and that is configured to store the received data, a switch that is configured to communicatively couple the data-directing device to a second backup storage device responsive to a backup operation failure, wherein data stored in the data-caching device is transferred to the second backup storage device via the data-directing device responsive to the backup operation failure.
    Type: Application
    Filed: September 8, 2003
    Publication date: May 6, 2004
    Inventors: Curtis C. Ballard, William Wesley Torrey, Michael J. Chaloner
  • Publication number: 20040088509
    Abstract: A microprocessor circuit for organizing access to data or programs stored in a memory has a microprocessor, a memory for storing an operating system, and a memory for storing individual external programs. A plurality of memory areas with respective address spaces is provided in the memory for storing the external programs. Each address space is assigned an identifier. The identifier assigned to a memory area is loaded into a first auxiliary register prior to the addressing of the memory area and the identifier of the addressed memory area is loaded into a second auxiliary register. A comparison of the contents of the first and second auxiliary registers is performed. Furthermore, each address space of a memory area is assigned at least one bit sequence defining access rights, whereby code instructions and sensitive data can be protected against write accesses from other external programs.
    Type: Application
    Filed: August 6, 2003
    Publication date: May 6, 2004
    Inventors: Franz-Josef Brucklmayr, Hans Friedinger, Holger Sedlak, Christian May
  • Publication number: 20040088510
    Abstract: A log region (1415A) and a license region (1415B) are arranged in a memory of a memory card. The license region (1415B) stores licenses such as license IDs and license keys Kc as well as validity flags corresponding to entry numbers 0 to (N−1). The log region (1415A) includes a receive log (70) and a send log (80). The memory card serving as a sender of the license accepts a receive state from the memory card on the receiver side, and validates the validity flag of a region designated by the entry number in the send log (80) when the receive state is ON. Consequently, even when communication is interrupted during shifting or copying of the license, the license to be shifted or copied can be restored.
    Type: Application
    Filed: September 15, 2003
    Publication date: May 6, 2004
    Inventor: Yoshihiro Hori
  • Publication number: 20040088511
    Abstract: A system is described for controlling access to non-volatile memory. The system can include logic configured to determine whether to delay access to the non-volatile memory.
    Type: Application
    Filed: October 30, 2002
    Publication date: May 6, 2004
    Inventors: Kinney C. Bacon, Lee R. Johnson
  • Publication number: 20040088512
    Abstract: A double data rate memory controller is provided with a plurality of data and strobe pads, means for receiving data and strobe signals via said pads at 1× double data rate memory speed, and means for receiving data and strobe signals via said pads at M× double data rate memory speed (M≧2).
    Type: Application
    Filed: October 28, 2003
    Publication date: May 6, 2004
    Inventors: Eric M. Rentschler, Jeffrey G. Hargis, Leith L. Johnson
  • Publication number: 20040088513
    Abstract: A computing system includes a processor having an operating system executing thereon, a storage system having one or more storage media, and a controller coupled between the processor and the storage system. The controller maintains partition data defining one or more partitions for the storage media in response to commands received from the operating system, and controls access to the storage media in accordance with the partition data. The controller selects a subset of the partitions as active partitions, and communicates to the operating system a portion of the partition data that defines the active partitions. The controller may, for example, select the subset based on a current authenticated user. The controller intercepts storage access requests from the processor, and rejects storage accesses requests that are not directed to the active partitions.
    Type: Application
    Filed: October 30, 2002
    Publication date: May 6, 2004
    Inventors: David W. Biessener, Kevin J. Tacheny, Gaston R. Biessener
  • Publication number: 20040088514
    Abstract: A storage system that may include one or more memory devices, a memory interface device corresponding to one or more of the memory devices, which are organized in sections, a section controller, and a switch. The switch is capable of reading a data request including a data block identifier and routing the data request and any associated data through the switch on the basis of this data block identifier, such that a data request may be routed to a memory section. The section controller, in response, determines the addresses in the memory devices storing the requested data, and it transfers these addresses to those memory devices storing the requested data.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventors: Melvin James Bullen, Steven Louis Dodd, William Thomas Lynch, David James Herbison
  • Publication number: 20040088515
    Abstract: A method of storing data includes the steps of: identifying respective lifetimes of each member of an indexed collection of data elements, each of the data elements being referenceable in a data index space representing a set of valid data element indices; identifying a set of pairs of the data elements having overlapping lifetimes; and generating a mapping from the data index space to an address offset space based on the set of pairs of the data elements having the overlapping lifetimes.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventors: Robert S. Schreiber, Alain Darte
  • Publication number: 20040088516
    Abstract: A method and system for virtual memory translation of data represented in a multidimensional coordinate system when the physical memory may be located in more than one physical memory location. The translation of one or more virtual addresses into one or more accesses to one or more physical memories is achieved by representing each address of each element of a memory of the one or more physical memories as a point in a Cartesian coordinate system wherein consecutive points in the Cartesian coordinate system represent virtual memory addresses corresponding to elements from different physical memories of the one or more physical memories. Points in the Cartesian coordinate system are translated into one or more corresponding physical memory addresses, and read or write operations may be performed relative to these physical memory addresses. Multiple read or write operations may be performed during a single clock cycle through the use of parallel accesses of the one or more physical memories.
    Type: Application
    Filed: October 30, 2002
    Publication date: May 6, 2004
    Inventors: Malcolm Ryle Dwyer, Nikolaos Bellas
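    A toy Python translation model for 20040088516 above: a point in a 2-D index space is linearized to a virtual element number, and consecutive virtual elements are interleaved across several physical memories so that neighbouring elements can be read in the same cycle. The dimensions and the number of memories are illustrative assumptions.
    ```python
    WIDTH, HEIGHT = 8, 4          # multidimensional (here 2-D) data
    NUM_MEMS = 4                  # number of physical memories accessed in parallel

    def translate(x, y):
        """(x, y) -> (which physical memory, address inside that memory)."""
        virtual = y * WIDTH + x                  # linearize the coordinate
        return virtual % NUM_MEMS, virtual // NUM_MEMS

    def parallel_read(mems, points):
        """Read several points in one 'cycle' if they land in distinct memories."""
        banks = [translate(x, y)[0] for x, y in points]
        assert len(set(banks)) == len(banks), "bank conflict: cannot read in one cycle"
        return [mems[m][a] for m, a in (translate(x, y) for x, y in points)]

    # four horizontally adjacent elements fall in four different memories
    mems = [[f"m{m}w{a}" for a in range(WIDTH * HEIGHT // NUM_MEMS)] for m in range(NUM_MEMS)]
    print([translate(x, 1) for x in range(4)])   # [(0, 2), (1, 2), (2, 2), (3, 2)]
    print(parallel_read(mems, [(0, 1), (1, 1), (2, 1), (3, 1)]))
    ```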
  • Publication number: 20040088517
    Abstract: A method includes storing a plurality of system status messages of a specified size, and transmitting the status messages as a combined status message of a size larger than said specified size to an external device. In one aspect, the system status messages may have sizes that are less than the width of a bus, and said transmitting the combined status message includes transmitting the combined status message having a width equal to a width of the bus.
    Type: Application
    Filed: November 4, 2002
    Publication date: May 6, 2004
    Inventor: Thomas V. Spencer
  • Publication number: 20040088518
    Abstract: A memory access system is described which generates two memory addresses from a single memory access instruction which identifies a register holding at least two packed objects. In the preferred embodiment, the contents of a base register is combined respectively with each of two or more packed objects in an offset register.
    Type: Application
    Filed: October 29, 2003
    Publication date: May 6, 2004
    Applicant: Broadcom Corporation
    Inventor: Sophie Wilson
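    A brief Python sketch of the dual-address generation in 20040088518 above: a single access instruction names a base register and an offset register holding two packed objects, and the base is combined with each packed object to form two memory addresses. The 32-bit register holding two 16-bit offsets is an assumed layout, not a detail from the publication.
    ```python
    def dual_addresses(base_reg, offset_reg):
        off_lo = offset_reg & 0xFFFF             # first packed object
        off_hi = (offset_reg >> 16) & 0xFFFF     # second packed object
        return base_reg + off_lo, base_reg + off_hi

    base = 0x1000_0000
    offsets = (0x0040 << 16) | 0x0010            # two offsets packed into one register
    print([hex(a) for a in dual_addresses(base, offsets)])   # ['0x10000010', '0x10000040']
    ```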
  • Publication number: 20040088519
    Abstract: A hyperprocessor includes a control processor controlling tasks executed by a plurality of processor cores, each of which may include multiple execution units, or special hardware units. The control processor schedules tasks according to control threads for the tasks created during compilation and comprising a hardware context including register files, a program counter and status bits for the respective task. The tasks are dispatched to the processor cores or special hardware units for parallel, sequential, out-of-order or speculative execution. A universal register file contains data to be operated on by the task, and an interconnect couples at least the processor cores or special hardware units to each other and to the universal register file, allowing each node to communicate with any other node.
    Type: Application
    Filed: October 30, 2002
    Publication date: May 6, 2004
    Applicant: STMicroelectronics, Inc.
    Inventor: Faraydon O. Karim
  • Publication number: 20040088520
    Abstract: An embodiment of the present invention includes a pipeline comprising a plurality of stages and a pipeline timing controller controlling a plurality of predetermined delays, wherein, when one of the predetermined delays has expired, the pipeline timing controller sends a control signal to initiate at least one process within associated ones of the plurality of stages.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Inventors: Shail Aditya Gupta, Mukund Sivaraman
  • Publication number: 20040088521
    Abstract: A vector processing system for executing vector instructions, each instruction defining multiple value pairs, an operation to be executed and a modifier, the vector processing system comprising a plurality of parallel processing units, each arranged to receive one of said pairs of values and, when selected, to implement an operation on said value pair to generate a result, each processing unit comprising at least one flag and being selectable in dependence on a condition defined by said at least one flag, wherein the modifier defines the condition under which the parallel processing unit is individually selected.
    Type: Application
    Filed: October 31, 2002
    Publication date: May 6, 2004
    Applicant: ALPHAMOSAIC LIMITED
    Inventors: Stephen Barlow, Neil Bailey, Timothy Ramsdale, David Plowman, Robert Swann
  • Publication number: 20040088522
    Abstract: A multi-processor computer system is described in which transaction processing in each cluster of processors is distributed among multiple protocol engines. Each cluster includes a plurality of local nodes and an interconnection controller interconnected by a local point-to-point architecture. The interconnection controller in each cluster comprises a plurality of protocol engines for processing transactions. Transactions are distributed among the protocol engines using destination information associated with the transactions.
    Type: Application
    Filed: November 5, 2002
    Publication date: May 6, 2004
    Applicant: Newisys, Inc.
    Inventors: Charles Edward Watson, Rajesh Kota, David Brian Glasco
  • Publication number: 20040088523
    Abstract: A multi-processor computer system permits various types of partitions to be implemented to contain and isolate hardware failures. The various types of partitions include hard, semi-hard, firm, and soft partitions. Each partition can include one or more processors. Upon detecting a failure associated with a processor, the connection to adjacent processors in the system can be severed, thereby precluding corrupted data from contaminating the rest of the system. If an inter-processor connection is severed, message traffic in the system can become congested as messages become backed up in other processors. Accordingly, each processor includes various timers to monitor for traffic congestion that may be due to a severed connection. Rather than letting the processor continue to wait to be able to transmit its messages, the timers will expire at preprogrammed time periods and the processor will take appropriate action, such as simply dropping queued messages, to keep the system from locking up.
    Type: Application
    Filed: October 23, 2003
    Publication date: May 6, 2004
    Inventors: Richard E. Kessler, Peter J. Bannon, Kourosh Gharachorloo, Thukalan V. Verghese
  • Publication number: 20040088524
    Abstract: A system includes a first processor coupled to a second processor. The first and second processors are coupled to memory. The first processor fetches and executes supported instructions until an unsupported instruction is detected. The second processor executes the unsupported instruction. If there are fewer than a threshold number of consecutive supported instructions before the next unsupported instruction, the second processor loads the instructions into the first processor for execution so that the first processor does not fetch the instructions. If there are more than a threshold number of consecutive supported instructions before the next unsupported instruction, the first processor fetches and executes those instructions.
    Type: Application
    Filed: July 31, 2003
    Publication date: May 6, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Serge Lasserre
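    A behavioural Python sketch of the hand-off policy in 20040088524 above: runs of supported instructions between unsupported ones are measured, short runs are loaded into the first processor by the second, and long runs are fetched by the first processor itself. The instruction encoding, the supported/unsupported test, and the threshold value are assumptions of the sketch.
    ```python
    THRESHOLD = 3

    def schedule(program, is_supported):
        """Yield (executor, fetch mode, instruction) decisions for a linear program."""
        i = 0
        while i < len(program):
            if not is_supported(program[i]):
                yield ("second", "n/a", program[i])           # unsupported: second CPU runs it
                i += 1
                continue
            run = i
            while run < len(program) and is_supported(program[run]):
                run += 1                                      # length of the supported run
            mode = "first fetches" if run - i > THRESHOLD else "second loads into first"
            for j in range(i, run):
                yield ("first", mode, program[j])
            i = run

    prog = ["add", "mul", "FPDIV", "add", "sub", "add", "mul", "FPSQRT"]
    for decision in schedule(prog, is_supported=lambda op: not op.startswith("FP")):
        print(decision)
    ```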