Patent Applications Published on April 1, 2004
  • Publication number: 20040064640
    Abstract: A queuing architecture and method for scheduling disk drive access requests in a video server. The queuing architecture employs a controlled admission policy that determines how a new user is assigned to a specific disk drive in a disk drive array. The queuing architecture includes, for each disk drive, a first queue for requests from users currently receiving information from the server, and a second queue for all other disk access requests, as well as a queue selector selecting a particular first queue or second queue for enqueuing a request based on the controlled admission policy. The controlled admission policy defines a critical time period such that if a new user request can be fulfilled without causing a steady-state access request for a particular disk drive to miss a time deadline, the new user request is enqueued in the second queue of the particular disk drive; otherwise, the controlled admission policy enqueues the new user request in a second queue of another disk drive.
    Type: Application
    Filed: September 16, 2003
    Publication date: April 1, 2004
    Inventors: Robert G. Dandrea, Danny Chin, Jesse S. Lerman, Clement G. Taylor, James Fredrickson
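A minimal Python sketch of a controlled admission policy like the one this abstract describes; the `Disk` class, the `SERVICE_TIME` constant, and the per-request deadline model are illustrative assumptions, not details from the patent.

```python
SERVICE_TIME = 10  # assumed per-request service time (ms)

class Disk:
    def __init__(self):
        self.steady_queue = []   # first queue: users currently being served
        self.other_queue = []    # second queue: all other access requests

    def earliest_deadline(self):
        return min((r["deadline"] for r in self.steady_queue), default=float("inf"))

    def can_admit(self, now):
        # Admitting one more request delays all queued work by SERVICE_TIME;
        # admission is allowed only if no steady-state deadline is missed.
        backlog = (len(self.steady_queue) + len(self.other_queue) + 1) * SERVICE_TIME
        return now + backlog <= self.earliest_deadline()

def admit(disks, preferred, request, now=0):
    """Enqueue a new-user request per the controlled admission policy:
    try the preferred disk's second queue first, then the other disks."""
    order = [preferred] + [i for i in range(len(disks)) if i != preferred]
    for i in order:
        if disks[i].can_admit(now):
            disks[i].other_queue.append(request)
            return i
    return None  # no disk can meet its deadlines; reject the request
```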
  • Publication number: 20040064641
    Abstract: A method of reallocating data among physical disks corresponding to a logical disk is provided. A logical disk is partitioned into a plurality of groups, each group comprising at least one segment on at least one of a first plurality of physical disks corresponding to the logical disk. One group of the plurality of groups is partitioned into a plurality of sub-groups, and, for each sub-group of the plurality of sub-groups but one, the sub-group is copied to at least one segment on at least one of a second plurality of physical disks corresponding to the logical disk.
    Type: Application
    Filed: September 19, 2003
    Publication date: April 1, 2004
    Applicant: Hitachi, Ltd.
    Inventor: Shoji Kodama
  • Publication number: 20040064642
    Abstract: An automatic browser Web cache resizing system allows a browser to adjust its Web cache size to its environment automatically. When the browser starts up, the browser examines the host computer's hard drive for the amount of the available free space and allocates the maximum reasonable amount of the free space on the hard drive for the Web cache that it needs to run efficiently. During the browser's shutdown sequence, it optionally reexamines the free space on the hard drive and gives up as much of its allocated Web cache space as it can. Every time the browser writes to or reads from the Web cache, the browser checks to see if its Web cache allocation is needed for free space. If the browser sees that the amount of free space is low, it will give up some of its Web cache space that it allocated. Another preferred embodiment of the invention integrates the invention with the operating system of the host computer. The browser requests memory from the operating system.
    Type: Application
    Filed: October 1, 2002
    Publication date: April 1, 2004
    Inventor: James Roskind
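An illustrative policy following this abstract: size the Web cache from the drive's free space at startup, and shrink it whenever free space runs low. The fraction and the low-space threshold are assumptions for the sketch, not values from the patent.

```python
CACHE_FRACTION = 0.10         # take at most 10% of free space at startup
LOW_WATERMARK = 512 * 2**20   # shrink when free space drops below 512 MiB

def initial_cache_size(free_bytes):
    """Allocate a fraction of the currently available free space."""
    return int(free_bytes * CACHE_FRACTION)

def adjusted_cache_size(current_size, free_bytes):
    """Called on every cache read/write: give back space if the disk is low."""
    if free_bytes < LOW_WATERMARK:
        return current_size // 2    # give up half of the allocation
    return current_size
```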
  • Publication number: 20040064643
    Abstract: A method and apparatus for optimizing line writes in cache coherent systems. A new cache line may be allocated without loading data to fill the new cache line when a store buffer coalesces enough stores to fill the cache line. Data may be loaded to fill the line if an insufficient number of stores are coalesced to fill the entire cache line. The cache line may be allocated by initiating a read and invalidate request and asserting a back-off signal to cancel the read if there is an indication that the coalesced stores will fill the cache line.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Sujat Jamil, Hang T. Nguyen, Samantha J. Edirisooriya, David E. Miner, R. Frank O'Bleness, Steven J. Tu
  • Publication number: 20040064644
    Abstract: The present invention relates to a structure and a method of data update in a cache memory inside a local processor, which uses the feature of cache control. A buffer block of a header buffer is mapped to a memory space at several different address sectors addressed by the local processor. Whenever the local processor attempts to access the internal cache memory, a cache miss occurs, forcing the local processor to request new data from the buffer blocks of a header buffer in a HCA. Consequently, the whole block is loaded into the cache memory. This not only boosts cache update performance but also accelerates packet access.
    Type: Application
    Filed: April 28, 2003
    Publication date: April 1, 2004
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: Patrick Lin, Wei-Pin Chen
  • Publication number: 20040064645
    Abstract: In a multiprocessor data processing system including: a main memory; at least first and second shared caches; a system bus coupling the main memory and the first and second shared caches; at least four processors having respective private caches, with the first and second private caches coupled to the first shared cache and to one another via a first internal bus, and the third and fourth private caches coupled to the second shared cache and to one another via a second internal bus; a method and apparatus are provided for preventing hogging of ownership of a gateword that is stored in the main memory and governs access to common code/data shared by processes running in at least three of the processors. Each processor includes a gate control flag. A gateword CLOSE command establishes ownership of the gateword in one processor and prevents other processors from accessing the guarded code/data until the one processor has completed its use.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Wayne R. Buzby, Charles P. Ryan
  • Publication number: 20040064646
    Abstract: A memory controller system is provided, which includes a plurality of system buses, a multi-port memory controller and a plurality of error correcting code (ECC) encoders. The memory controller has a plurality of system bus ports and a memory port. Each ECC encoder is coupled between a respective system bus and a respective system bus port of the memory controller.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Steven M. Emerson, Gregory F. Hammitt
  • Publication number: 20040064647
    Abstract: A method and apparatus to improve the read/write performance of a hard drive is presented. The hard drive includes solid-state, non-volatile (NV) memory as a read/write cache. Data specified by the operating system is stored in the NV memory. The operating system provides a list of data to be put in NV memory. The data includes data to be pinned in NV memory and data that is dynamic. Pinned data persists in NV memory until the operating system commands it to be flushed. Dynamic data can be flushed by the hard drive controller. Data sent by an application for storage is temporarily stored in NV memory in data blocks until the operating system commits it to the disk.
    Type: Application
    Filed: February 21, 2003
    Publication date: April 1, 2004
    Applicant: Microsoft Corporation
    Inventors: Dean L. DeWhitt, Clark D. Nicholson, W. Jeff Westerinen, Michael R. Fortin, John M. Parchem, Charles P. Thacker
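A minimal sketch of the pinned/dynamic split this abstract describes; the class and method names are illustrative, since the patent does not define an API.

```python
class NVCache:
    def __init__(self):
        self.pinned = {}    # persists until the OS commands a flush
        self.dynamic = {}   # the hard drive controller may evict this freely

    def store(self, key, data, pin=False):
        (self.pinned if pin else self.dynamic)[key] = data

    def controller_evict(self):
        """The drive controller may flush only the dynamic data."""
        evicted = dict(self.dynamic)
        self.dynamic.clear()
        return evicted

    def os_flush_pinned(self):
        """Pinned data persists until the operating system flushes it."""
        flushed = dict(self.pinned)
        self.pinned.clear()
        return flushed
```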
  • Publication number: 20040064648
    Abstract: Method and apparatus for prefetching cache with requested data are described. A processor initiates a read access to main memory for data which is not in the main memory. After the requested data is brought into the main memory, but before the read access is reinitiated, the requested data is prefetched from main memory into the cache subsystem of the processor which will later reinitiate the read access.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jeffrey D. Brown, John D. Irish, Steven R. Kunkel
  • Publication number: 20040064649
    Abstract: Methods and apparatus are provided for supplying data to a processor in a digital processing system. The method includes holding data required by the processor in a cache memory, supplying data from the cache memory to the processor in response to processor requests, performing a cache line fill operation in response to a cache miss, supplying data from a prefetch buffer to the cache memory in response to the cache line fill operation, and speculatively loading data from a lower level memory to the prefetch buffer in response to the cache line fill operation.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Thomas A. Volpe, Michael S. Allen
  • Publication number: 20040064650
    Abstract: Provided are a method, system, and program for maintaining data in distributed caches. A copy of an object is maintained in at least one cache, wherein multiple caches may have different versions of the object, and wherein the objects are capable of having modifiable data units. Update information is maintained for each object maintained in each cache, wherein the update information for each object in each cache indicates the object, the cache including the object, and indicates whether each data unit in the object was modified. After receiving a modification to a target data unit in one target object in one target cache, the update information for the target object and target cache is updated to indicate that the target data unit is modified, wherein the update information for the target object in any other cache indicates that the target data unit is not modified.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventor: Sandra K. Johnson
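A sketch of the per-cache update information this abstract describes: after a write to one data unit in one cache, that cache's record marks the unit modified and every other cache's record for the same object marks it unmodified. The dictionary layout is an illustrative assumption.

```python
def apply_modification(update_info, obj, cache, unit):
    """update_info maps (object, cache) -> {data-unit index: modified flag}.
    Mark `unit` modified in the target cache and unmodified elsewhere."""
    for (o, c), units in update_info.items():
        if o == obj:
            units[unit] = (c == cache)
```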
  • Publication number: 20040064651
    Abstract: A data processor (120) recognizes a special data processing operation in which data will be stored in a cache (124) for one use only. The data processor (120) allocates a memory location to at least one cache line of the cache (124). A data producer such as a data communication driver program running on a central processing unit (122) then writes a data element to the allocated memory location. A data consumer (160) reads the data element by sending a READ ONCE request to a host bridge (130). The host bridge (130) provides the READ ONCE request to a memory controller (126), which reads the data from the cache (124) and de-allocates the at least one cache line without performing a writeback from the cache to a main memory (170). In one form the memory controller (126) de-allocates the at least one cache line by issuing a probe marking the next state of the associated cache line as invalid.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventor: Patrick Conway
  • Publication number: 20040064652
    Abstract: A method and apparatus for a mechanism for handling I/O transactions with known transaction length to coherent memory in a cache coherent multi-node architecture is described. In one embodiment, the invention is a method. The method includes receiving a request for a current copy of a data line. The method further includes finding the data line within a cache-coherent multi-node system. The method also includes copying the data line without disturbing a state associated with the data line. The method also includes providing a copy of the data line in response to the request. The method also includes determining if the data line is a last data line of a transaction based on a known transaction length of the transaction.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Kenneth C. Creta, Manoj Khare, Lily P. Looi, Akhilesh Kumar
  • Publication number: 20040064653
    Abstract: A protocol engine is for use in each node of a computer system having a plurality of nodes. Each node includes an interface to a local memory subsystem that stores memory lines of information, a directory, and a memory cache. The directory includes an entry associated with a memory line of information stored in the local memory subsystem. The directory entry includes an identification field for identifying sharer nodes that potentially cache the memory line of information. The identification field has a plurality of bits at associated positions within the identification field. Each respective bit of the identification field is associated with one or more nodes. The protocol engine furthermore sets each bit in the identification field for which the memory line is cached in at least one of the associated nodes.
    Type: Application
    Filed: September 26, 2003
    Publication date: April 1, 2004
    Inventors: Kourosh Gharachorloo, Luiz A. Barroso, Robert J. Stets, Mosur K. Ravishankar, Andreas Nowatzyk
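A coarse sharer vector in the spirit of this abstract: each bit of the identification field covers a group of nodes, and a bit is set when any node in its group caches the line, so the encoding identifies nodes that *potentially* cache it. The group size of 4 is an assumption for illustration.

```python
NODES_PER_BIT = 4  # assumed: each identification-field bit covers 4 nodes

def set_sharer(field, node_id):
    """Set the bit whose node group contains node_id."""
    return field | (1 << (node_id // NODES_PER_BIT))

def possible_sharers(field, n_nodes):
    """Nodes that potentially cache the line (the encoding is lossy)."""
    return [n for n in range(n_nodes) if field & (1 << (n // NODES_PER_BIT))]
```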
  • Publication number: 20040064654
    Abstract: A sharing mechanism is herein disclosed for multiple logical processors using a translation lookaside buffer (TLB) to translate virtual addresses into physical addresses. The mechanism supports sharing of TLB entries among logical processors, which may access address spaces in common. The mechanism further supports private TLB entries among logical processors, which may each access a different physical address through identical virtual addresses. The sharing mechanism provides for installation and updating of TLB entries as private entries or as shared entries transparently, without requiring special operating system support or modifications. Sharability of virtual address translations by logical processors may be determined by comparing page table physical base addresses of the logical processors. Using the disclosed sharing mechanism, fast and efficient virtual address translation is provided without requiring more expensive functional redundancy.
    Type: Application
    Filed: September 24, 2003
    Publication date: April 1, 2004
    Inventors: Thomas E. Willis, Achmed R. Zahir
  • Publication number: 20040064655
    Abstract: A method of generating physical memory access statistics for a computer system having a non-uniform memory access architecture which includes a plurality of processors located on a respective plurality of boards. The method includes monitoring when a memory trap occurs, determining a physical memory access location when the memory trap occurs, determining a frequency of physical memory accesses by the plurality of processors based upon the physical memory access locations, and generating physical memory statistics showing the frequency of physical memory accesses by the plurality of processors for each board of the computer system.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventor: Dominic Paulraj
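A sketch of the statistics gathering this abstract describes: each memory trap yields a (processor, address) pair, and the board holding the accessed page is derived from the address. The address-to-board mapping below is a stand-in for the real NUMA topology, and the constants are assumptions.

```python
from collections import Counter

PAGES_PER_BOARD = 1024   # assumed pages of physical memory per board
PAGE_SIZE = 4096         # assumed page size in bytes

def board_of(addr):
    """Map a physical address to the board holding it."""
    return (addr // PAGE_SIZE) // PAGES_PER_BOARD

def access_stats(trap_log):
    """trap_log: iterable of (cpu, physical_address) pairs from memory traps.
    Returns access counts keyed by (cpu, board)."""
    stats = Counter()
    for cpu, addr in trap_log:
        stats[(cpu, board_of(addr))] += 1
    return stats
```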
  • Publication number: 20040064656
    Abstract: A system and method for verifying a memory consistency model for shared-memory multiprocessor computer systems generates random instructions to run on the processors, saves the results of the running of the instructions, and analyzes the results to detect a memory subsystem error if the results fall outside of the space of possible outcomes consistent with the memory consistency model. A precedence relationship of the results is determined by uniquely identifying results of a store location with each result distinct to allow association of a read result value to the instruction that created the read result value. A precedence graph with static, direct and derived edges identifies errors when a cycle is detected that indicates results inconsistent with memory consistency model rules.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Sudheendra Hangal, Durgam Vahia, Juin-Yeu Lu
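The analysis step of this abstract reduces to cycle detection in the precedence graph: a cycle means the observed results cannot be ordered consistently with the memory model. Below is a standard iterative depth-first search, not the patent's own algorithm.

```python
def has_cycle(edges, nodes):
    """Return True if the directed graph (nodes, edges) contains a cycle."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {n: WHITE for n in nodes}

    def visit(start):
        stack = [(start, iter(adj[start]))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color[nxt] == GRAY:
                    return True          # back edge: cycle found
                if color[nxt] == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(adj[nxt])))
                    break
            else:
                color[node] = BLACK      # all successors explored
                stack.pop()
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)
```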
  • Publication number: 20040064657
    Abstract: According to some embodiments, a memory structure is provided including information storage elements and associated validity storage elements.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventors: Muraleedhara Navada, Sreenath Kurupati
  • Publication number: 20040064658
    Abstract: An access control method and apparatus for a RAID storage device that includes a data hard disk and a backup hard disk are disclosed herein. In response to a write command, data associated with the write command is written onto the data hard disk and the backup hard disk concurrently. Moreover, in response to a read command, data corresponding to the read command is read from the data hard disk, and the data read from the data hard disk is written concurrently onto the backup hard disk.
    Type: Application
    Filed: December 4, 2002
    Publication date: April 1, 2004
    Applicant: Dynapac Corporation
    Inventor: Jack Chang
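The access pattern this abstract describes, modeled with two dictionaries standing in for the data disk and the backup disk; this is a toy model of the behavior, not the patent's implementation.

```python
def write(data_disk, backup_disk, block, payload):
    """Writes go to the data disk and the backup disk concurrently."""
    data_disk[block] = payload
    backup_disk[block] = payload

def read(data_disk, backup_disk, block):
    """Reads come from the data disk; the value read is concurrently
    written onto the backup disk."""
    payload = data_disk[block]
    backup_disk[block] = payload
    return payload
```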
  • Publication number: 20040064659
    Abstract: In a storage apparatus system, after having obtained coherency between a file system of a main storage apparatus system and the stored data, a host computer issues a freezing instruction to a main DKC, which in turn transfers the disk image at the time point of issue of the freezing instruction to a sub-DKC and then transmits a signal, showing that all the data has been transmitted, to the sub-DKC. In the sub-DKC, the disk image at the time point of reception of the freezing instruction is held until a signal showing that all the data has been transmitted is next issued, and when the main storage apparatus system becomes unusable at an arbitrary time point, the data of the disk image at the time point of issue of the freezing instruction, which is held by the sub-storage apparatus system, can be utilized.
    Type: Application
    Filed: September 5, 2003
    Publication date: April 1, 2004
    Applicant: Hitachi, Ltd.
    Inventors: Kyosuke Achiwa, Takashi Oeda, Katsunori Nakamura
  • Publication number: 20040064660
    Abstract: Embodiments of the present invention may provide methods, systems, and/or computer program products for data storage utilizing multiple arrays of memories. Multiple read clocks may be generated to reflect the multiple signal path lengths of the signals to or from the multiple arrays of memories. The memory arrays may operate synchronously and may terminate and reinstate data transfer bursts without intermediate re-addressing, refreshing or restarting.
    Type: Application
    Filed: January 22, 2003
    Publication date: April 1, 2004
    Inventor: Michael Stewart Lyons
  • Publication number: 20040064661
    Abstract: A self-clocking memory device comprises a memory array, a memory input circuit, and a memory control circuit. The memory input circuit is operable to receive an input clock signal and generate a memory operation initiation signal in response thereto, while the memory control circuit is operable to receive the memory operation initiation signal and generate one or more control signals to initiate a memory operation in response thereto. The memory control circuit is further operable to identify completion of the memory operation and generate a cycle ready strobe signal in response thereto. The memory input circuit receives the cycle ready strobe signal as an input and generates a next memory operation initiation signal in response thereto for initiation of a next memory operation.
    Type: Application
    Filed: September 16, 2003
    Publication date: April 1, 2004
    Inventors: Bryan D. Sheffield, Vikas K. Agrawal, Stephen W. Spriggs, Eric L. Badi
  • Publication number: 20040064662
    Abstract: A bus interface unit is provided for a digital signal processor including a core processor, a memory and two or more system buses for transfer of data to and from system components. The bus interface unit includes a first bus controller for receiving processor transfer requests from the core processor on two or more processor buses and for directing the processor transfer requests to the memory on a first memory bus. The bus interface further includes a second bus controller for receiving system transfer requests from the system components on the two or more system buses and for directing the system transfer requests to the memory on a second memory bus. The bus controllers may have pipelined architectures and may be configured to service transfer requests independently.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Applicant: Analog Devices, Inc.
    Inventors: Moinul I. Syed, Michael S. Allen
  • Publication number: 20040064663
    Abstract: The present invention relates to techniques for predicting memory access in a data processing apparatus, and in particular to a technique for determining whether a data item to be accessed crosses an address boundary and will hence require multiple memory accesses.
    Type: Application
    Filed: October 1, 2002
    Publication date: April 1, 2004
    Inventor: Richard Roy Grisenthwaite
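The core predicate behind this abstract: does an access of `size` bytes at `addr` cross an alignment boundary, and hence need more than one memory access? The 8-byte boundary is an assumption for illustration.

```python
BOUNDARY = 8  # assumed: a 64-bit (8-byte) access boundary

def crosses_boundary(addr, size):
    """True if bytes [addr, addr+size) span more than one aligned block."""
    return addr // BOUNDARY != (addr + size - 1) // BOUNDARY
```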
  • Publication number: 20040064664
    Abstract: An architecture and method for dynamically allocating and deallocating memory for variable-length packets with a variable number of virtual lanes in an InfiniBand subnetwork. This architecture uses linked lists and tags to handle the variable number of virtual lanes and the variable packet sizes. The memory allocation scheme is independent of virtual lane allocation and the maximum virtual lane depth. The disclosed architecture is also able to process InfiniBand packet data comprising variable packet lengths, a fixed memory allocation size, and deallocation of memory when packets are either multicast or unicast. The memory allocation scheme uses linked lists to perform memory allocation and deallocation, while tags are used to track InfiniBand subnetwork and switch-specific issues. Memory allocation and deallocation is performed using several data and pointer tables. These tables store packet data information, packet buffer address information, and pointer data and pointer addresses.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventor: Mercedes E. Gil
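Free-list allocation with fixed-size buffers, in the spirit of this abstract: allocation pops the head of a linked list of free buffers, and deallocation pushes a buffer back. The table layout and names are illustrative only.

```python
class BufferPool:
    def __init__(self, n_buffers):
        # next_free[i] links buffer i to the next free buffer (-1 = list end),
        # playing the role of the patent's pointer tables.
        self.next_free = [i + 1 for i in range(n_buffers - 1)] + [-1]
        self.head = 0

    def alloc(self):
        """Pop the head of the free list; None if the pool is exhausted."""
        if self.head == -1:
            return None
        buf = self.head
        self.head = self.next_free[buf]
        return buf

    def free(self, buf):
        """Push a deallocated buffer back onto the free list."""
        self.next_free[buf] = self.head
        self.head = buf
```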
  • Publication number: 20040064665
    Abstract: The 64-bit single cycle fetch method described here relates to a specific ‘megastar’ core processor employed in a range of new digital signal processor devices. The ‘megastar’ core incorporates 32-bit memory blocks arranged into separate entities or banks. Because the parent CPU has only three 16-bit buses, a maximum read in one clock cycle through the memory interface would normally be 48-bits. This invention describes an approach for a fetch method involving tapping into the memory bank data at an earlier stage prior to the memory interface. This allows the normal 48-bit fetch to be extended to 64-bits as required for full performance of the numerical processor accelerator and other speed critical operations and functions.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventors: Roshan J. Samuel, Jason D. Kridner
  • Publication number: 20040064666
    Abstract: In the method of generating an interleaved address, each 2^i mod (p−1) value for i = 0 to x−1 is stored. Here, p is a prime number dependent on a block size K of the data block being processed and x is greater than one. An inter-row sequence number is multiplied with a column index number to obtain a binary product. Both the inter-row sequence number and the column index number are for the block size K and the prime number p. Then, each binary component of the binary product is multiplied with a respective one of the stored 2^i mod (p−1) values to obtain a plurality of intermediate mod values. An intra-row permutation address is generated based on the plurality of intermediate mod values, and an interleaved address is generated based on the intra-row permutation address.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventor: Mark Andrew Bickerstaff
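The arithmetic trick in this abstract: a product mod (p − 1) is computed from the binary expansion of the product using precomputed 2^i mod (p − 1) values, one intermediate value per set bit, avoiding a wide modulo circuit. A sketch under those assumptions:

```python
def precompute(p, x):
    """Store 2^i mod (p - 1) for i = 0 .. x - 1."""
    return [pow(2, i, p - 1) for i in range(x)]

def mod_via_bits(product, p, table):
    """Compute product mod (p - 1) by summing the precomputed weights
    of the set bits in the binary product, then reducing once."""
    total = 0
    for i, weight in enumerate(table):
        if (product >> i) & 1:
            total += weight      # one intermediate mod value per set bit
    return total % (p - 1)
```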
  • Publication number: 20040064667
    Abstract: A memory system and a method for operating a memory system are provided. The memory system includes a set of memory banks, logic for calculating a first address in each memory bank from the set of memory banks and a controller receiving a transfer address from a computing device. The controller includes logic for selecting a memory bank from the set of memory banks based on the transfer address and the first addresses of the memory banks, and for mapping the transfer address to a target address in the selected memory bank based on a first address in the selected memory bank. As a result, the set of memory banks has a contiguous memory space.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Applicant: Analog Devices, Inc.
    Inventors: Thomas A. Volpe, Michael S. Allen, Aaron Bauch
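A sketch of the controller mapping this abstract describes: each bank declares a first (base) address, a transfer address selects the bank whose range covers it, and the address is offset by that bank's base, so the banks present one contiguous space. Equal bank sizes are an assumption for the sketch.

```python
def map_address(bank_bases, bank_size, transfer_addr):
    """bank_bases: first address of each memory bank, in ascending order.
    Returns (bank_index, target_address_within_bank), or None if the
    transfer address falls outside every bank."""
    for i, base in enumerate(bank_bases):
        if base <= transfer_addr < base + bank_size:
            return i, transfer_addr - base
    return None
```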
  • Publication number: 20040064668
    Abstract: A software monitor, interposed between the hardware layer of a computer system and one or more guest operating systems, constructs and maintains a guest-physical-address-to-host-physical-address map for each guest operating system, and maintains a virtual memory addressing context for each guest operating system that may include a virtual-hash-page table for each guest operating system, the contents of translation registers for each guest operating system, CPU-specific virtual-memory translations for each guest operating system, and the contents of various status registers.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Todd Kjos, Jonathan Ross, Christophe de Dinechin
  • Publication number: 20040064669
    Abstract: A system, method, and computer program product are disclosed for invalidating specified pretranslations maintained in a data processing system which maintains decentralized copies of pretranslations. A centralized mapping of virtual addresses to their associated physical addresses is established. The centralized mapping includes a listing of pretranslations of the virtual addresses to their associated physical addresses. Multiple lists of pretranslations are generated. Control of the lists may be passed from one entity to another, such that the lists are not owned by any particular entity. Each one of the lists includes a copy of pretranslations for virtual addresses. A particular one of the physical addresses is specified. Each list that includes a pretranslation of a virtual address to the specified physical address is located. The pretranslation of the virtual address to the specified physical address is then invalidated within each one of the lists.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Luke Matthew Browning, Bruce G. Mealey, Randal Craig Swanberg
  • Publication number: 20040064670
    Abstract: A method of carrying out a data fetch operation for a data-parallel processor such as a SIMD processor is described. The operation specifically involves the use of a plurality of non-sequential data addresses. The method comprises constructing a linear address vector from the non-sequential addresses, and using the address vector in a block fetch command to a data store.
    Type: Application
    Filed: October 29, 2003
    Publication date: April 1, 2004
    Inventors: John Lancaster, Martin Whitaker
  • Publication number: 20040064671
    Abstract: Providing physically contiguous memory from memory that is allocated without any guarantee that the underlying physical memory is contiguous involves identifying contiguous pages of physical memory in the allocated virtual memory. Such pages are contributed to a pool of contiguous physical memory for use as required. Any pages that are not contributed to the pool of contiguous physical memory, and are not allocated with other pages that are contributed to the pool, can be freed from allocation for alternative uses.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventor: Manoj C. Patil
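A sketch of the pooling step this abstract describes: scan a virtual-to-physical page map in virtual-address order and collect runs of physically contiguous pages for the pool; anything left over is a candidate to be freed. The function name and minimum run length are illustrative assumptions.

```python
def contiguous_runs(phys_pages, min_run=2):
    """phys_pages: physical page numbers listed in virtual-address order.
    Returns each run of consecutive physical pages of length >= min_run."""
    runs, start = [], 0
    for i in range(1, len(phys_pages) + 1):
        # A run ends at the list's end or where physical adjacency breaks.
        if i == len(phys_pages) or phys_pages[i] != phys_pages[i - 1] + 1:
            if i - start >= min_run:
                runs.append(phys_pages[start:i])
            start = i
    return runs
```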
  • Publication number: 20040064672
    Abstract: A method and apparatus for managing a dynamic alias page table are provided. With the apparatus and method, alias page table entries are added to an alias page table dynamically by determining if the alias page table has space for the entry and, if so, the entry describing the virtual address to physical address mapping is added to the alias page table and a successful completion is returned to the virtual memory manager. If the alias page table does not have space for the entry, a new page is used to map the next virtual page of the alias page table. This page must be marked as a fixed page if it is not so marked already. This page is pinned in the software page frame table, and the hardware page table entry for this page is also pinned.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Matthew David Fleming, Mark Douglass Rogers
  • Publication number: 20040064673
    Abstract: A system, method, and computer program product are disclosed for migrating real pages. A real page of data is established. Virtual addresses that are associated with the real addresses that are included within the real page are generated. A mapping table is established that includes mappings of the virtual addresses to these real addresses. A routine is executed that accesses the mapping table to obtain the mappings of virtual addresses to real addresses. The routine utilizes the virtual addresses to access the data that is stored in the real page. While the routine is executing, the data is migrated from the real page to a new real page. The mapping table is then updated while the routine is executing so that the routine utilizes the same virtual addresses to access the data that is now stored in the new real page. Execution of the routine continues while the mapping table is being updated.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Mark Douglass Rogers, Randal Craig Swanberg
  • Publication number: 20040064674
    Abstract: A sum decoder is disclosed including multiple sum predecoders, a carry generator, and multiple rotate logic units. Each sum predecoder receives multiple bit pairs of non-overlapping segments of a first and second address signal, and produces an input signal dependent upon the bit pairs. The carry generator receives a lower-ordered portion of the first and second address signals, and generates multiple carry signals each corresponding to a different one of the sum predecoders. Each rotate logic unit receives the input signal produced by a corresponding sum predecoder and a corresponding one of the carry signals, rotates the bits of the input signal dependent upon the carry signal, and produces either the input signal or the rotated input signal as an output signal. A memory is described including the sum decoder, a final decode block, and a data array. The final decode block performs logical operations on the output signals of the sum decoder to produce selection signals.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Toru Asano, Sang Hoo Dhong, Joel Abraham Silberman, Osamu Takahashi
  • Publication number: 20040064675
    Abstract: An embedded symmetrical multiprocessor system includes arbitration logic that determines which central processing unit has access to shared memory. Upon grant of access, the memory address is stored in a memory address register. An address compare circuit compares the access address of any other central processing unit with this stored address. Upon a match, the arbitration logic stalls the second accessing central processing unit until expiration of a programmable number of wait states following the first access. These wait states give the first central processing unit enough time to determine the state of a lock variable and take control of an operation protected by the lock variable. The application boot code can determine how long the read-check-write operation requires and program that value into the wait-state generator.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventor: Steven R. Jahnke
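The arbitration behavior can be modeled in a few lines: the first CPU to touch an address gets a programmable window of wait states, and any other CPU touching the same address before the window expires is stalled until it closes. This is a behavioral sketch only; `LockArbiter` and its cycle accounting are invented for illustration:

```python
class LockArbiter:
    """Toy model of the arbitration scheme: the stored access address is
    compared against later accesses, and a conflicting CPU stalls for the
    remainder of a programmable wait-state window."""

    def __init__(self, wait_states):
        self.wait_states = wait_states   # programmed by application boot code
        self.held = {}                   # address -> (owning cpu, window expiry)
        self.cycle = 0

    def access(self, cpu, address):
        self.cycle += 1
        owner = self.held.get(address)
        if owner and owner[0] != cpu and self.cycle < owner[1]:
            stall = owner[1] - self.cycle
            self.cycle = owner[1]        # model the stall until expiry
            return ("stalled", stall)
        self.held[address] = (cpu, self.cycle + self.wait_states)
        return ("granted", 0)
```

The window gives the first CPU time to complete its read-check-write on the lock variable, which is why boot code sizes `wait_states` to that operation's latency.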
  • Publication number: 20040064676
    Abstract: A method, apparatus, and computer instructions for broadcasting information. A change in data used by a number of processors in the data processing system is identified. A message is sent to the number of processors in the data processing system in which the message is sent with a priority level equal to a set of routines that use the data in response to identifying the change. This message is responded to only when the recipient is at an interrupt priority less favored than the priority of the message. A flag is set for each of the number of processors to form a plurality of set flags for the message in which the plurality of set flags are located in memory locations used by the number of processors in which the plurality of set flags remains set until a response is made to the message.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Ramanjaneya Sarma Burugula, Matthew David Fleming, Joefon Jann, Mark Douglass Rogers
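The flag protocol above can be sketched compactly: broadcasting sets one flag per processor, and each processor answers only when it is running at an interrupt priority less favored than the message's, leaving the flag set until then. All names and the numeric priority convention (lower = more favored) are assumptions of this sketch:

```python
MSG_PRIORITY = 5   # priority of the broadcast message; lower = more favored

class Broadcaster:
    """Toy model: per-processor flags in memory stay set until each
    processor responds to the data-change message."""

    def __init__(self, cpus):
        self.flags = {cpu: False for cpu in cpus}

    def broadcast(self):
        for cpu in self.flags:
            self.flags[cpu] = True

    def poll(self, cpu, current_priority):
        # Respond only when this CPU is at an interrupt priority
        # less favored than the message's priority.
        if self.flags[cpu] and current_priority > MSG_PRIORITY:
            self.flags[cpu] = False
            return True
        return False
```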
  • Publication number: 20040064677
    Abstract: A data processor includes program registers with individual byte-location write enables. Bypass networks allow a precision pipeline to respond to read requests by accessing a program register or pipeline stage on a byte-by-byte basis. The data processor can thus write to individual byte locations without overwriting other byte locations within the same register. The data processor has an instruction set with instructions that combine two operands and yield a one-byte result that is stored in a specified byte location of a specified result register. Eight instances of this instruction can pack eight results into a single 64-bit result register without additional packing instructions and without using a read port to read the result register before writing to it. As plural functional units can write concurrently to different subwords of the same result register, a system with four functional units can pack eight results into a result register in two instruction cycles.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventor: Dale Morris
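The effect of a per-byte write enable can be shown arithmetically: a one-byte result lands in a chosen byte location of a 64-bit register without reading or disturbing the other seven bytes, so eight results pack into one register with no separate packing instructions. A minimal sketch (the function name is illustrative):

```python
def write_byte(register, byte_index, value):
    """Model a per-byte-location write enable: store a one-byte result into
    byte `byte_index` of a 64-bit register, leaving other bytes untouched."""
    mask = 0xFF << (8 * byte_index)
    return (register & ~mask) | ((value & 0xFF) << (8 * byte_index))
```

Eight calls with byte indices 0 through 7 assemble a full 64-bit packed result; in the hardware described, plural functional units can perform such writes concurrently on different subwords.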
  • Publication number: 20040064678
    Abstract: A scheduling window hierarchy to facilitate high instruction level parallelism by issuing latency-critical instructions to a fast schedule window or windows where they are stored for scheduling by a fast scheduler or schedulers and execution by a fast execution unit or execution cluster. Furthermore, embodiments of the invention pertain to issuing latency-tolerant instructions to a separate scheduler or schedulers and execution unit or execution cluster.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Bryan P. Black, Edward A. Brekelbaum, Jeff P. Rupley
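The routing decision at the heart of the window hierarchy is simple to state: latency-critical instructions go to the small fast window, latency-tolerant ones to a separate window with its own scheduler. A minimal sketch, with the classifier left as a caller-supplied predicate since the abstract does not define it:

```python
def issue(instructions, is_latency_critical):
    """Partition an instruction stream between a fast scheduling window
    (latency-critical) and a separate slower window (latency-tolerant)."""
    fast_window, slow_window = [], []
    for insn in instructions:
        target = fast_window if is_latency_critical(insn) else slow_window
        target.append(insn)
    return fast_window, slow_window
```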
  • Publication number: 20040064679
    Abstract: A scheduling window hierarchy to facilitate high instruction level parallelism by issuing latency-critical instructions to a fast schedule window or windows where they are stored for scheduling by a fast scheduler or schedulers and execution by a fast execution unit or execution cluster. Furthermore, embodiments of the invention pertain to issuing latency-tolerant instructions to a separate scheduler or schedulers and execution unit or execution cluster.
    Type: Application
    Filed: January 29, 2003
    Publication date: April 1, 2004
    Inventors: Bryan P. Black, Edward A. Brekelbaum, Jeff P. Rupley
  • Publication number: 20040064680
    Abstract: One embodiment of the present invention provides a system that reduces the time required to access registers from a register file within a processor. During operation, the system receives an instruction to be executed, wherein the instruction identifies at least one operand to be accessed from the register file. Next, the system looks up the operands in a register pane, wherein the register pane is smaller and faster than the register file and contains copies of a subset of registers from the register file. If the lookup is successful, the system retrieves the operands from the register pane to execute the instruction. Otherwise, if the lookup is not successful, the system retrieves the operands from the register file, and stores the operands into the register pane. This triggers the system to reissue the instruction to be executed again, so that the re-issued instruction retrieves the operands from the register pane.
    Type: Application
    Filed: September 26, 2002
    Publication date: April 1, 2004
    Inventors: Sudarshan Kadambi, Adam R. Talcott, Wayne I. Yamamoto
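The pane's hit/miss-and-reissue behavior maps naturally onto a small cache in front of the register file. A behavioral sketch, with `RegisterPane` and its simple eviction policy invented for illustration:

```python
class RegisterPane:
    """Toy model of the register pane: a small, fast copy of a subset of
    registers. A miss fills the pane from the full register file and forces
    the instruction to be reissued, so the reissue hits in the pane."""

    def __init__(self, register_file, pane_size):
        self.register_file = register_file  # name -> value (the big file)
        self.pane = {}                      # small fast subset
        self.pane_size = pane_size

    def execute(self, operands):
        if all(r in self.pane for r in operands):
            return ("hit", [self.pane[r] for r in operands])
        # Miss: copy the needed registers into the pane, then reissue.
        for r in operands:
            if r not in self.pane:
                if len(self.pane) >= self.pane_size:
                    self.pane.pop(next(iter(self.pane)))  # simple eviction
                self.pane[r] = self.register_file[r]
        return ("reissue", None)
```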
  • Publication number: 20040064681
    Abstract: A stack pointer update technique in which the stack pointer is updated without executing micro-operations to add or subtract a stack pointer value. The technique is also described as resetting the stack pointer to a predetermined value without executing micro-operations to add or subtract a stack pointer value.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Stephan J. Jourdan, Alan B. Kyker, Nicholas G. Samra
  • Publication number: 20040064682
    Abstract: A processor is disclosed including several features allowing the processor to simultaneously execute instructions of multiple conditional execution instruction groups. Each conditional execution instruction group includes a conditional execution instruction and a code block specified by the conditional execution instruction. In one embodiment, the processor includes multiple state machines simultaneously assignable to a corresponding number of conditional execution instruction groups. In another embodiment, the processor includes multiple registers for storing marking data pertaining to a number of instructions in each of multiple execution pipeline stages. In another embodiment, the processor includes multiple attribute queues simultaneously assignable to a corresponding number of conditional execution instruction groups. In another embodiment, the processor includes write enable logic and an execution unit.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventors: Hung Nguyen, Shannon A. Wichman
  • Publication number: 20040064683
    Abstract: A system for conditionally executing an instruction depending on a previously existing condition. The system disclosed is configured to handle conditional execution instructions typically specifying at least one target instruction, a processor register, and a condition within the register. The system saves a result of each of the target instructions dependent upon the existence of the condition in the specified register during execution of the conditional execution instruction. When the conditional execution instruction specifies a first flag register, the system copies the flag bits in the first flag register to a corresponding second flag register, and saves a result of each of the target instructions dependent upon the specified condition in the first flag register during execution of the conditional execution instruction.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventors: Seshagiri P. Kalluri, Ramon C. Trombetta, Adam C. Krolnik
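The semantics can be sketched as: snapshot the flag register into a shadow copy, then commit each target instruction's result only if the specified condition bit is set in the original flag register. All names here are illustrative, not the patent's:

```python
def conditional_execute(regs, cond_reg, cond_bit, targets, shadow_reg=None):
    """Save each target instruction's result only if `cond_bit` is set in
    `cond_reg`. When the condition register is a flag register, its bits are
    first copied to a shadow flag register, as the abstract describes."""
    if shadow_reg is not None:
        regs[shadow_reg] = regs[cond_reg]     # preserve pre-existing flags
    if regs[cond_reg] & (1 << cond_bit):
        for dest, value in targets:           # commit results conditionally
            regs[dest] = value
    return regs
```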
  • Publication number: 20040064684
    Abstract: A processor is disclosed including an instruction unit and an execution unit. The instruction unit fetches and decodes a conditional execution instruction and one or more target instructions. The conditional execution instruction specifies the target instructions, a register, and a register condition, and includes pointer update information. The execution unit saves a result of each of the target instructions dependent upon the existence of the specified register condition during execution of the conditional execution instruction. When a target instruction is an instruction involving a pointer subject to update, the execution unit updates the pointer dependent upon the pointer update information. A system (e.g., a computer system) is described including the processor coupled to a memory system. A method is disclosed for conditionally executing at least one instruction, including inputting the conditional execution instruction and the target instructions.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Seshagiri P. Kalluri, Shannon A. Wichman, Ramon C. Trombetta
  • Publication number: 20040064685
    Abstract: A processor is disclosed including trace and profile logic for gathering and producing data corresponding to events occurring during instruction execution. In one embodiment, the trace and profile logic includes a discontinuity buffer for storing data corresponding to a “discontinuity instruction” subject to grouping with other instructions for simultaneous execution. A “discontinuity instruction” alters, or is executed as a result of an altering of, sequential instruction fetching. In another embodiment, the trace and profile logic includes a serial queue for serializing data corresponding to multiple discontinuity instructions grouped together for simultaneous execution. In another embodiment, the trace and profile logic includes stall filtering logic that asserts an output signal for a time period during which repeated data generated due to a pipeline stall condition are to be ignored.
    Type: Application
    Filed: September 27, 2002
    Publication date: April 1, 2004
    Inventors: Hung Nguyen, Mark Boike
  • Publication number: 20040064686
    Abstract: A method and apparatus for marking current memory configuration is presented. In this regard, an enhanced Basic Input/Output System (BIOS) is introduced to initialize system memory during an initial boot and to store initialization settings in a non-volatile memory for use during a subsequent boot.
    Type: Application
    Filed: September 30, 2002
    Publication date: April 1, 2004
    Inventors: Gregory L. Miller, Nicholas J. Yoke
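The boot-time flow reduces to a cache check: if saved settings in non-volatile memory match the installed memory configuration, replay them; otherwise run full initialization and save the result. A sketch under assumed names (`boot`, `train_memory`, and the NVRAM layout are all illustrative):

```python
def boot(nvram, current_config, train_memory):
    """Reuse memory-initialization settings saved in NVRAM when the installed
    configuration is unchanged; otherwise retrain and save for next boot."""
    saved = nvram.get("memory_settings")
    if saved is not None and saved["config"] == current_config:
        return saved["settings"]              # fast path: subsequent boot
    settings = train_memory(current_config)   # slow path: full initialization
    nvram["memory_settings"] = {"config": current_config, "settings": settings}
    return settings
```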
  • Publication number: 20040064687
    Abstract: This invention provides identity-related information about a client application to an honest requesting entity, verifying the identity of client applications and preventing man-in-the-middle attacks.
    Type: Application
    Filed: August 8, 2003
    Publication date: April 1, 2004
    Applicant: International Business Machines Corporation
    Inventors: Birgit M. Pfitzmann, Michael Waidner
  • Publication number: 20040064688
    Abstract: A method for processing packets with encrypted data received by a client from a server through at least one network, wherein the data packets comprise at least an encryption header (46) and a payload (45): extracting the encryption header (54, 55; 69) from a data packet, extracting and decrypting the encrypted payload to form clear data, and generating a clear data packet segment. Secure packet-based transmission of content data from a server to at least one client comprises retrieving a clear data packet comprising an unencrypted payload, dividing the unencrypted payload into one or more segments, applying an encryption algorithm to each segment to generate encrypted segments (47), generating an encryption header for each encrypted segment, composing a packet with encrypted data for each encrypted segment comprising the encryption header (46) and a data packet header, and transmitting each of the composed packets to the client.
    Type: Application
    Filed: October 3, 2003
    Publication date: April 1, 2004
    Inventor: Andre Jacobs
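The transmit and receive paths above can be sketched end to end: split the clear payload into segments, encrypt each, attach a per-segment encryption header, and reverse the process on the client. The abstract does not name a cipher, so a single-byte XOR keystream stands in purely for illustration:

```python
KEY = 0x5A  # illustrative stand-in for the unspecified real cipher key

def encrypt_packets(payload, segment_size):
    """Server side: divide the unencrypted payload into segments, encrypt
    each, and attach a per-segment encryption header."""
    packets = []
    for i in range(0, len(payload), segment_size):
        segment = payload[i:i + segment_size]
        body = bytes(b ^ KEY for b in segment)
        header = {"offset": i, "length": len(segment)}  # encryption header
        packets.append((header, body))
    return packets

def decrypt_packets(packets):
    """Client side: extract each encryption header, decrypt the payload,
    and reassemble the clear data."""
    clear = bytearray()
    for header, body in sorted(packets, key=lambda p: p[0]["offset"]):
        clear += bytes(b ^ KEY for b in body)
    return bytes(clear)
```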
  • Publication number: 20040064689
    Abstract: Systems and methods that securely handle control information are provided. In one example, a system may include an application specific integrated circuit (ASIC). The ASIC may include, for example, a content processing block and a control processing block. The content processing block may be coupled to the control processing block. The content received by the ASIC may be associated with the control information received by the ASIC. The control processing block may be adapted to validate the control information received by the ASIC. The content processing block may be adapted to process the content received by the ASIC in accordance with the validated control information.
    Type: Application
    Filed: December 4, 2002
    Publication date: April 1, 2004
    Inventor: Jeffrey Douglas Carr