Stack Cache Patents (Class 711/132)
  • Patent number: 7024537
    Abstract: A system may include a memory file and an execution core. The memory file may include an entry configured to store an addressing pattern and a tag. If an addressing pattern of a memory operation matches the addressing pattern stored in the entry, the memory file may be configured to link a data value identified by the tag to a speculative result of the memory operation. The addressing pattern of the memory operation includes an identifier of a logical register, and the memory file may be configured to predict whether the logical register is being specified as a general purpose register or a stack frame pointer register in order to determine whether the addressing pattern of the memory operation matches the addressing pattern stored in the entry. The execution core may be configured to access the speculative result when executing another operation that is dependent on the memory operation.
    Type: Grant
    Filed: January 21, 2003
    Date of Patent: April 4, 2006
    Assignee: Advanced Micro Devices, Inc.
    Inventors: James K. Pickett, Benjamin Thomas Sander, Kevin Michael Lepak
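    Sketch: a minimal C model of the memory-file idea above, assuming a single-entry file; the struct names, the base-register/displacement form of the addressing pattern, and the integer value tag are illustrative choices, not AMD's implementation.
    ```c
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Addressing pattern of a memory operation: base logical register plus
     * displacement (the register may be a general purpose or frame pointer). */
    typedef struct {
        int     base_reg;
        int32_t disp;
    } addr_pattern_t;

    /* One memory-file entry: a stored pattern and a tag naming the data value
     * last moved through that pattern. */
    typedef struct {
        bool           valid;
        addr_pattern_t pattern;
        int            value_tag;
    } memfile_entry_t;

    static memfile_entry_t entry;   /* single-entry file, for brevity */

    /* A store records its pattern and the tag of the stored value. */
    void memfile_on_store(addr_pattern_t p, int tag) {
        entry = (memfile_entry_t){ true, p, tag };
    }

    /* A later load whose pattern matches is linked to that tag, giving the
     * execution core a speculative result; -1 means no link, load normally. */
    int memfile_on_load(addr_pattern_t p) {
        if (entry.valid && entry.pattern.base_reg == p.base_reg &&
            entry.pattern.disp == p.disp)
            return entry.value_tag;
        return -1;
    }

    int main(void) {
        addr_pattern_t p = { .base_reg = 30 /* frame pointer */, .disp = -8 };
        memfile_on_store(p, 42);
        printf("linked tag: %d\n", memfile_on_load(p));   /* 42 */
        return 0;
    }
    ```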
  • Patent number: 7020747
    Abstract: Briefly, embodiments of the invention provide an architecture including two or more stack memories defined on separate memory banks. An apparatus in accordance with embodiments of the invention may include, for example, a processor associated with two stack memories defined on separate single-access memory banks. Embodiments of the invention further provide a method of compilation including, for example, allocating a first variable to a first memory bank and allocating a second variable to a stack memory defined on a second memory bank.
    Type: Grant
    Filed: March 31, 2003
    Date of Patent: March 28, 2006
    Assignee: Intel Corporation
    Inventor: Omry Paiss
  • Patent number: 6996677
    Abstract: A method and apparatus for protecting processing elements from buffer overflow attacks are provided. The apparatus includes a memory stack for, upon execution of a jump to subroutine, storing a return address in a first location in a stack memory. A second location separate from the stack memory for storing the address of the first location, and a third location separate from the stack memory for storing the return address itself, are included. A first comparator, upon completion of the subroutine, compares the address stored in the second location with the address of the first location in the stack memory, and a first interrupt generator provides an interrupt signal if the locations are not the same. A second comparator compares the return address stored in the third location with the return address stored in the first location in the stack memory, and a second interrupt generator generates an interrupt signal if the addresses are not the same.
    Type: Grant
    Filed: February 20, 2003
    Date of Patent: February 7, 2006
    Assignee: Nortel Networks Limited
    Inventors: Michael C. Lee, Lawrence Dobranski
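    Sketch: a software analogue, in C, of the two comparisons described above; the patent keeps the second and third locations in hardware outside the stack memory, which the sketch approximates with ordinary variables, and the function names are invented for illustration.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Shadow copies kept outside the ordinary stack memory. */
    static uintptr_t *saved_slot_addr;    /* "second location": address of the stack slot */
    static uintptr_t  saved_return_addr;  /* "third location": the return address itself  */

    /* On a jump to subroutine: store the return address in the stack slot
     * ("first location") and record the two shadow copies. */
    void on_call(uintptr_t *stack_slot, uintptr_t return_addr) {
        *stack_slot       = return_addr;
        saved_slot_addr   = stack_slot;
        saved_return_addr = return_addr;
    }

    /* On completion of the subroutine: the two comparators. A real device would
     * raise an interrupt; here a message stands in for the interrupt signal. */
    void on_return(uintptr_t *stack_slot) {
        if (stack_slot != saved_slot_addr)           /* first comparator  */
            printf("interrupt: return uses the wrong stack slot\n");
        else if (*stack_slot != saved_return_addr)   /* second comparator */
            printf("interrupt: return address was overwritten\n");
        else
            printf("return address intact\n");
    }

    int main(void) {
        uintptr_t stack[4];
        on_call(&stack[0], 0x4000);
        stack[0] = 0xDEAD;        /* simulated buffer-overflow corruption */
        on_return(&stack[0]);     /* second comparator fires              */
        return 0;
    }
    ```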
  • Patent number: 6993646
    Abstract: An automatically configuring storage array includes media storage devices coupled together within a network. Preferably, the network is an IEEE 1394-1995 serial bus network. The media storage devices record and retrieve data transmitted within the network. The media storage devices communicate to store and retrieve data over multiple media storage devices. When a record or playback command is received by a media storage device, the media storage devices send communications between themselves to record or transmit the data. Control of operations is transferred between the media storage devices to utilize the capacity of available media storage devices. Preferably, data is recorded utilizing redundancy techniques. Object descriptors are stored within recorded streams of data to facilitate search and retrieval of recorded data. Preferably, the media storage devices accept control instructions directly from devices.
    Type: Grant
    Filed: May 21, 2001
    Date of Patent: January 31, 2006
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventor: Scott D. Smyers
  • Patent number: 6965962
    Abstract: A computer implemented method of managing processor requests to load data items provides for the classification of the requests based on the type of data being loaded. In one approach, a pointer cache is used, where the pointer cache is dedicated to data items that contain pointers. In other approaches, the cache system replacement scheme is modified to age pointer data items more slowly than non-pointer data items. By classifying load requests, cache misses on pointer loads can be overlapped regardless of whether the pointer loads are part of a linked list of data structures.
    Type: Grant
    Filed: December 17, 2002
    Date of Patent: November 15, 2005
    Assignee: Intel Corporation
    Inventor: Gad S. Sheaffer
  • Patent number: 6957302
    Abstract: A system and method for performing write operations in a disk drive having a write stack drive is disclosed. The disk drive has two drives, a normal drive and a write stack drive. The write stack receives a write command from an array controller and performs write stack operations to store data from the write operations of the write command. The write stack drive includes a non-volatile stack cache memory comprising tracks to store the data. A metadata file identifies the current data stored within the write stack drive. The normal drive then executes the write operations of the write command.
    Type: Grant
    Filed: September 20, 2001
    Date of Patent: October 18, 2005
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Steven E. Fairchild
  • Patent number: 6952756
    Abstract: The present invention provides a speculatively loaded memory for use in a data processing system. The present invention may include a memory block including rows each identified by an address. A first register may store a first address of the memory block and a second register may store a second address of the memory block. A control circuit may be coupled to the first and second registers, and may receive control signals. The control circuit causes contents of the first register to be stored into the second register in response to a first state of the control signals, and the control circuit causes contents of the second register to be stored into the first register in response to a second state of the control signals.
    Type: Grant
    Filed: May 6, 2002
    Date of Patent: October 4, 2005
    Assignee: LeWiz Communications
    Inventor: Chinh H. Le
  • Patent number: 6948034
    Abstract: The invention aims to minimize deterioration of the processing speed of a Java accelerator device even when stack overflow occurs in a stack memory unit. A first thread presently allocated to a first stack area of a stack memory unit 113, the area to which a fourth thread is assigned, is saved in a virtual stack area of a main storage medium 103. Thereafter, the data of the fourth thread, as the stack data to be switched in, is copied to the first stack area of the stack memory unit 113 by the controller unit 112 (accelerator device 101).
    Type: Grant
    Filed: November 21, 2002
    Date of Patent: September 20, 2005
    Assignee: NEC Corporation
    Inventor: Yayoi Aoki
  • Patent number: 6931517
    Abstract: A microprocessor apparatus is provided for performing a pop-compare operation. The microprocessor apparatus includes paired operation translation logic, load logic, and execution logic. The paired operation translation logic receives a macro instruction that prescribes the pop-compare operation, and generates a pop-compare micro instruction. The pop-compare micro instruction directs pipeline stages in a microprocessor to perform the pop-compare operation. The load logic is coupled to the paired operation translation logic. The load logic receives the pop-compare micro instruction, and retrieves a first operand from an address in memory, where the address is specified by contents of a register. The register is prescribed by the pop-compare micro instruction. The execution logic is coupled to the load logic. The execution logic receives the first operand, and compares the first operand to a second operand.
    Type: Grant
    Filed: October 22, 2002
    Date of Patent: August 16, 2005
    Assignee: IP-First, LLC
    Inventors: Gerard M. Col, G. Glenn Henry, Terry Parks
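    Sketch: a C model of what a fused pop-compare micro instruction does, assuming a word-addressed toy machine; the cpu_t fields and the upward-moving stack pointer are assumptions for illustration, not the IP-First pipeline.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Toy machine state; the field names are illustrative. */
    typedef struct {
        uint32_t sp;        /* stack pointer register prescribed by the micro-op */
        uint32_t mem[64];   /* word-addressed memory                             */
        int      zf;        /* zero flag written by the compare                  */
    } cpu_t;

    /* One pop-compare micro instruction: the load logic reads the first operand
     * from the address held in the prescribed register, the pop adjusts that
     * register, and the execution logic compares against the second operand. */
    void pop_compare(cpu_t *c, uint32_t second_operand) {
        uint32_t first_operand = c->mem[c->sp];      /* load stage           */
        c->sp += 1;                                  /* pop: adjust register */
        c->zf = (first_operand == second_operand);   /* execute stage        */
    }

    int main(void) {
        cpu_t c = { .sp = 10 };
        c.mem[10] = 7;
        pop_compare(&c, 7);
        printf("zf=%d sp=%u\n", c.zf, (unsigned)c.sp);   /* zf=1 sp=11 */
        return 0;
    }
    ```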
  • Patent number: 6920526
    Abstract: The present invention comprises a dual bank FIFO memory buffer operable to buffer read data from memory and thereby compensate for specific timing problems in certain computerized systems. One embodiment of the invention includes a dual bank FIFO that comprises a first bank of memory elements operable to buffer memory data and a second bank of memory elements operable to buffer memory data. Write control address logic is operable to store selected memory data in memory elements with selected addresses within a bank of memory elements, and write control timing logic is operable to selectively grant write access to the banks of memory elements at predetermined times. Also, read control logic is operable to read data stored in the first and second banks.
    Type: Grant
    Filed: July 20, 2000
    Date of Patent: July 19, 2005
    Assignee: Silicon Graphics, Inc.
    Inventors: Mark Ronald Sikkink, Nan Ma
  • Patent number: 6892278
    Abstract: One embodiment of the present invention provides a system that implements a last-in first-out buffer. The system includes a plurality of cells arranged in a linear array to form the last-in first-out buffer, wherein a given cell in the interior of the linear array is configured to receive get and put calls from a preceding cell in the linear array, and to make get and put calls to a subsequent cell in the linear array. If the given cell contains no data items, the given cell is configured to make a get call to retrieve a data item from the subsequent cell. In this way the data item becomes available in the given cell to immediately satisfy a subsequent get call to the given cell without having to wait for the data item to propagate to the given cell from subsequent cells in the linear array. If the given cell contains no space for additional data items, the given cell is configured to make a put call to transfer a data item to the subsequent cell.
    Type: Grant
    Filed: March 5, 2002
    Date of Patent: May 10, 2005
    Assignee: Sun Microsystems, Inc.
    Inventor: Josephus C. Ebergen
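    Sketch: a C approximation of the linear array of cells, assuming each cell holds one item and propagates get and put calls recursively to the subsequent cell; the eager refill after a get mirrors the behavior described above, while the cell count and the overflow handling are arbitrary.
    ```c
    #include <stdbool.h>
    #include <stdio.h>

    #define CELLS 8   /* length of the linear array; illustrative */

    /* Each cell holds at most one data item; the newest item stays near cell 0. */
    static int  item[CELLS];
    static bool full[CELLS];

    /* put: if this cell is already occupied, first transfer its (older) item to
     * the subsequent cell, then keep the new item locally. */
    static void put(int i, int x) {
        if (i >= CELLS) return;               /* buffer full: drop the oldest (sketch) */
        if (full[i]) put(i + 1, item[i]);
        item[i] = x;
        full[i] = true;
    }

    /* get: return the local item, then eagerly retrieve one from the subsequent
     * cell so the next get at this cell can be satisfied immediately. */
    static bool get(int i, int *out) {
        if (i >= CELLS || !full[i]) return false;
        *out = item[i];
        full[i] = false;
        int refill;
        if (get(i + 1, &refill)) { item[i] = refill; full[i] = true; }
        return true;
    }

    int main(void) {
        for (int x = 1; x <= 3; x++) put(0, x);
        int v;
        while (get(0, &v)) printf("%d ", v);   /* 3 2 1 */
        printf("\n");
        return 0;
    }
    ```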
  • Patent number: 6888847
    Abstract: A communication apparatus is provided that can relieve the processing load when packet transfer is performed in hardware. A packet transfer apparatus includes an input buffer for temporarily storing an input packet, an address acquiring section for acquiring the information needed for the transfer, a retrieval circuit for retrieving the information regarding the output with the acquired destination address as a key, a label insertion circuit for encapsulating a packet with the labels in the maximum number of stack layers M designated for a packet group having a unit of destination address, and a switch section for switching the encapsulated packet to a desired output destination port.
    Type: Grant
    Filed: July 19, 2001
    Date of Patent: May 3, 2005
    Assignee: NEC Corporation
    Inventor: Hiroshi Ueno
  • Patent number: 6886069
    Abstract: An IC card having nonvolatile and volatile memory is disclosed. The IC card dynamically generates volatile objects and accesses each volatile object using a reference address held in the nonvolatile memory. The objects are allocated addresses in order from volatile objects with shorter lifetimes to volatile objects with longer lifetimes, so as to allow garbage collection and reuse of the volatile memory.
    Type: Grant
    Filed: May 30, 2002
    Date of Patent: April 26, 2005
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Atsuyoshi Kawaura
  • Patent number: 6882656
    Abstract: A speculative transmit function, utilizing a configurable logical buffer, is implemented in a network. When a transmission is started the logical buffer is configured as a FIFO to reduce transmit latency. If a data under-run lasts for more than a fixed time interval the transmission is abandoned and the logical buffer is reconfigured as a STORE-AND-FORWARD buffer. The transmission is restarted after all transmit data is buffered.
    Type: Grant
    Filed: April 13, 2004
    Date of Patent: April 19, 2005
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: William P. Bunton, David A. Brown, John C. Krause
  • Patent number: 6862670
    Abstract: A tag address stack (TAS) for reducing the number of address latches and address comparators needed to insure data coherency in a pipelined microprocessor. The TAS is a small pool of address latches shared among data buffers in the microprocessor that stores a unique set of memory addresses that specify data in the data buffers. A correspondingly small number of address comparators compare the unique TAS addresses with a new load/store address. If the new address matches a TAS address, the new load/store operation latches a unique tag associated with the matching TAS address. Otherwise, the new address is loaded into a free latch and the new load/store latches its associated unique tag. If no latches are free, the pipeline stalls until a latch in the pool becomes free. Rather than storing the full addresses in conjunction with the data buffers, the tags are stored, which facilitates faster compares.
    Type: Grant
    Filed: August 22, 2002
    Date of Patent: March 1, 2005
    Assignee: IP-First, LLC
    Inventors: G. Glenn Henry, Rodney E. Hooker
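    Sketch: a C model of a tag address stack, assuming a four-entry pool; data buffers would latch the small returned tag rather than the full address, and the array names and the stall convention (returning -1) are illustrative.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define TAS_SIZE 4   /* small pool of shared address latches; illustrative */

    static uint32_t tas_addr[TAS_SIZE];   /* the address latches               */
    static int      tas_used[TAS_SIZE];   /* which latches currently hold data */

    /* Present a new load/store address to the tag address stack and return the
     * small tag that the data buffers latch instead of the full address.
     * Returns -1 when no latch is free, i.e. the pipeline would stall. */
    int tas_lookup(uint32_t addr) {
        for (int t = 0; t < TAS_SIZE; t++)            /* one comparator per latch */
            if (tas_used[t] && tas_addr[t] == addr)
                return t;                             /* match: reuse its tag     */
        for (int t = 0; t < TAS_SIZE; t++)
            if (!tas_used[t]) {                       /* miss: claim a free latch */
                tas_used[t] = 1;
                tas_addr[t] = addr;
                return t;
            }
        return -1;                                    /* all latches busy: stall  */
    }

    int main(void) {
        printf("%d\n", tas_lookup(0x1000));   /* 0: new latch                   */
        printf("%d\n", tas_lookup(0x2000));   /* 1: new latch                   */
        printf("%d\n", tas_lookup(0x1000));   /* 0: address matches, tag reused */
        return 0;
    }
    ```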
  • Patent number: 6859846
    Abstract: An automatically configuring storage array includes a plurality of media storage devices coupled together within a network of devices. Preferably, the network of devices is an IEEE 1394-2000 serial bus network of devices. The media storage devices are utilized to record and retrieve streams of data transmitted within the network of devices. The media storage devices communicate with each other in order to store and retrieve streams of data over multiple media storage devices, if necessary. When a record or playback command is received by any one of the media storage devices, the media storage devices send control communications between themselves to ensure that the stream of data is recorded or transmitted, as appropriate. Control of the record or transmit operation is also transferred between the media storage devices in order to utilize the full capacity of the available media storage devices. Preferably, streams of data are recorded utilizing redundancy techniques.
    Type: Grant
    Filed: August 2, 2002
    Date of Patent: February 22, 2005
    Assignees: Sony Corporation, Sony Electronics Inc.
    Inventors: Thomas Ulrich Swidler, Bruce Alan Fairman, Glen David Stone, Scott David Smyers
  • Patent number: 6845437
    Abstract: A computer system has a heap for storing objects and a card table for tracking updates to objects on the heap, typically for garbage collection purposes. In particular, the heap is divided into segments, each corresponding to a card in the card table, and any update to a segment in the heap triggers a write barrier to mark the corresponding card in the card table. It is important that this write barrier is as efficient as possible to optimize system performance. In some circumstances an object update may be made to an address outside the heap. To ensure that this still properly maps to a card in the card table, the entire memory space is folded cyclically, so that any given memory address corresponds to one, and only one, card in the card table.
    Type: Grant
    Filed: August 13, 2002
    Date of Patent: January 18, 2005
    Assignee: International Business Machines Corporation
    Inventors: Samuel David Borman, Andrew Dean Wharmby
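    Sketch: a C write barrier that folds the whole address space cyclically onto a fixed card table, so even a store outside the heap marks exactly one card; the card size, table size, and function names are assumed values for illustration.
    ```c
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CARD_SHIFT 9       /* 512-byte heap segments per card; illustrative */
    #define NUM_CARDS  1024    /* power of two, so the cyclic fold is a modulo  */

    static unsigned char card_table[NUM_CARDS];

    /* Write barrier: any reference store marks the card whose segment the updated
     * slot falls in.  Folding the entire memory space cyclically means that even
     * an address outside the heap maps to one, and only one, card. */
    static void write_barrier(void **slot, void *new_ref) {
        *slot = new_ref;                                            /* the object update */
        size_t card = ((uintptr_t)slot >> CARD_SHIFT) % NUM_CARDS;  /* cyclic fold       */
        card_table[card] = 1;                                       /* mark card dirty   */
    }

    int main(void) {
        static void *heap_slot;           /* stands in for a field of a heap object */
        write_barrier(&heap_slot, (void *)card_table);
        size_t card = ((uintptr_t)&heap_slot >> CARD_SHIFT) % NUM_CARDS;
        printf("card %zu marked: %d\n", card, card_table[card]);
        return 0;
    }
    ```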
  • Publication number: 20040210720
    Abstract: An embedded ROM-based processor system including a processor, system memory, a programmable memory, a data selector and a patch controller. The system memory includes a read-only memory (ROM). The programmable memory stores patch information including patch code and one or more patch vectors. Each patch vector includes a break-out address from the ROM and a patch-in address to a corresponding location within the patch code. The data selector has an input coupled to the system memory and an output coupled to the processor. The patch controller is operative to compare an address provided by the processor with each break-out address to determine a break-out condition, and to control the selector to transfer the processor to a corresponding location within the patch code in response to a break-out condition. The programmable memory may be volatile memory, where the patch information is loaded from an external memory during initialization.
    Type: Application
    Filed: April 17, 2003
    Publication date: October 21, 2004
    Inventors: Yuqian C. Wong, Langford M. Wasada, Daniel C. Bozich, Mitchell A. Buznitsky
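    Sketch: a C model of the patch controller's address comparison, assuming the patch vectors have already been loaded into programmable memory; the vector table contents and function names are made up for illustration.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* One patch vector: leave the ROM at break_out, continue inside the patch code. */
    typedef struct {
        uint32_t break_out;   /* address within the ROM that needs a fix        */
        uint32_t patch_in;    /* corresponding location within the patch memory */
    } patch_vector_t;

    /* Patch information as it might sit in programmable memory after being loaded
     * during initialization; the addresses are made-up examples. */
    static const patch_vector_t vectors[] = {
        { 0x0000123C, 0x20000010 },
        { 0x00004FF0, 0x20000080 },
    };

    /* Patch controller: compare the fetch address against each break-out address;
     * on a match, steer the data selector so the processor continues in the patch. */
    uint32_t next_fetch_address(uint32_t addr) {
        for (unsigned i = 0; i < sizeof vectors / sizeof vectors[0]; i++)
            if (addr == vectors[i].break_out)
                return vectors[i].patch_in;   /* break-out condition detected    */
        return addr;                          /* normal fetch from system memory */
    }

    int main(void) {
        printf("0x%08X\n", (unsigned)next_fetch_address(0x0000123C));  /* 0x20000010 */
        printf("0x%08X\n", (unsigned)next_fetch_address(0x00001240));  /* unchanged  */
        return 0;
    }
    ```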
  • Patent number: 6799253
    Abstract: Methods and apparatus for dynamically allocating space within virtual memory at run-time while substantially minimizing an associated path length are disclosed. According to one aspect of the present invention, a method for allocating virtual storage associated with a computer system includes creating a scratchpad, allocating a unit of storage space at a current location within the scratchpad, and writing a set of information into the unit of storage space such that the set of information is substantially not tracked. The scratchpad supports allocation of storage space therein, and includes a first pointer that identifies a current location within the scratchpad. Finally, the method includes moving the first pointer in the scratchpad to identify a second location within the scratchpad. The first pointer moves in the first linear space in substantially only a top-to-bottom direction.
    Type: Grant
    Filed: May 30, 2002
    Date of Patent: September 28, 2004
    Assignee: Oracle Corporation
    Inventor: Ronald J. Peterson
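    Sketch: a C bump-style scratchpad allocator in the spirit of the abstract, where a single pointer marks the current location and individual allocations are not tracked; the structure fields and the lack of a growth path are simplifying assumptions.
    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Scratchpad: a region plus a single pointer identifying the current location.
     * Writes into it are not individually tracked, and the pointer moves in only
     * one direction; the whole scratchpad is discarded at once. */
    typedef struct {
        char  *base;
        size_t size;
        size_t current;   /* the "first pointer": offset of the current location */
    } scratchpad_t;

    scratchpad_t scratchpad_create(size_t size) {
        scratchpad_t s = { malloc(size), size, 0 };
        return s;
    }

    /* Allocate a unit of storage space at the current location, then move the
     * pointer so it identifies the next location. */
    void *scratchpad_alloc(scratchpad_t *s, size_t n) {
        if (s->base == NULL || s->current + n > s->size)
            return NULL;                      /* sketch: no growth path      */
        void *p = s->base + s->current;
        s->current += n;                      /* top-to-bottom movement only */
        return p;
    }

    int main(void) {
        scratchpad_t s = scratchpad_create(4096);
        char *msg = scratchpad_alloc(&s, 16);
        if (msg != NULL) {
            snprintf(msg, 16, "untracked");
            printf("%s, used=%zu bytes\n", msg, s.current);
        }
        free(s.base);                         /* the whole scratchpad at once */
        return 0;
    }
    ```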
  • Publication number: 20040186959
    Abstract: A data memory cache unit is provided which is capable of speeding up memory access. The cache unit 117 reads and writes data to a main memory unit 131 in units of 16-byte-wide lines, and reads and writes data to an MPU 113 in units of the four-byte-wide small areas included in each line. When the MPU 113 executes a push instruction, if a cache miss takes place on a line which includes a small area holding data that should be read out to the MPU 113 (NO at S1), then the cache unit 117 opens the line (S301). If the small area into which the data sent from the MPU 113 should be written is adjacent to a line boundary on the side where the address is larger or on the side where the write is executed earlier (YES at S56), then the cache unit 117 does not execute a refill; if this small area is not adjacent to the line boundary (NO at S56), then it executes a refill (S21).
    Type: Application
    Filed: March 18, 2004
    Publication date: September 23, 2004
    Inventor: Takuji Kawamoto
  • Publication number: 20040162947
    Abstract: A variable latency cache memory is disclosed. The cache memory includes a plurality of storage elements for storing stack memory data in a first-in-first-out manner. The cache memory distinguishes between pop and load instruction requests and provides pop data faster than load data by speculating that pop data will be in the top cache line of the cache. The cache memory also speculates that stack data requested by load instructions will be in the top one or more cache lines of the cache memory. Consequently, if the source virtual address of a load instruction hits in the top of the cache memory, the data is speculatively provided faster than the case where the data is in a lower cache line or where a full physical address compare is required or where the data must be provided from a non-stack cache memory in the microprocessor, but slower than pop data.
    Type: Application
    Filed: January 16, 2004
    Publication date: August 19, 2004
    Applicant: IP-First, LLC.
    Inventor: Rodney E. Hooker
  • Publication number: 20040158678
    Abstract: A system and a method for stack-caching method frames are disclosed, which utilize an FSO (Frame Size Overflow) flag to control the execution mode of the processor. When the FSO flag is set, it indicates that the method frame of the next method is larger than the limit value of the stack cache, so the method frame is placed in a memory stack. When the FSO flag is cleared, it indicates that the method frame of the next method is smaller than the limit value of the stack cache, so the method frame is placed into the stack cache.
    Type: Application
    Filed: April 17, 2003
    Publication date: August 12, 2004
    Applicant: Industrial Technology Research Institute
    Inventor: Cheng-Che Chen
  • Patent number: 6772292
    Abstract: The present invention provides a unique stack memory system to store the execution environment, including registers in one area and local variables and parameters of functions in a different area so that every function creates its own portion in the stack. These portions include two pointers, one for referencing the portion of the previous function and the other for referencing either the portion of the next function or the free space of the stack. This structure permits the run-time creation of variables, while also enabling the deletion of both run-time and compile-time created variables without storage fragmentation.
    Type: Grant
    Filed: June 4, 2002
    Date of Patent: August 3, 2004
    Inventors: Isaak Garber, Gordon Stuart Bassen
  • Publication number: 20040148468
    Abstract: A cache memory for performing fast speculative load operations is disclosed. The cache memory caches stack data in a LIFO manner and stores both the virtual and physical address of the cache lines stored therein. The cache compares a load instruction virtual address with the virtual address of the top cache entry substantially in parallel with translation of the virtual load address into a physical load address. If the virtual addresses match, the cache speculatively provides the requested data to the load instruction from the top entry. The cache subsequently compares the physical load address with the top cache entry physical address and if they mismatch, the cache generates an exception and the processor provides the correct data. If the virtual and physical load addresses both miss in the stack cache, the data is provided by a non-stack cache that is accessed substantially in parallel with the stack cache.
    Type: Application
    Filed: January 16, 2004
    Publication date: July 29, 2004
    Applicant: IP-First, LLC.
    Inventor: Rodney E. Hooker
  • Publication number: 20040148467
    Abstract: A stack cache memory in a microprocessor and apparatus for performing fast speculative pop instructions is disclosed. The stack cache stores cache lines of data implicated by push instructions in a last-in-first-out fashion. An offset is maintained which specifies the location of the newest non-popped push data within the cache line stored in the top entry of the stack cache. The offset is updated when an instruction is encountered that updates the stack pointer register. When a pop instruction requests data, the stack cache speculatively provides data specified by the offset from the top entry to the pop instruction, before determining whether the pop instruction source address matches the address of the data provided. If the source address and the address of the data provided are subsequently determined to mismatch, then an exception is generated to provide the correct data.
    Type: Application
    Filed: January 16, 2004
    Publication date: July 29, 2004
    Applicant: IP-First, LLC.
    Inventor: Rodney E. Hooker
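    Sketch: a C model of a speculative pop from the top stack-cache entry, assuming 4-byte words and a downward-growing stack; the later address check that would raise an exception is reduced to a boolean comparison, and all names are illustrative.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define LINE_WORDS 16   /* a 64-byte cache line of 4-byte words; illustrative */

    /* Top entry of the LIFO stack cache: one cache line plus the offset of the
     * newest non-popped push data within that line. */
    typedef struct {
        uint32_t line_addr;             /* address of the cached line     */
        uint32_t data[LINE_WORDS];
        int      offset;                /* word offset of the newest push */
    } stack_cache_top_t;

    /* Speculative pop: hand back data[offset] immediately, before the pop's real
     * source address is known; also report the address that was assumed. */
    uint32_t speculative_pop(stack_cache_top_t *top, uint32_t *assumed_addr) {
        *assumed_addr = top->line_addr + 4u * (uint32_t)top->offset;
        uint32_t value = top->data[top->offset];
        top->offset++;                  /* stack grows downward, so popping moves up */
        return value;
    }

    /* Later check: a mismatch means the speculation was wrong and, in the patent,
     * an exception would be generated to provide the correct data. */
    int pop_was_correct(uint32_t assumed_addr, uint32_t real_addr) {
        return assumed_addr == real_addr;
    }

    int main(void) {
        stack_cache_top_t top = { .line_addr = 0x7F00, .offset = 2 };
        top.data[2] = 0xABCD;
        uint32_t assumed;
        uint32_t v = speculative_pop(&top, &assumed);
        printf("value=0x%X correct=%d\n", (unsigned)v, pop_was_correct(assumed, 0x7F08));
        return 0;
    }
    ```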
  • Patent number: 6765922
    Abstract: A speculative transmit function, utilizing a configurable logical buffer, is implemented in a network. When a transmission is started the logical buffer is configured as a FIFO to reduce transmit latency. If a data under-run lasts for more than a fixed time interval the transmission is abandoned and the logical buffer is reconfigured as a STORE-AND-FORWARD buffer. The transmission is restarted after all transmit data is buffered.
    Type: Grant
    Filed: September 27, 2000
    Date of Patent: July 20, 2004
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: William P. Bunton, David A. Brown, John C. Krause
  • Patent number: 6760834
    Abstract: A microprocessor may be switchable between a normal mode and a test mode for performing a test program and may include a central processing unit (CPU) for saving contextual data in a stack of the microprocessor at the time of switching to the test mode. The CPU may deliver, at the beginning of the test program and on an input/output port, contextual data present in the stack beginning with the top of the stack. The CPU may also decrement a stack pointer by a value corresponding to a number of contextual data delivered.
    Type: Grant
    Filed: November 28, 2001
    Date of Patent: July 6, 2004
    Assignee: STMicroelectronics SA
    Inventors: Franck Roche, Thierry Bouquier
  • Patent number: 6757771
    Abstract: A method and mechanism for performing an unconditional stack switch in a processor. A processor includes a processing unit coupled to a memory. The memory includes a plurality of stacks, a special mode task state segment, and a descriptor table. The processor detects interrupts and accesses a descriptor corresponding to the interrupt within the descriptor table. Subsequent to accessing the descriptor, the processor is configured to access an index within the descriptor in order to determine whether or not an interrupt stack table mechanism is enabled. In response to detecting the interrupt stack table mechanism is enabled, the index is used to select an entry in the interrupt stack table. The selected entry in the interrupt stack table indicates a stack pointer which is then used to perform an unconditional stack switch.
    Type: Grant
    Filed: August 1, 2001
    Date of Patent: June 29, 2004
    Assignee: Advanced Micro Devices, Inc.
    Inventor: David S. Christie
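    Sketch: a C model of the interrupt stack table selection described above, assuming a seven-entry table and an x86-64-style descriptor with a small IST index field; the structure layouts are simplified for illustration.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define IST_ENTRIES 7   /* entries in the interrupt stack table; x86-64 uses 7 */

    /* Special-mode task state segment holding the interrupt stack table. */
    typedef struct {
        uint64_t ist[IST_ENTRIES];   /* each entry is a ready-to-use stack pointer */
    } tss_t;

    /* Interrupt descriptor, reduced to the fields this sketch needs. */
    typedef struct {
        uint64_t handler;
        unsigned ist_index;          /* 0 = mechanism disabled, 1..7 selects an entry */
    } idt_descriptor_t;

    /* On an interrupt: if the descriptor's index is non-zero, the index selects an
     * interrupt stack table entry and the stack switch is unconditional. */
    uint64_t stack_for_interrupt(const idt_descriptor_t *d, const tss_t *tss,
                                 uint64_t current_sp) {
        if (d->ist_index != 0)
            return tss->ist[d->ist_index - 1];   /* unconditional stack switch */
        return current_sp;                       /* mechanism not enabled      */
    }

    int main(void) {
        tss_t tss = { .ist = { 0xFFFF800000010000ULL } };
        idt_descriptor_t double_fault = { .handler = 0x1000, .ist_index = 1 };
        printf("new stack pointer = 0x%llx\n",
               (unsigned long long)stack_for_interrupt(&double_fault, &tss, 0x7000));
        return 0;
    }
    ```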
  • Publication number: 20040123038
    Abstract: A circuit generally comprising a memory and a core module is disclosed. The memory may be configured as (i) a first stack having a plurality of index pointers and (ii) a table having a plurality of entries. The core module may be configured to (i) pop a first index pointer of the index pointers from the first stack in response to receiving a first command generated by a first module external to the circuit, (ii) assign a first entry of the entries identified by the first index pointer to the first module, (iii) generate an address in response to converting the first index pointer and (iv) transfer the address to the first module.
    Type: Application
    Filed: December 19, 2002
    Publication date: June 24, 2004
    Applicant: LSI LOGIC CORPORATION
    Inventors: Qasim R. Shami, Jagmohan Rajpal
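    Sketch: a C model of the index-pointer stack and the index-to-address conversion, assuming an eight-entry table, 64-byte entries, and a fixed base address; those constants and the function names are arbitrary.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define ENTRIES     8          /* entries in the table; illustrative      */
    #define ENTRY_BYTES 64         /* bytes per table entry; illustrative     */
    #define TABLE_BASE  0x10000u   /* base address of the table; illustrative */

    /* First stack: free index pointers, initialized to hold every index. */
    static int free_stack[ENTRIES] = { 7, 6, 5, 4, 3, 2, 1, 0 };
    static int top = ENTRIES;

    /* Handle a command from an external module: pop an index pointer, assign the
     * entry it identifies to that module, and convert the index into the address
     * transferred back to the module.  Returns 0 when no entry is free. */
    uint32_t assign_entry(void) {
        if (top == 0)
            return 0;
        int index = free_stack[--top];                       /* pop index pointer  */
        return TABLE_BASE + (uint32_t)index * ENTRY_BYTES;   /* address conversion */
    }

    /* Release an entry: push its index pointer back onto the stack. */
    void release_entry(uint32_t addr) {
        free_stack[top++] = (int)((addr - TABLE_BASE) / ENTRY_BYTES);
    }

    int main(void) {
        uint32_t a = assign_entry();
        printf("assigned entry at 0x%X\n", (unsigned)a);   /* 0x10000: index 0 */
        release_entry(a);
        return 0;
    }
    ```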
  • Patent number: 6748491
    Abstract: A system, computer program product and method for designing a cache. A server in a network system, e.g., file system, database system, may receive requests forming a workload. A trace may be performed on the workload to provide information such as the frequency count for each Logical Block Address (LBA) requested in the workload. The trace may then be analyzed by grouping the LBAs with the same frequency count and determining the number of groups counted in the trace. Upon analyzing the trace, an LRU-LFU cache may be designed. An LRU-LFU cache may comprise one or more stacks of cache entries where the number of stacks corresponds to the number of frequency groups counted in the trace. Each particular stack may then have a length based on the number of logical addresses with the same frequency count associated with that particular stack.
    Type: Grant
    Filed: April 19, 2001
    Date of Patent: June 8, 2004
    Assignee: International Business Machines Corporation
    Inventor: Jorge R. Rodriguez
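    Sketch: a C pass over a toy trace that groups LBAs by frequency count, the step that determines how many stacks the LRU-LFU cache gets and how long each one is; the trace contents are invented.
    ```c
    #include <stdio.h>

    /* Toy trace of requested logical block addresses; the values are invented. */
    static const int trace[] = { 5, 9, 5, 7, 9, 3, 7, 5, 9, 2 };
    #define TRACE_LEN (int)(sizeof trace / sizeof trace[0])
    #define MAX_LBA   16

    int main(void) {
        int freq[MAX_LBA] = { 0 };

        /* Trace analysis: frequency count for each LBA in the workload. */
        for (int i = 0; i < TRACE_LEN; i++)
            freq[trace[i]]++;

        /* Group LBAs sharing a frequency count: each group becomes one stack of
         * the LRU-LFU cache, with a length equal to the size of the group. */
        for (int f = 1; f <= TRACE_LEN; f++) {
            int members = 0;
            for (int lba = 0; lba < MAX_LBA; lba++)
                if (freq[lba] == f)
                    members++;
            if (members > 0)
                printf("frequency %d -> one stack, %d entries long\n", f, members);
        }
        return 0;
    }
    ```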
  • Patent number: 6745295
    Abstract: A system, computer program product and method for reconfiguring a cache. A cache array may be created with one or more stacks of cache entries based on a workload. The one or more stacks may be ordered from most frequently used to least frequently used. The cache entries in each particular stack may be ordered from most recently used to least recently used. When a cache hit occurs, the cache entry requested may be stored in the next higher level stack if the updated frequency count is associated with the next higher level stack. When a cache miss occurs, the cache entry in a least recently used stack position in the stack with the lowest number of cache hits in the one or more stack positions tracked during a particular period of time may be evicted thereby allowing the requested information to be stored in the lowest level stack.
    Type: Grant
    Filed: April 19, 2001
    Date of Patent: June 1, 2004
    Assignee: International Business Machines Corporation
    Inventor: Jorge R. Rodriguez
  • Patent number: 6745288
    Abstract: When a new control thread is initialized in a multi-thread software program, it is determined whether a like control thread has previously been instantiated. If so, a stack offset for the new control thread is set to be staggered from the stack offset for the previously instantiated like thread. By staggering the stack offsets of respective duplicate control threads, cache conflicts may be minimized.
    Type: Grant
    Filed: May 21, 2002
    Date of Patent: June 1, 2004
    Assignee: International Business Machines Corporation
    Inventor: Brent William Jacobs
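    Sketch: the staggering rule in C, assuming a 256-byte stagger and a wrap after eight duplicate threads; both constants are illustrative, since the abstract does not give values.
    ```c
    #include <stdio.h>

    #define STAGGER_BYTES  256   /* chosen to spread frames across cache sets */
    #define MAX_DUPLICATES 8     /* offsets wrap after this many like threads */

    /* Return the initial stack offset for the nth thread started with a given
     * entry function: duplicate ("like") threads get staggered offsets so their
     * hot frames do not land on the same cache sets. */
    int stack_offset_for(int nth_like_thread) {
        return (nth_like_thread % MAX_DUPLICATES) * STAGGER_BYTES;
    }

    int main(void) {
        /* Three duplicate worker threads: offsets 0, 256, 512. */
        for (int n = 0; n < 3; n++)
            printf("like thread %d -> stack offset %d bytes\n",
                   n, stack_offset_for(n));
        return 0;
    }
    ```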
  • Patent number: 6745289
    Abstract: A system for processing data includes a first set of cache memory and a second set of cache memory that are each coupled to a main memory. A compute engine coupled to the first set of cache memory transfers data from a communications medium into the first set of cache memory. The system transfers the data from the first set of cache memory to the second set of cache memory, in response to a request for the data from a compute engine coupled to the second set of cache memory. Data is transferred between the sets of cache memory without accessing main memory, regardless of whether the data has been modified. The data is also transferred directly between sets of cache memory when the data is exclusively owned by a set of cache memory or shared by sets of cache memory.
    Type: Grant
    Filed: March 25, 2002
    Date of Patent: June 1, 2004
    Assignee: Juniper Networks, Inc.
    Inventors: Frederick Gruner, Elango Ganesan, Nazar Zaidi, Ramesh Panwar
  • Publication number: 20040103252
    Abstract: A method and apparatus for protecting processing elements from buffer overflow attacks are provided. The apparatus includes a memory stack for, upon execution of a jump to subroutine, storing a return address in a first location in a stack memory. A second location separate from the stack memory for storing the address of the first location, and a third location separate from the stack memory for storing the return address itself, are included. A first comparator, upon completion of the subroutine, compares the address stored in the second location with the address of the first location in the stack memory, and a first interrupt generator provides an interrupt signal if the locations are not the same. A second comparator compares the return address stored in the third location with the return address stored in the first location in the stack memory, and a second interrupt generator generates an interrupt signal if the addresses are not the same.
    Type: Application
    Filed: February 20, 2003
    Publication date: May 27, 2004
    Applicant: Nortel Networks Limited
    Inventors: Michael C. Lee, Lawrence Dobranski
  • Patent number: 6742102
    Abstract: A microprocessor capable of suppressing the reduction in performance caused by a cache miss when a specific command is issued is disclosed. The processor according to the present invention comprises a command buffer/queue; an execution unit; a subroutine call decoder; a data cache control unit; an Addiu decoder for detecting an addiu command; a pre-fetch control section; an adder; a PAdr register; a selector; and an adder circuit. When a subroutine call occurs, the stack pointer is moved by the amount used in the subroutine, and data used in the subroutine is pre-fetched and stored in the area used by the subroutine in a data cache. Therefore, it is possible to reduce the cache miss penalties of stack accesses, which are apt to occur at the time of a subroutine call.
    Type: Grant
    Filed: March 28, 2001
    Date of Patent: May 25, 2004
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Masashi Sasahara
  • Publication number: 20040049639
    Abstract: Systems and methods for caching are provided. In one example, a system may include a spatial cache system coupled to a processing unit and to a memory. The spatial cache system may be adapted to reduce the memory latency of the processing unit. The spatial cache system may be adapted to store prefetched blocks, each stored prefetched block including a plurality of cache lines. If a cache line requested by the processing unit resides in one of the stored prefetched blocks and does not reside in the processing unit, then the spatial cache system may be adapted to provide the processing unit with the requested cache line.
    Type: Application
    Filed: November 14, 2002
    Publication date: March 11, 2004
    Inventors: Kimming So, Jin Chin Wang
  • Publication number: 20040034742
    Abstract: A stack allocation system and method is described. In one implementation, an attempt is made to allocate N bytes of data to a stack having a fixed depth. A probe size for the stack is determined. Verification is then made to ascertain whether the probe size and the N bytes of data exceed the fixed depth of the stack, prior to allocating the N bytes of data to the stack. In another implementation, the N bytes of data are allocated to a heap if the probe size and the N bytes of data exceed the fixed depth of the stack.
    Type: Application
    Filed: June 24, 2002
    Publication date: February 19, 2004
    Inventors: Scott A. Field, Jonathan David Schwartz, Clifford P. Van Dyke
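    Sketch: the probe check in C, assuming an 8 KB fixed stack and a 1 KB probe; the malloc-backed "stack" is only a stand-in so the control flow can run, and the constants are invented.
    ```c
    #include <stdio.h>
    #include <stdlib.h>

    #define STACK_DEPTH 8192u   /* fixed depth of the stack; illustrative     */
    #define PROBE_SIZE  1024u   /* probe size verified before each allocation */

    static size_t stack_used;   /* bytes of the fixed stack already in use    */

    /* Attempt to allocate n bytes on the stack: verify, prior to allocating,
     * whether the probe size plus n would exceed the fixed depth; if it would,
     * fall back to the heap as in the second implementation described above. */
    void *checked_alloc(size_t n, int *on_heap) {
        if (stack_used + PROBE_SIZE + n <= STACK_DEPTH) {
            stack_used += n;            /* stand-in for bumping a real stack pointer */
            *on_heap = 0;
        } else {
            *on_heap = 1;               /* probe would overflow the stack            */
        }
        return malloc(n);               /* sketch: one backing store either way      */
    }

    int main(void) {
        int on_heap;
        void *a = checked_alloc(4096, &on_heap);
        printf("first 4096 bytes  -> %s\n", on_heap ? "heap" : "stack");
        void *b = checked_alloc(4096, &on_heap);   /* 4096 + 1024 + 4096 > 8192 */
        printf("second 4096 bytes -> %s\n", on_heap ? "heap" : "stack");
        free(a);
        free(b);
        return 0;
    }
    ```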
  • Publication number: 20040034743
    Abstract: A method of managing a free list and ring data structure, which may be used to store journaling information, stores and modifies information describing the structure of the free list or ring data structure in a cache memory that may also be used to store information describing the structure of a queue of buffers.
    Type: Application
    Filed: August 13, 2002
    Publication date: February 19, 2004
    Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein, John Sweeney, James D. Guilford
  • Patent number: 6694420
    Abstract: An address range checking circuit capable of determining if a target address, A[M:0], is within an address space having 2N address locations beginning at a base address location, B[M:0], is disclosed, wherein the address range checking circuit does not require a large comparator circuit.
    Type: Grant
    Filed: December 5, 2001
    Date of Patent: February 17, 2004
    Assignee: STMicroelectronics, Inc.
    Inventor: Lun Bin Huang
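    Sketch: the abstract does not say how the large comparator is avoided, so this C example shows one standard technique under the assumption that the base address is aligned to the 2^N-location space: the upper bits of target and base are XORed and tested for zero instead of using a wide magnitude compare.
    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define N 12u   /* the address space holds 2^N locations; illustrative */

    /* Membership test without a wide magnitude comparator: when the base address
     * B is aligned to the 2^N-location space, the target A[M:0] lies inside the
     * space exactly when the bits above bit N-1 of A and B agree, which reduces
     * to an XOR of the upper bits followed by a zero test. */
    int in_range(uint32_t a, uint32_t b) {
        return ((a ^ b) >> N) == 0;
    }

    int main(void) {
        uint32_t base = 0x0005A000;                   /* 4 KB-aligned base    */
        printf("%d\n", in_range(0x0005A7FC, base));   /* 1: inside the space  */
        printf("%d\n", in_range(0x0005B000, base));   /* 0: outside the space */
        return 0;
    }
    ```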
  • Publication number: 20040024969
    Abstract: Methods and apparatuses are disclosed for managing a memory. In some embodiments, the apparatuses may include a processor, a memory coupled to the processor, a stack that exists in memory and contains stack data, and a memory controller coupled to the memory. The memory may further include multiple levels. The processor may issue data requests and the memory controller may adjust memory management policies between the various levels of memory based on whether the data requests refer to stack data. In this manner, data may be written to a first level of memory without allocating data from a second level of memory. Thus, memory access time may be reduced and overall power consumption may be reduced.
    Type: Application
    Filed: July 31, 2003
    Publication date: February 5, 2004
    Applicant: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Serge Lasserre, Dominique D'Inverno
  • Patent number: 6687790
    Abstract: A cache controller is intimately associated with a microprocessor CPU on a single chip. The physical address bus is routed directly from the CPU to the cache controller where it is sent to the cache tag directory table. For a cache hit, the cache address is remapped to the proper cache set address. For a cache miss, the cache address is remapped in accordance with the LRU logic to direct the cache write to the least recently used set. The cache is thereby functionally divided into associative sets, but without the need to physically divide the cache into independent banks of SRAM.
    Type: Grant
    Filed: August 14, 2001
    Date of Patent: February 3, 2004
    Assignee: Intel Corporation
    Inventors: Edward Zager, Gregory Mathews
  • Publication number: 20040019744
    Abstract: Allocation instructions and extension instructions allow a program to continue to execute even when the program requires more stack space than has been allocated to the program. The methods and systems thereby allow programs to run to completion in more situations than programs running in conventional data processing systems. As a result, the programs avoid wasting computing resources by terminating prematurely without producing results.
    Type: Application
    Filed: July 24, 2002
    Publication date: January 29, 2004
    Applicant: SUN MICROSYSTEMS, INC.
    Inventor: Michael L. Boucher
  • Patent number: 6671196
    Abstract: A CPU includes a register file including a plurality of architectural registers for storing data loaded from a primary memory for execution by the CPU. A stack cache memory coupled to the register file includes a plurality of cache lines, each of which corresponds to one of the architectural registers and implements a first-in, last-out queue for data spilled from the corresponding architectural register. Data spilled from the register file into the stack cache memory is maintained in the stack cache until subsequently restored to the register file without accessing primary memory. The stack cache memory does not participate in cache writeback operations to primary memory.
    Type: Grant
    Filed: February 28, 2002
    Date of Patent: December 30, 2003
    Assignee: Sun Microsystems, Inc.
    Inventor: Jan Civlin
  • Patent number: 6665793
    Abstract: Method and apparatus for managing access to registers that are outside a current register stack frame are disclosed. An instruction execution unit in a processor receives an instruction to be executed. A processor includes a register stack, the register stack including a plurality of register stack frames. Each of the register stack frames includes zero or more registers. One of the plurality of register stack frames is a current register stack frame. When execution of the instruction requires writing to a register referenced by the instruction, the instruction execution unit determines whether the register referenced by the instruction is within the current register stack frame. If the instruction execution unit determines that the register is not within the current register stack frame, the instruction execution unit does not execute the instruction and may, for example, generate a fault.
    Type: Grant
    Filed: December 28, 1999
    Date of Patent: December 16, 2003
    Assignee: Institute for the Development of Emerging Architectures, L.L.C.
    Inventors: Achmed Rumi Zahir, Cary A. Coutant, Carol L. Thompson, Jonathan K. Ross
  • Publication number: 20030229766
    Abstract: Mechanisms and techniques operate in a computerized device to perform a memory management technique such as garbage collection. The mechanisms and techniques operate to detect, within a storage structure associated with a thread, general memory references that reference storage locations in a general memory area such as a heap. The storage structure may be a stack utilized by the thread, which may be, for example, a Java thread, during operation of the thread in the computerized device. The system maintains a reference structure containing an association to the general memory area for each detected general memory reference within the storage structure. The system then operates a memory management technique on the general memory area for locations in the general memory area other than those for which an association to the general memory area is maintained in the reference structure, thus increasing the performance of the memory management technique.
    Type: Application
    Filed: June 6, 2002
    Publication date: December 11, 2003
    Inventors: David Dice, Alexander T. Garthwaite
  • Publication number: 20030225973
    Abstract: The present invention provides a unique stack memory system to store the execution environment, including registers in one area and local variables and parameters of functions in a different area so that every function creates its own portion in the stack. These portions include two pointers, one for referencing the portion of the previous function and the other for referencing either the portion of the next function or the free space of the stack. This structure permits the run-time creation of variables, while also enabling the deletion of both run-time and compile-time created variables without storage fragmentation.
    Type: Application
    Filed: June 4, 2002
    Publication date: December 4, 2003
    Inventors: Isaak Garber, Gordon Stuart Bassen
  • Patent number: 6654871
    Abstract: A method and a device for performing stack operations within a processing system are disclosed. First and second stack pointers point to the top of a stack and to the memory location following the top of the stack, respectively. The first stack pointer is used during pop operations and the second stack pointer is used during push operations. When a stack pointer is selected, it replaces the other stack pointer. The selected stack pointer is provided to the memory module in which the stack is implemented, and is also updated. When a pop operation is executed, the updated stack pointer points to the memory location preceding the location pointed to by the selected stack pointer; when a push operation is executed, the updated stack pointer points to the memory address following that location.
    Type: Grant
    Filed: November 9, 1999
    Date of Patent: November 25, 2003
    Assignee: Motorola, Inc.
    Inventors: Fabrice Aidan, Yoram Salant, Mark Elnekave, Leonid Tsukerman
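    Sketch: a C model of the dual stack pointers, assuming a downward-growing stack: the pop pointer addresses the top entry, the push pointer the location following it, and each operation updates both, as described above. The array size and the direction of growth are assumptions.
    ```c
    #include <stdio.h>

    #define STACK_WORDS 16

    static int stack_mem[STACK_WORDS];

    /* Two pointers: pop_ptr addresses the top of the stack, push_ptr the location
     * following the top (the next free slot in this downward-growing layout).
     * The pointer selected by an operation is sent to the memory module and then
     * updated; the update also replaces the other pointer. */
    static int pop_ptr  = STACK_WORDS;       /* empty stack */
    static int push_ptr = STACK_WORDS - 1;

    void push(int value) {
        stack_mem[push_ptr] = value;   /* selected pointer goes to memory         */
        pop_ptr  = push_ptr;           /* the slot just written is the new top    */
        push_ptr = push_ptr - 1;       /* updated pointer: location past the top  */
    }

    int pop(void) {
        int value = stack_mem[pop_ptr];   /* selected pointer goes to memory      */
        push_ptr = pop_ptr;               /* freed slot becomes the push target   */
        pop_ptr  = pop_ptr + 1;           /* updated pointer: the entry beneath   */
        return value;
    }

    int main(void) {
        push(1); push(2); push(3);
        for (int i = 0; i < 3; i++)
            printf("%d ", pop());   /* 3 2 1 */
        printf("\n");
        return 0;
    }
    ```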
  • Patent number: 6651142
    Abstract: A method and apparatus for processing data using multi-tier caching are described. In one embodiment, the method includes receiving a user request containing one or more data parameters and searching cache memories of multiple tiers until finding a parameterized result set associated with the data parameters. The multiple tiers correspond to stages in the transformation of data retrieved from one or more data sources according to the user request. Once the parameterized result set associated with the data parameters is found, it is used to create a final result set.
    Type: Grant
    Filed: May 5, 2000
    Date of Patent: November 18, 2003
    Assignee: Sagent Technology
    Inventors: Vladimir Gorelik, Glenn A. Shapland, Craig R. Powers
  • Patent number: 6643662
    Abstract: In a method, system, and apparatus for managing storage of data elements, a storage area having a first and second end is provided for storing the data elements. In the storage area, a first stack of data elements has first and second ends respectively facing the first and second ends of the storage area, and a second stack has data elements located proximate both ends of the first stack. That is, the second stack is split, with the first stack interposed between data elements of the second stack. Likewise, there may be a third and fourth stack, and so on, which are split into more than one part. The stacks increase in size toward both the first end of the storage area and the second end of the storage area, responsive to the storing of successive ones of the data elements in the respective stacks. Furthermore, the increasing in size of one of the split stacks may include increasing away from the first stack, or alternatively, increasing toward the first stack.
    Type: Grant
    Filed: September 21, 2000
    Date of Patent: November 4, 2003
    Assignee: International Business Machines Corporation
    Inventors: Steven Robert Farago, Kenneth Lee Wright
  • Patent number: 6631462
    Abstract: A method includes pushing a datum onto a stack by a first processor and popping the datum off the stack by a second processor.
    Type: Grant
    Filed: January 5, 2000
    Date of Patent: October 7, 2003
    Assignee: Intel Corporation
    Inventors: Gilbert Wolrich, Matthew J. Adiletta, William Wheeler, Daniel Cutter, Debra Bernstein