Stack Cache Patents (Class 711/132)
-
Patent number: 7694094
Abstract: A transaction method manages the storing of persistent data to be stored in at least one memory region of a non-volatile memory device before the execution of update operations that involve portions of the persistent data. Values of the persistent data are stored in a transaction stack that includes a plurality of transaction entries before the beginning of the update operations so that the memory regions involved in such an update are restored in a consistent state if an unexpected event occurs. A push extreme instruction reads from the memory cells a remaining portion of the persistent data that is not involved in the update operation, and stores the remaining portion in a subset of the transaction entries. The push extreme instruction is executed instead of a push instruction when the restoring of the portion of persistent data is not required after the unexpected event. The restoring corresponds to the values that the persistent data had before the beginning of the update operations.
Type: Grant
Filed: June 29, 2007
Date of Patent: April 6, 2010
Assignee: Incard S.A.
Inventors: Paolo Sepe, Luca Di Cosmo, Enrico Musella
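The rollback idea in this abstract can be sketched as a simple save-then-restore structure: before each update, the old value is pushed onto a transaction stack so an interrupted update sequence can be rolled back to a consistent state. All class and method names below are invented for illustration; this is a minimal sketch, not the patented mechanism.

```python
class TransactionStack:
    """Sketch: save old values before updates so they can be restored."""

    def __init__(self, memory):
        self.memory = memory          # dict: address -> value (models NV memory)
        self.entries = []             # transaction entries: (address, old value)

    def update(self, address, value):
        # Push the current value before overwriting it.
        self.entries.append((address, self.memory[address]))
        self.memory[address] = value

    def rollback(self):
        # Restore values in reverse order after an unexpected event.
        while self.entries:
            address, old = self.entries.pop()
            self.memory[address] = old

    def commit(self):
        # Updates completed successfully; discard the saved values.
        self.entries.clear()


mem = {0x10: 1, 0x11: 2}
tx = TransactionStack(mem)
tx.update(0x10, 99)
tx.rollback()                         # simulate an unexpected event
print(mem[0x10])                      # 1: restored to the pre-update value
```

A real implementation must also make the pushes themselves durable before the update begins, which is what ties the scheme to non-volatile memory.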
-
Publication number: 20100077151
Abstract: A computer system includes a data cache supported by a copy-back buffer and pre-allocation request stack. A programmable trigger mechanism inspects each store operation made by the processor to the data cache to see if a next cache line should be pre-allocated. If the store operation memory address occurs within a range defined by START and END programmable registers, then the next cache line that includes a memory address within that defined by a programmable STRIDE register is requested for pre-allocation. Bunches of pre-allocation requests are organized and scheduled by the pre-allocation request stack, and will take their turns to allow the cache lines being replaced to be processed through the copy-back buffer. By the time the processor gets to doing the store operation in the next cache line, such cache line has already been pre-allocated and there will be a cache hit, thus saving stall cycles.
Type: Application
Filed: January 24, 2008
Publication date: March 25, 2010
Applicant: NXP, B.V.
Inventor: Jan Willem Van De Waerdt
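The trigger test described here is a small address computation. The sketch below checks whether a store address falls in the [START, END) range and, stepping by STRIDE, whether the next access lands in a different cache line that should be pre-allocated. The register values and the 64-byte line size are assumptions for illustration.

```python
LINE = 64        # assumed cache line size in bytes
START = 0x1000   # programmable trigger range (assumed values)
END = 0x2000
STRIDE = 64      # programmable stride register (assumed value)


def prealloc_target(store_addr):
    """Return the line-aligned address to pre-allocate, or None."""
    if not (START <= store_addr < END):
        return None                          # store is outside the trigger range
    nxt = store_addr + STRIDE
    if nxt // LINE != store_addr // LINE:    # stride crosses into another line
        return (nxt // LINE) * LINE
    return None


print(hex(prealloc_target(0x1000)))          # 0x1040: next line requested
```

With STRIDE equal to the line size, as here, every in-range store triggers a request for the following line; smaller strides only trigger at line boundaries.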
-
Patent number: 7685601
Abstract: Methods and apparatus provide for allocating a first stack module in response to a first function call of a software program running on a processing system; and allocating a second stack module in response to a second function call of the software program, wherein the second stack module is non-contiguous with respect to the first stack module.
Type: Grant
Filed: February 28, 2005
Date of Patent: March 23, 2010
Assignee: Sony Computer Entertainment Inc.
Inventor: Tatsuya Iwamoto
-
Patent number: 7681000
Abstract: A system for protecting supervisor mode data from user code in a register window architecture of a processor is provided. The system, when transitioning from supervisor mode to user mode, sets at least one invalid window bit in the invalid window mask of the architecture in addition to the invalid window bit set for the reserved window of the invalid window mask. The additional bit is set for a transition window between supervisor and user data windows.
Type: Grant
Filed: July 10, 2006
Date of Patent: March 16, 2010
Assignee: Silverbrook Research Pty Ltd
Inventors: David William Funk, Barry Gauke, Kia Silverbrook
-
Publication number: 20090307431
Abstract: Methods, software media, compilers and programming techniques are described for creating copyable stack-based closures, such as a block, for languages which allocate automatic or local variables on a stack memory structure. In one exemplary method, a data structure of the block is first written to the stack memory structure, and this may be the automatic default operation, at run-time, for the block; then, a block copy instruction, added explicitly (in one embodiment) by a programmer during creation of the block, is executed to copy the block to a heap memory structure. The block includes a function pointer that references a function which uses data in the block.
Type: Application
Filed: September 30, 2008
Publication date: December 10, 2009
Inventors: Gerald Blaine Garst, Jr., William Bumgarner, Fariborz Jahanian, Christopher Arthur Lattner
-
Patent number: 7617264
Abstract: A garbage collector that operates in multiple threads divides a generation of a garbage-collected heap into heap sections, with which it associates respective remembered sets of locations where references to objects in those heap sections have been found. When such a heap section comes up for collection, each of a plurality of parallel garbage-collector threads that is processing its remembered set maintains a separate “popularity”-indicating count map, which includes an entry for each of a set of segments into which the collector has divided that heap section. The thread increments an entry in its count map each time it finds a reference to an object in the associated segment. If an object is located in a segment for which the associated count-map entry has exceeded a threshold, the thread evacuates the object in a manner different from that in which it evacuates objects not thus found to be popular.
Type: Grant
Filed: April 15, 2004
Date of Patent: November 10, 2009
Assignee: Sun Microsystems, Inc.
Inventor: Alexander T. Garthwaite
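The count map described in this abstract amounts to one counter per fixed-size segment of the heap section, bumped for each reference found. A minimal sketch, with the segment size and threshold chosen arbitrarily for illustration:

```python
SEGMENT_SIZE = 4096   # assumed segment granularity
THRESHOLD = 2         # assumed popularity threshold


class CountMap:
    """Per-thread popularity count map over one heap section."""

    def __init__(self, section_base, section_size):
        self.base = section_base
        self.counts = [0] * (section_size // SEGMENT_SIZE)

    def note_reference(self, obj_addr):
        # One increment per reference found to an object in this segment.
        self.counts[(obj_addr - self.base) // SEGMENT_SIZE] += 1

    def is_popular(self, obj_addr):
        # Objects in over-threshold segments get special evacuation.
        return self.counts[(obj_addr - self.base) // SEGMENT_SIZE] > THRESHOLD


cm = CountMap(0x10000, 16 * SEGMENT_SIZE)
for _ in range(3):
    cm.note_reference(0x10010)       # three references into segment 0
print(cm.is_popular(0x10010))        # True: count 3 exceeds threshold 2
```

Counting per segment rather than per object keeps the map small and cheap to update while a thread scans its remembered set.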
-
Patent number: 7615857
Abstract: A chip multiprocessor die supports optional stacking of additional dies. The chip multiprocessor includes a plurality of processor cores, a memory controller, and stacked cache interface circuitry. The stacked cache interface circuitry is configured to attempt to retrieve data from a stacked cache die if the stacked cache die is present but not if the stacked cache die is absent. In one implementation, the chip multiprocessor die includes a first set of connection pads for electrically connecting to a die package and a second set of connection pads for communicatively connecting to the stacked cache die if the stacked cache die is present. Other embodiments, aspects and features are also disclosed.
Type: Grant
Filed: February 14, 2007
Date of Patent: November 10, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Norman Paul Jouppi
-
Patent number: 7606977
Abstract: A multi-threaded processor adapted to couple to external memory comprises a controller and data storage operated by the controller. The data storage comprises a first portion and a second portion, wherein only one of the first or second portions is active at a time, the non-active portion being unusable. When the active portion does not have sufficient capacity for additional data to be stored therein, the other portion becomes the active portion. Upon a thread switch from a first thread to a second thread, only one of the first or second portions is cleaned to the external memory if one of the first or second portions does not contain valid data.
Type: Grant
Filed: July 25, 2005
Date of Patent: October 20, 2009
Assignee: Texas Instruments Incorporated
Inventors: Jean-Philippe Lesot, Gilbert Cabillic
-
Patent number: 7590739
Abstract: A method and mechanism for a distributed on-demand computing system. The system automatically provisions distributed computing servers with customer application programs. The parameters of each customer application program are taken into account when a server is selected for hosting the program. The system monitors the status and performance of each distributed computing server. The system provisions additional servers when traffic levels exceed a predetermined level for a customer's application program and, as traffic demand decreases to a predetermined level, servers can be un-provisioned and returned back to a server pool for later provisioning. The system tries to fill up one server at a time with customer application programs before dispatching new requests to another server. The customer is charged a fee based on the usage of the distributed computing servers.
Type: Grant
Filed: March 24, 2005
Date of Patent: September 15, 2009
Assignee: Akamai Technologies, Inc.
Inventors: Eric Sven-Johan Swildens, Richard David Day, Vikas Garg, Zaide “Edward” Liu
-
Publication number: 20090193194
Abstract: A method and apparatus for eliminating, in a multi-node data handling system, contention for exclusivity of lines in cache memory through improved management of system buses, processor cross-invalidate stacks, and the system operations that can lead to these requested cache operations being rejected.
Type: Application
Filed: January 29, 2008
Publication date: July 30, 2009
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Craig R. Walters
-
Patent number: 7558935
Abstract: Methods, systems, and articles of manufacture consistent with the present invention optimize allocation of items to a stack memory instead of a heap memory. It is determined whether an item to be placed on the heap memory escapes from the scope of the item's allocator, and whether the item survives the item's allocator. The item is allocated to the stack memory responsive to the item not escaping from the scope of the item's allocator and not surviving the item's allocator.
Type: Grant
Filed: May 4, 2004
Date of Patent: July 7, 2009
Assignee: Sun Microsystems, Inc.
Inventors: Michael L. Boucher, Lawrence A. Crowl, Terrence C. Miller
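The decision rule in this abstract reduces to a two-predicate test: an item may live on the stack only if it neither escapes the allocator's scope nor outlives the allocator. A minimal sketch, with invented function and parameter names (the real escape analysis that computes these predicates is the hard part and is not shown):

```python
def choose_allocation(escapes_scope, survives_allocator):
    """Return 'stack' or 'heap' given the two analysis results."""
    if not escapes_scope and not survives_allocator:
        return "stack"                # safe: lifetime bounded by the allocator
    return "heap"                     # reference may outlive the stack frame


print(choose_allocation(False, False))  # stack
print(choose_allocation(True, False))   # heap: a reference escapes the scope
print(choose_allocation(False, True))   # heap: the item outlives its allocator
```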
-
Publication number: 20090150616
Abstract: A system is provided that includes processing logic and a memory management module. The memory management module is configured to allocate a portion of memory space for a thread stack unit and to partition the thread stack unit to include a stack and a thread-local storage region. The stack is associated with a thread that is executable by the processing logic and the thread-local storage region is adapted to store data associated with the thread. The portion of memory space allocated for the thread stack unit is based on a size of the thread-local storage region that is determined when the thread is generated and a size of the stack.
Type: Application
Filed: December 10, 2007
Publication date: June 11, 2009
Applicant: International Business Machines Corporation
Inventors: Liam James Finnie, Lan Pham, Matthew Albert Huras
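The sizing rule described here is just the sum of the two regions, computed at thread creation. A sketch of one possible layout, placing the thread-local storage (TLS) at the base and the stack above it; the layout choice and all names are assumptions for illustration:

```python
def make_thread_stack_unit(stack_size, tls_size):
    """Allocate one unit sized stack_size + tls_size and partition it."""
    unit = bytearray(stack_size + tls_size)
    tls_region = memoryview(unit)[:tls_size]       # TLS at the base
    stack_region = memoryview(unit)[tls_size:]     # stack in the remainder
    return unit, tls_region, stack_region


unit, tls, stack = make_thread_stack_unit(8192, 256)
print(len(unit), len(tls), len(stack))             # 8448 256 8192
```

Co-locating TLS with the stack in a single allocation means one reservation per thread instead of two, at the cost of fixing the TLS size when the thread is generated.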
-
Patent number: 7536510
Abstract: A cache read request is received at a cache comprising a plurality of data arrays, each of the data arrays comprising a plurality of ways. Cache line data from each most recently used way of each of the plurality of data arrays is selected in response to the cache read request, and a first data of the received cache line data is selected from the most recently used way of the cache. An execution of an instruction is stalled if data identified by the cache read request is not present in the cache line data from the most recently used way of the cache. A second data from a most recently used way of one of the plurality of data arrays other than the most recently used data array is selected as comprising data identified by the cache read request. The second data is provided for use during the execution of the instruction.
Type: Grant
Filed: October 3, 2005
Date of Patent: May 19, 2009
Assignee: Advanced Micro Devices, Inc.
Inventor: Stephen P. Thompson
-
Patent number: 7533217
Abstract: An optical disc apparatus includes a buffer memory for shared use between write data and read data. Upon receipt of a write command from an external host device, a system controller stores, in a buffer memory, write data attached to the write command, and, after storing the write data, supplies the write data to an optical pickup. When a read command is received after the receipt of the write command, execution of the write command is interrupted at a predetermined time, to store read data retrieved from the optical disc in a recorded area of the buffer memory. Further, a segment of the write data is recorded, and pre-read data retrieved from the optical disc is also stored in the area of the buffer memory, to improve a data hit rate.
Type: Grant
Filed: November 9, 2005
Date of Patent: May 12, 2009
Assignee: TEAC Corporation
Inventor: Kaname Hayasaka
-
Patent number: 7512738
Abstract: Provided are a method, system, and program for allocating call stack frame entries at different memory levels to functions in a program. Functions in a program accessing state information stored in call stack frame entries are processed. Call stack frame entries are allocated to the state information for each function, wherein the call stack frame entries span multiple memory levels, and wherein one function is capable of being allocated stack entries in multiple memory levels.
Type: Grant
Filed: September 30, 2004
Date of Patent: March 31, 2009
Assignee: Intel Corporation
Inventors: Vinod K. Balakrishnan, Ruiqi Lian, Junchao Zhang, Dz-ching Ju
-
Patent number: 7500060
Abstract: A hardware stack (HSTACK) structure using programmable logic can include a look-up table (LUT) random access memory (RAM) circuit and circuitry within the LUT RAM circuit for propagating data upwards and downwards. The hardware structure can be arbitrarily assembled into a larger structure by adding stacks to a top portion, a bottom portion, or a portion between the top portion and the bottom portion. The hardware stack structure can further include a virtual stack (VSTACK) structure coupled to the HSTACK structure within a field programmable gate array (FPGA) fabric. The VSTACK can be arranged in the form of an appended peripheral memory and cache control for virtual extension to an HSTACK address space. The hardware stack structure can further include an auxiliary reset circuit.
Type: Grant
Filed: March 16, 2007
Date of Patent: March 3, 2009
Assignee: XILINX, Inc.
Inventors: James B. Anderson, Sean W. Kao, Arifur Rahman
-
Patent number: 7454572
Abstract: A stack caching system with a swapping mechanism for reducing software resource usage is disclosed. The stack caching system utilizes a swap memory with higher access speed to increase the performance of the stack caching system. The stack caching system moves at least one first stack block, which is the most frequently accessed stack block in the system, from a first memory to the swap memory. Then, the stack caching system controls a pointer originally pointing to the first stack block to point to a corresponding address in the second memory. When the stack caching system accesses the first stack block, the stack caching system is directed to the second memory to access the first stack block.
Type: Grant
Filed: November 8, 2005
Date of Patent: November 18, 2008
Assignee: Mediatek Inc.
Inventor: Chih-Hsing Chu
-
Publication number: 20080229026
Abstract: This invention discloses an extended memory comprising a first tag RAM for storing one or more tags corresponding to data stored in a first storage module, and a second tag RAM for storing one or more tags corresponding to data stored in a second storage module, wherein the first and second storage modules are separated and independent memory units, the numbers of bits in the first and second tag RAMs differ, and an address is concurrently checked against both the first and second tag RAMs using a first predetermined bit field of the address for checking against a first tag from the first tag RAM and using a second predetermined bit field of the address for checking against a second tag from the second tag RAM.
Type: Application
Filed: March 15, 2007
Publication date: September 18, 2008
Inventor: Shine Chung
-
Publication number: 20080195818
Abstract: The present invention relates to the management of memory in environments of limited resources, such as those found, for example, in a smart card. More particularly, the invention relates to a method of managing the data storage resources of volatile memory, the object of which is to reduce the size of volatile memory necessary to implement the stack of the system, and thereby to reserve more volatile memory for other needs or procedures of the system or of other applications. When the stack grows and comes close to its established limit, the system carries out a transfer of a stack block located in the volatile memory to an area of non-volatile memory; this transfer allows a compression of the stack, increasing its size in a virtual manner.
Type: Application
Filed: August 10, 2004
Publication date: August 14, 2008
Applicant: SANDISK IL LTD.
Inventor: Javier Canis Robles
-
Publication number: 20080189488
Abstract: A computer implemented method, apparatus, and computer usable program code for monitoring and managing a stack. Usage of stack space is monitored for a plurality of threads. Usage of stack space is compared to a policy to form a comparison. An action is selectively initiated based on the comparison to the policy.
Type: Application
Filed: October 9, 2006
Publication date: August 7, 2008
Inventors: Jimmie Earl DeWitt, Riaz Y. Hussain, Frank Eliot Levine
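The monitor-compare-act loop described above can be sketched as a per-thread check against policy thresholds. The fractions, action names, and policy shape below are invented for illustration:

```python
# Assumed policy: warn at 75% of reserved stack, act at 95%.
POLICY = {"warn_fraction": 0.75, "limit_fraction": 0.95}


def check_stack(used, reserved):
    """Compare one thread's stack usage to the policy; pick an action."""
    frac = used / reserved
    if frac >= POLICY["limit_fraction"]:
        return "limit_exceeded"       # e.g. grow the stack or stop the thread
    if frac >= POLICY["warn_fraction"]:
        return "warn"                 # e.g. log the thread for inspection
    return "ok"


threads = {"t1": (1000, 8192), "t2": (8000, 8192)}
for name, (used, reserved) in threads.items():
    print(name, check_stack(used, reserved))   # t1 ok, t2 limit_exceeded
```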
-
Publication number: 20080172530
Abstract: An apparatus and method for managing stacks for efficient memory usage. The apparatus includes a fault cause analysis unit to recognize a page fault caused by a marking page; a control unit to set the marking page, to request compression of a first stack page depending on whether a page fault occurs, to release a mapping of a second stack page that becomes empty due to the compression, and to return the second stack page; a memory allocation unit to receive the second stack page and to allocate a new stack page to the control unit upon completion of the compression; and a compression unit to compress the first stack page at the request of the control unit.
Type: Application
Filed: July 31, 2007
Publication date: July 17, 2008
Applicant: Samsung Electronics Co. Ltd.
Inventors: Min-chan Kim, Gyong-jin Joung, Young-jun Jang
-
Publication number: 20080126711
Abstract: A method and system of executing stack-based memory reference code. At least some of the illustrated embodiments are methods comprising waking a computer system from a reduced power operational state in which a memory controller loses at least some configuration information, executing memory reference code that utilizes a stack (wherein the memory reference code configures the main memory controller), and passing control of the computer system to an operating system. The time between executing a first instruction after waking the computer system and passing control to the operating system takes less than 200 milliseconds.
Type: Application
Filed: September 25, 2006
Publication date: May 29, 2008
Inventors: Louis B. Hobson, Mark A. Piwonka
-
Patent number: 7380245
Abstract: A technique for detecting corruption associated with a stack in a storage device is disclosed. In one embodiment, the technique is realized by having a processing device insert a quantity of information adjacent to the stack in the storage device, wherein the quantity of information has an initial state. The processing device then inspects the quantity of information so as to identify any deviation from the initial state and thereby detect corruption associated with the stack in the storage device.
Type: Grant
Filed: November 23, 1998
Date of Patent: May 27, 2008
Assignee: Samsung Electronics Co., Ltd.
Inventor: Steven Eugene Lovette
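The check described in this abstract is the familiar guard-value (canary) idea: write a known pattern adjacent to the stack region, and treat any later deviation from the initial state as evidence the stack overran it. A minimal sketch; the pattern value and layout are arbitrary choices for illustration:

```python
CANARY = b"\xde\xad\xbe\xef"          # the known initial state (arbitrary)


def place_canary(buf, stack_end):
    # Write the known pattern just past the stack region.
    buf[stack_end:stack_end + len(CANARY)] = CANARY


def stack_corrupted(buf, stack_end):
    # Any deviation from the initial state means the stack overran it.
    return bytes(buf[stack_end:stack_end + len(CANARY)]) != CANARY


mem = bytearray(64)                   # stack occupies mem[0:32]
place_canary(mem, 32)
print(stack_corrupted(mem, 32))       # False: canary intact
mem[33] = 0                           # simulate an overflow past the stack
print(stack_corrupted(mem, 32))       # True: deviation detected
```

The check is probabilistic in the sense that an overwrite which happens to reproduce the pattern goes undetected, which is why real implementations often randomize the guard value.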
-
Patent number: 7376786
Abstract: An intelligent disk drive is described which includes means for prioritizing execution of commands by maintaining an associated priority with each command in a pending command list and executing the highest priority commands first. The command structure according to the invention includes a field in which the host specifies the priority of the command. One embodiment uses a plurality of stacks which are used to sort the commands according to priority. Another embodiment uses a list structure. In an alternative embodiment the drive has means for ensuring that designated data written to the disk is not subject to fragmentation. The disk drive embodiments described above can be implemented in an intelligent disk drive with distributed processing capability.
Type: Grant
Filed: February 28, 2005
Date of Patent: May 20, 2008
Assignee: Hitachi Global Storage Technologies Netherlands B.V.
Inventor: Larry Lynn Williams
-
Publication number: 20080052468
Abstract: A representative computer-implemented method of detecting a buffer overflow condition is described. In accordance with the method, a destination address for a computer process' desired write operation is received and a determination is made as to whether the destination address is within an illegitimate writable memory segment within the process' virtual address space (VAS). If so, the process is preferably alerted of the potential buffer overflow condition. A determination may also be made as to whether the destination address is legitimate, in which case the process may be informed of the memory segment which corresponds to the destination address.
Type: Application
Filed: December 12, 2005
Publication date: February 28, 2008
Applicant: Sytex, Inc.
Inventors: William R. Speirs, Eric B. Cole
-
Patent number: 7337275
Abstract: A method of managing a free list and ring data structure, which may be used to store journaling information, by storing and modifying information describing a structure of the free list or ring data structure in a cache memory that may also be used to store information describing a structure of a queue of buffers.
Type: Grant
Filed: August 13, 2002
Date of Patent: February 26, 2008
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein, John Sweeney, James D. Guilford
-
Patent number: 7330937
Abstract: A method is disclosed that comprises determining whether a data subsystem is to operate as cache memory or as scratchpad memory in which line fetches from external memory are suppressed, and programming a control bit to cause the data subsystem to be operated as either a cache or scratchpad memory depending on the determination. Other embodiments are disclosed herein as well.
Type: Grant
Filed: April 5, 2004
Date of Patent: February 12, 2008
Assignee: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Serge Lasserre, Dominique D'Inverno, Maija Kuusela, Gilbert Cabillic, Jean-Philippe Lesot, Michel Banâtre, Jean-Paul Routeau, Salam Majoul, Frédéric Parain
-
Patent number: 7290176
Abstract: In various embodiments of the present invention, debugging and program-behavior-analysis software can reconstruct register-based processor states for nested routine calls from the backing-store memory employed by a modern processor, and by processors of similar architectures, to automatically spill and restore register values via a register stack engine. Sufficient information resides in the backing-store memory to reconstruct the stack frames for all nested routines. However, reconstructing the stack frames from the backing-store memory depends on identifying stored register values in the backing-store memory containing saved values of the previous-frame-marker application register. Various embodiments of the present invention employ a set of heuristic tests to evaluate stored values in the backing-store memory in order to identify those values corresponding to the stored contents of the previous-frame-marker application register.
Type: Grant
Filed: July 31, 2004
Date of Patent: October 30, 2007
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Robert D. Gardner
-
Publication number: 20070214322
Abstract: An apparatus and method for managing stacks in a virtual machine are provided. The apparatus includes a first memory which checks the space of a stack chunk and allocates a frame pointer if at least one of a push and a pop are performed in a virtual machine; and a second memory which is connected with an external bus, and stores frame pointers copied from the first memory via the external bus.
Type: Application
Filed: February 15, 2007
Publication date: September 13, 2007
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Young-ho Choi, Hyo-jung Song, Jung-pil Choi
-
Patent number: 7254702
Abstract: An automatically configuring storage array includes a plurality of media storage devices coupled together within a network of devices. Preferably, the network of devices is an IEEE 1394-2000 serial bus network of devices. The media storage devices are utilized to record and retrieve streams of data transmitted within the network of devices. The media storage devices communicate with each other in order to store and retrieve streams of data over multiple media storage devices, if necessary. When a record or playback command is received by any one of the media storage devices, the media storage devices send control communications between themselves to ensure that the stream of data is recorded or transmitted, as appropriate. Control of the record or transmit operation is also transferred between the media storage devices in order to utilize the full capacity of the available media storage devices. Preferably, streams of data are recorded utilizing redundancy techniques.
Type: Grant
Filed: January 7, 2005
Date of Patent: August 7, 2007
Assignees: Sony Corporation, Sony Electronics, Inc.
Inventors: Thomas Ulrich Swidler, Bruce Alan Fairman, Glen David Stone, Scott David Smyers
-
Patent number: 7219196
Abstract: In order to manage, in the interrupt stage, a memory stack associated with a microcontroller according to a Program Counter signal and to a Condition Code Register signal that can be contained in respective registers, a first part of memory stack is provided which comprises a register for the Program Counter signal, and a second part of memory stack consisting of a bank of memory elements equal in number to the number of bits of the Condition Code Register signal for the number of the interrupts of the microcontroller. The two parts of stack are made to function in parallel by respective stack-pointer signals.
Type: Grant
Filed: July 17, 2003
Date of Patent: May 15, 2007
Assignee: STMicroelectronics S.r.l.
Inventors: Santi Carlo Adamo, Edmondo Gangi
-
Patent number: 7203797
Abstract: A processor preferably comprises a processing core that generates memory addresses to access a main memory and on which a plurality of methods operate. Each method uses its own set of local variables. The processor also includes a cache subsystem comprising a multi-way set associative cache and a data memory that holds a contiguous block of memory defined by an address stored in a register, wherein local variables are stored in said data memory.
Type: Grant
Filed: July 31, 2003
Date of Patent: April 10, 2007
Assignee: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
-
Patent number: 7197600
Abstract: The present invention provides a method for use with program overlays, wherein code segments, along with data segments pertaining to the code segments, are transferred into a receiving memory segment. Program code is separated into common code and overlay code. The overlay code is then broken down into segments according to functionality, and the need to create segments that will fit into a receiving memory segment. An overlay control file is created for each segment, then a wrapper file. Linker command files are created for the common area and code area. The wrapper files and linker command files are used to create a common image file for the code/data. The common image is used to produce overlay sections, which are then concatenated together into one file. This one file is then loaded into memory for transfer of the overlays to a receiving memory area.
Type: Grant
Filed: October 24, 2001
Date of Patent: March 27, 2007
Assignee: Broadcom Corporation
Inventors: Dave Hylands, Craig Hemsing, Andrew Jones, Henry W. H. Li, Susan Pullman
-
Patent number: 7191291
Abstract: A variable latency cache memory is disclosed. The cache memory includes a plurality of storage elements for storing stack memory data in a first-in-first-out manner. The cache memory distinguishes between pop and load instruction requests and provides pop data faster than load data by speculating that pop data will be in the top cache line of the cache. The cache memory also speculates that stack data requested by load instructions will be in the top one or more cache lines of the cache memory. Consequently, if the source virtual address of a load instruction hits in the top of the cache memory, the data is speculatively provided faster than the case where the data is in a lower cache line or where a full physical address compare is required or where the data must be provided from a non-stack cache memory in the microprocessor, but slower than pop data.
Type: Grant
Filed: January 16, 2004
Date of Patent: March 13, 2007
Assignee: IP-First, LLC
Inventor: Rodney E. Hooker
-
Patent number: 7174427
Abstract: A device and method for handling Multiprotocol Label Switching (MPLS) label stacks. An incoming label mapping (ILM) table is stored in a first memory. A received packet's label stack is accessed, and an entry corresponding to a top label of the stack is read from the ILM table. A number of other entries are also read from the ILM table, and these other entries are cached in a second memory.
Type: Grant
Filed: December 5, 2003
Date of Patent: February 6, 2007
Assignee: Intel Corporation
Inventor: Kannan Babu Ramia
-
Patent number: 7167954
Abstract: Systems and methods for caching are provided. In one example, a system may include a spatial cache system coupled to a processing unit and to a memory. The spatial cache system may be adapted to reduce the memory latency of the processing unit. The spatial cache system may be adapted to store prefetched blocks, each stored prefetched block including a plurality of cache lines. If a cache line requested by the processing unit resides in one of the stored prefetched blocks and does not reside in the processing unit, then the spatial cache system may be adapted to provide the processing unit with the requested cache line.
Type: Grant
Filed: November 14, 2002
Date of Patent: January 23, 2007
Assignee: Broadcom Corporation
Inventors: Kimming So, Jin Chin Wang
-
Patent number: 7162586
Abstract: A system comprises a main stack, a local data stack and plurality of flags. The main stack comprises a plurality of entries and is located outside a processor's core. The local data stack is coupled to the main stack and is located internal to the processor's core. The local data stack has a plurality of entries that correspond to entries in the main stack. Each flag is associated with a corresponding entry in the local data stack and indicates whether the data in the corresponding local data stack entry is valid. The system performs two instructions. One instruction synchronizes the main stack to the local data stack and invalidates the local data stack, while the other instruction synchronizes the main stack without invalidating the local data stack.
Type: Grant
Filed: July 31, 2003
Date of Patent: January 9, 2007
Assignee: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Serge Lasserre
-
Patent number: 7142541
Abstract: According to some embodiments, routing information for an information packet is determined in accordance with a destination address and a device address.
Type: Grant
Filed: August 9, 2002
Date of Patent: November 28, 2006
Assignee: Intel Corporation
Inventors: Alok Kumar, Raj Yavatkar
-
Microprocessor and apparatus for performing fast speculative pop operation from a stack memory cache
Patent number: 7139876
Abstract: A stack cache memory in a microprocessor and apparatus for performing fast speculative pop instructions is disclosed. The stack cache stores cache lines of data implicated by push instructions in a last-in-first-out fashion. An offset is maintained which specifies the location of the newest non-popped push data within the cache line stored in the top entry of the stack cache. The offset is updated when an instruction is encountered that updates the stack pointer register. When a pop instruction requests data, the stack cache speculatively provides data specified by the offset from the top entry to the pop instruction, before determining whether the pop instruction source address matches the address of the data provided. If the source address and the address of the data provided are subsequently determined to mismatch, then an exception is generated to provide the correct data.
Type: Grant
Filed: January 16, 2004
Date of Patent: November 21, 2006
Assignee: IP-First, LLC
Inventor: Rodney E. Hooker
-
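The offset bookkeeping in this abstract can be sketched in software: the offset tracks the newest non-popped push data within the top cache line, a push moves it downward (allocating a new top entry on wrap), and a pop speculatively supplies data at the current offset before the address check resolves. The 64-byte line, 4-byte word, and downward-growing stack are assumptions for illustration:

```python
LINE, WORD = 64, 4                    # assumed line and operand sizes


class TopOffset:
    """Offset of the newest non-popped push data in the top cache line."""

    def __init__(self):
        self.offset = LINE            # LINE means the top line holds no data

    def push(self):
        if self.offset == 0:          # top line full: allocate a new top entry
            self.offset = LINE
        self.offset -= WORD           # stack grows downward within the line
        return self.offset

    def pop(self):
        spec = self.offset            # speculative source of the pop data
        self.offset += WORD
        entry_consumed = self.offset >= LINE
        if entry_consumed:            # top entry exhausted: it gets popped
            self.offset = LINE
        return spec, entry_consumed


t = TopOffset()
t.push()                              # offset 60
t.push()                              # offset 56
print(t.pop())                        # (56, False): data served from offset 56
```

The hardware additionally verifies the pop's source address against the address of the data already provided, raising an exception with the correct data on a mismatch; that check is outside this sketch.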
Patent number: 7139877. Abstract: A cache memory for performing fast speculative load operations is disclosed. The cache memory caches stack data in a LIFO manner and stores both the virtual and physical addresses of the cache lines stored therein. The cache compares a load instruction virtual address with the virtual address of the top cache entry substantially in parallel with translation of the virtual load address into a physical load address. If the virtual addresses match, the cache speculatively provides the requested data to the load instruction from the top entry. The cache subsequently compares the physical load address with the top cache entry physical address and, if they mismatch, generates an exception and the processor provides the correct data. If the virtual and physical load addresses both miss in the stack cache, the data is provided by a non-stack cache that is accessed substantially in parallel with the stack cache. Type: Grant. Filed: January 16, 2004. Date of Patent: November 21, 2006. Assignee: IP-First, LLC. Inventor: Rodney E. Hooker
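The parallel virtual-tag check and translation described in patent 7139877 can be modeled roughly as below. `translate`, `speculative_load`, the page size, and the entry layout are all invented for illustration; a real design overlaps the TLB lookup with the tag compare in hardware rather than calling them one after the other:

```python
PAGE = 4096

def translate(vaddr, page_table):
    """Stand-in for the TLB: map a virtual address to a physical one."""
    return page_table[vaddr // PAGE] * PAGE + vaddr % PAGE

def speculative_load(vaddr, top_entry, page_table, memory):
    """top_entry models the top stack-cache entry: the cached line's
    virtual address, physical address, and data."""
    if vaddr == top_entry["vaddr"]:
        data = top_entry["data"]               # speculate: hand data over now
        paddr = translate(vaddr, page_table)   # translation finishes afterwards
        if paddr != top_entry["paddr"]:
            return memory[paddr], "exception"  # mis-speculation: correct the data
        return data, "fast-hit"
    paddr = translate(vaddr, page_table)       # miss: fall back to the normal cache
    return memory.get(paddr), "non-stack-cache"
```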
-
Patent number: 7136990. Abstract: A method and apparatus for performing a fast pop operation from a random access cache are disclosed. The apparatus includes a stack onto which the row and way of push-instruction data stored into the cache are pushed. When a pop instruction is encountered, the apparatus uses the row and way values at the top of the stack to access the cache. In one embodiment, an offset of the most recent push data within the current cache line specified by the top row and way values is maintained. The offset is updated on each push or pop. If a pop overflows the offset, the top entry of the stack is popped. If a push underflows the offset, the row and way values are pushed onto the stack. The row, way, and offset values are subsequently compared with the actual pop address to determine whether incorrect data was provided. Type: Grant. Filed: January 16, 2004. Date of Patent: November 14, 2006. Assignee: IP-First, LLC. Inventor: Rodney E. Hooker
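The row/way side stack and offset bookkeeping in patent 7136990 can be modeled as below. `FastPopTracker`, the 4-word line size, and the downward-growing offset convention are assumptions made for this sketch, not details from the patent:

```python
WORDS_PER_LINE = 4   # illustrative cache-line size, in words

class FastPopTracker:
    """Side stack recording the (row, way) of each cache line that received
    push data, plus the word offset of the newest push in the top line."""

    def __init__(self):
        self.side = []     # entries: (row, way)
        self.offset = 0    # 0 means the top line is full / no line open

    def push(self, row, way):
        if not self.side or self.offset == 0:  # crossed into a new cache line
            self.side.append((row, way))
            self.offset = WORDS_PER_LINE
        self.offset -= 1                       # stack grows downward in the line

    def pop(self):
        row, way = self.side[-1]
        hint = (row, way, self.offset)         # used for the speculative access
        self.offset += 1
        if self.offset == WORDS_PER_LINE:      # popped past the top line
            self.side.pop()
            self.offset = 0
        return hint
```

The returned hint is what the apparatus would feed to the cache's row/way port; the later compare against the actual pop address (omitted here) catches any mismatch.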
-
Patent number: 7130972. Abstract: A function execution method, function execution apparatus, computer program, and recorded medium execute a program that stacks, in a stack area of memory, a function record area according to the format of an invoked function (a function called by an invoking function that contains a call to another function), invokes and executes the invoked function, and then discards the stacked function record area. When analysis of the invoking function's byte-code (for example, by a JVM) determines that it is a tail-recursive invoking function, a predetermined alternative function is executed in its place, so that the alternative function, acting as the invoking function, invokes the invoked function reusing the function record area already used for executing the invoking function, and the invoked function is then executed. Type: Grant. Filed: July 25, 2003. Date of Patent: October 31, 2006. Assignee: Kansai Technology Licensing Organization Co., Ltd. Inventors: Akishige Yamamoto, Taiichi Yuasa
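The frame-reuse idea in patent 7130972 — a tail call reusing the caller's record area instead of stacking a new one — can be imitated in Python with a trampoline, since Python itself does not eliminate tail calls. `tail_call_run` and the `("call", ...)`/`("done", ...)` convention are invented here purely to make the reuse visible:

```python
def tail_call_run(fn, *args):
    """Trampoline: each ("call", ...) result replaces the current 'frame'
    instead of growing the Python call stack."""
    while True:
        result = fn(*args)
        if result[0] == "done":
            return result[1]
        fn, args = result[1], result[2]   # reuse the one frame record area

def fact(n, acc=1):
    # Tail-recursive factorial written against the invented convention above.
    if n <= 1:
        return ("done", acc)
    return ("call", fact, (n - 1, acc * n))
```

However deep the recursion, only one logical frame is ever live, which is exactly the saving the alternative-function mechanism aims at.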
-
Patent number: 7124292. Abstract: An automatically configuring storage array includes a plurality of media storage devices coupled together within a network of devices. Preferably, the network of devices is an IEEE 1394-1995 serial bus network. The media storage devices are utilized to record and retrieve streams of data transmitted within the network of devices, and communicate with each other in order to store and retrieve streams of data over multiple media storage devices, if necessary. When a record or playback command is received by any one of the media storage devices, the media storage devices send control communications between themselves to ensure that the stream of data is recorded or transmitted, as appropriate. Control of the record or transmit operation is also transferred between the media storage devices in order to utilize the full capacity of the available media storage devices. Preferably, streams of data are recorded utilizing redundancy techniques. Type: Grant. Filed: July 25, 2005. Date of Patent: October 17, 2006. Assignees: Sony Corporation, Sony Electronics Inc. Inventor: Scott D. Smyers
-
Patent number: 7124251. Abstract: A stack allocation system and method are described. In one implementation, an attempt is made to allocate N bytes of data to a stack having a fixed depth. A probe size for the stack is determined, and a check verifies whether the probe size plus the N bytes of data would exceed the fixed depth of the stack before the N bytes are allocated to it. In another implementation, the N bytes of data are allocated to a heap if the probe size and the N bytes of data exceed the fixed depth of the stack. Type: Grant. Filed: June 24, 2002. Date of Patent: October 17, 2006. Assignee: Microsoft Corporation. Inventors: Jason Clark, Scott A. Field, Jonathan David Schwartz, Clifford P. Van Dyke
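The check in patent 7124251 reduces to a single comparison; `PROBE_SIZE`, the function name, and the byte-counting interface below are invented for the sketch:

```python
PROBE_SIZE = 4096   # illustrative probe margin (e.g. one guard page)

def allocate(n_bytes, stack_used, stack_depth):
    """Decide where N bytes go: the stack if probe + request still fit
    within the fixed depth, otherwise the heap."""
    if stack_used + n_bytes + PROBE_SIZE <= stack_depth:
        return "stack"
    return "heap"
```

The point of probing before allocating is that the overflow is detected up front, instead of faulting partway through a large stack allocation.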
-
Patent number: 7117335. Abstract: A method of controlling an industrial process with a programmable process controller has the steps of: taking data in the form of result values that are decisive for the process; storing the data in a memory of the programmable process controller; on starting a control program, reading predefined configuration data that are stored in a memory in the controller and associated with the control program; based on the configuration data, selecting a subset of the result values adapted to the result-value memory available in the controller; and subsequently storing that subset in this memory. Type: Grant. Filed: October 1, 2004. Date of Patent: October 3, 2006. Assignee: Bosch Rexroth AG. Inventors: Alexander Sailer, Martin Merz, Albrecht Schindler, Thorsten Klepsch
-
Patent number: 7114040. Abstract: One embodiment disclosed relates to a method of selecting a default locality for a memory object requested by a process running on a CPU in a multiprocessor system. A determination is made as to whether the memory object comprises a shared-memory object. If so, the default locality is selected to be within interleaved memory in the system. If not, a further determination may be made as to whether the memory object comprises a stack-type object. If so, the default locality may be selected to be within local memory at the same cell as the requesting CPU. If not, a further determination may be made as to whether the requesting process has threads running on multiple cells. Type: Grant. Filed: March 2, 2004. Date of Patent: September 26, 2006. Assignee: Hewlett-Packard Development Company, L.P. Inventor: Michael E. Yoder
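The decision chain in patent 7114040 is a short cascade. The abstract stops before stating the outcome of the final multi-cell check, so the last two branches below are guesses, and all names are invented:

```python
def default_locality(is_shared, is_stack, threads_span_cells):
    if is_shared:
        return "interleaved"          # shared-memory object: interleave across cells
    if is_stack:
        return "local-to-cpu-cell"    # stack object: same cell as the requesting CPU
    if threads_span_cells:
        return "interleaved"          # guess: spread a multi-cell process's object
    return "local-to-cpu-cell"        # guess: single-cell process stays local
```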
-
Patent number: 7065613. Abstract: The invention is directed to efficient stack cache logic, which reduces the number of accesses to main memory. More specifically, in one embodiment, the invention prevents writing old line data to main memory when the old line data represents a currently unused area of the cache. In another embodiment, the invention prevents reading previous line data for a new tag from main memory when the new tag represents a currently unused area of the cache. Type: Grant. Filed: June 6, 2003. Date of Patent: June 20, 2006. Assignee: Maxtor Corporation. Inventors: Lance Flake, Andrew Vogan
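A sketch of the two elisions in patent 7065613, with an invented line/tag representation: `evict` skips the write-back for a line covering an unused area, and `fill` skips the fill read for a new tag that maps one:

```python
def evict(line, in_use, memory):
    """Write-back path: skip the memory write when the evicted line covers
    a currently unused area of the cache."""
    if in_use:
        memory[line["tag"]] = line["data"]   # normal write-back
        return "written"
    return "write-skipped"                   # unused area: no memory access

def fill(tag, unused_area, memory):
    """Fill path: skip reading previous line data for a new tag that maps
    a currently unused area."""
    if unused_area:
        return [0] * 8, "read-skipped"       # no stale data worth fetching
    return memory.get(tag, [0] * 8), "read"
```

For stack data both cases are common: a popped region is dead on eviction, and a freshly pushed region is about to be fully overwritten, so neither memory access buys anything.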
-
Patent number: 7058765. Abstract: Methods and apparatuses are disclosed for implementing a processor with a split stack. In some embodiments, the processor includes a main stack and a micro-stack. The micro-stack preferably is implemented in the core of the processor, whereas the main stack may be implemented in areas external to the core. Operands are preferably provided to an arithmetic logic unit (ALU) by the micro-stack; in the case of underflow (micro-stack empty), operands may be fetched from the main stack. Operands are written to the main stack during overflow (micro-stack full) or by explicit flushing of the micro-stack. By optimizing the size of the micro-stack, the number of operands fetched from the main stack may be reduced, and consequently the processor's power consumption may be reduced. Type: Grant. Filed: July 31, 2003. Date of Patent: June 6, 2006. Assignee: Texas Instruments Incorporated. Inventors: Gerard Chauvel, Maija Kuusela, Serge Lasserre
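A software model of the split stack in patent 7058765: `SplitStack`, the tiny depth, and the spill-oldest policy are assumptions made for this sketch; the patented hardware manages the same overflow/underflow traffic with stack pointers rather than Python lists:

```python
class SplitStack:
    """Tiny on-core micro-stack feeding the ALU; underflow refills from the
    main stack, overflow spills to it."""

    def __init__(self, micro_depth=2):
        self.main = []                 # main stack, external to the core
        self.micro = []                # on-core micro-stack
        self.micro_depth = micro_depth

    def push(self, value):
        if len(self.micro) == self.micro_depth:
            self.main.append(self.micro.pop(0))   # overflow: spill oldest
        self.micro.append(value)

    def pop(self):
        if not self.micro:                        # underflow: refill from main
            self.micro.append(self.main.pop())
        return self.micro.pop()

    def flush(self):
        # Explicit flush: write all micro-stack operands to the main stack.
        self.main.extend(self.micro)
        self.micro.clear()
```

Sizing `micro_depth` so that most push/pop pairs stay on-core is what cuts main-stack traffic, and with it power.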
-
Patent number: 7035977. Abstract: A system and a method for stack-caching method frames are disclosed, which utilize an FSO (Frame Size Overflow) flag to control the execution mode of the processor. When the FSO flag is set, it indicates that the method frame of the next method is larger than the limit value of the stack cache, so the method frame is placed in a memory stack. When the FSO flag is cleared, it indicates that the method frame of the next method is smaller than the limit value of the stack cache, so the method frame is placed into the stack cache. Type: Grant. Filed: April 17, 2003. Date of Patent: April 25, 2006. Assignee: Industrial Technology Research Institute. Inventor: Cheng-Che Chen
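The FSO decision in patent 7035977 is a single comparison; the limit value and the function shape below are invented for the sketch:

```python
STACK_CACHE_LIMIT = 64   # illustrative stack-cache frame limit, in words

def place_frame(frame_size):
    """Set FSO when the next method's frame exceeds the stack-cache limit,
    and route the frame accordingly."""
    fso = frame_size > STACK_CACHE_LIMIT
    return ("memory-stack" if fso else "stack-cache"), fso
```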
-
Patent number: 7028163. Abstract: A digital data processor comprising a stack storage having a plurality of locations classified into two or more banks, and a stack pointer circuit pointing to one or more stack banks of the stack storage. The stack pointer circuit operates in response to decoding signals from an instruction decoder, which decodes the current instruction to determine whether a one-word or a multi-word stack operation is desired. Type: Grant. Filed: June 22, 1999. Date of Patent: April 11, 2006. Assignee: Samsung Electronics Co., Ltd. Inventors: Yong-Chun Kim, Hong-Kyu Kim, Seh-Woong Jeong
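The decoder-driven pointer update in patent 7028163 can be sketched as follows; the bank size, the `advance` interface, and returning the set of touched banks are all invented here:

```python
BANK_SIZE = 8   # illustrative words per bank

class BankedStackPointer:
    """Stack pointer over banked stack storage: a multi-word operation may
    span bank boundaries, so it can touch more than one bank at once."""

    def __init__(self, banks=4):
        self.sp = 0
        self.limit = banks * BANK_SIZE

    def advance(self, words):
        # `words` comes from the instruction decoder: 1 for a one-word stack
        # operation, >1 for a multi-word one.
        assert self.sp + words <= self.limit, "stack overflow"
        touched = {(self.sp + i) // BANK_SIZE for i in range(words)}
        self.sp += words
        return sorted(touched)        # bank(s) this operation accesses
```

Knowing up front whether one bank or several are involved is what lets the pointer circuit select the right bank(s) in a single step.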