With Dedicated Cache, e.g., Instruction or Stack, etc. (EPO) Patents (Class 711/E12.02)
-
Publication number: 20100153654
Abstract: In a data-processing method, first result data may be obtained using a plurality of configurable coarse-granular elements, the first result data may be written into a memory that includes spatially separate first and second memory areas and that is connected via a bus to the plurality of configurable coarse-granular elements, the first result data may be subsequently read out from the memory, and the first result data may be subsequently processed using the plurality of configurable coarse-granular elements. In a first configuration, the first memory area may be configured as a write memory, and the second memory area may be configured as a read memory. Subsequent to writing to and reading from the memory in accordance with the first configuration, the first memory area may be configured as a read memory, and the second memory area may be configured as a write memory.
Type: Application
Filed: September 30, 2009
Publication date: June 17, 2010
Inventors: Martin Vorbach, Jürgen Becker, Markus Weinhardt, Volker Baumgarte, Frank May
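The alternating write/read role swap this abstract describes is a classic ping-pong (double-buffer) arrangement. The following is a minimal sketch of that idea; all type and function names are illustrative, not taken from the patent.

```c
#include <assert.h>

/* Two spatially separate memory areas; one is configured as the write
 * memory while the other serves as the read memory. After a write
 * phase and a read phase complete, the roles are exchanged. */
typedef struct {
    int area[2][4];  /* the two memory areas */
    int write_sel;   /* index of the area currently acting as write memory */
} pingpong_mem;

void pp_init(pingpong_mem *m)        { m->write_sel = 0; }
int *pp_write_area(pingpong_mem *m)  { return m->area[m->write_sel]; }
int *pp_read_area(pingpong_mem *m)   { return m->area[1 - m->write_sel]; }

/* reconfigure: former write memory becomes the read memory, and vice versa */
void pp_reconfigure(pingpong_mem *m) { m->write_sel = 1 - m->write_sel; }
```

This lets the coarse-granular elements produce a new result block into one area while consuming the previous result from the other, with no copying between phases.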
-
Publication number: 20100131796
Abstract: A system and method are provided for detecting and recovering from errors in an Instruction Cache RAM and/or Operand Cache RAM of an electronic data processing system. In some cases, errors in the Instruction Cache RAM and/or Operand Cache RAM are detected and recovered from without any required interaction of an operating system of the data processing system. Thus, and in many cases, errors in the Instruction Cache RAM and/or Operand Cache RAM can be handled seamlessly and efficiently, without requiring a specialized operating system routine, or in some cases, a maintenance technician, to help diagnose and/or fix the error.
Type: Application
Filed: December 17, 2009
Publication date: May 27, 2010
Inventors: Kenneth L. Engelbrecht, Lawrence R. Fontaine, John S. Kuslak, Conrad S. Shimada
-
Patent number: 7702855
Abstract: A processing device employs a stack memory in a region of an external memory. The processing device has a stack pointer register to store a current top address for the stack memory. One of several techniques is used to determine which portion or portions of the external memory correspond to the stack region. A more efficient memory policy is implemented, whereby pushes to the stack do not have to read data from the external memory into a cache, and whereby pops from the stack do not cause stale stack data to be written back from the cache to the external memory.
Type: Grant
Filed: August 11, 2005
Date of Patent: April 20, 2010
Assignee: Cisco Technology, Inc.
Inventors: Jonathan Rosen, Earl T. Cohen
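The policy above hinges on the cache being able to classify an address as live or dead stack data relative to the stack pointer. A minimal sketch of that classification, assuming a downward-growing stack bounded by hypothetical base/limit registers (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stack region [limit, base) in external memory; SP marks the current
 * top and the stack grows toward lower addresses. */
typedef struct {
    uint32_t base;   /* highest address of the stack region (exclusive) */
    uint32_t limit;  /* lowest address of the stack region              */
    uint32_t sp;     /* current top-of-stack address                    */
} stack_regs;

bool in_stack_region(const stack_regs *r, uint32_t a)
{
    return a >= r->limit && a < r->base;
}

/* Addresses in the region but below SP have been popped: the cache may
 * drop such lines without writing stale data back to external memory. */
bool is_dead_stack_data(const stack_regs *r, uint32_t a)
{
    return in_stack_region(r, a) && a < r->sp;
}
```

A push allocates a line below the old top, so the cache can install it without first reading the line from external memory; a pop leaves dead data that never needs write-back.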
-
Publication number: 20100095069
Abstract: For each process, a stack data structure that includes two stacks, which are joined at their bases, is created. The two stacks include a normal stack, which grows downward, and an inverse stack, which grows upward. Items on the stack data structure are segregated into protected and unprotected classes. Protected items include frame pointers and return addresses, which are stored on the normal stack. Unprotected items are function parameters and local variables. The unprotected items are stored on the inverse stack.
Type: Application
Filed: December 21, 2009
Publication date: April 15, 2010
Inventors: Michael L. Asher, Charles C. Giddens, Harold Jeffrey Stewart
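The joined-at-the-base dual stack can be sketched as one buffer with two tops growing in opposite directions from the shared base. This is an illustrative model, not the patent's implementation; sizes and names are assumptions.

```c
#include <assert.h>
#include <stdint.h>

#define WORDS 64

/* Protected items (return addresses, frame pointers) go on the normal
 * stack, growing toward lower indices; unprotected items (parameters,
 * locals) go on the inverse stack, growing toward higher indices. */
typedef struct {
    uint32_t mem[WORDS];
    int top_normal;   /* next free slot is top_normal - 1 */
    int top_inverse;  /* next free slot is top_inverse    */
} dual_stack;

void ds_init(dual_stack *s)
{
    s->top_normal = WORDS / 2;   /* both tops start at the shared base */
    s->top_inverse = WORDS / 2;
}

void push_protected(dual_stack *s, uint32_t v)   { s->mem[--s->top_normal] = v; }
void push_unprotected(dual_stack *s, uint32_t v) { s->mem[s->top_inverse++] = v; }
uint32_t pop_protected(dual_stack *s)   { return s->mem[s->top_normal++]; }
uint32_t pop_unprotected(dual_stack *s) { return s->mem[--s->top_inverse]; }
```

Because the two classes grow away from each other, an overrun of an unprotected local buffer cannot walk over a saved return address, which is the point of the segregation.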
-
Publication number: 20100095070
Abstract: An information processing apparatus includes a main memory and a processor. The processor includes: a cache memory that stores data fetched to the cache memory; an instruction processing unit that accesses a part of the data in the cache memory sub block by sub block; an entry holding unit that holds a plurality of entries including a plurality of block addresses and access history information; and a controller that controls fetching of data from the main memory to the cache memory during the access by the instruction processing unit to sub blocks of data in a block indicated by another of the entries immediately preceding the one of the entries, in accordance with the order of the access from the instruction processing unit to sub blocks in the block indicated by the other entry and the access history information associated with the one of the entries.
Type: Application
Filed: December 16, 2009
Publication date: April 15, 2010
Inventors: Hideki Okawara, Iwao Yamazaki
-
Publication number: 20100095066
Abstract: A method to generate and save a resource representation recited by a request encoded in a computer algorithm, wherein the method receives from a requesting algorithm an unresolved resource request. The method resolves the resource request to an endpoint and evaluates the resolved resource request by the endpoint to generate a resource representation. The method further generates and saves in a cache at least one unresolved request scope key, a resolved request scope key, and a cache entry comprising the resource representation. The method associates the cache entry with the resolved request scope key and with the at least one unresolved request scope key using a mapping function encoded in the cache.
Type: Application
Filed: September 22, 2009
Publication date: April 15, 2010
Applicant: 1060 Research Limited
Inventors: Peter James Rodgers, Antony Allan Butterfield
-
Publication number: 20100088460
Abstract: Memory requests for information from a processor are received in an interface device, and the interface device is coupled to a stack including two or more memory devices. The interface device is operated to select a memory device from a number of memory devices including the stack, and to retrieve some or all of the information from the selected memory device for the processor. Additional apparatus, systems and methods are disclosed.
Type: Application
Filed: October 7, 2008
Publication date: April 8, 2010
Inventor: Joe M. Jeddeloh
-
Publication number: 20100088688
Abstract: Disclosed herein is a method of optimising an executable program to improve instruction cache hit rate when executed on a processor. A method of predicting instruction cache behaviour of an executable program is also disclosed. According to further aspects of the present invention, there is provided a software development tool product comprising code which when executed on a computer will perform the method of optimising an executable program. A linker product and a computer program are also disclosed.
Type: Application
Filed: October 2, 2009
Publication date: April 8, 2010
Applicant: ICERA Inc.
Inventors: David Alan Edwards, Alan Alexander
-
Publication number: 20100083120
Abstract: There is provided a storage system including one or more LDEVs, one or more processors, a local memory or memories corresponding to the processor or processors, and a shared memory, which is shared by the processors, wherein control information on I/O processing or application processing is stored in the shared memory, and the processor caches a part of the control information in different storage areas on a type-by-type basis in the local memory or memories corresponding to the processor or processors when referring to the control information stored in the shared memory.
Type: Application
Filed: December 18, 2008
Publication date: April 1, 2010
Inventors: Shintaro Ito, Norio Shimozono
-
Publication number: 20100077146
Abstract: An instruction cache system includes an instruction-cache data storage unit that stores cache data per index, and an instruction cache controller that compresses and writes the cache data in the instruction-cache data storage unit, and controls a compression ratio of the written cache data. The instruction cache controller calculates a memory capacity of a redundant area generated due to compression in a memory area belonging to an index, in which n pieces of cache data are written based on the controlled compression ratio, to compress and write new cache data in the redundant area based on the calculated memory capacity.
Type: Application
Filed: May 4, 2009
Publication date: March 25, 2010
Applicant: Kabushiki Kaisha Toshiba
Inventor: Soichiro Hosoda
-
Publication number: 20100077145
Abstract: A method of parallel execution of a first and a second instruction in an in-order processor. Embodiments of the invention enable parallel execution of memory instructions that are stalled by cache memory misses. The in-order processor processes cache memory misses of instructions in parallel by overlapping the first cache memory miss with cache memory misses that occur after the first cache memory miss. Memory-level parallelism in the in-order processor can be increased when more parallel and outstanding cache memory misses are generated.
Type: Application
Filed: September 25, 2008
Publication date: March 25, 2010
Inventors: Sebastian C. Winkel, Kalyan Muthukumar, Don C. Soltis, Jr.
-
Publication number: 20090328022
Abstract: Systems, methods and media for updating CRTM code in a computing machine are disclosed. In one embodiment, the CRTM code initially resides in ROM and updated CRTM is stored in a staging area of the ROM. A logical partition of L2 cache may be created to store a heap and a stack and a data store. The data store holds updated CRTM code copied to the L2 cache. When a computing system is started, it first executes CRTM code. The CRTM code checks the staging area of the ROM to determine if there is updated CRTM code. If so, then CRTM code is copied into the L2 cache to be executed from there. The CRTM code loads the updated code into the cache and verifies its signature. The CRTM code then copies the updated code into the cache where the current CRTM code is located.
Type: Application
Filed: June 26, 2008
Publication date: December 31, 2009
Applicant: International Business Machines Corporation
Inventors: Sean P. Brogan, Sumeet Kochar
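The boot-time decision in this abstract — run a staged CRTM update only if it exists and its signature verifies, otherwise fall back to the current ROM copy — can be sketched as a small selection function. The struct fields and names below are assumptions for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* State the early CRTM code observes at startup (illustrative). */
typedef struct {
    bool staged;     /* updated CRTM present in the ROM staging area? */
    bool sig_valid;  /* does the staged image's signature verify?     */
} crtm_rom;

typedef enum { RUN_CURRENT, RUN_UPDATED_FROM_CACHE } crtm_choice;

crtm_choice crtm_select(const crtm_rom *rom)
{
    if (!rom->staged)
        return RUN_CURRENT;              /* nothing staged: normal boot   */
    if (!rom->sig_valid)
        return RUN_CURRENT;              /* reject an unverifiable image  */
    return RUN_UPDATED_FROM_CACHE;       /* copy to L2 partition, execute */
}
```

Verifying the signature before handing control to the staged image is what keeps the update path from weakening the root of trust.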
-
Publication number: 20090320006
Abstract: An exemplary system and method are provided for learning and cache management in software defined contexts. Exemplary embodiments of the present invention described herein address the problem of the data access wall resulting from processor stalls due to the increasing discrepancies between processor speed and the latency of access to data that is not stored in the immediate vicinity of the processor requesting the data.
Type: Application
Filed: May 13, 2008
Publication date: December 24, 2009
Inventors: Peter A. Franaszek, Luis Alfonso Lastras Montano, R. Brett Tremaine
-
Publication number: 20090307431
Abstract: Methods, software media, compilers and programming techniques are described for creating copyable stack-based closures, such as a block, for languages which allocate automatic or local variables on a stack memory structure. In one exemplary method, a data structure of the block is first written to the stack memory structure, and this may be the automatic default operation, at run-time, for the block; then, a block copy instruction, added explicitly (in one embodiment) by a programmer during creation of the block, is executed to copy the block to a heap memory structure. The block includes a function pointer that references a function which uses data in the block.
Type: Application
Filed: September 30, 2008
Publication date: December 10, 2009
Inventors: Gerald Blaine Garst, Jr., William Bumgarner, Fariborz Jahanian, Christopher Arthur Lattner
-
Publication number: 20090271867
Abstract: One embodiment of the invention discloses a method for receiving in a virtual machine (VM) contents of a program for creating a virtual environment for interacting with a host platform in a computing device; and determining by the VM if the received contents comprise predetermined instructions for performing at least one unauthorized task. Another embodiment of the invention discloses a method for receiving a system call for a host platform in communication with a VM of a computing device; and determining by the VM if the received system call comprises at least one predetermined system call for performing at least one unauthorized task. Yet another embodiment of the invention discloses a method for receiving a virtualized memory address for a host platform in communication with a VM of a computing device; and determining by the VM if the received virtualized memory address comprises at least one predetermined unauthorized virtualized memory address.
Type: Application
Filed: December 30, 2005
Publication date: October 29, 2009
Inventor: Peng Zhang
-
Publication number: 20090265507
Abstract: A system comprising a host, a solid state device, and an abstract layer. The host may be configured to generate a plurality of input/output (IO) requests. The solid state device may comprise a write cache region and a read cache region. The read cache region may be a mirror of the write cache region. The abstract layer may be configured to (i) receive the plurality of IO requests, (ii) process the IO requests, and (iii) map the plurality of IO requests to the write cache region and the read cache region.
Type: Application
Filed: April 2, 2009
Publication date: October 22, 2009
Inventors: Mahmoud K. Jibbe, Senthil Kannan
-
Publication number: 20090216952
Abstract: A method for managing cache memory including receiving an instruction fetch for an instruction stream in a cache memory, wherein the instruction fetch includes an instruction fetch reference tag for the instruction stream and the instruction stream is at least partially included within a cache line, comparing the instruction fetch reference tag to a previous instruction fetch reference tag, maintaining a cache replacement status of the cache line if the instruction fetch reference tag is the same as the previous instruction fetch reference tag, and upgrading the cache replacement status of the cache line if the instruction fetch reference tag is different from the previous instruction fetch reference tag, whereby the cache replacement status of the cache line is upgraded if the instruction stream is independently fetched more than once. A corresponding system and computer program product.
Type: Application
Filed: February 26, 2008
Publication date: August 27, 2009
Applicant: International Business Machines Corporation
Inventors: Robert J. Sonnelitter, III, Gregory W. Alexander, Brian R. Prasky
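The tag comparison in this abstract reduces to a small per-line update rule: an unchanged reference tag means the same fetch stream is continuing, so replacement status is maintained; a new tag means an independent re-fetch, so the line's status is upgraded. A sketch under assumed names, with the status encoded as a saturating counter (an assumption, not from the patent):

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t prev_tag;  /* previous instruction fetch reference tag */
    int repl_status;    /* higher = more protected from replacement */
} cache_line_meta;

void on_instruction_fetch(cache_line_meta *line, uint32_t fetch_tag,
                          int max_status)
{
    if (fetch_tag != line->prev_tag) {
        /* independently fetched again: upgrade replacement status */
        if (line->repl_status < max_status)
            line->repl_status++;
        line->prev_tag = fetch_tag;
    }
    /* same tag: same fetch stream, status is maintained unchanged */
}
```

The effect is that a line touched many times by one long fetch sequence does not look "hotter" than a line genuinely re-fetched by separate streams.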
-
Publication number: 20090216847
Abstract: Methods and systems for determining a suitability for a mobile client to display information are disclosed. A particular exemplary method includes generating a first value for a first targeted content message based on at least a first weighted sum of first user profile attributes, a user profile attribute being a numeric quantity relating to at least one of a preference and a demographic of a user of the mobile client, comparing the first value with a stored list of second values related to respective stored messages in a cache memory of the mobile client to produce a comparison result; and updating the cache memory based on the comparison result by storing the first targeted content message in the cache memory if the comparison result indicates a higher desirability of the first targeted content message over at least one stored message.
Type: Application
Filed: November 11, 2008
Publication date: August 27, 2009
Applicant: QUALCOMM Incorporated
Inventors: Dilip Krishnaswamy, Pooja Aggarwal, Robert S. Daley, Martin Renschler, Patrik Lundqvist
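The scoring-and-replacement policy described here can be sketched in two steps: value a message as a weighted sum of profile attributes, then admit it to the cache only if it beats the lowest-valued stored message. Weights, attributes, and function names below are illustrative assumptions.

```c
#include <assert.h>

/* Value of a targeted content message: weighted sum of numeric
 * user-profile attributes (preferences, demographics). */
double message_value(const double *attrs, const double *weights, int n)
{
    double v = 0.0;
    for (int i = 0; i < n; i++)
        v += attrs[i] * weights[i];
    return v;
}

/* Compare the new message's value against the stored list.  Returns
 * the index of the lowest-valued stored message it should displace,
 * or -1 if every stored message is at least as desirable. */
int pick_replacement(double new_value, const double *stored_values, int n)
{
    int worst = -1;
    double worst_v = new_value;
    for (int i = 0; i < n; i++) {
        if (stored_values[i] < worst_v) {
            worst_v = stored_values[i];
            worst = i;
        }
    }
    return worst;
}
```

Keeping only per-message scalar values in the stored list makes the comparison cheap on a resource-constrained mobile client.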
-
Publication number: 20090193194
Abstract: A method and apparatus for eliminating, in a multi-node data handling system, contention for exclusivity of lines in cache memory through improved management of system buses, processor cross-invalidate stacks, and the system operations that can lead to these requested cache operations being rejected.
Type: Application
Filed: January 29, 2008
Publication date: July 30, 2009
Applicant: International Business Machines Corporation
Inventors: Garrett M. Drapala, Pak-kin Mak, Vesselina K. Papazova, Craig R. Walters
-
Publication number: 20090187712
Abstract: The present subject matter relates to operation frame filtering, building, and execution. Some embodiments include identifying a frame signature, counting a number of execution occurrences of the frame signature, and building a frame of operations to execute instead of operations identified by the frame signature.
Type: Application
Filed: March 30, 2009
Publication date: July 23, 2009
Inventors: Stephan Jourdan, Per Hammarlund, Alexandre Farcy, John Alan Miller
-
Publication number: 20090172284
Abstract: A method and apparatus for monitor and mwait in a distributed cache architecture is disclosed. One embodiment includes an execution thread sending a MONITOR request for an address to a portion of a distributed cache that stores the data corresponding to that address. At the distributed cache portion the MONITOR request and an associated speculative state is recorded locally for the execution thread. The execution thread then issues an MWAIT instruction for the address. At the distributed cache portion the MWAIT and an associated wait-to-trigger state are recorded for the execution thread. When a write request matching the address is received at the distributed cache portion, a monitor-wake event is then sent to the execution thread and the associated monitor state at the distributed cache portion for that execution thread can be reset to idle.
Type: Application
Filed: December 28, 2007
Publication date: July 2, 2009
Inventors: Zeev Offen, Alon Naveh, Iris Sorani
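The per-thread monitor state kept at the distributed cache portion is essentially a three-state machine: MONITOR records a speculative state for an address, MWAIT arms it as wait-to-trigger, and a matching write fires the monitor-wake event and resets the state to idle. A minimal sketch under assumed names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { MON_IDLE, MON_SPECULATIVE, MON_WAIT_TO_TRIGGER } mon_state;

/* Monitor state a distributed cache portion keeps per execution thread. */
typedef struct {
    uint64_t addr;
    mon_state state;
} monitor_entry;

/* MONITOR request: record address and speculative state. */
void on_monitor(monitor_entry *m, uint64_t addr)
{
    m->addr = addr;
    m->state = MON_SPECULATIVE;
}

/* MWAIT for the same address: arm the wait-to-trigger state. */
void on_mwait(monitor_entry *m, uint64_t addr)
{
    if (m->state == MON_SPECULATIVE && m->addr == addr)
        m->state = MON_WAIT_TO_TRIGGER;
}

/* Incoming write: returns true if a monitor-wake event should be sent
 * to the thread; the monitor state is then reset to idle. */
bool on_write(monitor_entry *m, uint64_t addr)
{
    if (m->state == MON_WAIT_TO_TRIGGER && m->addr == addr) {
        m->state = MON_IDLE;
        return true;
    }
    return false;
}
```

Recording the state at the cache portion that owns the address means only writes that actually reach that portion need to be checked, rather than snooping every write system-wide.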
-
Publication number: 20090172285
Abstract: A method and apparatus for tracking temporal use associated with cache evictions to reduce allocations in a victim cache is disclosed. Access data for a number of sets of instructions in an instruction cache is tracked at least until the data for one or more of the sets reach a predetermined threshold condition. Determinations whether to allocate entry storage in the victim cache may be made responsive in part to the access data for sets reaching the predetermined threshold condition. A micro-operation can be inserted into the execution pipeline in part to synchronize the access data for all the sets. Upon retirement of the micro-operation from the execution pipeline, access data for the sets can be synchronized and/or any previously allocated entry storage in the victim cache can be invalidated.
Type: Application
Filed: December 29, 2007
Publication date: July 2, 2009
Inventors: Peter J. Smith, Mongkol Ekpanyapong, Harikrishna Baliga, Ilhyun Kim
-
Publication number: 20090164729
Abstract: A method, system and process for retiring data entries held within a store queue (STQ). The STQ of a processor cache is modified to receive and process multiple synchronized groups (sync-groups). Sync-groups comprise thread-of-execution-synchronized (thread-sync) entries, all-thread-of-execution-synchronized (all-thread-sync) entries, and regular store entries (non-thread-sync and non-all-thread-sync). The task of storing data entries from the STQ out to memory or an input/output device is modified to increase the effectiveness of the cache. Sync-groups are created for each thread and tracked within the STQ via a synchronized identification (SID). An entry is eligible for retirement when the entry is within a currently retiring sync-group as identified by the SID.
Type: Application
Filed: December 21, 2007
Publication date: June 25, 2009
Applicant: IBM Corporation
Inventor: Eric F. Robinson
-
Publication number: 20090157967
Abstract: A prefetch data machine instruction having an M field performs a function on a cache line of data specifying an address of an operand. The operation comprises either prefetching a cache line of data from memory to a cache or reducing the access ownership of store and fetch or fetch-only of the cache line in the cache, or a combination thereof. The address of the operand is either based on a register value or the program counter value pointing to the prefetch data machine instruction.
Type: Application
Filed: December 12, 2007
Publication date: June 18, 2009
Applicant: International Business Machines Corporation
Inventors: Dan F. Greiner, Timothy J. Slegel
-
Publication number: 20090150611
Abstract: A method and apparatus for managing the caching of data on an auxiliary memory of a computer. Pages of data may be cached on an auxiliary memory, such as a flash memory, at a virtual level using an identifier that does not involve a physical address of the pages on a memory. Pages may be cached on auxiliary memory that may be removable from the computer, e.g., by unplugging the memory from the computer. Page data may be encrypted and/or compressed on the auxiliary memory. An authentication indicator may be used to verify the accuracy of cached data in the case of an interrupted connection to the auxiliary memory, e.g., as a result of computer power down, hibernation, removal of the memory from the computer, etc.
Type: Application
Filed: December 10, 2007
Publication date: June 11, 2009
Applicant: Microsoft Corporation
Inventors: Michel Fortin, Cenk Ergan, Mehmet Iyigun, Yevgeniy Bak, Ben Mickle, Aaron Dietrich, Alexander Kirshenbaum
-
Publication number: 20090132059
Abstract: A multicore processor for industrial control provides for the execution of separate operating systems on the cores under control of one of the cores to tailor the operating system to optimum execution of different applications of industrial control and communication. One core may provide for a reduced instruction set for execution of industrial control programs with the remaining cores providing a general-purpose instruction set.
Type: Application
Filed: November 13, 2008
Publication date: May 21, 2009
Inventors: Ronald E. Schultz, Scot A. Tutkovics, Richard J. Grgic, James J. Kay, James W. Kenst, Daniel W. Clark
-
Publication number: 20090119456
Abstract: A processor and a memory management method are provided. The processor includes a processor core, a cache which transceives data to/from the processor core via a single port, and stores the data accessed by the processor core, and a Scratch Pad Memory (SPM) which transceives the data to/from the processor core via at least one of a plurality of multi ports.
Type: Application
Filed: March 14, 2008
Publication date: May 7, 2009
Inventors: Il Hyun Park, Soojung Ryu, Dong-Hoon Yoo, Dong Kwan Suh, Jeongwook Kim, Choon Ki Jang
-
Publication number: 20090077322
Abstract: A system and method for using a data-only transfer protocol to store atomic cache line data in a local storage area is presented. A processing engine includes an atomic cache and a local storage. When the processing engine encounters a request to transfer cache line data from the atomic cache to the local storage (e.g., GETTLAR command), the processing engine utilizes a data-only transfer protocol to pass cache line data through the external bus node and back to the processing engine. The data-only transfer protocol comprises a data phase and does not include a prior command phase or snoop phase, because the processing engine communicates with the bus node instead of the entire computer system when the processing engine sends a data request to transfer data to itself.
Type: Application
Filed: September 19, 2007
Publication date: March 19, 2009
Inventors: Charles Ray Johns, Roy Moonseuk Kim, Peichun Peter Liu, Shigehiro Asano, Anushkumar Rengarajan
-
Publication number: 20090070531
Abstract: A system and method of using an n-way cache are disclosed. In an embodiment, a method includes determining a first way of a first instruction stored in a cache and storing the first way in a list of ways. The method also includes determining a second way of a second instruction stored in the cache and storing the second way in the list of ways. In an embodiment, the first way may be used to access a first cache line containing the first instruction and the second way may be used to access a second cache line containing the second instruction.
Type: Application
Filed: September 10, 2007
Publication date: March 12, 2009
Applicant: QUALCOMM Incorporated
Inventors: Suresh Venkumahanti, Phillip Matthew Jones
-
Publication number: 20090037659
Abstract: A method of performing cache coloring includes the steps of generating a dynamic function flow representing a temporal sequence in which a plurality of functions are called at a time of executing a program comprised of the plurality of functions by executing the program by a computer, generating function strength information in response to the dynamic function flow by use of the computer, the function strength information including information about runtime relationships between any given one of the plurality of functions and all the other ones of the plurality of functions in terms of a way the plurality of functions are called and further including information about degree of certainty of cache miss occurrence, and allocating the plurality of functions to memory space by use of the computer in response to the function strength information such as to reduce instruction cache conflict.
Type: Application
Filed: April 2, 2008
Publication date: February 5, 2009
Applicant: Fujitsu Limited
Inventor: Shigeru Kimura
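The allocation step this abstract describes — placing functions in memory so that temporally related, conflict-prone pairs do not share an instruction-cache set — can be sketched as a greedy graph coloring over a pairwise "strength" matrix. This is an illustrative simplification, not the patented algorithm; the matrix, threshold, and color counts are assumptions.

```c
#include <assert.h>

#define NFUNCS  4
#define NCOLORS 2   /* e.g., available cache-set groups ("colors") */

/* strength[f][g] (for g < f): degree of certainty that f and g cause
 * instruction-cache conflict misses if placed on the same color,
 * derived from the dynamic function flow. */
int assign_colors(const int strength[NFUNCS][NFUNCS], int threshold,
                  int colors[NFUNCS])
{
    int conflicts = 0;
    for (int f = 0; f < NFUNCS; f++) {
        int used[NCOLORS] = { 0 };
        /* mark colors taken by strongly related, already-placed functions */
        for (int g = 0; g < f; g++)
            if (strength[f][g] >= threshold)
                used[colors[g]] = 1;
        colors[f] = 0;
        while (colors[f] < NCOLORS && used[colors[f]])
            colors[f]++;
        if (colors[f] == NCOLORS) {  /* no free color: accept a conflict */
            colors[f] = 0;
            conflicts++;
        }
    }
    return conflicts;  /* number of unavoidable same-color strong pairs */
}
```

Weakly related pairs are allowed to share a color, which keeps the number of colors (and thus wasted address-space alignment) small.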
-
Publication number: 20090013132
Abstract: A cache memory comprises a first set of storage locations for holding syllables and addressable by a first group of addresses; a second set of storage locations for holding syllables and addressable by a second group of addresses; addressing circuitry operable to provide in each addressing cycle a pair of addresses comprising one from the first group and one from the second group, thereby accessing a plurality of syllables from each set of storage locations; and selection circuitry operable to select from said plurality of syllables to output to a processor lane based on whether a required syllable is addressable by an address in the first or second group.
Type: Application
Filed: July 1, 2008
Publication date: January 8, 2009
Applicant: STMicroelectronics (Research & Development) Limited
Inventor: Tariq Kurd
-
Publication number: 20080301369
Abstract: A method and system of storing to an instruction stream with a multiprocessor or multiple-address-space system is disclosed. A central processing unit may cache instructions in a cache from a page of primary code stored in a memory storage unit. The central processing unit may execute cached instructions from the cache until a serialization operation is executed. The central processing unit may check in a message queue for a notification message indicating potential storing to the page. If the notification message is present in the message queue, cached instructions from the page are invalidated.
Type: Application
Filed: April 30, 2008
Publication date: December 4, 2008
Applicant: Platform Solutions, Inc.
Inventors: Gary A. Woffinden, Paul T. Leisy, Ronald N. Hilton
-
Publication number: 20080276044
Abstract: A processing apparatus which executes a program and performs processes of the program, includes an execution circuit including a plurality of central processing units, each having a respective cache memory, and each of the respective cache memories has an N-way set-associative structure with N ways in which one line is made up of plural words. Each of the respective cache memories includes a data memory array which is simultaneously read-out in multiple-word-widths, and can be read-out using one of a type one read-out and a type two read-out. In the type one read-out, plural words in the same word positions within respective lines are simultaneously read-out from corresponding lines belonging to different ways, and in the type two read-out, plural words making up one line of one way are simultaneously read-out. The cache memory has a first read-out mode and a second read-out mode.
Type: Application
Filed: June 13, 2008
Publication date: November 6, 2008
Applicant: Matsushita Electric Industrial Co., Ltd.
Inventor: Shinji Ozaki
-
Publication number: 20080263326
Abstract: A novel trace cache design and organization to efficiently store and retrieve multi-path traces. A goal is to design a trace cache that is capable of storing multi-path traces without significant duplication in the traces. Furthermore, the effective access latency of these traces is reduced.
Type: Application
Filed: April 28, 2008
Publication date: October 23, 2008
Applicant: International Business Machines Corporation
Inventors: Galen A. Rasche, Jude A. Rivers, Vijayalakshmi Srinivasan
-
Publication number: 20080229022
Abstract: An efficient system for bootstrap loading scans cache lines into a cache store queue during a scan phase, and then transmits the cache lines from the cache store queue to a cache memory array during a functional phase. Scan circuitry stores a given cache line in a set of latches associated with one of a plurality of cache entries in the cache store queue, and passes the cache line from the latch set to the associated cache entry. The cache lines may be scanned from test software that is external to the computer system. Read/claim dispatch logic dispatches store instructions for the cache entries to read/claim machines which write the cache lines to the cache memory array without obtaining write permission, after the read/claim machines evaluate a mode bit which indicates that cache entries in the cache store queue are scanned cache lines. In the illustrative embodiment the cache memory is an L2 cache.
Type: Application
Filed: May 24, 2008
Publication date: September 18, 2008
Inventors: Guy L. Guthrie, Jeffrey W. Kellington, Kevin F. Reick, Hugh Shen
-
Publication number: 20080195818
Abstract: The present invention relates to the management of memory in environments of limited resources, such as those found, for example, in a smart card. More particularly, the invention relates to a method of managing the data storage resources of volatile memory, the object of which is to reduce the size of volatile memory necessary to implement the stack of the system, and thereby to reserve more volatile memory for other needs or procedures of the system or of other applications. When the stack grows and comes close to its established limit, the system carries out a transfer of a stack block located in the volatile memory to an area of non-volatile memory; this transfer allows a compression of the stack, increasing its size in a virtual manner.
Type: Application
Filed: August 10, 2004
Publication date: August 14, 2008
Applicant: SanDisk IL Ltd.
Inventor: Javier Canis Robles
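The spill mechanism in this abstract — when the stack nears its RAM limit, move its oldest block out to non-volatile memory so the stack can keep growing "virtually" — can be sketched as follows. Sizes, layout, and names are illustrative assumptions, not the patent's implementation.

```c
#include <assert.h>
#include <string.h>

#define RAM_STACK 16   /* volatile-RAM stack capacity (words)  */
#define BLOCK      8   /* size of one spilled block (words)    */

typedef struct {
    int ram[RAM_STACK];  /* stack held in scarce volatile RAM  */
    int depth;           /* current stack depth in RAM         */
    int nvm[64];         /* non-volatile spill area            */
    int spilled;         /* words transferred to NVM so far    */
} vstack;

void vpush(vstack *s, int v)
{
    if (s->depth == RAM_STACK) {
        /* near the limit: transfer the oldest block to NVM ... */
        memcpy(&s->nvm[s->spilled], s->ram, BLOCK * sizeof(int));
        s->spilled += BLOCK;
        /* ... and compact the remaining stack to free RAM */
        memmove(s->ram, s->ram + BLOCK,
                (RAM_STACK - BLOCK) * sizeof(int));
        s->depth -= BLOCK;
    }
    s->ram[s->depth++] = v;
}
```

Deep call chains pay an occasional spill cost, but the steady-state RAM reserved for the stack stays at `RAM_STACK` words, leaving the rest of the volatile memory for other applications.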
-
Publication number: 20080183968
Abstract: A computer system includes a nonvolatile memory for storing instructions, a microprocessor for controlling operation of the computer system, and a cache system coupled to the microprocessor and directly connected to the nonvolatile memory. The cache system is for providing a requested instruction to the microprocessor. If the requested instruction is cached in the cache system, the cache system sends the requested instruction to the microprocessor; otherwise, the cache system retrieves the requested instruction from the nonvolatile memory, caches the requested instruction, and sends the requested instruction to the microprocessor.
Type: Application
Filed: January 30, 2007
Publication date: July 31, 2008
Inventor: Chi-Ting Huang
-
Publication number: 20080172529
Abstract: Improved thrashing-aware and self-configuring cache architectures that reduce cache thrashing without increasing cache size or degrading cache hit access time, for a DSP. In one example embodiment, that is accomplished by selectively caching only the instructions having a higher probability of recurrence to considerably reduce cache thrashing.
Type: Application
Filed: January 17, 2007
Publication date: July 17, 2008
Inventors: Tushar Prakash Ringe, Abhijit Giri
-
Publication number: 20080162819
Abstract: A design structure for prefetching instruction lines is provided. The design structure is embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design. The design structure comprises a processor having a level 2 cache and a level 1 cache configured to receive instruction lines from the level 2 cache, wherein each instruction line comprises one or more instructions. The processor also includes a processor core configured to execute instructions retrieved from the level 1 cache, and circuitry configured to fetch a first instruction line from a level 2 cache, identify, in the first instruction line, an address identifying a first data line containing data targeted by a data access instruction contained in the first instruction line or a different instruction line, and prefetch, from the level 2 cache, the first data line using the extracted address.
Type: Application
Filed: March 13, 2008
Publication date: July 3, 2008
Inventor: David A. Luick
-
Publication number: 20080155358
Abstract: A data relay device relays a read request from a source device to a destination device and relays data corresponding to the read request from the destination device to the source device. The data relay device monitors elapsed time from a time point at which a read request is relayed to the destination device. When the elapsed time reaches warning time or error time, the data relay device sends a warning message or an error message to the source device.
Type: Application
Filed: October 18, 2007
Publication date: June 26, 2008
Applicant: Fujitsu Limited
Inventors: Nina Arataki, Sadayuki Ohyama
-
Publication number: 20080126711
Abstract: A method and system of executing stack-based memory reference code. At least some of the illustrated embodiments are methods comprising waking a computer system from a reduced power operational state in which a memory controller loses at least some configuration information, executing memory reference code that utilizes a stack (wherein the memory reference code configures the main memory controller), and passing control of the computer system to an operating system. The time between executing a first instruction after waking the computer system and passing control to the operating system takes less than 200 milliseconds.
Type: Application
Filed: September 25, 2006
Publication date: May 29, 2008
Inventors: Louis B. Hobson, Mark A. Piwonka
-
Publication number: 20080120466
Abstract: A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer.
Type: Application
Filed: November 20, 2006
Publication date: May 22, 2008
Inventor: Klaus Oberlaender