With Look-ahead Addressing Means (epo) Patents (Class 711/E12.004)
-
Patent number: 8151075
Abstract: A method for accessing a memory includes receiving a first address wherein the first address corresponds to a demand fetch, receiving a second address wherein the second address corresponds to a speculative prefetch, providing first data from the memory in response to the demand fetch in which the first data is accessed asynchronous to a system clock, and providing second data from the memory in response to the speculative prefetch in which the second data is accessed synchronous to the system clock. The memory may include a plurality of pipeline stages in which providing the first data in response to the demand fetch is performed such that each pipeline stage is self-timed independent of the system clock and providing the second data in response to the speculative prefetch is performed such that each pipeline stage is timed based on the system clock to be synchronous with the system clock.
Type: Grant
Filed: January 22, 2010
Date of Patent: April 3, 2012
Assignee: Freescale Semiconductor, Inc.
Inventors: Timothy J. Strauss, David W. Chrudimsky, William C. Moyer
-
Publication number: 20120072674
Abstract: A prefetch unit includes a program prefetch address generator that receives memory read requests and, in response to addresses associated with the memory read requests, generates prefetch addresses and stores the prefetch addresses in slots of the prefetch unit buffer. Each slot includes a buffer for storing a prefetch address, two data buffers for storing data that is prefetched using the prefetch address of the slot, and a data buffer selector for alternating the functionality of the two data buffers. A first buffer is used to hold data that is returned in response to a received memory request, and a second buffer is used to hold data from a subsequent prefetch operation having a subsequent prefetch address, such that the data in the first buffer is not overwritten even when the data in the first buffer is still in the process of being read out.
Type: Application
Filed: August 31, 2011
Publication date: March 22, 2012
Inventors: Matthew D. Pierson, Joseph R.M. Zbiciak
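
As a rough illustration of the double-buffered slot described above, the following C sketch alternates two data buffers under a selector bit; the structure fields, buffer size, and function names are illustrative assumptions rather than the patent's implementation.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define LINE_BYTES 16

    /* One prefetch slot: a prefetch address plus two data buffers that
     * alternate roles, so data still being read out is never overwritten. */
    struct slot {
        uint32_t prefetch_addr;
        uint8_t  data[2][LINE_BYTES];
        int      active;            /* buffer currently serving read-out */
    };

    /* Store newly prefetched data in the inactive buffer. */
    static void slot_fill(struct slot *s, uint32_t addr, const uint8_t *line) {
        memcpy(s->data[1 - s->active], line, LINE_BYTES);
        s->prefetch_addr = addr;
    }

    /* Once the active buffer has been fully read out, swap roles. */
    static void slot_swap(struct slot *s) { s->active = 1 - s->active; }

    int main(void) {
        struct slot s = { .active = 0 };
        uint8_t lineA[LINE_BYTES] = {0xAA}, lineB[LINE_BYTES] = {0xBB};

        slot_fill(&s, 0x1000, lineA);   /* data for the demand request */
        slot_swap(&s);                  /* demand data becomes readable */
        slot_fill(&s, 0x1040, lineB);   /* next prefetch lands in the other buffer */
        printf("active buffer byte: 0x%02X\n", s.data[s.active][0]);      /* 0xAA */
        printf("pending buffer byte: 0x%02X\n", s.data[1 - s.active][0]); /* 0xBB */
        return 0;
    }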
-
Patent number: 8131920
Abstract: A data formatting system and method to improve data efficiency and integrity in a hard disk are disclosed. One embodiment provides a disk drive system having a plurality of lookup tables which store a plurality of randomizer seeds which may be dynamically encoded into the preamble field of a customer data block if the customer data is deemed marginal. Encoding the randomizer seed into the preamble field prevents adjacent data track mis-writes and mis-reads.
Type: Grant
Filed: December 6, 2007
Date of Patent: March 6, 2012
Assignee: Hitachi Global Storage Technologies, Netherlands B.V.
Inventors: Ryoheita Hattori, David H. Jen, Bernd Lamberts, Remmelt Pit, Kris Schouterden
-
Patent number: 8117398
Abstract: A prefetch scheme in a shared memory multiprocessor disables the prefetch when an address falls within a powered down memory bank. A register stores a bit corresponding to each independently powered memory bank to determine whether that memory bank is prefetchable. When a memory bank is powered down, all bits corresponding to the pages in this row are masked so that they appear as non-prefetchable pages to the prefetch access generation engine preventing an access to any page in this memory bank. A powered down status bit corresponding to the memory bank is used for masking the output of the prefetch enable register. The prefetch enable register is unmodified. This also seamlessly restores the prefetch property of the memory banks when the corresponding memory row is powered up.
Type: Grant
Filed: January 20, 2009
Date of Patent: February 14, 2012
Assignee: Texas Instruments Incorporated
Inventors: Sajish Sajayan, Alok Anand, Sudhakar Surendran
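
A minimal C sketch of the masking idea, assuming a 16-bank system and one bit per bank (names and register widths are invented for illustration): the prefetch enable register itself is left untouched, while the powered-down status masks its output so prefetchability is restored automatically on power-up.

    #include <stdio.h>
    #include <stdint.h>

    /* One bit per independently powered memory bank. */
    static uint16_t prefetch_enable = 0xFFFF;  /* software-visible, never modified here */
    static uint16_t powered_down    = 0x0000;  /* set when a bank (memory row) powers down */

    /* Effective prefetchability: enable bits masked by power status. */
    static int bank_prefetchable(int bank) {
        uint16_t effective = prefetch_enable & (uint16_t)~powered_down;
        return (effective >> bank) & 1u;
    }

    int main(void) {
        powered_down |= 1u << 3;                       /* bank 3 powers down */
        printf("bank 3: %d\n", bank_prefetchable(3));  /* 0 -> no prefetch issued */
        powered_down &= ~(1u << 3);                    /* bank 3 powers back up */
        printf("bank 3: %d\n", bank_prefetchable(3));  /* 1 -> prefetch restored */
        return 0;
    }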
-
Publication number: 20120030431
Abstract: A system for prefetching memory in caching systems includes a processor that generates requests for data. A cache of a first level stores memory lines retrieved from a lower level memory in response to references to addresses generated by the processor's requests for data. A prefetch buffer is used to prefetch an adjacent memory line from the lower level memory in response to a request for data. The adjacent memory line is a memory line that is adjacent to a first memory line that is associated with an address of the request for data. An indication that a memory line associated with an address associated with the requested data has been prefetched is stored. A prefetched memory line is transferred to the cache of the first level in response to the stored indication that a memory line associated with an address associated with the requested data has been prefetched.
Type: Application
Filed: July 27, 2010
Publication date: February 2, 2012
Inventors: Timothy D. ANDERSON, Kai Chirca
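
The sketch below is a loose software model with invented structure names and sizes of the flow the abstract describes: a demand miss also prefetches the adjacent line into a small buffer with an indication that it was prefetched, and a later request that matches the indication moves the line into the first-level cache.

    #include <stdio.h>
    #include <stdint.h>

    #define LINE      64u
    #define PB_SLOTS  4

    /* Minimal model: L1 holds recently used line addresses, the prefetch
     * buffer holds the adjacent line fetched alongside each demand miss,
     * plus an indication bit that it was prefetched. */
    static uint32_t l1[8];          static int l1_valid[8];
    static uint32_t pb[PB_SLOTS];   static int pb_valid[PB_SLOTS];

    static int in_l1(uint32_t line) {
        for (int i = 0; i < 8; i++) if (l1_valid[i] && l1[i] == line) return 1;
        return 0;
    }
    static void l1_fill(uint32_t line) {
        static int next; l1[next % 8] = line; l1_valid[next % 8] = 1; next++;
    }
    static void access_addr(uint32_t addr) {
        uint32_t line = addr / LINE;
        if (in_l1(line)) { printf("L1 hit   %#x\n", addr); return; }
        for (int i = 0; i < PB_SLOTS; i++) {
            if (pb_valid[i] && pb[i] == line) {          /* indication says prefetched */
                pb_valid[i] = 0; l1_fill(line);          /* transfer into first level */
                printf("PB hit   %#x (moved to L1)\n", addr); return;
            }
        }
        l1_fill(line);                                   /* demand fetch from memory */
        static int next; pb[next % PB_SLOTS] = line + 1; /* prefetch the adjacent line */
        pb_valid[next % PB_SLOTS] = 1; next++;
        printf("miss     %#x (prefetched adjacent line)\n", addr);
    }

    int main(void) {
        access_addr(0x1000);   /* miss, adjacent line prefetched */
        access_addr(0x1040);   /* prefetch-buffer hit, moved into L1 */
        access_addr(0x1040);   /* now an L1 hit */
        return 0;
    }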
-
Publication number: 20110271058
Abstract: A method of identifying a cache line of a cache memory (180) for replacement is disclosed. Each cache line in the cache memory has a stored sequence number and a stored transaction data stream identifying label. A request (e.g., 400) associated with a label identifying a transaction data stream is received. The label corresponds to the stored transaction data stream identifying label of the cache line. The stored sequence number of the cache line is compared with a response sequence number. The response sequence number is associated with the stored transaction data stream identifying label of the cache line. The cache line is identified for replacement based on the comparison.
Type: Application
Filed: April 26, 2011
Publication date: November 3, 2011
Applicant: CANON KABUSHIKI KAISHA
Inventor: David Charles Ross
-
Publication number: 20110264864
Abstract: In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache.
Type: Application
Filed: June 21, 2011
Publication date: October 27, 2011
Inventors: Sudarshan Kadambi, Puneet Kumar, Po-Yung Chang
-
Publication number: 20110238922
Abstract: A data prefetcher in a microprocessor having a cache memory receives memory accesses each to an address within a memory block. The access addresses are non-monotonically increasing or decreasing as a function of time. As the accesses are received, the prefetcher maintains a largest address and a smallest address of the accesses and counts of changes to the largest and smallest addresses and maintains a history of recently accessed cache lines implicated by the access addresses within the memory block. The prefetcher also determines a predominant access direction based on the counts and determines a predominant access pattern based on the history. The prefetcher also prefetches into the cache memory, in the predominant access direction according to the predominant access pattern, cache lines of the memory block which the history indicates have not been recently accessed.
Type: Application
Filed: February 24, 2011
Publication date: September 29, 2011
Applicant: VIA Technologies, Inc.
Inventors: Rodney E. Hooker, John Michael Greer
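
A rough C model of the bookkeeping described: the tracker keeps the largest and smallest accessed line, counts changes to each to infer the predominant direction, and records recently accessed lines in a history bitmap. The structure and function names, block size, and history encoding are assumptions for illustration, not the patent's microarchitecture.

    #include <stdio.h>
    #include <stdint.h>

    #define BLOCK_LINES 64   /* cache lines tracked per memory block */

    struct block_tracker {
        uint32_t min_line, max_line;
        int      min_changes, max_changes;     /* direction evidence */
        uint64_t history;                      /* bit per recently accessed line */
        int      initialized;
    };

    static void track(struct block_tracker *t, uint32_t line) {
        if (!t->initialized) {
            t->min_line = t->max_line = line; t->initialized = 1;
        } else if (line > t->max_line) { t->max_line = line; t->max_changes++; }
        else if (line < t->min_line)   { t->min_line = line; t->min_changes++; }
        t->history |= 1ull << (line % BLOCK_LINES);
    }

    /* +1 = predominantly upward, -1 = downward, 0 = undecided */
    static int direction(const struct block_tracker *t) {
        if (t->max_changes > t->min_changes) return 1;
        if (t->min_changes > t->max_changes) return -1;
        return 0;
    }

    int main(void) {
        struct block_tracker t = {0};
        uint32_t accesses[] = {10, 12, 11, 15, 17, 16, 20};  /* non-monotonic but rising */
        for (unsigned i = 0; i < sizeof accesses / sizeof *accesses; i++)
            track(&t, accesses[i]);
        printf("direction %+d, history %#llx\n", direction(&t),
               (unsigned long long)t.history);
        return 0;
    }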
-
Publication number: 20110238923
Abstract: A microprocessor includes a first-level cache memory, a second-level cache memory, and a data prefetcher that detects a predominant direction and pattern of recent memory accesses presented to the second-level cache memory and prefetches cache lines into the second-level cache memory based on the predominant direction and pattern. The data prefetcher also receives from the first-level cache memory an address of a memory access received by the first-level cache memory, wherein the address implicates a cache line. The data prefetcher also determines one or more cache lines indicated by the pattern beyond the implicated cache line in the predominant direction. The data prefetcher also causes the one or more cache lines to be prefetched into the first-level cache memory.
Type: Application
Filed: February 24, 2011
Publication date: September 29, 2011
Applicant: VIA Technologies, Inc.
Inventors: Rodney E. Hooker, John Michael Greer
-
Patent number: 7975107
Abstract: Software assists a processor subsystem in making cache replacement decisions by providing an intermediary with information regarding how instructions and/or data of a working set are expected to be used and accessed by the software. The intermediary uses this information along with its knowledge of system requirements, policy and the cache configuration to determine cache usage and management hints for the working sets. The cache usage and management hints are passed by the intermediary to the processor subsystem.
Type: Grant
Filed: June 22, 2007
Date of Patent: July 5, 2011
Assignee: Microsoft Corporation
Inventors: Bradford Beckmann, Bradley M. Waters
-
Publication number: 20110145508
Abstract: Read-ahead of data blocks in a storage system is performed based on a policy. The policy is stochastically selected from a plurality of policies with respect to probabilities. The probabilities are calculated based on past performances, also referred to as rewards. Policies which induce better performance may be given precedence over other policies. However, the other policies may also be utilized to reevaluate them. A balance between exploration of different policies and exploitation of previously discovered good policies may be achieved.
Type: Application
Filed: December 15, 2009
Publication date: June 16, 2011
Applicant: International Business Machines Corporation
Inventors: Dan Pelleg, Eran Raichstein, Amir Ronen
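
One way to realize the stochastic selection is roulette-wheel sampling weighted by running reward estimates, sketched below; the three-policy setup, the reward update rule, and all names are assumptions rather than details from the application. Exploitation and exploration are balanced because higher-reward policies are chosen more often while lower-reward policies keep a nonzero probability of being re-tried.

    #include <stdio.h>
    #include <stdlib.h>

    #define NPOLICY 3

    /* Running reward estimate per readahead policy. */
    static double reward[NPOLICY] = {1.0, 1.0, 1.0};

    /* Pick a policy with probability proportional to its reward, so good
     * policies are exploited while others are still occasionally explored. */
    static int pick_policy(void) {
        double total = 0.0, r;
        for (int i = 0; i < NPOLICY; i++) total += reward[i];
        r = (double)rand() / RAND_MAX * total;
        for (int i = 0; i < NPOLICY; i++) {
            if (r < reward[i]) return i;
            r -= reward[i];
        }
        return NPOLICY - 1;
    }

    int main(void) {
        srand(42);
        for (int step = 0; step < 10; step++) {
            int p = pick_policy();
            /* Hypothetical observed benefit of the readahead, e.g. hit-rate gain. */
            double observed = (p == 1) ? 2.0 : 0.5;
            reward[p] = 0.9 * reward[p] + 0.1 * observed;   /* exponential averaging */
            printf("step %d chose policy %d (reward now %.2f)\n", step, p, reward[p]);
        }
        return 0;
    }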
-
Patent number: 7958316
Abstract: A method, processor, and data processing system for dynamically adjusting a prefetch stream priority based on the consumption rate of the data by the processor. The method includes a prefetch engine issuing a prefetch request of a first prefetch stream to fetch one or more data from the memory subsystem. The first prefetch stream has a first assigned priority that determines a relative order for scheduling prefetch requests of the first prefetch stream relative to other prefetch requests of other prefetch streams. Based on receipt of a processor demand for the data before the data returns to the cache, or return of the data a long time before receiving the processor demand, logic of the prefetch engine dynamically changes the first assigned priority to a second higher or lower priority, which priority is subsequently utilized to schedule and issue a next prefetch request of the first prefetch stream.
Type: Grant
Filed: February 1, 2008
Date of Patent: June 7, 2011
Assignee: International Business Machines Corporation
Inventors: William E. Speight, Lixin Zhang
-
Patent number: 7930507
Abstract: A method of performing a storage operation includes: receiving a storage command, estimating the completion time of the associated storage operation, and providing the estimated completion time to a processor.
Type: Grant
Filed: July 22, 2007
Date of Patent: April 19, 2011
Assignee: SanDisk IL Ltd.
Inventor: Nir Perry
-
Patent number: 7873792
Abstract: A system and method of improved handling of large pages in a virtual memory system. A data memory management unit (DMMU) detects sequential access of a first sub-page and a second sub-page out of a set of sub-pages that comprise a same large page. Then, the DMMU receives a request for the first sub-page and in response to such a request, the DMMU instructs a pre-fetch engine to pre-fetch at least the second sub-page if the number of detected sequential accesses equals or exceeds a predetermined value.
Type: Grant
Filed: January 17, 2008
Date of Patent: January 18, 2011
Assignee: International Business Machines Corporation
Inventors: Vaijayanthimala K. Anand, Sandra K. Johnson
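
A small C sketch of the trigger condition, assuming a counter of consecutive sequential sub-page accesses and an invented threshold; the DMMU/pre-fetch engine interface is reduced here to a return value naming the sub-page to pre-fetch.

    #include <stdio.h>

    #define SUBPAGES_PER_LARGE_PAGE 16
    #define THRESHOLD 2          /* sequential accesses required before prefetching */

    struct large_page_state {
        int last_subpage;        /* -1 until the first access is seen */
        int sequential_count;
    };

    /* Returns the sub-page index to prefetch, or -1 if none. */
    static int on_subpage_access(struct large_page_state *s, int subpage) {
        if (s->last_subpage >= 0 && subpage == s->last_subpage + 1)
            s->sequential_count++;
        else
            s->sequential_count = 0;
        s->last_subpage = subpage;
        if (s->sequential_count >= THRESHOLD && subpage + 1 < SUBPAGES_PER_LARGE_PAGE)
            return subpage + 1;  /* instruct the pre-fetch engine */
        return -1;
    }

    int main(void) {
        struct large_page_state s = { .last_subpage = -1 };
        int trace[] = {0, 1, 2, 3};
        for (unsigned i = 0; i < 4; i++) {
            int pf = on_subpage_access(&s, trace[i]);
            printf("access sub-page %d -> prefetch %d\n", trace[i], pf);
        }
        return 0;
    }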
-
Patent number: 7840761
Abstract: A processor executes one or more prefetch threads and one or more main computing threads. Each prefetch thread executes instructions ahead of a main computing thread to retrieve data for the main computing thread, such as data that the main computing thread may use in the immediate future. Data is retrieved for the prefetch thread and stored in a memory, such as data fetched from an external memory and stored in a buffer. A prefetch controller determines whether the memory is full. If the memory is full, a cache controller stalls at least one prefetch thread. The stall may continue until at least some of the data is transferred from the memory to a cache for use by at least one main computing thread. The stalled prefetch thread or threads are then reactivated.
Type: Grant
Filed: April 1, 2005
Date of Patent: November 23, 2010
Assignee: STMicroelectronics, Inc.
Inventors: Osvaldo M. Colavin, Davide Rizzo
-
Publication number: 20100287354
Abstract: Various embodiments for adaptive reorganization of a virtual storage access method (VSAM) data set are provided. In one exemplary embodiment, upon each control interval (CI) split of a plurality of CI splits occurring over a period of time, historical data including a key value for a record causing each CI split is recorded in a data repository. The historical data is analyzed with a predictive algorithm to determine an amount of free space to be allocated to each of a plurality of control intervals generated pursuant to the adaptive reorganization. The predictive algorithm allocates a greater percentage of the free space to a first location of the VVDS having a larger proportion of historically placed key values than a second location of the VVDS having a smaller proportion of the historically placed key values.
Type: Application
Filed: May 5, 2009
Publication date: November 11, 2010
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Douglas L. LEHR, Franklin E. McCUNE, David C. REED, Max D. SMITH
-
Publication number: 20100287341
Abstract: A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism is configured to issue a look-ahead load command on a system bus to read a data value from a target address and perform a comparison operation to determine whether the data value at the target address indicates that an event for which a thread is waiting has occurred. In response to the comparison resulting in a determination that the event has not occurred, the wake-and-go engine populates the wake-and-go storage array with the target address and snoops the target address on the system bus.
Type: Application
Filed: February 1, 2008
Publication date: November 11, 2010
Inventors: Ravi K. Arimilli, Satya P. Sharma, Randal C. Swanberg
-
Patent number: 7827359
Abstract: Systems and/or methods that facilitate reading data from a memory component associated with a network are presented. A pre-fetch generation component generates a pre-fetch request based in part on a received read command. To facilitate a reduction in latency associated with transmitting the read command via an interconnect network component to which the memory component is connected, the pre-fetch request is transmitted directly to the memory component bypassing a portion of the interconnect network component. The memory component specified in the pre-fetch request receives the pre-fetch request and reads the data stored therein, and can store the read data in a buffer and/or transmit the read data to the requester via the interconnect network component, even though the read command has not yet reached the memory component. The read data is verified by comparison with the read command at a convergence point.
Type: Grant
Filed: December 14, 2007
Date of Patent: November 2, 2010
Assignee: Spansion LLC
Inventor: Richard Carmichael
-
Publication number: 20100217937
Abstract: A data processing apparatus is described which comprises a processor operable to execute a sequence of instructions and a cache memory having a plurality of cache lines operable to store data values for access by the processor when executing the sequence of instructions. A cache controller is also provided which comprises preload circuitry operable in response to a streaming preload instruction received at the processor to store data values from a main memory into one or more cache lines of the cache memory. The cache controller also comprises identification circuitry operable in response to the streaming preload instruction to identify one or more cache lines of the cache memory for preferential reuse.
Type: Application
Filed: February 20, 2009
Publication date: August 26, 2010
Applicant: ARM LIMITED
Inventors: Dominic Hugo Symes, Jonathan Sean Callan, Hedley James Francis, Paul Gilbert Meyer
-
Publication number: 20100169603
Abstract: A method of performing a storage operation includes: receiving a storage command, estimating the completion time of the associated storage operation, and providing the estimated completion time to a processor.
Type: Application
Filed: July 22, 2007
Publication date: July 1, 2010
Applicant: SanDisk Ltd.
Inventor: Nir Perry
-
Publication number: 20100161933
Abstract: In a particular embodiment, a system is disclosed that includes a controller to read data from and write data to a first storage medium. The controller is adapted to monitor logical block addresses (LBAs) of each read operation from the first storage medium and to selectively store files associated with the monitored LBAs that are less than a predetermined length at a second storage medium to enhance performance of applications associated with the LBAs.
Type: Application
Filed: December 19, 2008
Publication date: June 24, 2010
Applicant: Seagate Technology LLC
Inventors: John Edward Moon, Karl L. Enarson
-
Patent number: 7707359
Abstract: One embodiment of the present invention provides a system which facilitates selective prefetching based on resource availability. During operation, the system executes instructions in a processor. While executing the instructions, the system monitors the availability of one or more system resources and dynamically adjusts an availability indicator for each system resource based on the current availability of the system resource. Upon encountering a prefetch instruction which involves the system resource, the system checks the availability indicator. If the availability indicator indicates that the system resource is not sufficiently available, the system terminates the execution of the prefetch instruction, whereby terminating execution prevents prefetch instructions from overwhelming the system resource.
Type: Grant
Filed: March 27, 2006
Date of Patent: April 27, 2010
Assignee: Oracle America, Inc.
Inventors: Wayne Mesard, Paul Caprioli
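
A compact sketch of the gating idea, with the monitored resource assumed to be outstanding-miss buffer slots and the threshold value invented for illustration: the prefetch is simply dropped when the availability indicator shows insufficient headroom, while demand accesses are unaffected.

    #include <stdio.h>

    #define MISS_BUF_SLOTS 8
    #define PREFETCH_MIN_FREE 2   /* require this much headroom before prefetching */

    static int miss_buf_used;     /* dynamically updated availability indicator */

    /* A demand load always proceeds; a prefetch is terminated when the tracked
     * resource (here, outstanding-miss slots) is not sufficiently available. */
    static int issue_prefetch(unsigned addr) {
        if (MISS_BUF_SLOTS - miss_buf_used < PREFETCH_MIN_FREE) {
            printf("prefetch %#x dropped: resource busy\n", addr);
            return 0;
        }
        miss_buf_used++;
        printf("prefetch %#x issued\n", addr);
        return 1;
    }

    int main(void) {
        miss_buf_used = 3;  issue_prefetch(0x2000);   /* enough headroom: issued */
        miss_buf_used = 7;  issue_prefetch(0x2040);   /* nearly full: terminated */
        return 0;
    }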
-
Publication number: 20090300320
Abstract: A processing device includes a memory and a processor that generates a plurality of read commands for reading read data from the memory and a plurality of write commands for writing write data to the memory. A prefetch memory interface prefetches prefetch data to a prefetch buffer, retrieves the read data from the prefetch buffer when the read data is included in the prefetch buffer, and retrieves the read data from the memory when the read data is not included in the prefetch buffer, wherein the prefetch buffer is managed via a linked list.
Type: Application
Filed: May 28, 2008
Publication date: December 3, 2009
Inventor: Jing Zhang
-
Publication number: 20090271576
Abstract: There is a need for providing a data processor capable of easily prefetching data from a wide range. A central processing unit is capable of performing a specified instruction that adds an offset to a value of a register to generate an effective address for data. This register can be assigned an intended value in accordance with execution of an instruction. A buffer maintains part of instruction streams and data streams stored in memory. The buffer includes cache memories for storing the instruction stream and the data stream. From the memory, the buffer prefetches a data stream containing data corresponding to an effective address designated by the specified instruction stored in the cache memory. A data prefetch operation is easy because a data stream is prefetched by finding the specified instruction in the fetched instruction stream. Data can be prefetched from a wider range than is possible with a PC-relative load instruction.
Type: Application
Filed: March 30, 2009
Publication date: October 29, 2009
Inventors: Tetsuya Yamada, Naoki Kato, Kesami Hagiwara
-
Publication number: 20090198894
Abstract: A method of updating a cache in an integrated circuit is provided. The integrated circuit incorporates the cache, memory and a memory interface connected to the cache and memory. Following a cache miss, the method fetches, using the memory interface, first data associated with the cache miss and second data from the memory, where the second data is stored in the memory adjacent the first data, and updates the cache with the fetched first and second data via the memory interface. The cache includes instruction and data cache, the method performing arbitration between instruction cache misses and data cache misses such that the fetching and updating are performed for data cache misses before instruction cache misses.
Type: Application
Filed: April 13, 2009
Publication date: August 6, 2009
Inventor: Simon Robert Walmsley
-
Publication number: 20090198950
Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
Type: Application
Filed: February 1, 2008
Publication date: August 6, 2009
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
-
Publication number: 20090198905
Abstract: A technique for data prefetching using indirect addressing includes monitoring data pointer values, associated with an array, in an access stream to a memory. The technique determines whether a pattern exists in the data pointer values. A prefetch table is then populated with respective entries that correspond to respective array address/data pointer pairs based on a predicted pattern in the data pointer values. Respective data blocks (e.g., respective cache lines) are then prefetched (e.g., from the memory or another memory) based on the respective entries in the prefetch table.
Type: Application
Filed: February 1, 2008
Publication date: August 6, 2009
Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
-
Publication number: 20090187714
Abstract: A memory module includes a memory hub coupled to several memory devices. The memory hub includes history logic that predicts on the basis of read memory requests which addresses in the memory devices from which data are likely to be subsequently read. The history logic applies prefetch suggestions corresponding to the predicted addresses to a memory sequencer, which uses the prefetch suggestions to generate prefetch requests that are coupled to the memory devices. Data read from the memory devices responsive to the prefetch suggestions are stored in a prefetch buffer. Tag logic stores prefetch addresses corresponding to addresses from which data have been prefetched. The tag logic compares the memory request addresses to the prefetch addresses to determine if the requested read data are stored in the prefetch buffer. If so, the requested data are read from the prefetch buffer. Otherwise, the requested data are read from the memory devices.
Type: Application
Filed: August 4, 2008
Publication date: July 23, 2009
Applicant: Micron Technology, Inc.
Inventors: Terry R. Lee, Joseph Jeddeloh
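
A software stand-in for the tag-compare path, with invented slot counts and a dummy DRAM read: it shows prefetched data being captured together with its address tag, and a later request address being matched against the tags to decide whether the prefetch buffer or the memory devices supply the data.

    #include <stdio.h>
    #include <stdint.h>

    #define PF_SLOTS 4

    /* Tag logic: remember which addresses have been prefetched so a later
     * read request can be satisfied from the prefetch buffer. */
    static uint32_t pf_tag[PF_SLOTS];
    static uint32_t pf_data[PF_SLOTS];
    static int      pf_valid[PF_SLOTS];

    static uint32_t read_dram(uint32_t addr) { return addr ^ 0xDEADBEEF; } /* stand-in */

    static void prefetch(uint32_t addr) {
        static int next;
        int i = next++ % PF_SLOTS;
        pf_tag[i] = addr; pf_data[i] = read_dram(addr); pf_valid[i] = 1;
    }

    static uint32_t memory_read(uint32_t addr) {
        for (int i = 0; i < PF_SLOTS; i++)
            if (pf_valid[i] && pf_tag[i] == addr) {
                printf("hit in prefetch buffer: %#x\n", addr);
                return pf_data[i];
            }
        printf("read from memory devices: %#x\n", addr);
        return read_dram(addr);
    }

    int main(void) {
        prefetch(0x100);               /* history logic predicted this address */
        (void)memory_read(0x100);      /* served from the prefetch buffer */
        (void)memory_read(0x200);      /* not predicted: goes to the devices */
        return 0;
    }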
-
Publication number: 20090172293
Abstract: A method includes detecting a cache miss. The method further includes, in response to detecting the cache miss, traversing a plurality of linked memory nodes in a memory storage structure being used to store data to determine if the memory storage structure is a binary tree. The method further includes, in response to determining that the memory storage structure is a binary tree, prefetching data from the memory storage structure. An associated machine readable medium is also disclosed.
Type: Application
Filed: December 28, 2007
Publication date: July 2, 2009
Inventor: Mingqiu Sun
-
Publication number: 20090150618
Abstract: A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure generally includes a computer system that includes a CPU, a storage device, circuitry for providing a speculative access threshold corresponding to a selected percentage of the total number of accesses to the storage device that can be speculatively issued, and circuitry for intermixing demand accesses and speculative accesses in accordance with the speculative access threshold.
Type: Application
Filed: May 5, 2008
Publication date: June 11, 2009
Inventors: James J. Allen, JR., Steven K. Jenkins, James A. Mossman, Michael R. Trombley
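
The threshold check can be illustrated with simple counters, as in the sketch below; the 25% figure and the counter names are assumptions, chosen only to show how speculative accesses are intermixed with demand accesses up to a selected share of the total.

    #include <stdio.h>

    #define SPEC_THRESHOLD_PCT 25   /* speculative share of all storage accesses */

    static long total_accesses, speculative_accesses;

    /* Allow a speculative access only while its share stays at or below
     * the configured percentage; demand accesses always go through. */
    static int may_issue_speculative(void) {
        if (total_accesses == 0) return 1;
        return (speculative_accesses * 100) / total_accesses <= SPEC_THRESHOLD_PCT;
    }

    static void issue(int speculative) {
        if (speculative && !may_issue_speculative()) {
            printf("speculative access deferred (over threshold)\n");
            return;
        }
        total_accesses++;
        if (speculative) speculative_accesses++;
        printf("%s access issued (%ld/%ld speculative)\n",
               speculative ? "speculative" : "demand",
               speculative_accesses, total_accesses);
    }

    int main(void) {
        for (int i = 0; i < 6; i++) issue(0);   /* demand accesses */
        for (int i = 0; i < 4; i++) issue(1);   /* intermixed speculative accesses */
        return 0;
    }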
-
Publication number: 20090150637
Abstract: A data formatting system and method to improve data efficiency and integrity in a hard disk are disclosed. One embodiment provides a disk drive system having a plurality of lookup tables which store a plurality of randomizer seeds which may be dynamically encoded into the preamble field of a customer data block if the customer data is deemed marginal. Encoding the randomizer seed into the preamble field prevents adjacent data track mis-writes and mis-reads.
Type: Application
Filed: December 6, 2007
Publication date: June 11, 2009
Inventors: Ryoheita Hattori, David H. Jen, Bernd Lamberts, Remmelt Pit, Kris Schouterden
-
Publication number: 20090138661
Abstract: A computer system and method. In one embodiment, a computer system comprises a processor and a cache memory. The processor executes a prefetch instruction to prefetch a block of data words into the cache memory. In one embodiment, the cache memory comprises a plurality of cache levels. The processor selects one of the cache levels based on a value of a prefetch instruction parameter indicating the temporal locality of data to be prefetched. In a further embodiment, individual words are prefetched from non-contiguous memory addresses. A single execution of the prefetch instruction allows the processor to prefetch multiple blocks into the cache memory. The number of data words in each block, the number of blocks, an address interval between each data word of each block, and an address interval between each block to be prefetched are indicated by parameters of the prefetch instruction.
Type: Application
Filed: November 26, 2007
Publication date: May 28, 2009
Inventor: Gary Lauterbach
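
The parameterization lends itself to a short address-expansion sketch; the struct fields mirror the parameters named in the abstract, while the field names, example values, and printed form are illustrative assumptions rather than the instruction's actual encoding.

    #include <stdio.h>
    #include <stdint.h>

    /* Parameters a block-prefetch instruction of this kind might encode. */
    struct prefetch_params {
        uint64_t base;          /* starting address */
        unsigned nblocks;       /* number of blocks to prefetch */
        unsigned words_per_blk; /* data words in each block */
        unsigned word_stride;   /* address interval between words (bytes) */
        unsigned block_stride;  /* address interval between blocks (bytes) */
        unsigned cache_level;   /* chosen from the temporal-locality hint */
    };

    /* Expand the single instruction into the individual word addresses
     * that would be fetched into the selected cache level. */
    static void expand(const struct prefetch_params *p) {
        for (unsigned b = 0; b < p->nblocks; b++)
            for (unsigned w = 0; w < p->words_per_blk; w++)
                printf("prefetch L%u <- %#llx\n", p->cache_level,
                       (unsigned long long)(p->base + b * p->block_stride
                                                    + w * p->word_stride));
    }

    int main(void) {
        struct prefetch_params p = { .base = 0x10000, .nblocks = 2,
                                     .words_per_blk = 3, .word_stride = 32,
                                     .block_stride = 4096, .cache_level = 2 };
        expand(&p);   /* six non-contiguous words, two blocks, one instruction */
        return 0;
    }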
-
Publication number: 20090043985
Abstract: A data processing device employs a first translation look-aside buffer (TLB) to translate virtual addresses to physical addresses. If a virtual address to be translated is not located in the first TLB, the physical address is requested from a set of page tables. When the data processing device is in a hypervisor mode, a second TLB is accessed in response to the request to access the page tables. If the virtual address is located in the second TLB, the hypervisor page tables are bypassed and the second TLB provides a physical address or information to access another table in the set of page tables. By bypassing the hypervisor page tables, the time to translate an address in the hypervisor mode is reduced, thereby improving the efficiency of the data processing device.
Type: Application
Filed: August 6, 2007
Publication date: February 12, 2009
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: Michael Edward Tuuk, Michael Clark
-
Publication number: 20090019239
Abstract: A memory controller receives read requests from a processor into a read queue. The memory controller dynamically modifies an order of servicing the requests based on how many pending requests are in the read queue. When the read queue is relatively empty, requests are serviced oldest first to minimize latency. When the read queue becomes progressively fuller, requests are progressively, using three or more memory access modes, serviced in a manner that increases throughput on a memory bus to reduce the likelihood that the read queue will become full and further requests from the processor would have to be halted.
Type: Application
Filed: July 10, 2007
Publication date: January 15, 2009
Inventors: Brian David Allison, Wayne Barrett, Joseph Allen Kirscht, Elizabeth A. McGlone, Brian T. Vanderpool
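
A toy mode selector along these lines is sketched below; the occupancy breakpoints and the names of the three modes are assumptions, meant only to show the progression from latency-oriented to throughput-oriented servicing as the queue fills.

    #include <stdio.h>

    #define QUEUE_DEPTH 32

    /* Three access modes, chosen from read-queue occupancy: favor latency
     * when the queue is shallow, favor memory-bus throughput as it fills. */
    enum mode { OLDEST_FIRST, SAME_PAGE_FIRST, MAX_THROUGHPUT };

    static enum mode pick_mode(int pending) {
        if (pending <= QUEUE_DEPTH / 4)     return OLDEST_FIRST;    /* minimize latency */
        if (pending <= 3 * QUEUE_DEPTH / 4) return SAME_PAGE_FIRST; /* blend of both */
        return MAX_THROUGHPUT;                                      /* avoid a full queue */
    }

    int main(void) {
        const char *names[] = {"oldest-first", "same-page-first", "max-throughput"};
        int samples[] = {2, 12, 30};
        for (int i = 0; i < 3; i++)
            printf("%2d pending reads -> %s\n", samples[i], names[pick_mode(samples[i])]);
        return 0;
    }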
-
Publication number: 20080301324
Abstract: A cache receives a request from an instruction execution unit, searches for necessary data, outputs the data to the instruction execution unit if there is a cache hit, and instructs a request storage unit to request a move-in of the data if a cache miss occurs. The request storage unit stores therein the request corresponding to the instruction of the cache while the requested process is being executed. A REQID assignment unit reads the request stored in the request storage unit, selects an unused REQID from a REQID table, and assigns the unused REQID to the read request. The REQID is an identification number of the request based on the number of requests set as the maximum number that can be received simultaneously by a system controller on the response side.
Type: Application
Filed: July 31, 2008
Publication date: December 4, 2008
Applicant: FUJITSU LIMITED
Inventor: Masaki Ukai
-
Publication number: 20080288751
Abstract: A processor system (100) includes a central processing unit (102) and a prefetch engine (110). The prefetch engine (110) is coupled to the central processing unit (102). The prefetch engine (110) is configured to detect, when data associated with the central processing unit (102) is read from a memory (114), a stride pattern in an address stream based upon whether sums of a current stride and a previous stride are equal for a number of consecutive reads. The prefetch engine (110) is also configured to prefetch, for the central processing unit (102), data from the memory (114) based on the detected stride pattern.
Type: Application
Filed: May 17, 2007
Publication date: November 20, 2008
Applicant: ADVANCED MICRO DEVICES, INC.
Inventor: Andrej Kocev
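
The sum-of-consecutive-strides test can be shown in a few lines of C; the confirmation count, structure names, and the example alternating-stride trace are assumptions, not taken from the application. Summing the current and previous stride lets an alternating pattern such as +8, +24, +8, +24 be recognized as a repeating step of +32.

    #include <stdio.h>
    #include <stdint.h>

    #define CONFIRM 3   /* consecutive equal sums required to declare a pattern */

    struct stride_detector {
        uint64_t  last_addr;
        long long prev_stride, prev_sum;
        int       seen, matches;
    };

    /* Returns the detected stride-pair sum once confirmed, otherwise 0. */
    static long long observe(struct stride_detector *d, uint64_t addr) {
        long long stride = (long long)addr - (long long)d->last_addr;
        long long sum = stride + d->prev_stride;
        if (d->seen >= 2) {
            if (sum == d->prev_sum) d->matches++;
            else                    d->matches = 0;
        }
        d->last_addr = addr; d->prev_stride = stride; d->prev_sum = sum; d->seen++;
        return (d->matches >= CONFIRM) ? sum : 0;
    }

    int main(void) {
        struct stride_detector d = {0};
        uint64_t trace[] = {0x1000, 0x1008, 0x1020, 0x1028, 0x1040, 0x1048, 0x1060};
        for (unsigned i = 0; i < sizeof trace / sizeof *trace; i++) {
            long long s = observe(&d, trace[i]);
            if (s) printf("addr %#llx: pattern confirmed, prefetch ahead by %lld bytes\n",
                          (unsigned long long)trace[i], s);
        }
        return 0;
    }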
-
Publication number: 20080288724
Abstract: A plurality of new snoop transaction types are described. Some include address information in the requests, and others include cache entry information in the requests. Some responses include tag address information, and some do not. Some provide tag address content on the data bus lines during the data portion of the transaction. These new snoop transaction types are very helpful during debug of a data processing system.
Type: Application
Filed: May 14, 2007
Publication date: November 20, 2008
Inventors: William C. Moyer, Michael D. Snyder
-
Publication number: 20080184006
Abstract: A method and system for page preloading using a control flow are provided. The method includes extracting preload page information from one or more pages in a first program code, and generating a second program code including the first program code and the extracted preload page information. The second program code is stored in non-volatile memory. When a page from the second program code stored in the non-volatile memory is loaded into main memory, one or more pages are preloaded from the non-volatile memory based on the preload page information stored in the loaded page.
Type: Application
Filed: August 2, 2007
Publication date: July 31, 2008
Inventors: Min-Soo Moon, Chan Ik Park
-
Publication number: 20080133872
Abstract: A storage system implements a storage operating system configured to concurrently perform speculative readahead for a plurality of different read streams. Unlike previous implementations, the operating system manages a separate set of readahead metadata for each of the plurality of read streams. Consequently, the operating system can "match" a received client read request with a corresponding read stream, then perform readahead operations for the request in accordance with the read stream's associated set of metadata. Because received client read requests are matched to their corresponding read streams on a request-by-request basis, the operating system can concurrently perform readahead operations for multiple read streams, regardless of whether the read streams' file read requests are received by the storage system in sequential, nearly-sequential or random orders.
Type: Application
Filed: February 6, 2008
Publication date: June 5, 2008
Applicant: NETWORK APPLIANCE, INC.
Inventor: Robert L. Fair
-
Publication number: 20080059716
Abstract: A moving-picture processing apparatus has a pre-fetch memory pre-fetching a portion of a decoded picture stored in an external memory, and a miss/hit determination unit determining a manner in which a miss occurs in response to a read request to the pre-fetch memory.
Type: Application
Filed: September 4, 2007
Publication date: March 6, 2008
Applicant: Fujitsu Limited
Inventors: Yasuhiro Watanabe, Mitsuharu Wakayoshi, Naoyuki Takeshita
-
Publication number: 20080052470
Abstract: Methods and arrangements for accessing a storage structure. Included are an arrangement for providing a storage access instruction, an arrangement for inputting an address into a storage structure data cache responsive to a storage access instruction, an arrangement for extending a storage access instruction with a predicted register number field, the predicted register number field containing a predicted register number corresponding to a speculative location of a load/store operand associated with a storage access instruction, an arrangement for speculatively accessing a storage structure with a storage access instruction extended by the extending arrangement, and an arrangement for reverting to the arrangement for inputting an address if the load/store operand is not in the speculative location.
Type: Application
Filed: October 30, 2007
Publication date: February 28, 2008
Applicant: International Business Machines Corporation
Inventors: Kartik Agaram, Marc Auslander, Kemal Ebcioglu