Predicting, Look-ahead Patents (Class 711/204)
  • Patent number: 7904660
    Abstract: A computer system and a method for enhancing cache prefetch behavior. The computer system includes a processor, a main memory, a prefetch controller, a cache memory, and a prefetch buffer, wherein each page in the main memory has an associated tag used for controlling the prefetching of a variable subset of lines from this page as well as lines from at least one other page. The prefetch controller, coupled to the processor, responds to the processor detecting a fault (or miss) on a line of data by fetching the corresponding line of data along with the corresponding tag, storing the tag in the prefetch buffer, and sending the line of data to the cache memory.
    Type: Grant
    Filed: August 23, 2007
    Date of Patent: March 8, 2011
    Assignee: International Business Machines Corporation
    Inventor: Peter Franaszek
  • Patent number: 7895399
    Abstract: A processor reads a program, including a prefetch command and a load command, and data from a main memory, and executes the program. The processor includes: a processor core that executes the program; an L2 cache that stores data from the main memory for each predetermined unit of data storage; and a prefetch unit that pre-reads the data into the L2 cache from the main memory on the basis of a request for prefetch from the processor core. The prefetch unit includes: an L2 cache management table including an area in which a storage state is held for each position in the unit of data storage of the L2 cache and an area in which a request for prefetch is reserved; and a prefetch control unit that instructs the L2 cache to perform either the reserved request for prefetch or the request for prefetch from the processor core.
    Type: Grant
    Filed: February 13, 2007
    Date of Patent: February 22, 2011
    Assignee: Hitachi, Ltd.
    Inventors: Aki Tomita, Naonobu Sukegawa
  • Patent number: 7881320
    Abstract: Multiplexing data from bitstreams is described. Data status is determined for data of each of the bitstreams. Stream numbers are assigned respectively to the bitstreams, and the data of each of the bitstreams is controllably stored in respective memory buffers. A memory buffer of the memory buffers is controllably selected. The data obtained from the memory buffer selected is parsed to provide an output. The controllably selecting and the parsing are repeated to obtain and parse the data stored in at least one other memory buffer of the memory buffers to provide the output. The output is multiplexed data from the bitstreams respectively associated with the memory buffer and the at least one other memory buffer.
    Type: Grant
    Filed: December 12, 2005
    Date of Patent: February 1, 2011
    Assignee: Xilinx, Inc.
    Inventors: Paul R. Schumacher, Kornelis Antonius Vissers
  • Patent number: 7873792
    Abstract: A system and method of improved handling of large pages in a virtual memory system. A data memory management unit (DMMU) detects sequential access of a first sub-page and a second sub-page out of a set of sub-pages that comprise a same large page. Then, the DMMU receives a request for the first sub-page and in response to such a request, the DMMU instructs a pre-fetch engine to pre-fetch at least the second sub-page if the number of detected sequential accesses equals or exceeds a predetermined value.
    Type: Grant
    Filed: January 17, 2008
    Date of Patent: January 18, 2011
    Assignee: International Business Machines Corporation
    Inventors: Vaijayanthimala K. Anand, Sandra K. Johnson
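    Sketch: the sequential sub-page detection above reduces to a small counter check. The C below is one plausible reading, not the patented implementation; the sub-page size, large-page size, threshold value, and function names are assumptions for illustration.

      #include <stdio.h>
      #include <stdbool.h>

      #define SUBPAGE_SHIFT    12   /* 4 KiB sub-pages (assumed size)  */
      #define LARGE_PAGE_SHIFT 24   /* 16 MiB large pages (assumed)    */
      #define SEQ_THRESHOLD    2    /* accesses before prefetch fires  */

      static unsigned long last_sub;   /* last sub-page index seen     */
      static int seq_count;            /* detected sequential accesses */

      /* Returns true when DMMU-style logic would ask the pre-fetch
         engine to pull in the next sub-page of the same large page.  */
      static bool on_access(unsigned long vaddr) {
          unsigned long sub = vaddr >> SUBPAGE_SHIFT;
          bool same_large = (sub >> (LARGE_PAGE_SHIFT - SUBPAGE_SHIFT)) ==
                            (last_sub >> (LARGE_PAGE_SHIFT - SUBPAGE_SHIFT));
          if (same_large && sub == last_sub + 1)
              seq_count++;             /* sequential within large page */
          else if (sub != last_sub)
              seq_count = 0;           /* pattern broken: reset        */
          last_sub = sub;
          return seq_count >= SEQ_THRESHOLD;
      }

      int main(void) {
          unsigned long trace[] = { 0x0000, 0x1000, 0x2000, 0x3000 };
          for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
              if (on_access(trace[i]))
                  printf("prefetch sub-page %lu\n",
                         (trace[i] >> SUBPAGE_SHIFT) + 1);
          return 0;
      }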
  • Patent number: 7873779
    Abstract: Methods and apparatuses are presented for memory page size auto detection. A method for automatically determining a page size of a memory device includes receiving page size extents of the memory device, determining a bus width of the memory device, detecting a number of pages having an automatic detection marker, and determining the page size of the memory device based upon the detected number of pages and the received page size extents. An apparatus for automatically determining page size detection includes logic for performing the above presented method.
    Type: Grant
    Filed: May 13, 2008
    Date of Patent: January 18, 2011
    Assignee: QUALCOMM Incorporated
    Inventors: Srini Maddali, Arshad Noormohammed Bebal, Tom T. Kuo, Tun Yong Yang
  • Patent number: 7849228
    Abstract: The present invention provides mechanisms that enable application instances to pass block mode storage requests directly to a physical I/O adapter without run-time involvement from the local operating system or hypervisor. In one aspect of the present invention, a mechanism is provided for handling user space creation and deletion operations for creating and deleting allocations of linear block addresses of a physical storage device to application instances. For creation, it is determined if there are sufficient available resources for creation of the allocation. For deletion, it is determined if there are any I/O transactions active on the allocation before performing the deletion. Allocation may be performed only if there are sufficient available resources and deletion may be performed only if there are no active I/O transactions on the allocation being deleted.
    Type: Grant
    Filed: November 12, 2008
    Date of Patent: December 7, 2010
    Assignee: International Business Machines Corporation
    Inventors: William Todd Boyd, John Lewis Hufferd, Agustin Mena, III, Renato John Recio, Madeline Vega
  • Patent number: 7831799
    Abstract: An improved address translation method and mechanism for memory management in a computer system is disclosed. A segmentation mechanism employing segment registers maps virtual addresses into a linear address space. A paging mechanism optionally maps linear addresses into physical or real addresses. Independent protection of address spaces is provided at each level. Information about the state of real memory pages is kept in segment registers or a segment register cache potentially enabling real memory access to occur simultaneously with address calculation, thereby increasing performance of the computer system.
    Type: Grant
    Filed: November 1, 2004
    Date of Patent: November 9, 2010
    Inventor: Richard Belgard
  • Patent number: 7818514
    Abstract: A low latency memory system access is provided in association with a weakly-ordered multiprocessor system. Each processor in the multiprocessor shares resources, and each shared resource has an associated lock within a locking device that provides support for synchronization between the multiple processors in the multiprocessor and the orderly sharing of the resources. A processor only has permission to access a resource when it owns the lock associated with that resource, and an attempt by a processor to own a lock requires only a single load operation, rather than a traditional atomic load followed by a store, such that the processor only performs a read operation and the hardware locking device, rather than the processor, performs the subsequent write operation. A simple prefetching for non-contiguous data structures is also disclosed.
    Type: Grant
    Filed: August 22, 2008
    Date of Patent: October 19, 2010
    Assignee: International Business Machines Corporation
    Inventors: Matthias A. Blumrich, Dong Chen, Paul W. Coteus, Alan G. Gara, Mark E. Giampapa, Philip Heidelberger, Dirk Hoenicke, Martin Ohmacht, Burkhard D. Steinmacher-Burow, Todd E. Takken, Pavlos M. Vranas
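    Sketch: the single-load lock acquisition above can be emulated in C, with a function standing in for the hardware locking device that performs the write on the processor's behalf. The names and the FREE encoding are assumptions, and single-threaded execution stands in for the device's internal atomicity.

      #include <stdio.h>

      #define FREE (-1)

      /* One lock word per shared resource, owned by the locking device. */
      static int lock_word = FREE;

      /* Models the single load: the returned value is what the load
         observed.  FREE means the caller acquired the lock (the device
         wrote the requester's ID); any other value names the owner.    */
      static int lock_load(int requester_id) {
          int observed = lock_word;
          if (observed == FREE)
              lock_word = requester_id;   /* write done by the device   */
          return observed;
      }

      static void lock_release(void) { lock_word = FREE; }

      int main(void) {
          printf("cpu0 load -> %d (acquired)\n", lock_load(0));
          printf("cpu1 load -> %d (held by cpu0)\n", lock_load(1));
          lock_release();
          printf("cpu1 load -> %d (acquired)\n", lock_load(1));
          return 0;
      }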
  • Patent number: 7818530
    Abstract: Data management systems, articles of manufacture, and data storage methods are described. According to one aspect, a data management system provides a data storage system configured to store data of a plurality of client protected computer systems, wherein the data storage system comprises a plurality of storage devices individually having a respective capacity, and a quantity of the data of the protected computer systems to be stored exceeds the capacities of individual ones of the storage devices; and storage control circuitry coupled with the data storage system and configured to assign individual ones of the storage devices to store data for respective ones of the protected computer systems.
    Type: Grant
    Filed: November 26, 2003
    Date of Patent: October 19, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Stephen Gold, Harald Burose, Sebastien Schikora
  • Patent number: 7793067
    Abstract: In an embodiment, a system memory stores a set of input/output (I/O) translation tables. One or more I/O devices initiate direct memory access (DMA) requests including virtual addresses. An I/O memory management unit (IOMMU) is coupled to the I/O devices and the system memory, wherein the IOMMU is configured to translate the virtual addresses in the DMA requests to physical addresses to access the system memory according to an I/O translation mechanism implemented by the IOMMU. The IOMMU comprises one or more caches, and is configured to read translation data from the I/O translation tables responsive to a prefetch command that specifies a first virtual address. The reads are responsive to the first virtual address and the I/O translation mechanism, and the IOMMU is configured to store data in the caches responsive to the read translation data.
    Type: Grant
    Filed: April 30, 2008
    Date of Patent: September 7, 2010
    Assignee: Globalfoundries Inc.
    Inventors: Andrew G. Kegel, Mark D. Hummel, Erich S. Boleyn
  • Patent number: 7779208
    Abstract: In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache.
    Type: Grant
    Filed: January 7, 2009
    Date of Patent: August 17, 2010
    Assignee: Apple Inc.
    Inventors: Sudarshan Kadambi, Puneet Kumar, Po-Yung Chang
  • Publication number: 20100199063
    Abstract: A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner.
    Type: Application
    Filed: April 13, 2010
    Publication date: August 5, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: Stuart Sechrest, Michael R. Fortin, Mehmet Iyigun, Cenk Ergan
  • Patent number: 7752350
    Abstract: A system and method for an efficient implementation of a software-managed cache is presented. When an application thread executes on a simple processor, the application thread uses a conditional data select instruction for eliminating a conditional branch instruction when accessing a software-managed cache. An application thread issues a conditional data select instruction (DMA transfer) after a cache directory lookup, wherein the size of the requested data is dependent upon the outcome of the cache directory lookup. When the cache directory lookup results in a cache hit, the application thread requests a transfer of zero bits of data, which results in a DMA controller (DMAC) performing a no-op instruction. When the cache directory lookup results in a cache miss, the application thread requests a data block transfer the size of a corresponding cache line.
    Type: Grant
    Filed: February 23, 2007
    Date of Patent: July 6, 2010
    Assignee: International Business Machines Corporation
    Inventors: Daniel Alan Brokenshire, Michael Norman Day, Barry L Minor, Mark Richard Nutter
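    Sketch: the branch-free hit path above hinges on a transfer size chosen by a conditional select and a DMA controller that treats zero bytes as a no-op. This C emulation is illustrative only; the line size, directory layout, and names are assumed.

      #include <stdio.h>
      #include <string.h>

      #define LINE_SIZE 128
      #define NUM_LINES 8

      static unsigned dir_tags[NUM_LINES];          /* cache directory */
      static char cache_data[NUM_LINES][LINE_SIZE]; /* cached lines    */
      static char main_mem[1 << 16];                /* backing store   */

      /* Emulated DMA: the controller treats a zero-byte request as a
         no-op, exactly the behavior the hit path relies on.           */
      static void dma_get(void *dst, unsigned src, unsigned size) {
          if (size) memcpy(dst, &main_mem[src], size);
      }

      static char *sw_cache_read(unsigned addr) {
          unsigned slot = (addr / LINE_SIZE) % NUM_LINES;
          unsigned tag  = addr / LINE_SIZE;
          /* Conditional select rather than a conditional branch: the
             lookup outcome picks the size (0 on hit, a line on miss). */
          unsigned size = (dir_tags[slot] == tag) ? 0 : LINE_SIZE;
          dma_get(cache_data[slot], tag * LINE_SIZE, size);
          dir_tags[slot] = tag;
          return &cache_data[slot][addr % LINE_SIZE];
      }

      int main(void) {
          main_mem[1000] = 42;
          printf("%d\n", *sw_cache_read(1000)); /* miss: full-line DMA  */
          printf("%d\n", *sw_cache_read(1000)); /* hit: zero-byte no-op */
          return 0;
      }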
  • Publication number: 20100169606
    Abstract: A microprocessor includes a cache memory, a prefetch unit, and detection logic. The prefetch unit may be configured to monitor memory accesses that miss in the cache and to determine whether to prefetch one or more blocks of memory from a system memory based upon previous memory accesses. The prefetch unit may be further configured to use addresses of the memory accesses that miss to calculate each next memory block to prefetch. The detection logic may be configured to provide a notification to the prefetch unit in response to detecting a memory access instruction including a particular hint. In response to receiving the notification, the prefetch unit may be configured to inhibit using an address associated with the memory access instruction including the particular hint, when calculating subsequent memory blocks to prefetch.
    Type: Application
    Filed: December 30, 2008
    Publication date: July 1, 2010
    Inventor: Thomas M. Deneau
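    Sketch: one plausible reading of the hint-based inhibit above, applied to a simple next-block prefetcher. The block size, names, and printed behavior are assumptions, not the published design.

      #include <stdio.h>
      #include <stdbool.h>

      #define BLOCK 64

      static unsigned long last_miss;
      static bool have_last;

      /* Called on each cache miss.  'hinted' marks a miss from an
         instruction carrying the special hint (e.g., a non-temporal
         access); such misses are excluded from next-block calculation. */
      static void on_miss(unsigned long addr, bool hinted) {
          if (hinted) {
              printf("miss 0x%lx: hinted, not used for prefetch\n", addr);
              return;
          }
          if (have_last && addr == last_miss + BLOCK)
              printf("miss 0x%lx: prefetch next block 0x%lx\n",
                     addr, addr + BLOCK);
          last_miss = addr;
          have_last = true;
      }

      int main(void) {
          on_miss(0x1000, false);
          on_miss(0x9000, true);   /* hinted: does not disturb stream  */
          on_miss(0x1040, false);  /* still sequential: prefetch 0x1080 */
          return 0;
      }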
  • Patent number: 7739476
    Abstract: In one embodiment, a processor comprises a memory management unit (MMU) and an interface unit coupled to the MMU and to an interface unit of the processor. The MMU comprises a queue configured to store pending hardware-generated page table entry (PTE) updates. The interface unit is configured to receive a synchronization operation on the interface that is defined to cause the pending hardware-generated PTE updates, if any, to be written to memory. The MMU is configured to accept a subsequent hardware-generated PTE update generated subsequent to receiving the synchronization operation even if the synchronization operation has not completed on the interface. In some embodiments, the MMU may accept the subsequent PTE update responsive to transmitting the pending PTE updates from the queue. In other embodiments, the pending PTE updates may be identified in the queue and subsequent updates may be received.
    Type: Grant
    Filed: November 4, 2005
    Date of Patent: June 15, 2010
    Assignee: Apple Inc.
    Inventors: Jesse Pan, Ramesh Gunna
  • Patent number: 7721067
    Abstract: A processor having a multistage pipeline includes a TLB and a TLB controller. In response to a TLB miss signal, the TLB controller initiates a TLB reload, requesting address translation information from either a memory or a higher-level TLB, and placing that information into the TLB. The processor flushes the instruction having the missing virtual address, and refetches the instruction, resulting in re-insertion of the instruction at an initial stage of the pipeline above the TLB access point. The initiation of the TLB reload, and the flush/refetch of the instruction, are performed substantially in parallel, and without immediately stalling the pipeline. The refetched instruction is held at a point in the pipeline above the TLB access point until the TLB reload is complete, so that the refetched instruction generates a “hit” in the TLB upon its next access.
    Type: Grant
    Filed: January 20, 2006
    Date of Patent: May 18, 2010
    Assignee: QUALCOMM Incorporated
    Inventors: Brian Joseph Kopec, Victor Roberts Augsburg, James Norris Dieffenderfer, Jeffrey Todd Bridges, Thomas Andrew Sartorius
  • Publication number: 20100122062
    Abstract: In one embodiment, an input/output (I/O) memory management unit (IOMMU) comprises at least one memory and control logic coupled to the memory. The memory is configured to store translation data corresponding to one or more I/O translation tables stored in a memory system of a computer system that includes the IOMMU. The control logic is configured to translate an I/O device-generated memory request using the translation data. The translation data includes a type field indicating one or more attributes of the translation, and the control logic is configured to control the translation responsive to the type field.
    Type: Application
    Filed: January 11, 2010
    Publication date: May 13, 2010
    Inventors: Mark D. Hummel, Geoffrey S. Strongin, Andrew W. Lueck
  • Patent number: 7716673
    Abstract: A system comprises a first processor, a second processor coupled to the first processor, an operating system that executes exclusively only on the first processor and not on the second processor, and a middle layer software running on the first processor and that distributes tasks to run on either or both processors. A synchronization unit coupled to the first and second processors also may be provided to synchronize the processors. Further still, a translation lookaside buffer may be included that is shared between the processors. Each entry in the translation lookaside buffer (“TLB”) may include a task identifier to permit the operating system or middle layer software to selectively flush only some of the TLB entries (e.g., the entries pertaining to only one of the processors).
    Type: Grant
    Filed: July 31, 2003
    Date of Patent: May 11, 2010
    Assignee: Texas Instruments Incorporated
    Inventors: Gerard Chauvel, Dominique D'Inverno
  • Patent number: 7716441
    Abstract: A tiered storage system according to the present invention provides for the management of migration groups. When a migration group is defined, a reference tier position is determined and the relative tier position of each constituent logical device is determined. Movement of a migration group involves migrating data in its constituent logical devices to target logical devices. The migration group is then defined by the target devices. A virtualization system makes the transition transparent to host devices.
    Type: Grant
    Filed: September 11, 2007
    Date of Patent: May 11, 2010
    Assignee: Hitachi, Ltd.
    Inventor: Yoshiki Kano
  • Patent number: 7716420
    Abstract: A filer converts a traditional volume to a flexible volume by: creating an aggregate on storage devices other than the storage devices of the traditional volume; on the aggregate, creating a flexible volume large enough to store metadata describing files residing on the traditional volume; on the flexible volume, creating metadata structures that describe the files of the traditional volume, except that the metadata indicates that data blocks and indirect blocks are absent and must be fetched from another location. As the filer handles I/O requests directed to the flexible volume, the filer calculates physical volume block number (PVBN) addresses where the requested blocks would be located in the aggregate and replaces the absent pointers with the calculated addresses. After the absent pointers have been replaced, the filer adds the storage devices of the traditional volume.
    Type: Grant
    Filed: April 28, 2006
    Date of Patent: May 11, 2010
    Assignee: Network Appliance, Inc.
    Inventors: Abhijeet Gole, Joydeep Sen Sarma
  • Publication number: 20100106935
    Abstract: Pretranslating input/output buffers in environments with multiple page sizes that include determining a pretranslation page size for an input/output buffer under an operating system that supports more than one memory page size, identifying pretranslation page frame numbers for the buffer in dependence upon the pretranslation page size, pretranslating the pretranslation page frame numbers to physical page numbers, and storing the physical page numbers in association with the pretranslation page size. Typical embodiments also include accessing the buffer, including translating a virtual memory address in the buffer to a physical memory address in dependence upon the physical page numbers and the pretranslation page size and accessing the physical memory of the buffer at the physical memory address.
    Type: Application
    Filed: January 11, 2010
    Publication date: April 29, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: David Alan Hepkin
  • Patent number: 7680984
    Abstract: An object of the present invention is to improve the usage efficiency of a storage extent in a storage system using the Allocation on Use (AOU) technique. A controller in the storage system allocates a storage extent in an actual volume to an extent in a virtual volume accessed by a host computer, detects any decrease in necessity for maintaining that allocation, and cancels the allocation of the storage extent in the actual volume to the extent in the virtual volume based on the detection result.
    Type: Grant
    Filed: September 28, 2006
    Date of Patent: March 16, 2010
    Assignee: Hitachi, Ltd.
    Inventors: Kentaro Kakui, Kyosuke Achiwa
  • Patent number: 7669033
    Abstract: Pretranslating input/output buffers in environments with multiple page sizes that include determining a pretranslation page size for an input/output buffer under an operating system that supports more than one memory page size, identifying pretranslation page frame numbers for the buffer in dependence upon the pretranslation page size, pretranslating the pretranslation page frame numbers to physical page numbers, and storing the physical page numbers in association with the pretranslation page size. Typical embodiments also include accessing the buffer, including translating a virtual memory address in the buffer to a physical memory address in dependence upon the physical page numbers and the pretranslation page size and accessing the physical memory of the buffer at the physical memory address.
    Type: Grant
    Filed: July 9, 2008
    Date of Patent: February 23, 2010
    Assignee: International Business Machines Corporation
    Inventor: David A. Hepkin
  • Patent number: 7640400
    Abstract: A method, computer program product, and system are provided for prefetching data into a cache memory. As a program is executed an object identifier is obtained of a first object of the program. A lookup operation is performed on a data structure to determine if the object identifier is present in the data structure. Responsive to the object identifier being present in the data structure, a referenced object identifier is retrieved that is referenced by the object identifier. Then, the data associated with the referenced object identifier is prefetched from main memory into the cache memory.
    Type: Grant
    Filed: April 10, 2007
    Date of Patent: December 29, 2009
    Assignee: International Business Machines Corporation
    Inventors: William A. Maron, Greg R. Mewhinney, Mysore S. Srinivas, David B. Whitworth
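    Sketch: the object-pair lookup above amounts to a table mapping an object identifier to the identifier it references. The C below is an illustrative reading; the table contents and names are invented for the demo.

      #include <stdio.h>

      #define TABLE_SIZE 3

      /* Pair table: object id -> id of the object it references. */
      static const struct { int obj; int ref; } pairs[TABLE_SIZE] = {
          { 10, 42 }, { 42, 77 }, { 77, 10 }
      };

      static void prefetch(int obj_id) {
          printf("prefetch data of object %d into the cache\n", obj_id);
      }

      /* Called as the program touches an object: if its id is present
         in the table, the object it references is prefetched.         */
      static void on_object_use(int obj_id) {
          for (int i = 0; i < TABLE_SIZE; i++)
              if (pairs[i].obj == obj_id) {
                  prefetch(pairs[i].ref);
                  return;
              }
      }

      int main(void) {
          on_object_use(10);   /* table maps 10 -> 42: prefetch 42 */
          on_object_use(42);   /* 42 -> 77: prefetch 77            */
          on_object_use(5);    /* absent: no prefetch              */
          return 0;
      }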
  • Patent number: 7590830
    Abstract: Concurrently branch predicting for multiple branch-type instructions satisfies the demands of high-performance environments. Concurrent branch prediction for multiple branch-type instructions provides the instruction flow for a high-bandwidth pipeline utilized in advanced performance environments. Branch predictions are concurrently generated for multiple branch-type instructions. The concurrently generated branch predictions are then supplied for further processing of the corresponding branch-type instructions.
    Type: Grant
    Filed: February 28, 2005
    Date of Patent: September 15, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Shailender Chaudhry, Paul Caprioli
  • Patent number: 7584328
    Abstract: A local memory with at least a command-block section and a cache section facilitates efficient interrupt processing. The command-block section is allocated on a per-interrupt basis and contains pointers to cache lines. When an interrupt is recognized, the pointers in the command block are used to prefetch the corresponding cache lines from the cache section of the local memory into a local cache buffer. Thus, when the CPU recognizes an interrupt, the information for the context switch is already available in cache.
    Type: Grant
    Filed: November 14, 2005
    Date of Patent: September 1, 2009
    Assignee: Intel Corporation
    Inventors: Peter C. Brink, Shrikant M. Shah, Peter R. Munguia
  • Publication number: 20090198950
    Abstract: A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address.
    Type: Application
    Filed: February 1, 2008
    Publication date: August 6, 2009
    Inventors: Ravi K. Arimilli, Balaram Sinharoy, William E. Speight, Lixin Zhang
  • Patent number: 7562192
    Abstract: An apparatus in a microprocessor for selectively retiring a prefetched cache line is disclosed. The microprocessor includes a prefetch buffer that stores a cache line prefetched from a system memory coupled to the microprocessor. The microprocessor also includes a cache memory, comprising an array of storage elements for storing cache lines, indexed by an index input. One of the storage elements of the array indexed by an index portion of an address of the prefetched cache line stored in the prefetch buffer is storing a replacement candidate line for the prefetched cache line. The microprocessor also includes control logic that determines whether the replacement candidate line in the cache memory is invalid, and if so, replaces the replacement candidate line in the one of the storage elements with the prefetched cache line from the prefetch buffer.
    Type: Grant
    Filed: November 27, 2006
    Date of Patent: July 14, 2009
    Assignee: Centaur Technologies
    Inventors: G. Glenn Henry, Rodney E. Hooker
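    Sketch: the selective-retire decision above checks only the validity of the replacement candidate. This direct-mapped C emulation is illustrative; the set count, structures, and names are assumptions.

      #include <stdio.h>
      #include <stdbool.h>

      #define SETS 4

      struct line { bool valid; unsigned tag; };
      static struct line cache[SETS];        /* direct-mapped for brevity */

      static struct pbuf { bool full; unsigned addr; } prefetch_buf;

      /* Retire the prefetched line into the cache only if the line it
         would replace is invalid; otherwise it stays in the buffer.    */
      static void try_retire(void) {
          if (!prefetch_buf.full) return;
          unsigned idx = prefetch_buf.addr % SETS;
          if (!cache[idx].valid) {
              cache[idx] = (struct line){ true, prefetch_buf.addr / SETS };
              prefetch_buf.full = false;
              printf("retired line 0x%x into set %u\n",
                     prefetch_buf.addr, idx);
          } else {
              printf("set %u holds a valid line; keeping prefetch buffered\n",
                     idx);
          }
      }

      int main(void) {
          prefetch_buf = (struct pbuf){ true, 0x5 };
          try_retire();                       /* set 1 invalid: retired  */
          prefetch_buf = (struct pbuf){ true, 0x9 };
          try_retire();                       /* set 1 now valid: kept   */
          return 0;
      }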
  • Patent number: 7558922
    Abstract: A storage system includes a client host, a storage device, and a separate data search appliance. The client software executing on the client host composes a query and sends a data search request to the storage device. The storage device passes the received query to the connected data search appliance. The search appliance invokes search process to find search candidates using meta information of the data stored in the storage device. Upon the completion of the search process, the search appliance returns the identified search results to the storage device. Upon receipt of the search results from the search appliance, the storage device passes them to the client. At the same time, the storage device pre-fetches the actual data which corresponds to the search results into its cache memory to ensure fast future retrieval.
    Type: Grant
    Filed: December 28, 2005
    Date of Patent: July 7, 2009
    Assignee: Hitachi, Ltd.
    Inventor: Atsushi Murase
  • Patent number: 7543132
    Abstract: A method and apparatus for improved performance for reloading translation look-aside buffers in multithreading, multi-core processors. TSB prediction is accomplished by hashing a plurality of data parameters and generating an index that is provided as an input to a predictor array to predict the TSB page size. In one embodiment of the invention, the predictor array comprises two-bit saturating up-down counters that are used to enhance the accuracy of the TSB prediction. The saturating up-down counters are configured to avoid making rapid changes in the TSB prediction upon detection of an error. Multiple misses occur before the prediction output is changed. The page size specified by the predictor index is searched first. Using the technique described herein, errors are minimized because the counter leads to the correct result at least half the time.
    Type: Grant
    Filed: June 30, 2004
    Date of Patent: June 2, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Greg F. Grohoski, Ashley Saulsbury, Paul J. Jordan, Manish Shah, Rabin A. Sugumar, Mark Debbage, Venkatesh Iyengar
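    Sketch: the two-bit saturating up/down counter is a standard structure, shown here sized and named by assumption; the hash and indexing are stand-ins for whatever the patent's predictor array actually uses. Note how two consecutive updates are needed before the predicted page size flips.

      #include <stdio.h>

      #define ENTRIES 256

      /* Two-bit saturating counters; the high bit selects which TSB
         page size (0 = small, 1 = large) to search first.  Saturation
         keeps one mispredict from flipping a strongly biased entry.   */
      static unsigned char ctr[ENTRIES];     /* values 0..3, start at 0 */

      static unsigned hash(unsigned long va, unsigned ctx) {
          return (unsigned)((va >> 22) ^ ctx) % ENTRIES;  /* toy hash */
      }

      static int predict(unsigned long va, unsigned ctx) {
          return ctr[hash(va, ctx)] >> 1;                 /* 0 or 1   */
      }

      static void update(unsigned long va, unsigned ctx, int actual) {
          unsigned char *c = &ctr[hash(va, ctx)];
          if (actual && *c < 3) (*c)++;                   /* saturate up   */
          if (!actual && *c > 0) (*c)--;                  /* saturate down */
      }

      int main(void) {
          unsigned long va = 0x12345000; unsigned ctx = 7;
          printf("predict %d\n", predict(va, ctx)); /* 0: small first     */
          update(va, ctx, 1);                       /* page was large     */
          printf("predict %d\n", predict(va, ctx)); /* still 0 (weak)     */
          update(va, ctx, 1);
          printf("predict %d\n", predict(va, ctx)); /* now 1: large first */
          return 0;
      }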
  • Patent number: 7536530
    Abstract: A system and method for a processor to determine a memory page management implementation used by a memory controller without necessarily having direct access to the circuits or registers of the memory controller is disclosed. In one embodiment, a matrix of counters corresponds to potential page management implementations and numbers of pages per block. The counters may be incremented or decremented depending upon whether the corresponding page management implementations and numbers of pages predict a page boundary whenever a long access latency is observed. The counter with the largest value after a period of time may correspond to the actual page management implementation and number of pages per block.
    Type: Grant
    Filed: December 30, 2005
    Date of Patent: May 19, 2009
    Assignee: Intel Corporation
    Inventors: Eric A. Sprangle, Anwar Q. Rohillah
  • Patent number: 7533220
    Abstract: A microprocessor coupled to a system memory has a memory subsystem with a translation look-aside buffer (TLB) for storing TLB information. The microprocessor also includes an instruction decode unit that decodes an instruction that specifies a data stream in the system memory and an abnormal TLB access policy. The microprocessor also includes a stream prefetch unit that generates a prefetch request to the memory subsystem to prefetch a cache line of the data stream from the system memory into the memory subsystem. If a virtual page address of the prefetch request causes an abnormal TLB access, the memory subsystem selectively aborts the prefetch request based on the abnormal TLB access policy specified in the instruction.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: May 12, 2009
    Assignee: MIPS Technologies, Inc.
    Inventor: Keith E. Diefendorff
  • Publication number: 20090119476
    Abstract: Data is extracted from at least one data source. The data is translated according to a metadata model and is stored in a staging data store. A migration management user interface is provided that includes a mechanism for indicating at least some of the data to be included in a migration event. The migration event is initiated based at least in part on the input received via the user interface. The at least some of the data is migrated from the staging data store to a target data store according to a hierarchy of controls.
    Type: Application
    Filed: November 1, 2007
    Publication date: May 7, 2009
    Applicant: Verizon Business Network Services Inc.
    Inventors: Stephen Jernigan, Ronald Boals
  • Patent number: 7529891
    Abstract: Balanced prefetching automatically balances the benefits of prefetching data that has not been accessed recently against the benefits of caching recently accessed data, and can be applied to most types of structured data without needing application-specific details or hints. Balanced prefetching is performed in applications in a computer system, such as storage-centric applications, including file systems and databases. Balanced prefetching exploits the structure of the data being prefetched, providing superior application throughput. For a fixed amount of memory, it is automatically and dynamically determined how much memory should be devoted to prefetching.
    Type: Grant
    Filed: September 19, 2005
    Date of Patent: May 5, 2009
    Assignee: Microsoft Corporation
    Inventors: Chandramohan A. Thekkath, John P. MacCormick, Lidong Zhou, Nicholas Charles Murphy
  • Patent number: 7519777
    Abstract: Methods, systems and computer program products for concomitant pair prefetching. Exemplary embodiments include a method for concomitant pair prefetching, the method including detecting a stride pattern, detecting an indirect access pattern to define an access window, prefetching candidates within the defined access window, wherein the prefetching comprises obtaining prefetch addresses from a history table, updating a miss stream window, selecting a candidate of a concomitant pair from the miss stream window, producing an index from the candidate pair, accessing an aging filter, updating the history table and selecting another concomitant pair candidate from the miss stream window.
    Type: Grant
    Filed: June 11, 2008
    Date of Patent: April 14, 2009
    Assignee: International Business Machines Corporation
    Inventors: Kattamuri Ekanadham, Il Park, Pratap C. Pattnaik
  • Patent number: 7516278
    Abstract: A system controller, which executes a speculative fetch from a memory before determining whether data requested for a memory fetch request is in a cache by searching tag information of the cache, includes a consumption determining unit that monitors a consumption status of a hardware resource used in the speculative fetch, and determines whether a consumption of the hardware resource exceeds a predetermined value; and a speculative-fetch issuing unit that stops issuing the speculative fetch when the consumption determining unit determines that the consumption of the hardware resource exceeds the predetermined value.
    Type: Grant
    Filed: December 1, 2004
    Date of Patent: April 7, 2009
    Assignee: Fujitsu Limited
    Inventors: Akira Watanabe, Go Sugizaki, Shigekatsu Sagi, Masahiro Mishima
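    Sketch: the throttle above is a threshold test on resource consumption. The percentage metric and limit below are assumed; the patent leaves the monitored hardware resource open.

      #include <stdio.h>
      #include <stdbool.h>

      #define RESOURCE_LIMIT 75   /* percent; assumed threshold */

      static int resource_in_use;  /* e.g., outstanding-fetch slots, % */

      /* Issue a speculative memory fetch only while the hardware
         resource it consumes stays below the predetermined value.    */
      static bool maybe_issue_speculative_fetch(unsigned long addr) {
          if (resource_in_use > RESOURCE_LIMIT) {
              printf("0x%lx: resource at %d%%, speculative fetch stopped\n",
                     addr, resource_in_use);
              return false;
          }
          printf("0x%lx: speculative fetch issued (resource %d%%)\n",
                 addr, resource_in_use);
          return true;
      }

      int main(void) {
          resource_in_use = 40; maybe_issue_speculative_fetch(0x1000);
          resource_in_use = 90; maybe_issue_speculative_fetch(0x2000);
          return 0;
      }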
  • Patent number: 7512699
    Abstract: A method for managing position independent code using a software framework is presented. A software framework provides the ability to cache multiple plug-in's which are loaded in a processor's local storage. A processor receives a command or data stream from another processor, which includes information corresponding to a particular plug-in. The processor uses the plug-in identifier to load the plug-in from shared memory into local memory before it is required in order to minimize latency. When the data stream requests the processor to use the plug-in, the processor retrieves a location offset corresponding to the plug-in and applies the plug-in to the data stream. A plug-in manager manages an entry point table that identifies memory locations corresponding to each plug-in and, therefore, plug-ins may be placed anywhere in a processor's local memory.
    Type: Grant
    Filed: November 12, 2004
    Date of Patent: March 31, 2009
    Assignee: International Business Machines Corporation
    Inventors: Michael Stan Gowen, Barry L Minor, Mark Richard Nutter, John Kevin Patrick O'Brien
  • Patent number: 7512740
    Abstract: A microprocessor coupled to a system memory by a bus includes an instruction decode unit that decodes an instruction that specifies a data stream in the system memory and a stream prefetch priority. The microprocessor also includes a load/store unit that generates load/store requests to transfer data between the system memory and the microprocessor. The microprocessor also includes a stream prefetch unit that generates a plurality of prefetch requests to prefetch the data stream from the system memory into the microprocessor. The prefetch requests specify the stream prefetch priority. The microprocessor also includes a bus interface unit (BIU) that generates transaction requests on the bus to transfer data between the system memory and the microprocessor in response to the load/store requests and the prefetch requests. The BIU prioritizes the bus transaction requests for the prefetch requests relative to the bus transaction requests for the load/store requests based on the stream prefetch priority.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: March 31, 2009
    Assignee: MIPS Technologies, Inc.
    Inventor: Keith E. Diefendorff
  • Patent number: 7509459
    Abstract: A microprocessor has a plurality of stream prefetch engines for prefetching a respective data stream from the system memory into the microprocessor cache memory and an instruction decoder that decodes instructions of the microprocessor instruction set. The instruction set includes a stream prefetch instruction that returns an identifier uniquely associating a data stream specified by the instruction with one of the engines. The instruction set also includes an explicit prefetch-triggering load instruction that specifies a stream identifier returned by a previously executed stream prefetch instruction. When the decoder decodes a conventional load instruction it does not prefetch; however, when it decodes an explicit prefetch-triggering load instruction it commences prefetching the specified data stream. In one embodiment, an indicator of the load instruction may explicitly specify non-prefetch-triggering.
    Type: Grant
    Filed: October 13, 2006
    Date of Patent: March 24, 2009
    Assignee: MIPS Technologies, Inc.
    Inventor: Keith E. Diefendorff
  • Patent number: 7509472
    Abstract: Address translation for instruction fetching can be obviated for sequences of instruction instances that reside on a same page. Obviating address translation reduces power consumption and increases pipeline efficiency since accessing of an address translation buffer can be avoided. Certain events, such as branch mis-predictions and exceptions, can be designated as page boundary crossing events. In addition, carry over at a particular bit position when computing a branch target or a next instruction instance fetch target can also be designated as a page boundary crossing event. An address translation buffer is accessed to translate an address representation of a first instruction instance. However, until a page boundary crossing event occurs, the address representations of subsequent instruction instances are not translated. Instead, the translated portion of the address representation for the first instruction instance is recycled for the subsequent instruction instances.
    Type: Grant
    Filed: February 1, 2006
    Date of Patent: March 24, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Paul Caprioli, Shailender Chaudhry
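    Sketch: recycling the translated portion of the address until a page-boundary crossing event can be modeled with a cached physical page number. The page size, stand-in translation, and names are assumptions.

      #include <stdio.h>
      #include <stdbool.h>

      #define PAGE_SHIFT 13   /* 8 KiB pages (assumed) */

      static unsigned long cached_ppage;  /* recycled translated bits */
      static bool ppage_valid;

      static unsigned long tlb_translate(unsigned long vpage) {
          printf("  TLB accessed for vpage 0x%lx\n", vpage);
          return vpage ^ 0xABC;            /* stand-in translation */
      }

      /* Fetch-address translation: consult the TLB only on a page-
         boundary crossing event (branch mispredict, exception, or a
         carry out of the offset); otherwise reuse the previous bits. */
      static unsigned long fetch_paddr(unsigned long va, bool boundary_event) {
          if (!ppage_valid || boundary_event) {
              cached_ppage = tlb_translate(va >> PAGE_SHIFT);
              ppage_valid = true;
          }
          return (cached_ppage << PAGE_SHIFT) |
                 (va & ((1UL << PAGE_SHIFT) - 1));
      }

      int main(void) {
          printf("paddr 0x%lx\n", fetch_paddr(0x4000, false)); /* translate */
          printf("paddr 0x%lx\n", fetch_paddr(0x4004, false)); /* recycled  */
          printf("paddr 0x%lx\n", fetch_paddr(0x6000, true));  /* boundary  */
          return 0;
      }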
  • Publication number: 20090077304
    Abstract: Disclosed is a method for reading data in a memory system including a buffer memory and a nonvolatile memory, the method being comprised of: determining whether an input address in a read request is allocated to the buffer memory; determining whether a size of requested data is larger than a reference unless the input address is allocated to the buffer memory; and conducting a prefetch reading operation from the nonvolatile memory if the requested data size is larger than the reference.
    Type: Application
    Filed: July 7, 2008
    Publication date: March 19, 2009
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sung-Pack Hong, Hye-Jeong Nam, Se-Wook Na, Shea-Yun Lee, Tae-Beom Kim
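    Sketch: the read path above is a pair of tests: buffer allocation first, then requested size against a reference. The threshold, toy buffer map, and names below are assumed.

      #include <stdio.h>
      #include <stdbool.h>

      #define PREFETCH_THRESHOLD 4096   /* the 'reference' size; assumed */

      static bool buffer_has(unsigned addr) { return addr < 0x1000; }

      static void read_request(unsigned addr, unsigned size) {
          if (buffer_has(addr)) {
              printf("0x%x: served from buffer memory\n", addr);
          } else if (size > PREFETCH_THRESHOLD) {
              printf("0x%x: large read (%u B), prefetch-read from NVM\n",
                     addr, size);
          } else {
              printf("0x%x: small read (%u B), plain NVM read\n", addr, size);
          }
      }

      int main(void) {
          read_request(0x0800, 512);    /* buffered                 */
          read_request(0x8000, 256);    /* small: no prefetch       */
          read_request(0x9000, 16384);  /* large: prefetch reading  */
          return 0;
      }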
  • Patent number: 7506106
    Abstract: A microprocessor has a data stream prefetch unit for processing a data stream prefetch instruction. The instruction specifies a data stream and a speculative stream hit policy indicator. If a load instruction hits in the data stream, then if the load is non-speculative the stream prefetch unit prefetches a portion of the data stream from system memory into cache memory; however, if the load is speculative the stream prefetch unit selectively prefetches a portion of the data stream from the system memory into the cache memory based on the value of the policy indicator. The load instruction is speculative if it is not guaranteed to complete execution, such as if it follows a predicted branch instruction whose outcome has not yet been finally determined to be correct. In one embodiment, the stream prefetch unit performs a similar function for store instructions that hit in the data stream.
    Type: Grant
    Filed: October 13, 2006
    Date of Patent: March 17, 2009
    Assignee: MIPS Technologies, Inc.
    Inventor: Keith E. Diefendorff
  • Patent number: 7500061
    Abstract: A preload controller for controlling a bus access device that reads out data from a main memory via a bus and transfers the readout data to a temporary memory, including a first acquiring device to acquire access hint information which represents a data access interval to the main memory, a second acquiring device to acquire system information which represents a transfer delay time in transfer of data via the bus by the bus access device, a determining device to determine a preload unit count based on the data access interval represented by the access hint information and the transfer delay time represented by the system information, and a management device to instruct the bus access device to read out data for the preload unit count from the main memory and to transfer the readout data to the temporary memory ahead of a data access of the data.
    Type: Grant
    Filed: June 14, 2005
    Date of Patent: March 3, 2009
    Assignee: Kabushiki Kaisha Toshiba
    Inventors: Seiji Maeda, Yusuke Shirota
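    Sketch: the abstract does not give the formula, but a natural reading is that enough preload units must be in flight to cover the bus transfer delay, roughly ceil(delay / interval). The C below works one example under that assumption.

      #include <stdio.h>

      /* Preload unit count: with one access every 'interval' cycles
         (the access hint) and a bus transfer delay of 'delay' cycles
         (the system information), about ceil(delay / interval) units
         must be in flight for data to arrive before it is consumed.
         This is one plausible reading, not the patent's exact rule.  */
      static unsigned preload_unit_count(unsigned interval, unsigned delay) {
          return (delay + interval - 1) / interval;  /* ceiling division */
      }

      int main(void) {
          /* hint: an access per 100 cycles; bus delay: 350 cycles */
          printf("units = %u\n", preload_unit_count(100, 350));  /* 4 */
          return 0;
      }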
  • Patent number: 7493451
    Abstract: In one embodiment, a processor comprises a prefetch unit coupled to a data cache. The prefetch unit is configured to concurrently maintain a plurality of separate, active prefetch streams. Each prefetch stream is either software initiated via execution by the processor of a dedicated prefetch instruction or hardware initiated via detection of a data cache miss by one or more load/store memory operations. The prefetch unit is further configured to generate prefetch requests responsive to the plurality of prefetch streams to prefetch data into the data cache.
    Type: Grant
    Filed: June 15, 2006
    Date of Patent: February 17, 2009
    Assignee: P.A. Semi, Inc.
    Inventors: Sudarshan Kadambi, Puneet Kumar, Po-Yung Chang
  • Patent number: 7493621
    Abstract: An apparatus, program product and method initiate, in connection with a context switch operation, a prefetch of data likely to be used by a thread prior to resuming execution of that thread. As a result, once it is known that a context switch will be performed to a particular thread, data may be prefetched on behalf of that thread so that when execution of the thread is resumed, more of the working state for the thread is likely to be cached, or at least in the process of being retrieved into cache memory, thus reducing cache-related performance penalties associated with context switching.
    Type: Grant
    Filed: December 18, 2003
    Date of Patent: February 17, 2009
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey Powers Bradford, Harold F. Kossman, Timothy John Mullins
  • Patent number: 7493450
    Abstract: Exemplary systems and methods include pre-fetching data in response to a read cache hit. Various exemplary methods include priming a read cache with initial data, and triggering a read pre-fetch operation in response to a read cache hit upon the initial data in the read cache. Another exemplary implementation includes a storage device having a read cache and a trigger module that causes a pre-fetch of data from a mass storage medium in response to a read cache hit upon data in the read cache.
    Type: Grant
    Filed: April 14, 2003
    Date of Patent: February 17, 2009
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventor: Brian S. Bearden
  • Patent number: 7480769
    Abstract: A microprocessor coupled to a system memory includes a load request signal that requests data be loaded from the system memory into the microprocessor in response to a load instruction. The load request signal includes a load virtual page address. The microprocessor also includes a prefetch request signal that requests a cache line be prefetched from the system memory into the microprocessor in response to a prefetch instruction. The prefetch request signal includes a prefetch virtual page address.
    Type: Grant
    Filed: August 11, 2006
    Date of Patent: January 20, 2009
    Assignee: MIPS Technologies, Inc.
    Inventors: Keith E. Diefendorff, Thomas A. Petersen
  • Patent number: 7478197
    Abstract: In a computer system with a memory hierarchy, when a high-level cache supplies a data copy to a low-level cache, the shared copy can be either volatile or non-volatile. When the data copy is later replaced from the low-level cache, if the data copy is non-volatile, it needs to be written back to the high-level cache; otherwise it can be simply flushed from the low-level cache. The high-level cache can employ a volatile-prediction mechanism that adaptively determines whether a volatile copy or a non-volatile copy should be supplied when the high-level cache needs to send data to the low-level cache. An exemplary volatile-prediction mechanism suggests use of a non-volatile copy if the cache line has been accessed consecutively by the low-level cache. Further, the low-level cache can employ a volatile-promotion mechanism that adaptively changes a data copy from volatile to non-volatile according to some promotion policy, or changes a data copy from non-volatile to volatile according to some demotion policy.
    Type: Grant
    Filed: July 18, 2006
    Date of Patent: January 13, 2009
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Man Cheuk Ng, Aaron Christoph Sawdey
  • Publication number: 20080294867
    Abstract: In an information processing apparatus of this invention having a cache memory, a TLB and a TSB, a second retrieval unit retrieves a second physical address from an address translation buffer by using a second virtual address corresponding one-to-one to a first virtual address, and a prefetch controller enters a first address translation pair for the first virtual address from an address translation table into the cache memory by using the second physical address that results from the retrieval, thereby greatly shortening the processing time of a memory access when a TLB miss occurs.
    Type: Application
    Filed: April 3, 2008
    Publication date: November 27, 2008
    Applicant: Fujitsu Limited
    Inventor: Hiroaki KIMURA
  • Publication number: 20080288715
    Abstract: Methods and apparatuses are presented for memory page size auto detection. A method for automatically determining a page size of a memory device includes receiving page size extents of the memory device, determining a bus width of the memory device, detecting a number of pages having an automatic detection marker, and determining the page size of the memory device based upon the detected number of pages and the received page size extents. An apparatus for automatically determining page size detection includes logic for performing the above presented method.
    Type: Application
    Filed: May 13, 2008
    Publication date: November 20, 2008
    Applicant: QUALCOMM INCORPORATED
    Inventors: Srini Maddali, Arshad Noormohammed Bebal, Tom T. Kuo, Tun Yong Yang