Partitioned Cache Patents (Class 711/129)
-
Patent number: 7231497
Abstract: In one embodiment, the present invention includes a method for writing data to a disk if inserting the data into a cache, such as a disk cache associated with the disk, would cause a threshold of dirty data in the cache to be met or exceeded. Further, in certain embodiments, the cache may store data according to a first cache policy and a second cache policy. A determination of whether to store data according to the first or second policies may be dependent upon an amount of dirty data in the cache, in certain embodiments. In certain embodiments, the cache may include at least one portion reserved for clean data.
Type: Grant
Filed: June 15, 2004
Date of Patent: June 12, 2007
Assignee: Intel Corporation
Inventors: Sanjeev N. Trika, John I. Garney, Michael K. Eschmann
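The dirty-data threshold check in this abstract can be illustrated with a short write policy sketch. This is not the patented implementation; the `WriteCache` class, its fields, and the write-through-on-threshold rule are assumptions made for illustration only.

```python
class WriteCache:
    """Sketch: divert a write to disk when caching it would meet or
    exceed the dirty-data threshold, as the abstract describes."""

    def __init__(self, capacity, dirty_threshold):
        self.capacity = capacity                # total cache lines
        self.dirty_threshold = dirty_threshold  # max fraction of dirty lines
        self.lines = {}                         # key -> (data, is_dirty)
        self.disk_writes = []                   # writes that bypassed the cache

    def dirty_count(self):
        return sum(1 for _, dirty in self.lines.values() if dirty)

    def write(self, key, data):
        # Would inserting this write meet or exceed the dirty threshold?
        already_dirty = self.lines.get(key, (None, False))[1]
        projected = self.dirty_count() + (0 if already_dirty else 1)
        if projected >= self.dirty_threshold * self.capacity:
            self.disk_writes.append((key, data))  # write through to disk
            return "disk"
        self.lines[key] = (data, True)            # cache the line as dirty
        return "cached"
```

With a 4-line cache and a 50% threshold, the second dirty write already meets the threshold and is sent to disk instead of the cache.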
-
Patent number: 7213107
Abstract: A method and apparatus for a dedicated cache memory are described. Under an embodiment of the invention, a cache memory includes a general-purpose sector and a dedicated sector. The general-purpose sector is to be used for general computer operations. The dedicated sector is to be dedicated to use for a first computer process.
Type: Grant
Filed: December 31, 2003
Date of Patent: May 1, 2007
Assignee: Intel Corporation
Inventor: Blaise B. Fanning
-
Patent number: 7213108
Abstract: An instruction virtual address space includes only virtual addresses corresponding to physical addresses of address areas of a physical address space storing pages of only instructions. A data virtual address space includes only virtual addresses corresponding to physical addresses of address areas of the physical address space storing pages of only data. The instruction and data virtual address spaces use duplicated virtual addresses. Instruction and data address translation units translate virtual addresses of the instruction and data virtual address spaces into physical addresses of the single physical address space. Data access efficiency and instruction execution speed can thereby be improved.
Type: Grant
Filed: April 2, 2004
Date of Patent: May 1, 2007
Assignee: Sony Corporation
Inventor: Koji Ozaki
-
Patent number: 7203797
Abstract: A processor preferably comprises a processing core that generates memory addresses to access a main memory and on which a plurality of methods operate. Each method uses its own set of local variables. The processor also includes a cache subsystem comprising a multi-way set associative cache and a data memory that holds a contiguous block of memory defined by an address stored in a register, wherein local variables are stored in said data memory.
Type: Grant
Filed: July 31, 2003
Date of Patent: April 10, 2007
Assignee: Texas Instruments Incorporated
Inventors: Gerard Chauvel, Maija Kuusela, Dominique D'Inverno
-
Patent number: 7200721
Abstract: A method and apparatus for testing cache coherency in a multiprocessor data processing arrangement. Selected values are written to memory by a plurality of threads, and consistency of the values in the memory with the values written by the plurality of threads is verified. Performance characteristics of the data processing system are measured while writing the values, and in response to the performance characteristics relative to target performance characteristics, parameters that control writing by the plurality of threads are selectively adjusted.
Type: Grant
Filed: October 9, 2002
Date of Patent: April 3, 2007
Assignee: Unisys Corporation
Inventors: Michelle J. Lang, William Judge Yohn
-
Patent number: 7191441
Abstract: A computer system includes a software virtual machine (such as Java) for running one or more applications. An object is provided that is responsive to a call from an application for placing the virtual machine and application into a state of suspension. This involves interrupting all current threads, and recording the state of the components of the virtual machine, including heap, threads, and stack, into a serialization data structure. Subsequently the serialization data structure can be invoked to resume the virtual machine and application from the state of suspension. Note that many virtual machines can be cloned from the single stored data structure. One benefit of this approach is that a new virtual machine can effectively be created in an already initialized state.
Type: Grant
Filed: August 6, 2002
Date of Patent: March 13, 2007
Assignee: International Business Machines Corporation
Inventors: Paul Harry Abbott, Matthew Paul Chapman
-
Patent number: 7185126
Abstract: Various embodiments of a method and apparatus for implementing multiple transaction translators that share a single memory in a serial hub are disclosed. For example, in one embodiment, a USB (Universal Serial Bus) hub may include a shared memory device, at least one faster data handler coupled to transfer data between the shared memory device and a faster port, and several slower handlers each coupled to transfer data between the shared memory device and a respective one of several slower ports.
Type: Grant
Filed: February 24, 2003
Date of Patent: February 27, 2007
Assignee: Standard Microsystems Corporation
Inventor: Piotr Szabelski
-
Patent number: 7181577
Abstract: A storage includes: host interface units; file control processors which receive a file input/output request and translate it into a data input/output request; file control memories which store translation control data; groups of disk drives; disk control processors; disk interface units which connect the groups of disk drives and the disk control processors; cache memories; and inter-processor communication units. The storage logically partitions these devices to cause the partitioned devices to operate as two or more virtual NASs.
Type: Grant
Filed: February 19, 2004
Date of Patent: February 20, 2007
Assignee: Hitachi, Ltd.
Inventors: Kentaro Shimada, Akiyoshi Hashimoto
-
Patent number: 7181573
Abstract: In response to receiving a request to perform an enqueue or dequeue operation, a corresponding queue descriptor specifying the structure of the queue is referenced to execute the operation. The queue descriptor is stored in a processor's memory controller logic.
Type: Grant
Filed: January 7, 2002
Date of Patent: February 20, 2007
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein
-
Patent number: 7177981
Abstract: A method and system are disclosed for minimizing data array accesses during a read operation in a cache memory. The cache memory has one or more tag arrays and one or more data arrays. After accessing each tag array, a selected data array is identified, and subsequently activated. At least one predetermined data entry from the activated data array is accessed, while all other data arrays are deactivated during the read operation. In another example, the cache memory is divided into multiple sub-groups so that only a particular sub-group is involved in a memory read operation. By deactivating as many circuits as possible throughout the read operation, the power consumption of the cache memory is greatly reduced.
Type: Grant
Filed: May 9, 2003
Date of Patent: February 13, 2007
Assignee: VIA-Cyrix, Inc.
Inventor: Timothy D. Davis
-
Patent number: 7174426
Abstract: The invention relates to a method and respective system for accessing a cache memory in a computer system, wherein the cache memory is split up into at least two segments and is accessed by a plurality of competing cache memory requests via a number of commonly used input registers. A cache segment model is utilized for reflecting the cache use by said competing requests. Cache memory requests are processed by a processing pipe, and each cache request, before entering the processing pipe, is checked as to whether the segments of the cache memory are available at the cycle it needs. The method comprises the steps of: a) marking a segment model cell as busy with storing, if a store-request targeting a cache segment corresponding to said model cell has received pipe access; b) blocking off from pipe access a fetch-request targeting a segment model cell which is marked busy with a store operation; and c) blocking off any store-request from pipe access, if at …
Type: Grant
Filed: July 22, 2004
Date of Patent: February 6, 2007
Assignee: International Business Machines Corporation
Inventor: Hanno Ulrich
-
Patent number: 7167953
Abstract: An adaptive replacement cache policy dynamically maintains two lists of pages, a recency list and a frequency list, in addition to a cache directory. The policy keeps these two lists to roughly the same size, the cache size c. Together, the two lists remember twice the number of pages that would fit in the cache. At any time, the policy selects a variable number of the most recent pages to exclude from the two lists. The policy adaptively decides in response to an evolving workload how many top pages from each list to maintain in the cache at any given time. It achieves such online, on-the-fly adaptation by using a learning rule that allows the policy to track a workload quickly and effectively.
Type: Grant
Filed: June 13, 2005
Date of Patent: January 23, 2007
Assignee: International Business Machines Corporation
Inventors: Nimrod Megiddo, Dharmendra Shantilal Modha
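The two-list structure this abstract describes (the basis of the ARC policy) can be sketched in simplified form: first-time pages enter a recency list, a second hit promotes to a frequency list, and ghost lists of recently evicted keys drive a learning rule that shifts the target split. The class name, the unit-step adaptation, and the eviction details are simplifying assumptions; the full patented policy differs in detail.

```python
from collections import OrderedDict

class TwoListCache:
    """Loose sketch of a recency/frequency split cache with an
    adaptive target, modeled on the abstract above."""

    def __init__(self, size):
        self.size = size
        self.t1 = OrderedDict()  # pages seen once recently
        self.t2 = OrderedDict()  # pages seen at least twice
        self.b1 = OrderedDict()  # ghosts of pages evicted from t1
        self.b2 = OrderedDict()  # ghosts of pages evicted from t2
        self.p = 0               # adaptive target size of t1

    def _evict(self):
        # Evict from t2 only when t1 is within its target; else from t1.
        if self.t2 and len(self.t1) <= max(1, self.p):
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None
        else:
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None

    def access(self, key):
        if key in self.t1:            # second hit: promote to frequency list
            self.t1.pop(key)
            self.t2[key] = None
            return True
        if key in self.t2:            # frequency hit: refresh LRU position
            self.t2.move_to_end(key)
            return True
        # Miss: the learning rule nudges p toward whichever ghost list hit.
        if key in self.b1:
            self.p = min(self.size, self.p + 1)
            del self.b1[key]
        elif key in self.b2:
            self.p = max(0, self.p - 1)
            del self.b2[key]
        if len(self.t1) + len(self.t2) >= self.size:
            self._evict()
        self.t1[key] = None
        return False
```

A ghost hit in `b1` means the recency list was evicted too eagerly, so `p` grows; a ghost hit in `b2` shrinks it — the online adaptation the abstract refers to.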
-
Patent number: 7142541
Abstract: According to some embodiments, routing information for an information packet is determined in accordance with a destination address and a device address.
Type: Grant
Filed: August 9, 2002
Date of Patent: November 28, 2006
Assignee: Intel Corporation
Inventors: Alok Kumar, Raj Yavatkar
-
Patent number: 7133969
Abstract: A system may include an instruction cache, a trace cache including a plurality of trace cache entries, and a trace generator coupled to the instruction cache and the trace cache. The trace generator may be configured to receive a group of instructions output by the instruction cache for storage in one of the plurality of trace cache entries. The trace generator may be configured to detect an exceptional instruction within the group of instructions and to prevent the exceptional instruction from being stored in a same one of the plurality of trace cache entries as any non-exceptional instruction.
Type: Grant
Filed: October 1, 2003
Date of Patent: November 7, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Mitchell Alsup, Gregory William Smaus, James K. Pickett, Brian D. McMinn, Michael A. Filippo, Benjamin T. Sander
-
Patent number: 7130961
Abstract: Cache data identity control is executed between plural disk controllers, each provided with its own cache, in such a way that a fault occurring in one disk controller is not propagated to another. The identity control of data is executed via a communication means between the disk controllers. When update access from a host is received, the data in the cache memory of the disk controller that controls the data storage drive is updated. Desirably, the cache area is divided into an area for drives controlled by the disk controller itself and an area for drives controlled by other disk controllers.
Type: Grant
Filed: February 20, 2001
Date of Patent: October 31, 2006
Assignee: Hitachi, Ltd.
Inventors: Hiroki Kanai, Kazuhisa Fujimoto, Akira Fujibayashi
-
Patent number: 7127560
Abstract: A power saving cache and a method of operating a power saving cache. The power saving cache includes circuitry to dynamically reduce the logical size of the cache in order to save power. Preferably, a method is used to determine optimal cache size for balancing power and performance, using a variety of combinable hardware and software techniques. Also, in a preferred embodiment, steps are used for maintaining coherency during cache resizing, including the handling of modified (“dirty”) data in the cache, and steps are provided for partitioning a cache in one of several ways to provide an appropriate configuration and granularity when resizing.
Type: Grant
Filed: October 14, 2003
Date of Patent: October 24, 2006
Assignee: International Business Machines Corporation
Inventors: Erwin B. Cohen, Thomas E. Cook, Ian R. Govett, Paul D. Kartschoke, Stephen V. Kosonocky, Peter A. Sandon, Keith R. Williams
-
Patent number: 7127585
Abstract: A storage includes: host interface units; file control processors which receive a file input/output request and translate it into a data input/output request; file control memories which store translation control data; groups of disk drives; disk control processors; disk interface units which connect the groups of disk drives and the disk control processors; cache memories; and inter-processor communication units. The storage logically partitions these devices to cause the partitioned devices to operate as two or more virtual NASs.
Type: Grant
Filed: June 23, 2004
Date of Patent: October 24, 2006
Assignee: Hitachi, Ltd.
Inventors: Kentaro Shimada, Akiyoshi Hashimoto
-
Patent number: 7120651
Abstract: Various techniques are described for improving the performance of a multiple node system by allocating, in two or more nodes of the system, partitions of a shared cache. A mapping is established between the data items managed by the system, and the various partitions of the shared cache. When a node requires a data item, the node first determines which partition of the shared cache corresponds to the required data item. If the data item does not currently reside in the corresponding partition, the data item is loaded into the corresponding partition even if the partition does not reside on the same node that requires the data item. The node then reads the data item from the corresponding partition of the shared cache.
Type: Grant
Filed: April 23, 2004
Date of Patent: October 10, 2006
Assignee: Oracle International Corporation
Inventors: Roger J. Bamford, Sashikanth Chandrasekaran, Angelo Pruscino
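The item-to-partition mapping described above can be sketched with a simple hash scheme: every node resolves a data item to the same owning partition, and a miss loads the item into that partition regardless of which node asked. The class name, the md5-based owner function, and the dict-per-node layout are illustrative assumptions, not the patented design.

```python
import hashlib

class PartitionedSharedCache:
    """Sketch: a shared cache split into per-node partitions, with a
    stable mapping from data items to owning partitions."""

    def __init__(self, node_names):
        self.partitions = {name: {} for name in node_names}  # one partition per node
        self.nodes = sorted(node_names)

    def owner(self, item_key):
        # Deterministic mapping from item to partition (hypothetical scheme):
        # every node computes the same owner for the same key.
        digest = hashlib.md5(item_key.encode()).digest()
        return self.nodes[digest[0] % len(self.nodes)]

    def read(self, item_key, load_from_disk):
        part = self.partitions[self.owner(item_key)]
        if item_key not in part:
            # Load into the owning partition, even if it is a remote node's.
            part[item_key] = load_from_disk(item_key)
        return part[item_key]
```

Because ownership is a pure function of the key, two nodes never cache duplicate copies of the same item in different partitions.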
-
Patent number: 7111124
Abstract: A method, apparatus, and signal-bearing medium for improving the performance of a cache when request streams with different spatial and/or temporal properties access the cache. A set in the cache is partitioned into subsets, with different request streams using different subsets within the cache. In this way, interference between the different request streams is reduced.
Type: Grant
Filed: March 12, 2002
Date of Patent: September 19, 2006
Assignee: Intel Corporation
Inventors: Ravishankar R. Iyer, Pete D. Vogt
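The subset partitioning idea can be sketched as a single cache set whose ways are divided among request streams, so each stream evicts only its own lines. The class, the per-stream way counts, and the list-based LRU are assumptions for the sketch, not the patented mechanism.

```python
class StreamPartitionedSet:
    """Sketch: the ways of one cache set are split among request
    streams so streams with different locality don't evict each other."""

    def __init__(self, ways_per_stream):
        # e.g. {"sequential": 2, "random": 6} for an 8-way set
        self.subsets = {s: [] for s in ways_per_stream}
        self.limits = dict(ways_per_stream)

    def access(self, stream, tag):
        subset = self.subsets[stream]
        if tag in subset:
            subset.remove(tag)
            subset.append(tag)    # LRU refresh within the stream's subset
            return True
        if len(subset) >= self.limits[stream]:
            subset.pop(0)         # evict only within this stream's subset
        subset.append(tag)
        return False
```

A churning stream can thrash only its own ways: below, the "seq" stream replaces its single line while the "rand" stream's line stays resident.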
-
Patent number: 7107397
Abstract: A sequential buffer for a magnetic tape data storage system comprises a plurality of segments. A buffer management system buffers data in the sequential buffer, conducting a data transfer process. Subsequently, some of the buffered data is maintained in some, but fewer than all, of the segments of the buffer. Additionally, the maintained buffered data is indicated as VALID data. Thus, a subsequent process may be conducted directly using the data maintained in the buffer, which avoids moving the tape to reread the data.
Type: Grant
Filed: May 29, 2003
Date of Patent: September 12, 2006
Assignee: International Business Machines Corporation
Inventors: Kirby Grant Dahman, Paul Merrill Greco, Glen Alan Jaquette
-
Patent number: 7107404
Abstract: A data processing system comprising a storage apparatus and computers which execute a first program and a second program, the storage apparatus having a cache memory with a first area and a second area and a disk unit for storing data of the cache memory. In response to a data storage request, the storage apparatus writes the data into the first area or the second area according to area identification information included in the request. The data stored in the second area is copied to the first area in response to a copy request for causing the data in the second area to be reflected in the first area.
Type: Grant
Filed: September 2, 2004
Date of Patent: September 12, 2006
Assignee: Hitachi, Ltd.
Inventors: Kouichi Ohtsubo, Nobuo Kawamura
-
Patent number: 7107403
Abstract: A method and system for dynamically allocating cache space in a storage system among multiple workload classes, each having a unique set of quality-of-service (QoS) requirements. The invention dynamically adapts the space allocated to each class depending upon the observed response time for each class and the observed temporal locality in each class. The dynamic allocation is achieved by maintaining a history of recently evicted pages for each class, determining a future cache size for the class based on the history and the QoS requirements, where the future cache size may differ from the current cache size for the class, determining whether the QoS requirements for the class are being met, and adjusting the future cache size to maximize the number of classes in which the QoS requirements are met. The future cache sizes are increased for the classes whose QoS requirements are not met, while they are decreased for those whose QoS requirements are met.
Type: Grant
Filed: September 30, 2003
Date of Patent: September 12, 2006
Assignee: International Business Machines Corporation
Inventors: Dharmendra Shantilal Modha, Divyesh Jadav, Pawan Goyal, Renu Tewari
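The adjustment rule in the last sentence — grow classes missing their QoS target, shrink classes meeting it — can be sketched as a simple rebalancing step. The function name, the fixed step size, and the even cost split are hypothetical choices; the patent's actual sizing uses eviction history and temporal locality, which this sketch omits.

```python
def adjust_class_sizes(classes, step=8):
    """Sketch: grow the cache share of workload classes missing their
    QoS response-time target, paid for by classes meeting theirs.
    `classes` maps name -> {"size": pages, "observed": ms, "target": ms}."""
    missing = [c for c in classes.values() if c["observed"] > c["target"]]
    meeting = [c for c in classes.values() if c["observed"] <= c["target"]]
    for c in missing:
        c["size"] += step           # future cache size increased
    cost = step * len(missing)      # reclaim the same total from the rest
    for c in meeting:
        take = min(c["size"], cost // max(1, len(meeting)))
        c["size"] -= take           # future cache size decreased
    return classes
```

Repeating this step each observation interval moves space toward classes whose requirements are not being met, as the abstract describes.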
-
Patent number: 7103722
Abstract: A method and structure is disclosed for constraining cache line replacement when processing a cache miss in a computer system. The invention contains a K-way set associative cache that selects lines in the cache for replacement. The invention constrains the selecting process so that only a predetermined subset of each set of cache lines is selected for replacement. The subset has at least a single cache line and the set size is at least two cache lines. The invention may further select between at least two cache lines based upon which of the cache lines was accessed least recently. A selective enablement of the constraining process is based on a free space memory condition of a memory associated with the cache memory. The invention may further constrain cache line replacement based upon whether the cache miss is from a non-local node in a nonuniform-memory-access system. The invention may also process cache writes so that a predetermined subset of each set is known to be in an unmodified state.
Type: Grant
Filed: July 22, 2002
Date of Patent: September 5, 2006
Assignee: International Business Machines Corporation
Inventors: Caroline Benveniste, Peter Franaszek, John T. Robinson, Charles Schulz
-
Patent number: 7096321
Abstract: A method, system, and program storage medium for adaptively managing pages in a cache memory included within a system having a variable workload, comprising arranging the cache memory into a circular buffer; maintaining a pointer that rotates around the circular buffer; maintaining a bit for each page in the circular buffer, wherein a bit value 0 indicates that the page was not accessed by the system since the last time that the pointer traversed over the page, and a bit value 1 indicates that the page has been accessed since the last time the pointer traversed over the page; and dynamically controlling a distribution of the number of pages in the cache memory that are marked with bit 0 in response to a variable workload in order to increase the hit ratio of the cache memory.
Type: Grant
Filed: October 21, 2003
Date of Patent: August 22, 2006
Assignee: International Business Machines Corporation
Inventor: Dharmendra S. Modha
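The circular buffer with one bit per page is the classic CLOCK arrangement, which can be sketched briefly. This is a plain CLOCK sketch, not the patented adaptive control of the bit-0 distribution; the class name and the set-bit-on-insert choice are assumptions.

```python
class ClockBuffer:
    """Sketch: a circular buffer of pages with a rotating pointer.
    A page's bit is set on access and cleared as the pointer sweeps
    past; a bit-0 page has not been re-accessed since the last sweep
    and is the replacement candidate."""

    def __init__(self, size):
        self.pages = [None] * size   # page ids in the circular buffer
        self.bits = [0] * size       # one bit per page
        self.hand = 0                # rotating pointer

    def access(self, page):
        if page in self.pages:
            self.bits[self.pages.index(page)] = 1  # re-accessed since last sweep
            return True
        # Rotate until a bit-0 page is found, clearing bits as we pass.
        while self.bits[self.hand] == 1:
            self.bits[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page               # replace the cold page
        self.bits[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.pages)
        return False
```

The patent's contribution layers adaptivity on top of this: tuning how many pages carry bit 0 as the workload shifts, rather than leaving it to chance.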
-
Patent number: 7096323
Abstract: A computer system with a processor cache that stores remote cache presence information. In one embodiment, a plurality of presence vectors are stored to indicate whether particular blocks of data mapped to another node are being remotely cached. Rather than storing the presence vectors in a dedicated storage, the remote cache presence vectors may be stored in designated locations of a cache memory subsystem, such as an L2 cache, associated with a processor core. For example, a designated way of the cache memory subsystem may be allocated for storing remote cache presence vectors, while the remaining ways of the cache are used to store normal processor data. New data blocks may be remotely cached in response to evictions from the cache memory subsystem. In yet a further embodiment, additional entries of the cache memory subsystem may be used for storing directory entries to filter probe command and response traffic.
Type: Grant
Filed: September 27, 2002
Date of Patent: August 22, 2006
Assignee: Advanced Micro Devices, Inc.
Inventors: Patrick Conway, Frederick D. Weber
-
Patent number: 7096319
Abstract: A computer system acquires mapping information of data storage regions in respective layers from a layer of DBMSs to a layer of storage subsystems, determines the correspondence between DB data and storage positions in each storage subsystem on the basis of the mapping information, decides a cache partitioning in each storage subsystem on the basis of the correspondence, and sets the cache partitioning for each storage subsystem. When cache allocation in the DBMS or the storage subsystem needs to be changed, information acquired by the DBMS for estimating the cache effect of the change in cache allocation is used to estimate the cache effect in the storage subsystem.
Type: Grant
Filed: June 17, 2004
Date of Patent: August 22, 2006
Assignee: Hitachi, Ltd.
Inventors: Kazuhiko Mogi, Norifumi Nishikawa
-
Patent number: 7093073
Abstract: A mechanism for caching Web services requests and responses, including testing an incoming request against the cached requests and associated responses, is provided. The requests are selectively tested against the cached data in accordance with a set of policies. If a selected request hits in the cache, the response is served from the cache. Otherwise, the request is passed to the corresponding Web-services server/application. Additionally, a set of predetermined cache specifications for generating request identifiers may be provided. The identifier specification may be autonomically adjusted by determining cache hit/cache miss ratios over the set of identifier specifications and over a set of sample requests. The set of specifications may then be sorted to reflect the performance of the respective cache specification algorithms for the current mix of requests.
Type: Grant
Filed: June 26, 2003
Date of Patent: August 15, 2006
Assignee: International Business Machines Corporation
Inventor: Gregory Louis Truty
-
Patent number: 7093075
Abstract: A system and method for reducing latency in memory systems is provided. A copy way is established in a set of a set associative cache, which is physically closer to a requesting entity than other memory positions. Likely-to-be-accessed data is copied to the copy way for subsequent access. In this way, subsequent accesses of the most likely data have their access time reduced due to the physical proximity of the data being close to the requesting entity. Methods herein further provide ranking and rearranging blocks in the cache based on coupled local and global least recently used (LRU) algorithms to reduce latency time.
Type: Grant
Filed: November 7, 2003
Date of Patent: August 15, 2006
Assignee: International Business Machines Corporation
Inventors: William Robert Reohr, Zhigang Hu
-
Patent number: 7093071
Abstract: A memory system including a programmable memory, such as a flash memory, may include a write buffer. A processor may generate and store a queue of commands to write a sequence of valid bytes in a reclaim operation. A controller in the memory system may perform the write commands in the write buffer without intervention by the processor.
Type: Grant
Filed: October 9, 2002
Date of Patent: August 15, 2006
Assignee: Intel Corporation
Inventor: John C. Rudelic
-
Patent number: 7089361
Abstract: Methods, apparatus, and program product are disclosed for use in a computer system to provide for dynamic allocation of a directory memory in a node memory controller, in which one or more coherent multiprocessor nodes comprise the computer system. The directory memory in a node is partitioned between a snoop directory portion and a remote memory directory portion. During a predetermined time interval, snoop directory entry refills and remote memory directory entry refills are accumulated. After the time interval has elapsed, a ratio of the snoop directory entry refills to the number of remote memory directory entry refills is computed. The ratio is compared to a desired ratio. Responsive to a difference between the ratio and the desired ratio, adjustments are made to the allocation of the memory directory between the snoop directory and the remote memory directory.
Type: Grant
Filed: August 7, 2003
Date of Patent: August 8, 2006
Assignee: International Business Machines Corporation
Inventor: John Michael Borkenhagen
-
Patent number: 7085888
Abstract: A cache class in a software-administered cache of a multiprocessor is assigned cache space that is localized to a single region of a memory and is contiguous. Synchronization and LRU operations can step sequentially through the given region, removing the need for SLB searches or the penalty for a miss, while other threads remain random access. The threads that manage each virtual memory area can then be attached to specific processors, maintaining physical locality as well.
Type: Grant
Filed: October 9, 2003
Date of Patent: August 1, 2006
Assignee: International Business Machines Corporation
Inventor: Zachary Merlynn Loafman
-
Patent number: 7080205
Abstract: The invention herein pertains to a data processing device (1) with a processor (2) and a memory (3). The memory (3) comprises a first memory sector (4) and a second memory sector (6), a first cache (5) being arranged for the first memory sector (4) and a second cache (7) being arranged for the second memory sector (6). The function of the second cache (7) is that predetermined, selected sub-programs, interrupt vectors (8) and interrupt handlers (9), which are normally stored in the second memory sector (6) (for example, a ROM memory or a RAM memory), are held in temporary storage in the second cache (7). An advantage is that no displacement cycles take place in the second cache (7).
Type: Grant
Filed: March 20, 2001
Date of Patent: July 18, 2006
Assignee: Fujitsu Siemens Computer GmbH
Inventor: Nikolaus Demharter
-
Patent number: 7080128
Abstract: When communications among a plurality of processors employed in a network storage system are required, any of the processors initiating a communication on the transmission side issues a request to an I/O processing apparatus, which is used for controlling a disk unit and a disk cache common to the processors, in order to allocate an area in the common disk cache as a communication buffer. Upon such a request, the I/O processing apparatus allocates a specific area in the common disk cache as a communication buffer and gives notice of the allocation to the requesting processors on the transmission side. Receiving the notice, the transmission-side processors write the data to be transferred into the specific area of the disk cache, and the reception-side processors then fetch the transferred data from the specific area.
Type: Grant
Filed: August 12, 2003
Date of Patent: July 18, 2006
Assignee: Hitachi, Ltd.
Inventor: Akihiko Sakaguchi
-
Patent number: 7073033
Abstract: A memory model for a run-time environment is disclosed that includes a process-specific area of memory to which objects in the call-specific and session-specific areas of memory can be migrated at the end of a database call. User-specific objects can then be migrated to the session-specific area of memory. In one embodiment, the process-specific area of memory can be saved in a disk file and used to hot start another instance of an application server.
Type: Grant
Filed: May 8, 2003
Date of Patent: July 4, 2006
Assignee: Oracle International Corporation
Inventors: Harlan Sexton, David Unietis, Peter Benson
-
Patent number: 7069387
Abstract: A method for optimizing a cache memory used for multitexturing in a graphics system is implemented. The graphics system comprises a texture memory, which stores texture data comprised in texture maps, coupled to a texture cache memory. Active texture maps for an individual primitive, for example a triangle, are identified, and the texture cache memory is divided into partitions. In one embodiment, the number of texture cache memory partitions equals the number of active texture maps. Each texture cache memory partition corresponds to a respective single active texture map, and is operated as a direct mapped cache for its corresponding respective single active texture map. In one embodiment, each texture cache memory partition is further operated as an associative cache for the texture data comprised in the partition's corresponding respective single active texture map. The cache memory is dynamically re-configured for each primitive.
Type: Grant
Filed: March 31, 2003
Date of Patent: June 27, 2006
Assignee: Sun Microsystems, Inc.
Inventor: Brian D. Emberling
-
Patent number: 7065628
Abstract: Memory access efficiency for packet applications may be improved by transferring full partitions of data. The number of full partitions written to external memory may be increased by temporarily storing packets using on-chip memory that is on a chip with the processor. Before writing packets to external memory, packets of length smaller than the external memory partition size may be temporarily stored in the on-chip memory until an amount corresponding to a full or nearly full partition has been collected, at which point the data can be efficiently written to an external memory partition.
Type: Grant
Filed: May 29, 2002
Date of Patent: June 20, 2006
Assignee: Intel Corporation
Inventors: Juan-Carlos Calderon, Jing Ling, Jean-Michel Caia, Vivek Joshi, Anguo T. Huang
-
Patent number: 7058764
Abstract: Exemplary systems, methods, and devices dynamically characterize a portion of a total cache as a read cache or a write cache in response to a host workload. Exemplary systems, methods, and devices receive a host workload parameter and allocate cache memory to either of a read cache or a write cache based on the host workload parameter.
Type: Grant
Filed: April 14, 2003
Date of Patent: June 6, 2006
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Brian S. Bearden
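The workload-driven read/write split can be sketched as a one-line allocation rule. The proportional policy and the 10% floor on each side are hypothetical choices for illustration; the abstract does not specify the actual allocation function.

```python
def allocate_cache(total_mb, read_fraction_of_workload):
    """Sketch: split a total cache into read and write portions from an
    observed host workload parameter (the fraction of requests that are
    reads), with a floor so neither portion disappears entirely."""
    floor = total_mb // 10                       # keep at least 10% per role
    read_mb = int(total_mb * read_fraction_of_workload)
    read_mb = max(floor, min(total_mb - floor, read_mb))
    return {"read": read_mb, "write": total_mb - read_mb}
```

A read-heavy workload thus gets most of the cache characterized as read cache, and the split can be recomputed as the observed workload parameter changes.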
-
Patent number: 7058753
Abstract: A data storage area other than an online data storage area is accessed by the same method used for data access to a data storage area in an online state. A plurality of logical volumes carried by a disk array apparatus includes an online volume that is in an online state to a host and an offline volume that is in an offline state to the host. The host transmits an access command including target information designating a target volume to the disk array apparatus as an access command to a starting volume other than the target volume. The disk array apparatus receives the access command to the starting volume and offers data access to the target volume, on the basis of the target information carried by that access command, to the host.
Type: Grant
Filed: February 3, 2004
Date of Patent: June 6, 2006
Assignee: Hitachi, Ltd.
Inventors: Akihiro Mori, Kiyohisa Miyamoto, Masashi Kimura
-
Patent number: 7058784
Abstract: A method is proposed for managing the access procedure for large-block flash memory by employing a page cache block, so as to reduce the occurrence of swap operations. At least one block of the nonvolatile memory is used as a page cache block. When a host requests to write data to the storage device, the last page of the data is written into one available page of the page cache block by the controller. A block structure is defined in the controller, having a data block for storing original data, a writing block for temporary data storage in the access operation, and a page cache block for storing the last page of data to be written.
Type: Grant
Filed: July 4, 2003
Date of Patent: June 6, 2006
Assignee: Solid State System Co., Ltd.
Inventor: Chih-Hung Wang
-
Patent number: 7051179
Abstract: A processor card supporting multiple cache configurations, and a microprocessor for selecting one of the multiple cache configurations, are disclosed. The processor card has a first static random access memory mounted on its front side and a second static random access memory mounted on its rear side. The address pins of the memories are aligned, and each pair of aligned address pins is electrically coupled to concurrently receive an address bit signal from the microprocessor. During an initial boot of the microprocessor, a multiplexor in the microprocessor provides the address bit signals to the address pins in response to a control signal indicative of a selected cache configuration.
Type: Grant
Filed: September 18, 2003
Date of Patent: May 23, 2006
Assignee: International Business Machines Corporation
Inventors: Keenan W. Franz, Michael T. Vaden
-
Patent number: 7047387
Abstract: A method for calculating a block cache size for a host process or application on a computer based at least upon virtual memory page evictions and/or virtual memory page reclamations for the computer. A virtual memory page eviction is the act of removing the contents of a physical memory page for the purpose of loading it with the contents of another virtual memory page. A virtual memory page reclamation is the return to a working set of a page that was previously removed by the operating system due to memory constraints; the page must not have been evicted. Additional fundamental properties of the application and the computer may be used, such as available physical memory on the computer, total physical memory, and block evictions. A block eviction is the act of removing the contents of a block from the block cache for the purpose of loading it with new contents.
Type: Grant
Filed: July 16, 2003
Date of Patent: May 16, 2006
Assignee: Microsoft Corporation
Inventor: Andrew E. Goodsell
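The abstract names the inputs (page evictions, page reclamations, available physical memory) but not the calculation itself; a minimal sketch under the assumption that evictions signal memory pressure (shrink the block cache) while reclamations alongside free memory suggest room to grow (the step factors are invented for illustration):

```python
def target_block_cache_size(current_size, page_evictions, page_reclaims,
                            available_phys, shrink_step=0.9, grow_step=1.1):
    """Heuristic resize: page evictions mean the OS is under memory
    pressure, so shrink; reclamations with physical memory to spare
    suggest the cache can safely grow."""
    if page_evictions > 0:
        return int(current_size * shrink_step)
    if page_reclaims > 0 and available_phys > current_size:
        return int(current_size * grow_step)
    return current_size
```

A real implementation would sample these counters over an interval and damp the adjustment; the point here is only the direction of each signal.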
-
Patent number: 7047366
Abstract: Described are various quality of service (QOS) parameters that may be used in characterizing device behavior in connection with a cache. A Partition parameter indicates which portions of the available cache may be used for data of an associated device. A Survival parameter indicates how long data of an associated device should remain in cache after use. A Linearity parameter indicates a likelihood factor that subsequent data tracks may be used, so this parameter may be used in determining whether to prefetch data. A Flush parameter indicates how long data should remain in cache after a write pending slot is returned to cache after being written out to the actual device. The QOS parameters may be included in configuration data, and the parameter values may be read and/or modified.
Type: Grant
Filed: June 17, 2003
Date of Patent: May 16, 2006
Assignee: EMC Corporation
Inventor: Josef Ezra
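The four parameters map naturally onto a per-device configuration record. A minimal sketch — the field encodings (a partition bitmask, seconds for the time values, a 0–1 likelihood, and the prefetch threshold) are assumptions for illustration, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class CacheQos:
    partition: int     # bitmask of cache partitions this device's data may occupy
    survival: float    # seconds data should stay cached after use
    linearity: float   # 0..1 likelihood of sequential access; drives prefetch
    flush: float       # seconds a slot lingers after a pending write completes

    def should_prefetch(self, threshold=0.5):
        """Prefetch the next tracks when the device looks sequential enough."""
        return self.linearity >= threshold
```

A device configured as `CacheQos(partition=0b0011, survival=30.0, linearity=0.8, flush=5.0)` would be confined to the low two partitions and considered a prefetch candidate.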
-
Patent number: 7039762
Abstract: A microprocessor having an interleaved cache and two parallel processing pipelines adapted to access all of the interleaved cache. The microprocessor comprises a cache directory for each of the parallel processing pipelines, wherein each cache directory is split according to the interleaved cache and the interleaving of the cache directory is independent of the address bits used for cache interleaving.
Type: Grant
Filed: May 12, 2003
Date of Patent: May 2, 2006
Assignee: International Business Machines Corporation
Inventors: Jennifer A. Navarro, Chung-Lung K. Shum, Aaron Tsai
-
Patent number: 7039763
Abstract: Briefly, in accordance with an embodiment of the invention, an apparatus and method to share a cache memory is provided. The method may include dynamically partitioning a cache memory to cache data for one or more active clients during operation of the cache memory, wherein the number of active clients varies during operation of the cache memory. The apparatus may include a control device coupled to a cache memory to dynamically partition the cache memory.
Type: Grant
Filed: April 11, 2003
Date of Patent: May 2, 2006
Assignee: Intel Corporation
Inventors: Dennis M. O'Connor, Michael W. Morrow
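One simple realization of repartitioning as clients come and go is an even split of cache ways among the currently active clients, recomputed whenever the active set changes. This even-split policy is an illustrative assumption; the patent does not specify the division rule:

```python
def partition_cache(total_ways, active_clients):
    """Evenly divide cache ways among the active clients, handing any
    leftover ways to the clients listed first."""
    n = len(active_clients)
    if n == 0:
        return {}
    base, extra = divmod(total_ways, n)
    return {client: base + (1 if i < extra else 0)
            for i, client in enumerate(active_clients)}
```

With 8 ways and three active clients the split is 3/3/2; when a client goes idle, calling the function again redistributes its ways to the remaining clients.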
-
Patent number: 7035978
Abstract: Disclosed is a method, system, and program for determining which data to remove from storage. A first policy is used to determine when to remove a block of data of a first type. A second policy is used to determine when to remove a block of data of a second type.
Type: Grant
Filed: March 11, 2003
Date of Patent: April 25, 2006
Assignee: International Business Machines Corporation
Inventors: Michael E. Factor, Shachar Fienblit, Joseph Smith Hyde, II, Thomas Charles Jarvis, William Frank Micka, Gail Andrea Spear, Aviad Zlotnick
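A per-type eviction policy can be sketched with two stores, each governed by its own rule. The choice of LRU for one type and FIFO for the other is purely illustrative (the patent does not name the policies), as are the `"hot"`/`"stream"` type labels:

```python
from collections import OrderedDict

class TwoPolicyCache:
    """Cache holding two data types, each evicted under its own policy:
    LRU for 'hot' blocks, FIFO for 'stream' blocks (illustrative choices)."""

    def __init__(self, capacity_per_type):
        self.cap = capacity_per_type
        self.lru = OrderedDict()    # 'hot' blocks, least-recently-used eviction
        self.fifo = OrderedDict()   # 'stream' blocks, first-in-first-out eviction

    def put(self, key, value, kind):
        store = self.lru if kind == "hot" else self.fifo
        if key in store:
            del store[key]
        elif len(store) >= self.cap:
            store.popitem(last=False)   # evict oldest / least recent entry
        store[key] = value

    def get(self, key):
        if key in self.lru:
            self.lru.move_to_end(key)   # refresh recency for LRU blocks
            return self.lru[key]
        return self.fifo.get(key)       # FIFO blocks: access does not refresh
```

Note the asymmetry: a `get` protects a hot block from eviction but leaves a stream block's position unchanged, which is exactly the kind of per-type behavior the abstract describes.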
-
Patent number: 7028150
Abstract: A memory system and method for processing a data structure comprising a plurality of data bits representing a line of memory, wherein the data bits are divided into a plurality of data chunks, each of the data chunks including at least an error correction code portion and a data portion; and a first chunk of said plurality of data chunks having a tag portion, wherein said tag portion includes tag information for the entire line of memory, and wherein subsequent ones of said data chunks do not include tag information.
Type: Grant
Filed: August 23, 2002
Date of Patent: April 11, 2006
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: Curtis R. McAllister, Robert C. Douglas, Henry Yu
-
Patent number: 7024519
Abstract: Methods and apparatus for controlling hierarchical cache memories permit: controlling a first level cache memory including a plurality of cache lines, each cache line being operable to store an address tag and data; controlling a next lower level cache memory including a plurality of cache lines, each cache line being operable to store an address tag, status flags, and data, the status flags of each cache line including an L-flag; and setting the L-flag of a given cache line of the next lower level cache memory to indicate whether or not a corresponding one of the cache lines of the first level cache memory has been refilled with a copy of the data stored in the given cache line of the next lower level cache memory.
Type: Grant
Filed: August 26, 2002
Date of Patent: April 4, 2006
Assignee: Sony Computer Entertainment Inc.
Inventor: Hidetaka Magoshi
-
Patent number: 7013375
Abstract: Methods, apparatus, and program product are disclosed for use in a computer system comprising one or more multiprocessor nodes. The methods and apparatus provide for configurable allocation of a memory in a node memory controller. In a single-node implementation of the computer system, substantially all of the memory is allocated to a snoop directory used to store directory entries for cache lines used by processors in the node. In implementations having more than one node, the amounts of the memory allocated to the snoop directory and to a remote memory directory are controlled according to predetermined sizes based on the number of nodes in the computer system.
Type: Grant
Filed: March 31, 2003
Date of Patent: March 14, 2006
Assignee: International Business Machines Corporation
Inventors: John Michael Borkenhagen, Philip Rogers Hillier, III, Russell Dean Hoover
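The allocation rule above can be sketched as a lookup keyed by node count. The single-node case follows the abstract (everything goes to the snoop directory); the multi-node ratios below are invented placeholders, since the patent only says the sizes are predetermined per node count:

```python
def directory_split(total_mem, num_nodes):
    """Single node: all controller memory to the snoop directory.
    Multi-node: split per a size table keyed by node count
    (the ratios here are illustrative, not from the patent)."""
    if num_nodes == 1:
        return total_mem, 0                       # (snoop_dir, remote_dir)
    remote_share = {2: 0.25, 4: 0.5}.get(num_nodes, 0.5)
    remote = int(total_mem * remote_share)
    return total_mem - remote, remote
```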
-
Patent number: 7007021
Abstract: An improved data structure is provided by modifying a public-domain data structure known as a "heap." With these improvements applied, the resultant data structure is known as a "pile." This invention further describes a pipelined hardware implementation of a pile. Piles offer many advantages over heaps: they allow fast, pipelined hardware implementations with increased throughput, making piles practical for a wide variety of new applications; they remove the requirement to track and update the last position in the heap; they reduce the number of memory read accesses required during a delete operation; they require only ordinary, inexpensive RAM for storage in a fast, pipelined implementation; and they allow a random mixture of back-to-back insert, remove, and swap operations to be performed without stalling the pipeline.
Type: Grant
Filed: November 28, 2000
Date of Patent: February 28, 2006
Assignee: Altera Corporation
Inventors: Paul Nadj, David W. Carr, Edward D. Funnekotter
-
Patent number: 6996676
Abstract: An adaptive replacement cache policy dynamically maintains two lists of pages, a recency list and a frequency list, in addition to a cache directory. The policy keeps these two lists to roughly the same size, the cache size c. Together, the two lists remember twice the number of pages that would fit in the cache. At any time, the policy selects a variable number of the most recent pages to exclude from the two lists. The policy adaptively decides in response to an evolving workload how many top pages from each list to maintain in the cache at any given time. It achieves such online, on-the-fly adaptation by using a learning rule that allows the policy to track a workload quickly and effectively. This allows the policy to balance between recency and frequency in an online and self-tuning fashion, in response to evolving and possibly changing access patterns. The policy is also scan-resistant.
Type: Grant
Filed: November 14, 2002
Date of Patent: February 7, 2006
Assignee: International Business Machines Corporation
Inventors: Nimrod Megiddo, Dharmendra Shantilal Modha
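This is the policy widely known as ARC. A simplified sketch of the mechanism described above: resident lists T1 (seen once recently) and T2 (seen at least twice), ghost lists B1/B2 that remember evicted keys, and an adaptive target p for T1's size that grows on B1 ghost hits and shrinks on B2 ghost hits. This compresses the published algorithm and omits the data payload, so treat it as illustrative rather than a faithful implementation:

```python
from collections import OrderedDict

class ARC:
    """Sketch of an adaptive replacement cache with capacity c."""

    def __init__(self, c):
        self.c, self.p = c, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # resident pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost (history) keys

    def _replace(self, in_b2):
        # Evict from T1 while it exceeds its adaptive target p, else from T2.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None            # remember the recency-side eviction
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None            # remember the frequency-side eviction

    def access(self, key):
        hit = key in self.t1 or key in self.t2
        if key in self.t1:               # second recent touch: promote
            self.t1.pop(key)
            self.t2[key] = None
        elif key in self.t2:             # frequency hit: refresh position
            self.t2.move_to_end(key)
        elif key in self.b1:             # ghost hit favours recency: grow p
            self.p = min(self.c, self.p + max(len(self.b2) // max(len(self.b1), 1), 1))
            self._replace(False)
            self.b1.pop(key)
            self.t2[key] = None
        elif key in self.b2:             # ghost hit favours frequency: shrink p
            self.p = max(0, self.p - max(len(self.b1) // max(len(self.b2), 1), 1))
            self._replace(True)
            self.b2.pop(key)
            self.t2[key] = None
        else:                            # complete miss
            if len(self.t1) + len(self.b1) == self.c:
                if len(self.t1) < self.c:
                    self.b1.popitem(last=False)
                    self._replace(False)
                else:
                    self.t1.popitem(last=False)
            elif (len(self.t1) + len(self.t2)
                  + len(self.b1) + len(self.b2)) >= self.c:
                if (len(self.t1) + len(self.t2)
                        + len(self.b1) + len(self.b2)) >= 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(False)
            self.t1[key] = None
        return hit
```

Scan resistance falls out of the structure: a one-pass scan only churns T1 and its ghosts, while the frequently reused pages in T2 stay resident, and the learning rule on p shifts capacity toward whichever ghost list is seeing hits.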