Least Recently Used (LRU) Patents (Class 711/160)
-
Publication number: 20090228667
Abstract: A method to perform a least recently used (LRU) algorithm for a co-processor is described. In order to directly use instructions of a core processor and to directly access main storage by virtual addresses of said core processor, the co-processor comprises a TLB for virtual-to-absolute address translations plus a dedicated memory storage also including said TLB, wherein said TLB consists of at least two zones which can be assigned in a flexible manner, more than one at a time. The method is characterized in that one or more zones are replaced dependent on an actual compression service call (CMPSC) instruction.
Type: Application
Filed: March 6, 2009
Publication date: September 10, 2009
Applicant: International Business Machines Corporation
Inventors: Thomas Koehler, Siegmund Schlechter
-
Publication number: 20090193205
Abstract: A method of regeneration of a recording state of digital data stored in a node of a data network, the method including the steps of classifying files stored in the node, periodically writing a digital file from the node to a temporary memory, the temporary memory being a component of said node, and writing the digital file from the temporary memory to the same node.
Type: Application
Filed: July 2, 2008
Publication date: July 30, 2009
Applicant: ATM S.A.
Inventor: Jerzy Piotr Walczak
-
Publication number: 20090177854
Abstract: A method, system, and computer program product for preemptive page eviction in a computer system are provided. The method includes identifying a region in an input file for preemptive page eviction, where the identified region is infrequently accessed relative to other regions of the input file. The method also includes generating an output file from the input file, where the identified region is flagged as a page for preemptive page eviction in the output file. The method further includes loading the output file to a memory hierarchy including a faster level of memory and a slower level of memory, wherein the flagged page is preemptively written to the slower level of memory.
Type: Application
Filed: January 4, 2008
Publication date: July 9, 2009
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Eli M. Dow, Marie R. Laser, Charulatha Dhuvur, Jessie Yu
-
Patent number: 7543109
Abstract: A method for caching data in a blade computing complex includes providing a storage blade that includes a disk operative to store pages of data and a cache memory operative to store at least one of the pages. A processor blade is provided that includes a first memory area to store at least one of the pages and a second memory area configured to store an address of each of the pages and a hint value that is assigned to each of the pages. An address of each of the pages is stored in the second memory area, and a hint is assigned to each of the pages, where the hint is one of: likely to be accessed, may be accessed, and unlikely to be accessed. The page is then stored in storage blade cache memory based on the hint.
Type: Grant
Filed: May 16, 2008
Date of Patent: June 2, 2009
Assignee: International Business Machines Corporation
Inventors: Robert H. Bell, Jr., Jose R. Escalera, Octavian F. Herescu, Vernon W. Miller, Michael D. Roll
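The three-level hint scheme above lends itself to a small admission/eviction sketch. The class and constant names below are illustrative, not from the patent, and the eviction preference (dropping a "may be accessed" page before a "likely to be accessed" one) is an assumption about how such hints would plausibly be used:

```python
# Sketch of a hint-based admission policy: pages marked "unlikely" bypass the
# cache, and "may be accessed" pages are preferred eviction victims.
# All names here are illustrative, not taken from the patent.
LIKELY, MAYBE, UNLIKELY = "likely", "may", "unlikely"

class HintedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}            # page address -> (hint, data)

    def admit(self, addr, data, hint):
        """Cache a page only if its hint suggests it is worth keeping."""
        if hint == UNLIKELY:
            return False           # bypass the cache entirely
        if len(self.store) >= self.capacity:
            # evict a "may be accessed" page before a "likely" one
            victim = next((a for a, (h, _) in self.store.items() if h == MAYBE),
                          next(iter(self.store)))
            del self.store[victim]
        self.store[addr] = (hint, data)
        return True
```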
-
Patent number: 7526607
Abstract: A compression device recognizes patterns of data, compresses the data, and sends the compressed data to a decompression device that identifies a cached version of the data to decompress the data. In this way, the compression device need not resend high bandwidth traffic over the network. Both the compression device and the decompression device cache the data in packets they receive. Each device has a disk, on which each device writes the data in the same order. The compression device looks for repetitions of any block of data between multiple packets or datagrams that are transmitted across the network. The compression device encodes the repeated blocks of data by replacing them with a pointer to a location on disk. The decompression device receives the pointer and replaces the pointer with the contents of the data block that it reads from its disk.
Type: Grant
Filed: September 22, 2005
Date of Patent: April 28, 2009
Assignee: Juniper Networks, Inc.
Inventors: Amit P. Singh, Balraj Singh, Vanco Burzevski
-
Publication number: 20090089520
Abstract: In accordance with some embodiments, software transactional memory may be used for both managed and unmanaged environments. If a cache line is resident in a cache and this is not the first time that the cache line has been read since the last write, then the data may be read directly from the cache line, improving performance. Otherwise, a normal read may be utilized to read the information. Similarly, write performance can be accelerated in some instances to improve performance.
Type: Application
Filed: September 28, 2007
Publication date: April 2, 2009
Inventors: Bratin Saha, Ali-Reza Adl-Tabatabai, Tatiana Shpeisman, Cheng Wang
-
Patent number: 7509354
Abstract: A method, computer program product, and a data processing system for performing data replication in a multi-mastered system is provided. A first data processing system receives a replication command generated by a second data processing system. A conflict is identified between a first entry maintained by the first data processing system and a second entry of the second data processing system. Responsive to identifying the conflict, one of the first entry and the second entry is determined to be the most recently modified entry and the remaining entry of the first and second entries is determined to be the least recently modified entry. The least recently modified entry is replaced with the most recently modified entry, and the least recently modified entry is logged.
Type: Grant
Filed: January 7, 2005
Date of Patent: March 24, 2009
Assignee: International Business Machines Corporation
Inventor: John Ryan McGarvey
-
Patent number: 7506119
Abstract: A method for compiler-assisted victim cache bypassing including: identifying a cache line as a candidate for victim cache bypassing; conveying bypassing-the-victim-cache information to hardware; and checking a state of the cache line to determine a modified state of the cache line, wherein the cache line is identified for cache bypassing if it has no reuse within a loop or loop nest, and either there is no immediate loop reuse or there is a substantial across-loop reuse distance, so that it will be replaced from both main and victim cache before being reused.
Type: Grant
Filed: May 4, 2006
Date of Patent: March 17, 2009
Assignee: International Business Machines Corporation
Inventors: Yaoqing Gao, William E. Speight, Lixin Zhang
-
Publication number: 20090055595
Abstract: Provided are a method, system, and article of manufacture for adjusting parameters used to prefetch data from storage into cache. Data units are added from a storage to a cache, wherein requested data from the storage is returned from the cache. A degree of prefetch is processed indicating a number of data units to prefetch into the cache. A trigger distance is processed indicating a prefetched trigger data unit in the cache. The number of data units indicated by the degree of prefetch is prefetched in response to processing the trigger data unit. The degree of prefetch and the trigger distance are adjusted based on a rate at which data units are accessed from the cache.
Type: Application
Filed: August 22, 2007
Publication date: February 26, 2009
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Binny Sher Gill, Luis Angel Daniel Bathen, Steven Robert Lowe, Thomas Charles Jarvis
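The prefetch loop described above (a trigger unit that, when read, fires the next prefetch of `degree` units, with both knobs adjusted by access rate) can be modeled as below. The specific adjustment rule and the threshold are guesses for illustration, not the patented policy:

```python
# Toy model of trigger-based adaptive prefetching: reading the trigger data
# unit prefetches `degree` more units; degree and trigger distance adapt to
# the observed access rate. Adjustment rule is assumed, not from the patent.
class Prefetcher:
    def __init__(self, degree=4, trigger_distance=2):
        self.degree = degree
        self.trigger_distance = trigger_distance
        self.cache = set()
        self.trigger = None

    def prefetch(self, start):
        for u in range(start, start + self.degree):
            self.cache.add(u)
        # the trigger sits `trigger_distance` units before the prefetch end
        self.trigger = start + self.degree - self.trigger_distance

    def access(self, unit):
        hit = unit in self.cache
        if unit == self.trigger:                  # reading the trigger fires
            self.prefetch(max(self.cache) + 1)    # the next prefetch batch
        return hit

    def adjust(self, access_rate, threshold=0.5):
        # faster sequential access -> fetch more, trigger earlier
        if access_rate > threshold:
            self.degree += 1
            self.trigger_distance += 1
        elif self.degree > 1:
            self.degree -= 1
```

Placing the trigger ahead of the last prefetched unit hides the fetch latency behind the remaining in-cache reads, which is why the trigger distance grows with the access rate.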
-
Patent number: 7496722
Abstract: A method of communicating memory mapped page priorities includes a software application storing page priority information for a memory mapped file on a computer readable medium, and an operating system reading the page priority information.
Type: Grant
Filed: April 26, 2005
Date of Patent: February 24, 2009
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Gregory William Thelen
-
Patent number: 7480767
Abstract: Methods and apparatus, including computer program products, for purging an item from a cache based on the expiration of a period of time and having an associated process to generate an item purged from the cache. A program stores a first item in a cache with an indication of a process to generate the first item, schedules a validity period for the first item, and purges the first item from the cache when the validity period has expired. The validity period may be optimized to be less than a period of time after which the first item would be promoted from a first generation of the cache to a second generation of the cache, and invalid objects in the first generation of the cache are freed from memory more frequently than invalid objects in the second generation of the cache.
Type: Grant
Filed: June 15, 2006
Date of Patent: January 20, 2009
Assignee: SAP AG
Inventor: Martin Moser
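The key pairing above (each cached item carries both a validity period and the process that can regenerate it) can be sketched as follows. The clock is injected so expiry is testable; class and parameter names are assumptions:

```python
# Sketch of a cache entry stored with a validity period plus the process
# that regenerates it after purging. Names are illustrative.
class RegeneratingCache:
    def __init__(self, clock):
        self.clock = clock                 # callable returning "now"
        self.items = {}                    # key -> [value, expiry, maker, validity]

    def put(self, key, maker, validity):
        """Store maker() along with its regeneration process and validity."""
        self.items[key] = [maker(), self.clock() + validity, maker, validity]

    def get(self, key):
        entry = self.items[key]
        if self.clock() >= entry[1]:       # validity expired: purge and
            entry[0] = entry[2]()          # regenerate with the stored process
            entry[1] = self.clock() + entry[3]
        return entry[0]
```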
-
Patent number: 7478210
Abstract: Memory reclamation with optimistic concurrency is described. In one example an allocated memory object is tentatively freed in a software transactional memory, the object having pointers into it from at least one transaction. A time when all transactions that are outstanding at the time an object is tentatively freed have ended is detected, and the object is actually freed based on the detection.
Type: Grant
Filed: June 9, 2006
Date of Patent: January 13, 2009
Assignee: Intel Corporation
Inventors: Bratin Saha, Richard L. Hudson, Ali-Reza Adl-tabatabai
-
Patent number: 7472230
Abstract: A preemptive write back controller is described. The present invention is well suited for a cache, main memory, or other temporarily private data storage that implements a write back strategy. The preemptive write back controller includes a list of the lines, pages, words, memory locations, or sets of memory locations potentially requiring a write back (i.e., those which previously experienced a write operation into them) in a write back cache, write back main memory, or other write back temporarily private data storage. Thus, the preemptive write back controller can initiate or force a preemptive cleaning of these lines, pages, words, memory locations, or sets of memory locations.
Type: Grant
Filed: September 14, 2001
Date of Patent: December 30, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Manohar K. Prabhu
-
Publication number: 20080320256
Abstract: An object is to reduce the number of bits required for LRU control when the number of target entries is large, and to achieve complete LRU control. Each time an entry is used, the ID of the used entry is stored to configure LRU information so that storage data 0, stored in the leftmost position, indicates the ID of the entry with the oldest last-use time (that is, the LRU entry), for example as shown in FIG. 1(1). An LRU control apparatus according to a first embodiment of the present invention refers to the LRU information and selects the entry corresponding to storage data 0 (for example, entry 1) from the LRU information as a candidate for LRU control, based on storage data 0 being the ID of the entry with the oldest last-use time.
Type: Application
Filed: August 27, 2008
Publication date: December 25, 2008
Applicant: FUJITSU LIMITED
Inventors: Tomoyuki Okawa, Hiroyuki Kojima, Masaki Ukai
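The complete-LRU scheme above amounts to keeping entry IDs in last-use order, with position 0 always holding the ID of the entry used longest ago. A plain Python list can stand in for the hardware LRU-information register; the class name is ours:

```python
# Complete LRU control via an ordered list of entry IDs: order[0] holds the
# ID of the entry with the oldest last-use time (the replacement candidate).
class CompleteLRU:
    def __init__(self, n_entries):
        self.order = list(range(n_entries))   # order[0] is the LRU entry

    def use(self, entry_id):
        """Record a use: move the ID to the most-recently-used end."""
        self.order.remove(entry_id)
        self.order.append(entry_id)

    def victim(self):
        """The replacement candidate is the ID in storage position 0."""
        return self.order[0]
```

Unlike approximate schemes (tree-PLRU, reference bits), this ordering always yields the true least recently used entry, at the cost of storing a full permutation of IDs.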
-
Publication number: 20080320016
Abstract: An apparatus for queue scheduling. An embodiment of the apparatus includes a dispatch order data structure, a bit vector, and a queue controller. The dispatch order data structure corresponds to a queue. The dispatch order data structure stores a plurality of dispatch indicators associated with a plurality of pairs of entries of the queue to indicate a write order of the entries in the queue. The queue controller interfaces with the queue and the dispatch order data structure. Multiple queue structures interface with output arbitration logic and schedule packets to achieve optimal throughput.
Type: Application
Filed: August 29, 2007
Publication date: December 25, 2008
Applicant: Raza Microelectronics, Inc.
Inventors: Gaurav Singh, Srivatsan Srinivasan, Lintsung Wong
-
Publication number: 20080307174
Abstract: A dual-use library that is able to handle calls from programs requiring either reference count or garbage collected memory management is described. This capability may be provided by introducing a new assignment routine, assign(), and instrumenting the reference count routines responsible for updating an object's reference count (e.g., the addReference() and removeReference() routines). The assign(), addReference() and removeReference() routines determine, at runtime, which memory management scheme is appropriate and execute the appropriate instructions (i.e., reference count or garbage collection specific instructions). The described dual-use library provides equivalent functionality to prior-art two-library implementations, but with a significantly lower memory footprint.
Type: Application
Filed: June 8, 2007
Publication date: December 11, 2008
Applicant: Apple Inc.
Inventors: Blaine Garst, Bertrand Philippe Serlet
-
Patent number: 7457920
Abstract: The proposed system and associated algorithm, when implemented, improve the processor cache miss rates and overall cache efficiency in multi-core environments in which multiple CPUs share a single cache structure (as an example). Cache efficiency is improved by tracking CPU core loading patterns such as miss rate and minimum cache line load threshold levels. Using this information along with an existing cache eviction method such as LRU determines which cache line from which CPU is evicted from the shared cache when a capacity conflict arises. This methodology allows one to dynamically allocate shared cache entries to each core within the socket based on the particular core's frequency of shared cache usage.
Type: Grant
Filed: January 26, 2008
Date of Patent: November 25, 2008
Assignee: International Business Machines Corporation
Inventors: Marcus Lathan Kornegay, Ngan Ngoc Pham
-
Publication number: 20080288731
Abstract: A bus arbiter receives requests from initiators and internally includes a page hit/miss determining unit with a permissible determining function, a bank open/close determining unit with a permissible determining function, and an LRU unit with a permissible determining function. Regarding the priority of request arbitration, bank priority on the SDRAM is determined in the order of page hit, bank open, and LRU. Furthermore, each determining unit internally includes a permissible time determining unit and processes, at top priority, the request of the initiator whose permissible time is below the count threshold value in the priority processing of the determining unit.
Type: Application
Filed: May 16, 2008
Publication date: November 20, 2008
Inventor: Yuji Izumi
-
Patent number: 7454573
Abstract: A hardware based method for determining when to migrate cache lines to the cache bank closest to the requesting processor to avoid remote access penalty for future requests. In a preferred embodiment, decay counters are enhanced and used in determining the cost of retaining a line as opposed to replacing it while not losing the data. In one embodiment, a minimization of off-chip communication is sought; this may be particularly useful in a CMP environment.
Type: Grant
Filed: January 13, 2005
Date of Patent: November 18, 2008
Assignee: International Business Machines Corporation
Inventors: Alper Buyuktosunoglu, Zhigang Hu, Jude A. Rivers, John T. Robinson, Xiaowei Shen, Vijayalakshmi Srinivasan
-
Publication number: 20080270715
Abstract: A secure memory device and method for obtaining and securely storing information relating to a life moment is disclosed. In the method, a parameter is received and inputted in a search heuristic. A search is made for the information according to the search heuristic and, upon finding the information, metadata is appended to the information. The information and metadata are then stored in a secure memory location. The secure memory location has a housing fabricated to withstand a predetermined stress, a detachable connection to a computer, and a memory that stores the information and protects it from unauthorized deletion. In some embodiments, the stored information may be selectively deleted in a safe and controlled manner.
Type: Application
Filed: June 30, 2008
Publication date: October 30, 2008
Applicant: MICROSOFT CORPORATION
Inventors: Aditha M. Adams, Adrian Mark Chandley, Carl J. Ledbetter, Dale Clark Crosier, Pasquale DeMaio, Steven T. Kaneko, Taryn K. Beck
-
Publication number: 20080189495
Abstract: A computer implemented method, an apparatus, and a computer usable program product are provided for reestablishing the hotness, or the retention priority, of a page. When a page is paged out of memory, the page's then-current retention priority is saved. When the page is paged in again later, the retention priority of the page is updated to the retention priority that was saved at or before the time the page was last paged out.
Type: Application
Filed: February 2, 2007
Publication date: August 7, 2008
Inventors: Gerald Francis McBrearty, Shawn Patrick Mullen, Jessica Carol Murillo, Johnny Meng-Han Shieh
-
Patent number: 7406568
Abstract: A technique to store a plurality of addresses and data to address and data buffers, respectively, in an ordered manner. More particularly, one embodiment of the invention stores a plurality of addresses to a plurality of address buffer entries and a plurality of data to a plurality of data buffer entries according to a true least-recently-used (LRU) allocation algorithm.
Type: Grant
Filed: June 20, 2005
Date of Patent: July 29, 2008
Assignee: Intel Corporation
Inventor: Benjamin Tsien
-
Patent number: 7401190
Abstract: Methods and systems for operating computing devices are described. In one embodiment, a small amount of static RAM (SRAM) is incorporated into an automotive computing device. The SRAM is battery-backed to provide a non-volatile memory space in which critical data, e.g. the object store, can be maintained in the event of a power loss.
Type: Grant
Filed: November 16, 2005
Date of Patent: July 15, 2008
Assignee: Microsoft Corporation
Inventors: Richard Dennis Beckert, Sharon Drasnin, Ronald Otto Radko
-
Publication number: 20080141167
Abstract: An image forming apparatus is disclosed. The image forming apparatus includes a displaying unit which displays a predetermined number of menu items among plural menu items on a screen, an inputting unit which selects a menu item from the menu items displayed on the screen by the displaying unit, and a storing unit which registers the menu item selected by the inputting unit in a user custom menu table having, for each user, registration regions where a predetermined number of menu items are stored.
Type: Application
Filed: October 9, 2007
Publication date: June 12, 2008
Inventors: Naohiko Kubo, Naruhiko Ogasawara, Nobuyuki Iwata, Hiroya Uruta, Takahiro Hirakawa
-
Publication number: 20080140956
Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
Type: Application
Filed: January 22, 2008
Publication date: June 12, 2008
Inventors: David T. Hass, Basab Mukherjee
-
Patent number: 7386673
Abstract: Embodiments of the present invention provide methods and systems for efficiently tracking evicted or non-resident pages. For each non-resident page, a first hash value is generated from the page's metadata, such as the page's mapping and offset parameters. This first hash value is then used as an index to point to one of a plurality of circular buffers. Each circular buffer comprises an entry for a clock pointer and entries that uniquely represent non-resident pages. The clock pointer points to the next page that is suitable for replacement and moves through the circular buffer as pages are evicted. In some embodiments, the entries that uniquely represent non-resident pages are a hash value that is generated from the page's inode data.
Type: Grant
Filed: November 30, 2005
Date of Patent: June 10, 2008
Assignee: Red Hat, Inc.
Inventor: Henri Han van Riel
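The two-level hashing above (one hash picks a circular buffer, a second hash is the compact signature stored in it, with a clock hand overwriting the oldest slot) can be sketched as below. The choice of hash functions, bucket count, and bucket size are assumptions for illustration:

```python
# Sketch of non-resident page tracking: a first hash of the page's
# mapping/offset selects a circular buffer; a second hash is stored in that
# buffer, with a clock hand overwriting the oldest entry on each eviction.
import zlib

N_BUCKETS, BUCKET_SIZE = 8, 4

class NonResidentTracker:
    def __init__(self):
        # each bucket: a clock-hand index plus fixed-size signature slots
        self.hands = [0] * N_BUCKETS
        self.buckets = [[None] * BUCKET_SIZE for _ in range(N_BUCKETS)]

    def _bucket(self, mapping, offset):
        return zlib.crc32(f"{mapping}:{offset}".encode()) % N_BUCKETS

    def _signature(self, mapping, offset):
        return zlib.adler32(f"{mapping}:{offset}".encode())

    def record_eviction(self, mapping, offset):
        b = self._bucket(mapping, offset)
        self.buckets[b][self.hands[b]] = self._signature(mapping, offset)
        self.hands[b] = (self.hands[b] + 1) % BUCKET_SIZE  # advance the hand

    def was_evicted(self, mapping, offset):
        b = self._bucket(mapping, offset)
        return self._signature(mapping, offset) in self.buckets[b]
```

The point of the design is memory economy: no per-page structure survives eviction, only a fixed-size array of small signatures, yet a page-in can still ask "was this page recently resident?" to restore its hotness.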
-
Patent number: 7366854
Abstract: In an embodiment, a memory scheduler is provided to process memory requests. The memory scheduler may comprise: a plurality of arbitrators that each select memory requests according to age of the memory requests and whether resources are available for the memory requests; and a second-level arbitrator that selects, for an arbitration round, a series of memory requests made available by the plurality of arbitrators, wherein the second-level arbitrator begins the arbitration round by selecting a memory request from a least recently used (LRU) arbitrator of the plurality of arbitrators.
Type: Grant
Filed: May 8, 2003
Date of Patent: April 29, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventors: John M. Wastlick, Michael K. Dugan
-
Patent number: 7366855
Abstract: A page replacement method is provided. The page replacement method includes (a) establishing a first page list in which a plurality of pages in a main memory are listed in the order in which they have been used, (b) establishing a second page list in which some of the pages in the main memory whose images are stored in a storage medium are listed in the order in which they have been used, and (c) storing data downloaded from the storage medium in the pages included in the second page list, in the order opposite to that in which the corresponding pages are listed in the second page list.
Type: Grant
Filed: July 28, 2005
Date of Patent: April 29, 2008
Assignee: Samsung Electronics Co., Ltd.
Inventors: Jin-kyu Kim, Kwang-yoon Lee, Jin-soo Kim, Sun-young Park, Chan-ik Park, Jeong-uk Kang
-
Patent number: 7360043
Abstract: One embodiment of the present invention provides a system that manages an LRU list such that the rank, or position, of data records in the sequence can be determined efficiently. The system initializes an index field in each record to the record's initial rank. When a record is accessed, the system moves it to the beginning of the LRU list and appends the value of the record's index field to a "change list." The system then sets the record's index field to zero. The change list effectively tracks the records accessed since initialization, and combined with the records' index fields can be used to efficiently compute the rank of any record in the list. This ability to efficiently compute the rank of the data record in the LRU list reduces the frequency with which the computationally-expensive initialization operation must be executed on the LRU list.
Type: Grant
Filed: August 17, 2005
Date of Patent: April 15, 2008
Assignee: Sun Microsystems, Inc.
Inventor: Jan L. Bonebakker
-
Patent number: 7353350
Abstract: In accordance with the teaching described herein, systems and methods are provided for managing memory space in a mobile device. A plurality of data storage locations may be included. A plurality of software applications may be included, with each software application being operable to store data to a different data storage location. A data store management system may be operable to access and delete data stored in the plurality of data storage locations. If insufficient memory space is available in one of the data storage locations, then the data store management system may access the one data storage location and at least one other data storage location and delete data from at least one of the accessed data storage locations.
Type: Grant
Filed: July 23, 2003
Date of Patent: April 1, 2008
Assignee: Research In Motion Limited
Inventors: Gerhard D. Klassen, Robbie J. Maurice
-
Patent number: 7343457
Abstract: A memory controller for managing memory requests from a plurality of requesters to a plurality of memory banks is disclosed. The memory controller includes an arbiter, a first path controller, a second path controller, and a synchronizer. The arbiter is configured to receive the memory requests from the plurality of requesters and identify requests for processing responsive to the requested memory banks. The first and second path controllers are coupled to the arbiter and the plurality of memory banks, with the first path controller configured to process the first memory request and the second path controller configured to process the second memory request. The synchronizer is coupled between the first path controller and the second path controller for synchronizing the first and second path controllers such that the first and second memory requests processed by the first and second path controllers, respectively, do not conflict.
Type: Grant
Filed: August 1, 2003
Date of Patent: March 11, 2008
Assignee: Unisys Corporation
Inventor: Joseph H. End, III
-
Patent number: 7321954
Abstract: An LRU array and method for tracking the accessing of lines of an associative cache. The most recently accessed lines of the cache are identified in the table, and cache lines can be blocked from being replaced. The LRU array contains a data array having a row of data representing each line of the associative cache, having a common address portion. A first set of data for the cache line identifies the relative age of the cache line for each way with respect to every other way. A second set of data identifies whether a line of one of the ways is not to be replaced. For cache line replacement, the cache controller will select the least recently accessed line using contents of the LRU array, considering the value of the first set of data, as well as the value of the second set of data indicating whether or not a way is locked. Updates to the LRU occur after each pre-fetch or fetch of a line or when it replaces another line in the cache memory.
Type: Grant
Filed: August 11, 2004
Date of Patent: January 22, 2008
Assignee: International Business Machines Corporation
Inventors: James N. Dieffenderfer, Richard W. Doing, Brian E. Frankel, Kenichi Tsuchiya
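The two data sets above (relative-age information per way, plus a lock bit that exempts a way from replacement) combine into a simple victim-selection rule, sketched here for one cache set. The class name and list-based age encoding are our stand-ins for the patent's hardware array:

```python
# Sketch of locked-way-aware LRU victim selection for one associative set:
# relative ages pick the least recently accessed way, skipping locked ways.
class LockableLRUSet:
    def __init__(self, n_ways):
        self.age_order = list(range(n_ways))  # front = least recently used way
        self.locked = [False] * n_ways

    def touch(self, way):
        """Update relative ages after a fetch or pre-fetch hits this way."""
        self.age_order.remove(way)
        self.age_order.append(way)

    def lock(self, way, value=True):
        """Set or clear the do-not-replace bit for a way."""
        self.locked[way] = value

    def choose_victim(self):
        """Least recently accessed way whose lock bit is clear."""
        for way in self.age_order:
            if not self.locked[way]:
                return way
        raise RuntimeError("all ways locked")
```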
-
Patent number: 7287136
Abstract: A cache device and a method for controlling cached data that enable efficient use of a storage area and improve the hit ratio are provided. When cache replacement is carried out in cache devices connected to each other through networks, data control is carried out so that the data blocks set to a deletion-pending status in each cache device in a cache group (each device including lists regarding the data blocks set to a deletion-pending status) are different from those in the other cache devices in the cache group. In this way, data control using deletion-pending lists is carried out. According to the system of the present invention, a storage area can be used efficiently as compared with a case where each cache device independently controls cache replacement. Data blocks stored in a number of cache devices are collected and sent to terminals in response to data acquisition requests from the terminals, thereby facilitating network traffic and improving the hit rate of the cache devices.
Type: Grant
Filed: June 24, 2003
Date of Patent: October 23, 2007
Assignee: Sony Corporation
Inventor: Tsutomu Miyauchi
-
Patent number: 7287144
Abstract: Using a counter of the Web server 10, a leave probability p1, the average value m and variance s² of think time, and a hit ratio r are calculated for a session data cache 12 involving a predetermined Web application. For the first reading in each group of temporally proximate readings of plural session data, p1a, ma and s²a are defined, together with the average value a of the number of data-reading sessions in each group. A computational expression setting means 21 sets a computational expression f(a) = a involving p1, m, s², r, p1a, ma and s²a, the expression being for a fixed-point computing method with variable a. A true-value searching means 22 searches for an almost-true value of a by the fixed-point computing method based on the computational expression f(a) = a. An estimation means 23 estimates ra based on the searched value of a.
Type: Grant
Filed: October 20, 2004
Date of Patent: October 23, 2007
Assignee: International Business Machines Corporation
Inventor: Toshiyuki Hama
-
Patent number: 7284096
Abstract: Systems and methods are provided for data caching. An exemplary method for data caching may include establishing a FIFO queue and an LRU queue in a cache memory. The method may further include establishing an auxiliary FIFO queue for addresses of cache lines that have been swapped out to an external memory. The method may further include determining, if there is a cache miss for the requested data, if there is a hit for the requested data in the auxiliary FIFO queue and, if so, swapping in the requested data into the LRU queue, otherwise swapping in the requested data into the FIFO queue.
Type: Grant
Filed: August 5, 2004
Date of Patent: October 16, 2007
Assignee: SAP AG
Inventor: Ivan Schreter
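The queue division above (cold misses enter a FIFO queue; misses whose address is remembered in the auxiliary FIFO are treated as re-referenced and enter an LRU queue) can be sketched as below. The queue sizes, class names, and the detail that both main queues feed the auxiliary FIFO on eviction are assumptions for illustration:

```python
# Sketch of the FIFO + LRU + auxiliary-FIFO policy: new lines enter the FIFO
# queue; a miss whose address hits the auxiliary FIFO goes to the LRU queue.
from collections import OrderedDict, deque

class HybridCache:
    def __init__(self, fifo_size, lru_size, aux_size):
        self.fifo = OrderedDict()          # insertion order = eviction order
        self.lru = OrderedDict()           # access order = eviction order
        self.aux = deque(maxlen=aux_size)  # addresses of swapped-out lines
        self.fifo_size, self.lru_size = fifo_size, lru_size

    def access(self, addr, data=None):
        if addr in self.lru:               # LRU hit: refresh recency
            self.lru.move_to_end(addr)
            return True
        if addr in self.fifo:              # FIFO hit: order unchanged
            return True
        # miss: the auxiliary FIFO decides which queue receives the line
        if addr in self.aux:
            self._insert(self.lru, self.lru_size, addr, data)
        else:
            self._insert(self.fifo, self.fifo_size, addr, data)
        return False

    def _insert(self, queue, size, addr, data):
        if len(queue) >= size:
            old, _ = queue.popitem(last=False)   # evict the oldest line
            self.aux.append(old)                 # remember its address
        queue[addr] = data
```

The auxiliary FIFO stores only addresses, so re-reference history costs far less memory than keeping the evicted data itself; lines that prove reusable graduate from FIFO to LRU management.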
-
Patent number: 7281083
Abstract: According to embodiments of the present invention, a network processor includes a content addressable memory (CAM) unit having CAM arranged in banks and sharable among microengines. In one embodiment, a mask value is used to select/enable one group of CAM banks and to deselect/disable another group of CAM banks. A tag may be looked up in the selected/enabled CAM banks based on the mask value. Upon a "miss," the CAM banks provide the least recently used (LRU) entry. An LRU entry re-election tree may re-elect the LRU entry from among all the CAM banks.
Type: Grant
Filed: June 30, 2004
Date of Patent: October 9, 2007
Assignee: Intel Corporation
Inventor: Tomasz B. Madajczak
-
Patent number: 7275135
Abstract: An apparatus and method to de-allocate data in a cache memory is disclosed. Using a clock that has a predetermined number of periods, the invention provides usage timeframe information to approximate the usage information. De-allocation decisions can then be made based on the usage timeframe information.
Type: Grant
Filed: August 31, 2001
Date of Patent: September 25, 2007
Assignee: Intel Corporation
Inventor: Richard L. Coulson
-
Patent number: 7263587
Abstract: A unified memory controller (UMC) is disclosed. The UMC may be used in a digital television (DTV) receiver. The UMC allows the DTV receiver to use a unified memory. The UMC accepts memory requests from various clients, and determines which requests should receive priority access to the unified memory.
Type: Grant
Filed: June 25, 2004
Date of Patent: August 28, 2007
Assignee: Zoran Corporation
Inventors: Gerard Yeh, Ravi Manyam, Viet Nguyen
-
Patent number: 7260679
Abstract: A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if its size exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
Type: Grant
Filed: October 12, 2004
Date of Patent: August 21, 2007
Assignee: International Business Machines Corporation
Inventors: Michael T. Benhase, Binny S. Gill, Thomas C. Jarvis, Dharmendra S. Modha
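The adaptive feedback loop above (a hit on the low-reuse list grows its target size, a hit on the high-reuse list shrinks it, and demotion comes from whichever list is over its target) can be sketched as follows. How tracks are classified as low- or high-reuse is left to the caller, and all names and the initial target are assumptions:

```python
# Sketch of the two-LRU-list policy with an adaptive target size for the
# low-reuse list. Classification of tracks is supplied by the caller.
from collections import OrderedDict

class TwoListCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.target_low = capacity // 2    # target size of the low-reuse list
        self.low = OrderedDict()           # tracks with low reuse potential
        self.high = OrderedDict()          # tracks with high reuse potential

    def access(self, track, low_reuse):
        if track in self.low:              # hit on low list: grow its target
            self.low.move_to_end(track)
            self.target_low = min(self.capacity, self.target_low + 1)
        elif track in self.high:           # hit on high list: shrink it
            self.high.move_to_end(track)
            self.target_low = max(0, self.target_low - 1)
        else:                              # miss: insert per classification
            (self.low if low_reuse else self.high)[track] = None
            self._balance()

    def _balance(self):
        while len(self.low) + len(self.high) > self.capacity:
            # demote from the low-reuse list only while it exceeds target
            if len(self.low) > self.target_low and self.low:
                self.low.popitem(last=False)
            else:
                self.high.popitem(last=False)
```

The effect is that cache space drifts toward whichever list is actually producing hits, instead of being split by a fixed ratio.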
-
Patent number: 7240157
Abstract: A system and methods are shown for handling multiple target memory requests. Memory read requests generated by a peripheral component interconnect (PCI) client are received by a PCI bus controller. The PCI bus controller passes the memory request to a memory controller used to access main memory. The memory controller passes the memory request to a bus interface unit used to access cache memory and a processor. The bus interface unit determines if cache memory can be used to provide the data associated with the PCI client's memory request. While the bus interface unit determines if cache memory may be used, the memory controller continues to process the memory request to main memory. If cache memory can be used, the bus interface unit provides the data to the PCI client and sends a notification to the memory controller.
Type: Grant
Filed: September 26, 2001
Date of Patent: July 3, 2007
Assignee: ATI Technologies, Inc.
Inventors: Michael Frank, Santiago Fernandez-Gomez, Robert W. Laker, Aki Niimura
-
Patent number: 7185028
Abstract: To improve, with respect to security in the case of failures and access to data files following a failure, a data processing unit comprising a data network, a file server integrated into the data network and having a separate server data memory, and at least one primary data file system in which data files stored on the server data memory are filed, it is suggested that a primary hierarchical memory management divide the data files of the primary data file system into at least two primary activity groups of different hierarchical ranking in accordance with a primary activity criterion, that the memory management copy at least the data files of the lowest-ranking primary activity group into at least one secondary data file system on a data memory of a data storage unit positioned subsequent to the server data memory, and that the memory management generate metadata from the copied data files of the lowest-ranking primary activity group.
Type: Grant
Filed: March 11, 2003
Date of Patent: February 27, 2007
Assignee: Grau Data Storage AG
Inventor: Ulrich Lechner
-
Patent number: 7184320
Abstract: A semiconductor disk wherein a flash memory into which data is rewritten in block units is employed as the storage medium, the semiconductor disk including a data memory in which file data are stored, a substitutive memory which substitutes for error blocks in the data memory, an error memory in which error information of the data memory is stored, and a memory controller which reads data out of, writes data into, and erases data from the data memory, the substitutive memory and the error memory. Since the write errors of the flash memory can be remedied, the service life of the semiconductor disk can be increased.
Type: Grant
Filed: June 28, 2005
Date of Patent: February 27, 2007
Assignee: Renesas Technology Corp.
Inventors: Hajime Yamagami, Kouichi Terada, Yoshihiro Hayashi, Takashi Tsunehiro, Kunihiro Katayama, Kenichi Kaki, Takeshi Furuno
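The substitutive-memory idea amounts to bad-block remapping: a failed write is redirected to a spare block and the remapping is recorded in the error memory. The class below is a hedged sketch of that structure (block sizes, the failure model, and all names are assumptions):

```python
# Sketch of a flash "semiconductor disk" with a substitutive memory
# for error blocks and an error memory holding the remap table.

class SemiconductorDisk:
    def __init__(self, data_blocks, spare_blocks, bad_blocks=()):
        self.data = [None] * data_blocks     # data memory
        self.spare = [None] * spare_blocks   # substitutive memory
        self.error_map = {}                  # error memory: bad block -> spare index
        self._bad = set(bad_blocks)          # blocks that fail on write (model)
        self._next_spare = 0

    def write(self, block, payload):
        if block in self._bad:               # write error detected
            if block not in self.error_map:  # remap to a fresh spare block
                self.error_map[block] = self._next_spare
                self._next_spare += 1
            self.spare[self.error_map[block]] = payload
        else:
            self.data[block] = payload

    def read(self, block):
        # Reads consult the error memory first, so remapping is
        # transparent to the caller.
        if block in self.error_map:
            return self.spare[self.error_map[block]]
        return self.data[block]
```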
-
Patent number: 7167952
Abstract: A method of writing to a cache, including initiating a write operation to the cache. In a first operational mode, the presence or absence of a write miss is detected; if a write miss is absent, the data is written to the cache, and if a write miss is present, the data is retrieved from a further memory and written to the cache based on least-recently-used logic. In a second operational mode, the cache is placed in a memory mode and the data is written to the cache based on an address, regardless of whether a write miss is present or absent.
Type: Grant
Filed: September 17, 2003
Date of Patent: January 23, 2007
Assignee: International Business Machines Corporation
Inventors: Krishna M. Desai, Anil S. Keste, Tin-chee Lo, Thomas D. Needham, Yuk-Ming Ng, Jeffrey M. Turner
-
Patent number: 7165188
Abstract: A method for managing a long-running process carried out upon a plurality of disks is disclosed. A registry is established, the registry having a plurality of entries, each entry corresponding to one of the plurality of disks, each entry having a value indicative of the respective time at which its corresponding disk was last acted upon by the long-running process. The long-running process executes on each of the disks in an order in which the disk having the oldest last acted-upon time is processed first and the disk having the newest last acted-upon time is processed last.
Type: Grant
Filed: January 31, 2005
Date of Patent: January 16, 2007
Assignee: Network Appliance, Inc.
Inventors: Steven H. Rodrigues, Rajesh Sundaram
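The oldest-first ordering is simply a sort of the registry by last-acted-upon time; a minimal sketch (the registry layout and function names are assumptions) looks like this:

```python
# Sketch of stalest-first scheduling over a disk registry.

def processing_order(registry):
    """registry: dict mapping disk id -> last acted-upon time.
    Returns disk ids sorted so the stalest disk comes first."""
    return sorted(registry, key=registry.get)

def run_long_running_process(registry, now, action):
    """Visit every disk oldest-first, recording the new
    last acted-upon time after each visit."""
    for disk in processing_order(registry):
        action(disk)
        registry[disk] = now
```

Updating the registry as each disk is processed is what keeps the ordering fair across successive runs of the process.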
-
Patent number: 7154805
Abstract: A semiconductor disk wherein a flash memory into which data is rewritten in block units is employed as the storage medium, the semiconductor disk including a data memory in which file data are stored, a substitutive memory which substitutes for error blocks in the data memory, an error memory in which error information of the data memory is stored, and a memory controller which reads data out of, writes data into, and erases data from the data memory, the substitutive memory and the error memory. Since the write errors of the flash memory can be remedied, the service life of the semiconductor disk can be increased.
Type: Grant
Filed: March 22, 2005
Date of Patent: December 26, 2006
Assignee: Renesas Technology Corp.
Inventors: Hajime Yamagami, Kouichi Terada, Yoshihiro Hayashi, Takashi Tsunehiro, Kunihiro Katayama, Kenichi Kaki, Takeshi Furuno
-
Patent number: 7155623
Abstract: A method and system for power management including local bounding of device group power consumption provides the responsiveness of local power control while meeting global system power consumption and power dissipation limits. At the system level, a global power bound is determined and divided among groups of devices in the system so that local bounds are determined that meet the global system bound. The local bounds are communicated to device controllers associated with each group of devices, and the device controllers control the power management states of the associated devices in the group to meet the local bound. Thus, by action of all of the device controllers, the global bound is met. The controllers may be memory controllers and the devices memory modules, or the devices may be other devices within a processing system having associated local controllers.
Type: Grant
Filed: December 3, 2003
Date of Patent: December 26, 2006
Assignee: International Business Machines Corporation
Inventors: Charles R. Lefurgy, Eric Van Hensbergen
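The division of a global bound into local bounds, and local enforcement of each bound, can be sketched as follows; the proportional split and the throttling-by-scaling model are assumptions (the abstract does not fix either policy):

```python
# Sketch of hierarchical power bounding: the global bound is split
# into local bounds that sum to it, and each device controller
# enforces its local bound independently.

def divide_power_bound(global_bound, groups):
    """Split global_bound across groups proportionally to each
    group's device count, so the local bounds sum to the global one."""
    total_devices = sum(groups.values())
    return {g: global_bound * n / total_devices for g, n in groups.items()}

def enforce_local_bound(device_draws, local_bound):
    """Device controller: throttle devices (modeled here as scaling
    their draw) until the group's total meets its local bound."""
    total = sum(device_draws)
    if total <= local_bound:
        return device_draws
    scale = local_bound / total
    return [d * scale for d in device_draws]
```

Because each local bound is met by its own controller, the global bound holds without any controller needing a system-wide view, which is the responsiveness the abstract claims.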
-
Patent number: 7155584
Abstract: Methods and systems for operating automotive computing devices are described. In one embodiment, multiple object store pages are maintained in device SRAM that is configured to be battery backed in the event of a power loss. One or more object store pages are periodically flushed to device non-volatile memory to make room for additional object store pages. The frequency of object store page writes is tracked, and object store pages that are least frequently written to are flushed before object store pages that are more frequently written to. In addition, in the event of a power loss, the SRAM is battery backed.
Type: Grant
Filed: November 10, 2004
Date of Patent: December 26, 2006
Assignee: Microsoft Corporation
Inventors: Richard Dennis Beckert, Sharon Drasnin, Ronald Otto Radko
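The flush policy described above is essentially least-frequently-written eviction from SRAM to non-volatile memory; a minimal sketch (page identifiers, the counter scheme, and the dict-backed "flash" are assumptions) might look like:

```python
# Sketch of write-frequency-based flushing: SRAM holds object store
# pages, and when it fills, the least frequently written page is
# flushed to non-volatile memory.

class ObjectStoreSRAM:
    def __init__(self, capacity, flash):
        self.capacity = capacity
        self.flash = flash        # non-volatile backing store (dict)
        self.pages = {}           # page id -> data, resident in SRAM
        self.write_counts = {}    # page id -> number of writes observed

    def write(self, page, data):
        if page not in self.pages and len(self.pages) >= self.capacity:
            self._flush_one()
        self.pages[page] = data
        self.write_counts[page] = self.write_counts.get(page, 0) + 1

    def _flush_one(self):
        # Flush the resident page that is written to least frequently;
        # hot pages stay in SRAM where writes are cheap.
        victim = min(self.pages, key=lambda p: self.write_counts[p])
        self.flash[victim] = self.pages.pop(victim)
```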
-
Patent number: 7149226
Abstract: A method and apparatus for processing data packets, including generating an enqueue command specifying a queue descriptor associated with a new buffer. The queue descriptor is part of a cache of queue descriptors, each having a head pointer pointing to the first buffer in a queue of buffers and a tail pointer pointing to the last buffer in the queue. The first buffer has a buffer pointer pointing to the next buffer in the queue. The buffer pointer associated with the last buffer, and the tail pointer, are set to point to the new buffer.
Type: Grant
Filed: February 1, 2002
Date of Patent: December 12, 2006
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein
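The enqueue operation is a standard linked-list append through the descriptor's tail pointer; a sketch of that step (the class layout is an assumption, the pointer updates follow the abstract) is:

```python
# Sketch of enqueue via a queue descriptor: the old tail's buffer
# pointer and the descriptor's tail pointer are both set to the
# new buffer.

class Buffer:
    def __init__(self, data):
        self.data = data
        self.next = None   # buffer pointer to the next buffer in the queue

class QueueDescriptor:
    def __init__(self):
        self.head = None   # points to the first buffer in the queue
        self.tail = None   # points to the last buffer in the queue

    def enqueue(self, new_buffer):
        if self.tail is None:            # empty queue: new buffer is also head
            self.head = new_buffer
        else:
            self.tail.next = new_buffer  # old last buffer -> new buffer
        self.tail = new_buffer           # tail pointer -> new buffer
```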
-
Patent number: 7136949
Abstract: A method and apparatus for position dependent data scheduling for communication of data for different domains along a bus is provided. Having an awareness of the relative position of different domains along a bus, one embodiment of the present disclosure schedules bus operations to allow data from multiple bus operations to be simultaneously present on the bus while preventing interference among the data. The present disclosure is compatible with buses having a termination on one end and those having terminations on both ends. In accordance with one embodiment of the present disclosure, bus operations are scheduled so that first data of a first bus operation involving a first domain are not present at domains involved in a second bus operation at times that would result in interference with second data of the second bus operation.
Type: Grant
Filed: March 11, 2005
Date of Patent: November 14, 2006
Assignee: Rambus Inc.
Inventor: Craig Hampel
-
Patent number: 7120751
Abstract: A streaming media cache comprises a mass storage device configured to store streaming media data; a cache memory coupled to the mass storage device, configured to store a subset of the streaming media data in a plurality of locations and to provide that subset to a processor; and a processor coupled to the mass storage device and to the cache memory. The processor is configured to use a first retirement algorithm to determine a first location within the cache memory that is to be retired, to copy data from the mass storage device to the first location, to monitor a cache memory age that is determined from the age of data in at least a second location within the cache memory, to use a second retirement algorithm to determine a third location to be retired when the cache memory age falls below a threshold age, and to copy data …
Type: Grant
Filed: August 9, 2002
Date of Patent: October 10, 2006
Assignee: Networks Appliance, Inc.
Inventors: Yasahiro Endo, Konstantinos Roussos
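The switch between two retirement algorithms based on a monitored cache age can be sketched as below. This is a heavily hedged reading of the abstract: the choice of LRU as the "first" algorithm, oldest-insertion-first as the "second", and the `probe_loc` parameter are all assumptions layered on top of it.

```python
# Sketch of a dual retirement policy gated by cache memory age.

def pick_retirement_location(entries, now, threshold_age, probe_loc):
    """entries maps a cache location to {'last_access': t, 'inserted': t}.
    The monitored cache memory age is taken from the data at probe_loc
    (standing in for the abstract's "second location"); when that age
    falls below threshold_age, the second algorithm takes over."""
    cache_age = now - entries[probe_loc]["inserted"]
    if cache_age < threshold_age:
        # second retirement algorithm: retire the oldest-inserted data
        return min(entries, key=lambda loc: entries[loc]["inserted"])
    # first retirement algorithm: plain LRU on last-access time
    return min(entries, key=lambda loc: entries[loc]["last_access"])
```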