Partitioned Cache Patents (Class 711/129)
-
Patent number: 7437513
Abstract: An improvement in performance and a reduction of power consumption in a cache memory can both be effectively realized by increasing or decreasing the number of operated ways in accordance with access patterns. A hit determination unit determines the hit way when a cache access hit occurs. A way number increase/decrease determination unit manages, for each of the ways that are in operation, the order from the way for which the time of use is most recent to the way for which the time of use is oldest. The way number increase/decrease determination unit then finds the rank of the hit way obtained in the hit determination unit and counts the number of hits for each rank in the order. The way number increase/decrease determination unit further determines increase or decrease of the number of operated ways based on the access pattern that is indicated by the relation of the number of hits to each rank in the order.
Type: Grant
Filed: April 28, 2005
Date of Patent: October 14, 2008
Assignee: NEC Corporation
Inventors: Yasumasa Saida, Hiroaki Kobayashi
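The core decision here — grow or shrink the active way count from a histogram of hits per LRU rank — can be sketched in a few lines. This is a toy Python simulation; the function name and threshold values are illustrative assumptions, not details from the patent.

```python
def adjust_way_count(rank_hits, active_ways, max_ways,
                     grow_thresh=0.10, shrink_thresh=0.01):
    """Decide the next number of active ways from a histogram mapping
    LRU rank (0 = most recently used) to hit count for the last interval."""
    total = sum(rank_hits.values())
    if total == 0:
        return active_ways
    # Fraction of hits landing in the least-recently-used active way:
    # many hits there suggest the working set spills past the active ways.
    tail = rank_hits.get(active_ways - 1, 0) / total
    if tail > grow_thresh and active_ways < max_ways:
        return active_ways + 1   # power up another way
    if tail < shrink_thresh and active_ways > 1:
        return active_ways - 1   # LRU way barely used: power one down
    return active_ways
```

For example, with 4 active ways, 20% of hits at LRU rank 3 would grow the cache to 5 ways, while no hits at rank 3 would shrink it to 3.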
-
Patent number: 7437514
Abstract: A cache system is provided which includes a cache memory and a cache refill mechanism which allocates one or more of a set of cache partitions in the cache memory to an item in dependence on the address of the item in main memory. This is achieved in one of the described embodiments by including with the address of an item a set of partition selector bits which allow a partition mask to be generated to identify into which cache partition the item may be loaded.
Type: Grant
Filed: July 26, 2007
Date of Patent: October 14, 2008
Assignee: STMicroelectronics Limited
Inventors: Andrew C. Sturges, David May
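One way to realize the selector-bits-to-partition-mask idea is a small bit-mask computation. The bit positions, partition count, and ways-per-partition layout below are illustrative assumptions for a sketch, not values from the patent.

```python
def partition_mask(address, num_partitions=4, selector_shift=30):
    """Derive a one-hot partition mask from selector bits carried in the
    upper bits of an item's address (bit positions are illustrative)."""
    selector = (address >> selector_shift) & (num_partitions - 1)
    return 1 << selector

def allowed_ways(mask, ways_per_partition=2):
    """Expand a partition mask into the set of cache ways a refill may use."""
    ways = []
    for p in range(mask.bit_length()):
        if mask & (1 << p):
            ways.extend(range(p * ways_per_partition,
                              (p + 1) * ways_per_partition))
    return ways
```

A refill for an address whose selector bits decode to partition 2 would then be confined to ways 4 and 5 in this layout.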
-
Patent number: 7437512
Abstract: A semi-trace cache combines elements and features of an instruction cache and a trace cache. An ICache portion of the semi-trace cache is filled with instructions fetched from the next level of the memory hierarchy while a TCache portion is filled with traces gleaned either from the actual stream of retired instructions or predicted before execution.
Type: Grant
Filed: February 26, 2004
Date of Patent: October 14, 2008
Assignee: Marvell International Ltd.
Inventor: Michael W. Morrow
-
Patent number: 7434247
Abstract: The desirability of programming events may be determined using metadata for programming events that includes goodness-of-fit scores associated with categories of a classification hierarchy and one or more of descriptive data and keyword data. The programming events are ranked in accordance with the viewing preferences of viewers as expressed in one or more viewer profiles. The viewer profiles may each include preference scores associated with categories of the classification hierarchy and may also include one or more keywords. Ranking is performed through category matching and keyword matching using the contents of the metadata and the viewer profiles. The viewer profile keywords may be qualified keywords that are associated with specific categories of the classification hierarchy. The ranking may be performed such that qualified keyword matches generally rank higher than keyword matches, and keyword matches generally rank higher than category matches.
Type: Grant
Filed: March 28, 2005
Date of Patent: October 7, 2008
Assignee: Meevee, Inc.
Inventors: Gil Gavriel Dudkiewicz, Dale Kittrick Hitt, Jonathan Percy Barker
-
Patent number: 7434001
Abstract: A method of accessing cache memory for parallel-processing processors includes providing a processor and a lower-level memory unit. The processor utilizes multiple instruction processing members and multiple sub-cache memories corresponding to the instruction processing members. The next step is using a first instruction processing member to access a first sub-cache memory. The first instruction processing member accesses the remaining sub-cache memories when it does not find the desired data in the first sub-cache memory. When the desired data is not found in any of the sub-cache memories, the first instruction processing member accesses the lower-level memory unit until the desired data has been accessed. Then, the instruction processing member returns a result.
Type: Grant
Filed: August 23, 2006
Date of Patent: October 7, 2008
Inventor: Shi-Wu Lo
-
Publication number: 20080244183
Abstract: An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volumes, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively.
Type: Application
Filed: October 26, 2007
Publication date: October 2, 2008
Inventors: Atushi Ishikawa, Yuko Matsui
-
Patent number: 7428616
Abstract: An information processing apparatus has a CPU, a memory, a cache memory and a cache controller. When an acquisition of an area of a prescribed size is requested in the memory, a size equivalent to at least two lines serving as a cache unit is added to the prescribed size requested and this area is reserved in the memory. The area reserved is allocated to an uncacheable memory area of this memory.
Type: Grant
Filed: January 28, 2005
Date of Patent: September 23, 2008
Assignee: Canon Kabushiki Kaisha
Inventors: Takeshi Ogawa, Takeo Sakimura
-
Patent number: 7418479
Abstract: A security infrastructure and methods are presented that inhibit the ability of a malicious node from disrupting the normal operations of a peer-to-peer network. The methods of the invention allow both secure and insecure identities to be used by nodes by making them self-verifying. When necessary or opportunistic, ID ownership is validated by piggybacking the validation on existing messages. The probability of connecting initially to a malicious node is reduced by randomly selecting to which node to connect. Further, information from malicious nodes is identified and can be disregarded by maintaining information about prior communications that will require a future response. Denial-of-service attacks are inhibited by allowing the node to disregard requests when its resource utilization exceeds a predetermined limit. The ability for a malicious node to remove a valid node is reduced by requiring that revocation certificates be signed by the node to be removed.
Type: Grant
Filed: March 15, 2006
Date of Patent: August 26, 2008
Assignee: Microsoft Corporation
Inventors: Rohit Gupta, Alexandru Gavrilescu, John L. Miller, Graham A. Wheeler
-
Patent number: 7415575
Abstract: A cache shared by multiple clients implements a client-specific policy for replacing entries in the event of a cache miss. A request from any client can hit any entry in the cache. For purposes of replacing entries, at least one of the clients is restricted, and when a cache miss results from a request by the restricted client, the entry to be replaced is selected from a fixed subset of the cache entries. When a cache miss results from a request by any client other than the restricted client, any cache entry, including a restricted entry, can be selected to be replaced.
Type: Grant
Filed: December 8, 2005
Date of Patent: August 19, 2008
Assignee: NVIDIA Corporation
Inventors: Peter C. Tong, Colyn S. Case
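The asymmetric replacement rule — the restricted client evicts only from its fixed subset, while other clients may evict anywhere — reduces to one branch in the victim-selection path. A minimal Python sketch; the client names, entry count, and subset bounds are illustrative assumptions.

```python
import random

def pick_victim(requesting_client, rng, num_entries=16,
                restricted_client="B", restricted_subset=range(12, 16)):
    """Select a victim entry on a cache miss. The restricted client may only
    evict entries in its fixed subset; any other client may evict any entry,
    including a restricted one. (Random choice stands in for whatever
    replacement heuristic ranks candidates within the allowed set.)"""
    if requesting_client == restricted_client:
        return rng.choice(list(restricted_subset))
    return rng.randrange(num_entries)
```

Note that lookups remain unrestricted — only the *replacement* candidate set depends on which client missed.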
-
Publication number: 20080162820
Abstract: Methods and systems are presented for custom caching. Application threads define caches. The caches may be accessed through multiple index keys, which are mapped to multiple application thread-defined keys. Methods provide for each index key and each application thread-defined key to be symmetrical. The index keys are used for loading data from one or more data sources into the cache stores on behalf of the application threads. Application threads access the data from the cache store by providing references to the caches and the application-supplied keys. Some data associated with some caches may be shared from the cache store by multiple application threads. Additionally, some caches are exclusively accessed by specific application threads.
Type: Application
Filed: December 27, 2007
Publication date: July 3, 2008
Inventors: Christopher J. Kasten, Greg Seitz
-
Patent number: 7392347
Abstract: In one embodiment, the present invention is directed to a system for processing memory transaction requests. The system includes a controller for storing and retrieving cache lines and a buffer communicatively coupled to the controller and at least one bus. The controller formats cache lines into a plurality of portions, implements an error correction code (ECC) scheme to correct a single-byte error in an ECC code word for pairs of the plurality of portions, and stores respective pairs of the plurality of portions such that each single byte of the respective pairs is stored in a single one of a plurality of memory components. When the controller processes a memory transaction request that modifies tag data without modifying cache line data, the buffer calculates new ECC data utilizing previous ECC data, previous tag data, and the new tag data without requiring communication of cache line data.
Type: Grant
Filed: May 10, 2003
Date of Patent: June 24, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: Theodore C. Briggs
-
Patent number: 7392268
Abstract: Systems and methods for partitioning information across multiple storage devices in a web server environment. The system comprises a web server database which includes information related to creating a web site. The information is divided into partitions within the database. One of the partitions includes user information and another of the partitions includes content for the web site. Portions of the content for the web site are replicated and maintained within the partition including the user information. Further, a portion of the user information is replicated and maintained in the partition where the content for the web site is maintained. The methods include dividing information into partitions, de-normalizing the received data and replicating the data portions into the various web site locations.
Type: Grant
Filed: September 19, 2002
Date of Patent: June 24, 2008
Assignee: The Generations Network, Inc.
Inventors: Todd Hardman, James Ivie, Michael Mansfield, Greg Parkinson, Daren Thayne, Mark Wolfgramm, Michael Wolfgramm, Brandt Redd
-
Publication number: 20080147976
Abstract: Achieving better uniformity of temperature on an integrated circuit while performing burn-in can result in reduced burn-in time and more uniform acceleration. One way to achieve better temperature uniformity is to control dynamic power in the core and cache by operating at different frequencies and increasing switching activity in the cache(s) during burn-in by changing operation of the cache so that during burn-in a plurality of memory locations in the cache(s) are accessed simultaneously, thereby increasing activity in the cache to achieve higher power utilization in the cache during burn-in.
Type: Application
Filed: December 13, 2006
Publication date: June 19, 2008
Inventors: Michael D. Bienek, Victor F. Andrade, Randal L. Posey, Michael C. Braganza
-
Patent number: 7389382
Abstract: A technique is described for facilitating block level access operations to be performed at a remote volume via a wide area network (WAN). The block level access operations may be initiated by at least one host which is a member of a local area network (LAN). The LAN includes a block cache mechanism configured or designed to cache block data in accordance with a block level protocol. A block level access request is received from a host on the LAN. In response to the block level access request, a portion of block data may be cached in the block cache mechanism using a block level protocol. In at least one implementation, portions of block data in the block cache mechanism may be identified as "dirty" data which has not yet been stored in the remote volume. Block level write operations may be performed over the WAN to cause the identified dirty data in the block cache mechanism to be stored at the remote volume.
Type: Grant
Filed: June 8, 2005
Date of Patent: June 17, 2008
Assignee: Cisco Technology, Inc.
Inventors: Dave Thompson, Timothy Kuik, Mark Bakke
-
Patent number: 7386672
Abstract: An apparatus and method provide persistent data during a user session on a networked computer system. A global data cache is divided into three sections: trusted, protected, and unprotected. An authorization mechanism stores and retrieves authorization data from the trusted section of the global data cache. A common session manager stores and retrieves data from the protected and unprotected sections of the global data cache. Using the authorization mechanism, software applications may verify that a user is authorized without prompting the user for authorization information. Using the common session manager, software applications may store and retrieve data to and from the global data cache, allowing the sharing of data during a user session. After the user session terminates, the data in the global data cache corresponding to the user session is invalidated.
Type: Grant
Filed: August 29, 2002
Date of Patent: June 10, 2008
Assignee: International Business Machines Corporation
Inventor: James Casazza
-
Publication number: 20080133839
Abstract: Cache management strategies are described for retrieving information from a storage medium, such as an optical disc, using a cache memory including multiple cache segments. A first group of cache segments can be devoted to handling the streaming transfer of a first type of information, and a second group of cache segments can be devoted to handling the bulk transfer of a second type of information. A host system can provide hinting information that identifies which group of cache segments a particular read request targets. A circular wrap-around fill strategy can be used to iteratively supply new information to the cache segments upon cache hits by performing pre-fetching. Various eviction algorithms can be used to select a cache segment for flushing and refilling upon a cache miss, such as a least recently used (LRU) algorithm or a least frequently used (LFU) algorithm.
Type: Application
Filed: February 11, 2008
Publication date: June 5, 2008
Applicant: Microsoft Corporation
Inventors: Brian L. Schmidt, Jonathan E. Lange, Timothy R. Osborne
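The segment-eviction step described here — choosing which cache segment to flush and refill on a miss under an LRU or LFU policy — can be sketched directly. A toy Python version; the segment representation (a dict of per-segment `last_used` ticks and `uses` counts) is an illustrative assumption.

```python
def choose_segment_to_flush(segments, policy="lru"):
    """Pick which cache segment to flush and refill on a cache miss.
    `segments` maps a segment id to {"last_used": tick, "uses": count}.
    LRU evicts the segment touched longest ago; LFU the one touched least."""
    if policy == "lru":
        return min(segments, key=lambda s: segments[s]["last_used"])
    if policy == "lfu":
        return min(segments, key=lambda s: segments[s]["uses"])
    raise ValueError(f"unknown policy: {policy}")
```

The two policies can disagree: a segment hit many times long ago is the LRU victim but not the LFU victim.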
-
Patent number: 7379442
Abstract: So-called LCH packets are defined in the Hiperlan Type 2 system for wire-free transmission of video and audio data streams. These LCH packets have a length of 54 data bytes. Furthermore, the Hiperlan/2 standard provides for so-called ARQ messages to be sent back to the transmitter in an SCH packet in a QOS mode (Quality of Service), in which all the LCH data packets must be confirmed by the receiver. Space for the LCH and SCH data packets must be provided in a buffer store in the Hiperlan/2 interface for each connection that is set up. When there is a possibility of several hundred connections having been set up, separate reservation of memory areas for LCH and SCH packets would involve considerable complexity for the memory organization. The invention proposes that only one common area be reserved for LCH and SCH packets in the buffer store. The section which is provided for each LCH packet is of such a size that it corresponds to a value 2^n where n ∈ {0, 1, 2, 3, …}.
Type: Grant
Filed: September 19, 2002
Date of Patent: May 27, 2008
Assignee: Thomson Licensing
Inventors: Malte Borsum, Klaus Gaedke, Thomas Brune
-
Patent number: 7380047
Abstract: A memory system and method includes a cache having a filtered portion and an unfiltered portion. The unfiltered portion is divided into block-sized components, and the filtered portion is divided into sub-block-sized components. Blocks evicted from the unfiltered portion have selected sub-blocks thereof cached in the filtered portion for servicing requests.
Type: Grant
Filed: September 30, 2004
Date of Patent: May 27, 2008
Assignee: International Business Machines Corporation
Inventors: Philip George Emma, Allan Mark Hartstein, Thomas Roberts Puzak, Moinuddin Khalil Ahmed Qureshi
-
Patent number: 7380063
Abstract: Portions of a cache are flushed in stages. An exemplary flushing of the present invention comprises flushing a first portion, performing operations other than a flush, and then flushing a second portion of the cache. The first portion may be disabled after it is flushed. The cache may be functionally divided into portions prior to a flush, or the portions may be determined in part by an abort signal. The operations may access either the cache or the memory. The operations may involve direct memory access or interrupt servicing.
Type: Grant
Filed: June 20, 2006
Date of Patent: May 27, 2008
Assignee: Intel Corporation
Inventors: John W. Horrigan, Namasivayam Thangavelu, George Vargese, Brian Holscher
-
Patent number: 7376791
Abstract: A memory system is described. A processor provides a data access address, and selectively configures a selected number of the ways of a memory device as cache memory belonging to a cacheable region, and configures remaining ways as directly addressable memory belonging to a directly addressable region by the memory configuration information stored in control registers. A cache hit detection circuit includes an address register storing the data access address, tag memories storing tag data of the data access address, a data processing device selectively outputting the tag data or an adjusted tag data as processed data according to a direct address signal, and address comparators each comparing the processed data with portion bits of the data access address from the address register, and outputting an address match signal as comparison match. The tag data is adjusted by the data processing device to a predetermined address, which is the highest address of the memory space of the processor.
Type: Grant
Filed: December 20, 2005
Date of Patent: May 20, 2008
Assignee: Mediatek Inc.
Inventors: Ting-Cheng Hsu, Yen-Yu Lin
-
Patent number: 7370150
Abstract: A processing system optimized for data string manipulations includes data string execution circuitry associated with a bus interface unit or memory controller. Cache coherency is maintained, and data move and compare operations may be performed efficiently on cached data. A barrel shifter for realignment of cached data during move operations and comparators for comparing a test data string to cached data a cache line at a time may be provided.
Type: Grant
Filed: November 26, 2003
Date of Patent: May 6, 2008
Assignee: Micron Technology, Inc.
Inventor: Dean A. Klein
-
Patent number: 7363491
Abstract: A processor divides resources into secure resources and non-secure resources. Virtual-to-physical address translation page tables may be stored in either secure or non-secure memory.
Type: Grant
Filed: March 31, 2004
Date of Patent: April 22, 2008
Assignee: Intel Corporation
Inventor: Dennis M. O'Connor
-
Patent number: 7356648
Abstract: Buffer memories having hardware-controlled buffer space regions in which the hardware controls the dimensions of the various buffer space regions to meet the demands of a particular system. The hardware monitors the usage of the buffer space regions over time and then automatically adjusts the dimensions of the buffer space regions based on the utilization of those buffer regions.
Type: Grant
Filed: October 2, 2003
Date of Patent: April 8, 2008
Assignee: International Business Machines Corporation
Inventor: Robert A. Shearer
-
Publication number: 20080077741
Abstract: A dynamic memory management method and apparatus wherein an area of a memory is partitioned into a plurality of areas to form memory banks. The different priority classes share the memory banks. A policer (write controller) dynamically assigns input frame data of a plurality of classes having different degrees of priority to memory banks in accordance with the degrees of priority and stores the data there for each priority class. A scheduler (read controller) sequentially reads out the data from the frame data stored in the memory bank assigned to the class having the highest degree of priority and transmits the same. For storage of frame data of a priority class input in a burst-like manner, a plurality of memory banks are assigned to that priority class so as to raise the burst tolerance. By controlling writing and reading of data in units of memory banks, the control can be simplified. Due to this, the efficiency of usage of memory is improved and the write/read control is simplified.
Type: Application
Filed: July 30, 2007
Publication date: March 27, 2008
Applicant: FUJITSU LIMITED
Inventors: Takanori Yasui, Hideki Shiono, Masaki Hiromori, Hirofumi Fujiyama, Satoshi Tomie, Yasuhiro Yamauchi, Sadayoshi Handa
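The policer/scheduler pair described above amounts to priority-indexed queues: the write side appends a frame to its class's bank set, and the read side drains the highest-priority non-empty class first. A minimal Python sketch under the assumption that lower numbers mean higher priority; queue shapes and names are illustrative.

```python
from collections import deque

def enqueue(banks, frame, priority):
    """Policer (write controller): append a frame to the bank set
    assigned to its priority class (lower number = higher priority)."""
    banks.setdefault(priority, deque()).append(frame)

def dequeue(banks):
    """Scheduler (read controller): read from the highest-priority class
    that has pending frames; return None when everything is drained."""
    for prio in sorted(banks):
        if banks[prio]:
            return banks[prio].popleft()
    return None
```

A bursty class would map to a deque backed by several banks, which this sketch abstracts as unbounded queue capacity.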
-
Patent number: 7348988
Abstract: Provided are methods, systems and graphics processing apparatus for improving graphics system performance using an adaptive missing table in a multiple cache scheme, such that the table size is dependent on the completeness of the graphics data.
Type: Grant
Filed: May 6, 2005
Date of Patent: March 25, 2008
Assignee: Via Technologies, Inc.
Inventor: Jiangming Xu
-
Patent number: 7350050
Abstract: To provide a storage system partitioned into logical partitions.
Type: Grant
Filed: November 15, 2004
Date of Patent: March 25, 2008
Assignee: Hitachi, Ltd.
Inventors: Shuji Nakamura, Akira Fujibayashi
-
Publication number: 20080059711
Abstract: A method for managing a cache is disclosed. A context switch is identified. It is determined whether an application running after the context switch requires protection. Upon determining that the application requires protection, the cache is partitioned. According to an aspect of the present invention, a partitioned section of the cache is completely overwritten with data associated with the application. Other embodiments are described and claimed.
Type: Application
Filed: August 31, 2006
Publication date: March 6, 2008
Inventors: Francis X. McKeen, Leena K. Puthiyedath, Ernie Brickell, James B. Crossland
-
Publication number: 20080052456
Abstract: An apparatus, system, and method are disclosed for preventing write starvation in a storage controller with access to low performance storage devices. A storage device allocation module is included to assign a storage device write cache limit for each storage device accessible to a storage controller. The storage device write cache limit comprises a maximum amount of write cache of the storage controller available to a storage device for a write operation. At least one storage device comprises a low performance storage device and a total amount of storage available to the storage devices comprises an amount greater than a total storage capacity of the write cache. A low performance write cache limit module is included to set a low performance write cache limit. The low performance write cache limit comprises an amount of write cache available for use by the at least one low performance storage device for a write operation.
Type: Application
Filed: August 22, 2006
Publication date: February 28, 2008
Inventors: Kevin John Ash, Matthew Joseph Kalos, Robert Akira Kubo
-
Patent number: 7337273
Abstract: Cache management strategies are described for retrieving information from a storage medium, such as an optical disc, using a cache memory including multiple cache segments. A first group of cache segments can be devoted to handling the streaming transfer of a first type of information, and a second group of cache segments can be devoted to handling the bulk transfer of a second type of information. A host system can provide hinting information that identifies which group of cache segments a particular read request targets. A circular wrap-around fill strategy can be used to iteratively supply new information to the cache segments upon cache hits by performing pre-fetching. Various eviction algorithms can be used to select a cache segment for flushing and refilling upon a cache miss, such as a least recently used (LRU) algorithm or a least frequently used (LFU) algorithm.
Type: Grant
Filed: March 31, 2004
Date of Patent: February 26, 2008
Assignee: Microsoft Corporation
Inventors: Brian L. Schmidt, Jonathan E. Lange, Timothy R. Osborne
-
Patent number: 7336623
Abstract: A method for detecting and repairing cloud splits in a distributed system such as a peer-to-peer (P2P) system is presented. Nodes in a cloud maintain a multilevel cache of entries for a subset of nodes in the cloud. The multilevel cache is built on a circular number space, where each node in the cloud is assigned a unique identifier (ID). Nodes are recorded in levels of the cache according to the distance from the host node. The size of the cloud is estimated using the cache, and cloud-split tests are performed with a frequency inversely proportional to the size of the cloud. Cloud splits are initially detected by polling a seed server in the cloud for a node N having an ID equal to the host ID + 1. The request is redirected to another node in the cloud, and a best match for N is resolved. If the best match is closer to the host than any node in the host's cache, a cloud split is presumed.
Type: Grant
Filed: October 30, 2003
Date of Patent: February 26, 2008
Assignee: Microsoft Corporation
Inventor: Christian Huitema
-
Patent number: 7337276
Abstract: A computer implemented method, apparatus, and computer usable code for managing cache data. A partition identifier is associated with a cache entry in a cache, wherein the partition identifier identifies a last partition accessing the cache entry. The partition identifier associated with the cache entry is compared with a previous partition identifier located in a processor register in response to the cache entry being moved into a lower level cache relative to the cache. The cache entry is marked if the partition identifier associated with the cache entry matches the previous partition identifier located in the processor register to form a marked cache entry, wherein the marked cache entry is aged at a slower rate relative to an unmarked cache entry.
Type: Grant
Filed: August 11, 2005
Date of Patent: February 26, 2008
Assignee: International Business Machines Corporation
Inventors: Jos Accapadi, Andrew Dunshea, Greg R. Mewhinney, Mysore Sathyanaranyana Srinivas
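The observable effect of the marking scheme — entries last touched by the currently running partition age more slowly, so they survive longer across partition switches — can be modeled in a few lines. A toy Python sketch; the aging rates and entry representation are illustrative assumptions, not values from the patent.

```python
def age_entries(entries, current_partition, step=1.0, marked_step=0.5):
    """Advance the age of every cache entry one tick. Entries whose recorded
    partition identifier matches the currently running partition are 'marked'
    and age at a reduced rate (rates here are illustrative)."""
    for entry in entries:
        marked = entry["partition"] == current_partition
        entry["age"] += marked_step if marked else step
    return entries
```

After many ticks, an eviction policy that prefers the oldest entry would thus preferentially keep the running partition's working set.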
-
Patent number: 7334087
Abstract: A method of caching contextually variant objects in a common cache. The method can include identifying an object type for a requested object and determining whether the requested object has an object type which is specified among an enumerated set of cacheable object types which can be stored in the common cache. Importantly, each cacheable object type can have an associated context. If the requested object has an object type which is specified among the enumerated set of cacheable object types, a cache key can be computed for the requested object using cache key formulation rules for the associated context. Finally, the requested object can be retrieved from the common cache using the formulated cache key. Notably, in one aspect of the invention, the method also can include the step of invalidating individual objects in the common cache according to corresponding cache policies of associated contexts.
Type: Grant
Filed: February 8, 2005
Date of Patent: February 19, 2008
Assignee: International Business Machines Corporation
Inventors: Gennaro A. Cuomo, Brian Keith Martin, Donald F. Ferguson, Daniel C. Shupp, Goran D. Zlokapa
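The key-formulation step — look up the object type in the enumerated set, then build a cache key from the context fields that the type's rules name — can be sketched as follows. This is a hypothetical Python illustration; the rule shape (a mapping from object type to a list of context field names) and all names are assumptions.

```python
def make_cache_key(obj_type, obj_id, context, rules):
    """Compute a cache key for a contextually variant object. `rules` maps
    each cacheable object type to the context fields that participate in
    its key; a type absent from `rules` is not cacheable (returns None)."""
    fields = rules.get(obj_type)
    if fields is None:
        return None  # object type not in the enumerated cacheable set
    parts = [obj_type, str(obj_id)] + [f"{f}={context.get(f)}" for f in fields]
    return "|".join(parts)
```

Two requests for the same object in different contexts (say, different locales) thus get distinct keys and coexist in the common cache.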
-
Patent number: 7328324
Abstract: A data storage system configured for efficient operation in a single controller mode and to facilitate an upgrade from single controller operation to dual redundant active-active controller operation is provided. More particularly, a first controller having a segmented write cache is provided. The first segment of the write cache is associated with logical unit numbers (LUNs) owned by the first controller. The second segment is associated with LUNs that are designated as being owned by a second controller. During single controller operation, the segments of the write cache operate as primary write cache. The system may be converted to dual redundant controller operation by adding a second controller having a write cache segmented like the write cache of the first controller. Upon adding a second controller, primary control of the LUNs owned by or zoned to the second controller is taken over by the second controller.
Type: Grant
Filed: April 27, 2005
Date of Patent: February 5, 2008
Inventors: Yuanru Frank Wang, Paul Andrew Ashmore
-
Publication number: 20080028152
Abstract: Caching techniques for storing instructions, constant values, and other types of data for multiple software programs are described. A cache provides storage for multiple programs and is partitioned into multiple tiles. Each tile is assignable to one program. Each program may be assigned any number of tiles based on the program's cache usage, the available tiles, and/or other factors. A cache controller identifies the tiles assigned to the programs and generates cache addresses for accessing the cache. The cache may be partitioned into physical tiles. The cache controller may assign logical tiles to the programs and may map the logical tiles to the physical tiles within the cache. The use of logical and physical tiles may simplify assignment and management of the tiles.
Type: Application
Filed: July 25, 2006
Publication date: January 31, 2008
Inventors: Yun Du, Guofang Jiao, Chun Yu, De Dzwo Hsu
-
Patent number: 7321863
Abstract: The present invention provides methods for efficiently handling product availability queries. The present invention provides a local availability cache that is prepopulated with product availability listings from various product sources. Customer product availability queries are processed using the prepopulated availability cache, as opposed to independently querying each product source. The present invention also uses methods to manage the cache, such as by limiting the length-of-use data records stored for each start-of-use day to a maximum length of use and updating data in the query using a function that updates data for start dates of use that occur sooner in time more often than for start dates of use that occur later in time. The present invention also uses functions to determine availability for length-of-use requests that exceed the maximum length of use stored in the cache by piecing together availability information for smaller lengths of use.
Type: Grant
Filed: August 6, 2003
Date of Patent: January 22, 2008
Assignee: Travelocity.com LP
Inventors: Joshua Hartmann, DeWitt Clinton, Kishore Pallamreddy, Daniel Shtarkman
-
Patent number: 7320053
Abstract: A cache memory system may be organized as a set of numbered banks. If two clients need to access the cache, a contention situation may be resolved by a contention resolution process. The contention resolution process may be based on relative priorities of the clients.
Type: Grant
Filed: October 22, 2004
Date of Patent: January 15, 2008
Assignee: Intel Corporation
Inventors: Prasoonkumar Surti, Brian Ruttenberg, Aditya Navale
-
Patent number: 7320054
Abstract: Disclosed is a multiprocessor system in which, even if contention occurs when a common memory is accessed from each of a plurality of processors, the number of times the common memory is accessed is capable of being reduced. The common memory of the multiprocessor system is provided with a number of data areas that store data and with a control information area that stores control information indicating whether each of the data areas is in use, and each processor is provided with a storage unit equivalent to the common memory and with an access controller. The access controller of a processor that does not have access privilege monitors data and addresses that flow on the common bus, accepts data written to the common memory and data read from the common memory, and stores this data in the storage unit of its own processor, thereby storing content identical with that of the common memory.
Type: Grant
Filed: November 24, 2003
Date of Patent: January 15, 2008
Assignee: Fujitsu Limited
Inventors: Hirokazu Matsuura, Takao Murakami, Kazuya Uno
-
Publication number: 20080010413
Abstract: One embodiment of the present method and apparatus for application-specific dynamic cache placement includes grouping sets of data in a cache memory system into two or more virtual partitions and processing a load/store instruction in accordance with the virtual partitions, where the load/store instruction specifies at least one of the virtual partitions to which the load/store instruction is assigned.
Type: Application
Filed: July 7, 2006
Publication date: January 10, 2008
Inventors: Krishnan Kunjunny Kailas, Rajiv Alazhath Ravindran, Zehra Sura
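Indexing restricted to a named virtual partition can be sketched as below; the grouping of eight sets into two partitions is a made-up example, not the application's actual configuration.

```python
PARTITIONS = {0: list(range(0, 4)),   # virtual partition 0: sets 0-3
              1: list(range(4, 8))}   # virtual partition 1: sets 4-7

def set_index(block_addr, partition):
    # A load/store that names a virtual partition may only index into
    # the cache sets grouped under that partition.
    sets = PARTITIONS[partition]
    return sets[block_addr % len(sets)]
```

The same address maps to different sets depending on which partition the instruction names, so data placed for one application cannot displace another's.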
-
Patent number: 7310706
Abstract: A microprocessor includes random cache line refill ordering to lessen side channel leakage in a cache line and thus thwart cryptanalysis attacks such as timing attacks, power analysis attacks, and probe attacks. A random sequence generator is used to randomize the order in which memory locations are read when filling a cache line.
Type: Grant
Filed: May 10, 2002
Date of Patent: December 18, 2007
Assignee: MIPS Technologies, Inc.
Inventors: Morten Stribaek, Jakob Schou Jensen, Jean-Francois Dhem
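A software sketch of the randomized refill idea, with a software PRNG standing in for the hardware random sequence generator and an assumed 8-word line size:

```python
import random

LINE_WORDS = 8  # hypothetical cache line size in words

def refill_line(read_word, base_addr):
    # Fill the whole line, but visit the words in a random order so the
    # externally observable access sequence does not reveal which word
    # inside the line triggered the miss.
    order = list(range(LINE_WORDS))
    random.shuffle(order)  # stands in for the hardware random sequence generator
    line = [None] * LINE_WORDS
    for offset in order:
        line[offset] = read_word(base_addr + offset)
    return line
```

The final line contents are identical regardless of the visiting order; only the timing/power side channel changes.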
-
Patent number: 7302425
Abstract: Query results are pre-cached for a substantial portion of, or all, queries that are likely to be issued by users. One query can be entirely different from another, yet because the corresponding query results are pre-cached, the database need not be accessed, improving response performance. Pre-cached queries are also distributed into multiple partitions to apportion work among multiple computing machines, further enhancing performance and providing redundancy in case any particular partition fails. Pre-cached query results are selectively refreshed, focusing on queries that are popular as well as queries whose results are old, so that users may enjoy up-to-date information.
Type: Grant
Filed: June 9, 2003
Date of Patent: November 27, 2007
Assignee: Microsoft Corporation
Inventors: Simon D. Bernstein, Xiongjian Fu, Nishant Dani
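One common way to distribute pre-cached results across machines is hashing the normalized query text; this is a generic sketch of that idea, not the partitioning scheme the patent claims, and the partition count is arbitrary.

```python
import hashlib

NUM_PARTITIONS = 4  # hypothetical number of cache machines

def partition_for(query):
    # Deterministically spread pre-cached query results across machines
    # by hashing the normalized query text.
    digest = hashlib.sha1(query.strip().lower().encode("utf-8")).digest()
    return digest[0] % NUM_PARTITIONS
```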
-
Patent number: 7287122
Abstract: A method of managing a distributed cache structure having separate cache banks, by detecting that a given cache line has been repeatedly accessed by two or more processors which share the cache, and replicating that cache line in at least two separate cache banks. The cache line is optimally replicated in the cache bank having the lowest latency with respect to the given accessing processor. A currently accessed line in a different cache bank can be exchanged with a cache line in the lowest-latency bank: another line in the lowest-latency bank is first moved to the different cache bank, and the currently accessed line is then moved into the lowest-latency bank. Further replication of the cache line can be disabled when two or more processors alternately write to the cache line.
Type: Grant
Filed: October 7, 2004
Date of Patent: October 23, 2007
Assignee: International Business Machines Corporation
Inventors: Ramakrishnan Rajamony, Xiaowei Shen, Balaram Sinharoy
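The replicate-into-the-closest-bank behavior can be sketched as follows; the access-count threshold, the latency table, and the class shape are assumptions for illustration, not details from the patent.

```python
REPLICATION_THRESHOLD = 4  # hypothetical "repeatedly accessed" threshold

class BankedCache:
    def __init__(self, latency, home_bank=0):
        self.latency = latency      # latency[processor][bank]
        self.copies = {}            # line -> set of banks holding a copy
        self.accesses = {}          # line -> access count
        self.home_bank = home_bank

    def access(self, proc, line):
        banks = self.copies.setdefault(line, {self.home_bank})
        self.accesses[line] = self.accesses.get(line, 0) + 1
        if self.accesses[line] >= REPLICATION_THRESHOLD:
            # Repeatedly accessed line: replicate it into the bank with
            # the lowest latency for this processor.
            row = self.latency[proc]
            banks.add(min(range(len(row)), key=row.__getitem__))
        # Serve the request from the closest available copy.
        return min(banks, key=lambda b: self.latency[proc][b])
```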
-
Patent number: 7281087
Abstract: It is desired that the cache hit rate of access from a host (or an application) not be affected by an access pattern of another host (or an application). To achieve this, segment information setting means sets information in a segment information management table based on a setting command from a host (11) or a host (12). Input/output management means identifies to which access group an input/output request from the host (11) or (12) corresponds based on the setting in the segment information management table and the link status of the LRU links corresponding to the cache segments managed in a cache management table and, considering the division sizes allocated to the cache segments, controls discarding data from the cache memory for each cache segment.
Type: Grant
Filed: October 8, 2003
Date of Patent: October 9, 2007
Assignee: NEC Corporation
Inventor: Atsushi Kuwata
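Per-segment LRU discard of this kind can be sketched with one LRU list per access group; the class and method names are invented for the example.

```python
from collections import OrderedDict

class SegmentedCache:
    # One LRU-managed segment per access group, so one host's access
    # pattern never causes evictions from another host's segment.
    def __init__(self, segment_sizes):          # {group: max entries}
        self.limits = dict(segment_sizes)
        self.segments = {g: OrderedDict() for g in segment_sizes}

    def put(self, group, key, value):
        seg = self.segments[group]
        if key in seg:
            seg.move_to_end(key)                # refresh LRU position
        elif len(seg) >= self.limits[group]:
            seg.popitem(last=False)             # discard only within this segment
        seg[key] = value
```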
-
Patent number: 7269179
Abstract: Common control for enqueue and dequeue operations in a pipelined network processor includes receiving in a queue manager a first enqueue or dequeue request with respect to a queue and receiving a second enqueue or dequeue request in the queue manager with respect to the queue. Processing of the second request is commenced prior to completion of processing the first request.
Type: Grant
Filed: December 18, 2001
Date of Patent: September 11, 2007
Assignee: Intel Corporation
Inventors: Gilbert Wolrich, Mark B. Rosenbluth, Debra Bernstein, Matthew J. Adiletta
-
Patent number: 7260683
Abstract: It is an object of the present invention to provide a semiconductor integrated circuit having a chip layout that reduces line length to achieve faster processing. A cache comprises a TAG memory module and a cache data memory module. The cache data memory module is divided into first and second cache data memory modules which are disposed on both sides of the TAG memory module, and input/output circuits of a data TLB are opposed to the input/output circuit of the TAG memory module and the input/output circuits of the first and second cache data memory modules across a bus area to reduce the line length to achieve faster processing.
Type: Grant
Filed: July 14, 2004
Date of Patent: August 21, 2007
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventor: Masaya Sumita
-
Patent number: 7251716
Abstract: The data processing system controls, from the database management system on a host computer, the storage device subsystem which stores log data supplied from the database management system; allocates on a disk cache in the storage device subsystem, in advance, a log-dedicated buffer area of a size equal to that of the log data output between checkpoints; writes log data into the buffer area; and, in the event of a host computer failure, reads out the log data from the disk cache without making access to a disk device. Since the log information required for the recovery of the data processing device is cached on the storage device side, the time it takes to read the necessary log information can be shortened, which in turn reduces the system recovery time.
Type: Grant
Filed: May 20, 2004
Date of Patent: July 31, 2007
Assignee: Hitachi, Ltd.
Inventors: Noriko Nagae, Nobuo Kawamura, Takayoshi Shimokawa
-
Patent number: 7246202
Abstract: In a computer system that concurrently executes a plurality of tasks, a cache controller eliminates the possibility of the hit rate of one task dropping due to execution of another task. A region managing unit manages a plurality of regions in a cache memory in correspondence with a plurality of tasks. An address receiving unit receives, from a microprocessor, an address of a location in a main memory at which data to be accessed to execute one of the plurality of tasks is stored. A caching unit acquires, if the data to be accessed is not stored in the cache memory, a data block including the data from the main memory, and stores the acquired data block into a region in the cache memory corresponding to the task.
Type: Grant
Filed: November 10, 2003
Date of Patent: July 17, 2007
Assignee: Matsushita Electric Industrial Co., Ltd.
Inventors: Hiroyuki Morishita, Tokuzo Kiyohara
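The miss path the abstract describes, fetching a data block and storing it only into the requesting task's region, can be sketched as below; the block size and task names are assumptions for the example.

```python
BLOCK_WORDS = 4  # hypothetical data-block size

class TaskPartitionedCache:
    # On a miss, the fetched data block is stored only into the region
    # associated with the requesting task, so one task's fills cannot
    # evict another task's cached blocks.
    def __init__(self, tasks):
        self.regions = {t: {} for t in tasks}

    def load(self, task, addr, main_memory):
        base = addr - addr % BLOCK_WORDS
        region = self.regions[task]
        if base not in region:  # miss: refill this task's region only
            region[base] = main_memory[base:base + BLOCK_WORDS]
        return region[base][addr - base]
```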
-
Patent number: 7246208
Abstract: The present invention partitions a cache region of a storage subsystem for each user and prevents interference between user-dedicated regions. A plurality of CLPR can be established within the storage subsystem. A CLPR is a user-dedicated region that can be used by partitioning the cache region of a cache memory. Management information required to manage the data stored in the cache memory is allocated to each CLPR in accordance with the attribute of the segment or slot. The clean queue and clean counter, which manage the segments in a clean state, are provided in each CLPR. The dirty queue and dirty counter are used jointly by all the CLPR. The free queue, classification queue, and BIND queue are applied jointly to all the CLPR, only the counters being provided in each CLPR.
Type: Grant
Filed: April 13, 2004
Date of Patent: July 17, 2007
Assignee: Hitachi, Ltd.
Inventors: Sachiko Hoshino, Takashi Sakaguchi, Yasuyuki Nagasoe, Shoji Sugino
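The split the abstract describes between per-CLPR and shared management structures can be sketched as follows; this simplified model (class shape, method names) is an assumption, and it shows only the clean/dirty split, not the free, classification, or BIND queues.

```python
from collections import deque

class PartitionedCacheState:
    # Per the abstract: clean queues and counters are per-CLPR, while
    # the dirty queue is shared by all CLPR, with only the dirty
    # counters kept per CLPR.
    def __init__(self, clprs):
        self.clean_queue = {c: deque() for c in clprs}
        self.clean_count = {c: 0 for c in clprs}
        self.dirty_queue = deque()                 # shared by all CLPR
        self.dirty_count = {c: 0 for c in clprs}

    def mark_clean(self, clpr, segment):
        self.clean_queue[clpr].append(segment)
        self.clean_count[clpr] += 1

    def mark_dirty(self, clpr, segment):
        self.dirty_queue.append((clpr, segment))
        self.dirty_count[clpr] += 1
```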
-
Patent number: 7243170
Abstract: An instruction buffer and a method of buffering instructions.
Type: Grant
Filed: November 24, 2003
Date of Patent: July 10, 2007
Assignee: International Business Machines Corporation
Inventors: Taqi N. Buti, Brian W. Curran, Maureen A. Delaney, Saiful Islam, Zakaria M. Khwaja, Jafar Nahidi, Dung Q. Nguyen
-
Patent number: 7240177
Abstract: A system and method are provided for improving dynamic memory removals by reducing the file cache size before the dynamic memory removal operation initiates. In one exemplary embodiment, the maximum amount of physical memory that can be used to cache files is reduced prior to performing a dynamic memory removal operation. Reducing the maximum amount of physical memory that can be used to cache files causes the page replacement algorithm to aggressively target file pages to bring the size of the file cache below the new maximum limit on the file cache size. This results in more file pages, rather than working storage pages, being paged out.
Type: Grant
Filed: May 27, 2004
Date of Patent: July 3, 2007
Assignee: International Business Machines Corporation
Inventors: David Alan Hepkin, Bret Ronald Olszewski
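The effect of lowering the file-cache ceiling can be sketched very simply; the list-of-pages model and function name are assumptions for illustration only.

```python
def shrink_file_cache(file_pages, new_max):
    # Lower the file-cache ceiling before a dynamic memory removal; the
    # replacement policy then pages out file pages (coldest first here)
    # until the cache fits under the new limit, leaving working-storage
    # pages untouched.
    paged_out = []
    while len(file_pages) > new_max:
        paged_out.append(file_pages.pop(0))
    return paged_out
```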
-
Patent number: 7237066
Abstract: A computer system acquires mapping information of data storage regions in respective layers, from a layer of DBMSs to a layer of storage subsystems, grasps the correspondence between DB data and storage positions of each storage subsystem on the basis of the mapping information, decides a cache partitioning in each storage subsystem on the basis of the correspondence, and sets the cache partitioning for each storage subsystem. When cache allocation in the DBMS or the storage subsystem needs to be changed, information for estimating the cache effect due to the change in cache allocation, acquired by the DBMS, is used for estimating the cache effect in the storage subsystem.
Type: Grant
Filed: May 30, 2006
Date of Patent: June 26, 2007
Assignee: Hitachi, Ltd.
Inventors: Kazuhiko Mogi, Norifumi Nishikawa