Partitioned Cache Patents (Class 711/129)
  • Patent number: 8108612
    Abstract: Version indicators within an existing range can be associated with a data partition in a distributed data store. A partition reconfiguration can be associated with one of multiple partitions in the data store, and a new version indicator that is outside the existing range can be assigned to the reconfigured partition. Additionally, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that are configured to communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data. The response message can include the requested updated location information.
    Type: Grant
    Filed: May 15, 2009
    Date of Patent: January 31, 2012
    Assignee: Microsoft Corporation
    Inventors: Lu Xun, Hua-Jun Zeng, Muralidhar Krishnaprasad, Radhakrishnan Srikanth, Ankur Agrawal, Balachandar Pavadaisamy
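A minimal sketch of the version-indicator idea in 8108612's abstract: ordinary updates stay inside an existing version range, while a reconfigured partition receives an indicator outside that range so other nodes can recognize that its location information has changed. The class and constant names below are illustrative, not from the patent.
```python
# Illustrative only: versioning a reconfigured partition outside the existing range.
EXISTING_MAX = 1000  # upper bound of the "existing range" of version indicators


class PartitionMap:
    def __init__(self, num_partitions):
        # Normal versions stay inside the existing range [0, EXISTING_MAX].
        self.versions = {p: 0 for p in range(num_partitions)}
        self.next_out_of_range = EXISTING_MAX + 1

    def reconfigure(self, partition):
        """Give the reconfigured partition a version outside the existing range,
        so a reconfiguration is distinguishable from an ordinary update."""
        self.versions[partition] = self.next_out_of_range
        self.next_out_of_range += 1
        return self.versions[partition]


pmap = PartitionMap(4)
print(pmap.reconfigure(2))   # 1001 -- clearly outside [0, 1000]
```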
  • Patent number: 8099578
    Abstract: In a method embodiment, a method includes periodically polling data sent to an output. The output is operable to render the data into a human-perceptible form. The method further includes determining if at least one partition of a first plurality of discrete partitions of the periodically polled data is substantially identical to a combination of respective portions of at least two partitions of a second plurality of discrete partitions of data recorded within a computer-readable storage.
    Type: Grant
    Filed: April 25, 2008
    Date of Patent: January 17, 2012
    Assignee: Computer Associates Think, Inc.
    Inventors: Mark R. Godwin, Vivian A. Lloyd
  • Patent number: 8095736
    Abstract: Software, systems and methods are described which provide cache management capabilities. The number of cache sets to be used in each partition of the cache memory space is based on a number of cache pages in each partition and an associativity level associated with the set associative cache. The cache sets can be numbered based on the partition number, a total number of partitions and a cache page index. Cache management according to these exemplary embodiments reduces problems associated with cache thrashing in multiprocessor environments sharing common data structures in set associative caches.
    Type: Grant
    Filed: February 25, 2008
    Date of Patent: January 10, 2012
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventor: Frederic Rossi
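One plausible reading of 8095736's set-numbering scheme, sketched below. The abstract only says set counts derive from pages per partition and associativity, and set numbers derive from the partition number, partition count and page index, so the specific interleaving shown is an assumption.
```python
# Illustrative sketch only, not the patented formula.

def sets_in_partition(pages_in_partition, associativity):
    # Each set holds `associativity` pages, so a partition with P pages
    # needs roughly P / associativity sets.
    return max(1, pages_in_partition // associativity)

def set_number(partition, total_partitions, page_index,
               pages_in_partition, associativity):
    n_sets = sets_in_partition(pages_in_partition, associativity)
    # Interleave partitions so consecutive partitions land in different sets.
    return partition + total_partitions * (page_index % n_sets)

print(set_number(partition=1, total_partitions=4, page_index=10,
                 pages_in_partition=64, associativity=8))   # 9
```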
  • Patent number: 8095633
    Abstract: Methods and systems are provided for delivering content from a website to a computer device. The website and computer device negotiate terms for use of a cache memory coupled to the computer device. The computer device requests content, such as web page objects, from the website. In addition to transmitting the requested content, the website transmits non-requested content to the computer device. The non-requested content is stored in the cache memory for later retrieval by the computer device.
    Type: Grant
    Filed: July 2, 2007
    Date of Patent: January 10, 2012
    Assignee: Nokia, Inc.
    Inventors: Tao Wu, Sudhir Dixit, Sadhna Ahuja
  • Patent number: 8095733
    Abstract: A data processing system includes an interconnect fabric, a system memory coupled to the interconnect fabric and including a virtual barrier synchronization region allocated to storage of virtual barrier synchronization registers (VBSRs), and a plurality of processing units coupled to the interconnect fabric and operable to access the virtual barrier synchronization region. Each of the plurality of processing units includes a processor core and a cache memory including a cache controller and a cache array that caches VBSR lines from the virtual barrier synchronization region of the system memory. The cache controller of a first processing unit, responsive to a memory access request from its processor core that targets a first VBSR line, transfers responsibility for writing back to the virtual barrier synchronization region a second VBSR line contemporaneously held in the cache arrays of first, second and third processing units.
    Type: Grant
    Filed: April 7, 2009
    Date of Patent: January 10, 2012
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, Guy L. Guthrie, Michael Siegel, William J. Starke, Derek E. Williams
  • Patent number: 8095735
    Abstract: A memory interleave system for providing memory interleave for a heterogeneous computing system is provided. The memory interleave system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access).
    Type: Grant
    Filed: August 5, 2008
    Date of Patent: January 10, 2012
    Assignee: Convey Computer
    Inventors: Tony M. Brewer, Terrell Magee, J. Michael Andrewartha
  • Patent number: 8090926
    Abstract: A multiple computer system with hybrid replicated shared memory is disclosed. The local memory (10, 20, . . . 80) of each of the multiple computers M1, M2, . . . Mn is partitioned into a first part (11, 21, . . . 81) and a second part (12, 22, . . . 82). Each of the first parts is identical and each of the second parts is independent. The total memory available to the system is the first memory part plus n times the second memory part, n being the total number of computers running the application.
    Type: Grant
    Filed: October 5, 2007
    Date of Patent: January 3, 2012
    Assignee: Waratek Pty Ltd.
    Inventor: John M. Holt
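A worked example of the capacity formula stated in the abstract (total memory = replicated first part + n x independent second part); the sizes below are made up.
```python
# Capacity arithmetic only, with assumed example sizes.

def total_memory(first_part_bytes, second_part_bytes, n_computers):
    # Replicated first part counts once; independent second parts count n times.
    return first_part_bytes + n_computers * second_part_bytes

GiB = 1 << 30
MiB = 1 << 20
# e.g. 8 computers, each splitting 1 GiB into 256 MiB replicated / 768 MiB independent
print(total_memory(256 * MiB, 768 * MiB, 8) / GiB)   # 6.25 GiB usable by the application
```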
  • Patent number: 8090911
    Abstract: In an embodiment, a target number of discretionary pages for a first partition is calculated as a function of a number of physical page table faults, a number of sampled page faults, a number of shared physical page pool faults, a number of re-page-ins, and a ratio of pages. If the target number of discretionary pages for the first partition is less than a number of the discretionary pages that are allocated to the first partition, a result page is found that is allocated to the first partition and the result page is deallocated from the first partition. If the target number of discretionary pages for the first partition is greater than the number of the discretionary pages that are allocated to the first partition, a free page is allocated to the first partition.
    Type: Grant
    Filed: April 16, 2009
    Date of Patent: January 3, 2012
    Assignee: International Business Machines Corporation
    Inventors: Wade B. Ouren, Edward C. Prosser, Kenneth C. Vossen
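A sketch of only the allocate/deallocate decision from 8090911's abstract; how the target number of discretionary pages is derived from the fault counts, re-page-ins and page ratio is deliberately not modelled, since the abstract does not give that formula.
```python
# Decision step only; the target value is assumed to come from elsewhere.

def rebalance(partition, target_discretionary, free_pages):
    if target_discretionary < len(partition["discretionary"]):
        # Too many discretionary pages: pick one and return it to the pool.
        free_pages.append(partition["discretionary"].pop())
    elif target_discretionary > len(partition["discretionary"]) and free_pages:
        # Too few: hand the partition a free page.
        partition["discretionary"].append(free_pages.pop())

part = {"discretionary": [101, 102, 103]}
pool = [200, 201]
rebalance(part, target_discretionary=2, free_pages=pool)
print(part, pool)   # page 103 moved back to the free pool
```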
  • Publication number: 20110320687
    Abstract: Embodiments of the invention are directed to reducing write amplification in a cache with flash memory used as a write cache. An embodiment of the invention includes partitioning at least one flash memory device in the cache into a plurality of logical partitions. Each of the plurality of logical partitions is a logical subdivision of one of the at least one flash memory device and comprises a plurality of memory pages. Data are buffered in a buffer. The data includes data to be cached, and data to be destaged from the cache to a storage subsystem. Data to be cached are written from the buffer to the at least one flash memory device. A processor coupled to the buffer is provided with access to the data written to the at least one flash memory device from the buffer, and a location of the data written to the at least one flash memory device within the plurality of logical partitions. The data written to the at least one flash memory device are destaged from the buffer to the storage subsystem.
    Type: Application
    Filed: June 29, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Wendy A. Belluomini, Binny S. Gill, Michael A. Ko
  • Publication number: 20110314226
    Abstract: In general, the present invention relates to semiconductor storage systems (SSDs). Specifically, the present invention relates to an SSD-based cache manager. In a typical embodiment, a cache balancer is coupled to a set of cache meta data units. A set of cache algorithms utilizes the set of cache meta data units to determine optimal data caching operations. A cache adaptation manager is coupled to and sends volume information to the cache balancer. Typically, this information is computed using the set of cache algorithms. A monitoring manager is coupled to the cache adaptation manager.
    Type: Application
    Filed: June 16, 2010
    Publication date: December 22, 2011
    Inventor: Byungcheol Cho
  • Patent number: 8078804
    Abstract: A data cache memory coupled to a processor including processor clusters are adapted to operate simultaneously on scalar and vectorial data by providing data locations in the data cache memory for storing data for processing. The data locations are accessed either in a scalar mode or in a vectorial mode. This is done by explicitly mapping the data locations that are scalar and the data locations that are vectorial.
    Type: Grant
    Filed: June 26, 2007
    Date of Patent: December 13, 2011
    Assignees: STMicroelectronics S.r.l., STMicroelectronics N.V.
    Inventors: Francesco Pappalardo, Giuseppe Notarangelo, Elena Salurso, Elio Guidetti
  • Patent number: 8074028
    Abstract: The present solution provides a multi-tiered caching and cache indexing system. A cache management system uses a memory-based object index to reference or identify corresponding objects stored on disk. The memory used to index objects may grow proportionally or in relation to growth in the size of the disk. The techniques described herein minimize, reduce or maintain the size of memory for an object index although the size of storage for storing objects is changed. These techniques allow for more optimal use of memory for object indexing while increasing or decreasing disk size for object storage.
    Type: Grant
    Filed: March 12, 2007
    Date of Patent: December 6, 2011
    Assignee: Citrix Systems, Inc.
    Inventor: Robert Plamondon
  • Patent number: 8069308
    Abstract: In a computing system a method and apparatus for cache pooling is introduced. Threads are assigned priorities based on the criticality of their tasks. The most critical threads are assigned to main memory locations such that they are subject to limited or no cache contention. Less critical threads are assigned to main memory locations such that their cache contention with critical threads is minimized or eliminated. Thus, overall system performance is improved, as critical threads execute in a substantially predictable manner.
    Type: Grant
    Filed: February 13, 2008
    Date of Patent: November 29, 2011
    Assignee: Honeywell International Inc.
    Inventors: Aaron Larson, Ryan Roffelsen, Larry James Miller
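The abstract of 8069308 describes placing critical threads in main-memory locations that limit their cache contention; the sketch below models that with page colouring, one common way to realize such placement, and the colour split, priority names and frame numbers are assumptions rather than anything from the patent.
```python
# Hedged page-colouring model of priority-based cache pooling.

NUM_COLORS = 16      # cache sets grouped into colours; frame % NUM_COLORS = its colour

def colors_for(priority):
    if priority == "critical":
        return range(0, 4)           # colours reserved exclusively for critical threads
    return range(4, NUM_COLORS)      # everyone else shares the remaining colours

def pick_frame(free_frames, priority):
    allowed = set(colors_for(priority))
    for frame in free_frames:
        if frame % NUM_COLORS in allowed:
            free_frames.remove(frame)
            return frame
    raise MemoryError("no frame with an allowed colour")

frames = list(range(100, 164))
print(pick_frame(frames, "critical"))   # a frame whose colour is in 0..3 (e.g. 112)
print(pick_frame(frames, "normal"))     # a frame whose colour is in 4..15 (e.g. 100)
```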
  • Patent number: 8065485
    Abstract: A method for determining whether to store binary information in a fast way or a slow way of a cache is disclosed. The method includes receiving a block of binary information to be stored in a cache memory having a plurality of ways. The plurality of ways includes a first subset of ways and a second subset of ways, wherein a cache access by a first execution core from one of the first subset of ways has a lower latency time than a cache access from one of the second subset of ways. The method further includes determining, based on a predetermined access latency and one or more parameters associated with the block of binary information, whether to store the block of binary information into one of the first set of ways or one of the second set of ways.
    Type: Grant
    Filed: May 22, 2009
    Date of Patent: November 22, 2011
    Assignee: Oracle America, Inc.
    Inventors: Gideon N. Levinsky, Paul Caprioli, Sherman H. Yip
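An illustrative policy in the spirit of 8065485: the abstract only says the fast-way/slow-way choice depends on a predetermined access latency and one or more parameters of the block, so the "hot" reuse hint and the latency budget below are assumed parameters.
```python
# Assumed policy sketch, not the patented decision logic.

FAST_WAYS = (0, 1)    # lower-latency ways for the first execution core
SLOW_WAYS = (2, 3)

def choose_way(latency_budget_cycles, fast_latency, slow_latency, hot):
    # Put hot (frequently reused) blocks in a fast way when a slow way would
    # exceed the latency budget; otherwise keep the fast ways free.
    if hot and slow_latency > latency_budget_cycles >= fast_latency:
        return FAST_WAYS[0]
    return SLOW_WAYS[0]

print(choose_way(latency_budget_cycles=3, fast_latency=2, slow_latency=5, hot=True))   # 0
print(choose_way(latency_budget_cycles=3, fast_latency=2, slow_latency=5, hot=False))  # 2
```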
  • Patent number: 8060689
    Abstract: A method includes configuring a flash memory device including a first memory sector having a primary memory sector correspondence, a second memory sector having an alternate memory sector correspondence, and a third memory sector having a free memory sector correspondence, copying a portion of the primary memory sector to the free memory sector, erasing the primary memory sector, and changing a correspondence of each of the first memory sector, the second memory sector, and the third memory sector.
    Type: Grant
    Filed: May 4, 2010
    Date of Patent: November 15, 2011
    Assignee: Pitney Bowes Inc.
    Inventors: Wesley A. Kirschner, Gary S. Jacobson, John A. Hurd, G. Thomas Atthens, Steven J. Pauly, Richard C. Day, Jr.
  • Patent number: 8055850
    Abstract: A method, system, and computer program product for prioritizing directory scans in cache by a processor is provided. While traversing a directory in the cache, one of attempting to acquire a lock for a directory entry and attempting to acquire access to a track in the directory entry is performed. If one of the lock is not obtained for the directory entry and the access to the track in the directory entry is not obtained, the directory entry is added to a reserved data space. Following completion of traversing the directory, a return is made to the reserved data space to process the directory entry and the track in the directory entry.
    Type: Grant
    Filed: April 6, 2009
    Date of Patent: November 8, 2011
    Assignee: International Business Machines Corporation
    Inventor: Lokesh M. Gupta
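A sketch of 8055850's deferral scheme: directory entries whose lock cannot be taken during the scan are parked in a reserved list and revisited once the traversal finishes. The entry layout is an assumption.
```python
import threading

def scan_directory(entries, process):
    reserved = []
    for entry in entries:
        if entry["lock"].acquire(blocking=False):
            try:
                process(entry)
            finally:
                entry["lock"].release()
        else:
            reserved.append(entry)        # could not lock now, come back later
    for entry in reserved:                # second pass after the main traversal
        with entry["lock"]:
            process(entry)

entries = [{"name": f"track{i}", "lock": threading.Lock()} for i in range(3)]
scan_directory(entries, lambda e: print("processed", e["name"]))
```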
  • Patent number: 8041920
    Abstract: Embodiments of apparatuses, methods, and systems for partitioning memory mapped device configuration space are disclosed. In one embodiment, an apparatus includes a configuration space address storage location, an access map storage location, and addressing logic. The configuration space address storage location is to store a pointer to a memory region to which transactions to configure devices in a partition of a partitioned system are addressed. The access map storage location is to store an access map or a pointer to an access map. The addressing logic is to use the access map to determine whether a configuration transaction from a processor to one of the devices is to be allowed.
    Type: Grant
    Filed: December 29, 2006
    Date of Patent: October 18, 2011
    Assignee: Intel Corporation
    Inventors: David A. Koufaty, John I. Garney, Ulhas Warrier, Kiran S. Panesar
  • Patent number: 8032732
    Abstract: A cache-aware Bloom filter system segments a bit vector of a cache-aware Bloom filter into fixed-size blocks. The system hashes an item to be inserted into the cache-aware Bloom filter to identify one of the fixed-size blocks as a selected block for receiving the item and hashes the item k times to generate k hashed values for encoding the item for insertion in the selected block. The system sets bits within the selected block with addresses corresponding to the k hashed values such that accessing the item in the cache-aware Bloom filter requires accessing only the selected block to check the k hashed values. The size of the fixed-size block corresponds to a cache-line size of an associated computer architecture on which the cache-aware Bloom filter is installed.
    Type: Grant
    Filed: June 5, 2008
    Date of Patent: October 4, 2011
    Assignee: International Business Machines Corporation
    Inventors: Kevin Scott Beyer, Sridhar Rajagopalan
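A hedged sketch of the blocked ("cache-aware") Bloom filter described in 8032732: one hash selects a cache-line-sized block, k further hashes set bits inside that block only, so a membership check touches a single block. The sizes (512-bit blocks, roughly one 64-byte line, k = 4) and hash construction are illustrative choices.
```python
import hashlib

BLOCK_BITS = 512
NUM_BLOCKS = 1024
K = 4

bits = bytearray(NUM_BLOCKS * BLOCK_BITS // 8)

def _hash(item, seed):
    # Personalised BLAKE2b stands in for the k independent hash functions.
    digest = hashlib.blake2b(item.encode(), person=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little")

def insert(item):
    base = (_hash(item, 0) % NUM_BLOCKS) * BLOCK_BITS   # pick the block first
    for i in range(1, K + 1):
        bit = base + _hash(item, i) % BLOCK_BITS        # then k bits inside it
        bits[bit // 8] |= 1 << (bit % 8)

def maybe_contains(item):
    base = (_hash(item, 0) % NUM_BLOCKS) * BLOCK_BITS
    for i in range(1, K + 1):
        bit = base + _hash(item, i) % BLOCK_BITS
        if not (bits[bit // 8] >> (bit % 8)) & 1:
            return False
    return True

insert("cache-line")
print(maybe_contains("cache-line"), maybe_contains("absent"))   # True False (almost surely)
```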
  • Patent number: 8028011
    Abstract: A global cylinder group (CG) cache is stored in file server memory and shared by a plurality of file systems supported by the file server. The global CG cache comprises a number of CG entries which are pre-allocated in memory. As different file systems are accessed, global CG entries in the CG cache are used to store CG block information for the accesses. With such an arrangement, a file server may support multiple file systems using a single global CG cache without starvation and the other adverse performance impacts of the prior art. According to one aspect of the invention, the global CG cache is periodically scanned to reclaim memory. In contrast to the prior art, where multiple scans were periodically performed of multiple CG caches for memory reclamation, the use of a single CG cache minimizes the impact of CG cache maintenance on file server performance.
    Type: Grant
    Filed: July 13, 2006
    Date of Patent: September 27, 2011
    Assignee: EMC Corporation
    Inventors: Sitaram Pawar, Jean-Pierre Bono
  • Patent number: 8019719
    Abstract: Systems and methods for partitioning information across multiple storage devices in a web server environment. The system comprises a web server database which includes information related to creating a web site. The information is divided into partitions within the database. One of the partitions includes user information and another of the partitions includes content for the web site. Portions of the content for the web site are replicated and maintained within the partition including the user information. Further, a portion of the user information is replicated and maintained in the partition where the content for the web site is maintained. The methods include dividing information into partitions, de-normalizing the received data and replicating the data portions into the various web site locations.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: September 13, 2011
    Assignee: Ancestry.com Operations Inc.
    Inventors: Todd Hardman, James Ivie, Michael Mansfield, Greg Parkinson, Daren Thayne, Mark Wolfgramm, Michael Wolfgramm, Brandt Redd
  • Patent number: 8015358
    Abstract: A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses.
    Type: Grant
    Filed: September 9, 2008
    Date of Patent: September 6, 2011
    Assignee: International Business Machines Corporation
    Inventors: Vicente Enrique Chung, Guy Lynn Guthrie, William John Starke, Jeffrey Adam Stuecheli
  • Patent number: 8001330
    Abstract: A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency.
    Type: Grant
    Filed: December 1, 2008
    Date of Patent: August 16, 2011
    Assignee: International Business Machines Corporation
    Inventors: Leo James Clark, James Stephen Fields, Jr., Guy Lynn Guthrie, William John Starke
  • Patent number: 8001329
    Abstract: A system and method for partitioning a data stream into tokens includes steps or acts of: receiving the data stream; setting a partition scanner to a beginning point in the data stream; identifying likely token boundaries in the data stream using the partition scanner; partitioning the data stream according to the likely token boundaries as determined by the partition scanner, wherein each partition of the partitioned data stream bounded by the likely token boundaries comprises a chunk; and passing the chunk to a next available token scanner, one chunk per token scanner, for identifying at least one actual token within each chunk.
    Type: Grant
    Filed: May 19, 2008
    Date of Patent: August 16, 2011
    Assignee: International Business Machines Corporation
    Inventor: Christoph von Praun
  • Patent number: 7996620
    Abstract: A cache memory high performance pseudo dynamic address compare path divides the address into two or more address segments. Each segment is separately compared in a comparator comprised of static logic elements. The output of each of these static comparators is then combined in a dynamic logic circuit to generate a dynamic late select output.
    Type: Grant
    Filed: September 5, 2007
    Date of Patent: August 9, 2011
    Assignee: International Business Machines Corporation
    Inventors: Yuen H. Chan, Ann H. Chen, Kenneth M. Lo, Shie-ei Wang
  • Patent number: 7996644
    Abstract: An apparatus and method for fairly accessing a shared cache with multiple resources, such as multiple cores, multiple threads, or both, are described herein. A resource within a microprocessor sharing access to a cache is assigned a static portion of the cache and a dynamic portion. The resource is blocked from victimizing static portions assigned to other resources, yet allowed to victimize the static portion assigned to the resource and the dynamically shared portion. If the resource does not access the cache enough times over a period of time, the static portion assigned to the resource is reassigned to the dynamically shared portion.
    Type: Grant
    Filed: December 29, 2004
    Date of Patent: August 9, 2011
    Assignee: Intel Corporation
    Inventor: Sailesh Kottapalli
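A hedged sketch of 7996644's static/dynamic split: a core may victimize its own static ways and the shared dynamic ways but never another core's static ways, and a core that under-uses the cache in an epoch has its static portion folded back into the shared pool. The way counts and access threshold are invented for illustration.
```python
static_ways = {0: {0, 1}, 1: {2, 3}}     # per-resource static portions
dynamic_ways = {4, 5, 6, 7}               # dynamically shared portion
accesses = {0: 0, 1: 0}                   # cache accesses in the current epoch
MIN_ACCESSES = 100

def allowed_victims(core):
    # Own static ways plus the shared pool; other cores' static ways are off limits.
    return static_ways.get(core, set()) | dynamic_ways

def end_of_epoch():
    for core in list(accesses):
        if accesses[core] < MIN_ACCESSES and core in static_ways:
            dynamic_ways.update(static_ways.pop(core))   # reassign to the shared pool
        accesses[core] = 0

accesses[0], accesses[1] = 500, 20        # core 1 barely touched the cache
end_of_epoch()
print(sorted(allowed_victims(0)))         # [0, 1, 2, 3, 4, 5, 6, 7]
print(sorted(allowed_victims(1)))         # [2, 3, 4, 5, 6, 7] -- only the shared pool now
```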
  • Publication number: 20110185127
    Abstract: The processor circuit (1) has a Harvard architecture. This processor circuit includes a calculation unit (2), a first memory element (3a) for data storage and a second memory element (4a) for instruction storage. Said first and second memory elements (3a, 4a) are connected by at least one communication bus (5, 6) to the calculation unit. The processor circuit includes management means (8), placed between the first and second memory elements and the calculation unit and capable of saving several data items or instructions to save time during successive data reading.
    Type: Application
    Filed: July 23, 2009
    Publication date: July 28, 2011
    Applicant: EM MICROELECTRONIC-MARIN SA
    Inventors: Yves Theoduloz, Hugo Jaeggi, Tomas Toth
  • Patent number: 7979641
    Abstract: The embodiments of the invention provide a method, apparatus, etc. for a cache arrangement for improving RAID I/O operations. More specifically, a method begins by partitioning a data object into a plurality of data blocks and creating one or more parity data blocks from the data object. Next, the data blocks and the parity data blocks are stored within storage nodes. Following this, the method caches data blocks within a partitioned cache, wherein the partitioned cache includes a plurality of cache partitions. The cache partitions are located within the storage nodes, wherein each cache partition is smaller than the data object. Moreover, the caching within the partitioned cache only caches data blocks in parity storage nodes, wherein the parity storage nodes comprise a parity storage field. Thus, caching within the partitioned cache avoids caching data blocks within storage nodes lacking the parity storage field.
    Type: Grant
    Filed: March 31, 2008
    Date of Patent: July 12, 2011
    Assignee: International Business Machines Corporation
    Inventors: Dingshan He, Deepak R. Kenchammana-Hosekote
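A sketch of the caching rule in 7979641 under an assumed RAID-5-style rotating-parity layout: only the node holding a stripe's parity caches that stripe's data blocks, so nodes without the parity storage field never cache them.
```python
NUM_NODES = 4

def parity_node(stripe):
    # Assumed rotating parity placement, as in RAID 5.
    return stripe % NUM_NODES

def may_cache(node, stripe):
    # "Caching within the partitioned cache only caches data blocks in parity storage nodes."
    return node == parity_node(stripe)

for node in range(NUM_NODES):
    print(node, [s for s in range(8) if may_cache(node, s)])
```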
  • Publication number: 20110161557
    Abstract: This disclosure is related to distributed media cache for data storage systems, such as disc drives, flash devices, or hybrid devices. In one example, a data storage device comprises a data storage medium and a controller adapted to selectively divide a media cache into a plurality of physically separate media cache portions on the data storage medium based on a physical attribute of the data storage medium and to store data received from a host system into the media cache portions.
    Type: Application
    Filed: December 31, 2009
    Publication date: June 30, 2011
    Applicant: SEAGATE TECHNOLOGY LLC
    Inventors: Jonathan Williams Haines, Brett Alan Cook
  • Patent number: 7970997
    Abstract: A program section layout method capable of improving space efficiency of a cache memory. A grouping unit groups program sections into section groups so that the total size of the program sections composing each section group does not exceed cache memory size. A layout optimization unit optimizes the layout of section group storage regions by combining each section group and a program section that does not belong to any section groups or by combining section groups while keeping the ordering relations of the program sections composing each section group.
    Type: Grant
    Filed: December 23, 2004
    Date of Patent: June 28, 2011
    Assignee: Fujitsu Semiconductor Limited
    Inventor: Manabu Watanabe
  • Patent number: 7965557
    Abstract: A flash memory device includes a memory cell array having a set-up data region configured to store set-up data, wherein the set-up data includes first data and second data. The second data is stored in an empty cell area of the set-up data region. The flash memory also includes a page buffer and decoder configured to read the set-up data from the set-up data region, and a status detector receiving the set-up data from the page buffer and decoder and configured to discriminate the first data from the second data and generate a Pass/Fail status signal.
    Type: Grant
    Filed: April 3, 2008
    Date of Patent: June 21, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Sang-Gu Kang
  • Publication number: 20110145493
    Abstract: Various embodiments of the present invention are directed to multi-core memory modules. In one embodiment, a memory module (500) includes at least one virtual memory device and a demultiplexer register (502) disposed between the at least one virtual memory device and a memory controller. The demultiplexer register receives a command identifying one of the at least one virtual memory devices from the memory controller and sends the command to the identified virtual memory device. In addition, each virtual memory device includes at least one memory chip.
    Type: Application
    Filed: August 8, 2008
    Publication date: June 16, 2011
    Inventors: Jung Ho Ahn, Norman P. Jouppi, Jacob B. Erich
  • Publication number: 20110145504
    Abstract: Various embodiments of the present invention are directed to multi-core memory modules. In one embodiment, a memory module (500) includes memory chips, and a demultiplexer register (502) electronically connected to each of the memory chips and a memory controller. The memory controller groups one or more of the memory chips into at least one virtual memory device in accordance with changing performance and/or energy efficiency needs. The demultiplexer register (502) is configured to receive a command identifying one of the virtual memory devices and send the command to the memory chips of the identified virtual memory device. In certain embodiments, the memory chips can be dynamic random access memory chips.
    Type: Application
    Filed: August 8, 2008
    Publication date: June 16, 2011
    Inventors: Jung Ho Ahn, Norman P. Jouppi
  • Patent number: 7962700
    Abstract: Compressed memory systems are provided to reduce latency associated with accessing compressed memory using stratified compressed memory architectures and memory organization protocols in which a region of compressed main memory is allocated as a direct access memory (DAM) region for storing uncompressed data items. The uncompressed data items in the DAM region can be directly accessed, speculatively, to serve access requests to main memory, requiring access to compressed memory in the event of a DAM miss.
    Type: Grant
    Filed: September 6, 2006
    Date of Patent: June 14, 2011
    Assignee: International Business Machines Corporation
    Inventors: Peter Anthony Franaszek, Luis Alfonso Lastras-Montano, Robert Brett Tremaine
  • Patent number: 7958329
    Abstract: A multiple computer system with hybrid replicated shared memory is disclosed. The local memory (10, 20, . . . 80) of each of the multiple computers M1, M2, . . . Mn is partitioned into a first part (11, 21, . . . 81) and a second part (12, 22, . . . 82). Each of the first parts is identical and each of the second parts is independent. The total memory available to the system is the first memory part plus n times the second memory part, n being the total number of computers running the application.
    Type: Grant
    Filed: October 5, 2007
    Date of Patent: June 7, 2011
    Assignee: Waratek Pty Ltd
    Inventor: John M. Holt
  • Publication number: 20110131378
    Abstract: Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area.
    Type: Application
    Filed: November 22, 2010
    Publication date: June 2, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Xiao Jun Dai, Subhendu Das, Zhi Gan, Zhang Yue
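A minimal sketch of the per-area locking in 20110131378: the cache is divided into areas, each guarded by its own lock, so updates to different areas never contend on a single global lock. The area count and key hashing are illustrative.
```python
import threading

NUM_AREAS = 8
areas = [dict() for _ in range(NUM_AREAS)]
locks = [threading.Lock() for _ in range(NUM_AREAS)]

def put(key, value):
    area = hash(key) % NUM_AREAS
    with locks[area]:              # only the holder of this area's lock may update it
        areas[area][key] = value

def get(key):
    area = hash(key) % NUM_AREAS
    with locks[area]:
        return areas[area].get(key)

put("block:42", b"data")
print(get("block:42"))
```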
  • Patent number: 7949829
    Abstract: In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled.
    Type: Grant
    Filed: September 24, 2009
    Date of Patent: May 24, 2011
    Assignee: Apple Inc.
    Inventors: Ruchi Wadhawan, Jason M. Kassoff, George Kong Yiu
  • Patent number: 7941585
    Abstract: A RISC-type processor includes a main register file and a data cache. The data cache can be partitioned to include a local memory, the size of which can be dynamically changed on a cache block basis while the processor is executing instructions that use the main register file. The local memory can emulate as an additional register file to the processor and can reside at a virtual address. The local memory can be further partitioned for prefetching data from a non-cacheable address to be stored/loaded into the main register file.
    Type: Grant
    Filed: December 17, 2004
    Date of Patent: May 10, 2011
    Assignee: Cavium Networks, Inc.
    Inventors: David H. Asher, David A. Carlson, Richard E. Kessler
  • Publication number: 20110107033
    Abstract: An approach is provided for providing an application-level cache. A caching application configures at least one memory of a mobile terminal into an application-level cache with a locked region and a floating region. The caching application then causes, at least in part, actions that result in caching, into each of the locked region and the floating region, of data items that are anticipated to be requested via an application of the mobile terminal.
    Type: Application
    Filed: November 4, 2009
    Publication date: May 5, 2011
    Applicant: Nokia Corporation
    Inventors: Nikolai Grigoriev, Sylvain Legault
  • Patent number: 7930484
    Abstract: Instructions involving a relatively significant data transfer or a particular type of data transfer via a cache result in the application of a restricted access policy to control access to one or more partitions of the cache so as to reduce or prevent the overwriting of data that is expected to be subsequently used by the cache or by a processor. A processor or other system component may assert a signal which is utilized to select between one or more access policies and the selected access policy then may be applied to control access to one or more ways of the cache during the data transfer operation associated with the instruction. The access policy typically represents an access restriction to particular cache partitions, such as a restriction to one or more particular cache ways or one or more particular cache lines.
    Type: Grant
    Filed: February 7, 2005
    Date of Patent: April 19, 2011
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Stephen P. Thompson, Mark A. Krom
  • Publication number: 20110087843
    Abstract: An apparatus, method, and system are disclosed. In one embodiment the apparatus includes a cache memory, which has a number of sets. Each of the sets in the cache memory has several cache lines. The apparatus also includes at least one process resource table. The process resource table maintains a cache line occupancy count for a number of cache lines. Specifically, the occupancy count describes the number of cache lines in the cache storing information utilized by a process running on a computer system. Additionally, the process resource table stores the occupancy count of fewer cache lines than the total number of cache lines in the cache memory.
    Type: Application
    Filed: October 9, 2009
    Publication date: April 14, 2011
    Inventors: Li Zhao, Ravishankar Iyer, Rameshkumar G. Illikkal, Erik G. Hallnor, Martin G. Dixon, Donald K. Newell
  • Patent number: 7925866
    Abstract: A data processing apparatus and method are provided for handling instructions to be executed by processing circuitry. The processing circuitry has a plurality of processor states, each processor state having a different instruction set associated therewith. Pre-decoding circuitry receives the instructions fetched from the memory and performs a pre-decoding operation to generate corresponding pre-decoded instructions, with those pre-decoded instructions then being stored in a cache for access by the processing circuitry. The pre-decoding circuitry performs the pre-decoding operation assuming a speculative processor state, and the cache is arranged to store an indication of the speculative processor state in association with the pre-decoded instructions.
    Type: Grant
    Filed: December 3, 2008
    Date of Patent: April 12, 2011
    Assignee: ARM Limited
    Inventors: Peter Richard Greenhalgh, Andrew Christopher Rose, Simon John Craske, Max Zardini
  • Patent number: 7913041
    Abstract: A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics may include but are not limited to data structure used by the execution entity, expected reference pattern of the execution entity, type of an execution entity, heat and power consumption of an execution entity, etc. Examples of cache attributes that may be reconfigured may include but are not limited to associativity of the cache memory, amount of the cache memory available to store data, coherence granularity of the cache memory, line size of the cache memory, etc.
    Type: Grant
    Filed: May 30, 2008
    Date of Patent: March 22, 2011
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Balaram Sinharoy, Robert B. Tremaine, Robert W. Wisniewski
  • Patent number: 7904658
    Abstract: A design structure for a cache memory system (200) having a cache memory (204) partitioned into a number of banks, or “ways” (204A, 204B). The memory system includes a power controller (244) that selectively powers up and down the ways depending upon which way contains the data being sought by each incoming address (232) coming into the memory system.
    Type: Grant
    Filed: September 6, 2007
    Date of Patent: March 8, 2011
    Assignee: International Business Machines Corporation
    Inventors: Wagdi W. Abadeer, George M. Braceras, John A. Fifield, Harold Pilo
  • Publication number: 20110055827
    Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
    Type: Application
    Filed: August 25, 2009
    Publication date: March 3, 2011
    Applicant: International Business Machines Corporation
    Inventors: Jiang Lin, Lixin Zhang
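A hedged sketch of victim selection as described in 20110055827: the partition-table entry for a virtual ID constrains the ways (vertical) and sets (horizontal) a virtual machine may use, and the victim on a miss is chosen only from the allowed ways of the accessed set. The table contents and the random stand-in for the replacement policy are assumptions.
```python
import random

partition_table = {
    # virtual ID -> (allowed way mask, allowed set range)
    1: (0b00001111, range(0, 512)),
    2: (0b11110000, range(512, 1024)),
}

def select_victim(virtual_id, set_index, ways=8):
    way_mask, set_range = partition_table[virtual_id]
    assert set_index in set_range, "access outside the VM's horizontal partition"
    allowed = [w for w in range(ways) if way_mask >> w & 1]
    return random.choice(allowed)          # stand-in for the real replacement policy

print(select_victim(virtual_id=2, set_index=600))   # one of ways 4..7
```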
  • Patent number: 7899989
    Abstract: A method for writing a logical block into a storage pool includes receiving a request to write the logical block, selecting a block allocation policy, by a file system associated with the storage pool, from a set of allocation policies, obtaining a list of free physical blocks in the storage pool, allocating a physical block from the list of free physical blocks, based on the block allocation policy, and writing the logical block to the physical block.
    Type: Grant
    Filed: April 20, 2006
    Date of Patent: March 1, 2011
    Assignee: Oracle America, Inc.
    Inventors: William H. Moore, Jeffrey S. Bonwick
  • Patent number: 7895398
    Abstract: A system and method is disclosed for the adaptive and dynamic adjustment of the characteristics of a cache on a basis that is specific to the operation of each logical unit. A storage controller may include a cache. The cache is subdivided so that a portion of the cache is associated with each logical unit that is coupled to the storage controller. A cache management utility monitors the data access commands transmitted to each logical unit of the storage array. The size of the portion of the cache dedicated to each logical unit may be adjusted on the basis of the data access commands directed to the logical unit. The size of the read cache subportion and the size of the write cache subportion of a cache portion associated with a single logical unit may be adjusted on the basis of the read and write commands directed to the logical unit.
    Type: Grant
    Filed: April 12, 2006
    Date of Patent: February 22, 2011
    Assignee: Dell Products L.P.
    Inventors: Uday D. Shet, Peyman Najafirad, Ramesh S. Rajagopalan
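One possible adjustment rule in the spirit of 7895398: resize a logical unit's read and write cache sub-portions in proportion to the read/write command mix observed for that unit. The proportional rule and the minimum size are assumptions; the abstract does not specify how the adjustment is computed.
```python
# Assumed proportional resizing of a LUN's read/write cache sub-portions.

def resize(portion_bytes, reads, writes, min_bytes=1 << 20):
    total = max(reads + writes, 1)
    read_part = max(min_bytes, int(portion_bytes * reads / total))
    return read_part, portion_bytes - read_part     # (read cache, write cache)

MiB = 1 << 20
print([x // MiB for x in resize(64 * MiB, reads=900, writes=100)])   # [57, 6]
```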
  • Publication number: 20110040940
    Abstract: The present invention discloses a method comprising: sending cache request; monitoring power state; comparing said power state; allocating cache resources; filling cache; updating said power state; repeating said sending, said monitoring, said comparing, said allocating, said filling, and said updating until workload is completed.
    Type: Application
    Filed: August 13, 2009
    Publication date: February 17, 2011
    Inventors: Ryan D. Wells, Michael J. Muchnick, Chinnakrishnan S. Ballapuram
  • Patent number: 7890632
    Abstract: A method, system, and computer usable program product for load balancing using replication delay are provided in the illustrative embodiments. In response to a request to update, a system updates data associated with a write server, forming updated data of a data partition. The system receives a read request for the data partition. The system calculates a time difference between an arrival time of the request to update and an arrival time of the read request. The system receives a set of average replication delays for a set of replica servers serving the data partition. The system directs the read request to a replica server in the set of replica servers whose average replication delay is less than or equal to the time difference.
    Type: Grant
    Filed: August 11, 2008
    Date of Patent: February 15, 2011
    Assignee: International Business Machines Corporation
    Inventors: Kristin Marie Hazlewood, Yogesh Vilas Golwalkar, Magesh Rajamani
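A sketch of 7890632's routing rule: a read is directed to a replica whose average replication delay is no greater than the gap between the update's arrival and the read's arrival; falling back to the write server when no replica qualifies is an assumption added for completeness.
```python
def route_read(update_arrival, read_arrival, replica_delays, write_server="primary"):
    gap = read_arrival - update_arrival
    for replica, avg_delay in replica_delays.items():
        if avg_delay <= gap:
            return replica          # this replica has most likely applied the update
    return write_server             # no replica is caught up enough

delays = {"replica-a": 0.8, "replica-b": 2.5}     # average replication delays in seconds
print(route_read(update_arrival=100.0, read_arrival=101.0, replica_delays=delays))  # replica-a
print(route_read(update_arrival=100.0, read_arrival=100.1, replica_delays=delays))  # primary
```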
  • Patent number: 7877566
    Abstract: A read command protocol and a method of accessing a nonvolatile memory device having an internal cache memory. A memory device configured to accept a first and second read command, outputting a first requested data while simultaneously reading a second requested data. In addition, the memory device may be configured to send or receive a confirmation indicator.
    Type: Grant
    Filed: January 25, 2005
    Date of Patent: January 25, 2011
    Assignee: Atmel Corporation
    Inventor: Vijaya P. Adusumilli
  • Patent number: RE42213
    Abstract: A cache and TLB layout and design leverage repeater insertion to provide dynamic low-cost configurability trading off size and speed on a per application phase basis. A configuration management algorithm dynamically detects phase changes and reacts to an application's hit and miss intolerance in order to improve memory hierarchy performance while taking energy consumption into consideration.
    Type: Grant
    Filed: January 24, 2006
    Date of Patent: March 8, 2011
    Assignee: University of Rochester
    Inventors: Sandhya Dwarkadas, Rajeev Balasubramonian, Alper Buyuktosunoglu, David H. Albonesi