Partitioned Cache Patents (Class 711/129)
- Patent number: 8819074
  Abstract: A system includes creation of a first resource queue indicating an order of resources stored in a memory, the order based on respective timestamps associated with the stored resources, association of the first resource queue with a first queue timestamp, reception of a first command to deallocate a first amount of stored resources from the memory, determination that a first stored resource indicated by the first resource queue is associated with a timestamp earlier than the first queue timestamp, deallocation of the first stored resource from the memory, reception of a second command to deallocate a second amount of stored resources from the memory, determination that the first resource queue indicates no stored resources which are associated with a timestamp earlier than the first queue timestamp, and, in response to the determination that the first resource queue indicates no stored resources which are associated with a timestamp earlier than the first queue timestamp, creation of a second resource queue in…
  Type: Grant
  Filed: October 28, 2013
  Date of Patent: August 26, 2014
  Assignee: SAP AG
  Inventor: Ivan Schreter
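The timestamp-queue mechanism this abstract describes can be sketched as a min-heap keyed on resource timestamps. Everything below (class and method names) is an illustrative assumption, not SAP's implementation:

```python
import heapq

class ResourceQueue:
    """Orders stored resources by timestamp. A resource is eligible for
    deallocation only if its timestamp is earlier than the queue's own
    timestamp (hypothetical sketch of the patented idea)."""

    def __init__(self, queue_timestamp):
        self.queue_timestamp = queue_timestamp
        self._heap = []  # (timestamp, resource_id), oldest first

    def add(self, timestamp, resource_id):
        heapq.heappush(self._heap, (timestamp, resource_id))

    def deallocate(self):
        """Pop and return the oldest eligible resource, or None when no
        stored resource predates the queue timestamp."""
        if self._heap and self._heap[0][0] < self.queue_timestamp:
            return heapq.heappop(self._heap)[1]
        return None
```

Here `deallocate` returning `None` models the "no stored resources earlier than the queue timestamp" determination that, per the abstract, triggers creation of a second resource queue.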
- Patent number: 8782345
  Abstract: Subject matter disclosed herein relates to sub-block accessible cache memory.
  Type: Grant
  Filed: August 5, 2013
  Date of Patent: July 15, 2014
  Assignee: Micron Technology, Inc.
  Inventors: Giuseppe Ferrari, Procolo Carannante, Angelo Di Sena, Fabio Salvati, Anna Sorgente
- Patent number: 8775737
  Abstract: A method of managing memory of a computing device includes providing a first memory that can be allocated as cache memory or that can be used by a computing device component. A first memory segment can be allocated as cache memory in response to a cache miss. Cache size can be dynamically increased by allocating additional first memory segments as cache memory in response to subsequent cache misses. Cache memory size can be dynamically decreased by reallocating first memory cache segments for use by computing device components. The cache memory can be a cache for a second memory accessible to the computing device. The computing device can be a mobile device. The first memory can be an embedded memory and the second memory can comprise embedded, removable or external memory, or any combination thereof. The maximum size of the cache memory scales with the size of the first memory.
  Type: Grant
  Filed: December 2, 2010
  Date of Patent: July 8, 2014
  Assignee: Microsoft Corporation
  Inventors: Bor-Ming Hsieh, Andrew M. Rogers
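The grow-on-miss, shrink-on-demand behavior can be illustrated with a toy segment pool. The names are hypothetical; a real implementation would track physical memory segments rather than list items:

```python
class ElasticCache:
    """Grows by claiming segments from a shared first-memory pool on cache
    misses, and shrinks by returning segments for use by other device
    components (illustrative sketch of the dynamic-sizing idea)."""

    def __init__(self, pool_segments):
        self.free_pool = list(pool_segments)  # shared first memory
        self.cache_segments = []

    def on_miss(self):
        """Allocate one more segment as cache, if any remain."""
        if self.free_pool:
            self.cache_segments.append(self.free_pool.pop())

    def reclaim(self, n):
        """Return up to n cache segments to the pool for other components."""
        for _ in range(min(n, len(self.cache_segments))):
            self.free_pool.append(self.cache_segments.pop())
```

The maximum cache size naturally scales with the pool size, matching the abstract's final observation.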
- Publication number: 20140173212
  Abstract: Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving, from the second thread, the SVD information associated with the shared resource data.
  Type: Application
  Filed: December 18, 2012
  Publication date: June 19, 2014
  Applicant: International Business Machines Corporation
  Inventors: Charles J. Archer, James E. Carey, Philip J. Sanders, Brian E. Smith
- Publication number: 20140173211
  Abstract: Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities. During a partitioning operation, the partitioning mechanism is configured to partition one or more of the partitions in the cache memory into two or more sub-partitions for one or more sub-entities of a corresponding entity. A cache controller then uses a corresponding sub-partition for memory accesses by the one or more sub-entities.
  Type: Application
  Filed: December 13, 2012
  Publication date: June 19, 2014
  Applicant: Advanced Micro Devices
  Inventors: Gabriel H. Loh, Jaewoong Sim
- Publication number: 20140156936
  Abstract: Storage tracks are destaged from each rank that uses more than a predetermined percentage of the predetermined amount of storage space allocated to it, until the current amount of storage space used by each such rank equals that predetermined percentage. Storage tracks are not destaged from any rank that uses less than or equal to the predetermined percentage of the predetermined amount of storage space.
  Type: Application
  Filed: February 6, 2014
  Publication date: June 5, 2014
  Applicant: International Business Machines Corporation
  Inventors: Brent C. Beardsley, Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Sonny E. Williams
- Publication number: 20140156937
  Abstract: Each rank is monitored for write operations from at least one host, and the host is determined to be idle or not idle with respect to each respective rank based on that monitoring, such that the at least one host may be idle with respect to a first rank and not idle with respect to a second rank. Storage tracks are destaged from the write cache when the host(s) are idle, and are refrained from being destaged when at least one host is not idle.
  Type: Application
  Filed: February 6, 2014
  Publication date: June 5, 2014
  Applicant: International Business Machines Corporation
  Inventors: Brent C. Beardsley, Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Sonny E. Williams
- Patent number: 8745618
  Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
  Type: Grant
  Filed: August 25, 2009
  Date of Patent: June 3, 2014
  Assignee: International Business Machines Corporation
  Inventors: Jiang Lin, Lixin Zhang
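The way-partitioned victim selection this abstract describes can be sketched for a single cache set: on a miss, the victim is chosen only among the ways the requesting virtual machine's partition-table entry allows. This is an illustrative model, not IBM's hardware design, and the names are assumptions:

```python
import random

class PartitionedCacheSet:
    """One set of a way-partitioned shared cache. The partition table maps
    a virtual ID to the ways that VM may evict from (a sketch of the
    vertical/way-partition control described in the abstract)."""

    def __init__(self, num_ways, partition_table):
        self.lines = [None] * num_ways          # tag stored per way
        self.partition_table = partition_table  # virtual_id -> allowed ways

    def lookup(self, virtual_id, tag):
        """Return True on hit; on a miss, fill a victim way chosen only
        from this virtual machine's allotted ways."""
        if tag in self.lines:
            return True
        victim = random.choice(self.partition_table[virtual_id])
        self.lines[victim] = tag
        return False
```

Because each VM's misses can only evict its own ways, one VM cannot thrash another's working set.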
- Patent number: 8738868
  Abstract: A computing device employs a cooperative memory management technique to dynamically balance memory resources between host and guest systems running therein. According to this cooperative memory management technique, memory that is allocated to the guest system is dynamically adjusted up and down according to a fairness policy that takes into account various factors including the relative amount of readily freeable memory resources in the host and guest systems and the relative amount of memory allocated to hidden applications in the host and guest systems.
  Type: Grant
  Filed: August 23, 2011
  Date of Patent: May 27, 2014
  Assignee: VMware, Inc.
  Inventors: Harvey Tuch, Craig Newell, Cyprien Laplace
- Patent number: 8736627
  Abstract: Provided are methods and systems for reducing memory bandwidth usage in a common-buffer, multiple-FIFO computing environment. The multiple FIFOs are arranged in coordination with serial processing units, such as in a pipeline processing environment. The multiple FIFOs contain pointers to entry addresses in a common buffer. Each subsequent FIFO receives only pointers that correspond to data that has not been rejected by the corresponding processing unit. Rejected pointers are moved to a free list for reallocation to later data.
  Type: Grant
  Filed: December 19, 2006
  Date of Patent: May 27, 2014
  Assignee: Via Technologies, Inc.
  Inventor: John Brothers
- Patent number: 8739159
  Abstract: A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement.
  Type: Grant
  Filed: April 11, 2012
  Date of Patent: May 27, 2014
  Assignee: International Business Machines Corporation
  Inventors: Jiang Lin, Lixin Zhang
- Patent number: 8738859
  Abstract: Hybrid caching techniques, and garbage collection using hybrid caching techniques, are provided. A measure of a characteristic of a data object is determined, the characteristic being indicative of an access pattern associated with the data object. Based on the measure of the characteristic, one caching structure is selected from a plurality of caching structures in which to store the data object. Each individual caching structure in the plurality of caching structures stores data objects having a similar measure of the characteristic with regard to each of the other data objects in that individual caching structure. The data object is stored in the selected caching structure, and at least one processing operation is performed on the data object stored there.
  Type: Grant
  Filed: September 13, 2012
  Date of Patent: May 27, 2014
  Assignee: International Business Machines Corporation
  Inventors: Chen-Yong Cher, Michael K. Gschwind
- Patent number: 8725687
  Abstract: Described in detail herein are systems and methods for deduplicating data using byte-level or quasi byte-level techniques. In some embodiments, a file is divided into multiple blocks. A block includes multiple bytes. Multiple rolling hashes of the file are generated. For each byte in the file, a searchable data structure is accessed to determine if the data structure already includes an entry matching a hash of a minimum sequence length. If so, this indicates that the corresponding bytes are already stored. If one or more bytes in the file are already stored, then the one or more bytes in the file are replaced with a reference to the already stored bytes. The systems and methods described herein may be used for file systems, databases, storing backup data, or any other use case where it may be useful to reduce the amount of data being stored.
  Type: Grant
  Filed: April 2, 2013
  Date of Patent: May 13, 2014
  Assignee: CommVault Systems, Inc.
  Inventor: Michael F. Klose
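The rolling-hash step can be shown with a textbook Rabin-Karp roll, which updates the hash in O(1) per byte. This is an illustration of the byte-level technique in general, not CommVault's exact scheme; window size and hash parameters are assumptions:

```python
def rolling_hashes(data, window, base=257, mod=(1 << 31) - 1):
    """Yield (offset, hash) for every window-sized span of data,
    updating the hash incrementally rather than rehashing each span."""
    if len(data) < window:
        return
    h = 0
    for b in data[:window]:
        h = (h * base + b) % mod
    yield 0, h
    top = pow(base, window - 1, mod)  # weight of the outgoing byte
    for i in range(1, len(data) - window + 1):
        h = ((h - data[i - 1] * top) * base + data[i + window - 1]) % mod
        yield i, h

def dedup_offsets(data, window, seen):
    """Return offsets whose window hash was already seen, i.e. candidate
    spans to replace with a reference to already-stored bytes."""
    dups = []
    for off, h in rolling_hashes(data, window):
        if h in seen:
            dups.append(off)
        else:
            seen.add(h)
    return dups
```

A real deduplicator would verify candidate matches byte-for-byte before replacing them with references, since distinct spans can collide on the hash.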
- Patent number: 8719502
  Abstract: A method for operating a cache that includes both robust cells and standard cells may include receiving a data to be written to the cache, determining whether a type of the data is unmodified data or modified data, and writing the data to robust cells or standard cells as a function of the type of the data. A processor includes a core that includes a cache including both robust cells and standard cells for receiving data, wherein the data is written to robust cells or standard cells as a function of whether a type of the data is determined to be unmodified data or modified data.
  Type: Grant
  Filed: March 30, 2012
  Date of Patent: May 6, 2014
  Assignee: Intel Corporation
  Inventors: Christopher B. Wilkerson, Alaa R. Alameldeen, Jaydeep P. Kulkarni
- Patent number: 8695011
  Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
  Type: Grant
  Filed: April 27, 2012
  Date of Patent: April 8, 2014
  Assignee: International Business Machines Corporation
  Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas
- Publication number: 20140095777
  Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has enough time to ramp up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations.
  Type: Application
  Filed: September 28, 2012
  Publication date: April 3, 2014
  Applicant: Apple Inc.
  Inventors: Sukalpa Biswas, Shinye Shiu
- Patent number: 8688916
  Abstract: A cache memory is used effectively by eliminating data redundancy. A controller manages the cache memory by dividing it into a first area and a second area. When receiving a write access request from an access requestor, the controller divides the data block that is the access target into a plurality of chunks and searches first the first area and then the storage apparatus based on each chunk. If chunk storage information, indicating that a chunk is stored in the storage apparatus, exists in neither the first area nor the storage apparatus, the controller executes chunk storage processing and creates and stores the chunk storage information. If the chunk storage information exists, the controller skips the chunk storage processing for storing the chunks. If the chunk storage information does not exist in the first area, the controller stages the chunk storage information from the storage apparatus to the first area, on condition that the first area has an unused area.
  Type: Grant
  Filed: April 22, 2011
  Date of Patent: April 1, 2014
  Assignees: Hitachi, Ltd., Hitachi Information & Telecommunication Engineering, Ltd.
  Inventor: Naomitsu Tashiro
- Publication number: 20140082290
  Abstract: A mechanism is provided in a data processing system for enhancing wiring structure for a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path.
  Type: Application
  Filed: September 17, 2012
  Publication date: March 20, 2014
  Applicant: International Business Machines Corporation
  Inventors: Christian Habermann, Walter Lipponer, Martin Recktenwald, Hans-Werner Tast
- Patent number: 8677371
  Abstract: Functionality is implemented to determine that a plurality of multi-core processing units of a system are configured in accordance with a plurality of operating performance modes. It is determined that a first of the plurality of operating performance modes satisfies a first performance criterion that corresponds to a first workload of a first logical partition of the system. Accordingly, the first logical partition is associated with a first set of the plurality of multi-core processing units that are configured in accordance with the first operating performance mode. It is determined that a second of the plurality of operating performance modes satisfies a second performance criterion that corresponds to a second workload of a second logical partition of the system. Accordingly, the second logical partition is associated with a second set of the plurality of multi-core processing units that are configured in accordance with the second operating performance mode.
  Type: Grant
  Filed: December 31, 2009
  Date of Patent: March 18, 2014
  Assignee: International Business Machines Corporation
  Inventors: Diane G. Flemming, William A. Maron, Ram Raghavan, Satya Prakash Sharma, Mysore S. Srinivas
- Patent number: 8677084
  Abstract: A system, method and machine-readable medium are provided to configure a non-volatile memory (NVM) including a plurality of NVM modules, in a system having a hard disk drive (HDD) and an operating system (O/S). In response to a user selection of a hybrid drive mode for the NVM, the plurality of NVM modules are ranked according to speed performance. Boot portions of the O/S are copied to a highly ranked NVM module, or a plurality of highly ranked NVM modules, and the HDD and the highly ranked NVM modules are assigned as a logical hybrid drive of the computer system. Ranking each of the plurality of NVM modules can include carrying out a speed performance test. This approach can provide hybrid disk performance using conventional hardware, or enhance performance of an existing hybrid drive, while taking into account relative performance of available NVM modules.
  Type: Grant
  Filed: October 19, 2012
  Date of Patent: March 18, 2014
  Assignee: MOSAID Technologies Incorporated
  Inventor: Hong Beom Pyeon
- Patent number: 8661201
  Abstract: A system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. Also provided are physical computer storage mediums including code that, when executed by a processor, cause the processor to perform the above method.
  Type: Grant
  Filed: December 10, 2010
  Date of Patent: February 25, 2014
  Assignee: International Business Machines Corporation
  Inventors: Brent C. Beardsley, Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Sonny E. Williams
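The destage-to-minimum loop is simple enough to sketch directly. This is a toy model (ranks as lists of tracks, the minimum as a track count), not IBM's implementation:

```python
def destage_to_minimum(ranks, minimum):
    """Destage tracks from each rank until its used space reaches the
    predetermined minimum. `ranks` maps a rank name to its list of cached
    tracks; returns the (rank, track) pairs destaged, oldest first."""
    destaged = []
    for name, tracks in ranks.items():
        while len(tracks) > minimum:
            destaged.append((name, tracks.pop(0)))
    return destaged
```

Ranks already at or below the minimum are left untouched, mirroring the per-rank allocation monitoring in the abstract.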
- Patent number: 8661200
  Abstract: Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory.
  Type: Grant
  Filed: February 5, 2010
  Date of Patent: February 25, 2014
  Assignee: Nokia Corporation
  Inventors: Jari Nikara, Eero Aho, Kimmo Kuusilinna
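The address translation step can be illustrated as a bit-field split. The field widths below are arbitrary assumptions (the abstract leaves the exact split to the implementation); the point is that fixed address bits select the partition and channel, and the remainder indexes within the chosen channel:

```python
def translate(addr, partition_bits=2, channel_bits=1, line_bits=6):
    """Split a memory address into (partition, channel, remainder).
    The low `line_bits` select the byte within a line; the next fields
    pick the channel and partition; what is left, recombined with the
    line offset, is the access portion applied to that channel."""
    line_off = addr & ((1 << line_bits) - 1)
    rest = addr >> line_bits
    channel = rest & ((1 << channel_bits) - 1)
    rest >>= channel_bits
    partition = rest & ((1 << partition_bits) - 1)
    rest >>= partition_bits
    return partition, channel, (rest << line_bits) | line_off
```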
- Patent number: 8656109
  Abstract: A system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.
  Type: Grant
  Filed: December 10, 2010
  Date of Patent: February 18, 2014
  Assignee: International Business Machines Corporation
  Inventors: Brent C. Beardsley, Michael T. Benhase, Binny S. Gill, Lokesh M. Gupta, Sonny E. Williams
- Patent number: 8645620
  Abstract: An interfacing apparatus and related method are provided, configured to couple a plurality of memory devices, addressable by means of an address space, to a processing unit. In one embodiment, the apparatus comprises a first memory access unit adapted to receive a memory address from said processing unit and to access said memory devices accordingly, based on the address provided. It also comprises a second memory access unit adapted to receive content data from the processing unit and to control a search or update function for the received content data in one or more of the memory devices. An allocation unit is also provided for allocating a first part of the address space of the memory devices to said first memory access unit and a second part of the address space to the second memory access unit, each of the memory access units being assigned to corresponding memory devices of the plurality of memory devices.
  Type: Grant
  Filed: June 23, 2008
  Date of Patent: February 4, 2014
  Assignee: International Business Machines Corporation
  Inventors: Peter Buchmann, Martin Leo Schmatz, Jan Van Lunteren
- Patent number: 8639883
  Abstract: Embodiments of the invention are directed to reducing write amplification in a cache with flash memory used as a write cache. An embodiment of the invention includes partitioning at least one flash memory device in the cache into a plurality of logical partitions. Each of the plurality of logical partitions is a logical subdivision of one of the at least one flash memory device and comprises a plurality of memory pages. Data are buffered in a buffer. The data includes data to be cached, and data to be destaged from the cache to a storage subsystem. Data to be cached are written from the buffer to the at least one flash memory device. A processor coupled to the buffer is provided with access to the data written to the at least one flash memory device from the buffer, and a location of the data written to the at least one flash memory device within the plurality of logical partitions. The data written to the at least one flash memory device are destaged from the buffer to the storage subsystem.
  Type: Grant
  Filed: January 15, 2013
  Date of Patent: January 28, 2014
  Assignee: International Business Machines Corporation
  Inventors: Wendy A. Belluomini, Binny S. Gill, Michael A. Ko
- Patent number: 8635410
  Abstract: A processor interface (24) receives a flush request from a processor (700) and performs a snoop operation to determine whether the data is maintained in one of the local processors (700) and whether the data has been modified. If the data is maintained locally and has been modified, an identified local processor (700) receives the flush request from the processor interface (24) and initiates a writeback to a memory directory interface unit (22). If the data is not maintained locally or has not been modified, the processor interface (24) forwards the flush request to the memory directory interface unit (22). The memory directory interface unit (22) determines which remote processors within the system (10) have a copy of the data and forwards the flush request only to those identified processors.
  Type: Grant
  Filed: July 20, 2001
  Date of Patent: January 21, 2014
  Assignee: Silicon Graphics International, Corp.
  Inventor: Jeffrey S. Kuskin
- Patent number: 8631207
  Abstract: Methods and apparatus to provide for power consumption reduction in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way. The virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access or comparison. In another embodiment, a speculative way hit may be used based on pre-ECC partial tag match to wake up a subset of data arrays. Other embodiments are also described.
  Type: Grant
  Filed: December 26, 2009
  Date of Patent: January 14, 2014
  Assignee: Intel Corporation
  Inventors: Zhen Fang, Meenakshisundara R. Chinthamani, Li Zhao, Milind B. Kamble, Ravishankar Iyer, Seung Eun Lee, Robert S. Chappell, Ryan L. Carlson
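The partial-tag wake-up fits in a few lines: comparing short pre-ECC partial tags yields a superset of the true hit ways, so only those ways' data arrays need to be woken. This is an illustrative sketch; the partial-tag width and representation are assumptions:

```python
def speculative_way_hits(stored_partial_tags, request_partial_tag):
    """Return the ways whose stored partial tag matches the request's
    partial tag. False positives are possible (two full tags can share a
    partial tag), so this is a superset of the true hit set; false
    negatives are not, which is what makes the early wake-up safe."""
    return [way for way, pt in enumerate(stored_partial_tags)
            if pt == request_partial_tag]
```

Ways outside the returned list can stay in the low-power state for this access; the full tag compare then resolves the real hit among the woken subset.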
- Publication number: 20140006715
  Abstract: Method and apparatus to efficiently store and cache data. Cores of a processor and cache slices co-located with the cores may be grouped into a cluster. A memory space may be partitioned into address regions. The cluster may be associated with an address region from the address regions. Each memory address of the address region may be mapped to one or more of the cache slices grouped into the cluster. A cache access from one or more of the cores grouped into the cluster may be biased to the address region based on the association of the cluster with the address region.
  Type: Application
  Filed: August 13, 2012
  Publication date: January 2, 2014
  Applicant: Intel Corporation
  Inventors: Ravindra P. Saraf, Rahul Pal, Ashok Jagannathan
- Patent number: 8621154
  Abstract: A flow based reply cache of a storage system is illustratively organized into one or more microcaches, each having a plurality of reply cache entries. Each microcache is maintained by a protocol server executing on the storage system and is allocated on a per client basis. To that end, each client is identified by a client connection or logical "data flow" and is allocated its own microcache and associated entries, as needed. As a result, each microcache of the reply cache may be used to identify a logical stream of client requests associated with a data flow, as well as to isolate that client stream from other client streams and associated data flows used to deliver other requests served by the system. The use of microcaches thus provides a level of granularity that enables each client to have its own pool of reply cache entries that is not shared with other clients, thereby obviating starvation of entries allocated to the client in the reply cache.
  Type: Grant
  Filed: April 18, 2008
  Date of Patent: December 31, 2013
  Assignee: NetApp, Inc.
  Inventors: Jason L. Goldschmidt, Peter D. Shah, Thomas M. Talpey
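The per-flow isolation property is easy to model: each data flow gets its own bounded pool of reply entries, so a chatty client evicts only its own entries. A minimal sketch (names and the per-flow limit are assumptions, not NetApp's implementation):

```python
from collections import OrderedDict

class ReplyCache:
    """Flow-based reply cache: one bounded microcache per client data
    flow, so eviction in one flow cannot starve entries of another."""

    def __init__(self, entries_per_flow=4):
        self.microcaches = {}          # flow_id -> OrderedDict of replies
        self.limit = entries_per_flow

    def record(self, flow_id, request_id, reply):
        mc = self.microcaches.setdefault(flow_id, OrderedDict())
        mc[request_id] = reply
        if len(mc) > self.limit:
            mc.popitem(last=False)     # evict oldest within this flow only

    def lookup(self, flow_id, request_id):
        return self.microcaches.get(flow_id, {}).get(request_id)
```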
- Patent number: 8612716
  Abstract: An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volume, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively.
  Type: Grant
  Filed: October 26, 2007
  Date of Patent: December 17, 2013
  Assignee: Hitachi, Ltd.
  Inventors: Atushi Ishikawa, Yuko Matsui
- Publication number: 20130332676
  Abstract: In a cloud computing environment, a cache and a memory are partitioned into "colors". The colors of the cache and the memory are allocated to virtual machines independently of one another. In order to provide cache isolation while allocating the memory and cache in different proportions, some of the colors of the memory are allocated to a virtual machine, but the virtual machine is not permitted to directly access these colors. Instead, when a request is received from the virtual machine for a memory page in one of the non-accessible colors, a hypervisor swaps the requested memory page with a memory page with a color that the virtual machine is permitted to access. The virtual machine is then permitted to access the requested memory page at the new color location.
  Type: Application
  Filed: June 12, 2012
  Publication date: December 12, 2013
  Applicant: Microsoft Corporation
  Inventors: Ramakrishna R. Kotla, Venugopalan Ramasubramanian
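The color-swap idea can be modeled with a few dictionary operations: a VM owns pages in colors it cannot directly touch, and on access the hypervisor moves the page into a frame of an accessible color. All names here are hypothetical; real page coloring works on physical frame address bits:

```python
class ColorAwareHypervisor:
    """Sketch of the swap mechanism: pages in non-accessible colors are
    relocated to an accessible-color frame on first access."""

    def __init__(self, accessible_colors):
        self.accessible = set(accessible_colors)
        self.page_color = {}           # page -> color of its current frame

    def place(self, page, color):
        self.page_color[page] = color

    def access(self, page, free_frame_color):
        """Return the color at which the VM finally reads the page,
        swapping it into `free_frame_color` if needed."""
        color = self.page_color[page]
        if color in self.accessible:
            return color
        assert free_frame_color in self.accessible
        self.page_color[page] = free_frame_color  # swap into accessible frame
        return free_frame_color
```

Because cache sets are selected by the same address bits that define the color, keeping a VM's accesses inside its accessible colors is what yields the cache isolation the abstract describes.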
- Patent number: 8606999
  Abstract: A method and apparatus for partitioning a cache includes determining an allocation of a subcache out of a plurality of subcaches within the cache for association with a compute unit out of a plurality of compute units. Data is processed by the compute unit, and the compute unit evicts a line. The evicted line is written to the subcache associated with the compute unit.
  Type: Grant
  Filed: August 30, 2010
  Date of Patent: December 10, 2013
  Assignee: Advanced Micro Devices, Inc.
  Inventors: Greggory D. Donley, William Alexander Hughes, Narsing K. Vijayrao
- Patent number: 8607000
  Abstract: This invention is a data processing system having a multi-level cache system. The multi-level cache system includes at least a first level cache and a second level cache. Upon a cache miss in both the at least one first level cache and the second level cache, the data processing system evicts and allocates a cache line within the second level cache. The data processing system determines from the miss address whether the request falls within the low half or the high half of the allocated cache line. The data processing system first requests from external memory the data of the half cache line containing the miss. Upon receipt, the data is supplied to the at least one first level cache and the CPU. The data processing system then requests from external memory the data for the other half of the second level cache line.
  Type: Grant
  Filed: September 23, 2011
  Date of Patent: December 10, 2013
  Assignee: Texas Instruments Incorporated
  Inventors: Abhijeet Ashok Chachad, Roger Kyle Castille, Joseph Raymond Michael Zbiciak, Dheera Balasubramanian
- Patent number: 8601215
  Abstract: A processor according to an exemplary embodiment of the invention includes a first initialization unit, which reads a first program for checking the reliability of the processor into a cache memory and executes the first program when the processor is started up, and a second initialization unit, which reads a second program for checking the reliability of the cache memory into a predetermined memory area and executes the second program when the second initialization unit receives, from another processor existing in the partition to which the processor is added, a notification indicating completion of the establishment of a communication path between the predetermined memory area and the processor.
  Type: Grant
  Filed: February 2, 2010
  Date of Patent: December 3, 2013
  Assignee: NEC Corporation
  Inventor: Daisuke Ageishi
-
Patent number: 8595425
Abstract: One embodiment of the present invention sets forth a technique for providing an L1 cache that is a central storage resource. The L1 cache services multiple clients with diverse latency and bandwidth requirements. The L1 cache may be reconfigured to create multiple storage spaces, enabling the L1 cache to replace dedicated buffers, caches, and FIFOs in previous architectures. A "direct mapped" storage region that is configured within the L1 cache may replace dedicated buffers, FIFOs, and interface paths, allowing clients of the L1 cache to exchange attribute and primitive data. The direct mapped storage region may be used as a global register file. A "local and global cache" storage region configured within the L1 cache may be used to support load/store memory requests to multiple spaces. These spaces include global, local, and call-return stack (CRS) memory.
Type: Grant
Filed: September 25, 2009
Date of Patent: November 26, 2013
Assignee: NVIDIA Corporation
Inventors: Alexander L. Minkin, Steven James Heinrich, Rajeshwaran Selvanesan, Brett W. Coon, Charles McCarver, Anjana Rajendran, Stewart G. Carlton
-
Patent number: 8589627
Abstract: The present invention provides embodiments of a partially sectored cache. One embodiment of the apparatus includes a cache that includes a tag array for storing information indicating a plurality of tags and a data array for storing a plurality of lines. A first portion of the tags have a one-to-one association with a first portion of the lines and a second portion of the tags have a one-to-many association with a second portion of the lines.
Type: Grant
Filed: August 27, 2010
Date of Patent: November 19, 2013
Assignee: Advanced Micro Devices, Inc.
Inventor: Tarun Nakra
-
Patent number: 8589629
Abstract: A system and method for data allocation in a shared cache memory of a computing system are contemplated. Each cache way of a shared set-associative cache is accessible to multiple sources, such as one or more processor cores, a graphics processing unit (GPU), an input/output (I/O) device, or multiple different software threads. A shared cache controller enables or disables access separately to each of the cache ways based upon the corresponding source of a received memory request. One or more configuration and status registers (CSRs) store encoded values used to alter accessibility to each of the shared cache ways. The control of the accessibility of the shared cache ways via altering stored values in the CSRs may be used to create a pseudo-RAM structure within the shared cache and to progressively reduce the size of the shared cache during a power-down sequence while the shared cache continues operation.
Type: Grant
Filed: March 27, 2009
Date of Patent: November 19, 2013
Assignee: Advanced Micro Devices, Inc.
Inventors: Jonathan Owen, Guhan Krishnan, Carl D. Dietz, Douglas Richard Beard, William K. Lewchuk, Alexander Branover
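The per-source way enabling above can be modeled in software as a small sketch; the source names and way count are assumptions, and a real implementation lives in hardware CSRs rather than a Python dict:

```python
# Illustrative model of CSR-controlled way accessibility: each source gets a
# bitmask of the shared cache ways it is permitted to allocate into.

NUM_WAYS = 8

class SharedCacheControl:
    def __init__(self):
        # CSR-like table: source id -> bitmask of enabled ways.
        self.way_mask = {}

    def set_mask(self, source, mask):
        self.way_mask[source] = mask & ((1 << NUM_WAYS) - 1)

    def allowed_ways(self, source):
        mask = self.way_mask.get(source, 0)
        return [w for w in range(NUM_WAYS) if mask & (1 << w)]

ctrl = SharedCacheControl()
ctrl.set_mask("cpu0", 0b00001111)   # CPU core may use ways 0-3
ctrl.set_mask("gpu",  0b11110000)   # GPU confined to ways 4-7
```

Progressively clearing mask bits models the power-down shrink the abstract mentions: each cleared bit removes one way from allocation while the rest keep operating.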
-
Publication number: 20130304994
Abstract: Systems and methods for allocation of cache lines in a shared partitioned cache of a multi-threaded processor. A memory management unit is configured to determine attributes associated with an address for a cache entry associated with a processing thread to be allocated in the cache. A configuration register is configured to store cache allocation information based on the determined attributes. A partitioning register is configured to store partitioning information for partitioning the cache into two or more portions. The cache entry is allocated into one of the portions of the cache based on the configuration register and the partitioning register.
Type: Application
Filed: May 8, 2012
Publication date: November 14, 2013
Applicant: QUALCOMM INCORPORATED
Inventors: Christopher Edward Koob, Ajay Anant Ingle, Lucian Codrescu, Suresh K. Venkumahanti
-
Patent number: 8572130
Abstract: A system includes creation of a first resource queue indicating an order of resources stored in a memory, the order based on respective timestamps associated with the stored resources, association of the first resource queue with a first queue timestamp, reception of a first command to deallocate a first amount of stored resources from the memory, determination that a first stored resource indicated by the first resource queue is associated with a timestamp earlier than the first queue timestamp, deallocation of the first stored resource from the memory, reception of a second command to deallocate a second amount of stored resources from the memory, determination that the first resource queue indicates no stored resources which are associated with a timestamp earlier than the first queue timestamp, and, in response to the determination that the first resource queue indicates no stored resources which are associated with a timestamp earlier than the first queue timestamp, creation of a second resource queue in
Type: Grant
Filed: June 27, 2011
Date of Patent: October 29, 2013
Assignee: SAP AG
Inventor: Ivan Schreter
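The queue-timestamp mechanism above can be sketched as a simplified model: resources with timestamps earlier than the queue's own timestamp are safe to deallocate, and once none remain, a fresh queue is created. The class and method names are illustrative:

```python
# Rough sketch (assumed simplification, not the patented system): a resource
# queue ordered by timestamp, with a queue timestamp bounding what may be
# deallocated from it.

from collections import deque

class ResourceQueue:
    def __init__(self, resources, queue_ts):
        # resources: iterable of (timestamp, resource); kept oldest-first.
        self.q = deque(sorted(resources))
        self.queue_ts = queue_ts

    def deallocate_one(self):
        """Pop the oldest resource if it predates the queue timestamp,
        else return None (signalling that a second queue is needed)."""
        if self.q and self.q[0][0] < self.queue_ts:
            return self.q.popleft()[1]
        return None

q = ResourceQueue([(5, "a"), (1, "b"), (9, "c")], queue_ts=7)
```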
-
Patent number: 8571042
Abstract: A reception apparatus for optimizing a virtual private network operates by defragmenting and deduplicating transfer of variable sized blocks. A large data object is converted to a plurality of data paragraphs by a fingerprinting method. Each data paragraph is cached and hashed. The hashes are transmitted from a primary apparatus. Only data paragraphs which are not previously cached at the satellite are received. The data object is integrated from stored and newly transmitted data paragraphs and transmitted to its destination IP address.
Type: Grant
Filed: June 23, 2011
Date of Patent: October 29, 2013
Assignee: Barracuda Networks, Inc.
Inventors: Subrahmanyam Ongole, Sridhar Srinivasan
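The deduplication step above can be sketched as follows. Fixed-size splitting here stands in for the fingerprinting method, which the abstract does not detail, and the function name is illustrative:

```python
# Hedged sketch: a data object is split into paragraphs, each hashed; only
# paragraphs whose hashes are not already cached need to be transmitted.

import hashlib

def paragraphs_to_send(data, cache, size=4):
    """Return the list of uncached paragraphs; record their hashes."""
    missing = []
    for i in range(0, len(data), size):
        para = data[i:i + size]
        h = hashlib.sha256(para).hexdigest()
        if h not in cache:
            cache.add(h)
            missing.append(para)
    return missing

cache = set()
first = paragraphs_to_send(b"aaaabbbbaaaa", cache)  # repeated b"aaaa" sent once
```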
-
Patent number: 8572325
Abstract: Embodiments of the invention are directed to optimizing the performance of a split disk cache. In one embodiment, a disk cache includes a primary region having a read portion and a write portion, and one or more smaller sample regions, each also including a read portion and a write portion. The primary region and the one or more sample regions each have an independently adjustable ratio of read portion to write portion. Cached reads are distributed among the read portions of the primary and sample regions, while cached writes are distributed among the write portions of the primary and sample regions. The performance of the primary region and the performance of the sample regions are tracked, such as by obtaining a hit rate for each region during a predefined interval. The read/write ratio of the primary region is then selectively adjusted according to the performance of the one or more sample regions.
Type: Grant
Filed: December 7, 2010
Date of Patent: October 29, 2013
Assignee: International Business Machines Corporation
Inventors: Ganesh Balakrishnan, Gordon B. Bell, Timothy H. Heil, MVV Anil Krishna, Brian M. Rogers
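The sample-region idea above can be sketched as a feedback loop: each region tracks a hit rate, and the primary read/write split is nudged toward the best-performing sample. The step size and adjustment policy below are assumptions, not the patented algorithm:

```python
# Hedged sketch: nudge the primary region's read/write ratio toward the
# sample region with the best observed hit rate over the last interval.

def adjust_primary_ratio(primary_ratio, samples, step=0.05):
    """samples: list of (read_write_ratio, hit_rate) from sample regions."""
    best_ratio, _best_hits = max(samples, key=lambda s: s[1])
    if best_ratio > primary_ratio:
        return primary_ratio + step   # shift toward more read cache
    if best_ratio < primary_ratio:
        return primary_ratio - step   # shift toward more write cache
    return primary_ratio

# A sample with 60% read cache hit better than one with 40%, so shift up.
new_ratio = adjust_primary_ratio(0.50, [(0.60, 0.91), (0.40, 0.85)])
```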
-
Patent number: 8566525
Abstract: A technique for limiting an amount of write data stored in a cache memory includes determining a usable region of a non-volatile storage (NVS), determining an amount of write data in a current write request for the cache memory, and determining a failure boundary associated with the current write request. A count of the write data associated with the failure boundary is maintained. The current write request for the cache memory is rejected when a sum of the count of the write data associated with the failure boundary and the write data in the current write request exceeds a determined percentage of the usable region of the NVS.
Type: Grant
Filed: February 17, 2012
Date of Patent: October 22, 2013
Assignee: International Business Machines Corporation
Inventors: Kevin J. Ash, Richard A. Ripberger
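The admission check above reduces to a single comparison; a minimal sketch, with the 25% threshold chosen purely for illustration:

```python
# Hedged sketch of the NVS write limit: a write request is rejected when the
# failure boundary's outstanding write-data count plus the new request would
# exceed a set percentage of the usable NVS region. pct is an assumed value.

def accept_write(boundary_count, request_bytes, usable_nvs, pct=0.25):
    """Return True if the current write request may be cached."""
    return boundary_count + request_bytes <= pct * usable_nvs
```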
-
Patent number: 8566526
Abstract: In one embodiment, a memory that is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
Type: Grant
Filed: July 10, 2012
Date of Patent: October 22, 2013
Assignee: Apple Inc.
Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
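The decoder masking described above can be sketched with a modulo standing in for the bit mask (equivalent when the transparent size is a power of two); the sizes and function name are assumptions:

```python
# Hedged sketch: for requests whose attribute says "transparent", index bits
# are masked so the selected location always falls inside the (programmable)
# transparent portion; non-transparent requests address the memory directly.

def decode(index, transparent, transparent_size, total_size):
    """Map a request index to a memory location."""
    assert 0 < transparent_size <= total_size
    if transparent:
        return index % transparent_size  # confined to transparent portion
    return index % total_size            # non-transparent: direct access

loc = decode(index=300, transparent=True, transparent_size=256, total_size=1024)
```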
-
Patent number: 8560782
Abstract: In a data processing system having a plurality of resources and plurality of partitions, each partition including one or more resources of the plurality of resources, a method includes receiving an access request to a target resource of the plurality of resources; using a first set of transaction attributes of the access request to determine a partition identifier for the access request in which the partition identifier indicates a partition of the plurality of partitions which includes the target resource; using the partition identifier to determine access permissions for the partition indicated by the partition identifier; and based on the access permissions, determining whether or not the access request is permitted.
Type: Grant
Filed: September 21, 2009
Date of Patent: October 15, 2013
Assignee: Freescale Semiconductor, Inc.
Inventors: Bryan D. Marietta, David B. Kramer, Gregory B. Shippen
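The two-step lookup above (attributes → partition identifier → permissions) can be modeled as a small sketch; the resource names and permission encoding are assumptions for illustration:

```python
# Illustrative model: derive a partition id for the target resource, then
# check the operation against that partition's access permissions.

RESOURCE_PARTITION = {"uart0": 1, "ddr0": 2}        # resource -> partition id
PARTITION_PERMS = {1: {"read"}, 2: {"read", "write"}}

def access_permitted(target, operation):
    """Return True if the access request to the target is permitted."""
    part = RESOURCE_PARTITION.get(target)
    if part is None:
        return False                     # unknown resource: deny
    return operation in PARTITION_PERMS.get(part, set())
```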
-
Patent number: 8543776
Abstract: In one embodiment, the present invention includes a semiconductor die such as a system on a chip (SoC) that includes a logic analyzer with a built-in trace buffer to store information communicated between on-die agents at speed and to provide the information to an off-die agent at a slower speed. Other embodiments are described and claimed.
Type: Grant
Filed: October 31, 2012
Date of Patent: September 24, 2013
Assignee: Intel Corporation
Inventors: Tina C. Zhong, Jason G. Sandri, Kenneth P. Griesser, Lori R. Borger
-
Publication number: 20130246710
Abstract: A storage system is provided with a plurality of physical storage devices, a cache memory, a control device that is coupled to the plurality of physical storage devices and the cache memory, and a buffer part. The buffer part is a storage region that is formed by using at least a part of a storage region of the plurality of physical storage devices and that is configured to temporarily store at least one target data element that is to be transmitted to a predetermined target. The control device stores a target data element into a cache region that has been allocated to a buffer region (that is a part of the cache memory and that is a storage region of a write destination of the target data element for the buffer part). The control device transmits the target data element from the cache memory.
Type: Application
Filed: March 15, 2012
Publication date: September 19, 2013
Inventor: Akira Deguchi
-
Patent number: 8539150
Abstract: An embodiment of this invention divides a cache memory of a storage system into a plurality of partitions, and one or more of the partitions holds data other than user data, including control information. The storage system dynamically swaps data between an LU storing control information and a cache partition. Through this configuration, in a storage system with an upper limit on cache memory capacity, a large amount of control information can be used while access performance to the control information is maintained.
Type: Grant
Filed: December 27, 2010
Date of Patent: September 17, 2013
Assignee: Hitachi, Ltd.
Inventors: Akihiko Araki, Yusuke Nonaka
-
Patent number: 8521960
Abstract: A method, information processing device, and computer program product mitigate busy time in a hierarchical store-through memory cache structure. In one embodiment, a cache directory associated with a memory cache is divided into a plurality of portions, each associated with a portion of the memory cache. Simultaneous cache lookup operations and cache write operations between the plurality of portions of the cache directory are supported. Two or more store commands are simultaneously processed in a shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
Type: Grant
Filed: June 23, 2010
Date of Patent: August 27, 2013
Assignee: International Business Machines Corporation
Inventors: Deanna P. Berger, Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diana L. Orf, Robert J. Sonnelitter, III
-
Patent number: 8516194
Abstract: Apparatus and methods for caching data are disclosed. Data is stored in a non-sub-block accessible nonvolatile memory, such as a NAND flash. A portion of the stored data is cached in a cache implemented using phase change memory using a sub-block accessible address.
Type: Grant
Filed: November 22, 2010
Date of Patent: August 20, 2013
Assignee: Micron Technology, Inc.
Inventors: Giuseppe Ferrari, Procolo Carannante, Angelo Di Sena, Fabio Salvati, Anna Sorgente
-
Patent number: 8510491
Abstract: A method and apparatus for efficient interrupt event notification for a scalable input/output device in a network system. A network interface unit is operably connected to a plurality of processing entities and associated memory units. At least one status register in the network interface unit contains information relating to a process to be performed by at least one processing entity communicated to the processing entity by an interrupt event notification. Shared memory space comprises a mailbox storage register operable to store an image of the interrupt information stored in the status register of the network interface unit. A processing entity can directly access the process information stored in the mailbox status register, thereby reducing system latency associated with reading information in the status register. Updated process status information in the network interface status register may be read by the processing entity on an interleaved basis while executing a process.
Type: Grant
Filed: April 5, 2005
Date of Patent: August 13, 2013
Assignee: Oracle America, Inc.
Inventors: Ariel Hendel, Yatin Gajjar, May Lin, Rahoul Pun, Michael Wong