Associative Patents (Class 711/128)
-
Publication number: 20140173210
Abstract: A data processing device is provided that facilitates cache coherence policies. In one embodiment, a data processing device utilizes invalidation tags in connection with a cache that is associated with a processing engine. In some embodiments, the cache is configured to store a plurality of cache entries where each cache entry includes a cache line configured to store data and a corresponding cache tag configured to store address information associated with data stored in the cache line. Such address information includes invalidation flags with respect to addresses stored in the cache tags. Each cache tag is associated with an invalidation tag configured to store information related to invalidation commands of addresses stored in the cache tag. In such an embodiment, the cache is configured to set invalidation flags of cache tags based upon information stored in respective invalidation tags.
Type: Application
Filed: December 19, 2012
Publication date: June 19, 2014
Applicant: ADVANCED MICRO DEVICES, INC.
Inventors: James O'Connor, Bradford M. Beckmann
-
Patent number: 8756375
Abstract: Apparatuses, systems, and methods are disclosed for caching data. A method includes directly mapping a logical address of a backing store to a logical address of a non-volatile cache. A method includes mapping, in a logical-to-physical mapping structure, the logical address of the non-volatile cache to a physical location in the non-volatile cache. The physical location may store data associated with the logical address of the backing store. A method includes removing the mapping from the logical-to-physical mapping structure in response to evicting the data from the non-volatile cache so that membership in the logical-to-physical mapping structure denotes storage in the non-volatile cache.
Type: Grant
Filed: June 29, 2013
Date of Patent: June 17, 2014
Assignee: Fusion-io, Inc.
Inventor: David Flynn
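The "membership denotes storage" idea above can be sketched in a few lines: evicting simply removes the map entry, so no separate valid bit is needed. This is our own toy model under assumed semantics, not the patented implementation; all names (`DirectMappedNVCache`, `l2p`, the log-structured physical allocator) are invented for illustration.

```python
# Toy sketch: backing-store LBA maps directly to a cache LBA; a separate
# logical-to-physical table maps cache LBAs to physical locations.
# Presence in the table alone denotes residency in the cache.

class DirectMappedNVCache:
    def __init__(self, num_cache_blocks):
        self.num_cache_blocks = num_cache_blocks
        self.l2p = {}        # cache LBA -> physical location
        self.media = {}      # physical location -> data (stand-in for flash)
        self.next_phys = 0   # log-structured append point (assumption)

    def _cache_lba(self, backing_lba):
        # Direct mapping: backing address modulo cache size.
        return backing_lba % self.num_cache_blocks

    def insert(self, backing_lba, data):
        phys = self.next_phys
        self.next_phys += 1
        self.media[phys] = data
        self.l2p[self._cache_lba(backing_lba)] = phys

    def lookup(self, backing_lba):
        phys = self.l2p.get(self._cache_lba(backing_lba))
        return None if phys is None else self.media[phys]

    def evict(self, backing_lba):
        # Removing the mapping *is* the eviction; no flag to clear.
        phys = self.l2p.pop(self._cache_lba(backing_lba), None)
        if phys is not None:
            self.media.pop(phys, None)
```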
-
Publication number: 20140156935
Abstract: In one embodiment, a processor includes at least one execution unit, a near memory, and memory management logic. The memory management logic may be to manage the near memory and a far memory as a unified exclusive memory, where the far memory is external to the processor. Other embodiments are described and claimed.
Type: Application
Filed: November 30, 2012
Publication date: June 5, 2014
Inventors: Shlomo Raikin, Zvika Greenfield
-
Publication number: 20140149671
Abstract: A persistent cacheable high volume manufacturing (HVM) initialization code is generally presented. In this regard, an apparatus is introduced comprising a processing unit, a unified cache, a unified cache controller, and a control register to selectively mask off access by the unified cache controller to portions of the unified cache. Other embodiments are also described and claimed.
Type: Application
Filed: February 3, 2014
Publication date: May 29, 2014
Inventors: Timothy J. Callahan, Snigdha Jana, Nandan A. Kulkarni
-
Publication number: 20140149651
Abstract: In an embodiment, a processor includes a decode logic to receive and decode a first memory access instruction to store data in a cache memory with a replacement state indicator of a first level, and to send the decoded first memory access instruction to a control logic. In turn, the control logic is to store the data in a first way of a first set of the cache memory and to store the replacement state indicator of the first level in a metadata field of the first way responsive to the decoded first memory access instruction. Other embodiments are described and claimed.
Type: Application
Filed: November 27, 2012
Publication date: May 29, 2014
Inventors: Andrew T. Forsyth, Ramacharan Sundararaman, Eric Sprangle, John C. Mejia, Douglas M. Carmean, Mark C. Davis, Edward T. Grochowski, Robert D. Cavin
-
Publication number: 20140149670
Abstract: There is provided a storage system capable of maintaining a snapshot family comprising a plurality of members having hierarchical relations therebetween, and a method of operating the same.
Type: Application
Filed: September 25, 2013
Publication date: May 29, 2014
Applicant: Infinidat Ltd.
Inventors: Josef Ezra, Yechiel Yochai, Ido Ben-Tsion, Efraim Zeidner
-
Patent number: 8736627
Abstract: Provided are methods and systems for reducing memory bandwidth usage in a common buffer, multiple-FIFO computing environment. The multiple FIFOs are arranged in coordination with serial processing units, such as in a pipeline processing environment. The multiple FIFOs contain pointers to entry addresses in a common buffer. Each subsequent FIFO receives only pointers that correspond to data that has not been rejected by the corresponding processing unit. Rejected pointers are moved to a free list for reallocation to later data.
Type: Grant
Filed: December 19, 2006
Date of Patent: May 27, 2014
Assignee: Via Technologies, Inc.
Inventor: John Brothers
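The pointer-passing structure described above can be modeled compactly: data lives once in a shared buffer, FIFOs carry only indices, and a rejected index returns to the free list. This is a hedged sketch with invented names (`CommonBufferPipeline`, `advance`), not the patented design.

```python
from collections import deque

# Illustrative model: per-stage FIFOs hold pointers (indices) into a
# shared buffer; a stage forwards only accepted pointers, and rejected
# or fully consumed pointers are recycled via a free list.

class CommonBufferPipeline:
    def __init__(self, size, num_stages):
        self.buffer = [None] * size
        self.free = deque(range(size))            # free list of entry indices
        self.fifos = [deque() for _ in range(num_stages)]

    def enqueue(self, data):
        idx = self.free.popleft()                 # allocate one buffer entry
        self.buffer[idx] = data
        self.fifos[0].append(idx)

    def advance(self, stage, accept):
        # Pop one pointer from `stage`; pass it on only if accepted.
        idx = self.fifos[stage].popleft()
        last = stage == len(self.fifos) - 1
        if accept(self.buffer[idx]) and not last:
            self.fifos[stage + 1].append(idx)     # accepted: pointer moves on
        else:
            self.free.append(idx)                 # rejected/consumed: recycle
```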
-
Publication number: 20140143495
Abstract: A method of partitioning a data cache comprising a plurality of sets, the plurality of sets comprising a plurality of ways, is provided. Responsive to a stack data request, the method stores a cache line associated with the stack data in one of a plurality of designated ways of the data cache, wherein the plurality of designated ways is configured to store all requested stack data.
Type: Application
Filed: July 19, 2013
Publication date: May 22, 2014
Applicant: Advanced Micro Devices, Inc.
Inventors: Lena E. Olson, Yasuko Eckert, Vilas K. Sridharan, James M. O'Connor, Mark D. Hill, Srilatha Manne
-
Publication number: 20140136787
Abstract: Systems, methods, and other embodiments associated with speculating whether a read request will cause an access to a memory are described. In one embodiment, a method includes detecting, in a memory, a read request from a processor and speculating whether the read request will cause an access to a memory bank in the memory based, at least in part, on an address identified by the read request. The method selectively enables power to the memory bank in the memory based, at least in part, on speculating whether the read request will cause an access to the memory bank.
Type: Application
Filed: November 9, 2012
Publication date: May 15, 2014
Applicant: ORACLE INTERNATIONAL CORPORATION
Inventors: Hoyeol CHO, Ioannis ORGINOS, Jinho KWACK
-
Patent number: 8719509
Abstract: In an embodiment, a cache stores tags for cache blocks stored in the cache. Each tag may include an indication identifying which of two or more replacement policies supported by the cache is in use for the corresponding cache block, and a replacement record indicating the status of the corresponding cache block in the replacement policy. Requests may include a replacement attribute that identifies the desired replacement policy for the cache block accessed by the request. If the request is a miss in the cache, a cache block storage location may be allocated to store the corresponding cache block. The tag associated with the cache block storage location may be updated to include the indication of the desired replacement policy, and the cache may manage the block in accordance with the policy. For example, in an embodiment, the cache may support both an LRR and an LRU policy.
Type: Grant
Filed: January 31, 2013
Date of Patent: May 6, 2014
Assignee: Apple Inc.
Inventors: James Wang, Zongjian Chen, James B. Keller, Timothy J. Millet
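The per-block policy tag can be illustrated with a single-set toy model. All names here are ours, and we assume for illustration only that "LRR" means least-recently-replaced (victims aged by fill time) while LRU ages blocks by last access; the abstract itself does not define LRR.

```python
# Toy, single-set model of per-block replacement-policy tags: each
# resident block records the policy that governs it, and victim
# selection consults that stored policy.

class PerBlockPolicySet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.blocks = {}   # way -> {"tag", "policy", "filled", "touched"}
        self.clock = 0

    def access(self, tag, policy):
        self.clock += 1
        for way, b in self.blocks.items():
            if b["tag"] == tag:
                b["touched"] = self.clock          # hit: refresh LRU age
                return ("hit", way)
        way = self._victim()
        self.blocks[way] = {"tag": tag, "policy": policy,
                            "filled": self.clock, "touched": self.clock}
        return ("miss", way)

    def _victim(self):
        if len(self.blocks) < self.num_ways:
            return len(self.blocks)                # fill a free way first
        def age(item):
            b = item[1]                            # per-block policy decides
            return b["filled"] if b["policy"] == "LRR" else b["touched"]
        return min(self.blocks.items(), key=age)[0]
```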
-
Patent number: 8719500
Abstract: A technique to track shared information in a multi-core processor or multi-processor system. In one embodiment, core identification information ("core IDs") are used to track shared information among multiple cores in a multi-core processor or multiple processors in a multi-processor system.
Type: Grant
Filed: December 7, 2009
Date of Patent: May 6, 2014
Assignee: Intel Corporation
Inventors: Yen-Kuang Chen, Christopher J. Hughes, Changkyn Kim
-
Patent number: 8700855
Abstract: A computer-implemented method and system can support a tiered cache, which includes a first cache and a second cache. The first cache operates to receive a request to at least one of update and query the tiered cache; and the second cache operates to perform at least one of an updating operation and a querying operation with respect to the request via at least one of a forward strategy and a listening scheme.
Type: Grant
Filed: November 26, 2012
Date of Patent: April 15, 2014
Assignee: Oracle International Corporation
Inventor: Naresh Revanuru
-
Publication number: 20140095794
Abstract: A processor is described having cache circuitry and logic circuitry. The logic circuitry is to manage the entry and removal of cache lines from the cache circuitry. The logic circuitry includes storage circuitry and control circuitry. The storage circuitry is to store information identifying a set of cache lines within the cache that are in a modified state. The control circuitry is coupled to the storage circuitry to receive the information from the storage circuitry, responsive to a signal to flush the cache, and determine addresses of the cache therefrom so that the set of cache lines are read from the cache so as to avoid reading cache lines from the cache that are in an invalid or a clean state.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Inventors: Jaideep MOSES, Ravishankar IYER, Ramesh G. ILLIKKAL, Sadagopan SRINIVASAN
-
Publication number: 20140095777
Abstract: Methods and apparatuses for reducing leakage power in a system cache within a memory controller. The system cache is divided into multiple small sections, and each section is supplied with power from a separately controllable power supply. When a section is not being accessed, the voltage supplied to the section is reduced to a voltage sufficient for retention of data but not for access. Incoming requests are grouped together based on which section of the system cache they target. When enough requests that target a given section have accumulated, the voltage supplied to the given section is increased to a voltage sufficient for access. Then, once the given section has enough time to ramp-up and stabilize at the higher voltage, the waiting requests may access the given section in a burst of operations.
Type: Application
Filed: September 28, 2012
Publication date: April 3, 2014
Applicant: APPLE INC.
Inventors: Sukalpa Biswas, Shinye Shiu
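The request-batching policy above is easy to model: requests queue per section at retention voltage, and once a threshold accumulates the section is marked ramped and the batch drains in one burst. The class, the threshold parameter, and the address-to-section hash are our own illustrative choices.

```python
# Illustrative batching model: a section stays at retention voltage
# until enough requests for it accumulate; then it is raised to access
# voltage and the waiting requests drain together.

class SectionedCacheQueue:
    def __init__(self, num_sections, batch_threshold):
        self.num_sections = num_sections
        self.threshold = batch_threshold
        self.queues = [[] for _ in range(num_sections)]
        self.at_access_voltage = set()

    def submit(self, addr):
        sec = addr % self.num_sections       # which section the request targets
        self.queues[sec].append(addr)
        if len(self.queues[sec]) >= self.threshold:
            self.at_access_voltage.add(sec)  # ramp this section to access voltage
            batch, self.queues[sec] = self.queues[sec], []
            return batch                     # drain the waiting requests in a burst
        return []
```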
-
Publication number: 20140095795
Abstract: A computer program product for reducing penalties for cache accessing operations is provided. The computer program product includes a tangible storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method. The method includes respectively associating platform registers with cache arrays, loading control information and data of a store operation to be executed with respect to one or more of the cache arrays into the one or more of the platform registers respectively associated with the one or more of the cache arrays, and, based on the one or more of the cache arrays becoming available, committing the data from the one or more of the platform registers using the control information from the same platform registers to the one or more of the cache arrays.
Type: Application
Filed: December 3, 2013
Publication date: April 3, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael F. Fee, Christine C. Jones, Arthur J. O'Neill, Diane L. Orf
-
Publication number: 20140089590
Abstract: Methods and apparatuses for reducing power consumption of a system cache within a memory controller. The system cache includes multiple ways, and individual ways are powered down when cache activity is low. A maximum active way configuration register is set by software and determines the maximum number of ways which are permitted to be active. When searching for a cache line replacement candidate, a linear feedback shift register (LFSR) is used to select from the active ways. This ensures that each active way has an equal chance of getting picked for finding a replacement candidate when one or more of the ways are inactive.
Type: Application
Filed: September 27, 2012
Publication date: March 27, 2014
Applicant: APPLE INC.
Inventors: Sukalpa Biswas, Shinye Shiu, Rong Zhang Hu
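An LFSR-based way picker can be sketched directly. This is our own construction (a standard 16-bit Galois LFSR reduced modulo the active-way count), not the patented circuit; the seed, tap mask, and class name are illustrative assumptions.

```python
# Sketch: a 16-bit maximal-length Galois LFSR supplies pseudo-random
# values; the replacement candidate is drawn only from the ways that
# are currently active, giving each active way an equal chance.

class LFSRWayPicker:
    def __init__(self, seed=0xACE1):
        self.state = seed

    def _step(self):
        lsb = self.state & 1
        self.state >>= 1
        if lsb:
            self.state ^= 0xB400      # tap mask for a maximal 16-bit LFSR
        return self.state

    def pick(self, num_active_ways):
        # Modulo bias is negligible for small way counts (illustrative only).
        return self._step() % num_active_ways
```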
-
Patent number: 8683465
Abstract: A cache image including only cache entries with valid durations of at least a configured deployment date for a virtual machine image is prepared via an application server for the virtual machine image. The virtual machine image is deployed to at least one other application server as a virtual machine with the cache image including only the cache entries with the valid durations of at least the configured deployment date for the virtual machine image.
Type: Grant
Filed: December 18, 2009
Date of Patent: March 25, 2014
Assignee: International Business Machines Corporation
Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
-
Publication number: 20140082289
Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the specific corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
Type: Application
Filed: November 21, 2013
Publication date: March 20, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blake, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Pak-Kin Mak, Vesselina K. Papazova
-
Publication number: 20140068193
Abstract: An address range of an L2 cache is divided into sets of a predetermined number of ways. A RAM-BIST pattern generating unit generates a memory address corresponding to a way, a test pattern, and an expected value with respect to the test pattern. The L2 cache and an XOR circuit write the test pattern to a memory address in accordance with the test pattern, read data from the memory address to which the test pattern is written, and compares the read data with the expected value. A decode unit generates a selection signal for each way of the L2 cache by using a memory address. A determination latch stores, by using a selection signal and in a way corresponding to each memory address, a comparison result with respect to the memory address, a scan-out being performed on the comparison result stored in each of the ways in a predetermined order.
Type: Application
Filed: July 24, 2013
Publication date: March 6, 2014
Applicants: FUJITSU SEMICONDUCTOR LIMITED, FUJITSU LIMITED
Inventors: Hitoshi YAMANAKA, Kenichi GOMI
-
Publication number: 20140052922
Abstract: A data processing system having a first processor, a second processor, a local memory of the second processor, and a built-in self-test (BIST) controller of the second processor which can be randomly enabled to perform memory accesses on the local memory of the second processor and which includes a random value generator is provided. The system can perform a method including executing a secure code sequence by the first processor and performing, by the BIST controller of the second processor, BIST memory accesses to the local memory of the second processor in response to the random value generator. Performing the BIST memory accesses is performed concurrently with executing the secure code sequence.
Type: Application
Filed: November 30, 2012
Publication date: February 20, 2014
Inventors: William C. Moyer, Jeffrey W. Scott
-
Patent number: 8656107
Abstract: A data processing system comprises data processing circuitry, a cache memory, and memory access circuitry. The memory access circuitry is operative to assign a memory address region to be allocated in the cache memory with a predefined initialization value. Subsequently, a portion of the cache memory is allocated to the assigned memory address region only after the data processing circuitry first attempts to perform a memory access on a memory address within the assigned memory address region. The allocated portion of the cache memory is then initialized with the predefined initialization value.
Type: Grant
Filed: April 2, 2012
Date of Patent: February 18, 2014
Assignee: LSI Corporation
Inventors: Alexander Rabinovitch, Leonid Dubrovin
-
Patent number: 8656112
Abstract: A dual-mode prefetch system for implementing checkpoint tag prefetching includes: a data array for storing data fetched from cache memory; a set of cache tags identifying the data stored in the data array; a checkpoint tag array storing data identification information; and a cache controller with prefetch logic.
Type: Grant
Filed: September 11, 2012
Date of Patent: February 18, 2014
Assignee: International Business Machines Corporation
Inventors: Harold Wade Cain, III, Jong-Deok Choi
-
Patent number: 8656108
Abstract: A method and apparatus for disabling ways of a cache memory in response to history-based usage patterns is herein described. Way-predicting logic keeps track of cache accesses to the ways and determines whether accesses to some ways are to be disabled to save power, based upon way power signals having a logical state representing a predicted miss to the way. One or more counters associated with the ways count accesses, wherein a power signal is set to the logical state representing a predicted miss when one of said one or more counters reaches a saturation value. Control logic adjusts said one or more counters associated with the ways according to the accesses.
Type: Grant
Filed: July 17, 2012
Date of Patent: February 18, 2014
Assignee: Intel Corporation
Inventors: Martin Licht, Jonathan Combs, Andrew Huang
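The saturating-counter mechanism can be sketched as follows. The counter width, the reset-on-hit behavior, and every name here are our own assumptions for illustration; the patent's actual counter update rules may differ.

```python
# Hedged sketch of history-based way disabling: each way has a
# saturating counter of consecutive misses; at saturation the way's
# power signal predicts a miss and look-ups skip (disable) that way.

class WayPowerPredictor:
    def __init__(self, num_ways, saturation=3):
        self.saturation = saturation
        self.counters = [0] * num_ways
        self.predict_miss = [False] * num_ways

    def record_access(self, way, hit):
        if hit:
            self.counters[way] = 0          # assumed: a hit resets the counter
            self.predict_miss[way] = False
        else:
            if self.counters[way] < self.saturation:
                self.counters[way] += 1
            if self.counters[way] == self.saturation:
                self.predict_miss[way] = True   # power signal: predicted miss

    def enabled_ways(self):
        # Ways a look-up would actually probe (and power up).
        return [w for w, off in enumerate(self.predict_miss) if not off]
```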
-
Patent number: 8645629
Abstract: A persistent cacheable high volume manufacturing (HVM) initialization code is generally presented. In this regard, an apparatus is introduced comprising a processing unit, a unified cache, a unified cache controller, and a control register to selectively mask off access by the unified cache controller to portions of the unified cache. Other embodiments are also described and claimed.
Type: Grant
Filed: September 16, 2009
Date of Patent: February 4, 2014
Assignee: Intel Corporation
Inventors: Timothy J. Callahan, Snigdha Jana, Nandan A. Kulkarni
-
Patent number: 8639885
Abstract: A processor may include several processor cores, each including a respective higher-level cache, wherein each higher-level cache includes higher-level cache lines; and a lower-level cache including lower-level cache lines, where each of the lower-level cache lines may be configured to store data that corresponds to multiple higher-level cache lines. In response to invalidating a given lower-level cache line, the lower-level cache may be configured to convey a sequence including several invalidation packets to the processor cores via an interface, where each member of the sequence of invalidation packets corresponds to a respective higher-level cache line to be invalidated, and where the interface is narrower than an interface capable of concurrently conveying all invalidation information corresponding to the given lower-level cache line. Each invalidation packet may include invalidation information indicative of a location of the respective higher-level cache line within different ones of the processor cores.
Type: Grant
Filed: December 21, 2009
Date of Patent: January 28, 2014
Assignee: Oracle America, Inc.
Inventors: Prashant Jain, Sandip Das, Sanjay Patel
-
Publication number: 20140025881
Abstract: Associative index extended (AIX) caches can be functionally implemented through a reconfigurable decoder that employs programmable line decoding. The reconfigurable decoder features scalability in the number of lines, the number of index extension bits, and the number of banks. The reconfigurable decoder can switch between pure direct mapped (DM) mode and direct mapped-associative index extended (DM-AIX) mode of operation. For banked configurations, the reconfigurable decoder provides the ability to run some banks in DM mode and some other banks in DM-AIX mode. A cache employing this reconfigurable decoder can provide a comparable level of latency as a DM cache with minimal modifications to a DM cache circuitry of an additional logic circuit on a critical signal path, while providing low power operation at low area overhead with SA cache-like miss rates. Address masking and most-recently-used-save replacement policy can be employed with a single bit overhead per line.
Type: Application
Filed: July 17, 2012
Publication date: January 23, 2014
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Rajiv V. Joshi, Ajay N. Bhoj
-
Patent number: 8635408
Abstract: A mechanism for accessing a cache memory is provided. With the mechanism of the illustrative embodiments, a processor of the data processing system performs a first execution of a portion of code. During the first execution of the portion of code, information identifying which cache lines in the cache memory are accessed during the execution of the portion of code is stored in a storage device of the data processing system. Subsequently, during a second execution of the portion of code, power to the cache memory is controlled such that only the cache lines that were accessed during the first execution of the portion of code are powered-up.
Type: Grant
Filed: January 4, 2011
Date of Patent: January 21, 2014
Assignee: International Business Machines Corporation
Inventors: Sheldon B. Levenstein, David S. Levitan
-
EFFICIENT DYNAMIC RANDOMIZING ADDRESS REMAPPING FOR PCM CACHING TO IMPROVE ENDURANCE AND ANTI-ATTACK
Publication number: 20140019686
Abstract: A method, including monitoring, by a remapping manager, a system state of a computing device for the occurrence of a predefined event, detecting, by the remapping manager, the occurrence of the predefined event, and initiating, by the remapping manager upon the detection of the predefined event, a remapping of first encoded addresses stored in tags, the first encoded addresses are associated with locations in main memory that are cached in a memory cache.
Type: Application
Filed: December 28, 2011
Publication date: January 16, 2014
Inventor: Yaozu Dong
-
Patent number: 8631207
Abstract: Methods and apparatus to provide for power consumption reduction in memories (such as cache memories) are described. In one embodiment, a virtual tag is used to determine whether to access a cache way. The virtual tag access and comparison may be performed earlier in the read pipeline than the actual tag access or comparison. In another embodiment, a speculative way hit may be used based on pre-ECC partial tag match to wake up a subset of data arrays. Other embodiments are also described.
Type: Grant
Filed: December 26, 2009
Date of Patent: January 14, 2014
Assignee: Intel Corporation
Inventors: Zhen Fang, Meenakshisundara R. Chinthamani, Li Zhao, Milind B. Kamble, Ravishankar Iyer, Seung Eun Lee, Robert S. Chappell, Ryan L. Carlson
-
Patent number: 8631206
Abstract: Set-associative caches having corresponding methods and computer programs comprise: a data cache to provide a plurality of cache lines based on a set index of a virtual address, wherein each of the cache lines corresponds to one of a plurality of ways of the set-associative cache; a translation lookaside buffer to provide one of a plurality of way selections based on the set index of the virtual address and a virtual tag of the virtual address, wherein each of the way selections corresponds to one of the ways of the set-associative cache; and a way multiplexer to select one of the cache lines provided by the data cache based on the one of the plurality of way selections.
Type: Grant
Filed: August 20, 2008
Date of Patent: January 14, 2014
Assignee: Marvell International Ltd.
Inventors: R. Frank O'Bleness, Sujat Jamil, David E. Miner, Joseph Delgross, Tom Hameenanttila
-
Publication number: 20140013025
Abstract: A hybrid memory system includes a primary memory comprising a host memory space arranged as memory sectors corresponding to host logical block addresses (host LBAs). A secondary memory is implemented as a cache for the primary host memory. A hybrid controller is configured to map the clusters of host LBAs to clusters of solid state drive (SSD) LBAs. The SSD LBAs correspond to a memory space of the cache. Mapping of the host LBA clusters to the SSD LBA clusters is fully associative such that any host LBA cluster can be mapped to any SSD LBA cluster.
Type: Application
Filed: July 6, 2012
Publication date: January 9, 2014
Applicant: SEAGATE TECHNOLOGY LLC
Inventor: Sumanth Jannyavula Venkata
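Because the cluster mapping is fully associative, it reduces to a plain dictionary plus a free list of SSD clusters. The sketch below is our own reading of the abstract; the cluster size, allocation order, and names (`HybridClusterMap`, `ssd_lba`) are illustrative assumptions.

```python
# Sketch of a fully associative cluster map: any host-LBA cluster may
# occupy any SSD-LBA cluster, so lookup/allocation is a dictionary plus
# a free list. Translation keeps the offset within the cluster.

class HybridClusterMap:
    def __init__(self, num_ssd_clusters, cluster_size=32):
        self.cluster_size = cluster_size
        self.map = {}                              # host cluster -> SSD cluster
        self.free = list(range(num_ssd_clusters))  # unallocated SSD clusters

    def ssd_lba(self, host_lba):
        cluster, offset = divmod(host_lba, self.cluster_size)
        if cluster not in self.map:
            if not self.free:
                return None                        # cache full: miss
            self.map[cluster] = self.free.pop()    # any free cluster will do
        return self.map[cluster] * self.cluster_size + offset
```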
-
Publication number: 20140006714
Abstract: An apparatus of an aspect includes a plurality of cores. The plurality of cores are logically grouped into a plurality of clusters. A cluster sharing map-based coherence directory is coupled with the plurality of cores and is to track sharing of data among the plurality of cores. The cluster sharing map-based coherence directory includes a tag array to store corresponding pairs of addresses and cluster identifiers. Each of the addresses is to identify data. Each of the cluster identifiers is to identify one of the clusters. The cluster sharing map-based coherence directory also includes a cluster sharing map array to store cluster sharing maps. Each of the cluster sharing maps corresponds to one of the pairs of addresses and cluster identifiers. Each of the cluster sharing maps is to indicate intra-cluster sharing of data identified by the corresponding address within a cluster identified by the corresponding cluster identifier.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Inventors: Naveen Cherukuri, Mani Azimi
-
Publication number: 20130346699
Abstract: The present application describes embodiments of a method and apparatus for concurrently accessing dirty bits in a cache. One embodiment of the apparatus includes a cache configurable to store a plurality of lines. The lines are grouped into a plurality of subsets of the plurality of lines. This embodiment of the apparatus also includes a plurality of dirty bits associated with the plurality of lines and first circuitry configurable to concurrently access the plurality of dirty bits associated with at least one of the plurality of subsets of lines.
Type: Application
Filed: June 26, 2012
Publication date: December 26, 2013
Inventor: WILLIAM L. WALKER
-
Publication number: 20130346683
Abstract: A cache subsystem apparatus and method of operating therefor is disclosed. In one embodiment, a cache subsystem includes a cache memory divided into a plurality of sectors each having a corresponding plurality of cache lines. Each of the plurality of sectors is associated with a sector dirty bit that, when set, indicates at least one of its corresponding plurality of cache lines is storing modified data of any other location in a memory hierarchy including the cache memory. The cache subsystem further includes a cache controller configured to, responsive to initiation of a power down procedure, determine only in sectors having a corresponding sector dirty bit set which of the corresponding plurality of cache lines is storing modified data.
Type: Application
Filed: June 22, 2012
Publication date: December 26, 2013
Inventor: William L. Walker
-
Publication number: 20130339596
Abstract: Embodiments of the disclosure include selectively powering up a cache set of a multi-set associative cache by receiving an instruction fetch address and determining that the instruction fetch address corresponds to one of a plurality of entries of a content addressable memory. Based on that determination, a cache set of the multi-set associative cache that contains the cache line referenced by the instruction fetch address is identified, so that only a subset of the cache needs to be powered up. Based on the identified cache set not being powered up, the identified cache set of the multi-set associative cache is selectively powered up and one or more instructions stored in the cache line referenced by the instruction fetch address are transmitted to a processor.
Type: Application
Filed: June 15, 2012
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Brian R. Prasky, Anthony Saporito, Aaron Tsai
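The CAM-guided power gating reduces to a small lookup: a hit tells exactly which set to power, a miss falls back to powering everything. This is a minimal model with invented names (`SelectiveSetPower`, `learn`, `sets_to_power`), not the disclosed hardware.

```python
# Minimal model: a small CAM maps fetch addresses to the set that holds
# them; on a CAM hit only that set is powered up for the access, and on
# a CAM miss all sets must be powered (the conservative fallback).

class SelectiveSetPower:
    def __init__(self, num_sets):
        self.num_sets = num_sets
        self.cam = {}                         # fetch address -> cache set id

    def learn(self, addr, set_id):
        self.cam[addr] = set_id               # record which set holds this line

    def sets_to_power(self, addr):
        set_id = self.cam.get(addr)
        if set_id is None:
            return set(range(self.num_sets))  # CAM miss: power every set
        return {set_id}                       # CAM hit: power only that set
```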
-
Publication number: 20130339613
Abstract: Embodiments relate to storing data to a system memory. An aspect includes accessing successive entries of a cache directory having a plurality of directory entries by a stepper engine, where access to the cache directory is given a lower priority than other cache operations. It is determined that a specific directory entry in the cache directory has a change line state that indicates it is modified. A store operation is performed to send a copy of the specific corresponding cache entry to the system memory as part of a cache management function. The specific directory entry is updated to indicate that the change line state is unmodified.
Type: Application
Filed: June 13, 2012
Publication date: December 19, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blake, Pak-Kin Mak, Timothy C. Bronson, Hieu T. Huynh, Kenneth D. Klapproth, Vesselina K. Papazova
-
Publication number: 20130318299
Abstract: An apparatus and associated method is provided employing data capacity determination logic. The logic dynamically changes a data storage capacity of an electronic data storage memory. The change in capacity is made in relation to a transient energy during a power state change sequence performed by the electronic data storage memory.
Type: Application
Filed: May 22, 2012
Publication date: November 28, 2013
Applicant: SEAGATE TECHNOLOGY LLC
Inventors: David Louis Spengler, Aaron Danis, William Anthony Pagano
-
Patent number: 8583872
Abstract: A cache memory having a sector function, operating in accordance with a set associative system, and performing a cache operation to replace data in a cache block in the cache way corresponding to a replacement cache way determined upon an occurrence of a cache miss comprises: storing sector ID information in association with each of the cache ways in the cache block specified by a memory access request; determining, upon the occurrence of the cache miss, replacement way candidates, in accordance with sector ID information attached to the memory access request and the stored sector ID information; selecting and outputting a replacement way from the replacement way candidates; and updating the stored sector ID information in association with each of the cache ways in the cache block specified by the memory access request, to the sector ID information attached to the memory access request.
Type: Grant
Filed: August 19, 2008
Date of Patent: November 12, 2013
Assignee: Fujitsu Limited
Inventors: Shuji Yamamura, Mikio Hondou, Iwao Yamazaki, Toshio Yoshida
-
Publication number: 20130297880
Abstract: Apparatuses, systems, and methods are disclosed for caching data. A method includes directly mapping a logical address of a backing store to a logical address of a non-volatile cache. A method includes mapping, in a logical-to-physical mapping structure, the logical address of the non-volatile cache to a physical location in the non-volatile cache. The physical location may store data associated with the logical address of the backing store. A method includes removing the mapping from the logical-to-physical mapping structure in response to evicting the data from the non-volatile cache so that membership in the logical-to-physical mapping structure denotes storage in the non-volatile cache.
Type: Application
Filed: June 29, 2013
Publication date: November 7, 2013
Inventor: David Flynn
-
Publication number: 20130297879
Abstract: A computer cache memory organization called Probabilistic Set Associative Cache (PAC) has the hardware complexity and latency of a direct-mapped cache but functions as a set-associative cache for a fraction of the time, thus yielding better than direct-mapped cache hit rates. The organization is considered a (1+P)-way set associative cache, where the chosen parameter called Override Probability P determines the average associativity; for example, with P=0.1 it effectively operates as a 1.1-way set-associative cache.
Type: Application
Filed: May 1, 2012
Publication date: November 7, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Bulent Abali, John S. Dodson, Moinuddin K. Qureshi, Balaram Sinharoy
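A rough software model of the (1+P)-way idea: behave as a direct-mapped cache by default, and with override probability P fall back to set-associative behavior. The structure below (a primary way plus one secondary way per set) is a simplification assumed for illustration; the patent's hardware organization differs.

```python
import random

class ProbSetAssocCache:
    """Toy (1+P)-way cache: a direct-mapped primary way, plus a secondary
    way used with override probability P on a miss."""

    def __init__(self, num_sets, p, seed=0):
        self.primary = [None] * num_sets
        self.secondary = [None] * num_sets
        self.p = p
        self.rng = random.Random(seed)

    def access(self, addr):
        s = addr % len(self.primary)
        if self.primary[s] == addr or self.secondary[s] == addr:
            return True   # hit in either way
        # Miss: with probability P fill the secondary way (set-associative
        # behavior); otherwise fill the primary way (direct-mapped behavior).
        if self.rng.random() < self.p:
            self.secondary[s] = addr
        else:
            self.primary[s] = addr
        return False
```

With P=0 the model degenerates to a plain direct-mapped cache; with P=1 every miss uses the extra way, i.e. fully 2-way fills.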
-
Patent number: 8578097
Abstract: A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion.
Type: Grant
Filed: October 24, 2011
Date of Patent: November 5, 2013
Assignee: Intel Corporation
Inventors: Daehyun Kim, Christopher J. Hughes, Yen-Kuang Chen, Partha Kundu
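The scatter/gather pattern itself is simple to state in software terms, even though the patent concerns a hardware engine. A minimal Python analogue, purely for illustration:

```python
def gather(memory, indices):
    """Collect sparse elements into a dense buffer (the 'gather' half):
    only the useful elements are touched, not whole contiguous ranges."""
    return [memory[i] for i in indices]

def scatter(memory, indices, values):
    """Write a dense buffer of values back to sparse locations in place
    (the 'scatter' half)."""
    for i, v in zip(indices, values):
        memory[i] = v
```

A hardware engine additionally performs the address calculation, shuffling, and format conversion that these comprehensions hide.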
-
Patent number: 8577989
Abstract: A method of optimizing the delivery of a set of data elements from a first device to a second device. The method includes retrieving from a data source the set of data elements, including a first subset of the set of data elements, a second subset of the set of data elements, and a third subset of the set of data elements. The method also includes transferring the first subset of the set of data elements to the second device. The method further includes selecting a fourth subset of the set of data elements, wherein the fourth subset can be comprised of data elements from the first subset and the second subset, or wherein the fourth subset can be comprised of data elements from the second subset and the third subset. The method also includes transferring a fourth subset of the set of data elements to the second device.
Type: Grant
Filed: June 14, 2007
Date of Patent: November 5, 2013
Assignee: Oracle International Corporation
Inventor: Tal Broda
-
Patent number: 8572324
Abstract: A network on chip (‘NOC’) that includes integrated processor (‘IP’) blocks, routers, memory communications controllers, and network interface controllers, each IP block adapted to a router through a memory communications controller and a network interface controller, a multiplicity of computer processors, each computer processor implementing a plurality of hardware threads of execution; and computer memory, the computer memory organized in pages and operatively coupled to one or more of the computer processors, the computer memory including a set associative cache, the cache comprising cache ways organized in sets, the cache being shared among the hardware threads of execution, each page of computer memory restricted for caching by one replacement vector of a class of replacement vectors to particular ways of the cache, each page of memory further restricted for caching by one or more bits of a replacement vector classification to particular sets of ways of the cache.
Type: Grant
Filed: April 12, 2012
Date of Patent: October 29, 2013
Assignee: International Business Machines Corporation
Inventors: Russell D. Hoover, Eric O. Mejdrich
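The per-page way restriction above can be modeled as a bit vector decoded into the set of ways a page may occupy. The bit-per-way encoding below is an assumption made for illustration; the patent does not specify this exact format.

```python
def allowed_ways(replacement_vector, num_ways):
    """Decode a per-page replacement vector into the list of cache ways the
    page is permitted to cache into (bit i set => way i allowed).
    Encoding is illustrative, not the patented format."""
    return [w for w in range(num_ways) if replacement_vector & (1 << w)]
```

A replacement policy would then pick its victim only from `allowed_ways(...)` for the page being filled, confining each page to its assigned slice of the cache.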
-
Publication number: 20130275683
Abstract: Agents may be assigned to discrete portions of a cache. In some cases, more than one agent may be assigned to the same cache portion. The size of the portion, the assignment of agents to the portion and the number of agents may be programmed dynamically in some embodiments.Type: Application
Filed: August 29, 2011
Publication date: October 17, 2013
Applicant: Intel Corporation
Inventor: Nicolas Kacevas
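A small sketch of such dynamic cache partitioning, with invented names: agents map to portion indices, each portion owning a contiguous slice of sets, and the assignment table can be reprogrammed at any time. This is an illustration of the concept, not Intel's mechanism.

```python
class PartitionedCache:
    """Agents are dynamically assigned to discrete cache portions; more than
    one agent may share a portion."""

    def __init__(self, num_sets, num_portions):
        self.num_sets = num_sets
        self.num_portions = num_portions
        self.assignment = {}   # agent name -> portion index (reprogrammable)

    def assign(self, agent, portion):
        self.assignment[agent] = portion

    def sets_for(self, agent):
        # Each portion owns a contiguous, equal-sized slice of sets.
        size = self.num_sets // self.num_portions
        p = self.assignment[agent]
        return range(p * size, (p + 1) * size)
```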
-
Publication number: 20130262768
Abstract: A method for operating a cache that includes both robust cells and standard cells may include receiving a data to be written to the cache, determining whether a type of the data is unmodified data or modified data, and writing the data to robust cells or standard cells as a function of the type of the data. A processor includes a core that includes a cache including both robust cells and standard cells for receiving data, wherein the data is written to robust cells or standard cells as a function of whether a type of the data is determined to be unmodified data or modified data.
Type: Application
Filed: March 30, 2012
Publication date: October 3, 2013
Inventors: Christopher B. Wilkerson, Alaa R. Alameldeen, Jaydeep P. Kulkarni
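The abstract leaves the routing policy unspecified; one plausible policy, assumed here for illustration, is that modified (dirty) data goes to robust cells, since no clean copy exists elsewhere to recover from a cell error, while unmodified data can live in standard cells because it can be re-fetched. A minimal sketch under that assumption:

```python
class HybridCellCache:
    """Sketch: writes are routed to robust or standard cells by data type.
    Policy assumed for illustration: dirty data -> robust cells."""

    def __init__(self):
        self.robust = {}     # addr -> value held in robust cells
        self.standard = {}   # addr -> value held in standard cells

    def write(self, addr, value, modified):
        if modified:
            # Dirty data has no backup copy in memory: keep it in robust cells.
            self.robust[addr] = value
            self.standard.pop(addr, None)
        else:
            # Clean data can be re-fetched after an error: standard cells suffice.
            self.standard[addr] = value
            self.robust.pop(addr, None)
```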
-
Publication number: 20130262772
Abstract: A data processing system comprises data processing circuitry, a cache memory, and memory access circuitry. The memory access circuitry is operative to assign a memory address region to be allocated in the cache memory with a predefined initialization value. Subsequently, a portion of the cache memory is allocated to the assigned memory address region only after the data processing circuitry first attempts to perform a memory access on a memory address within the assigned memory address region. The allocated portion of the cache memory is then initialized with the predefined initialization value.
Type: Application
Filed: April 2, 2012
Publication date: October 3, 2013
Applicant: LSI CORPORATION
Inventors: Alexander Rabinovitch, Leonid Dubrovin
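The deferred-allocation flow above, assign an initialization value up front, allocate and initialize cache space only on first touch, can be sketched as follows. Names and granularity (per-address rather than per-line) are simplifications for illustration, not the LSI interface.

```python
class LazyInitCache:
    """Sketch: a region is assigned an init value at registration time, but
    cache space is only allocated (and filled with that value) on the first
    access to an address inside the region."""

    def __init__(self):
        self.regions = []   # (start, end, init_value) assigned regions
        self.lines = {}     # addr -> value: cache lines actually allocated

    def assign_region(self, start, end, init_value):
        # No cache space is consumed yet -- just record the assignment.
        self.regions.append((start, end, init_value))

    def read(self, addr):
        if addr not in self.lines:
            for start, end, init in self.regions:
                if start <= addr < end:
                    # First touch: allocate now and initialize.
                    self.lines[addr] = init
                    break
        return self.lines.get(addr)
```

The benefit is that zeroing (or otherwise initializing) a large region costs nothing until the region is actually used.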
-
Patent number: 8543775
Abstract: A method and apparatus are disclosed for implementing early release of speculatively read data in a hardware transactional memory system. A processing core comprises a hardware transactional memory system configured to receive an early release indication for a specified word of a group of words in a read set of an active transaction. The early release indication comprises a request to remove the specified word from the read set. In response to the early release request, the processing core removes the group of words from the read set only after determining that no word in the group other than the specified word has been speculatively read during the active transaction.
Type: Grant
Filed: December 13, 2012
Date of Patent: September 24, 2013
Assignee: Advanced Micro Devices, Inc.
Inventors: Jaewoong Chung, David S Christie, Michael Hohmuth, Stephan Diestelhorst, Martin Pohlack, Luke Yen
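The release condition above, drop a whole group from the read set only when the released word is the only one in its group that was speculatively read, can be modeled in a few lines. This is a behavioral sketch with invented names, not the AMD hardware design.

```python
class ReadSet:
    """Toy model of word-granularity early release in an HTM read set.
    Tracking is per group of `group_size` consecutive word addresses."""

    def __init__(self, group_size=4):
        self.group_size = group_size
        self.read_words = set()   # speculatively read word addresses

    def speculative_read(self, word):
        self.read_words.add(word)

    def group_of(self, word):
        base = word - (word % self.group_size)
        return range(base, base + self.group_size)

    def early_release(self, word):
        # Only remove the group if no *other* word in it was read.
        others = [w for w in self.group_of(word)
                  if w != word and w in self.read_words]
        if not others:
            self.read_words -= set(self.group_of(word))
            return True
        return False   # group stays: another word still needs conflict tracking
```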
-
Publication number: 20130238860
Abstract: Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings.
Type: Application
Filed: April 25, 2013
Publication date: September 12, 2013
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
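The watch-list flow, OS tracks registered ranges, a mapping change inside a watched range triggers a notification, and the message passing module invalidates its cached copy, can be sketched as below. Both roles are collapsed into one class for brevity; all names are illustrative.

```python
class RegisteredVAWatch:
    """Sketch of the registered-virtual-address watch list. The `watch`
    list models the OS side; `mp_cache` models the message passing
    module's cache of currently registered ranges."""

    def __init__(self):
        self.watch = []        # (start, end) registered VA ranges (OS side)
        self.mp_cache = set()  # message passing module's cached ranges

    def register(self, start, end):
        self.watch.append((start, end))
        self.mp_cache.add((start, end))

    def mapping_changed(self, addr):
        # OS side: a physical-to-virtual mapping changed at `addr`. Notify
        # the message passing module for every watched range covering it,
        # which drops the now-stale cached entry.
        for r in [r for r in self.watch if r[0] <= addr < r[1]]:
            self.mp_cache.discard(r)
```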
-
Patent number: 8533399
Abstract: In a cache memory, energy and other efficiencies can be realized by saving a result of a cache directory lookup for sequential accesses to a same memory address. Where the cache is a point of coherence for speculative execution in a multiprocessor system, with directory lookups serving as the point of conflict detection, such saving becomes particularly advantageous.
Type: Grant
Filed: January 4, 2011
Date of Patent: September 10, 2013
Assignee: International Business Machines Corporation
Inventor: Martin Ohmacht
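Saving the last directory-lookup result for back-to-back accesses to the same address amounts to a one-entry memo in front of the directory. A minimal sketch, with a counter added just to make the saving visible; a dictionary stands in for the directory, and all names are invented:

```python
class DirectoryCache:
    """Sketch: reuse the previous directory-lookup result when the next
    access targets the same address, skipping the expensive lookup."""

    def __init__(self, directory):
        self.directory = directory   # addr -> directory state (stand-in)
        self.last = None             # (addr, result) of the previous lookup
        self.lookups = 0             # number of real directory lookups done

    def lookup(self, addr):
        if self.last is not None and self.last[0] == addr:
            return self.last[1]      # sequential same-address access: reuse
        self.lookups += 1
        result = self.directory.get(addr)
        self.last = (addr, result)
        return result
```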
-
Patent number: 8533396
Abstract: Apparatus for memory elements and related methods for performing an allocate operation are provided. An exemplary memory element includes a plurality of way memory elements and a replacement module coupled to the plurality of way memory elements. Each way memory element is configured to selectively output data bits maintained at an input address. The replacement module is configured to enable output of the data bits maintained at the input address of a way memory element of the plurality of way memory elements for replacement in response to an allocate instruction including the input address.
Type: Grant
Filed: November 19, 2010
Date of Patent: September 10, 2013
Assignee: Advanced Micro Devices, Inc.
Inventors: Michael Ciraula, Carson Henrion, Ryan Freese