CONFIGURABLE CACHE AND METHOD TO CONFIGURE SAME
A method includes receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The method further includes identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The method further includes using the set index to locate at least one tag field of the tag state array, identifying a second portion of the address to compare to a value stored at the at least one tag field, locating at least one state field of the tag state array that is associated with a particular tag field that matches the second portion, identifying a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size, and retrieving the cache line.
This application claims priority from and is a continuation application of U.S. patent application Ser. No. 13/531,803, filed Jun. 25, 2012, entitled “CONFIGURABLE CACHE AND METHOD TO CONFIGURE SAME,” which claims priority from and is a divisional application of U.S. patent application Ser. No. 12/397,185, filed Mar. 3, 2009, entitled “CONFIGURABLE CACHE AND METHOD TO CONFIGURE SAME,” the contents of both of which are hereby incorporated by reference in their entirety.
II. FIELD OF THE DISCLOSURE
The present disclosure is generally directed to a configurable cache and method to configure same.
III. BACKGROUND
Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and Internet Protocol (IP) telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, wireless telephones can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these wireless telephones can include significant computing capabilities.
Digital signal processors (DSPs), image processors, and other processing devices are frequently used in portable personal computing devices and operate in conjunction with one or more caches. A cache is usually a copy of data that exists somewhere in a memory hierarchy. In some cases, the cache may have the only “up to date” copy of the data in the system. One typical component of a cache is a data memory. This data memory is divided into cache lines, where each cache line is a copy of a unique (and contiguous) part of the system memory. Another typical component of a cache is a way to associate a system memory address with a particular cache line.
This way to associate a system memory address with a particular cache line is often called a tag. Another typical component of a cache is a state to indicate whether a cache line is valid, modified, owned, and the like.
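As a rough illustration (not taken from the patent), the three components described above, a data memory divided into cache lines, a tag associating each line with a system memory address, and state bits, might be modeled as follows in Python; all names and the 32-byte line size are assumptions made for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class CacheLine:
    """Hypothetical model of one cache line: its data plus the associated tag and state."""
    tag: int = 0                    # associates the line with a region of system memory
    valid: bool = False             # state: the line holds an up-to-date copy
    modified: bool = False          # state: the line has been written since it was filled
    data: bytearray = field(default_factory=lambda: bytearray(32))  # copy of a contiguous 32-byte region
```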
IV. SUMMARY
A configurable cache may be resized by modifying a cache line size without changing a number of tags of the cache. Mapping between different cache sizes may be performed by shifting a location of an index within a memory address for a cache lookup. As an example, a pair of multiplexers may select address bits based on the size of the cache to shift the location of the index during a lookup operation.
In one aspect, a method comprises receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The method further comprises identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The method further comprises using the set index to locate at least one tag field of the tag state array, identifying a second portion of the address to compare to a value stored at the at least one tag field, locating at least one state field of the tag state array that is associated with a particular tag field that matches the second portion, identifying a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size, and retrieving the cache line.
In another aspect, an apparatus comprises a cache and a processor coupled to the cache. The processor is configured to receive an address at a tag state array of the cache, wherein the cache is configurable to have one of a first size and a second size that is smaller than the first size. The processor is further configured to use a first portion of the address as a set index to locate at least one tag field of the tag state array, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The processor is further configured to locate at least one state field of the tag state array that is associated with a particular tag field of the at least one tag field, wherein the particular tag field matches a second portion of the address. The processor is further configured to retrieve a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size.
In yet another aspect, a non-transitory computer readable medium comprises instructions that, when executed by a processor, cause the processor to receive an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The instructions further cause the processor to identify a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The instructions further cause the processor to use the set index to locate at least one tag field of the tag state array, identify a second portion of the address to compare to a value stored at the at least one tag field, locate at least one state field of the tag state array that is associated with a particular tag field that matches the second portion, identify a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size, and retrieve the cache line.
In yet another aspect, an apparatus comprises means for receiving an address at a tag state array of a cache, wherein the cache is configurable to have one of a first size and a second size that is smaller than the first size. The apparatus further comprises means for using a first portion of the address as a set index to locate at least one tag field of the tag state array, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The apparatus further comprises means for locating at least one state field of the tag state array that is associated with a particular tag field of the at least one tag field, wherein the particular tag field matches a second portion of the address. The apparatus further comprises means for retrieving a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size.
One particular advantage provided by disclosed embodiments is a configurable mapping between tags and cache lines that supports greater tag utilization across multiple data RAM configurations: as the data RAM is configured to be 100% cache, 50% cache, or 25% cache, the cache line size is reduced proportionally while the number of tags remains unchanged.
Another advantage provided by disclosed embodiments is that the number of tags available is substantially maximized, in a cost and timing effective way, as the data RAM available for caching is reduced, which is of particular importance in a low-powered multi-threaded processor environment where traditional data locality assumptions may not hold. The cache with more tags is a higher performing cache, since address space conflicts are reduced.
Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.
Referring to
The tag state array 108 includes a tag area 116 addressable by the set index, such as the set index 1 122 or the set index 2 124 shown associated with the address 102. The tag state array 108 also includes a state area 118 addressable by a state address 126. Each of the cache lines 112 is associated with a tag address 128. As shown in
As shown in
There may be a relationship between the cache line size, data memory size, and the number of tags. This relationship may be expressed by the formula:
number of tags = (data memory size)/(cache line size)
From this formula, it can be seen that increasing the cache line size while keeping the data memory size constant may decrease the number of tags. Decreasing the number of tags may require less physical storage; however, decreasing the number of tags also implies that fewer unique memory locations (or ranges) may be contained in the cache. As an extreme example, consider a 32 byte cache that only has a single tag. All 32 bytes would be a copy of a contiguous part of system memory. By contrast, if the cache had 8 tags, 8 unrelated 4 byte regions could be contained in the cache. By extension, a single 32 byte contiguous region could also be stored in such a cache.
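A minimal sketch of that relationship, using the 32 byte example from the paragraph above (the helper name is ours, not the patent's):

```python
def num_tags(data_memory_bytes: int, cache_line_bytes: int) -> int:
    # number of tags = (data memory size) / (cache line size)
    return data_memory_bytes // cache_line_bytes

print(num_tags(32, 32))  # 1 tag: the whole cache mirrors one contiguous 32-byte region
print(num_tags(32, 4))   # 8 tags: up to eight unrelated 4-byte regions can be cached
```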
In some cases, the data memory portion of a cache may not be constant, but may be configurable, as in the configurable cache system 100 of
By adjusting the cache line size together with the data memory size, the configurable cache system 100 of
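For instance, a small sketch (with made-up sizes, not values from the patent) shows that scaling the cache line size down with the data memory size leaves the tag count unchanged:

```python
# 100%, 50%, and 25% data-memory configurations with proportionally scaled line sizes.
for data_bytes, line_bytes in [(1024, 32), (512, 16), (256, 8)]:
    tags = data_bytes // line_bytes
    print(f"{data_bytes} B data memory, {line_bytes} B lines -> {tags} tags")  # 32 tags in every case
```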
Referring to
The tag state array 208 includes a tag area 216 addressable by the set index. The tag state array 208 also includes a state area 218 addressable by a state address. Each of the cache lines 212 is addressable by a tag address. The tag state array 208 also includes one or more sets 214. In a particular embodiment, the size of a set 214 of the tag state array 208 may be the same in the first cache configuration, the second cache configuration, and the third cache configuration.
In a particular embodiment, the data area 210 has a predetermined number of sets to store data that is accessible via the set index and the tag state array 208. In a first cache configuration, each of the predetermined number of sets of the data area 210 may be configured to store a first amount of data. In a second cache configuration, each of the predetermined number of sets of the data area 210 may be configured to store a second amount of data.
In a particular embodiment, the indexing circuitry 224 is coupled to the memory address register 202 to identify multiple tag entries of the tag state array 208 using the set index. For example, the indexing circuitry 224 may access the tag state array 208 and locate and identify multiple tag entries corresponding to the set index received from the memory address register 202. The indexing circuitry may also be coupled to the selection circuitry by a two-bit connection, as shown in
In a particular embodiment, the comparison circuitry 226 is coupled to the memory address register 202 to compare a tag value of the identified multiple tag entries to a tag portion of the memory address. For example, the comparison circuitry 226 may access the tag state array 208 and compare the tag values of the multiple tag entries identified by the indexing circuitry 224 to respective tag portions of the memory address received from the memory address register 202.
In a particular embodiment, the validation circuitry 228 is coupled to the memory address register 202 to decode the state address and to compare the decoded state address to validation bits 222 of an identified set of the predetermined number of sets of the data area 210. The validation circuitry 228 may access the tag state array 208 and compare the validation bits 222 to the decoded state address portion of the memory address received from the memory address register 202. The validation circuitry 228 may be coupled to the memory address register 202 by a two-bit connection, as shown in
In a particular embodiment, the selection circuitry 230 is coupled to the memory address register 202 and to the indexing circuitry 224 to selectively include a particular bit of the memory address in the set index in the first cache configuration and to not include the particular bit in the set index in the second cache configuration, as will be described in more detail below in connection with
Referring to
The set index 306 ranges over 9 bits from bit 13 to bit 5, sharing two common bits 308 (bit 5 and bit 6) with a state portion 324 of the address, where the state portion 324 ranges over two bits from bit 6 to bit 5. A tag portion 304 of the address ranges from bit 31 to bit 14.
Shifting the set index by one bit, as indicated by the arrow 314, gives the set index 312, which ranges over 9 bits from bit 14 to bit 6, sharing one common bit 316 (bit 6) with the state portion 324 of the address. In this case, bit 5 of the state portion 324 of the address may be used to label two cache line segments or sectors, so that the cache having the set index 312 may be twice as big as the cache having the set index 306. A tag portion 310 of the address ranges from bit 31 to bit 15, with a least significant bit of zero that may be concatenated to bits 31:15.
Shifting the set index by another bit, as indicated by the arrow 322, gives the set index 320, which ranges over 9 bits from bit 15 to bit 7, sharing no common bits with the state portion 324 of the address. In this case, both bit 5 and bit 6 of the state portion 324 of the address may be used to label four cache line segments or sectors, so that the cache having the set index 320 may be twice as big as the cache having the set index 312. A tag portion 318 of the address ranges from bit 31 to bit 16, with two least significant bits of zero that may be concatenated to bits 31:16.
The total cache size may be given by the product of the number of sets times the number of ways times the cache line size times the number of segments or sectors. The number of sets indexed by a 9-bit set index is 2^9=512. For a 4-way cache having a cache line size of 32 bits, the total cache size is 512 times 4 times 32 or about 64 kilobits (kbit) for the cache having the set index 306, where the cache has only one segment or sector for each cache line. For the cache having the set index 312, where the cache has two segments or sectors for each cache line, the total cache size is about 128 kbit. For the cache having the set index 320, where the cache has four segments or sectors for each cache line, the total cache size is about 256 kbit.
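The bit ranges above can be summarized in a short sketch that decomposes a 32-bit address for each of the three configurations; the configuration labels and the function names are ours, and only the field positions are taken from the description.

```python
def decompose(addr: int, config: str) -> dict:
    """Illustrative address decomposition ('small' ~ set index 306, 'medium' ~ 312, 'large' ~ 320)."""
    def bits(hi, lo):
        return (addr >> lo) & ((1 << (hi - lo + 1)) - 1)

    state = bits(6, 5)                                   # state portion 324, bits 6:5
    if config == "small":                                # 1 sector: set index bits 13:5, tag bits 31:14
        return {"set_index": bits(13, 5), "tag": bits(31, 14), "state": state, "sectors": 1}
    if config == "medium":                               # 2 sectors: set index bits 14:6, tag bits 31:15 plus a zero LSB
        return {"set_index": bits(14, 6), "tag": bits(31, 15) << 1, "state": state, "sectors": 2}
    if config == "large":                                # 4 sectors: set index bits 15:7, tag bits 31:16 plus two zero LSBs
        return {"set_index": bits(15, 7), "tag": bits(31, 16) << 2, "state": state, "sectors": 4}
    raise ValueError(config)

# The set index is always 9 bits; only its position within the address shifts.
print(decompose(0x12345678, "small"))
```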
Referring to
The memory address register for cache lookup 402 is configured to store 32-bit values, labeled from a least significant bit (LSB), bit 0, to a most significant bit (MSB), bit 31. A multiplexer 404 receives bit 15 from the memory address register for cache lookup 402 as one input, as indicated at 418, and bit 6 as another input, as indicated at 422. The multiplexer 404 outputs either bit 15 or bit 6 to the set index 408, as indicated at 412. The output of the multiplexer 404 is controlled by a cache size 430 control along a two-bit line 410. A multiplexer 406 receives bit 14 as one input, as indicated at 420, and bit 5 as another input, as indicated at 424. The multiplexer 406 outputs either bit 14 or bit 5 to the set index 408, as indicated at 416. The output of the multiplexer 406 is controlled by the cache size 430 control along the two-bit line 410. The set index 408 receives bits from the memory address register for cache lookup 402 ranging from bit 13 to bit 7 along a 7-bit line 414.
When the multiplexer 404 outputs bit 6 and the multiplexer 406 outputs bit 5, then the set index 408 corresponds to the set index 306 of
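A functional sketch of the selection performed by the multiplexers 404 and 406 follows. Only the pairing for the smallest cache (bits 6 and 5) is stated explicitly above; the other two pairings are our inference from the set index ranges of the three configurations, and the exact ordering of bits within the 9-bit index is not modeled.

```python
def set_index(addr: int, cache_size: str) -> int:
    """Sketch: two muxes pick two of the index bits; bits 13:7 always feed the set index."""
    def bit(n):
        return (addr >> n) & 1

    mux404, mux406 = {
        "smallest": (6, 5),     # set index covers bits 13:5 (set index 306), stated in the text
        "medium":   (6, 14),    # set index covers bits 14:6 (set index 312), inferred pairing
        "largest":  (15, 14),   # set index covers bits 15:7 (set index 320), inferred pairing
    }[cache_size]
    common = (addr >> 7) & 0x7F                                 # bits 13:7 on the 7-bit line 414
    return (bit(mux404) << 8) | (common << 1) | bit(mux406)     # 9-bit index (bit order illustrative)
```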
Referring to
The method 500 further includes using the set index to locate at least one tag field of the tag state array, at 506. For example, either the set index 1 122 or the set index 2 124 may be used to locate at least one tag area 116 of the tag state array 108 shown in
The method 500 also includes identifying a cache line based on a comparison of a third portion of the address to at least two status bits of the at least one state field, at 512. For example, one of the cache lines 112 may be identified based on a comparison of the state address 126 portion of the address 102 to at least two status bits of the at least one state area 118 of the tag state array 108 of
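Putting the steps of method 500 together, the following hedged sketch walks one lookup when the cache has the second (smaller) size; the data structures (a list of sets, each holding ways with a stored tag, per-sector valid bits, and per-sector data) are our own modeling assumptions, and the bit positions follow the set index 312 configuration described earlier.

```python
def lookup(sets, addr):
    set_index = (addr >> 6) & 0x1FF        # first portion of the address: 9-bit set index (bits 14:6)
    tag = addr >> 15                       # second portion: compared against stored tag values
    sector = (addr >> 5) & 0x1             # third portion (state address): one status bit at this size
    for way in sets[set_index]:            # tag fields located via the set index
        if way["tag"] == tag and way["valid"][sector]:   # tag match plus status-bit comparison
            return way["data"][sector]     # identify and retrieve the cache line
    return None                            # miss
```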
In a particular embodiment, the cache is further configurable to have a third size that is larger than the second size. For example, the data area 210 of the configurable cache 206 may be further configurable to have a third size that is larger than the second size, as shown in
Referring to
The method 600 also includes shifting a location of a set index portion of an address of data to be retrieved from the cache in response to changing the size of the cache, where a bit length of the set index portion is not changed when the location is shifted, at 604. For example, the set index 306 of
In a particular embodiment, the set index portion of the address overlaps at least one bit of a state address portion of the address when the cache is configured to have a first size or when the cache is configured to have a second size that is larger than the first size. For example, the set index 306 of
In a particular embodiment, the cache is further configurable to have a third size that is larger than the second size. For example, the data area 210 of the configurable cache 206 may be further configurable to have a third size that is larger than the second size, as shown in
Referring to
The method 700 also includes shifting a range of bits of a memory address to index a tag state array that is associated with the data array, where the range of bits to index the tag state array is shifted based on changing the cache from the first configuration to the second configuration, at 704. For example, the set index 306 of
In a particular embodiment, the method 700 further includes setting control inputs to a pair of multiplexers that each receive at least one input from the range of bits to index the tag state array and that each output a selectable bit to the set index. For example, the multiplexer 404 of
In a particular embodiment, the method 700 further includes changing the cache from the second configuration having the second data area size to a third configuration having a third data area size, by increasing the amount of data associated with each entry of a data array of the cache and maintaining the first number of entries of the data array that are addressable via the set index, and by maintaining the second number of entries of the data array associated with each value of the set index. For example, the configurable cache 206 of
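As a rough end-to-end illustration of methods 600 and 700, the sketch below keeps the number of tag and state entries (sets times ways) fixed while a reconfiguration changes how much data each entry maps to and which address bits supply the set index. The class, the configuration labels, and the sizes are assumptions for illustration, not the patent's implementation.

```python
class ConfigurableCache:
    # Illustrative configurations: fraction of data RAM used as cache,
    # sectors (data) per tag/state entry, and lowest bit of the set index.
    CONFIGS = {
        "quarter": {"sectors": 1, "index_low_bit": 5},   # 25% cache, smallest line size
        "half":    {"sectors": 2, "index_low_bit": 6},   # 50% cache
        "full":    {"sectors": 4, "index_low_bit": 7},   # 100% cache, largest line size
    }

    def __init__(self, config: str = "quarter", num_sets: int = 512, num_ways: int = 4):
        self.num_sets, self.num_ways = num_sets, num_ways    # never change when the cache is resized
        self.configure(config)

    def configure(self, config: str) -> None:
        cfg = self.CONFIGS[config]
        self.sectors_per_line = cfg["sectors"]               # amount of data per entry changes
        self.index_low_bit = cfg["index_low_bit"]            # set-index bit range shifts

    def set_index(self, addr: int) -> int:
        return (addr >> self.index_low_bit) & (self.num_sets - 1)

cache = ConfigurableCache("quarter")
cache.configure("half")   # resize: line size doubles while the 512 x 4 tag/state entries are retained
```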
A configurable cache operating in accordance with the methods of
A camera interface 868 is coupled to the signal processor 810 and also coupled to a camera, such as a video camera 870. A display controller 826 is coupled to the signal processor 810 and to a display device 828. A coder/decoder (CODEC) 834 can also be coupled to the signal processor 810. A speaker 836 and a microphone 838 can be coupled to the CODEC 834. A wireless interface 840 can be coupled to the signal processor 810 and to a wireless antenna 842 such that wireless data received via the antenna 842 and wireless interface 840 can be provided to the processor 810.
The signal processor 810 may be configured to execute computer executable instructions 866 stored at a computer-readable medium, such as the memory 832, that are executable to cause a computer, such as the processor 810, to cause the configurable cache module 864 to change a cache from a first configuration having a first data area size to a second configuration having a second data area size, by increasing an amount of data associated with each entry of a data array of the cache and maintaining a first number of entries of the data array that are addressable via a set index, and by maintaining a second number of entries of the data array associated with each value of the set index. The computer executable instructions are further executable to cause the configurable cache module 864 to shift a range of bits of a memory address to index a tag state array that is associated with the data array, where the range of bits to index the tag state array is shifted based on changing the cache from the first configuration to the second configuration.
In a particular embodiment, the signal processor 810, the display controller 826, the memory 832, the CODEC 834, the wireless interface 840, and the camera interface 868 are included in a system-in-package or system-on-chip device 822. In a particular embodiment, an input device 830 and a power supply 844 are coupled to the system-on-chip device 822. Moreover, in a particular embodiment, as illustrated in
The foregoing disclosed devices and functionalities may be implemented by providing design information configured into computer files (e.g., RTL, GDSII, GERBER, etc.) stored on computer-readable media. Some or all such files may be provided to fabrication handlers who fabricate devices based on such files. Resulting products include semiconductor wafers that are then cut into semiconductor die and packaged into semiconductor chips. The chips are then employed in devices described above.
Physical device information 902 is received in the manufacturing process 900, such as at a research computer 906. The physical device information 902 may include design information representing at least one physical property of a semiconductor device, such as the components of the configurable cache of
In a particular embodiment, the library file 912 includes at least one data file including the transformed design information. For example, the library file 912 may include a library of data files corresponding to semiconductor devices including the components of the configurable cache of
The library file 912 may be used in conjunction with the EDA tool 920 at a design computer 914 that includes a processor 916, such as one or more processing cores, coupled to a memory 918. The EDA tool 920 may be stored as processor executable instructions at the memory 918 to enable a user of the design computer 914 to design a circuit using the components of the configurable cache of
The design computer 914 may be configured to transform the design information, including the circuit design information 922, to comply with a file format. To illustrate, the file format may include a database binary file format representing planar geometric shapes, text labels, and other information about a circuit layout in a hierarchical format, such as a Graphic Data System (GDSII) file format. The design computer 914 may be configured to generate a data file including the transformed design information, such as a GDSII file 926 that includes information describing the configurable cache of
The GDSII file 926 may be received at a fabrication process 928 to manufacture the configurable cache of
The die 936 may be provided to a packaging process 938 where the die 936 is incorporated into a representative package 940. For example, the package 940 may include the single die 936 or multiple dies, such as a system-in-package (SiP) arrangement. The package 940 may be configured to conform to one or more standards or specifications, such as Joint Electron Device Engineering Council (JEDEC) standards.
Information regarding the package 940 may be distributed to various product designers, such as via a component library stored at a computer 946. The computer 946 may include a processor 948, such as one or more processing cores, coupled to a memory 950. A printed circuit board (PCB) tool may be stored as processor executable instructions at the memory 950 to process PCB design information 942 received from a user of the computer 946 via a user interface 944. The PCB design information 942 may include physical positioning information of a packaged semiconductor device on a circuit board, the packaged semiconductor device corresponding to the package 940 including the configurable cache of
The computer 946 may be configured to transform the PCB design information 942 to generate a data file, such as a GERBER file 952 with data that includes physical positioning information of a packaged semiconductor device on a circuit board, as well as layout of electrical connections such as traces and vias, where the packaged semiconductor device corresponds to the package 940 including the configurable cache of
The GERBER file 952 may be received at a board assembly process 954 and used to create PCBs, such as a representative PCB 956, that are manufactured in accordance with the design information stored within the GERBER file 952. For example, the GERBER file 952 may be uploaded to one or more machines for performing various steps of a PCB production process. The PCB 956 may be populated with electronic components including the package 940 to form a representative printed circuit assembly (PCA) 958.
The PCA 958 may be received at a product manufacture process 960 and integrated into one or more electronic devices, such as a first representative electronic device 962 and a second representative electronic device 964. As an illustrative, non-limiting example, the first representative electronic device 962, the second representative electronic device 964, or both, may be selected from the group of a set top box, a music player, a video player, an entertainment unit, a navigation device, a communications device, a personal digital assistant (PDA), a fixed location data unit, and a computer. As another illustrative, non-limiting example, one or more of the electronic devices 962 and 964 may be remote units, such as mobile phones, hand-held personal communication systems (PCS) units, portable data units such as personal data assistants, global positioning system (GPS) enabled devices, navigation devices, fixed location data units such as meter reading equipment, or any other device that stores or retrieves data or computer instructions, or any combination thereof. Although one or more of
Thus, the configurable cache of
Those of skill would further appreciate that the various illustrative logical blocks, configurations, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, configurations, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a compact disk read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). The ASIC may reside in a computing device or a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a computing device or user terminal.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
Claims
1. A method comprising:
- receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size;
- identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size;
- using the set index to locate at least one tag field of the tag state array;
- identifying a second portion of the address to compare to a value stored at the at least one tag field;
- locating at least one state field of the tag state array that is associated with a particular tag field that matches the second portion;
- identifying a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size; and
- retrieving the cache line.
2. The method of claim 1, wherein the cache is further configurable to have a third size that is smaller than the second size.
3. The method of claim 2, wherein the first portion of the address overlaps two bits of the third portion of the address when the cache is configured to have the third size, wherein the first portion of the address overlaps a single bit of the third portion of the address when the cache is configured to have the second size, and wherein the first portion of the address does not overlap any bits of the third portion of the address when the cache is configured to have the first size.
4. The method of claim 1, wherein retrieving the cache line is performed by a processor of an electronic device.
5. The method of claim 1, wherein the first portion of the address is included in a memory address register.
6. The method of claim 1, wherein the third portion of the address is a state address portion of the address.
7. The method of claim 1, wherein the first size of the cache is 256 kilobits and the second size of the cache is 128 kilobits.
8. The method of claim 2, wherein the first size of the cache is 256 kilobits, the second size of the cache is 128 kilobits, and the third size of the cache is 64 kilobits.
9. An apparatus comprising:
- a cache; and
- a processor coupled to the cache, the processor configured to: receive an address at a tag state array of the cache, wherein the cache is configurable to have one of a first size and a second size that is smaller than the first size; use a first portion of the address as a set index to locate at least one tag field of the tag state array, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size; locate at least one state field of the tag state array that is associated with a particular tag field of the at least one tag field, wherein the particular tag field matches a second portion of the address; and retrieve a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size.
10. The apparatus of claim 9, wherein a first location of the first portion of the address and a second location of the second portion of the address are selected based on whether the cache is configured to have the first size or the second size.
11. The apparatus of claim 10, wherein the first portion of the address overlaps a single bit of the third portion of the address when the cache is configured to have the second size.
12. The apparatus of claim 10, wherein the cache is further configurable to have a third size that is smaller than the second size.
13. The apparatus of claim 12, wherein the first portion of the address overlaps two bits of the third portion of the address when the cache is configured to have the third size.
14. The apparatus of claim 12, wherein the first portion of the address overlaps a single bit of the third portion of the address when the cache is configured to have the second size.
15. The apparatus of claim 12, wherein the first portion of the address does not overlap any bits of the third portion of the address when the cache is configured to have the first size.
16. A non-transitory computer readable medium comprising instructions that, when executed by a processor, cause the processor to:
- receive an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size;
- identify a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size;
- use the set index to locate at least one tag field of the tag state array;
- identify a second portion of the address to compare to a value stored at the at least one tag field;
- locate at least one state field of the tag state array that is associated with a particular tag field that matches the second portion;
- identify a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size; and
- retrieve the cache line.
17. The non-transitory computer readable medium of claim 16, wherein a first location of the first portion of the address and a second location of the second portion of the address are selected based on whether the cache is configured to have the first size or the second size.
18. An apparatus comprising:
- means for receiving an address at a tag state array of a cache, wherein the cache is configurable to have one of a first size and a second size that is smaller than the first size;
- means for using a first portion of the address as a set index to locate at least one tag field of the tag state array, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size;
- means for locating at least one state field of the tag state array that is associated with a particular tag field of the at least one tag field, wherein the particular tag field matches a second portion of the address; and
- means for retrieving a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size.
19. The apparatus of claim 18, wherein a first location of the first portion of the address and a second location of the second portion of the address are accessed based on whether the cache is configured to have the first size or the second size.
20. The apparatus of claim 18, wherein the cache is further configurable to have a third size that is smaller than the second size.
Type: Application
Filed: Mar 19, 2014
Publication Date: Jul 24, 2014
Patent Grant number: 8943293
Applicant: QUALCOMM Incorporated (San Diego, CA)
Inventors: Christopher Edward Koob (Round Rock, TX), Ajay Anant Ingle (Austin, TX), Lucian Codrescu (Austin, TX), Jian Shen (San Diego, CA)
Application Number: 14/219,034