ADDRESS TRANSLATION IN A SYSTEM USING MEMORY STRIPING
ABSTRACT
A system and associated methods are disclosed for translating virtual memory addresses to physical memory addresses in a parallel computing system using memory striping. One method comprises: receiving a virtual memory address, comparing a portion of the received virtual memory address to each of a plurality of entries of a virtual memory address matching table, determining a matching row of the virtual memory address matching table for the portion of the received virtual memory address, shifting a contiguous set of bits of the received virtual memory address, wherein the shifting is performed in accordance with information from the matching row, and combining the shifted contiguous set of bits of the received virtual memory address with high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table, and with low-order bits of the received virtual memory address, to produce a physical memory address.
This application claims the benefit of U.S. Provisional Application No. 61/792,013, filed Mar. 15, 2013.
FIELD OF THE INVENTION
This invention relates to memory address translation in parallel processing computer systems.
BACKGROUND OF THE INVENTION
A particularly compute-intensive activity, employing large arrays of computers, is the performance of searches of the World Wide Web (the “Web”). Google and other companies implement “search engines,” which sort through interconnected web sites and their underlying content from Internet-connected sources all over the world. Within fractions of a second of receiving a search request, a search engine typically returns to the requesting client a listing of applicable sites and textual references. Search engines exploit the massive parallelism of their search algorithms, dividing up web pages amongst their servers such that each server is responsible for searching only a tiny fraction of the Internet.
It is often desirable for processes of a parallel computing task, such as search, to share a single memory address space. However, it would be highly inefficient for thousands of separate processors to access a single unified physical memory bank.
What is needed is a system that provides parallel processing with a single virtual memory address space, but with the ability to make efficient use of memory local to each processor when possible.
SUMMARY OF THE INVENTION
A system and associated methods are disclosed for translating virtual memory addresses to physical memory addresses in a parallel computing system using memory striping. One method comprises: receiving a virtual memory address, comparing a portion of the received virtual memory address to each of a plurality of entries of a virtual memory address matching table, determining a matching row of the virtual memory address matching table for the portion of the received virtual memory address, shifting a contiguous set of bits of the received virtual memory address, wherein the shifting is performed in accordance with information from the matching row, and combining the shifted contiguous set of bits of the received virtual memory address with high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table, and with low-order bits of the received virtual memory address, to produce a physical memory address.
The foregoing summary, as well as the following detailed description of preferred embodiments of the invention, will be better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings embodiments that are presently preferred. It should be understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
Certain terminology is used in the following description for convenience only and is not limiting. Unless specifically set forth herein, the terms “a,” “an” and “the” are not limited to one element, but instead should be read as meaning “at least one.” The terminology includes the words noted above, derivatives thereof and words of similar import.
In another embodiment, the register file 210 has four (4) read ports and two (2) write ports, and the ALU & FPU 240 can operate on two sets of two inputs and produce two outputs. Thus, with four read ports and two write ports the register file 210 is enabled to feed the ALU & FPU 240 the maximum number of values that can be received and write the maximum number of values that can be produced by the ALU & FPU 240. The Load & Store Unit 250 interfaces with the IO processor 120 and allows the processor core 110 to communicate with elements outside of the processor core 110. The control unit 200 directs the flow of processing within the processor core 110 such as sending the program counter 220 to the instruction memory 230 in order to fetch the next instruction of a computer program.
The Control unit 200 sends the Operation Selection communication 610 to the ALU & FPU 240, which selects what operation (or operations) should be performed by the ALU & FPU 240 on the Register Data 600 inputs. The Control unit 200 determines the Operation Selection communication 610 from the Instruction communication 400. The ALU & FPU 240 then performs the operation(s) designated by the Operation Selection 610 communication on the Register Data 600 inputs. The Program Counter 220, Load & Store unit 250, and Instruction Memory 230 do not perform operations during this stage of the exemplary instruction cycle.
The address at which the Result Data 700 is stored in the memory internal to the Register File 210 is designated by the Register Write Address communication 710, which is transmitted from the Control unit 200 to the Register File 210. The Register Write Address 710 is determined by the Control unit 200 from the Instruction communication 400. The Load & Store unit 250, Instruction Memory 230, and Program Counter 220 do not perform operations during this stage of the example instruction cycle.
The Load & Store unit 250 receives the Register Data communication 810 and the Operation selection & offset data communication 820 which affect its behavior. The Load & Store unit 250 determines what data to send in the Address communication 830 by combining the data representing the base address in the Register Data communication 810 and the offset data from the Operation selection & offset data communication 820. In one power-efficient case, the Address data in the Address communication 830 is produced by the Load & Store unit 250 by adding the first piece of data from the Register Data communication 810, representing the base address, with the offset data from the Operation selection & offset data communication 820. The offset data in the Operation selection & offset data communication 820 may be negative resulting in the Address data in the Address communication 830 being less than the original base address data from the Register Data communication 810.
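By way of illustration only (not part of the original disclosure; the function name is hypothetical), the address computation just described reduces to a signed addition, as in the following C sketch:

    #include <stdint.h>

    /* Hypothetical sketch of the Load & Store unit address computation:
     * the base address arrives in the Register Data communication and
     * the (possibly negative) offset arrives with the operation
     * selection. */
    static uint32_t compute_address(uint32_t base, int32_t offset)
    {
        /* A negative offset yields an address below the base address. */
        return (uint32_t)((int64_t)base + (int64_t)offset);
    }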
If the Operation data from the Operation selection & offset data communication 820 indicates that the desired operation is a Store operation then the Load & Store unit 250 also produces a Data communication 840 with data to be written to the Memory 800. In this case the data in the Data communication 840 is determined from the Register Data communication 810 and, in one embodiment, is taken from the second value sent in the Register Data communication 810. The Data communication 840 sent from the Load & Store unit 250 to the Memory 800 also indicates to the Memory 800 to perform a store of the data of the Data communication 840 at the address indicated by the Address communication 830.
If the Data communication 840 does not indicate a Store operation then it indicates a Load operation, which occurs when the Operation selection data from the Operation selection & offset data 820 indicates that a Load operation is desired. In this case the data portion of the Data communication 840 is not used. The Memory 800 receives both the Address communication 830 and the Data communication 840 and performs the indicated operation at the indicated address and, if the indicated operation is a store, uses the indicated data.
During the alternative version of the fourth step of the example instruction cycle depicted in
During the alternative version of the fifth step of the example instruction cycle depicted in
The power-efficient Processor Core implements a pipeline that, in one embodiment, is N stages and supports N virtual processors, with each virtual processor executing a different stage at any given moment in a strict round-robin ordering. In this scenario the Load & Store unit 250 may be called upon to perform a memory operation during each Processor Core 1009 cycle. A power-efficient architecture may use Memory 800 implemented with multiple banks 140, 150, 160, 170, where any individual bank may not have sufficient throughput to initiate a memory operation once per Processor Core 1009 cycle. In this case, the Banks 140, 150, 160, 170 may be operated in a round-robin ordering so that, although an arbitrary bank is not necessarily ready to initiate a memory operation during any cycle of the Processor Core 1009, some bank, either Bank A 140, Bank B 150, Bank C 160, or Bank D 170 will be ready to initiate a memory operation during a given cycle. If no idle cycles are scheduled and the Processor Core 1009 supports N virtual processors and Memory 800 supports M banks of memory, and M divides N evenly, then each virtual processor can be assigned a bank such that it can initiate an operation on its stack values whenever it is executing the Load & Store unit 250.
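As a minimal sketch (assuming, as above, that M divides N evenly; the names and constants are illustrative, not from the disclosure), the round-robin bank assignment might look like the following in C:

    #include <stdint.h>

    #define N_VIRTUAL_PROCESSORS 8u /* N pipeline stages / virtual processors */
    #define M_BANKS              4u /* M memory banks; M divides N evenly     */

    /* With virtual processors executing pipeline stages in strict
     * round-robin order, a fixed assignment guarantees that the bank a
     * virtual processor is assigned is ready on exactly the cycles in
     * which that processor reaches the Load & Store stage. */
    static uint32_t bank_for_virtual_processor(uint32_t vp)
    {
        return vp % M_BANKS;
    }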
In
At any given moment there will be memory space in the banks 1044, 1054, 1048, 1058, 1064, 1068, 1074, 1078 that is dedicated to non-stack data such as Heap Data, system-level functionality such as data transfers, or it may be unallocated and waiting to be allocated should the need arise.
To continue the example of
It is important to note that the typical use case of the Memory Allocator & Organizer 1300 for the Software Memory Requestor 1310 is simply to ask for the starting address of a memory region 1360 that is large enough for its purposes, as it designates in 1350. This interface is strikingly simple relative to the methods that may be employed by the Memory Allocator & Organizer 1300 to satisfy the request.
The Setup box proceeds via link 1404 to the “Merge available memory blocks that are adjacent to each other” box 1406. After this step there will not be any unallocated memory blocks contiguous in the address space that are adjacent to each other. In the case that two such memory blocks exist before step 1406, they are merged into a single block during step 1406.
Step 1406 proceeds via link 1408 to the “Sort available memory blocks by increasing size” step 1410. In this step, the list of unallocated memory blocks (also called “available” memory blocks) is reordered as necessary so that the smaller blocks occur earlier in the list than larger blocks, as measured by the size of each block in the units of allocation (such as bytes). After step 1410 the process proceeds via link 1412 to step 1414 “End Setup” which demarcates the end of the Setup Phase. The beginning of the post-setup phase, step 1418, is then proceeded to via link 1416.
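By way of illustration (a sketch under assumed data structures, not the original implementation), steps 1406 and 1410 of the Setup Phase might be rendered in C as follows, with a hypothetical block_t record holding each free region's starting address and size:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uintptr_t start; size_t size; } block_t;

    /* Step 1406: merge unallocated blocks that are adjacent in the
     * address space.  Assumes blocks[] is sorted by starting address on
     * entry; returns the new block count. */
    static size_t merge_adjacent(block_t *blocks, size_t n)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            if (out > 0 &&
                blocks[out - 1].start + blocks[out - 1].size == blocks[i].start)
                blocks[out - 1].size += blocks[i].size; /* coalesce neighbors */
            else
                blocks[out++] = blocks[i];
        }
        return out;
    }

    /* Step 1410: reorder the free list so smaller blocks occur earlier. */
    static int by_size(const void *a, const void *b)
    {
        size_t sa = ((const block_t *)a)->size, sb = ((const block_t *)b)->size;
        return (sa > sb) - (sa < sb);
    }

    static void sort_by_size(block_t *blocks, size_t n)
    {
        qsort(blocks, n, sizeof *blocks, by_size);
    }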
Step 1418 is the “Receive memory block request” step during which a Software Memory Requestor 1310 sends the size of the memory block being requested (1350) to the Memory Allocator & Organizer (1300). The process then proceeds via link 1420 to the “Set next available block to first block” step 1422. The “next available block” may preferably be a pointer variable used by the allocation process as it iterates through the List of Free Memory Regions 1330.
The process then proceeds via link 1424 to the “Set current block to next block” step 1426, wherein the current block pointer is set to point at the next block in the List of Free Memory Regions 1330. When proceeding directly from step 1418 to 1422 and then 1426 this sets the current block to the first block in the List of Free Memory Regions 1330. When proceeding to 1426 from link 1436, step 1426 sets the current block pointer to the block following the current block pointed to by the current block pointer.
After step 1426 has completed, the process proceeds to step 1430 via link 1428. Step 1430 is the “Is block as large as memory block request” step, during which the block pointed to by the current block pointer is analyzed in terms of its size (e.g., in total bytes of the current block). The size of the current block is compared to the Size of the memory block 1350 requested. If the current block is not at least as large as the Size of the memory block 1350 requested then the process proceeds via the “No” link 1432 to step 1434, otherwise the process proceeds via the “Yes” link 1454 to step 1456.
In the “No” case the process arrives at the “Is current block the last block in the list?” step 1434. In this step, the current block is checked to see if it is the last block in the List of Free Memory Regions 1330. If it is the last entry in the List of Free Memory Regions 1330 then the process proceeds via the “Yes” link 1438 to step 1440, otherwise the process proceeds via the “No” link 1436 to step 1426 which will begin the process of analyzing the subsequent entry to the current block in the List of Free Memory Regions 1330. It is noteworthy that
In the case that the process has gone through all of the entries in the List of Free Memory Regions 1330 without finding a block of adequate size to satisfy the Size of memory block 1350 requested, the process will arrive at the “Adequate block not found, return NULL to requestor” step 1440. During this step a NULL value is sent as the Starting address of block communication 1360, which is a special value understood by the Software Memory Requestor 1310 to mean failure. The process then proceeds via link 1442 to the “Wait for new memory block request” step 1444.
At the “Wait for new memory block request” step the process may proceed via the “New request received” link 1493 back to step 1418 where the process will begin trying to accommodate the new request. Alternatively, step 1444 may proceed to the End 1448 if the Program finished 1446, which may be indicated by a special type of request indicating that all relevant requests that will ever be made by the program are known to have already been requested, an interrupt, an updated shared variable that is intermittently checked, via an interrupt and subsequent deletion of the process, or by some other method.
As previously explained, Step 1430 proceeds to step 1456 via the “Yes” link 1454 in the case that an unallocated block of sufficient size is found. The “Is block the same size as block requested” block further examines the size of the current block and the requested block and selects which step to proceed to based on the outcome. In the case that the requested block and the current block are the same size then the process proceeds via the “Yes” link 1458 to step 1470. It is noteworthy that it may be sufficient for the current block and the requested block size to be within some threshold of difference so that if they differ by only a few bytes, or however many bytes is the minimum unit of allocation, then the algorithm may be configured to treat them as though they are equal, acting upon the judgment that they are close enough. In the case that the requested block and current block are neither equal nor sufficiently close then the process proceeds via the “No” link 1460 to step 1462.
The “Create new entry representing remaining space of current block once requested block is removed from it” step 1462 is proceeded to via the “No” link 1460. In this step 1462 the remaining space is calculated as the size of the current block minus the Size of memory block requested 1350. An entry for the List of Free Memory Regions 1330 is then created to represent the remaining space of the current block once the requested block has been removed from it. The process then proceeds via link 1464 to step 1466.
At “Insert new entry into available memory block list in sorted order” step 1466, the entry created in step 1462 is inserted into the List of Free Memory Regions 1330 in sorted order. It is noteworthy that using binary search the location of the insertion can be found in log(n) steps, where n is the number of entries in the List of Free Memory Regions 1330. Insertion, however, may require a number of steps on the order of n if the List of Free Memory Regions 1330 is stored as an array, because all of the entries occurring after the desired location will have to move further down in the array. If a linked list is used instead of an array for storage of the List of Free Memory Regions 1330 then insertion can be performed after only a few steps (a constant number of steps that does not increase relative to the number of entries in the List of Free Memory Regions 1330); however, use of a linked list disallows use of the binary search that might otherwise replace the linear search implemented by the loop of steps 1426, 1430 and 1434. Thus, the standard method depicted in
The “Remove old entry from available block list” step 1470 is reached by both the “Yes” link 1458 from step 1456, and the 1468 link from step 1466. In this step 1470 the block pointed to by the current block pointer is removed from the List of Free Memory Regions 1330. If step 1466 preceded step 1470 (i.e., link 1468 was followed, rather than “Yes” link 1458) and an array data structure is being used to store the List of Free Memory Regions 1330, then it is possible to combine the insertion and deletion since entries occurring in the List of Free Memory Regions 1330 that occur in the list subsequent to the block pointed to by the current block pointer would be moved one space back during the insertion and then one space forward during the deletion of step 1470, and therefore do not need to be moved. Entries occurring after the desired new entry location of step 1466 and before the current block pointer would still need to be moved back in the list. It is noteworthy that in the case that a large block is being used to satisfy a small request, the desired location of the new entry and the location of the current block pointer may not be far from each other, or may even be the same location, thereby requiring very little processing. Thus the selection of the array data structure for the List of Free Memory Regions 1330 might be a superior choice if it is anticipated that the entries in the List of Free Memory Regions 1330 will be large and the requests will be small.
If a linked list is used to store the List of Free Memory Regions 1330 then the deletion requires only a few steps (a constant number that does not increase with the length of the List of Free Memory Regions 1330). The process then proceeds via link 1472 to step 1474.
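By way of illustration only (not part of the original disclosure), the sorted insertion of step 1466 into an array-backed list might be sketched in C as follows, using the same hypothetical block_t record as in the setup sketch above:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uintptr_t start; size_t size; } block_t;

    /* Step 1466 for an array-backed free list: binary-search the
     * insertion point in log(n) comparisons, then shift later entries
     * one slot down, which is the order-n cost discussed above.  The
     * caller guarantees capacity for one more entry. */
    static void insert_sorted(block_t *list, size_t *n, block_t entry)
    {
        size_t lo = 0, hi = *n;
        while (lo < hi) {                     /* first block >= entry.size */
            size_t mid = lo + (hi - lo) / 2;
            if (list[mid].size < entry.size) lo = mid + 1; else hi = mid;
        }
        for (size_t i = *n; i > lo; i--)      /* shift tail entries down */
            list[i] = list[i - 1];
        list[lo] = entry;
        (*n)++;
    }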
The “Create a new entry for the list of allocated blocks with appropriate size and starting address” step 1474 is proceeded to via link 1472. In this step 1474 an entry representing the memory region allocated to the Software Memory Requestor 1310 is created for future addition to the List of allocated memory regions 1340, including its start address and size. The process then proceeds via link 1476 to step 1478.
The “Insert new entry into allocated memory block list in sorted starting address order” step 1478 is proceeded to via link 1476. In this step 1478 the entry created in step 1474 is inserted into the List of allocated memory regions 1340. The List of allocated memory regions 1340 may be stored as a table or hash table since the deallocation process requires the lookup of the size of the memory region given only its starting address. Because the List of allocated memory regions 1340 is only used for lookup purposes, a data structure supporting constant-time lookup, insertion, and removal of entries is ideal, and therefore the hash table data structure is a reasonable choice. In the case that a table can be made sufficiently large, a standard lookup table can be used and no hashing functionality is required. After step 1478 completes, the process proceeds to step 1452 via link 1480.
In the “Return starting address to requestor” step 1452, the starting address of the memory region found to satisfy the memory region request is returned to the Software Memory Requestor 1310 in the Starting address of block communication 1360. Once step 1452 is completed, the process proceeds to step 1444 via link 1450, wherein the process will either begin processing another memory block request or proceed to the End 1448.
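Putting the loop of steps 1418 through 1452 together, a condensed and purely illustrative C sketch (hypothetical names, array-backed lists with assumed capacity; the near-equal-size threshold and hash-table bookkeeping are omitted) might read:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uintptr_t start; size_t size; } block_t;

    /* free_list is sorted by increasing size, so the first block at
     * least as large as the request (step 1430) is also the tightest
     * fit.  Returns the starting address of the allocated region, or 0
     * (NULL, step 1440) if no single block is large enough. */
    static uintptr_t allocate(block_t *free_list, size_t *n_free,
                              block_t *alloc_list, size_t *n_alloc,
                              size_t request)
    {
        for (size_t i = 0; i < *n_free; i++) {        /* steps 1422-1434 */
            block_t cur = free_list[i];
            if (cur.size < request)
                continue;                             /* "No" link 1432 */
            for (size_t j = i; j + 1 < *n_free; j++)  /* step 1470 */
                free_list[j] = free_list[j + 1];
            (*n_free)--;
            if (cur.size > request) {                 /* steps 1462-1466 */
                block_t rest = { cur.start + request, cur.size - request };
                size_t k = *n_free;
                while (k > 0 && free_list[k - 1].size > rest.size) {
                    free_list[k] = free_list[k - 1];  /* keep size order */
                    k--;
                }
                free_list[k] = rest;
                (*n_free)++;
            }
            /* Steps 1474-1478: record the allocation for later lookup. */
            alloc_list[(*n_alloc)++] = (block_t){ cur.start, request };
            return cur.start;                         /* step 1452 */
        }
        return 0;                                     /* step 1440: NULL */
    }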
The “Compile program” step 1540 is proceeded to via link 1530. In this step 1540 a compiler program is executed using the source code collection and instructions collected during the previous step 1520. The compiler program uses the source code files as input and parses the source code to build internal representations that capture the meaning of the source code from the perspective of the programming language definition. After multiple passes the representations are converted into an assembly file, which contains a human readable form of the computer instructions required to carry out the program on a computer. A version of these instructions is constructed that is machine-readable, which is called a binary file. The binary file is the executable file that can be run on a computer, or computers, to perform the functions of the program. Step 1540 leads directly to step 1560 via link 1550.
“Run program” step 1560 is proceeded to via link 1550 in the case that the program has not yet been run, and also via the “Yes” link 1590 in the case that the program is to be run again. In this step 1560 a computer is commanded to execute the binary program created in step 1540, thereby performing the functions originally desired by the programmer to be carried out by the program. At the end of step 1560 the program finishes executing. The program may finish executing through full completion of the program which is the case when the program reaches its end. Alternatively the program may be forced or signaled to end by a user. After the program has finished the process continues via link 1570 to step 1580.
The “Run program again?” step 1580 is proceeded to via link 1570. In this step 1580 it is determined whether the program should be run again. Typically a user is deciding whether to run the program again but it is also common to have a scheduler program run the program at a given interval or at certain times of the day. In addition there may be a separate program, often called a “watchdog” program, which monitors the execution of the program run in step 1560. If it exits, the watchdog restarts the program. In the case that the program is to be restarted, the process proceeds via the “Yes” link 1590 to step 1560 wherein the program will be run again. Otherwise, the process proceeds to the “End” 1598 via the “No” link 1595.
The “Done optimizing?” step is proceeded to the first time via link 1620 and all subsequent times, if any, via link 1655. If this is the first time through this step, and no optimizing is to be done, then the process proceeds to step 1660 via the “Yes” link 1630. Similarly, if optimizing has already been performed, wherein link 1655 was followed to arrive at the current step 1625, and no more optimization is desired, then the process proceeds to step 1660 via the “Yes” link 1630. If the optimization process is to be run again, or run for the first time, then the process proceeds via the “No” link 1635 to step 1640. One reason that the “No” link 1635 might be followed is if the program performance is known to not yet be adequate and it is suspected that improvements might still be had through further optimization. The “Yes” link 1630 might be followed if performance and efficiency are not a priority, in which case no optimization would be desired. Another reason the “Yes” link 1630 might be followed is if the desired level of performance has been achieved and no further optimization is necessary.
The “Compile program using profiling data to optimize & build in instrumentation” step 1640 is proceeded to via the “No” link 1635. This step differs from the “Compile program” step 1540 of
The current step 1640 also uses previously collected profiling data, if present (i.e. not during the first processing of step 1640, but all subsequent passes through step 1640, if any) to optimize the binary that gets built from the source code. One method by which this may be accomplished is the mapping of memory instructions to the location in source code that originally requested the memory be allocated. This memory request may be augmented with a request for a specific locality, such as very close, or not very close, to the processor core. This in turn can reduce the number of cycles taken by those memory instructions during execution, thereby reducing the number of cycles required by the program to get a unit of work done, which improves performance. This may also improve power efficiency, since earlier completion of a program allows it to exit sooner, enabling the computer to enter a powered down state so that less energy is consumed overall.
Some existing systems use profiling to advise programmers which parts of their source code are the “hot spots,” or executed most frequently, in order to direct the programmer's attention toward those parts of the source code where improvements would have the biggest impact on overall performance. It is unusual, however, to create a system in which profiling is required in order to get a reasonable level of performance. In the present system, for example, it may be the case that a program runs very slowly during its first compilation and run, and this may persist for multiple compilations and executions. The present system may only achieve reasonable performance relative to existing systems after the program has been recompiled with profiling data taken into account. It is the loss of good performance in the uninformed compilation that allows the power efficient processor core 110 to forgo certain functionality in exchange for increased efficiency in the profiling-data-informed recompilation case, which no longer needs the forgone functionality to achieve good performance. The increased efficiency may enable the architecture to reach higher levels of performance-per-watt than a system that is optimized to deliver good performance when running a binary that was compiled without any profiling data to inform the compiler about optimizations. After the program has been compiled with instrumentation, using profiling data to optimize it if available (i.e., not the first time through step 1640), the binary has been created and is ready for execution. Step 1640 then proceeds to step 1650 via link 1645.
The “Run program and collect profiling data” step 1650 is proceeded to via link 1645. In this step 1650, the binary for the program created in step 1640 is started and runs to completion as in step 1560. In addition, the profiling data collecting code instrumentation inserted into the binary in step 1640 collects data that is periodically recorded to a data recorder such as network-attached-storage (NAS), a tape drive, memory, or some other collecting device so that it can be used later during a recompilation step that uses profiling data. The profiling data may also be presented to the user. Source code changes that would improve performance may also be automatically derived from the profiling data and inserted into a new version of the source code, which may then be presented to the programmer for acceptance. After the program has been run and the data collected, the process proceeds to step 1625 via link 1655.
The “Compile program using profiling data to optimize” step 1660 is proceeded to via the “Yes” link 1630. This step 1660 is similar to step 1640 in that the compiler attempts to improve the performance of the program by recompiling using profiling data collected during step 1650. A key difference, however, is that this version of the binary is not instrumented, which itself improves performance because there is some overhead included in the instrumentation code that will not be included in the binary output by step 1660. It may be the case that no profiling data has been collected, which happens if step 1640 was never reached because the first encounter with step 1625 led to step 1660 via “Yes” link 1630. In the event that no profiling is performed, the compilation process of step 1660 does not perform optimizations that require profiling data. The process then proceeds to step 1670 via link 1665.
The “Run & profile program” step 1670 is proceeded to via link 1665. When the program completes, as in scenarios previously described, the process proceeds to step 1680 via link 1675.
The “Run program again?” step 1680 is proceeded to via link 1675. If the program is to be run again, as determined by an initiator such as a scheduler or user, which was previously described, the process proceeds via the “Yes” link 1685 back to step 1670, otherwise the process proceeds via the “No” link 1690 to the “End” 1695.
Similarly to
After initialization the process proceeds via link 1900 to step 1902. It may also be the case that the process of
The “Receive memory block request” step 1902 is proceeded to via link 1900. This step signals the beginning of processing a new memory request. The process immediately proceeds to step 1906 via link 1904.
The “Set next available block to first block” step 1906 is proceeded to via link 1904 during the first iteration of the process depicted in
The “Set current block to next block” step 1910 is proceeded to via link 1908 and via the “No” link 1920. If this step is arrived at by the 1908 link then the current block pointer is set to the beginning of the List of Free Memory Regions 1330. If this step is arrived at via the “No” link 1920, the current block pointer is set to point at the block subsequent to the current block pointed to by the current block pointer in the List of Free Memory Regions 1330. The process then proceeds via link 1912 to step 1914.
The “Is block as large as remaining memory block request?” step 1914 is proceeded to via link 1912. In this step a comparison is made between the size of the remaining memory block request and the size of the block pointed to by the current block pointer. The size of the remaining memory block request starts with a value that is the entire size of the memory block request. If some blocks have already been found that will contribute a memory region to fulfill the memory block request, but the request has not been entirely filled, then the size of the remaining memory block request will be the size of the memory block request minus the sum of the sizes of all such previously found blocks contributing to said fulfillment.
If the size of the remaining memory block request is less than or equal to the size of the memory block pointed to by the current memory block pointer, the process proceeds to step 1932 via the “Yes” link 1922, otherwise the process proceeds to step 1918 via the “No” link 1916.
The “Is current block the last block in the list?” step 1918 is proceeded to via the “No” link 1916. In this step 1918, it is checked whether the current block pointer is pointing to the last block in the List of Free Memory Regions 1330. If it is pointing at this block, then step 1926 is proceeded to via the “Yes” link 1924, which will lead to the current block making a contribution to the memory block request. When the List of Free Memory Regions 1330 is in sorted order, this results in the largest unallocated block being dedicated to fulfill the memory block request in the event that no single block can satisfy the request. If, in the end, it turns out that the memory block request cannot be fulfilled, because the sum of the sizes of all unallocated blocks is less than the size of the memory block request, then the current block will not be dedicated to fulfilling the memory block request because the memory block request will not be fulfilled.
If, instead of pointing at the last block in the List of Free Memory Regions 1330, the current block pointer is pointing at a different block, then step 1910 is proceeded to via the “No” link 1920, which leads to the processing of the block subsequent to the block pointed to by the current block pointer in the List of Free Memory Regions 1330.
The “Select current block (or use alternative selection method) to build part of requested block” step 1926 is proceeded to via the “Yes” link 1924. In this step 1926 the block pointed at by the current block pointer, which is the last block in the List of Free Memory Regions 1330, is selected to contribute toward the fulfillment of the memory block request. Alternatively a fallback selection mechanism may be used, such as a brute force attempt to find the set of unallocated memory blocks that, when their sizes are summed, is as close to the size of the memory block request as possible without going over. The method depicted in
The greedy method of
The “Add virtual-to-physical mapping entry to virtual-to-physical address translator for selected block's portion of the virtually contiguous memory block” step 1930 is proceeded to from step 1926 via the 1928 link and from step 1932 via the 1934 link. In this step 1930, an entry is added to Virtual-to-physical address translator 1860 mapping a contiguous region of virtual memory addresses to a contiguous region of physical addresses of the same size. The physical address region starting address is the starting address of the selected block from step 1926 or 1932. If this is the first block to contribute toward the fulfillment of the memory block request, the virtual address may be determined as the address of the next word following the virtual address of the last word that has previously been allocated in the virtual address space. In this case, the virtual address may need to be aligned to some degree (e.g., a multiple of 4096) if the translation table of the Virtual-to-physical address translator 1860 places such requirements on its entries.
If the selected block is not the first block to contribute to the fulfillment of the memory block request, the virtual address is determined as the previously used virtual address for the previously created entry in the translation table of the Virtual-to-physical address translator 1860, plus the size of the previously added memory block (i.e., the memory block that was selected prior to the currently selected memory block). After the virtual-to-physical mapping entry has been added in step 1930, the process proceeds to step 1938 via link 1936.
The “Create new entry representing remaining space of the current block once requested block is removed from it, if there is remaining space” step 1932 is proceeded to via the “Yes” link 1922. In this step 1932, one of two actions is performed. In the first case, the block pointed to by the current block pointer is the same size as the size of the remaining memory block request. In this case, the current block is selected and the process proceeds via link 1934 to step 1930. If the block pointed to by the current block pointer is larger than the size of the remaining memory block request (or if it is larger by X in the case where X is the minimum difference between an allocation and requested size that is required to justify division of a memory block into two blocks), a new entry is created representing the remaining space that will be left over after the memory region necessary to complete fulfillment of the memory block request is removed from the block pointed to by the current block. This entry is then placed into the List of Free Memory Regions 1330. In a standard case, the List of Free Memory Regions is being maintained in sorted order and the new entry will be inserted in the appropriate location so as to maintain the sorted ordering. The current block is then selected. The link 1934 is then followed to step 1930.
The “Remove old entry from available memory block list” step 1938 is proceeded to via the link 1936. In this step 1938, the selected block is removed from the List of Free Memory Regions 1330. Step 1938 proceeds via link 1940 to step 1942.
The “Create a new entry for the list of allocated blocks with appropriate size and starting address” step 1942 is proceeded to via link 1940. In this step 1942, an entry is prepared for the List of allocated memory regions 1340 representing the current block. Three important pieces of data are associated with the entry. The first is its physical starting address. This value can be taken directly from the starting address of the selected block. The second is its size, which represents the amount of memory that will be deallocated when it is deallocated in the future. The size is calculated as the size of the current block, unless a path through step 1932 was followed on the way to this step 1942, and step 1932 determined to make a new entry due to a mismatch in the size of memory necessary to complete fulfillment of the memory block request and the size of the selected memory block. In this case the size of the memory block is the size of the memory region that is dedicated from the selected block toward the fulfillment of the memory block request. The third piece of data is the virtual starting address, which is the means by which the entry will be looked up in the future for deallocation. We will see in step 1954 that if the selected block is not the first block contributing to the fulfillment of the memory block request, a pointer from the previously selected memory block to the currently selected memory block is made so that when the requested memory block is deallocated the whole set of dedicated memory blocks will be deallocated together. If the currently selected block is the first block to be dedicated to the fulfillment of the memory block request, then the virtual address is calculated in a manner similar to how step 1474 calculated the starting address relevant to that step. The process then proceeds to step 1946 via link 1944.
The “Insert new entry into allocated memory block list maintaining sort by starting virtual address order (Add to hash table if applicable)” step 1946 is proceeded to via link 1944. In this step 1946, the entry created in step 1942 is inserted into the appropriate location in the List of allocated memory regions 1340 using the virtual address of the entry for sorting, hashing, or table index purposes. The process then proceeds to step 1950 via link 1948.
From “Is this the first partial block?” step 1950, the process proceeds to step 1954 via the “No” link 1952 if the selected block is not the first block to be dedicated to the fulfillment of the memory block request. Otherwise the process proceeds via the “Yes” link 1958 to step 1960.
The “Add pointer from previous partial block to current block” step 1954 is proceeded to via the “No” link 1952. This step 1954 is reached if the selected block is not the first block contributing to the fulfillment of the memory block request. In this case, a pointer from the previously selected memory block to the currently selected memory block is made so that when the requested memory block is deallocated the whole set of dedicated memory blocks will be deallocated together. The process then proceeds to step 1960 via link 1956.
The “Subtract current block size from remaining memory block request” step 1960 is proceeded to via the “Yes” link 1958 and link 1956. In this step 1960, the size of the remaining memory block request is calculated so that future iterations of the process of
The “Is the remaining block request zero?” step 1964 is proceeded to via link 1962. In this step 1964 it is determined whether the memory block request has been completely filled. If the memory block request has been completely fulfilled then the size of the remaining memory block request will be zero or less, and the process proceeds via the “Yes” link 1966 to step 1968. Otherwise, step 1974 is proceeded to via the “No” link 1972.
The “Are there any blocks remaining in the available memory block list?” step 1974 is proceeded to via the “No” link 1972. In this step, the List of Free Memory Regions 1330 is analyzed and the number of blocks remaining in it is compared with zero. If the number of blocks remaining in the List of Free Memory Regions 1330 is greater than zero, the process goes back to step 1906 via the “Yes” link 1976. Otherwise, the process has failed to fulfill the memory block request and proceeds to step 1980 via the “No” link 1978.
The “Adequate block not found, return NULL to requestor. Undo list changes” step 1980 is proceeded to via the “No” link 1978. In this step 1980, the failure to fulfill the memory block request is acknowledged and a response is sent back to the Software Memory Requestor 1310. The process responds to the Software Memory Requestor 1310 with failure by returning the NULL as the Starting address of block 1360. The Software Memory Requestor 1310 understands that the NULL response is the failure response and will enter into special error handling or exception handling in order to avoid and prevent future writing of data to the NULL address.
The process also undoes all of the changes to the List of Free Memory Regions 1330 and List of allocated memory regions 1340 that were performed during the process depicted in
The “Return starting address to requestor” step 1968 is proceeded to via the “Yes” link 1966. This step 1968 is encountered once a sufficient set of memory blocks has been found to fulfill the memory block request. During this step 1968, the starting virtual address for the first block contributing to the memory block request fulfillment is returned to the Software Memory Requestor 1310. The Software Memory Requestor 1310 will then proceed further along in its program and will be able to execute memory operations that assume access to a contiguous address space of its originally requested size has been granted starting at the returned virtual address. The process then proceeds via link 1970 to End 1984.
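For illustration only (hypothetical names; the undo logic of step 1980 and sorted re-insertion of remainders are elided), the greedy multi-block path of steps 1902 through 1968 might be sketched in C as follows, emitting one virtual-to-physical mapping entry per contributed block as in step 1930:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uintptr_t start; size_t size; } block_t;

    /* One translator entry: a contiguous virtual range mapped onto a
     * contiguous physical range of the same size (step 1930). */
    typedef struct { uintptr_t vstart, pstart; size_t size; } mapping_t;

    /* Greedy sketch: free_list is sorted by increasing size; each pass
     * takes the first block that can finish the request, or else the
     * largest (last) block, per steps 1914-1926.  Returns the starting
     * virtual address, or 0 (NULL, step 1980) on failure. */
    static uintptr_t allocate_striped(block_t *free_list, size_t *n_free,
                                      mapping_t *map, size_t *n_map,
                                      size_t request, uintptr_t next_vaddr)
    {
        uintptr_t vbase = next_vaddr;
        size_t remaining = request;
        while (remaining > 0) {
            if (*n_free == 0)
                return 0;                       /* steps 1974-1980 */
            size_t i = 0;                       /* steps 1906-1918 */
            while (i + 1 < *n_free && free_list[i].size < remaining)
                i++;
            block_t cur = free_list[i];
            size_t take = cur.size < remaining ? cur.size : remaining;
            map[(*n_map)++] =                   /* step 1930 */
                (mapping_t){ next_vaddr, cur.start, take };
            next_vaddr += take;
            remaining -= take;                  /* step 1960 */
            if (take < cur.size) {              /* step 1932: remainder */
                free_list[i].start += take;     /* (a full version would */
                free_list[i].size -= take;      /* re-insert this entry  */
            } else {                            /* in sorted order)      */
                for (size_t j = i; j + 1 < *n_free; j++)
                    free_list[j] = free_list[j + 1];
                (*n_free)--;
            }
        }
        return vbase;                           /* step 1968 */
    }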
The example depicted in
Application of the process depicted in
The Virtual-to-physical address translator 2100 receives an Input virtual address communication 2110, which proceeds through link 2105 and forks to links 2112 and 2114, which provide input to the “Virtual address part to match” table 2120 and the “Mux low virtual address bits with high physical address bits” unit 2150 respectively. The “Virtual address part to match” table 2120 is depicted as having eight entries, which receive input from link 2112 and send individual outputs 2121-2128 to an Arbiter 2130. The “Virtual address part to match” table 2120 may have more or fewer table entries. A driving force for the architecture is the expense at which additional table entries come. A primary reason for this is that the input to the “Virtual address part to match” table is not matched via simple table lookup. Instead, a more complex matching operation is performed like a hash-table lookup which, when implemented in hardware, is sometimes referred to as a content-addressable-memory, or CAM. The more complex nature of the mechanism by which the input is matched to a table entry results in increased silicon area, power consumption, and latency, when performing the matching operation. Thus, a smaller “Virtual address part to match” table results in lower power, smaller silicon area, and higher performance of the table circuitry. Smaller silicon area results in lower manufacturing cost of the overall integrated circuit utilizing the smaller “Virtual address part to match” table for two reasons: 1) More parts can be fabricated per silicon wafer, which comes at a fixed cost for acquisition and foundry processing, and 2) yield (i.e., percentage of properly functioning parts) improves because the potential area for defects decreases per part, and each defect forces the discard of a smaller total area of silicon because the silicon die on which the defect occurred is smaller. Performance improves because the number of levels of logic that must be traversed to operate the “Virtual address part to match” table within a given number of cycles decreases as the number of table entries decreases, which results in lower maximum latency of the circuit, allowing for a higher clock speed of the circuit and more functional operations per second.
The “Virtual address part to match” table 2120 matches an entry 2121-2128 to an Input virtual address 2110 when the numeric part of the binary number held in the entry matches the Input virtual address 2110 at the same positions. Binary numbers are written with the prefix “0b” to indicate that the number is binary (“0x” is typically used to indicate hexadecimal, and “0o” is used to indicate octal). The first digit following the “0b” prefix is the most significant of the digits following the prefix, and the last digit is the least significant. The digits are divided into groups of four for readability. Values of “x” indicate “don't care” meaning that a match for that table entry does not depend on matching bits in that position. In the example of
The “Virtual address part to match” table 2120 may be implemented more efficiently by forcing the “Don't care” x-values to be in constant positions that are not allowed to change within each entry. Circuits implementing such a table require less logic to implement and thus may hold cost and performance advantages over other table implementations. However, they are less flexible and may require multiple table entries to represent what might have otherwise required only a single table entry. For example, if “Don't care” x-values were forced to be in the first seven bit positions, but were not allowed in the eighth bit position, then the first entry 2121 in the “Virtual address part to match” table 2120 would have required two entries to carry out the same mapping instead of the single entry 2121 of
This principle of increasing the utility of each entry of the tables held in the “Virtual-to-physical address translator” 2100 is shown subsequently to extend further in the system in order to more dramatically reduce the number of entries required for these tables 2120 and 2140, thereby reducing cost and increasing performance for use cases in which the improved mappings are valuable.
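Returning to the matching rule of table 2120 described above, one illustrative software model (an assumption for exposition, not the original hardware) treats each entry as a ternary (value, care-mask) pair, where the care-mask's zero bits are the “don't care” x-values:

    #include <stdbool.h>
    #include <stdint.h>

    /* One "Virtual address part to match" entry modeled as a ternary
     * pair: care bits of 0 mark the "don't care" x-value positions. */
    typedef struct { uint64_t value; uint64_t care; } cam_entry_t;

    static bool cam_matches(cam_entry_t e, uint64_t vaddr)
    {
        /* Match when every cared-for bit of the address equals the
         * entry; x-value positions never affect the comparison. */
        return ((vaddr ^ e.value) & e.care) == 0;
    }

    /* Hypothetical 13-bit entry 0b 0 0 0 1 1 1 x x x x x x x:
     * value = 0x0380, care = 0x1F80; it matches any address whose
     * upper six bits are 0b000111. */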
The Arbiter unit 2130 outputs a single Entry # via link 2132 to the Corresponding physical addresses table 2140. The Arbiter 2130 is necessary since it is possible that multiple entries match the Input virtual address 2110 provided via link 2112. The Arbiter 2130 may be constructed such that the first entry, starting at Entry #1 and ending at Entry #8, that matches the Input virtual address 2110 is identified as the singularly matching entry. Alternatively, the Arbiter 2130 may be constructed such that the matching entry with the largest number of “Don't care” x-values, which might be considered the “largest” entry, is identified as the singularly matching entry. A second alternative implementation of the Arbiter 2130 identifies the matching entry with the smallest number of x-values as the singularly matching entry. This last method allows larger mapping entries to be overruled by smaller mapping entries, thereby allowing single entries to declare large rules to which exceptions are allowed. The first method described above, wherein the first matching entry is identified as the singularly matching entry, may implement this last method of allowing the smallest mappings to overrule the largest mappings, when the “Virtual address part to match” table 2120 has its entries sorted in order of increasing mapping size, and in which no two entries having identical x-values in the same positions are allowed to have matching numerical bit values. Implementation of the first method via the third may be more efficient because the tie-breaking method does not depend on values held within the entries 2121-2128, but instead simply on the entry position in the table. The system is able to operate under different Arbiter 2130 implementations such as those described above. However, implementations in which the smallest mapping is selected by the Arbiter are often a preferred embodiment, whether using the sorted technique of the first method or by some other technique.
It is noteworthy that some integrated circuits have forced system software to implement this rule of having one and only one entry capable of matching a given Input virtual address 2110 and, in the case where it was not followed, the hardware was permanently destroyed. Such hardware is implemented at increased risk in order to achieve increased circuit efficiency by handing the complexity over to the software system. The system described herein is applicable to both the destructive and nondestructive forms of “Virtual address part to match” tables 2120 and Arbiters 2130.
The “Corresponding physical addresses” table 2140 receives input from the Arbiter 2130 and is a simple table lookup because the output from the Arbiter 2130 is a value from 1-8 indicating which entry 2141-2148 is to have its “Corresponding physical address” output onto link 2152.
The “Mux low virtual address bits with high physical address bits” unit 2150 receives input from the Input virtual address 2110 via input 2114 and from the “Corresponding physical addresses” table 2140 via link 2152. This unit 2150 integrates the virtual address bits received as input 2114 that are not to be remapped with the physical address bits received as input 2152 that are to be remapped. One method by which this integration may occur is to take the numeric physical address bits received from the “Corresponding physical addresses” table 2140 via input 2152 and replace the x-values with the corresponding bit values from the Input virtual address 2110 received via input 2114.
An alternative method for implementation of the unit 2150 is to derive the virtual address bits using the x-values from the corresponding entry in the “Virtual address part to match” table 2120, rather than from the x-values of the entry in the “Corresponding physical addresses” table 2140, which may not be in identical locations, although using identical locations enables a simpler implementation and is a preferred embodiment. This alternative method is completed by adding together the virtual address part and physical address part instead of bitwise-OR'ing them as in the first method. This allows the start of physical address regions to be aligned in memory at a different granularity than the alignment of the virtual address region matched by the “Virtual address part to match” table 2120, thereby enabling increased flexibility in how the physical memory blocks are utilized. The present system is able to utilize either of these techniques in integrating input 2114 and input 2152 in unit 2150. Unit 2150 then provides its computed result as output 2160 which comprises the Output physical address 2162 which signifies where the virtual address lies in physical memory.
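An illustrative C rendering of the two integration methods for unit 2150 (the function names and the mask representation of the x-value positions are assumptions, not from the disclosure):

    #include <stdint.h>

    /* xmask marks the x-value positions whose bits come from the input
     * virtual address; phys holds the entry's numeric physical bits. */
    static uint64_t integrate_or(uint64_t vaddr, uint64_t phys, uint64_t xmask)
    {
        /* First method: replace the x-values with the corresponding
         * bits of the Input virtual address via bitwise OR. */
        return phys | (vaddr & xmask);
    }

    static uint64_t integrate_sum(uint64_t vaddr, uint64_t phys, uint64_t xmask)
    {
        /* Alternative method: add the parts instead, allowing physical
         * regions to start at a different alignment than the matched
         * virtual address region. */
        return phys + (vaddr & xmask);
    }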
Portions of the Input virtual address 2310 that represent the Regularly translatable virtual address region 2311 are noted in the upper left of
The “Within-bank offset region” 2370 also represents a range of bits within the entries of the “Corresponding physical Address Region” table 2340 that contain regularity. In this case the regularity is that all of the entries that contribute to a virtual contiguous address space have the same value, namely “0b 111”, for the 8th, 9th, and 10th bits of the entries. The regularity of the x-values, which comprise the first 7 bits of the entries (2341, 2342, 2343, 2344), signifies that each entry maps 128 bytes of physical address space, which is one eighth of a memory bank in the example of
The output 2352 of unit 2340 is sent to unit 2350, which may operate similarly to unit 2150 of
We can see that the single active entry 2441 of the eight entries 2441-2448 of unit 2449 has seven x-values, thereby signifying a physical address memory region mapping 2^7=128 bytes. The single entry has an enhanced meaning in the system due to additional attributes 2470, 2480, 2490 associated with the entry 2441. The “Number of bits to shift” attribute 2470 for the entry 2441 has a value of two, the “Degree of shift” 2480 a value of three, and the “Shift-region starting index” 2490 a value of seven. These three values instruct the downstream units 2450, 2495, 2497 to shift the two bits (designated by 2470) starting at bit 7 (designated by 2490) left by three bit positions (designated by 2480). Thus the downstream units carry out an operation whereby a binary virtual address of thirteen bits:
0b B12 B11 B10 B9 B8 B7 B6 B5 B4 B3 B2 B1 B0,
has its bits shifted from indices 7 & 8 to 10 & 11 respectively (three places) to become:
0b B12 B8 B7 B9 0 0 B6 B5 B4 B3 B2 B1 B0,
where BX is the value of the bit positioned at index X in the Input virtual address 2310 (where index 0 is the least significant bit position). The number of bits, the degree of the shift, and the starting index of the shift are configurable using the table columns 2470, 2480, and 2490. The shift occurs in the Middle Virtual Bits Shifter 2495 utilizing the relevant pieces of the Input virtual address 2310 fed over input 2415 from link 2305. The shift is controlled by values held in columns 2470, 2480, 2490 for the singularly matching entry determined by the arbiter 2330 and fed as input 2332 to unit 2449. The column values 2470, 2480 for the singularly matching entry have their values transmitted as output from 2449 which become inputs 2454 and 2456, respectively, for the Middle Virtual Bits Shifter 2495. The Middle Virtual Bits Shifter 2495 performs the operation described above and places the relevant pieces of output onto output 2458, which are received as input by the “Shifted bits integrator (SUM or bitwise OR)” unit 2497. It is noteworthy that the Shift-region starting index 2490 may not need to be transmitted to the Middle Virtual Bits Shifter 2495 if the value is held as a constant or single configurable value and integrated into the circuitry that constructs the Middle Virtual Bits Shifter 2495. The column is depicted as 2490 to point out this important value; however, its storage location may be unique for each entry 2441-2448 or even internal to unit 2449.
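The shift described above can be expressed compactly in software; the following C sketch is illustrative only (for simplicity it also passes the untouched surrounding bits through, whereas, as noted later, a hardware implementation may leave the low bits to unit 2450):

    #include <stdint.h>

    /* Sketch of the Middle Virtual Bits Shifter: move 'nbits' bits of
     * vaddr, starting at bit index 'start' (column 2490), left by
     * 'degree' positions (column 2480), zeroing the vacated source
     * bits and overwriting the destination bits. */
    static uint64_t shift_middle_bits(uint64_t vaddr, unsigned nbits,
                                      unsigned degree, unsigned start)
    {
        uint64_t nmask = (1ull << nbits) - 1;
        uint64_t field = (vaddr >> start) & nmask;    /* bits to move   */
        uint64_t clear = (nmask << start)             /* vacated source */
                       | (nmask << (start + degree)); /* overwritten dst */
        return (vaddr & ~clear) | (field << (start + degree));
    }

    /* With nbits = 2, degree = 3, start = 7, the 13-bit input
     * 0b B12 B11 B10 B9 B8 B7 B6 B5 B4 B3 B2 B1 B0 becomes
     * 0b B12 B8  B7  B9 0  0  B6 B5 B4 B3 B2 B1 B0, as above. */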
In one preferred embodiment, some of the positions of the x-values in the “Corresponding physical addresses” column 2440 are held constant or configurable by only a single value that is not configurable per entry, thereby decreasing the overhead of the hardware implementation of each entry 2441-2448. One preferred embodiment declares a portion of the x-value bit positions as constant or configured via a single parameter, and the remaining bit positions as declared by per-entry column value 2490. In one preferred embodiment, the x-value bit positions that can be individually configured for each of the entries 2441-2448 in the Corresponding physical addresses column 2440 are transmitted over link 2451 to the Middle Virtual Bits Shifter 2495 via input 2452. In that preferred embodiment the bit index to the left of the left-most x-value bit position is identified as the Shift-region starting index and is interpreted as such by the Middle Virtual Bits Shifter 2495.
In one preferred embodiment of the system, the value for the “Number of bits to shift” column 2470 is derived by subtracting the index of the position of the left-most x-value.
The system may also derive the value of the Number of bits to shift 2470 for a given entry (one of 2441-2448) from the left-most positions of the x-values in the Corresponding physical addresses column 2440 and the Virtual address part to match column 2420. In the example of
Unit 2450 merges the bits of the Input virtual address 2310, conveyed via 2305 as input 2414, at those positions at which x-values are present in column 2440 of the singularly matched entry 2441. Those bits are merged with the non-x-values of the corresponding physical addresses 2440 of the entry 2441. Continuing the example, the output 2496 from unit 2450 would be:
0b 0 0 0 1 1 1 B6 B5 B4 B3 B2 B1 B0
Note that those portions of inputs 2496 and 2458 to the “Shifted bits integrator (SUM or bitwise OR)” unit 2497 that will always be the same do not need to be produced by both units 2450 and 2495, since unit 2497 only requires one of those units to transfer the bits to it. The unit performing the processing may be selected to optimize the hardware implementation for shorter wiring, thereby reducing the cost of implementation. In one preferred embodiment, only the 2450 unit conveys the lower bits. In the example, these would be the least significant bits B6 through B0, transmitted over 2496 but not 2458, to improve the efficiency of the implementation.
The “Shifted bits integrator (SUM or bitwise OR)” unit 2497 merges the results from the upstream units 2450 & 2495 provided by inputs 2496 & 2458 and provides the “Output physical address” 2462 via output 2460. The merge can be performed by bitwise-OR'ing the relevant inputs together or by summing. In this example the input 2496 may be logically represented as:
0b 0 0 0 1 1 1 B6 B5 B4 B3 B2 B1 B0
and input 2458 may be represented as:
0b B12 B8 B7 B9 0 0
which can be merged in unit 2497 using bitwise OR to become output 2460:
0b B12 B8 B7 1 1 1 B6 B5 B4 B3 B2 B1 B0
The coordinated transformations of input to output of the constituent units 2420, 2449, 2450, 2495, and 2497 of the Virtual-to-physical address translator 2400 enable a single table entry 2421, 2441 to create one virtually contiguous address region from multiple physically discontiguous memory regions 2290, 2291, 2292, 2293.
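As a further illustration, the full per-entry transformation may be sketched in C. This sketch assumes (hypothetically) that the matched row supplies its non-x physical bits as a value phys_bits and a mask x_mask with ones at the x-value positions, along with the three shift attributes; it reuses the shift_middle_bits sketch above.

    /* Sketch of the coordinated transformation: unit 2450 merges the
     * virtual bits at x-positions with the row's non-x physical bits
     * (output 2496), unit 2495 supplies the shifted middle bits with
     * the low bits suppressed (output 2458), and unit 2497 integrates
     * the two with a bitwise OR. */
    static uint64_t translate_entry(uint64_t vaddr, uint64_t phys_bits,
                                    uint64_t x_mask, unsigned n_bits,
                                    unsigned degree, unsigned start)
    {
        uint64_t merged_low = (vaddr & x_mask) | phys_bits;       /* unit 2450 */
        uint64_t low_mask   = (UINT64_C(1) << start) - 1;
        uint64_t shifted    = shift_middle_bits(vaddr, n_bits, degree, start)
                              & ~low_mask;                        /* unit 2495 */
        return merged_low | shifted;                              /* unit 2497 */
    }

Applied to the running example (x_mask covering bits 0-6, phys_bits placing 1 1 1 at bits 9-7, n_bits = 2, degree = 3, start = 7), this reproduces the merged output 0b B12 B8 B7 1 1 1 B6 B5 B4 B3 B2 B1 B0 shown above.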
The memory banks can be numbered, for example, starting with 1, such that the last bank is numbered N1*N2*N3*N4. When the addresses are ordered according to bank number and each bank comprises 1 Megabyte (MB) of memory, the first memory bank holds the first 1 MB of physical memory, including addresses 0x00000-0xFFFFF, and the second bank holds the second MB, including addresses 0x100000-0x1FFFFF. If N1=N2=N3=N4=16, then the last memory bank will process memory addresses 0xFFFF00000-0xFFFFFFFFF. The 65536 memory banks together provide 64 Gigabytes of addressable memory.
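With this layout, the bank number and within-bank offset fall directly out of the address bits, as the following short sketch illustrates (the function names are illustrative only):

    #include <stdint.h>

    /* With 1 MB (2^20 byte) banks ordered by bank number, bits 20 and
     * above select the bank and bits 19..0 are the offset within it. */
    static unsigned bank_number(uint64_t paddr)   /* banks numbered from 1 */
    {
        return (unsigned)(paddr >> 20) + 1;
    }

    static uint32_t bank_offset(uint64_t paddr)
    {
        return (uint32_t)(paddr & 0xFFFFF);
    }

For example, address 0xFFFF00000 yields bank number 65536, the last bank in the 64 GB system.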
Each square in the figure represents one unit of memory within a memory bank.
The range of columns comprising the first 1 MB is labeled 2607, the first 2 MB is labeled 2608, the first 4 MB is labeled 2609, the first 8 MB is labeled 2657, the first 16 MB is labeled 2658, the first 32 MB is labeled 2659, the first 64 MB is labeled 2639, the first 128 MB is labeled 2699.1, the first 256 MB is labeled 2699.2, the first 512 MB is labeled 2699.3, the first 1 GB is labeled 2699.4, the first 2 GB is labeled 2699.5, the first 4 GB is labeled 2699.6, and the first 8 GB (in fact representing all the memory in the example) is labeled 2699.7.
For this example, in cases where an ellipsis column occurs and the units immediately to the left and right of the same ellipsis column in the same row (one of 2640-2655) are filled, the units represented by the ellipsis are implied as filled (see note 2606 of key 2602). Each filled square represents one free unit ready for allocation 2604. Therefore, two units in the same row separated by an ellipsis column implies many intervening free units in the same row, ready for allocation.
The high bits of the address range 2600 for a given memory bank are listed above its column 2610-2638. The system is able to treat physically discontiguous units as a virtually contiguous address range with only a single entry in each of the tables (2420, 2449) of the Virtual-to-physical address translator 2400. This enables the creation of very large virtually contiguous address regions from many physically discontiguous memory units. In some cases, thousands or more physically discontiguous units join to create very large virtual address regions, such as the memory units comprising the free set of units 2681, 2682, 2683, 2684, 2685, 2686, 2687, 2688, which combine to create a 64 MB virtually contiguous address region with 1024 physically discontiguous units. A translation table capable of holding 1024 entries to represent the whole virtual address region at once would require considerable resources and place significant performance and cost constraints on the resulting system. The present system's ability to represent such a virtually contiguous address space with only a single table entry, or pair of entries across the coordinated tables, is a distinct advantage in cases where efficient translation of the full address region (such as in the case of random lookups) at any given moment via the primary translation mechanism is a design criterion.
This is of increased importance when many virtual processors share a data structure that no single virtual processor has the memory space to accommodate. The importance also increases when the cost of accessing the data structure at an unpredictable time is low, as in the case of cores that are time-sliced so that each virtual processor runs relatively slowly, thereby reducing the penalty of memory access latency and increasing the benefit of reduced overhead for each memory access. For these reasons, use of the primary memory address translation mechanism for arbitrary accesses within a virtually contiguous address space may be important, and the system of high value.
An example comprising 2670 & 2671 may be allocated as a single 2 MB VCMR spanning banks 2623 & 2624 and all banks between (32 banks in total), with one unit per bank. An example comprising 2679 & 2680 may be allocated as a single 1 MB VCMR spanning banks 2621 & 2622 and all banks between (16 banks in total), with one unit per bank. An example comprising 2678 may be allocated as a single 512 KB VCMR spanning banks 2614, 2615, 2616, & 2617 with two units per bank.
An example comprising 2674 & 2675 may be allocated as a single 32 MB VCMR spanning banks 2631 & 2632 and all banks in-between (512 in total) with one unit per bank. An example comprising 2676 & 2677 may be allocated as a single 1 GB VCMR spanning banks 2637 & 2638 and all banks in-between (4096 in total) with four units per bank.
Finally, an example comprising 2689-2699 may be allocated as a single 1 GB VCMR spanning banks 2610 through 2638 and all banks in-between (8192 in total) with two units per bank.
The data structure or data for each bin preferably contains a pointer to a parent bin. Put another way, a bin for a given level LX contains a pointer to a parent bin of level LX−1. Additionally, the data structure or data associated with each bin preferably contains two pointers to two child bins at level LX+1, one designated “left” and one designated “right”. Every virtual processor in the system is associated with a particular memory bank and maintains a pointer to the bin 2764, 2765, 2766 that holds 1-wide blocks (2767) for that bank. The parent of said 1-wide bin (one of 2760, 2761, 2762 from the set of 65536 L15 bins 2763) will contain 2-wide blocks that include that bank. In one preferred embodiment of the system, the blocks are restricted to be aligned according to their width. This limits the positions at which an n-wide block can begin. If the first bank is at index 0, and the second at index 1, and so on, then 2-wide blocks must begin at an even bank number (0, 2, 4, 6, 8, etc.). Furthermore, 4-wide blocks must begin at a bank with an index that is divisible by 4, 8-wide blocks must begin at a bank with an index that is divisible by 8, and so on, for all widths.
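The width-alignment restriction reduces to a simple test when widths are powers of two; a minimal sketch in C, with a hypothetical helper name:

    #include <stdbool.h>

    /* A block spanning 'width' banks (width a power of two: 1, 2, 4,
     * 8, ...) may begin only at a bank index divisible by width. */
    static bool block_start_is_aligned(unsigned bank_index, unsigned width)
    {
        return (bank_index & (width - 1)) == 0;
    }

Thus an 8-wide block may begin at banks 0, 8, 16, and so on, but never at bank 4.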
The “Receive memory allocation request” step 3004 is proceeded to via link 3003. In this step 3004, the Memory Allocation & Organizer 2900 receives a memory allocation request of a particular “Size of memory block” 2926 from a particular Software Memory Requestor 2928. After the completion of Step 3004, the process proceeds to step 3006 via link 3005.
The “Select a memory block according to search algorithm” step 3006 is proceeded to via link 3005. In this step 3006, a search algorithm is employed which processes the tree-of-bins data structure described above in order to select a free memory block capable of satisfying the request. After the completion of Step 3006, the process proceeds to step 3008 via link 3007.
The “Break selected block apart if possible” step 3008 is proceeded to via link 3007. In this step 3008, the block selected using the search algorithm of step 3006 is broken into smaller blocks if possible. This is possible if the block that was found using the search algorithm was larger (possibly significantly larger) than the original memory allocation request. To avoid allocating a block larger than necessary, which would otherwise be wasteful, a breaking-up process is employed in order to reduce the size of the block that is dedicated to the fulfillment of the memory allocation request. The pieces that will not be used toward the fulfillment of the memory allocation request are organized in the appropriate bins of the tree data structure so that they can fulfill future memory allocation requests. After the completion of Step 3008, the process proceeds to step 3010 via link 3009.
In the “Return block to requestor” step 3010, the memory block that has been dedicated to the fulfillment of the memory allocation request is returned as the Starting address of block 2927 to the original Software Memory Requestor 2928. After the completion of Step 3010 the process proceeds to step 3012 via link 3011.
In the “Wait for next request” step 3012, the Memory Allocation & Organizer 2900 waits for the next memory request 2926 from a Software Memory Requestor 2928. After the completion of Step 3012, the process either returns to step 3006 via the “New request received” link 3013 or, if all of the memory requests have been processed, proceeds to the “End” step 3015 via the “Last request has been processed” link 3014.
The process stays in the “Wait for new memory allocation request” step 3104 until a Software Memory Requestor 2928 requests a new memory allocation. After the completion of Step 3104, the process proceeds to Step 3106 via the “Received” link 3105.
In the “Set current bin level Lc to level of memory request Lr” Step 3106, a variable named Lc is initialized with the value Lr. Lr represents the level of the memory request. The level of the memory request is an additional communication sent by the Software Memory Requestor 2928 to the Memory Allocation & Organizer 2900, which indicates the level of bin at which the Software Memory Requestor 2928 would ideally like the block for the fulfillment of the memory allocation request to be drawn. Implicit in the designation of the level is the designation that the bank that is local to the Software Memory Requestor 2928 is included in the block. In another preferred embodiment, the Software Memory Requestor can further specify a bit indicating whether or not it is a priority for the bank local to the Software Memory Requestor 2928 to be included in the block selected for the fulfillment of the memory block allocation request. After the completion of Step 3106, the process proceeds to step 3108 via link 3107.
In the “Set current bank Kc to the bank local to the memory requester making the request Kr” Step 3108, the variable Kc is defined and set to the bank that is local to the memory requestor 2928, which is defined as Kr. The process then proceeds to step 3110.
In the “Set current bin Bc to the bin at level Lc containing blocks that contain bank Kc” Step 3110, the variable Bc is defined and set to a particular bin. The bin that Bc is set to is within level Lc, which is initially the bin level requested by the Software Memory Requestor, Lr. In one preferred embodiment, the restriction that memory blocks be aligned with their size ensures that only one bin at level Lc contains blocks that may utilize units of bank Kc. The bin Bc is set to that bin at Lc that contains blocks that include units from bank Kc. Upon repeat visitations to step 3110, if any, the level Lc will differ from Lr, having been modified (e.g., decremented, per step 3114) between the current visitation to step 3110 and the next. In one preferred embodiment, it is possible for the value Lc to return to Lr in subsequent visits to step 3110, allowing a more exhaustive search of bins for the selection of the block dedicated to the fulfillment of the memory block allocation request. After the completion of Step 3110, the process proceeds to Step 3112 via link 3111.
The “Does the current bin contain a block of size requested (Sr) or larger?” Step 3112 is proceeded to via link 3111. The size of the requested memory block is communicated via 2926 and has the value Sr. In this step 3112, the blocks held within bin Bc are analyzed to ascertain whether there is a block of sufficient size to fulfill the memory block request. After the completion of Step 3112, the process proceeds to Step 3117 via the “Yes” link 3116 if such a block exists, or to Step 3114 via the “No” link 3113 otherwise.
The “Set current level Lc according to search algorithm. E.g., Decrement Lc (In C: “Lc--;”)” Step 3114 is proceeded to via the “No” link 3113. This step is reached in the event that the bin Bc at level Lc does not contain a memory block of sufficient size to satisfy the memory block allocation request. In order to find a satisfactory memory block, a new bin must be selected for analysis. The first part of selecting the next bin for analysis is modifying the parameters by which the bin is selected. In the case of step 3114, the level of the bin that will be searched, Lc, is modified. One way in which it can be modified is by decrementing its value from its current value Lc to a new value of Lc−1, which is written in the C programming language as “Lc--;”. This causes the search algorithm to proceed higher in the tree of bins, where wider blocks are held.
In another embodiment the algorithm might proceed differently, such as proceeding up in the tree during some steps of the search and downwards at other times, so as to perform a more exhaustive search of the tree leaves before proceeding to the upper levels of the tree. Such a technique might be employed so as to preserve blocks that remain in bins at the upper levels of the tree so that they are dedicated toward those memory allocation requests for which they are absolutely needed, thereby improving the likelihood that future requests for larger memory blocks will be able to be satisfied. In another preferred embodiment, the Software Memory Requestor 2928 designates the search algorithm that is to be employed when it makes the memory block allocation request. The compiler may vary the technique requested by a given line of code, or in a given runtime situation, so as to optimize the layout of the data structures in memory that is eventually arrived upon. The compiler may use profiler feedback, gathered over separate runs of a user program, to drive this search. After the completion of Step 3114, the process proceeds to step 3110 via the 3115 link.
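The default upward search of steps 3106 through 3114 can be sketched in C as follows. The types and helpers (Bin, bin_at, contains_block_of_size) are hypothetical, and the tree is represented only by its level indices, with level 1 at the root as described above.

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Bin Bin;               /* hypothetical bin type */
    Bin *bin_at(int level, int bank);     /* the one bin at 'level' whose blocks contain 'bank' */
    bool contains_block_of_size(Bin *b, size_t sr);

    /* Sketch of the default search: begin at the requested level Lr in
     * the bin covering the requester's local bank Kr (steps 3106-3110),
     * and decrement the level, moving up the tree toward wider blocks
     * (step 3114, "Lc--;"), until a bin holds a block of at least Sr. */
    Bin *select_bin(int lr, int kr, size_t sr)
    {
        for (int lc = lr; lc >= 1; lc--) {
            Bin *bc = bin_at(lc, kr);             /* step 3110 */
            if (contains_block_of_size(bc, sr))   /* step 3112 */
                return bc;                        /* proceed to step 3117 */
        }
        return NULL;                              /* no suitable bin found */
    }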
The “Select the smallest block with a size at least Sr. Do multiple blocks tie for smallest?” Step 3117 is proceeded to via the “Yes” link 3116. This step is reached once a bin Bc has been found that contains at least one memory block of sufficient size to satisfy the memory block allocation request. This step 3117 determines whether multiple such blocks are contained in bin Bc, and if so the set of satisfactory blocks within Bc may presently be said to be tied for selection. In this case the process proceeds via the “Yes” link 3118 to step 3119 for a tie-breaking process; otherwise the “No” link 3130 is followed to step 3131, since no tie-breaking process is necessary. After the completion of Step 3117 the process proceeds to step 3131 via the “No” link 3130 or to step 3119 via the “Yes” link 3118.
The “Select the block that would require the most units to be deallocated to allow it to merge with a block and move to a bin with a smaller level value. Do multiple blocks tie by this metric?” Step 3119 is proceeded to via link 3118. In this step 3119, one tie-breaking technique is attempted in which the units in other parts of the tree are analyzed as to whether they are presently allocated or unallocated. The idea is that superior allocations are arrived upon by improving the likelihood that future deallocations of memory blocks will result in larger memory blocks higher in the tree-of-bins data structure.
The “Assume the merges hypothesized in previous step have occurred, select the component block whose merged block requires the most units to be deallocated to enable another merge. Does a tie remain by this metric?” Step 3121 is proceeded to via link 3120. In this step, the tie-breaking metric of step 3119 is taken further so that those blocks requiring the same number of units to be deallocated before they can merge and become blocks at a higher level in the tree have their ties broken. (Note that since the highest level of the tree has a level index of 1, a higher level in the tree implies a lower index for that level.) The ties are broken in this step by comparing the number of units that must be deallocated in order for the blocks to merge twice, once to move up a single level in the tree, and again to move up two levels in the tree. This tie-breaking method can be employed more than twice if ties persist.
In another preferred embodiment the merges hypothesized by step 3121 may be vertical merges within a bin, meaning that they do not integrate additional banks but rather additional adjacent units within those banks. In yet another embodiment the selection of which merge to hypothesize during a given encounter with step 3121 is determined by the original memory block allocation request and can be customized by the compiler during recompilation in order to improve the allocations that are arrived at for better performance. After the completion of Step 3121, the process proceeds to Step 3131 via the “No” link 3128 or to Step 3123 via the “Yes” link 3122.
The “Is the previously hypothesized merged block the largest possible block of this bin (i.e. all units in all the banks spanned by blocks in this bin are used in the previously hypothesized block)?” Step 3123 is proceeded to via link 3122. In one preferred embodiment, this step analyzes whether the hypothesized block of step 3121 spans all units of the banks that comprise it, and proceeds via “Yes” link 3124 if so; otherwise it proceeds via “No” link 3126. In another preferred embodiment, the question analyzed in step 3123 is whether the block hypothesized previously in step 3121 spanned all possible banks. If so, the process proceeds via “Yes” link 3124 to step 3125. Otherwise, it proceeds via “No” link 3126 to step 3121. After the completion of Step 3123, the process proceeds to step 3121 via the “No” link 3126 or to Step 3125 via link 3124.
The “Select amongst most recent ties randomly” Step 3125 is proceeded to via link 3124. In this step, the remaining blocks that tie for selection are selected amongst randomly because the set of heuristics employed in the loop including steps 3121 and 3123 have failed to break the tie. After the completion of Step 3125 the process proceeds to Step 3131 via link 3127.
The “Continue processing the request by dividing the selected block, if possible, while still satisfying the requested size Sr.” Step 3131 is proceeded to via link 3127 (or via the “No” links 3128 and 3130). This step signifies the block-division process described below.
The “Continue” Step 3133 is proceeded to via link 3132, and is the same as step 3133 at which the block-division process described next begins.
The “Continue” Step 3133 begins the block-division process, proceeding to step 3204 via link 3203.
The “Can selected block be split into two blocks, each still spanning the width of the same banks they originally span, and each at least as large as Sr?” Step 3204 is proceeded to via link 3203. In this step, the number of units per bank in the selected block is reduced if possible, and this is iteratively performed through the loop including steps 3204 and 3208. After the completion of Step 3204, the process proceeds to Step 3215 via the “No” link 3214 or continues to step 3208 via the “Yes” link 3205.
The “Perform split, removing selected block from Bc and creating two new blocks each half its size, placing an entry for each in bin Bc.” Step 3208 is proceeded to via link 3205. In this step, the split hypothesized in step 3204, which reduces the number of units per bank dedicated to the fulfillment of the memory block allocation request, is performed. This is done iteratively, through the loop involving steps 3204 and 3208, until no longer possible. After the completion of Step 3208, the process returns to step 3204 via link 3207 and repeats the splitting until proceeding to Step 3215 via the “No” link 3214.
The “Can selected block be split into two blocks, each spanning half as many banks, the same number of units per bank, and each at least as large as Sr?” Step 3215 is proceeded to via link 3214. In this step 3215, the selected block is reduced, if possible, by reducing the number of banks spanned by the block while maintaining the number of units per bank that are part of the block. This step 3215 is performed iteratively with steps 3217 and 3219 until the selected block can no longer be split and still maintain a size of Sr or greater, at which point the “No” link 3206 is taken. After the completion of Step 3215 the process proceeds to Step 3209 via the “No” link 3206 or to Step 3217 via the “Yes” link 3216.
The “Perform split, removing selected block from Bc and creating two new blocks each half its size, spanning half as many banks, and placing an entry for each in the proper bin at level Lc+1, which is immediately below Bc in the block allocation tree.” Step 3217 is proceeded to via link 3216. In this step 3217, the split hypothesized in step 3215 is performed, which requires removing the originally selected block from the bin in which it resided, and placing two new entries, each spanning half as many banks as the block that was selected at the beginning of step 3217, into the appropriate bins lower in the tree of bins. (Per the level-numbering convention above, bins lower in the tree carry a higher level index, so the new entries are placed at level Lc+1.) This step 3217 occurs in a loop including steps 3215 and 3219. After the completion of Step 3217, the process proceeds to step 3219 via link 3218.
The “Set selected block to the newly created block that contains bank Kc. Set current bin Bc to bin containing the selected block. Set current level Lc to Lc+1.” Step 3219 is proceeded to via link 3218. The newly created block from step 3217 that contains the memory bank that is local to the original Software Memory Requestor 2928 is then selected. This step 3219 occurs in a loop including steps 3215 and 3217. After the completion of Step 3219, the process proceeds to step 3215 via link 3220 and repeats the splitting iteratively through the loop (3215, 3217, 3219) until proceeding to Step 3209 via the “No” link 3206.
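The two splitting loops (steps 3204/3208 and steps 3215/3217/3219) can be sketched together in C. The Block representation below is hypothetical, UNIT_BYTES stands in for the unit size, and the bookkeeping of the bins that receive the split-off halves is reduced to comments.

    #include <stddef.h>

    #define UNIT_BYTES 4096u            /* illustrative unit size */

    typedef struct {                    /* hypothetical block representation */
        unsigned first_bank, bank_count;
        unsigned units_per_bank;
    } Block;

    static size_t block_size(const Block *b)
    {
        return (size_t)b->bank_count * b->units_per_bank * UNIT_BYTES;
    }

    /* Refine the selected block while each half still satisfies Sr:
     * first halve the units per bank (steps 3204/3208, halves stay in
     * bin Bc), then halve the banks spanned (steps 3215/3217/3219),
     * keeping the half that contains the requester's local bank kc and
     * pushing the other half into the bin one level further down the
     * tree. */
    static void refine_block(Block *b, size_t sr, unsigned kc)
    {
        while (b->units_per_bank > 1 && block_size(b) / 2 >= sr)
            b->units_per_bank /= 2;     /* other half stays in bin Bc */

        while (b->bank_count > 1 && block_size(b) / 2 >= sr) {
            b->bank_count /= 2;         /* other half goes one level down */
            if (kc >= b->first_bank + b->bank_count)
                b->first_bank += b->bank_count;  /* keep the half with Kc */
        }
    }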
The “Remove selected block from selected bin. Create an entry for the list of allocated blocks representing the selected block. If extra space remains in the block after allocating the requested amount of memory create an entry so that it can be allocated in the future” Step 3209 is proceeded to via link 3206. This link begins the post-refinement stage of the memory block allocation request fulfillment process. By the time step 3209 is reached, the selected block has been refined to a size that is within a factor of two of the memory block allocation request. In this step 3209, the selected block is removed from its bin and a representation of it for the list of allocated blocks 2910 is created. In step 3209, units are removed from the selected block that are not required to achieve the satisfactory size Sr. These units are placed in the appropriate bin for their future possible allocation. This step 3209 furthermore may create an entry for memory allocations that are requested at sizes smaller than a single unit, and such entries may be consulted first before the block allocation mechanisms are used. Profiling may guide the system that allocates for the fulfillment of these smaller memory requests in order to optimize for reduced fragmentation. After the completion of Step 3209, the process proceeds to step 3211 via link 3210.
The “Return the address from the selected block to the original requestor” Step 3211 is proceeded to via link 3210. In this step, the requestor 2928 receives the starting address of block 2927 so that it can continue execution. After the completion of Step 3211, the process proceeds to step 3213 via link 3212.
The “Receive new memory allocation requests and process them as necessary” Step 3213 is proceeded to via link 3212. If a memory allocation request is received, processing proceeds to step 3106 of the allocation process described above.
The “Lookup in local table(s) for allocation entry” Step 3304 is proceeded to via link 3303. In this step 3304, a local List of allocated memory regions 2910 is searched for the entry representing the allocation that is to be deallocated. The list 2910 may be implemented as a table, hash table, or list. The lookup is performed using the address that was passed to the Memory Allocator & Organizer for deallocation. If found, the process proceeds via the “Found” link 3317 to step 3308. Otherwise, the process proceeds to Step 3306 via the “Not found” link 3305.
The “Lookup in remote table(s) for allocation entry” Step 3306 is proceeded to via link 3305. In this step 3306, one or more remote Lists of allocated memory regions 2910 are searched for the entry representing the allocation that is to be deallocated. The list 2910 may be implemented as a table, hash table, or list. The lookup is performed using the address that was passed to the Memory Allocator & Organizer for deallocation. After the completion of Step 3306, the process proceeds to step 3308 via the “Found” link 3307. (If the entry is not found, an error message is returned to the Software Memory Requestor 2928 indicating that no allocation can be deallocated because none correspond to the address that was passed for deallocation.)
The “Decrement # of allocations for block containing entry” Step 3308 is proceeded to via link 3307. In this step 3308, the “# of allocations” attribute of the found entry is decreased by 1. The “# of allocations” attribute represents the number of times that block has been used to satisfy allocations that have not yet been deallocated. The block cannot return to a bin in the tree of bins for a new allocation until all of the allocations made from that block have been deallocated. (The block may be used to provide new allocations, prior to complete deallocation, using non-block-based searching and allocating schemes, or by freeing sub-blocks prior to complete deallocation). If the new “# of allocations” value for the block is nonzero the process proceeds to Step 3302 via the “New # of allocations is nonzero” link 3310, otherwise it proceeds to step 3311 via the “New # of allocations is zero” link 3309.
The “Deallocate the block and merge with blocks horizontally until not possible” Step 3311 is proceeded to via link 3309. In this step 3311, the found block begins the process of being completely deallocated. This is done by merging the found block with other blocks that are adjacent. In one preferred embodiment, the block adjacent to the found block must be aligned with its size, and a merged block must also be aligned with its size. For example, a block comprising four banks with two units per bank, where the units are numbered starting from zero and the banks are numbered starting from zero, must begin on an even-numbered unit and on a bank number divisible by four; it can only be merged in step 3311 with an adjacent four-bank, two-unit-per-bank block comprising the same unit numbers in each bank, and the merged block must start on a bank number divisible by eight. This merging process continues for the newly created merged block until no more merges can be performed. In one example, it may be that a block comprising a single bank and a single unit per bank is deallocated, and this leads to a two-bank, single-unit-per-bank block being created through a merge, and so on, until a block comprising all of the banks with one unit per bank (that unit having the same index in each bank) is created. After the completion of Step 3311, the process proceeds to step 3314 via link 3313.
The “Merge with blocks vertically until not possible” Step 3314 is proceeded to via link 3313. In this step 3314, the found (and possibly merged in step 3311) block continues the process of being completely deallocated. This is done by merging the found block with other blocks that are adjacent. In one preferred embodiment, the block adjacent to the found block must be aligned with its size, and a merged block must also be aligned with its size. For example, a block comprising two units per bank for the constituent banks, where the units are numbered starting from zero, must begin on an even-numbered unit, and can only be merged in step 3314 with an adjacent two-unit-per-bank block comprising the same banks, and the merged block must start on a unit number divisible by four. This merging process continues for the newly created merged block until no more merges can be performed. In one example, it may be that a block comprising a single unit per bank is deallocated, and this leads to a two-unit-per-bank block being created through a merge, and so on, until a block comprising all of the units of the constituent banks is created. After the completion of Step 3314 the process returns to step 3302 via link 3315.
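Because blocks and their merge partners are aligned to their own size, the merge partner in steps 3311 and 3314 can be located with a buddy-style computation. A minimal C sketch along the bank axis (the unit axis is analogous), with a hypothetical function name:

    /* For a block spanning 'count' banks starting at bank 'first'
     * (first divisible by count, count a power of two), the only legal
     * merge partner starts at first ^ count: XOR flips exactly the bit
     * that distinguishes the two buddies, and the merged block starts
     * at the lower of the two and is aligned to 2*count. */
    static unsigned buddy_first_bank(unsigned first, unsigned count)
    {
        return first ^ count;
    }

For the four-bank example above, a block starting at bank 4 has its buddy at bank 0 (4 ^ 4), and the merged eight-bank block starts at bank 0, which is divisible by eight as required.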
Step 3302 proceeds to the “End” Step 3316 via the “Last request has already been processed” link 3312 if the last request has already been processed.
The Simple translator 3455 receives the Virtual memory address 3431 as input and performs a simple transformation in order to map it to the Physical memory address output 3460. This simple operation may be the shifting, etc. previously described, which enabled the compression of multiple table entries in the translation table into just a single entry. In some embodiments, the values directing the shifting may be provided within the Virtual memory address 3431 so that they do not need to be looked up in a table. In another preferred embodiment the input to the simple translator has its own Operand Isolator that locks its input whenever the Active path selector 3435 is “complex,” so as to not cause unnecessary switching within the Simple translator 3455, thereby saving power.
The Complex virtual-to-physical address translator 3450 receives the Virtual memory address 3445 as input and produces the Physical memory address 3455 as output. The unit 3450 uses methods to perform translations that are more expensive than those of the Simple translator 3455, such as a content-addressable memory or other table-lookup mechanism. In one preferred embodiment, the output of unit 3450 is provided to the Simple translator 3455 in order to take advantage of some of the mechanisms inside the Simple translator 3455, such as the shifter. In that preferred embodiment, the Simple translator shifting and alignment schemes may be more restrictive, since the Complex virtual-to-physical address translator can be employed in tandem to compensate for the rigidity of the use cases of the Simple translator 3455. The translator 3450 operates in a low-power mode when its input is held constant by the Operand isolator 3440, thereby lowering the overhead of the translator's 3450 implementation while maintaining the flexibility offered by the translator 3450 when it is necessary.
The Selected translator mux 3470 receives the Active path selector input 3435, enabling it to choose between its other two inputs 3460 & 3455 so as to select one for passage onto the output line 3475, which is then output from the Load & Store unit 3400.
The Address calculator 3520 receives a virtual address from register 3510 and address offset 3515 and calculates a Virtual memory address 3525 for input to the Simple translator 3530. In one preferred embodiment, the address calculator 3520 performs an addition operation on its inputs. In another embodiment, address calculator 3520 allows certain bits to pass-through and avoid the addition process. The Simple translator performs simple calculations, such as shifting by small amounts, etc., in order to create the Physical memory address output 3535.
In a preferred embodiment, eight different encoding schemes are used by bits 62-55 in order to reduce the dedication of bits toward redundant information. In all of the cases, the bank is of size 512 KB (524,288 bytes) and each unit is 4 KB (4096 bytes). Thus, each bank contains 128 units. For the purposes of address encoding, the starting and ending banks are implied as the very first and very last banks in the Shared memory system respectively.
The first code example 3720 is the case (3722) where bit 55 is 1, in which case the block designates one unit per bank to be dedicated to the block containing the address. Bits 62-56 (3721) provide seven bits for identifying which of the 128 units is used in each bank dedicated to the block containing the address.
The second code example 3730 is the case (3732) where bit 55 is 0 and bit 56 is 1, in which case the block designates two units per bank to be dedicated to the block containing the address, and bits 62-57 (3731) provide 6 bits for identifying which of the 64 self-aligned sets of contiguous 2-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The third code example 3740 is the case (3742) where bit 55 is 0, bit 56 is 0, and bit 57 is 1, in which case the block designates 4 units per bank to be dedicated to the block containing the address, and bits 62-58 (3741) provide 5 bits for identifying which of the 32 self-aligned sets of contiguous 4-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The fourth code example 3750 is the case (3752) where bit 55 is 0, bit 56 is 0, bit 57 is 0, and bit 58 is 1; in which case the block designates 8 units per bank to be dedicated to the block containing the address, and bits 62-59 (3751) provide 4 bits for identifying which of the 16 self-aligned sets of contiguous 8-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The fifth code example 3760 is the case (3762) where bit 55 is 0, bit 56 is 0, bit 57 is 0, bit 58 is 0, and bit 59 is 1; in which case the block designates 16 units per bank to be dedicated to the block containing the address, and bits 62-60 (3761) provide 3 bits for identifying which of the 8 self-aligned sets of contiguous 16-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The sixth code example 3770 is the case (3772) where bit 55 is 0, bit 56 is 0, bit 57 is 0, bit 58 is 0, bit 59 is 0, and bit 60 is 1; in which case the block designates 32 units per bank to be dedicated to the block containing the address, and bits 62-61 (3771) provide 2 bits for identifying which of the 4 self-aligned sets of contiguous 32-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The seventh code example 3780 is the case (3782) where bit 55 is 0, bit 56 is 0, bit 57 is 0, bit 58 is 0, bit 59 is 0, bit 60 is 0, and bit 61 is 1; in which case the block designates 64 units per bank to be dedicated to the block containing the address, and bit 62 (3781) provides 1 bit for identifying which of the 2 self-aligned sets of contiguous 64-unit-per-bank blocks is used in each bank dedicated to the block containing the address.
The eighth code example 3790 is the case (3791) where bit 55 is 0, bit 56 is 0, bit 57 is 0, bit 58 is 0, bit 59 is 0, bit 60 is 0, bit 61 is 0, and bit 62 is 1; in which case the block designates 128 units per bank (all of the units in each bank) to be dedicated to the block containing the address.
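The eight codes amount to a unary-style prefix in bits 62-55: scanning upward from bit 55, the first set bit selects the units per bank, and the bits above it supply the set index. A decoding sketch in C (the function name is illustrative, not part of the system):

    #include <stdint.h>

    /* Decode the physical code of bits 62-55: the lowest set bit among
     * bits 55..62 selects units-per-bank (1, 2, 4, ..., 128), and the
     * bits above it (up to bit 62) index the self-aligned set of
     * contiguous units within each 128-unit bank. */
    static int decode_physical_code(uint64_t addr,
                                    unsigned *units_per_bank,
                                    unsigned *set_index)
    {
        for (unsigned b = 55; b <= 62; b++) {
            if ((addr >> b) & 1) {
                *units_per_bank = 1u << (b - 55);
                *set_index = (unsigned)((addr >> (b + 1)) &
                                        ((1u << (62 - b)) - 1));
                return 0;
            }
        }
        return -1;  /* no code bit set: not a valid encoding */
    }

For the first code, bit 55 set yields one unit per bank with a seven-bit unit index drawn from bits 62-56; for the eighth code, bit 62 set yields 128 units per bank with no index bits remaining.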
Processors 3914-3925 & 3964-3975 connect to the Tier 0 Chip-to-chip switches 3912, 3910, 3962, which connect to the Tier 1 Switch-to-switch switch 3906 via links 3911, 3961.
The virtual processors are connected to the IO Processors 3948, 3998 via links 3949, 3999. The Memory banks 3940-3943 & 3990-3993 contain Physical block movement tables 3939 & 3989, respectively. The Virtual Processors (e.g., 3947, 3997) also have an attribute named the “Invalid reference handling mode bit” (3946 & 3996, respectively), which may replace bit 62 (3819) in virtual addresses, thereby controlling whether memory references exiting a particular Virtual Processor are forced to be validated or not. In one preferred embodiment, the mode bit 3946, 3996 overrides bit 62 of an address whenever the mode bit is set to 1, otherwise defaulting to bit 62 coming natively from the address calculated by the address calculators (e.g., 3420).
After exiting the Load & Store unit of Core #2 3985, the memory operation carries an address such as the 64-bit address described above, which includes the “require verification switch” bit 62 (3819).
Once the memory operation arrives at the IO Processor 3998, the “require verification switch” bit 62 (3819) of the memory address indicates to IO processor 3998 that the reference must be verified. Therefore, the IO Processor 3998 forwards the memory reference on to the Valid reference verifier 3994 via link 3995. If the reference is verified then the memory operation proceeds normally, otherwise handling of the invalid reference proceeds.
The “Is address virtual” Step 4004 is proceeded to via link 4003. In the context of step 4004, “Is address virtual?” asks whether the address is translated via a complex translator such as the Complex virtual-to-physical address translator 3450 described above. If the address is virtual, the process proceeds to step 4009 via link 4011; otherwise it proceeds to step 4008 via link 4007.
The “Is corresponding entry in translation table?” Step 4009 is proceeded to via link 4011. In this step, the complex virtual-to-physical address translator 3450 looks in its internal table structure for an entry that matches the input virtual memory address 3445. If such a match exists then the “Yes” link 4012 is traversed to step 4013. Otherwise the “No” link 4010 is traversed on toward step 4035. After the completion of Step 4009 the process either proceeds to Step 4013 via “Yes” link 4012 or Step 4035 via the “No” link 4010.
The “Interrupt with virtual memory translation failure signal” Step 4035 is proceeded to via link 4010. In this step 4035, the system is in a failure mode and a software interrupt occurs to fix and possibly log the error. One way in which the error could be fixed is to remove an entry from the tables internal to the complex address translator 3450 and to subsequently add an entry into the tables that handles the address that originally caused the translation failure. The completion of Step 4035 thereby ends the process.
The “Translate using complex translator, fetching stripe size and offset from table entry” Step 4013 is proceeded to via the “Yes” link 4012. In this step, the complex translator 3450 operates on the virtual memory address 3445. The stripe size and offset values are equivalent to, or derivable from, 2470, 2480, 2490, and the x-value positions of matching entries in 2449, 2420, or a subset thereof. They may be used in step 4008 to override the stripe size and offset values derivable from the Physical code 3820 of the address. After the completion of Step 4013, the process proceeds to step 4008 via link 4014.
The “Shift address and merge (add or OR) with non-shifted part.” Step 4008 is proceeded to via links 4007 or 4014. In this step, the stripe size and offset, derived either from the Physical Code 3820 within the address or from the table entries in the complex address translator 3450, are used to further translate the memory address using a mechanism such as the Middle Virtual Bits Shifter 2495, allowing a virtually contiguous memory region to be inferred directly from the stripe size and offset. Stripe size can mean the number of banks allocated to the memory block of the address, together with the number of units per bank dedicated to said memory block. The offset can mean which unit of a bank involved in the block starts the contiguous units allocated to the block in each bank involved in the block. In this way, a simple translator can operate as a post-processing step to the complex translator, or the simple translator can operate as the only translator. After the completion of Step 4008, the process proceeds to step 4016 via link 4015.
The “Send memory operation to IO processor designated by address” Step 4016 is proceeded to via link 4015. In this step, the memory operation is conveyed from the originating processor core to the local IO processor or a remote IO processor, whichever IO processor is local to the memory banks holding the data for the address of the memory operation. After the completion of Step 4016, the process proceeds to step 4018 via link 4017.
The “Is verification required?” Step 4018 is proceeded to via link 4017. In this step, the bit 62 (3819) of the memory address is investigated by the receiving IO processor and the process path diverges based on whether the bit is set or not. After the completion of Step 4018 the process proceeds to either Step 4020 via the “No” link 4019 or to Step 4025 via the “Yes” link 4024.
The “Does valid reference verifier table entry for unit specified by address match high bits of address” Step 4025 is proceeded to via the “Yes” link 4024. In this step 4025, the Valid reference verifier 3944, 3994 receives the physical address as input and searches its internal table to verify that the high bits of the physical address match the bits held in the table at the index specified by a given range of lower bits. In a preferred embodiment, the lower range of bits extends from the first bit specifying the unit # within a given bank (e.g., bit index 12 for 4 KB units) through to the highest bit required to specify all hardware banks in the Shared Memory system (e.g., bit 39 if all of the memory comprises 1 Terabyte in total). If the bits match, the process proceeds via the “Yes” link 4026; otherwise the “No” link is taken. After the completion of Step 4025 the process either proceeds to step 4032 via “No” link 4030 or to Step 4020 via “Yes” link 4026.
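The check of step 4025 can be sketched in C under the example parameters (4 KB units, 1 TB of total memory). The table here is an illustrative flat array sized for exposition only, and the names are hypothetical.

    #include <stdint.h>
    #include <stdbool.h>

    #define UNIT_SHIFT 12u   /* 4 KB units: unit index begins at bit 12 */
    #define TOP_BIT    39u   /* 1 TB total: bank/unit bits end at bit 39 */
    #define N_UNITS    (1u << (TOP_BIT + 1 - UNIT_SHIFT))

    extern uint64_t verifier_table[N_UNITS];  /* expected high bits per unit */

    /* Step 4025 sketch: index the verifier table with the middle bits
     * (bits 12..39, which identify the unit within the whole shared
     * memory) and require the stored value to equal the high bits of
     * the physical address. */
    static bool reference_is_valid(uint64_t paddr)
    {
        uint32_t idx  = (uint32_t)((paddr >> UNIT_SHIFT) & (N_UNITS - 1));
        uint64_t high = paddr >> (TOP_BIT + 1);
        return verifier_table[idx] == high;
    }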
The “Does memory operation depend on additional conditions (such as LL/SC?)” Step 4020 is proceeded to via the “No” link 4019 or the “Yes” link 4026. This step 4020 is reached if no verification was required, or if verification was required and succeeded. Step 4020 performs an additional check or checks to verify that the memory operation should proceed. In one preferred embodiment, the memory operation is checked as to whether it is a “Store Conditional” memory operation and, if so, the Store is only carried out if the “Load Linked” attribute corresponding to the physical memory address of the memory operation is verified as not having been accessed since the initiating Virtual Processor performed its previous and corresponding “Load Linked” memory operation. If the memory operation is a “Store Conditional” memory operation or another memory operation requiring a condition to be met in order for the memory operation to be carried out, then the process proceeds via the “Yes” link 4027. Otherwise, it proceeds via the “No” link 4021.
The “Are additional conditions met?” Step 4028 is proceeded to via the “Yes” link 4027. In this step, the condition or conditions that triggered stepping via “Yes” link 4027 to step 4028 are verified as having been met (such as the aforementioned “Store Conditional” condition). If the condition(s) are met, then the process proceeds via “Yes” link 4029. Otherwise, the process proceeds via “No” link 4031. After the completion of Step 4028 the process proceeds to either Step 4032 via the “No” link 4031 or to Step 4022 via the “Yes” link 4029.
The “Perform memory operation; return value(s) as necessary to original memory requestor” Step 4022 is proceeded to via the “Yes” link 4029 or via the “No” link 4021. In this step 4022, the memory operation has already been routed to the memory being operated upon and it has already been verified that the operation should be performed. Step 4022 therefore executes the memory operation on the memory at the address designated by the address of the memory operation. If the memory operation is a Load, data is fetched from the address and returned to the original memory operation initiator. If an outstanding Load Linked operation exists for the designated memory address, the attribute data pertaining to that Load Linked operation is modified so that future “Store Conditional” operations behave properly relative to the history of accesses that affects the Store Conditional behavior.
Any other information that must be conveyed back to the original memory operation initiator is conveyed back to the initiator in step 4022 as well. One example might be returning an “ack” message to the initiator if a Store memory operation is being performed and the Store memory operation is a “Store-with-ack” style memory operation which requires an ack message to be returned to said initiator. Otherwise, program progress will stall at some point in the future, or continue stalling if the stall has already begun, until the ack is received. If silent message delivery failure is a possibility, the communication comprising the “ack” or “Load” memory operation result may need to be resent at some point due to a communication failure. After the completion of Step 4022, the process proceeds to the “End” Step 4034 via link 4023.
The “Return the condition of failure that has occurred to the original thread that initiated the memory operation, including an interrupt to the thread if the core is configured to interrupt under the failure condition” Step 4032 is proceeded to via the “No” links 4030 and 4031. This step 4032 is reached if a failure has occurred during valid reference verification or some other condition verification. In this case the thread originally initiating the memory operation is sent a message so that it will interrupt and begin an error handling process through which the failure may be recovered from. After the completion of Step 4032 the process proceeds to the “End” Step 4034 via link 4033.
The process proceeds to the “Retrieve entry for the physical address in the Physical Block Movement Table. Return address-redirect portion of entry and flag if match portion does not match the physical address” Step 4102 via the “No” link 4101. Step 4102 proceeds to the “Did match portion of PBMT entry match?” Step 4104 via link 4103. In these steps 4102, 4104, the Physical Block Movement Table (PBMT) 3939, 3989 local to the physical location of the memory corresponding to the physical memory address is consulted. The PBMT lookup is similar to the lookup into the table within the Valid reference verifiers (3944, 3994), except that the data is held in regular memory banks 3940-3943 & 3990-3993 in order to reduce the hardware overhead associated with the PBMT and increase its flexibility by allowing more or less memory to be dedicated toward it depending on the performance requirements of the PBMT.
Whereas the Valid reference verifier 3944, 3994 looks up an entry based on a middle bit field within the memory address, and the value held within the entry must match the high bits of the address, the PBMT operates by performing the lookup using a separate set of middle bits from the address, which, for example, may be a superset of the middle bits used to look up entries in the Valid reference verifiers 3944, 3994 that includes bits adjacent to the middle bit field on the more-significant end of the field. If the “match portion” of the data held within the PBMT at the entry designated by the PBMT lookup bit field of the original memory operation physical address matches the high bits of the memory address, then the process proceeds via the “Yes” link 4107; otherwise it proceeds via the “No” link 4105. After the completion of Step 4104 the process proceeds either to Step 4106 via the “No” link 4105 or to Step 4108 via the “Yes” link 4107.
The “Interrupt original thread with invalid reference signal. Escalate error” Step 4106 is proceeded to via the “No” link 4105. In this case no fast-mode handling is possible because the data required to perform fast-mode processing is not available from the entry retrieved in step 4102 (e.g., the physical memory address of the memory operation does not match the entry that was looked up). Error handling is initiated within the Virtual Processor that originally initiated the memory operation in order to recover from the failure condition. Because the system is already attempting to recover from a failure condition (namely, that the Valid reference verifier was unable to verify the memory operation), the error handling initiated by step 4106 is called error escalation. After the completion of Step 4106 the process proceeds to “End” Step 4117 via the 4115 link.
Upon the completion of Step 4104, the process proceeds to step 4108 via the “Yes” link 4107. Then, the “Does mode bit indicate to process PBMT matches in fast mode?” Step 4108 is proceeded to. In this step, a mode bit is consulted and, if it indicates that fast-mode processing should be performed, the “Yes” link 4111 is taken. Otherwise, the “No” link 4109 is taken. The mode bit may be set according to a number of rules. After the completion of Step 4108, the process proceeds to either step 4112 via the “Yes” link 4111 or to Step 4110 via the “No” link 4109.
The “Interrupt original thread with slow address-redirect signal, passing address-redirect portion of PBMT entry to interrupt handler” Step 4110 is proceeded to via link 4109. Step 4110 proceeds to the “Optionally insert entry into complex address translator for redirect avoidance in the future. Optionally escalate error for logging/profiling and/or user monitoring” step 4114 via link 4113. In these steps 4110, 4114, the Virtual Processor originally initiating the memory operation is provided the redirect portion of the PBMT entry that was matched, and the Virtual Processor is interrupted in order to invoke the slow-address-redirect handler to process said redirect portion of the PBMT entry.
The Virtual Processor may carry out the slow-address-redirect handler methods that enable it to recover from the error of the invalid reference. One method by which this may be carried out is insertion into the complex translator's 3450 internal table of an entry capable of handling the memory operation (possibly requiring removal of a different entry to make room in the table). The entry may be created using the redirect data parameter that is passed to the handler. The slow address-redirect signal handler is useful for preventing future invalid references by leveraging the complex translator's capabilities, so that overall performance may be higher when the slow address-redirect signal handler is used. This is because the slow address-redirect signal handler attempts to more completely handle the error and prevent its future occurrence for the given Virtual Processor, and even for other Virtual Processors with which it shares a core. The step 4114 may further escalate the error handling so that additional profiling data is available to the compiler when recompilation is performed in the future. This may enable the recompilation to achieve higher performance through improved memory allocation request parameterization that results in fewer memory movements, and therefore fewer invalid references and less PBMT processing. After the completion of Step 4114, the process proceeds to step 4112 via link 4118.
The “Re-perform memory operation using new physical address calculated from address-redirect portion of PBMT entry” Step 4112 is proceeded to via either the “Yes” link 4111 or link 4118. In this step, the address for the memory operation is recomputed using the redirect portion of the PBMT entry that was matched. In this way, no intervening error-handling software needs to run to recover from the error, since the means of recovery can be implemented in hardware, which integrates the redirect data into the memory address and re-initiates the memory operation.
In one preferred embodiment an initial miss in the PBMT does not mean that the PBMT does not have a matching entry, but instead that the first lookup failed and a subsequent lookup may succeed. In that embodiment, multiple lookups may be performed before a true failure such as that of 4106 is escalated to. One such implementation of said preferred embodiment would be a hash table implementation of the PBMT, in which collisions are detected as non-matching PBMT entries but only empty entries are detected as complete table misses; collisions are retried, whereas complete misses move the process to step 4106. After the completion of Step 4112, the process proceeds to the “End” Step 4117 via link 4116.
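A sketch of that hash-table variant in C, with a hypothetical entry layout and linear probing standing in for whatever probe sequence an implementation might use:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {              /* hypothetical PBMT entry layout */
        uint64_t match;           /* "match portion": expected high bits */
        uint64_t redirect;        /* "address-redirect portion"          */
        int      occupied;        /* zero marks an empty slot            */
    } PbmtEntry;

    /* Hash-table PBMT lookup sketch: an occupied, non-matching entry is
     * a collision and is retried at the next slot; only an empty slot
     * is a complete miss, which escalates to step 4106 (NULL here). */
    static const PbmtEntry *pbmt_lookup(const PbmtEntry *table, size_t nslots,
                                        uint64_t lookup_bits, uint64_t high_bits)
    {
        size_t h = (size_t)(lookup_bits % nslots);
        for (size_t probes = 0; probes < nslots; probes++) {
            const PbmtEntry *e = &table[(h + probes) % nslots];
            if (!e->occupied)
                return NULL;              /* complete miss */
            if (e->match == high_bits)
                return e;                 /* match: use the redirect portion */
            /* collision: retry the next slot */
        }
        return NULL;
    }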
The Valid reference verifier (3994, described above) performs its matching operation using bit fields of the address that include the bits 4216.
In another embodiment, only the bits 4216 are used in the matching operation, and the memory allocation system uses knowledge that these bits are used for matching to ensure that the PBMT matches addresses and redirects properly. In this embodiment, it may be the case that only the total amount of physical memory local to cores in the shared memory system multiplied by 2 raised to the number of bits in field 4216 may be allocated in total. This embodiment reduces the overhead of the matching operation that utilizes the entries of the PBMT. It may be the case that the compiler only configures the memory allocation to utilize this version of the matching system when a previous run of a given program has shown that it does not need more than said amount of memory.
The Original memory within-core byte address 4348 is created by combining the values of the within-unit data index 4326, which it receives as input 4338, the within-bank unit index 4324, which it receives as input 4342 from link 4336, and the within-core bank index 4322, which it receives as input 4341 from link 4334. The combination of these three fields within unit 4348 is output 4303 to Memory operation selection & mux unit 4305. The last input received by the Memory operation selection & mux unit 4305 is input 4304, which is output from the Address redirection PBMT address calculator 4347.
The Address redirection PBMT address calculator 4347 calculates its output 4304 by combining the PBMT base address 4345, which it receives as input 4346, with the Address within PBMT 4343, which is provided to it as input 4344. The PBMT base address may include striping info, allowing the physically discontiguous PBMT 4360 to be distributed as 4371, 4372, 4373, 4374 across multiple banks 4361, 4362, 4363, 4364, respectively, and to appear as a single virtually contiguous address region so as to operate as a single table distributed appropriately over the banks. The Address within PBMT 4343 is the table index that is looked up in the PBMT 4360 and is calculated as the combination of bit fields comprising the within-bank unit index 4324, transmitted to it as input 4340 after input 4340 receives it from link 4336; the within-core bank index 4322, received as input 4339 after input 4339 receives it from link 4334; and the Address version stamp low bits 4312, received as input 4332.
In the event that the Original memory operation command & data 4301 provided as input 4302 specify that the Valid reference verifier 4358 must produce a match (e.g., a 1-bit value) for the Match flag 4357 in order for the memory operation to proceed normally, the Memory operation selection & mux 4305 verifies that the Match flag indicates a match and, if so, forwards the Original memory operation command & data 4301 from input 4302 onto output line 4306, combined with the Original memory within-core byte address 4348 received as input 4303, which commands the Memory 4395 to perform the Original memory operation command & data. It is noteworthy that the address provided via Input 4309 is a physical address: if a virtual memory address space is implemented, the address will already have been processed by the address translators before it becomes Input 4309.
In the event that the Memory operation selection & mux unit 4305 is told that the Valid reference verifier's 4358 Match flag output 4357 must indicate a match in order to perform the normal memory operation, and the Match flag input 4357 indicates “No Match” (e.g., the flag bit is set to zero), the Memory operation selection & mux unit 4305 overrides the normal memory operation with a PBMT lookup. The unit 4305 does this by forwarding the address from unit 4347, provided to it as input 4304, onto the output 4306, which becomes input 4307 to memory 4395. Unit 4305 also outputs the value to be matched by the PBMT entry match checker 4378 via output 4306, which is presented as input 4308 to unit 4378. The value may be the Address version stamp high bits 4310. The unit 4305 also indicates to unit 4378 whether or not the PBMT entry must match. The entry in the PBMT 4370 is an example entry that is looked up in the case that the Valid reference verifier does not produce a match and a PBMT lookup is being performed.
The PBMT 4360 distributed pieces 4371-4374, held in memory 4395 within the memory banks 4361-4364, are searched in the event that the Match flag 4357 indicated "No Match" and the Original memory operation command & data 4301 indicated that a match must occur in order for normal operation to commence. If the operation commanded via input 4307 is a Load (as in the case of a PBMT lookup or a normal Load memory operation), the memory 4395 reads its input 4307, looks up the requested value, and outputs the value read from memory onto link 4375, which is sent as input 4377 to unit 4378 and as input 4376 to Output 4380. The PBMT entry match checker reads the signal from its input 4308, which indicates whether a PBMT match is to be checked. If the check is to be made, the unit 4378 creates the Match flag output 4379, which is sent to the Output unit 4381. The Output unit 4381 sends back the data received from unit 4395 and, if applicable, the Match flag it receives from unit 4378.
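A behavioral sketch of this selection and override path is given below. The `memory` object, its `execute` and `load` methods, and the `version_high` attribute of a PBMT entry are assumed interfaces introduced for illustration; they are not identifiers from the disclosure.

```python
def perform_memory_operation(op, match_required, match_flag,
                             within_core_addr, pbmt_addr,
                             version_high, memory):
    """Model of unit 4305's choice between the normal operation and a
    PBMT lookup, plus the PBMT entry match check (4378)."""
    if not match_required or match_flag:
        # Normal path: forward op + within-core byte address (4348) to memory.
        return memory.execute(op, within_core_addr), True
    # Override path: PBMT lookup at the address from calculator 4347.
    entry = memory.load(pbmt_addr)
    # PBMT entry match checker (4378) compares against the
    # Address version stamp high bits (4310).
    pbmt_match = (entry.version_high == version_high)
    return entry, pbmt_match   # Output unit returns data and Match flag
```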
The Network-on-chip 4610 sends the address 4612 to the Address-in-parts unit 4615 (broken out for explanatory purposes) as input 4613, and as input 4614 to the Chip-mapped packet 4644. The Address-in-parts breakout 4615 comprises Higher bits 4616, Chip-select bits 4617, the highest Core-select bit 4618, Low Core-select bits 4619, and the Lower bits 4620. The Chip-select bits 4617 and highest Core-select bit 4618 together comprise portion 4622 of the Address-in-parts 4615, which is the Chip table lookup index 4623. The Chip table lookup index 4623 is received as input 4624 by table 4625, which maintains the Next-chip threshold values 4626 and Current chip values 4627 for each entry (4630, 4631) in the table 4625. An entry is read from the table 4625 at index 4623, and the corresponding Next-chip threshold value 4630 and Current chip value 4631 for the entry are sent via output lines 4651 and 4635. The Comparator unit 4642 receives the Next-chip threshold 4630 via input 4651 and the Low Core-select bits 4619 via input 4621. If the Low Core-select bits 4619 are less than the Next-chip threshold 4630, then the output of the Comparator 4642, which is the Comparison Result 4643, is equal to "Less than" (e.g., a bit flag set to 0); otherwise the Comparison Result 4643 is equal to "Greater than or equal" (e.g., a bit flag set to 1). The Comparison Result 4643 is received by the Mux unit 4640 and acts as the mux unit's Select input (one of the standard inputs that a mux unit receives, which selects which of the other inputs to forward onto the mux unit's output). The Incrementer unit 4638 receives the Current chip value 4631 of the selected entry via input 4636. The Incrementer creates output 4639 by adding 1 to its input 4636. In this way the output 4639 represents the "Next chip," whereas 4637 represents the "Current chip," since it is the same as value 4635 taken from the Current chip column 4627 of the table 4625.
The Mux unit 4640 determines whether to output the Current chip 4637 or the "Next chip" 4639 as the Chip-mapped physical chip address output 4641, based on whether the Comparison Result input 4643 is "Less than" or "Greater than or equal," respectively. This means that if input 4643 is "Less than," the Current chip is forwarded onto output 4641; otherwise, the "Next chip" 4639 is output. The Chip-mapped packet 4644 receives the Chip-mapped physical chip address 4641, as well as the address 4614 and data 4611, as input, and creates output 4645, which is sent to the PCI Express link.
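The comparator, incrementer, and mux together reduce to the following small function. This is a behavioral sketch, not the hardware itself; the table is modeled as a list of (Next-chip threshold, Current chip) pairs indexed by the Chip table lookup index 4623.

```python
def map_to_physical_chip(chip_table, lookup_index, low_core_select):
    """Behavioral model of the Core-chip mapper upward path (4600)."""
    threshold, current_chip = chip_table[lookup_index]
    if low_core_select < threshold:      # Comparison Result 4643 = "Less than"
        return current_chip              # mux forwards the Current chip (4637)
    return current_chip + 1              # otherwise the Incrementer's "Next chip" (4639)
```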
The Core-chip mapper (upward path) 4600 and its internal table 4625 allow chips that contain a number of functional cores between two adjacent powers of two, such as between 16 and 32 functional cores (inclusive), to continue to be able to contribute their memory to a Shared memory system supporting the system's striping address scheme and its ability to create virtual contiguous memory regions from physically discontiguous units of memory.
As an example, suppose we have a system with four chips, each supporting 32 cores, except for the second, which supports only 20 cores. In this case the first entry of table 4625 would have a Next-chip threshold of 32 and a Current chip value of 0. (In an alternative implementation that saves implementation space by not requiring support for Next-chip thresholds larger than 31, the Next-chip threshold is instead set to 0 and the Current chip is set to −1, so that the Incrementer value 4638 will always be taken and will be equal to 0.) The second entry would have a Next-chip threshold value of 20 and a Current chip value of 1. The third entry would have a Next-chip threshold value of 20 and a Current chip value of 2. The fourth entry would have a Next-chip threshold value of 20 and a Current chip value of 3. In this way, the four processors create a logically contiguous set of cores even though each processor supports an arbitrary number of cores between two adjacent powers of two.
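The four-chip example can be exercised against the sketch above. The first entry uses the alternative encoding (threshold 0, current chip −1) so that no threshold exceeds 31; global core numbers in the comments follow from 32 + 20 + 32 + 32 = 116 cores total.

```python
# Table entries: (Next-chip threshold, Current chip), indexed by bits
# (global core index >> 5), i.e., the Chip table lookup index 4623.
chip_table = [(0, -1), (20, 1), (20, 2), (20, 3)]

assert map_to_physical_chip(chip_table, 0, 31) == 0  # global core  31 -> chip 0
assert map_to_physical_chip(chip_table, 1, 19) == 1  # global core  51 -> chip 1
assert map_to_physical_chip(chip_table, 1, 20) == 2  # global core  52 -> chip 2
assert map_to_physical_chip(chip_table, 2, 19) == 2  # global core  83 -> chip 2
assert map_to_physical_chip(chip_table, 2, 20) == 3  # global core  84 -> chip 3
assert map_to_physical_chip(chip_table, 3, 19) == 3  # global core 115 -> chip 3
```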
An example Core table can be considered. In this example, the number of cores per chip is always between 16 and 32 inclusive, and the number of functioning cores in processors with lower processor indices than the example processor is 150. When the physical address Core-select bits are equal to 150 (the 151st core), which has binary representation 0b 1001 0110, the bits "10110" must be translated to the proper on-chip core index, which is Core 0. Since 150 should map to the first core on the chip (Core 0), the entry in the Core table at index 0bxxx10110 (a value of 22 in the least significant 5 bits) is 0. The entry at 0bxxx10111 (the entry 4740 in the Core table 4735 at index 23) should be 1, and so on, with wrap-around after 0bxxx11111 (the entry at index 31, which should be 9), so that the entry at index 0bxxx00000 should have value 10. This proceeds until 0bxxx10101 (index 21) has value 31. If fewer than 32 cores are implemented on the chip, then no packets 4720 will arrive from the PCI Express link whose Core-select 4723 value (or Core table lookup index 4731) translates to core 31, so a translation that maps to a missing core will in fact never be used.
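The wrap-around pattern just described is a modular offset, so the whole table for this example can be generated and checked in a few lines; this sketch reproduces exactly the entry values stated above.

```python
# Core table (4735) for the example: the chip's first core has global index
# 150, whose low five bits are 0b10110 = 22; entries wrap around modulo 32.
core_table = [(i - 22) % 32 for i in range(32)]

assert core_table[22] == 0    # index 0bxxx10110 -> Core 0
assert core_table[23] == 1    # index 0bxxx10111 -> Core 1
assert core_table[31] == 9    # last index before wrap-around
assert core_table[0] == 10    # index 0bxxx00000 -> Core 10
assert core_table[21] == 31   # index 0bxxx10101 -> Core 31
```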
After the mapping process, the Core-mapped packet 4730 proceeds via link 4750 onto the Network-on-chip 4710 where it can be routed to the particular core, and eventually the particular bank of that particular core, to which it applies.
The Local memory operation queue 4836 receives memory operations from the Network-on-chip 4802 via input 4841 and also from the Processor Core 4804 via 4806 and 4811. The input 4835 to the Local memory operation queue determines whether the Data 4806 and Physical address 4811 will be appended to the internal data queue structure. If the "Route to local memory" flag 4835 (which switches which queue becomes the receiver) indicates "Local" (such as when the flag bit of 4835 is set to 1), then 4806 and 4811 are added as a memory operation to the Local memory operation queue 4836.
The Physical address 4808 is conveyed to 4812 via 4809, where it is shown as three components: High physical address bits 4813, Core-select bits 4814, and Low physical address bits 4815. The High physical address bits 4813 and Core-select bits 4814 are merged and provided to the Subtractor 4817 as input 4816. As described previously, the shared memory address space orders all of the cores from 0 to the total number of cores in the shared memory address space minus 1; the number of a core is its index. The index of the core on a chip with the lowest index of all cores on the chip is the Chip base core index value, which is stored in unit 4818 so that it can be read by the Subtractor 4817 via 4819. The Subtractor 4817 subtracts the Chip base core index 4818 from the higher address bits presented in 4816, which produces the Result high bits output 4820.
The “High bits equal to zero verifier” 4821 receives the Result high bits 4820 and verifies that the bits not needed to index on-chip cores (e.g., all bits of Result high bits, except the bottom five bits in the case that the chip has between 16 and 32 cores) are equal to zero. The “Equality result” 4822 is set by unit 4821 as either “equal to zero” or “not equal to zero”. The Core table index 4824 is derived from the Low physical address bits 4815 via 4855 and is presented to the Core table 4830 as input 4825.
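A sketch of the Subtractor (4817) and the "High bits equal to zero verifier" (4821) follows, assuming five on-chip core-index bits (i.e., 16 to 32 cores per chip, as in the example). The non-negativity guard is an assumption of this model; in hardware a negative two's-complement difference would simply fail the all-zeros check on its high bits.

```python
def on_chip_core_check(high_and_core_select_bits, chip_base_core_index):
    """Return (equals_zero, on_chip_core_index) for the merged bits 4816."""
    result_high = high_and_core_select_bits - chip_base_core_index  # 4820
    # Verifier 4821: all bits above the bottom five must be zero.
    equals_zero = result_high >= 0 and (result_high >> 5) == 0
    return equals_zero, result_high & 0x1F
```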
The Core table 4830 has a number of entries 4832, each with three attributes (columns): a core-index column 4833, a Context-sensitive flag 4827, and a Core locality flag 4828. The Core-mapped core index 4826 is derived from attribute 4833 and the Context-sensitive flag 4827. The Core locality flag signals whether the Context-sensitive flag has special meaning; if not, the Core-mapped core index 4826 maintains its ordinary significance. Using the Core table lookup index 4824, an entry 4832 is selected in the Core table 4830, whose Context-sensitive flag is sent to the Local verifier 4823 via 4850. The selected entry's Core locality flag is transmitted as output 4851, where it is directed to the Local verifier 4823 as input 4853 and to the Core-mapped packet 4846 as input 4852. The Core-mapped core index 4826 is sent as output 4831 to the Core-mapped packet 4846.
The Local verifier 4823 reads the Core locality flag 4853, which is set to 1 when the core represented by the table entry is either remote or is the core originating the memory operation 4800. The Core locality flag 4828 for the selected entry is set to 0 in the case that the core corresponding to the Core table lookup index 4824 is on-chip but is not the local core 4800. In the case that the Local verifier receives a Core locality flag 4853 of zero, the unit 4823 outputs "Not-local," which may be implemented as a flag bit equal to 0. If the Core locality flag 4853 is set to 1 and the Context-sensitive flag 4850 is set to "Local core" (e.g., a flag value of 1), the output 4833 of the Local verifier unit 4823 is set to "Local" (flag value 1). If the Core locality flag 4853 is set to 1 and the Context-sensitive flag 4850 is set to "Remote core" (e.g., a flag value of 0), the output 4833 of the Local verifier unit 4823 is set to "Remote" (flag value 0).
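The Local verifier's decision reduces to the following truth table, sketched here as a function; the flag encodings (1 = "Local core", 0 = "Remote core") follow the examples given above.

```python
def local_verifier(core_locality_flag, context_sensitive_flag):
    """Model of unit 4823: returns 1 for "Local", 0 for "Not-local"/"Remote"."""
    if core_locality_flag == 0:
        return 0   # on-chip but not the local core: "Not-local"
    # Locality flag 1: the Context-sensitive flag resolves local vs. remote.
    return 1 if context_sensitive_flag == 1 else 0
```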
If the output 4833 of the Local verifier 4823 is "Local," then the Local memory operation queue 4836 receives this information as input 4835, and the memory operation sent by the Processor Core 4804, received over 4806 and 4811, is appended to the Local memory operation queue 4836. In one preferred embodiment, the Local memory operation queue 4836 maintains an "originating locally" queue, which is added to based on the flag 4835, and an "originating remotely" queue, to which memory operations originating from the Network-on-chip 4802 are appended. In this embodiment, the "originating locally" queue may be assigned higher priority, or temporary priority may be assigned to either queue in order to control which memory operations are performed most quickly.
The Remote memory operation queue 4844 holds memory operations originating from the Processor Core 4804 that are bound, via the Network-on-chip 4802, for a different core on the same chip or for a core on a different processor. Core-mapped packets 4846, received as input 4847, are appended to the Remote memory operation queue 4844 for sending to the Network-on-chip 4802 via 4845 if the output of the Local verifier 4834, which is inverted in the NOT unit 4842, indicates that the message is bound for a core that is not the local core. The memory operation originating from the Processor Core 4804 arrives at the Core-mapped packet via 4810 and 4807. The core index within the Core-mapped packet is adjusted using the Core-mapped core index 4831, which allows cores to know the indices of the other cores onboard the physical chip even when the physical address space is expected to be contiguous and without holes (such as when utilizing the virtual contiguous address region scheme). The Core-mapped core index skips cores that are on-chip but not functional, thereby maintaining a contiguous and regular physical address space even when actual hardware cores may be non-functional.
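The routing between the two queues can be sketched as below. The queue objects and the shape of the queued items are assumptions for illustration; the disclosure describes hardware queues, not software structures.

```python
from collections import deque

local_queue = deque()    # Local memory operation queue (4836)
remote_queue = deque()   # Remote memory operation queue (4844)

def route_memory_operation(is_local, local_op, core_mapped_packet):
    """Model of the routing: the Local verifier's output selects the local
    queue; its inversion (NOT unit 4842) selects the remote queue."""
    if is_local:
        local_queue.append(local_op)            # data 4806 + address 4811
    else:
        remote_queue.append(core_mapped_packet)  # bound for Network-on-chip
```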
It will be appreciated by those skilled in the art that changes could be made to the embodiment(s) described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular embodiment(s) disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.
Claims
1. A method of translating a virtual memory address to a physical memory address, comprising:
- receiving a virtual memory address;
- comparing a portion of the received virtual memory address to each of a plurality of entries of a virtual memory address matching table;
- determining a matching row of the virtual memory address matching table for the portion of the received virtual memory address;
- shifting a contiguous set of bits of the received virtual memory address, wherein the shifting is performed in accordance with information regarding memory address translation associated with the determined matching row of the virtual memory address matching table; and
- combining the shifted contiguous set of bits of the received virtual memory address with high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table, and with low-order bits of the received virtual memory address, to produce a physical memory address.
2. The method of claim 1 wherein the combining comprises one of summing or performing a bitwise OR.
3. The method of claim 1 wherein, prior to combining the shifted contiguous set of bits of the virtual memory address with high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table and with low-order bits of the virtual memory address from outside of the contiguous set of bits of the virtual memory address to produce a physical memory address, the high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table and the low-order bits of the virtual memory address from outside of the contiguous set of bits of the virtual memory address are combined.
4. The method of claim 1 wherein the virtual memory address matching table is stored in a content addressable memory.
5. The method of claim 1 further comprising:
- providing an indication of the matching row, wherein the indication of the matching row causes the selection of the information regarding address translation associated with the matching row.
6. The method of claim 1 further comprising:
- reading data from, or writing data to, the memory location associated with the produced physical memory address.
7. The method of claim 1 wherein the low-order bits are outside of the contiguous set of bits of the virtual memory address.
8. The method of claim 1 wherein the produced physical memory address refers to a memory location associated with a different memory bank, different microprocessor, different processor board, or different machine than that associated with a process from which the virtual memory address was received.
9. The method of claim 1 further comprising:
- storing, for each virtual memory address pattern stored in the virtual memory address matching table: high-order bits of corresponding physical memory addresses for virtual memory addresses matching the virtual memory address pattern, a number of bits to shift for virtual memory addresses matching the virtual memory address pattern, a degree of shift for virtual memory addresses matching the virtual memory address pattern, and a shift-region starting index for virtual memory addresses matching the virtual memory address pattern.
10. The method of claim 1 wherein the values of bits in the original positions of the shifted contiguous set of bits are set to zero.
11. The method of claim 1 wherein a plurality of lowest order bits of the virtual memory address are not among the contiguous set of bits of the virtual memory address that are shifted.
12. The method of claim 1 wherein the information regarding memory address translation comprises a number of bits to shift, a degree of shift, and a starting bit position for the region to be shifted.
13. The method of claim 1 wherein the information regarding memory address translation comprises a degree of shift, a starting bit position for the region to be shifted, and an ending bit position for the region to be shifted.
14. The method of claim 1 wherein the determining a matching row comprises:
- determining, for each row of the virtual memory address matching table, whether the portion of the virtual memory address matches the row;
- providing indications of a plurality of rows that matched to an arbiter; and
- determining, at the arbiter, a first matching row of the plurality of matching rows.
15. The method of claim 1 wherein there is one virtual memory address matching table per processor core.
16. The method of claim 1 wherein the virtual memory address is received as part of a request for memory access.
17. An apparatus for translating virtual memory addresses to physical memory addresses, comprising:
- a memory for storing a received virtual memory address;
- a content addressable memory for comparing a portion of the received virtual memory address to each of a plurality of entries in a virtual memory address matching table and determining a matching row of the virtual memory address matching table for the portion of the received virtual memory address;
- shifting logic for shifting a contiguous set of bits of the received virtual memory address, wherein the shifting is performed in accordance with information regarding memory address translation associated with the determined matching row of the virtual memory address matching table; and
- combining logic for combining the shifted contiguous set of bits of the received virtual memory address with high-order physical memory address bits associated with the determined matching row of the virtual memory address matching table, and with low-order bits of the received virtual memory address, to produce a physical memory address.
18. The apparatus of claim 17 wherein the combining logic comprises at least two of multiplexing logic, summing logic, and bitwise-or logic.
19. A method of translating a virtual memory address to a physical memory address, comprising:
- receiving a virtual memory address;
- deriving information regarding memory address translation from high-order bits of the received virtual memory address;
- shifting a contiguous set of bits of the received virtual memory address, wherein the shifting is performed in accordance with the information regarding memory address translation derived from the high-order bits of the received virtual memory address; and
- combining the shifted contiguous set of bits of the received virtual memory address with predetermined high-order physical memory address bits, and with low-order bits of the received virtual memory address, to produce a physical memory address.
20. The method of claim 19 wherein the information regarding memory address translation derived from the high-order bits of the received virtual memory address comprises a number of bits to shift, a degree of shift, and a starting bit position for the region to be shifted.
Type: Application
Filed: Mar 6, 2014
Publication Date: Sep 18, 2014
Applicant: Cognitive Electronics, Inc. (Boston, MA)
Inventor: Andrew C. FELCH (Palo Alto, CA)
Application Number: 14/199,321
International Classification: G06F 12/10 (20060101);