Shift-add based random number generation
A system for pseudorandom number generation is disclosed. A processor is provided that has a first memory to hold a first value and a second memory to hold a second value. A logic then performs a +* operation while a looping condition is true.
This application claims the benefit of U.S. Provisional Application No. 60/974,820 entitled “Shift-Add Mechanism,” filed Sep. 24, 2007 by at least one common inventor, which is incorporated herein by reference in its entirety.
BACKGROUND OF THE INVENTION

1. Technical Field
The present invention relates generally to electrical computers and digital processing systems having processing architectures and performing instruction processing, and more particularly, to processes for random number generation that can be implemented in such.
2. Background Art
Powerful and efficient operational codes (op-codes) are critical for modern computer processors to perform many tasks. Two such tasks, for example, are multiplication and the production of sequences of pseudorandom numbers.
That a task like multiplication is well understood, and that it is important, requires no elaboration. In contrast, random and pseudorandom number generation merit discussion here to establish context for the presently disclosed invention.
Whether selecting a set of truly random numbers is possible, what processes might be used for this, and whether the actuality of randomness is provable are all subjects of much debate. For present purposes, it is enough to accept that computers are not able to generate truly random numbers, since they only perform previously defined instructions to produce completely predictable results. There is, however, an abundance of computer algorithms available to generate pseudorandom numbers, and the use of computers with such algorithms is very useful in many important applications. Unless one knows where in a sequence the first pseudorandom number is taken, the result can be completely unpredictable; accordingly, the words random and pseudorandom are used interchangeably from this point on.
There are two main preferences for pseudorandom numbers generated by computers. The first such preference is that the sequence of pseudorandom numbers be reproducible exactly across several iterations of the same experiment. This preference is important, for example, for some scientific experiments. Here a “seed” value is used as an initializing input to a computer algorithm, and the computer algorithm is then expected to generate the exact same sequence of numbers in each iteration.
The second preference is that the sequence of pseudorandom numbers not be merely seemingly random, but rather that they be completely unpredictable. This preference is important, for example, for cryptography and many other applications. Here a “seed” value is used in a manner that will result in a completely different sequence of random numbers for different iterations based on the same seed.
While no specific applications for random numbers are explored in detail herein, the fact that it is possible to generate pseudorandom sequences of numbers that are good enough for such applications is in itself of importance, and in such cases one needs to be able to compare approaches. For instance, in many situations the real problem encountered is not in creating pseudorandom sequences, but rather in showing that such sequences are random enough for the given application. In fact, for some applications, true randomness is not actually required and apparent randomness is instead preferred. The following paragraphs explore methods of testing for randomness in a given sequence and outline the characteristics associated with “quality” pseudorandom sequences.
For certain applications it is important that a distinction be made between apparent versus actual randomness, because one may be preferred over the other. Even though it is not actually possible to prove that a string of numbers is random, many individuals nonetheless have their own intuitive ways of judging this.
Assume for the sake of a first example that a person watches a roulette wheel for 22 successive spins. For the first 18 spins they notice what they would consider random output from the wheel, and for the 19th, 20th, 21st, and 22nd spins the same value appears. There are two thoughts that our hypothetical person may have at this point. The first is that the next spin has a high chance of resulting in the same value, since the same value occurred in the last four spins. The second thought is that there is absolutely no way that the next spin can produce the same value that occurred in the last four spins. Both of these trains of thought are completely wrong, but they lead to the idea of certain patterns appearing random versus actually being random. Each new spin on a roulette wheel has the same expected outcome as all other previous and future spins. This is not saying that each spin is random. What it is saying is that the underlying distribution for all outcomes from a roulette wheel should be completely uniform, which is one of the behaviors of a truly random sequence of numbers. Four successive spins resulting in the same value is not a characteristic of apparently random behavior, but it is not unreasonable for a truly random sequence of numbers to contain four successive equivalent values.
As another example, assume that one has an iPod™ and wants to listen to their music, yet also wants to be surprised with each new song. If the iPod contains a truly random method of selecting the next song to play, it would be entirely possible for the same song to play several times in a row. However, it would not be in the best interest of this device's manufacturer, Apple, Inc., to include a truly random number generator in its iPod product, because most people want to listen to nearly all available songs before they hear a repeat. Accordingly, it is unlikely that this manufacturer uses a truly random method for producing next song playback, because this would conflict with its consumers' preferences.
A point demonstrated by the two examples presented above is that applications can predefine a need for apparently random versus actually random numbers.
At present there is no one test that can determine if a sequence of numbers is truly random. Instead, a plethora of tests have been developed. Some 16 in all are presently used by the National Institute of Standards and Technology (NIST), wherein combining the results of these tests gives a better indication of the degree of randomness of a sequence in question. NIST has published detailed descriptions of these tests, and debate rages over actual and perceived errors in the methodology and in these descriptions. What is important to note here is that while each test provides a yes or no answer to the question of whether or not a sequence is random, one test by itself does not guarantee randomness. Even the yes or no answers to the question of randomness for each of the 16 tests can differ, depending on the confidence level at which the sequence is tested. Therefore, it is important to determine at what level of randomness a given sequence will be tested and to evaluate the results from all of the tests before making a proclamation regarding the randomness of a given sequence.
In the preceding discussion of methods for testing for “good enough” randomness, there has been no mention of characteristics that define the “quality” of a sequence of randomly generated numbers. Even if a sequence passes the NIST tests, that sequence may not be preferred over another that does not pass or that only passes at a lower confidence level.
Speed is also an extremely important consideration in many applications. It follows that the following two questions should be answerable for every random sequence that is generated. First, how quickly is the first term in the sequence generated? Second, how quickly are successive terms in the sequence generated? The answers to these questions are not accounted for in any of the NIST tests, yet they are important characteristics of a quality sequence. Another important quality consideration is that the numbers in a sequence not be too random (the iPod example above demonstrates this); too random a sequence can detract from its quality. A quality sequence of random numbers will adhere to the characteristics of a noise sphere (see, e.g., any of the many excellent academic texts on this topic). Thus, it is important to realize that even a sequence which passes all of the NIST tests may not be useful because the values are generated too slowly, are too random, or do not adhere to certain noise standards.
BRIEF SUMMARY OF THE INVENTION

Accordingly, it is an object of the present invention to provide a shift-add based random number generation process that is useful for various operations in a processor.
Briefly, another preferred embodiment of the present invention is a system for pseudorandom number generation. A processor is provided that has a first memory to hold a first value and a second memory to hold a second value. A logic then performs a +* operation while a looping condition is true.
These and other objects and advantages of the present invention will become clear to those skilled in the art in view of the description of the best presently known mode of carrying out the invention and the industrial applicability of the preferred embodiment as described herein and as illustrated in the figures of the drawings.
TBLS. 1-4 represent the values in the T-register and the S-register in a SEAforth™ 24a device in a set of hypothetical +* (shift-add mechanism) examples.
TBLS. 5-10 represent the values in the T-register and the S-register in a SEAforth™ 24a device in a set of hypothetical +* (shift-add mechanism) multiplication examples.
In the various figures of the drawings, like references are used to denote like or similar elements or steps.
DETAILED DESCRIPTION OF THE INVENTION

A preferred embodiment of the present invention is a shift-add based random number generation process. As illustrated in the various drawings herein, and particularly in the view of
The present inventive shift-add based random number generation process 300 (
The shift-add mechanism 100 (
As general background, the SEAforth™ 24a has 24 stack based microprocessor cores that all use the Venture Forth™ programming language.
There are two distinct approaches that can be taken when a programmer is selecting the bits that will make up the 18-bit wide register space in a SEAforth™ 24a (with limited exceptions for some op-codes that use the A-register). The first of these is to divide this space into four slots that can be called: slot 0, slot 1, slot 2, and slot 3. The bit lengths of these slots are not all equal, however, because dividing 18 by 4 leaves a remainder. The first three slots (slot 0, slot 1, and slot 2) can therefore each hold five bits, while slot 3 holds only three bits.
Specifically, these op codes are:
The second approach that a programmer can use when selecting the bits that will make up the 18-bit wide register space in the SEAforth™ 24a is to simply not divide the 18-bit wide register into slots, and to instead consider the register as containing a single 18-bit binary value. This may appear at first to be a completely different approach than the slot-based approach, but both representations are actually equivalent.
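To make this equivalence concrete, the following Python sketch converts between the slot view and the single-value view of an 18-bit word. The packing order (slot 0 in the five highest-order bits, slot 3 in the three lowest) and the function names are our assumptions for illustration only; the device documentation defines the actual layout.

```python
def unpack_slots(word):
    # Split an 18-bit word into four slots.  Assumed layout: slot 0 in
    # bits 17..13, slot 1 in bits 12..8, slot 2 in bits 7..3 (five bits
    # each), and slot 3 in bits 2..0 (three bits).
    assert 0 <= word < (1 << 18), "word must fit in 18 bits"
    return ((word >> 13) & 0x1F,
            (word >> 8) & 0x1F,
            (word >> 3) & 0x1F,
            word & 0x07)

def pack_slots(slot0, slot1, slot2, slot3):
    # Inverse of unpack_slots: rebuild the single 18-bit value, showing
    # that the slot view and the single-value view carry the same bits.
    return (slot0 << 13) | (slot1 << 8) | (slot2 << 3) | (slot3 & 0x07)
```

Round-tripping any 18-bit value through unpack_slots and pack_slots returns the original value, which is the sense in which the two representations are equivalent.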
TBLS. 1-4 represent the values in the T-register and the S-register in a set of hypothetical +* examples. For simplicity, only 4-bit field widths are shown. It is important to note in the following that the value in the T-register (T) is changed while the value in the S-register (S) remains unchanged during execution of the +* op-code. [N.b., to avoid confusion between the bits making up values and the locations in memory that may hold such, we herein refer to bits in values and to bit-positions in memory. It then follows that a value has a most significant bit (MSB) and a least significant bit (LSB), and that a location in memory has a high bit (HB) position and a low bit (LB) position.]
TBL. 1 shows the value one (1) initially placed in the T-register and the value three (3) placed in the S-register. Because the low bit (LB) position of T here is a 1, during execution of the +* op-code:
(1) S and T are added together and the result is put in T (TBL. 2 shows the result of this); and
(2) the contents of T are shifted to the right and a 0 is placed in bit 4 (TBL. 3 shows the result of this).
The reason for bit 4 being filled with a 0 is saved for later discussion.
The contents of T and S in TBL. 3 are now used for a second example. Because the LB position of T is now a 0, during another execution of the +* op-code:
(1) the contents of T are simply shifted to the right and a 0 is placed in bit 4 (TBL. 4 shows the result of this).
Again, the reason for bit 4 being filled with a 0 is saved for later discussion. Additionally, it should be noted that the shift to the right of all of the bits in T is not associated in any way with the fact that a 1 or 0 filled the LB position of T prior to the execution of the +* op code. Instead, and more importantly, the shift of all the bits to the right in T is associated with the +* op-code itself.
These two examples demonstrate nearly all of the actions associated with the +* op-code. What was not fully described was why 0 is used to fill bit 4. The following covers this.
The General Case of the +* Op-Code

A general explanation of the +* op-code is that it executes a conditional add followed by a bit shift of all bits in T in the direction of the low order bits, with either a 1 or a 0 filling the high bit (HB) position of T after the shift.
Turning first to the shift sub-process 102, when the LSB of T is 0, in a step 110 the content of the HB position of T is examined. When the HB position of T is 0, in a step 112 the contents of T are shifted right, in a step 114 the HB position of T is filled with a 0, and in a step 116 T contains its new value. Alternately, when the HB position of T is 1, in a step 118 the contents of T are shifted right, in a step 120 the HB position of T is filled with a 1, and step 116 now follows where T now contains its new value.
Turning now to the conditional add sub-process 104, when the LB position of T is 1, in a step 122 the contents of T and S are added and in a step 124 whether this produces a carry is determined. If there was no carry, the shift sub-process 102 is entered at step 110, as shown. Alternately, if there was a carry (the carry bit is 1), the shift sub-process 102 is entered at step 118, as shown. Then the +* op-code process (the shift-add mechanism 100) continues with the shift sub-process 102 through step 116, where T will now contain a new value.
While the actions associated with the +* op-code are easy to define,
The most general case of a +* op-code is now described using a pseudo-code algorithm. For this description it is assumed that the +* op-code is executed on an n-bit machine wherein an nt-bit width number t is initially placed in T and an ns-bit width number s is initially placed in S. Furthermore, it is assumed that only one additional bit is available to represent a carry, even if the +* op-code produces a carry that is theoretically more than one bit can represent. There is no restriction on the lengths of nt and ns, only that their individual bit lengths should be less than or equal to the bit width of n. The pseudo-code is as follows:
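As a stand-in for the pseudo-code just mentioned, here is a Python sketch of a single +* step as we read the flowchart description (steps 110-124) above. The function name, the width parameter, and our reading of the no-carry add case (the vacated high bit takes the high bit of the sum) are assumptions; values are treated as unsigned and only one carry bit is retained, as stated.

```python
def plus_star(t, s, width=18):
    # One +* (shift-add) step on width-bit registers.  S is never
    # modified; T is conditionally added to S, then always shifted one
    # bit toward the low-order end.
    mask = (1 << width) - 1
    hb = width - 1
    if t & 1:                  # LB position of T holds 1: add, then shift
        total = t + s
        carry = total >> width            # a single retained carry bit
        t = total & mask
        fill = 1 if carry else (t >> hb)  # carry fills the HB with 1;
                                          # otherwise the sum's HB is kept
    else:                      # LB position of T holds 0: shift only
        fill = t >> hb                    # the old high bit is preserved
    return (t >> 1) | (fill << hb)
```

With 4-bit fields this reproduces TBLS. 1-4: plus_star(1, 3, width=4) adds (giving 4) and shifts (giving 2), and a further step on that even result simply shifts, giving 1.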
It is important to note in the preceding that the +* op-code always involves a bit shift to the right (in the direction of the low order bits) of all bits in T. This bit shift is not the result of any event before, during, or after the execution of the +* op-code. The bit shift is an always executed event associated with the +* op-code.
It has been implied herein that the shift-add mechanism 100 can be used for multiplication. An example is now presented followed by an explanation of the general case of utilizing the +* op-code to execute complete and correct multiplication.
Let us suppose that a person would like to multiply the numbers nine (9) and seven (7) and that the letter T is used to represent an 8-bit memory location where the nine is initially placed and S is used to represent an 8-bit memory location where the seven is initially placed. [N.b., for simplicity we are not using the 18-bit register width of the SEAforth™ 24a device here, although the underlying concept is extendable to that or any bit width.] TBLS. 5-10 represent the values in the T-register and the S-register in a set of hypothetical +* multiplication examples. TBL. 5 shows the value nine (9) initially placed in the T-register and the value seven (7) placed in the S-register. Next, the value in T is right justified in the 8-bit field width such that the four leading bits are filled with zeros. Conversely, the value in S is left justified in the 8-bit field width so that the four trailing bits are filled with zeroes. TBL. 6 shows the result of these justifications.
Correct multiplication here requires the execution of four +* op-codes in series. The first +* operation has the following effects. The LB position of T is 1 (as shown in TBL. 6), so the values in T and S are added and the result is placed in T (as shown in the left portion of TBL. 7). Next, the value in T is shifted to the right one bit in the same manner described in 1a2b1. (above). The values after this first +* operation are shown in the right portion of TBL. 7.
The second +* operation is quite simple, because the LB position of T is 0. All of the bits in T are shifted right in the manner described in 2b1. (above). The values after this second +* operation are shown in TBL. 8.
The third +* operation is similar to the second, because the LB position of T is again 0. All of the bits in T are again shifted right in the manner described in 2b1. (above). The values after this third +* operation are shown in TBL. 9.
The fourth and final +* operation is similar to the first +* operation. The LB position of T is 1 (as shown in TBL. 9), so the values in T and S are added and the result is placed in T (as shown in the left portion of TBL. 10). Next, the value in T is shifted to the right one bit in the same manner described in a2b1. (above). The values after this fourth +* operation are shown in the right portion of TBL. 10.
The resultant T in TBL. 10 is the decimal value 63, which is what one expects when multiplying the numbers nine and seven.
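The four-step walk-through above can be checked mechanically. The sketch below is our own illustration: it reuses a Python model of the +* step (under the same reading of the flowchart discussed earlier), right-justifies the multiplicand in T, left-justifies the multiplier in S, and runs one +* per multiplier bit. Correctness is only assured when the operands' significant bits are few enough that intermediate sums and carries stay in range, in the spirit of the 16-bit condition stated for the 18-bit device.

```python
def plus_star(t, s, width):
    # One +* step: conditional add of S when the LB of T is 1, then an
    # unconditional one-bit shift of T toward the low-order end.
    mask = (1 << width) - 1
    hb = width - 1
    if t & 1:
        total = t + s
        carry = total >> width
        t = total & mask
        fill = 1 if carry else (t >> hb)
    else:
        fill = t >> hb
    return (t >> 1) | (fill << hb)

def multiply(a, b, half=4):
    # Multiply two unsigned values in a (2 * half)-bit field with one +*
    # step per multiplicand bit, mirroring TBLS. 5-10.
    width = 2 * half
    t = a          # right-justified in the field (leading bits zero)
    s = b << half  # left-justified in the field (trailing bits zero)
    for _ in range(half):
        t = plus_star(t, s, width)
    return t
```

multiply(9, 7) returns 63, reproducing the values shown in TBLS. 5-10 step by step.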
A +* Pseudo-Code Algorithm for Multiplication

The multiplication of a positive value with a positive value will result in a correct product when the sum of the significant bits in T and S prior to the execution of this pseudo-code is less than or equal to 16 bits. And the multiplication of a positive value with a negative value will result in a correct product when the sum of the significant bits in T and S prior to the execution of the pseudo-code is less than or equal to 17 bits. Note that S should contain the two's complement of the desired negative value prior to the execution of this pseudo-code.
Of course, the multiplication of a negative value with a positive value is the same as 2. (above) for multiplication, as long as the negative value is in T and the positive value in S.
It has also been implied herein that the shift-add mechanism 100 can be used to generate pseudorandom numbers. An example is now presented, followed by an explanation of the general case utilizing the +* op-code for this.
Assume the following question is asked: Is it more efficient to execute complete and correct multiplication or to generate pseudorandom numbers by using just the +* op-code? It might seem logical to assume that multiplication, as complicated as it may seem at first glance, is more efficient to execute than the generation of pseudorandom numbers. This would, in fact, be incorrect except in the case when the two numbers being multiplied are only a few bits in length. Otherwise, and surprising even to many advanced programmers, the generation of pseudorandom numbers is actually the shortest program that can be written for the SEAforth™ 24a device. The generation of pseudorandom numbers utilizes the same instruction as multiplication, namely the +* op-code, but is much simpler to complete.
Like multiplication, random number generation requires two values, an nt-bit width number t in T and an ns-bit width number s in S, wherein both ns and nt are greater than zero. This means that the values in both S and T have at least one significant bit. Next, the pseudo-code algorithm which outlines pseudorandom number generation is shown.
A Pseudo-Code +* Algorithm to Generate Pseudorandom Numbers

[N.b., the following is not an error. The pseudo-code +* algorithm here is expressed in one line of text.]
- 1. While some looping condition is true:
- 1a. Execute the +* pseudo-code algorithm.
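The one-line algorithm above can be sketched as a Python generator (our own model, under the same reading of the +* step discussed earlier). The 18-bit width and the seed/constant pair $5 and $1ff3 echo our reading of the values in the listing described below, but all names and defaults here are illustrative assumptions.

```python
def plus_star(t, s, width=18):
    # One +* step (sketch): conditional add of S when T is odd, then a
    # one-bit right shift of T.
    mask = (1 << width) - 1
    hb = width - 1
    if t & 1:
        total = t + s
        carry = total >> width
        t = total & mask
        fill = 1 if carry else (t >> hb)
    else:
        fill = t >> hb
    return (t >> 1) | (fill << hb)

def prng(seed, s=0x1FF3, width=18):
    # While some looping condition is true, execute +*; here the
    # "looping condition" is simply that the caller keeps asking for
    # terms.  Both seed and s must be non-zero (see the caveats below).
    t = seed
    while True:
        t = plus_star(t, s, width)
        yield t
```

The same seed always reproduces the same sequence, and a different seed starts a different one, matching the two preferences discussed in the background.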
- Line Description
- 1. Comment: for the name of the file.
- 2. Loads the compiler/simulator.
- 3. White space for coding style.
- 4. Comment: This section begins the code that will be executed on the host machine up until lines 32-37 which are executed on the target machine. The following colon definitions are defined next to assist with file handling.
- 5. Compiling to the host machine not the target machine.
- 6. White space for coding style.
- 7. Creates a value, like a variable but returning the value itself instead of its address, for the open file; fid is short for file id.
- 8. Starts with create to name a location in the host Forth dictionary, then sends a carriage return and line feed out to a file.
- 9. White space for coding style.
- 10. Standard Forth way to open a file.
- 11. Creates the file random.log, where r/w means read/write; create-file creates a file from scratch, overwriting any existing file, and returns error code 0 if successful and 1 if not, after which the error code is thrown away.
- 12. White space for coding style.
- 13. Standard Forth way to close a file.
- 14. Closes the file we already opened, which returns an error code of either 0 or 1, and stores 0 to fid so that we do not accidentally use the error code later.
- 15. White space for coding style.
- 16. Standard Forth way to write to a file. Converts a number to a string and writes it to a file.
- 17. Numbers written to the file random.log will be in base decimal.
- 18. Uses the standard Forth picture numeric output operators to format a field width in the file where the values are going to be written and writes the value to the file, returning an error code again either 0 or 1.
- 19. Puts the string on the stack that contains carriage return and line feed and writes this to the file so the next value is written on a new line, throw gets rid of error code.
- 20. White space for coding style.
- 21. Standard Forth way to grab the value from the T register on the target machine and use the previously defined colon definitions to write the value in T to the file random.log.
- 22. Indicates that we are working with hexadecimal values, actually opens the file random.log, and puts a zero on the host stack.
- 23. Counts up from 0 by 1 to 2 inclusive, that is, 0, 1, and 2. Next, two loops are executed: a do loop followed by a begin loop. Then step moves through one cycle; the value of T in the data stack of the target machine is given the variable name t on the host machine, and tuck is like a swap followed by an over in Venture Forth™. This code cycles until the value held by the variable t differs from what is on the host stack.
- 24. Swaps the two values on the stack of the host machine, puts a 0 on the stack of the host machine, and then begins another loop.
- 25. Keeps looping until t changes; step is the word from the simulator that simulates one cycle.
- 26. Once a t value is obtained that differs from what was on the host stack, copies the value and then writes it to the file random.log.
- 27. Keeps looping until 131058 passes have been completed, as indicated on line 42. The loop here corresponds with the do described above, which also increments the loop counter.
- 28. The drop here indicates that one last item was left on the host stack; it discards that item and then closes the file random.log.
- 29. White space for coding style.
- 30. Comment: Indicates the end of file handling. The following code except for line 42 is executed on the target machine.
- 31. White space for coding style.
- 32. Indicates the node location (node 0) where the following code will be executed. The following code being executed is on the target machine.
- 33. Colon definition for running the code inside node 0.
- 34. Puts two hexadecimal values on the data stack. The value $1ff3 is placed in S and the value $5 is placed in T at the end of this code.
- 35. This next instruction begins executing a loop.
- 36. The first instruction, +*, executes in exactly the same way as explained in section 3.1. and its subsections. The second instruction returns the loop back to line 38.
- 37. This line closes the code that will be executed within the node designated in line 35, that is, the code executed in node 0. Additionally, this line ends the colon definition provided in line 36, which executes within node 0.
- 38. White space for coding style.
- 39. The value 131058 is the total number of values that will be written to the output file random.log.
The just described pseudo-code algorithm for producing pseudorandom numbers is much simpler than the previously described algorithm for multiplication. Yet, as simple as this algorithm appears (and it is quite simple), a few important caveats merit further discussion.
In the hardware used for present embodiments of the random number generation process 300, e.g., the SEAforth™ 24a device, two cycles are needed to ensure proper execution of the +* op-code. Only giving +* a single cycle to execute makes the behavior in the HB of T unpredictable. The +* requires two cycles, the first for the add and the second for the shift, to produce proper/expected results. Furthermore, it would generally be hard to tell in which cases the single cycle of +* would have deterministic behavior in the HB of T. By simply preceding each +* operation with a (nop) operation, however, this problem is solved.
This particular method of pseudorandom number generation has no restriction as to the length of sequences which can be produced. In fact, the only restrictions come as a result of the number of bits which can be utilized to represent the values in T and S. If the machine can be utilized in such a way that other registers can be made available to assist T and S with the bit length of the values they contain, this method is able to produce a pseudorandom number whose bit length is restricted only by the number of bits made available to represent T. Although it has already been noted that the initial value placed in T must be non-zero, the reasoning behind this has not been explained. To understand this, assume that T does contain the value 0 prior to an execution of the pseudo-code algorithm for pseudorandom number generation. The contents of S are not important, for the sake of this example. An initial value of 0 in T signifies that every bit in T is a 0. An iteration of +* now simply produces another value of 0 in T, because the LSB of T prior to execution is 0 and this will result in a bit shift to the right of all the bits in T where the highest bit is a 0. Another iteration or any number of iterations of +* will simply produce the same result. Thus, it is very important that the initial value placed in T be non-zero or a very uninteresting sequence is produced. Note, however, that a value of 0 placed in T prior to an execution of the multiplication pseudo-code algorithm will not result in any iterations of +*, but the correct value of the multiplication is in T even though the value in T does not change.
It has also already been noted that the initial value placed in S must be non-zero, and the reasoning behind this has also not been explained. To understand this, assume that T contains a non-zero value and that S does contain the value 0 prior to an execution of the pseudo-code algorithm for pseudorandom number generation. The effect of a +* on T is simply a bit shift to the right. If the highest bit in T is a 1 prior to any +* iterations, then ultimately T will settle to the value that is associated with all bits being set. If the highest bit in T is a 0 prior to any +* iterations, then ultimately T will settle to the value that is associated with all bits not being set.
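Both degenerate cases, a zero seed in T (previous paragraph) and a zero value in S (this paragraph), can be checked directly with a small Python model of the +* step; the step function below is our sketch under the same reading of the flowchart as before.

```python
def plus_star(t, s, width=8):
    # One +* step (sketch): conditional add when T is odd, then a
    # one-bit right shift of T.
    mask = (1 << width) - 1
    hb = width - 1
    if t & 1:
        total = t + s
        carry = total >> width
        t = total & mask
        fill = 1 if carry else (t >> hb)
    else:
        fill = t >> hb
    return (t >> 1) | (fill << hb)

def settle(t, s, width=8, steps=32):
    # Apply +* repeatedly and return the final value of T.
    for _ in range(steps):
        t = plus_star(t, s, width)
    return t
```

With 8-bit registers, settle(0, 0x55) stays at 0; with S = 0, settle(0b10000001, 0) drifts up to 255 (all bits set) and settle(0b01000001, 0) drifts down to 0, exactly the behaviors described above.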
The value which is placed in T prior to the execution of the pseudo-code pseudorandom number generation algorithm is the seed to the algorithm. Assume that the value in S does not change during the following. An initial value of t1 in T produces a sequence Q1. Using an initial value of t2 in T, where t1 is not equal to t2, produces a sequence Q2. That is, for each ti a sequence Qi is produced, where every Qi is a subset of a sequence Q that is the superset of all possible sequences that can be produced. Said another way, different seeds produce different sequences, and using the same seed twice will produce the exact same sequence. The possible sequences which are produced thus depend greatly on the value initially placed in T.
The value in T that is produced during each successive +* iteration is greatly dependent on the value in S. Because the value in S is fixed during the +* iterations, it is extremely important that an appropriate value be chosen. What is meant by “appropriate” depends on the characteristics that are desired in the sequence that is produced. At present, the most appropriate value for S is debatable, and thus is a matter of a design choice by a programmer. From the inventors' perspective, however, an optimal value for S can be chosen through brute force testing on a target device (e.g., on the SEAforth™ 24a device).
Improving Sequence Quality

A programmer who takes into account all of the just recited caveats will likely still not have created a quality pseudorandom sequence of numbers. One of the problems associated with this approach to producing pseudorandom numbers is determining when the sequence of pseudorandom numbers begins. Is the first pseudorandom number the result of the first +* iteration? Or does the first pseudorandom number occur as the result of a later iteration? Or even, does this method truly produce a sequence of pseudorandom numbers?
An embodiment of this method of producing pseudorandom numbers is presented in
There are many techniques which can be utilized to improve the quality of sequences produced by the +* op-code. For example, probably the most obvious improvement is to increase the bit widths of T and S. This will increase the length of the sequence produced, as well as increasing the bit width of each term produced in the sequence. This alone could yield more favorable results when analyzed using the NIST tests. Additionally, this will improve sequence quality when any of the following techniques are applied.
The embodiment of the shift-add mechanism 100 in
Another technique for improving the sequence of generated pseudorandom numbers is similar to that of masking certain bits in the output. An exclusive or (XOR), with a static or dynamic value, can be applied to each output value in the sequence. This would not decrease the length of the values produced.
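A minimal sketch of these two post-processing techniques, masking output bits and XOR-ing each term, is shown below in Python. The step function repeats our earlier model of +*; the keep width and the XOR constant are arbitrary illustrative choices, not values prescribed by the text.

```python
def plus_star(t, s, width=18):
    # One +* step (sketch): conditional add when T is odd, then a
    # one-bit right shift of T.
    mask = (1 << width) - 1
    hb = width - 1
    if t & 1:
        total = t + s
        carry = total >> width
        t = total & mask
        fill = 1 if carry else (t >> hb)
    else:
        fill = t >> hb
    return (t >> 1) | (fill << hb)

def improved_stream(seed, s=0x1FF3, width=18, keep=8, xor_value=0xA5):
    # Mask each raw term down to its low `keep` bits, then XOR it with a
    # static value.  Masking shortens each output; the XOR does not.
    keep_mask = (1 << keep) - 1
    t = seed
    while True:
        t = plus_star(t, s, width)
        yield (t & keep_mask) ^ xor_value
```

A dynamic XOR value, varied between iterations, fits the same hook; one would simply update xor_value inside the loop.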
Of course, there are certainly other techniques that are not listed here that may also improve the quality of a sequence produced. These other methods, like those presented here, are not an integral component of the underlying invention but, rather, are to be appreciated as lying within the scope of the overall invention.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and that the breadth and scope of the invention should not be limited by any of the above described exemplary embodiments, but should instead be defined only in accordance with the following claims and their equivalents.
Claims
1. A system for pseudorandom number generation, comprising:
- a processor having a first memory to hold a first value and a second memory to hold a second value; and
- a logic to perform a +* operation while a looping condition is true.
2. The system of claim 1, wherein:
- said +* operation is an operational code of the processor.
3. The system of claim 1, wherein:
- said first memory includes a first set of multiple registers in said processor, or said second memory includes a second set of multiple registers in said processor, or both.
4. The system of claim 1, further comprising:
- a logic to mask one or more of the bits of said first value.
5. The system of claim 1, further comprising:
- a logic to exclusive or said first value with a third value.
6. The system of claim 5, further comprising:
- a logic to vary said third value dynamically between different instances of said performing said +* operation.
7. The system of claim 1, wherein:
- said logic to perform prefaces each said +* operation with a (nop) operation.
8. A method for pseudorandom number generation in a processor having first and second memories, comprising:
- placing a first value in the first memory;
- placing a second value in the second memory;
- while a looping condition is true, performing a +* operation.
9. The method of claim 8, wherein:
- said +* operation is an operational code of the processor.
10. The method of claim 8, wherein:
- the first memory includes a first set of multiple registers in the processor, or the second memory includes a second set of multiple registers in the processor, or both.
11. The method of claim 8, further comprising:
- after one or more iterations of said performing, masking one or more of the bits of said first value.
12. The method of claim 8, further comprising:
- after one or more iterations of said performing, exclusive or'ing said first value with a third value.
13. The method of claim 12, further comprising:
- varying said third value dynamically between different said iterations.
14. The method of claim 8, further comprising:
- performing a (nop) operation prior to each said +* operation.
Type: Application
Filed: Apr 18, 2008
Publication Date: Mar 26, 2009
Inventor: Michael B. Montvelishsky (Burlingame, CA)
Application Number: 12/148,511
International Classification: G06F 7/58 (20060101);