Shift-add based random number generation

A system for pseudorandom number generation. A processor is provided that has a first memory to hold a first value and a second memory to hold a second value. A logic then performs a +* operation while a looping condition is true.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 60/974,820 entitled “Shift-Add Mechanism,” filed Sep. 24, 2007 by at least one common inventor, which is incorporated herein by reference in its entirety.

BACKGROUND OF THE INVENTION

1. Technical Field

The present invention relates generally to electrical computers and digital processing systems having processing architectures and performing instruction processing, and more particularly, to processes for random number generation that can be implemented in such.

2. Background Art

Powerful and efficient operational codes (op-codes) are critical for modern computer processors to perform many tasks. For example, some such tasks are multiplication and producing sequences of pseudorandom numbers.

That a task like multiplication is well understood and important requires no elaboration. In contrast, random and pseudorandom number generation merit discussion here to establish context for the presently disclosed invention.

Whether selecting a set of truly random numbers is possible, what processes might be used for this, and whether the actuality of randomness is provable are all subjects of much debate. For present purposes, it is enough to accept that computers are not able to generate truly random numbers, since they only perform previously defined instructions to produce completely predictable results. There is, however, an abundance of computer algorithms available to generate pseudorandom numbers, and the use of computers with such algorithms is very useful in many important applications. Unless one knows where in a sequence the first pseudorandom number is taken, the result can be completely unpredictable, and so the words random and pseudorandom are used interchangeably from this point on.

There are two main preferences for pseudorandom numbers generated by computers. The first such preference is that the sequence of pseudorandom numbers be reproduced exactly for several iterations of the same experiment. This preference is important, for example, for some scientific experiments. Here a “seed” value is used as an initializing input to a computer algorithm and the computer algorithm then is expected to generate the exact same sequence of numbers in each iteration.

The second preference is that the sequence of pseudorandom numbers not be merely seemingly random, but rather that they be completely unpredictable. This preference is important, for example, for cryptography and many other applications. Here a “seed” value is used in a manner that will result in a completely different sequence of random numbers for different iterations based on the same seed.

While no specific applications for random numbers are explored in detail herein, the fact that it is possible to generate pseudorandom sequences of numbers that are good enough for such applications is in itself of importance, and in such cases one needs to be able to compare approaches. For instance, in many situations the real problem encountered is not in creating pseudorandom sequences, but rather in showing that such sequences are random enough for the given application. In fact, for some applications, true randomness is not actually required and apparent randomness is instead preferred. The following paragraphs explore methods of testing for randomness in a given sequence and outline the characteristics associated with “quality” pseudorandom sequences.

For certain applications it is important that a distinction be made between apparent versus actual randomness, because one may be preferred over the other. Even though it is not actually possible to prove that a string of numbers is random, many individuals have their own informal, and often mistaken, ways of judging this.

Assume for the sake of a first example that a person watches a roulette wheel for 22 successive spins. For the first 18 spins they notice what they would consider random output from the wheel, and for the 19th, 20th, 21st, and 22nd spins the same value appears. There are two thoughts that our hypothetical person may have at this point. The first is that the next spin has a high chance of resulting in the same value, since the same value occurred in the last four spins. The second thought is that there is absolutely no way that the next spin can produce the same value that occurred in the last four spins. Both of these trains of thought are completely wrong, but they lead to the idea of patterns appearing random versus actually being random. Each new spin on a roulette wheel has the same expected outcome as all other previous and future spins. This is not saying that each spin is random. What it is saying is that the underlying distribution for all outcomes from a roulette wheel should be completely uniform, which is one of the behaviors of a truly random sequence of numbers. Four successive spins resulting in the same value is not a characteristic of apparently random behavior, but it is not unreasonable for a truly random sequence of numbers to contain four successive equivalent values.

As another example, assume that one has an iPod™ and wants to listen to their music, yet also wants to be surprised with each new song. If the iPod contains a truly random method of selecting the next song to play, it would be entirely possible for the same song to play several times in a row. However, it would not be in the best interest of this device's manufacturer, Apple, Inc., to include a truly random number generator in its iPod product, because most people want to listen to nearly all available songs before they hear a repeat. Accordingly, it is unlikely that this manufacturer uses a true random method for producing next song playback, because this would conflict with its consumers' preferences.

A point demonstrated by the two examples presented above is that applications can predefine a need for apparently random versus actually random numbers.

At present there is no one test that can determine if a sequence of numbers is truly random. Instead a plethora of tests have been developed. Some 16 in all are presently used by the National Institute of Standards and Technology (NIST), wherein combining the results of these tests gives a better indication of the degree of randomness of a sequence in question. NIST has published detailed descriptions of these tests, and debate rages over actual and perceived errors in the methodology and in these descriptions. What is important to note here is that while each test provides a yes or no answer to the question of whether or not a sequence is random, one test by itself does not guarantee randomness. Even the yes or no answers for each of the 16 tests can differ, depending on the confidence level at which the sequence is tested. Therefore, it is important to determine at what level of randomness a given sequence will be tested and to evaluate the results from all of the tests before making a proclamation regarding the randomness of a given sequence.

In the preceding discussion of methods for testing for “good enough” randomness, there has been no mention of characteristics that define the “quality” of a sequence of randomly generated numbers. Even if a sequence passes the NIST tests, that sequence may not be preferred over another that does not pass or that only passes at a lower confidence level.

Speed is also an extremely important consideration in many applications. So it follows that the following two questions should be answerable for every random sequence that is generated. First, how quickly is the first term in the sequence generated? Second, how quickly are successive terms in the sequence generated? The answers to these questions are not accounted for in any of the NIST tests. Yet they are important characteristics of a quality sequence. Another important quality consideration is that the numbers in a sequence not be too random (the iPod example above demonstrates this). Too random a sequence can detract from the quality of the sequence. A quality sequence of random numbers will adhere to the characteristics of a noise sphere (see e.g., any of the many excellent academic texts on this topic). Thus, it is important to realize that even a sequence which passes all of the NIST tests may not be useful because the values are generated too slowly, are too random, or do not adhere to certain noise standards.

BRIEF SUMMARY OF THE INVENTION

Accordingly, it is an object of the present invention to provide a shift-add based random number generation process that is useful for various operations in a processor.

Briefly, a preferred embodiment of the present invention is a system for pseudorandom number generation. A processor is provided that has a first memory to hold a first value and a second memory to hold a second value. A logic then performs a +* operation while a looping condition is true.

These and other objects and advantages of the present invention will become clear to those skilled in the art in view of the description of the best presently known mode of carrying out the invention and the industrial applicability of the preferred embodiment as described herein and as illustrated in the figures of the drawings.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)

TBLS. 1-4 represent the values in the T-register and the S-register in a SEAforth™ 24a device in a set of hypothetical +* (shift-add mechanism) examples.

TBLS. 5-10 represent the values in the T-register and the S-register in a SEAforth™ 24a device in a set of hypothetical +* (shift-add mechanism) multiplication examples.

FIG. 1 (background art) is a table of the thirty two operational codes (op-codes) in the Venture Forth™ programming language.

FIG. 2 (background art) is a block diagram showing the general architecture of each of the cores in a SEAforth™ 24a device.

FIGS. 3a-b (background art) are schematic block diagrams depicting how the 18 bit wide registers in the SEAforth™ 24a can be represented, wherein FIG. 3a shows the actual bit arrangement, and FIG. 3b shows a conceptual bit arrangement.

FIGS. 4a-b (background art) are schematic block diagrams depicting register content, wherein FIG. 4a shows the slots filled with four • (nop) op-codes, and FIG. 4b shows the register filled with the number 236775 (as unsigned binary).

FIGS. 5a-b (background art) are block diagrams respectively and stylistically showing the return and the data stack elements in SEAforth™ 24a cores, wherein FIG. 5a depicts elements in the return stack region, and FIG. 5b depicts elements in the data stack region.

FIG. 6 is a flow chart of a shift-add mechanism, as used by the present invention, that shows all of the possible actions associated with a single execution of the +* op-code.

FIG. 7 is a table showing bit relationships in accord with FIG. 6.

FIG. 8 is a flow chart of a shift-add based multiplication process that uses the shift-add mechanism of FIG. 6.

FIG. 9 is a code listing for an example of random number generation in accord with the present invention.

In the various figures of the drawings, like references are used to denote like or similar elements or steps.

DETAILED DESCRIPTION OF THE INVENTION

A preferred embodiment of the present invention is a shift-add based random number generation process. As illustrated in the various drawings herein, and particularly in the view of FIG. 9, preferred embodiments of the invention are depicted by the general reference character 300.

The present inventive shift-add based random number generation process 300 (FIG. 9), is an application of a shift-add mechanism invented in the present inventor's company. In view of this, that shift-add mechanism is discussed first, below.

The +* Op-Code on the SEAforth™ 24a Device

The shift-add mechanism 100 (FIG. 6), can be used for a variety of tasks including, without limitation, multiplication and pseudorandom number generation. In the Venture Forth™ programming language, the shift-add mechanism 100 exists as a “+*” op-code. Before presenting more detailed examples, it is useful to consider a simple example in the context of a SEAforth™ 24a device by IntellaSys™ Corporation of Cupertino, Calif., a member of The TPL Group™ of companies.

As general background, the SEAforth™ 24a has 24 stack based microprocessor cores that all use the Venture Forth™ programming language. FIG. 1 (background art) is a table of the thirty two operational codes (op-codes) in this language, in hex, mnemonic, and binary representations. These op-codes are divided into two main categories, memory instructions and arithmetic logic unit (ALU) instructions, with sixteen op-codes in each division. The memory instructions are shown in the left half of the table in FIG. 1, and the ALU instructions are shown in the right half of the table in FIG. 1. It can be appreciated that one clear distinction between the divisions of op-codes is that the memory instructions contain a zero (0) in the left-most bit, whereas the ALU instructions contain a one (1) in the left-most bit. Furthermore, this is the case regardless of whether the op-codes are viewed in their hex or binary representations. The +* op-code of present interest is shown upper-most in the right-hand column.

FIG. 2 (background art) is a block diagram showing the general architecture of each of the cores in the SEAforth™ 24a device. All of the registers in the SEAforth™ 24a are 18 bits wide, except for the B- and PC-registers, which are not relevant here.

There are two distinct approaches that can be taken when a programmer is selecting the bits that will make up the 18 bit wide register space in a SEAforth™ 24a (with limited exceptions for some op-codes that use the A-register). The first of these is to divide this space into four equal slots that can be called: slot 0, slot 1, slot 2, and slot 3. The bit lengths of these slots are not all equal, however, because division of 18 by 4 results in a remainder. The first three slots, slot 0, slot 1, and slot 2, therefore, can each hold 5 bits, while slot 3 holds only three bits.

FIGS. 3a-b (background art) are schematic block diagrams depicting how the 18 bit wide registers in the SEAforth™ 24a device can be represented, wherein FIG. 3a shows the actual arrangement of the bits as bits 0 through 17, and FIG. 3b shows a conceptual arrangement of the bits as bits −2 through 17. In FIG. 3a it can be seen that bits 13-17 inclusive make up slot 0, bits 8-12 inclusive make up slot 1, bits 3-7 inclusive make up slot 2, and bits 0-2 make up slot 3. The designers of the SEAforth™ 24a device often point out that the 18-bit wide registers can each contain three and three-fifths instructions, and this prompts the question whether slot 3 is significant, since none of the five-bit op-codes in FIG. 1 would appear to fit in its three bits. FIG. 3b shows how the designers of the SEAforth™ 24a device have handled this. They allow only certain op-codes to fit into slot 3 by treating the two least significant bits, called bit −1 and bit −2 here, as being hard wired to ground, or zero. Of course, since slot 3 effectively has only three bits rather than five bits of space, the number of op-codes that fit into slot 3 is limited to only eight of the 32 possible op-codes.

Specifically, these op codes are:

$00 00000b ; (return)
$04 00100b unext
$08 01000b @p+
$0C 01100b !p+
$10 10000b +*
$14 10100b +
$18 11000b dup
$1C 11100b • (nop)
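The common property of these eight op-codes can be checked mechanically: with bit −1 and bit −2 hard wired to zero, only op-codes whose five-bit encodings end in two zero bits can appear in slot 3. The following sketch (our own illustration, not part of the patent) verifies that exactly the eight op-codes listed above satisfy this, using the encodings from the FIG. 1 table.

```python
# Sketch (not from the patent): with the two low bits of slot 3 hard-wired to
# zero, only op-codes whose five-bit encodings end in "00" fit in slot 3.
# Mnemonics follow the FIG. 1 table quoted above.
SLOT3_OPCODES = {0x00: "; (return)", 0x04: "unext", 0x08: "@p+", 0x0C: "!p+",
                 0x10: "+*", 0x14: "+", 0x18: "dup", 0x1C: "nop"}

def fits_in_slot3(opcode: int) -> bool:
    """An op-code fits in the 3-bit slot 3 iff its two low bits are zero."""
    return (opcode & 0b00011) == 0

fitting = [op for op in range(32) if fits_in_slot3(op)]
assert fitting == sorted(SLOT3_OPCODES)  # the same eight op-codes
assert len(fitting) == 8
```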

The second approach that a programmer can use when selecting the bits that will make up the 18-bit wide register space in the SEAforth™ 24a is to simply not divide the 18-bit wide register into slots, and to instead consider the register as containing a single 18-bit binary value. This may appear at first to be a completely different approach than the slot-based approach, but both representations are actually equivalent. FIGS. 4a-b (background art) are schematic block diagrams depicting an example illustrating this. FIG. 4a shows the slots filled with four • (nop) op-codes, and FIG. 4b shows the register filled with the number 236775 (as unsigned binary). With reference to FIG. 1, it can be appreciated that the binary bit values in FIGS. 4a-b are the very same. This means that it has been left up to the programmer to differentiate whether a register will contain a number or contain four op-codes.
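The equivalence of the two representations can be confirmed arithmetically. The following sketch (our own illustration; the helper name is hypothetical) packs four nop op-codes into the slot positions of FIG. 3a and recovers the single unsigned value 236775 of FIG. 4b.

```python
# Sketch (not from the patent): pack four op-codes into an 18-bit register as
# FIG. 3a describes -- slot 0 at bits 13-17, slot 1 at bits 8-12, slot 2 at
# bits 3-7, and the top three bits of the slot-3 op-code at bits 0-2.
NOP = 0x1C  # 11100b, the nop op-code from FIG. 1

def pack(slot0: int, slot1: int, slot2: int, slot3: int) -> int:
    # Slot 3 keeps only its three high bits; its two low bits are hard-wired
    # to zero, so they are dropped by the right shift.
    return (slot0 << 13) | (slot1 << 8) | (slot2 << 3) | (slot3 >> 2)

word = pack(NOP, NOP, NOP, NOP)
assert word == 236775  # four nops read as one unsigned binary number
```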

FIGS. 5a-b (background art) are block diagrams stylistically showing the return and the data stack elements, respectively, that exist in each core of a SEAforth™ 24a device. FIG. 5a depicts how the return stack region includes a top register that is referred to as “R” (or as the R-register) and an eight-register circular buffer. FIG. 5b depicts how the data stack region includes a top register that is referred to as “T” (or as the T-register), a (second) register below T that is referred to as “S” (or as the S-register), and also an eight-register circular buffer. In total, the return stack thus contains nine registers and the data stack contains ten registers. Only the data stack region needs to be considered in the following example.

TBLS. 1-4 represent the values in the T-register and the S-register in a set of hypothetical +* examples. For simplicity, only 4-bit field widths are shown. It is important to note in the following that the value in the T-register (T) is changed while the value in the S-register (S) remains unchanged during execution of the +* op-code. [N.b., to avoid confusion between the bits making up values and the locations in memory that may hold such, we herein refer to bits in values and to bit-positions in memory. It then follows that a value has a most significant bit (MSB) and a least significant bit (LSB), and that a location in memory has a high bit (HB) position and a low bit (LB) position.]

TBL. 1 shows the value one (1) initially placed in the T-register and the value three (3) placed in the S-register. Because the low bit (LB) position of T here is a 1, during execution of the +* op-code:

(1) S and T are added together and the result is put in T (TBL. 2 shows the result of this); and

(2) the contents of T are shifted to the right and a 0 is placed in bit 4 (TBL. 3 shows the result of this).

The reason for bit 4 being filled with a 0 is saved for later discussion.

The contents of T and S in TBL. 3 are now used for a second example. Because the LB position of T is now a 0, during another execution of the +* op-code:

(1) the contents of T are simply shifted to the right and a 0 is placed in bit 4 (TBL. 4 shows the result of this).

Again, the reason for bit 4 being filled with a 0 is saved for later discussion. Additionally, it should be noted that the shift to the right of all of the bits in T is not associated in any way with the fact that a 1 or 0 filled the LB position of T prior to the execution of the +* op code. Instead, and more importantly, the shift of all the bits to the right in T is associated with the +* op-code itself.

These two examples demonstrate nearly all of the actions associated with the +* op-code. What was not fully described was why 0 is used to fill bit 4. The following covers this.

The General Case of the +* Op-Code

A general explanation of the +* op-code is that it executes a conditional add followed by a shift of all bits in T in the direction of the low order bits, with either a 1 or a 0 filling the high bit (HB) position of T after the shift.

FIG. 6 is a flow chart of the inventive shift-add mechanism 100 that shows all of the possible actions associated with a single execution of the +* op-code. The +* op-code has two major sub-processes, a shift sub-process 102 and a conditional add sub-process 104. The shift-add mechanism 100 is embodied as a +* op-code that starts in a step 106 and where the content of the LB position of T is examined in a step 108.

Turning first to the shift sub-process 102, when the LSB of T is 0, in a step 110 the content of the HB position of T is examined. When the HB position of T is 0, in a step 112 the contents of T are shifted right, in a step 114 the HB position of T is filled with a 0, and in a step 116 T contains its new value. Alternately, when the HB position of T is 1, in a step 118 the contents of T are shifted right, in a step 120 the HB position of T is filled with a 1, and step 116 now follows where T now contains its new value.

Turning now to the conditional add sub-process 104, when the LB position of T is 1, in a step 122 the contents of T and S are added and in a step 124 whether this produces a carry is determined. If there was no carry, the shift sub-process 102 is entered at step 110, as shown. Alternately, if there was a carry (the carry bit is 1), the shift sub-process 102 is entered at step 118, as shown. Then the +* op-code process (the shift-add mechanism 100) continues with the shift sub-process 102 through step 116, where T will now contain a new value.

While the actions associated with the +* op-code are easy to define, FIG. 6 reveals that the execution of the +* op-code is not conceptually simple. FIG. 7 is a table showing the relationships between the LB position and the HB position of T prior to an execution, here called old T, an intermediate carry when the values in S and T are added (if this action occurs), and finally, the HB and the penultimate bit (HB −1) of T which is produced after execution, here called new T.

A +* Pseudo-Code Algorithm

The most general case of a +* op-code is now described using a pseudo-code algorithm. For this description it is assumed that the +* op-code is executed on an n-bit machine, wherein an nt-bit width number t is initially placed in T and an ns-bit width number s is initially placed in S. Furthermore, it is assumed that only one additional bit is available to represent a carry, even if the +* op-code produces a carry that is theoretically more than one bit can represent. There is no restriction on nt and ns other than that each should be less than or equal to the machine bit width n. The pseudo-code is as follows:

1. If the LB position of T is a 1:
 1a. Add the value t in T to the value s in S, where the sum t + s, call this t′, replaces the present t in T and S is left unchanged.
  1a1. If the HB position of T is a 1:
   1a1a. If the addition of t and s resulted in a carry:
    1a1a1. Shift all bits in T to the right one bit, filling the HB position with a 1.
   1a1b. If the addition of t and s did not result in a carry:
    1a1b1. Shift all bits in T to the right one bit, filling the HB position with a 1.
  1a2. If the HB position of T is a 0:
   1a2a. If the addition of t and s resulted in a carry:
    1a2a1. Shift all bits in T to the right one bit, filling the HB position with a 1.
   1a2b. If the addition of t and s did not result in a carry:
    1a2b1. Shift all bits in T to the right one bit, filling the HB position with a 0.
2. If the LB position of T is a 0:
 2a. If the HB position of T is a 1:
  2a1. Shift all bits in T to the right one bit, filling the HB position with a 1.
 2b. If the HB position of T is a 0:
  2b1. Shift all bits in T to the right one bit, filling the HB position with a 0.

In each shift above, bit 0 of t′ after the shift contains the contents of bit 1 before the shift, bit 1 after the shift contains the contents of bit 2 before the shift, and, in the same way, each bit m, m < n, being filled after the shift contains the contents of bit m + 1 before the shift. This process leaves the HB position devoid while effectively destroying bit 0 of t′ before the shift; the HB position is then filled with the 1 or 0 specified in the respective step.

It is important to note in the preceding that the +* op-code always involves a bit shift to the right (in the direction of the low order bits) of all bits in T. This bit shift is not the result of any event before, during, or after the execution of the +* op-code. The bit shift is an always executed event associated with the +* op-code.
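The pseudo-code above can be condensed into a short executable sketch. The function name and structure here are our own (the patent specifies only the pseudo-code); note that a carry out of the add, or an HB of 1, forces a 1 into the vacated HB position, exactly as in cases 1a1a1 through 2b1.

```python
# Sketch of the +* pseudo-code above on an n-bit machine (helper name is
# ours, not the patent's). S is never modified; T always shifts right once.
def plus_star(t: int, s: int, n: int = 18) -> int:
    mask = (1 << n) - 1
    if t & 1:                                   # LB of T is 1: conditional add
        total = t + s
        carry = (total >> n) & 1                # the one extra carry bit
        t = total & mask
        hb_fill = carry | ((t >> (n - 1)) & 1)  # a carry forces a 1 fill
    else:                                       # LB of T is 0: no add
        hb_fill = (t >> (n - 1)) & 1            # HB re-fills itself
    return (t >> 1) | (hb_fill << (n - 1))     # the always-executed shift

# The 4-bit example of TBLS. 1-4: T=1, S=3 gives 2, then 1.
assert plus_star(1, 3, n=4) == 2
assert plus_star(2, 3, n=4) == 1
```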

Multiplication Utilizing the +* Op-Code

It has been implied herein that the shift-add mechanism 100 can be used for multiplication. An example is now presented followed by an explanation of the general case of utilizing the +* op-code to execute complete and correct multiplication.

Let us suppose that a person would like to multiply the numbers nine (9) and seven (7) and that the letter T is used to represent an 8-bit memory location where the nine is initially placed and S is used to represent an 8-bit memory location where the seven is initially placed. [N.b., for simplicity we are not using the 18-bit register width of the SEAforth™ 24a device here, although the underlying concept is extendable to that or any bit width.] TBLS. 5-10 represent the values in the T-register and the S-register in a set of hypothetical +* multiplication examples. TBL. 5 shows the value nine (9) initially placed in the T-register and the value seven (7) placed in the S-register. Next, the value in T is right justified in the 8-bit field width such that the four leading bits are filled with zeros. Conversely, the value in S is left justified in the 8-bit field width so that the four trailing bits are filled with zeroes. TBL. 6 shows the result of these justifications.

Correct multiplication here requires the execution of four +* op-codes in series. The first +* operation has the following effects. The LB position of T is 1 (as shown in TBL. 6), so the values in T and S are added and the result is placed in T (as shown in the left portion of TBL. 7). Next, the value in T is shifted to the right one bit in the same manner described in 1a2b1. (above). The values after this first +* operation are shown in the right portion of TBL. 7.

The second +* operation is quite simple, because the LB position of T is 0. All of the bits in T are shifted right in the manner described in 2b1. (above). The values after this second +* operation are shown in TBL. 8.

The third +* operation is similar to the second, because the LB position of T is again 0. All of the bits in T are again shifted right in the manner described in 2b1. (above). The values after this third +* operation are shown in TBL. 9.

The fourth and final +* operation is similar to the first +* operation. The LB position of T is 1 (as shown in TBL. 9), so the values in T and S are added and the result is placed in T (as shown in the left portion of TBL. 10). Next, the value in T is shifted to the right one bit in the same manner described in 1a2b1. (above). The values after this fourth +* operation are shown in the right portion of TBL. 10.

The resultant T in TBL. 10 is the decimal value 63, which is what one expects when multiplying the numbers nine and seven.
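The walkthrough of TBLS. 5-10 can be reproduced with a short sketch (our own illustration; the helper name is hypothetical): right justify the 9 in an 8-bit T, left justify the 7 in S past T's four significant bits, and run +* four times.

```python
# Sketch (not the patent's code) of the TBLS. 5-10 example: multiply 9 by 7
# in an 8-bit field using four executions of the +* shift-add step.
def plus_star(t, s, n=8):
    mask = (1 << n) - 1
    if t & 1:                                  # conditional add on LB of T
        total = t + s
        carry = (total >> n) & 1
        t = total & mask
        fill = carry | ((t >> (n - 1)) & 1)
    else:
        fill = (t >> (n - 1)) & 1
    return (t >> 1) | (fill << (n - 1))        # the always-executed shift

t, s = 9, 7 << 4          # T right-justified; S shifted left of T's 4 bits
for _ in range(4):        # one +* per significant bit of T
    t = plus_star(t, s)
assert t == 63            # 9 * 7, matching the resultant T in TBL. 10
```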

A +* Pseudo-Code Algorithm for Multiplication

The multiplication of a positive value with a positive value will result in a correct product when the sum of the significant bits in T and S prior to the execution of this pseudo-code is less than or equal to 16 bits. And the multiplication of a positive value with a negative value will result in a correct product when the sum of the significant bits in T and S prior to the execution of the pseudo-code is less than or equal to 17 bits. Note that S should contain the two's complement of the desired negative value in S prior to the execution of this pseudo code.

1. If the desired multiplication is of a positive value with a positive value:
 1a. Right justify t in the n bit field width of T.
  1a1. Fill all leading bits in T after the MSB of t with zeros. The number of leading bits to fill should be exactly n − nt.
 1b. Justify s in the n bit field width of S so that the LSB of s is located one bit higher than the MSB of t in T.
  1b1. Fill all leading and trailing bits in S with zeros. The number of bits to fill should be exactly n − ns.
 1c. Perform the multiplication.
  1c1. Complete a for-loop indexing from 1 to nt.
   1c1a. Execute the +* pseudo-code as described for the general case above.
2. If the desired multiplication is of a positive value with a negative value:
 2a. Right justify t in the n bit field width of T.
  2a1. Fill all leading bits in T after the MSB of t with zeros. The number of leading bits to fill should be exactly n − nt.
 2b. Perform the two's complement of the value s in S.
  2b1. Bit shift the value s in S towards the HB of S by the number of significant bits nt.
 2c. Perform the multiplication.
  2c1. Complete a for-loop indexing from 1 to nt.
   2c1a. Execute the +* pseudo-code as described for the general case above.
3. If the desired multiplication is of a negative value with a negative value:
 3a. Perform the two's complement of the value t in T.
 3b. Perform the two's complement of the value s in S.
 3c. Execute 1a-1c.

Of course, the multiplication of a negative value with a positive value is the same as 2. (above) for multiplication, as long as the negative value is in T and the positive value in S.

FIG. 8 is a flow chart of the inventive shift-add based multiplication process 200 in accord with the present invention. In a step 202 the shift-add based multiplication process 200 starts or is invoked. In a step 204 a first value is arranged in a first memory location, i.e., in the right justified manner described in 1. (above) if T is the first memory location. In a step 206 a second value is arranged in a second memory location, i.e., in the left justified manner described in 1b. (above) if S is the second memory location. [Those skilled in the programming arts will readily appreciate that alternate programmatic control mechanisms than the following count-compare-work-decrement approach can be used.] In a step 208 the number of iterations of the +* op-code is determined. Essentially, this number needs to equal the number of significant bits in the first value (in T). In a step 210 whether all needed iterations of the +* op-code have been performed is determined. If not, in a step 212 an iteration of the +* op-code is performed and in a step 214 the count still needed is decremented. Alternately, if step 210 determines that all needed iterations of the +* op-code have been performed, in a step 216 the product of the first and second values is now in the first memory (i.e., in T).
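The count-compare-work-decrement loop of FIG. 8 can be sketched as follows (function names are ours, not the patent's). This sketch also exercises case 2 of the multiplication pseudo-code: the negative operand is held as its two's complement and shifted toward the HB of S by the nt significant bits of t, so the product emerges as a two's complement value.

```python
# Sketch (our names, not the patent's) of the FIG. 8 multiplication loop,
# here driven through the positive-by-negative case of the pseudo-code.
def plus_star(t, s, n=8):
    mask = (1 << n) - 1
    if t & 1:
        total = t + s
        carry = (total >> n) & 1
        t = total & mask
        fill = carry | ((t >> (n - 1)) & 1)
    else:
        fill = (t >> (n - 1)) & 1
    return (t >> 1) | (fill << (n - 1))

def multiply(t, s, nt, n=8):
    s = (s << nt) & ((1 << n) - 1)  # steps 204-206: arrange the operands
    count = nt                       # step 208: iterations = significant bits of t
    while count:                     # step 210: all iterations performed?
        t = plus_star(t, s, n)       # step 212: one +* op-code
        count -= 1                   # step 214: decrement the count
    return t                         # step 216: the product is in T

# 9 * -7, with -7 held as its 8-bit two's complement (249): the result is
# the two's complement of 63.
assert multiply(9, 249, nt=4) == (-63) & 0xFF
```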

Random Number Generation Using the +* Op-Code

It has also been implied herein that the shift-add mechanism 100 can be used to generate pseudorandom numbers. An example is now presented, followed by an explanation of the general case utilizing the +* op-code for this.

Assume the following question is asked: Is it more efficient to execute complete and correct multiplication or to generate pseudorandom numbers by using just the +* op-code? It might seem logical to assume that multiplication, as complicated as it may seem at first glance, is more efficient to execute than the generation of pseudorandom numbers. This is, in fact, incorrect except in the case when the two numbers being multiplied are only a few bits in length. Otherwise, and surprisingly even to many advanced programmers, the generation of pseudorandom numbers is actually the shortest program that can be written for the SEAforth™ 24a device. The generation of pseudorandom numbers utilizes the same instruction as multiplication, namely the +* op-code, but is much simpler to complete.

Like multiplication, random number generation requires two values, an nt-bit width number t in T and an ns-bit width number s in S, wherein both ns and nt are greater than zero. This means that the values in both S and T have at least one significant bit. The pseudo-code algorithm outlining pseudorandom number generation is shown next.

A Pseudo-Code +* Algorithm to Generate Pseudorandom Numbers

[N.b., the following is not an error. The pseudo-code +* algorithm here is expressed in one line of text.]

    • 1. While some looping condition is true:
      • 1a. Execute the +* pseudo-code algorithm.
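As a sketch only, the one-line pseudo-code loop can be rendered in Python. The +* model below makes assumptions about the high-bit behavior (carry into the HB on an add, arithmetic shift otherwise), and the seed $5 and S value $1ff3 echo the FIG. 9 discussion; none of this is the hardware itself.

```python
def pstar(t, s, n=18):
    """One +* step on an n-bit T (software model with assumed
    high-bit behavior: the add's carry shifts into the HB; the
    no-add case shifts arithmetically)."""
    mask = (1 << n) - 1
    if t & 1:
        r = t + s
        return ((r & mask) >> 1) | (((r >> n) & 1) << (n - 1))
    return (t >> 1) | (t & (1 << (n - 1)))

def prng(seed, s, n=18, count=16):
    """Step 1 of the pseudo-code: while the looping condition is
    true (here, a simple count), execute +*; each resulting value
    of T is one term of the sequence."""
    t, out = seed, []
    for _ in range(count):   # the "looping condition"
        t = pstar(t, s, n)
        out.append(t)
    return out

seq = prng(seed=0x5, s=0x1ff3)   # seed and S value as in FIG. 9 (assumed)
```

Note that a zero seed yields the all-zero sequence, which is why the caveats below require a non-zero initial value in T.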

A +* Random Number Generator in Forth Code

FIG. 9 is a code listing for random number generation (line numbers are added for reference, and the code here has elements of both ANS Forth and Venture Forth™). A seed in T and a value in S generate approximately 2^17 (131,058) pseudorandom numbers in a separate file named “random.log.” The following describes this with reference to the line numbers and provides specific discussion.

    • Line Description
    • 1. Comment: for the name of the file.
    • 2. Loads the compiler/simulator.
    • 3. White space for coding style.
    • 4. Comment: This section begins the code that will be executed on the host machine up until lines 32-37 which are executed on the target machine. The following colon definitions are defined next to assist with file handling.
    • 5. Compiles to the host machine, not the target machine.
    • 6. White space for coding style.
    • 7. Creates a value (like a variable, but returning a value instead of its address) used for opening the file; fid is short for file id.
    • 8. Starting with create, names a location in the host Forth dictionary that holds a carriage return and line feed to send out to a file.
    • 9. White space for coding style.
    • 10. Standard Forth way to open a file.
    • 11. Creates the file random.log, where r/w means read/write; create-file creates a file from scratch, overwriting any existing file, and returns error code 0 if successful and 1 if not successful; throw then discards the error code.
    • 12. White space for coding style.
    • 13. Standard Forth way to close a file.
    • 14. Closes the file we already opened, which returns an error code, either 0 or 1; stores 0 to fid so we don't use the error code 1 later by accident.
    • 15. White space for coding style.
    • 16. Standard Forth way to write to a file. Converts a number to a string and writes it to a file.
    • 17. Numbers written to the file random.log will be in base decimal.
    • 18. Uses the standard Forth picture numeric output operators to format a field width in the file where the values are going to be written and writes the value to the file, returning an error code again either 0 or 1.
    • 19. Puts the string on the stack that contains carriage return and line feed and writes this to the file so the next value is written on a new line, throw gets rid of error code.
    • 20. White space for coding style.
    • 21. Standard Forth way to grab the value from the T register on the target machine and use the previously defined colon definitions to write the value in T to the file random.log.
    • 22. Indicates we are working with hexadecimal values. This actually opens the file random.log, puts a zero on the host stack.
    • 23. Counts up from 0 by 1 to 2 inclusive, that is, 0, 1, and 2. Next, two loops are executed: a do loop followed by a begin loop. Then step moves through one cycle; the value of T in the data stack of the target machine has the variable name t on the host machine; tuck is like a swap followed by an over in Venture Forth™. This code cycles until the value held by the variable t differs from what is on the host stack.
    • 24. Swaps the two values on the stack of the host machine, put a 0 on the stack of the host machine and then begin another loop.
    • 25. Keep looping until t changes; step is the word from the simulator that simulates one cycle.
    • 26. Once you get a t value that is different from what was on the host stack, copy the value and then write the value to the file random.log.
    • 27. Keeps looping until 131058 passes have been completed, as indicated on line 39. The loop here corresponds with the do in line 23, which also increments the loop counter.
    • 28. The drop here indicates that there was one last item left on the host stack and to discard the last item and then close the file random.log.
    • 29. White space for coding style.
    • 30. Comment: Indicates the end of file handling. The following code, except for line 39, is executed on the target machine.
    • 31. White space for coding style.
    • 32. Indicates the node location (node 0) where the following code will be executed. The following code being executed is on the target machine.
    • 33. Colon definition for running the code inside node 0.
    • 34. Puts two hexadecimal values on the data stack. The value $1ff3 is placed in S and the value $5 is placed in T at the end of this code.
    • 35. This next instruction begins executing a loop.
    • 36. The first instruction +* executes in exactly the same way as explained for the general case above. The second instruction returns the loop back to line 35.
    • 37. This line closes the code that will be executed within the node designated in line 32, that is, the code executed in node 0. Additionally, this line closes the colon definition provided in line 33, which executes within node 0.
    • 38. White space for coding style.
    • 39. The value 131058 is the total number of values that will be written to the output file random.log.
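For readers without the toolchain, the host/target interplay of FIG. 9 can be approximated with a short Python sketch. This is not the Forth listing itself; the seed $5, the S value $1ff3, the count, and the +* high-bit behavior are assumptions drawn from the description above.

```python
def pstar(t, s, n=18):
    """+* model (assumed high-bit behavior: carry into the HB on
    an add, arithmetic right shift otherwise)."""
    mask = (1 << n) - 1
    if t & 1:
        r = t + s
        return ((r & mask) >> 1) | (((r >> n) & 1) << (n - 1))
    return (t >> 1) | (t & (1 << (n - 1)))

def write_random_log(path, seed=0x5, s=0x1ff3, count=131058, n=18):
    """Analogue of FIG. 9: run the +* loop on the modeled target
    and log each new value of T, one decimal value per line, to
    the named file (as lines 10-28 do on the host)."""
    t = seed
    with open(path, "w") as f:
        for _ in range(count):
            t = pstar(t, s, n)
            f.write(f"{t}\n")

write_random_log("random.log", count=1000)  # smaller count for a quick run
```

Unlike the Forth listing, which steps the simulator until the value in T changes before logging, this sketch logs after every +* iteration; in this model T changes on every step anyway.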

Some Caveats on Producing Pseudorandom Numbers Using +*

The just described pseudo-code algorithm for producing pseudorandom numbers is much simpler than the previously described algorithm for multiplication. Yet, as simple as this algorithm appears, and it is quite simple, a few important caveats merit further discussion.

In the hardware used for present embodiments of the random number generation process 300, e.g., the SEAforth™ 24a device, two cycles are needed to ensure proper execution of the +* op-code. Giving +* only a single cycle to execute makes the behavior in the HB of T unpredictable. The +* requires two cycles, the first for the add and the second for the shift, to produce proper/expected results. Furthermore, it would generally be hard to tell in which cases the single cycle of +* would have deterministic behavior in the HB of T. By simply preceding each +* operation with a . (nop) operation, however, this problem is solved.

This particular method of pseudorandom number generation has no restriction as to the length of sequences which can be produced. In fact, the only restrictions come as a result of the number of bits which can be utilized to represent the values in T and S. If the machine can be utilized in such a way that other registers can be made available to assist T and S with the bit length of the values they contain, this method is able to produce a pseudorandom number whose bit length is restricted only by the number of bits made available to represent T. Although it has already been noted that the initial value placed in T must be non-zero, the reasoning behind this has not been explained. To understand this, assume that T does contain the value 0 prior to an execution of the pseudo-code algorithm for pseudorandom number generation. The contents of S are not important, for the sake of this example. An initial value of 0 in T signifies that every bit in T is a 0. An iteration of +* now simply produces another value of 0 in T, because the LSB of T prior to execution is 0 and this will result in a bit shift to the right of all the bits in T where the highest bit is a 0. Another iteration or any number of iterations of +* will simply produce the same result. Thus, it is very important that the initial value placed in T be non-zero or a very uninteresting sequence is produced. Note, however, that a value of 0 placed in T prior to an execution of the multiplication pseudo-code algorithm will not result in any iterations of +*, but the correct value of the multiplication is in T even though the value in T does not change.

It has also already been noted that the initial value placed in S must be non-zero, and the reasoning behind this has also not been explained. To understand this, assume that T contains a non-zero value and that S does contain the value 0 prior to an execution of the pseudo-code algorithm for pseudorandom number generation. The effect of a +* on T is simply a bit shift to the right. If the highest bit in T is a 1 prior to any +* iterations, then ultimately T will settle to the value that is associated with all bits being set. If the highest bit in T is a 0 prior to any +* iterations, then ultimately T will settle to the value that is associated with all bits not being set.

The value which is placed in T prior to the execution of the pseudo-code pseudorandom number generation algorithm is the seed of the algorithm. Assume that the value in S does not change during the following. An initial value of t1 in T produces a sequence Q1. Using an initial value of t2 in T, where t1 is not equal to t2, produces a sequence Q2. That is, for each ti a sequence Qi is produced, where every Qi is a subset of a sequence Q that is the superset of all possible sequences that can be produced. Said another way, different seeds produce different sequences, and using the same seed twice will produce exactly the same sequence. The possible sequences produced thus depend greatly on the value initially placed in T.

The value in T that is produced during each successive +* iteration is greatly dependent on the value in S. Due to the fact that the value in S is fixed during the +* iterations, it is extremely important that an appropriate value be chosen. What is meant by “appropriate” depends on the characteristics that are desired in the sequence that is produced. At present, the most appropriate value for S is debatable, and thus is a matter of design choice by a programmer. From the inventors' perspective, however, an optimal value for S can be chosen through brute force testing on a target device (e.g., on the SEAforth™ 24a device).
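Such brute-force testing can be sketched as an exhaustive search over candidate S values at a small bit width. The figure of merit used here (cycle length from a fixed seed) and the +* model's high-bit behavior are illustrative assumptions, not the inventors' actual test procedure.

```python
def pstar(t, s, n):
    """+* model (assumed high-bit behavior, as discussed above)."""
    mask = (1 << n) - 1
    if t & 1:
        r = t + s
        return ((r & mask) >> 1) | (((r >> n) & 1) << (n - 1))
    return (t >> 1) | (t & (1 << (n - 1)))

def cycle_length(seed, s, n):
    """Number of distinct states in the cycle the sequence
    eventually falls into for a given seed and S value."""
    seen, t, i = {}, seed, 0
    while t not in seen:
        seen[t] = i
        t = pstar(t, s, n)
        i += 1
    return i - seen[t]

# Exhaustive scoring of every candidate S on a small width
# (n = 8 keeps the search fast; illustrative only):
n = 8
best_s = max(range(1, 1 << n), key=lambda s: cycle_length(1, s, n))
```

A real search on the target device would score candidates against the desired sequence characteristics (e.g., statistical test results) rather than cycle length alone.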

Improving Sequence Quality

A programmer who takes into account all of the just recited caveats will likely still not have created a pseudorandom sequence of numbers. One of the problems associated with this approach to producing pseudorandom numbers is determining where the pseudorandom sequence actually begins. Is the first pseudorandom number the result of the first +* iteration? Or does it occur as the result of a later iteration? Or, indeed, does this method truly produce a sequence of pseudorandom numbers at all?

An embodiment of this method of producing pseudorandom numbers is presented in FIG. 9 and, from the inventors' perspective based on results from the various NIST tests, this alone will not produce pseudorandom sequences as defined by the NIST tests on an 18-bit machine like the SEAforth™ 24a. This is not to say that a sequence thus produced will not pass the NIST tests at a lower confidence level, or that a larger than 18-bit machine will not produce a better result. From the inventors' perspective, the best known embodiment of this invention will yield a sequence that produces nearly every value from 0 to approximately 2^n − 1, not the desired pseudorandom number sequence. However, the sequences produced are still useful and their quality can be improved. Additionally, such improvements will also improve the quality of larger-than-18-bits-per-term sequences.

There are many techniques which can be utilized to improve the quality of sequences produced by the +* op-code. For example, probably the most obvious improvement is to increase the bit widths of T and S. This will increase the length of the sequence produced, as well as increasing the bit width of each term produced in the sequence. This alone could yield more favorable results when analyzed using the NIST tests. Additionally, this will improve sequence quality when any of the following techniques are applied.

The embodiment of the shift-add mechanism 100 in FIG. 9 considers the value in T after each +* iteration to be a pseudorandom number in a sequence. One simple improvement to the sequence generated is to mask certain bits of each term in such a way that the bit length of each term in the pseudorandom sequence has a reduced bit length from the value placed in T after each iteration of the +* op-code. The values produced from this masking can potentially improve the overall quality of the sequence.

Another technique for improving the sequence of generated pseudorandom numbers is similar to that of masking certain bits in the output. An exclusive or (XOR), with a static or dynamic value, can be applied to each output value in the sequence. This would not decrease the length of the values produced.
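Both post-processing techniques, masking and XOR whitening, can be sketched in a few lines of Python; the 8-bit retained width and the 0x5A mask below are arbitrary illustrative choices, not values from the description.

```python
def postprocess(seq, keep_bits=8, xor_mask=0x5A):
    """Mask each term of the sequence down to keep_bits bits, then
    XOR it with a static whitening value. A dynamic variant could
    vary xor_mask from term to term instead."""
    lo = (1 << keep_bits) - 1
    return [(v & lo) ^ xor_mask for v in seq]
```

Masking reduces the bit length of each term, as described above; the XOR step preserves that length while permuting the values.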

Of course, there are certainly other techniques that are not listed here that may also improve the quality of a sequence produced. These other methods, like those presented here, are not an integral component of the underlying invention but, rather, are to be appreciated as lying within the scope of the overall invention.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and that the breadth and scope of the invention should not be limited by any of the above described exemplary embodiments, but should instead be defined only in accordance with the following claims and their equivalents.


Claims

1. A system for pseudorandom number generation, comprising:

a processor having a first memory to hold a first value and a second memory to hold a second value; and
a logic to perform a +* operation while a looping condition is true.

2. The system of claim 1, wherein:

said +* operation is an operational code of the processor.

3. The system of claim 1, wherein:

said first memory includes a first set of multiple registers in said processor, or said second memory includes a second set of multiple registers in said processor, or both.

4. The system of claim 1, further comprising:

a logic to mask one or more of the bits of said first value.

5. The system of claim 1, further comprising:

a logic to exclusive or said first value with a third value.

6. The system of claim 5, further comprising:

a logic to vary said third value dynamically between different instances of said performing said +* operation.

7. The system of claim 1, wherein:

said logic to perform prefaces each said +* operation with a . (nop) operation.

8. A method for pseudorandom number generation in a processor having first and second memories, comprising:

placing a first value in the first memory;
placing a second value in the second memory;
while a looping condition is true, performing a +* operation.

9. The method of claim 8, wherein:

said +* operation is an operational code of the processor.

10. The method of claim 8, wherein:

the first memory includes a first set of multiple registers in the processor, or the second memory includes a second set of multiple registers in the processor, or both.

11. The method of claim 8, further comprising:

after one or more iterations of said performing, masking one or more of the bits of said first value.

12. The method of claim 8, further comprising:

after one or more iterations of said performing, exclusive or'ing said first value with a third value.

13. The method of claim 12, further comprising:

varying said third value dynamically between different said iterations.

14. The method of claim 8, further comprising:

performing a . (nop) operation prior to each said +* operation.
Patent History
Publication number: 20090083350
Type: Application
Filed: Apr 18, 2008
Publication Date: Mar 26, 2009
Inventor: Michael B. Montvelishsky (Burlingame, CA)
Application Number: 12/148,511
Classifications
Current U.S. Class: Random Number Generation (708/250)
International Classification: G06F 7/58 (20060101);