Encryption-based security protection for processors
Methods, systems, and arrangements enable increased security for a processor, including by implementing block encryption. The block may include multiple instructions and/or operations to be executed by the processor. The block may also include multiple bytes that are read into the processor byte by byte. Once a block-wide encrypted buffer has been filled from an external memory source, the block may be decrypted using an encryption algorithm (e.g., the Data Encryption Standard (DES), the triple DES, etc.), and the decrypted block may be forwarded to a decrypted buffer. The decrypted block may thereafter be moved into a cache, which may optionally be organized into an equivalent block width (e.g., for each way of a multi-way cache). Therefore, when a processing core/instruction decoder needs a new instruction, it may retrieve one from the cache, directly from the decrypted buffer, or from external memory (e.g., after undergoing decryption).
More than one reissue application has been filed. The instant application, application Ser. No. 16/209,263, is a continuation reissue application of Reissue application Ser. No. 14/964,424, which issued on Jul. 17, 2018 as Reissue U.S. Pat. No. RE46,956 and which was a reissue of application Ser. No. 09/932,247, filed on Aug. 16, 2001, now U.S. Pat. No. 6,996,725.
TECHNICAL FIELD OF THE INVENTION
The principles of the present invention generally relate to processors, and more particularly, by way of example but not limitation, to security microcontrollers with encryption features.
BACKGROUND OF THE INVENTION
Electronic devices are a vital force for creating and perpetuating the engine that drives today's modern economy; concomitantly, electronic devices improve the standard of living of people in our society. Furthermore, they also play an important role in providing entertainment and other enjoyable diversions. A central component of many of these electronic devices is the processing unit. Processing units may be broadly divided into two categories: (i) processors used as central processing units (CPUs) of (e.g., personal) computers and (ii) embedded processors (a.k.a. microcontrollers, microprocessors, etc.) (e.g., processors operating in cars, microwaves, wireless phones, industrial equipment, televisions, other consumer electronic devices, etc.). Although CPUs of computers garner the lion's share of reports and stories presented by the popular press, they account for less than 1% of all processors sold, while microcontrollers account for greater than 99% of all processors sold. Consequently, significant time and money are also expended on research and development to improve the efficiency, speed, security, feature set, etc. of microcontrollers. These aspects of microcontrollers may be improved, individually or in combination, by improving one or more of the individual aspects of which microcontrollers are composed. Exemplary relevant aspects of microcontrollers include, but are not limited to: processing core, memory, input/output (I/O) capabilities, security provisions, clocks/timers, program flow flexibility, programmability, etc.
Many uses of microcontrollers involve a need for security. The security may be related to the executable code, the data being manipulated, and/or the functioning of the microcontroller. For example, it is important to guard against the possibility of someone monitoring program code and deciphering communication protocols and/or information that are communicated between an automated teller machine (ATM) and the computer at an associated bank. Criminals, hackers, and mischief makers continually attempt to thwart and break existing security measures. Conventional, relatively relaxed and crackable, standards and approaches lead to security deficiencies because it is fairly easy to decipher or to access conventional systems and interfaces, and then control or otherwise jeopardize the microcontroller's mission. One technique for cracking a microcontroller's security is to monitor information entering the microcontroller and information exiting the microcontroller in order to effectively reverse engineer the program code by revealing what information may be effectuating the execution of a given instruction. It is therefore apparent that newer and stricter security measures are needed to safeguard against information theft, corruption, and/or misuse.
SUMMARY OF THE PRESENT INVENTION
The deficiencies of the prior art are overcome by the methods, systems, and arrangements of the present invention. For example, as heretofore unrecognized, it would be beneficial to utilize block decryption after several reads from external memory. In fact, it would be beneficial if multiple executable instructions were decrypted simultaneously and accessible to a core processor/instruction decoder via, e.g., a decryption buffer, a cache, etc.
In certain embodiment(s), multiple instructions are loaded into a buffer from an external memory. The multiple instructions are decrypted at least substantially simultaneously and then made available to a processing core and/or instruction decoder. An instruction desired by the processing core/instruction decoder may be routed directly thereto, or all or a portion of the multiple instructions may be first transferred to a cache. Also in certain embodiment(s), “n” bytes (e.g., an exemplary eight bytes) of encrypted information may be loaded byte-by-byte (or in chunks of multiple bytes) into an “n”-byte-wide encrypted buffer from an external memory. The “n” encrypted bytes may then be jointly decrypted using a designated encryption/decryption scheme, after which they may be forwarded to an “n”-byte-wide decrypted buffer. A processing core/instruction decoder, possibly in conjunction with a memory management unit (MMU)/memory controller, determines and requests a program instruction address. If the program instruction address hits in an associated instruction cache, then the instruction byte may be retrieved therefrom. If the program instruction address misses the cache but hits in the decrypted buffer, the requested instruction byte may be forwarded immediately to the processing core/instruction decoder while the “n” bytes of the decrypted buffer are moved into the cache at the appropriate location approximately simultaneously. The buffers and decryptor, possibly in conjunction with the MMU/memory controller, may also be configured for prefetching.
A more complete understanding of the methods, systems, and arrangements of the present invention may be obtained by reference to the following Detailed Description when taken in conjunction with the accompanying drawings wherein:
The numerous innovative features of the present application are described with particular reference to the illustrated exemplary embodiments. However, it should be understood that this class of embodiments provides only a few examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present invention do not necessarily delimit any of the various aspects of the claimed invention. Moreover, some statements may apply to some inventive features, but not to others.
Security of systems including processors is problematic in that it can be extremely difficult to protect against program copying, corruption, and/or misuse. Security measures that may be taken include using block encryption of code and/or data, self-destruct inputs, or programmable countermeasures against attacks. External encryption sequencing is a critical aspect of security when fetching external data or instructions stored in an external memory. The information stored in the external memory may be encrypted using any of many encryption approaches and standards, such as, for example, the Data Encryption Standard (DES) algorithm, the triple DES algorithm, the Advanced Encryption Standard (AES) algorithm, etc. While byte-length instructions may be encrypted individually, cracking such encryption is relatively easier because a would-be hacker monitoring the processor may track input instructions and output results on something of a one-to-one basis. Block encryption, on the other hand, may be performed on multiple instructions/bytes simultaneously (e.g., a block of eight instructions composed of eight bytes). Employing block encryption, however, may result in processor stalls because multiple encrypted instructions may need to be loaded and decrypted before a decrypted instruction can be provided to the processing core/instruction decoder. An en/decryption scheme may be designed so as to rely on caching and prefetching (e.g., of blocks of instructions) to reduce or minimize the occurrence of stalls of the microcontroller. Consequently, a well-designed and well-implemented microcontroller in accordance with the principles of the present invention may execute even large loops and jumps without having to stall. In short, improvements to certain security aspects of microcontrollers may be accomplished by modifying the memory, I/O capabilities, security provisions, interrelationships therebetween, etc. of the microcontrollers.
Referring now to
Referring to
It should be noted that in certain embodiment(s) the decrypted information, or at least part thereof, may be forwarded directly from the decrypted buffer 225 to the CPU/instruction decoder 235, thus bypassing the cache 230 for at least that decrypted information. In such embodiment(s), the decrypted information may optionally be substantially simultaneously (e.g., within predictable time delays/lags of circuit elements) loaded into the cache 230. Also, it should be understood that in certain embodiment(s) the microcontroller 100 of
Referring to
Referring to
Referring to
(1) A hit occurs if there is a match and the corresponding tag is valid; or
(2) A miss occurs if there is no match or if there is a match but the tag is invalid. A hit enables the corresponding instruction or operand byte to be used for execution. A miss results in a stall of the program execution, waiting for the needed block to be loaded and decrypted. Address generation logic 415 provides the external addresses, for example, to program memory for external instruction fetch. The internal address bus 240A provides the addresses for accessing the cache 230 on reads and writes.
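By way of example but not limitation, and purely as an illustrative sketch rather than a definition of the invention, the hit/miss determination just described might be modeled in C as follows. The 13-bit tag and the bit positions mirror the exemplary field widths discussed later in this description, and the structure and function names are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the hit/miss rules above: a hit requires both a tag
 * match and a valid tag; anything else is a miss that stalls execution until
 * the needed block is loaded and decrypted.  Field widths follow the
 * exemplary 13-bit tag (address bits [21:9]) used elsewhere in this text. */
typedef struct {
    uint16_t tag;     /* stored high-order address bits [21:9] */
    bool     valid;   /* tag contents are meaningful only when set */
} cache_tag_t;

static inline bool cache_hit(const cache_tag_t *entry, uint32_t pc)
{
    uint16_t pc_tag = (uint16_t)((pc >> 9) & 0x1FFF);   /* PC bits [21:9] */
    return entry->valid && (entry->tag == pc_tag);      /* rule (1) vs. rule (2) */
}
```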
External data accesses are transferred through the data encryptor 420. The block size for external data memory encryption may be an exemplary one byte, but multiple-byte blocks may alternatively be implemented. If the corresponding encryption enable bit of a particular data memory chip is cleared to a logic 0, for example, the data may be transferred directly to/from the accumulator and the data memory chip. Thus, no encryption/decryption need be involved. The memory controller 405 may be responsible for coordinating the internal address bus 240A and the internal data bus 240D activities, the cache/tag access and update, program block decryption, and external data/program memory accesses.
Program decryption, on the other hand, may be effectuated on blocks of 64 bits through the program decryptor 400. This program decryptor 400 may be based, for example, on the full 16-round DES algorithm and may also be capable of supporting single or triple DES operations. The microcontroller 100 may, for example, be designed to decrypt a 64-bit block in five machine cycles for single DES operations and in seven machine cycles for triple DES operations. The program decryptor 400 may include the following exemplary units and/or aspects:
- (1) A 64-bit encrypted buffer;
  - (i) Capable of interfacing a byte-wide data bus (e.g., an embodiment of the bus 210 of FIGS. 2A and 2B) directly or via the Data I/O buffer 215′;
  - (ii) Data transfer between the encrypted buffer and the byte-wide data bus is 8-bits wide; and
  - (iii) Capable of loading encrypted data to a decryption unit in 64-bit-wide blocks; and
- (2) A 64-bit block decryption unit;
  - (i) Capable of unloading decrypted data to the cache 230 in 64-bit-wide blocks (e.g., via a decrypted buffer);
  - (ii) Unloading of data to internal data bus 240D is 8-bits wide; and
  - (iii) Supports direct execution by a CPU/instruction decoder in ROM loader mode.
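As a non-authoritative sketch only, the interplay of the byte-wide bus, the 64-bit encrypted buffer, and the decryption unit listed above might be modeled as follows. The function des_decrypt_block() stands in for whatever single or triple DES engine is used, and all names are hypothetical assumptions rather than actual interfaces of the microcontroller.

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_BYTES 8   /* 64-bit block handled by the program decryptor */

/* Placeholder for the single/triple DES engine named in the text. */
extern void des_decrypt_block(const uint8_t in[BLOCK_BYTES],
                              uint8_t out[BLOCK_BYTES]);

typedef struct {
    uint8_t data[BLOCK_BYTES];
    int     fill;               /* bytes received so far over the 8-bit bus */
} encrypted_buffer_t;

/* Accepts one byte from the byte-wide data bus.  Returns true when a full
 * 64-bit block has been decrypted into 'decrypted' (the decrypted buffer). */
bool encrypted_buffer_push(encrypted_buffer_t *buf, uint8_t bus_byte,
                           uint8_t decrypted[BLOCK_BYTES])
{
    buf->data[buf->fill++] = bus_byte;
    if (buf->fill < BLOCK_BYTES)
        return false;                        /* keep filling, byte by byte */
    des_decrypt_block(buf->data, decrypted); /* 64-bit-wide load/unload */
    buf->fill = 0;
    return true;
}
```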
The DES algorithm, which may be employed in accordance with the present invention, is based on the principle of building up a sequence of simple operations to form an overall complex operation, each round of operation providing very little security if separated. The basic operation of each round consists of some permutations and substitutions, determined by a subset of the key bits, performed on one half of the data. In this operation, one-half may be encrypted and the other half may continually pass through unchanged, providing invertible paths for decryption. The operation can be performed using the following equations:
Given initial input data $M$ divided into left and right halves $(L_0, R_0)$, $M$ is transformed after the $i$th round into $M_i = (L_i, R_i)$ defined by

$$L_i = R_{i-1} \qquad (1)$$

$$R_i = L_{i-1} \oplus f(R_{i-1}, K_i) \qquad (2)$$

where $K_i$ is the encryption subkey for round $i$ and $\oplus$ denotes modulo-2 addition. This is easily invertible if the key is known since, given $(L_i, R_i)$, one may recover $(L_{i-1}, R_{i-1})$ by

$$L_{i-1} = R_i \oplus f(R_{i-1}, K_i) = R_i \oplus f(L_i, K_i) \qquad (3)$$

$$R_{i-1} = L_i \qquad (4)$$
The f function itself need not be invertible; any desired f function may be used. However, the f function used in the DES is designed to provide a high level of security by specially chosen base-2 values in the so-called S-boxes.
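A minimal sketch in C of the Feistel round defined by equations (1)-(4), assuming a 32-bit half-block as in DES; feistel_f() is a hypothetical placeholder for the f function (expansion, S-box lookups, and permutation) and is not part of the original text.

```c
#include <stdint.h>

/* Placeholder for the (not necessarily invertible) DES f function. */
extern uint32_t feistel_f(uint32_t half, uint64_t subkey);

/* One forward round: L_i = R_{i-1};  R_i = L_{i-1} XOR f(R_{i-1}, K_i). */
static void round_forward(uint32_t *L, uint32_t *R, uint64_t K)
{
    uint32_t Li = *R;                        /* equation (1) */
    uint32_t Ri = *L ^ feistel_f(*R, K);     /* equation (2) */
    *L = Li;
    *R = Ri;
}

/* One inverse round, recovering (L_{i-1}, R_{i-1}) from (L_i, R_i). */
static void round_inverse(uint32_t *L, uint32_t *R, uint64_t K)
{
    uint32_t Rprev = *L;                     /* equation (4) */
    uint32_t Lprev = *R ^ feistel_f(*L, K);  /* equation (3): f(L_i) = f(R_{i-1}) */
    *L = Lprev;
    *R = Rprev;
}
```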
The 16 subkeys can be generated from the 56 key bits, which may originally be generated by a random number generator. (An exemplary random number generator that may be employed along with the principles of the present invention is described in U.S. Nonprovisional application for patent Ser. No. 09/879,686, filed on Jun. 12, 2001, and entitled “IMPROVED RANDOM NUMBER GENERATOR”; U.S. Nonprovisional application for patent Ser. No. 09/879,686 is hereby incorporated by reference in its entirety herein.) The subkeys are generated by first applying an initial 56-bit permutation to the original 56 key bits and then using the per-round shifting sequence (1,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1). A permutation of length 48 is applied after each shift to pick out the subkeys.
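The subkey derivation can be sketched as follows, under the standard DES convention (an assumption here, not stated above) that the 56 permuted key bits are held as two 28-bit halves that are rotated left each round; permute_pc2() is a hypothetical stand-in for the length-48 selection permutation.

```c
#include <stdint.h>

/* Per-round left-shift amounts for the 16 DES rounds. */
static const int SHIFTS[16] = {1,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1};

/* Hypothetical stand-in for the 48-bit selection permutation. */
extern uint64_t permute_pc2(uint32_t c28, uint32_t d28);

static uint32_t rotl28(uint32_t half, int n)
{
    return ((half << n) | (half >> (28 - n))) & 0x0FFFFFFFu;
}

/* Derives the 16 round subkeys from the two 28-bit key halves. */
void derive_subkeys(uint32_t c28, uint32_t d28, uint64_t subkeys[16])
{
    for (int round = 0; round < 16; round++) {
        c28 = rotl28(c28, SHIFTS[round]);
        d28 = rotl28(d28, SHIFTS[round]);
        subkeys[round] = permute_pc2(c28, d28);  /* pick out 48 subkey bits */
    }
}
```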
A predefined permutation is used to permute the 64 data bits before dividing them into two halves. The f function is performed on one half of the input data during each round of processing. These data bits are mapped into an 8×6 diagram as the following:
Modulo-2 addition of the above 48 numbers with the subkey bits provides the reference for the result of the f function from the 4×16 S-box. The 8 S-boxes can be referenced from the Data Encryption Standard (DES). This round of encryption is completed by modulo-2 addition of the resulting base-2 numbers with the other half of the data bits after a predefined permutation.
This process is repeated through all 16 rounds with a different set of subkeys. A final permutation then takes place which is the inverse of the initial permutation.
The security of a system having a processor and a memory, for example in accordance with certain embodiment(s) of the present invention, can be further improved if the encryption/decryption keys are modified by, for example, the address of the block (or other level of addressable granularity such as the byte, word, page, etc.) that is being encrypted/decrypted. Such a modification effectively creates a different key for each block (or other level of granularity) that is being encrypted/decrypted. Advantageously, if the key is dependent on the block address, no two (2) blocks can be swapped with each other or otherwise moved. When blocks can be swapped or otherwise moved, then another avenue of attack is available to a would-be hacker because the would-be hacker is able to change the order of execution of the system. Limiting or blocking this avenue of attack using the modification scheme described in this paragraph can be accomplished in many ways. By way of example but not limitation, this modification may be effectuated by (i) having the relevant address “xor”ed with the key of (or with more than one key if, e.g., the triple DES is selected as) the encryptor/decryptor algorithm, (ii) having the relevant address added to the key or keys, (iii) having the relevant address affect the key(s) in a non-linear method such as through a table lookup operation, (iv) having one or more shifts of the relevant address during another action such as that of (i) or (ii), some combination of these, etc.
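Option (i) above, for example, might look like the following illustrative sketch, in which the relevant block address is simply XORed into a 56-bit key; the width masking and the function name are assumptions made only for this example.

```c
#include <stdint.h>

/* Derive a per-block key so that identically encrypted blocks cannot be
 * swapped or relocated: here the block address is XORed into the base key
 * (option (i)); adding the address, a table lookup, or shifting the address
 * first (options (ii)-(iv)) would be drop-in alternatives. */
static uint64_t per_block_key(uint64_t base_key, uint32_t block_address)
{
    return (base_key ^ (uint64_t)block_address) & 0x00FFFFFFFFFFFFFFULL;
}
```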
While the modification described in the above paragraph can cause code under attack to become garbled and useless, the following modification described in this and the succeeding paragraph can recognize that code is or has become garbage and, optionally, take an action such as destroying the code, destroying or erasing one or more keys, destroying part(s) or all of the chip, etc. It should be noted that either modification may be used singly or together in conjunction with certain embodiment(s) in accordance with the present invention. For this modification, an integrity check can be added to the system so as to ensure that the integrity of the code has not been compromised by an attacker. Such an integrity check may be accomplished in many ways. By way of example but not limitation, this modification may be effectuated by fetching a checksum byte or bytes after a block of code is fetched. This fetched checksum may be associated (e.g., by proximity, addressability, a correspondence table or algorithm, etc.) with the block of code previously fetched. It should be noted that the addressing order/location, as well as the fetching order, of the fetched block and the fetched checksum may be changed. It is possible, for example, to use different busses and/or different RAMs to store the encrypted code and the checksum(s), but in a presently preferred embodiment, each is stored in different section(s) of the same RAM (but may alternatively be stored in the same section).
After the block of encrypted code is fetched and decrypted, it may be latched into a checksum calculation circuit (or have a checksum operation performed in the same “circuit” as that of the decryption). A checksum of the decrypted code is calculated by the checksum circuit/operation. The calculated checksum may be compared with the fetched checksum. If they differ, then the block of code may be considered to have failed the integrity check. The checksum may be calculated in many different ways. By way of example but not limitation, the checksum (i) may be an “xor” of the fetched block of information, (ii) may be a summation of the fetched block of information, (iii) may be a CRC of the fetched block of information, (iv) may be some combination thereof, etc. If the fetched block fails the checksum comparison, then various exemplary actions may be taken by the system/chip. By way of example but not limitation, actions that may be taken responsive to a failed checksum integrity check include: (i) a destructive reset that may, for example, result in the clearing of internal key information and/or internal RAMs, (ii) the start of an evasive action sequence, (iii) an interrupt that allows the system program/chip to take action, (iv) some combination thereof, etc.
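A hedged sketch of such an integrity check, assuming the simple XOR checksum of option (i), an 8-byte block, and a hypothetical destructive_reset() hook for the failure response:

```c
#include <stdbool.h>
#include <stdint.h>

#define BLOCK_BYTES 8

/* Hypothetical failure response, e.g., clearing keys and internal RAMs. */
extern void destructive_reset(void);

/* Option (i): XOR all bytes of the decrypted block together. */
static uint8_t xor_checksum(const uint8_t block[BLOCK_BYTES])
{
    uint8_t sum = 0;
    for (int i = 0; i < BLOCK_BYTES; i++)
        sum ^= block[i];
    return sum;
}

/* Compare the calculated checksum with the checksum fetched alongside the
 * block; on a mismatch, take one of the exemplary defensive actions. */
bool verify_block_integrity(const uint8_t decrypted[BLOCK_BYTES],
                            uint8_t fetched_checksum)
{
    if (xor_checksum(decrypted) == fetched_checksum)
        return true;
    destructive_reset();
    return false;
}
```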
The microcontroller 100 of
In certain embodiment(s), the program decryptor 400 may be accessed by loaders via specific program block decryption registers in the SFR. The loaders can select either an encryption or a decryption operation by setting/clearing the relevant bits when using the program decryptor 400. The data encryptor 420 may perform byte encryption and decryption on data transferred to and from external memory (not explicitly illustrated in
Referring to
Tags 410 may be made invalid on reset, and the, e.g., byte-wide internal address bus 240A may be initialized to the first external program memory block with byte offset 0h. After a reset, the memory controller 405 may begin fetching program code from the external memory. Eight consecutive external addresses are generated by the address generator 415 (illustrated in
The Least Recently Used (LRU) bit 520 of each tag 410 is used by the memory controller 405 to determine which block is replaced upon a write to the cache 230. An access to a byte in a block clears/sets that block's LRU bit 520 and sets/clears the corresponding block's LRU bit 520 in the other cache way 230a or 230b at the same index address. An exemplary cache block replacement policy is as follows:
(1) Replace the invalid block. If both tags 410 are invalid, replace the block in Way 0 230a; and
(2) If there are no invalid blocks, replace the LRU block. Replacing a valid block necessitates reloading that particular block if it is accessed again. A newly written block is set to be valid and has its LRU bit(s) updated.
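The replacement policy and the LRU-bit update can be summarized by the following sketch; treating a set LRU bit as marking the least-recently-used way is an assumption for this example, since the text only requires that the two ways' bits be kept complementary on each access.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t tag;
    bool     valid;
    bool     lru;    /* assumed convention: set means "replace me first" */
} way_tag_t;

/* Exemplary replacement policy: prefer an invalid block (Way 0 if both are
 * invalid); otherwise replace the LRU block. */
int select_victim_way(const way_tag_t *way0, const way_tag_t *way1)
{
    if (!way0->valid) return 0;
    if (!way1->valid) return 1;
    return way0->lru ? 0 : 1;
}

/* On an access, mark the touched way most recently used and flip the
 * corresponding bit of the other way at the same index. */
void touch_way(way_tag_t *accessed, way_tag_t *other)
{
    accessed->lru = false;
    other->lru    = true;
}
```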
The MMU is responsible for managing and operating the cache 230, the program and data encryptors 400 and 420, and the accessing of external memory 205. The MMU may include the cache control, the program decryptor 400, and the external address generator 415. The memory controller 405 may provide the following exemplary control functions:
- (1) Read/write access to the cache 230;
- (2) Read/write access to the external memory 205;
- (3) Tag 410 accessing and updating;
- (4) Byte-wide bus sequencing and timing control;
- (5) Address generation for external memory 205;
- (6) Loading and unloading of the program decryptor 400; and
- (7) Hit and miss (stall) signal generation.
The memory controller 405 controls the cache 230 read/write access by monitoring the PC address activities and comparing the PC address with the tags 410. A hit enables a cache read; a miss causes a stall of the CPU clocks while the memory controller 405 fetches the needed block of code from the external memory 205 (if it is not already somewhere in the pipeline, such as in the program decryptor 400). The requested code block is placed into one of the two appropriate index locations according to the active replacement policy. The corresponding tag 410a or 410b is also updated while normal program execution resumes.
The internal address is latched by the address generator 415 during a data transfer instruction or during a complete miss (block is also not in pipeline). Otherwise, the address generator 415 simply fetches the next sequential block after passing a block to the decryptor 400D. The internal address provides the address to the cache 230a or 230b and tags 410a and 410b. As illustrated in
With respect to specific exemplary implementation(s), external memory 205 may be initialized by a ROM boot loader built into the microcontroller 100, according to certain embodiment(s) of the present invention. When the loader is invoked, the external program memory space appears as a data memory to the ROM and can therefore be initialized. Invoking the loader causes the loader to generate new encryption keys (e.g., from a random number generator), thus invalidating all information external to the part. Invocation of the loader also invalidates the tags 410 and erases the cache 230. The program loading process may be implemented by the following exemplary procedures: the program is read through a serial port, encrypted, and written to the external memory space via data transfer instructions. The encryption process is done in blocks of an exemplary 8 bytes. Eight consecutive bytes are loaded to the program decryption block through an SFR interface, encrypted, then read out. It should be noted that the SFR interface, in certain embodiment(s), may only be available while executing from the loader in ROM mode or a user loader mode. The encrypted code is then written by the loader to the external memory 205 through the byte-wide data bus (e.g., a byte-wide embodiment of bus 210). The byte-wide address bus is driven by the address generator 415 on a data transfer instruction. Eight consecutive memory writes are required to store one block of encrypted code. Data-transfer-instruction-type information transferred between the accumulator and the external memory 205 may travel via a bus 425 if the information is intended for program space. The data information may or may not be encrypted (depending on the value of encryption select bits). The address generator 415 latches the data transfer instruction address, and the data encryptor 420 drives or reads the byte-wise data bus during the data transfer instruction. With respect to these exemplary implementation(s), data encryption is thus byte-wise and real-time, which contrasts with program encryption, which is a block encryption taking multiple cycles.
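As an illustration only, the loader sequence just described (read a byte over the serial port, encrypt in 8-byte blocks through the SFR interface, then write each encrypted block to external memory with eight consecutive byte writes) might be organized as follows; all helper functions are hypothetical placeholders, and handling of a final partial block is omitted.

```c
#include <stdint.h>

#define BLOCK_BYTES 8

/* Hypothetical placeholders for the serial port, the SFR block-encryption
 * interface, and the byte-wide external memory write path. */
extern int  serial_read_byte(uint8_t *out);          /* returns 0 at end of input */
extern void sfr_encrypt_block(const uint8_t in[BLOCK_BYTES],
                              uint8_t out[BLOCK_BYTES]);
extern void external_write_byte(uint32_t addr, uint8_t value);

void load_program(uint32_t base_addr)
{
    uint8_t plain[BLOCK_BYTES], cipher[BLOCK_BYTES];
    uint32_t addr = base_addr;
    int n = 0;

    while (serial_read_byte(&plain[n])) {
        if (++n < BLOCK_BYTES)
            continue;                            /* keep filling the block */
        sfr_encrypt_block(plain, cipher);        /* encrypt one 8-byte block */
        for (int i = 0; i < BLOCK_BYTES; i++)
            external_write_byte(addr++, cipher[i]);   /* eight memory writes */
        n = 0;
    }
}
```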
Referring to
- (1) Write data in a 64-bit block from the program decryptor 400;
- (2) Read data in a 64-bit block output decoded further to 8-bit data for internal bus;
- (3) Addressed by a way select and PC index address bits (e.g., bits [8:3]); and
- (4) 2×512×8 logically.
The RAM may be implemented as a 1 Kbyte SRAM to save silicon space. The read and write lines and way select line are provided by the memory controller 405, depending on the tag output of the current index. The index address is provided by the program counter (PC).
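For orientation, the way a program-counter address decomposes into the fields used by these structures (a 3-bit byte offset, the 6-bit block index PC[8:3], and the 13-bit tag PC[21:9]) can be sketched as below; the overall 22-bit address span is an assumption implied by the quoted bit ranges.

```c
#include <stdint.h>

typedef struct {
    uint8_t  byte_offset;   /* PC[2:0]  - byte within the 64-bit block     */
    uint8_t  index;         /* PC[8:3]  - one of 64 cache/tag rows per way */
    uint16_t tag;           /* PC[21:9] - compared against the stored tag  */
} pc_fields_t;

static pc_fields_t split_pc(uint32_t pc)
{
    pc_fields_t f;
    f.byte_offset = (uint8_t)(pc & 0x7);
    f.index       = (uint8_t)((pc >> 3) & 0x3F);
    f.tag         = (uint16_t)((pc >> 9) & 0x1FFF);
    return f;
}
```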
Similar RAM structure can be used for the cache tags with the following exemplary features:
- (1) Addressing by the block index address (PC [8:3]) 515;
- (2) Write operation inputs from two sources;
- (i) Cache tag: 13-bit high order address bits [21:9] 525; and
- (ii) Two status bits from the memory controller 405;
- (3) Read operation outputs to two destinations;
- (i) 13-bit tag 505 to the comparator; and
- (ii) Two status bits to the memory controller 405;
- (4) One 64×28 RAM is needed to support accessing both tags 410a and 410b simultaneously;
- (5) The two status bits require individual accessing capability; and
- (6) Fast access time.
There need be no timing difference between accessing the cache 230 and any internal SRAM used for storing code.
It should be noted that the hit signal is an important path for a cache read. A cache read includes propagation delays caused by the tag RAM read, the tag value comparison, and the cache read access, all of which must be performed in one machine cycle. The hit/miss signals are therefore generated quickly (e.g., in less than two oscillator clocks). If a read miss occurs, the CPU/instruction decoder or equivalent stalls until the targeted code block is ready to be placed into the cache 230. There are many levels of stall penalties for a single DES read miss.
These include the following exemplary latencies for single DES operations:
The stall penalties for a triple DES miss include the following exemplary latencies:
Hits on the finished decrypted buffer 400DB do not cause a stall. The CPU/instruction decoder or equivalent can execute from the decrypted buffer 400DB directly, while the corresponding block in cache 230a or 230b is replaced with the buffer instruction data.
Referring now to
Instruction fetching from the external program memory (from the perspective of the program execution unit) is actually a read from the cache. To that end, the cache is accessed, or at least checked (along with the address of the block in the decrypted buffer), prior to external code fetching. It should be noted that internal program memory may also be present and accessed. To check the cache, at steps 730a and 730b, 13 high order PC address bits are compared with values of the cache tags that are addressable by a block index (e.g., PC bits [8:3]). It should be noted that other total numbers as well as divisions of the address bits may alternatively be employed and that steps 730a and 730b may be performed in a different order, substantially simultaneously, etc. If there is a match (at step 730b) and the tag is valid (at step 730a), then, at step 735, a memory controller/MMU generates a hit signal and allows a read access to that particular cache location at step 740. If, on the other hand, a cache miss occurs (as determined at steps 730a and 730b), but the PC address matches the block that is currently in the decrypted buffer (and the decryption process is completed) at step 745, the corresponding instruction byte is transferred directly to the data bus (at step 720c) for the program execution unit, the cache and tags are updated (at steps 720a and 720b, respectively), and execution continues without a stall (at step 725).
A stall occurs when there is no tag match or the tag is matched but invalid and when the data is not available in the decrypted buffer. Such a miss temporarily stalls program execution, and if the desired instruction(s) are not already in the decryption unit or currently being loaded thereto from the encrypted buffer as determined at step 750, the current PC address is used to fetch the desired code from the external program memory (at steps 705, 710, et seq.). If the PC address does match that of the encrypted instructions, then flow can continue at the decryption or forwarding-to-the-decryption unit stages (at steps 710, 715, et seq.). It should be understood that the PC address is not necessarily the first byte of the block address; however, the memory controller/MMU may set the byte offset bits to 0h for block alignment in the buffers, cache, etc. Program execution may be resumed after the targeted code block reaches the point (at step 725) where the decryption process is finished and the opcode/operand byte may be fed to the program execution unit directly from the decrypted buffer (at step 720c) (e.g., while the cache and tags are updated).
To minimize or reduce the latency associated with external code fetching, processors may be designed with the assumption that program execution is frequently sequential and often follows a rule of locality (e.g., the 90/10 rule). The memory controller/MMU may (pre-)fetch the next consecutive code block, decrypt it, and have it ready in the decrypted buffer. The memory controller/MMU is typically responsible for controlling these activities, and it is responsible for checking possible matches between the current PC address and the blocks in the encrypted buffer, decryption unit, and decrypted buffer. For example, if the current PC address matches the block in the decrypted buffer (as determinable at step 745), the requested byte of instruction data in the decrypted buffer is used for program execution (at step 725 via step 720c) while the related block is written to the cache (at step 720a). If, on the other hand, the current PC address matches the block in the encrypted buffer (or optionally the decryption unit) (at step 750), the encrypted instruction data in the encrypted buffer is transferred to the decryption unit and decrypted (at step 710). A match to either buffer causes the instruction data therein to be placed in the cache at the appropriate address-based index. In the case of a match to the decrypted buffer, the decrypted instruction data can be placed into the cache without delay; otherwise, a stall occurs until the block having the requested instruction byte is decrypted and forwarded to the decrypted buffer, whereafter the decrypted instruction byte may be driven for execution and the cache and tags updated. As described above, upon a miss of the cache, the requested instruction byte may be driven directly from the decrypted buffer while also being substantially simultaneously written to the cache. Any other instruction bytes in the same block that are subsequently requested are located in and may be read from the cache, until the block is replaced or rendered invalid.
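Purely as a summarizing sketch (not the actual controller logic), the priority order described across this section (cache first, then the decrypted buffer, then the decryption unit/encrypted buffer, and finally an external fetch with a stall) can be captured as follows; all helper functions are hypothetical hooks.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks standing in for the memory controller/MMU behavior. */
extern bool    cache_hit(uint32_t pc);
extern uint8_t cache_read(uint32_t pc);
extern bool    decrypted_buffer_match(uint32_t pc);
extern bool    pipeline_match(uint32_t pc);        /* decryptor or encrypted buffer */
extern void    external_fetch_block(uint32_t pc);  /* fill the encrypted buffer     */
extern void    stall_until_block_decrypted(void);
extern uint8_t decrypted_buffer_read(uint32_t pc);
extern void    write_block_to_cache(uint32_t pc);  /* also updates the tags         */

uint8_t fetch_instruction(uint32_t pc)
{
    if (cache_hit(pc))
        return cache_read(pc);                 /* hit: no stall                   */

    if (!decrypted_buffer_match(pc)) {
        if (!pipeline_match(pc))
            external_fetch_block(pc);          /* complete miss: go off-chip      */
        stall_until_block_decrypted();         /* wait for the block to decrypt   */
    }

    write_block_to_cache(pc);                  /* block moves into the cache ...  */
    return decrypted_buffer_read(pc);          /* ... while the byte is forwarded */
}
```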
Although embodiment(s) of the methods, systems, and arrangements of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it will be understood that the present invention is not limited to the embodiment(s) disclosed, but is capable of numerous rearrangements, modifications, and substitutions without departing from the spirit and scope of the present invention as set forth and defined by the following claims.
Claims
1. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory, said plurality of encrypted instructions comprise a plurality of consecutive encrypted instructions, and said step of receiving said plurality of encrypted instructions into a buffer of the processor from the external memory comprises the step of receiving the plurality of consecutive encrypted instructions from a bus having a width equivalent to that of each consecutive encrypted instruction of the plurality of consecutive encrypted instructions;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions; and
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor.
2. The method according to claim 1, wherein the width is equal to eight (8) bits.
3. The method according to claim 1, wherein the selected decryption algorithm comprises at least one of a data encryption standard (DES), a triple DES, and an advanced encryption standard (AES).
4. The method according to claim 1, wherein the processing area of the processor comprises at least one of a central processing unit (CPU) and an instruction decoder.
5. The method according to claim 1, further comprising the step of:
- forwarding the plurality of decrypted instructions to a cache of the processor.
6. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions, wherein said step of decrypting the plurality of encrypted instructions substantially simultaneously using said selected decryption algorithm to produce said plurality of decrypted instructions comprises the step of decrypting the plurality of encrypted instructions using at least one modified decryption key, the at least one modified decryption key being formed responsive, at least partly, to at least a portion of an address associated with at least one encrypted instruction of the plurality of encrypted instructions; and
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor.
7. The method according to claim 6, wherein the at least one modified decryption key is formed further responsive, at least partly, to at least one decryption key, the at least one decryption key generated using at least a pseudo-random number generator.
8. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions;
- transferring the plurality of decrypted instructions to another buffer;
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor;
- delaying said step of forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor until an instruction address requested by the processing area corresponds to an instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions; and
- wherein said step of forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor comprises the step of forwarding the at least one decrypted instruction of the plurality of decrypted instructions to the processing area of the processor from the another buffer when the instruction address requested by the processing area corresponds to the instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions.
9. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions; and
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor;
- said method occurs within at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
10. A system for providing security to stored information, comprising:
- at least one memory, said at least one memory storing a plurality of encrypted instructions, each encrypted instruction of the plurality of encrypted instructions associated with an address; and
- a processor, said processor operatively coupled to said memory to retrieve the plurality of encrypted instructions therefrom; said processor including:
- a first buffer, the first buffer capable of receiving the plurality of encrypted instructions;
- a decryption unit, the decryption unit capable of receiving the plurality of encrypted instructions from the first buffer, the decryption unit adapted to decrypt the plurality of encrypted instructions using a decryption algorithm to produce a plurality of decrypted instructions;
- a second buffer, the second buffer capable of receiving the plurality of decrypted instructions from the decryption unit; and
- a processing area, the processing area capable of receiving at least one decrypted instruction of the plurality of decrypted instructions.
11. The system according to claim 10, wherein said processor comprises a microcontroller.
12. The system according to claim 10, wherein the processing area comprises at least one of a central processing unit (CPU) and an instruction decoder.
13. The system according to claim 10, wherein said processor further includes:
- a cache memory, the cache memory capable of receiving the plurality of decrypted instructions from the second buffer.
14. The system according to claim 13, wherein the processing area is capable of receiving the at least one decrypted instruction of the plurality of decrypted instructions from the second buffer.
15. The system according to claim 14, wherein said processor further includes:
- a memory controller, the memory controller capable of controlling movement of the plurality of decrypted instructions, the memory controller adapted to provide the processing area the at least one decrypted instruction from the second buffer and to provide the cache the plurality of decrypted instructions from the second buffer.
16. The system according to claim 15, wherein the memory controller is further adapted to provide the processing area the at least one decrypted instruction and the cache the plurality of decrypted instructions substantially simultaneously.
17. The system according to claim 13, wherein the cache is a two-way, set associative cache; and each block in each way of the cache is equal in length to a length of the second buffer.
18. The system according to claim 15, wherein said processor further includes:
- an address unit, the address unit capable of ascertaining a current instruction address, the address unit adapted to provide the instruction address to the memory controller; and
- wherein the at least one decrypted instruction is associated with an instruction address “X” and another decrypted instruction of the plurality of decrypted instructions is associated with an instruction address “X+1”; and the memory controller is further adapted to provide the processing area the another decrypted instruction from the cache when the current instruction address corresponds to the instruction address “X+1”.
19. The system according to claim 10, further comprising:
- a bus, said bus operatively coupling said at least one memory to said processor; and
- wherein a width of said bus is equivalent to a width of each encrypted instruction of the plurality of encrypted instructions.
20. The system according to claim 19, wherein the width of said bus and the width of each encrypted instruction is equal to eight (8) bits.
21. The system according to claim 10, wherein the decryption unit is further adapted to decrypt the plurality of encrypted instructions substantially simultaneously.
22. The system according to claim 10, wherein the decryption algorithm comprises at least one of a data encryption standard (DES), a triple DES, and an advanced encryption standard (AES).
23. The system according to claim 10, wherein the first buffer comprises a latch that is capable of receiving the plurality of encrypted instructions sequentially directly from a bus coupling said at least one memory to said processor.
24. The system according to claim 10, wherein the system comprises at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
25. The system according to claim 10, wherein the decryption unit is further adapted to decrypt the plurality of encrypted instructions using a decryption key formed responsive, at least partly, to at least a portion of the address associated with at least one encrypted instruction of the plurality of encrypted instructions.
26. The system according to claim 25, wherein:
- the at least a portion of the address associated with at least one encrypted instruction of the plurality of encrypted instructions comprises an address value; and
- the decryption unit is further adapted to form the decryption key by utilizing at least one of the following operations: (i) “xor”ing the address value with the decryption key, (ii) adding the address value to the decryption key, and (iii) applying at least one of the address value and the decryption key to a non-linear operation.
27. An arrangement for providing security to executable code stored in a memory external to a processor, the arrangement comprising:
- memory means, said memory means storing a plurality of consecutive encrypted instructions;
- means for sequentially receiving the plurality of consecutive encrypted instructions into a buffer means;
- means for substantially simultaneously decrypting the plurality of consecutive encrypted instructions to create a plurality of consecutive decrypted instructions; and
- means for distributing the plurality of consecutive decrypted instructions within the processor when requested by a processing entity.
28. The arrangement according to claim 27, wherein said means for distributing the plurality of consecutive decrypted instructions within the processor when requested by a processing entity includes means for transferring a decrypted instruction of the plurality of consecutive decrypted instructions to the processing entity when the processing entity presents an instruction address that is associated with the decrypted instruction.
29. The arrangement according to claim 28, further comprising:
- cache means, said cache means for storing decrypted instructions; and
- wherein said means for distributing the plurality of consecutive decrypted instructions within the processor when requested by a processing entity includes means, responsive to a processing entity request, for transferring the decrypted instruction from said cache means if the decrypted instruction is located therein upon receiving the processing request or for transferring the decrypted instruction prior to storing the decrypted instruction in said cache means if the decrypted instruction is not located therein upon receiving the processing entity request.
30. The arrangement according to claim 27, wherein the processor comprises a microcontroller.
31. The arrangement according to claim 30, wherein the microcontroller is compatible with the 8-bit “8051” instruction set.
32. The arrangement according to claim 27, wherein the arrangement comprises at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
33. The arrangement according to claim 27, wherein said means for substantially simultaneously decrypting the plurality of consecutive encrypted instructions to create a plurality of consecutive decrypted instructions includes at least one decryption key; and
- wherein said arrangement further comprises:
- means for creating the at least one decryption key using at least a portion of an address associated with at least one encrypted instruction of the plurality of consecutive encrypted instructions.
34. The arrangement according to claim 27, wherein said memory means stores a plurality of corresponding encrypted checksums; and
- wherein said arrangement further comprises:
- means for comparing a calculated checksum to a corresponding decrypted checksum, the calculated checksum calculated from the plurality of consecutive decrypted instructions, and the corresponding decrypted checksum decrypted with the plurality of consecutive decrypted instructions from at least one corresponding encrypted checksum of the plurality of corresponding encrypted checksums.
35. The arrangement according to claim 34, wherein said arrangement further comprises:
- means for thwarting an attacker's attempts to breach security, said means for thwarting an attacker's attempts to breach security becoming active when said means for comparing a calculated checksum to a corresponding decrypted checksum determines that the calculated checksum is not equivalent to the corresponding decrypted checksum.
36. The arrangement according to claim 27, wherein said memory means stores a plurality of corresponding checksums; and
- wherein said arrangement further comprises: means for comparing a calculated checksum to a corresponding checksum of the plurality of checksums, the calculated checksum calculated from the plurality of consecutive decrypted instructions, and the corresponding checksum is retrieved from said memory means in which it is stored clear and unencrypted.
37. An arrangement for providing information security with a processor, the arrangement comprising:
- an encrypted buffer, said encrypted buffer capable of accepting a plurality of encrypted units and adapted to offer the plurality of encrypted units;
- a decryptor, said decryptor capable of accepting the plurality of encrypted units and adapted (i) to decrypt the plurality of encrypted units to produce a plurality of decrypted units and (ii) to offer the plurality of decrypted units;
- a decrypted buffer, said decrypted buffer capable of accepting the plurality of decrypted units and adapted to offer at least one of a single decrypted-buffer-originated decrypted unit of the plurality of decrypted units and the plurality of decrypted units;
- a cache, said cache capable of accepting the plurality of decrypted units and adapted to offer a single cache-originated unit of the plurality of decrypted units;
- a processing area, said processing area capable of accepting a decrypted unit of the plurality of decrypted units and adapted (i) to ascertain a program address and (ii) to offer the program address;
- a controller, said controller capable of accepting the program address and adapted to control movement of the plurality of decrypted units; and
- wherein said controller, at least partially, causes (i) said cache to offer the single cache-originated decrypted unit to said processing area if a first address associated with the single cache-originated decrypted unit corresponds to the program address and (ii) said decrypted buffer to offer the single decrypted-buffer-originated decrypted unit to said processing area and the plurality of decrypted units to said cache if a second address associated with the single decrypted-buffer-originated decrypted unit corresponds to the program address.
38. The arrangement according to claim 37, wherein the arrangement comprises at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
39. The arrangement according to claim 37, wherein each unit comprises an instruction.
40. The arrangement according to claim 39, wherein each unit comprises a byte.
41. The arrangement according to claim 37, wherein a first length of a block of said cache is equivalent to a second length of the plurality of decrypted units.
42. The arrangement according to claim 37, wherein said decryptor is further adapted to decrypt the plurality of encrypted units to produce a plurality of decrypted units using at least one decryption key.
43. The arrangement according to claim 42, wherein the at least one decryption key is created, at least partially, responsive to at least a portion of the second address.
44. The arrangement according to claim 37, wherein the plurality of encrypted units and the plurality of decrypted units each comprise eight (8) bytes.
45. A method for providing enhanced security for a processor, comprising the steps of:
- comparing a program address to at least one tag address of a cache to determine whether there is a hit;
- if the hit is determined, then transferring at least one information unit from the cache to a processing area;
- comparing the program address to an address corresponding to a decrypted buffer to determine whether there is a decrypted buffer match;
- if the hit is not determined and the decrypted buffer match exists, then transferring another at least one information unit from the decrypted buffer to the processing area and transferring a plurality of information units from the decrypted buffer to the cache;
- comparing the program address to an address corresponding to a decryption unit to determine whether there is a decryption unit match;
- if the hit is not determined and the decrypted buffer match does not exist and the decryption unit match does exist, then decrypting another plurality of information units in the decryption unit and thereafter transferring the another plurality of information units from the decryption unit to the decrypted buffer and transferring yet another at least one information unit from the decrypted buffer to the processing area and transferring the another plurality of information units from the decrypted buffer to the cache.
46. The method according to claim 45, further comprising the steps of:
- comparing the program address to an address corresponding to an encrypted buffer to determine whether there is an encrypted buffer match;
- if the hit is not determined and the decrypted buffer match does not exist and the decryption unit match does not exist and the encrypted buffer match does exist, then transferring yet another plurality of information units from the encrypted buffer to the decryption unit and decrypting the yet another plurality of information units in the decryption unit and thereafter transferring the yet another plurality of information units from the decryption unit to the decrypted buffer and transferring still yet another at least one information unit from the decrypted buffer to the processing area and transferring the yet another plurality of information units from the decrypted buffer to the cache.
47. The method according to claim 45, wherein the method occurs within at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
48. A method for providing enhanced security for a processor, comprising the steps of:
- ascertaining a program counter;
- determining whether an instruction associated with an address that corresponds to the program counter is in a cache, a decrypted buffer, a decryption unit, or an encrypted buffer;
- if so, forwarding the instruction;
- if not,
- retrieving a plurality of encrypted instructions from an external memory and loading the plurality of encrypted instructions into the encrypted buffer, the plurality of encrypted instructions including the instruction in an encrypted format;
- forwarding the plurality of encrypted instructions from the encrypted buffer to the decryption unit;
- decrypting the plurality of encrypted instructions in the decryption unit to produce a plurality of decrypted instructions, the plurality of decrypted instructions including the instruction in an unencrypted format;
- forwarding the plurality of decrypted instructions from the decryption unit to the decrypted buffer; and
- forwarding the instruction from the decrypted buffer for further processing.
49. The method according to claim 48, further comprising the step of:
- forwarding the plurality of decrypted instructions from the decrypted buffer to the cache approximately during effectuation of said step of forwarding the instruction from the decrypted buffer for further processing.
50. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory, said plurality of encrypted instructions comprise a plurality of consecutive encrypted instructions, and said step of receiving said plurality of encrypted instructions into a buffer of the processor from the external memory comprises the step of receiving the plurality of consecutive encrypted instructions from a bus having a width equivalent to that of each consecutive encrypted instruction of the plurality of consecutive encrypted instructions;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions;
- storing the plurality of decrypted instructions within a cache;
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor;
- delaying the step of forwarding the at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor until an instruction address requested by the processing area corresponds to an instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions; and
- wherein said step of forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor comprises the step of forwarding the at least one decrypted instruction of the plurality of decrypted instructions to the processing area of the processor from the cache when the instruction address requested by the processing area corresponds to the instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions.
51. The method according to claim 50, wherein the width is equal to eight (8) bits.
52. The method according to claim 50, wherein the selected decryption algorithm comprises at least one of a data encryption standard (DES), a triple DES, and an advanced encryption standard (AES).
53. The method according to claim 50, wherein the processing area of the processor comprises at least one of a central processing unit (CPU) and an instruction decoder.
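Claims 50 through 53 describe an 8-bit external bus filling the buffer one consecutive encrypted byte at a time, followed by a single block-wide decryption (DES, triple DES, or AES). A minimal sketch, assuming a DES-style 8-byte block and hypothetical helpers `read_ext_byte` and `des_decrypt_block`:

```c
#include <stdint.h>

#define DES_BLOCK 8                              /* bytes per block        */

extern uint8_t read_ext_byte(uint32_t addr);     /* one 8-bit bus read     */
extern void    des_decrypt_block(const uint8_t in[DES_BLOCK],
                                 uint8_t out[DES_BLOCK]);

void fill_and_decrypt(uint32_t base, uint8_t dec_out[DES_BLOCK])
{
    uint8_t enc[DES_BLOCK];
    for (int i = 0; i < DES_BLOCK; i++)          /* byte-by-byte fill      */
        enc[i] = read_ext_byte(base + (uint32_t)i);
    des_decrypt_block(enc, dec_out);             /* all bytes decrypted
                                                    substantially at once  */
}
```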
54. A method for providing secure external memory that stores instructions for a processor, comprising the steps of:
- receiving a plurality of encrypted instructions into a buffer of the processor from the external memory;
- decrypting the plurality of encrypted instructions substantially simultaneously using a selected decryption algorithm to produce a plurality of decrypted instructions, wherein said step of decrypting the plurality of encrypted instructions substantially simultaneously using said selected decryption algorithm to produce said plurality of decrypted instructions comprises the step of decrypting the plurality of encrypted instructions using at least one modified decryption key, the at least one modified decryption key being formed responsive, at least partly, to at least a portion of an address associated with at least one encrypted instruction of the plurality of encrypted instructions;
- forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor;
- delaying said step of forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor until an instruction address requested by the processing area corresponds to an instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions; and
- wherein said step of forwarding at least one decrypted instruction of the plurality of decrypted instructions to a processing area of the processor comprises the step of forwarding the at least one decrypted instruction of the plurality of decrypted instructions to the processing area of the processor when the instruction address requested by the processing area corresponds to the instruction address associated with the at least one decrypted instruction of the plurality of decrypted instructions.
55. The method according to claim 54, wherein the at least one modified decryption key is formed further responsive, at least partly, to at least one decryption key, the at least one decryption key generated using at least a pseudo-random number generator.
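Claims 54 and 55 tie the working decryption key to both a base key (generated with at least a pseudo-random number generator) and part of the address of the encrypted instruction, so identical ciphertext at different addresses does not decrypt under the same effective key. The XOR-and-multiply mixing below is only one illustrative way to form such an address-dependent key; the constant and shift are arbitrary assumptions.

```c
#include <stdint.h>

/* Form a modified key from a PRNG-derived base key and the block address. */
uint64_t modified_key(uint64_t base_key, uint32_t block_addr)
{
    uint64_t addr_bits = (uint64_t)(block_addr >> 3);      /* block-aligned part */
    return base_key ^ (addr_bits * 0x9E3779B97F4A7C15ULL); /* mix address in     */
}
```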
56. A system for providing security to stored information, comprising:
- at least one memory, said at least one memory storing a plurality of encrypted instructions, each encrypted instruction of the plurality of encrypted instructions associated with an address; and
- a processor, said processor operatively coupled to said memory to retrieve the plurality of encrypted instructions therefrom; said processor including:
- a first buffer, the first buffer capable of receiving the plurality of encrypted instructions;
- a decryption unit, the decryption unit capable of receiving the plurality of encrypted instructions from the first buffer, the decryption unit adapted to decrypt the plurality of encrypted instructions using a block decryption algorithm to produce a plurality of decrypted instructions;
- a second buffer, the second buffer capable of receiving the plurality of decrypted instructions from the decryption unit;
- a cache memory, the cache memory capable of receiving the plurality of decrypted instructions from the second buffer;
- a processing area comprising at least one of a central processing unit (CPU) and an instruction decoder, the processing area capable of receiving at least one decrypted instruction of the plurality of decrypted instructions; and
- a memory controller, the memory controller capable of controlling movement of the plurality of decrypted instructions, the memory controller adapted to provide the processing area the at least one decrypted instruction from the second buffer and to provide the cache the plurality of decrypted instructions from the second buffer.
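A compact data-structure view of the claim 56 system may help keep the pieces straight: an encrypted (first) buffer, a decryption unit, a decrypted (second) buffer, a cache, a processing area, and a memory controller that routes one decrypted instruction to the processing area and the whole block to the cache. Field and type names below are illustrative placeholders, not the claimed circuit blocks.

```c
#include <stdint.h>

#define BLOCK_BYTES 8                              /* assumed block width  */

typedef struct {
    uint8_t  first_buffer[BLOCK_BYTES];            /* encrypted block      */
    uint8_t  second_buffer[BLOCK_BYTES];           /* decrypted block      */
    uint32_t block_base;                           /* address tag          */
} crypto_datapath_t;

typedef struct {
    crypto_datapath_t path;
    /* The decryption unit, cache, processing area, and memory controller
       are separate hardware blocks; they are modelled here only as hooks. */
    void (*decrypt_block)(const uint8_t *in, uint8_t *out);
    void (*cache_fill)(uint32_t base, const uint8_t *blk);
    void (*forward_instr)(uint8_t instr);
} secure_processor_t;
```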
57. The system according to claim 56, wherein said processor comprises a microcontroller.
58. The system according to claim 56, wherein the processing area comprises at least one of a central processing unit (CPU) and an instruction decoder.
59. The system according to claim 56, wherein the processing area is capable of receiving the at least one decrypted instruction of the plurality of decrypted instructions from the second buffer.
60. The system according to claim 59, wherein the memory controller is further adapted to provide the processing area the at least one decrypted instruction and the cache the plurality of decrypted instructions substantially simultaneously.
61. The system according to claim 60, wherein the cache is a two-way, set associative cache; and each block in each way of the cache is equal in length to a length of the second buffer.
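The arrangement of claim 61, where each block in each way of a two-way, set-associative cache has the same length as the second buffer, means one buffer transfer fills exactly one way of one set. A lookup sketch under assumed sizes (32 sets, 8-byte blocks):

```c
#include <stdint.h>
#include <stdbool.h>

#define WAYS        2
#define SETS        32                       /* assumed number of sets     */
#define BLOCK_BYTES 8                        /* equals second-buffer length */

typedef struct {
    uint32_t tag;
    bool     valid;
    uint8_t  data[BLOCK_BYTES];
} cache_line_t;

static cache_line_t cache[SETS][WAYS];

bool cache_read(uint32_t addr, uint8_t *out)
{
    uint32_t block = addr / BLOCK_BYTES;
    uint32_t set   = block % SETS;
    uint32_t tag   = block / SETS;
    for (int w = 0; w < WAYS; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            *out = cache[set][w].data[addr % BLOCK_BYTES];
            return true;                     /* hit  */
        }
    }
    return false;                            /* miss */
}
```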
62. The system according to claim 56, further comprising:
- a bus, said bus operatively coupling said at least one memory to said processor; and
- wherein a width of said bus is equivalent to a width of each encrypted instruction of the plurality of encrypted instructions.
63. The system according to claim 62, wherein the width of said bus and the width of each encrypted instruction is equal to eight (8) bits.
64. The system according to claim 56, wherein the decryption unit is further adapted to decrypt the plurality of encrypted instructions substantially simultaneously.
65. The system according to claim 56, wherein the decryption algorithm comprises at least one of a data encryption standard (DES), a triple DES, and an advanced encryption standard (AES).
66. The system according to claim 56, wherein the system comprises at least one of the following: a data switcher or router; a subscriber line interface card; a modem; a digitally-controlled machining tool; a portable radio; a wireless telephone; a voltmeter, ammeter, or ohmmeter; a personal digital assistant (PDA); a television; a cable or satellite TV set top box; a camcorder; a piece of audio/visual equipment; an audio compact disk (CD) system, player, or recorder; a digital versatile disk (DVD) system, player, or recorder; a piece of financial equipment, including at least one of a personal identification number (PIN) pad and a point of sale (POS) terminal; and a smart card.
67. The system according to claim 56, wherein the decryption unit is further adapted to decrypt the plurality of encrypted instructions using a decryption key formed responsive, at least partly, to at least a portion of the address associated with at least one encrypted instruction of the plurality of encrypted instructions.
68. The system of claim 56 further comprising a first interface, the first interface coupling the first buffer to the at least one memory, the first interface communicates the encrypted instructions in a serial data format.
69. The system of claim 68 wherein the first interface transmits the encrypted instructions to the at least one memory to be stored as encrypted instructions.
70. The system of claim 68 wherein the first interface receives the encrypted instructions from the at least one memory to be processed within the processing area.
71. The system of claim 68 wherein the first interface is a bi-directional interface on which the encrypted instructions are transmitted or received in the serial data format.
72. The system of claim 68 further comprising a third buffer, the third buffer stores the encrypted instructions received on the first interface in accordance with a serial transmission procedure.
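Claims 68 through 72 add a serial, bi-directional first interface, with a third buffer staging the serially received encrypted bytes. A minimal sketch of assembling one byte from such a line, assuming a hypothetical `read_serial_bit` primitive and most-significant-bit-first ordering:

```c
#include <stdint.h>

extern int read_serial_bit(void);            /* returns 0 or 1 from the line */

uint8_t receive_serial_byte(void)
{
    uint8_t b = 0;
    for (int i = 0; i < 8; i++)              /* assemble 8 bits, MSB first   */
        b = (uint8_t)((b << 1) | (read_serial_bit() & 1));
    return b;                                /* byte staged in a third buffer */
}
```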
73. The system of claim 56 wherein the cache memory is a multi-level cache having a first cache and a second cache, the first cache receiving at least a first portion of the plurality of decrypted instructions.
74. The system of claim 73 wherein the second cache receives a second portion of the plurality of decrypted instructions if the at least first portion of the plurality of decrypted instructions is not the entirety of the plurality of decrypted instructions.
75. The system of claim 73 wherein the first cache is a layer one cache and the second cache is a layer two cache.
76. The system of claim 56 wherein the first buffer receives the plurality of encrypted instructions in a byte-oriented fetch.
77. The system of claim 56 further comprising a memory management unit that manages a caching procedure, the caching procedure including at least one step associated with a cache block replacement policy.
78. The system of claim 77 wherein the caching procedure includes a first cache and a second cache, the memory management unit virtualizes the addressing across the first and second caches.
79. The system of claim 77 wherein the memory management unit manages read and write operations within the caching procedure such that hit and miss events are controlled.
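For the memory management unit of claims 77 through 79, one concrete example of a cache block replacement policy it might apply is a per-set one-bit LRU for a two-way cache. The policy choice and names below are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>

#define SETS 32

static uint8_t lru_bit[SETS];          /* which way was used least recently */

int choose_victim_way(uint32_t set, const bool valid[2])
{
    if (!valid[0]) return 0;           /* prefer an empty way                */
    if (!valid[1]) return 1;
    return lru_bit[set];               /* otherwise evict the LRU way        */
}

void note_access(uint32_t set, int way)
{
    lru_bit[set] = (uint8_t)(way ^ 1); /* the other way is now LRU           */
}
```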
80. The system of claim 56 wherein the block decryption algorithm is Advanced Encryption Standard.
81. The system of claim 56 further comprising:
- a cache coupled within the processor, the cache receives the plurality of decrypted instructions and stores the plurality of decrypted instructions prior to a first decrypted instruction within the plurality of decrypted instructions being executed;
- a first bus coupled between the second buffer and the cache; and
- a second bus coupled between the cache and the processing area.
82. The system of claim 56 further comprising:
- a cache coupled within the processor;
- a first bus coupled between the second buffer and the cache;
- a second bus coupled between the cache and the processing area; and
- a third bus coupled between the second buffer and the processing area, the third bus bypassing the cache to allow the processing area to receive a first decrypted instruction within the plurality of decrypted instructions without the first decrypted instruction being stored within the cache.
83. The system of claim 56 further comprising:
- a data encryptor coupled to receive a plurality of unencrypted data blocks, the data encryptor encrypts the plurality of unencrypted data blocks using a block encryption algorithm to produce a plurality of encrypted data blocks; and
- a data input/output interface coupled to the data encryptor, the data input/output interface transmits the encrypted data blocks to the at least one memory.
84. The system of claim 83 wherein the block encryption algorithm is Advanced Encryption Standard.
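Claims 83 and 84 describe the outbound counterpart of the fetch path: data generated inside the part is block-encrypted (the claim names AES) before it leaves on the data input/output interface, so the external memory only ever holds ciphertext. The helpers `aes_encrypt_block` and `ext_mem_write_block` below are assumed stand-ins, not a specific library API.

```c
#include <stdint.h>
#include <stddef.h>

#define AES_BLOCK 16                             /* AES block size in bytes */

extern void aes_encrypt_block(const uint8_t in[AES_BLOCK],
                              uint8_t out[AES_BLOCK],
                              const uint8_t key[16]);
extern void ext_mem_write_block(uint32_t addr, const uint8_t *blk, size_t n);

void store_data_block(uint32_t addr, const uint8_t plain[AES_BLOCK],
                      const uint8_t key[16])
{
    uint8_t cipher[AES_BLOCK];
    aes_encrypt_block(plain, cipher, key);       /* encrypt before leaving  */
    ext_mem_write_block(addr, cipher, AES_BLOCK);/* external memory stores
                                                    ciphertext only         */
}
```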
85. The system of claim 56 wherein the block decryption algorithm uses a key at least partially derived using an address within the at least one memory.
86. The system of claim 85 wherein the key is derived responsive, at least partially, to at least a portion of an address associated with at least one encrypted instruction of the plurality of encrypted instructions.
87. The system of claim 56 wherein a key is further derived responsive, at least partially, to at least a pseudo-random number.
88. The system of claim 56 further comprising a checksum circuit that performs an integrity check on at least one decrypted instruction within the plurality of decrypted instructions.
89. The system of claim 88 wherein the integrity check comprises the step of calculating a checksum related to the at least one decrypted instruction and comparing the calculated checksum with a fetched checksum.
90. The system of claim 89 wherein responsive to the integrity check failing, the system performs at least one of the following operations selected from a group consisting of:
- performing a destructive reset in which at least one key is erased;
- performing an evasive action sequence that protects data stored within the at least one memory; and
- performing an interrupt to allow a specific circuit to respond to the integrity check failure.
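The integrity check of claims 88 through 90 compares a checksum calculated over the decrypted material against a checksum fetched alongside it, and reacts defensively on mismatch. The 8-bit additive checksum and the action names in this sketch are assumptions chosen for brevity; the claims leave the checksum and response open.

```c
#include <stdint.h>
#include <stddef.h>

extern void destructive_reset_erase_keys(void);  /* assumed response hooks */
extern void raise_tamper_interrupt(void);

static uint8_t checksum8(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--) sum = (uint8_t)(sum + *p++);
    return sum;
}

void verify_block(const uint8_t *blk, size_t n, uint8_t fetched_sum)
{
    if (checksum8(blk, n) != fetched_sum) {      /* integrity check failed  */
        destructive_reset_erase_keys();          /* or an evasive sequence  */
        raise_tamper_interrupt();                /* or notify a handler     */
    }
}
```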
91. The system of claim 56 further comprising circuitry that responds to a tamper event, the circuitry performs, responsive to an integrity attack, at least one of the following operations selected from a group consisting of:
- performing a destructive reset in which at least one key is erased;
- performing an evasive action sequence that protects data stored within the at least one memory; and
- performing an interrupt to allow a specific circuit to respond to the integrity check failure.
92. The method of claim 50 further comprising the steps of:
- encrypting a plurality of data blocks using a selected encryption algorithm to produce a plurality of encrypted data blocks; and
- forwarding at least one data block in the plurality of data blocks to the external memory to be stored as encrypted data.
93. The method of claim 92 further comprising the steps of:
- calculating an integrity value associated with the at least one data block; and
- storing the integrity value within the external memory.
94. The method of claim 50 further comprising the steps of:
- detecting an integrity event within a microcontroller; and
- responding to the integrity event to protect data stored within the external memory.
95. The method of claim 94 wherein the step of responding to the integrity event comprises performing a destructive reset in which at least one key is erased.
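The destructive reset of claims 94 and 95 actively clears key material before the part restarts, so a detected integrity event leaves nothing useful behind. A minimal sketch, with `soft_reset` assumed and the key passed in explicitly; the `volatile` qualifier keeps the wipe from being optimized away.

```c
#include <stdint.h>
#include <stddef.h>

extern void soft_reset(void);                 /* assumed restart primitive */

void destructive_reset(volatile uint8_t *key, size_t key_len)
{
    for (size_t i = 0; i < key_len; i++)
        key[i] = 0;                           /* erase the stored key      */
    soft_reset();                             /* then restart the part     */
}
```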
3573747 | April 1971 | Adams et al. |
3609697 | September 1971 | Blevins et al. |
3798360 | March 1974 | Feistel |
5943421 | August 24, 1999 | Grabon |
5982887 | November 9, 1999 | Hirotani |
6003117 | December 14, 1999 | Buer et al. |
6061449 | May 9, 2000 | Candelore et al. |
7039814 | May 2, 2006 | DaCosta |
- Product Bulletin entitled, “VMS320 High Speed PCMCIA Security Token Cryptographic Engine”, by VLSI Technology, Inc., 1997, PB-0497-020, (pp. 6).
Type: Grant
Filed: Jul 6, 2018
Date of Patent: Aug 31, 2021
Assignee: Maxim Integrated Products, Inc. (San Jose, CA)
Inventors: Edward Tangkwai Ma (Plano, TX), Stephen N. Grider (Argyle, TX), Wendell L. Little (Corinth, TX)
Primary Examiner: Minh Dieu Nguyen
Application Number: 16/029,263
International Classification: G06F 1/24 (20060101); G06F 21/72 (20130101); G06F 21/85 (20130101);