Systems and Methods for Providing a Computing Device Having a Secure Operating System Kernel

- PCTEL Secure LLC

A method and apparatus for resisting malicious code in a computing device. A software component corresponding to an operating system kernel is analyzed, prior to executing the software component, to detect the presence of one or more specific instructions, such as malicious code, instructions to change mode permissions, or instructions to modify or turn off security monitoring software, and a graduated action is taken in response to the detection of the one or more specific instructions. The graduated action taken is specified by a security policy (or policies) stored on the computing device. The analyzing may include off-line scanning of a particular code or portion of code for certain instructions, op codes, or patterns, and includes scanning in real-time as the kernel or kernel module is loading, while the code being scanned is not yet executing (i.e., it is not yet “on-line”). Analysis of other code proceeds according to policies.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of priority to U.S. Provisional Application No. 61/445,319, filed Feb. 22, 2011, the entire contents of which are incorporated herein by reference.

FIELD OF THE INVENTION

This invention relates generally to a secure operating system kernel. In particular, the invention relates to a system and method for preventing malicious code from modifying or otherwise exploiting an operating system kernel of a computing device.

BACKGROUND OF THE INVENTION

In computing systems, kernel (Operating System) exploits are the most serious type of malicious attacks, and are also among the most difficult to prevent. In many systems, the kernel is the only software that runs in “privileged” mode of the processor; it has the right to execute a set of privileged instructions which enable the kernel to have complete control over the system, its applications, and its data. Only the kernel can execute those instructions, while user applications, such as those downloaded from an Application Store, or web pages accessed over the internet, are not able to execute these instructions. Hence, exploits that are introduced into the system by user applications (which is the vast majority of exploits) will often attempt to gain access to the kernel (i.e. run the exploit code in privileged mode of the processor) so that they too can execute those privileged instructions, which would give them essentially unlimited access to the system.

There is therefore a need to provide robust protection to the kernel against malicious code. However, protecting the kernel against malicious code has proven to be a challenge. One reason for this is that many modern general-purpose kernels (such as the Linux kernel) are so large that they are very difficult to protect using conventional techniques.

SUMMARY

A method for resisting malicious code in a computing device comprises analyzing a software component corresponding to an operating system kernel, prior to executing the software component to detect the presence of one or more specific instructions; and taking a graduated action in response to the detection of one or more specific instructions. The specific instructions may be malicious code that may compromise the security of the computing device. The instructions may be instructions relating to a change in mode permissions, such as from a user (e.g., application) mode to a privilege (e.g., kernel) mode, or instructions to modify or turn off security monitoring software. The graduated action taken may be, for example and without limitation, removing the offending instructions from the kernel software, prohibiting access to certain peripheral devices or functions and/or shutting the computing device down. The graduated action taken may be specified by a security policy (or policies) stored on the computing device. The analyzing may include off-line scanning of a software component corresponding to an operating system kernel. As used herein, “off-line” analysis includes scanning a particular code or portion of code for certain instructions, op codes, or patterns, and includes scanning in real-time as the kernel or kernel module is loading while the code being scanned is not yet executing (i.e., it is not yet “on-line”).

In addition to off-line analyzing of the kernel code, embodiments may also include analyzing user application(s), loadable modules (e.g. Linux Kernel Modules), or additional kernel(s) to detect the presence of one or more specific instructions that may be prohibited by security policies; and taking a graduated action in response to the detection of one or more specific prohibited instructions.

In embodiments, the method may further include providing a secure memory segment (e.g., domain) containing the kernel so that the kernel code cannot be modified from privileged (e.g., kernel) mode or user (e.g., application) mode. The kernel may operate in privileged (e.g. kernel) mode. The method may further comprise creating at least one “super privileged” mode, or Secure Monitor Mode (SMM), with the code and data for operating in the SMM being located in a secure section of memory distinct from the rest of the kernel. An application programming interface (API) may be used to enter and exit the SMM.

In embodiments, a memory of the computing device may be partitioned into a plurality of segments to protect certain sections of memory (including code that sits in that memory) from other memory in the system. In embodiments, code running in a particular memory segment is not aware of the existence of, and cannot read or write to, other memory in other memory segments.

In embodiments, a security monitor may protect critical parts of the kernel against code that may be able to run in privileged mode by essentially creating a super-privileged layer above the normal kernel, which is relegated to a less privileged layer (but still higher than user mode). In embodiments, only a small amount of code may run in the super-privileged mode, and this code may dynamically monitor the rest of the kernel code and protect the kernel from malicious attacks or kernel bugs.

In embodiments, the computing device may prevent the kernel from issuing instructions to modify access to the defined memory segments (e.g., domains) while the kernel is operating in privileged (kernel) mode. In embodiments, an instruction to modify access to a memory segment (e.g., domain) may only be issued from Secure Monitor Mode. In embodiments, the computing device may include a Domain Access Control Register (DACR) that controls the memory segments to which the currently-executing code has access, and instructions to modify the DACR (e.g., DACR modification instructions) may only be issued from the SMM.

In embodiments, the security monitor code may dynamically monitor the kernel and certain secure user applications for any attempt at modification, without the risk that the monitoring software itself will be modified.

Further embodiments relate to computing devices implementing a secure kernel.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the various embodiments.

FIG. 1 schematically illustrates a computing system with a secure kernel according to one embodiment.

FIG. 2 schematically illustrates a kernel with a secure monitor segment and a memory access component.

FIG. 3 is a process flow diagram illustrating a method of protecting an operating system kernel from malicious code.

DETAILED DESCRIPTION

The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims.

Typical computing devices suitable for use with the various embodiments will have in common the components illustrated in FIG. 1. For example, the exemplary computing device 100 may include a processor 102 coupled to internal memory 104. The processor 102 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described herein. Typically, software applications may be stored in the internal memory 104 before they are accessed and loaded into the processor 102.

The computing device may further include an operating system 106, which, as is known in the art, may manage various resources of the device and provide a software platform upon which other programs (e.g., applications) may run. The operating system 106 includes a kernel 108, which may be a Linux-based kernel.

In various embodiments, the computing device 100 includes a protection core 110, which may be implemented in software, and which conceptually constitutes a thin layer surrounding the operating system kernel 108. The protection core 110 includes a security monitor that performs various operations, which may include monitoring for potentially dangerous system behavior and/or continually checking system integrity. Either or both of these operations may be performed pursuant to a security policy.

The computing device 100 may include one or more apps 112, which are user level applications or services, typically implemented in software, and may include, for example, a Web browser, e-mail client, etc. They may be installed on the computing device 100 by the user of the device, or by an authorized administrator, network operator, the device manufacturer, from an app store or other online marketplace, or occasionally, by untrusted parties in the form of malware.

Daemons and system processes 114 are software processes which may run in the background, and the user and administrator may or may not be aware of them. They may perform maintenance tasks, or provide internal and external communication conduits between processes and devices. They may run inside the operating system, and possibly inside the kernel 108. They often run with elevated privileges over a user's app 112.

The devices 118 may include any peripherals (e.g., I/O devices) or physical interfaces, such as network interfaces, user input devices, displays, etc., or the associated chipsets to communicate to any such devices. The device drivers 120 may be software elements that the kernel 108 uses to communicate with devices 118.

The memory 104 may be a volatile or non-volatile memory, and may be RAM or Flash memory storage, or any other permanent or temporary storage. In various embodiments, the memory 104 may include any computer-readable medium for storing data, such as security policies, and/or executable programs (e.g., software) for execution by a processor, including processor 102.

The processor 102 may interpret and execute program code, including but not limited to user applications (e.g., apps 112), the operating system 106, including the kernel 108, daemons and processes 114, device drivers 120, etc. The processor 102 may also interpret and execute program code for the protection core 110.

Various embodiments of a protection core 110, which may implement one or more security policies on the computing device 100, are described in commonly-owned U.S. application Ser. No. ______, (Attorney Docket Number 1603-001), filed on even date herewith, the entire contents of which are incorporated herein by reference. The protection core 110 may provide certain functionality such as performing integrity checks on the various components of the system, including the kernel 108, system call monitoring, implementation of cryptographic or other security-related features, etc., as specified by the applicable security policy.

To provide strong separation between the protection core 110, the individual processes inside the operating system (e.g., Linux) kernel 108, and operating system 106 itself, various embodiments may include modifications to the kernel 108 to support security policy enforcement by the protection core 110 and enhanced memory management, which are described in further detail below.

As discussed above, since the kernel 108 runs in “privileged” mode, which enables the kernel 108 to have complete control over operating system 106, applications 112, and data on the computing system 100, there is a need to protect the kernel code from malicious exploits.

One way to protect the kernel 108 is through virtualization, in which the large kernel, instead of being executed in privileged mode, is instead modified to execute in user mode (just like a user application), and a much smaller kernel (a hypervisor) is executed in privileged mode in its place. This is a very complex approach that is impractical and ineffective in a number of respects, including the time and effort to port both kernels to different devices, inability for users to run actual Linux applications in secure mode, speed of execution, and so forth.

Another approach is to monitor the kernel 108 by periodically checking it while the system is running to ensure that it hasn't been modified (and hence, to ensure that any exploit code is not executing in privileged mode). One problem with this approach is that if the exploit code is able to get into the kernel 108 before being detected, it has full privileges to do anything. In that case, the malicious code can simply turn off or modify the monitoring software to avoid detection.

Various embodiments provide systems and methods for resisting malicious code in an operating system kernel using combinations of static analysis, dynamic monitoring, and memory segmentation (e.g., domain segmentation). Embodiments may include statically analyzing the kernel software to ensure that it does not execute certain sensitive instructions, including instructions to modify memory (e.g., domain) access. Certain processor architectures, such as the ARM architecture from ARM Holdings plc (Cambridge, UK), include a Domains feature. Amongst other uses, this feature enables certain sections of memory (including code that sits in that memory) to be protected from other memory in the system. In other words, code running in a particular memory segment is not aware of the existence of (and cannot read or write to) other memory segments.

In embodiments, a domain access control feature, which may be a hardware register, e.g., a DACR (Domain Access Control Register), controls which segments the currently executing code has access to. The kernel itself is not exempt from this restriction. If the DACR says that the kernel code cannot see or touch a particular piece of memory, it is unable to. Of course, the kernel code could still be modified by malicious code to cause the kernel to instruct the DACR to modify domain access and give the kernel access to whatever memory segments it wants. To do that, a particular instruction (hereafter referred to as the DACR Modification Instruction) is utilized.

In embodiments, the computing device is configured such that the kernel is unable to execute the DACR Modification Instruction. As shown in FIG. 2, the computing device 100 may include kernel monitoring software 202 that is partitioned from the rest of the kernel 108 in a secure segment 204, such that the kernel monitoring software 202 may operate alongside, yet is hidden from, the rest of the kernel 108. Any exploit in the kernel 108 may then be detected, and the kernel may not turn off that detection. Instructions to modify the DACR 206 may only come from the secure segment 204. Essentially, the DACR Modification Instruction is “super privileged”—i.e. a privileged instruction that not even the kernel 108 can execute.
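
For illustration only, the following is a minimal sketch of the kind of accessor involved, assuming the ARMv6/ARMv7 CP15 interface in which the DACR is read and written with MRC/MCR p15, 0, <Rt>, c3, c0, 0; the function names are hypothetical, and in the embodiments described above such code would be linked only into the secure segment 204, so that the rest of the kernel 108 never contains the DACR Modification Instruction:

    /* Hypothetical CP15 accessors for the Domain Access Control Register. */
    static inline unsigned long read_dacr(void)
    {
        unsigned long dacr;
        /* MRC p15, 0, <Rt>, c3, c0, 0 : read the DACR */
        asm volatile("mrc p15, 0, %0, c3, c0, 0" : "=r" (dacr));
        return dacr;
    }

    static inline void write_dacr(unsigned long dacr)
    {
        /* MCR p15, 0, <Rt>, c3, c0, 0 : the "DACR Modification Instruction" */
        asm volatile("mcr p15, 0, %0, c3, c0, 0" : : "r" (dacr) : "memory");
    }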

In general, embodiments may include systems and methods for analyzing a software component corresponding to an operating system kernel to detect the presence of one or more specific instructions. The one or more specific instructions may include, but are not limited to, a DACR Modification Instruction. In other embodiments, the one or more specific instructions may be a series or sequence of instructions, which may depend on the particular architecture utilized (e.g., INTEL® x86 or PowerPC architecture as opposed to ARM). The PowerPC architecture, for example, includes a feature whereby a sequence of instructions can set up a lookup index, place that index into memory, and then change the IBAT registers. Embodiment systems and methods may analyze the kernel code, using both off-line scanning and dynamic monitoring, to detect that sequence of instructions.

FIG. 3 illustrates an embodiment method 300 of protecting a kernel against malicious code. In embodiments, the processor 102 may be configured to statically analyze the kernel software in block 302 to ensure that it does not execute certain sensitive instructions, including instructions to modify memory (e.g., domain) access. If it does (i.e., determination block 304=“Yes”), the offending instructions may be removed in block 306. Certain user applications may also be analyzed in block 314 to ensure that the memory modification instruction (e.g., DACR Modification Instruction) is not present. If it is (i.e., determination block 316=“Yes”), then the application software may be isolated and not permitted to operate in block 318. In block 308, a memory segmentation feature (e.g., Domains) may be utilized to lock down the memory that contains the kernel so that it cannot be modified. In block 310, the kernel security monitor code may be segmented into a separate memory partition, e.g., using Domains. In block 312, the kernel security monitor code may dynamically monitor the kernel and certain secure user applications for any attempt at modification, without the risk that the monitoring software itself will be modified.
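
A non-limiting sketch of the off-line scanning of blocks 302 and 314 is shown below. It scans a flat, little-endian image of A32 instructions for writes to the DACR (MCR p15, 0, <Rt>, c3, c0, 0); the mask/value pair reflects that encoding while ignoring the condition code and source register, and the file handling and exit codes are hypothetical simplifications rather than part of the described embodiments:

    #include <stdio.h>
    #include <stdint.h>

    /* Matches MCR p15, 0, <Rt>, c3, c0, 0 (a DACR write), ignoring the
     * condition field and the source register. */
    #define DACR_WRITE_MASK  0x0FFF0FFFu
    #define DACR_WRITE_VALUE 0x0E030F10u

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <kernel-or-module-image>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        uint32_t insn;
        unsigned long offset = 0;
        int found = 0;
        while (fread(&insn, sizeof(insn), 1, f) == 1) {
            if ((insn & DACR_WRITE_MASK) == DACR_WRITE_VALUE) {
                printf("DACR modification instruction at offset 0x%lx\n", offset);
                found = 1;
            }
            offset += 4;
        }
        fclose(f);
        /* A non-zero exit status lets a build or install script take a
         * graduated action, e.g. block 306 (remove) or block 318 (isolate). */
        return found ? 2 : 0;
    }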

In certain embodiments, the operating system kernel 108 may be modified to provide certain features, including enabling the protection core 110 to operate in a special privilege mode, to offer new system calls with which code can access protection core 110 functionality from outside this privilege space, and to perform some of its base functionality.

In overview, the kernel 108 modifications may be similar to techniques originally applied to virtualization software, and they are designed to enhance the robustness and integrity of the kernel, which in one embodiment is a Linux kernel.

Along with the protection core 110, the modifications may ensure that the kernel 108, most notably its “text” segment, cannot be modified during execution, removing a large attack vector from the device 100. The kernel's text segment is where the operating system program code resides. If this code is modified, it causes the behavior of the operating system to change, potentially removing security checks, introducing code from an adversary, or even defeating, through impersonation, some of the protection core's features. Embodiments may make novel use of the ARM hardware's Memory Management Unit (MMU), including page tables and domains, and these changes may ensure that this text cannot be modified.

A combination of off-line analysis tools and online checks by the protection core 110 may be used to ensure that the kernel 108 cannot remove these enhanced protections.

In embodiments, one implementation change to the kernel 108 is the addition of a security monitor. The security monitor may create a “secure zone” inside the kernel 108 to protect “privileged” kernel instructions and data against the rest of the kernel code.

Conceptually, the technique adds a third privilege level to the operating system (e.g., Linux). In addition to the existing kernel mode and user mode, embodiments may add a “super-privileged” mode, or Secure Monitor Mode (SMM). The security monitor may move a task up or down by one level at a time to carry out operations in different levels.

To isolate the security monitor, its code and data may be located in a secure section distinct from the rest of the kernel, and an API may be used to enter and exit SMM, just as Linux uses system calls (and other exceptions) to switch between user mode and kernel mode.

In embodiments, the security monitor may use ARM domain memory access control, combined with an off-line ARM instruction scan tool. The technology may be implemented on a variety of architectures, including on any ARM core with the domain feature, and on any operating system. Furthermore, although reference is made to certain architectures, including the ARMv6 and ARMv7 architectures from ARM Holdings, plc (Cambridge, UK), both of which may be utilized with the present invention, it will be understood that embodiments may use other processor architectures, now known or later developed. For example, the present methods and systems may be implemented using a MIPS, Intel, or any other architecture, and may further be implemented in hardware, such as with an FPGA.

In embodiments, the security monitor may protect critical parts of the kernel against code that may be able to run in privileged mode by essentially creating a super-privileged layer above the normal kernel, which is relegated to a less privileged layer (but still higher than user mode). Only a small amount of code can run in the super-privileged mode, and this code monitors the rest of the kernel code and protects the kernel from malicious attacks or kernel bugs.

The security monitor utilizes the access domain feature of the processor combined with modification to the operating system's memory mapping to create a “super-privileged” layer in the kernel, which may be without the drawbacks of virtualization.

Embodiments may include changes to the memory mapping performed by the operating system. There are a number of security issues in existing systems (e.g., Linux-based systems) that may be addressed by the various embodiments.

For example, in a conventional Linux-based system, the entire kernel direct-mapping area that includes the kernel text and data segments has full access permission in privileged mode, and thus has no defense against code running in privileged mode, which includes Loadable Kernel Modules (LKMs). This means that any user who can access the file system where the LKM binaries are stored is able to change kernel code and data. There are many explanations for why LKMs exist and for the security compromises they bring to the system. Linux assumes the LKMs can be “trusted,” and with this in mind, together with the fact that the direct mappings are created at boot time using ARM assembly, it is not surprising that Linux implements the mapping this way.

This mapping is created at the same time that the one-to-one physical mapping is created, which is before the Memory Management Unit (MMU) is enabled. It is almost impossible to allocate a second level page table at that time without breaking the memory management system Linux later uses. Therefore, the minimum security section unit is 1 MB, which cannot accommodate the Linux kernel sections without creating memory fragmentation or a large increase in kernel image size.

Also, it is somewhat inefficient that although most ARM V6 cores support dual page tables and split address spaces, Linux chooses to use the same mechanism as it does on ARM V5 instead of the mechanism used on ARM V7. This may be for reasons of backwards compatibility with earlier ARM V6 cores. However, a consequence of such a mechanism is duplicated kernel address mappings in all address spaces. This redundancy makes it difficult to protect these mappings from malicious processes.

In addition, in existing Linux-based systems, the PTEs (2nd level page tables) are allocated from SLAB, which could be any address in the RAM, which makes it difficult to monitor the memory mappings.

In embodiments, the security monitor feature of the kernel 108 may include several memory mapping rules that provide increased security. In one embodiment, all critical data may be placed in a secure segment. The secure monitor code and data may be stored in a contiguous memory segment with domain set to be domain_monitor. The kernel address space 1st level page table (section table) and 2nd level page table may also be allocated from this segment. This segment may also include the .init section, which is not freed after boot time, since the monitor checks the integrity of the kernel boot code as well.

An exemplary embodiment of a memory layout of a kernel 108 with a security monitor is illustrated in Table 1:

TABLE 1

Range                  Section Name  Notes                                               Access Permissions
0x80000000-0x80004000  (none)        Not used                                            Domain monitor, R/W
0x80004000-0x80008000  (none)        Used as pgd (1st level page table) for kernel       Domain monitor, R/W
                                     address space
0x80008000-0x80008400  .text.head    Assembly boot head code                             Domain monitor, R/X
0x80008400-0x80300000  .init         All “.secure_monitor.*” sections, including         Domain monitor, R/W/X
                                     kernel 2nd page table
0x80300000-0x80783000  .text         Text segment                                        Domain kernel, R/X
0x80783000-0x809e4000  .rodata       Read only data                                      Domain kernel, Read only
0x809e4000-0x80a24ee0  .data         Data section                                        Domain kernel, R/W
0x80a24ee0-0x80a2a000  .bss          BSS section                                         Domain kernel, R/W
0x80a2a000-0x8d200000  (none)        Free RAM, will be added in free list, used by SLAB  Domain kernel, R/W

As the table shows, the area 0x80000000-0x80300000 is the monitor segment, mapped to domain_monitor. When the Domain Access Control Register (DACR) is set so that domain_monitor has no access permission, any access from any mode (privileged mode or user mode) will generate a domain fault. Also, the kernel PGD (1st level page table) and PTE (2nd level page table) are always allocated from this segment. The absolute size of the PTE is calculated as:


PTE size = 0x100000 (1 MB section size) / 0x1000 (4 KB page size) × 4 (bytes per entry) = 0x400

Total PTE size = (0x100000000 (4 GB memory bus limit) − 0x8d800000 (VMALLOC_START)) / 0x100000 (1 MB) × 0x400 (PTE size) = 0x1CA000

Therefore, the size of the monitor segment needs to be larger than 0x1CA000, and should be aligned on a 1 MB boundary, since a domain may only be set in the 1st level page table entry. Considering the code and data sections for the security monitor and init, 3 MB is a typical size. If the user address space page tables are also allocated from this segment, the segment may be larger. However, there are typically only a small number of PTEs in kernel address space; therefore, for efficient memory usage, the system can choose to reduce the size of the security monitor segment, keeping in mind the 1 MB minimum.

In embodiments, the security monitor may also use a split page table. The address space switch may be similar to that in an ARM V7 system. Specifically, the Translation Table Base Control Register (TTBCR) may be set so that Modified Virtual Address (MVA) translation less than 0x80000000 goes to Translation Table Base Register 0 (TTBR0), and MVA translation equal to or greater than 0x80000000 goes to Translation Table Base Register 1 (TTBR1). There is no need, therefore, to copy kernel mappings across address spaces, and kernel address space may be located on a fixed physical address.
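
A minimal sketch of that configuration follows, assuming the ARMv6/ARMv7 short-descriptor translation scheme in which TTBCR.N = 1 places the TTBR0/TTBR1 boundary at 0x80000000; the CP15 encodings shown are the standard ones, the function name is hypothetical, and barrier and TLB maintenance operations are omitted:

    /* Hypothetical boot-time setup: user mappings (< 0x80000000) translate
     * through TTBR0, kernel mappings (>= 0x80000000) through TTBR1, so the
     * kernel tables need not be copied into every address space. */
    static inline void setup_split_page_tables(unsigned long kernel_pgd_phys)
    {
        /* TTBCR.N = 1: VAs with bit 31 set are translated through TTBR1. */
        asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (1UL));
        /* Kernel address space tables at a fixed physical address. */
        asm volatile("mcr p15, 0, %0, c2, c0, 1" : : "r" (kernel_pgd_phys));
    }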

In various embodiments, the kernel address space (excluding the security monitor segment described above) may only have three privileged access permissions, and no user access permissions. These may include privileged Read Only, which may be for the rodata segment; privileged Readable and Writable, which may be for the data segment, including the kernel address space 1st and 2nd level page tables; and privileged Readable and Executable, which may be for the text segment. In one embodiment, no permission that is both Writable and Executable is allowed, which can help prevent self-modifying code in kernel mode, such as from an LKM.

In embodiments, only the security monitor has access to kernel 1st and 2nd level page tables, since the page tables are allocated from the security monitor segment. Therefore, change of access permission and physical address is not possible for privileged but non-security monitor kernel code or user code.

In summary, in one embodiment of a security monitor, there is a special segment in kernel address space, having an associated domain (e.g., domain_monitor) other than the domain_kernel, domain_user or domain_io of normal Linux mappings. The 1st level and 2nd level page tables for kernel address space, as well as the 1st level and 2nd level page tables for user address space, may be allocated from the monitor segment. All security monitor text and data sections, as well as the Linux init text and data sections, may be located in the monitor segment. Modification of the Domain Access Control Register (DACR) may only be done by code in the monitor segment and the memory abort exception handler, as guaranteed by the off-line analysis tools described above; only kernel and LKM binaries verified by the off-line analysis may be installed on the system. Writes to the TTBRs and the TTBCR may only be done by the monitor segment, and writes to the Instruction Fault Status Register (IFSR) and the Data Fault Status Register (DFSR), as well as disabling of the MMU, Translation Lookaside Buffer (TLB) and Cache debug instructions, may only be allowed in the Secure Monitor Mode (SMM). Except for the monitor segment, all kernel address space mappings (e.g., addresses ≥ 0x80000000) do not allow writable and executable access permission at the same time. In some embodiments, the kernel text pages are not writable, and kernel data pages are not executable.

Entering “super-privileged” mode (i.e., Secure Monitor Mode) may be done by setting domain_monitor in DACR to “client access” (only allow access in privileged mode), and exiting SMM may be done by setting domain_monitor in DACR to “no access” and setting IFSR and DFSR to an invalid value.
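
A sketch of those two operations is given below, assuming the standard two-bit DACR field encodings (00 = no access, 01 = client access) and a hypothetical domain number DOMAIN_MONITOR; read_dacr() and write_dacr() are the hypothetical CP15 accessors sketched earlier:

    extern unsigned long read_dacr(void);
    extern void write_dacr(unsigned long dacr);

    #define DOMAIN_MONITOR      2    /* hypothetical domain number */
    #define DACR_MASK(dom)      (0x3UL << (2 * (dom)))
    #define DACR_CLIENT(dom)    (0x1UL << (2 * (dom)))

    static inline void smm_enter(void)
    {
        unsigned long dacr = read_dacr();
        dacr = (dacr & ~DACR_MASK(DOMAIN_MONITOR)) | DACR_CLIENT(DOMAIN_MONITOR);
        write_dacr(dacr);    /* monitor domain: client access */
    }

    static inline void smm_exit(void)
    {
        unsigned long dacr = read_dacr();
        dacr &= ~DACR_MASK(DOMAIN_MONITOR);
        write_dacr(dacr);    /* monitor domain: no access */
        /* The described embodiment also writes an invalid value to the IFSR
         * and DFSR at this point. */
    }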

During operating system (e.g., Linux) boot-up, init code may first enter the SMM in the security monitor segment, may set up the enhanced kernel page tables and initialize the SMM, and may exit the SMM before the rest of the kernel is initialized. After this, no privileged kernel code or userland code may run in the SMM unless the kernel calls routines in the monitor segment or accesses the secure monitor segment. There are generally only a limited number of SMM entries in the kernel, and all entries may be registered and may be checked at run-time when switching into SMM. In some embodiments, it is not possible to switch to SMM directly from user mode.

On a domain fault, the faulting PC may be checked against registered SMM entries and only registered entries may be allowed. On entry of SMM for memory mapping, there may be code to check physical addresses in the critical area that includes the secure segment, exception area and any other area that should not be remapped.

The API may include at least two calls for the security monitor, including calls for entering and exiting the Secure Monitor Mode (SMM). When the operating system boots, before it turns on the MMU, the Domain Access Control Register (DACR) may be set to allow all domain access in order to perform the initialization of the kernel and monitor. After the kernel memory map has been set up for the security monitor and kernel initialization is finished, the kernel may call the exit SMM routine for the first time to disable access to the monitor domain segment by the rest of the kernel and user code.

The code for exiting the Secure Monitor Mode should be located within the secure monitor segment (e.g., domain_monitor) in kernel address space.

The goal of running the exit secure monitor function is generally to execute non-SMM code, which may be untrusted. In this function, while the DACR is still set to allow access to domain_monitor, execution remains in SMM. However, in one embodiment, the exit secure mode function includes changing the domain access rights, such as by writing to the DACR followed by an ISB (Instruction Synchronization Barrier), to restrict access to the secure monitor segment. After the DACR write and the ISB, the MMU's view of the domain access rights has been updated, and a prefetch of the next instruction address (e.g., return_addr, which is located in the monitor segment) by the MMU results in a prefetch abort, and hence a branch to the prefetch abort handler.

In embodiments, the exception vector area and exception handler may not be in the monitor segment, since if they were, the branch to the exception vector would itself trigger a new round of prefetch aborts and the CPU would enter an infinite loop of generating prefetch aborts. Note that this situation will not compromise the Secure Monitor Mode, since self-modifying kernel text segments are not possible, and modifying the IFSR or DFSR can only be done in SMM; therefore, exploit code cannot make the PC jump to the exception handler and safely return to non-SMM code.

One challenge is how to return from the exception handler to the next instruction address and then continue to non-SMM kernel code after a genuine prefetch abort triggered by the exit secure mode function. In one embodiment, the solution may include using a continuation function. Before changing the DFSR, the monitor may store the continuation function pointer for this task. The prefetch abort handler checks whether the IFSR indicates a domain fault, and if so enters a domain fault handler. The domain fault handler checks whether the faulting PC is the ISB instruction in the exit secure mode function, and if so branches to the continuation function.
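
One possible shape of those checks is sketched below with hypothetical names; the fault-status decoding follows the ARMv6 short-descriptor FSR encodings for domain faults, and the per-task storage of the continuation pointer is an assumption:

    #define FSR_FS_MASK        0x40FUL   /* fault status bits [10] and [3:0] */
    #define FS_DOMAIN_SECTION  0x009UL   /* section domain fault */
    #define FS_DOMAIN_PAGE     0x00BUL   /* page domain fault */

    /* Hypothetical symbols: the address of the ISB in the exit secure mode
     * function, and the continuation stored before the DACR/DFSR change. */
    extern unsigned long exit_smm_isb_addr;
    extern void (*current_smm_continuation)(void);

    void prefetch_abort_domain_check(unsigned long ifsr, unsigned long fault_pc)
    {
        unsigned long fs = ifsr & FSR_FS_MASK;
        if (fs != FS_DOMAIN_SECTION && fs != FS_DOMAIN_PAGE)
            return;    /* not a domain fault; handle as a normal abort */

        if (fault_pc == exit_smm_isb_addr && current_smm_continuation) {
            /* Genuine exit from SMM: resume the non-SMM kernel code. */
            current_smm_continuation();
        }
        /* Otherwise the fault is an illegitimate access to the monitor domain. */
    }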

Before writing the DACR, however, four registers have been pushed onto the kernel stack as part of the C EABI function entry code, to save other registers used in this function as well as the return address for the calling function. The domain fault handler fixes the return program counter (PC); however, the kernel stack pointer (SP) has changed, and the restore-registers-and-return instruction ldm sp, {fp, sp, pc} is bypassed by the monitor's mechanism. The calling function expects the stack to point to sp_kernel(1), but it actually points to sp_kernel(2). The stack must therefore be repaired before branching to the continuation function.

Since the number of registers and which set of registers are pushed onto the stack vary with the ABI of the tool chain and the context in which the exit secure monitor function is invoked, any change of the function calling arguments, or even of the code in this function, would cause the compiler to push a different set of registers. One solution is to implement the exit secure monitor function in assembly. This ensures the ABI is satisfied, and the code can easily read registers from the context in the domain fault handler and fix the stack as needed. After returning from the exception handler, the frame pointer (fp) and stack pointer (sp), along with the program counter (pc), will be set to the correct values.

The exit secure monitor function may be set in the security monitor segment by a Linux linker script. In embodiments, there may be only one exit routine of the SMM throughout the kernel so that the domain fault handler can always check if the faulting PC is the address of the instruction after the ISB in the exit secure mode function.
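
For illustration, one way such placement may be expressed is with compiler section attributes, assuming GCC-style attributes and a hypothetical section name that the linker script then collects into the security monitor segment:

    /* Hypothetical section names; a linker script rule such as
     *   .secure_monitor : { *(.secure_monitor.text) *(.secure_monitor.data) }
     * would gather them into the monitor segment. */
    #define __secure_monitor_text __attribute__((section(".secure_monitor.text")))
    #define __secure_monitor_data __attribute__((section(".secure_monitor.data")))

    /* The single exit routine of the SMM lives in the monitor segment, so the
     * instruction fetched after its DACR write and ISB always faults at a
     * known address. */
    void __secure_monitor_text exit_secure_monitor(void (*continuation)(void));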

After exiting the Secure Monitor Mode (SMM), the operating system (e.g., Linux) may run in “normal” privileged mode, or user mode. There may be cases, however, where the operating system re-enters SMM from kernel mode.

For example, the SMM may be re-entered for monitoring kernel code behavior. These entries to the SMM may be in exception handlers, and the security rules for the kernel can be added here. For example, rules allowing or forbidding system calls can be added in the entry of SMM in swi_handler. Kernel integrity checking can be added into the SMM entry code in irq_handler, etc.

The SMM may also be re-entered for performing “super-privileged” operations. In embodiments, the kernel needs “super privileged” instructions to do kernel work, for example, changing TTBR0 for switch_mm( ), or changing the DACR for accessing user pages. When the kernel needs to use these instructions, it may re-enter SMM.

The SMM may also be re-entered for accessing the security monitor segment. The kernel may sometimes need to access memory in the security monitor segment. For example, when adding kernel mappings, Linux Memory Management (MM) code may need to write to a kernel page table.

As a result, there could be more than one entry point into SMM within the kernel, and these entries may be tracked by monitor entry code. For each entry point, there may be an API function defined inside the security monitor segment that actually does the job. In one embodiment, all the non-SMM kernel code needs to do is call the API function.

The entering of SMM may also generate a prefetch abort exception, and a check may be made on the faulting PC to see whether the domain fault was caused by branching from a legitimate registered SMM entry. The check may be a linear search of an entry table in a read-only data segment in an exception handler section. The table may be generated at compile time, and all return virtual addresses of declared SMM entries may be added to that table. In embodiments, the table may not be located in the security monitor segment, since the domain fault exception handler cannot be running in SMM on ARM V6, and hence cannot access the security monitor segment. Thus, all entries to SMM in non-trusted kernel code may register themselves in the monitor entry table, and any other domain fault on the monitor domain segment would be an illegitimate access.
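
A sketch of the entry check, with hypothetical names, is shown below; the table itself would be emitted at compile time into a read-only data area reachable by the exception handler, as described above:

    struct smm_entry {
        unsigned long return_addr;   /* virtual address of a registered entry */
    };

    /* Hypothetical compile-time table, located outside the monitor segment so
     * the domain fault handler (which is not in SMM on ARM V6) can read it. */
    extern const struct smm_entry smm_entry_table[];
    extern const unsigned int smm_entry_count;

    int smm_entry_is_registered(unsigned long fault_pc)
    {
        unsigned int i;
        for (i = 0; i < smm_entry_count; i++)
            if (smm_entry_table[i].return_addr == fault_pc)
                return 1;
        return 0;    /* any other domain fault is an illegitimate access */
    }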

In Linux, different architectures have different degrees of hardware support for the kernel accessing user memory. This may be important for the performance of an operating system. In the ARM V6 architecture, the instructions LDRT and STRT provide kernel access to memory in user mode (i.e., the MMU treats memory accesses issued by LDRT/STRT as if they were issued in user mode, even though the CPSR mode bits are set to a non-user mode). The kernel uses these instructions to access user address spaces directly without generating page faults if the page is mapped in the user space. However, this is extremely dangerous for kernel implementations. If kernel code unintentionally uses LDRT/STRT on an incorrect but nevertheless mapped user address, no page fault is generated, and the user address space could be contaminated by kernel bugs.

Linux on ARM V6 enables the use of the Domain Access Control Register (DACR) as a mechanism to prevent the kernel from accessing user memory. All user address space is associated with a different domain from the kernel space. If the kernel intentionally accesses user address space, it changes the DACR to allow kernel access to the user address space, which has domain_user. If kernel code unintentionally uses LDRT/STRT on user address space, domain access control denies the access, and a domain fault is generated.

In embodiments implementing the security monitor, modifying the DACR may be a “super-privileged” instruction; therefore, it has to be moved into the security monitor segment, and is tracked and monitored by SMM. In this embodiment, non-SMM kernel code cannot access user address space without registering with the security monitor. Self-modification of user address space by the kernel can also be prevented as long as the user page table has the domain set correctly (this may easily be guaranteed by putting the 1st and 2nd level user page tables in the security monitor segment, for instance).

In embodiments, the kernel may prevent nested entry into SMM. Embodiments may allow a security monitor entry function to call another secure monitor entry. In this case, the second entry of SMM does not cause a domain fault, since the code is already in SMM, but its exit secure monitor routine does exit SMM, and the subsequent return to the calling code is then executed in non-SMM mode. When branching to the exit secure monitor function for the second time, a domain fault is generated, since the exit secure monitor function is in the monitor segment. However, the return address of the second exit secure monitor call is not a registered SMM entry, and it is detected by the domain fault handler and treated as an error. Therefore, there may be code at the entry of SMM to check the SMM mode and not perform the exit secure monitor function if it is determined that the entry is a nested entry.

The Secure Monitor Mode (SMM) may provide a framework or a secure vault for critical kernel data and code. Security algorithms and checks can be placed inside this secure area to monitor the rest of the kernel and user behavior.

In embodiments, security-related global data can be placed in the SMM segment, and then can be used by security algorithms. For example, if any of the algorithms implements a state machine, the state (or other context information) can reside in the SMM segment and be safe from non-SMM code.

One important state in the various embodiments is the current Security Mode of the device 100. Upon calling a special system call to switch security modes, system call monitoring code may change the security mode according to the parameter of this system call. The current security policy may then change depending on the new security mode. This global variable should not be accessed by non-SMM code, and SMM protection mechanisms can guarantee that.

In embodiments, system call monitoring resides in the entry of SMM in an interrupt handler (e.g., swi_handler). All Linux system calls must pass the system call security policy check to reach the OS (e.g., Linux) system call handler. The system call monitoring code has access to the kernel context of the SWI exception and the userland parameters of the system call, and it can perform any security policy check on both kernel space and user space. The security policy can be defined at compile time and built into the SMM segment as well (with proper signature verification), as it is a data structure containing a function pointer to the function that performs the check, together with other policy data associated with the function. There may be a static array of security policies mapping to each OS (e.g., Linux) system call. At run time, the check function defined for the system call number may be called to perform the policy check for a given system call, and if the check fails, the interrupt (SWI) may return directly with an error number and never actually reach the Linux system call handler, which is located outside the SMM segment.
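
One possible shape of the policy map and its use at the SWI entry to SMM is sketched below; the structure mirrors the description above (a check function pointer plus associated policy data), while the type and function names, including the dispatch helper, are hypothetical:

    struct pt_regs;    /* exception register frame (declaration only) */

    struct syscall_policy {
        /* Returns 0 if the call is allowed, or a negative error number. */
        int (*check)(const struct pt_regs *regs, const void *policy_data);
        const void *policy_data;
    };

    /* Static array indexed by system call number, built into the SMM segment. */
    extern const struct syscall_policy syscall_policy_map[];
    extern const unsigned int nr_syscalls;

    extern long dispatch_linux_syscall(unsigned int nr, struct pt_regs *regs);

    long smm_syscall_gate(unsigned int nr, struct pt_regs *regs)
    {
        if (nr < nr_syscalls && syscall_policy_map[nr].check) {
            int err = syscall_policy_map[nr].check(regs,
                                                   syscall_policy_map[nr].policy_data);
            if (err)
                return err;    /* the SWI returns directly; the Linux system
                                  call handler outside SMM is never reached */
        }
        return dispatch_linux_syscall(nr, regs);
    }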

In embodiments, at least two mechanisms may be used to prevent the run-time modification of kernel code, rodata, and security monitor code: address space protection and run-time hashing. In embodiments, non-SMM code may not modify kernel address space mappings, as the only way to do so is through the SMM entry in a mapping function (e.g., vmap). In the SMM entry of the mapping function, the code forbids any mapping (including DMA mapping) of physical frames that overlap the critical area, and thus prevents any writes to the physical RAM of that area.
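
A sketch of the overlap check performed at the SMM entry of the mapping function follows, assuming hypothetical symbols for the physical bounds of the critical area:

    /* Hypothetical bounds covering the monitor segment, kernel text/rodata
     * and the exception area. */
    extern unsigned long critical_area_phys_start;
    extern unsigned long critical_area_phys_end;

    /* Returns non-zero if mapping [phys, phys + size) would overlap the
     * critical area and must therefore be refused. */
    int mapping_overlaps_critical_area(unsigned long phys, unsigned long size)
    {
        unsigned long end = phys + size;
        return phys < critical_area_phys_end && end > critical_area_phys_start;
    }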

Run-time hashing may include a redundancy check to ensure the first mechanism has not failed in some way, or that an adversary has not physically altered a portion of memory. This mechanism may involve choosing a random (e.g., 4 KB) page and performing a cryptographic hash (e.g., SHA-256) on the page. The system may compare the digest to statically generated digest values, and any difference may result in a system panic or other exception.
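
A simplified sketch of the redundancy check is given below; the SHA-256 primitive, the statically generated digest table, the page-selection source and the panic hook are all hypothetical placeholders rather than a particular kernel API:

    #include <string.h>
    #include <stdlib.h>

    #define PAGE_SIZE    4096
    #define DIGEST_SIZE  32

    /* Hypothetical helpers. */
    extern void sha256(const void *data, unsigned long len,
                       unsigned char out[DIGEST_SIZE]);
    extern const unsigned char expected_digest[][DIGEST_SIZE];
    extern void security_panic(const char *why);

    void integrity_check_random_page(const unsigned char *region_base,
                                     unsigned long region_pages)
    {
        unsigned long page = (unsigned long)rand() % region_pages;
        unsigned char digest[DIGEST_SIZE];

        /* Hash one randomly chosen 4 KB page and compare it to the digest
         * generated off-line for that page. */
        sha256(region_base + page * PAGE_SIZE, PAGE_SIZE, digest);
        if (memcmp(digest, expected_digest[page], DIGEST_SIZE) != 0)
            security_panic("kernel text/rodata or monitor code modified");
    }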

Since run-time integrity checking is processor-intensive and may impact cache and system performance in general, the frequency may be configurable in the security policy definition in various embodiments.

The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.

The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.

The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function.

In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.

Claims

1. A method of resisting malicious code in a computing device, comprising:

analyzing a software component corresponding to an operating system kernel, prior to executing the software component, to detect the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

2. The method of claim 1, wherein the one or more specific instructions comprise one or more instructions to change a memory access permission.

3. The method of claim 1, wherein the one or more specific instructions comprise one or more instructions to modify or turn off security monitoring software.

4. The method of claim 1, wherein taking graduated action comprises at least one of removing the detected one or more specific instructions from the kernel software, limiting access to peripheral devices, limiting access to certain kernel functions, and shutting the computing device down.

5. The method of claim 1, further comprising:

analyzing a software component corresponding to a user application, prior to executing the user application, to detect the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

6. The method of claim 1, wherein the software component comprises a loadable module.

7. The method of claim 1, further comprising:

providing a first secure memory segment; and
storing the software component corresponding to the operating system kernel in the first secure memory segment.

8. The method of claim 7, further comprising:

preventing the software component corresponding to the operating system kernel from being modified by processes running in a privilege mode or a user mode while the component is stored in the first secure memory segment.

9. The method of claim 7, further comprising:

providing a second secure memory segment, distinct from the first secure memory segment; and
storing a software component corresponding to a security monitor, distinct from a hypervisor, in the second secure memory segment.

10. The method of claim 9, further comprising:

running the software component corresponding to a security monitor in a secure monitor mode (SMM) having a higher privilege than a kernel mode and a user mode on the computing device.

11. The method of claim 10, wherein instructions to change a memory access permission may only be issued from a process running in secure monitor mode (SMM).

12. The method of claim 9, further comprising:

monitoring the software component corresponding to the operating system kernel using the security monitor.

13. The method of claim 12, wherein the monitoring step comprises:

detecting the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

14. The method of claim 9, further comprising:

monitoring a software component corresponding to a user application using the security monitor.

15. The method of claim 14, wherein the monitoring step comprises:

detecting the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

16. A computing device, comprising:

a memory; and
a processor coupled to the memory and configured with processor executable instructions to perform operations, comprising: analyzing a software component corresponding to an operating system kernel, prior to executing the software component, to detect the presence of one or more specific instructions; and taking a graduated action in response to the detection of one or more specific instructions.

17. The device of claim 16, wherein the one or more specific instructions comprise one or more instructions to change a memory access permission.

18. The device of claim 16, wherein the one or more specific instructions comprise one or more instructions to modify or turn off security monitoring software.

19. The device of claim 16, wherein taking graduated action comprises at least one of removing the detected one or more specific instructions from the kernel software, limiting access to peripheral devices, limiting access to certain kernel functions, and shutting the computing device down.

20. The device of claim 16, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

analyzing a software component corresponding to a user application, prior to executing the user application, to detect the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

21. The device of claim 16, wherein the software component comprises a loadable module.

22. The device of claim 16, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

providing a first secure memory segment; and
storing the software component corresponding to the operating system kernel in the first secure memory segment.

23. The device of claim 22, wherein the processor is configured with processor executable instructions to perform operations further comprising:

preventing the software component corresponding to the operating system kernel from being modified by processes running in a privilege mode or a user mode while the component is stored in the first secure memory segment.

24. The device of claim 22, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

providing a second secure memory segment, distinct from the first secure memory segment; and
storing a software component corresponding to a security monitor, distinct from a hypervisor, in the second secure memory segment.

25. The device of claim 24, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

running the software component corresponding to a security monitor in a secure monitor mode (SMM) having a higher privilege than a kernel mode and a user mode on the computing device.

26. The device of claim 24, wherein instructions to change a memory access permission may only be issued from a process running in secure monitor mode (SMM).

27. The device of claim 24, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

monitoring the software component corresponding to the operating system kernel using the security monitor.

28. The device of claim 27, wherein the monitoring comprises:

detecting the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.

29. The device of claim 24, wherein the processor is configured with processor-executable instructions to perform operations further comprising:

monitoring a software component corresponding to a user application using the security monitor.

30. The device of claim 29, wherein the monitoring comprises:

detecting the presence of one or more specific instructions; and
taking a graduated action in response to the detection of one or more specific instructions.
Patent History
Publication number: 20120216281
Type: Application
Filed: Dec 9, 2011
Publication Date: Aug 23, 2012
Applicant: PCTEL Secure LLC (Bloomingdale, IL)
Inventors: Eric Ridvan Uner (Carpentersville, IL), Benjamin James Leslie (Enmore), Joshua Scott Matthews (Baltimore, MD), Changhua Chen (Carlingford), Thomas Smigelski (Lake Zurich, IL), Anthony Kobrinetz (Hoffman Estates, IL)
Application Number: 13/315,531
Classifications
Current U.S. Class: Intrusion Detection (726/23)
International Classification: G06F 21/24 (20060101);