METHOD AND SYSTEM ACHIEVING INDIVIDUALIZED PROTECTED SPACE IN AN OPERATING SYSTEM
Aspects for achieving individualized protected space in an operating system are provided. The aspects include performing on demand hardware instantiation via an ACE (an adaptive computing engine), and utilizing the hardware for monitoring predetermined software programming to protect an operating system.
The present invention relates to robust operating system protection.
BACKGROUND OF THE INVENTION
As is generally understood in computing environments, an operating system (O/S) acts as the layer between the hardware and the software, providing several important functions. For example, the functionality of an O/S includes device management, process management, communication between processes, memory management, and file systems. Further, certain utilities are standard for operating systems and allow common tasks to be performed, such as file access and organization operations and process initiation and termination.
Within the O/S, the kernel is responsible for all other operations and controls operation following the initialization functions performed by the O/S upon boot-up. The traditional structure of a kernel is a layered system. Some operating systems, such as the Windows NT operating system, use a micro-kernel to minimize the size of the kernel while maintaining a layered system.
While the typical structure provides a well-understood model for an operating system, some problems remain. One such problem is the potential for crashing the machine once access below the demarcation line 60 is achieved. For example, bugs in programs written to perform processes below the demarcation line, e.g., device drivers that interact with the hardware abstraction layer, protocol stacks between the kernel and the applications, etc., can bring the entire machine down. Operating systems do provide some protection by generating exceptions in response to certain illegal actions, such as memory address violations or illegal instructions, which trigger the kernel and kill the application raising the exception. Nevertheless, operating systems remain unable to protect against this vulnerability to fatal access.
An approach to avoiding such vulnerability is to limit which software is trusted within an operating system and to utilize control mechanisms that check all other programming prior to processing. Relying on software to perform such checks, however, works against the goal, because the checking software must itself be trusted, which limits how far the amount of trusted software can be reduced. A hardware solution would be preferable but, heretofore, has been prohibitive due to the level of instantaneous hardware machine generation that would be necessary.
Accordingly, what is needed is an ability to achieve a protected operating system through on demand hardware monitoring. The present invention addresses such a need.
SUMMARY OF THE INVENTION
Aspects for achieving individualized protected space in an operating system are provided. The aspects include performing on demand hardware instantiation via an ACE (an adaptive computing engine), and utilizing the hardware for monitoring predetermined software programming to protect an operating system.
Through the present invention, all elements outside a system's own code for operating, e.g., all the stacks, abstraction layers, and device drivers, can be readily and reliably monitored. In this manner, the vulnerability present in most current operating systems due to unchecked access below the demarcation line is successfully overcome. Further, the reconfigurability of the ACE architecture allows the approach to adjust as desired with additions/changes to an operating system environment. These and other advantages will become readily apparent from the following detailed description and accompanying drawings.
The present invention relates to achieving individualized protected space in an operating system via an adaptive computing engine (ACE). The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
In a preferred embodiment, the processing core of an embedded system is achieved through an adaptive computing engine (ACE). A more detailed discussion of the aspects of an ACE is provided in co-pending U.S. patent application Ser. No. 10/384,486, entitled ADAPTIVE INTEGRATED CIRCUITRY WITH HETEROGENEOUS AND RECONFIGURABLE MATRICES OF DIVERSE AND ADAPTIVE COMPUTATIONAL UNITS HAVING FIXED, APPLICATION SPECIFIC COMPUTATIONAL ELEMENTS, filed Mar. 7, 2003, assigned to the assignee of the present invention, and incorporated by reference herein in its entirety. Generally, the ACE provides a significant departure from the prior art for achieving processing in an embedded system, in that data, control and configuration information are transmitted between and among its elements, utilizing an interconnection network, which may be configured and reconfigured, in real-time, to provide any given connection between and among the elements. In order to more fully illustrate the aspects of the present invention, portions of the discussion of the ACE from the application incorporated by reference are included in the following.
The controller 120 is preferably implemented as a reduced instruction set (“RISC”) processor, controller or other device or IC capable of performing the two types of functionality discussed below. The first control functionality, referred to as “kernel” control, is illustrated as kernel controller (“KARC”) 125, and the second control functionality, referred to as “matrix” control, is illustrated as matrix controller (“MARC”) 130.
In a preferred embodiment, the various computational elements 250 are designed and grouped together into the various reconfigurable computation units 200. In addition to computational elements 250 which are designed to execute a particular algorithm or function, such as multiplication, other types of computational elements 250 are also utilized in the preferred embodiment.
With the various types of different computational elements 250 which may be available, depending upon the desired functionality of the ACE 106, the computation units 200 may be loosely categorized. A first category of computation units 200 includes computational elements 250 performing linear operations, such as multiplication, addition, finite impulse response filtering, and so on. A second category of computation units 200 includes computational elements 250 performing non-linear operations, such as discrete cosine transformation, trigonometric calculations, and complex multiplications. A third category of computation units 200 implements a finite state machine, such as computation unit 200C.
The ability to configure the elements of the ACE relies on a tight coupling (or interdigitation) of data and configuration (or other control) information within one, effectively continuous stream of information. The continuous stream can be characterized as including a first portion that provides adaptive instructions and configuration data and a second portion that provides data to be processed. This coupling or commingling of data and configuration information, referred to as a “silverware” module, helps to enable real-time reconfigurability of the ACE 106 and, in conjunction with the real-time reconfigurability of heterogeneous and fixed computational elements 250 to form different and heterogeneous computation units 200 and matrices 150, enables the ACE 106 architecture to have multiple and different modes of operation. For example, when included within a hand-held device, given a corresponding silverware module, the ACE 106 may have various and different operating modes as a cellular or other mobile telephone, a music player, a pager, a personal digital assistant, and other new or existing functionalities. In addition, these operating modes may change based upon the physical location of the device; for example, when configured as a CDMA mobile telephone for use in the United States, the ACE 106 may be reconfigured as a GSM mobile telephone for use in Europe.
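For illustration only, the following C sketch shows one way the commingled stream described above might be laid out in memory: a short header, a configuration portion, and a data portion that becomes available as soon as configuration completes. The structure and field names are assumptions made for this sketch and are not the actual silverware format.

```c
/* Hypothetical sketch only: one possible in-memory layout for a "silverware"
 * module, illustrating the commingling of configuration information and data.
 * Field names and sizes are illustrative assumptions, not the ACE format. */
#include <stdint.h>

typedef struct {
    uint32_t module_id;      /* identifies the target operating mode        */
    uint32_t config_words;   /* length of the configuration portion         */
    uint32_t data_words;     /* length of the data portion                  */
    uint32_t resource_flags; /* declared needs: DMA, interrupts, I/O, ...   */
} silverware_header_t;

/* The stream itself: header, then adaptive instructions and configuration
 * information for the computational elements, then the data to be processed,
 * available immediately once configuration completes. */
typedef struct {
    silverware_header_t header;
    uint32_t            words[];  /* config_words + data_words entries      */
} silverware_module_t;

static inline const uint32_t *
silverware_data(const silverware_module_t *m)
{
    /* The data portion begins right after the configuration portion. */
    return &m->words[m->header.config_words];
}
```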
As an analogy, for the reconfiguration possible via the silverware modules, a particular configuration of computational elements, as the hardware to execute a corresponding algorithm, may be viewed or conceptualized as a hardware analog of “calling” a subroutine in software which may perform the same algorithm. As a consequence, once the configuration of the computational elements has occurred, as directed by the configuration information, the data for use in the algorithm is immediately available as part of the silverware module. The immediacy of the data, for use in the configured computational elements, provides a one or two clock cycle hardware analog to the multiple and separate software steps of determining a memory address and fetching stored data from the addressed registers.
While the ability to configure and reconfigure computational elements in real-time is achieved through the ACE, the present invention applies that ability to provide a more robust operating system configuration. In accordance with the present invention, a core amount of programming, such as the kernel space, is the only so-called trusted space within the operating system. All other elements of the operating system that normally would fall within the protected space of the operating system model now receive individualized monitoring.
In the preferred embodiment, the system requirements for the configuration or reconfiguration are included within the silverware module for use by the KARC 125 in performing this evaluative function. If the configuration or reconfiguration may occur without adverse effects, the silverware module is allowed to load into memory 140, with the KARC 125 setting up the DMA engines within the memory 140. If the configuration or reconfiguration would or may have such adverse effects, the KARC 125 does not allow the new module to be incorporated within the ACE 100.
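As a minimal sketch of the evaluative function described above (and not the actual KARC implementation), the following C fragment compares a module's declared resource requirements against the resources currently free and admits the module only if no requirement would be exceeded. All type and function names are hypothetical.

```c
/* Minimal admission-check sketch, assuming a module declares its needs in a
 * simple requirements record; only if no adverse effect is possible is the
 * module allowed to load. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t memory_words;
    uint32_t dma_channels;
    uint32_t interrupt_lines;
    uint32_t bandwidth_kbps;
} resource_req_t;

typedef resource_req_t resource_free_t;  /* same shape: what is still free */

static bool karc_admit(const resource_req_t *req, const resource_free_t *free_now)
{
    /* Reject any module whose declared needs exceed what remains available. */
    return req->memory_words    <= free_now->memory_words
        && req->dma_channels    <= free_now->dma_channels
        && req->interrupt_lines <= free_now->interrupt_lines
        && req->bandwidth_kbps  <= free_now->bandwidth_kbps;
}
```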
Basic operations that device drivers perform can be broken down into:
Memory Reads and Writes
- Reads and Writes to main memory address space to set, clear and check status of CSR (control status registers) of devices
- Reads and Writes to Input/Output address space to set, clear and check status of CSR (control status registers) of devices
Hardware Interrupts
- Setting up interrupt vectors to point to an interrupt service routine
- Servicing an interrupt
- Disabling and enabling interrupts
- Setting and Clearing an interrupt
Direct Memory Addressing (DMA)
- Setting up a DMA transfer by Memory Reads and Writes to DMA CSRs or Memory Mapped CSRs
- Setting Callback routine to be executed when DMA completes
- Setting Interrupt level to be asserted when DMA completes
- Setting up Memory Tables for scatter and gather operations by reads and writes
Computational Cycles
- Execution of device driver code consumes clock cycles of some processor
Memory Utilization
- Device driver code requires a certain amount of memory for temporary buffers, scratch pad working space, stacks, constants, data buffers, control sequences, etc.
Bandwidth
- Device driver code requires a certain amount of bandwidth: typically bus bandwidth, link bandwidth, bandwidth between computation units such as register files, memories, and hardware units, as well as bandwidth between the low level component building blocks required to construct larger structures such as multipliers, adders, shifters, etc.
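To make the categories above concrete, the following C sketch collects them into a single per-driver descriptor that a monitor could consult when deciding which checks to apply. The descriptor is an illustrative assumption, not a structure defined by the invention.

```c
/* Illustrative per-driver descriptor capturing the operation categories
 * listed above; assumed names, not part of the patent. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     uses_mmio;          /* memory-mapped CSR reads/writes          */
    bool     uses_port_io;       /* I/O-space CSR reads/writes              */
    bool     uses_interrupts;    /* vectors, service routine, enable/clear  */
    bool     uses_dma;           /* CSR setup, callbacks, scatter/gather    */
    uint32_t max_cycles;         /* computational cycle budget              */
    uint32_t max_memory_bytes;   /* buffers, stacks, scratch space          */
    uint32_t max_bandwidth_kbps; /* bus / link / interconnect bandwidth     */
} driver_profile_t;
```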
Depending on the nature of the device driver and the physical characteristics of the hardware under its control, some or all of the above operations are utilized. Device driver code that has defects (bugs), whether intentional (as in a virus) or unintentional, can affect the system on which the device driver is installed, since device drivers run at the protected kernel level. Such defects can thus compromise the integrity of the system, leading to crashes, freezes, failure to perform as specified, and unintended side effects on other software and hardware in the system.
In an ACE system, with its ability to construct specialized hardware from lower level building blocks, a device driver can be “protected” by uniquely tailored hardware that protects the system from the device driver. There is thus no need to trust that the device driver will perform as specified; the hardware ensures that the device driver performs correctly. On failure, an exception is generated to the O/S indicating the failure condition as well as the specific device driver that failed. The O/S then has the ability to terminate the device driver, restart the device driver, resume the device driver from a checkpointed copy (the device driver may occasionally save state and thus has a copy of a known good configuration), pass the exception upward to be handled at a higher system level, or notify the user and request corrective action. Specifically, for each of the above basic device driver operations, the ACE can:
Memory Reads and Writes
- Reads and Writes to main memory address space to set, clear and check status of CSR (control status registers) of devices
- The ACE produces hardware memory range checking to ensure that the addresses of the memory reads/writes are allowed and do not touch any memory that is out of bounds or range. This can range in sophistication from a simple address range checker (a single ALU), to multiple ALUs performing parallel range checking of scattered CSR addresses while also enforcing read or write protection, to a full customized MMU (memory management unit) for block-based address checking. Multiple address checking allows very specific and customized protection above and beyond what traditional MMU systems can provide.
- Reads and Writes to Input/Output address space to set, clear and check status of CSR (control status registers) of devices
- The ACE produces hardware range checking to ensure that the addresses of the Input/Output (I/O) reads/writes are allowed and do not touch any address space that is out of bounds or range, with the same range of sophistication as for main memory accesses. (A software sketch of this range checking follows this subsection.)
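The range checking described above can be summarized in software as follows. This is a sketch of the behavior only; in the invention the equivalent logic is instantiated as hardware (one ALU, several parallel ALUs, or a customized MMU), and the names used here are assumptions.

```c
/* Sketch of per-window address range checking with read/write permissions;
 * in the ACE this check is built from hardware building blocks. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uintptr_t base;
    size_t    length;
    bool      allow_read;
    bool      allow_write;
} addr_window_t;

/* Returns true only if the access falls entirely inside one permitted window
 * with the matching permission; otherwise an exception would be raised. */
static bool range_check(const addr_window_t *wins, size_t n,
                        uintptr_t addr, size_t len, bool is_write)
{
    for (size_t i = 0; i < n; i++) {
        const addr_window_t *w = &wins[i];
        bool fits = addr >= w->base && addr + len <= w->base + w->length;
        bool perm = is_write ? w->allow_write : w->allow_read;
        if (fits && perm)
            return true;
    }
    return false;  /* out of bounds or wrong permission */
}
```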
Hardware Interrupts
- Setting up interrupt vectors to point to an interrupt service routine
- The ACE can adapt hardware to produce protection checking that ensures only a specific vector, or group of specific vectors, may be read or written.
- Servicing an interrupt
- The ACE can adapt hardware to produce protection checking that ensures a default hardware device driver is executed if the device driver does not service the interrupt.
- Disabling and enabling interrupts
- The ACE can adapt hardware to produce protection checking that ensures the device driver can only enable or disable the interrupts for which it has permission.
- Setting and Clearing an interrupt
- The ACE can adapt hardware to produce protection checking that ensures only the specific CSR bits are read or written by the device driver. In addition, if required, a watchdog timer can be configured to ensure that strict timing requirements are met in terms of the duration of interrupt allowed. (See the sketch following this subsection.)
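The interrupt-related checks above amount to permission tests on vectors and interrupt lines. The following C sketch illustrates the idea under the assumption of 32 vectors and 32 interrupt lines tracked as bitmasks; the names and widths are hypothetical.

```c
/* Sketch of interrupt permission checking: only permitted vectors may be
 * touched, and only permitted interrupt lines may be enabled or disabled. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t vector_mask;  /* bit i set: driver may read/write vector i  */
    uint32_t enable_mask;  /* bit i set: driver may enable/disable irq i */
} irq_permissions_t;

static bool irq_vector_allowed(const irq_permissions_t *p, unsigned vector)
{
    return vector < 32 && (p->vector_mask & (1u << vector)) != 0;
}

static bool irq_toggle_allowed(const irq_permissions_t *p, unsigned irq)
{
    return irq < 32 && (p->enable_mask & (1u << irq)) != 0;
}
```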
Direct Memory Addressing (DMA)
- Setting up a DMA transfer by Memory Reads and Writes to DMA CSRs or Memory Mapped CSRs
- The ACE can adapt hardware to produce protection checking that ensures the device driver accesses only the specific CSRs, or portions of CSRs, for which it has read/write permission.
- Setting Callback routine to be executed when DMA completes
- The ACE can adapt hardware to produce protection checking that ensures no other code can change the callback routine address, so that the specific device driver intended to be called back actually is called back.
- Setting Interrupt level to be asserted when DMA completes
- The ACE can adapt hardware to produce hardware watchdog timers to ensure that the DMA completes.
- Setting up Memory Tables for scatter and gather operations by reads and writes
- The ACE can adapt hardware to produce protection checking that ensures only the specific addresses (either in memory or I/O space) are accessed, thereby precluding the device driver from accessing memory for which it does not have authorization. (See the sketch following this subsection.)
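As an illustrative sketch of the DMA-side protection, the following C fragment latches the callback address at setup time and bounds the time a transfer may remain outstanding; the types and the cycle-based watchdog granularity are assumptions for this example.

```c
/* Sketch of DMA protection: a latched callback address that other code
 * cannot substitute, and a watchdog bounding transfer completion time. */
#include <stdbool.h>
#include <stdint.h>

typedef void (*dma_callback_t)(void);

typedef struct {
    dma_callback_t locked_callback;  /* latched at setup time           */
    uint32_t       deadline_cycles;  /* watchdog budget for completion  */
    uint32_t       elapsed_cycles;
} dma_guard_t;

/* Only the original, latched callback is ever invoked on completion. */
static bool dma_callback_check(const dma_guard_t *g, dma_callback_t requested)
{
    return requested == g->locked_callback;
}

/* Called per tick while the transfer is outstanding; false means the
 * watchdog expired and an exception should be raised to the O/S. */
static bool dma_watchdog_tick(dma_guard_t *g)
{
    return ++g->elapsed_cycles <= g->deadline_cycles;
}
```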
Computational Cycles
- Execution of device driver code consumes clock cycles of some processor
- The ACE can adapt hardware to produce cycle count checking to ensure that the device driver does not exceed the specified maximum number of cycles. This can be used to terminate runaway tasks, or operations that are taking too long and may begin to affect system operation. (See the sketch following this subsection.)
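A minimal sketch of the cycle-count check, assuming the monitor is told how many cycles each run of the driver consumed; the names are hypothetical.

```c
/* Sketch of a cycle budget: exceeding it signals a runaway task. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t budget_cycles;
    uint64_t consumed_cycles;
} cycle_guard_t;

static bool cycle_charge(cycle_guard_t *g, uint64_t cycles_this_run)
{
    g->consumed_cycles += cycles_this_run;
    return g->consumed_cycles <= g->budget_cycles;  /* false: terminate or raise */
}
```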
Memory Utilization
- Device driver code requires a certain amount of memory for temporary buffers, scratch pad working space, stacks, constants, data buffers, control sequences, etc.
- The ACE produces the same hardware memory range checking described above to ensure that the addresses of the memory reads/writes are allowed and do not touch any memory that is out of bounds or range.
- If required, this may include hardware resource checking on the amount of memory space used, to determine whether it will exceed a maximum specified limit (for example, if the upper limit on stack space is exceeded).
Bandwidth
- Device driver code requires a certain amount of bandwidth: typically bus bandwidth, link bandwidth, bandwidth between computation units such as register files, memories, and hardware units, as well as bandwidth between the low level component building blocks required to construct larger structures such as multipliers, adders, and shifters.
- The ACE produces a hardware bandwidth checker to ensure that the specified amount of bandwidth used on the MIN is not exceeded. This can range from a simple limit on the total number of bytes transferred to an average-rate-not-to-exceed limit. (See the sketch following this subsection.)
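The bandwidth check can be sketched as follows, covering both the simple total-bytes limit and the average-rate limit mentioned above; field names and the one-second accounting granularity are assumptions.

```c
/* Sketch of a bandwidth checker: either a total-bytes limit, an
 * average-rate limit over the elapsed interval, or both. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t total_limit_bytes;  /* 0 = no total limit        */
    uint64_t rate_limit_bps;     /* 0 = no average-rate limit */
    uint64_t transferred_bytes;
    uint64_t elapsed_seconds;
} bw_guard_t;

static bool bw_charge(bw_guard_t *g, uint64_t bytes)
{
    g->transferred_bytes += bytes;
    if (g->total_limit_bytes && g->transferred_bytes > g->total_limit_bytes)
        return false;
    if (g->rate_limit_bps && g->elapsed_seconds &&
        g->transferred_bytes / g->elapsed_seconds > g->rate_limit_bps)
        return false;
    return true;
}
```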
The advantage here is that only the hardware protection that is required for a particular execution of the device driver needs to consume resources. For example, if no DMA is used, then no ACE circuitry protecting the DMA is required. Even more resource efficient, if two different calls to the device driver use differing sets of operations, then only the exact hardware protection needed for each call is required. For instance, in one execution no I/O reads/writes are used and thus no I/O protection hardware is required, while in a second execution there are I/O reads/writes and the corresponding hardware protection is instantiated (the hardware is configured and reconfigured from lower level building blocks to construct exactly the hardware that is required). In a conventional hardware architecture, without the ability to reconfigure the hardware, the overhead for all of this protection circuitry must be paid up front. By using only what is required during a particular time window (or execution), the ACE can provide exactly what is needed.
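A short sketch of this selective instantiation, reusing the hypothetical driver_profile_t descriptor from the earlier sketch: only the checkers a particular execution actually needs are planned, so unused protection consumes no resources.

```c
/* Sketch of planning only the protection an execution actually needs; on
 * reconfigurable hardware the unplanned checkers simply are not built. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {  /* repeated from the earlier sketch */
    bool     uses_mmio, uses_port_io, uses_interrupts, uses_dma;
    uint32_t max_cycles, max_memory_bytes, max_bandwidth_kbps;
} driver_profile_t;

typedef struct {
    bool build_mmio_checker;
    bool build_io_checker;
    bool build_irq_checker;
    bool build_dma_checker;
    bool build_cycle_checker;
    bool build_bw_checker;
} protection_plan_t;

static protection_plan_t plan_protection(const driver_profile_t *p)
{
    protection_plan_t plan = {
        .build_mmio_checker  = p->uses_mmio,
        .build_io_checker    = p->uses_port_io,
        .build_irq_checker   = p->uses_interrupts,
        .build_dma_checker   = p->uses_dma,
        .build_cycle_checker = p->max_cycles > 0,
        .build_bw_checker    = p->max_bandwidth_kbps > 0,
    };
    return plan;  /* checkers not requested cost no resources */
}
```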
Thus, through the present invention, all elements outside a system's own code for operating, e.g., all the stacks, abstraction layers, and device drivers, can be readily and reliably monitored. In this manner, the vulnerability present in most current operating systems due to unchecked access below the demarcation line is successfully overcome. Further, the reconfigurability of the ACE architecture allows the approach to adjust as desired with additions/changes to an operating system environment.
From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope of the novel concept of the invention. Further, it is to be understood that no limitation with respect to the specific methods and apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims.
Claims
1. A method for achieving individualized protected space in an operating system, the method comprising the steps of:
- (a) instantiating hardware on demand via an adaptive computing engine (ACE); and
- (b) utilizing the hardware for monitoring predetermined software programming to protect an operating system.
2. The method of claim 1 wherein the utilizing step (b) further comprises the step of (b1), utilizing the hardware to perform memory address range checking.
3. The method of claim 1 wherein the utilizing step (b) further comprises the step of (b1) utilizing the hardware to perform resource restriction checking.
4. The method of claim 3 wherein the resource restriction further comprises a time duration restriction.
5. The method of claim 1 wherein the utilizing step (b) further comprises the step of (b1) monitoring protocol processing programming.
6. The method of claim 1 wherein the utilizing step (b) further comprises the step of (b1) monitoring device driver programming.
7. The method of claim 1 wherein the utilizing step (b) further comprises the step of (b1) monitoring hardware abstraction layer programming.
8. The method of claim 1 wherein step (b) further comprises the step of (b1) utilizing the hardware to monitor only the portions of the software programming that are used from run to run.
9. A system for achieving individualized protected space in an operating system, the system comprising:
- memory for storing trusted operating system programming; and
- an adaptive computing engine (ACE) for providing on demand hardware instantiation to individually monitor predetermined software programming interacting with the trusted operating system programming.
10. The system of claim 9 wherein the trusted operating system programming further comprises a kernel level of programming.
11. The system of claim 9 wherein the predetermined software programming further comprises device drivers.
12. The system of claim 9 wherein the predetermined software programming further comprises abstraction layers.
13. The system of claim 9 wherein the predetermined software programming further comprises communication protocol stacks.
14. The system of claim 9 wherein the on-demand hardware instantiation performs memory address range checking.
15. The system of claim 9 wherein the on-demand hardware instantiation performs resource restriction checking.
16. The system of claim 15 wherein the resource restriction further comprises a time duration restriction.
17. A system for achieving individualized protected space in an operating system, the system comprising:
- memory for storing trusted operating system programming, wherein the trusted operating system programming further comprises a kernel level of programming and a plurality of device drivers; and
- an adaptive computing engine (ACE) for providing on demand hardware instantiation to individually monitor predetermined software programming interacting with the trusted operating system programming, wherein the predetermined software programming further comprises abstraction layers and communication protocol stacks; wherein the on-demand hardware instantiation performs memory address range checking and resource restriction checking; and wherein the resource restriction further comprises a time duration restriction.
Type: Application
Filed: Dec 29, 2009
Publication Date: Jul 29, 2010
Applicant: QST Holdings, Inc. (Palo Alto, CA)
Inventor: Paul L. Master (Sunnyvale, CA)
Application Number: 12/648,792
International Classification: H04L 9/00 (20060101);