System and method for automated safe reprogramming of software radios
The proposed system defines an automated safe reprogramming of software radios. The essence of software radios is to enhance or alter the functionality of a mobile terminal by using software. This means that the required software is downloaded onto a mobile terminal on the fly to meet the critical needs of the user. Considerable caution needs to be exercised while downloading the necessary software components on the fly. The proposed invention automates the validation and verification of the downloaded component by suggesting (a) a variety of signatures; (b) a means for defining multiple zones and verification of zone-specific signatures; (c) a means for computing the signatures of the downloaded component in the multiple zones; and (d) a means for verification of the downloaded component based on the signatures. The objective is to define the safeness of the downloaded component based on multiple signatures to be validated in different zones. In this way, the multiple validity requirements are tested in a systematic way and failure to meet any one of the requirements leads to the rejection of the downloaded component.
The invention relates generally to a method for reducing the risk of using a corrupted or damaged downloaded program. More particularly, the invention relates to a system and method for validating a downloaded program into a software radio using multiple signatures and a separate execution environment for validation.
BACKGROUND OF THE INVENTION
Wireless terminal architecture is adopting the principles of Software Defined Radio. The main thrust in applying the principles of Software Defined Radio to handset architecture is to utilize the potential that SDR offers in terms of universal multi-mode terminal functionality within a single reconfigurable platform. This is necessitated by the plethora of standards in radio access technologies in both the second and third generations of mobile communication systems. Also, the need to provide true global roaming seamlessly across network boundaries, getting access to services anytime, anywhere, without having to bother about the underlying technology changes, necessitates the terminal to have some amount of reconfigurability built in.
The architecture of a terminal built on the principles of Software Defined Radio follows a distributed computing paradigm. The SDR Forum, in its Software Communications Architecture Specification, Volume 2.2, suggests CORBA as a middleware. The entire Software Radio is viewed as a heterarchical collection of software components. Each application is viewed as composed of one or more of these components. Adding a new component is termed commissioning and removing a component is termed decommissioning of the component. A component can be commissioned, decommissioned, or replaced on the fly. The component server is capable of managing the component activity at transaction-level granularity.
Over-the-air reconfiguration of wireless terminals provides the true advantages of having a reconfigurable architecture. The terminals can download software components over-the-air and reconfigure the properties of the terminal. These components can range from new air interface protocols to new user applications. Some of these components can even change the pattern in the power emission characteristics of the terminal.
The reconfiguration process, as per "Architectures Supporting SDR Terminals" by Nikolas Olazieregi et al., at the minimum level requires some generic tasks like available mode lookup, negotiation, over-the-air software download, and reconfiguration. Every terminal will have some non-reconfigurable modules that take care of such functionality. The download of software components can happen in two ways, namely, user-triggered and system-initiated. User-triggered software downloads can be for user applications such as scheduler, calendar, or game applications. System-initiated downloads can be for system-level components such as CODECs, the protocol stack for a new air interface, and the modem for a new air interface.
"Detection and control of the rogue SDR terminals in the future networks" by Jafar Faroughi-Esfahani et al. describes conditions under which reconfiguration of a terminal could lead to potential problems. The capability of a reconfigurable terminal to download and commission new software components during operation also throws open the possibility of the terminal malfunctioning and jamming other users in the network.
DESCRIPTION OF RELATED ART
The possibility of the software modules corrupting the functionality of a reconfigurable software radio is very much a reality. The integrity of the software modules in this case cannot be guaranteed since the nature and the contents of the device can undergo reconfiguration dynamically. Thus, there exists a need for validating software components before they are commissioned in a reconfigurable terminal.
For the process of over-the-air reconfiguration of software radios, the user (the terminal) requests the download of software components from a server. The package for the component is sent over-the-air making use of the wireless communication capabilities of the terminal. The process of providing safe reprogramming of the software radios involves providing an assurance that the component that is downloaded cannot cause any problem in the system context.
U.S. Pat. No. 5,978,484 to Apperson; Norman and Beckman; Brian C for "System and method for safety distributing executable objects" (issued Nov. 2, 1999 and assigned to Microsoft Corporation (Redmond, Wash.)) describes a method by which a distributing authority associates a privilege request code with the executable and digitally signs it. The client verifies the digital signature before executing the code, and the code is monitored to ensure that the privilege request code is honored during the execution. While the said patent addresses the issues related to monitoring and controlling the execution of the code, it doesn't verify whether the behavior is as expected.
U.S. Pat. No. 6,047,374 to Barton; James M for "Method and apparatus for embedding authentication information within digital data" (issued Apr. 4, 2000 and assigned to Sony Corporation (JP)) discusses a method by which arbitrary digital information is embedded within a stream of digital data, allowing a user to determine whether the digital data have been modified from their intended form. The said patent describes a method that protects the content and ensures that the content has not been modified; however, the perspective of the approach is more from a data than from a program point of view.
U.S. Pat. No. 5,412,717 to Fischer; Addison M for “Computer system security method and apparatus having program authorization information data structures” (issued May 2, 1995) discusses a system monitor that limits the resources that can be utilized by an executing program based on program authorization information. The executing program, thus, is regarded as being placed in a capability limiting “safety box”.
U.S. Pat. No. 6,065,118 to Bull; John Albert and Otway; David John for "Mobile code isolation cage" (issued May 16, 2000 and assigned to Citrix Systems, Inc. (Fort Lauderdale, Fla.)) describes a method that reduces the risk of damage to data or programs due to a downloaded program from an external source. The downloaded component is executed in a separate execution environment and data is passed back and forth between the end user system and the cage that executes the downloaded program. The method described in the said patent, however, doesn't make an attempt to ensure that the data generated by the downloaded program is as expected; it only attempts to reduce the risk of damage to end user system resources due to the execution of the downloaded program.
U.S. Pat. No. 6,070,239 to McManis; Charles E for "System and method for executing verifiable programs with facility for using non-verifiable programs from trusted sources" (issued May 30, 2000 and assigned to Sun Microsystems, Inc. (Mountain View, Calif.)) describes a method for the verification of digital signatures associated with a program and for the verification of the program with respect to pre-defined integrity criteria. The verification described in the said patent is based on the Java bytecode verifier and includes criteria such as operand stack and data type usage restrictions; the verification applies to architecture-neutral programs.
U.S. Pat. No. 6,073,239 to Dotan; Eyal for “Method for protecting executable software programs against infection by software viruses” (issued Jun. 6, 2000 and assigned to In-Defense, Inc. (Santa Cruz, Calif.)) describes a method for protecting executable programs against infection by a computer virus program. The approach in the said patent is based on a typical execution pattern of the program on corruption by a software virus.
U.S. Pat. No. 6,105,072 to Fischer; Addison M for "Method and apparatus for validating travelling object-oriented programs with digital signatures" (issued Aug. 15, 2000) describes a method by which the executing instances of objects are stored and communicated to other systems for further execution of the same. The approach of the said patent provides for a digital signature methodology to ensure the security and integrity of the traveling objects.
U.S. Pat. No. 6,105,137 to Graunke; Gary L and Rozas; Carlos V for “Method and apparatus for integrity verification, authentication, and secure linkage of software modules” (issued Aug. 15, 2000 and assigned to Intel Corporation (Santa Clara, Calif.)) describes a method of authenticating and verifying the integrity of software modules based on digital signatures and additional verification criteria such as validity of the destination addresses.
U.S. Pat. No. 6,128,774 to Necula; George C and Lee; Peter for "Safe to execute verification of software" (issued Oct. 3, 2000) describes a method that includes the steps of defining a safety policy that specifies safe operating conditions of untrusted software, generating a safety predicate and a safety proof, and validating the said untrusted software based on the safety proof and safety predicate. The said patent requires the code producer to define the safety policy, enforces safety policies such as ensuring that immediate jumps are within the code segment, and watches the instructions for safety policy violations.
U.S. Pat. No. 6,154,844 to Touboul; Shlomo and Gal; Nachshon for "System and method for attaching a downloadable security profile to a downloadable" (issued Nov. 28, 2000 and assigned to Finjan Software, Ltd. (San Jose, Calif.)) describes a system that comprises a content inspection engine and a protection engine. The content inspection engine uses a set of rules that include a list of suspicious operations or suspicious code patterns to generate a security profile, and the protection engine includes mechanisms to ensure the trustworthiness of the downloadable. The example list of operations deemed suspicious includes file operations such as read and write, network operations such as listen and connect, and registry operations such as read registry item and write registry item.
U.S. Pat. No. 6,167,521 to Smith; Sean William and Weingart; Steve Harris for "Securely downloading and executing code from mutually suspicious authorities" (issued Dec. 26, 2000 and assigned to International Business Machines Corporation (Armonk, N.Y.)) describes a system for secure code-downloading and information exchange, in the full generality of complex code dependencies, in which trusted code is employed to ensure that proprietary data is destroyed or made unreadable when the environment ceases to hold a certain security level.
U.S. Pat. No. 6,223,291 to L. Puhl, D. Vogler, and E. A. Dabbish for "Secure wireless electronic-commerce system with digital product certificates and digital license certificates" (issued Apr. 24, 2001 and assigned to Motorola, Inc. (Schaumburg, Ill.)) describes a method in which downloadable software products are associated with digital content certificates for content items and digital license certificates for licenses of the content items, with verification of the licenses of new content on request from wireless equipment. The focus of the said patent is content verification and verification of the appropriate license for the verified content; it doesn't address the issues related to the verification of the behavior of the downloaded software product.
U.S. Pat. No. 6,330,588 to Freeman; Martin for “Verification of software agents and agent activities” (issued Dec. 11, 2001 and assigned to Philips Electronics North America Corporation (New York, N.Y.)) describes a method for the verification of software agents and their activities. The method described in the said patent achieves the objective by monitoring the agent's return and comparing the original agent fingerprint and the return agent fingerprint.
A method for verifying the integrity of the software installed in devices that operate in domains not fully controlled, in order to prevent situations where software integrity is compromised with malicious intent, is described in "Reflection as a mechanism for software integrity verification" by Diomidis Spinellis. These devices can be mobile phones, set-top boxes for Pay-TV interfaces, credit card terminals, smart cards, etc. The method involves computing a hash of the installed software and comparing it with the hash of the same kept under secure storage. Again, this method deals with the static characteristics of the software component and does not attempt to address the issue of dynamic behavior of the component.
A mechanism for detecting anomalous program behavior based on performance signatures is described in “Performance Signatures: A Mechanism for Intrusion Detection” by David L. Oppenheimer and Margaret R. Martonosi. The said mechanism is based on defining the variables that might indicate anomalous behavior and continually monitoring these variables during system operation. The values of these variables during program execution form the performance signature of the program and can be used to generate anomaly reports.
An approach that supports a component developer to design and run verification tests is described in “A Software Component Verification Tool” by Gary A Bundell, Gareth Lee, John Morris, Kris Parker. The said approach also describes a component test bench that is lightweight and portable facilitating packaging along with a component for verification of the component in a target environment.
A recommendation from SDR Forum related to structure, development, standardization, and implementation is provided in "SDRF Technical Report 2.2—Chapter 2 Rev. 0.3—28 Nov. 1999". The said recommendation also describes issues related to security and regulation of SDR equipment.
SUMMARY OF THE INVENTION
The present invention provides a system and method for safe and controlled upgrading of mobile terminals. In SDR-based mobile terminals, it is possible, and in some cases necessary, to download software components and commission them for immediate use. The component that can be downloaded is packaged with information to assess the integrity of the software after the download.
One aspect of the invention is to shield the functional mobile terminal from an infected component by initially downloading the component into QS, a distinct and isolated execution environment.
Another aspect of the invention is to incorporate into a DLC package multiple signatures that are used collectively to validate the downloaded component. The signatures are categorized into two types, namely, static signatures and dynamic signatures. The static signatures are incorporated into the package to verify aspects such as the source of the component, the target (mobile terminal) of the downloaded component, the adequacy of system (mobile terminal) characteristics, and interoperability with the already commissioned components (version matching).
Still another aspect of the invention is to use dynamic signatures to ensure that the downloaded component has not been infected during packaging, during transmission, or after unpacking. The twin objectives of the present invention are to provide as much protection as possible and at the same time to keep the process of generation and packaging of the signatures as simple as possible. The dynamic signatures are incorporated into the package to verify dynamic behavior aspects such as internal and external function calls, and memory and CPU utilization.
Still another aspect of the invention is to perform validation and verification in multiple zones, namely, E-Zone and V-Zone verification in QS (the shadow execution environment), and M-Zone verification in MS (the main execution environment).
Still another aspect of the invention is to perform periodic verification of the components that execute in N-zone in MS. This is to ensure that the component has not been infected while being in use in MS.
Yet another aspect of the invention is to interact with component servers to automatically download, verify and upgrade the components in MS on release of the new versions of the components by component vendors.
Yet another aspect of the invention in one of the preferred embodiments is to collect usage statistics of the downloaded components and communicate the same to MT server for billing purposes.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 9A1, 9B1, 9C1, and 9E1 describe an additional method of performing the second type of XPU verification in V-zone.
The automated safe reprogramming of a software radio involves the steps of downloading the required component (Downloaded Component, DLC) from any of the available DLC servers and performing the processes of validation and verification of the same in the context of the software radio, SR (100). The software radio has a Quarantine Space, the Q-Shell subsystem, QS (110) that aids in the component management activities which include validation and verification process (VnV process) of a downloaded component before commissioning it within the main subsystem MS (120).
The wireless network is the one in which the mobile terminal is identified as a valid terminal by the use of a SIM module or a suitable identity so that the mobile terminal is capable of using the wireless network for accessing the DLC servers. The wireless network is connected to an IP network by a suitable gateway component.
The system accesses a number of DLC servers (140,150) that provide the required software components in a package format needed by QS. The mobile terminal accesses the DLC servers via a combination of wireless network and IP network.
The package for the downloaded component comprises the package header; the instrumented DLC, where instrumentation is a piece of code embedded in the component to generate data for signature verification; upper and lower layer simulator components; static signatures; dynamic signatures; and component-specific data. The simulators are also designed for use in a distributed processing environment and implement methods required for executing use-cases in the simulated environment.
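As an illustration of this package layout, the following is a minimal Python sketch; the class and field names (DLCPackage, PackageHeader, is_complete) are hypothetical stand-ins for the XML schema described above, not the patent's actual format.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class PackageHeader:
    asr_version: str   # ASR format version
    component_id: str
    vendor_id: str
    packaged_at: str   # date and time of packaging

@dataclass
class DLCPackage:
    header: PackageHeader
    idlc_binary: bytes                    # instrumented downloaded component
    ul_simulator: bytes                   # upper layer simulator component
    ll_simulator: bytes                   # lower layer simulator component
    static_signatures: Dict[str, bytes]   # source, target, system, version
    dynamic_signatures: Dict[str, bytes]  # compressed, keyed by use-case id
    component_data: Dict[str, str]        # component id, version, vendor details

    def is_complete(self) -> bool:
        # Presence check in the spirit of blocks 632-635: every mandatory
        # entity must exist before the VnV process can begin.
        return all([self.idlc_binary, self.ul_simulator,
                    self.ll_simulator, self.dynamic_signatures])
```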
The Mobile Terminal (MT) Server (130) keeps track of the components within a software radio. The MT server maintains statistics about a software radio like validation logs and usage logs. The usage information is communicated to the Billing System for the purposes of billing. Each software radio terminal has an entry in an MT server that is identified by MT's unique equipment id.
QS comprises modules that help in the execution of the downloaded component. The Java components execute in a virtual machine that is part of QS. QS also has a minimal installation of the component server needed for providing a distributed processing environment. QS has a set of libraries containing the validation and verification routines. System-related information needed in the VnV process is stored in the Disk-on-Chip database (220). The different software modules in QS make use of the Q-shell APIs for their functioning.
Automated Reprogramming, AR (210), is the module that manages the whole functioning of the Q-shell system. All communication to and from MS is routed through AR. AR is responsible for taking the decision about the acceptance or rejection of a downloaded component based on the results of the VnV process.
Pack/Unpack, PU (240), is the module responsible for unpacking the DLC package, which is in XML format. The PU checks the integrity of the package for the presence of all the required entities. The PU parses the information present in the package required for performing the VnV process.
Validation and Verification, VnV (230), is the module responsible for conducting the various signature evaluations according to the directions from AR. VnV module performs the static signature evaluation, dynamic signature verification in V-zone, dynamic signature verification in M-zone and communicates the results to AR.
Downloaded Component Management, DLCM (250), module is responsible for managing the entire component related activities in SR. DLCM keeps track of the status of all the downloaded components in the system. DLCM is responsible for providing a secure storage for components that are temporarily decommissioned. DLCM stores the most recent and most immediately required components in on-board storage. Remaining components are archived in the backup component store in MT server. DLCM is responsible for periodically scheduling the commissioned objects for M-zone verification. DLCM subscribes to the DLC servers that follow a subscribe-publish protocol for receiving the information related to the component version upgrades.
In V-zone (310), the dynamic signatures of the component are verified in a simulated environment in QS. The iDLC (instrumented downloaded component) and the simulators needed for the execution of use-cases are installed in QS. VnV module executes the use-case by invoking the published methods for each use-case. The dynamic signatures including the execution behavior (IXB and EXB), memory utilization (XMU) and the CPU utilization (XPU) are verified for each use-case using the data generated during the execution of the iDLC and simulators. Any failure in the V-zone verification results in the rejection of the component.
In M-zone (320), the iDLC is installed in MS of software radio and allowed to inter-operate with other components. The data is collected from the iDLC and is logged onto a file on Disk-on-Chip. The collected data is passed onto the VnV module for M-zone verification. The failure in this verification step causes the component to be rejected. N-zone (330) is the normal operating mode of the software radio. In this mode, the components operate without instrumentation. All the downloaded components operating in N-zone periodically undergo M-zone verification.
The block 420 describes the periodic online verification of components. All the downloaded components commissioned in N-zone are periodically scheduled for M-zone verification. The component is allowed to operate in MS for a preset time period with the instrumentation turned on. The verification is performed with the collected data. If the verification process is not satisfactory, the same is communicated to MS.
The block 430 describes the collection of usage related data. QS periodically collects the usage data of each of the downloaded components commissioned in MS. This data is off-loaded to the MT server at regular intervals for archiving purposes. In one of the preferred embodiments, the collected usage statistics are used for billing purposes.
The block 440 describes the component version management activity of QS. For each of the commissioned components, QS subscribes with the respective DLC server for receiving version-related information about the components whenever the DLC server publishes the information about the version upgrades for the component. QS receives these published messages and informs the user about the version upgrade.
VnV module performs E-zone static signature verification and returns the status. Based on the result, AR decides to reject DLC or proceed with the V-zone verification. If VnV returns OK after E-zone verification, AR does the preparation for V-zone verification. AR installs the iDLC, the Upper Layer (UL) and Lower Layer (LL) simulators in QS (520). Then, AR invokes VnV module to perform the V-zone verification (530). The result of V-zone verification is communicated to AR. Based on the result, AR decides either to reject the DLC (if the result is not OK) or else to proceed with M-zone verification.
For performing M-zone operation, AR invokes an API implemented by Q-Agent for the installation of iDLC in MS. Before the iDLC is commissioned in MS, a check is performed for the presence of components that may be superseded by the installation of the iDLC and any such components are decommissioned (540). In block 545, the iDLC is commissioned in MS, wherein the iDLC interoperates with other components in MS, to validate the DLC behavior in a realistic scenario. During this time, instrumentation within the iDLC generates the required data for M-zone verification.
After a preset time period, AR invokes VnV to perform M-zone verification on the collected data (550). VnV performs the verification and returns the result to AR. If the result is OK, AR proceeds to turn off the instrumentation in iDLC (560). In case it is required to delay the commissioning of the DLC, the DLC is passed onto DLCM for secured on-board storage (562) and the decommissioned components are reinstalled (565). On the other hand, if the DLC is required to be commissioned immediately, then AR passes the DLC to the Q-Agent for commissioning (570). On successful commissioning of the DLC (575), AR passes this information to update QS database for the commissioned component (576). Further, AR sends a positive acknowledgment to MT Server and DLC Server (580).
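The staged accept/reject flow that AR coordinates across E-zone, V-zone, and M-zone can be summarized as in the sketch below; this is a minimal illustration with hypothetical callables standing in for the VnV, Q-Agent, and AR interfaces.

```python
def run_vnv_pipeline(verify_e_zone, verify_v_zone, verify_m_zone,
                     on_reject, on_commission):
    """Each verify_* callable wraps one VnV stage and returns True on
    success; failure at any stage rejects the DLC, mirroring AR's
    decisions in blocks 520-580."""
    for stage, check in (("E-zone", verify_e_zone),
                         ("V-zone", verify_v_zone),
                         ("M-zone", verify_m_zone)):
        if not check():
            return on_reject(stage)
    return on_commission()

# Example wiring with trivial stand-ins:
# run_vnv_pipeline(lambda: True, lambda: True, lambda: False,
#                  on_reject=lambda s: f"rejected at {s}",
#                  on_commission=lambda: "commissioned")
```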
The block 555 describes the error handling mechanism. In the case of a new downloaded component, any error at any of the signature evaluation stages causes the DLC to be rejected and a suitable communication is sent to DLC Server and MT Server. In the case of periodic online verification of commissioned components, an error causes a communication to be sent to MS to enable a suitable course of action.
If no active calls or data sessions are in progress, SR is put into suspended mode (591). In this mode, no activity of SR is allowed and the keypad is disabled. The system remains in this mode for a very brief period of time.
Before a component is commissioned, checking is done for the presence of any other components that are superseded by the new component. Such components are decommissioned (592) and DLCM provides secure storage for such components.
The iDLC is then installed in MS (593). After this, the system is brought back to normal mode of operation (594).
The next step in the unpacking operation is to analyze the data that is part of the package (610). The signature data is checked for the presence of mandatory elements such as the use-case list, static signatures, dynamic signatures and component-specific data.
The result of unpacking is communicated to AR module (620).
The package has an ASR header that consists of the ASR version, component id, vendor id, and date and time of packaging. The first step in integrity checking is to check the header format (631).
The block 632 checks whether the package contains iDLC.
The block 633 checks whether the package contains an upper layer simulator.
The block 634 checks whether the package contains a lower layer simulator.
The block 635 checks whether the package contains signature data.
The package contains the instrumented DLC, the upper layer and lower layer simulators, and the signature data. Checking is done for the presence of all these entities. If any one of these entities is missing, an error is returned to the AR module.
The block 645 is the data structure for static signatures. This includes the static source signature, comprising the source server's private-key encrypted hash and information about the hashing algorithm, and the static target signature, comprising the equipment identity, operator id, and the SIM identity. The static signature also includes system signature data and static version signature data.
The block 650 is the data structure for dynamic signatures. This includes use-case specific compressed signature for all the use-cases. The number of use-cases contained in the package is also part of the data structure.
The block 655 is the data structure for component-specific data. This includes component id, component version, vendor id and vendor details like vendor URL, nature of billing and information for subscribing to the DLC server for receiving version upgrade information.
The block 660 describes the static system signature, which includes data for system signature verification, such as CPU clock rate, RAM required, display screen resolution, data channel speed, and OS version.
The block 665 describes the static version signature, which is a table containing the range of versions of other components with which the DLC inter-operates.
IXB signature (675) consists of function ids of instrumented internal functions and the number of times the function was invoked during the execution of the use-case.
EXB signature (680) consists of function ids of external functions and the number of times it was called during the execution of the use-case in the instrumented functions.
XPU signature (685) consists of function ids and the execution time for all the instrumented internal functions.
XMU signature (690) consists of an instrumented function id and the accumulated memory requested in any particular invocation of the function.
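A compact way to picture these four per-use-case records is the following sketch; the container names and types are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class UseCaseSignature:
    """Per-use-case dynamic signature entries; names are illustrative."""
    ixb: Dict[int, int]          # internal function id -> invocation count (675)
    exb: Dict[int, int]          # external function id -> call count (680)
    xpu: Dict[int, List[float]]  # function id -> execution time per invocation (685)
    xmu: Dict[int, List[int]]    # function id -> bytes requested per invocation (690)
```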
The result of the signature verification is returned to AR.
VnV module checks if the public-key for that particular DLC Server is available with SR (735). If not, SR negotiates with the DLC Server to obtain the public-key.
VnV module uses the matched public key of the DLC Server and decrypts the signature (740). The hashing algorithm is applied on the DLC binary and the hash is obtained (745). Source signature is successfully verified if the hashes are equal. The information about the hashing algorithm is also part of the DLC package.
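The source signature check of blocks 740-745 amounts to a decrypt-and-compare of hashes, as in this sketch; decrypt_with_public_key is a hypothetical callable standing in for the SR's crypto layer, and the hash algorithm name is taken from the package as described above.

```python
import hashlib

def verify_source_signature(dlc_binary: bytes, encrypted_hash: bytes,
                            hash_algo: str, decrypt_with_public_key) -> bool:
    """Source signature check per blocks 740-745: decrypt the source
    server's private-key encrypted hash using its public key, recompute
    the hash of the DLC binary with the algorithm named in the package,
    and compare the two."""
    expected = decrypt_with_public_key(encrypted_hash)  # assumed crypto layer
    actual = hashlib.new(hash_algo, dlc_binary).digest()
    return expected == actual
```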
VnV module fetches these ids from the package and compares them with the corresponding entities in the system. A match validates that the package is intended for that particular SR.
VnV module gets the valid version range corresponding to the first component id in the version signature (770). The version of that particular component in MS is obtained from the system database (775). This value is compared to check whether it is within the version range (780) for this component in the signature. This is repeated for all the entries in the version signature. If the version of any of the components fails to be within its range, it is reported as an error.
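The version signature check reduces to a range test per listed component, sketched below; the table and database shapes are assumptions for illustration, not the patent's storage format.

```python
def verify_version_signature(version_table, installed_versions) -> bool:
    """Static version check per blocks 770-780. version_table maps a
    component id to its interoperable (min_version, max_version) range;
    installed_versions maps a component id to the version found in MS."""
    for comp_id, (lo, hi) in version_table.items():
        version = installed_versions.get(comp_id)
        if version is None or not (lo <= version <= hi):
            return False   # out-of-range version is reported as an error
    return True
```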
UL simulator implements a method that is invoked for a use-case execution (800). The method is invoked with use-case id as parameter (805) and the instrumentation within the iDLC generates the data required for signature verification. VnV module reads the file containing generated data and fills the data structure in memory (810).
From the package, the compressed signature data corresponding to the use-case is extracted to obtain the IXB, EXB, XPU and XMU signatures (815). VnV module first performs the IXB and EXB signature verification (820). If it is successful, VnV module proceeds to perform XMU verification (830). Then, the XPU verification is performed (840).
If the signature verification fails at any stage, it is communicated to AR. Otherwise, verification is termed successful and the result is logged.
VnV module is executed for each of use-cases (800a). UL simulator implements a method that is invoked for a use-case execution (802a). The method is invoked with use-case id as parameter (804a) and the instrumentation within the iDLC generates the data required for signature verification. The generated and stored data includes the execution time and the amount of memory utilized by each invocation of a function with respect to a use-case (808a). Note that if the invoked function happens to be an external function, then the amount of memory utilized is set to zero. VnV module reads the file containing generated data and fills the data structure in memory (810a).
From the package, the compressed signature data corresponding to the use-case is extracted to obtain the IXB, EXB, XPU and XMU signatures (812a). VnV module first performs the IXB and EXB signature verification (814a). If it is successful, VnV module proceeds to perform XMU verification (816a). Then, the XPU verification is performed (818a). If the signature verification fails at any stage, it is communicated to AR. Otherwise, verification is termed successful and the result is logged.
This process is repeated for all the internal and external functions associated with the use-cases.
VnV module first generates a list (L1) of memory allocation request values for a function from the signature (875). A similar list (L2) is created from the generated data (880). A check is done for sizes of both the lists (885). If they are not equal, it is treated as an error and reported to AR (897). Otherwise, the first value from L1 is obtained (890). The list L2 is searched for an exact match (892). If matching is successful, the matched entry is removed from both the lists (895). This is repeated for all the values in L1. Failure to find a match in L2 for an element in L1 is reported as an error (897).
This process is repeated for all the functions in the use-case signature.
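The exact-match procedure of blocks 875-897 is essentially a multiset comparison, as in this sketch (list and function names are illustrative).

```python
def verify_xmu_exact(sig_allocs, gen_allocs) -> bool:
    """Exact XMU check per blocks 875-897: the memory allocation request
    values generated in QS (L2) must match, as a multiset, the values in
    the signature (L1) for the function."""
    l1, l2 = list(sig_allocs), list(gen_allocs)
    if len(l1) != len(l2):
        return False            # size mismatch, reported to AR (897)
    for value in l1:            # match and remove pairwise (890-895)
        if value not in l2:
            return False        # no exact match found in L2 (897)
        l2.remove(value)
    return True
```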
For each of the internal functions, Fj (854d), count the number of memory allocation requests (Cq) and sum the memory allocation requests (S2) over the various invocations of Fj (856d). Obtain the corresponding count (Cg) and sum (S1) from the signature (858d). Add |S1−S2| to AD (860d). Determine whether AD<N*SThreshold and Cg=Cq (862d), where SThreshold is a pre-defined threshold value. If the condition is not satisfied, then report an error.
This process is repeated for all the functions in the use-case signature.
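This aggregate variant can be sketched as follows; the per-function (sum, count) pairs are an assumed representation, and the patent's running-total check at block 862d is folded into a final threshold comparison here.

```python
def verify_xmu_aggregate(sig_by_fn, gen_by_fn, s_threshold: float) -> bool:
    """Aggregate XMU check per blocks 854d-862d: per internal function,
    compare the allocation-request counts (Cg vs Cq) and accumulate
    |S1 - S2| into AD; require AD < N * SThreshold over the N functions."""
    ad, n = 0.0, len(sig_by_fn)
    for fn, (s1, cg) in sig_by_fn.items():   # (sum, count) from signature
        s2, cq = gen_by_fn.get(fn, (0, 0))   # (sum, count) generated in QS
        if cg != cq:
            return False                     # count mismatch is an error
        ad += abs(s1 - s2)
    return ad < n * s_threshold
```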
The first step is fixing the first row, first column value of P as t. A reduced matrix is obtained by leaving out the row and column of t. The next step is to obtain an N×1 matrix, C, each element of which is a set containing the column numbers corresponding to values, from a row of the reduced matrix, that differ from t by at most δ (900). In this process, if any of the rows of C is a null set, the process is repeated by fixing the next element in the first row of P as t.
The aim of the next step is to compute an N×1 matrix D, with the first element being the column number of t, which contains a unique column number of the matrix P for each row of the matrix. D defines a unique mapping from the multiple invocations of a function in QS to the same number of invocations of the function in the developer environment. Based on this mapping, a set, E, of epsilon values from the P matrix is obtained.
If there are singleton sets in C, a check is done to determine whether any two singleton sets are equal, that is, whether multiple invocations of a function are being mapped onto a single invocation of the function in the developer environment, indicating an inconsistent mapping (905). The next step involves updating the D matrix with singleton set elements, preserving the row identity of the elements, and eliminating the same from all other row-elements of C (910). This procedure is repeated till all the singleton sets are eliminated. With the remaining sets, a backtracking algorithm is applied to obtain an N×1 matrix D2 containing column numbers that together with D defines a unique mapping (915). The input to the backtracking algorithm is a K×1 matrix Cr. Cr is derived from C by removing the rows that become null after the elimination of singleton sets. A mapping is maintained from the row index of C to the row index of Cr for each element. The objective of the backtracking algorithm is to find a K×1 matrix H, such that
The next step is to update D matrix using the result obtained in D2 matrix preserving the row identity. Form an N×1 matrix E with values obtained from P matrix using the column numbers present in D matrix preserving row identity. The mean e of values of the elements of E matrix is computed (920) and forms an element of Gf. This process is repeated for the remaining elements in the first row of the matrix P (925).
H(I) ∈ Cr(I), for I = 1 to K, and
H(I) ≠ H(J) for all I ≠ J.
The values from H are updated onto the matrix D2 using the same row mapping relation from C to Cr.
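The backtracking step amounts to finding a system of distinct representatives for the candidate column sets, as in this sketch (the recursive search strategy is one straightforward choice, not necessarily the patent's).

```python
def backtrack_unique_mapping(cr):
    """Backtracking step of block 915: find H with H[i] in cr[i] and all
    H[i] pairwise distinct, i.e. a system of distinct representatives
    for the K candidate column sets. Returns H or None."""
    k, h = len(cr), []

    def extend(i):
        if i == k:
            return True
        for col in cr[i]:
            if col not in h:        # enforce H(I) != H(J)
                h.append(col)
                if extend(i + 1):
                    return True
                h.pop()             # undo and try the next candidate
        return False

    return h if extend(0) else None

# backtrack_unique_mapping([{1, 2}, {1}, {2, 3}]) -> [2, 1, 3]
```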
The first step (950) is obtaining the Gf sets for all the functions.
After the Gf sets are obtained for all the functions, a check is made to determine if any one of the Gf sets is a null set (955). If so, the XPU verification fails for this use-case. If all the sets have at least one member, then the sets are ordered in the increasing order of their cardinality (960). The next step is to obtain the set G, a set of values with one element from each Gf set such that all the elements are within δ distance of each other (965). This is performed by taking the first element from the first set (of least cardinality) and trying to find at least one value from each of the remaining sets such that the elements of G satisfy the δ constraint. If such a set is formed (970), the XPU verification is termed successful and the result is communicated to AR module. The QS uses the mean value of G, γ (975), to impose an additional constraint for the remaining use-cases.
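The construction of G from the Gf sets can be sketched as a greedy search, one simplification of the procedure described above; set and variable names are illustrative.

```python
def select_consistent_ratios(gf_sets, delta):
    """Cross-function XPU check per blocks 955-975: reject if any Gf set
    is null, order the sets by cardinality, then pick one ratio from
    each set so that all picks lie within delta of each other. Returns
    (G, gamma) on success, None on failure."""
    if any(not s for s in gf_sets):
        return None                       # a null Gf set fails the use-case
    ordered = sorted(gf_sets, key=len)
    for anchor in sorted(ordered[0]):     # try each element of the smallest set
        g = [anchor]
        for s in ordered[1:]:
            pick = next((e for e in sorted(s)
                         if all(abs(e - x) <= delta for x in g)), None)
            if pick is None:
                break
            g.append(pick)
        else:
            return g, sum(g) / len(g)     # gamma constrains later use-cases
    return None
```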
FIGS. 9A1, 9B1, 9C1, and 9E1 describe the steps in XPU verification for a use-case in V-zone (XPU-2 signature verification). XPU verification is termed successful if the execution time per function follows a consistent pattern when executing the DLC both in QS and in the developer environment. The aim of XPU verification is to find a ratio value, e, of execution time in QS to that in the developer environment, per function, such that the difference in e values with respect to different invocations of the function is within a tolerance limit δ. Further, a similar consistency in ratio values should be observed across multiple invocations of multiple functions.
The first step is fixing the first row, first column value of P as t. A reduced matrix is obtained by leaving out the row and column of t. The next step is to obtain an N×1 matrix, C, each element of which is a set containing the column numbers corresponding to values, from a row of the reduced matrix, that differ from t by at most δ (900c). In this process, if any of the rows of C is a null set, the process is repeated by fixing the next element in the first row of P as t.
The aim of the next step is to compute an N×1 matrix D, with the first element being the column number of t, which contains a unique column number of the matrix P for each row of the matrix. D defines a unique mapping (Map) from the multiple invocations of a function in QS to the same number of invocations of the function in the developer environment. Based on this mapping, a set, E, of epsilon values from the P matrix is obtained.
If there are singleton sets in C, a check is done to determine whether any two singleton sets are equal, that is, whether multiple invocations of a function are being mapped onto a single invocation of the function in the developer environment, indicating an inconsistent mapping (905c). The next step involves updating the D matrix with singleton set elements, preserving the row identity of the elements, and eliminating the same from all other row-elements of C (910c). This procedure is repeated till all the singleton sets are eliminated. With the remaining sets, a backtracking algorithm is applied to obtain an N×1 matrix D2 containing column numbers that together with D defines a unique mapping (915c). The input to the backtracking algorithm is a K×1 matrix Cr. Cr is derived from C by removing the rows that become null after the elimination of singleton sets. A mapping is maintained from the row index of C to the row index of Cr for each element. The objective of the backtracking algorithm is to find a K×1 matrix H, such that
H(I) ∈ Cr(I), for I = 1 to K, and
H(I) ≠ H(J) for all I ≠ J.
The values from H are updated onto the matrix D2 using the same row mapping relation from C to Cr.
The next step is to update D matrix using the result obtained in D2 matrix preserving the row identity. Form an N×1 matrix E with values obtained from P matrix using the column numbers present in D matrix preserving row identity. The mean e of values of the elements of E matrix is computed (920c) and forms an element of Gf. This process is repeated for the remaining elements in the first row of the matrix P (925c).
The first step (950e) is obtaining the Gf sets for all the functions.
The use-case ids in the generated data are checked to verify whether the signatures for those use-cases are present in QS (1015). If the signature is present, XPU and XMU signatures (related to CPU and Memory utilization) are verified and result is passed onto AR (1020).
If there is no signature for any of the use-cases present in the generated data, the M-zone data generation is continued for another interval of time. After each repetition of M-zone operation (1022), the generated data is checked for the presence of those use-cases for which the signature is available. If the necessary data has been generated, signature verification is performed.
If the number of iterations crosses a threshold value (R(M-Zone)) for repetitions (1025) without the necessary use-cases getting executed, suitable action is taken based on whether the component is under periodic online verification or is a new component. In the case of periodic online verification of components, an error is reported to AR. In the case of a new DLC, the signature is requested from the DLC Server for the use-cases that have occurred most frequently (1030). If the DLC Server is unable to provide this data, an error is reported to AR. If the signature becomes available, XPU and XMU signature verifications are performed for the corresponding use-case and the result is passed onto AR.
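The repetition loop of blocks 1015-1030 can be sketched as follows; run_interval is a hypothetical interface returning the use-case ids observed during one M-zone data-collection interval.

```python
def m_zone_collect(run_interval, signatures, max_repeats):
    """M-zone loop per blocks 1015-1030: keep collecting instrumented
    data until a use-case with an available signature has executed, up
    to R(M-Zone) repetitions."""
    observed = set()
    for _ in range(max_repeats):
        observed |= run_interval()
        ready = observed & set(signatures)
        if ready:
            return ready   # verify XPU/XMU signatures for these use-cases
    return set()           # threshold crossed: request missing signatures
                           # from the DLC Server or report an error to AR
```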
The first step is to analyze the generated data related to multiple executions of a use-case (1050). Each such data set contains values related to CPU utilization by the DLC at periodic intervals. These values are normalized based on the peak value (1055) in order to account for the system and load characteristics. For the same reason, different instances of execution of a use-case take different time periods to complete and hence it is required to normalize the time scale as well. This is achieved by using a time-warping technique (1057).
A further objective is to abstract the CPU utilization characteristics from several executions into a template that is compared with the corresponding use-case signature (1059). The template is generated by pair-wise warping of the sequences until they are reduced to an approximated representation of the CPU utilization as a single sequence. This sequence forms the template for the use-case.
An error is reported to AR if the error of comparison (1060) of the template with the signature is not within a threshold.
The first step is to analyze the generated data related to multiple executions of a use-case (U) (1050b1). Each such data set contains values related to CPU utilization by the DLC during a particular execution, sampled at periodic intervals. These values are normalized based on the peak value (1052b1) in order to account for the system and load characteristics. For the same reason, different instances of execution of a use-case take different time periods to complete and hence it is required to normalize the time scale as well. This is achieved by using a time-warping technique. Let D=<D1, D2, . . . , Dn> be generated during n executions of the DLC with respect to use-case U, wherein Di=<Di1, Di2, . . . > is the sampled normalized CPU utilization during the ith execution (1054b1). Arrange D=<D1′, . . . , Dn′> in non-decreasing order of execution time, where the execution time is the number of samples times the sample period (1056b1). Let D1′=(((D1′ ⊙ D2′) ⊙ D3′) ⊙ . . . ⊙ Dn′), wherein ⊙ denotes a time-warping operation (1058b1). This pair-wise warping of CPU utilizations results in an abstract characterization of CPU utilization (D1′) that is compared with the similar abstract characterization that is part of the signature (C1′). Obtain C1′ from the signature and perform C1′ ⊙ D1′; determine the path of least error and compute the sum of errors (Se) based on this path-of-least-error mapping (1060b1). Compare Se with a pre-defined Threshold (1062b1) and report an error if Se is not less than Threshold. A time-warping operation helps in mapping a segment (say, W1) of a time series onto a segment (say, W2) of another time series such that the overall error is minimized. The mapping error is computed by extrapolating W1 to match W2 (if W1<W2) and computing the error as the square root of the sum of squared differences between corresponding values in the two segments.
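The core of this comparison is a least-error warping between two normalized sequences, sketched below with the standard dynamic-time-warping recurrence; the pair-wise reduction of the n executions is simplified to a mean per-execution warp error, which is an assumption rather than the patent's exact reduction.

```python
def dtw_error(a, b):
    """Least-error warp cost between two normalized sequences using the
    standard dynamic-time-warping recurrence."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] ** 0.5

def nxpu_verify(template, executions, threshold):
    """Compare the signature template C1' against the generated
    executions; the mean per-execution warp error stands in for Se."""
    se = sum(dtw_error(template, d) for d in executions) / len(executions)
    return se < threshold
```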
The path of execution of the use-case is identified (1110).
The next step is to identify functions that can distinguish between use-cases by examining some key parameters (1120).
The next step is to identify the functions that need to be instrumented (1130). The functions are chosen in such a way that (a) the distinctive behavior of the use-case is captured; (b) not too many functions are instrumented; and (c) total code due to instrumentation is much less than live code. The instrumentation code is added to these functions (1140).
The instrumentation can be turned on and off, on the fly through the modification of a global variable. The method for turning on and off instrumentation is also implemented (1150).
For the identified internal functions, the function id and the timestamp are logged into a local variable at the beginning of the function (1175). For all memory allocations within the function, successful allocation is checked, and the actual bytes allocated are logged into the local variable. If there are multiple memory allocations within the function, the bytes allocated are added up before saving them in the local variable (1180).
The logging of data into a file is performed at the exit point of the function, where the end time of execution is also logged (1185). If there is more than one exit point, the developer can choose to log the data into a file at select exit points. The use-case id is also logged along with the data (1190).
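In Python terms, such instrumentation could look like the following decorator sketch; only function id, use-case id, and elapsed time are logged here, whereas the patent additionally logs the accumulated bytes allocated inside the function (omitted, since that requires allocator hooks).

```python
import time

INSTRUMENTATION_ON = False   # global flag toggled on the fly (1150)

def instrumented(func_id, use_case_id, log):
    """Decorator sketch of blocks 1175-1190: records function id,
    use-case id, and elapsed time at the exit point of the function."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if not INSTRUMENTATION_ON:          # no logging when turned off
                return fn(*args, **kwargs)
            start = time.monotonic()            # entry timestamp (1175)
            result = fn(*args, **kwargs)
            log.append({"func": func_id,        # exit-point logging (1185-1190)
                        "use_case": use_case_id,
                        "elapsed": time.monotonic() - start})
            return result
        return inner
    return wrap

# Usage: log = []
# @instrumented(func_id=42, use_case_id=7, log=log)
# def handle_frame(frame): ...
```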
The next step is to identify the use-cases for V-zone and M-zone verifications (1210). Then, the simulators are designed (1220). The upper layer simulator implements methods that act as data source and data sink for the identified use-cases. Similarly, the lower layer simulator implements methods that act as data source and data sink for the identified use-cases. One of the simulators implements a method that acts as an entry point for execution of use-cases.
The DLC is suitably instrumented (1225) so as to generate adequate data to identify dynamic signatures. Specifically, suitable internal and external functions are identified and are suitably instrumented. The use-cases are executed in the developer environment to generate the dynamic signature (1230). The compressed dynamic signature is included in the package (1240). The package header is created with information such as ASR version, component id, vendor id, and date and time of packaging (1250). Then the iDLC and simulators are packaged (1260).
The first step is to turn on the instrumentation (1300). All DLCs implement a method that is invoked for turning on and off the instrumentation. After turning on of the instrumentation, the data for performing M-zone verification is collected (1310).
With the collected data, M-zone verification is performed (1320). If the verification fails, MS is alerted about the failure (1330). The instrumentation is turned off (1340) and the verification result is logged (1350).
Periodically, the Component Server invokes the function to pass the usage-related data (1420). This data is communicated to QS by the Q-Agent (1430). The DLCM module in QS is responsible for processing the usage data.
DLCM stores the data on the DoC databases (1440). The data on DoC and the MT Server are kept in sync by periodic offloading of data to the MT Server (1450).
DLCM checks its internal database to verify whether the component is commissioned in MS.
If it is commissioned, a notification is sent to the user for further action such as to decide whether the new version needs to be downloaded (1520).
If the component entry is found in decommissioned components' database, a flag is marked against its entry (1515). At the time of recommissioning of this component, if the flag is set, DLCM sends the notification of version upgrade to the user.
The DLCM module frames the packet to be transmitted (1600). QS then checks MS for any streaming activity (1610). If any streaming session is active, QS backs off for a random period of time (T(Retry)) and then retries (1615) transmitting the frames. If QS detects no activity in MS, it begins the data transmission (1620).
During the transmission, QS checks with MS for streaming activity (1625). If QS finds that a streaming session is active, it marks a checkpoint and waits for a random period of time before checking again (1630). If no streaming session is active, a check is done to verify if any more data need to be transmitted (1635). If yes, transmission is resumed from the last checkpoint.
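A minimal sketch of this checkpointed back-off transmission follows; streaming_active and send are hypothetical interfaces to MS and the link, and T(Retry) is modeled as a uniform random wait.

```python
import random
import time

def offload_frames(frames, streaming_active, send, t_retry=2.0):
    """Checkpointed transmission per blocks 1600-1635: QS backs off for
    a random period whenever a streaming session is active and resumes
    from the last checkpoint once MS is idle."""
    i = 0                                              # checkpoint: next frame
    while i < len(frames):
        if streaming_active():
            time.sleep(random.uniform(0.0, t_retry))   # random back-off
            continue
        send(frames[i])
        i += 1
```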
If the component that is commissioned is a new component, DLCM forms and sends the message for subscribing to the DLC Server (1710). The required information, such as DLC server IP address and authentication information, for subscribing to the DLC server is also part of the package. Otherwise, if the commissioned component is an upgraded version of a component, DLCM updates the version database (1720).
If SR is not in suspended mode, QS first checks whether SR is active, that is, whether there is an active session involving a voice call or data transmission (1820). If so, QS waits for a random period of time and checks repeatedly until SR can be safely put into suspended mode (1830).
After the successful suspension, the component is removed from the memory (1840). If SR was suspended during this decommissioning session, then SR is put back into normal mode (1850). The database is updated suitably (1850). If it is not a permanent decommissioning, then the component is moved to QS for secure storage for future recommissioning (1870).
The second table is related to commissioned components (1920). All the components commissioned in MS are described in this table.
The third table is related to decommissioned components (1930). The components that are temporarily decommissioned from MS and kept in secure storage in QS are described in this table. When a component is decommissioned, its entry is deleted from commissioned components table and added into the decommissioned components table. This table has a field for indicating whether any version upgrade information was received during the time the component was decommissioned. When the component is commissioned again in MS, QS first checks whether this flag is set and if so, sends an appropriate notification.
The fourth table stores the component related static data (1940). This information is obtained from the DLC package.
The fifth table is the one that stores the component related dynamic data (1950) containing information such as date/time during which the component was used and usage time.
Thus, a system and method for automated reprogramming of software radios has been disclosed. Although the present invention has been described particularly with reference to the figures, it will be apparent to one of the ordinary skill in the art that the present invention may appear in any number of systems that provide safe reprogramming functionality. It is further contemplated that many changes and modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the present invention.
Acronym List
- 1. API APPLICATION PROGRAMMER INTERFACE
- 2. AR AUTOMATED REPROGRAMMING
- 3. ASR AUTOMATED SAFE REPROGRAMMING
- 4. CORBA COMMON OBJECT REQUEST BROKER ARCHITECTURE
- 5. CPU CENTRAL PROCESSING UNIT
- 6. DLC DOWNLOADED COMPONENT
- 7. DLCM DOWNLOADED COMPONENT MANAGER
- 8. DoC DISK ON CHIP
- 9. E-Zone ENTRY ZONE
- 10. EXB EXTERNAL FUNCTION EXECUTION BEHAVIOR
- 11. IDLC INSTRUMENTED DOWNLOADED COMPONENT
- 12. IXB INTERNAL FUNCTION EXECUTION BEHAVIOR
- 13. LL LOWER LAYER
- 14. MS MAIN SUBSYSTEM
- 15. MT MOBILE TERMINAL
- 16. M-Zone MONITOR ZONE
- 17. N-Zone NORMAL ZONE
- 18. OS OPERATING SYSTEM
- 19. PU PACK-UNPACK
- 20. QS QUARANTINE SHELL
- 21. RAM RANDOM ACCESS MEMORY
- 22. RTOS REAL-TIME OPERATING SYSTEM
- 23. SDR SOFTWARE DEFINED RADIO
- 24. SR SOFTWARE RADIO
- 25. UL UPPER LAYER
- 26. URL UNIFORM RESOURCE LOCATOR
- 27. V-Zone VERIFICATION ZONE
- 28. VnV VALIDATION AND VERIFICATION
- 29. XMU EXECUTION MEMORY UTILIZATION
- 30. XPU EXECUTION PROCESSOR UTILIZATION
Claims
1. A system for automated reprogramming of software radios in a safe manner, said system comprising of a quarantine space for carrying out a plurality of signature evaluations of a downloaded component in an exclusive environment, a V-Zone subsystem for validating said downloaded component in said quarantine space, and an M-Zone subsystem for validating said downloaded component in Main Subsystem.
2. The system of claim 1, wherein said V-Zone subsystem comprises of a procedure to perform IXB signature verification based on a plurality of internal functions (Fi) of said downloaded component and a plurality of use-cases (Ui) related to said downloaded component, wherein said IXB signature verification is based on given <U,F,C>(gi) and generated, by executing said downloaded component in said quarantine space, <U,F,C>(qi), and comprises of: computing sum of absolute difference (SAD) between Cgi and Cqi for a use-case Ugi such that Fgi=Fqi over a plurality of functions (N) associated with said use-case Ugi, comparing said SAD with N times a pre-defined threshold (SThreshold), and returning OK if said SAD is less than said N times SThreshold.
3. The system of claim 1, wherein said V-Zone subsystem comprises of a procedure to perform EXB signature verification based on a plurality of external functions (Fi) of said downloaded component and a plurality of use-cases (Ui) related to said downloaded component, wherein said EXB signature verification is based on given <U,F,C>(gi) and generated, by executing said downloaded component in said quarantine space, <U,F,C>(qj), and comprises of: computing sum of absolute difference (SAD) between Cgi and Cqi for a use-case Ugi such that Fgi=Fqi over a plurality of functions (N) associated with said use-case Ugi, comparing said SAD with N times a pre-defined threshold (SThreshold), and returning OK if said SAD is less than said N times SThreshold.
4. The system of claim 1, wherein said V-Zone subsystem comprises of a procedure to perform XMU signature verification based on an internal function (F) of said downloaded component and a use-case (U) related to said downloaded component, wherein said XMU signature verification is based on given <<Mg1,Cg1>,..., <Mgi,Cgi>,..., <Mgn,Cgn>> wherein Mgi is the memory allocations and Cgi is the number of memory allocation requests during the ith invocation of said F, and generated <<Mq1,Cq1>,..., <Mqi,Cqi>,..., <Mqn,Cqn>> wherein Mqi is the memory allocations and Cqi is the number of memory allocation requests during the ith invocation of said F in said quarantine space, and comprising of: computing sum (S1) of Mgi over 1<=i<=N, computing sum (Cg) of Cgi over 1<=i<=N, computing sum (S2) of Mqi over 1<=i<=N, computing sum (Cq) of Cqi over 1<=i<=N, computing absolute difference (AD) between S1 and S2, comparing AD with N times a pre-defined threshold (SThreshold), and returning OK if AD less than N times SThreshold and Cg=Cq.
5. The system of claim 1, wherein said V-Zone subsystem comprises of a procedure to perform XPU signature verification based on an internal function (F) of said downloaded component and a use-case (U) related to said downloaded component, wherein said XPU signature verification is based on given <Tg1, Tg2,..., Tgi,..., Tgn> wherein Tgi is the execution time of F with respect to said use-case U during ith invocation, and generated <Tq1, Tq2,..., Tqi,..., Tqn> wherein Tqi is the execution time of said F in said quarantine space with respect to said use-case U during ith invocation, and comprising of: computing sum (S1) of Tgi over 1<=i<=N, computing sum (S2) of Tqi over 1<=i<=N, computing absolute difference (AD) between S1 and S2, comparing AD with N times a pre-defined threshold (SThreshold), and returning OK if AD less than N times SThreshold.
6. The system of claim 1, wherein said V-Zone subsystem comprises of a procedure to perform XPU signature verification based on an internal function (F) of said downloaded component and a use-case (U) related to said downloaded component, wherein said XPU signature verification is based on given X=<X1,..., Xi,..., Xn> wherein Xi is the execution time of F with respect to said use-case U during the ith invocation, and generated Y=<Y1..., Yi,..., Yn> wherein Yi is the execution time of said F in said quarantine space with respect to said use-case U during ith invocation with respect to said use-case U, and comprising of: determining a plurality of mappings such that each Map from X to Y is such that Xi/Map(Xi) for each i (1<=i<=n) is within a pre-defined threshold from any Xj/Map(Xj), determining of Ei={t1,..., tj,... } wherein tj=Xj/Mapi(Xj) wherein Mapi is the ith mapping of said plurality of mappings, computation of mean ei based on elements of Ei, determination of Gf as a set of {e1,..., ei,... }, and returning OK if Gf is non-null.
7. The system of claim 6 further comprises of a procedure to perform XPU verification based on a plurality of internal functions (F1,..., Fi,..., Fn) of said downloaded component and a use-case (U) related to said downloaded component, wherein said XPU verification is based on Gfi={efi1,... } with 1<=i<=n, and comprising of: determination of an element ei in Gfi 1<=i<=n such that any absolute difference (ADij) between ei and ej in Gfj, for any 1<=j<=n is less than a pre-defined threshold (Threshold) and returning OK if ADij<Threshold for any 1<=i,j<=n.
8. The system of claim 1, wherein said M-Zone subsystem comprises of a procedure to perform NXPU signature verification based on a use-case (U) related to said downloaded component, wherein said NXPU signature verification is based on given <C1,..., Ci,..., Cn> wherein Ci=<Ci1, Ci2,... > wherein Cij is the normalized jth sample related to CPU utilization related to said U during the ith execution of said downloaded component and C1′ is the associated NXPU template, and generated D=<D1,..., Di,..., Dn> wherein Di=<Di1, Di2,... > wherein Dij is the normalized jth sample related to CPU utilization related to said U during the ith execution of said downloaded component in said Main Subsystem, and comprising of: arranging D in non-decreasing order of execution time resulting in D′=<D1′,..., Di′,..., Dn′>, time-warping and reducing two elements of D′ in a successive manner starting from D1′ resulting in D1′, warping C1′ and D1′ to determine a path of least error, summing (S) the least error components based on said path of least error, and returning OK if S is less than a pre-defined threshold, wherein said time-warping and reduction of two elements, D1′=<D11, D12,..., D1x> and D2′=<D21, D22,..., D2y> comprises of: warping D1′ and D2′ to determine a path of minimum error, determining mapping between D1′ and D2′ using said path of minimum error wherein <D1j, W1> is mapped onto <D2k, W2> wherein W1 and W2 are windows starting at D1j and D2k respectively, stretching D1j (D1j′) to W2 by interpolation, and computing D3j by point-wise averaging of D1j′ and D2k if W1<W2, stretching D2k (D2k′) to W1 and computing point-wise averaging of D1j and D2k′ if W2<W1, and replacing D1j of D1′ by D3j.
9. The system of claim 1, wherein said M-Zone subsystem comprises of a procedure to perform NXMU signature verification based on a use-case (U) related to said downloaded component, wherein said NXMU signature verification is based on given <C1,..., Ci,..., Cn> wherein Ci=<Ci1, Ci2,... > wherein Cij is the normalized jth sample related to memory utilization related to U during the ith execution of said downloaded component and C1′ is the associated NXMU template, and generated D=<D1,..., Di,..., Dn> wherein Di=<Di1, Di2,... > wherein Dij is the normalized jth sample related to memory utilization related to U during the ith execution of said downloaded component in said Main Subsystem, and comprising of: arranging D in non-decreasing order of execution time resulting in D′=<D1′,..., Di′,... Dn′>, time-warping and reducing two elements of D′ in a successive manner starting from D1′ resulting in D1′, warping C1′ and D1′ to determine a path of least error, summing (S) the least error components based on said path of least error, and returning OK if S is less than a pre-defined threshold, wherein said time-warping and reduction of two elements, D1′=<D11, D12,..., D1x> and D2′=<D21, D22,..., D2y> comprises of warping D1′ and D2′ to determine a path of minimum error, determining mapping between D1′ and D2′ using said path of minimum error wherein <D1j, W1> is mapped onto <D2k, W2>, wherein W1 and W2 are windows starting at D1j and D2k respectively, stretching D1j (D1j′) to W2 by interpolation, and computing D3j by point-wise averaging of D1j′ and D2k if W1<W2, stretching D2k (D2k′) to W1 and computing point-wise averaging of D1j and D2k′ if W2<W1, and replacing D1j of D1′ by D3j.
Type: Application
Filed: Aug 23, 2006
Publication Date: May 24, 2007
Applicant: SATYAM COMPUTER SERVICES LIMITED OF MAYFAIR CENTRE (Secunderabad)
Inventors: Varadarajan Sridhar (Bangalore), Ravi Amur (Bangalore), Korrapati Rao (Bangalore)
Application Number: 11/508,282
International Classification: G06F 12/14 (20060101);