SYSTEM AND METHOD FOR SAFETY-CRITICAL SOFTWARE AUTOMATED REQUIREMENTS-BASED TEST CASE GENERATION
An automated requirements-based test case generation method includes constructing a software architecture model derived from the architectural information of a software design model, allocating requirement models into blocks/operators of the software architecture model, and generating, from the software architecture model, component-level requirements-based test cases configured to be executable at different levels of the software architecture. The component-level requirements-based test case generation method includes receiving a software architecture along with allocated requirement models represented in a hierarchical data flow diagram, selecting one of the software components, building an intermediate test model for the selected component by automatically attaching at least one of test objectives or constraints to the corresponding software architecture model blocks/operators based on the selected test strategy, and generating human- and machine-readable test cases with the test generator for further automatic conversion to test executables and test review artifacts. A system and a non-transitory computer-readable medium for implementing the method are also disclosed.
This patent application claims the benefit of priority as a continuation-in-part, under 35 U.S.C. § 120, to U.S. patent application Ser. No. 14/947,633, filed Nov. 20, 2015, titled “SYSTEM AND METHOD FOR SAFETY-CRITICAL SOFTWARE AUTOMATED REQUIREMENTS-BASED TEST CASE GENERATION” (now U.S. Pat. No. TBD; issued MONTH DD, 2018), the entire disclosure of which is incorporated herein by reference.
BACKGROUND
Safety-critical software, such as aviation software, is required by certification standards (e.g., DO-178B/C for aviation software) to be strictly verified against certification objectives. Testing is an essential part of the verification process. Manual test case generation from requirements is difficult and time-consuming, especially for large, complex software.
Automatically generated test cases and/or test procedures derived from the high-level software requirements can help reduce the cost introduced by manual test case generation and review activities. Those test cases and/or test procedures generated from the specifications can be executed on the associated low-level design implementations through a test conductor.
Conventional test tools and/or models are not able to generate requirements-based test cases at different levels of the design model. Test cases produced by conventional tools cannot be directly executed on components at multiple levels in the design.
In accordance with embodiments, systems and methods automatically create a software architecture model from the software design architecture along with requirement models to automate multi-level architectural requirements-based test case generation based on the proposed software architecture model.
In accordance with embodiments, the software architecture model and its requirements allocation are constructed using a model-based development (MBD) tool with the representation of a hierarchical data flow diagram. As opposed to conventional MBD tools, which are traditionally used for low-level design, an embodying MBD tool automatically creates the software architecture model from the software design architecture and generates corresponding test cases for the system-level or high-level requirements.
Embodying systems and methods can implement component-level requirements-based test case generation to automatically generate test cases for components at different levels in the software architecture.
The component-level test case generator unit 140 can use the software architecture model with allocated requirements to generate, step 215, unit/module-level requirements-based test cases. The test case generator unit 140 can also generate, step 220, integration-level test cases to verify whether a code component or integration complies with the allocated requirements.
The automatic test case generation strategies (i.e., to attach the test objectives and the constraints) can be based on the general form of a requirement. In natural structural English language, the form of a requirement can be expressed as:
- <antecedent expression> implies <consequent expression>,

where <antecedent expression> is a logic expression on monitored variables, and <consequent expression> is a logic expression on controlled variables.
A requirements coverage strategy includes, for each requirement, generating one test case in which the requirement is satisfied and the antecedent expression is true. This is done by inserting test objectives and constraints and running a test generation engine that can drive the input sequences to achieve the test objectives.
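As an illustration of the strategy above, the sketch below brute-forces one satisfying assignment for a purely Boolean antecedent; a real tool would use a test generation engine rather than enumeration, and the example requirement ("a and b implies pump_on") and all names in it are hypothetical:

```python
from itertools import product

# A minimal sketch of the requirements coverage strategy for Boolean
# monitored variables: find one input assignment that makes the
# antecedent true; the consequent then yields the expected outputs.
# The requirement and variable names below are hypothetical.

def generate_test_case(monitored, antecedent, consequent):
    """Return (inputs, expected_outputs) for the first assignment
    satisfying the antecedent, or None if it is unsatisfiable."""
    for values in product([False, True], repeat=len(monitored)):
        env = dict(zip(monitored, values))
        if antecedent(env):
            return env, consequent(env)
    return None

# Hypothetical requirement: "a and b implies pump_on".
case = generate_test_case(
    ['a', 'b'],
    lambda e: e['a'] and e['b'],      # antecedent
    lambda e: {'pump_on': True})      # consequent (expected outputs)
```

If no assignment satisfies the antecedent, the test objective is unreachable, mirroring the unreachability outcomes discussed later for the generation methods.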
By way of example, the insertion of a test objective can be done using test objective and test condition blocks from a commercial design verifier block library in the selected model-based development tool (e.g., the Simulink Design Verifier blocks available from MathWorks). The test generation engine can be used to drive the inputs to achieve the test objectives.
A logic condition coverage (LCC) strategy can be implemented to achieve functional coverage of logic equation conditions. Each condition within a logic equation is demonstrated to have an effect on the outcome of the logic equation by varying only that condition while holding all others that could affect the outcome fixed. Consider the examples in Table 1, which depicts logic condition coverage for two variables, where two Boolean values (a and b) are the conditions for the listed Boolean operators. Table 1 indicates whether a test case is necessary to achieve LCC (✓) or not (✗). When the antecedent expression contains one of these operators, test cases are generated for each of the corresponding combinations marked with (✓); this generalizes to any number of operands.
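The Table 1 selections can also be derived rather than hard-coded. The sketch below is an illustrative reconstruction, not the patent's table: it keeps an input combination for an operator only if flipping exactly one condition changes the outcome, i.e. the combination demonstrates some condition's effect on the equation.

```python
from itertools import product

# Derive LCC-relevant input combinations for an n-ary Boolean
# operator: a combination is selected if toggling a single condition
# changes the operator's outcome (that condition has an effect there).

def lcc_combinations(op, n=2):
    selected = []
    for combo in product([False, True], repeat=n):
        outcome = op(*combo)
        for i in range(n):
            flipped = list(combo)
            flipped[i] = not flipped[i]
            if op(*flipped) != outcome:   # condition i has an effect
                selected.append(combo)
                break
    return selected

and_cases = lcc_combinations(lambda a, b: a and b)  # (F,F) not needed
or_cases = lcc_combinations(lambda a, b: a or b)    # (T,T) not needed
```

For AND, (False, False) is excluded because no single flip changes the outcome; symmetrically, (True, True) is excluded for OR.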
An input masking strategy can achieve masking Modified Condition/Decision Coverage (MC/DC). Masking MC/DC meets the definition of independent effect by guaranteeing the same minimum number of test cases at each logical operator as unique-cause MC/DC, and is acceptable for meeting the MC/DC objective of safety-critical software development standards (e.g., DO-178B/C). Masking refers to the concept that specific inputs to a logic construct can hide the effect of other inputs to the construct. For example, a false input to an AND operator masks all other inputs, and a true input to an OR operator masks all other inputs. The masking approach to MC/DC allows more than one input to change in an independence pair, as long as the condition of interest is shown to be the only condition that affects the value of the decision outcome. However, analysis of the internal logic of the decision is needed to show that the condition of interest is the only condition causing the value of the decision's outcome to change.
The input masking test generation strategy attaches test objectives according to the following steps:
For each basic proposition (input condition) of the antecedent expression: (1) obtain the set S of all sub-expressions that contain the proposition, except the proposition itself; (2) for each expression in S whose top-level operation is an OR gate, substitute that expression by its negation in S; (3) create an expression e that is the conjunction of all expressions in S together with the basic proposition; and (4) create a test objective that requires expression e to be true.
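A rough sketch of masking MC/DC generation over a small expression tree follows. Rather than reproduce the set-S construction above verbatim, it encodes the masking condition that construction aims at (siblings of AND ancestors must be true, siblings of OR ancestors must be false) and brute-forces an independence pair. It assumes each condition appears once and only AND/OR operators; this is an illustration, not the patented procedure.

```python
from itertools import product

# Expressions are variable names (str) or ('and'|'or', left, right).

def evaluate(expr, env):
    """Evaluate an expression tree against an assignment `env`."""
    if isinstance(expr, str):
        return env[expr]
    op, left, right = expr
    l, r = evaluate(left, env), evaluate(right, env)
    return (l and r) if op == 'and' else (l or r)

def variables(expr):
    if isinstance(expr, str):
        return {expr}
    return variables(expr[1]) | variables(expr[2])

def masking_constraints(expr, target, path=()):
    """Collect (sibling, required_value) pairs along the path from
    the root to `target`: AND siblings True, OR siblings False."""
    if isinstance(expr, str):
        return path if expr == target else None
    op, left, right = expr
    for child, sibling in ((left, right), (right, left)):
        found = masking_constraints(child, target,
                                    path + ((sibling, op == 'and'),))
        if found is not None:
            return found
    return None

def mcdc_pair(expr, target):
    """Brute-force an independence pair for `target` under masking."""
    constraints = masking_constraints(expr, target)
    if constraints is None:
        return None
    names = sorted(variables(expr))
    pair = {}
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(evaluate(s, env) == v for s, v in constraints):
            pair[env[target]] = env
            if len(pair) == 2:
                return pair[True], pair[False]
    return None
```

For example, with `('or', ('and', 'a', 'b'), 'c')` and condition `a`, the pair holds `b` true and `c` false while `a` toggles, so only `a` affects the decision outcome.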
A data completeness analysis strategy can analyze one or more variables that appear in the requirement and select test objectives to test different points within the physical or functional range of the particular variables. The selection of test objectives can be based on, for example, the variable type. In one implementation, numeric variables could be tested at their minimum, maximum, boundaries, etc.; enumerated and Boolean variables can be tested on all possible values and/or states.
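One way to organize the type-driven selection is sketched below; the type names and the particular numeric points chosen (minimum, just inside the boundaries, midpoint, maximum) are illustrative assumptions, not the patent's prescription:

```python
# Select data-completeness test points keyed on variable type.
# Type names, ranges, and the chosen points are illustrative.

def completeness_points(var_type, lo=None, hi=None, values=None):
    """Pick test points covering a variable's physical or
    functional range."""
    if var_type == 'numeric':
        # minimum, boundary neighbors, a midpoint, and maximum
        return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]
    if var_type in ('boolean', 'enumerated'):
        return list(values)   # every possible value or state
    raise ValueError(f'unsupported variable type: {var_type}')

speed_points = completeness_points('numeric', lo=0, hi=250)
mode_points = completeness_points('enumerated',
                                  values=['OFF', 'STANDBY', 'ACTIVE'])
```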
An event strategy can ensure that each event can be triggered at least once. This strategy can also ascertain that no event is continuously triggered. The generated event test cases and procedures can trigger particular events and verify the outputs while the other input conditions remain constant.
A list strategy can analyze list variables and operators that appear in the requirement and select test objectives to test different properties of lists. For example, this strategy can determine whether list operations take place at different positions in the lists, and ensure that each list variable is tested at least at the minimum and maximum list lengths.
A decomposition and equation strategy can analyze functions and/or equations inside the requirement. These functions and/or equations can be undefined in some implementations. Test objectives can be selected by analyzing the input and/or output parameters of these functions or equations, and error-prone points in the defined functions or equations.
An equivalence class strategy, a boundary value analysis strategy, and a robustness strategy can each analyze inequalities in the requirement and select test objectives based on the equivalence class partitions induced by those inequalities. An equivalence class strategy can select one or more test objectives for each normal equivalence class; a boundary value analysis strategy can select one or more test objectives at the boundary between every two equivalence classes; and a robustness strategy can select one or more test objectives for the abnormal equivalence classes.
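The three inequality-driven strategies above can be sketched for an integer variable whose inequalities partition a valid range; the range and threshold values below are illustrative assumptions:

```python
# Derive equivalence-class, boundary-value, and robustness test
# points for an integer variable. The valid range and thresholds
# are illustrative assumptions.

def partition_points(valid_lo, valid_hi, thresholds):
    """Return (normal, boundary, robust) test points induced by
    `thresholds` partitioning [valid_lo, valid_hi]."""
    edges = [valid_lo] + sorted(thresholds) + [valid_hi]
    # equivalence class strategy: one representative per normal class
    normal = [(edges[i] + edges[i + 1]) // 2
              for i in range(len(edges) - 1)]
    # boundary value analysis: points at and around each boundary
    boundary = [p for t in sorted(thresholds)
                for p in (t - 1, t, t + 1)]
    # robustness: abnormal classes just outside the valid range
    robust = [valid_lo - 1, valid_hi + 1]
    return normal, boundary, robust

# e.g. a requirement comparing a value against 50 over [0, 100]
normal, boundary, robust = partition_points(0, 100, [50])
```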
A timing strategy can analyze one or more timing operators in the requirement and select test objectives to test different points in the time span, such as, for example, a leading trigger and/or a lagging trigger. Events can be taken into consideration so that events are not continuously triggered throughout the time span.
With reference again to
The model-checking and theorem-proving methods can each utilize formal methods tools, based respectively on model-checking or theorem-proving techniques, that check the satisfaction of the negation of the test objectives against the requirements. If not satisfied, a counterexample can be generated and used to generate test cases; if satisfied, the particular test objective is unreachable.
The constraint-solving method can use constraint solvers and/or optimization tools to solve the constraints in the test objective to find a feasible solution as a test case. If the constraints are identified as infeasible, the corresponding test objective is unreachable.
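The constraint-solving method can be sketched over a small bounded integer domain as below; a real implementation would invoke a constraint or SMT solver rather than enumerate, and the constraints shown are hypothetical:

```python
from itertools import product

# Enumerate assignments over a bounded integer domain and return the
# first one satisfying all constraints of a test objective, or report
# the objective unreachable. Enumeration stands in for a real
# constraint solver; the example constraints are hypothetical.

def solve(variables, constraints, domain=range(-10, 11)):
    for values in product(domain, repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(c(env) for c in constraints):
            return env    # feasible: usable as a test case
    return None           # infeasible: test objective unreachable

feasible = solve(['x', 'y'],
                 [lambda e: e['x'] + e['y'] == 5,
                  lambda e: e['x'] > e['y']])
infeasible = solve(['x'], [lambda e: e['x'] > 100])
```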
The reachability resolution method can model a set of requirements as a hybrid model which combines discrete transitions and dynamics (if requirements are stateful). The model can then be analyzed to find a feasible path from initial conditions to reach the test objective, where the dynamics are approximated and/or analytically solved during the path finding. If a feasible path is identified, it can be used to generate test cases; if no feasible paths can reach the test objective, the test objective is identified as unreachable.
With reference again to
Collectively,
A user can select “component2” block (
In accordance with embodiments, a hierarchical data flow diagram (i.e., a software architecture model along with requirement models) is automatically created to capture requirements and design information. This hierarchical data flow diagram is used to generate requirements-based test cases at different levels in the software architecture. In accordance with embodiments, system design information is used to build the hierarchical data flow diagram, where requirement models are allocated inside modules of the hierarchical data flow diagram. The requirements allocations are based on the requirements-module traceability information from the design information. Test objectives and constraints can be attached to the software architecture model according to a user-selected test strategy. Automatic test case generation is based on the hierarchical data flow diagram to generate requirements-based test cases at different levels in the design architecture that satisfy the test objectives and constraints. The generated test cases can be directly executed on components at multiple levels in the design.
In accordance with some embodiments, a computer program application stored in non-volatile memory or a computer-readable medium (e.g., register memory, processor cache, RAM, ROM, hard drive, flash memory, CD ROM, magnetic media, etc.) may include code or executable instructions that, when executed, may instruct and/or cause a controller or processor to perform methods discussed herein, such as the automated requirements-based test case generation described above.
The computer-readable medium may be a non-transitory computer-readable medium including all forms and types of memory and all computer-readable media except for a transitory, propagating signal. In one implementation, the non-volatile memory or computer-readable medium may be external memory.
Although specific hardware and methods have been described herein, note that any number of other configurations may be provided in accordance with embodiments of the invention. Thus, while there have been shown, described, and pointed out fundamental novel features of the invention, it will be understood that various omissions, substitutions, and changes in the form and details of the illustrated embodiments, and in their operation, may be made by those skilled in the art without departing from the spirit and scope of the invention. Substitutions of elements from one embodiment to another are also fully intended and contemplated. The invention is defined solely with regard to the claims appended hereto, and equivalents of the recitations therein.
Claims
1. A method for automated requirements-based test case generation, the method comprising:
- constructing a software architecture model in a model based development tool, the software architecture model automatically derived from architectural information of a software design model;
- allocating requirement models into different components of the software architecture model;
- a test case generator unit generating component level requirements-based test cases from one or more levels of the software architecture model; and
- the generated requirements-based test cases configured to be executable at different levels in the software architecture.
2. The method of claim 1, including allocating the requirement models by connecting corresponding monitored or controlled variables with at least one of an input port and an output port of respective ones of the different components.
3. The method of claim 1, including the test case generator unit generating integration level test cases, and applying the integration level test cases to verify if a code module complies with the allocated requirements.
4. The method of claim 1, including:
- receiving the software architecture model in the form of a hierarchical data flow diagram derived from the software design along with the allocated requirement models, the hierarchical data flow diagram including one or more blocks/operators mapping to corresponding components in the software design;
- selecting one of the software components from the software design for test case generation; and
- building an intermediate test model based on the selected component by automatically attaching at least one of test objectives and test constraints to the corresponding software architecture model block/operator.
5. The method of claim 4, including selecting the software component based on a level of test generation.
6. The method of claim 1, including generating the test cases according to at least one strategy selected from the list of a requirements coverage strategy, a logic condition coverage strategy, an input masking strategy, a data completeness analysis strategy, an event strategy, a list strategy, a decomposition and equation strategy, an equivalence class strategy, a boundary value analysis strategy, a robustness strategy, and a timing strategy.
7. The method of claim 4, including:
- generating requirements-based test cases by performing at least one of model-checking, theorem proving, constraint solving, and reachability resolution methods on the intermediate test model; and
- translating the generated test cases into test scripts for test execution, and into test artifacts for review.
8. A non-transitory computer-readable medium having stored thereon instructions which when executed by a processor cause the processor to perform a method for automated requirements-based test case generation, the method comprising:
- constructing a software architecture model, the software architecture model automatically derived from architectural information of a software design model;
- allocating requirement models into different blocks/operators of the software architecture model;
- generating component level requirements-based test cases from one or more levels of the software architecture model; and
- the generated requirements-based test cases configured to be executable at different levels in the software architecture.
9. The non-transitory computer-readable medium of claim 8, including instructions to cause the processor to allocate the requirement models by connecting corresponding monitored or controlled variables with an input port or an output port of respective ones of the different blocks/operators.
10. The non-transitory computer-readable medium of claim 8, including instructions to cause the processor to generate integration level test cases, and apply the integration level test cases to verify if a code module complies with the allocated requirements.
11. The non-transitory computer-readable medium of claim 8, including instructions to cause the processor to:
- receive the software architecture model in the form of a hierarchical data flow diagram derived from the software design along with the allocated requirement models, the hierarchical data flow diagram including one or more blocks/operators mapping to corresponding components in the software design;
- select one of the software components from the software design for test case generation; and
- build an intermediate test model based on the selected component by automatically attaching at least one of test objectives and test constraints to the corresponding software architecture model block/operator.
12. The non-transitory computer-readable medium of claim 10, including instructions to cause the processor to generate the test cases according to at least one strategy selected from the list of a requirements coverage strategy, a logic condition coverage strategy, an input masking strategy, a data completeness analysis strategy, an event strategy, a list strategy, a decomposition and equation strategy, an equivalence class strategy, a boundary value analysis strategy, a robustness strategy, and a timing strategy.
13. The non-transitory computer-readable medium of claim 11, including instructions to cause the processor to:
- generate requirements-based test cases by performing at least one of model-checking, theorem proving, constraint solving, and reachability resolution methods on the intermediate test model; and
- translate the generated test cases into test scripts for test execution, and into test artifacts for review.
14. A system for automated requirements-based test case generation, the system comprising:
- a model based development tool including a control processor configured to execute instructions, the control processor connected to a communication link; and
- a component level test case generator unit configured to automatically generate test cases.
15. The system of claim 14, including the control processor configured to execute instructions that cause the control processor to perform the steps of:
- deriving a software architecture model from the software design;
- allocating requirement models into different blocks/operators of a software architecture model;
- generating component level requirements-based test cases.
16. The system of claim 15, including the control processor configured to execute instructions that cause the control processor to generate integration level test cases, and apply the integration level test cases to verify if a code module complies with the software architecture model and the allocated requirement models.
17. The system of claim 15, including the control processor configured to execute instructions that cause the control processor to:
- receive the software architecture model in the form of a hierarchical data flow diagram derived from the software design along with the allocated requirement models, the hierarchical data flow diagram including one or more blocks/operators mapping to corresponding components in the software design;
- select one of the software components from the software design for test case generation; and
- build an intermediate test model based on the selected component by automatically attaching at least one of test objectives and test constraints to the corresponding software architecture model block/operator.
18. The method of claim 6, the input masking strategy including masking Modified Condition/Decision Coverage (MC/DC) to allow more than one input of an input condition to change in an independence pair.
19. The non-transitory computer-readable medium of claim 12, the input masking strategy including masking Modified Condition/Decision Coverage (MC/DC) to allow more than one input of an input condition to change in an independence pair.
20. The system of claim 15, the generating component level requirements-based test cases including an input masking strategy that allows more than one input of an input condition to change in an independence pair.
Type: Application
Filed: Mar 9, 2018
Publication Date: Jul 12, 2018
Inventors: Meng LI (Niskayuna, NY), Michael Richard DURLING (Gansevoort, NY), Kit Yan SIU (Niskayuna, NY), Italo OLIVEIRA (Rio de Janeiro), Han YU (Niskayuna, NY), Augusto Marasca DE CONTO (Rio de Janeiro)
Application Number: 15/916,660