When people talk about a ‘testing tool’, it is usually a test execution tool they have in mind: a tool that can run tests. This type of tool is also known as a ‘test running tool’. Most tools of this type start by capturing or recording manual tests; hence they are also known as ‘capture/playback’, ‘capture/replay’ or ‘record/playback’ tools. The idea is similar to recording a television programme and playing it back.
Test execution tools need a scripting language in order to run tests. The scripting language is essentially a programming language, so any software tester who wants to use a test execution tool directly will need programming skills to create and modify the scripts.
The basic advantage of programmable scripting is that tests can repeat actions (in loops) for different data values (i.e. test inputs), take different routes depending on the outcome of a test (e.g. if a test fails, go to a different set of tests), and be called from other scripts, giving some structure to the set of tests.
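As a rough illustration, a structured script might look like the Python sketch below. The application under test is simulated by a simple pricing function; the names and data are purely hypothetical, not taken from any particular tool.

```python
# Minimal sketch of a structured script (illustrative only): the same steps
# repeat in a loop for several data values, and the script branches on the
# outcome of each check instead of blindly replaying recorded inputs.

def price_order(item: str, quantity: int) -> float:
    """Stand-in for the application under test."""
    prices = {"Widget": 9.99, "Gadget": 14.99}
    return round(prices[item] * quantity, 2)

TEST_INPUTS = [          # test inputs paired with expected results
    ("Widget", 2, 19.98),
    ("Gadget", 5, 74.95),
]

def run_order_tests() -> list[str]:
    failures = []
    for item, quantity, expected_total in TEST_INPUTS:   # repeat for each data row
        actual = price_order(item, quantity)
        if actual != expected_total:                     # take a different route on failure
            failures.append(f"{item}: expected {expected_total}, got {actual}")
    return failures

if __name__ == "__main__":
    # Because it is an ordinary function, this script can also be imported and
    # called from other scripts, giving structure to the overall set of tests.
    print(run_order_tests() or "all tests passed")
```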
However, an automated test is not simply a recording played back for someone to watch: the tests interact with the system, and the system may react slightly differently when the tests are repeated.
Hence captured tests are not suitable if you want to achieve long-term success with a test execution tool because:
- The script doesn’t know what the expected result is until you program it in; it only stores the inputs that were recorded, not test cases.
- A small change to the software may invalidate dozens or even hundreds of scripts.
- The recorded script can only deal with exactly the same conditions as when it was recorded. Unexpected events (e.g. a file that already exists) will not be interpreted correctly by the tool.
- The test input information is ‘hard-coded’, i.e. it is embedded in the individual script for each test (as the sketch after this list illustrates).
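To make the last two points concrete, a captured linear script boils down to something like the following sketch: fixed inputs replayed in order against fixed field names, with no expected results and no branching. The step names are invented for illustration and do not belong to any real tool's API.

```python
# Sketch of what a recorded linear script amounts to (hypothetical steps):
# every input and field name is hard-coded, and there is nothing that says
# what the correct result should be.

recorded_steps = [
    ("click", "menu_file"),
    ("click", "menu_save_as"),
    ("type",  "filename_box", "report.txt"),   # test input hard-coded in the script
    ("click", "ok_button"),
    # If report.txt already exists, the application shows an extra
    # confirmation dialog that this recording knows nothing about.
]

for step in recorded_steps:
    print("replaying:", step)   # playback only repeats the inputs; it cannot
                                # tell whether the application behaved correctly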
There are many better ways to use test execution tools so that they can work well and actually deliver the benefits of running unattended automated tests.
There are at least five levels of scripting, described below; different comparison techniques can also be used:
- Linear scripts which could be created manually or captured by recording a manual test
- Structured scripts using selection and iteration programming structures
- Shared scripts where a script can be called by other scripts so can be re-used – shared scripts also require a formal script library under configuration management
- Data-driven scripts where test data is in a file or spreadsheet to be read by a control script (a minimal sketch follows this list)
- Keyword-driven scripts where all of the information about the test is stored in a file or spreadsheet, with a number of control scripts that implement the tests described in the file.
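To make the data-driven level concrete, here is a minimal sketch in which the test data lives in a separate CSV file and a single control script reads it row by row. The file name, column names and pricing function are invented for the example.

```python
# Minimal data-driven sketch (assumed file layout, not a specific tool):
# new tests are added by editing the data file, not the control script.
import csv

def price_order(item: str, quantity: int) -> float:
    """Stand-in for the application under test."""
    prices = {"Widget": 9.99, "Gadget": 14.99}
    return round(prices[item] * quantity, 2)

def run_data_driven(data_file: str) -> None:
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):        # columns: item, quantity, expected_total
            actual = price_order(row["item"], int(row["quantity"]))
            status = "pass" if actual == float(row["expected_total"]) else "fail"
            print(f'{row["item"]}: {status}')

if __name__ == "__main__":
    # Create a small example data file so the sketch runs stand-alone;
    # in practice testers would maintain this file or spreadsheet themselves.
    with open("order_tests.csv", "w", newline="") as f:
        f.write("item,quantity,expected_total\nWidget,2,19.98\nGadget,5,74.95\n")
    run_data_driven("order_tests.csv")
```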
Data-driven scripting is an advance over captured scripts, but keyword-driven scripts give significantly more benefits. They have also been described as ‘control-synchronized data-driven testing’.
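A keyword-driven sketch, again purely illustrative, might look like this: each row of the test file names a keyword plus its arguments, and a small set of control scripts maps the keywords onto actions against the application under test. The keywords and functions here are invented for the example.

```python
# Keyword-driven sketch (illustrative only).

def open_account(name):        # keyword implementations ("control scripts")
    print(f"opening account for {name}")

def deposit(name, amount):
    print(f"depositing {amount} into {name}'s account")

def check_balance(name, expected):
    print(f"checking that {name}'s balance is {expected}")

KEYWORDS = {
    "OpenAccount":  open_account,
    "Deposit":      deposit,
    "CheckBalance": check_balance,
}

# In practice these rows would come from a file or spreadsheet maintained by
# testers who never need to read or change the control scripts themselves.
test_rows = [
    ("OpenAccount",  "Alice"),
    ("Deposit",      "Alice", "100"),
    ("CheckBalance", "Alice", "100"),
]

for keyword, *args in test_rows:
    KEYWORDS[keyword](*args)    # dispatch each keyword to its implementation
```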
Although test execution tools are commonly referred to as testing tools, they are actually best used for regression testing, so they could more accurately be called ‘regression testing tools’ rather than ‘testing tools’.
A test execution tool mostly runs tests that have already been run before. One of the most significant benefits of using this type of tool is that whenever an existing system is changed (e.g. for a defect fix or an enhancement), all of the tests that were run earlier can be run again, to make sure that the changes have not disturbed the existing system by introducing or revealing a defect.
Features or characteristics of test execution tools are:
- To capture (record) test inputs while tests are executed manually;
- To store an expected result in the form of a screen or object, to be compared against the next time the test is run;
- To execute tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used);
- To do the dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values;
- To initiate post-execution comparison;
- To log results of tests run (pass/fail, differences between expected and actual results);
- To mask or filter subsets of actual and expected results, for example excluding a screen-displayed current date and time that is of no interest to a particular test (see the sketch after this list);
- To measure the timings for tests;
- To synchronize inputs with the application under test, e.g. wait until the application is ready to accept the next input, or insert a fixed delay to represent human interaction speed;
- To send the summary results to a test management tool.
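As an example of masking before comparison, the sketch below replaces the on-screen date and time with a placeholder in both the actual and the expected output, so only genuine differences can cause a failure. The regular expression and the sample texts are assumptions for illustration, not a specific tool's feature.

```python
# Sketch of masking/filtering before dynamic comparison.
import re

TIMESTAMP = re.compile(r"\d{2}/\d{2}/\d{4} \d{2}:\d{2}")   # assumed date/time format

def masked(text: str) -> str:
    """Replace the volatile date/time with a placeholder before comparing."""
    return TIMESTAMP.sub("<DATETIME>", text)

expected = "Report generated 01/01/2024 09:00 - 3 items processed"
actual   = "Report generated 17/06/2024 14:32 - 3 items processed"

# The differing timestamps are filtered out, so the comparison passes;
# any real difference in the remaining text would still be logged as a failure.
print("pass" if masked(actual) == masked(expected) else "fail")
```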