Which of the following BEST describes why it is important to separate test definition from test execution in a TAA?
- A . It allows developing steps of the test process without being closely tied to the SUT interface.
- B . It allows choosing different paradigms (e.g. event-driven) for the interaction between the TAS and the SUT
- C . It allows specifying test cases without being closely tied to the tool used to run them against the SUT
- D . It allows testers to find more defects on the SUT
A web application was released into production one year ago. It has regular releases which follow a V-model lifecycle, and testing is well-established and fully integrated into the development lifecycle. You have been asked to implement a TAS for the regression test suite. The regression tests have been developed via the GUI and are expected to be run at least four times a month, for each planned release, for the whole operational life of the solution (six years). Each screen of the GUI uses several third-party controls which are not compatible with the existing automation solutions. The environment for the automation will be stable, fully controllable and separated from other environments (development, staging, production).
What could be the MOST problematic for this TAS?
- A . Maturity of the test process
- B . Complexity to automate
- C . Frequency of use
- D . Sustainability of the automated environment
Consider the following layers of the gTAA structure:
a. Test generation layer
b. Test definition layer
c. Test execution layer
d. Test adaptation layer
Consider the following capabilities associated with these layers:
1. Acquire all the necessary resources before each test and release them all after the run, in order to avoid interdependencies between tests.
2. Allow the automated test scripts, at an abstract level, to interact with components, configurations and interfaces of the SUT.
3. Design test directives that allow configuring the algorithms used to automatically produce the test cases from a given model of the SUT.
4. Allow the definition and implementation of test cases and data by means of templates and/or guidelines.
Which of the following BEST matches each layer with the appropriate capability?
- A . a-3, b-4, c-1, d-2
- B . a-4, b-3, c-1, d-2
- C . a-4, b-3, c-2, d-1
- D . a-3, b-4, c-2, d-1
You are evaluating several test modelling tools and want to automatically generate test cases within the tool, where many different combinations of input data are created.
You then want to export the test cases into a .csv file which can be read by a functional test execution tool using a data-driven or keyword-driven scripting method.
You have investigated several tools and there is only one tool that provides all the necessary features defined by your team with the exception of the export facility. It does not provide an export into either .xls or .csv formats.
What would be the BEST next step regarding the selection of this tool?
- A . Consider another tool that is more “fit for purpose” and has all the features required.
- B . Explore the possibility of creating your own export facility.
- C . Ask the vendor and use forums to see if a solution is available or going to be available in the future.
- D . Purchase this tool and generate the .csv file manually.
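If option B were pursued, a small export facility is not hard to build. The sketch below assumes the modelling tool can hand over its generated test cases as a list of dictionaries (a hypothetical intermediate format; the field names are invented) and writes them to CSV for a data-driven execution tool:

```python
import csv
import io

def export_test_cases(test_cases, fieldnames, out_stream):
    """Write generated test cases (a list of dicts) as CSV rows.

    `test_cases` is a hypothetical intermediate format obtained from
    the modelling tool; each dict maps a column name to a value.
    """
    writer = csv.DictWriter(out_stream, fieldnames=fieldnames)
    writer.writeheader()
    for case in test_cases:
        writer.writerow(case)

# Illustrative example: two generated combinations of input data
cases = [
    {"test_id": "TC001", "username": "alice", "amount": "10"},
    {"test_id": "TC002", "username": "bob", "amount": "9999"},
]
buffer = io.StringIO()
export_test_cases(cases, ["test_id", "username", "amount"], buffer)
csv_text = buffer.getvalue()
```

In practice, the feasibility of this route depends on whether the modelling tool exposes its generated test cases through any machine-readable interface at all, which is exactly what options B and C would establish.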
You are using the gTAA to create a TAS for a project. The TAS is aimed at automatically generating and executing test cases based on a use-case modeling approach that uses UML as the modeling language. All interaction between the TAS and the SUT will be at the API and GUI level only.
Which of the following components of the gTAA would you EXCLUDE from the TAS?
- A . The test reporting component of the test execution layer.
- B . The Test execution component of the test generation layer
- C . The test execution component (test engine) of the test execution layer
- D . The Command Line Interface (CLI) component of the test adaptation layer
Which of the following describes how a test execution report is likely to be used?
- A . To understand which test step caused the failure in a test case
- B . To identify problematic areas of the SUT by keeping a history showing which test cases fail the most
- C . To measure coverage of the test basis by a test suite
- D . To record how a test case failure has been fixed
B
Explanation:
A test execution report typically includes information on the success or failure of individual test cases. By maintaining a historical record of these reports, you can identify trends and see which areas of the System Under Test (SUT) fail most often, that is, the parts of the system that most frequently cause test cases to fail. While the other options (A, C, D) could be part of the broader testing process, they are not the primary use of a test execution report.
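The idea of mining execution-report history for problematic areas can be sketched in a few lines; the result records and SUT area names below are purely illustrative:

```python
from collections import Counter

# Hypothetical history of test execution reports:
# (test case id, SUT area exercised, recorded status)
history = [
    ("TC01", "login", "fail"),
    ("TC02", "checkout", "pass"),
    ("TC01", "login", "fail"),
    ("TC03", "login", "fail"),
    ("TC02", "checkout", "pass"),
]

# Count failures per SUT area across all recorded runs
failures_per_area = Counter(area for _, area, status in history if status == "fail")

# The area with the most recorded failures is the most problematic
most_problematic = failures_per_area.most_common(1)[0][0]
```

The value of the report grows with the amount of history kept: a single run shows which tests failed, but only the accumulated record reveals the trend.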
Which of the following is NOT a factor to consider when you are asked to ensure an effective transition from manual to automated tests?
- A . Complexity to automate the manual test cases
- B . Correctness of test data and test cases
- C . The look and feel of the SUT
- D . The controllability of the SUT
What represents good practice when automating a manual regression test suite?
- A . Test data shared between tests should, where feasible, be stored and accessed from a single source to avoid duplication or introduction of error.
- B . All existing manual tests should be decomposed into several smaller automated tests to reduce functional overlap.
- C . Remove inter-dependencies between tests to reduce automation failures and costly error analysis.
- D . Once a manual test has been automated, execute it immediately to identify whether it operates correctly.
A
Explanation:
This option represents good practice in test automation. Centralizing the test data helps in avoiding duplication or introduction of errors. It makes the process more efficient and reliable by ensuring consistency and simplifying the maintenance of test data.
Option B (decomposing all existing manual tests into several smaller automated tests) is not always practical or necessary, and could potentially lead to the loss of valuable test coverage if not done carefully.
Option C: removing inter-dependencies between tests can indeed reduce the risk of automation failures and costly error analysis, but it is not always possible or feasible, and sometimes test dependencies are important to maintain for valid testing of complex systems.
Option D: executing a test immediately after automation is not best practice, as the automated test script should be reviewed and verified before execution. Moreover, the test might fail not because of the automation process but for other reasons, such as test environment or test data issues, so immediate execution does not necessarily validate the success of the automation.
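The single-source principle behind option A can be illustrated with a minimal sketch. Here an in-memory dict stands in for the central file or database, and every test reads shared data through one accessor, so any change is made in exactly one place (all names and values are invented):

```python
# Single source of shared test data (in practice a central file or database;
# an in-memory dict is used here purely for illustration)
_SHARED_TEST_DATA = {
    "valid_user": {"name": "alice", "password": "s3cret"},
    "invalid_user": {"name": "mallory", "password": "wrong"},
}

def get_test_data(key):
    """Single accessor: every automated test obtains shared data here,
    so duplicated, drifting copies of the data cannot arise."""
    # Return a copy so individual tests cannot mutate the single source
    return dict(_SHARED_TEST_DATA[key])

# Two different tests using the same single source
login_data = get_test_data("valid_user")
audit_data = get_test_data("valid_user")
login_data["name"] = "changed"  # a local change does not corrupt the source
```

Because each test receives its own copy, the shared source stays authoritative even when a test modifies the data it was given.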
Your functional regression test automation suite ran successfully for the first two sprints and no failures were encountered during the runs. The automation suite records the status of each test case as either ‘pass’ or ‘fail’ and has excellent recovery capability built in.
For the third sprint, the TAS log reported several test cases with a status of ‘fail’. You investigated each test case and found that most failures were due to a defect in one of the keyword scripts, rather than in the SUT. For those where the failure was in the SUT, defect reports were raised, but several were returned by the developers asking for more information to enable them to reproduce the problem.
Which additional log items SHOULD you add to the TAS that would BEST improve failure analysis and defect reporting for future sprints?
a) Dynamic measurement information about the SUT.
b) A status of ‘TAS error’, in addition to ‘pass’ and ‘fail’, for each test case.
c) Use of a colour coding scheme so that ‘pass’ is in red and ‘fail’ is in green.
d) A counter to determine how many times each test case has been executed.
e) System configuration information including software/firmware and operating system versions.
f) A copy of the source code for all Keyword scripts executed.
- A . a and b
- B . d and e
- C . a and c
- D . b and e
You are planning the pilot for an in-house developed Test Automation solution (TAS).
Which two of the following would be important steps to take as part of the planning process?
a) Review your organisation’s current projects and identify which one would be most suitable to pilot the TAS.
b) Ensure that the developers will provide the necessary commitment for the TAS deployment activities.
c) Run a series of training workshops for new users of the TAS before they are asked to use it.
d) Develop a project plan for the pilot and reserve the necessary budget and resources for its implementation.
e) Ask the developers to provide any missing functionality during the deployment activities.
- A . a and b
- B . b and d
- C . c and d
- D . c and e
The GUI of a Customer Relationship Management (CRM) application has been delivered through Internet Explorer with proprietary ActiveX and Java controls. This implementation enables rich client capabilities, but specific commercial automation tools are necessary to automate functional test cases at the GUI level. The aim is to demonstrate whether a small set of the commercial tools are able to properly recognize actions taken by a tester when interacting with the GUI of the CRM application.
Which of the following scripting techniques would be MOST suitable in this scenario?
- A . Data-driven scripting
- B . Keyword-driven scripting
- C . Linear scripting
- D . Structured scripting
What are the four horizontal layers of the gTAA?
- A . Test adaptation, test execution, test design, test definition
- B . Test generation, test execution, test definition, test APIs
- C . Test generation, test definition, test execution, test adaptation
- D . Test definition, test execution, test reporting, test adaptation
C
Explanation:
The gTAA is a reference architecture for test automation systems, and it is organized into four layers to separate different concerns. Each layer has its own responsibilities and interacts with the others to form a complete test automation system. The Test Generation layer is for creating test cases, the Test Definition layer is for defining those cases and their data, the Test Execution layer is for running the test cases, and the Test Adaptation layer handles interactions between the test automation system and the system being tested.
You have implemented a keyword-driven scripting framework, which uses a test execution tool to run the tests. This has been in use for the past year and all of the teams now use this framework as the standard approach for test execution.
The teams all work on different aspects of the SUT and they have all experienced significant benefits in the use of this scripting framework. However, on closer examination, you have discovered that there are numerous instances where the teams have the same functionality to test but are using different keywords.
One of your objectives for improvement is to create consistency among the teams.
What is the BEST way to handle this situation?
- A . Move to a model-based approach to scripting where the models include the keywords.
- B . Do nothing, each team are working in isolation and they are all experiencing significant benefits in the way they are currently working.
- C . Provide each team with a set of guidelines and naming conventions for keywords.
- D . Create a central library of keywords and associated definitions for each team to use.
D
Explanation:
By having a shared library, all teams can refer to the same keywords, creating a more consistent and unified scripting approach. This makes it easier to manage and maintain the scripts, especially when different teams need to work together or share resources. It also reduces the risk of misinterpretations or misunderstandings caused by different teams using different keywords for the same functionality. The other options, while they might be valid in some other contexts, do not directly address the issue of consistency among the teams.
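The central library from option D can be sketched as a single registry that maps each agreed keyword name to exactly one implementation; team scripts then refer to keywords by name instead of defining their own variants (the keyword names and bodies below are illustrative stubs):

```python
# Central keyword library shared by all teams (names are illustrative)
KEYWORD_LIBRARY = {}

def keyword(name):
    """Register a function as the single definition of a keyword."""
    def register(func):
        KEYWORD_LIBRARY[name] = func
        return func
    return register

@keyword("login")
def login(user, password):
    # Stub standing in for real GUI/API interaction with the SUT
    return f"logged in as {user}"

@keyword("open_page")
def open_page(url):
    return f"opened {url}"

def run_keyword(name, *args):
    """Every team executes keywords through the shared library."""
    return KEYWORD_LIBRARY[name](*args)

result = run_keyword("login", "alice", "s3cret")
```

With one registry, a duplicate keyword for the same functionality becomes visible immediately, and a fix to a keyword's implementation benefits every team at once.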
As a TAE you are evaluating a functional test automation tool that will be used for several projects within your organization. The projects require that the tool works effectively and efficiently with SUTs in distributed environments. The test automation tool also needs to interface with other existing test tools (a test management tool and a defect tracking tool). The existing test tools are subject to planned updates, and their interfaces to the test automation tool may not work properly after these updates.
Which of the following are the two LEAST important concerns related to the evaluation of the test automation in this scenario?
A) Is the test automation tool able to launch processes and execute test cases on multiple machines in different environments?
B) Does the test automation tool support a licensing scheme that allows accessing different sets?
C) Does the test automation tool have a large feature set, even though only part of the features will be used?
D) Do the release notes for the planned updates on the existing test tools specify the impacts on their interfaces to other tools?
E) Does the test automation tool need to install specific libraries that could impact the SUT?
- A . A and C
- B . A and E
- C . B and E
- D . C and D
You have inherited a TAS that is working well. It uses keyword-driven scripting and was well architected. The automation architect who built the system has now moved on to another company. The TAS is working across several projects and has multiple libraries of keywords, categorised by project. The individual project teams maintain these keyword scripts.
Based only on the given information, what is the MOST significant risk for the TAS?
- A . The keyword driven scripts may become out of date if not maintained
- B . The level of abstraction, coupled with the departure of the architect may make the system hard to maintain
- C . New projects may not work as well with the TAS as the current projects
- D . Because the keyword scripts are maintained by different teams, there is a likelihood that good coding standards are not followed
The Test Automation Manager has asked you to provide a solution for collecting metrics from the TAS that measures code coverage every time the automated regression test pack is run. The metrics must be trend based to ensure that the scope of the regression test pack continues to reflect enhancements made to the SUT – coverage must not drop and should ideally increase. The solution must be as automated as possible to avoid unnecessary manual overheads and errors.
Which of the following approaches would BEST meet these requirements?
- A . Test automation cannot measure code coverage for the SUT, only the code for the automation tools and scripts. The automated test cases would need to be run manually with a code coverage and reporting tool running in the background.
- B . The automated testware would record overall code coverage for each run and add the figure to a new row in a pre-formatted Excel spreadsheet. You would then present the spreadsheet to stakeholders so they could look for changes in coverage.
- C . The automated testware would record overall code coverage for each run, export the data to a pre-formatted Excel spreadsheet that automatically updates a trend analysis bar chart for you to distribute to stakeholders.
- D . The automated testware would record the pass/fail rate of each regression test case, export the data to a pre-formatted Excel spreadsheet that automatically updates a trend analysis success rate bar chart and emails it to stakeholders.
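The automated trend collection described in option C can be sketched as follows: after each regression run the testware appends the overall coverage figure to a history, and a simple check flags any drop. The coverage percentages below are invented and are assumed to come from a coverage measurement tool running alongside the tests:

```python
def append_coverage(history_rows, run_id, coverage_pct):
    """Append this run's overall code coverage figure to the trend history."""
    history_rows.append({"run": run_id, "coverage": coverage_pct})

def coverage_dropped(history_rows):
    """True if the latest run's coverage is lower than the previous run's."""
    if len(history_rows) < 2:
        return False
    return history_rows[-1]["coverage"] < history_rows[-2]["coverage"]

# Illustrative figures, as if reported by a coverage tool after each run
history = []
append_coverage(history, "sprint-1", 78.2)
append_coverage(history, "sprint-2", 79.5)
append_coverage(history, "sprint-3", 79.1)

dropped = coverage_dropped(history)  # sprint-3 is below sprint-2
```

The same history rows could just as well be exported to the pre-formatted spreadsheet mentioned in the options; the point is that both the recording and the trend check run without manual intervention.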
You are working as a TAE for a company who have been using a web test execution tool for a number of years. The tool has been used successfully on ten web applications in the past.
The company are developing a new web application which has a friendly User Interface, but the developers have used an object throughout the application which the tool is unable to recognise. As a result, you have no way of capturing the object or verifying the contents using the automation tool.
What is the first thing you should do about this problem?
- A . See if the application can be run on a desktop and if the object can be recognised on the desktop by the tool.
- B . Investigate whether the object can be recognised by other test execution tools in the market
- C . Ask the developers to remove the object and replace it with some text fields
- D . Ask the developers if they can change the object to something that can be recognised by the tool
D
Explanation:
It is common in software testing to encounter situations where test automation tools are unable to recognize certain objects or elements within an application. In such cases, the first step is usually to collaborate with the development team to identify whether the problematic object can be replaced or modified to become recognizable by the automation tool.
This is generally the most efficient and cost-effective approach because it allows you to continue using the same test automation tool without the need for additional training or license costs that may come with switching tools. It also keeps the test environment consistent and doesn’t require the redesign of the test cases that are already automated. The other options either involve possibly unnecessary tool or platform switching, or suggest changes that may not suit the application’s requirements.
Of course, if changes to the object or application aren’t feasible, then other options such as switching tools or finding workarounds may need to be explored. However, the first step should be to communicate with the developers and investigate a potential quick fix to the issue.
Consider a TAS deployed into production. The SUT is a web application and the test suite consists of a set of automated regression tests developed via the GUI. A keyword-driven framework has been adopted for automating the regression tests. The tests are based on low-level identification of the web page components (e.g. class indexes, tab sequence indexes and coordinates). In the next planned release, the SUT will be subject to significant corrective maintenance (bug fixes) and evolution (new features). Maintenance costs to update the test scripts should be as low as possible, and the scripts must be highly reusable.
Which of the following statements is most likely to be TRUE?
- A . The keyword-driven framework is not suitable, it would be better to adopt a structured-scripting approach
- B . False positive errors are likely to occur when running the automated tests on the new releases without modifying the test scripts
- C . The total execution time of the automated regression test suite will decrease for each planned release.
- D . The keyword-driven framework introduces a level of abstraction that is too high and makes it difficult to understand what really happens
You are currently designing the TAA of a TAS. You have been asked to adopt an approach for automatically generating and executing test cases from a model that defines the SUT. The SUT is a state-based and event-driven system that is described by a finite-state machine and exposes its functionality via an API. The behavior of the SUT depends on hardware and communication links that can be unreliable.
Which of the following aspects is MOST important when designing the TAA in this scenario?
- A . Looking for tools that allow direct denoting of exceptions and actions depending on the SUT events.
- B . Adopting a test definition strategy based on classification tree coverage for the test definition layer.
- C . Looking for tools that allow performing setup and teardown of the test suites and the SUT.
- D . Adopting a test definition strategy based on use case/exception case coverage for the definition layer.
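Generating test cases from a finite-state machine model, as this scenario requires, can be sketched with a simple transition-coverage walk. The states, events and transition table below are invented for illustration:

```python
# Hypothetical FSM model of the SUT: (state, event) -> next state
TRANSITIONS = {
    ("idle", "connect"): "connected",
    ("connected", "send"): "connected",
    ("connected", "disconnect"): "idle",
}

def generate_transition_tests(transitions):
    """One generated test case per modelled transition: drive the SUT to
    the source state, fire the event, and expect the target state."""
    tests = []
    for (state, event), target in sorted(transitions.items()):
        tests.append({"given": state, "when": event, "then": target})
    return tests

test_cases = generate_transition_tests(TRANSITIONS)
```

In the scenario, the unreliable hardware and communication links mean the generated "then" expectations can fail for environmental reasons, which is why tooling support for setup, teardown and exception handling matters when choosing among the answer options.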
You are working on a government system called “Making Tax Digital" or MTD for short. This system is being implemented to stop manual human input error and also to reduce fraudulent behaviour from companies when submitting their tax and VAT returns.
The key concept is that registered companies will need to use government-recommended third-party software for their accounts and bookkeeping. These third-party applications will have a direct interface into the government’s main system for transactions and submissions.
You have been using a test execution tool successfully on the project so far, and have implemented a basic “capture/replay” approach to scripting.
The management have been encouraged with the automation so far, but want the following objectives to be met:
* Test cases added easily
* Reduction in the amount of scripts and script duplication
* Reduction in maintenance costs
Which scripting technique would be MOST suitable in this scenario in order to meet the objectives?
- A . Linear scripting
- B . Structured scripting
- C . Data-driven scripting
- D . Keyword-driven scripting
D
Explanation:
Keyword-driven scripting allows for easier addition of test cases, less script duplication, and reduced maintenance costs which are the objectives outlined by management. This is because in keyword-driven scripting, you design test scripts using a set of predefined keywords, which represent actions that will be executed on the application under test. The keywords and the data associated with them are usually stored in tables (like a spreadsheet), making it easier to add, remove, or modify test cases.
Linear scripting (A) doesn’t provide flexibility in terms of reducing script duplication and maintenance costs as it involves writing a linear set of instructions for each test case.
Structured scripting (B) is useful when there is a need to create modular and reusable scripts, but it requires more complexity to implement than keyword-driven scripting and isn’t as easily adaptable for adding new test cases.
Data-driven scripting (C) is efficient when you need to test the same functionality with varying inputs and expected results, but it doesn’t inherently reduce script duplication or maintenance costs as much as keyword-driven scripting can. It also doesn’t directly aid in adding new test cases in an easy manner.
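The table-driven nature of keyword-driven scripting is what meets the three objectives: test cases are rows of a keyword plus its arguments, so adding a test case means adding a row rather than a script. A minimal sketch (keywords, columns and data are all illustrative):

```python
import csv
import io

# Hypothetical keyword implementations, stubs for real SUT interaction
def open_page(url):
    return f"opened {url}"

def submit_return(company, amount):
    return f"{company} submitted {amount}"

KEYWORDS = {"open_page": open_page, "submit_return": submit_return}

# Test cases live in a table (here an in-memory CSV for illustration);
# a new test case is just a new row, with no new script code
table = io.StringIO(
    "keyword,arg1,arg2\n"
    "open_page,https://mtd.example.gov,\n"
    "submit_return,AcmeLtd,1000\n"
)

results = []
for row in csv.DictReader(table):
    args = [v for v in (row["arg1"], row["arg2"]) if v]
    results.append(KEYWORDS[row["keyword"]](*args))
```

Because the scripting logic lives once in the keyword implementations, duplication and maintenance cost are concentrated in one place, which is exactly the property management asked for.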
Consider a SUT that shall run on multiple platforms during the execution of automated test runs. In each test run, an automated test suite needs to be executed, with the same version of the TAF, against the same version of the SUT on each platform. Each platform shall have its own dedicated test environment. Your goal is to implement a process that is as automated as possible (i.e. with minimal manual intervention) and that allows a consistent setup of the TAS across the multiple test environments.
Which two of the following aspects are MOST relevant for achieving your goal in this scenario?
a) The configuration of the TAS uses automated installation scripts.
b) The TAF saves the logs needed to debug errors in XML format.
c) Features of the TAF not used by the automated tests have been tested.
d) All the automated test cases contain the expected results.
e) The TAS components are under configuration management.
- A . a and e
- B . b and c
- C . b and d
- D . a and d
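How automated installation scripts and configuration management work together here can be sketched: installation is driven by a script, and the component versions come from a version-pinned manifest kept under configuration management, so every environment receives an identical TAS. Component names and versions below are invented for illustration:

```python
# Version-pinned manifest, kept under configuration management (illustrative)
MANIFEST = {
    "test_framework": "2.3.1",
    "keyword_library": "1.0.4",
    "report_generator": "0.9.0",
}

def install(component, version):
    """Stand-in for the real installation step of one TAS component."""
    return f"{component}=={version}"

def setup_environment(platform, manifest):
    """Run the same scripted setup for one platform's dedicated environment."""
    return {"platform": platform,
            "installed": [install(c, v) for c, v in sorted(manifest.items())]}

# Run the identical scripted setup on every platform
environments = [setup_environment(p, MANIFEST) for p in ("linux", "windows", "macos")]

# All environments end up with exactly the same TAS configuration
consistent = all(e["installed"] == environments[0]["installed"] for e in environments)
```

Without the manifest under configuration management, each environment could drift to different component versions; without the script, applying the manifest would require error-prone manual steps on each platform.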
If model-based testing has been selected for the overall test automation approach for a project, how does that influence the layers of the TAA?
- A . All layers are used, but the test generation layer will be automated based on the defined model
- B . There will be no need for the execution layer
- C . No adaptation will be needed because the interfaces will be defined by the model
- D . There will be no need to design the tests for the API because those will be covered by the model
A
Explanation:
In a model-based testing approach, all layers of the Test Automation Architecture (TAA) are typically used, but the test generation layer will be automated based on the defined model. The model describes the behavior of the system under test (SUT) and is used to automatically generate test cases.
Option B is incorrect because the execution layer is still needed to execute the generated test cases.
Option C is incorrect because the adaptation layer is typically needed to handle interface-related issues between the Test Automation System (TAS) and the System Under Test (SUT), regardless of the model used.
Option D is incorrect because even with a model-based testing approach, designing tests for the API (if APIs are a part of the system) would still be necessary. The model can aid in this, but it does not eliminate the need for test design.
You are implementing test automation for a project and you want to be able to generate test cases automatically using a series of test design tools which use a variety of test design techniques such as decision tables, pairwise testing and boundary value analysis.
You also want to generate test data automatically which can then be used by the tests.
Initially these tests will be run manually to verify their correctness and ultimately you want to include them in the test execution tool so that they can run unattended.
Which layer of the gTAA will be used to support the specification of the test cases and preparation of the test data?
- A . The generation layer
- B . The definition layer
- C . The execution layer
- D . The adaptation layer
B
Explanation:
The definition layer of the Generic Test Automation Architecture (gTAA) is responsible for defining test cases and preparing test data. This layer uses test design tools and techniques, like decision tables, pairwise testing, and boundary value analysis that you mentioned, to define what needs to be tested. This includes the preparation of test data that will be used by the tests.
The generation layer, on the other hand, is responsible for generating the actual test scripts from the defined test cases.
The execution layer is responsible for executing the generated test scripts.
And finally, the adaptation layer is responsible for managing the interaction between the Test Automation System (TAS) and the System Under Test (SUT), which involves handling interface-related issues.
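As an illustration of the data-preparation side of the definition layer, boundary value analysis for a numeric input can be sketched in a few lines (the input range used here is invented):

```python
def boundary_values(low, high):
    """Three-value boundary value analysis for an integer range [low, high]:
    each boundary plus its nearest neighbours on both sides."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# Hypothetical SUT input: an amount field accepting values 1..100
test_data = boundary_values(1, 100)
```

Values generated this way can be run manually first to verify correctness, then fed to the execution layer as the data rows of data-driven tests, matching the workflow described in the question.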
Designing the System Under Test (SUT) for testability is important for a good test automation approach and can also benefit manual test execution.
Which of the following is NOT a consideration when designing for testability?
- A . Observability: The SUT needs to provide interfaces that give insight into the system.
- B . Re-usability: The code written for the SUT must be re-usable for other similar systems.
- C . Clearly defined architecture: The SUT architecture needs to provide clear and understandable interfaces giving control and visibility on all test levels.
- D . Control: The SUT needs to provide interfaces that can be used to perform actions on the SUT.
You are working on a web-based application called Book Vault that allows people to upload books and order books. This application must be available on all major browsers.
You have been testing the application manually and management have asked you to consider automating some of the tests.
You have investigated a number of commercial and free tools which can automate tests at a web browser level and one tool in particular meets your requirements and you have implemented a trial version.
You have basic programming skills and the main goal is to automate a few functional tests to see if the tool is compatible with the application and can recognise the objects and controls.
Which scripting technique would be MOST suitable in this scenario in order to meet the objectives?
- A . Structured scripting
- B . Capture-replay scripting
- C . Data-driven scripting
- D . Model-based scripting
B
Explanation:
Given that you have basic programming skills and you are in the exploratory phase of the tool, the Capture-replay scripting approach would be the most suitable for this scenario. Capture-replay scripting involves recording a series of user actions as they navigate through the application, and then replaying those actions in an automated way. This technique is typically easy to use, requires minimal programming skills, and can quickly provide a basic level of automation for functional tests. It will also allow you to see if the tool is compatible with your application and can recognize the objects and controls. The other scripting techniques like Structured, Data-driven, and Model-based scripting require a more advanced level of programming skills and a better understanding of the application.