Within an embedded software project, the maintainability of the software is considered to be critical.
It has been decided to use static analysis on each delivered software component.
Which of the following metrics is NOT a maintainability metric typically used with static analysis?
- A . Number of Lines of Code (LOC)
- B . Number of Function Calls
- C . Mean Time Between Failures
- D . Comment Frequency
C
Explanation:
Maintainability metrics typically used with static analysis include measures that reflect the complexity and understandability of the code, such as Number of Lines of Code (LOC), Number of Function Calls, and Comment Frequency. These metrics help in assessing how easily the software can be understood, modified, and maintained. Mean Time Between Failures (MTBF), on the other hand, is a reliability metric. It measures the time elapsed between inherent failures of a system during operation. MTBF is used to predict the system’s reliability and is not directly related to the maintainability of the code. Reliability metrics like MTBF would be used in the testing phase to measure the operational reliability of the system rather than during static analysis for maintainability assessment.
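To make these metrics concrete, here is a minimal sketch of how a static analysis step might compute them from source text. The regexes and C-style comment markers are simplifying assumptions, not how any particular tool works; note that MTBF cannot be computed this way at all, since it requires failure data from a running system.

```python
import re

def maintainability_metrics(source: str) -> dict:
    """Toy static-analysis metrics for one source file (illustrative only)."""
    lines = [line for line in source.splitlines() if line.strip()]
    loc = len(lines)  # non-blank lines of code
    comment_lines = sum(
        1 for line in lines if line.lstrip().startswith(("//", "/*", "*"))
    )  # assumes C-style comments
    function_calls = len(re.findall(r"\b\w+\s*\(", source))  # rough call count
    return {
        "LOC": loc,
        "function_calls": function_calls,
        "comment_frequency": comment_lines / loc if loc else 0.0,
    }

print(maintainability_metrics("int main() {\n  // entry point\n  f(1);\n}\n"))
```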
You are involved in testing a system in the medical domain. Testing needs to comply with the FDA requirements and is rated as being safety critical. A product risk assessment has been performed and various mitigation actions have been identified. Reliability testing is one of the test types that needs to be performed throughout the development lifecycle.
Based on the information provided, which of the following activities would need to be addressed in the test plan?
- A . Perform a vulnerability scan.
- B . Design and execution of specific tests that evaluate the software’s tolerance to faults in terms of handling unexpected input values.
- C . Design and execution of test cases for scalability.
- D . Testing whether the installation/de-installation can be completed.
B
Explanation:
In the context of safety-critical systems, particularly in the medical domain, reliability is of utmost importance. For such systems, it is crucial to ensure that the software can handle unexpected input values and continue to operate without failure. This is essential to ensure patient safety and compliance with FDA requirements. Vulnerability scans (option A) are more related to security testing, whereas scalability (option C) and installation/de-installation (option D) are important but not specifically related to the reliability and safety criticality of the system in the medical domain.
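As an illustration of what such a fault-tolerance test might look like, the sketch below feeds unexpected input values to a dose-calculation function and expects a controlled rejection rather than a crash. The function, its range check and the test values are all invented for the example.

```python
import pytest

def calculate_dose(weight_kg):
    """Stand-in for the real component under test (invented for this sketch)."""
    if not isinstance(weight_kg, (int, float)) or not 0 < weight_kg < 500:
        raise ValueError("weight out of supported range")
    return 0.5 * weight_kg  # placeholder calculation

@pytest.mark.parametrize("weight_kg", [-1, 0, None, "abc", 10**9])
def test_calculate_dose_rejects_unexpected_input(weight_kg):
    # Fault tolerance: invalid input must raise a controlled error,
    # never crash the process or return a silently wrong dose.
    with pytest.raises(ValueError):
        calculate_dose(weight_kg)
```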
The following characteristics were identified during an early product risk-assessment for a software system:
• the software system needs to manage synchronization between various processes
• microcontrollers will be used that will limit product performance
• the hardware that will be used will make use of timeslots
• the number of tasks supported in parallel by the software system is large, and the tasks are often highly complex.
Based on the information provided, which of the following non-functional test types is MOST appropriate to be performed?
- A . Maintainability testing
- B . Security testing
- C . Time-behaviour testing
- D . Portability testing
C
Explanation:
The characteristics listed in the question point towards the need to manage synchronization between processes and make efficient use of limited hardware resources, such as microcontrollers and timeslots. Additionally, the complexity and concurrency of tasks highlight the importance of the software’s performance over time. Time-behaviour testing is the most appropriate non-functional test type to perform in this scenario as it focuses on evaluating the timing aspects of the system, such as response times, processing times, and throughput rates. It ensures that the system meets its time-related requirements, which is critical for systems reliant on synchronization and limited by hardware performance constraints.
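A minimal sketch of a time-behaviour check is given below, assuming a task function and a deadline taken from the requirements; a real embedded project would measure on the target hardware with a high-resolution timer rather than a desktop clock.

```python
import time

def meets_deadline(task, deadline_ms: float) -> bool:
    """Run the task once and compare its elapsed wall-clock time to a deadline."""
    start = time.perf_counter()
    task()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms <= deadline_ms

# Example: a synchronization step must finish within its 50 ms timeslot.
assert meets_deadline(lambda: sum(range(10_000)), deadline_ms=50.0)
```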
Consider the pseudo code for the Price program:
Which of the following statements about the Price program describes a control flow anomaly to be found in the program?
- A . The Price program contains no control flow anomalies.
- B . The Price program contains unreachable code.
- C . The Price program contains data flow defects.
- D . The Price program contains an infinite loop.
D
Explanation:
The pseudo code provided for the Price program allows an infinite loop through the way the ‘Del_Charge’ variable is manipulated. The loop continues ‘WHILE Del_Charge > 0’; ‘Del_Charge’ is initially set to 5, is decreased by 2 only when ‘Sale_Value > 60000’, and is increased by 1 at the end of each iteration. If ‘Sale_Value’ is not greater than 60000, ‘Del_Charge’ is never decreased and instead increments indefinitely, so the loop condition remains true forever. There is no guaranteed exit once the loop is entered with such a value, which is a control flow anomaly.
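The pseudo code itself is not reproduced above; the fragment below is a hypothetical reconstruction, inferred purely from the explanation, that exhibits the described anomaly.

```python
def price_program(Sale_Value: int) -> None:
    # Hypothetical reconstruction of the Price program (inferred from the
    # explanation above; the original pseudo code is not shown).
    Del_Charge = 5
    while Del_Charge > 0:            # exit requires Del_Charge to reach 0
        if Sale_Value > 60000:
            Del_Charge = Del_Charge - 2
        Del_Charge = Del_Charge + 1  # executed on every iteration
        # For Sale_Value <= 60000, Del_Charge only ever grows, so a call
        # such as price_program(50000) never returns: an infinite loop.
```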
A major Caribbean bank typically develops its own banking software using an Agile methodology. However, for some specific components COTS software is acquired and used. The bank does not want to create a dependency on any external COTS supplier.
As part of the test approach, portability testing will be performed.
Which portability sub-characteristic is especially relevant for the Caribbean bank?
- A . Installability
- B . Adaptability
- C . Replaceability
- D . Co-existence
C
Explanation:
Portability testing is concerned with how well software can be transferred from one environment to another. In the context of a bank using COTS (Commercial Off-The-Shelf) software, the sub-characteristic of replaceability becomes particularly relevant. This is because the bank does not want to create a dependency on any external COTS supplier, meaning it should be able to replace the software with another product without significant effort or operational disruption. Replaceability ensures that if needed, the bank can switch to different software, thereby mitigating the risk of supplier dependency.
Which of the following does NOT contribute to a more effective review preparation by the Technical Test Analyst?
- A . Ensure that participants spend enough time during preparation, e.g., by managing the checking rate (number of pages checked per hour during review preparation).
- B . Managing logging rate (number of defects logged per minute during the meeting).
- C . The usage of review checklists.
- D . Review training for the Technical Test Analyst.
B
Explanation:
An effective review preparation by a Technical Test Analyst includes ensuring that participants are well prepared and spend enough time on preparation, which can be managed via the checking rate (option A). The use of review checklists (option C) and providing review training (option D) also contribute to more effective review preparation. However, managing the logging rate (option B), i.e., the number of defects logged per minute during the meeting, relates not to the preparation phase but to the defect detection and logging that take place during the actual review meeting. It is a review execution activity, not a preparation activity.
A product risk assessment has revealed the following product risks:
• lack of usability requirements
• security during on-line transactions
• perceived performance of the system and response time from the user interface
• a required availability of almost 100%
To address the 4th risk, which of the following quality characteristics for technical testing should be part of the test approach?
- A . Adaptability
- B . Reliability
- C . Portability
- D . Compatibility
B
Explanation:
To address the product risk of requiring an availability of almost 100%, the quality characteristic of reliability should be part of the test approach. Reliability testing focuses on the ability of the system to perform under expected conditions for a specified period of time. It is essential for systems that need to be operational continuously or near-continuously. This characteristic encompasses the system’s uptime, fault tolerance, recoverability, and the ability to perform under anticipated conditions, all of which are relevant to maintaining high availability.
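For context, a standard reliability engineering formula (not part of the question) relates availability to MTBF and the mean time to repair (MTTR); an availability of almost 100% requires the MTBF to be very large relative to the MTTR:

```latex
\text{Availability} = \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}
```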
You have been assigned to perform a review on code provided below:
Which type of defect should you report as part of the code review?
- A . Endless loop
- B . Unreachable code
- C . Too many nested levels
- D . No defects should be reported, code is correct.
A
Explanation:
The code provided contains a potential endless loop. The loop is conditioned on the variable ‘E’ being less than 1 (IF E < 1), but within the loop, there is no operation that modifies the value of ‘E’. Therefore, once the loop is entered, if the condition A > B holds true, the value of ‘E’ remains unchanged, leading to an endless loop situation. The decrement of ‘A’ in line 15 does not guarantee an exit condition for the loop, as it does not affect the value of ‘E’. This is a control flow defect that could cause the program to hang or crash.
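The reviewed code is not reproduced above; the fragment below is a hypothetical reconstruction, based only on the explanation, of a loop whose controlling variable is never updated.

```python
def reviewed_fragment(A: int, B: int, E: int) -> None:
    # Hypothetical reconstruction (the original listing is not shown).
    while E < 1:        # can only exit if E becomes >= 1
        if A > B:
            A = A - 1   # modifies A, but never E
        # No statement in the body assigns to E, so a call such as
        # reviewed_fragment(10, 3, 0) never returns: an endless loop.
```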
Within the world of consumer electronics, the amount of embedded software is growing rapidly. The amount of software in high-end television sets has increased by a factor of about eight over the last six years. In addition, the market of consumer electronics has been faced with a 5-10% price erosion per year. The price of a product is, among a number of other things, determined by the microcontroller used. Therefore, the use of ROM and RAM remains under high pressure in consumer electronic products, leading to severe restrictions on code size.
Within a new high-end TV project, it has been decided to apply dynamic analysis.
Which of the quality goals listed below is MOST appropriate to the project context?
- A . Prevent failures from occurring by detecting wild pointers and loss of system memory.
- B . Analyse system failures which cannot easily be reproduced.
- C . Evaluate network behaviour.
- D . Improve system performance by providing information on run-time system behaviour.
D
Explanation:
In the context of consumer electronics, where there is rapid growth in embedded software and pressure to minimize code size due to cost constraints, dynamic analysis can be particularly useful for improving system performance. Dynamic analysis involves examining the system’s behavior during execution, which can provide insights into the efficiency of the code at runtime, memory utilization, and processing speed. In a high-end TV project where the use of ROM and RAM is under severe restrictions, dynamic analysis would be most appropriately applied to improve system performance, ensuring that the software runs efficiently within the available hardware resources. This supports the project context by contributing to the optimization of the software to run within the constraints of the microcontroller used, thereby potentially reducing costs.
Consider the pseudo code provided below regarding a customer request for cash withdrawal from an ATM.
If the customer has sufficient funds in their account
OR the customer has the credit granted
THEN the ATM machine pays out the requested amount to the customer
Which of the following test cases would be the result of applying multiple condition testing, but would NOT be the result of applying modified condition/decision testing?
- A . TC 1: Customer has sufficient funds. Credit has not been granted.
- B . TC 2: Customer does not have sufficient funds. Credit has been granted.
- C . TC 3: Customer does not have sufficient funds. Credit has not been granted.
- D . TC 4: Customer has sufficient funds. Credit has been granted.
D
Explanation:
Multiple condition testing requires every possible combination of condition values to be exercised (four combinations for two conditions, i.e., TC 1 to TC 4), whereas modified condition/decision (MC/DC) testing only requires each condition to be shown to independently affect the decision's outcome. For this OR decision, a minimal MC/DC set is TC 1 (true/false), TC 2 (false/true) and TC 3 (false/false): holding one condition false and toggling the other flips the pay-out decision, demonstrating each condition's independent effect. TC 4 (true/true) demonstrates no independent effect, because when both conditions are true neither one alone determines the outcome of the OR. TC 4 is therefore produced by multiple condition testing but not by MC/DC testing.
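A quick way to see this is to enumerate the truth table. The sketch below derives which combination multiple condition testing adds beyond a minimal MC/DC set for the OR decision; the pay-out function is a stand-in for the pseudo code.

```python
from itertools import product

def pays_out(funds: bool, credit: bool) -> bool:
    return funds or credit  # the ATM decision: sufficient funds OR credit granted

# Multiple condition testing: all 2^2 combinations (TC 1..TC 4).
all_combinations = list(product([True, False], repeat=2))

# A minimal MC/DC set for an OR decision: each condition is toggled while
# the other is held False, so it alone flips the outcome.
mcdc_set = [(True, False),   # TC 1: funds alone cause a payout
            (False, True),   # TC 2: credit alone causes a payout
            (False, False)]  # TC 3: shared baseline, no payout

# The combination exercised only by multiple condition testing:
only_multiple_condition = [c for c in all_combinations if c not in mcdc_set]
print(only_multiple_condition)  # [(True, True)] -> TC 4
```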
Which of the following statements BEST describes how tools support model-based testing?
- A . Finite state machines are used to describe the intended execution-time behavior of a software-controlled system.
- B . Random sets of threads of execution are generated as test cases.
- C . Large sets of test cases are generated to provide full code coverage.
- D . An engine is provided that allows the user to execute the model.
A
Explanation:
Model-based testing tools support the creation and execution of tests based on models of the system under test. Finite state machines (FSMs) are often used in model-based testing to describe the expected behavior of a system during execution. FSMs help in defining the states of the system and the transitions between these states based on events, which can then be used to generate test cases that validate the system’s behavior against the model.
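A minimal sketch of the idea follows, with an invented ATM-like state machine; a model-based testing tool would generate event sequences (test cases) as paths through such a transition table.

```python
# States, events and transitions are illustrative, not from the question.
transitions = {
    ("idle", "insert_card"): "authenticating",
    ("authenticating", "pin_ok"): "ready",
    ("authenticating", "pin_bad"): "idle",
    ("ready", "withdraw"): "dispensing",
    ("dispensing", "done"): "idle",
}

def walk(start: str, events: list[str]) -> str:
    """Replay a sequence of events against the model; unknown transitions fail."""
    state = start
    for event in events:
        state = transitions[(state, event)]  # KeyError = invalid path in the model
    return state

assert walk("idle", ["insert_card", "pin_ok", "withdraw", "done"]) == "idle"
```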
A component has been analysed during a risk-assessment and rated as highly critical.
Which of the following white-box test techniques provides the highest level of coverage and could therefore be used to test this component?
- A . Decision testing
- B . Statement testing
- C . Multiple condition testing
- D . Modified condition/decision testing
C
Explanation:
Multiple condition testing provides the highest level of coverage among the listed techniques, because it requires every possible combination of single condition outcomes within each decision to be tested. It subsumes modified condition/decision (MC/DC) testing, which only requires each condition to be shown to independently affect the decision's outcome, as well as decision testing (each decision outcome tested) and statement testing (each executable statement executed). For a highly critical component, the exhaustive evaluation of condition combinations that multiple condition testing provides gives the most thorough assessment of the component's logic, justifying the larger number of test cases.
Consider the following fault attack:
• Force all possible incoming errors from the software/operating system interfaces to the application.
Which of the following is the kind of failure you are looking for when using this attack?
- A . Application crashes when unsupported characters are pasted into an input field.
- B . Application crashes when the network is unavailable.
- C . Application crashes due to a lack of portability.
- D . Application miscalculates total monthly balance due on credit cards.
B
Explanation:
The fault attack described targets the interfaces between the application and the software/operating system environment, such as the file system, memory and the network. The kind of failure being sought is one where the application fails to handle an error returned by its environment, for example a crash when the network is unavailable (option B). Unsupported characters pasted into an input field (option A) would be provoked by a user-interface input attack rather than an attack on the software/operating system interfaces, while lack of portability (option C) and an incorrect calculation (option D) are not failures triggered by environment errors.
Which of the following is NOT a common issue with traditional capture/playback test automation?
- A . Difficult to maintain when software changes.
- B . Recorded scripts are difficult to maintain by non-technical persons.
- C . Data and actions are mixed in the recorded script.
- D . Execution of the recorded script is difficult outside office hours.
D
Explanation:
Common issues with traditional capture/playback test automation include difficulty in maintaining the scripts when software changes (option A), the challenge for non-technical persons to maintain recorded scripts (option B), and the issue that data and actions are often intertwined within the recorded script (option C), which can make them hard to understand and modify. However, the timing of the execution of the recorded script (option D), such as the difficulty of running scripts outside office hours, is not typically a problem inherent to capture/playback test automation itself but rather an environmental or scheduling issue.
Which of the following is a valid reason for including security testing in a test approach?
- A . There is a threat of unauthorized copying of applications or data.
- B . To provide measurements from which an overall level of availability can be obtained.
- C . To evaluate the ability of a system to handle peak loads at or beyond the limits of its anticipated or specified workloads.
- D . Software changes will be frequent after it enters production.
A
Explanation:
Including security testing in a test approach is valid when there are concerns about unauthorized access or activities, such as the threat of unauthorized copying of applications or data (option A). This type of testing aims to uncover vulnerabilities that could be exploited to compromise the confidentiality, integrity, or availability of the system. The other options listed (availability measurements, option B; peak load handling, option C; frequent software changes, option D) relate to different aspects of testing, such as reliability, performance, and maintainability, which are not directly associated with security testing.
You are working on an internet banking project. Your company is offering this product to the financial market. For each new customer, some customization will typically be needed. To make the product successful there is a strong focus during development on a reliable and maintainable architecture. To support architectural reviews, a checklist will be developed. Within the checklist specific sections will be attributed to reliability and maintainability.
Which question from the list below should you include in the maintainability section of the architectural review checklist?
- A . Will the system use n-version programming for critical components?
- B . Will the user interface be implemented independently from the other software modules?
- C . Does the system have user-friendly error messages?
- D . Does the password protection of the system adhere to the latest regulations?
B
Explanation:
In the context of an internet banking project where reliability and maintainability are emphasized, a key factor for maintainability is the modularity of the system. Implementing the user interface independently from other software modules (answer B) can significantly enhance maintainability. This is because it allows changes to be made to the user interface without impacting the underlying business logic or data access layers, making the system more adaptable to change. This kind of separation of concerns is a recognized best practice in software design for maintainability. The other options (A, C, and D) relate more to reliability and security aspects than to maintainability.
A new web site has been launched for a testing conference. There are a number of links to other related web sites for information purposes. Participants like the new site but complaints are being made that some (not all) of the links to other sites do not work.
Which type of test tool is most appropriate in helping to identify the causes of these failures?
- A . Review tool
- B . Hyperlink tool
- C . Static analysis tool
- D . Dynamic analysis tool
B
Explanation:
When users complain about issues with links on a website, the most appropriate test tool to identify the causes of these failures is a hyperlink tool (answer B). Hyperlink tools are specifically designed to check the validity of links on web pages. They can automatically identify broken or dead links, which is essential for maintaining the quality and user experience of a website. Review tools, static analysis tools, and dynamic analysis tools do not primarily focus on hyperlink verification.
Which of the following statements is TRUE regarding tools that support component testing and the build process?
- A . Component testing and build automation tools are only used by developers.
- B . Build automation tools facilitate manual testing at a low level by allowing the change of variable values during test execution.
- C . Component testing tools are typically specific to the programming language and may be used to automate unit testing.
- D . Component testing tools are the basis for a continuous integration environment.
C
Explanation:
Component testing tools, which are often specific to a programming language, are used to automate unit tests (answer C). These tools help to validate the functionality of individual components or units of code in isolation from the rest of the application. While build automation tools are indeed used by developers and are related to continuous integration (answers A and D), and they can facilitate testing at various levels (answer B), component testing tools’ primary purpose is to support the testing of individual components, often through automated unit tests, which can be specific to the language in which the components are written.
Which of the following defect types is NOT an example of a defect type typically found with API testing?
- A . Data handling issues
- B . Timing problems
- C . High architectural structural complexity
- D . Loss of transactions
C
Explanation:
In the context of API testing, the defect types generally found are related to the specific interactions with the API, such as issues with data formatting, handling, validation, and the sequencing or timing of API calls. Architectural structural complexity is not typically a defect that would be identified at the API testing level. API tests are concerned with the interface and immediate integration points, not the overarching system architecture, which would be more relevant to design or system-level testing.
Below is the pseudo-code for the Win program:
The Win program contains a data flow anomaly.
Which data flow anomaly can be found in this program?
- A . Variable ‘A’ is not assigned a value before using it.
- B . Variable ‘D’ is defined but subsequently not used.
- C . The program does not contain any comments.
- D . It is recommended to use a variable instead of the hard-coded print results "Win" and "Loose".
B
Explanation:
The pseudo-code provided for the "Win" program reads in variables A, B, C, and D. However, only variables A, B, and C are used in the conditional statements to determine if the output will be "Win" or "Loose". Variable ‘D’ is never used after it is read, which is a classic example of a ‘defined but not used’ data flow anomaly. This means that while there is an instruction to read a value into variable ‘D’, there is no subsequent use of this variable in the program’s logic or output.
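The pseudo code is not reproduced above; a hypothetical reconstruction consistent with the explanation (the exact condition on A, B and C is an assumption) looks like this:

```python
def win_program(A: int, B: int, C: int, D: int) -> None:
    # Hypothetical reconstruction; only A, B and C influence the outcome.
    if A > B and B > C:        # the exact condition is an assumption
        print("Win")
    else:
        print("Loose")
    # D is read in (defined) but never referenced afterwards:
    # a 'defined but not used' data flow anomaly.
```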
At which test level would performance efficiency testing most likely be performed?
- A . Component testing
- B . Integration testing
- C . System testing
- D . User acceptance testing
C
Explanation:
Performance efficiency testing is most commonly associated with system testing. This is the level at which the complete, integrated system is evaluated, and it is typically where performance, load, and stress testing are conducted to assess the system’s behavior under various conditions and loads.
Performance efficiency testing at this level helps to ensure that the system meets the necessary performance criteria as a whole.
There are multiple activities the Technical Test Analyst performs regarding test automation.
Which of the following activities is a typical test automation activity that the Technical Test Analyst will perform?
- A . Define the business process keywords and related actions.
- B . Execute the test cases and analyze any failures that may occur.
- C . Train the Test Analyst and Business Analyst to use and supply data for the test scripts.
- D . Decide regarding a test automation project based on a business case.
B
Explanation:
A Technical Test Analyst is primarily involved in the technical aspects of test preparation and execution. One of their typical activities includes the execution of test cases, particularly those that are automated, and the subsequent analysis of any test failures to identify defects and issues. This activity is more technical than defining business processes or training other analysts, and while making decisions based on a business case may be part of their role, it is not an activity directly related to test automation.
Consider the pseudo code provided below:
Which of the following options provides a set of test cases that achieves 100% decision coverage for this code fragment, with the minimum number of test cases?
Assume that in the options, each of the three numbers in parenthesis represent the inputs for a test case, where the first number represents variable “a”, the second number represents variable “b”, and the third number represents variable “c”.
- A . (5, 3, 2)
- B . (5, 3, 2); (6, 4, 2); (5, 4, 0)
- C . (5, 4, 0); (3, 2, 5); (4, 5, 0)
- D . (4, 5, 0); (5, 4, 5)
B
Explanation:
To achieve 100% decision coverage with the minimum number of test cases, every decision outcome (both the true and the false branch of each decision) must be exercised at least once. For the code provided:
The first condition (a>b) is true for the first two test cases and false for the third.
The second condition (b>c) is true for the first test case, false for the second, and does not matter for the third since the first condition is false.
Therefore, with these three test cases, we cover all possible outcomes of the decision, ensuring 100% decision coverage.
Which of the following statements is TRUE regarding tools that support component testing and the build process?
- A . Both are used to examine source code before a program is executed. This is done by analysing a section of code against a set (or multiple sets) of coding rules.
- B . Both are used to reduce the costs of test environments by replacing real devices.
- C . Both provide run-time information on the state of the software code, e.g., unassigned pointers and the use and de-allocation of memory.
- D . Both provide an environment for unit testing in which a component can be tested in isolation with suitable stubs and drivers.
D
Explanation:
Tools that support component testing and the build process are designed to provide a controlled environment where individual units or components of the software can be tested in isolation. This is typically done using stubs, which simulate the behavior of missing components, and drivers, which simulate the behavior of a user or calling program. This isolated environment is essential for unit testing because it allows testers to find defects within the boundaries of a single component before integrating it into the larger system.
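As a sketch of the idea (the component, class and values below are invented), the test uses a stub for a missing dependency while the test case itself plays the role of the driver:

```python
import unittest
from unittest import mock

# Assumed component under test: a premium calculator that normally calls
# an external rating service.
def calculate_premium(age: int, rating_service) -> float:
    base = rating_service.base_rate(age)  # dependency replaced by a stub in the test
    return base * 1.2 if age > 60 else base

class CalculatePremiumTest(unittest.TestCase):  # the test class acts as the driver
    def test_senior_surcharge(self):
        stub = mock.Mock()                 # stub simulates the missing real service
        stub.base_rate.return_value = 100.0
        self.assertEqual(calculate_premium(65, stub), 120.0)

if __name__ == "__main__":
    unittest.main()
```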
Below is the pseudo-code for the bingo program:
The bingo program contains a data flow anomaly.
Which data flow anomaly can be found in this program?
- A . Variable "MIN" is not assigned a value before using it.
- B . Variable "AB is defined but subsequently not used.
- C . An invalid value is assigned to variable "B".
- D . The hard-coded value "2" should not be used.
A
Explanation:
In the provided pseudo-code for the Bingo program, the variable MIN is used in the statement MIN = MIN + A without being initialized with a value beforehand. This represents a classic ‘use before define’ anomaly, as the variable MIN must have an initial value before any operation like addition can be performed on it.
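The pseudo code is not reproduced above; a hypothetical fragment showing the anomaly and its correction:

```python
A = 7  # stands in for: Read A
# Anomalous form ('use before definition'): MIN has no initial value,
# so the next line would raise a NameError if uncommented.
# MIN = MIN + A
MIN = 0        # corrected: define MIN before its first use
MIN = MIN + A
```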
Subject to testing is a software system (COTS) for Project Administration and Control (PACS). PACS is a multi-project system for controlling the project time, e.g., in terms of scheduling and tracking, and managing the productivity of projects, e.g., in terms of effort and deliverables.
During various interviews with stakeholders the following information was gathered:
• Using PACS is not business critical. There is no impact such as high financial losses or problems to the operational continuity of an organization. Its application is not critical since the software package does not support directly the operational, or the primary, business processes of an organization. It supports (project) management in the project planning and tracking process. Of course, it will be highly annoying for users if the system “goes down” from time to time. Although this does not have a high impact for the business process as a whole, the Mean Time Between Failures (MTBF) still needs to be at a good level to be successful in the market.
• Users of PACS typically have an academic educational level, but have no prior experience with this particular type of project management software. The system will be used by a large number of users within an organization.
• The system will be used on a regular basis, e.g., several times per day by project managers and several times per week by project employees. This means that the system will not be used very intensively, often only for some data input or an information request. Its usage is not considered to be very time-critical.
• The data is recorded on-line and real-time. The system is expected to run in multiple hardware environments in various network environments. It is also expected that changes in the operational environments will happen in the upcoming period that also need to be supported.
Based on the information provided by the stakeholder, which combination of non-functional quality characteristics should you propose to test as part of your test approach?
- A . Reliability and Portability
- B . Security and Reliability
- C . Performance efficiency and Portability
- D . Reliability and Performance efficiency
A
Explanation:
Given the stakeholder information provided:
Reliability is important because the system, while not business-critical, still needs a good MTBF to be successful in the market. This is directly mentioned in the stakeholder information.
Portability is essential as the system is expected to run in multiple hardware environments and various network environments, with changes anticipated in the operational environments. Security is not highlighted as a concern, and performance efficiency, while generally important, is less critical as the system is not used intensively and is not time-critical. Therefore, reliability and portability are the most relevant non-functional quality characteristics to test in this scenario.
Which of the following is a generic risk factor that should be considered by a Technical Test Analyst during a product risk analysis?
- A . Frequency of use of the affected feature by end-users.
- B . Complexity of new technology.
- C . Visibility of failure leading to negative publicity and potential image damage.
- D . High change rate of business requirements.
B
Explanation:
A Technical Test Analyst during a product risk analysis would consider the complexity of new technology as a generic risk factor. Complex new technology can introduce uncertainties and potential issues that may not be well-understood, which can increase the risk of defects. Frequency of use, visibility of failure, and high change rate of business requirements are also valid considerations, but they are more specific to particular scenarios or aspects of the product rather than the generic technological complexity which is always a concern regardless of the context.
You are working on project where re-use of software is an objective. You are involved in the project as a Technical Test Analyst and have been given the task to develop a checklist for code reviews.
Which question from the list below should you implement as part of the code review checklist?
- A . Are all modules, data, and interfaces uniquely identified?
- B . Can each item be implemented with the techniques, tools, and resources available?
- C . Is it possible during acceptance testing to verify whether the item has been satisfied?
- D . Are all variables defined with meaningful, consistent and clear names?
A
Explanation:
For a project where the reuse of software components is a key objective, it is essential to ensure that all modules, data, and interfaces are uniquely identified. This facilitates the tracking, maintenance, and reuse of these components in different parts of the software or in other projects. Unique identification helps in preventing confusion and conflicts that may arise due to the reuse of components that have the same name but different functionalities or interfaces.
Consider the pseudo code provided below:
Given the following tests, what additional test(s) (if any) would be needed in order to achieve 100% statement coverage, with the minimum number of tests?
Test 1: A = 7, B = 7, Expected output: 7
Test 2: A = 7, B = 5, Expected output: 5
- A . A=6, B=12, Expected output: Bingo! and A=7, B=9, Expected output: 7
- B . A=6, B=12, Expected output: Bingo!
- C . A=7, B=9, Expected output: 7
- D . No additional test cases are needed to achieve 100% statement coverage.
B
Explanation:
100% statement coverage means that every executable statement is executed at least once during testing.
Based on the provided pseudo-code and the test cases given:
Test 1 (A = 7, B = 7) and Test 2 (A = 7, B = 5) between them execute both assignments to MIN and the statement that prints MIN.
Neither test satisfies the condition B = 2*A, so the statement that prints "Bingo!" is never executed.
One additional test in which B equals 2*A, such as A = 6, B = 12 with expected output "Bingo!", covers the remaining statement. The further test A = 7, B = 9 in option A executes no statements beyond those already covered, so option B achieves 100% statement coverage with the minimum number of additional tests.
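The pseudo code is not reproduced above; a hypothetical reconstruction consistent with the expected outputs illustrates the uncovered statement:

```python
def program(A: int, B: int) -> None:
    # Hypothetical reconstruction (the original pseudo code is not shown).
    MIN = B if B < A else A
    if B == 2 * A:
        print("Bingo!")   # not reached by Test 1 (7, 7) or Test 2 (7, 5)
    else:
        print(MIN)

program(7, 7)    # Test 1 -> 7
program(7, 5)    # Test 2 -> 5
program(6, 12)   # additional test from option B -> Bingo!
```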
You are asked to provide a practical and pragmatic testing solution for a commercial system where the main user interface is via the Internet. It is critical that the company’s existing good name and market profile are not damaged in any way. Time to market is not a critical issue when appropriate testing solutions are identified to mitigate business risks.
A product risk assessment has revealed the following product risk:
• Abnormal application termination due to connection failure of the main interface.
Which of the following is the appropriate test type to address this risk?
- A . Performance efficiency testing
- B . Portability testing
- C . Reliability testing
- D . Operability testing
C
Explanation:
Reliability testing is the process of checking whether the software consistently performs according to its specifications. For a commercial system with a critical internet-based user interface, ensuring that the application can handle connection failures without abnormal terminations is essential. Reliability testing would include testing the system’s ability to recover from failures and continue operating, which directly addresses the risk identified.
Consider the code fragment provided below:
How many test cases are needed for the code fragment lines 26 – 37 to achieve 100% modified condition/decision coverage?
- A . 2 test cases
- B . 4 test cases
- C . 6 test cases
- D . 8 test cases
B
Explanation:
Modified condition/decision coverage (MC/DC) requires each condition in a decision to be shown to independently affect the decision’s outcome. For the code fragment provided, we have three independent conditions that need to be evaluated both as true and false. The minimum number of test cases needed to satisfy MC/DC for three conditions is four, which would allow each condition to be shown to independently affect the outcome of the decision.
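The code fragment is not reproduced above, but the "N conditions need N+1 tests" rule can be illustrated with an assumed decision D = A AND (B OR C):
• Test 1: A=T, B=T, C=F → D=T
• Test 2: A=F, B=T, C=F → D=F (pairs with Test 1 to show A's independent effect)
• Test 3: A=T, B=F, C=F → D=F (pairs with Test 1 to show B's independent effect)
• Test 4: A=T, B=F, C=T → D=T (pairs with Test 3 to show C's independent effect)
Three conditions thus need 3 + 1 = 4 test cases.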
Consider the following specification:
If you are flying with an economy ticket, there is a possibility that you may get upgraded to business class, especially if you hold a gold card in the airline’s frequent flier program. If you don’t hold a gold card, there is a possibility that you will get ‘bumped’ off the flight if it is full when you check in late.
This is shown in the control flow graph below. Note that each box (i.e., statement, decision) has been numbered.
Three tests have been run:
Test 1: Gold card holder who gets upgraded to business class
Test 2: Non-gold card holder who stays in economy
Test 3: A person who is bumped from the flight
What is the level of decision coverage achieved by these three tests?
- A . 60%
- B . 67%
- C . 75%
- D . 80%
B
Explanation:
The control flow graph provided illustrates the decision points for the airline's upgrade and boarding process. Decision coverage measures the percentage of decision outcomes (the true and false branch of each decision) exercised during testing; with three decisions in the graph there are six decision outcomes in total.
Test 1 covers the gold card holder path through the upgrade decision.
Test 2 covers the non-gold card holder path that stays in economy.
Test 3 covers the path that leads to being bumped from the flight.
Between them, these paths exercise four of the six decision outcomes, giving 4/6, which is approximately 67% decision coverage.
Consider the code fragment provided below:
The comment frequency of the code fragment is 13%.
To which non-functional quality characteristic does a good level of comment frequency especially contribute?
- A . Portability
- B . Maintainability
- C . Usability
- D . Performance Efficiency
B
Explanation:
The comment frequency in a code fragment relates to the number of comments in relation to the code size. A good level of comment frequency can significantly contribute to the maintainability of the software. Maintainability is a non-functional quality characteristic that refers to the ease with which a software system can be modified to correct defects, update features, improve performance or other attributes, or adapt to a changed environment. Comments in the code help developers understand the logic, purpose, and functionality of the code, which is crucial when modifications are required. This does not directly contribute to portability, usability, or performance efficiency, which are concerned with different aspects of the software’s operation and user interaction.
Which of the following is a valid reason for including performance testing in a test approach?
- A . To reduce the threat of code insertion into a web page which may be exercised by subsequent users.
- B . To evaluate the system’s tolerance to faults in terms of handling unexpected input values.
- C . To mitigate the risk of long response times to defects reported by users and/or customers.
- D . To evaluate the ability of a system to handle increasing levels of load.
D
Explanation:
Performance testing is a key part of ensuring that a system can handle the expected load and perform well under high demand. This type of testing is designed to test the robustness, speed, scalability, and stability of the system under a given workload. It is not primarily concerned with security threats such as code insertion (Option A), nor with fault tolerance in terms of unexpected input values (Option B), nor with the speed of addressing user-reported defects (Option C), although these may be tangential benefits. Performance testing is focused on ensuring that the system meets performance criteria and can handle increasing loads without degradation of service, which is essential for providing a good user experience and for the system’s reliability.
Which of the following statements best captures the difference between data-driven and keyword-driven test automation?
- A . Data-driven test automation extends keyword-driven automation by defining data corresponding to business processes.
- B . Keyword-driven test automation extends data-driven automation by defining keywords corresponding to business processes.
- C . Data-driven test automation is more maintainable than keyword-driven test automation.
- D . Keyword-driven test automation is easier to develop than data-driven test automation.
B
Explanation:
Keyword-driven test automation is a framework where test cases are written using keywords that represent the actions or tests to be performed on the system. This is an extension of data-driven test automation, which focuses on separating test scripts from the test data, allowing the same test script to be run with various sets of data. Keyword-driven test automation further abstracts the process by allowing tests to be written in a more human-readable form that corresponds to business processes. This approach can improve maintainability and readability of test cases, making them easier to understand and modify. It’s not necessarily the case that one is more maintainable or easier to develop than the other (Options C and D); rather, they serve different purposes in test automation strategy.
A medical company has performed a safety criticality analysis using the IEC61508 standard. The software components to be developed have been categorized by Safety Integrity Level (SIL). Most components have been rated at SIL 1 or 2, and a few components at SIL 4.
After some discussions with the QA manager, the project has decided to adhere to the recommendations for test coverage provided by the IEC61508 standard.
Which level and type of test coverage should at least be used for the components rated at Safety Integrity Level (SIL) 2?
- A . 100% statement coverage, 100% decision coverage and 100% multiple condition coverage
- B . 100% statement coverage, 100% decision coverage and 100% MC/DC coverage
- C . 100% statement coverage and 100% decision coverage
- D . 100% statement coverage
C
Explanation:
In the context of software testing, different safety integrity levels (SIL) require different levels of rigor in testing. According to the IEC61508 standard, for software components rated at SIL 2, achieving 100% statement coverage and 100% decision coverage is recommended. Statement coverage ensures that every line of code is executed at least once during testing, while decision coverage ensures that every decision in the code (e.g., every branch of an IF statement) is executed on both the true and false sides. These coverage criteria ensure a thorough testing of the software components to validate that they behave correctly in all circumstances. Multiple condition coverage and MC/DC coverage (Options A and B) are more rigorous and typically required for higher SIL levels, such as SIL 4.
A software company based in Spain that develops mobile applications expects many small updates in the future, e.g., due to changing configurations and customer feedback. The company also wants to focus on being able to change the software effectively and efficiently during initial development without introducing new defects.
Which maintainability sub-characteristic should be covered by the test approach during the initial
development?
- A . Analysability
- B . Modifiability
- C . Modularity
- D . Re-usability
B
Explanation:
In the context of a software company in Spain developing mobile applications with an expectation of many small updates due to changing configurations and customer feedback, focusing on being able to change the software effectively and efficiently during initial development without introducing new defects is crucial. The maintainability sub-characteristic that should be covered by the test approach during the initial development is Modifiability.
Modifiability refers to the ease with which a software product can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment. In a scenario where frequent and small updates are anticipated, ensuring that the software architecture and design support easy modification is vital. This not only aids in implementing changes more rapidly but also helps in maintaining the stability and integrity of the application, thereby preventing the introduction of new defects. The focus on modifiability ensures that the software remains responsive to customer feedback and evolving requirements without compromising on quality or performance.
Which of the following statements about performance testing tools is NOT correct?
- A . Typical metrics and reports provided by performance testing tools include the number of simulated users throughout the test, and the number and type of transactions generated by the simulated users, and the arrival rate of the transactions.
- B . Significant factors to consider in the implementation of performance testing tools include the flexibility of the tool to allow different operational profiles to be easily implemented, and the hardware and network bandwidth required to generate the load.
- C . Performance testing tools typically drive the application by simulating user interaction at the graphical user interface level to more accurately measure response times.
- D . Performance testing tools generate a load by simulating a large number of virtual users following their designated operational profiles to generate specific volumes of input data.
C
Explanation:
The statement about performance testing tools that is NOT correct is that they typically drive the application by simulating user interaction at the graphical user interface (GUI) level to more accurately measure response times. In practice, performance testing tools often simulate user interactions at a protocol or service level rather than the GUI level. This approach allows the tools to generate a high load by simulating many virtual users, which would be challenging to achieve with GUI-level interactions due to the higher resource consumption and slower execution speed associated with GUI automation.
Performance testing tools are designed to assess the performance of a system under a particular load and are not primarily focused on the user interface. They simulate multiple users accessing the system simultaneously, which helps in identifying bottlenecks, understanding the system’s behavior under load, and determining how the system scales with increasing load. The tools typically simulate user requests to the server, bypassing the GUI to directly test the backend, APIs, or other service endpoints. This method allows for more efficient and scalable testing, enabling the simulation of thousands of users without the overhead of rendering the GUI.
Within the world of consumer electronics, the amount of embedded software is growing rapidly. The amount of software in high-end television sets has increased by a factor of about eight over the last six years. In addition, the market of consumer electronics has been faced with a 5-10% price erosion per year. The price of a product is, among a number of other things, determined by the microcontroller used. Therefore, the use of ROM and RAM remains under high pressure in consumer electronic products, leading to severe restrictions on code size.
You are a Technical Test Analyst involved in the review of the architecture of this project.
Which of the following issues would be MOST important to focus on during the review and when verifying the correct implementation?
- A . Connection pooling
- B . Caching
- C . Transaction concurrency
- D . Lazy instantiation
D
Explanation:
The key context here is the challenge of managing limited resources, particularly ROM and RAM, due to severe restrictions on code size in consumer electronics. Lazy instantiation is a design pattern that defers the creation of an object until the first time it is needed. This approach can significantly reduce the application’s memory footprint by avoiding unnecessary pre-allocation of memory, which is particularly valuable in systems where memory resources are constrained. In reviewing the architecture for such a system, it’s crucial to ensure that objects are only created when necessary and that memory is optimally managed. Hence, the focus on lazy instantiation would be most important to ensure that the system uses resources efficiently and remains within the restricted code size.
Assume you are involved in testing a Health Insurance Calculation system.
At the main screen one can enter information for a new client. The information to be provided consists of last name, first name and date of birth. After confirmation of the information, the system checks the age of the potential new client and calculates a proposed premium.
The system also has the option to request information for an existing client, using the client’s ID number.
A keyword-driven automation approach is being used to automate most of the regression testing.
Based on the information provided, which TWO of the options provided would be the MOST LIKELY keywords for this application? (Choose two.)
- A . Remove_Client
- B . Enter_Client
- C . Print_Premium
- D . Select_Client
- E . Exclude_Client
B, D
Explanation:
Considering the functionalities described for the Health Insurance Calculation system, the keywords would represent the main actions that can be performed in the system. ‘Enter_Client’ would be a keyword for entering new client information, which is a primary feature of the system as described. ‘Select_Client’ would be used to retrieve information for an existing client using the client’s ID number, which is another main functionality. Other options such as ‘Remove_Client’, ‘Print_Premium’, and ‘Exclude_Client’ are not explicitly mentioned in the provided system functionalities, therefore, ‘Enter_Client’ and ‘Select_Client’ are the most likely keywords for automation.
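A minimal sketch of how such keywords might drive the application in a keyword-driven framework follows; the function bodies, dispatch table and test data are invented placeholders.

```python
# Keyword implementations (placeholders for real automation actions).
def enter_client(last, first, dob): print(f"entering {last}, {first}, {dob}")
def select_client(client_id):       print(f"selecting client {client_id}")

KEYWORDS = {"Enter_Client": enter_client, "Select_Client": select_client}

# The test table separates test design (keywords + data) from implementation.
test_table = [
    ("Enter_Client", ("Smith", "Anna", "1990-04-01")),
    ("Select_Client", ("C-1042",)),
]

for keyword, args in test_table:
    KEYWORDS[keyword](*args)   # the framework dispatches each keyword to its action
```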
At which test level would reliability testing most likely be performed?
- A . Static testing
- B . Component testing
- C . System testing
- D . Functional acceptance testing
C
Explanation:
Reliability testing is aimed at verifying the software’s ability to function under expected conditions for a specified period of time. It is typically conducted during system testing, where the software is tested in its entirety to ensure that all components work together as expected in an environment that closely simulates the production environment. Reliability testing is not typically associated with static testing, component testing, or functional acceptance testing, as these levels of testing do not address the overall behavior of the system over time.
Consider the following control flow graph:
The control flow represents a software component of a car navigation system. Within the project the maximum cyclomatic complexity to be allowed is set at 5.
Which of the following statements is correct?
- A . No defect needs to be reported since the cyclomatic complexity of the component is calculated at 3.
- B . No defect needs to be reported since the cyclomatic complexity of the component is calculated at 4.
- C . No defect needs to be reported since the cyclomatic complexity of the component is calculated at 5.
- D . A defect needs to be reported since the cyclomatic complexity of the component is calculated at 6.
D
Explanation:
Cyclomatic complexity is a measure of the number of linearly independent paths through a program's source code and is often used as a measure of the program's structural complexity. It can be calculated as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components of the control flow graph. For the control flow graph of this component the calculation yields a cyclomatic complexity of 6, which exceeds the project maximum of 5, so a defect should be reported.
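The graph itself is not reproduced above; as an illustration only, one set of counts consistent with the stated result (the edge and node counts are assumptions) would be a graph with 12 edges and 8 nodes in a single connected component:

```latex
V(G) = E - N + 2P = 12 - 8 + 2 \cdot 1 = 6 > 5
```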
Consider the pseudo code for the Answer program:
Which of the following statements about the Answer program BEST describes the control flow anomalies to be found in the program?
- A . The Answer program contains no control flow anomalies.
- B . The Answer program contains unreachable code.
- C . The Answer program contains unreachable code and an infinite loop.
- D . The Answer program contains an infinite loop.
C
Explanation:
The provided pseudo code for the Answer program shows a WHILE loop that will always execute because the condition for the loop to terminate (a >= d) is never met within the loop's body, resulting in an infinite loop. In addition, because ‘b’ is assigned ‘a + 10’ before ‘a’ is set to 2, ‘b’ never takes the value 12, so the ‘THEN’ branch of the IF statement, which includes ‘print(b)’, is unreachable code. Both are control flow anomalies: they represent logic in the code that cannot execute as presumably intended.
Which of the following statements about fault seeding tools is correct?
- A . Fault seeding tools insert defects into the source code to check the effectiveness of testing.
- B . Fault seeding tools insert defects into the source code to test the input checking capabilities of the software.
- C . Fault seeding tools insert defects into the source code to support the application of specification-based test design techniques.
- D . Fault seeding tools insert defects into the source code to check the level of maintainability of the software.
A
Explanation:
Fault seeding is a method used to evaluate the effectiveness of a testing process. Tools designed for fault seeding intentionally insert known defects into the source code, which are then supposed to be discovered during testing. The main purpose is not to check the input checking capabilities, support specification-based test design techniques, or assess maintainability of the software, but rather to gauge how well the testing process can identify and capture defects. By comparing the number of seeded faults that are found against the total number of faults inserted, test teams can get an insight into the effectiveness of their testing strategies and coverage. This method helps in understanding the detection capabilities of testing efforts and in identifying potential areas for improvement in test processes.
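A commonly cited way to use the seeded-fault counts (a textbook estimation, not part of the question) is a capture-recapture style estimate: if S faults are seeded and s of them are found, while n real (non-seeded) defects are found by the same testing, the total number of real defects can be estimated as:

```latex
\hat{N} \approx n \cdot \frac{S}{s}
```

For example, if 8 of 10 seeded faults are found (80%) alongside 40 real defects, the estimate is 40 × 10/8 = 50 real defects in total, suggesting roughly 10 remain undetected.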
As a technical test analyst, you are involved in a risk analysis session using the Failure Mode and Effect Analysis technique. You are calculating risk priorities.
Which of the following are the major factors in this exercise?
- A . Severity and priority
- B . Functionality, reliability, usability, maintainability, efficiency and portability
- C . Likelihood and impact
- D . Financial damage, frequency of use and external visibility
C
Explanation:
Failure Mode and Effect Analysis (FMEA) is a structured approach to identify and address potential failures in a system, product, process, or service. The major factors involved in calculating risk priorities in FMEA are typically the severity of the potential failure, its likelihood of occurrence, and the ability to detect it. These factors are usually combined to form a Risk Priority Number (RPN) for each potential failure mode identified. However, the specific factors mentioned in the options like functionality, reliability, usability, maintainability, efficiency, and portability are quality characteristics that could be considered in an FMEA analysis but are not directly used for calculating risk priorities. Likewise, financial damage, frequency of use, and external visibility might influence the severity or impact of a failure, but they are not standard factors in calculating risk priorities in the context of FMEA. Therefore, the most relevant factors for calculating risk priorities in an FMEA context would typically be the likelihood of the failure occurring and its potential impact, which aligns with option C: Likelihood and impact.
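For reference, the standard FMEA risk priority number combines these factors as:

```latex
\mathit{RPN} = S \times O \times D
```

where S is the severity (impact), O the likelihood of occurrence and D the difficulty of detection; likelihood and impact are therefore the major inputs when calculating risk priorities.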
Which option correctly states the sequence of tasks to be undertaken when re-factoring test cases? SELECT ONE OPTION
- A . Evaluate, Identification, Analysis, Re-run, Refactor
- B . Analysis, Identification, Re-run, Refactor, Evaluate
- C . Identification, Evaluate, Analysis, Refactor, Re-run
- D . Identification, Analysis, Refactor, Re-run, Evaluate
D
Explanation:
The correct sequence of tasks for refactoring test cases is:
Identification: Recognize the need and potential areas for refactoring.
Analysis: Assess the impact and dependencies related to the changes.
Refactor: Make the actual modifications to improve the test cases.
Re-run: Execute the modified test cases to ensure they still meet the required objectives.
Evaluate: Assess the outcomes of the refactor to ensure effectiveness and efficiency.
This sequence is supported by the ISTQB documentation, emphasizing the methodical approach needed to efficiently update and improve test cases, ensuring they remain effective and relevant.
An enhancement to a Social Media application allows for the creation of new Groups. Any number of existing application members can be added to a Group. An attempt to add a non-existent member of the application to a Group will result in an error, as will an attempt to add the same member twice.
Members can be removed from an existing Group. An existing Group can be deleted but only if there are no current members attached to it.
Which one of the following Keyword-driven input tables provides the BEST test coverage of this
enhancement?
[Tables 1-4 are not fully reproduced here. Each keyword-driven input table has the columns Keyword, Group Id, Member Id and Result; the recoverable fragment of Table 1 begins with the rows "Create_Group, Group3 → Group created" and "Add_Member, Group3, @Member1 → Member added to Group", followed by expected results such as "Member added to Group", "Error – Group not empty", "Member removed from group" and "Group deleted".]
- A . Table 4
- B . Table 3
- C . Table 1
- D . Table 2
B
Explanation:
Table 3 provides the best test coverage for the described scenario, as it includes various key test conditions:
Attempting to add a non-existent member, resulting in an error.
Trying to add a member twice to the same group, leading to an error for duplicate entry.
Removing a member from the group.
Attempting to delete a group that is not empty, which correctly results in an error, and finally deleting a group when it is empty.
This table most comprehensively covers the functionalities and error handling specified in the enhancement details, effectively testing all scenarios including normal and exceptional behavior.
Why could test cases need to be refactored in an Agile project?
SELECT ONE OPTION
- A . To maintain bi-directional traceability with the user stories
- B . To increase the breadth of black box coverage
- C . To make them easier to understand and cheaper to modify
- D . To ensure that the tests and code remained aligned
C
Explanation:
In Agile projects, test cases may need to be refactored to make them simpler and less costly to modify. This is crucial because Agile methodologies prioritize adaptability and frequent changes to the codebase and documentation. Refactoring test cases in this context ensures that they remain aligned with the continuously evolving project requirements, are easier for teams to manage, and can be quickly adjusted to accommodate new or changed functionality without significant rework costs.
Your Agile team is developing a web-based system that will allow users to browse and buy online from a store’s shopping catalogue. Continuous Integration has been implemented and technically it is working well, running several times per day, but each run is taking almost as much time as the team is prepared to allow. It is clear that after a few more iterations, as the number of tests needed grows with the product, it will be taking too much time.
Which of the four options contains a pair of solutions that will BOTH help to solve this problem?
a) Only include unit and component integration tests in the automated CI runs.
b) Schedule low priority tests to be the first ones executed in each run, in order to provide rapid build verification.
c) Reduce the extent to which the automated tests go through the user interface, using technical interfaces instead.
d) Reduce the number of CI cycles run each day.
e) Select a subset of automated tests for the daytime CI runs, and run as many of the other tests as possible in an overnight cycle.
- A . d and e
- B . b and d
- C . c and e
- D . a and c
C
Explanation:
The correct option for addressing the issue of increasing test run times in a Continuous Integration (CI) environment is option C: reducing the extent to which automated tests go through the user interface, and selecting a subset of automated tests for the daytime CI runs while running many of the other tests in an overnight cycle.
Reducing User Interface Tests: By reducing the extent of tests that go through the user interface (UI) and using technical interfaces instead, you can significantly decrease the time each test takes. UI tests are generally slower due to rendering and user interaction simulations.
Optimizing Test Scheduling: By selecting only a subset of tests for daytime CI runs and scheduling extensive testing for overnight runs, the team can manage the test load better without compromising the frequency of integration, thus ensuring continuous testing does not become a bottleneck.
This dual approach effectively manages both the execution time of individual tests and the overall test process across the development cycle, maintaining the agility of the CI process without sacrificing the breadth of testing necessary for quality assurance.
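As an illustration of solution (c), the sketch below exercises a basket feature through a technical (HTTP) interface instead of the browser; the endpoint, payload and status code are invented for the example.

```python
import requests

def test_add_item_to_basket_via_api():
    # Hypothetical endpoint of the store's shopping catalogue service.
    response = requests.post(
        "https://shop.example.com/api/basket/items",
        json={"sku": "ABC-123", "quantity": 1},
        timeout=5,
    )
    assert response.status_code == 201
    assert response.json()["quantity"] == 1

# Bypassing the UI avoids browser start-up and rendering time, which keeps
# the daytime CI runs short; slower UI tests can move to the overnight
# cycle (solution e).
```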
Which statement is correct regarding the use of exploratory testing for safety critical systems?
- A . It should be used when black-box tests cannot be automated
- B . It is highly recommended for all risk levels
- C . It is not recommended, as manual black-box tests should be used instead
- D . It is highly recommended for low risk levels only
D
Explanation:
Exploratory testing is recommended for low-risk levels only when considering safety-critical systems.
The correct response is option D.
Safety-Critical Considerations: For safety-critical systems, where failures can result in severe consequences, structured and well-planned testing strategies are generally prioritized to ensure comprehensive coverage and risk mitigation.
Role of Exploratory Testing: Exploratory testing can be effective in low-risk scenarios within safety-critical systems by providing a method to quickly identify obvious issues without the constraints of scripted testing. However, it is not typically recommended for higher risk areas due to its less systematic nature and potential for missing critical test cases.
This approach ensures that while innovation and spontaneous testing are employed, they do not compromise the safety and rigorous validation required in critical system components.