Which of the following would not be a part of the principal component structure of the term structure of futures prices?
- A . Curvature component
- B . Trend component
- C . Parallel component
- D . Tilt component
C
Explanation:
The trend component refers to parallel shifts in the term structure, the tilt refers to changes in the shape of the term structure at the long and short ends, and the curvature refers to movements in the medium term part. The phrase ‘parallel component’ has no meaning and is not a part of the principal components in analyzing term structures.
Changes in the term structure can also be analyzed as "level, slope and curvature", so you should be aware of this terminology as well to refer to the principal components of a term structure analysis.
Loss provisioning is intended to cover:
- A . Unexpected losses
- B . Losses in excess of unexpected losses
- C . Both expected and unexpected losses
- D . Expected losses
D
Explanation:
Loss provisioning is intended to cover expected losses. Economic capital is expected to cover unexpected losses. No capital or provisions are set aside for losses in excess of unexpected losses, which will ultimately be borne by equity. Choice ‘d’ is the correct answer.
For a security with a daily standard deviation of 2%, calculate the 10-day VaR at the 95% confidence level. Assume expected daily returns to be nil.
- A . 0.02
- B . 0.104
- C . 0.1471
- D . None of the above.
B
Explanation:
If the daily standard deviation is 2%, the 10-day standard deviation will be 2% × √10 = 0.063245. The value of Z at the 95% confidence level is 1.64485. Therefore the VaR value is 1.64485 × 0.063245 = 10.4%. The other choices are incorrect.
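If you want to verify this arithmetic yourself, here is a minimal Python sketch of the same calculation (the 2% daily volatility and the 95% confidence level come from the question; the rest is standard square-root-of-time scaling):

```python
from scipy.stats import norm

daily_vol = 0.02                      # daily standard deviation of returns
ten_day_vol = daily_vol * 10 ** 0.5   # square-root-of-time scaling: 2% * sqrt(10)
z_95 = norm.ppf(0.95)                 # ~1.64485 for a one-tailed 95% confidence level

var_10d = z_95 * ten_day_vol          # expected daily return assumed to be nil
print(round(var_10d, 4))              # ~0.104, ie 10.4% of the position value
```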
If μ and σ are the expected rate of return and volatility of an asset whose prices are log-normally distributed, and ε is a random drawing from a standard normal distribution, we can simulate the asset’s returns over a small time step Δt using the expression:
- A . –μΔt + σε√Δt
- B . μΔt + σε√Δt
- C . μΔt / σε√Δt
- D . μΔt – σε√Δt
B
Explanation:
A standard model for representing asset returns in finance is the Geometric Brownian Motion process, and returns according to this model can be estimated by the expression given in Choice ‘b’, ie the return over a time step Δt is μΔt + σε√Δt. Note that prices according to this model are log-normally distributed, and returns are normally distributed.
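As an illustration of the idea (not an exam formula), here is a small Python sketch that simulates returns of the form μΔt + σε√Δt and builds log-normal prices from them; the parameter values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_gbm_returns(mu, sigma, dt, n):
    """Simulate n period returns of the form mu*dt + sigma*eps*sqrt(dt), eps ~ N(0, 1)."""
    eps = rng.standard_normal(n)
    return mu * dt + sigma * eps * np.sqrt(dt)

# Illustrative parameters: 10% expected return, 20% volatility, daily steps
returns = simulate_gbm_returns(0.10, 0.20, 1 / 250, 10_000)
prices = 100 * np.exp(np.cumsum(returns))   # normally distributed returns, log-normal prices
```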
Which of the following statements is true?
I. It is sufficient to ensure that a parent entity has sufficient excess liquidity to cover a liquidity shortfall for a subsidiary.
II. If a parent entity has a shortfall of liquidity, it can always rely upon any excess liquidity that its foreign subsidiaries might have.
III. Wholesale funding sources for a bank refer to stable sources of funding provided by the central bank.
IV. Funding diversification refers to diversification of both funding sources and funding tenors.
- A . IV
- B . III and IV
- C . I and III
- D . I and IV
A
Explanation:
It is not generally sufficient to ensure the adequacy of liquidity across a group – ie it is not appropriate to just add up the sources and needs for liquidity across multiple entities in a group. This is because there can be restrictions on transferring liquidity between entities, particularly when the entities are located across borders. In cases where transfers of liquidity are permitted, there may be settlement delays in transferring funds from one entity to another. Therefore both statements I and II are incorrect.
Wholesale funding sources refers to the temporary interbank funding sources that need to be rolled over on very short intervals, often as short as overnight. These are not stable sources for long term funding. Statement III is therefore false.
Statement IV is correct as funding diversification refers to diversification of both funding sources and the duration for which the amounts are borrowed, ie tenor diversity. Statement IV is the only correct statement and therefore Choice ‘a’ is the correct answer.
The loss severity distribution for operational risk loss events is generally modeled by which of the following distributions:
I. The lognormal distribution
II. The gamma density function
III. Generalized hyperbolic distributions
IV. Lognormal mixtures
- A . II and III
- B . I, II and III
- C . I, II, III and IV
- D . I and III
C
Explanation:
All of the distributions referred to in the question can be used to model the loss severity distribution for op risk. Therefore Choice ‘c’ is the correct answer.
Which of the following cannot be used as an internal credit rating model to assess an individual borrower:
- A . Distance to default model
- B . Probit model
- C . Logit model
- D . Altman’s Z-score
A
Explanation:
Altman’s Z-score, the Probit and the Logit models can all be used to assess the credit rating of an individual borrower. The distance to default model is a market-based structural approach (used, for example, in the KMV methodology) that relies upon the firm’s traded equity, and is not an internal credit rating model for assessing an individual borrower. Therefore Choice ‘a’ is the correct answer.
If the 1-day VaR of a portfolio is $25m, what is the 10-day VaR for the portfolio?
- A . $7.906m
- B . $79.06m
- C . $250m
- D . Cannot be determined without the confidence level being specified
B
Explanation:
The 10-day VaR is = $25m x SQRT(10) = $79.06m. Choice ‘b’ is the correct answer.
Which of the following statements are true:
I. Top down approaches help focus management attention on the frequency and severity of loss events, while bottom up approaches do not.
II. Top down approaches rely upon high level data while bottom up approaches need firm specific risk data to estimate risk.
III. Scenario analysis can help capture both qualitative and quantitative dimensions of operational risk.
- A . III only
- B . II and III
- C . I only
- D . II only
B
Explanation:
Top down approaches do not consider event frequency and severity; instead, they focus on high level available data such as total capital, income volatility, peer group information on risk capital etc. Bottom up approaches focus on severity and frequency distributions for events. Statement I is therefore not correct.
Top down approaches do indeed rely upon high level aggregate data and tend to infer operational risk capital requirements from these. Bottom up approaches look at more detailed firm specific information. Statement II is correct.
Scenario analysis requires estimating losses from risk scenarios, and allows incorporating the judgment and views of managers in addition to any data that might be available from internal or external loss databases. Statement III is correct. Therefore Choice ‘b’ is the correct answer.
The results of ‘desk-level’ stress tests cannot be added together to arrive at institution wide estimates because:
- A . Desk-level stress tests tend to ignore higher level risks that are relevant to the institution but completely outside the control of the individual desks.
- B . Desk-level stress tests focus on desk specific risks that may be minor or irrelevant in the larger scheme at the institution level.
- C . Desk-level stress tests tend to focus on extreme movements in risk parameters (such as volatility) without considering economy wide scenarios that may represent more realistic and consistent situations for the institution.
- D . All of the above
D
Explanation:
All the above listed reasons are valid explanations as to why an institution level stress test cannot be estimated by merely summing up the results of the stress tests of the individual desks.
Regulatory arbitrage refers to:
- A . the practice of transferring business and profits to jurisdictions (such as those in other countries) to avoid or reduce capital adequacy requirements
- B . the practice of structuring a financial institution’s business as a bank holding company to arbitrage the differing capital and credit rating requirements for different business lines
- C . the practice of investing and financing decisions being driven by associated regulatory capital requirements as opposed to the true underlying economics of these decisions
- D . All of the above
C
Explanation:
The correct answer is Choice ‘c’. The other choices do not refer to ‘regulatory arbitrage’.
According to the Basel II standard, which of the following conditions must be satisfied before a bank can use ‘mark-to-model’ for securities in its trading book?
I. Marking-to-market is not possible
II. Market inputs for the model should be sourced in line with market prices
III. The model should have been created by the front office
IV. The model should be subject to periodic review to determine the accuracy of its performance
- A . I, II and IV
- B . II and III
- C . I, II, III and IV
- D . III and IV
A
Explanation:
According to Basel II, where marking-to-market is not possible, banks may mark-to-model, where this can be demonstrated to be prudent. Marking-to-model is defined as any valuation which has to be benchmarked, extrapolated or otherwise calculated from a market input. When marking to model, an extra degree of conservatism is appropriate.
Supervisory authorities will consider the following in assessing whether a mark-to-model valuation is prudent:
• Senior management should be aware of the elements of the trading book which are subject to mark to model and should understand the materiality of the uncertainty this creates in the reporting of the risk/performance of the business.
• Market inputs should be sourced, to the extent possible, in line with market prices. The appropriateness of the market inputs for the particular position being valued should be reviewed regularly.
• Where available, generally accepted valuation methodologies for particular products should be used as far as possible.
• Where the model is developed by the institution itself, it should be based on appropriate assumptions, which have been assessed and challenged by suitably qualified parties independent of the development process. The model should be developed or approved independently of the front office. It should be independently tested. This includes validating the mathematics, the assumptions and the software implementation.
• There should be formal change control procedures in place and a secure copy of the model should be held and periodically used to check valuations.
• Risk management should be aware of the weaknesses of the models used and how best to reflect those in the valuation output.
• The model should be subject to periodic review to determine the accuracy of its performance (e.g. assessing continued appropriateness of the assumptions, analysis of P&L versus risk factors, comparison of actual close out values to model outputs).
• Valuation adjustments should be made as appropriate, for example, to cover the uncertainty of the model valuation.
The model should be developed independently of the front office, and not by it. Therefore statement III does not represent an appropriate choice. Choice ‘a’ is the correct answer.
For identical mean and variance, which of the following distribution assumptions will provide a higher estimate of VaR at a high level of confidence?
- A . A distribution with kurtosis = 8
- B . A distribution with kurtosis = 0
- C . A distribution with kurtosis = 2
- D . A distribution with kurtosis = 3
A
Explanation:
A fat tailed distribution has more weight in the tails, and therefore at a high level of confidence the VaR estimate will be higher for a distribution with heavier tails. At relatively lower levels of confidence however, the situation is reversed as the heavier tailed distribution will have a VaR estimate lower than a thinner tailed distribution.
A higher level of kurtosis implies a ‘peaked’ distribution with fatter tails. Among the given choices, a distribution with kurtosis equal to 8 will have the heaviest tails, and therefore a higher VaR estimate. Choice ‘a’ is therefore the correct answer. Also refer to the tutorial about VaR and fat tails.
A stock’s volatility under EWMA is estimated at 3.5% on a day its price is $10. The next day, the price moves to $11.
What is the EWMA estimate of the volatility the next day? Assume the persistence parameter = 0.93.
- A . 0.0421
- B . 0.0224
- C . 0.0429
- D . 0.0018
A
Explanation:
The correct answer is choice ‘a’
Recall the formula for calculating variance under EWMA: σ²(t) = λ·σ²(t–1) + (1 – λ)·r²(t–1), where λ is the persistence parameter and r is the return. Therefore the correct answer is =SQRT((1 – 0.93)*(LN(11/10))^2 + 0.93*(3.5%^2)) = 4.21%. The other answers are incorrect. Note that continuous returns are to be used, ie ln(11/10) and not discrete returns (= 1/10, ie 10%) – though generally the difference between the two is small over short time periods. (If in the exam the answer doesn’t exactly match, try using discrete returns.)
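For those who like to verify such calculations programmatically, here is a minimal Python sketch of the same EWMA update (all the numbers come from the question):

```python
import math

lam = 0.93                  # persistence parameter (lambda)
prev_vol = 0.035            # yesterday's EWMA volatility estimate
ret = math.log(11 / 10)     # continuous return from the price move 10 -> 11

# EWMA variance update: var_t = lambda * var_{t-1} + (1 - lambda) * r_{t-1}^2
new_var = lam * prev_vol ** 2 + (1 - lam) * ret ** 2
print(round(math.sqrt(new_var), 4))   # ~0.0421
```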
A Monte Carlo simulation based VaR can be effectively used in which of the following cases:
- A . When returns data cannot be analytically modeled
- B . When returns are discontinuous or display large jumps
- C . Where analytical methods are too complex to effectively use
- D . All of the above
D
Explanation:
Monte Carlo simulations can be effectively used in all cases where an analytical estimate of the VaR cannot be made for any reason – which may include complexity of portfolios, discontinuities or non-linearity in returns or just the plain unavailability of closed form analytical models. Therefore Choice ‘d’ is the correct answer.
Which of the following statements are true:
I. The three pillars under Basel II are market risk, credit risk and operational risk.
II. Basel II is an improvement over Basel I by increasing the risk sensitivity of the minimum capital requirements.
III. Basel II encourages disclosure of capital levels and risks
- A . III only
- B . I only
- C . I and II
- D . II and III
D
Explanation:
The three pillars under Basel II are minimum capital requirements, supervisory review process and market discipline. Therefore statement I is false. The other two statements are accurate. Therefore Choice ‘d’ is the correct answer.
Which of the following are measures of liquidity risk
I. Liquidity Coverage Ratio
II. Net Stable Funding Ratio
III. Book Value to Share Price
IV. Earnings Per Share
- A . III and IV
- B . I and II
- C . II and III
- D . I and IV
B
Explanation:
In December 2009 the BIS came out with a new consultative document on liquidity risk. Given the events of 2007 – 2009, it has been clear that a key characteristic of the financial crisis was the inaccurate and ineffective management of liquidity risk.
The paper sets out two separate but complementary objectives in respect of liquidity risk management: the first objective relates to the short-term liquidity risk profile of an institution, and the second objective is to promote resiliency over longer-term time horizons.
The paper identifies the following two ratios – you should be aware of these – though I am not sure if these will show up in the PRMIA exam: the Liquidity Coverage Ratio (LCR), which addresses the short-term objective, and the Net Stable Funding Ratio (NSFR), which addresses the longer-term objective.
Which of the following is not a credit event under ISDA definitions?
- A . Restructuring
- B . Obligation acceleration
- C . Rating downgrade
- D . Failure to pay
C
Explanation:
According to ISDA, a credit event is an event linked to the deteriorating credit worthiness of an underlying reference entity in a credit derivative. The occurrence of a credit event usually triggers full or partial termination of the transaction and a payment from protection seller to protection buyer.
Credit events include
– bankruptcy,
– failure to pay,
– restructuring,
– obligation acceleration,
– obligation default and
– repudiation/moratorium.
A rating downgrade is not a credit event.
An equity manager holds a portfolio valued at $10m which has a beta of 1.1. He believes the market may see a dip in the coming weeks and wishes to eliminate his market exposure temporarily. Market index futures are available and the current futures notional on these is $50,000 per contract.
Which of the following represents the best strategy for the manager to hedge his risk according to his views?
- A . Sell 200 futures contracts
- B . Buy 220 futures contracts
- C . Sell 220 futures contracts
- D . Liquidate his portfolio as soon as possible
C
Explanation:
The number of futures contracts to sell is equal to $10m x 1.1/$50,000 = 220. Liquidating his portfolio would reduce the beta to zero, but would also get rid of the bets he wants to play on. Therefore Choice ‘c’ is the correct answer.
(Note that futures and spot prices generally move together, allowing futures positions to be used for hedging the risk against movement in spot prices. However there is a basis risk between spot and futures, therefore a perfect hedge is never possible with futures. If interest rates move a great deal, spot and futures prices may diverge. Of course, this risk is generally quite low but may become amplified with large leveraged portfolios. Just something to be aware of.)
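A quick Python sketch of the hedge-ratio arithmetic, using the figures from the question:

```python
portfolio_value = 10_000_000
portfolio_beta = 1.1
futures_notional = 50_000    # notional per index futures contract

# Number of contracts to sell to bring the portfolio beta (temporarily) to zero
n_contracts = portfolio_value * portfolio_beta / futures_notional
print(n_contracts)           # 220.0 -> sell 220 contracts
```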
Which of the following best describes Altman’s Z-score
- A . A calculation of default probabilities
- B . A regression of probability of survival against a given set of factors
- C . A numerical computation based upon accounting ratios
- D . A standardized z based upon the normal distribution
C
Explanation:
Choice ‘c’ correctly describes Altman’s z-score. All other choices are incorrect.
Which of the following are considered asset based credit enhancements?
I. Collateral
II. Credit default swaps
III. Close out netting arrangements
IV. Cash reserves
- A . II and IV
- B . I, II and IV
- C . I and IV
- D . I and III
D
Explanation:
Credit enhancements come in two varieties: counterparty based, where the exercise of the credit enhancement requires a third party to pay, and this includes guarantees and CDS contracts. Asset based credit enhancements are based upon a physical asset in possession, and these include collateral and balances owed on other trades or transactions, and availed through close out netting arrangements.
Of the listed choices, I and III are asset based credit enhancements, and II is third party based. Cash reserves are not credit enhancements (unless held as collateral).
When considering a request for a loan from a retail customer, which of the following factors is relevant for a bank to consider:
- A . The other retail loans in its portfolio
- B . The credit worthiness of the retail customer
- C . The contribution this new loan would bring to total portfolio risk
- D . All of the above
D
Explanation:
The credit worthiness of the retail customer is certainly a factor for the bank to consider as it will need to price the loan to cover the expectation of default. At the same time, it will need to look at the other loans in its portfolio so as to avoid unacceptable concentration risk. A corollary of the same theme is that the bank will need to take a portfolio view of the loan request and consider its contribution to total portfolio risk. Therefore all the choices are appropriate considerations for the bank and Choice ‘d’ is the correct answer.
Which of the following statements are true:
I. When averaging quantiles of two Pareto distributions, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models based upon the number of data items in each original model.
II. When modeling severity distributions, we can only use distributions which have fewer parameters than the number of datapoints we are modeling from.
III. If an internal loss data based model covers the same risks as a scenario based model, they can be combined using the weighted average of their parameters.
IV. If an internal loss model and a scenario based model address different risks, the models can be combined by taking their sums.
- A . II and III
- B . III and IV
- C . I and II
- D . All statements are true
D
Explanation:
Statement I is true, the quantiles of the averaged models are equal to the geometric average of the quantiles of the original models.
Statement II is correct, the number of data points from which model parameters are estimated must be greater than the number of parameters. So if a distribution, say Poisson, has one parameter, we need at least two data points to estimate the parameter. Other complex distributions may have multiple parameters for shape, scale and other things, and the minimum number of observations required will be greater than the number of parameters.
Statement III is true, if the ILD data and scenarios cover the same risk, they are essentially different perspectives on the same risk, and therefore should be combined as weighted averages.
But if they cover completely different risks, the models will need to be added together, not averaged – which is why Statement IV is true.
The largest 10 losses over a 250 day observation period are as follows. Calculate the expected shortfall at a 98% confidence level:
20m
19m
19m
17m
16m
13m
11m
10m
9m
9m
- A . 19.5
- B . 14.3
- C . 18.2
- D . 16
C
Explanation:
For a dataset with 250 observations, the top 2% of the losses will be the top 5 observations. Expected shortfall is the average of the losses beyond the VaR threshold. Therefore the correct answer is (20 + 19 + 19 + 17 + 16)/5 = 18.2m.
Note that Expected Shortfall is also called conditional VaR (cVaR), Expected Tail Loss and Tail average.
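Here is a small Python sketch of the same expected shortfall calculation (the loss figures and the 250-day observation window come from the question):

```python
losses = [20, 19, 19, 17, 16, 13, 11, 10, 9, 9]   # largest losses, in $m
n_obs = 250
confidence = 0.98

k = round(n_obs * (1 - confidence))       # number of tail observations: 5
tail = sorted(losses, reverse=True)[:k]   # worst k losses
print(sum(tail) / k)                      # 18.2
```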
Which of the following describes rating transition matrices published by credit rating firms:
- A . Expected ex-ante frequencies of migration from one credit rating to another over a one year period
- B . Probabilities of default for each credit rating class
- C . Probabilities of ratings transition from one rating to another for a given set of issuers
- D . Realized frequencies of migration from one credit rating to another over a one year period
D
Explanation:
Transition matrices are used for building distributions of the value of credit portfolios, and are the realized frequencies of migration from one credit rating to another over a period, generally one year.
Therefore Choice ‘d’ is the correct answer.
Since they represent an actually observed set of values, they are not probabilities nor are they forward looking ex-ante estimates, though they are often used as proxies for probabilities. Choice ‘a’ and Choice ‘c’ are not correct. They include more than information on just defaults, therefore Choice ‘b’ is not correct.
Under the KMV Moody’s approach to calculating expected default frequencies (EDF), a firm’s default on its obligations is likely when:
- A . expected asset values one year hence are below total liabilities
- B . asset values reach a level below short term debt
- C . asset values reach a level below total liabilities
- D . asset values reach a level between short term debt and total liabilities
D
Explanation:
An observed fact that the KMV approach relies upon is that firms do not default when their liabilities exceed assets, but when asset values are somewhere between short term liabilities and the total liabilities. In fact, the ‘default point’ in the KMV methodology is defined as the short term debt plus half of the long term debt. The difference between expected value of the assets in one year and this ‘default point’, when expressed in terms of standard deviation of the asset values, is called the ‘distance-to-default’. Therefore Choice ‘d’ is the correct answer. The other choices are incorrect.
If the default hazard rate for a company is 10%, and the spread on its bonds over the risk free rate is 800 bps, what is the expected recovery rate?
- A . 40.00%
- B . 20.00%
- C . 8.00%
- D . 0.00%
B
Explanation:
The recovery rate, the default hazard rate (also called the average default intensity) and the spread on debt are linked by the equation Hazard Rate = Spread/(1 – Recovery Rate). Therefore, the recovery rate implicit in the given data is = 1 – 8%/10% = 20%.
Which of the following are considered counterparty based credit enhancements?
I. Collateral
II. Credit default swaps
III. Close out netting arrangements
IV. Guarantees
- A . I and III
- B . II and IV
- C . I, II and IV
- D . I and IV
B
Explanation:
Credit enhancements come in two varieties: counterparty based, where the exercise of the credit enhancement requires a third party to pay, and this includes guarantees and CDS contracts. Asset based credit enhancements are based upon a physical asset in possession, and these include collateral and balances owed on other trades or transactions, and availed through close out netting arrangements.
Of the listed choices, I and III are asset based credit enhancements, and II and IV are third party based.
The probability of default of a security over a 1 year period is 3%.
What is the probability that it would have defaulted within 6 months?
- A . 98.49%
- B . 3.00%
- C . 1.51%
- D . 17.32%
C
Explanation:
The question is asking for the probability of default over a 6 month period when the probability of annual default is known. If we let the 6 month probability of default be ‘d’, then the probability of survival at the end of 1 year would be (1 – d)^2. This we know is equal to 1 – 3% = 0.97. Therefore we can calculate ‘d’ to be equal to 1.51%. Choice ‘c’ is the correct answer, the others are incorrect.
Note that an exam question may ask for probability of the security having survived after 6 months, in which case the answer might be 1 – 1.51%. Also note that such questions will always require you to use the probability of survival (1 – probability of default) for doing the calculations. That is because the probabilities of survival can be multiplied over periods of time, but not probabilities of default as the first default in any period is the ‘game-over’ event after which neither survival nor defaults mean anything. Therefore you generally always have to get the probability of survival till a point in time, and use that for any other calculations.
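A minimal Python sketch of the survival-probability arithmetic described above (the 3% annual default probability comes from the question):

```python
annual_pd = 0.03
annual_survival = 1 - annual_pd

# If s is the 6-month survival probability, then s^2 equals the 1-year survival probability
six_month_survival = annual_survival ** 0.5
six_month_pd = 1 - six_month_survival
print(round(six_month_pd, 4))   # ~0.0151, ie 1.51%
```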
Which of the following best describes economic capital?
- A . Economic capital is the amount of regulatory capital mandated for financial institutions in the OECD countries
- B . Economic capital is the amount of regulatory capital that minimizes the cost of capital for firm
- C . Economic capital reflects the amount of capital required to maintain a firm’s target credit rating
- D . Economic capital is a form of provision for market risk losses should adverse conditions arise
C
Explanation:
Economic capital is often calculated with a view to maintaining the credit ratings for a firm. It is the capital available to absorb unexpected losses, and credit ratings are also based upon a certain probability of default. Economic capital is often calculated at a level equal to the confidence required for the desired credit rating. For example, if the probability of default for a AA rating is 0.02%, and the firm desires to hold an AA rating, then economic capital maintained at a confidence level of 99.98% would allow for such a rating. In this case, economic capital set at the 99.98% level can be thought of as the level of losses that would not be exceeded with a 99.98% probability, and would help get the firm its desired credit rating.
Choice ‘c’ is the correct answer. Economic capital does not target minimizing the cost of capital, nor is it a provision for losses arising from market risk. The concept of economic capital is unrelated to where an institution or firm is based, therefore Choice ‘a’ is incorrect as well.
Which of the following will be a loss not covered by operational risk as defined under Basel II?
- A . Earthquakes
- B . Fat finger losses
- C . Systems failure
- D . Strategic planning
D
Explanation:
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.
Therefore any losses from poor strategic planning will not be a part of operational risk.
Choice ‘d’ is the correct answer.
Note that floods, earthquakes and the like are covered under the definition of operational risk as losses arising from loss or damage to physical assets from natural disaster or other events.
In respect of operational risk capital calculations, the Basel II accord recommends a confidence level and time horizon of:
- A . 99.9% confidence level over a 10 day time horizon
- B . 99% confidence level over a 10 year time horizon
- C . 99% confidence level over a 1 year time horizon
- D . 99.9% confidence level over a 1 year time horizon
D
Explanation:
Choice ‘d’ represents the Basel II requirement, all other choices are incorrect.
Which of the following statements are true:
I. Stress testing, if exhaustive, can replace traditional risk management tools such as value-at-risk (VaR)
II. Stress tests can be particularly useful in identifying risks with new products
III. Stress testing is distinct from a bank’s ICAAP carried out periodically
IV. Stress testing is a powerful communication tool that can convey risks to decision makers in an organization
- A . I, II and III
- B . I and III
- C . II and IV
- D . All of the above
C
Explanation:
Stress testing provides an independent and complementary perspective to other risk management tools such as value-at-risk and economic capital. These tools serve similar purposes but are not interchangeable. Stress testing, no matter how exhaustively done, cannot replace other tools such as those based on analytical or historical models. It can provide a useful sense check to validate models and assumptions, but is not a replacement for traditional techniques. Therefore statement I is false.
Stress testing can certainly help identify risks with new products for which historical data may be limited, and analytical models may be based upon many unproven assumptions. It can help challenge the risk characteristics of new products where stress situations have not been observed in the past. Therefore statement II is correct.
ICAAP stands for the ‘internal capital adequacy assessment process’ performed by a bank (remember the acronym and its expansion). Stress testing is an integral part of a firm’s ICAAP, and not distinct. It is one of the elements of the internal process. Therefore statement III is false.
Statement IV is correct as stress testing is indeed a powerful tool that can communicate risks throughout the organization as the stress scenarios are easier to comprehend than arcane statistical models. They are also easier to explain to regulators, and are a powerful communication tool.
Thus Choice ‘c’ is the correct answer.
The standard error of a Monte Carlo simulation is:
- A . Zero
- B . The same as that for a lognormal distribution
- C . Proportional to the inverse of the square root of the sample size
- D . None of the above
C
Explanation:
When we do a Monte Carlo simulation, the statistic we obtain (eg, the expected price) is an estimate of the real variable. The difference between the real value (which would be what we would get if we had access to the entire population) and that estimated by the Monte Carlo simulation is measured by the ‘standard error’, which is the standard deviation of the difference between the ‘real’ value and the simulated value (ie, the ‘error’).
As we increase the number of draws in a Monte Carlo simulation, the closer our estimate will be to the true value of the variable we are trying to estimate. But increasing the sample size does not reduce the error in a linear way, ie doubling the sample size does not halve the error; the error falls with the square root of the increase in the sample size. So if we have a sample size of 1,000, going up to a sample size of 100,000 will reduce the standard error by a factor of 10 (and not 100), ie, SQRT(1/100) = 1/10. In other words, the standard error is proportional to 1/√N, where N is the sample size.
Therefore Choice ‘c’ is correct and the others are incorrect.
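If you want to see the 1/√N behaviour empirically, here is a small Python sketch that estimates the mean of a standard normal at two sample sizes and compares the spread of the estimates; the sample sizes mirror the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate the mean of a standard normal at two sample sizes and compare the
# spread (standard error) of the estimates: it shrinks roughly as 1/sqrt(N)
for n in (1_000, 100_000):
    estimates = [rng.standard_normal(n).mean() for _ in range(200)]
    print(n, round(float(np.std(estimates)), 5))   # second figure is ~1/10th of the first
```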
Which of the following is not a consideration in determining the liquidity needs of a firm (as opposed to determining the time horizon for liquidity risk)?
- A . Speed with which new equity can be issued to the owners
- B . Collateral
- C . Off balance sheet items
- D . The firm’s business model
A
Explanation:
Managing liquidity requires understanding and providing for two things: the amount of liquidity needed to pay for all current obligations (in both normal and stressed scenarios), and determining the time horizon over which this liquidity should be available. The first is essentially a function of the business of the firm, and the assets and liabilities resulting from operations. The second considers other factors such as the speed with which new cash can be borrowed (eg, from the repo markets), the consequences of running out of liquidity (eg, maybe only an overdraft fee as opposed to bankruptcy), etc. In other words, liquidity risk management answers two questions: how much, and for how long.
This question asks for identifying the factors that affect the ‘how much’ part. Choice ‘d’, Choice ‘c’ and Choice ‘b’ do affect the determination of the liquidity needs of the firm.
Choice ‘a’ does not affect the liquidity needs, but the ‘how long’ part. Choice ‘a’ is therefore the correct answer.
Changes in which of the following do not affect the expected default frequencies (EDF) under the KMV Moody’s approach to credit risk?
- A . Changes in the debt level
- B . Changes in the risk free rate
- C . Changes in asset volatility
- D . Changes in the firm’s market capitalization
B
Explanation:
EDFs are derived from the distance to default. The distance to default is the number of standard deviations that expected asset values are away from the default point, which itself is defined as short term debt plus half of the long term debt. Therefore debt levels affect the EDF. Similarly, asset values are estimated using equity prices, therefore market capitalization affects EDF calculations. Asset volatility is the standard deviation that appears in the denominator of the distance to default calculation, therefore asset volatility affects the EDF too. The risk free rate is not directly factored into any of these calculations (except of course, one could argue that the level of interest rates may impact equity values or the discounted values of future cash flows, but that is a second order effect). Therefore Choice ‘b’ is the correct answer.
Which of the following is NOT an approach used to allocate economic capital to underlying business units:
- A . Stand alone economic capital contributions
- B . Marginal economic capital contributions
- C . Fixed ratio economic capital contributions
- D . Incremental economic capital contributions
C
Explanation:
Other than Choice ‘c’, all the others represent valid approaches to allocating economic capital to underlying business units. There is no such thing as a ‘fixed ratio economic capital contribution’.
Which of the following is closest to the description of a ‘risk functional’?
- A . A risk functional is the distribution that models the severity of a risk
- B . A risk functional is a model distribution that is an approximation of the true loss distribution of a risk
- C . Risk functional refers to the Kolmogorov-Smirnov distance
- D . A risk functional assigns a penalty value for the difference between a model distribution and a risk’s severity distribution
D
Explanation:
For operational risk modeling, both frequency and severity distributions need to be modeled. Modeling severity involves finding an analytical distribution, such as log-normal or other that approximates the distribution best represented by known data – whether from the internal loss database, the external loss database or scenario data. A ‘risk functional’ is a measure of the deviation of the model distribution from the risk’s actual severity distribution. It assigns a penalty value for the deviation, using a statistical measure, such as the KS distance (Kolmogorov-Smirnov distance).
The problem of finding the right distribution then becomes the problem of optimizing the risk functional. For example, if F is the model distribution, and G is the actual, or empirical severity distribution, and we are using the KS test, then the risk functional R is defined as R(F) = sup x |F(x) – G(x)|.
Note that ‘sup x’ stands for ‘supremum’, which is a more technical way of saying ‘maximum’. In other words, we are calculating the maximum absolute KS distance between the two distributions. (Note that the KS distance is the maximum of the distance between identical percentiles of the two distributions using the CDFs of the two.)
Once the risk functional is identified, we can minimize it to determine the best fitting distribution for severity.
The Basel framework does not permit which of the following Units of Measure (UoM) for operational risk modeling:
I. UoM based on legal entity
II. UoM based on event type
III. UoM based on geography
IV. UoM based on line of business
- A . I and IV
- B . III only
- C . II only
- D . None of the above
D
Explanation:
Units of Measure for operational risk are homogenous groupings of risks to allow sensible modeling decisions to be made. For example, some risks may be fat-tailed, such as the risk of regulatory fines, while other risks may have finite tails – for example, damage to physical assets (DPA) risk may be limited to the value of the asset in question.
Additionally, risk reporting may need to be done on a line of business, legal entity or regional basis, and in order to be able to do so, the right level of granularity needs to be captured in the risk modeling exercise. The level of granularity applied is called the ‘unit of measure’ (UoM), and it is acceptable to adopt all of the choices listed above as the dimensions that describe the unit of measure.
Note that it is entirely possible, even likely, to use legal entity, risk type, region, business and other dimensions simultaneously, though doing so is likely to result in an extremely large number of UoM combinations. That can be addressed by then subsequently grouping the more granular UoMs into larger UoMs, which may ultimately be used for frequency and severity estimation.
Which of the following is not an approach proposed by the Basel II framework to compute operational risk capital?
- A . Basic indicator approach
- B . Factor based approach
- C . Standardized approach
- D . Advanced measurement approach
B
Explanation:
Basel II proposes three approaches to compute operational risk capital – the basic indicator approach (BIA), the standardized approach (TSA) and the advanced measurement approach (AMA). There is no operational risk approach called the factor based approach.
Which of the following steps are required for computing the aggregate distribution for a UoM for operational risk once loss frequency and severity curves have been estimated:
I. Simulate number of losses based on the frequency distribution
II. Simulate the dollar value of the losses from the severity distribution
III. Simulate random number from the copula used to model dependence between the UoMs
IV. Compute dependent losses from aggregate distribution curves
- A . I and II
- B . III and IV
- C . None of the above
- D . All of the above
A
Explanation:
A recap would be in order here: calculating operational risk capital is a multi-step process. First, we fit curves to estimate the parameters of our chosen distribution types for frequency (eg, Poisson) and severity (eg, lognormal). Note that these curves are fitted at the UoM level – which is the lowest level of granularity at which modeling is carried out. Since there are many UoMs, there are many frequency and severity distributions. However what we are interested in is the loss distribution for the entire bank, from which the 99.9th percentile loss can be calculated.
From the multiple frequency and severity distributions we have calculated, this becomes a two step process:
– Step 1: Calculate the aggregate loss distribution for each UoM. Each loss distribution is based upon an underlying frequency and severity distribution.
– Step 2: Combine the multiple loss distributions after considering the dependence between the different UoMs. The ‘dependence’ recognizes that the various UoMs are not completely independent, ie the loss distributions are not additive, and that there is a sort of diversification benefit in the sense that not all types of losses can occur at once and the joint probabilities of the different losses make the sum less than the sum of the parts.
Step 1 requires simulating a number, say n, of the number of losses that occur in a given year from a frequency distribution. Then n losses are picked from the severity distribution, and the total loss for the year is a summation of these losses. This becomes one data point. This process of simulating the number of losses and then identifying that number of losses is carried out a large number of times to get the aggregate loss distribution for a UoM.
Step 2 requires taking the different loss distributions from Step 1 and combining them considering the dependence between the events. The correlations between the losses are described by a ‘copula’, and combined together mathematically to get a single loss distribution for the entire bank. This allows the 99.9th percentile loss to be calculated.
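The following Python sketch illustrates Step 1 only, building the aggregate loss distribution for a single UoM by Monte Carlo; the Poisson and lognormal parameters are hypothetical, and the copula-based combination of UoMs (Step 2) is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a single UoM
freq_lambda = 5             # Poisson frequency: expected number of loss events per year
sev_mu, sev_sigma = 10, 2   # lognormal severity parameters

n_years = 100_000
annual_losses = np.empty(n_years)
for i in range(n_years):
    n_events = rng.poisson(freq_lambda)                      # simulate number of losses
    severities = rng.lognormal(sev_mu, sev_sigma, n_events)  # simulate loss amounts
    annual_losses[i] = severities.sum()                      # one simulated annual loss

var_999 = np.percentile(annual_losses, 99.9)   # 99.9th percentile loss for this UoM
```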
Which of the following are valid approaches to leveraging external loss data for modeling operational risks:
I. Both internal and external losses can be fitted with distributions, and a weighted average approach using these distributions is relied upon for capital calculations.
II. External loss data is used to inform scenario modeling.
III. External loss data is combined with internal loss data points, and distributions fitted to the combined data set.
IV. External loss data is used to replace internal loss data points to create a higher quality data set to fit distributions.
- A . I, II and III
- B . I and III
- C . II and IV
- D . All of the above
A
Explanation:
Internal loss data is generally of the highest quality as it is relevant, and is ‘real’ as it has occurred to the organization. External loss data suffers from a significant limitation in that the risk profiles of the banks to which the data relates are generally not known due to anonymization, and may well not be applicable to the bank performing the calculations. Therefore, replacing internal loss data with external loss data is not a good idea. Statement IV is therefore incorrect.
All the other approaches described are valid approaches for the risk analyst to consider and implement. Therefore statements I, II and III are correct and IV is not.
If the duration of a bond yielding 10% is 6 years, and the volatility of the underlying interest rates is 5% per annum, what is the 10-day VaR at 99% confidence of a bond position comprising just this bond with a value of $10m? Assume there are 250 days in a year.
- A . 233000
- B . 139800
- C . 984000
- D . 279600
B
Explanation:
The annual volatility of the yield in absolute terms is 5% x 10% = 0.005. Over 10 days this scales to 0.005 x SQRT(10/250) = 0.001. The proportional change in the bond’s value is its duration times the change in yield, so the 10-day VaR at 99% confidence is 2.33 x 6 x 0.001 x $10m = $139,800.
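A minimal Python sketch of the duration-based VaR arithmetic above, using a z-multiple of 2.33 to line up with the answer choices:

```python
value = 10_000_000
duration = 6.0
yield_level = 0.10
rate_vol_annual = 0.05 * yield_level            # 5% proportional volatility of a 10% yield
z_99 = 2.33                                     # z-multiple that matches the answer choices

vol_10d = rate_vol_annual * (10 / 250) ** 0.5   # 10-day volatility of the yield
var_10d = z_99 * duration * vol_10d * value
print(round(var_10d))                           # ~139,800
```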
Which of the following is a valid approach to determining the magnitude of a shock for a given risk factor as part of a historical stress testing exercise?
I. Determine the maximum peak-to-trough change in the risk factor over the defined period of the historical event
II. Determine the minimum peak-to-trough change in the risk factor over the defined period of the historical event
III. Determine the total change in the risk factor between the start date and the finish date of the event regardless of peaks and troughs in between
IV. Determine the maximum single day change in the risk factor and multiply by the number of days covered by the stress event
- A . II and IV
- B . I and III
- C . IV only
- D . I, II and IV
B
Explanation:
Stress events rarely play out in a well defined period of time, and looking back it is always difficult to put exact start and end dates on historical stress events. Even after that is done, the question arises as to what magnitude of a change in a particular risk factor (for example interest rates, spreads, or exchange rates) are reasonable to consider for the purposes of the stress test.
Statements I and III correctly identify the two approaches that are acceptable and used in practice – the risk manager can either take the maximum adverse move – from peak to trough – in the risk factor, or alternatively he or she could consider the change in the risk factor from the start of the event to the end as defined for the purposes of the stress test. Between the two, the approach mentioned in statement III is considered slightly superior as it produces more believable shocks.
Statement II is incorrect because we never want to consider the minimum, and statement IV is not correct as it is likely to generate a shock of a magnitude that is not plausible.
Therefore Choice ‘b’ is the correct answer.
Which of the following are valid techniques used when performing stress testing based on hypothetical test scenarios:
I. Modifying the covariance matrix by changing asset correlations
II. Specifying hypothetical shocks
III. Sensitivity analysis based on changes in selected risk factors
IV. Evaluating systemic liquidity risks
- A . I, II, III and IV
- B . II, III and IV
- C . I, II and III
- D . I and II
A
Explanation:
Each of these represents a valid technique for performing stress testing and building hypothetical stress scenarios. Therefore Choice ‘a’ is the correct answer. In practice, elements of each of these techniques are used depending upon the portfolio and the exact situation.
Which of the following need to be assumed to convert a transition probability matrix for a given time period to the transition probability matrix for another length of time:
I. Time invariance
II. Markov property
III. Normal distribution
IV. Zero skewness
- A . I, II and IV
- B . III and IV
- C . I and II
- D . II and III
C
Explanation:
Time invariance refers to all time intervals being similar and identical, regardless of the effects of business cycles or other external events. The Markov property is the assumption that there is no ratings momentum, and that transition probabilities are dependent only upon where the rating currently is and where it is going to. Where it has come from, or what the past changes in ratings have been, have no effect on the transition probabilities. Rating agencies generally provide transition probability matrices for a given period of time, say a year. The risk analyst may need to convert these into matrices for say 6 months, 2 years or whatever time horizon he or she is interested in. Simplifying assumptions that allow him to do so using simple matrix multiplication include these two assumptions – time invariance and the Markov property. Thus Choice ‘c’ is the correct answer. The other choices (normal distribution and zero skewness) are non-sensical in this context.
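To make the matrix-multiplication point concrete, here is a small Python sketch using a hypothetical three-state transition matrix; under time invariance and the Markov property, multi-period matrices are just matrix powers of the one-period matrix:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# Hypothetical 1-year transition matrix for three states: A, B and Default
P_1y = np.array([
    [0.90, 0.08, 0.02],
    [0.10, 0.80, 0.10],
    [0.00, 0.00, 1.00],   # default is an absorbing state
])

# With time invariance and the Markov property, the 2-year matrix is simply P squared
P_2y = np.linalg.matrix_power(P_1y, 2)

# and a 6-month matrix is the matrix square root of the 1-year matrix
# (in practice, small negative entries may appear and need adjustment)
P_6m = fractional_matrix_power(P_1y, 0.5)
```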
Which of the following formulae describes Marginal VaR for a portfolio p, where V_i is the value of the i-th asset in the portfolio? (All other notation and symbols have their usual meaning.)
A) MVaR_i = ∂VaR_p / ∂V_i
B) MVaR_i = VaR_p x β_i / V_p
C) MVaR_i = Z x Cov(R_i, R_p) / σ_p
D) All of the above
- A . Option A
- B . Option B
- C . Option C
- D . Option D
D
Explanation:
Marginal VaR of a component of a portfolio is the change in the portfolio VaR from a $1 change in the value of the component. It helps a risk analyst who may be trying to identify the best way to influence VaR by changing the components of the portfolio. Marginal VaR is also important for calculating component VaR (for VaR disaggregation), as component VaR is equal to the marginal VaR multiplied by the value of the component in the portfolio. Marginal VaR is by definition the derivative of the portfolio VaR with respect to the value of component i. This is reflected in Choice ‘a’ above. Using the definitions and relationships between correlation, covariance, beta and volatility of the portfolio and/or the component, we can show that the other two choices are also equivalent to Choice ‘a’. Therefore all the choices presented are correct.
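A small Python sketch of the covariance-based form of marginal VaR, and of the fact that component VaRs (marginal VaR times component value) add up to the portfolio VaR; the return parameters and position values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical daily returns for two assets and their dollar positions
cov = [[0.0004, 0.0001], [0.0001, 0.0009]]
returns = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)
values = np.array([6_000_000, 4_000_000])

z = 2.326                                   # 99% one-tailed z-multiple
pnl = returns @ values                      # daily portfolio P&L
sigma_p = pnl.std()

# Marginal VaR_i = z * Cov(R_i, portfolio P&L) / sigma_p  (VaR change per $1 of asset i)
marginal_var = z * np.array([np.cov(returns[:, i], pnl)[0, 1] for i in range(2)]) / sigma_p

# Component VaR_i = marginal VaR_i * V_i, and the components add up to the portfolio VaR
component_var = marginal_var * values
print(round(component_var.sum()), round(z * sigma_p))   # approximately equal
```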
The definition of operational risk per Basel II includes which of the following:
I. Risk of loss resulting from inadequate or failed internal processes, people and systems or from external events
II. Legal risk
III. Strategic risk
IV. Reputational risk
- A . I, II, III and IV
- B . II and III
- C . I and III
- D . I and II
D
Explanation:
Operational risk as defined in Basel II specifically excludes strategic and reputational risk.
Therefore Choice ‘d’ is the correct answer.
Note that Basel II defines operational risk as follows:
Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.
The daily VaR of an investor’s commodity position is $10m. The annual VaR, assuming daily returns are independent, is ~$158m (using the square root of time rule).
Which of the following statements are correct?
I. If daily returns are not independent and show mean-reversion, the actual annual VaR will be higher than $158m.
II. If daily returns are not independent and show mean-reversion, the actual annual VaR will be lower than $158m.
III. If daily returns are not independent and exhibit trending (autocorrelation), the actual annual VaR will be higher than $158m.
IV. If daily returns are not independent and exhibit trending (autocorrelation), the actual annual VaR will be lower than $158m.
- A . I and IV
- B . I and III
- C . II and III
- D . II and IV
C
Explanation:
In the case of mean reversion, the actual VaR would be lower than that estimated using the square root of time rule. This is because gains over a period would be followed by losses so that the price can revert to the mean. In such cases, the autocorrelation between subsequent periods is effectively negative. This means the combined VaR over the periods would be lower.
In the case of positive autocorrelation, the actual VaR would be higher than that estimated using the square root of time rule for exactly the opposite reason than that described for the mean-reverting case.
(Recall that Variance (A + B) = Variance(A) + Variance(B) + 2*Correlation*StdDev(A)*StdDev(B). In cases where correlation is zero, the variance can simply be added together (which is the case for iid observations). In cases where the correlation is negative, the combined variance (and therefore standard deviation and also VaR) will be lower; and where correlation is positive, the combined variance (and therefore standard deviation and also VaR) will be higher.)
Therefore statement II is correct, and so is statement III. Choice ‘c’ is the correct answer.
Which of the following is a most complete measure of the liquidity gap facing a firm?
- A . Residual liquidity gap
- B . Liquidity at Risk
- C . Marginal liquidity gap
- D . Cumulative liquidity gap
A
Explanation:
Marginal liquidity gap measures the expected net change in liquidity over, say, a day. It is just equal to the liquidity inflow minus liquidity outflow. The cumulative liquidity gap measures the aggregate change in liquidity from a point in time, in other words it is just the summation of the marginal liquidity gap for each of the days included in the period under consideration. The residual liquidity gap goes one step further and adds available ‘opening balance’ of liquidity to the cumulative liquidity gap to reveal the days or times when the net liquidity is most at risk.
Liquidity at Risk measures the expected time to survival at a certain confidence level applied to the firm’s cash flows – and is not a measure of the liquidity gap. Therefore Choice ‘a’ is the correct answer.
Which of the following statements is a correct description of the phrase present value of a basis point?
- A . It refers to the present value impact of 1 basis point move in an interest rate on a fixed income security
- B . It refers to the discounted present value of 1/100th of 1% of a future cash flow
- C . It is another name for duration
- D . It is the principal component representation of the duration of a bond
A
Explanation:
This is a trick question, no great science to it. Remember that the ‘present value of a basis point’ refers to PV01, which is the same as BPV (basis point value) referred to in the PRMIA handbook. In other textbooks, the same term is also variously called ‘DV01’ (dollar value of a basis point). Remember these other terms too.
PV01, or the present value of a basis point, is the change in the value of a bond (or other fixed income security) from a 1 basis point change in the yield. PV01 is calculated as (Price * Modified Duration/10,000).
Therefore Choice ‘a’ is the correct answer.
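A one-line calculation really, but for completeness here is a Python sketch of PV01 using hypothetical price and duration inputs:

```python
price = 102.50            # price per 100 of face value (hypothetical)
modified_duration = 6.2   # in years (hypothetical)

pv01 = price * modified_duration / 10_000   # value change for a 1 basis point yield move
print(pv01)                                 # 0.06355 per 100 of face value
```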
Calculate the 99% 1-day Value at Risk of a portfolio worth $10m with expected returns of 10% annually and volatility of 20%.
- A . 290218
- B . 2326000
- C . 126491
- D . 294218
A
Explanation:
Be wary of questions asking you to calculate VaR where the mean or expected returns are different from zero. The VaR formula of z-value times standard deviation needs to have an adjustment for the expected return [ie use VaR = z-value times standard deviation minus expected return]. In this case, the standard deviation for 1 day for the portfolio is =SQRT(1/250)*20%*$10m = $126,491. The VaR is therefore (2.326 * $126,491) – ($10,000,000 * 10% * 1/250) = $290,218.
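The same calculation in a short Python sketch (the z-multiple of 2.326 is the one used in the explanation):

```python
value = 10_000_000
annual_return, annual_vol = 0.10, 0.20
z_99 = 2.326                                          # z-multiple used in the explanation

daily_sigma = (1 / 250) ** 0.5 * annual_vol * value   # ~126,491
daily_mean = value * annual_return / 250              # ~4,000

var_99_1d = z_99 * daily_sigma - daily_mean           # VaR adjusted for the expected return
print(round(var_99_1d))                               # ~290,218
```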
If E denotes the expected value of a loan portfolio at the end of one year and U the value of the portfolio in the worst case scenario at the 99% confidence level, which of the following expressions correctly describes the economic capital required in respect of credit risk?
- A . E – U
- B . U/E
- C . U
- D . E
A
Explanation:
Economic capital in respect of credit risk is intended to absorb unexpected losses. Unexpected losses are the losses above and beyond expected losses and up to the level of confidence that economic capital is being calculated for. The capital required to cover unexpected losses in this case is E – U, and therefore Choice ‘a’ is the correct answer. This question does raise an important point – are expected losses a part of economic capital, or are they not? Different text books say different things, and sometimes they say both the things. I have tried to take an approach that uses what I read in the PRMIA handbook.
This writeup – http://www.riskprep.com/all-tutorials/37-exam-3/111-credit-var-an-intuitive-understanding – may help clarify things further.
Which of the following are valid objectives of a reverse stress test:
I. Ensure that a firm can survive for long enough after risks have materialized for it to either regain market confidence, restructure or be sold, or be closed down in an orderly manner,
II. Discover the vulnerabilities of the current business plan,
III. Better integrate business and capital planning,
IV. Create a ‘zero-failure’ environment at the systemic level in the financial sector
- A . I and IV
- B . I, II and III
- C . II and III
- D . All of the above
B
Explanation:
Statement I is true. According to the statement CP08/24: Stress and scenario testing (December 2008) issued by the FSA in the UK, an underlying objective of reverse stress tests is to ensure that a firm can survive long enough after risks have crystallized for one of the following to occur:
– the market decides that its lack of confidence is unfounded and recommences transacting with the firm;
– the firm down-sizes and re-structures its business;
– the firm is taken over, or its business is transferred in an orderly manner; or
– public authorities take the firm over, or wind down its business in an orderly manner.
Statements II and III are true. The same statement clarifies the intention of the reverse stress testing requirement, which is to encourage firms to: explore more fully the vulnerabilities of their business model (including ‘tail risks’); make decisions that better integrate business and capital planning; and improve their contingency planning.
Statement IV is incorrect. The same statement clarifies that the introduction of a reverse stress test requirement should not be interpreted as indicating that the FSA is pursuing a ‘zero-failure’ policy; in the FSA’s view, such a policy is neither possible nor desirable.
Since statements I, II and III are valid objectives of a reverse stress test, Choice ‘b’ is the correct answer.
The VaR of a portfolio at the 99% confidence level is $250,000 when mean return is assumed to be zero. If the assumption of zero returns is changed to an assumption of returns of $10,000, what is the revised VaR?
- A . 240000
- B . 226740
- C . 273260
- D . 260000
A
Explanation:
The exact formula for VaR is VaR = -(Zσ + μ), where Z is the z-multiple for the desired confidence level, σ is the standard deviation of returns, and μ is the mean return. Z is always a negative number (at least as long as the desired confidence level is greater than 50%), and μ is often assumed to be zero because, over the short time periods for which market risk VaR is calculated, its value is very close to zero.
Therefore in practice the formula for VaR just becomes -Zσ, and since Z is negative, we normally just multiply the Z factor without the negative sign by the standard deviation to get the VaR.
For this question, there are two ways to get the answer. If we use the formula, we know that -Zσ = 250,000 (as μ = 0), and therefore -Zσ – μ = 250,000 – 10,000 = $240,000.
The other, easier way to think about this is that if the mean changes, the shape of the distribution stays exactly the same and the entire distribution shifts to the right by $10,000 as the mean moves up by $10,000. The VaR cutoff, which was previously at –$250,000 on the graph, therefore also moves up by $10,000 to –$240,000, making $240,000 the correct answer.
The other choices are intended to confuse by multiplying the z-factor for the 99% confidence level with 10,000 etc.
A Bank Holding Company (BHC) is invested in an investment bank and a retail bank. The BHC defaults for certain if either the investment bank or the retail bank defaults. However, the BHC can also default on its own without either the investment bank or the retail bank defaulting. The investment bank and the retail bank’s defaults are independent of each other, with a probability of default of 0.05 each. The BHC’s probability of default is 0.11.
What is the probability of default of both the BHC and the investment bank? What is the probability of the BHC’s default provided both the investment bank and the retail bank survive?
- A . 0.0475 and 0.10
- B . 0.11 and 0
- C . 0.08 and 0.0475
- D . 0.05 and 0.0125
D
Explanation:
Since the BHC always fails when the investment bank fails, the joint probability of default of the two is merely the probability of the investment bank failing, ie 0.05.
The probability of just the BHC failing, given that both the investment bank and the retail bank have survived, is 0.11 – (0.05 + 0.05 – 0.05*0.05) = 0.0125. (The easiest way to see this is a Venn diagram: the area of the largest circle is 0.11, and inside it sit two intersecting circles of area 0.05 each, with their intersection equal to 0.05*0.05. We need the area inside the large BHC circle but outside the two smaller circles.)
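A quick numerical check of the reasoning above (only the probabilities given in the question are used):

```python
p_ib = 0.05    # investment bank PD
p_rb = 0.05    # retail bank PD, independent of the investment bank
p_bhc = 0.11   # BHC PD; the BHC fails whenever either subsidiary fails

# BHC defaults whenever the investment bank defaults, so the joint probability
# is simply the investment bank's PD
p_bhc_and_ib = p_ib                            # 0.05

# Probability that at least one subsidiary defaults (independence assumed)
p_any_subsidiary = p_ib + p_rb - p_ib * p_rb   # 0.0975

# BHC defaults on its own, ie both subsidiaries survive
p_bhc_alone = p_bhc - p_any_subsidiary         # 0.0125
print(p_bhc_and_ib, p_bhc_alone)
```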
When pricing credit risk for an exposure, which of the following is a better measure than the others:
- A . Expected Exposure (EE)
- B . Notional amount
- C . Potential Future Exposure (PFE)
- D . Mark-to-market
A
Explanation:
Exposure for derivative instruments can vary significantly over the lifetime of the instrument, depending upon how the market moves. The potential future exposure represents the extremes, not the most likely outcome. The expected exposure is the most suitable measure for pricing the credit risk. Over time, as multiple transactions are entered into, the expectation (or the mean) will be realized – though individual transactions may have more or less by way of exposure.
The notional amount may not be relevant, though for loans it may be the most important contributor to the expected exposure. The current mark-to-market represents the exposure at a given point in time, but future mark-to-market values cannot be predicted, and the current value alone cannot be used to price the credit risk.
Which of the following statements are true:
I. The sum of unexpected losses for individual loans in a portfolio is equal to the total unexpected loss for the portfolio.
II. The sum of unexpected losses for individual loans in a portfolio is less than the total unexpected loss for the portfolio.
III. The sum of unexpected losses for individual loans in a portfolio is greater than the total unexpected loss for the portfolio.
IV. The unexpected loss for the portfolio is driven by the unexpected losses of the individual loans in the portfolio and the default correlation between these loans.
- A . I and II
- B . I, II and III
- C . III and IV
- D . II and IV
C
Explanation:
Unexpected losses (UEL) for individual loans in a portfolio will always sum to more than the total unexpected loss for the portfolio (unless all the loans are correlated in such a way that they default together). This is akin to the ‘diversification effect’ in market risk: not all the obligors will default together, so the UEL for the portfolio will always be less than the sum of the UELs for the individual loans. Therefore statement III is true. This diversification effect depends on the default correlations between the obligors; where the probability of the obligors defaulting together is low, the UEL for the portfolio will be much less than the sum of the UELs for the individual loans. Hence statement IV is true. Statements I and II are false for the reasons explained above.
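As an illustrative sketch (hypothetical numbers, not from the question), the portfolio unexpected loss can be aggregated from the standalone UELs and the pairwise default correlations as UEL_portfolio = sqrt(sum over i,j of rho_ij x UEL_i x UEL_j), which is never more than the simple sum of the standalone UELs:

```python
import numpy as np

# Hypothetical standalone unexpected losses (in $m) for three loans
ul = np.array([2.0, 3.0, 1.5])

# Assumed pairwise default correlation matrix
rho = np.array([[1.0, 0.3, 0.2],
                [0.3, 1.0, 0.1],
                [0.2, 0.1, 1.0]])

ul_portfolio = np.sqrt(ul @ rho @ ul)            # sqrt(sum_ij rho_ij * UL_i * UL_j)
print(round(float(ul_portfolio), 2), ul.sum())   # ~4.58 vs 6.5: the diversification effect
```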
For an equity portfolio valued at V whose beta is β, the value at risk at a 99% level of confidence is represented by which of the following expressions? Assume σ represents the market volatility.
- A . 2.326 x β x V x σ
- B . 1.64 x V x σ / β
- C . 1.64 x β x V x σ
- D . 2.326 x V x σ / β
A
Explanation:
For the PRM exam, it is important to remember the z-multiples for both 99% and 95% confidence levels (these are 2.33 and 1.64 respectively).
The value at risk for an equity portfolio is its standard deviation multiplied by the appropriate z factor for the given confidence level. If we knew the standard deviation, VaR would be easy to calculate. The standard deviation can be derived using a correlation matrix for all the stocks in the portfolio, which is not a trivial task. So we simplify the calculation using the CAPM and essentially say that the standard deviation of the portfolio is equal to the beta of the portfolio multiplied by the standard deviation of the market. Therefore VaR in this case is equal to Beta x Mkt Std Dev x Value x z-factor, and therefore Choice ‘a’ is the correct answer.
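A minimal sketch of the beta-based shortcut described above; the portfolio value, beta and market volatility are assumed, and only the 2.326 z-multiple comes from the question:

```python
V = 10_000_000       # portfolio value (assumed)
beta = 1.2           # portfolio beta (assumed)
sigma_market = 0.02  # market volatility over the VaR horizon (assumed)
z_99 = 2.326

var_99 = z_99 * beta * V * sigma_market   # 2.326 x beta x V x sigma
print(f"99% VaR = ${var_99:,.0f}")
```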
There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon.
If the default correlation is 25%, what is the one year expected loss on this portfolio?
- A . $1.38m
- B . $11m
- C . $5.26m
- D . $5.5m
D
Explanation:
We will need to calculate the joint probability distribution of defaults in the portfolio. The probability of the joint default of both A and B is given by P(AB) = Default Correlation x SQRT(pA x (1 – pA) x pB x (1 – pB)) + pA x pB = 0.25 x SQRT(0.03 x 0.97 x 0.08 x 0.92) + 0.03 x 0.08 = 1.4%.
The marginal probabilities (ie the standalone probabilities of default of the two bonds) are known, and once the joint default probability is calculated we can fill in the rest of the joint distribution (only A defaults, only B defaults, neither defaults). We then multiply each probability by the loss under that scenario and add them up to get the total expected loss.
Doing so gives an expected loss of $5.5m, and therefore the correct answer is Choice ‘d’. (Because expectation is linear, the expected loss is simply 0.03 x $50m + 0.08 x $50m = $5.5m; the default correlation affects the dispersion of losses, not the expected loss.)
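A sketch of the joint-distribution calculation described above, using the figures from the question (zero recovery is assumed, so the loss on default equals the $50m market value):

```python
from math import sqrt

p_a, p_b = 0.03, 0.08     # one-year PDs
rho = 0.25                # default correlation
loss_a = loss_b = 50.0    # loss on default in $m (zero recovery assumed)

# Joint default probability from the default correlation formula
p_ab = rho * sqrt(p_a * (1 - p_a) * p_b * (1 - p_b)) + p_a * p_b

# Remaining cells of the joint distribution
p_a_only = p_a - p_ab
p_b_only = p_b - p_ab

expected_loss = (p_ab * (loss_a + loss_b)
                 + p_a_only * loss_a
                 + p_b_only * loss_b)
print(round(p_ab, 4), round(expected_loss, 2))   # ~0.014, $5.5m
```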
An assumption of normality when returns data have fat tails leads to:
I. underestimation of VaR at high confidence levels
II. overestimation of VaR at low confidence levels
III. overestimation of VaR at high confidence levels
IV. underestimation of VaR at low confidence levels
- A . I and II
- B . I, II, III and IV
- C . I, II and III
- D . II, III and IV
A
Explanation:
When returns are non-normal and have fat tails, an assumption of normality in returns leads to underestimation of VaR at high confidence levels. At the same time, at lower confidence levels the normal distribution may give higher VaR estimates. Therefore Choice ‘a’ is correct. The other choices are incorrect. Also refer to the tutorial about VaR and heavy tails.
If an institution has $1000 in assets, and $800 in liabilities, what is the economic capital required to avoid insolvency at a 99% level of confidence? The VaR in respect of the assets at 99% confidence over a one year period is $100.
- A . 200
- B . 1000
- C . 100
- D . 1100
C
Explanation:
The economic capital required to avoid insolvency is just the asset VaR, ie $100. This means that if the worst case losses are realized, the institution would need to have a buffer equivalent to those losses which in this case will be $100, and this buffer is the economic capital.
The actual value of liabilities is not relevant as they are considered ‘riskless’ from the institution’s point of view, ie they will be taken at full value. In this particular case, the institution has $200 in capital which is more than the economic capital required. Therefore Choice ‘c’ is the correct answer.
The CDS rate on a defaultable bond is approximated by which of the following expressions:
- A . Hazard rate / (1 – Recovery rate)
- B . Loss given default x Default hazard rate
- C . Credit spread x Loss given default
- D . Hazard rate x Recovery rate
B
Explanation:
The CDS rate is approximated by the [Loss given default x Default hazard rate]. Thus Choice ‘b’ is the correct answer.
Note that this is also equal to the credit spread on the reference bond over the risk free rate. Therefore credit spreads and CDS rates are generally the same. Also, ‘loss given default’ is nothing but (1 – Recovery rate). This can be substituted in the formula for the credit spread to get an alternative expression that directly refers to the recovery rate. Therefore all other choices are incorrect.
Which of the following measures can be used to reduce settlement risks:
- A . escrow arrangements using a central clearing house
- B . increasing the timing differences between the two legs of the transaction
- C . providing for physical delivery instead of netted cash settlements
- D . all of the above
A
Explanation:
Increasing the timing differences between the two legs of the transaction will increase settlement risk, not reduce it. Using escrow arrangements, such as central clearing houses to settle transactions (eg the DTCC in the United States), reduces settlement risk. Cash settlements based on netting arrangements reduce settlement risk, while physical delivery combined with gross cash payments increases it. Therefore Choice ‘a’ is the correct answer.
For a FX forward contract, what would be the worst time for a counterparty to default (in terms of the maximum likely credit exposure)
- A . At maturity
- B . Roughly three-quarters of the way towards maturity
- C . Indeterminate from the given information
- D . Right after inception
A
Explanation:
With the passage of time, the range of possible values the FX contract can take increases. Therefore the maximum value of the contract, which is when the credit risk would be maximum, would be at maturity. (Note that this is different than an interest rate swap whose value at maturity approaches zero.) Therefore Choice ‘a’ is the correct answer and the others are incorrect.
Which of the following is the most accurate description of EPE (Expected Positive Exposure):
- A . The maximum average credit exposure over a period of time
- B . The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date
- C . Weighted average of the future positive expected exposure across a time horizon.
- D . The average of the distribution of positive exposures at a specified future date
C
Explanation:
When a derivative transaction is entered into, its value generally is close to zero. Over time, as the value of the underlying changes, the transaction acquires a positive or negative value. It is not possible to predict the future value of the transaction in advance, however distributional assumptions can be made and potential exposure can be measured in multiple ways. Of all the possible future exposures, it is generally positive exposures that are relevant to credit risk because that is the only situation where the bank may lose money from a default of the counterparty.
The maximum exposure possible over the life of the transaction (generally taken at a high quantile, eg the 97.5th percentile) is the ‘Potential Future Exposure’, or PFE.
The average of the distribution of positive exposures at a specified date before the longest trade in the portfolio is called ‘Expected Exposure’, or EE.
The expected positive exposure, calculated as the weighted average of the future positive Expected Exposure across a time horizon, is called the EPE, or the ‘Expected Positive Exposure’.
The price that would be received to sell an asset or paid to transfer a liability in an orderly transaction between market participants at the measurement date – is the ‘fair value’, as defined under FAS 157.
Therefore the correct answer is that EPE is the weighted average of the future positive expected exposure across a time horizon.
If the odds of default are 1:5, what is the probability of default?
- A . 16.67%
- B . 20.00%
- C . 12.00%
- D . 50.00%
A
Explanation:
Odds are the ratio of the probability that an event occurs to the probability that it does not occur.
If odds are H, then p = H/(1 + H) and H = p/(1-p). In this case the odds are 1:5, or 1/5, therefore the correct answer is Choice ‘a’, equal to (1/5)/(1 + 1/5) = 1/6 = 16.67%. All other choices are incorrect.
Which of the following decisions need to be made as part of laying down a system for calculating VaR:
I. The confidence level and horizon
II. Whether portfolio valuation is based upon a delta-gamma approximation or a full revaluation
III. Whether the VaR is to be disclosed in the quarterly financial statements
IV. Whether a 10 day VaR will be calculated based on 10-day return periods, or for 1-day and scaled to 10 days
- A . I and III
- B . II and IV
- C . I, II and IV
- D . All of the above
C
Explanation:
While conceptually VaR is a fairly straightforward concept, a number of decisions need to be made to select between the different choices available for the exact mechanism to be used for the calculations.
The Basel framework requires banks to estimate VaR at the 99% confidence level over a 10 day horizon. Yet this is a decision that needs to be explicitly made and documented. Therefore ‘I’ is a correct choice.
At various stages of the calculations, portfolio values need to be determined. The valuation can be done using a ‘full valuation’, where each position is explicitly valued; or the portfolio(s) can be reduced to a handful of risk factors, and risk sensitivities such as delta, gamma, convexity etc be used to value the portfolio. The decision between the two approaches is generally based on computational efficiency, complexity of the portfolio, and the degree of exactness desired. ‘II’ therefore is one of the decisions that needs to be made.
The decision as to disclosing the VaR in financial filings comes after the VaR has been calculated, and is unrelated to the VaR calculation system a bank needs to set up. ‘III’ is therefore not a correct answer.
Though the Basel framework requires a 10-day VaR to be calculated, it also allows the calculation of a 1-day VaR that is scaled to 10 days using the square root of time rule. The bank needs to decide whether it wishes to scale a 1-day VaR number, or compute VaR for a 10-day period to begin with. ‘IV’ therefore is a decision to be made when setting up the VaR system.
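A one-line sketch of the square-root-of-time scaling referred to in ‘IV’; the 1-day VaR figure here is assumed:

```python
from math import sqrt

var_1day = 1_000_000               # assumed 1-day VaR
var_10day = var_1day * sqrt(10)    # square root of time rule
print(round(var_10day))            # ~3,162,278
```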
What is the 1-day VaR at the 99% confidence interval for a cash flow of $10m due in 6 months time? The risk free interest rate is 5% per annum and its annual volatility is 15%. Assume a 250 day year.
- A . 5500
- B . 1744500
- C . 109031
- D . 85123
A
Explanation:
The $10m cash flow due in 6 months is equivalent to a bond with a present value of 10m/(1.05)^0.5 =$9,759,000. Essentially, the question requires us to calculate the VaR of a bond.
The VaR of a fixed income instrument is given by Duration x Interest Rate x Volatility of the interest rate x z-factor corresponding to the confidence level.
In this case, since the answer choices only require the value closest to the correct answer, we can approximate the bond’s duration as 0.5 years (its modified duration would be 0.5/1.05 = 0.476 years). The VaR is then given by 0.5 * 5% * 15% * 2.326 * sqrt(1/250) * 9,759,000 = $5,384 (or about $5,127 using the modified duration), which is closest to $5,500. Therefore Choice ‘a’ is the correct answer. Note that we have to multiply by sqrt(1/250) as the given volatility is annual and the question asks for a daily VaR. All other answers are incorrect.
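A sketch reproducing the calculation above with the question’s inputs, showing both the 0.5-year duration shortcut and the modified-duration variant:

```python
from math import sqrt

cash_flow = 10_000_000
r = 0.05                      # risk free rate (annual)
annual_yield_vol = 0.15       # proportional volatility of the yield
z_99 = 2.326
t = 0.5                       # years until the cash flow

pv = cash_flow / (1 + r) ** t             # ~$9,759,000
mod_duration = t / (1 + r)                # ~0.476 years
daily_yield_vol = annual_yield_vol * sqrt(1 / 250)

var_duration = t * r * daily_yield_vol * z_99 * pv                # using 0.5 years
var_mod_duration = mod_duration * r * daily_yield_vol * z_99 * pv # using modified duration
print(round(var_duration), round(var_mod_duration))   # ~5384 and ~5127, closest to $5,500
```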
The frequency distribution for operational risk loss events can be modeled by which of the following distributions:
I. The binomial distribution
II. The Poisson distribution
III. The negative binomial distribution
IV. The omega distribution
- A . I, II and III
- B . I and III
- C . I, III and IV
- D . I, II, III and IV
A
Explanation:
The binomial, Poisson and the negative binomial distributions can all be used to model the loss event frequency distribution. The omega distribution is not used for this purpose, therefore Choice ‘a’ is the correct answer.
Also note that the negative binomial distribution provides the best model fit because it has more parameters than the binomial or the Poisson. However, in practice the Poisson distribution is most often used due to reasons of practicality and the fact that the key model risk in such situations does not arise from the choice of an incorrect underlying distribution.
Which of the following are considered properties of a ‘coherent’ risk measure:
I. Monotonicity
II. Homogeneity
III. Translation Invariance
IV. Sub-additivity
- A . II and III
- B . II and IV
- C . I and III
- D . All of the above
D
Explanation:
All of the properties described are the properties of a ‘coherent’ risk measure. Monotonicity means that if a portfolio’s future value is expected to be greater than that of another portfolio, its risk should be lower than that of the other portfolio. For example, if the expected return of an asset (or portfolio) is greater than that of another, the first asset must have a lower risk than the other. Another example: between two options if the first has a strike price lower than the second, then the first option will always have a lower risk if all other parameters are the same. VaR satisfies this property.
Homogeneity is easiest explained by an example: if you double the size of a portfolio, the risk doubles. The linear scaling property of a risk measure is called homogeneity. VaR satisfies this property.
Translation invariance means adding riskless assets to a portfolio reduces total risk. So if cash (which has zero standard deviation and zero correlation with other assets) is added to a portfolio, the risk goes down. A risk measure should satisfy this property, and VaR does. Sub-additivity means that the total risk for a portfolio should be less than the sum of its parts. This is a property that VaR satisfies most of the time, but not always. As an example, VaR may not be sub-additive for portfolios that have assets with discontinuous payoffs close to the VaR cutoff quantile.
Which of the following statements are true with respect to stress testing:
I. Stress testing results in a dollar estimate of losses
II. The results of stress testing can replace VaR as a measure of risk as they are better grounded in reality
III. Stress testing provides an estimate of losses at a desired level of confidence
IV. Stress testing based on factor shocks can allow modeling extreme events that have not occurred in the past
- A . I and IV
- B . I, II and IV
- C . II and III
- D . II, III and IV
A
Explanation:
Any stress test is conducted with a view to producing a dollar estimate of losses, therefore statement I is correct. However, these numbers do not come with any probabilities or confidence levels, unlike VaR, so statement III is incorrect. Stress testing can complement VaR, but not replace it, therefore statement II is not correct. Statement IV is correct, as stress tests can be based on both actual historical events and simulated factor shocks (eg a factor, such as interest rates, moving by, say, 10 standard deviations). Therefore Choice ‘a’ is correct.
For a US based investor, what is the 10-day value-at risk at the 95% confidence level of a long spot position of EUR 15m, where the volatility of the underlying exchange rate is 16% annually. The current spot rate for EUR is 1.5. (Assume 250 trading days in a year).
- A . 526400
- B . 2632000
- C . 1184400
- D . 5922000
C
Explanation:
The VaR for a spot FX position is merely a function of the standard deviation of the exchange rate. If V is the value of the position (in this case, EUR 15m x 1.5 = USD 22.5m), z the appropriate z value associated with the level of confidence desired, and σ the standard deviation of the exchange rate over the holding period, the VaR is given by z x σ x V.
In this case, the 10-day standard deviation is given by SQRT(10/250)*16%. Therefore the VaR is =1.645*15*1.5*(16%*SQRT(10/250)) = USD 1.1844m. Choice ‘c’ is the correct answer.
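The same calculation as a short sketch (all inputs come from the question):

```python
from math import sqrt

position_eur = 15_000_000
spot = 1.50
annual_vol = 0.16
z_95 = 1.645
horizon_days, trading_days = 10, 250

value_usd = position_eur * spot                              # USD 22.5m
sigma_10d = annual_vol * sqrt(horizon_days / trading_days)   # 16% x 0.2 = 3.2%
var_95 = z_95 * value_usd * sigma_10d
print(round(var_95))                                         # ~1,184,400
```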
Which of the following credit risk models focuses on default alone and ignores credit migration when assessing credit risk?
- A . CreditPortfolio View
- B . The contingent claims approach
- C . The CreditMetrics approach
- D . The actuarial approach
D
Explanation:
The correct answer is Choice ‘d’. Briefly: the actuarial approach (exemplified by CreditRisk+) models only the frequency and severity of default events and ignores changes in credit quality short of default, ie credit migration is not considered. By contrast, the CreditMetrics approach is built around a ratings transition matrix and therefore explicitly models migration; the CreditPortfolio View approach conditions those transition probabilities on macroeconomic factors; and the contingent claims (Merton-style) approach models the value of the firm’s assets relative to its liabilities, from which both deterioration in credit quality and default can be inferred.
There are two bonds in a portfolio, each with a market value of $50m. The probability of default of the two bonds are 0.03 and 0.08 respectively, over a one year horizon.
If the probability of the two bonds defaulting simultaneously is 1.4%, what is the default correlation between the two?
- A . 0%
- B . 100%
- C . 40%
- D . 25%
D
Explanation:
Probability of the joint default of both A and B = Default Correlation x SQRT(pA x (1 – pA) x pB x (1 – pB)) + pA x pB
We know all the numbers except default correlation, and we can solve for it. Default Correlation*SQRT(0.03*(1 – 0.03)*0.08*(1 – 0.08)) + 0.03*0.08 = 0.014. Solving, we get default correlation = 25%
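The same inversion as a short sketch (inputs from the question):

```python
from math import sqrt

p_a, p_b = 0.03, 0.08
p_joint = 0.014

# Invert: p_joint = rho * sqrt(p_a(1-p_a) * p_b(1-p_b)) + p_a * p_b
rho = (p_joint - p_a * p_b) / sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))
print(round(rho, 2))    # ~0.25, ie 25%
```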
Which of the following best describes a ‘break clause’?
- A . A break clause gives either party to a transaction the right to terminate the transaction at market price at future date(s)
- B . A break clause determines the process by which amounts due on early termination will be determined
- C . A break clause describes rights and obligations when the derivative contract is broken
- D . A break clause sets out the conditions under which the transaction will be terminated upon non-compliance with the ISDA MA
A
Explanation:
A break clause, also called a ‘mutual put’, gives either party the right to terminate a transaction at market price on a given date, or dates, in the future. These are usually used in longer dated transactions, eg 10 years and over. For example, a 15-year swap might have a mutual put in year 5, and every 2 years thereafter. All other choices are incorrect.
Under the CreditPortfolio View approach to credit risk modeling, which of the following best describes the conditional transition matrix:
- A . The conditional transition matrix is the unconditional transition matrix adjusted for the state of the economy and other macro economic factors being modeled
- B . The conditional transition matrix is the transition matrix adjusted for the risk horizon being different from that of the transition matrix
- C . The conditional transition matrix is the unconditional transition matrix adjusted for probabilities of defaults
- D . The conditional transition matrix is the transition matrix adjusted for the distribution of the firms’ asset returns
A
Explanation:
Under the CreditPortfolio View approach, the credit rating transition matrix is adjusted for the state of the economy in such a way as to increase the probability of defaults when the economy is not doing well, and vice versa. Therefore Choice ‘a’ is the correct answer. The other choices represent nonsensical options.
Which of the following are elements of ‘group risk’:
I. Market risk
II. Intra-group exposures
III. Reputational contagion
IV. Complex group structures
- A . II, III and IV
- B . II and III
- C . I and IV
- D . I and II
A
Explanation:
The term ‘group risk’ has been defined in the FSA document 08/24 on stress testing as the risk that a firm may be adversely affected by an occurrence (financial or non-financial) in another group entity or an occurrence that affects the group as a whole.
These risks may occur through:
– reputational contagion,
– financial contagion,
– leveraging,
– double or multiple gearing,
– concentrations and large exposures (particularly intra-group).
Thus, the insurance sector may be considered a group, and a firm may suffer just because another group firm has had losses or reputational issues.
The FSA statement goes on to identify some elements of group risk as follows:
– intra-group exposures (credit or operational exposures through outsourcing or service arrangements, as well as more standard business exposures);
– concentration risks (from credit, market or insurance risks which could put a strain on capital resources across entities simultaneously);
– contagion (reputational damage, operational or financial pressures); and
– complex group structures (with dependencies, complex split of responsibilities and accountabilities).
Therefore Choice ‘a’ is the correct answer and the rest of the choices are incorrect.
A bank extends a loan of $1m to a home buyer to buy a house currently worth $1.5m, with the house serving as the collateral. The volatility of returns (assumed normally distributed) on house prices in that neighborhood is assessed at 10% annually. The expected probability of default of the home buyer is 5%.
What is the probability that the bank will recover less than the principal advanced on this loan; assuming the probability of the home buyer’s default is independent of the value of the house?
- A . More than 1%
- B . Less than 1%
- C . More than 5%
- D . 0
B
Explanation:
The bank will not be able to recover the principal advanced on this loan if both the home buyer defaults, and the house value falls to less than $1m, ie the price moves adversely by more than $500k, which is $-500k/$150k = -3.33. (Note that 150k is the 1 year volatility in dollars, ie $1.5m * 10%).
The probability of both these things happening together is just the product of the two probabilities, one of which we know to be 5%. The other is also certainly a small number, and intuitively it is clear that the probability of both the things happening together will be less than 1%.
For a more precise answer, we can calculate the probability of the house price falling by 3.33 standard deviations as the area under the standard normal curve to the left of -3.33. This is indeed a very small number (equal to NORMSDIST(-3.33) = 0.00043), which when multiplied by the home buyer’s 5% probability of default is certainly going to be less than 1%. Therefore Choice ‘b’ is the correct answer.
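A sketch of the precise calculation using the question’s inputs (independence between the borrower’s default and the house price is assumed, as stated in the question):

```python
from scipy.stats import norm

pd_borrower = 0.05
loan, house_value = 1_000_000, 1_500_000
sigma = 0.10 * house_value        # $150k annual volatility of the house value

# Probability the house value falls below the loan amount (normal returns assumed)
z = (loan - house_value) / sigma  # ~ -3.33
p_house_below_loan = norm.cdf(z)  # ~0.00043

p_loss_of_principal = pd_borrower * p_house_below_loan
print(p_house_below_loan, p_loss_of_principal)   # well below 1%
```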
When the volatility of the yield for a bond increases, which of the following statements is true:
- A . The VaR for the bond decreases and its value increases
- B . The VaR for the bond increases and its value decreases
- C . The VaR for the bond decreases and its value is unaffected
- D . The VaR for the bond increases and its value stays the same
D
Explanation:
The VaR of a fixed income instrument is given by Duration x Volatility of the interest rate x z-factor corresponding to the confidence level. Therefore as the volatility of the yield goes up, the value at risk for the instrument goes up.
At the same time, the value of the bond is given by the present value of its future cash flows using the current yield curve. This value is unaffected by the volatility of the underlying interest rates. Therefore a change in volatility of interest rates does not affect the value of the bond.
Therefore Choice ‘d’ represents the correct answer.
Altman’s Z-score does not consider which of the following ratios:
- A . Market capitalization to debt
- B . Sales to total assets
- C . Net income to total assets
- D . Working capital to total assets
C
Explanation:
A computation of Altman’s Z-score considers the following ratios:
– Working capital to total assets
– Retained earnings to total assets
– EBIT to total assets
– Market cap to debt
– Sales to total assets
It does not consider Net Income to total assets, therefore Choice ‘c’ is the correct answer. This makes sense as net income is after interest and taxes, both of which are not relevant for considering the cash flows for debt servicing.
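For reference, a minimal sketch of the classic (1968) public-company Z-score built from the five ratios listed above; the input values are hypothetical:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic 1968 Altman Z-score (public manufacturing firms)."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

# Hypothetical ratios: working capital/TA, retained earnings/TA, EBIT/TA,
# market cap/total debt, sales/TA
z = altman_z(0.15, 0.20, 0.10, 1.5, 0.9)
print(round(z, 2))   # ~2.59 (classic zones: below ~1.81 distressed, above ~2.99 safe)
```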
Which of the following methods cannot be used to calculate Liquidity at Risk?
- A . Monte Carlo simulation
- B . Analytical or parametric approaches
- C . Historical simulation
- D . Scenario analysis
B
Explanation:
Analytical or parametric approaches are not useful for Liquidity at Risk calculations because there are no neat distributions available to parameterize the large number of factors that affect liquidity inflows and outflows. Historical simulation, Monte Carlo simulation and scenario analysis (which can complement historical scenarios) are all valid choices.
For a corporate issuer, which of the following can be used to calculate market implied default probabilities?
I. CDS spreads
II. Bond prices
III. Credit rating issued by S&P
IV. Altman’s scoring model
- A . III and IV
- B . I and II
- C . I, II and III
- D . II and III
B
Explanation:
Generally, the probability of default is an input into determining the price of a security. However, if we know the market price of a security, we can back out the probability of default that the market is factoring into pricing that security. Market implied default probabilities are the probabilities of default priced into security prices, and can be determined from both bond prices and CDS spreads. Credit ratings issued by a credit agency do not give us ‘market implied default probabilities’, and neither does an internal scoring model like Altman’s as these do not consider actual market prices in any way. Therefore Choice ‘b’ is the correct answer and the others are not.
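As a rough sketch of how a market-implied default probability can be backed out of a CDS spread, using the credit-triangle approximation (spread ≈ hazard rate x loss given default) rather than a full curve bootstrap; the spread and recovery figures are assumed:

```python
cds_spread = 0.02    # 200 bps (assumed)
recovery = 0.40      # assumed recovery rate

hazard_rate = cds_spread / (1 - recovery)   # annualized market-implied default intensity
print(round(hazard_rate, 4))                # ~0.0333, ie ~3.3% per year
```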
Which of the following is not a limitation of the univariate Gaussian model to capture the codependence structure between risk factors used for VaR calculations?
- A . The univariate Gaussian model fails to fit to the empirical distributions of risk factors, notably their fat tails and skewness.
- B . Determining the covariance matrix becomes an extremely difficult task as the number of risk factors increases.
- C . It cannot capture linear relationships between risk factors.
- D . A single covariance matrix is insufficient to describe the fine codependence structure among risk factors as non-linear dependencies or tail correlations are not captured.
C
Explanation:
In the univariate Gaussian model, each risk factor is modeled separately independent of the others, and the dependence between the risk factors is captured by the covariance matrix (or its equivalent combination of the correlation matrix and the variance matrix). Risk factors could include interest rates of different tenors, different equity market levels etc.
While this is a simple enough model, it has a number of limitations.
First, it fails to fit to the empirical distributions of risk factors, notably their fat tails and skewness. Second, a single covariance matrix is insufficient to describe the fine codependence structure among risk factors as non-linear dependencies or tail correlations are not captured. Third, determining the covariance matrix becomes an extremely difficult task as the number of risk factors increases. The number of covariances increases by the square of the number of variables.
But an inability to capture linear relationships between the factors is not one of the limitations of the univariate Gaussian approach – in fact it is able to do that quite nicely with covariances.
A way to address these limitations is to consider joint distributions of the risk factors that capture their dynamic relationships, recognizing that correlation is not a static number across the entire range of outcomes: the risk factors can behave differently with each other in different parts of the distribution.
An investor holds a bond portfolio with three bonds with a modified duration of 5, 10 and 12 years respectively. The bonds are currently valued at $100, $120 and $150.
If the daily volatility of interest rates is 2%, what is the 1-day VaR of the portfolio at a 95% confidence level?
- A . 115.51
- B . 163.11
- C . 370
- D . 165
A
Explanation:
The total value of the portfolio is $370 (=$100 + $120 + $150). The modified duration of the portfolio is the weighted average of the MDs of the different bonds, ie =(5 * 100/370) + (10 * 120/370) + (12 * 150/370) = 9.46.
This means that for every 1% (absolute) change in interest rates, the value of the portfolio changes by 9.46%. Since the daily volatility of interest rates is 2%, the 95% confidence level move in rates is 1.65 * 2% = 3.30%. Thus, the 1-day VaR of the portfolio at the 95% confidence level is 9.46 * 3.30% * $370 = $115.51.
All other answers are incorrect.
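A sketch reproducing the portfolio duration and VaR calculation above (inputs from the question):

```python
values = [100, 120, 150]
mod_durations = [5, 10, 12]
daily_rate_vol = 0.02       # absolute daily volatility of rates
z_95 = 1.65

portfolio_value = sum(values)
portfolio_md = sum(v * d for v, d in zip(values, mod_durations)) / portfolio_value  # ~9.46

var_95 = portfolio_md * (z_95 * daily_rate_vol) * portfolio_value
print(round(portfolio_md, 2), round(var_95, 2))   # 9.46, ~115.5
```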
A bank evaluates the impact of large and severe changes in certain risk factors on its risk using a quantitative valuation model.
Which of the following best describes this exercise?
- A . Stress testing
- B . Simulation
- C . Scenario analysis
- D . Sensitivity analysis
C
Explanation:
It is important to note the difference between sensitivity analysis and stress testing. Sensitivity analysis applies to measuring the effect of changes on the outputs of a model by varying the inputs – generally one input at a time.
In scenario analysis, a number of variables may be changed at the same time to see the impact on the dependent variable. For example, a bank may measure the changes in the value of its mortgage portfolio by varying its assumptions on prepayment expectations, interest rates and other factors, using its modeling software or application. The changes in the inputs may or may not relate to integrated real world situations that may arise. Sensitivity analysis is purely a quantitative exercise, much like calculating the delta of a portfolio.
A stress test may include shocks or large changes to input parameters but it does so as part of a larger stress testing programme that generally considers the interaction of risk factors, past scenarios etc. At its simplest, a stress test may be no different from a sensitivity analysis exercise, but that is generally not what is considered a stress test at large financial institutions.
A stress test may consider multiple scenarios, for example one scenario may include the events witnessed during the Asian crisis, another may include the events of the recent credit crisis. Simulation generally refers to a Monte Carlo or historical simulation, and is often a more limited exercise.
The exercise described in the question is the closest to a scenario analysis, therefore Choice ‘c’ is the correct answer.
It is important to note that all of the choices referred to in this question are related to each other, and the boundaries between them tend to be fuzzy. At what point a complex sensitivity analysis starts resembling a scenario, or a scenario a stress test, can always be debated, but such a debate is more about semantics than of any practical use.
If the cumulative default probabilities of default for years 1 and 2 for a portfolio of credit risky assets is 5% and 15% respectively, what is the marginal probability of default in year 2 alone?
- A . 15.79%
- B . 10.53%
- C . 10.00%
- D . 11.76%
B
Explanation:
One way to think about this question is this: we are provided with two pieces of information: if the portfolio is worth $100 to start with, it will be worth $95 at the end of year 1 and $85 at the end of year 2.
What it is asking for is the probability of default in year 2, for the debts that have survived year 1. This probability is $10/$95 = 10.53%. Choice ‘b’ is the correct answer.
Note that marginal probabilities of default are the probabilities of default for a given period, conditional on survival till the end of the previous period. Cumulative probabilities of default are probabilities of default by a point in time, regardless of when the default occurs. If the marginal probabilities of default for periods 1, 2 … n are p1, p2 … pn, then the cumulative probability of default can be calculated as Cn = 1 – (1 – p1)(1 – p2)…(1 – pn). For this question, we can verify the cumulative probability of default by the end of year 2 as 1 – (1 – 5%)(1 – 10.53%) = 15%.
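The same arithmetic as a short sketch (inputs from the question):

```python
c1, c2 = 0.05, 0.15   # cumulative PDs at the end of years 1 and 2

# Marginal (conditional) PD in year 2 = incremental defaults / survivors of year 1
marginal_y2 = (c2 - c1) / (1 - c1)
print(round(marginal_y2, 4))                       # 0.1053

# Consistency check: rebuild the 2-year cumulative PD from the marginals
print(round(1 - (1 - c1) * (1 - marginal_y2), 4))  # 0.15
```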
In the case of historical volatility weighted VaR, a higher current volatility when compared to historical volatility:
- A . will not affect the VaR estimate
- B . will increase the confidence interval
- C . will decrease the VaR estimate
- D . will increase the VaR estimate
D
Explanation:
When calculating volatility weighted VaR, returns are adjusted by a factor equal to the current volatility divided by the historical volatility, ie the volatility that existed during the time period the returns were earned. If the current volatility is greater than the historical volatility (also called contemporary volatility), then it has the effect of increasing the magnitude of any past returns (whether positive or negative). This in turn increases the VaR.
Consider an example: if the current volatility is 2%, and a return of -5% was earned at a time when the volatility was 0.8%, then the volatility weighted return would be -12.5% (= -5% x 2%/0.8%). Clearly, this has the effect of increasing the VaR. Choice ‘d’ is therefore the correct answer.
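The volatility-weighting adjustment from the example above, as a one-line sketch:

```python
raw_return = -0.05        # return observed in the historical window
historical_vol = 0.008    # volatility prevailing when the return was earned
current_vol = 0.02        # current volatility

weighted_return = raw_return * current_vol / historical_vol
print(weighted_return)    # -0.125: the past loss is scaled up to -12.5%
```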
Which of the following are true:
I. Delta hedges need to be rebalanced frequently as deltas fluctuate with fluctuating prices.
II. Portfolio managers are right to focus on primary risks over secondary risks.
III. Increasing the hedge rebalance frequency reduces residual risks but increases transaction costs.
IV. Vega risk can be hedged using options.
- A . I and II
- B . II, III and IV
- C . I, II, III and IV
- D . I, II and III
C
Explanation:
Delta is non-linear with respect to prices for a number of securities such as bonds, options and other derivatives. It changes with changes in prices, and any hedge initially undertaken becomes quickly mismatched. Therefore delta hedges need to be managed quite actively and kept up-to-date. Therefore I is true. Primary risks comprise most of the risk in a position, and therefore portfolio managers are right to focus on them over secondary risks. Therefore II is true. The greater the hedge rebalance frequency, the lower is the hedge mismatch at any point in time, and therefore residual risks would be lower. However, rebalancing hedges requires rebalance trades to be done, and these involve transaction costs. Generally, a reasonable balance needs to be struck between the frequency of
rebalances (a lower frequency increases residual risk, but this residual risk is not directionally biased) and the costs of rebalancing. III is correct. Vega risk is the risk arising due to changes in prices due to changes in volatility. Options carry vega risk. Therefore any hedges against vega risks can only be obtained using other options positions. (Vega risk may also be hedged using other volatility based products, eg an OTC volatility swap, or a VIX futures type product.)
Which of the following losses can be attributed to credit risk:
I. Losses in a bond’s value from a credit downgrade
II. Losses in a bond’s value from an increase in bond yields
III. Losses arising from a bond issuer’s default
IV. Losses from an increase in corporate bond spreads
- A . I, III and IV
- B . II and IV
- C . I and II
- D . I and III
D
Explanation:
Losses due to credit risk include the loss of value from credit migration and default events (which can be considered a migration to the ‘default’ category). Therefore Choice ‘d’ is the correct answer. Changes in spreads or interest rates are examples of market risk events. [Discussion: It may be argued that losses from spreads changing could be categorized as credit risk and not market risk. The distinction between credit and market risk is never really watertight.
The reason I have called it market risk in this question is because spreads can change for two reasons: first, due to the individual issuer going down in their credit rating (whether issued or perceived, as witnessed with European sovereign debt), and second, due to the spread for the overall category changing on macro fundamentals with nothing changing for the individual issuer. For example, the spread between municipal bonds and treasuries may be small during boom times and may expand during recessions – regardless of how the individual issuer has been doing. Clearly, the first case is credit risk and the second is probably market risk.
A change in overall corporate bond spreads is something I would consider akin to a rate change – which is why I have called it as not a part of credit risk. But an alternative perspective may not be incorrect either.]
Which of the following situations are not suitable for applying parametric VaR:
I. Where the portfolio’s valuation is linearly dependent upon risk factors
II. Where the portfolio consists of non-linear products such as options and large moves are involved
III. Where the returns of risk factors are known to be not normally distributed
- A . I and II
- B . II and III
- C . I and III
- D . All of the above
B
Explanation:
Parametric VaR relies upon reducing a portfolio’s positions to risk factors, and estimating the first order changes in portfolio values from each of the risk factors. This is called the delta approximation approach. Risk factors include stock index values, or the PV01 for interest rate products, or volatility for options. This approach can be quite accurate and computationally efficient if the portfolio comprises products whose value behaves linearly to changes in risk factors. This includes long and short positions in equities, commodities and the like.
However, where non-linear products such as options are involved and large moves in the risk factors are anticipated, a delta approximation based valuation may not give accurate results, and the VaR may be misstated. Therefore in such situations parametric VaR is not advised (unless it is extended to include second and third level sensitivities which can bring its own share of problems).
Parametric VaR also assumes that the returns of risk factors are normally distributed – an assumption that is violated in times of market stress. So if it is known that the risk factor returns are not normally distributed, it is not advisable to use parametric VaR.
Which of the following is not a parameter to be determined by the risk manager that affects the level of economic credit capital:
- A . Risk horizon
- B . Confidence level
- C . Probability of default
- D . Definition of credit losses
C
Explanation:
Three parameters define economic credit capital: the risk horizon, ie the time horizon over which the risk is being assessed; the confidence level, ie the quantile of the loss distribution; and the definition of credit losses, ie whether mark-to-market losses are considered in addition to default-only losses. The probability of default is not a parameter within the control of the risk manager, but an input into the capital calculation process that he has to estimate. Therefore Choice ‘c’ is the correct answer.
Which of the following statements is true:
I. Expected credit losses are charged to the unit’s P&L while unexpected losses hit risk capital reserves.
II. Credit portfolio loss distributions are symmetrical
III. For a bank holding $10m in face value of a defaulted debt that it acquired for $2m, the bank’s legal claim in the bankruptcy court will be $10m.
IV. The legal claim in bankruptcy court for an over the counter derivatives contract will be the notional value of the contract.
- A . I and III
- B . I, II and IV
- C . III and IV
- D . II and IV
A
Explanation:
Statement I is true as expected losses are the ‘cost of doing business’ and charged against the P&L of the unit holding the exposure. When evaluating the business unit, expected losses are taken into account. Unexpected losses however require risk capital reserves to be maintained against them.
Statement II is not true. Credit portfolio loss distributions are not symmetrical, in fact they are highly skewed and have heavy tails.
Statement III is true. The notional, or the face value of a defaulted debt is the basis for a claim in bankruptcy court, and not the market value.
Statement IV is false. In the case of over the counter instruments, the replacement value of the contract represents the amount of the claim, and not the notional amount (which can be very high!).
Under the standardized approach to calculating operational risk capital, how many business lines are a bank’s activities divided into per Basel II?
- A . 7
- B . 15
- C . 8
- D . 12
C
Explanation:
In the Standardized Approach, banks’ activities are divided into eight business lines: corporate finance, trading & sales, retail banking, commercial banking, payment & settlement, agency services, asset management, and retail brokerage. Therefore Choice ‘c’ is the correct answer.
What is the risk horizon period used for credit risk as generally used for economic capital calculations and as required by regulation?
- A . 1-day
- B . 1 year
- C . 10 years
- D . 10 days
B
Explanation:
The credit risk horizon for credit VaR is generally one year. Therefore Choice ‘b’ is the correct answer.
Which of the following statements are true:
I. Credit VaR often assumes a one year time horizon, as opposed to a shorter time horizon for market risk as credit activities generally span a longer time period.
II. Credit losses in the banking book should be assessed on the basis of mark-to-market mode as opposed to the default-only mode.
III. The confidence level used in the calculation of credit capital is high when the objective is to maintain a high credit rating for the institution.
IV. Credit capital calculations for securities with liquid markets and held for proprietary positions should be based on marking positions to market.
- A . I and III
- B . I, III and IV
- C . I and II
- D . II and III
B
Explanation:
Statement I is correct as credit VaR calculations often use a one year time horizon. This is primarily because the cycle in respect of credit related activities, such as loan loss reviews, accounting cycles for borrowers etc last a year.
Statement II is false. There are two ways in which loss assessments in respect of credit risk can be made: default mode, where losses are considered only in respect of default, and no losses are recognized in respect of the deterioration of the creditworthiness of the borrower (which is often expressed through a credit rating transition matrix); and the mark-to-market mode, where losses due to both defaults and credit quality are considered. The default mode is used for the loan book where the institution has lent moneys and generally intends to hold the loan on its books till maturity. The mark to market mode is used for traded securities which are not held to maturity, or are held only for trading.
Statement III is correct. The confidence level, ie the quantile of losses used when the objective is to maintain a high credit rating for the institution, tends to be very high, as the possibility of the institution’s default needs to be remote.
Statement IV is correct too, for the reasons explained earlier.
If the full notional value of a debt portfolio is $100m, its expected value in a year is $85m, and the worst value of the portfolio in one year’s time at 99% confidence level is $60m, then what is the credit VaR?
- A . $40m
- B . $25m
- C . $60m
- D . $15m
B
Explanation:
Credit VaR is the difference between the expected value of the portfolio and the value of the portfolio at the given confidence level. Therefore the credit VaR is $85m – $ 60m = $25m. Choice ‘b’ is the correct answer.
Note that economic capital and credit VaR are identical at a risk horizon of one year. Therefore if the question asks for economic capital, the answer would be the same. [Again, an alternative way to look at this is to consider the explanation given in III.B.6.2.2: Credit VaR = Q(L) – EL, where Q(L) is the total loss at a given confidence level and EL is the expected loss. In this case Q(L) = $100 – $60 = $40, and EL = $100 – $85 = $15. Therefore Credit VaR = $40 – $15 = $25.]
Which of the following belong in a credit risk report?
- A . Exposures by country
- B . Exposures by industry
- C . Largest exposures by counterparty
- D . All of the above
D
Explanation:
All the listed variables are relevant to management monitoring the credit risk profile of an institution, therefore Choice ‘d’ is the correct answer.
Ex-ante VaR estimates may differ from realized P&L due to:
I. the effect of intra day trading
II. timing differences in the accounting systems
III. incorrect estimation of VaR parameters
IV. security returns exhibiting mean reversion
- A . I and III
- B . II, III and IV
- C . I, II and III
- D . I, II and IV
C
Explanation:
Ex-ante VaR calculations can differ from actual realized P&L due to a large number of reasons. I, II and III represent some of them. Mean reversion however has nothing to do with VaR estimates differing from actual P&L. Therefore Choice ‘c’ is the correct answer.
As the persistence parameter under EWMA is lowered, which of the following would be true:
- A . The model will react slower to market shocks
- B . The model will react faster to market shocks
- C . High variance from the recent past will persist for longer
- D . The model will give lower weight to recent returns
B
Explanation:
The persistence parameter, λ, is the coefficient of the prior day’s variance in the EWMA calculation: σ²(t) = λ x σ²(t-1) + (1 – λ) x r²(t-1). A higher value of the persistence parameter tends to ‘persist’ the prior value of variance for longer. Consider an extreme example – if the persistence parameter equals 1, the variance under EWMA will never change in response to returns.
1 – λ is the coefficient of recent market returns. As λ is lowered, 1 – λ increases, giving greater weight to recent market returns or shocks. Therefore, as λ is lowered, the model reacts faster to market shocks and gives higher weights to recent returns, while the weight on the prior variance falls, so that high variance from the recent past persists for a shorter period.
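A minimal sketch of the EWMA variance update described above; the 0.94 and 0.80 persistence values and the starting variance are illustrative assumptions:

```python
def ewma_variance(prev_variance, prev_return, lam):
    """EWMA update: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2"""
    return lam * prev_variance + (1 - lam) * prev_return ** 2

sigma2 = 0.0001    # prior daily variance (1% daily volatility), assumed
shock = -0.03      # a large 3% one-day loss

print(ewma_variance(sigma2, shock, lam=0.94))   # high persistence: reacts slowly
print(ewma_variance(sigma2, shock, lam=0.80))   # lower persistence: reacts faster to the shock
```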