Most of the important financial industry tests (Series 6, 7, 24, 26, CFA I, CFA II, CFA III, etc.) have only two possible outcomes: PASS or FAIL. Failure is a waste of time and money. Over-studying, however, can also waste time. (Studying for a PASS/FAIL test is investing in a binary “real option.”)
All of the material is worth knowing for someone, but some information is simply not relevant to everyone. For example, investment advisor reps don’t necessarily need to know all of the rules for broker-dealer agents (and vice versa). Knowing the material that is relevant to you provides value beyond simply helping you pass a test.
That said, the goal is to PASS. And you’ve got a million other things to do. So what’s a quant to do? Get quantitative of course!
Quantitative Test Prep
Step 1: Find representative sample tests. All else hinges on this. Obtaining sample tests from multiple independent sources may help.
Step 2: Determine your average score on practice tests.
Step 3: Determine the standard deviation of your scores.
Step 4: Calculate the probability of achieving a passing score given your mean score and standard deviation.
Step 5: Decide the risk/reward and whether more study provides sufficient ROI.
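The core of Steps 2 through 4 can be sketched in a few lines of Python. This is a minimal sketch assuming practice scores are roughly normal; the sample scores and the 70% threshold below are hypothetical.

```python
import math

def pass_probability(scores, threshold):
    """Estimate Pr(real score >= threshold) from practice-test scores,
    assuming scores are approximately normally distributed."""
    n = len(scores)
    mu = sum(scores) / n
    # Sample standard deviation (n - 1 in the denominator).
    sigma = math.sqrt(sum((s - mu) ** 2 for s in scores) / (n - 1))
    # Normal survival function expressed via the complementary error function.
    z = (threshold - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

# Hypothetical practice scores against a 70% passing threshold.
scores = [72, 78, 75, 81, 74]
print(round(pass_probability(scores, 70), 3))
```

Note that if the mean equals the threshold, this returns exactly 0.5, matching the 50% claim above.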
Assuming normal distributions, I use the 68/95/99.7 rule. Regardless of the standard deviation, if your practice average is the same as the minimum passing score, your chance of success is only 50%. If your mean practice score is 1 sigma above the threshold for passing, your chance on the real test is about 84%. If your mean score is 2 sigma above, your chance of passing is almost 98%.
This little exercise shows two possible ways to improve your expected pass rate. The obvious way is getting better with the material. The less obvious way is reducing your standard deviation. Can this second way be achieved? If so, how?
Keeping in mind the four-answer multiple-choice format, the mean deviation is:
MD = 2*p*(1-p)
Where p is the probability of answering a particular question correctly. Per-question deviation (PQD) is highest at 0.5 when p=0.5, and lowest at 0 when p=1. For random guessing (p=0.25), PQD is 0.375.
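As a quick check, the MD formula can be tabulated for the values quoted above (a minimal sketch):

```python
def pqd(p):
    """Per-question mean deviation, MD = 2*p*(1-p), for a Bernoulli(p) answer."""
    return 2 * p * (1 - p)

# Tabulate: random guessing, worst case, partial mastery, full mastery.
for p in (0.25, 0.5, 0.75, 1.0):
    print(f"p={p:.2f}  PQD={pqd(p):.3f}")
```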
Increasing your p for a given question category from the random-guess baseline of 0.25 to 0.5 will increase your expected score, but will also increase sigma. Taking the first derivative of MD with respect to p gives: 2-4p. Because the range of p is [0,1] (arguably [0.25,1] on a four-answer test), the incremental decrease in MD is greatest near p=1.
Now, the test candidate must decide what d/dt(pqc(t)) is for each question category qc, where t is the time spent studying that category. Studying the categories with the highest d/dt(pqc(t)) will most efficiently improve the expected score. Further, studying the categories with the maximum d/dt(pqc(t))*(4p-2) will most reduce PQD, and hence the test standard deviation.
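The two rankings can differ, as a small sketch shows. The category names, current p values, and per-hour improvement rates (dp/dt) below are entirely hypothetical; the point is only the selection logic.

```python
# Hypothetical per-category estimates: current p and marginal gain dp/dt
# (probability improvement per hour of study). These numbers are made up.
categories = {
    "options":     {"p": 0.55, "dpdt": 0.010},
    "regulations": {"p": 0.80, "dpdt": 0.008},
    "suitability": {"p": 0.90, "dpdt": 0.004},
}

def score_gain(c):
    # Expected-score improvement rate is just dp/dt.
    return c["dpdt"]

def pqd_reduction(c):
    # d(PQD)/dt = (2 - 4p) * dp/dt, so the reduction rate is (4p - 2) * dp/dt.
    return (4 * c["p"] - 2) * c["dpdt"]

best_for_score = max(categories, key=lambda k: score_gain(categories[k]))
best_for_sigma = max(categories, key=lambda k: pqd_reduction(categories[k]))
print(best_for_score, best_for_sigma)
```

With these made-up numbers, the best category for raising the expected score is not the best one for shrinking the deviation.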
Deeper Analysis of the Meta Problem
Naturally, this analysis only scratches the tactical surface of the "binary-test optimization meta problem." [The test itself is the problem; the tactics are part of the meta-problem of optimizing generalized multiple-choice test prep.] Improving from p=0.8 to p=0.9 is clearly better than improving from p=0.4 to p=0.5 in terms of PQD reduction (PQD falls from 0.32 to 0.18 in the first case, but actually rises from 0.48 to 0.50 in the second), and the two are equal in terms of increased expected score.
Also of relevance is a (modified) downside semi-deviation counterpart of PQD, which I will call PQDd. I’ll spare you the derivation; it turns out that:
PQDd = p*sqrt(2*(1-p))
This value peaks at p=2/3 with a value of 0.5443. PQDd slowly ascends as p goes from 0.25 up to 0.667, then falls pretty rapidly for values of p>0.8.
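The peak is easy to confirm numerically (a sketch using a grid search rather than calculus):

```python
import math

def pqdd(p):
    """Downside semi-deviation per question: PQDd = p*sqrt(2*(1-p))."""
    return p * math.sqrt(2 * (1 - p))

# Locate the maximum on a fine grid; analytically it sits at p = 2/3.
peak_p = max((i / 10000 for i in range(10001)), key=pqdd)
print(round(peak_p, 4), round(pqdd(peak_p), 4))
```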
We care about the random variable S, the actual test score, which we model as normally distributed with mean μ (the expected score) and standard deviation σ. What we really care about is Pr(S >= Threshold), the probability that our score meets or exceeds the minimum passing score.
PQD = PQDd only when p = 0, 0.5, or 1. For p in (0,0.5), PQDd < PQD, and for p in (0.5,1), PQDd > PQD. Even though it may seem a bit strange for a discrete binary distribution, p in (0,0.5) gives positive skewness and p in (0.5,1) gives negative skewness.
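The crossover between the two measures can be verified with a quick sketch:

```python
import math

def pqd(p):
    # Per-question mean deviation.
    return 2 * p * (1 - p)

def pqdd(p):
    # Per-question downside semi-deviation.
    return p * math.sqrt(2 * (1 - p))

# Below p=0.5 the downside measure is smaller; above it, larger.
for p in (0.25, 0.5, 0.75):
    print(f"p={p:.2f}  PQD={pqd(p):.4f}  PQDd={pqdd(p):.4f}")
```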
In the “final” analysis, the chance of passing, Pr(S >= Threshold), depends on the score mean, μ, and the downside deviation, σd. In turn, σd depends on PQD and PQDd.
Summary and Conclusions
Theoretically, one’s best course of action is to 1) increase the average expected score and 2) reduce σd. If practical, the best and most efficient way to achieve both objectives simultaneously is to improve areas currently in the 60-75% range (p=0.6 to 0.75) up to the mid-to-high 90% range (p>=0.95). This may seem counter-intuitive, but the math is solid.
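For instance, the recommended move from p=0.70 to p=0.95 raises the expected per-question score by 0.25 while substantially cutting PQDd (a sketch using the PQDd formula from earlier):

```python
import math

def pqdd(p):
    # Per-question downside semi-deviation: p*sqrt(2*(1-p)).
    return p * math.sqrt(2 * (1 - p))

p_from, p_to = 0.70, 0.95
score_gain = p_to - p_from
pqdd_drop = pqdd(p_from) - pqdd(p_to)
print(round(score_gain, 2), round(pqdd_drop, 4))
```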
Caveats: This analysis is mostly an exercise in showing the value of statistics, variance, and downside variance in an area outside of finance. It shows that there is more than one way to approach a goal; in this case, passing a standardized test.