HSCI 341 — Lesson 5

Screening & Diagnostic Tests

Fundamental Epidemiological Concepts and Approaches

Kiffer G. Card, PhD, Faculty of Health Sciences, Simon Fraser University

Learning objectives for this lesson:

  • Define accuracy and precision as they relate to test characteristics
  • Interpret measures of precision for quantitative tests and calculate kappa for categorical tests
  • Define sensitivity and specificity, and calculate their estimates and confidence intervals
  • Define predictive values and explain the factors that influence them
  • Choose appropriate cutpoints using ROC curves and likelihood ratios
  • Use multiple tests and interpret results in series or parallel

This course was developed by Kiffer G. Card, PhD, as a companion to Dohoo, I. R., Martin, S. W., & Stryhn, H. (2012). Methods in Epidemiologic Research. VER Inc.

Section 1

Introduction & Test Attributes

⏱ Estimated reading time: 12 minutes

Learning Objectives

  • Distinguish between screening tests and diagnostic tests.
  • Define analytic sensitivity and specificity of a test.
  • Explain the difference between accuracy and precision.
  • Describe measures of agreement, including Cohen’s kappa and weighted kappa.

What Is a Test?

A test is any device or procedure designed to detect or quantify a sign, substance, tissue change, or body response in an individual. Tests can also be applied at the household or other levels of aggregation. In epidemiology, the term “test” extends broadly to include clinical signs, history-taking questions, survey items, and post-mortem findings.

Why Evaluate Tests?

In a decision-making context (e.g., clinical diagnosis), the selection of an appropriate test should alter your assessment of the probability that a disease exists, and guide subsequent actions (further testing, treatment, quarantine). In a research context, understanding test characteristics is essential for knowing how they affect data quality.

Screening vs. Diagnostic Tests

Screening tests are applied to apparently healthy populations to detect disease early, before clinical signs appear. Diagnostic tests are applied to individuals already suspected of having the disease, to confirm or rule out a diagnosis.

Despite their different uses, the principles of evaluation and interpretation are the same for both screening and diagnostic tests.

Attributes of the Test Per Se

Analytic Sensitivity and Specificity

The analytic sensitivity of an assay refers to the lowest concentration of a chemical compound the test can detect. The analytic specificity refers to the capacity of a test to react to only one chemical compound. These are distinctly different from diagnostic (epidemiologic) sensitivity and specificity, which are discussed in Section 2.

Accuracy and Precision

The laboratory accuracy of a test relates to its ability to give a true measure of the substance of interest. To be accurate, a test need not always be close to the true value, but if repeat tests are run, the resulting average should be close to the true value.

The precision of a test relates to how consistent the results are. If a test always gives the same value for a sample (regardless of whether it is the correct value), it is said to be precise.

[Four bullseye panels: Accurate & Precise; Inaccurate but Precise; Accurate but Imprecise; Inaccurate & Imprecise]

Figure 5.1 — Laboratory accuracy and precision. The bullseye represents the true value.

Precision and Agreement

Repeatability refers to variability obtained from repeated testing of the same sample within the same laboratory. Reproducibility refers to variability from testing the same sample in different laboratories. Agreement refers to how well two different tests (or raters) agree when applied to the same sample.

Measuring Precision: Quantitative Tests

Common measures for quantifying variability between pairs of test results include:

Coefficient of Variation (CV)

The CV is computed as CV = σ / μ, where σ is the standard deviation among test results on the same sample and μ is the mean. A lower CV indicates greater precision.
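The CV calculation can be sketched in a few lines of Python; the measurement values below are invented for illustration.

```python
# Coefficient of variation (CV = sigma / mu) for repeated runs on one sample.
# The values below are hypothetical repeated test results on the same sample.
import statistics

runs = [4.1, 3.9, 4.0, 4.2, 3.8]
cv = statistics.pstdev(runs) / statistics.mean(runs)
print(f"CV = {cv:.3f}")   # lower CV indicates greater precision
```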

Concordance Correlation Coefficient (CCC)

The CCC compares two sets of test results and better reflects agreement than a Pearson correlation. It is computed from three parameters: the location-shift (how far data are from the equality line), the scale-shift (difference in slopes), and the Pearson r. A CCC of 1 indicates perfect agreement.

Limits of Agreement (Bland-Altman Plot)

A Bland-Altman plot displays the differences between paired test results against their mean value. The mean difference (μd) and limits of agreement (μd ± 1.96σd) are shown. This reveals systematic bias and whether disagreement varies with the magnitude of the measurement.
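The numerical backbone of a Bland-Altman plot, the mean difference and the limits of agreement, can be computed directly. The paired values below are invented for illustration.

```python
# Mean difference and 95% limits of agreement for paired results from
# two tests applied to the same samples (illustrative data only).
import statistics

test_a = [102, 98, 110, 95, 101, 107]
test_b = [100, 99, 108, 97, 100, 104]

diffs = [a - b for a, b in zip(test_a, test_b)]
mean_d = statistics.mean(diffs)   # systematic bias between the tests
sd_d = statistics.stdev(diffs)    # spread of the disagreement
lower, upper = mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d
print(f"bias = {mean_d:.2f}, limits of agreement = ({lower:.2f}, {upper:.2f})")
```

On the plot itself, each pair contributes one point at (mean of the pair, difference of the pair), with horizontal lines drawn at `mean_d`, `lower`, and `upper`.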

Measuring Agreement: Categorical Tests — Kappa (κ)

When test results are categorical (dichotomous or ordinal), Cohen’s kappa (κ) measures agreement beyond what would be expected by chance alone.

κ = (observed agreement − expected agreement) / (1 − expected agreement) Eq 5.2
κ Value        Interpretation
≤ 0            Poor agreement
0.01 – 0.20    Slight agreement
0.21 – 0.40    Fair agreement
0.41 – 0.60    Moderate agreement
0.61 – 0.80    Substantial agreement
0.81 – 1.00    Almost perfect agreement
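Eq 5.2 can be applied directly to a 2×2 agreement table for two tests; the counts below are hypothetical.

```python
# Cohen's kappa (Eq 5.2) from a hypothetical 2x2 agreement table:
#              Test B +   Test B -
# Test A +        40         10
# Test A -         5         45
a, b, c, d = 40, 10, 5, 45
n = a + b + c + d

observed = (a + d) / n   # raw (observed) agreement
# chance-expected agreement, computed from the marginal totals
expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")
```

Here observed agreement is 0.85 and chance agreement is 0.50, giving κ = 0.70, "substantial agreement" on the scale above.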

Factors Affecting Kappa

Bias: If one test consistently produces more positive results than the other, κ will be affected. Use McNemar’s χ² test to check whether the two tests classify the same proportion as positive before evaluating agreement.

Prevalence: The prevalence of the underlying condition affects κ. Two tests will have a higher κ when prevalence is moderate (~0.5) compared to very high or very low prevalence.

Weighted Kappa

For tests measured on an ordinal scale, a weighted kappa accounts for partial agreement. Pairs of test results that are close (e.g., scores of 4 and 5) receive more credit than pairs that are far apart (e.g., scores of 1 and 5). This provides a better reflection of agreement for ordinal data.

Key Takeaways

  • A test is any procedure designed to detect or quantify a sign, substance, or response.
  • Screening tests are applied to healthy populations; diagnostic tests are applied to individuals suspected of disease.
  • Accuracy measures closeness to the true value; precision measures consistency of results.
  • Cohen’s kappa quantifies agreement beyond chance for categorical tests; weighted kappa extends this to ordinal scales.
  • Prevalence and bias both affect kappa values.
Knowledge Check — Section 1

1. A test that always gives the same result for a sample, but the result is consistently wrong, is best described as:

Precision relates to consistency of results. If the test always gives the same value, it is precise. However, if that value is wrong, it is inaccurate. This corresponds to the “inaccurate but precise” target pattern.

2. Cohen’s kappa measures:

Kappa measures the extent of agreement between two sets of categorical test results (or raters) beyond what would be expected by chance alone.

3. Which statement about screening and diagnostic tests is correct?

Screening tests are applied to healthy populations to detect disease early, while diagnostic tests are used to confirm disease in individuals already suspected of being ill. Despite different uses, the principles of evaluation are the same.

✦ Pass the knowledge check with 100% to continue

Section 2

Sensitivity & Specificity

⏱ Estimated reading time: 15 minutes

Learning Objectives

  • Explain the concept of a gold standard and its role in test evaluation.
  • Calculate sensitivity, specificity, false positive fraction, and false negative fraction from a 2×2 table.
  • Distinguish between true prevalence and apparent prevalence.
  • Estimate true prevalence from apparent prevalence using the Rogan-Gladen formula.

The Gold Standard

A gold standard (GS) is a test or procedure that is absolutely accurate — it diagnoses all cases of a specific disease and misdiagnoses none. In reality, very few true gold standards exist. Much of the error in test evaluation is due to biological variability: people do not immediately become “diseased” upon exposure, and the timescale for crossing a detectable threshold varies from person to person.

Important Caveat

When no true gold standard exists, alternative approaches for estimating sensitivity and specificity are needed, including the use of results from several different tests, repeated testing of selected samples, and latent class models (discussed in Section 5.7 of the textbook).

The 2×2 Contingency Table

The concepts of sensitivity and specificity are most easily understood through a 2×2 contingency table comparing disease status to test results:

                        Test Positive (T+)   Test Negative (T−)   Total
Disease Positive (D+)   a (true positive)    b (false negative)   m1
Disease Negative (D−)   c (false positive)   d (true negative)    m0
Total                   n1                   n0                   n

Key Measures from the 2×2 Table

  • Sensitivity (Se) = a/m1: the probability of a positive test given disease, p(T+|D+).
  • Specificity (Sp) = d/m0: the probability of a negative test given no disease, p(T−|D−).
  • False positive fraction (FPF) = c/m0 = 1 − Sp: the proportion of non-diseased individuals that test positive.
  • False negative fraction (FNF) = b/m1 = 1 − Se: the proportion of diseased individuals that test negative.

Worked Example (Norovirus EIA Data)

From a study of 188 stool samples tested with an EIA against a gold standard:

        GS+ (D+)   GS− (D−)   Total
T+      71         3          74
T−      11         103        114
Total   82         106        188
  • Se = 71/82 = 86.6% (95% CI: 77.3%, 93.1%)
  • Sp = 103/106 = 97.2% (95% CI: 92.0%, 99.4%)
  • FNF = 1 − 0.866 = 13.4%
  • FPF = 1 − 0.972 = 2.8%
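The arithmetic above can be checked with a short Python sketch using the standard 2×2 cell labels (the exact confidence intervals quoted in the text are not reproduced here):

```python
# Norovirus EIA counts from the 2x2 table above.
a, b = 71, 11    # GS+: true positives, false negatives
c, d = 3, 103    # GS-: false positives, true negatives

se = a / (a + b)   # Se = p(T+|D+)
sp = d / (c + d)   # Sp = p(T-|D-)
fnf = 1 - se       # false negative fraction
fpf = 1 - sp       # false positive fraction
print(f"Se = {se:.1%}, Sp = {sp:.1%}, FNF = {fnf:.1%}, FPF = {fpf:.1%}")
```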

True and Apparent Prevalence

The true prevalence (P) is the actual proportion of the population that has the disease. In Example 5.4, P = 82/188 = 43.6%.

The apparent prevalence (AP) is the proportion that tests positive, which includes both true positives and false positives. In Example 5.4, AP = 74/188 = 39.4%.

AP = P × Se + (1 − P) × (1 − Sp) Eq 5.6

Estimating True Prevalence from Apparent Prevalence

If the Se and Sp of a test are known, the true prevalence can be estimated from the apparent prevalence using the Rogan-Gladen formula:

P = (AP + Sp − 1) / (Se + Sp − 1) Eq 5.7

Example Calculation

If AP = 0.150, Se = 0.363, and Sp = 0.876, then:

P = (0.150 + 0.876 − 1) / (0.363 + 0.876 − 1) = 0.026 / 0.239 = 0.109 (10.9%)

Note: Some combinations of Se, Sp, and AP can produce estimates of P outside the range 0–1, indicating that the Se and Sp estimates may not be applicable to the population being studied.
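Eq 5.6 and Eq 5.7 are algebraic inverses of each other, which a short sketch makes explicit using the example numbers above:

```python
# Eq 5.6 and Eq 5.7 as a round trip: apparent prevalence from true
# prevalence, then true prevalence recovered with Rogan-Gladen.
def apparent_prev(p, se, sp):
    return p * se + (1 - p) * (1 - sp)        # Eq 5.6

def rogan_gladen(ap, se, sp):
    return (ap + sp - 1) / (se + sp - 1)      # Eq 5.7

# The example above: AP = 0.150, Se = 0.363, Sp = 0.876
p = rogan_gladen(0.150, 0.363, 0.876)
print(f"estimated true prevalence = {p:.3f}")

# sanity check: plugging the estimate back into Eq 5.6 recovers AP
assert abs(apparent_prev(p, 0.363, 0.876) - 0.150) < 1e-9
```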

Reflection

A new rapid test for influenza has a sensitivity of 75% and a specificity of 98%. In a population where the true prevalence of influenza is 5%, calculate the apparent prevalence using the formula AP = P × Se + (1 − P) × (1 − Sp). What does this tell you about relying solely on test results to estimate disease burden?


Key Takeaways

  • A gold standard is the reference test assumed to be perfectly accurate; in practice, few truly exist.
  • Sensitivity = probability of testing positive given disease; specificity = probability of testing negative given no disease.
  • High Se is important for ruling out disease (SnNOut); high Sp is important for confirming disease (SpPIn).
  • Apparent prevalence differs from true prevalence due to test imperfections.
  • The Rogan-Gladen formula estimates true prevalence from apparent prevalence when Se and Sp are known.
Knowledge Check — Section 2

1. In a 2×2 table, the false negative fraction (FNF) is calculated as:

The false negative fraction is the proportion of truly diseased individuals that test negative. Since Se = a/(a+b), the FNF = b/(a+b) = 1 − Se.

2. If a test has Se = 90% and Sp = 95%, and the true prevalence is 10%, what is the apparent prevalence?

AP = P × Se + (1 − P) × (1 − Sp) = 0.10 × 0.90 + 0.90 × 0.05 = 0.09 + 0.045 = 0.135 or 13.5%.

3. A highly specific test is most useful for:

A highly specific test has few false positives, so a positive result strongly suggests the individual truly has the disease (SpPIn — Specificity, Positive result, Rules In).

✦ Pass the knowledge check with 100% and complete the reflection to continue

Section 3

Predictive Values

⏱ Estimated reading time: 12 minutes

Learning Objectives

  • Define predictive value positive (PV+) and predictive value negative (PV−).
  • Calculate PV+ and PV− from a 2×2 table and using Bayesian formulas.
  • Explain how prevalence affects predictive values.
  • Describe strategies for increasing the predictive value of a positive test.

What Are Predictive Values?

While Se and Sp are characteristics of the test, predictive values tell us how useful the test is for individuals of unknown disease status. Once we decide to use a test, we want to know the probability that the individual has or does not have the disease, given the test result.

Predictive Value Positive (PV+)

The PV+ is the probability that an individual who tests positive actually has the disease: p(D+|T+) = a / n1.

PV+ = p(D+) × Se / [p(D+) × Se + p(D−) × (1 − Sp)] Eq 5.8

In the norovirus example: PV+ = 71/74 = 95.9% (95% CI: 88.6%, 99.2%)

Predictive Value Negative (PV−)

The PV− is the probability that an individual who tests negative truly does not have the disease: p(D−|T−) = d / n0.

PV− = p(D−) × Sp / [p(D−) × Sp + p(D+) × (1 − Se)] Eq 5.9

In the norovirus example: PV− = 103/114 = 90.4% (95% CI: 83.4%, 95.1%)

Effect of Prevalence on Predictive Values

Predictive values depend heavily on the prevalence of disease in the population being tested. This is why PV+ and PV− are not good measures of a test’s intrinsic performance — they vary from population to population.

Dramatic Impact of Prevalence

Using Se = 86.6% and Sp = 97.2% from the norovirus example, observe how PV+ and PV− change as prevalence drops:

Prevalence (%)   PV+ (%)   PV− (%)
50               96.9      87.9
5                61.9      99.3
0.1              3.0       100.0

As you can see, when prevalence drops to 0.1%, the PV+ falls to just 3% — meaning 97% of positive results are false positives! Meanwhile, the PV− approaches 100%. This is a fundamental challenge in screening low-prevalence populations.
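The prevalence dependence can be reproduced by coding Eq 5.8 and Eq 5.9 and sweeping the prevalence while holding Se and Sp fixed:

```python
# Predictive values (Eq 5.8 / Eq 5.9) at the norovirus Se and Sp,
# evaluated across the prevalence levels shown in the table above.
def pv_pos(p, se, sp):
    return p * se / (p * se + (1 - p) * (1 - sp))        # Eq 5.8

def pv_neg(p, se, sp):
    return (1 - p) * sp / ((1 - p) * sp + p * (1 - se))  # Eq 5.9

se, sp = 0.866, 0.972
for prev in (0.50, 0.05, 0.001):
    print(f"P = {prev:6.1%}: PV+ = {pv_pos(prev, se, sp):.1%}, "
          f"PV- = {pv_neg(prev, se, sp):.1%}")
```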

Strategies to Increase PV+

  • Target high-risk groups: testing where prevalence is higher raises the pre-test probability of disease, and with it the PV+.
  • Increase specificity: a more specific test (or a higher cutpoint) produces fewer false positives among the non-diseased.
  • Use multiple tests: interpreting two tests in series, where both must be positive, increases the overall specificity and therefore the PV+.

Scenario: Universal HIV Screening

A country considers implementing universal HIV screening using a rapid test with Se = 99.5% and Sp = 99.8%. The national HIV prevalence is 0.3%.

PV+ = (0.003 × 0.995) / [(0.003 × 0.995) + (0.997 × 0.002)] = 0.002985 / (0.002985 + 0.001994) = 60.0%

Even with an excellent test (99.5% Se, 99.8% Sp), 40% of positive results in this low-prevalence population would be false positives. This is why confirmatory testing is essential!

Reflection

Consider a screening programme for a rare genetic condition affecting 1 in 10,000 newborns. The test has Se = 99% and Sp = 99.9%. Calculate the PV+ and discuss the implications of the result for clinical decision-making. What strategies would you recommend to improve the programme?


Key Takeaways

  • PV+ is the probability of disease given a positive test; PV− is the probability of no disease given a negative test.
  • Predictive values are driven by both test characteristics (Se, Sp) and the prevalence of disease.
  • In low-prevalence populations, even highly specific tests can produce mostly false positive results.
  • Strategies to increase PV+ include targeting high-risk groups, increasing Sp, and using multiple tests in series.
Knowledge Check — Section 3

1. As the prevalence of a disease decreases, what happens to PV+ (assuming Se and Sp stay constant)?

When prevalence decreases, there are proportionally more non-diseased individuals who can produce false positives, driving PV+ down. PV− tends to increase as prevalence drops.

2. PV+ is best described as:

PV+ = p(D+|T+), the probability that an individual who tests positive actually has the disease. This is distinct from sensitivity, which is p(T+|D+).

3. Which strategy would NOT help increase PV+?

Lowering the cutpoint increases sensitivity but decreases specificity, leading to more false positives and a lower PV+. The other strategies all help increase PV+.

✦ Pass the knowledge check with 100% and complete the reflection to continue

Section 4

Cutpoints, ROC Curves & Likelihood Ratios

⏱ Estimated reading time: 15 minutes

Learning Objectives

  • Explain the trade-off between sensitivity and specificity when choosing a cutpoint.
  • Describe receiver operating characteristic (ROC) curves and the area under the curve (AUC).
  • Define and calculate likelihood ratios for positive and negative test results.
  • Apply likelihood ratios to update pre-test probability to post-test probability.

Interpreting Continuous Test Results

Many tests produce results on a continuous or semi-quantitative scale (e.g., blood urea nitrogen levels, optical density values, enzyme activity). To classify individuals as positive or negative, we select a cutpoint (also called a cut-off or threshold) to determine what level indicates a positive test result.

The Overlap Problem

In reality, the distributions of test values for healthy and diseased individuals often overlap. Whatever cutpoint we choose will result in both false positive and false negative results. Raising the cutpoint increases Sp (fewer false positives) but decreases Se (more false negatives). Lowering the cutpoint has the opposite effect.

[Figure: overlapping Healthy and Diseased distributions along a Test Value axis; the cutpoint divides false negatives from false positives]

Figure 5.4 — Overlap between healthy and diseased distributions. Moving the cutpoint left or right trades off sensitivity for specificity.

Receiver Operating Characteristic (ROC) Curves

A ROC curve plots the Se (y-axis) against the false positive fraction (1 − Sp) (x-axis) computed at a number of different cutpoints. This graphical tool helps select the optimum cutpoint and evaluate overall test performance.

Interpreting the ROC Curve

The 45° diagonal line represents a test with no discriminating ability (no better than chance). The closer the ROC curve gets to the top-left corner, the better the test discriminates between D+ and D− individuals. The top-left corner represents a test with Se = 100% and Sp = 100%.

Choosing the Optimal Cutpoint

Assuming equal costs of false negative and false positive results, the optimal cutpoint occurs where Se + Sp is at a maximum, which corresponds to the point closest to the top-left corner (or farthest from the 45° line). However, if the costs are unequal, you might emphasise Se or Sp depending on the clinical context.

Parametric vs. Non-Parametric ROC Curves

A non-parametric ROC curve simply plots Se and (1 − Sp) using each observed test value as a cutpoint. A parametric ROC curve provides a smoothed estimate by assuming that the latent variables follow a specified distribution (usually binormal). Both approaches can generate 95% confidence intervals.

Area Under the Curve (AUC)

The AUC summarises the overall discriminatory ability of the test across all cutpoints. It can be interpreted as the probability that a randomly selected D+ individual has a greater test value than a randomly selected D− individual.

AUC Value      Interpretation
0.50           No discrimination (chance alone)
0.50 – 0.70    Poor discrimination
0.70 – 0.80    Acceptable discrimination
0.80 – 0.90    Excellent discrimination
> 0.90         Outstanding discrimination
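The probabilistic interpretation of the AUC suggests a direct nonparametric estimator: compare every D+ value with every D− value and count the fraction of pairs the D+ individual wins, with ties counting half. The test values below are invented for illustration.

```python
# Nonparametric AUC: the probability that a randomly chosen D+ individual
# has a higher test value than a randomly chosen D- individual.
# (Illustrative data; equivalent to the Mann-Whitney U statistic / n1*n0.)
diseased = [3.1, 4.0, 4.5, 5.2, 6.1]
healthy  = [1.2, 2.0, 2.8, 3.5, 4.0]

pairs = [(x, y) for x in diseased for y in healthy]
auc = sum(1.0 if x > y else 0.5 if x == y else 0.0 for x, y in pairs) / len(pairs)
print(f"AUC = {auc:.2f}")
```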

Likelihood Ratios

A likelihood ratio (LR) is the ratio of the probability of a given test result among D+ individuals to the probability of that same result among D− individuals. LRs combine information from both Se and Sp, and allow the determination of post-test odds from pre-test odds.

Likelihood Ratio for a Positive Test (LR+)

LR+ = Se / (1 − Sp) Eq 5.10

The LR+ equals the post-test odds of disease given a positive result divided by the pre-test odds. Higher LR+ values mean a positive test result is more informative for confirming disease.

Likelihood Ratio for a Negative Test (LR−)

LR− = (1 − Se) / Sp Eq 5.11

Lower LR− values mean a negative test result is more informative for ruling out disease. An LR− close to 0 is ideal.

Category-Specific LR

Instead of simply classifying results as positive or negative, researchers in diagnostic settings often calculate category-specific LRs based on the actual test value. This uses the actual result rather than just positive/negative, giving a more nuanced assessment.

LRcat = P(result category | D+) / P(result category | D−) Eq 5.12

From Pre-Test to Post-Test Probability

Likelihood ratios allow you to update your assessment of disease probability after receiving a test result:

Three-Step Process

  1. Convert pre-test probability to pre-test odds: odds = P / (1 − P)
  2. Multiply by the likelihood ratio: post-test odds = pre-test odds × LR
  3. Convert post-test odds back to probability: P = odds / (1 + odds)

Example: Pre-test probability = 2%, test result at a cutpoint where LRcat = 25.95.

  • Pre-test odds = 0.02/0.98 = 0.0204
  • Post-test odds = 0.0204 × 25.95 = 0.5294
  • Post-test probability = 0.5294 / (1 + 0.5294) = 35%

After obtaining the test result, the estimated probability of disease rises from 2% to 35%.
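The three-step process above translates directly into code, shown here with the worked example's numbers:

```python
# Pre-test to post-test probability via a likelihood ratio (three steps).
def post_test_probability(pre_p, lr):
    pre_odds = pre_p / (1 - pre_p)      # step 1: probability -> odds
    post_odds = pre_odds * lr           # step 2: multiply by the LR
    return post_odds / (1 + post_odds)  # step 3: odds -> probability

# Worked example above: pre-test probability 2%, LRcat = 25.95
p = post_test_probability(0.02, 25.95)
print(f"post-test probability = {p:.0%}")
```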

Reflection

A disease screening programme uses a test with Se = 92.7% and Sp = 77.4% at a particular cutpoint. Calculate LR+ for this cutpoint. If the pre-test probability of disease is 10%, what is the post-test probability after a positive result? Discuss whether this cutpoint is appropriate for a screening programme where false negatives are very costly.


Key Takeaways

  • The choice of cutpoint involves a trade-off between sensitivity and specificity.
  • ROC curves plot Se vs. (1 − Sp) across cutpoints; the AUC summarises overall test performance.
  • An AUC of 0.5 represents chance; values closer to 1.0 indicate better discrimination.
  • LR+ = Se/(1 − Sp); LR− = (1 − Se)/Sp. LRs combine both Se and Sp into a single metric.
  • LRs allow conversion of pre-test probability to post-test probability using a three-step odds-based calculation.
Knowledge Check — Section 4

1. A ROC curve that perfectly follows the 45° diagonal indicates:

The 45° diagonal represents a test that performs no better than random chance (AUC = 0.5). A good test produces a ROC curve that bows toward the top-left corner.

2. If a test has Se = 90% and Sp = 80%, what is LR+?

LR+ = Se / (1 − Sp) = 0.90 / (1 − 0.80) = 0.90 / 0.20 = 4.5. This means a positive test result is 4.5 times more likely in a diseased individual than in a non-diseased individual.

3. Raising the cutpoint for a continuous test will generally:

Raising the cutpoint means fewer individuals test positive. This reduces false positives (increasing Sp) but increases false negatives (decreasing Se).

✦ Pass the knowledge check with 100% and complete the reflection to continue

Section 5

Final Review & Assessment

⏱ Estimated time: 20 minutes

Lesson Summary

In this lesson, you explored the fundamental concepts underlying the evaluation and interpretation of screening and diagnostic tests. Let’s review the key themes from each section:

Section 1: Introduction & Test Attributes

Tests are any device or procedure for detecting or quantifying a substance or response. Screening tests are applied to healthy populations; diagnostic tests are used to confirm disease. Accuracy refers to closeness to the true value, while precision refers to consistency. Agreement between tests is measured by Cohen’s kappa, with weighted kappa extending to ordinal scales.

Section 2: Sensitivity & Specificity

Sensitivity is the probability of testing positive given disease (Se = a/m1); specificity is the probability of testing negative given no disease (Sp = d/m0). High Se is important for ruling out disease (SnNOut); high Sp for confirming disease (SpPIn). Apparent prevalence differs from true prevalence, and the Rogan-Gladen formula can be used to estimate true prevalence from AP, Se, and Sp.

Section 3: Predictive Values

PV+ is the probability of disease given a positive test; PV− is the probability of no disease given a negative test. Predictive values depend heavily on prevalence: in low-prevalence populations, even excellent tests produce many false positives. Strategies to increase PV+ include targeting high-risk groups, using more specific tests, and testing in series.

Section 4: Cutpoints, ROC Curves & Likelihood Ratios

Choosing a cutpoint involves a trade-off between Se and Sp. ROC curves plot Se vs. (1 − Sp) across cutpoints; the AUC summarises overall test performance. Likelihood ratios combine Se and Sp into a single metric that updates pre-test probability to post-test probability using a three-step odds-based calculation.

Reflection

You are advising a public health agency that wants to implement a two-stage screening programme for a disease with a population prevalence of 2%. The first-stage test has Se = 95% and Sp = 90%, and the second-stage (confirmatory) test has Se = 85% and Sp = 99%. Discuss how using these tests in series would affect the overall Se, Sp, and PV+ compared to using just the first test alone. What are the practical implications of this approach?


Final Assessment

Complete all 15 questions below with 100% accuracy to finish this lesson. You must also complete the reflection above before submitting.

Final Assessment — Screening and Diagnostic Tests

1. The analytic sensitivity of a test refers to:

Analytic sensitivity refers to the lowest concentration the test can detect. This is distinct from diagnostic (epidemiologic) sensitivity, which is the proportion of truly diseased individuals testing positive.

2. A kappa value of 0.55 between two diagnostic tests indicates:

According to the Landis and Koch interpretation scale, a kappa of 0.41–0.60 indicates moderate agreement.

3. In a 2×2 table for test evaluation, cell “c” represents:

In the standard 2×2 table, cell c represents false positives: individuals who do not have the disease (D−) but test positive (T+).

4. If Se = 80% and Sp = 95%, what is the false positive fraction (FPF)?

FPF = 1 − Sp = 1 − 0.95 = 0.05 or 5%. The false positive fraction depends only on specificity.

5. The Rogan-Gladen formula is used to:

The Rogan-Gladen formula estimates true prevalence: P = (AP + Sp − 1) / (Se + Sp − 1), correcting for test imperfections.

6. A screening programme tests 10,000 people for a disease with 1% prevalence using a test with Se = 99% and Sp = 95%. How many false positives would you expect?

Non-diseased individuals = 10,000 × 0.99 = 9,900. False positives = 9,900 × (1 − 0.95) = 9,900 × 0.05 = 495.

7. PV+ depends on which of the following?

PV+ is determined by the formula: PV+ = (P × Se) / [P × Se + (1 − P) × (1 − Sp)]. It depends on all three: Se, Sp, and prevalence.

8. In the context of ROC curves, the area under the curve (AUC) of 0.85 indicates:

An AUC of 0.80–0.90 is generally interpreted as excellent discrimination between diseased and non-diseased individuals.

9. LR+ = Se / (1 − Sp). If a test has Se = 95% and Sp = 90%, what is LR+?

LR+ = 0.95 / (1 − 0.90) = 0.95 / 0.10 = 9.5. A positive result is 9.5 times more likely in a diseased individual.

10. The mnemonic “SnNOut” means:

SnNOut stands for Sensitivity, Negative result, Rules Out. If a highly sensitive test is negative, the individual is very unlikely to have the disease because the test catches almost all true cases.

11. A Bland-Altman plot is used to:

A Bland-Altman (limits of agreement) plot displays the differences between paired measurements against their mean, revealing systematic bias and whether disagreement varies with measurement magnitude.

12. Using tests in series (sequential testing) will generally:

Testing in series requires both tests to be positive to classify as positive. This reduces false positives (increasing Sp and PV+) but may miss some true positives (decreasing overall Se).

13. McNemar’s χ² test is used before evaluating kappa to:

McNemar’s test checks for systematic bias between two tests. If one test produces significantly more positive results than the other, the detailed assessment of agreement could be misleading.

14. To convert pre-test probability to post-test probability using a likelihood ratio, the correct sequence is:

The three-step process is: (1) convert pre-test probability to pre-test odds, (2) multiply pre-test odds by the LR to get post-test odds, (3) convert post-test odds back to post-test probability.

15. Which factor does NOT directly affect the predictive value of a test?

Predictive values are determined by Se, Sp, and prevalence. The coefficient of variation (CV) is a measure of test precision/reproducibility and does not directly enter the PV formula.

✦ Complete the final reflection above before submitting