HSCI 410 — Lesson 6

Modelling Count and Rate Data

Exploratory Data Analysis For Epidemiology

Kiffer G. Card, PhD, Faculty of Health Sciences, Simon Fraser University

Learning objectives for this lesson:

  • Distinguish among simple counts, rates with person-time denominators, population rates, and area-based counts
  • Describe the Poisson distribution and its mean=variance property
  • Specify and interpret a Poisson regression model including the offset term
  • Interpret incidence rate ratios (IRR) from exponentiated Poisson coefficients
  • Evaluate Poisson models using Pearson, deviance, and Anscombe residuals
  • Distinguish apparent from real overdispersion and apply appropriate corrections
  • Compare negative binomial regression models (NB-1, NB-2) to Poisson regression
  • Apply zero-inflated, hurdle, and zero-truncated models to handle excess zeros

This course was developed by Kiffer G. Card, PhD, as a companion to Dohoo, I. R., Martin, S. W., & Stryhn, H. (2012). Methods in Epidemiologic Research. VER Inc.

Section 1

Introduction & The Poisson Distribution

⏱ Estimated time: 15 minutes

Why Model Count and Rate Data?

Many outcomes in epidemiology are measured as counts—the number of disease cases in a region, the number of doctor visits per year, or the number of parasites on a host animal. These outcomes differ fundamentally from continuous outcomes (modelled with linear regression) and binary outcomes (modelled with logistic regression). Count data require their own family of statistical models because they are discrete, non-negative, and often right-skewed.

Chapter 18 introduces the statistical tools for modelling count and rate data, beginning with the Poisson distribution and Poisson regression, and extending to negative binomial and zero-adjusted models for situations where the basic Poisson assumptions are violated.

Types of Count and Rate Data

Before selecting an analytical approach, it is essential to understand which type of count or rate data you are working with. There are four main types encountered in epidemiological research:

  • 🔢 Simple Counts
  • Rates (Person-Time)
  • 🌎 Population Rates
  • 🗺 Area-Based Counts

The Poisson Distribution

The Poisson distribution is the foundational probability distribution for modelling count data. It describes the probability of observing a given number of events in a fixed interval of time or space, assuming events occur independently at a constant average rate.

Poisson Probability Function (Eq 18.1)
p(Y = y) = (μ^y × e^(−μ)) / y!

In this formula, Y is the count of events, μ (mu) is the mean (i.e., the expected number of events), and e is the base of the natural logarithm (≈ 2.718). The factorial in the denominator (y!) ensures the probabilities sum to one. The Poisson distribution is defined for non-negative integers: y = 0, 1, 2, 3, …
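Eq 18.1 translates directly into code. The sketch below (plain Python, with an illustrative μ) computes the probability function and checks numerically that the probabilities sum to one and that the mean and variance are both equal to μ:

```python
from math import exp, factorial

def poisson_pmf(y, mu):
    """P(Y = y) for a Poisson distribution with mean mu (Eq 18.1)."""
    return mu ** y * exp(-mu) / factorial(y)

mu = 2.5                     # illustrative mean
support = range(100)         # probability beyond this range is negligible
probs = [poisson_pmf(y, mu) for y in support]

total = sum(probs)                                              # ~1 (normalisation)
mean = sum(y * p for y, p in zip(support, probs))               # ~mu
var = sum((y - mean) ** 2 * p for y, p in zip(support, probs))  # ~mu as well
```

Truncating the support at 100 is harmless here because the upper tail of a Poisson with μ = 2.5 is vanishingly small.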

Key Property: Mean = Variance

The most important property of the Poisson distribution is that the mean equals the variance: E(Y) = Var(Y) = μ. This single-parameter property means that as the expected count increases, so does the variability. This assumption is central to Poisson regression—and when it is violated (variance > mean), we have overdispersion, which requires alternative approaches covered in later sections.

The Poisson distribution is particularly useful for modelling rare events in large populations. When the probability of an event is small and the number of trials (or opportunities) is large, the Poisson distribution provides an excellent approximation. Examples include the number of rare disease cases in a large population, or the number of equipment failures over an extended operating period.
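The rare-event approximation can be checked numerically: when n is large and p is small, binomial probabilities are close to Poisson probabilities with μ = np. A minimal sketch (the n and p values are purely illustrative):

```python
from math import comb, exp, factorial

def binom_pmf(y, n, p):
    """Exact binomial probability of y events in n trials."""
    return comb(n, y) * p ** y * (1 - p) ** (n - y)

def poisson_pmf(y, mu):
    """Poisson probability of y events with mean mu (Eq 18.1)."""
    return mu ** y * exp(-mu) / factorial(y)

n, p = 10_000, 0.0003   # large population, rare event
mu = n * p              # expected count = 3

# The two pmfs agree closely across the realistic range of counts
max_diff = max(abs(binom_pmf(y, n, p) - poisson_pmf(y, mu)) for y in range(15))
```

With these values the largest absolute difference between the two probability functions is under 0.001, which is why the Poisson is routinely used for rare disease counts in large populations.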

When is the Poisson distribution appropriate?

The Poisson distribution is appropriate when: (1) events are independent of one another; (2) the rate at which events occur is constant over the observation period; (3) two events cannot occur at exactly the same instant; and (4) the probability of an event in a short interval is proportional to the length of the interval. In practice, these assumptions are often approximately met in epidemiological settings.

Shape of the Poisson distribution

When μ is small (e.g., μ < 3), the distribution is noticeably right-skewed—most observations cluster near zero with a long right tail. As μ increases, the distribution becomes more symmetric and begins to resemble a normal distribution. By the time μ ≥ 20, a normal approximation with mean μ and variance μ is often adequate.

Section 1 Knowledge Check

1. Which type of count data uses person-time in the denominator?

Rates use accumulated person-time at risk as the denominator. This is essential when subjects have varying amounts of follow-up time in a study.

2. What is the key property of the Poisson distribution?

The Poisson distribution has the property that E(Y) = Var(Y) = μ. This single-parameter property is central to Poisson-based models.

3. The Poisson distribution is most appropriate for modelling:

The Poisson distribution models counts, particularly when events are rare relative to the population at risk. It is defined for non-negative integers and is well-suited for rare event data.

Reflection

How might the type of count data (simple counts vs. rates) influence your choice of analytical approach in an epidemiological study you're familiar with?

Section 2

Poisson Regression Model & Interpretation

⏱ Estimated time: 20 minutes

The Expected Count

The starting point for Poisson regression is the relationship between the expected number of events and the underlying rate. If an individual (or group) is observed for n units of person-time and the event rate is λ (lambda), the expected count is:

Expected Count (Eq 18.2)
E(Y) = nλ

Here, n represents the person-time at risk (e.g., person-years of follow-up) and λ is the incidence rate. The expected count is simply the product of the time at risk and the rate at which events occur. Different subjects may contribute different amounts of person-time, which must be accounted for in the model.

The Log-Linear Model

Poisson regression uses a log link function to relate the expected count to a linear combination of predictors. Taking the natural logarithm of both sides of the expected count equation and incorporating predictors gives us the Poisson regression model:

Poisson Regression Model (Eq 18.3)
ln(E(Y)) = ln(n) + β0 + β1X1 + β2X2 + …

The term ln(n) is the offset—a fixed term in the model that is not estimated but rather included to account for the fact that different observations may have different amounts of exposure (person-time). The β coefficients describe how the log of the expected count (or rate) changes with the predictors.

The Offset Term

The offset is one of the most important concepts in Poisson regression. It transforms the model from one that predicts counts to one that effectively predicts rates.

Modelling Counts (Without Offset)

When no offset is included, the model predicts the expected count directly:

ln(E(Y)) = β0 + β1X1 + …

This is appropriate when all observations have the same amount of exposure or follow-up time. For instance, if all herds are observed for exactly one year, the count of disease cases directly reflects the rate. In practice, this situation is relatively uncommon—most epidemiological studies have subjects with varying follow-up times.

Modelling Rates (With Offset)

When the offset ln(n) is included, the model effectively predicts the rate rather than the raw count:

ln(E(Y)) = ln(n) + β0 + β1X1 + …

This is equivalent to modelling ln(E(Y)/n) = β0 + β1X1 + …, where E(Y)/n is the expected rate. The offset accounts for the fact that subjects with longer follow-up times are expected to accumulate more events simply by virtue of being observed longer. This is the standard approach when follow-up times vary across subjects.
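The offset mechanics can be made concrete with a hand-rolled fit. The sketch below (stdlib Python, toy data with illustrative counts, person-times, and one binary predictor) maximises the Poisson likelihood for ln(E(Y)) = ln(t) + β0 + β1X1 by Newton–Raphson; note that the offset ln(t) enters each fitted mean with a fixed coefficient of 1 and is never estimated:

```python
from math import exp, log

# Toy data (illustrative): y = event counts, t = person-time at risk,
# x = a single binary predictor (e.g., exposed vs. unexposed)
y = [2, 5, 1, 8, 4, 12, 3, 9]
t = [1.0, 2.5, 0.8, 3.0, 1.5, 4.0, 1.2, 3.5]
x = [0, 1, 0, 1, 0, 1, 0, 1]

b0 = b1 = 0.0
for _ in range(50):  # Newton-Raphson iterations
    mu = [exp(log(ti) + b0 + b1 * xi) for ti, xi in zip(t, x)]
    u0 = sum(yi - mi for yi, mi in zip(y, mu))                # score for b0
    u1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))  # score for b1
    i00 = sum(mu)                                             # information matrix
    i01 = sum(mi * xi for mi, xi in zip(mu, x))
    i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
    det = i00 * i11 - i01 * i01
    b0 += (i11 * u0 - i01 * u1) / det   # solve the 2x2 Newton step
    b1 += (i00 * u1 - i01 * u0) / det

irr = exp(b1)  # incidence rate ratio for x
```

With a single binary predictor the model is saturated, so the fitted IRR equals the ratio of the two groups' crude rates, (34/13) / (10/4.5) ≈ 1.18. That equality is a useful sanity check for any Poisson-with-offset fit.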

Interpreting Poisson Regression Coefficients

In Poisson regression, the exponentiated coefficient eβ is interpreted as an incidence rate ratio (IRR). This is analogous to the odds ratio in logistic regression but applies to rates rather than odds.

Incidence Rate Ratio (IRR)

For a one-unit increase in the predictor X1, the incidence rate is multiplied by e^β1. If β1 = 0.30, then IRR = e^0.30 ≈ 1.35, meaning the rate increases by about 35% for each one-unit increase in X1. An IRR > 1 indicates an increased rate; an IRR < 1 indicates a decreased rate; and an IRR = 1 indicates no association.

Epidemiological Example: Mastitis in Dairy Herds

Suppose we model the number of mastitis cases per herd over one year, with herd size as a predictor and cow-years at risk as the offset. The Poisson regression yields βherd size = 0.012.

Interpretation: e^0.012 ≈ 1.012, so for each additional cow in the herd, the incidence rate of mastitis increases by about 1.2%. A herd with 100 more cows would have an expected rate ratio of e^(0.012×100) = e^1.2 ≈ 3.32 compared to the baseline, i.e., a 3.32-fold higher mastitis rate.
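The arithmetic in this example is just exponentiation of the coefficient times the contrast of interest:

```python
from math import exp

beta = 0.012                   # fitted coefficient for herd size (per extra cow)
irr_per_cow = exp(beta)        # rate ratio per additional cow, ~1.012
irr_per_100 = exp(beta * 100)  # rate ratio for a 100-cow difference, ~3.32
```

The same pattern works for any contrast: multiply the coefficient by the difference in the predictor before exponentiating.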

Poisson Regression for Relative Risk Estimation

An important application of Poisson regression is estimating relative risks (RR) directly from binary outcome data. When the outcome is rare, the Poisson model can provide estimates of the RR that are more interpretable than the odds ratios from logistic regression. This approach typically uses robust (sandwich) standard errors to account for the fact that binary data do not truly follow a Poisson distribution.

Section 2 Knowledge Check

1. What is the purpose of the offset term in Poisson regression?

The offset (log of person-time) accounts for the fact that subjects may have different amounts of follow-up or exposure time. It transforms the model from predicting counts to effectively predicting rates.

2. The exponentiated Poisson regression coefficient (eβ) is interpreted as:

eβ from Poisson regression represents the incidence rate ratio—the multiplicative change in the rate for a one-unit change in the predictor.

3. Using Poisson regression to estimate relative risks from binary data is appropriate when:

When the outcome is rare, the Poisson model can be used to estimate relative risks directly, which are more interpretable than odds ratios from logistic regression.

Reflection

Consider a study where participants have very different follow-up times. How would using an offset term change your interpretation compared to simply modelling raw counts?

Section 3

Evaluating Poisson Models & Overdispersion

⏱ Estimated time: 20 minutes

Residuals for Poisson Models

Just as in linear regression, residuals are the primary tool for evaluating how well a Poisson model fits the observed data. However, because the variance of a Poisson variable depends on its mean, raw residuals (observed − expected) are not directly comparable across observations. Several types of standardised residuals have been developed:

Pearson Residuals

Pearson residuals standardise the raw residual by dividing by the square root of the expected value:

Pearson Residual (Eq 18.6)
rP = (y − μ̂) / √μ̂

This accounts for the Poisson assumption that Var(Y) = μ. If the model fits well, Pearson residuals should have approximately mean 0 and variance 1. The sum of squared Pearson residuals follows an approximate χ² distribution and can be used as an overall goodness-of-fit test.

Deviance Residuals

Deviance residuals are based on the contribution of each observation to the overall model deviance (the log-likelihood ratio comparing the fitted model to a saturated model). They are defined as:

di = sign(yi − μ̂i) × √[2(yi ln(yi/μ̂i) − (yi − μ̂i))]

Deviance residuals tend to be more normally distributed than Pearson residuals, especially when some expected counts are small. This makes them preferable for normal probability plots and other diagnostic displays.

Anscombe Residuals

Anscombe residuals use a transformation of the observed counts designed to make the residuals as close to normally distributed as possible. For Poisson data, they apply a two-thirds power transformation to both the observed and expected values. Anscombe residuals are particularly useful when checking the normality assumption of residuals in Poisson models, and they complement Pearson and deviance residuals in a thorough model evaluation.

Goodness of Fit

The overall fit of a Poisson model can be assessed using the sum of squared Pearson residuals, which approximately follows a χ² distribution with (n − p) degrees of freedom, where n is the number of observations and p is the number of estimated parameters. A significant test statistic suggests the model does not fit the data adequately.

An important diagnostic is the dispersion parameter, estimated as the sum of squared Pearson residuals divided by the residual degrees of freedom:

Dispersion Parameter Estimate
φ̂ = ΣrP² / (n − p)

Under the Poisson assumption (mean = variance), φ should equal 1. Values substantially greater than 1 indicate overdispersion; values less than 1 indicate underdispersion.
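These residual and dispersion formulas are straightforward to implement. The sketch below uses made-up observed counts and fitted means (any real application would take μ̂ from a fitted model) and computes Pearson residuals, deviance residuals, and the dispersion estimate φ̂:

```python
from math import log, sqrt

def pearson_resid(y, mu):
    """Pearson residual: raw residual scaled by the Poisson SD (Eq 18.6)."""
    return (y - mu) / sqrt(mu)

def deviance_resid(y, mu):
    """Deviance residual; y*ln(y/mu) is taken as 0 when y == 0 (its limit)."""
    term = y * log(y / mu) if y > 0 else 0.0
    d2 = 2.0 * (term - (y - mu))
    return (1.0 if y >= mu else -1.0) * sqrt(max(d2, 0.0))

# Hypothetical observed counts and fitted means from some Poisson model
obs = [0, 6, 1, 9, 2, 14, 4, 1]
fit = [1.2, 2.8, 1.5, 5.0, 2.2, 7.5, 3.1, 1.8]

rp = [pearson_resid(y, m) for y, m in zip(obs, fit)]
n_params = 2                                   # e.g., intercept + one predictor
phi_hat = sum(r * r for r in rp) / (len(obs) - n_params)
# phi_hat comes out around 2.4 here, well above 1: evidence of overdispersion
```

In practice you would also look at the individual residuals, not just φ̂, since one or two outliers can drive the dispersion statistic on their own.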

Understanding Overdispersion

Overdispersion—when the observed variance exceeds the Poisson-assumed variance—is one of the most common problems in count data modelling. It is critical to distinguish between two types:

Warning: Interpreting Overdispersion

Before concluding that overdispersion is “real,” always investigate whether the model is correctly specified. Adding missing predictors, removing outliers, or modelling non-linear effects may resolve apparent overdispersion without needing to change the distributional assumptions. Applying overdispersion corrections to a misspecified model can mask important features of the data.

Apparent Overdispersion

Apparent overdispersion arises from problems with the model rather than the data-generating process itself. Common causes include:

  • Outliers: A few extreme observations can inflate the dispersion statistic dramatically.
  • Missing important predictors: If key covariates are omitted from the model, the unexplained variation appears as overdispersion.
  • Wrong model form: Using a linear predictor when the true relationship is non-linear.
  • Non-linear effects: Failing to include quadratic or other polynomial terms for predictors with curvilinear relationships.

Apparent overdispersion can be resolved by correcting the model specification—removing outliers, adding missing predictors, or using the correct functional form.

Real Overdispersion

Real overdispersion reflects genuine extra-Poisson variation in the data that cannot be explained by observable covariates. This often arises from:

  • Unobserved heterogeneity: Subject-level variation in the underlying rate that is not captured by measured predictors.
  • Clustering: Events within groups (e.g., animals within herds) are correlated, violating the independence assumption.
  • Biological variability: Inherent variation in susceptibility or exposure that exceeds what the Poisson model allows.

Real overdispersion requires statistical corrections such as scaling standard errors, using negative binomial regression, or employing random effects models.

Approaches to Handling Overdispersion

Approach | How it works | When to use
Scale SEs by √φ | Multiplies standard errors by the square root of the estimated dispersion parameter; coefficients unchanged | Mild to moderate overdispersion; quick fix when coefficient estimates are trusted
Negative binomial regression | Adds an extra parameter (α) to model the excess variance explicitly | Moderate to severe overdispersion; when a more principled model is desired
Random effects / GLMM | Includes subject- or group-level random intercepts to capture unobserved heterogeneity | Clustered data (e.g., animals within herds); hierarchical study designs
GEE (robust SEs) | Uses generalised estimating equations with an empirical (sandwich) variance estimator | Clustered data when marginal (population-averaged) estimates are of primary interest

Section 3 Knowledge Check

1. In a Poisson model, overdispersion is indicated when:

Overdispersion occurs when the observed variance exceeds the Poisson-assumed variance (mean), giving a dispersion parameter φ > 1.

2. Which of the following is NOT a cause of apparent overdispersion?

Inherent biological variation causes real (not apparent) overdispersion. Apparent overdispersion is caused by model misspecification issues such as outliers, missing predictors, or incorrect functional form.

3. One approach to handling real overdispersion is:

Negative binomial regression explicitly models the extra-Poisson variation by adding a parameter that allows the variance to exceed the mean. Removing outliers and adding predictors address apparent overdispersion instead.

Reflection

Why is it important to distinguish between apparent and real overdispersion before choosing a correction strategy? What could go wrong if you apply the wrong fix?

Section 4

Negative Binomial & Zero-Adjusted Models

⏱ Estimated time: 20 minutes

The Negative Binomial Distribution

The negative binomial (NB) distribution extends the Poisson by adding an extra parameter α that captures the additional variation not accounted for by the Poisson assumption. Conceptually, the NB distribution arises when the Poisson rate itself varies randomly across individuals—each subject has their own λ, drawn from a Gamma distribution.

The NB distribution allows the variance to exceed the mean, making it the natural first choice when overdispersion is present. Two common parameterisations define how the variance relates to the mean:

NB-1: Linear Variance

NB-1 Variance Function (Eq 18.8)
Var(Y) = μ + αμ = μ(1 + α)

In the NB-1 parameterisation, the variance increases linearly with the mean. The overdispersion is proportional to the mean: doubling the expected count doubles the excess variance. The ratio Var(Y)/μ = (1 + α) is constant across all observations, making NB-1 similar to a quasi-Poisson model with a fixed dispersion parameter.

NB-1 is sometimes preferred when overdispersion is relatively constant across the range of predicted values. However, it is less commonly used in practice than NB-2.

NB-2: Quadratic Variance

NB-2 Variance Function (Eq 18.9)
Var(Y) = μ + αμ²

In the NB-2 parameterisation (the most commonly used form), the variance increases quadratically with the mean. Observations with higher expected counts have proportionally more overdispersion. This is often more realistic in biological settings where variability tends to grow faster than the average.

The NB-2 model is the default in most statistical software (e.g., Stata’s nbreg, R’s glm.nb()). When α = 0, the NB-2 model reduces to the Poisson model, making the Poisson a special (nested) case of NB-2.
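The Gamma-mixing story behind the NB distribution can be demonstrated by simulation. The sketch below (stdlib Python, illustrative μ and α, fixed seed) draws each subject's rate from a Gamma distribution with mean μ and variance αμ², then draws a Poisson count given that rate; the sample variance of the resulting counts lands near μ + αμ² rather than μ:

```python
import random
from math import exp

def rpois(mu, rng):
    """Poisson sampler (Knuth's multiplication method); fine for modest mu."""
    limit, k, p = exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
mu, alpha = 4.0, 0.5   # illustrative mean and NB-2 dispersion parameter
N = 20_000

# Each subject's rate ~ Gamma(shape = 1/alpha, scale = alpha*mu), which has
# mean mu and variance alpha*mu^2; the resulting counts are NB-2 with
# Var(Y) = mu + alpha*mu^2 = 12 for these values
counts = [rpois(rng.gammavariate(1 / alpha, alpha * mu), rng) for _ in range(N)]

m = sum(counts) / N
v = sum((c - m) ** 2 for c in counts) / (N - 1)
```

Setting α = 0 (a degenerate Gamma at μ) collapses the simulation back to a plain Poisson with variance μ, mirroring the nesting described above.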

Negative Binomial Regression

The NB regression model uses the same log-linear form as Poisson regression—the only difference is in the assumed distribution of the outcome:

NB Regression Model
ln(E(Y)) = ln(n) + β0 + β1X1 + β2X2 + …

Coefficients are interpreted identically to Poisson regression: eβ gives the incidence rate ratio. The key advantage is that the NB model produces correct standard errors even when overdispersion is present, because the extra variation is explicitly modelled through α.

Testing Poisson vs. Negative Binomial

Since the Poisson model is nested within the NB model (when α = 0), a likelihood ratio test (LRT) can be used to determine whether the NB model provides a significantly better fit. A significant LRT indicates that overdispersion is present and the NB model is preferred. Note that this is a boundary test (testing α = 0 vs. α > 0), so the p-value from the standard χ² reference distribution is conservative.
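Because the standard χ²(1) p-value is conservative at the boundary, a common correction is to halve it (the reference distribution is a 50:50 mixture of a point mass at zero and χ²(1)). A stdlib sketch, using the identity that the χ²(1) survival function at x equals erfc(√(x/2)); the log-likelihood values are hypothetical:

```python
from math import erfc, sqrt

def lrt_boundary_pvalue(ll_poisson, ll_negbin):
    """P-value for H0: alpha = 0 vs H1: alpha > 0 (a boundary test).
    The reference distribution is a 50:50 mixture of a point mass at zero
    and chi-square(1), i.e., half the usual chi-square(1) p-value."""
    lr = 2.0 * (ll_negbin - ll_poisson)
    chi2_sf_1df = erfc(sqrt(lr / 2.0))  # survival function of chi-square(1)
    return 0.5 * chi2_sf_1df

# Hypothetical log-likelihoods from fitted Poisson and NB models
p_value = lrt_boundary_pvalue(-512.4, -498.7)
```

When the two log-likelihoods are equal (LR = 0), the function returns 0.5, the mixture's probability mass at the boundary.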

Zero-Adjusted Models

Standard count models (Poisson and NB) may not adequately handle datasets with an unusual number of zeros. Three families of models have been developed to address different zero-related problems:

  • 🔠 Zero-Inflated Models
  • 🏃 Hurdle Models
  • Zero-Truncated Models

Choosing Among Zero-Adjusted Models

The choice depends on the data-generating process:

  • If some zeros are “structural” (from a fundamentally different process) and others arise from the count process, use a zero-inflated model.
  • If the zero/non-zero distinction is a separate decision from the magnitude of the count, use a hurdle model.
  • If zeros are impossible by design, use a zero-truncated model.
Model | Source of zeros | Key feature | Test / comparison
Zero-inflated | Both components (structural + count) | Mixture of logistic + count model | Vuong test vs. standard model
Hurdle | Binary component only | Two-part: binary, then truncated count | LRT or AIC/BIC comparison
Zero-truncated | Zeros cannot occur | Conditional on Y > 0 | Applied when sampling excludes zeros
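To see why excess zeros matter, the following sketch (stdlib Python, illustrative π and μ, fixed seed) simulates a zero-inflated Poisson process and compares the observed zero fraction with what a plain Poisson fitted to the same mean would predict:

```python
import random
from math import exp

def rpois(mu, rng):
    """Poisson sampler (Knuth's multiplication method)."""
    limit, k, p = exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(7)
pi, mu = 0.30, 2.5   # illustrative structural-zero probability and Poisson mean
N = 20_000

# ZIP process: with probability pi the count is a structural zero;
# otherwise it comes from an ordinary Poisson(mu) count process
counts = [0 if rng.random() < pi else rpois(mu, rng) for _ in range(N)]

obs_zero_frac = sum(c == 0 for c in counts) / N
zip_zero_prob = pi + (1 - pi) * exp(-mu)    # ZIP's predicted P(Y = 0), ~0.36
plain_poisson_zero = exp(-sum(counts) / N)  # plain Poisson's P(Y = 0), ~0.17
```

The observed zero fraction sits far above the plain-Poisson prediction but matches the ZIP formula π + (1 − π)e^(−μ), which is exactly the mismatch that zero-inflated models are built to capture.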

Section 4 Knowledge Check

1. The NB-2 model differs from the Poisson model by:

The NB-2 model adds an αμ² term to the variance, allowing the variance to increase quadratically with the mean—this is the most commonly used negative binomial model.

2. Zero-inflated models are appropriate when:

Zero-inflated models handle situations where some zeros come from a separate process (structural zeros) in addition to zeros from the count process, resulting in more zeros than the standard count model predicts.

3. The key difference between a hurdle model and a zero-inflated model is:

In a hurdle model, ALL zeros come from the binary (logistic) part; once the hurdle is crossed, only positive counts are modelled. In zero-inflated models, zeros can come from both components.

Reflection

When might you choose a hurdle model over a zero-inflated model in practice? Think of an epidemiological example where the distinction matters.

Final Assessment

Lesson 6 — Comprehensive Assessment

⏱ Estimated time: 25 minutes

This final assessment covers all material from this lesson. You must answer all 15 questions correctly (100%) and complete the final reflection to finish the lesson.

Final Reflection

Reflecting on the full range of count and rate models covered in this lesson, how would you decide which model to use for a new dataset? What diagnostic steps would you follow?


Final Assessment (15 Questions)

1. Which type of count data involves dividing event counts by accumulated person-time?

Incidence rates use person-time at risk as the denominator, dividing the number of new events by the accumulated person-time of follow-up.

2. The Poisson distribution assumes:

The fundamental property of the Poisson distribution is that E(Y) = Var(Y) = μ. This single-parameter property is what distinguishes it from the negative binomial.

3. In Poisson regression, the offset term represents:

The offset is ln(person-time) which accounts for different exposure times across observations, allowing the model to estimate rates rather than raw counts.

4. The exponentiated coefficient from a Poisson regression (eβ) is interpreted as:

eβ gives the incidence rate ratio—the multiplicative change in the event rate per unit change in the predictor.

5. Pearson residuals for Poisson regression are calculated as:

Pearson residuals standardize the difference between observed and expected by dividing by √μ̂, which accounts for the Poisson variance assumption.

6. A dispersion parameter of 3.5 in a Poisson model suggests:

A dispersion parameter substantially greater than 1 indicates overdispersion—the data have more variability than the Poisson model assumes. A value of 3.5 is well above the expected value of 1.

7. Which is a cause of APPARENT (not real) overdispersion?

Missing important predictors is a model specification issue that creates apparent overdispersion, which can be resolved by correcting the model rather than changing distributional assumptions.

8. Scaling standard errors by √φ addresses overdispersion by:

Scaling SEs by √φ makes confidence intervals wider and P-values more conservative, but does not change point estimates of the coefficients.

9. The NB-2 model specifies the variance function as:

The NB-2 model has variance μ + αμ², which is quadratic in the mean and is the most commonly used form of negative binomial regression.

10. To test whether negative binomial regression provides a better fit than Poisson, you use:

Since Poisson is nested within NB (when α = 0), a likelihood ratio test comparing the two models tests whether the extra parameter is needed.

11. A zero-inflated Poisson model combines:

ZIP models have two components: a logistic model determining whether an observation is from the “always zero” group, and a Poisson model for the count process.

12. The Vuong test is used to:

The Vuong test is specifically designed to compare non-nested models, particularly zero-inflated models versus their standard counterparts.

13. In a hurdle model, zero counts are generated by:

In a hurdle model, ALL zeros come from the binary component. Once the “hurdle” is crossed (event > 0), only positive counts are modelled by the truncated count component.

14. Zero-truncated models are appropriate when:

Zero-truncated models are used when the sampling design excludes zeros—for example, hospital length of stay (minimum 1 day) or number of items purchased by customers.

15. Poisson regression can be used to estimate relative risks directly from binary outcome data because:

When outcomes are rare, Poisson regression with robust standard errors provides valid estimates of the relative risk, which is often more interpretable than odds ratios from logistic regression.

Lesson 6 Complete!

Congratulations! You have successfully completed the Modelling Count and Rate Data module.