The paper is published in Nature Communications.
This research was led by Korryn Bodner, PhD and co-authored by Linwei Wang, Rafal Kustra, Jeffrey C. Kwong, Beate Sander, Hind Sbihi, Michael A. Irvine and Sharmistha Mishra. The study was supported by funding from CIHR, NSERC, and a Catalyst Grant from the University of Toronto Data Science Institute.

Figure 1: This figure shows how the size (i.e., magnitude) of the bias in vaccine effectiveness estimates changes with the testing scenario and with how well the vaccine protects against infection (vaccine efficacy against susceptibility). Estimates are shown for two study designs: the cohort design (VERR; Panel a) and the test-negative design (VEOR; Panel c). Panel b shows the total number of symptomatic cases in the outbreak, which also depends on how well the vaccine protects against infection.
Summary
Why did we conduct this study?
Accurate measurements of vaccine effectiveness, a measure of how well vaccines work in the real world, are essential for making informed decisions about vaccination programs. However, one challenge that can stand in the way of accurate measurements is when people who are vaccinated are more likely to receive testing than people who are unvaccinated. Survey data tell us that vaccinated people may indeed be more likely to receive testing, largely because they may have more engagement with and/or access to the healthcare system, not because vaccinated people are getting infected more often.
This difference in testing could potentially lead to biased measurements of vaccine effectiveness, meaning that sometimes, the measurements may be less accurate.
Although we know this can happen in real-world studies, we know little about how big the problem really is, and how the size of the problem (i.e., the size of the bias, or how much accuracy is lost) might vary depending on how a real-world study of vaccine effectiveness is carried out (the study design). We also know little about how different vaccines (with different levels of protection) and different properties of a virus (how easily it can be transmitted) might influence the size of this problem.
What did we do?
We developed a mathematical model to simulate an epidemic, using SARS-CoV-2 as our example, and tracked cases over time with a vaccine that protects against infection and against passing infection on to others. We examined different testing scenarios: in the first, people who were vaccinated were just as likely to be tested as those who were unvaccinated; in all other scenarios, vaccinated people were more likely to be tested, by a little or by a lot. We then put the simulated data (which include the cases picked up through testing) through two study designs commonly used in the real world to estimate vaccine effectiveness: the cohort design and the test-negative design. In our study, the cohort design included all simulated individuals, whereas the test-negative design included only those who were symptomatic and had been tested according to the testing scenario.
First, we estimated vaccine effectiveness for each study design. We then measured the size of the bias by comparing each estimate to the “true” protection from the vaccine.
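The logic of the two estimators can be sketched with a toy static calculation. This is an illustration only, not the authors' dynamic transmission model: all population sizes, attack rates, and testing probabilities below are hypothetical. It shows why unequal testing directly biases the cohort estimate (VERR, based on a risk ratio of detected cases) while it cancels out of the simple test-negative odds ratio (VEOR); the residual test-negative bias found in the paper arises from epidemic dynamics that this static sketch leaves out.

```python
# Toy static sketch (NOT the paper's dynamic model); all numbers hypothetical.
N = 100_000            # people per arm (vaccinated / unvaccinated)
VE_true = 0.60         # true vaccine efficacy against infection
attack_rate = 0.05     # infection risk in the unvaccinated arm
other_illness = 0.10   # risk of non-target symptomatic illness (both arms)
p_test_unvax = 0.30    # chance a symptomatic unvaccinated person is tested
p_test_vax = 0.60      # vaccinated people tested twice as often (unequal testing)

# Expected symptomatic target infections and other illnesses in each arm
inf_unvax = attack_rate * N
inf_vax = attack_rate * (1 - VE_true) * N
other_unvax = other_illness * N
other_vax = other_illness * N

# Detected counts: only tested people are observed
pos_unvax = inf_unvax * p_test_unvax    # test-positive, unvaccinated
pos_vax = inf_vax * p_test_vax          # test-positive, vaccinated
neg_unvax = other_unvax * p_test_unvax  # test-negative, unvaccinated
neg_vax = other_vax * p_test_vax        # test-negative, vaccinated

# Cohort design: 1 minus the risk ratio of *detected* cases in the full population
VE_RR = 1 - (pos_vax / N) / (pos_unvax / N)

# Test-negative design: 1 minus the odds ratio among tested symptomatic people
VE_OR = 1 - (pos_vax / neg_vax) / (pos_unvax / neg_unvax)

print(f"true VE          = {VE_true:.2f}")
print(f"cohort VE_RR     = {VE_RR:.2f}")   # pulled down by the extra testing of vaccinated people
print(f"test-neg VE_OR   = {VE_OR:.2f}")   # testing probabilities cancel in the odds ratio
```

With these hypothetical inputs the cohort estimate lands well below the true value (1 − (1 − VE) × p_test_vax/p_test_unvax = 0.20), while the test-negative odds ratio recovers 0.60 because both cases and controls are filtered through the same testing probability within each arm.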
What did we find?
We found the following main results:
- Both study designs underestimated the “true” protection from the vaccine when people who were vaccinated were more likely to receive testing for infection than people who were unvaccinated. As expected, the more unequal the testing, the bigger the underestimate. The size of the bias, however, was smaller with the test-negative design.
- Irrespective of the study design, the biases were largest when the vaccine’s true protection against infection was low.
- The influence of other factors on bias depended on the study design:
  - Changes in the vaccine’s ability to protect against infection had a stronger effect on bias in the cohort design.
  - Changes in a vaccine’s ability to reduce an infected person’s infectiousness had a stronger effect on bias in the test-negative design.
  - Changes in the underlying transmission risks (e.g. how infectious a virus might be) had a stronger effect on bias in the test-negative design.
What do these findings mean for public health?
- Even when a vaccine works well to protect us from infection, real-world studies can sometimes underestimate how well it works, particularly when people who are vaccinated are also more likely to receive testing for infection. Because unequal testing in the real world can make a vaccine appear less effective than it really is, it could mislead us into thinking that vaccines are not working when they actually are.
- Overall, this problem can be minor, especially when we use a test-negative design, which is better than the cohort design at buffering against the impact of unequal testing.
- But there is one key circumstance to watch out for (even with a test-negative design): when the vaccine’s true protection against infection is on the lower end, the risk of a large bias (a large underestimate) is high.
- When conducting real-world studies of vaccine effectiveness, it is important to consider whether testing is unequal by vaccination status and how that might influence our interpretation of the estimates.
