Using past pandemics to guide COVID‑19 predictions
With COVID‑19 upon us and mathematical modelling being used by both provincial and federal governments to guide decision-making, the reliability of such models is paramount. We propose to analyse the modelling predictions made during three recent outbreaks: SARS in 2003 (also a coronavirus), the H1N1 "swine flu" pandemic of 2009, and MERS, beginning in 2012 (another coronavirus), in order to assess how accurate those models proved in the long term. The proposal will also explore the lessons learned and the predictive limits imposed by random events, such as superspreaders (individuals who are vastly more likely to transmit the disease than most people) or the onset of a second (or third) wave.

These outbreaks occurred during the era of "big data", and various predictions were made at the time using mathematical models, which makes them ideal candidates for this research. While it would be ideal to wait for further data to validate current models, in the early stages of a fast-moving pandemic we do not have the luxury of time. The record of modelling in past pandemics, however, can serve as a guide for the current one.

Using these data and past predictions, we will develop and analyse parallel COVID‑19 models built on best practices drawn from the most successful past models. These COVID‑19 models will give decision-makers early warning of further waves, as well as of future pandemics, together with knowledge of which models are likely to be reliable and under what circumstances.
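As an illustration of the kind of compartmental model on which such epidemic predictions are often based (a minimal sketch for context only, not the proposal's actual method, and with parameter values that are purely illustrative rather than fitted to any real outbreak), a basic SIR model can be integrated numerically:

```python
# Minimal SIR compartmental model, integrated with a forward-Euler step.
# All parameter values below are illustrative, not fitted to real data.

def simulate_sir(beta, gamma, s0, i0, days, dt=0.1):
    """Return (S, I, R) trajectories for a population normalised to 1.

    beta  -- transmission rate (contacts x infection probability per day)
    gamma -- recovery rate (1 / mean infectious period in days)
    """
    s, i, r = s0, i0, 1.0 - s0 - i0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt   # S -> I flow
        new_recoveries = gamma * i * dt      # I -> R flow
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative run with basic reproduction number R0 = beta / gamma = 2.5
trajectory = simulate_sir(beta=0.5, gamma=0.2, s0=0.99, i0=0.01, days=120)
peak_infected = max(i for _, i, _ in trajectory)
```

Models of this family underpinned many of the published SARS, H1N1, and MERS forecasts, which is precisely why their long-term accuracy can now be assessed against the recorded course of those outbreaks.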