Codecheck confirms reproducibility of COVID-19 model results
Imperial's COVID-19 Response Team has published the code needed to reproduce its high-profile 16 March coronavirus report, as it passes a codecheck. The code, scripts and documentation, which are available on GitHub, were subject to an independent review led by Dr. Stephen Eglen, Reader in Computational Neuroscience in the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge.
The review grants the code a Codecheck.org.uk "certificate of reproducible computation."
In his codecheck certification Dr. Eglen writes: "I was able to reproduce the results… from Report 9."
Codecheck.org.uk provided an independent review of the replication of key findings from Report 9, using the COVIDSim reimplementation of the original code. The process pairs domain expertise with technical skills and takes place as an open peer review: the reviewer conducts the codecheck and submits the resulting certificate as part of their review.
The results confirm that the key findings of Report 9, on the impact of non-pharmaceutical interventions (NPIs) in reducing COVID-19 mortality and healthcare demand, are reproducible.
COVIDSim produces the same output across platforms (Linux, Mac and Windows) and across compilers (GCC, Clang, Intel and MSVC) for a specified number of threads and fixed random-number seeds, as can be seen on GitHub.
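That kind of run-to-run determinism generally comes down to fixing the random-number streams and the order of any parallel reduction in advance. Below is a minimal illustrative sketch of the idea in C++ with OpenMP; it is not the CovidSim source, and every name and number in it is hypothetical:

```cpp
// Hypothetical sketch: deterministic parallel simulation with fixed
// seeds and a fixed thread count. Not the actual CovidSim code.
#include <cstdio>
#include <random>
#include <vector>
#include <omp.h>

int main() {
    const int n_threads = 4;           // thread count fixed up front
    const unsigned base_seed = 42;     // fixed random-number seed
    const int steps_per_thread = 1000000;

    std::vector<double> partial(n_threads, 0.0);

    // Each thread gets its own RNG, seeded deterministically from the
    // base seed and its thread id, and writes only to its own slot.
    #pragma omp parallel num_threads(n_threads)
    {
        const int tid = omp_get_thread_num();
        std::mt19937 rng(base_seed + tid);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        double sum = 0.0;
        for (int i = 0; i < steps_per_thread; ++i)
            sum += dist(rng);
        partial[tid] = sum;
    }

    // Combine partial results in a fixed order so the total does not
    // depend on thread scheduling.
    double total = 0.0;
    for (double p : partial) total += p;
    std::printf("total = %.6f\n", total);
    return 0;
}
```

Because each thread's random stream depends only on the fixed base seed and its thread id, and the partial results are combined in a fixed order, repeated runs with the same settings produce identical output.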
Reproducible results
In his analysis, Dr. Eglen said: "Each run generated a tab-delimited file in the output folder. Two R scripts provided by Prof Ferguson were used to summarise these runs into two summary files... These files were compared against the values generated by Prof Ferguson... The results were found to be identical. Inserting my results into his Excel spreadsheet generated the same pivot tables."
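The final comparison in that workflow amounts to checking the freshly generated summary files cell by cell against the reference values. The actual check used R scripts and an Excel spreadsheet; as a hypothetical stand-in, here is how such a tab-delimited comparison might look in C++ (file names invented):

```cpp
// Hypothetical sketch: compare a freshly generated tab-delimited summary
// against a reference file, cell by cell.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Read a tab-delimited file into rows of string cells.
static std::vector<std::vector<std::string>> read_tsv(const std::string& path) {
    std::vector<std::vector<std::string>> rows;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); ) {
        std::vector<std::string> cells;
        std::stringstream ss(line);
        for (std::string cell; std::getline(ss, cell, '\t'); )
            cells.push_back(cell);
        rows.push_back(cells);
    }
    return rows;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: compare <generated.tsv> <reference.tsv>\n";
        return 2;
    }
    const auto generated = read_tsv(argv[1]);
    const auto reference = read_tsv(argv[2]);

    if (generated.size() != reference.size()) {
        std::cerr << "row count differs\n";
        return 1;
    }
    for (std::size_t r = 0; r < generated.size(); ++r) {
        if (generated[r] != reference[r]) {
            std::cerr << "mismatch at row " << r + 1 << "\n";
            return 1;
        }
    }
    std::cout << "files are identical\n";
    return 0;
}
```

Compiled as `compare`, it would be invoked as `./compare generated.tsv reference.tsv` and would exit non-zero on the first mismatch.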
The codecheck found: "Small variations (mostly under 5%) in the numbers were observed between Report 9 and our runs."
The report explains the factors contributing to these small variations:
- The COVIDSim codebase is now deterministic.
- Slightly different population input files have been used.
- These results are the average of NR = 10 runs, rather than the single simulation used in Report 9 (see the sketch after this list).
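That last point is plain averaging: report the mean of an output quantity over the NR runs rather than a single draw. A toy illustration in C++, with entirely made-up per-run totals standing in for real model output:

```cpp
// Toy sketch only: averaging an output quantity over NR = 10 runs.
// The per-run values below are invented for illustration and are not
// CovidSim output.
#include <cstdio>
#include <vector>

int main() {
    const std::vector<double> per_run_total = {
        410212, 409876, 410550, 409990, 410310,
        410102, 410441, 409802, 410233, 410068
    };
    double sum = 0.0;
    for (double v : per_run_total) sum += v;
    const double mean = sum / per_run_total.size();  // NR = 10
    std::printf("mean over %zu runs: %.1f\n", per_run_total.size(), mean);
    return 0;
}
```

Averaging over ten runs smooths run-to-run variation, which helps explain why the certificate's numbers sit within a few per cent of the single-run figures in Report 9.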
The codecheck confirmed the trends and findings of the original report.
Building in part on code originally developed, published and peer-reviewed in 2005 and 2006, the code used for Report 9 continues to be actively developed to allow examination of the wider range of control policies now being deployed as countries relax lockdown. The Imperial team is sharing the code to enhance transparency and to allow others to contribute and make use of the simulation.
Refactoring the code has allowed changes to be made more quickly and reliably, including incorporating new data that has become available as the pandemic has progressed.
In addition to the features presented in Imperial Report 9, further strategies can now be examined, such as testing and contact tracing, which were not UK policy options in March.
Users also now have the ability to vary the intensity of interventions over time and to calibrate the model to country-specific epidemic data.
Scrutinising and improving
Some world-leading software engineers have helped scrutinise, review and improve Imperial's code and modelling, including John Carmack, the legendary videogame developer.
Commenting in April, John Carmack said that the code "fared a lot better going through the gauntlet of code analysis tools I hit it with than a lot of more modern code. There is something to be said for straightforward C code. Bugs were found and fixed, but generally in paths that weren't enabled or hit. Similarly, the performance scaling using OpenMP was already pretty good, and this was not the place for one of my dramatic system refactorings. Mostly, I was just a code janitor for a few weeks, but I was happy to be able to help a little."
Provided by Imperial College London