A Pandemic Model Shaping Isolation Measures

COVID-19 | Epidemiological Modelling | SerVal & TruX

Salah Ghamizi is a Ph.D. student at SnT and a member of Prof. Yves Le Traon’s Security Design and Validation (SerVal) Research Group. Together with his colleagues, Ghamizi has served on the Luxembourg COVID-19 Task Force since March 2020. Their recent paper, written in collaboration with SnT’s TruX research group, outlines the group’s contribution to the fight against COVID-19 — a tool that blends artificial intelligence and epidemiological modelling. The paper won the Best Paper Award at KDD 2020, the ACM Conference on Knowledge Discovery and Data Mining, a leading machine learning conference. We spoke to Salah about the project.

Tell us about the tool your team has built. What does it do?

Our tool uses machine learning techniques to analyse public data and deliver hypothetical projections of how different isolation measures would impact the spread of coronavirus in more than 100 countries around the world. Currently, it is one of the most sophisticated public tools available for modelling the pandemic, and we have an update coming very soon that will make it even better. The update will introduce an optimisation function for policy recommendations based on the desired outcome. That means, for example, you could put in "keep total infections under XXX at all times" and the tool will give you a set of policies and an implementation timeline projected to produce the desired result. We hope that our approach will be a powerful tool for public health officials and decision-makers in the coming weeks and months as societies around the globe continue to come out of lockdown.
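To make the idea concrete: a target like "keep total infections under a given cap at all times" can be expressed as a score over simulated trajectories that the search then minimises. The sketch below assumes a simulate(policy) function that maps a candidate policy to projected daily infection counts; the function names and the encoding of a policy as numeric restriction levels are illustrative assumptions, not the tool's actual interface.

```python
def meets_target(daily_infections, max_active_cases):
    """True if the projected trajectory stays under the cap on every day."""
    return all(cases <= max_active_cases for cases in daily_infections)

def policy_cost(policy, simulate, max_active_cases):
    """Score a candidate policy for the search: trajectories that overshoot the
    cap are penalised heavily; feasible ones are ranked by how restrictive they are."""
    trajectory = simulate(policy)
    overshoot = sum(max(0, cases - max_active_cases) for cases in trajectory)
    strictness = sum(policy)  # e.g. the sum of daily restriction levels
    return 1_000 * overshoot + strictness

# Toy usage: a policy is a list of daily restriction levels, and a stand-in
# simulator turns it into projected daily infections.
toy_simulate = lambda policy: [1_000 / (1 + level) for level in policy]
print(policy_cost([2, 2, 1, 0], toy_simulate, max_active_cases=600))
```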

"It is one of the most sophisticated public tools available for modelling the pandemic."

Salah Ghamizi (SnT)

What makes your tool different from other modelling tools? What technologies did you apply to make it so “smart”?

Unlike traditional epidemiological models, our tool harnesses techniques from artificial intelligence and machine learning to improve the quality of the parameters and assumptions that we plug into the model. My colleague Renaud Rwemalika’s implementation of a time-dependent neural network made our predictions very accurate. The result is that our model produces much more reliable simulations of the spread of the virus through a specific community (users can select the country they want to look at in the tool to customise its output). But creating a simulation, even a very good one, is relatively easy. The hard part is finding ways to make this complex tool genuinely helpful for policymakers. That’s why adding prescriptive search, the ability to search for an optimised result, is so powerful. It means you can quickly extract the information you actually want from the model. Before this, users needed to manually test potentially viable mitigation parameters one at a time, landing on the best option after a tedious game of “hot and cold”. Our new approach reverses that process and sets the tool further apart. It gives you the result you want right away, and that improved efficiency means policymakers are more likely to actually take advantage of it, which at the end of the day is what matters most.
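For readers unfamiliar with compartmental models, the sketch below shows the general shape of such a simulation: a simple SEIR model whose transmission rate varies over time, which is the kind of parameter a learned, time-dependent component can supply. The rates, population, and lockdown schedule are illustrative values, not the team's fitted model.

```python
def seir_simulation(population, beta_t, sigma=1 / 5.2, gamma=1 / 10,
                    days=180, initial_infected=10):
    """Minimal discrete-time SEIR simulation with a time-dependent
    transmission rate beta_t(day); in a learned model this is where
    ML-estimated parameters would plug in."""
    S = population - initial_infected
    E, I, R = 0.0, float(initial_infected), 0.0
    history = []
    for day in range(days):
        beta = beta_t(day)
        new_exposed = beta * S * I / population   # S -> E
        new_infectious = sigma * E                # E -> I
        new_recovered = gamma * I                 # I -> R
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append((day, S, E, I, R))
    return history

def beta_schedule(day):
    # A lockdown starting on day 30 roughly halves transmission (illustrative).
    return 0.5 if day < 30 else 0.25

trajectory = seir_simulation(population=630_000, beta_t=beta_schedule)
peak_infectious = max(I for _, _, _, I, _ in trajectory)
print(f"Peak simultaneous infections: {peak_infectious:,.0f}")
```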

“It has been reaffirming for me to see that even when the world seems to have come to a standstill — science doesn’t stop.”

What was the biggest challenge you faced while developing your tool?

Data is always the biggest challenge. Data is messy and imperfect. When doing research, we often have the luxury of using artificial datasets, which are — by nature — very orderly and complete. But data produced “in the wild” is completely different, and you need to be flexible when you work with it. For example, when we first started out, we had a problem with data scarcity. Luxembourg alone just didn’t produce a big enough dataset for us to build a good model, because the population is too small. We resolved this by adding data from lots of other countries to our data pool. Augmenting this additional data with demographic information from the country or community of origin now allows us to adjust our results to reflect the demographics of the target community — whatever community we want to reason about, whether that’s Luxembourg or Latvia or wherever else. It is a creative solution that not only compensates for imperfect, messy data, but also makes our tool more useful to the global community.
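A rough sketch of the idea follows: when pooling data from many countries, each country's contribution can be weighted by how demographically similar it is to the target community, so a small country like Luxembourg borrows strength mainly from comparable populations. The features, figures, and weighting scheme below are purely illustrative assumptions, not the team's actual augmentation pipeline.

```python
import numpy as np

# Toy demographic features per country: [median age, % over 65, household size].
# All values are rough, illustrative figures.
demographics = {
    "Luxembourg": np.array([39.7, 14.5, 2.3]),
    "Latvia":     np.array([44.4, 20.6, 2.4]),
    "Italy":      np.array([46.5, 23.0, 2.3]),
    "Ireland":    np.array([38.2, 14.2, 2.7]),
}

def pooling_weights(target, countries):
    """Weight each country's data by demographic similarity to the target,
    so the pooled dataset reflects the community we want to reason about."""
    t = demographics[target]
    distances = {c: np.linalg.norm(demographics[c] - t) for c in countries}
    similarities = {c: 1.0 / (1.0 + d) for c, d in distances.items()}
    total = sum(similarities.values())
    return {c: s / total for c, s in similarities.items()}

print(pooling_weights("Luxembourg", ["Latvia", "Italy", "Ireland"]))
```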


We also encountered challenges in setting up the optimisation algorithm we used for the new result-guided policy recommendation feature. The class of algorithm we used, called a genetic algorithm, requires a good bit of fine-tuning to get it off the ground, and for this we were very fortunate to have Dr. Maxime Cordy on our team. His insights and guidance in establishing the framework for running the algorithm successfully were indispensable.
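As a rough illustration of what such a genetic algorithm looks like: candidate policies are encoded as weekly restriction levels, scored against the simulated outcome, and the fittest schedules are recombined and mutated over many generations. Everything below, from the encoding to the population size and mutation rate, is an illustrative sketch rather than the tuned setup from the paper, and the toy simulator merely stands in for the real epidemiological model.

```python
import random

def genetic_search(simulate, max_active_cases, horizon=26, pop_size=40,
                   generations=60, mutation_rate=0.1, levels=(0, 1, 2, 3)):
    """Evolve weekly restriction levels (0 = none ... 3 = strict lockdown)
    towards a schedule that keeps projected infections under the cap
    while staying as unrestrictive as possible."""

    def cost(policy):
        trajectory = simulate(policy)
        overshoot = sum(max(0, c - max_active_cases) for c in trajectory)
        return 1_000 * overshoot + sum(policy)

    def mutate(policy):
        return tuple(random.choice(levels) if random.random() < mutation_rate else lvl
                     for lvl in policy)

    def crossover(a, b):
        cut = random.randrange(1, horizon)
        return a[:cut] + b[cut:]

    population = [tuple(random.choice(levels) for _ in range(horizon))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[: pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=cost)

# Toy stand-in for the epidemiological simulator: stricter weeks suppress cases more.
def toy_simulate(policy):
    cases, history = 500.0, []
    for level in policy:
        cases *= (1.4 - 0.25 * level)  # weekly growth factor drops with restriction level
        history.extend([cases] * 7)
    return history

best = genetic_search(toy_simulate, max_active_cases=3_000)
print("Best weekly restriction levels:", best)
```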

What has the project taught you that you will bring back to your regular research?

The idea for the optimisation technique actually came out of my previous research in the financial sector. I used the same approach — genetic algorithms — in a paper about optimising machine learning attacks and defences for banks. So it is natural for me to imagine applying these techniques to yet another field in the future. In fact, it is already very clear that much of the work we’ve done for our tool can be applied to model other challenging topics, like fake news or cyber attacks. The exciting thing will be to see exactly where we can apply this to have the biggest impact. On a more personal level, one thing I’ll take with me from this experience is that even when you feel surrounded by chaos and loss, there is also always progress. Even a small contribution like ours may open new avenues for fellow researchers. It has been reaffirming for me to see that even when the world seems to have come to a standstill — science doesn’t stop.

People & Partners in this Project

Salah Ghamizi
Renaud Rwemalika
Maxime Cordy