The Robot Protectors of Space

The device you are using to read this article is built on a centralised, monolithic architecture – a processor core directing a number of other components to execute tasks. Most of our technological systems work this way, because we humans are the ones leading, telling them what to do – we are the intelligence.


Drone

Yet, over the last few decades, another pattern has emerged, both in computing and in other sectors: distributed systems. In these systems, decision power is distributed among all components. These work together to execute one or more tasks without central coordination; instead, they take autonomous decisions based on constant communication with one another, each using the same heuristics: mathematical functions that rank different possible courses of action (i.e., algorithms). In a nutshell, distributed systems work much like a beehive – their behaviours are examples of so-called swarm intelligence, in which, at any given time, each component has the local intelligence to make the right decision for the group to accomplish a common goal. A well-known example of a distributed system is the SETI@home project (1999-2020), in which anyone with a computer and an internet connection could contribute some of their machine's computational power to analyse radio signals from space in search of signs of extraterrestrial intelligence.
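As a rough illustration of the idea (a toy sketch in Python, not code from the project), the snippet below shows a handful of agents that each rank their candidate moves with the same simple heuristic: move towards the nearest of a few hypothetical targets while keeping some distance from neighbours, with no central controller involved.

```python
# Toy illustration (not project code): every agent applies the same local
# heuristic to rank its possible moves; there is no central controller.
import math
import random

TARGETS = [(2.0, 8.0), (7.0, 3.0), (9.0, 9.0)]      # hypothetical points to cover
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # candidate actions


def heuristic(pos, move, neighbours):
    """Score a candidate move: get closer to the nearest target,
    but keep some separation from neighbouring agents."""
    nx, ny = pos[0] + move[0], pos[1] + move[1]
    dist_to_target = min(math.dist((nx, ny), t) for t in TARGETS)
    crowding = sum(1.0 / (1e-3 + math.dist((nx, ny), n)) for n in neighbours)
    return -(dist_to_target + 0.5 * crowding)  # higher is better


agents = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]

for step in range(20):
    new_positions = []
    for i, pos in enumerate(agents):
        neighbours = [p for j, p in enumerate(agents) if j != i]
        best = max(MOVES, key=lambda m: heuristic(pos, m, neighbours))
        new_positions.append((pos[0] + best[0], pos[1] + best[1]))
    agents = new_positions  # every agent decided locally, using the same rule

print(agents)
```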

“For robot swarms, and robots in general, the few existing approaches are selective approaches.” 

Robot Swarm
Robots demonstrating swarm technology

Distributed systems present several advantages over traditional, monolithic systems. First, they are more resilient: if a component breaks down, the other parts autonomously reorganise to accomplish the task, thus avoiding downtime. Second, they have lower overall operational costs, as they autonomously leverage existing resources. In some cases, their flexibility also gives them increased capacity to tackle additional or different tasks.


However, the key problem with these systems is designing efficient behaviours for them: behaviours that enable each component to make the best decisions on its own while still working in concert with the others. Traditionally, researchers manually designed the underlying algorithms through trial and error – a difficult, costly and error-prone process. With artificial intelligence, this process can be automated, saving a considerable amount of time and money and generating unseen, efficient, and reusable behaviours. Automated algorithm design is the discipline that combines optimisation techniques with machine learning to find the best heuristics for achieving a certain goal.
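A minimal sketch of what that automation can look like (hypothetical code, not the project's actual tooling): instead of hand-tuning a heuristic's parameters through trial and error, an optimiser searches over them and keeps the setting that scores best in simulation.

```python
# Hypothetical sketch: automated design searches over heuristic parameters
# instead of hand-tuning them.
import random


def simulate_swarm(weights):
    """Stand-in for a swarm simulation: returns a task score for a heuristic
    parameterised by `weights` (here just a dummy scoring function)."""
    w_target, w_spacing = weights
    # Dummy trade-off: the best score needs both terms balanced.
    return -(w_target - 1.0) ** 2 - (w_spacing - 0.5) ** 2


best_weights, best_score = None, float("-inf")
for _ in range(1000):  # simple random search in place of manual trial and error
    candidate = (random.uniform(0, 2), random.uniform(0, 2))
    score = simulate_swarm(candidate)
    if score > best_score:
        best_weights, best_score = candidate, score

print(best_weights, best_score)
```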

“In the case of asteroid observation with a distributed system – a fleet – of small satellites, we may want to maximise area coverage while also minimising energy consumption.”

In the framework of the FNR-backed project Automating the design of autonomous robot swarms (ADARS), launched in May 2021, a group of researchers from SnT’s Parallel Computing and Optimisation group (PCOG) is currently conducting research on the automated generation of behaviours for distributed aerospace and space systems (DASS). Leveraging their long-standing expertise in optimisation, swarm intelligence, and machine learning, PCOG researchers are looking for solutions in an even more complex subset of automated algorithm design: generating unseen and efficient robot swarm behaviours from scratch. To achieve this goal, the researchers are using hyper-heuristics: higher-level heuristics that operate on other heuristics.


With hyper-heuristics, a selective approach is usually favoured: the goal is to select or combine the best heuristics within a given set for the device or component to use. “For robot swarms, and robots in general, the few existing approaches are selective approaches,” says Dr. Grégoire Danoy, deputy head of PCOG at SnT and principal investigator of the ADARS project. “With ADARS, we are using a generative approach – instead of looking for existing heuristics, we define their building blocks to create new and potentially unseen heuristics for robots to address the problem they are facing. However, we have to account for an additional layer of complexity: we are working on multi-objective problems. In the case of asteroid observation with a distributed system – a fleet – of small satellites, for example, we may want to maximise area coverage while also minimising energy consumption. To do that, we need to develop new, multi-objective machine learning techniques,” he explains.
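To make the generative, multi-objective idea more concrete, here is a simplified sketch (with assumed building-block names and a dummy simulator, not ADARS code): candidate behaviours are assembled from building blocks, scored on two competing objectives, and only the non-dominated trade-offs are kept.

```python
# Simplified sketch (assumptions, not ADARS code): a generative hyper-heuristic
# assembles new behaviours from building blocks and keeps the candidates that
# offer the best trade-offs between two objectives.
import random

# Hypothetical building blocks a behaviour can be assembled from.
BLOCKS = ["move_to_gap", "follow_neighbour", "hold_position", "spiral_scan"]


def random_behaviour(length=4):
    """Generate a new behaviour as a sequence of building blocks."""
    return [random.choice(BLOCKS) for _ in range(length)]


def evaluate(behaviour):
    """Dummy simulator: returns (area coverage to maximise,
    energy use to minimise) for a candidate behaviour."""
    active = sum(b != "hold_position" for b in behaviour)
    coverage = active + random.random()
    energy = active * 0.8 + random.random()
    return coverage, energy


def dominates(a, b):
    """a dominates b if it covers at least as much area with no more energy,
    and is strictly better on at least one objective."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b


candidates = [random_behaviour() for _ in range(50)]
scored = [(c, evaluate(c)) for c in candidates]
pareto = [(c, s) for c, s in scored
          if not any(dominates(other, s) for _, other in scored)]

for behaviour, (cov, en) in pareto:
    print(behaviour, f"coverage={cov:.2f}", f"energy={en:.2f}")
```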

The PCOG research group presenting at SnT's Partnership Day 2021

After the initial training and testing phases, the project team, comprising Danoy, Prof. Pascal Bouvry, Dr. Sébastien Varrette, Dr. Daniel H. Stolfi, Florian Felten, and Pierre-Yves Houitte, will explore two main application scenarios: swarm formation for a counter-unmanned aerial vehicle (counter-UAV) system for protection purposes, and swarm formation of small satellites for asteroid observation. This is because both aerospace and space systems are currently shifting towards distributed models due to lower launch and operational costs, higher levels of resilience, and increased capacity. The project’s practical applications range from low-level airspace protection with UAV fleets to satellite formations. But the researchers, true to their scientific spirit, are already looking beyond, to see how these solutions could be scaled up in the future.

People & Partners in this Project

Pascal Bouvry
Daniel Stolfi Rosso
Grégoire Danoy
Pierre-Yves Houitte
Sébastien Varrette
Florian Felten