Securing Bank Industry AI Against Adversaries

In 2019, researchers at KU Leuven in Belgium found a way to render a person invisible, at least to one widely used person-detection algorithm.

The researchers had developed what they called a “patch” — a piece of cardboard printed with colourful, abstract shapes.

When a person holds or wears this patch in the centre of their body, they become undetectable to certain artificial intelligence (AI) systems. Passing the patch back and forth in a demonstration video, first one researcher and then the other pops in and out of the AI’s watchful eye in a strange game of now you see me, now you don’t.

The research team is investigating AI systems

By 2020, researchers at Northeastern University, MIT, and IBM had created a patch of their own that worked when printed onto a regular T-shirt. This advance brings us a little closer to realising some version of William Gibson’s “ugly T-shirt” or Philip K. Dick’s “scramble suit”, which rendered their fictional wearers respectively invisible to, or unidentifiable by, a cyberpunk surveillance state.

AI does not see the world the way we do. Without the benefit of our lifetime of experience, it can often be fooled by seemingly innocuous changes to the object it is interacting with, whether that object is a stop sign, an image of a human, or a banana. Yet we trust AI to do far more than image identification, and adversarial examples exist in far more abstract forms as well.
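
To make the idea concrete, here is a minimal sketch of one classic image attack, the fast gradient sign method (FGSM). Everything in it is an illustrative assumption: the tiny untrained network stands in for a real image classifier, and the epsilon budget is arbitrary.

    # A minimal FGSM sketch (PyTorch): nudge each pixel a small step in the
    # direction that increases the classifier's loss. The tiny untrained
    # model below is a stand-in for a real, trained image classifier.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    ).eval()

    def fgsm_attack(image, label, epsilon=0.03):
        """Return a slightly perturbed copy of `image` crafted to change the model's output."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()  # one small signed step
        return perturbed.clamp(0, 1).detach()            # keep pixel values valid

    x = torch.rand(1, 3, 64, 64)      # a random stand-in "photo"
    y = model(x).argmax(dim=1)        # the label the model currently assigns
    print(y.item(), model(fgsm_attack(x, y)).argmax(dim=1).item())

A perturbation of this size is typically invisible to a human, which is exactly what makes adversarial examples so unsettling.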

“An adversarial example is any modification that games the targeted AI,” explains SnT researcher Dr. Maxime Cordy. Together with industry partner BGL BNP Paribas in Luxembourg, Cordy and his FNR-funded research team are investigating how the AI systems created for banks can be shored up against these subtle games of masquerade.

"People’s livelihoods and life savings are on the line, so it is really important that the automated tools are secure, accurate and explainable."

Yves Le Traon and Maxime Cordy (SnT)

The team’s first focal point is BGL’s automated credit scoring tool. “We’re working with BNP to develop a system that is resilient against adversarial examples,” says Cordy, the project’s principal investigator. In this unique context, adversarial examples could be used to manipulate the system into providing an inappropriate credit score.

“Pretty early on in the project we ran into a problem: existing methods for generating adversarial examples all came out of computer vision, and they just don’t translate well into this alphanumeric domain.” Their solution was to create something completely new.
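
To give a flavour of what an attack might look like in that alphanumeric setting, here is a toy, hypothetical sketch of a gradient-based perturbation of a tabular scoring model. It is not the team’s method; the features, the model, and the “mutability” constraint are all invented for illustration.

    # A toy tabular attack sketch (not the SnT/BGL method). Features are
    # [monthly_income, existing_debt, loan_amount], each scaled to [0, 1].
    import torch
    import torch.nn as nn

    scorer = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))

    def tabular_attack(applicant, steps=50, lr=0.01, budget=0.05):
        """Nudge attacker-controlled features within a small budget to raise the score."""
        x0 = applicant.clone()
        x = applicant.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        mask = torch.tensor([1.0, 0.0, 0.0])  # assume only income can be misreported
        for _ in range(steps):
            opt.zero_grad()
            (-scorer(x).sum()).backward()     # maximise the credit score
            x.grad *= mask                    # immutable features stay untouched
            opt.step()
            with torch.no_grad():
                x.clamp_(x0 - budget, x0 + budget)  # keep the change plausible
        return x.detach()

    applicant = torch.tensor([0.4, 0.7, 0.5])
    print(scorer(applicant).item(), scorer(tabular_attack(applicant)).item())

Unlike pixels, tabular features come with hard domain constraints (categories, integer counts, business rules), which is one reason image-based attack methods do not carry over directly.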

“Our method for generating adversarial examples for this context is an important step towards understanding their more universal properties,” says Prof. Yves Le Traon, who coordinates the project. Importantly, their new approach will eventually allow BGL to automate the algorithm’s improvement.

“Ultimately, we want to create a whole security flow, so that BGL can run our tool to automatically generate new adversarial examples, and then have the AI learn and improve itself automatically based on these scenarios,” says Cordy. This state-of-the-art technique, often referred to as “learning a robust classifier”, will help keep banks like BGL one step ahead.
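
As a rough illustration of that loop, here is a compact adversarial training sketch, the textbook recipe behind “learning a robust classifier”. The model, the synthetic data, and the FGSM-style attacker are all toy assumptions, not the project’s actual pipeline.

    # Adversarial training in miniature: generate adversarial examples on
    # the fly, then train the model on them. All components are toy stand-ins.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def perturb(x, y, epsilon=0.05):
        """The 'attacker' inside the loop: one FGSM-style step on the inputs."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + epsilon * x.grad.sign()).detach()

    for step in range(1000):
        x = torch.rand(32, 3)              # synthetic applicants
        y = (x[:, 0] > x[:, 1]).long()     # a made-up accept/reject rule
        x_adv = perturb(x, y)              # fresh adversarial examples each step
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()  # train on the attacked inputs
        opt.step()

Re-running such a loop whenever new attacks are discovered is, in spirit, the automated “security flow” Cordy describes.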

"Our method for generating adversarial examples for this context is an important step towards understanding their more universal properties."

Yves Le Traon (SnT)

“We’re always searching for the best AI technologies to implement in the banking context,” says BGL’s Anne Goujon, director of the bank’s data science lab and the project’s industry partner representative. “It is a context where people’s livelihoods and life savings are on the line, so it is really important that the automated tools used to assist with administrative decisions are secure, accurate and explainable.”

Ensuring their tools are resilient against adversarial examples means ensuring they cannot be maliciously misled or exploited — a vital step towards safeguarding a tool’s overall security and reliability. Improved security is good news for bankers and customers alike, because at the end of the day, a tool that cannot be gamed is a tool that everyone can have confidence in.

People & Partners in this Project

Yves Le Traon
Maxime Cordy
BGL BNP Paribas
