AI experts have warned of a possible cyber war against banks

Source: https://cobaltstrike.net/2022/03/23/ai-experts-have-warned-of-a-possible-cyber-war-against-banks/



The US government has already warned banks that cyber attacks on their systems could intensify after Russia's invasion of Ukraine. But experts believe financial organizations also face risks from a far less visible part of their business: the artificial intelligence (AI) models used in almost everything from lending to trading.

“This is a huge unaccounted-for risk. Vulnerabilities in AI and complex analytics systems are significant and very often overlooked by the many organizations that use them,” Andrew Burt, a former adviser to the head of the FBI’s cyber division who now runs BNH, a law firm specializing in AI issues, told The Wall Street Journal.

The machine learning models banks rely on present a different kind of threat than the systems that have been the targets of past attacks. Unlike defenses against ransomware, protections for machine learning technologies are still in their infancy, which puts the financial organizations that depend on them at risk.

“Machine learning security is not just a combination of security and machine learning; it’s an entirely new field […] When machine learning is introduced into any software infrastructure, new attack surfaces open up: new ways to influence the system and change its behavior. The whole infrastructure seems fragile, like a house of cards. You don’t know which card has to be pulled out for it all to collapse,” explained Abhishek Gupta, founder and head of the non-governmental group Montreal AI Ethics Institute.

Machine learning models vary in complexity, ranging from relatively simple algorithms to the elaborate so-called “black boxes” of artificial intelligence, named that because, like the human brain, they cannot simply be opened up to see how decisions are made. And, just like the human brain, AI platforms can be fed false information, including by intruders who want to manipulate them.

The “Russian experience” of using the internet and social networks to spread disinformation can readily be turned against machine learning models that, like other investors, scan the internet to gauge market sentiment, Gupta believes. False information about an imminent takeover or an unfolding fiasco, for example, could easily deceive a financial institution’s trading systems, and there are currently no reliable defenses against such threats.
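
To make the mechanism concrete, here is a minimal sketch, not from the article, of how planted headlines could flip a naive sentiment-driven trading signal. The word lists, headlines, and threshold are all hypothetical and exist only for illustration:

```python
# Toy illustration (hypothetical): flooding a news feed with fabricated
# headlines flips a naive keyword-based sentiment trading signal.

NEGATIVE = {"fiasco", "collapse", "fraud", "losses", "default"}
POSITIVE = {"takeover", "growth", "record", "beat", "upgrade"}

def sentiment_score(headlines):
    """Average per-headline score: positive keywords minus negative ones."""
    scores = []
    for h in headlines:
        words = set(h.lower().split())
        scores.append(len(words & POSITIVE) - len(words & NEGATIVE))
    return sum(scores) / len(scores)

def trade_signal(headlines, threshold=0.2):
    s = sentiment_score(headlines)
    return "BUY" if s > threshold else "SELL" if s < -threshold else "HOLD"

genuine = ["Bank posts record quarterly growth", "Analysts upgrade outlook"]
print(trade_signal(genuine))            # BUY

# An attacker plants fabricated negative stories in the same feed:
planted = ["Unfolding fiasco at bank, fraud suspected"] * 10
print(trade_signal(genuine + planted))  # SELL
```

Real sentiment models are far more sophisticated than keyword matching, but the failure mode is the same: the model trusts the volume and tone of public text it cannot verify.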

Although many large organizations in the United States use machine learning models, the models used by banks are uniquely important to the country’s economy, and therefore pose unique risks.

“They are very susceptible to manipulation. If you find a way to fool the models at heavily indebted banks and force them into large losses, it becomes a kind of atomic bomb for the economy,” said David Van Bruwaene, an AI specialist and head of the company Fairly AI.

Banks’ machine learning models are open to attacks of various kinds. Trading models can be deceived with fake market data, while lending systems, for example, can be flooded with fraudulent loan applications that distort their picture of financial reality.
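
As a sketch of that second kind of attack, here is a hypothetical, stdlib-only Python example, again invented for illustration rather than taken from the article, in which fake “repaid” applications injected into a training pipeline drag a learned approval bar down:

```python
# Toy illustration (hypothetical): poisoning a lending model's training
# data with fake applications so it learns a looser approval threshold.

from statistics import mean

def fit_threshold(training_rows):
    """Learn the approval cut-off as the mean income/debt ratio of
    past applicants who repaid (label True)."""
    good = [inc / debt for inc, debt, repaid in training_rows if repaid]
    return mean(good)

def approve(income, debt, threshold):
    return income / debt >= threshold

clean = [(90, 30, True), (80, 40, True), (30, 60, False)]
t = fit_threshold(clean)
print(round(t, 2), approve(40, 40, t))    # 2.5 False: risky applicant rejected

# Attacker floods the pipeline with fake "repaid" applications that have
# very low income/debt ratios, dragging the learned threshold down:
poison = [(10, 50, True)] * 20
t2 = fit_threshold(clean + poison)
print(round(t2, 2), approve(40, 40, t2))  # 0.41 True: risky applicant approved
```

The point of the sketch is that the attacker never touches the model itself; corrupting the data it learns from is enough to change its behavior.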

According to Burt, attacks on machine learning models are currently mostly the work of state-sponsored hackers. It is very difficult to gauge the number of such attacks from the outside, however, since banks are reluctant to disclose weaknesses in their systems.

Since research on detecting and preventing attacks on machine learning models is still at a relatively early stage, it is hard to advise potential victims on how to protect themselves.

“This is the billion-dollar question. A lot of work has been done in this area, and most of it has not been successful. So unfortunately, at the moment there are not many effective ways to counter these threats,” Gupta said.
