New research reveals that the majority of CISOs working in the financial services sector are increasingly worried about the possible use of deepfakes
Over three-quarters (77%) of cyber security decision makers are concerned about the potential for deepfake technology to be used fraudulently – with online payments and personal banking services thought to be most at risk – but just over a quarter (28%) have taken any action against them.
Biometric authentication technology provider iProov surveyed over a hundred security decision makers in the financial services sector to highlight how seriously the threat of deepfakes is perceived.
A portmanteau of “deep learning” and “fake”, the term first emerged on Reddit a few years ago and describes a type of artificial intelligence (AI) that can be used to create image, audio and video hoaxes that can be indistinguishable from the real thing.
The creation of deepfakes is still an emerging application of AI. Nevertheless, iProov founder and CEO Andrew Bud said it was encouraging to see that the financial services industry has acknowledged the scale of the dangers, which could be massive in terms of fraud, although he added that the tangible measures being taken to protect against them were what really mattered.
“It’s likely that so few organisations have taken such action because they’re unaware of how quickly this technology is evolving. The latest deepfakes are so good they will convince most people and systems, and they’re only going to become more realistic,” he said.
A total of 71% of respondents said they thought their customers were at least somewhat concerned about the threat, and as deepfake technology moves further into the public eye – Facebook announced new policies banning deepfakes from its platform at the beginning of January 2020 – 64% said they thought the problem was set to worsen.
According to iProov, as both AI and machine learning technologies have become more advanced (and, crucially, more accessible), deepfakes have already been deployed by fraudsters in a commercial context, which could be particularly concerning when it comes to the world of personal finance.
Of particular concern to decision makers was the potential for deepfake images to be able to compromise facial recognition defences.
“The era in which we can believe the evidence of our own eyes is ending. Without technology to help us identify fakery, every moving and still image will, in future, become suspect,” said Bud. “That’s hard for all of us as consumers to learn, so we’re going to have to rely on really good technology to protect us.”
A previous survey of the British public conducted by iProov in 2019 revealed a huge lack of awareness of deepfake technology among consumers – 72% said they had never heard of deepfake videos, although once it was explained what they were, 65% said that deepfakes would undermine their trust in the internet.
Consumers cited identity theft as their biggest concern over how deepfakes might be misused, and 72% also said they would be more likely to use an online service that had put measures in place to mitigate their impact.