According to the Consumer Cyber Safety Pulse Report from NortonLifeLock’s Norton Labs, the company blocked more than 1 billion threats in the first quarter of this year.
Deepfakes, artificially created photos and videos, are widely used to sow misinformation and dupe consumers, according to the cybersecurity company. Ukraine war propaganda, fake social media profiles and phony charitable appeals are among the scams driven by deepfakes, the researchers found.
Heists involving cryptocurrency are also popular among cybercriminals. Norton tracked more than $29 billion in stolen Bitcoin last year and projects that number to rise in 2022.
The European Union’s law enforcement agency is also sounding the alarm on deepfakes, as Security Week points out.
In a recent report, Europol warned that crooks could use deepfake technology in CEO fraud, evidence tampering, and non-consensual porn, among other serious crimes. Summarizing the findings, Europol writes that “law enforcement, online service providers and other organizations need to develop their policies and invest in detection as well as prevention solutions for misinformation, and policymakers need to adapt to the changing technological reality as well.”
Artificial intelligence tools and certificates of authenticity can help defend against deepfakes, as Dark Reading reports. Security Week adds that the eyes in a deepfake image may look slightly wrong, and that zero-trust principles can also help protect against the threat.
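The certificate-of-authenticity idea can be sketched in miniature: a publisher attaches a cryptographic tag to an image’s bytes, and any later modification invalidates it. This is a toy illustration only, not how any named product works; real provenance standards such as C2PA use asymmetric signatures over signed metadata, and the key and function names below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared signing key, for illustration only.
# Real provenance systems use asymmetric (public-key) signatures instead.
SECRET_KEY = b"publisher-signing-key"

def sign_image(image_bytes: bytes) -> str:
    """Publisher side: produce an authenticity tag over the image bytes."""
    return hmac.new(SECRET_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Consumer side: recompute the tag and compare in constant time."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, tag)

original = b"\x89PNG...original image bytes..."
tag = sign_image(original)

print(verify_image(original, tag))          # True: unmodified image verifies
print(verify_image(original + b"x", tag))   # False: any tampering breaks the check
```

The point of the sketch is that authenticity travels with the content: a single flipped byte in a doctored image fails verification, which is the property provenance schemes rely on.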
Russia’s war in Ukraine has so far led to numerous deepfakes but not yet the massive cyber conflict that some feared, as the Australian Financial Review reports. CrowdStrike’s chief technology officer, Mike Sentonas, said, “We saw deepfake videos and SMS campaigns telling people that banks were shutting down.” One prominent deepfake video claimed to depict Ukrainian President Volodymyr Zelensky calling on his forces to surrender.