Is Your Business Ready For A Deepfake Attack? 4 Steps To Take Before It's Too Late



ZDNET's key takeaways

  • Deepfakes can cause serious reputational and financial damage.
  • Deepfake incidents are rising, and current defenses may fall short.
  • Take steps now to reduce your business's deepfake scam risk.

Deepfake technologies are advancing quickly, and with them, the risks to the enterprise increase.

The rise of ChatGPT and its almost immediate impact on businesses took many by surprise. The generative AI chatbot proved popular with everyone from students and SMBs to the enterprise, taking many companies by storm and prompting them to explore the benefits of AI in earnest.

Also: Is that an AI video? 6 telltale signs it's a fake

Generative AI can be transformative for businesses; however, as with any new technology, it can be abused, an issue highlighted by the risks posed by deepfakes.

What are deepfakes?

Deepfakes first emerged on social media as individuals experimented with satirical content -- realistic images and videos of everything from high-profile figures to cats driving cars. Entertainment aside, threat actors can employ the same methods for criminal and fraudulent purposes.

Deepfakes are generated through AI tools and large language models (LLMs). You can generate photos and videos of a target saying anything or performing any action. Source material fed into LLMs or found online, including existing photos or voice clips such as those scraped from interviews and podcasts, can add enough realism that deepfakes become very difficult to detect.

Also: Google spots malware in the wild that morphs mid-attack, thanks to AI

Generative Adversarial Networks (GANs) can also use existing datasets to create entirely new -- but believable -- people, and you can find these AI beings doing everything from touting scam products online to spreading fake news.

Now that AI tools are freely available online and easy to learn, the barrier to entry has been lowered for cybercriminals interested in creating sophisticated phishing campaigns, scamming individuals, spreading misinformation, or creating malicious ads.

The risks deepfakes pose to businesses

Deepfake technologies are advancing quickly, and we have yet to understand every angle of attack using generative AI -- but we've already observed a number of possible attack vectors.

Also: Are AI browsers worth the security risk? Why experts are worried

According to Ironscales' Fall 2025 Threat Report, there has been a 10% year-over-year increase in deepfake attacks this year, and 85% of organizations surveyed said they had dealt with at least one deepfake-related incident in 2025.

Some of the major risks deepfakes pose to businesses include:

  • Misinformation, propaganda: Deepfakes, whether images or video, can be used to spread misinformation, fake news, and propaganda. This could include supposed employees bashing their company online, fake executives making derogatory comments, or fake news read by presenters that implicates an organization in criminal activities.
  • Reputational harm: Deepfakes spreading misinformation can also lead to severe reputational harm and financial damage, such as share prices plummeting or incidents that erode consumer trust in a brand. This could include fake videos of a CEO admitting to embezzlement, a brand being fraudulently linked via fake news reports to child labor, and more. Companies can also bear the consequences of deepfake content circulating on social media platforms that spreads fake structural changes -- such as acquisitions and mergers -- that could severely impact stock prices. These kinds of deepfakes can also have a lasting effect on future genuine news, as viewers may not know what to believe.
  • Identity theft, social engineering: Deepfake videos, images, and voice calls are some of the most dangerous deepfake applications available today. By generating convincing videos and synthetic voice recordings, attackers could impersonate business leaders -- such as CEOs or VPs -- and lure employees into handing over sensitive information or credentials to access company systems, or into approving fraudulent invoices. One example is when UK professional services provider Arup lost millions of dollars to a deepfake scam, in which cybercriminals created a deepfake version of an executive to request fraudulent transfers during a video call.
  • Vishing: Leading on from Arup's case, the use of deepfake technology in these ways is known as vishing. Voice cloning in voicemails, fake audio notes, and deepfake video content embedded in emails or spread across social media may all be used to entice victims into revealing sensitive information or approving fraudulent payments.

Unfortunately, deepfake technologies are a growing market. A recent research report published by Google's Threat Intelligence Group (GTIG) revealed AI tools being sold underground for creating lure content useful in phishing operations, and even GenAI tools for circumventing know-your-customer (KYC) banking security requirements.

How to defend your business against deepfakes

1. Staff training

Providing employees with knowledge, guidance, and support on understanding what deepfakes are and how to detect them should be the first step you take.

Training needs to be consistent, frequent, and interesting, as research has already shown that annual cybersecurity training is almost pointless. Tips on spotting deepfakes are important: while deepfakes are becoming increasingly difficult to detect, making staff aware of minor details that can indicate one -- such as strange shadows, a distorted voice, a lack of familiar phrases or terms, or blurred features -- can still benefit them.

Also: 6 essential rules for unleashing AI on your software development process - and the No. 1 risk

Audian Paxson, principal technical strategist at Ironscales, told ZDNET that video deepfakes are typically the hardest for employees to spot, and so training for these should take priority -- even though employers should expect initial pass rates to be quite low.

"88% of organizations in our 2025 research offer deepfake awareness training, but when we asked about first-attempt pass rates on phishing simulations, most fell in the 20-60% range," Paxson commented. "That's not a training failure -- it's a reflection of how good these attacks have gotten. You need realistic simulations that mirror real attack patterns (audio clips of executives, fake video meeting requests) so employees can practice verification behaviors under pressure. And you need to keep running them!"

2. Multi-factor authentication, layered authentication controls

One of the best defenses against fraud-based deepfake attacks is to implement layered, distinct authentication and payment verification controls.

No single employee should be able to authorize high-value payments or the transfer of sensitive information, such as financial or payroll records. Instead, by adding a second level of approval, a convincing deepfake attack has to fool more than one victim -- and this gives staff a chance to step back, think, rationalize, and perhaps detect a deepfake scheme more readily.

It can be simple to implement, too, by using a trusted phone number, Slack message, or internal mail.
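The dual-approval rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a production payment system: the threshold, names, and `PaymentRequest` class are all hypothetical.

```python
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # assumed policy threshold, in dollars

@dataclass
class PaymentRequest:
    """Hypothetical sketch: no single employee can release a high-value payment."""
    requester: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, employee: str) -> None:
        # The requester can never approve their own request.
        if employee != self.requester:
            self.approvals.add(employee)

    def is_releasable(self) -> bool:
        # High-value payments need two distinct approvers, so a convincing
        # deepfake has to fool more than one person to succeed.
        required = 2 if self.amount >= HIGH_VALUE_THRESHOLD else 1
        return len(self.approvals) >= required

req = PaymentRequest(requester="alice", amount=50_000)
req.approve("alice")        # ignored: self-approval
req.approve("bob")
print(req.is_releasable())  # False -- still needs a second approver
req.approve("carol")
print(req.is_releasable())  # True
```

The point of the second approver is exactly the pause the article describes: it forces a human checkpoint through a separate, trusted channel before money moves.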

Also: How to prep your company for a passwordless future - in 5 steps

Another option is to use code words, changed frequently, that have to be said when a payment is requested. Without this inside knowledge, a deepfake voice or video attempt will fail. And if a deepfake attacker has somehow lured an employee into handing over credentials, the use of multi-factor authentication on endpoint devices and systems will create an important barrier to entry.
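A code-word check like the one above can be as simple as a constant-time string comparison. This sketch assumes the current word is distributed out-of-band and rotated frequently; the hard-coded value here is purely a placeholder.

```python
import hmac

# Hypothetical rotating code word; in practice, shared out-of-band and
# changed on a schedule, never stored in source code.
CURRENT_CODE_WORD = "blue-heron-42"

def verify_code_word(spoken_word: str) -> bool:
    # hmac.compare_digest avoids leaking information through timing
    # differences, unlike a plain == comparison.
    return hmac.compare_digest(spoken_word.encode(), CURRENT_CODE_WORD.encode())

print(verify_code_word("blue-heron-42"))  # True: the caller knows the secret
print(verify_code_word("blue-heron-41"))  # False: treat as a possible deepfake
```

A deepfake can clone a voice from podcast clips, but it cannot reproduce a secret that was never spoken in any recorded material.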

"A clear 'call-back' policy is one of the simplest defenses," Nick Knupffer, CEO of VerifyLabs.AI, told ZDNET. "If a request seems unusual or urgent, staff should always call the executive back on a verified number, not the one provided in the message. Multi-factor authentication is another must, ensuring sensitive accounts and payments can't be approved on a single instruction alone."

3. Develop an incident response plan

Businesses concerned about the rise in deepfakes must conduct a thorough audit of their networks, identify their weaknesses, determine training requirements, and review their security and authentication measures, including whether a convincing deepfake could fool existing automated verification systems.

Also: Are Sora 2 and other AI video tools risky to use? Here's what a legal expert says

Organizations can then create realistic incident response plans for deepfake-related security incidents, covering how to ensure mission-critical systems remain online, how to address fraud, available legal remedies, insurance considerations, and how to handle public relations.

4. Trust nothing

Businesses should now start to consider implementing zero-trust architectures and controls, especially as trust in the human element and our ability to detect scams is eroding -- a challenge that is likely to become even harder as deepfake technologies evolve.

According to Gartner, by next year, attacks using AI-generated deepfakes against face biometrics will cause 30% of enterprises to reduce their trust in standalone identity verification solutions, and so multiple points of verification will become necessary, including solutions that can distinguish a live person from a deepfake.

Also: Anxious about AI job cuts? How white-collar workers can protect themselves - starting now

Investing in zero-trust access and control systems, combined with MFA and behavioral analytics software, may all help reduce the risk of deepfakes and associated technologies compromising your network.

Want more stories about AI? Check out AI Leaderboard, our weekly newsletter.
