By Hadi Khatib
Who hasn't seen AI-generated deepfakes of famous actors, government officials, and other global personalities dressed in weird garb, dancing to a popular meme, or making outlandish announcements?
The world is full of such AI face swaps, most of them driven by efforts to shift public opinion or manufacture artificial followings and subscriptions.
While these can provoke a smile or a smirk, they are generally harmless. But there is nothing funny about how sophisticated deepfakes are becoming, especially in times of military conflict like those we are all experiencing in 2026, putting both brands and consumers at great risk.
As of March 2026, threat actors are releasing voice clones to impersonate embassy officials and airline customer service agents, according to 2026 reports from Fodor's Travel and Vectra AI. Rogue entities try to take advantage of peak travel disruptions caused by airspace closures, deploying "Deepfake Bots" that contact stranded passengers via WhatsApp or Telegram and use hyper-realistic voices to offer priority evacuation seating or private corridor transport for a fee.
“We are seeing manipulated videos, cloned voices, and fabricated announcements targeting brands, governments, and travelers. The real danger is erosion of trust. Organizations must strengthen verification protocols and respond with transparent, real-time communication,” Sonal Chiber, Senior Consultant – Global Benchmarking, Analytics & Strategic Communications at Coalition Greenwich (a CRISIL company, part of S&P Global), said.
Airline and hospitality brands take a direct hit from this.
"When you simulate the voice of a trusted travel agent or a safety official during an active crisis, you aren't just stealing money; you are compromising the regional security architecture," Ayshwarya Chari, Director of Strategy at media intelligence firm 1115Inc, said in Q1 2026.
Media channels are becoming fair game for wars of illusion that succeed, among other things, in stripping away equity from established brands.
The 2026 Deepfake Landscape
Deepfakes have evolved greatly from crude "face-swaps" into AI lookalike models free of the flickering or warping around the jawline that previously served as telltale forensic signs of fakery.
This gets really treacherous for brands when hackers deploy real-time interactive deepfakes: live-streamed synthetic overlays during video calls, used to conduct high-stakes negotiations or impersonate customer service agents on behalf of phony entities.
Imagine high-stakes negotiations between a real GCC-based FMCG distributor and an actual European CPG supplier over a $15 million emergency shipment of grain or essential oils during a regional shipping crisis like today's.
Attackers intercepting this communication would sense the urgency of a distressed procurement director looking to make a deal. They could send a "Teams" invite impersonating the CPG company's Global Head of Sales. When the call starts, the attacker uses a Live Generative Streamer in which an AI persona takes on the face of the sales head. If the attacker leans forward, the AI deepfake leans forward, and so on, in real time. A voice clone would have already processed the Sales Head's exact tone, accent, and breathing patterns. If the transaction turned suspicious because of a sudden change in payment terms agreed upon earlier, the attacker is able to reply immediately and convincingly in a way that feels fluid and human. Potentially, a $15 million transfer to a fraudulent holding account is executed.
What makes this more believable is that voice cloning has reached a point where as little as 3 seconds of original audio can generate a clone with 90 percent accuracy, according to a 2025 McAfee report.
"Organizations must publicly establish that legitimate embassies never cold-call requesting payments or documents. Callback verification using official numbers, real-time voice liveness detection, and booking-tied verification codes create meaningful friction. Telecom partnerships for spoofed-number flagging and consistent traveller education remain essential complements," said Dr. Najla Al Futaisi, Assistant Professor of Artificial Intelligence at the School of Engineering, Applied Science and Technology, Canadian University Dubai.
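What might a "booking-tied verification code" look like in practice? One plausible construction, sketched below in Python, derives a short, time-rotating code from the booking reference with a keyed hash. The secret name, code length, and rotation window are all hypothetical choices, not a published airline protocol.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret held only by the airline's booking system.
SECRET_KEY = b"replace-with-a-secret-from-a-vault"

def booking_code(booking_ref: str, window_secs: int = 600) -> str:
    """Derive a short, time-bound code tied to a booking reference.

    A genuine agent can quote this code; a deepfake bot that only knows
    the passenger's name and flight number cannot derive it.
    """
    window = int(time.time()) // window_secs  # rotates every 10 minutes
    msg = f"{booking_ref}:{window}".encode()
    digest = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return digest[:6].upper()  # short enough to read out over the phone

def verify_code(booking_ref: str, code: str, window_secs: int = 600) -> bool:
    """Accept the current or previous window to absorb clock drift."""
    now = int(time.time()) // window_secs
    for window in (now, now - 1):
        msg = f"{booking_ref}:{window}".encode()
        expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:6]
        if hmac.compare_digest(expected.upper(), code.upper()):
            return True
    return False
```

The point is the friction: the code lives inside the booking system and travels out of band, so a cloned voice alone is not enough to pass.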
According to KPMG, the volume of deepfake content has surged by approximately 900 percent annually over the past few years.
“Shadow” Agencies
Deepfake attackers have gone from lone wolves to sophisticated fraud networks operating under a "Deepfake-as-a-Service" (DaaS) model, a disinformation-spreading service available on the dark web for as little as $50 a month.
"Crises create ideal conditions for deepfakes to spread. Heightened emotion and fragmented information weaken people's ability to critically evaluate what they see. Coordinated disinformation campaigns increasingly exploit disasters, conflicts, and public health emergencies, with fabricated content often spreading faster than corrections," said Dr. Al Futaisi.
Their repercussions are massive. Fake videos of CEOs making incendiary political or economic statements can trigger stock depletions or gluts in the FMCG sector.
Especially geared to exploit pressure points arising from geopolitical conflicts, these groups pose as emergency crisis PR reps at reputable agencies, or even as logistics partners for FMCG giants, complete with fabricated LinkedIn histories and fraudulent credentials.
Enter agentic AI reps. Though they are becoming standard in logistics and FMCG transactions, handling autonomous inventory management, swapping real agentic AI reps for fake ones is a less effective deepfake tactic.
Fake agentic reps simulate these same behaviors to divert shipments or authorize fraudulent payments, but they often bypass standard protocols, which makes them suspicious. They are also likely to lack verifiable metadata, and they can show processing lags during live video feeds.
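One way to make "verifiable metadata" concrete is to require every agent instruction to carry a cryptographic signature from a key registered in advance. Below is a minimal sketch using the `cryptography` package; the key registry, payload fields, and function names are illustrative assumptions, not an industry protocol.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: each legitimate agentic rep holds a registered signing key,
# and counterparties keep the matching public key on file.
agent_key = Ed25519PrivateKey.generate()
registered_public_key = agent_key.public_key()

def sign_instruction(payload: dict) -> bytes:
    """The real agent signs every shipment or payment instruction."""
    canonical = json.dumps(payload, sort_keys=True).encode()
    return agent_key.sign(canonical)

def is_authentic(payload: dict, signature: bytes) -> bool:
    """A fake rep without the registered key cannot forge a valid signature."""
    canonical = json.dumps(payload, sort_keys=True).encode()
    try:
        registered_public_key.verify(signature, canonical)
        return True
    except InvalidSignature:
        return False

order = {"action": "reroute_shipment", "shipment_id": "SH-4821", "amount_usd": 15_000_000}
sig = sign_instruction(order)
assert is_authentic(order, sig)                            # genuine instruction passes
assert not is_authentic({**order, "amount_usd": 1}, sig)   # tampered payload fails
```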
The Negative Impact of Deepfakes
At the very least, deepfakes seed made-up reviews and forged testimonials, leading consumers to purchase poor-quality or counterfeit goods.
On a global scale, consumer losses to AI-driven scams reached an estimated $442 billion in 2025, according to the Global Anti-Scam Alliance's (GASA) Global State of Scams 2025 report.
The average loss per deepfake incident for a business in late 2025 was approximately $680,000, according to a 2025 statistical report by AI-threat intelligence firm DeepStrike.
Deepfakes are also behind scams in which fake influencers promote non-existent FMCG products. According to Gartner's 2026 Strategic Fraud Forecast, the deluge of deepfakes has led up to 30 percent of consumers to lose trust in video advertisements from major brands when the ads lack a trusted digital verification marker or badge.
And when FMCG companies fail to intercept or address deepfakes in real time, the delay can result in 'Equity Bleed', a 2025 cross-industry study led by Publicis Groupe Middle East reported. It found that if a MENA deepfake attacking a CPG brand's safety standards goes uncorrected for more than 4 hours, the window within which a rebuttal of the claims needs to be issued, brand sentiment scores in the region could drop by 22 percent within 2 hours, and fall further the longer the fake circulates.
“In the Middle East, we are seeing the leap directly into Agentic Commerce. But this infrastructure is only as strong as its verification layer. If an FMCG brand in Saudi Arabia or the UAE fails to secure its ‘Digital Twin,’ the reputational damage isn’t just local—it’s viral and immediate,” Darius LaBelle, Managing Director (Middle East) at November Five, an independent digital product agency, said.
Security firms like UncovAI and DeepStrike now scan for 'Neural Fingerprints', the statistical signatures of synthetic generation that point to inconsistencies invisible to humans but clear to forensic AI.
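As a toy illustration of the kind of statistical inconsistency such forensic models learn at scale, generative pipelines often leave unusual frequency-domain artifacts. The crude spectral check below, in Python with NumPy, only sketches the intuition and is emphatically not a production deepfake detector.

```python
import numpy as np

def high_frequency_energy_ratio(gray_frame: np.ndarray) -> float:
    """Share of a frame's spectral energy above a radial cutoff.

    Synthetic imagery can show atypical high-frequency statistics; a real
    forensic model learns such patterns from data rather than relying on
    a single hand-picked ratio like this one.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame)))
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = min(h, w) / 4  # arbitrary boundary between "low" and "high"
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

# In practice, frames whose ratio deviates sharply from a camera-derived
# baseline would be flagged for deeper forensic review.
```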
“Organizations cannot rely on a purely reactive approach. Effective responses rely on real-time AI-powered monitoring, pre-authenticated spokesperson content, and digital watermarking already being in place before incidents occur. When deepfakes surface, swift platform takedowns and a verified source of truth page become essential. The first hour often determines how far the damage spreads,” Dr. Najla Al Futaisi added.
Regaining Trust Pre- or Post-Deepfake
C2PA, the Coalition for Content Provenance and Authenticity, provides an open technical standard for publishers, creators, and consumers to establish the origin and edit history of digital content.
These credentials are nearly impossible to forge, so requiring proof of them can stop many deepfakes before they spread. Brands that adopt the C2PA standard across all digital assets allow consumers to click an icon and verify the origin of the content or footage.
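For illustration, a provenance check with the open-source c2pa-python bindings looks roughly like the sketch below; the `Reader` call shape is an assumption, since the API has changed between releases, and the file name is hypothetical.

```python
# Rough sketch, assuming the c2pa-python bindings (pip install c2pa-python);
# exact constructor and method names vary between library versions.
import json

import c2pa

def inspect_content_credentials(path: str) -> None:
    """Print the signer and recorded edits from an asset's C2PA manifest."""
    try:
        reader = c2pa.Reader.from_file(path)  # assumed constructor
        store = json.loads(reader.json())     # manifest store as JSON
    except Exception as exc:
        print(f"No verifiable Content Credentials found: {exc}")
        return

    manifest = store["manifests"][store["active_manifest"]]
    print("Claim generator:", manifest.get("claim_generator"))
    for assertion in manifest.get("assertions", []):
        # Assertions record actions such as c2pa.created or c2pa.edited.
        print("Assertion:", assertion.get("label"))

inspect_content_credentials("press_release_video.mp4")
```

An asset that arrives with no manifest, or one whose signature fails validation, is the cue to stop and verify through official channels.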
In the case of travel and hospitality, C2PA credentials can work well against deepfakes, but are they enough?
“Ghost listings exploit urgency during crises, particularly when travelers search for safe zones. Verification frameworks such as C2PA can help authenticate content provenance, but they are only one layer. Platforms must strengthen listing verification and encourage travelers to book through trusted channels,” Chiber advised.
And if the damage is done? "Verified statements using C2PA-signed content can help signal authenticity. Independent audits and appropriate compensation can help restore credibility. Trust is rarely rebuilt through a single announcement; it requires consistent, authenticated engagement over time," Dr. Al Futaisi explained.