92% sounds impressive, until the reputational fallout of AI misuse in MENA PR strikes

Across the MENA region, PR professionals juggle diverse audiences with unique languages, histories, and cultural sensitivities. AI tools, trained on broad datasets, lack the lived understanding of these subtleties, writes Raed Jafar.

In the latest PRCA MENA AI in PR Report 2025, a striking statistic stands out immediately: 92 percent of PR professionals in the Middle East and North Africa say they are using AI in some form. On the surface, this signals rapid uptake and modernization of public relations functions across the region. Dig slightly deeper, and another figure enters the conversation. Thirty-nine percent of those same professionals remain worried about the reliability and accuracy of AI-generated outputs, a warning sign of unease about the very tools they’re embracing.

This juxtaposition, near-ubiquitous adoption alongside persistent doubts about output quality, captures the contradiction facing PR leaders today. The widespread use of AI does not automatically translate to high-quality communications or safeguarded reputations. Too often, speed and efficiency win out over accuracy, cultural nuance, and contextual insight. The result? Brands exposed to avoidable missteps that could, and should, have been prevented.

When automation meets cultural blind spots

Across the MENA region, PR professionals juggle diverse audiences with unique languages, histories, and cultural sensitivities. AI tools, trained on broad datasets, lack the lived understanding of these subtleties. In one anonymized instance from a mid-sized agency, an AI-generated press release intended for a Gulf audience included phrasing that had unintended political connotations, sparked complaints from local media outlets, and forced a damage-control rewrite. The mistake might seem small, but in a region where semantics carry weight, the reputational cost was tangible.

In another case, a global brand’s campaign for an international product used AI to produce Arabic social media captions. The output was technically correct but tone-deaf, overlooking local idioms and connotations that made the messaging sound awkward and insincere to native speakers. What was meant to foster connection instead looked like a rushed localization attempt and raised questions about the brand’s understanding of its audience.

These examples underscore a key point: AI can generate copy quickly, but it still struggles with cultural nuance, implicit context, and emotional intelligence, elements at the heart of effective PR.

The “hallucination” problem and misinformation risk

Worry about reliability isn’t abstract. Across industries, AI “hallucinations”—confidently presented but incorrect assertions—have sparked errors in press materials, keynote talking points, and even crisis statements. Analyses show AI tools sometimes churn out information that has no factual basis, leaving human teams unaware until the content is published. One audit of generative AI found nearly 17 percent of outputs contained errors, with many left uncorrected.

In PR, an innocuous factual error can morph into a crisis, especially when competitor or activist groups amplify the mistake. The brand then has to defend not just the content itself, but the credibility of the team that approved it.

Ethical and trust implications

AI’s limitations are not only linguistic or contextual; often, they are ethical. Biases in training data can seep into outputs, inadvertently producing messaging that feels exclusionary or insensitive to certain groups. If key stakeholders—whether journalists, community leaders, or customers—feel a brand lacks authenticity, trust erodes fast. Rebuilding that trust would cost far more than the time saved by automation.

This intersection of speed, bias, and misalignment matters because PR is fundamentally about trust. AI misuse can chip away at that trust incrementally, especially in tightly knit markets where reputational impressions spread quickly through social media and personal networks.

Human oversight is strategic

So what’s the path forward for communicators in MENA?

AI should aid ideation and drafting, not replace the human editorial process. Content must be vetted for accuracy, cultural nuance, brand voice, and ethical alignment before release. Offices should implement structured checkpoints where outputs are validated against trusted sources. Mistakes that could have been caught early undermine both reputation and internal confidence.

Even though AI can mimic tone, only clear governance ensures consistency. Creating style and ethics guides helps keep AI outputs on brand and culturally sensitive. Teams must understand not just how to use tools, but their limitations and risks. Technical fluency without ethical judgment is a recipe for reputational misfires. Given how quickly errors can spread, responding swiftly and transparently when an AI-related mistake occurs is critical to maintaining credibility.

Reclaiming strategic leadership in the AI era

AI will continue to shape PR practices in the years ahead—that much is clear. The goal shouldn’t be to slow adoption, but to elevate how it’s used. Communicators in the MENA region are positioned to lead not just in tool usage, but in defining frameworks that marry technological innovation with cultural insight and ethical judgment.

Widespread adoption is impressive. But reputation, once lost, is hard to win back. And in public relations, quality always matters more than speed. Recognizing the limits of AI is strategic stewardship of trust in an age of automation. Ensuring the technologies we embrace amplify credibility rather than undermine it must be the North Star guiding every communications strategy.

(Raed Jafar is a public relations executive at Keel Comms with experience across industries such as real estate and technology)
