Bridging the Data‑Trust Chasm: How AI and Personalization Can Rebuild Confidence

October 22, 2025

Suliman Gaouda, Regional Vice President, Middle East & Africa, Sitecore Middle East, contributed this op-ed exclusively for Communicate.

While much of the global conversation around digital transformation centres on the race to adopt emerging technologies, the real differentiator lies in the ability to earn and sustain trust. In the Middle East, where mobile adoption is high and consumers are increasingly open to data-driven services, the conditions for trust-based innovation are already taking shape. As digital experiences become more embedded in everyday life, the brands that succeed will be those that treat personalisation not only as a seamless capability but as a way to strengthen relationships through clarity, relevance and respect. The question is no longer whether consumers will engage, but whether businesses will meet those expectations with integrity and transparency.

The temptation for many organisations is to bolt AI onto existing systems and hope for the best. That approach widens the trust gap. Instead, companies should treat AI adoption as part of their commitment to security and privacy. One way is to reframe existing policies under a clear banner of responsible use. The name matters because it makes a promise: any AI capability must comply with principles of transparency, fairness, accountability, privacy, security and reliability. AI features should be optional, with customers able to turn them on or off. Organisations should avoid training models on customer data and, where appropriate, allow clients to use their own models to retain control. They should test new functions on synthetic, not personal, data. These practices serve as both safeguards and signals, showing that AI can be a tool for trust rather than a Trojan horse.
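
Purely as an illustrative sketch, and not any vendor's actual implementation, the opt-in and synthetic-data principles above might look something like this in code; the types, flags and helper functions here are all hypothetical:

```typescript
// Hypothetical sketch: AI features default to off and activate only on explicit opt-in.
// None of these types or functions come from a real SDK; they illustrate the principle.

interface CustomerPreferences {
  aiFeaturesEnabled: boolean;   // explicit opt-in, off by default
  allowModelTraining: boolean;  // stays false unless the customer flips it
}

interface Recommendation {
  productId: string;
  reason: string; // surfaced to the user for transparency
}

function getRecommendations(
  prefs: CustomerPreferences,
  sessionSignals: string[]
): Recommendation[] {
  // Respect the opt-out: fall back to a non-AI experience.
  if (!prefs.aiFeaturesEnabled) {
    return [];
  }
  // A real system would call a model here; session signals are used
  // for this response only and never persisted for training.
  return sessionSignals.map((signal) => ({
    productId: `placeholder-for-${signal}`,
    reason: `Based on "${signal}" in your current session`,
  }));
}

// Testing on synthetic, not personal, data: profiles are generated, not collected.
function loadSyntheticProfiles(count: number): CustomerPreferences[] {
  return Array.from({ length: count }, (_, i) => ({
    aiFeaturesEnabled: i % 2 === 0,
    allowModelTraining: false,
  }));
}
```

The point of the sketch is that the promise is enforced in the code path itself: the opt-out short-circuits the AI feature entirely, rather than being a preference the system merely records.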

It has become common to criticise personalisation as inherently invasive. The real problem is not personalisation itself but the way some companies collect everything in sight. If you build a dossier of every click, purchase and mood, people will be wary. A more ethical path focuses on contextual signals, such as intent shown within a session, location or device, to shape the experience. Winners in digital experience apply this kind of personalisation: it reduces the need for long-term profiling while still delivering relevance. Equally important is granular consent. Users should be able to see what data is used and adjust permissions in real time. When people understand why something is happening and feel in control, they are more comfortable sharing information. Studies show most consumers are willing to share relevant data if it is collected transparently and used to benefit them. The path to both privacy and personalisation lies in clarity, not coercion.
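
To make the contrast with long-term profiling concrete, here is a hypothetical sketch of session-scoped personalisation gated by granular consent; the signal names and the ConsentSettings shape are invented for illustration, not drawn from any real platform:

```typescript
// Hypothetical sketch: personalisation driven only by in-session context,
// with each signal gated by a consent toggle the user can change at any time.

type Signal = "sessionIntent" | "location" | "deviceType";

interface ConsentSettings {
  // The user sees and controls exactly which signals are used.
  allowed: Record<Signal, boolean>;
}

interface SessionContext {
  sessionIntent?: string; // e.g. search terms typed this visit
  location?: string;      // coarse, e.g. city-level
  deviceType?: string;    // e.g. "mobile"
}

// Return only the signals the user has consented to; everything else is
// discarded at the boundary, so no long-term profile is ever assembled.
function permittedSignals(
  consent: ConsentSettings,
  context: SessionContext
): Partial<SessionContext> {
  const result: Partial<SessionContext> = {};
  if (consent.allowed.sessionIntent && context.sessionIntent) {
    result.sessionIntent = context.sessionIntent;
  }
  if (consent.allowed.location && context.location) {
    result.location = context.location;
  }
  if (consent.allowed.deviceType && context.deviceType) {
    result.deviceType = context.deviceType;
  }
  return result;
}
```

The design choice worth noting is that unconsented signals are dropped before use rather than collected and filtered later, which is what prevents a dossier from forming in the first place.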

Policies do not enforce themselves. A culture of privacy and security must be woven into every stage of design and delivery. That includes building cross-functional oversight, with legal, HR, finance, marketing and product teams evaluating risks and setting boundaries. It means defining unacceptable uses of AI, such as generating defamatory content or making fully automated decisions that affect people's rights. It also means keeping people in the loop: even the most advanced models can amplify biases in their training data, so organisations must insist on human review and accountability. Ethical AI is not a set-and-forget solution; it is an ongoing conversation among diverse stakeholders.
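
One way such boundaries might be expressed, again purely as a hypothetical sketch, is a policy gate that rejects defined unacceptable uses outright and routes rights-affecting decisions to a human reviewer; the use-case categories below are invented examples:

```typescript
// Hypothetical sketch: a policy gate that enforces defined "unacceptable uses"
// and keeps a human in the loop for decisions that affect people's rights.

type UseCase =
  | "contentGeneration"
  | "productRecommendation"
  | "creditDecision"; // affects rights: never fully automated here

const BLOCKED: UseCase[] = []; // uses ruled out by the oversight group
const HUMAN_REVIEW_REQUIRED: UseCase[] = ["creditDecision"];

interface AiRequest {
  useCase: UseCase;
  payload: string;
}

type Outcome = "rejected" | "queuedForHumanReview" | "automated";

function routeRequest(request: AiRequest): Outcome {
  if (BLOCKED.includes(request.useCase)) {
    return "rejected"; // defined as unacceptable, full stop
  }
  if (HUMAN_REVIEW_REQUIRED.includes(request.useCase)) {
    // The model may draft, but a person signs off and stays accountable.
    return "queuedForHumanReview";
  }
  return "automated";
}
```

Encoding the boundaries as explicit lists that a cross-functional group maintains, rather than as judgment calls buried in individual features, is one way to make the "ongoing conversation" auditable.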

The Middle East is not standing still on privacy. The UAE’s Law on the Protection of Personal Data provides a clear framework for confidentiality and privacy. It prohibits processing personal data without consent, except in limited cases, and gives individuals the right to correct their data or restrict its use. Other measures, from consumer protection to cybercrime laws, reinforce the idea that individuals own their data and that companies must respect it. In this environment, responsible use of AI is not optional: it is both a legal requirement and a competitive differentiator. Organisations that embrace these rules as a baseline will be better positioned to innovate, because they will have earned the right to use data.

Bridging the data-trust chasm is about redefining the relationship between businesses and people. It requires moving beyond the false choice between privacy and innovation. When organisations treat trust as a design principle, they unlock the ability to personalise without invading privacy. When they build governance and accountability into AI projects, they reduce misuse and build confidence. When companies embrace regulation as a catalyst, not a burden, they align their actions with the values of the communities they serve. The way forward is clear: put trust at the centre of data strategy, empower users with transparency and control, and commit to responsible innovation. The reward is not just compliance; it is the chance to create digital experiences that feel personal, meaningful and respectful.
