
Digital tools or digital traps? Rethinking consumer empowerment in the age of AI

August 15, 2025

Dr. Luciana Blaha, Assistant Professor at the School of Social Sciences, Edinburgh Business School, Heriot-Watt University, has provided this opinion exclusively for Communicate.

The GCC has consistently sought to scale its industries, experiencing accelerated growth in digitalisation and digital marketplaces over the last decade. This has driven increasing demand for platform-based services, financial flexibility, and customisation from a digitally equipped population, over 90% of whom own smartphones.

As these services expand, so does the digital infrastructure required to support them, ranging from physical data centres to cloud services, which are expected to grow at over 25% year on year to reach an estimated $51bn by 2030, according to McKinsey. According to a 2025 Boston Consulting Group review, the UAE currently leads the region's service infrastructure with 35 data centres, while Saudi Arabia has encouraged its companies to deploy emerging technologies like AI in over 70% of its services, fuelling a boom in the market. In the e-commerce segment in particular, companies like Openxcell, Transcom and ReviuAI have grown significantly, showcasing the success of this market segment. Yet the demand persists for both a skilled workforce to configure these services and a skilled customer base to use them to their full potential. Let’s dive deeper into how this works, and why it becomes complex, particularly in the case of artificial intelligence (AI)-based systems.

AI, particularly through machine learning (ML) and deep learning, is central to creating hyper-personalised experiences across various sectors, including e-commerce, social media, education, and healthcare. Companies like Amazon and Alibaba leverage advanced recommendation engines to tailor shopping experiences, predict user interests, and curate dynamic product feeds based on data about personal habits, such as browsing history and past purchases.
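To make the mechanics concrete, here is a minimal sketch of item-based recommendation in Python, assuming a toy interaction matrix. It illustrates the general technique of scoring unseen products by their similarity to a user’s history; it is not how Amazon’s or Alibaba’s engines actually work, and the data is invented.

```python
import numpy as np

# Hypothetical user-item interaction matrix: rows are users, columns are
# products; entries are implicit feedback (1 = purchased or browsed).
interactions = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0],
], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item interaction vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, top_n: int = 2) -> list[int]:
    """Score items the user has not seen by similarity to items they have."""
    n_items = interactions.shape[1]
    scores = np.zeros(n_items)
    for candidate in range(n_items):
        if interactions[user, candidate]:
            continue  # skip items already in the user's history
        for seen in range(n_items):
            if interactions[user, seen]:
                scores[candidate] += cosine_similarity(
                    interactions[:, candidate], interactions[:, seen])
    ranked = np.argsort(scores)[::-1]
    return [int(i) for i in ranked if scores[i] > 0][:top_n]

print(recommend(user=0))  # items most similar to user 0's past activity
```

Production systems replace this pairwise loop with learned embeddings and deep models, but the underlying idea is the same: past behaviour becomes the signal that shapes what you see next.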

While less visible than large language model (LLM)-based customer services such as AI chatbots, these more traditional models help companies provide reliable services while giving customers a better experience of using them. This in turn improves companies’ conversion rates, order values, customer loyalty, and retention. In social media, platforms like YouTube, Facebook, and TikTok use AI to dynamically tune content recommendations based on user behaviours such as likes, shares, watch durations, and scrolling patterns, with the aim of improving the experience for consumers. However, this pervasive personalisation has a “dual nature”.
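A feed ranker of this kind can be pictured, in simplified form, as a weighted sum of behavioural signals. The sketch below is a hypothetical illustration: the signal names and weights are assumptions of mine, and real platforms learn them with far more sophisticated models.

```python
from dataclasses import dataclass

# Illustrative signal weights; these names and values are assumptions,
# not any platform's actual parameters.
WEIGHTS = {"like": 1.0, "share": 3.0, "watch_seconds": 0.1, "scroll_past": -0.5}

@dataclass
class Post:
    post_id: str
    signals: dict  # signal name -> count or measured value for this user

def engagement_score(post: Post) -> float:
    """Weighted sum of behavioural signals: the core of a simple feed ranker."""
    return sum(WEIGHTS.get(name, 0.0) * value
               for name, value in post.signals.items())

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order candidate posts so the most 'engaging' appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("a", {"like": 12, "scroll_past": 40}),
    Post("b", {"share": 5, "watch_seconds": 90}),
])
print([p.post_id for p in feed])  # 'b' outranks 'a' under these weights
```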

The more we rely on AI-based recommendations for the products and services we choose, the more they become part of complex decision-making processes, with real impact in areas ranging from personal finance to business performance monitoring. While the final decisions are still made by humans, maximising the benefits of those decisions requires us to be capable of discerning when suggestions based on algorithmic data processing are correct, appropriate, or grounded in real data. In other words, whether private individuals or business customers, consumers need to be able to discern when a recommendation is likely to be correct or wrong, biased or fair.

While beneficial for engagement, AI algorithms can simultaneously create unintended harmful effects. In e-commerce, personalised recommendations can lead to impulse buying and potential financial strain. When combined with ‘buy now, pay later’ solutions, for instance, this could create individual financial strain and, given the market’s growth rate of over 13%, perhaps even have a compound negative effect on the local economy. On social media, algorithms risk creating “echo chambers” or “filter bubbles,” repeatedly exposing users to content that reinforces pre-existing beliefs while limiting exposure to diverse perspectives, for instance in relation to healthcare and respecting medical advice.

This trap of overreliance can occur because users may process AI explanations superficially, treating their mere presence as a sign of accuracy rather than evaluating their content. In a study on human correctness likelihood, Shuai Ma and colleagues showed that focusing primarily on making AI systems fairer and more correct, while valuable, may distract from developing ourselves as consumers. In fact, they showed that many consumers had either too little or too much confidence when accepting suggestions or personalised content from AI-enabled products. They proposed two approaches for consumers and companies:

  1. Looking at the success and impact of previous decisions in a similar task (for example, if you chose hotels very well in the past based on your own criteria but still trust the AI recommendation more, perhaps you should increase confidence in your own experience; see the sketch after this list), and
  2. Configuring digital tools to showcase both human and AI-based information in the content (for instance, displaying both customer reviews and AI suggestions as filter options for reports or recommendations, showing which has had higher accuracy over time, or lowering/hiding AI-based recommendations for services where humans may be overreliant on them and educating customers more instead).
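A minimal sketch of the first approach, assuming we keep simple records of how often our own past choices and the AI’s past suggestions on a similar task turned out well. The function names and the running accuracy comparison are illustrative assumptions, not taken from Ma and colleagues’ implementation.

```python
def accuracy(outcomes: list[bool]) -> float:
    """Fraction of past decisions on this task type that turned out well.

    With no history, fall back to 0.5 (no reason to trust either side more).
    """
    return sum(outcomes) / len(outcomes) if outcomes else 0.5

def whose_suggestion(human_outcomes: list[bool], ai_outcomes: list[bool]) -> str:
    """Compare track records to flag likely over- or under-reliance on the AI."""
    human_acc, ai_acc = accuracy(human_outcomes), accuracy(ai_outcomes)
    if human_acc > ai_acc:
        return (f"Your past accuracy ({human_acc:.0%}) beats the AI's "
                f"({ai_acc:.0%}): weight your own judgement more.")
    return (f"The AI's past accuracy ({ai_acc:.0%}) beats yours "
            f"({human_acc:.0%}): its suggestion deserves more weight.")

# Example: a user who picked good hotels 8 times out of 10, versus an
# AI recommender that was right 6 times out of 10 on the same task.
print(whose_suggestion([True] * 8 + [False] * 2, [True] * 6 + [False] * 4))
```

The point is not the arithmetic but the habit: calibrating confidence against evidence of past performance, on both sides, rather than against the mere presence of an AI explanation.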

Apart from discerning the risk involved in a task, this ultimately means revisiting ourselves and, sometimes, asking why we prefer content that confirms our preconceptions over analysing the data.
