By Professor Paul Hopkinson, Head of Edinburgh Business School at Heriot-Watt University Dubai and Academic Director for Heriot-Watt Online
The rapid increase in the use of facial recognition technology has been accompanied by growing concerns about algorithmic bias. From documented instances of racial and gender bias in AI-powered facial recognition tools to smartphone applications that disproportionately misidentify certain social groups, there is a pressing need to recognize and address these issues.
Moreover, concerns about privacy and the mishandling of consumer data create an atmosphere of mistrust around the rapid growth of data analytics and the application of new technologies such as AI.
AI-aided workplace automation continues to grow apace, with applications ranging from content writing and customer service to marketing and retail, providing a clear demonstration of the unprecedented importance that technology is assuming in our daily lives. Despite tech vendors’ assurances that AI working alongside humans can free workers from the tedious aspects of their jobs and enable them to focus on more critical tasks, the issue of trust remains a significant obstacle to greater adoption. Although many of these concerns are natural by-products of the introduction of novel technologies, we should not dismiss them. Since algorithmic bias is in essence a reflection of human bias, how should we evaluate the use of AI in decision-making?
The growing use of AI in decision-making is regarded by many as inevitable, and there are many existing use cases that demonstrate its efficacy. For example, studies have demonstrated the effectiveness of machine learning and pattern-recognition algorithms in the early detection of cancers that might otherwise be missed by trained pathologists; according to studies from the University of California, this could reduce the mortality rate by 30%. However, accountability in the use of this technology is key, and organizations that implement it must ensure that it is used responsibly. Fortunately, each company can decide for itself the scope of AI involvement and the degree of manual intervention.
As the pace of change in business operations accelerates, the success of companies is determined by how quickly they adjust, and keeping up with digital transformation demands another level of speed and responsiveness. This is where AI becomes a solution. In fact, according to Bain & Company, the effectiveness of decision-making drives 95% of business performance. The practices companies follow have a considerable impact on the performance of the business and its employees. In this context, the use of AI in decision-making is inevitable because it is essential to business survival in the age of digital transformation.
As many businesses expand internationally, whether in retail, marketing, or supply chain operations, AI-enabled automation offers significant benefits. It allows every step of the supply chain to be verified and tracked. Machine-learning models that operate in real time and take automated, proactive action add value for both employees and customers, who might otherwise face delays to urgent requests while awaiting a response from customer service representatives. In the financial and banking sectors, AI and machine learning techniques can be indispensable for highlighting suspicious behavior and preventing significant financial losses.
However, with great power comes great responsibility. The leaders of businesses and institutions that deploy these technologies are responsible for both their successes and failures. After all, technology is in place to augment and enhance human decision-making and automate routine tasks. But the technology is not without its limitations: algorithms learn from data created by humans, with all their inherent biases and boundedly rational behavior. This is where accountability comes into play. Ultimately, a negative customer experience, a damaged reputation, or an undetected cyberattack will always be the company’s responsibility. The good news is that the more data AI has access to, the better its decision-making becomes. The key is accountability and intervention by leaders in this step-by-step process. It is a delicate balance of allowing innovation to flourish while safeguarding privacy and security.