The ability of artificial intelligence (AI) to grasp morality and empathy is among the concerns customers express about interacting digitally with brands.

This lack of trust in AI was revealed in a survey of 5,000 consumers worldwide, conducted by Pegasystems Inc. and research firm Savanta. They found that many don’t understand the extent to which AI can make their interactions with businesses better and more efficient, while one in ten said they believed AI cannot tell the difference between good and evil.

These suspicions about morality seeped into customers’ overall opinions of brands, with 68 percent believing organisations have an obligation to do what is morally right for the customer, beyond what is legally required.

Sixty-five percent don’t trust that companies have their best interests at heart, raising significant questions about how much trust they place in the technology businesses use to interact with them. Less than half (40 percent) of respondents agreed that AI has the potential to improve the customer service of businesses they interact with, while less than a third (30 percent) felt comfortable with businesses using AI to interact with them.

Just nine percent said they were “very comfortable” with the idea. At the same time, one-third of all respondents said they were concerned about machines taking their jobs, with more than one quarter (27 percent) also citing the “rise of the robots and enslavement of humanity” as a concern.

Moral choice: Customers have doubts AI can tell the difference

Over half (53 percent) said it’s possible for AI to show bias in the way it makes decisions, and 53 percent also felt that AI will always make decisions based on the biases of the person who created its initial instructions, regardless of how much time has passed.

Meanwhile, just 12 percent of consumers agreed that AI can tell the difference between good and evil, while over half (56 percent) of customers don’t believe it is possible to develop machines that behave morally. Just 12 percent believe they have ever interacted with a machine that has shown empathy.

The results of the survey coincide with plans by Pega to “improve empathy in AI systems”. Speaking of the poll results, the firm’s VP of Decisioning and Analytics, Dr Rob Walker, said: “Our study found that only 25 percent of consumers would trust a decision made by an AI system over that of a person regarding their qualification for a bank loan. Consumers likely prefer speaking to people because they have a greater degree of trust in them and believe it’s possible to influence the decision, when that’s far from the case.

“What’s needed is the ability for AI systems to help companies make ethical decisions. To use the same example, in addition to a bank following regulatory processes before making an offer of a loan to an individual, it should also be able to determine whether or not it’s the right thing to do ethically.”

He continued: “An important part of the evolution of artificial intelligence will be the addition of guidelines that put ethical considerations on top of machine learning. This will allow decisions to be made by AI systems within the context of customer engagement that would be seen as empathetic if made by a person. AI shouldn’t be the sole arbiter of empathy in any organisation and it’s not going to help customers to trust organisations overnight. However, by building a culture of empathy within a business, AI can be used as a powerful tool to help differentiate companies from their competition.”