Machines v humanity: the ethical challenges behind AI

In retail, the use of AI is exploding as companies scramble to release chatbots or make sense of the huge amounts of data at their fingertips.

However, Google’s recent experiences should act as a cautionary tale. When Google unveiled Duplex, a technology that can carry out real-world tasks over the phone using an AI that imitates the natural language of a human, it was met with cheers and applause by the developers in the room.

However, outside observers of the Google I/O developer conference reacted with horror at what Google had created.

The AI called a hair salon and a restaurant to make appointments and fooled the person receiving the call into believing they were speaking to another human with a series of “mmm hmmms”, “ers” and colloquial phrases such as “oh, I got ya”.

Google was chastened enough by the backlash to release a statement insisting it is developing the feature with “disclosure built-in”, making it clear that callers are speaking to a machine rather than a human.
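What “disclosure built-in” might mean in practice can be illustrated with a minimal sketch. Google has not published how Duplex implements this, so all names below are hypothetical: the idea is simply that an agent cannot speak until it has identified itself as automated.

```python
# Minimal sketch of "disclosure built-in": the agent cannot utter anything
# before first identifying itself as automated. Hypothetical illustration
# only - not Google's actual Duplex implementation.

class DisclosingAgent:
    DISCLOSURE = "Hi, this is an automated assistant calling on behalf of a customer."

    def __init__(self):
        self.transcript = []

    def open_call(self):
        # Disclosure is the mandatory first utterance, not an optional add-on.
        self.transcript.append(self.DISCLOSURE)

    def say(self, utterance):
        if not self.transcript:
            # Refuse to speak before disclosing.
            self.open_call()
        self.transcript.append(utterance)


agent = DisclosingAgent()
agent.say("I'd like to book a haircut for Tuesday at 3pm.")
print(agent.transcript[0])  # the disclosure always comes first
```

The design choice here is that disclosure lives in the agent’s core speaking path rather than being bolted on by each caller, which is the distinction between “built-in” and an afterthought.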

Since the demo of the tech last month, little more has emerged about Duplex, and Google is cagey about discussing its development.

And last month, during a panel discussion on ‘The Future of Chatbots’ at the CogX AI conference in London, Google Assistant UK lead Alice Zimmerman refused to discuss Duplex when asked about the tech.

At the conference there was an entire stage devoted to discussing the ethics of AI, which demonstrates just how seriously companies should consider the implementation of AI. The consensus was that retailers must take care when developing their own AI technologies to ensure they design them in a manner that is palatable to their customers.

The ethics of chatbots

Chatbots are the current must-have for retailers. M&S, Lidl and eBay are among the retailers to have developed AI-powered chatbots.

However, retailers also appear reluctant to discuss the development of their AI applications. Both M&S and eBay declined to comment for this article, while Lidl did not respond to a request for comment. 

Lidl’s Margot the Winebot, which acts as a digital sommelier, has won numerous plaudits including the best consumer chatbot award at CogX.

Tobias Goebel, the senior director of emerging technologies at Aspect Software, developed Margot the Winebot on behalf of Lidl and believes there is “something magical” about conversational interfaces.

Margot can also understand emojis, making it an “engaging, friendly, and fun” chatbot.
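At its simplest, emoji understanding amounts to treating an emoji like any other word that signals intent. The sketch below is a hypothetical illustration of that idea; Aspect Software has not published how Margot actually does it.

```python
# Hypothetical sketch of emoji-aware intent matching.
# Not Aspect Software's actual implementation of Margot.

EMOJI_INTENTS = {
    "\U0001F377": "wine_recommendation",   # wine glass emoji
    "\U0001F9C0": "food_pairing",          # cheese wedge emoji
    "\U0001F44D": "positive_feedback",     # thumbs up emoji
}

def detect_intent(message):
    # Treat an emoji like a word: the first recognised emoji wins.
    for char in message:
        if char in EMOJI_INTENTS:
            return EMOJI_INTENTS[char]
    return "fallback"


print(detect_intent("What goes with \U0001F9C0?"))  # food_pairing
```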

While Margot the Winebot is a benign experience and is clearly a chatbot, the anthropomorphism of its name and its imitation of human communication methods, such as the use of emojis, are indicative of how AI technology is being deployed.

The danger is that companies become lackadaisical about how the technology develops further.

Goebel admits AI could be a threat to humans but claims there is “nothing for the average person to be concerned about” and argues it will “not affect us in the next few years or probably even decades”.

He is a proponent of the attitude of former Intel CEO Andy Grove, who claimed: “In technology, what can be done will be done.”

Creating ethical principles

The runaway development of technology has come under intense scrutiny and the tech giants have received criticism for their perceived lack of ethics from both external and internal sources.

Google’s deal to help the US military develop AI reportedly led a number of employees to resign in protest, and is believed to have prompted chief executive Sundar Pichai to release a set of ethical principles for AI, including the stipulation that it must be “socially beneficial”.

Dr Nicola Millard, head of customer insight and futures at BT Global Services, believes there is a lot that needs to be discussed around ethics when it comes to the development of new AI technologies.

“The trouble is you get into this uncanny valley problem where you are creating a bot that is almost human but not quite,” says Millard.

She argues it is best to have a mix of human and machine intelligence and prefers the use of the term “augmented intelligence” rather than artificial intelligence.

“Machines are very good where there is established process and rules and data and patterns, but they are not really good at empathy or context problem solving or creativity or caring, or any of those things that people are pretty good at,” says Millard.

The introduction of GDPR has already caused companies to take significant stock of how they handle AI.


GDPR’s requirement for ‘privacy by design’ should provide a strong guiding principle in how businesses approach the development of any technology. Ethics should be built into the tech at the initial design phase rather than being a bolt-on added later following a consumer backlash.

Being up front with the customer

Dennis Mortensen, chief executive and founder of an AI virtual assistant firm, reveals his tech came unstuck in the first rounds of development.

“We started out in early 2014 with no disclosure of Amy being an AI, by design. We were afraid that such a disclosure would stymie the dialogue and create unnecessary confusion on the guest end,” he says. “To be blunt, we were wrong and almost immediately figured out, like Google, that there was little to win and a lot to lose by not being up front.”

The key is to ensure the customer is placed at the forefront when deploying technology. 

“We need to go from the customer in rather than the corporate out,” says Millard. She also advises that care should be taken with how data is used by AI systems.

“There is a fine line between a butler and a stalker,” says Millard. “I want the butler, something that understands quite a bit about my preferences and needs because as a customer I want an easier life.

“But if it is starting to leap to things I don’t want it to know about me because it might have genuinely learned a certain behaviour or it has come to a conclusion that is wrong from the data then that could cross the creepy line.”

Millard does not believe a declaration needs to be made every time AI is behind a decision, but suggests measures need to be put in place to ensure the customer is in control.

“You need a feedback loop. AI works on feedback, that is how it learns, and having that feedback loop is vital,” says Millard.
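Millard’s feedback loop can be sketched in a few lines. This is a hypothetical illustration, not any retailer’s real system: the bot records explicit customer ratings and retires answers that keep disappointing, which is one concrete way to keep the customer in control.

```python
# Minimal sketch of a customer feedback loop for a chatbot.
# Hypothetical illustration only - not any retailer's real system.
from collections import defaultdict

class FeedbackLoop:
    def __init__(self, threshold=-2):
        self.scores = defaultdict(int)   # answer -> running feedback score
        self.threshold = threshold       # at or below this, retire the answer

    def record(self, answer, thumbs_up):
        # The customer stays in control: every reply can be rated.
        self.scores[answer] += 1 if thumbs_up else -1

    def is_retired(self, answer):
        # Answers that repeatedly disappoint stop being used.
        return self.scores[answer] <= self.threshold


loop = FeedbackLoop()
loop.record("Try our oaky Chardonnay", thumbs_up=False)
loop.record("Try our oaky Chardonnay", thumbs_up=False)
print(loop.is_retired("Try our oaky Chardonnay"))  # True
```

The point of the sketch is that learning from feedback and giving the customer a lever over the system are the same mechanism, which is what makes the loop vital rather than optional.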

For this brave new world of machines to succeed, humanity must be placed front and centre when developing the technology.