The ethical implications of chatbots

Technology is a great enabler. The countless advancements, ranging from self-driving cars to virtual reality and a million others, are astounding badges of honour for human ingenuity.

OK, that is the positive preamble. Maybe it is just the alarmist in me, but where we seem to be lagging in our meteoric rise as a species is in coupling the question “Can we do this?” with “Should we do this?”. Without that pairing, we may go down a near-Frankensteinian route towards disaster.

And with that segue, I would like to talk about Microsoft’s recent patent for “creating a conversational chatbot of a specific person”. First off, it is brilliant as a thought. If we can train chatbots on the responses of thousands of different people, it makes sense that we could do it from one person’s inputs as well. And with us blindly proffering pieces of ourselves to the digital gods, there is probably no end of information with which Big Tech could rebirth us, if not now, then in the near future. But what this piece of genius may not be considering is whether we should be doing this at all.

The Why? A few quick reasons come to mind:

  1. Pounds and dollars – If you build it, they will come… and pay you for it. The bottom line is that this could probably be monetised in a fairly lucrative way.
  2. Parenthetical advances – There is a lot of research that yields incidental benefits: findings that can point people towards new ways of using this technology, or towards new directions for further research.
  3. Morbid reasons – As this Forbes article states, by drawing on your online presence and input from your living days, your chatbot could outlive you and provide an ongoing presence after your… departure.
  4. Because we can – Enough said.


The most reasonable of the motives above is arguably point two. The research related to this patent could be hugely beneficial in ways we haven’t even conceived of yet. However, the unforeseen has no moral compass and the fact is that it could also be used for nefarious purposes outside of the original intent of its creators.

You might think I have seen one too many Hollywood movies. However, if you consider the already burgeoning list of morally ambiguous ways in which technology is used, it is not so much science fiction as science fact.

Just one example is bias in Artificial Intelligence (AI) algorithms. While AI can help doctors interpret test results more accurately in certain instances, these algorithms are trained on historical data that has our inherent human biases built in. So when studies such as this show that black patients would have gotten better life-saving medical care “if their kidney function had been estimated using the same formula as for white patients”, we should probably take notice and do something about it.

In addition to simply creating your digital zombie doppelganger, does this technology potentially open Pandora’s box for identity theft? The Federal Trade Commission in the US reports that there were $3.3 billion in fraud losses in 2020 alone.

If hackers can additionally get at one’s tone of voice, word choice, etc., the sky is the limit to their “success” in impersonating someone. We could keep going on this hyperbolic ride by adding a layer of ‘deepfakes’ (AI-generated fake videos). So now we could impersonate someone’s tone of voice with a video of “them” saying it. While we are at it, we might as well invite Boston Dynamics to the party with their walking, talking and (yes) dancing robots, but by now I am sure you catch my doomsday drift.


The practice of asking “why” is not a novel concept; this tension between science and ethics has long persisted, and is in fact necessary. But it seems as if this time, it is being played out in a new arena, one in which regulation is struggling to keep up. The rate at which we are willingly giving our private data to Facebook and its peers far outpaces anything that legislation such as GDPR and its counterparts can rein in. We just accept the delicious cookies and move on. And then we are shocked by outcomes like (allegedly) swayed election results. These breaches were enabled using the tools available to those who know how to manipulate them. And the aforementioned Microsoft patent could be another weapon in the arsenal.

Privacy pertaining to our data is the great debate of our time, or at least it should be. Let me be clear: I am not advocating abandoning all scientific breakthroughs and returning to the dark ages (they had their issues too). The thing to do, however, is to foster discussion and debate. And this needs to come from a wide swathe of society, not just those in the know about technological advancement who “speak the language”. More voices mean fewer myopic points of view and a consideration of all relevant aspects of an issue.

Let us talk about the ethical implications. Let us talk about what else this technology could be used for. Let us talk about whether we can do this another way. Let us talk about what checks and balances we need in place to do this safely. Let us talk.

Written by Hargo Kalra, business intelligence manager at ContactEngine

Editor's Choice

Editor's Choice consists of the best articles written by third parties and selected by our editors. You can contact us at timothy.adler at