Human-Machine Understanding: how tech helps us to be more human

So, whatever happened to the revolution? AI, they said, would spark a fundamental shift in the world order. Machine learning, they agreed, would automate the drudgery of existence, and liberate society. Don’t get me wrong, we are witnessing mind-blowing breakthroughs and advances every day. But honestly, I’m restless, I’m dissatisfied, and I want more, sooner rather than later. I need technology to ‘get me’ on a deeper, emotional level – and that requires the exquisite synergy of Human-Machine Understanding.

Human-Machine Understanding, or HMU, is one of the lines of enquiry currently getting me out of bed in the morning, and I’m sure that it will shape a new age of empathic technology. In the not-too-distant future, we’ll be creating machines that comprehend us, humans, at a psychological level. They’ll infer our internal states – emotions, attention, personality, health and so on – to help us make useful decisions.

But let’s just press pause on the future for a moment, and track how far we’ve come. Back in 2015, media headlines were screaming about the coming dystopia/utopia of artificial intelligence. On one hand, we were all doomed: humans faced the peril of extinction from robots or were at least at risk of having their jobs snatched away by machine learning bots. On the other hand, many people – me included – were looking forward to a future where machines answered their every need. We grasped the fact that intelligent automation is all about augmenting human endeavour, not replacing it.

Training the future AI workforce

Five or six years on, we can look back on significant change. We have a plethora of institutes and academies training the future AI workforce, buoyed by multibillion-dollar resources. We have bigger datasets, bigger Graphics Processing Units (the GPUs that perform millions of calculations in parallel) and bigger neural networks (the brain-like systems of algorithms dedicated to perception), and there's plenty more to come here.

All of the above has contributed to extraordinary breakthroughs and more exciting headlines. Google’s DeepMind artificial intelligence defeated the world’s number one Go player, much to the dismay of Ke Jie, who was a ‘little sad’ because he thought he’d played pretty well. If you want a creative escape from quarantine, you can now go online to create a personalised poem, with the help of AI. And if you want a little light reading on the side, you can catch up with the thoughts of the GPT-3 language generator, writing here exclusively for The Guardian.

The status quo sucks

That's all well and good, but I'm going out on a limb here to say that, in my opinion, the status quo sucks. I'd argue that there's been remarkably little tangible progress in the products and services that you and I interact with in our personal and professional lives. Maybe I should rephrase that: there hasn't been enough of the right kind of progress for me. As I've already alluded to, we've made great strides in building machines with logical intelligence – but what about social, emotional, or even ethical intelligence? Put it this way: I'm sure Google's AI didn't put its metaphorical arm around Ke Jie to console him in defeat.

Let me share my frustration here. My smart speaker happily misunderstands me six times in a row – and has no qualms about responding in exactly the same way a seventh time. Infuriating. Or let's say I'm test-driving a top-of-the-range motor. It might have all the AI-powered bells and whistles when it comes to sensing danger or staying in lane, but it hasn't a clue whether I'm enjoying the drive or not. See where I'm going with this?

As I said at the outset, technology doesn't get me, you, or any of us. Every day I find myself adjusting to each piece of technology I interact with. Rather than technology accounting for me and my needs, I am the one pouring energy into adapting to it. I just think it should be the other way round. I believe in a future where each and every piece of technology takes account of my emotions, behaviours and wants, to give me the best possible outcome. Instead of a passive interface, I expect products and services to understand my state and make decisions that aid me in my life. That's not too much to ask, surely?

As we dig a little deeper into this, you might argue that there are already many products and services that make assessments of the human state – wearables that help track our sleep quality, for example, or biomarkers that track our stress levels. That's true, and to date we've been able to approach successively harder problems by treating each as a data problem to be fixed with bigger AI. But here's the thing: empathy does not scale with the amount of data processed. To truly move forward, I believe machines need to truly understand humans, and that can only mean one thing: HMU.

HMU is beginning to inch forward

There is currently no single signal that can be read from your brain or body to reliably tell a computer what you are feeling. But there are a variety of multisensory systems through which a computer can begin to infer information about your emotional state. HMU is beginning to inch forward, but only just, and if we continue on the current trajectory of innovation, we are unlikely to meet the milestones I've been talking about. Technology is getting cognitively smarter, but we need equivalent progress in technologies that 'understand' us and interact with us far more seamlessly, so that they can best serve our needs. In this domain, I'm pleased to say that some inroads are being made.

To provide a top line for reference, I reckon three vital things are necessary to create technology that understands us:

  • The creation of a new multidisciplinary field;
  • New models of human cognition and behaviour;
  • Socially intelligent systems that learn naturally.

Digging a little deeper still, let me share some of the specific HMU challenges I'm currently working on. Context understanding is a key one: it is still not understood what type and number of modalities are needed to achieve the highest level of accuracy in affect classification (affect being the outward display of someone's emotional state). Essentially, there's still plenty of work to be done to incorporate contextual information into the affect classification process.
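To make the idea a little more tangible, here is a minimal sketch – purely illustrative, and not a description of our own systems – of how contextual features might be fused with facial and vocal features in an affect classifier. The modalities, feature dimensions and number of affect classes are hypothetical placeholders.

```python
# Illustrative only: a late-fusion affect classifier that combines facial,
# vocal and contextual features. Modalities, dimensions and labels are
# hypothetical placeholders, not a description of any production system.
import torch
import torch.nn as nn

class MultimodalAffectClassifier(nn.Module):
    def __init__(self, face_dim=128, voice_dim=64, context_dim=16, n_affects=5):
        super().__init__()
        # One small encoder per modality.
        self.face_enc = nn.Sequential(nn.Linear(face_dim, 32), nn.ReLU())
        self.voice_enc = nn.Sequential(nn.Linear(voice_dim, 32), nn.ReLU())
        # Context (e.g. time of day, activity, setting) gets its own encoder,
        # so the same facial expression can be read differently in different situations.
        self.context_enc = nn.Sequential(nn.Linear(context_dim, 16), nn.ReLU())
        self.head = nn.Linear(32 + 32 + 16, n_affects)

    def forward(self, face, voice, context):
        # Concatenate the per-modality representations, then classify affect.
        fused = torch.cat(
            [self.face_enc(face), self.voice_enc(voice), self.context_enc(context)],
            dim=-1,
        )
        return self.head(fused)  # logits over affect classes

if __name__ == "__main__":
    model = MultimodalAffectClassifier()
    face = torch.randn(8, 128)    # e.g. facial action-unit features
    voice = torch.randn(8, 64)    # e.g. prosodic features
    context = torch.randn(8, 16)  # e.g. encoded situational context
    print(model(face, voice, context).shape)  # torch.Size([8, 5])
```

The open question, of course, is which modalities to include and how to weight them – the sketch simply shows why context has to enter the model somewhere, rather than being bolted on afterwards.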

Personalisation is another interesting area. Existing deep learning methods have mixed performance when it comes to detecting human emotion. A one-size-fits-all machine learning model is inherently ill-suited to predicting outcomes like mood and stress, which vary greatly due to individual differences. There are ways to assess emotional state – postural movements, facial expressions, physiological markers, language – but they must be combined in a way that best represents each individual. Real-time understanding of human state (affect again) is also crucial; it's obviously not enough to fill in a survey and then take action a day later. There will be a way through this, starting perhaps by pushing the boundaries of neural interfaces, non-invasive measurement, wearables and so on.
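Again purely as an illustration of the personalisation point, the sketch below adapts a frozen, population-level encoder to one individual by fitting a small per-user layer on a handful of their labelled moments. Every name, dimension and number here is a hypothetical stand-in rather than a real pipeline.

```python
# Illustrative only: personalising a shared affect model with a handful of
# labelled examples from one user, while the population backbone stays frozen.
import torch
import torch.nn as nn

def personalise(shared_model, user_features, user_labels, n_affects=5, steps=50):
    """Fit a small per-user head on top of a frozen shared encoder."""
    for p in shared_model.parameters():
        p.requires_grad = False                  # keep the population model fixed

    user_head = nn.Linear(32, n_affects)         # lightweight per-user layer
    optimiser = torch.optim.Adam(user_head.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(steps):
        with torch.no_grad():
            shared_repr = shared_model(user_features)   # frozen shared representation
        loss = loss_fn(user_head(shared_repr), user_labels)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return user_head

if __name__ == "__main__":
    shared_encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stand-in population model
    feats = torch.randn(20, 64)              # 20 labelled moments from one user
    labels = torch.randint(0, 5, (20,))      # their self-reported affect
    head = personalise(shared_encoder, feats, labels)
    print(head(shared_encoder(feats[:1])))   # personalised prediction for a new moment
```

The design choice being illustrated is that only a tiny, cheap-to-train component needs to be unique to each person, while the heavy lifting stays in a shared model – one plausible route around the one-size-fits-all problem.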

Plenty to be getting on with then, but in the meantime I hope this snapshot has whetted your appetite for the potential of HMU. Watch this space for a follow-up article, in which my colleague Monty Barlow will reveal more details of our exploratory work in this area. Intelligent systems will undoubtedly continue to improve in their ability to calm, comfort and soothe us, and to earn our trust and rapport. And it will happen when neuroscientists and psychologists successfully join forces with engineers to teach computers to truly understand humans. Viva la revolution.

Written by Sally Epstein, head of strategic technology at Cambridge Consultants
