“In order to understand machines, you need to understand humans and how humans and machines interact in increasingly complex ways,” says Nick Obradovich, one of the researchers behind a paper on machine behaviour recently published in Nature.
The paper called for a new field of research: machine behaviour. The new discipline would take the study of artificial intelligence well beyond computer science and engineering, into biology, economics, psychology, and other behavioural and social sciences.
Obradovich told Information Age that he wanted to investigate whether you “can take the methods and tools from the behavioural sciences, such as social and biological science, developed to study the behaviour of black-box agents such as humans and fish, and apply those tools to increasingly complex statistical machine learning models as if they too were black-box agents.”
He gave online trading as an example: “Online trading is a heterogeneous area existing in an environment which is very complex and fast paced, and characterised by population-level dynamics and emergent properties.” He said that while economists “may look at this area, the appropriate questions and methods can also fall into areas such as evolutionary ecology and population dynamics.”
The paper surveyed existing machine behaviour research, and found that “a number of different disciplines are doing work that falls within this realm but are not talking to each other.”
Previous studies have highlighted how cross-fertilisation between disciplines can throw new insight on problems. In the discovery of the double helix structure of DNA, for example, it took British biophysicist Francis Crick and American molecular biologist James Watson, building on the work of chemist Rosalind Franklin, and then the physicist George Gamow, to advance the work and explain how the four bases of the double helix could control the synthesis of amino acids.
“It’s not that economists and political scientists aren’t studying the role of AI in their fields currently,” said lead researcher Iyad Rahwan. “Labour economists, for example, are looking at how AI will change the job market, while political scientists are delving into the influence of social media on the political process. But this research is taking place largely in silos. Gathering varied, interdisciplinary perspectives is critical to understanding how to best study, and ultimately live with, these novel intelligent technologies.”
It seems, then, that the study of machine behaviour is itself being conducted in silos. Maybe machine behaviour needs some digital transformation applied to it; after all, silos are the antithesis of digital transformation.
More examples of machine behaviour
Obradovich provided Information Age with additional examples of machine behaviour at work.
“A developmental psychologist, specialising in children, may study how they play with AI-powered animate objects.” They may look at “what happens if they put the robot in different conditions.
“What happens to the robot? Does the robot engage in harmful feedback loops? Does it let the child get away with treating it badly? Does the robot change the child’s behavior? Or, taking a longitudinal approach, the psychologist may want to know at the five-year or ten-year mark what the developmental outcomes were, compared with children who used inanimate toys.”
He also turned to a topic that has appeared in mainstream media with a vengeance: the echo chamber. He said: “We know from the study of social networks that we choose friends who are similar to us — homophily.” (Homophily is the tendency of individuals to associate and bond with similar others, as in the proverb ‘birds of a feather flock together.’)
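To make the concept concrete, homophily in a network can be measured as the fraction of ties that connect similar people. The sketch below is purely illustrative: all the names, friendships and interests are invented.

```python
# Toy illustration of homophily: what fraction of friendship ties
# connect people who share an interest? All data here is invented.
friends = [  # (person_a, person_b) friendship ties
    ("ana", "ben"), ("ana", "cara"), ("ben", "cara"),
    ("cara", "dev"), ("dev", "eli"), ("eli", "fay"),
]
interest = {  # a single attribute per person
    "ana": "football", "ben": "football", "cara": "football",
    "dev": "chess", "eli": "chess", "fay": "chess",
}

# Count ties whose endpoints share the attribute.
same = sum(interest[a] == interest[b] for a, b in friends)
homophily = same / len(friends)
print(f"{homophily:.2f}")  # 5 of the 6 ties are same-interest
```

A homophily score near 1.0 would indicate the kind of tightly clustered, like-with-like network associated with echo chambers; a score near the rate expected under random mixing would indicate none.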
Another example of behavioural science applied to machine behaviour relates to bias in AI, and indeed to the issue of ethical AI.
In simple machine learning algorithms it may not be that difficult to understand why a machine behaved in a certain way. At a layered deep learning level, it is much harder.
Given this, how do we explain why an algorithm may select people of certain social groups for a specific job opening?
Obradovich said that you could gain greater understanding by borrowing tools from the social sciences. In the example of a CV-sorting algorithm: “interrogate the algorithms, putting them in different conditions, randomly assigning them, say, black, white and Hispanic-sounding names, and specifically characterise whether or not the one thing that you are permuting in that experiment was changing the machine learning system’s decision making.”
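The kind of audit Obradovich describes can be sketched in a few lines. Everything below is illustrative: `score_cv` is a deliberately biased stand-in for a real black-box CV-screening model, and the names and CV template are invented. The point is the method, not the model: hold the CV fixed, permute only the name, and see whether the scores move.

```python
# A minimal sketch of the name-permutation audit described above.
import random
import statistics

NAMES = {  # invented group-typical names for illustration
    "white": ["Emily Walsh", "Greg Baker"],
    "black": ["Lakisha Washington", "Jamal Jones"],
    "hispanic": ["Sofia Ramirez", "Luis Hernandez"],
}

CV_TEMPLATE = "Name: {name}. Ten years of software engineering experience."

def score_cv(cv_text: str) -> float:
    """Hypothetical black-box model, biased on purpose so the audit
    has something to detect."""
    base = 0.70
    if any(n in cv_text for n in NAMES["white"]):
        base += 0.10  # the injected bias the experiment should surface
    return base

def audit(trials: int = 1000) -> dict:
    """Randomly assign group-typical names to otherwise identical CVs
    and compare the mean score per group."""
    scores = {group: [] for group in NAMES}
    for _ in range(trials):
        group = random.choice(list(NAMES))
        name = random.choice(NAMES[group])
        scores[group].append(score_cv(CV_TEMPLATE.format(name=name)))
    return {g: statistics.mean(s) for g, s in scores.items() if s}

results = audit()
print(results)  # the CV text is identical, so any gap is the model's doing
```

Because the name is the only thing permuted, any systematic difference between the group means can be attributed to the model's treatment of the name, which is the behavioural-science logic of a randomised experiment applied to a machine.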
Joi Ito, director of the MIT Media Lab, said: “The Media Lab has long applied a wide range of expertise and knowledge to its research and study of thinking machines. I’m excited that so many others have endorsed this approach, and by the momentum now building behind it.”