Machine learning ethics at AWS re:Invent

Amazon Web Services (AWS) has been heavily focused on machine learning over the past few years, releasing a number of products and features which showcase how effective the technology can be for organisations and consumers.

But while the technology – much like its parent, artificial intelligence – can do a lot of good, there are always questions about what it means for humanity in the long term, both in terms of a reduction in jobs and in terms of how these products and services can be used unethically.

In a press Q&A last week at AWS re:Invent in Las Vegas, AWS CEO Andy Jassy fielded several questions about how the company intends to ensure its machine learning capabilities are used ethically by customers.

In response, Jassy cited use cases where machine learning has already had a positive influence, such as reducing human trafficking and reuniting children with their parents. However, he acknowledged that people may use machine learning for the wrong reasons too.

“Even though we haven’t had a reported abuse case of our machine learning services, we’re very aware that people are able to do things with these services – like they can with any technology – in that they can do harm in the world,” he said.

Jassy suggested that over the last two to three years there have been many “evil, surreptitious” things that people have done with computers and servers – and these are technologies that have been around for many years. He may have been referring to the vast number of business data breaches, the iCloud hack that affected celebrities, or the various top-secret government files that have been published online.

Machine learning ethics

To reduce the likelihood of machine learning being used for corrupt purposes, Jassy suggested that the algorithms different companies produce have to be constantly benchmarked to ensure they are as accurate as possible. Then the companies – including Amazon – have to be clear about how their service is used.
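Jassy did not spell out what such benchmarking involves in practice. A minimal sketch, assuming a labelled evaluation set and a generic matching model (both placeholders invented for illustration), might look like this:

```python
# Hypothetical sketch: measuring a face-matching model's accuracy and
# false-positive rate against a labelled evaluation set. The model and
# data are stand-ins, not any real AWS service or dataset.

def benchmark(model, labelled_pairs):
    """Return accuracy and false-positive rate for a matching model.

    `labelled_pairs` is a list of (image_a, image_b, is_same_person) tuples.
    `model.predict_match(a, b)` is a placeholder for the matcher under test,
    returning True if it believes the two images show the same person.
    """
    correct = false_positives = negatives = 0
    for image_a, image_b, is_same_person in labelled_pairs:
        predicted = model.predict_match(image_a, image_b)
        correct += int(predicted == is_same_person)
        if not is_same_person:
            negatives += 1
            false_positives += int(predicted)

    total = len(labelled_pairs)
    return {
        "accuracy": correct / total,
        "false_positive_rate": false_positives / negatives if negatives else 0.0,
    }
```

Running a benchmark like this regularly, on a dataset that reflects real deployment conditions, is one way a vendor or customer could track the accuracy Jassy refers to.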

“So for things like facial recognition…. If you’re using it to match celebrity photos [with individuals] then it’s okay to have a confidence threshold which is between 70 and 80% but if you’re doing it for law enforcement, meaning it will impact people’s civil liberties then you need a very high confidence threshold,” he said.

“We recommend people using at least a 99% confidence threshold for this, and even then what we say is that it shouldn’t be the sole determinant in making a decision, there should be a human and a number of inputs, of which one is machine learning with 99% confidence,” he added.
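To illustrate what applying such a threshold looks like in code, the sketch below uses the boto3 Rekognition compare_faces call with a 99% similarity threshold; the bucket and image names are hypothetical, and the human-review step Jassy describes is only indicated by a comment:

```python
# Illustrative sketch only: comparing two faces with Amazon Rekognition
# using a high similarity threshold, and treating any match as one input
# among several rather than the sole determinant of a decision.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "candidate.jpg"}},
    SimilarityThreshold=99.0,  # only return matches at or above 99% similarity
)

# Even a 99%+ match should not drive the decision on its own: route it to a
# human reviewer alongside the other available evidence.
for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.1f}% similarity - "
          "flagging for human review with other inputs.")
```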

Jassy said that the company is working with an organisation that it cannot yet reveal, and is encouraging this organisation to benchmark across all of these algorithms so that everyone has transparency around their accuracy. However, he emphasised that AWS cannot tell its customers what the laws are or how to use the technologies, as this is down to the country the customer is based in, or where their services will be used.

“At the end of the day, countries themselves have to decide what standards, regulation or compliance they’re using for this kind of technology. We’re having active conversations with a lot of countries and governments…. We don’t control laws, some governments are more interested in having that collaboration and participating,” he said.

“We provide services that people can build on top of and we have a set of terms and if people violate those terms they’re not allowed to use those services anymore,” he added.

It will be interesting to see how companies like Amazon keep up with tracking these bad actors, and whether, in the years to come, governments will place more responsibility on the top tech firms to ensure that machine learning products are not used unethically.

Written by Sooraj Shah, freelance journalist
