Addressing governance when leveraging AI and analytics

With AI governance within the European Union now underway, following a white paper recently introduced by the European Commission, it’s time to think about what companies should be doing to meet regulatory standards while continuing to get the best out of AI and analytics.

This has also been a talking point in the US, with the chiefs of Alphabet, Facebook and IBM weighing in.

How should we go about establishing strong AI regulation?

In the wake of the CEOs of Alphabet and IBM declaring that AI regulation is needed, it’s important to consider how it could be established. Read here

Atif Kureishy, global vice-president, emerging practices, artificial intelligence & deep learning at Teradata, began his career working in the space and defence intelligence sectors before applying himself to AI and other emerging tech.


When sharing his background, Kureishy said that there are similarities between his current role and his beginnings in a post-9/11 world with regard to data usage.

“Post-9/11, there was this dramatic increase in the ability and the need to share data,” he explained, “but at the same time, have it governed and have a set of controls in place.

“When I look at industry now, in a global role leading artificial intelligence and deep learning, I see a lot of similarities, in the sense that almost every company is pursuing an intelligence-like capability: they want to know more about the customer.

“They want to know more about their business and financial operations, so they’re leveraging data to do that, and what’s happening now is that the same challenges the intelligence and security defence communities had around data governance now apply to industry.”

The challenges

As the amount of data being used within companies continues to grow, so do the complications associated with governance, and regulations such as EU GDPR and the CCPA are starting to turn their attention towards AI and the data that drives it.

“The challenges, essentially, are that organisations, industry regulators and customers all have to keep up and be aware of what their rights are, what controls have to be in place, and what risk management principles they need to employ, and that gets very, very complicated,” said Kureishy.

“So I would say there are technical challenges in making sure these implementations get appropriately developed and applied. But more importantly, because of the evolving nature of what’s happening now, and how quickly it’s happening, it’s the risk exposure, risk management and risk mitigation that are going to keep every company up at night.”

Board-level visibility

It’s important that companies constantly find ways to keep the data they use for AI and analytics capabilities under control in the midst of a governance evolution.

However, when ensuring that company data is secured to regulatory standards, a common talking point is whether security is the responsibility of the board or of security staff.

Who is responsible for cyber security in the enterprise?

Uncertainty is widespread across companies over who takes the lead on cyber security, according to Willis Towers Watson. Read here

Research has shown that company executives and security teams have had trouble communicating with each other, whether because the two groups are organisationally distant or because board members don’t understand the jargon that security staff use.

However, for Kureishy, it is vital that chief information security officers (CISOs) and chief risk officers (CROs) champion the importance of protecting data above their own level of operations.

The Teradata global vice-president said: “When I look at the past 10 to 20 years, I relate this whole space very closely to the information and cyber security landscape, and the biggest step forward in the cyber space was when CISOs and CROs had board-level visibility around the nature of information security; why it’s so prominent and prevalent in an organisation, and why it’s so important that not just dollars, lots of dollars, need to be allocated to it.

“But a lot of attention and risk management programmes need to be applied to it.”

Automated GRC

Kureishy went on to explain the benefits that companies can gain from governance, risk and compliance (GRC) programmes, as well as from risk management and from automation implemented to increase efficiency.

“There’s a lot of technology out there to help with these GRC programmes,” he said. “But probably the most interesting question is how the era of machine and deep learning, which is really focused on pushing more intelligence into the machine, gets impacted by these privacy aspects: having humans in the loop and being able to adjudicate these decisions with transparency, which is counterintuitive to the nature of machine learning and AI. So that’s going to be a big challenge.

Balance sheets and staff remuneration — the value of data is rocketing

The best organisations, or so Greg Hanson from Informatica recently told Information Age, remunerate people based upon their ability to demonstrate good culture and good activity around managing data. Is it time then to give more thought to the value of data, how it is managed, and how it sits on balance sheets? Read here

“Again, if I go back and look at the history of what happened in the early 2000s, and all these information security programmes, there’s this potpourri of regulation and standards, and you still have them: ISO 9001, ISO 27000, ISO 99, NIST standards, and so on.

“It’s dizzying, and the same thing is happening in the privacy space now, and there’s going to be a lot of money spent.

“The benefit is clearly more accountability and transparency, but there’s a drag on the pace of innovation because a lot of spending is going to go towards trying to account for all this complex regulation. However, it is absolutely necessary.”
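Kureishy’s point about keeping humans in the loop to adjudicate model decisions with transparency can be illustrated with a simple review gate, where low-confidence predictions are routed to a human reviewer and the routing is recorded. This is a minimal, hypothetical sketch; the `Decision` class, the confidence threshold and the routing rule are illustrative assumptions, not any vendor’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str        # the case the model decided on
    prediction: str     # the model's proposed outcome
    confidence: float   # model confidence in [0, 1]
    decided_by: str = "model"

def adjudicate(decision: Decision, threshold: float = 0.9) -> Decision:
    """Route low-confidence predictions to a human reviewer.

    In a real GRC workflow this would enqueue the case for review and
    log an audit trail; here we simply mark who makes the final call.
    """
    if decision.confidence < threshold:
        decision.decided_by = "human-review"
    return decision

auto = adjudicate(Decision("loan-123", "approve", 0.97))
manual = adjudicate(Decision("loan-456", "decline", 0.55))
print(auto.decided_by)    # model
print(manual.decided_by)  # human-review
```

The design choice is simply that the machine never has the last word on borderline cases, which is the transparency trade-off Kureishy describes.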

The future of consensual data use

Another aspect of data governance that shouldn’t be forgotten about is its effect on customers who are visiting a vendor’s website.

For Kureishy, users could find it harder to cope with the rise in data consent requests that websites must now present to visitors, but this is another area where AI may be used to a company’s benefit.

“Imagine a future where you get inundated with those types of details everywhere,” he explained. “How do you keep track of knowing what you’ve issued consent for, what you’ve not, what’s the aggregate set of decisions that you’ve made, and how that overall impacts your privacy?

The privacy paradox in the digital age

There is an inherent paradoxical relationship between privacy and advances in technology, in the digital age. Read here

“We’ll get very quickly to a future where there will be offerings and capabilities needed, on the consumer’s behalf, to manage all this complexity, because right now we’re only at the beginning of it.

“But even as a practitioner in this information security and privacy space, it’s overwhelming just to keep up with all of this. So that’s where I believe we’re going to get to very quickly.

“It’s going to almost be like applying machine and deep learning or AI to the consumer to better understand what their risk exposure is for offering consent. That’s what I would love to see actually happen in the next couple of years.”
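The bookkeeping problem Kureishy describes, keeping an aggregate view of what a user has consented to across many sites, can be sketched as a simple personal consent ledger. All names and the data shape here are hypothetical; this only illustrates the kind of aggregation a consumer-side tool would need:

```python
from datetime import date

# A hypothetical personal consent ledger: each entry records which
# site was asked for consent, for what purpose, and whether it was granted.
ledger: list[dict] = []

def record_consent(site: str, purpose: str, granted: bool) -> None:
    ledger.append({"site": site, "purpose": purpose,
                   "granted": granted, "on": date.today().isoformat()})

def granted_purposes() -> dict[str, list[str]]:
    """Aggregate view: which purposes each site may use data for."""
    summary: dict[str, list[str]] = {}
    for entry in ledger:
        if entry["granted"]:
            summary.setdefault(entry["site"], []).append(entry["purpose"])
    return summary

record_consent("news.example", "analytics", True)
record_consent("news.example", "ad-targeting", False)
record_consent("shop.example", "analytics", True)
print(granted_purposes())
# {'news.example': ['analytics'], 'shop.example': ['analytics']}
```

The AI Kureishy envisions would sit on top of a record like this, assessing the aggregate privacy exposure of the consents a user has issued.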


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.