Writing your company’s own ChatGPT policy

Giving your staff unfettered access to ChatGPT could have disastrous results, especially when it comes to unwittingly releasing confidential data. Here are some pointers on how to write your company’s own generative AI policy.

Many new tools are dubbed gamechangers, but ChatGPT truly deserves the title. Bursting onto the scene six months ago, the generative artificial intelligence (AI) marvel has gone from sparking enthusiasm to driving cross-sector experimentation — creating multiple issues along the way.

The biggest business concerns are, of course, around longer-term threats to human workforces. Arguably more tangible and immediate fears, however, have centred on the consequences of moving too fast (and far) with recent adoption, including putting data privacy at serious risk.

Only weeks after technology and science leaders signed an open letter calling for a research halt while safety protocols are ironed out, Samsung became one of the first major examples of what happens when unauthorised AI use goes wrong. Simply by entering data into ChatGPT requests, Samsung employees unwittingly exposed highly sensitive company information.

‘There is a rising need for firms to start better protecting systems, people, and data now by writing their own security rules’

Global authorities have sprung into relatively speedy action. The UK’s proposed AI framework, Canada’s privacy investigation, and the European Union’s draft regulation are positive steps towards much-needed governance. But as progress hurtles ahead, there is a rising need for firms to start better protecting systems, people, and data now by writing their own ChatGPT policy.

Thin line between success and calamity

A great explanation for the current generative AI buzz comes via McKinsey: while the deep learning innovations behind its development have been in motion for years, applications such as ChatGPT are the result of a sudden leap forward that has created foundation models able to process huge volumes of unstructured data and fulfil multiple requests at once.

With the capacity to provide instant support across a range of tasks — from producing marketing content to solving coding challenges — it’s easy to see why chatbots are proving hugely popular and fuelling expectations of increased productivity, with generative AI tipped to bring up to $4.4 trillion in annual gains to the global economy.


Transforming chatbot technology with GPT models – Tim Shepheard-Walwyn, technical director and associate partner at Sprint Reply, spoke to Information Age about how businesses can drive value from chatbot technology powered by GPT models


Additionally, Deloitte’s ‘Generative AI is all the rage’ report highlighted how, beyond its generative capabilities, AI could be harnessed for tasks which require a lot of heavy lifting but are easy to validate. As with all new technologies, however, this versatility is also a risk factor.

There is ever-expanding scope for users to delegate work in a bid to boost efficiency, without considering whether they should. Samsung’s recent issues are a prime example of this: with employees so heavily focused on the benefits of handing over time-consuming chip testing and presentation building, they failed to consider that inputting sensitive data into an external, publicly hosted AI service would put it beyond the company’s control and potentially make it accessible to others.


Will ChatGPT make low-code obsolete? Romy Hughes thinks that ChatGPT could do what low-code has been trying to achieve for years – putting software development into the hands of users


As the everyday applicability of sophisticated tools grows, robust safety measures become essential to ensure responsible use.

Preparing for (almost) all eventualities

The value of preparation should never be underestimated; as McKinsey also notes, reaping the substantial rewards of generative AI will involve managing its equally sizeable risks.

Companies already rigorously assessing new tools before implementation will be on the front foot here, with strict vetting reducing the chances of unexpected hazards. This is especially true when evaluation is multi-faceted, involving users, legal and security teams to cover all bases, including whether tools adequately protect personally identifiable information (PII) and non-public data.

Such approaches, however, still only provide a relatively top-level overview of how technologies such as ChatGPT should be used. To make sure practical application is consistently secure, firms must build in-depth policies which help workers understand exactly what is and isn’t appropriate.

Amplifying awareness

In addition to defining what tools such as ChatGPT are and how they function, policies should outline wider risks, such as unreliable output and breaches of confidentiality.

Writing your company’s own ChatGPT policy

To help employees grasp and embrace key basics quickly, a useful starting point is to signpost relevant parts of existing policies where they can check best practices.

Producing tailored guidance is slightly more complex. To develop a truly all-encompassing ChatGPT policy, companies will likely need to run extensive cross-business workshops and individual surveys that enable them to identify, and discuss, every use case. Putting in this groundwork, however, will allow them to build specific directions which ultimately ensure better protection, as well as giving workers the comprehensive knowledge required to make the most of advanced tech.

Defining limitations

Explicitly highlighting threats and setting unambiguous usage limitations is just as critical, leaving no room for accidental misuse. This is particularly important for businesses where generative AI may be deployed to streamline tasks that involve some level of PII, such as drafting client contracts, writing emails, or suggesting which code snippets to use in programming.

Dos and don’ts

Again, providing generalised advice such as FAQs can be a useful step, equipping employees with an initial reference for questions about when chatbots are the right option and what sort of data they can enter. But minimising risk will mean going further and offering a list spelling out precisely what not to do. For example, that might include broader rules such as an outright ban on uploading PII to chatbots for any purpose, covering employee, contractor, client, customer, vendor and product data. Meanwhile, instructions for specific use cases may involve line manager approval of any information prior to ChatGPT entry, alongside stringent interrogation of the answers it produces to validate genuine sources and outputs.
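For teams that want to back the written rules with a technical guardrail, the sketch below shows one minimal way a ban on PII uploads could be enforced in practice. It is purely illustrative: the submit_prompt helper is a hypothetical internal wrapper through which employees’ prompts might be routed before reaching any chatbot, and the regex patterns catch only obvious identifiers such as email addresses and phone numbers, so it complements rather than replaces proper PII-detection tooling and the policy itself.

```python
import re

# Illustrative patterns for obvious PII; a real deployment would rely on a
# dedicated PII-detection service rather than a handful of regexes.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in the prompt (empty list if none)."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def submit_prompt(prompt: str) -> None:
    """Refuse to forward any prompt that appears to contain PII."""
    findings = check_prompt(prompt)
    if findings:
        raise ValueError(
            "Prompt blocked by ChatGPT usage policy: possible "
            + ", ".join(findings) + " detected."
        )
    # The vetted prompt would be forwarded to the approved chatbot
    # integration here; printing stands in for that call in this sketch.
    print("Prompt passed the PII check and can be submitted.")

if __name__ == "__main__":
    submit_prompt("Summarise our Q3 marketing themes in three bullet points.")
    try:
        submit_prompt("Draft a contract for jane.doe@example.com, phone +44 20 7946 0958.")
    except ValueError as blocked:
        print(blocked)
```

In practice, blocked prompts would also be logged, giving security teams visibility into where the guidance is not landing and where training needs reinforcing.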

There is a difference between agile evolution and hasty adaptation. As hype around generative AI keeps on growing, businesses must take care not to let it drive ill-considered usage with disastrous long-term consequences. If solid security processes are already established, the challenges which come with implementing and using technologies such as ChatGPT can be addressed in a structured and efficient way, tapping rich benefits while locking down risks.

Andreas Niederbacher is CISO at Adverity

More on generative AI

What is generative AI and its use cases? Generative AI is a technological marvel destined to change the way we work, but what does it do and what are its use cases for CTOs?