Using generative AI to understand customers

Generative AI seems poised to transform IT, from co-writing code through to enhancing cybersecurity, but one of its most profound uses could be in helping brands understand their customers

The quote “With great power comes great responsibility” has become well-known thanks to its association with the Spider-Man franchise. Originally referring to superhuman abilities such as speed, strength, agility, and resilience, it’s a sentiment that’s also relevant when considering the rise of generative AI.

Although the technology has been around for some time, the launch of ChatGPT has brought it into the hands of millions, giving them what for many has felt like a superpower. However, if Marvel movies have taught us anything, it’s that superpowers can be used for good or evil. The same goes for generative AI when it comes to understanding customers.

Nearly five months into 2023, economic uncertainty and rising inflation continue to affect consumer trends and behaviours. At the same time, brands are grappling with how they will use generative AI, if at all.

From my perspective, the technology has the potential to help brands stand out in the competition for consumer attention. But, without assessing the risks, brands could set themselves up for disaster.

What generative AI means for understanding customers

The ways in which brands conduct market research have undergone significant changes in recent years, bolstered by new tools and methodologies. However, the impact of generative AI on customer understanding remains uncertain. To prepare for these potential changes, brands and their research departments must have certain foundations in place so they can respond quickly as more information becomes available.

At the core of these foundations is asking the right questions. By focusing on developing and refining the right questions, decision makers can better navigate the uncertain landscape and take advantage of the many benefits that generative AI can offer.

Benefits for customer understanding

The primary benefit of generative AI across industries is enhanced productivity. It can accelerate the process of generating ideas, information and written texts, such as initial drafts of emails, reports or articles. This frees employees to focus on tasks that require higher levels of human expertise.

Summarising information fast

In terms of better understanding customers, generative AI is highly effective at summarising information. Companies are already using the technology to create auto-summaries of market research reports, removing the need to précis reports manually.

Going forward, there is potential to expand this use case to summarise large volumes of information quickly and efficiently in order to provide concise answers to key business questions.

For example, an employee could type a question such as “what are our customer demographics for this product?” into a search bar and receive a succinct answer based on the company’s internal knowledge base.
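To make that workflow concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the sample reports, the keyword search standing in for a real index, and the prompt wording. It shows how a question could be paired with excerpts retrieved from an internal knowledge base before being passed to whichever generative model a company chooses.

```python
# A minimal, hypothetical sketch of the workflow above. All names, documents and
# prompt wording are illustrative assumptions, not a description of any product.
from typing import List

INTERNAL_REPORTS = [
    "2022 segmentation study: core buyers of Product X are urban and aged 25-34.",
    "Q4 brand tracker: awareness of Product X is highest among 25-34 year olds.",
    "Pricing research: Product X buyers are less price-sensitive than average.",
]

def search_reports(question: str, documents: List[str], top_k: int = 2) -> List[str]:
    """Naive keyword overlap, standing in for a proper knowledge-base search index."""
    terms = set(question.lower().split())
    ranked = sorted(documents, key=lambda doc: -len(terms & set(doc.lower().split())))
    return ranked[:top_k]

def build_grounded_prompt(question: str, documents: List[str]) -> str:
    """Assemble a prompt asking the model to answer only from the retrieved excerpts."""
    context = "\n".join(search_reports(question, documents))
    return (
        "Answer the question using only the research excerpts below.\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# In practice this prompt would be sent to a generative model; here we just print it.
print(build_grounded_prompt("What are our customer demographics for this product?", INTERNAL_REPORTS))
```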

Increased use of insights

Generative AI can also make it easier for all stakeholders to access market research without having to involve an insights manager each time, thereby removing access barriers and facilitating the seamless integration of consumer insights into daily operations.

Moreover, generative AI can help to address common concerns associated with all stakeholders accessing market research, such as non-researchers asking the wrong questions. By suggesting relevant questions related to a search query, the technology can help those without research backgrounds to ask better ones, ultimately leading to more accurate and useful customer information.
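As a rough sketch of how that question-suggestion step could work, the hypothetical helper below wraps a vague stakeholder query in an instruction asking a model to propose sharper research questions; the function name and prompt text are assumptions rather than a description of any specific tool.

```python
# Illustrative sketch only: the prompt wording and helper name are assumptions.
def suggest_research_questions(user_query: str) -> str:
    """Wrap a vague stakeholder query in an instruction asking a generative model
    to propose sharper, research-ready questions before any search is run."""
    return (
        "A colleague without a research background asked the following:\n"
        f'"{user_query}"\n'
        "Suggest three more precise questions a market researcher would ask instead, "
        "each specifying the audience, time period and metric of interest."
    )

# A real implementation would send this prompt to the organisation's chosen model
# and display the suggested questions alongside the search results.
print(suggest_research_questions("Do young people like our brand?"))
```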

In this way, generative AI can help to promote a culture of data-driven decision-making throughout an organisation, empowering everyone to access insights and leverage data more effectively. This can result in better-informed decisions for businesses and better outcomes for their customers.

Tailored communication for each audience

Another valuable opportunity that comes with generative AI is the ability to tailor communication to both internal and external audiences.

In a market research context, the potential applications are numerous. For instance, generative AI could enable the personalisation of insights for various business stakeholders throughout an organisation. This helps ensure that insights are conveyed in a way that is relevant and accessible to each audience, ultimately leading to more informed decision-making.

Additionally, generative AI could be utilised to tailor briefs to market research agencies, streamlining the research process and minimising the back-and-forth involved. By automating this process, brands can save time and resources while also ensuring that research briefs are optimised for the specific needs of each project.

What are the downsides?

While generative AI offers several benefits to research teams, it also poses various risks that organisations must be aware of before implementation.

Relying on the prompt

One of the most fundamental risks associated with generative AI is prompt dependency. The technology is statistical in nature: it predicts the most likely response to a given prompt. If the prompt is incorrect or biased, the output generated by the AI may not accurately reflect the intended meaning, or may even be misleading.

If the prompt is formulated in a way that is too broad or too narrow, it may result in irrelevant or incomplete responses. Similarly, if the prompt contains biased language or assumptions, the AI may perpetuate those biases in its output, leading to inaccurate or unfair conclusions.

Trust

What makes this even trickier is the way generative AI can blend correct information with incorrect information. That may be harmless in low-stakes scenarios, but when multi-million-dollar decisions are being made, it becomes critical for businesses to rely on accurate and trustworthy information.

Furthermore, the complexity of consumer behaviour demands a more nuanced approach, especially when delving into deeper questions about human values and emotions. Some questions may not have a definitive answer, and when dealing with vast amounts of research, crucial details may be overlooked.

Transparency

Another key risk to consider is a lack of transparency regarding how algorithms are trained. For example, ChatGPT cannot always tell you where it retrieved its answers from, and even when it can, those sources might be impossible to verify or non-existent.

Also, because generative AI algorithms are trained using existing data and human input, they are susceptible to biases. This can result in generating offensive or discriminatory responses, such as those which are racist or sexist. For organisations seeking to promote inclusivity and reduce bias in their decision-making, such outcomes would not be conducive to productive work.

Security

One of the common applications of generative AI is the creation of various types of written content, including emails, meeting agendas and reports. However, inputting sensitive company information to generate these texts creates a risk of security compromise.

An analysis by cybersecurity firm Cyberhaven revealed that out of 1.6 million workers across industries, 5.6 per cent had used ChatGPT at work and 2.3 per cent had entered confidential company information into the tool. Due to security concerns, companies including JP Morgan, Verizon, Accenture and Amazon have banned the use of ChatGPT by their staff. And just recently, Italy became the first Western country to ban ChatGPT while investigating privacy concerns, drawing attention from privacy regulators in other European countries.

For research teams or anyone dealing with proprietary company information, it is crucial to understand the risks involved in using a tool such as ChatGPT and to keep up-to-date with their organisation’s internal data security policies and the policies of providers like OpenAI.

We strongly believe that the future of consumer understanding will depend on a combination of human expertise and advanced technology. No matter how powerful the technology is, it will be rendered useless if people are not willing to use it.

For brands, it is crucial to practice responsible experimentation and use the appropriate tools to solve their unique problems rather than just implementing technology for the sake of it. Brands should understand that with great power comes great responsibility, and it is up to them to determine how to properly leverage the power of generative AI.

Thor Philogéne is the CEO and co-founder of Stravito, an AI-powered knowledge management system for market research and insights whose clients include McDonald’s, Danone and Burberry
